Tutorial: Executing Knowledge Graph Initiatives in Organizations – A Field Guide
Author: Panos Alexopoulos
Abstract: Ever since Google announced that “their knowledge graph allowed searching for things, not strings”, the term “knowledge graph” has been widely adopted to denote any graph-like network of interrelated typed entities and concepts that can be used to integrate, share and exploit data and knowledge in one or more domains. Apart from Google, knowledge graphs are found and developed within several prominent companies, including Microsoft, Apple, LinkedIn, Amazon and others, as an enabling technology for data integration and analytics, semantic search and question answering, and other cognitive applications. In this tutorial I describe the technical, business and organizational dimensions and challenges that Knowledge Graph Architects need to be aware of before launching a Knowledge Graph initiative in an organization. More importantly, I provide a framework to guide the successful execution of a knowledge graph project, combining state-of-the-art techniques with practical advice and lessons learned from real-world case studies.
Tutorial: Music Knowledge Graph and Deep-Learning Based Recommender Systems
Author: Pasquale Lisena and Raphaël Troncy
Abstract: Music information can be very complex. Describing a classical masterpiece in all its forms (the composition, the score, the various publications, a performance, a recording, the derivative works, etc.) is a complex activity. In the context of the DOREMUS research project, we develop tools and methods to exploit music catalogues on the web using semantic web technologies.
In the first part of this tutorial, we will present models and vocabularies for representing fine-grained information about music, making it a powerful resource for answering music-specific questions of interest to musicologists, librarians, and concert hall programmers. In the second part, we will present methods and datasets for training recommendation engines. From a music information point of view, we will touch on topics such as how to build entity embeddings, how to select similarity measures, how to tune recommender systems, and how to provide explanations of the recommendations to the end user. During the tutorial, we will propose several hands-on sessions for the audience to play with the DOREMUS datasets and tools.
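As an illustration of the kind of material covered, entity-based recommendation can be reduced to nearest-neighbour search over embedding vectors. The sketch below uses hypothetical pre-computed embeddings (in practice these would be learned from a graph such as the DOREMUS one, e.g. with a graph embedding method) and ranks entities by cosine similarity; the composers and vectors are purely illustrative:

```python
import math

# Hypothetical pre-computed entity embeddings (illustrative values only).
embeddings = {
    "Debussy": [0.9, 0.1, 0.3],
    "Ravel":   [0.8, 0.2, 0.4],
    "Wagner":  [0.1, 0.9, 0.7],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def most_similar(entity):
    """Rank all other entities by cosine similarity to `entity`."""
    target = embeddings[entity]
    others = [(name, cosine_similarity(target, vec))
              for name, vec in embeddings.items() if name != entity]
    return sorted(others, key=lambda pair: pair[1], reverse=True)

ranking = most_similar("Debussy")  # Ravel ranks above Wagner here
```

Swapping in a different similarity measure (Euclidean, dot product) only requires replacing `cosine_similarity`, which is one of the tuning choices the tutorial discusses.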
Tutorial: How to build a Question Answering system overnight
Author: Andreas Both, Denis Lukovnikov, Gaurav Maheshwari, Ioanna Lytra, Jens Lehmann, Kuldeep Singh, Mohnish Dubey, Priyansh Trivedi
Abstract: With this tutorial, we aim to provide the participants with an overview of the field of Question Answering, insights into commonly faced problems, and its recent trends and developments. At the end of the tutorial, the audience will have hands-on experience of developing two working QA systems: one based on rule-based semantic parsing, and another based on a deep learning method. In doing so, we hope to provide a suitable entry point for people new to this field and ease their process of making informed decisions while creating their own QA systems.
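To give a flavour of the rule-based approach, a minimal semantic parser can map question patterns to SPARQL query templates. The rules, properties and example question below are illustrative assumptions, not the tutorial's actual system:

```python
import re

# Each rule pairs a question pattern with a SPARQL query template.
# The dbo:/rdfs: properties are examples of the kind of mapping used.
RULES = [
    (re.compile(r"who wrote (.+)\?", re.I),
     'SELECT ?author WHERE {{ ?work rdfs:label "{0}" . ?work dbo:author ?author }}'),
    (re.compile(r"where was (.+) born\?", re.I),
     'SELECT ?place WHERE {{ ?person rdfs:label "{0}" . ?person dbo:birthPlace ?place }}'),
]

def parse_question(question):
    """Return a SPARQL query for the first matching rule, or None."""
    for pattern, template in RULES:
        match = pattern.match(question)
        if match:
            return template.format(match.group(1))
    return None

query = parse_question("Who wrote Dune?")
```

The deep learning alternative replaces the hand-written rules with a learned mapping from question text to query structure, which is the trade-off the tutorial explores.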
Tutorial: From heterogeneous data to RDF graphs and back
Author: Olivier Corby, Catherine Faron Zucker, Maxime Lefrançois and Antoine Zimmermann
Abstract: It is commonly understood by developers that the adoption of Semantic Web models and technologies is an enabler for semantic interoperability on the Web and the Web of Things, but that their adoption is bound to that of RDF data formats. True, the RDF data model may be used as a lingua franca to reach semantic interoperability and the integration and querying of data in heterogeneous formats. The topic of this tutorial is SPARQL-Generate and STTL, which both contribute to making the choice of a data format and that of a data model orthogonal.
SPARQL-Generate is an extension of SPARQL for querying not only RDF datasets but also documents in arbitrary formats. It offers a simple template-based option to generate RDF Graphs from documents in heterogeneous formats. SPARQL Template Transformation Language (STTL) is an extension of SPARQL which enables Semantic Web developers to support the many cases where they need to transform RDF data. It enables them to write specific yet compact RDF transformers toward other languages and formats, including RDF itself. Combining SPARQL-Generate and STTL enables users to develop a new variety of applications where RDF is used as a pivot language in Web applications requiring heterogeneous data transformation processes.
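The pivot idea can be sketched in plain Python, without the actual SPARQL-Generate or STTL syntax: lift a non-RDF source (here CSV) into triples, then lower the triples into another target format (here JSON). The namespace and data are made up for the example:

```python
import csv
import io
import json

EX = "http://example.org/"  # hypothetical namespace for the example

csv_source = "name,city\nAlice,Lyon\nBob,Nice\n"

def csv_to_triples(text):
    """Lift each CSV row into (subject, predicate, object) triples."""
    triples = []
    for i, row in enumerate(csv.DictReader(io.StringIO(text))):
        subject = f"{EX}person/{i}"
        for key, value in row.items():
            triples.append((subject, EX + key, value))
    return triples

def triples_to_json(triples):
    """Lower triples into a JSON document grouped by subject."""
    out = {}
    for s, p, o in triples:
        out.setdefault(s, {})[p] = o
    return json.dumps(out, indent=2)

document = triples_to_json(csv_to_triples(csv_source))
```

In the tutorial's setting, the lifting step is expressed declaratively in SPARQL-Generate and the lowering step in STTL, so the same RDF pivot can mediate between many source and target formats.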
Workshop: 2nd Workshop on Semantic Web solutions for large-scale biomedical data analytics (SeWeBMeDA)
Authors: Ali Hasnain, Oya Beyan, Stefan Decker and Dietrich Rebholz-Schuhmann
Abstract: The life sciences domain has been an early adopter of linked data, and a considerable portion of the Linked Open Data cloud is composed of life sciences data sets. The deluge of inflowing biomedical data, partially driven by high-throughput gene sequencing technologies, is a key driver of these developments. The available data sets require integration according to international standards, large-scale distributed infrastructures, and specific techniques for data access, and offer data analytics benefits for decision support. Especially in combination with Semantic Web and Linked Data technologies, they promise to enable the processing of large as well as semantically heterogeneous data sources and the capturing of new knowledge from them.
This workshop invites papers on life sciences and biomedical data processing, as well as their amalgamation with Linked Data and Semantic Web technologies for better data analytics, knowledge discovery and user-targeted applications. These research contributions should provide useful information for the Knowledge Acquisition research community as well as the working Data Scientist. This workshop at the Extended Semantic Web Conference (ESWC) seeks original contributions describing theoretical and practical methods and techniques that present the anatomy of large-scale linked data infrastructure, covering: distributed infrastructure to consume, store and query large volumes of heterogeneous linked data; indexes and graph aggregation to better understand large linked data graphs; query federation to mix internal and external data sources; and linked data visualisation tools for health care and the life sciences. It will further cover topics around data integration, data profiling, data curation, querying, knowledge discovery, ontology mapping/matching/reconciliation and data/ontology visualisation, as well as applications, tools, technologies and techniques for the life sciences and biomedical domain. SeWeBMeDA aims to give researchers in the biomedical and life sciences insight into and awareness of large-scale data technologies for linked data, which are becoming increasingly important for knowledge discovery in the life sciences domain.
Topics of interest include, but are not limited to Semantic Web and Linked Data technologies in the following areas:
- Techniques for analysing semantic data in the life sciences, medicine and health care
- The description, integration, analysis and use of data in pursuit of challenges in the life sciences, medicine and health
- Tools and applications for biomedical and life sciences
- Large scale biomedical data curation and integration
- Processing biomedical data at scale
- Knowledge representation and knowledge discovery for biomedical data
- Data publishing, profiling and new datasets in biomedical and life sciences
- Querying and federating data over heterogeneous data sources
- Biomedical ontology creation, mapping/matching/translation and reconciliation
- Biomedical ontology and data visualisation
- Text analysis, text mining and reasoning using semantic technologies
- New technologies and exploitation of existing ones in Linked Data and Semantic Web
- Social and moral issues in publishing and consuming biomedical and life sciences data.
Workshop: Fourth International Workshop at ESWC on Sentic Computing, Sentiment Analysis, Opinion mining and Emotion Detection
Authors: Mauro Dragoni, Diego Reforgiato, Mehwish Alam, Davide Buscaldi and Erik Cambria
Abstract: As the Web rapidly evolves, people are becoming increasingly enthusiastic about interacting, sharing, and collaborating through social networks, online communities, blogs, wikis, and the like. In recent years, this collective intelligence has spread to many different areas, with particular focus on fields related to everyday life such as commerce, tourism, education, and health, causing the size of the social Web to expand exponentially. To identify the emotions (e.g. sentiment polarity, sadness, happiness, anger, irony, sarcasm, etc.) and the modality (e.g. doubt, certainty, obligation, liability, desire, etc.) expressed in this continuously growing content is critical to enable the correct interpretation of the opinions expressed or reported about social events, political movements, company strategies, marketing campaigns, product preferences, etc.
Existing solutions still have many limitations, leaving the challenge of emotion and modality analysis open. For example, there is a need for building and enriching semantic/cognitive resources to support emotion and modality recognition and analysis. Additionally, the joint treatment of modality and emotion is, computationally, trailing behind, and is therefore the focus of ongoing research. Also, while we can produce rather robust deep semantic analysis of natural language, we still need to tune this analysis towards the processing of sentiment and modalities, which cannot be addressed by means of statistical models only, currently the prevailing approach to sentiment analysis in NLP. The hybridization of NLP techniques with Semantic Web technologies is therefore a direction worth exploring.
The Workshop on Sentic Computing, Sentiment Analysis, Opinion Mining, and Emotion Detection will also be connected to the ESWC 2018 Fine-Grained Sentiment Analysis Challenge.
Workshop: 4th Edition of the International Workshop on Social Media World Sensors
Authors: Luigi Di Caro, Mario Cataldi and Claudio Schifanella
Abstract: Social media services represent freely accessible social networks that allow registered members to broadcast short posts referring to a potentially unlimited range of topics, exploiting the immediateness of handy smart devices. This workshop stresses the vision of this powerful communication channel as a social sensor, which can be used to detect and characterize interesting and as-yet-unreported information and events in real time, across all topics and locations. Future technologies building on this connectivity may also provide applications with automatic techniques for the generation of news (filtered over user profiles), offering an alternative to existing authoritative information media.
Workshop: Managing the Evolution and Preservation of the Data Web – MEPDaW 2018
Authors: Javier D. Fernández, Jeremy Debattista, Jürgen Umbrich and Maria Esther Vidal
Abstract: Managing the evolution and preservation of linked open datasets poses a number of challenges, mainly related to the nature of the Linked Data principles and the RDF data model. More specifically, Linked Data techniques are expected to tackle major issues such as the synchronisation problem (how to monitor changes), the curation problem (how to repair data imperfections and add value over time), the appraisal problem (how to assess the quality of a dataset), the citation and provenance problem (how to cite a particular version of a linked dataset, how to keep the lineage/provenance of the data), the archiving problem (how to retrieve the most recent or a particular version of a dataset), and the sustainability problem (how to support preservation at scale, ensuring long-term access).
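The synchronisation and archiving problems can be given a minimal sketch: if each dataset version is represented as a set of triples, the delta between two versions is a pair of set differences, which an archive can store to reconstruct any version on demand. The triples below are illustrative:

```python
# Two hypothetical versions of a tiny RDF dataset, as sets of triples.
v1 = {
    ("ex:alice", "ex:worksAt", "ex:acme"),
    ("ex:alice", "ex:knows", "ex:bob"),
}
v2 = {
    ("ex:alice", "ex:worksAt", "ex:globex"),  # employer changed in v2
    ("ex:alice", "ex:knows", "ex:bob"),
}

def delta(old, new):
    """Return (added, deleted) triples between two dataset versions."""
    return new - old, old - new

def apply_delta(old, added, deleted):
    """Reconstruct the new version from the old one plus a stored delta."""
    return (old - deleted) | added

added, deleted = delta(v1, v2)
```

Real archiving systems must additionally handle blank nodes, scale, and metadata such as timestamps and provenance, which is exactly where the open research problems listed above begin.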
This workshop aims at addressing the above mentioned challenges and issues by providing a forum for researchers and practitioners who apply linked data technologies to discuss, exchange and disseminate their work. More broadly, this forum will enable communities interested in data, knowledge and ontology dynamics, lifecycles and versioning to network and cross-fertilise.
Workshop: Workshop on Deep Learning for Knowledge Graphs and Semantic Technologies
Authors: Michael Cochez, Gerard de Melo, Thierry Declerck, Luis Espinosa Anke, Besnik Fetahu, Dagmar Gromann, Mayank Kejriwal, Maria Koutraki, Freddy Lecue, Enrico Palumbo, Harald Sack
Abstract: Semantic Web technologies and deep learning share the goal of creating intelligent artifacts that emulate human capacities such as reasoning, validating, and predicting. There are notable examples of contributions leveraging either deep neural architectures or distributed representations learned via deep neural networks in the broad area of Semantic Web technologies. In the past years, Deep Learning (DL) algorithms have been used to learn features from knowledge graphs, resulting in enhancements of the state of the art in entity relatedness measures, entity recommendation systems and entity classification. DL algorithms have equally been applied to classic problems in semantic applications, such as (semi-automated) ontology learning, ontology alignment, duplicate recognition, ontology prediction, relation extraction, and semantically grounded inference. This full-day workshop aims to gather researchers and practitioners presenting innovative research as well as applications involving deep learning, knowledge graphs and semantic technologies. The workshop will include oral presentations of short papers and full papers as well as a keynote speech.
Workshop: QuWeDa 2018: 2nd Workshop on Querying the Web of Data
Authors: Muhammad Saleem, Ricardo Usbeck, Ruben Verborgh, Olaf Hartig and Axel-Cyrille Ngonga Ngomo
Abstract: The constant growth of Linked Open Data (LOD) on the Web opens new challenges pertaining to querying such massive amounts of publicly available data. LOD datasets are available through various interfaces, such as data dumps, SPARQL endpoints and triple pattern fragments. In addition, various sources produce streaming data. Efficiently querying these sources is of central importance for the scalability of Linked Data and Semantic Web technologies. The trend of publicly available and interconnected data is shifting the focus of Web technologies towards new paradigms of Linked Data querying. To exploit the massive amount of LOD data to its full potential, users should be able to query and combine this data easily and effectively.
This workshop at the Extended Semantic Web Conference 2018 (ESWC 2018) seeks original articles describing theoretical and practical methods and techniques for fostering, querying, and consuming the Data Web. Topics relevant to this workshop include — but are not limited to — the following:
- Centralized, federated, and distributed SPARQL query processing
- SPARQL query processing in streams
- Temporal and spatial queries
- Querying embedded Linked Data
- Caching and replication in SPARQL query processing
- Query processing under entailment regimes
- SPARQL query processing in Map-Reduce and Big Data
- SPARQL query optimization and source selection
- SPARQL query processing benchmarks, especially those focusing on multiple measures
- Ranking, measures, and performance evaluation of SPARQL querying engines
- Query relaxation and rewriting
- SPARQL query processing demos and applications
- Lightweight Linked Data interfaces for querying
- Live Linked Data querying
- Query execution over Linked Data Fragments interfaces
- Dividing query execution between clients and servers
- User interfaces for querying
- Security and privacy in querying the Web of Data
- Alternative languages for querying the Web of Data
- Analysis of SPARQL query logs, i.e., real queries
Workshop: Third International Workshop on Semantic Web for Cultural Heritage (SW4CH 2018)
Authors: Béatrice Markhoff, Stéphane Jean, Antonis Bikakis and Alessandro Mosca
Abstract:
Cultural Heritage (CH) is gaining a lot of attention from academic and industry perspectives. Scientific researchers, organisations, associations, and schools are looking for appropriate technologies for annotating, integrating, sharing, accessing, analysing and visualising the wealth of cultural collections and, more generally, cultural data, also taking into account the profiles and preferences of end users.
Several national and European research and innovation projects have been launched in these directions. A fundamental challenge that many of these projects deal with is how to make Cultural Heritage data, which is typically made available in diverse languages and formats, mutually interoperable, so that it can be searched, linked, and presented in a harmonised way across the boundaries of a Cultural Heritage Institution.
Early solutions were based on the syntactic or structural level of data, without leveraging the rich semantic structures underlying the content.
During the last decades, solutions based on the principles and technologies of the Semantic Web have been proposed to explicitly represent the semantics of data sources and make both their content and their semantics machine operable and interoperable. In parallel, resources such as the CIDOC-CRM ecosystem have matured. As institutions bring their data to the Semantic Web level, the tasks of integrating, sharing, analysing and visualising data are now to be conceived in this new and very rich framework.
The aim of the SW4CH workshop is to bring together Computer Scientists, Data Scientists and Digital Humanities researchers and practitioners involved in the development or deployment of Semantic Web solutions for Cultural Heritage. The goal is to provide a forum, where people from these fields will have the opportunity to exchange ideas and experiences, present state of the art of realisations and outcomes of relevant projects, and discuss related challenges and solutions.
We seek original and high quality submissions related (but not limited) to one or more of the following topic areas:
- SW vocabularies and ontologies for CH
- SW-based interaction with CH data
- SW applications for CH
- SW techniques, services and architectures for CH
Important dates:
- Paper submission: 2 March 2018
- Notification: 13 April 2018
- Camera ready version: 27 April 2018
- Workshop: 3-4 June 2018
Workshop: 3rd Geospatial Linked Data Workshop
Authors: Matthias Wauer, Mohamed Sherif and Axel-Cyrille Ngonga Ngomo
Abstract: Geospatial data is vital for many application scenarios, such as navigation, logistics, and tourism. At the same time, a large number of currently available datasets (both RDF and conventional) contain geospatial information. Examples include DBpedia, Wikidata, Geonames, OpenStreetMap and its RDF counterpart, LinkedGeoData. RDF stores have become robust and scalable enough to support volumes of billions of records (RDF triples). Despite improving implementations and standards such as GeoSPARQL, traditional geospatial data management systems still outperform them in functionality, efficiency and scalability regarding geospatial content. On the other hand, geospatial information systems (GIS) can benefit from Linked Data principles (e.g., schema agility and interoperability).
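As a toy illustration of the kind of geospatial selection that GeoSPARQL distance functions support, the sketch below computes great-circle distances in plain Python and filters places within a radius; the place names and coordinates are approximate and used only for the example:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two WGS84 (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Approximate coordinates, example data only.
places = {
    "Heraklion": (35.3387, 25.1442),
    "Athens":    (37.9838, 23.7275),
    "Berlin":    (52.5200, 13.4050),
}

def within_radius(center, radius_km):
    """Names of places within radius_km of the given center (excluding it)."""
    lat, lon = places[center]
    return [name for name, (la, lo) in places.items()
            if name != center and haversine_km(lat, lon, la, lo) <= radius_km]

nearby = within_radius("Heraklion", 500)  # Athens qualifies, Berlin does not
```

A geospatially enabled RDF store would express the same selection declaratively as a SPARQL FILTER over a distance function, evaluated against geometry literals attached to the entities.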
The goal of the GeoLD workshop is to provide an opportunity for the Linked Data community to focus on the emerging need for effective and efficient production, management and utilization of Geospatial information within Linked Data. Emphasis will be given to works describing novel methodologies, algorithms and tools that advance the current state of the art with respect to efficiency or effectiveness. We welcome both mature solutions, as well as ongoing works that present promising results.
Workshop: SWeTI: Semantic Web of Things for Industry 4.0
Authors: Pankesh Patel, Dhavalkumar Thakker, Ali Intizar and Amit Sheth
Abstract: Industry 4.0 refers to the 4th industrial revolution – the recent trend of automation and data exchange in manufacturing technologies. To fully realize the Industry 4.0 vision, manufacturers need to unlock several capabilities: vertical integration through connected and smart manufacturing assets of a factory; horizontal integration through connecting discrete operational systems of a factory; and end-to-end integration through the entire supply chain. Recent technology advancements in the Web of Things (WoT) and the Semantic Web (jointly referred to as the Semantic Web of Things) have a promising role to play in addressing the Industry 4.0 vision. Integration of the Semantic Web with WoT technologies enables communication among heterogeneous industrial assets. The Semantic Web can also be used to represent manufacturing knowledge in a machine-interpretable way. The semantic modeling of industrial assets and their services produces unambiguous and machine-interpretable descriptions and creates interoperability among assets and their services across domains. The Semantic Web is indeed a good fit for a plethora of complex problems related to automated, flexible, and self-configurable systems like Industry 4.0 systems.
Several such novel systems based on the Semantic Web of Things have already been proposed. However, these efforts have not yet been consolidated to link together, and capitalize on experience with, the major issues related to computational underpinnings, the multidisciplinary technologies involved, and application domain demands. The time is ripe to bring together the different disciplines involved in the use of the Semantic Web of Things for Industry 4.0 and to form an international community to identify the major challenges and research directions. The workshop is intended to make the first step in shaping such a community and providing a forum that enables: (1) sharing techniques and experience; (2) developing a better understanding of the foundational principles of building Industry 4.0 systems using the Semantic Web; (3) identifying potential domains and application areas; and (4) identifying future research directions.