{"id":890,"date":"2018-01-22T13:34:39","date_gmt":"2018-01-22T12:34:39","guid":{"rendered":"\/?page_id=890"},"modified":"2018-05-09T15:46:57","modified_gmt":"2018-05-09T13:46:57","slug":"tutorials-workshops","status":"publish","type":"page","link":"\/program\/tutorials-workshops\/","title":{"rendered":"Tutorials & Workshops"},"content":{"rendered":"
Tutorial: Executing Knowledge Graph Initiatives in Organizations: A Field Guide

Author: Panos Alexopoulos

Abstract: Ever since Google announced that "their knowledge graph allowed searching for things, not strings", the term "knowledge graph" has been widely adopted to denote any graph-like network of interrelated typed entities and concepts that can be used to integrate, share and exploit data and knowledge in one or more domains. Apart from Google, knowledge graphs are found and developed within several prominent companies, including Microsoft, Apple, LinkedIn and Amazon, as an enabling technology for data integration and analytics, semantic search and question answering, and other cognitive applications. In this tutorial I describe the technical, business and organizational dimensions and challenges that knowledge graph architects need to be aware of before launching a knowledge graph initiative in an organization. More importantly, I provide a framework to guide the successful execution of a knowledge graph project, combining state-of-the-art techniques with practical advice and lessons learned from real-world case studies.

Website: http://www.panosalexopoulos.com/executing-knowledge-graph-initiatives-in-organizations-a-field-guide/

Tutorial: Music Knowledge Graph and Deep-Learning Based Recommender Systems

Authors: Pasquale Lisena and Raphaël Troncy

Abstract: Music information can be very complex. Describing a classical masterpiece in all its forms (the composition, the score, the various publications, a performance, a recording, the derivative works, etc.) is a complex activity. In the context of the DOREMUS research project, we develop tools and methods to exploit music catalogues on the web using Semantic Web technologies.

In the first part of this tutorial, we will present models and vocabularies for representing fine-grained information about music, making it a powerful resource for answering music-specific questions that are of interest to musicologists, librarians, and concert-hall programmers. In the second part, we will present methods and datasets for training recommendation engines. From a music-information point of view, we will touch on topics such as how to build entity embeddings, how to select similarity measures, how to tune recommender systems, and how to provide explanations of the recommendations to the final user. During the tutorial, we will propose several hands-on sessions in which the audience can play with the DOREMUS datasets and tools.

Website: https://doremus-anr.github.io/eswc18_tutorial/

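The recommendation part of this tutorial hinges on comparing entity embeddings with a similarity measure. As a minimal sketch of that idea (not the tutorial's own code), the Python snippet below ranks works by cosine similarity over made-up embedding vectors; the entity names and vector values are purely illustrative.

```python
import numpy as np

# Toy entity embeddings for a few works (real systems train embeddings on a
# knowledge graph such as DOREMUS; these vectors are made up).
embeddings = {
    "Beethoven_Symphony_5": np.array([0.9, 0.1, 0.3]),
    "Beethoven_Symphony_7": np.array([0.8, 0.2, 0.4]),
    "Debussy_La_Mer":       np.array([0.1, 0.9, 0.5]),
}

def cosine(a, b):
    """Cosine similarity, a common choice of similarity measure."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(seed, k=2):
    """Rank all other entities by similarity to the seed entity."""
    scores = {e: cosine(embeddings[seed], v)
              for e, v in embeddings.items() if e != seed}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

print(recommend("Beethoven_Symphony_5"))
```
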
Tutorial: How to build a Question Answering system overnight

Authors: Andreas Both, Denis Lukovnikov, Gaurav Maheshwari, Ioanna Lytra, Jens Lehmann, Kuldeep Singh, Mohnish Dubey, Priyansh Trivedi

Abstract: With this tutorial, we aim to provide the participants with an overview of the field of Question Answering (QA), insights into commonly faced problems, and its recent trends and developments. At the end of the tutorial, the audience will have hands-on experience of developing two working QA systems: one based on rule-based semantic parsing, and another based on deep learning. In doing so, we hope to provide a suitable entry point for people new to this field and to ease their process of making informed decisions while creating their own QA systems.

Website: http://qatutorial.sda.tech/

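As a rough illustration of the rule-based semantic parsing route (a sketch under assumed patterns, not the system built in the tutorial), the snippet below maps one hypothetical question pattern to a SPARQL template over DBpedia-style properties; real systems combine many such patterns with entity linking.

```python
import re

# One illustrative rule mapping a question pattern to a SPARQL template.
PATTERN = re.compile(r"who wrote (?P<work>.+)\?", re.IGNORECASE)
TEMPLATE = (
    "PREFIX dbo: <http://dbpedia.org/ontology/> "
    "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> "
    'SELECT ?author WHERE {{ ?work rdfs:label "{work}"@en ; dbo:author ?author . }}'
)

def question_to_sparql(question):
    """Return a SPARQL query for the question, or None if no rule matches."""
    match = PATTERN.match(question.strip())
    if match is None:
        return None
    return TEMPLATE.format(work=match.group("work").strip())

print(question_to_sparql("Who wrote The Hobbit?"))
```
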
Tutorial: From heterogeneous data to RDF graphs and back

Authors: Olivier Corby, Catherine Faron Zucker, Maxime Lefrançois and Antoine Zimmermann

Abstract: It is commonly understood by developers that the adoption of Semantic Web models and technologies is an enabler for semantic interoperability on the Web and the Web of Things, but that their adoption is bound to that of RDF data formats. True, the RDF data model may be used as a lingua franca to reach semantic interoperability and to integrate and query data available in heterogeneous formats. The topic of this tutorial is SPARQL-Generate and STTL, which both contribute to making the choice of a data format and that of a data model orthogonal.

SPARQL-Generate is an extension of SPARQL for querying not only RDF datasets but also documents in arbitrary formats. It offers a simple template-based option to generate RDF graphs from documents in heterogeneous formats. The SPARQL Template Transformation Language (STTL) is an extension of SPARQL that enables Semantic Web developers to support the many cases where they need to transform RDF data. It enables them to write specific yet compact RDF transformers towards other languages and formats, including RDF itself. Combining SPARQL-Generate and STTL enables users to develop a new variety of applications in which RDF is used as a pivot language in Web applications requiring heterogeneous data transformation processes.

Website: https://eswc2018-sparql-ext.github.io/tutorial/

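To make the "RDF as pivot" idea concrete, here is a minimal Python/rdflib sketch of the two directions involved. It is not SPARQL-Generate or STTL themselves, which express such steps declaratively; the JSON payload, namespace and property names are invented for illustration.

```python
import json
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")  # illustrative namespace

# 1) Lift: turn a non-RDF document (here, JSON) into an RDF graph,
#    the direction SPARQL-Generate covers with template queries.
doc = json.loads('{"sensors": [{"id": "s1", "temp": 21.5}, {"id": "s2", "temp": 19.0}]}')
g = Graph()
for sensor in doc["sensors"]:
    node = EX["sensor/" + sensor["id"]]
    g.add((node, EX.temperature, Literal(sensor["temp"])))

# 2) Lower: query the graph and emit a non-RDF text report, roughly the kind
#    of RDF-to-text transformation STTL templates describe declaratively.
rows = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?sensor ?temp WHERE { ?sensor ex:temperature ?temp } ORDER BY ?sensor
""")
for sensor, temp in rows:
    print(f"{sensor} reports {temp}")
```
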
Workshop: 2nd Workshop on Semantic Web solutions for large-scale biomedical data analytics (SeWeBMeDA)

Authors: Ali Hasnain, Oya Beyan, Stefan Decker and Dietrich Rebholz-Schuhmann

Abstract: The life sciences domain has been an early adopter of Linked Data, and a considerable portion of the Linked Open Data cloud is composed of life sciences data sets. The deluge of inflowing biomedical data, partially driven by high-throughput gene sequencing technologies, is a key contributor and motor of these developments. The available data sets require integration according to international standards, large-scale distributed infrastructures and specific techniques for data access, and they offer data analytics benefits for decision support. Especially in combination with Semantic Web and Linked Data technologies, this promises to enable the processing of large as well as semantically heterogeneous data sources and the capturing of new knowledge from them.

This workshop invites papers on life sciences and biomedical data processing, as well as on their combination with Linked Data and Semantic Web technologies for better data analytics, knowledge discovery and user-targeted applications. These contributions should provide useful information for the knowledge acquisition research community as well as the working data scientist. The workshop, held at the Extended Semantic Web Conference (ESWC), seeks original contributions describing theoretical and practical methods and techniques that present the anatomy of large-scale linked data infrastructure, covering: the distributed infrastructure to consume, store and query large volumes of heterogeneous linked data; the use of indexes and graph aggregation to better understand large linked data graphs; query federation to mix internal and external data sources; and linked data visualisation tools for health care and the life sciences. It will further cover topics around data integration, data profiling, data curation, querying, knowledge discovery, ontology mapping/matching/reconciliation, data and ontology visualisation, and applications, tools, technologies and techniques for the life sciences and biomedical domain. SeWeBMeDA aims to provide researchers in the biomedical and life sciences with insight into, and awareness of, large-scale data technologies for Linked Data, which are becoming increasingly important for knowledge discovery in the life sciences domain.

Topics of interest include, but are not limited to, Semantic Web and Linked Data technologies in the areas outlined above.

Website: https://sites.google.com/insight-centre.org/sewebmeda-2018/home?authuser=0

Workshop: Fourth International Workshop at ESWC on Sentic Computing, Sentiment Analysis, Opinion Mining and Emotion Detection

Authors: Mauro Dragoni, Diego Reforgiato, Mehwish Alam, Davide Buscaldi and Erik Cambria

Abstract: As the Web rapidly evolves, people are becoming increasingly enthusiastic about interacting, sharing, and collaborating through social networks, online communities, blogs, wikis, and the like. In recent years, this collective intelligence has spread to many different areas, with a particular focus on fields related to everyday life such as commerce, tourism, education, and health, causing the size of the social Web to expand exponentially. Identifying the emotions (e.g. sentiment polarity, sadness, happiness, anger, irony, sarcasm) and the modality (e.g. doubt, certainty, obligation, liability, desire) expressed in this continuously growing content is critical to enabling the correct interpretation of the opinions expressed or reported about social events, political movements, company strategies, marketing campaigns, product preferences, and so on.

Existing solutions still have many limitations, leaving the challenge of emotion and modality analysis open. For example, there is a need for building and enriching semantic and cognitive resources that support emotion and modality recognition and analysis. Additionally, the joint treatment of modality and emotion is, computationally, trailing behind, and is therefore the focus of ongoing research. Also, while we can produce rather robust deep semantic analyses of natural language, we still need to tune this analysis towards the processing of sentiment and modalities, which cannot be addressed by means of statistical models only, currently the prevailing approach to sentiment analysis in NLP. The hybridisation of NLP techniques with Semantic Web technologies is therefore a direction worth exploring.

The workshop will also be connected to the ESWC 2018 Fine-Grained Sentiment Analysis Challenge.

Website: http://www.maurodragoni.com/research/opinionmining/events/

Workshop: 4th Edition of the International Workshop on Social Media World Sensors

Authors: Luigi Di Caro, Mario Cataldi and Claudio Schifanella

Abstract: Social media services are freely accessible social networks that allow registered members to broadcast short posts on a potentially unlimited range of topics, exploiting the immediacy of handy smart devices. This workshop stresses the vision of this powerful communication channel as a social sensor, which can be used to detect and characterise interesting and as-yet-unreported information and events in real time, across all topics and locations. Future technologies building on this connectivity may also provide applications with automatic techniques for the generation of news (filtered over user profiles), offering a sideways channel alongside the existing authoritative information media.

Website: http://linc.iut.univ-paris8.fr/sideways/index.html

Workshop: Managing the Evolution and Preservation of the Data Web – MEPDaW 2018

Authors: Javier D. Fernández, Jeremy Debattista, Jürgen Umbrich and Maria Esther Vidal

Abstract: Managing the evolution and preservation of linked open datasets poses a number of challenges, mainly related to the nature of the Linked Data principles and the RDF data model. More specifically, Linked Data techniques are expected to tackle major issues such as the synchronisation problem (how to monitor changes), the curation problem (how to repair data imperfections and add value over time), the appraisal problem (how to assess the quality of a dataset), the citation and provenance problem (how to cite a particular version of a linked dataset and how to keep the lineage and provenance of the data), the archiving problem (how to retrieve the most recent or a particular version of a dataset), and the sustainability problem (how to support preservation at scale, ensuring long-term access).

This workshop aims at addressing the above-mentioned challenges and issues by providing a forum for researchers and practitioners who apply Linked Data technologies to discuss, exchange and disseminate their work. More broadly, this forum will enable communities interested in data, knowledge and ontology dynamics, lifecycles and versioning to network and cross-fertilise.

Website: https://mepdaw2018.ai.wu.ac.at/

Workshop: Deep Learning for Knowledge Graphs and Semantic Technologies (DL4KGS)

Authors: Michael Cochez, Gerard de Melo, Thierry Declerck, Luis Espinosa Anke, Besnik Fetahu, Dagmar Gromann, Mayank Kejriwal, Maria Koutraki, Freddy Lecue, Enrico Palumbo, Harald Sack

Abstract: Semantic Web technologies and deep learning share the goal of creating intelligent artifacts that emulate human capacities such as reasoning, validating, and predicting. There are notable examples of contributions leveraging either deep neural architectures or distributed representations learned via deep neural networks in the broad area of Semantic Web technologies. In the past years, deep learning (DL) algorithms have been used to learn features from knowledge graphs, resulting in enhancements of the state of the art in entity relatedness measures, entity recommendation systems and entity classification. DL algorithms have equally been applied to classic problems in semantic applications, such as (semi-automated) ontology learning, ontology alignment, duplicate recognition, ontology prediction, relation extraction, and semantically grounded inference. This full-day workshop aims to gather researchers and practitioners presenting innovative research content as well as applications involving deep learning, knowledge graphs and semantic technologies. The workshop will include oral presentations of short and full papers as well as a keynote speech.

Website: http://usc-isi-i2.github.io/DL4KGS/

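As one concrete illustration of learning features from a knowledge graph (chosen here purely for brevity; the workshop is not tied to any particular model), the sketch below scores triples with the TransE objective, which models a relation as a translation between entity embeddings. The toy entities and randomly initialised vectors are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Toy embeddings for a tiny knowledge graph (entities and relations).
entities = {e: rng.normal(size=dim) for e in ["Berlin", "Germany", "Paris", "France"]}
relations = {"capitalOf": rng.normal(size=dim)}

def transe_score(head, relation, tail):
    """TransE plausibility score: smaller ||h + r - t|| means a more plausible triple."""
    return float(np.linalg.norm(entities[head] + relations[relation] - entities[tail]))

# With trained embeddings, true triples would score lower than corrupted ones.
print(transe_score("Berlin", "capitalOf", "Germany"))
print(transe_score("Berlin", "capitalOf", "France"))
```
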
Workshop: QuWeDa 2018: 2nd Workshop on Querying the Web of Data

Authors: Muhammad Saleem, Ricardo Usbeck, Ruben Verborgh, Olaf Hartig and Axel-Cyrille Ngonga Ngomo

Abstract: The constant growth of Linked Open Data (LOD) on the Web opens new challenges pertaining to querying such massive amounts of publicly available data. LOD datasets are available through various interfaces, such as data dumps, SPARQL endpoints and triple pattern fragments. In addition, various sources produce streaming data. Efficiently querying these sources is of central importance for the scalability of Linked Data and Semantic Web technologies. The trend towards publicly available and interconnected data is shifting the focus of Web technologies towards new paradigms of Linked Data querying. To exploit the massive amount of LOD to its full potential, users should be able to query and combine this data easily and effectively.

This workshop at the Extended Semantic Web Conference 2018 (ESWC 2018) seeks original articles describing theoretical and practical methods and techniques for fostering, querying, and consuming the Data Web.

Website: https://sites.google.com/site/quweda2018/

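For readers new to the interfaces mentioned above, the sketch below queries one of them, a public SPARQL endpoint, via the SPARQLWrapper library. The DBpedia endpoint URL and the query are only an example of one such interface, and the availability and rate limits of public endpoints vary.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Query one public Linked Open Data interface: the DBpedia SPARQL endpoint.
endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
endpoint.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?city ?population WHERE {
        ?city a dbo:City ;
              dbo:populationTotal ?population .
    }
    ORDER BY DESC(?population)
    LIMIT 5
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["city"]["value"], binding["population"]["value"])
```
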
Workshop: Third International Workshop on Semantic Web for Cultural Heritage (SW4CH 2018)

Authors: Béatrice Markhoff, Stéphane Jean, Antonis Bikakis and Alessandro Mosca

Abstract:

WORKSHOP SCOPE AND AIM

Cultural Heritage (CH) is gaining a lot of attention from academic and industry perspectives. Scientific researchers, organisations, associations, and schools are looking for appropriate technologies for annotating, integrating, sharing, accessing, analysing and visualising the mine of cultural collections and, more generally, cultural data, taking into account the profiles and preferences of end users.

Several national and European research and innovation projects have been launched in these directions. A fundamental challenge that many of these projects deal with is how to make Cultural Heritage data, which is typically made available in diverse languages and formats, mutually interoperable, so that it can be searched, linked, and presented in a harmonised way across the boundaries of Cultural Heritage institutions.

Early solutions were based on the syntactic or structural level of the data, without leveraging the rich semantic structures underlying the content. During the last decades, solutions based on the principles and technologies of the Semantic Web have been proposed to explicitly represent the semantics of data sources and make both their content and their semantics machine operable and interoperable. In parallel, resources such as the CIDOC-CRM ecosystem have matured. As institutions bring their data to the Semantic Web level, the tasks of integrating, sharing, analysing and visualising data are now to be conceived in this new and very rich framework.

The aim of the SW4CH workshop is to bring together computer scientists, data scientists and Digital Humanities researchers and practitioners involved in the development or deployment of Semantic Web solutions for Cultural Heritage. The goal is to provide a forum where people from these fields have the opportunity to exchange ideas and experiences, present the state of the art of realisations and outcomes of relevant projects, and discuss related challenges and solutions.

TOPICS

We seek original and high-quality submissions related (but not limited) to topic areas at the intersection of Semantic Web technologies and Cultural Heritage.

Website: https://sw4ch2018.ensma.fr/

Workshop: 3rd Geospatial Linked Data Workshop

Authors: Matthias Wauer, Mohamed Sherif and Axel-Cyrille Ngonga Ngomo

Abstract: Geospatial data is vital for many application scenarios, such as navigation, logistics, and tourism. At the same time, a large number of currently available datasets (both RDF and conventional) contain geospatial information. Examples include DBpedia, Wikidata, Geonames, OpenStreetMap and its RDF counterpart, LinkedGeoData. RDF stores have become robust and scalable enough to support volumes of billions of records (RDF triples). Despite improving implementations and standards such as GeoSPARQL, however, traditional geospatial data management systems still outperform them in functionality, efficiency and scalability for geospatial content. On the other hand, geospatial information systems (GIS) can benefit from Linked Data principles (e.g., schema agility and interoperability).

The goal of the GeoLD workshop is to provide an opportunity for the Linked Data community to focus on the emerging need for effective and efficient production, management and utilisation of geospatial information within Linked Data. Emphasis will be given to works describing novel methodologies, algorithms and tools that advance the current state of the art with respect to efficiency or effectiveness. We welcome both mature solutions and ongoing work that presents promising results.