Abstract
Numerous successful use cases involving deep learning have recently begun to find their way into the Semantic Web. Approaches range from utilizing structured knowledge in the training process of neural networks to enriching such architectures with ontological reasoning mechanisms. Bridging the neural-symbolic gap by joining deep learning and the Semantic Web holds the potential not only of improving performance but also of opening up new avenues of research. This editorial introduces the Semantic Web Journal special issue on Semantic Deep Learning, which brings together Semantic Web and deep learning research. After a general introduction to the topic and a brief overview of recent contributions, we introduce the submissions published in this special issue.
Introduction
Semantic Web technologies and deep learning share the goal of creating intelligent artifacts that emulate human capacities such as reasoning, validating, and predicting. Both fields have considerably impacted data and knowledge analysis, as well as their associated abstract representations. The term
There are notable examples showcasing the influence of neural approaches to knowledge acquisition and representation learning on the broad area of Semantic Web technologies. These include, among others, ontology learning [40,49,65], learning structured query languages from natural language [69], ontology alignment [20,28,35,52], ontology annotation [15,58], joint relational and multi-modal knowledge representations [62], and relation prediction [1,59]. Ontologies, on the other hand, have been repeatedly utilized as background knowledge for machine learning tasks. As an example, there is a myriad of hybrid approaches for learning linguistic representations by jointly incorporating corpus-based evidence and semantic resources [13,25,27,33,50]. This interplay between structured knowledge and corpus-based approaches has given rise to knowledge graph embeddings, which in turn have proven useful for tasks such as hypernym discovery [21], collocation discovery and classification [22], word sense disambiguation [12,54], joint relational and multi-modal knowledge representations [62], and many others.
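To give a concrete flavor of the knowledge graph embeddings mentioned above, the following sketch illustrates the translational scoring idea popularized by the TransE family of models: a triple (head, relation, tail) is considered plausible when the head embedding translated by the relation embedding lands near the tail embedding. The toy entities, relation, and random initialization are purely illustrative assumptions; a real model would learn these vectors by gradient descent over observed and corrupted triples.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 50  # embedding dimensionality (illustrative choice)

# Hypothetical toy knowledge graph vocabulary.
entities = ["Berlin", "Germany", "Paris", "France"]
relations = ["capital_of"]

# Randomly initialized embeddings; in practice these are trained.
ent_emb = {e: rng.normal(size=DIM) for e in entities}
rel_emb = {r: rng.normal(size=DIM) for r in relations}

def transe_score(head, relation, tail):
    """TransE-style plausibility: lower distance = more plausible triple."""
    return np.linalg.norm(ent_emb[head] + rel_emb[relation] - ent_emb[tail])

# Rank candidate tails for the query ("Berlin", "capital_of", ?).
scores = {t: transe_score("Berlin", "capital_of", t) for t in entities}
best_tail = min(scores, key=scores.get)
```

Tasks such as link prediction and hypernym discovery then reduce to ranking candidate entities by such a score, which is what makes these embeddings broadly reusable across the applications cited above.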
In this context, this special issue aims to provide a playground for exploring the interaction between neural NLP and representation learning, on the one hand, and symbolic representations of knowledge and data-driven approaches to pattern recognition, on the other. Specifically, we invited submissions illustrating how Semantic Web resources and technologies benefit from interacting with neural networks. At the same time, we also encouraged submissions showing how knowledge representation can assist in neural NLP tasks, and how knowledge representation systems can build on top of deep learning. The timeliness of this special issue is also apparent in the potential of symbolic representations of knowledge, in the form of ontologies, knowledge graphs, and rules, to contribute to the long-standing goal of explainable and interpretable Artificial Intelligence [39], for example, in “keep-a-human-in-the-loop” approaches [31] or directly in reasoning about neural network decisions [14].
This special issue builds on and complements a series of workshops dedicated to
Recent Semantic Deep Learning approaches
Neural-symbolic approaches (see e.g. [6,8,30] for an overview) represent a relatively young field of research, having attracted considerable attention only within the last few years. The SemDeep series in general, and this special issue in particular, have offered a forum where such methods, from proof-of-concept stage to more advanced and robust stages of development, could be presented and discussed.
Specifically, SemDeep has seen contributions on the explicit modeling of lexical and semantic relations stemming from joint neural-symbolic methods [23,44,55]. Additionally, well-defined NLP tasks have been the focus of several SemDeep papers over the years, covering event detection [11], part-of-speech tagging [67], co-reference resolution [63], sentiment analysis [47], named entity recognition [41], and question answering [32]. Interestingly, another area that has been prominently covered in SemDeep is (formal) knowledge representation, such as the task of link prediction in generic knowledge bases as well as in domain-specific use cases [3,4,71]. Fewer works have focused on more technical aspects of a knowledge-enhanced deep learning pipeline, for example, exploring disjointness in loss functions for classification tasks [57], end-to-end memory networks [36], image-based neural user profiling [68], and Siamese Long Short-Term Memory (LSTM) networks [29].
The topic that has attracted the most interest in the SemDeep workshop series has been representation learning, with a plethora of accepted submissions discussing vector representations of linguistic items as well as meta-embeddings. The concrete topics covered include word and document embeddings [56,70], knowledge graph embeddings [26], joint knowledge graph and text embeddings [16,42], multi-modal approaches [64,68], leveraging external information such as lexical resources [46], embeddings for low-resource languages such as Igbo [24], and learning structured knowledge [2,18].
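The meta-embeddings mentioned above combine several pre-trained embedding spaces into a single representation. One of the simplest baselines in this line of work is concatenation after per-space normalization, sketched below; the source dimensionalities (300-d corpus-based, 100-d knowledge-graph-based) and the random example vectors are illustrative assumptions, not a reference to any specific resource.

```python
import numpy as np

def l2_normalize(vec):
    """Scale a vector to unit length (guarding against the zero vector)."""
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def concat_meta_embedding(sources):
    """Build a meta-embedding by concatenating normalized source vectors."""
    return np.concatenate([l2_normalize(v) for v in sources])

# Hypothetical source embeddings for the same word, e.g. one from a
# corpus-based model and one from a knowledge graph embedding model.
corpus_vec = np.random.default_rng(1).normal(size=300)
kg_vec = np.random.default_rng(2).normal(size=100)

meta = concat_meta_embedding([corpus_vec, kg_vec])  # 400-d meta-embedding
```

Per-space normalization prevents a source with larger vector magnitudes from dominating distance computations in the combined space; more elaborate meta-embedding methods learn this combination instead of fixing it.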
Overview of this special issue
The paper
In
The paper
In
The paper
In
Conclusion and future directions
Contributions to this special issue have focused on utilizing deep learning in connection with reasoning – either making the network itself a reasoner or enabling interaction between deep learning and a reasoner – multi-modal embeddings, feature extraction from natural language, and knowledge base completion. While this enumeration already hints at the large variety of central approaches to
Combining Semantic Web technologies and deep learning holds the potential to contribute crucially to the recent surge of interest in Explainable Artificial Intelligence (XAI). This might, for instance, take the form of injecting knowledge into training procedures in order to estimate how a model's behavior changes depending on the knowledge utilized. Another important future direction is the further systematic investigation of multi-modal approaches connecting linguistic, visual, and sensory inputs.
Acknowledgements
We would like to thank all the authors of accepted and rejected articles for their efforts, and the editors-in-chief of the Semantic Web Journal, Pascal Hitzler and Krzysztof Janowicz, for their continued support and help. A special thanks goes to all the people who have made this special issue possible, in particular our reviewers (in alphabetical order): Kemo Adrian, Luu Ahn Tuan, Claudia d’Amato, Miguel Ballesteros, Peter Bloem, Jose Camacho-Collados, Michael Cochez, Stamatia Dasiopoulou, Derek Doran, Cristina España i Bonet, Maarten Grachten, Dario Garcia-Gasulla, Jorge Gracia, Jindrich Helcl, Rezaul Karim, Mayank Kejriwal, Freddy Lecue, Alessandro Lenci, Antonio Lieto, Alessandra Mileo, Sergio Oramas, Petya Osenova, Simone Paolo Ponzetto, Heiko Paulheim, Martin Riedl, Francesco Ronzano, Enrico Santus, Francois Scharffe, Vered Shwartz, Kiril Simov, Michael Spranger, Armand Vilalta, Piek Vossen, and Arkaitz Zubiaga.
The edition of this special issue on Semantic Deep Learning has been supported by the German national project DeepLee, which is partially funded by the German Federal Ministry of Education and Research under the funding code 01IW17001. Responsibility for the content of this special issue lies with the editor(s).
