Paper 33 (Research track)

RWRDoc: Random Walk with Restart based Entity Documentation

Author(s): Takahiro Komamizu

Full text: submitted version

Abstract: Entity documentation is a fundamental task for utilizing Linked Data datasets. Existing entity documentation methods are based on heuristics which determine the fields (predicates and/or related entities) that provide contents for the documents of entities. This paper frees entity documentation from heuristic methods by applying a graph proximity measure, namely random walk with restart (RWR). It proposes RWRDoc, an RWR-based documentation method which generates documents of entities by a weighted summation of minimal vector representations of entities. RWRDoc is a simple, general, parameter-free entity documentation method, and is beneficial for various applications built upon entity documentation. Comprehensive experiments applying RWRDoc to diverse applications (such as ad-hoc entity search, entity type inference, and recommender systems using Linked Data) reveal the effectiveness of RWRDoc.

Keywords: Entity documentation; Random walk with restart; Linked Data

Decision: reject

Review 1 (by Petar Ristoski)

(RELEVANCE TO ESWC) The work is highly relevant for ESWC, as it presents an approach for exploiting semantic knowledge graphs in a variety of applications.
(NOVELTY OF THE PROPOSED SOLUTION) Similar approaches based on random walks have already been proposed in the area of information retrieval, but I find it interesting how the whole approach is presented and the possible applications of it. However, there are a lot of details missing in the paper, which put the novelty of the approach in question.
(CORRECTNESS AND COMPLETENESS OF THE PROPOSED SOLUTION) The approach is mostly presented clearly, except for what the final entity representation is. Based on the initial explanation it is expected to be a feature vector; however, based on Definition 2 it seems to be a single numerical value.
(EVALUATION OF THE STATE-OF-THE-ART) Important related work is missing, e.g. [5,6], which makes it difficult to identify the novelty in the paper. 
In some of the evaluation tasks the authors don't compare their approach to important related work approaches.
(DEMONSTRATION AND DISCUSSION OF THE PROPERTIES OF THE PROPOSED APPROACH) While the authors have conducted an extensive performance evaluation of the approach, there is no runtime evaluation.
(REPRODUCIBILITY AND GENERALITY OF THE EXPERIMENTAL STUDY) The authors use standard datasets that are used in the related work. However, there is no discussion about the availability of the tool/code of the presented approach.
(OVERALL SCORE) The paper presents an approach for entity representation based on random walks with restart, named RWRDoc. The approach has been tested on a set of tasks, and in some cases shows improvement over the related work.
Strong points:
1. The method can be applied in a variety of applications.
Weak points:
1. Difficult to identify the novelty and contribution of the paper.
2. Unclear how the final vectors are generated.
3. The evaluation setup in several tasks seems to be invalid.
Detailed Review:
The paper presents an approach for entity representation based on random walks with restart, named RWRDoc. The approach has been tested on a set of tasks, and in some cases shows improvement over the related work.
Similar approaches based on random walks have already been proposed in the area of information retrieval, but I find it interesting how the whole approach is presented and the possible applications of it. However, there are a lot of details missing in the paper.
Important related work is missing, e.g. [5,6], which makes it difficult to identify the novelty in the paper.
Based on the introduction, I understood that the final entity representation is "a weighted combination of minimal representations", which I assume is a weighted feature vector; however, based on Definition 2 and Algorithm 1 (line 10) it seems that each entity is represented as a sum (a single number). If an entity is represented as a number, I don't see how it can be further used.
The entity search evaluation is not convincing, i.e., the approach doesn't clearly show advantage over the related approaches. Only NDCG is used as an evaluation metric, while MAP and Precision are not shown, which should give more insights.
The evaluation of the approach on the type inference task seems to be incorrect. The author doesn't provide many details on how the evaluation dataset and the entity vectors were created, but as given it seems that the entity vectors already encode the type information from the original graph, i.e., the vectors already contain the type label. This suspicion is supported by the perfect result of 100% accuracy.
Furthermore, if the author wants to examine the validity of the approach in data mining, as stated, they could compare to [1,2], for example.
The recommender systems evaluation focuses on only 2 related approaches, completely ignoring approaches that have already scored good results on the chosen dataset, like [3,4] among others.
The entity summarization evaluation section is not necessary, as the approach is only compared to a simple baseline, omitting the state-of-the-art solutions.
I would suggest that the author focus on the entity search evaluation, and maybe extend it with entity relatedness and document similarity tasks, rather than the current set of tasks, where the approach does not seem to be comparable with the related work.
The author claims that the approach is significantly more efficient than related graph embedding approaches, but neither backs this claim with an evaluation nor provides details on the complexity of the proposed approach.
Minor comments: 
The term "Entity Documentation" might not be the most appropriate one. I would opt for "Entity Representation" or "Propositionalization", or similar.
I don't think there is a need for the for loop at line 9, since most of the R[e][r] entries will be zero; instead, only R should be iterated, thus reducing the complexity.
The paper should be proof-read by a native/good English speaker as it contains some grammatical mistakes and typos, such as:
- A random surfer traverses one of neighbours for each moment...
- Secondly, the texts for entities are compose bags of words...
[1] Ristoski, P., & Paulheim, H. (2014). A comparison of propositionalization strategies for creating features from linked open data. Linked Data for Knowledge Discovery, 6.
[2] Ristoski, P., & Paulheim, H. (2016). RDF2Vec: RDF graph embeddings for data mining. In International Semantic Web Conference (pp. 498-514). Springer International Publishing.
[3] Di Noia, T., Ostuni, V. C., Tomeo, P., & Di Sciascio, E. (2016). SPrank: Semantic path-based ranking for top-N recommendations using linked open data. ACM Transactions on Intelligent Systems and Technology (TIST).
[4] Ning, X., & Karypis, G. (2011). SLIM: Sparse linear methods for top-N recommender systems. In 11th IEEE International Conference on Data Mining (ICDM 2011) (pp. 497-506).
[5] Lao, N., & Cohen, W. W. (2010). Relational retrieval using a combination of path-constrained random walks. Machine Learning, 81(1), 53-67.
[6] Schuhmacher, M., & Ponzetto, S. P. (2014). Knowledge-based graph document modeling. In Proceedings of the 7th ACM International Conference on Web Search and Data Mining (pp. 543-552). ACM.


Review 2 (by Ziqi Zhang)

(RELEVANCE TO ESWC) The paper describes a method for learning entity representations using PageRank. The topic is relevant for many semantic web related tasks. But the author tries to distinguish themselves by proposing the idea of 'entity documentation', which is not well explained or justified.
(NOVELTY OF THE PROPOSED SOLUTION) Applying PageRank to a graph of entities is an extensively researched topic. To me the technical solutions are the same, but applied to a graph of linked data. The author did not highlight how their method (graph creation, PageRank algorithm) differs from others, making it difficult to assess its novelty.
(CORRECTNESS AND COMPLETENESS OF THE PROPOSED SOLUTION) The proposed method is sound but can benefit from more details such as an example. It is not very clear how the graph is created.
(EVALUATION OF THE STATE-OF-THE-ART) In many cases, the compared methods are weak or not suitable. The author also did not compare against one or two existing entity representation learning methods. For some experiments, the result interpretation may not make sense. For example, the inference task involves two different algorithms, each using a different entity representation. But one should alter only one variable in an experiment at a time, i.e., fix the algorithm and test different representations.
(DEMONSTRATION AND DISCUSSION OF THE PROPERTIES OF THE PROPOSED APPROACH) The discussion of results is mostly descriptive; it would be valuable if the author could talk about lessons learned.
(REPRODUCIBILITY AND GENERALITY OF THE EXPERIMENTAL STUDY) The experiment uses mostly public datasets, but the compared systems are not easily available. No tools are shared for reproducing the system and experiments.
(OVERALL SCORE) **short description**: The paper describes a method for learning entity representations using PageRank. It is evaluated in several in-vivo applications. However, the paper lacks clarity and some experiments and results are questionable. Therefore, it is difficult to comment on the contribution of this work.
**strong points**
1. the topic of this work is very important and relevant
2. the method is evaluated in several different applications
**weak points**
1. the paper is difficult to follow in many places; it should be proof-read before submission
2. experiment results are not always convincing; in some experiments, the method did not achieve the best performance
3. the design of some experiments could be flawed
**QAs**
1. how is 'entity documentation' different from learning entity representations?
2. how do you interpret results in Figure 2?
The paper describes a method for learning entity representations using PageRank. The topic is important for many tasks, as the author demonstrated, and is a good fit for this conference. However, the quality of the paper is rather unsatisfactory, and it has many problems.
First, the English and writing style are very poor. In many places, I found it difficult to follow what the author intends to say. The author should have a native speaker who is familiar with this domain read the paper before submission.
Second, the paper lacks a clear definition of the task and strong motivation. I do not understand why the author creates the concept of 'entity documentation', which to my knowledge is not used in the literature, and I could not understand how it was intended to be defined until I had read the full paper. It turns out to be highly related to entity representation learning, so why not stick to that term? The paper does not clearly define and compare the difference.
Third, the paper lacks novelty. Regardless of the novelty of the idea - which is questionable anyway, as I explained above - applying PageRank to a graph of entities is an extensively researched topic, such as in disambiguation [1], ranking [2], and entity recommendation [3]. It did not take me much effort to google for related work. To me the technical solutions are the same, but applied to a graph of linked data. The author also did not highlight how the method (graph creation, PageRank algorithm) differs from others.
Finally, the evaluation is questionable and the results are not significant. In many cases, the compared methods are weak or not suitable. Given that this work is most related to entity representation learning, I think the experiments should at least compare against one or two existing entity representation learning methods. Specifically, the ranking results are inconclusive: just like other algorithms, the proposed method is better on some datasets and worse on others. I am not sure how useful this is, unless you can say something about lessons learned (why it is expected to work well and where). The comparison in the inference task does not make sense: you compare a neural-network-based method using your learned representation against another, and the first works better, but how do you know this is down to your representations and not the algorithm? Instead, you should have two settings, both using the same machine learning algorithm but different features (one based on yours, the other on some state of the art). I also do not understand Figure 2: does it mean, e.g., that when P=1.0, R can be anything between 0 and 1.0? This just does not make sense. On the recommendation task, the proposed method is not the winning one, which again questions the contribution of your method.
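The controlled comparison the review calls for could be sketched as follows (purely illustrative: the synthetic data, the names, and the nearest-centroid learner are hypothetical stand-ins, not anything from the paper):

```python
import numpy as np

def nearest_centroid_accuracy(X_tr, y_tr, X_te, y_te):
    """A fixed, deliberately simple learner: classify by nearest class
    centroid. Holding the learner constant isolates the features' effect."""
    classes = np.unique(y_tr)
    centroids = np.array([X_tr[y_tr == c].mean(axis=0) for c in classes])
    dists = ((X_te[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    preds = classes[dists.argmin(axis=1)]
    return float((preds == y_te).mean())

rng = np.random.default_rng(42)
y = rng.integers(0, 2, size=200)
# Stand-ins for two competing entity representations (synthetic):
X_repr_a = rng.normal(size=(200, 20)) + y[:, None]  # informative features
X_repr_b = rng.normal(size=(200, 20))               # uninformative features

split = 150
acc_a = nearest_centroid_accuracy(X_repr_a[:split], y[:split],
                                  X_repr_a[split:], y[split:])
acc_b = nearest_centroid_accuracy(X_repr_b[:split], y[:split],
                                  X_repr_b[split:], y[split:])
# Same algorithm, different representations: any accuracy gap is now
# attributable to the representation alone.
```

Because only the feature matrix varies between the two runs, a difference in accuracy can be credited to the representation rather than to the learning algorithm, which is exactly the single-variable design the review asks for.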
[1] http://www.aclweb.org/anthology/N15-1026
[2] https://arxiv.org/pdf/0711.3128.pdf
[3] http://web.engr.illinois.edu/~hanj/pdf/wsdm14_xyu.pdf


Review 3 (by anonymous reviewer)

(RELEVANCE TO ESWC) I think the topic is relevant.
(NOVELTY OF THE PROPOSED SOLUTION) I assume this is a novel approach.
(CORRECTNESS AND COMPLETENESS OF THE PROPOSED SOLUTION) There are false claims and unexplained parameters.
(EVALUATION OF THE STATE-OF-THE-ART) I assume the compared methods are state of the art.
(DEMONSTRATION AND DISCUSSION OF THE PROPERTIES OF THE PROPOSED APPROACH) The discussion is rather weak.
(REPRODUCIBILITY AND GENERALITY OF THE EXPERIMENTAL STUDY) I assume the tests are reproducible.
(OVERALL SCORE) Komamizu describes a method "RWRDoc" to document entities based on random walks in a network of entities.
Major issues:
In general, the manuscript is not very well written, but readable. Sometimes there are sentences that are hard to make sense of, e.g. "More sophisticated approaches are fielded documentation techniques [13] which heuristically select informative predicates and consider texts at their objects are more important." More important than what? Why? The whole manuscript needs more concrete examples to improve readability.
More importantly, the author states that RWRDoc is parameter-free. This seems clearly wrong, given that there is a restart probability or, as the author calls it, a "dumping factor" (see the first equation). This parameter is really important: especially when set to 1 or 0, the results will differ a lot. Also, it is not clear from the manuscript which value the damping factor was set to in this study.
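A minimal RWR sketch (with a hypothetical toy graph and invented variable names, not the author's implementation) makes this point concrete: the restart probability c is a genuine parameter whose value visibly changes the result:

```python
import numpy as np

def rwr(P, source, c=0.15, tol=1e-10, max_iter=1000):
    """Random walk with restart by power iteration.
    P: column-stochastic transition matrix (n x n).
    source: index of the restart node.
    c: restart probability -- the damping-related parameter that
       cannot be eliminated, contrary to a parameter-free claim."""
    n = P.shape[0]
    r = np.full(n, 1.0 / n)   # initial proximity vector
    e = np.zeros(n)
    e[source] = 1.0           # restart distribution
    for _ in range(max_iter):
        r_next = (1 - c) * (P @ r) + c * e
        done = np.abs(r_next - r).sum() < tol
        r = r_next
        if done:
            break
    return r

# Tiny 3-node chain 0 -- 1 -- 2, encoded as a column-stochastic matrix:
P = np.array([[0.0, 0.5, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 0.5, 0.0]])
low_c = rwr(P, source=0, c=0.15)   # mass spreads to neighbours
high_c = rwr(P, source=0, c=0.9)   # mass concentrates at the source
```

With c=0.9 far more probability mass stays at the source node than with c=0.15, so any reported proximity (and anything built on it) depends on this choice.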
Also, it is not clear to me how the graph on which the random walk is performed is constructed. What are the nodes and edges in this graph? The author writes: "Basic idea of RWRDoc is, for an entity, entities with high proximity to the entity are highly relevant...". There are 'entities' everywhere in this sentence and no clear definition of what an "entity" refers to in this manuscript. This should be clarified, and it should be made clear on what network the random walk is performed.
In general, the manuscript would profit from more examples, such as the one in Table 2; the other parts of the manuscript are hard to follow without them.
In section 3.4: How many entities were tested?
Deep learning discussion: the author states that deep learning approaches suffer from bad interpretability. I don't see that problem being solved by RWRDoc. Also, for a computer science paper, I find it a bit sloppy to state that "learning takes quite large computational costs" (for deep learning). It is not clear to the reader which costs are associated with deep learning or with RWRDoc; this is not a proper runtime or complexity analysis.
Page 5: What are "IR communities"? IR is undefined.
Page 6: The author must clarify the number of queries. It is not clear where the queries are generated.
Minor issues:
I find the precision-recall curve in Figure 2 highly suspicious. RWRDoc achieves a perfect performance, with a curve that looks as if it has an area under the curve of 1. This makes me wonder whether the analysis is really correct; I can't prove otherwise.
Sentence: "Finding 3: ...for incorporating ... into account." Either "take into account" or just "incorporating".
The sentence "Here takes a deeper look at Table 2." makes no real sense; it should probably read "In the following we take a deeper look at Table 2."
4 Related work: Fix: "For more complicated tasks... more modern approach" add " a more modern approach"
4.1:
the importances -> the importance
4.2:
Re-write sentence: This extension enriches network embeddings more sematically meaningful.


Review 4 (by anonymous reviewer)

(RELEVANCE TO ESWC) The paper addresses the problem of entity documentation, which can be useful for several applications such as search and exploratory services.
(NOVELTY OF THE PROPOSED SOLUTION) The method used is Random Walk with Restart (RWR), which is a well known technique. It is simple to apply this method to LOD datasets.
(CORRECTNESS AND COMPLETENESS OF THE PROPOSED SOLUTION) The method is well described and it looks sound to me. However, the paper lacks a discussion about how to fine-tune the parameters; see P1 below.
(EVALUATION OF THE STATE-OF-THE-ART) The authors evaluate with several existing related methods. I would have loved to see some more comparison with embedding-based methods that are mentioned in the related work section. I would suggest to compare the approach also with [26].
[26] Petar Ristoski, Heiko Paulheim. RDF2Vec: RDF Graph Embeddings for Data Mining. International Semantic Web Conference (1) 2016: 498-514
(DEMONSTRATION AND DISCUSSION OF THE PROPERTIES OF THE PROPOSED APPROACH) The main features of the algorithm are well described, as are the applications of the algorithm.
(REPRODUCIBILITY AND GENERALITY OF THE EXPERIMENTAL STUDY) The approach is described, but a few details are missing that make it difficult to reproduce the experiments. In this direction, releasing the source code would be a plus.
(OVERALL SCORE) This paper introduces RWRDoc, a random walk based weighting scheme for entity documentation. The main idea is to weigh TF-IDF vectors' terms in entity descriptions. Typically each entity has a description field, but sometimes these descriptions may not be enough or may miss some relevant terms.
In this work, the descriptions of entities are modeled as TF-IDF vectors. This means that the descriptions are not sentences or bags of words, but words with associated weights. The goal of the paper is then to produce TF-IDF vectors as documentation. The TF-IDF vectors are obtained by a weighted combination of such vectors from nodes reached by random walks. RWRDoc does random walks with restarts from each entity node to obtain additional terms in the description from the traversed entities. Then based on the random-walk reachability scores, the documentation of an entity is the weighted sum of TF-IDF vectors of entities reached in random walks.
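As an illustrative sketch of this description (the toy vocabulary, entity names, TF-IDF values, and RWR scores are all invented; this is not the author's code), the weighted sum produces a vector over the vocabulary:

```python
import numpy as np

# Toy vocabulary and minimal TF-IDF representations (all values invented):
vocab = ["nagoya", "japan", "city", "castle"]
tfidf = {
    "Nagoya":        np.array([0.9, 0.0, 0.0, 0.0]),
    "Japan":         np.array([0.0, 0.8, 0.0, 0.0]),
    "Nagoya_Castle": np.array([0.3, 0.0, 0.0, 0.7]),
}

def document(entity, rwr_scores, tfidf, vocab_size):
    """Documentation of `entity`: the RWR-weighted sum of the TF-IDF
    vectors of reached entities -- a vector over the vocabulary,
    not a single number."""
    doc = np.zeros(vocab_size)
    for reached, score in rwr_scores[entity].items():
        doc += score * tfidf[reached]
    return doc

# Invented RWR reachability scores from the source entity "Nagoya":
rwr_scores = {"Nagoya": {"Nagoya": 0.6, "Japan": 0.25, "Nagoya_Castle": 0.15}}
doc = document("Nagoya", rwr_scores, tfidf, len(vocab))
# doc gains a nonzero weight for "japan", inherited from a related
# entity even though Nagoya's own description lacked that term.
```

The point of the scheme is visible here: terms missing from an entity's own description can enter its documentation through highly reachable neighbours, weighted by their RWR proximity.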
The author uses RWRDoc and compares it with other algorithms on tasks like entity search, entity similarity search (for recommender systems), entity classification, and a qualitative evaluation for entity description. The chosen competitors are mainly TF-IDF based methods that use weights of terms in each entity's own description.
Strong Points:
--------------
The presented algorithm is intuitive and shows good performance on several relevant tasks.
The author states the strengths as well as the weaknesses of the presented algorithm. For example in Finding 1, he notes that RWRDoc increases recall but lacks ranking capability. Later in Finding 3, some possible improvements are mentioned.
The paper is clear and easy to read, except in some places where typos (see below) or incomplete sentences make some parts difficult to understand (e.g., how the human judges were used).
Weak Points and Questions:
--------------------------
I have the following concerns, which I'd like to be commented by the author in the rebuttal:
P1 - The author stresses that RWRDoc is parameter-free. In fact, although this is a property of random walks, it is mentioned as a contribution (Section 1) of the work. Normal random walks are parameter-free, but the author's choice to use Random Walk with Restarts (Section 2) involves some parameters, like damping factor d.
P2 - Usually, Random Walks with Restarts also have an upper limit on the number of hops they make, and the number of samples generated for each node. What were these parameters in this work? Were they the same in all experiments, or were they optimized? It is never mentioned.
P3 - Fig 2: A perfect AUC of 1 for all settings is very strange, and the author does not discuss this behavior. Essentially, this figure says that the rankings generated by RWRDoc are perfect at all positions. From the other experiments, RWRDoc does not seem to be a perfect ranking system, so why is this behavior not explained in Section 3.2?
P4 - For experiments in Section 3.3, the author does not provide the parameter settings for competitors, e.g., PPR and PLDSD. Were they compared fairly?
P5 - As stated by the author, there are studies in the state of the art, e.g., [5,17], which can perform the tasks considered in the experiments. Why were such methods not considered in the tests? How does RWRDoc compare to them?
P6 - Table 2b: I am surprised that Nagoya is not associated with "Japan" and "city." I understand that the evaluation was done by humans, but it would be interesting to know more about the profiles of the human judges.
P7 - For judging the quality of terms describing entities, the author could have used a topic coherence measure. Did the author consider such an approach? If yes, why did the author opt for a human-based evaluation?
Typos:
------
As a minor note, I spotted the following typos.
page 2:
- This paper tackles the aforementioned concerns
- stops random walk and restarts from source vertices
- according to
page 4:
- the texts for entities are composed of
page 6:
- increase recall but lack ranking capability
page 10:
- RWRDoc provides richer
- this sentence doesn't make sense, or is incomplete: "While, RWRDoc fails... "
- RWRDoc still leaves space
- this sentence doesn't make sense, or is incomplete: "In this experiments, five voluntary human judges..."
- terms which are judged as relevant
- Lines indicate
page 11:
- RWRDoc summarizes
- Here we take a deeper look
page 12:
- this sentence doesn't make sense, or is incomplete: "The number of relevant vocabularies increases..."


Metareview by Hala Skaf

Despite the interest of the approach, the reviewers agree that some technical details are not well addressed in the paper (e.g., is it really parameter-free, as the author claims?) and that some experimental results are not convincing. The author did not provide a rebuttal.

