
Feature-enriched author ranking in incomplete networks

Abstract

Evaluating scientists based on their scientific production is a controversial topic. Nevertheless, bibliometrics and algorithmic approaches can assist traditional peer review in numerous tasks, such as attributing research grants, deciding scientific committees, or choosing faculty promotions. Traditional bibliometrics rank individual entities (e.g., researchers, journals, faculties) without looking at the whole data (i.e., the whole network). Network algorithms, such as PageRank, have been used to measure node importance in a network and have been applied to author ranking. However, traditional PageRank only uses network topology and ignores relevant features of scientific collaborations. Multiple extensions of PageRank, more suited to author ranking, have been proposed. These methods enrich the network with information about the author’s productivity or the venue and year of the publication/citation. Most state-of-the-art (STOA) feature-enriched methods either ignore this information or do not combine it effectively. Furthermore, STOA algorithms typically disregard that the full network is not known for most real-world cases. Here we describe OTARIOS, an author ranking method recently developed by us, which combines multiple publication/citation criteria (i.e., features) to evaluate authors. OTARIOS divides the original network into two subnetworks, insiders and outsiders, which is an adequate representation of citation networks with missing information. We evaluate OTARIOS on a set of five real networks, each with publications in distinct areas of Computer Science, and compare it against STOA methods. When matching OTARIOS’ produced ranking against a ground-truth ranking (comprised of best paper award nominations), we observe that OTARIOS is >30% more accurate than traditional PageRank (i.e., a topology-based method) and >20% more accurate than STOA (i.e., competing feature-enriched methods). We obtain the best results when OTARIOS considers (i) the author’s publication volume and publication recency, (ii) how recently the author’s work is being cited by outsiders, and (iii) how recently the author’s work is being cited by insiders and how individual he is. Our results showcase (a) the importance of efficiently combining relevant features and (b) how to adequately perform author ranking in incomplete networks.

Introduction

The scientific impact of a researcher measures how much a person has contributed to a scientific field. Due to the nature of scientific development, more impactful researchers tend to have access to more funding, which supports the creation of more quality work. As a result, estimating the scientific impact of researchers has a direct impact on science. For the most important decisions, such as allocating scientific committees, attributing research grants, or choosing faculty promotions, the process is mostly done via peer review. More recently, bibliometrics (i.e., measures to determine scientific impact without human intervention) have been proposed to assist the peer review process (Vieira et al. 2014). Bibliometrics typically take into account the author’s productivity (i.e., statistics of the author’s papers) and the author’s impact (i.e., statistics of the author’s citations) (BV 2018). The h-index (Hirsch 2005) is one of the most widely used bibliometrics; it measures the impact of an author as the number of citations of his most cited papers (e.g., an author has h-index = 3 if he has 3 papers with at least 3 citations each).
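To make the definition concrete, the following minimal sketch (ours, not code from the cited works) computes the h-index from a list of per-paper citation counts:

```python
def h_index(citations):
    """h-index: the largest h such that the author has h papers
    with at least h citations each (Hirsch 2005)."""
    citations = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(citations, start=1):
        if cites >= rank:
            h = rank  # the rank-th most cited paper has >= rank citations
        else:
            break
    return h

# Example from the text: 3 papers with at least 3 citations each -> h-index = 3
print(h_index([10, 5, 3, 1]))  # 3
```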

Traditional bibliometrics have, however, the drawback of assigning impact to direct citations only, thus ignoring indirect citations. For example, if A cites B, and B cites C, traditional bibliometrics give no additional credit to C from A’s indirect citation. To address this limitation, graph algorithms have been developed for citation networks (Ding 2009; Radicchi et al. 2009; Dunaiski and Visser 2012; West et al. 2013; Sidiropoulos and Manolopoulos 2006). These algorithms are modifications of PageRank (Page et al. 1998) applied to citation networks. One of PageRank’s major algorithmic ideas is that not all nodes are equal, i.e., in the context of hyperlinks, it is good to be referenced by any webpage but it is better to be referenced by important webpages. This idea extends naturally to citation networks, i.e., it is good to be cited by any author but it is better to be cited by important authors. State-of-the-art (STOA) author ranking algorithms (discussed in “State-of-the-art author ranking methods” section) adapt PageRank and introduce modifications to favour different types of authors (e.g., authors cited in important venues, or authors cited more recently). Since these methods introduce features outside the topology of the network, we call them feature-enriched methods, in contrast with traditional PageRank, which is a topology-based method.

We found that STOA methods were lacking in two aspects. First, they do not adequately combine publication features (e.g., the author’s productivity, the prestige of the venues where he usually publishes, and how recent his papers are) with citation features (e.g., the prestige of the venues he is being cited from, and how recent his citations are). Second, these methods assume that the full network is known, and thus do not distinguish between fully explored nodes and partially explored nodes. Therefore, these methods are not capable of adequately dealing with incomplete networks (Kim and Leskovec 2011).

We recently proposed a new feature enriched author ranking algorithm for incomplete networks named OTARIOS (OpTimizing Author Ranking with Insiders/Outsiders Subnetworks) (Silva et al. 2018), and showed that OTARIOS outperformed traditional PageRank (a topology-based method) and simple bibliometrics. Here we compare OTARIOS against other feature-enriched author ranking methods.

OTARIOS efficiently combines different publication/citation features in a multi-edge weighted network (instead of the simple unweighted network used by STOA methods). OTARIOS is also a flexible algorithm, in the sense that publication/citation features can be personalised to fit what the user wants to rank researchers by (e.g., taking into consideration venue prestige and citation age). OTARIOS handles incomplete networks by dividing the citation network into two subnetworks, insiders and outsiders. Only insiders are ranked (since we have their full information), while outsiders contribute to the ranks of insiders without being ranked themselves. Our results on five networks belonging to different areas of Computer Science show that OTARIOS is >20% more accurate than STOA methods. OTARIOS obtains the best results when considering (i) the author’s publication volume and recency, (ii) how recently his work is being cited by outsiders, and (iii) how recently his work is being cited by insiders and how individual he is (i.e., publishing papers with few co-authors).

The paper is organised as follows. “Preliminaries” section describes terminology that is used throughout the work, as well as an overview of STOA methods. “Methodology” section describes OTARIOS and our methodology. “Results” section presents the performance of OTARIOS against STOA methods on a set of five networks. Finally, “Conclusions” section presents our main conclusions and gives some directions for future work.

Preliminaries

Terminology

Recency of a paper

$$ \delta(P_{j}) = \Big(\max\limits_{P_{j'} \in \mathcal{P}} y(P_{j'})\Big) - y(P_{j}) $$
(1)

Recency of an author

$$ \delta(A_{i}) = \min\limits_{P_{j} \in \mathcal{P}_{A_{i}}} \delta(P_{j}) $$
(2)

Venue prestige

$$ \lambda(V_{k}, y) = \frac{c(V_{k}, y)}{\sum_{x=1}^{3} p(V_{k}, y-x)} $$
(3)

Cited individuality

$$ w(A_{i'} \rightarrow A_{i}, P_{j}) = \frac{1}{|\mathcal{A}_{P_{j}}|}, A_{i} \in \mathcal{A}_{P_{j}} $$
(4)

Citation recency

$$ a(A_{i'} \rightarrow A_{i}, P_{j}) = e^{\frac{-\delta(P_{j})}{\tau}}, A_{i'} \in \mathcal{A}_{P_{j}} $$
(5)

Citation prestige

$$ v(A_{i'} \rightarrow A_{i}, P_{j}) = v(P_{j}), A_{i'} \in \mathcal{A}_{P_{j}} $$
(6)

For consistency, we denote sets by calligraphic letters (e.g., \(\mathcal{S}\)), elements of those sets (i.e., entities) by capital letters with an index (e.g., \(S_{i} \in \mathcal{S}\)), features of entities (e.g., year, impact factor) as functions named in lower-case alphabetic or Greek letters (e.g., \(a(S_{i})\) or \(\alpha(S_{i})\)), and constants as sole Greek letters (e.g., τ). The cardinality of a given set \(\mathcal{S}\) is denoted by \(|\mathcal{S}|\). We address the following problem.

Problem 1

Given a set of papers \(\mathcal {P}\) published in a set of venues \(\mathcal {V}\) by a set of authors \(\mathcal {A}\), who are the top-n ranked authors?

A paper \(P_{j} \in \mathcal{P}\) is co-authored by authors \(\mathcal{A}_{P_{j}} \subseteq \mathcal{A}\). Likewise, an author \(A_{i} \in \mathcal{A}\) is (one of) the author(s) of papers \(\mathcal{P}_{A_{i}} \subseteq \mathcal{P}\). In paper-level networks, graph \(G = \{\mathcal{N}, \mathcal{E}\}\) comprises a set \(\mathcal{N}\) of nodes that represent papers and a set \(\mathcal{E}\) of edges that represent paper citations, written as \(P_{j'} \rightarrow P_{j}\). In author-level networks, nodes represent authors and edges represent citations between authors, written as \(A_{i'} \rightarrow A_{i}\).

Regarding node features, papers have publication metadata which we use as features, namely the year, venue prestige, and number of references, represented by \(y(P_{j})\), \(v(P_{j})\) and \(r_{out}(P_{j})\), respectively. The recency of a paper (Eq. 1), represented by \(\delta(P_{j})\), is the difference in years between the year of the paper and the most recent paper in the dataset (e.g., a paper from 2015 in a dataset where the most recent paper is from 2017 has a recency of 2). Similarly, the recency of an author (Eq. 2), represented by \(\delta(A_{i})\), is simply the recency of his most recent paper (i.e., the number of years that have passed since his last publication). The venue prestige of a paper \(P_{j}\) (Eq. 3) depends on the venue \(V_{k} \in \mathcal{V}\) where it was published and the year when it was published, represented by \(v(P_{j})=\lambda(V_{k},y(P_{j}))\). We estimate venue prestige with CiteScore, a widely used metric (BV 2018) (Eq. 3), where \(p(V_{k},y)\) is the number of papers published in \(V_{k}\) in year y and \(c(V_{k},y)\) is the number of citations that all papers published in \(V_{k}\) in year y received. Thus, we are assuming that venues with many citations per paper have higher prestige.

Regarding edges, in paper-level networks edges are traditionally unweighted and simple, i.e., two papers are connected by a single edge with weight equal to 1 (Hwang et al. 2010; Dunaiski and Visser 2012). In author-level networks, edges are weighted and multiple, i.e., two authors are connected by multiple edges with different weights. These multiple edges concern different edge features that depend on the publication \(P_{j}\) where author \(A_{i'}\) cites author \(A_{i}\). The recency of an edge (Eq. 5), represented by \(a(A_{i'} \rightarrow A_{i}, P_{j})\), gives more importance to recent citations. As discussed in the YetRank paper, which originally proposed this concept for author ranking algorithms (Dunaiski and Visser 2012), we set the decay factor τ=4. This value highly favours citations received in the last 4 years (e.g., a citation with recency 0 has a maximum link weight of 1.0, while citations with recency 4 and 8 have links with weights 0.37 and 0.13, respectively). The venue prestige of an edge (Eq. 6), represented by \(v(A_{i'} \rightarrow A_{i}, P_{j})\), gives more importance to citations in important venues (e.g., citations coming from the most prestigious venue in the dataset have a maximum link weight of 1.0, while citations coming from the least prestigious venues have a value close to 0.0). Finally, the individuality of an edge (Eq. 4), represented by \(w(A_{i'} \rightarrow A_{i}, P_{j})\), gives more importance to citations received in papers where author \(A_{i}\) has few (or no) co-authors (e.g., if an author has a publication P1 with 2 authors and a publication P2 with 4 authors, the importance of a citation coming to P1 is double that of one coming to P2 for that author). Thus, \(w(A_{i'} \rightarrow A_{i}, P_{j})\), unlike \(a(A_{i'} \rightarrow A_{i}, P_{j})\) and \(v(A_{i'} \rightarrow A_{i}, P_{j})\), depends on the cited author \(A_{i}\) and not on the citing author \(A_{i'}\). An author’s total out-edge weight for a feature is obtained by summing over all of his out-edges (e.g., for citation recency, \(a_{out}(A_{i}) = \sum_{(A_{i} \rightarrow A_{i'}, P_{j})} a(A_{i}\rightarrow A_{i'}, P_{j})\); \(w_{out}\) and \(v_{out}\) are obtained in the same way).
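As an illustration, the sketch below (ours, not code from the paper; the argument names are our assumptions) computes the three edge features of Eqs. 4–6 for a single citation \(A_{i'} \rightarrow A_{i}\):

```python
import math

TAU = 4  # decay factor, following YetRank (Dunaiski and Visser 2012)

def individuality(num_authors_of_cited_paper):
    # Eq. 4: citations to papers with fewer co-authors weigh more
    return 1.0 / num_authors_of_cited_paper

def citation_recency(recency_of_citing_paper, tau=TAU):
    # Eq. 5: exponential decay in the recency (in years) of the citing paper
    return math.exp(-recency_of_citing_paper / tau)

def citation_prestige(venue_prestige_of_citing_paper):
    # Eq. 6: the edge inherits the citing paper's venue prestige v(P_j)
    return venue_prestige_of_citing_paper

# Reproducing the recency examples from the text:
print(citation_recency(0))            # 1.0
print(round(citation_recency(4), 2))  # 0.37
print(round(citation_recency(8), 3))  # 0.135
```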

State-of-the-art author ranking methods

Measuring the scientific impact of institutions, journals, or authors is an important task in the peer review process. Here we focus on measuring the scientific impact of authors, i.e., author ranking. Paper citations are often used by traditional bibliometrics; for instance, the widely adopted h-index (Hirsch 2005) measures an author’s impact by the number of citations of his most cited papers. However, the h-index and similar bibliometrics fail to capture the nature of scientific development, since they disregard the fact that a new discovery is not solely due to the previous work it directly references. Graph-based metrics, on the other hand, spread the credit to previous works that paved the way (Wang et al. 2016).

There are two groups of author ranking methods: paper-level and author-level (Wang et al. 2016) (Fig. 1). On one hand, paper-level ranking uses the papers’ citation network to diffuse scientific credit to cited papers, and the authors’ credit is then derived from the credit of their papers (Hwang et al. 2010; Dunaiski and Visser 2012). On the other hand, author-level ranking uses the authors’ citation network to diffuse scientific credit to cited authors, thus the authors’ credit is obtained directly (Radicchi et al. 2009; Ding 2009; West et al. 2013). Numerous graph centrality measures exist and can be used to measure node importance (Valente et al. 2008). PageRank (Page et al. 1998) is one of the most widely used measures, and its rationale is intuitive in author ranking, i.e., authors cited by important authors are (themselves) more important than authors cited by less important authors.

Fig. 1 Comparison of paper-level and author-level networks

PageRank consists of two main steps: score initialisation and score diffusion. The score initialisation step creates a vector R that defines an initial score for every node using a priori information. In the simplest case, every node (i.e., paper or author) is considered equally important, thus a uniform distribution is used (i.e., \(R[A_{i}]=\frac{1}{|\mathcal{A}|}\)) (Page et al. 1998; Ding 2009; Sidiropoulos and Manolopoulos 2006). Approaches based on paper citation networks typically assign higher initial scores to more recent papers (Dunaiski and Visser 2012) or favour a combination of recent papers and papers published in venues with a higher impact factor (Hwang et al. 2010). Approaches based on author citation networks typically assign higher initial scores to authors that publish many papers (West et al. 2013) or favour authors that publish many papers with few co-authors (Radicchi et al. 2009).

The score diffusion step updates the node scores taking into consideration the network dynamics. Score diffusion is an iterative process which computes three addends: random restart, dangling nodes, and score term. Random restart (RR) evaluates how likely it is to reach a certain node by moving randomly in the network. PageRank defines a value q as the random restart probability, and q is multiplied by the node’s initial score R (thus, nodes with higher initialisation receive a higher random restart score). Dangling nodes (DN) is a process where the score of nodes that do not have any out-links is split among all other nodes. This is performed to avoid having nodes that do not disseminate their credit. Like random restart, this division takes into consideration the initialisation vector R (thus, nodes initialised with higher values receive a higher dangling nodes score). Score term (ST) updates the score of a node \(A_{i}\) according to the score of its in-links (i.e., nodes citing \(A_{i}\)). In the simplest case, scores are evenly split among the co-authors of the cited publication, e.g., if the paper has two authors, the score is divided between the two; if it has three authors, the score is divided among the three, so each of them receives less credit than in the two-author case (Radicchi et al. 2009; Ding 2009; West et al. 2013). SCEAS (Sidiropoulos and Manolopoulos 2006) adds a constant value b to every score received by a node and divides the total score received by another constant a, in order to make the algorithm converge faster. YetRank (Hwang et al. 2010) and NewRank (Dunaiski and Visser 2012) take the vector R into consideration in the score distribution (i.e., if a paper cites a paper \(P_{j}\) from 2015 and another paper \(P_{j'}\) from 2010, \(P_{j}\) receives a bigger chunk of the score). In the case of YetRank, the distribution of score also takes into consideration the impact factor of the venues where \(P_{j}\) and \(P_{j'}\) were published, favouring papers published in venues with higher prestige. Table 1 summarises STOA methods and their differences. A sketch of one diffusion iteration is given after the table.

Table 1 Comparison of state of the art methods
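To make the three addends concrete, here is a minimal, self-contained sketch (ours, not taken from any of the cited methods) of one diffusion iteration combining random restart, dangling-node redistribution, and the score term over a weighted network:

```python
def diffusion_step(scores, out_edges, R, q=0.15):
    """One PageRank-style diffusion iteration (a sketch).
    scores: dict node -> current score
    out_edges: dict node -> list of (target, weight); dangling nodes map to []
    R: dict node -> initial score (a probability vector summing to 1)
    q: random-restart probability
    Every node in `scores` must have an entry in `out_edges` and `R`."""
    new = {n: 0.0 for n in scores}
    dangling_mass = 0.0
    for n, edges in out_edges.items():
        if not edges:  # DN: dangling node, its score is redistributed via R
            dangling_mass += scores[n]
            continue
        total_w = sum(w for _, w in edges)
        for target, w in edges:
            # ST: split n's score among cited nodes, proportionally to edge weight
            new[target] += (1 - q) * scores[n] * w / total_w
    for n in new:
        # RR + DN: both weighted by the initialisation vector R
        new[n] += q * R[n] + (1 - q) * dangling_mass * R[n]
    return new

# Tiny usage example with a dangling node "c":
R = scores = {"a": 1 / 3, "b": 1 / 3, "c": 1 / 3}
out_edges = {"a": [("b", 1.0)], "b": [("c", 2.0), ("a", 1.0)], "c": []}
print(diffusion_step(scores, out_edges, R))
```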

These approaches have two drawbacks. One is that none combines all possible features of the publications and citations. The other is that they assume that the complete citation network is known. However, in real-world cases, it is not possible to obtain a complete network. Let us assume that we want to rank a set of authors \(\mathcal{A}\). First, we need to expand the network by obtaining all authors \(B_{i} \in \mathcal{B}\) that cite any \(A_{i} \in \mathcal{A}\) such that \(B_{i} \not\in \mathcal{A}\). Then, we also need to extract all authors \(C_{i} \in \mathcal{C}\) that cite any \(B_{i} \in \mathcal{B}\) such that \(C_{i} \not\in \mathcal{A} \cup \mathcal{B}\), in order to correctly determine the scores of all \(A_{i} \in \mathcal{A}\), i.e., \(C_{i}\) does not cite \(A_{i}\) directly, but \(C_{i}\) cites some \(B_{i}\) which cites \(A_{i}\), thus \(C_{i}\) cites \(A_{i}\) indirectly. Ideally, this should be performed recursively until the complete set of authors (and their citations) with seed \(\mathcal{A}\) is obtained. Due to memory and time constraints, only a sample of the citation network can be obtained. As a result, current STOA author ranking algorithms estimate scientific rankings based on incorrect information, i.e., authors in the periphery are not adequately taken into account since their citations are not in the network. Although there is no ideal solution for this problem, one can be more careful when estimating the rank of nodes in the periphery.

Table 2 summarises current STOA methods and contrasts them with OTARIOS, our proposed method (Silva et al. 2018). OTARIOS is the only method that efficiently combines multiple features (of the publications and citations) and deals with incomplete networks.

Table 2 Comparison of state of the art methods with OTARIOS

Methodology

Problem description

We formalise the problem of author ranking as the task of receiving a set of authors \(\mathcal{I}\) and ranking them according to their scientific impact based on a set of user-defined criteria. First, we obtain all citations between authors \(I_{i}, I_{i'} \in \mathcal{I}\) (i.e., a complete citation network for \(\mathcal{I}\)). Second, for each author \(I_{i}\), we obtain all of his received citations coming from authors \(O_{i} \not\in \mathcal{I}\). The process stops here, i.e., we do not obtain the received citations of authors \(O_{i} \in \mathcal{O}\). Doing so iteratively is infeasible in practice because the number of authors added at each step grows very rapidly. Thus, we divide the citation network into two groups of nodes: insiders (\(\mathcal{I}\)) and outsiders (\(\mathcal{O}\)), i.e., \(\mathcal{A} = \{\mathcal{I},\mathcal{O}\}\) (Fig. 2). Note that no outsider can also be an insider, and vice-versa. Edges connect insiders (\(\mathcal{E}_{\mathcal{I}}\)) or outsiders to insiders (\(\mathcal{E}_{\mathcal{O}}\)), but no edges exist from insiders to outsiders nor between outsiders, i.e., \(\mathcal{E} = \{\mathcal{E}_{\mathcal{I}}, \mathcal{E}_{\mathcal{O}}\}\).

Fig. 2 Example of insiders and outsiders subnetworks. Insiders are nodes/authors coloured in black and outsiders are coloured in blue. Note that no links exist between outsiders (dashed red lines). Furthermore, no information exists about outsiders that do not cite any insiders (coloured in red)

The outsiders are authors that were not in the initial set \(\mathcal{I}\), and thus they are not ranked. Instead, they are used to mitigate the problem of incomplete networks and improve the insiders’ ranks. Before calculating the ranks of the insiders, we estimate each outsider’s prestige (λ). We use the outsider’s history of publications and give higher prestige to authors with more citations (\(c(A_{i})\)) in fewer publications (\(p(A_{i})\)) (Eq. 7). The outsiders’ prestige is then propagated along the links between outsiders and insiders to improve the initial rankings of the insiders.

Outsider prestige

$$ \lambda(A_{i}) = \frac{c(A_{i})}{p(A_{i})} $$
(7)
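A minimal sketch (our illustration; the data layout is an assumption) of the subnetwork split and of Eq. 7:

```python
def split_network(edges, insiders):
    """Partitions citation edges as in Fig. 2.
    edges: iterable of (citing_author, cited_author) pairs;
    insiders: set of insider authors.
    Keeps insider -> insider edges (E_I) and outsider -> insider edges (E_O);
    citations towards outsiders are discarded, since outsiders are not ranked."""
    e_i, e_o = [], []
    for citing, cited in edges:
        if cited not in insiders:
            continue  # no edges towards outsiders are kept
        (e_i if citing in insiders else e_o).append((citing, cited))
    return e_i, e_o

def outsider_prestige(num_citations, num_publications):
    # Eq. 7: more citations in fewer publications -> higher prestige
    return num_citations / num_publications
```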

OTARIOS

OTARIOS is a graph-based algorithm for author-level citation networks. Its aim is to rank authors based on their publication and citation history. OTARIOS uses the notion of insider/outsider subnetworks to adequately estimate authors’ scores in a network with limited information. Furthermore, OTARIOS is a flexible algorithm that lets the user analyse which set of publication/citation attributes leads to better rankings.

As a first step, OTARIOS computes an initial score for each author, represented by R(Ai). OTARIOS calculates R(Ai) by taking into account multiple features that favour different author characteristics (Table 3). We divide the features into two categories: productivity and outsiders influence. Productivity measures the value of the author’s publications, while outsiders influence measures the value of the author’s citations coming from outsiders. Regarding productivity, OTARIOS takes three factors into account: volume, recency and venues. Regarding outsiders influence, OTARIOS takes another three factors into account: individuality, recency and venues. We compute the author’s initial score R(Ai) as the sum of the two products of the factors in each group, i.e., productivity (volume × recency × venues) + outsiders influence (individuality × recency × venues); see the sketch after Table 3.

Table 3 List of features used for OTARIOS’ author rank initialisation: R(Ai)
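Under our reading of Table 3, the initialisation could be sketched as follows (the dict-based layout and feature values are our assumptions; only the selected features of a variant enter each product):

```python
from math import prod

def initial_score(productivity, outsiders_influence):
    """R(A_i) = productivity (volume x recency x venues)
              + outsiders influence (individuality x recency x venues).
    Each argument maps the *selected* feature names of a variant to their
    values for this author (e.g., {'recency': 0.6, 'volume': 12.0});
    an unused group (empty dict) contributes 0 to the sum."""
    p = prod(productivity.values()) if productivity else 0.0
    o = prod(outsiders_influence.values()) if outsiders_influence else 0.0
    return p + o
```

In a PageRank-style setting, the resulting scores would then typically be normalised so that R forms a probability vector before the diffusion step.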

In the second step, OTARIOS improves author scores in an iterative process. Outsiders are removed from the network, since their presence degrades the score diffusion step. In each iteration, OTARIOS updates an author’s score S(Ai) as ST(Ai)+RR(Ai)+DN(Ai). We compute RR(Ai) and DN(Ai) as a function of the initial rank of each author (discussed in Table 3), and compute ST(Ai) as a function of the author’s citations coming from other insiders. OTARIOS considers three different features to assess the score term ST(Ai): individuality, recency and venues (Table 4); a sketch of the corresponding edge weights follows Table 4. The ST(Ai) at each iteration is the product of the selected features (i.e., score term (individuality × recency × venues)). Like PageRank, OTARIOS stops when it reaches low variation in the node scores. Figure 3 illustrates the three feature categories used in the OTARIOS algorithm.

Fig. 3 Illustration of the three different feature categories used in OTARIOS to rank authors

Table 4 List of features used for OTARIOS’ author score term calculation: ST(Ai)
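A sketch of how the score-term edge weights could be built (ours; the per-citation feature values are assumed to be precomputed as in Eqs. 4–6):

```python
from math import prod

def feature_edge_weights(citation_edges):
    """Builds score-term weights for the insiders' multi-edge network.
    citation_edges: iterable of (citing, cited, feats), where feats maps the
    selected Table 4 features to their values for that particular citation,
    e.g., {'recency': 0.37, 'individuality': 0.5}.
    The weight of each citing -> cited link is the product of the selected
    features, summed over all papers in which the citation occurs."""
    weights = {}
    for citing, cited, feats in citation_edges:
        w = prod(feats.values())
        weights[(citing, cited)] = weights.get((citing, cited), 0.0) + w
    return weights
```

These weights plug directly into the diffusion sketch given in the “State-of-the-art author ranking methods” section, with RR(Ai) and DN(Ai) still driven by the initialisation vector R.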

Here we do not assume that every feature should be used for author ranking. The features’ importance depends greatly on the dataset. For instance, venue prestige might be very important for ranking some communities (i.e., top authors publish in the top conferences of that scientific area, e.g., machine learning) but irrelevant in another community where we are studying a specific conference (i.e., all authors publish in the same venue, e.g., KDD). OTARIOS is parameterisable, i.e., users can define which features authors are ranked by. For example, for a certain application, we may want to rank authors taking into account recent publications and the venue prestige of citations coming from both insiders and outsiders (Note 1).

Results

In this section we compare OTARIOS against STOA methods. We create a test scenario using a snapshot from December 2017 of the DBLP dataset (a bibliographic database for computer science). This dataset contains over 3 million publications, and for each one we have: title, authors, abstract, venue, year, number of citations and references. Using the publications’ references, we obtain the author-citation network of 26 top-tier computer science venues. In order to prevent the impact of citation manipulation on the rankings, we do not consider self-citations in the networks (Ioannidis 2015). Furthermore, for each conference we create a ground truth ranking using the best paper award information (Note 3). We counted each paper award as a unit of prestige which is equally divided by its authors. Thus, we are assuming that authors that have won more awards with fewer co-authors should be ranked higher. We use the ACM taxonomy (Note 2) to group the conferences into five networks (see Table 5), each representing a different computer science area.

Table 5 Set of networks used for experimental evaluation

In our experiments, we evaluate a predicted rank (i.e., one produced by OTARIOS or any STOA method) against a ground truth rank (i.e., the ranking based on the conferences’ best paper awards). Methods that produce rankings more similar to the ground truth (obtained by human judgement) are considered better. In order to compare two ranks, we use two ranking quality measures: Normalized Discounted Cumulative Gain (NDCG) (Järvelin and Kekäläinen 2002) and Mean Reciprocal Rank (MRR) (Voorhees et al. 1999). NDCG is a Discounted Cumulative Gain (DCG) based metric which divides the DCG of the predicted rank by the DCG of the ground truth (i.e., the perfect ranking). The idea of DCG is that highly relevant authors (from the ground truth) appearing lower in the predicted rank should be penalised using a logarithmic function (base 2) proportional to the position of the result. For example, placing an author with 0 relevance at the 2nd position carries a higher penalty than placing him at the 10th position. The NDCG value ranges from 0 to 1, where 1 represents the perfect ranking (i.e., the predicted rank is the same as the ground truth one). MRR is the mean ground truth ranking position of the set of top authors of a predicted rank. Thus, lower MRRs indicate better rankings, since the top authors of the ground truth are, on average, ranked more highly by the method. For example, a method that places the 1st, 5th and 6th highest ranked authors from the ground truth in its top-3 authors is better than one that places the 2nd, 6th and 7th highest ranked authors from the ground truth in its top-3 authors. Usually, both measures are calculated considering only the top-n authors. For a detailed analysis, we calculate NDCG and MRR for the top-5, 10, 20, 50 and 100 authors.
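The two measures can be sketched as follows (our implementation of the definitions above; note that the paper’s MRR is the mean ground-truth position of a method’s top-n authors, not the standard reciprocal-rank formulation):

```python
import math

def ndcg(predicted, relevance, n):
    """NDCG@n: DCG of the predicted order divided by the DCG of the ideal order.
    predicted: authors in predicted order; relevance: dict author ->
    ground-truth relevance (e.g., award share)."""
    def dcg(order):
        return sum(relevance.get(a, 0.0) / math.log2(i + 1)
                   for i, a in enumerate(order[:n], start=1))
    ideal = sorted(relevance, key=relevance.get, reverse=True)
    return dcg(predicted) / dcg(ideal)

def mean_rank(predicted, truth_position, n):
    """The paper's MRR: mean ground-truth position of the method's top-n
    authors (lower is better). truth_position: dict author -> 1-based rank
    in the ground truth; assumed to cover every predicted top author."""
    top = predicted[:n]
    return sum(truth_position[a] for a in top) / len(top)
```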

Finding the best OTARIOS variants

OTARIOS does not define a strict set of rules to rank authors, since the criteria (i.e., features) to use depend on many factors (e.g., the scientific area, the preferences of the entity ranking the authors). Instead, OTARIOS gives the freedom to personalise the features used to rank authors. In the particular case of our test scenario, we did not know a priori which features would be the most important, so we performed an exploratory analysis to find the best OTARIOS variants. However, there are more than 500 variants that can be created by combining different features. In order to estimate the best variant without testing such a large number of variants, we performed a greedy search for each network.

We start with simple variants (with a single feature) and progressively add more features to the more promising variants. We illustrate this process for the network NET in Table 6, with a sketch of the procedure given after the table. We began by comparing OTARIOS variants that only contain outsiders influence (e.g., +A+). To the best ones, we added productivity (e.g., AP+A+). In general, we see that results improve when merging outsiders influence with productivity. Finally, we added the score term calculation to the best variants (e.g., AP+A+A). Again, we see that the results generally improve when we add this feature to the score term. For the NET network, AP+A+AW is the best variant, with a mean NDCG of 0.330 and a mean MRR of 606. This variant uses recency and volume to measure author productivity, recency to calculate outsiders influence, and recency and individuality in the score diffusion step.

Table 6 Comparison of OTARIOS variants on network NET (from Table 5)
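The greedy procedure can be sketched as follows (our reconstruction; `evaluate` is a hypothetical callback returning a variant’s mean NDCG, and `groups` lists the candidate feature combinations for each stage):

```python
def greedy_search(evaluate, groups, keep=3):
    """Greedy variant search (a sketch). `groups` maps each stage to its
    candidate feature strings, e.g., groups["outsiders"] = ["A", "V", "AV"].
    Variant strings follow the paper's notation, e.g., 'AP+A+AW'."""
    # Stage 1: outsiders-influence-only variants, e.g. '+A+'
    candidates = ["+" + g + "+" for g in groups["outsiders"]]
    best = sorted(candidates, key=evaluate, reverse=True)[:keep]
    # Stage 2: prepend productivity to the most promising variants, e.g. 'AP+A+'
    candidates = [p + v for v in best for p in groups["productivity"]]
    best = sorted(candidates, key=evaluate, reverse=True)[:keep]
    # Stage 3: append score-term features, e.g. 'AP+A+AW'
    candidates = [v + s for v in best for s in groups["score_term"]]
    return max(candidates, key=evaluate)
```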

Figure 4 presents the features used by the 20 best OTARIOS variants according to the average NDCG across the five networks. We observe that the top-9 variants always use a mix of productivity, outsiders influence and score term, revealing the importance of considering multiple aspects of publication and citation information. Of the top 20 variants, only 6 do not use productivity features and only 1 does not use outsiders influence; score term features are present in all top-20 variants. Regarding specific features, recency (A) seems to be the most important feature in all three categories: productivity, outsiders influence and score term. In fact, recency is used in the score term of all top-20 variants. This indicates that most of the awarded authors are still actively publishing and/or being cited. Individuality (W) and volume (P) seem to be more important for measuring productivity and the score term than for measuring outsiders influence. This indicates that awarded authors publish more papers and publish with fewer co-authors. Venue prestige (V) seems to be more relevant when measuring outsiders influence than productivity or the insiders’ score term. This is expected because, due to the nature of the two subnetworks, insiders tend to publish in the same venues, while outsiders cite insiders from any venue, thus the venue prestige of outsiders’ citations varies greatly.

Fig. 4 Features considered in the top 20 OTARIOS variants according to the NDCG metric. Rows represent the different features and columns represent the variants ranked at position n. Blue in a column indicates that the feature is used by the variant; red indicates its absence

Comparison against STOA

We compare OTARIOS against the STOA methods discussed in the “State-of-the-art author ranking methods” section and a baseline method named CountRank (CR), which counts the citations received by each author. We create three CR variants: uniform, individuality, and position. For each citation received, uniform assigns the same merit to all of the authors of the publication (i.e., merit = 1), individuality divides the merit equally among the authors (i.e., merit = \(\frac{1}{|\mathcal{A}_{P_{j}}|}\)), and position gives more credit to authors whose names appear first in the publication (first author: merit = 1, second author: merit = \(\frac{1}{2}\), third author: merit = \(\frac{1}{3}\), ...). Table 7 shows the results obtained for all STOA methods and 5 OTARIOS variants over all networks. For each network, we calculate NDCG and MRR for the top-5, 10, 20, 50 and 100 authors, and compute their mean values. Furthermore, we compute the mean NDCG and MRR across all networks.
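The three baselines admit a compact sketch (ours; the input layout is an assumption):

```python
def countrank(papers, variant="uniform"):
    """CountRank baselines (a sketch). `papers` is an iterable of
    (author_list, num_citations) pairs, with author_list in byline order.
    Returns authors sorted from highest to lowest accumulated merit."""
    scores = {}
    for authors, citations in papers:
        for pos, a in enumerate(authors, start=1):
            if variant == "uniform":
                merit = 1.0                 # same merit for every author
            elif variant == "individuality":
                merit = 1.0 / len(authors)  # split equally among co-authors
            else:  # "position"
                merit = 1.0 / pos           # first author gets the most credit
            scores[a] = scores.get(a, 0.0) + merit * citations
    return sorted(scores, key=scores.get, reverse=True)
```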

Table 7 Comparison of state of the art (STOA) methods against OTARIOS over all networks

In our experiments, SCEAS is the best STOA method, obtaining the highest mean NDCG (0.208) and the lowest mean MRR (691). The \(CR_{position}\) method obtained the lowest mean NDCG (0.154), while NewRank obtained the highest (i.e., worst) mean MRR by a considerable margin (4091). An important aspect to highlight is that \(CR_{individuality}\), despite being a baseline strategy, obtained the second best NDCG and MRR across the five networks among the STOA methods.

With respect to OTARIOS variants, we tested 53 variants, and 21 of them obtained a better mean MRR and mean NDCG than the best STOA method, SCEAS. The best mean NDCG and mean MRR obtained by OTARIOS variants were 0.246 and 567, respectively. Assuming that the NDCG and MRR measures have the same weight (i.e., are equally important), the best OTARIOS variant is AP+A+AW, which uses (a) recency and volume to measure productivity, (b) recency to measure outsiders influence, and (c) recency and individuality to measure the score term. This variant obtained a mean NDCG of 0.245 and a mean MRR of 570. We computed the gain of this variant with respect to STOA methods using Eqs. 8 and 9. Compared to RLPR, a topology-based author ranking algorithm, we achieved a gain of 28% in terms of NDCG and 27% in terms of MRR. With respect to the best feature-enriched author ranking method (SCEAS), we achieved a gain of 18% in terms of NDCG and 21% in terms of MRR.

$$ G_{NDCG} = \frac{\text{OTARIOS}_{\langle NDCG\rangle} - \text{STOA}_{\langle NDCG\rangle}}{\min(\text{OTARIOS}_{\langle NDCG\rangle}, \text{STOA}_{\langle NDCG\rangle})} $$
(8)
$$ G_{MRR} = \frac{\text{STOA}_{\langle MRR\rangle} - \text{OTARIOS}_{\langle MRR\rangle}}{\min(\text{OTARIOS}_{\langle MRR\rangle}, \text{STOA}_{\langle MRR\rangle})} $$
(9)
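Plugging the reported means of the best variant (mean NDCG 0.245, mean MRR 570) and of SCEAS (0.208 and 691) into Eqs. 8 and 9 reproduces the stated gains:

$$ G_{NDCG} = \frac{0.245 - 0.208}{\min(0.245, 0.208)} = \frac{0.037}{0.208} \approx 18\%, \qquad G_{MRR} = \frac{691 - 570}{\min(570, 691)} = \frac{121}{570} \approx 21\% $$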

STOA performance on the insiders and outsiders network

In our previous experiments, STOA methods only used the author citation network of the insider authors (i.e., outsiders were not part of the network). However, for the OTARIOS variants, since we require outsiders to calculate the outsiders influence features, we used a network consisting of insiders and outsiders. In order to demonstrate that we were not unfairly comparing our variants against methods with less information, we tested the STOA algorithms using the complete network (i.e., outsiders + insiders) and compared those results with the ones obtained using only the insiders network. Table 8 shows the results of this comparison (Note 4). The results indicate that, on average, the STOA methods obtained a negative gain of −17% in NDCG and −25% in MRR when using the full network. The NewRank and SCEAS methods presented the highest losses (−54% and −30% in NDCG, and −63% and −37% in MRR, respectively). These methods were among the top STOA methods when considering only the insiders network; as a result, the complete network had a higher negative impact on them than on methods that obtained worse results when considering only the insiders. The only method that presented an overall positive gain, in terms of NDCG, was YetRank. This test demonstrates that adding more authors to the citation network decreases the overall performance of STOA methods when there is incomplete information about the new authors (i.e., their received citations are unknown) and they are treated the same as authors whose full citation network is known. This further corroborates our hypothesis that incomplete networks should be carefully divided into fully known nodes and partially known nodes.

Table 8 Gain of using outsiders as part of the network in the score diffusion step

Conclusions

In this paper, we described OTARIOS, a new feature-enriched author ranking algorithm, and compared it against (a) bibliometrics, (b) topology-based author ranking algorithms, and (c) feature-enriched author ranking algorithms. Previous author ranking methods did not combine relevant information effectively, such as the author’s productivity and the citations’ relevance. Furthermore, previous methods assume that the full network is known, which is not true for most real cases. We thus divided the network into insiders (i.e., the authors that we want to rank) and outsiders (i.e., the authors that cite insiders but are not ranked). In our experiments, we analysed which publication/citation information is more relevant and how it can be efficiently combined.

We obtained the best results when OTARIOS considers (i) the author’s publication volume and publication recency, (ii) how recently his work is being cited by outsiders, and (iii) how recently his work is being cited by insiders and how individual his work is (i.e., publishing with few co-authors is better). This evaluation was performed on a set of five networks where the ground truth was the number of best paper awards at the conferences belonging to each network. Our tests showed that OTARIOS is >30% more accurate than topology-based author ranking methods, namely PageRank, and >20% more accurate than other feature-enriched author ranking methods. We demonstrated that OTARIOS efficiently uses outsiders (i.e., authors whose received citations are not fully known) in the score initialisation process. Furthermore, we showed that adding outsiders to the score diffusion process decreases the performance of STOA algorithms. These results indicate that current methods perform poorly on networks where some nodes have missing information (which is the case for most real networks), while OTARIOS is able to use nodes with limited information adequately.

Finally, regarding future work, we plan to test OTARIOS on paper-level citation networks and verify whether we can also improve on the STOA in that setting. We also plan to develop a method to automatically identify outsiders (e.g., insiders with low density in the citation network, or insiders with a low co-authorship ratio to other insiders) and analyse whether this strategy improves author ranking. Furthermore, we intend to tackle the impact of citation manipulation on the rankings (Ioannidis 2015) not only by removing self-citations (our current strategy) but also by decreasing the credit given by citations between colleagues and collaborators. For that purpose, we plan to use a multi-layered network that augments the author citation network with the co-authorship network.

Notes

  1. Note that we define variants using the notation APV+AVW+AVW, where the addends define the features used in each group: the first for productivity, the second for outsiders influence, and the last for the score term. For the example in the text, the variant nomenclature is A+V+V.

  2. https://www.acm.org/about-acm/class

  3. Awards information obtained from: https://jeffhuang.com/best_paper_awards.html

  4. Gains estimated using Eqs. 8 and 9

Abbreviations

DCG:

Discounted cumulative gain

DN:

Dangling nodes

MRR:

Mean reciprocal rank

NDCG:

Normalized discounted cumulative gain

OTARIOS:

OpTimizing author ranking with insiders/outsiders subnetworks

RR:

Random restart

ST:

Score term

STOA:

State-of-the-art

References

  • BV, E (2018) Research Metrics Guidebook. Elsevier.

  • Ding, Y (2009) J Am Soc Inf Sci Technol 62(2):236–245. https://doi.org/10.1002/asi.21452.

  • Dunaiski, M, Visser W (2012) Comparing paper ranking algorithms. In: Proceedings of the South African Institute for Computer Scientists and Information Technologists Conference, 21–30. ACM.

  • Hirsch, JE (2005) An index to quantify an individual’s scientific research output. Proc Natl Acad Sci 102(46):16569–16572.

  • Hwang, W-S, Chae S-M, Kim S-W, Woo G (2010) Yet another paper ranking algorithm advocating recent publications. In: Proceedings of the 19th International Conference on World Wide Web, 1117–1118. ACM.

  • Ioannidis, JP (2015) A generalized view of self-citation: Direct, co-author, collaborative, and coercive induced self-citation. J Psychosom Res 78(1):7–11.

  • Järvelin, K, Kekäläinen J (2002) Cumulated gain-based evaluation of IR techniques. ACM Trans Inf Syst (TOIS) 20(4):422–446.

  • Kim, M, Leskovec J (2011) The network completion problem: Inferring missing nodes and edges in networks. In: Proceedings of the 2011 SIAM International Conference on Data Mining, 47–58. SIAM.

  • Page, L, Brin S, Motwani R, Winograd T (1998) The PageRank citation ranking: Bringing order to the web.

  • Radicchi, F, Fortunato S, Markines B, Vespignani A (2009) Diffusion of scientific credits and the ranking of scientists. Phys Rev E 80(5):056103.

  • Sidiropoulos, A, Manolopoulos Y (2006) Generalized comparison of graph-based ranking algorithms for publications and authors. J Syst Softw 79(12):1679–1700.

  • Silva, J, Aparício D, Silva F (2018) OTARIOS: Optimizing author ranking with insiders/outsiders subnetworks. In: International Workshop on Complex Networks and Their Applications, 143–154. Springer.

  • Tang, J, Zhang J, Yao L, Li J, Zhang L, Su Z (2008) ArnetMiner: Extraction and mining of academic social networks. In: KDD’08, 990–998.

  • Tang, J, Zhang J, Yao L, Li J, Zhang L, Su Z (2017) Citation Network Dataset. https://aminer.org/citation. Accessed 14 Sept 2018.

  • Valente, TW, Coronges K, Lakon C, Costenbader E (2008) How correlated are network centrality measures? Connect (Tor Ont) 28(1):16.

  • Vieira, ES, Cabral JA, Gomes JA (2014) How good is a model based on bibliometric indicators in predicting the final decisions made by peers? J Informetrics 8(2):390–405.

  • Voorhees, EM, et al. (1999) The TREC-8 question answering track report. In: TREC, 77–82. Citeseer.

  • Wang, H, Shen H-W, Cheng X-Q (2016) Scientific credit diffusion: Researcher level or paper level? Scientometrics 109(2):827–837.

  • West, JD, Jensen MC, Dandrea RJ, Gordon GJ, Bergstrom CT (2013) Author-level eigenfactor metrics: Evaluating the influence of authors, institutions, and countries within the social science research network community. J Am Soc Inf Sci Technol 64(4):787–801.


Acknowledgements

Not applicable.

Funding

Jorge Silva is supported by a FCT/MAP-i PhD research grant (PD/BD/128157/2016).

Author information


Contributions

All authors wrote the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Jorge Silva.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Silva, J., Aparício, D. & Silva, F. Feature-enriched author ranking in incomplete networks. Appl Netw Sci 4, 74 (2019). https://doi.org/10.1007/s41109-019-0182-8
