Copyright is held by the author/owner(s). WWW2002, May 7-11, 2002, Honolulu, Hawaii, USA. ACM 1-58113-449-5/02/0005.
H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval - search process, information filtering, retrieval models; H.3.1 [Information Storage and Retrieval]: Content Analysis and Indexing - linguistic processing
Algorithms, Experimentation
search, Web graph, link structure, PageRank, search in context, personalized search
Various link-based ranking strategies have been developed recently for improving Web-search query results. The HITS algorithm proposed in [14] relies on query-time processing to deduce the hubs and authorities that exist in a subgraph of the Web consisting of both the results to a query and the local neighborhood of these results. [4] augments the HITS algorithm with content analysis to improve precision for the task of retrieving documents related to a query topic (as opposed to retrieving documents that exactly satisfy the user's information need). [8] makes use of HITS for automatically compiling resource lists for general topics.
The PageRank algorithm discussed in [7,16] precomputes a rank vector that provides a priori ``importance'' estimates for all of the pages on the Web. This vector is computed once, offline, and is independent of the search query. At query time, these importance scores are used in conjunction with query-specific IR scores to rank the query results. PageRank has a clear efficiency advantage over the HITS algorithm, as the query-time cost of incorporating the precomputed PageRank importance score for a page is low. Furthermore, as PageRank is generated using the entire Web graph, rather than a small subset, it is less susceptible to localized link spam.
In this paper, we propose an approach that (as with HITS) allows the query to influence the link-based score, yet (as with PageRank) requires minimal query-time processing. In our model, we compute offline a set of PageRank vectors, each biased with a different topic, to create for each page a set of importance scores with respect to particular topics. The idea of biasing the PageRank computation was suggested in [6] for the purpose of personalization, but was never fully explored. This biasing process involves introducing artificial links into the Web graph during the offline rank computation, and is described further in Section 2.
By making PageRank topic-sensitive, we avoid the problem of heavily linked pages getting highly ranked for queries for which they have no particular authority [3]. Pages considered important in some subject domains may not be considered important in others, regardless of what keywords may appear either in the page or in anchor text referring to the page [5]. An approach termed Hilltop, with motivations similar to ours, is suggested in [5] that is designed to improve results for popular queries. Hilltop generates a query-specific authority score by detecting and indexing pages that appear to be good experts for certain keywords, based on their outlinks. However, query terms for which experts were not found will not be handled by the Hilltop algorithm.
[17] proposes using the set of Web pages that contain some term as a bias set for influencing the PageRank computation, with the goal of returning terms for which a given page has a high reputation. An approach for enhancing rankings by generating a PageRank vector for each possible query term was recently proposed in [18] with favorable results. However, the approach requires considerable processing time and storage, and is not easily extended to make use of user and query context. Our approach to biasing the PageRank computation is novel in its use of a small number of representative basis topics, taken from the Open Directory, in conjunction with a unigram language model used to classify the query and query context.
In our work we consider two scenarios. In the first, we assume a user with a specific information need issues a query to our search engine in the conventional way, by entering a query into a search box. In this scenario, we determine the topics most closely associated with the query, and use the appropriate topic-sensitive PageRank vectors for ranking the documents satisfying the query. This ensures that the ``importance'' scores reflect a preference for the link structure of pages that have some bearing on the query. As with ordinary PageRank, the topic-sensitive PageRank score can be used as part of a scoring function that takes into account other IR-based scores. In the second scenario, we assume the user is viewing a document (for instance, browsing the Web or reading email), and selects a term from the document for which he would like more information. This notion of search in context is discussed in [10]. For instance, if a query for ``architecture'' is performed by highlighting a term in a document discussing famous building architects, we would like the result to be different from the result when the query ``architecture'' is performed by highlighting a term in a document on CPU design. By selecting the appropriate topic-sensitive PageRank vectors based on the context of the query, we hope to provide more accurate search results. Note that even when a query is issued in the conventional way, without highlighting a term, the history of queries issued constitutes a form of query context. Yet another source of context comes from the user who submitted the query. For instance, the user's bookmarks and browsing history could be used in selecting the appropriate topic-sensitive rank vectors. These various sources of search context are discussed in Section 5.
A summary of our approach follows. During the offline processing of the Web crawl, we generate 16 topic-sensitive PageRank vectors, each biased (as described in Section 2) using URLs from a top-level category from the Open Directory Project (ODP) [2]. At query time, we calculate the similarity of the query (and if available, the query or user context) to each of these topics. Then, instead of using a single global ranking vector, we take the linear combination of the topic-sensitive vectors, weighted using the similarities of the query (and any available context) to the topics. By using a set of rank vectors, we are able to determine more accurately which pages are truly the most important with respect to a particular query or query context. Because the link-based computations are performed offline, during the preprocessing stage, the query-time costs are not much greater than those of the ordinary PageRank algorithm.
A review of the PageRank algorithm ([16,7,11]) follows. The basic idea of PageRank is that if page $u$ has a link to page $v$, then the author of $u$ is implicitly conferring some importance to page $v$. Intuitively, Yahoo! is an important page, reflected by the fact that many pages point to it. Likewise, pages prominently pointed to from Yahoo! are themselves probably important. How much importance does a page $u$ confer to its outlinks? Let $N_u$ be the outdegree of page $u$, and let $\mathit{Rank}(u)$ represent the importance (i.e., PageRank) of page $u$. Then the link $(u, v)$ confers $\mathit{Rank}(u)/N_u$ units of rank to $v$. This simple idea leads to the following fixpoint computation that yields the rank vector over all of the pages on the Web. If $N$ is the number of pages, assign all pages the initial value $1/N$. Let $B_v$ represent the set of pages pointing to $v$. In each iteration, propagate the ranks as follows:
$\mathit{Rank}_{i+1}(v) = \sum_{u \in B_v} \mathit{Rank}_i(u)/N_u$    (1)
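As a concrete illustration, the fixpoint computation of Equation 1 can be sketched in a few lines. The three-page graph below is a hypothetical example, not taken from the paper.

```python
def pagerank_iterate(graph, iterations=50):
    """Iterate Equation 1 on a graph given as {page: [outlinked pages]}."""
    n = len(graph)
    rank = {v: 1.0 / n for v in graph}           # initial value 1/N
    backlinks = {v: [] for v in graph}           # B_v: the pages pointing to v
    for u, outlinks in graph.items():
        for v in outlinks:
            backlinks[v].append(u)
    for _ in range(iterations):
        # each inlink u confers Rank(u)/N_u units of rank to v
        rank = {v: sum(rank[u] / len(graph[u]) for u in backlinks[v])
                for v in graph}
    return rank

# On a strongly connected, aperiodic toy graph the iteration converges:
ranks = pagerank_iterate({'a': ['b', 'c'], 'b': ['c'], 'c': ['a']})
```

Without the damping modification discussed later in this section, convergence is only guaranteed for strongly connected, aperiodic graphs, which is why the example graph above is chosen to satisfy both conditions.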
The process can also be expressed as the following eigenvector calculation, providing useful insight into PageRank. Let $M$ be the square, stochastic matrix corresponding to the directed graph $G$ of the Web, assuming all nodes in $G$ have at least one outgoing edge. If there is a link from page $j$ to page $i$, then let the matrix entry $M_{ij}$ have the value $1/N_j$. Let all other entries have the value 0. One iteration of the previous fixpoint computation corresponds to the matrix-vector multiplication $M \times \mathit{Rank}$. Repeatedly multiplying $\mathit{Rank}$ by $M$ yields the dominant eigenvector $\mathit{Rank}^*$ of the matrix $M$. In other words, $\mathit{Rank}^*$ is the solution to

$\mathit{Rank} = M \times \mathit{Rank}$    (2)
One caveat is that the convergence of PageRank is guaranteed only if $M$ is irreducible (i.e., $G$ is strongly connected) and aperiodic [15]. The latter is guaranteed in practice for the Web, while the former is true if we add a damping factor $1 - \alpha$ to the rank propagation. We can define a new matrix $M'$ in which we add transition edges of probability $\frac{\alpha}{N}$ between every pair of nodes in $G$:
$M' = (1 - \alpha) M + \alpha \left[\frac{1}{N}\right]_{N \times N}$    (3)
Using the fact that the entries of $\mathit{Rank}$ sum to 1, the eigenvector equation for $M'$ can be rewritten as

$\mathit{Rank} = M' \times \mathit{Rank} = (1 - \alpha) M \times \mathit{Rank} + \alpha \left[\frac{1}{N}\right]_{N \times 1}$    (4)

Replacing the uniform vector $\left[\frac{1}{N}\right]_{N \times 1}$ with a more general personalization vector $p$ yields

$\mathit{Rank} = (1 - \alpha) M \times \mathit{Rank} + \alpha p$    (5)
In terms of the random-walk model, the personalization vector $p$ represents the addition of a complete set of transition edges, where the probability on an artificial edge $(u, v)$ is given by $\alpha p_v$. We will refer to the solution of Equation 5, with a given $\alpha$ and a particular $p$, as $\mathit{PR}(\alpha, p)$. By appropriately selecting $p$, the rank vector can be made to prefer certain categories of pages. The bias factor $\alpha$ specifies the degree to which the computation is biased towards $p$.
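A minimal sketch of solving Equation 5 by power iteration follows. The column-stochastic matrix, the toy personalization vectors, and the choice of alpha below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def biased_pagerank(M, p, alpha=0.25, iterations=200):
    """Power iteration for Rank = (1 - alpha) * M * Rank + alpha * p."""
    rank = np.full(len(p), 1.0 / len(p))         # start from the uniform vector
    for _ in range(iterations):
        rank = (1 - alpha) * (M @ rank) + alpha * p
    return rank

# Toy 3-page Web graph; columns are source pages (column-stochastic):
M = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])
uniform = np.full(3, 1.0 / 3)                    # ordinary (unbiased) PageRank
biased = np.array([1.0, 0.0, 0.0])               # all teleport mass on page 0
r_uniform = biased_pagerank(M, uniform)
r_biased = biased_pagerank(M, biased)
```

Concentrating $p$ on a set of pages raises the rank of those pages and of pages reachable from them, which is exactly the biasing mechanism used below for the topic-sensitive vectors.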
In our approach to topic-sensitive PageRank, we precompute the importance scores offline, as with ordinary PageRank. However, we compute multiple importance scores for each page; we compute a set of scores of the importance of a page with respect to various topics. At query time, these importance scores are combined based on the topics of the query to form a composite PageRank score for those pages matching the query. This score can be used in conjunction with other IR-based scoring schemes to produce a final rank for the result pages with respect to the query. As the scoring functions of commercial search engines are not known, in our work we do not consider the effect of these other IR scores. We believe that the improvements to PageRank's precision will translate into improvements in overall search rankings, even after other IR-based scores are factored in.
The first step in our approach is to generate a set of biased PageRank vectors using a set of ``basis'' topics. This step is performed once, offline, during the preprocessing of the Web crawl. For the personalization vector $p$ described in Section 2, we use the URLs present in the various categories in the ODP. We create 16 different biased PageRank vectors by using the URLs present below each of the 16 top-level categories of the ODP as the personalization vectors. In particular, let $T_j$ be the set of URLs in the ODP category $c_j$. Then when computing the PageRank vector for topic $c_j$, in place of the uniform damping vector $p = \left[\frac{1}{N}\right]_{N \times 1}$, we use the nonuniform vector $p = v_j$ where

$v_{ji} = \begin{cases} \frac{1}{|T_j|} & i \in T_j, \\ 0 & i \notin T_j. \end{cases}$    (6)
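The construction of $v_j$ can be sketched directly from Equation 6. The page identifiers below are placeholders, and restricting $T_j$ to pages present in the crawl is an implementation assumption.

```python
def topic_damping_vector(all_pages, topic_urls):
    """Equation 6: mass 1/|T_j| on each page of the ODP category, 0 elsewhere."""
    T = set(topic_urls) & set(all_pages)         # T_j restricted to crawled pages
    return {page: (1.0 / len(T) if page in T else 0.0) for page in all_pages}

v = topic_damping_vector(['p1', 'p2', 'p3', 'p4'], ['p2', 'p4'])
```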
We also compute the 16 class term-vectors $D_j$, consisting of the terms in the documents below each of the 16 top-level categories. $D_{jt}$ simply gives the total number of occurrences of term $t$ in documents listed below class $c_j$ of the ODP.
One could envision using other sources for creating topic-sensitive PageRank vectors; however, the ODP data is freely available, and as it is compiled by thousands of volunteer editors, is less susceptible to influence by any one party.
The second step in our approach is performed at query time. Given a query $q$, let $q'$ be the context of $q$. In other words, if the query was issued by highlighting the term $q$ in some Web page $u$, then $q'$ consists of the terms in $u$. For ordinary queries not done in context, let $q' = q$. Using a unigram language model, with parameters set to their maximum-likelihood estimates, we compute the class probabilities for each of the 16 top-level ODP classes, conditioned on $q'$. Let $q'_i$ be the $i$th term in the query (or query context) $q'$. Then given the query $q$, we compute for each $c_j$ the following:
$P(c_j \mid q') = \frac{P(c_j) \cdot P(q' \mid c_j)}{P(q')} \propto P(c_j) \cdot \prod_i P(q'_i \mid c_j)$    (7)
$P(q'_i \mid c_j)$ is easily computed from the class term-vector $D_j$. The quantity $P(c_j)$ is not as straightforward. We chose to make it uniform, although we could personalize the query results for different users by varying this distribution. In other words, for some user $k$, we can use a prior distribution $P_k(c_j)$ that reflects the interests of user $k$. This method provides an alternative framework for user-based personalization, rather than directly varying the damping vector $p$ as had been suggested in [7,6].
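A sketch of Equation 7 follows. Computing in log space, and the add-one smoothing that avoids a zero probability on unseen terms, are implementation assumptions not specified above; the toy term-vectors are invented.

```python
import math

def class_probabilities(query_terms, class_term_vectors, priors=None):
    """P(c_j | q') ∝ P(c_j) * prod_i P(q'_i | c_j), via class term-vectors D_j."""
    classes = list(class_term_vectors)
    if priors is None:                            # uniform P(c_j), as in the text
        priors = {c: 1.0 / len(classes) for c in classes}
    vocab = set().union(*class_term_vectors.values())
    log_post = {}
    for c in classes:
        D = class_term_vectors[c]
        total = sum(D.values())
        score = math.log(priors[c])
        for t in query_terms:
            # add-one smoothing (an assumption) avoids log(0) on unseen terms
            score += math.log((D.get(t, 0) + 1) / (total + len(vocab)))
        log_post[c] = score
    shift = max(log_post.values())                # normalize back to probabilities
    exp = {c: math.exp(s - shift) for c, s in log_post.items()}
    z = sum(exp.values())
    return {c: e / z for c, e in exp.items()}

probs = class_probabilities(
    ['blues', 'guitar'],
    {'ARTS': {'blues': 10, 'guitar': 5, 'painting': 3},
     'HEALTH': {'blues': 2, 'depression': 9}})
```

Passing a nonuniform `priors` dictionary corresponds to the user-specific prior $P_k(c_j)$ mentioned above.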
Using a text index, we retrieve URLs for all documents containing the original query terms $q$. Finally, we compute the query-sensitive importance score of each of these retrieved URLs as follows. Let $\mathit{rank}_{jd}$ be the rank of document $d$ given by the rank vector $\mathit{PR}(\alpha, v_j)$ (i.e., the rank vector for topic $c_j$). For the Web document $d$, we compute the query-sensitive importance score $s_{qd}$ as follows.
$s_{qd} = \sum_j P(c_j \mid q') \cdot \mathit{rank}_{jd}$    (8)
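Equation 8 then reduces to a weighted sum over the precomputed per-topic ranks. The document URL, class probabilities, and rank values below are hypothetical placeholders.

```python
def query_sensitive_score(doc, class_probs, topic_ranks):
    """s_qd = sum_j P(c_j | q') * rank_jd (Equation 8)."""
    return sum(class_probs[c] * topic_ranks[c].get(doc, 0.0) for c in class_probs)

score = query_sensitive_score(
    'http://example.org/page',                    # placeholder document
    {'ARTS': 0.7, 'HEALTH': 0.3},                 # P(c_j | q')
    {'ARTS': {'http://example.org/page': 0.20},   # rank_jd from PR(alpha, v_j)
     'HEALTH': {'http://example.org/page': 0.10}})
```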
The above query-sensitive PageRank computation has the following probabilistic interpretation, in terms of the ``random surfer'' model [7]. Let $w_j$ be the coefficient used to weight the $j$th rank vector, with $\sum_j w_j = 1$ (e.g., let $w_j = P(c_j \mid q')$). Then note that the equality
$\sum_j \left[w_j \cdot \mathit{PR}(\alpha, v_j)\right] = \mathit{PR}\!\left(\alpha, \sum_j \left[w_j \cdot v_j\right]\right)$    (9)

holds, as shown in the Appendix.
To measure the behavior of topic-sensitive PageRank, we conducted a series of experiments. In Section 4.1 we describe the similarity measures we use to compare two rankings. In Section 4.2, we investigate how the induced rankings vary, based on both the topic used to bias the rank vectors as well as the choice of the bias factor $\alpha$. In Section 4.3, we present results of a user study showing the retrieval performance of ordinary PageRank versus topic-sensitive PageRank. Finally, in Section 4.4, we provide an initial look at how query context can be used in conjunction with topic-sensitive PageRank.
As a source of Web data, we used the latest Web crawl from the Stanford WebBase [12], performed in January 2001, containing roughly 120 million pages. Our crawl contained roughly 280,000 of the 3 million URLs in the ODP. For our experiments, we used 35 of the sample queries given in [9], which were in turn compiled from earlier papers. The queries are listed in Table 1.
affirmative action | lipari |
alcoholism | lyme disease |
amusement parks | mutual funds |
architecture | national parks |
bicycling | parallel architecture |
blues | recycling cans |
cheese | rock climbing |
citrus groves | san francisco |
classical guitar | shakespeare |
computer vision | stamp collecting |
cruises | sushi |
death valley | table tennis |
field hockey | telecommuting |
gardening | vintage cars |
graphic design | volcano |
gulf war | zen buddhism |
hiv | zener |
java |
We use two measures when comparing rankings. The first measure, denoted $\mathit{OSim}(\tau_1, \tau_2)$, indicates the degree of overlap between the top $n$ URLs of two rankings, $\tau_1$ and $\tau_2$. We define the overlap of two sets $A$ and $B$ (each of size $n$) to be $\frac{|A \cap B|}{n}$. In our comparisons we will use $n = 20$. The overlap measure gives an incomplete picture of the similarity of two rankings, as it does not indicate the degree to which the relative orderings of the top $n$ URLs of two rankings are in agreement. Therefore, we also use a variant of the Kendall's $\tau$ distance measure. See [9] for a discussion of various distance measures for ranked lists in the context of Web search results. For consistency with $\mathit{OSim}$, we will present our definition as a similarity (as opposed to distance) measure, so that values closer to 1 indicate closer agreement. Consider two partially ordered lists of URLs, $\tau_1$ and $\tau_2$, each of length $n$. Let $U$ be the union of the URLs in $\tau_1$ and $\tau_2$. If $\delta_1$ is $U - \tau_1$, then let $\tau_1'$ be the extension of $\tau_1$, where $\tau_1'$ contains $\delta_1$ appearing after all the URLs in $\tau_1$. We extend $\tau_2$ analogously to yield $\tau_2'$. We define our similarity measure $\mathit{KSim}$ as follows:
$\mathit{KSim}(\tau_1, \tau_2) = \frac{\left|\{(u, v) : \tau_1' \text{ and } \tau_2' \text{ agree on the order of } (u, v),\ u \neq v\}\right|}{|U| \cdot (|U| - 1)}$    (10)
In other words, $\mathit{KSim}(\tau_1, \tau_2)$ is the probability that $\tau_1'$ and $\tau_2'$ agree on the relative ordering of a randomly selected pair of distinct nodes $(u, v) \in U \times U$.
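The two measures can be sketched as follows. Treating every URL appended during the extension step as tied at the end of its list is our reading of the definition, and the example lists are synthetic.

```python
from itertools import combinations

def osim(t1, t2, n=20):
    """Overlap of the top-n URLs of the two rankings."""
    return len(set(t1[:n]) & set(t2[:n])) / n

def ksim(t1, t2):
    """Fraction of distinct URL pairs on whose relative order the
    extended lists tau_1' and tau_2' agree (Equation 10)."""
    universe = set(t1) | set(t2)
    # URLs absent from a list are tied at its end (position len(list))
    pos1 = {u: (t1.index(u) if u in t1 else len(t1)) for u in universe}
    pos2 = {u: (t2.index(u) if u in t2 else len(t2)) for u in universe}
    agree = total = 0
    for u, v in combinations(sorted(universe), 2):
        total += 1
        d1, d2 = pos1[u] - pos1[v], pos2[u] - pos2[v]
        if (d1 < 0) == (d2 < 0) and (d1 > 0) == (d2 > 0):
            agree += 1                            # same relative ordering
    return agree / total
```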
In this section we measure the effects of topically biasing the PageRank computation. Firstly, note that the choice of the bias factor $\alpha$, discussed in Section 2, affects the degree to which the resultant vector is biased towards the topic vector used for $p$. Consider the extreme cases. For $\alpha = 1$, the URLs in the bias set $T_j$ will be assigned the score $\frac{1}{|T_j|}$, and all other URLs receive the score 0. Conversely, as $\alpha$ tends to 0, the content of $T_j$ becomes irrelevant to the final score assignment.
We chose to use $\alpha = 0.25$ heuristically, after inspecting the rankings for several of the queries listed in Table 1. We did not concentrate on optimizing $\alpha$, as we discovered that the induced rankings of query results are not very sensitive to the choice of $\alpha$. For instance, for $\alpha = 0.05$ and $\alpha = 0.25$, we measured the average similarity of the induced rankings across our set of test queries, for each of our PageRank vectors. The results are given in Table 2. We see that the average overlap between the top 20 results for the two values of $\alpha$ is very high. Furthermore, the high values for $\mathit{KSim}$ indicate high overlap as well as agreement (on average) on the relative ordering of these top 20 URLs for the two values of $\alpha$. All subsequent experiments use $\alpha = 0.25$.
Bias Set | OSim | KSim |
NOBIAS | 0.72 | 0.64 |
ARTS | 0.66 | 0.58 |
BUSINESS | 0.63 | 0.54 |
COMPUTERS | 0.70 | 0.60 |
GAMES | 0.78 | 0.67 |
HEALTH | 0.73 | 0.62 |
HOME | 0.77 | 0.67 |
KIDS & TEENS | 0.74 | 0.66 |
NEWS | 0.74 | 0.65 |
RECREATION | 0.62 | 0.55 |
REFERENCE | 0.68 | 0.57 |
REGIONAL | 0.60 | 0.52 |
SCIENCE | 0.69 | 0.59 |
SHOPPING | 0.66 | 0.55 |
SOCIETY | 0.57 | 0.50 |
SPORTS | 0.69 | 0.60 |
WORLD | 0.64 | 0.55 |
The differences across the different topically-biased PageRank vectors are much higher, dwarfing any variations caused by the choice of $\alpha$. We computed the average, across our test queries, of the pairwise similarity between the rankings induced by the different topically-biased vectors. The 5 most similar pairs, according to our similarity measures, are given in Table 3, showing that even the most similar topically-biased rankings have little overlap. Table 4 shows that the pairwise similarities of the rankings induced by the other ranking vectors are close to 0. Having established that the topically-biased PageRank vectors each rank the results substantially differently, we proceed to investigate which of these rankings is ``best'' for specific queries.
As an example, Table 5 shows the top 5 ranked URLs for the query ``bicycling,'' using each of the topically-biased PageRank vectors. Note in particular that the ranking induced by the SPORTS-biased vector is of high quality. Also note that the ranking induced by the SHOPPING-biased vector leads to the high ranking of websites selling bicycle-related accessories.
In this section we look at how effectively we can utilize the ranking precision gained by the use of multiple PageRank vectors. Given a query, our first task is to determine which of the rank vectors can best rank the results for the query. We found that simply using $P(c_j \mid q)$ as discussed in Section 3.3 yielded intuitive results for determining which topics are most closely associated with a query. In particular, for most of the test queries, the ODP categories with the highest values for $P(c_j \mid q)$ are intuitively the most relevant categories for the query. In Table 6, we list for each test query the 3 categories with the highest values for $P(c_j \mid q)$. When computing the composite score $s_{qd}$ in our experiments, we chose to use the weighted sum of only the rank vectors associated with the three topics with the highest values for $P(c_j \mid q)$, rather than all of the topics. Based on the data in Table 6, we saw no need to include the scores from the topic vectors with lower associated values for $P(c_j \mid q)$.
To compare our query-sensitive approach to ordinary PageRank, we conducted a user study. We randomly selected 10 queries from our test set for the study, and found 5 volunteers. For each query, the volunteer was shown 2 result rankings; one consisted of the top 10 results satisfying the query when these results were ranked with the unbiased PageRank vector, and the other consisted of the top 10 results for the query when the results were ranked with the composite score. The volunteer was asked to select all URLs which were ``relevant'' to the query, in their opinion. Furthermore, they were asked to say which of the two rankings was ``better'' overall, in their opinion. They were not told anything about how either of the rankings was generated. The rankings induced by the topic-sensitive PageRank score were significantly preferred by our test group. Let a URL be considered relevant if at least 3 of the 5 volunteers selected it as relevant for the query. The precision then is the fraction of the top 10 URLs that are deemed relevant. The precision of the two ranking techniques for each test query is shown in Figure 1. The average precision for the rankings induced by the topic-sensitive PageRank scores is substantially higher than that of the unbiased PageRank scores. Furthermore, as shown in Table 7, for nearly all queries, a majority of the users preferred the rankings induced by the topic-sensitive PageRank scores. These results suggest that the effectiveness of a query-result scoring function can be improved by the use of a topic-sensitive PageRank scheme in place of a generic PageRank scheme.
[Table 3: The 5 most similar pairs of topically-biased rankings.]
[Table 4: Pairwise similarities of the rankings induced by the remaining topic vectors.]
[Table 5: Top 5 results for the query ``bicycling'' under each of the topically-biased vectors.]
[Table 6: The 3 ODP categories with the highest values of $P(c_j \mid q)$ for each test query.]
[Figure 1: Precision of the top 10 results under the unbiased and topic-sensitive rankings, for each test query.]
Query | Preferred by Majority |
alcoholism | TOPICSENSITIVE |
bicycling | TOPICSENSITIVE |
citrus groves | TOPICSENSITIVE |
computer vision | TOPICSENSITIVE |
death valley | TOPICSENSITIVE |
graphic design | TOPICSENSITIVE |
gulf war | TOPICSENSITIVE |
hiv | NOBIAS |
shakespeare | Neither |
table tennis | TOPICSENSITIVE |
In Section 4.3, the topic-sensitive ranking vectors were chosen using the topics most strongly associated with the query term. If the search is done in context, for instance by highlighting a term in a Web page and invoking a search, then the context can be used instead of the query to determine the topics. Using the context can help disambiguate the query term and yield results that more closely reflect the intent of the user. We now illustrate with an example how using query context can help a system which uses topic-sensitive PageRank.
Consider the query ``blues'' taken from our test set. This term has several different senses; for instance, it could refer to a musical genre, or to a form of depression. Two Web pages in which the term is used with these different senses, as well as short textual excerpts from the pages, are shown in Table 8. Consider the case where a user reading one of these two pages highlights the term ``blues'' to submit a search query. At query time, the first step of our system is to determine which topic best applies to the query in context. Thus, we calculate $P(c_j \mid q')$ as described in Section 3.3, using for $q'$ the terms of the entire page, rather than just the term ``blues.'' For the first page (discussing music), the class with the highest value of $P(c_j \mid q')$ is ARTS, and for the second page (discussing depression), it is HEALTH. The next step is to use a text index to fetch a list of URLs for all documents containing the term ``blues'' -- the highlighted term for which the query was issued. Finally, the URLs are ranked using the appropriate ranking vector that was selected using the values of $P(c_j \mid q')$ (i.e., either ARTS or HEALTH). Table 9 shows the top 5 URLs for the query ``blues'' using the topic-sensitive PageRank vectors for ARTS, HEALTH, and NOBIAS. We see that as desired, most of the results ranked using the ARTS-biased vector are pages discussing music, while all of the top results ranked using the HEALTH-biased vector discuss depression. The context of the query allows the system to pick the appropriate topic-sensitive ranking vector, and yields search results reflecting the appropriate sense of the search term.
That Blues Music Page | Postpartum Depression & the `Baby Blues' |
http://www.fred.net/turtle/blues.shtml | http://familydoctor.org/handouts/379.html |
...If you're stuck for new material, visit Dan Bowden's Blues and Jazz Transcriptions - lots of older blues guitar transcriptions for you historic blues fans ... | ...If you're a new mother and have any of these symptoms, you have what is called the ``baby blues.'' ``The blues'' are considered a normal part of early motherhood and usually go away within 10 days after delivery. However, some women have worse symptoms or symptoms last longer. This is called ``postpartum depression.'' ... |
[Table 9: Top 5 results for the query ``blues'' ranked using the ARTS, HEALTH, and NOBIAS vectors.]
In the previous section, we discussed one possible source of context to utilize in the generation of the composite PageRank score, namely the document containing the query term highlighted by the user. There are a variety of other sources of context that may be used in our scheme. For instance, the history of queries issued leading up to the current query is another form of query context. A search for ``basketball'' followed up with a search for ``Jordan'' presents an opportunity for disambiguating the latter. As another example, most modern search engines incorporate some sort of hierarchical directory, listing URLs for a small subset of the Web, as part of their search interface. The current node in the hierarchy that the user is browsing constitutes a source of query context. When browsing URLs at TOP/ARTS, for instance, any queries issued could have search results (from the entire Web index) ranked with the ARTS rank vector, rather than either restricting results to URLs listed in that particular category, or not making use of the category whatsoever. In addition to these types of context associated with the query itself, we can also potentially utilize query-independent user context. Sources of user context include the user's browsing patterns, bookmarks, and email archives. As mentioned in Section 3.3, we can integrate user context by selecting a nonuniform prior, $P_k(c_j)$, based on how closely the user's context accords with each of the basis topics.
When attempting to utilize the aforementioned sources of search context, mediating the personalization of PageRank via a set of basis topics yields several benefits over attempting to choose a personalization vector directly.
A wide variety of search-context sources exist which, if utilized appropriately, can help users better manage the deluge of information they are faced with. Although we have begun exploring how best to make use of available context, much work remains in identifying and utilizing search context with the goal of personalizing Web search.
We are currently exploring several ways of improving our approach for topic-sensitive PageRank. As discussed in the previous section, discovering sources of search context is a ripe area of research. Another area of investigation is the development of the best set of basis topics. For instance, it may be worthwhile to use a finer-grained set of topics, perhaps using the second or third level of the Open Directory hierarchy, rather than simply the top level. However, a finer-grained set of topics leads to efficiency considerations, as the cost of the naive approach to computing these topic-sensitive vectors is linear in the number of basis topics. See [13] for approaches that may make the use of a larger, finer-grained set of basis topics practical.
We are also currently investigating a different approach to creating the damping vector $p$ used to create the topic-sensitive rank vectors. This approach has the potential of being more resistant to adversarial ODP editors. Currently, as described in Section 3.2, we set the damping vector for topic $c_j$ to $v_j$, where $v_j$ is defined in Equation 6. In the modified approach, we instead first train a classifier for the basis set of topics using the ODP data as our training set, and then assign to all pages on the Web a distribution of topic weights. Let this topic weight of a page $i$ for category $c_j$ be $w_{ij}$. Then we replace Equation 6 with
$v_{ji} = \frac{w_{ij}}{\sum_k w_{kj}}$    (11)
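The modified damping vector amounts to normalizing the classifier's per-page topic weights, as in Equation 11. The weights below are invented placeholders.

```python
def classifier_damping_vector(topic_weights):
    """Equation 11: v_ji = w_ij / sum_k w_kj, given classifier-assigned
    weights {page: weight of that page for a fixed category c_j}."""
    total = sum(topic_weights.values())
    return {page: w / total for page, w in topic_weights.items()}

v = classifier_damping_vector({'p1': 2.0, 'p2': 1.0, 'p3': 1.0})
```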
We plan to investigate the above enhancements to generating the topic-sensitive PageRank score, and evaluate their effect on retrieval performance, both in isolation and when combined with typical IR scoring functions.
I would like to thank Professor Jeff Ullman for invaluable comments and feedback. I would like to thank Glen Jeh and Professor Jennifer Widom for several useful discussions. I would also like to thank Aristides Gionis for his feedback. Finally, I would like to thank the anonymous reviewers for their insightful comments.
In this section we derive the interpretation of the weighted sum of PageRank vectors. Consider a set of rank vectors $\mathit{PR}(\alpha, v_j)$ for some fixed $\alpha$. For brevity, let $r_j = \mathit{PR}(\alpha, v_j)$. Furthermore, let $r' = \sum_j [w_j \cdot r_j]$ and $v' = \sum_j [w_j \cdot v_j]$, where $\sum_j w_j = 1$. We claim that $r' = \mathit{PR}(\alpha, v')$. In other words, $r'$ is itself a PageRank vector, where the personalization vector $p$ is set to $v'$. The proof follows.
Because each $r_j$ satisfies Equation 5 (with $p = v_j$), we have that
$r' = \sum_j [w_j \cdot r_j]$    (12)
$\quad = \sum_j \left[w_j \cdot \left((1 - \alpha) M \cdot r_j + \alpha v_j\right)\right]$    (13)
$\quad = (1 - \alpha) M \sum_j [w_j \cdot r_j] + \alpha \sum_j [w_j \cdot v_j]$    (14)
$\quad = (1 - \alpha) M \cdot r' + \alpha \sum_j [w_j \cdot v_j]$    (15)
$\quad = (1 - \alpha) M \cdot r' + \alpha v'$    (16)
Thus $r'$ satisfies Equation 5 for the personalization vector $p = v'$, and our proof is complete.
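The claim is also easy to check numerically. The sketch below compares the two sides of Equation 9 on an arbitrary 3-page graph, using power iteration to approximate $\mathit{PR}(\alpha, p)$; the matrix, vectors, and weights are test values, not data from the paper.

```python
import numpy as np

def pr(M, p, alpha=0.25, iterations=300):
    """Power iteration for Equation 5: Rank = (1 - alpha) M Rank + alpha p."""
    r = np.full(len(p), 1.0 / len(p))
    for _ in range(iterations):
        r = (1 - alpha) * (M @ r) + alpha * p
    return r

M = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])                  # column-stochastic toy graph
v1 = np.array([1.0, 0.0, 0.0])                   # two personalization vectors
v2 = np.array([0.0, 0.5, 0.5])
w1, w2 = 0.3, 0.7                                # weights summing to 1

lhs = w1 * pr(M, v1) + w2 * pr(M, v2)            # weighted sum of rank vectors
rhs = pr(M, w1 * v1 + w2 * v2)                   # PageRank of the weighted vector
```

The two sides agree to within the power-iteration tolerance, as the proof above predicts.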