BY DAVID HART

Computer science researchers at Stanford have developed several new techniques that together may make it possible to calculate web page rankings, as used in the Google search engine, up to five times faster. The methods may make it realistic to calculate page rankings personalized for an individual's interests or customized to a particular topic.

The team includes graduate students Sepandar Kamvar and Taher Haveliwala; noted numerical analyst Gene Golub, the Fletcher Jones Professor of Computer Science; and Christopher Manning, assistant professor of computer science and linguistics. Their work is described in three papers, one of which is being presented at the 12th Annual World Wide Web Conference (WWW2003) in Budapest, Hungary, May 20-24. The work was supported by the National Science Foundation, which in the mid-1990s also supported Stanford graduate students Larry Page and Sergey Brin while they developed what would become the Google search engine.

Computing PageRank, the ranking algorithm behind the Google search engine, for a billion web pages can take several days. Google currently ranks and searches 3 billion web pages. Each personalized or topic-sensitive ranking would require a separate multiday computation, but the payoff would be less time spent wading through irrelevant search results. For example, searching a sports-specific Google site for "Giants" would give more importance to pages about the New York or San Francisco Giants and less importance to pages about Jack and the Beanstalk.

To speed up PageRank, the Stanford team developed a trio of techniques in numerical linear algebra. First, in the WWW2003 paper, they describe so-called "extrapolation" methods, which make some assumptions about the web's link structure that aren't true but permit a quick and easy computation of PageRank. Because the assumptions aren't true, the computed PageRank isn't exactly correct, but it is close and can be refined using the original PageRank algorithm.
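To make the extrapolation idea concrete, here is a minimal sketch, not the team's code: PageRank is typically computed by power iteration on the link matrix, and one simple member of the extrapolation family, Aitken's delta-squared process, periodically uses the last three iterates to jump ahead toward the limit. The function name, the toy four-page graph, and the parameter choices below are illustrative assumptions.

```python
import numpy as np

def pagerank_power(A, damping=0.85, tol=1e-8, max_iter=200, extrapolate_every=0):
    """Power iteration for PageRank on a column-stochastic link matrix A.
    If extrapolate_every > 0, periodically apply element-wise Aitken
    delta-squared extrapolation (an illustrative stand-in for the
    extrapolation methods described in the article)."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)            # start from the uniform distribution
    history = []
    for k in range(max_iter):
        # One PageRank step: follow links with prob. `damping`, teleport otherwise.
        x_new = damping * (A @ x) + (1 - damping) / n
        x_new /= x_new.sum()           # keep it a probability distribution
        history.append(x_new)
        if np.abs(x_new - x).sum() < tol:
            return x_new, k + 1
        if extrapolate_every and len(history) >= 3 and (k + 1) % extrapolate_every == 0:
            # Aitken: x' = x0 - (x1 - x0)^2 / (x2 - 2*x1 + x0), element-wise.
            x0, x1, x2 = history[-3], history[-2], history[-1]
            denom = x2 - 2 * x1 + x0
            safe = np.abs(denom) > 1e-12   # skip entries that already stopped moving
            x_extrap = x2.copy()
            x_extrap[safe] = x0[safe] - (x1[safe] - x0[safe]) ** 2 / denom[safe]
            x_extrap = np.clip(x_extrap, 0, None)
            if x_extrap.sum() > 0:
                x_new = x_extrap / x_extrap.sum()
        x = x_new
    return x, max_iter

# Toy 4-page web: column j holds the normalized out-links of page j.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n = 4
A = np.zeros((n, n))
for j, outs in links.items():
    for i in outs:
        A[i, j] = 1.0 / len(outs)

ranks, iters = pagerank_power(A, extrapolate_every=5)
```

Because the extrapolated vector is only a starting guess that is fed back into ordinary power iteration, an inaccurate jump costs nothing but a few extra refinement steps, which mirrors the article's point that the approximate answer "can be refined using the original PageRank algorithm."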
The Stanford researchers have shown that their extrapolation techniques can speed up PageRank by 50 percent in realistic conditions and by up to 300 percent under less realistic conditions.

A second paper describes an enhancement, called "BlockRank," which relies on a feature of the web's link structure that the Stanford team is among the first to investigate and exploit. Namely, they show that approximately 80 percent of the pages on any given website point to other pages on the same site. As a result, they can compute many single-site PageRanks, glue them together in an appropriate manner and use that as a starting point for the original PageRank algorithm. With this technique, they can realistically speed up the PageRank computation by 300 percent.

Finally, the team notes in a third paper that the rankings for some pages are calculated early in the PageRank process, while the rankings of many highly rated pages take much longer to compute. In a method called "Adaptive PageRank," they eliminate redundant computations associated with those pages whose PageRanks finish early. This speeds up the PageRank computation by up to 50 percent.

"Further speedups are possible when we use all these methods," Kamvar said. "Our preliminary experiments show that combining the methods will make the computation of PageRank up to a factor of five faster. However, there are still several issues to be solved. We're closer to a topic-based PageRank than to a personalized ranking."

The complexities of a personalized ranking would require even greater speedups to the PageRank calculations. In addition, while a faster algorithm shortens computation time, the issue of storage remains. Because the results from a single PageRank computation on a few billion web pages require several gigabytes of storage, saving a personalized PageRank for many individuals would rapidly consume vast amounts of storage.
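The Adaptive PageRank idea, skipping work for pages whose ranks have already settled, can be sketched as follows. This is an illustration of the general freezing strategy under assumed names and tolerances, not the paper's implementation: once a page's rank stops changing between passes, it is frozen and no longer recomputed, though other pages still read its (fixed) value through their in-links.

```python
import numpy as np

def adaptive_pagerank(A, damping=0.85, tol=1e-8, freeze_tol=1e-10, max_iter=200):
    """Power iteration that 'freezes' pages whose rank has stabilized,
    echoing Adaptive PageRank's elimination of redundant computation
    for pages that finish early."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)
    active = np.ones(n, dtype=bool)        # pages still being updated
    for k in range(max_iter):
        x_new = x.copy()
        # Recompute only the still-active rows; frozen pages keep their
        # last value, and contributions from them still use x as-is.
        update = damping * (A[active] @ x) + (1 - damping) / n
        converged_now = np.abs(update - x[active]) < freeze_tol
        x_new[active] = update
        if np.abs(x_new - x).sum() < tol:
            return x_new / x_new.sum(), k + 1
        # Freeze pages whose change this pass was negligible.
        idx = np.flatnonzero(active)
        active[idx[converged_now]] = False
        x = x_new
    return x / x.sum(), max_iter

# Same style of toy 4-page web as above: column j = out-links of page j.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n = 4
A = np.zeros((n, n))
for j, outs in links.items():
    for i in outs:
        A[i, j] = 1.0 / len(outs)

ranks, iters = adaptive_pagerank(A)
```

On a real web graph the savings come from the matrix rows that no longer need to be touched at all; in this toy sketch the boolean mask `active` stands in for that bookkeeping.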
Saving a limited number of topic-specific PageRank calculations would be more practical.

The reason for the expensive computation and storage requirements lies in how PageRank generates the rankings that have led to Google's popularity. Unlike page-ranking methods that rate each page separately, PageRank bases each page's "importance" on the number and importance of the pages that link to it. Therefore, PageRank must consider all pages at the same time and can't easily omit pages that aren't likely to be relevant to a topic. It also means that the faster method will not affect how quickly Google presents results to users' searches, because the rankings are computed in advance, not at the time a search is requested.

The Stanford team's conference paper and technical reports on enhancing the PageRank algorithm, as well as the original paper describing the PageRank method, are available on the Stanford Database Group's Publication Server (http://dbpubs.stanford.edu/).

David Hart is a public information officer at the National Science Foundation.

Stanford Report, May 21, 2003