Advanced Evaluation of Web Search – Methodology and Technology


There is no doubt that search is critical to the web, and I believe it will be of similar importance to the semantic web. Once we are searching among billions of objects, it becomes impossible to always return a single right result, no matter how intelligent the search engine is. Instead, a set of possible results is presented for the user to choose from. Moreover, if we consider the trade-off between the system cost of generating a single right result and that of generating a set of possible results, we may well choose the latter. This naturally leads to the question of how to decide on and present the set to the user, and how to evaluate the outcome.

In this presentation, I will talk about some new results in the methodology and technology developed for the evaluation of web search technologies and systems. As we know, the dominant method for evaluating search engines is the Cranfield paradigm, which employs a test collection to quantify a system's performance. However, modern search engines differ greatly from the traditional information retrieval systems for which the Cranfield paradigm was proposed: 1) modern search engines have many more features, such as query-dependent document snippets and query suggestions, and the quality of these features affects how effectively users find useful information; 2) the document collections used in search engines are much larger than ever, so a complete test collection containing judgments for all query-document pairs is no longer attainable.

In response to these differences and difficulties, evaluation based on implicit feedback is a promising alternative methodology for IR evaluation. With this approach, no extra human effort is required to judge query-document relevance. Instead, such judgments can be automatically predicted from real users' implicit feedback data. There are three key issues in this methodology: 1) how to predict query-document relevance and the other features useful for quantifying search engine performance; 2) if complete "judgments" are not available, how to efficiently collect the most critical information that determines system performance; 3) since features beyond query-document relevance affect performance, how to integrate them into a good metric for predicting system performance. We will present a set of technologies dealing with these issues.
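To make the first and third issues concrete, here is a minimal sketch of the implicit-feedback idea. It is an illustration, not the speaker's actual method: it assumes relevance can be approximated by a click-through rate estimated from a log of (query, document, clicked) events, and that a ranking is then scored with DCG using those predicted relevance values. All function names and the log format are hypothetical.

```python
import math
from collections import defaultdict

def estimate_relevance(click_log):
    """Predict query-document relevance from implicit feedback.

    click_log: iterable of (query, doc, clicked) tuples, where
    `clicked` is 1 if the user clicked the result, else 0.
    Returns {query: {doc: click-through rate}} as a crude
    relevance estimate (a stand-in for human judgments).
    """
    impressions = defaultdict(lambda: defaultdict(int))
    clicks = defaultdict(lambda: defaultdict(int))
    for query, doc, clicked in click_log:
        impressions[query][doc] += 1
        clicks[query][doc] += clicked
    return {
        q: {d: clicks[q][d] / impressions[q][d] for d in impressions[q]}
        for q in impressions
    }

def dcg_at_k(ranked_docs, relevance, k=10):
    """Score a ranking with DCG, using predicted relevance as the gain.

    Documents with no feedback data contribute zero gain; the
    discount log2(rank + 2) uses 0-based ranks.
    """
    return sum(
        relevance.get(doc, 0.0) / math.log2(rank + 2)
        for rank, doc in enumerate(ranked_docs[:k])
    )
```

In practice, click-through rate is a biased relevance signal (users click higher-ranked results more often regardless of relevance), which is exactly why predicting relevance from implicit feedback, and combining it with other features into a reliable metric, are research problems rather than bookkeeping.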

While semantic web search may present different requirements from web search, the evaluation of any search technology is inevitable. As such, I hope the material covered in this talk will benefit some of you in the semantic web community in the future.

The Speaker

Li Xiaoming is a professor of computer science and technology and the director of the Institute of Network Computing and Information Systems (NCIS) at Peking University, China. His current research interests are in search engines and web mining. He has led the development of a Chinese search engine (Tianwang) since 1999 and is the founder of the Chinese web archive (Web InfoMall). Related papers have been published at the WWW Conference and CIKM and in Computer Networks, the Journal of Software and Systems, and the Journal of Web Engineering, among others. Under his direction, the Institute focuses on search engines and web mining, peer-to-peer computing, distributed systems, mobile computing, high productivity computing, and database systems. He serves on the editorial boards of several journals, including Concurrency and Computation (John Wiley) and the Journal of Web Engineering (Rinton). He is a senior member of the IEEE and a member of Eta Kappa Nu. He also serves as a vice president of the China Computer Federation and chairs the Advisory Subcommittee for Undergraduate Computing Education in China.