ICTIR 2015 | September 27-30, 2015 | Northampton, Massachusetts (USA)

ACM SIGIR International Conference on the Theory of Information Retrieval


ICTIR 2015 Tutorials

Sunday, September 27, 2015 -- 1:00 p.m. - 5:30 p.m.

Two half-day tutorials will be held in parallel sessions. Attendees will need to choose which tutorial to attend.
Note that tutorials are included with ICTIR registration.

Tutorial 1: Statistical Significance Testing in Theory and in Practice
Ben Carterette, University of Delaware

Tutorial 2: Theory of Retrieval: The Retrievability of Information
Leif Azzopardi, University of Glasgow

The past 20 years have seen a great improvement in the rigor of information retrieval experimentation, due primarily to two factors: high-quality, public, portable test collections such as those produced by TREC (the Text REtrieval Conference), and the increased practice of statistical hypothesis testing to determine whether measured improvements can be ascribed to something other than random chance. Together these create a very useful standard for reviewers, program committees, and journal editors; work in information retrieval (IR) increasingly cannot be published unless it has been evaluated using a well-constructed test collection and shown to produce a statistically significant improvement over a good baseline.

But, as the saying goes, any tool sharp enough to be useful is also sharp enough to be dangerous. Statistical tests of significance are widely misunderstood. Most researchers and developers treat them as a "black box": evaluation results go in and a p-value comes out. Because significance is such an important factor in determining what research directions to explore and what is published, using p-values obtained without thought can have consequences for everyone doing research in IR. Ioannidis has argued that the main consequence in the biomedical sciences is that most published research findings are false; could that be the case in IR as well?
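To make the "black box" concrete, here is a minimal sketch of one of the most common such tests in IR evaluation, a paired t-test over per-topic scores. The score arrays are invented for illustration, and the scipy call is one standard way of running the test.

    # A paired t-test comparing two systems' per-topic average precision.
    # The scores below are made-up illustrative numbers, not real data.
    from scipy import stats

    baseline = [0.21, 0.35, 0.12, 0.48, 0.30, 0.27, 0.44, 0.19]  # system A, per topic
    improved = [0.25, 0.33, 0.18, 0.50, 0.36, 0.29, 0.47, 0.22]  # system B, same topics

    # ttest_rel runs a paired (related-samples) t-test. The p-value is the
    # probability, under the null hypothesis of no difference between the
    # systems, of observing a mean per-topic difference at least this large.
    t_stat, p_value = stats.ttest_rel(improved, baseline)
    print("t = %.3f, p = %.4f" % (t_stat, p_value))

A low p-value here says only that the observed difference would be unlikely under the null hypothesis; by itself it says nothing about the size or practical importance of the improvement, which is exactly the kind of interpretation question the tutorial addresses.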

This tutorial will help researchers and developers gain a better understanding of how significance tests work and how they should be interpreted, so that they can use the tests more effectively in their day-to-day work and better interpret them when reading the work of others. It is intended for both new and experienced researchers and practitioners in IR: anyone who wishes to perform significance tests and also wants a deeper understanding of what they are and how to interpret the information they provide.

http://ir.cis.udel.edu/ICTIR15tutorial

Bio: Ben Carterette is an Associate Professor of Computer and Information Sciences at the University of Delaware in Newark, Delaware, USA. His research primarily focuses on evaluation in Information Retrieval, including test collection construction, evaluation measures, and statistical testing. He has published over 70 papers in venues such as ACM TOIS, SIGIR, CIKM, WSDM, ECIR, and ICTIR, winning three Best Paper Awards for his work on evaluation. In addition, he has co-organized four workshops on IR evaluation and co-coordinated five TREC tracks. Most recently he served as General Co-Chair for WSDM 2014, and he will co-chair ICTIR in 2016. Ben can be reached at carteret@udel.edu.


Retrievability is an important and interesting indicator that can be used in a number of ways to analyse Information Retrieval systems and document collections. Rather than focusing solely on relevance, retrievability examines what is retrieved, how often it is retrieved, and how likely a user is to retrieve it. This matters because a document must be retrieved before it can be judged for relevance. In this tutorial, we shall explain the concept of retrievability, introduce a number of retrievability measures, and show how retrievability can be estimated and used for analysis (a concrete sketch of one such measure is given below). Since retrieval precedes relevance, we shall also provide an overview of how retrievability relates to effectiveness, describing some of the insights that researchers have discovered thus far. We shall also show how retrievability relates to efficiency, and how the theory of retrievability can be used to improve both effectiveness and efficiency. We shall then survey applications of retrievability, such as search engine bias and corpus profiling, before wrapping up with challenges and opportunities. The final session will look at example problems and ways to analyse and apply retrievability to other problems and domains. Participants are invited to bring their own problems to be discussed after the tutorial.
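As a concrete illustration (not code from the tutorial itself), the sketch below computes the simplest cumulative form of the retrievability score, r(d): the number of queries for which document d appears in the top c results. The query set and rankings are hypothetical stand-ins for running a large query set against a real retrieval system.

    # Cumulative retrievability: r(d) counts how often a document appears
    # within the top-c results over a set of queries. The rankings below
    # are hypothetical, standing in for real system output.
    from collections import defaultdict

    c = 2  # rank cutoff: a document counts only if it is ranked in the top c

    rankings = {            # query -> ranked list of document ids
        "q1": ["d1", "d2", "d3"],
        "q2": ["d2", "d1", "d4"],
        "q3": ["d2", "d4", "d1"],
    }

    retrievability = defaultdict(int)
    for query, ranked_docs in rankings.items():
        for doc in ranked_docs[:c]:
            retrievability[doc] += 1   # f(k, c) = 1 if rank k <= c, else 0

    print(dict(retrievability))        # {'d1': 2, 'd2': 3, 'd4': 1}; d3 is never retrieved

A measure of inequality such as the Gini coefficient is then typically computed over these per-document scores to summarise how unevenly retrievability is spread across the collection.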

This half-day tutorial is ideal for: (i) researchers curious about retrievability and wanting to see how it can impact their research, (ii) researchers who would like to expand their set of analysis techniques, and/or (iii) researchers who would like to use retrievability to perform their own analysis.


Bio: Dr. Leif Azzopardi is a Senior Lecturer within the Glasgow Information Retrieval Group and a full-time academic member of staff within the School of Computing Science at the University of Glasgow. His research focuses on building formal models for Information Retrieval, often drawing inspiration from other disciplines such as Quantum Mechanics, Operations Research, Microeconomics, Transportation Planning, and Gamification. In 2008, he developed a series of retrievability measures and has since published extensively on the subject and on how retrievability is fundamentally related to effectiveness and efficiency. He has been invited to give talks on the subject throughout the world, and has delivered related tutorials at SIGIR 2014 and ECIR 2015. Leif can be reached at leif.azzopardi@glasgow.ac.uk.