Oct 24 2010 Published under Conferences

Perspectives on adaptivity in information retrieval interaction – these are stream-of-consciousness notes, so forgive omissions, typos, etc.

This session actually runs at the same time as one titled Scholarly Publishing, but I chose it because the content seemed very interesting, and the other session was presenting papers, which I can presumably read on my own time.

Panelists: Birger Larsen, Peter Ingwersen, Peiling Wang, Diane Kelly, Marianne Lykke

system adaptation to user

  • dynamic user modeling
  • effective IA
  • enhanced search features
  • search integration
  • relevance feedback
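Relevance feedback, the last item above, is the classic mechanism by which a system adapts to the user. A minimal sketch of the standard Rocchio formulation (not any particular panelist's system; the function name and vector-as-dict representation are my own) looks like this:

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Adapt a query vector toward documents the user judged relevant.

    query, and each document, is a dict mapping term -> weight.
    alpha/beta/gamma are the usual Rocchio mixing weights.
    """
    terms = set(query)
    for d in relevant + nonrelevant:
        terms |= set(d)

    new_query = {}
    for t in terms:
        w = alpha * query.get(t, 0.0)
        if relevant:  # move toward the centroid of relevant docs
            w += beta * sum(d.get(t, 0.0) for d in relevant) / len(relevant)
        if nonrelevant:  # move away from the centroid of non-relevant docs
            w -= gamma * sum(d.get(t, 0.0) for d in nonrelevant) / len(nonrelevant)
        new_query[t] = max(w, 0.0)  # negative weights are conventionally clipped
    return new_query
```

For example, a query for "physics" plus one relevant document mentioning "quantum" yields an expanded query that now weights "quantum" too.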

searcher adapts to system through interaction

  • builds a mental model of systems… (too slow to type)

Format: 20 slides, 20 seconds each, for each dimension. Questions are written on slips and discussed in the interaction period. (In practice there was no time to write things on slips, and it was very hard to take notes, but the talks were good.)


Example of system adaptation to the user: logged in to Google, do an ego search. It finds you first, and it also finds different types of information (books, blogs, pictures, websites) – different verticals. In libraries we require people to search each of these verticals separately, and we justify this by teaching information literacy. Is that a good idea? Users don't want silos. Why don't we integrate? Because it hurts: metadata gets reduced to the lowest common denominator. We tried federated search and normalization, but users aren't so happy with federated search – it's not like Google. The new solution is integrated search (like Summon or EBSCO Discovery): harvest, then normalize. But development is left to the vendors – where does this leave us as a field? There are opportunities for research: how can we evaluate these systems scientifically rather than just relying on vendors? They ran a test integrating different types of information in physics; the information tasks were well defined, with graded relevance assessment. The open question: how to design and evaluate systems that successfully integrate different genres.
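The "harvest, normalize" step of integrated search boils down to mapping heterogeneous records from each vertical into one common schema before indexing. A toy sketch (the field names, `Record` class, and per-genre normalizers are illustrative assumptions, not any vendor's actual API):

```python
from dataclasses import dataclass

@dataclass
class Record:
    """Lowest-common-denominator schema shared by all genres."""
    title: str
    creator: str
    genre: str

# One normalizer per harvested vertical; each maps a raw record
# from that source's native metadata into the common schema.
def normalize_book(raw):
    return Record(title=raw["title"], creator=raw["author"], genre="book")

def normalize_blog(raw):
    return Record(title=raw["headline"], creator=raw["byline"], genre="blog")

NORMALIZERS = {"book": normalize_book, "blog": normalize_blog}

def harvest(sources):
    """sources: list of (genre, raw_record) pairs from different verticals."""
    return [NORMALIZERS[genre](raw) for genre, raw in sources]
```

The cost the panel pointed at is visible even here: any field that exists in only one vertical (page count, comment threads) has no home in the shared `Record` and gets dropped.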

[this is where my battery died, remainder reconstructed from handwritten notes]


Told a story about the explicit-o-saurus, who is now nearly extinct, but whom she would like to bring back given that users are more sophisticated now. There was lots of explicit feedback in the early days; then there was a move toward user models, then toward implicit feedback, and then toward agents. Users were unable or unwilling to provide explicit feedback back then, but they were also still learning to use the computer. Now users give feedback all the time, as on Netflix and Amazon. Be creative – is there a way to make it a game? An audience member mentioned trust, and also the expectation that the user will get something in return for giving feedback.


System adaptability is important, but systems can't do it alone; users must adapt, too. Users must learn and evolve all the time. The question is: how do we measure adaptability? There are existing measures like job adaptability and cross-cultural adaptability. Adaptability is a feature of the user, whereas learnability is a feature of the system. Users' ability to adapt needs to be encouraged – it is not helpful for everyone to have the same Google box but with different functionality underneath (good point).


[My pencil writing is nearly completely illegible here, but she described a three-year project to adapt the information architecture of an e-government site in Denmark. Are we heroes to say that we can model and understand the users?]

There are many interesting things to be taken from all of this. For one, look at all of the different time scales for adaptability: within a single transaction for the human, over the course of a few transactions for the system, and over the course of years for information architecture.
