Panning for gold & an exercise in heuristics

Panning for gold

Edwards, Sylvia L., & Bruce, Christine S. (2006). Panning for gold: Understanding students’ information searching experiences. In Bruce, C.S., Mohay, G., Smith, G., Stoodley, I., & Tweedale, R. (Eds.), Transforming IT education: Promoting a culture of excellence (pp. 351-369). Santa Rosa, California: Informing Science Press.
Need to know: the idea that teachers need to know how students actually use search tools so they can intervene presupposes that students should not simply be taught to search one particular way (p. 352).
Relevance of information: that relevance varies across students seems obvious to me. They each have separate purposes, and assuming an objective standard of information value is a bit authoritarian (p. 353).
Cognitive abilities: [Is this like the assumption in economics that all information users behave rationally?] (p. 354).

Structure: when a study assumes little understanding of the information environment, does that presuppose a fixed structure, one where students do not learn from their approach and its results? (p. 360).
Blaming the tool: users blame the tool and not their own abilities [how many tools include a prominent help function?] (p. 362).
Reflective practice: thinking about how tools are used while using them. That this might not be common could be down to time constraints and information overload (p. 356).

Heuristic evaluation

Manzari, L., & Trinidad-Christensen, J. (2006). User-centered design of a web site for library and information science students: Heuristic evaluation and usability testing. Information Technology and Libraries, 25(3), 163-169.
Boring: this paper has the most soporific immediate effect!
Nielsen: the authors cite Jakob Nielsen to endorse an engineering approach to web interface design (p. 164) and use three to five ‘expert’ users for feedback, citing his claim that more than five evaluators yields little additional information.
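Nielsen’s “five users is enough” claim rests on a diminishing-returns model (Nielsen & Landauer): the proportion of usability problems found by n evaluators is 1 − (1 − λ)^n, where λ is the average per-evaluator detection rate. A minimal sketch, using the λ ≈ 0.31 figure Nielsen reports as a typical average (it varies by study):

```python
def problems_found(n_evaluators, lam=0.31):
    """Expected fraction of usability problems found by n independent
    evaluators, each detecting a fraction lam of problems (Nielsen-Landauer)."""
    return 1 - (1 - lam) ** n_evaluators

# Diminishing returns: the jump from 1 to 5 evaluators is large,
# while 5 to 15 adds comparatively little.
for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} evaluators -> {problems_found(n):.0%} of problems")
```

With λ = 0.31, five evaluators already uncover roughly 85% of problems, which is the basis for the ‘no greater value beyond five’ claim the paper leans on.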
Bureaucracy: subjective choices appear to override any consideration of students NOT already familiar with the special collection or the institution (p. 165). [Is that a prerequisite assumption?]
Testing: [There appears to be little or no consideration of longitudinal changes to search skill-sets or tools. The example of being jarred by a changed page layout was somewhat on point, considering the half dozen pages one sometimes needs to traverse in the QUT system (including those damned security verification pages) to get to a resource.] (pp. 166-167).
Heuristics: people asking for things they didn’t seem to have used – there may be time pressure NOW but a desire to look at details LATER (p. 167).


Interpellation: some interest in academic search technologies is implied, but there is little evaluation of ethics, motivations, or command structures. ‘Just do as you’re told.’
Absences: neither paper attempts to predict changes to user skills and tool-sets; each assumes an unchanging tableau.
Utility: Fair examples of how these things might be done.
Questions: where are the disruptive/dissenting methodologies (if they exist)?
