AI technologies and information seeking

Submitted by: Anna-Lena Godhe
Abstract: The project, ReSearch: Researching the transforming landscape of information seeking – AI technologies and learning in Swedish schools, which starts in July 2025, seeks to understand what the use of generative AI technologies for information seeking entails for Swedish compulsory school. Seeking information is a taken-for-granted activity in many learning exercises, but the introduction of AI unsettles the conditions under which information can be sought and evaluated.
The presentation will focus on the general setting of the study and aim to generate a discussion amongst participants on how AI affects search and the implications for L1-teaching and learning. Previous research shows the difficulties young people face in evaluating and reasoning about the sources they encounter while looking for information on the internet (e.g., McGrew, 2020). Haider and Sundin (2022) distinguish between the sceptical approach to evaluating sources, which states that you cannot trust anything unless you have experienced it yourself, and a pragmatic approach, which emphasises the importance of reasonable trust in established sources. A sceptical approach runs the risk of contributing to a loss of trust in expertise, established sources, and public institutions; at the same time, it is important that the pragmatic approach does not end in a naïve trust in the same actors. Research has also underlined the importance of people understanding how algorithms work on different platforms (e.g., Lomborg & Kapsch, 2020), and more specifically in relation to chatbots and voice assistants (Parnell et al., 2022).
A central question is how to make students aware of the different infrastructures of AI technologies like ChatGPT and of search engines like Google. How does the infrastructure affect the “answers” we get when posing questions in these different settings, and how does this matter in an educational context? Search engines are often seen as neutral (e.g., Hillis et al., 2012), despite research showing that such neutrality is impossible (e.g., Noble, 2018). When pupils receive the results of their questions in the form of AI-generated text rather than a series of clickable links, the underlying machinery is likely to become even more invisible (e.g., Tlili et al., 2023). Shah and Bender (2022) argue that chatbot responses to user queries risk masking search engine bias and making it extremely difficult for users to discern the information sources. Research has also reported challenges in creating information-seeking content for teaching and learning in Swedish schools (Sundin & Carlsson, 2016). Seeking information is seen by many pupils as a simple look-up task rather than an activity involving understanding, interpretation, and evaluation (Marchionini, 2006), and information seeking in schools is often taken for granted and rarely problematised (Alexandersson & Limberg, 2012). How can source criticism be developed and taught in a digital environment that is increasingly saturated with AI technology, and how can we as users act if we want to avoid AI-generated answers?

References
Alexandersson, M., & Limberg, L. (2012). Changing conditions for information use and learning in Swedish schools: A synthesis of research. Human IT: Journal for Information Technology Studies as a Human Science, 11(2).
Hillis, K., Petit, M., & Jarrett, K. (2012). Google and the Culture of Search. Routledge.
Lomborg, S. & Kapsch, P. H. (2020). Decoding algorithms. Media, Culture & Society, 42(5), 745-761.
Marchionini, G. (2006). Exploratory search: from finding to understanding. Communications of the ACM, 49(4), 41-46.
McGrew, S. (2020). Learning to evaluate: An intervention in civic online reasoning. Computers & Education, 145, 103711.
Noble, S. U. (2018). Algorithms of oppression. New York University Press.
Parnell, S. I., Klein, S. H., & Gaiser, F. (2022). Do we know and do we care? Algorithms and attitude towards conversational user interfaces: Comparing chatbots and voice assistants. In Proceedings of the 4th Conference on Conversational User Interfaces (pp. 1-6).
Shah, C., & Bender, E. M. (2022). Situating search. In ACM SIGIR Conference on Human Information Interaction and Retrieval (pp. 221-232).
Sundin, O., & Carlsson, H. (2016). Outsourcing trust to the information infrastructure in schools: How search engines order knowledge in education practices. Journal of Documentation, 72(6), 990-1007.
Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., & Agyemang, B. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning Environments, 10(1), 15.