The instructions for our open-ended questions stated the following: "Not only correct and reliable information can be found on the internet, but also a lot of false and/or misleading information (e.g., fake news). Please indicate up to 5 strategies or rules that you use to evaluate the accuracy and reliability of information on the internet or to identify false information." In addition, the pre-service teachers were asked: "For each strategy/rule mentioned, please also indicate how relevant you consider the teaching of this strategy/rule to pupils in the classroom," using a scale ranging from 0 ("not relevant at all") to 6 ("very relevant"). The strategies were assessed via open statements, which were coded and summarized with the aid of a coding scheme following the standard approach by Mayring (2015), as used in previous research (Kaspar et al., 2010, 2014; Hoss et al., 2021). In a first step, a category system was developed on the basis of the first 10% of the data material by deriving categories from a total of 1,295 statements. This resulted in a category system comprising 17 categories. Subsequently, two persons coded the material independently after prior introduction to the categories. Inter-coder reliability was calculated using Cohen's kappa (Cohen, 1960) to ensure the applicability and objectivity of the categories. Agreement was very good across all categories, with a minimum of κ = 0.922. In the very few cases of disagreement, consensus was subsequently reached through discussion.
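The article does not specify which software was used to compute the inter-coder reliability; the following is a minimal sketch of how Cohen's kappa could be obtained for two coders' category assignments, assuming Python with scikit-learn. The category labels shown are hypothetical placeholders, not the 17 categories of the study.

```python
# Illustrative computation of inter-coder reliability (Cohen's kappa).
# Labels are hypothetical examples, not the study's actual category system.
from sklearn.metrics import cohen_kappa_score

# Independent category assignments by two coders for the same statements.
coder_a = ["check_source", "cross_check", "check_author", "cross_check", "check_date"]
coder_b = ["check_source", "cross_check", "check_author", "check_source", "check_date"]

# Cohen's kappa corrects observed agreement p_o for chance agreement p_e:
# kappa = (p_o - p_e) / (1 - p_e)
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.3f}")
```

Values close to 1 indicate near-perfect agreement beyond chance, which is the interpretation underlying the reported minimum of κ = 0.922.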