A structured questionnaire developed by Aguilera-Hermida (2020) and consisting of validated constructs was adopted for this study. The author developed the instrument based on the taxonomy of Kemp et al. (2019). Permission to use the instrument was requested by email, and the author granted the request and made the instrument available. The sections of the instrument relevant to the current study are outlined below:

Section A contained questions on demographic information, including age, gender, faculty, number of courses and respondents’ household structure.

Section B contained questions on Attitude, Satisfaction and Motivation. Respondents’ Attitude towards ERT was based on their preference for online teaching, while data on Satisfaction were based on students’ overall satisfaction with their courses during the ERT. Both constructs were measured on a 3-point scale (1 = disagree, 2 = neither agree nor disagree, 3 = agree). On a 4-point response scale (1 = not motivating, 2 = slightly motivating, 3 = motivating, 4 = very motivating), students also rated how motivating the following factors were for their learning after the ERT commenced: interaction with lecturers, talking to classmates, school activities, hanging out (studying, talking, eating, etc.), interest in class topics, completing schoolwork and finishing the degree/programme. The internal consistency (Cronbach’s alpha) for Motivation was 0.861.
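For illustration only (this was not part of the published instrument or analysis), the reported Cronbach’s alpha coefficients could be reproduced from the raw item responses with a short script based on the standard formula, alpha = k/(k − 1) × (1 − sum of item variances / variance of the total score). The function and the Motivation item names below are hypothetical placeholders.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale score)
    k = items.shape[1]                          # number of items in the scale
    item_variances = items.var(axis=0, ddof=1)  # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the total score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical column names for the seven Motivation items (rated 1-4)
motivation_items = ["interaction_with_lecturers", "talking_to_classmates",
                    "school_activities", "hanging_out", "interest_in_class_topics",
                    "complete_schoolwork", "finishing_degree"]
# alpha = cronbach_alpha(responses[motivation_items])  # 'responses' would hold the survey data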

Section C elicited data on Perceived Behavioural Control (Accessibility, Self-efficacy and Ease of use). Questions on Accessibility collected data on the extent of respondents’ access to a reliable digital device and internet service, communication platforms (such as Google Classroom, Microsoft Teams and Zoom) and technical support. These items were each rated on a 4-point scale (4 = always, 3 = most of the time, 2 = sometimes, 1 = never). The internal consistency for Accessibility was 0.801.

Data on Ease of use were collected based on respondents’ use of educational technologies during the ERT. On a 5-point response scale (5 = very frequently, 4 = frequently (once per week), 3 = occasionally (1 to 2 times per month), 2 = rarely and 1 = never), participants rated their use of the following: communication tools (Zoom, Teams, Google); online educational platforms (Canvas, Classroom, Blackboard, etc.); social media (LinkedIn, Instagram, TikTok, Facebook, Twitter, etc.); synchronous class sessions (live) and asynchronous videos (sent by lecturers). Good internal consistency was observed for these items (Cronbach’s alpha = 0.79).

Self-efficacy questions collected data on respondents’ assessment of how their skills had changed since the commencement of the ERT. Students assessed this change on a 5-point response scale (5 = much better, 4 = somewhat better, 3 = about the same, 2 = somewhat worse and 1 = much worse) based on six scholastic abilities: ‘complete assignments on time’, ‘new learning tools’, ‘successful in classes’, ‘discussion of topics with classmates and lecturers’, ‘manage group projects’ and ‘time management skills’. The internal consistency for Self-efficacy was 0.880.

Section D elicited data on Cognitive engagement; respondents were asked to compare their current school performance with their performance before the ERT using a 5-point response scale (5 = much better, 4 = somewhat better, 3 = about the same, 2 = somewhat worse and 1 = much worse). Cognitive engagement was measured using five items, namely ‘knowledge/learning’, ‘concentration’, ‘class attendance’, ‘level of engagement’ and ‘interest and enthusiasm’. The internal consistency for Cognitive engagement was 0.913.

A total of 394 copies of the questionnaire were administered to students across the sixteen faculties. However, 376 copies were retrieved, representing a 92.6% return rate, of which 10 were unusable. Hence, a total of 366 copies were used for this study (Table 1). Data analysis was carried out using SPSS; descriptive statistics, Pearson correlation and multiple regression were used in the analyses.
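As a minimal sketch only, an equivalent analysis pipeline could be run outside SPSS using pandas, scipy and statsmodels; the file name, column names and the choice of Cognitive engagement as the regression outcome are assumptions made for this example, not the authors’ actual model specification.

import pandas as pd
from scipy.stats import pearsonr
import statsmodels.api as sm

# Hypothetical file holding the 366 usable responses, one construct score per column
responses = pd.read_csv("ert_survey.csv")

# Descriptive statistics for the construct scores
constructs = ["attitude", "satisfaction", "motivation", "accessibility",
              "ease_of_use", "self_efficacy", "cognitive_engagement"]
print(responses[constructs].describe())

# Pearson correlation, e.g. between Self-efficacy and Cognitive engagement
r, p = pearsonr(responses["self_efficacy"], responses["cognitive_engagement"])
print(f"r = {r:.3f}, p = {p:.4f}")

# Multiple regression with Cognitive engagement as an illustrative outcome variable
X = sm.add_constant(responses[["attitude", "satisfaction", "motivation",
                               "accessibility", "ease_of_use", "self_efficacy"]])
model = sm.OLS(responses["cognitive_engagement"], X).fit()
print(model.summary())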