The current study was intentionally organized to focus on objective learning performance outcomes resulting from three common instructional strategies: Passive learning, Active learning, and Gamified instruction. Although research in medical education examines both knowledge outcomes and survey constructs such as difficulty, engagement, motivation, and satisfaction, this study’s exclusive focus on performance provides a clearer answer to a thorny, core curricular question: which method most effectively improves knowledge acquisition? Because the study used a one-way repeated-measures crossover design with unique information taught in each session, each participant served as their own control, minimizing individual variability and strengthening within-subject validity.
Overall, our data demonstrate that Active learning instruction consistently outperformed both Passive and Gamified instruction across all classroom sessions. Although this outcome aligns with previous studies of Active learning in medical education [5, 44–46], the finding is particularly notable given the relatively modest cohort size and repeated-measures study design. Furthermore, the large educational effect size (partial η² = 0.20) underscores the practical importance of this outcome and the value of active engagement, collaboration, and application-based learning strategies in preclinical education [42].
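For context, partial eta squared in a repeated-measures ANOVA expresses the proportion of effect-plus-error variance attributable to the effect:

\[
\eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}
\]

By this definition, the observed value of 0.20 indicates that roughly 20% of the relevant variance is explained by instructional method, above the 0.14 benchmark commonly cited for a large effect.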
Active learning is grounded in the principle that students construct knowledge through interaction and application combined with self-guided reflection. This approach promotes deeper cognitive processing by requiring learners to retrieve and apply information in varied contexts [44, 46]. Numerous studies have examined Active learning but, interestingly, they focus on how new information is processed [15, 47]. The findings of our study align with Kolb’s Experiential Learning Theory, which emphasizes that students learn through iterative cycles of active engagement and reflection [48]. Students were allowed to interact with the material, apply it in context, and then disseminate it to others, thereby reflecting on their own work.
Although Active learning methods enhanced objective performance, several factors influence their effectiveness. Research has shown that “spaced education,” in which shorter learning encounters are spaced and repeated over time, leads to more durable learning than the “bolus” sessions typical of many medical school curricula [21]. This approach parallels spiral curricula that scaffold knowledge through repeated opportunities for retrieval and application, much like what students encounter in clinical settings [49, 50]. Because this study examined only short-term learning performance within a single course, future work should explore how curricular alignment can reinforce learning over time, especially before National Boards (APMLE) Part 1.
Anecdotally, another consideration affecting Active learning is that perceptions of learning often diverge from actual learning [29]. Deslauriers and colleagues noted that students in Active classrooms felt they learned less, even when their performance improved. Within our study, Active learning was well received, with students remarking that the strategy was “more memorable,” whereas in Gamified sessions students expressed frustration when unable to outperform their peers. This suggests that the success of Active learning depends not only on instructional quality and content complexity but also on student expectations, particularly when students are resistant to new forms of learning [51]. Likewise, poorly scaffolded Active learning sessions can increase cognitive load and undermine learning performance and motivation. Within this study, we theorize that although a self-directed learning approach [52] was implemented in both Active and Gamified instruction, the Active learning environment fostered more collaboration and engagement, resulting in larger learning performance gains. Although the current study did not measure cognitive load, engagement, or motivation directly, prior research indicates these factors can moderate Active learning outcomes, implying that Active learning is not a universal solution but a flexible framework to refine [53, 54].
While gamification has been shown to enhance motivation, enjoyment, and participation among medical students [30, 34, 35, 54], its direct impact on student learning is not entirely consistent. Festinger’s social comparison theory suggests that peer competition can heighten engagement [55] and foster positive experiences with medical content [56–58]. However, systematic reviews report mixed effects on knowledge acquisition, citing irrelevance and lack of motivation as hindering variables [36–38]. In the present study, Gamified sessions yielded positive learning gains but slightly lower performance than Passive learning and significantly lower than Active learning. Several explanations may account for this. First, gamification relies on extrinsic motivators such as points, prizes, and leaderboards, which can shift cognitive resources away from content understanding and toward game performance [59]. These dynamics could enhance engagement for some learners while reducing it in others, producing uneven benefits and masking class-level performance effects [53, 60–62]. Likewise, misalignment between game activities and assessment objectives can limit learning, for example when a game rewards recall over higher-order thinking [14, 63]. Gamified learning, if poorly implemented, can introduce stress or frustration that detracts from engagement and may negatively affect learner well-being [64, 65].
Gamified learning also lacks the formalized instructor feedback that is a hallmark of Active learning [66], an absence that may hinder engagement, particularly once the novelty of the game wears off [67]. Likewise, because Gamified activities often reward speed over problem solving, this may explain why Gamified learning did not match or surpass Active learning gains in our study [67]. Our findings are consistent with Ericsson’s theory of deliberate practice [68], which emphasizes that expertise develops through structured repetition and timely feedback; however, the gains under Gamified instruction were significantly smaller than under Active learning. Though initially surprising, this result may reflect the games’ lack of structured feedback, which limited the repetition needed to build durable learning.
Implications for podiatric medical education
This study provides one of the first empirical, within-subjects comparisons of instructional strategies in U.S. podiatric medical education. The finding that Active learning produced the largest and most consistent learning gains has practical implications for preclinical courses such as microbiology and immunology, where early mastery directly supports podiatric clinical competencies. Podiatric practitioners must rapidly integrate microbiological and immunological knowledge into wound management, infection control, and antimicrobial selection, areas where applied understanding and differential diagnosis are critical to patient outcomes.
As podiatric curricula continue to evolve toward competency-based and integrated learning environments, these results highlight that pedagogical design, not novelty, drives effective learning. Implementing structured active learning strategies such as flipped classrooms, team-based learning, problem-based learning, simulations, and jigsaw activities can foster not only short-term learning gains but also higher-order reasoning. In contrast, the finding that Gamified instruction performed no better than Passive learning suggests that competition alone does not ensure durable knowledge gains, reinforcing the need for alignment among teaching methods, learning outcomes, and assessments.
Limitations
Several limitations should be noted. One key limitation is the small sample size. This study focused on a single cohort of first-year podiatric medical students from the United States, which may limit the generalizability of the findings to other educational institutions or systems. Additionally, only short-term knowledge gains were measured; long-term retention, including encoding, storage, and retrieval, was not assessed and should be examined longitudinally. Finally, quiz reliability (KR-20) was not evaluated given the small number of items per session, which was due in part to course time constraints; subsequent studies should include instrument quality metrics to ensure measurement precision.
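For reference, the KR-20 statistic mentioned above is the standard internal-consistency index for dichotomously scored quizzes:

\[
KR\text{-}20 = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma^2}\right)
\]

where \(k\) is the number of items, \(p_i\) is the proportion of students answering item \(i\) correctly, \(q_i = 1 - p_i\), and \(\sigma^2\) is the variance of total scores. Because reliability estimates from quizzes with very few items tend to be unstable, computing KR-20 here would have offered limited value, consistent with the rationale above.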