Over the past decades, computer technologies have had profound impacts on education, especially as access to computers and the internet has increased in Pre-K–12 education settings. The creation and sale of educational technologies and related software (often referred to as “Ed Tech”) is a huge global industry. Already a multibillion-dollar market, the global K–12 Ed Tech sector is projected to reach several hundred billion dollars by the end of this decade, with much of the growth in North America.1 The goals and applications of Ed Tech products are quite varied, and include personalizing instruction, supporting learners with special needs, enabling collaborative learning, providing continuity between in- and out-of-school settings, assisting teachers in managing instruction and assessment, and preparing students to be digitally literate and capable as future workers and citizens. As technology tools are increasingly used in Pre-K–12 schools, it is important to understand how novel technologies can affect learning and teaching in science, technology, engineering, and mathematics (STEM) education, as they are commonly embedded across a wide range of innovations.
It is beyond the scope of this report to conduct a comprehensive review of technology in education (see Duran, 2022, for an overview). The first part of this chapter provides an overview of some of the types of technology-based resources created specifically for STEM education that have been developed iteratively and studied sufficiently in recent decades
___________________
1 See https://market.us/report/k-12-education-technology-edtech-market/#overview
to provide an evidence base on their potential impacts and the factors that affect how likely they are to spread more widely and to be taken up and adapted effectively in new contexts. The chapter also considers the emerging frontier of rapidly developing AI technologies in education, which have some distinctive and unprecedented advantages as well as risks. The chapter goes on to examine the potential of new technologies for expanding opportunities to learn and participate in STEM in powerful ways, while also considering persistent challenges of ensuring that these opportunities are fully and equitably accessible to students and schools regardless of geographic location, socioeconomic status, or student demographics. Finally, the chapter considers issues related to the rapid pace at which new technology-based resources may stream into Pre-K–12 education—a pace that is increasingly outstripping the standard capacities of research and policy making processes to vet their quality in a timely way and ensure their alignment with educational goals and student needs.
The committee begins with a brief discussion of research-based technologies that have demonstrated positive effects on students’ learning and on teaching practices in STEM. It is important to note that this review is not exhaustive but describes some of the most widely used and researched technologies (see Table 6-1).
Virtual experiments using simulations have been widely used to help students learn. Research has identified several strengths of virtual experiments for supporting students’ learning: enabling testing conditions that are impossible in real-world laboratories; highlighting conceptually salient features of experiments while constraining students’ manipulation of relevant variables; and presenting dynamically changing graphs and tables that let students test multiple “what-if” scenarios, see relationships between variables, and interpret data more accurately (Chini et al., 2012; de Jong, Linn, & Zacharia, 2013; Puntambekar et al., 2021; Sullivan et al., 2017). However, educators also need to understand that simulations rely on models of real-world scenarios and, by necessity and definition, models strip away contextual elements to focus on specific phenomena. Further, students can run trials quickly during virtual labs, potentially leading them to engage in trial-and-error or “play” types of behaviors without thinking about the underlying reasons for their experimental choices (Bumbacher et al., 2018; Renken &
TABLE 6-1 Technologies to Support Learning and Teaching
| Type of Technology | What It Supports | Who It Supports | Benefits | Challenges |
|---|---|---|---|---|
| Simulations/Virtual Experiments | | Students | | |
| Visualizations, Representations | | Students | | |
| Models and Modeling | | Students | | |
| Immersive Environments | | Students | | |
| Intelligent Tutoring Systems, Conversational and Dialogue Agents | | Students | | |
| Automated Assessments | | Students and Teachers | | |
| Dashboards for Classroom Orchestration | | Teachers | | |
SOURCE: Puntambekar, 2024.
Nunez, 2013). Simulations can be used strategically to support learning of abstract content, or in instances where setting up physical labs might be expensive, dangerous, or time consuming—all of which are important considerations when selecting and potentially scaling innovations.
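To make the “what-if” affordance concrete, the following is a purely illustrative sketch (not drawn from any system cited above) of the kind of rapid exploration a virtual experiment permits: a learner varies gravitational acceleration, something no physical classroom lab can do, and immediately sees how a simple pendulum’s period responds. The pendulum model, parameter values, and output format are all assumptions chosen for illustration.

```python
import math

def pendulum_period(length_m: float, gravity: float) -> float:
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / gravity)

# "What-if" trials a learner might run in a simulation: the same pendulum on
# Earth, the Moon, Mars, and Jupiter -- conditions a school lab cannot provide.
length = 1.0  # meters (hypothetical value)
conditions = {"Earth": 9.81, "Moon": 1.62, "Mars": 3.71, "Jupiter": 24.79}

print(f"{'Location':<10}{'g (m/s^2)':>12}{'Period (s)':>12}")
for place, g in conditions.items():
    print(f"{place:<10}{g:>12.2f}{pendulum_period(length, g):>12.2f}")
```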
Visualizations can be part of simulations in which students manipulate variables, or they can be used as stand-alone tools that provide graphical, multiple representations of complex or invisible phenomena to foster learning. Visualizations range from static depictions to dynamic visualizations, and from concrete representations to abstract (Braithwaite & Goldstone, 2013), mathematical, and symbolic ones. Mathematical representations play an important role in helping students visualize data (Wu & Shah, 2004) across time scales. Research suggests that visual representations can support students’ deeper content understanding (e.g., de Jong, 2006) and “representational competence,” or the ability to understand and translate between representations (Shafrir, 1999). Nonetheless, research has also shown that deeper learning with multiple representations depends on how students translate between representations, i.e., how students understand the connections across representations (e.g., Ainsworth, 2006, 2014; Kozma & Russell, 2005) based on the core principles of the domain. Both building support for translation into the representations themselves and having teachers incorporate appropriate instructional strategies are effective in ameliorating the difficulties students face in understanding multiple representations.
While simulations are models created by others and made available as tools for teaching, modeling practices involve students in constructing, evaluating, and revising models. Research suggests that immersing students in model building means that they are engaging with content in authentic ways as they create, evaluate, and revise their ideas; in these ways, model building is an integral aspect of STEM (Gilbert & Justi, 2016). Studies have focused on various types of scaffolding that can be built in modeling environments to provide task- and strategy-specific adaptive feedback (Basu, Biswas, & Kinnebrew, 2017). Instructional scaffolds designed to support students’ more open-ended, self-directed science inquiries using models can promote a greater understanding of science and the processes underlying the phenomena under study. Further, research has indicated that prompts to scaffold students’ self-reflection and monitoring as they learn with models
promote a more comprehensive grasp of the scientific concepts under investigation (Linn et al., 2018).
Immersive environments include augmented reality (AR), mixed reality (MR), and virtual reality (VR) technologies. This is an emerging class of technologies that combine physical objects, actions, and environments with digital representations, either by augmenting the physical world with digital attributes or by augmenting a digital environment with aspects of the physical world. VR uses technologies that simulate an artificial environment; users experience this environment through senses such as sight, sound, and touch, and may be able to interact with it or receive feedback from it (Cromley, Chen, & Lawrence, 2023). AR technologies superimpose virtual objects onto the existing environment; users may be able to interact with these objects but are still able to see the real world around them (Scavarelli, Arya, & Teather, 2021). Meta-analyses have found that VR and AR interventions have positive effects on factual and conceptual learning as well as transfer tasks compared to instruction without the technology, although the size of these effects may vary by the pedagogical strategies used, the subject, and the classroom setting (Cromley, Chen, & Lawrence, 2023; Garzón et al., 2020; Xu et al., 2022). For example, Cromley and colleagues’ (2023) meta-analysis of studies of STEM learning in middle and high school found the strongest effects for VR combined with active learning, such as a specific constructive task or answering transfer questions that required students to transform information presented in the VR, rather than engaging in passive viewing. Garzón and Acevedo’s (2019) meta-analysis of the impact of AR technologies on student learning classified three types of pedagogical strategies used in comparison groups—multimedia, traditional lectures, and traditional pedagogical tools—and analyzed them as a moderator, finding that AR yielded significantly larger impacts on student learning than each of these. They also reported particularly large effects on learning content related to engineering.
Immersion in the learning experiences seems to be a major affordance of VR/AR learning technologies; however, evaluating the ethical, practical, and safety implications of incorporating immersive VR in classrooms is essential before large-scale deployment. Practical considerations such as the constraints of time and curricula, not to mention the high hardware costs of immersive VR (e.g., head-mounted devices) must be taken into account. Not having adequate space or people to supervise students in immersive virtual environments has been found to be a major constraint, in addition to regular classroom constraints such as time and integration into full curricula (Southgate et al., 2019).
Automated intelligent technology systems have been used to support students’ learning for several decades. Intelligent tutoring systems (Koedinger & Corbett, 2006; VanLehn et al., 2016), conversational agents (e.g., Graesser, 2016; Ward et al., 2013), and dialogue agents (Katz et al., 2021) have been used to support STEM learning, with benefits for learners in terms of mastery of the content knowledge addressed by the interventions. Teachable agents embedded in technology applications are based on research showing that learning through social interaction (Palincsar & Brown, 1984) and learning by teaching others have positive effects (Biswas et al., 2005; Roscoe & Chi, 2007), as well as on the benefits of active learning (Bransford, Brown, & Cocking, 2000). One of the notable aspects of teachable agents is that they can also make the thinking process visible to students (Chase et al., 2009), which can significantly impact learning and metacognition (Schwartz et al., 2009).
One of the key areas of intelligent technologies in education is that of automated scoring of students’ written work (Celik et al., 2022), including assessments of students’ inquiry skills (Gobert et al., 2018). This is an important area where Natural Language Processing (NLP) techniques can help by providing immediate feedback to students (Gerard & Linn, 2022; Zhai et al., 2020), while at the same time supporting teachers who often have little time to provide detailed feedback on students’ written work. In a meta-analysis of 24 studies comparing automated adaptive guidance versus guidance provided in typical instruction, automated adaptive guidance was found to be significantly more effective. However, teachers can play an important role by using automated assessments and guidance provided by automated systems to inform their instruction (Puntambekar, 2024). New and emerging generations of such systems are discussed further in this chapter in the section on Artificial Intelligence.
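The automated scoring systems cited above are considerably more sophisticated, but a minimal sketch of the underlying idea, training a text classifier on previously scored responses and using it to return immediate feedback, might look like the following. The tiny training set, labels, and feedback messages are invented for illustration and would not support a real assessment.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical previously scored explanations (1 = names a mechanism, 0 = does not).
responses = [
    "the ice melts because heat energy transfers from the warm air to the ice",
    "the ice melts because it gets hot",
    "energy moves from the warmer object to the colder object until they equalize",
    "it just melts when you leave it out",
]
scores = [1, 0, 1, 0]

# Train a simple scoring model on the labeled examples.
scorer = make_pipeline(TfidfVectorizer(), LogisticRegression())
scorer.fit(responses, scores)

# Score a new response and return immediate, canned feedback.
new_response = "heat flows from the air into the ice so it melts"
predicted = scorer.predict([new_response])[0]
feedback = ("Good: you described energy transfer as the mechanism."
            if predicted == 1
            else "Try again: what makes the ice melt? Where does the energy come from?")
print(feedback)
```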
Intelligent adaptive dashboards can help with classroom orchestration (Holstein, McLaren, & Aleven, 2019) by providing visualizations of critical real-time data on complex states (Dickler, Gobert, & Pedro, 2021), assisting with adaptive decision making, and identifying areas requiring attention. Dynamic algorithms can be applied to assess progress based on features sensed within the environment and correspondingly inform dashboard visualizations. Research suggests that visualizations of
group progress and individual students’ work help teachers identify groups and/or individuals requiring the teacher’s attention and support (Martinez-Maldonado et al., 2015). Although real-time dashboards on dedicated teacher devices have proven effective, they can also increase the burden on teachers by requiring them to manage multiple data streams simultaneously (Martinez-Maldonado et al., 2015). However, co-designing and prototyping dashboards with input from teachers has helped in designing dashboards that balance orchestration load with the important information that teachers need for effective orchestration (Olsen, Rummel, & Aleven, 2021).
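As a schematic illustration only (not the design of any dashboard cited above), the logic behind such an alerting dashboard might resemble the following sketch, in which simple progress features sensed for each group are combined into a flag that orders groups by their likely need for the teacher’s attention. The feature names and thresholds are assumptions.

```python
from dataclasses import dataclass

@dataclass
class GroupState:
    name: str
    steps_completed: int           # how far the group has progressed in the task
    minutes_since_progress: float  # time since the last recorded action
    help_requests: int             # explicit requests logged by the system

def needs_attention(g: GroupState, total_steps: int = 8) -> bool:
    """Flag groups that are far behind, stalled, or asking for help (illustrative thresholds)."""
    behind = g.steps_completed < total_steps * 0.5
    stalled = g.minutes_since_progress > 5
    return behind or stalled or g.help_requests > 0

groups = [
    GroupState("Group A", steps_completed=7, minutes_since_progress=1.0, help_requests=0),
    GroupState("Group B", steps_completed=2, minutes_since_progress=9.5, help_requests=1),
    GroupState("Group C", steps_completed=5, minutes_since_progress=2.0, help_requests=0),
]

# A dashboard would render this visually; here flagged groups are simply sorted to the top.
for g in sorted(groups, key=needs_attention, reverse=True):
    status = "NEEDS ATTENTION" if needs_attention(g) else "on track"
    print(f"{g.name}: {g.steps_completed} steps, {g.minutes_since_progress:.0f} min idle -> {status}")
```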
Given the current moment, it is important to consider the role of innovation within Pre-K–12 STEM education with respect to some of the contemporary shifts caused by recent advances in artificial intelligence (AI). In the time since this report was commissioned, society has seen an explosion in the prominence and ubiquity of AI, largely fueled by large language models (LLMs). In some respects, the large organizations that develop and maintain widely used LLMs are dominating the innovation landscape and exemplify scaling, but with little guidance on how these technologies might be used most effectively. It is worth noting that applications of AI to STEM education have been in development for more than half a century, including some of the applications discussed in the previous section. What distinguishes many current technologies from early developments of AI in education is the wider set of activities that current AI in education technologies afford teachers, learners, administrators, parents, and organizations, and the opportunity to significantly shift who participates in STEM, what that participation looks like, where these experiences take place, and how learning and engagement are recorded and analyzed.
Looking across the AI in education landscape, and specifically examining ways that AI can shift who participates in learning, there are several examples where AI has enabled new forms of access. For instance, technologies that support speech recognition (Southwell et al., 2024), eye tracking (Ke et al., 2024), and gesture-based input (Abrahamson, Ryokai, & Dimmel, 2024) have created additional opportunities for people with disabilities to engage with otherwise inaccessible STEM content and experiences (Worsley et al., 2021). This expansion in accessibility can result from offering multimodal inputs and enabling multimodal outputs that make fewer assumptions about user ability. While far from perfect, many of these input and output modalities are available through contemporary VR headsets
and through the use of computer vision. These interfaces can also allow younger students to access and engage with increasingly complex concepts and ideas, once again pointing to a shift in who is invited to participate in STEM learning experiences (Worsley et al., 2021).
Examples of gesture-based input, or embodied learning experiences, also point to innovations in what learning itself looks like. Instead of the traditional practice of reading textbooks, learners may be transported into a virtual world where they conduct scientific experiments alongside peers and adults from around the world (Lester et al., 2014).2 Such programs exemplify how educational technology can help realize computer-supported collaborative learning. Even in the absence of other humans participating in a virtual space, recent technological innovations can support learners through intelligent virtual agents that embody human-like characteristics and can support guided inquiry across a nearly limitless range of learning contexts (Johnson & Lester, 2016). These intelligent virtual agents, or intelligent tutors, take many forms and can offer the types of personalization that society has grown to expect from commercial recommendation systems.
In the traditional in-person classroom context, AI technologies are being used to transform the classroom experience. Technologies that hinge on AI capabilities can be found in the robots and AI-partners that support classroom inclusion, discussion, and engagement (D’Mello et al., 2024). Broadly speaking, researchers and educators are able to leverage AI to shift the classroom experience into something that is more collaborative and fluid than the types of experiences that dominated the 20th century classroom.
Technology is also transforming where learning takes place. The shift to remote learning during the COVID-19 pandemic is one example of how technological innovations enabled classroom-based learning to become accessible from home. Additionally, researchers are leveraging mobile technologies, for example, to support learning within community-based settings and while navigating different geographical spaces. Many of the initiatives within this line of work take advantage of GPS technologies and the exciting ways that learners can leverage technology to interact with and capture rich information about the world around them (Marin et al., 2020; Taylor, 2017). As previously highlighted, AI technologies are also being used to support learning within and through virtual reality. Virtual experiences, which range from the use
___________________
2 Note that this might not be fully aligned with contemporary standards and ideas about science practices. More generally, emerging technologies may usher in paradigm shifts and new modes of activity that may challenge current views of learning and best practices in instruction. Research will be necessary to evaluate whether they are successful and, if so, for whom and under what conditions.
of specialized headsets to content experienced through a computer screen, might enable learners to navigate a distant geographical location, tour a nearby museum, or visit a mythical world of their own creation. The idea of students creating their own virtual content is one of the unique aspects of contemporary AI. The level of technical proficiency needed to create digital content has drastically decreased over the past ten years. Students and teachers now have access to powerful AI tools in easy-to-use, low-cost, web-based platforms that require little prior experience with traditional programming languages (Vadaparty et al., 2024; Weintrop, 2019).
Finally, researchers and schools are using AI to rethink important aspects of how to study, support, and measure student learning. This ranges from analytics approaches (Blikstein & Worsley, 2016) to detecting how students might feel about a given learning experience (Loderer, Pekrun, & Lester, 2020) to building computational models of student pathways through different educational settings (Pardos, Chau, & Zhao, 2019; Shao, Guo, & Pardos, 2021). It also includes ways of expanding what the field considers as indicators of student learning. For instance, many researchers have access to automated transcripts of learner-spoken utterances that can be used to better understand how well students are connecting with course content and adopting disciplinary language. Educators might then use that information to inform how they design upcoming lessons. These are only a small sample of the many points of integration between AI and educational technology, and that landscape continues to grow as additional tools become increasingly accessible to learners, educators, and communities around the country.
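As one hypothetical illustration of the transcript-based indicators mentioned above, the sketch below estimates how often utterances in automatically transcribed group talk use disciplinary vocabulary across lessons. The word list and transcript snippets are invented, and published analyses in the learning analytics literature are considerably richer.

```python
# Hypothetical disciplinary terms for a unit on energy transfer.
DISCIPLINARY_TERMS = {"energy", "transfer", "conduction", "insulator", "temperature"}

# Hypothetical automated transcripts, grouped by lesson.
transcripts = {
    "Lesson 1": ["it gets hot", "the metal one feels colder", "maybe energy moves"],
    "Lesson 3": ["conduction moves energy through the metal",
                 "the insulator slows the energy transfer",
                 "temperature goes up when energy goes in"],
}

def disciplinary_rate(utterances: list[str]) -> float:
    """Fraction of utterances containing at least one disciplinary term."""
    hits = sum(any(term in u.lower().split() for term in DISCIPLINARY_TERMS) for u in utterances)
    return hits / len(utterances)

for lesson, utterances in transcripts.items():
    print(f"{lesson}: {disciplinary_rate(utterances):.0%} of utterances used disciplinary terms")
```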
Although the development and implementation of Ed Tech resources in STEM has great promise and may empower more students in many cases, these resources can also be biased in various ways or distributed unevenly in ways that perpetuate inequities in education (National Academies of Sciences, Engineering, and Medicine [NASEM], 2022; U.S. Department of Education, Office of Educational Technology, 2024). The 2024 National Educational Technology Plan (NETP) highlighted three digital divides that affect the equitable support of learning with Ed Tech: the digital access divide, the digital design divide, and the digital use divide.
Each of these digital divides is important, and they need to be considered together. The abrupt need for remote learning during the COVID-19 pandemic, and the emergency funding that became available as a result, spurred many schools to dramatically increase their students’ access to devices and improved connectivity (NASEM, 2020). Because so many tech-enabled educational resources are web-based, robust internet access in all schools and access to a sufficient number of devices that are available on demand in classrooms and kept updated and functioning have become foundational necessities for modern education. As of the end of 2023, about three-quarters of public school districts across the country were meeting the goal of bandwidth of at least 1 Mbps per student, which the Federal Communications Commission (FCC) has deemed sufficient to support digital learning, as reported in the ConnectK12 2023 Report on School Connectivity. Although this has been a heartening increase over previous years, many millions of students are in public schools that still lack adequate bandwidth. Students’ access to the internet outside of school also remains a challenge. Due to a lack of continuing funding from Congress, the FCC’s Affordable Connectivity Program, which had helped lower-income households afford broadband, was discontinued as of June 2024.3 Patterns of access in both schools and homes have mirrored the well-known problems of uneven and inequitable distribution of resources, often correlated with social and economic segregation and school funding policies. Rural areas, and rural tribal areas in particular (Mack et al., 2022), have been especially vulnerable to the higher costs per megabit of broadband and the sparseness of internet fiber systems.
As noted previously, some technology-enhanced learning resources have requirements that go well beyond merely having broadband access. Technologies that require specialized hardware, such as VR headsets, may be prohibitively expensive to provide in sufficient numbers to entire classrooms and hard to manage even in wealthier schools with smaller class sizes and good staff support. Many technologies also impose a heavy burden of set-up, maintenance, battery charging, and maintaining compatibility with school or district operating systems and firewalls. However, new technologies often become more affordable and easier to use once they have been on the market for a while. As individual schools and districts make local decisions
about trade-offs between costs of implementation and return on investment, it is likely that socioeconomics will be a significant driver in which schools acquire new technologies, when they acquire them, and how they distribute or ration them.
While making progress on reducing the digital access divide is necessary, it is not sufficient on its own. Research indicates that improved access by itself has little impact on student outcomes and may even have negative impacts (Escueta et al., 2020). Consistent positive results are seen when access to devices and good connectivity are coupled with high-quality digital content and pedagogical activities, as highlighted in the NETP’s framing of the digital use divide. However, as discussed in the next section, many Ed Tech products have not been thoroughly researched and lack evidence one way or the other as to their educational effectiveness. During the COVID-19 pandemic, many schools took up new digital resources on an emergency basis, without the benefit of careful selection and planning processes or systems for managing and evaluating change. Often, the technological solutions were attempts to find quick substitutes for traditional teaching practices, such as having students view online content or take online tests, and many administrators and teachers still have not had sufficient time or support to systematically build their capacity to understand, evaluate, and coordinate more active and innovative uses of educational technologies. Many factors, such as pedagogy, underlying curricula, participatory structures, and teachers, play an essential role in helping students understand how and why these tools have been designed in certain ways and how they can be used to promote reflection, solve problems, and accomplish goals. Ensuring that all educators have equitable access to time and professional learning resources to build this capacity is central to bridging the digital design divide.
Recognizing and closing the existing digital divides described in the 2024 NETP is essential to overcoming traditional patterns of inequity in who has access to and who benefits from the distribution of educational resources. However, it is also important to consider ways in which emerging technologies may be powerful tools for promoting and sustaining new kinds of opportunities for all learners, with inclusiveness built into their foundations. One challenge for addressing equity in education is that it often entails disrupting entrenched patterns that tend to reify inequities or that routinely exclude or marginalize certain students, such as students with learning differences (NASEM, 2022, 2024). Incorporating new technologies into teaching and learning tends to be disruptive by its nature, but this disruption can be viewed as offering opportunities to remake learning experiences in new ways that promote participation, a sense of belonging, and meaningful learning for all students. The recent National Academies consensus report on Equity in K–12 STEM Education emphasizes the importance of goals such as promoting students’ agency, supporting
sense-making during learning, and leveraging linguistic and cultural assets and ways of knowing (NASEM, 2024). As highlighted in the previous section of this chapter, there is striking potential for innovative technologies to be shaped and applied in ways that expand and diversify types and modalities of participation in STEM learning, while also cultivating interest and broadening opportunities for all students to form a strong identity as someone who can learn and use STEM. Technologies can also provide opportunities to connect classrooms—in potentially vivid and engaging ways—to other people and resources in students’ local communities as well as around the world. This may help cultivate a sense of STEM as a dynamic shared enterprise with many different participants. If they are properly designed and deployed, new technologies offer the means to advance these goals. The NETP provides examples of principles and practices, such as the research-based Universal Design for Learning (UDL) framework, that can guide the design and use of Ed Tech to improve and optimize teaching and learning by reducing barriers and addressing individual and sociocultural learning needs.
Finally, Ed Tech offers the potential for new solutions to the persistent problem of the role of assessment in improving and reforming education to promote equity. As noted in Chapter 3, the current large-scale assessment practices ushered in with No Child Left Behind (NCLB) have exposed persistent performance gaps across groups of students but have also led to frustration that many existing high stakes assessments do not illuminate what is or is not happening during learning to explain those gaps and to understand what might be done to change them. Current and emerging technologies provide access to new types of data and analytic methods to better reveal and understand student learning processes and classroom activity, including fine-grained data about what individuals and interacting groups are doing and what they are accomplishing throughout learning. These go well beyond summative assessments and standardized test scores and have the potential to provide new windows into learning for students and families as well as teachers and administrators. As innovative assessment technologies are developed and implemented, it is imperative to ensure that their purposes and uses are clearly and transparently articulated and appropriately aligned to specific goals (NRC, 2001), while also attending to concerns about algorithmic or analytic biases and students’ and teachers’ autonomy and privacy.
An increasing proportion of innovations in STEM education involve the creation and application of various kinds of digital technologies, and, while some of the benefits and challenges to implementation and
scaling are similar to resources that are not technology-based, new applications of technology in education also have some unique features and considerations. In this section, we look at factors that are specific to educational improvements that are rooted in new applications of digital technologies.
While innovation is robust in the Ed Tech industry, it also poses some unique challenges. Ed Tech is an unusually fast-moving enterprise with many players, often working in market-driven contexts and capitalizing on the availability of new technologies in the general consumer market (Dube & Wen, 2022). The speed at which new technologies and products are marketed and purchased (or otherwise acquired) far outpaces the ability of researchers and decision makers to evaluate them (Escueta et al., 2020). Traditional research cycles used to obtain high-quality evidence of effectiveness or impact are much longer than the time it takes for new Ed Tech products to appear and become obsolete. Products that are heavily marketed or that incorporate popular features (e.g., gaming features) may achieve widespread adoption without any assurance that they are educationally effective. This rapid pace of development and change has prompted a need to rethink the relationship between research processes and the implementation of Ed Tech resources. A recent National Academies report on the Future of Education Research at IES (NASEM, 2022) addressed this issue and highlighted the need for new research to guide the design of new tools so that they are grounded in theoretical mechanisms of learning and serve the needs of learners, to build understanding of relationships between new technologies and the learning environments in which they are employed, and to cultivate the capacity of educators to evaluate and integrate new Ed Tech resources.
A recent comprehensive review (Escueta et al., 2020) found that relatively few Ed Tech products have robust data that can support causal conclusions, such as evidence obtained from a randomized controlled trial or a regression discontinuity design. For products in the general category of computer-assisted learning, null results are common. A few mathematics products have demonstrated the strongest results, including the online math homework support system ASSISTments (Roschelle et al., 2016) and interactive SimCalc software for algebra and preparation for calculus (Hegedus, Dalton, & Tapper, 2015; Roschelle et al., 2010). Key positive features for products with good evidence of results are adaptivity to students at different levels, rapid feedback and guidance to students, and formative assessment data for teachers. It is worth noting that this level of evidence represents a high bar, and very few of the Ed Tech resources marketed to schools have been studied in this way.
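The report does not describe the algorithms inside these products, but a minimal sketch of what “adaptivity to students at different levels” can mean is a mastery-tracking loop such as Bayesian Knowledge Tracing, in which each response updates an estimate of whether a student knows a skill and practice continues until mastery. The parameter values below are illustrative assumptions, not those of any particular product.

```python
def bkt_update(p_know: float, correct: bool,
               slip: float = 0.1, guess: float = 0.2, learn: float = 0.15) -> float:
    """One Bayesian Knowledge Tracing step: posterior given the response, then a learning transition."""
    if correct:
        posterior = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        posterior = p_know * slip / (p_know * slip + (1 - p_know) * (1 - guess))
    return posterior + (1 - posterior) * learn

# Simulated responses on one skill; practice stops once estimated mastery passes 0.95.
p_know = 0.3  # assumed prior probability that the student already knows the skill
for i, correct in enumerate([False, True, True, True, True], start=1):
    p_know = bkt_update(p_know, correct)
    print(f"after item {i} ({'correct' if correct else 'incorrect'}): P(mastery) = {p_know:.2f}")
    if p_know > 0.95:
        print("mastery threshold reached; move to the next skill")
        break
```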
In part because it is so challenging for education research to keep up with the pace of change in AI in particular, it is essential to formulate
principles and policies to govern and support the use of these powerful new technologies in ways that will address priorities in teaching and learning and advance equity, while modulating potential risks or unintended consequences. Compared to prior technologies, emerging AI is accelerating a shift from capturing data efficiently to detecting patterns in very large and fine-grained datasets and automating decisions about teaching and learning processes. As discussed above, it enables new forms of engagement and interaction among teachers, learners, and automated systems, and does so in ways that can address variability among learners and increase adaptivity. It can also provide teachers with data and tools to make their work easier and better. At the same time, it poses significant risks, including issues of data privacy and security, lack of alignment with educational goals and priorities, and algorithmic discrimination in the development and application of tools. As AI algorithms and models are increasingly used to automate decisions about learning processes as well as about students’ status and access to resources (e.g., tracking into special programs, school admissions, or support services), methods for evaluating and ensuring the fairness of algorithmic decision making are consequential, but such methods are currently unsettled (Holstein & Doroudi, 2021). The potential for bias can arise for reasons both internal and external to how these algorithms operate. Critical considerations include whose data are represented in the historical data used to train them, what evaluation metrics are used, and whether technical development is informed by broader contexts of usage and by how different students may experience those contexts. As one example of how AI algorithms and models can propagate bias, if certain groups of learners or learning contexts are overrepresented in training datasets, then generalization to new and different groups and settings may be poor and could even widen divides among groups. Machine learning algorithms generally optimize overall prediction accuracy and may thus overweight data from majority groups. As a result, predictions and classifications may be less accurate for subgroups of users who are not well represented in the training set (Ocumpaugh et al., 2014).
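The following small simulation, entirely synthetic and with arbitrary parameters, illustrates the mechanism described above: when one group dominates the training data and the feature-outcome relationship differs across groups, a single model optimized for overall accuracy can be markedly less accurate for the underrepresented group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n: int, flip_sign: bool) -> tuple[np.ndarray, np.ndarray]:
    """Synthetic data; flip_sign reverses the feature-outcome relationship (used for the minority group)."""
    x = rng.normal(size=(n, 1))
    logits = -x if flip_sign else x
    y = (logits[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return x, y

# 95% of training data from group A, 5% from group B (illustrative imbalance).
xa, ya = make_group(1900, flip_sign=False)
xb, yb = make_group(100, flip_sign=True)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group; accuracy drops sharply for the minority group.
for label, flip in [("Group A (majority)", False), ("Group B (minority)", True)]:
    x_test, y_test = make_group(1000, flip_sign=flip)
    print(f"{label}: accuracy = {model.score(x_test, y_test):.2f}")
```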
In stark contrast to many other innovations the committee reviewed, some powerful AI resources are widely accessible as consumer products and have the potential to spread very rapidly among individuals and groups (including students), without going through formal processes to vet them and integrate them in educational settings in an intentional and thoughtful manner. A recent report from the U.S. Department of Education’s Office of Educational Technology analyzed the current state of AI in education and formulated a set of recommendations as guidance (U.S. Department of Education, Office of Educational Technology, 2023). These recommendations are summarized in Box 6-1.
A recent report by the U.S. Department of Education’s Office of Educational Technology analyzed the current state of AI in education. Based upon that analysis, the group made the following recommendations:
SOURCE: Based on U.S. Department of Education, Office of Educational Technology, 2023.
It is clear that multiple forms of technology have shown evidence for supporting STEM learning and teaching. Although digital technologies can be powerful learning tools, simply giving students access to technology does not necessarily promote learning. Tools and technologies rarely fully communicate meaning or information about how they can best be used, especially to support learning and cultural practices. Many other factors, such as pedagogy, underlying curricula, participatory structures, and teachers, play an essential role in helping students understand how and why these tools have been designed in certain ways and how they can be used to promote reflection, solve problems, and accomplish goals. Therefore, it is important to interrogate promising, evidence-based innovations for the features that support (and constrain) their scalability and sustainability. In considering how technology-based resources are acquired by schools and adapted to new settings and learners, equity concerns are particularly prominent, given that these resources have strong potential both to widen existing digital divides and to open powerful new avenues for including and supporting students who have often been underserved or left out of STEM learning.
Emergent AI-based technologies hold unique promise for addressing difficult problems in new ways, but they also bring some potential risks, and scaling them and applying them effectively in education involves some special considerations. The educational technology industry tends to be fast-moving and market-driven, with many developers based in for-profit companies rather than research organizations or education-focused organizations. As a result, concerns may be less about how quickly or robustly a given innovation can be scaled and more about whether there is reliable evidence of its efficacy, whether it is aligned with educational goals and policies, and whether students’ needs are being served safely and equitably.
Abrahamson, D., Ryokai, K., & Dimmel, J. (2024). Learning mathematics with digital resources: Reclaiming the cognitive role of physical movement. In B. Pepin, G. Gueudet, & J. Choppin (Eds.), Handbook of digital resources in mathematics education. Springer International Handbooks of Education. Springer.
Ainsworth, S. (2006). DeFT: A conceptual framework for considering learning with multiple representations. Learning and Instruction, 16(3), 183–198.
___. (2014). The multiple representation principle in multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (2nd ed., pp. 464–486). Cambridge University Press.
Basu, S., Biswas, G., & Kinnebrew, J. S. (2017). Learner modeling for adaptive scaffolding in a computational thinking-based science learning environment. User Modeling and User-Adapted Interaction, 27, 5–53.
Biswas, G., Leelawong, K., Schwartz, D., Vye, N., & The Teachable Agents Group at Vanderbilt. (2005). Learning by teaching: A new agent paradigm for educational software. Applied Artificial Intelligence, 19(3–4), 363–392.
Blikstein, P., & Worsley, M. (2016). Multimodal learning analytics and education data mining: Using computational technologies to measure complex learning tasks. Journal of Learning Analytics, 3(2), 220–238.
Braithwaite, D. W., & Goldstone, R. L. (2013). Integrating formal and grounded representations in combinatorics learning. Journal of Educational Psychology, 105(3), 666–682.
Bransford, J. D., Brown, A. L., & Cocking, R. R. (2000). How people learn. The National Academy Press.
Bumbacher, E., Salehi, S., Wieman, C., & Blikstein, P. (2018). Tools for science inquiry learning: Tool affordances, experimentation strategies, and conceptual understanding. Journal of Science Education and Technology, 27(3), 215–235.
Celik, I., Dindar, M., Muukkonen, H., & Järvelä, S. (2022). The promises and challenges of artificial intelligence for teachers: A systematic review of research. TechTrends, 66(4), 616–630.
Chase, C. C., Chin, D. B., Oppezzo, M. A., & Schwartz, D. L. (2009). Teachable agents and the protégé effect: Increasing the effort towards learning. Journal of Science Education and Technology, 18(4), 334–352.
Chini, J. J., Madsen, A., Gire, E., Rebello, N. S., & Puntambekar, S. (2012). Exploration of factors that affect the comparative effectiveness of physical and virtual manipulatives in an undergraduate laboratory. Physical Review Special Topics-Physics Education Research, 8(1), 010113.
ConnectK12. (2023). Report on school connectivity: Connect K–12. ConnectK12.org, https://s3.amazonaws.com/connected-nation/898e8ecb-8046-4850-af4b-b89b12c1a4a1/Connect_K12_Connectivity_Report_2023_FINAL.pdf
Cromley, J. G., Chen, R., & Lawrence, L. (2023). Meta-analysis of STEM learning using virtual reality: Benefits across the board. Journal of Science Education and Technology, 32(3), 355–364.
D’Mello, S. K., Biddy, Q., Breideband, T., Bush, J., Chang, M., Cortez, A., Flanigan, J., Foltz, P. W., Gorman, J. C., Hirshfield, L., Ko, M. L. M., Krishnaswamy, N., Lieber, R., Martin, J., Palmer, M., Penuel, W. R., Phillip, T., Puntambekar, S., Pustejovsky, J., Reitman, J. G., Sumner, T., Tissenbaum, M., Walker, L., & Whitehill, J. (2024). From learning optimization to learner flourishing: Reimagining AI in education at the Institute for Student-AI Teaming (iSAT). AI Magazine, 45(1), 61–68.
de Jong, T. (2006) Computer simulations: Technological advances in inquiry learning. Science, 312, 532–533.
de Jong, T., Linn, M. C., & Zacharia, Z. C. (2013). Physical and virtual laboratories in science and engineering education. Science, 340(6130), 305–308.
Dickler, R., Gobert, J., & Pedro, M. S. (2021). Using innovative methods to explore the potential of an alerting dashboard for science inquiry. Journal of Learning Analytics, 8(2), 105–122.
Dube, A. K., & Wen, R. (2022). Identification and evaluation of technology trends in K-12 education from 2011 to 2021. Education and Information Technologies, 27, 1929–1958. https://doi.org/10.1007/s10639-021-10689-8
Duran, M. (2022). Learning technologies: Research, trends, and issues in the U.S. education system. Springer International Publishing.
Escueta, M., Nickow, A. J., Oreopoulos, P., & Quan, V. (2020). Upgrading education with technology. Journal of Economic Literature, 58(4), 897–996.
Garzón, J., & Acevedo, J., (2019). Meta-analysis of the impact of Augmented Reality on students’ learning gains. Educational Research Review, 27, 244–260.
Garzón, J., Baldiris, S., Gutiérrez, J., & Pavón, J. (2020). How do pedagogical approaches affect the impact of augmented reality on education? A meta-analysis and research synthesis. Educational Research Review, 31, 100334.
Gerard, L. F., & Linn, M. C. (2022). Computer-based guidance to support student revision of their science explanations. Computers & Education, 176, 304–351.
Gilbert, J. K., & Justi, R. S. (2016). Modelling-based teaching in science education. Springer.
Gobert, J., Moussavi, R., Li, H., Sao Pedro, M., & Dickler, R. (2018). Real-time scaffolding of students’ online data interpretation during inquiry with Inq-ITS using educational data mining. In A. K. M. Azad, M. Auer, A. Edwards, and T. de Jong (Eds.), Cyber-physical laboratories in engineering and science education. Springer.
Graesser, A. C. (2016). Conversations with AutoTutor help students learn. International Journal of Artificial Intelligence in Education, 26, 124–132. https://doi.org/10.1007/s40593-015-0086-4
Hegedus, S. J., Dalton, S., & Tapper, J. R. (2015). The impact of technology-enhanced curriculum on learning advanced algebra in US high school classrooms. Educational Technology Research and Development, 63(2), 203–228.
Holstein, K., & Doroudi, S. (2021). Equity and artificial intelligence in education: Will “AIEd” amplify or alleviate inequities in education? arXiv. https://doi.org/10.48550/arXiv.2104.12920
Holstein, K., McLaren, B. M., & Aleven, V. (2019). Co-designing a real-time classroom orchestration tool to support teacher–AI complementarity. Journal of Learning Analytics, 6(2), 27–52.
Johnson, W. L., & Lester, J. C. (2016). Face-to-face interaction with pedagogical agents, twenty years later. International Journal of Artificial Intelligence in Education, 26, 25–36. https://doi.org/10.1007/s40593-015-0065-9
Katz, S., Albacete, P., Chounta, I. A., Jordan, P., McLaren, B. M., & Zapata-Rivera, D. (2021). Linking dialogue with student modelling to create an adaptive tutoring system for conceptual physics. International Journal of Artificial Intelligence in Education, 31(3), 397–445.
Ke, F., Liu, R., Sokolikj, Z., Dahlstrom-Hakki, I., & Israel, M. (2024). Using eye-tracking in education: Review of empirical research and technology. Educational Technology Research and Development, 1–36.
Koedinger, K. R., & Corbett, A. (2006). Cognitive tutors: Technology bringing learning sciences to the classroom. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 61–77). Cambridge University Press.
Kozma, R., & Russell, J. (2005). Students becoming chemists: Developing representational competence. In J. Gilbert (Ed.), Visualization in science education. Kluwer.
Lester, J. C., Spires, H. A., Nietfeld, J. L., Minogue, J., Mott, B. W., & Lobene, E. V. (2014). Designing game-based learning environments for elementary science education: A narrative-centered learning perspective. Information Sciences, 264, 4–18. https://doi.org/10.1016/j.ins.2013.09.005
Linn, M. C., McElhaney, K. W., Gerard, L., & Matuk, C. (2018). Inquiry learning and opportunities for technology. In F. Fischer, C. E. Hmelo-Silver, S. R. Goldman, & P. Reimann (Eds.), International handbook of the learning sciences (pp. 221–233). Routledge.
Loderer, K., Pekrun, R., & Lester, J. C. (2020). Beyond cold technology: A systematic review and meta-analysis on emotions in technology-based learning environments. Learning and Instruction, 70, 101162. https://doi.org/10.1016/j.learninstruc.2018.08.002
Mack, E. A., Helderop, E., Keene, T., Loveridge, S., Mann, J., Grubesic, T. H., Kowalkowski, B., & Gollnow, M. (2022). A longitudinal analysis of broadband provision in tribal areas. Telecommunications Policy, 46(5), https://doi.org/10.1016/j.telpol.2022.102333
Marin, A., Taylor, K. H., Shapiro, B. R., & Hall, R. (2020). Why learning on the move: Intersecting research pathways for mobility, learning and teaching. Cognition and Instruction, 38(3), 265–280. https://doi.org/10.1080/07370008.2020.1769100
Martinez-Maldonado, R., Clayphan, A., Yacef, K., & Kay, J. (2015). MTFeedback: Providing notifications to enhance teacher awareness of small group work in the classroom. IEEE Transactions on Learning Technologies, 8(2), 187–200.
National Academies of Sciences, Engineering, and Medicine (NASEM). (2020). Reopening K–12 schools during the COVID-19 pandemic: Prioritizing health, equity, and communities. The National Academies Press. https://doi.org/10.17226/25858
___. (2022). The future of education research at IES: Advancing an equity-oriented science. The National Academies Press. https://doi.org/10.17226/26428
___. (2024). Equity in K–12 STEM education: Framing decisions for the future. The National Academies Press. https://doi.org/10.17226/26859
National Research Council. (2001). Knowing what students know: The science and design of educational assessment. National Academies Press.
Ocumpaugh, J., Baker, R., Gowda, S., Heffernan, N., & Heffernan, C. (2014). Population validity for educational data mining models: A case study in affect detection. British Journal of Educational Technology, 45(3), 487–501.
Olsen, J. K., Rummel, N., & Aleven, V. (2021). Designing for the co-orchestration of social transitions between individual, small-group and whole-class learning in the classroom. International Journal of Artificial Intelligence in Education, 31, 24–56.
Palincsar, A. S., & Brown, A. L. (1984). Reciprocal teaching of comprehension-fostering and comprehension-monitoring activities. Cognition and Instruction, 1(2), 117–175.
Pardos, Z. A., Chau, H., & Zhao, H. (2019, June). Data-assistive course-to-course articulation using machine translation. In Proceedings of the Sixth (2019) ACM Conference on Learning@ Scale (pp. 1–10).
Puntambekar, S. (2024). [Technology for science learning and teaching]. Paper commissioned by the Committee on Pre-K-12 STEM Education Innovations. https://nap.nationalacademies.org/resource/27950/Technology%20for%20Science%20Learning%20and%20Teaching_Puntambekar.pdf
Puntambekar, S., Gnesdilow, D., Dornfeld Tissenbaum, C. D., Narayanan, H. N., & Rebello, S. (2021). Supporting middle school students’ science talk: A comparison of physical and virtual labs. Journal of Research in Science Teaching, 58(3), 392–419.
Puntambekar, S., Gnesdilow, D., Passonneau, R. J., & Kim, C. (2024). AI-human partnership to help students write science explanations. Learning as a cornerstone of healing, resilience, and community: Proceedings of the 18th International Conference of the Learning Sciences - ICLS 2024. International Society of the Learning Sciences.
Renken, M. D., & Nunez, N. (2013). Computer simulations and clear observations do not guarantee conceptual understanding. Learning and Instruction, 23, 10–23.
Roschelle, J., Feng, M., Murphy, R. F., & Mason, C. A. (2016). Online mathematics homework increases student achievement. AERA Open, 2(4), 2332858416673968.
Roschelle, J., Shechtman, N., Tatar, D., Hegedus, S., Hopkins, B., Empson, S., Knudsen, J., & Gallagher, L. P. (2010). Integration of technology, curriculum, and professional development for advancing middle school mathematics: Three large-scale studies. American Educational Research Journal, 47(4), 833–878.
Roscoe, R. D., & Chi, M. T. (2007). Understanding tutor learning: Knowledge-building and knowledge-telling in peer tutors’ explanations and questions. Review of Educational Research, 77(4), 534–574.
Scavarelli, A., Arya, A., & Teather, R. J. (2021). Virtual reality and augmented reality in social learning spaces: A literature review. Virtual Reality, 25(1), 257–277.
Schwartz, C. V., Reiser, B. J., Davis, E. A., Kenyon, L., Achér, A., Fortus, D., Shwartz, Y., Hug, B., & Krajcik, J. (2009). Developing a learning progression for scientific modeling: Making scientific modeling accessible and meaningful for learners. Journal of Research in Science Teaching, 46(6), 632–654.
Shafrir, U. (1999). Representational competence. In I. E. Sigel (Ed.), Development of mental representation: Theories and applications (pp. 371–389). Lawrence Erlbaum Associates Publishers.
Shao, E., Guo, S., & Pardos, Z. A. (2021). Degree planning with Plan-Bert: Multi-semester recommendation using future courses of interest. Proceedings of the AAAI Conference on Artificial Intelligence, 35(17), 14920–14929. https://doi.org/10.1609/aaai.v35i17.17751
Southgate, E., Smith, S. P., Cividino, C., Saxby, S., Kilham, J., Eather, G., Scevak, J., Summerville, D., Buchanan, R., & Bergin, C. (2019). Embedding immersive virtual reality in classrooms: Ethical, organisational and educational lessons in bridging research and practice. International Journal of Child-Computer Interaction, 19, 19–29.
Southwell, R., Ward, W., Trinh, V. A., Clevenger, C., Clevenger, C., Watts, E., Reitman J., D’Mello, S., & Whitehill, J. (2024, April). Automatic speech recognition tuned for child speech in the classroom. In ICASSP 2024–2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 12291–12295). IEEE. https://doi.org/10.1109/ICASSP48485.2024.10447428.
Sullivan, S., Gnesdilow, D., Puntambekar, S., & Kim, J. S. (2017). Middle school students’ learning of mechanics concepts through engagement in different sequences of physical and virtual experiments. International Journal of Science Education, 39(12), 1573–1600.
Taylor, J., Roth, K., Wilson, C., Stuhlsatz, M., & Tipton, E. (2017). The effect of an analysis-of-practice, videocase-based, teacher professional development program on elementary students’ science achievement. Journal of Research on Educational Effectiveness, 10(2), 241–271.
U.S. Department of Education, Office of Educational Technology. (2023). Artificial intelligence and the future of teaching and learning: Insights and recommendations. https://www.ed.gov/sites/ed/files/documents/ai-report/ai-report.pdf
___. (2024). A call to action for closing the digital access, design, and use divides: 2024 National Educational Technology Plan. https://www.govinfo.gov/content/pkg/GOVPUBED-PURL-gpo229250/pdf/GOVPUB-ED-PURL-gpo229250.pdf
Vadaparty, A., Zingaro, D., Smith IV, D. H., Padala, M., Alvarado, C., Gorson Benario, J., & Porter, L. (2024). CS1-LLM: Integrating LLMs into CS1 instruction. In Proceedings of the 2024 on Innovation and Technology in Computer Science Education (Vol. 1, pp. 297–303).
VanLehn, K., Wetzel, J., Grover, S., & van de Sande, B. (2016). Learning how to construct models of dynamic systems: The effectiveness of the Dragoon intelligent tutoring system. IEEE Transactions on Learning Technologies.
Ward, W., Cole, R., Bolaños, D., Buchenroth-Martin, C., Svirsky, E., & Weston, T. (2013). My science tutor: A conversational multimedia virtual tutor. Journal of Educational Psychology, 105(4), 1115–1125.
Weintrop, D. (2019). Block-based programming in computer science education. Communications of the ACM, 62(8), 22–25.
Worsley, M., Mendoza Tudares, K., Mwiti, T., Zhen, M., & Jiang, M. (2021). Multicraft: A multimodal interface for supporting and studying learning in Minecraft. In X. Fang (Ed.), HCI in Games: Serious and Immersive Games. HCII 2021. Lecture Notes in Computer Science, 12790. Springer.
Wu, H., & Shah, P. (2004). Exploring visuospatial thinking in chemistry learning. Science Education, 88(3), 465–492.
Xu, W. W., Su, C. Y., Hu, Y., & Chen, C. H. (2022). Exploring the effectiveness and moderators of augmented reality on science learning: A meta-analysis. Journal of Science Education and Technology, 31(5), 621–637.
Zhai, X., Haudek, K. C., Shi, L., Nehm, R. H., & Urban-Lurain, M. (2020). From substitution to redefinition: A framework of machine learning-based science assessment. Journal of Research in Science Teaching, 57(9), 1430–1459.