3:00pm – 3:30pm
Check-in + Networking
3:30pm – 3:45pm
Welcome Remarks
3:45pm – 5:00pm
Thriving in the Age of Acceleration
We stand at a pivotal moment in the “Fourth Industrial Revolution,” our modern age, defined not simply by technological breakthroughs but also by the profound societal ripples they create. This keynote explores three defining characteristics reshaping how we work, live, and learn, and it offers a forward-looking framework for how Learning Engineering principles can help us build systems that match the pace of change.
About Sae
Sae works at the intersection of cognition, technology, and data. She formerly directed the Advanced Distributed Learning (ADL) Initiative, a government program for researching learning technologies, and before joining the civil service, Sae worked as an applied human–systems scientist in both business and academia, including as an assistant professor with the University of Central Florida’s Institute for Simulation and Training. Sae is a prolific writer and professional presenter; for example, she recently released Engines of Engagement: A Curious Book About Generative AI (2023) and contributed to the National Academies report on Adult Learning in the Military Context (2024).
5:00pm – 7:00pm
Reception
Wednesday, Feb. 4
ASU Student Pavilion
8:00am – 8:30am
Breakfast (Senita A)
8:30am – 9:45am
Taking Human-Centered Learning Innovation to Scale
What does it really mean to take learning innovation to scale—without losing sight of the people those innovations are meant to serve? As educational technologies, research–practice partnerships, and cross-sector collaborations expand, the challenge is no longer whether we can scale, but how we do so in ways that remain human-centered, sustainable, and aligned with real learning needs.
Moderator:
Danielle McNamara, Executive Director of the ASU Learning Engineering Institute
Panel members:
Steve Ritter, Founder and Chief Scientist of Carnegie Learning
Alyssa Friend Wise, Professor of Technology and Education at Vanderbilt University and Director of the LIVE Learning Innovation Incubator
Cristina Heffernan, Co-Executive Director and Co-Founder of The ASSISTments Foundation
Ben Motz, Assistant Professor at Indiana University Bloomington
9:45am – 10:00am
Brief Break
10:00am – 10:50am
Human-Centered Learning Ecosystems: Reimagining Water Education for Real Estate Professionals
In Arizona, a gap exists between state-sponsored education and professional real estate practice, particularly in how licensed agents and brokers navigate water-related complexities such as rights, supply, and regional regulation. This paper presents the REAL Water Arizona project, a learning engineering initiative developed by the Arizona Water Innovation Initiative (AWII) at Arizona State University in partnership with the Arizona Department of Real Estate (ADRE). Grounded in community engagement, user research, and applied instructional and interface design, the project seeks to enhance water literacy among Arizona’s 83,000+ real estate professionals. Through evidence-based curriculum development and an interactive digital learning platform, researchers and designers advance the integration of learning science, human factors, and public-sector collaboration to improve continuing education and professional decision-making in the context of Arizona real estate.
Presenter:
Danielle Storey
Coaching, Not Autocomplete: Early Evidence from ConnectInk’s AI‑Supported Personal Narrative Pilot
This paper reports early findings from a Spring 2025 classroom pilot of ConnectInk, a browser-based, guardrailed AI writing coach that supports, but never generates, student text. In partnership with CPET at Teachers College, Columbia University, the tool was embedded in a brief personal-narrative unit across three NYC high schools. A mixed-methods design (baseline perception survey, post-lesson reflections, observation logs, and pre/post writing samples) examined perceptions, process, and products. Students entered with low enjoyment and limited sharing comfort but valued feedback and showed emerging openness to AI for revision. Across the unit, students used ConnectInk to brainstorm, organize, and revise; paired samples and commentary noted growth in narrative craft, structure, and elaboration. Findings suggest that bounded AI coaching, coupled with genre-based pedagogy, can feasibly support confidence, craft, and audience awareness within a brief instructional arc.
Presenter:
Julio Intriago-Izquierdo
Agentic PAL: Designing Human-Empowered AI Partnerships for Early Childhood Mathematics Learning
The rapid expansion of “AI-powered edtech” has outpaced the field’s capacity to ensure developmental appropriateness in early childhood mathematics. Generic large language models (LLMs) often generate activities that conflate concepts, skip developmental steps, or misalign with learning progressions, reflecting structural gaps in early childhood math expertise rather than limitations of AI itself. This paper introduces Agentic PAL, an AI-based system under development that draws on learning sciences, learning engineering, and a validated knowledge infrastructure to support adult–child math interactions. Rather than generating content autonomously, Agentic PAL uses a research-based knowledge model to guide AI reasoning and embed developmental and pedagogical safeguards. We outline the theoretical foundations and emerging architecture behind Agentic PAL and discuss implications for designing AI that strengthens—rather than replaces—adult expertise in early childhood learning ecosystems.
Presenter:
Anastasia Betts
Multiple-Document Comprehension in High School Science: A Learning-Engineering Pilot Study
This study applies a learning-engineering process to refine and pilot an instructional framework designed to help high school science students read and write from multiple sources. Building on prior participatory design research with science teachers, the pilot examined how one biology teacher implemented a structured, five-lesson sequence and how her reflections and student work informed iterative improvement. Findings indicated that the lesson design was clear, manageable, and compatible with existing science routines, helping students build confidence before engaging with more rigorous texts. Paraphrasing provided an accessible entry point for writing, while students required additional modeling for source evaluation and elaboration. Teacher feedback led to targeted design refinements, including checklists and low-stakes grading to support accountability. The study illustrates how learning engineering can use classroom data to align instructional design with authentic teaching contexts, promoting feasible and scalable approaches to integrated literacy instruction in science.
Presenter:
Andrew Potter
Socio-Emotional Learning in AI K-12 Guidance and Policy Documents: A Gap Analysis
As conversational and affective AI enters K–12 classrooms, it brings new socioemotional risks. This study analyzed 35 institutional, state, and international AI policy documents using the CASEL framework and five emergent themes (anthropomorphization, emotional attachment, manipulation, social displacement, and developmental vulnerability). Only 20% of policies mentioned socioemotional issues, and even these offered minimal depth. Overall, the findings reveal a major policy gap: current guidance largely overlooks the nuanced socioemotional risks of student–AI interaction and lacks clear proactive or reactive strategies.
Presenter:
Emmanuel Adeloju
Teacher Education through Learning Engineering: An Action Research on Faculty Transformation
This study examines a digital transformation process being conducted in a faculty of education through an action research approach. The process began in early 2025 with a comprehensive needs analysis and continued with focus group interviews with key stakeholders. Based on the findings, the learning engineering approach was adopted as a guiding framework for the faculty’s curriculum. Accordingly, professional development seminars have been conducted for faculty members, existing courses are being revised, and the design and implementation of four new courses have been planned for the initial phase. The study follows a qualitative research design utilizing action research within a single case study framework. As the transformation of the faculty continues on the basis of the learning engineering framework, this study reports on the process and the outcomes so far.
Presenter:
Kürşat Çağıltay
The Writing Analytics Tool: A Learning Engineering Approach to Designing AI-Supported Writing Instruction
The Writing Analytics Toolkit (WAT) is an AI-driven platform designed to support writing instruction and research through transparent, theory-grounded writing analytics. Developed over six years through an IES-funded learning engineering initiative, WAT integrates natural language processing, machine learning, participatory design, and evidence-based instructional principles to serve students, teachers, and researchers within a single system. This paper describes WAT’s architecture, analytics, and interfaces, and frames its development as a learning engineering case study. We document how stakeholder engagement, iterative design cycles, empirical validation, and scalability considerations shaped WAT’s evolution. The work illustrates how learning engineering can guide the responsible design of AI-enabled educational tools that are pedagogically aligned, usable in authentic contexts, and extensible for research and innovation.
Presenters:
Danielle McNamara & Andrew Potter
10:50am – 11:00am
Brief Break
11:00am – 11:50am
Comparing Epistemic Emotions and User Experience Across Two AI Instructional Designs in Biology Learning
This pilot study investigates how two distinct AI instructional designs shape undergraduate students’ epistemic emotions and user experience in biology education. Students interacted with either an AI-Tutor that provided structured conceptual guidance or an AI-Navigator that incorporated an uncertainty-centered instructional approach designed to strategically raise, maintain, and reduce conceptual, procedural, and epistemic uncertainty. After completing a biology lab quiz, students completed the Epistemically-Related Emotions Scales (EES) and several user-experience items. AI-Tutor users reported significantly higher enjoyment, perceived helpfulness, AI preference, and future use intention, while AI-Navigator users showed more varied epistemic-emotion profiles, including slightly elevated confusion. Drawing on Control–Value Theory and the ERAS (Emotion Regulation in Achievement Situations) model, these findings highlight a design tension: systems that enhance perceived control tend to produce more positive epistemic emotions and user experiences than systems that introduce epistemic friction, at least in the short term.
Presenter:
Yiwen Li
REAL CHEM Action Research Through the LearnLab Summer School
REAL CHEM is a comprehensive courseware environment for general chemistry. Because the REAL CHEM courseware aligns with widely adopted textbooks and instructional practices, it has been implemented across a broad range of institutions. Given the practitioner focus of this manuscript, we frame our method as a description of how REAL CHEM instructors were supported to conduct action research with curated data from their courses incorporating REAL CHEM. The corresponding results are the ongoing action research projects that have emerged. We highlight one of these projects and discuss implications for data-driven reforms of teaching practices.
Presenter:
Bryan Henderson
Designing for Student Engagement with AI in Courseware: Lessons from Iterative Improvements to DOT in REAL CHEM
This design-based research study examines how students use and perceive DOT, a generative AI tutor embedded within REAL CHEM, a fully instrumented general chemistry courseware system. In Fall 2024, student awareness and engagement with DOT were low: only 11% of 4,249 students interacted with the tool, and interviews revealed reliance on external AI tools and traditional resources instead. DOT interactions were dominated by copied course questions and factual queries, with limited generative use, and students reported concerns about accuracy, verbosity, and visibility. Based on these findings, we refined DOT’s base prompt and introduced four types of AI Activation Points—page-level, paragraph-level, activity-level, and open-ended—to make DOT more proactive and contextually relevant. Summer 2025 results showed improved awareness, higher engagement (36.2%), and strong student satisfaction. Findings suggest that generative AI must be well-timed, visible, and aligned with student workflows to meaningfully support learning.
Presenter:
Kimberly Larson
From Course Concept to Lecture Video: An AI-Powered System for Automated MOOC Development
Producing lecture videos for online courses requires significant time and expertise, and current AI-generated videos often lack accuracy, consistency, and pedagogical grounding. This study presents a controllable system that converts slides and scripts into lecture videos. The system follows instructor intent and integrates the ICAP framework to ensure the presence of constructive questions. We evaluate the system using four metrics: ICAP assurance, video generation time ratio, speech consistency, and slide coverage rate. Results show that the system provides higher controllability and stronger educational alignment than NotebookLM, demonstrating its potential for scalable MOOC development.
Presenter:
Hua Wei
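Of the four metrics, the slide coverage rate admits a compact illustration. Below is a minimal sketch assuming coverage is measured as the share of slide bullet points echoed in the generated narration; the authors’ exact formulation may differ, and the function and sample data are hypothetical.

```python
# Hypothetical slide-coverage metric: the fraction of a slide's bullet
# points that appear (as substrings) in the generated narration.
def slide_coverage_rate(bullets: list[str], narration: str) -> float:
    spoken = narration.lower()
    covered = sum(1 for bullet in bullets if bullet.lower() in spoken)
    return covered / len(bullets)

bullets = ["ICAP framework", "constructive questions", "lecture pacing"]
narration = ("The ICAP framework distinguishes modes of engagement, "
             "and we insert constructive questions after each concept.")
print(slide_coverage_rate(bullets, narration))  # 2 of 3 bullets covered -> ~0.67
```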
MIRANDA: Real-Time Learning Analytics for Authentic Embedded Assessment
Standard assessment methods rarely capture the reflective, adaptive problem-solving, or collaborative reasoning expertise that is critical for academic growth and future workforce success (Pellegrino et al., 2001). Assessments help educators understand how students develop, refine, and apply knowledge, and these methods are strengthened by continuous learning analytics that reveal how learners reason, reflect, and build disciplinary expertise. Research in game-based learning, including Karsenti and Parent’s study on Assassin’s Creed, shows that students achieve deeper historical understanding and increased engagement when instruction integrates interactive, authentic tasks supported by effective teaching practices (Karsenti & Parent, 2019). Delayed feedback can impede growth opportunities and greatly diminish learners’ agency and identity, particularly for historically marginalized students (Black & Wiliam, 2009; Nasir, 2011). Evidence-based research has shown that digital learning environments naturally record learners’ strategies, interactions, and decision pathways in real time (Gee, 2007; Steinkuehler & Squire, 2014).
Presenter:
Elina Ollila
The Impact on Cognition and Motivation Using Gaming, Simulation, and Visual Learning in Military Flight Training
In the field of pilot training, there remains a critical misunderstanding regarding the true drivers of effective learning. Much debate centers on whether advanced, networked simulators are unnecessarily complex compared to legacy Link Trainers or ITD systems without VR—platforms that have shown only limited effectiveness, merely eliciting correct responses from students. However, the current operational environment demands that we accelerate pilot production and achieve superior training outcomes at scale. This imperative requires us to consider more advanced solutions.
Presenter:
Ariah Elmore
12:00pm – 12:30pm
Lunch (Senita A)
12:30pm – 1:20pm
From Insights to Implementation: Learning Engineering in Action
Join us for a fast-paced, good-natured debate where four leaders in learning engineering and AI take on the tradeoffs we usually politely avoid: speed vs evidence, insight vs scalability, humans vs algorithms.
Moderator:
Tracy Arner, Associate Director of the Learning Engineering Institute
Debate Participants:
Nia Nixon, Associate Professor in School of Education at the University of California, Irvine
Stephen Fancsali, Vice President of Data Science at Carnegie Learning
Kathryn McCarthy, Associate Professor of Educational Psychology at Georgia State University
René Kizilcec, Associate Professor of Information Science at Cornell University
1:20pm – 1:30pm
Brief Break
1:30pm – 2:20pm
What Works When for Whom Under What Conditions: Learning Engineering as an Enabler of Component-based Research
Component-based research (CBR) is a methodological strategy centered on studying the features and processes of innovations that contribute to desired outcomes for specific types of learner populations, conditions, and contexts. CBR focuses on precisely defined elements of overall innovations, called components, that can be studied individually or in clusters across multiple implementation sites. Compatible language in describing data, specificity in delineating the components of innovations, and combining data across studies are all attributes of learning engineering (LE) vital for CBR. Measurements and analyses can then determine which parts of the innovation are correlated with outcomes in particular situations. These methodological advances depend on developing data infrastructures, enabled by artificial intelligence, that empower design-based implementation and evaluation. This is essential for personalization as well as for achieving scale through adaptation to local conditions.
Presenter:
Chris Dede
Learning Engineering Body of Knowledge
A Guide to the Learning Engineering Body of Knowledge (LEBOK Guide), soft-released in December 2025, will be formally released at LERN 2026. This open-source resource does for learning engineering what the Software Engineering Body of Knowledge (SWEBOK Guide) does for software engineering: SWEBOK addresses the principles and practices of designing, developing, testing, and maintaining software systems, and LEBOK is a guide to the principles and practices of learning engineering. The Guide is released as a wiki (lebok.wiki), a PDF, and a machine-readable implementation in the IEEE Sharable Competency Definition format (IEEE 1484.20.3-2023), which can be linked from learning resources, learning event metadata, and digital credentials. The wiki version is intended to be a platform for iterative community development and vetting, leading to future community-authorized releases. The Guide is currently organized into 14 knowledge areas (KAs) followed by appendices; within each knowledge area are topics and subtopics.
Presenter:
Jim Goodell
The Education Tree: A New Theoretical Model for P-20 Education and Development
P–20 education, particularly K–16 education, has come under increased scrutiny over the past decade, scrutiny that intensified amid the reported learning losses during shutdowns related to the COVID-19 pandemic. This study identifies alternative learning strategies in the current literature and proposes a new iteration of progressive models that combine gradeless classrooms, ungrading, and generative artificial intelligence.
Presenter:
Maxwell Goshert
Implementing Concept Instruction via MCP Server
This applied learning engineering research demonstrates the operationalization of Merrill & Tennyson’s (1977) concept-teaching framework through AI-powered tools. An MCP (Model Context Protocol) server was developed that implements five sequential tools, decomposing concept lesson creation into theory-aligned stages: concept definition, attribute analysis, example generation, practice activity creation, and lesson publication. The system encodes M&T’s research-based prescriptions as structured prompts, validation algorithms, and schema-driven workflows.
Key findings reveal that structural constraints via input/output schemas are essential for maintaining theoretical fidelity: unconstrained LLM generation produced inconsistent, theoretically misaligned outputs despite natural-language descriptions of instructional principles. The architecture, which combines LLM semantic capabilities with seven algorithmic validators, suggests a promising pattern for educational AI: leverage LLMs for content generation while enforcing learning science principles through external validation. Future work will embed these concept lessons within problem-based learning environments for just-in-time instruction.
Presenter:
Thor A. Anderson
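To make the schema-driven workflow concrete, here is a minimal sketch of how one stage (attribute analysis) might pair a JSON Schema with an algorithmic validator. The schema, field names, and checks are illustrative assumptions, not the authors’ implementation.

```python
# Sketch of one theory-aligned stage: LLM output for attribute analysis is
# accepted only if it satisfies a schema plus an algorithmic validity check.
import jsonschema

ATTRIBUTE_ANALYSIS_SCHEMA = {
    "type": "object",
    "required": ["concept", "critical_attributes", "variable_attributes"],
    "properties": {
        "concept": {"type": "string"},
        # Defining attributes every example of the concept must share.
        "critical_attributes": {"type": "array", "items": {"type": "string"}, "minItems": 1},
        # Attributes that may vary across examples without changing class membership.
        "variable_attributes": {"type": "array", "items": {"type": "string"}},
    },
}

def validate_attribute_analysis(llm_output: dict) -> dict:
    """Reject LLM output that drifts from the required structure."""
    jsonschema.validate(instance=llm_output, schema=ATTRIBUTE_ANALYSIS_SCHEMA)
    overlap = set(llm_output["critical_attributes"]) & set(llm_output["variable_attributes"])
    if overlap:  # an attribute cannot be both defining and variable
        raise ValueError(f"Attributes listed as both critical and variable: {overlap}")
    return llm_output

validate_attribute_analysis({
    "concept": "mammal",
    "critical_attributes": ["has hair or fur", "nurses young"],
    "variable_attributes": ["habitat", "size"],
})
```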
NLP Validation of Prompt Strategies for Theory-Aligned LLM-Generated Personalization
This study applies an NLP-based validation framework to examine how Large Language Models (LLMs) can be iteratively refined for theory-aligned text personalization. Building on prior work, we extend the evaluation method to history texts and focus on prompt design as a key factor in personalization quality. Four LLMs (Claude, Llama, Gemini, ChatGPT-4) were prompted to adapt ten history passages for four reader profiles varying in reading skill and prior knowledge, using one-shot and task-specific instruction prompts. Linguistic indices were extracted using the Writing Analytics Tool to assess the alignment of linguistic features with students’ needs. Although LLMs appropriately tailored text complexity, cohesion patterns failed to match theoretical expectations even under explicit guidance. This iteration highlights the limits of current prompting strategies and the importance of theory-augmented refinement. Through iterative prompt evaluation, the study demonstrates how NLP provides a scalable, real-time framework for validating and improving theory-driven personalization across multiple domains.
Presenter:
Linh Huynh
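The validation loop the study describes can be approximated with open-source proxies. The sketch below stands in for the Writing Analytics Tool’s indices with textstat readability and a crude adjacent-sentence overlap measure; the profile targets and thresholds are invented for illustration.

```python
# Check an LLM-adapted passage against a target reader profile using two
# proxy indices: readability grade level and a simple cohesion estimate.
import textstat

def adjacent_sentence_overlap(sentences: list[str]) -> float:
    """Crude cohesion proxy: mean Jaccard word overlap of adjacent sentences."""
    overlaps = []
    for a, b in zip(sentences, sentences[1:]):
        wa, wb = set(a.lower().split()), set(b.lower().split())
        overlaps.append(len(wa & wb) / max(len(wa | wb), 1))
    return sum(overlaps) / max(len(overlaps), 1)

adapted = ("The treaty ended the war. The treaty also redrew borders. "
           "New borders displaced many people.")
profile = {"max_grade_level": 8.0, "min_overlap": 0.10}  # hypothetical targets

grade = textstat.flesch_kincaid_grade(adapted)
cohesion = adjacent_sentence_overlap(adapted.split(". "))
print(f"grade={grade:.1f} (target <= {profile['max_grade_level']}), "
      f"cohesion={cohesion:.2f} (target >= {profile['min_overlap']})")
```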
“Walk It Out”: An Embodied and Mobile AI Tutor for STEM Education
Interpreting motion graphs can be challenging for students, and individual teachers lack the capacity to provide rapid, personalized feedback during mobile learning (Mlearning) in physics. We have created an embodied smartphone app that uses LiDAR and has been shown to significantly improve kinematics knowledge. However, the app’s current simplistic binary feedback needs to be enhanced with an AI tutor that can give more sophisticated, adaptive, and Socratic feedback. This article describes the design process for creating a Socratic-style mobile AI tutor and its Retrieval Augmented Generation (RAG) LLM architecture. The RAG component draws on the authors’ published base of physics education articles to ground the model. Human experts determined that Claude Sonnet 4.5 provided significantly more precise, reliable, and Socratic (querying and non-directive) feedback than the other models evaluated. This team of learning engineers and physics teachers is now creating a semantic benchmark test to assess the quality of adaptive graph-learning feedback.
Presenter:
Mina Johnson-Glenberg
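For readers curious about the retrieval step, a minimal retrieval-augmented sketch follows; the embedding model, corpus chunks, and prompt wording are stand-in assumptions rather than the authors’ architecture.

```python
# Retrieve grounding passages for a Socratic tutor prompt via embedding
# similarity over a small corpus of physics-education text chunks.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")
corpus = [
    "A position-time graph with constant positive slope indicates constant velocity.",
    "The slope of a velocity-time graph gives the acceleration.",
    "Steeper position-time slopes correspond to faster motion.",
]
corpus_vecs = encoder.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the student's attempt."""
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = corpus_vecs @ q  # cosine similarity (embeddings are normalized)
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

def socratic_prompt(student_attempt: str) -> str:
    """Assemble a grounded, non-directive prompt for the tutor LLM."""
    context = "\n".join(retrieve(student_attempt))
    return ("You are a Socratic physics tutor. Never state the answer; "
            "ask one guiding question at a time.\n"
            f"Grounding material:\n{context}\n"
            f"Student attempt: {student_attempt}")

print(socratic_prompt("I walked faster but my graph's slope went down."))
```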
A Tiered Framework for Educational Event Data Documentation: Synthesizing Principles and Addressing Gaps
Digital learning platforms generate millions of fine-grained behavioral events, yet inadequate documentation restricts equitable research use. We systematically reviewed five documentation traditions (survey/DDI, FAIR, AI transparency, learning-analytics standards, psychometrics) and identified gaps unique to educational event data: temporal complexity, nested structures, platform business logic, scale/granularity, and access heterogeneity. Synthesizing cross-cutting strengths, we derived five principles—Transparency, Accessibility, Usability, Responsibility, Maintainability—and operationalized them in a 16-item, 3-tiered framework. Platforms can self-assess documentation maturity from Baseline to Advanced, enabling incremental improvement. The framework bridges researcher needs with platform capabilities, promotes reproducibility, and lowers barriers for early-career and under-resourced scholars.
Presenter:
Xin Wei
Learning Engineering by Design: An Agentic AI Application for Rapid, Personalized Health & Safety Training in Disaster Response and Hazardous Environments
Disaster response and hazardous work environments face a critical training challenge: recruiting available labor and rapidly preparing diverse workers for site-specific dangers requires personalized learning materials that traditional development cannot support. When every worksite presents unique hazards and worker backgrounds vary widely, efficient preparation before limited contact training time becomes essential for safety and effectiveness.
The Health & Safety Training (HST) Copilot, an NSF SBIR Phase I project, addresses this gap through learning engineering principles embedded by design. By adopting ICICLE learning engineering standards and Total Learning Architecture (TLA) frameworks, organizations benefit from built-in best practices in instructional design, delivery, and assessment without deliberate effort—critical where credentialing and skills decay are concerns. An 11-agent AI architecture rapidly transforms site-specific safety documents into personalized, learning-engineered primers.
Presenter:
Henry Ryng
2:20pm – 2:30pm
Brief Break
2:30pm – 3:20pm
ReQUESTA: A Hybrid Agentic Framework for Generating Cognitively Diverse Multiple-Choice Questions
This study presents ReQUESTA, a hybrid agentic framework that integrates LLM-powered and rule-based agents to generate multiple-choice questions (MCQs) with distinct cognitive focuses: text-based, inferential, and main-idea. Expert raters evaluated 100 ReQUESTA-generated MCQs using a cognitive classification rubric to distinguish among the three question types. The evaluation results showed strong alignment between system- and human-assigned labels (accuracy = 0.95), demonstrating the framework’s ability to generate cognitively diverse and pedagogically meaningful questions. Overall, ReQUESTA’s modular and hybrid design supports scalable, iterative, and evidence-based assessment development. Future work will include psychometric validation and expansion to additional cognitive categories such as application-level questions.
Presenter:
Yu Tian
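The reported system-human agreement is straightforward to compute once both sets of labels are in hand; the sketch below uses invented labels purely to show the computation.

```python
# Agreement between system-assigned and expert-assigned cognitive labels.
from sklearn.metrics import accuracy_score, confusion_matrix

LABELS = ["text-based", "inferential", "main-idea"]
system_labels = ["text-based", "inferential", "main-idea", "inferential"]
expert_labels = ["text-based", "inferential", "main-idea", "text-based"]

print(accuracy_score(expert_labels, system_labels))  # fraction matching, here 0.75
print(confusion_matrix(expert_labels, system_labels, labels=LABELS))
```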
Towards Automated Detection of Struggling Student Programmers
In programming courses, it is often difficult for instructors to identify students who struggle while coding. Fortunately, the automated assessment tools used in such courses capture data about programming activity. This data source provides the foundation for a machine learning model that automatically classifies struggling students, even in large courses, helping instructors target interventions. In this paper, we provide a step toward creating this model: preliminary work focused on identifying features with the potential to indicate struggle, performing feature engineering to extract them, and conducting an exploratory analysis of real data to visualize outliers and assess feasibility.
Presenter:
Sanjita Patwardhan
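To give a flavor of the feature engineering and outlier analysis described, the sketch below aggregates hypothetical submission events into per-student features and flags IQR outliers; the event schema and feature names are assumptions for illustration.

```python
# Aggregate automated-assessment events into per-student struggle features,
# then flag students who are IQR outliers on any feature.
import pandas as pd

events = pd.DataFrame({
    "student": ["s1", "s1", "s2", "s2", "s2", "s3"],
    "submission_passed": [False, True, False, False, True, True],
    "minutes_since_prev": [None, 12.0, None, 45.0, 90.0, None],
})

features = events.groupby("student").agg(
    submissions=("submission_passed", "size"),
    fail_rate=("submission_passed", lambda s: 1 - s.mean()),
    mean_gap_min=("minutes_since_prev", "mean"),
)

# With real course-scale data, these IQR fences become meaningful.
q1, q3 = features.quantile(0.25), features.quantile(0.75)
upper_fence = q3 + 1.5 * (q3 - q1)
features["flagged"] = (features > upper_fence).any(axis=1)
print(features)
```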
The Promise of Scenario-Based Assessments for College Instruction
The lecture-plus-high-stakes-test structure of many college courses does not promote deep learning that transfers to real-world contexts. Scenario-based assessments (SBAs) ground assessment in authentic problem solving and scaffold students toward solutions to those problems. This paper reports the first iteration of a design-based research (DBR) cycle to develop an SBA authoring system that will enable college instructors to use SBAs in their courses. An SBA was administered in an introduction-to-interdisciplinarity course taught at a large, public university. Data from students and faculty suggest that the SBA was usable and that they perceived it to support learning beyond traditional course assessment formats. We reflect on the affordances of SBAs that led to these positive perceptions.
Presenter:
Jonathan Cohen
User experience design of AI-assisted human-technology ecosystem for writing assessment
Generative AI (GenAI) raises growing concerns: it is readily used for academic misconduct, and the technology community has not yet found an effective way to detect AI-written text, particularly in the context of writing assessment. This threat to academic integrity and assessment validity urgently calls for change in the user experience (UX) design of writing assessment until reliable detection of AI-written text becomes possible. How can we build a sustainable and valid GenAI-assisted human-technology ecosystem for writing assessment that preserves academic integrity? Which practical goals of writing assessment can GenAI agents meet, and how can they be extended to their full potential for human intelligence augmentation (IA)? This experimental pilot study reviews the user experience design of four GenAI tools and GPTZero through a socio-cultural lens to explore these questions from a user-centered perspective. Three UX design principles for AI-assisted writing assessment in support of mastery learning are synthesized.
Presenter:
Li (Lee) Liang
A Learning Engineering Approach to Transforming Teacher Practice Through Co-Designing Science Curricula for Multilingual Learners
This practitioner-focused paper describes the application of a learning engineering process to co-design a science unit on chemical and physical changes for 6th grade multilingual and multicultural classrooms. In partnership with a group of science teachers and specialists from highly diverse urban schools, researchers facilitated a summer co-design institute, co-creating a unit anchored in a culturally relevant phenomenon. Drawing on the principles of collaborative design as professional development (Severance, 2022; Voogt et al., 2015), the study investigates how the co-design process can be structured and how it affects teachers’ perceived ability to support diverse learners. Data from co-design artifacts, classroom observations, and teacher focus group interviews reveal that the process fostered significant teacher agency and provided a powerful context for professional learning. The findings offer a replicable model for using learning engineering to create culturally and linguistically relevant science instruction.
Presenters:
Yernat Mnuar, Jie Zhang, Iftekharul Chowdhury
Currents of Inquiry: Insights From Two Years of Real-World AI-Learner Water Conversations
Conversational AI systems increasingly contribute to informal STEM learning, yet little is known about how users frame questions and how chatbots reply in such settings. This study analyzes two years of text interactions between Waterbot and community users asking about water issues. Treating questions as signals of learners’ cognitive focus and epistemic stance, we conduct two analyses: we code question types and quantify readability, causal connectives, and lexical markers of certainty for both users and the AI. Overall, learning sessions remained short, with depth tied more to chatbot verbosity than to linguistic complexity or certainty. These findings offer a design baseline for conversational AI that supports learning complex issues. The observed asymmetries in length, complexity, and certainty point to levers for the next design cycle, such as briefer default answers and clearer pathways for follow-up questions. As an early-stage study grounded in real learner traces, this work informs iterative refinement of conversational systems for civic and scientific informal learning about water issues.
Presenter:
Stephen Carradini