Roman Komarov on AI in Education and Teaching Practice

It doesn’t just help solve problems — it changes the very nature of learning. Artificial intelligence has already become a reality we must reckon with. Where will this lead? Who will remain at the blackboard? An interview with Roman Komarov, Vice-Rector of Moscow City University, is not about technology, but about people.

— The “Priority-2030” programme is aimed at creating a new quality of education. How exactly does your strategic project on applying AI in teaching align with this global goal? What is the main challenge in education that you are addressing with artificial intelligence, and how will it change the teacher’s role?

— Today we hear more and more often the idea that with the advent of AI, education has passed the point of no return. For better or worse, this is indeed the case. And this point is only part of a much broader picture.

AI is not only rapidly developing on its own but also aggressively transforming the many spheres it penetrates. Just look at how it has already changed — and will continue to change — the labour market before our very eyes (forecasts on this flood us daily); the role it plays on the geopolitical stage; and the firm place it has already taken in strategic documents defining long-term development in different countries, including Russia. All this makes us view AI as a candidate cross-cutting technology that will underpin the transition to a new technological order. Education, of course, has not escaped this intervention.

On the one hand, AI has already taken its historical place in the evolution of educational tools, continuing the logic of earlier technological shifts: the first printing press, the appearance of the pen, the calculator, the personal computer, the internet, smartphones, and apps… Each of these breakthroughs sparked fears: “People will stop memorising!”, “Children won’t learn to count!”, “Students will stop thinking and just copy!” and so on. Yet with time, each of these epoch-making tools was gradually integrated into education, leading to qualitative innovations and eventually becoming taken for granted.

On the other hand, AI is not just another new tool. Even now it can act as a full-fledged cognitive partner, whose potential grows by leaps and bounds every day. In this sense, as our international colleagues put it, it is an “extended mind” and a form of “co-cognition.” And what does this “extended mind” concept mean for education?

It means, first of all, that we must rethink the very concept of “educational technologies.” It now seems to split: on one side, there are technologies directly developed on the basis of AI, and on the other, there are pedagogical technologies whose structure is being reshaped with AI built into them as an agent. This significant shift brings new questions: for instance, how do traditional didactic models change with the emergence of this new agent?

The traditional dyad of relations — “teacher-student” or “teacher-class” — is turning into a triad, where a qualitatively new entity with quasi-subjectivity and quasi-expertise appears on almost equal footing. Its role, however, is currently ambiguous. There is plenty of research showing how AI improves educational outcomes. But there is also plenty of contradictory research showing the opposite. For example, an April study by Anthropic revealed that students prefer to delegate complex, higher-order cognitive tasks (based on Bloom’s taxonomy) to AI systems, while MIT researchers recently added that such mindless delegation carries cognitive costs, lowering memory, critical thinking, originality, and even what they called overall “brain connectedness.” In short, the situation is highly contradictory.

Another point of tension is this: since everything is transforming so irreversibly, what educational outcomes should teachers aim for in their students, and how should assessment procedures — and the entire assessment system — change to adequately respond to such serious challenges? Traditional tests, homework, essays, etc., are no longer relevant in conditions of free access to AI. Clearly, the assessment system must be rebuilt.

But even this is not enough. The very role of the teacher is also changing. Over the past two years, a well-known phrase has firmly entered pedagogical discourse: “AI will not replace teachers, but teachers who use AI will replace those who don’t” (a paraphrase of IBM’s much-cited 2023 report). In partnership with AI, the teacher is now less a bearer and transmitter of knowledge, and more an epistemic authority and guiding mentor figure — one who can model and set the standard for a culture of systemic thinking based on subject knowledge, its structuring, and conscious practical application. The teacher also becomes a facilitator of dialogue with AI and a coach in its critical and ethical use.

Teachers already understand this well. Our internal university studies show that while in 2024 only every second teacher was using AI in their work, within less than a year this figure rose to 76% — almost equaling students, among whom only one in five does not use AI.

Bringing all these challenges together, it’s clear that the emergence of AI in education has created gaps that require us to rethink many things from scratch: how educational technologies are developed, piloted, and systematically researched so that solutions are valid and reliable. For now, as I’ve tried to show, we are at a research stage where the global body of empirical and experimental data is too contradictory to draw clear conclusions. Some of these gaps await the development of adequate diagnostic tools, including tools for assessing delayed effects beyond immediate outcomes. In general, the task is difficult but solvable. MCU’s strategic technological project “Educational AI Technologies” is precisely designed, together with our industrial partners and friendly universities equally concerned with these issues, to contribute to solving them.

— Could you give specific examples of pilot solutions currently being tested at MCU? What difficulties do teachers face in integrating these tools into real educational practice?

— Our strategic technological project is a portfolio of four products. Each is being actively piloted within MCU, and some have spread far beyond.

For example, the “Digital Adaptive Biology Textbook for Grades 5-6”, developed in partnership with the Prosveshchenie publishing house, is being tested both in our university school and in seven other schools in Moscow, the Moscow region, and the Nizhny Novgorod region, showing very optimistic results in improving student performance.

If we take our line of simulator-trainers aimed at diagnosing and developing universal and professional competencies of key education stakeholders, the picture is even broader. Three simulators — “Effective Communication with Parents”, “Effective Educational Organisation Leader”, and “Every Child’s Success” — are tightly integrated into MCU’s educational programmes, covering nearly 9,000 students of pedagogy. Across Russia, in 2023 alone, more than 32,000 people from 137 educational organisations in nine regions completed training with the full simulator suite (seven in total).

The third project within the STP is the “AI Platform for Creating Virtual Agents.” It allows teachers and students to create AI personas of outstanding figures for use in education and research. This development is our import-substitution answer to foreign analogues, enabling Russia to maintain sovereignty in education and keep pace in the race for technological leadership. At this stage, MCU faculty and master’s students from several institutes are piloting the platform. Users can already create two types of AI personas: the Mentor, an experienced guide and helper in learning who directs, supports, and shares knowledge; and the Maieutic Agent, who helps students explore and assimilate new knowledge through Socratic dialogue.
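To make the distinction between the two persona types concrete, here is a minimal illustrative sketch of how such agents are typically configured for a generic chat-completion model. The template wording, function names, and message format are assumptions for illustration only, not the platform’s actual implementation.

```python
# Hypothetical sketch: the two persona types as system prompts for a
# generic chat model. All prompt wording here is illustrative.

PERSONA_TEMPLATES = {
    "mentor": (
        "You are {figure}, acting as an experienced mentor. "
        "Guide the student, share knowledge directly, and encourage progress."
    ),
    "maieutic": (
        "You are {figure}, conducting a Socratic dialogue. "
        "Never state the answer outright; respond with probing questions "
        "that lead the student to discover the idea themselves."
    ),
}

def build_messages(persona: str, figure: str, user_input: str) -> list[dict]:
    """Assemble the message list a chat model would receive."""
    system = PERSONA_TEMPLATES[persona].format(figure=figure)
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_input},
    ]

# Example: a Maieutic Agent persona of a historical figure.
messages = build_messages(
    "maieutic", "Lev Vygotsky", "What is the zone of proximal development?"
)
```

The key design point is that the same underlying model serves both roles; only the system prompt changes the pedagogical stance from direct instruction to Socratic questioning.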

The fourth project within the STP is the “Digital Mirror of the Lesson” (partner: SberEducation). This is a new tool in the field, providing high-quality feedback to student teachers. Using various metrics (methodological, psychological, social), the service employs AI to analyze video and audio recordings of model lessons conducted by students. The results allow students, either independently or with a mentor, to reflect on lesson effectiveness and form an individual development plan.

In 2024, about 700 students participated in piloting this technology. This year, two MCU institutes — the Institute of Pedagogy and Psychology of Education and the Institute of Digital Education — are conducting the pilot. In short, the process can be described as follows:

  1. Students conduct preparatory/model lessons using the service.

  2. Video and audio are processed, and results are analyzed with a methodologist to generate recommendations.

  3. AI analysis includes:

    • emotion and engagement tracking,

    • methodological communication techniques (student encouragement, lesson start, homework explanation, instructions, discipline management),

    • social communication techniques (leadership markers, lesson boundaries, praise),

    • conversation distribution (“I-mode” vs. “We-mode”).

  4. After correcting mistakes, students repeat the lesson.

  5. Recommendations are refined.

  6. Finally, students conduct a “model” lesson in real conditions at MCU’s Independent Competence Assessment Center, where both the Digital Mirror and an expert board assess them.
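One of the metrics listed above, the distribution of teacher talk between “I-mode” and “We-mode”, can be sketched in a few lines. This is a hypothetical illustration, not the Digital Mirror’s actual algorithm: it assumes a diarised transcript where each utterance is already attributed to the teacher, and the marker word lists are invented for English (real classroom speech in Russian would use different markers).

```python
# Illustrative sketch of an "I-mode" vs "We-mode" talk metric.
# Marker lists are assumptions for demonstration purposes.

I_MARKERS = {"i", "my", "me"}
WE_MARKERS = {"we", "our", "us", "let's"}

def talk_mode_shares(utterances: list[str]) -> dict[str, float]:
    """Return the fraction of utterances containing I- vs We-markers."""
    i_count = we_count = 0
    for u in utterances:
        words = {w.strip(".,!?").lower() for w in u.split()}
        if words & I_MARKERS:
            i_count += 1
        if words & WE_MARKERS:
            we_count += 1
    n = len(utterances) or 1  # avoid division by zero
    return {"i_mode": i_count / n, "we_mode": we_count / n}

# Example on a two-utterance transcript:
shares = talk_mode_shares(["Let's open our books.", "I want you to listen."])
# shares == {"i_mode": 0.5, "we_mode": 0.5}
```

In a production pipeline the input would come from speech recognition and speaker diarisation of the lesson recording, and the ratio would feed into the recommendations the methodologist discusses with the student.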

Thus, the Digital Mirror is a qualitatively new stage in developing our system for assessing student learning outcomes, launched with the Priority-2030 programme. It addresses the challenge of achieving objectivity and transparency in assessment, partly by reducing subjectivity and human bias. More than 3,000 students take part in this procedure each year.

As for the “difficulties” teachers face, I wouldn’t call them difficulties. For teachers, it’s rather an exciting creative educational challenge: to transform their lessons in light of new opportunities and take part in this creation. To make the process smoother, we hold systematic seminars for experience-sharing, reflection, and the adoption of best practices. Still, it’s important to note differences between institutes: for example, the Institute of Foreign Languages has different approaches than the Institute of Pedagogy and Psychology of Education or the Institute of Digital Education. Each has its own subject matter and discipline-specific practices. But this, ultimately, creates the synergy of creative teams, where solutions necessary for advancing educational practice emerge.

— One of the conditions of the “Priority-2030” programme is developing research potential. What unique data and research opportunities does this project provide for your educational scientists and data scientists? Does MCU plan to develop its own educational AI algorithms, not just use third-party solutions?

— It’s easier to answer the second part first. Does MCU plan to develop its own educational AI algorithms? In fact, this is exactly what we are doing. For example, the engagement assessment model integrated into the Digital Mirror of the Lesson is fully our own development. The same goes for the digital adaptive textbook and the simulators. Partner technologies are integrated into our products (where applicable) based on ecosystem logic, designed for synergy and accelerating Russia’s technological leadership in developing complex educational products.

As for the first part, your question is not as simple as it seems. It is extremely multi-layered, and depending on the angle, the answer may differ. If I start diving into each individual STP project, describing unique data and related research opportunities would take quite some time. For instance, in the Digital Adaptive Textbook alone, we must speak about student typologies, learning strategies, personalisation methods, and, at the highest level, the conceptual framework built with AI to verify educational content.

Shifting the angle to simulators, we’d need to discuss data on competency gaps and their dynamics, behavioral strategies in simulator interaction, and the potential for identifying patterns and using machine learning to predict risks of unproductive interaction and to design personalised learning paths.

If I may, I’d prefer to answer not “bottom-up” but “top-down” — from the ontological perspective of the entire STP. Recently, from Anna Elashkina, an expert in AI methodology, I heard a phrase: “Artificial intelligence is a form of reflection of natural intelligence.” For us working in education (especially in the humanities), this is perhaps the most important insight. Because at the heart of all education is always the human being. Ultimately, education boils down to helping a person throughout life discover and develop what is human within them — through learning, upbringing, preparation for professional and career self-determination, and encounters with others and with oneself at different stages of life.

In this interpretation, all the diverse unique data our researchers and developers obtain through AI within the STP are like bricks paving the path toward understanding natural human intelligence.

— Do you plan to share your developments with other pedagogical universities and schools?

— The transfer of developments within our STP is a natural attribute of their life and growth. Without external feedback, any system risks quickly turning inward, stagnating before fully coming to light. So, to answer directly — yes.

The methods of transfer, however, can vary: from broad sharing of developments, research, and best practices at conferences (above all, our “Digital Didactics” conference) to specialised professional development and retraining programmes. Some may need scientific and technical services to fine-tune our technologies for their institutions. All this is feasible.

More importantly, it is only in such productive relationships that innovative technological educational products are born and developed — the very products the Priority-2030 programme pushes us to create.

Photo: MCU