By: Ricardo Israel - 08/09/2025
Is there a right way to educate? The Greeks posed the question more than two and a half millennia ago, and we still don't have a single answer, although, at least in what we call the West, every generation and culture has tried to find one since then. This hasn't been easy: even the Greeks never settled on a formula, and their city-states left us at least two paths, Athens and Sparta.
For some time now there has been a debate about how close we are to artificial intelligence (AI), but over the past year it has emerged not merely as something close but as part of our daily lives, and so suddenly that the answer we are beginning to seek carries the urgency of coinciding with a change of enormous proportions. This is not the first time: during the Industrial Revolution, with technologies such as steam, humans multiplied the power of their muscles; now we are told that we stand at the dawn of something far more powerful, the amplification of brain power.
Everyone seems to agree that these new technologies will transform our lives; the debate is about the speed of change and how to avoid its social costs. Optimists assure us that nothing in the historical record allows us to affirm that there will be fewer jobs as a consequence of these advances, since new jobs are created alongside those destroyed, just as happened with the internet. Pessimists counter that the artificial intelligence revolution is different from previous ones: the change will be so rapid and will reach so many sectors that this time the disappearance of jobs will outstrip society's capacity to create new ones.
The debate is not new, and I have published books on this topic in previous years (1). The truth is that similar debates have arisen with every technological revolution, since new technologies burst onto the scene promising a more diverse world with greater personal freedom. Now, however, we are told that those who control the system could hold an unprecedented degree of power and unprecedented information about any person in the world, with the added fact that technological revolutions, unlike social revolutions, admit no counter-revolution.
The issue seems to be everywhere, with questions for which we still don't have adequate answers. The first is how employment adapts and how quickly the skills needed by those entering the workforce change. The second is a political question in the best sense of the word: how to regulate artificial intelligence without simultaneously harming innovation.
On the other hand, the news is incessant. In the United States, at the end of August, the first lady presented an initiative to introduce AI education from ages as early as four, together with the responsibility required of parents regarding their children. In those same days, her husband invited some of the leading AI magnates to the White House, including those whose companies had censored him during his first term, an affront that seems forgotten in the alliance now established between them, because what lies behind it is simply too important: in this field will be decided the fight between China and the United States over which will prevail as the world's great superpower, a battle in which tariffs, mutual restrictions on the sale of advanced chips, and the availability of rare earths appear as mere incidents. Indeed, at the same time, Xi Jinping also spoke about AI when receiving leaders as important as Russia's Putin and India's Modi in China.
The questions don't stop there, and a significant one is whether it's appropriate to speak of "intelligence" at all. Given the current state of things, perhaps we should use another term, or develop a concept that better reflects the difference between machines and humans. In fact, at the dawn of mass computing the term "artificial intelligence" was also used, only to be abandoned later (2). The invitation, in other words, is to ask ourselves whether a machine can think, or whether we need a different term or concept altogether.
What is beyond doubt is that we are experiencing a true cultural shift, one of those that define an entire era: historical changes that I prefer to call "long history," to differentiate them from the simple news event of "short history." I say this to better understand what is happening: if we were to look for common denominators to distinguish historical eras, we would probably say that, in the West, theological factors defined the Middle Ages of Christian Europe, legal factors defined ancient Rome, and aesthetic canons defined classical Greece; today it is the role of science and technology that defines our culture.
The problem we face is that, although this role of science and technology has been around for some time, there is no evidence that education is adapting at the required speed. In this regard, I don't believe the solution lies in teaching more of the same, but rather in regaining a holistic view of the educational process, this time with a critical approach.
In fact, educational systems also failed to fully adapt to the changes brought about by computing, the internet, and the digital society. Even earlier, at the beginning of the 20th century, intellectual history brought with it a novelty: specialization, promoted essentially by the university system. Before that, until the end of the 18th century, general rather than specific knowledge was pursued, and philosophy was closely linked to science.
In fact, the emergence of computing and the smartphone disrupted the role the teacher had acquired, equivalent to that of the Council of Elders in primitive societies: the guardian of knowledge. It was the elder who accumulated knowledge like an archive, a role later passed on to the teacher in the classroom. By the second half of the 20th century, however, the best-informed accumulators of knowledge had ceased to be humans, who had ceded their place to electronic ones.
That the accumulation of knowledge could be mechanical and external to Homo sapiens was already a cultural change, even if it wasn't immediately perceived as one. I am convinced that this matters for so-called AI, in the sense that the future of education may lie in the past, because to me it reveals a trend: the declining importance of specialists and the rebirth of generalists. Education should prepare educated people, not people who merely repeat the ten lines they have read on their cell phone about a topic and come away with the false idea of "knowing" and understanding; we are all aware of the enormous damage done by the mixture of arrogance and ignorance displayed by the many who opine without knowing.
It is therefore not a bad thing that we have more questions than answers. The debate on whether the smartphone should be banished from the classroom must continue, and not repeat what happened just a few decades ago, when there was no real discussion about whether pocket calculators should replace the teaching of basic arithmetic operations in the classroom; that teaching disappeared from many places before anyone had answered how good or bad the change was.
The question today, from elementary school to university, is whether it is useful to prepare ever more specialized individuals when information is, and has long been, available at our fingertips, or whether, instead of moving through increasingly arbitrary stages, we should try to understand the world we live in. There is therefore probably a broader consensus that it is more beneficial to invest in elementary rather than university education, since differences in opportunity really become evident at the beginning of training rather than at the end.
There are so many important questions for which we have no answers. For example, what are the mechanisms that make us think? If a machine tells us it is thinking, are we going to believe it or not? There are so many things we are not sure we understand, starting with: what is intelligence? The answers we have remain tentative, since what we do is measure it rather than understand it; at times it even looks like a convenient 20th-century replacement for 19th-century craniometry.
Intelligence sometimes appears as a mirror image of skill, and we find different types of intelligence—verbal, mathematical, spatial—but it's also found in music and in the body, when one thinks of elite athletes, who also use reasoning processes in their movements. Furthermore, it's often difficult to separate intelligence from its cultural roots, and different types of intelligence have been valued differently throughout history.
Something similar happens with time. A clock can measure it, but it doesn't give us a precise definition. This lack can also extend to information, since we know how to manipulate and process it, but we don't know the precise difference between it and knowledge, nor what separates it from simple data.
And now societies must face the challenges that AI brings. The first is to ensure that learning (the new) surpasses training (the known). The second is for education to go beyond mere instruction and aim to develop good citizens. The third is to rethink what is basic, what is fundamental: is it a minimum base of knowledge or a set of fundamental values?
In other words, how do we best educate: by teaching everything, or by teaching what matters most? That question is in turn linked to how visual culture can be made compatible with the contribution of books, in the sense of a different kind of literacy that integrates two worlds which today seem to be moving in opposite and mutually exclusive directions. Screens provide information constantly throughout life, but, as Umberto Eco explained in his defense of books ("No One Will End Books," with Jean-Claude Carrière, 2010), books remain irreplaceable for understanding and making sense of a society oversaturated with data, one with the growing feeling that it is being misinformed rather than cultivated. The mistake, what hasn't worked so far, is treating the two as alternatives: they should not be used on alternate days, but simultaneously.
With the emergence of AI, a fourth challenge for education is to teach how to process and manage information, not how to accumulate it. The reason is that no educational system can transmit, even over a lifetime, what has been accumulated in a single specialty. The true objective of the system is therefore to teach understanding and explanation. Rather than having unique truths imposed on them, in the era of so-called AI it is still concrete, flesh-and-blood human beings who must learn to evaluate and prioritize the information they receive.
This is no easy task, since only recently have we begun to ask how much human beings know at birth, in order to truly understand how much we can teach them and how they can be instructed in a formal education system without stifling that wonderful ability to absorb information like a sponge which we have in our early years; in adult life it is difficult, over a similar period, to learn as much again as was achieved in those first years.
Furthermore, any reorientation of the educational process requires further progress in our understanding of the brain, in the distinction between the right and left hemispheres, between rational skills on the one hand and imagination and intuition on the other. Lacking this, we continue to speak in terms of stages, from preschool to higher education, which seems even more arbitrary in light of the potential brought by AI, especially given the need for education to recover the formative role it held for so many centuries rather than the much more recent role of job training.
In other words, to adequately educate 21st-century students, every curriculum should provide unity, break down the artificial barriers of disciplinary specialization, and teach students to motivate themselves and to seek the "whys" rather than the "hows." Only in that unity do I see the possibility for many people to stop feeling overwhelmed by the complexity of the life they have been given, and by that sense of permanent crisis that besets so many who are unable to make sense of the information they receive. A fifth and final challenge, therefore, is to achieve a minimum capacity for generalization, in order to make sense of the multiplicity of information received.
For all these reasons, returning to the Greeks' question, I believe that every educational policy must rest on two pillars: the first is diversity, abandoning all rigidity; the second is that the educational system must educate, not necessarily provide professional accreditation. This requires rethinking the idea of the university that has predominated for so long, since in the age of AI a profession becomes an option, not an obligation.
What we are experiencing is part of a process impossible to stop, networks that become highways from which education cannot step off or turn aside, since education is the minimum needed to live, work, and prosper. We have an obligation to look to the future with optimism. We have made enormous progress since the last Ice Age, the twelve to fifteen thousand years that constitute the civilizational record of our species, and our generation lives more safely than any before it, so we should not allow ourselves to be overwhelmed by the false sense of an unmanageable crisis.
What is missing?
An understanding that living, working, or prospering is not and cannot be the sole goal of education, since there is a vital area for human beings where technology is of secondary importance: the critical attitude, that which allows new ideas to be advanced and discovered, and which should be seen not as a sudden spark but as an attitude, a way of being, that the educational system must foster and refine.
Unfortunately, science is perceived as distant today, so we must not lose sight of people, of whom Reuven Feuerstein taught that "nothing is more stable in human beings than their capacity to change." And if human beings constantly modify themselves and are modified, why should they not do so now? The question for the formal education system is whether it will assist or hinder this change. The new paradigm, then, should be to teach how to change intelligence, on the basic postulate that, if intelligence is modifiable, it should be a right. An intelligent person is therefore also one with the capacity to change, within an educational system that must be a continuous and open learning system.
To better educate, a fertile ground would be to have a critical attitude toward the current model of science, asking: If the change has been so rapid and in so many sectors, why then shouldn't it also affect the activity known as science? In other words, aren't we also witnessing a paradigm shift here?
Science is not the only way to know; however, it is the only one that defines ignorance as its greatest enemy, since it is more interested in what is unknown than in what is known. It is not only a competitive and international activity, but also one that forces us to change our minds if new facts are discovered that change the previous consensus.
However, its main enemy is internal, and it becomes evident when hyperspecialization impedes a comprehensive view, when the mistaken belief prevails that the whole can be explained by the isolated study of its parts. At a moment when artificial intelligence is emerging, the question for education must be whether the current model of science is capable of explaining, coherently and without fragmentation, the world we live in.
1) In 1982 I published my first book, "A Close World: The Political and Economic Impact of New Technologies," concerned primarily with computing and the internet, which were just emerging at the time (Santiago: Institute of Political Science, University of Chile, 189 pp.). The pace of change was such that there was no second edition, given how quickly what had been expected for a more distant future arrived.
The second book, a collection of essays, was titled "Education, Science and Technology: Reflections on the End of the Millennium" (Santiago: Lom Ediciones, 1998, 139 pp.).
2) The 1982 book contains a chapter dedicated to the subject ("Artificial Intelligence"), which concludes that "The thinking machine, with a level of intelligence similar to or greater than that of human beings, still belongs to the realm of the laboratory. There is no evidence that we have it" (p. 70).
@israelzipper
-Master's and PhD in Political Science (Essex University), Bachelor of Law (University of Barcelona), Lawyer (University of Chile), former presidential candidate (Chile, 2013)
«The opinions published herein are the sole responsibility of their author».