Between ethical challenges, education, and societal transformation, discover how artificial intelligence is redefining our economic and organizational models. Here's our reflection on the discussions held at the Université de la Terre.​

Our future must be reconsidered and rebuilt now, in connection with the living world. Solving this equation is imperative for all and will be at the heart of the Université de la Terre.
— Université de la Terre

During this event, we participated in the debate "Artificial Intelligence: What Civilizational Project?" moderated by Jérôme JUBELIN (President of UMANAO, host, and speaker) and organized in partnership with IESF.

Given that artificial intelligence (AI) now plays an essential role in our daily lives, what should our approach to this technology be?

How far can AI go? Should we be wary of it? Encourage it?
And above all, how will it impact the development of our societies?​

Reflecting on a new civilizational project has become a necessity.

This is the first time in human history that we are witnessing the outsourcing of cognition [...]
— Jérôme JUBELIN

AI: A Definition That Can Seem Vague and Misleading

The debate began with a clarification of the definition of AI.​

According to Laurence DEVILLERS (Professor of Artificial Intelligence at Sorbonne University and researcher at CNRS), it's essential to revisit the notion of "intelligence."

As she observed after over 20 years of studying this field, AI does not possess real intelligence and is incapable of feeling any emotion. For the professor, believing otherwise is an illusion. She emphasizes that AI is entirely deterministic and predictable, even if some randomness can be introduced (notably in text generation).​

AI is neither intelligent nor emotional.
— Laurence DEVILLERS​
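
To make this point concrete, here is a minimal, purely illustrative Python sketch (toy logits standing in for a real model's output, not an actual language model): the computation that scores candidate tokens is deterministic, and randomness only appears if it is deliberately injected at the sampling step, as Laurence DEVILLERS notes for text generation.

```python
import math
import random

# Toy next-token scores: a model's forward pass is a deterministic computation
# that maps a context to a score (logit) for each candidate token.
logits = {"cat": 2.1, "dog": 1.7, "car": 0.3}

def softmax(scores, temperature=1.0):
    """Turn logits into probabilities; temperature rescales them before sampling."""
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

# Greedy decoding: no randomness at all, the same context always yields the same token.
greedy_choice = max(logits, key=logits.get)

# Sampled decoding: randomness is injected only here, at the selection step.
rng = random.Random(42)  # fixing the seed makes even this step reproducible
probs = softmax(logits, temperature=0.8)
sampled_choice = rng.choices(list(probs), weights=list(probs.values()), k=1)[0]

print("greedy:", greedy_choice)
print("sampled:", sampled_choice)
```

With greedy decoding the output never varies; with temperature sampling the same deterministic scores can produce different tokens from one run to the next, unless the random seed is fixed.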

Olivier ALEXANDRE (Sociologist, CNRS researcher, member of the Center for Internet and Society, and lecturer at Sciences Po Paris) added that the term "artificial intelligence" was coined by John McCarthy in 1955, in the proposal for what became the Dartmouth workshop, partly to attract funding. The choice of name was already strategic at the time.

Today, "AI" has become a boundary term, crossing multiple disciplines or fields of knowledge, with slightly different meanings depending on the context. For instance, it's used indiscriminately in the media (are they referring to a machine learning algorithm, generative AI, or a neural network?).

Having become a malleable concept, it generates disproportionate hopes and fears among users, investors, and decision-makers alike.​

The Lenstra Note:

This terminological and technical complexity is a challenge we encounter daily at Lenstra.​

We assist our clients in demystifying AI concepts, identifying appropriate uses based on business needs, and choosing suitable architectures, as illustrated in this use case. Even for professionals, these notions are worth clarifying, in terms of both capabilities and limitations, so that they genuinely serve businesses and their performance.

Is It Relevant to Compare AI to Humans?

[…] it's foolish, like comparing a race car to a world champion runner.
— Laurence DEVILLERS​

Another significant discussion focused on the characteristics of AI and how its performance is evaluated.

One of the first tests proposed was that of mathematician Alan Turing, as recalled by Olivier ALEXANDRE. Turing imagined an experiment to assess a machine's ability to imitate human conversation: a blind, text-based exchange in which a human judge converses with both a computer and another human without knowing which is which. If the judge cannot reliably tell the computer from the other human, the machine passes the Turing test.

However, this "test" does not reveal genuine intelligence but merely an illusion, an imitation of human behavior.
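
For readers unfamiliar with the setup, here is a minimal, purely illustrative Python sketch of the imitation game described above (toy replies and a judge who can only guess at random, not a real evaluation). The machine "passes" when the judge's accuracy stays close to chance.

```python
import random

random.seed(0)

def imitation_game_round(judge, human_reply, machine_reply):
    """One blind round: the judge sees two unlabeled replies and must guess
    which one came from the machine. Returns True if the guess is correct."""
    replies = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(replies)                      # hide which reply is which
    guess = judge(replies[0][1], replies[1][1])  # judge returns index 0 or 1
    return replies[guess][0] == "machine"

def clueless_judge(reply_a, reply_b):
    """A judge who cannot tell the replies apart can only guess at random."""
    return random.randint(0, 1)

rounds = 1000
correct = sum(
    imitation_game_round(clueless_judge, "I think so, yes.", "I think so, yes.")
    for _ in range(rounds)
)
# Accuracy close to 50% means the machine is indistinguishable from the human
# in this setup: the criterion Turing proposed for "passing" the test.
print(f"judge accuracy: {correct / rounds:.1%}")
```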

Florent LATRIVE (Deputy Director of News at Radio France, in charge of Digital & AI) responded by recounting the reaction of Hervé LE TELLIER (author and 2020 Goncourt Prize winner) when he was fooled by a text entirely generated by AI during a writing exercise.

"Oh my! That's impressive!" he exclaimed. (More details in the post below.)

Although sensationalist, this method of evaluating AI's effectiveness is strongly criticized by Laurence DEVILLERS, who laments the constant comparison with human intelligence. According to her, better solutions must be found to measure AI's performance without making comparisons she deems "foolish" and "irrelevant."​

The Lenstra Note:

At Lenstra, we share this vision: AI is not meant to replicate humans but to augment them.​

Our expertise focuses on designing Data and AI systems that embrace their role as tools—interpretable, controllable, and always serving end-users.​

How Is AI Shaping the Future of Our Societies? An Ongoing Evolution?

Following this introduction on AI and its attributes, Jérôme JUBELIN highlighted that AI compels us to become aware of our relationship with the world.​

According to him, the question to ask would be: "Civilization: What projects for AI?"​

And there are already some answers.​

China and the USA: Two Contrasting Visions of AI, One Tool with Multiple Uses

Gilles BABINET (entrepreneur and co-president of the French National Digital Council) presented the contrasting approaches of two major powers in the face of the rise of artificial intelligence, emphasizing how these technologies profoundly shape societies.

On one side, China integrates AI into its management and governance systems. On the other, the United States leans towards a more liberal vision of AI, seen above all as a lever for transforming organizations and cutting costs.

According to Gilles BABINET, these trends, though very different, highlight two major risks associated with the increasing integration of AI: advanced automation that could weaken human employment, and the strengthening of tools of control within societies.

AI & Dependencies: What Are the Stakes?

Towards an Addiction to Content?

Beyond governments, AI has also begun to impact our lifestyle habits, as discussed by Olivier ALEXANDRE.​

This is particularly visible on social networks (Facebook, Instagram, TikTok), where recommendation algorithms have become so effective that the near-infinite personalized stream of short videos has become addictive for our brains.

By playing on the instant gratification of our reward system, much as sugar does in our diet, these platforms are driving an accelerated commodification of our attention.

And this is not without consequences.​

Taking the example of the USA:​

  • The average screen time among teenagers reaches 7 hours and 45 minutes per day.
  • The depression rate has tripled.
  • The number of reported mental disorders is multiplying.

A New Work Companion

Another growing dependency is the delegation of work tasks to generative AI tools, as stated by Laurence DEVILLERS.​

In a study conducted at Sorbonne University, she examined the behavior of students asked to complete exercises, with the option of being assisted by OpenAI's ChatGPT (version 4). The set of exercises was designed so that some were easy for the students but challenging for the AI, while others were difficult for the students but easy for the AI.

AI is not a tool like a hammer.
– Laurence DEVILLERS

Unsurprisingly, students used AI to solve the exercises for which they had no answer, or when in doubt. More surprisingly, a significant portion of participants still turned to ChatGPT even when they already knew the answer.

It's this latter behavior that Laurence DEVILLERS warns against.

AI is not a tool like any other, one over which we have total control and whose results are entirely predictable. According to her, users need to keep a critical distance from it, especially in learning situations. She also fears that the accelerated spread of its use may lead to a decline in creativity, out of laziness or convenience.

The Lenstra Note:

At Lenstra, we ensure that our consultants develop AI systems trained solely on controlled data and without reliance on public generative AI corpora. Our tools are always based on ethics, transparency, and concrete utility.​

This involves implementing data infrastructures tailored to client needs, ensuring traceability of processes, and guaranteeing data sovereignty at every stage of the lifecycle.

Which Path for Europe?

Finding the Right Stance

Faced with these new risks and emerging challenges, does Europe have a voice?

According to Gilles BABINET, this is its moment of truth in its role as a regulator.

We need widespread public awareness in order to reflect and build our future with AI collectively.

Fortunately, some initiatives have already been implemented at the European level regarding AI and personal data protection, such as the AI Act, the GDPR, and the Schrems II ruling.

However, we must not adopt a submissive stance toward AI, insists Florent LATRIVE.

If we continue to view this technology as a black box, it will remain one — preventing us from understanding it, mastering it, and guiding its evolution in alignment with our values.

The Lenstra Note:

These European values and initiatives are fully aligned with Lenstra’s approach, where regulatory compliance is integrated from the design phase of data architectures.

We already apply current standards and pay close attention to anticipating and implementing new regulations.

Our infrastructures are designed from the outset to combine compliance, performance, and resilience.

Toward the Development of AI Education and Ethics

To achieve this, we must demystify AI, its capabilities, and its uses — starting from a young age.

This has become essential, according to Laurence DEVILLERS, who highlighted the launch of ethical digital capsules by the Blaise Pascal Foundation, aimed at developing critical thinking on these topics in schools.

To address these issues in the workplace, Gilles BABINET highlighted the CaféIA initiative, which promotes AI education through discussions and interactive workshops. Launched by the French National Digital Council, the initiative regularly publishes resources and examples showing how to adopt and adapt this approach.

Finally, Olivier ALEXANDRE ended on a positive note, drawing a parallel between AI's arrival in our lives and that of the internet 30 years ago. It is a new medium we must learn to live with, educate ourselves about over time, and evolve alongside.

Above all, AI should be seen as a tool to simplify our lives and reduce our workload — not to replace us.

AI must serve a utilitarian purpose to ensure its societal impact benefits the greatest number.

The Lenstra Note:

To give our teams the tools for informed and healthy adoption of AI, we organize internal knowledge-sharing sessions on applied AI in data, where our experts share experiences and real-world use cases.

Convinced of the importance of these uses for our growth, we are also developing internal applications to extend and deepen them.

Final Word

Shared by Laurence DEVILLERS:

[...] When we work for tomorrow and the unknown,
we act with reason, because we must work for the uncertain [...]
– Blaise PASCAL

The Lenstra Note:

We believe that AI can only be a sustainable opportunity if it rests on solid foundations: rigorous data governance, robust infrastructure, and a deeply human-centered approach to technology.

This is why we support our clients in setting up reliable, ethical, and resilient data environments, to make AI a lever for progress — not an end in itself.

Q&A

Should we create a legal personality for AI and robots?
– Legal professional

Laurence DEVILLERS

Absolutely not!
That would mean granting rights and responsibilities to a machine.

Gilles BABINET

In a way, that already exists.
Companies must be held accountable for the AI solutions they develop.

(More details on this complex topic in this [article by the Murielle Cahen law firm].)

How can we reconcile ecology and AI development?
– Student

Olivier ALEXANDRE

There is a great deal of R&D focused on reducing energy consumption and, consequently, lowering CO2 emissions.
Significant progress has been made in embedded AI (e.g. recent smartphones, robotics), where the power consumption of models is constrained by the size and architecture of the chips used.

Laurence DEVILLERS

The company Mistral AI recently partnered with Veolia to use generative AI to improve resource management and energy production.

(More details in this [Veolia press release])

The term “Artificial Intelligence” can be misleading and often creates false hopes or expectations among users and investors.
If you could rename it, what would you call it instead?
– Thomas, author of this article (Lenstra)

Gilles BABINET

Cognitive system

Laurence DEVILLERS / Florent LATRIVE

Imitation / Mimicking machine

Olivier ALEXANDRE

Information system

Bibliography

Présentation
Intelligence artificielle : quel projet de civilisation ?
Dartmouth workshop
ChatGPT peut-il battre un prix Goncourt ?
Veolia et Mistral AI s'associent pour révolutionner la gestion de l'efficacité des ressources grâce à l'IA générative et accélérer la transformation écologique
Avocat en ligne
Elon Musk à la Maison-Blanche : entre idéologie et gouvernance, quel avenir pour la politique états-unienne ? - IRIS
TikTok et psychologie : l'app révolutionne les stratégies d'entreprise
Pensées de Blaise Pascal
Capsules Éthique du Numérique pour les Enfants - Fondation Blaise Pascal
30 ans du Web : les débuts mouvementés de l’Internet en France