
November interview: Aurélie Jean

Source: Frédéric Monceau

Interview with Aurélie Jean, a specialist in algorithms, researcher, entrepreneur (DPEEX, ISV) and best-selling author of books that popularize these subjects, which have such an important impact on our lives. During this interview, we discuss a range of topics, from algorithms to Artificial Intelligence (AI), not forgetting digital sovereignty.

 

[Emmanuel M]: Hello Aurélie, first of all thank you for accepting this interview. Could you tell us in a few words about your background and what drives you?

Following studies in mathematics, physics, and mechanics (with a minor in computer science), I completed a PhD in computational mechanics of materials and mathematical morphology. I then applied computational science to medicine for seven years (Pennsylvania State University, MIT) and then to finance for two years (Bloomberg). Since 2018, I have divided my time between consulting and development (In Silico Veritas), research (which gave rise to my second company in medicine, DPEEX), teaching, and writing (including my column in Le Point and my books).

What drives me every day is being able to use modeling and algorithmic simulation to solve concrete, complex problems of great magnitude. Like many scientists and engineers, I contribute to understanding our world and making it better by tackling high-impact problems, such as the one that gave rise to DPEEX: the pre-diagnosis detection of breast cancer.

 

[EM]: You explain that biases during the creation of algorithms are more or less inevitable. Could you briefly tell us about the main tools for avoiding them?

We can never say that our technology is free of algorithmic bias, just as we can never say that our code is free of bugs. That said, we can say that we have designed an algorithm according to good practices that minimize the risk. These include discussions with the business side, testing during development, testing during deployment, and testing in production (when the algorithm is used by millions of people). There are also computational methods applied to the algorithm once it has been trained or calibrated, which consist of extracting part of its logic. This is called computing algorithmic explainability (the heart of my book “Do Algorithms Make Law?”, published by L’Observatoire).
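The interview does not name a specific method, but one common post-training explainability technique of this kind is permutation feature importance: each input feature is shuffled in turn to measure how much the model's performance degrades, revealing which features its logic actually relies on. Below is a minimal sketch using scikit-learn; the synthetic dataset and the random-forest model are illustrative assumptions, not anything from DPEEX or the interview.

```python
# Minimal sketch of one post-training explainability technique:
# permutation feature importance. The data and model are illustrative
# assumptions, not anything described in the interview.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for a real problem.
X, y = make_classification(n_samples=1000, n_features=8,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in score:
# the larger the drop, the more the model's logic depends on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
ranked = sorted(range(X.shape[1]), key=lambda i: -result.importances_mean[i])
for i in ranked:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")
```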

 

[EM]: We can only observe the progress of the digital transformation of our society. What do you think about the risk of reification of the human being, as evoked by philosophers such as Jacques Ellul or, more recently, Michel Onfray?

 

I regret the words of some intellectuals who do not understand concretely how these technologies work, which prevents them from making technically sound arguments and therefore from proposing solutions that move beyond the dramatic posturing that is unfortunately common today. Yes, the risk of reification exists, but it is by no means inevitable; avoiding it requires understanding the scientific and technical ins and outs in order to build pragmatic, practical solutions. This is what the philosopher Gaspard Koenig did during his journey into the world of AI, which gave rise to his excellent book “The End of the Individual”, published by L’Observatoire in 2019. He denounces the erosion of individual uniqueness with concrete arguments about how these technologies work, some of which homogenize individuals by weighting general statistical trends heavily and losing what makes each person unique. On many occasions he opens the debate from a technical angle. Condemnation alone is not enough; it must be paired with answers that allow us to benefit from these technologies while setting aside the risks and threats to our humanity.

 

[EM]: On Artificial Intelligence, which is based on algorithms, how are France and Europe positioned in this global competition?

 

We clearly have limited private and public budgets compared to the US, for example. That being said, we have exceptional talent. I deeply believe that we should not think only “France” when imagining a strategy for innovation in algorithms. On the contrary, we must think Europe: combine data sets, unite talent, and share technologies. The GDPR (RGPD) is a revolution in the collection and use of personal data that we must take even greater advantage of. It is not said often enough, but the GDPR has inspired texts such as the CCPA (California Consumer Privacy Act).

 

[EM]: Let’s talk about sovereignty. For you, is digital sovereignty a valid concept? Do our political and industrial leaders really grasp the scope of this subject?

 

Sovereignty is certainly a political issue, as I explained in one of my columns in the magazine Le Point, “La souveraineté numérique est avant tout un sujet politique” (“Digital sovereignty is above all a political issue”). If we rely only on the technical and scientific capacities of the candidates when choosing a solution, we will systematically pick solutions from large, efficient players, often American ones. States must be willing to create an environment favorable to the development of new technologies on their territory, but they must also make clearly assumed choices to promote the economic growth of certain French or European players, such as choosing a French player to host citizens’ health data. In theory, political and economic leaders are aware of this, but in practice we still do not have a clear strategy, with missed opportunities (once again concerning the Health Data Hub).

 

[EM]: In the face of rising geopolitical tensions and a return to high-intensity conflict, can digital sovereignty protect our interests, in your opinion?

We must build digital sovereignty without slipping into counterproductive nationalism. That is the difficulty. It will be possible if our leaders gain a better knowledge of STEM and of technological advances, so as to build a coherent and ambitious vision.

 

[EM]: Aurélie, we are coming to the end of this interview. I would like to thank you again for the time you have given me. Could you give us, in a few words, a forward-looking conclusion?

I’ll end with a piece of advice for your readers. Be wary when you are presented with a binary world and fatalistic thinking. Life is often about nuance, and we always have a choice.
