
Artificial Intelligence (AI): an opportunity or a threat?


For the past few years, the new buzzword in the field of digital transformation has been AI, Artificial Intelligence. The concept has everything it needs to awaken our wildest fantasies, especially since it is a favorite subject of science-fiction works such as Terminator (Skynet), 2001: A Space Odyssey (HAL), A.I. and many more.

Artificial Intelligence today

Let’s take a quick (and therefore necessarily partial) overview of what Artificial Intelligence represents today.

If we look first at its definition, according to the collaborative encyclopedia Wikipedia, “Artificial Intelligence is the set of technologies used to simulate intelligence”. It is not yet a discipline in its own right, but rather a set of disciplines, using different technologies to simulate or replace humans in the exercise of their cognitive functions.

What are its technologies and concepts?

Algorithmics: According to Wikipedia’s definition, it is the study and production of the rules and techniques involved in the definition and design of algorithms, i.e. systematic problem-solving processes that describe precisely the steps needed to solve a problem. The first uses of these techniques date back to Antiquity.
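To make this concrete, one of the oldest known algorithms is Euclid’s method for finding the greatest common divisor, described around 300 BC. This is my own illustrative sketch, not an example from the article:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    by (b, a mod b) until the remainder reaches zero; the last
    non-zero value is the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```

Each step of the loop is precisely defined and the process is guaranteed to terminate, which is exactly what the definition above means by a “systematic problem-solving process”.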

To learn a little more about this discipline and demystify its uses, I recommend this accessible and enjoyable popular-science book: “On the Other Side of the Machine” by Aurélie Jean. There is also the whole problem of the explicit or implicit biases of the people who build algorithms. It is therefore important to state the initial hypotheses clearly and to identify those biases so they can be taken into account when interpreting the results.

Machine Learning: These are statistical methods whose objective is to let a computer learn from data and thus improve its performance at solving tasks. The process rests on two stages: a training phase, in which the targeted model is estimated on a finite volume of data, followed by a production phase, in which the established model is fed new data to process.
The processing model is built on algorithms, so special care must be taken with possible biases that could invalidate all or part of the results obtained.

Deep Learning: This is a family of machine learning methods that attempt to model data at a high level of abstraction using architectures built from different non-linear transformations. This technology, too, is based on algorithms. This class of machine learning algorithms uses several non-linear processing layers, each layer taking the output of the previous one as its input, with learning taking place at several levels of detail or data representation.
These somewhat abstruse explanations can be summed up by their results: these technologies have enabled major progress in image and sound processing, with facial recognition as a concrete application, which in turn raises a whole host of questions (on this subject I recommend this report from Arte).
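The idea of “stacked non-linear layers, each consuming the previous one’s output” can be shown in a few lines. This is a bare forward pass with hand-picked, hypothetical weights; real deep learning also includes the training step that adjusts those weights:

```python
def relu(values):
    """A common non-linearity: negative values are clipped to zero."""
    return [max(0.0, v) for v in values]

def layer(inputs, weights, biases):
    """One dense layer: each output unit takes all of the previous
    layer's results, computes a weighted sum plus a bias, then
    applies the non-linearity."""
    return relu([
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ])

# Two stacked layers: the second consumes the first's output.
x = [1.0, -2.0]
h = layer(x, weights=[[0.5, -1.0], [1.0, 1.0]], biases=[0.0, 0.5])
y = layer(h, weights=[[1.0, -0.5]], biases=[0.1])
print(y)
```

Because each layer applies a non-linear transformation to the previous layer’s output, stacking layers lets the network represent progressively more abstract features of the input.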

Neural networks: Contrary to what you might think, this is a fairly old concept: the first formal neuron (a simplified model of a biological neuron) was published in 1943 by Warren McCulloch and Walter Pitts. Much work followed, highlighting limitations and then enabling new advances that have led to the current situation.
Their strong point is their ability to learn from experience and to derive statistical and probabilistic rules from it. But this is also their limit, because the more complex the problem, the greater the volume of real data needed for training. The second problem encountered when building these networks is their opacity. Their complexity does not necessarily allow us to understand their way of “thinking”, and the results obtained cannot always be verified by the creators of these networks.
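The formal neuron mentioned above can be written in a few lines: a weighted sum of inputs compared against a threshold. This is a sketch in the spirit of McCulloch and Pitts, with weights chosen by hand rather than learned:

```python
def formal_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts style unit: fires (1) when the weighted
    sum of its inputs reaches the threshold, stays silent (0)
    otherwise."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2, the neuron computes logical AND.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", formal_neuron([a, b], [1, 1], 2))
```

The limitation discussed above follows directly: a single unit like this can only separate inputs with one threshold, so complex problems require large networks of such units, and with them large volumes of training data.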

General A.I.: reality or myth?

To summarize the previous paragraph, what we currently gather under the concept of Artificial Intelligence is a set of techniques that allow computers to perform human cognitive tasks very effectively. It is true that, for each of these tasks taken individually, the machine eventually outperforms humans. We have seen it in chess matches between masters and an A.I. (Giraffe), and more recently in Go (DeepMind’s AlphaGo). In both examples, the A.I. beat the best human champions. However, while these results should not be minimized, it is important to remember that these same humans, although specialized in their respective fields of excellence, know how to do a whole range of other activities that machines cannot yet do.

We are currently seeing that the prospect of creating a general A.I. arouses the excitement of a certain number of researchers. This enthusiasm is shared by large industrial groups, given the gains in productivity and control it could bring.
This race, however, raises many ethical questions about the future of these “conscious” intelligences and their place in human society (see the article by Numerama on this subject).

It should be noted, however, that more and more voices are being raised to demand better control over the use of A.I. in many human activities.
Take the example of A.I. used in finance for high-frequency trading. These techniques, poorly mastered or perverted by unscrupulous providers, can cause flash crashes, or even full-blown crashes, that do not reflect the reality of the market, through algorithm-driven feedback loops that amplify market reactions and cause significant losses for investors (eslsca-blog).

In the military field, where the use of A.I. and autonomous weapons systems is a reality under construction (IFRI), some people worry: crisis simulations seem to show that the increasingly massive use of automated systems and fleets of drones (air, land or naval) reduces the possibilities of de-escalation compared with situations where the human factor is present (cf. the article by Metadéfense on this subject).
In the social field, the intensive use of A.I. could have devastating effects on our lives in terms of democracy (cf. the Arte report), jobs and social relations.
Are we heading for a brave new world?

In conclusion: A.I., a matter of sovereignty?

Is Artificial Intelligence, then, a matter of sovereignty?
When we look at all the sectors in which A.I. is developing and gaining momentum (finance, defense, security, health, etc.), we can only recognize the major strategic interest of this field of technology and research. It must be acknowledged that France, which long had an educational system capable of training brilliant mathematical minds (but is this still the case?), had and still has all the assets needed to shine in these fields.
It is important that we ensure that French start-ups specializing in this area, as well as their ecosystem, are preserved and protected from American or Chinese appetites. Our future and our independence are at stake with regard to the geostrategic choices that we, the French people, will want to make. This will also have repercussions on our model of society, which remains different from those proposed by the two current world superpowers, the USA and China.
And you, how do you position yourself?
