or a number – and each neuron will have a few pixels to analyze. The last layer will indicate "it's a T with a probability of 0.8" or "it's an I with a probability of 0.3". A backpropagation operation is then performed, from the final result back through the network, to adjust the parameters of each neuron.
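As a minimal sketch of this training loop (our own hypothetical illustration in Python with NumPy; the 4 × 4 image, the two letter classes "T" and "I" and the learning rate are assumptions, not taken from the source), the following code performs repeated forward passes and backpropagation steps on a single-layer classifier:

```python
import numpy as np

# Hypothetical toy setup: a flattened image of pixels is scored
# against two classes ("T" and "I"); softmax turns the scores into
# probabilities, and each backpropagation step adjusts the weights.
rng = np.random.default_rng(0)

pixels = rng.random(16)                   # a flattened 4x4 "image"
target = np.array([1.0, 0.0])             # ground truth: it is a "T"
W = rng.normal(scale=0.1, size=(2, 16))   # one weight row per class
b = np.zeros(2)

for step in range(100):
    # Forward pass: raw scores, then probabilities via softmax.
    scores = W @ pixels + b
    exp = np.exp(scores - scores.max())
    probs = exp / exp.sum()

    # Backpropagation: the gradient of the cross-entropy loss with
    # respect to the scores is (probs - target); propagate it back
    # to each parameter and take a small corrective step.
    grad_scores = probs - target
    W -= 0.5 * np.outer(grad_scores, pixels)
    b -= 0.5 * grad_scores

print(probs)  # e.g. "it's a T with a probability of ~0.99"
```

Each pass nudges the parameters so that the probability assigned to the correct letter rises, which is exactly the readjustment described above.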
The machine is programmed to “learn to learn”. AI does not exist to replace people, but to complement, assist, optimize and extend human capabilities. There are two types of AI:
– weak AI: its objective is to relieve people of tedious tasks, using a computer program that reproduces a specific behavior. This AI is fast to program and very powerful, but has no capacity to evolve. It is the AI of today;
– strong AI: its objective is to build increasingly autonomous systems, that is, algorithms capable of solving problems on their own. It is the approach closest to human behavior. This AI learns and adapts very easily. Thanks to algorithmic feedback loops, the machine can modify the internal parameters that govern the representation of each stratum, building it from the representation of the previous stratum. These strata of features are learned by the machine itself, not by humans, as sketched just below. On this basis, we can say that the machine becomes autonomous and intelligent, constructing its own "computational" structures and relying on its own axiomatic decisions. It is the AI of the future, which could be developed in about 10 years.
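To make the idea of strata concrete, here is a minimal sketch (our own illustration in Python with NumPy; the layer sizes are arbitrary assumptions) in which each layer computes its representation from the representation of the previous stratum. The weight matrices are precisely the internal parameters that a feedback loop such as backpropagation would modify:

```python
import numpy as np

rng = np.random.default_rng(1)

# Three strata: each weight matrix maps the previous layer's
# representation to the next. These weights are the internal
# parameters the machine itself adjusts during learning.
layer_weights = [rng.normal(size=(8, 16)),
                 rng.normal(size=(4, 8)),
                 rng.normal(size=(2, 4))]

def forward(x, weights):
    """Build the representation of each stratum from the previous one."""
    representations = [x]
    for W in weights:
        x = np.tanh(W @ x)          # non-linearity between strata
        representations.append(x)
    return representations

strata = forward(rng.random(16), layer_weights)
for i, r in enumerate(strata):
    print(f"stratum {i}: representation of dimension {r.size}")
```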
WEAK AI.–
Weak AI, or narrow AI, simulates specific cognitive abilities such as natural language comprehension, speech recognition or driving. It performs only the tasks for which it is programmed and is therefore highly specialized. It is a machine for which the physical world is somewhat enigmatic, even ghostly, if it perceives it at all; it has no awareness of time. This AI is unintelligent and works only on the basis of scenarios pre-established by its designers and developers.
STRONG AI.–
Artificial general intelligence (AGI), or strong AI, has reasoning abilities similar to, and even greater than, those of human beings. It is endowed with capabilities that are not limited to particular areas or tasks. It reproduces, or aims to reproduce, a mind, or even a consciousness, on a machine: that is, an evolving machine with its own reasoning and consciousness, capable in particular of independently elaborating strategies and/or decisions that go beyond human beings, in order to understand them so as to help them (in the best of cases) or to deceive or even destroy them (in the worst of cases).
From a general point of view, AI can be pictured as an algorithmic matrix that aims to optimize decisions "justly or coldly". Naturally, the morality or fairness of this judgment is not predefined: it depends, on the one hand, on how the rules are learned (the objective criterion that has been chosen) and, on the other hand, on how the learning sample has been constructed. The choice of the mathematical rules used to create the model is crucial. Just as a human analyzes a situation before changing their behavior, AI allows the machine to learn from its own results in order to modify its programming. This technology already exists in many applications, such as on our smartphones, and should soon extend to all areas of daily life: from medicine to the autonomous car, through artistic creation, mass distribution, and the fight against crime and terrorism. Machine learning not only offers the opportunity to automatically exploit large amounts of data and identify patterns in consumer behavior; we can now also act on these data.
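The dependence of this "judgment" on the learning sample can be shown in a few lines. In this hypothetical sketch (our own, in Python with NumPy; the scores and the midpoint rule are invented for illustration), the same fixed learning rule produces two different decisions about the same case, simply because the two training samples were constructed differently:

```python
import numpy as np

def learn_threshold(positive, negative):
    """A deliberately simple 'model': split the classes at the midpoint
    of their sample means. The rule (the objective criterion) is fixed;
    only the learning sample changes below."""
    return (np.mean(positive) + np.mean(negative)) / 2

# Hypothetical scores (e.g. creditworthiness) for accepted/rejected cases.
balanced_pos = np.array([7.0, 8.0, 9.0, 7.5, 8.5])
balanced_neg = np.array([2.0, 3.0, 4.0, 2.5, 3.5])

# A differently constructed sample: only borderline positives observed.
skewed_pos = np.array([6.0, 6.2, 6.4])
skewed_neg = balanced_neg

t_balanced = learn_threshold(balanced_pos, balanced_neg)
t_skewed = learn_threshold(skewed_pos, skewed_neg)

case = 5.0  # the same new case, judged by both learned models
print(f"balanced sample: threshold {t_balanced:.2f} -> "
      f"{'accept' if case > t_balanced else 'reject'}")
print(f"skewed sample:   threshold {t_skewed:.2f} -> "
      f"{'accept' if case > t_skewed else 'reject'}")
```

Neither decision is more "moral" than the other; each simply reflects how its learning sample was built.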
MACHINE LEARNING.–
Machine learning concerns the design, analysis, development and implementation of methods that allow a machine (in the broadest sense) to evolve through a systematic process and thus perform tasks that are difficult or impossible to accomplish by more traditional algorithmic means. The algorithms used allow, to a certain extent, a computer-controlled system (possibly a robot) or a computer-assisted system to adapt its analyses and response behaviors based on empirical data drawn from a database or from sensors.
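A minimal sketch of this adaptive behavior (our own illustration in plain Python; the sensor readings are simulated and the adaptation rate is an arbitrary assumption) is an online update rule: the system revises its internal estimate with each new empirical observation instead of following a fixed, pre-programmed response:

```python
import random

# Online adaptation: the system maintains a running estimate of a
# sensor's normal level and revises it with each new observation.
random.seed(42)
estimate = 20.0      # initial belief about the normal level
ALPHA = 0.1          # adaptation rate

for step in range(60):
    # Simulated sensor: the environment drifts upward after step 30.
    level = 20.0 if step < 30 else 25.0
    reading = random.gauss(level, 1.0)

    if abs(reading - estimate) > 3.0:
        print(f"step {step}: reading {reading:.1f} deviates from "
              f"estimate {estimate:.1f}")

    # Exponential moving average: the response behavior shifts with
    # the empirical data instead of being fixed by the programmer.
    estimate += ALPHA * (reading - estimate)

print(f"final adapted estimate: {estimate:.1f}")
```

When the simulated environment drifts, the system first flags the deviation and then adapts its notion of "normal", which is the essence of learning from empirical data rather than from pre-established scenarios.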
In our view, adopting machine learning is no longer merely useful, but a necessity. In light of the digital transition and the "war of intelligences" (Alexandre 2017), companies will thus undergo a major transformation and will invest in AI applications in order to:
– increase human expertise via virtual assistance programs;
– optimize certain products and services;
– bring new perspectives in R&D through the evolution of self-learning systems.
AI therefore holds great promise, but also raises strong fears, hazards and dangers that must be corrected or even eliminated, to ensure an implementation that complies with the legal framework, moral values, ethical principles and the common good. The conflicts in question can be very varied. Indeed, machines such as robotic assistants are ultimately ignorant of the concepts of good and evil: they need to be taught everything. Autonomous cars are likely to involve us in accidents or dangerous situations. Some conversational agents may insult individuals, give them bad advice, or treat them unkindly.
Thus, even if ethical recommendations currently have little impact on the functional scope of AI and introduce an additional level of complexity into the design of self-learning systems, it will become essential to design and integrate ethical criteria into digital projects related to AI.
Several standards dealing with algorithmic systems, transparency, privacy, confidentiality, impartiality and, more generally, the development of ethical systems have been published by professional associations such as the IEEE (Institute of Electrical and Electronics Engineers) and the IETF (Internet Engineering Task Force).
To this can be added documents focusing on ethical principles related to AI, such as:
– the Asilomar AI Principles, developed at the Future of Life Institute, in collaboration with attendees of the high-level Asilomar conference of January 2017 (hereafter “Asilomar” refers to Asilomar AI Principles, 2017);
– the ethical principles proposed in the Declaration on Artificial Intelligence, Robotics and Autonomous Systems, published by the European Group on Ethics in Science and New Technologies of the European Commission, in March 2018;
– the principles set out by the High-Level Expert Group on AI, via a report entitled “Ethics Guidelines for Trustworthy AI”, for the European Commission, December 18, 2018;
– the Montreal Declaration for AI, developed at the University of Montreal, following the Forum on the Socially Responsible Development of AI of November 2017 (hereafter “Montreal” refers to Montreal Declaration, 2017);
– the best practices in AI of the Partnership on AI, the multi-stakeholder organization – composed of academics, researchers, civil society organizations, and companies building and utilizing AI – that, in 2018, studied and formulated best practices in AI technologies. The objective was to improve public understanding of AI and to serve as an open platform for discussion and engagement on AI and its influences on individuals and society;
– the “five fundamental principles for an AI code”, proposed in paragraph 417 of the UK House of Lords Artificial Intelligence Committee’s report, “AI in the UK: Ready, Willing and Able”, published in April 2018 (hereafter “AIUK” refers to House of Lords, 2018);
– the ethical charter drawn up by the European Commission for the Efficiency of Justice (CEPEJ) on the use of AI in judicial systems and their environment. It is the first European text setting out ethical principles relating to the use of AI in judicial systems (see Appendix 1);
– the ethical principles of Luciano Floridi et al. in their article entitled “AI4People – An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations”, Minds and Machines, December 2018;