The former promotes a less energy- and resource-hungry artificial intelligence, with the goal of reducing AI's own carbon footprint. To achieve this, developers can adopt simpler models, limit the number of training runs, or strike a balance between technical performance and energy consumption.
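That performance/consumption balance can be made concrete by measuring both accuracy and training cost before choosing a model. The sketch below is purely illustrative: the dataset is synthetic, the two model sizes are arbitrary, and wall-clock training time is used as a rough proxy for energy consumption.

```python
# Toy frugal-AI check: compare a small and a large model on
# accuracy vs training time (time used as a crude energy proxy).
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = {}
for name, n_trees in [("small", 10), ("large", 500)]:
    start = time.perf_counter()
    clf = RandomForestClassifier(n_estimators=n_trees, random_state=0).fit(X_tr, y_tr)
    results[name] = {"accuracy": clf.score(X_te, y_te),
                     "train_seconds": time.perf_counter() - start}
```

If the large model's accuracy gain is marginal, the frugal choice is the small one; in practice, dedicated tools such as energy meters or carbon trackers would replace the timing proxy.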
The latter concept, "AI for Green", describes artificial intelligence developed for sustainable purposes and environmental protection: optimizing vehicle routes to reduce fuel consumption, predicting extreme climate events and anticipating their impacts, etc. These applications of artificial intelligence can therefore facilitate the sustainable transition across most economic sectors.
Both concepts ultimately address the same pressing need to contribute to carbon emissions reduction efforts. In 2022, it is crucial for AI algorithm creators to integrate them: first by reflecting on the purpose and usefulness of each algorithm, then by pushing for frugal AI during development and production.
From a successful POC to full-scale production
Although companies are gaining in maturity when it comes to developing Machine Learning algorithms, the number of algorithms actually reaching production remains low: the share of POCs that make it to industrialization is small, or even zero for some companies. To accelerate the industrialization of algorithms, the major digital players have gradually adapted DevOps techniques to Machine Learning, creating MLOps.
Bringing a Machine Learning model into production requires several steps that must be considered as early as the model design stage in order to integrate it into the existing infrastructure and processes. Among the elements necessary for a model to function properly in production are data collection and cleaning, training automation, experiment tracking, deployment and model drift monitoring.
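The steps above can be sketched in a few lines. This is a deliberately minimal illustration, not a production pipeline: the data is synthetic, the "experiment tracking" is a plain dictionary (real teams would use a tool such as MLflow), and the drift check is a simple comparison of feature means.

```python
# Minimal MLOps-style sketch: data prep, training, experiment
# tracking, and a naive drift check on incoming data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 1. Data collection and cleaning (synthetic data stands in for real sources)
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2. Automated training
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 3. Experiment tracking: record the metrics of each run
run_log = {"run_id": 1, "accuracy": model.score(X_test, y_test)}

# 4. Drift monitoring: how far have incoming feature means moved
#    from the training distribution, in training-std units?
def drift_score(X_ref, X_new):
    shift = np.abs(X_new.mean(axis=0) - X_ref.mean(axis=0))
    return float((shift / X_ref.std(axis=0)).mean())

X_drifted = X_test + 2.0  # simulated distribution shift
score_ok, score_drift = drift_score(X_train, X_test), drift_score(X_train, X_drifted)
```

A real deployment would trigger retraining or alerting when the drift score crosses a threshold chosen during the design phase.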
As already identified last year, MLOps continues to be a business opportunity for companies: the technical challenges involved, and the scarcity of the data-engineering and DevOps skills required, must be taken into account when putting a Machine Learning model into production. The increase in customer demand for AI in production is notable across all industries, and this trend is expected to accelerate in 2022.
Prevent and protect AI solutions in production
The growing number of AI solutions put into production, and hence accessible to users, leads to an increase in attacks and misuse. These attacks can mislead a model by altering its input: for example, a binary text classifier may label the sentence "eat a child" as "bad" while "eat a child because I am very hungry" might be labeled "good". In addition, the algorithm itself may not be secured: it may then reveal the data it was trained on, with severe consequences when personal data is involved.
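The failure mode in the sentence example can be reproduced with a toy model. The keyword-scoring "classifier" below is purely illustrative (the word lists are invented for the demonstration), but it shows the underlying mechanism: padding a harmful input with innocuous context shifts the score past the decision boundary.

```python
# Toy illustration of an evasion-style attack on a naive text
# classifier: added benign context flips the predicted label.
# The word lists and scoring rule are illustrative assumptions.
BAD_WORDS = {"eat", "child"}
GOOD_WORDS = {"because", "very", "hungry"}

def classify(sentence: str) -> str:
    tokens = sentence.lower().split()
    score = sum(t in GOOD_WORDS for t in tokens) - sum(t in BAD_WORDS for t in tokens)
    return "good" if score >= 0 else "bad"

print(classify("eat a child"))                           # -> bad
print(classify("eat a child because I am very hungry"))  # -> good
```

Real models are attacked with subtler perturbations, but the principle is the same: the decision depends on surface features an adversary can manipulate.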
Three measures are recommended to deal with such threats: (1) monitor the state of the art to identify new attacks and stay aware of the issue, (2) when building an AI product, include the identification of potential threats and security constraints (the model, the infrastructure hosting the solution, access types, etc.) as early as the framing phase, and (3) during model development, use training-data protection techniques such as differential privacy or knowledge distillation. Finally, a further preventive measure is to systematically apply robustness tests to identify the vulnerabilities of the solution before it is deployed in production.
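To make measure (3) concrete, the basic building block of differential privacy is the Laplace mechanism: before a statistic computed on sensitive data is released, noise calibrated to the statistic's sensitivity and to a privacy budget epsilon is added. The sketch below uses invented data and an illustrative epsilon; real systems rely on vetted libraries rather than hand-rolled noise.

```python
# Sketch of the Laplace mechanism, the basic building block of
# differential privacy. Data and epsilon are illustrative.
import math
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with Laplace(sensitivity / epsilon) noise added."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5
    # Inverse-CDF sampling of a Laplace distribution
    noise = -scale * (1 if u >= 0 else -1) * math.log(1 - 2 * abs(u))
    return true_value + noise

ages = [34, 45, 29, 52, 41]            # hypothetical sensitive records
true_mean = sum(ages) / len(ages)
# The mean of n values bounded in [0, 100] has sensitivity 100 / n
private_mean = laplace_mechanism(true_mean, sensitivity=100 / len(ages), epsilon=1.0)
```

The released value is close to the true mean but no longer reveals whether any single individual's record was present, within the epsilon guarantee.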
Keeping all these parameters in mind, it becomes clear that it is highly beneficial to anticipate the security and protection of artificial intelligence algorithms before they are deployed in production: this helps avoid deviant model behavior and the leakage of confidential data.
Embracing transparent artificial intelligence to build trust
In 2022, trustworthy AI moves beyond a set of theoretical values and concepts to become an operational lever that AI creators must fully integrate into the design of the algorithms they develop. Trustworthy artificial intelligence by design requires integrating, throughout the lifecycle, model explainability, decision interpretability, the detection and management of bias, and the possibility of human intervention (human in the loop).
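One common model-agnostic way to approach explainability is permutation importance: shuffle one feature at a time and measure how much the model's accuracy drops, revealing which inputs the decision actually relies on. The sketch below is a minimal illustration on synthetic data, not a full interpretability workflow.

```python
# Minimal permutation-importance sketch: shuffling a feature breaks
# its link to the target; the accuracy drop measures its importance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=4, n_informative=2,
                           n_redundant=0, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)
baseline = model.score(X, y)

importances = []
for j in range(X.shape[1]):
    X_perm = X.copy()
    rng.shuffle(X_perm[:, j])                      # destroy feature j's signal
    importances.append(baseline - model.score(X_perm, y))
```

Features whose permutation barely moves the score contribute little to the decision; large drops flag the inputs that should be scrutinized for bias.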
For companies to meet the challenge of operationally embedding these values, employee acculturation and training are key, as responsible artificial intelligence is becoming a pillar of the European digital economy.
The transition from black-box AI to transparent and explainable AI is a win-win situation. It is necessary not only to gain and keep the trust of algorithm users, but also to protect companies' image, avoid releasing deviant algorithms onto the market and, finally, anticipate future regulations.
Driven by the European Union, which is leading the way with its AI Act, the design and use of artificial intelligence solutions should naturally shift this year towards responsible, so-called "trustworthy" AI that is operational enough to meet both user demand and upcoming regulations.
Beyond a trend, the emergence of several reference frameworks and standards (the LNE's AI certification, Labelia's trustworthy AI label...) suggests that this AI will become the standard at the European level... and soon worldwide? This week, the American Business Roundtable, which represents 230 companies from all sectors, asked the Biden administration to establish rules along these lines. In 2022, the number of certified companies should therefore increase.
Written by: Marie COUVE, José SANCHEZ and Ismail ERRADI @AXIONABLE