SafeAI - Artificial Intelligence Under Performance and Safety Review
Artificial intelligence (AI) holds enormous potential. For users and developers, however, performance is not the only critical factor; security is also paramount. That is why AI systems, like technical equipment or machinery, need to be rigorously assessed for risks and vulnerabilities.
This involves examining the robustness and security of the models and algorithms employed. The process also includes testing the underlying mathematical methods and assessing the transparency and traceability of the results. Beyond the model itself, it is essential to ensure that the use and further processing of the data remain strictly within the permissible framework.
SafeAI, however, is not merely a technical approach; ethical considerations also play a significant role. Training data, for example, is checked for possible bias in order to avoid discriminatory outcomes. After all, an AI model is only as good as the data it was trained on.
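To illustrate what such a bias check can look like in practice (a minimal sketch; the records, group names, and threshold below are purely hypothetical), one simple step is to compare the rate of positive labels across groups defined by a protected attribute:

# Minimal sketch of a training-data bias check (hypothetical records and
# threshold). A large gap in positive-label rates between groups can
# indicate bias that would propagate into the trained model.
from collections import defaultdict

# Hypothetical training records: (protected_group, label)
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
for group, label in records:
    counts[group][0] += label
    counts[group][1] += 1

rates = {g: pos / total for g, (pos, total) in counts.items()}
print("Positive-label rate per group:", rates)

# Demographic-parity-style gap between the most and least favored group.
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:  # illustrative threshold, not a normative value
    print(f"Rate gap of {gap:.2f} exceeds threshold -- review data for bias.")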
Regulatory requirements are also taken into account. Although AI-specific legislation is still in its early stages, it is steadily evolving. Notably, the European Union recently adopted the AI Act, the first transnational framework aimed at ensuring the safe and trustworthy use of artificial intelligence.
An AI Seal of Approval to Enhance Trust and Acceptance
As regulation expands, more certifications and seals of approval are expected to follow, strengthening public trust and acceptance. An AI seal of approval could function similarly to the TÜV seal in the automotive industry, signaling that an AI system has undergone extensive testing and been deemed safe.
A significant milestone in establishing new standards is DIN SPEC 92005, in whose development IABG played a key role. The specification addresses the error susceptibility of AI systems: it describes how to analyze and quantify the uncertainties that arise when machine learning and AI are used, so that targeted optimization measures can be derived from the findings.
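To give a flavor of what quantifying such uncertainties can involve (a generic sketch, not the procedure defined in DIN SPEC 92005), one widely used approach estimates predictive uncertainty from the disagreement within an ensemble of models:

# Generic sketch of ensemble-based uncertainty quantification
# (illustrative only; not the method prescribed by DIN SPEC 92005).
import numpy as np

rng = np.random.default_rng(seed=0)

# Pretend we trained 5 models; here each "model" is a slightly perturbed
# linear fit standing in for independently trained networks.
def make_model(noise):
    slope = 2.0 + noise
    return lambda x: slope * x + 1.0

models = [make_model(rng.normal(scale=0.1)) for _ in range(5)]

x = np.linspace(0.0, 10.0, 5)
predictions = np.stack([m(x) for m in models])  # shape: (n_models, n_points)

mean = predictions.mean(axis=0)  # ensemble prediction
std = predictions.std(axis=0)    # model disagreement as uncertainty estimate

for xi, mu, sigma in zip(x, mean, std):
    print(f"x={xi:4.1f}  prediction={mu:6.2f}  uncertainty=±{sigma:.2f}")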
AI in the Public Sector: Safety and Robustness as Top Priorities
Recognizing forecasting uncertainties is becoming increasingly important as AI systems enter more and more everyday situations. Autonomous driving is a prime example: the AI must reliably identify the course of the road, detect other road users nearby, and respond appropriately to construction zones. Extensive testing, trials, and validation are necessary to ensure that the AI can navigate safely under adverse conditions such as rain or darkness.
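A simplified sketch of such a validation step (the model, data, and corruptions below are toy stand-ins, not an actual perception stack) measures how accuracy degrades when inputs are darkened or noised to mimic adverse conditions:

# Simplified robustness check: compare a model's accuracy on clean inputs
# versus inputs corrupted to mimic adverse conditions (added noise ~ rain,
# reduced brightness ~ darkness). All components are toy stand-ins.
import numpy as np

rng = np.random.default_rng(seed=1)

def model(images):
    # Toy "classifier": predicts class 1 if mean brightness exceeds 0.5.
    return (images.mean(axis=(1, 2)) > 0.5).astype(int)

# Hypothetical dataset: bright images are class 1, dark images class 0.
images = rng.uniform(0.0, 1.0, size=(200, 8, 8))
labels = (images.mean(axis=(1, 2)) > 0.5).astype(int)

def accuracy(x):
    return (model(x) == labels).mean()

darkened = np.clip(images * 0.6, 0.0, 1.0)  # simulated darkness
noisy = np.clip(images + rng.normal(0, 0.2, images.shape), 0.0, 1.0)  # "rain"

print(f"clean accuracy:    {accuracy(images):.2f}")
print(f"darkness accuracy: {accuracy(darkened):.2f}")
print(f"noise accuracy:    {accuracy(noisy):.2f}")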
There are also numerous potential applications in the public sector, as a recent case demonstrates: AI-supported analysis in emergency services can significantly improve predictions of expected deployment volumes. In the district of Marburg-Biedenkopf, for instance, approximately 13,700 operations had to be dispatched in 2022 alone. In response, the district has partnered with IABG on a project to optimize deployment planning through AI. For such AI systems, safety and robustness remain absolute priorities: they must reliably detect anomalies, shifts in event patterns, and trends while minimizing the risk of manipulation.
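As a loose sketch of the underlying idea (the call volumes and threshold here are invented, not data from the Marburg-Biedenkopf project), a simple baseline flags days whose deployment volume deviates strongly from a rolling average:

# Minimal anomaly-detection sketch for daily deployment volumes
# (invented numbers; not project data). Flags days that deviate
# strongly from the rolling mean of the preceding window.
daily_calls = [38, 41, 39, 44, 40, 37, 42, 71, 39, 43, 40, 38]

window = 5
for day in range(window, len(daily_calls)):
    history = daily_calls[day - window:day]
    mean = sum(history) / window
    var = sum((v - mean) ** 2 for v in history) / window
    std = var ** 0.5 or 1.0  # guard against division by zero on flat history
    z = (daily_calls[day] - mean) / std
    flag = "  <-- anomaly" if abs(z) > 3 else ""
    print(f"day {day:2d}: calls={daily_calls[day]:3d}  z={z:+.1f}{flag}")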
On the Safe Side from the Start with IABG's "SafeAI Kit"
The demand for AI applications is growing, driven in part by the shortage of skilled workers. Many industries and public administrations face a lack of qualified personnel, making broad AI support essential for maintaining services.
IABG not only tests existing AI applications and models but also supports the development process with its "SafeAI Kit." This tool provides early-stage insights into the robustness and stability of the methods used, highlighting areas where adjustments may be needed. It is a valuable resource for developing new applications, as safe AI is not merely a technical goal but a necessity for the future.