EU Funds SAFEXPLAIN to Push CAIS in Automotive
February 15, 2023
Barcelona. The European Union has funded the SAFEXPLAIN (Safe and Explainable Critical Embedded Systems based on AI) project, which began in October 2022. The initiative lays the groundwork for Critical Autonomous AI-based Systems (CAIS) that satisfy functional safety requirements while delivering the real-time responses needed for safer edge applications. The project is scheduled to run for three years and brings together a consortium of six partners from academia and industry.
CAIS are becoming the standard across mobility industries such as rail, automotive, and space. A standardized approach to CAIS would support these growing technologies and expand research aimed at accident-proofing our roads, rails, and rockets. Statistics cited by the project suggest that reliable and efficient CAIS could prevent up to 90% of vehicle accidents per year, with reductions of nearly 80% across differing vehicle types.
A crucial component of any CAIS software function is Deep Learning (DL), yet a structural gap separates DL solutions from Functional Safety (FUSA) constraints. The transparency that certification demands (explainability and traceability) fits poorly with data-driven DL software, which resists verification through traditional pass/fail testing. SAFEXPLAIN plans to rethink this predicament by providing a customizable approach to the certification of DL-based CAIS.
Jaume Abella, SAFEXPLAIN coordinator, says, “This project aims to rethink FUSA certification processes and DL software design to set the groundwork for how to certify DL-based fully autonomous systems of any type beyond very specific and non-generalizable cases existing today.”
SAFEXPLAIN will conduct three real-world use cases that demonstrate the advantages of its technology for various vehicle safety standards and integrate FUSA-aware DL solutions. According to SAFEXPLAIN, “To benefit wider groups of society, the technologies developed by the project will be integrated into an industrial toolset prototype. Various IP and implementations will be available open source, along with specific practical examples of their use, to grant end-users the tools to develop those applications.”
*(Editor’s Note: SAFEXPLAIN (Safe and Explainable Critical Embedded Systems based on AI) is a HORIZON Research and Innovation Action financed under grant agreement 101069595. The project began on 1 October 2022 and will end in September 2025. It is carried out by an interdisciplinary consortium of six partners coordinated by the Barcelona Supercomputing Center (BSC): three research centers, RISE (Sweden; AI expertise), IKERLAN (Spain; FUSA and railway expertise), and BSC (Spain; platform expertise), and three CAIS industries, NAVINFO (Netherlands; automotive), AIKO (Italy; space), and EXIDA DEV (Italy; FUSA and automotive).)*