Lab: Collective Systems

New explainability supported by world models for predictions in autonomous driving

NEUPA will help increase the explainability of artificial neural networks (ANNs) and enable their certification in the medium term. We are pursuing two innovative ideas:
(a) We train prediction models for each behavior jointly with the learning ANN. In a second step, these models can be used to explain the behavior of the ANN itself (see the first sketch after this list).
(b) We use methods from statistical physics and collective decision making to analyze neural networks. Mathematical methods will group artificial neurons into larger modules according to their functionality, so that the ANN can be understood module by module (see the second sketch after this list).
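To make idea (a) more tangible, here is a minimal sketch, assuming PyTorch, synthetic data, and a single hypothetical "pedestrian will cross" behavior: a shared backbone is trained jointly with the main driving head and an auxiliary behavior-prediction head, and the behavior prediction is then read out to explain an individual decision. All names and data below are illustrative assumptions, not NEUPA's actual models.

```python
# Minimal, hypothetical sketch of idea (a): joint training of a driving
# decision head and a behavior-prediction head on a shared backbone.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic scene features; labels: main task = "brake yes/no",
# auxiliary behavior = "pedestrian will cross yes/no" (both invented here).
X = torch.randn(512, 16)
y_main = (X[:, 0] + X[:, 1] > 0).long()
y_behavior = (X[:, 0] > 0).long()

backbone = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
main_head = nn.Linear(32, 2)       # the driving decision we want to explain
behavior_head = nn.Linear(32, 2)   # the behavior prediction model (idea a)

params = (list(backbone.parameters()) + list(main_head.parameters())
          + list(behavior_head.parameters()))
opt = torch.optim.Adam(params, lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Train both heads together, so the behavior model shares the ANN's features.
for _ in range(200):
    z = backbone(X)
    loss = loss_fn(main_head(z), y_main) + loss_fn(behavior_head(z), y_behavior)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Second step: read the behavior prediction to explain one decision.
z = backbone(X[:1])
decision = main_head(z).argmax(dim=1).item()
p_cross = behavior_head(z).softmax(dim=1)[0, 1].item()
print(f"decision={decision} (1=brake), predicted crossing probability={p_cross:.2f}")
```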
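For idea (b), the second sketch shows one possible grouping criterion, assuming NumPy, scikit-learn (1.2 or newer), and random activations standing in for a real network: neurons whose activations are strongly correlated are clustered into one module. The concrete statistical-physics methods used in NEUPA are not reproduced here.

```python
# Minimal, hypothetical sketch of idea (b): grouping neurons into modules
# by clustering their activation correlations.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)

# Pretend we recorded the activations of 64 hidden neurons over 1000 inputs,
# and make some neurons correlated so that modules exist to be found.
activations = rng.standard_normal((1000, 64))
activations[:, 32:] += 0.8 * activations[:, :32]

# Neurons with highly correlated activations are treated as one module.
corr = np.corrcoef(activations.T)      # (64, 64) neuron-to-neuron similarity
distance = 1.0 - np.abs(corr)          # turn similarity into a distance
modules = AgglomerativeClustering(
    n_clusters=4, metric="precomputed", linkage="average"
).fit_predict(distance)

for m in np.unique(modules):
    print(f"module {m}: {np.sum(modules == m)} neurons")
```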
We want to demonstrate our methods for AI explainability by building a driver assistance system (DAS). This DAS will predict the behavior of pedestrians, cyclists, and e-scooter riders and warn the driver when a dangerous traffic situation is likely to arise (a toy sketch of such warning logic follows below).
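As a toy illustration of the warning logic only, not of NEUPA's actual DAS, the sketch below predicts future positions under a constant-velocity assumption and warns when the predicted paths of the ego vehicle and a vulnerable road user come closer than a threshold. All function names, thresholds, and scenario values are invented for illustration.

```python
# Minimal, hypothetical sketch of a DAS-style warning check.
import numpy as np

def predict_positions(pos, vel, horizon_s=3.0, dt=0.5):
    """Predicted positions over the horizon, assuming constant velocity."""
    steps = np.arange(dt, horizon_s + dt, dt)
    return np.asarray(pos) + np.outer(steps, np.asarray(vel))

def warn_if_dangerous(ego_pos, ego_vel, other_pos, other_vel, min_gap_m=2.0):
    """Warn when the predicted paths come closer than min_gap_m."""
    ego_path = predict_positions(ego_pos, ego_vel)
    other_path = predict_positions(other_pos, other_vel)
    gaps = np.linalg.norm(ego_path - other_path, axis=1)
    return bool(np.min(gaps) < min_gap_m)

# Example: a cyclist approaching the ego vehicle's lane from the right.
print(warn_if_dangerous(ego_pos=[0, 0], ego_vel=[8, 0],
                        other_pos=[20, -6], other_vel=[0, 2.5]))
```

In the project itself, the constant-velocity predictor would presumably be replaced by the learned behavior models described in idea (a).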
At present, ANNs and related methods are hard to certify for autonomous driving. NEUPA can help increase their explainability and thus enable their certification.

Our project partner: IAV GmbH

Official project website: NEUPA