Achieving Predictability in the Execution of Deep Neural Networks in Safety Critical Applications


Daniel Casini, Alessandro Biondi and Giorgio Buttazzo

Institution(s)

Scuola Superiore Sant'Anna, Pisa, Italy

Presentation type

Technical presentation

Abstract

The last decade has been characterized by a significant performance improvement of Deep Neural Networks (DNNs), fostering their adoption in many application fields, such as robotics, industrial control, and autonomous driving. Whenever a DNN is executed in a safety-critical scenario, its computational workload must complete within stringent deadlines. However, state-of-the-art frameworks for executing DNNs are optimized for the average case and do not provide any mechanism to bound worst-case response times. This work fills this gap by accurately characterizing the workload generated by deep neural networks by means of a properly defined task model. In addition, it identifies potential problems in the popular TensorFlow framework and proposes an extension aimed at achieving predictable behavior.
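
As a point of reference (this is not the extension proposed in the presentation), the kind of framework-level nondeterminism the abstract alludes to can already be influenced through TensorFlow's standard threading configuration. The following minimal Python sketch uses the TensorFlow 2.x tf.config.threading API to fix the number of worker threads, which constrains (but does not formally bound) scheduling interference:

    import tensorflow as tf

    # Must be called before any TensorFlow operation runs: the thread pools
    # are created lazily and cannot be resized afterwards.
    tf.config.threading.set_inter_op_parallelism_threads(1)  # run one operator at a time
    tf.config.threading.set_intra_op_parallelism_threads(2)  # cap per-operator worker threads

Settings such as these reduce the variability of per-inference latency, but they do not by themselves provide the worst-case response-time guarantees that the task model and framework extension presented here aim for.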


Additional material

  • Presentation slides: [pdf]
