POMAA

Pareto-Optimal MAchine Learning ASIC

The goal of POMAA is to develop a hybrid ASIC in 22FDX/FDSOI technology, consisting of a RISC-V processor core that supports the software components and an application-specific Machine Learning IP core (ML-IP) that handles the computationally intensive parts of the system, in particular procedures based on deep neural networks/deep learning. The ASIC design exploits the features of FDSOI technology to achieve maximum energy efficiency.
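To illustrate the hybrid division of labor described above, the following minimal sketch shows how an application running on the processor core could keep lightweight pre-processing and control flow in software while handing the compute-intensive forward pass to the accelerator. The `MlIpStub` class and its `infer` method are purely hypothetical placeholders standing in for the ML-IP; they are not part of the actual POMAA design.

```python
class MlIpStub:
    """Stand-in for the ML-IP accelerator (hypothetical; a real system would
    use a memory-mapped driver). Here it just emulates a tiny linear layer
    in software so the example runs."""
    def __init__(self, weights):
        self.weights = weights  # one weight row per output class

    def infer(self, x):
        return [sum(w * v for w, v in zip(row, x)) for row in self.weights]


def classify(sample, ml_ip):
    # Pre-processing and control flow remain software on the RISC-V core ...
    peak = max(abs(v) for v in sample) or 1.0
    normalized = [v / peak for v in sample]
    # ... while the compute-intensive DNN forward pass is offloaded to the ML-IP.
    scores = ml_ip.infer(normalized)
    return scores.index(max(scores))


ml_ip = MlIpStub(weights=[[0.2, -0.1, 0.4], [-0.3, 0.5, 0.1]])
print(classify([1.0, 2.0, -0.5], ml_ip))  # -> index of the predicted class
```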

Duration: 01.11.2019 to 31.10.2020
Grant recipient: University of Bremen
Application Field: Data Management

Project details

A major problem in the use of AI is the large demand for fast and efficient hardware. Current machine learning methods, especially deep neural networks, generally rely on computationally complex numerical procedures. The resulting computational effort is particularly critical when AI or ML methods are to be used in miniaturized and possibly embedded systems, since both the physical size and the energy consumption must then be as low as possible. One possible approach is highly specialized hardware, e.g. an ASIC for machine learning. Given the conflicting requirements, high performance and accuracy on the one hand and low energy consumption on the other, it is not sufficient to optimize the hardware for a single criterion; instead, several criteria must be weighed against each other and a system that is Pareto-optimal with respect to these criteria must be developed. Ideally, such a system offers not only ML-specific functionality but also support for classical software functions, so that it can be used for a variety of tasks.
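The following sketch shows what "Pareto-optimal with respect to several criteria" means in practice: among a set of candidate designs, only those are kept that no other candidate beats in every criterion at once. The two criteria (classification error and energy per inference) and the example design points are hypothetical and only serve to illustrate the selection rule.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Candidate:
    name: str
    error: float      # classification error (lower is better)
    energy_mj: float  # energy per inference in mJ (lower is better)


def pareto_front(candidates: List[Candidate]) -> List[Candidate]:
    """Keep only candidates that no other candidate dominates in both criteria."""
    front = []
    for c in candidates:
        dominated = any(
            o.error <= c.error and o.energy_mj <= c.energy_mj
            and (o.error < c.error or o.energy_mj < c.energy_mj)
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return front


# Three hypothetical design points: the third is worse in both criteria
designs = [
    Candidate("small_net", error=0.08, energy_mj=1.2),
    Candidate("large_net", error=0.05, energy_mj=4.0),
    Candidate("wasteful",  error=0.09, energy_mj=4.5),
]
print([c.name for c in pareto_front(designs)])  # -> ['small_net', 'large_net']
```

Neither of the two remaining designs dominates the other: one trades accuracy for lower energy, the other the reverse, which is exactly the trade-off the project targets.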

To develop the best possible hardware architecture for an energy-efficient AI system, both the hardware and the architecture of the neural layers must be optimized. A hybrid, two-way optimization promises the best energy efficiency at the required (classification) performance. To this end, POMAA combines both optimizations in one loop. Based on an initial analysis of the underlying data and the competition task, neural network architectures are generated and selected automatically. These architectures can vary greatly in their layers (type, number of neurons, etc.) and provide the required classification performance for the data with a minimal number of parameters; in certain cases, NN layers can be replaced by signal- or preprocessing methods in this phase of the ML design. The resulting set of architectures is synthesized, and their energy efficiency and load profiles are determined by simulation. Based on these results, a new objective function is adapted automatically and optimized with ML methods specifically designed for expensive objective functions, in order to derive and re-synthesize new architectures. This loop is repeated until the lowest achievable energy consumption is reached.
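A minimal sketch of such a loop is shown below, under strong simplifying assumptions: the search space, the `expensive_evaluate` placeholder (standing in for synthesis plus power simulation) and the nearest-neighbour surrogate are all hypothetical and only illustrate the idea of optimizing an expensive objective via a cheap surrogate model; the actual POMAA flow is not specified here.

```python
import random

# Hypothetical search space: widths of two layers of a small network.
SEARCH_SPACE = [(w1, w2) for w1 in (8, 16, 32, 64) for w2 in (8, 16, 32)]


def expensive_evaluate(arch):
    """Placeholder for the costly step: synthesize the architecture and
    simulate its load profile to obtain energy per inference (faked here)."""
    w1, w2 = arch
    return 0.05 * w1 + 0.03 * w2 + random.uniform(0.0, 0.5)  # pseudo-energy in mJ


def surrogate_predict(arch, observed):
    """Tiny surrogate: average of the two nearest evaluated architectures
    (stands in for a proper model of the expensive objective)."""
    nearest = sorted(
        observed,
        key=lambda kv: abs(kv[0][0] - arch[0]) + abs(kv[0][1] - arch[1]),
    )[:2]
    return sum(energy for _, energy in nearest) / len(nearest)


random.seed(0)
# Seed the loop with a few expensively evaluated architectures.
observed = [(a, expensive_evaluate(a)) for a in random.sample(SEARCH_SPACE, 3)]

for _ in range(5):  # adapt the surrogate, then evaluate the most promising design
    evaluated = {arch for arch, _ in observed}
    candidates = [a for a in SEARCH_SPACE if a not in evaluated]
    best_guess = min(candidates, key=lambda a: surrogate_predict(a, observed))
    observed.append((best_guess, expensive_evaluate(best_guess)))

best_arch, best_energy = min(observed, key=lambda kv: kv[1])
print(f"best architecture {best_arch} with ~{best_energy:.2f} mJ per inference")
```

The key point the sketch captures is that the expensive step (synthesis and simulation) is invoked only for architectures that the cheap, continually re-fitted objective model deems promising, rather than for the whole search space.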
