Student Projects

Highlights (2018-19)

Machine Learning and Approximate Computing

Approximate computing is a promising approach to the energy-efficient design of digital systems in many domains, such as Machine Learning (ML). The use of specialized data formats in Deep Neural Networks (DNNs), the dominant class of ML algorithms, could allow substantial improvements in processing time and power efficiency.

The focus is on applying variable precision formats to ML algorithms. These formats make it possible to set different precisions for different operations and to tune the precision of individual layers of the neural network to obtain higher power efficiency.
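
As a minimal illustration, not tied to any specific hardware format, the Python sketch below emulates a variable precision scheme by rounding each layer's weights and activations to a configurable number of fraction bits; the network, the per-layer bit widths, and all values are hypothetical.

    import numpy as np

    def quantize(x, frac_bits):
        """Round x to a fixed-point grid with the given number of fraction bits."""
        scale = 2.0 ** frac_bits
        return np.round(x * scale) / scale

    # Hypothetical per-layer precisions (fraction bits) for a tiny 2-layer network.
    LAYER_BITS = {"fc1": 8, "fc2": 4}

    def relu(x):
        return np.maximum(x, 0.0)

    def forward(x, weights, layer_bits):
        """Forward pass where each layer uses its own (approximate) precision."""
        for name, w in weights.items():
            bits = layer_bits[name]
            x = quantize(relu(quantize(x, bits) @ quantize(w, bits)), bits)
        return x

    rng = np.random.default_rng(0)
    weights = {"fc1": rng.normal(size=(16, 32)), "fc2": rng.normal(size=(32, 10))}
    x = rng.normal(size=(1, 16))

    exact = forward(x, weights, {"fc1": 24, "fc2": 24})   # near full precision
    approx = forward(x, weights, LAYER_BITS)              # reduced precision
    print("max abs error:", np.abs(exact - approx).max())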

Machine Learning Approximate Architectures for Tactile Data Classification

Tactile data classification uses Machine Learning based on a Tensorial Kernel approach, whose computational complexity is very high. Several alternatives can be evaluated for an efficient hardware implementation.
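
To give an idea of where the complexity comes from, the sketch below implements one common formulation of a tensorial kernel for third-order tensors (mode-wise unfolding, truncated SVD, and Gaussian factors on the subspace distances); the structure is representative, but the exact formulation used for tactile classification may differ, and the tensor sizes are placeholders.

    import numpy as np

    def unfold(T, mode):
        """Matricize a 3-way tensor along the given mode."""
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def mode_subspace(T, mode, rank):
        """Leading left singular vectors of the mode-n unfolding."""
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        return U[:, :rank]

    def tensorial_kernel(X, Y, rank=2, gamma=1.0):
        """Product over modes of Gaussian kernels on subspace distances."""
        k = 1.0
        for mode in range(3):
            Ux = mode_subspace(X, mode, rank)
            Uy = mode_subspace(Y, mode, rank)
            # Chordal distance between the two mode-n subspaces.
            d2 = np.linalg.norm(Ux @ Ux.T - Uy @ Uy.T, "fro") ** 2
            k *= np.exp(-gamma * d2)
        return k

    rng = np.random.default_rng(0)
    X = rng.normal(size=(8, 8, 5))   # e.g., taxel rows x columns x time samples
    Y = rng.normal(size=(8, 8, 5))
    print(tensorial_kernel(X, X), tensorial_kernel(X, Y))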

Approximate Operations and Operands for Deep Neural Networks

  • Studying and assessing architectures for the efficient hardware implementation of ML in order to select an architecture suitable for the targeted application.
  • Modeling hardware architectures with specialized data formats in high-level languages (e.g., Python) to profile complex ML algorithms and evaluate the impact on computational efficiency (a sketch is given after this list).
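
As an example of the kind of high-level modeling meant in the second item, the sketch below counts the multiply-accumulate operations of a small fully connected network and weighs them with per-operation energy figures that depend on the chosen precision; the layer sizes and energy numbers are placeholders, not measured values.

    # Hypothetical energy per multiply-accumulate (pJ) for different word lengths.
    ENERGY_PJ_PER_MAC = {32: 4.6, 16: 1.1, 8: 0.3}   # placeholder values

    # Layer sizes of a small fully connected network: (inputs, outputs, bits).
    LAYERS = [
        ("fc1", 784, 256, 8),
        ("fc2", 256, 128, 8),
        ("fc3", 128, 10, 16),
    ]

    total_macs, total_energy = 0, 0.0
    for name, n_in, n_out, bits in LAYERS:
        macs = n_in * n_out
        energy = macs * ENERGY_PJ_PER_MAC[bits]
        total_macs += macs
        total_energy += energy
        print(f"{name}: {macs:>8d} MACs at {bits:2d} bits -> {energy/1e3:8.1f} nJ")

    print(f"total: {total_macs} MACs, {total_energy/1e3:.1f} nJ per inference")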

Internet-of-Things and Edge Computing

Increasingly sophisticated and computationally intensive algorithms are required for applications running on mobile devices and embedded processors constituting the Internet-of-Things (IoT). These applications include audio and image recognition, machine learning, and security. Today, heavy computations are transferred to servers (the cloud), but in the paradigm of "Edge" computing it is desirable to perform the computation locally to decrease latency and network traffic and to reduce the overall energy footprint.

In this context, Application Specific Processors (ASPs) are used to accelerate software applications in portable systems and at the Edge. FPGA-based accelerators can be designed and fine-tuned to exactly match the algorithm, and FPGAs can be reconfigured at run time, making the system adaptable to the specific workload.

Design of a Run Time Manager for FPGA-based Acceleration

Dynamic partial reconfiguration makes it possible to re-program the hardware circuits implemented on the FPGA at run time. To support this on-the-fly reconfiguration, the main processor needs to run a run-time manager, called a Hypervisor, that interfaces to the operating system and to the hardware drivers.

The goal of the project is to implement parts of this Hypervisor for an ARM processor.
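
As a rough sketch of one small piece of such a run-time manager, the Python code below swaps partial bitstreams through the Linux FPGA manager sysfs interface; the device path, the firmware and flags attributes (which follow the Xilinx FPGA manager convention), and the accelerator names are assumptions about a typical Zynq/Linux setup, not a description of an existing implementation.

    import os

    FPGA_MGR = "/sys/class/fpga_manager/fpga0"   # assumed device path
    FIRMWARE_DIR = "/lib/firmware"               # bitstreams must be placed here

    # Hypothetical mapping from accelerator name to partial bitstream file.
    ACCELERATORS = {
        "fir_filter": "pr_fir_filter.bin",
        "aes_engine": "pr_aes_engine.bin",
    }

    def load_accelerator(name, partial=True):
        """Ask the FPGA manager to program the reconfigurable region."""
        bitstream = ACCELERATORS[name]
        if not os.path.exists(os.path.join(FIRMWARE_DIR, bitstream)):
            raise FileNotFoundError(bitstream)
        # Flag value 1 = partial bitstream (Xilinx FPGA manager convention).
        with open(os.path.join(FPGA_MGR, "flags"), "w") as f:
            f.write("1" if partial else "0")
        with open(os.path.join(FPGA_MGR, "firmware"), "w") as f:
            f.write(bitstream)

    def fpga_state():
        """Report the current state of the FPGA manager (e.g., 'operating')."""
        with open(os.path.join(FPGA_MGR, "state")) as f:
            return f.read().strip()

    if __name__ == "__main__":
        load_accelerator("fir_filter")
        print("FPGA state:", fpga_state())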

Accelerator architectures for PCI-Express connected FPGAs

Acceleration oriented towards server (data-center) and Edge computing.

Examples: cryptography engines, industrial protocol translation.

Accelerator architectures for Zynq FPGAs

Acceleration oriented mostly towards small-scale computing and the IoT.

Examples: Malware detection, classification algorithms, image and audio processing.

Energy Efficient Computing

The main objective is to reduce the power dissipation in digital processors without penalizing performance, thereby reducing the computation's energy footprint and increasing the system's reliability.

Design of accelerators for financial applications

  • Monte Carlo simulators (a software reference model is sketched after this list)
  • Decimal accelerators for accounting applications
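
As a software reference for the first item, the sketch below prices a European call option by Monte Carlo simulation of geometric Brownian motion; an accelerator would replace the inner path-generation loop, and the parameters shown are only illustrative.

    import numpy as np

    def mc_european_call(s0, strike, rate, sigma, maturity, n_paths, seed=0):
        """Monte Carlo price of a European call under geometric Brownian motion."""
        rng = np.random.default_rng(seed)
        z = rng.standard_normal(n_paths)
        # Terminal asset price for each simulated path.
        s_t = s0 * np.exp((rate - 0.5 * sigma**2) * maturity
                          + sigma * np.sqrt(maturity) * z)
        payoff = np.maximum(s_t - strike, 0.0)
        # Discounted average payoff is the price estimate.
        return np.exp(-rate * maturity) * payoff.mean()

    # Illustrative parameters only.
    print(mc_european_call(s0=100.0, strike=105.0, rate=0.01,
                           sigma=0.2, maturity=1.0, n_paths=1_000_000))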

Energy efficient Floating-Point Units and DSPs

Studying techniques to reduce power dissipation, without penalizing performance, in order to prevent excessive temperature rise and to increase reliability. Targets: Floating-Point units and DSP processors (filters, etc.).
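
As an example of the kind of golden model used when trading datapath precision against power, the sketch below compares a double-precision FIR filter with a version whose samples and coefficients are quantized to a reduced word length; the filter coefficients and bit widths are arbitrary choices for illustration.

    import numpy as np

    def fir(x, coeffs):
        """Direct-form FIR filter (convolution truncated to the input length)."""
        return np.convolve(x, coeffs)[: len(x)]

    def quantize(v, frac_bits):
        """Round to a fixed-point grid with the given number of fraction bits."""
        scale = 2.0 ** frac_bits
        return np.round(v * scale) / scale

    rng = np.random.default_rng(0)
    coeffs = np.array([0.05, 0.2, 0.5, 0.2, 0.05])     # simple low-pass example
    x = rng.uniform(-1.0, 1.0, size=1024)              # test input samples

    reference = fir(x, coeffs)                         # double precision
    approx = fir(quantize(x, 8), quantize(coeffs, 8))  # 8 fraction bits

    err = reference - approx
    print("max error:", np.abs(err).max(),
          "SNR (dB):", 10 * np.log10(np.sum(reference**2) / np.sum(err**2)))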

Tool for semi-automated digital design

Tools that use simple cost functions to evaluate timing, area and power dissipation of small units.
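
A toy version of such a cost function is sketched below: it combines rough delay, area, and power estimates of a few small units into a single weighted score; the unit library, the numbers, and the weights are entirely hypothetical placeholders.

    # Hypothetical characterization of small units (delay ns, area um^2, power mW).
    UNITS = {
        "adder_rca_32": {"delay": 2.1, "area": 420.0, "power": 0.15},
        "adder_cla_32": {"delay": 1.2, "area": 760.0, "power": 0.22},
        "mult_16x16":   {"delay": 3.4, "area": 5100.0, "power": 1.10},
    }

    def cost(unit, w_delay=1.0, w_area=0.001, w_power=5.0):
        """Simple weighted cost: lower is better. Weights set the trade-off."""
        c = UNITS[unit]
        return w_delay * c["delay"] + w_area * c["area"] + w_power * c["power"]

    # Rank the adders for a delay-dominated design point.
    for name in sorted([u for u in UNITS if u.startswith("adder")], key=cost):
        print(f"{name}: cost = {cost(name):.2f}")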

Other Projects

Projects in collaboration with GN ReSound (Hearing Aid company)
(Descriptions by GN ReSound)

Processor Architecture: We are considering architectural changes for our digital signal processor for future chipsets. We would like to model some of these ideas to gauge the benefits and challenges associated with them. Finally, we would like to relate these changes to real implementations in ASIC and FPGA.

Advanced Verification Methodology: We are considering the introduction of more advanced digital verification methodologies, such as SystemVerilog and UVM, into our department. We have already prepared a little in this direction, but we would like to work more with scaffolding, generic models, checkers, etc.

Cycle-wise Accurate Power Profiler: We would like to build a model of our DSP with an understanding of the power cost per instruction. The idea is to characterise the processor, either in simulation or by measurement, to obtain a library of power numbers per instruction. This could then form the basis for a code analyser that can take a program and generate a summary of the power that is consumed.
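
A first step towards such a code analyser could look like the Python sketch below: it assumes a hypothetical table of energy per instruction class and a toy instruction trace, and simply accumulates energy per class; a real tool would obtain the table from characterization in simulation or by measurement and the trace from an instruction-set simulator.

    from collections import Counter

    # Hypothetical energy per instruction class (pJ), to be replaced by
    # characterization data obtained in simulation or by measurement.
    ENERGY_PJ = {"mac": 6.2, "alu": 2.1, "load": 4.8, "store": 5.0, "branch": 1.5}

    def profile(trace):
        """Accumulate energy per instruction class from an instruction trace."""
        counts = Counter(trace)
        per_class = {cls: n * ENERGY_PJ[cls] for cls, n in counts.items()}
        return per_class, sum(per_class.values())

    # Toy trace standing in for the output of an instruction-set simulator.
    trace = ["load", "mac", "mac", "alu", "store", "branch"] * 1000
    per_class, total = profile(trace)
    for cls, energy in sorted(per_class.items(), key=lambda kv: -kv[1]):
        print(f"{cls:>6s}: {energy/1e3:7.1f} nJ")
    print(f" total: {total/1e3:7.1f} nJ")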

Design of Memristor-based Energy Meters

Memristors were theorized more than fifty years ago, but only recently have physical devices exhibiting memristive behavior been fabricated. The main characteristic of memristors is the so-called memristance: the device's resistance varies with the charge that has flowed through it.
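
The sketch below illustrates that relation with the simplified linear ion-drift (HP-style) memristor model, in which the memristance decreases as charge flows through the device; the parameter values are only indicative.

    import numpy as np

    # Indicative parameters of a linear ion-drift memristor model.
    R_ON, R_OFF = 100.0, 16e3          # fully-ON / fully-OFF resistance (ohm)
    Q_D = 1e-4                         # charge needed to switch fully ON (C)

    def memristance(q):
        """Memristance as a function of the net charge that has flowed through."""
        x = np.clip(q / Q_D, 0.0, 1.0)          # normalized internal state
        return R_OFF - (R_OFF - R_ON) * x

    # Drive the device with a constant 1 mA current and integrate the charge.
    dt, i = 1e-3, 1e-3                           # time step (s), current (A)
    t = np.arange(0.0, 0.2, dt)
    q = np.cumsum(np.full_like(t, i)) * dt       # q(t) = integral of i dt
    print("R at t=0:", memristance(q[0]), "ohm;",
          "R at t=end:", memristance(q[-1]), "ohm")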

The goal of this project is to use memristors as current sensors to monitor the operation of a photovoltaic (PV) array. The focus of the work is to use multiple levels of memristance to accurately calculate the energy harvested by the PV array and its power efficiency, and to detect malfunctioning cells.

Prerequisites: Good knowledge of circuit theory and Spice simulation.

Shorter Projects (1-3 months)

HDL generators for soft IP blocks
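
As a minimal illustration of what such a generator could produce, the Python sketch below emits the Verilog for a registered adder of a chosen width; the module structure is a plausible example rather than an existing IP block.

    def gen_adder(width=16, name="adder"):
        """Return Verilog source for a registered adder of the given width."""
        return f"""\
    module {name} #(parameter WIDTH = {width}) (
        input  wire              clk,
        input  wire [WIDTH-1:0]  a,
        input  wire [WIDTH-1:0]  b,
        output reg  [WIDTH:0]    sum
    );
        always @(posedge clk)
            sum <= a + b;
    endmodule
    """

    if __name__ == "__main__":
        with open("adder16.v", "w") as f:
            f.write(gen_adder(width=16, name="adder16"))
        print(gen_adder(width=8, name="adder8"))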

Tool for semi-automated digital design
Tools that use simple cost functions to evaluate timing, area and power dissipation of small units.

Java applets implementing simple arithmetic algorithms
To be used to power websites with simple tools and examples.


Modified by Alberto Nannarelli on Wednesday September 05, 2018 at 19:50