Analog...

Identified Problem

As AI development accelerates, industries need power- and cost-effective methods of computation. General-purpose processors spend a significant portion of their energy fetching data from memory rather than on actual computation. One of the most common, yet computationally most expensive, operations in machine vision and pattern recognition is vector–matrix multiplication (VMM) in large dimensions.

Solution

The following circuit can multiply fixed weights with inputs.

With the same methodology, we can implement vector–matrix multiplication.
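As a purely illustrative sketch (the values and dimensions are our own, not taken from the circuit), the analog VMM can be modeled as Ohm's law per branch and Kirchhoff's current law per output column: each weight acts as a conductance, and the output is the sum of branch currents.

```python
import numpy as np

# Illustrative only: weights as conductances G (units arbitrary),
# inputs as voltages x. Each output current is the KCL sum of
# branch currents G[i][j] * x[i] on that column.
G = np.array([[0.5, 1.0],
              [2.0, 0.25],
              [1.5, 3.0]])      # 3 inputs, 2 outputs
x = np.array([1.0, 2.0, 0.5])   # input voltages

# Column-by-column current summation mirrors the analog array:
y = np.array([sum(G[i][j] * x[i] for i in range(len(x)))
              for j in range(G.shape[1])])
# Identical to the linear-algebra view: y = G^T x
```

The point of the sketch is that the multiply-accumulate happens "for free" in the current summation; no digital adder tree is needed.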

The weights can be tuned using a FET: by changing the gate voltage, we can modulate the channel resistance.
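A minimal first-order model of this tuning, assuming a FET biased in the triode region (the threshold voltage and transconductance parameter below are illustrative assumptions, not measured device values), is that channel conductance grows roughly linearly with gate overdrive:

```python
# Assumed first-order triode model: G = k * (Vgs - Vth) above
# threshold, zero below. Vth and k are illustrative placeholders.
def channel_conductance(vgs, vth=0.7, k=1e-3):
    """Return channel conductance in siemens; 0 below threshold."""
    return k * (vgs - vth) if vgs > vth else 0.0

# Raising the gate voltage raises conductance (lowers resistance),
# which is how the stored weight is programmed:
g_low = channel_conductance(1.2)   # smaller weight
g_high = channel_conductance(1.7)  # larger weight
```

A real device would need a calibrated model (body effect, mobility degradation), but this captures why the gate voltage can serve as the weight control.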

Challenges and Intended Solution

The main challenge with a purely analog implementation is the effect of noise and component mismatch on precision. To address this, we propose a hybrid analog–digital approach that adds a large number of digital values in parallel, with careful consideration of the sources of imprecision in the implementation and their effect on overall system performance.
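To make the precision concern concrete, here is a small Monte Carlo sketch (the 1% mismatch sigma and the 16x16 size are assumptions for illustration): random conductance mismatch perturbs every weight, so the analog result deviates from the ideal product.

```python
import numpy as np

# Assumed 1% (sigma) multiplicative component mismatch on each weight.
rng = np.random.default_rng(0)
W = rng.uniform(0.1, 1.0, size=(16, 16))   # ideal weights
x = rng.uniform(0.0, 1.0, size=16)          # input vector

mismatch = rng.normal(1.0, 0.01, size=W.shape)
y_ideal = W.T @ x
y_real = (W * mismatch).T @ x

# Worst-case relative error across outputs. A hybrid scheme digitizes
# partial sums early so such errors do not accumulate across stages.
rel_err = np.max(np.abs(y_real - y_ideal) / np.abs(y_ideal))
```

Because each output averages many independently mismatched branches, the relative error stays well below the per-component mismatch, which is the statistical headroom the hybrid analog–digital design exploits.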

How Our Model Works

Data can be input serially; this serial data is converted to a 2-to-12-bit digital word based on the enable/select lines S_input 1-n. The digital data is then converted to the analog domain with a modified DAC, with S_input 1-n determining the bit width.

Number of inputs = number of outputs = number of weights

This design allows the serial data bit width to be controlled via the input and output select lines.
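A behavioral sketch of this configurable conversion (the function name, full-scale reference, and MSB-first ordering are our own illustrative assumptions, not the actual circuit interface): the select lines choose the active bit width n, from 2 to 12 bits, and the serial word is scaled to an analog full-scale voltage.

```python
# Hypothetical behavioral model of the modified DAC: the select
# lines pick the bit width n (2..12); the serial word (MSB first)
# is converted to a voltage scaled to vref full scale.
def dac_out(bits, n, vref=1.0):
    """Convert the n most significant serial bits to a voltage."""
    assert 2 <= n <= 12, "select lines allow 2- to 12-bit words"
    code = int("".join(str(b) for b in bits[:n]), 2)
    return vref * code / (2**n - 1)

# Example: a 4-bit word 1010 at 1 V full scale gives 10/15 V.
v = dac_out([1, 0, 1, 0], n=4)
```

Shortening the word trades resolution for conversion speed, which is what the programmable select lines make possible per workload.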


Rough Sketch of Processor

Team Members  

Devadut S Balan – Undergraduate Student

Akhil Raj – Undergraduate Student

M K Varun – Undergraduate Student

Anand S – Undergraduate Student



Description

Our goal is to design an analog accelerated vector multiplier capable of handling matrix multiplications in the analog domain, faster and more power-efficient than available digital alternatives. In neural networks, the majority of computation is dedicated to matrix multiplications, and conventional digital processors handle these less efficiently than analog processors. With Mythic launching their first AI accelerator, delivering comparable performance at roughly ten times lower power consumption, it is clear that analog has a strong place in the future of AI. With intelligent use of a 12-bit DAC and ADC, we can mitigate the problems of analog computation, while retaining its speed advantage over digital computation. IDC predicts that by 2025 there will be 80 billion connected devices, a significant portion of which will be AI-based edge devices. Analog processors are not limited to edge devices; they can also be used in large-scale neural networks, reducing the annual power consumption of these computing clusters.

Version

1.0

Category

processor