Innovation

“The demand for more computing power will never end. The new wave of scientific computing, computer vision, deep learning, and video object recognition requires general-purpose graphics processing units (GP-GPUs) to process huge amounts of data at very high speed.”

To meet these huge computation needs, processors must be not only powerful but also energy efficient.

AzurEngine has developed a Reconfigurable Parallel Processor (RPP) architecture that combines programmability with power efficiency.

TECHNOLOGY

Parallel Computing

Parallel computing is a type of computation in which many calculations or the execution of processes are carried out concurrently.

Computer vision and deep learning algorithms are typical applications with huge amounts of parallelism. Today’s processors compute one vector of data at a time: a new instruction is issued for each vector to be processed. Most CPUs, DSPs, and GPUs are based on vector processing.
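
As a rough illustration (not AzurEngine code), the CUDA kernel below applies one instruction stream to many data elements at once, which is the data-parallel pattern these workloads expose; the kernel and variable names are ours and purely illustrative.

    // Minimal CUDA sketch of data parallelism: one kernel (one
    // instruction stream) is applied to many data elements in parallel
    // instead of looping over them one at a time on a scalar core.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void vector_add(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];  // same instruction, different data per thread
    }

    int main() {
        const int n = 1 << 16;
        const size_t bytes = n * sizeof(float);
        float *a, *b, *c;
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = float(i); }

        vector_add<<<(n + 255) / 256, 256>>>(a, b, c, n);
        cudaDeviceSynchronize();

        printf("c[100] = %f\n", c[100]);  // expect 101.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

Compiled with nvcc, each element is handled by its own thread, so one kernel launch replaces the per-element loop a scalar core would run.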

Reconfigurable Architecture

A typical CPU processes one instruction at a time, and the same processing element (PE) is reused for a new instruction each cycle.

The RPP moves programmability from the time domain to the space domain. Instead of executing one instruction at a time, the RPP distributes instructions across different PEs, making it well suited to programs with massive data parallelism.
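
One rough way to picture this time-to-space mapping, written as a host-only C++ sketch rather than anything from AzurEngine’s toolchain: each “PE” below is configured once with a single operation, and data then streams through the fixed chain instead of a new instruction being fetched every cycle.

    // Conceptual sketch only: a pipeline of "PEs" is configured once,
    // then data flows through it. All names here are illustrative.
    #include <cstdio>

    typedef float (*pe_op)(float);                      // the operation held by one PE

    static float pe_scale(float x) { return x * 2.0f; }          // PE 0: multiply
    static float pe_bias(float x)  { return x + 1.0f; }          // PE 1: add
    static float pe_relu(float x)  { return x > 0 ? x : 0.0f; }  // PE 2: ReLU

    int main() {
        // "Configuration": each instruction is assigned to its own PE.
        pe_op pipeline[] = { pe_scale, pe_bias, pe_relu };
        const int num_pes = sizeof(pipeline) / sizeof(pipeline[0]);

        // "Streaming": only data moves; the instructions stay in place.
        const float data[] = { -3.0f, -1.0f, 0.5f, 2.0f };
        for (float x : data) {
            float v = x;
            for (int p = 0; p < num_pes; ++p) v = pipeline[p](v);
            printf("%6.2f -> %6.2f\n", x, v);
        }
        return 0;
    }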

High Efficiency

Today’s computers build billions of transistors into a single chip. Further growth in complexity is limited by the silicon technology node, chip size, and power dissipation.

With innovations from AzurEngine, this high efficiency becomes possible with the Reconfigurable Parallel Processor (RPP) architecture: an RPP chip is designed to be as efficient as an application-specific integrated circuit (ASIC).

Applications

The Reconfigurable Parallel Processor (RPP) architecture can be used in many applications, such as:

  • Surveillance cameras
  • Autonomous driving components
  • Medical imaging
  • Industrial robotics
  • Language translation

Benefits for Servers

AzurEngine’s patented Reconfigurable Parallel Processor (RPP) architecture offers many benefits for servers, including:

  • FP32 computation capability to ease software development
  • Low power consumption to reduce server power density
  • CUDA language support to inherit a large existing ecosystem
  • Video decoding capability

Deep Learning

Deep learning is very demanding on computer hardware.

The RPP brings the benefit of highly efficient hardware to fast-evolving deep learning algorithms. It has been verified to support different kinds of neural networks, such as CNNs and RNNs, efficiently. Low-precision data formats can be applied at the convolution and fully connected layers to improve efficiency, while floating point can be applied at the softmax layers to ease the quantization complexity.
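
A minimal sketch of that mixed-precision split, written as generic CUDA rather than RPP code (the layer shape, scale factor, and kernel name are assumptions for illustration): the fully connected layer runs in int8 with int32 accumulation, and the softmax that follows stays in FP32.

    // Hypothetical example: int8 fully connected layer + FP32 softmax.
    #include <cstdio>
    #include <cstdint>
    #include <cmath>
    #include <cuda_runtime.h>

    // Fully connected layer: int8 inputs and weights, int32 accumulation.
    __global__ void fc_int8(const int8_t* in, const int8_t* w,
                            int32_t* out, int in_dim, int out_dim) {
        int o = blockIdx.x * blockDim.x + threadIdx.x;
        if (o >= out_dim) return;
        int32_t acc = 0;
        for (int i = 0; i < in_dim; ++i)
            acc += int32_t(in[i]) * int32_t(w[o * in_dim + i]);
        out[o] = acc;
    }

    int main() {
        const int in_dim = 64, out_dim = 10;    // assumed layer shape
        int8_t h_in[in_dim], h_w[out_dim * in_dim];
        for (int i = 0; i < in_dim; ++i) h_in[i] = int8_t(i % 7);
        for (int i = 0; i < out_dim * in_dim; ++i) h_w[i] = int8_t((i % 5) - 2);

        int8_t *d_in, *d_w; int32_t *d_out;
        cudaMalloc(&d_in, sizeof(h_in));
        cudaMalloc(&d_w, sizeof(h_w));
        cudaMalloc(&d_out, out_dim * sizeof(int32_t));
        cudaMemcpy(d_in, h_in, sizeof(h_in), cudaMemcpyHostToDevice);
        cudaMemcpy(d_w, h_w, sizeof(h_w), cudaMemcpyHostToDevice);

        fc_int8<<<1, 32>>>(d_in, d_w, d_out, in_dim, out_dim);

        int32_t h_out[out_dim];
        cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);

        // The softmax stays in FP32: dequantize with an assumed scale,
        // then exponentiate and normalize without further quantization.
        const float scale = 0.01f;              // hypothetical dequantization scale
        float logits[out_dim], maxv = -1e30f, denom = 0.0f;
        for (int o = 0; o < out_dim; ++o) {
            logits[o] = h_out[o] * scale;
            if (logits[o] > maxv) maxv = logits[o];
        }
        for (int o = 0; o < out_dim; ++o) { logits[o] = expf(logits[o] - maxv); denom += logits[o]; }
        for (int o = 0; o < out_dim; ++o) printf("p[%d] = %f\n", o, logits[o] / denom);

        cudaFree(d_in); cudaFree(d_w); cudaFree(d_out);
        return 0;
    }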

Without change there is no innovation, creativity, or incentive for improvement. Those who initiate change will have a better opportunity to manage the change that is inevitable.
William Pollard
