Technology

At DeePhi, we are known for providing state-of-the-art deep learning technologies. Our neural network compression technology and neural network hardware architecture have deeply influenced the field of AI, shaping the future of deep learning.

DNNDK

DNNDK™ (Deep Neural Network Development Kit) is DeePhi's™ deep learning SDK. It is designed as an integrated framework that simplifies and accelerates the development and deployment of deep learning (DL) applications on the DeePhi DPU™ (Deep Learning Processor Unit) platform. (Click DNNDK for more information.)
DECENT (DEep ComprEssioN Tool)
Building on its world-leading research in neural network model compression, DeePhi developed DECENT (DEep ComprEssioN Tool). It was the first to combine pruning, quantization, weight sharing, and Huffman encoding, reducing model size by 5x to 50x without loss of accuracy.
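To make the compression pipeline concrete, here is a minimal sketch of two of its stages, magnitude pruning and weight sharing via clustering. This is an illustrative toy in NumPy, not DECENT's actual algorithm; the function names and the cluster count are assumptions for the example.

```python
import numpy as np

def prune(weights, sparsity):
    """Magnitude pruning: zero out the smallest-magnitude weights."""
    k = int(weights.size * sparsity)
    threshold = np.sort(np.abs(weights), axis=None)[k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

def quantize(weights, n_clusters):
    """Weight sharing: cluster the surviving weights so each one is
    stored as a small index into a shared codebook of centroids."""
    nonzero = weights[weights != 0]
    # evenly spaced initial centroids over the weight range
    centroids = np.linspace(nonzero.min(), nonzero.max(), n_clusters)
    for _ in range(10):  # a few k-means iterations
        assign = np.argmin(np.abs(nonzero[:, None] - centroids[None, :]), axis=1)
        for c in range(n_clusters):
            if np.any(assign == c):
                centroids[c] = nonzero[assign == c].mean()
    return centroids, assign

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))
pruned, mask = prune(w, sparsity=0.8)           # keep ~20% of weights
codebook, idx = quantize(pruned, n_clusters=16)  # 16 values -> 4-bit indices
print(f"weights kept after pruning: {mask.mean():.0%}")
print(f"codebook entries: {codebook.size}")
```

After these stages, the surviving 4-bit indices are highly repetitive, which is what makes a final Huffman-coding pass effective.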
DNNC Neural Network Compiler

DNNC is the key to maximizing the DPU's computation power: it efficiently maps neural networks into high-performance DPU instructions. It significantly improves the utilization of DPU computation resources while lowering system memory bandwidth and power requirements.
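One way a compiler of this kind reduces memory bandwidth is by fusing adjacent layers into a single accelerator instruction, so intermediate feature maps stay on-chip instead of round-tripping through DRAM. The toy sketch below illustrates that idea only; the layer names, fusion rules, and "instructions" are hypothetical, not actual DPU opcodes or DNNC behavior.

```python
# Hypothetical fusable layer pairs: the second op can be folded into the first.
FUSABLE = {("conv", "relu"), ("conv", "batchnorm")}

def compile_graph(layers):
    """Map a linear list of layers to instructions, fusing where possible."""
    instrs, i = [], 0
    while i < len(layers):
        if i + 1 < len(layers) and (layers[i], layers[i + 1]) in FUSABLE:
            # Fused pair: the intermediate result never leaves the chip.
            instrs.append(f"{layers[i].upper()}+{layers[i + 1].upper()}")
            i += 2
        else:
            instrs.append(layers[i].upper())
            i += 1
    return instrs

net = ["conv", "relu", "pool", "conv", "batchnorm", "relu"]
print(compile_graph(net))  # ['CONV+RELU', 'POOL', 'CONV+BATCHNORM', 'RELU']
```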

Hardware Architecture

Aristotle Architecture
In order to compute convolutional neural networks (CNNs), DeePhi designed the Aristotle Architecture from the ground up. While currently used for video and image recognition tasks, the architecture is flexible and scalable for both servers and portable devices.
Descartes Architecture
DeePhi's Descartes Architecture is designed for compressed recurrent neural networks (RNNs), including LSTMs. By taking advantage of sparsity, the DeePhi Descartes Architecture can achieve over 2.5 TOPS on a KU060 FPGA at 300 MHz, enabling instantaneous speech recognition, natural language processing, and many other recognition tasks.
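The benefit of exploiting sparsity can be sketched in software: if a pruned layer is stored in a compressed format that keeps only nonzero weights, a matrix-vector product touches only those weights, skipping the multiply-accumulates a dense engine would waste on zeros. The compressed-sparse-row (CSR) sketch below illustrates the principle in NumPy; it is not the Descartes datapath.

```python
import numpy as np

def to_csr(dense):
    """Keep only nonzero weights: values, their column indices,
    and per-row offsets (compressed sparse row format)."""
    values, cols, indptr = [], [], [0]
    for row in dense:
        nz = np.flatnonzero(row)
        values.extend(row[nz])
        cols.extend(nz)
        indptr.append(len(values))
    return np.array(values), np.array(cols), np.array(indptr)

def csr_matvec(values, cols, indptr, x):
    """Sparse matrix-vector product: only nonzero weights are multiplied,
    so a 90%-sparse layer does roughly 10% of the dense MACs."""
    y = np.zeros(len(indptr) - 1)
    for r in range(len(y)):
        lo, hi = indptr[r], indptr[r + 1]
        y[r] = values[lo:hi] @ x[cols[lo:hi]]
    return y

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 8))
W[np.abs(W) < 1.2] = 0.0  # stand-in for a pruned RNN weight matrix
vals, cols, indptr = to_csr(W)
x = rng.standard_normal(8)
assert np.allclose(csr_matvec(vals, cols, indptr, x), W @ x)
```

In hardware the same idea pays off twice: fewer arithmetic operations and fewer weight fetches from memory, which is typically the dominant cost for RNN inference.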