DeePhi Deep Learning SDK

DNNDK™ (Deep Neural Network Development Kit) is DeePhi™'s deep learning SDK, designed as an integrated framework that simplifies and accelerates the development and deployment of deep learning (DL) applications on the DeePhi DPU™ (Deep Learning Processor Unit) platform. It makes the computing power of the DPU easily accessible by providing a productive solution covering the phases of compression, programming, compilation, and runtime.

Key Features:
  • Industry-leading technology and the first publicly released deep learning SDK in China

  • Innovative full-stack solution for deep learning development

  • A complete set of solid optimization toolchains, covering compression, compilation and runtime

  • Lightweight standard C/C++ programming APIs

  • Easy to use, with a gentle learning curve

DNNDK Framework

DNNDK mainly consists of the DEep ComprEssioN Tool (DECENT), the Deep Neural Network Compiler (DNNC), the Deep Neural Network Assembler (DNNAS), the Neural Network Runtime (N2Cube), the DPU Simulator, and the Profiler.

DECENT (DEep ComprEssioN Tool)

Deep neural networks (DNNs) carry a great deal of redundant information, in both the number and the precision of their parameters, which leaves ample room for optimization. Building on its research in neural network model compression, DeePhi developed DECENT (DEep ComprEssioN Tool). It applies pruning, quantization, weight sharing, and Huffman encoding to reduce model size by 5x to 50x with negligible loss of accuracy. As a result, it gives the DPU platform higher computational efficiency, better energy efficiency, and lower system memory bandwidth requirements.

DNNDK Hybrid Compilation Model

DeePhi's patented hybrid compilation technique resolves the programming complexity and deployment difficulty of DL applications in a heterogeneous AI computing environment. User-developed C/C++ application source code and the DPU instruction code generated by DNNC for the neural network are compiled and linked together, enabling a rapid turn-key deployment solution for the DPU platform.

DNNC is the key to maximizing the computational power of the DPU by efficiently mapping a neural network into high-performance DPU instructions. After parsing the topology of the trained and compressed input network, it constructs an internal computation-graph IR in DAG form, including the corresponding control-flow and data-flow information. It then applies multiple compiler optimization and transformation techniques, including computation-node fusion, efficient instruction scheduling, and full reuse of the DPU's on-chip feature maps and weights. DNNC significantly improves DPU computation resource utilization under low system memory bandwidth and power constraints.

Neural Network Model Downloads / Demos

Example                     Download       Demo
ResNet50                    Caffe Model    Video
Inception v1                Caffe Model    Video
VGG16                       Caffe Model    Video
Face Detection              -              Video
Pedestrian Detection        -              Video
Video Structure Analysis    -              Video