Edge AI Program Downsizing Technology

Downsize and speed up AI models without lowering accuracy!

Because Edge AI performs AI processing locally on devices such as IoT equipment, it is used in security cameras and other systems that require real-time, fast image analysis, and its market is expanding.

Venture companies and other organizations often lack the AI engineers or embedded-systems expertise needed to implement Edge AI programs. Through our efforts to help them optimize source code and memory usage within their limited resources (CPU devices, cost, etc.) without lowering AI accuracy, we have developed a technology that delivers industry-leading processing speed and downsizing effects.

Our technology enables customers' trained AI models for image classification and object detection, including CNN, YOLO, and SSD, to be implemented on embedded platforms typified by ARM.
Using our unique quantization technology, AI models can be compressed without lowering their recognition accuracy. Compression also yields a synergistic benefit: it increases the system's processing speed.
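As a rough illustration of the general idea behind quantization (a generic sketch, not our proprietary method), uniform 8-bit post-training quantization maps float32 weights to int8, cutting storage to roughly one quarter while keeping reconstruction error bounded by the quantization step:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Uniformly quantize float32 weights to int8 (symmetric, per-tensor)."""
    scale = np.abs(weights).max() / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for an accuracy check."""
    return q.astype(np.float32) * scale

# Example: a random weight tensor standing in for a trained layer
w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(q.nbytes / w.nbytes)  # 0.25 -- int8 storage is 1/4 of float32
# worst-case rounding error is half a quantization step
print(float(np.abs(w - w_hat).max()) <= scale / 2 + 1e-6)
```

In practice, additional techniques (per-channel scales, quantization-aware training) are used to keep accuracy loss negligible at even lower bit widths.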

What we can do for you


Effects of applying our technology

Achieve AI operation on edge devices through optimization of network (NW) models and embedding technology.
We provide industry-leading software processing speed for image classification functions.

Scope of our technology

Demo video

Case (1): Speed-up via SoC implementation (image classification: MobileNetV1)
Case (2): Speed-up via SoC implementation (object detection: YoloV3Tiny)

Case (3): NW downsizing + speed-up via SoC implementation (object detection)

Example of speed-up via downsizing and optimized SoC implementation

※ mAP (mean Average Precision): An accuracy metric used in object detection; it indicates how correctly detected objects are classified.
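As a sketch of how this metric is computed (the standard definition, using hypothetical detection data), Average Precision (AP) is the area under the precision-recall curve for one object class, and mAP is the mean of AP over all classes:

```python
def average_precision(scores, is_correct, num_ground_truth):
    """AP for one class: area under the precision-recall curve.
    scores: confidence of each detection; is_correct: whether the detection
    matched a ground-truth box (e.g. IoU >= 0.5)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = 0
    ap = 0.0
    for rank, i in enumerate(order, start=1):
        if is_correct[i]:
            tp += 1
            precision = tp / rank
            # each true positive adds a recall step of 1/num_ground_truth
            ap += precision / num_ground_truth
    return ap

# Hypothetical detections for one class with 3 ground-truth objects
scores     = [0.9, 0.8, 0.7, 0.6]
is_correct = [True, False, True, True]
print(average_precision(scores, is_correct, 3))  # ≈ 0.806
```

mAP is then simply the mean of these per-class AP values, so a higher mAP after compression means the downsized model still detects and classifies objects about as well as the original.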

Downsizing example through quantization-aware training