

A plug-and-play AI platform for AI developers building AI for the Edge
EDGENeural AI Platform (ENAP) building blocks
ENAP Training Containers
Optimization and Inference

Train
Train models from scratch, use a pre-optimized model, or bring your own
Simple UI-based training tool
Start training by simply pasting a link to your dataset. Supports a wide range of computer vision tasks such as detection and classification. Data ingestion is straightforward: the default configuration, such as the model family and name, is detected automatically.
Supports customization for advanced users
Advanced users can customize settings and tune hyperparameters to train the model efficiently for a particular dataset, with support for multiple Edge hardware targets.
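In practice, this workflow reduces to a dataset link plus a set of auto-detected defaults that advanced users can override. The Python sketch below is only an illustration of that idea; ENAP itself is driven from the UI, and the field names, dataset URLs and hardware labels here are assumptions, not the platform's actual schema.

    # Hypothetical illustration only: every field name below is an assumption,
    # not ENAP's real configuration schema.
    DEFAULTS = {
        "model_family": "yolo",        # auto-detected from the dataset/task
        "model_name": "yolov5s",
        "task": "detection",
        "epochs": 50,
        "batch_size": 16,
        "learning_rate": 1e-3,
        "target_hardware": "nvidia-jetson",
    }

    def build_training_config(dataset_url, overrides=None):
        """Start from auto-detected defaults, then apply advanced-user overrides."""
        config = {"dataset_url": dataset_url, **DEFAULTS}
        config.update(overrides or {})
        return config

    # Basic flow: paste a dataset link and accept the defaults.
    print(build_training_config("https://example.com/datasets/traffic-signs.zip"))

    # Advanced flow: tune hyperparameters for a particular dataset and target.
    print(build_training_config(
        "https://example.com/datasets/traffic-signs.zip",
        {"learning_rate": 5e-4, "epochs": 120, "target_hardware": "qualcomm-rb5"},
    ))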
Reporting, Versioning and Analysis
Easily monitor training performance through visual charts and detailed reports that track accuracy improvements over time and across epochs. Feedback loops support new dataset versions and model versioning.
Optimize
Optimize inference performance without any accuracy trade-off
Select a model, choose the hardware, choose the quantization level
Automatically optimize inference performance through quantization and graph compilers to instantly improve the memory footprint, latency and throughput of your models.
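ENAP's optimizer is a managed backend, but the basic mechanism can be shown with open tooling. The sketch below uses ONNX Runtime's post-training dynamic quantization as a stand-in; the file names are placeholders, not artifacts the platform produces.

    from onnxruntime.quantization import quantize_dynamic, QuantType

    # Convert FP32 weights to INT8 for the supported operators, shrinking the
    # model on disk and in memory and usually improving latency on CPU targets.
    quantize_dynamic(
        "model.onnx",          # placeholder: FP32 model exported after training
        "model.int8.onnx",     # placeholder: quantized output
        weight_type=QuantType.QInt8,
    )

Dynamic quantization mainly benefits MatMul- and attention-heavy models; conv-heavy vision models are usually put through static quantization with a small calibration dataset, which is exactly the kind of heavy lifting a managed optimizer hides.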
Improve Performance while preserving accuracy
With the accuracy bar set at -1%, i.e. allowing at most a one-percent drop from the baseline, improve model performance without meaningful accuracy trade-offs and deploy with confidence on actual hardware. ENAP takes care of the heavy lifting in the backend and re-trains the model if accuracy drops below the threshold.
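The -1% bar is essentially an accuracy budget wrapped around the optimization step. The sketch below is a minimal illustration of that control flow, assuming caller-supplied optimize, evaluate and retrain functions; it is not ENAP's internal implementation.

    # Minimal sketch of an accuracy-guarded optimization loop. The `optimize`,
    # `evaluate` and `retrain` callables are assumptions supplied by the caller.
    def optimize_with_accuracy_budget(model, optimize, evaluate, retrain,
                                      budget=0.01, max_rounds=3):
        baseline = evaluate(model)            # reference accuracy, e.g. 0.91
        candidate = optimize(model)           # quantize / compile the model
        for _ in range(max_rounds):
            if evaluate(candidate) >= baseline - budget:
                return candidate              # within the allowed 1% drop
            # Accuracy fell below the threshold: recover it, then re-optimize.
            candidate = optimize(retrain(candidate))
        raise RuntimeError("could not stay within the accuracy budget")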
Automatically reduces computational cost
The ENAP optimizer automatically tunes the model to remove redundancy, reducing its computational cost so it can be deployed seamlessly on resource-constrained hardware.
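Redundancy removal of this kind is commonly done with pruning. As a stand-in for ENAP's automatic tuning, the sketch below uses PyTorch's torch.nn.utils.prune on a toy placeholder model.

    import torch
    import torch.nn.utils.prune as prune

    # Toy stand-in model; ENAP would operate on your trained network.
    model = torch.nn.Sequential(
        torch.nn.Conv2d(3, 16, 3, padding=1),
        torch.nn.ReLU(),
        torch.nn.Conv2d(16, 32, 3, padding=1),
    )

    # Zero out the 30% lowest-magnitude weights in every conv layer, then make
    # the change permanent so the exported model carries the pruned weights.
    for module in model.modules():
        if isinstance(module, torch.nn.Conv2d):
            prune.l1_unstructured(module, name="weight", amount=0.3)
            prune.remove(module, "weight")

Unstructured sparsity like this only pays off on runtimes that exploit it; structured pruning, which removes whole channels, is what actually shrinks compute on most Edge hardware.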

Deploy
Enable continuous deployment seamlessly on any Edge hardware
Inference Engine
Simplify deep learning model inference across multiple frameworks and hardware. Easily deploy to any hardware using our inference engine and save hours of engineering time and effort.
Deploy across Multiple Hardware
Scale your inference workloads and build, port and optimize computer vision models on any Edge AI platform, including Nvidia, Qualcomm and more.
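ENAP's engine itself is closed, but the pattern it automates, picking the best runtime backend for the target hardware and falling back gracefully, can be sketched with ONNX Runtime. The model path, input shape and provider list below are assumptions for illustration.

    import numpy as np
    import onnxruntime as ort

    # Prefer accelerator-backed execution providers (TensorRT/CUDA on Nvidia
    # targets) and fall back to CPU wherever they are not available.
    preferred = ["TensorrtExecutionProvider", "CUDAExecutionProvider",
                 "CPUExecutionProvider"]
    providers = [p for p in preferred if p in ort.get_available_providers()]

    session = ort.InferenceSession("model.onnx", providers=providers)
    input_name = session.get_inputs()[0].name
    dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input

    outputs = session.run(None, {input_name: dummy})
    print(providers, [o.shape for o in outputs])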
Accelerate AI adoption on the Edge
Why EDGENeural AI Platform (ENAP)?

End-to-End Workload Processing
Process AI and non-AI workloads on the chip
End-to-End AI software development with model lifecycle management

Extensive Edge AI Software Platform
Supports existing libraries
Pre-built Optimized Models for the Chip
Model Optimization
Pruning / Compression

MLOps / Edge DevOps
Model Lifecycle Management
Model Benchmarking
Federated Learning
Containers