EDGENeural AI Platform (ENAP)

Edge AI Innovation and Acceleration Platform

Enabling AI developers to build, train, optimize, and deploy deep learning models with blazing-fast inference speeds

Sign up for Beta

A plug-and-play AI platform for AI developers building AI for the Edge

ENAP Studio provides a software-defined platform that is modular, unified, and hardware-agnostic, integrating an end-to-end workflow for AI engineers to build, train, optimize, and deploy any AI application across a variety of hardware platforms

EDGENeural AI Platform (ENAP) building blocks

ENAP Studio comprises two components: the ENAP Cloud Dashboard, which hosts the ENAP training containers, and the ENAP Edge Containers, which provide Edge optimization and the inference engine

ENAP Training Containers
Low-Code Platform
One-Click Train, Optimize, and Deploy
Hardware Agnostic
Seamless 3rd-Party Integration
Supports all major deep learning frameworks
Computer vision, NLP, or data science
Optimization and Inference
Multi-architecture support for x86 and ARM CPUs, GPUs, and FPGAs
Seamless integration with 3rd-party IoT and data-analysis platforms
AI Optimizer that automatically optimizes models while preserving accuracy

Train models from scratch, use a pre-optimized model, or bring your own

Simple UI-Based Training Tool
Start training your model by simply pasting a link to your dataset. A wide range of computer vision models is supported, including detection and classification. Data ingestion for training is straightforward, and default configuration such as the model family and name is detected automatically.
Supports Customization for Advanced Users
Advanced users can customize settings and tune hyperparameters to train the model efficiently on particular datasets; multiple Edge hardware targets are supported.
Reporting, Versioning, and Analysis
Easily monitor training performance through visual charts and detailed reports that track accuracy improvements over time and across epochs. Feedback loops support new datasets and model versioning.
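ENAP's reporting internals are not public, but the idea behind the charts described above — log accuracy per epoch and version the best-performing checkpoint — can be sketched in plain Python. All names here are illustrative, not ENAP's actual API:

```python
from dataclasses import dataclass, field


@dataclass
class TrainingReport:
    """Tracks per-epoch accuracy and remembers the best model version.

    Illustrative stand-in for the kind of reporting/versioning loop a
    training dashboard provides; not ENAP's real implementation.
    """
    history: list = field(default_factory=list)  # (epoch, accuracy) pairs
    best_epoch: int = -1
    best_accuracy: float = float("-inf")

    def log_epoch(self, epoch: int, accuracy: float) -> None:
        self.history.append((epoch, accuracy))
        if accuracy > self.best_accuracy:
            # This epoch becomes the candidate model version to keep.
            self.best_accuracy = accuracy
            self.best_epoch = epoch

    def summary(self) -> str:
        return (f"{len(self.history)} epochs trained; "
                f"best accuracy {self.best_accuracy:.2%} at epoch {self.best_epoch}")


# Simulated per-epoch validation accuracies improving over time.
report = TrainingReport()
for epoch, acc in enumerate([0.71, 0.78, 0.84, 0.83, 0.86]):
    report.log_epoch(epoch, acc)
print(report.summary())  # 5 epochs trained; best accuracy 86.00% at epoch 4
```

Keeping the full history (rather than only the best value) is what makes the accuracy-over-epochs charts and later feedback loops possible.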

Optimize inference performance without any accuracy trade-off

Select a Model, Choose Hardware, Choose a Quantization Level
Automatically optimize inference performance through quantization and graph compilers to instantly improve the memory footprint, latency, and throughput of your models.
Improve Performance While Preserving Accuracy
With the accuracy bar set at -1% (at most a one-percent drop), improve model performance without accuracy trade-offs and deploy easily on actual hardware. ENAP takes care of the heavy lifting in the backend and re-trains the model if accuracy drops below the threshold.
Automatically Reduces Computational Cost
The ENAP optimizer automatically tunes the model to remove redundancy, reducing its computational cost and making it optimal for seamless deployment on resource-constrained hardware.
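The control flow described above — quantize, check the accuracy drop against the -1% bar, and re-train when the model falls below it — can be sketched with stub functions. ENAP's optimizer is proprietary, so `quantize` and `retrain` below are hypothetical stand-ins that illustrate the guard logic only:

```python
# Conceptual sketch of an accuracy-guarded optimization loop.
# `quantize` and `retrain` are hypothetical callables, not ENAP's real API.

ACCURACY_BAR = -0.01  # tolerate at most a 1% absolute accuracy drop


def optimize_with_guard(baseline_acc, quantize, retrain, max_rounds=3):
    """Quantize, then re-train until the accuracy drop is within the bar."""
    model_acc = quantize()
    for _ in range(max_rounds):
        if model_acc - baseline_acc >= ACCURACY_BAR:
            return model_acc, True  # within the -1% bar: accept the model
        model_acc = retrain(model_acc)  # recover some of the lost accuracy
    return model_acc, False  # could not meet the bar within the budget


# Simulated run: quantization drops accuracy by 3%; each re-training
# round recovers 1.5%, so two rounds bring it back within the bar.
final, ok = optimize_with_guard(
    baseline_acc=0.92,
    quantize=lambda: 0.89,
    retrain=lambda acc: acc + 0.015,
)
print(final, ok)
```

The point of the guard is that optimization never silently ships a degraded model: either the accuracy constraint is met, or the loop reports failure so a different quantization level can be tried.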

Enable continuous deployment seamlessly on any Edge hardware

Inference Engine
Simplify deep learning model inference across multiple frameworks and hardware targets. Easily deploy to any hardware using our inference engine and save hours of engineering time and effort.
Deploy Across Multiple Hardware Targets
Scale your inference workloads: build, port, and optimize computer vision models on any Edge AI platform, including NVIDIA, Qualcomm, and others.
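The inference engine's API is not public, but the hardware-abstraction idea behind "deploy to any hardware" — one model, a registry of per-target backends, and a portable CPU fallback — can be sketched as follows. The backend names and the `select_backend` helper are illustrative assumptions, not ENAP's real interface:

```python
# Conceptual sketch of hardware-agnostic deployment: map each Edge target
# to an accelerator backend and fall back to a generic CPU runtime when
# the requested accelerator is unavailable on the device.

BACKENDS = {
    "nvidia-gpu": "TensorRT",
    "qualcomm-dsp": "SNPE",
    "arm-cpu": "ArmNN",
    "x86-cpu": "OpenVINO",
}


def select_backend(target: str, available: set) -> str:
    """Pick the backend for `target`, falling back to a generic CPU runtime."""
    backend = BACKENDS.get(target)
    if backend is not None and backend in available:
        return backend
    return "generic-cpu"  # portable fallback so deployment never fails


print(select_backend("nvidia-gpu", {"TensorRT", "OpenVINO"}))  # TensorRT
print(select_backend("qualcomm-dsp", {"TensorRT"}))            # generic-cpu
```

The fallback path is what makes the same deployment artifact usable across heterogeneous Edge fleets: the accelerator is used when present, and inference still runs when it is not.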
Accelerate AI adoption on Edge

Why EDGENeural AI Platform (ENAP)?

An end-to-end software stack is the secret behind Edge AI adoption

End-to-End Workload Processing
Process AI and non-AI workloads on the chip
End-to-End AI software development with model lifecycle management
Extensive Edge AI Software Platform
Supports existing libraries
Pre-built optimized models for the chip
Model Optimization
Pruning / Compression
MLOps / Edge DevOps
Model Lifecycle Management
Model Benchmarking
Federated Learning