The decentralization of computing and the rollout of 5G have made edge computing increasingly prevalent, letting customers deploy and operate AI applications at the edge securely. They benefit from real-time access to data where it is generated, applying intelligence to that data, taking action, and gaining real-time insights. This brings a new paradigm to how we compute now, and even more so in the future. Imagine a stethoscope that can run an ML model and feed its output to a mobile or tablet app, giving real-time heartbeat information for a patient. Heart-murmur patterns can predict the possible onset of cardiac arrest, lung disorders, or asthma, and catching them early could save millions of lives. This is possible with AI running at the edge. Staying with the stethoscope example: each year, 43 million people die of non-communicable diseases (NCDs), equivalent to 71% of all deaths globally. Edge AI aims to help reduce these premature NCD deaths by 25-30% by 2025 by working toward early detection and prediction of heart, lung, and asthma disorders. An amazing story, isn't it?
Many such edge computing examples exist in manufacturing, automotive, healthcare, and smart and critical infrastructure. Let us now look at the gap that needs to be addressed: continuous model training on the spot, at the edge, without bringing the model down or replacing it, is essential for business continuity. Looking back at the stethoscope use case, it is not efficient to continuously train and re-train these models in the cloud; the need is to train and re-train on the spot, and that is where the value lies.
Training an AI model requires preparing a large amount of training data, along with vast data storage (a data lake), backpropagation, and other heavy computation in the initial training stage. The latter has usually been performed on servers with high-performance GPUs in the cloud, which consumes a great deal of power and adds latency and time. Hence, it was not realistic to perform training on edge devices. However, a technology exists that can conduct training and inference by extracting features from only a small amount of data. This technology is known as Sparse Modeling.
In this blog, we compare Sparse Modeling with deep learning, cover some use cases where Sparse Modeling is beneficial, and introduce examples of visual inspection applications that leverage the technology.
What is Sparse Modeling?
The word "sparse" is defined as "thinly dispersed or scattered". Sparse Modeling assumes that the essential information is actually very limited (and therefore "sparsely distributed"). This technology identifies and extracts the essential information in the input data to produce the output.
Sparse Modeling identifies the relationships between different pieces of data. When producing output, it does not focus on the input data itself but on the relationship between input and output data. Because the focus is on these relationships, the sheer quantity and quality of the input data matter far less, and only a small amount of data is needed. Sparse Modeling is categorized as an unsupervised learning method in machine learning.
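To make the sparsity assumption concrete, here is a minimal sketch using L1-regularized (LASSO) regression from scikit-learn. The synthetic data, the chosen feature indices, and the alpha value are illustrative assumptions, not part of any specific Sparse Modeling product:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

n_samples, n_features = 50, 100          # fewer samples than features
X = rng.standard_normal((n_samples, n_features))

# Ground truth: only 3 of 100 features carry information
# ("essential information is sparsely distributed").
true_coef = np.zeros(n_features)
true_coef[[5, 42, 77]] = [2.0, -3.0, 1.5]
y = X @ true_coef + 0.01 * rng.standard_normal(n_samples)

# The L1 penalty drives most coefficients to exactly zero,
# recovering the few essential features from very little data.
model = Lasso(alpha=0.1).fit(X, y)
print("non-zero coefficients:", np.flatnonzero(model.coef_))
# typically prints the true support: [ 5 42 77 ]
```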
Deep learning generally delivers high performance in applications where sufficient data and annotations can be prepared (e.g., object detection for automated driving). However, Sparse Modeling expands the scope of AI applications to use cases where large amounts of data cannot be collected, and interpretability is highly important.
Sparse Modeling can handle both one-dimensional data, such as time series, and two-dimensional data, such as voice and images. Imaging applications include defect interpolation, defect inspection (anomaly detection), and super-resolution. Figure 1 shows a comparison between a conventional machine learning method and Sparse Modeling for the task of detecting defect images in solar panel inspections. You can see that the amount of training data (number of images) needed for Sparse Modeling is significantly smaller than that of deep learning. Even so, the accuracy of Sparse Modeling is more than 90%, and with a training time of only 12 seconds at the edge, it is easier and faster while maintaining business continuity.
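As a rough illustration of such a defect inspection, the sketch below learns a dictionary from patches of a defect-free image and flags images whose patches reconstruct poorly from their sparse codes. The image size, patch size, dictionary parameters, and simulated defect are all assumptions for demonstration, not the pipeline behind Figure 1:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)

# Stand-in for a defect-free grayscale panel image (values around 0.5).
clean = 0.5 + 0.1 * rng.random((64, 64))

patches = extract_patches_2d(clean, (8, 8), max_patches=500, random_state=0)
patches = patches.reshape(len(patches), -1)
patches -= patches.mean(axis=1, keepdims=True)

# A small dictionary plus sparse codes: cheap enough to fit at the edge.
dico = MiniBatchDictionaryLearning(
    n_components=32, alpha=1.0, max_iter=200, random_state=0
).fit(patches)

def anomaly_score(img):
    """Mean reconstruction error of the image's patches under the dictionary."""
    p = extract_patches_2d(img, (8, 8), max_patches=200, random_state=0)
    p = p.reshape(len(p), -1)
    p -= p.mean(axis=1, keepdims=True)
    codes = dico.transform(p)              # sparse coefficients per patch
    recon = codes @ dico.components_
    return float(np.mean((p - recon) ** 2))

defective = clean.copy()
defective[20:30, 20:30] = 1.0              # simulated defect region
print("clean :", anomaly_score(clean))
print("defect:", anomaly_score(defective)) # noticeably higher error
```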

Benefits:
Sparse Modeling works even with small amounts of training data because it extracts the essential features from the input data during the training stage. Consider an audio anomaly detection application on the stethoscope: there is often a large amount of "good" data but very little "bad" data. This is exactly the kind of case where Sparse Modeling works as an edge AI solution.
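Below is a minimal, hedged sketch of that one-class setup on a synthetic 1-D signal: a dictionary is learned from "good" windows only, and anomalous windows stand out through a high sparse-reconstruction error. The waveform, window length, and model parameters are illustrative assumptions, not a real stethoscope pipeline:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 128)

def normal_beat():
    # Stand-in for a healthy heart-sound window (synthetic assumption).
    return np.sin(2 * np.pi * 2 * t) + 0.05 * rng.standard_normal(t.size)

# Plenty of "good" data; no "bad" data needed for training.
train = np.stack([normal_beat() for _ in range(40)])

dico = DictionaryLearning(n_components=8, alpha=0.5, max_iter=100,
                          transform_n_nonzero_coefs=4,
                          random_state=0).fit(train)

def score(window):
    """Sparse-reconstruction error; high values suggest an anomaly."""
    code = dico.transform(window[None, :])
    recon = code @ dico.components_
    return float(np.mean((window - recon) ** 2))

murmur = normal_beat() + 0.8 * np.sin(2 * np.pi * 17 * t)  # simulated murmur
print("normal:", score(normal_beat()))
print("murmur:", score(murmur))    # clearly larger, so flag for review
```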
Sparse Modeling is computationally inexpensive for model creation, enabling both training and inference to be performed on edge devices.
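As a quick, hedged sanity check of that claim, timing a small dictionary fit is straightforward; the data sizes below are assumptions, and actual numbers depend on the edge hardware:

```python
import time
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

X = np.random.default_rng(0).standard_normal((500, 64))

start = time.perf_counter()
MiniBatchDictionaryLearning(n_components=16, alpha=1.0,
                            max_iter=50, random_state=0).fit(X)
# Typically well under a second on a laptop-class CPU.
print(f"training took {time.perf_counter() - start:.2f} s")
```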
When AI models perform both training and inference on edge devices, we call this "True Edge". By training at the edge, there is no need to send data to an external location (e.g., a server in the cloud), which reduces data security and compliance concerns, latency, and costs.
We at Edgeneural.ai focus on exactly this: providing an Edge AI platform that empowers developers to build or import, train, optimize, compress, deploy, and manage AI models anywhere at the edge, with full flexibility and portability. Models can also be trained and re-trained at the edge, even in disconnected mode, delivering dramatic savings in time and cost along with business continuity.