It is a Tiny thing!


TinyML 


Machine learning models are becoming increasingly powerful, but also increasingly complex. As a result, they typically require either a great deal of energy or an Internet connection to a server in order to be useful in an application.

TinyML is a branch of machine learning and embedded systems research concerned with models that can run on compact, low-power devices such as microcontrollers. It allows edge devices to perform model inference with low latency, low power consumption, and low bandwidth requirements. A typical microcontroller draws power on the order of milliwatts or microwatts, compared with 65 to 85 watts for regular consumer CPUs and 200 to 500 watts for standard consumer GPUs; that is roughly a thousand times less energy. Thanks to this low power consumption, TinyML devices can run ML applications at the edge while operating unplugged for weeks, months, and in some cases even years.

TinyML enables machine learning on microcontrollers and Internet of Things (IoT) devices in a highly optimized way, so that large amounts of data can be leveraged and analyzed while using very little power. TinyML is widely regarded as the next big opportunity in embedded technology. Using software designed specifically for small inference workloads, it addresses both cost and power efficiency by enabling data analytics on low-powered hardware with limited processing power and tiny memory capacity.
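A typical step in that workflow is shrinking a trained model with post-training quantization before deploying it to a microcontroller. Below is a minimal sketch using TensorFlow Lite's full-integer quantization; the toy model, the representative_data generator, and the output file name are placeholders for illustration, not part of any specific TinyML project.

```python
import numpy as np
import tensorflow as tf

# A small stand-in model; in practice this would be an already-trained network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Representative samples let the converter calibrate int8 value ranges.
sample_inputs = np.random.rand(100, 10).astype(np.float32)

def representative_data():
    for i in range(100):
        yield [sample_inputs[i:i + 1]]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]           # enable quantization
converter.representative_dataset = representative_data         # calibration data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8                       # full-integer I/O for MCUs
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

# The resulting flatbuffer is small enough to be compiled into firmware
# (e.g. as a C array) and run with the TensorFlow Lite Micro runtime.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Quantized model size: {len(tflite_model)} bytes")
```

Full-integer quantization stores weights and activations as 8-bit integers, which typically cuts the model size by about a factor of four and allows inference on MCUs that lack floating-point hardware.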

TinyML Applications


Whether you are aware of it or not, TinyML probably already plays a role in your everyday life. Keyword spotting, object recognition and classification, gesture recognition, audio detection, and machine monitoring are some of its uses. The audio wake-word detection model used in Google and Android smartphones is one example of a TinyML application in daily life.

The rapid emergence of modern machine learning (ML) algorithms and low-power embedded hardware has ushered in a new Internet of Things (IoT) era, opening new possibilities for ML algorithms running on edge devices. TinyML is one such lightweight ML framework. On these devices, TinyML aims to deliver lower latency, efficient bandwidth use, better data security and privacy, and lower overall network and cloud costs. Because it allows IoT devices to work well without a constant connection to cloud services while still delivering accurate ML results, it is a feasible option for IoT applications looking for cost-effective solutions. TinyML aims to provide on-premises analytics that significantly increase the value of IoT services, especially in environments with poor connectivity.
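To make the "no constant connectivity" point concrete, here is a minimal sketch of on-device inference with the TensorFlow Lite interpreter in Python; the model file name and the zero-filled sample input are assumptions for illustration. On an actual microcontroller the same pattern is expressed with the TensorFlow Lite Micro C++ runtime, but the flow (load model, allocate tensors, set input, invoke, read output) is the same.

```python
import numpy as np
import tensorflow as tf

# On-device inference: the data never leaves the device, so there is
# no network round trip, no bandwidth cost, and no cloud dependency.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# A placeholder input (e.g. a window of sensor readings) shaped to the model.
sample = np.zeros(input_details["shape"], dtype=input_details["dtype"])

interpreter.set_tensor(input_details["index"], sample)
interpreter.invoke()                                   # runs entirely locally
prediction = interpreter.get_tensor(output_details["index"])
print(prediction)
```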



In the healthcare industry, tinyCare applies machine learning (ML) methods on resource-constrained edge devices (TinyML). Its authors developed an end-to-end prototype system that performs ML inference with a variety of ML techniques on microcontroller unit (MCU) powered edge devices, estimating blood-pressure-related vital metrics such as systolic (SBP), diastolic (DBP), and mean arterial (MAP) blood pressure from electrocardiogram (ECG) and photoplethysmogram (PPG) sensors. The system was trained and evaluated on more than 500 hours and 12,000 authentic intensive care unit data samples. Despite operating on a very constrained computing, power, and memory budget, it produces results comparable to those of server-based state-of-the-art systems.
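As a rough illustration of the kind of model such a system might run, the sketch below defines a tiny regression network that maps features derived from ECG/PPG signals to SBP, DBP, and MAP. The feature count, layer sizes, and loss are assumptions made up for this example, not the architecture described in the tinyCare paper.

```python
import tensorflow as tf

# Illustrative only: a tiny regression network mapping features extracted
# from ECG/PPG windows (e.g. pulse transit time, heart rate, waveform
# statistics) to three targets: [SBP, DBP, MAP] in mmHg.
N_FEATURES = 20   # assumed number of hand-crafted features per window

model = tf.keras.Sequential([
    tf.keras.Input(shape=(N_FEATURES,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3),   # outputs: SBP, DBP, MAP
])
model.compile(optimizer="adam", loss="mae")
model.summary()   # only a few thousand parameters
```

A network this small, once quantized, fits comfortably within the flash and RAM budgets of a typical MCU, which is what makes continuous on-device estimation feasible.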


Challenges


  1. Low Power. TinyML systems are characterized by their very low energy consumption. A sound benchmarking procedure should therefore, ideally, report each device's energy efficiency.
  2. Small Memory. Because of their small size, TinyML systems commonly run into memory limits. Their resource budgets are typically two or more orders of magnitude smaller than those of standard ML platforms such as cell phones, which usually have a few GB of memory available (the sketch after this list shows how quantization helps a model fit such budgets).
  3. Processor Performance. Even the more capable MCUs, such as the ARM Cortex-M series, perform poorly compared with cloud-based systems.
  4. Diversified Hardware. Although the field is still in its early phases, TinyML hardware is already highly diverse in capability, power, and performance, which makes fair comparison across devices difficult.
  5. Lack of Appropriate Datasets. Existing datasets may not suit the TinyML paradigm because they are not adapted to low-power settings. Suitable datasets should be accurate enough in time and space to match the characteristics of the data produced by the various on-board sensors.
  6. TinyML Assessment. TinyML has proven to be a valuable tool for next-generation systems and is used in applications across many industry sectors, yet there is not yet an established, standard way to assess and compare such systems.
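As a small, hedged illustration of the memory challenge (item 2 above), the sketch below converts the same toy Keras model twice, once as float32 and once with dynamic-range quantization, and prints the resulting flatbuffer sizes; the model and layer sizes are arbitrary placeholders.

```python
import tensorflow as tf

# Compare the size of a float32 TFLite model with its quantized counterpart.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10),
])

float_converter = tf.lite.TFLiteConverter.from_keras_model(model)
float_model = float_converter.convert()

quant_converter = tf.lite.TFLiteConverter.from_keras_model(model)
quant_converter.optimizations = [tf.lite.Optimize.DEFAULT]   # dynamic-range quantization
quant_model = quant_converter.convert()

print(f"float32 model:        {len(float_model)} bytes")
print(f"quantized int8 model: {len(quant_model)} bytes")
# On an MCU with, say, 256 KB of flash and 64 KB of RAM, shrinking the model
# like this is often the difference between fitting on the device and not.
```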


References

  1. Warden, P., & Situnayake, D. (2019). TinyML. O'Reilly Media, Incorporated.
  2. Ray, P. P. (2021). A review on TinyML: State-of-the-art and prospects. Journal of King Saud University-Computer and Information Sciences.
  3. Banbury, C. R., Reddi, V. J., Lam, M., Fu, W., Fazel, A., Holleman, J., ... & Yadav, P. (2020). Benchmarking TinyML systems: Challenges and direction. arXiv preprint arXiv:2003.04821.
  4. Tsoukas, V., Boumpa, E., Giannakas, G., & Kakarountas, A. (2021, November). A Review of Machine Learning and TinyML in Healthcare. In 25th Pan-Hellenic Conference on Informatics (pp. 69-73).
  5. Shumba, A. T., Montanaro, T., Sergi, I., Fachechi, L., De Vittorio, M., & Patrono, L. (2022, July). Embedded Machine Learning: Towards a Low-Cost Intelligent IoT edge. In 2022 7th International Conference on Smart and Sustainable Technologies (SpliTech) (pp. 1-6). IEEE.
  6. Ahmed, K., & Hassan, M. (2022, June). tinyCare: A tinyML-based Low-Cost Continuous Blood Pressure Estimation on the Extreme Edge. In 2022 IEEE 10th International Conference on Healthcare Informatics (ICHI) (pp. 264-275). IEEE.



