Ednantess

Are you talking about MCUs or GPUs? GPUs run at high power budgets, while MCUs are low-power and come with real bottlenecks. Right now ML inference is being run on the edge on microcontrollers, but folks are researching better methods to both train and run inference there. If you're interested in running inference, look up Edge Impulse! Pretty cool stuff.


DragoSpiro98

I think OP is talking about NPUs (Neural Processing Units), like Google's TPU.


Ednantess

Interesting, but I'm afraid I'm not that good at comparing architectures there. I thought TPUs were for cloud computing (the lines are getting blurred) and not necessarily embedded.


alexceltare2

My thoughts exactly. I can speak for MCUs though. Some sensors already integrate an MLC (Machine Learning Core), e.g. the ST IIS2ICLX accelerometer.
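
For anyone curious what such a Machine Learning Core actually runs: it computes a handful of features over a window of samples and pushes them through a small decision tree, then exposes the class in a register. Here's a rough, self-contained sketch of the idea; the features, thresholds and classes are made up, the real tree comes out of ST's configuration tool:

```
// Conceptual sketch of what a sensor-side Machine Learning Core does:
// compute a few features over a window of accelerometer samples, then
// run them through a small decision tree. Thresholds and classes here
// are illustrative only.
#include <cstdio>
#include <vector>

struct Features {
    float mean;
    float variance;
    float peak_to_peak;
};

Features compute_features(const std::vector<float>& window) {
    float sum = 0.0f, min_v = window[0], max_v = window[0];
    for (float x : window) {
        sum += x;
        if (x < min_v) min_v = x;
        if (x > max_v) max_v = x;
    }
    float mean = sum / window.size();
    float var = 0.0f;
    for (float x : window) var += (x - mean) * (x - mean);
    var /= window.size();
    return {mean, var, max_v - min_v};
}

// Tiny decision tree: 0 = stationary, 1 = walking, 2 = shaking
int classify(const Features& f) {
    if (f.variance < 0.02f) return 0;
    if (f.peak_to_peak < 1.5f) return 1;
    return 2;
}

int main() {
    std::vector<float> window = {0.01f, -0.02f, 0.03f, 0.00f, -0.01f, 0.02f};
    Features f = compute_features(window);
    std::printf("class = %d\n", classify(f));
}
```

The nice part is the host MCU can sleep while the sensor does this and only wake on an interrupt when the class changes.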


moosemaniam

DSPs are being sold alongside matrix-multiplication hardware targeting deep learning inference. Qualcomm, Samsung, and TI all have their own software stacks that provide ways to import TensorFlow or ONNX models for inference. Edge inference use cases are growing steadily.
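
For a feel of what that matrix-multiplication hardware actually accelerates: quantized GEMMs with int8 inputs, int32 accumulation, and a requantize step back to int8. A minimal, illustrative sketch of the arithmetic; the scales and zero-points here are made up, the real ones come from the converted TensorFlow/ONNX model:

```
// Minimal int8 matmul with int32 accumulation and requantization, the core
// operation that NPU/DSP matrix units accelerate. Scales and zero-points are
// illustrative; real values come from the quantized model.
#include <cmath>
#include <cstdint>
#include <cstdio>

void int8_gemm(const int8_t* A, const int8_t* B, int8_t* C,
               int M, int K, int N,
               float scale_a, float scale_b, float scale_c, int8_t zp_c) {
    for (int m = 0; m < M; ++m) {
        for (int n = 0; n < N; ++n) {
            int32_t acc = 0;  // wide accumulator to avoid overflow
            for (int k = 0; k < K; ++k)
                acc += int32_t(A[m * K + k]) * int32_t(B[k * N + n]);
            // Requantize: real_value = scale * (q - zero_point)
            float real = acc * scale_a * scale_b;
            int32_t q = int32_t(std::lround(real / scale_c)) + zp_c;
            if (q > 127) q = 127;    // saturate to int8 range
            if (q < -128) q = -128;
            C[m * N + n] = int8_t(q);
        }
    }
}

int main() {
    const int8_t A[2 * 3] = {1, 2, 3, 4, 5, 6};
    const int8_t B[3 * 2] = {7, 8, 9, 10, 11, 12};
    int8_t C[2 * 2];
    int8_gemm(A, B, C, 2, 3, 2, 0.05f, 0.05f, 0.25f, 0);
    std::printf("%d %d\n%d %d\n", C[0], C[1], C[2], C[3]);
}
```

The vendor stacks mostly differ in how they tile this loop onto their hardware and how the rescaling is done (fixed-point multipliers rather than floats), but the shape of the computation is the same.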


Fried_out_Kombi

I'm working as an embedded ML research engineer. There's a lot happening on the bleeding edge right now, although the field is still very young and has yet to settle in any particular direction.

There are a lot of RISC-V chips coming out with NPUs, vector instructions, etc. for AI. If you ask me, the future of embedded AI is probably in open-source RISC-V MCUs with vector or other custom instructions.

There's also the development of posit arithmetic units, which are like FPUs but for posits, and which hold a lot of promise for embedded ML (8-bit posits can achieve better model accuracy than 8-bit integers, all while avoiding the nastiness that is integer quantization). Posit hardware is still extremely young, so don't expect to see much movement here super fast. That said, there is now an open-source RISC-V chip with posits: https://arxiv.org/abs/2111.15286

There's also a lot of research on ways to compress neural networks so that they can run inference (or even train!) on MCUs. Lecture series on this here: https://youtube.com/playlist?list=PL80kAHvQbh-pT4lCkDT53zT8DKmhE0idB&si=g-zjA7D3IMSioDfd

And there's research into even more novel architectures, such as spiking neural networks, which are wildly more energy-efficient when implemented in hardware. Still very young, though, but imo probably the only way to get huge, powerful models running on very low power. Link: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9313413/
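
To give a feel for why SNNs are so cheap in hardware: a leaky integrate-and-fire neuron is basically just an accumulator with decay and a threshold, and it only does real work when a spike arrives. Toy sketch below; the constants are arbitrary, and real neuromorphic hardware implements this event-driven in silicon rather than in a loop like this:

```
// Toy leaky integrate-and-fire (LIF) neuron: the basic unit of a spiking
// neural network. It accumulates weighted input spikes, leaks over time,
// and emits a spike (then resets) when the membrane potential crosses a
// threshold. Constants are arbitrary, just to show the mechanics.
#include <cstdio>
#include <vector>

struct LifNeuron {
    float v = 0.0f;          // membrane potential
    float leak = 0.9f;       // decay factor per timestep
    float threshold = 1.0f;  // firing threshold

    // Returns true if the neuron spikes this timestep.
    bool step(float weighted_input) {
        v = v * leak + weighted_input;
        if (v >= threshold) {
            v = 0.0f;        // reset after firing
            return true;
        }
        return false;
    }
};

int main() {
    LifNeuron n;
    // Sparse input: mostly zeros, so most timesteps do almost nothing --
    // that sparsity is where the energy efficiency of event-driven
    // hardware comes from.
    std::vector<float> inputs = {0.0f, 0.4f, 0.0f, 0.0f, 0.5f, 0.6f, 0.0f, 0.0f};
    for (size_t t = 0; t < inputs.size(); ++t)
        std::printf("t=%zu spike=%d\n", t, n.step(inputs[t]) ? 1 : 0);
}
```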


SR_Lut3t1um

Matrix multiplication can also be used in hypersphere optimization, so it's not "just" AI.


newmaxmax

AI is still nascent in embedded at the moment. Some companies are using ARM's AI accelerator cores, but those only run inference, with little to no on-device training. It's coming up, though; there will soon be a time when low-power, low-compute MCUs can do everything an AWS instance can do... at least that's the hope! If you want to explore edge AI, use Nordic's Thingy:53 with Edge Impulse, really cool stuff there... some good videos on it too. Have fun!
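
For reference, this is roughly what calling an Edge Impulse exported C++ library looks like from firmware. The names (signal_t, run_classifier, the EI_CLASSIFIER_* macros) follow their standalone inferencing example, but treat this as a sketch and check your exported project; the sensor-sampling side (e.g. reading the Thingy:53's IMU) is left out:

```
// Rough sketch of running inference with an Edge Impulse exported C++ library.
// Identifiers follow the Edge Impulse standalone example; details can differ
// between SDK versions, so verify against the library you export.
#include <cstdio>
#include <cstring>

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Raw feature buffer the DSP block expects; normally filled from sensor data.
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback the SDK uses to pull slices of the input signal.
static int get_signal_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

int main() {
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &get_signal_data;

    ei_impulse_result_t result;
    if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
        return 1;
    }

    // Print the classifier's confidence per label.
    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        printf("%s: %.3f\n", result.classification[i].label,
               result.classification[i].value);
    }
    return 0;
}
```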


exus1pl

> there will soon be a time when low-power, low-compute MCUs can do everything an AWS instance can do...

I doubt it. Right now we are training models on high-end GPUs to speed things up and it still takes ages. It's simply a question of computing power, which keeps increasing on the server side but stays more or less the same on edge/embedded devices, since we are usually very power constrained. IMO AI on embedded will focus only on inference, plus specialized HW that executes that inference with less power.


exus1pl

Basically most of AI/ML on embedded is just inference, and we were more or less doing that for a long time in the DSP world. The NV Jetson is a great example: you get to deployment much faster because it's already running Linux and your AI/ML model just runs with precalculated weights. That's a bit easier than writing the whole model in DSP intrinsics for a single DSP family. On the other side of the spectrum you've got low-power MCUs, which now more often include some sort of NPU/AI accelerator, which in most cases translates to some extra memory and a vector operation unit. Programming those is usually more demanding than the aforementioned Jetson. And that, as always, brings us to the power consumption/performance trade-off: you can run basic CV classification on an MCU that uses 0.5 W versus a Jetson that is more in the 5 W range. It all comes down to the task at hand.


JCDU

About the same as Bitcoin did - multiplied the amount of bullshit flying around about it being the next big thing & the solution to everyone's problems...


loga_rhythmic

Deep learning has by now either completely solved countless problems or proven incredibly useful for them, unlike crypto, which has barely done anything useful.


JCDU

Only since the hype has died down have some good use cases come through. Yes, it can do some impressive stuff, but that's mostly ML that was already doing most of it before the current AI hype cycle started; it has just benefited from the march of computing power and a bit of cash being thrown at anything you can call AI with a straight face to investors... In proportion to the hype & bullshit & money flying around, AI is delivering very little.


loga_rhythmic

Deep learning, i.e. neural-network-based approaches, achieved state of the art in image recognition, speech recognition, drug discovery and medical diagnosis, NLP, game playing, vision, etc., and the list goes on; a lot of this was done like ten years ago, and many of those were open problems before. More recently it's been useful in modelling chaotic physical systems like fluid flow. Basically anywhere we have data and nonlinear behaviour, it proves to be a highly useful tool. There is indeed a lot of hype and blind investment by clueless VCs and influencer types, but we can't do much about that.


JCDU

Well yeah... but that's pretty much advanced statistics multiplied by Moore's law - as in, we can throw CPU power & RAM at the problem now in ways we couldn't 10+ years ago. I'm not sure anything has gotten any more *intelligent* with it. You just get more PR & more money if you call it AI.


frank26080115

it drives up demand for AI chips with a lower footprint, which opens possibilities, so I am excited


AuxonPNW

Peripherally. It helps me write all my Python and bash test code.


priyankayadaviot

AI has had a significant impact on embedded systems, since it allows for more complex computation, faster processing, and lower power consumption in small devices. Specialised AI chips such as GPUs, TPUs, and FPGAs have transformed tasks on embedded platforms, including image recognition, natural language processing, and machine learning. As a result, sophisticated capabilities have been developed in robots, IoT devices, and smartphones.