The Internet of Things is an ever-evolving vision of having absolutely everything connected in some fashion. It’s an idea that’s taken some time to catch on, though the utility of all those connected things and widgets is plain enough: at its simplest, they collect data that helps us get things done. But collecting all that data is only the first step. What do we do with it? Process it and categorize it, of course! And that’s where deep learning comes in. On mobile GPUs, no less.
Local deep learning on mobile devices could arrive in the near future
But sending all that data off through the airwaves might not be the most efficient approach, especially if bandwidth is limited. Imagination Technologies is showing off deep learning running on its mobile GPUs, which would let sensor-laden mobile devices process their own data locally and make decisions and recommendations about, well, whatever.
At the 2016 Embedded Vision Summit, Imagination Tech showed just how fast a mobile-focused PowerVR G6430, paired with a quad-core Intel Atom inside the Google Nexus Player, could classify images using a convolutional neural network. They used the Caffe deep learning framework running AlexNet, accelerated with OpenCL, and pointed a consumer camera at objects arranged in a small box to exercise the whole pipeline. The results were startlingly accurate and quick: the Nexus Player ran the workload 12 times faster on the GPU than on the CPU.
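For a sense of what a pipeline like that looks like in code, here is a minimal sketch of classifying one camera frame with AlexNet through Caffe’s Python bindings. The file paths and input frame are placeholders, and stock Caffe targets CUDA; the sketch assumes an OpenCL-capable build of the sort Imagination used to reach the PowerVR GPU.

```python
# Minimal sketch of AlexNet inference via pycaffe, Caffe's Python bindings.
# Paths and the input frame are placeholders; an OpenCL-capable Caffe build
# is assumed so set_mode_gpu() dispatches to a GPU like the PowerVR G6430.
import caffe

caffe.set_mode_gpu()  # run the forward pass on the GPU rather than the CPU

# Standard BVLC AlexNet definition and pretrained ImageNet weights
# (hypothetical local paths).
net = caffe.Net('deploy.prototxt', 'bvlc_alexnet.caffemodel', caffe.TEST)
net.blobs['data'].reshape(1, 3, 227, 227)  # one 227x227 image per batch

# AlexNet expects BGR channel order, CHW layout, and 0-255 pixel values;
# caffe.io.load_image returns RGB, HWC, 0-1 floats, so transform accordingly.
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))     # HWC -> CHW
transformer.set_channel_swap('data', (2, 1, 0))  # RGB -> BGR
transformer.set_raw_scale('data', 255.0)         # [0,1] -> [0,255]

frame = caffe.io.load_image('frame.jpg')  # placeholder for a camera capture
net.blobs['data'].data[...] = transformer.preprocess('data', frame)

# One forward pass through the convolutional network; 'prob' is the softmax
# output blob in the standard AlexNet deploy definition.
output = net.forward()
print('Top ImageNet class index:', output['prob'][0].argmax())
```

Note that the model and preprocessing are identical either way; the 12x speedup in the demo comes entirely from where that forward pass executes.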
Maybe that doesn’t sound so impressive at first glance. So what, it identified a few objects within a few seconds; who cares? But image classification is one of the more computationally intensive deep learning workloads, so this shows that even a GPU from a few generations back can handle something genuinely difficult, and do it fast, on a mobile power budget. Just imagine what it could do with something simpler, like voice recognition for a personal assistant.
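To put rough numbers on “computationally intensive”: AlexNet is commonly cited at roughly 0.7 billion multiply-accumulate operations per forward pass. A quick back-of-envelope calculation (ballpark figures, not measurements from the demo) shows what real-time frame rates demand:

```python
# Back-of-envelope arithmetic only; the MAC count is the commonly cited
# ballpark for AlexNet, not a measurement from Imagination's demo.
macs_per_image = 0.7e9   # ~multiply-accumulates per AlexNet forward pass
fps = 10                 # a modest real-time frame rate
flops = 2 * macs_per_image * fps  # counting each MAC as 2 FLOPs
print(f"~{flops / 1e9:.0f} GFLOP/s sustained for {fps} frames per second")
```

Sustaining on the order of 14 GFLOP/s is the kind of wide, parallel throughput a mobile GPU’s ALUs are built for, which is where a gap like that 12x figure comes from.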
This also paves the way for small, low-powered devices that can help keep your fridge stocked using inexpensive cameras that compare what’s inside against a shopping list you’ve uploaded. In fact, every little base station designed to look, listen, and feel through its myriad connected sensors could become its own smart hub, doing its job without ever phoning home.
Being able to compute this way means better apps on our mobile devices. Imagine running Siri, Cortana, or another personal assistant entirely on your phone, or pointing your camera at an unfamiliar bird and having the species identified on the spot, along with any number of other novel ways to interact. And crucially, as on-device inference matures, the embedded market and the Internet of Things will only become more relevant.