New System to Greatly Improve AI Potential for Household Appliances

http://www.lgnewsroom.com/2019/09/lg-washing-machines-with-artificial-intelligence-and-direct-drive-motor-roll-out-region-wide/

Only in recent years have household appliances like the washing machine been produced with integrated AI technology, but deep learning is evolving quickly, and the MIT News Office has announced that MIT researchers may have made a significant advance with their newly developed system. Recent testing shows the system delivers unprecedented speed and accuracy, and we could soon see a major leap in the technology available on the market.

MIT researchers developed a system called MCUNet with which they hope to bring deep learning neural networks to computer chips in wearable medical devices, household appliances and the countless other objects tagged “internet of things” (IoT). 

The system designs compact neural networks that deliver unprecedented speed and accuracy for deep learning on IoT devices, despite limited memory and processing power, according to the press release. 

The technology could facilitate the expansion of the IoT universe while saving energy and improving data security.

The research will be presented at next month’s Conference on Neural Information Processing Systems. The lead author is Ji Lin, a Ph.D. student in Song Han’s lab in MIT’s Department of Electrical Engineering and Computer Science. Co-authors include Han and Yujun Lin of MIT, Wei-Ming Chen of MIT and National Taiwan University, and John Cohn and Chuang Gan of the MIT-IBM Watson AI Lab.

PAPER: “MCUNet: Tiny Deep Learning on IoT Devices”

The following was written by Daniel Ackerman for the MIT News Office, which shared the document with various media outlets for immediate release.

Designing a deep network for microcontrollers isn’t easy. Existing neural architecture search techniques start with a big pool of possible network structures based on a predefined template, then they gradually find the one with high accuracy and low cost. 

“It can work pretty well for GPUs or smartphones,” says Lin. “But it’s been difficult to directly apply these techniques to tiny microcontrollers because they are too small.”

So Lin developed TinyNAS, a neural architecture search method that creates custom-sized networks.

 “We have a lot of microcontrollers that come with different power capacities and different memory sizes,” says Lin. “So we developed the algorithm [TinyNAS] to optimize the search space for different microcontrollers.” 

The customized nature of TinyNAS means it can generate compact neural networks with the best possible performance for a given microcontroller—with no unnecessary parameters. 
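
The idea of tailoring a search space to a device’s memory budget can be illustrated with a short sketch. The code below is only a toy illustration, not TinyNAS itself: the width multipliers, input resolutions, channel counts, and the assumption that int8 activations of the largest single layer dominate peak SRAM use are all hypothetical simplifications chosen for the example.

```python
# Toy sketch: keep only candidate network configurations that fit a given
# microcontroller's SRAM budget, so the search happens over models that can
# actually run on that chip. Not TinyNAS; names and numbers are illustrative.
from itertools import product

def estimate_peak_activation_bytes(widths, resolution, bytes_per_value=1):
    """Rough proxy: assume the largest single activation map (int8 values)
    dominates peak memory use."""
    return max(w * resolution * resolution for w in widths) * bytes_per_value

def build_search_space(sram_budget_bytes):
    """Keep only (width multiplier, input resolution) pairs that fit the budget."""
    candidates = []
    for width_mult, resolution in product([0.25, 0.35, 0.5, 0.75, 1.0],
                                          [48, 64, 80, 96, 112, 128, 144, 160]):
        widths = [max(1, int(c * width_mult)) for c in (16, 24, 40, 80, 112, 160)]
        if estimate_peak_activation_bytes(widths, resolution) <= sram_budget_bytes:
            candidates.append((width_mult, resolution))
    return candidates

# A 320 kB SRAM part admits far fewer candidates than a 2 MB part, so the
# architecture search concentrates on models sized for the actual hardware.
print(len(build_search_space(320 * 1024)), len(build_search_space(2 * 1024 * 1024)))
```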

“Then we deliver the final, efficient model to the microcontroller,” says Lin.

To run that tiny neural network, a microcontroller also needs a lean inference engine. A typical inference engine carries some dead weight—instructions for tasks it may rarely run. The extra code poses no problem for a laptop or smartphone, but it could easily overwhelm a microcontroller. 

“It doesn’t have off-chip memory, and it doesn’t have a disk,” says Han. “Everything put together is just one megabyte of flash, so we have to really carefully manage such a small resource.” Cue TinyEngine.

The researchers developed their inference engine in conjunction with TinyNAS. TinyEngine generates the essential code necessary to run TinyNAS’ customized neural network. Any deadweight code is discarded, which cuts down on compile time.

“We keep only what we need,” says Han. “And since we designed the neural network, we know exactly what we need. That’s the advantage of system-algorithm codesign.” 
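
As a rough illustration of that “keep only what we need” idea (not TinyEngine’s actual code generator), the sketch below emits kernel source only for the operators a given model graph contains, rather than linking a general-purpose interpreter. The kernel names and the example model are made up for the demonstration.

```python
# Toy sketch of operator-specific code generation: because the network is fixed
# before deployment, only the kernels it actually uses need to end up in the
# binary. This is an illustration of the concept, not TinyEngine itself.
KERNEL_LIBRARY = {
    "conv2d":    "void conv2d(...)    { /* int8 pointwise/standard convolution */ }",
    "depthwise": "void depthwise(...) { /* in-place depthwise convolution */ }",
    "avgpool":   "void avgpool(...)   { /* global average pooling */ }",
    "softmax":   "void softmax(...)   { /* classifier head */ }",
    "lstm":      "void lstm(...)      { /* never emitted if the model has no LSTM */ }",
}

def generate_engine_source(model_ops):
    """Emit C snippets only for the operators the compiled model contains."""
    needed = sorted(set(model_ops))
    return "\n".join(KERNEL_LIBRARY[op] for op in needed)

# A vision model with no recurrent layers never pays for LSTM code, so the
# compiled binary stays small enough for a part with only 1 MB of flash.
mobilenet_like = ["conv2d", "depthwise", "conv2d", "avgpool", "softmax"]
print(generate_engine_source(mobilenet_like))
```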

In the group’s tests of TinyEngine, the size of the compiled binary code was between 1.9 and five times smaller than comparable microcontroller inference engines from Google and ARM. TinyEngine also contains innovations that reduce runtime, including in-place depthwise convolution, which cuts peak memory usage nearly in half. After codesigning TinyNAS and TinyEngine, Han’s team put MCUNet to the test.
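
In-place depthwise convolution exploits the fact that each channel is filtered independently, so one channel’s output can overwrite that channel’s input storage instead of requiring a second full activation buffer. The following is a minimal NumPy sketch of that memory pattern; the shapes, 3x3 “same” padding, and data types are assumptions chosen for illustration, not the kernel MCUNet actually ships.

```python
# Toy sketch: in-place depthwise convolution. Only one single-channel scratch
# plane is ever live, so peak memory is roughly the input buffer plus one plane
# instead of a full input buffer plus a full output buffer.
import numpy as np

def depthwise_conv_inplace(x, kernels):
    """x: (C, H, W) activations, kernels: (C, 3, 3) filters.
    Filters x channel by channel, writing results back into x."""
    c, h, w = x.shape
    scratch = np.empty((h, w), dtype=x.dtype)          # single-channel temporary
    padded = np.empty((h + 2, w + 2), dtype=x.dtype)   # reused for every channel
    for ch in range(c):
        padded[:] = 0
        padded[1:-1, 1:-1] = x[ch]                     # zero "same" padding
        for i in range(h):
            for j in range(w):
                scratch[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernels[ch])
        x[ch] = scratch                                # overwrite the input in place
    return x

activations = np.random.randn(8, 16, 16).astype(np.float32)
filters = np.random.randn(8, 3, 3).astype(np.float32)
depthwise_conv_inplace(activations, filters)  # extra peak memory: one 16x16 plane
```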

MCUNet’s first challenge was image classification. The researchers used the ImageNet database to train the system with labeled images, then to test its ability to classify novel ones. On a commercial microcontroller they tested, MCUNet successfully classified 70.7 percent of the novel images—the previous state-of-the-art neural network and inference engine combo was just 54 percent accurate. “Even a 1 percent improvement is considered significant,” says Lin. “So this is a giant leap for microcontroller settings.”

The team found similar results in ImageNet tests of three other microcontrollers. And on both speed and accuracy, MCUNet beat the competition for audio and visual “wake-word” tasks, where a user initiates an interaction with a computer using vocal cues (think: “Hey, Siri”) or simply by entering a room. The experiments highlight MCUNet’s adaptability to numerous applications.

“Huge potential”

The promising test results give Han hope that it will become the new industry standard for microcontrollers. “It has huge potential,” he says.

The advance “extends the frontier of deep neural network design even farther into the computational domain of small energy-efficient microcontrollers,” says Kurt Keutzer, a computer scientist at the University of California at Berkeley, who was not involved in the work. He adds that MCUNet could “bring intelligent computer-vision capabilities to even the simplest kitchen appliances or enable more intelligent motion sensors.”

MCUNet could also make IoT devices more secure.

“A key advantage is preserving privacy,” says Han. “You don’t need to transmit the data to the cloud.”

Analyzing data locally reduces the risk of personal information being stolen — including personal health data. Han envisions smartwatches with MCUNet that don’t just sense users’ heartbeat, blood pressure, and oxygen levels, but also analyze and help them understand that information. MCUNet could also bring deep learning to IoT devices in vehicles and rural areas with limited internet access.

Plus, MCUNet’s slim computing footprint translates into a slim carbon footprint. “Our big dream is for green AI,” says Han, adding that training a large neural network can burn carbon equivalent to the lifetime emissions of five cars. MCUNet on a microcontroller would require a small fraction of that energy.

“Our end goal is to enable efficient, tiny AI with less computational resources, less human resources, and less data,” says Han.
