IoT solutions and the devices enabling them are driving global data traffic toward an estimated, mind-boggling 600 zettabytes within the next year. But it’s no longer just about home monitors, autonomous cars and video cameras. While the Internet of Things (IoT) drives the connected world, it’s artificial intelligence (AI) that’s making all that data actionable and intelligible. And it’s AI that contributed an estimated $2 trillion to global GDP last year, making it a worthy focal point for the 2019 Intel® Partner Connect conference in Denver.

New innovations supporting AI and IoT took center stage at the Intel-sponsored event, along with the advances in chip technology facilitating their growth. With an entire team dedicated to designing chipsets specifically for IoT, Intel’s latest semiconductor technology enables design at the 10 nanometer (nm) scale. For perspective, a human hair is roughly 75,000 nm wide; in other words, about 7,500 of these 10 nm features could fit across the width of a single hair. As chip performance continues to roughly double every couple of years, companies like Intel and NVIDIA are pushing the limits of physical production toward a future of pin-sized compute more powerful than the supercomputers of the 1990s.
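The hair-width comparison above is simple enough to check as back-of-the-envelope arithmetic. This is purely illustrative (a real die layout depends on transistor pitch, not the process-node name alone):

```python
# Back-of-the-envelope check of the scale comparison in the article.
# Numbers come from the text; this is illustrative, not a die layout.
hair_width_nm = 75_000   # approximate width of a human hair, in nanometers
process_node_nm = 10     # Intel's 10 nm process scale

features_across_hair = hair_width_nm // process_node_nm
print(features_across_hair)  # 7500 ten-nanometer features across one hair
```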

What does this advance in technology deliver? Computer vision – the ability to use video sensors to gather data, then quickly process and interpret it locally for optimized deep learning – is the way of the future. It is often claimed that the brain processes images dramatically faster than text, and Intel is applying that same principle to video data with its OpenVINO (Open Visual Inference and Neural Network Optimization) toolkit, which supports the development of vision technologies. Meanwhile, chip manufacturer and tech giant NVIDIA is working closely with partners to develop video and vision technologies that improve collection and storage faster than ever imagined.

Intel’s Partner Connect conference included demonstrations by several companies of how they use vision technology and data to drive business decision-making. With the ability to monitor 16 or more 1080p streams simultaneously, companies can optimize the retail experience using demographics and facial expressions. Case in point: in China, the vending industry is radically different than in the U.S. A customer simply swipes a payment from their phone (generally via WeChat), opens a refrigerator and takes the items they want. Through visual analysis, the customer is charged the correct amount for the selection, with no moving parts to break or get stuck. And the retailer benefits from the ability to track inventory and consumption.
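The billing logic behind that "grab and go" fridge can be sketched as an inventory diff: the vision system counts what is on the shelves before and after the door opens, and the customer is charged for the difference. A minimal sketch, with invented item names and prices (the real systems obviously infer the counts from video, not from hand-entered data):

```python
from collections import Counter

# Hypothetical prices for illustration only.
PRICES = {"soda": 1.50, "sandwich": 4.00, "yogurt": 2.25}

def charge_for_session(before: Counter, after: Counter) -> float:
    """Charge for whatever the vision system saw leave the fridge."""
    taken = before - after  # Counter subtraction keeps only removed items
    return sum(PRICES[item] * count for item, count in taken.items())

# Shelf counts as the camera might report them before and after a visit.
before = Counter({"soda": 3, "sandwich": 2, "yogurt": 5})
after = Counter({"soda": 2, "sandwich": 2, "yogurt": 4})
print(charge_for_session(before, after))  # one soda + one yogurt = 3.75
```

The appeal of this design is exactly what the article notes: there is no dispensing mechanism to jam, and the same counts that drive billing double as real-time inventory data for the retailer.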

So what does this mean to us? In my opinion, it means a higher quality of life and a push toward more data-driven decisions. At its core, the digitalization of our living spaces provides new data streams that complement the visual, auditory and kinesthetic ways we already process the world. It sets the stage for data to be processed as close to the edge as possible, then driven back to the core and to hyperscalers for analytics.
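The edge pattern described above can be sketched in a few lines: run inference on every frame locally, and forward only the frames that matter upstream for analytics. The detector below is a random stand-in for a real vision model, so the names and threshold are assumptions for illustration:

```python
import random

random.seed(42)  # deterministic for the sake of the example

def detect_event(frame_id: int) -> bool:
    # Stand-in for a real vision model running on the edge device;
    # here we pretend ~10% of frames contain something relevant.
    return random.random() < 0.1

def edge_filter(num_frames: int) -> list:
    """Return only the frames worth shipping to the core for analytics."""
    return [f for f in range(num_frames) if detect_event(f)]

events = edge_filter(1000)
print(f"forwarded {len(events)} of 1000 frames upstream")
```

The point of the sketch is the ratio: the core and the hyperscalers see only the small fraction of traffic that carries information, rather than every raw video frame.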

The Intel event showcased several examples of edge architectures that make optimal use of these advances in chip and compute technology – providing us with an enormous amount of useful data without overwhelming our infrastructure with irrelevant network traffic and video. And it demonstrated that edge computing will continue to drive our future and improve our lives through advanced technologies and AI.

Tim Parker

Vice President of Network Strategy

Tim is the vice president of network strategy at Flexential, where he is responsible for guiding the company's interconnection ecosystem and developing network strategies and architectures for Flexential's HybridEdge Strategy. Tim has more than 25 years of experience delivering high-performance customer service in the IT and telecommunications sectors.