Intel has launched a family of Intel Vision Accelerator Design Products that it says are designed to run artificial intelligence (AI) inference and analytics on edge devices – the place where data originates and is acted upon. The newly launched solutions come in two forms: one featuring Intel Movidius vision processors and the other built on the Intel Arria 10 FPGA.
“Until recently, businesses have been struggling to implement deep learning technology. For transportation, smart cities, healthcare, retail and manufacturing industries, it takes specialized expertise, a broad range of form factors and scalable solutions to make this happen. Intel’s Vision Accelerator Design Products now offer businesses choice and flexibility to easily and affordably accelerate AI at the edge to drive real-time insights,” said Jonathan Ballon, Intel vice president and general manager, Internet of Things Group.
The chip maker said that its accelerator solutions, which are built on the OpenVINO software toolkit, will give developers improved neural network performance across a variety of Intel products. Intel is of the view that the need for intelligence on edge devices has never been greater than it is today. “As deep learning approaches rapidly replace more traditional computer vision techniques, businesses can unlock rich data from digital video. With Intel Vision Accelerator Design Products, businesses can implement vision-based AI systems to collect and analyze data right on edge devices for real-time decision-making. Advanced edge computing capabilities help cut costs, drive new revenue streams and improve services.”
Combined with Intel Vision products such as Intel CPUs with integrated graphics, these new edge accelerator cards give businesses choice and flexibility in price, power and performance to meet specific requirements from camera to cloud. The company said that leading companies such as Dell, Honeywell and QNAP are planning products based on Intel Vision Accelerator Designs. Additional partners and customers – from equipment builders to solution developers and cloud service providers – also support these products.
Intel Vision Accelerator Design Products work by offloading AI inference workloads to purpose-built accelerator cards that feature either an array of Intel Movidius Vision Processing Units or a high-performance Intel Arria 10 FPGA. Deep learning inference accelerators scale to the needs of businesses using Intel Vision solutions, whether they are adopting deep learning AI applications in the data center, on on-premises servers or inside edge devices. With the OpenVINO toolkit, developers can easily extend their investment in deep learning inference applications on Intel CPUs and integrated GPUs to these new accelerator designs, saving time and money.
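In practice, this portability comes from the fact that an OpenVINO application selects its inference target by a device name string, so the same code can run on a CPU, integrated GPU, Movidius VPU array or FPGA card. The sketch below illustrates that selection step only; the device name strings mirror OpenVINO's plugin names of that era ("MYRIAD" for a single Movidius VPU, "HDDL" for the VPU-array cards, "HETERO:FPGA,CPU" for the Arria 10 design), but the helper function itself is a hypothetical illustration, not part of the toolkit.

```python
# Hypothetical sketch: choosing an OpenVINO inference target.
# The application logic stays the same; only the device string changes
# when an accelerator card is present.

# Accelerators first, CPU as the universal fallback.
PREFERRED_ORDER = ["HETERO:FPGA,CPU", "HDDL", "MYRIAD", "GPU", "CPU"]

def pick_inference_device(available):
    """Return the highest-priority device string found on this machine."""
    for device in PREFERRED_ORDER:
        if device in available:
            return device
    return "CPU"  # the CPU plugin is always a valid target

# An edge box fitted with a Movidius VPU-array accelerator card:
print(pick_inference_device(["CPU", "GPU", "HDDL"]))  # -> HDDL
```

The point of the design is that everything downstream of this choice – loading the network, running inference – is unchanged, which is why Intel describes moving from CPU prototypes to accelerator cards as an extension of an existing investment rather than a rewrite.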