Understanding the Microsoft Vision AI Development Kit

Computer vision has come a long way since the days of reading text off typed pages or spotting shorts and opens on an assembly line. Advances in computing power, the enormous amounts of data accumulated in data centers over the last 20 years, and new and better algorithms have finally made artificial intelligence (AI) more than just a buzzword.

Computers already use imaging technologies and machine-learning (ML) software to analyze images, classifying and indexing them for, say, facial recognition in social networking applications like Facebook, for security and access control, and even for melanoma diagnosis.

This is, however, just the tip of the potential application iceberg. Broader AI can not only analyze data, deliver insights and improve accuracy through learning, but also make decisions. Considering that about 80 percent of our understanding of the environment around us comes from visual perception, any AI implementation beyond simple machine learning will have to rely heavily on computer vision.

Analyzing computer-vision data, however, is a bandwidth-intensive business. While processor companies like Qualcomm have developed chips that can handle the intensive compute load, moving all of the image and video data to a far-off cloud for analysis and back whenever an application demands it is not always feasible.

Let’s take an extreme case: self-driving cars, which are still under development. Even at future 5G speeds, the amount of data that must be transferred to a data center to analyze traffic situations simply breaks the application, because it demands a real-time response. Consider another example, a home security system that uses facial recognition as part of multi-factor authentication (MFA). Should home internet speeds slow in the evening when more people start streaming movies, consumers may find themselves waiting on a sluggish system. And if you have ever been in a crowded elevator that stops frequently, you have experienced consumer impatience as you watched the equally frequent punching of the “close door” button.

The solution to enabling applications like self-driving cars and ensuring success in the smart home market is the same: Move AI to the network or internet-of-things (IoT) edge.

That move is opening up the AI-in-computer-vision market, which was worth $3.62 billion in 2018 and is projected to grow at a 47.54-percent CAGR to $25.32 billion by 2023, according to a report from ResearchAndMarkets.com. The time to be a part of that market is now.

Kit up with Microsoft Vision AI

If you are ready to make your mark with IoT solutions like home-monitoring cameras, enterprise security cameras and smart-home devices with built-in AI, you can get the hardware, software and cloud components in one kit and get started right away.

Developed by Qualcomm Technologies, Microsoft and eInfochips, the Vision AI Developer Kit offers engineers the following benefits:

- Low latency

- Robustness

- Privacy

- Efficient utilization of network bandwidth

- Efficient utilization of cloud resources

- Cloud storage

- Cloud processing

- Enhanced security

- Enhanced device provisioning and management

- Integrated environment to build, train, validate and deploy AI models on edge devices

- Device telemetry and analytics

The kit is a camera-based device that combines IoT, edge and AI from Microsoft with the Qualcomm Vision Intelligence 300 Platform and the Qualcomm Neural Processing SDK for AI for on-device edge computing.

The sum of parts

The kit comprises powerful tools to make it easy to develop AI-on-edge products. Let’s take a closer look at them to understand what capabilities you can deliver in your applications. 

Qualcomm QCS603 system-on-chip (SoC): The SoC’s AI engine uses a digital signal processor (DSP) for image processing on the edge, combining the Qualcomm Hexagon 685 Vector Processor with two Qualcomm Hexagon Vector eXtensions (HVX), the Qualcomm Adreno 615 GPU with OpenGL ES 3.2, Vulkan and OpenCL support, and the Qualcomm Snapdragon Neural Processing Engine, whose programming interface supports TensorFlow, Caffe/Caffe2, ONNX and Android NN. The engine delivers 2.1 TOPS at 1 W.

Azure IoT Edge: This is a set of software tools in the kit that enables deployment of Azure services, including Functions, Stream Analytics, Machine Learning and SQL Server databases, from the cloud to edge devices. It routes messages among modules, devices at the edge and the cloud and, in doing so, moves cloud analytics and custom business logic onto devices.

Azure IoT Edge itself consists of three components: IoT Edge modules deployed to devices to locally run Azure services, third-party services or custom code; IoT Edge runtime, which runs on each IoT Edge device and manages the modules; and a cloud-based interface with which you remotely monitor and manage the devices.

Azure IoT Edge supports custom modules that can be coded in C#, C, Node.js, Python and Java. It also offers store-and-forward messaging, which keeps modules operational even in unstable network environments.
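To make the module concept concrete, here is a minimal sketch of a custom IoT Edge module in Python using Microsoft's azure-iot-device package. It simply receives messages on the module's inputs and forwards them to an output route; the output name "upstream" and the payload handling are placeholders for your own logic.

```python
# pip install azure-iot-device
from azure.iot.device import IoTHubModuleClient, Message

# When running inside a deployed IoT Edge module container, the client
# picks up its connection settings from the Edge runtime's environment.
client = IoTHubModuleClient.create_from_edge_environment()

def handle_message(message: Message) -> None:
    # Inspect the incoming payload (e.g., an inference result from a
    # vision module) and forward it to the "upstream" output route.
    print("Received on input '%s': %s" % (message.input_name, message.data))
    client.send_message_to_output(message, "upstream")

client.on_message_received = handle_message
input("Module running; press Enter to exit.\n")
client.shutdown()
```

How "upstream" connects to other modules or to IoT Hub is declared in the deployment manifest, which is how IoT Edge keeps routing logic out of module code.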

Azure Machine Learning service: AML is a cloud service that lets you train, deploy, automate and manage machine-learning models. The service can auto-train and auto-tune a model, and with its SDK for Python, along with open-source Python packages, you can build and train highly accurate machine-learning and deep-learning models yourself.
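As a rough sketch of what working with the AML SDK for Python (the azureml-core package, SDK v1) looks like, the following submits a training script to a compute cluster. The script name train.py, the experiment name and the cluster name "gpu-cluster" are all placeholders for your own assets.

```python
# pip install azureml-core
from azureml.core import Workspace, Experiment, ScriptRunConfig

# Connect to an existing AML workspace described by a local config.json.
ws = Workspace.from_config()

# Describe the training job: which script to run and where to run it.
config = ScriptRunConfig(
    source_directory=".",
    script="train.py",
    compute_target="gpu-cluster",
)

# Submit the job under a named experiment and stream its logs.
run = Experiment(ws, "vision-ai-classifier").submit(config)
run.wait_for_completion(show_output=True)
```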

Starting your development

Working with the kit is fairly straightforward. You begin by training models in Microsoft Azure for object detection and classification for your application, whether that is flagging defects in a manufacturing scenario or detecting movement in a home for environmental and lighting control.

Then you would deploy the models to your new kit. With Qualcomm’s hardware and Azure IoT Edge, you can choose to simply run your models without connecting to the cloud.

When you connect to the cloud, you get the benefit of integration between the Qualcomm Neural Processing SDK for AI and AML. This allows you to convert models pre-trained in TensorFlow, Caffe/Caffe2 or the ONNX standard to run on the device.
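For illustration, the Neural Processing SDK ships command-line converters such as snpe-tensorflow-to-dlc, which turn a trained model into the SDK's on-device DLC format. The sketch below wraps one call in Python; the exact flag names vary across SDK releases, so treat them as assumptions and check the tool's --help output for your version.

```python
import subprocess

# Hypothetical invocation of the SDK's TensorFlow-to-DLC converter.
# "model.pb" (a frozen TensorFlow graph) and "model.dlc" are placeholder
# file names; flag spellings differ between SDK releases.
subprocess.run(
    [
        "snpe-tensorflow-to-dlc",
        "--input_network", "model.pb",   # trained model to convert
        "--output_path", "model.dlc",    # DLC file consumed on-device
    ],
    check=True,
)
```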

The AML service packages your models into hardware-acceleration-ready containers, or modules, which Azure IoT Edge deploys to devices built on the Qualcomm Vision Intelligence Platform. Dedicated hardware (the CPU, the GPU or the DSP) accelerates the Qualcomm Neural Processing SDK to give you AI inferencing on the edge.
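A minimal sketch of the first step of that pipeline, registering a converted model with AML so it can be packaged into a deployable module, might look like this; the file and model names are placeholders.

```python
from azureml.core import Workspace
from azureml.core.model import Model

ws = Workspace.from_config()

# Register the converted DLC file in the workspace's model registry so
# AML can package it into a container image for Azure IoT Edge to push
# down to the camera.
model = Model.register(
    workspace=ws,
    model_path="model.dlc",
    model_name="vision-ai-dlc",
)
print(model.name, model.version)
```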

All you need now is the Microsoft Vision AI Developer Kit from Arrow.com to start the project.
