Azure Percept: A machine learning quick starter

Otesanya David · April 1, 2022


Microsoft’s commitment to low-code and no-code application development goes a lot further than its Power Platform. The same connector-and-pipeline model powers its Azure Logic Apps platform and elements of the Azure Machine Learning studio. Connecting prebuilt elements together may not offer the flexibility of developing your own applications from scratch, but it’s a quick way to deliver value. At the same time, it brings in people with nontraditional development skills who can contribute domain knowledge a development team might otherwise lack.

One area where there’s a disconnect between application development and the physical world is health and safety. People are unpredictable, making it hard to design applications that can help identify potential dangers on the shop floor or around machinery. One option is to use computer vision–based machine learning to build models of normal behavior so that anomalies can be quickly identified. A camera monitoring a set of gas pumps can be trained to identify someone smoking; a camera by a hydraulic press can be trained to detect when an operator or passerby steps outside the safe zone.
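
To make the pattern concrete, here’s a minimal, hypothetical sketch of that approach: a pretrained “normal behavior” classifier scores each camera frame, and low-confidence frames raise an alert. The model file, input size, class index, and threshold are all assumptions for illustration; none of this is Percept-specific code.

```python
# Hypothetical sketch: flag camera frames that a pretrained "normal behavior"
# classifier scores below a confidence threshold. The model path, input size,
# class index, and threshold are placeholder assumptions, not Percept APIs.
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("normal_behavior_model.h5")  # assumed model file
NORMAL_CLASS = 0    # assumed index of the "normal" class
THRESHOLD = 0.85    # assumed confidence cutoff for raising an alert

capture = cv2.VideoCapture(0)  # first attached camera
while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    # Resize and scale the frame to match the model's expected input.
    batch = np.expand_dims(cv2.resize(frame, (224, 224)) / 255.0, axis=0)
    scores = model.predict(batch, verbose=0)[0]
    if scores[NORMAL_CLASS] < THRESHOLD:
        print("Possible anomaly detected; confidence:", scores[NORMAL_CLASS])
capture.release()
```

In a production system the alert would feed a dashboard or an interlock rather than a print statement, but the shape of the loop is the same: capture, score, compare against learned normal behavior.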

Introducing Azure Percept

The question is how to build and deploy a safety-focused machine learning system quickly. That’s where Microsoft’s Azure Percept platform comes in: a focused version of its Azure IoT Edge platform combined with a set of hardware specifications and a cloud-hosted, low-code application development environment with a containerized deployment model. It offers a developer kit built around industry-standard mounting so you can build and test applications before deploying them to onsite systems. The kit uses the familiar 80/20 mounting rails found on much industrial electronic equipment, making it compatible with existing mounts and power-distribution systems and keeping costs to a minimum.

Microsoft has done a lot to make its Azure Cognitive Services portable, delivering containerized runtimes that let you use edge hardware for inferencing instead of sending data to centralized Azure resources. This approach saves on bandwidth costs: your applications receive a much smaller set of results rather than gigabytes of streamed images. Edge sites are often bandwidth constrained, so it lets you run machine learning–based applications where they’re needed, not where bandwidth happens to be available.
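
As a sketch of what edge-hosted inferencing looks like in practice, the hypothetical example below posts a captured frame to a Cognitive Services Read OCR container running locally rather than to the cloud endpoint. It assumes the container has already been started (the containers also require billing and EULA arguments at launch) and is listening on localhost:5000; the host, port, and image filename are placeholders.

```python
# Hypothetical sketch: call a Cognitive Services container running locally at
# the edge instead of the cloud endpoint. Assumes the Read OCR container is
# already running and listening on localhost:5000; host, port, and the image
# file are example values.
import time
import requests

ENDPOINT = "http://localhost:5000"  # local container, no cloud round trip

with open("frame.jpg", "rb") as f:
    image_bytes = f.read()

# Submit the image; the Read API is asynchronous and returns a polling URL
# in the Operation-Location header.
response = requests.post(
    f"{ENDPOINT}/vision/v3.2/read/analyze",
    headers={"Content-Type": "application/octet-stream"},
    data=image_bytes,
)
response.raise_for_status()
operation_url = response.headers["Operation-Location"]

# Poll until the container has finished processing the frame.
while True:
    result = requests.get(operation_url).json()
    if result["status"] in ("succeeded", "failed"):
        break
    time.sleep(0.5)
print(result)
```

Because the container mirrors the hosted service’s REST surface, only the final JSON result ever needs to leave the site, which is the bandwidth saving the article describes.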

A programmable smart device for beginners

Getting started requires the relatively low-cost Azure Percept DK, currently selling for $349. It comes in two parts: an edge compute unit and a smart camera. A third component, a smart microphone, is available for audio-based prediction applications, such as monitoring motors for signs of possible failure. The edge compute system is based on an NXP Arm system, running Microsoft’s own CBL-Mariner Linux distribution, and the camera uses an Intel Movidius dedicated computer vision system. Both are designed to get you going quickly. Microsoft suggests you can go from “out-of-the-box to first AI frames in under 10 minutes.”

Applications are developed in the cloud-based Azure Percept Studio, with a selection of prebuilt models. If you’re familiar with Microsoft’s Cognitive Services tools, you can also use the Azure Machine Learning studio or a local development environment based on Visual Studio Code. The local toolkit is built on Python and TensorFlow, with Intel’s OpenVINO supporting the Movidius vision processor. Other deployment environments, such as Nvidia’s, are supported too, so you can build your own cameras using Jetson hardware or work with third-party vendors to add their hardware to a Percept deployment. The tools can be downloaded as a single dev pack that builds out a ready-to-use environment on Windows, macOS, or Linux.
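
As a rough illustration of that local toolchain, the hypothetical sketch below loads an OpenVINO IR model (the format produced by converting a TensorFlow model with OpenVINO’s model optimizer) and compiles it for the Movidius VPU, which OpenVINO addresses as the “MYRIAD” device. The model filename is a placeholder.

```python
# Hypothetical sketch: load an OpenVINO IR model and compile it for the
# Movidius VPU, which OpenVINO exposes as the "MYRIAD" device. The model
# filename is a placeholder; the IR would typically come from converting a
# TensorFlow model with OpenVINO's model optimizer.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")            # IR produced by the converter
compiled = core.compile_model(model, "MYRIAD")  # target the Movidius VPU

# Run one inference on dummy data shaped to the model's input.
input_tensor = np.random.rand(*compiled.input(0).shape).astype(np.float32)
result = compiled([input_tensor])[compiled.output(0)]
print(result.shape)
```

Swapping the device string (for example, to "CPU" for local testing) is all it takes to retarget the same model, which is what makes the toolkit usable before any Percept hardware arrives.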

Copyright © 2022 IDG Communications, Inc.
