After many long, intensive hours, I finally have my SecurityCam in a state I would like to present to you. It uses cutting-edge technologies that are worth a closer look. But before I start the technology deep dive in a multi-part report, I would like to share my motivation. Maybe you can see yourself in it?
Why a smart SecurityCam?
For a long time, I have wanted to know what is happening at my front door. Above all, I wanted to know when the postman arrived. Anyone with small children who are supposed to take an afternoon nap knows why: nobody wants them woken by a shrill doorbell. Something our postmen mostly cannot know (or don't want to 😈).
I therefore simply wanted to build a system that would inform me when the postman entered my courtyard. The trouble is, most SecurityCam systems don't know what a postman is. So I tried my own approach... and this is what came out:
This multi-part article is full of technologies that I would like to explain to you. Perhaps the list below will make you curious.
- Azure IoT Hub
- Azure IoT Edge
- Computer Vision API
- Custom Vision API
- TensorFlow
- Azure Blob Storage
- Jetson Nano
Roughly described, my hardware setup consists of a Jetson Nano (an embedded device from NVidia), connected to the Internet via Ethernet (WLAN also works), and a webcam. Since machine learning always needs a bit more power and therefore generates heat, I attached a fan.
I have outlined the software-side components below as a high-level architecture. The Jetson Nano runs as an edge device and is connected to Microsoft Azure via the IoT Hub (cloud gateway). A loudspeaker enables machine-to-human communication.
The Nano can work offline, but also communicates with the cloud whenever an internet connection is available. The Jetson Nano analyzes the images that the webcam delivers. Every recognized object is cropped out and stored in local storage. If the system detects a Postbus, the message "I see a Postbus" is played through the loudspeaker at the same time. If the internet connection is intact, the recognized object snippets are transferred to online storage and deleted locally. In parallel, the event data (when which object was seen) is streamed to the cloud as time-series data. The following picture shows the incoming events in a filterable chart.
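The per-detection logic described above can be sketched roughly as follows. This is a simplified, hypothetical illustration, not the actual implementation (which is built from Azure IoT Edge modules covered later in the series); all function and field names here are my own assumptions.

```python
import time
from dataclasses import dataclass


@dataclass
class Detection:
    """One object recognized in a webcam frame (illustrative structure)."""
    label: str          # e.g. "Postbus", "car", "person"
    confidence: float   # 0.0 .. 1.0
    snippet_path: str   # local path of the cropped object image


def handle_detection(det, online, speak, upload, send_event):
    """Process a single detection:
    announce a Postbus over the loudspeaker, and when online,
    move the snippet to cloud storage and stream the event data.
    The snippet is assumed to already be saved locally by the cropper."""
    actions = []
    if det.label == "Postbus":
        speak("I see a Postbus")          # machine-to-human communication
        actions.append("announced")
    if online:
        upload(det.snippet_path)          # transfer snippet to online storage
        send_event({                      # time-series event for the cloud
            "label": det.label,
            "confidence": det.confidence,
            "ts": time.time(),
        })
        actions.append("uploaded")
    else:
        actions.append("stored-locally")  # snippet stays on the device
    return actions
```

Passing `speak`, `upload`, and `send_event` in as callables keeps the decision logic testable offline; in the real system those would be backed by a text-to-speech module, Azure Blob Storage, and the IoT Hub respectively.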
The next part is "Custom Vision – I recognize the Postbus", in which I describe how I built a machine learning model for recognizing a Postbus.
About the Author:
As a Microsoft MVP for Azure, Thomas Tomow contributes to the community around modern cloud technologies (e.g., as host of the Azure Meetup Konstanz and the Azure Meetup Stuttgart). He works at CGI in Germany as a Director, leading a team of specialists in Cloud, AI, and IoT. He practices Karate to balance his life and shares his experience and knowledge with like-minded people.
Tomow, T. (2020). Azure Custom Vision – Technology Deep-Dives Part 1. Available at: http://www.tomow.de/de/entw/ki-ml/ai-powered-security-cam-technology-deep-dives-part-1/ [Accessed: 18th May 2020].