Maurits Kaptein

Emotion recognition on the Edge using LoRa

How AI on the Edge (provided by Scailable) and LoRa (provided by KPN Things) jointly enable novel (I)IoT applications. This tutorial describes a demo we recently presented during the 19th KPN Startup Afternoon, where we showed how the Scailable Edge AI deployment platform can be used to deploy a fairly complex AI model (a deep neural network recognizing human emotions)…
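Because the model runs on the device itself, only the inference result has to travel over LoRa, whose uplink payloads are limited to a few dozen bytes. Below is a minimal sketch of such a compact uplink; the emotion labels and the two-byte encoding are illustrative assumptions, not the demo's actual wire format.

```python
import struct

# Hypothetical label set; the demo's actual classes may differ.
EMOTIONS = ["neutral", "happy", "sad", "angry", "surprised"]

def encode_uplink(class_idx: int, confidence: float) -> bytes:
    # LoRaWAN uplinks are tiny, so we send only the inference result:
    # one byte for the class id, one byte for confidence scaled to 0-255.
    return struct.pack("BB", class_idx, int(round(confidence * 255)))

payload = encode_uplink(EMOTIONS.index("happy"), 0.92)  # 2 bytes total
```

Sending two bytes per detection instead of a camera frame is what makes LoRa, with its very low bandwidth, a viable transport for this kind of application.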

Face blurring demo

Image processing is one of the prominent uses of AI models. With the Scailable platform we can easily deploy image processing models anywhere. For example, with a click of a button, we were able to deploy a face blurring deep neural network to the browser. Deploying such models in the browser (as opposed to in…
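The article itself ships a deep neural network to the browser; purely to illustrate the face-blurring operation, here is a minimal local sketch that substitutes OpenCV's bundled Haar cascade for the network (the file names are placeholders).

```python
import cv2

# Classical face detector as a stand-in for the article's neural network.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("input.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Blur each detected face region in place.
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    img[y:y+h, x:x+w] = cv2.GaussianBlur(img[y:y+h, x:x+w], (51, 51), 0)

cv2.imwrite("blurred.jpg", img)
```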

Creating ONNX from scratch II: Image processing

ONNX has been around for a while, and it is becoming a successful intermediate format for moving (often heavy) trained neural networks from one training tool to another (e.g., from PyTorch to TensorFlow), or for deploying models in the cloud using the ONNX runtime. However, ONNX can be put to a much more versatile…
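To give a flavor of what building ONNX "from scratch" for image processing looks like, here is a minimal hand-constructed graph (not taken from the article) that turns an RGB image into grayscale with a single ReduceMean operator over the channel axis.

```python
import onnx
from onnx import helper, TensorProto

# Declare input/output: a channels-first RGB image, fixed at 64x64 for brevity.
inp = helper.make_tensor_value_info("image", TensorProto.FLOAT, [3, 64, 64])
out = helper.make_tensor_value_info("gray", TensorProto.FLOAT, [1, 64, 64])

# One operator: average over the channel axis -> naive grayscale.
node = helper.make_node("ReduceMean", inputs=["image"], outputs=["gray"],
                        axes=[0], keepdims=1)

graph = helper.make_graph([node], "to_grayscale", [inp], [out])
# Opset 13, where ReduceMean still takes axes as an attribute.
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
onnx.checker.check_model(model)
onnx.save(model, "grayscale.onnx")
```

No training tool is involved: the graph is assembled operator by operator, which is exactly the versatility this approach exploits.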

Tutorial: Creating ONNX from scratch

ONNX has been around for a while, and it is becoming a successful intermediate format for moving (often heavy) trained neural networks from one training tool to another (e.g., from PyTorch to TensorFlow), or for deploying models in the cloud using the ONNX runtime. In these cases users often simply save a model to…
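That "simply save a model" route usually amounts to a one-line export call in the training tool; here is a minimal PyTorch sketch, in which the toy model and tensor names are illustrative assumptions.

```python
import torch
import torch.nn as nn

# A trivially small stand-in for a trained model.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

dummy = torch.randn(1, 4)  # example input that fixes the exported graph's shapes
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["features"], output_names=["logits"])
```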

The making of “Update AI/ML models OtA”

Deploying trained AI models on edge devices might seem challenging. However, using minimal WebAssembly runtimes and automatic conversion from ONNX to WebAssembly, modular Over-the-Air (OtA) deployment of AI/ML models to pretty much any edge device is possible.
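Scailable's tooling handles the conversion and delivery itself; purely to illustrate the device-side mechanics of an OtA swap, here is a hedged sketch (the URLs and paths are hypothetical) that downloads a new WebAssembly model, verifies its checksum, and replaces the old one atomically.

```python
import hashlib
import os
import tempfile
import urllib.request

# Hypothetical endpoints and path; a real model registry's URLs differ.
MODEL_URL = "https://example.com/models/latest.wasm"
SHA_URL = "https://example.com/models/latest.sha256"
MODEL_PATH = "/opt/models/current.wasm"

def fetch_and_swap() -> None:
    blob = urllib.request.urlopen(MODEL_URL).read()
    expected = urllib.request.urlopen(SHA_URL).read().decode().strip()
    if hashlib.sha256(blob).hexdigest() != expected:
        raise ValueError("checksum mismatch; keeping the current model")
    # Write to a temp file in the same directory, then rename: os.replace
    # is atomic, so a crash mid-download never leaves a half-written model.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(MODEL_PATH))
    with os.fdopen(fd, "wb") as f:
        f.write(blob)
    os.replace(tmp, MODEL_PATH)
```

Because the model is a self-contained WebAssembly module, swapping the file and reloading it in the runtime is essentially all an update requires, which is what makes the deployment modular.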

A vision on AI deployment on the edge

As more and more AI models are making their way to production (i.e., they are actually being used in day-to-day business processes), an active discussion has emerged around the questions of how AI models should be deployed (using bloated containers? by rebuilding them as stand-alone executables? by using "end-to-end" platforms with strong vendor lock-in?) and where AI models should…