Maurits Kaptein

Event booth analytics. Locally.

Although most trade fairs are regretfully suspended at the moment, we are happy to share a recent solution that was built using our patented software: simple and elegant, local event booth analytics. Together with Advantech, we set up an interactive demo of our technology during the recent Vision summit.

Bringing the AI manager to your local machine

Our AI manager ensures that our clients can run AI/ML models securely on various edge devices and makes it easy to monitor performance and introduce new models. We have successfully installed the AI manager on a large number of edge devices (e.g., the Advantech ICR32xx and 42xx series, the Siemens IOT2050, etc.). For developers we

Product quality control

We are happy to share the newest addition to our solutions: we can now fully integrate our product quality control solution with a Jaka cobot. Co-developed with HPS, this solution makes it super easy to use AI and machine vision models modularly and securely in flexible production environments.

Emotion recognition on the Edge using LoRa

How AI on the Edge (provided by Scailable) and LoRa (provided by KPN Things) jointly enable novel (I)IoT applications. This tutorial describes a demo we recently presented during the 19th KPN Startup Afternoon; we demonstrated how we can use the Scailable Edge AI deployment platform to deploy a fairly complex AI model (a deep neural network recognizing human emotions)

Face blurring demo

Image processing is one of the most prominent uses of AI models. With the Scailable platform we can easily deploy image processing models anywhere. For example, with a click of a button, we were able to deploy a face-blurring deep neural network to the browser. Deploying such models in the browser (as opposed to in
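The actual demo deploys a deep neural network to the browser via the Scailable platform; purely as an illustration of the face-blurring idea itself, the minimal Python sketch below uses OpenCV's classical Haar cascade instead of a neural network (the input file name is a placeholder):

```python
import cv2

# Classical Haar cascade shipped with OpenCV; a stand-in for the deep neural
# network used in the actual browser demo.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("input.jpg")  # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Blur every detected face region in place.
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    img[y:y + h, x:x + w] = cv2.GaussianBlur(img[y:y + h, x:x + w], (51, 51), 0)

cv2.imwrite("blurred.jpg", img)
```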

Creating ONNX from scratch II: Image processing

ONNX has been around for a while, and it is becoming a successful intermediate format for moving (often heavy) trained neural networks from one training tool to another (e.g., between PyTorch and TensorFlow), or for deploying models in the cloud using the ONNX Runtime. However, ONNX can be put to a much more versatile
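The full tutorial builds an image-processing pipeline; as a minimal sketch of the general idea of constructing an ONNX graph by hand with the onnx Python helpers, here is a toy thresholding graph (the operator, names, and shapes are illustrative, not taken from the post):

```python
import numpy as np
import onnx
import onnxruntime as ort
from onnx import TensorProto, helper

# A constant threshold baked into the graph as an initializer (illustrative value).
threshold = helper.make_tensor("threshold", TensorProto.FLOAT, [1], [128.0])

# A single Greater node: mask = (image > threshold), a toy image-processing step.
node = helper.make_node("Greater", inputs=["image", "threshold"], outputs=["mask"])

graph = helper.make_graph(
    nodes=[node],
    name="threshold_graph",
    inputs=[helper.make_tensor_value_info("image", TensorProto.FLOAT, [1, 1, 28, 28])],
    outputs=[helper.make_tensor_value_info("mask", TensorProto.BOOL, [1, 1, 28, 28])],
    initializer=[threshold],
)

model = helper.make_model(graph, producer_name="onnx-from-scratch-sketch")
onnx.checker.check_model(model)
onnx.save(model, "threshold.onnx")

# Run the hand-built graph with ONNX Runtime to verify it behaves as expected.
sess = ort.InferenceSession("threshold.onnx", providers=["CPUExecutionProvider"])
image = (np.random.rand(1, 1, 28, 28) * 255).astype(np.float32)
mask = sess.run(None, {"image": image})[0]
print(mask.dtype, mask.mean())  # boolean mask, roughly half True
```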

Tutorial: Creating ONNX from scratch (deprecated).

ONNX has been around for a while, and it is becoming a successful intermediate format for moving (often heavy) trained neural networks from one training tool to another (e.g., between PyTorch and TensorFlow), or for deploying models in the cloud using the ONNX Runtime. In these cases users often simply save a model

The making of “Update AI/ML models OtA”

Deploying trained AI models on edge devices might seem challenging. However, using minimal WebAssembly runtimes and automatic conversion from ONNX to WebAssembly, modular AI/ML model deployment Over the Air (OtA) to pretty much any edge device is possible.
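The ONNX-to-WebAssembly conversion in the post is Scailable's own tooling, but the runtime half of the story can be sketched with any minimal WebAssembly runtime. The snippet below uses the wasmtime Python bindings (an assumption, not necessarily the runtime from the post) with a trivial inline module standing in for a converted model; an over-the-air update then amounts to swapping in new module bytes and re-instantiating:

```python
from wasmtime import Engine, Instance, Module, Store

# Stand-in for a module received over the air; the real .wasm would come from
# the ONNX-to-WebAssembly conversion. This toy module just adds one to its input.
WAT = """
(module
  (func (export "infer") (param i32) (result i32)
    local.get 0
    i32.const 1
    i32.add))
"""

engine = Engine()
store = Store(engine)
module = Module(engine, WAT)            # compile the (newly received) module
instance = Instance(store, module, [])  # instantiate it in a fresh sandbox
infer = instance.exports(store)["infer"]

print(infer(store, 41))  # -> 42; swapping in new module bytes and re-instantiating = OtA update
```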

A vision on AI deployment on the edge

As more and more AI models are making their way to production (i.e., they are actually being used in day-to-day business processes), an active discussion has emerged questioning “how AI models should be deployed” (e.g., using bloated containers? By rebuilding to stand-alone executables? By using “end-to-end” platforms with strong vendor lock-in?) and “where AI models should