ONNX has been around for a while, and it is becoming a successful intermediate format for moving (often heavy) trained neural networks from one training tool to another (e.g., between PyTorch and TensorFlow), or for deploying models in the cloud using the ONNX runtime. However, ONNX can be put to much more versatile use: it can easily be used to manually specify AI/ML processing pipelines, including all the pre- and post-processing that is often necessary for real-world deployments. In this tutorial we show how to use the onnx helper tools in Python to create an ONNX image processing pipeline from scratch and deploy it efficiently.
Creating ONNX from scratch II: Image processing
Maurits Kaptein · February 8, 2021

AI & Art: In times of corona
Robin van Emden · February 7, 2021

Scailable is proud to provide the AI behind Johan Nieuwenhuize's new installation, "in times of corona", now at the Haags Historisch Museum after a successful exhibit at STROOM The Hague.
Tutorial: Creating ONNX from scratch.
Maurits Kaptein · February 5, 2021

ONNX has been around for a while, and it is becoming a successful intermediate format for moving (often heavy) trained neural networks from one training tool to another (e.g., between PyTorch and TensorFlow), or for deploying models in the cloud using the ONNX runtime. In these cases users often simply save a model to ONNX format, without worrying about the resulting ONNX graph.
Repurposing old hardware for new AI
Robin van Emden · January 25, 2021

Scailable deploys your AI and ML models instantly, anywhere. And by anywhere, we do mean, well, anywhere. As a demonstration, today we successfully deployed a 2021 visual convolutional neural network to a 1993 laptop.
The making of “Update AI/ML models OtA”
Maurits Kaptein · January 16, 2021

Silo surface maintenance using drones
Robin van Emden · January 16, 2021

How TensorFlow, ONNX, WebAssembly, and the Scailable platform team up to automatically detect and restore cracks in concrete surfaces.
Bandits, WebAssembly, and IoT
Maurits Kaptein · January 16, 2021

An uncommon combination allows efficient sequential learning on the edge.
Efficient reinforcement learning on the edge
Maurits Kaptein · January 16, 2021

With orthogonal persistence we can implement sequential learning on edge devices.
Exploiting the differences between model training and prediction
Maurits Kaptein · January 16, 2021

Reducing the memory footprint and improving the speed and portability of deployed models.
A vision on AI deployment on the edge
Maurits Kaptein · January 3, 2021

As more and more AI models are making their way to production (i.e., they are actually being used in day-to-day business processes), an active discussion has emerged around two questions: "how should AI models be deployed?" (e.g., using bloated containers? By rebuilding to stand-alone executables? By using "end-to-end" platforms with strong vendor lock-in?) and "where should AI models be deployed?" (e.g., on a central cloud? On edge devices? As "close" as possible to the data-generating sensors?).
For 2021 we are sharing our vision of AI deployment going forward. You can download the full white paper here, or read the summary below.