Running on NVIDIA ORIN
Very cool: the first runs of our library models on NVIDIA ORIN, from start to end, using our managed deployment platform.
One of the core values of our edge AI middleware, which allows effortless managed deployment of any AI model to any of our supported devices, is that it abstracts away the hardware targets of an edge AI solution. Solution builders can focus on the value of their solution, rather than spending time and frustration on moving their model to their targeted hardware.
A recent addition to the edge AI accelerator landscape is the amazing NVIDIA ORIN. The ORIN is fast and highly energy efficient, and it is available in a number of platforms from OEMs that we partner with closely.
We are now happy (and proud) to provide ORIN support for our Edge AI Middleware. What does this mean? Well, when you install our AI manager on an ORIN-based device, it will automatically detect the ORIN, install a version of our middleware that supports it, and make sure that your full AI pipeline is deployed as efficiently as possible to your new ORIN device. Pretty cool, no?
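To make the "automatically detect ORIN" step a bit more concrete, here is a minimal, illustrative sketch of how such hardware detection can work on NVIDIA Jetson-class devices, which expose a readable board name through the device tree. This is not our actual AI manager implementation; the detect_orin helper and the surrounding behaviour are assumptions made purely for illustration.

```python
from pathlib import Path

# Illustrative sketch only: NVIDIA Jetson boards (including ORIN modules)
# expose a human-readable board name in the device tree. The real AI
# manager detection logic is more involved; this just shows the idea.
def detect_orin(model_file: str = "/proc/device-tree/model") -> bool:
    """Return True if the device identifies itself as an NVIDIA ORIN."""
    path = Path(model_file)
    if not path.exists():
        return False  # not a Jetson-class device (or not running on one)
    # The device-tree string is NUL-terminated; strip that before matching.
    model = path.read_text(errors="ignore").rstrip("\x00").lower()
    return "orin" in model

if __name__ == "__main__":
    if detect_orin():
        print("ORIN detected: install the ORIN-enabled middleware build.")
    else:
        print("No ORIN found: fall back to a generic middleware build.")
```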
So, if you are moving from an earlier NVIDIA GPU to the ORIN and are struggling to redo your pipeline on your newly targeted edge device, please do book a demo with us to see how our edge AI middleware can make life easier and provide over-the-air managed deployment of your full AI pipeline. If you are an OEM integrating NVIDIA ORIN in your platform, please do see our support packages and reach out if you want to be added to our supported devices.
We will have a larger release of the ORIN integration soon, but for now please know that we are up and running!

Node-RED Support for Effortless Automation
Our Scailable middleware, the Scailable AI manager, makes AI/ML model deployment effortless. Our value is to abstract away from specific hardware targets (i.e., if you are using Scailable to deploy to any supported device, as a developer you don’t have to worry about the target or the accelerators therein) and allow virtually any modeling platform […]

Complete Your Edge AI Cycle with Scailable and Network Optix VMS Integration
At Scailable, we believe that effortless edge deployment is crucial for unlocking the full potential of artificial intelligence in the real world. That’s why we offer a go-to middleware solution that enables you to seamlessly run models on any supported edge device, with ease and without any hiccups. But we understand that simply running an […]

Extending device support for Advantech UNO and SKY
With Scailable we are actively building the go-to model pipeline layer for any edge AI device out there. With the Scailable AI manager installed on a given edge device you can effortlessly deploy and manage models from virtually any training platform. We ensure the pipeline is as fast as it can be, remotely (OtA) configurable, […]
Want to know more?
Visit our Resources Hub or contact us to learn how Scailable can improve your business operations.