[Image: 386_dnn.jpg]

Repurposing old hardware for new AI

[Embedded YouTube video]

Technical details

The deployed model is a Keras MNIST handwritten digit recognition convolutional neural network exported to ONNX. This ONNX file was uploaded to Scailable, where our toolchains optimized, int8 quantized, and transpiled the model. We then deployed the model to a 1993 HP Omnibook 300 (Intel 386SXLV based) MS-DOS 5 notebook with 2 MB of memory.
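For context, the export step looks roughly like the sketch below. This is illustrative only: the network architecture, training details, filenames, and the use of tf2onnx are assumptions for the example, not Scailable's exact pipeline.

# Illustrative sketch: a small Keras MNIST CNN exported to ONNX with tf2onnx.
import tensorflow as tf
import tf2onnx

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1), name="input"),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# ... train on the MNIST dataset here ...

# Export the trained Keras model to an ONNX file, ready for upload.
spec = (tf.TensorSpec((None, 28, 28, 1), tf.float32, name="input"),)
tf2onnx.convert.from_keras(model, input_signature=spec, output_path="mnist_cnn.onnx")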

As the model reads each input image, the image is converted to ASCII art and printed to the MS-DOS console.
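The console renderer itself lives in the MS-DOS binary and is not shown here; purely as an illustration, mapping a 28x28 grayscale MNIST image to printable characters can be sketched in Python like this (the character ramp and function name are assumptions):

# Hypothetical sketch of rendering a 28x28 grayscale image as ASCII art.
import numpy as np

ASCII_RAMP = " .:-=+*#%@"  # characters ordered from dark to light

def image_to_ascii(image):
    """Map each pixel (0-255) to a ramp character and join rows into lines."""
    indices = (image.astype(np.float32) / 255.0 * (len(ASCII_RAMP) - 1)).astype(int)
    return "\n".join("".join(ASCII_RAMP[i] for i in row) for row in indices)

# Example: print a random 28x28 "image" to the console.
print(image_to_ascii(np.random.randint(0, 256, (28, 28), dtype=np.uint8)))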

Since the HP Omnibook 300's 386 processor lacks an FPU, running the default float32 version of the MNIST DNN would have required emulating floating-point operations in software. Int8 quantization allowed us to circumvent this.
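The quantization step is handled by the Scailable toolchain. As a generic illustration of what int8 post-training quantization of an ONNX file involves (here using ONNX Runtime's quantization utilities rather than our tooling; the filenames are made up):

# Rough sketch: dynamic int8 post-training quantization of an ONNX model.
from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic(
    model_input="mnist_cnn.onnx",        # float32 model exported from Keras
    model_output="mnist_cnn_int8.onnx",  # weights stored as int8
    weight_type=QuantType.QInt8,
)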

Of course, we also fully support hardware acceleration when and where available. But if you need to run your AI or ML models on legacy hardware (without an NPU, GPU or even FPU), Scailable has you covered!

Robin van Emden