Frequently Asked Questions
Plai™ Plug is a USB thumb stick designed for developers to prototype and create AI solutions. It features a Lightspeeur® accelerator chip that provides on-device and real-time inferencing capability.
The Lightspeeur® accelerator supports Convolutional Neural Networks (CNNs), enabling a wide range of applications including object recognition, image and video classification, natural language processing, and more.
Plai Plug connects to the host processor via a USB interface. Plai Wifi includes both a battery and a host processor, and connects to mobile devices via Wi-Fi.
Currently, our Lightspeeur series of chips is commercially available for inferencing only.
Yes, but it depends on the model and how much it differs from our model specifications. Please refer to the Model Development Specification to start.
This varies depending on the chip generation. All our accelerators can support VGG-like models. Lightspeeur 2803, our 2nd generation accelerator, can additionally support ResNet, MobileNet and their variants. For further details, please refer to the appropriate Model Development Specification guides.
RNNs (Recurrent Neural Networks) are well suited to tasks such as speech and handwriting recognition. Our accelerators do not support RNNs, but we have implemented some of these use cases with CNNs.
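One common way a CNN can stand in for an RNN on sequence data is to slide a 1-D convolution kernel along the time axis. The sketch below illustrates the idea with plain NumPy; the function name `conv1d` and the toy edge-detection kernel are illustrative only, not part of any GTI API.

```python
import numpy as np

def conv1d(sequence, kernel):
    """Slide a 1-D kernel across a sequence, the way a CNN layer
    scans time steps (no padding, stride 1)."""
    n, k = len(sequence), len(kernel)
    return np.array([np.dot(sequence[i:i + k], kernel)
                     for i in range(n - k + 1)])

# Toy example: a [-1, 1] kernel responds to rising and falling edges,
# a crude stand-in for the temporal patterns an RNN would learn.
signal = np.array([0.0, 0.0, 1.0, 1.0, 0.0])
edges = conv1d(signal, np.array([-1.0, 1.0]))
```

Stacking such layers (with pooling) lets a purely convolutional model capture longer-range temporal structure, which is how RNN-style use cases can be mapped onto a CNN-only accelerator.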
R-CNNs (Regions with CNNs) are good for object detection and tracking. Our accelerators can accelerate the successive CNN calls that R-CNNs make on different subsections of the input image.
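The region-wise pattern above amounts to cropping each proposed region and running the same CNN on every crop. The sketch below shows that loop in NumPy; `chip_infer` is a hypothetical stand-in for the on-chip forward pass (the real call would go through the GTI SDK, whose API is not shown here).

```python
import numpy as np

def chip_infer(patch):
    # Hypothetical stand-in for one on-chip CNN forward pass.
    # Here it just returns the patch's mean brightness as a "score".
    return float(patch.mean())

def classify_regions(image, regions):
    """Run the same CNN over each proposed region (x, y, w, h),
    mirroring how an R-CNN pipeline issues successive CNN calls."""
    scores = {}
    for (x, y, w, h) in regions:
        patch = image[y:y + h, x:x + w]
        scores[(x, y, w, h)] = chip_infer(patch)
    return scores

# Toy image: one bright object in the top-left quadrant.
image = np.zeros((8, 8))
image[2:4, 2:4] = 1.0
regions = [(0, 0, 4, 4), (4, 4, 4, 4)]
scores = classify_regions(image, regions)
```

Because each region is an independent CNN invocation, the accelerator can process them back to back, which is where the speedup over host-side inference comes from.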
GTI provides Model Development Kits (MDK) for Caffe, TensorFlow and PyTorch. The kits are used to port part or all of your existing CNN model onto GTI devices with comparable performance to the original floating-point model.
You can either train a model from scratch or fine-tune from a pre-trained floating-point model. The following general steps are recommended to properly use the TensorFlow MDK:
- Train a full floating-point model.
- Fine-tune with the GTI custom quantized convolutional layers.
- Fine-tune with the GTI custom quantized activation layers.
- Convert the model into the GTI format and evaluate on-chip performance.
- Make incremental improvements as needed.
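To make the fine-tuning steps above concrete, the sketch below shows what a quantized layer approximates during training: weights are rounded to a small fixed grid, then dequantized, so the model learns to tolerate the precision loss. This is a generic uniform symmetric quantization sketch in NumPy; the actual scheme used by the GTI quantized layers may differ.

```python
import numpy as np

def quantize_weights(w, bits=8):
    """Uniform symmetric fake-quantization of a weight tensor:
    round to a 2**(bits-1)-1 level grid, then map back to floats.
    Conceptual sketch only; not the GTI MDK's exact scheme."""
    levels = 2 ** (bits - 1) - 1           # e.g. 127 for 8-bit
    scale = np.max(np.abs(w)) / levels     # grid step size
    q = np.round(w / scale)                # integer code
    return q * scale                       # dequantized value seen in training

w = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
wq = quantize_weights(w)
```

Fine-tuning with such layers in place (first convolutions, then activations, per the steps above) lets the network recover the accuracy lost to rounding before the model is converted to the GTI format.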
Yes, you can train using 3rd party hardware and software. Then import your pre-trained model and run it through our Model Development Kit (MDK), which optimizes and converts it for inferencing on our chip.
If you have a floating-point model trained using another framework (e.g. CNTK, MXNet), you must first convert that model to TensorFlow, Caffe, or PyTorch. Then use the corresponding Model Development Kit (MDK) to follow the rest of the workflow and convert your model into a GTI-compatible model.