Frequently Asked Questions

What is the Plai™ Plug?

Plai™ Plug is a USB thumb stick designed for developers to prototype and create AI solutions. It features a Lightspeeur® accelerator chip that provides on-device, real-time inferencing.

The Lightspeeur® accelerator runs Convolutional Neural Networks (CNNs), enabling a wide range of applications including object recognition, image and video classification, natural language processing, and more.


What is the difference between the Plai Plug and the Plai WiFi?

The Plai Plug connects to a host processor via a USB interface. The Plai WiFi includes both a battery and a host processor, and connects to mobile devices via WiFi.


Can your chips be used for training as well as inferencing?

Currently, our Lightspeeur series of chips is commercially available for inferencing only.


Does the SDK include any pre-trained models?

Yes, our SDK includes some pre-trained models.


Can I run my own models on your chips?

Yes, but it depends on the model and how much it differs from our model specifications. Please refer to the Model Development Specification to get started.


Which model architectures do your accelerators support?

This varies by chip generation. All of our accelerators support VGG-like models. Lightspeeur 2803, our second-generation accelerator, additionally supports ResNet, MobileNet, and their variants. For further details, please refer to the appropriate Model Development Specification guides.


Do your accelerators support RNNs or R-CNNs?

RNNs (Recurrent Neural Networks) are well suited to tasks such as speech and handwriting recognition. Our accelerators do not support RNNs, but we have implemented some of these use cases with CNNs.

R-CNNs (Regions with CNNs) are well suited to object detection and tracking. Our accelerators can accelerate the successive CNN calls that this approach makes over different subsections of the input image, as sketched below.
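
As an illustration, the following minimal sketch (not a GTI API) shows region-based inference, where one CNN is invoked on successive sub-windows of an image. The classify placeholder and the window grid are assumptions for illustration only:

    # Minimal sketch of region-based (R-CNN-style) inference: the same
    # CNN is called repeatedly on sub-windows of one image.
    import numpy as np

    def classify(patch: np.ndarray) -> int:
        """Hypothetical stand-in for one on-chip CNN inference call."""
        raise NotImplementedError

    def detect(image: np.ndarray, win: int = 224, stride: int = 112):
        """Slide a window over the image and classify each crop."""
        h, w = image.shape[:2]
        detections = []
        for y in range(0, h - win + 1, stride):
            for x in range(0, w - win + 1, stride):
                crop = image[y:y + win, x:x + win]
                detections.append(((x, y, win, win), classify(crop)))
        return detections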


How do I develop a model for GTI devices?

GTI provides Model Development Kits (MDKs) for Caffe, TensorFlow, and PyTorch. The kits are used to port part or all of an existing CNN model onto GTI devices, with performance comparable to the original floating-point model.

You can either train a model from scratch or fine-tune from a pre-trained floating-point model. The following general steps are recommended for using the TensorFlow MDK properly (a generic sketch of the quantized fine-tuning steps follows the list):

  1. Train a full floating-point model.
  2. Fine-tune with the GTI custom quantized convolutional layers.
  3. Fine-tune with the GTI custom quantized activation layers.
  4. Convert the model into the GTI format and evaluate on-chip performance.
  5. Make incremental improvements as needed.
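
The GTI custom quantized layers ship with the MDK itself. As a generic illustration of what steps 2 and 3 do, the sketch below uses TensorFlow's built-in fake-quantization op in place of the GTI layers; the clipping range and bit width are assumptions:

    # Generic sketch of quantization-aware fine-tuning (steps 2-3).
    # The GTI MDK supplies its own custom layers; TensorFlow's built-in
    # fake-quant op is used here purely to illustrate the idea.
    import tensorflow as tf

    class QuantConv2D(tf.keras.layers.Conv2D):
        """Conv2D that fake-quantizes its kernel in the forward pass,
        standing in for a GTI custom quantized convolutional layer."""
        def call(self, inputs):
            q_kernel = tf.quantization.fake_quant_with_min_max_args(
                self.kernel, min=-1.0, max=1.0, num_bits=8)  # assumed
            outputs = tf.nn.conv2d(inputs, q_kernel,
                                   strides=list(self.strides),
                                   padding=self.padding.upper())
            if self.use_bias:
                outputs = tf.nn.bias_add(outputs, self.bias)
            if self.activation is not None:
                outputs = self.activation(outputs)
            return outputs

    # Step 2 in practice: rebuild the trained float model with these
    # layers, load the float weights, and fine-tune at a low learning rate.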

Can I train my model using third-party hardware and software?

Yes, you can train using third-party hardware and software. Then import your pre-trained model into our Model Development Kit (MDK), which optimizes and converts it for inferencing on our chips.


What if my model was trained in another framework?

If you have a floating-point model trained in another framework (e.g., CNTK or MXNet), you must first convert it to TensorFlow, Caffe, or PyTorch. Then use the corresponding Model Development Kit (MDK) to complete the workflow and produce a GTI-compatible model.
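
For example, one common route (not specific to GTI) is to go through ONNX. The sketch below converts an MXNet model to TensorFlow, assuming the mxnet, onnx, and onnx-tf packages; the file names, input shape, and exact export API (which varies across MXNet versions) are assumptions:

    # Illustrative sketch: MXNet -> ONNX -> TensorFlow.
    import numpy as np
    import onnx
    from mxnet.contrib import onnx as onnx_mxnet
    from onnx_tf.backend import prepare

    # Export the trained MXNet symbol/params pair to an ONNX file.
    onnx_path = onnx_mxnet.export_model(
        "model-symbol.json", "model-0000.params",
        [(1, 3, 224, 224)], np.float32, "model.onnx")

    # Load the ONNX graph into TensorFlow and save it for the MDK workflow.
    tf_rep = prepare(onnx.load(onnx_path))
    tf_rep.export_graph("model_tf")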


Which operating systems and platforms does the SDK support?

The SDK supports the Linux, Windows, and Android operating systems. Under Linux, besides the x86_64 PC architecture (32-bit x86 is not supported), various ARM platforms are supported, such as the Raspberry Pi 3. The default SDK release includes packages for armv7l, armv8, and aarch64.
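
If you are unsure which Linux package matches your hardware, a quick (illustrative) check of the machine architecture is:

    # Print the machine architecture to pick the matching SDK package,
    # e.g. 'x86_64', 'armv7l', or 'aarch64'.
    import platform
    print(platform.machine())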

