Converting pretrained model

This topic contains 5 replies, has 2 voices, and was last updated by Arpine Soghoyan 5 days, 2 hours ago.

    #10067

    Thomas Peters
    Participant

    Dear GTI devportal,

    I’ve been attempting to convert a Caffe model to evaluate the Plai Plug 2803 following the MDK User Guide, and I am quite stuck. Our model mostly consists of Convolution and ReLU layers, as well as some Concat, Deconvolution, and Eltwise layers. It has already been trained on vanilla Caffe (i.e., without GTI modifications).

    The User Guide, page 24, says “After performing steps described in section 7. Model Development Workflow, the layers should be ‘QuantConvolution’ layers and activations should be ‘QuantReLu’ layers.” Is this trying to say that all Convolution layers should become QuantConvolution layers and all ReLU layers should become QuantReLU layers? If so, how is that supposed to happen? Steps 7.2–7.4 are “recommended”, and Step 7.1 says to train a model, which I have already done. Am I supposed to just edit my prototxt, replacing Convolution/ReLU layers with QuantConvolution/QuantReLU layers, respectively? Do I need to retrain with your Caffe modifications?
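
    For concreteness, here is my guess at what such an edit would look like for one Convolution/ReLU pair. The layer names and convolution parameters below are placeholders from my own network, and I have no idea whether additional quant-specific parameters are also required:

    layer {
      name: "conv1"              # placeholder name from my prototxt
      type: "QuantConvolution"   # was "Convolution"
      bottom: "data"
      top: "conv1"
      convolution_param {
        num_output: 64           # placeholder values
        kernel_size: 3
        pad: 1
      }
    }
    layer {
      name: "relu1"
      type: "QuantReLU"          # was "ReLU"
      bottom: "conv1"
      top: "conv1"
    }

    Is that the whole story, or are there extra parameters I would need to add?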

    My next question regards the network*.json and Fullmodel_def*.json arguments to convert.py. How exactly am I supposed to create these? I find the User Guide unhelpful here: “network*.json is a JSON file similar to network_*_template.json provided in the conversion_tool/network_examples directory” (and similarly for the Fullmodel_def*.json). To me, those example JSONs are a bit complicated, and I’m left wondering how to create my own (or even why they are necessary, given the prototxt and caffemodel). Appendix A: JSON File Format does not show me how to set them either.

    Regards,
    Tom Peters

    #11179
    Arpine Soghoyan
    Moderator

    GTI offers a model development kit (MDK) to create your own models, either from scratch or from a pretrained caffemodel, train them, and convert them to a format that fits the GTI chip (.model format). The MDK includes example network definitions for MobileNet-, ResNet- and VGG-type networks (for GTI 2803) that can be used as a starting point for your development. We recommend first running the training based on these examples, then modifying them to match your network requirements. If you have a custom network, please first check our model specification guide to make sure that the particular network structure fits the GTI chip architecture (GTI 2801 or GTI 2803).

    These are the steps to run the MDK:
    1) Run the training with the full floating-point model.
    2) Run the training with quantized convolutional layers.
    3a) Calibrate the range of activation values.
    3b) Run the training with quantized activation layers.
    4) Apply model fusion.
    5) Evaluate the quantized and fused model on the CPU/GPU.
    6) Convert the caffemodel into .model (GTI chip format).
    7) Run inference on the GTI chip. The accuracy at this stage should be the same as the accuracy in step 1 (~2% variation is possible).

    In my case, these are the steps I followed:

    1) I used the example prototxt to run training from scratch.
    2) Then replace FLOATING_POINT with:
    quant_convolution_param {
      coef_precision: THREE_BIT
      bw_params: 12
      # shift_enable: true
      shift_enable: false
    }
    Make sure quant_enable: false.
    3a) Run Calibrate_QuantReLU.py.
    3b) Then turn on the quantized activation layers by setting quant_enable: true.
    5) Run RefineNetwork.py. Make sure to set image_means=[0,0,0], otherwise the default values will be used.
    6) Create the deploy prototxt based on the train prototxt.
    7) Run the conversion tool.
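
    To make steps 2 and 3b a bit more concrete, here is a rough sketch of one quantized conv/activation pair in the train prototxt. The layer names and convolution parameters are placeholders, so please follow the example prototxt files shipped with the MDK for the exact fields:

    layer {
      name: "conv1"                 # placeholder name
      type: "QuantConvolution"
      bottom: "data"
      top: "conv1"
      convolution_param {
        num_output: 64              # placeholder values
        kernel_size: 3
        pad: 1
      }
      quant_convolution_param {
        coef_precision: THREE_BIT   # FLOATING_POINT during step 1
        bw_params: 12
        shift_enable: false
      }
    }
    layer {
      name: "relu1"
      type: "QuantReLU"
      bottom: "conv1"
      top: "conv1"
      # quant_enable stays false in step 2 and is switched to true in step 3b;
      # see the MDK example prototxt for the exact parameter block it belongs to.
    }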

    If you need help with custom network training, GTI offers to review your network. Send me a DM with your network definition and I will take a look.

    #11272

    Thomas Peters
    Participant

    Hi Arpine,

    Thank you for your response, but unfortunately it doesn’t answer my questions:
    I have a pretrained model from vanilla Caffe (not GTI Caffe) that I do not want to retrain, but that I want to run on a GTI device. How do I get a corresponding prototxt and caffemodel for this model with GTI QuantConvolution and QuantReLU layers? And once I have such a model, how do I create the network*.json and Fullmodel_def*.json for it? My original question includes further details on my current understanding.

    Regards,
    Tom

    #11306
    Arpine Soghoyan
    Moderator

    Hi Tom,

    Retraining is mandatory, as the GTI chip requires a specific model format to run on the chip.
    As I mentioned, the MDK already includes example network.json and fullmodel.json files for your reference. However, if you are having issues creating your own custom network structure, please share more information so I can guide you properly.

    #11324

    Thomas Peters
    Participant

    Hi Arpine,

    Just to confirm: the MDK User Guide for Caffe, page 33, says “The MDK has options to train the model on [sic] from a pretrained model or from scratch on CPU or GPU”, but you are saying I must retrain? I have a pretrained model which I am not interested in retraining (it takes days).

    Regarding the network.json and fullmodel.json, I am already aware of what the User Guide says about them (as noted in my first post). My problem is that I do not understand how to generalize them to my own network. The guide only says “network*.json is a JSON file similar to network_*_template.json provided in the conversion_tool/network_examples directory.” How is one supposed to create their own? There are many parameters in your example JSON files.

    Regards,
    Tom

    #11325
    Arpine Soghoyan
    Moderator

    Hi Tom,

    I understand that it would be much easier to simply port your existing model directly to the chip; however, without retraining your pretrained model with the GTI MDK it is not possible to deploy it on the chip.
    If you need help customizing the network*.json and full_model*.json files, I have already offered to review your network prototxt.

    Regards,
    Arpine

