Custom Models

While DeepStack provides many functionalities out of the box, it also allows you to deploy image recognition models trained on your own dataset.

For example, you can train a model on a dataset of different classes of plants. With DeepStack, you can deploy this model to actually classify plants in a production environment.

DeepStack supports custom image classification models in ONNX, TensorFlow and Keras formats. With ONNX support, you can train a model in any deep learning framework that exports to ONNX, including PyTorch, MXNet, Chainer, CNTK and more, and deploy it to production with DeepStack.

In this guide, we shall walk through deploying a custom model in each of the three supported formats. The model we are deploying is trained to recognize different classes of professions by their mode of dress.

Deploying ONNX Models with DeepStack

ONNX is a universal model format supported by the most popular deep learning frameworks. A model trained in a framework like PyTorch can be easily exported to ONNX.

Step 1 :: Download the model

You can download the exported ONNX model here: Idenprof ONNX Model

Step 2 :: The config file

The configuration file contains all the information about the preprocessing and labels for your model.

The config file for our idenprof model is shown below.

{
    "sys-version": "1.0",
    "framework": "ONNX",
    "mean": 0.5,
    "std": 255,
    "width": 224,
    "height": 224,
    "map": {
        "0": "chef", "1": "doctor", "2": "engineer", "3": "farmer",
        "4": "firefighter", "5": "judge", "6": "mechanic",
        "7": "pilot", "8": "police", "9": "waiter"
    }
}

Based on your model, you should set the mean, std, width, height, and map values to suit your model. This enables DeepStack to preprocess your input and decode the model's predictions properly.
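Before uploading, it can be useful to sanity-check the config and see how the map decodes a prediction index into a label. The snippet below is a minimal stdlib-only sketch; the score list is invented purely for illustration.

```python
import json

# The same config shown above, inlined so the example is self-contained.
config = json.loads("""
{"sys-version": "1.0",
 "framework": "ONNX", "mean": 0.5, "std": 255, "width": 224, "height": 224,
 "map": {"0": "chef", "1": "doctor", "2": "engineer", "3": "farmer",
         "4": "firefighter", "5": "judge", "6": "mechanic",
         "7": "pilot", "8": "police", "9": "waiter"}}
""")

# Verify the fields DeepStack needs for preprocessing and decoding are present.
for key in ("framework", "mean", "std", "width", "height", "map"):
    assert key in config, f"missing config key: {key}"

# Decode the highest-scoring class index with the label map — the same
# lookup DeepStack performs when it returns a 'label'. Scores are made up.
scores = [0.01, 0.02, 0.05, 0.58, 0.04, 0.05, 0.08, 0.07, 0.06, 0.04]
best = max(range(len(scores)), key=scores.__getitem__)
label = config["map"][str(best)]
print(label)  # → farmer
```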

Step 3 :: Register Your Model

import requests

with open("idenprof.onnx", "rb") as f:
    model = f.read()
with open("config.json", "rb") as f:
    config = f.read()

response = requests.post(
    "http://localhost:80/v1/vision/addmodel",
    files={"model": model, "config": config},
    data={"name": "profession"},
).json()
print(response)

The code above uploads the model and the config file to your local DeepStack server. The {"name": "profession"} entry specifies a unique name for the model; the model will be served on the endpoint http://localhost:80/v1/vision/custom/profession
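If you register models often, the call can be wrapped in a small helper so the same script works for any model name or server address. This is a sketch of our own, not part of the DeepStack API; only the /v1/vision/addmodel and /v1/vision/custom/&lt;name&gt; paths come from the documentation above.

```python
import requests


def custom_endpoint(server: str, name: str) -> str:
    """Return the URL the model will be served on after registration."""
    return f"{server}/v1/vision/custom/{name}"


def register_model(server: str, name: str, model_path: str, config_path: str) -> dict:
    """Upload a model file and its config to DeepStack's addmodel endpoint."""
    with open(model_path, "rb") as m, open(config_path, "rb") as c:
        response = requests.post(
            f"{server}/v1/vision/addmodel",
            files={"model": m.read(), "config": c.read()},
            data={"name": name},
        )
    return response.json()


print(custom_endpoint("http://localhost:80", "profession"))
```

For example, register_model("http://localhost:80", "profession", "idenprof.onnx", "config.json") performs the same upload as the script above.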

Step 4 :: Restart DeepStack

The model will start serving as soon as you restart your DeepStack server.

Testing Your Custom Model

Below, we shall use our custom model to predict the class of the test image below.

(Test image: test-custom-image.jpg)
import requests

with open("test-custom-image.jpg", "rb") as f:
    image_data = f.read()

response = requests.post(
    "http://localhost:80/v1/vision/custom/profession",
    files={"image": image_data},
).json()
print("Label:", response["label"])
print(response)

Result

{'label': 'farmer', 'success': True, 'confidence': 0.584346}
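In a production setting you will usually want to check success and apply a confidence threshold before acting on a prediction. A minimal sketch, using the example response above as a stand-in for a live server reply (the threshold value is an application-specific choice, not a DeepStack setting):

```python
# Example response copied from the result above; in a real script this
# would come from requests.post(...).json() against the custom endpoint.
response = {"label": "farmer", "success": True, "confidence": 0.584346}

THRESHOLD = 0.5  # application-specific cutoff, chosen here for illustration

accepted = bool(response.get("success")) and response.get("confidence", 0.0) >= THRESHOLD
if accepted:
    print("Accepted prediction:", response["label"])
else:
    print("Prediction rejected or request failed")
```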

Deploying Keras Models with DeepStack

Keras is a popular deep learning framework focused on ease of use.

Deploying Keras models follows the same process as ONNX models.

Step 1 :: Download the keras model

You can download the Keras model here: Idenprof Keras Model

Note that when using your own custom Keras models, the model file must contain both the weights and the architecture.

In your Keras code, you can save both the weights and the architecture by using

model.save("model.h5")

Step 2 :: The config file

The config file is essentially the same, except that the framework should be changed to KERAS.

{
    "sys-version": "1.0",
    "framework": "KERAS",
    "mean": 0.5,
    "std": 255,
    "width": 224,
    "height": 224,
    "map": {
        "0": "chef", "1": "doctor", "2": "engineer", "3": "farmer",
        "4": "firefighter", "5": "judge", "6": "mechanic",
        "7": "pilot", "8": "police", "9": "waiter"
    }
}

Now, we can register the model in the same way.

Step 3 :: Register Your Model

import requests

with open("idenprof.h5", "rb") as f:
    model = f.read()
with open("config.json", "rb") as f:
    config = f.read()

response = requests.post(
    "http://localhost:80/v1/vision/addmodel",
    files={"model": model, "config": config},
    data={"name": "profession"},
).json()
print(response)

Deploying Tensorflow Models with DeepStack

TensorFlow is a very popular deep learning framework from Google.

Deploying TensorFlow models follows the same process as ONNX and Keras models.

Step 1 :: Download the tensorflow model

You can download the TensorFlow model here: Idenprof Tensorflow Model

Step 2 :: The config file

The config file for TensorFlow models must additionally contain the input_name and output_name of the graph.

{
    "sys-version": "1.0",
    "framework": "TF",
    "mean": 0.5,
    "std": 255,
    "width": 224,
    "height": 224,
    "input_name": "input_1:0",
    "output_name": "output_1:0",
    "map": {
        "0": "chef", "1": "doctor", "2": "engineer", "3": "farmer",
        "4": "firefighter", "5": "judge", "6": "mechanic",
        "7": "pilot", "8": "police", "9": "waiter"
    }
}
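Because a missing input_name or output_name will prevent DeepStack from running the graph, it is worth validating a TensorFlow config before uploading it. A small stdlib-only sketch (the required key list is taken from the config fields described above):

```python
import json

# Keys a TF config needs, per the fields used in the configs above.
REQUIRED_TF_KEYS = {"framework", "mean", "std", "width", "height",
                    "input_name", "output_name", "map"}


def validate_tf_config(text: str) -> list:
    """Return a sorted list of required keys missing from a TF config document."""
    config = json.loads(text)
    return sorted(REQUIRED_TF_KEYS - config.keys())


good = ('{"sys-version": "1.0", "framework": "TF", "mean": 0.5, "std": 255, '
        '"width": 224, "height": 224, "input_name": "input_1:0", '
        '"output_name": "output_1:0", "map": {"0": "chef"}}')
bad = '{"framework": "TF", "mean": 0.5, "std": 255, "width": 224, "height": 224, "map": {}}'

print(validate_tf_config(good))  # → []
print(validate_tf_config(bad))   # → ['input_name', 'output_name']
```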

Now, we can register the model in the same way.

Step 3 :: Register Your Model

import requests

with open("idenprof.pb", "rb") as f:
    model = f.read()
with open("config.json", "rb") as f:
    config = f.read()

response = requests.post(
    "http://localhost:80/v1/vision/addmodel",
    files={"model": model, "config": config},
    data={"name": "profession"},
).json()
print(response)