Image Classification


The input parameters below apply to the different attack types. To start working with the APIs, see the Image Classification API.

File upload format

  • Data: The processed data, ready to be passed to the model for prediction, should be saved in a folder.
  • Label: A CSV file should be created with two columns, "image" and "label": the first column contains the image file name and the second contains the label as an integer. Check the attached sample label file.
  • Model: The model should be saved in either .h5 or TensorFlow format with the full architecture. The full architecture is needed when the model is loaded onto the platform for assessment, whether it is uploaded encrypted or unencrypted. This can be ignored when the model is hosted as an API. (A packaging sketch follows this list.)
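
The sketch below shows one way these artifacts could be prepared and packaged. The folder layout, file names, and zip structure are illustrative assumptions; only the two-column label CSV ("image", "label"), the .h5 model format, and the zip requirement come from the guidelines above.

```python
# Illustrative packaging sketch. Folder layout, file names, and zip structure
# are assumptions; only the label CSV columns ("image", "label"), the .h5
# model format, and the zip requirement come from the guidelines above.
import csv
import zipfile
from pathlib import Path

import tensorflow as tf

data_dir = Path("mnist_sample/data")   # processed images, ready for prediction
data_dir.mkdir(parents=True, exist_ok=True)

# Label CSV: first column is the image file name, second is the integer label.
with open("mnist_sample/label.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["image", "label"])
    writer.writerow(["img_0001.png", 7])   # example row

# Save the model with its full architecture (.h5 shown here).
model = tf.keras.models.load_model("my_mnist_model")   # hypothetical trained model path
model.save("mnist_sample/model.h5")

# Everything uploaded to the platform must be zipped.
def zip_path(src: str, out_zip: str) -> None:
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        src_path = Path(src)
        members = src_path.rglob("*") if src_path.is_dir() else [src_path]
        for p in members:
            zf.write(p, p.relative_to(src_path.parent))

zip_path("mnist_sample/data", "data.zip")
zip_path("mnist_sample/label.csv", "label.zip")
zip_path("mnist_sample/model.h5", "model.zip")
```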

Note:

  1. All files uploaded should be in zipped format. The files above are sample data for the MNIST use case.
  2. Prerequisite: only 2-5% of the data is needed. The data should be representative and balanced across all classes; a sampling sketch follows this note.
  3. For poisoning, check the Poisoning section for sample data, labels, and models.
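
As a sketch of the 2-5% prerequisite, the snippet below draws a class-balanced subset from a full label file. The file names and the 3% fraction are illustrative assumptions.

```python
# Illustrative sketch: draw a balanced ~3% subset from the full label file.
# File names and the exact fraction are assumptions.
import pandas as pd

labels = pd.read_csv("full_label.csv")   # columns: "image", "label"
subset = (
    labels.groupby("label", group_keys=False)
          .apply(lambda g: g.sample(frac=0.03, random_state=42))
)
subset.to_csv("label.csv", index=False)
# Copy only the images listed in subset["image"] into the data folder before zipping.
```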

Common parameters

The parameters in the table below are common to all attack types: Extraction, Evasion, and Poisoning.

For the additional parameters specific to each attack type, such as Extraction and Evasion, refer to the sections below.

Parameter: model_Id
Data type: String
Description: Model ID received during model registration. Provide this model ID as a query parameter in the URL.
Remark: Model registration needs to be done only once per model before performing model analysis. It also helps you track the number of API calls made and their success metrics.

Request body (JSON format):

Parameter: normalize_data
Data type: String
Description: Whether the model was trained on normalized data.
Remark: If the model was trained on normalized data, set this parameter to "yes"; otherwise set it to "no".

Parameter: input_dimensions
Data type: String
Description: Input dimensions of the image.
Remark: The parameter should be a string in the format "(height, width, channel)", for example "(28, 28, 1)" for MNIST.

Parameter: number_of_classes
Data type: String
Description: Number of prediction classes.
Remark: The parameter should be a string, for example "10" for MNIST. (Range: >0 and <=200)

Parameter: model_framework
Data type: String
Description: Framework the original model was built with.
Remark: Currently supported frameworks are tensorflow, scikit-learn, and keras. (Options: [tensorflow])
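
For illustration, a request carrying these common parameters might look like the sketch below. The endpoint URL is hypothetical; only the model_Id query parameter and the JSON body fields come from the table above.

```python
# Hypothetical request sketch for the common parameters. The endpoint URL is
# an assumption; only the model_Id query parameter and the JSON body fields
# come from the parameter table above.
import requests

payload = {
    "normalize_data": "yes",            # model was trained on normalized data
    "input_dimensions": "(28, 28, 1)",  # "(height, width, channel)" for MNIST
    "number_of_classes": "10",          # range: >0 and <=200
    "model_framework": "tensorflow",    # currently: tensorflow, scikit-learn, keras
}

response = requests.post(
    "https://<platform-host>/image-classification/analysis",  # hypothetical endpoint
    params={"model_Id": "<model-id-from-registration>"},      # query parameter in the URL
    json=payload,
)
print(response.status_code, response.text)
```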



Extraction parameters

Request body (JSON format):

Parameter: attack_type
Data type: String
Description: Attack type to use: Blackbox or Greybox.
Remark: Blackbox: no information about the model or data is used when performing the model analysis. Greybox: information about the data is leveraged to create the attack data. Note: only 2-5% of the data is needed.

Parameter: number_of_attack_queries
Data type: String
Description: Number of attack queries the model will be subjected to.
Remark: Generally, the higher the number of attack queries, the better the analysis, but the longer the processing time. (Range: >0 and <=400000)

Parameter: vulnerability_threshold
Data type: String
Description: Threshold of stolen model accuracy at which the defense model should be generated.
Remark: Specify as a fraction. (Range: 0.0 - 1)

Parameter: model_api_details
Data type: String
Description: API details of the hosted model, provided as an encrypted JSON string. Mandatory if use_model_api is "yes".
Remark: Provide this only if use_model_api is "yes".

Parameter: use_model_api
Data type: String
Description: Use a model API instead of uploading the model as a zip file.
Remark: When this parameter is "yes", you do not have to upload the model as a zip; pass the API URL along with the other verification credentials in a JSON file.

Parameter: defense_bestonly
Data type: String
Description: Choose whether to train the defense model until it achieves the best result or stop once it reaches above 95% accuracy.
Remark: When set to "yes", N models are trained and the best one is selected, which takes longer. If "no", training stops once the defense model accuracy exceeds 95%.

Parameter: encryption_strategy
Data type: Int
Description: Encryption strategy for your model: 0 if the model is uploaded directly as a zip, 1 if the model is encrypted as .pyc and uploaded as a zip. Ignore if use_model_api is "yes".
Remark: Select 0 to pass the tensorflow model as-is; select 1 to pass an encrypted model (e.g., a .pyc file).
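
As an illustration, an extraction request body combining the common and extraction-specific parameters might look like the sketch below. The values follow the MNIST guidance in this guide; the exact grouping of fields is an assumption.

```python
# Illustrative extraction request body. Values follow the MNIST examples in
# this guide; the exact grouping of fields is an assumption.
import json

extraction_body = {
    # common parameters
    "normalize_data": "yes",
    "input_dimensions": "(28, 28, 1)",
    "number_of_classes": "10",
    "model_framework": "tensorflow",
    # extraction-specific parameters
    "attack_type": "Blackbox",            # or "Greybox"
    "number_of_attack_queries": "60000",  # range: >0 and <=400000
    "vulnerability_threshold": "0.5",     # range: 0.0 - 1
    "use_model_api": "no",                # model is uploaded as a zip instead
    "model_api_details": "",              # only required when use_model_api is "yes"
    "defense_bestonly": "no",             # stop once defense accuracy exceeds 95%
    "encryption_strategy": 0,             # 0 = plain zip, 1 = encrypted .pyc zip
}

print(json.dumps(extraction_body, indent=2))
```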

Evasion parameters

Request body (JSON format):

Parameter: model_api_details
Data type: String
Description: API details of the hosted model, provided as an encrypted JSON string. Mandatory if use_model_api is "yes".
Remark: Provide this only if use_model_api is "yes".

Parameter: use_model_api
Data type: String
Description: Use a model API instead of uploading the model as a zip.
Remark: When this parameter is "yes", you do not have to upload the model as a zip; pass the API URL along with the other verification credentials in a JSON file.

Parameter: defense_bestonly
Data type: String
Description: Choose whether to train the defense model until it achieves the best result or stop once it reaches above 95% accuracy.
Remark: When set to "yes", N models are trained and the best one is selected, which takes longer. If "no", training stops once the defense model accuracy exceeds 95%.

Parameter: encryption_strategy
Data type: Int
Description: Encryption strategy for your model: 0 if the model is uploaded directly as a zip, 1 if the model is encrypted as .pyc and uploaded as a zip. Ignore if use_model_api is "yes".
Remark: Select 0 to pass the tensorflow model as-is; select 1 to pass an encrypted model (e.g., a .pyc file).
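
When a hosted model API is used instead of a zip upload, the API details must be supplied through model_api_details as an encrypted JSON string. The sketch below only assembles the plain JSON details; the field names inside it and the encryption step (not shown) are assumptions to be replaced with the platform's actual guidance.

```python
# Illustrative request body using a hosted model API instead of a zip upload.
# The field names inside model_api_details and the encryption step (not shown)
# are assumptions; the platform expects the details as an encrypted JSON string.
import json

api_details = {
    "api_url": "https://<model-host>/predict",  # hypothetical hosted-model endpoint
    "auth_token": "<token>",                    # hypothetical verification credential
}

body = {
    "normalize_data": "yes",
    "input_dimensions": "(28, 28, 1)",
    "number_of_classes": "10",
    "model_framework": "tensorflow",
    "use_model_api": "yes",                        # no model zip upload needed
    "model_api_details": json.dumps(api_details),  # encrypt per platform guidance before sending
    "defense_bestonly": "yes",                     # train N models and keep the best
    # encryption_strategy is ignored when use_model_api is "yes"
}

print(json.dumps(body, indent=2))
```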

Data Poisoning sample artifacts: CleanData, Label, Model, UniversalDataset, UniversalLabel

Model Poisoning sample artifacts: CleanModel1, CleanModel2, Data, Label, Model_to_test

To improve accuracy, you can experiment with the following values for your attack input parameters. Our example uses the MNIST dataset, and the table below reflects parameters suitable for it. For more information, please refer to the reference implementation.

Task pair / analysis type | Attack strategy | No. of queries | Outcome
Extraction | Blackbox | 60000 | Stolen model accuracy between 85% and 90%
Extraction | Greybox | 20000 | Stolen model accuracy above 90%
Evasion | N/A | N/A | Evasion report
Poisoning | N/A | N/A | Model is poisoned or not

To access all sample artifacts, please visit Artifacts.

Note: For Image Classification, the supported attack types are Extraction, Evasion, and Poisoning.