Image Classification
The input parameters below are for the different attack types. To start working with the APIs, view image classification docid 7pftv3d26cujsz6vmqloo.

File upload format

Data: The processed data, ready to be passed to the model for prediction, should be saved in a folder. Download sample data.

Label: A CSV file should be created with two columns, "image" and "label". The first column should contain the image name, and the second column should contain the label. The label should be in integer format. Check the sample label file attached. Download sample label.

Model: The model should be saved in either h5 or TensorFlow format with full architecture. Full architecture is needed when loading the model to the platform for assessment, whether encrypted or unencrypted. This can be ignored when the model is hosted as an API. Download sample model.

Note: All files uploaded should be in zipped format. The above files are sample data for the MNIST use case.

Prerequisite

Only 2-5% of the data is needed. The data should be representative and balanced across all classes. Model extraction requires 450-900 samples, or 50-100 samples per class. Model evasion requires 810-1620 samples, or 90-180 samples per class. For poisoning, check the Poisoning section to get sample data, labels, and models.
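As a concrete starting point, here is a minimal sketch of preparing the three upload artifacts described above for the MNIST use case. It is an illustration under stated assumptions, not the platform's official tooling: the folder names, file names, and the per-class sample count are placeholders you would adapt to your own dataset and trained model.

```python
# Minimal sketch (not the platform's official tooling): build the three upload
# artifacts described above -- a folder of processed images, a label CSV with
# "image" and "label" columns (integer labels), and a model saved in h5 format
# with full architecture -- then zip each of them. Names and sizes are
# illustrative assumptions.
import os
import shutil
import zipfile

import numpy as np
import pandas as pd
import tensorflow as tf
from PIL import Image

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()

# Balanced sample across all classes: e.g. 100 images per class,
# matching the 50-100 samples per class suggested for extraction.
per_class = 100
idx = np.concatenate([np.where(y_train == c)[0][:per_class] for c in range(10)])

os.makedirs("data", exist_ok=True)
rows = []
for i in idx:
    name = f"img_{i}.png"
    Image.fromarray(x_train[i]).save(os.path.join("data", name))
    rows.append({"image": name, "label": int(y_train[i])})

# Label CSV: two columns, "image" and "label", label as integer.
pd.DataFrame(rows).to_csv("label.csv", index=False)

# Your trained model would normally go here; this untrained one only shows the
# save format (h5 keeps architecture + weights, i.e. "full architecture").
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.save("model.h5")

# All files must be uploaded in zipped format.
shutil.make_archive("data", "zip", "data")          # data.zip
with zipfile.ZipFile("label.zip", "w") as zf:
    zf.write("label.csv")
with zipfile.ZipFile("model.zip", "w") as zf:
    zf.write("model.h5")
```

The same layout (folder of images, "image"/"label" CSV, zipped model) also applies to the poisoning inputs described later on this page.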
Common parameters

The parameters in the table below are common to all attack types, such as extraction, evasion, and poisoning. For the additional parameters specific to each attack type, such as extraction and evasion, refer to the sections below.

| Parameter | Data type | Description | Remark |
| --- | --- | --- | --- |
| model id | string | Model ID received during model registration. Provide this model ID as a query parameter in the URL. | You have to do model registration only once per model to perform model analysis. This will help you track the number of API calls made and their success metric. |

Request body (JSON format):

| Parameter | Data type | Description | Remark |
| --- | --- | --- | --- |
| normalize data | string | Whether the model was trained on normalized data. | If the model is trained on normalized data, set this parameter to "yes"; else "no". |
| input dimensions | string | Input dimensions of the image. | The parameter should be a string in the format "(height, width, channel)", for example 28, 28, 1 for MNIST. |
| number of classes | string | Number of prediction classes. | The parameter should be a string. Example: 10 for MNIST. (Range: >0 and <=200) |
| model framework | string | Framework the original model is built with. | Currently supported frameworks are TensorFlow, scikit-learn, and Keras. (Option: \[tensorflow]) |

Extraction parameters

Request body (JSON format):

| Parameter | Data type | Description | Remark |
| --- | --- | --- | --- |
| attack type | string | Select the attack type: either blackbox or greybox. Blackbox: no information about the model or data is used to perform the model analysis. Greybox: information about the data is leveraged to create the attack data. | Note: only 2-5% of the data is needed. |
| number of attack queries | string | Number of attack queries the model will be subjected to. | Generally, the higher the number of attack queries, the better the analysis, but the longer it takes to process. (Range: >0 and <=400000) |
| vulnerability threshold | string | Threshold percent of stolen model accuracy at which the defense model should be generated. | (Range: 0.0-1) |
| model api details | string | If use model api is "yes", provide the API details of the hosted model as an encrypted JSON string. | Mandatory; provide this only if use model api is "yes". |
| use model api | string | Use the model API to train your model instead of uploading the model as a zip file. | When this parameter is "yes", you don't have to upload the model as a zip; you can pass the API URL along with other verification credentials in a JSON file. |
| defense bestonly | string | Choose whether to train your model until it achieves the best results or until it is above 95% accuracy. | When "yes" is selected, it will train n models and select the best one; of course, this will take longer. If "no", it will stop once the defense model accuracy reaches above 95%. |
| encryption strategy | int | Choose an encryption strategy for your model: pick 0 if the model is uploaded directly as a zip, or 1 if the model is encrypted as pyc and uploaded as a zip. Ignore if use model api is "yes". | Select 0 to pass the TensorFlow model as is; select 1 to pass an encrypted model (it could be a pyc file). |

Evasion parameters

Request body (JSON format):

| Parameter | Data type | Description | Remark |
| --- | --- | --- | --- |
| model api details | string | If use model api is "yes", provide the API details of the hosted model as an encrypted JSON string. | Mandatory; provide this only if use model api is "yes". |
| use model api | string | Use the model API to train your model instead of uploading the model as a zip. | When this parameter is "yes", you don't have to upload the model as a zip; you can pass the API URL along with other verification credentials in a JSON file. |
| defense bestonly | string | Choose whether to train your model until it achieves the best results or until it is above 95% accuracy. | When "yes" is selected, it will train n models and select the best one; of course, this will take longer. If "no", it will stop once the defense model accuracy reaches above 95%. |
| encryption strategy | int | Choose an encryption strategy for your model: pick 0 if the model is uploaded directly as a zip, or 1 if the model is encrypted as pyc and uploaded as a zip. Ignore if use model api is "yes". | Select 0 to pass the TensorFlow model as is; select 1 to pass an encrypted model (it could be a pyc file). |
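To show how these parameters fit together, here is an illustrative sketch of a single extraction-analysis request, with the model ID passed as a URL query parameter and the common and extraction parameters in the JSON body. The base URL, endpoint path, and exact JSON key spellings are assumptions made for illustration only; the actual request contract is defined in the Image Classification API reference linked at the top of this page.

```python
# Illustrative sketch only: combining the common and extraction parameters into
# one analysis request. The endpoint URL and exact JSON key spellings below are
# assumptions -- use the Image Classification API reference for the real
# contract. "model_id" is returned once at model registration and is passed as
# a URL query parameter.
import json

import requests

BASE_URL = "https://<platform-host>/api/v1"      # assumption
MODEL_ID = "<model-id-from-registration>"        # obtained once per model

# Common parameters (all string-typed, per the tables above).
common_params = {
    "normalize_data": "yes",          # model was trained on normalized data
    "input_dimensions": "28,28,1",    # "(height, width, channel)" for MNIST
    "number_of_classes": "10",        # range >0 and <=200
    "model_framework": "tensorflow",  # tensorflow / scikit-learn / keras
}

# Extraction-specific parameters.
extraction_params = {
    "attack_type": "blackbox",            # or "greybox"
    "number_of_attack_queries": "60000",  # range >0 and <=400000
    "vulnerability_threshold": "0.5",     # stolen-accuracy threshold for the defense model
    "use_model_api": "no",                # "yes" => pass model_api_details instead of a zip
    "defense_bestonly": "no",             # "yes" trains several defense models, keeps the best
    "encryption_strategy": 0,             # 0 = plain model zip, 1 = encrypted (e.g. pyc)
}

response = requests.post(
    f"{BASE_URL}/image-classification/extraction",   # assumed path
    params={"model_id": MODEL_ID},                    # model id as query parameter
    json={**common_params, **extraction_params},
    timeout=60,
)
print(response.status_code, json.dumps(response.json(), indent=2))
```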
Drift

Reference data: The reference images, or clean images, should be saved in a folder. Download sample reference data.

Reference label: A CSV file should be created with two columns, "image" and "label". The first column should contain the image name, and the second column should contain the label. The label should be in integer format. Check the sample label file attached. Download sample reference label.

Test data: The dataset under test, which might contain drifted images. Download sample test data.

Test label: A CSV file containing the labels corresponding to the test data, with two columns, "image" and "label". The "image" column contains the image name including the extension, and the second column should contain the label. The label should be in integer format. Download sample test label. (A minimal sketch for checking these label files against their image folders is shown at the end of this page.)

Outlier

Data: Data, in zip format, that needs to be checked for the presence of outliers. Download sample data.

Poisoning (data poisoning)

Universal dataset: Data containing potential poisoning samples that needs to be tested. Download sample universal dataset.

Universal label: A CSV file containing the labels corresponding to the universal data, with two columns, "image" and "label". The "image" column contains the image name including the extension, and the second column should contain the label. The label should be in integer format. Download sample universal label.

Data: The processed data, ready to be passed to the model for prediction, should be saved in a folder. Download sample data.

Label: A CSV file should be created with two columns, "image" and "label". The first column should contain the image name, and the second column should contain the label. The label should be in integer format. Check the sample label file attached. Download sample label.

Model: The model should be saved in either h5 or TensorFlow format with full architecture. Full architecture is needed when loading the model to the platform for assessment. Download sample model.

Experimentation with values

To improve the accuracy, you can experiment with the following values for your attack input parameters. In our example we have used an MNIST dataset with our model, and the table below reflects parameters suitable for it. For more information, please refer to the reference implementation.

| Task pair/analysis type | Type of attack strategy | No. of queries | Outcome |
| --- | --- | --- | --- |
| Extraction | Blackbox | 60000 | Stolen model accuracy between 85%-90% |
| Extraction | Greybox | 20000 | Stolen model accuracy above 90% |
| Evasion | N/A | N/A | Evasion report |
| Poisoning | N/A | N/A | Whether the model is poisoned or not |

To access all sample artifacts, please visit artifacts docid\ ijneocxostabvvrsq11fa. For specific artifact details, refer to:

- Vulnerability report: vulnerability report docid\ hl0ut2mwlcbkt8f97fr w
- Sample attacks: sample attacks docid 4g1mjm5lqjfm8t5wbvwpr
- Defense report: defense report docid\ vtzlttpja2vsf2j0stlsq
- Defense model: defense model docid\ xsbxmzxw4vv14 8nmbf8m

Note: For image classification, the supported attack types are extraction, evasion, poisoning, drift, and outlier.
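As referenced in the drift section above, the following is a small sanity-check sketch, assuming local paths and standard pandas, for confirming that a label CSV and its image folder follow the format described on this page before you zip them for upload. It is not a platform utility; the file and folder names are placeholders, and the same check applies to the drift reference/test pairs and the poisoning universal dataset.

```python
# Minimal sanity-check sketch (an assumption, not a platform utility): verify
# that a label CSV and its image folder match the documented format -- two
# columns named "image" and "label", integer labels, and every listed image
# (including its extension) present in the folder.
import os

import pandas as pd


def check_label_file(csv_path: str, image_dir: str) -> None:
    df = pd.read_csv(csv_path)

    # The CSV must have exactly the two expected columns.
    assert list(df.columns) == ["image", "label"], f"unexpected columns: {list(df.columns)}"

    # Labels must be integers.
    assert pd.api.types.is_integer_dtype(df["label"]), "labels must be integers"

    # Every image name listed in the CSV must exist in the folder.
    missing = [n for n in df["image"] if not os.path.exists(os.path.join(image_dir, n))]
    assert not missing, f"{len(missing)} images listed in the CSV are missing, e.g. {missing[:3]}"

    print(f"{csv_path}: {len(df)} rows, {df['label'].nunique()} classes -- OK")


# Example usage with assumed local paths (before zipping for upload):
check_label_file("reference_label.csv", "reference_data")
check_label_file("test_label.csv", "test_data")
```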