
【论文阅读】Knockoff Nets: Stealing Functionality of Black-Box Models(2019)



Abstract

Machine Learning (ML) models are increasingly deployed in the wild to perform a wide range of tasks.
In this work, we ask to what extent an adversary can steal the functionality of such "victim" models based solely on blackbox interactions: image in, predictions out.
In contrast to prior work, we study complex victim blackbox models, and an adversary lacking knowledge of the train/test data used by the model, its internals, and the semantics of the model outputs.
We formulate model functionality stealing as a two-step approach: (i) querying a set of input images to the blackbox model to obtain predictions; and (ii) training a "knockoff" with the queried image-prediction pairs.
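The two-step approach above can be sketched in a toy setting. This is only an illustrative mock-up, not the paper's implementation: the "victim" here is a hypothetical fixed linear scorer rather than a deployed CNN, the query images are random vectors, and the knockoff is a softmax regression trained on the queried soft labels by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the victim blackbox (in the paper: a deployed
# image classifier). Its weights are hidden; only predictions are observable.
W_victim = rng.normal(size=(8, 3))

def blackbox(x):
    """Image in, predictions out: return softmax probabilities for a batch."""
    z = x @ W_victim
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Step (i): query the blackbox with images drawn from *some* distribution
# (here plain random noise) to build a transfer set of image-prediction pairs.
queries = rng.normal(size=(500, 8))
preds = blackbox(queries)

# Step (ii): train a knockoff on the queried pairs by minimising the
# cross-entropy between its softmax output and the blackbox's soft labels.
W_knockoff = np.zeros((8, 3))
for _ in range(300):
    z = queries @ W_knockoff
    e = np.exp(z - z.max(axis=1, keepdims=True))
    p = e / e.sum(axis=1, keepdims=True)
    grad = queries.T @ (p - preds) / len(queries)  # cross-entropy gradient
    W_knockoff -= 0.5 * grad

# Measure how often the knockoff's top-1 label matches the victim's on fresh inputs.
test = rng.normal(size=(200, 8))
agreement = (blackbox(test).argmax(1) == (test @ W_knockoff).argmax(1)).mean()
print(f"label agreement: {agreement:.2f}")
```

The key point the sketch mirrors is that the adversary never touches the victim's weights or training data: everything the knockoff learns comes from the queried image-prediction pairs alone.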
We make multiple remarkable observations: (a) querying random images from a distribution different from that of the blackbox training data results in a well-performing knockoff.

