
[Paper Reading] APMSA: Adversarial Perturbation Against Model Stealing Attacks (2023)


Abstract

Training a Deep Learning (DL) model requires proprietary data and computing-intensive resources. To recoup their training costs, a model provider can monetize DL models through Machine Learning as a Service (MLaaS). Generally, the model is deployed in the cloud, and a publicly accessible Application Programming Interface (API) is provided for paid queries. However, model stealing attacks pose a security threat to this monetization scheme, because they allow an adversary to steal the model and thereby avoid paying for future extensive queries. Specifically, an adversary queries the targeted model to obtain input-output pairs.
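To make the threat concrete, below is a minimal, hypothetical sketch of the model stealing workflow the abstract describes: the attacker submits chosen inputs to the victim's prediction API, collects the returned confidence scores as input-output pairs, and fits a substitute model on them. The `victim_api` function and its secret weights are assumptions standing in for a real cloud MLaaS endpoint; a real attack would issue paid network queries instead.

```python
import numpy as np

# Hypothetical victim model standing in for a cloud MLaaS prediction API.
# The attacker can only observe its outputs, not the secret weights.
def victim_api(x):
    w_secret = np.array([2.0, -1.0, 0.5])
    logits = x @ w_secret
    return 1.0 / (1.0 + np.exp(-logits))  # confidence scores in (0, 1)

# Model stealing loop: query with attacker-chosen inputs, record the
# resulting input-output pairs, then fit a substitute model on them
# (here, least squares on the inverted sigmoid scale).
rng = np.random.default_rng(0)
queries = rng.normal(size=(500, 3))        # attacker-chosen query inputs
scores = victim_api(queries)               # observed API outputs
logits = np.log(scores / (1.0 - scores))   # invert the sigmoid
w_stolen, *_ = np.linalg.lstsq(queries, logits, rcond=None)

print(np.round(w_stolen, 3))
```

Because the victim's rich confidence scores leak so much information, the recovered weights closely match the secret ones; this is exactly the leakage channel that the paper's defense perturbs.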
