Trust in AI?

What Is Trust?

As I was leaving for a three-month family leave, a Director assigned to temporarily support my team came to my office to wish me the best.

“You can trust me,” the Director said as I walked out.

That was my plan. I had no reason not to.

“You know how there are people who blame the last guy for anything that goes wrong?” he said. “Well, I’m not going to do that.”

“Oh,” I said. “OK.”

Who says that? I thought. Only the guy who is about to screw you over.

If trust means reliance on others to meet the expectations they set, the Director was explicitly telling me to trust him, but implicitly, he was warning me not to. Or at least dropping an obvious hint. When short-sighted self-interest drove him to seize control of my high-performing team permanently, I was disappointed but not surprised. Did he break my trust? Not really. With that parting comment, the Director had already lost my trust. But I never imagined a colleague would be allowed to go that far. Both the individual and the system had failed.

When I returned, I relied on my confidence to recover and build a new team and a new set of projects. The Director acted like nothing unusual had happened. Like his lying didn’t matter. But losing my team and projects affected how I showed up. At work and outside of work.

Trust & AI

When we want to build trust through our businesses, products, teams, and community, we must ask one fundamental question. Are we able to be honest with ourselves? Variations of this ultimate trust question could include:

  • Are we able to be honest and transparent (in a relevant way) with others?

  • Are our existing structures and systems trustworthy?

  • Do we want to take on the responsibility of trust?

  • Do we want to win trust now, while being willing to break it later when others don’t have a choice or we can get away with it?

Let’s explore that last bullet with follow-up questions. Do we want to win our users’ trust to use a “free” service while also littering the back end with complexities and loopholes that allow us to sell or use their data? Do we feel they have enough clues to discern? Like, what part of “cost of free” did they not understand? Do we do it because everyone else is doing it? Do we do it to survive? Or do we have options to engage with integrity? Are we looking to build long-term partnerships and loyalty? Find a way to do the right thing for us and the right thing for users?

These questions are especially relevant when machine learning and AI (Artificial Intelligence) are used to establish trust-based connections between recruiters and job seekers, between content and consumers, between caregivers and caretakers, parsing out relevance, inferences, and recommendations. These systems and algorithms are perpetually optimized based on the metrics we use to reward or penalize, the data we give them access to use, and the autonomy of decision-making. They are critical when the stakes are high — think law enforcement or surveillance that encroaches on our autonomy, privacy, and intentions.

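To make the dependence on metrics concrete, here is a minimal, hypothetical sketch (the candidate names and scores are invented for illustration) of how the metric a matching system is rewarded on determines which connection it surfaces first:

```python
# Hypothetical data: how a matching system ranks candidates depends entirely
# on which metric it is rewarded for optimizing.
candidates = [
    {"name": "A", "predicted_clicks": 0.90, "predicted_fit": 0.40},
    {"name": "B", "predicted_clicks": 0.55, "predicted_fit": 0.85},
    {"name": "C", "predicted_clicks": 0.60, "predicted_fit": 0.70},
]

def rank(pool, metric):
    """Order candidates by whichever metric the system is rewarded for."""
    return sorted(pool, key=lambda c: c[metric], reverse=True)

print([c["name"] for c in rank(candidates, "predicted_clicks")])  # ['A', 'C', 'B']
print([c["name"] for c in rank(candidates, "predicted_fit")])     # ['B', 'C', 'A']
```

Nothing about the data changed between the two calls; only the reward metric did, and with it the recommendation we are asked to trust.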

Trust involves a leap of faith. When we ask if we can trust AI, we are really asking: Can we trust the people who are vouching for the AI: the people designing, paying for, making, and using the systems? It’s ultimately, and almost always, about us.

What Does Trust Mean in Artificial Intelligence?

In February 2020, the EU announced its intention to define and ultimately regulate trusted and transparent AI, prompting Google’s CEO to support AI regulation as “too important not to” while nudging regulators toward “a sensible approach,” and prompting the White House to release its own letter advising the EU not to kill innovation. In June 2020, IBM, Amazon, and Microsoft joined the San Francisco and Seattle bans on facial recognition. The definition of “sensible” has evolved as trust in our systems — human and machine — around policing and facial recognition comes under increased scrutiny. Even prior to the post-riot awareness of racism in America, heads of AI and data science departments in China, Europe, and Asia, leading researchers, and public interest groups have been asking a common question: How do we build trust in AI? And can there be a consensus on how to answer it and how to meet our need for trusted technology?

When industry organizations and institutions like IEEE and EU forums use keywords like “Trusted AI,” “Trust in AI,” and “Trustworthy AI,” they are talking about how to ensure, inject, and build trust, ethics, explainability, accountability, responsibility, reliability, and transparency into our intelligent systems. They ask for transparency: How closely do the systems meet the expectations that were set? Are we clear on the expectations, the data, and the methodology used?

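One way to read that transparency question is as a comparison between the expectations that were set and what the system actually does. The sketch below is a minimal, hypothetical illustration (the metric names, targets, and numbers are invented), not any standard’s prescribed format:

```python
# Hypothetical expectations and measurements: report where the system falls
# short of what was promised instead of leaving the gap undocumented.
expectations = {"accuracy": 0.90, "false_positive_rate": 0.05}
measured = {"accuracy": 0.87, "false_positive_rate": 0.09}

def expectation_gaps(expected, actual):
    """Return the metrics where measured behavior misses the stated target."""
    gaps = {}
    for metric, target in expected.items():
        value = actual[metric]
        # Crude convention for this sketch: metrics named "...rate" are error
        # rates, so lower is better; for everything else, higher is better.
        meets = value <= target if metric.endswith("rate") else value >= target
        if not meets:
            gaps[metric] = {"expected": target, "measured": value}
    return gaps

print(expectation_gaps(expectations, measured))  # both metrics miss their targets here
```

Real model documentation goes much further, but even this framing forces the expectations to be written down before the system is trusted.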

This is a tricky concept for many reasons, but mainly because AI is a technology used by people, businesses, products, and governments. So the trust and confidence are ultimately placed in the people, businesses, products, or governments, who have their own assessments of the reliability, truth, ability, and strength of their AI-powered solutions. It is often not one person or one system, but a series of interconnected systems and people. And any definition, methodology, or system can be used for different purposes depending on our hopes and fears. It can be changed, improved, or misused. The industry is finally banning facial recognition because it can no longer deny that we can’t trust the people who are going to use it.

What Will It Take to Build Trusted AI?

Trust in AI involves at least two key sets of dependencies.

Dependency Set 1: Trust the decision makers.

This includes leaders and entities — the institutions, countries, and companies that are building these solutions. What we know about them matters. Who has a seat at the table matters. Their goals and motivations matter. It all comes down to three key questions:

  1. How much do we trust the decision makers? And those influencing them?

  2. Are they visible? Can we figure out who is involved?

  3. Do they make it easy for us to understand where AI is being used and for what purpose (leveraging which data sets)? This loops back to the first question.

Trust with AI depends on the leader’s and entity’s track record with other decisions. Do they tend to pick trustworthy partners, vendors, and solutions, or even know how to? Drive accountability? Bring in diversity? For example, when the current pandemic hit, consider: whom did we trust?

Dependency Set 2: Build trust into our AI systems.

This second set of trust dependencies covers the technical practices and tools that give decision makers the ability to build reliability, transparency, explainability, and trust into our AI systems. This is where most of the debates are happening. I have participated in technical forums at the Linux Foundation and IEEE, in machine learning performance benchmarking efforts, and in many industry and university debates. Almost every forum begins with principled statements and works toward practical strategies and realistic considerations of time and cost:

  1. Alignment on definition: What do we mean by explainability? Where is it applicable?

  2. Technical feasibility: What is possible?

  3. Business & Operational considerations: What is practical and sustainable?

  4. Risk & Reward: What are the consequences if we fail or don’t act?

  5. Return on Investment: How much trouble/cost are we willing to bear to try to prevent potential consequences?

  6. Motivation & Accountability: How likely are we to be found out or held accountable? What can and will be regulated?

These are not easy questions to answer. Since AI is entering almost every system and every industry, relevancy becomes important. For example, transparency can bring much-needed accountability in some cases (criminal justice). It can also be used to overwhelm and confuse if too much information is shared in difficult-to-understand formats (think liability waivers). Or it can be entirely inappropriate, as in private and sensitive scenarios.

Open source, technical, policy, and public interest communities around the world have been trying to drive consensus, while the companies and institutions building, selling, and using AI systems continue to make their own decisions. Regulations have always trailed innovation. And they have their own set of accountability challenges.

So, what do we do?

Change the Normal

An open-ended question that continues to challenge us is how we will build self-regulation and motivation into AI when businesses are measured on short-term gains and market share — time and money.

Motivation and accountability are needed for responsible AI, trusted AI, and ethical AI. We need a common framework or at least a common set of values — a set of definitions, principles, best practices, tools, checklists, and systems that can be automated and built into products to become trusted technology.

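As a rough illustration of what “automated and built into products” could look like, here is a minimal sketch, with invented check names, of a trust checklist wired into a release gate. It is not a standard framework, just one possible encoding:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TrustCheck:
    name: str
    passed: Callable[[], bool]  # in practice: a test suite, an audit query, a metric threshold

def release_gate(checks: List[TrustCheck]) -> bool:
    """Block a release if any trust check fails, and say which ones."""
    failures = [check.name for check in checks if not check.passed()]
    if failures:
        print("Release blocked by:", ", ".join(failures))
        return False
    return True

# Invented checks for illustration; real ones would query pipelines and evaluations.
checks = [
    TrustCheck("training data provenance documented", lambda: True),
    TrustCheck("bias evaluation run on the latest model", lambda: False),
    TrustCheck("per-decision explanation available", lambda: True),
]

release_gate(checks)  # prints the failing check and returns False
```

The design choice here is simply that the checklist runs in code, as part of shipping, rather than living in a slide deck of principles.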

All the while, we know our second set of considerations is almost always influenced, usurped, manipulated, or ignored by the first set: the people using AI, their goals, and their metrics. For real change, we need business cases for long-term impact that can be understood and developed.

This is where we have a potential glimmer of hope. If we design amazingly robust, trustworthy technology and systems, could it be harder for people to misuse or abuse them? If the shortcomings, biases, and insights into how decisions are made are clearly visible, and we are able to anticipate outcomes by running different simulations and scenarios, could we correct the inequities and unfairness of our past much faster? Could the relevant transparency, built into our systems and processes, give us an opportunity to create checks and balances as well as a level playing field, and steer our institutions and leaders toward greater trustworthiness and reliability by proxy… rather than waiting for them to be shamed or found out?

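As one small example of such a scenario run, here is a sketch (with invented decision counts) of a disparate-impact check based on the common four-fifths heuristic; it makes a skewed outcome visible before a system ships rather than after the harm is done:

```python
# Invented counts from a simulated run of a selection system.
selected = {"group_a": 180, "group_b": 90}
applicants = {"group_a": 400, "group_b": 350}

# Selection rate per group, compared against the best-treated group.
rates = {group: selected[group] / applicants[group] for group in applicants}
best_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / best_rate
    flag = "OK" if impact_ratio >= 0.8 else "REVIEW"  # four-fifths heuristic
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} -> {flag}")
```

A real audit would look at many more outcomes and contexts, but the point stands: what is measured and surfaced can be corrected.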

Yes, in many ways, this is naively optimistic. The same tools could end up giving those with power a stamp of approval without anything really changing. It could become a more sophisticated opportunity for confusion or subterfuge. A cover-up. A way to move people’s attention to AI, to technology, rather than to those who are using it to wield power. But this is where community becomes critical.

We humans may be slow in getting there, but when enough of us become determined to solve a problem, something we never thought possible becomes possible. Even normal.

The current pandemic and the public outcry against racism have shown us that once leaders and institutions take a stand, once the public takes a stand, the people with good ideas and solutions, who have been doing the thinking and the work in the background, can step into visibility. Excuses to keep the status quo appear shallow and stale. We can collectively get to somewhere better than before. But we have to be honest with ourselves and each other for it to last.

Can we do that?

Most companies and institutions have ethical guidelines, best practices, and now AI principles. But we don’t really expect them to live up to those commitments. We know the difference between PR, spin, and reality.

What if it were normal to align our actions with the value systems we advertise, as we are starting to do with our biases and racism right now, and need to keep doing even after our collective attention moves elsewhere? Start with listening to the people who have been thinking about this challenge in a complex, multidisciplinary context for a long time, and who are likely already working at our companies. Understand what has worked and what hasn’t. Be honest about where we are, individually and collectively. And shift from our different starting places, from our here and our now. As we know from life, design, and engineering, everything is ultimately a navigation problem. Sometimes it’s a simple step that gets us going in the right direction. Get beyond talking to doing. For trust and AI, could we start by integrating trust into AI design instead of treating it as optional? And start now, instead of waiting for regulations later?

After all, do we wait for regulations to innovate?

This article is part of the Trust in AI series for The Responsible Innovation Project, exploring the impact of innovation and AI on the way we live, learn, and work.

Translated from: https://medium.com/swlh/trust-in-ai-eff46f0b36c4
