11 Artificial Intelligence Principles

Tags: python, java, machine learning, artificial intelligence

If not, how do we teach values to an autonomous intelligence? Can we codify them or simply enter them somewhere in the system? Is it more of an iterative process where we correct parameters on the fly as systems learn on their own and potentially behave unexpectedly?

It does not seem practical, ideal, or even risk-free to teach values to an AI to preserve ourselves and avoid unwanted situations. Situations come to mind where an AI's behavior was observed but its actions were unpredictable and there were no options to correct the course. As we face these new complexities, behaviors, and potential uses, it makes sense to reflect on and explore what rules are needed. It is a pressing concern as we expand into this new field, especially as AI usage grows and takes over critical applications.

We already have some frameworks and rules

The AI space has benefited from high-level foundational principles. One organization, The Partnership on AI, has published high-level tenets to preserve AI as a positive and promising force. That is a first step forward, but it does not address the day-to-day needs on the ground, especially as we go from experimenting to releasing AIs in the wild.

On the technology side, perhaps the best starting point, and the main gap today, would be to define design principles intended first for the technologists who are building the AIs and second for the teams managing those advanced intelligence systems.

There are many shades of AI

Of course, there is AI and then there is AI. They are not all created equal or for the same purposes:

  • They have various levels of independence: from following a script under human supervision to independently allocating resources to robots in a factory
  • They have a wide range of responsibilities: from tweeting comments to managing armed drones
  • They operate in different environments: from a lab not connected to the internet to a live trading environment

Photo by Austin Distel on Unsplash

A checklist for the pioneers

There are many considerations when designing AI systems to keep the risk to society manageable, especially for scenarios involving high independence, key responsibilities, and sensitive environments:

  1. No black box: It has to be possible to check inside the program and review the code, logs, and timelines to understand how a system made a decision and which sources were checked. It should not be all machine code: users should be able to visualize and quickly understand the steps followed. That would avoid situations where programs are shut down because nobody can fix bad behaviors or unintended actions (see the decision-log sketch after this list).

  2. Debug mode: Artificial intelligence systems should have a debug mode, which could be turned on when the system makes mistakes, delivers unexpected results, or acts erratically. That would allow system administrators and support teams to quickly find root causes and track more parameters, at the cost of temporarily slowing down processing (see the debug-switch sketch after this list).

  3. Fail-safe: For higher-risk cases, systems should have a fail-safe switch to reduce or turn off any capability creating issues that cannot be fixed on the fly or explained quickly, to prevent potential damage. It is similar to the quality control process in a factory, where an employee can stop the assembly line when they perceive an issue (see the capability-gate sketch after this list).

  4. Circuit breaker: For extreme cases, it must be possible to shut down the entire system. Some systems cannot be debugged in real time and could do more harm than good if left active. Stock exchanges have automated circuit breakers to manage volatility and avoid crashes. Automated trading systems using AI should have the same mechanisms in place, even if they have never had issues. That would prevent black swan events, bugs, hacks, or any one-time situation from leading to erratic trading and massive losses (see the circuit-breaker sketch after this list).

  5. Approval matrices: At some point in the future, systems will fully mimic human reasoning and follow complex decision trees, applying judgment and making decisions. Humans should be in the chain of command and approve key decisions, especially when those are not repetitive and require some independent thinking. It can be useful to keep the RACI framework in mind. If an autonomous bus sometimes takes a slight detour to skip traffic, it should notify a human. If it decides to use a new road for the first time, that choice should be approved by a human to avoid accidents. Giving systems control over resources such as electric power, security, and internet bandwidth can prove problematic, especially if bugs, security flaws, and other issues are discovered (see the human-in-the-loop sketch after this list).

  6. Keeping track of assets, delegation, and autonomy: Humans get substantial leverage by transferring work to machines, especially when tasks become too complex, fast, expensive, or time-consuming. Algorithmic trading and real-time optimization solutions are good examples. However, users should never delegate decision-making completely, stay on the sidelines until issues arise, or lose track of which processes are automated or delegated to an AI. This is particularly relevant, for example, with the advances in Robotic Process Automation (RPA). As RPA expands (it is currently the fastest-growing software solution for enterprises), employees will start setting up their own routines, which could run in the cloud indefinitely without anybody's direct involvement. Companies should track centrally which routines are running and what AI agents are doing and creating. They should also implement policies preventing employees from using their own RPA from a USB drive or from the cloud to outsource tasks that should be controlled and owned by the company. Companies and users should also ensure they have a back door to access any bots or AI processes running in the background, in case the main account gets disabled and users are locked out, or in case of emergency if the regular account stops working.

  7. No completely virtual or decentralized environments: A while back, Kazaa, Skype, and other peer-to-peer networks touted the idea of fully decentralized systems that would not reside in one location but instead would be hosted fractionally on a multitude of computers, with the ability to replicate content and repair themselves as hosts dropped from the network. The same idea is one of the foundations of blockchain. It could obviously become a major threat if an autonomous AI system had this ability, went haywire, and became indestructible.

  8. Feedback with discernment: The ability to receive and process feedback can be a great differentiator. It already allows voice recognition AI to understand and translate more languages than any human could ever learn. It can also enable machines to understand accents and local dialects. However, in some applications, such as social media bots or a newsroom, consuming all the feedback and using it raw can prove problematic. Between fake news, trolls, and users testing a system's limits, processing feedback properly can be challenging for most AIs. In those areas, AIs need filters and tools to use feedback optimally and remain useful. Tay, the social bot from Microsoft, quickly fell off the deep end after misusing raw feedback and taunts, releasing offensive content to its followers because it could not tell acceptable inputs from the ones leading to unwanted outputs (see the feedback-filter sketch after this list).

  9. Annotated and editable code: In the event that machines write, edit, and update code, all code should automatically carry embedded comments explaining the system's logic behind each change. Humans or another system should be able to review and change the code if needed, with the proper context and an understanding of prior revisions.

  10. Plan C: As with all systems, AIs in live environments have backups. Unlike typical IT systems, though, we are reaching a point where we cannot fully explore, understand, or test the AI systems we are building. If an AI system failed, went blank, or had major issues, we could revert to a backup that contains the same flaws and ends up reproducing the problematic behavior. In those cases, there should always be a plan C: switch back to human operations and use an alternative technology. As an example, a call center could handle thousands of automated AI-based voice interactions a day and dispatch callers based on keywords. As volumes grow or peak, performance could degrade, calls could drop, and the system could eventually crash. The backup could be restored but would still contain the same flaw. The only options would be to turn everything off and decline all calls, or to have a plan C in place that redirects incoming calls to humans or to an alternative system (see the fallback sketch after this list).
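
To make principle 1 (no black box) concrete, here is a minimal sketch of a decision audit trail. The `log_decision` helper, the record fields, and the local `decision_log.jsonl` file are illustrative assumptions, not part of any specific framework; the point is only that every decision keeps a human-readable trace of inputs, sources, and steps.

```python
import json
import time

def log_decision(decision: str, inputs: dict, sources: list, steps: list) -> dict:
    """Hypothetical audit record: each decision keeps its inputs, the sources
    consulted, and the reasoning steps, so reviewers can replay the timeline."""
    record = {
        "timestamp": time.time(),
        "decision": decision,
        "inputs": inputs,
        "sources_checked": sources,
        "steps": steps,  # human-readable trace, not machine code
    }
    with open("decision_log.jsonl", "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return record

# Illustrative call: a fraud check records why a transaction was flagged.
log_decision(
    decision="flag_transaction",
    inputs={"amount": 9800, "country": "XX"},
    sources=["sanctions_list_v3", "customer_history"],
    steps=["amount above threshold", "country on watch list"],
)
```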
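
For principle 2 (debug mode), a minimal sketch of a switchable verbosity level using Python's standard logging module. The `InferenceService` wrapper and the lambda standing in for a model are hypothetical; the idea is simply that one flag trades throughput for visibility.

```python
import logging

logger = logging.getLogger("ai_system")
logging.basicConfig(level=logging.INFO)

class InferenceService:
    """Hypothetical wrapper around a model, with a switchable debug mode."""

    def __init__(self, model):
        self.model = model
        self.debug = False

    def set_debug(self, enabled: bool) -> None:
        # Debug mode logs extra parameters so support teams can trace root
        # causes, at the cost of slower processing.
        self.debug = enabled
        logger.setLevel(logging.DEBUG if enabled else logging.INFO)

    def predict(self, features: dict) -> float:
        score = self.model(features)
        if self.debug:
            # Extra context captured only while debugging.
            logger.debug("inputs=%s score=%.4f model=%r", features, score, self.model)
        return score

# Usage: flip debug on when the system starts acting erratically.
service = InferenceService(model=lambda features: 0.5)
service.set_debug(True)
service.predict({"amount": 120.0})
```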
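
For principle 3 (fail-safe), a sketch of a per-capability kill switch. `CapabilityGate` and the capability names are made up for illustration; a production version would persist the switch state and alert operators instead of just raising.

```python
class CapabilityGate:
    """Hypothetical fail-safe: individual capabilities can be switched off
    without taking the whole system down."""

    def __init__(self, capabilities):
        self._enabled = {name: True for name in capabilities}

    def disable(self, name: str) -> None:
        # Pull the fail-safe for one misbehaving capability only.
        self._enabled[name] = False

    def guard(self, name: str):
        def decorator(fn):
            def wrapper(*args, **kwargs):
                if not self._enabled.get(name, False):
                    raise RuntimeError(f"capability '{name}' is disabled by fail-safe")
                return fn(*args, **kwargs)
            return wrapper
        return decorator

gate = CapabilityGate(["summarize", "auto_reply"])

@gate.guard("auto_reply")
def auto_reply(message: str) -> str:
    return "Thanks, we received your message."

gate.disable("auto_reply")  # an operator pulls the switch
try:
    auto_reply("hello")
except RuntimeError as exc:
    print(exc)
```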
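
For principle 4 (circuit breaker), a sketch of the classic breaker pattern applied to an automated trading loop: after repeated failures inside a rolling window, every further call is rejected until a human resets the breaker. The class name, thresholds, and the failing `place_order` stub are assumptions for illustration only.

```python
import time

class CircuitBreaker:
    """Illustrative breaker: trips after too many failures in a rolling window,
    then rejects every call until a human operator resets it."""

    def __init__(self, max_failures: int = 3, window_seconds: float = 60.0):
        self.max_failures = max_failures
        self.window_seconds = window_seconds
        self.failure_times = []
        self.open = False

    def call(self, fn, *args, **kwargs):
        if self.open:
            raise RuntimeError("circuit open: system halted pending human review")
        try:
            return fn(*args, **kwargs)
        except Exception:
            now = time.monotonic()
            self.failure_times = [t for t in self.failure_times
                                  if now - t < self.window_seconds]
            self.failure_times.append(now)
            if len(self.failure_times) >= self.max_failures:
                self.open = True  # trip: stop all automated actions
            raise

    def reset(self) -> None:
        # Closing the circuit again is a deliberate human decision.
        self.failure_times.clear()
        self.open = False

def place_order(symbol: str, qty: int):
    raise ValueError("rejected order")  # stand-in for an erratic strategy

breaker = CircuitBreaker(max_failures=3)
for _ in range(4):
    try:
        breaker.call(place_order, "ACME", 100)
    except (ValueError, RuntimeError) as exc:
        print(exc)  # fourth attempt is blocked by the open circuit
```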
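
For principle 5 (approval matrices), a sketch of a human-in-the-loop gate: routine actions only notify, while non-routine actions block until an accountable human (the "A" in RACI) approves. The action names and the `input()`-based prompt are placeholders for a real approval queue or ticketing system.

```python
ROUTINE_ACTIONS = {"minor_detour"}  # pre-approved, repetitive decisions

def request_human_approval(action: str, details: str) -> bool:
    # Placeholder: a real deployment would page an operator, open a ticket,
    # or wait on an approval queue instead of blocking on stdin.
    answer = input(f"Approve '{action}' ({details})? [y/N] ")
    return answer.strip().lower() == "y"

def decide(action: str, details: str) -> bool:
    """Return True if the system may proceed with the action."""
    if action in ROUTINE_ACTIONS:
        print(f"Notifying humans: taking routine action '{action}'.")
        return True
    # Non-routine decisions stay with the accountable human.
    return request_human_approval(action, details)

decide("minor_detour", "skip traffic on Main St")  # notify only
decide("new_route", "first time using Highway 7")  # blocks for approval
```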
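
For principle 8 (feedback with discernment), a sketch of a filter that sits between raw user feedback and any learning loop. The reputation scores, rate limit, and blocked-term list are illustrative placeholders, not a real moderation system.

```python
from collections import Counter

class FeedbackFilter:
    """Hypothetical filter in front of an online-learning loop: feedback is
    only used for training if the source looks trustworthy and the content
    passes simple checks. Thresholds are illustrative."""

    def __init__(self, min_reputation: float = 0.6, max_per_user: int = 20):
        self.min_reputation = min_reputation
        self.max_per_user = max_per_user
        self.counts = Counter()
        self.blocked_terms = {"badword"}  # placeholder list

    def accept(self, user_id: str, reputation: float, text: str) -> bool:
        self.counts[user_id] += 1
        if self.counts[user_id] > self.max_per_user:
            return False  # likely coordinated taunting or spam
        if reputation < self.min_reputation:
            return False  # untrusted or brand-new source
        if any(term in text.lower() for term in self.blocked_terms):
            return False  # obviously unusable content
        return True

corpus = []
feedback_filter = FeedbackFilter()
if feedback_filter.accept("user42", reputation=0.9, text="The reply missed my question"):
    corpus.append("The reply missed my question")  # safe to learn from
```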
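
For principle 10 (plan C), a sketch of a dispatcher that tries the primary AI, then the restored backup, and finally falls back to humans. The handlers are stubs that simulate the call-center failure described above; the function names are assumptions.

```python
def primary_ai(call: dict) -> str:
    raise RuntimeError("model crashed under peak load")  # simulated failure

def backup_ai(call: dict) -> str:
    raise RuntimeError("backup restored, but it has the same flaw")  # same bug

def human_queue(call: dict) -> str:
    return f"call {call['id']} routed to a human agent"

def dispatch(call: dict) -> str:
    """Try plan A, then plan B, and always keep plan C (humans) available."""
    for handler in (primary_ai, backup_ai, human_queue):
        try:
            return handler(call)
        except RuntimeError:
            continue
    return "call declined"  # should be unreachable while plan C exists

print(dispatch({"id": 1, "keywords": ["billing"]}))
```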

What could happen long-term?

Photo by Arseny Togulev on Unsplash

In the worst case, a dystopian scenario: we end up with sprawling systems that we do not control very well and have trouble fixing or managing, leading to catastrophes. Skynet and HAL 9000 come to mind. Many additional dark scenarios can be found in Black Mirror on Netflix. Great innovation can lead to collisions. The quest for growth, efficiency, and profits can open the door to unsustainable risks.

In the best-case scenario, we manage to strike a balance between using intelligent machines for efficiency and ensuring prosperity for our civilization. It translates into better jobs and a higher quality of life for all.

What do you think? Are there reasons to fear unchecked autonomous intelligences? Are we doing it well today? What other principles can you think of?

Max Dufour is a Partner with Harmeda and leads strategic engagements for Financial Services, Technology, and Strategy Consulting clients. He can be reached directly at [email protected] or on LinkedIn.

Translated from: https://towardsdatascience.com/11-artificial-intelligence-principles-554fd8adb36a

Copyright notice: This post follows the CC 4.0 BY-SA license; please include the original source link and this notice when reposting.
Post link: https://blog.csdn.net/weixin_26712095/article/details/109123055
