Information-Theoretic Limits of Control


Computer Networks Exercises

Computer Networks Exercise Set One. I. Fill in the blanks.

1. The main performance indicators of a computer network are ____ and ____.

2. In a computer network architecture, ____ control the communication between peer entities, while ____ are provided by a lower layer to the layer above through the inter-layer interface.

3. The maximum information-transmission rate of a channel can be expressed by the formula ____ b/s.

4. Ten stations are connected to an Ethernet: in one case the 10 stations are connected to a 10 Mb/s Ethernet hub, and in the other the 10 stations are connected to a 10 Mb/s Ethernet switch.

The bandwidth each station obtains is then ____ Mb/s and ____ Mb/s respectively.

5. To reduce duplicate entries in a forwarding table, ____ can be used in place of all the entries that share the same next hop.

6. ATM uses the ____ as its unit of information transfer, moves many functions from software into hardware, and lets users allocate bandwidth on demand through ____ and ____.

7. The ____ protocol defines the rules of communication between a browser and a Web server.

8. Computer network architecture is usually studied with a ____ method of analysis; the typical architectures are the ____ model and the ____ model.

9. Network convergence usually refers to the gradual merging of the telecommunication network and the cable-television network into computer network technology.

The most important functions a computer network offers its users are ____ and ____.

10. "Client" and "server" both refer to the two application processes involved in a communication; the client-server model describes a relationship of serving and being served between ____.

11. The network layer provides logical communication between ____, while the transport layer provides end-to-end logical communication between ____.

12. The basic problems of the data link layer are framing, transparent transmission, and error detection.

Transparent transmission is achieved by ____ or ____.

13. An interior gateway protocol implements routing within a single autonomous system, and its focus is on the ____ within the domain; an exterior gateway protocol exchanges routing information between routers of different ASes, and therefore emphasizes the ____ of route selection.

Computer Networks After-Class Exercises

1-3 Compare the main advantages and disadvantages of circuit switching, message switching, and packet switching from several angles.

Answer: (1) Circuit switching. When computer terminals communicate by circuit switching, one side initiates a call and occupies a physical line exclusively. Once the exchange completes the connection and the called side receives the caller's signal, the two sides can communicate, and both parties occupy the circuit for the entire duration of the call. Its strengths are good real-time behavior, low delay, and relatively cheap switching equipment. Its drawbacks are low line utilization, long circuit set-up time, low communication efficiency, and the inability of different types of terminals to communicate with one another. Circuit switching suits communication between fixed, frequent correspondents exchanging large volumes of data and long messages.

(2) Message switching. The user's message is stored in the switch's memory. When the required output circuit becomes idle, the message is sent on to the receiving switch or terminal; data cross the network in store-and-forward fashion. The advantages of message switching are high trunk utilization, since many users can share a single line, and interworking between terminals of different speeds and different procedures. Its shortcomings, however, are plain: because storing and forwarding operate on whole messages, network delay is large, a great deal of switch memory and disk space is consumed, and users with strict real-time requirements cannot be served. Message switching suits communication between network users whose messages are short and whose real-time demands are modest, such as a public telegraph network.

(3) Packet switching. Packet switching grew essentially out of the store-and-forward idea, and it combines the advantages of circuit switching and message switching. It uses dynamic multiplexing on a line to carry data that have been divided into many small segments of bounded length, called packets. Each packet is labeled with a header, and a single physical line carries several data packets at once under dynamic multiplexing. Data from the sending user are held briefly in the switch's memory and then forwarded inside the network. At the receiving end the packet headers are removed and the data fields are reassembled in order into the complete message. Packet switching achieves higher circuit utilization than circuit switching, and smaller transmission delay and better interactivity than message switching.

1-10 Compare circuit switching and packet switching under the following conditions. The message to be sent is x bits long and travels from the source to the destination over k links, each with a propagation delay of d seconds and a data rate of C bit/s. Under circuit switching the circuit set-up time is s seconds; under packet switching the packet length is p bits, and queueing delay at the nodes is negligible.
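The excerpt stops before the answer; in the problem's own notation, the standard comparison runs as follows (a sketch, not the textbook's worked solution). Total delay with circuit switching is the set-up time, plus the time to transmit the whole message, plus propagation over the k links:

$$D_{\text{circuit}} = s + \frac{x}{C} + kd$$

With packet switching (store-and-forward, zero queueing), the source takes x/C to send the message, the last packet is then relayed by the k−1 intermediate nodes at p/C each, and propagation again adds kd:

$$D_{\text{packet}} = \frac{x}{C} + (k-1)\frac{p}{C} + kd$$

Packet switching therefore has the smaller total delay whenever (k−1)p/C < s.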

PLC Controller Manual

PLC Controller Operating Manual — Chengde Chengshen Automation Metering Instruments Co., Ltd. This system is designed for weigh feeders, solid-flow meters, and belt scales. It is built on the German Siemens S7-200 series PLC, which offers stable operation, high accuracy, and strong expandability. A large-format Siemens touch screen makes the operating interface clear, intuitive, and easy to use, and all data entry goes through an on-screen soft keyboard, which is convenient for the operator. The system can fully replace original Schenck-series instruments in their applications; parts of its performance exceed the original instruments, at a lower price and with stronger expandability. The sections below explain the operation of the whole system; please read them through in order.

First, the main screen after power-up: after power-on the touch screen runs a self-test, and after a few tens of seconds it connects to the PLC and the main screen appears. The main screen shows the operating view of the two feeders. If an alarm is active, an alarm prompt appears in the upper-right corner of the screen; tapping the prompt shows the specific alarm information. Main screen layout: the screen is split between the two feeders, No. 1 on the left and No. 2 on the right. We take feeder No. 1 as the example.

"#1 Start" starts the feeder.
"#1 Stop" stops the feeder.
"ON/OFF" shows whether the feeder is currently running or stopped.
"G-MODE/V-MODE" shows whether the feeder is currently in gravimetric (weighing) mode or volumetric mode.
"Set flow" shows the feeder's setpoint. Tapping the number after "Set flow" brings up an on-screen keyboard of digits and some letters; pressing the appropriate digit keys and then the confirm key completes the change. (The setpoint takes effect when parameter B07 is set to "touch screen".)
"Actual flow" shows the feeder's measured flow.
"Belt load" shows the load at the feeder's weighing section.
"Belt speed" shows the feeder's belt speed.
"Totalized flow" shows the feeder's accumulated total.

That completes the main screen; next, the function screen. The "Function" key sits in the upper-left corner of the main screen; tapping it brings up the function screen (shown as a figure in the original manual). Function screen layout: the system function screen is likewise split between the two feeders, No. 1 on the left and No. 2 on the right. Again we take feeder No. 1 as the example.

1. Calibration: set the necessary program parameters from the actual application so that the system runs accurately on site.

Nankai University, Fall 2014: "Computer Network Fundamentals" Online Assignment Answers

Computer Network Fundamentals, Fall 2014 online assignment. I. Multiple choice.

1. The physical layer's main task is to determine certain characteristics of the interface with the transmission medium; ( ) specify the order in which the various possible events occur for the different functions. A. mechanical characteristics  B. electrical characteristics  C. functional characteristics  D. procedural characteristics. Correct answer: D
2. The remote terminal protocol is ( ). A. HTTP  B. FTP  C. TELNET. Correct answer: C
3. The physical layer's main task is to determine certain characteristics of the interface with the transmission medium; ( ) specify what the appearance of a given voltage level on a given line means. A. mechanical characteristics  B. electrical characteristics  C. functional characteristics  D. procedural characteristics. Correct answer: C
4. The Dynamic Host Configuration Protocol, ( ), provides a plug-and-play networking mechanism. A. DHCP  B. DNS  C. POP3. Correct answer: A
5. ( ) is the entity that carries a message. A. Data  B. Signal  C. Code element. Correct answer: A
6. Ethernet whose rate reaches or exceeds ( ) is called high-speed Ethernet. A. 100 Mb/s  B. 1 Gb/s  C. 10 Gb/s  D. 10 Mb/s. Correct answer: A
7. ( ) is the time an electromagnetic wave takes to propagate a given distance through the channel. A. Queueing delay  B. Processing delay  C. Propagation delay  D. Transmission delay. Correct answer: C
8. Fiber to the building is ( ). A. FTTO  B. FTTB  C. FTTZ  D. FTTF. Correct answer: B
9. ( ) is the structure or format of data and control information. A. Syntax  B. Semantics  C. Synchronization  D. Lexicon. Correct answer: A
10. The simple network management protocol is ( ). A. SNMP  B. SMTP  C. TCP. Correct answer: A
11. In ( ) communication the two parties can send and receive at the same time. A. Simplex  B. Half-duplex  C. Full-duplex. Correct answer: C
12. The hypertext transfer protocol is ( ). A. FTP  B. TFTP  C. HTTP. Correct answer: C
13. ( ) is the time needed, when sending data, for the data block to pass from the node onto the transmission medium. A. Queueing delay  B. Processing delay  C. Propagation delay  D. Transmission delay. Correct answer: D
14. ( ) is the basic waveform that represents the different discrete values when a digital signal is expressed as a waveform in the time domain.

ISO17025-2017 Laboratory Data Control and Information Management Procedure

XXXXXXXX Co., Ltd. — Data Control and Information Management Procedure. Document number:  Version:  Prepared by:  Reviewed by:  Approved by:  Issued by XXXXX Co., Ltd.

1.0 Purpose. This procedure is established to ensure that the collection, calculation, processing, recording, reporting, storage, and transmission of test data and information are accurate, reliable, valid, and confidential.

2.0 Scope. 2.1 Collection of test data and information; 2.2 handling of near-limit data and information; 2.3 calculation and processing of test data and information; 2.4 rounding of data and information; 2.5 judgment of data and information; 2.6 transfer of data and information; 2.7 correction of erroneous data and information; 2.8 handling of suspect data and information; 2.9 management of computers and of data and information.

3.0 Responsibilities. 3.1 Technical manager: a) organizes the preparation, revision, and approval of automated measurement software; b) organizes the verification of automated measurement software. Testing-room head: c) determines the room's methods for collecting original test data and information; d) organizes the drafting of operating procedures for automated equipment.

3.2 Supervisors: a) check test data and information and the judgments of results; b) verify suspect data and information.

3.3 Testers: a) collect and record original data and information conscientiously; b) calculate and process data and information according to the rules.

3.4 The technical manager maintains the validity of this procedure.

4.0 Procedure. 4.1 Collection of data and information. 4.1.1 Following the standards and test protocols the room undertakes, the testing-room head shall specify the collection method and format of the original test data and information for every product or test item handled.

4.1.2 Where original data and information are collected or printed automatically, the testing-room head shall verify and control the measurement system used for the collection; for the control method see clause 4.4.4 of this procedure.

4.1.3 Collected original data and information shall be rounded or truncated, following the principle of rounding before calculation; the number of significant digits in the finally reported data shall follow the standard's requirement, or carry one digit more than the standard requires.
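The procedure names the rounding principle but no algorithm; Chinese numerical-rounding practice (the "round half to even" rule of GB/T 8170) can be sketched with Python's decimal module — the digit counts below are illustrative assumptions, not values from this procedure:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def round_half_even(value, places):
    """Round-half-to-even to the given number of decimal places."""
    quantum = Decimal(10) ** -places          # e.g. places=2 -> Decimal('0.01')
    return Decimal(str(value)).quantize(quantum, rounding=ROUND_HALF_EVEN)

print(round_half_even(0.125, 2))  # 0.12 -- the trailing 5 rounds to the even neighbour
print(round_half_even(0.135, 2))  # 0.14
```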

4.2 Handling of near-limit data and information. 4.2.1 When a measured value is close to a critical control limit, the tester shall increase the number of measurements in order to observe whether the results tend to scatter. If the successive observations level off or converge, the data and information may be processed according to the data-processing rules or procedures.

4.2.2 If the successive observations tend to diverge, the cause of the divergence shall be investigated to determine whether the problem lies with the measuring instrument or with the item under test.

Demystifying the Shannon Theorem: Information Theory

The Shannon theorem, named after the field's founder Claude Shannon, is the centerpiece of information theory. It states the limit on information transmission in digital communication: the channel capacity. The aim of this article is to explain the Shannon theorem and, through it, give a better picture of information theory.

I. Measuring information. First we need to know how information theory measures an amount of information: by the information entropy. Information entropy measures the degree of uncertainty over a set of possibilities; it is the average amount of information in a system. For example, a coin toss whose heads and tails are equally likely has an entropy of 1 bit. As another example, an event with four equally likely outcomes (say, two coin tosses) has an entropy of 2 bits. Entropy is usually written H and measured in bits (bit).
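The two coin examples can be checked with a few lines of code (a minimal sketch; the function name and the example distributions are ours, not the article's):

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum p * log2(p), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))                 # fair coin: 1.0 bit
print(entropy([0.25, 0.25, 0.25, 0.25]))   # four equally likely outcomes: 2.0 bits
```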

II. Capacity of a noiseless channel. Next we consider the capacity of a deterministic channel. A deterministic channel is one in which the information suffers no noise; in that case there is in principle no limit from interference on the channel's transmission rate. Even so, the entropy of the source must be less than or equal to the channel's capacity. When the source entropy equals the channel capacity, the limiting data rate is called the Shannon limit. The Shannon limit, the theoretically fastest data rate, is given by the formula C = B log2(1 + S/N), where C is the channel capacity, B is the channel bandwidth, and S and N are, respectively, the signal power and the noise power in the channel. This formula tells us that as the signal-to-noise ratio (SNR) grows, the channel capacity grows with it.
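As a worked illustration (the 3000 Hz bandwidth and 30 dB SNR of a telephone-grade line are assumed figures, not from the article):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Channel capacity C = B * log2(1 + S/N), in bit/s."""
    return bandwidth_hz * math.log2(1 + snr_linear)

snr_db = 30.0                        # assumed signal-to-noise ratio
snr = 10 ** (snr_db / 10)            # 30 dB -> a linear ratio of 1000
print(shannon_capacity(3000, snr))   # ~29901 bit/s, near the old voice-band modem limit
```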

III. Capacity of a noisy channel. In real life, transmission is constantly disturbed by noise, and then the computation of channel capacity becomes more involved. The capacity of a noisy channel is computed with an extended version of Shannon's formula. Two elements enter: first, a correction for the signal-to-noise ratio, the Shannon-Hartley relation, which accounts for the effect of noise on the data rate; second, channel coding, i.e., techniques such as error-correcting codes and flow control, which soften the effect of noise to some degree and raise the achievable data rate.

IV. Applications. The Shannon theorem is applied throughout wireless communication: mobile telephony, wireless e-mail, satellite links, mobile applications, and more. Building on its basic principles, engineers keep devising more advanced communication techniques and developing more efficient, stable, convenient, and secure devices and networks, making the exchange of information ever easier and faster and effectively driving social and economic progress.

SPC: Statistical Process Control

SPC control charts: continuous and discrete data. Continuous data are measured values that can be meaningfully subdivided without limit (time, length). Discrete data are category information that can be counted but not meaningfully subdivided (pass/fail). A control chart consists of a center line, an upper control limit (UCL), and a lower control limit (LCL).

Note the difference between control limits and specification limits. The control limits (UCL, LCL) are computed from averages: the process center value plus or minus three standard deviations. That is, control limits are calculated from the sample data; they are an internal characteristic of the process, determined by the process capability. The specification limits (USL, LSL) are set by the governing standard; they are an external characteristic of the process. Most specifications apply to individual values and are determined by the customer's requirements.

The variation seen on a control chart divides into variation from common causes and variation from special causes. Special-cause variation is the easier to spot: for instance, a point beyond the upper or lower control limit, or the familiar seven-point run rule. The job of the control chart is to catch these anomalies, trace their root causes, and drive corrective action.
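Two of these detection rules are simple enough to sketch in code (a minimal illustration with invented data; real SPC software implements a larger rule set, e.g. the Western Electric rules):

```python
def control_chart_violations(points, cl, ucl, lcl):
    """Flag two common out-of-control signals: a point beyond a control
    limit, and seven successive points on one side of the center line."""
    flags = []
    run, side = 0, 0
    for i, x in enumerate(points):
        if x > ucl or x < lcl:
            flags.append((i, "beyond control limit"))
        s = (x > cl) - (x < cl)   # +1 above, -1 below, 0 on the line
        run = run + 1 if (s != 0 and s == side) else (1 if s != 0 else 0)
        side = s
        if run == 7:
            flags.append((i, "run of seven on one side"))
    return flags

# tiny demo with made-up numbers
data = [10.1, 10.2, 10.3, 10.1, 10.2, 10.4, 10.2, 9.5]
print(control_chart_violations(data, cl=10.0, ucl=10.6, lcl=9.4))
```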

Choosing a control chart. Continuous subgrouped data: the XBar-R chart or the XBar-S chart. Continuous individual values: the I-MR chart. Discrete counts of nonconforming items following a binomial distribution: the nP chart when subgroup sizes are equal, otherwise the P chart. Discrete counts of defects following a Poisson distribution: the C chart when subgroup sizes are equal, otherwise the U chart.

XBar-R chart. The XBar (averages) chart shows the central tendency of the variable X over time and the variation between subgroups. Note that each point on the chart is the average of one subgroup, while the chart's center line is the average of the subgroup averages.

The R (range) chart monitors the variation inside the subgroups over time. Its center line is the long-term average of the subgroup ranges, written R-bar. The R chart is only suitable when the subgroup size is small.
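A minimal sketch of the XBar-R limit calculation (the constants A2, D3, D4 are the standard SPC table values for subgroup size n = 5, and the measurements are invented):

```python
# XBar-R control limits for subgroup size n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114   # SPC table constants for n = 5

subgroups = [[10.2, 9.9, 10.1, 10.0, 9.8],
             [10.0, 10.3, 9.7, 10.1, 10.0],
             [9.9, 10.0, 10.2, 9.8, 10.1]]

xbars = [sum(g) / len(g) for g in subgroups]    # subgroup averages
ranges = [max(g) - min(g) for g in subgroups]   # subgroup ranges
xbarbar = sum(xbars) / len(xbars)               # center line of the XBar chart
rbar = sum(ranges) / len(ranges)                # center line of the R chart

print("XBar chart: CL=%.3f UCL=%.3f LCL=%.3f"
      % (xbarbar, xbarbar + A2 * rbar, xbarbar - A2 * rbar))
print("R chart:    CL=%.3f UCL=%.3f LCL=%.3f"
      % (rbar, D4 * rbar, D3 * rbar))
```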

XBar-S chart. The XBar (averages) chart again shows the central tendency of X over time and the variation between subgroups, exactly as in the XBar-R chart. The S chart plots the standard deviation: it monitors the variation inside the subgroups over time. Its center line is the long-term average of the subgroup standard deviations, and the S chart can be used for any subgroup size n greater than 2.

Information Security Classified Protection: Level 4 Technical Requirements

Specific technical requirements for Level 4 of information security classified protection. How China's classified protection of information systems (等级保护) relates to graded protection of secret-related systems (分级保护):

Protected objects: classified protection covers non-secret information systems; graded protection covers secret-related information systems.
Administering body: the public security organs, versus the national secrecy administration.
Standards: national standards (GB, GB/T), versus national secrecy standards (BMB, mandatory).
Levels: classified protection has five levels (Level 1 autonomous protection, Level 2 guided protection, Level 3 supervised protection, Level 4 mandatory protection, Level 5 specially controlled protection). Graded protection interlocks with this scheme: the Secret, Confidential, and Top Secret grades are protected at no less than Levels 3, 4, and 5 respectively.

I. Network security

1.1 Network security audit
1) Comprehensively monitor and record the operating state of network devices, network traffic, user behavior, and so on;
2) for each event, the audit record shall include the date and time, the user, the event type, whether the event succeeded, and other audit-relevant information;
3) the audit facility shall support analysis of the recorded data and generation of audit reports;
4) it shall provide real-time alarms, in a specified form, for designated events;
5) audit records shall be protected against unexpected deletion, modification, or overwriting;
6) the audit facility shall be able to trace and detect possible security violations and terminate the offending process;
7) the auditor shall be able to set thresholds on the audit trail: when storage approaches its limit, necessary measures are taken (such as alarm and export); when storage is exhausted, auditable events are stopped;
8) auditing shall be centralized under the information system's unified security policy;
9) network device clocks shall stay synchronized with a time server.

1.2 Boundary integrity checking
1) Detect internal users connecting to external networks without authorization ("illegal outreach");
2) check for unauthorized devices privately attached to the network, locate them precisely, and block them effectively;
3) check for internal users privately connecting out to external networks, pin down their location, and block them effectively;
4) block the outflow of important information according to the information-flow control policy and the sensitivity labels of the flows (labels on network devices and on designated routing information).

1.3 Network intrusion prevention
1) At the network boundary, watch for the following attacks: port scans, brute-force attacks, trojan and backdoor attacks, denial-of-service attacks, buffer-overflow attacks, IP fragmentation attacks, network worms, and other intrusion events;
2) when an intrusion event is detected, record the source IP, the attack type, the attack target, and the attack time, raise a security alert (for example an on-screen prompt, an e-mail alert, or an audible alarm), and automatically take the corresponding action.


arXiv:chao-dyn/9905039v1 26 May 1999

Information-Theoretic Limits of Control

Hugo Touchette and Seth Lloyd
d'Arbeloff Laboratory for Information Systems and Technology, Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
(January 9, 2014)

Fundamental limits on the controllability of physical systems are discussed in the light of information theory. It is shown that the second law of thermodynamics, when generalized to include information, sets absolute limits to the minimum amount of dissipation required by open-loop control. In addition, an information-theoretic analysis of closed-loop control shows feedback control to be essentially a zero-sum game: each bit of information gathered directly from a dynamical system by a control device can serve to decrease the entropy of that system by at most one bit additional to the reduction of entropy attainable without such information (open-loop control). Consequences for the control of discrete binary systems and chaotic systems are discussed.

PACS numbers: 05.45.+b, 05.20.-y, 89.70.+c

Information and uncertainty represent complementary aspects of control. Open-loop control methods attempt to reduce our uncertainty about system variables such as position or velocity, thereby increasing our information about the actual values of those variables. Closed-loop methods obtain information about system variables, and use that information to decrease our uncertainty about the values of those variables. Although the literature in control theory implicitly recognizes the importance of information in the control process, information is rarely regarded as the central quantity of interest [1]. In this Letter we address explicitly the role of information and uncertainty in control processes by presenting a novel formalism for analyzing these quantities using techniques of statistical mechanics and information theory. Specifically, based on a recent proposal by Lloyd and Slotine [2], we formulate a general model of control and investigate it using entropy-like quantities. This allows us to make mathematically precise each part of the intuitive statement that in a control process, information must constantly be acquired, processed and used to constrain or maintain the trajectory of a system. Along this line, we prove several limiting results relating the ability of a control device to reduce the entropy of an arbitrary system in the cases where (i) such a controller acts independently of the state of the system (open-loop control), and (ii) the control action is influenced by some information gathered from the system (closed-loop control). The results are applied both to the stochastic example of coupled Markovian processes and to the deterministic example of chaotic maps. These results not only combine concepts of dynamical entropy and information in a unified picture, but also prove to be fundamental in that they represent the ultimate physical limitations faced by any control system.

The basic framework of our present study is the following. We assign to the physical plant X we want to control a random variable X representing its state vector (of arbitrary dimension) and whose value x is drawn according to a probability distribution p(x). Physically, this probabilistic or ensemble picture may account for interactions with an unknown environment, noisy inputs, or unmodelled dynamics; it can also be related to a deterministic sensitivity to some parameters which make the system effectively stochastic. The recourse to a statistical approach then allows the treatment of both the unexpectedness of the control conditions and the dynamical stochastic features as two faces of a single notion: uncertainty.

As is well known, a suitable measure quantifying uncertainty is entropy [3,4]. For a classical system with a discrete set of states with probability mass function p(x), it is expressed as

$$H(X) \equiv -\sum_x p(x)\log p(x), \qquad (1)$$

(all logarithms are assumed to the base 2 and the entropy is measured in bits). Other similar expressions also exist for continuous state systems (fine-grained entropy), quantum systems (von Neumann entropy), and coarse-grained systems obtained by discretization of continuous densities in the phase space by means of a finite partition. In all cases, entropy offers a precise measure of disorderliness or missing information by characterizing the minimum amount of resources (bits) required to encode unambiguously the ensemble describing the system [5]. As for the time evolution of these entropies, we know that the fine-grained (or von Neumann) entropy remains constant under volume-preserving (unitary) evolution, a property closely related to a corollary of Landauer's principle [6], which asserts that only one-to-one mappings of states, i.e., reversible transformations preserving information, are exempt of dissipation. Coarse-grained entropies, on the other hand, usually increase in time even in the absence of noise. This is due to the finite nature of the partition used in the coarse-graining which, in effect, blurs the divergence of sufficiently close trajectories, thereby inducing a "randomization" of the motion. For many systems, the typical average rate of this increase is given by a dynamical invariant known as the Kolmogorov-Sinai entropy [7-9].

In this context, we now address the problem of how a control device can be used to reduce the entropy of a system or to immunize it from sources of entropy, in particular those associated with noise, motion instabilities, incomplete specification of states, and initial conditions.
Although the problem of controlling a system requires more than limiting its entropy, the ability to limit entropy is a prerequisite to control. Indeed, the fact that a control process is able to localize a system in definite stable states or trajectories simply means that the system can be constrained to evolve into states of low entropy starting from states of high entropy.

To illustrate, in its most simple way, how the entropy of a system can be affected by external systems, let us consider a basic model consisting of our system X coupled to an environment E. For simplicity, and without loss of generality, we assume that the states of X form a discrete set. The initial state is again distributed according to p(x), and the effect of the environment is taken into account by introducing a perturbed conditional distribution p(x′|e), where x′ is a value of the state later in time and e a particular realization of the stochastic perturbation appearing with probability p(e). For each value e, we assume that X undergoes a unique evolution, referred to here as a subdynamics, taken to be entropy conserving in analogy with the Hamiltonian time evolution of a continuous physical system:

$$H(X'|e) \equiv -\sum_{x'} p(x'|e)\log p(x'|e) = H(X). \qquad (2)$$

After the time transition X → X′, the distribution p(x′) is obtained by tracing out the variables of the environment, and is used to calculate the change of the entropy H(X′) = H(X) + ∆H. From the concavity property of entropy, it can easily be shown that ∆H ≥ 0, with equality if and only if (iff) the state E is perfectly specified, i.e., if a value e appears with probability one. In practice, however, the environment degrees of freedom are uncontrollable, and the uncertainty associated with the environment coupling can be suppressed by "updating" somehow our knowledge of X after the evolution. One direct way to reveal that state is to imagine a measurement apparatus A coupled to X in such a way that the dynamics of the composed system X+E is left unaffected. For this measurement scheme, the outcome of some discrete random variable A of the apparatus is described by a conditional probability matrix p(a|x′) and the marginal p(a), from which we can derive H(X′|A) ≤ H(X′), with equality iff A is independent of X [4]. In this last inequality we have used H(X′|A) ≡ Σ_a H(X′|a) p(a), with H(X′|a) given similarly as in Eq. (2).

Now, upon the application of the measurement, one can define the reduction of entropy of the system conditionally on the outcome of A by ∆H_A ≡ H(X′|A) − H(X), which, obviously, satisfies ∆H_A ≤ ∆H and H(A) ≥ ∆H − ∆H_A. In other words, the decrease in the entropy of X conditioned on the state of A is compensated for by the increase in entropy of A. This latter quantity represents information that A possesses about X. Accordingly, the entropy of X given A plus the entropy of A is nondecreasing, which is an expression of the second law of thermodynamics as applied to interacting systems [10,11]. In a similar line of reasoning, Schack and Caves [12] showed that some classical and quantum systems can be termed "chaotic" because of their exponential sensitivity to perturbation, by which they mean that the minimal information H(A) needed to keep ∆H_A below a tolerable level grows exponentially in time in comparison to the entropy reduction ∆H − ∆H_A.

It must be stressed that the reduction of entropy of X discussed so far is conditional on the outcome of A. By assumption, X is not affected by A; as a result, according to an observer who does not know this outcome, the entropy of X is unchanged. In order to reduce entropy for all observers unconditioned on the state of any external systems, a direct dynamical action on X must be established externally by a controller C whose influence on the system is represented by a set of control actions x → x′ triggered by the controller's state c. Mathematically, these actions can be modelled by a probability transition matrix p(x′|x,c) giving the probability that the system in state x goes to state x′ given that the controller is in state c. The specific form of this actuation matrix will in general depend on the subdynamics envisaged in the control process: some of the actions, for example, may correspond to control strategies forcing several initial conditions to a common stable state, in which case the corresponding subdynamics is entropy decreasing. Others can model uncontrolled transitions perturbed by external or internal noise, leading to "fuzzy" actuation rules which increase the entropy of the system. Hence, the systems X and C need not in general model a closed system; X, as we already noted, can also be affected by external systems (e.g., environment) on which one usually has no control. However, formally speaking, one can always embed any open-system evolution in a higher dimensional closed system whose dynamics mimics a Hamiltonian system. This can be done by supplementing an open system with a set of ancillary variables acting as an environment E in order to construct a global volume-preserving transition matrix such that, when the ancillary variables are traced out, the reduced transition matrix reproduces the dynamics of the system X+C.
Note that these ancillary variables thus introduced need not have any physical significance: they are only there for the purpose of simplifying the analysis of the evolution of the system. In particular, no control can be achieved through the choice of E. Within our model, the control of the system X can only be assured by the choice of the control C, whereby we can force an ensemble of transitions leading the system to a net entropy change ∆H. Since the overall dynamics of the system, controller and environment is Hamiltonian, Landauer's principle immediately implies that if the controller is initially uncorrelated with the system (open-loop control), a decrease in entropy ∆H for the system must be compensated for by an increase in entropy of at least ∆H for the controller and the environment [11]. Furthermore, using again the concavity property of H, it can be shown that the maximum decrease of entropy achieved by a particular subdynamics of control variable ĉ is always optimal in the sense that no probabilistic mixture of the control parameter can improve upon that decrease. Explicitly, we have the following theorem (we omit the proof, which follows simply from the concavity property).

Theorem 1. — For open-loop control, the maximum value of ∆H can always be attained for a pure choice of the control variable, i.e., with p(ĉ) = 1 and p(c) = 0 for all c ≠ ĉ, where ĉ is the value of the controller leading to max ∆H. Any mixture of the control variables either achieves the maximum or yields a smaller value.

From the standpoint of the controller, one major drawback of acting independently of the state of the system is that often no information other than that available from the state of X itself can provide a reasonable way to determine which subdynamics are optimal or even accessible given the initial state. For this reason, open-loop control strategies implemented independently of the state of the system, or solely on its statistics, usually fail to operate efficiently in the presence of noise because of their inability to react or be adjusted in time. In order to account for all the possible behaviors of a stochastic dynamical system, we have to use the information contained in its evolution by considering a closed-loop control scheme in which the state of the controller is allowed to be correlated with the initial state of X. This correlation can be thought of as the measurement process described earlier, which enables C to gather an amount of information given formally in Shannon's information theory [3,4] by the mutual information I(X;C) = H(X) + H(C) − H(X,C), where H(X,C) = −Σ_{x,c} p(x,c) log p(x,c) is the joint entropy of X and C. Having defined these quantities, we are now in a position to state our main result, which is that the maximum improvement that closed-loop control can give over open-loop control is limited by the information obtained by the controller. More formally, we have

Theorem 2. — The amount of entropy ∆H_closed that can be extracted from any dynamical system by a closed-loop controller satisfies

$$\Delta H_{\text{closed}} \leq \Delta H_{\text{open}} + I(X;C), \qquad (3)$$

where ∆H_open is the maximum entropy decrease that can be obtained by open-loop control and I(X;C) is the mutual information gathered by the controller upon observation of the system state.

Proof. — We construct a closed system by supplementing an ancilla E to our previous system X+C. Moreover, let C and E be collectively denoted by B with state variable B. Since the complete system X+B is closed, its entropy has to be conserved, and thus H(X,B) = H(X′,B′). Defining the entropy changes of X and B by ∆H = H(X) − H(X′) and ∆H_B = H(B′) − H(B) respectively, and using the definition of the mutual information, this condition of entropy conservation can also be rewritten in the form ∆H = ∆H_B − I(X′;B′) + I(X;B) [11]. Now, define ∆H_open as the maximum amount of entropy decrease of X obtained in the open-loop case where I(X;C) = I(X;B) = 0 (by construction of E, I(X;E) = 0). From the conservation condition, we hence obtain max ∆H = ∆H_open + I(X;B), which is the desired upper bound for a feedback controller.

To illustrate the above results, suppose that we control a system in a mixture of the states {0,1} using a controller restricted to the following two actions:

$$c = 0:\; x \to x' = x$$
$$c = 1:\; x \to x' = \text{not } x \qquad (4)$$

(in other words, the controller and the system behave like a so-called "controlled-not" gate). Since these actuation rules simply permute the state of X, H(X′) ≥ H(X), with equality if we use a pure control strategy or if H(X) = H_max = 1 bit, in agreement with our first theorem. Thus ∆H_open = 0. However, by knowing the actual value of x (H(X) bits of information) we can choose C to obtain ∆H = H(X), therefore achieving Eq. (3) with equality. Evidently, as implied by this equation, information is required here as a result of the non-dissipative nature of the actuations, and would not be needed if we were allowed to use dissipative (volume contracting) subdynamics. Alternatively, no open-loop controlled situation is possible if we confine the controller to entropy-increasing actuations as, for instance, in the control of nonlinear systems using chaotic dynamics.
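The controlled-not example can be checked numerically; the following is a minimal sketch (our own code, not the authors'; the sample size is arbitrary):

```python
import math
import random
from collections import Counter

def entropy_bits(samples):
    """Empirical Shannon entropy of a list of outcomes, in bits."""
    counts = Counter(samples)
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(0)
xs = [random.randint(0, 1) for _ in range(100_000)]  # X uniform on {0,1}: H(X) ~ 1 bit

# Open-loop: c is fixed; either action merely permutes {0,1}, so H is unchanged.
open_loop = [x ^ 1 for x in xs]                      # using c = 1 throughout

# Closed-loop: the controller measures x (I(X;C) = H(X) = 1 bit) and applies
# c = x, so every trajectory lands on x' = x XOR x = 0.
closed_loop = [x ^ x for x in xs]

print(entropy_bits(xs))           # ~1.0
print(entropy_bits(open_loop))    # ~1.0 -> ΔH_open = 0
print(entropy_bits(closed_loop))  # 0.0  -> ΔH_closed = ΔH_open + I(X;C)
```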
To demonstrate this last statement, let us consider the feedback control scheme proposed by Ott, Grebogi and Yorke (OGY) [13] as applied to the logistic map

$$x_{n+1} = r\,x_n(1 - x_n), \qquad x \in [0,1], \qquad (5)$$

(the extension to more general systems naturally follows). The OGY method, specifically, consists of applying to Eq. (5) small perturbations r → r + δr_n according to δr_n = −γ(x_n − x*) whenever x_n falls into a region D in the vicinity of the target point x*. The gain γ > 0 is chosen so as to ensure stability [14]. For the purpose of chaotic control, all the accessible control actions determined by the values of δr_n, and thereby by the coordinates x_n ∈ D, can be constrained to be entropy-increasing for a proper choice of D, meaning that the Lyapunov exponent λ(r) associated with any actuation indexed by r is such that λ(r) > 0 [15]. Physically, this implies that almost any initial uniform distribution for X covering an interval of size ε "expands" by a factor 2^λ(r) on average after one iteration of the map with parameter r [16]. Now, for an open-loop controller, it can readily be shown in that case that no control of the state x is possible; without knowing the position x_n, a controller merely acts as a perturbation to the system, and the optimal control strategy then consists of using the smallest Lyapunov exponent available so as to achieve ∆H_open = −λ_min < 0. Following Theorem 2, it is thus necessary, in order to achieve a controlled situation ∆H > 0, to have I(X;C) ≥ λ_min using a measurement channel characterized by an information capacity [4] of at least λ_min bits per use.

In the controlled regime (n → ∞), this means specifically that if we want to localize the trajectory generated by Eq. (5) uniformly within an interval of size ε using a set of chaotic actuations, we need to measure x within an interval no larger than ε·2^(−λ_min). To understand this, note that an optimal measurement of I(X;C) = log a bits consists, for a uniform distribution p(x) of size ε, in partitioning the interval ε into a subintervals of size ε/a. The controller under the partition then applies the same actuation r(i) for all the coordinates of the initial density lying in each of the subintervals i, thereby stretching them by a factor 2^λ(r(i)). In the optimal case, all the subintervals are directed toward x* using λ_min, and the corresponding entropy change is thus

$$\Delta H_{\text{closed}} = \log\varepsilon - \log 2^{\lambda_{\min}}\varepsilon/a = -\lambda_{\min} + \log a, \qquad (6)$$

which is consistent with Eq. (3) and yields the aforementioned value of a for ∆H = 0. Clearly, this value constitutes a lower bound for the OGY scheme, since not all the subintervals are controlled with the same parameter r, a fact that we observed in numerical simulations [17].
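The OGY-style feedback described above can also be sketched numerically. The snippet below uses a one-step "deadbeat" variant of the linear feedback δr_n = −γ(x_n − x*): inside the control window it solves for the exact δr that sends the next iterate to x*. The parameter r = 3.8, the window width, and the perturbation bound are our own illustrative choices, not values from the paper:

```python
# Deadbeat variant of OGY control of the logistic map x' = r x (1 - x).
r0 = 3.8                     # nominal parameter, chaotic regime (our choice)
x_star = 1.0 - 1.0 / r0      # unstable fixed point: r0 x*(1 - x*) = x*
window = 0.01                # control region D = {x : |x - x*| < window}
dr_max = 0.1                 # largest allowed parameter perturbation

x = 0.3
for n in range(2000):
    dr = 0.0
    if abs(x - x_star) < window:
        # choose dr so that (r0 + dr) x (1 - x) = x* exactly
        dr = x_star / (x * (1.0 - x)) - r0
        dr = max(-dr_max, min(dr_max, dr))
    x = (r0 + dr) * x * (1.0 - x)

# once the chaotic orbit wanders into the window it is pinned at x*
print(x_star, x)
```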
In summary, we have introduced a formalism for studying control problems in which control units are analyzed as informational mechanisms. In this respect, a feedback controller functions analogously to a Maxwell's demon [18], getting information about a system and using that information to decrease the system's entropy. Our main result showed that the amount of entropy that can be extracted from a dynamical system by a controller is upper bounded by the sum of the decrease of entropy achievable in open-loop control and the mutual information between the dynamical system and the controller established during an initial interaction. This upper bound sets a fundamental limit on the performance of any controller whose design is based on the possibility of acceding to low entropy states, and it was proven without any reference to a specific control system. Hence, its practical implications can be investigated for the control of linear, nonlinear and complex systems (discrete or continuous), as well as for the control of quantum systems, to which our results also apply. For this latter topic, our probabilistic approach seems particularly suitable for the study of quantum controllers.

The authors would like to thank J.-J. E. Slotine for helpful discussions. This work has been partially supported by NSERC (Canada), and by a grant from the d'Arbeloff Laboratory for Information Systems and Technology, MIT.
