Assessing Random Dynamical Network Architectures for Nanoelectronics


Spatial Autocorrelation and Spatial Autoregression

Spatial autocorrelation and spatial autoregression are two spatial analysis methods commonly used in geographic information science.

Both are statistical methods for spatial data, and both can be used to study the spatial correlation and spatial autoregressive effects present in such data. This article introduces the principles and applications of each method in turn.

1. Spatial Autocorrelation

Spatial autocorrelation refers to the correlation between values at different locations in spatial data. It can be used to study the spatial distribution pattern of the data and its degree of spatial clustering.

The most commonly used index of global spatial autocorrelation is Moran's I. Moran's I ranges from -1 to 1: -1 indicates perfect negative correlation, 0 indicates no spatial correlation, and 1 indicates perfect positive correlation. A Moran's I greater than 0 indicates positive spatial autocorrelation, i.e., similar values tend to appear at neighboring locations; a Moran's I less than 0 indicates negative spatial autocorrelation, i.e., similar values tend to appear far apart.
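As a concrete illustration, the following minimal sketch computes global Moran's I for a toy dataset; the four locations, their adjacency structure, and the values are invented for illustration:

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I for values x under spatial weight matrix w."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()                      # deviations from the mean
    num = (w * np.outer(z, z)).sum()      # spatially weighted cross-products
    den = (z ** 2).sum()
    return len(x) / w.sum() * num / den

# Toy example: 4 locations on a line, adjacency weights.
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = [1.0, 2.0, 3.0, 4.0]                  # a smooth spatial trend
print(morans_i(x, w))                     # positive: similar values are neighbors
```

For this smooth trend the index comes out positive (1/3), consistent with similar values sitting next to each other.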

Spatial autocorrelation has a wide range of applications: in urban planning it can be used to study development differences and spatial patterns across regions; in environmental science, the spatial distribution and transport pathways of pollutants; in agricultural ecology, the spatial distribution and growth status of crops.

2. Spatial Autoregression

Spatial autoregression refers to the mutual influence between values at different locations in spatial data. It can be used to study the spatial dependence and spatial heterogeneity of the data.

The two most common spatial autoregressive specifications are the spatial lag model and the spatial error model. In the spatial lag model, the value at a location depends on the values at neighboring locations; it is used to model spatial dependence. In the spatial error model, the error term at a location depends on the errors at neighboring locations; it is used to model spatial heterogeneity.

Spatial autoregression is likewise widely applied: in economics, to study economic linkages and spatial spillover effects between regions; in sociology, population flows and social ties between communities; in ecology, interactions between ecosystems.

In short, spatial autocorrelation and spatial autoregression are two very important spatial analysis methods in geographic information science. By capturing spatial correlation and spatial autoregressive effects, they provide powerful tools for understanding the spatial distribution patterns and spatial dependence of spatial data.

Structured State Space Sequence Models

A structured state space sequence model is a statistical model for sequential (time series) data. It assumes that the observations are generated by a hidden, unobserved sequence of states, where each state gives rise to one observation.

In such a model, transitions between states are described by a transition matrix, which specifies the probability distribution of moving from one state to another. The model also includes an observation (emission) matrix, which specifies the probability distribution of generating an observation from each state.

Depending on the application, different structured state space sequence models can be chosen for different types of sequential data. Common examples include the Hidden Markov Model (HMM) and the Conditional Random Field (CRF).

Structured state space sequence models are widely used in natural language processing, machine translation, speech recognition, and image processing to solve sequence-related problems such as sequence labeling, audio recognition, and pattern recognition.
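The generative process described above (a transition matrix driving hidden states, plus an emission matrix producing observations) can be sketched as follows; the 2-state, 2-symbol matrices are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0.9, 0.1],     # state transition matrix P(s_t | s_{t-1})
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],     # emission matrix P(o_t | s_t)
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])     # initial state distribution

def sample(T):
    """Draw a hidden state path and its observation sequence of length T."""
    states, obs = [], []
    s = rng.choice(2, p=pi)
    for _ in range(T):
        obs.append(int(rng.choice(2, p=B[s])))
        states.append(int(s))
        s = rng.choice(2, p=A[s])
    return states, obs

states, obs = sample(10)
```

Each observation depends only on the current hidden state, and each state only on its predecessor, which is exactly the conditional independence structure the model assumes.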

A New Method for Detecting Jacobian Matrix Singularity and Network Islanding in Newton-Raphson Power Flow Calculation

Consider a power flow calculation for an n-node system with the Jacobian factored as U = L D U'. Since D is a diagonal matrix, singularity implies that at least one diagonal element of D is zero, so U contains a zero row or a zero column: a zero row corresponds to row dependence, a zero column to column dependence. If the H submatrix is singular, its column vectors are linearly dependent, i.e., some weighted combination of the columns of H, with coefficients not all zero, vanishes. Because, for the same node, the H elements and the J elements of the power flow Jacobian are computed from completely analogous expressions, the column vectors of the J submatrix satisfy the same dependence relation; likewise, if the full matrix A is singular, its row and column vectors are linearly dependent.
中国新技术新产品 (China New Technologies and Products)

Structure, Features, and Operating Points of a New Automatic Water-Shutoff Valve
张金龙, 曹艳 (Department of Mechanical Engineering, Xi'an Aerotechnical College, Xi'an, Shaanxi)

1. Introduction. In daily life, water is often wasted, and safety hazards can even arise, because a tap left open during a supply interruption is not closed before the water returns. As water conservation becomes a widespread public concern, this problem has attracted attention, and there is a need for a valve that closes automatically when the supply stops: even if the tap is forgotten, no flooding occurs when water returns, saving water effectively. Automatic closing requires a driving force, which can come from magnetic elements, elastic elements, gravity, or other external forces, combined with a linkage mechanism that exploits the pressure difference between supply and interruption.

2. Structure and features of the automatic shutoff valve. Using pressure, gravity, and related mechanical principles, and after a series of experiments and refinements, a simple and practical valve with a shutoff self-locking mechanism was developed. The valve is purely mechanical, with the valve body as the main frame, fitted with a valve core, sealing rings, an eccentric wheel, and a handle; it contains no elastic elements, its operation is not limited by environment or time, and it is simple in structure, cheap to make, easy to disassemble and replace, and highly reliable. In the structural diagram, part 1 is the eccentric wheel, 2 the O-ring, 3 the V-ring, 4 the valve body, 5 the valve core, 6 the pin shaft, and 7 the handle. The valve body (4) is the main frame, carrying the other components and the inlet and outlet ports. The V-ring (3) at the top of the valve core (5) seals against the internal cone of the valve body when the water stops, while the O-ring (2) at the other end seals against the inner wall of the valve body when water flows.

Random Effects Models

Introduction. The random effects model is a statistical model for analyzing panel data. Panel data are repeated observations of the same group of units or individuals over time, e.g., the financial data of multinational firms in economics, or long-term patient follow-up data in medical research. By accounting for heterogeneity across individuals and correlation over time, random effects models provide more accurate estimation and inference.

1. Characteristics of panel data

Compared with traditional cross-sectional data and time series data, panel data have the following features:

1. Individual heterogeneity: the individuals in a panel may differ, e.g., firms differ in business strategy, patients differ in baseline characteristics.
2. Temporal correlation: the observations are correlated over time, e.g., quarterly economic data, long-term follow-up data.
3. Individual fixed effects: unobservable characteristics inherent to an individual, e.g., a firm's managerial ability, a patient's genetics.
4. Time fixed effects: unobservable characteristics inherent to a time period, e.g., seasonal variation, policy changes.

Any analysis of panel data must take these features into account in order to use the data fully and draw accurate conclusions.

2. Basic principle of the random effects model

The random effects model addresses the individual heterogeneity and temporal correlation in panel data by introducing individual effects and time effects into a linear regression model. Its basic form is

y_it = α + X_it β + c_i + λ_t + ε_it

where y_it is the observation for individual i at time t, X_it is the vector of explanatory variables, β the coefficient vector, c_i the individual effect, λ_t the time effect, and ε_it the idiosyncratic error term. The individual effect c_i captures unobserved factors specific to individual i and can be absorbed by individual dummy variables; the time effect λ_t captures unobserved factors specific to period t and can be absorbed by time dummy variables.

3. Estimation methods

Common estimators for such panel models include ordinary least squares (OLS), the first-difference estimator, and maximum likelihood estimation.
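A minimal simulation sketch of the model above, with all numbers invented for illustration: we generate y_it = α + β x_it + c_i + ε_it with random individual effects and recover β by pooled OLS, which is consistent here because x is drawn independently of c_i.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ind, n_time = 200, 5                 # 200 individuals, 5 periods
alpha, beta = 1.0, 2.0                 # illustrative true parameters

c = rng.normal(scale=1.0, size=n_ind)              # individual effects c_i
x = rng.normal(size=(n_ind, n_time))               # regressor x_it
eps = rng.normal(scale=0.5, size=(n_ind, n_time))  # idiosyncratic error
y = alpha + beta * x + c[:, None] + eps            # y_it

# Pooled OLS on the stacked data.
X = np.column_stack([np.ones(x.size), x.ravel()])
coef, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
print(coef)   # approximately [alpha, beta]
```

If x were correlated with c_i, pooled OLS would be biased, which is exactly when fixed effects or the within transformation would be needed instead.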

Geographically and Temporally Weighted Neural Network Regression: Theory and Methods

Spatiotemporal geographic data are data with both location and time information, such as satellite remote sensing imagery, land surface temperature, and meteorological data. Because such data exhibit both spatial and temporal correlation, they play an important role in analysis and prediction tasks in geographic information science. Geographically and temporally weighted neural network regression is a neural-network-based weighted regression model for modeling and predicting spatiotemporal geographic data.

The core of the model is its neural network structure. A neural network consists of neurons and the connections between them: each neuron receives input signals, forms a weighted sum passed through an activation function, and forwards the result to the next layer. This layered structure lets the network automatically learn the features and patterns of the input data and perform complex nonlinear modeling.

A typical workflow is:

1. Data preparation: collect the spatiotemporal data and clean and preprocess it, e.g., removing outliers and filling missing values.

2. Model construction: choose a network architecture suited to the problem, such as a multilayer perceptron or a convolutional neural network; set up the input, hidden, and output layers and initialize the weights and biases.

3. Model training: use labeled spatiotemporal data as the training set and update the network's weights and biases by backpropagation to minimize the loss function. Training iterates until convergence or until a preset number of epochs is reached.

4. Model evaluation: evaluate the trained model on a test set by computing the error between predictions and actual observations; common metrics include root mean square error and mean absolute error.

5. Model application: use the trained model to predict new spatiotemporal data, e.g., exploiting its generalization ability to forecast future trends of geographic events or to fill gaps in geographic data.
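The classical precursor of this approach, geographically and temporally weighted regression (GTWR), fits a separate weighted least squares regression at each location, with weights decaying in space-time distance; the neural network variant learns these weights instead of fixing a kernel. A minimal GTWR-style sketch, with a Gaussian kernel, bandwidth, and toy data all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Toy data: coordinates (u, v), time t, one regressor x, response y.
coords = rng.uniform(0, 10, size=(n, 2))
t = rng.uniform(0, 1, size=n)
x = rng.normal(size=n)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=n)

def local_coef(target_idx, bandwidth=2.0):
    """Weighted least squares at one target point with a Gaussian kernel."""
    d2 = ((coords - coords[target_idx]) ** 2).sum(1) + (t - t[target_idx]) ** 2
    w = np.exp(-d2 / (2 * bandwidth ** 2))            # space-time kernel weights
    X = np.column_stack([np.ones(n), x])
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # [intercept, slope]

b = local_coef(0)
```

Because the toy data use one global relationship, the locally weighted fit recovers roughly the same intercept and slope everywhere; with genuinely nonstationary data, the local coefficients would vary across space and time.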

Summary: geographically and temporally weighted neural network regression is an effective tool for modeling and predicting spatiotemporal geographic data. By combining the nonlinear modeling capacity of neural networks with the characteristics of spatiotemporal data, it can improve the accuracy and reliability of data analysis and prediction tasks in geographic information science. As research deepens, the model is likely to find applications in more fields.

Spatial Econometric Interaction Terms

In econometrics, spatial interaction terms account for the influence of spatial factors on the relationships between economic variables. Traditional econometric models usually assume that the observations are independent of one another, ignoring spatial correlation. In reality, however, economic variables typically exhibit spatial dependence: a region's economic variables are related to those of its neighboring regions. Introducing spatial interaction terms to capture this dependence has therefore become an active research topic in econometrics.

1. Basic principle of spatial econometric models

Spatial econometric models account for the influence of geographic space on the relationships between observed variables. Their basic device is a spatial weight matrix that quantifies the spatial association between observations. The weight matrix combines two ingredients: a neighborhood criterion, describing which observations are close (by geographic distance or adjacency), and weights, describing how strongly neighboring observations influence one another. Common choices include contiguity-based and inverse-distance weight matrices.
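A minimal sketch of constructing a row-standardized inverse-distance weight matrix from toy coordinates (all values invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(6, 2))     # 6 toy observation points

# Pairwise Euclidean distances.
d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))

W = np.zeros_like(d)
mask = ~np.eye(len(d), dtype=bool)
W[mask] = 1.0 / d[mask]                       # inverse-distance weights, no self-weight
W /= W.sum(axis=1, keepdims=True)             # row standardization
```

Row standardization makes each row sum to one, so the spatial lag Wy is a weighted average of neighboring values, which is the usual convention in spatial econometrics.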

2. Introducing spatial interaction terms

The spatial interaction term is one of the core concepts of spatial econometric models: it captures how spatial dependence shapes the relationships between observed variables. Traditional models assume observations are independent and ignore spatial correlation, but in the real world a region's economic variables are affected by its neighbors. Adding spatial interaction terms to the model, for example a spatially lagged dependent variable Wy or spatially lagged regressors WX, has therefore become an important method in modern econometrics.

3. Applications

Spatial interaction terms can be applied in many fields, including economics, geography, and environmental science. In economics they support studies of regional economic growth, income distribution, and trade relations; in geography, of inter-regional migration and resource distribution; in environmental science, of regional pollution emissions and climate change. Introducing spatial interaction terms allows the spatial relationships between observed variables to be described and explained more accurately.

4. Strengths and limitations

As a statistical device, spatial interaction terms have several strengths. First, they account for the influence of geographic space on the relationships between observed variables, improving a model's explanatory and predictive power. Second, they help us better understand how economic and social phenomena are distributed and evolve over space. They also have limitations, however: the results can be sensitive to the often ad hoc choice of the spatial weight matrix, and estimation is more involved than for standard regression models.

Three Basic Problems of Hidden Markov Models and Their Algorithms

The Hidden Markov Model (HMM) is a probabilistic model for sequences with hidden states and observable outputs. It is widely used in speech recognition, natural language processing, and bioinformatics, and holds an important place in machine learning and pattern recognition. An HMM poses three basic problems: computing the probability of an observation sequence (evaluation), learning the model parameters, and decoding (prediction).

1. Evaluation

Given the model parameters and an observation sequence, a key problem is to compute the probability of the observation sequence under the model. This is solved by the forward and backward algorithms. The forward algorithm computes, at each time step, the joint probability of the observations so far and the current hidden state; the backward algorithm computes the probability of the remaining observations given the current state. Working together, the two algorithms solve the evaluation problem efficiently by dynamic programming.

2. Parameter learning

The learning problem is to estimate the parameters of a hidden Markov model from observation sequences whose underlying state sequences are hidden. The standard method is the Baum-Welch algorithm, an iterative expectation-maximization procedure that repeatedly updates the model parameters to increase the probability of the observed sequences. Solving this problem is essential for training and optimizing the model.

3. Decoding

The decoding (prediction) problem is to find the most probable hidden state sequence given an observation sequence and the model parameters. It is solved by the Viterbi algorithm, which uses dynamic programming to find the most likely state path, and it plays an important role in many practical applications.
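A minimal Viterbi sketch for the same kind of 2-state toy model (matrices and observations invented for illustration):

```python
import numpy as np

A = np.array([[0.7, 0.3],      # transition matrix
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],      # emission matrix
              [0.2, 0.8]])
pi = np.array([0.6, 0.4])      # initial state distribution

def viterbi(obs):
    """Most probable hidden state path for an observation sequence."""
    delta = np.log(pi) + np.log(B[:, obs[0]])   # log-probs avoid underflow
    back = []
    for o in obs[1:]:
        trans = delta[:, None] + np.log(A)      # score of each predecessor
        back.append(trans.argmax(axis=0))       # best predecessor per state
        delta = trans.max(axis=0) + np.log(B[:, o])
    path = [int(delta.argmax())]
    for bp in reversed(back):                   # backtrack through pointers
        path.append(int(bp[path[-1]]))
    return path[::-1]

print(viterbi([0, 0, 1]))   # → [0, 0, 1]
```

Replacing the sum of the forward recursion with a max (plus backpointers) is the only change needed to go from evaluation to decoding.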

These are the three basic problems of hidden Markov models and the algorithms that solve them. In practice, HMMs are applied in many fields: acoustic modeling in speech recognition, part-of-speech tagging and information extraction in natural language processing, gene prediction in bioinformatics, and more. Their expressive power and flexibility make HMMs a very valuable modeling tool.

Working through the evaluation, learning, and decoding problems makes clear how important and widely applicable hidden Markov models are in practice. The associated algorithms solve many real problems and are of lasting significance in their fields. In short, the HMM is a powerful probabilistic model whose three basic problems, and the algorithms that answer them, provide essential support for practical applications.

Evaluation Metrics for Generative Models in Generative Adversarial Networks (III)

Generative Adversarial Networks (GANs) are machine learning models in which two neural networks are trained adversarially so that one of them learns to generate samples close to real data. GANs have achieved remarkable results in image generation, speech synthesis, and natural language processing, yet evaluating the performance of a generative model remains a challenging problem. This article analyzes GAN evaluation metrics from several angles.

1. Diversity

Diversity refers to the variety and richness of generated samples. In image generation, diversity means the generated images cover different classes, poses, viewpoints, and styles. Commonly used metrics include the Inception Score and the Fréchet Inception Distance.

The Inception Score is a widely used metric for image generation models. Using a pretrained Inception classifier, it scores a model by the divergence between the conditional label distribution of each generated image and the marginal label distribution over all generated images: confident per-image predictions indicate quality, and a spread-out marginal indicates diversity. The Inception Score has known limitations, however; in particular, it never compares the generated images against real data, so it can reward a model whose outputs merely look classifiable.

The Fréchet Inception Distance (FID) is another widely used metric for image generation models. It measures the distance between the feature distributions of generated and real images, modeled as Gaussians in the feature space of a pretrained Inception network. Compared with the Inception Score, FID accounts for both the diversity and the quality of the generated images, because it compares them directly against real data.
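Given feature means and covariances (m1, C1) and (m2, C2), FID is ||m1 - m2||^2 + Tr(C1 + C2 - 2(C1 C2)^(1/2)). A minimal sketch on invented Gaussian feature vectors; a real evaluation would use Inception-network features instead:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feat1, feat2):
    """Fréchet distance between Gaussians fitted to two feature sets."""
    m1, m2 = feat1.mean(0), feat2.mean(0)
    c1 = np.cov(feat1, rowvar=False)
    c2 = np.cov(feat2, rowvar=False)
    covmean = np.real(sqrtm(c1 @ c2))     # discard tiny imaginary parts
    return ((m1 - m2) ** 2).sum() + np.trace(c1 + c2 - 2 * covmean)

rng = np.random.default_rng(0)
real = rng.normal(size=(500, 4))          # stand-ins for Inception features
fake_good = rng.normal(size=(500, 4))     # same distribution as "real"
fake_bad = rng.normal(loc=2.0, size=(500, 4))   # shifted distribution
print(fid(real, fake_good), fid(real, fake_bad))   # small vs. large
```

A matched distribution yields a score near zero while the shifted one scores much higher, which is the behavior that makes FID useful for ranking models.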

2. Quality

Quality refers to how similar to real samples, and how realistic, the generated samples are. In image generation, quality means the generated images have clean textures, details, and structure. Commonly used metrics include realism measures computed on the images and structural similarity measures.

Realism measures score a model by the pixel-level and feature-level similarity between generated and reference images. Common pixel-level metrics include mean squared error (MSE) and peak signal-to-noise ratio (PSNR); common feature-level metrics include perceptual loss and adversarial loss.


Assessing Random Dynamical Network Architectures for Nanoelectronics

Christof Teuscher (1), Natali Gulbahce (2), Thimo Rohlf (3)
(1) Computer, Computational & Statistical Sciences Division, Los Alamos National Laboratory, USA, christof@teuscher.ch
(2) Center for Complex Networks Research, Northeastern University, USA, natali.gulbahce@
(3) Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany, rohlf@

Abstract. Independent of the technology, it is generally expected that future nanoscale devices will be built from vast numbers of densely arranged devices that exhibit high failure rates. Other than that, there is little consensus on what type of technology and computing architecture holds most promises to go far beyond today's top-down engineered silicon devices. Cellular automata (CA) have been proposed in the past as a possible class of alternative architectures to the von Neumann computing architecture, which is not generally well suited for future massively parallel and fine-grained nanoscale electronics. While the top-down engineered semiconducting technology favors regular and locally interconnected structures, future bottom-up self-assembled devices tend to have irregular structures because of the current lack of precise control over these processes. In this paper, we will assess random dynamical networks, namely Random Boolean Networks (RBNs) and Random Threshold Networks (RTNs), as alternative computing architectures and models for future information processing devices. We will illustrate that, from a theoretical perspective, they offer superior properties over classical CA-based architectures, such as inherent robustness as the system scales up, more efficient information processing capabilities, and manufacturing benefits for bottom-up designed devices, which motivates this investigation. We will present recent results on the dynamic behavior and robustness of such random dynamical networks while also including manufacturing issues in the assessment.

I. INTRODUCTION AND MOTIVATION

The advent of
multicore architectures and the slowdown of the increase in processor operating frequency are signs that CMOS miniaturization is increasingly hitting fundamental physical limits. A key question is how computing architectures will evolve as we reach these fundamental limits. A likely possibility within the realm of CMOS technology is that the integration density will cease to increase at some point; instead, only the number of components, i.e., the transistors, will further increase, which will necessarily lead to chips with a larger area. This trend can already be observed with multicore architectures. That in itself has implications on the interconnect architecture, the power consumption and dissipation, and the reliability. Another possibility is to go beyond silicon-based technology and to change the computing and manufacturing paradigms, by using for example bottom-up self-assembled devices. Self-assembling nanowires [12] or carbon nanotube electronics [2] are promising candidates, although none of them has resulted in electronics that is able to compete with traditional CMOS so far. What seems clear is that the current way we build computers and the way we algorithmically solve problems with them may need to be fundamentally revisited, which is what this paper is all about.

While the top-down engineered CMOS technology favors regular and locally interconnected structures, future bottom-up self-assembled devices tend to have irregular structures because of the current lack of precise control over these processes. We therefore hypothesize that future and emerging computing architectures will be much more driven by manufacturing constraints and particularities than for CMOS, which allowed engineers to implement a logic-based computing architecture with extreme precision and reliability, at least in the past. Independent of the forthcoming device and fabrication technologies, it is generally expected that future nanoscale devices will be built from (1) vast numbers of densely arranged devices
that (2) exhibit high failure rates. We take this working hypothesis for granted in this paper and address it from a perspective that focuses on the interconnect topology. This is justified by the fact that the importance of interconnects on electronic chips has outrun the importance of transistors as a dominant factor of performance [9], [15], [25]. The reasons are twofold: (1) the transistor switching speed for traditional silicon is much faster than the average wire delays, and (2) the required chip area for interconnects has dramatically increased.

In [45], Zhirnov et al. explored integrated digital Cellular Automata (CA) architectures, which are highly regular structures with local interconnects (see Section III), as an alternative paradigm to the von Neumann computer architecture for future and emerging information processing devices. Here, we are interested to explore and assess a more general class of discrete dynamical systems, namely Random Boolean Networks (RBNs) and Random Threshold Networks (RTNs). We will mainly focus on RBNs, but RTNs are included in this paper because they offer an alternative paradigm to Boolean logic, which can be efficiently implemented as well (see Section VII). Motivated by future and emerging nanoscale devices, we are interested to provide answers to the following questions:

• Do RBNs and RTNs offer benefits over CA architectures? If yes, what are they?
• How does the interconnect complexity compare between RBNs/RTNs and CAs?
• Does any of these architectures allow problems to be solved more efficiently?
• Is any of these architectures inherently more robust to simple errors?
• Can CMOS and beyond-CMOS devices provide a benefit for the fabrication of any of these architectures?

arXiv:0805.2684v1 [cs.AR] 17 May 2008

We will argue and illustrate that, at least from a theoretical perspective, random dynamical networks offer superior properties over classical regular CA-based architectures, such as inherent robustness as the system scales up, more efficient
information processing capabilities, and manufacturing benefits for bottom-up fabricated devices, which motivates this investigation. We will present recent results on the dynamic behavior and robustness of such random dynamical networks while also including manufacturing issues in the assessment.

To answer the above questions, we will extend recent results on the complex dynamical behavior of discrete random dynamical networks [34], their ability to solve problems [26], [39], and novel interconnect paradigms [37], [38].

The remainder of this paper is as follows: Section II introduces random dynamical networks, namely random Boolean and random threshold networks. Section III briefly presents cellular automata architectures. Damage spreading and criticality of cellular automata and random dynamical networks are analyzed in Section IV. Section V analyzes the network topologies from a graph-theoretical and wiring-cost perspective. The task solving capabilities of RBNs and CAs are briefly assessed in Section VI, while Section VII looks into manufacturing issues. Section VIII concludes the paper.

II. RANDOM DYNAMICAL NETWORKS

A. Random Boolean Networks

A Random Boolean Network (RBN) [18]-[20] is a discrete dynamical system composed of N nodes, also called automata, elements, or cells. Each automaton is a Boolean variable with two possible states, {0, 1}, and the dynamics is such that

F : {0,1}^N → {0,1}^N,   (1)

where F = (f_1, ..., f_i, ..., f_N), and each f_i is represented by a look-up table of K_i inputs randomly chosen from the set of N nodes. Initially, K_i neighbors and a look-up table are assigned to each node at random. Note that K_i (i.e., the fan-in) can refer to the exact or to the average number of incoming connections per node. A node state σ_i^t ∈ {0,1} is updated using its corresponding Boolean function:

σ_i^{t+1} = f_i(x_{i1}^t, x_{i2}^t, ..., x_{iK_i}^t).   (2)

These Boolean functions are commonly represented by lookup tables (LUTs), which associate a 1-bit output (the node's future state) to each possible K-bit input configuration.
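A minimal simulation sketch of such an RBN with synchronous LUT updates, including the one-bit damage (Hamming distance) measurement used in Section IV; the network size, connectivity, number of updates, and random seed are chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, T = 64, 2, 200                           # nodes, fan-in, update steps

inputs = rng.integers(0, N, size=(N, K))       # K random input nodes per node
luts = rng.integers(0, 2, size=(N, 2 ** K))    # random Boolean function per node

def step(state):
    """One synchronous update: each node applies its LUT to its inputs."""
    idx = np.zeros(N, dtype=int)
    for k in range(K):                         # pack K input bits into a LUT index
        idx = idx * 2 + state[inputs[:, k]]
    return luts[np.arange(N), idx]

def run(state, steps):
    for _ in range(steps):
        state = step(state)
    return state

init = rng.integers(0, 2, size=N)
perturbed = init.copy()
perturbed[0] ^= 1                              # flip one node: one-bit damage

damage = np.mean(run(init, T) != run(perturbed, T))   # Hamming distance fraction
print(damage)
```

Averaging this damage over many random networks and initial conditions, as a function of K, is the kind of experiment summarized in Figure 3.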
The table’s out-column is called the rule of the node.Note that even though the LUTs of a RBN map well on an FPGA or other memory-based architectures,the random interconnect in general does not.We randomly initialize the states of the nodes(initial condition of the RBN).The nodes are updated synchronously using their corresponding Boolean functions.Other updating schemes exist,see for example[13]for an overview.Syn-chronous random Boolean networks as introduced by Kauff-man are commonly called NK networks or models.Figure1 shows a possible NK random Boolean network representation (N=8,K=3).Fig.1.Illustration of a random Boolean network with N=8nodes and K=3inputs per node(self-connections are allowed).The node rules are commonly represented by lookup-tables(LUTs),which associate a1-bit output(the node’s future state)to each possible K-bit input configuration. The table’s out-column is commonly called the rule of the node.B.Random Threshold NetworksRandom Threshold Networks(RTNs)are another type of discrete dynamical systems.An RTN consists of N randomly interconnected binary sites(spins)with statesσi=±1.For each site i,its state at time t+1is a function of the inputs it receives from other spins at time t:σi(t+1)=sgn(f i(t))(3) withf i(t)=Nj=1c ijσj(t)+h.(4)The N network sites are updated synchronously.In the following,the threshold parameter h is set to zero.The interaction weights c ij take discrete values c ij=+1or−1 with equal probability.If i does not receive signals from j, one has c ij=0.III.C ELLULAR A UTOMATA A RCHITECTURES Cellular automata(CA)[44]were originally conceived by Ulam and von Neumann[41]in the1940s to provide a formal framework for investigating the behavior of complex, extended systems.CAs are a special case of the more general class of random dynamical networks,in which space and time are discrete.A CA usually consists of a D-dimensionalregular lattice of N lattice sites,commonly called nodes ,cells ,elements ,or automata .Each cell i can 
be in one of a finite number of S possible states and further consists of a transition function f_i (also called rule), which maps the neighboring states to the set of cell states. CAs are called uniform if all cells contain the same rule; otherwise they are non-uniform. Each cell takes as input the states of the cells within some finite local neighborhood. Here, we only consider non-uniform, two-dimensional (D = 2), folded, and binary CAs (S = 2) with a radius-1 von Neumann neighborhood, where each cell is connected to each of its four immediate neighbors only. Figure 2 illustrates such a CA. The Boolean functions in each node must therefore define 2^4 = 16 possible input combinations. To be able to compare CAs with RBNs, we do not consider self-connections.

Fig. 2. Illustration of a binary, 2D, folded cellular automaton with N = 16 cells. Each node is connected to its four immediate neighbors (von Neumann neighborhood).

IV. DAMAGE SPREADING AND CRITICALITY

A. Random Boolean and Threshold Networks

As we have seen in Section II-B, RBNs and their complex dynamic behavior are essentially characterized by the average number of incoming links K_i (fan-in) per node (e.g., Figure 1 shows a K = 3 network with 3 incoming links per node). It turns out that in the thermodynamic limit, i.e., N → ∞, RBNs exhibit a dynamical order-disorder transition at a sparse critical connectivity K_c = 2 [10] (i.e., where each node receives on average two incoming connections from two randomly chosen other nodes), which partitions their operating space into 3 different regimes: (1) sub-critical, where K < K_c, (2) complex, where K = K_c, and (3) supercritical, where K > K_c. In the sub-critical regime, the network dynamics are too "rigid" and the information processing capabilities are thus hindered, whereas in the supercritical regime, their behavior becomes chaotic. The complex regime is also commonly called the "edge of chaos," because it represents the network connectivity where information processing is "optimal" and where a small number of
stable attractors exist. Similar observations were made for sparsely connected random threshold (neural) networks (RTNs) [33] for K_c = 1.849. For a finite system size N, the dynamics of both systems converge to periodic attractors after a finite number of updates. At K_c, the phase space structure in terms of attractor periods [1], the number of different attractors [35], and the distribution of basins of attraction [3] is complex, showing many properties reminiscent of biological networks [20].

Fig. 3. Average Hamming distance (damage) ⟨d⟩ after 200 system updates, averaged over 10000 randomly generated networks for each value of K, with 100 different random initial conditions and one-bit perturbed neighbor configurations for each network. For both RBN and RTN (curves shown for N between 32 and 1024), all curves for different N approximately intersect in a characteristic point K_s.

Results: In [34] we have systematically studied and compared damage spreading (i.e., how a perturbed node state influences the rest of the network nodes over time) at the sparse percolation (SP) limit for random Boolean and threshold networks with perturbations. In the SP limit, the damage induced in a network (i.e., by changing the state of a node) does not scale with system size. Obviously, this limit is relevant to information and damage propagation in many technological and natural networks, such as the Internet, disease spreading in populations, failure propagation in power grids, and networks-on-chips. We measure the damage spreading by the following methodology: the state of one randomly chosen node is changed. The damage is measured as the Hamming distance between a damaged and undamaged network instance after a large number of T system updates. We have shown that there is a characteristic average connectivity K_s^RBN = 1.875 for RBNs and K_s^RTN = 1.729 for RTNs, where the damage spreading of a
single one-bit perturbation of a network node remains constant as the system size N scales up. Figure 3 illustrates this newly discovered point for RBNs and RTNs. For more details, see [34].

Discussion: Both K_c and K_s are highly relevant for nanoscale electronics for the following reason: assuming we can build massive numbers of N simple logic gates that implement a random Boolean function, the above findings tell us that on average, every gate should be connected somewhere close to both K_s and K_c in order to (1) guarantee optimal robustness against failures for any system size and (2) obtain optimal information processing at the "edge of chaos." We are also hypothesizing that natural systems, such as the brain or genetic regulatory networks, may have evolved towards these characteristic connectivities. This remains, however, to be proved and is part of ongoing research.

B. Cellular Automata Damage Spreading

We have used the same approach as described above to measure the damage spreading in cellular automata. In order to vary the average number K of incoming links per cell in a cellular automaton (e.g., as pictured in Figure 2), we have adopted the following methodology: (1) for a desired average number of links per cell K for a given CA size of N cells, the total number of links in the automaton is given by L = NK; (2) we then randomly choose L possible connections on the regular CA grid with uniform probability and establish the links. Damage is induced in the same way as for RBNs and RTNs: the state of one (or several) randomly chosen node(s) is changed. The damage is measured as the Hamming distance between a damaged and undamaged CA instance after a large number of T system updates, in our case T = 200.

Results: Figures 4, 5, and 6 show the average damage of both RBNs and CAs for different system sizes and for damage sizes of 1, 10, and 20, respectively. We have left out RTNs for this analysis. As one can see, both the RBN and the CA average damage for different N approximately intersect in the characteristic
point K_s^RBN = 1.875. This point is less pronounced for the larger damage sizes (Figures 5 and 6). The RBN curves confirm what was already shown above in Figure 3, and are merely plotted here for comparison with the CA architectures and their system sizes imposed by square lattices. Interestingly, the CAs show different damage propagation behavior for different system sizes and connectivities. First, we observe that the average damage for one-bit damage events (Figure 4) is independent of the system size N for up to approximately K = 2.5 average incoming connections per cell. This behavior disappears completely for large damage sizes (Figure 6). Second, Figure 4 shows that all curves intersect at K_s^RBN = K_s^CA = 1.875. Third, Figure 6 suggests that for larger damage sizes, K_s^CA disappears for CAs. Fourth, the average damage for larger damage events, i.e., 10 and 20 in our examples, converges to the same final values for both RBNs and CAs as K approaches 4.

Discussion: We hypothesize that this particular behavior can be explained by the percolation limit of the cellular automata. Da Silva et al. [8] found that the link probability at the percolation limit is approximately p ~ 0.6, which means that the average connectivity at the percolation limit in our CA topology with a maximum of 4 neighbors is given by k = 4p = 2.4. This value corresponds to the experimentally observed value where the damage spreading suddenly becomes dependent on the system size. Because of the local CA connectivity, there are many disconnected components below the percolation limit. Below this limit, damage spreading is thus very slow and limited by the disconnected components, which is why it is essentially independent of system size. Above the percolation limit, the CA suddenly becomes connected and damage spreading therefore becomes dependent on the system size. For larger damage events, such as 10 or 20, damage becomes more dependent on system size even below the percolation limit because there is a higher probability that damage
is induced in several disconnected components at the same time. In summary: for single-node damage events, CAs offer system-size-independent damage spreading for up to about K = 2.4 (which corresponds to the percolation limit); however, this particular behavior disappears for larger damage events.

Fig. 4. Average Hamming distance (damage) ⟨d⟩ after 200 system updates, averaged over 100 randomly generated networks for each value of K, with 100 different random initial conditions and a damage size of 1 node for each network. See text for discussion.

Fig. 5. Average Hamming distance (damage) ⟨d⟩ after 200 system updates, averaged over 100 randomly generated networks for each value of K, with 100 different random initial conditions and a damage size of 10 nodes for each network. See text for discussion.

Fig. 6. Average Hamming distance (damage) ⟨d⟩ after 200 system updates, averaged over 100 randomly generated networks for each value of K, with 100 different random initial conditions and a damage size of 20 nodes for each network. See text for discussion.

We conclude that in the general case, CAs do not possess a characteristic connectivity K_s where damage spreading is independent of the system size N. Such a connectivity, however, exists for both RBNs and RTNs, which makes them particularly suitable as a computing model in an environment with high error probabilities or systems with low component reliabilities. An example is logic gates based on bio-molecular components [4], where high failure rates can be expected.

V. COMPLEX NETWORKS AND WIRING COSTS

Most real networks, such as brain networks [11], [36], electronic circuits [17], the Internet, and social networks, share the so-called small-world (SW) property [43]. Compared to purely locally and regularly interconnected networks (such as for example the CA interconnect of Figure 2), small-world networks have a very short average distance (measured as the number of edges to traverse) between any pair of nodes, which makes them
particularly interesting for efficient communication. The classical Watts-Strogatz small-world network [43] is built from a regular lattice with only nearest-neighbor connections. Every link is then rewired with a rewiring probability p to a randomly chosen node. Thus, by varying p, one can obtain a fully regular (p = 0) and a fully random (p = 1) network topology. The rewiring procedure establishes "shortcuts" in the network, which significantly lower the average distance (i.e., the number of edges to traverse) between any pair of nodes. In the original model, the length distribution of the shortcuts is uniform since a node is chosen randomly. If the rewiring of the connections is done proportionally to a power law, l^(-α), where l is the wire length, then we obtain a small-world power-law network. The exponent α affects the network's communication characteristics [23] and navigability [21], which is better than in the uniformly generated small-world network. One can think of other distance-proportional distributions for the rewiring, such as for example a Gaussian distribution, which has been found between certain layers of the rat's neocortical pyramidal neurons [14].

In a real network, it is fair to assume that local connections have a lower cost (in terms of the associated wire delay and the area required) than long-distance connections. Physically realizing small-world networks with uniformly distributed long-distance connections is thus not realistic, and distance, i.e., the wiring cost, needs to be taken into account, a perspective that recently gained increasing attention [30]. On the other hand, a network's topology also directly affects how efficiently problems can be solved.

Teuscher [37] has pragmatically and experimentally investigated important design trade-offs and properties of an irregular, abstract, yet physically plausible 3D small-world interconnect fabric that is inspired by modern network-on-chip paradigms. The results confirm that (1) computation in irregular assemblies is a promising
and disruptive computing paradigm for self-assembled nanoscale electronics and (2) that 3D small-world interconnect fabrics with a power-law decaying distribution of shortcut lengths are physically plausible and have major advantages over local 2D and 3D regular topologies, such as CA interconnects.

Discussion: There is a trade-off between (1) the physical realizability and (2) the communication characteristics of a network topology. A locally and regularly interconnected topology, such as that of a CA, is in general easy to build (especially for top-down engineered CMOS technology) and only involves minimal wire and area cost (as for example shown by Zhirnov et al. [45]), but it offers poor global communication characteristics and scales up poorly with system size. On the other hand, a random topology, such as that of RBNs or RTNs, scales up well and has a very short average path length, but it is not physically plausible because it involves costly long-distance connections established independently of the Euclidean distance between the nodes. The RBN and RTN topologies we consider here are thus extremes, as are CA topologies; the ideal lies in between: small-world topologies with a distance-dependent distribution of the connectivity. Such topologies are located in a unique spot in the design space and also offer two other highly relevant properties [22], [37]: (1) efficient navigability and thus potentially efficient routing, and (2) robustness against random link removals. For these reasons, we can conclude that small-world graphs are the most promising interconnects for future massive-scale devices.

VI. A GLANCE ON TASK SOLVING

In [26], Mesot and Teuscher have presented a novel analytical approach to find the local rules of random Boolean networks to solve the global density classification and the synchronization tasks, which are well-known benchmark tasks in the CA community, from any initial configuration. They have also quantitatively and qualitatively compared the results with
previously published work on cellular automata and have shown that randomly interconnected automata are computationally more efficient in solving these two global tasks. In addition, preliminary results by the authors [39] also suggest that K_c = 2 RBNs generalize better on simple learning tasks than sub-critical or supercritical networks, but more research will be necessary.

Discussion: To efficiently solve algorithmic problems with distributed computing architectures, efficient communication is key. This is particularly true for tasks such as the density or the synchronization task, which are trivial to solve if one has a global view on the entire system state, but non-trivial to solve if each cell only sees a limited number of neighboring cells. It is thus not surprising that cells interconnected by a network with the small-world property perform much better on such tasks, because the information propagation is significantly better. This is a too often neglected fact for CAs, in particular if one wants to use them as a viable mainstream and general-purpose computing architecture. It is well known that even simple CAs are computationally universal (and so are RBNs), i.e., they can solve any algorithmic problem, but due to their local non-small-world interconnect topology, that will only be possible in a highly inefficient way in the general case, i.e., for a large set of different applications. This is well illustrated by the (highly inefficient) implementation of a universal Turing machine on top of the Game of Life [32]. Naturally, there are exceptions to the general case, and it has been shown that CAs can be extremely efficient for certain niche applications, such as for example image processing.

VII. MANUFACTURING ISSUES

As Chen et al. [6] state, "[i]n order to realize functional nano-electronic circuits, researchers need to solve three problems: invent a nanoscale device that switches an electric current on and off; build a nanoscale circuit that controllably links very large numbers of these
devices with each other and with external systems in order to perform memory and/or logic functions; and design an architecture that allows the circuits to communicate with other systems and operate independently on their lower-level details.” While we can currently build switching devices in various technologies besides CMOS (see [5], [16], [47] for an overview), one of the remaining challenges is to assemble and interconnect these switching devices (or logic functions) into larger systems, and ultimately to design a computing architecture that allows reliable computations to be performed. As mentioned before, there is little consensus in the research community on what type of technology and computing architecture holds the most promise for the future.

The motivation for investigating randomly assembled interconnects and computing architectures can be summarized by the following observations:

• long-range and global connections are costly (in terms of wire delay and of the chip area used) and limit system performance [15];
• it is unclear whether a precisely regular and homogeneous arrangement of components is needed and possible on a multi-billion-component or even Avogadro-scale assembly of nano-scale components [40];
• “[s]elf-assembly makes it relatively easy to form a random array of wires with randomly attached switches” [46]; and
• building a perfect system is very hard and expensive.

We have hypothesized in [38] and [37] that bottom-up self-assembled electronics based on conductive nanowires or nanotubes can lead to the random interconnect topologies we are interested in; however, several questions remain open and are part of a 3-year interdisciplinary research project at LANL. Our approach consists in using a hybrid assembly (as others explore as well, e.g., [12]), where the functional building blocks will still be traditional silicon in a first step, while the interconnect is made up of self-assembled nanowires.
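The idea of a distance-dependent wire-length distribution can be sketched numerically. In the toy model below, all names are our own and an exponential decay of connection probability with distance is assumed for illustration; the actual distributions studied in [30], [37], [38] may differ. Nodes are placed on a ring, and each node draws its k inputs with probability proportional to exp(-alpha * d), so that alpha = 0 recovers uniform RBN-like wiring while large alpha recovers a CA-like local neighborhood:

```python
import math
import random

def distance_dependent_net(n, k, alpha, seed=0):
    """Draw k inputs per node, favoring nearby nodes when alpha > 0.

    Ring distance stands in for Euclidean distance on a 1-D layout.
    Repeated inputs are possible; this is only an illustrative sketch.
    """
    rng = random.Random(seed)
    inputs = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        w = [math.exp(-alpha * min(abs(i - j), n - abs(i - j))) for j in others]
        inputs.append(rng.choices(others, weights=w, k=k))
    return inputs

def mean_wire_length(inputs, n):
    """Average ring distance spanned by a wire (a proxy for wiring cost)."""
    lengths = [min(abs(i - j), n - abs(i - j))
               for i, ins in enumerate(inputs) for j in ins]
    return sum(lengths) / len(lengths)

local = distance_dependent_net(100, 3, alpha=1.0)    # mostly short wires
uniform = distance_dependent_net(100, 3, alpha=0.0)  # RBN-like: any length
print(mean_wire_length(local, 100), mean_wire_length(uniform, 100))
```

The single parameter alpha thus interpolates between the two extremes discussed in this paper, which is why such a wiring model keeps the wiring cost bounded while still admitting occasional long-range shortcuts.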
Nanowires can be grown in various ways using diverse materials, such as metals and semiconductors. We have chosen a novel way to grow conductive nanowires, which Wang et al. [42] at LANL have pioneered and demonstrated: Ag nanowires can be fabricated on top of conducting polyaniline polymer membranes via a spontaneous electrodeless deposition (self-assembly) method. We hypothesize that this will allow us to densely interconnect silicon components in a simple and cheap way with specific distance-dependent wire-length distributions. We believe that this approach will ultimately allow us to easily and cheaply fabricate RBN-like computing architectures.

Random threshold networks, on the other hand, could be implemented rather straightforwardly and efficiently with resonant tunneling diode (RTD) logic circuits (see, e.g., [31]), and represent a very interesting alternative to conventional Boolean logic gates. The results reported in this paper on random threshold networks can thus be directly applied to the implementation of such devices. There has been a significant body of research in the area of threshold logic in the past (see, e.g., [27]), but to the best of our knowledge, random threshold networks have not been considered as computing models for future and emerging computing machines.

VIII. CONCLUSION

The central claim of this paper is that locally interconnected computing architectures, such as cellular automata (CA), are in general not appropriate models for large-scale and general-purpose computations. We have supported this claim with recent theoretical results on the complex dynamical behavior of discrete random dynamical networks, their robustness to damage events as the system scales up, their ability to efficiently solve tasks, and their improved transport characteristics due to the short average path length. The arguments, in a nutshell, why we believe that CAs are not promising architectures for future information-processing devices, are as follows:

• their local interconnect topology is not small-world and
has thus worse global transport characteristics (than small-world or random graphs), which directly affects the effectiveness with which general-purpose algorithmic tasks can be solved;
• in terms of a complex dynamical system, they operate in the supercritical regime (K > K_c) with the widely used von Neumann neighborhood, which makes them sensitive to initial conditions;
• they do not generally have a characteristic connectivity K_s, where damage spreading is independent of system size, which would make a system inherently robust; and
• it is unclear whether a precisely regular and homogeneous arrangement of components is possible at the scale of future information-processing devices.

We have assessed RBNs and RTNs as alternative models; however, as we have seen in Section V, they come at a serious cost: the uniform probability of establishing connections with any node in the system, independent of the Euclidean distance between them, is not physically plausible and too expensive in terms of wiring cost. The ultimate interconnect topology is small-world and has a distance-dependent distribution of the wires [30], [37], [38]. We have preliminary evidence that, if we were to connect RBNs and RTNs by such a network topology, both K_c and K_s would still exist. Research to clarify this question is in progress.
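The damage-spreading behavior behind the K_c and K_s arguments above can be illustrated with a minimal RBN simulation. This is only a sketch under our own naming, assuming random lookup-table Boolean functions with bias p = 0.5: a one-bit perturbation tends to die out in a subcritical network (K = 1) but spreads to a finite fraction of the nodes in a supercritical one (K = 3):

```python
import random

def make_rbn(n, k, seed):
    """Random Boolean network: each node gets k random inputs and a
    random Boolean function, stored as a lookup table over 2^k patterns."""
    rng = random.Random(seed)
    inputs = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    tables = [[rng.randrange(2) for _ in range(2 ** k)] for _ in range(n)]
    def step(state):
        # Synchronous update: index each node's table by its inputs' bits.
        return tuple(
            tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
            for i in range(n)
        )
    return step

def damage(n, k, t_max=50, trials=20):
    """Mean normalized Hamming distance after t_max synchronous steps,
    starting from two states that differ in a single node (one-bit damage)."""
    total = 0.0
    for s in range(trials):
        rng = random.Random(10_000 + s)
        step = make_rbn(n, k, seed=s)
        a = tuple(rng.randrange(2) for _ in range(n))
        b = (1 - a[0],) + a[1:]          # flip node 0
        for _ in range(t_max):
            a, b = step(a), step(b)
        total += sum(x != y for x, y in zip(a, b)) / n
    return total / trials

# Subcritical (K=1): damage tends to die out; supercritical (K=3): it spreads.
print(damage(200, 1), damage(200, 3))
```

In the annealed approximation, a one-bit perturbation multiplies by K/2 per step for p = 0.5, which is why K = 2 separates the ordered from the chaotic regime in this sketch, consistent with the critical connectivity discussed in this paper.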