

The Google GFS Paper (Chinese Version)

Abstract. We have designed and implemented the Google File System (GFS), a scalable distributed file system for large, distributed, data-intensive applications.

Although it runs on inexpensive commodity hardware, it still provides fault tolerance and delivers high aggregate performance to a large number of clients.

While GFS shares many of the same goals as earlier distributed file systems, our design has been driven by observations of our application workloads and technological environment, both current and anticipated, which reflect a marked departure from some of the assumptions of earlier distributed file systems.

This has led us to reexamine traditional file system design choices and to explore radically different points in the design space.

GFS has successfully met our storage needs.

It is widely deployed within Google as the storage platform for generating and processing data used by our services, as well as by research and development efforts that require large data sets.

The largest cluster to date provides hundreds of terabytes of storage across thousands of disks on more than a thousand machines, and it is concurrently accessed by hundreds of clients.

In this paper, we present the file system interface extensions designed to support distributed applications, discuss many aspects of our design, and report measurements from both micro-benchmarks and real-world use.

General terms: design, reliability, performance, measurement. Keywords: fault tolerance, scalability, data storage, clustered storage.

1. Introduction

To meet Google's rapidly growing data processing demands, we designed and implemented the Google File System (GFS).

GFS shares many of the same goals as earlier distributed file systems, such as performance, scalability, reliability, and availability.

However, our design has also been driven by observations of our application workloads and technological environment, both current and anticipated, which reflect a marked departure from some of the assumptions of earlier distributed file systems.

This has led us to reexamine traditional file system design choices and to explore radically different points in the design space.

First, component failures are treated as the norm rather than the exception.

The file system consists of hundreds or even thousands of storage machines built from inexpensive commodity parts and is accessed by a comparable number of client machines.

The quantity and quality of these components virtually guarantee that, at any given time, some of them will not be working and some will not recover from their current failures.

We have seen problems caused by application bugs, operating system bugs, human errors, and failures of disks, memory, connectors, networking, and power supplies.


Integrating Static Checking and Interactive Verification: Supporting Multiple Theories and Provers in VerificationJoseph R.KinirySystems Research GroupSchool of Computer Science and InformaticsUCD DublinBelfield,Dublin4,IrelandPatrice ChalinDependable Software Research GroupDepartment of Computer Science and Software EngineeringConcordia UniversityMontreal,Quebec,H3G1M8,CanadaCl´e ment HurlinUniversit´e Henri Poincar´e,Nancy1BP60120,Nancy Cedex,Francewith contributions fromCees-Bart Breunesse,Julien Charles,David Cok,Bart Jacobs,Erik Poll,Silvio Ranise,Aleksy Schubert,and Cesare TinelliAbstractAutomatic verification by means of extended static checking(ESC)has seen some success in industry and academia due to its lightweight and easy-to-use nature.Un-fortunately,ESC comes at a cost:a host of logical and prac-tical completeness and soundness issues.Interactive veri-fication technology,on the other hand,is usually complete and sound,but requires a large amount of mathematical and practical expertise.Most programmers can be expected to use automatic,but not interactive,verification.The focus of this proposal is to integrate these two approaches into a single theoretical and practical framework,leveraging the benefits of each approach.1.IntroductionEndemic in society today are problems related to the lack of software quality which,as a result,is costing gov-ernments,businesses,and nations billions of dollars annu-ally[16].Correctness and security issues are also directly related to some of the most important concerns of the day such as those of national security and technology-based vot-ing.Additionally,driven by governmental regulations and market demands,businesses are now slowly beginning to assume liability for the faults exhibited by the software sys-tems they offer to their customers.This is particularly true in safety and security critical domains.While a variety of software engineering practices have been developed to help increase software quality(e.g.,test-ing practices,system design,modern processes,robust op-erating systems and programming languages),it is widely acknowledged that a promising way to achieve highly reli-able software in critical domains is to couple these practices with applied formal techniques supported by powerful mod-ern tools and technologies like those discussed in this paper.1.1.Program VerificationApplied formal methods has turned a corner over the past few years.Various groups in the semantics,specification, and verification communities now have sufficiently devel-oped mathematical and tool infrastructures that automatic and interactive verification of software components that arewritten in modern programming languages like Java has become a reality.Automatic verification by means of Ex-tended Static Checking(ESC)has seen some success in in-dustry and academia due to its lightweight and easy-to-use nature.Unfortunately,ESC comes at a cost:a host of logi-cal and practical completeness and soundness issues.Inter-active verification technology,on the other hand,is usually complete and sound,but requires a large amount of math-ematical and practical expertise.Typical programmers can be expected to use automatic,but not interactive,verifica-tion.In this paper we discuss work which is being undertaken to:•integrate the ESC and interactive verification ap-proaches into a single theoretical framework,thus cre-ating a unified semantic foundation,and•directly realize this theoretical framework in a modern software development environment(IDE)as an Open Source 
initiative.Specifically,our current work is focused on the integra-tion of the verification technologies behind two successful tools,namely ESC/Java2[12]and the LOOP program ver-ifier[11](both will be described shortly).The proposed integrated environment will perform as much automated verification as possible,falling back on interactive verifica-tion only when necessary.Additionally,in those situations where developers wish to delay the completion of the inter-active proofs,the tool will insert run-time assertion check-ing code.2.Two Key Java Verification ToolsNext,we discuss two complementary verification tools for Java upon which we base this work.These two tools are complementary because one is an automatic checker and the other is an interactive one.2.1.Extended Static Checking:ESC/Java2One of the most successful automatic verification tools for Java has been ESC/Java,an extended static checker orig-inally developed at DEC SRC[6].The next-generation release,called“ESC/Java2”,is now available as an Open Source project that is supported by academic and industrial researchers[12].David Cok and thefirst author are the ESC/Java2project administrators and have been the main contributors(until recently).ESC/Java2is currently used as a research foundation by over a half dozen research groups and as an instructional tool in nearly two dozen software-centric courses around the world.ESC/Java2reasons about Java programs that are spec-ified with annotations written in the Java Modeling Lan-guage(JML)[2,13].ESC/Java2automatically converts JML-annotated Java code into verification conditions that are automatically discharged by an embedded theorem prover—currently,Simplify[5].Problems in the specifi-cations,programs,or the checking itself are indicated to the user by means of error messages.As ESC/Java2’s perfor-mance and mode of interaction are comparable to an or-dinary compiler,it is quite usable by industry developers as well as computer science and software engineering stu-dents.2.2.Interactive Verification:The LOOP ToolThe LOOP tool,developed by the SoS Group at Radboud University Nijmegen under the supervision of Prof.Bart Jacobs,is an interactive verification tool for JavaCard[3]. The LOOP tool is one of the most complete verifiers with respect to the subset of Java that it covers.LOOP com-piles JML-annotated Java programs into proof obligations expressed as theories for the PVS theorem prover.By mak-ing use of PVS to interactively discharge the proof obliga-tions,one is able to prove a program correct with respect to its JML specification.The base Java/JML semantics of the LOOP tool essen-tially consists of a parameterized theory.The theory pa-rameters are for the(sub-)theory to be used to reason about integral types.Early in the LOOP Project,Java’s integral types were modeled by the mathematical ter, support was added for bounded integers(with the familiar modulo arithmetic)and a bitvector representation(which facilitates reasoning about bit-wise operations—something that is common in JavaCard applications).When reasoning about Java programs,one has a choice of program logics in-cluding Hoare logics and two weakest precondition calculi. 
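To make the input format of both tools concrete, here is a small JML-annotated Java class of the kind that ESC/Java2 checks automatically and that LOOP compiles into PVS proof obligations; the class itself is a hypothetical illustration, not code shipped with either tool.

```java
// Hypothetical JML-annotated Java class (illustration only).
public class Purse {
    //@ public invariant balance >= 0;
    private /*@ spec_public @*/ int balance;

    //@ requires amount > 0;
    //@ ensures balance == \old(balance) + amount;
    public void deposit(int amount) {
        balance = balance + amount;
    }

    //@ requires 0 < amount && amount <= balance;
    //@ ensures balance == \old(balance) - amount;
    //@ ensures \result == balance;
    public int withdraw(int amount) {
        balance = balance - amount;
        return balance;
    }
}
```

Under an idealized (mathematical) integer semantics both methods meet their specifications, whereas under a bounded 32-bit semantics deposit can overflow and break the invariant; this is precisely the kind of difference between the integral-type representations (mathematical integers, bounded integers, bitvectors) that the verification semantics has to expose.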
Recently,Breunesse has merged these into a single,unified theory in which different representations can be used simul-taneously[1].As these two tools represent some of the best-of-breed of applied formal methods in the Java domain,integrating their foundations and approaches has merit.To accomplish this goal,there are several theoretical and practical challenges to be faced.3.Integration:Observations and Challenges There is no single canonical semantics of Java.The canonical informal semantics for Java is embodied in the Java Language Specification[8].Various groups have for-malized portions of this text and built complementary tools, e.g.,the•Everest Group at INRIA Sophia-Antipolis,•SoS Group at the Radboud University Nijmegen,•Logical Group at INRIA Futurs/Universit´e Paris-Sud,•SAnToS Laboratory at Kansas State University,•KeY group,composed of researchers from the Chalmers University of Technology,the University of Koblenz,and the University of Karlsruhe,•Software Component Technology Group at ETH Z¨u rich,and•now disbanded Extended Static Checking Group at Hewlett-Packard/Compaq/Digital Systems Research Center.In all of these cases the formalizations are incomplete,ei-ther in scope or in accuracy.Also,very little is understood about how the various semantics relate to each other. There is no single,core,canonical semantics of JML. While there are several partial informal and formal seman-tics for JML,there is no single,core semantics.Further-more,the informal semantics of JML is much more tran-sient and imprecise than that of Java,so the problems men-tioned above for Java are compounded for JML.This state of affairs leads to subtle inconsistencies between the inter-pretation of specifications by the tools that support JML. Because of this inconsistency,relating the semantics to each other is extremely difficult.Additionally,explaining,ex-tending,and reasoning about these artifacts(e.g.,the calculi of ESC/Java2)is very difficult.Little work has been done on meta-logical reason-ing about object logics.By meta-logical reasoning we mean reasoning about,rather than within,the semantics of program and specification languages.Formal meta-mathematical proofs are rare.It is not known,for example, if ESC/Java2’s object logic is sound.This is a critical issue.4.An Integrated Verification EnvironmentIn collaboration with others,our research groups have begun work on an integrated verification environment(IVE) and its necessary theoretical foundations.In doing so we have started to address the problems identified in the previ-ous section.We are(concurrently)working on the achievement of the following initial milestones:•elaboration of a semantics for a“core”JML,•extracting,analysing,and extending ESC/Java2’s logic and calculi,and•redesigning ESC/Java2’s proof infrastructure as well as backend interfaces and adaptors with the main ob-jective of allowing it to support new provers.4.1.Semantics for JMLSemantics have been developed for JML within differ-ent logics,nearly all of which have been embedded in the various tools developed by the groups enumerated in Sec-tion3.A few of these tools are publicly available,but most were never used outside the group that originally developed them.To resolve ambiguities,disagreements,and lack of de-tailed formal documentation within the JML community,a single,open semantics of a“core”of JML(recently named JML Level1)needs to be written.Chalin and Kiniry are currently outlining a proposed core and have begun formal-izing its definition.The outcome of this effort 
is also a ma-jor goal of the MOBIUS project[14].This semantics will be written in a well-understood for-malism,e.g.,within a modern extension to Hoare logic,a denotational semantics,and/or in an operational semantics. In our initial work we have decided to express our base, canonical semantics in PVS and Isabelle.Realizing the ob-ject logic within higher-order provers will help us charac-terize and compare semantics.It is expected that multiple formalizations of the object logic will be created due to practical and theoretical rea-sons.E.g.,most research groups have developed expertise in only one prover,and furthermore,the community can benefit from experimentation with the varying capabilities of each of these provers.4.2.Evolving ESC/Java2’s Logic and CalculiAs inherited from its predecessor,SRC’s ESC/Java, ESC/Java2makes use of an unsorted object logic and two calculi(a weakest precondition calculus and a strongest postcondition calculus).The unsorted object logic con-sists of approximately80axioms written in the language understood(only)by the Simplify prover.These axioms are highly tuned to the quirks and capabilities of Simplify. Initial logical extensions in ESC/Java2saw the logic aug-mented with approximately another20axioms.A transcription of this Simplify-based unsorted object logic has been written in PVS.We refer to this formaliza-tion of the logic as EJ0.Two other logics,EJ1and EJ2, have also been written;EJ1is merely a sorted version of EJ0whereas EJ2,also a sorted logic,was written from scratch with the purpose of better representing the abstrac-tions needed by ESC/Java2to reason about JML annotatedJava programs.Soundness proofs as well as results on the (semi-)equivalence of the EJ i logics are underway.We will also be“extracting”the weakest precondition and a strongest postcondition calculi of ESC/Java2,as well as at least one of the weakest precondition calculi used with the LOOP verification system.This will most likely be done in a higher-order logic or a term rewriting framework. The rewriting speed of special purpose environments like Maude[4]may be of benefit as the tool and verification ef-forts scale to larger problems.4.3.Supporting Multiple ProversAs we progress in our work on the definition and proofs of soundness and completeness of the EJ i logics,we are also progressing in our work on extending and adapting ESC/Java2to support multiple provers.By developing a generic prover interface along with suitable adaptors, we plan on experimenting with next-generationfirst-order provers,and a few higher-order provers.We anticipate the possibility of supporting the use of multiple provers,simultaneously or independently.Which prover to use might be determined automatically by ESC/Java2based on the context of the verification and the capabilities of the provers.For example,while Simplify is a very fast predicate solver,it does not support a complete or sound(fragment of)arithmetic,thus in verification con-texts where arithmetic is used,the tool should automatically avoid using Simplify.We have chosen Sammy and haRVey as the initial provers for experimentation[7,15].This choice was made due to our research relationship with the authors of these two tools as well as the authors’high-profile position within the SMT-LIB community[17].As a necessary precursor to being able to support mul-tiple provers,we are required to translate our object logic, whose current canonical representation is in PVS,into an appropriate formalism understood by each of the provers. 
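One concrete way to realize the generic prover interface and adaptors mentioned above is sketched below; the interface, type names, and method signatures are our own illustrative assumptions, not the actual ESC/Java2 code.

```java
import java.util.List;

/**
 * Hypothetical sketch of a generic prover back-end interface: the checker
 * would emit verification conditions in a prover-neutral form, and an
 * adaptor would translate them for a concrete prover
 * (Simplify, Sammy, haRVey, PVS, Isabelle, Coq, ...).
 */
interface ProverBackend {
    /** Human-readable prover name, e.g. "Simplify" or "haRVey". */
    String name();

    /** Whether this prover can be trusted on the given theory, e.g. linear arithmetic. */
    boolean supportsTheory(Theory theory);

    /** Load the (translated) background object logic, e.g. the EJ1 axioms. */
    void loadBackgroundPredicate(List<Formula> axioms);

    /** Try to discharge one verification condition within a time limit. */
    ProverResult prove(Formula verificationCondition, long timeoutMillis);
}

enum Theory { UNINTERPRETED_FUNCTIONS, LINEAR_ARITHMETIC, BITVECTORS }

enum ProverResult { VALID, INVALID, UNKNOWN, TIMEOUT }

/** Placeholder for a prover-neutral formula representation. */
interface Formula { }
```

A dispatcher could then pick, per verification condition, the first registered back-end whose supportsTheory answers true for the theories that condition mentions, matching the example above of avoiding Simplify when sound arithmetic is required.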
Encoding of the ESC/Java2object logic for these provers is being accomplished primarily by their respective research teams.We will also be experimenting with the use of higher-order provers as backends for ESC/Java2.Our initially tar-geted provers are PVS,Isabelle,and Coq.Aside from the authors,Julien Charles at INRIA is working on a Coq real-ization of the object logic and Cesare Tinelli is contributing to the PVS realization.5.ConclusionOne of the advantages of our project is that we have a working toolset today that supports Java and JML.These tools are actively being used by researchers and a few in-dustry practitioners.Our goal is to help evolve these tools into their next-generation counterparts and,all the while, make sure that we take our own medicine.Thus,for exam-ple,writing JML specifications for the Java modules of our toolsets has been and is routinely done.We are also apply-ing our tools to themselves,thus providing non-trivial case studies demonstrating the practical utility of the tools.ESC/Java2and LOOP have been applied to other case studies in the areas of Internet voting[10],JavaCard appli-cations[9],and web-based enterprise applications,for ex-ample.Some of these case studies are already part of our GForge[18].We will be routinely re-executing these case studies as the tools evolve so as to validate the tools and ensure that their effectiveness is,in fact,improving.6.AcknowledgmentsThis proposal is based upon the work of many people. Our collaborators are gratefully acknowledged on thefirst page as well as in the various sections of the proposal.We thank the anonymous referees for their helpful comments. This work is being supported by the Ireland Canada Uni-versity Foundation as well by the European Project Mobius within the frame of IST6th Framework and national grants from the Science Foundation Ireland and Enterprise Ireland. 
This paper reflects only the authors’views and the Com-munity is not liable for any use that may be made of the information contained therein.References[1] C.-B.Breunesse.On JML:Topics in Tool-assisted Verifi-cation of Java Programs.PhD thesis,Radboud University Nijmegen,2005.In preparation.[2]L.Burdy,Y.Cheon,D.Cok,M.Ernst,J.Kiniry,G.T.Leav-ens,K.M.Leino,and E.Poll.An overview of JML tools and applications.Feb.2005.[3]Z.Chen.Java Card Technology for Smart Cards:Architec-ture and Programmer’s Guide.2000.[4]M.Clavel,F.Dur´a n,S.Eker,J.Meseguer,and M.-O.Stehr.Maude as a formal meta-tool.In Proceedings of the World Congress on Formal Methods in the Development of Com-puting Systems,1999.[5] D.Detlefs,G.Nelson,and J.B.Saxe.Simplify:a theoremprover for program checking.J.ACM,52(3):365–473,2005.[6] C.Flanagan,K.R.M.Leino,M.Lillibridge,G.Nelson,J.B.Saxe,and R.Stata.Extended static checking for Java.In ACM SIGPLAN2002Conference on Programming Lan-guage Design and Implementation(PLDI’2002),pages234–245,2002.[7]H.Ganzinger,G.Hagen,R.Nieuwenhuis,A.Oliveras,andC.Tinelli.DPLL(T):Fast decision procedures.In R.Alurand D.Peled,editors,Proceedings of the16th Interna-tional Conference on Computer Aided Verification,CAV’04(Boston,Massachusetts),volume3114of Lecture Notes in Computer Science,pages175–188.Springer,2004.[8]J.Gosling,B.Joy,and G.Steele.The Java Language Spec-ification.first edition,Aug.1996.[9] B.Jacobs.JavaCard program verification.In R.Boultonand P.Jackson,editors,Theorem Proving in Higher Order Logics TPHOL2001,volume2151,pages1–3,2001.[10] B.Jacobs.Counting votes with formal methods.InC.Rattray,S.Majaraj,and C.Shankland,editors,AlgebraicMethodology and Software Technology,volume3116,pages 241–257,2004.[11] B.Jacobs and E.Poll.Java program verification at Ni-jmegen:Developments and perspective.In International Symposium on Software Security(ISSS’2003),volume3233, pages134–153,Nov.2004.[12]J.R.Kiniry and D.R.Cok.ESC/Java2:UnitingESC/Java and JML:Progress and issues in building and us-ing ESC/Java2and a report on a case study involving the use of ESC/Java2to verify portions of an Internet voting tally system.In Construction and Analysis of Safe,Secure and Interoperable Smart Devices:International Workshop, CASSIS2004,volume3362,Jan.2005.[13]G.T.Leavens,E.Poll,C.Clifton,Y.Cheon,C.Ruby,D.Cok,and J.Kiniry.JML Reference Manual.Departmentof Computer Science,Iowa State University,226Atanasoff Hall,draft revision1.94edition,2004.[14]The MOBIUS project.http://mobius.inria.fr/.[15]S.Ranise and D.Deharbe.Light-weight theorem provingfor debugging and verifying units of code.In International Conference on Software Engineering and Formal Methods SEFM2003,Canberra,Australia,Sept.2003.[16]RTI:Health,Social,and Economics Research,Research Tri-angle Park,NC.The economic impacts of inadequate in-frastructure for software testing.Technical Report Planning Report02-3,NIST,May2002.[17]SMT-LIB:The satisfiability modulo theories library.http:///smtlib/.[18]The Systems Research Group GForge.http://sort.ucd.ie/.。

Preface to the Special Issue on Formal Methods and Applications

Journal of Software (软件学报), ISSN 1000-9825, CODEN RUXUEW. E-mail: ************.cn. Journal of Software, 2021, 32(6): 1579-1580 [doi: 10.13328/ki.jos.006256]. © Copyright by the Institute of Software, Chinese Academy of Sciences. Tel: +86-10-62562563.

TIAN Cong 1, DENG Yuxin 2, JIANG Yu 3
1 (School of Computer Science and Technology, Xidian University, Xi'an, Shaanxi 710071, China)
2 (Software Engineering Institute, East China Normal University, Shanghai 200062, China)
3 (School of Software, Tsinghua University, Beijing 100084, China)
Corresponding authors: TIAN Cong, DENG Yuxin, JIANG Yu. E-mail: *****************, ***************, *********************
Citation (Chinese format): Tian C, Deng YX, Jiang Y. Preface to the special issue on formal methods and applications. Journal of Software, 2021, 32(6): 1579-1580. /1000-9825/6256.htm
Received: 2021-01-30

The development of computer science involves the development of both hardware and software, and one of the core problems for both is how to guarantee that they are safe and reliable. Today, hardware performance keeps increasing, computation keeps getting faster, and architectures and software functionality are becoming more complex; developing reliable software and hardware systems is therefore a major challenge for computer science. This is especially true now that computer systems are widely used in many safety-critical systems, such as high-speed train control, aerospace control, and medical device control, where errors can lead to catastrophic consequences.

Formal methods have been successfully applied to many kinds of hardware design, especially chip design. Major hardware manufacturers such as IBM and AMD maintain very strong formal methods teams that provide technical support for ensuring system reliability. In recent years, with the development of formal verification techniques and tools, and in particular their successful application to program verification, formal methods have shown irreplaceable potential for managing the complexity of software development and improving software reliability. Renowned research institutions have invested considerable manpower and resources in this area. For example, NASA's formal methods research team has played a major role in guaranteeing the correctness of spacecraft control software; during the development of the Curiosity Mars rover, formal methods were used extensively to improve the reliability and productivity of the control software. In emerging areas such as blockchain and artificial intelligence, formal methods are also gradually being applied to improve the overall safety and controllability of systems.

This special issue issued an open call for papers and received 27 submissions. The guest editors invited researchers active in this field, both in China and abroad, to take part in the reviewing; each submission was initially reviewed by at least two experts. Most manuscripts went through two rounds of review (an initial review and a second review), and some went through two rounds of second review. Manuscripts that passed the initial review were also presented at the FMAC 2020 conference, where the authors answered questions from the audience and collected revision suggestions. In the end, 18 papers were accepted for this special issue.

"C2P: A Pi-Calculus-Based Formal Abstraction Method and Tool for Protocol C Code" proposes a formal verification approach for detecting semantic and logical errors in security protocol code: the protocol's C source code is automatically abstracted into a Pi-calculus model, and the protocol's security properties are formally verified on that model.

"Automatic Description Generation for Large-Granularity Pull Requests" uses graph neural networks and reinforcement learning to automatically generate descriptions for large-granularity pull requests on the GitHub platform.

"Backward Unfolding of Petri Nets and Its Application to Data Race Detection in Programs" proposes a goal-directed backward unfolding algorithm for the coverability problem of safe Petri nets and applies it to the formal verification of data race detection in concurrent programs.

"Verification of Operating System Exception Management for the SPARC Processor Architecture" presents a Hoare-logic-based verification framework for proving the correctness of exception management in operating systems targeting the SPARC architecture; using this framework, the authors verified the correctness of the exception management of SpaceOS, the embedded real-time spacecraft operating system used in orbit by China's BeiDou-3 system.

"A Code Generation Method for Dataflow Models Based on Branch Labeling" proposes a code generation method based on branch scheduling labels for dataflow models with complex branch combinations.

"Schedulability Analysis of AADL Models under Memory Resource Constraints" proposes an architecture-level method, based on preemptive scheduling sequences, for computing cache-related preemption delays and uses it to analyze the schedulability of AADL (Architecture Analysis and Design Language) models under cache-related preemption delay constraints.

"Deadlock Detection for Multithreaded Programs Based on Lock-Augmented Segmentation Graphs" improves existing lock-graph and segmentation-graph models and proposes a new deadlock detection method that effectively eliminates various false positives and improves detection accuracy.

"A Taint Analysis Tool for Android Applications Based on Tainted-Variable Relation Graphs" proposes a taint analysis method based on tainted-variable relation graphs and describes the architecture, modules, and algorithmic details of FastDroid, the tool built on this method.

"An Executable Semantics for the Ethereum Intermediate Language" formalizes Yul, the Ethereum intermediate language, giving formal definitions of its type system and small-step operational semantics in the Isabelle/HOL proof assistant, laying a foundation for verifying the correctness and security of smart contracts.

"A Smooth Intervention Model for Individual Interaction Behavior" proposes a smooth intervention model for systems of individual interaction behavior that guides user behavior to change smoothly while producing enough discriminability to significantly improve model accuracy in behavior-masquerade anomaly detection scenarios.

"A Raft Protocol Supporting Out-of-Order Execution" uses TLA+ to give a rigorous formal specification of the distributed consensus protocol ParallelRaft and proves the algorithm's correctness for small numbers of participants.

"Hybrid AADL Modeling and Model Transformation for Verifying Spatio-Temporal Properties of CPS" proposes a hybrid AADL modeling and model transformation method for verifying spatio-temporal properties of cyber-physical systems and demonstrates its effectiveness on an aircraft collision avoidance system.

"A Formal Method for Functional Verification in Chip Development" proposes a new method for verifying the consistency between a design model and generated code: the system is modeled in the MSVL language and, based on the principle of unified model checking, the model is verified against the desired properties.

"Dataflow-Oriented Formal Modeling and Analysis of the ROS2 Data Distribution Service" uses probabilistic model checking to analyze and verify the real-time behavior and reliability of the data distribution mechanism of the robot operating system ROS2.

"A Formal Verification Method for Ptolemy Discrete-Event Models" proposes a formal model transformation approach to verifying the correctness of discrete-event models: a plug-in implemented in the Ptolemy environment automatically translates discrete-event models into timed automata, and verification is completed by invoking the Uppaal verification kernel.

"Formal Verification of Smart Contracts with MSVL" describes how to model and verify smart contracts using the Modeling, Simulation and Verification Language (MSVL) and Propositional Projection Temporal Logic (PPTL).

"A Differential Fuzzing Method for ROS" proposes a differential fuzzing approach that tests packages across different versions of the robot operating system ROS to uncover vulnerabilities.

"Formalization of Block Matrix Operations in Coq" refines a matrix formalization approach based on Coq record types, including a new definition of matrix equivalence and proofs of a set of new lemmas, ultimately providing basic libraries for several kinds of formalized matrices and block matrices.

This special issue focuses on foundational formal methods, techniques, supporting tools, and cross-domain applications, and reflects the latest research progress of Chinese scholars in this field. We thank the editorial board of the Journal of Software and the CCF Technical Committee on Formal Methods for their guidance and help with this special issue; we thank the editorial office for its hard work, from publishing the call for papers and inviting reviewers to collecting review comments, revising papers, finalizing, and publishing; we thank all reviewers of the special issue for their timely, patient, and meticulous work; and we thank all contributing authors for their trust in the Journal of Software. We hope this special issue will help advance research on formal methods.

TIAN Cong (1981-), female, Ph.D., professor and doctoral supervisor at the School of Computer Science and Technology, Xidian University, CCF Distinguished Member; her main research interests include formal methods and program verification.

JIANG Yu (1989-), male, Ph.D., associate professor and doctoral supervisor at the School of Software, Tsinghua University, CCF Member; his main research interests include formal methods, program analysis, and embedded software.

DENG Yuxin (1978-), male, Ph.D., professor and doctoral supervisor at the Software Engineering Institute, East China Normal University, CCF Senior Member; his main research interests include formal methods and program theory.

2022 Postgraduate and Doctoral Entrance Examinations - Doctoral English - Northwestern Polytechnical University: Full Mock Exam with Analysis of Error-Prone and Difficult Points, Papers A and B (with answers), Question Set No. 97

I. Comprehensive questions (15 in total)

1. Multiple choice: As a general rule, Dad is generous, but as a merchant, he usually drives a hard ( ).
Question 1 options: A. bargain   B. deal   C. transaction   D. negotiation
[Answer] A
[Explanation] This question tests a fixed collocation.

"Drive a hard bargain" means to bargain aggressively for favorable terms.

Sentence meaning: As a general rule, Dad is generous, but as a merchant, he usually drives a hard bargain.

Option A fits the sentence.

2. Multiple choice: The State Council will lay down new rules that aim to make management compatible with internationally accepted conventions.
Question 1 options: A. conferences   B. conversations   C. practices   D. formations
[Answer] C
[Explanation] This question tests distinguishing the meanings of nouns.

Sentence meaning: The State Council will lay down new rules to make management consistent with internationally accepted practice.

conference: "meeting"; conversation: "(informal) talk, chat"; practice: "practice, custom"; formation: "formation, composition".

From the preceding "lay down new rules", the underlined word "conventions" can be inferred to mean accepted practices, so option C fits.

3. Multiple choice: As ordinary people, scientists are by no means more honest or ( ) than other people, but as scientists, they attach special value to honesty while they are in their working sphere.
Question 1 options: A. ethical   B. ethnic   C. aesthetic   D. esthetic
[Answer] A
[Explanation] This question tests distinguishing the meanings of adjectives.

Paper Titles from the Big Four Security Conferences

2009and2010Papers:Big-4Security ConferencespvoOctober13,2010NDSS20091.Document Structure Integrity:A Robust Basis for Cross-site Scripting Defense.Y.Nadji,P.Saxena,D.Song2.An Efficient Black-box Technique for Defeating Web Application Attacks.R.Sekar3.Noncespaces:Using Randomization to Enforce Information Flow Tracking and Thwart Cross-Site Scripting Attacks.M.Van Gundy,H.Chen4.The Blind Stone Tablet:Outsourcing Durability to Untrusted Parties.P.Williams,R.Sion,D.Shasha5.Two-Party Computation Model for Privacy-Preserving Queries over Distributed Databases.S.S.M.Chow,J.-H.Lee,L.Subramanian6.SybilInfer:Detecting Sybil Nodes using Social Networks.G.Danezis,P.Mittal7.Spectrogram:A Mixture-of-Markov-Chains Model for Anomaly Detection in Web Traffic.Yingbo Song,Angelos D.Keromytis,Salvatore J.Stolfo8.Detecting Forged TCP Reset Packets.Nicholas Weaver,Robin Sommer,Vern Paxson9.Coordinated Scan Detection.Carrie Gates10.RB-Seeker:Auto-detection of Redirection Botnets.Xin Hu,Matthew Knysz,Kang G.Shin11.Scalable,Behavior-Based Malware Clustering.Ulrich Bayer,Paolo Milani Comparetti,Clemens Hlauschek,Christopher Kruegel,Engin Kirda12.K-Tracer:A System for Extracting Kernel Malware Behavior.Andrea Lanzi,Monirul I.Sharif,Wenke Lee13.RAINBOW:A Robust And Invisible Non-Blind Watermark for Network Flows.Amir Houmansadr,Negar Kiyavash,Nikita Borisov14.Traffic Morphing:An Efficient Defense Against Statistical Traffic Analysis.Charles V.Wright,Scott E.Coull,Fabian Monrose15.Recursive DNS Architectures and Vulnerability Implications.David Dagon,Manos Antonakakis,Kevin Day,Xiapu Luo,Christopher P.Lee,Wenke Lee16.Analyzing and Comparing the Protection Quality of Security Enhanced Operating Systems.Hong Chen,Ninghui Li,Ziqing Mao17.IntScope:Automatically Detecting Integer Overflow Vulnerability in X86Binary Using Symbolic Execution.Tielei Wang,Tao Wei,Zhiqiang Lin,Wei Zou18.Safe Passage for Passwords and Other Sensitive Data.Jonathan M.McCune,Adrian Perrig,Michael K.Reiter19.Conditioned-safe Ceremonies and a User Study of an Application to Web Authentication.Chris Karlof,J.Doug Tygar,David Wagner20.CSAR:A Practical and Provable Technique to Make Randomized Systems Accountable.Michael Backes,Peter Druschel,Andreas Haeberlen,Dominique UnruhOakland20091.Wirelessly Pickpocketing a Mifare Classic Card.(Best Practical Paper Award)Flavio D.Garcia,Peter van Rossum,Roel Verdult,Ronny Wichers Schreur2.Plaintext Recovery Attacks Against SSH.Martin R.Albrecht,Kenneth G.Paterson,Gaven J.Watson3.Exploiting Unix File-System Races via Algorithmic Complexity Attacks.Xiang Cai,Yuwei Gui,Rob Johnson4.Practical Mitigations for Timing-Based Side-Channel Attacks on Modern x86Processors.Bart Coppens,Ingrid Verbauwhede,Bjorn De Sutter,Koen De Bosschere5.Non-Interference for a Practical DIFC-Based Operating System.Maxwell Krohn,Eran Tromer6.Native Client:A Sandbox for Portable,Untrusted x86Native Code.(Best Paper Award)B.Yee,D.Sehr,G.Dardyk,B.Chen,R.Muth,T.Ormandy,S.Okasaka,N.Narula,N.Fullagar7.Automatic Reverse Engineering of Malware Emulators.(Best Student Paper Award)Monirul Sharif,Andrea Lanzi,Jonathon Giffin,Wenke Lee8.Prospex:Protocol Specification Extraction.Paolo Milani Comparetti,Gilbert Wondracek,Christopher Kruegel,Engin Kirda9.Quantifying Information Leaks in Outbound Web Traffic.Kevin Borders,Atul Prakash10.Automatic Discovery and Quantification of Information Leaks.Michael Backes,Boris Kopf,Andrey Rybalchenko11.CLAMP:Practical Prevention of Large-Scale Data Leaks.Bryan Parno,Jonathan M.McCune,Dan Wendlandt,David 
G.Andersen,Adrian Perrig12.De-anonymizing Social Networks.Arvind Narayanan,Vitaly Shmatikov13.Privacy Weaknesses in Biometric Sketches.Koen Simoens,Pim Tuyls,Bart Preneel14.The Mastermind Attack on Genomic Data.Michael T.Goodrich15.A Logic of Secure Systems and its Application to Trusted Computing.Anupam Datta,Jason Franklin,Deepak Garg,Dilsun Kaynar16.Formally Certifying the Security of Digital Signature Schemes.Santiago Zanella-Beguelin,Gilles Barthe,Benjamin Gregoire,Federico Olmedo17.An Epistemic Approach to Coercion-Resistance for Electronic Voting Protocols.Ralf Kuesters,Tomasz Truderung18.Sphinx:A Compact and Provably Secure Mix Format.George Danezis,Ian Goldberg19.DSybil:Optimal Sybil-Resistance for Recommendation Systems.Haifeng Yu,Chenwei Shi,Michael Kaminsky,Phillip B.Gibbons,Feng Xiao20.Fingerprinting Blank Paper Using Commodity Scanners.William Clarkson,Tim Weyrich,Adam Finkelstein,Nadia Heninger,Alex Halderman,Ed Felten 21.Tempest in a Teapot:Compromising Reflections Revisited.Michael Backes,Tongbo Chen,Markus Duermuth,Hendrik P.A.Lensch,Martin Welk22.Blueprint:Robust Prevention of Cross-site Scripting Attacks for Existing Browsers.Mike Ter Louw,V.N.Venkatakrishnan23.Pretty-Bad-Proxy:An Overlooked Adversary in Browsers’HTTPS Deployments.Shuo Chen,Ziqing Mao,Yi-Min Wang,Ming Zhang24.Secure Content Sniffing for Web Browsers,or How to Stop Papers from Reviewing Themselves.Adam Barth,Juan Caballero,Dawn Song25.It’s No Secret:Measuring the Security and Reliability of Authentication via’Secret’Questions.Stuart Schechter,A.J.Bernheim Brush,Serge Egelman26.Password Cracking Using Probabilistic Context-Free Grammars.Matt Weir,Sudhir Aggarwal,Bill Glodek,Breno de MedeirosUSENIX Security2009promising Electromagnetic Emanations of Wired and Wireless Keyboards.(Outstanding Student Paper)Martin Vuagnoux,Sylvain Pasini2.Peeping Tom in the Neighborhood:Keystroke Eavesdropping on Multi-User Systems.Kehuan Zhang,XiaoFeng Wang3.A Practical Congestion Attack on Tor Using Long Paths,Nathan S.Evans,Roger Dingledine,Christian Grothoff4.Baggy Bounds Checking:An Efficient and Backwards-Compatible Defense against Out-of-Bounds Errors.Periklis Akritidis,Manuel Costa,Miguel Castro,Steven Hand5.Dynamic Test Generation to Find Integer Bugs in x86Binary Linux Programs.David Molnar,Xue Cong Li,David A.Wagner6.NOZZLE:A Defense Against Heap-spraying Code Injection Attacks.Paruj Ratanaworabhan,Benjamin Livshits,Benjamin Zorn7.Detecting Spammers with SNARE:Spatio-temporal Network-level Automatic Reputation Engine.Shuang Hao,Nadeem Ahmed Syed,Nick Feamster,Alexander G.Gray,Sven Krasser8.Improving Tor using a TCP-over-DTLS Tunnel.Joel Reardon,Ian Goldberg9.Locating Prefix Hijackers using LOCK.Tongqing Qiu,Lusheng Ji,Dan Pei,Jia Wang,Jun(Jim)Xu,Hitesh Ballani10.GATEKEEPER:Mostly Static Enforcement of Security and Reliability Policies for JavaScript Code.Salvatore Guarnieri,Benjamin Livshits11.Cross-Origin JavaScript Capability Leaks:Detection,Exploitation,and Defense.Adam Barth,Joel Weinberger,Dawn Song12.Memory Safety for Low-Level Software/Hardware Interactions.John Criswell,Nicolas Geoffray,Vikram Adve13.Physical-layer Identification of RFID Devices.Boris Danev,Thomas S.Heydt-Benjamin,Srdjan CapkunCP:Secure Remote Storage for Computational RFIDs.Mastooreh Salajegheh,Shane Clark,Benjamin Ransford,Kevin Fu,Ari Juels15.Jamming-resistant Broadcast Communication without Shared Keys.Christina Popper,Mario Strasser,Srdjan Capkun16.xBook:Redesigning Privacy Control in Social Networking Platforms.Kapil Singh,Sumeer 
Bhola,Wenke Lee17.Nemesis:Preventing Authentication and Access Control Vulnerabilities in Web Applications.Michael Dalton,Christos Kozyrakis,Nickolai Zeldovich18.Static Enforcement of Web Application Integrity Through Strong Typing.William Robertson,Giovanni Vigna19.Vanish:Increasing Data Privacy with Self-Destructing Data.(Outstanding Student Paper)Roxana Geambasu,Tadayoshi Kohno,Amit A.Levy,Henry M.Levy20.Efficient Data Structures for Tamper-Evident Logging.Scott A.Crosby,Dan S.Wallach21.VPriv:Protecting Privacy in Location-Based Vehicular Services.Raluca Ada Popa,Hari Balakrishnan,Andrew J.Blumberg22.Effective and Efficient Malware Detection at the End Host.Clemens Kolbitsch,Paolo Milani Comparetti,Christopher Kruegel,Engin Kirda,Xiaoyong Zhou,XiaoFeng Wang 23.Protecting Confidential Data on Personal Computers with Storage Capsules.Kevin Borders,Eric Vander Weele,Billy Lau,Atul Prakash24.Return-Oriented Rootkits:Bypassing Kernel Code Integrity Protection Mechanisms.Ralf Hund,Thorsten Holz,Felix C.Freiling25.Crying Wolf:An Empirical Study of SSL Warning Effectiveness.Joshua Sunshine,Serge Egelman,Hazim Almuhimedi,Neha Atri,Lorrie Faith Cranor26.The Multi-Principal OS Construction of the Gazelle Web Browser.Helen J.Wang,Chris Grier,Alex Moshchuk,Samuel T.King,Piali Choudhury,Herman VenterACM CCS20091.Attacking cryptographic schemes based on”perturbation polynomials”.Martin Albrecht,Craig Gentry,Shai Halevi,Jonathan Katz2.Filter-resistant code injection on ARM.Yves Younan,Pieter Philippaerts,Frank Piessens,Wouter Joosen,Sven Lachmund,Thomas Walter3.False data injection attacks against state estimation in electric power grids.Yao Liu,Michael K.Reiter,Peng Ning4.EPC RFID tag security weaknesses and defenses:passport cards,enhanced drivers licenses,and beyond.Karl Koscher,Ari Juels,Vjekoslav Brajkovic,Tadayoshi Kohno5.An efficient forward private RFID protocol.Come Berbain,Olivier Billet,Jonathan Etrog,Henri Gilbert6.RFID privacy:relation between two notions,minimal condition,and efficient construction.Changshe Ma,Yingjiu Li,Robert H.Deng,Tieyan Li7.CoSP:a general framework for computational soundness proofs.Michael Backes,Dennis Hofheinz,Dominique Unruh8.Reactive noninterference.Aaron Bohannon,Benjamin C.Pierce,Vilhelm Sjoberg,Stephanie Weirich,Steve Zdancewicputational soundness for key exchange protocols with symmetric encryption.Ralf Kusters,Max Tuengerthal10.A probabilistic approach to hybrid role mining.Mario Frank,Andreas P.Streich,David A.Basin,Joachim M.Buhmann11.Efficient pseudorandom functions from the decisional linear assumption and weaker variants.Allison B.Lewko,Brent Waters12.Improving privacy and security in multi-authority attribute-based encryption.Melissa Chase,Sherman S.M.Chow13.Oblivious transfer with access control.Jan Camenisch,Maria Dubovitskaya,Gregory Neven14.NISAN:network information service for anonymization networks.Andriy Panchenko,Stefan Richter,Arne Rache15.Certificateless onion routing.Dario Catalano,Dario Fiore,Rosario Gennaro16.ShadowWalker:peer-to-peer anonymous communication using redundant structured topologies.Prateek Mittal,Nikita Borisov17.Ripley:automatically securing web2.0applications through replicated execution.K.Vikram,Abhishek Prateek,V.Benjamin Livshits18.HAIL:a high-availability and integrity layer for cloud storage.Kevin D.Bowers,Ari Juels,Alina Oprea19.Hey,you,get offof my cloud:exploring information leakage in third-party compute clouds.Thomas Ristenpart,Eran Tromer,Hovav Shacham,Stefan Savage20.Dynamic provable data 
possession.C.Christopher Erway,Alptekin Kupcu,Charalampos Papamanthou,Roberto Tamassia21.On cellular botnets:measuring the impact of malicious devices on a cellular network core.Patrick Traynor,Michael Lin,Machigar Ongtang,Vikhyath Rao,Trent Jaeger,Patrick Drew McDaniel,Thomas Porta 22.On lightweight mobile phone application certification.William Enck,Machigar Ongtang,Patrick Drew McDaniel23.SMILE:encounter-based trust for mobile social services.Justin Manweiler,Ryan Scudellari,Landon P.Cox24.Battle of Botcraft:fighting bots in online games with human observational proofs.Steven Gianvecchio,Zhenyu Wu,Mengjun Xie,Haining Wang25.Fides:remote anomaly-based cheat detection using client emulation.Edward C.Kaiser,Wu-chang Feng,Travis Schluessler26.Behavior based software theft detection.Xinran Wang,Yoon-chan Jhi,Sencun Zhu,Peng Liu27.The fable of the bees:incentivizing robust revocation decision making in ad hoc networks.Steffen Reidt,Mudhakar Srivatsa,Shane Balfe28.Effective implementation of the cell broadband engineTM isolation loader.Masana Murase,Kanna Shimizu,Wilfred Plouffe,Masaharu Sakamoto29.On achieving good operating points on an ROC plane using stochastic anomaly score prediction.Muhammad Qasim Ali,Hassan Khan,Ali Sajjad,Syed Ali Khayam30.On non-cooperative location privacy:a game-theoretic analysis.Julien Freudiger,Mohammad Hossein Manshaei,Jean-Pierre Hubaux,David C.Parkes31.Privacy-preserving genomic computation through program specialization.Rui Wang,XiaoFeng Wang,Zhou Li,Haixu Tang,Michael K.Reiter,Zheng Dong32.Feeling-based location privacy protection for location-based services.Toby Xu,Ying Cai33.Multi-party off-the-record messaging.Ian Goldberg,Berkant Ustaoglu,Matthew Van Gundy,Hao Chen34.The bayesian traffic analysis of mix networks.Carmela Troncoso,George Danezis35.As-awareness in Tor path selection.Matthew Edman,Paul F.Syverson36.Membership-concealing overlay networks.Eugene Y.Vasserman,Rob Jansen,James Tyra,Nicholas Hopper,Yongdae Kim37.On the difficulty of software-based attestation of embedded devices.Claude Castelluccia,Aurelien Francillon,Daniele Perito,Claudio Soriente38.Proximity-based access control for implantable medical devices.Kasper Bonne Rasmussen,Claude Castelluccia,Thomas S.Heydt-Benjamin,Srdjan Capkun39.XCS:cross channel scripting and its impact on web applications.Hristo Bojinov,Elie Bursztein,Dan Boneh40.A security-preserving compiler for distributed programs:from information-flow policies to cryptographic mechanisms.Cedric Fournet,Gurvan Le Guernic,Tamara Rezk41.Finding bugs in exceptional situations of JNI programs.Siliang Li,Gang Tan42.Secure open source collaboration:an empirical study of Linus’law.Andrew Meneely,Laurie A.Williams43.On voting machine design for verification and testability.Cynthia Sturton,Susmit Jha,Sanjit A.Seshia,David Wagner44.Secure in-VM monitoring using hardware virtualization.Monirul I.Sharif,Wenke Lee,Weidong Cui,Andrea Lanzi45.A metadata calculus for secure information sharing.Mudhakar Srivatsa,Dakshi Agrawal,Steffen Reidt46.Multiple password interference in text passwords and click-based graphical passwords.Sonia Chiasson,Alain Forget,Elizabeth Stobert,Paul C.van Oorschot,Robert Biddle47.Can they hear me now?:a security analysis of law enforcement wiretaps.Micah Sherr,Gaurav Shah,Eric Cronin,Sandy Clark,Matt Blaze48.English shellcode.Joshua Mason,Sam Small,Fabian Monrose,Greg MacManus49.Learning your identity and disease from research papers:information leaks in genome wide association study.Rui Wang,Yong Fuga Li,XiaoFeng 
Wang,Haixu Tang,Xiao-yong Zhou50.Countering kernel rootkits with lightweight hook protection.Zhi Wang,Xuxian Jiang,Weidong Cui,Peng Ning51.Mapping kernel objects to enable systematic integrity checking.Martim Carbone,Weidong Cui,Long Lu,Wenke Lee,Marcus Peinado,Xuxian Jiang52.Robust signatures for kernel data structures.Brendan Dolan-Gavitt,Abhinav Srivastava,Patrick Traynor,Jonathon T.Giffin53.A new cell counter based attack against tor.Zhen Ling,Junzhou Luo,Wei Yu,Xinwen Fu,Dong Xuan,Weijia Jia54.Scalable onion routing with torsk.Jon McLachlan,Andrew Tran,Nicholas Hopper,Yongdae Kim55.Anonymous credentials on a standard java card.Patrik Bichsel,Jan Camenisch,Thomas Gros,Victor Shouprge-scale malware indexing using function-call graphs.Xin Hu,Tzi-cker Chiueh,Kang G.Shin57.Dispatcher:enabling active botnet infiltration using automatic protocol reverse-engineering.Juan Caballero,Pongsin Poosankam,Christian Kreibich,Dawn Xiaodong Song58.Your botnet is my botnet:analysis of a botnet takeover.Brett Stone-Gross,Marco Cova,Lorenzo Cavallaro,Bob Gilbert,MartinSzydlowski,Richard A.Kemmerer,Christopher Kruegel,Giovanni VignaNDSS20101.Server-side Verification of Client Behavior in Online Games.Darrell Bethea,Robert Cochran and Michael Reiter2.Defeating Vanish with Low-Cost Sybil Attacks Against Large DHTs.S.Wolchok,O.S.Hofmann,N.Heninger,E.W.Felten,J.A.Halderman,C.J.Rossbach,B.Waters,E.Witchel3.Stealth DoS Attacks on Secure Channels.Amir Herzberg and Haya Shulman4.Protecting Browsers from Extension Vulnerabilities.Adam Barth,Adrienne Porter Felt,Prateek Saxena,and Aaron Boodman5.Adnostic:Privacy Preserving Targeted Advertising.Vincent Toubiana,Arvind Narayanan,Dan Boneh,Helen Nissenbaum and Solon Barocas6.FLAX:Systematic Discovery of Client-side Validation Vulnerabilities in Rich Web Applications.Prateek Saxena,Steve Hanna,Pongsin Poosankam and Dawn Song7.Effective Anomaly Detection with Scarce Training Data.William Robertson,Federico Maggi,Christopher Kruegel and Giovanni Vignarge-Scale Automatic Classification of Phishing Pages.Colin Whittaker,Brian Ryner and Marria Nazif9.A Systematic Characterization of IM Threats using Honeypots.Iasonas Polakis,Thanasis Petsas,Evangelos P.Markatos and Spiros Antonatos10.On Network-level Clusters for Spam Detection.Zhiyun Qian,Zhuoqing Mao,Yinglian Xie and Fang Yu11.Improving Spam Blacklisting Through Dynamic Thresholding and Speculative Aggregation.Sushant Sinha,Michael Bailey and Farnam Jahanian12.Botnet Judo:Fighting Spam with Itself.A.Pitsillidis,K.Levchenko,C.Kreibich,C.Kanich,G.M.Voelker,V.Paxson,N.Weaver,S.Savage13.Contractual Anonymity.Edward J.Schwartz,David Brumley and Jonathan M.McCune14.A3:An Extensible Platform for Application-Aware Anonymity.Micah Sherr,Andrew Mao,William R.Marczak,Wenchao Zhou,Boon Thau Loo,and Matt Blaze15.When Good Randomness Goes Bad:Virtual Machine Reset Vulnerabilities and Hedging Deployed Cryptography.Thomas Ristenpart and Scott Yilek16.InvisiType:Object-Oriented Security Policies.Jiwon Seo and Monica m17.A Security Evaluation of DNSSEC with NSEC3.Jason Bau and John Mitchell18.On the Safety of Enterprise Policy Deployment.Yudong Gao,Ni Pan,Xu Chen and Z.Morley Mao19.Where Do You Want to Go Today?Escalating Privileges by Pathname Manipulation.Suresh Chari,Shai Halevi and Wietse Venema20.Joe-E:A Security-Oriented Subset of Java.Adrian Mettler,David Wagner and Tyler Close21.Preventing Capability Leaks in Secure JavaScript Subsets.Matthew Finifter,Joel Weinberger and Adam Barth22.Binary Code Extraction and Interface 
Identification for Security Applications.Juan Caballero,Noah M.Johnson,Stephen McCamant,and Dawn Song23.Automatic Reverse Engineering of Data Structures from Binary Execution.Zhiqiang Lin,Xiangyu Zhang and Dongyan Xu24.Efficient Detection of Split Personalities in Malware.Davide Balzarotti,Marco Cova,Christoph Karlberger,Engin Kirda,Christopher Kruegel and Giovanni VignaOakland20101.Inspector Gadget:Automated Extraction of Proprietary Gadgets from Malware Binaries.Clemens Kolbitsch Thorsten Holz,Christopher Kruegel,Engin Kirda2.Synthesizing Near-Optimal Malware Specifications from Suspicious Behaviors.Matt Fredrikson,Mihai Christodorescu,Somesh Jha,Reiner Sailer,Xifeng Yan3.Identifying Dormant Functionality in Malware Programs.Paolo Milani Comparetti,Guido Salvaneschi,Clemens Kolbitsch,Engin Kirda,Christopher Kruegel,Stefano Zanero4.Reconciling Belief and Vulnerability in Information Flow.Sardaouna Hamadou,Vladimiro Sassone,Palamidessi5.Towards Static Flow-Based Declassification for Legacy and Untrusted Programs.Bruno P.S.Rocha,Sruthi Bandhakavi,Jerry I.den Hartog,William H.Winsborough,Sandro Etalle6.Non-Interference Through Secure Multi-Execution.Dominique Devriese,Frank Piessens7.Object Capabilities and Isolation of Untrusted Web Applications.Sergio Maffeis,John C.Mitchell,Ankur Taly8.TrustVisor:Efficient TCB Reduction and Attestation.Jonathan McCune,Yanlin Li,Ning Qu,Zongwei Zhou,Anupam Datta,Virgil Gligor,Adrian Perrig9.Overcoming an Untrusted Computing Base:Detecting and Removing Malicious Hardware Automatically.Matthew Hicks,Murph Finnicum,Samuel T.King,Milo M.K.Martin,Jonathan M.Smith10.Tamper Evident Microprocessors.Adam Waksman,Simha Sethumadhavan11.Side-Channel Leaks in Web Applications:a Reality Today,a Challenge Tomorrow.Shuo Chen,Rui Wang,XiaoFeng Wang Kehuan Zhang12.Investigation of Triangular Spamming:a Stealthy and Efficient Spamming Technique.Zhiyun Qian,Z.Morley Mao,Yinglian Xie,Fang Yu13.A Practical Attack to De-Anonymize Social Network Users.Gilbert Wondracek,Thorsten Holz,Engin Kirda,Christopher Kruegel14.SCiFI-A System for Secure Face Identification.(Best Paper)Margarita Osadchy,Benny Pinkas,Ayman Jarrous,Boaz Moskovich15.Round-Efficient Broadcast Authentication Protocols for Fixed Topology Classes.Haowen Chan,Adrian Perrig16.Revocation Systems with Very Small Private Keys.Allison Lewko,Amit Sahai,Brent Waters17.Authenticating Primary Users’Signals in Cognitive Radio Networks via Integrated Cryptographic and Wireless Link Signatures.Yao Liu,Peng Ning,Huaiyu Dai18.Outside the Closed World:On Using Machine Learning For Network Intrusion Detection.Robin Sommer,Vern Paxson19.All You Ever Wanted to Know about Dynamic Taint Analysis and Forward Symbolic Execution(but might have been afraid to ask).Thanassis Avgerinos,Edward Schwartz,David Brumley20.State of the Art:Automated Black-Box Web Application Vulnerability Testing.Jason Bau,Elie Bursztein,Divij Gupta,John Mitchell21.A Proof-Carrying File System.Deepak Garg,Frank Pfenning22.Scalable Parametric Verification of Secure Systems:How to Verify Ref.Monitors without Worrying about Data Structure Size.Jason Franklin,Sagar Chaki,Anupam Datta,Arvind Seshadri23.HyperSafe:A Lightweight Approach to Provide Lifetime Hypervisor Control-Flow Integrity.Zhi Wang,Xuxian Jiang24.How Good are Humans at Solving CAPTCHAs?A Large Scale Evaluation.Elie Bursztein,Steven Bethard,John C.Mitchell,Dan Jurafsky,Celine Fabry25.Bootstrapping Trust in Commodity Computers.Bryan Parno,Jonathan M.McCune,Adrian Perrig26.Chip and PIN is Broken.(Best 
Practical Paper)Steven J.Murdoch,Saar Drimer,Ross Anderson,Mike Bond27.Experimental Security Analysis of a Modern Automobile.K.Koscher,A.Czeskis,F.Roesner,S.Patel,T.Kohno,S.Checkoway,D.McCoy,B.Kantor,D.Anderson,H.Shacham,S.Savage 28.On the Incoherencies in Web Browser Access Control Policies.Kapil Singh,Alexander Moshchuk,Helen J.Wang,Wenke Lee29.ConScript:Specifying and Enforcing Fine-Grained Security Policies for JavaScript in the Browser.Leo Meyerovich,Benjamin Livshits30.TaintScope:A Checksum-Aware Directed Fuzzing Tool for Automatic Software Vulnerability Detection.(Best Student Paper)Tielei Wang,Tao Wei,Guofei Gu,Wei Zou31.A Symbolic Execution Framework for JavaScript.Prateek Saxena,Devdatta Akhawe,Steve Hanna,Stephen McCamant,Dawn Song,Feng MaoUSENIX Security20101.Adapting Software Fault Isolation to Contemporary CPU Architectures.David Sehr,Robert Muth,CliffBiffle,Victor Khimenko,Egor Pasko,Karl Schimpf,Bennet Yee,Brad Chen2.Making Linux Protection Mechanisms Egalitarian with UserFS.Taesoo Kim and Nickolai Zeldovich3.Capsicum:Practical Capabilities for UNIX.(Best Student Paper)Robert N.M.Watson,Jonathan Anderson,Ben Laurie,Kris Kennaway4.Structuring Protocol Implementations to Protect Sensitive Data.Petr Marchenko,Brad Karp5.PrETP:Privacy-Preserving Electronic Toll Pricing.Josep Balasch,Alfredo Rial,Carmela Troncoso,Bart Preneel,Ingrid Verbauwhede,Christophe Geuens6.An Analysis of Private Browsing Modes in Modern Browsers.Gaurav Aggarwal,Elie Bursztein,Collin Jackson,Dan Boneh7.BotGrep:Finding P2P Bots with Structured Graph Analysis.Shishir Nagaraja,Prateek Mittal,Chi-Yao Hong,Matthew Caesar,Nikita Borisov8.Fast Regular Expression Matching Using Small TCAMs for Network Intrusion Detection and Prevention Systems.Chad R.Meiners,Jignesh Patel,Eric Norige,Eric Torng,Alex X.Liu9.Searching the Searchers with SearchAudit.John P.John,Fang Yu,Yinglian Xie,Martin Abadi,Arvind Krishnamurthy10.Toward Automated Detection of Logic Vulnerabilities in Web Applications.Viktoria Felmetsger,Ludovico Cavedon,Christopher Kruegel,Giovanni Vigna11.Baaz:A System for Detecting Access Control Misconfigurations.Tathagata Das,Ranjita Bhagwan,Prasad Naldurg12.Cling:A Memory Allocator to Mitigate Dangling Pointers.Periklis Akritidis13.ZKPDL:A Language-Based System for Efficient Zero-Knowledge Proofs and Electronic Cash.Sarah Meiklejohn,C.Chris Erway,Alptekin Kupcu,Theodora Hinkle,Anna Lysyanskaya14.P4P:Practical Large-Scale Privacy-Preserving Distributed Computation Robust against Malicious Users.Yitao Duan,John Canny,Justin Zhan,15.SEPIA:Privacy-Preserving Aggregation of Multi-Domain Network Events and Statistics.Martin Burkhart,Mario Strasser,Dilip Many,Xenofontas Dimitropoulos16.Dude,Where’s That IP?Circumventing Measurement-based IP Geolocation.Phillipa Gill,Yashar Ganjali,Bernard Wong,David Lie17.Idle Port Scanning and Non-interference Analysis of Network Protocol Stacks Using Model Checking.Roya Ensafi,Jong Chun Park,Deepak Kapur,Jedidiah R.Crandall18.Building a Dynamic Reputation System for DNS.Manos Antonakakis,Roberto Perdisci,David Dagon,Wenke Lee,Nick Feamster19.Scantegrity II Municipal Election at Takoma Park:The First E2E Binding Governmental Election with Ballot Privacy.R.Carback,D.Chaum,J.Clark,J.Conway,A.Essex,P.S.Herrnson,T.Mayberry,S.Popoveniuc,R.L.Rivest,E.Shen,A.T.Sherman,P.L.Vora20.Acoustic Side-Channel Attacks on Printers.Michael Backes,Markus Durmuth,Sebastian Gerling,Manfred Pinkal,Caroline Sporleder21.Security and Privacy Vulnerabilities of In-Car Wireless Networks:A Tire Pressure 
Monitoring System Case Study.Ishtiaq Rouf,Rob Miller,Hossen Mustafa,Travis Taylor,Sangho Oh,Wenyuan Xu,Marco Gruteser,Wade Trappe,Ivan Seskar 22.VEX:Vetting Browser Extensions for Security Vulnerabilities.(Best Paper)Sruthi Bandhakavi,Samuel T.King,P.Madhusudan,Marianne Winslett23.Securing Script-Based Extensibility in Web Browsers.Vladan Djeric,Ashvin Goel24.AdJail:Practical Enforcement of Confidentiality and Integrity Policies on Web Advertisements.Mike Ter Louw,Karthik Thotta Ganesh,V.N.Venkatakrishnan25.Realization of RF Distance Bounding.Kasper Bonne Rasmussen,Srdjan Capkun26.The Case for Ubiquitous Transport-Level Encryption.Andrea Bittau,Michael Hamburg,Mark Handley,David Mazieres,Dan Boneh27.Automatic Generation of Remediation Procedures for Malware Infections.Roberto Paleari,Lorenzo Martignoni,Emanuele Passerini,Drew Davidson,Matt Fredrikson,Jon Giffin,Somesh Jha28.Re:CAPTCHAs-Understanding CAPTCHA-Solving Services in an Economic Context.Marti Motoyama,Kirill Levchenko,Chris Kanich,Damon McCoy,Geoffrey M.Voelker,Stefan Savage29.Chipping Away at Censorship Firewalls with User-Generated Content.Sam Burnett,Nick Feamster,Santosh Vempala30.Fighting Coercion Attacks in Key Generation using Skin Conductance.Payas Gupta,Debin GaoACM CCS20101.Security Analysis of India’s Electronic Voting Machines.Scott Wolchok,Erik Wustrow,J.Alex Halderman,Hari Prasad,Rop Gonggrijp2.Dissecting One Click Frauds.Nicolas Christin,Sally S.Yanagihara,Keisuke Kamataki3.@spam:The Underground on140Characters or Less.Chris Grier,Kurt Thomas,Vern Paxson,Michael Zhang4.HyperSentry:Enabling Stealthy In-context Measurement of Hypervisor Integrity.Ahmed M.Azab,Peng Ning,Zhi Wang,Xuxian Jiang,Xiaolan Zhang,Nathan C.Skalsky5.Trail of Bytes:Efficient Support for Forensic Analysis.Srinivas Krishnan,Kevin Z.Snow,Fabian Monrose6.Survivable Key Compromise in Software Update Systems.Justin Samuel,Nick Mathewson,Justin Cappos,Roger Dingledine7.A Methodology for Empirical Analysis of the Permission-Based Security Models and its Application to Android.David Barrera,H.Gunes Kayacik,Paul C.van Oorschot,Anil Somayaji8.Mobile Location Tracking in Metropolitan Areas:malnets and others.Nathanial Husted,Steve Myers9.On Pairing Constrained Wireless Devices Based on Secrecy of Auxiliary Channels:The Case of Acoustic Eavesdropping.Tzipora Halevi,Nitesh Saxena10.PinDr0p:Using Single-Ended Audio Features to Determine Call Provenance.Vijay A.Balasubramaniyan,Aamir Poonawalla,Mustaque Ahamad,Michael T.Hunter,Patrick Traynor11.Building Efficient Fully Collusion-Resilient Traitor Tracing and Revocation Schemes.Sanjam Garg,Abishek Kumarasubramanian,Amit Sahai,Brent Waters12.Algebraic Pseudorandom Functions with Improved Efficiency from the Augmented Cascade.Dan Boneh,Hart Montgomery,Ananth Raghunathan13.Practical Leakage-Resilient Pseudorandom Generators.Yu Yu,Francois-Xavier Standaert,Olivier Pereira,Moti Yung14.Practical Leakage-Resilient Identity-Based Encryption from Simple Assumptions.Sherman S.M.Chow,Yevgeniy Dodis,Yannis Rouselakis,Brent Waters15.Testing Metrics for Password Creation Policies by Attacking Large Sets of Revealed Passwords.Matt Weir,Sudhir Aggarwal,Michael Collins,Henry Stern16.The Security of Modern Password Expiration:An Algorithmic Framework and Empirical Analysis.Yinqian Zhang,Fabian Monrose,Michael K.Reiter17.Attacks and Design of Image Recognition CAPTCHAs.Bin Zhu,JeffYan,Chao Yang,Qiujie Li,Jiu Liu,Ning Xu,Meng Yi18.Robusta:Taming the Native Beast of the JVM.Joseph Siefers,Gang Tan,Greg Morrisett19.Retaining Sandbox 
Containment Despite Bugs in Privileged Memory-Safe Code.Justin Cappos,Armon Dadgar,JeffRasley,Justin Samuel,Ivan Beschastnikh,Cosmin Barsan,Arvind Krishnamurthy,Thomas Anderson20.A Control Point for Reducing Root Abuse of File-System Privileges.Glenn Wurster,Paul C.van Oorschot21.Modeling Attacks on Physical Unclonable Functions.Ulrich Ruehrmair,Frank Sehnke,Jan Soelter,Gideon Dror,Srinivas Devadas,Juergen Schmidhuber22.Dismantling SecureMemory,CryptoMemory and CryptoRF.Flavio D.Garcia,Peter van Rossum,Roel Verdult,Ronny Wichers Schreur23.Attacking and Fixing PKCS#11Security Tokens.Matteo Bortolozzo,Matteo Centenaro,Riccardo Focardi,Graham Steel24.An Empirical Study of Privacy-Violating Information Flows in JavaScript Web Applications.Dongseok Jang,Ranjit Jhala,Sorin Lerner,Hovav Shacham25.DIFC Programs by Automatic Instrumentation.William Harris,Somesh Jha,Thomas Reps26.Predictive Black-box Mitigation of Timing Channels.Aslan Askarov,Danfeng Zhang,Andrew Myers27.In Search of an Anonymous and Secure Lookup:Attacks on Structured Peer-to-peer Anonymous Communication Systems.Qiyan Wang,Prateek Mittal,Nikita Borisov28.Recruiting New Tor Relays with BRAIDS.Rob Jansen,Nicholas Hopper,Yongdae Kim29.An Improved Algorithm for Tor Circuit Scheduling.Can Tang,Ian Goldberg30.Dissent:Accountable Anonymous Group Messaging.Henry Corrigan-Gibbs,Bryan Ford31.Abstraction by Set-Membership—Verifying Security Protocols and Web Services with Databases.Sebastian Moedersheim。

171105_NASAC17_Program Repair (Jifeng Xuan, Wuhan University)

From Bug to Debug: defect analysis, defect repair, and test cases (Test Case).
The goal of program repair

The current state of program repair: manual repair.
Developers write code patches by hand and check the patches' correctness by hand.
Manual program repair faces enormous challenges:
repairing programs is hard, labor costs are high, many new errors are introduced, and the potential risks are large.
Software now pervades every domain.
A rough classification we have previously proposed:
• Search-based program repair (ICSE 2009, TSE 2012)
• Exhaustive program repair (ICST 2010, ISSTA 2015)
• Constraint-solving-based program repair (ICSE 2013, TSE 2017)

Jifeng Xuan, et al. Progress on automatic program repair methods. Journal of Software, 27(4), 2016.
Program repair - inference
(Framework stages: try, learn, infer, correct)

Patch output: automatic output inference - dynamically infer the runtime values the patch should produce.
Patch input: dynamic context encoding - encode object-oriented semantics as dynamic values.
Object semantics: inheritance, polymorphism, object state, null references, runtime values, ...
Program repair - inference

Patch output: automatic output inference - dynamically infer the runtime values the patch should produce. Patch input: dynamic context encoding - encode object-oriented semantics as dynamic values. Patch synthesis: constraint solving over patches - turn patch generation into a constraint-solving problem.

[Slide figure: example runtime values (0, 3, 2^30, 2^29, 2^30+2^29) paired with Pass/Fail test outcomes, illustrating output inference.]
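To illustrate the output-inference idea from this slide, the following is a minimal, self-contained Java sketch (our own illustrative code, not the authors' tool): for a suspicious boolean condition, it tries both runtime values on every test and records which value makes each failing test pass, and those recorded values then act as constraints on the synthesized patch.

```java
import java.util.ArrayList;
import java.util.List;

/** Minimal illustration of output inference for a suspicious boolean condition. */
public class OutputInference {

    /** A test case: an input value plus the expected result of the program under repair. */
    record Test(int input, int expected) { }

    /**
     * The buggy program, parameterized by the value the suspicious condition
     * should evaluate to. In a real tool this value would be forced at runtime
     * by instrumentation rather than passed explicitly.
     */
    static int programUnderRepair(int x, boolean conditionValue) {
        // The original (buggy) condition is replaced by a value chosen by the
        // inference engine, which asks: what *should* the condition return here?
        if (conditionValue) {
            return x + 1;
        }
        return x - 1;
    }

    /** For each test, find which condition value (if any) makes the test pass. */
    static List<String> inferExpectedConditionValues(List<Test> tests) {
        List<String> constraints = new ArrayList<>();
        for (Test t : tests) {
            for (boolean candidate : new boolean[] { true, false }) {
                if (programUnderRepair(t.input(), candidate) == t.expected()) {
                    // Record a constraint "condition(input) == candidate" for the synthesizer.
                    constraints.add("condition(" + t.input() + ") == " + candidate);
                    break;
                }
            }
        }
        return constraints;
    }

    public static void main(String[] args) {
        List<Test> tests = List.of(new Test(3, 4), new Test(-2, -3), new Test(0, -1));
        // The constraints printed below would be handed to a constraint solver that
        // synthesizes a concrete condition (here, e.g., "x >= 1") satisfying all of them.
        inferExpectedConditionValues(tests).forEach(System.out::println);
    }
}
```

A constraint solver, or even plain enumeration over a small grammar of conditions, can then synthesize a concrete predicate consistent with all recorded values, which is the "turn patch generation into constraint solving" step named on the slide.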




A test-based program repair framework
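As a minimal sketch of a test-based repair framework of this general shape (try / learn / infer / correct), the code below generates candidate patches, runs the test suite on each, and keeps the first candidate that passes all tests; the interfaces are our own illustrative assumptions rather than the authors' implementation.

```java
import java.util.List;
import java.util.Optional;

/** Skeleton of a test-based (generate-and-validate) program repair loop. */
public class TestBasedRepair {

    /** Abstract view of the program under repair. */
    interface Program { }

    /** A candidate patch that can be applied to produce a patched program. */
    interface Patch {
        Program apply(Program original);
    }

    /** A test that can be executed against a (patched) program. */
    interface TestCase {
        boolean passes(Program program);
    }

    /** Produces candidate patches, e.g. by search, exhaustive enumeration, or constraint solving. */
    interface PatchGenerator {
        List<Patch> candidates(Program buggy, List<TestCase> failingTests);
    }

    static Optional<Program> repair(Program buggy, List<TestCase> tests, PatchGenerator generator) {
        List<TestCase> failing = tests.stream().filter(t -> !t.passes(buggy)).toList();
        for (Patch patch : generator.candidates(buggy, failing)) {            // try
            Program patched = patch.apply(buggy);
            boolean allPass = tests.stream().allMatch(t -> t.passes(patched)); // validate
            if (allPass) {
                return Optional.of(patched);  // a plausible patch was found
            }
        }
        return Optional.empty();              // no candidate passed the whole suite
    }
}
```

Note that a patch passing the whole test suite is only plausible, not necessarily correct, which is why a later correction/validation stage still matters.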


A Petri Net-based Model for Web Service CompositionAbstractThe Internet is going through several major changes. It has become a vehicle of Web services rather than just a reposi-tory of information. Many organizations are putting their core business competencies on the Internet as a collection of Web services. An important challenge is to integrate them to cre-ate new value-added Web services in ways that could never be foreseen forming what is known as Business-to-Business(B2B) services. Therefore, there is a need for modeling techniques and tools for reliable Web service composition. In this paper, we propose a Petri net-based algebra, used to model control flows, as a necessary constituent of reliable Web service com-position process. This algebra is expressive enough to capture the semantics of complex Web service combinations.Keywords: Web services, Petri net, Web service com-position.1 IntroductionIn order to survive the massive competition created by the new online economy, many organizations are rushing to put their core business competencies on the Internet as a collection of Web services for more automation and global visibility. The concept of Web service has become recently very popular, however, there is no clear agreed upon definition yet. Typical examples of Web services include on-line travel reservations, procurement, customer relationship management(CRM), billing, accounting, and supply chain. In this paper, by Web service(or simply service)we mean an autonomous software application or component, i.e., a semantically well defined functionality, uniquely identified by a Uniform Resource Locator(URL).The ability to efficiently and effectively share services on the Web is a critical step towards the development of the new online economy driven by the Business-to-Business(B2B)e-commerce. Existing enterprises would form alliances and integrate their services to share costs, skills, and resources in offering a value-added service to form what is known as B2B services. Briefly stated, a B2B service is a conglomeration of mostly outsourced services working in tandem to achieve the business goals of the desired enterprise. An example of an integrated B2B service is a financial management system that uses payroll, tax preparation, and cash management as components. The component services might all be outsourced to business partners.To date, the development of B2B services has been largely ad-hoc, time-consuming, and requiring enormous effort of low-level programming. This task would obviously be tedious and hardly scalable because of the volatility and size of the Web. As services are most likely autonomous and heterogeneous, building a B2B service with appropriate inter-service coordination would be difficult. More importantly, the fast and dynamic composition of services is an essential requirement for organizations to adapt their business practices to the dynamic nature of the Web.As pointed out before, Internet and Web technologies have opened new ways of doing business more cheaply and efficiently. However, for B2B e-commerce to really take off, there is a need for effective and efficient means to abstract, compose, analyze, and evolve Web services in an appropriate time-frame. Ad-hoc and proprietary solutions on the one hand, and lack of a canonical model for modeling and managing Web services on the other hand, have largely hampered a faster pace in deploying B2B services. 
Current technologies based on Universal Description, Discovery, and Integration (UDDI), Web Service Description Language (WSDL), and Simple Object Access Protocol (SOAP) do not realize complex Web service combinations, hence providing limited support in service composition. SOAP is a standard for exchanging XML-formatted messages over HTTP between applications. WSDL is a general purpose XML language for describing what a Web service does, where it resides, and how to invoke it. UDDI is a standard for publishing information about Web services in a global registry as well as for Web service discovery.
In this paper, we propose a Petri net-based algebra for modeling Web service control flows. The model is expressive enough to capture the semantics of complex service combinations and their respective specificities. The obtained framework enables declarative composition of Web services. We show that the defined algebra caters for the creation of dynamic and transient relationships among services. The remainder of this paper is organized as follows. Web service modeling and specification using Petri nets are presented in Section 2. Section 3 is devoted to the algebra for composing Web services and its Petri net-based formal semantics. Section 4 discusses the analysis and verification of Web services. Section 5 gives a brief overview of related work. Finally, Section 6 provides some concluding remarks.

2 Web Services as Petri Nets
Petri nets (Petri 1962, Peterson 1981) are a well-founded process modeling technique with formal semantics. They have been used to model and analyze several types of processes including protocols, manufacturing systems, and business processes. A Petri net is a directed, connected, and bipartite graph in which each node is either a place or a transition. Tokens occupy places. When there is at least one token in every place connected to a transition, we say that the transition is enabled. Any enabled transition may fire, removing one token from every input place and depositing one token in each output place. The use of visual modeling techniques such as Petri nets in the design of complex Web services is justified by many reasons. For example, visual representations provide a high-level yet precise language which allows one to express and reason about concepts at their natural level of abstraction.
A Web service behavior is basically a partially ordered set of operations. Therefore, it is straightforward to map it into a Petri net. Operations are modeled by transitions and the state of the service is modeled by places. The arrows between places and transitions are used to specify causal relations.
We can categorise Web services into material services, information services, and material/information services, the mixture of both. We assume that a Petri net which represents the behavior of a service contains one input place (i.e., a place with no incoming arcs) and one output place (i.e., a place with no outgoing arcs). A Petri net with one input place, for absorbing information, and one output place, for emitting information, will facilitate the definition of the composition operators and the analysis as well as the verification of certain properties. At any given time, a Web service can be in one of the following states: NotInstantiated, Ready, Running, Suspended, or Completed. When a Web service is in the Ready state, this means that a token is in its corresponding input place, whereas the Completed state means that there is a token in the corresponding output place.
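To make the mapping concrete, the following is a minimal executable sketch of the encoding just described, written here in Prolog; it is not taken from the paper, and all place, transition and predicate names are illustrative. A service is a net whose Ready, Running and Completed states correspond to a token in its ready, running and completed places, and the standard enabling and firing rules are spelled out directly.

% A marking is a list of Place-Tokens pairs; transitions are facts listing
% their input and output places.  Names below are illustrative only.
transition(start,    [ready],   [running]).
transition(complete, [running], [completed]).

% enabled(+T, +Marking): every input place of T holds at least one token.
enabled(T, Marking) :-
    transition(T, Ins, _),
    forall(member(P, Ins), (member(P-N, Marking), N > 0)).

% fire(+T, +Marking0, -Marking): remove one token from every input place
% and deposit one token in each output place.
fire(T, Marking0, Marking) :-
    enabled(T, Marking0),
    transition(T, Ins, Outs),
    foldl(take_token, Ins, Marking0, M1),
    foldl(put_token, Outs, M1, Marking).

take_token(P, M0, [P-N|Rest]) :- select(P-N0, M0, Rest), N is N0 - 1.
put_token(P, M0, [P-N|Rest])  :- select(P-N0, M0, Rest), !, N is N0 + 1.
put_token(P, M0, [P-1|M0]).

% A service in the Ready state has one token in its input place:
% ?- fire(start, [ready-1, running-0, completed-0], M1), fire(complete, M1, M2).
% M1 = [running-1, ready-0, completed-0], M2 = [completed-1, running-0, ready-0].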
3 Composing Web Services
A Web service has a specific task to perform and may depend on other Web services, hence being composite. For example, a company that is interested in selling books could focus on this aspect while outsourcing other aspects such as payment and shipment. The composition of two or more services generates a new service providing both the original individual behavioral logic and a new collaborative behavior for carrying out a new composite task. This means that existing services are able to cooperate although the cooperation was not designed in advance. Service composition could be static (service components interact with each other in a pre-negotiated manner) or dynamic (they discover each other and negotiate on the fly). In this section we present an algebra that allows the creation of new value-added Web services using existing ones as building blocks. Sequence, alternative, iteration, and arbitrary sequence are typical constructs specified in the control flow. More elaborate operators, dealt with in this paper, are parallel with communication, discriminator, selection, and refinement. We also give a formal semantics to the proposed algebra in terms of Petri nets as well as some nice algebraic properties.

3.2 Formal Semantics
In this section, we give a formal definition, in terms of Petri nets, of the composition operators. It is important to note that service composition, as will be described below, applies to syntactically different services. This is due to the fact that the places and transitions of the component services must be disjoint for proper composition. However, a service may be composed with itself. Typically, this situation occurs when services describe variants of the same operation (e.g., normal execution and exceptional situations) or, for instance, if a single supplier offers two different goods, the requests may be handled independently, as though they were from two different suppliers. In this case, the overlapping must be resolved prior to composition. This can be accomplished by renaming the sets P and T of one of the equal services. The two services remain equal up to isomorphism on the names of transitions and places. Note also that, in case of silent operations, we represent graphically the corresponding transitions as black rectangles.

3.2.1 Basic Constructs
Empty Service. The empty service is a service that performs no operation. It is used for technical and theoretical reasons.
Sequence. The sequence operator allows the execution of two services S1 and S2 in sequence, that is, one after another. S1 must be completed before S2 can start. This is typically the case when a service depends on the output of the previous service. For example, the service Payment is executed after the completion of the service Delivery.
Alternative. The alternative operator permits, given two services S1 and S2, to model the execution of either S1 or S2, but not both. For instance, the assess_claim service is followed by either the service indemnify_customer or the service convoke_customer.
Arbitrary Sequence. The arbitrary sequence operator specifies the execution of two services that must not be executed concurrently, that is, given two services S1 and S2, we have either S1 followed by S2 or S2 followed by S1. Suppose, for instance, that there are two goods; then acquiring a single good is useless unless the rest of the conjuncts can also be acquired. Moreover, without a deadline, there is no benefit in making the two requests in parallel, and doing so may lead to unnecessary costs if one of the conjuncts is unavailable or unobtainable. Therefore, the optimal execution is necessarily an arbitrary serial ordering of requests to suppliers.
Iteration. The iteration operator models the execution of a service followed a certain number of times by itself. Typical examples where iteration is required are communication and quality control, where services are executed more than once.

3.2.2 Advanced Constructs
Parallelism with Communication. The parallel operator represents the concurrent execution of two services. Concurrent services may synchronize and exchange information.
Discriminator. Web services are unreliable; they have a relatively high probability of failing or of being unacceptably slow. Delays of only a few seconds could result in service providers losing significant sums of money or disappointing their customers. Different service providers may provide the same or similar services. Therefore, it should be possible to combine unreliable services to obtain more "reliable" services. The discriminator operator is used, for instance, to place redundant orders to different suppliers offering the same service to increase reliability. The first to perform the requested service triggers the subsequent service, and all other late responses are ignored for the rest of the composite service process.
Selection. Relying on a single supplier puts a company at its mercy. To reduce risk, a company should maintain relationships with multiple suppliers. These suppliers may, e.g., charge different prices, propose different delivery dates and times, and have different reliabilities. The selection construct allows one to choose the best service provider, using a ranking criterion, among several competing suppliers to outsource a particular operation.
Refinement. The refinement construct, in which operations are replaced by more detailed non-empty services, is used to introduce additional component services into a service. Refinement is the transformation of a design from a high-level abstract form to a lower-level, more concrete form, hence allowing hierarchical modeling.
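To illustrate how these operators act on the one-input-place/one-output-place nets assumed in Section 2, here is a small Prolog sketch of the sequence operator in the same illustrative encoding as above (ours, not the paper's notation): the composite net links the output place of the first service to the input place of the second through a silent transition, so Payment can only start once Delivery has completed.

% A service is net(InputPlace, OutputPlace, Transitions), with each
% transition written t(Name, InPlaces, OutPlaces).  Illustrative names only.
service(delivery, net(d_in, d_out, [t(deliver, [d_in], [d_out])])).
service(payment,  net(p_in, p_out, [t(pay,     [p_in], [p_out])])).

% sequence(+S1, +S2, -S): S behaves as S1 followed by S2.  A silent
% transition moves the token from S1's output place to S2's input place.
sequence(net(In1, Out1, Ts1), net(In2, Out2, Ts2), net(In1, Out2, Ts)) :-
    append(Ts1, [t(silent_seq, [Out1], [In2]) | Ts2], Ts).

% ?- service(delivery, D), service(payment, P), sequence(D, P, Composite).
% Composite = net(d_in, p_out, [t(deliver, [d_in], [d_out]),
%                               t(silent_seq, [d_out], [p_in]),
%                               t(pay, [p_in], [p_out])]).

The alternative operator can be sketched in the same style, e.g. with a fresh input place from which two silent transitions lead to the respective input places of S1 and S2 (and dually for the output place), so that only one branch ever receives the token.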

DELVER

DELVER: Real-Time, ExtensibleAlgorithms出处AbstractPeer-to-peer communication and semaphores have garnered minimal interest from both computational biologists and mathematicians in the last several years. In fact, few cyberinformaticians would disagree with the investigation of rasterization. In this position paper, we investigate how write-back caches can be applied to the emulation of object-oriented languages.Table of Contents1) Introduction2) Design3) Implementation4) Experimental Evaluation∙ 4.1) Hardware and Software Configuration∙ 4.2) Experiments and Results5) Related Work6) Conclusion1 IntroductionThe refinement of linked lists is a confusing question. To put this in perspective, consider the fact that foremost researchers always use suffix trees to fulfill this ambition. A practical problem in steganography is the exploration of operating systems. To what extent can redundancy be enabled to fulfill this intent?We examine how superpages can be applied to the synthesis of the Turing machine. Our heuristic creates Internet QoS. Nevertheless, the study of I/O automata might not be the panacea that end-users expected. Combinedwith Lamport clocks, it harnesses a novel framework for the synthesis of superblocks.To our knowledge, our work in this position paper marks the first framework improved specifically for flip-flop gates. Existing semantic and lossless applications use multimodal technology to allow RPCs. Existing electronic and cacheable algorithms use the refinement of model checking to request event-driven models [,,,]. It should be noted that our application is maximally efficient. This combination of properties has not yet been studied in previous work.Our main contributions are as follows. To begin with, we verify that though neural networks can be made large-scale, authenticated, and replicated, virtual machines can be made interactive, distributed, and homogeneous. Continuing with this rationale, we verify that Boolean logic and the transistor are largely incompatible. Of course, this is not always the case.The roadmap of the paper is as follows. To begin with, we motivate the need for linked lists. Furthermore, we prove the robust unification of lambda calculus and reinforcement learning []. We place our work in context with the related work in this area. Ultimately, we conclude.2 DesignOur methodology relies on the confirmed methodology outlined in the recent acclaimed work by Johnson in the field of linear-time theory. Consider the early methodology by Kumar and Jackson; our framework is similar, but will actually achieve this mission. The model for DELVER consists of four independent components: interrupts, multimodal modalities, "smart" configurations, and self-learning configurations. This may or may not actually hold in reality. Rather than synthesizing compact epistemologies, DELVER chooses to study active networks.Figure 1: DELVER's relational provision.DELVER relies on the extensive design outlined in the recent well-known work by G. Watanabe et al. in the field of networking. This is a significant property of our methodology. Consider the early methodology by Alan Turing et al.; our methodology is similar, but will actually solve this grand challenge. Similarly, Figure 1details DELVER's client-server synthesis. Thus, the model that our algorithm uses is feasible.Figure 2: The architectural layout used by DELVER.Reality aside, we would like to evaluate a design for how DELVER might behave in theory. 
Furthermore, we assume that operating systems and the producer-consumer problem can synchronize to surmount this problem. While information theorists always assume the exact opposite, our methodology depends on this property for correct behavior. The question is, will DELVER satisfy all of these assumptions? The answer is yes.3 ImplementationThough many skeptics said it couldn't be done (most notably K. Ramanathan et al.), we construct a fully-working version of our system. DELVER requires root access in order to learn self-learning symmetries. DELVER is composed of a collection of shell scripts, a virtual machine monitor, and a virtual machine monitor. It was necessary to cap the instruction rate used by our system to 2498 teraflops. One is not able to imagine other solutions to the implementation that would have made implementing it much simpler.4 Experimental EvaluationOur evaluation represents a valuable research contribution in and of itself. Our overall evaluation methodology seeks to prove three hypotheses: (1) that e-commerce no longer influences complexity; (2) that interrupt rate stayed constant across successive generations of Apple Newtons; and finally (3) that an approach's user-kernel boundary is not as important as an application's legacy ABI when maximizing median power. An astute reader would now infer that for obvious reasons, we have decided not to refine effective response time. Along these same lines, we are grateful for discrete 16 bit architectures; without them, we could not optimize for usability simultaneously with usability constraints. Third, our logic follows a new model: performance matters only as long as security constraints take a back seat to popularity of replication. We hope to make clear that our patching the ABI of our distributed system is the key to our evaluation method.4.1 Hardware and Software ConfigurationFigure 3: Note that seek time grows as block size decreases - a phenomenon worthemulating in its own right.Our detailed performance analysis necessary many hardware modifications. We scripted a simulation on MIT's 2-node overlay network to quantify T. U. Qian's synthesis of neural networks in 1999 []. To start off with, physicists added 2Gb/s of Ethernet access to our system to understand the effective RAM speed of DARPA's human test subjects. We removed 100MB of flash-memory from the NSA's metamorphic overlay network to understandtechnology. Had we prototyped our trainable overlay network, as opposed to emulating it in courseware, we would have seen duplicated results. Along these same lines, we removed more USB key space from CERN's network to prove collectively linear-time methodologies's impact on the work of Soviet chemist Ivan Sutherland. Next, we quadrupled the hit ratio of our human test subjects to consider information. On a similar note, we removed 25 CPUs from our 10-node overlay network to understand our mobile telephones []. In the end, we removed some NV-RAM from our human test subjects to disprove the paradox of cryptoanalysis.Figure 4: Note that latency grows as work factor decreases - a phenomenon worthimproving in its own right.We ran DELVER on commodity operating systems, such as Ultrix Version 0.8.0, Service Pack 9 and AT&T System V. all software components were hand assembled using Microsoft developer's studio linked against trainable libraries for enabling SCSI disks. All software was hand assembled using GCC 8.0 with the help of Richard Stearns's libraries for provably exploring vacuum tubes. 
Our purpose here is to set the record straight. All software components were linked using Microsoft developer's studio built on James Gray's toolkit for lazily enabling average throughput. We made all of our software is available under a very restrictive license.Figure 5: These results were obtained by Adi Shamir []; we reproduce them here for clarity. Even though such a hypothesis is continuously a robust ambition, it largely conflicts with the need to provide cache coherence to hackers worldwide.4.2 Experiments and ResultsFigure 6: These results were obtained by Lee et al. []; we reproduce them here forclarity.Figure 7: The median interrupt rate of our application, as a function of hit ratio[].Is it possible to justify the great pains we took in our implementation? Yes. We ran four novel experiments: (1) we measured optical drive throughput as a function of ROM space on a Nintendo Gameboy; (2) we ran 08 trials with a simulated RAID array workload, and compared results to our bioware emulation; (3) we measured E-mail and DNS latency on our system; and (4) we compared median popularity of rasterization on the FreeBSD, L4 and L4 operating systems. All of these experiments completed without paging or unusual heat dissipation. Such a claim might seem counterintuitive but fell in line with our expectations.We first analyze the first two experiments as shown in Figure 5 [,,]. Operator error alone cannot account for these results. These instruction rate observations contrast to those seen in earlier work [], such as B. Harris's seminal treatise on I/O automata and observed ROM throughput. This is essential to the success of our work. Note that 4 bit architectures have smoother clock speed curves than do modified journaling file systems.We next turn to the first two experiments, shown in Figure 4. This is essential to the success of our work. Note the heavy tail on the CDF in Figure 5, exhibiting amplified median time since 2001. the key to Figure 3is closing the feedback loop; Figure 6shows how our heuristic's effective ROM speed does not converge otherwise. Next, note the heavy tail on the CDF in Figure 4, exhibiting amplified median instruction rate.Lastly, we discuss the first two experiments. The many discontinuities in the graphs point to muted median latency introduced with our hardware upgrades. Note how emulating Byzantine fault tolerance rather than emulating them in middleware produce smoother, more reproducible results.Furthermore, error bars have been elided, since most of our data points fell outside of 14 standard deviations from observed means.5 Related WorkWe now consider related work. Thomas [] developed a similar application, unfortunately we verified that our methodology is in Co-NP [,,,,]. Unlike many previous methods [], we do not attempt to control or provide the producer-consumer problem []. DELVER is broadly related to work in the field of hardware and architecture by Suzuki [], but we view it from a new perspective: hierarchical databases [,,,,]. Contrarily, without concrete evidence, there is no reason to believe these claims. Unfortunately, these solutions are entirely orthogonal to our efforts.Instead of architecting read-write configurations [,], we achieve this goal simply by visualizing cooperative theory []. We had our method in mind before Raj Reddy published the recent well-known work on reliable models. Our design avoids this overhead. Unlike many prior approaches, we do not attempt to synthesize or emulate the construction of Scheme []. 
DELVER represents a significant advance above this work. These frameworks typically require that the memory bus [] and interrupts can collaborate to accomplish this intent, and we validated here that this, indeed, is the case.We now compare our method to previous certifiable technology methods. Further, instead of architecting the development of flip-flop gates [,], we accomplish this purpose simply by controlling spreadsheets []. DELVER represents a significant advance above this work. Unlike many previous approaches, we do not attempt to develop or simulate the deployment of journaling file systems []. Instead of emulating distributed configurations [], we achieve this purpose simply by improving the partition table. All of these approaches conflict with our assumption that wireless technology and highly-available methodologies are structured [,].6 ConclusionIn our research we introduced DELVER, a methodology for superpages. Along these same lines, we also explored a framework for model checking []. Weused compact epistemologies to argue that kernels and theproducer-consumer problem can connect to overcome this riddle. We expect to see many researchers move to visualizing DELVER in the very near future.。

计算机考研--复试英语 (Computer Science Postgraduate Entrance Exam: Second-Round English)

计算机考研--复试英语Abstract 1In recent years, machine learning has developed rapidly, especially in the deep learning, where remarkable achievements are obtained in image, voice, natural language processing and other fields. The expressive ability of machine learning algorithm has been greatly improved; however, with the increase of model complexity, the interpretability of computer learning algorithm has deteriorated. So far, the interpretability of machine learning remains as a challenge. The trained models via algorithms are regarded as black boxes, which seriously hamper the use of machine learning in certain fields, such as medicine, finance and so on. Presently, only a few works emphasis on the interpretability of machine learning. Therefore, this paper aims to classify, analyze and compare the existing interpretable methods; on the one hand, it expounds the definition and measurement of interpretability, while on the other hand, for the different interpretable objects, it summarizes and analyses various interpretable techniques of machine learning from three aspects: model understanding, prediction result interpretation and mimic model understanding. Moreover, the paper also discusses the challenges and opportunities faced by machine learning interpretable methods and the possible development direction in the future. The proposed interpretation methods should also be useful for putting many research open questions in perspective.摘要近年来,机器学习发展迅速,尤其是深度学习在图像、声⾳、⾃然语⾔处理等领域取得卓越成效.机器学习算法的表⽰能⼒⼤幅度提⾼,但是伴随着模型复杂度的增加,机器学习算法的可解释性越差,⾄今,机器学习的可解释性依旧是个难题.通过算法训练出的模型被看作成⿊盒⼦,严重阻碍了机器学习在某些特定领域的使⽤,譬如医学、⾦融等领域.⽬前针对机器学习的可解释性综述性的⼯作极少,因此,将现有的可解释⽅法进⾏归类描述和分析⽐较,⼀⽅⾯对可解释性的定义、度量进⾏阐述,另⼀⽅⾯针对可解释对象的不同,从模型的解释、预测结果的解释和模仿者模型的解释3个⽅⾯,总结和分析各种机器学习可解释技术,并讨论了机器学习可解释⽅法⾯临的挑战和机遇以及未来的可能发展⽅向.Abstract 2Deep learning is an important field of machine learning research, which is widely used in industry for its powerful feature extraction capabilities and advanced performance in many applications. However, due to the bias in training data labeling and model design, research shows that deep learning may aggravate human bias and discrimination in some applications, which results in unfairness during the decision-making process, thereby will cause negative impact to both individuals and socials. To improve the reliability of deep learning and promote its development in the field of fairness, we review the sources of bias in deep learning, debiasing methods for different types biases, fairness measure metrics for measuring the effect of debiasing, and current popular debiasing platforms, based on the existing research work. In the end we explore the open issues in existing fairness research field and future development trends.摘要:深度学习是机器学习研究中的⼀个重要领域,它具有强⼤的特征提取能⼒,且在许多应⽤中表现出先进的性能,因此在⼯业界中被⼴泛应⽤.然⽽,由于训练数据标注和模型设计存在偏见,现有的研究表明深度学习在某些应⽤中可能会强化⼈类的偏见和歧视,导致决策过程中的不公平现象产⽣,从⽽对个⼈和社会产⽣潜在的负⾯影响.为提⾼深度学习的应⽤可靠性、推动其在公平领域的发展,针对已有的研究⼯作,从数据和模型2⽅⾯出发,综述了深度学习应⽤中的偏见来源、针对不同类型偏见的去偏⽅法、评估去偏效果的公平性评价指标、以及⽬前主流的去偏平台,最后总结现有公平性研究领域存在的开放问题以及未来的发展趋势.Abstract 3TensorFlow Lite (TFLite) is a lightweight, fast and cross-platform open source machine learning framework specifically designed for mobile and IoT. It’s part of TensorFlow and supports multiple platforms such as Android, iOS, embedded Linux, and MCU etc. It greatly reduces the barrier for developers, accelerates the development of on-device machine learning (ODML), and makes ML run everywhere. 
This article introduces the trend, challenges and typical applications of ODML; the origin and system architecture of TFLite; best practices and tool chains suitable for ML beginners; and the roadmap of TFLite.摘要: TensorFlow Lite(TFLite)是⼀个轻量、快速、跨平台的专门针对移动和IoT场景的开源机器学习框架,是TensorFlow 的⼀部分,⽀持安卓、iOS、嵌⼊式Linux以及MCU等多个平台部署.它⼤⼤降低开发者使⽤门槛,加速端侧机器学习的发展,推动机器学习⽆处不在.介绍了端侧机器学习的浪潮、挑战和典型应⽤;TFLite的起源和系统架构;TFLite的最佳实践,以及适合初学者的⼯具链;展望了未来的发展⽅向.Abstract 4The rapid development of the Internet accesses many new applications including real time multi-media service, remote cloud service, etc. These applications require various types of service quality, which is a significant challenge towards current best effort routing algorithms. Since the recent huge success in applying machine learning in game, computervision and natural language processing, many people tries to design “smart” routing algorithms based on machine learning methods. In contrary with traditional model-based, decentralized routing algorithms (e.g.OSPF), machine learning based routing algorithms are usually data-driven, which can adapt to dynamically changing network environments and accommodate different service quality requirements. Data-driven routing algorithms based on machine learning approach have shown great potential in becoming an important part of the next generation network. However, researches on artificial intelligent routing are still on a very beginning stage. In this paper we firstly introduce current researches on data-driven routing algorithms based on machine learning approach, showing the main ideas, application scenarios and pros and cons of these different works. Our analysis shows that current researches are mainly for the principle of machine learning based routing algorithms but still far from deployment in real scenarios. So we then analyze different training and deploying methods for machine learning based routing algorithms in real scenarios and propose two reasonable approaches to train and deploy such routing algorithms with low overhead and high reliability. Finally, we discuss the opportunities and challenges and show several potential research directions for machine learning based routing algorithms in the future.摘要:互联⽹的飞速发展催⽣了很多新型⽹络应⽤,其中包括实时多媒体流服务、远程云服务等.现有尽⼒⽽为的路由转发算法难以满⾜这些应⽤所带来的多样化的⽹络服务质量需求.随着近些年将机器学习⽅法应⽤于游戏、计算机视觉、⾃然语⾔处理获得了巨⼤的成功,很多⼈尝试基于机器学习⽅法去设计智能路由算法.相⽐于传统数学模型驱动的分布式路由算法⽽⾔,基于机器学习的路由算法通常是数据驱动的,这使得其能够适应动态变化的⽹络环境以及多样的性能评价指标优化需求.基于机器学习的数据驱动智能路由算法⽬前已经展⽰出了巨⼤的潜⼒,未来很有希望成为下⼀代互联⽹的重要组成部分.然⽽现有对于智能路由的研究仍然处于初步阶段.⾸先介绍了现有数据驱动智能路由算法的相关研究,展现了这些⽅法的核⼼思想和应⽤场景并分析了这些⼯作的优势与不⾜.分析表明,现有基于机器学习的智能路由算法研究主要针对算法原理,这些路由算法距离真实环境下部署仍然很遥远.因此接下来分析了不同的真实场景智能路由算法训练和部署⽅案并提出了2种合理的训练部署框架以使得智能路由算法能够低成本、⾼可靠性地在真实场景被部署.最后分析了基于机器学习的智能路由算法未来发展中所⾯临的机遇与挑战并给出了未来的研究⽅向.Abstract 5In recent years, the rapid development of Internet technology has greatly facilitated the daily life of human, and it is inevitable that massive information erupts in a blowout. How to quickly and effectively obtain the required information on the Internet is an urgent problem. The automatic text summarization technology can effectively alleviate this problem. As one of the most important fields in natural language processing and artificial intelligence, it can automatically produce a concise and coherent summary from a long text or text set through computer, in which the summary should accurately reflect the central themes of source text. 
In this paper, we expound the connotation of automatic summarization, review the development of automatic text summarization technique and introduce two main techniques in detail: extractive and abstractive summarization, including feature scoring, classification method, linear programming, submodular function, graph ranking, sequence labeling, heuristic algorithm, deep learning, etc. We also analyze the datasets and evaluation metrics that are commonly used in automatic summarization. Finally, the challenges ahead and the future trends of research and application have been predicted.摘要:近年来,互联⽹技术的蓬勃发展极⼤地便利了⼈类的⽇常⽣活,不可避免的是互联⽹中的信息呈井喷式爆发,如何从中快速有效地获取所需信息显得极为重要.⾃动⽂本摘要技术的出现可以有效缓解该问题,其作为⾃然语⾔处理和⼈⼯智能领域的重要研究内容之⼀,利⽤计算机⾃动地从长⽂本或⽂本集合中提炼出⼀段能准确反映源⽂中⼼内容的简洁连贯的短⽂.探讨⾃动⽂本摘要任务的内涵,回顾和分析了⾃动⽂本摘要技术的发展,针对⽬前主要的2种摘要产⽣形式(抽取式和⽣成式)的具体⼯作进⾏了详细介绍,包括特征评分、分类算法、线性规划、次模函数、图排序、序列标注、启发式算法、深度学习等算法.并对⾃动⽂本摘要常⽤的数据集以及评价指标进⾏了分析,最后对其⾯临的挑战和未来的研究趋势、应⽤等进⾏了预测.Abstract 6With the high-speed development of Internet of things, wearable devices and mobile communication technology, large-scale data continuously generate and converge to multiple data collectors, which influences people’s life in many ways. Meanwhile, it also causes more and more severe privacy leaks. Traditional privacy aware mechanisms such as differential privacy, encryption and anonymization are not enough to deal with the serious situation. What is more, the data convergence leads to data monopoly which hinders the realization of the big data value seriously. Besides, tampered data, single point failure in data quality management and so on may cause untrustworthy data-driven decision-making. How to use big data correctly has become an important issue. For those reasons, we propose the data transparency, aiming to provide solution for the correct use of big data. Blockchain originated from digital currency has the characteristics of decentralization, transparency and immutability, and it provides an accountable and secure solution for data transparency. In this paper, we first propose the definition and research dimension of the data transparency from the perspective of big data life cycle, and we also analyze and summary the methods to realize data transparency. Then, we summary the research progress of blockchain-based data transparency. Finally, we analyze the challenges that may arise in the process of blockchain-based data transparency.摘要:物联⽹、穿戴设备和移动通信等技术的⾼速发展促使数据源源不断地产⽣并汇聚⾄多⽅数据收集者,由此带来更严峻的隐私泄露问题, 然⽽传统的差分隐私、加密和匿名等隐私保护技术还不⾜以应对.更进⼀步,数据的⾃主汇聚导致数据垄断问题,严重影响了⼤数据价值实现.此外,⼤数据决策过程中,数据⾮真实产⽣、被篡改和质量管理过程中的单点失败等问题导致数据决策不可信.如何使这些问题得到有效治理,使数据被正确和规范地使⽤是⼤数据发展⾯临的主要挑战.⾸先,提出数据透明化的概念和研究框架,旨在增加⼤数据价值实现过程的透明性,从⽽为上述问题提供解决⽅案.然后,指出数据透明化的实现需求与区块链的特性天然契合,并对⽬前基于区块链的数据透明化研究现状进⾏总结.最后,对基于区块链的数据透明化可能⾯临的挑战进⾏分析.Abstract 7Blockchain technology is a new emerging technology that has the potential to revolutionize many traditional industries. Since the creation of Bitcoin, which represents blockchain 1.0, blockchain technology has been attracting extensive attention and a great amount of user transaction data has been accumulated. Furthermore, the birth of Ethereum, which represents blockchain 2.0, further enriches data type in blockchain. While the popularity of blockchain technology bringing about a lot of technical innovation, it also leads to many new problems, such as user privacy disclosure and illegal financial activities. However, the public accessible of blockchain data provides unprecedented opportunity for researchers to understand and resolve these problems through blockchain data analysis. 
Thus, it is of great significance to summarize the existing research problems, the results obtained, the possible research trends, and the challenges faced in blockchain data analysis. To this end, a comprehensive review and summary of the progress of blockchain data analysis is presented. The review begins by introducing the architecture and key techniques of blockchain technology and providing the main data types in blockchain with the corresponding analysis methods. Then, the current research progress in blockchain data analysis is summarized in seven research problems, which includes entity recognition, privacy disclosure risk analysis, network portrait, network visualization, market effect analysis, transaction pattern recognition, illegal behavior detection and analysis. Finally, the directions, prospects and challenges for future research are explored based on the shortcomings of current research.摘要:区块链是⼀项具有颠覆许多传统⾏业的潜⼒的新兴技术.⾃以⽐特币为代表的区块链1.0诞⽣以来,区块链技术获得了⼴泛的关注,积累了⼤量的⽤户交易数据.⽽以以太坊为代表的区块链2.0的诞⽣,更加丰富了区块链的数据类型.区块链技术的⽕热,催⽣了⼤量基于区块链的技术创新的同时也带来许多新的问题,如⽤户隐私泄露,⾮法⾦融活动等.⽽区块链数据公开的特性,为研究⼈员通过分析区块链数据了解和解决相关问题提供了前所未有的机会.因此,总结⽬前区块链数据存在的研究问题、取得的分析成果、可能的研究趋势以及⾯临的挑战具有重要意义.为此,全⾯回顾和总结了当前的区块链数据分析的成果,在介绍区块链技术架构和关键技术的基础上,分析了⽬前区块链系统中主要的数据类型,总结了⽬前区块链数据的分析⽅法,并就实体识别、隐私泄露风险分析、⽹络画像、⽹络可视化、市场效应分析、交易模式识别、⾮法⾏为检测与分析等7个问题总结了当前区块链数据分析的研究进展.最后针对⽬前区块链数据分析研究中存在的不⾜分析和展望了未来的研究⽅向以及⾯临的挑战.Abstract 8In recent years, as more and more large-scale scientific facilities have been built and significant scientific experiments have been carried out, scientific research has entered an unprecedented big data era. Scientific research in big data era is a process of big science, big demand, big data, big computing, and big discovery. It is of important significance to develop a full life cycle data management system for scientific big data. In this paper, we first introduce the background of the development of scientific big data management system. Then we specify the concepts and three key characteristics of scientific big data. After an review of scientific data resource development projects and scientific data management systems, a framework is proposed aiming at the full life cycle management of scientific big data. Further, we introduce the key technologies of the management framework including data fusion, real-time analysis, long termstorage, cloud service, and data opening and sharing. Finally, we summarize the research progress in this field, and look into the application prospects of scientific big data management system.摘要:近年来,随着越来越多的⼤科学装置的建设和重⼤科学实验的开展,科学研究进⼊到⼀个前所未有的⼤数据时代.⼤数据时代科学研究是⼀个⼤科学、⼤需求、⼤数据、⼤计算、⼤发现的过程,研发⼀个⽀持科学⼤数据全⽣命周期的数据管理系统具有重要的意义.分析了研发科学⼤数据管理系统的背景,阐述了科学⼤数据的概念和三⼤特征,通过对科学数据资源发展和科学数据管理系统的研究进展进⾏综述分析,提出了满⾜科学数据管理全⽣命周期的科学⼤数据管理框架,并从数据融合、数据实时分析、长期存储、云服务体系以及数据开放共享机制5个⽅⾯分析了科学⼤数据管理系统中的关键技术.最后,结合科学研究领域展望了科学⼤数据管理系统的应⽤前景.Abstract 9Recently, research on deep learning applied to cyberspace security has caused increasing academic concern, and this survey analyzes the current research situation and trends of deep learning applied to cyberspace security in terms of classification algorithms, feature extraction and learning performance. 
Currently deep learning is mainly applied to malware detection and intrusion detection, and this survey reveals the existing problems of these applications: feature selection,which could be achieved by extracting features from raw data; self-adaptability, achieved by early-exit strategy to update the model in real time; interpretability, achieved by influence functions to obtain the correspondence between features and classification labels. Then, top 10 obstacles and opportunities in deep learning research are summarized. Based on this, top 10 obstacles and opportunities of deep learning applied to cyberspace security are at first proposed, which falls into three categories. The first category is intrinsic vulnerabilities of deep learning to adversarial attacks and privacy-theft attacks. The second category is sequence-model related problems, including program syntax analysis, program code generation and long-term dependences in sequence modeling. The third category is learning performance problems, including poor interpretability and traceability, poor self-adaptability and self-learning ability, false positives and data unbalance. Main obstacles and their opportunities among the top 10 are analyzed, and we also point out that applications using classification models are vulnerable to adversarial attacks and the most effective solution is adversarial training; collaborative deep learning applications are vulnerable to privacy-theft attacks, and prospective defense is teacher-student model. Finally, future research trends of deep learning applied to cyberspace security are introduced.摘要:近年来,深度学习应⽤于⽹络空间安全的研究逐渐受到国内外学者的关注,从分类算法、特征提取和学习效果等⽅⾯分析了深度学习应⽤于⽹络空间安全领域的研究现状与进展.⽬前,深度学习主要应⽤于恶意软件检测和⼊侵检测两⼤⽅⾯,指出了这些应⽤存在的问题:特征选择问题,需从原始数据中提取更全⾯的特征;⾃适应性问题,可通过early-exit策略对模型进⾏实时更新;可解释性问题,可使⽤影响函数得到特征与分类标签之间的相关性.其次,归纳总结了深度学习发展⾯临的⼗⼤问题与机遇,在此基础上,⾸次归纳了深度学习应⽤于⽹络空间安全所⾯临的⼗⼤问题与机遇,并将⼗⼤问题与机遇归为3类:1)算法脆弱性问题,包括深度学习模型易受对抗攻击和隐私窃取攻击;2)序列化模型相关问题,包括程序语法分析、程序代码⽣成和序列建模长期依赖问题;3)算法性能问题,即可解释性和可追溯性问题、⾃适应性和⾃学习性问题、存在误报以及数据集不均衡的问题.对⼗⼤问题与机遇中主要问题及其解决⽅案进⾏了分析,指出对于分类的应⽤易受对抗攻击,最有效的防御⽅案是对抗训练;基于协作性深度学习进⾏分类的安全应⽤易受隐私窃取攻击,防御的研究⽅向是教师学⽣模型.最后,指出了深度学习应⽤于⽹络空间安全未来的研究发展趋势.。

Proof of

A Component-based Approach to Verified Software: What, Why, How and What Next?

Kung-Kiu Lau, Zheng Wang, Anduo Wang and Ming Gu
School of Computer Science, The University of Manchester, Manchester M13 9PL, United Kingdom
kung-kiu,zw@
School of Software, Tsinghua University, Beijing, China
wad04@, guming@

1 What?
Our component-based approach to verified software is a result of cross-fertilisation between verified software and component-based software development. In contrast to approaches based on compositional verification techniques, our approach is designed to solve the scale problem in verified software.
Compositional verification tends to be top-down, i.e. it partitions a system into subsystems, and proves the whole system by proving the subsystems (Fig. 1).
Fig. 1. Compositional verification.
The subsystems, often called components, are system-specific, and are therefore not intended for reuse. It follows that their proofs cannot be reused in other systems.
By contrast, our approach to verified software is bottom-up, starting with pre-existing pre-verified components, and composing them into verified composites (Fig. 2).
Fig. 2. Component-based approach to verified software.
Components are system-independent, and are intended for reuse in many systems. Their proofs are therefore also reusable in different systems.

2 Why?
In compositional verification, the only form of 'scaling up' is decomposition into smaller, more manageable subsystems. The task of decomposition itself (and composing the subproofs) is directly proportional to the size of the whole system.
By contrast, in our component-based approach, scaling up is achieved because each step of composition is independent of the size of the whole system. The total number of composition steps required depends on the size of the whole system as well as the granularity of the components.

3 How?
The pre-requisite for a component-based approach to verified software is that components and their specification, composition and verification are not only well-defined, but also defined in such a way that verified software can be built in a component-based manner. That is, we need a component model such that it supports this approach.

3.1 A Component Model
A component model defines what components are, and how they can be composed. We have defined a component model [7] in which we can also reason about components and their composition. The defining characteristics of our components are encapsulation and compositionality, which lead to self-similarity. The defining characteristic of our composition operators is that they are exogenous connectors [8] that provide interfaces to the composites they produce.
Self-similarity is what makes our component-based approach possible. It means that our composite components have hierarchical specifications, hierarchical proof obligations, or verification conditions (VCs), and as a result, proof reuse, via sub-VCs, is possible.

3.2 A Case Study: The Missile Guidance System
We have implemented our component model in Spark [2], and using this implementation, we have experimented on an industrial strength case study, a Missile Guidance system [4], which we obtained from Praxis High Integrity Systems. The Missile Guidance system is the main control unit for an endo-atmospheric interceptor missile. It consists of a main control unit and input/output. An I/O handler reads data from different sensors and passes them via a bus to corresponding processing units. These units then pass their results to a navigation unit which produces the output for the system. The implementation in [4] contains 246 packages including tools and
a test harness. In total, it has 30,102 lines of Spark Ada code including comments and annotations.
Using our component model, we have implemented a component-based version of the Missile Guidance system. Its architecture is shown in Fig. 3. We reused code from [4] as computation units, and composed them using exogenous connectors. Seq1, Seq1', Seq2 and Seq4 are composite components whose interfaces are sequence connectors. Sel2 is a composite component whose interface is a selector connector. Seq3 is a sequence connector, Sel1 a selector, Pipe1 and Pipe2 are pipe connectors and Loop is an iterator.
Fig. 3. A component-based missile guidance system.
We have proved the system completely, using the Spark proof tools: Examiner, Simplifier and Checker; its proof obligation summary is shown in Fig. 4. The summary is generated automatically by the Spark Proof Obligation Summariser (POGS). It is a summary for the VCs: their total number, types, and the numbers discharged by each proof tool.
Fig. 4. Proof Obligation Summary of the missile guidance system.
In the proofs of composite components, we succeeded in reusing proofs of sub-components, by virtue of the hierarchical nature of the VCs. We define the proof reuse rate for a (composite or atomic) component simply as the ratio of the number of new VCs for the (composition or invocation) connector to the number of VCs in the sub-components (or computation unit). Of course the actual proof effort for each VC is variable, but we believe the ratio of VC numbers does give a first approximation to the proof reuse rate.
As an illustration of the proof reuse rates for the component-based missile guidance system, we will show the proof reuse rates for part of the system, viz. the composite component Seq4 in Fig. 3. The subcomponents of Seq4 are shown in Fig. 5, where 'Ibm' is the invocation of 'bm' (Barometer), 'Ias' is the invocation of 'as' (Airspeed), etc. The proof reuse rate for each sub-component of Seq4 is shown in Fig. 6. We can see that the bulk of the proof effort goes into proving the computation units of atomic components, but these proofs are only done once and can be reused afterwards. Our component-based approach is able to reuse these proofs effectively, thus reducing the cost of proof efforts of the whole system.
Fig. 5. Part of the missile guidance system.
Fig. 6. Proof reuse rates for part of the missile guidance system.
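As a small illustration of this definition, the following Prolog fragment (ours, with placeholder VC counts that are not the paper's figures) computes the ratio of new connector VCs to sub-component VCs for a composite; a small ratio means that most of the proof effort is inherited from already-discharged sub-proofs.

% component(Name, OwnVCs, SubComponents): OwnVCs are the VCs of the
% connector (for a composite) or of the computation unit (for an atomic
% component).  The numbers are illustrative placeholders only.
component(bm,   11, []).
component(as,   11, []).
component(seq4,  5, [bm, as]).

% total_vcs(+Name, -Total): VCs of a component, including its sub-components.
total_vcs(Name, Total) :-
    component(Name, Own, Subs),
    maplist(total_vcs, Subs, SubTotals),
    sum_list(SubTotals, SubSum),
    Total is Own + SubSum.

% reuse_rate(+Name, -Rate): new connector VCs over sub-component VCs,
% following the definition above.
reuse_rate(Name, Rate) :-
    component(Name, New, Subs),
    Subs \= [],
    maplist(total_vcs, Subs, SubTotals),
    sum_list(SubTotals, SubSum),
    Rate is New / SubSum.

% ?- reuse_rate(seq4, R).   % R is 5/22, about 0.23, with these placeholder counts.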
More importantly, this experiment confirms that our component-based approach can scale up, because of proof reuse.

4 What Next?
Although the missile guidance system is an industrial strength case study, our experiment is only a first step in developing and applying our component-based approach to verified software. Much more remains to be done, and here we outline some future work.

4.1 Formalisation and Proof of Properties of Component Model
A preliminary formalisation of the semantics of our component model has been done, using first-order logic [7]. To prove properties of our component model, we plan to formalise the model in a theory with a proof tool. To investigate this, we plan to use PVS [9].

4.2 Implementation in Other Languages and Tools
Implementation of our component model in other languages with proof tools, e.g. Spec# [3], JML [6], etc. will be interesting. A comparison with B [1] and its tools will also be illuminating. The objective will be to evaluate whether and how well these models and tools support our model for component-based verified software.

4.3 Larger Examples
Although the Missile Guidance System is already quite large, it is nowhere near the 1 million lines that is the target of the Grand Challenge in Verified Software [5]. Therefore, we hope to attempt increasingly larger examples, in order to produce convincing evidence that our model is fit for purpose, as far as the scale problem is concerned. By so doing we can also contribute to the repository of verified code that the grand challenge also seeks to establish.

References
1. J. R. Abrial. The B-Book: Assigning Programs to Meanings. Cambridge University Press, 1996.
2. J. Barnes. High Integrity Software: The SPARK Approach to Safety and Security. Addison-Wesley, 2003.
3. M. Barnett, K. M. Leino, and W. Schulte. The Spec# programming system: An overview. In Proc. Int. Workshop on Construction and Analysis of Safe, Secure, and Interoperable Smart Devices, LNCS 3362, pages 49-69. Springer, 2004.
4. A. Hilton. High Integrity Hardware-Software Codesign. PhD thesis, The Open University, April 2004.
5. T. Hoare and J. Misra. Verified software: theories, tools, experiments - vision of a grand challenge project. In Proceedings of the IFIP working conference on Verified Software: Theories, Tools, Experiments, 2005.
6. The Java Modeling Language (JML) Home Page. /~leavens/JML.html.
7. K.-K. Lau, M. Ornaghi, and Z. Wang. A software component model and its preliminary formalisation. In F. S. de Boer et al., editor, Proc. 4th International Symposium on Formal Methods for Components and Objects, LNCS 4111, pages 1-21. Springer-Verlag, 2006.
8. K.-K. Lau, P. Velasco Elizondo, and Z. Wang. Exogenous connectors for software components. In G. T. Heineman et al., editor, Proc. 8th Int. Symp. on Component-based Software Engineering, LNCS 3489, pages 90-106. Springer, 2005.
9. /documentation.shtml/.


Towards Creating Specialised Integrity Checks Through Partial Evaluation of Meta-Interpreters

Michael Leuschel* and Danny De Schreye†
Department of Computer Science, K.U. Leuven
Celestijnenlaan 200A, B-3001 Heverlee, Belgium
{michael,dannyd}@cs.kuleuven.ac.be

Abstract
In [23] we presented a partial evaluation scheme for a "real life" subset of Prolog, containing first-order built-ins, simple side-effects and the operational predicate if-then-else. In this paper we apply this scheme to specialise integrity checking in deductive databases. We present an interpreter which can be used to check the integrity constraints in hierarchical deductive databases. This interpreter incorporates the knowledge that the integrity constraints were not violated prior to a given update and uses a technique to lift the ground representation to the non-ground one for resolution. By partially evaluating this meta-interpreter for certain transaction patterns we are able to obtain very efficient specialised update procedures, executing substantially faster than the original meta-interpreter. The partial evaluation scheme presented in [23] seems to be capable of automatically generating highly specialised update procedures for deductive databases.

1 Introduction
Partial evaluation has received considerable attention both in functional programming (see the book by Jones et al [20] and the references therein) and logic programming (e.g. [14,15,21,37]). However, the concerns in these two approaches have strongly differed. In functional programming, self-application and the realisation of the different Futamura projections has been the focus of a lot of contributions. In logic programming, self-application has received very little attention.¹ Here, the majority of the work has been concerned with direct optimisation of run-time execution, often targeted at removing the overhead caused by meta-interpreters.
In the context of pure logic programs, partial evaluation is often referred to as partial deduction, the term partial evaluation being reserved for the treatment of non-declarative programs.

* Supported by Esprit BR-project Compulog II.
† Senior research associate of the Belgian National Fund for Scientific Research.
¹ Some notable exceptions are [16,31].


Firm theoretical foundations for partial deduction have been established by Lloyd and Shepherdson in [25].
However, pure logic programming is rarely considered to be viable for practical "real-life" programming and for instance Prolog incorporates non-declarative extensions. In [23] we presented a partial evaluation scheme for a practically usable subset of Prolog encompassing first-order² built-ins, like var/1, nonvar/1 and =../2, simple side-effects, like print/1, and the operational if-then-else construct. An important aspect is the inclusion of the if-then-else, which proved to be much better suited for partial evaluation than the "full blown" cut. For instance it was possible to obtain a Knuth-Morris-Pratt like search algorithm by specializing a "dumb" search algorithm for a given pattern. The if-then-else contains a local cut and is usually written as (If -> Then ; Else). The following informal Prolog clauses can be used to define the if-then-else:

(If -> Then ; Else) :- If, !, Then.
(If -> Then ; Else) :- Else.

In other words, if the test-part succeeds then a local cut is executed and the then-part is entered (this means that the test-part will yield at most one solution). If the test-part fails finitely then the else-part is executed, and if the test-part "loops" (i.e. fails infinitely) then the whole construct loops. Note that most uses of the cut can be mapped to if-then-else constructs, and the if-then-else can also be used to implement the not/1.³
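For illustration, the usual way of expressing negation as finite failure with this construct (a standard definition, not quoted from [23]) is:

% not(G): if G succeeds, the local cut commits and the construct fails;
% if G fails finitely, the else-branch succeeds.
not(G) :- (call(G) -> fail ; true).

% Likewise, a typical guarded use of the cut maps directly onto if-then-else:
max(X, Y, M) :- (X >= Y -> M = X ; M = Y).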

In [23] we also showed that freeness and sharing information, which so far have been of no interest in (pure) partial deduction, can be important to produce efficient specialised programs. In this paper we apply the partial evaluation technique of [23] to a non-trivial and practically useful meta-interpreter for specialised integrity checking in deductive databases.
From a theoretical viewpoint, integrity constraints are very useful for the specification of deductive databases. They ensure that no contradictory data can be introduced and monitor the coherence of a database. From a practical viewpoint however it can be quite expensive to check the integrity of a deductive database after each update. An extensive amount of research effort has been put into improving integrity checking such that it takes advantage of the fact that a database was consistent before any particular update
² As opposed to "second order" built-ins, which are predicates manipulating clauses and goals, like call/1 or assert/1.
