ScopeCast: A Service for Consistent Peer-to-Peer Multicasts


Clustering and Load Balancing


Introduction (cont.)
• Load balancers also perform server monitoring of services in a web server farm.
• In case of failure of a service, the load balancer continues to perform load balancing across the remaining services that are UP.
• In case of failure of all the servers bound to a virtual server, requests may be sent to a backup virtual server (if configured) or optionally redirected to a configured URL.
• In Global Server Load Balancing (GSLB), the load balancer distributes load to a geographically distributed set of server farms based on health, server load, or proximity.
Introduction (cont.)
• Different virtual servers can be configured for different sets of physical services, such as TCP and UDP services in general.
• Application-specific virtual servers may exist to support HTTP, FTP, SSL, DNS, etc.
• The load balancing methods manage the selection of an appropriate physical server in a server farm.
• Persistence can be configured on a virtual server; once a server is selected, subsequent requests from the client are directed to the same server.
• Persistence is sometimes necessary in applications where client state is maintained on the server, but the use of persistence can cause problems in failure and other situations.
• A more common method of managing persistence is to store state information in a shared database, which can be accessed by all real servers, and to link this information to a client with a small token such as a cookie, which is sent in every client request.
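The selection and persistence behavior described above can be sketched in a few lines. The following is an illustrative Python sketch, not code from any load-balancer product; all names (VirtualServer, pick_server, the session dict standing in for a shared store) are invented for the example. It picks the least-loaded UP server, binds a client token to that server, and rebinds the session when the chosen server goes down.

```python
# Illustrative sketch: least-connections selection plus cookie-based
# persistence backed by a (stand-in for a) shared session store.

class VirtualServer:
    def __init__(self, servers):
        self.servers = {s: 0 for s in servers}   # server -> active connections
        self.up = set(servers)                   # health status from monitoring
        self.sessions = {}                       # token -> server (shared store)

    def mark_down(self, server):
        self.up.discard(server)

    def pick_server(self, cookie=None):
        # Persistence: route a known session token back to its server, if UP.
        if cookie in self.sessions and self.sessions[cookie] in self.up:
            return self.sessions[cookie]
        # Otherwise: least connections among servers that are UP.
        candidates = [s for s in self.servers if s in self.up]
        if not candidates:
            raise RuntimeError("all servers down: use backup vserver or redirect URL")
        chosen = min(candidates, key=lambda s: self.servers[s])
        self.servers[chosen] += 1
        if cookie is not None:
            self.sessions[cookie] = chosen       # record binding in shared store
        return chosen

vs = VirtualServer(["web1", "web2"])
first = vs.pick_server(cookie="abc")      # new session: least-loaded UP server
again = vs.pick_server(cookie="abc")      # persistence: same server as before
vs.mark_down(first)
failover = vs.pick_server(cookie="abc")   # failure: rebound to a remaining UP server
```

Because the session table lives outside any one real server, any load-balancer instance can honor the binding, which is exactly why the shared-database-plus-cookie scheme tolerates failures better than balancer-local persistence.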

Access Token Scopes: A Usage Example


What is an access token? In computer science and network security, an access token is a credential used for authentication and authorization. It is a string indicating that some entity (such as a user or an application) is entitled to access a particular resource (such as an API endpoint or a file). An access token is usually associated with a specific user or application and has a limited validity period. When a token is presented, its validity and its authorized scope should be verified, so that only authorized entities can access the resource.

What are access tokens for? They serve several purposes, including but not limited to the following:

1. Authentication: an access token can verify the identity of a user or application. By presenting a valid token, an entity proves that it has been authorized and has permission to access a particular resource.

2. Authorized access: a token also carries information about the scopes granted to the entity. These scopes determine which operations the entity may perform and at what level it may access resources.

3. Security: access tokens are more secure than traditional username-and-password authentication. Because a token's lifetime is limited, and its use can be restricted to specific IP addresses or devices, an attacker who obtains a leaked token has only a narrow, short-lived window in which to abuse it.

4. Reduced server load: authenticating and authorizing with access tokens lets the server avoid many database queries, since token validity can be cached and checked cheaply. This reduces server load and improves system performance.

How, then, are access token scopes used? A token's scope defines the specific permissions the entity holds when using the token. In OAuth 2.0, scopes are represented as strings; they can be simple names or form a more elaborate hierarchy. For example, one scope might be "read", another "write", and a more structured one "User:Read Account:Write".

Using scopes involves a few key steps:

1. Define the scopes: first, determine which scopes the application or API needs. Scopes should be defined according to the application's requirements and the access levels of its resources.
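A minimal sketch of the check described above, assuming a hypothetical token layout (a dict carrying a space-delimited scope string, as OAuth 2.0 uses, plus an expiry timestamp); this is an illustration, not any specific OAuth library's API:

```python
# Illustrative scope check: a token is accepted only if it is unexpired
# and its granted scopes cover every scope the endpoint requires.
import time

def token_allows(token, required_scopes, now=None):
    """Return True iff the token is unexpired and grants all required scopes."""
    now = time.time() if now is None else now
    if token["expires_at"] <= now:
        return False
    granted = set(token["scope"].split())   # OAuth 2.0: space-delimited scopes
    return set(required_scopes) <= granted

token = {"sub": "app-123",
         "scope": "read User:Read Account:Write",
         "expires_at": 2_000_000_000}

ok_read  = token_allows(token, ["read"], now=1_000)
ok_write = token_allows(token, ["write"], now=1_000)          # scope not granted
expired  = token_allows(token, ["read"], now=3_000_000_000)   # past expires_at
```

Note that the resource server rejects the request in two distinct cases: an expired token (authentication failure) and a missing scope (authorization failure), matching points 1 and 2 above.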

The CAP Theorem in Microservices


The CAP theorem states that in distributed system design, the three properties of Consistency, Availability, and Partition tolerance cannot all be guaranteed at once. A distributed system cannot satisfy all three simultaneously; it can only guarantee two of them.

Consistency means that the nodes of a distributed system agree on the result of an operation: at any moment, whichever node a user contacts, the same data replica and the same result are returned. To guarantee consistency, every node must reach an agreed state when executing operations.

Availability means that the system responds to user requests and provides service. Even if a node fails or becomes unreachable, the remaining nodes continue serving. Guaranteeing availability means handling user requests whenever possible and returning a result within bounded time.

Partition tolerance means that the system keeps operating in the face of a network partition, that is, when communication between nodes is interrupted. Partitions are unavoidable, especially in large-scale distributed systems, so partition tolerance requires the system to handle data synchronization and consistency correctly after a partition.

According to the CAP theorem, when a network partition occurs the system must choose between consistency and availability. Choosing consistency plus partition tolerance may make the system unable to serve requests (i.e. unavailable) during the partition; choosing availability plus partition tolerance may allow data to become inconsistent.

Microservice architectures routinely face this trade-off, because the services in a microservice architecture are usually deployed in a distributed fashion. In practice, consistency and partition tolerance are often treated as primary, while availability is provided through horizontal scaling and failure-recovery mechanisms. For example, distributed transactions, message queues, and caching strategies can be used to achieve consistency and partition tolerance, while load balancing, failover, and service degradation improve availability.

In short, the CAP theorem gives distributed system design a framework for weighing consistency, availability, and partition tolerance against each other and choosing the most suitable design for the business at hand.
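To make the partition-time choice concrete, here is a small illustrative quorum sketch (not any particular product's algorithm; class and parameter names are invented): with N replicas, requiring W acknowledgements for writes and R for reads yields consistent reads when W + R > N, and a CP-style system rejects writes during a partition rather than accept inconsistency.

```python
# Illustrative quorum model of the CAP trade-off.

def quorum_consistent(n, w, r):
    # W + R > N guarantees every read quorum overlaps every write quorum.
    return w + r > n

class ReplicaSet:
    def __init__(self, n, w):
        self.replicas = [dict() for _ in range(n)]
        self.w = w

    def write(self, key, value, reachable):
        # 'reachable' models a partition: replica indexes we can still contact.
        if len(reachable) < self.w:
            raise RuntimeError("not enough replicas: CP system rejects the write")
        for i in reachable:
            self.replicas[i][key] = value
        return len(reachable)

rs = ReplicaSet(n=3, w=2)
assert quorum_consistent(3, 2, 2)          # W + R > N: consistent reads possible
rs.write("x", 1, reachable=[0, 1, 2])      # healthy cluster: write succeeds
try:
    rs.write("x", 2, reachable=[0])        # partition: only 1 replica reachable
    rejected = False
except RuntimeError:
    rejected = True                        # consistency chosen over availability
```

An AP-style system would instead accept the write on the reachable replica and reconcile later, which is exactly the inconsistency window the text describes.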

Stablediffusion Error Code 1


When encountering error code 1 in the Stablediffusion system, it is important to understand what this code means and how to address it. Error code 1 typically indicates a communication error between the server and the client, resulting in data not being transmitted properly. This can lead to problems with the stability and performance of the system.

To troubleshoot error code 1, first check the network connection between the server and the client. Ensure that both devices are properly connected to the network and that there are no issues with the internet connection. If the network connection is fine, the next step is to check the server settings to ensure that they are configured correctly.

If the server settings are correct, the issue may lie with the client-side settings. Check the client configuration to make sure that it is set up correctly and that there are no conflicts with other software or devices. It may also help to restart both the server and the client to see if that resolves the issue.

If none of these steps resolves error code 1, it may be necessary to contact technical support for further assistance. They can provide additional troubleshooting steps and help identify the root cause of the issue. By addressing error code 1 promptly and effectively, you can keep your Stablediffusion system stable and performant.

Dubbo's MetadataServiceDubboWrap


Dubbo is a fast, low-latency, high-throughput distributed service framework, used mainly for service governance in large clusters. MetadataServiceDubboWrap is an important component of the Dubbo framework: it wraps the metadata service so that it can support Dubbo's service discovery, registration, and routing. This article answers questions about Dubbo's MetadataServiceDubboWrap step by step, to help readers understand and use the component.

1. What is Dubbo? Dubbo is a high-performance, lightweight open-source distributed service framework, developed and open-sourced by Alibaba. Its main goal is to provide a service-governance solution for distributed, large-scale clusters, covering service registration, discovery, routing, load balancing, fault tolerance, and configuration management. Dubbo offers high performance, scalability, transparent remote invocation, intelligent load balancing, and visual service governance and operations, and is widely used in the infrastructure of many internet companies.

2. What is the MetadataServiceDubboWrap component? It is a component of the Dubbo framework that supports Dubbo's metadata service. Its main job is to wrap the metadata service in a form the Dubbo framework can consume. Metadata here means the information used to describe and manage a service, such as its interface definition, implementation class, version number, and protocol. Through MetadataServiceDubboWrap, the Dubbo framework can use this metadata to register, discover, and route services automatically, and thereby manage distributed services.

3. How does MetadataServiceDubboWrap work? It wraps the metadata service via Dubbo's extension mechanism. When a Dubbo service starts, the framework loads and initializes the MetadataServiceDubboWrap component. During startup, the component first tries to connect to the registry to obtain the service's metadata.
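Since the article does not show the actual MetadataServiceDubboWrap code, the following is only a generic sketch, not Dubbo's API: it illustrates the kind of mapping a metadata service conceptually maintains, from a service key (interface, version, protocol) to provider addresses, which registration and discovery then consult. All names here are invented for the example.

```python
# Generic metadata-registry sketch (not Dubbo's implementation).

class MetadataRegistry:
    def __init__(self):
        # (interface, version, protocol) -> set of provider addresses
        self.entries = {}

    def register(self, interface, version, protocol, addr):
        """A provider announces its metadata and address."""
        self.entries.setdefault((interface, version, protocol), set()).add(addr)

    def discover(self, interface, version, protocol):
        """A consumer looks up providers matching the service key."""
        return sorted(self.entries.get((interface, version, protocol), set()))

reg = MetadataRegistry()
reg.register("com.example.UserService", "1.0.0", "dubbo", "10.0.0.1:20880")
reg.register("com.example.UserService", "1.0.0", "dubbo", "10.0.0.2:20880")
providers = reg.discover("com.example.UserService", "1.0.0", "dubbo")
```

Routing and load balancing then operate over the returned provider list, which is why accurate metadata (version, protocol) is a precondition for correct invocation.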

Nacos and the CAP Principle


Nacos is an open-source platform for configuration management and service discovery, and its design can be examined through the three CAP properties: Consistency, Availability, and Partition tolerance. This article looks at each of the three in turn.

Consistency means that all nodes in the system hold the same copy of the data at the same time. In Nacos, consistency shows up mainly in configuration management. Nacos uses primary-backup replication to keep data consistent: when a configuration changes, Nacos notifies all nodes of the change, so that every node holds the latest configuration. All nodes therefore share the same configuration at the same time, preserving consistency.

Availability is the system's ability to keep working in the face of faults and abnormal conditions. In Nacos, availability shows up mainly in service discovery. Nacos uses heartbeat detection to track service availability: when a service instance becomes unavailable, Nacos detects this promptly and removes it from the service list, so that other services only call instances that are up. Nacos also supports cluster deployment: if one node fails, other nodes take over its work, keeping the whole system available.

Partition tolerance is the system's ability to keep working when the network partitions. In Nacos, partition tolerance shows up mainly in service registration and discovery. Nacos uses distributed consensus algorithms to provide partition tolerance: when a node is partitioned from the others, an election algorithm chooses a new leader node, so the system keeps running. Nacos also supports multi-datacenter deployment: if one datacenter fails, another can take over its services, preserving partition tolerance for the whole system.

In summary, Nacos addresses consistency, availability, and partition tolerance to keep the system stable and reliable: consistency keeps the data coherent, availability keeps the system serving requests, and partition tolerance keeps it dependable when the network partitions.
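The heartbeat-based availability mechanism described above can be sketched as follows. This is an illustrative model, not Nacos's implementation; the 15-second timeout and all names are assumptions made for the example.

```python
# Illustrative heartbeat registry: instances that miss heartbeats beyond a
# timeout are evicted, so callers only see instances believed healthy.

class ServiceRegistry:
    def __init__(self, timeout=15.0):
        self.timeout = timeout
        self.last_beat = {}   # instance address -> last heartbeat timestamp

    def heartbeat(self, instance, now):
        self.last_beat[instance] = now

    def healthy_instances(self, now):
        # Evict every instance whose last heartbeat is older than the timeout.
        expired = [i for i, t in self.last_beat.items() if now - t > self.timeout]
        for i in expired:
            del self.last_beat[i]
        return sorted(self.last_beat)

reg = ServiceRegistry(timeout=15.0)
reg.heartbeat("10.0.0.1:8080", now=0.0)
reg.heartbeat("10.0.0.2:8080", now=10.0)
alive = reg.healthy_instances(now=20.0)   # first instance has missed its beats
```

The design choice to evict lazily, at lookup time, keeps the happy path cheap; a production registry would typically also run a background sweep so stale entries do not linger between lookups.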

Using ConfigurationManager.ConnectionStrings


ConfigurationManager.ConnectionStrings is used to access and manage the connection strings in an application's configuration file. In .NET development, a connection string is an important parameter for connecting to storage resources such as databases and files, and the configuration file is an XML file that typically stores the application's settings and configuration information. This article describes the usage of ConfigurationManager.ConnectionStrings in detail, with concrete examples to help the reader understand it.

First, we need to reference the System.Configuration namespace, which contains the ConfigurationManager class. In code, the using keyword simplifies namespace use: using System.Configuration.

Next, we define the connection string in the configuration file. The file is usually named App.config (for Windows applications) or Web.config (for web applications), lives in the application's root directory, and can be edited with any text editor. Inside it, the connectionStrings element defines connection strings. For example, here is a simple connection string definition:

<connectionStrings>
  <add name="MyConnection"
       connectionString="Server=localhost;Database=MyDatabase;User Id=sa;Password=123456;"
       providerName="System.Data.SqlClient"/>
</connectionStrings>

In the example above, we define a connection string named "MyConnection" that connects to the local database "MyDatabase" using the SQL Server provider "System.Data.SqlClient".
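The snippet below is not the .NET ConfigurationManager API; it is a small Python sketch that parses the same connectionStrings fragment, just to make the name / connectionString / providerName structure that the class exposes explicit.

```python
# Parse a <connectionStrings> fragment to show the structure that
# ConfigurationManager.ConnectionStrings exposes (illustration only).
import xml.etree.ElementTree as ET

fragment = """
<connectionStrings>
  <add name="MyConnection"
       connectionString="Server=localhost;Database=MyDatabase;User Id=sa;Password=123456;"
       providerName="System.Data.SqlClient"/>
</connectionStrings>
"""

def read_connection_strings(xml_text):
    root = ET.fromstring(xml_text)
    # Each <add> child maps a name to its connection string and provider.
    return {add.get("name"): (add.get("connectionString"), add.get("providerName"))
            for add in root.findall("add")}

conns = read_connection_strings(fragment)
conn_str, provider = conns["MyConnection"]
```

In C#, the equivalent lookup is by the same name attribute, which is why choosing stable, descriptive names in the config file matters.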

Configuring VTY User-Interface Attributes on Huawei Switches


When users log in to the device over Telnet or SSH for local or remote maintenance, the VTY user interfaces can be configured according to usage needs and device-security considerations. Except for the ACL number restricting inbound/outbound calls on VTY user interfaces, the username and password, and the authentication mode of the user interface, every parameter has a default value; configure them as your actual requirements and security considerations dictate.

1. Log in to the VTY interface with a username and password

1.1 Enter the VTY user-interface view:
[Huawei]user-interface vty ?
  INTEGER<0-4,16-20>  The first user terminal interface to be configured
[Huawei]user-interface vty 0 4
[Huawei-ui-vty0-4]

1.2 Set the authentication mode to AAA (i.e. log in with a username and password):
[Huawei-ui-vty0-4]authentication-mode ?
  aaa       AAA authentication
  none      Login without checking
  password  Authentication through the password of a user terminal interface
[Huawei-ui-vty0-4]authentication-mode aaa

1.3 Create the login account and password:
[Huawei-ui-vty0-4]q
[Huawei]aaa
[Huawei-aaa]local-user ?
  STRING<1-64>  User name, in form of 'user@domain'. Can use wildcard '*' while displaying and modifying, such as *@isp, user@*, *@*. Cannot include invalid character / \ : * ? " < > | @ '
[Huawei-aaa]local-user ?
  access-limit   Set access limit of user(s)
  ftp-directory  Set user(s) FTP directory permitted
  idle-timeout   Set the timeout period for terminal user(s)
  password       Set password
  privilege      Set admin user(s) level
  service-type   Service types for authorized user(s)
  state          Activate/Block the user(s)
  user-group     User group
[Huawei-aaa]local-user password ?
  cipher  User password with cipher text
[Huawei-aaa]local-user password cipher

1.4 Set the account's service type to Telnet or SSH:
[Huawei-aaa]local-user service-type telnet
or
[Huawei-aaa]local-user service-type ssh

2. Log in to the VTY with a password only

2.1 Set the authentication mode to password authentication:
[Huawei-ui-vty0-4]authentication-mode password

2.2 Set the login password:
[Huawei-ui-vty0-4]set authentication password cipher ?
  STRING<1-16>/<24>  Plain text/cipher text password
[Huawei-ui-vty0-4]set authentication password cipher

3. Log in to the VTY directly (this mode is insecure):
[Huawei-ui-vty0-4]authentication-mode none

4. Configure the user privilege level of the VTY user interface. By default, the command access level of the VTY user interfaces is 0; in practice, if permissions need not be tightly restricted, it is generally set to level 15.


ScopeCast: A Service for Consistent Peer-to-Peer Multicasts

X.B. Shen    Y.C. Tay
National University of Singapore
October 26, 2003

Keywords: distributed systems, peer-to-peer multicast, causal ordering.

Abstract

Peer-to-peer systems can benefit from a service that delivers multicasts in a consistent manner to applications and other services. ScopeCast offers such a service. It does not assume membership is static or known, nor require logical or physical infrastructure for support. It allows users and applications to force (manually or via timeouts, etc.) delivery of received messages when congestion or failures are detected. The protocol resumes consistent delivery thereafter.

1 Introduction

When a message arrives at a node in a multicast group, its delivery to the application may need to be delayed to ensure that all nodes have a consistent view of the communication [AN]. The delays slow down communication, and other costs include memory (to store messages and metadata) and bandwidth (to carry control data).

Nonetheless, such multicast ordering provides an abstraction that facilitates the design of distributed systems (e.g. fault tolerance [B], virtual memory [AHJ], application monitoring [RST]), and was used in several systems (e.g. Psync [MPS], ISIS [BSS]).

How consistency is defined determines the ordering for the multicast delivery (Atomic, FIFO, Causal, etc. [HT]), and various techniques have been used to design the delivery protocol (logical clocks [EMS], causality graphs [DKM], message logging [KD], tokens [RM], vectors [BSS] and matrices [RST]).

Like most distributed systems, peer-to-peer applications need consistent views of the network state (e.g. replicated files in Freenet [C], replicated root in Bayeux [Z] or location and routing information in Pastry [RD]). There is, however, one important difference: multicast membership changes frequently as peers continually join and quit. Peer-to-peer multicasts therefore cannot assume membership is static or known globally (e.g. vector clocks). The problem is exacerbated for many
peer-to-peer systems, since the underlying wide-area, multi-hop networks make out-of-order message reception and concurrent join or quit events more likely [AN]. Some techniques that are suitable for local-area networks (e.g. tokens) also become impractical.

For this setting, we propose ScopeCast: a service for consistent peer-to-peer multicast delivery. ScopeCast does not require support from some logical or physical infrastructure [AN, YHH], and a multicast may be via flooding or through some mechanism (e.g. multicast tree).

ScopeCast's consistency definition includes unicasts and is, essentially, causal multicast [BSS] after taking dynamic group membership into account. The protocol inserts a peer's view of the multicast frontier [BM] in an outgoing message's header; the protocol uses this information to decide whether a received message is ready for delivery to the application.

Failures or congestion in the network can cause an unacceptable delivery delay or buffer overflow. When this happens, the user or application can force the (inconsistent) delivery of received messages. Such a disruption does not affect consistent delivery at other peers; moreover, ScopeCast resumes consistent delivery for the disrupted peer.

We state the consistency definition in Section 2. Section 3 then presents the protocol and describes how it allows for a temporary break from consistency. We prove the protocol is correct in Section 4, and conclude with a summary in Section 5.

2 Consistency Definition

We first use events and messages as abstractions to describe the system, then define consistency.

2.1 Events and Messages

We are interested in these events: Join(), Quit(), Send(,), Receive(,), Deliver(,), Discard(,); the -th message sent by ; the Msg() received by .

In particular, Join() is announced by the multicast event Send(, Msg(,0)). After joins a multicast group, it may receive some old messages that it can discard. The ones that matter are those that arrive at from senders whom knows have joined, or who know has joined, i.e.
Scope() (Deliver(Msg(.Sender,0), ) Deliver(Msg(), .Sender) Send(.Sender, )) (Receive() Quit()).

Scope is thus a message filter that ScopeCast uses to take dynamic membership into account.

2.2 Consistency

Figs. 1 and 2 illustrate the motivation for our consistency definition. The following is illustrated in Fig. 3, where may be a unicast or multicast.

Figure 2: Search problem: inconsistent interleaving of unicast with multicast. (a) File f is copied from C to D, then deleted at C after the transfer is acknowledged. If the acknowledgement is delivered at C before the multicast, the search may fail to find f. (b) An object f is passed (not copied) from peer to peer. If send( ) is delivered at D before the multicast, B may conclude that both C and D have f.

Consistency Definition. Consider message sent by peer . Deliver() is consistent if and only if for any multicast in Scope(), Deliver() Send() implies Deliver() Deliver().

Thus, the immediate delivery of received messages would be inconsistent in Figs. 1 and 2. The offending message is from to in Fig. 1, the acknowledgement in Fig. 2(a) and send in Fig. 2(b).

Since Send() Deliver() and there are no intermediate events at between them, this definition is the same as the common definition for causal multicasts [PRS], i.e. Send() Send() implies Deliver() Deliver(), except our definition orders delivery only if is a multicast that is in Scope. In particular, we allow non-FIFO delivery if and are both unicasts. One can easily use sequence numbers to impose FIFO unicast delivery.

Figure 3: Consistency definition.

The following result justifies the definition:

Proposition. If a request is multicast and delivered consistently (according to the above definition), and each peer replies with its state at delivery time, then the replies constitute a global snapshot that is consistent (by Chandy and Lamport's definition [CL]).

Since ScopeCast delivers a peer's join announcement consistently (see Theorem), this result implies that the join
acknowledgements yield a consistent time cut [BM] that defines scope().

3 ScopeCast: Description and Costs

Each peer has a unique identifier PeerID. Each message carries a sequence number .SeqNum, which is 0 for join announcements, incremented for every multicast but unchanged for unicasts. For each peer , ScopeCast maintains a message queue OnHold, and two lists: History contains [PeerID, SeqNum] records for the latest messages from PeerID that are delivered at ; Frontier contains those History records for multicasts that are not causally preceded by other multicasts delivered at .

3.1 Protocol Description

The pseudocode for the main ScopeCast routines is shown in Fig. 4(a), and the utilities in Fig. 4(b).

3.2 Costs and Control

Peer 's History keeps just one [ , SeqNum] pair in local storage for each that sent to , which is cheap. The size of Frontier depends on the communication pattern; each delivered message adds one pair, but may also remove others; moreover, Frontier is reset (line9) whenever sends a multicast.

ScopeCast can be tuned in various ways. If 's Frontier reaches a (say, user-specified) threshold in size, ScopeCast can flush it by multicasting a NULL message. Similarly, if the message queue overflows, or the time between message arrival and delivery reaches a threshold, ScopeCast can notify the user and flush (i.e. deliver) all or part of the queue. A lengthy queue or long delay may be caused by an overload or some failure in the network, and abandoning consistency is reasonable in such cases.

When the user or application intervenes by flushing, we view them as taking the responsibility to resolve any resulting inconsistency. ScopeCast therefore considers the flushed messages (and the missing messages they were waiting for) as delivered consistently. With this assumption, subsequent delivery by ScopeCast is consistent.

A new peer relies on a joining acknowledgement (line23) or Frontier records carried by arriving messages to decide if an arrived message is in Scope(). If this information is slow in
coming,’sOnHold list may grow indefinitely(as if there is a network failure).A new peer may therefore need to flush and accept some initial inconsistency.Flushing at a peer does not affect consistent de-livery at other peers.In fact,a peer can choose to flush every arriving message,i.e.ignore consistency.Such options may help put aside reservations about ordering message delivery.Birman’s review of how real users use multicasts in deployed systems cautions against“excessive transparency”and advo-cates some visibility,so knowledgeable users can make trade-offs[B].4Correctness ProofThis sectionfirst discusses the underlying assump-tions,before proving that ScopeCast is correct.4.1AssumptionsMessage duplication in lower layers is not an issue for ScopeCast,since multicast duplicates are dis-carded(line19and line29),while unicast duplicates do not affect ScopeCast’s consistency and state.Network failures and buffer overflows may cause message loss,but that does not affect ScopeCast’s consistency per se.Failures can prevent messages from being received but(by our definition)there is no correctness issue since the messages are not de-livered.The effect of message loss on ScopeCast is to de-lay delivery;if delivery is forced byflushing,con-sistency is lost for the forced delivery but resumes thereafter(Section3.2).4InitializePeer():SeqNum=;line1 Frontier=NULL;line2 ScopeMulticast(NULL,“ALL”,“JOIN”);//null message to announce JOIN to all peers line3 Quit():ScopeMulticast(NULL,“ALL”,“QUIT”);line4 ScopeMulticast(data,dest,type)://type is UNICAST,MULTICAST,JOIN,ACK or QUIT SeqNum=SeqNum+1;line5 msg=new ScopeCastMsg(myPeerID,dest,SeqNum,Frontier,data,type);//dest is multicast group line6 multicast(msg,dest);//now messages in Frontier precede msg...line7 UpdateHistory(myPeerID,SeqNum);line8 Frontier=[myPeerID,SeqNum];//...so Frontier contains msg alone;line9 ScopeUnicast(data,dest,type)://unicast:SeqNum,History,(multicast)Frontier unchangedmsg=new 
ScopeCastMsg(myPeerID,dest,SeqNum,Frontier,data,type);line10 unicast(msg,dest);line11 ScopeReceive(msg):if(ProcessMsg(msg))//if processing msg changes History...line12 for(i=onhold.size();i;i)//...then process the messages line13 m=onhold.dequeue();//...that are on hold line14 if(ProcessMsg(m))i=onhold.size()+1;//i is reset,in case ProcessMsg changes History line15ProcessMsg(msg):if(msg.type==“UNICAST”||msg.type==“ACK”)changed=ProcessUnicast(msg)line16else changed=ProcessMulticast(msg)line17 return changed;line18 ProcessMulticast(msg):if(InHistory(msg.Sender,msg.SeqNum))Discard(msg);return FALSE;//old multicast line19 else if(ToHold(msg))onhold.enqueue(msg);return FALSE;line20 else UpdateHistory(msg.Sender,msg.SeqNum);line21 UpdateFrontier(msg);line22 if(msg.type==“JOIN”)Ack(msg);//this ack helps the new peer initialize its History line23 if(msg.type==“MULTICAST”)Deliver(msg.data);//do not deliver QUIT messages line24 return TRUE;line25ProcessUnicast(msg)://sender knows dest,so sender must know dest has joinedif(msg.type==“ACK”)if(History.contains(msg.sender,*))Discard(msg);return FALSE;//History already initializedelse UpdateHistory(msg.Sender,msg.SeqNum);return TRUE;line26 else if(ToHold(msg))onhold.enqueue(msg);return FALSE;line27 else Deliver(msg.data);return FALSE;line28Fig.4(a)Main ScopeCast routines.5InHistory(PeerID,SeqNum):return(History.contains(PeerID,SeqNum)&&(SeqNum SeqNum));line29 ToHold(msg):for(each[PeerID,SeqNum]in msg.Frontier)//have we seen the multicasts that preceded msg?line30 if(!InHistory(PeerID,SeqNum))return TRUE;line31 return FALSE;line32 UpdateHistory(PeerID,SeqNum):record=History.lookup(PeerID);line33 if(record==NULL)History.add([PeerID,SeqNum]);line34 else if(record.SeqNum SeqNum)record.SeqNum=SeqNum;line35 UpdateFrontier(msg):for(each[PeerID,SeqNum]in msg.Frontier)//remove records for messages preceding msg...line36 Frontier.remove([PeerID,SeqNum]);line37 Frontier.add([msg.Sender,msg.SeqNum]);//...because msg is now on the Frontier 
line38Fig.4(b)Utilities for ScopeCast.4.2ProofsLet and be local times at peer,and Delivered(multicastDeliver.time.Then the con-sistency definition has the following property,whichcan be used for induction:Induction Lemma(See Fig.5)Let Msg,Msg,Send.time and Send.time.Suppose Deliver Deliver and Deliver Deliver for allDelivered Scope.If Deliver is consistent,then Deliver is consistent.Proof As per the consistency definition,con-sider any multicast Scope such that Deliver Send.We must show that Deliver Deliver.Send Deliver:Then(see Fig.5)Delivered. Since Scope as well,we get Deliver Deliver.Consider now a peer’s view of the frontier: Frontier LemmaLet be local time at and let.Frontier be the set of messages for records in.Frontier at time.(i)If has not sent any messages before,then.Frontier.6PQ M"Fig.6Frontier Lemma (ii):.Frontier.(ii)If is the last message sent byat time(see Fig.6),letDelivered DeliveredDeliver .Sender Send .Sender .Then .Frontier Delivered.(iii)If .Frontier has a record for ,then Deliver Deliver for any peer .Proof If has not even sent its join announce-ment,then (i)follows from line2.Otherwise,for (ii),consider when a record [PeerID,SeqNum]for is added to and removed from .Frontier.Addition happens only when is delivered (line22and line38).Removal happens when sends amulticast (line9)or [PeerID,SeqNum]appears in .Frontier for some delivered at (line37).A removed record will not be added to .Frontier again because of the filtering by SeqNum (line19).It follows that .Frontier Delivered for (ii).For (iii),would be on hold (lines 20,27,30and 31)at anyuntil is delivered at .TheoremMessage delivery in ScopeCast is consistent.Proof Consistency for a multicast does not de-pend on preceding unicasts (Fig.3),so we may firstignore the unicasts and consider just the multicasts.Consider the first multicastMsg that is within Scope().If ,then is ’s join announcement;has no preceding multicasts,so Deliver is consistent (line23and line24).Otherwise,is the first 
multicast by after it acknowledged 's join announcement. ScopeCast checks every message in .Frontier (line20) to see if it is within scope and already delivered, so Deliver is again consistent.

We thus have the base case for an induction on , so now consider message Msg . In the following, note (line19) that we only need consider delivery of messages that are in Scope. Let , , and be as in the Induction Lemma (Fig. 5). ScopeCast imposes first-in-first-out delivery for multicasts from to (line30), so Deliver Deliver. We first show that Deliver Deliver for all Deliver Scope .

If .Frontier has a record for , we get Deliver Deliver from Frontier Lemma (iii). If not, by Frontier Lemma (ii), 's record was removed from .Frontier by some Delivered. Let , , be in Delivered, such that delivery of at removes the record for from .Frontier, but .Frontier has a record for , so .Frontier carries that record. The record removal by means (line37) that .Frontier has a record for , so Deliver Deliver by Frontier Lemma (iii). Similarly, Deliver Deliver. Since , we get Deliver Deliver as desired.

Now an induction on using the Induction Lemma proves that Deliver is consistent, so ScopeCast delivers all the multicasts consistently.

For a unicast, consider any preceding multicast delivered at before Send .time.
If .Frontier has a record for , then Frontier Lemma (iii) gives Deliver Deliver. If not, there are again multicasts (possibly from ) etc. delivered at that induce the ordering Deliver Deliver.

5 Summary

Engineering a distributed system is easier if there is a service for consistent multicast delivery. For peer-to-peer systems, inconsistency is more likely because membership is dynamic, and wide-area network delays make message reordering and concurrent join/quit events more likely.

We propose ScopeCast, a protocol for ensuring consistent delivery of peer-to-peer multicasts, and proved its correctness. It does not assume group membership is known or fixed. Furthermore, if there is network failure or traffic congestion, delivery of received messages can be forced; consistent delivery resumes after this interruption.

Simulations show that the delay between message reception and delivery grows less quickly than end-to-end network delay [S]. On the other hand, they also show that, because ScopeCast does not use any supporting logical or physical infrastructure, its overheads limit it to small peer groups. We present elsewhere another multicast protocol [S] that uses a spanning tree supported by a distributed hash table, and is thus more scalable.

References

[AHJ] M. Ahamad, P. Hutto, and R. John, Implementing and programming causal distributed memory, Proc. ICDCS, Arlington, TX (May 1991), 271-281.

[AN] N. Adly and M. Nagi, Maintaining causal order in large scale distributed systems using a logical hierarchy, Proc. IASTED Int. Conf. on Applied Informatics, Innsbruck, Austria (Feb. 1995), 214-219.

[B] K.P. Birman, A review of experiences with reliable multicasts, Software: Practice and Experience 29, 9 (July 1999), 741-774.
[BSS] K. Birman, A. Schiper, and P. Stephenson, Lightweight causal and atomic group multicast, ACM Trans. Computer Systems 9, 3 (Aug. 1991), 272-314.

[BM] Ö. Babaoğlu and K. Marzullo, Consistent global states, in Distributed Systems, S. Mullender (ed.), Addison-Wesley (1993), 55-96.

[C] I. Clarke, O. Sandberg, B. Wiley and T.W. Hong, Freenet: a distributed anonymous information storage and retrieval system, LNCS 2009, Springer-Verlag (2001), 46-66.

[CL] K.M. Chandy and L. Lamport, Distributed snapshots: determining global states of distributed systems, ACM Trans. Computer Systems 3, 1 (Feb. 1985), 63-75.

[DKM] D. Dolev, S. Kramer and D. Malki, Early delivery totally ordered multicast in asynchronous environments, Proc. FTCS, Toulouse, France (June 1993), 544-553.

[EMS] P.D. Ezhilchelvan, R.A. Macêdo and S.K. Shrivastava, Newtop: a fault-tolerant group communication protocol, Proc. ICDCS, Vancouver, Canada (May 1995), 296-306.

[HT] V. Hadzilacos and S. Toueg, Fault-tolerant broadcasts and related problems, in Distributed Systems, S. Mullender (ed.), Addison-Wesley (1993), 97-145.

[KD] I. Keidar and D. Dolev, Efficient message ordering in dynamic networks, Proc. PODC, Philadelphia, PA (May 1996), 68-76.
[MPS] S. Mishra, L.L. Peterson, R.D. Schlichting, Implementing fault-tolerant replicated objects using Psync, Proc. SRDS, Seattle, WA (Oct. 1989), 42-52.

[RD] A. Rowstron and P. Druschel, Pastry: scalable, distributed object location and routing for large-scale peer-to-peer systems, Proc. IFIP/ACM Middleware 2001, Heidelberg, Germany (Nov. 2001).

[RM] B. Rajagopalan and P.K. McKinley, A token-based protocol for reliable, ordered multicast communication, Proc. SRDS, Seattle, WA (Oct. 1989), 84-93.

[RST] M. Raynal, A. Schiper and S. Toueg, The causal ordering abstraction and a simple way to implement it, Information Processing Letters 39, 6 (1991), 343-350.

[S] X.B. Shen, On concurrent states and peer-to-peer multicasts, MSc Thesis, National University of Singapore (2003).

[YHH] L.-H. Yen, T.-L. Huang, S.-Y. Hwang, A protocol for causally ordered message delivery in mobile computing systems, MONET 2, 4 (Jan. 1998), 365-372.

[Z] S.Q. Zhuang, B.Y. Zhao, A.D. Joseph, R.H. Katz, J. Kubiatowicz, Bayeux: an architecture for scalable and fault-tolerant wide-area data dissemination, Proc. NOSSDAV, Port Jefferson, NY (June 2001), 11-20.
