CLUSMA: A Mobile Agent based clustering middleware for Wireless Sensor Networks


Application of Genetic Algorithms in Association Rule Mining
任颖; 李华伟; 吕红; 吕海燕; 赵媛
[Journal] 《电脑知识与技术》 (Computer Knowledge and Technology)
[Year (Volume), Issue] 2009(005)016
[Abstract] Association rule mining is an important research direction in data mining. This paper gives an overview of association rule mining and of genetic algorithms, and proposes an association rule extraction algorithm based on an improved genetic algorithm. Finally, a worked example shows how a genetic algorithm can be used to mine association rules.
[Pages] 2 (P4260-4261)
[Authors] 任颖; 李华伟; 吕红; 吕海燕; 赵媛
[Affiliations] 海军航空工程学院, Yantai, Shandong 264001; 山东商务职业学院, Yantai, Shandong 264001; 海军航空工程学院, Yantai, Shandong 264001; 海军航空工程学院, Yantai, Shandong 264001; 海军航空工程学院, Yantai, Shandong 264001
[Language] Chinese
[CLC number] TP18
[Related literature]
1. Parallel genetic algorithms and their application in association rule mining [J], 石杰
2. Research and application of genetic algorithms in association rule mining [J], 赵艳丽
3. Application of an adaptive niche genetic algorithm in association rule mining [J], 杨小影; 冯艳茹; 钱娜
4. Application of genetic algorithms in association rule mining [J], 任颖; 李华伟; 吕红; 吕海燕; 赵媛
5. Application of genetic-algorithm-based association rule mining to physical fitness monitoring and analysis [J], 彭中莲
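As a concrete illustration of the approach the abstract describes (not code from the paper), here is a minimal sketch of a support/confidence fitness function such as a genetic algorithm for rule mining might use; the toy transactions, fitness weights, and mutation operator are all illustrative assumptions:

```python
import random

# Toy transaction database: each transaction is a set of item IDs.
TRANSACTIONS = [{1, 2, 3}, {1, 3}, {2, 3, 4}, {1, 2, 3, 4}, {3, 4}]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    if not itemset:
        return 0.0
    hits = sum(1 for t in TRANSACTIONS if itemset <= t)
    return hits / len(TRANSACTIONS)

def fitness(antecedent, consequent, w_sup=0.5, w_conf=0.5):
    """Weighted support/confidence fitness for a candidate rule A => C."""
    sup_ac = support(antecedent | consequent)
    sup_a = support(antecedent)
    conf = sup_ac / sup_a if sup_a > 0 else 0.0
    return w_sup * sup_ac + w_conf * conf

def mutate(antecedent, items=(1, 2, 3, 4)):
    """One GA mutation step: toggle a random item in the antecedent."""
    item = random.choice(items)
    return antecedent ^ {item}  # symmetric difference toggles membership

rule = ({1}, {3})
print(fitness(*rule))                    # fitness of rule {1} => {3}
print(fitness(mutate(rule[0]), rule[1]))  # fitness of a mutated neighbour
```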

An Efficient Distributed Verification Protocol for Data Storage Security in Cloud Computing


Syam Kumar P. (Dept. of Computer Science, IFHE (Deemed University), Hyderabad, India, shyam.553@), Subramanian R. and Thamizh Selvam D. (Dept. of Computer Science, School of Engineering and Technology, Pondicherry University, Puducherry, India, rsmanian.csc@.in, dthamizhselvam@)
2013 Second International Conference on Advanced Computing, Networking and Security

Abstract: Data storage is an important application of cloud computing: clients can store their data remotely in the cloud and be relieved of the burden of local data storage and maintenance. This new paradigm of data storage service also introduces new security challenges; one of these risks is to the integrity of the data stored in the cloud. To overcome this threat, the client must be able to use the assistance of a Third Party Auditor (TPA), such that the TPA verifies the integrity of the data stored in the cloud with the client's public key on the client's behalf. Existing schemes with a single verifier (TPA) may not scale well for this purpose. In this paper, we propose an Efficient Distributed Verification Protocol (EDVP) to verify the integrity of data in a distributed manner with the support of multiple verifiers (multiple TPAs) instead of a single verifier (TPA). Through extensive security, performance and experimental results, we show that our scheme is more efficient than a single-verifier based scheme.

Keywords: cloud storage, integrity, client, TPA, SUBTPAs, verification, cloud computing

I. INTRODUCTION
Cloud computing is a large-scale distributed computing paradigm in which a pool of computing resources is available to clients via the Internet. Cloud computing resources are accessible as public utility services, such as processing power, storage, software, and network bandwidth. Cloud storage is a new business solution for remote backup outsourcing, as it offers an abstraction of infinite storage space for clients to host data backups in a pay-as-you-go manner [1]. It helps enterprises and government agencies significantly reduce their financial overhead of data management, since they can archive their data backups remotely with third-party cloud storage providers rather than maintaining local computers on their own. Amazon S3, for example, is a well-known storage service.

The growth of data storage in the cloud has brought much attention and concern to the security of this data. One important issue with cloud data storage is data integrity verification at untrusted cloud servers. For example, the storage service provider, which experiences Byzantine failures occasionally, may decide to hide data loss incidents from the clients for its own benefit. More seriously, to save money and storage space the service provider might neglect to keep, or even deliberately delete, rarely accessed data files belonging to thin clients. Considering the large size of the outsourced data and the client's constrained resource capability, the main problem can be generalized as: how can the client perform periodic integrity verifications efficiently without a local copy of the data files?

To verify the integrity of data in the cloud without a local copy of the data files, several integrity verification protocols have recently been developed under different systems [2-13]. All of these protocols verify the integrity of data with a single verifier (TPA): one Third Party Auditor verifies the integrity of the data through a challenge-response protocol.
In that verification process, the TPA stores the metadata corresponding to the file blocks, creates a challenge, and sends it to the CSP. The CSP generates the integrity proof for the corresponding challenge and sends it back to the TPA. The TPA then verifies the response against the previously stored metadata and gives the final audit result to the client. However, in this single-auditor system, if the TPA crashes under a heavy workload, the whole verification process is aborted. In addition, during verification the network traffic near the TPA's organization will be very high and may create congestion. Performance therefore degrades in single-auditor verification schemes, and an efficient distributed verification protocol is needed to verify the integrity of data in the cloud.

In this paper, we propose an Efficient Distributed Verification Protocol (EDVP) to verify the integrity of data in a distributed manner with the support of multiple verifiers (multiple TPAs) instead of the single verifier (TPA) used in prior work [2-13]. In our protocol, many SUBTPAs work concurrently under a single TPA, and the workload is distributed uniformly among the SUBTPAs so that together they verify the whole file; if the TPA fails, one of the SUBTPAs acts as the TPA. Our protocol detects data corruption in the cloud efficiently when compared to single-verifier based protocols.

Our protocol design distributes the RSA-based Dynamic Public Audit Service for integrity verification of cloud data proposed by Syam et al. [11]. Here, n verifiers challenge the servers uniformly, and if the responses of m out of n servers are correct, we conclude that the integrity of the data is ensured. Among the multiple TPAs used in the verification process, one TPA acts as the main TPA and the remaining ones are SUBTPAs. The main TPA uses all SUBTPAs to detect data corruption efficiently; if the main TPA fails, one of the SUBTPAs becomes the main TPA. The SUBTPAs do not communicate with each other; they verify the integrity of the stored data in the cloud and the consistency of the provider's responses. The proposed system guarantees atomic operations to all TPAs: the operations a TPA observes from each SUBTPA are consistent, in the sense that its own operations, plus those operations whose effects it sees, have occurred atomically in the same sequence.

In this centrally controlled, distributed-data paradigm, where all SUBTPAs are controlled by the TPA and each SUBTPA can communicate with any cloud data storage server, we consider a synchronous distributed system with multiple TPAs and servers. Every SUBTPA is connected to a server through a synchronous reliable channel that delivers challenges to the server. The SUBTPAs and the server together are called the parties P. A protocol specifies the behaviour of all parties. An execution of P is a sequence of alternating states and state transitions, called events, which occur according to the specification of the system components.
All SUBTPAs follow the protocol; in particular, they do not crash. Every SUBTPA has some small local trusted memory, which serves to store distribution keys and authentication values. The server might be faulty or malicious and deviate arbitrarily from the protocol; such behaviour is also called Byzantine failure. The synchronous system assumption comes down to the following two properties:

1. Synchronous computation. There is a known upper bound on processing delays: the time taken by any process to execute a step is always less than this bound. A step gathers the delivery of a message (possibly nil) sent by some other process, a local computation (possibly involving interaction among several layers of the same process), and the sending of a message to some other process.

2. Synchronous communication. There is a known upper bound on challenge/response transmission delays: the time between the instant at which a challenge is sent and the instant at which the response is delivered by the destination process is less than this bound.

II. RELATED WORK
Bowers et al. [2] introduced a High Availability and Integrity Layer (HAIL) protocol to solve the availability and integrity problems in cloud computing using error-correcting codes and universal hash functions (UHFs). The scheme achieves the availability and integrity of data, but it supports only private verifiability.

To support public verifiability of data integrity, Barsoum et al. [3] proposed dynamic multiple data copies over cloud servers, based on multiple replicas; the scheme achieves the availability and integrity of data stored in the cloud. Public verification enables a third-party auditor (TPA) to verify the integrity of data in the cloud with the data owner's public key on the owner's behalf. Wang et al. [4] designed a scheme enabling public auditability and data dynamics for data storage security in cloud computing using a Merkle Hash Tree (MHT); it guarantees data integrity with efficient dynamic data operations and public verifiability. Similarly, Wang et al. [5] proposed a flexible distributed verification protocol to ensure the dependability, reliability and correctness of outsourced data in the cloud, utilizing homomorphic tokens and distributed erasure-coded data; the scheme allows users to audit the outsourced data with low communication and computation cost while simultaneously detecting malfunctioning servers. In subsequent work, Wang et al. [6] developed privacy-preserving data storage security in cloud computing; their construction uniquely combines a public-key-based homomorphic authenticator with random masking, achieving integrity while keeping the data private from the auditor. Similarly, Hao et al. [7] proposed a privacy-preserving remote data integrity checking protocol with data dynamics and public verifiability; it achieves a deterministic integrity guarantee and does not leak any information to third-party auditors. Zhu et al. [8] designed a dynamic audit service to verify the integrity of outsourced data at untrusted cloud servers; their audit system supports public verifiability and timely abnormal detection with the help of a fragment structure, random sampling, and an index hash table. Yang et al. [9] proposed provable data possession for resource-constrained mobile devices in cloud computing.
In their framework, the mobile terminal devices only need to generate some secret keys and random numbers with the help of trusted platform module (TPM) chips; the required computing workload and storage space fit mobile devices through the use of bilinear signatures and a Merkle hash tree (MHT), and the scheme aggregates the verification tokens of the data file into one small signature to reduce the communication and storage burden.

Although all these schemes achieve remote data integrity assurance under different systems, they do not provide strong integrity assurance to clients, because their verification processes use pseudorandom sequences. Verification based on a pseudorandom sequence may fail to detect modifications to some data blocks: since a pseudorandom sequence is not uniform (uncorrelated numbers), it does not cover the entire file when generating the integrity proof for a challenge. Probabilistic integrity checking methods using pseudorandom sequences therefore may not provide strong integrity assurance for users' remotely stored data.

To provide better integrity assurance, Syam et al. [10] proposed a homomorphic distributed verification protocol using Sobol sequences instead of the pseudorandom sequences of [2-9]; their protocol ensures the availability and integrity of data and also detects data corruption efficiently. In subsequent work, Syam et al. [11] described an RSA-based dynamic public audit protocol for integrity verification of data stored in the cloud; this scheme gives probabilistic proofs based on random challenges and, like [10], detects data modifications on the file. Similarly, Syam et al. [12] developed an efficient and secure protocol for both confidentiality and integrity of data with public verifiability and dynamic operations; their construction uses elliptic curve cryptography instead of RSA, because ECC offers the same security as RSA with a smaller key size. Later, Syam et al. [13] proposed a publicly verifiable dynamic secret sharing protocol for the availability, integrity, and confidentiality of data with public verifiability.

Although all these schemes achieve remote data integrity under different systems with a single TPA, single-auditor verification protocols rely on one Third Party Auditor (TPA) to verify the integrity of data through a challenge-response protocol; if that TPA crashes under heavy workload, the whole verification process is aborted.

III. PROBLEM STATEMENT
A. Problem Definition
In cloud data storage, the client stores data in the cloud via the cloud service provider. Once the data moves to the cloud, the client has no control over it, i.e., there is no security for the outsourced data; even if the Cloud Service Provider (CSP) provides standard security mechanisms to protect the data from attackers, threats to cloud data storage remain, since it is under the control of a third-party provider: data leakage, data corruption, and data loss. Thus, how can the user efficiently and frequently verify whether the cloud server is storing the data correctly and has not tampered with it? We note that the client can verify the integrity of data stored in the cloud without holding a local copy of the data and without any knowledge of the entire data. If clients do not have the time to verify the security of the data stored in the cloud, they can assign this task to a trusted Third Party Auditor (TPA).
The TPA verifies the integrity of the data on behalf of clients using their public key.

B. System Architecture
The network architecture for cloud data storage consists of four parts: the Client, the Cloud Service Provider (CSP), Third Party Auditors (TPAs), and SUBTPAs, as depicted in Fig. 1.

Fig. 1: Cloud Data Storage Architecture

Client: Clients are those who have data to be stored and who access the data with the help of the Cloud Service Provider (CSP). They are typically desktop computers, laptops, mobile phones, tablet computers, etc.

Cloud Service Provider (CSP): CSPs have major resources and expertise in building and managing distributed cloud storage servers, and they provide applications, infrastructure, hardware, and enabling technology to customers via the Internet as a service.

Third Party Auditor (TPA): The TPA has expertise and capabilities that users may not have, and it verifies the security of cloud data storage on behalf of users.

SUBTPAs: The SUBTPAs verify the integrity of data concurrently under the control of the TPA.

Throughout this paper, the terms verifier and TPA, and server and CSP, are used interchangeably.

C. Security Threats
Cloud data storage mainly faces the challenge of data corruption.

Data corruption: the cloud service provider, malicious cloud users, or other unauthorized users are self-interested in altering or deleting user data.

Two types of attackers disturb data storage in the cloud:
1) Internal attackers: malicious cloud users and malicious third-party users (of either the cloud provider or customer organizations) are self-interested in altering or deleting users' personal data stored in the cloud. Moreover, they may decide to hide data loss caused by server hacks or Byzantine failures in order to maintain the provider's reputation.
2) External attackers: we assume that an external attacker can compromise all storage servers, so that he can intentionally modify or delete users' data as long as the servers remain internally consistent.

D. Goals
To address the integrity of data stored in cloud computing, we propose an Efficient Distributed Verification Protocol for ensuring data storage integrity with the following goals:
Integrity: the data is stored safely in the cloud and maintained there at all times without any alteration.
Low overhead: the proposed scheme verifies the security of data stored in the cloud with little overhead.

E. Preliminaries and Notations
- $f_{key}(\cdot)$: a Sobol Random Function (SRF) indexed on some key, defined as $f : \{0,1\}^* \times key \rightarrow GF(2^w)$.
- $\pi_{key}(\cdot)$: a Sobol Random Permutation (SRP) indexed under a key, defined as $\pi : \{0,1\}^{\log_2(l)} \times key \rightarrow \{0,1\}^{\log_2(l)}$.

IV. EFFICIENT DISTRIBUTED VERIFICATION PROTOCOL: EDVP
The EDVP protocol is designed on top of the RSA-based Dynamic Public Audit Protocol (RSA-DPAP) proposed by Syam et al. [11]; EDVP concentrates mainly on the verification phase of RSA-DPAP. EDVP contains three phases: 1) key distribution, 2) verification, and 3) integrity validation. The process of EDVP is as follows: first, the TPA generates the keys and distributes them to the SUBTPAs; the SUBTPAs then verify the integrity of the data and report their results to the main TPA; finally, the main TPA validates integrity by observing the reports from the SUBTPAs.
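Before the phases are detailed, here is a small sketch of how challenge block indices can be drawn from a Sobol sequence; it uses SciPy's qmc module rather than the generator of [15], and the block and challenge counts are illustrative assumptions:

```python
import numpy as np
from scipy.stats import qmc

def sobol_block_indices(n_blocks, n_challenges, seed=7):
    """Draw challenge block indices from a scrambled Sobol sequence.

    Sobol points are uniformly distributed over [0, 1), so scaling them
    to [0, n_blocks) spreads the challenges across the whole file, which
    is the coverage property the paper relies on.
    """
    sampler = qmc.Sobol(d=1, scramble=True, seed=seed)
    u = sampler.random(n_challenges).ravel()        # points in [0, 1)
    return np.unique((u * n_blocks).astype(int))    # distinct block ids

# 32 challenges, matching the paper's concrete example of 32 Sobol numbers.
print(sobol_block_indices(n_blocks=100000, n_challenges=32))
```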
A. Key Distribution
In key distribution, the TPA generates a random key and distributes it to its SUBTPAs as follows. The TPA first generates the random key using a Sobol random function [15], computing

$$K = f_{k_1}(i), \quad 1 \le i \le n, \qquad (1)$$

where the function is indexed on some (usually secret) key: $f : \{0,1\}^* \times key \rightarrow \mathbb{Z}_p$. The TPA then employs an (m, n) secret sharing scheme [14] and partitions the random key K into n pieces. To divide K into n pieces, a polynomial a(x) of degree m-1 is selected and the n pieces are computed as

$$K_i = K + a_1 i + a_2 i^2 + \cdots + a_{m-1} i^{m-1}, \qquad (2)$$

that is,

$$K_i = K + \sum_{j=1}^{m-1} a_j i^j. \qquad (3)$$

After that, the TPA chooses n SUBTPAs and distributes the n pieces to them. The procedure of key distribution is given in Algorithm 1.

Algorithm 1: Key Distribution
1. TPA generates a random key K using the Sobol sequence: $K = f_{k_1}(i)$
2. TPA partitions K into n pieces using the (m, n) secret sharing scheme
3. TPA selects the number of SUBTPAs n and the threshold value m
4. for i <- 1 to n do
5.   TPA sends $K_i$ to SUBTPA$_i$
6. end for
7. end
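Equations (2)-(3) are Shamir's (m, n) secret sharing [14]. A minimal self-contained sketch follows; the toy prime field and secret are illustrative assumptions (Python 3.8+ for the modular inverse via pow):

```python
import random

P = 2**31 - 1  # toy prime field; a deployment would use a larger modulus

def split_key(K, m, n):
    """Split secret K into n shares; any m shares reconstruct K (Shamir)."""
    coeffs = [K] + [random.randrange(P) for _ in range(m - 1)]
    # Share for participant i is the degree-(m-1) polynomial at x = i.
    return {i: sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)}

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over any m shares."""
    K = 0
    for i, y in shares.items():
        num, den = 1, 1
        for j in shares:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        K = (K + y * num * pow(den, -1, P)) % P
    return K

shares = split_key(123456789, m=3, n=5)
subset = {i: shares[i] for i in (1, 3, 5)}   # any 3 of the 5 shares
assert reconstruct(subset) == 123456789
```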
B. Verification Process
In the verification process, all SUBTPAs verify the integrity of the data and report their results to the TPA; if the responses of m SUBTPAs meet the threshold, the TPA declares the integrity of the data valid. At a high level, the protocol operates as follows. The TPA assigns a local timestamp to every SUBTPA operation, and every SUBTPA maintains a timestamp vector T in its trusted memory: at SUBTPA$_i$, entry T[j] equals the timestamp of the most recently executed operation by SUBTPA$_j$ in some view of SUBTPA$_i$.

To verify the integrity of the data, each SUBTPA creates a challenge and sends it to the CSP. First, the SUBTPA generates a set of random indices c from the set [1, n] using a Sobol Random Permutation (SRP) with its random key,

$$j = \pi_{K_j}(c), \quad 1 \le c \le l, \qquad (4)$$

where $\pi_{key}(\cdot)$ is a Sobol Random Permutation indexed under a key: $\pi : \{0,1\}^{\log_2(l)} \times key \rightarrow \{0,1\}^{\log_2(l)}$. Next, each SUBTPA chooses a fresh random key $r_j$,

$$r_j = f_{k_2}(l), \qquad (5)$$

and creates the challenge chal = {j, $r_j$}, a pair of random indices and random values. Each SUBTPA sends its challenge to the CSP and waits for the response; the CSP computes a response to each SUBTPA's challenge and sends the responses back.

When a SUBTPA receives the response message, it first checks the timestamp, making sure that V <= T (using vector comparison) and that V[i] = T[i]. If not, it aborts the operation and halts, meaning the server has violated the consistency of the service. Otherwise, the SUBTPA COMMITs the operation and checks whether the stored metadata and the response (integrity proof) match. If they do, it stores TRUE in its table and sends a TRUE message to the TPA; otherwise it stores FALSE and sends a FALSE signal to the TPA for the corrupted file blocks. The detailed procedure of the verification process is given in Algorithm 2.

Algorithm 2: Verification Process
1. Procedure: Verification Process
2. Timestamp T
3. Each SUBTPA$_i$ computes
4.   $j = \pi_{K_j}(c)$
5.   Generate the Sobol random key $r_j$
6.   Send chal = (j, $r_j$) as a challenge to the CSP
7.   The server computes the proof PR$_i$ and sends it back to the SUBTPAs
8.   PR$_i$ <- Receive(V)
9.   if (V <= T and V[i] = T[i])
10.    return COMMIT then
11.    if PR$_i$ equals the stored metadata then
12.      return TRUE
13.      Send signal (Packet$_j$, TRUE$_i$) to the TPA
14.    else
15.      return FALSE
16.      Send signal (Packet$_i$, FALSE$_i$) to the TPA
17.    end if
18.  else
19.    ABORT and halt the process
20.  end if
21. end

C. Validating Integrity
To validate the integrity of the data, the TPA receives reports from any subset of m out of the n SUBTPAs. If those m SUBTPAs give the TRUE signal, the TPA decides that the data is not corrupted; otherwise it decides that the data has been corrupted. In the final step, the TPA gives an audit result to the client. Algorithm 3 gives the validation procedure, which generalizes the verification protocol to a distributed setting; we can therefore run distributed verification on top of the scheme of [11].

Algorithm 3: Validating Integrity
1. Procedure: validation(i)
2. TPA receives the responses from the m SUBTPAs
3. for i <- 1 to m do
4.   if (response == TRUE)
5.     integrity of data is valid
6.   else if (response == FALSE)
7.     integrity is not valid
8.   end if
9. end for
10. end

V. ANALYSIS OF EDVP
In this section, we analyse the security and the performance of EDVP.

A. Security Analysis
In the security analysis, we analyze the integrity of the data in terms of the probability of detection.

Probability of detection: verification activities naturally increase the communication and computational overheads of the system. To improve performance, we use the secret sharing technique [14] to distribute the key K with minimal communication and tractable computational complexity; this reduces the communication overhead between the TPA and the SUBTPAs. For a new verification, the TPA can change the key K for any SUBTPA and send only the differing part of the multiset elements to that SUBTPA. In addition, we use a probabilistic verification scheme based on Sobol sequences, which are uniform not only over whole sequences but also over every subsequence, so each SUBTPA independently verifies across the entire set of file blocks. There is therefore a high probability of locating faults quickly, and the Sobol sequence provides a strong integrity proof for the remotely stored data. The probability of detecting data corruption with this protocol is the same as in the previous protocols [9-12]. In EDVP, we use a Sobol random sequence generator to generate the file block numbers, because the sequences are uniformly distributed over [0, 1] and cover the whole region; to obtain integers, we multiply the generated sequences by constant powers of two. As a concrete example, consider taking 32 numbers from the Sobol sequence.
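As a hedged aside, the uniform-sampling analysis standard in data-possession schemes (not an equation stated in this paper) quantifies the gain: if $t$ of the $n$ stored blocks are corrupted and a verifier challenges $c$ distinct blocks chosen uniformly at random, then

$$P_{\text{detect}} = 1 - \prod_{i=0}^{c-1} \frac{n-t-i}{n-i} \;\ge\; 1 - \Bigl(1 - \frac{t}{n}\Bigr)^{c},$$

so SUBTPAs whose Sobol-driven challenges jointly cover more distinct blocks raise the effective $c$, and with it the detection probability, without raising any single verifier's load.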
B. Performance Analysis and Experimental Results
In this section, we evaluate the verification time for validating integrity and compare the experimental results with the previous single-verifier based protocol [11], as shown in Tables 1-3. Tables 4 and 5 show the computation cost of the verifier and of the CSP, respectively.

Table 1: Verification times (sec) with 5 verifiers when different percentages of 100,000 blocks are corrupted

Corrupted data (%) | Single-verifier protocol [11] | EDVP (5 verifiers)
 1%                |  25.99                        |  12.14
 5%                |  53.23                        |  26.55
10%                |  70.12                        |  38.63
15%                |  96.99                        |  51.22
20%                | 118.83                        |  86.44
30%                | 135.63                        | 102.89
40%                | 173.45                        | 130.85
50%                | 216.11                        | 153.81

Table 2: Verification times (sec) with 10 verifiers when different percentages of 100,000 blocks are corrupted

Corrupted data (%) | Single-verifier protocol [11] | EDVP (10 verifiers)
 1%                |  25.99                        |   8.14
 5%                |  53.23                        |  18.55
10%                |  70.12                        |  29.63
15%                |  96.99                        |  42.22
20%                | 118.83                        |  56.44
30%                | 135.63                        |  65.89
40%                | 173.45                        |  80.85
50%                | 216.11                        |  98.81

Table 3: Verification times (sec) with 20 verifiers when different percentages of 100,000 blocks are corrupted

Corrupted data (%) | Single-verifier protocol [11] | EDVP (20 verifiers)
 1%                |  25.99                        |   4.14
 5%                |  53.23                        |  14.55
10%                |  70.12                        |  21.63
15%                |  96.99                        |  32.22
20%                | 118.83                        |  46.44
30%                | 135.63                        |  55.89
40%                | 173.45                        |  68.85
50%                | 216.11                        |  85.81

From Tables 1-3, we observe that the time needed to detect data corruption in the cloud is lower than with the single-verifier based protocol [11].

Table 4: Verifier computation time (ms) for different file sizes

File size | Single-verifier protocol [11] | EDVP
 1 MB     |  148.26                       |   80.07
 2 MB     |  274.05                       |  192.65
 4 MB     |  526.25                       |  447.23
 6 MB     |  784.43                       |  653.44
 8 MB     | 1083.9                        |  820.87
10 MB     | 2048.26                       | 1620.06

Table 5: CSP computation time (ms) for different file sizes

File size | Single-verifier protocol [11] | EDVP
 1 MB     | 488.16                        | 356.27
 2 MB     | 501.23                        | 392.55
 4 MB     | 542.11                        | 421.11
 6 MB     | 572.17                        | 448.67
 8 MB     | 594.15                        | 465.17
10 MB     | 640.66                        | 496.02

From Tables 4 and 5, we observe that the computation cost of the verifier and of the CSP is lower than in the existing scheme [11].

VI. CONCLUSION
In this paper, we presented an EDVP scheme to verify the integrity of data stored in the cloud in a distributed manner with the support of multiple verifiers (multiple TPAs) instead of a single verifier (TPA). The protocol runs many SUBTPAs concurrently under a single TPA, with the workload distributed uniformly among the SUBTPAs so that together they verify the integrity of the whole data. Through the security and performance analysis, we have shown that the EDVP verification protocol detects data corruption in the cloud efficiently when compared with a single-verifier based scheme.

REFERENCES
[1] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud Computing and Emerging IT Platforms: Vision, Hype, and Reality for Delivering Computing as the 5th Utility," Future Generation Computer Systems, vol. 25, no. 6, June 2009, pp. 599-616, Elsevier Science, Amsterdam, The Netherlands.
[2] K. D. Bowers, A. Juels, and A. Oprea, "HAIL: A High-Availability and Integrity Layer for Cloud Storage," Cryptology ePrint Archive, Report 2008/489, 2008.
[3] A. F. Barsoum and M. A. Hasan, "On Verifying Dynamic Multiple Data Copies over Cloud Servers," Technical Report, Department of Electrical and Computer Engineering, University of Waterloo, Ontario, Canada, Aug. 2011.
[4] Q. Wang, C. Wang, J. Li, K. Ren, and W. Lou, "Enabling public verifiability and data dynamics for storage security in cloud computing," IEEE Trans. Parallel and Distributed Systems, vol. 22, no. 5, May 2011.
[5] C. Wang, Q. Wang, K. Ren, N. Cao, and W. Lou, "Towards Secure and Dependable Storage Services in Cloud Computing," IEEE Trans. Services Computing, vol. 5, no. 2, April-June 2012, pp. 220-232.
[6] C. Wang, K. Ren, W. Lou, and J. Li, "Toward publicly auditable secure cloud data storage services," IEEE Network, vol. 24, no. 4, 2010, pp. 19-24.
[7] Z. Hao, S. Zhong, and N. Yu, "A Privacy-Preserving Remote Data Integrity Checking Protocol with Data Dynamics and Public Verifiability," IEEE Trans. Knowledge and Data Engineering, vol. 23, no. 9, 2011, pp. 1432-1437.
[8] Y. Zhu, H. Wang, Z. Hu, G. Ahn, H. Hu, and S. S. Yau, "Dynamic Audit Services for Integrity Verification of Outsourced Storages in Clouds," Proc. of the 26th ACM Symposium on Applied Computing (SAC), March 21-24, 2011, Tunghai University, TaiChung, Taiwan.
[9] J. Yang, H. Wang, J. Wang, C. Tan, and D. Yu, "Provable Data Possession of Resource-constrained Mobile Devices in Cloud Computing," Journal of Networks, vol. 6, no. 7, July 2011, pp. 1033-1040.
[10] P. Syam Kumar and R. Subramanian, "Homomorphic Distributed Verification Protocol for Ensuring Data Storage in Cloud Computing," Journal of Information, vol. 14, no. 10, Oct. 2011, pp. 3465-3476.
[11] P. Syam Kumar and R. Subramanian, "RSA-based Dynamic Public Audit Service for Integrity Verification of Data Storage in Cloud Computing using Sobol Sequence," Special Issue on Security, Privacy and Trust in Cloud Systems, International Journal of Cloud Computing (IJCC), Inderscience Publications, vol. 1, no. 2/3, 2012, pp. 167-200.
[12] P. Syam Kumar and R. Subramanian, "An Efficient and Secure Protocol for Ensuring Data Storage Security in Cloud Computing," International Journal of Computer Science Issues (IJCSI), vol. 8, issue 6, Nov. 2011, pp. 261-274.
[13] P. Syam Kumar, Marie Stanislas Ashok, and R. Subramanian, "A Publicly Verifiable Dynamic Secret Sharing Protocol for Secure and Dependable Data Storage in Cloud Computing," communicated for publication in the International Journal of Cloud Applications and Computing (IJCAC).
[14] A. Shamir, "How to Share a Secret," Comm. ACM, vol. 22, 1979.
[15] P. Bratley and B. L. Fox, "Algorithm 659: Implementing Sobol's Quasi-random Sequence Generator," ACM Trans. Math. Software, vol. 14, no. 1, 1988, pp. 88-100.

Comparison of Genetic Distance-Based and Model-Based Clustering Methods in Analyzing the Population Genetic Structure of Local Chicken Breeds

1.2 Microsatellite primers. Drawing on the primers used in studies of the genetic diversity of local chicken breeds [4,12-13], 16 microsatellite loci that performed well and showed at least moderate heterozygosity were selected: MCW0295, MCW0222, MCW0014, MCW0067, MCW0069, MCW0034, MCW0111, MCW0078, MCW0206, LEI0094, LEI0234, MCW0330, MCW0104, MCW0020, MCW0165, MCW0123.
1.3 PCR amplification, electrophoresis, and scoring
In studies that use genotype data such as microsatellites, single nucleotide polymorphisms (SNPs), and restriction fragment length polymorphisms (RFLPs) to analyze population genetic structure and the relationships among breeds, clustering methods fall into two types. One is the distance-based cluster method, which computes pairwise genetic distances between populations (or individuals) and, from those distances, builds a dendrogram with the NJ (neighbor-joining) or UPGMA (unweighted pair group method with arithmetic mean) algorithm to analyze population structure and relatedness; the other, as the title indicates, is the model-based cluster method. Distance measures in current use include Ds (Nei's standard genetic distance), DR (Reynolds' genetic distance), and DA (Nei's improved genetic distance), and genetic distances have been widely adopted in analyses of the genetic structure and relationships of livestock and poultry breeds [1-6].
李慧芳1, 陈宽维1*, 韩威1, 张学余1, 高玉时1, 陈国宏2, 朱云芬1, 王强1
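As a concrete illustration of the distance-based approach described above (not code from the paper), here is a minimal sketch that computes Nei's standard genetic distance Ds from per-locus allele frequencies and builds a UPGMA tree with SciPy; the toy frequencies are invented:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

def nei_standard_distance(x, y):
    """Nei's Ds between two populations.

    x, y: lists of per-locus allele-frequency arrays (one array per locus).
    Ds = -ln( Jxy / sqrt(Jx * Jy) ), with the J terms averaged over loci.
    """
    jx = np.mean([np.sum(a * a) for a in x])
    jy = np.mean([np.sum(b * b) for b in y])
    jxy = np.mean([np.sum(a * b) for a, b in zip(x, y)])
    return -np.log(jxy / np.sqrt(jx * jy))

# Toy data: 3 populations, 2 loci, allele frequencies summing to 1 per locus.
pops = {
    "A": [np.array([0.7, 0.3]), np.array([0.5, 0.5])],
    "B": [np.array([0.6, 0.4]), np.array([0.4, 0.6])],
    "C": [np.array([0.1, 0.9]), np.array([0.9, 0.1])],
}
names = list(pops)
# Condensed distance vector in the pair order scipy.linkage expects.
d = [nei_standard_distance(pops[a], pops[b])
     for i, a in enumerate(names) for b in names[i + 1:]]
tree = linkage(d, method="average")  # "average" linkage is UPGMA
print(tree)
```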

Dynamic Mutual-Observation Online Modeling Method for UAV Swarm Cooperative Navigation [invention patent]

(19) China National Intellectual Property Administration. (12) Invention patent application. (10) Application publication number: CN 110426029 A. (43) Application publication date: 2019.11.08. (21) Application number: 201910699294.4. (22) Filing date: 2019.07.31. (71) Applicant: Nanjing University of Aeronautics and Astronautics, 29 Yudao Street, Qinhuai District, Nanjing, Jiangsu 210016. (72) Inventors: 王融, 熊智, 刘建业, 李荣冰, 李传意, 杜君南, 陈欣, 赵耀, 崔雨辰, 安竞轲, 聂庭宇. (74) Patent agency: 南京经纬专利商标代理有限公司 32200; agent: 姜慧勤. (51) Int. Cl. G01C 21/00 (2006.01), G01C 21/20 (2006.01). (54) Title: Dynamic mutual-observation online modeling method for UAV swarm cooperative navigation. (57) Abstract: The invention discloses a dynamic mutual-observation online modeling method for UAV swarm cooperative navigation. The method first performs a first-level screening of swarm members according to the number of satellites visible to each member's satellite navigation receiver, establishing each member's role in cooperative navigation at the current time. It then sets up a moving coordinate frame whose origin is each object member to be aided, and computes the coordinates of each candidate reference node. On that basis, it performs a second-level screening of the candidate reference nodes according to whether relative ranging to each object member is possible, obtains the set of usable reference members, and builds a preliminary dynamic mutual-observation model. Finally, the model is optimized by iterative correction, and a new round of dynamic mutual-observation modeling is carried out as the swarm's observation relationships, the members' own positioning performance, and their roles in cooperative navigation change, providing an accurate basis for effective UAV swarm cooperative navigation.

Claims: 3 pages; description: 7 pages; figures: 3 pages. CN 110426029 A, 2019.11.08.

1. A dynamic mutual-observation online modeling method for UAV swarm cooperative navigation, characterized by the following steps:

Step 1: Number the members of the UAV swarm 1, 2, ..., n. According to the number of usable satellites received by each member's onboard satellite navigation receiver at the current time, perform a first-level screening to determine each member's role in cooperative navigation: members receiving fewer than 4 usable satellites are object members, whose set of numbers is denoted A; members receiving at least 4 usable satellites are candidate reference members, whose set of numbers is denoted B.

Step 2: Obtain the position indicated by object member i's onboard navigation system, and establish a local east-north-up geographic frame for that object member with the indicated position as origin, where i is a member number and i ∈ A.

Step 3: Obtain the position indicated by candidate reference member j's onboard navigation system and its positioning-error covariance, and transform both into object member i's local east-north-up frame from step 2, where j is a member number and j ∈ B.

Step 4: Perform a second-level screening of the candidate reference members according to whether each object member and each candidate reference member can range to each other, determining each candidate reference member's role in cooperative navigation: the candidate reference members that can range to object member i are its usable reference members, whose set of numbers is denoted C_i.

Step 5: Compute the mutual-observation vectors between the object member and its usable reference members, and from these vectors compute the vector projection matrices.

Step 6: Compute the object-position projection matrix and the usable-reference-position projection matrix for the object member and its usable reference members.

Step 7: Using the vector projection matrices from step 5 and the object-position projection matrix from step 6, compute the state mutual-observation matrix between the object member and its usable reference members.

Step 8: Using the vector projection matrices from step 5 and the usable-reference-position projection matrix from step 6, compute the noise mutual-observation matrix between the object member and its usable reference members; using the noise mutual-observation matrix, compute the mutual-observation noise covariance.

Step 9: Using the state mutual-observation matrices from step 7, build the object member's mutual-observation ensemble matrix over all of its usable reference members.

Step 10: Using the mutual-observation noise covariances from step 8, build the object member's mutual-observation ensemble covariance over all of its usable reference members.

Step 11: Using the mutual-observation vectors from step 5, build the object member's mutual-observation ensemble measurement over all of its usable reference members.

Step 12: From the ensemble matrix of step 9, the ensemble covariance of step 10, and the ensemble measurement of step 11, establish the dynamic mutual-observation model for UAV swarm cooperative navigation; perform weighted least-squares positioning of the object member from this model to obtain the longitude, latitude, and altitude corrections of its position, and compute the corrected longitude, latitude, and altitude.

Step 13: Using the state mutual-observation matrix from step 7 and the mutual-observation noise covariance from step 8, compute the covariance of the object member's position estimate.

Step 14: Using the object-position projection matrix from step 6 and the corrections from step 12, compute the online modeling error. When this error is smaller than the preset error-control threshold for dynamic mutual-observation online modeling, the iteration is judged to have converged and modeling ends; proceed to step 15. Otherwise return to step 5 to iteratively correct the mutual-observation model.

Step 15: Judge whether navigation has ended; if so, stop; otherwise return to step 1 to model the next epoch.
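Step 12's weighted least-squares positioning admits a compact sketch. The following is a generic WLS correction, not code from the patent; the ensemble matrix H, residual vector z, covariance R, and all numbers are illustrative assumptions:

```python
import numpy as np

def wls_correction(H, z, R):
    """Weighted least-squares position correction.

    H: (k, 3) ensemble mutual-observation matrix (one row per reference member)
    z: (k,)  ensemble measurement residuals (measured minus predicted ranges)
    R: (k, k) mutual-observation noise covariance
    Returns the 3-vector correction (e.g. longitude/latitude/altitude) and
    the position-estimate covariance (H^T R^-1 H)^-1, as in steps 12-13.
    """
    W = np.linalg.inv(R)
    P = np.linalg.inv(H.T @ W @ H)   # position-estimate covariance
    dx = P @ H.T @ W @ z             # weighted least-squares correction
    return dx, P

# Toy numbers: 4 usable reference members, illustrative geometry and noise.
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.6, 0.8, 0.0]])
z = np.array([1.0, -0.5, 0.2, 0.4])
R = 0.01 * np.eye(4)
dx, P = wls_correction(H, z, R)
print(dx)
```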

Research on Mobile Robot Navigation Based on SLAM Technology

Mobile robots play an increasingly important role in modern intelligent manufacturing and smart logistics.

The quality of their navigation technology directly determines how well, and how safely, a robot performs in a given environment. SLAM-based mobile robot navigation has therefore become a hot topic in modern navigation research.

1. Basic concepts of SLAM. SLAM, short for Simultaneous Localization And Mapping, is the technology of localizing and building a map at the same time. In autonomous mobile robotics, it is one of the key technologies for achieving autonomous navigation. SLAM requires a robot to localize itself and draw a map of its environment in real time while moving, and to keep that map updated on its own. This makes for a powerful autonomous navigation system, because the robot can perform path finding and map updating simultaneously. SLAM mainly covers sensor data fusion, map building, and autonomous navigation.

2. Feasibility of SLAM-based mobile robot navigation. SLAM-based navigation can be regarded as the integration of three modules: environment perception, localization, and path planning. In modern autonomous navigation systems, the perception module is ever more widely applied in smart logistics, autonomous driving, and related fields. For map building, SLAM offers more accurate environment models and a stronger ability to update them than traditional robot localization and mapping techniques. For real-time path planning, SLAM-based navigation systems enable more intelligent planning and thus better meet navigation needs in complex environments.

3. Development trends of SLAM-based navigation systems. As the technology develops, SLAM-based navigation systems will become increasingly mature and complete. The main trends are: (1) Big data and deep learning. By applying deep learning algorithms, SLAM-based navigation systems can fuse large volumes of sensor data, map data, and other data to achieve more accurate and efficient environment perception, map building, and path planning. (2) Multi-sensor data fusion. By combining the data of multiple sensors, SLAM-based navigation systems can achieve more complete and precise environment perception and autonomous navigation. (3) White-box thinking. White-box thinking means explaining machine behavior from a human point of view, leading to a more intelligent, humane user experience.

A Target Trajectory Clustering Method Based on Multi-dimensional Track Features

Abstract: In target tracking and analysis, clustering target trajectories quickly and efficiently is a key problem.

This paper proposes a target trajectory clustering method based on multi-dimensional track features. The method first extracts multi-dimensional features from the target tracks and computes track similarity using a vector space model. A hierarchical clustering algorithm is then applied to the similarity matrix to obtain clusters of target trajectories. Finally, evaluation criteria and visualization are used to verify the method's effectiveness. Experimental results show that the method clusters well and can be applied to trajectory analysis and behavior recognition for UAVs, vehicles, and other targets.

Keywords: target tracking; trajectory clustering; multi-dimensional feature extraction; hierarchical clustering; evaluation criteria

Introduction. Target tracking and analysis are widely applied to UAVs, vehicles, and similar targets. Their purpose is to grasp a target's motion patterns and behavior by tracking and analyzing its trajectory, providing a basis for subsequent task decision-making and planning. Trajectory clustering is a key step in this process: by clustering trajectories, targets of the same type can be grouped into one cluster, giving a better grasp of their motion patterns and behavior. Many trajectory clustering methods exist; the common ones are density-based, model-based, and feature-based clustering. Feature-based methods, being comparatively interpretable and robust, are widely used in target tracking and analysis. However, traditional feature extraction is mostly based on Euclidean distance and copes poorly with the high dimensionality, nonlinearity, and noise of trajectory data. Extracting multi-dimensional trajectory features quickly and efficiently is therefore a key problem for trajectory clustering methods.

This paper proposes a target trajectory clustering method based on multi-dimensional track features: multi-dimensional features are extracted from the target tracks, track similarity is computed with a vector space model, a hierarchical clustering algorithm is applied to the similarity matrix to obtain trajectory clusters, and evaluation criteria and visualization verify the method's effectiveness.

Multi-dimensional feature extraction. Traditional track feature analysis is mainly based on Euclidean measures such as distance, speed, and angle. Trajectory data, however, are often high-dimensional, nonlinear, and noisy, and are hard to handle with Euclidean distance alone. This paper therefore proposes a multi-dimensional track feature extraction method: first, each target's trajectory data are discretized onto a grid, yielding a multi-dimensional feature vector.
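A minimal sketch of the pipeline just described: grid discretization into a feature vector, cosine similarity as the vector space model, and hierarchical clustering. The grid size and toy tracks are illustrative assumptions:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def grid_feature(track, bins=8, extent=(0.0, 1.0)):
    """Discretize an (N, 2) track onto a bins x bins occupancy grid,
    flattened into one multi-dimensional feature vector."""
    hist, _, _ = np.histogram2d(track[:, 0], track[:, 1],
                                bins=bins, range=[extent, extent])
    v = hist.ravel()
    return v / (np.linalg.norm(v) or 1.0)

# Toy tracks: two near-diagonal, one along the bottom edge.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
tracks = [np.c_[t, t + 0.02 * rng.standard_normal(50)],
          np.c_[t, t + 0.02 * rng.standard_normal(50)],
          np.c_[t, 0.1 + 0.02 * rng.standard_normal(50)]]

X = np.array([grid_feature(tr) for tr in tracks])
# Cosine distance = 1 - cosine similarity (vector space model).
Z = linkage(pdist(X, metric="cosine"), method="average")
print(fcluster(Z, t=2, criterion="maxclust"))  # e.g. [1 1 2]
```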

Five Sample Graduation Theses on Safety Management

Sample 1. Thesis title: Application of a Security Management Model in a Cloud Computing Environment

Abstract: In cloud computing environments, cloud virtual machines (VMs) face ever greater challenges in virtualization security management. To address this, a VM security management model based on efficient, dynamic deployment is proposed and used for state scheduling and migration, and the security architecture of cloud VMs is studied. Overall, the model deploys, schedules, and checks VMs using AHP (the analytic hierarchy process) together with a CUSUM (cumulative sum) based DDoS attack detection algorithm.

Keywords: VM security; VM deployment; VM scheduling

0 Introduction
Cloud computing [1] is a new network computing model based on resource virtualization [2]; built on data centers, it satisfies user needs through on-demand services and scalable computing resources. With the rapid growth of cloud operators, virtualization has drawn more and more attention across industries, ever more users migrate their data and applications into virtual environments, and the number of VMs keeps rising accordingly. At the same time, deploying and migrating VMs effectively and securely, so that data resources are used efficiently, has become a major challenge for virtualization management. For instance, a malicious user can rent a large number of VMs to launch a TCP SYN flood against a cloud data center, while the outside world cannot readily tell which VMs are attacking; such an attack is subtler and harder to defend against quickly. To counter it, a VM cluster scheduler based on VM state migration was proposed [3]. Reference [4] discussed the implementation of live migration in a KVM virtualization environment and analyzed the reliability and security of dynamic migration. Danev et al. [5] studied a hardware-based model for secure VM migration built on the vTPM principle, guaranteeing the security of live data migration. Closer study shows that these methods are not sufficient to secure the virtual environment, so this paper researches and implements the key techniques of security management for dynamic migration, based on effective deployment and management of a VM security model.

1 VM security management model
Figure 1 shows a VM security management model with four parts: (1) a multi-physical-server VM management system; (2) a VM state monitoring system; (3) an AHP-based deployment and scheduling method using live VM migration; and (4) a CUSUM-based DDoS attack detection mechanism. As Figure 1 suggests, when users obtain data services from the cloud data center, VMs must migrate safely and effectively so that multiple users can share resources on the same physical servers. VM migration means that a VM running on one host (the source) can conveniently move its data to another host (the target) at run time. The problems in this mode are also prominent: when a user withdraws several VMs from the cloud within a short time, the load across physical servers becomes unbalanced; and when tasks running on physical servers hosting many VMs fall idle, server load likewise becomes uneven while the VMs serve users' QoS requests. In summary, the AHP-based VM deployment and scheduling method covers four aspects: monitoring the statistical features of physical server state; learning the VMs' resource types and access characteristics; learning the features revealed by VM resource analysis; and, on that basis, evaluating the security performance of the physical servers so as to find the most suitable server for deploying or migrating the VM cluster and thereby optimize resource allocation.

Another problem that cannot be ignored is the TCP SYN flood, typified by the distributed denial-of-service attack. Exploiting the TCP/IP three-way handshake and hiding behind insecure hosts, multiple attack initiators send SYN packets to the target host and never answer its SYN+ACK replies, while spoofing the source IP addresses to disguise themselves; the physical server can then no longer serve users' data requests normally. This is highly destructive and poses a serious threat to the security, integrity, and availability of the Internet. The CUSUM-based DDoS attack detection mechanism [6] has the following main features: it gathers statistics on VM network traffic, including SYN+ACK packets and FIN+RST packets, and it designs and implements an improved CUSUM algorithm to detect malicious VM traffic quickly. The improved CUSUM algorithm exploits a symmetry that holds from the establishment of a normal TCP connection to its end: each SYN packet pairs with one FIN or RST packet. When a flood attack occurs, the count of one of the two packet types, SYN versus FIN+RST, far exceeds the other, and the attack is identified and defended against by detecting the difference between them.

2 Key technologies of VM security management
Whether a physical server's load is too high, exceeding a preset threshold, or too low, whenever this load imbalance affects the QoS of the services the VMs provide, VM migration and load balancing are required across the physical servers. First, the current resource usage of a physical server is obtained from the cloud data center, as in formula (1) for H1; the current resource usage of the other physical servers is obtained likewise, as in formula (2) for HN:

H1 = {CPU1, MEM1, BandWidth1}

3 Functional testing
The test environment is Red Hat Enterprise Linux 4 on three physical servers. The test task: the three physical servers provide VM rental services for a simple cloud computing environment through a unified interface; users apply for four VMs in a fixed order, and the application workflows and the resource load consumed on each physical server are compared. Since the physical resources and timing of VM applications are the main bottleneck causing physical server load imbalance during dynamic balancing, and live VM migration is precisely a physical process of dynamically balancing resources and time, resource consumption can be roughly quantified by the number of VMs migrated. Figure 2 shows the AHP results for the VM environment. Weight vectors computed from each VM's resource characteristics are combined with the resource utilization of the physical servers, and each physical server is scored in the hierarchy on the basis of the weight vector. As Figure 2 shows, VM 1 has the weight vector [0.2, 0.6, 0.2]; combining the resource measures of the individual physical servers, physical server 2 scores 36.122 and physical server 3 scores 42.288. The smaller score indicates that the server's current resources can satisfy VM 1 alongside the VMs already running, although deploying VM 1 still adds some pressure; physical server 2 is therefore selected as the best physical server.

4 Conclusion
For the VM security management needs of cloud computing, the proposed security management framework, through a discussion of the functional configuration of the VM management model, verifies that a DDoS attack detection method can detect TCP SYN floods launched by renting large numbers of VMs, and that VMs can thereby be deployed effectively and migrated dynamically.
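The SYN/FIN+RST pairing check above invites a short sketch. Below is a minimal non-parametric CUSUM detector in Python; it is not the exact algorithm of [6], and the drift term, threshold, and traffic counts are all illustrative assumptions:

```python
def cusum_syn_flood(samples, drift=0.1, threshold=2.0):
    """Non-parametric CUSUM over per-interval packet counts.

    samples: iterable of (syn_count, fin_rst_count) per time interval.
    Under normal traffic SYN and FIN+RST roughly pair up, so the
    normalized difference hovers near zero; a SYN flood drives it up.
    """
    s = 0.0
    for i, (syn, fin_rst) in enumerate(samples):
        x = (syn - fin_rst) / max(fin_rst, 1)   # normalized imbalance
        s = max(0.0, s + x - drift)             # CUSUM with drift term
        if s > threshold:
            return i                            # alarm at interval i
    return None

normal = [(100, 98), (120, 117), (90, 92)]
flood = [(100, 99), (400, 90), (900, 80)]       # SYNs far exceed FIN/RST
print(cusum_syn_flood(normal + flood))          # -> 4, the first flood spike
```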
References
[1] Khan A. U. R., Othman M., Feng Xia, et al. Context-Aware Mobile Cloud Computing and Its Challenges [J]. IEEE Cloud Computing, 2015, 2(3): 42-49.
[2] Jain R., Paul S. Network virtualization and software defined networking for cloud computing: a survey [J]. IEEE Communications Magazine, 2013, 51(11): 24-31.
[3] Wei Z., Xiaolin G., Wei H. R., et al. TCP DDoS attack detection on the host in the KVM virtual machine environment [C]. 2012 IEEE/ACIS 11th International Conference on Computer and Information Science. doi:10.1109/icis.2012.105.
[4] Yamuna Devi L., Aruna P., Sudha D. D., et al. Security in Virtual Machine Live Migration for KVM [C]. 2011 International Conference on Process Automation, Control and Computing (PACC). USA: IEEE Computer Society Press, 2011: 1-6.
[5] Danev B., Jayaram M. R., Karame G. O., et al. Enabling secure VM-vTPM migration in private clouds [C]. The 27th Annual Computer Security Applications Conference. USA: ACM Press, 2011: 187-196.
[6] Fen Y., Yiqun C., Hao H., et al. Detecting DDoS attack based on compensation non-parameter CUSUM algorithm [J]. Journal on Communications, 2008, 29(6): 126-132.

Plugging Worm Vulnerabilities on Mobile Devices
Michael Otey; 刘海蜀
[Journal] Windows & Net Magazine (International Chinese Edition)
[Year (Volume), Issue] 2004(000)03M
[Abstract] The trouble began the day I returned from a business trip. It started with the network cabling: my office workstation happens to sit very close to the network router, and about an hour after I began working I noticed that the WAN light on the router was solid on and stayed that way.

[Pages] 1 (P7)
[Authors] Michael Otey; 刘海蜀
[Affiliations] (not given)
[Language] Chinese
[CLC number] TP309.5
[Related literature]
1. Mobile devices constantly face security vulnerability threats [J]
2. Microsoft's "wormable" high-risk vulnerability [J]
3. Microsoft's "wormable" high-risk vulnerability [J]
4. Research shows an e-mail vulnerability threatens mobile devices [J]
5. Global mobile threat survey shows mobile devices constantly face security vulnerability threats [J]


CLUSMA: A Mobile Agent based clustering middleware for Wireless Sensor Networks

Usman Sharif1, Abid Khan1, Mansoor Ahmed1, Waqas Anwar2, Ali Nawaz Khan3
1 Department of Computer Science, COMSATS Institute of Information Technology, Islamabad, Pakistan (itseen@, {abidkhan,mansoor}@.pk)
2 Department of Computer Science, COMSATS Institute of Information Technology, Abbottabad, Pakistan (waqas@.pk)
3 Department of Electrical Engineering, COMSATS Institute of Information Technology, Lahore, Pakistan (ankhan@.pk)
FIT'09, December 16-18, 2009, CIIT, Abbottabad, Pakistan

ABSTRACT
This paper proposes CLUSMA, a middleware framework for communication between sensor nodes in a wireless sensor network. The proposed technique is based on clustering (CLUS) and uses mobile agents (MA). This paper gives a sketch of the middleware; the actual implementation of the framework is in progress. The main objective of this middleware is to use the network's functions and resources effectively. Because the proposed framework is based on mobile agents, it works in homogeneous as well as heterogeneous environments. Previous frameworks of this kind used a symmetric secret key for sensor node authentication. The proposed framework provides security in heterogeneous environments by using mobile agents in each cluster that communicate with the agents in other clusters.

Keywords
Middleware, clustering algorithm, mobile agents, Program Integrity Verification (PIV), symmetric secret key, heterogeneous environment

1. INTRODUCTION
Wireless sensor networks (WSNs) are becoming important for many emerging applications such as military surveillance and alerts on terrorists and burglars. The security of sensor networks is of utmost importance [1]. Due to resource constraints on sensor nodes, it is not feasible for sensors to use traditional pairwise key establishment techniques such as public key cryptography and a key distribution centre [2]. To protect sensors from physical tampering and manipulation of the sensor programs, [16] proposed a soft tamper-proofing scheme, called program integrity verification (PIV), that verifies the integrity of the program in each sensor device. In the PIV protocol, the sensors rely on PIV servers (PIVSs) to verify the integrity of the sensor programs, and they authenticate the PIVSs through centralized, trusted third-party entities such as authentication servers (ASs) in the network. The distributed authentication protocol of PIVSs (DAPP) is a solution to the problem of authenticating PIVSs in a fully distributed manner, without the ASs. This paper is organized as follows: Section 2 discusses related work, Section 3 presents design principles for WSN middleware, Section 4 describes the proposed middleware framework with its clustering technique, Section 5 covers the PIV implementation over the middleware framework, and the last section concludes the paper.

2. RELATED WORK
A security model for low-value transactions, especially focusing on authentication in ad hoc networks, is presented in [3]. The authors use the recommendation protocol from the distributed trust model [4] to build trust relationships, and they extend it by requesting references in ad hoc networks. Each node maintains a local repository of trustworthy nodes in the network, and a path between any two nodes can be built by indirectly using the repositories of other nodes. They also introduce the idea of threshold cryptography [5], in which, as long as the number of compromised nodes is below a given threshold, the compromised nodes cannot harm the network operation.
Some threats and possible solutions for basic mechanisms and security mechanisms in mobile ad hoc networks are presented in [6]. The authors developed a self-organizing public-key infrastructure in which certificates are stored in local certificate repositories and distributed by the users. In contrast, our work focuses on the authentication of servers, while their work features admission control and pairwise key establishment.

3. Design principles for WSN middleware
The software design for WSNs should follow several basic principles, which we re-interpret for the design of middleware as follows. The middleware should provide data-centric mechanisms for data processing and querying within the network. Thanks to its simplicity, flexibility, and robustness, cluster-based network architecture has been widely used in the design and implementation of network protocols and collaborative signal processing applications for WSNs [8-10]. Intuitively, a cluster-based architecture is suitable for hosting the data-centric processing paradigm from both geographical and system design perspectives [9,10]. Any middleware design for WSNs must properly address (1) heterogeneity, (2) localized algorithms, (3) energy and resource management, (4) data aggregation, (5) encoding and compression, (6) scalability, and (7) security.

3.1 Heterogeneity
As in regular distributed systems, heterogeneity is a phenomenon also seen in WSNs, and it should be managed by the middleware. Sensor nodes can be heterogeneous by construction: some nodes have larger batteries, farther-reaching communication devices, or more processing power. Middleware should have knowledge of the nodes' states, dispatching tasks based on what it knows of their remaining energy, their communication devices, and their computational resources.

3.2 Localized algorithms
Since the cluster-based architecture localizes the interaction of sensor nodes, and hence the coordination and control overhead, within a restricted vicinity, it is reasonable to regard each cluster as a basic functional unit of the middleware. Both principles, data-centric processing and localized algorithms, strongly motivate cluster-based architectures. Such architectures have been widely investigated in ad hoc networks, since they "promote more efficient use of resources in controlling large dynamic networks" [11]. Compared with mobile networks, which incur a high cost for maintaining clusters throughout the network, WSNs usually consist of stationary sensor nodes with less dynamics. Hence, the cost of superimposing a cluster architecture over the physical network is affordable, given the potential advantages offered by clusters in designing scalable and localized data-centric algorithms [12].

3.3 Energy and resource management
Recent research in microelectronics has made it possible to produce very tiny sensor devices, sometimes on the order of one cubic centimetre.
Because of their small size, sensor nodes suffer from limited resources: energy, computing power, memory, and communication bandwidth. The middleware on top of those devices should therefore be lightweight and energy efficient, smartly managing constrained resources to provide the required services while maximizing the device's lifetime. Most of the time, it is nearly impossible to recharge these sensing devices, because of their huge number and small size, and because they are scattered in the field, where it is hard even to find them all.

3.4 Data aggregation
When a network is organized in a distributed fashion, the nodes do not only pass data or execute application programs; they are also actively involved in taking decisions about how to operate the network. This is a specific form of information processing that happens in the network but is limited to information about the network itself. When the concrete data transported by the network is also taken into account in this information processing, that is in-network processing. In-network processing takes different forms: data aggregation, and data encoding and compression. Perhaps the most used technique of in-network processing is data aggregation [13], also called data fusion. The benefit of this technique, mainly power saving, comes from the fact that transmitting data is considerably more expensive than even complex computations. This form of in-network processing is called aggregation because data collected from different nodes along the way between the source and the sink is aggregated into a condensed form, while satisfying certain conditions so that the result remains meaningful.
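As a concrete illustration (not from the paper), here is a minimal sketch of cluster-head aggregation in which member readings are condensed into count/mean/min/max so that one packet replaces many; the message format is an assumption:

```python
from dataclasses import dataclass

@dataclass
class Aggregate:
    """Condensed form forwarded by a cluster head instead of raw readings."""
    count: int
    mean: float
    lo: float
    hi: float

def aggregate_readings(readings):
    """Fuse member readings at the cluster head; one packet replaces many."""
    if not readings:
        raise ValueError("no readings to aggregate")
    return Aggregate(count=len(readings),
                     mean=sum(readings) / len(readings),
                     lo=min(readings),
                     hi=max(readings))

# Cluster members report temperatures; the head sends one condensed packet.
member_readings = [21.4, 21.9, 22.1, 21.7]
print(aggregate_readings(member_readings))
# Aggregate(count=4, mean=21.775, lo=21.4, hi=22.1)
```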
3.5 Encoding and compression
Since sensor nodes are deployed in physical environments, the readings of adjacent nodes are going to be very similar: they are correlated. Compression and encoding techniques exploit this correlation, which can be spatial (across adjacent nodes) or temporal (readings at the same moment). Middleware should provide data aggregation and data compression algorithms, giving the applications above the ability to choose the algorithm that best suits them.

3.6 Scalability
Middleware should support scalability by maintaining acceptable levels of performance as the network grows. Some network architectures have been investigated and proven suitable for growing networks, such as clustered networks [13] and data-centric models. Moreover, scalability provides robustness and fault tolerance for the network.

3.7 Security
Obviously, security measures in WSNs should not be approached in the same way as in other kinds of networks, for several reasons. The WSN infrastructure is made of small, cheap nodes scattered in a potentially hostile area, so it is impossible to prevent the sensor nodes from being physically accessed by attackers; an attacker who captures a node gains full control over it and can read its memory or alter its software. Some cryptographic algorithms are hard to implement because of the memory and computational constraints. When performing in-network processing, several nodes access the data to change it, so a large number of parties are involved in end-to-end information transfers. The limited energy of a node is particularly attractive to attackers, who can force it to exhaust its energy and die.

4. Middleware framework with clustering technique using mobile agents (CLUSMA)
We are designing a unified framework for WSNs that contains predefined parameters for using network resources efficiently, reducing communication overhead, and removing the bottleneck of the authentication server. Designing this framework is a better approach to introducing integrity between sensor nodes, and between sensors and servers, in an effective way. Security constraints that badly affect network resources can be controlled more effectively by implementing this framework. We intend to implement the program integrity verification (PIV) protocol in our middleware.

4.1 Predefined parameters
As mentioned earlier, the framework works with predefined parameters. First of all, the number of sensor nodes that will form the predefined network is decided, together with the predefined area covered by those nodes. Their transmission coverage, or range, is also pre-assigned, and with that pre-assigned range all the nodes perform their sensing and communication functions. There will be many areas, each with many sensor nodes, existing as their own networks and performing their functions in their specific pre-assigned areas.

4.2 Interconnection between sensors
Every sensor node can communicate with a sensor node in the same network directly or indirectly, but what happens if a sensor in one network wants to communicate with a sensor in some other network? There are a number of solutions to this kind of problem in wireless networks such as ad hoc networks, and the mobile agent is one of them. It resides on a node with somewhat higher processing and transmission power, belongs to a network, and can communicate with the mobile agent of another network because of its higher transmission power. Mobile agent middleware extends the network's functions, e.g. bandwidth and frequency: mobile agent middleware instances in different clusters communicate with each other, and cell-style frequency reuse serves the WSN environment more effectively.

4.3 Clustering
A cluster consists of a number of nodes. In our proposal, the sensors are partitioned into clusters on some predefined basis. All the sensors belonging to a cluster can communicate with every node in that cluster. Likewise, every node in a cluster can communicate with the nodes of different clusters through the middleware, or agent (as shown in Fig. 1), which has higher processing power than normal sensor nodes; these agents, however, cannot perform very crucial functions such as maintaining data on every node or the complex computations related to key generation and distribution.

Inter-cluster coordination: In the proposed middleware, information exchange between clusters is necessary for both information sharing and coordination. For instance, data gathered at one cluster can be requested by either the base station or other clusters across the network [14]. This trade-off of energy against application fidelity is also important for inter-cluster routing; for instance, an energy-efficient packet scheduling scheme over an existing data gathering substrate is described in [15]. Similar techniques can be applied for information dissemination among clusters.

Fig. 1. CLUSMA middleware (figure omitted; legend: middleware or agent, sensor node)
Intra-cluster coordination: Another issue arises when multiple clusters overlap with each other. For instance, two separate objects initially tracked by two different clusters may move across each other or eventually move together, which leads to the overlap of the two clusters. In such a case, multiple clusters may compete for resources. It is therefore important to establish mechanisms for detecting overlapping clusters and coordinating between them to avoid unfairness, starvation, or deadlock during resource competition. It is sometimes helpful to perform cluster combination if two clusters overlap over a large portion of their geographical area and will co-exist for a long time. Cluster combination can reduce coordination overhead and increase resource utilization, though it demands high scalability of the cluster control protocol and the resource management algorithms. The counterpart of cluster combination is cluster splitting, which is needed when two close objects tracked by a single cluster begin to move in opposite directions. Cluster splitting can also be regarded as the procedure of reducing the application load on the original cluster, which keeps tracking one object, while forming another cluster to track the other object.

5. PIVS OVER CLUSMA
The PIV program is installed on every node. Within a cluster, every PIV sensor node communicates with the other PIV nodes, and the integrity of the nodes can be achieved in this way; the PIV protocols sit between these communicating entities. The same PIV is installed on all PIVS servers, which communicate with their nodes in the clusters. The PIVSs perform all critical operations related to key generation and distribution, maintaining the data and information of every node in a cluster, and so on; to achieve more reliability, the PIVSs are replicated. As discussed earlier, if a sensor node of one cluster wants to communicate with a node of another cluster, the node first contacts its PIVS, which informs it of the destination node with its exact location and cluster. An important aspect of the PIVSs is that they are replicated on an inter- and intra-cluster basis, with PIVS information distributed over all the clusters. After learning the destination node's information, the source node contacts its mobile agent middleware, which communicates with the mobile agent hosting the destination sensor node. Integrity and authentication of the nodes, within a cluster or outside it, are major challenges to tackle, and we believe this middleware framework performs well in overcoming these security issues. The information about all the nodes of all the clusters is distributed over the PIVSs, and node verification is achieved simply by the verification procedure that takes place when one node initiates communication with another. Every node contains pre-loaded key generation material [17]; on the basis of that material, a symmetric pairwise key is generated and shared between sensor nodes. In [17], the key server randomly generates a symmetric bivariate k-degree polynomial f(x, y) that is a secret known only to the key server, and any two nodes in the network generate a pairwise key by substituting x and y with their node IDs.
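A minimal sketch of this bivariate-polynomial key establishment [17] follows; the prime, degree, and node IDs are toy values. Each node i holds the univariate share f(i, y), and nodes i and j both arrive at f(i, j) = f(j, i) because the polynomial is symmetric:

```python
import random

P = 7919          # toy prime; real deployments use a much larger field
K_DEGREE = 3      # toy polynomial degree k

# Key server: symmetric bivariate polynomial f(x, y) = sum c[a][b] x^a y^b
# with c[a][b] == c[b][a], known only to the key server.
c = [[0] * (K_DEGREE + 1) for _ in range(K_DEGREE + 1)]
for a in range(K_DEGREE + 1):
    for b in range(a, K_DEGREE + 1):
        c[a][b] = c[b][a] = random.randrange(P)

def share_for_node(i):
    """Univariate share f(i, y): coefficients of y^b, preloaded on node i."""
    return [sum(c[a][b] * pow(i, a, P) for a in range(K_DEGREE + 1)) % P
            for b in range(K_DEGREE + 1)]

def pairwise_key(my_share, peer_id):
    """Evaluate the stored share at the peer's ID to get the pairwise key."""
    return sum(coef * pow(peer_id, b, P)
               for b, coef in enumerate(my_share)) % P

share_5, share_9 = share_for_node(5), share_for_node(9)
assert pairwise_key(share_5, 9) == pairwise_key(share_9, 5)  # f(5,9) == f(9,5)
```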
Authentication of the middleware mobile agents is also an issue, and it is solved in our proposed framework: there is a key sharing process between the mobile agents, and they authenticate each other on the basis of secret keys shared by the PIVSs. The PIVSs hold the key generation function used to generate the pairwise keys.

6. CONCLUSION
The proposed middleware framework incorporates clustering techniques to reduce the amount of communication between sensor nodes. Sensors are usually deployed in hostile, unattended environments and are hence susceptible to various attacks, including node capture, physical tampering, and manipulation of the sensor program. The CLUSMA framework can provide security in homogeneous and heterogeneous WSN environments by using one or more middleware mobile agents in a cluster. Authentication of the middleware mobile agents is likewise solved in our proposed framework: a key sharing process between the mobile agents lets them authenticate each other on the basis of secret keys, and the PIVSs hold the key generation function from which the pairwise keys are generated.

7. REFERENCES
[1] S. A. Camtepe and B. Yener, "Key distribution mechanisms for wireless sensor networks: a survey," Department of Computer Science, Rensselaer Polytechnic Institute, Tech. Rep. TR-05-07, March 23, 2005.
[2] W. Stallings, Cryptography and Network Security: Principles and Practices, 4th Ed., Prentice Hall, 2005.
[3] A. Weimerskirch and G. Thonet, "A distributed light-weight authentication model for ad hoc networks," in Proceedings of the 4th International Conference on Information Security and Cryptology (ICISC 01), 2001.
[4] A. Abdul-Rahman and S. Hailes, "A distributed trust model," in Proceedings of the Workshop on New Security Paradigms, 1997.
[5] Y. Desmedt and Y. Frankel, "Threshold cryptosystems," in Proceedings on Advances in Cryptology (CRYPTO 89), 1989.
[6] J. P. Hubaux et al., "The quest for security in mobile ad hoc networks," in Proceedings of the 2nd ACM MobiHoc 01, 2001.
[7] D. Estrin et al., "Next century challenges: Scalable coordination in sensor networks," in ACM/IEEE MobiCom, 1999.
[8] Cougar Project. [Online]. Available: /database/Cougar
[9] W. Heinzelman, A. P. Chandrakasan, et al., "An application-specific protocol architecture for wireless microsensor networks," IEEE Trans. on Wireless Networking, 2002.
[10] M. Younis, M. Youssef, et al., "Energy-aware routing in cluster-based sensor networks," in International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems, 2002.
[11] C. E. Perkins, Ad Hoc Networking, Addison-Wesley, 2001.
[12] M. Singh and V. K. Prasanna, "Optimal energy-balanced algorithm for selection in a single hop sensor network," in IEEE SNPA, 2003.
[13] Y. Yu and V. K. Prasanna, "Energy-balanced task allocation for collaborative processing in wireless sensor networks," accepted by the MONET special issue on Algorithmic Solutions for Wireless, Mobile, Ad Hoc and Sensor Networks.
[14] C. Intanagonwiwat, R. Govindan, and D. Estrin, "Directed diffusion: A scalable and robust communication paradigm for sensor networks," in ACM/IEEE MobiCom, 2000.
[15] Y. Yu, B. Krishnamachari, and V. K. Prasanna, "Energy-latency tradeoffs for data gathering in wireless sensor networks," in IEEE InfoCom, 2004.
[16] K. G. Shin and T. Park, "Soft tamper-proofing via program integrity verification in wireless sensor networks," IEEE Transactions on Mobile Computing, vol. 4, no. 3, 2005.
[17] C. Blundo, A. De Santis, A. Herzberg, S. Kutten, U. Vaccaro, and M. Yung, "Perfectly secure key distribution for dynamic conferences," in Advances in Cryptology, CRYPTO '92, Springer-Verlag, Berlin, 1993.
