ACM and Military Papers (Collected Samples)
Paper Submission Statuses for ACM Academic Conferences

Paper submission statuses for ACM academic conferences? Part One: SCI submission statuses, a self-service glossary of submission-system terms.

1. Submitted to journal: the manuscript has just been submitted.
2. Manuscript received by Editorial Office: the paper has reached the editorial office, which confirms the submission succeeded.
3. With editor: if you did not select an editor at submission time, the paper first goes to the editor-in-chief, who assigns it to a handling editor. Two related statuses can appear at this stage:
3.1 Awaiting Editor Assignment: a handling editor is being designated; "editor assignment" means the paper has been passed to another editor for handling.
3.2 Technical check in progress: the office is checking whether the manuscript meets the journal's submission requirements.
3.3 Editor Declined Invitation: an invited editor declined the assignment; once an editor accepts it, reviewers will be invited.
4. Two statuses can then follow:
4.1 Decision Letter Being Prepared: the editor decided without inviting reviewers. In common experience this is bad news for a student manuscript: either (1) the English is too poor and the editor asks for revision, or (2) the content is too weak and the paper will be rejected. Only well-known groups tend to be accepted directly at this point.
4.2 Review(s) invited: reviewers have been found and the review begins.
5. Under review: usually a long wait. The earlier steps can also be slow, depending on how the editor handles the paper. If an invited reviewer is unwilling to review, he or she declines, and the editor invites someone else.
6. Required Reviews Completed: the reviewers' comments have been uploaded and the review is finished; the paper awaits the editor's decision.
7. Evaluating Recommendation: the editor is weighing the reviewers' recommendations; you will then receive the editor's decision.
8. Minor revision / Major revision: time for a small celebration; a revision request means the paper still has a real chance of acceptance.
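As an aside, the glossary above is essentially an ordered pipeline and can be summarized in a few lines of code. The sketch below is purely illustrative; the enum name, fields, and notes are ours, not part of any real submission system:

```python
from enum import Enum

class SubmissionStatus(Enum):
    """Typical journal submission statuses, roughly in the order they occur."""
    SUBMITTED = "Submitted to journal"
    RECEIVED = "Manuscript received by Editorial Office"
    WITH_EDITOR = "With editor"
    TECHNICAL_CHECK = "Technical check in progress"
    REVIEWS_INVITED = "Review(s) invited"
    UNDER_REVIEW = "Under review"
    REVIEWS_COMPLETED = "Required Reviews Completed"
    EVALUATING = "Evaluating Recommendation"
    DECISION = "Minor/Major revision, accept, or reject"

# One-line notes taken from the glossary above.
NOTES = {
    SubmissionStatus.SUBMITTED: "just submitted",
    SubmissionStatus.RECEIVED: "submission succeeded",
    SubmissionStatus.UNDER_REVIEW: "usually a long wait",
    SubmissionStatus.EVALUATING: "editor weighing the reviews",
}

def describe(status: SubmissionStatus) -> str:
    # Fall back to the status text itself when no note is recorded.
    return NOTES.get(status, status.value)

print(describe(SubmissionStatus.UNDER_REVIEW))  # -> usually a long wait
```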
Sample Essays in the ACM Paper Template

The ACM is the most influential professional academic organization in its field worldwide.
And do you know the ACM template? Here ___ has put together two ACM papers for you, so that you can form a direct impression of the template!

[Abstract] Given the notable role of the ACM International Collegiate Programming Contest (ACM/ICPC) in selecting and cultivating talent, how to embed ACM/ICPC contest activity into regular teaching, innovate the teaching model, integrate it with coursework, strengthen training management, and improve training effectiveness has become a question of wide concern.
To meet this need, this paper designs and develops a training management system for collegiate programming contests based on the ACM/ICPC model.
The system adopts a B/S (browser/server) architecture, with SQL Server xx as the back-end database and Visual Studio and ___ as the front-end development tools.
On the basis of an analysis of the system's functions, the paper focuses on the key techniques of the system's design and implementation.
In actual operation the system has proved stable and reliable, providing an effective management approach for ACM/ICPC contest training and teaching.

[Key words] ACM/ICPC; training management system; Web development; ___; database technology
doi: 10.3969/j.issn.1673-0194.xx.03.015 [CLC number] TP311 [Document code] A [Article ID] 1673-0194(xx)03-0028-03

1 Introduction
The ACM International Collegiate Programming Contest (ACM-ICPC), sponsored by the Association for Computing Machinery (ACM), began in 1970 and thus has a history of more than 40 years. It is widely recognized as the largest, highest-level, and most influential international collegiate programming contest, and its winners are talents favored and preferentially recruited by major IT companies and research institutes [1].
In recent years, as ACM/ICPC collegiate programming contests have flourished in China, the computing community has paid growing attention, in the matter of talent cultivation, to how contest training can be introduced and borrowed scientifically and reasonably, how ACM/ICPC contest activity can be organically combined with regular course teaching, and how traditional teaching content and methods can be broken through, so as to effectively cultivate students' learning ability, creativity, and overall quality.
Within this, how to effectively organize ACM/ICPC contest training, strengthen training management, and improve training effectiveness is likewise a topic of keen interest.
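The abstract above describes the system only at the architecture level (B/S, SQL Server back end, Visual Studio front end). As a purely hypothetical sketch of the kind of record such a training management system might keep, with every name and field ours rather than the paper's:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TrainingSubmission:
    """Hypothetical record of one practice submission in a contest-training system."""
    student_id: str
    problem_id: str
    language: str          # e.g. "C++" or "Java"
    verdict: str           # e.g. "Accepted", "Wrong Answer", "Time Limit Exceeded"
    submitted_at: datetime

def acceptance_rate(rows: list[TrainingSubmission]) -> float:
    # A statistic a coach might track when managing training.
    if not rows:
        return 0.0
    return sum(r.verdict == "Accepted" for r in rows) / len(rows)
```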
The ACM Paper Format Standard

ACM Word Template for SIG Site

1st Author, 1st author's affiliation, 1st line of address, 2nd line of address, telephone number (incl. country code), 1st author's e-mail address
2nd Author, 2nd author's affiliation, 1st line of address, 2nd line of address, telephone number (incl. country code), 2nd e-mail
3rd Author, 3rd author's affiliation, 1st line of address, 2nd line of address, telephone number (incl. country code), 3rd e-mail

ABSTRACT
As network speed continues to grow, new challenges in network processing are emerging. In this paper we first study the progress of network processing from a hardware perspective and show that I/O and memory systems have become the main bottlenecks to performance improvement. Based on the analysis, we conclude that conventional solutions for reducing I/O and memory access latencies are insufficient to address the problem. Motivated by these studies, we propose an improved DCA combined with an INIC, with contributions in optimized architecture, innovative I/O data transfer schemes, and improved cache policies. Experimental results show that our solution reduces cycles by 52.3% and 14.3% on average for receiving and transmitting, respectively. I/O and memory traffic are also significantly decreased. Moreover, we investigate the behavior of the I/O and cache systems during network processing and present some conclusions about the DCA method.

Keywords
Keywords are your own designated keywords.

1. INTRODUCTION
Recently, many researchers have found that the I/O system becomes the bottleneck to network performance improvement in modern computer systems [1][2][3]. Designed to support compute-intensive applications, the conventional I/O system has obvious disadvantages for fast network processing, in which bulk data transfer is performed. The lack of locality support and the high latency are the two main problems of the conventional I/O system, and they have been widely discussed [2][4].

To overcome these limitations, an effective solution called Direct Cache Access (DCA) was suggested by Intel [1]. It delivers network packets from the Network Interface Card (NIC) into the cache instead of memory, to reduce data access latency. Although the solution is promising, DCA has proved insufficient to reduce access latency and memory traffic, owing to several limitations [3][5]. Another effective solution is the Integrated Network Interface Card (INIC), used in many academic and industrial processor designs [6][7]. The INIC was introduced to reduce the heavy burden of I/O register access in network drivers and interrupt handling. But a recent report [8] shows that the benefit of an INIC is insignificant for state-of-the-art 10GbE network systems.

In this paper, we focus on highly efficient I/O system design for network processing on a general-purpose processor (GPP). Based on an analysis of existing methods, we propose an improved DCA combined with an INIC to reduce I/O-related data transfer latency. The key contributions of this paper are as follows:
- Review the progress of network processing from a hardware perspective and point out that the I/O and related last-level memory systems have become the obstacle to performance improvement.
- Propose an improved DCA combined with an INIC for I/O subsystem design, to address the inefficiency of the conventional I/O system.
- Give a framework for the improved I/O system architecture and evaluate the proposed solution with micro-benchmarks.
- Investigate I/O and cache behavior during network processing on the proposed I/O system.

The paper is organized as follows. In Section 2, we present the background and motivation. In Section 3, we describe the improved DCA combined with the INIC and give a framework for the proposed I/O system implementation. In Section 4, we first describe the experimental environment and methods, and then analyze the experimental results. In Section 5, we survey related work. Finally, in Section 6, we discuss our solution against many existing technologies and draw some conclusions.

2. Background and Motivation
In this section, we first review the progress of network processing and today's main bottlenecks to network performance improvement. Then, from the perspective of computer architecture, we give a deeper analysis of the network system and present the motivation of this paper.

2.1 Network processing review
Figure 1 illustrates the progress of network processing. Packets from the physical line are sampled by the Network Interface Card (NIC). The NIC performs address filtering and flow control operations, then sends the frames to the socket buffer and notifies the OS, via interrupts, to invoke network stack processing. When the OS receives the interrupts, the network stack accesses the data in the socket buffer and calculates the checksum. Protocol-specific operations are performed layer by layer in stack processing. Finally, data is transferred from the socket buffer to the user buffer, depending on the application. Commonly this operation is done by memcpy, a system function of the OS.

Figure 1. Network Processing Flow

The time cost of network processing can be broken down mainly into the following parts: interrupt handling, NIC driver, stack processing, kernel routines, data copy, checksum calculation, and other overheads. The first four parts are considered packet cost, meaning the cost scales with the number of network packets. The rest are considered bit cost (also called data-touch cost), meaning the cost is proportional to the total I/O data size. The proportions of these costs depend strongly on the hardware platform and the nature of the application. There are many measurements and analyses of network processing costs [9][10]. Generally, the kernel routine cost ranges from 10% to 30% of the total cycles; the driver and interrupt handling costs range from 15% to 35%; the stack processing cost ranges from 7% to 15%; and the data-touch cost takes up 20% to 35%. With the development of high-speed networks (e.g., 10/40 Gbps Ethernet), an increasing share for kernel routine, driver, and interrupt handling costs is observed [3].
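Written out with illustrative symbols (the paper itself reports only the percentage ranges above), the packet-cost/bit-cost decomposition of Section 2.1 is:

$$C_{\text{total}} \approx N_{\text{pkt}}\,(c_{\text{int}} + c_{\text{drv}} + c_{\text{stack}} + c_{\text{kern}}) + B\,(c_{\text{copy}} + c_{\text{csum}})$$

where $N_{\text{pkt}}$ is the number of packets, $B$ is the total I/O data size, the first parenthesis collects the per-packet costs (interrupt handling, driver, stack processing, kernel routines), and the second collects the per-bit data-touch costs (data copy and checksum).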
2.2 Motivation
To reveal the relationships among the parts of network processing, we investigate the corresponding hardware operations. From the perspective of computer hardware architecture, network system performance is determined by three domains: CPU speed, memory speed, and I/O speed. Figure 2 depicts the relationship.

Figure 2. Network xxxx

Obviously, the network subsystem can achieve its maximal performance only when the three domains above are in balance; that is, the throughput or bandwidth of each hardware domain should be equal to the others'. In practice this is hard for hardware designers, because the characteristics and physical implementation technologies differ for CPU, memory, and I/O system (chipset) fabrication. The speed gap between memory and CPU, a.k.a. "the memory wall", has received special attention for more than ten years, but it is still not well addressed. The disparity between the data throughput of the I/O system and the computing capacity provided by the CPU has also been reported in recent years [1][2].

Meanwhile, it is clear that the major time costs of network processing mentioned above are associated with I/O and memory speeds, e.g., driver processing, interrupt handling, and memory copy costs. The most important property of network processing is the "producer-consumer locality" between every two consecutive steps of the processing flow: data produced in one hardware unit will be accessed immediately by another unit; for example, data transported from the NIC into memory will soon be accessed by the CPU. With conventional I/O and memory systems, however, the data transfer latency is high and this locality is not exploited.

Based on the analysis above, we observe that the I/O and memory systems are the limiting factors for network processing. Conventional DCA or INIC alone cannot successfully address the problem, because each is inefficient in either I/O transfer latency or I/O data locality utilization (discussed in Section 5). To overcome these limitations, we present a combined DCA-with-INIC solution. The solution not only takes the advantages of both methods but also makes several improvements in memory system policies and software strategies.

3. Design Methodologies
In this section, we describe the proposed DCA combined with the INIC and give a framework for the implementation. First, we present the improved DCA technology and discuss the key points of incorporating it into I/O and memory system design. Then, the important software data structures and the details of the DCA scheme are given. Finally, we introduce the system interconnection architecture and the integration of the NIC.

3.1 Improved DCA
To reduce data transfer latency and memory traffic in the system, we present an improved Direct Cache Access solution. Unlike the conventional DCA scheme, our solution carefully considers the following points.

The first is cache coherence. Conventionally, data sent from a device by DMA is stored in memory only, and for the same address a different copy of the data may be stored in the cache, which usually requires an additional coherence unit to perform snoop operations [11]; when DCA is used, however, I/O data and CPU data are both stored in the cache, with one copy per memory address, as shown in Figure 3. Our solution therefore modifies the cache policy to eliminate the snooping operations; coherence operations can be performed by software when needed. This reduces much memory traffic in systems with coherence hardware [12].

Figure 3. xxxx (panels: (a) cache coherence with conventional I/O; (b) cache coherence with DCA I/O; the figure body contrasting CPU and I/O writes to the same address *(addr) is not preserved in this copy)

The second is cache pollution. DCA is a mixed blessing for the CPU: on one side, it accelerates data transfer; on the other, it harms the locality of other programs executing on the CPU and causes cache pollution. Cache pollution depends strongly on the I/O data size, which is usually quite large. For example, one Ethernet packet contains at most 1492 bytes of normal payload, and at most 65536 bytes of large payload with Large Segment Offload (LSO). That means that for a common network buffer (usually 50 to 400 packets), up to somewhere between 400 KB and 16 MB of data is sent to the cache.
Data of this size causes cache performance to drop dramatically. In this paper, we carefully investigate the relationship between the size of the I/O data sent by DCA and the size of the cache system. To achieve the best cache performance, a DCA scheme is also suggested in Section 4. Scheduling the data sent with DCA is an effective way to improve performance, but it is beyond the scope of this paper.

The third is DCA policy. DCA policy refers to determining when, and which part of, the data is transferred with DCA. Obviously the scheme is application-specific and varies with user targets. In this paper, we reserve a specific memory address space in the system to receive the data transferred with DCA. The addresses of the data should be remapped to that area by the user or by compilers.

3.2 DCA Scheme and details
To accelerate network processing, many important software structures used in the NIC driver and the stack are coupled with DCA. NIC descriptors and the associated data buffers receive special attention in our solution. The former are the data transfer interface between DMA and CPU, and the latter contain the packets. For further study, each packet stored in a buffer is divided into the header and the payload. Normally the headers are accessed frequently by protocols, but the payload is accessed only once or twice (usually via memcpy) in a modern network stack and OS. The details of the related software data structures and the network processing flow can be found in previous work [13].

The process of transferring one packet from the NIC to the stack with the proposed solution is illustrated in Table 1. All the access latency parameters in Table 1 are based on a state-of-the-art multi-core processor system [3]. One thing to notice is that the cache access latency from I/O is nearly the same as that from the CPU, but the memory access latency from I/O is about 2/3 of that from the CPU, owing to the complex hardware hierarchy above main memory.

Table 1. Table captions should be placed above the table. (The table body is not preserved in this copy.)

We can see that the DCA-with-INIC solution saves over 95% of CPU cycles in theory and avoids all traffic to the memory controller. In this paper, we transfer the NIC descriptors and the data buffers, including both headers and payloads, with DCA to achieve the best performance. When the cache is small, transferring only the descriptors and headers with DCA is an alternative.

DCA performance depends strongly on the system cache policy. For the cache system, a write-back with write-allocate policy clearly helps DCA achieve better performance than write-through with write-no-allocate. Based on the analysis in Section 3.1, we do not use snooping cache technology to maintain coherence with memory; cache coherence for other, non-DCA I/O data transfers is guaranteed by software.
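To make the descriptor/header/payload split of Section 3.2 concrete, here is a small sketch of the transfer policy. It is our illustration, and the field names, sizes, and the simple budget rule are assumptions rather than the paper's design:

```python
from dataclasses import dataclass

HEADER_LEN = 64          # assumed protocol-header length in bytes
DCA_BUDGET = 256 * 1024  # assumed share of cache reserved for DCA data

@dataclass
class NicDescriptor:
    """Minimal stand-in for a NIC receive descriptor."""
    buffer_addr: int  # address of the packet buffer
    length: int       # total packet length in bytes

def dca_plan(desc: NicDescriptor, cache_small: bool) -> dict:
    # Descriptors and headers always go to cache via DCA; the payload goes
    # to cache only when the cache is large enough (Section 3.2's fallback:
    # with a small cache, send only descriptors and headers with DCA).
    payload_len = max(desc.length - HEADER_LEN, 0)
    send_payload = (not cache_small) and payload_len <= DCA_BUDGET
    return {
        "descriptor": "cache",
        "header": "cache",
        "payload": "cache" if send_payload else "memory",
    }

print(dca_plan(NicDescriptor(buffer_addr=0x1000, length=1492), cache_small=True))
# -> {'descriptor': 'cache', 'header': 'cache', 'payload': 'memory'}
```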
3.3 On-chip network and integrated NIC
Footnotes should be Times New Roman 9-point, and justified to the full width of the column.

Use the "ACM Reference format" for references: a numbered list at the end of the article, ordered alphabetically and formatted accordingly. See examples of some typical reference types, in the new "ACM Reference format", at the end of this document. Within this template, use the style named "references" for the text. Acceptable abbreviations for journal names can be found at /reference/abbreviations/. Word may try to automatically 'underline' hotlinks in your references; the correct style is NO underlining.

The references are also in 9 pt., but that section (see Section 7) is ragged right. References should be published materials accessible to the public. Internal technical reports may be cited only if they are easily accessible (i.e., you can give the address to obtain the report within your citation) and may be obtained by any reader. Proprietary information may not be cited. Private communications should be acknowledged, not referenced (e.g., "[Robertson, personal communication]").

3.4 Page Numbering, Headers and Footers
Do not include headers, footers or page numbers in your submission. These will be added when the publications are assembled.

4. FIGURES/CAPTIONS
Place tables/figures/images in the text as close to the reference as possible (see Figure 1). A figure may extend across both columns to a maximum width of 17.78 cm (7").

Captions should be Times New Roman 9-point bold. They should be numbered (e.g., "Table 1" or "Figure 2"); please note that the words Table and Figure are spelled out. Figure captions should be centered beneath the image or picture, and table captions should be centered above the table body.

Figure 1. Insert caption to place caption below figure.

5. SECTIONS
The heading of a section should be in Times New Roman 12-point bold, in all capitals, flush left, with an additional 6 points of white space above the section head. Sections and subsequent subsections should be numbered and flush left. For a section head and a subsection head together (such as Section 3 and subsection 3.1), use no additional space above the subsection head.

5.1 Subsections
The heading of subsections should be in Times New Roman 12-point bold, with only the initial letters capitalized. (Note: for subsections and subsubsections, a word like "the" or "a" is not capitalized unless it is the first word of the heading.)

5.1.1 Subsubsections
The heading for subsubsections should be in Times New Roman 11-point italic, with initial letters capitalized and 6 points of white space above the subsubsection head.

5.1.1.1 Subsubsections
The heading for subsubsections should be in Times New Roman 11-point italic, with initial letters capitalized.

5.1.1.2 Subsubsections
The heading for subsubsections should be in Times New Roman 11-point italic, with initial letters capitalized.

6. ACKNOWLEDGMENTS
Our thanks to ACM SIGCHI for allowing us to modify templates they had developed.

7. REFERENCES
[1] R. Huggahalli, R. Iyer, S. Tetrick, "Direct Cache Access for High Bandwidth Network I/O", ISCA, 2005.
[2] D. Tang, Y. Bao, W. Hu et al., "DMA Cache: Using On-chip Storage to Architecturally Separate I/O Data from CPU Data for Improving I/O Performance", HPCA, 2010.
[3] Guangdeng Liao, Xia Zhu, Laxmi Bhuyan, "A New Server I/O Architecture for High Speed Networks", HPCA, 2011.
[4] E. A. León, K. B. Ferreira, and A. B. Maccabe, "Reducing the Impact of the Memory Wall for I/O Using Cache Injection", 15th IEEE Symposium on High-Performance Interconnects (HOTI'07), Aug. 2007.
[5] A. Kumar, R. Huggahalli, S. Makineni, "Characterization of Direct Cache Access on Multi-core Systems and 10GbE", HPCA, 2009.
[6] Sun Niagara 2, /processors/niagara/index.jsp
[7] PowerPC
[8] Guangdeng Liao, L. Bhuyan, "Performance Measurement of an Integrated NIC Architecture with 10GbE", 17th IEEE Symposium on High Performance Interconnects, 2009.
[9] A. Foong et al., "TCP Performance Re-visited", IEEE Int'l Symposium on Performance Analysis of Software and Systems, Mar. 2003.
[10] D. Clark, V. Jacobson, J. Romkey, and H. Salwen, "An Analysis of TCP Processing Overhead", IEEE Communications, June 1989.
[11] J. Doweck, "Inside Intel Core Microarchitecture and Smart Memory Access", Intel White Paper, 2006.
[12] Amit Kumar, Ram Huggahalli, "Impact of Cache Coherence Protocols on the Processing of Network Traffic".
[13] Wenji Wu, Matt Crawford, "Potential Performance Bottleneck in Linux TCP", International Journal of Communication Systems, Vol. 20, Issue 11, pages 1263-1283, November 2007.
[14] Weiwu Hu, Jian Wang, Xiang Gao, et al., "Godson-3: A Scalable Multicore RISC Processor with x86 Emulation", IEEE Micro, 2009, 29(2): pp. 17-29.
[15] Cadence Incisive Xtreme Series, /products/sd/xtreme_series.
[16] Synopsys GMAC IP, /dw/dwtb.php?a=ethernet_mac
[17] D. J. Miller, P. M. Watts, A. W. Moore, "Motivating Future Interconnects: A Differential Measurement Analysis of PCI Latency", ANCS, 2009.
[18] Nathan L. Binkert, Ali G. Saidi, Steven K. Reinhardt, "Integrated Network Interfaces for High-Bandwidth TCP/IP", Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2006.
[19] G. Liao, L. Bhuyan, "Performance Measurement of an Integrated NIC Architecture with 10GbE", HotI, 2009.
[20] Intel Server Network I/O Acceleration, /technology/comms/perfnet/download/ServerNetworkIOAccel.pdf

Columns on the last page should be made as close as possible to equal length.
A Paper on Artificial Intelligence in Military Development

Artificial intelligence (AI), a frontier of today's scientific and technological development, is being applied ever more widely in the military domain, with far-reaching effects on military strategy, tactics, and the very form of war.
This paper examines the role, impact, and future trends of AI in military development.

Introduction
As technology advances, AI has penetrated every sector of society; in the military domain especially, its applications are step by step changing the face of war. From automated weapon systems to intelligence analysis to unmanned combat platforms, AI is becoming an important component of military power.

Applications of AI in the military domain
AI is applied to military affairs in many ways, chiefly the following:
1. Intelligence collection and analysis: AI can process and analyze large volumes of data quickly, helping military decision-makers judge the battlefield situation more accurately and predict enemy action.
2. Automated weapon systems: systems that can identify targets and attack autonomously, raising combat efficiency and precision.
3. Unmanned combat platforms: drones, unmanned vessels, and unmanned ground vehicles reduce casualties while increasing the flexibility and stealth of operations.
4. Cyber and electronic warfare: AI techniques can effectively jam and disrupt enemy communications and command systems.
5. Simulation training and decision support: AI can simulate battlefield environments in support of military training and assist decision-makers in drawing up operational plans.

The impact of AI on military strategy
The development of AI has had a significant impact on military strategy:
1. A shift in strategic thinking: AI makes strategy rely more on data and algorithms, and traditional strategic thinking must adapt to this change.
2. A transformation of modes of combat: the wide use of unmanned combat platforms diversifies operations while challenging traditional tactics and operational principles.
3. Challenges to the ethics of war: the use of automated weapon systems has triggered debate over wartime ethics and the attribution of responsibility.

Challenges for AI in military development
Although AI brings many advantages to military development, challenges remain:
1. Technical reliability: the reliability and stability of automated weapon systems are key to their wide adoption.
2. Security and privacy: protecting information security and personal privacy during intelligence collection and analysis is an important issue.
3. International law and ethics: military applications of AI must conform to international law and ethical standards.
An Undergraduate Thesis in Military Science

On Informationized Warfare
Candidate No.: 110211136520  Name: Huang Shuai

Abstract: Informationized warfare is a form of war.
This form is tied to the form of society: it bears the characteristics of the information age and is a product of that age.
Informationized warfare may be defined as follows: a war that makes massive use of information technology and informationized weaponry and is fought as an integrated whole across the land, sea, air, space, and electromagnetic dimensions of the battlefield.
Compared with mechanized warfare, such war is a qualitative leap and a comprehensive revolution.
It has brought profound change to weaponry, operational command, and modes of combat, and command of information will become the focus of contention on the battlefield.
We must therefore attach great importance to the role of information warfare in high-technology war and vigorously strengthen the informationization of the armed forces.
Key words:

The 21st century is the age of information.
Information has become one of the three industrial pillars of this century.
People invoke the word "information" at every moment.
What, then, is information? Modern science takes it to be the content carried in the messages, instructions, data, and symbols that things emit.
By acquiring and recognizing the different information of nature and society, people distinguish different things, and so come to know and transform the world.
In all communication and control systems, information is a universal form of connection.
In 1948 the mathematician Shannon, in his paper "A Mathematical Theory of Communication", stated that "information is that which removes random uncertainty".
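Shannon also made this definition quantitative: for a source $X$ whose outcomes occur with probabilities $p_i$, the average uncertainty removed by observing it is its entropy,

$$H(X) = -\sum_i p_i \log_2 p_i$$

so the less probable an outcome is, the more uncertainty its observation removes, and the more information it carries.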
The American mathematician Norbert Wiener, the founder of cybernetics, held in his "Cybernetics: or Control and Communication in the Animal and the Machine" that information is "the name for the content that we exchange with the outer world as we adapt to it and exert our control upon it".
The British scholar Ashby held that the nature of information lies in the variety inherent in things themselves.
On the basis of the views above, I would define information as follows: information is the reflection of the existence, connections, interactions, and development of objective things; it is a special kind of message.
Its specialness lies in being previously unknown, in removing that unknownness, and in being interrelated.

I. Understanding the concepts of informationization and informationized warfare
"Informationization" is a term with a strong technological flavor that emerged as human society moved from industrialization toward informationization.
The concept is now widely applied in daily work and life as well as in the military domain, producing notions such as societal informationization, informationized home appliances, informationized weaponry, and informationized warfare.
Because how this concept is delimited has considerable bearing on the study of related questions, it is necessary to analyze it so as to grasp its connotation accurately.
Are Conference Papers Indexed by ACM Recognized by Universities?

Part One: What kinds of conference papers can be indexed by SCI or EI? (reposted)

There are many indexes now, including SCI, SCIE, EI, ISTP, and ACM.
Many people care about which papers get indexed, so here is a short note for reference; absolute correctness is not guaranteed, but it is roughly right.

1. SCI. SCI covers some 3,500 journals, most of them famous; among Chinese English-language journals, only a few, such as Chinese Science Bulletin, Science in China, and the Journal of Computer Science and Technology (in English), are SCI-indexed.
Some very famous conferences may be SCI-indexed, but their papers are generally first published by some journal (usually a Springer one) and only then indexed.
To check whether your own paper has been SCI-indexed, query Web of Science Proceedings (ISTP/ISSHP) from the library home page.

2. SCIE. SCIE is the expansion of SCI; it had a different originating institution, but the databases are said to have been merged.
It covers more than 5,000 well-known journals, plus the proceedings of high-level conferences (generally published formally by Springer as special issues).
Not many Chinese journals are SCIE-indexed either; Acta Electronica Sinica is one example.
It is fair to say that SCIE, and SCIE journals in particular, are also quite good.
Generally, proceedings published in LNCS (not all Springer proceedings) are SCIE-indexed; whether a particular journal or conference is covered can be checked the same way as for SCI.

3. EI. EI indexing is divided into Page One and Compendex, and Page One does not count as a full EI record.
Many journals are covered only by EI Page One, e.g., the Journal of Wuhan University of Technology (natural science edition) and the Wuhan University Journal (English edition).
Journals indexed in full text by Compendex are called core sources; their level is relatively high, and China has quite a few, e.g., Chinese Journal of Computers, Journal of Software, Journal of Computer Research and Development, and many university journals, including Huagong Xuebao.
IEEE conferences are all EI-indexed, but whether in Compendex or only Page One varies; most are in Compendex.
EI-indexed papers can be looked up in the library's Engineering Village (EI Village).

4. ISTP. ISTP is called the third major index (counting SCI and SCIE together as one), but its level is lower; since it does not count for us, no detailed analysis is given here. Many papers indexed by EI, SCI, or ACM are also ISTP-indexed.
Full Text of a Long Graduation Thesis

Undergraduate Graduation Design (Class of 2012)
Title:
College:
Major and class:
Student No.:
Name:
Supervisor:
Date of completion: June 2012

Abstract: This graduation design concerns the signal processing of a self-developed laser target-shooting system, and on that basis the realization of the complete shooting system.
The laser target-shooting system consists mainly of a semiconductor laser gun, photoelectric detectors, and a signal processing circuit; the signal processing is the key to the whole system.
In the shooting process, the laser gun emits a laser pulse, the photoelectric target receives the pulse, and a series of signal processing steps finally yields the shooting result.
The photoelectric target is composed of many photoelectric detector blocks; the detector at each position carries its own number, and the numbering rule was fixed with the practical conditions of shooting in mind.
The score of a shot is decided by the number of the photoelectric detector the laser hits.
The main signal processing steps of the system are signal amplification, encoding, and data transmission.
The signal picked up by a photoelectric detector is passed to the corresponding amplifier circuit, which is built around integrated operational amplifiers.
Following the numbering rule assigned to the detectors, a multi-channel priority encoder encodes the signal.
Finally the code value is sent to a computer over a serial port, and the computer's capabilities are used to process the shooting results in various ways.
Serial data transmission to the computer is implemented with an 89C2051 microcontroller.
The 89C2051 program was designed and debugged with the Keil compiler; its main functions are to control the serial transfer of the data and to communicate with the computer over the serial port.
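The thesis gives no PC-side listing; as a hedged illustration of the receiving end, here is a minimal sketch using the pyserial package, in which the port name, framing, and code-to-score mapping are all assumptions:

```python
import serial  # pyserial: the PC side of the 89C2051 serial link

PORT, BAUD = "COM1", 9600  # assumed port and baud rate

# Hypothetical mapping from the priority encoder's code to a ring score.
CODE_TO_SCORE = {0: 10, 1: 9, 2: 8, 3: 7}

def read_hits() -> None:
    with serial.Serial(PORT, BAUD, timeout=1) as ser:
        while True:
            data = ser.read(1)      # one encoded hit per byte
            if not data:
                continue            # timeout: no shot registered
            code = data[0]
            score = CODE_TO_SCORE.get(code)
            if score is not None:
                print(f"detector {code} hit -> score {score}")

if __name__ == "__main__":
    read_hits()
```

On the MCU side the thesis uses the 89C2051's on-chip UART; the sketch above shows only how the computer could display and tally the codes it receives.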
The signal processing system achieves reliable detection of the signal.
Serial communication with the computer allows the data to be displayed, tallied, and stored on the computer, giving the shooter a very direct and accurate result and helping to improve training effectiveness.
Key words: laser target shooting; signal processing; signal encoding; serial transmission

ABSTRACT
The main aim of this thesis is to design and realize the signal processing of a self-developed laser target-shooting system, and then to realize the whole system. The system consists of a semiconductor laser gun, photoelectric detectors, and a signal processing circuit, which is the key part of the whole system. Target shooting goes through the following steps: the laser gun emits a pulse of laser light, the laser target receives it, and after a series of signal processing steps the shooting result is shown on the computer screen. The laser target consists of silicon photoelectric units encoded with different numbers according to a fixed rule. The shooting result is obtained by detecting the number of the photoelectric unit that receives the laser pulse.
The signal processing of the system mainly consists of signal amplification, signal encoding, and data transmission. The detected photoelectric signal is amplified by operational amplifiers, coded by a multiplex priority encoder according to the prearranged rule, and then transferred to the computer by an 89C2051 MCU through its serial port, after which the computer can process the signal. The program of the 89C2051 MCU was designed and debugged with the Keil compiler; it controls the data transmission with the computer.
The designed signal processing system detects the signal effectively. Through serial data transmission, the computer can process the shooting result (display, statistics, storage, etc.). It provides a direct and exact shooting result for the trainee, and so increases the efficiency of shooting training.
Key words: laser target shooting; signal amplification; signal encoding; serial data transmission

1 Introduction
Present target-shooting training is based mainly on live ammunition, which burdens the defense budget and carries a high risk factor.
Undergraduate Military Theory Essay, Class of 2011

On the South China Sea Situation and China's Position

School of Computer Science and Technology, ACM Class of 2011

Name:
Student No.:
Grade:
Instructor:

November 2011
[Abstract] The South China Sea was once, beyond dispute, part of Chinese territory. Why, then, have the Nansha (Spratly) Islands, long recognized worldwide as Chinese territory, become the object of sovereignty disputes, giving rise to so complicated a South China Sea problem? Although China's domain spans both land and sea, the people's maritime awareness has always been comparatively weak.
China still urgently needs to raise the maritime awareness of the entire nation, from top to bottom, into an awareness attuned to the currents of the contemporary world and to the interests of the Chinese nation.
Only then can China recognize the grim situation it faces in safeguarding its maritime rights and interests, and defend its sovereignty and maritime rights at any cost.

[Key words] the South China Sea problem; sovereignty; maritime awareness

[Prologue] Agenda 21, adopted by the 1992 United Nations Conference on Environment and Development, points out that the ocean is not only an important component of the life-support system but also a precious asset for sustainable development.
This is because the ocean is the largest unit of political geography, an important arena of military activity, and a repository of abundant resources and energy.
Yet of the nearly 3 million square kilometers of sea area that could be assigned to China, some 1.5 million square kilometers are in dispute.
China has oil, gas, and fishery disputes with all of its maritime neighbors (North Korea, South Korea, Japan, the Philippines, Malaysia, Brunei, and Vietnam), and disputes over the ownership of islands with Japan, the Philippines, Malaysia, Brunei, and Vietnam.
We should soberly recognize that China's sea areas face serious challenges.
[Main text] The South China coast has been Chinese territory since the Qin dynasty, and from the Western Han to the late Tang the South China Sea was already Chinese territorial water.
The South China Sea islands have been sacred Chinese territory since ancient times.
The Chinese people were the first to discover these islands and reefs, long used them as bases for fishing and habitation, and developed and managed them diligently generation after generation.
The Chinese government was likewise the first to administer these islands and reefs and to exercise sovereignty over them; the claims of Vietnam and other Southeast Asian countries to sovereignty over the South China Sea are entirely unlawful.
Before 1975 Vietnam explicitly recognized China's territorial sovereignty over the Nansha Islands.
Before the 1970s neither the Philippines nor Malaysia had any legal document or leader's statement that placed the Nansha Islands within its national territory.
The Treaty of Paris of 1898 and the Treaty of Washington of 1900 between the United States and Spain defined the territorial scope of the Philippines explicitly, and it did not include the Nansha Islands.
The 1953 Philippine constitution and the 1951 US-Philippines military alliance treaty confirmed this further, while Malaysia only marked some Nansha islands, reefs, and waters as lying within Malaysia on the continental-shelf map it published in December 1978.
The Nansha group comprises more than five hundred islands and reefs; China holds only nine of them.
While occupying large numbers of islands and reefs, Vietnam, the Philippines, Malaysia, and the others extract more than 18 million tons of oil a year from Nansha waters, which is unquestionably a violation of China's sovereignty and resources.
Even so, the Chinese government has consistently advocated resolving international disputes through peaceful negotiation.
China maintains that all parties concerned should adopt a restrained, calm, and constructive attitude on the Nansha question.
In recent years Vietnam, the Philippines, and others have sent troops to seize unoccupied islands and reefs in the South China Sea, destroyed the sovereignty markers China erected on unoccupied Nansha reefs, and detained or driven off by force Chinese fishermen working there; throughout, China has insisted on discussing and settling these issues with the countries concerned peacefully, through diplomatic channels.
At present, however, facing a South China Sea problem entangled with territorial disputes, resource development, common security, and power rivalry, China will for a considerable time face multiple predictable and unpredictable pressures at home and abroad unless it adopts a clear-cut strategic posture that cuts through the tangle.
A great power with real international influence can never allow its sacred sovereignty to be violated.
Yet with other international forces intervening, China's position becomes still more complicated.
China has always insisted on resolving its differences with the countries concerned through friendly bilateral consultation, on the premise that its sovereignty not be infringed.
But when its sovereignty truly faces an unavoidable threat, China should take measures without hesitation.
To this end I offer the following suggestions for reference:
1. Build nationwide maritime awareness. China is a great power with vast sea areas, yet it has not given them due weight, stressing the land over the sea. Several thousand years of river-valley civilization have left the Chinese people's maritime awareness weak, whereas every strong Western power of the past was a maritime hegemon: nations that turn toward the sea prosper. Experts and scholars hold that it is precisely this weak awareness of the ocean and of maritime rights that has produced today's passivity in handling the South China Sea problem. Strengthening nationwide maritime awareness brooks no delay.
2. China should send more forces to garrison Zengmu Ansha (James Shoal). The South China Sea is the hub of China's foreign trade and also holds extremely rich resources. A strategic point of this kind must be garrisoned; a garrison would also safeguard the security of China's shipping and trade and protect the safety of Chinese fishermen in the South China Sea.
3. Step up the development of the South China Sea. The dispute between the parties is in essence a dispute over resources. China should accelerate the development of South China Sea oil and gas. The area is still far from well developed and has great potential. China's domestic oil and gas resources are not abundant, so speeding up the development of the South China Sea brooks no delay.
In short, we should always insist that China's territory is inviolable; that is a nation's bottom line. In facing the South China Sea problem China's choices are hard: as a great power with international influence it must weigh the international repercussions of its handling of the issue while insisting that its own territory not be violated.
China's position is a very awkward one.
But I believe that so long as China follows a clear strategic line and handles specific issues case by case, every problem can be resolved through negotiation.
China has never shrunk from difficulty or challenge; so long as it holds to principle and advances and retreats with measure, the South China Sea problem can be solved at the root.