ARMSim: a Modeling and Simulation Environment for Agent-based Grid Computing
Simulation Modeling and Analysis

Simulation modeling and analysis is a powerful tool used in various industries to understand complex systems, predict their behavior, and make informed decisions. In this article, we will explore what simulation modeling and analysis are, how they work, and why they are valuable in today's world.

Simulation modeling is the process of creating a computer-based representation of a real system or process. It involves developing a mathematical model that captures the key components and interactions of the system. This model is then used to simulate the behavior and performance of the system under different scenarios and conditions. Simulation analysis, on the other hand, refers to the process of evaluating the output or results generated by the simulation model. It involves analyzing and interpreting the data produced during the simulation to gain insights into the system's behavior and performance.

The first step in simulation modeling and analysis is defining the objectives and scope of the study. This includes identifying the key variables, parameters, and constraints that need to be included in the model. For example, in a manufacturing setting, variables such as production rate, inventory levels, and machine downtime may be of interest.

Once the objectives and scope are defined, the next step is data collection. This involves gathering relevant data about the system or process under study. This data can come from a variety of sources, including historical records, surveys, and observations. In some cases, it may be necessary to create synthetic or hypothetical data to supplement the available information.

After data collection, the model building phase begins. This involves constructing a mathematical representation of the system using specialized software or programming languages. The model should be able to capture the important characteristics and dynamics of the system, such as its inputs, outputs, and interactions.

Next, the model needs to be verified and validated. Verification ensures that the model is free from errors and accurately represents the system. Validation, on the other hand, involves comparing the output of the model with real-world data or expert knowledge to ensure that it accurately captures the system's behavior.

Once the model is verified and validated, simulation experiments can be conducted. These experiments involve running the model using different input values and scenario conditions to generate data on the system's behavior and performance. The output data can then be analyzed using statistical techniques to understand the effects of various factors on the system's performance.

Simulation modeling and analysis provide several benefits. First, they allow decision-makers to experiment with different scenarios and conditions without having to disrupt or modify the real system. This can be particularly valuable in sensitive or high-risk environments, where the consequences of change can be costly or dangerous. Second, they provide a level of detail and visibility that is difficult to achieve through other methods, allowing decision-makers to understand the complex interactions and dependencies within a system and leading to more informed and effective decision-making. Additionally, simulation modeling and analysis can help optimize system performance. By running multiple simulations and analyzing the results, decision-makers can identify bottlenecks, inefficiencies, and areas for improvement. This can lead to cost savings, increased productivity, and enhanced customer satisfaction.

In conclusion, simulation modeling and analysis are valuable tools that enable decision-makers to gain insights into complex systems and make informed decisions. By creating a computer-based representation of a system and running simulations, decision-makers can experiment with different scenarios and conditions to understand the system's behavior and optimize its performance. With the increasing complexity of modern systems, simulation modeling and analysis are becoming essential tools in various industries.
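The steps above (define the scope, collect or synthesize data, build the model, run experiments, analyze the output) can be illustrated with a toy inventory model. The policy, demand distribution, and parameter values below are invented purely for illustration:

```python
import random
import statistics

def simulate_inventory(days, reorder_point, order_qty, seed=0):
    """Toy inventory simulation: each day random demand depletes stock;
    when stock falls to the reorder point, a replenishment order of
    fixed size arrives the next day. Returns the average stock level
    and the total lost sales over the run."""
    rng = random.Random(seed)
    stock, pending, lost_sales = 50, 0, 0
    levels = []
    for _ in range(days):
        stock += pending          # receive yesterday's replenishment order
        pending = 0
        demand = rng.randint(0, 10)
        sold = min(stock, demand)
        lost_sales += demand - sold
        stock -= sold
        if stock <= reorder_point:
            pending = order_qty   # place an order; it arrives next day
        levels.append(stock)
    return statistics.mean(levels), lost_sales

# Simulation analysis: compare scenarios without touching a real system
scenarios = {rp: simulate_inventory(1000, rp, 30) for rp in (5, 15)}
```

Comparing the average stock level and lost sales across reorder points is exactly the kind of what-if experiment that would be costly or disruptive to run on the real system.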
MATLAB English Material: Simulation Based on MATLAB (with Chinese translation)

1. Introduction
Navigation is an essential capability of a mobile robot. During the development of new navigation algorithms, it is necessary to test them on simulated robots and environments before testing on real robots in the real world. This is because (i) robots are expensive; (ii) an untested algorithm may damage the robot during the experiment; (iii) constructing and altering system models under noisy conditions is difficult; (iv) the transient state is difficult to track precisely; and (v) the measurements of the external beacons are hidden during the experiment, yet this information is often helpful for debugging and updating the algorithms.
This paper presents a MATLAB-based simulator that is fully compatible with MATLAB code and makes it possible for robotics researchers to debug their code and perform experiments conveniently at the first stage of their research. Algorithm development is based on MATLAB subroutines with designated parameter variables, which are stored in a file to be accessed by the simulator. Using this simulator, we can build the environment, select parameters, build subroutines, and display outputs on the screen. Data are recorded during the whole procedure, and some basic analyses are also performed.
A Case Study in Simulation and Computing Power Estimation

Simulation and Computing Power Estimation

1. Introduction

Simulation is a powerful tool that can be used to model and analyze complex systems. It can be used to predict the behavior of a system under different conditions and to identify potential problems. However, simulation can also be computationally expensive, especially for large and complex systems. The amount of computing power required for a simulation depends on a number of factors, including: the size of the system being simulated; the complexity of the system being simulated; the level of detail required in the simulation; and the desired accuracy of the simulation.

2. Estimating Computing Power Requirements

There are a number of different methods that can be used to estimate the computing power requirements for a simulation. One common method is to use a benchmark. A benchmark is a set of standard tests that are used to measure the performance of a computer system. By running a benchmark on a computer system, you can get an idea of how well the system will perform on a particular simulation. Another method for estimating computing power requirements is to use a simulation profiler. A simulation profiler is a tool that measures the performance of a simulation. By profiling a simulation, you can identify the parts of the simulation that are most computationally expensive. This information can then be used to optimize the simulation and reduce the computing power requirements.

3. Reducing Computing Power Requirements

There are a number of different ways to reduce the computing power requirements for a simulation. One way is to use a simpler model, which will typically require less computing power than a more complex model. Another way is to use a less accurate simulation, which will typically require less computing power than a more accurate one. Finally, you can also reduce computing power requirements by using a more efficient simulation algorithm, which will typically require less computing power than a less efficient one.

4. Conclusion

Simulation is a powerful tool that can be used to model and analyze complex systems. However, it can also be computationally expensive, especially for large and complex systems. By understanding the factors that affect computing power requirements, you can make informed decisions about how to optimize your simulations and reduce those requirements.
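In this spirit, a minimal simulation profiler can be sketched by timing each named phase of a run to find the computationally expensive parts. The phase names and workloads below are illustrative, not tied to any particular tool:

```python
import time

def profile_phases(phases):
    """Time each named phase of a simulation run and return a dict of
    elapsed seconds, so the most expensive parts can be identified."""
    timings = {}
    for name, fn in phases:
        start = time.perf_counter()
        fn()
        timings[name] = time.perf_counter() - start
    return timings

# Illustrative phases: a coarse model vs. a much more detailed one
timings = profile_phases([
    ("coarse_model", lambda: sum(i * i for i in range(10 ** 3))),
    ("detailed_model", lambda: sum(i * i for i in range(10 ** 6))),
])
```

Here the detailed phase dominates the runtime, which is exactly the information needed to decide where a simpler model or a more efficient algorithm would pay off.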
An advanced modeling and simulation tool for dynamic systems

An Advanced Modeling and Simulation Tool for Dynamic Systems

Levent U. Gökdere, Charles W. Brice, and Roger A. Dougal
Electrical and Computer Engineering Department, University of South Carolina, Swearingen 3A80, Columbia, SC 29208
Email: gokdere@; Tel: 803-777-9314; Fax: 803-777-8045

Key words: Modeling and simulation, dynamic systems, virtual prototyping.

ABSTRACT

In this paper, a new modeling and simulation tool, the Virtual Test Bed (VTB), is introduced. The VTB is a software environment that has been developed for design, analysis, and virtual prototyping of large-scale multi-technical systems [1, 2, 3]. The two most important features of the VTB are that [1, 2, 3]: (i) it has the capability of integrating models that have been created in a variety of languages, such as SPICE (Simulation Program with Integrated Circuit Emphasis), ACSL (Advanced Continuous Simulation Language®) and SABER®, into one simulation environment; and (ii) it provides advanced displays of simulation results, including full-motion animation of mechanical components, oscilloscope-like displays of waveform data, and imaginative mappings of computed results onto the system topology.

Consider a multi-technical dynamic system which consists of mechanical, electronic, and electromechanical components. Simulation of such a system is usually challenging since the existing simulation tools are generally developed for specific applications. For example, ACSL and MATLAB are excellent tools for modeling and simulation of control systems which include mechanical and electromechanical parts for which the differential equations can be easily written. On the other hand, other tools, such as SPICE and SABER, are preferred for the detailed analysis of electric circuits. In particular, SABER is the software of choice in modeling and simulation of power electronic circuits.

One of the main purposes of the VTB is to allow the system designer(s) to model each individual component of a multi-technical system in a language appropriate to that component, and then to bring those separate models into one common simulation environment [1, 2, 3]. In addition to the external models, which are obtained from their source languages by translation, the VTB also has its own native models created in C++. Currently, the VTB library includes native models of basic circuit components, power electronic devices, power converters, electric motors, and mechanical parts.

In this paper, application of the VTB to a motion control system is introduced. Specifically, vector control of a permanent-magnet synchronous motor (PMSM) is considered. In recent years, PMSMs have tended to replace other electric motors in many applications due to their higher torque-to-volume ratio, and the current-command rotor-oriented speed controller (vector controller) for PMSMs is the industry standard for precise tracking of predetermined speed/position trajectories [4, 5]. The simulation of this system within the VTB environment is achieved by the use of VTB native models and/or external models which are translated from their original languages, such as ACSL and MATLAB. In the final version of the paper, a detailed description of the VTB architecture and the simulation results for the above-mentioned system will be presented.

REFERENCES

[1] Charles W. Brice, Levent U. Gökdere, and Roger A. Dougal, "The Virtual Test Bed: An Environment for Virtual Prototyping," Proceedings of the International Conference on Electric Ship (ElecShip'98), pp. 27-31, September 1, 1998, Istanbul, Turkey.
[2] R. A. Dougal, C. W. Brice, R. O. Pettus, G. J. Cokkinides, and A. P. S. Meliopoulos, "Virtual prototyping of PCIM systems - The Virtual Test Bed," Proceedings of the 37th International PCIM'98 Power Electronics Conference, pp. 226-234, November 7-13, 1998, Santa Clara, California.
[3] Levent U. Gökdere, Lei Hua, John Mookken, Charles W. Brice, and Roger A. Dougal, "Operation of imported power converter models in the Virtual Test Bed," Proceedings of the High Frequency Power Conversion Conference (HFPC'99), pp. 99-106, November 9-11, 1999, Chicago, Illinois.
[4] P. Vas, Vector Control of AC Machines, Oxford, U.K.: Clarendon Press, 1990.
[5] W. Leonhard, Control of Electric Drives, Berlin, Germany: Springer-Verlag, 1985.
Automatic Modeling and 3D Navigation Planning of Motion Trajectory for a Manipulator

Computer Simulation, Vol. 37, No. 10, October 2020. Article ID: 1006-9348 (2020) 10-0297-06. CLC number: TP242; Document code: B.

Automatic Modeling and 3D Navigation Planning of Motion Trajectory for a Manipulator

GAO Sheng¹,², LI Zheng¹,², ZHANG Wei¹,²
(1. Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, Liaoning 110016, China; 2. Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, Liaoning 110016, China)

ABSTRACT: In research on the virtual simulation of manipulators, in order to realize automatic modeling of the manipulator and 3D visual display of the motion simulation, a simple and easy 3D visual simulation system for the manipulator was designed based on OpenGL 3D simulation technology. In the system, a reusable configuration library was used to realize automatic modeling of the virtual manipulator. At the same time, a trajectory planning method based on 3D navigation was adopted: the trajectory was automatically tracked by importing the standardized G-code trajectory into the manipulator workspace. Finally, the virtual manipulator was controlled to complete the trajectory planning process and output the motion control commands of the manipulator. Simulation results show that the system realizes automatic modeling of the manipulator and 3D visualization of the automatic trajectory tracking process, which provides a new way for the virtual simulation of manipulators.

KEYWORDS: Automatic modeling; 3D visualization; Motion simulation; Trajectory planning

1. Introduction

At present, manipulator-type industrial robots are widely used in industrial production, welding, material handling, and machining.
Design of a Cognitive Rehabilitation Robot System Based on Machine Vision

… through the interface. If the patient is still unable to complete the task, the robotic arm …

… image information from the time domain to the frequency domain, and target detection is accomplished by comparing the descriptors of reference objects and objects to be tested. …

… a complete assisted rehabilitation system was built, and kinematic modeling and analysis were carried out for the six-degree-of-freedom robotic arm used, covering the arm's motion and obstacle-avoidance strategies. Finally, complete tests were conducted on the rehabilitation system prototype with simulated patients. The simulation and practical test results fully demonstrate the feasibility of the assisted rehabilitation strategy and the efficiency of this assisted rehabilitation robot system.

Keywords: cognitive rehabilitation; machine vision; Fourier descriptors; magnitude-displacement algorithm; robotic arm

… it has been proved that, to a certain extent, MCI can be slowed down or even cured by human intervention. However, due to the lack and uneven distribution of related medical … the rehabilitation process is often untimely and inadequate, lacking continuity and effectiveness. To address this, this project designed an assisted cognitive rehabilitation robot system based on machine vision, aiming to improve the rehabilitation efficiency for MCI and similar conditions.
Chapter 1: Basic Simulation Modeling

But now have better modeling software … more general, flexible, but still (relatively) easy to use
– Can consume a lot of computer time
Simulation Modeling and Analysis – Chapter 1 – Basic Simulation Modeling
– What’s being simulated is the system
– To study system, often make assumptions/approximations, both logical and mathematical, about how it works
– These assumptions form a model of the system
– If model structure is simple enough, could use mathematical methods to get exact information on questions of interest — analytical solution
– Several conferences devoted to simulation, notably the Winter Simulation Conference
• Surveys of use of OR/MS techniques (examples …)
– Must be studied via simulation — evaluate model numerically and collect data to estimate model characteristics
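For instance, the mean waiting time of a simple M/M/1 queue has the analytical solution W_q = λ/(μ(μ − λ)), but the same quantity can also be obtained by evaluating the model numerically and collecting data, which is what simulation does for models too complex to solve exactly. A minimal sketch with illustrative parameter values:

```python
import random

def simulate_mm1_wait(lam, mu, n_customers, seed=0):
    """Estimate the mean waiting time in an M/M/1 queue by simulation,
    using Lindley's recurrence W[n+1] = max(0, W[n] + S[n] - A[n+1])."""
    rng = random.Random(seed)
    wait, total, prev_service = 0.0, 0.0, 0.0
    for i in range(n_customers):
        if i > 0:
            interarrival = rng.expovariate(lam)   # gap to this arrival
            wait = max(0.0, wait + prev_service - interarrival)
        total += wait
        prev_service = rng.expovariate(mu)        # this customer's service time
    return total / n_customers

# With lam = 0.5 and mu = 1.0 the analytical answer is W_q = 1.0; the
# simulation estimate converges toward it as n_customers grows.
estimate = simulate_mm1_wait(0.5, 1.0, 200_000)
```

Here simulation is overkill, but the same loop structure works unchanged for models with no closed-form solution.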
ArmSim: A Full-System Simulator for ARM Processors

ArmSim: A Full-System Simulator for ARM Processors

DENG Libo¹, LONG Xiang¹, GAO Xiaopeng¹
(1. School of Computer Science, Beihang University, Beijing 100083, China)

Abstract: As a fundamental research and development tool for embedded systems, a simulator can assist in system architecture tuning and hardware/software co-design. This paper presents ArmSim, a well-configurable and extensible full-system simulator for ARM processors, on which both application-level and system-level ARM target programs can be run and debugged. The design and implementation of ArmSim are described in detail.

Keywords: ARM processor; simulation; full-system simulator

1. Introduction

With the rapid development of embedded systems, embedded-systems research and development has become an important branch of computer science. Owing to the characteristics of its application domains, embedded-system development usually depends on a specific hardware environment. Because of this excessive dependence on hardware, the traditional development model has an obvious drawback: software development cannot proceed in parallel with hardware development. This both lengthens the development cycle and deprives the design work of flexibility.

To solve these problems, software-based simulators have become one of the main techniques in embedded-system development. Software simulation uses computer software to model all or part of the external characteristics and internal functions of a particular hardware system, emulating the target hardware so faithfully that programs running on the simulator cannot perceive the underlying hardware, just as if they were running on the real platform. In the embedded-systems field, software simulation has been widely applied to hardware/software co-design, development and evaluation of embedded operating systems, and performance evaluation of large embedded applications.

The ArmSim described in this paper is a C-based full-system simulator for ARM processors. ArmSim implements functional simulation of the complete system hardware: it can run not only ELF-format ARM application-level programs but also system-level programs in ELF-image or binary-image format, such as U-Boot. ArmSim also supports source-level remote debugging, via debuggers such as GDB, of ARM programs running on the simulator.
In SIMULATION: Transactions of The Society for Modeling and Simulation International, Special Issue on Modeling and Simulation Applications in Cluster and Grid Computing, SAGE Publications, Vol. 80, No. 4-5, pp. 221-229, 2004.

ARMSim: a Modeling and Simulation Environment for Agent-based Grid Computing

JUNWEI CAO
C&C Research Laboratories, NEC Europe Ltd., Germany
cao@ccrl-nece.de

Abstract. ARMS is an agent-based resource management system for grid computing, in which agents are organized into a hierarchy and cooperate with each other to discover available grid resources using a technique of decentralized resource advertisement and discovery. Since a large-scale deployment of ARMS is not available, the most straightforward way to investigate ARMS performance is through a modeling and simulation approach. In this work, an ARMS performance modeling and simulation environment (ARMSim) is presented. The ARMSim kernel is composed of a model composer and a simulation engine, while users input related information and get simulation outputs through the corresponding GUIs. A case study is included using an example model with over 1000 agents, and several experiments are carried out, each involving nearly 100000 requests. Simulation results are also illustrated and show the impact of the choice of different agent configurations on the overall system performance. ARMSim enables the performance of the ARMS agents to be investigated quantitatively, and simulation results can potentially be utilized at run time for online ARMS performance improvement (e.g. avoiding performance bottlenecks and reducing network traffic).

Keywords: modeling and simulation, multi-agent systems, grid computing, ARMS, ARMSim

1. Introduction

The grid originated as a system for delivering high performance computing power [14] and is now becoming a global infrastructure for large-scale resource sharing and system integration [3].
The grid infrastructure should support both scalability and adaptability in dynamic grid environments. While examples of grid middleware technologies include CORBA [16], JINI [2] and Web Services [12], a multi-agent approach is utilized in our work. In previous work, an agent-based methodology [6] was developed for building large-scale distributed software systems with highly dynamic behaviors. This has been used in the implementation of an agent-based resource management system for grid computing [7, 8], ARMS, where agents are considered to be the main high-level abstraction of grid resources. Each agent can be registered with multiple grid resources, and agents are organized into a hierarchy. Agents can also cooperate with each other, and resource information is advertised and discovered along the agent hierarchy using different configurations.

Two applications of ARMS can be found in the work described in [9] and [10], respectively. In the first application, the ARMS implementation is integrated with multiple local grid schedulers. Each scheduler is responsible for load balancing of a PC cluster using a performance-prediction-driven method and an iterative heuristic algorithm [18]. The ARMS agents perform advertisement and discovery across multiple clusters to achieve overall load balancing at the grid level. In the second application, ARMS performs resource discovery for a grid workflow management system according to user QoS requirements. In both cases, the ARMS deployment is limited to at most 30 agents, which is far from grid size; the performance of the ARMS agents themselves cannot be investigated in detail in the absence of a large-scale implementation.

In this work, a modeling and simulation environment for the ARMS agents (ARMSim) is developed, which can be used to evaluate the overall ARMS agent performance in a quantitative way according to some pre-defined metrics.
ARMSim takes as input all of the performance-related information of the agent system, composes it into a performance model, simulates the resource advertisement and discovery processes step by step, and finally outputs all of the statistical data. ARMSim supports multi-view and real-time display of simulation results. Simultaneous simulation of multiple models and comparison of results can also be performed in the ARMSim environment.

A case study is included in this work with about 1100 agents involved. Thirteen experiments are designed, each with a different agent configuration. During each simulation nearly 100000 requests are sent out, and in some situations nearly 10000000 communications are involved for resource advertisement and discovery. Simulation results show that agent configurations can have very different impacts on system performance, and that the simulation approach is the most straightforward way to enable the system performance to be investigated quantitatively.

The rest of the paper is organized as follows: Section 2 gives an overview of the ARMS system; Section 3 describes the ARMSim structure in detail; Section 4 illustrates a case study with corresponding simulation results; Section 5 summarizes related work; and Section 6 concludes the paper.

2. ARMS

Agents comprise the main components in the ARMS system; the agents are organized into a hierarchy and are designed to be homogeneous. Each agent is viewed as a representative of multiple grid resources at a higher level of grid management. The information of each grid resource can be advertised within the agent hierarchy (in any direction), and agents can cooperate with each other to discover available grid resources [8]. Each agent utilizes Agent Capability Tables (ACTs) to record grid resource information.
An agent can choose to maintain different ACTs corresponding to the different sources of resource information: T_ACT is used to record information about an agent's own registered grid resources; C_ACT to record resource information cached during discovery processes; L_ACT to record information received from lower agents in the hierarchy; and G_ACT to record information from the upper agent in the hierarchy.

There are basically two ways to maintain the contents of L_ACT and G_ACT in an agent, data-pull and data-push, each of which has two approaches: periodic and event-driven. An agent can ask other agents for their resource information either periodically or when a request arrives. An agent can also submit its resource information to other agents periodically or when the resource information is updated. The frequency of the periodic resource information advertisement is also configurable.

Another important process among agents is resource discovery, which is also a cooperative activity. Within each agent, its own registered grid resources (recorded in the T_ACT) are evaluated first. If the requirement can be met locally, the discovery ends successfully. Otherwise resource information in C_ACT, L_ACT and G_ACT is evaluated in turn, and the request is dispatched to the agent which is able to provide the best (or the first) requirement/resource match. If no resource can meet the requirement, the request is submitted to the upper agent. When the head of the hierarchy is reached and an available resource has still not been found, the discovery terminates unsuccessfully.

The ARMS architecture and mechanisms described above allow for system scalability. Most requests are processed in a local domain and need not be submitted to a wider area. Both advertisement and discovery are processed between neighboring agents, and the system has no central structure, which might otherwise act as a potential bottleneck.
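The discovery order described above can be sketched as follows. This is a simplified illustration, not the actual ARMS code: the four ACTs are reduced to dictionaries, and the first sufficient match is taken rather than dispatching the request to the best-matching agent:

```python
class Agent:
    """Toy agent with the four ACTs reduced to dicts that map a
    resource name to its relative performance value."""
    def __init__(self, name, upper=None):
        self.name = name
        self.upper = upper    # upper agent in the hierarchy (None at the head)
        self.t_act = {}       # own registered resources
        self.c_act = {}       # cached during discovery
        self.l_act = {}       # received from lower agents
        self.g_act = {}       # received from the upper agent

def discover(agent, resource, required_perf):
    """Evaluate T_ACT, C_ACT, L_ACT and G_ACT in turn; on failure
    escalate to the upper agent; at the head of the hierarchy with no
    match, the discovery terminates unsuccessfully (returns None)."""
    for act in (agent.t_act, agent.c_act, agent.l_act, agent.g_act):
        perf = act.get(resource)
        if perf is not None and perf >= required_perf:
            return agent.name, perf
    if agent.upper is not None:
        return discover(agent.upper, resource, required_perf)
    return None
```

A request that can be satisfied locally never leaves its agent, which is the scalability property the paragraph above describes.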
This is further investigated in a quantitative way using the ARMSim environment.

Four metrics are defined to evaluate the performance of agent behaviors: discovery speed (v), system efficiency (e), load balancing (b) and success rate (f) [7]. The average agent resource discovery speed (v) during a certain period is calculated as the total number of requests (r) divided by the total number of connections made for discovery (d). The average efficiency of the system (e) is the ratio of the total number of requests (r) during a certain period to the total number of connections made for both discovery (d) and advertisement (a). The workload of each agent (w) is the sum of its outgoing (o) and incoming (i) connection counts, and the mean square deviation of all agent workloads is used to describe the load balancing level of the system (b). The success rate (f) is the ratio of successful resource discoveries (r_f) to the total number of requests (r) during a certain period.

These metrics conflict most of the time; that is, not all metrics can be high at the same time. For example, a quick discovery speed does not mean high efficiency, as quick discovery may sometimes be achieved through the high workload incurred by resource advertisement and data maintenance, leading to low system efficiency. It is necessary to find the critical factors of a practical system, and then to use different configurations to reach high performance. This can be carried out efficiently using a modeling and simulation approach.

3. ARMSim structure

Performance evaluation of resource discovery in a large-scale multi-agent system like ARMS is a difficult task, especially when thousands of agents and tens of thousands of requests and communications are involved. Different configurations of agent behaviors for resource advertisement and discovery can make the overall system behavior very complex.
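The four metric definitions translate directly into code. The sketch below is a plain transcription of the formulas (v = r/d, e = r/(d+a), b = mean square deviation of agent workloads, f = r_f/r), with variable names following the paper's notation:

```python
def arms_metrics(r, d, a, r_f, workloads):
    """Compute the four ARMS performance metrics for a period:
    r   - total number of requests
    d   - connections made for discovery
    a   - connections made for advertisement
    r_f - number of successful discoveries
    workloads - per-agent workload values w = o + i"""
    v = r / d                 # discovery speed
    e = r / (d + a)           # system efficiency
    mean_w = sum(workloads) / len(workloads)
    b = sum((w - mean_w) ** 2 for w in workloads) / len(workloads)  # load balancing
    f = r_f / r               # success rate
    return v, e, b, f
```

A perfectly balanced system gives b = 0, and smaller b means better load balancing, while higher v, e and f are better.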
In this section, the ARMSim modeling and simulation environment is introduced.

The main ARMSim structure is illustrated in Figure 1, which includes a kernel and GUIs. The kernel part performs the modeling and simulation functions, while users input related information and get simulation outputs from the GUIs.

3.1. Inputs/outputs

There are four kinds of information that affect the system performance and must be input into the performance model: the agent hierarchy, the resources, the requests, and the configurations for resource advertisement and discovery. ARMSim supports modeling at both the agent level and the system level. The only components that exist in the model are agents, so agent-level modeling can be used to define all the model attributes for the simulation. However, system-level modeling is also necessary to input information on agent mobility, resource and request distribution, and so on. These are discussed in detail below.

• Agent hierarchy. When a new agent is added to the model, its upper agent should be defined. The upper agent is also configured to add a new lower agent. This information is used to organize agents into a hierarchy in the system model. No cycles are permitted in the hierarchy, since these could cause deadlock during the resource discovery process.

• Requests. Each agent is configured to send different requests periodically. A request item may include several pieces of information: the required resource name, the relative required performance value, and the sending frequency (see examples in Table 3).

• Resources. Each agent is also configured to provide many grid resources, whose performance may vary over time. A resource item may include several pieces of information: the resource name, the relative performance value, and the performance changing frequency (see examples in Table 2). The usage of these attributes is introduced in the ARMSim kernel section below.

• Configurations. Different configurations are defined in each agent to control its behaviors for resource advertisement and discovery. These configurations have been introduced in Section 2, and examples can be found in Table 4.

• Agent mobility. Agent mobility can be defined at the system level only. An agent mobility item may include information on: the agent ID, the new agent ID after the movement, the upper agent ID of the new agent, and the step number at which the movement will happen during the simulation.

• Request distribution. System-level request definitions can ease the modeling process. The same request item does not need to be defined in different agents one by one; ARMSim provides a convenient way to distribute a request definition to different agents once it is defined at the system level.

• Resource distribution. The same resource with the same attributes can also be provided by different agents. System-level resource definitions allow many agents to be configured with the same resource at one time.

• Global configurations. A system-level configuration definition can affect all of the agents in the model and ease the modeling process. Both global configurations and individual configurations can be defined for each agent; however, agent-level configuration definitions have priority over system-level ones.

The information above is input into ARMSim. Examples of this information can be found in Section 4. The ARMSim outputs are all of the simulation results for the four performance metrics. All of the details of resource advertisement and discovery are also recorded in a simulation log file for further reference. The use of the input information to produce outputs during the modeling and simulation processes within the ARMSim kernel is introduced below.

3.2. ARMSim kernel

The ARMSim kernel is composed of a model composer and a simulation engine.
The kernel will perform the main modeling and simulation functions and transform the raw simulation data to statistical results to support the four performance metrics.The model composer organizes the input information into a performance model before the simulation process begins. During this phase, the system-level information is transferred into an agent-level representation as much as possible. For example, system-level requests and resources will be used to configure a certain percentage of agents. The global configurations are used to define the configurations of each agent, except for agents that have already been defined with agent-level configurations. After these, a performance model is composed ready for evaluation. The information on agent movements can only be stored at the system level and will not be used to configure any agent in the system.The simulation engine will start a simulation process once a performance model and a total number of simulation steps are defined. The whole process is illustrated in Figure 2, which is divided into seven phases, five of which are within the main simulation loop.•Initialize simulation. Once a simulation process is started, an environment will be setup for simulating resource advertisement and discovery. All of the GUIs for performance modeling are locked. The performance model cannot be modified during the simulation. The simulation results are also initialized for recording the outputs.•Set resource changes. This is performed at the beginning of each simulation step. The performance ofa grid resource may change at each step. There is alsothe frequency of change in performance of each grid resource. The performance of each resource may or may not be changed at each step according to this frequency.•Set agent movements. Each agent mobility item contains a step number when a movement will happen during the simulation. 
An agent movement indicates not only a change in the agent hierarchy, but also a change in the related resources. Additional resource advertisement occurs when an agent is moved: old resource information is announced for deletion, and new information is advertised along the new agent hierarchy.

• Advertise resources. Both event-driven and periodic resource advertisement are considered during this phase. Each agent acts on its ACTs according to its configurations. Each connection between agents for resource advertisement is recorded in the simulation log file and affects the corresponding simulation results.

• Send requests and discover resources. Whether a request is sent is decided according to its frequency. Each agent that receives the request looks up its ACTs in turn, according to its configuration for resource discovery. Every detail of a resource discovery process is recorded in the log file, and related simulation results, such as agent connection counts, are recorded.

• Calculate and visualize simulation results. At the end of each simulation step, the raw simulation data are summarized and the corresponding statistical results on the performance metrics are calculated. These results are shown on the ARMSim GUIs dynamically to give the user a view of what is happening during the simulation.

• Finalize simulation. After all simulation steps are completed, ARMSim returns to the modeling mode and all the modeling GUIs are unlocked. The GUIs for visualizing the simulation results are not refreshed until the next simulation begins, and can thus be used for further analysis.

ARMSim also supports the evaluation of multiple models simultaneously. The user can apply different configurations in different models, simulate them, and compare the results.

3.3. User interfaces

The ARMSim environment is implemented using Java.
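The seven-phase process of Section 3.2, with five phases inside the main simulation loop, can be sketched as follows. This is an illustration only: the class and phase names are hypothetical and the real ARMSim kernel (implemented in Java) is far richer.

```python
class SimulationEngine:
    """Illustrative sketch of the ARMSim simulation loop (hypothetical names)."""

    # The five phases executed within the main simulation loop, in order.
    LOOP_PHASES = ("set_resource_changes", "set_agent_movements",
                   "advertise_resources", "send_requests_and_discover",
                   "calculate_and_visualize")

    def __init__(self, total_steps):
        self.total_steps = total_steps
        self.phases = []  # records (step, phase) for every phase executed

    def run(self):
        # Phase 1: one-off initialization (lock modeling GUIs, reset results).
        self.phases.append((0, "initialize"))
        # Phases 2-6: the main simulation loop, repeated every step.
        for step in range(1, self.total_steps + 1):
            for phase in self.LOOP_PHASES:
                self.phases.append((step, phase))
        # Phase 7: one-off finalization (unlock modeling GUIs).
        self.phases.append((self.total_steps + 1, "finalize"))

engine = SimulationEngine(total_steps=220)
engine.run()
print(len(engine.phases))  # → 1102 (2 one-off phases + 5 phases x 220 steps)
```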
ARMSim provides graphical user interfaces for modeling and for simulation respectively.

The user can add, edit, and delete agents in the model via the main GUI window (see the example in Figure 3). The left column of the main window lists all of the agents, and a brief description of the selected agent is shown below the agent list. The text field above the agent list can be used to search for an agent by its name. The model can also be saved and reloaded for later reuse.

Other ARMSim GUIs are used to visualize simulation results for the user; examples can be found in Figures 4 and 5. During each simulation step the results are updated in each of the GUIs. The simulator provides multiple views of the simulation data, all of which are updated in real time. In the step-by-step view of Figure 4, the simulation data (r, a, d, rf) and the statistical data (v, e, b, f) at each step are shown. In the accumulative view shown in Figure 5, the statistical data over the accumulated steps are shown. In the agent view, the user can inspect the ACT contents of a selected agent. The log view shows the simulation log file, which records the details of all resource advertisement and discovery processes during the simulation.

3.4. Main features

ARMSim is developed to provide quantitative information on the performance of resource advertisement and discovery in the ARMS agent system.
The main features of the ARMSim environment can be summarized as follows:

• support for all of the performance metrics and configurations described in the ARMS system;
• support for two levels of system modeling, for easy and convenient performance modeling;
• support for modeling agent mobility and simulating the additional resource advertisement processes it causes;
• support for multi-view, real-time display of simulation results;
• support for simultaneous simulation of multiple models and comparison of their results.

The use of the ARMSim environment for a performance study is introduced in the next section through a case study, and simulation results are included to show the impact of the choice of different agent configurations on the system's resource discovery performance.

4. A case study

In this section, an example model is given and experimental results are included to show how the ARMSim environment can be used to steer ARMS performance optimization.

4.1. Example model

A screenshot of the example model built in the ARMSim environment is shown in Figure 3.

Figure 3. The ARMSim modeling

The attributes of the example model are shown in several tables. The model is composed of about 1100 agents, each representing one or more grid resources that may provide computing capabilities with different performances. These agents are organized in a three-layer hierarchy. The identity of the root agent is gem. There are 100 agents registered to gem, ten of which each have 100 lower agents of their own. The hierarchy is given in Table 1.

Agents                    Upper agent
gem                       -
coord~0 … coord~99        gem
agent01~0 … agent01~99    coord~5
agent02~0 … agent02~99    coord~15
agent03~0 … agent03~99    coord~25
agent04~0 … agent04~99    coord~35
agent05~0 … agent05~99    coord~45
agent06~0 … agent06~99    coord~55
agent07~0 … agent07~99    coord~65
agent08~0 … agent08~99    coord~75
agent09~0 … agent09~99    coord~85
agent10~0 … agent10~99    coord~95

Table 1. Example model: agent hierarchy

To simplify the modeling process, we define the resources and requests in the agents at the system level, as shown in Tables 2 and 3 respectively. The resources and requests are all named HPC, but with different relative performance values. A resource frequency value of 5, for example, means the resource performance will change between 0 and the performance value once every 5 steps during the simulation. A request frequency value of 5 means a request will be sent once every 5 steps during the simulation. The distribution value defines how many agents will be configured with the corresponding resource or request.

Name   Relative performance   Frequency   Distribution (%)
HPC    1000                   5           20
HPC    800                    8           30
HPC    600                    10          40
HPC    400                    15          50
HPC    200                    20          60

Table 2. Example model: resources

Name   Relative performance   Frequency   Distribution (%)
HPC    100                    5           80
HPC    200                    8           70
HPC    300                    10          60
HPC    400                    15          50
HPC    500                    20          40
HPC    650                    30          30
HPC    800                    40          20
HPC    900                    50          10
HPC    1000                   60          10

Table 3. Example model: requests

Finally, the model must define how each agent uses the ACTs to optimize performance. In this case study six experiments have been considered, each of which has the same configurations as described in Tables 1, 2 and 3, but different configurations as described in Table 4.

                                Experiment number
Configurations                  1   2   3   4   5   6
Using T_ACT                     X   X   X   X   X   X
Using C_ACT                         X   X   X   X   X
L_ACT: event-driven data-push           X   X   X   X
G_ACT: periodic data-pull*                  X   X   X
L_ACT: periodic data-pull*                      X   X
G_ACT: event-driven data-push                       X

Table 4. Example model: configurations

To simplify the experiments, we only define the configurations at the system level, which means all of the agents in the model use the same configurations. A mixture of configurations is possible but is not considered in these experiments. In the simulation results included in the section below, the different configurations are compared by considering their impact on agent performance.

4.2. Simulation results

Figure 4. The ARMSim simulation: a step-by-step view

When the simulation begins, a thread is created to calculate the statistical data step by step. The request sending and resource discovery phase is the key part of the whole simulation process. The ARMSim environment can show the results in multiple views. The step-by-step and accumulative views are especially interesting in this case study; they are illustrated in Figures 4 and 5 respectively.

Figure 5. The ARMSim simulation: an accumulative view

As shown in Figure 5, six experiments are carried out simultaneously for 220 steps and the results are displayed for all of the performance metrics. Note that the simulation processes in the first 20 steps are not stable and are not included in Figures 4 and 5. We are especially interested in the balance between resource discovery speed and system efficiency in this case study. It is clear that the different configurations used in these experiments have different impacts on discovery speed and system efficiency. Detailed simulation data can be found in Table 5, and each of the six situations is also described below.

1. Only T_ACTs are used in each agent. Each time a request arrives, many connections must be made and traversed in order to find a satisfactory grid resource. In this situation, the discovery speed and system efficiency are both rather low.

2. The cache (C_ACT) is used in each agent, which needs no extra data maintenance and improves the discovery speed and system efficiency a little. The improvement is small because the dynamics of the resources make the cached information unreliable and reduce its usefulness.

3. The L_ACT is added in each agent. Each time a resource's performance changes, the corresponding agent advertises the change upward in the hierarchy. This adds additional data maintenance workload to the system, but greatly decreases the discovery workload, so both the discovery speed and the system efficiency are improved.

4. The G_ACT is also added.
Each agent gets global resource information from its upper agent once every 10 simulation steps, which adds additional data maintenance workload. From the simulation results, we can see that this improves the discovery speed further, but the system efficiency decreases because of the additional data maintenance.

5. Another maintenance of the L_ACT is added: each agent asks its lower agents for resource information once every 10 steps. This improves the discovery speed a little further but adds more data maintenance workload, which again decreases the system efficiency.

6. Another maintenance of the G_ACT is added. This improves the discovery speed only slightly, but adds a large further data maintenance workload, which decreases the system efficiency drastically.

Performance metrics*
No.   r       a         d        v=r/d   e=r/(a+d)
1     91848   0         972118   0.09    0.09
2     92326   0         849540   0.10    0.10
3     92206   89916     37583    2.45    0.72
4     92084   110648    34034    2.70    0.63
5     91264   138965    32415    2.81    0.53
6     92929   9065245   32837    2.83    0.01

Table 5. Simulation results I

The impact of the choice of configurations on the discovery speed and the system efficiency is shown clearly in Figure 6.

Figure 6. Choice of the configurations

It can be seen that both the third and fourth experiments achieve a good balance between discovery speed and system efficiency for this example model. The fourth situation has a higher discovery speed than the third, but lower system efficiency; the third has higher system efficiency but lower discovery speed. The difference between the configurations of the third and fourth experiments is that the G_ACT periodic data pull, performed once every 10 simulation steps, is additionally configured in the fourth experiment. Changing the G_ACT data pull frequency will also change the performance of the model. Some further experiments are therefore designed in which the configurations are all the same as described in the fourth experiment.
The only difference is that the G_ACTs in the agents are updated at different frequencies, which may lead to differences in the amount of system workload for resource advertisement and discovery. Detailed simulation results are given in Table 6.

Performance metrics*
Freq.   r       a        d       v=r/d   e=r/(a+d)
1       91618   330256   32530   2.81    0.25
2       91537   210336   33346   2.74    0.37
5       91355   134910   33492   2.72    0.54
10      92084   110648   34034   2.70    0.63
20      90713   98893    33734   2.68    0.68
30      91641   95154    34935   2.62    0.70
80      92997   91803    35944   2.58    0.72
120     92540   88539    37016   2.50    0.73
never   92206   89916    37583   2.45    0.72

Table 6. Simulation results II

In Table 6, when the frequency value is once every 10 simulation steps, the situation is the same as that in the fourth experiment; when the G_ACTs are never maintained, the situation is the same as that in the third experiment. The impact of the choice of the G_ACT periodic data pull frequency on the discovery speed and the system efficiency is shown clearly in Figure 7.

Figure 7. Choice of the G_ACT periodic data pull frequency

As shown in Figure 7, the best trade-off between discovery speed and system efficiency is achieved at once every 20 simulation steps.
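The derived columns in Tables 5 and 6 follow directly from the raw counters r, a, and d (our reading of the table headers: requests, advertisement connections, and discovery connections). A small Python check of the formulas reproduces, for example, the third experiment's row; the function names are ours, not ARMSim's.

```python
def discovery_speed(r, d):
    """v = r / d: requests served per unit of discovery workload."""
    return r / d

def system_efficiency(r, a, d):
    """e = r / (a + d): requests served per unit of total workload
    (advertisement plus discovery)."""
    return r / (a + d)

# Raw counters from the third experiment in Table 5.
r, a, d = 92206, 89916, 37583
print(round(discovery_speed(r, d), 2))       # → 2.45
print(round(system_efficiency(r, a, d), 2))  # → 0.72
```

The formulas make the trade-off in Table 6 concrete: pulling G_ACT data more often lowers d (raising v) but raises a faster, so e falls.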