Memory management techniques for Time Warp on a distributed memory machine
Memory management system and method for storing and retrieving messages

Patent title: Memory management system and method for storing and retrieving messages
Inventors: Kevin Brown, Michael Spicer
Application number: US11274154
Filing date: 2005-11-16
Publication number: US20070113031A1
Publication date: 2007-05-17

Abstract: Embodiments of the present invention provide an efficient manner to systematically remove data from a memory that has been transferred or copied to disk storage, thereby facilitating faster querying of data residing in the memory. In particular, memory containing data received from data sources is partitioned into a fixed quantity of buckets, each associated with a respective time interval. The buckets represent contiguous intervals of time, where each interval is preferably of the same duration. When data arrives, the data is associated with a timestamp and placed in the appropriate bucket, i.e., the one whose time interval corresponds to that timestamp. If a timestamp falls outside the range of time intervals associated with the buckets, the data corresponding to that timestamp is placed in an additional bucket. Data within the oldest bucket in memory is periodically removed to provide storage capacity for new incoming information.

Applicants: Kevin Brown, Michael Spicer
Addresses: San Rafael, CA, US; Lafayette, CA, US
Nationality: US, US
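The time-bucket scheme described in the abstract can be made concrete with a small sketch. The following C fragment is a minimal illustration only: the bucket count, interval length, capacity, and record type are invented for the example and are not taken from the patent.

```c
#include <string.h>
#include <time.h>

#define NUM_BUCKETS   8       /* fixed quantity of in-memory buckets      */
#define INTERVAL_SEC  60      /* duration of each bucket's time interval  */
#define BUCKET_CAP    1024    /* records per bucket (illustrative bound)  */

typedef struct {
    time_t start;                  /* start of this bucket's interval */
    int    count;
    long   records[BUCKET_CAP];    /* stand-in for the stored data    */
} Bucket;

static Bucket buckets[NUM_BUCKETS];
static Bucket overflow;            /* additional bucket for out-of-range data */

/* Place one record in the bucket whose interval covers its timestamp. */
static void insert_record(time_t ts, long record, time_t window_start)
{
    long offset = (long)(ts - window_start);
    Bucket *b;
    if (offset < 0 || offset >= (long)NUM_BUCKETS * INTERVAL_SEC)
        b = &overflow;                        /* timestamp outside all intervals */
    else
        b = &buckets[offset / INTERVAL_SEC];
    if (b->count < BUCKET_CAP)
        b->records[b->count++] = record;
}

/* Periodically drop the oldest bucket once its data has reached disk,
 * freeing capacity for new incoming information. */
static void evict_oldest(int oldest_index)
{
    memset(&buckets[oldest_index], 0, sizeof(Bucket));
}
```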
My Daily Study Plan (in English)

我每天的学习计划英文As a student, I always believe in the power of consistent and focused studying. Therefore, I have come up with a well-organized daily study plan to help me stay on track and achieve my academic goals. My study plan is a combination of time management, subject prioritization, and stress management to ensure that I make the most out of my study time without feeling overwhelmed.Morning RoutineTo kick start my day and set the right tone for studying, I wake up early in the morning at around 6:00 AM. I believe in the saying "early to bed, early to rise makes a man healthy, wealthy, and wise." So, starting my day early helps me to feel more alert, focused, and ready to take on the day's challenges. After waking up, I dedicate the first 30 minutes to meditation and yoga to clear my mind, calm my nerves, and promote mental clarity. This helps me to reduce stress, increase concentration, and improve my overall mental well-being.Following my meditation and yoga session, I take a quick shower and make myself a healthy breakfast. A nutritious and well-balanced breakfast consisting of fruits, whole grains, and protein helps me to fuel my body and brain for the day ahead. After breakfast, I spend 15 minutes reviewing my to-do list and setting my goals for the day. This helps me to prioritize my tasks, establish a clear focus, and ensure that I stay organized throughout the day.Study Session 1I start my first study session at around 8:00 AM. This is my most productive time of the day, so I always tackle the most challenging and demanding subjects during this period. I begin by reviewing the previous day's notes and assignments to refresh my memory and reinforce my learning. I then move on to studying new topics or chapters, taking detailed notes, and solving practice problems. I make sure to allocate 60-90 minutes to each subject, with short 5-10 minute breaks in between to rest my mind and avoid burnout.I believe in the importance of active learning, so I always engage in discussions, ask questions, and seek clarification on any concepts that I find difficult to understand. I also make use of visual aids, such as diagrams, charts, and flashcards, to enhance my understanding and retention of the material. During this study session, my focus is on subjects like mathematics, physics, and chemistry, which require a high level of concentration and mental endurance.Mid-day BreakAfter completing my first study session, I take a short mid-day break to recharge and refuel.I use this time to have a light snack, take a 30-minute walk outside, and engage in some form of physical activity. Exercise helps me to de-stress, increase my energy levels, and improve my mood. It also allows me to clear my mind and shift my focus away fromstudying for a while. I use this break as an opportunity to relax, unwind, and take a mental breather before the next study session.Study Session 2Following my mid-day break, I resume my studies at around 12:30 PM. During this study session, I tackle subjects that require more creativity and critical thinking, such as literature, history, and philosophy. I start by reviewing the assigned readings, analyzing texts, and writing essays or reflections. I allocate 60-90 minutes to each subject, with short breaks in between to stretch and take a quick walk. 
I make sure to stay engaged and interactive, seeking to understand the underlying themes, contexts, and implications of the material.I also make use of mnemonic devices, mind maps, and other memory techniques to help me retain the information and recall it later when needed. I find that visualization and association techniques are particularly effective with subjects that involve a lot of memorization, such as history dates and literary terms. By the end of this study session, I aim to have a deeper understanding of the subject matter and be able to articulate my thoughts effectively.Afternoon RoutineAs I wrap up my second study session, I take a longer break to have a nutritious lunch, relax, and engage in some leisure activities. I use this time to catch up with friends, listen to music, or pursue my hobbies, such as painting or playing a musical instrument. This break allows me to disconnect from my studies, recharge my mental batteries, and refocus my energy for the evening study session. I find that having a healthy work-life balance is essential for maintaining my overall well-being and preventing burnout.Study Session 3With my energy replenished, I begin my third study session at around 3:30 PM. This session is dedicated to subjects that require a practical and hands-on approach, such as computer science, programming, and technology. I start by working on coding projects, solving programming challenges, and experimenting with new software or applications. I find that being actively engaged in the material and applying what I have learned helps me to consolidate my knowledge and develop practical skills.I make use of online tutorials, coding platforms, and interactive resources to enhance my learning experience and stay updated with the latest developments in the field. I also participate in online forums, coding communities, and tech meetups to connect with other like-minded individuals and expand my network. By the end of this study session, I aim to have completed a significant portion of my coding projects and gained a deeper insight into the practical application of computer science concepts.Evening RoutineAfter completing my third study session, I take a longer break to have dinner, unwind, and spend quality time with my family. I use this time to share my day's experiences, listen to their stories, and engage in meaningful conversations. This helps me to disconnect from my studies, relax, and enjoy the company of my loved ones. I find that social interaction and emotional support are crucial for maintaining my mental and emotional well-being, especially after a long day of intense studying.Study Session 4After spending some quality time with my family, I dedicate the evening to my final study session, which starts at around 7:30 PM. This session is focused on revising and reviewing the day's material, solving practice tests, and quizzing myself on various topics. I make use of study guides, review sheets, and flashcards to reinforce my learning and identify any areas that need further clarification. I also practice time management techniques, such as setting a timer for each question or task, to improve my pace and decision-making skills.I find that active recall and spaced repetition strategies are particularly effective during this session, as they help me to retrieve information from memory and strengthen my long-term retention. 
I also make use of online resources, such as educational websites and virtual libraries, to access additional study materials and further support my learning. By the end of this session, I aim to have a solid grasp of the day's material and feel confident in my ability to recall and apply the knowledge when needed.Reflection and PlanningAfter completing my final study session, I take some time to reflect on my day, evaluate my progress, and plan for the next day. I use this opportunity to review my accomplishments, identify any challenges or areas of improvement, and set specific goals for the following day.I also update my study schedule, prioritize my tasks, and make any necessary adjustments to ensure that I stay on track with my academic objectives. This reflective practice helps me to stay accountable, motivated, and proactive in my approach to studying.Bedtime RoutineTo wind down and prepare for a restful night's sleep, I engage in a calming bedtime routine at around 10:30 PM. I make sure to disconnect from electronic devices, such as my phone and laptop, at least 30 minutes before bedtime to reduce exposure to blue light and promote better sleep quality. I also practice relaxation techniques, such as deep breathing and progressive muscle relaxation, to calm my mind and body. Additionally, I engage in some light reading, listen to soothing music, or write in my journal to unwind and reflect on the day's experiences.ConclusionIn conclusion, my daily study plan is a well-structured and balanced approach to learning that allows me to maximize my productivity, retention, and overall academic performance.By incorporating time management, subject prioritization, and stress management strategies into my study routine, I am able to stay focused, motivated, and effective in my learning process. With consistency, dedication, and a growth mindset, I am confident that I will achieve my academic goals and continue to excel in my studies.。
Memory Management: Explanation and Types (an IT English essay)

Memory Management: Explanation and Types

INTRODUCTION TO MEMORY MANAGEMENT
Memory management is the form of resource management that applies to computer memory: the systematic process by which a computer's memory is controlled and coordinated. Portions of memory, called blocks, are assigned to running programs so that overall system performance is optimized. Memory management plays a vital role in the operating system because it ensures that memory does not simply fill up with data. This essay discusses the main concepts related to memory management.

KEY AREAS OF MEMORY MANAGEMENT
Memory management is one of the more complex fields of computer science, and a number of techniques have been developed to make it more efficient. It is usually divided into three areas.

Hardware memory management. This area is concerned with the hardware devices, that is, the electronic components that store a computer's data, such as RAM (Random Access Memory) and cache memory.

Operating system memory management. In the operating system, memory must be allocated to user programs and reused by other programs once it is no longer required. The system can also pretend that it has more memory than its actual capacity, and each program can behave as if it had the machine's memory to itself. These are the main features of virtual memory, one of the most important concepts in this area.

Application memory management. This area involves supplying the memory needed for a program's objects and data structures and recycling that memory when it is no longer required. An application cannot generally predict how much memory it will need, so it relies on two tasks.
Allocation: when the program requests a block of memory, the memory manager must allocate that block out of the larger blocks it has received from the operating system. The component that performs this function is called the allocator.
Recycling: when memory blocks have been allocated but the data they contain is no longer required, the blocks are recycled for reuse. There are two basic approaches to recycling memory: manual memory management and automatic memory management.

PROBLEMS RELATED TO MEMORY MANAGEMENT
Before looking at manual and automatic memory management in detail, it is important to understand the main problems associated with memory management.

Premature frees and dangling pointers. Many programs give up memory but attempt to access it later and then behave unpredictably. Giving memory back too early is called a premature free; a surviving reference to freed memory is called a dangling pointer.

Memory leaks. Some programs continually allocate memory without ever giving it up and eventually run out of memory. This condition is called a memory leak.

Poor locality of reference. This problem comes from the layout of allocated blocks. With modern hardware and operating system memory managers, accesses to successive memory locations are faster when those locations are close to one another.

Inflexible design. Memory managers can also cause severe performance problems when they are designed with one use in mind but are actually used in a different way. This occurs because memory management solutions tend to make assumptions about how the program will use memory.

ABOUT MANUAL MEMORY MANAGEMENT
As noted above, recycling can be done in two ways; this section covers manual memory management. Here the programmer has direct control over memory and must decide when memory should be recycled, usually either through explicit calls to heap management functions or through language constructs that affect the control stack. The defining feature of a manual memory manager is that the program must tell it, in effect, "have this memory back" or "I am finished with this".

Advantages of manual memory management:
It is easier for the programmer to understand exactly what is going on.
Some manual memory managers perform better when memory is scarce.

Disadvantages of manual memory management:
The programmer must write a great deal of code to perform repetitive memory bookkeeping.
Memory management must form a significant part of every module interface.
It often requires more per-object overhead memory than other approaches.
Bugs caused by manual memory management are very common.

ABOUT AUTOMATIC MEMORY MANAGEMENT
Automatic memory management is a service, or part of a language, that automatically recycles memory that the program will not use again. Automatic memory managers usually do their job by recycling blocks that are unreachable from the program's variables. This approach also has advantages and disadvantages.

Advantages of automatic memory management:
The programmer is freed to work on the actual problem.
Module interfaces are cleaner.
There are fewer memory management bugs.
It can be more efficient than other approaches.

Disadvantages of automatic memory management:
Memory may be retained even though it is no longer useful.
Automatic memory managers currently have limited availability in some environments.

BASIC CONCEPTS RELATED TO MEMORY MANAGEMENT
To understand memory management it also helps to know the following basic concepts.

Swapping. A process must be in main memory to execute, but often there is not enough main memory to hold all the active processes in a timesharing system. Excess processes are therefore kept on disk and brought in to run dynamically. In short, swapping is the process of bringing each process into main memory, running it for a while, and then putting it back on the disk.

Contiguous memory allocation. In contiguous allocation each process is contained in a single contiguous block of memory. Memory is divided into several fixed partitions, and each partition holds exactly one process. When a partition becomes free, a process is selected from the input queue and loaded into it. The free blocks of memory are called holes, and the set of holes is searched to determine which hole is best to use.

Memory protection. Memory protection controls memory access rights on a computer. Its main aim is to prevent a process from accessing memory that has not been allocated to it, which prevents a bug in one process from affecting another process. Segmentation faults are the usual result of such violations and are used to stop the offending process.

Memory allocation. Memory allocation is the process by which programs are assigned memory or space. There are three placement strategies: first fit, best fit, and worst fit. First fit chooses the first hole that is big enough for the program; best fit chooses the smallest hole that is big enough; worst fit chooses the largest hole that is big enough. (A short C sketch of these three strategies is given at the end of this essay.)

CONCLUSION
Memory management plays a vital role in a computer's operating system because it ensures that the computer's memory is used effectively. By understanding the main areas of memory management, a programmer can make better decisions about allocating and recycling memory, and knowledge of the basic concepts above makes it easier to find quick solutions to memory-related problems.
(2008) An Introduction to Memory Management Algorithms

Memory Management Algorithms
By 冲出宇宙, 2008-10-8

1. Introduction
Memory is a limited resource, so memory management is necessary.

2. The Problem
1. Memory allocation and deallocation must be fast, because allocation and deallocation happen very frequently.
2. The fraction of memory lost to fragmentation must be small. In theory, in the worst case, any management algorithm that never compacts memory can end up with close to 100% fragmentation.

3. Algorithms
Memory management algorithms rest on a few common assumptions:
1. Free memory blocks are chained on a linked list (sometimes a doubly linked list); there may be more than one list.
2. Adjacent free blocks are merged into one larger block, sometimes immediately and sometimes lazily.

Classic algorithms:
1. First-Fit: choose the first block on the list that satisfies the request, split off the required size, and return the remainder to the list (a sketch of this split-and-return operation follows the classic algorithms below).
2. Best-Fit: choose the free block whose size is closest to, but not smaller than, the requested size, split off the required size, and return the remainder to the list.
3. Buddy System: the buddy system has many variants, such as Binary-Buddy, Fibonacci-Buddy, Weighted-Buddy and Double-Buddy (the buddy address arithmetic is sketched at the end of this section).
A buddy system maintains log2(N) free lists (where N is the amount of allocatable memory); all blocks on a given list have the same size, and every size is a power of two.
In Binary-Buddy, for example, allocation first finds a free block and then repeatedly splits it in half (the two equal-sized halves produced by a split are each other's buddies) until the resulting block just satisfies the request.
When freeing, only buddies may be merged back into a single larger block.
4. Half-Fit: this algorithm maintains log2(N) free lists (where N is the amount of allocatable memory); list i holds blocks whose sizes lie between 2^i and 2^(i+1).
To allocate, it first locates the appropriate list, then takes the first block on that list (first fit), hands part of it to the caller, and inserts the remainder into the list corresponding to its new size.
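The split-and-return operation shared by First-Fit, Best-Fit and Half-Fit can be sketched in C as below. This is a simplified, hypothetical free-list allocator (the header layout and the MIN_SPLIT threshold are invented); a Best-Fit variant would scan the whole list and remember the tightest fit instead of returning at the first match.

```c
#include <stddef.h>

/* Header of a free block on a singly linked free list. */
typedef struct FreeBlock {
    size_t            size;   /* usable bytes in this block */
    struct FreeBlock *next;
} FreeBlock;

static FreeBlock *free_list;  /* assumed to be seeded elsewhere */

#define MIN_SPLIT (sizeof(FreeBlock) + 16)   /* avoid tiny remainders */

/* First-fit allocation with splitting: take the first block that is
 * large enough, carve off `size` bytes, and leave the rest on the list. */
void *ff_alloc(size_t size)
{
    FreeBlock **link = &free_list;
    for (FreeBlock *b = free_list; b != NULL; link = &b->next, b = b->next) {
        if (b->size < size)
            continue;
        if (b->size - size >= MIN_SPLIT) {
            /* Split: the remainder stays on the list as a smaller block. */
            FreeBlock *rest = (FreeBlock *)((char *)(b + 1) + size);
            rest->size = b->size - size - sizeof(FreeBlock);
            rest->next = b->next;
            *link      = rest;
            b->size    = size;
        } else {
            *link = b->next;           /* use the whole block            */
        }
        return b + 1;                  /* memory just after the header   */
    }
    return NULL;                       /* no block satisfies the request */
}
```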
Practical algorithms:
1. Doug Lea's Allocator: Doug Lea, who maintained the malloc implementation used in glibc around 1995, designed a memory allocation algorithm of his own.
The algorithm maintains multiple free lists (typically 128 of them).
It uses a deferred coalescing strategy: adjacent free blocks are merged only when an allocation request cannot otherwise be satisfied.
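Returning to the buddy system (item 3 above), the address arithmetic that makes splitting and merging cheap can be shown in a short C sketch. This is not a complete allocator; the pool parameters are invented, and only the order computation and the buddy-address calculation are shown.

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_ORDER 20                  /* pool size = 2^MAX_ORDER bytes */
#define MIN_ORDER 5                   /* smallest block = 32 bytes     */

/* One free list per order; blocks on free_lists[k] are 2^k bytes. */
static void *free_lists[MAX_ORDER + 1];
static uint8_t *pool_base;            /* start of the managed region   */

/* Smallest order k whose block size 2^k can hold `size` bytes. */
static int order_for(size_t size)
{
    int k = MIN_ORDER;
    while (((size_t)1 << k) < size && k < MAX_ORDER)
        k++;
    return k;
}

/* A block's buddy is found by flipping bit k of its offset: splitting a
 * 2^(k+1) block always yields two 2^k halves that differ only in that
 * bit, which is what makes the merge test cheap. */
static uint8_t *buddy_of(uint8_t *block, int k)
{
    uintptr_t offset = (uintptr_t)(block - pool_base);
    return pool_base + (offset ^ ((uintptr_t)1 << k));
}
```

Allocation would then pop a block from the smallest non-empty list of order at least order_for(size), splitting it down as needed; freeing checks whether buddy_of() is also free and, if so, merges the pair onto the next higher list.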
Memory Management

– JIT/compiler generates a map for every program point where a GC may occur
– Can constrain optimizations (derived pointers)
– Required for type-accurate GC
• Garbage Collection
– Discriminating live objects and garbage
Garbage Collection
GC: How?
• Automatically collect dead objects
• Liveness ≈ reachability
• Efficiency can be very high
• Gives programmers “control”
Garbage Collection
Automatic memory management
• reduces programmer burden
• eliminates sources of errors
• integral to modern object-oriented languages, e.g., Java, C#, .NET
• now part of mainstream computing
• Challenge:
Garbage Collection
Liveness: the GC and VM/Compiler Contract
• GC Maps - identify what is live
– Root set
• Live registers, stack variables (enumerated by walking the stack), and globals at any potential GC point
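To illustrate liveness as reachability from the root set, here is a minimal mark-phase sketch in C; the object layout and function names are invented and do not correspond to any particular collector.

```c
#include <stdbool.h>
#include <stddef.h>

/* A toy heap object: GC metadata plus outgoing references. */
typedef struct Obj {
    bool         marked;
    size_t       nrefs;
    struct Obj **refs;       /* pointers to other heap objects */
} Obj;

/* Mark phase: everything reachable from the root set is live. */
static void mark(Obj *o)
{
    if (o == NULL || o->marked)
        return;
    o->marked = true;
    for (size_t i = 0; i < o->nrefs; i++)
        mark(o->refs[i]);    /* recursion stands in for a mark stack */
}

/* The root set would be built from live registers, stack slots and
 * globals, enumerated with the help of the compiler's GC maps. */
void gc_mark_roots(Obj **roots, size_t nroots)
{
    for (size_t i = 0; i < nroots; i++)
        mark(roots[i]);
    /* A sweep phase would then reclaim every object left unmarked. */
}
```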
Ten Sample English Lecture Notes, with Exam Paper Walkthrough

Ten Sample English Lecture Notes, with Exam Paper Walkthrough: Lecture Notes Template 1.
Title: Introduction to Artificial Intelligence. Lecturer: Dr. Emily Jones. Date: September 5, 2023.
Key Concepts:Artificial Intelligence (AI): The science of creating intelligent machines that can perform tasks typically requiring human intelligence.Machine Learning (ML): A subset of AI that allows computers to learn from data without explicit programming.Deep Learning (DL): A type of ML that uses artificial neural networks to process data and make predictions.Natural Language Processing (NLP): A field of AI focused on enabling computers to understand and generate human language.Computer Vision (CV): A field of AI that enables computers to analyze and understand images and videos.Lecture Outline:I. What is AI?Definition, scope, and applications.II. Types of AI.Narrow AI vs. General AI.Symbolic AI vs. Statistical AI.III. Machine Learning.Supervised, unsupervised, and reinforcement learning.Algorithms and techniques.IV. Deep Learning.Artificial neural networks.CNNs, RNNs, and LSTMs.V. NLP and CV.Text processing, machine translation, speech recognition.Image classification, object detection, facial recognition.Key Takeaways:AI is a rapidly growing field with the potential torevolutionize many industries.Machine learning and deep learning are fundamental techniques in AI.NLP and CV enable computers to interact with humans and the world in a more meaningful way.Lecture Notes Template 2。
Chapter 7: MEMORY MANAGEMENT

7.1 Memory Management Requirements
• Relocation
• Memory Protection
• Memory Sharing
• Logical Organization
• Physical Organization
• What is relocation? The translation, performed when a target program is loaded, of the addresses of its instructions and data is called relocation.
• When this address translation is done once, at load time, and never changes afterwards, it is called static relocation.
• If, after the module has been loaded into memory, its relative addresses are not immediately converted to absolute addresses, but the conversion is instead deferred until the program executes and is performed automatically for each instruction or data access with the support of a hardware address-translation mechanism, it is called dynamic relocation.
• Virtual-Memory Segmentation
7.2 Memory Partitioning
• The most basic operation of memory management is for the processor to load a program into main memory for execution.
• Fixed Partitioning
1. When the system first starts, memory is divided into a fixed number of partitions of fixed size.
2. These partitions may be of equal or of unequal size.
• Unequal-size partitions
– can assign each process to the smallest partition within which it will fit (see the sketch below)
– queue for each partition
– processes are assigned in such a way as to minimize wasted memory within a partition.
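A minimal C sketch of the smallest-fitting-partition rule follows; the partition sizes are invented for the example.

```c
#include <stddef.h>

#define NPART 4

/* Fixed, unequal partition sizes chosen when the system starts
 * (the specific sizes here are invented for the example). */
static const size_t part_size[NPART] = { 64, 128, 256, 512 };

/* Return the index of the smallest partition that can hold a process
 * of `need` units, or -1 if no partition is large enough.  With one
 * queue per partition, the process would then be enqueued there. */
int smallest_fitting_partition(size_t need)
{
    int best = -1;
    for (int i = 0; i < NPART; i++) {
        if (part_size[i] >= need &&
            (best == -1 || part_size[i] < part_size[best]))
            best = i;
    }
    return best;   /* minimizes wasted memory within the partition */
}
```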
Solutions for memory management

Solutions for memory management

In a computer system, memory management is an important task: it is responsible for managing the computer's memory resources effectively.
The goal of memory management is to optimize memory utilization, ensure that the system runs efficiently, and provide a good user experience.
When facing memory management problems, the following are some common solutions:
1. Paging and segmentation: paging and segmentation are commonly used memory management techniques.
Paging divides memory into fixed-size page frames, while segmentation divides memory into logical segments.
Both approaches improve memory utilization and make memory easier to manage.
2. Virtual memory: virtual memory is a technique that uses disk space as an extension of main memory.
It allows the operating system to keep part of the data on disk and to load and evict it as needed.
Virtual memory alleviates memory shortages and also provides a larger address space.
3. Garbage collection: garbage collection is an automatic memory management technique that automatically frees memory that is no longer in use.
The garbage collector periodically examines the objects in memory and frees those that can no longer be reached.
This avoids memory leaks and improves memory utilization.
4. Memory pools: a memory pool is a technique in which a certain amount of memory is allocated in advance and then managed by the program (a sketch is given after this list).
Using a memory pool avoids frequent allocation and release operations and therefore makes memory management more efficient.
5. Memory compression: memory compression is a technique that compresses the data held in memory in order to save space.
By applying a compression algorithm, the memory footprint can be reduced and memory utilization improved.
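As an illustration of item 4, here is a minimal fixed-size-block memory pool in C; the block size, pool size, and function names are invented for the example.

```c
#include <stddef.h>

#define BLOCK_SIZE  64        /* size of every block handed out          */
#define POOL_BLOCKS 256       /* number of blocks pre-allocated up front */

/* One static arena carved into fixed-size blocks that are threaded
 * onto a free list once at start-up. */
static unsigned char arena[POOL_BLOCKS * BLOCK_SIZE];
static void *free_head;

void pool_init(void)
{
    free_head = NULL;
    for (int i = POOL_BLOCKS - 1; i >= 0; i--) {
        void *block = &arena[i * BLOCK_SIZE];
        *(void **)block = free_head;      /* link block into the free list */
        free_head = block;
    }
}

void *pool_alloc(void)                    /* O(1): pop the free-list head  */
{
    void *block = free_head;
    if (block != NULL)
        free_head = *(void **)block;
    return block;
}

void pool_free(void *block)               /* O(1): push back onto the list */
{
    *(void **)block = free_head;
    free_head = block;
}
```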
To summarize, these are several common solutions to memory management problems.
Depending on the specific situation and requirements, a suitable method can be chosen to address memory management problems and improve system performance and the user experience.
Memory Management Techniques for Time Warp on a Distributed Memory Machine∗

Bruno R. Preiss and Wayne M. Loucks
Dept. of Elec. and Comp. Eng., University of Waterloo, Waterloo, ON N2L 3G1

∗This work was supported in part by the Information Technology Research Centre (ITRC) of the Province of Ontario (Canada) and by the Natural Sciences and Engineering Research Council (NSERC) of Canada.

Abstract

This paper examines memory management issues associated with Time Warp synchronized parallel simulation on distributed memory machines. The paper begins with a summary of the techniques which have been previously proposed for memory management on various parallel processor memory structures. It then concentrates the discussion on parallel simulation executing on a distributed memory computer—a system comprised of separate computers, interconnected by a communications network. An important characteristic of the software developed for such systems is the fact that the dynamic memory is allocated from a pool of memory that is shared by all of the processes at a given processor.

This paper presents a new memory management protocol, pruneback, which recovers space by discarding previous states. This is different from all previous schemes, such as artificial rollback and cancelback, which recover memory space by causing one or more logical processes to roll back to an earlier simulation time.

The paper includes an empirical study of a parallel simulation of a closed stochastic queueing network showing the relationship between simulation execution time and amount of memory available. The results indicate that using pruneback is significantly more effective than artificial rollback (adapted for a distributed memory computer) for this problem. In the study, varying the memory limits over a 2:1 range resulted in a 1:2 change in artificial rollback execution time and almost no change in pruneback execution time.

1 Introduction

There is always the desire to improve the performance of simulation in general and of parallel simulation in particular. One aspect of performance is the amount of memory required to complete the simulation. If there is enough memory, then memory is seldom considered. However, if there is insufficient memory to complete the simulation, then the performance of the simulation is non-existent.

This paper examines the question of memory management in Time Warp simulation. Whereas previously reported techniques have been proposed for various processor structures, this paper concentrates the discussion on parallel simulation executing on a distributed memory computer. Software developed for such systems is characterized by the fact that the dynamic memory is allocated from a pool of memory that is shared by all of the processes at a given processor. In this context, we present a new memory management protocol, pruneback, which recovers space by discarding previous states and not by changing the local virtual time (LVT) of one or more processes. This is different from all previous schemes, such as artificial rollback and cancelback, which recover memory space by causing one or more logical processes to roll back to an earlier simulation time. We present an empirical study which suggests that using pruneback is significantly more effective than artificial rollback on a distributed memory computer.

The rest of the paper is organized as follows. Section 2 briefly addresses the question of memory usage and memory management in Time Warp simulation. Section 3 proposes a simple classification scheme for memory management algorithms which attempt to recover from memory stalls, and categorizes the known algorithms. In Section 4, algorithms developed specifically for distributed memory machines are discussed, and the new algorithm, pruneback, is described. Section 5 presents experimental results which show the performance of pruneback vs. artificial rollback, and Section 6 summarizes the contributions of this work.

2 Memory Usage in Time Warp Simulation

There have been a number of studies which examine the minimum amount of memory required for various Time Warp implementations [1, ?–3]. Many of these reports concentrate on parallel execution with either the same amount of memory as the sequential simulation or with an amount of memory that is of the same order as that of the sequential simulation. Although these are very important issues from a theoretical point of view, these reports also point out that the time penalty associated with parallel execution using the same amount of memory as the sequential simulation may be extremely high. One recent study proposes an adaptive memory management technique which uses dynamically allocated memory constraints to limit memory usage and to limit optimism in a Time Warp system [4]. All of these systems concentrate on guaranteed performance. That is, if the system has an amount of memory of the same order as that required by the sequential simulation, then the specified memory control technique will guarantee completion. One other characteristic shared by these techniques is that, in order to provide this guarantee, the memory must be managed as one common pool from which any LP can request memory storage. Although these schemes can be adapted to execute with other memory models, they can no longer provide a guarantee of memory usage on the same order as the sequential memory usage. This difference between guaranteed completion of the simulation and a heuristic that improves the probability of completion is an important one. In [?], the ability of a memory management technique to complete the simulation in an amount of memory of the same order as that required for sequential execution is referred to as a memory optimal technique.

A distributed memory system is doomed to require significantly more memory than any reasonable sequential implementation. Consider the case with N processors, each with the same amount of memory available for dynamic allocation, say M. In a pathological case, N−1 of the processors are idle at a given simulation time, τ, and all of the events to be executed reside in a single processor, P. In a Time Warp system, the entire sequential state (specifically, all of the events on the sequential event list) must be stored at all times. Thus, if GVT is τ, since all of the sequential state happens to be resident at P, then M must be (at least) large enough to hold the largest possible sequential event list, say M′. As a result, in a distributed memory system, when executing a Time Warp algorithm there may be a need for at least NM′ total storage capacity in the system.

All of the LPs consume some portion of the available memory. If at any time a process exhausts its available memory, then it is possible that the simulation will stall and never complete. We refer to this as a memory stall—the overall simulation is stalled due to insufficient memory. The objective of the proposed technique is to delay the onset of a memory stall.

2.1 Memory Objects

The memory used in an optimistically synchronized simulation, such as the basic Time Warp system described by Jefferson [5], can be classified into the following three categories:

State Storage: Used to store some or all of the various states (or state vectors) required by an LP during execution.

Input Message Storage: Used to store the messages that have been received by an LP.

Output Message Storage: Used to store copies of the messages sent by the LP (for cancellation purposes).

Although it is theoretically possible to maintain a complete set of these objects, this is seldom practical. As a result, the concept of global virtual time (GVT) has been introduced to Time Warp as a measure of the smallest virtual time to which a rollback is possible (at any given execution time) during the course of a simulation [5].

In some implementations it is difficult to compute the precise value of GVT without temporarily stopping the progress of the simulation. In these cases a number of different GVT-like times may be considered. Figure 1 shows the relationship among these times. Please note that these times are virtual times and that each varies with execution time.

[Figure 1: Relationships among the Important Simulation Time Values (a virtual-time axis marking 0, SGVT, EGVT, GVT and LVT).]

In Figure 1, LVT is the current local virtual time of the LP. GVT is the true global virtual time. This is the value that would be calculated if the simulation was stopped, all in-transit messages delivered, and GVT calculated. There will be no rollbacks to a simulation time smaller than GVT. EGVT is the current estimate for GVT. In a distributed system which does not calculate a perfect GVT, an estimate that is not larger than the real GVT is used. Given a value for EGVT, it may be possible that any given LP will not have a state corresponding to EGVT. (E.g., this can arise if sparse checkpointing is used, i.e., the checkpoint interval is greater than one [?].) As a result, the state vector with the largest virtual time smaller than (or equal to) EGVT becomes the surrogate GVT state (SGVT). Since, as far as the LP is concerned, at any time a message could arrive with a receive timestamp of EGVT, it must be possible to obtain the state vector for EGVT. Thus, the state vector with timestamp SGVT and all messages with receive times greater than or equal to SGVT must be preserved so that the EGVT state can be recomputed, if required. Although it is logically a separate activity, fossil collection is often intimately tied to GVT updates. Thus, in this discussion we have assumed that all states and messages having timestamps smaller than SGVT are fossils and have been deleted from storage.

All of the message objects corresponding to a simulation time greater than SGVT must be stored in dynamically allocated memory. In addition, some of the state objects must be stored. At least the states corresponding to SGVT and LVT must be maintained.
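The three categories of memory object can be pictured as per-LP data structures. The following C sketch is an illustration only: the field names and layout are invented here and are not taken from the paper or from any particular Time Warp implementation.

```c
#include <stddef.h>

typedef double vtime;                 /* virtual (simulation) time */

typedef struct {
    vtime  send_time, recv_time;      /* timestamps carried by the message */
    int    sender, receiver;          /* LP identifiers                    */
    /* ... application payload ... */
} Message;

typedef struct {
    vtime  timestamp;                 /* LVT at which this state was saved */
    /* ... saved LP state vector ... */
} SavedState;

/* One logical process: its LVT plus the three kinds of memory objects. */
typedef struct {
    vtime       lvt;                  /* current local virtual time        */
    SavedState *states;               /* state storage (past state vectors)*/
    size_t      nstates;
    Message    *input_q;              /* input message storage             */
    size_t      ninput;
    Message    *output_q;             /* copies of sent messages, kept so  */
    size_t      noutput;              /* antimessages can cancel them      */
} LogicalProcess;
```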
2.2 Memory Pool Models

The memory objects described in Section 2.1 are dynamically allocated, as required. The storage is allocated from a pool of memory. There are many possible memory pool models for parallel algorithms such as Time Warp. Two perspectives that can be used to consider the memory alternatives are: the view from the hardware—i.e., how is the actual machine constructed; and from the software—how is the user interface to the memory structured. Although it is clearly possible to implement a given software model on any hardware model, it is also important to keep in mind that a particular mapping may introduce large time penalties. E.g., to treat a set of workstations, communicating using a slow network, as having a single shared memory space with a uniform access time may lead to very poor performance.

When a process requests more dynamically allocated memory, that memory can be allocated: from a pool pre-allocated to that process (process pool); from a pool pre-allocated to the processor that executes that LP (processor pool); or from a common pool of dynamically allocated memory used by all of the LPs in the simulation (global pool).

In the case of a distributed memory architecture, it is possible to implement the global pool scheme. If the memory access times are non-uniform (as would likely be the case if some memory is on the local processor and some is on a remote processor), then the performance penalty associated with storing some of the states or messages on a remote computer could be unacceptably large. It has been pointed out that, as the cost (in execution time) of checkpointing increases, the execution time generally increases [?].
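A processor-pool organization of this kind might be sketched as follows; the layout and function names are invented, and a failed allocation is exactly the local memory-stall trigger discussed later in Section 3.

```c
#include <stddef.h>

/* One dynamic-memory pool per processor, shared by all LPs mapped there. */
typedef struct {
    unsigned char *base;      /* start of the pool         */
    size_t         size;      /* total bytes in the pool   */
    size_t         used;      /* bytes currently allocated */
} ProcessorPool;

/* Try to allocate `n` bytes for any LP on this processor.  Returning
 * NULL is the local memory-stall trigger: the caller would first run
 * fossil collection and, if that is not enough, invoke a recovery
 * scheme such as artificial rollback or pruneback. */
void *pool_try_alloc(ProcessorPool *p, size_t n)
{
    if (p->used + n > p->size)
        return NULL;                    /* local memory stall         */
    void *mem = p->base + p->used;      /* bump-pointer sketch only;  */
    p->used += n;                       /* a real pool would also     */
    return mem;                         /* support freeing            */
}
```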
2.3 Memory Management Techniques

There are many algorithms that address the memory management problem [2]. They are summarized below.

Algorithms that reclaim memory that is no longer needed. Each time EGVT is recalculated, SGVT may be advanced and, thus, there may be some memory objects that are no longer needed (fossils). Algorithms which ensure that EGVT is close to GVT will tend to delay the onset of a memory stall by reducing the total memory requirements.

Algorithms that attempt to use less memory. There are two basic techniques to reduce the memory requirements of a simulation during execution: reduce the size of the state that must be saved; and reduce the number of states that must be saved. One technique to reduce the size of the state saved is incremental state saving, which saves only a subset of each LP's state each time the LP is activated [6]. Two techniques to reduce the number of states that are saved are: to use a checkpoint interval that is greater than one [?]; and to constrain optimism in some manner, such as Breathing Time Warp [7].

Algorithms that attempt to recover from a memory stall. The previous algorithms seek to delay the onset of a memory stall. Although this may be sufficient in some cases, there will always be those cases that will eventually stall due to memory starvation. Algorithms which attempt to recover from a memory stall are all similar in that they select one or more objects to discard in an effort to free enough memory to permit the simulation to proceed. Previously proposed algorithms are described in [2]. Section 3 examines the operation of these algorithms with particular attention to the type of memory pool available.
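The first class of technique, reclaiming fossils whenever the GVT estimate advances, can be sketched as follows; representing each state or message by its timestamp alone is a simplification made for the illustration.

```c
#include <stddef.h>

typedef double vtime;

/* Remove every saved state and message whose timestamp is smaller than
 * SGVT; such objects are fossils and can never be needed again.  Here
 * the objects are represented only by their timestamps, and compaction
 * is done in place. */
size_t fossil_collect(vtime *timestamps, size_t n, vtime sgvt)
{
    size_t kept = 0;
    for (size_t i = 0; i < n; i++) {
        if (timestamps[i] >= sgvt)          /* still needed for rollback */
            timestamps[kept++] = timestamps[i];
        /* else: fossil; its storage is returned to the pool */
    }
    return kept;                            /* new number of live objects */
}
```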
3 Algorithms that Attempt to Recover from a Memory Stall

One of the key issues in understanding the various proposals for memory stall recovery is locality. In the Time Warp parallel simulation environment there are two locality issues—how local is the memory exhaustion and how local is the memory that is recovered. We call the former the memory stall trigger and the latter the memory recovery target.

3.1 Memory Stall Triggers

The event which triggers a memory stall is an attempt by some LP to allocate extra storage. If this attempt to allocate fails, then a memory stall recovery scheme must be initiated¹. The question of locality with respect to the memory trigger is a question of which type of pool has run out of memory.

Self: A self-triggered memory stall occurs when an LP in the process pool memory model runs out of memory.

Local: A locally-triggered memory stall occurs in the processor pool memory model when an LP detects that there is no more memory for any of the LPs on the given processor. As a result, several LPs are stalled and cannot proceed until some remedial action is taken.

Global: A globally-triggered memory stall occurs when all of the memory in the global pool memory model has been exhausted.

¹In most implementations, GVT is updated and fossil collection done following a memory stall and before the memory recovery technique is initiated. In this description, we assume that GVT has been recalculated and that the fossil collection has been completed. In the empirical study, EGVT is updated regularly; however, the simulation is not stopped each time a processor exhausts its memory pool.

3.2 Memory Recovery Targets

The target of a memory stall is the LP (or LPs) which must take remedial action to resolve the memory exhaustion. In many systems such remedial action involves removing stored object(s) (i.e., messages or state vectors) and performing any required rollback operations. For this presentation it is convenient to maintain the symmetry with the memory stall trigger discussion and to regard the LP associated with the memory to be recovered as the memory recovery target. Once an LP has detected the memory stall, there are three sets of LPs from which the memory target(s) can be chosen:

Self: In a self-target system, the only alternative is to take remedial action to reduce space usage within the LP which has detected the memory exhaustion condition.

Local: In a local-target system, any LP (or LPs) resident at the same processor as the LP which has detected the memory exhaustion can be required to take remedial action in order to reclaim space.

Global: In the global-target system, any LP (or LPs) can be required to take remedial action by the memory recovery technique.

Rollback-based schemes recover space by causing some LP to take remedial action. This action could in turn trigger secondary rollback operations. In this discussion, the LP(s) chosen by the memory control technique is (are) the primary target(s). Other LPs rolled back as a result of the primary target's actions are not considered as memory targets. Previous schemes have implemented the remedial action through the use of Time Warp's rollback mechanism [2]. In the proposed scheme, state vectors are removed from memory and, if they are required at some later point, then the coast-forward facilities of Time Warp operating with sparse checkpointing are used to recreate the missing state.

3.3 Combinations of Triggers and Targets

The previous discussion suggests that there are nine possible systems (three types of triggers for each of three types of targets). The emphasis of this paper is on one particular case—local-trigger, local-target. In order to place that model in context, we summarize three of the most probable combinations of triggers and targets.

3.3.1 Self-Trigger, Self-Target Techniques

In this type of system, each process acts as its own agent. If an LP detects that it has exhausted its memory, then some remedial action is taken. Two schemes described in the literature which use a self-trigger, self-target structure are sendback and Gafni's Algorithm. These are summarized below.

Message Sendback: This technique was introduced to provide some flow control in a Time Warp implementation. When a message arrives at a process, i, which has run out of memory space, some message (the new one or some other one) is sent back to its originating process, j. When the originating process receives the sent-back message, it rolls back to the state when the message was sent. The primary goal is to return i to a condition where there is room to process one message [1, 5].

Gafni's Protocol: When an LP runs out of memory, this protocol selects a memory object and discards it. If it is an input message, then the sendback scheme is used. If the object is an output message, then the antimessage is sent to the destination (causing a rollback). If the object is a local state vector, the process rolls back to a previous state [?, 2, 8].

3.3.2 Global-Trigger, Global-Target Techniques

Two global-trigger, global-target techniques have been described in the literature. They have both been shown to be memory optimal by Lin [?].

Cancelback: Cancelback extends Gafni's protocol to select any memory object with a timestamp greater than GVT for cancelling [1].

Artificial Rollback: Artificial rollback can be viewed as a simplified implementation of cancelback. In artificial rollback, once the stall has been detected, rather than choosing any object (message or state vector) to delete, and possibly dealing with the sendback mechanism, the LPs themselves release storage by (artificially) forcing a rollback [?, 9].

3.3.3 Local-Trigger, Local-Target Techniques

In this type of system, each processor acts as its own recovery agent. Although it is a specific LP which detects the problem, the object to be deleted (and the LP to be targeted) can be any of those present at the local processor. An implementation based on this technique cannot be memory optimal as described in Section 2. It may be necessary to have the total memory required for the sequential simulation present at every processor.

The schemes described under the previous memory trigger, memory target models could be adapted for application in the local-trigger, local-target memory model. However, they would not be memory optimal. In Section 4 we summarize the operation of two local-trigger, local-target heuristics: local-trigger, local-target artificial rollback and local-trigger, local-target pruneback.

4 Local-Trigger, Local-Target Techniques

In this section we consider local-trigger, local-target techniques for recovering from memory stalls. The need for this class of technique arises in distributed memory, message-passing architectures, in which it is inefficient to operate using a global memory pool.

We identify two categories of local-trigger, local-target techniques—the existing rollback-based schemes discussed in the previous section and a new non-rollback-based scheme called pruneback. Since the various rollback-based recovery schemes described in the previous section operate on a similar basis, we have chosen to compare pruneback with the artificial rollback scheme.

4.1 Artificial Rollback—A Rollback-Based Recovery Scheme

Artificial rollback is described in detail in [?]. For our purposes, the technique is modified in a straightforward way to permit it to work as a local-trigger, local-target method. The modification is very minor: the set of LPs impacted by a memory stall is restricted to those at the local processor. This modification does mean that the system is no longer memory optimal [?]. This scheme is introduced as a variant of an existing technique to which pruneback can be compared.

Consider the operation of Processor 2 in the simulation shown in Figure 2. Assume that there is room in the dynamic storage area (of processor 2) to hold 11 states. In Figure 2(a), a memory stall has occurred. When this occurs, fossil collection reclaims as much storage as possible. In this example there are two state vectors that can be reclaimed within processor 2 (marked as F in Figure 2(a)). As a result of this reclaimed storage, two more events may be processed (labeled F′ in Figure 2(b)). If we assume that GVT does not advance as a result of the evaluation of the F′ states, then once again processor 2 is memory stalled. At this point artificial rollback is invoked and the most advanced states are cancelled (the states marked A in Figure 2(c)). Although it would be possible to cancel only one state, as pointed out in [2] it is also possible to define a salvage level which attempts to reclaim a given amount of storage once artificial rollback has been invoked. In the figure, 36% of the storage has been reclaimed. In the empirical study, Section 5, 25% of the storage at a memory-stalled processor is recovered during an artificial rollback operation.

[Figure 2: Snapshot of a Simulation using Artificial Rollback. Panels (a) Initial Memory Stall, (b) Progress after Fossil Collection, and (c) State Storage before Artificial Rollback show the saved states and event messages of LP1–LP4 on Processors 1 and 2, with F marking fossils, F′ reused fossils, and A the artificial rollback targets.]
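The local-trigger, local-target artificial rollback used in the empirical study (recovering roughly 25% of the stalled processor's storage) might look like the sketch below. The selection policy shown, repeatedly rolling back the most advanced LP on the stalled processor, is one plausible reading of "the most advanced states are cancelled"; the helper rollback_one_state() is assumed rather than taken from the paper.

```c
#include <stddef.h>

typedef double vtime;

/* Minimal view of one LP for this sketch. */
typedef struct {
    vtime lvt;     /* current local virtual time                   */
    vtime sgvt;    /* surrogate GVT state; cannot roll back past it */
} LP;

/* Assumed helper: rolls the LP back by one saved state (also cancelling
 * the associated messages) and returns the number of bytes released. */
extern size_t rollback_one_state(LP *lp);

/* Local-trigger, local-target artificial rollback: keep rolling back the
 * most advanced LP on the stalled processor until the salvage target
 * (e.g. 25% of the processor pool) has been recovered. */
void artificial_rollback(LP *lps, size_t nlps, size_t salvage_bytes)
{
    size_t freed = 0;
    while (freed < salvage_bytes) {
        LP *victim = NULL;
        for (size_t i = 0; i < nlps; i++)      /* find the most advanced LP */
            if (lps[i].lvt > lps[i].sgvt &&
                (victim == NULL || lps[i].lvt > victim->lvt))
                victim = &lps[i];
        if (victim == NULL)
            break;                             /* nothing left to roll back */
        size_t got = rollback_one_state(victim);
        if (got == 0)
            break;
        freed += got;
    }
}
```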
4.2 Pruneback—A Non-Rollback-Based Recovery Scheme

The rollback-based recovery schemes differ in how they select memory objects to remove but, in the final analysis, they all use the rollback mechanism built into Time Warp to facilitate memory recovery. The scheme proposed in this section is not based on the use of rollback to recover the space. However, it does rely on the coast-forward component of the usual rollback scheme to recover should some state information need to be reconstructed.

The basic operation involves a pruning of the saved state vectors of all of the local LPs when memory exhaustion is detected. The pruning operation involves selecting various state vectors for deletion. Which states, and how many states, should be pruned are implementation dependent². However, there are some constraints that are similar to those involved in the implementation of sparse checkpointing (i.e., checkpoint intervals greater than one) [?]. For example, you should never prune the current state or the SGVT state.

²In the empirical study described in Section 5, one out of every 4 states at a processor is pruned.

Consider the simulation snapshot shown in Figure 3. The system characteristics are the same as those in Figure 2; the only difference is that following the second memory stall, pruneback is invoked rather than artificial rollback. In Figure 3 a number of states have been pruned prior to the onset of the first memory stall. This means that the stall occurs at a larger virtual time. As a result of the pruned states, the sequential state, that is, the events that must be preserved (or be able to be recreated) so that the system state at GVT can be recreated, may not be those closest to GVT for all processes. In fact, LP 3 in Figure 3(a) has a state (marked y) which has been pruned that is part of the sequential state. In this case a state with a smaller virtual time (marked x) has been preserved so that the sequential state could be recreated. In this example there is one state vector that can be reclaimed within processor 2 (marked as F in Figure 3(a)). As a result of this reclaimed storage, one more event may be processed (labeled F′ in Figure 3(b)). If we assume that GVT does not advance as a result of the evaluation of the F′ states, then once again processor 2 is memory stalled. At this point the pruneback algorithm is invoked, causing the states marked P in Figure 3(c) to be reclaimed. Note, the pruned states were used during simulation but are no longer stored.

[Figure 3: Snapshot of a Simulation using Pruneback. Panels (a) Initial Memory Stall, (b) Progress after Fossil Collection, and (c) State Storage after Pruneback show the saved states and event messages of LP1–LP4 on Processors 1 and 2, with F marking fossils, F′ reused fossils, and P the pruneback targets; pruned states are also indicated.]

When pruneback is invoked, states can be deleted from any of the processes in the processor. In order to free the same amount of memory in both artificial rollback and pruneback, pruneback may need to delete more states. This is because the rollback-based techniques free both the state vectors and the input and output message queues, while pruneback frees only the state vector memory space.
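The pruning step itself can be sketched as follows. The sketch respects the constraints stated above (the current state and the SGVT state are never pruned) and, as in the empirical study, discards roughly one saved state in four; the data layout and names are invented for the illustration.

```c
#include <stddef.h>

typedef double vtime;

typedef struct {
    vtime timestamp;
    int   pruned;            /* 1 if this saved state has been discarded */
    /* ... saved state vector ... */
} SavedState;

/* Prune roughly one in `stride` of an LP's saved states, oldest first,
 * skipping the SGVT state (index sgvt_idx) and the current state (the
 * last entry).  A pruned state is simply dropped: if it is ever needed
 * again it is rebuilt by coasting forward from an earlier state, so no
 * LP is rolled back and no LVT changes. */
size_t pruneback(SavedState *s, size_t n, size_t sgvt_idx, size_t stride)
{
    size_t freed = 0;
    for (size_t i = 1; i + 1 < n; i += stride) {   /* never the last state */
        if (i == sgvt_idx || s[i].pruned)
            continue;
        s[i].pruned = 1;                           /* release its storage  */
        freed++;
    }
    return freed;                                  /* states pruned        */
}
```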
4.3 Memory Optimality

It should be pointed out that neither of these local-trigger, local-target schemes is memory optimal. The reason for this is rooted in the fact that it cannot be guaranteed in either case that the LP critical to the completion of the simulation (in optimal memory) will be able to obtain a state buffer.

5 Results

Preliminary versions of the artificial rollback and pruneback algorithms were added to the Yaddes execution kernel [?]. They have been used to demonstrate the relative performance of the two memory control techniques in the presence of more than enough memory to complete the simulation. It should be noted that the Time Warp implementation used in these tests is non-preemptive. Thus, once an LP has started processing an event, the execution of that event is completed before a subsequent rollback can take place. Unfortunately, some of the more exciting questions cannot be answered until a more complete and robust implementation exists.

A subset of the Nicol suite has been used to test these algorithms [?, ?, 10, ?, ?]. In particular, we have used a closed stochastic queueing network consisting of 64 nodes connected in a (6-dimensional) hypercube. Each node has six inputs and six outputs and provides non-preemptive service to arriving customers. Each customer receives a randomly distributed, biased exponential service time and is then routed to a randomly selected output link. There are (on average) 8 customers at each node. The 64 nodes are evenly distributed to a set of 8 Transputers, interconnected in a 3-dimensional cube.

The values plotted in the graphs which follow are obtained by averaging the results of three runs, and the error bars indicate the 95% confidence intervals. In all of the graphs which follow, the axis labelled memory limit indicates the size of each per-processor memory pool. Thus, a memory limit of 150 indicates that each processor has a processor pool the size of 150 state vectors. This memory is actually used for both state and message storage.

5.1 Artificial Rollback Results

Figures 4, 5, and 6 indicate the performance of artificial rollback in the presence of differing memory limitations.

Figure 4 shows the anticipated result that, as the amount of memory is decreased, the execution time increases. As expected, and verified by Figures 5 and 6, the number of states rolled back increases. Presumably the increase in the number of states rolled back corresponds to the increase in the amount of calculation performed to complete the simulation and, consequently, to the increased total execution time.

[Figure 4: Artificial rollback—Execution time vs. memory limit. Execution time [s] is plotted against the memory limit [state units] for aggressive cancellation (▽) and lazy cancellation (△); confidence level = 95%.]

Figures 5 and 6 show the differences between the number of states rolled back (due to natural Time Warp activity) and the number rolled back as a result of the artificial rollback algorithm. In the case of aggressive cancellation (Figure 5), as memory is reduced there is an increase in both the number of nor-