

CloudResearch(云服务)_15


Do Clouds Compute? A Framework for Estimating the Value of Cloud Computing

Markus Klems, Jens Nimis, Stefan Tai
FZI Forschungszentrum Informatik Karlsruhe, Germany
{klems, nimis, tai}@fzi.de

1 Introduction

On-demand provisioning of scalable and reliable compute services, along with a cost model that charges consumers based on actual service usage, has been an objective in distributed computing research and industry for a while. Cloud Computing promises to deliver on this objective: building on compute and storage virtualization technologies, consumers are able to rent infrastructure "in the Cloud" as needed, deploy applications and store data, and access them via Web protocols on a pay-per-use basis.

In addition to the technological challenges of Cloud Computing there is a need for an appropriate, competitive pricing model for infrastructure-as-a-service. The acceptance of Cloud Computing depends on the ability to implement a model for value co-creation. In this paper, we discuss the need for valuation of Cloud Computing, identify key components, and structure these components in a framework. The framework assists decision makers in estimating Cloud Computing costs and comparing these costs to conventional IT solutions.

2 Objective

The main purpose of our paper is to present a basic framework for estimating the value and determining the benefits of Cloud Computing as an alternative to conventional IT infrastructure, such as privately owned and managed IT hardware. Our effort is motivated by the rise of Cloud Computing providers and the question of when it is profitable for a business to use hardware resources "in the Cloud". More and more companies already embrace Cloud Computing services as part of their IT infrastructure [1]. However, there is no guide that tells when outsourcing into the Cloud is the way to go and in which cases it does not make sense to do so.
With our work we want to give an overview of the economic and technical aspects that a valuation approach to Cloud Computing must take into consideration. Valuation is an economic discipline concerned with estimating the value of projects and enterprises [2]. Corporate management relies on valuation methods in order to make reasonable investment decisions. Although the basic methods, like Discounted Cash Flow (DCF) analysis, are rather simple, the difficulties lie in their appropriate application to real-world cases.

Within the scope of our paper we are not going to cover specific valuation methods. Instead, we present a generic framework that serves for cost comparison analysis between hardware resources "in the Cloud" and a reference model, such as purchasing and installing IT hardware. The result of such a comparison shows the value of Cloud Computing associated with a specific project, measured in terms of opportunity costs. In later work the framework must be fleshed out with metrics, such as project free cash flows, EBITDA, or other suitable economic indicators. Existing cost models, such as Gartner's TCO, seem promising candidates for the design of a reference model [3].

3 Approach

A systematic, dedicated approach to Cloud Computing valuation is urgently needed. Previous work from related fields, like Grid Computing, does not consider all aspects relevant to Cloud Computing and thus cannot be applied directly. Previous approaches tend to mix business objectives with technological requirements. Moreover, the role of demand behavior and the consequences it poses for IT requirements needs to be evaluated in a new light. Most important, it is only possible to value the benefit of Cloud Computing when it is compared to alternative solutions. We believe that a structured framework will help clarify which general business scenarios Cloud Computing addresses.

Figure 1 illustrates our framework for estimating the value of Cloud Computing.
In the following, we describe in more detail the valuation steps suggested by the framework.

3.1 Business Scenario

Cloud Computing offers three basic types of services over the Internet: virtualized hardware resources in the form of storage capacity and processing power, plus data transfer volume. Since Cloud Computing is based on the idea of Internet-centric computing, access to remotely located storage and processors must be accompanied by sufficient data transfer capacity.

The business scenario must specify the business domain (internal processes, B2B, B2C, or other), key business objectives (cost efficiency, no SLA violations, short time to market, etc.), the demand behavior (seasonal, temporary spikes, etc.) and the technical requirements that follow from business objectives and demand behavior (scalability, high availability, reliability, ubiquitous access, security, short deployment cycles, etc.).

3.1.1 Business Domain

IT resources are not ends in themselves but serve specific business objectives. Organizations can benefit from Grid Computing and Cloud Computing in different domains: internal business processes, collaboration with business partners, and customer-facing services (compare to [14]).

3.1.2 Business Objectives

On a high level, the typical business benefits mentioned in the context of Cloud Computing are high responsiveness to varying, unpredictable demand behavior and shorter time to market. The IBM High Performance on Demand Solutions group has identified Cloud Computing as an infrastructure for fostering company-internal innovation processes [4]. The U.S. Defense Information Systems Agency explores Cloud Computing with a focus on rapid deployment processes, and as a provisionable and scalable standard environment [5].

3.1.3 Demand Behavior

Services and applications in the Web can be divided into two disjoint categories: services that deal with somewhat predictable demand behavior, and those that must handle unexpected demand volumes.
Services from the first category must be built on top of a scalable infrastructure in order to adapt to changing demand volumes. The second category is even more challenging, since increases and decreases in demand cannot be forecast at all and sometimes occur within minutes or even seconds.

Traditionally, the IT operations department of an organization must master the difficulties involved in scaling corporate infrastructure up or down. In practice it is impossible to constantly fully utilize available server capacities, which is why there is always a tradeoff between resource over-utilization, resulting in glaring usability effects and possible SLA violations, and under-utilization, leading to negative financial performance [6]. The IT department dimensions the infrastructure according to expected demand volumes and in a way that leaves enough room for business growth. Moreover, emergency situations, like server outages and demand spikes, must be addressed and dealt with. Associated with under- and over-utilization is the notion of opportunity costs. The opportunity costs of under-utilization are measured in units of wasted compute resources, such as idle running servers. The opportunity costs of over-utilization are the costs of losing customers or being sued as a consequence of a temporary server outage.

Expected Demand: Seasonal Demand. An online retail store is a typical service that suffers from seasonal demand spikes. During Christmas the retail store usually faces much higher demand volumes than over the rest of the year. The IT infrastructure must be dimensioned such that it can handle even the highest demand peaks in December.

Expected Demand: Temporary Effect. Some services and applications are short-lived and targeted at single or seldom events, such as Websites for the Olympic Games 2008 in Beijing. As with seasonal demand spikes, the increase and decrease of demand volume is somewhat predictable.
However, the service only exists for a comparably short period of time, during which it experiences heavy traffic loads. After the event, the demand will decrease to a constant low level and the service will eventually be shut down.

Expected Demand: Batch Processing. The third category of expected demand scenarios is batch processing jobs. In this case the demand volume is usually known beforehand and does not need to be estimated.

Unexpected Demand: Temporary Effect. This scenario is similar to the "expected temporary effect", except for one major difference: the demand behavior cannot be predicted at all, or only a short time in advance. A typical example for this scenario is a Web start-up company that becomes popular overnight because it was featured on a news network. Many people simultaneously rush to the Website of the start-up company, causing significant traffic load and eventually bringing down the servers. Named after two famous news sharing Websites, this phenomenon is known as the "Slashdot effect" or "Digg effect".

3.1.4 Technical Requirements

Business objectives are put into practice with IT support and thus translate into specific IT requirements. For example, unpredictable demand behavior translates into the need for scalability and high availability even in the face of significant traffic spikes; time to market is directly correlated with deployment times.

3.2 Costs of Cloud Computing

After having modeled a business scenario and the estimated demand volumes, it is now time to calculate the costs of a Cloud Computing setting that can fulfill the scenario's requirements, such as scalability and high availability. A central point, besides the scenario properties mentioned in section 3.1.3, is the question: how much storage capacity and processing power is needed in order to cope with demand, and how much data transfer will be used?
The numbers might either be fixed and already known beforehand, or unknown and in need of estimation.

In a next step, a Utility Computing model needs to define compute units and thus provide a metric to convert and compare computing resources between the Cloud and alternative infrastructure services. Usually the Cloud Computing provider defines the Utility Computing model, associated with a pricing scheme, such as Amazon EC2 Compute Units (ECU). The vendor-specific model can be converted into a more generic Utility Computing unit, such as FLOPS, I/O operations, and the like. This might be necessary when comparing Cloud Computing offers of different vendors. Since Cloud Computing providers charge for their services based on the Utility Computing model, these pricing schemes can be used to determine the direct costs of the Cloud Computing scenario. Indirect costs comprise soft factors, such as learning to use tools and gaining experience with Cloud Computing technology.

3.3 Costs of the Reference IT Infrastructure Service

The valuation of Cloud Computing services must take into account their costs as well as the cash flows resulting from the underlying business model. Within the context of our valuation approach we focus on a cost comparison between infrastructure in the Cloud and a reference infrastructure service. Reaching or failing to reach business objectives has an impact on cash flows and can therefore be measured in terms of monetary opportunity costs.

The reference IT infrastructure service might be conventional IT infrastructure (SME or big business), a hosted service, a Grid Computing service, or something else. This reference model can be arbitrarily complex and detailed, as long as it computes the estimated resource usage in a similar manner as in the Cloud Computing scenario of section 3.2. The resource usage will not in all cases be the same as in the Cloud Computing scenario. Some tasks might, e.g., be computed locally, thus saving data transfer.
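As a toy illustration of the direct-cost step described above, the following sketch converts estimated resource demand into a bill under a pay-per-use pricing scheme. All rates and demand figures are hypothetical placeholders, not actual provider prices:

```python
# Sketch: direct costs of a Cloud Computing scenario under a utility
# pricing scheme. All rates and demand figures below are hypothetical.

def direct_cloud_costs(cpu_hours, storage_gb_months, transfer_gb, rates):
    """Sum the metered resource charges of a pay-per-use scheme."""
    return (cpu_hours * rates["cpu_hour"]
            + storage_gb_months * rates["storage_gb_month"]
            + transfer_gb * rates["transfer_gb"])

# Hypothetical pricing scheme in USD, loosely modeled on 2008-era offers.
rates = {"cpu_hour": 0.10, "storage_gb_month": 0.15, "transfer_gb": 0.17}

# Estimated monthly demand for the business scenario (placeholder values).
cost = direct_cloud_costs(cpu_hours=720, storage_gb_months=50,
                          transfer_gb=100, rates=rates)
print(f"Estimated direct monthly cost: ${cost:.2f}")
```

Indirect costs (learning curves, tooling experience) resist this kind of mechanical treatment and would have to be estimated separately, as the text notes.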
Other differences could result from a totally different approach that must be taken in order to fulfill the business objectives defined in the business scenario.

In the case of privately owned IT infrastructure, cost models such as Gartner's TCO [3] provide a good tool for calculations [8]. The cost model should comprise direct costs, such as capital expenditures for the facility, energy and cooling infrastructure, cables, servers, and so on. Moreover, there are operational expenditures which must be taken into account, such as energy, network fees and IT employees. Indirect costs comprise costs from failing to meet business objectives, e.g. time to market, customer satisfaction or Quality-of-Service-related Service Level Agreements. There is no easy way to measure these, and how it can be done will vary from case to case. More sophisticated TCO models must be developed to mitigate this shortcoming. One approach might be to compare the cash flow streams that result from failing to deliver certain business objectives, such as short time to market. If the introduction of a service offering is delayed due to slow deployment processes, the resulting deficit can be calculated as a discounted cash flow.

When all direct and indirect costs have been taken into account, the total costs of the reference IT infrastructure service can be calculated by summing up. Finally, the costs of the Cloud Computing scenario and of the reference model scenario can be compared.

4 Evaluation and Discussion

Early adopters of Cloud Computing technologies are IT engineers who work on Web-scale projects, such as the New York Times TimesMachine [9]. Start-ups with high scalability requirements turn to Cloud Computing providers, such as Amazon EC2, in order to roll out Web-scale services with comparatively low entry costs [7]. These and other examples show that scalability, low market barriers and rapid deployment are among the most important drivers of Cloud Computing.
4.1 New York Times TimesMachine

In autumn 2007, New York Times senior software engineer Derek Gottfrid worked on a project named TimesMachine. The service was to provide access to any New York Times issue since 1851, adding up to a bulk of 11 million articles which had to be served in the form of PDF files. Previously, Gottfrid and his colleagues had implemented a solution that generated the PDF files dynamically from already scanned TIFF images of the New York Times articles. This approach worked well, but with traffic volumes about to increase significantly, it would be better to serve pre-generated static PDF files.

Faced with the challenge of converting 4 Terabytes of source data into PDF, Derek Gottfrid decided to make use of Amazon's Web Services Elastic Compute Cloud (EC2) and Simple Storage Service (S3). He uploaded the source data to S3 and started a Hadoop cluster of customized EC2 Amazon Machine Images (AMIs). With 100 EC2 AMIs running in parallel, he could complete the task of reading the source data from S3, converting it to PDF and storing it back to S3 within 36 hours.

How does this use case fit in our framework? Gottfrid's approach was motivated by the simplicity with which the one-time task could be accomplished if performed "in the Cloud". No up-front costs were involved, except for insignificant expenditures when experimenting whether the endeavor was feasible at all. Due to the simplicity of the approach and the low costs involved, his superiors agreed without imposing bureaucratic obstacles. Another key driver was to cut deployment times short and thereby time to market. The alternative to Amazon EC2 and S3 would have been to ask for permission to purchase commodity hardware, install it and finally run the tasks - a process that very likely would have taken several weeks or even months.
After process execution, the extra hardware would have had to be sold or used in another context.

This use case is a good example of a one-time batch-processing job that can be performed in a Grid Computing or Cloud Computing environment. From the backend engineer's point of view it is favorable to be able to get started without much configuration overhead, as only the task result is relevant. The data storage and processing volume is known beforehand and no measures have to be taken to guarantee scalability, availability, or the like.

In a comparative study, researchers from the CERN-based EGEE project argue that Clouds differ from Grids in that they serve different usage patterns: while Grids are mostly used for short-term job executions, Clouds usually support long-lived services [10]. We agree that usage patterns are an important differentiator between Clouds and Grids; however, the TimesMachine use case shows that this is not a question of service lifetime. Clouds are well suited to serve short-lived usage scenarios, such as batch processes or situational Mash-up services.

4.2 Major League Baseball

MLB Advanced Media is the company that develops and maintains the Major League Baseball Web sites. During the 2007 season, director of operations Ryan Nelson received the request to implement a chat product as an additional service to the Web site [11]. He was told that the chat had to go online as soon as possible. However, the company's data center in Manhattan did not leave much free storage capacity and processing power. Since there was no time to order and install new machines, Nelson decided to call the Cloud Computing provider Joyent. He arranged for 10 virtual machines in a development cluster and another 20 machines for production mode. Nelson's team developed and tested the chat for about 2 months and then launched the new product. When the playoffs and World Series started, more resources were needed.
Another 15 virtual machines and additional RAM solved the problem.

Ryan Nelson points out two major advantages of this approach. First, the company gains flexibility to try out new products quickly and turn them off if they are not a success. In this context, the ability to scale down proves to be just as important as scaling up. Furthermore, Nelson's team can better respond to the seasonal demand spikes which are typical for Web sites about sports events.

5 Related Work

Various economic aspects of outsourcing storage capacity and processing power have been covered by previous work in distributed computing and grid computing [12], [13], [14], [15]. However, the methods and business models introduced for Grid Computing do not consider all the economic drivers which we identified as relevant for Cloud Computing, such as pushing for short time to market in the context of organizational inertia, or low entry barriers for start-up companies.

With a rule-of-thumb calculation, Jim Gray points to the opportunity costs of distributed computing in the Internet as opposed to local computations, i.e. in LAN clusters [12]. In his scenario, $1 USD equals 1 GB sent over WAN or, alternatively, eight hours of CPU processing time. Gray reasons that, except for highly processing-intensive applications, outsourcing computing tasks into a distributed environment does not pay off, because network traffic fees outnumber the savings in processing power. Calculating the tradeoff between basic computing services can be useful to get a general idea of the economies involved. This method can easily be applied to the pricing schemes of Cloud Computing providers. For $1 USD the Web Service Amazon EC2 offers around 6 GB data transfer or 10 hours of CPU processing (see footnote 1). However, this sort of calculation only makes sense if placed in a broader context. Whether or not computing services can be performed locally depends on the underlying business objective.
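Gray's tradeoff can be checked mechanically against a provider's pricing scheme. A small sketch, using the July 2008 Amazon figures cited in the paper's footnote ($0.17 per GB of outgoing traffic, $0.10 per small-instance hour), to compute the break-even ratio of processing time to data transfer:

```python
# Sketch of Gray's rule-of-thumb tradeoff [12], applied to the July 2008
# Amazon pricing cited in the text: $0.17/GB outgoing, $0.10/CPU-hour.

PRICE_PER_GB = 0.17        # USD per GB of outgoing traffic
PRICE_PER_CPU_HOUR = 0.10  # USD per hour of a small instance

def breakeven_cpu_hours_per_gb(price_gb, price_cpu_hour):
    """CPU-hours of useful work needed to justify shipping 1 GB over WAN."""
    return price_gb / price_cpu_hour

ratio = breakeven_cpu_hours_per_gb(PRICE_PER_GB, PRICE_PER_CPU_HOUR)
print(f"1 GB transferred costs as much as {ratio:.1f} CPU-hours")

# For $1 one gets about 1/0.17 = 5.9 GB of transfer or 10 CPU-hours,
# matching the 'around 6 GB or 10 hours' figure in the text.
print(f"$1 buys {1 / PRICE_PER_GB:.1f} GB or {1 / PRICE_PER_CPU_HOUR:.0f} CPU-hours")
```

As the text stresses, such a ratio is only a first-order indicator; whether a task can be computed locally at all depends on the business objective.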
It might, for example, be necessary to process data in a distributed environment in order to enable online collaboration.

George Thanos et al. evaluate the adoption of Grid Computing technology for business purposes in a more comprehensive way [14]. The authors shed light on general business objectives and economic issues associated with Grid Computing, such as economies of scale and scope, network externalities, market barriers, etc. In particular, their explanations regarding the economic rationale behind complementing privately owned IT infrastructure with utility computing services point out important aspects that are also valid for our valuation model. Cloud Computing is heavily based on the notion of Utility Computing, where large-scale data centers play the role of a utility that delivers computing services on a pay-per-use basis. The business scenarios described by Thanos et al. only partially apply to those we can observe in Cloud Computing. Important benefits associated with Cloud Computing, such as shorter time to market and responsiveness to highly varying demand, are not covered. These business objectives bring technological challenges that Cloud Computing explicitly addresses, such as scalability and high availability in the face of unpredictable short-term demand peaks.

6 Conclusion and Future Work

Cloud Computing is an emerging trend of provisioning scalable and reliable services over the Internet as computing utilities. Early adopters of Cloud Computing services, such as start-up companies engaged in Web-scale projects, intuitively embrace the opportunity to rely on massively scalable IT infrastructure from providers like Amazon. However, there is no systematic, dedicated approach to measuring the benefit of Cloud Computing that could serve as a guide for decision makers in telling when outsourcing IT resources into the Cloud makes sense. We have addressed this problem and developed a valuation framework that serves as a starting point for future work.
Our framework provides a step-by-step guide to determining the benefits of Cloud Computing, from describing a business scenario to comparing Cloud Computing services with a reference IT solution. We identify key components: business domain, objectives, demand behavior and technical requirements. Based on business objectives and technical requirements, the costs of a Cloud Computing service, as well as the costs of a reference IT solution, can be calculated and compared. Well-known use cases of Cloud Computing adopters serve as a means to discuss and evaluate the validity of our framework.

In future work, we will identify and analyze concrete valuation methods that can be applied within the context of our framework. Furthermore, it is necessary to evaluate cost models that might serve as a template for estimating direct and indirect costs, a key challenge that we have only mentioned.

Footnote 1: According to the Amazon Web Service pricing in July 2008, one GB of outgoing traffic costs $0.17 for the first 10 TB per month. Running a small AMI instance with the compute capacity of a 1.0-1.2 GHz 2007 Xeon or Opteron processor for one hour costs $0.10 USD.

References

1. Amazon Web Services: Customer Case Studies, /Success-Stories-AWS-home-page/b/ref=sc_fe_l_1?ie=UTF8&node=182241011&no=3440661
2. Titman, S., Martin, J.: Valuation. The Art & Science of Corporate Investment Decisions, Addison-Wesley (2007)
3. Gartner TCO, /TCO/index.htm
4. Chiu, W.: From Cloud Computing to the New Enterprise Data Center, IBM High Performance On Demand Solutions (2008)
5. Pentagon's IT Unit Seeks to Adopt Cloud Computing, New York Times, /idg/IDG_852573C400693880002574890080F9EF.html?ref=technology
6. Schlossnagle, T.: Scalable Internet Architectures, Sams Publishing (2006)
7. PowerSet Use Case, /b?ie=UTF8&node=331766011&me=A36L942TSJ2AJA
8. Koomey, J.: A Simple Model for Determining True Total Cost of Ownership for Data Centers, Uptime Institute (2007)
9. New York Times TimesMachine use case, /2007/11/01/self-service-prorated-super-computing-fun/
10. Begin, M.: An EGEE Comparative Study: Grids and Clouds - Evolution or Revolution?, CERN Enabling Grids for E-Science (2008)
11. Major League Baseball use case, /news/2007/121007-your-take-mlb.html
12. Gray, J.: Distributed Computing Economics. Microsoft Research Technical Report MSR-TR-2003-24, Microsoft Research (2003)
13. Buyya, R., Stockinger, H., Giddy, J., Abramson, D.: Economic Models for Management of Resources in Grid Computing, ITCom (2001)
14. Thanos, G., Courcoubetis, C., Stamoulis, G.: Adopting the Grid for Business Purposes: The Main Objectives and the Associated Economic Issues, Grid Economics and Business Models: 4th International Workshop, GECON (2007)
15. Hwang, J., Park, J.: Decision Factors of Enterprises for Adopting Grid Computing, Grid Economics and Business Models: 4th International Workshop, GECON (2007)

Balanced Scorecard

Financial Perspective

[Strategy-map figure: Return on Investment at the top, driven by a Revenue Growth Strategy (sources of growth) on one side and a Productivity Strategy (optimization) on the other, supported from the Learning perspective by ground crew alignment. Caption: "1. The economic model of key levers driving financial performance."]
Balanced Scorecard Example

Strategic Theme: Operating Efficiency

Strategy map (objectives by perspective):
- Financial: Profitability; Fewer planes
- Customer: Flight is on time; More customers
- Internal: Fast ground turnaround
- Learning: Ground crew alignment

Objective: Fast ground turnaround
- Measurement (how success in achieving the strategy will be measured and tracked): On Ground Time; On-Time Departure
- Target (the level of performance or rate of improvement needed): 30 Minutes; 90%
- Initiative (key action programs required to achieve objectives): Cycle time optimization program
- Ramp-up targets: 70% yr. 3, 90% yr. 5, 100%

A Generic Camera Model and Calibration Method for Conventional, Wide-Angle, and Fish-Eye Lenses (2006)


A Generic Camera Model and Calibration Method for Conventional, Wide-Angle, and Fish-Eye Lenses

Juho Kannala and Sami S. Brandt

Abstract

Fish-eye lenses are convenient in applications where a very wide angle of view is needed, but their use for measurement purposes has been limited by the lack of an accurate, generic, and easy-to-use calibration procedure. We hence propose a generic camera model, which is suitable for fish-eye lens cameras as well as for conventional and wide-angle lens cameras, and a calibration method for estimating the parameters of the model. The achieved level of calibration accuracy is comparable to the previously reported state of the art.

Index Terms: camera model, camera calibration, lens distortion, fish-eye lens, wide-angle lens

I. Introduction

The pinhole camera model accompanied with lens distortion models is a fair approximation for most conventional cameras with narrow-angle or even wide-angle lenses [1], [6], [7]. But it is still not suitable for fish-eye lens cameras. Fish-eye lenses are designed to cover the whole hemispherical field in front of the camera, and the angle of view is very large, about 180°. Moreover, it is impossible to project the hemispherical field of view onto a finite image plane by a perspective projection, so fish-eye lenses are designed to obey some other projection model.
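To make the last point concrete: under the perspective projection r = f tan θ (introduced in Section II), the image radius diverges as the ray angle θ approaches 90°, while fish-eye designs such as the equidistance projection r = f θ keep it finite. A minimal numeric illustration:

```python
# Perspective vs. equidistance projection of a ray at angle theta
# (radians) from the principal axis, with focal length f = 1.
import math

def r_perspective(theta, f=1.0):
    return f * math.tan(theta)   # eq. (1): diverges as theta -> 90 deg

def r_equidistance(theta, f=1.0):
    return f * theta             # eq. (3): finite for all angles

for deg in (30, 60, 85, 89.9):
    t = math.radians(deg)
    print(f"theta={deg:5.1f} deg  perspective r={r_perspective(t):10.2f}  "
          f"equidistance r={r_equidistance(t):.2f}")

# A hemispherical (~180 deg) field of view cannot fit on a finite image
# plane under the perspective model, but can under the equidistance model.
```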
This is the reason why the inherent distortion of a fish-eye lens should not be considered merely as a deviation from the pinhole model [14].

There have been some efforts to model the radially symmetric distortion of fish-eye lenses with different models [16], [17], [20]. The idea in many of these approaches is to transform the original fish-eye image to follow the pinhole model. In [17] and [16], the parameters of the distortion model are estimated by forcing straight lines to be straight after the transformation, but the problem is that these methods do not give the full calibration. They can be used to "correct" the images to follow the pinhole model, but their applicability is limited when one needs to know the direction of a back-projected ray corresponding to an image point. The calibration procedures in [5] and [3] instead aim at calibrating fish-eye lenses generally. However, these methods are slightly cumbersome in practice because a laser beam or a cylindrical calibration object is required.

Recently the first auto-calibration methods for fish-eye lens cameras have also emerged [8], [9], [12]. Mičušík and Pajdla [8] proposed a method for simultaneous linear estimation of epipolar geometry and an omnidirectional camera model. Claus and Fitzgibbon [12] presented a distortion model which likewise allows the simultaneous linear estimation of camera motion and lens geometry, and Thirthala and Pollefeys [9] used the multi-view geometry of radial 1D cameras to estimate a non-parametric camera model. In addition, the recent work by Barreto and Daniilidis [10] introduced a radial fundamental matrix for correcting the distortion of wide-angle lenses. Nevertheless, the emphasis in these approaches is more on the auto-calibration techniques than on the precise modeling of real lenses. In this paper, we concentrate on accurate geometric modeling of real cameras.

We propose a novel calibration method for fish-eye lenses that requires the camera to observe a planar calibration pattern. The calibration method is based on
a generic camera model that will be shown to be suitable for different kinds of omnidirectional cameras as well as for conventional cameras. First, in Section II, we present the camera model and, in Section III, theoretically justify it by comparing different projection models. In Section IV, we describe a procedure for estimating the parameters of the camera model, and the experimental results are presented and discussed in Sections V and VI.

Footnote 1: An early conference version of this paper is [2].

II. Generic Camera Model

Since the perspective projection model is not suitable for fish-eye lenses, we use a more flexible, radially symmetric projection model. This basic model is introduced in Section II-A and then extended with asymmetric distortion terms in Section II-B. Computation of back-projections is described in Section II-C.

A. Radially Symmetric Model

The perspective projection of a pinhole camera can be described by the following formula:

  r = f tan θ   (i. perspective projection),   (1)

where θ is the angle between the principal axis and the incoming ray, r is the distance between the image point and the principal point, and f is the focal length. Fish-eye lenses instead are usually designed to obey one of the following projections:

  r = 2f tan(θ/2)   (ii. stereographic projection),     (2)
  r = f θ           (iii. equidistance projection),     (3)
  r = 2f sin(θ/2)   (iv. equisolid angle projection),   (4)
  r = f sin θ       (v. orthogonal projection).         (5)

[Figure 1: (a) Projections (1)-(5) with f = 1. (b) Fish-eye camera model: the image of the point P is p, whereas it would be p′ for a pinhole camera.]

Perhaps the most common model is the equidistance projection. The behavior of the different projections is illustrated in Fig. 1(a), and the difference between a pinhole camera and a fish-eye camera is shown in Fig. 1(b).

The real lenses do not, however, exactly follow the designed projection model. From the viewpoint of automatic calibration, it would also be useful if we had only one model suitable for different types of lenses. Therefore we consider projections in the general
form

  r(θ) = k1 θ + k2 θ^3 + k3 θ^5 + k4 θ^7 + k5 θ^9 + ...,   (6)

where, without any loss of generality, even powers have been dropped. This is due to the fact that we may extend r onto the negative side as an odd function, while the odd powers span the set of continuous odd functions. For computations we need to fix the number of terms in (6). We found that the first five terms, up to the ninth power of θ, give enough degrees of freedom for a good approximation of the different projection curves. Thus, the radially symmetric part of our camera model contains the five parameters k1, k2, ..., k5.

Let F be the mapping from the incoming rays to the normalized image coordinates:

  (x, y)^T = r(θ) (cos φ, sin φ)^T = F(Φ),   (7)

where r(θ) contains the first five terms of (6) and Φ = (θ, φ)^T is the direction of the incoming ray. For real lenses the values of the parameters k_i are such that r(θ) is monotonically increasing on the interval [0, θ_max], where θ_max is the maximum viewing angle. Hence, when computing the inverse of F, we may solve θ by numerically finding the roots of a ninth-order polynomial and then choosing the real root between 0 and θ_max.

B. Full Model

Real lenses may deviate from precise radial symmetry, and therefore we supplement our model with an asymmetric part. For instance, the lens elements may be inaccurately aligned, causing the projection to be not exactly radially symmetric. With conventional lenses this kind of distortion is called decentering distortion [1], [13]. However, there are also other possible sources of imperfections in the optical system, and some of them may be difficult to model. For example, the image plane may be tilted with respect to the principal axis, or the individual lens elements may not be precisely radially symmetric. Therefore, instead of trying to model all the different physical phenomena in the optical system individually, we propose a flexible mathematical distortion model that is simply fitted to agree with the observations.

To obtain a widely applicable, flexible model, we propose to use two distortion terms
as follows. One distortion term acts in the radial direction,

Δr(θ, ϕ) = (l1 θ + l2 θ³ + l3 θ⁵)(i1 cos ϕ + i2 sin ϕ + i3 cos 2ϕ + i4 sin 2ϕ),   (8)

and the other in the tangential direction,

Δt(θ, ϕ) = (m1 θ + m2 θ³ + m3 θ⁵)(j1 cos ϕ + j2 sin ϕ + j3 cos 2ϕ + j4 sin 2ϕ),   (9)

where the distortion functions are separable in the variables θ and ϕ. Because the Fourier series of any 2π-periodic continuous function converges in the L2-norm, and any continuous odd function can be represented by a series of odd polynomials, we could in principle model any kind of continuous distortion by simply adding more terms to (8) and (9); as given, they both have seven parameters.

By adding the distortion terms to (7), we obtain the distorted coordinates x_d = (x_d, y_d)ᵀ from

x_d = r(θ) u_r(ϕ) + Δr(θ, ϕ) u_r(ϕ) + Δt(θ, ϕ) u_ϕ(ϕ),   (10)

where u_r(ϕ) and u_ϕ(ϕ) are the unit vectors in the radial and tangential directions. To achieve a complete camera model we still need to transform the sensor plane coordinates into image pixel coordinates. By assuming that the pixel coordinate system is orthogonal, we get the pixel coordinates (u, v)ᵀ from

(u, v)ᵀ = [m_u 0; 0 m_v] (x_d, y_d)ᵀ + (u_0, v_0)ᵀ = A(x_d),   (11)

where (u_0, v_0)ᵀ is the principal point and m_u and m_v give the number of pixels per unit distance in the horizontal and vertical directions, respectively. By combining (10) and (11) we have the forward camera model

m = P_c(Φ),   (12)

where m = (u, v)ᵀ. This full camera model contains 23 parameters and is denoted by p23 in the following.

Since the asymmetric part of the model is very flexible, it may sometimes be reasonable to use a reduced camera model in order to avoid over-fitting. This is the case if, for instance, the control points do not cover the whole image area. Leaving out the asymmetric part gives the camera model p9 with nine parameters: five in the radially symmetric part (7) and four in the affine transformation (11). We also did experiments with the six-parameter model p6, which contains only two parameters in the radially symmetric part.

C. Backward Model

Above we have
described our forward camera model P_c. In practice, one also needs to know the backward model

Φ = P_c⁻¹(m),   (13)

which is the mapping from the image point m = (u, v)ᵀ to the direction of an incoming light ray, Φ = (θ, ϕ)ᵀ. We write P_c as the composite function P_c = A ∘ D ∘ F, where F is the transformation (7) from the ray direction Φ to the ideal Cartesian coordinates x = (x, y)ᵀ on the image plane, D is the distortion mapping from x to the distorted coordinates x_d = (x_d, y_d)ᵀ, and A is the affine transformation (11). We decompose the projection model in this form because, for the inverse transform P_c⁻¹ = F⁻¹ ∘ D⁻¹ ∘ A⁻¹, it is straightforward to compute F⁻¹ and A⁻¹. The more difficult part is to numerically compute D⁻¹.

Given a point x_d, finding x = D⁻¹(x_d) is equivalent to computing the shift s in the expression x = x_d − s, where

s = S(Φ) = Δr(θ, ϕ) u_r(ϕ) + Δt(θ, ϕ) u_ϕ(ϕ).   (14)

Moreover, we may write S(Φ) ≡ (S ∘ F⁻¹)(x) and approximate the shift by the first-order Taylor expansion of S ∘ F⁻¹ around x_d, which yields

s ≈ (S ∘ F⁻¹)(x_d) + (∂(S ∘ F⁻¹)/∂x)(x_d) (x − x_d) = S(Φ_d) − (∂S/∂Φ)(Φ_d) [(∂F/∂Φ)(Φ_d)]⁻¹ s,

where Φ_d = F⁻¹(x_d) may be numerically evaluated. Hence, we may compute the shift s from

s ≈ [ I + (∂S/∂Φ)(Φ_d) [(∂F/∂Φ)(Φ_d)]⁻¹ ]⁻¹ S(Φ_d),   (15)

where the Jacobians ∂S/∂Φ and ∂F/∂Φ may be computed from (14) and (7), respectively.
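The "straightforward" part, F⁻¹, amounts to the ninth-order root-finding mentioned in Section II-A. A small numerical sketch of it (Python with made-up coefficients k_i; not the authors' toolbox code):

```python
import numpy as np

def r_forward(theta, k):
    # r(theta) = k1*theta + k2*theta^3 + ... + k5*theta^9, cf. eq. (6)
    return sum(ki * theta ** (2 * i + 1) for i, ki in enumerate(k))

def theta_backward(r, k, theta_max):
    # Coefficients of k5 t^9 + k4 t^7 + k3 t^5 + k2 t^3 + k1 t - r = 0,
    # highest power first; np.roots discards leading zero coefficients.
    coeffs = [k[4], 0.0, k[3], 0.0, k[2], 0.0, k[1], 0.0, k[0], -r]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    # r(theta) is monotone on [0, theta_max], so one real root lies there
    valid = real[(real >= 0.0) & (real <= theta_max)]
    return float(valid.min())

# hypothetical coefficients for a mildly compressing lens
k = [1.0, -0.05, 0.003, 0.0, 0.0]
theta_rec = theta_backward(r_forward(0.8, k), k, theta_max=1.6)
```

Here `theta_rec` recovers the original angle 0.8 rad to the accuracy of the polynomial root-finder; for a calibrated lens, k would come from the estimation procedure of Section IV.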
So, finally,

D⁻¹(x_d) ≈ x_d − [ I + ((∂S/∂Φ) ∘ F⁻¹)(x_d) [((∂F/∂Φ) ∘ F⁻¹)(x_d)]⁻¹ ]⁻¹ (S ∘ F⁻¹)(x_d).   (16)

The first-order approximation of the asymmetric distortion function D appears tenable in practice, because the backward model error is typically several orders of magnitude smaller than the calibration accuracy of the forward model, as will be seen in detail in Section V.

III. JUSTIFICATION OF THE PROJECTION MODEL

The traditional approach to camera calibration is to take the perspective projection model as a starting point and then supplement it with distortion terms [1], [6], [18]. However, this is not a valid approach for fish-eye lenses because, when θ approaches π/2, the perspective model projects points infinitely far, and it is not possible to remove this singularity with the conventional distortion models. Hence, we base our calibration method on the more generic model (6).

We compared the polynomial model to the two-parametric model

r = (a/b) sin(bθ),   (M1)

and to a second two-parametric model (M2) proposed by Mičušík [11] for fish-eye lenses. In Fig. 2 we have plotted the projection curves (1)-(5) together with their approximations by the models M1, M2 and P3, where P3 is the polynomial model (6) with the first two terms; we used the value f = 200 pixels, which is a reasonable value for a real camera. The projections were approximated between 0 and θmax, where the values of θmax were 60°, 110°, 110°, 110° and 90°, respectively. The interval [0, θmax] was discretized using a step of 0.1°, and the models M1 and M2 were fitted using the Levenberg-Marquardt method. It can be seen from Fig. 2 that the model M1 is not suitable at all for the perspective and stereographic projections, and that the model M2 is not accurate for the orthogonal projection.

TABLE I
THE APPROXIMATION ERRORS.

      M1    M2    P3    P9
(1)   69    13    12    0.1
(2)   90    13    13    0.0
(3)   0.0   0.0   0.0   0.0
(4)   0.0   1.8   0.33  0.0
(5)   0.0   9.7   1.80  0.0

In Table I we have tabulated the maximum approximation errors for each model, i.e., the maximum vertical distances between the desired curve and the approximation in Fig. 2. Here we also have the model P9, which is the polynomial model (6) with the first five terms. It can be seen that the model P3 has the best
overall performance among the two-parametric models, and that sub-pixel approximation accuracy for all the projection curves requires the five-parametric model P9. These results show that the radially symmetric part of our camera model is well justified.

IV. CALIBRATING THE GENERIC MODEL

Next we describe a procedure for estimating the parameters of the camera model. The calibration method is based on viewing a planar object which contains control points in known positions. The advantage over previous approaches is that fish-eye lenses, possibly having a field of view larger than 180°, can also be calibrated by simply viewing a planar pattern. In addition, a good accuracy can be achieved if circular control points are used, as described in Section IV-B.

A. Calibration Algorithm

The calibration procedure consists of four steps that are described below. We assume that M control points are observed in N views. For each view there is a rotation matrix R_j and a translation vector t_j describing the position of the camera with respect to the calibration plane, such that

X_c = R_j X + t_j,   j = 1, ..., N.   (17)

We choose the calibration plane to lie in the XY-plane and denote the coordinates of control point i by X_i = (X_i, Y_i, 0)ᵀ. The corresponding homogeneous coordinates in the calibration plane are denoted by x_i^p = (X_i, Y_i, 1)ᵀ, and the observed coordinates in view j by m_i^j = (u_i^j, v_i^j)ᵀ. The first three steps of the calibration procedure involve only six internal camera parameters, for which we use the short-hand notation p6 ≙ (k1, k2, m_u, m_v, u_0, v_0).
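For orientation, the forward imaging chain that the calibration repeatedly evaluates — a world point on the calibration plane mapped through (17), the radial model (6)-(7), and the affine step (11) — can be sketched as follows (illustrative Python with hypothetical parameter values, not the authors' implementation):

```python
import numpy as np

def project_point(X_world, R, t, k, mu, mv, u0, v0):
    # camera-frame coordinates, eq. (17): Xc = R X + t
    Xc = R @ X_world + t
    # angle between the incoming ray and the principal axis, and its azimuth
    theta = np.arccos(Xc[2] / np.linalg.norm(Xc))
    phi = np.arctan2(Xc[1], Xc[0])
    # radially symmetric projection, eqs. (6)-(7)
    r = sum(ki * theta ** (2 * i + 1) for i, ki in enumerate(k))
    x, y = r * np.cos(phi), r * np.sin(phi)
    # affine step, eq. (11)
    return np.array([mu * x + u0, mv * y + v0])

# camera 1 m above the plane, equidistance-like radial part (k = [1,0,0,0,0])
uv = project_point(np.array([0.2, 0.1, 0.0]), np.eye(3),
                   np.array([0.0, 0.0, 1.0]),
                   [1.0, 0.0, 0.0, 0.0, 0.0], 100.0, 100.0, 320.0, 240.0)
```

Step 4 below minimizes the squared distances between such modeled projections and the measured control-point centroids over all views.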
The additional parameters of the full model are inserted only in the final step.

Step 1: Initialization of internal parameters

The initial guesses for k1 and k2 are obtained by fitting the model r = k1 θ + k2 θ³ to the desired projection (2)-(4), with the manufacturer's values for the nominal focal length f and the angle of view θmax. Then we also obtain the radius of the image on the sensor plane from r_max = k1 θmax + k2 θmax³.

With a circular-image fish-eye lens, the actual image fills only a circular area inside the image frames. In pixel coordinates, this circle is an ellipse,

((u − u_0)/a)² + ((v − v_0)/b)² = 1,

whose parameters can be estimated. Consequently, we obtain initial guesses for the remaining unknowns m_u, m_v, u_0 and v_0, where m_u = a/r_max and m_v = b/r_max. With a full-frame lens, the best thing is probably to place the principal point at the image center and use the reported values of the pixel dimensions to obtain initial values for m_u and m_v.

Step 2: Back-projection and computation of homographies

With the internal parameters p6, we may back-project the observed points m_i^j onto the unit sphere centered at the camera origin (see Fig. 1(b)). The points on the sphere are denoted by x̃_i^j. Since the mapping between the points on the calibration plane and on the unit sphere is a central projection, there is a planar homography H_j so that s x̃_i^j = H_j x_i^p.

For each view j the homography H_j is computed as follows:

(i) Back-project the control points by first computing the normalized image coordinates

(x_i^j, y_i^j)ᵀ = [1/m_u 0; 0 1/m_v] (u_i^j − u_0, v_i^j − v_0)ᵀ,

transforming them to the polar coordinates (r_i^j, ϕ_i^j) ≙ (x_i^j, y_i^j), and finally solving θ_i^j from the cubic equation k2 (θ_i^j)³ + k1 θ_i^j − r_i^j = 0.

(ii) Set x̃_i^j = (sin ϕ_i^j sin θ_i^j, cos ϕ_i^j sin θ_i^j, cos θ_i^j)ᵀ.

(iii) Compute the initial estimate for H_j from the correspondences x̃_i^j ↔ x_i^p by the linear algorithm with data normalization [15]. Define x̂_i^j as the exact image of x_i^p under H_j, such that x̂_i^j = H_j x_i^p / ||H_j x_i^p||.

(iv) Refine the homography H_j
by minimizing Σ_i sin² α_i^j, where α_i^j is the angle between the unit vectors x̃_i^j and x̂_i^j.

Step 3: Initialization of external parameters

The initial values for the external camera parameters are extracted from the homographies H_j. It holds that

s x̃_i^j = [R_j t_j] (X_i, Y_i, 0, 1)ᵀ = [r_1^j r_2^j t_j] (X_i, Y_i, 1)ᵀ,

which implies H_j = [r_1^j r_2^j t_j], up to scale. Furthermore,

r_1^j = λ_j h_1^j,  r_2^j = λ_j h_2^j,  r_3^j = r_1^j × r_2^j,  t_j = λ_j h_3^j,

where λ_j = sign(H_{3,3}^j)/||h_1^j||. Because of estimation errors, the obtained rotation matrices are not orthogonal. Thus, we use the singular value decomposition to compute the closest orthogonal matrices in the sense of the Frobenius norm [4] and use them as initial guesses for the R_j.

Step 4: Minimization of projection error

If the full model p23 or the model p9 is used, the additional camera parameters are initialized to zero at this stage. As we have the estimates for the internal and external camera parameters, we use (17), (7) or (10), and (11) to compute the imaging function P^j for each camera, where a control point is projected to m̂_i^j = P^j(X_i). The camera parameters are refined by minimizing the sum of squared distances between the measured and modeled control point projections,

Σ_{j=1}^N Σ_{i=1}^M d(m_i^j, m̂_i^j)²,   (18)

using the Levenberg-Marquardt algorithm.

B. Modification for Circular Control Points

In order to achieve an accurate calibration, we used a calibration plane with white circles on a black background, since the centroids of the projected circles can be detected with sub-pixel accuracy [19]. In this setting, however, the problem is that the centroid of the projected circle is not the image of the center of the original circle. Therefore, since m_i^j in (18) is the measured centroid, we should not project the circle centers as the points m̂_i^j.

To avoid this problem, we propose solving the centroids of the projected circles numerically. We parameterize the interior of the circle at (X_0, Y_0) with radius R by X(ρ, α) = (X_0 + ρ sin α, Y_0 + ρ cos α, 0)ᵀ. Given the camera parameters, we get the
centroid m̂ for the circle by numerically evaluating

m̂ = ( ∫₀^R ∫₀^{2π} m̂(ρ, α) |det J(ρ, α)| dα dρ ) / ( ∫₀^R ∫₀^{2π} |det J(ρ, α)| dα dρ ),   (19)

where m̂(ρ, α) = P(X(ρ, α)) and J(ρ, α) is the Jacobian of the composite function P ∘ X. Solving the Jacobian analytically is rather a tedious task, but it can be computed with mathematical software such as Maple.

V. CALIBRATION EXPERIMENTS

A. Conventional and Wide-Angle Lens Camera

The proposed camera model was compared to the camera model used by Heikkilä [6]. This model is the skew-zero pinhole model accompanied by four distortion parameters, and it is denoted by δ8 in the following.

In the first experiment we used the same data, provided by Heikkilä, as in [6]. It was originally obtained by capturing a single image of a calibration object consisting of two orthogonal planes, each with 256 circular control points. The camera was a monochrome CCD camera with an 8.5 mm Cosmicar lens. The second experiment was performed with a Sony DFW-VL500 camera and a wide-angle conversion lens, with a total focal length of 3.8 mm. In this experiment we used six images of the calibration object. There were 1328 observed control points in total, and they were localized by computing their gray scale centroids [19].

TABLE II
THE RMS RESIDUAL ERROR IN PIXELS.

           δ8     p6     p9     p23
Cosmicar   0.061  0.107  0.055  0.052
Sony       0.124  0.234  0.092  0.057

The obtained RMS residual errors, i.e., the root-mean-squared distances between the measured and modeled control point positions, are shown in Table II. Especially interesting is the comparison between the models δ8 and p9, because they both have eight degrees of freedom. Model p9 gave slightly smaller residuals although it does not contain any tangential distortion terms.

Fig. 3. Heikkilä's calibration data. (a) The estimated asymmetric distortion (Δr u_r + Δt u_ϕ) using the extended model p23. (b) The remaining residual for each control point. The vectors are scaled up by a factor of 150.

However, in the first experiment the full model may have been partly fitted to the systematic errors of the
calibration data. This is due to the fact that there were measurements only from one image, where the illumination was not uniform and all corners were not covered by control points. To illustrate this, the estimated asymmetric distortion and the remaining residuals for the model p23 are shown in Fig. 3. The relatively large residuals in the lower right corner of the calibration image (Fig. 3(b)) seem to be due to inaccurate localization caused by the non-uniform lighting.

In the second experiment the calibration data was better, so the full model is likely to be more useful. This was verified by taking an additional image of the calibration object and solving the corresponding external camera parameters with given internal parameters. The RMS projection error for the additional image was 0.049 pixels for p23 and 0.071 pixels for p9. This indicates that the full model described the true geometry of the camera better than the simpler model p9.

Finally, we estimated the backward model error for p23, caused by the first-order approximation of the asymmetric distortion function (see Section II-C). This was done by back-projecting each pixel and then reprojecting the rays. The maximum displacement in the reprojection was 2.1·10⁻⁵ pixels for the first camera and 4.6·10⁻⁴ pixels for the second. Both values are very small, so it is justified to ignore the backward model error in practice.

B. Fish-Eye Lens Cameras

The first fish-eye lens experimented with was an equidistance lens with a nominal focal length of 1.178 mm, attached to a Watec 221S CCD color camera. The calibration object was a 2×3 m² plane containing white circles with a radius of 60 mm on a black background. The calibration images were digitized from an analog video signal to 8-bit monochrome images of size 640 by 480 pixels.

Fig. 4. Fish-eye lens calibration using only one view. (a) Original image, where the white ellipse depicts the field of view of 150°. (b) The image corrected to follow the pinhole model. Straight lines are straight, as
they should be.

The calibration of a fish-eye lens can be performed even from a single image of the planar object, as Fig. 4 illustrates. In that example we used the model p6 and 60 control points. However, for the most accurate results, the whole field of view should be covered with a large number of measurements. Therefore we experimented with our method using 12 views and 680 points in total; the results are in Table III. The extended model p23 had the smallest residual error, but the radially symmetric model p9 gave almost as good results. Nevertheless, there should be no risk of over-fitting because the number of measurements is large. The estimated asymmetric distortion and the residuals are displayed in Fig. 5.

TABLE III
THE RMS RESIDUAL ERROR IN PIXELS.

        p6     p9     p23
Watec   0.146  0.094  0.089
ORIFL   0.491  0.167  0.137

The second fish-eye lens was the ORIFL190-3 lens manufactured by Omnitech Robotics. This lens has a 190-degree field of view and clearly deviates from the exact equidistance projection model. The lens was attached to a Point Grey Dragonfly digital color camera with 1024×768 pixels; the calibration object was the same as in Section V-A. The obtained RMS residual errors for a set-up of 12 views and 1780 control points are shown in Table III. Again the full model had the best performance, and this was verified with an additional calibration image. The RMS projection error for the additional image, after fitting the external camera parameters, was 0.13 pixels for p23 and 0.16 pixels for p9.

The backward model error for p23 was evaluated at each pixel within the circular images. The maximum displacement was 9.7·10⁻⁶ pixels for the first camera and 3.4·10⁻³ pixels for the second. Again, it is justified to ignore such small errors in practice.

Fig. 5. The estimated asymmetric distortion and the remaining residual for each control point, which shows no obvious systematic error. Both plots are in normalized image coordinates and the vectors are scaled up by a factor of 150 to aid inspection.

Fig. 6. The RMS measurement, residual and estimation errors for 10 calibration trials at different levels of noise. The error bars represent the
minimum and maximum values among the trials.

C. Synthetic Data

In order to evaluate the robustness of the proposed calibration method, we also did experiments with synthetic data. The ground truth values for the camera parameters were obtained from the real fish-eye lens experiment illustrated in Fig. 5. Thus, we used the full camera model and had 680 circular control points in 12 synthetic calibration images, where the gray level values of the control points and the background were 180 and 5, respectively. In order to make the synthetic images correspond better to real images, they were blurred by a Gaussian pdf (σ = 1 pixel) and quantized to 256 gray levels.

Firstly, we estimated the significance of the centroid correction proposed in Section IV-B. In the above setting, the RMS distance between the centroids of the projected circles and the projected centers of the original circles was 0.45 pixels. This is a significantly larger value than the RMS residual errors reported in the real experiment (Table III). It indicates that, without the centroid correction, the estimated camera parameters would have been biased, and it is likely that the residual error would have been larger.

Secondly, we estimated the effect of noise on the calibration by adding Gaussian noise to the synthetic images and performing 10 calibration trials at each noise level. The standard deviation of the noise varied between 0 and 15 gray levels. The control points were localized from the noisy images by first thresholding them using a fixed threshold. Then the centroid of each control point was measured by computing the gray-level-weighted center of mass.
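The two-stage localization just described (fixed threshold, then gray-level-weighted center of mass) can be sketched as follows (an illustrative sketch, not the authors' implementation):

```python
import numpy as np

def weighted_centroid(img, threshold):
    # select the pixels of the control point via a fixed threshold
    ys, xs = np.nonzero(img > threshold)
    w = img[ys, xs].astype(float)
    # gray-level-weighted center of mass, returned as (x, y)
    return float((xs * w).sum() / w.sum()), float((ys * w).sum() / w.sum())

# synthetic control point: a small bright blob on a dark background,
# gray levels 180 (blob) and 5 (background) as in the synthetic experiment
img = np.full((9, 9), 5.0)
img[3:6, 4:7] = 180.0
cx, cy = weighted_centroid(img, threshold=90.0)
```

For this symmetric blob the centroid lands on the blob center, (cx, cy) = (5.0, 4.0); with noise and blur added, the weighting reduces the localization error relative to a plain binary centroid.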
The simulation results are shown in Fig. 6, where we have plotted the average RMS measurement, RMS residual and RMS estimation errors. There is a small error also at the zero noise level because of the discrete pixel representation and gray level quantization. The fact that the RMS errors approximately satisfy the Pythagorean equality indicates that the calibration algorithm has converged to the true global minimum [15]. Moreover, the low values of the RMS estimation error indicate that the estimated camera model is close to the true one even at large noise levels.

VI. CONCLUSION

We have proposed a novel camera calibration method for fish-eye lens cameras that is based on viewing a planar calibration pattern. The experiments verify that the method is easy to use and provides a relatively high level of accuracy with circular control points. The proposed camera model is generic, easily expandable and suitable also for conventional cameras with narrow- or wide-angle lenses. The achieved level of accuracy for fish-eye lenses is better than has been reported with other approaches and, for narrow-angle lenses, it is comparable to the results in [6]. This is promising, especially considering the aim of using fish-eye lenses for measurement purposes.

SOFTWARE

The calibration toolbox is available on the authors' webpage.

ACKNOWLEDGMENT

The authors would like to thank Janne Heikkilä for discussions and for providing his data for the experiments.

REFERENCES

[1] Brown, D.
C.: "Close-range camera calibration", Photogrammetric Engineering, 37(8):855-866, 1971.
[2] Kannala, J. and Brandt, S.: "A generic camera calibration method for fish-eye lenses", Proc. ICPR, pp. 10-13, 2004.
[3] Bakstein, H. and Pajdla, T.: "Panoramic Mosaicing with a 180° Field of View Lens", Proc. IEEE Workshop on Omnidirectional Vision, pp. 60-67, 2002.
[4] Zhang, Z.: "A flexible new technique for camera calibration", TPAMI, 22(11):1330-1334, 2000.
[5] Shah, S. and Aggarwal, J.: "Intrinsic parameter calibration procedure for a (high-distortion) fish-eye lens camera with distortion model and accuracy estimation", Pattern Recognition, 29(11):1775-1788, 1996.

modelObj 4.2: A Model Object Framework for Regression Analysis (manual)


Package 'modelObj'

October 13, 2022

Type: Package
Title: A Model Object Framework for Regression Analysis
Version: 4.2
Date: 2022-06-05
Author: Shannon T. Holloway
Maintainer: Shannon T. Holloway <****************************>
Description: A utility library to facilitate the generalization of statistical methods built on a regression framework. Package developers can use 'modelObj' methods to initiate a regression analysis without concern for the details of the regression model and the method to be used to obtain parameter estimates. The specifics of the regression step are left to the user to define when calling the function. The user of a function developed within the 'modelObj' framework creates as input a 'modelObj' that contains the model and the R methods to be used to obtain parameter estimates and to obtain predictions. In this way, a user can easily go from linear to non-linear models within the same package.
Depends: methods
Suggests: stats, graphics
License: GPL-2
Encoding: UTF-8
NeedsCompilation: no
RoxygenNote: 7.1.1
Collate: 'methodObj.R' 'methodObjPredict.R' 'methodObjSolver.R' 'methodObjSolverFormula.R' 'modelObj.R' 'modelObjFormula.R' 'methodObjSolverXY.R' 'modelObjXY.R' 'buildModelObj.R' 'modelObjFit.R' 'warnMsg.R'
Repository: CRAN
Date/Publication: 2022-06-07 08:30:09 UTC

R topics documented: buildModelObj, fit, fitObject, model, modelObj, modelObjFit-class, predict, predictor, predictorArgs, solver, solverArgs

buildModelObj: Create an Object of Class modelObj

Description

A utility function to transfer user-defined models and estimation methods to an object of class modelObj.

Usage

buildModelObj(
  model,
  solver.method = NULL,
  solver.args = NULL,
  predict.method = NULL,
  predict.args = NULL
)

Arguments

model: An object of class formula; the model.

solver.method: An object of class character specifying the name of the R function to be used to obtain parameter estimates, or the function itself. For example, 'lm', 'glm', or 'rpart'. The specified modeling function MUST have a
corresponding predict method.

solver.args: An object of class list containing additional arguments to be sent to solver.method. Arguments must be provided as a list, where the name of each element matches a formal argument of solver.method. For example, if a logistic regression using glm is desired:

solver.method = "glm"
solver.args = list("family" = binomial)

A solver.method can take the formal arguments 'formula' and 'data' as inputs, such as lm and glm. Some R methods do not use the formal names 'formula' and 'data'; a user can indicate if a different naming convention is used for these two input arguments. For example, if a method expects the formula object to be passed through the input variable x:

solver.args <- list("x" = "formula")

A solver.method can also take the formal arguments 'x' and 'y' as inputs, such as glmnet. Some R methods do not use the formal names 'x' and 'y' to indicate the covariate and response; a user can indicate if a different naming convention is used for these two input arguments. For example, if a method expects the covariate matrix to be passed through the input variable X:

solver.args <- list("X" = "x")

predict.method: A character. The name of the R function, or the function itself, to be used to obtain predictions. For example, 'predict.lm', 'predict', or 'predict.glm'. If no function is explicitly given, the generic predict is assumed. For many methods, the generic method is appropriate.

predict.args: A list. Additional arguments to be sent to predict.method. This must be provided as a list, where the name of each element matches a formal argument of predict.method. For example, if a logistic regression using glm was used to fit the model formula object and predictions on the scale of the response are desired:

predict.method = "predict.glm"
predict.args = list("type" = "response")

It is assumed that predict.method has formal arguments "object" and "newdata". If predict.method does not use these formal arguments, predict.args must explicitly indicate the variable names used for these inputs. For example, list("newx" = "newdata") if the new data is passed
to predict.method through the input argument "newx".

Details

Unless changed by the user in solver.args and/or predict.args, default settings are assumed for the specified regression and prediction methods.

Value

An object of class modelObjFormula or modelObjXY, which inherit directly from modelObj.

Examples

# ----------------------------------------------------#
# Create modeling object using a formula
# ----------------------------------------------------#
mo <- buildModelObj(model = Y ~ X1 + X2 + X3 + X4,
                    solver.method = 'lm',
                    predict.method = 'predict.lm',
                    predict.args = list(type = 'response'))

fit: Obtain parameter estimates

Description

Performs the specified regression analysis.

Usage

fit(object, data, response, ...)

## S4 method for signature 'modelObj,data.frame'
fit(object, data, response, ...)

Arguments

object: An object of class modelObj as returned by the buildModelObj function.
data: An object of class data.frame containing the variables in the model.
response: An object of class vector containing the response variable.
...: ignored

Details

If defined by the modeling function, the following methods can be applied to the value object returned: coef, plot, predict, print, residuals, show, and summary.

Value

An object of class modelObjFit, which contains the object returned by the modeling function and the method to be used to obtain predictions.

Examples

# generate data
X <- matrix(rnorm(1000, 0, 1), ncol = 4,
            dimnames = list(NULL, c("X1", "X2", "X3", "X4")))
Y <- X %*% c(0.1, 0.2, 0.3, 0.4) + rnorm(250)
X <- data.frame(X)

# create modeling object using a formula
mo <- buildModelObj(model = Y ~ X1 + X2 + X3 + X4, solver.method = 'lm')

# fit model
fit.obj <- fit(object = mo, data = X, response = Y)

coef(fit.obj)
head(residuals(fit.obj))
plot(fit.obj)
head(predict(fit.obj, X))
summary(fit.obj)

fitObject: Retrieve Regression Object

Description

Retrieves the value object returned by the regression method used to obtain parameter estimates.
Usage

fitObject(object, ...)

## S4 method for signature 'ANY'
fitObject(object, ...)

## S4 method for signature 'modelObjFit'
fitObject(object, ...)

Arguments

object: An object of class modelObjFit.
...: ignored.

Details

This function is useful for accessing methods that are defined by the regression method but are not directly accessible from the modelObjFit object. For example, for many regression methods, users can retrieve the fitted values by calling fitted.values(object). This method is not directly accessible from a modelObjFit. However, fitted.values() can be applied to the object returned by fitObject().

Value

The value returned by the regression method specified in the governing modelObj. The exact structure of the value will depend on the regression method. For example, if nls() is the regression method, a list is returned.

Examples

# Generate data
X <- matrix(rnorm(1000, 0, 1), ncol = 4,
            dimnames = list(NULL, c("X1", "X2", "X3", "X4")))
Y <- X %*% c(0.1, 0.2, 0.3, 0.4) + rnorm(250)
X <- data.frame(X)

# Create modeling object using a formula
mo <- buildModelObj(model = Y ~ X1 + X2 + X3 + X4, solver.method = 'lm')

# Fit model
fit.obj <- fit(object = mo, data = X, response = Y)

obj <- fitObject(fit.obj)
fobj <- fitted.values(obj)
head(fobj)

model: Retrieve model

Description

Retrieves the model from a modelObj.

Usage

model(object, ...)

## S4 method for signature 'ANY'
model(object, ...)

## S4 method for signature 'modelObj'
model(object, ...)

## S4 method for signature 'modelObjFit'
model(object, ...)

Arguments

object: A modelObj object
...: ignored

Value

The formula for the regression.

modelObj: Class modelObj

Description

A class for model objects.

Details

Objects should not be created directly. The utility function buildModelObj() should be used.

Slots

model: Object of class formula
solver: Object of class methodObjSolver; method to obtain parameter estimates.
predictor: Object of class methodObjPredict; method to obtain predicted values.

Methods

fit: Executes regression step.
model: Retrieve model.
solver: Retrieve regression method name.
solverArgs: Retrieve arguments to be sent to regression
method.
solverArgs(object) <-: Set arguments to be sent to regression method.
predictor: Retrieve prediction method name.
predictorArgs: Retrieve arguments to be sent to prediction method.
predictorArgs(object) <-: Set arguments to be sent to prediction method.

Examples

showClass("modelObj")

modelObjFit-class: Class modelObjFit

Description

A class for storing regression analysis results.

Usage

## S4 method for signature 'modelObjFit'
coef(object, ...)

## S4 method for signature 'modelObjFit'
plot(x, y, ...)

## S4 method for signature 'modelObjFit'
print(x)

## S4 method for signature 'modelObjFit'
residuals(object, ...)

## S4 method for signature 'modelObjFit'
show(object)

## S4 method for signature 'modelObjFit'
summary(object, ...)

Arguments

object: An object of class modelObjFit
...: passed to the underlying method defined for the regression value object.
x: An object of class modelObjFit
y: ignored

Methods (by generic)

coef: Extract model coefficients
plot: X-Y plotting
print: Print regression results
residuals: Extract residuals
show: Show regression results
summary: Show summary results

Slots

fitObj: Object returned by the regression analysis
modelObj: Object of class modelObj.

Methods

fitObject: Extracts regression step.
model: Retrieve model.
solver: Retrieve regression method name.
solverArgs: Retrieve arguments to be sent to regression method.
solverArgs(object) <-: Set arguments to be sent to regression method.
predictor: Retrieve prediction method name.
predictorArgs: Retrieve arguments to be sent to prediction method.
predictorArgs(object) <-: Set arguments to be sent to prediction method.

Examples

showClass("modelObjFit")

predict: Model Predictions

Description

Predictions from the results of a fit object.

Usage

predict(object, ...)

## S4 method for signature 'modelObjFit'
predict(object, newdata, ...)

Arguments

object: An object of class modelObjFit as returned by the fit() function.
newdata: An object of class data.frame containing the variables in the model.
...: ignored

Value

Model predictions, the form of which depends on the regression
analysis.

Examples

# generate data
X <- matrix(rnorm(1000, 0, 1), ncol = 4,
            dimnames = list(NULL, c("X1", "X2", "X3", "X4")))
Y <- X %*% c(0.1, 0.2, 0.3, 0.4) + rnorm(250)
X <- data.frame(X)

# create modeling object using a formula
mo <- buildModelObj(model = Y ~ X1 + X2 + X3 + X4, solver.method = 'lm')

# fit model
fit.obj <- fit(object = mo, data = X, response = Y)

predict(fit.obj)
predict(fit.obj, newdata = X[1:10, ])

predictor: Retrieve Prediction Method

Description

Retrieves the method for prediction analysis.

Usage

predictor(object, ...)

## S4 method for signature 'modelObj'
predictor(object, ...)

## S4 method for signature 'modelObjFit'
predictor(object, ...)

Arguments

object: A modelObj object
...: ignored

Value

An object of class character or function

predictorArgs: Retrieve Predictor Arguments

Description

Retrieves the arguments that are to be passed to the prediction method when called.

Usage

predictorArgs(object, ...)

## S4 method for signature 'modelObj'
predictorArgs(object, ...)

predictorArgs(object) <- value

## S4 replacement method for signature 'ANY,ANY'
predictorArgs(object) <- value

## S4 replacement method for signature 'modelObj,list'
predictorArgs(object) <- value

## S4 method for signature 'modelObjFit'
predictorArgs(object, ...)

Arguments

object: A modelObj object
...: ignored
value: List to be stored in args

Value

A list

solver: Retrieve Solver Method

Description

Retrieves the method for regression analysis.

Usage

solver(object, ...)

## S4 method for signature 'ANY'
solver(object, ...)

## S4 method for signature 'modelObj'
solver(object, ...)

## S4 method for signature 'modelObjFit'
solver(object, ...)

Arguments

object: A modelObj object
...: ignored

Value

An object of class character or function

solverArgs: Retrieve Solver Arguments

Description

Retrieves the arguments that are to be passed to the regression method when called.
UsagesolverArgs(object,...)##S4method for signature ANYsolverArgs(object,...)##S4method for signature modelObjsolverArgs(object,...)solverArgs(object)<-value##S4replacement method for signature ANY,ANYsolverArgs(object)<-value##S4replacement method for signature modelObj,listsolverArgs(object)<-value##S4method for signature modelObjFitsolverArgs(object,...)Argumentsobject A modelObj object...ignoredvalue List to be stored in args ValueA listIndexbuildModelObj,2coef,modelObjFit-method(modelObjFit-class),8fit,4fit,modelObj,data.frame-method(fit),4 fitObject,5fitObject,ANY-method(fitObject),5 fitObject,modelObjFit-method(fitObject),5model,6model,ANY-method(model),6model,modelObj-method(model),6 model,modelObjFit-method(model),6 modelObj,3,7modelObjFit-class,8 modelObjFormula-class(modelObj),7 modelObjXY-class(modelObj),7plot,modelObjFit-method(modelObjFit-class),8 predict,9predict,modelObjFit-method(predict),9 predictor,10predictor,modelObj-method(predictor), 10predictor,modelObjFit-method(predictor),10 predictorArgs,11predictorArgs,modelObj-method(predictorArgs),11 predictorArgs,modelObjFit-method(predictorArgs),11 predictorArgs<-(predictorArgs),11 predictorArgs<-,ANY,ANY-method(predictorArgs),11 predictorArgs<-,modelObj,list-method (predictorArgs),11print,modelObjFit-method(modelObjFit-class),8residuals,modelObjFit-method(modelObjFit-class),8show,modelObjFit-method(modelObjFit-class),8solver,11solver,ANY-method(solver),11solver,modelObj-method(solver),11solver,modelObjFit-method(solver),11solverArgs,12solverArgs,ANY-method(solverArgs),12solverArgs,modelObj-method(solverArgs),12solverArgs,modelObjFit-method(solverArgs),12solverArgs<-(solverArgs),12solverArgs<-,ANY,ANY-method(solverArgs),12solverArgs<-,modelObj,list-method(solverArgs),12summary,modelObjFit-method(modelObjFit-class),814。
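The modelObj design — bundling a model specification with a user-chosen solver (regression method) and predictor, each carrying its own argument list — is not tied to R. The following is a minimal Python sketch of the same pattern; all class and function names here are illustrative and are not part of the package:

```python
# Minimal sketch of the modelObj pattern: a model object pairs a solver
# (regression method) with a predictor, so fitting code stays generic.

def ols_solver(x, y):
    """Closed-form simple linear regression; returns (intercept, slope)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return (my - b * mx, b)

def ols_predictor(coefs, newdata):
    """Apply fitted coefficients to new predictor values."""
    a, b = coefs
    return [a + b * xi for xi in newdata]

class ModelObj:
    def __init__(self, solver, predictor):
        self.solver = solver        # regression method
        self.predictor = predictor  # prediction method

    def fit(self, x, y):
        return ModelObjFit(self, self.solver(x, y))

class ModelObjFit:
    def __init__(self, model_obj, fit_obj):
        self.model_obj = model_obj
        self.fit_obj = fit_obj      # object returned by the solver

    def predict(self, newdata):
        return self.model_obj.predictor(self.fit_obj, newdata)

mo = ModelObj(ols_solver, ols_predictor)
fit = mo.fit([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])  # y = 1 + 2x
print(fit.predict([4.0]))  # -> [9.0]
```

Swapping in a different solver/predictor pair changes the regression without touching the fitting or prediction call sites, which is the point of the abstraction.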

Mechanical Engineering Translation: Mechanics and Hydraulics of an Excavator


Multi-Domain Simulation: Mechanics and Hydraulics of an Excavator

Abstract

It is demonstrated how to model and simulate an excavator with Modelica and Dymola by using Modelica libraries for multi-body and for hydraulic systems. The hydraulic system is controlled by a "load sensing" controller. Usually, models containing 3-dimensional mechanical and hydraulic components are difficult to simulate. Using the excavator as an example, it is shown that Modelica is well suited for such kinds of system simulations.

1. Introduction

The design of a new product requires a number of decisions in the initial phase that severely affect the success of the finished machine. Today, digital simulation is therefore used in early stages to look at different concepts. The premise of this paper is that a new excavator is to be designed and several candidate hydraulic control systems have to be evaluated.

Systems that consist of 3-dimensional mechanical and of hydraulic components – like excavators – are difficult to simulate. Usually, two different simulation environments have to be coupled. This is often inconvenient, leads to unnecessary numerical problems and has fragile interfaces. In this article it is demonstrated, using the model of an excavator, that Modelica is well suited for these types of systems.

The 3-dimensional components of the excavator are modeled with the new, free Modelica MultiBody library. In particular, this allows an analytic solution of the kinematic loop at the bucket to be used and the masses of the hydraulic cylinders, i.e., the "force elements", to be taken directly into account. The hydraulic part is modeled in a detailed way, utilizing pump, valves and cylinders from HyLib, a hydraulics library for Modelica. For the control part a generic "load sensing" control system is used, modeled by a set of simple equations. This approach gives the required results and keeps the time needed for analyzing the problem at a reasonable level.

2. Modeling Choices

There are several approaches to simulating a system. Depending on the task it may be necessary to build a very precise model, containing every detail of the system and needing a lot of information, e.g., model parameters. This kind of model is expensive to build but very useful if parameters of a well-defined system have to be modified. A typical example is the optimization of parameters of a counterbalance valve in an excavator (Kraft 1996).

The other kind of model is needed for a first study of a system. In this case some properties of the pump, cylinders and loads are specified. What is required is information about the performance of that system, e.g., the speed of the pistons or the necessary input power at the pump shaft, in order to decide whether this design can in principle be used for the task at hand. This model therefore has to be "cheap", i.e., it must be possible to build it in a short time without detailed knowledge of particular components.

The authors intended to build a model of the second type, run it and obtain first results with a minimum amount of time spent. To achieve this goal, the modeling language Modelica (Modelica 2002), the Modelica simulation environment Dymola (Dymola 2003), the new Modelica library for 3-dimensional mechanical systems "MultiBody" (Otter et al. 2003) and the Modelica library of hydraulic components HyLib (Beater 2000) were used. The model consists of the 3-dimensional mechanical construction of the excavator, a detailed description of the power hydraulics and a generic "load sensing" controller. This model will be available as a demo in the next version of HyLib.

3. Construction of Excavators

In Figure 1 a schematic drawing of a typical excavator under consideration is shown. It consists of a chain track and the hydraulic propel drive, which is used to manoeuvre the machine but usually not during a work cycle. On top of that is a carriage where the operator sits. It can rotate around a vertical axis with respect to the chain track. It also holds the Diesel engine, the hydraulic pumps and the control system. Furthermore, there is a boom, an arm and at the end a bucket which is attached via a planar kinematic loop to the arm. Boom, arm and bucket can be rotated by the appropriate cylinders.

Figure 2 shows that the required pressures in the cylinders depend on the position. For the "stretched" situation the pressure in the boom cylinder is 60 % higher than in the retracted position. Not only the position but also the movements have to be taken into account. Figure 3 shows a situation where the arm hangs down. If the carriage does not rotate, a pulling force is required in the cylinder. When rotating – excavators can typically rotate with up to 12 revolutions per minute – the force in the arm cylinder changes its sign and a pushing force is now needed. This change is very significant because the "active" chamber of the cylinder switches, and that must be taken into account by the control system. Both figures demonstrate that a simulation model must take into account the couplings between the four degrees of freedom of this excavator. A simpler model that uses a constant load for each cylinder and the swivel drive leads to erroneous results.

4. Load Sensing System

Excavators typically have one Diesel engine, two hydraulic motors and three cylinders. Different hydraulic circuits exist to provide the consumers with the required hydraulic energy. A typical design is a load sensing circuit, which is energy efficient and user friendly. The idea is to have a flow rate control system for the pump such that it delivers exactly the needed flow rate. As a sensor, the pressure drop across an orifice is used; the reference value is the resistance of the orifice. A schematic drawing is shown in Figure 4; a good introduction to the topic is given in (anon. 1992).

The pump control valve maintains a pressure at the pump port that is typically 15 bar higher than the pressure in the LS line (= load sensing line). If the directional valve is closed, the pump therefore has a stand-by pressure of 15 bar. If it is open, the pump delivers a flow rate that leads to a pressure drop of 15 bar across that directional valve. Note: the directional valve is not used to throttle the pump flow but acts as a flow meter (pressure drop that is fed back) and as a reference (resistance). The circuit is energy efficient because the pump delivers only the needed flow rate; the throttling losses are small compared to other circuits.

If more than one cylinder is used, the circuit becomes more complicated, see Figure 5. E.g., if the boom requires a pressure of 100 bar and the bucket a pressure of 300 bar, the pump pressure must be above 300 bar, which would cause an unwanted movement of the boom cylinder. Therefore compensators are used that throttle the oil flow and thus achieve a pressure drop of 15 bar across the particular directional valve. These compensators can be installed upstream or downstream of the directional valves. An additional valve reduces the nominal pressure differential if the maximum pump flow rate or the maximum pressure is reached (see e.g. Nikolaus 1994).

5. Model of Mechanical Part

In Figure 6, a Modelica schematic of the mechanical part is shown. The chain track is not modeled, i.e., it is assumed that the chain track does not move. Components "rev1", ..., "rev4" are the 4 revolute joints that move the parts relative to each other. The icons with the long black line are "virtual" rods that are used to mark specific points on a part, especially the mounting points of the hydraulic cylinders.
The light blue spheres (b2, b3, b4, b5) are bodies that have mass and an inertia tensor and are used to model the corresponding properties of the excavator parts.

The three components "cyl1f", "cyl2f" and "cyl3f" are line force components that describe a force interaction along a line between two attachment points. The small green squares at these components represent 1-dimensional translational connectors from the Modelica.Mechanics.Translational library. They are used to define the 1-dimensional force law acting between the two attachment points. Here, the hydraulic cylinders described in the next section are directly attached. The two small spheres in the icons of the "cyl1f", "cyl2f", "cyl3f" components indicate that, optionally, two point masses are taken into account that are attached at defined distances from the attachment points along the connecting line. This allows the essential mass properties (mass and center of mass) of the hydraulic cylinders to be modeled easily with only a very small computational overhead.

The jointRRR component (see right part of Figure 6) is an assembly element consisting of 3 revolute joints that together form a planar loop when connected to the arm. A picture of this part of an excavator, a zoom into the corresponding Modelica schematic and the animation view are shown in Figure 7. When moving revolute joint "rev4" (= the large red cylinder in the lower part of Figure 7; the small red cylinders characterize the 3 revolute joints of the jointRRR assembly component), the position and orientation of the attachment points of the "left" and "right" revolute joints of the jointRRR component are known. There is a non-linear algebraic loop in the jointRRR component to compute the angles of its three revolute joints given the movement of these attachment points. This non-linear system of equations is solved analytically in the jointRRR object, i.e., in a robust and efficient way.
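The closed-form solution of such a planar loop can be illustrated with a generic two-link closure based on the law of cosines. This is only a sketch of the idea, not the actual jointRRR implementation, and all names are illustrative:

```python
import math

def planar_rr_closure(ax, ay, bx, by, l1, l2):
    """Analytically close a planar two-link chain from point A to point B.

    Returns (q1, q2): absolute angle of link 1 and relative angle of
    joint 2, for one of the two closure branches. Raises ValueError if
    B is out of reach.
    """
    dx, dy = bx - ax, by - ay
    d = math.hypot(dx, dy)
    if d > l1 + l2 or d < abs(l1 - l2):
        raise ValueError("attachment points out of reach")
    # relative joint angle from the law of cosines
    cos_q2 = (d * d - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    q2 = math.acos(max(-1.0, min(1.0, cos_q2)))
    # absolute angle of link 1: direction to B minus interior offset
    q1 = math.atan2(dy, dx) - math.atan2(l2 * math.sin(q2),
                                         l1 + l2 * math.cos(q2))
    return q1, q2

# forward check: reassemble the chain and verify it reaches B = (1, 1)
q1, q2 = planar_rr_closure(0.0, 0.0, 1.0, 1.0, 1.0, 1.0)
ex = math.cos(q1) + math.cos(q1 + q2)
ey = math.sin(q1) + math.sin(q1 + q2)
print(round(ex, 6), round(ey, 6))  # -> 1.0 1.0
```

An analytic closure like this avoids the iterative solver a general-purpose nonlinear equation system would need, which is why it is both robust and fast.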
For details, see the description of the MultiBody library cited above.

In a first step, the mechanical part of the excavator is simulated without the hydraulic system to test this part separately. This is done by attaching translational springs with appropriate spring constants in place of the hydraulic cylinders. Once the animation looks fine and the forces and torques in the joints have the expected size, the springs are replaced by the hydraulic system described in the next sections.

All components of the new MultiBody library have "built-in" animation definitions, i.e., animation properties are mostly deduced by default from the given definition of the multi-body system. For example, a rod connecting two revolute joints is by default visualized as a cylinder whose diameter d is a fraction of the cylinder length L (d = L/40), which is in turn given by the distance between the two revolute joints. A revolute joint is by default visualized by a red cylinder directed along the axis of rotation of the joint. The default animation (with only a few minor adaptations) of the excavator is shown in Figure 8. The light blue spheres characterize the centers of mass of bodies. The line force elements that visualize the hydraulic cylinders are rendered as two cylinders (yellow and grey) that move within each other. As can be seen, the default animation is useful to get, without extra work on the user's side, a rough picture of the model that allows the most important properties to be checked visually, e.g., whether the centers of mass or attachment points are at the expected places.

For every component the default animation can be switched off via a Boolean flag. Removing appropriate default animations, such as the "center-of-mass spheres", and adding some components that carry purely visual information (all visXXX components in the schematic of Figure 6) quickly gives a nicer animation, as demonstrated in Figure 9.
Also CAD data could be utilized for the animation, but this was not available for the examination of this excavator.

6. The Hydraulics Library HyLib

The (commercial) Modelica library HyLib (Beater 2000, HyLib 2003) is used to model the pump, metering orifice, load compensator and cylinder of the hydraulic circuit. All these components are standard components for hydraulic circuits and can be obtained from many manufacturers. Models of all of them are contained in HyLib. These mathematical models include both standard textbook models (e.g. Dransfield 1981, Merrit 1967, Viersma 1980) and the most advanced published models that take the behavior of real components into account (Schulz 1979, Will 1968). An example is the general pump model, where the output flow is reduced if the pressure at the inlet port falls below atmospheric pressure. Numerical properties were also considered when selecting a model (Beater 1999). One point worth mentioning is the fact that all models can be viewed at source code level and are documented by approx. 100 references from easily available literature.

After opening the library, the main window is displayed (Figure 10). A double click on the "pumps" icon opens the selection of all components that are needed to originate or terminate an oil flow (Figure 11). For the problem at hand, a hydraulic flow source with internal leakage and externally commanded flow rate is used. Similarly, the needed models for the valves, cylinders and other components are chosen.

All components are modeled hierarchically. Starting with the definition of a connector – a port where the oil enters or leaves the component – a template for components with two ports is written. This can be inherited for ideal models, e.g., a laminar resistance or a pressure relief valve. While it usually makes sense to use textual input for these basic models, most of the main library models were programmed graphically, i.e., composed from basic library models using the graphical user interface.
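The standard textbook models mentioned here can be illustrated with the turbulent orifice equation Q = Cd * A * sqrt(2 * dp / rho), which also underlies the load sensing idea of Section 4: with the pump holding the pressure drop across the metering orifice at a fixed 15 bar, the delivered flow rate depends only on the opened orifice area. A generic numeric sketch (the density and discharge-coefficient values are typical but illustrative; this is not HyLib code):

```python
import math

RHO = 870.0   # oil density in kg/m^3 (typical mineral oil, illustrative)
CD = 0.6      # discharge coefficient (typical sharp-edged orifice)

def orifice_flow(area_m2, dp_pa):
    """Turbulent orifice flow rate in m^3/s: Q = Cd * A * sqrt(2*dp/rho)."""
    return CD * area_m2 * math.sqrt(2.0 * dp_pa / RHO)

# Load sensing keeps the pressure drop across the metering orifice
# constant (here 15 bar), so flow scales linearly with the opened area:
# the directional valve acts as flow meter and reference, not throttle.
DP_LS = 15e5  # 15 bar in Pa
q_half = orifice_flow(0.5e-4, DP_LS)  # half-open orifice
q_full = orifice_flow(1.0e-4, DP_LS)  # fully open orifice
print(round(q_full / q_half, 6))  # -> 2.0: doubling the area doubles the flow
```

Without the load sensing controller, dp would vary with the load pressure and the same valve opening would deliver a load-dependent flow, which is exactly the coupling the LS circuit removes.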
Figure 12 gives an example of graphical programming. All mentioned components were chosen from the library and then graphically connected.

7. Library Components in the Hydraulics Circuit

The composition diagram in Figure 12 shows the graphically composed hydraulics part of the excavator model. The sub-models are chosen from the appropriate libraries, connected and parameterized. Note that the cylinders and the motor from HyLib can simply be connected to the components of the MultiBody library that are also shown. The input signals, i.e., the reference signals of the driver of the excavator, are given by tables specifying the diameter of the metering orifice, i.e., the reference value for the flow rate. Of the mechanical part of the excavator, only the components that are directly coupled with hydraulic elements are shown in Figure 12, such as the line force elements to which the hydraulic cylinders are attached.

8. Model of LS Control

For this study the following approach is chosen: model the mechanics of the excavator, the cylinders and, to a certain extent, the pump and metering valves in detail, because only the parameters of these components will be changed; the general structure is fixed. This means that the diameter of the bucket cylinder may be changed, but there will be exactly one cylinder working as shown in Figure 1. This is different for the rest of the hydraulic system. In this paper a load sensing system, or LS system for short, using one pump is shown, but there are other concepts that have to be evaluated during an initial design phase, for instance the use of two pumps, or a separate pump for the swing.

The hydraulic control system can be set up using meshed control loops.
As there is (almost) no way to implement phase-shifting behavior in purely hydraulic control systems, the following generic LS system uses only proportional controllers.

A detailed model based on actual components would be much bigger and is usually not available at the beginning of an initial design phase. It could be built with the components from the hydraulics library but would require a considerable amount of time that is usually not available at the start of a project.

In Tables 1 and 2, the implementation of the LS control in the form of equations is shown. Usually, it is recommended for Modelica models either to use graphical model decomposition or to define the model by equations, but not to mix both description forms on the same model level.

For the LS system this is different, because it has 17 input signals and 5 output signals. One might build one block with 17 inputs and 5 outputs and connect them to the hydraulic circuit. However, in this case it seems more understandable to provide the equations directly on the same level as the hydraulic circuit above and to access the input and output signals directly. For example, "metOri1.port_A.p" used in Table 2 is the measured pressure at port_A of the metering orifice metOri1. A calculated value of the LS controller, e.g., the pump flow rate "pump.inPort.signal[1] = ...", is the signal at the filled blue rectangle of the "pump" component (see Figure 12).

The strong point of Modelica is that a seamless integration of the 3-dimensional mechanical library, the hydraulics library and the non-standard – and therefore in no library available – model of the control system is easily done. The library components can be graphically connected in the object diagram, and the text-based model can access all needed variables.

9. Some Simulation Results

The complete model was built using the Modelica modeling and simulation environment Dymola (Dymola 2003), translated, compiled and simulated for 5 s.
The simulation time was 17 s using the DASSL integrator with a relative tolerance of 10^-6 on a 1.8 GHz notebook, i.e., about 3.4 times slower than real time. The animation feature in Dymola makes it possible to view the movements in an almost realistic way, which helps to explain the results also to non-experts, see Figure 9.

Figure 13 gives the reference signals for the three cylinders and the swing, the pump flow rate and the pump pressure. From t = 1.1 s until 1.7 s and from t = 3.6 s until 4.0 s the pump delivers the maximum flow rate. From t = 3.1 s until 3.6 s the maximum allowed pressure is reached. Figure 14 gives the positions of the boom and bucket cylinders and the swing angle. It can be seen that there is no significant change in the piston movement when another movement starts or ends. The control system reduces the couplings between the consumers, which are very severe for simple throttling control.

Figure 15 shows the operation of the bucket cylinder. The top figure shows the reference trajectory, i.e. the opening of the directional valve. The middle figure shows the conductance of the compensator. With the exception of two spikes it is open from t = 0 s until t = 1 s. This means that in that interval the pump pressure is commanded by the bucket cylinder. After t = 1 s the boom cylinder requires a considerably higher pressure and the bucket compensator therefore increases the resistance (smaller conductance). The bottom figure shows that the flow rate control works well. Even though there is a severe disturbance (high pump pressure after t = 1 s due to the boom), the commanded flow rate is fed with only a small error to the bucket cylinder.

10. Conclusion

For the evaluation of different hydraulic circuits, a dynamic model of an excavator was built. It consists of a detailed model of the 3-dimensional mechanics of the carriage, including boom, arm and bucket, and the standard hydraulic components like pump or cylinder.
The control system was not modeled on a component basis; instead, it was described by a set of nonlinear equations.

The system was modeled using the Modelica MultiBody library, the hydraulics library HyLib and a set of application-specific equations. With the tool Dymola the system could be built and tested in a short time, and it was possible to calculate the required trajectories for the evaluation of the control system. The animation feature in Dymola makes it possible to view the movements in an almost realistic way, which helps to explain the results also to non-experts.

SIMIT Robotic Simulation Documentation


Robotic Simulation with PLCSIM Advanced – SIMIT – NX-MCD

Legal information

© Siemens 2021 All rights reserved

Use of application examples

Application examples illustrate the solution of automation tasks through an interaction of several components in the form of text, graphics and/or software modules. The application examples are a free service by Siemens AG and/or a subsidiary of Siemens AG ("Siemens"). They are non-binding and make no claim to completeness or functionality regarding configuration and equipment. The application examples merely offer help with typical tasks; they do not constitute customer-specific solutions. You yourself are responsible for the proper and safe operation of the products in accordance with applicable regulations and must also check the function of the respective application example and customize it for your system.

Siemens grants you the non-exclusive, non-sublicensable and non-transferable right to have the application examples used by technically trained personnel. Any change to the application examples is your responsibility. Sharing the application examples with third parties or copying the application examples or excerpts thereof is permitted only in combination with your own products. The application examples are not required to undergo the customary tests and quality inspections of a chargeable product; they may have functional and performance defects as well as errors. It is your responsibility to use them in such a manner that any malfunctions that may occur do not result in property damage or injury to persons.

Disclaimer of liability

Siemens shall not assume any liability, for any legal reason whatsoever, including, without limitation, liability for the usability, availability, completeness and freedom from defects of the application examples as well as for related information, configuration and performance data and any damage caused thereby.
This shall not apply in cases of mandatory liability, for example under the German Product Liability Act, or in cases of intent, gross negligence, or culpable loss of life, bodily injury or damage to health, non-compliance with a guarantee, fraudulent non-disclosure of a defect, or culpable breach of material contractual obligations. Claims for damages arising from a breach of material contractual obligations shall however be limited to the foreseeable damage typical of the type of agreement, unless liability arises from intent or gross negligence or is based on loss of life, bodily injury or damage to health. The foregoing provisions do not imply any change in the burden of proof to your detriment. You shall indemnify Siemens against existing or future claims of third parties in this connection except where Siemens is mandatorily liable.

By using the application examples you acknowledge that Siemens cannot be held liable for any damage beyond the liability provisions described.

Other information

Siemens reserves the right to make changes to the application examples at any time without notice. In case of discrepancies between the suggestions in the application examples and other Siemens publications such as catalogs, the content of the other documentation shall have precedence. The Siemens terms of use (https:// ) shall also apply.

Security information

Siemens provides products and solutions with Industrial Security functions that support the secure operation of plants, systems, machines and networks.

In order to protect plants, systems, machines and networks against cyber threats, it is necessary to implement – and continuously maintain – a holistic, state-of-the-art industrial security concept. Siemens' products and solutions constitute one element of such a concept. Customers are responsible for preventing unauthorized access to their plants, systems, machines and networks.
Such systems, machines and components should only be connected to an enterprise network or the Internet if and to the extent such a connection is necessary and only when appropriate security measures (e.g. firewalls and/or network segmentation) are in place. For additional information on industrial security measures that may be implemented, please visit https:///industrialsecurity.

Siemens' products and solutions undergo continuous development to make them more secure. Siemens strongly recommends that product updates are applied as soon as they are available and that the latest product versions are used. Use of product versions that are no longer supported, and failure to apply the latest updates, may increase customers' exposure to cyber threats. To stay informed about product updates, subscribe to the Siemens Industrial Security RSS Feed at: https:///industrialsecurity.

Table of contents

Legal information
1 Introduction
1.1 Overview
1.2 Request for Application Example
1.3 Mode of operation
1.4 Components used
2 Example: TIA Portal, PLC and HMI
3 Vendor specific examples
3.1 Setup for Vendor specific examples
3.2 Install a new coupling inside SIMIT
3.3 Common description for robot couplings
4 Vendor Example: KUKA Connection
5 Vendor Example: ABB Connection
6 Vendor Example: Stäubli Connection
7 Vendor Example: Fanuc Connection
8 Vendor Example: UR Connection
9 Vendor Example: YASKAWA Connection
10 MCD Standalone
11 Appendix
11.1 Service and support
11.2 Industry Mall
11.3 Links and literature
11.4 Change documentation

1 Introduction

1.1 Overview

For the simulation of machines it is necessary to simulate all machine elements. This application example is focused on the simulation of robots (6-axis kinematics). For vendor-unspecific behavior we will use the "Inverse Kinematic functionalities" inside MCD (setup: PLCSIM Adv. – SIMIT – NX-MCD). For vendor-specific behavior we will connect additional software to our established simulation setup via couplings inside SIMIT towards robotic simulation software (called VRC, Virtual Robot Controller, on the vendor side; setup: PLCSIM Adv.
– SIMIT – VRC – NX-MCD).

Information: Working with robotic simulation during the engineering workflow has many different variants, depending on your actual engineering task.

Figure 1-1: Engineering Workflow for Robotic Simulation

1.2 Request for Application Example

All couplings and demo projects can be received by request. After the request, our support team will get in direct contact with you for individual support.

1. Go to Siemens Industrial Online Support and place a "Service Request" (weblink: https:///).
2. On the main page select "Service Request".
3. Log in with your SIOS (Siemens Industrial Online Support) credentials.
4. In the product search enter "SIMIT", and select "SIMIT-PLCSIM Advanced – MCD" under Virtual Commissioning.
5. Fill out the description and add some specific details (a short description of your use case and which robot vendor).
6. Press next to send and confirm your request; a Siemens expert will then get in contact with you.

1.3 Mode of operation

Inside this application example you will find a functional demo showing a test setup for robotic simulation. The example is split into two versions: a generic (vendor-unspecific) example and several vendor-specific ones.
Both examples are based on the same machine and the same TIA Portal project (with different ".gsdml" files).

Automation task: The robot takes a tray from the left conveyor and inserts it into the screw assembly machine; afterwards the robot takes a PLC element from the right conveyor and puts it on the tray.

NOTE: To simplify this example, the screw assembly machine is not automated with PLC control; it is completely controlled by sequences inside NX-MCD.

Figure 1-2: Machine for general example
Figure 1-3: Top-View and Station names

1.4 Components used

This application example has been created with the following hardware and software components:

Table 1-1: Siemens Products
Table 1-2: Additional Software

2 Example: TIA Portal, PLC and HMI

TIA Project

The machine sequence is controlled by the PLC and can be operated via the HMI. Inside the TIA Portal project you find a PLC SIMATIC S7-1516 and a TP900 Comfort Panel.

NOTE: The PLC will be simulated with PLCSIM Advanced; with PLCSIM Advanced you can simulate all Siemens PLCs from the SIMATIC S7-1500 PLC family.

Figure 2-1: TIA Portal Hardware Setup
Figure 2-2: Signals exchanged between PLC and Machine (including Robot)
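The work cycle described under the automation task (fetch a tray, pass it to the screw station, then place the product) can be sketched as a small state machine. This is an illustrative stand-in for the PLC sequence logic, not code from the example project; all state and flag names are invented:

```python
# Illustrative state machine mirroring the described work cycle:
# tray from left conveyor -> screw assembly station -> product on tray.
from enum import Enum, auto

class Step(Enum):
    WAIT_TRAY = auto()
    MOVE_TRAY_TO_STATION = auto()
    WAIT_PRODUCT = auto()
    PLACE_PRODUCT_ON_TRAY = auto()
    DONE = auto()

def next_step(step, tray_ready, product_ready):
    """Advance the sequence based on sensor-like status flags."""
    if step is Step.WAIT_TRAY and tray_ready:
        return Step.MOVE_TRAY_TO_STATION
    if step is Step.MOVE_TRAY_TO_STATION:
        return Step.WAIT_PRODUCT
    if step is Step.WAIT_PRODUCT and product_ready:
        return Step.PLACE_PRODUCT_ON_TRAY
    if step is Step.PLACE_PRODUCT_ON_TRAY:
        return Step.DONE
    return step  # hold the current step until its condition is met

# drive the sequence with a few (tray_ready, product_ready) scan cycles
step = Step.WAIT_TRAY
trace = [step]
for tray, prod in [(True, False), (False, False), (False, True), (False, False)]:
    step = next_step(step, tray, prod)
    trace.append(step)
print(trace[-1])  # -> Step.DONE
```

A real PLC sequence block evaluates such conditions once per scan cycle in much the same way, which is why step chains translate naturally to this structure.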
The HMI is separated into manual and automatic areas.

Figure 2-4: HMI control screen for manual mode
Figure 2-5: HMI control screen for automatic mode

3 Vendor-specific examples

3.1 Setup for the vendor-specific examples

The example is expanded with a virtual robot controller (VRC, software from the robot vendor) for:
- vendor-specific behavior,
- using the same code inside the VRC and the real robot (offline/online programming),
- access to all kinematics supported by the VRC.

Figure 3-1: Simulation setup
Figure 3-2: Signal exchange between components

NOTE: Both examples use the same TIA project and the same machine.

3.2 Install a new coupling inside SIMIT

Inside SIMIT SP it is possible to expand the couplings by installing additional ones. This method is called an "external coupling". All external couplings are built using SIMIT API functionality. Installing a new external coupling is easily done by copying a new folder into the couplings folder inside the SIMIT installation directory.

3.3 Common description for the robot couplings

All couplings are aligned with the robot vendors and use the natively available functionality of the robot vendor software. The couplings and the corresponding communication technologies therefore differ from vendor to vendor.

NOTE: All robot couplings provide an import and export functionality inside SIMIT.
This functionality can be used for backup and workflow purposes.

4 Vendor Example: KUKA Connection

Interface integration

The connection to KUKA OfficeLite is realized by a customized interface inside SIMIT using the y200 interface.

Figure 4-1: Simulation setup for connecting KUKA OfficeLite
Figure 4-2: Mechanism for data transfer between OfficeLite and SIMIT

5 Vendor Example: ABB Connection

Interface integration

The connection to ABB RobotStudio is realized with the shared memory (SHM) coupling mechanism. Connecting internal signals via SHM is a built-in functionality of ABB RobotStudio. Thanks to the shared-memory technology this data exchange is very fast, but SIMIT and ABB RobotStudio must run on the same PC.

NOTE: An information video can be found on YouTube on the ABB video channel: https:///watch?v=Lh07B86eETo

Figure 5-1: Setup for simulating together with ABB RobotStudio
Figure 5-2: PLCSIM Advanced together with SIMIT, NX-MCD and ABB RobotStudio

6 Vendor Example: Stäubli Connection

Interface integration

The connection between SIMIT and the Stäubli Robotics Suite is realized with an API (SOAP) connection.

Figure 6-1: Setup for simulation with the Stäubli Robotics Suite
Figure 6-2: SIMIT together with the Stäubli Robotics Suite

7 Vendor Example: FANUC Connection

Interface integration

The connection between SIMIT and FANUC ROBOGUIDE is realized through the FANUC Robot Interface (FRRJIF.DLL).

NOTE: To use ROBOGUIDE together with SIMIT you need to install the FANUC Robot Interface (Runtime); this example was tested with RobotInterface_Ver.3.0.0_(Runtime).

Figure 7-1: Setup for simulation with ROBOGUIDE
Figure 7-2: Simulation setup with SIMIT, NX-MCD and ROBOGUIDE

8 Vendor Example: UR Connection

Interface integration

The connection between SIMIT and URSim is realized through the RTDE coupling.
RTDE is a real-time interface to Universal Robots controllers (virtual or real) using a TCP/IP connection on port 30004. The live robot, however, is controlled via PROFINET; SIMIT uses the same registers for reading and writing to the virtual robot controller through the RTDE interface.

Figure 8-1: Setup for simulation with URSim
Figure 8-2: Simulation setup with SIMIT, NX-MCD and URSim

9 Vendor Example: YASKAWA Connection

Interface integration

The connection between SIMIT and Yaskawa MotoSim EG-VRC is realized through a plug-in.

Figure 9-1: Setup for simulation with MotoSim EG-VRC

The general structure is depicted in Figure 9-1. Figure 9-2 shows a screenshot with the running components (without PLCSIM Advanced).

Figure 9-2: Overview of the simulation setup with SIMIT, NX-MCD and MotoSim with the SIMIT coupling plug-in

10 MCD Standalone

This example is created for usage without SIMIT or PLCSIM Advanced; all movements and logic are implemented in NX-MCD.
The robot movements are generated using the "Inverse Kinematics" element inside NX-MCD and are not controlled by a vendor-specific robot controller.

Figure 10-1: MCD standalone example

11 Appendix

11.1 Service and support

Industry Online Support

Do you have any questions or need assistance? Siemens Industry Online Support offers round-the-clock access to our entire service and support know-how and portfolio. Industry Online Support is the central address for information about our products, solutions and services. Product information, manuals, downloads, FAQs, application examples and videos – all information is accessible with just a few mouse clicks.

Technical Support

The Technical Support of Siemens Industry provides fast and competent support for all technical queries, with numerous tailor-made offers ranging from basic support to individual support contracts. Please send queries to Technical Support via the web form: /cs/my/src

SITRAIN – Digital Industry Academy

We support you with our globally available training courses for industry, with practical experience, innovative learning methods and a concept tailored to the customer's specific needs. For more information on our offered trainings and courses, as well as their locations and dates, refer to our web page: /sitrain

Service offer

Our range of services includes the following:
- Plant data services
- Spare parts services
- Repair services
- On-site and maintenance services
- Retrofitting and modernization services
- Service programs and contracts

You can find detailed information on our range of services in the service catalog web page: /cs/sc

Industry Online Support app

You will receive optimum support wherever you are with the "Siemens Industry Online Support" app.
The app is available for iOS and Android: /cs/ww/en/sc/2067

11.2 Industry Mall

The Siemens Industry Mall is the platform on which the entire Siemens Industry product portfolio is accessible. From the selection of products to the order and the delivery tracking, the Industry Mall enables the complete purchasing process – directly and independently of time and location.

11.3 Links and literature

Table 11-1

11.4 Change documentation

Table 11-2


Package 'nlmeU'

October 13, 2022

Version: 0.70-9
Date: 2022-05-02
Author: Andrzej Galecki <******************>, Tomasz Burzykowski <******************************>
Maintainer: Andrzej Galecki <******************>
Title: Datasets and Utility Functions Enhancing Functionality of 'nlme' Package
Description: Datasets and utility functions enhancing the functionality of the nlme package. Datasets, functions and scripts are described in the book titled 'Linear Mixed-Effects Models: A Step-by-Step Approach' by Galecki and Burzykowski (2013). Package is under development.
Depends: R (>= 2.14.2)
Imports: nlme
Suggests: reshape, WWGbook, lattice, ellipse, roxygen2, testthat
License: GPL (>= 2)
URL: /~agalecki/
LazyData: yes
Collate: 'logLik1.R' 'nlmeU-package.R' 'Pwr.R' 'simulateY.R' 'varia.R'
NeedsCompilation: no
Repository: CRAN
Date/Publication: 2022-05-02 15:40:02 UTC

R topics documented: nlmeU-package, armd, armd.wide, armd0, fcat, logLik1, logLik1.lme, missPat, prt, prt.fiber, prt.subjects, Pwr, Pwr.lme, runScript, sigma, SIIdata, simulateY

nlmeU-package — Datasets and auxiliary functions for the Galecki and Burzykowski book (2013)

Description: Datasets and auxiliary functions for the Galecki and Burzykowski book (2013).

Details: Datasets and auxiliary functions for the Galecki and Burzykowski book (2013). Package under development.

Author(s): Andrzej Galecki <******************>, Tomasz Burzykowski <******************************>

armd — armd Data (867 x 8)

Description: Data from the Age-Related Macular Degeneration (ARMD) clinical trial.

Format: The armd data frame has 867 rows and 8 columns. It contains data for n = 234 subjects stored in a long format with up to four rows per subject.
- subject: a factor with 234 levels 1, 2, 3, 4, 6, ..., 240
- treat.f: a factor with 2 levels Placebo, Active
- visual0: an integer vector with values ranging from 20 to 85
- miss.pat: a factor with 8 levels ----, ---X, --X-, --XX, -XX-, ..., X-XX
- time.f: a factor with 4 levels 4wks, 12wks, 24wks, 52wks
- time: a numeric vector with values 4, 12, 24, 52
- visual: an integer vector with values ranging from 3 to 85
- tp: a numeric vector with values 1, 2, 3, 4 corresponding to time points 4, 12, 24, 52, respectively

Details: The ARMD data arise from a randomized multi-center clinical trial comparing an experimental treatment (interferon-alpha) versus placebo for patients diagnosed with ARMD.

Source: Pharmacological Therapy for Macular Degeneration Study Group (1997). Interferon alpha-IIA is ineffective for patients with choroidal neovascularization secondary to age-related macular degeneration. Results of a prospective randomized placebo-controlled clinical trial. Archives of Ophthalmology, 115, 865-872.

See Also: armd0, armd.wide

Examples: summary(armd)

armd.wide — armd.wide Data (240 x 10)

Description: Data from the Age-Related Macular Degeneration (ARMD) clinical trial.

Format: The armd.wide data frame has 240 rows and 10 columns. Data are stored in wide format, with each row corresponding to one subject.
- subject: a factor with 240 levels 1, 2, 3, 4, 5, ..., 240
- lesion: an integer vector with values 1, 2, 3, 4
- line0: an integer vector with values ranging from 5 to 17
- visual0: an integer vector with values of visual acuity measured at baseline, ranging from 20 to 85
- visual4: an integer vector with values of visual acuity measured at 4 weeks, ranging from 12 to 84
- visual12: an integer vector with values of visual acuity measured at 12 weeks, ranging from 3 to 85
- visual24: an integer vector with values of visual acuity measured at 24 weeks, ranging from 5 to 85
- visual52: an integer vector with values of visual acuity measured at 52 weeks, from 4 to 85
- treat.f: a factor with 2 levels Placebo, Active
- miss.pat: a factor with 9 levels ----, ---X, --X-, --XX, -XX-, ..., XXXX

Details: The ARMD data arise from a randomized multi-center clinical trial comparing an experimental treatment (interferon-alpha) versus placebo for patients diagnosed with ARMD.

Source: Pharmacological Therapy for Macular Degeneration Study Group (1997). Interferon alpha-IIA is ineffective for patients with choroidal neovascularization secondary to age-related macular degeneration. Results of a prospective randomized placebo-controlled clinical trial. Archives of Ophthalmology, 115, 865-872.

See Also: armd, armd0

Examples: summary(armd.wide)

armd0 — armd0 Data (1107 x 8)

Description: Data from the Age-Related Macular Degeneration (ARMD) clinical trial.

Format: The armd0 data frame has 1107 rows and 8 columns. It contains data for n = 240 subjects stored in a long format with up to five rows per subject.
- subject: a factor with 240 levels 1, 2, 3, 4, 5, ...
- treat.f: a factor with 2 levels Placebo, Active
- visual0: an integer vector with values from 20 to 85
- miss.pat: a factor with 9 levels ----, ---X, --X-, --XX, -XX-, ...
- time.f: a factor with 5 levels Baseline, 4wks, 12wks, 24wks, 52wks
- time: a numeric vector with values from 0 to 52
- visual: an integer vector with values from 3 to 85
- tp: a numeric vector with values from 0 to 4

Details: The ARMD data arise from a randomized multi-center clinical trial comparing an experimental treatment (interferon-alpha) versus placebo for patients diagnosed with ARMD.

Source: Pharmacological Therapy for Macular Degeneration Study Group (1997). Interferon alpha-IIA is ineffective for patients with choroidal neovascularization secondary to age-related macular degeneration. Results of a prospective randomized placebo-controlled clinical trial. Archives of Ophthalmology, 115, 865-872.

See Also: armd, armd.wide

fcat — fcat Data (4851 x 3)

Description: Data from the Flemish Community Attainment-Targets (FCAT) Study.

Format: The fcat data frame has 4851 rows and 3 columns.
- target: a factor with 9 levels T1(4), T2(6), T3(8), T4(5), T5(9), ..., T9(5)
- id: a factor with 539 levels 1, 2, 3, 4, 5, ..., 539
- scorec: an integer vector with values from 0 to 9

Details: An educational study, in which elementary school graduates were evaluated with respect to reading comprehension in Dutch. Pupils from randomly selected schools were assessed for a set of nine attainment targets. The dataset is an example of grouped data for which the grouping factors are crossed.

Source: Janssen, R., Tuerlinckx, F., Meulders, M., & De Boeck, P. (2000). A hierarchical IRT model for criterion-referenced measurement. Journal of Educational and Behavioral Statistics, 25(3), 285.

Examples: summary(fcat)

logLik1 — Calculates contribution of one subject to the log-likelihood

Description: This function is generic; method functions can be written to handle specific classes of objects.

Usage: logLik1(modfit, dt1, dtInit)

Arguments:
- modfit: an object representing a model fitted to data using ML estimation.
- dt1: a data frame with data for one subject, for whom the log-likelihood function is to be evaluated.
- dtInit: an optional auxiliary data frame.

Value: numeric scalar value representing the contribution of a given subject to the overall log-likelihood returned by the logLik() function.

Author(s): Andrzej Galecki and Tomasz Burzykowski

Examples:
require(nlme)
logLik(fm1 <- lme(distance ~ age, data = Orthodont))  # random is ~age
dt1 <- subset(Orthodont, Subject == "M01")
logLik1(fm1, dt1)

logLik1.lme — Calculates contribution of one subject to the log-likelihood for an lme object

Description: This is a method for the logLik1() generic function.

Usage:
## S3 method for class 'lme'
logLik1(modfit, dt1, dtInit)

Arguments:
- modfit: an lme object representing a model fitted using maximum likelihood.
- dt1: a data frame with data for one subject, for whom the log-likelihood function is to be evaluated.
- dtInit: an optional auxiliary data frame.

Details: Calculates the profile likelihood (with beta profiled out) for *one* subject. Data with *one* level of grouping only. The correlation component in modelStruct is not implemented.

Value: numeric scalar value representing the contribution of a given subject to the overall log-likelihood returned by the logLik() function applied to the lme object defined by the modfit argument.

Author(s): Andrzej Galecki and Tomasz Burzykowski

Examples:
require(nlme)
lm3.form <- visual ~ visual0 + time + treat.f
(fm16.5ml <-  # M16.5
  lme(lm3.form, random = list(subject = pdDiag(~time)),
      weights = varPower(form = ~time), data = armd, method = "ML"))
df1 <- subset(armd, subject == "1")  # Panel R20.7
logLik1(fm16.5ml, df1)

missPat — Extract pattern of missing data

Description: This function allows to compactly present the pattern of missing data in a given vector/matrix/data frame, or a combination thereof.

Usage: missPat(..., symbols = c("X", "-"), collapse = "", missData = FALSE)

Arguments:
- ...: one or more vectors/matrices/data frames. They need to be compatible for columnwise binding.
- symbols: vector containing two single characters used to indicate NA and the remaining values. By default it has the values X and -.
- collapse: an optional character string. It is used in the internal call to the paste() function to separate the results. Rarely used.
- missData: logical. If TRUE, a data frame with the pattern of missing values is saved in the missData attribute of the vector returned by this function.

Value: character vector with as many elements as the length of the vector(s)/number of rows in the matrices and/or data frames in the ... argument(s). The attribute cnames contains the names of the vectors/columns/variables. The optional attribute missData contains a data frame with the missing pattern.

Author(s): Andrzej Galecki and Tomasz Burzykowski

Examples:
dtf <- subset(armd.wide, select = c(visual12, visual24, visual52))
missPat(dtf, symbols = c("?", "+"))

prt — prt Data (2471 x 9)

Description: Data from a Progressive Resistance Randomized Trial.

Format: The prt data frame has 2471 rows and 9 columns. It contains data for n = 63 subjects. Each subject underwent muscle biopsy before and after intervention. Data are stored in a long format, with each record corresponding to one muscle fiber. There are two types of muscle fibers: Type 1 and Type 2. Dependent variables, specific force and isometric force, are measured pre- and post-intervention.
- id: a factor with 63 levels 5, 10, 15, 20, 25, ..., 520 (subject id)
- prt.f: a factor with 2 levels High, Low, i.e. training (intervention) intensity
- age.f: a factor with 2 levels Young, Old (stratifying variable)
- sex.f: a factor with 2 levels Female, Male (stratifying variable)
- bmi: a numeric vector with values of BMI at baseline, ranging from 18.36 to 32.29
- iso.fo: a numeric vector with values of isometric force, ranging from 0.16 to 2.565
- spec.fo: a numeric vector with values of specific force, ranging from 80.5 to 290
- occ.f: a factor with 2 levels Pre, Pos, i.e. pre- and post-intervention
- fiber.f: a factor with 2 levels Type1, Type2, i.e. Type 1 and Type 2 muscle fiber

Details: Data frame prt was obtained by merging prt.subjects and prt.fiber.

Source: Claflin, D.R., Larkin, L.M., Cederna, P.S., Horowitz, J.F., Alexander, N.B., Cole, N.M., Galecki, A.T., Chen, S., Nyquist, L.V., Carlson, B.M., Faulkner, J.A., & Ashton-Miller, J.A. (2011). Effects of high- and low-velocity resistance training on the contractile properties of skeletal muscle fibers from young and older humans. Journal of Applied Physiology, 111, 1021-1030.

See Also: prt.fiber, prt.subjects

Examples: summary(prt)

prt.fiber — prt.fiber Data (2471 x 5)

Description: Data from a Progressive Resistance Randomized Trial.

Format: The prt.fiber data frame has 2471 rows and 5 columns. Each row in the data corresponds to one muscle fiber collected during muscle biopsy. See the prt data frame for the description of the study design.
- id: a factor with 63 levels 5, 10, 15, 20, 25, ..., 520
- iso.fo: a numeric vector with values of isometric force, ranging from 0.16 to 2.565
- spec.fo: a numeric vector with values of specific force, ranging from 80.5 to 290
- occ.f: a factor with 2 levels Pre, Pos, i.e. pre- and post-intervention
- fiber.f: a factor with 2 levels Type1, Type2, i.e. Type 1 and Type 2 muscle fiber

Details: The PRT trial was aimed at devising evidence-based methods for improving and measuring the mobility and muscle power of elderly men and women.

Source: Claflin, D.R., Larkin, L.M., Cederna, P.S., Horowitz, J.F., Alexander, N.B., Cole, N.M., Galecki, A.T., Chen, S., Nyquist, L.V., Carlson, B.M., Faulkner, J.A., & Ashton-Miller, J.A. (2011). Effects of high- and low-velocity resistance training on the contractile properties of skeletal muscle fibers from young and older humans. Journal of Applied Physiology, 111, 1021-1030.

See Also: prt, prt.subjects

Examples: summary(prt.fiber)

prt.subjects — prt.subjects Data (63 x 5)

Description: Data prt.subjects...

Format: The prt.subjects data frame has 63 rows and 5 columns.
- id: a factor with 63 levels 5, 10, 15, 20, 25, ...
- prt.f: a factor with 2 levels High, Low
- age.f: a factor with 2 levels Young, Old
- sex.f: a factor with 2 levels Female, Male
- bmi: a numeric vector with values from 18.4 to 32.3

Details: The working hypothesis was that a 12-week program of PRT would increase: (a) the power output of the overall musculature associated with movements of the ankles, knees, and hips; (b) the cross-sectional area and the force and power of permeabilized single fibers obtained from the vastus lateralis muscle; and (c) the ability of young and elderly men and women to safely arrest standardized falls. The training consisted of repeated leg extensions by shortening contractions of the leg extensor muscles against a resistance that was increased as the subject trained, using a specially designed apparatus.

Source: Claflin, D.R., Larkin, L.M., Cederna, P.S., Horowitz, J.F., Alexander, N.B., Cole, N.M., Galecki, A.T., Chen, S., Nyquist, L.V., Carlson, B.M., Faulkner, J.A., & Ashton-Miller, J.A. (2011). Effects of high- and low-velocity resistance training on the contractile properties of skeletal muscle fibers from young and older humans. Journal of Applied Physiology, 111, 1021-1030.

Examples: summary(prt.subjects)

Pwr — Calculates power based on a model fit

Description: This function is generic; method functions can be written to handle specific classes of objects.

Usage: Pwr(object, ...)

Arguments:
- object: an object containing the results returned by a model fitting function (e.g., lme).
- ...: some methods for this generic function may require additional arguments.

Value: numeric scalar value.

Author(s): Andrzej Galecki and Tomasz Burzykowski

See Also: Pwr.lme

Examples:
## Not run: Pwr(fm1)
## End(Not run)

Pwr.lme — Performs power calculations

Description: This is a method for the Pwr() generic function. It works fine for the example given in the book. It may require additional testing, especially for post-hoc power analysis.

Usage:
## S3 method for class 'lme'
Pwr(object, ..., type = c("sequential", "marginal"), Terms, L, verbose = FALSE,
    sigma, ddf = numeric(0), alpha = 0.05, altB = NULL, tol = 1e-10)

Arguments:
- object: an object containing an lme fit, which provides information needed for power calculations.
- ...: some additional arguments may be required.
- type: an optional character string specifying the type of sum of squares to be used in the F-tests needed for power calculations. Syntax is the same as for anova.lme() in the nlme package.
- Terms: an optional integer or character vector specifying which terms in the model should be jointly tested to be zero using a Wald F-test. See anova.lme in the nlme package for details.
- L: an optional numeric vector or array specifying linear combinations of the coefficients in the model that should be tested to be zero. See anova.lme in the nlme package for details.
- verbose: an optional logical value. See anova.lme in the nlme package for details.
- sigma: numeric scalar value.
- ddf: numeric scalar value. The argument can be used to redefine the default number of denominator degrees of freedom.
- alpha: numeric scalar value. By default 0.05.
- altB: matrix/vector containing alternative values for the beta parameters.
- tol: numeric scalar value.

Value: a data frame inheriting from class Pwr.lme.

Author(s): Andrzej Galecki and Tomasz Burzykowski

See Also: anova.lme

runScript — Executes scripts from the GB book

Description: A default call of the function without arguments prints a list of available scripts.

Usage: runScript(script = NA, package = "nlmeU", subdir = "scriptsR2.15.0", echo = TRUE)

Arguments:
- script: character string containing the name of the script to be executed. By default set to NA.
- package: character string containing the package name. By default "nlmeU".
- subdir: subdirectory containing the scripts. By default "scriptsR2.15.0".
- echo: logical, passed to the source() function. By default set to TRUE.

Value: The script is executed and its results are printed.

Author(s): Andrzej Galecki and Tomasz Burzykowski

Examples: runScript()

sigma — Extract scale parameter sigma from a model fit

Description: This function is generic; method functions can be written to handle specific classes of objects.

Usage: sigma(object, ...)

Arguments:
- object: an object for which the scale parameter can be extracted.
- ...: some methods for this generic function may require additional arguments.

Value: numeric scalar value.

Author(s): Andrzej Galecki and Tomasz Burzykowski

Examples: ## sigma(fm1)

SIIdata — SIIdata Data (1190 x 12)

Description: Data from the Study of Instructional Improvement (SII) Project.

Format: The SIIdata data frame has 1190 rows and 12 columns. The dataset includes results for 1190 first-grade pupils sampled from 312 classrooms in 107 schools.
- sex: a factor with 2 levels M, F, i.e. males and females, respectively
- minority: a factor with 2 levels Mnrt=No, Mnrt=Yes. An indicator variable for minority status
- mathkind: an integer vector with values from 290 to 629. This is the pupil's math score in the spring of the kindergarten year
- mathgain: an integer vector with values from -110 to 253. The number represents the pupil's gain in the math achievement score from the spring of kindergarten to the spring of first grade
- ses: a numeric vector with values from -1.61 to 3.21. The value represents socioeconomic status
- yearstea: a numeric vector with values from 0 to 40. It is the number of years of the teacher's experience in teaching first grade
- mathknow: a numeric vector with values from -2.5 to 2.61. The number represents the teacher's knowledge of first-grade math contents (higher values indicate a higher knowledge of the contents)
- housepov: a numeric vector containing the proportion of households in the neighborhood of the school below the poverty level, with values ranging from 0.012 to 0.564
- mathprep: a numeric vector with values from 1 to 6. Contains the number of preparatory courses on first-grade math contents and methods followed by the teacher
- classid: a factor with 312 levels 1, 2, 3, 4, 5, ..., 312. Classroom's id
- schoolid: a factor with 107 levels 1, 2, 3, 4, 5, ..., 107. School's id
- childid: a factor with 1190 levels 1, 2, 3, 4, 5, ..., 1190. Pupil's id

Details: The SII Project was carried out to assess the math achievement scores of first- and third-grade pupils in randomly selected classrooms from a national US sample of elementary schools (Hill et al., 2005). Data were also analyzed in West et al., 2007. The outcome of interest is the mathgain variable. Data were created based on classroom data from the WWGbook package.

Source: Hill, H., Rowan, B., and Ball, D. (2005). Effect of teachers' mathematical knowledge for teaching on student achievement. American Educational Research Journal, 42, 371-406. West, B.T., Welch, K.B., and Galecki, A.T. (2007). Linear Mixed Models: A Practical Guide Using Statistical Software. Chapman and Hall/CRC.

Examples: summary(SIIdata)

simulateY — Simulates values of the dependent variable based on a model fit

Description: This function is generic; method functions can be written to handle specific classes of objects.

Usage: simulateY(object, nsim = 1, seed = NULL, ..., verbose = FALSE, sigma)

Arguments:
- object: an object with a model fit for which the dependent variable is to be simulated.
- nsim: number of simulations. nsim = 1 by default.
- seed: integer scalar used to initiate the random number generator.
- ...: some methods for this generic function may require additional arguments.
- verbose: logical. If TRUE, basic information about the arguments is provided. By default set to FALSE.
- sigma: numeric scalar. Allows simulations to be performed employing an alternative value of the scale parameter.

Value: numeric matrix. The number of columns is determined by the nsim argument.

Author(s): Andrzej Galecki and Tomasz Burzykowski

Examples: ## simulateY(fm1)
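The core idea behind the missPat() utility documented above — encoding each row's missing-data pattern as a short string such as "--X" or "-XX" — translates easily to other languages. The following Python sketch (not part of nlmeU; the function name and data are made up for illustration) mimics its default behavior, with "X" marking a missing value and "-" an observed one:

```python
# Illustration (in Python) of the idea behind nlmeU's missPat():
# encode each row's missing-data pattern as a string, "X" for a
# missing value and "-" for an observed one.

def miss_pat(rows, symbols=("X", "-")):
    """Return one pattern string per row; None marks a missing value."""
    na, ok = symbols
    return ["".join(na if v is None else ok for v in row) for row in rows]

# Hypothetical visual-acuity rows (visual12, visual24, visual52):
rows = [(55, 45, None), (20, None, None), (70, 68, 65)]
print(miss_pat(rows))                      # ['--X', '-XX', '---']
print(miss_pat(rows, symbols=("?", "+")))  # ['++?', '+??', '+++']
```

Tabulating these pattern strings (as the miss.pat factor in the armd datasets does) gives a compact overview of how dropout accumulates over the follow-up visits.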


DIGITAL LOGIC CIRCUIT MODELING AND SIMULATION WITH MULTISIM

Multisim is a schematic capture and simulation program for analog, digital and mixed analog/digital circuits, and is one component of the National Instruments "Circuit Design Suite".

The basic steps in modeling and analysis of a digital logic circuit are:
1. Open Multisim and create a "design".
2. Draw a schematic diagram of the circuit (components and interconnections).
3. Design digital test patterns to be applied to the circuit inputs to stimulate the circuit and connect signal sources to the circuit inputs to produce these patterns.
4. Connect the circuit outputs to one or more indicators to display the response of the circuit to the test patterns.
5. Run the simulation and examine the results, copying and pasting Multisim windows into lab reports and other documents as needed.
6. Save the design.

Step 1. Open Multisim and create a design

This creates a blank design called "Design1", as illustrated in Figure 1. Save the file with the desired design name via menu bar File>Save As to use the standard Windows Save dialog. A previously saved design can be opened via File>Open. In the dialog window, navigate to the directory in which the design is stored, select the file, and click the Open button.

Figure 1. Blank design with default name "Design 1."

Step 2. Draw a schematic diagram of the circuit

Place Components

A schematic diagram comprises one or more circuit components, interconnected by wires. Optionally, signal "sources" may be connected to the circuit inputs, and "indicators" to the circuit outputs. Each component is selected from the Multisim library and placed on the drawing sheet in the Circuit Window (also called the Workspace). The Multisim library is organized into "groups" of related components (Transistors, Diodes, Misc Digital, TTL, etc.). Each group comprises one or more "families", within which the components are implemented with a common technology. For designing and simulating digital logic circuits in this course, the "Misc Digital" group (TIL family only) is used.

The "Misc Digital" group has three families of components, of which the family "TIL" contains models of generic logic gates, flip-flops, and modular functions. These components are technology-independent, which means that they have only nominal circuit delays and power dissipation, unrelated to any particular technology. Generic components can be used to test the basic functionality of a design, whereas realistic timing information requires the use of technology-specific part models, such as those in the TTL group.

To place a component on the drawing sheet, select it via the Component Browser, which is opened via the component toolbar or the menu bar. From the menu bar, select Place>Component to open the Component Browser window, illustrated in Figure 2. You can also open this window by clicking on the Misc Digital icon in the component toolbar. On the left side of the window, select "Master Database", group "Misc Digital", and family "TIL". The component panel in the center lists all components in the selected family. Scroll down to and click on the desired gate (NAND2; see Figure 3); its symbol and description are displayed on the right side of the window. Then click the OK button. The selected gate will be shown on the drawing sheet next to the cursor; move the cursor to position the gate at the desired location, and then click to fix the position of the component. The component can later be moved to a different location, deleted, rotated, etc. by right-clicking on the component and selecting the desired action. You may also select these operations via the menu bar Edit menu.

Figure 2. Component Browser: Misc Digital TIL family NAND2 gate component selected.
Figure 3. A third NAND2 gate is about to be placed on the drawing sheet.

Figure 4 shows the schematic diagram with four placed components.
For designing and simulating digital logic circuits in this course, “Misc Digital” (TIL family only) is used.The “Misc Digital” group has three families of components, of which family “TIL” contains models of generic logic gates, flip-flops, and modular functions. These components are technology-independent, which means that they have only nominal circuit delays and power dissipation, unrelated to any particular technology. Generic components can be used to test the basic functionality of a design, whereas realistic timing information requires the use of technology-specific part models, such as those in the TTL group.To place a component on the drawing sheet, select it via the Component Browser, which is opened via the component toolbar or the menu bar. From the menu bar, select Place>Component to open the Component Browser window, illustrated in Figure 2. You can also open this window by clicking on the Misc Digital icon in the component toolbar. On the left side of the window, select “Master Database”, group “Misc Digital”, and family “TIL”. The component panel in the center lists all components in the selected family. Scroll down to and click on the desired gate (NAND2 Figure 3); its symbol and description are displayed on the right side of the window. Then click the OK button. The selected gate will be shown on the drawing sheet next to the cursor; move the cursor to position the gate at the desired location, and then click to fix the position of the component. The component can later be moved to a different location, deleted, rotated, etc. by right clicking on the component and selecting the desired action. You may also select these operations via the menu bar Edit menu.Figure 2. Component Browser: Misc Digital TIL family NAND2 gate component selected.Figure 3. A third NAND2 gate is about to be placed on the drawing sheet.Figure 4 shows the schematic diagram with four placed components. 
Note that each placed gate has a “designator” (U2, U3, U4, U5), which can be used when referring to that gate. You can change a designator by right clicking on the component, selecting Properties, and entering the desired name on the Label tab.Figure 4. Schematic diagram with all placed components.Drawing WiresWires are drawn between component pins to interconnect them. Moving the cursor over a component pin changes the pointer to a crosshair, at which time you may click to initiate a wire from that pin. This causes a wire to appear, connected to the pin and the cursor. Move the cursor to the corresponding pin of the second component (the wire follows the cursor) and click to terminate the wire on that pin. If you do not like the path selected for the wire, you may click at a point on the drawing sheet to fix the wire to that point and then you can move the cursor to continue the wire from that point. You may also initiate or terminate a wire by clicking in the middle of a wire segment, creating a “junction” at that point. This is necessary when a wire is to be fanned out to more than one component input. A partially-wired circuit, including one junction point, is illustrated in Figure 5.Figure 5. Partially wired circuit, with one junction point.Step 3. Generating test input patterns.To drive circuit simulations, Multisim provides several types of “sources” and “instruments” to generate and apply patterns of logic values to digital circuit inputs. Sources are placed on the schematic sheet and connected to circuit inputs in the same way as circuit components, selecting them from the “Digital_Sources” family of the “Sources” group in the component browser. Note that there is a Place Source shortcut icon in the tool bar.There are three basic digital sources:1.DIGITAL_CONSTANT – this is a box with a constant logic 1 or 0 output, and would be used where the logic valueis not to be changed during simulation. 
To change the output value, right-click on the box, select Properties, select the desired value on the Value tab, and click the OK button.

2. INTERACTIVE_DIGITAL_CONSTANT – a clickable box that can be connected to a circuit input. Clicking on the box toggles its output between 0 and 1. This can be used to interactively change a circuit input during simulation.

3. DIGITAL_CLOCK – a box that produces a repeating pulse train (square waveform), oscillating between 0 and 1 at a specified frequency. To set the frequency and duty cycle, right-click on the box, select Properties, select the desired frequency and duty cycle values on the Value tab, and click the OK button.

Figure 6 shows the circuit of Figure 5 with an INTERACTIVE_DIGITAL_CONSTANT connected to each input. Note that the initial state of each is logic 0. Since this circuit has only three inputs, all 8 input patterns can be produced (to generate a truth table for the circuit) by manually toggling the inputs.

Figure 6. INTERACTIVE_DIGITAL_CONSTANT sources connected to circuit inputs.

Step 4. Connect circuit outputs to indicators

To facilitate studying the digital circuit output(s), Multisim provides a variety of "indicators". For digital simulation, the most useful are digital "probes", hex displays, and the Logic Analyzer instrument. A probe, illustrated in Figure 7, displays a single digital value as ON or OFF (the probe is "illuminated" to indicate an ON condition). The PROBE family of the Indicators group includes a generic PROBE_DIG and several PROBE_DIG_color indicators (color = BLUE, GREEN, ORANGE, RED, YELLOW). The probe in Figure 7 is PROBE_DIG_BLUE. This circuit can be verified by manually changing the three INTERACTIVE_DIGITAL_CONSTANT inputs to each of the 8 possible combinations, and recording the probe value for each combination to create a truth table.

Figure 7. PROBE_DIG_BLUE connected to circuit output.

Step 5.
Run the simulation

A simulation is initiated by pressing the Run (green arrow) button in the toolbar or via the menu bar via Simulate > Run. You may simulate the circuit by clicking on the keys to change the input values and observing the output changes through the LED indicator.

You may capture any window and paste it into a Word or other document for generating reports. An individual window is captured by pressing the ALT and Print Screen keys concurrently. You may then "paste" the captured window into a document via the editing features of that document. To capture a circuit diagram in the main window, the simplest method is via the menu bar Tools > Capture Screen Area. This produces a rectangle whose corners can be stretched to include the screen area to be captured; the "copy" icon on the top left corner is pressed to copy the area, which may then be pasted into a document.

Step 6. Save the design and close Multisim

The simplest way to save a design is to click the Save icon in the Design Toolbar on the left side of the window, directly above the design name. Alternatively, you may use the standard menu bar File > Save. Multisim is exited as any other Windows program.

This document is a modified and shortened version of /department/ee/elec2210/.

Appendix: Creating subcircuits and hierarchical blocks (from the NI Multisim manual)

Complete the following steps to place a new subcircuit:

1. Select Place » New subcircuit. The Subcircuit Name dialog box appears.
2. Enter the name you wish to use for the subcircuit, for example, "PowerSupply", and click OK. Your cursor changes to a "ghost" image of the subcircuit, indicating that the subcircuit is ready to be placed.
3. Click in the desired location to place the subcircuit.
4. Double-click on the new subcircuit and select Open subsheet from the Label tab of the Hierarchical Block/Subcircuit dialog box that displays. An empty design sheet appears.
5. Place and wire components as desired in the new subcircuit.
6.
Select Place » Connectors » Hierarchical connector, and place and wire the connector as desired. Repeat for all required hierarchical connectors.

When you attach a hierarchical connector to a wire, the net name for the wire that you connect it to does not change if it has a Preferred net name (user-assigned via the Net Properties dialog box). If the net name on the wire is auto-named, it changes to match the connector.

7. Select the sheet that contains the subcircuit from the Hierarchy tab of the Design Toolbox, or select View » Parent sheet. This command moves you up to the next sheet in the hierarchy. If you have multiple nested circuits and are viewing, for example, a subcircuit within a subcircuit, you will not move to the top of the hierarchy. The symbol for the subcircuit that appears includes pins for the number of connectors that you added.
8. Wire the hierarchical connectors into the main circuit.

Complete the following steps to place another instance of the same subcircuit:

1. Select the desired subcircuit and select Edit » Copy.
2. Select Edit » Paste to place a copy of the subcircuit on the workspace.
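The truth-table procedure from Steps 3–5 (toggling the three inputs through all 8 combinations and recording the probe value) can be cross-checked outside Multisim. Below is a minimal Python sketch that enumerates the 8 input patterns of a circuit built from NAND2 gates; the particular function wired here (F = A·B + B·C, realized with three NAND2 gates) is an assumed illustration, not the exact Figure 4 circuit.

```python
from itertools import product

def nand(a, b):
    # 2-input NAND: output is 0 only when both inputs are 1
    return 0 if (a and b) else 1

def circuit(a, b, c):
    # Assumed example circuit: F = (A NAND B) NAND (B NAND C) = A*B + B*C
    u2 = nand(a, b)   # gate U2 (designators follow the tutorial's naming)
    u3 = nand(b, c)   # gate U3
    return nand(u2, u3)

# Enumerate all 8 input patterns, as done manually with the
# INTERACTIVE_DIGITAL_CONSTANT sources, and print the truth table.
for a, b, c in product([0, 1], repeat=3):
    print(a, b, c, circuit(a, b, c))
```

Comparing such a software-generated table against the probe readings recorded during simulation is a quick way to confirm the schematic was wired as intended.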

ADS Transistor DC Simulation Tutorial

Lab 2: DC Simulations

This chapter introduces the mixer circuit and shows all the basics of DC simulations, including a family of curves and device biasing calculations.

OBJECTIVES

- Build a symbolized sub-circuit for use in the hierarchy
- Create a family of curves for the device used in the mixer
- Sweep variables, pass parameters, and plot or list the data
- Use equations to calculate bias resistor values from simulation data

NOTE about this lab: This lab and the remaining labs will use the BJT mixer to demonstrate all types of simulations. Regardless of the type of circuit you design, the techniques and simulations presented in these labs will be applicable to many other circuit configurations.

PROCEDURE

The following steps are for creating the mixer BJT sub-circuit with package parasitics and performing the DC simulations as part of the design process.

1. Create a New Project and name it: mixer
2. Open a New Schematic Window and save it as: bjt_pkg
3. Set up the BJT device and model:
a. Insert the BJT generic device and model: In the schematic window, select the palette Devices–BJT. Select the BJT-NPN device and insert it onto the schematic. Next insert the BJT Model (model card with default Gummel-Poon parameters).
b. Double-click on the model. When the dialog appears, click Component Options and, in the next dialog, click Clear All and OK. This will remove the parameter list from the schematic.
c. Assign Forward Beta = beta. Double-click on the model card you just inserted. Select the Bf parameter and type in the word beta as shown here. Also, click the small box Display parameter on schematic for Bf only, and then click Apply. The numerical value of beta will be assigned in the next steps.
d. Type in the value of Vaf (Forward Early Voltage) as 50 and display it by clicking Apply and OK. This will make the DC curves more realistic.
e. Click OK to dismiss the dialog box with these changes.
f.
For the BJT device or any component, you can also remove the unwanted display parameters (Area, Region, Temp and Mode) by editing it in the same way.

4. Build the rest of the subcircuit

The picture here shows the completed subcircuit. Follow the steps to build it, or simply build it as shown. NOTE: Connect the components together or wire them as needed.

a. Insert the package parasitics L and C: Insert three lead inductors (320 pH) and two junction capacitors (120 fF). Be sure to use the correct units (pico and femto) or your circuit will not have the correct response. Also, add some resistance (R = 0.01 ohms) to the base lead inductor and display the desired component values as shown.
b. Insert port connectors: Click the port connector icon (shown here) and insert the connectors exactly in this order: 1) collector, 2) base, 3) emitter. You must do this so that the connectors have the exact same pin configuration as the ADS BJT symbol. Edit the port names – change P1 to C, P2 to B, and P3 to E.
c. Clean up the schematic: Position the components so the schematic looks organized – this is good practice. To move component text, press the F5 key and then select the component. Use the cursor to position the text.

5. Create a symbol for the sub-circuit

There are three ways to create a symbol for a circuit: 1) use a default symbol, 2) use a built-in symbol (a standard symbol), or 3) create a new symbol by drawing one or modifying an existing one. For this lab you will use a built-in BJT symbol, which looks better than the default three-port symbol. The following steps show how to do this:

a. To see the default symbol, click View > Create/Edit Schematic Symbol. The symbol page will replace the schematic page and a dialog will appear. Click OK to use the defaults.
b.
Next, a rectangle or square with three ports is generated. NOTE: You will be replacing the default symbol with a built-in BJT symbol in the next steps. As you do, you must assign the pin (port) numbers exactly as shown to match the built-in symbol for the emitter, base, and collector.
c. To change the symbol to a built-in symbol that looks like a transistor, delete the entire symbol you just created: Select > Select All. Then click the trash can icon to delete the symbol.
d. Return to the schematic: View > Create/Edit Schematic. Now click File > Design Parameters. In the General tab, there is a Symbol Name parameter list. Click the arrow and select SYM_BJT_NPN. Also, change the component instance name to Q.
e. Set beta as a pass parameter: To do this, click the Parameters tab. In the Parameter Name area, type in beta and assign a default value of 100 by clicking the Add button. Be sure to click the Display button as shown in the picture. Click the OK button at the bottom (not shown here) to save the new definitions and dismiss the dialog.
f. In the schematic window, save the design to make sure all your work is saved, and close the window. You now have a sub-circuit that will be available for use in other designs and other projects.

6. Create another circuit for DC simulations

a. Open a new schematic from the Main window and save it as: dc_curves. This will be the upper-level circuit.
b. Click on the Library list icon and the library browser will appear. Select the mixer project and you will see the bjt_pkg circuit listed as an available component.
c. Select the bjt_pkg component and the npn transistor symbol will appear on your cursor. Click in the dc_curves schematic to insert the bjt_pkg. You can now close the library window and save the dc_curves design (good practice to save often).

7. Set up a DC curve tracer

For this step you will use a template.
ADS built-in templates make it easier to set up the simulation after the schematic is built. In this case, the DC curve tracer template is set up to sweep VCE within incremental values of base current IBB.

a. On the schematic, click File > Insert Template and select the BJT_curve_tracer. Click OK and it will appear on your cursor; to insert it, click near your bjt_pkg symbol.
b. With the curve tracer template inserted, wire the circuit together so it looks like the one shown here. Note that you can move the component text using the F5 key (a hot key for moving component text) so that the schematic looks good. NOTE: If you did not use this template, you would have to insert every component (the V_DC source, the I_Probe, the I_DC source, etc.) one at a time. You would also have to assign and set up the variables (IBB, VCE) for the swept simulation.
c. Set the Parameter Sweep IBB values: 1 uA to 11 uA in 2 uA steps. Parameter Sweep components are available in all simulation palettes. Set the DC simulation controller SweepVar VCE: 0 to 5 in 0.1 steps. Notice that the VAR1 variables VCE and IBB can be used as is, because they only initialize the variables, but it is best to use reasonable values.

8. Name the dataset and run the simulation

a. Click Simulate > Simulation Setup. When the dialog appears, type in a name for the dataset, dc_curves, as shown.
b. Click Apply and Simulate.
c. After the simulation is finished, click the Cancel button and the setup dialog will disappear. If you get an error message, check the simulation setup and repeat if necessary.

9. Display the results, change beta, and resimulate

a. Click the New Data Display icon (shown here). Insert a rectangular plot and add the IC.i data. Note that voltage VCE is the default X-axis value. The results should look similar to the "beta=100" plot shown here.
b. On the schematic, change the value of beta to 144.
The value will automatically be passed down to the sub-circuit that you set up in the previous steps. Simulate again and notice the change as shown here. NOTE: You will use beta = 144 for the remainder of the labs.
c. Insert a marker on the dc_curves trace (as shown here), where the initial specification of 1 mA at VCE corresponds to about 7 uA of base current.
d. Insert a list (click the List icon).
e. Select collector current IC and add it. If the list is in table format as shown (box with an X across it), edit or double-click the list, check the box Suppress Table Format, and click OK. Then scroll through the data.

DC BIAS DESIGN CONSIDERATION: When the final circuit is constructed, the LO drive will shift the current slightly higher, which means that the operating point can be a little lower if desired. In addition, a current-limiting collector resistor RC will be required, and that will lower the voltage across VCE. Knowing this, it is reasonable to assume that VCC of 2 volts will be divided with a voltage drop of about 0.5 V across RC and the remaining 1.5 V across the device (VCE).

10. Create a new design to calculate bias values

The next steps will sweep only base current for a fixed value of VCE at 1.5 volts. This will allow you to determine values of base-emitter voltage VBE that can be used to calculate the bias resistor values.

a. Save the dc_curves schematic. Next, save it with a new name as follows: click File > Save As and, when the dialog box appears, type in a new name: dc_bias. Now you have three designs in the networks directory: bjt_pkg, dc_curves and dc_bias.
b. If only one variable is swept, it is more effective to sweep it in the Simulation controller and not in a Parameter Sweep. Therefore, delete the Parameter Sweep.
Refer to the schematic here to: 1) edit the DC controller to sweep IBB from 1 uA to 11 uA in 1 uA steps, 2) set Vdc = 1.5 V, and 3) remove VCE from the VarEqn by editing it (double-click) and using the Cut button to remove VCE as a variable (VCE is cut from the DC controller).
c. Insert a node name to allow you to get simulation data from a node on the schematic. Click the icon or use the command Component > Name Node. When the dialog appears, type in the name VBE and click on the node at the base of the transistor.
d. Save and Simulate: Save the new design by clicking the save icon – this is always good practice. Next, check the dataset name (Simulate > Simulation Setup) as in the previous simulation. Be sure it appears as dc_bias, and then Simulate.

11. Display the data (dc_bias) in a list

In this step, you will use the same data display window that contains the dc_curves data. In fact, you can plot numerous datasets in the display, but you must explicitly define (dataset name..) the data to be displayed.

a. In the current Data Display window, notice that the default dataset is dc_curves. This is OK. However, if you change the default to dc_bias, you will see that the plot becomes invalid because the data is not the same array size as the two-dimensional one. This is normal. Try this now as shown, and then set it back to dc_curves.
b. Now, in the current Data Display window, make room for the new data by using the zoom and view icons. Then insert a new list.
c. When the list dialog box appears, select the dc_bias dataset and add VBE and IC. You should get results similar to those here, where IC is very close to 1 mA.
d. Draw a box around the values of interest as shown here. To do this, click the rectangle icon from the tool bar and draw it on the list. This is one way to highlight the data. Also, save the data display window by using Save As and giving it a name like dc_data.

12. Write an equation to calculate Rb

a.
On the data display, insert an equation by clicking on the equation icon and then clicking in the data display window.
b. When the dialog appears, type in the equation as shown, by typing and using the Insert button. First, select the dc_bias dataset in the upper right (circled). To write the equation, type the first part only, Rb = (1.5 - , and then select VBE from the list and insert it.

IMPORTANT NOTES on writing equations: Equations that operate on data can be either explicit or generic. An explicit equation names the dataset being referenced (dataset_name..variable); a generic equation applies to whatever the default dataset is. The difference between these two equations is in the data being referenced, especially the default dataset in the case of the generic equation. Also, note that equations and data are CASE SENSITIVE.

c. Verify how the generic equation described above will work. Be sure the data display shows dc_curves as the default dataset. Now insert another equation and type it in as shown (generic version): Rb1 = (1.5 - VBE) / IBB. After you click OK, it will be red (invalid).
d. Now change the default dataset to dc_bias (at the top of the display) and verify that the equation becomes valid. Now continue with the design by calculating the collector resistor.
e. Write an equation for resistor Rc. You should be able to do this based on what you learned in the previous steps.
f. List the values Rb and Rc. Insert a List and, when the first dialog appears, select Equations by clicking the arrow. Then Add Rb and Rc and click OK.
g. When the list appears, you will see a table of values for Rb and Rc that correspond to the values of IBB. As a rule, you always get the independent variable (here IBB) when you list or plot data.
h. Increase the size of a display (if you see dots … after the entries) by dragging the corner of the list. If dots appear after a number or name, it indicates there is more data and you should increase the size of the list or plot.
i.
Draw a box (rectangle) around the desired values to make them easier to read. Then edit the list (double-click) and select Plot Options. Now type in a title and change the format as shown, using the More button if desired.
j. Be sure to save the display (.dds file). With these values of Rb and Rc, the next step is to bias the device and test the bias network.

13. Set up a new design to test the bias network

For this step, you will create the schematic design without using a template. During this process you will learn some efficient ways to do this.

a. Open a new schematic from the existing one, using the File > New command or the icon, and name it: dc_net. Notice that this dialog allows you to name the new design and gives you other options.
b. In the new schematic (dc_net), insert your sub-circuit bjt_pkg by typing its name in the component history list.
c. Set the value of beta to 144.
d. Go to the Lumped-Components palette and insert a resistor as the base resistor. Notice that "R" appears in the history list when you do this.
e. Insert the collector resistor and rotate it: put the cursor in the history list "R" and press Enter. Immediately, when the resistor is attached to your cursor, click the –90 rotate icon shown here and the component will increment 90 degrees – then insert it.
f. Insert a current probe (I_Probe) from the palette or type it in. Connect it to the top of the collector resistor. After you connect the component, you can drag it and it is automatically wired.
g. Finish building the circuit as follows: Rename and assign the resistors: Rb = 100 K ohms and Rc = 470 ohms. Rename the I_Probe: IC. Insert a V_DC supply set at 2 V (from the Sources-Frequency Domain palette). Insert a node name at the collector as VC. Wire the circuit and organize it. NOTE on Name Node: To remove a named node, click Edit > Component > Remove Node Name, or you can rename the node with a blank (click the icon and try it).
This step shows how to remove a node name – you may need it later on.
h. Insert a DC simulation controller (Simulation-DC palette).

14. Simulate and verify the bias network conditions

For this you do not need to display the data. Instead, you will simply annotate the schematic to verify that IC meets the 1 mA specification and that the bias design consideration (described earlier) is accurate.

a. Press the F7 key and the simulation will be launched with a dataset name that is the same as the schematic's – this is the default. You can verify this by reading the status window.
b. Annotate the current and voltage at the nodes. Click on the menu command Simulate > Annotate DC Solution. Now you should see the voltages and currents at the nodes. Be sure that you have about 1 mA of collector current with VCE at about 1.5 V. If not, check your work.
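The resistor arithmetic behind steps 12–14 can be sanity-checked by hand. The Python sketch below mirrors the lab's equations with assumed operating-point numbers; the VBE value is a placeholder, since in the lab it comes from the dc_bias sweep data.

```python
# Bias-resistor arithmetic from the lab, done by hand.
# Assumed operating point (placeholders; in the lab these come
# from the dc_bias simulation data):
VCC = 2.0        # supply voltage, volts
VCE = 1.5        # desired collector-emitter voltage, volts
VBE = 0.78       # base-emitter voltage at the operating point (assumed)
IBB = 7e-6       # base current, amps (marker value from step 9.c)
beta = 144       # forward beta used in the labs

IC = beta * IBB  # collector current, ideal active-region model

# Same equations as written in the data display:
Rb = (1.5 - VBE) / IBB   # base resistor (lab equation)
Rc = (VCC - VCE) / IC    # collector resistor drops the remaining 0.5 V

print(f"IC = {IC*1e3:.2f} mA, Rb = {Rb/1e3:.0f} kOhm, Rc = {Rc:.0f} Ohm")
```

With these assumed inputs, IC lands near the 1 mA target and Rb and Rc come out close to the 100 K ohm and 470 ohm values assigned in step 13.g.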

ITU-T Packet Transport Technology Standards



[Figure: MEF 4 Metro Ethernet Network (MEN) layered architecture. Application services layer (APP, e.g., IP, MPLS), Ethernet services layer (Ethernet service PDU), and transport services layer (TRAN, e.g., IEEE 802.1, SONET/SDH, MPLS), spanning the control, data, and management planes.]
The Fifth Research Institute of Telecommunications Science and Technology
Carrier Ethernet
MEF Technical Specifications
- MEF 2 – Requirements and Framework for Ethernet Service Protection
- MEF 3 – Circuit Emulation Service Definitions, Framework and Requirements in Metro Ethernet Networks
- Metro Ethernet Network Architecture Framework Part 1: Generic Framework
- Metro Ethernet Service Definition Phase 2
- EMS-NMS Information Model
- Implementation Agreement for the Emulation of PDH Circuits over Metro Ethernet Networks
- Abstract Test Suite for Ethernet Services at the UNI
- Ethernet Service Attributes Phase 2
- User Network Interface (UNI) Requirements and Framework
- Metro Ethernet Network Architecture Framework Phase 2: Ethernet Services Layer
- User Network Interface (UNI) Type 1 Implementation Agreement
- Abstract Test Suite for Traffic Management Phase 1
- MEF 15 – Requirements for Management of Metro Ethernet Phase 1 Network Elements
- MEF 16 – Ethernet Local Management Interface

A Generic Framework to Model, Simulate and Verify Genetic Regulatory Networks

Alejandro Arbeláez, Julian Gutiérrez, Carlos Olarte and Camilo Rueda
Pontificia Universidad Javeriana
Calle 18 No. 118-250, Cali, Colombia
aarbelaez@.co, {jg, caolarte, crueda}@.co

Abstract

Process calculi are formalisms to model concurrent systems. Their mathematical basis and compositional style make it possible to decompose a system into simple and well-defined processes. Interaction among them is formally defined by the semantics of the calculi. These characteristics allow the study of systems coming from different areas such as the arts, engineering and the sciences. In this paper we propose a generic framework to model, simulate and verify genetic regulatory networks, based on a non-deterministic timed concurrent constraint calculus. This framework provides a set of process definitions to model generic/parametric components in a biological context, a simulator to observe the system evolution in time, and some insights to perform formal proofs to verify and make inferences over the systems. An instantiation of the framework is presented by modeling the lactose operon.

Keywords: Process calculi, Concurrent Constraint Programming, ntcc, Genetic Regulatory Networks, Lac Operon

Resumen

Los cálculos de procesos son formalismos para modelar sistemas concurrentes. Sus fundamentos matemáticos y estilo de diseño composicional hacen posible descomponer un sistema en procesos simples y bien definidos.
La interacción entre los procesos es formalmente definida por la semántica de los cálculos. Estas características permiten estudiar sistemas provenientes de diferentes áreas tales como las artes, la ingeniería y las ciencias. En este artículo se propone un marco genérico para modelar, simular y verificar redes de regulación genética con el uso de un cálculo de procesos temporal y no determinístico basado en restricciones. Este marco provee un conjunto de definiciones de procesos para modelar componentes genéricos/paramétricos en un contexto biológico, un simulador para observar la evolución del sistema en el tiempo y algunas directrices para desarrollar pruebas formales para verificar y hacer inferencia sobre el sistema. Un caso de estudio es presentado con el modelado del operón de la lactosa, una red de regulación genética compleja.

Palabras claves: Cálculos de Procesos, Programación Concurrente por Restricciones, ntcc, Redes de Regulación Genética, Operón de la Lactosa

1 Introduction

The study of concurrent systems is often carried out with the aid of process calculi. These are very expressive formalisms centered on the notion of interaction. Systems are understood as interacting complex processes composed of smaller ones, following a compositional approach. Such an approach is encouraged by the (usually) few mathematical constructs provided by the calculus, which define how processes interact among themselves and with their environment. The mathematical foundations underlying process calculi allow one both to model and to verify properties of a system, thus providing a concrete design methodology for complex systems. These appealing properties of process calculi have been a strong motivation for their use in a wide spectrum of fields including distributed systems [10], systems biology [15], visual/object-oriented languages [16] and reactive systems [18], among others. The interest of the scientific community in process calculi is also reflected by the extensions
proposed in order to cope with a number of widely occurring concepts such as time ([10, 18]), mobility [8], probabilistic/stochastic behavior [13, 11], and partial information [17].

Process calculi have been previously used to model biological functions (see [5, 12]). Most of this work has been conducted using (extensions of) the π-calculus ([13, 2]) and the Ambient calculus ([14]). On the other hand, calculi devised for specific biological systems have also been proposed, for instance, calculi to model membranes ([1]), protein interaction ([3]) and reversibility in bio-molecular processes ([6]).

We are interested in the study of biological/molecular systems using process calculi following the concurrent constraint programming model (ccp) [17]. ccp is based on the concept of a constraint as an entity carrying partial information, i.e., conditions on the values that variables can take. Constraints are accumulated monotonically in a so-called store, an entity that contains all information produced by the system during its execution. In this way, the problem of finding higher-order biological function of systems can be taken up by relying on the fundamental mathematical approach those process calculi provide. This approach is justified by the following facts: (i) a clear separation among processes can be achieved by considering concurrent agents as basic components of programs, making possible a straightforward model refinement in context-dependent models; (ii) because of the declarative nature of ccp, only the constraints of the problems have to be stated, whereas system control (i.e., evolution) is formally defined by the semantics of the calculus; (iii) constraints can be seen as a representation of incomplete knowledge, an important consideration in biological domains where the exact function of several systems and mechanisms is still a matter of research; and finally, (iv) the construction of simulation tools and model verifiers can be done formally.

Models presented in this paper are built using a
non-deterministic timed extension of ccp called ntcc [10] and a stochastic extension of it [11]. These models allow us to express non-determinism and asynchronous and stochastic behavior. We aim to establish how such constraint-based process calculi can help to design a formal language suitable for modeling molecular processes. The objective is then to use this language to develop highly accurate models, discover from these models behavioral properties of biological systems, and develop semi-automated tools to verify and simulate large (complex) systems in molecular biology. We believe that languages and tools based on the ccp paradigm can thus constitute a valuable methodology to design and test coherent bio-molecular models.

The main contribution of this paper is to provide a generic framework to: (i) model genetic regulatory networks by using a set of process definitions to model biological components; (ii) observe the evolution of the system over time by means of a simulation tool executing ntcc programs; and (iii) make quantitative inferences, when a complete mathematical model of the system is not available, by means of formal proofs in the linear temporal logic associated with ntcc. In addition, we provide a complete model of the lactose operon (i.e., the lac operon), a non-trivial biological system not previously modeled using process calculi.

The rest of the paper is structured as follows: Section 2 gives some preliminaries about the ntcc calculus.
Additionally, a structural and functional description of the lac operon is given. Section 3 shows the generic and parametric process definitions provided by the framework and how they can be used to model the lac operon. Some graphics showing the quantitative measures taken from the simulator of the calculus are presented in Section 3.7. Section 4 is devoted to presenting how formal proofs can be performed in the framework and how they can be used to infer future behavior of the system. Finally, Section 5 concludes the paper and gives some research directions.

2 Computational and Biological Foundations

Here we present some of the theoretical background of our work. First, the main features of the ntcc process calculus are described. Later, an intuitive description of the Lac Operon is given. This system will be used as a case study in forthcoming sections.

2.1 ntcc: A timed process calculus

ntcc is a temporal concurrent constraint calculus suitable for modeling non-deterministic and asynchronous behavior. As such, it is particularly appropriate to model reactive and concurrent systems. One of the main features of this calculus is that it is equipped with a proof system for linear-temporal properties of ntcc processes. In this section we briefly describe the syntax and proof system of the ntcc calculus, referring the reader to [10, 21] for further details.

First we recall the notion of constraint system. Basically, a constraint system provides a signature from which syntactically denotable objects in the language, called constraints, can be constructed, and an entailment relation (|=) specifying interdependencies among such constraints. The underlying language L of the constraint system contains the symbols ¬, ∧, ⇒, ∃, true and false, which denote logical negation, conjunction, implication, existential quantification, and the always-true and always-false predicates, respectively.
Constraints, denoted by c, d, ..., are first-order formulae over L. We say that c entails d in ∆, written c |=∆ d (or just c |= d when no confusion arises), if c ⇒ d is true in all models of ∆. For operational reasons we shall require |= to be decidable.

In ntcc, time is divided into discrete intervals (or time units), each one of them having its own constraint store (or simply store). Intuitively, a store accumulates all the information available in the system at a given time. The basic actions for communication with the store are tell and ask operations. While the former adds new pieces of information to the store, the latter queries the store to check whether some information can be inferred from its current content. Moreover, synchronisation among processes is based only on these two actions. In this way, each time unit can be understood as a reactive entity, where a process P_i receives an input from the environment (i.e., a constraint). The process P_i is then executed considering this input, responding with some output (that is, new constraints) once no further processing over the store is possible.
Computation in the next time unit is then based on a residual process resulting from P_i and on new inputs provided by the environment.

Process Syntax

ntcc processes P, Q, ... ∈ Proc are built from constraints c ∈ C and variables x ∈ V in the underlying constraint system by the following syntax:

P, Q, ... ::= tell(c) | Σ_{i∈I} when c_i do P_i | P ∥ Q | local x in P | next(P) | unless c next(P) | !P | ⋆P

The only move or action of process tell(c) is to add the constraint c to the current store, thus making c available to other processes in the current time interval. The guarded choice Σ_{i∈I} when c_i do P_i, where I is a finite set of indexes, represents a process that, in the current time interval, non-deterministically chooses one of the P_j (j ∈ I) whose corresponding constraint c_j is entailed by the store. The chosen alternative, if any, precludes the others. If no choice is possible then the summation is precluded. We shall use "+" for binary summations.

Process P ∥ Q represents the parallel composition of P and Q. In one time unit (or interval) P and Q operate concurrently, "communicating" via the common store. We use Π_{i∈I} P_i, where I is a finite set, to denote the parallel composition of all P_i. Process local x in P behaves like P, except that all the information on x produced by P can only be seen by P, and the information on x produced by other processes cannot be seen by P. We abbreviate local x_1 in (local x_2 in (... (local x_n in P) ...)) as local x_1, x_2, ..., x_n in P.

The process next(P) represents the activation of P in the next time interval. Hence, a move of next(P) is a unit-delay of P. The process unless c next(P) is similar, but P will be activated only if c cannot be inferred from the current store. The "unless" processes add (weak) time-outs to the calculus, i.e., they wait one time unit for a piece of information c to be present and, if it is not, they trigger activity in the next time interval. We shall use next^n(P) as an abbreviation for next(next(... next(P) ...)), where next is repeated n times.

LTELL: tell(c) ⊢ c
LSUM: if P_i ⊢ A_i for all i ∈ I, then Σ_{i∈I} when c_i do P_i ⊢ ∨̇_{i∈I} (c_i ∧̇ A_i) ∨̇ ∧̇_{i∈I} ¬̇c_i
LPAR: if P ⊢ A and Q ⊢ B, then P ∥ Q ⊢ A ∧̇ B
LUNL: if P ⊢ A, then unless c next(P) ⊢ c ∨̇ ◦A
LREP: if P ⊢ A, then !P ⊢ □A
LLOC: if P ⊢ A, then local x in P ⊢ ∃̇x A
LSTAR: if P ⊢ A, then ⋆P ⊢ ♦A
LNEXT: if P ⊢ A, then next(P) ⊢ ◦A
LCONS: if P ⊢ A, then P ⊢ B if A ⇒̇ B

Table 1: A proof system for (linear-temporal) properties of ntcc processes

The operator ! is a delayed version of the replication operator for the π-calculus ([8]): !P represents P ∥ next(P) ∥ next^2(P) ∥ ..., i.e., unboundedly many copies of P, but one at a time. The replication operator is a way of defining infinite behavior through the time intervals.

The operator star (i.e., ⋆) allows us to express asynchronous behavior. The process ⋆P represents an arbitrarily long but finite delay for the activation of P. For example, ⋆tell(c) can be viewed as a message c that is eventually delivered, but there is no upper bound on the delivery time. We shall use ⋆^n P as an abbreviation of next^n(⋆P) to represent a delayed version of the operator star.

Using ntcc it is also possible to encode process definitions as procedures and recursion. We shall use a definition of the form A(x) def= P_x, where P_x is a process using a variable x. A "call" of the form A(c) would then launch a process P_x once the variable x is substituted by c. We can rely on the usual intuitions concerning procedure calls in a programming language. We shall use recursive process definitions of the form q(x) def= P_q, where q is the process name and P_q calls q only once, and such a call must be within the scope of a "next". As in [21] we consider call-by-value for variables in recursive process calls. Moreover, the encodings generalize easily to the case of definitions with an arbitrary number of parameters. These kinds of definitions do not add functionality to ntcc, since they can be defined in terms of the standard ntcc constructs.

Linear-temporal Logic in ntcc

The linear-temporal logic associated with ntcc is defined as follows. Formulae A, B, ... ∈ A are defined by the grammar:

A, B, ... ::= c | A ⇒̇ A | ¬̇A | ∃̇x A | ◦A | □A | ♦A

Here c denotes an arbitrary constraint which acts as an atomic proposition. Symbols ⇒̇, ¬̇ and ∃̇x represent linear-temporal logic implication, negation and existential
quantification. These symbols are not to be confused with the logic symbols ⇒, ¬ and ∃x of the constraint system. Symbols ◦, □ and ♦ denote the linear-temporal operators next, always and eventually. We use A ∨̇ B as an abbreviation of ¬̇A ⇒̇ B, and A ∧̇ B as an abbreviation of ¬̇(¬̇A ∨̇ ¬̇B).

The standard interpretation structures of linear-temporal logic are infinite sequences of states. In ntcc, states are represented with constraints, thus we consider as interpretations the elements of C^ω. When α ∈ C^ω is a model of A, we write α |= A. We shall say that P satisfies A if every infinite sequence that P can possibly output satisfies the property expressed by A. A relatively complete proof system for assertions P ⊢ A, whose intended meaning is that P satisfies A, is given in Table 1. We shall write P ⊢ A if there is a derivation of P ⊢ A in this system. Finally, the following lemma will be useful in derivations:

Lemma 1 (Nielsen et al. [10]) For every process P: 1. P ⊢ ˙true; 2. P ⊬ ˙false; 3. if P ⊢ A and P ⊢ B, then P ⊢ A ∧̇ B.

2.2 The Lac Operon: structure and behavior

The lac operon [7] is one of the most important genetic regulatory networks present in living cells. This regulatory system deals with the sources of energy needed to accomplish the functions of the cell. The genetic regulatory network related with the lac operon has been extensively studied by biologists due to its biological importance. Next, we will present a general description of the main features in the structure and behavior of the lac operon.

An operon is a genetic cluster comprising a control region and some structural genes. The control region determines the operon status. In particular, the lac operon has three genes in the structural region: LacZ, LacY and LacA (see Figure 1). Gene LacZ codifies for the β-galactosidase protein, which hydrolyses (a term used for some bio-molecular divisions) lactose into glucose and galactose. Gene LacY codifies for the permease protein. This molecule allows lactose outside the cell to move across the cell membrane to increase the
concentration levels of lactose inside the cell. Finally, gene LacA codifies for the β-galactoside transacetylase protein. The function of this protein is still undetermined, but biologists believe that it has no influence on the lac operon regulatory system. Another important gene related to the lac operon regulatory system is LacI. This gene codifies a protein that precludes activation of the operon (i.e., it is a so-called repressor protein).

In this genetic regulatory network we can identify two important regulatory processes: repression and induction. The former favors turning the genes off, while the latter favors the opposite behavior. The regulating mechanism enforced by the lac operon system is as follows: there is repression when a cell is in an environment rich in glucose. In this case, the repressor protein produced by LacI can bind to the control region, thus preventing RNA polymerase (an enzyme) from transcribing the operon. But when there is a lack of glucose in the environment, an induction process is triggered. In induction, a protein called CAP-cAMP is produced in the cell, helping RNA polymerase to transcribe the operon. In this situation, the β-galactosidase, permease and β-galactoside transacetylase proteins increase their concentration inside the cell. In addition, the concentration of internal lactose induces the formation of a molecule called allolactose. This sugar cooperates in the induction process by blocking the repressor proteins.

Figure 1: Lac Operon (control region and structural genes)

3 Genetic Regulatory Networks in ntcc

In this section we will present a set of ntcc processes to model the behavior of genetic regulatory networks. The lac operon regulatory system is used as a case study. First we explain how we model continuous systems with the discrete-temporal features of ntcc. Then, in Sections 3.2 and 3.3, formal ntcc definitions of molecular events and of regulation and status values in biological entities are given. The ntcc processes shown in these sections are the basis for the model
of the lac operon regulatory system given in subsequent sections. Finally, in Section 3.7 the whole model and some results of its simulation are presented.

3.1 Continuous systems in ntcc

Continuity is required to model this kind of system because its regulation is determined by the concentration levels of different biological entities along time. We consider two different kinds of continuity: persistence in the values of the variables, and continuous time. To model the former, we define a process that explicitly transfers the current value of a variable of the system from one ntcc time unit to the next. We shall use m_i and m′_i for the value of a variable in the current and the last time unit, respectively:

State_i(v_i) def= tell(m′_i = v_i) ∥ next(State_i(m_i))

This process is used to define the state of the system in every ntcc time unit:

State(ρ_1, ..., ρ_n) def= Π_{i∈I} (tell(m_i = ρ_i) ∥ next(State_i(ρ_i)))

where I is the set of indexes of variables in the biological system and ρ_i the initial value of m_i. The above process is also used to configure the system for real system simulations with parameters coming from actual biological measurements.

The temporal kind of continuity is achieved by considering many ntcc time units as "samples" of one system unit:

Time(t) def= tell(Ts = t) ∥ next(Time(t + Dt))

where Ts is the "continuous" time value of the system in the current ntcc time unit and Dt a constant value representing the resolution of the system. Lower values of Dt give better approximations of real continuous systems. Obviously, the value of Dt has strong practical consequences in system simulations. The following process represents the continuous behavior of the whole system:

Dynamic def= State(ρ_1, ..., ρ_n) ∥ Time(0.0)

A very important feature of Dynamic is its generality. This process is not restricted to any particular system, not even biological ones, so it can be used to model the dynamic behavior of many continuous systems.

3.2 Modeling molecular events

In molecular systems several events have
to be considered, such as pointing out when a group of molecules interacts with others, performs a specific task or produces a biological control signal. We shall use several discrete variables to indicate either the presence or absence of some molecular actions or events in models. The variables representing the events or actions described in this section will be called signaling variables in the rest of the paper. A generic ntcc process to model this kind of molecular behavior can be defined as follows:

Signal def= ! Π_{e∈E, svar∈S} (when e do next(tell(svar = 1)) ∥ unless e next tell(svar = 0))

where E is the set of constraints expressing molecular events and S the set of signaling variables in the system. This specification is not simply an if-then-else construct: unlike an if-then-else structure, process Signal can reason about the lack of information. Thus, it is always possible to determine svar regardless of whether constraint e holds in the store or not.

More complex signaling processes and variables can be constructed with the process presented above, e.g., a molecular event with delay conditions using temporal ntcc operators. In some cases, to achieve more accurate descriptions of these kinds of molecular behavior, stochastic processes are needed. A stochastic extension of ntcc recently proposed in [11] is effectively applied in our case study to model a particular binding process of molecules.

3.3 Modeling regulation and status value in biological entities

Most of the processes used to represent the dynamic behavior of a biological entity have a similar structure. They can be modeled as a regulated process controlled by a signaling variable. We define a parametric process Regulate_i to model the behavior of biological entity i under the control of a signaling variable. This parametric process can be constructed as follows:

Regulate_i(svar, P_i, N_i) def= when svar = 1 do P_i + when svar = 0 do N_i

In the above, process P_i is executed when the biological event marked by signaling variable svar occurs.
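Under the simplification that entailment is membership of a ground fact, the Signal pattern amounts to a total case split: the signaling variable for the next time unit is 1 exactly when the event can be inferred now (a sketch of ours; names are illustrative):

```python
# Sketch of the Signal pattern (our own rendering): in each time unit,
# the value svar takes in the NEXT unit is 1 if event e is entailed by
# the current store, and 0 otherwise -- the "unless" branch covers the
# absence of information, so svar is always determined.

def signal_step(store, event):
    """Value of the signaling variable in the next time unit."""
    return 1 if event in store else 0

assert signal_step({"lactose_high"}, "lactose_high") == 1  # "when" branch
assert signal_step(set(), "lactose_high") == 0             # "unless" branch
```

In full ntcc the "unless" branch fires when e cannot be inferred at all, which is weaker than e being false; this toy store cannot express that distinction, so the sketch only conveys the totality of the case split.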
Otherwise, process N_i is executed. Notice that operator "+" chooses between the two kinds of regulation for the biological entity i, so only one type of regulation is performed over i, since the chosen alternative precludes the other one.

To model status values (e.g., level of gene expression, location, etc.), we use the template Status_i to define a wide variety of biological situations in which we want to determine particular conditions in/of a biological entity:

Status_i def= !((Σ_{c∈C} when condition_c do next(tell(m_i = fc_i(m′_i)))) ∥ unless knownConditions next tell(m_i = m′_i))

where C is the set of indexes of conditions for changes in the status of a biological entity i. The new value is defined by a control function fc_i. When no condition for change holds, the state of the system remains the same in the next time unit.

3.4 Control Region and Structural Genes

In this section the control region and structural genes of a regulatory network are modeled. We use the Status_i process as a template to model the sites or places in which a control event may happen. We also propose a parametric process to model the behavior of the most important biological entity present in a genetic regulatory network: a single gene.

In the particular case of the lac operon, three places are relevant in the control process: the CAPsite, Operator and Promoter regions. We use discrete variables m_1, m_2 and m_3 to represent the operon status: process CAPsite to indicate induction, process Operator to indicate repression and process Promoter to indicate the level of transcription. We also use some signaling variables to determine when these biological processes occur. Processes CAPsite, Operator and Promoter are formally integrated in ControlRegion by the parallel composition operator:

ControlRegion def= CAPsite_1 ∥ Operator_2 ∥ Promoter_3

Process Gen_x below is a parametric ntcc specification to model the structure and behavior of a single gene. This specification is parameterized by constants representing the degradation and production
rates of the mRNAs and proteins produced in the transcription and translation of a gene. We consider three important entities: the level (i.e., status) of transcription and the concentrations of mRNAs and proteins produced by the gene. Process Gen_x is defined using the parametric/generic processes Regulate_i and Status_i:

GenStatus_i def= !((when tbegin = 1 ∧ tend = 0 do next(tell(m_i = m′_i + 1)) + when tbegin = 0 ∧ tend = 1 do next(tell(m_i = m′_i − 1))) ∥ unless tbegin = tend next tell(m_i = m′_i))

MRNA_j(p_j, d_j) def= Regulate_j(tbegin, next(tell(m_j = m′_j + p_j − Dt × (d_j × m′_j))), next(tell(m_j = m′_j − Dt × (d_j × m′_j))))

PROTEIN_k(p_k, d_k) def= Regulate_k(mrnah, next(tell(m_k = m′_k + Dt × (p_k × m′_j − d_k × m′_k))), next(tell(m_k = m′_k − Dt × (d_k × m′_k))))

Gen_x(p_j, d_j, p_k, d_k) def= GenStatus_i ∥ !MRNA_j(p_j, d_j) ∥ !PROTEIN_k(p_k, d_k)

where m_i, m_j and m_k are variables representing the status of the gene (i.e., level of expression), the mRNA concentration and the protein concentration, respectively. Moreover, d_j and d_k represent the rates of natural molecular degradation of mRNAs and proteins, respectively. The production rates of these biological entities are determined by the constants p_j and p_k and two signaling variables, tbegin and tend, which denote the starting and ending times of RNA polymerase gene transcription. Signaling variable mrnah is used to indicate when the mRNA concentration is "high enough" to begin the translation of the protein.

In the particular case of the lac operon, two processes are needed to model when RNA polymerase is placed between GenZ and GenY, and when it is placed between GenY and GenA (see Figure 1). This biological situation is modeled as a Status_i process:

DelayGG_i def= !((when tend_1 = 1 ∧ tbegin_2 = 0 do next(tell(m_i = m′_i + 1)) + when tend_1 = 0 ∧ tbegin_2 = 1 do next(tell(m_i = m′_i − 1))) ∥ unless tend_1 = tbegin_2 next tell(m_i = m′_i))

where tend_1 indicates the time when RNA polymerase finishes the transcription of the first gene and tbegin_2 the time when it begins the transcription of the second gene. Thus m_i is the number of molecules of RNA polymerase moving between the two
consecutive genes.

To present a complete model of the structural genes in the lac operon, we define GenZ, GenY, GenA, DelayZY and DelayYA in a similar way to the parametric and generic specifications proposed for Gen_x and DelayGG_i:

StructuralGenes def= GenZ(κ_1, ..., κ_4) ∥ DelayZY ∥ GenY(σ_1, ..., σ_4) ∥ DelayYA ∥ GenA(ρ_1, ..., ρ_4)

The above processes could be used to model the structure and behavior of different genes by changing the biological parameters and signaling variables in the model. In subsequent sections we present formal models of particular biological processes in the lac operon (i.e., induction, repression and hydrolysis).

3.5 Induction and Repression

Induction and repression are biological conditions inside the cell determining whether the lac operon turns on or off. In induction, the glucose concentration is low (i.e., signaling variable glucl = 1). This allows the concentration of a protein called cAMP to increase, thus also increasing that of the CAP-cAMP protein. This protein is the biological entity that enhances transcription in the lac operon. Levels of the CAP-cAMP protein are indirectly modeled from the explicit concentrations of CAP and cAMP. The concentrations of AMP and ADP are also modeled to calculate the value of cAMP inside the cell. To model induction, the ntcc processes CAMP, AMP, ADP and CAP are defined. Biological details about the parameters in these processes are omitted due to space restrictions (see [9] for a more complete account).

CAMP def= Regulate_5(glucl, next(tell(m_5 = m′_5 + Dt × (0.1m′_11 − 0.1001m′_5))), next(tell(m_5 = m′_5 − Dt × 0.1001m′_5)))

AMP def= Regulate_11(glucl, next(tell(m_11 = m′_11 + Dt × (0.1m′_5 + 0.1m′_12 − 0.2001m′_11))), next(tell(m_11 = m′_11 + Dt × (0.1m′_5 + 0.1m′_12 − 0.1001m′_11))))

ADP def= !next(tell(m_12 = m′_12 + Dt × (0.1m′_11 − 0.2m′_12)))

CAP def= !next(tell(m_4 = m′_4 + Dt × (1.0 − 0.1m′_4)))

Induction def= !CAMP ∥ !AMP ∥ ADP ∥ CAP

Process Repression below models the repressor gene, the repressor protein and the way in which the repressor protein binds to the lac operon. The repressor gene GenI is defined with the same ntcc specification used for
the other genes in the lac operon. The behavior of the repressor protein, modeled in Repressor, and its binding to the DNA of the lac operon are controlled by allolach, a signaling variable indicating when the allolactose concentration inside the cell reaches a threshold. When this happens, repressor and allolactose react, forming a biological complex that prevents the repressor from binding to the lac operon. Signaling variable allolach is defined with a stochastic process ρP (see [11]) to model probabilistic binding to the allolactose once the threshold is reached.

The binding process Binding includes three processes (i.e., OperatorBinding, DNABinding and NotBinding) to model the fact that the repressor could interact directly with the operator region or, with less probability, bind to the structural genes. It may also happen that the repressor does not bind to the lac operon. We formally express this kind of behavior in process Repression:

Repressor def= Regulate_16(allolach, next(tell(m_16 = m′_16 + Dt × (0.2m′_15 − m′_8 × m′_16))), next(tell(m_16 = m′_16 + Dt × (0.2m′_15 − m′_16))))

DNABinding def= Regulate_17(allolach, next(tell(m_17 = m′_17 − Dt × 0.1m′_17)), next(tell(m_17 = m′_17 + Dt × (0.0399m′_16 − 0.1m′_17))))

NotBinding def= Regulate_18(allolach, next(tell(m_18 = m′_18 − Dt × 0.1m′_18)), next(tell(m_18 = m′_18 + Dt × (0.001m′_16 − 0.1m′_18))))

OperatorBinding def= Regulate_7(allolach, next(tell(m_7 = m′_7 − Dt × 0.1m′_7)), next(tell(m_7 = m′_7 + Dt × (0.96m′_16 − 0.1m′_7))))

Binding def= DNABinding ∥ NotBinding ∥ OperatorBinding

Repression def= GenI(κ_1, ..., κ_4) ∥ !Repressor ∥ !Binding
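Since each Regulate branch above encodes one Euler step of a rate equation, the Induction subsystem can be cross-checked with a direct numerical simulation. The following sketch integrates the cAMP/AMP/ADP/CAP equations with the rates given in the text; Dt, the initial concentrations and the simulation length are our own assumptions, chosen only for illustration.

```python
# Euler-integration sketch of the Induction rate equations (assumed
# initial values; Dt plays the same role as the resolution constant
# in the Time process of Section 3.1).
Dt = 0.01

def step(m, glucl):
    cAMP, AMP, ADP, CAP = m
    if glucl:  # glucose low: induction branch of each Regulate process
        d_cAMP = 0.1 * AMP - 0.1001 * cAMP
        d_AMP  = 0.1 * cAMP + 0.1 * ADP - 0.2001 * AMP
    else:      # glucose high: degradation-dominated branch
        d_cAMP = -0.1001 * cAMP
        d_AMP  = 0.1 * cAMP + 0.1 * ADP - 0.1001 * AMP
    d_ADP = 0.1 * AMP - 0.2 * ADP
    d_CAP = 1.0 - 0.1 * CAP
    return [cAMP + Dt * d_cAMP, AMP + Dt * d_AMP,
            ADP + Dt * d_ADP, CAP + Dt * d_CAP]

m = [0.0, 1.0, 0.0, 0.0]    # assumed initial concentrations
for _ in range(10_000):     # 100 units of simulated "continuous" time
    m = step(m, glucl=True)
print(m)  # CAP approaches its steady state 1.0 / 0.1 = 10.0
```

Such a script reproduces only the deterministic drift of the model; the simulator of the calculus additionally handles the signaling variables and the stochastic binding process of Section 3.6.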
