Verification and Validation of Aircraft Health Management Systems

Verification and validation (V&V) of an aircraft health management (AHM) system is a complex technical challenge, but the process is indispensable for ensuring that the AHM system is stable and reliable. V&V of an AHM system must address two main aspects: first, the AHM system itself; second, the implementation of the AHM system. The AHM system must be built to satisfy the AHM requirements; for example, it must respond within a given time and with a specified level of confidence. Implementing an AHM system requires combining multiple techniques, including data analysis and processing, fault diagnosis and isolation, and AI-based condition assessment and planning. This article focuses on the internal and external factors that affect AHM system V&V and the technical challenges those factors introduce.
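The requirement quoted above, that the system respond within a given time with a specified confidence, is the kind of quantitative statement a V&V activity must turn into a concrete test. Purely as an illustration (the deadline, target probability, confidence level, and test data below are invented, not taken from any AHM standard), a minimal statistical check of such a timing requirement might look like this:

```python
from statistics import NormalDist

def timing_requirement_met(latencies, deadline=2.0, required_prob=0.95, confidence=0.95):
    """Illustrative check of a requirement of the form:
    'the system responds within `deadline` seconds with probability >= `required_prob`'.
    Uses a one-sided normal-approximation lower confidence bound on the
    observed proportion of responses that met the deadline."""
    n = len(latencies)
    p_hat = sum(1 for t in latencies if t <= deadline) / n
    z = NormalDist().inv_cdf(confidence)
    lower_bound = p_hat - z * ((p_hat * (1 - p_hat) / n) ** 0.5)
    return lower_bound >= required_prob, lower_bound

if __name__ == "__main__":
    import random
    random.seed(1)
    # Invented test data: 500 simulated response times with mean 0.4 s.
    samples = [random.expovariate(1 / 0.4) for _ in range(500)]
    ok, bound = timing_requirement_met(samples)
    print(f"requirement met: {ok}; lower bound on P(response <= deadline) = {bound:.3f}")
```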
1 V&V of Existing Software Systems

1.1 V&V of Avionics

Software on commercial aircraft is certified by the U.S. Federal Aviation Administration (FAA) against RTCA document DO-178B. Reference [1] explains the RTCA DO-178B document: it describes the intent and rationale of DO-178B and discusses its relationship to U.S. federal regulations. DO-178B requires the software verification process to cover the following aspects:
a. verification of the software development process
b. review of the data produced during the software development life cycle
c. verification of software functionality
d. requirements-based testing and analysis
e. robustness testing
f. structural coverage analysis

1.2 V&V of NASA Software Systems

On July 8, 2004, NASA issued NASA-STD-8719.13b for the verification of software systems. The document was updated on November 11, 2009 and superseded by its latest replacement, NPR 7150.2A. This approach to V&V and certification is similar to that used in other domains.
1.3 V&V of Spacecraft Fault Protection

Fault protection (FP) software on NASA robotic spacecraft is a special kind of health management system. It goes beyond an ordinary health management system in two respects, possessing both reasoning capability and self-repair capability, rather than simply avoiding faults.
STEM Subject: Miscellaneous

Moss, K., Crowley, M. 2011. Effective learning in science: The use of personal response systems with a wide range of audiences. Computers & Education. Vol. 56, 36-43.
This paper describes the flexibility of Personal Response Systems (PRSs), also known as 'clickers' or electronic voting systems (EVS), as part of strategies to support students' learning in science. Whilst variants of this technology began to appear 12 years ago, there is now a steadily increasing adoption of these systems within higher education, including science programmes, and this use has grown significantly in the last six years. They have previously been shown to offer a measurable learning benefit. Far less work has been done with these systems at school level. In this practitioner-based paper, the broad range of practical uses for these systems is described in a variety of formal and informal learning situations – from testing the understanding of science concepts (from primary aged school children up to physics undergraduates), to undertaking evaluation of events as well as public participation in data collection for research on attitudes to careers. In addition, the data collected on such handsets can be mapped to demographic factors such as gender and age, yielding further layers of analysis. Overall this is a highly flexible and transferable approach to the use of interactive technology for engaging learners of all ages as well as carrying out research.

Mayer, R.E., Stull, S., Deleeuw, K., Almeroth, K., Bimber, B., Chun, D., Bulger, M., Campbell, J., Knight, A., Zhang, H. 2009. Clickers in college classrooms: Fostering learning with questioning methods in large lecture classes. Contemporary Educational Psychology. Vol. 34, 51-57.
What can be done to promote student–instructor interaction in a large lecture class? One approach is to use a personal response system (or "clickers") in which students press a button on a hand-held remote control device corresponding to their answer to a multiple choice question projected on a screen, then see the class distribution of answers on a screen, and discuss the thinking that leads to the correct answer. Students scored significantly higher on the course exams in a college-level educational psychology class when they used clickers to answer 2 to 4 questions per lecture (clicker group), as compared to an identical class with in-class questions presented without clickers (no-clicker group, d = 0.38) or with no in-class questions (control group, d = 0.40). The clicker treatment produced a gain of approximately 1/3 of a grade point over the no-clicker and control groups, which did not differ significantly from each other. Results are consistent with the generative theory of learning, which predicts students in the clicker group are more cognitively engaged during learning.

Knapp, F.A., Desrochers, M.N. 2009. An experimental evaluation of the instructional effectiveness of a student response system: A comparison with constructed overt responding. International Journal of Teaching and Learning in Higher Education. Vol. 21, 36-46.
Student response systems (SRSs) are increasingly being used in the classroom. However, there have been few well-controlled experimental evaluations to determine whether students benefit academically from these instructional tools. Additionally, comparisons of SRSs with other interactive methods have not often been conducted. We compared SRS, Constructed Overt Response (COR), passive, and control conditions to determine their effects on learning and affect. We found that students performed better in the interactive conditions—SRS and COR—than the other conditions. Participants' gain and retention of gain scores in the SRS condition were lower than those in the COR condition. Participants in the SRS condition perceived their condition as more enjoyable than those in the passive condition and more useful than those in the control condition. Additional research questions are raised about how these interactive methods may best improve student learning.

Mazur, E. 2009. Farewell, lecture? Science. Vol. 323, 50-51.
Discussions of education are generally predicated on the assumption that we know what education is. I hope to convince you otherwise by recounting some of my own experiences. When I started teaching introductory physics to undergraduates at Harvard University, I never asked myself how I would educate my students. I did what my teachers had done—I lectured. I thought that was how one learns. Look around anywhere in the world and you'll find lecture halls filled with students and, at the front, an instructor. This approach to education has not changed since before the Renaissance and the birth of scientific inquiry. Early in my career I received the first hints that something was wrong with teaching in this manner, but I had ignored it. Sometimes it's hard to face reality.

Ghosh, S., Renna, F. 2009. Using electronic response systems in economics classes. Journal of Economic Education. Fall, 354-367.
College instructors and students participated in a pilot project at the University of Akron to enhance student learning through the use of a common teaching pedagogy, peer instruction. The teaching pedagogy was supported by the use of technology, an electronic personal response system, which recorded student responses. The authors report their experiences in using this technology-enhanced teaching pedagogy and provide another example of an active and collaborative learning tool that instructors can use to move beyond "chalk and talk." Preliminary survey results from students participating in this pilot project are also reported.

Salemi, M.K. 2009. Clickenomics: Using a classroom response system to increase student engagement in a large-enrollment principles of economics course. Journal of Economic Education. Fall, 385-406.
One of the most important challenges facing college instructors of economics is helping students engage. Engagement is particularly important in a large-enrollment Principles of Economics course, where it can help students achieve a long-lived understanding of how economists use basic economic ideas to look at the world. The author reports how instructors can use Classroom Response Systems (clickers) to promote engagement in the Principles course. He draws heavily on his own experience in teaching a one-semester Principles course at the University of North Carolina at Chapel Hill but also reports on how others have used clickers to promote engagement. He concludes with evidence that students find clickers very beneficial and with an assessment of the costs and benefits of adopting a clicker system.

Beckert, T.E., Fauth, E., Olsen, K. 2009. Clicker satisfaction for students in human development: Differences for class type, prior exposure, and student talkativity. North American Journal of Psychology. Vol. 11, 599-612.
Clicker technology is growing in popularity in psychology and human development classes. It allows all students to provide instant feedback to instructor inquiry by using radio-frequency remote voting. The goal of this study was to determine the degree to which exposure, class type, and self-reported level of verbal interaction related to user satisfaction. One hundred seventy human development students participating in classrooms with clicker technology completed a 36-question clicker satisfaction survey. Overall, students were satisfied with the use of clickers. Specifically, students using clickers in multiple classrooms and in upper-division classes indicated higher levels of satisfaction. Additionally, students who self-reported to be less likely to comment verbally in class indicated higher levels of satisfaction with clicker use.

Stowell, J.R., Nelson, J.M. 2007. Benefits of electronic audience response systems on student participation, learning, and emotion. Teaching of Psychology. Vol. 34, 253-258.
We compared an electronic audience response system (clickers) to standard lecture, hand-raising, and response card methods of student feedback in simulated introductory psychology classes. After hearing the same 30-min psychology lecture, participants in the clicker group had the highest classroom participation, followed by the response card group, both of which were significantly higher than the hand-raising group. Participants in the clicker group also reported greater positive emotion during the lecture and were more likely to respond honestly to in-class review questions.

Roschelle, J., Penuel, W.R., Abrahamson, L. 2004. Proceedings from the Annual Meeting of the American Educational Research Association 04: Classroom response and communication systems: Research review and theory. San Diego, CA.
In How People Learn, Bransford and colleagues (National Research Council, 1999) cite classroom response system technology and the related pedagogy as one of the most promising innovations for transforming classrooms to be more learner-, knowledge-, assessment-, and community-centered. As a step towards guiding practice and advancing research, we present our review of the research on this and more advanced, but related, technologies, particularly with regard to the popular use of these systems to enhance questioning and feedback. We also formulate tentative theoretical connections to a broader scientific literature that could explain how pedagogy and technology together realize multiple desirable outcomes.

Wit, E. 2003. Who wants to be… The use of a personal response system in statistics teaching. MSOR Connections. Vol. 3, 14-21.
Service courses of statistics can be among the most recalcitrant. Undergraduate students do not always see immediately the relevance of the course to their own field, so that interaction with them tends to be difficult. Add on top of that the large class size, and interactive teaching may seem impossible. The development of handsets as used in Who Wants to Be a Millionaire? has proven to be a possible tool to enhance interaction and stimulate learning. In this article we describe this personal response system (PRS) and its implementation within a statistics service course to first-year psychology students.
A New Surgical Technology: the Water Jet Scalpel

Chinese Journal of Medical Physics, Vol. 20, No. 4, October 2003

A New Surgical Technology: the Water Jet Scalpel
XIE Danmei (Library of Chongqing Medical University, Chongqing 400016, China)

Abstract: This article introduces the water jet scalpel, a surgical technology that has emerged in recent years, and its applications in hepatobiliary, maxillofacial, orthopedic, neurological, urological, otorhinolaryngological and ophthalmic surgery.

Keywords: water jet scalpel; jet pressure; nozzle diameter; fluid absorption
CLC number: R318.6; Document code: A; Article ID: 1005-202X(2003)04-0274-02

High-pressure water jets ejected from small orifices were originally used in industry as tools for cleaning, drilling and cutting. Such jets can cause extensive internal damage to the human body while showing only minimal external evidence [1-2]. The first medical application of the high-pressure water jet, for organ dissection, was reported in 1980 [2]. Initially the technique was used mainly in hepatobiliary surgery, and it later spread to maxillofacial surgery, plastic surgery, urology, orthopedics, otorhinolaryngology, ophthalmology, trauma surgery and other surgical fields. The technique is now called the water jet scalpel. It can be used in the conventional open manner or in combination with an endoscope; when combined with an endoscope it is called a hydrolaparoscope [3].

1 Potential Problems of the New Technique

As with the adoption of any new technology, the water jet scalpel encountered some problems at the start of its medical application. These include: (1) controlling the cutting depth; (2) contamination of the operative field by detached tissue fragments and free cells, a particular concern during tumor resection because of possible tumor spread and seeding; (3) optical contamination of the endoscope by back-sprayed mist when the device is used endoscopically [4]; (4) venous air embolism [5]; (5) fluid absorption [6]; (6) injury to blood vessels and nerves [7-8]; (7) sterility of the instrument [9].

2 Main Technical Parameters of the New Device

The main control parameters of the new device are the jet pressure (tens of bar) and the nozzle diameter (a few hundred micrometers) [8,10-13].
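For a rough sense of the scale these two parameters set, the jet velocity can be estimated from the driving pressure with Bernoulli's relation, v ≈ sqrt(2·Δp/ρ), and the ideal volumetric flow from the nozzle cross-section. The sketch below uses arbitrary example values (30 bar, a 0.3 mm nozzle, water), not figures from the cited studies, and ignores nozzle losses, so it gives only an order-of-magnitude upper bound:

```python
import math

# Example values only: a mid-range pressure and nozzle size of the order quoted above.
pressure_pa = 30e5        # 30 bar driving pressure, in pascals
nozzle_d_m  = 300e-6      # 300 micrometre nozzle diameter
rho         = 1000.0      # density of water, kg/m^3

# Ideal (Bernoulli) jet velocity: all pressure head converted to kinetic energy.
velocity = math.sqrt(2.0 * pressure_pa / rho)            # m/s

# Ideal volumetric flow through the nozzle cross-section.
area = math.pi * (nozzle_d_m / 2.0) ** 2                 # m^2
flow_ml_per_min = velocity * area * 1e6 * 60.0           # mL/min

print(f"jet velocity ≈ {velocity:.0f} m/s, flow ≈ {flow_ml_per_min:.0f} mL/min")
```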
Text Joins in an RDBMS for Web Data Integration

Text Joins in an RDBMS for Web Data IntegrationLuis Gravano Panagiotis G.Ipeirotis Nick Koudas Divesh Srivastava Columbia University AT&T Labs–Research {gravano,pirot}@{koudas,divesh}@ABSTRACTThe integration of data produced and collected across autonomous, heterogeneous web services is an increasingly important and chal-lenging problem.Due to the lack of global identifiers,the same entity(e.g.,a product)might have different textual representations across databases.Textual data is also often noisy because of tran-scription errors,incomplete information,and lack of standard for-mats.A fundamental task during data integration is matching of strings that refer to the same entity.In this paper,we adopt the widely used and established cosine similarity metric from the information retrievalfield in order to identify potential string matches across web sources.We then use this similarity metric to characterize this key aspect of data inte-gration as a join between relations on textual attributes,where the similarity of matches exceeds a specified puting an exact answer to the text join can be expensive.For query process-ing efficiency,we propose a sampling-based join approximation strategy for execution in a standard,unmodified relational database management system(RDBMS),since more and more web sites are powered by RDBMSs with a web-based front end.We implement the join inside an RDBMS,using SQL queries,for scalability and robustness reasons.Finally,we present a detailed performance evaluation of an im-plementation of our algorithm within a commercial RDBMS,us-ing real-life data sets.Our experimental results demonstrate the efficiency and accuracy of our techniques.Categories and Subject DescriptorsH.2.5[Database Management]:Heterogeneous Databases;H.2.4 [Database Management]:Systems—Relational databases,Tex-tual databases;H.2.8[Database Management]:Database Appli-cations—Data miningGeneral TermsAlgorithms,Measurement,Performance,ExperimentationKeywordstext indexing,data cleaning,approximate text matching1.INTRODUCTIONThe integration of information from heterogeneous web sources is of central interest for applications such as catalog data integra-tion and warehousing of web data(e.g.,job advertisements and an-nouncements).Such data is typically textual and can be obtained from disparate web sources in a variety of ways,including web Copyright is held by the author/owner(s).WWW2003,May20–24,2003,Budapest,Hungary.ACM1-58113-680-3/03/0005.site crawling and direct access to remote databases via web proto-cols.The integration of such web data exhibits many semantics-and performance-related challenges.Consider a price-comparison web site,backed by a database,that combines product information from different vendor web sites and presents the results under a uniform interface to the user.In such a situation,one cannot assume the existence of global identifiers (i.e.,unique keys)for products across the autonomous vendor web sites.This raises a fundamental problem:different vendors may use different names to describe the same product.For example,a ven-dor might list a hard disk as“Western Digital120Gb7200rpm,”while another might refer to the same disk as“Western Digi r al HDD120Gb”(due to a spelling mistake)or even as“WD120Gb 7200rpm”(using an abbreviation).A simple equality comparison on product names will not properly identify these descriptions as referring to the same entity.This could result in the same product entity from different vendors being treated as separate products, defeating the purpose of 
the price-comparison web site.To effec-tively address the integration problem,one needs to match multiple textual descriptions,accounting for:•erroneous information(e.g.,typing mistakes)•abbreviated,incomplete or missing information•differences in information“formatting”due to the lack of standard conventions(e.g.,for addresses)or combinations thereof.Any attempt to address the integration problem has to specify a measure that effectively quantifies“closeness”or“similarity”be-tween string attributes.Such a similarity metric can help establish that“Microsoft Windows XP Professional”and“Windows XP Pro”correspond to the same product across the web sites/databases,and that these are different from the“Windows NT”product.Many ap-proaches to data integration use a text matching step,where sim-ilar textual entries are matched together as potential duplicates. Although text matching is an important component of such sys-tems[1,21,23],little emphasis has been paid on the efficiency of this operation.Once a text similarity metric is specified,there is a clear require-ment for algorithms that process the data from the multiple sources to identify all pairs of strings(or sets of strings)that are sufficiently similar to each other.We refer to this operation as a text join.To perform such a text join on data originating at different web sites, we can utilize“web services”to fully download and materialize the data at a local relational database management system(RDBMS). Once this materialization has been performed,problems and incon-sistencies can be handled locally via text join operations.It is de-sirable for scalability and effectiveness to fully utilize the RDBMS capabilities to execute such operations.In this paper,we present techniques for performing text joins ef-ficiently and robustly in an unmodified RDBMS.Our text joins rely on the cosine similarity metric[20],which has been successfully used in the past in the WHIRL system[4]for a similar data inte-gration task.Our contributions include:•A purely-SQL sampling-based strategy to compute approxi-mate text joins;our technique,which is based on the approxi-mate matrix multiplication algorithm in[2],can be fully exe-cuted within standard RDBMSs,with no modification of the underlying query processing engine or index infrastructure.•A thorough experimental evaluation of our algorithms,in-cluding a study of the accuracy and performance of our ap-proach against other applicable strategies.Our experiments use large,real-life data sets.•A discussion of the merits of alternative string similarity met-rics for the definition of text joins.The remainder of this paper is organized as follows.Section2 presents background and notation necessary for the rest of the dis-cussion,and introduces a formal statement of our problem.Sec-tion3presents SQL statements to preprocess relational tables so that we can apply the sampling-based text join algorithm of Sec-tion4.Then,Section5presents the implementation of the text join algorithm in SQL.A preliminary version of Sections3and5ap-pears in[12].Section6reports a detailed experimental evaluation of our techniques in terms of both accuracy and performance,and in comparison with other applicable approaches.Section7discusses the relative merits of alternative string similarity metrics.Section8 reviews related work.Finally,Section9concludes the paper and discusses possible extensions of our work.2.BACKGROUND AND PROBLEMIn this section,wefirst provide notation and background for text joins,which we follow with a formal 
definition of the problem on which we focus in this paper.We denote withΣ∗the set of all strings over an alphabetΣ.Each string inΣ∗can be decomposed into a collection of atomic“enti-ties”that we generally refer to as tokens.What constitutes a token can be defined in a variety of ways.For example,the tokens of a string could simply be defined as the“words”delimited by special characters that are treated as“separators”(e.g.,‘’).Alternatively, the tokens of a string could correspond to all of its q-grams,which are overlapping substrings of exactly q consecutive characters,for a given q.Our forthcoming discussion treats the term token as generic,as the particular choice of token is orthogonal to the design of our ter,in Section6we experiment with different token definitions,while in Section7we discuss the effect of token choice on the characteristics of the resulting similarity function. Let R1and R2be two relations with the same or different at-tributes and schemas.To simplify our discussion and notation we assume,without loss of generality,that we assess similarity be-tween the entire sets of attributes of R1and R2.Our discussion extends to the case of arbitrary subsets of attributes in a straight-forward way.Given tuples t1∈R1and t2∈R2,we assume that the values of their attributes are drawn fromΣ∗.We adopt the widely used vector-space retrieval model[20]from the information retrievalfield to define the textual similarity between t1and t2. Let D be the(arbitrarily ordered)set of all unique tokens present in all values of attributes of both R1and R2.According to the vector-space retrieval model,we conceptually map each tuple t∈R i to a vector v t∈ |D|.The value of the j-th component v t(j) of v t is a real number that corresponds to the weight of the j-th token of D in v t.Drawing an analogy with information retrieval terminology,D is the set of all terms and v t is a document weight vector.Rather than developing new ways to define the weight vector v t for a tuple t∈R i,we exploit an instance of the well-established tf.idf weighting scheme from the information retrievalfield.(tf.idf stands for“term frequency,inverse document frequency.”)Our choice is further supported by the fact that a variant of this gen-eral weighting scheme has been successfully used for our task by Cohen’s WHIRL system[4].Given a collection of documents C,a simple version of the tf.idf weight for a term w and a document d is defined as tf w log(id f w),where tf w is the number of times that w appears in document d and id f w is|C|w,where n w is the num-ber of documents in the collection C that contain term w.The tf.idf weight for a term w in a document is high if w appears a large num-ber of times in the document and w is a sufficiently“rare”term in the collection(i.e.,if w’s discriminatory power in the collection is potentially high).For example,for a collection of company names, relatively infrequent terms such as“AT&T”or“IBM”will have higher idf weights than more frequent terms such as“Inc.”For our problem,the relation tuples are our“documents,”and the tokens in the textual attribute of the tuples are our“terms.”Consider the j-th token w in D and a tuple t from relation R i. 
Then $tf_w$ is the number of times that $w$ appears in $t$. Also, $idf_w$ is $\frac{|R_i|}{n_w}$, where $n_w$ is the total number of tuples in relation $R_i$ that contain token $w$. The tf.idf weight for token $w$ in tuple $t \in R_i$ is $v_t(j) = tf_w \log(idf_w)$. To simplify the computation of vector similarities, we normalize vector $v_t$ to unit length in the Euclidean space after we define it. The resulting weights correspond to the impact of the terms, as defined in [24]. Note that the weight vectors will tend to be extremely sparse for certain choices of tokens; we shall seek to utilize this sparseness in our proposed techniques.

DEFINITION 1. Given tuples $t_1 \in R_1$ and $t_2 \in R_2$, let $v_{t_1}$ and $v_{t_2}$ be their corresponding normalized weight vectors and let $D$ be the set of all tokens in $R_1$ and $R_2$. The cosine similarity (or just similarity, for brevity) of $v_{t_1}$ and $v_{t_2}$ is defined as $sim(v_{t_1}, v_{t_2}) = \sum_{j=1}^{|D|} v_{t_1}(j)\, v_{t_2}(j)$.

Since vectors are normalized, this measure corresponds to the cosine of the angle between vectors $v_{t_1}$ and $v_{t_2}$, and has values between 0 and 1. The intuition behind this scheme is that the magnitude of a component of a vector expresses the relative "importance" of the corresponding token in the tuple represented by the vector. Intuitively, two vectors are similar if they share many important tokens. For example, the string "ACME" will be highly similar to "ACME Inc," since the two strings differ only on the token "Inc," which appears in many different tuples, and hence has low weight. On the other hand, the strings "IBM Research" and "AT&T Research" will have lower similarity as they share only one relatively common term. The following join between relations $R_1$ and $R_2$ brings together the tuples from these relations that are "sufficiently close" to each other, according to a user-specified similarity threshold $\phi$:

DEFINITION 2. Given two relations $R_1$ and $R_2$, together with a similarity threshold $0 < \phi \le 1$, the text join $R_1 \bowtie_\phi R_2$ returns all pairs of tuples $(t_1, t_2)$ such that $t_1 \in R_1$ and $t_2 \in R_2$, and $sim(v_{t_1}, v_{t_2}) \ge \phi$.

The text join "correlates" two relations for a given similarity threshold $\phi$. It can be easily modified to correlate arbitrary subsets of attributes of the relations. In this paper, we address the problem of computing the text join of two relations efficiently and within an unmodified RDBMS:

PROBLEM 1. Given two relations $R_1$ and $R_2$, together with a similarity threshold $0 < \phi \le 1$, we want to efficiently compute (an approximation of) the text join $R_1 \bowtie_\phi R_2$ using "vanilla" SQL in an unmodified RDBMS.

In the sequel, we first describe our methodology for deriving, in a preprocessing step, the vectors corresponding to each tuple of relations $R_1$ and $R_2$ using relational operations and representations. We then present a sampling-based solution for efficiently computing the text join of the two relations using standard SQL in an RDBMS.

3. TUPLE WEIGHT VECTORS

In this section, we describe how we define auxiliary relations to represent tuple weight vectors, which we later use in our purely-SQL text join approximation strategy. As in Section 2, assume that we want to compute the text join $R_1 \bowtie_\phi R_2$ of two relations $R_1$ and $R_2$. $D$ is the ordered set of all the tokens that appear in $R_1$ and $R_2$. We use SQL expressions to create the weight vector associated with each tuple in the two relations. Since – for some choice of tokens – each tuple is expected to contain only a few of the tokens in $D$, the associated weight vector is sparse. We exploit this sparseness and represent the weight vectors by storing only the tokens with non-zero weight. Specifically, for a choice of tokens (e.g., words or q-grams), we create the following relations for a
relation R i:•RiTokens(tid,token):Each tuple(tid,w)is associated with an occurrence of token w in the R i tuple with id tid.This relation is populated by inserting exactly one tuple(tid,w) for each occurrence of token w in a tuple of R i with tuple id tid.This relation can be implemented in pure SQL and the implementation varies with the choice of tokens.(See[10] for an example on how to create this relation when q-grams are used as tokens.)•RiIDF(token,idf):A tuple(w,id f w)indicates that token w has inverse document frequency id f w(Section2)in relation R i.The SQL statement to populate relation RiIDF is shown in Figure1(a).This statement relies on a“dummy”relation RiSize(size)(Figure1(f))that has just one tuple indicating the number of tuples in R i.•RiTF(tid,token,tf):A tuple(tid,w,tf w)indicates that token w has term frequency tf w(Section2)for R i tuple with tuple id tid.The SQL statement to populate relation RiTF is shown in Figure1(b).•RiLength(tid,len):A tuple(tid,l)indicates that the weight vector associated with R i tuple with tuple id tid has a Eu-clidean norm of l.(This relation is used for normalizing weight vectors.)The SQL statement to populate relation RiLength is shown in Figure1(c).•RiWeights(tid,token,weight):A tuple(tid,w,n)indicates that token w has normalized weight n in R i tuple with tuple id tid.The SQL statement to populate relation RiWeights is shown in Figure1(d).This relation materializes a compact representation of thefinal weight vector for the tuples in R i.•RiSum(token,total):A tuple(w,t)indicates that token w hasa total added weight t in relation R i,as indicated in relationRiWeights.These numbers are used during sampling(see Section4).The SQL statement to populate relation RiSum is shown in Figure1(e).Given two relations R1and R2,we can use the SQL statements in Figure1to generate relations R1Weights and R2Weights with a compact representation of the weight vector for the R1and R2 tuples.Only the non-zero tf.idf weights are stored in these tables. Interestingly,RiWeights and RiSum are the only tables that need to be preserved for the computation of R1 IφR2that we describe in the remainder of the paper:all other tables are just necessary to construct RiWeights and RiSum.The space overhead introduced by these tables is moderate.Since the size of RiSum is bounded by the size of RiWeights,we just analyze the space requirements for RiWeights.Consider the case where q-grams are the tokens of choice.(As we will see,a good value is q=3.)Then each tuple R i.t j of relation R i can contribute up to approximately|R i.t j|q-grams to relation RiWeights,where|R i.t j|is the number of characters in R i.t j.Furthermore,each tuple in RiWeights consists of a tuple id tid,the actual token(i.e.,q-gram in this case),and its associated weight.Then,if C bytes are needed to represent tid and weight, the total size of relation RiWeights will not exceed|R i|j=1(C+q)·|R i.t j|=(C+q)·|R i|j=1|R i.t j|,which is a(small)constant times the size of the original table R i.If words are used as the token of choice,then we have at most|R i.t j|tokens per tuple in R i.Also,to store the token attribute of RiWeights we need no more than one byte for each character in the R i.t j tuples.Therefore,we can bound the size of RiWeights by1+C times the size of R i. 
Again,in this case the space overhead is linear in the size of the original relation R i.Given the relations R1Weights and R2Weights,a baseline ap-proach[13,18]to compute R1 IφR2is shown in Figure2.This SQL statement performs the text join by computing the similar-ity of each pair of tuples andfiltering out any pair with similar-ity less than the similarity thresholdφ.This approach produces an exact answer to R1 IφR2forφ>0.Unfortunately,as we will see in Section6,finding an exact answer with this approach is prohibitively expensive,which motivates the sampling-based tech-nique that we describe next.4.SAMPLING-BASED TEXT JOINSThe result of R1 IφR2only contains pairs of tuples from R1and R2with similarityφor ually we are interested in high values for thresholdφ,which should result in only a few tuples from R2typically matching each tuple from R1.The baseline ap-proach in Figure2,however,calculates the similarity of all pairs of tuples from R1and R2that share at least one token.As a result, this baseline approach is inefficient:most of the candidate tuple pairs that it considers do not make it to thefinal result of the text join.In this section,we describe a sampling-based technique[2] to execute text joins efficiently,drastically reducing the number of candidate tuple pairs that are considered during query processing. The sampling-based technique relies on the following intuition: R1 IφR2could be computed efficiently if,for each tuple t q of R1, we managed to extract a sample from R2containing mostly tuples suspected to be highly similar to t q.By ignoring the remaining (useless)tuples in R2,we could approximate R1 IφR2efficiently. The key challenge then is how to define a sampling strategy that leads to efficient text join executions while producing an accurate approximation of the exact query results.The discussion of the technique is organized as follows:•Section4.1shows how to sample the tuple vectors of R2to estimate the tuple-pair similarity values.•Section4.2describes an efficient algorithm for computing an approximation of the text join.The sampling algorithm described in this section is an instance of the approximate matrix multiplication algorithm presented in[2], which computes an approximation of the product A=A1·...·A n, where each A i is a numeric matrix.(In our problem,n=2.)The actual matrix multiplication A =A2·...·A n happens during a preprocessing,off-line step.Then,the on-line part of the algorithm works by processing the matrix A1row by row.4.1Token-Weighted SamplingConsider tuple t q∈R1with its associated token weight vector v tq,and each tuple t i∈R2with its associated token weight vector v ti.When t q is clear from the context,to simplify the notation we useσi as shorthand for sim(v tq,v ti).We extract a sample of R2 tuples of size S for t q as follows:•Identify each token j in t q that has non-zero weight v tq(j), 1≤j≤|D|.INSERT INTO RiIDF(token,idf)SELECT T.token,LOG(S.size)-LOG(COUNT(UNIQUE(*)))FROM RiTokens T,RiSize S GROUP BY T.token,S.size INSERT INTO RiTF(tid,token,tf)SELECT T.tid,T.token,COUNT(*)FROM RiTokens TGROUP BY T.tid,T.token (a)Relation with token idf counts(b)Relation with token tf countsINSERT INTO RiLength(tid,len)SELECT T.tid,SQRT(SUM(I.idf*I.idf*T.tf*T.tf))FROM RiIDF I,RiTF T WHERE I.token =T.token GROUP BY T.tidINSERT INTO RiWeights(tid,token,weight)SELECT T.tid,T.token,I.idf*T.tf/L.len FROM RiIDF I,RiTF T,RiLength L WHERE I.token =T.token AND T.tid =L.tid (c)Relation with weight-vector lengths (d)Final relation with normalized tuple weight 
vectors INSERT INTO RiSum(token,total)SELECT R.token,SUM(R.weight)FROM RiWeights R GROUP BY R.tokenINSERT INTO RiSize(size)SELECT COUNT(*)FROM Ri (e)Relation with total token weights(f)Dummy relation used to create RiIDFFigure 1:Preprocessing SQL statements to create auxiliary relations for relation R i .SELECTr1w.tid AS tid1,r2w.tid AS tid2FROM R1Weights r1w,R2Weights r2w WHERE r1w.token =r2w.token GROUP BY r1w.tid,r2w.tidHAVING SUM(r1w.weight*r2w.weight)≥φFigure 2:Baseline approach for computing the exact value of R 1 IφR 2.•For each such token j ,perform S Bernoulli trials over each t i ∈{t 1,...,t |R 2|},where the probability of picking t i in a trial depends on the weight of token j in tuple t q ∈R 1and in tuple t i ∈R 2.Specifically,this probability is p ij =v t q (j )·v t i (j )T V (t q ),where T V (t q )= |R 2|i =1σi is the sum of the similarity of tuple t q with each tuple t i ∈R 2.In Section 5we show how we can implement the sampling step even if we do not know the value of T V (t q ).Let C i be the number of times that t i appears in the sample of size S .It follows that:T HEOREM 1.The expected value ofC i S·T V (t q )is σi .PThe proof of this theorem follows from an argument similar to that in [2]and from the observation that the mean of the process that generates C i is |D |j =1v t q (j )v t i (j )T V (t q )=σi T V (t q ).Theorem 1establishes that,given a tuple t q ∈R 1,we can obtain a sample of size S of tuples t i such that the frequency C i of tuple t i can be used to approximate σi .We can then report t q ,t i aspart of the answer of R 1 IφR 2for each tuple t i ∈R 2such that its estimated similarity with t q (i.e.,its estimated σi )is φ or larger,where φ =(1− )φis a threshold slightly lower 1than φ.Given R 1,R 2,and a threshold φ,our discussion suggests thefollowing strategy for the evaluation of the R 1 IφR 2text join,in which we process one tuple t q ∈R 1at a time:•Obtain an individual sample of size S from R 2for t q ,using vector v t q to sample tuples of R 2for each token with non-zero weight in v t q .•If C i is the number of times that tuple t i appears in the sam-ple for t q ,then use CiS T V (t q)as an estimate of σi .•Include tuple pair t q ,t i in the result only if C iS T V (t q)≥φ (or equivalently C i ≥S T V (t q )φ ),and filter out the re-maining R 2tuples.1For all practical purposes, is treated as a positive constant less than 1.This strategy guarantees that we can identify all pairs of tuples withsimilarity of at least φ,with a desired probability,as long as we choose an appropriate sample size S .So far,the discussion has focused on obtaining an R 2sample of size S individually for each tuple t q ∈R 1.A naive implementation of this sampling strat-egy would then require a scan of relation R 2for each tuple in R 1,which is clearly unacceptable in terms of performance.In the next section we describe how the sampling can be performed with only one sequential scan of relation R 2.4.2Practical Realization of SamplingAs discussed so far,the sampling strategy requires extracting a separate sample from R 2for each tuple in R 1.This extraction of a potentially large set of independent samples from R 2(i.e.,one per R 1tuple)is of course inefficient,since it would require a large number of scans of the R 2table.In this section,we describe how to adapt the original sampling strategy so that it requires one single sample of R 2,following the “presampling”implementation in [2].We then show how to use this sample to create an approximateanswer for the text join R 1 IφR 
2.As we have seen in the previous section,for each tuple t q ∈R 1we should sample a tuple t i from R 2in a way that depends on the v t q (j )·v t i (j )values.Since these values are different for each tuple of R 1,a straightforward implementation of this sampling strategy requires multiple samples of relation R 2.Here we describe an alter-native sampling strategy that requires just one sample of R 2:First,we sample R 2using only the v t i (j )weights from the tuples t i of R 2,to generate a single sample of R 2.Then,we use the single sample differently for each tuple t q of R 1.Intuitively,we “weight”the tuples in the sample according to the weights v t q (j )of the t q tuples of R 1.In particular,for a desired sample size S and a targetsimilarity φ,we realize the sampling-based text join R 1 IφR 2in three steps:1.Sampling:We sample the tuple ids i and the correspond-ing tokens from the vectors v t i for each tuple t i ∈R2.We sample each token j from a vector v t i with probabil-ity v t i (j ).(We define Sum (j )as the total weight of the j -th token in relation R 2,Sum (j )= |R 2|i =1v t i (j ).These weights are kept in relation R2Sum .)We perform S trials,yielding approximately S samples for each token j .We in-sert into R2Sample tuples of the form i,j as many times as there were successful trials for the pair.Alternatively,we can create tuples of the form i,j,c ,where c is the number of successful trials.This results in a compact representation of R2Sample ,which is preferable in practice.2.Weighting:The Sampling step uses only the token weights from R 2for the sampling,ignoring the weights of the tokensSELECT rw.tid,rw.token,rw.weight/rs.total AS PFROM RiWeights rw,RiSum rsWHERE rw.token=rs.tokenFigure3:Creating an auxiliary relation that we sample to cre-ate RiSample(tid,token).in the other relation,R1.The cosine similarity,however,uses the products of the weights from both relations.During the Weighting step we use the token weights in the non-sampled relation to get estimates of the cosine similarity,as follows.For each R2Sample tuple i,j ,with c occurrences in thetable,we compute the value v tq (j)·Sum(j)·c,which isan approximation of v tq (j)·v ti(j)·S.We add this value toa running counter that keeps the estimated similarity of thetwo tuples t q and t i.The Weighting step thus departs from the strategy in[2],for efficiency reasons,in that we do not use sampling during the join processing.3.Thresholding:After the Weighting step,we include the tu-ple pair t q,t i in thefinal result only if its estimated similar-ity is no lower than the user-specified threshold(Section4.1). Such a sampling scheme identifies tuples with similarity of at leastφfrom R2for each tuple in R1.By sampling R2only once, the sample will be correlated.As we verify experimentally in Sec-tion6,this sample correlation has a negligible effect on the quality of the join approximation.As presented,the join-approximation strategy is asymmetric in the sense that it uses tuples from one relation(R1)to weight sam-ples obtained from the other(R2).The text join problem,as de-fined,is symmetric and does not distinguish or impose an ordering on the operands(relations).Hence,the execution of the text join R1 IφR2naturally faces the problem of choosing which relation to sample.For a specific instance of the problem,we can break this asymmetry by executing the approximate join twice.Thus,we first sample from vectors of R2and use R1to weight the samples. 
Then,we sample from vectors of R1and use R2to weight the sam-ples.Then,we take the union of these as ourfinal result.We refer to this as a symmetric text join.We will evaluate this technique experimentally in Section6.In this section we have described how to approximate the text join R1 IφR2by using weighted sampling.In the next section,we show how this approximate join can be completely implemented ina standard,unmodified RDBMS.5.SAMPLING AND JOINING TUPLE VEC-TORS IN SQLWe now describe our SQL implementation of the sampling-based join algorithm of Section4.2.Section5.1addresses the Sampling step,while Section5.2focuses on the Weighting and Thresholding steps for the asymmetric versions of the join.Finally,Section5.3 discusses the implementation of a symmetric version of the approx-imate join.5.1Implementing the Sampling Step in SQL Given the RiWeights relations,we now show how to implement the Sampling step of the text join approximation strategy(Sec-tion4.2)in SQL.For a desired sample size S and similarity thresh-oldφ,we create the auxiliary relation shown in Figure3.As the SQL statement in thefigure shows,we join the relations RiWeights and RiSum on the token attribute.The P attribute for a tuple inthe result is the probability RiWeights.weightRiSum.total with which we shouldpick this tuple(Section4.2).Conceptually,for each tuple in the output of the query of Figure3we need to perform S trials,pick-ing each time the tuple with probability P.For each successfulINSERT INTO RiSample(tid,token,c)SELECT rw.tid,rw.token,ROUND(S*rw.weight/rs.total,0)AS c FROM RiWeights rw,RiSum rsWHERE rw.token=rs.token ANDROUND(S*rw.weight/rs.total,0)>0Figure4:A deterministic version of the Sampling step,which results in a compact representation of RiSample.SELECT r1w.tid AS tid1,r2s.tid AS tid2FROM R1Weights r1w,R2Sample r2s,R2Sum r2sum,R1V r1vWHERE r1w.token=r2s.token ANDr1w.token=r2sum.token ANDr1w.tid=r1v.tidGROUP BY r1w.tid,r2s.tid,HAVING SUM(r1w.weight*r2sum.total/)≥S∗φ / Figure5:Implementing the Weighting and Thresholding steps in SQL.This query corresponds to the asymmetric execution of the sampling-based text join,where we sample R2and weight the sample using R1.trial,we insert the corresponding tuple tid,token in a relation RiSample(tid,token),preserving duplicates.The S trials can be implemented in various ways.One(expensive)way to do this is as follows:We add“AND P≥RAND()”in the WHERE clause of the Figure3query,so that the execution of this query corresponds to one“trial.”Then,executing this query S times and taking the union of the all results provides the desired answer.A more efficient al-ternative,which is what we implemented,is to open a cursor on the result of the query in Figure3,read one tuple at a time,perform S trials on each tuple,and then write back the result.Finally,a pure-SQL“simulation”of the Sampling step deterministically de-fines that each tuple will result in Round(S·RiWeights.weightRiSum.total)“suc-cesses”after S trials,on average.This deterministic version of the query is shown in Figure42.We have implemented and run exper-iments using the deterministic version,and obtained virtually the same performance as with the cursor-based implementation of sam-pling over the Figure3query.In the rest of the paper–to keep the discussion close to the probabilistic framework–we use the cursor-based approach for the Sampling step.5.2Implementing the Weighting and Thresh-olding Steps in SQLSection4.2described the Weighting and Thresholding steps as two separate steps.In practice,we can 
combine them into one SQL statement,shown in Figure5.The Weighting step is implemented by the SUM aggregate in the HA VING clause.We weight each tuple from the sample according to R1W eights.weight·R2Sum.totalR1V.T V, which corresponds to v t q(j)·Sum(j)V q(see Section4.2).Then,we can count the number of times that each particular tuple pair ap-pears in the results(see GROUP BY clause).For each group,the result of the SUM is the number of times C i that a specific tuple pair appears in the candidate set.To implement the Thresholding step,we apply the countfilter as a simple comparison in the HA V-ING clause:we check whether the frequency of a tuple pair at least matches the count threshold(i.e.,C i≥ST V(t q)φ ).Thefinal out-put of this SQL operation is a set of tuple id pairs with expected similarity of at leastφ.The SQL statement in Figure5can be fur-ther simplified by completely eliminating the join with the R1V 2Note that this version of RiSample uses the compact representation in which each tid-token pair has just one associated row.。
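The paper expresses both the preprocessing and the join entirely in SQL. Purely as an illustrative cross-check of the same definitions outside the database, the following Python sketch builds the normalized tf.idf weight vectors of Sections 2–3 (per relation, with q-gram tokens) and evaluates the exact text join of Definition 2 in the spirit of the Figure 2 baseline. It is our own reconstruction, not code from the paper; the relation contents, the function names, and the threshold `phi` are invented example inputs.

```python
import math
from collections import Counter

def weight_vectors(relation, q=3):
    """Normalized tf.idf weight vectors (Section 2) for a list of strings.
    Tokens are q-grams; idf_w = |R| / n_w, weight = tf_w * log(idf_w),
    and each vector is scaled to unit Euclidean length."""
    tokens_per_tuple = [
        Counter(s[i:i + q] for i in range(max(len(s) - q + 1, 1)))
        for s in relation
    ]
    n = Counter()                      # n_w: number of tuples containing token w
    for toks in tokens_per_tuple:
        n.update(toks.keys())
    size = len(relation)
    vectors = []
    for toks in tokens_per_tuple:
        # tokens present in every tuple get log(1) = 0 weight, so drop them
        v = {w: tf * math.log(size / n[w]) for w, tf in toks.items() if n[w] < size}
        norm = math.sqrt(sum(x * x for x in v.values())) or 1.0
        vectors.append({w: x / norm for w, x in v.items()})
    return vectors

def text_join(r1, r2, phi):
    """Exact text join of Definition 2: all (i, j) with cosine similarity >= phi.
    Mirrors the baseline of Figure 2 (match on shared tokens, sum the products)."""
    v1, v2 = weight_vectors(r1), weight_vectors(r2)
    result = []
    for i, a in enumerate(v1):
        for j, b in enumerate(v2):
            sim = sum(wa * b[w] for w, wa in a.items() if w in b)
            if sim >= phi:
                result.append((i, j, sim))
    return result

if __name__ == "__main__":
    r1 = ["Western Digital 120Gb 7200rpm", "Microsoft Windows XP Professional"]
    r2 = ["WD 120Gb 7200rpm", "Windows XP Pro", "Windows NT"]
    for i, j, s in text_join(r1, r2, phi=0.1):
        print(r1[i], "~", r2[j], round(s, 3))
```

The sampling-based approximation of Section 4 would replace the exhaustive inner loop above with roughly S weighted draws per non-zero token of each R1 tuple, followed by the Weighting and Thresholding steps; the exact variant is shown here only because it is the shortest faithful statement of the join being approximated.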
Using NIST in Aspen

Aspen中NIST使⽤⽅法NIST ThermoData EngineUse this dialog box to estimate pure component parameters using the NIST Thermo Data Engine (TDE), or retrieve binary parameters from NIST. If at least two components are defined, you can choose at the top to evaluate either pure properties or binary mixture properties.If you choose databank component(s) or one(s) which have already had their structural formula specified, you can click Evaluate Now to run TDE to estimate properties immediately.If you choose a user-defined component, you can click Enter Additional Data to open the User-Defined Component Wizard for that component. Once you have specified the structural formula and optional additional data, you will be able to run TDE from within the wizard.TDE takes a few minutes to run. When it finishes running, the TDE Pure Results or TDE Binary Results window will appear with the results of the estimation.See AlsoUsing the NIST Thermo Data Engine (TDE)User-Defined Component WizardUsing the NIST Thermo Data Engine (TDE)You can use the ThermoData Engine (TDE) from the National Institute of Standards and Technology (NIST) to estimate property parameters for any component or pair of components given one of the following for each component:CAS numberMolecular structure. TDE can only use molecular structure saved in an MDL file (*.mol) or specified using the drawing tool in the User Defined Component Wizard. It cannot use molecular structurespecified by atom and connectivity.Note: Only MDL files of version V2000 are supported. The version V3000 files, sometimes called Extended MDL files, are not supported.TDE has a variety of group contribution methods available to estimate pure component property parameters based on molecular structure. Based on TDE's large database of experimental data, these methodshave been ranked for accuracy for different compound classes. For eachTo run TDE:1.Specify the component(s) on the Components | Specifications |Selection sheet.2.On the Home tab of the ribbon, in the Data Source group, clickNIST. The NIST ThermoData Engine dialog box appears.3.Choose Pure or Binary mixture.4.Select the component from the list in the dialog box. For binarymixture properties select a component from the second list aswell.5.If the CAS number or molecular structure is specified for eachcomponent, then the Evaluate Now button (for pure componentproperties) or Retrieve Data button (for binary mixture properties) is enabled. Click it to estimate property parameters.ORFor pure component parameters, if neither CAS number normolecular structure is specified, click Enter Additional Data. TheUser Defined Component Wizard appears, allowing you tospecify the molecular structure and optionally other data aboutthe component. You will be given the option to run TDE toestimate parameters after specifying data.The following data can be sent to TDE:Vapor pressure dataLiquid densityIdeal gas heat capacityNormal boiling pointMolecular structure (if specified using a version V2000 MDL file or using the drawing tool in the User Defined Component Wizard)Note: TDE takes a couple minutes to run on a typical computer.6.When TDE is finished, the results will appear in the TDE Purewindow or the TDE Binary window.See AlsoAbout the NIST ThermoData Engine (TDE)User Defined Component WizardNIST TDE Data Evaluation MethodologyNIST TDE vs. 
NIST-TRC DatabankUsing TDE ResultsAbout the NIST ThermoData Engine (TDE)The ThermoData Engine (TDE) is a thermodynamic data correlation, evaluation, and prediction tool provided with Aspen Plus and Aspen Properties through a long-term collaboration agreement with the National Institute of Standards and Technology (NIST).The purpose of the ThermoData Engine software is to provide critically evaluated thermodynamic and transport property data based on the principles of dynamic data evaluation.Critical evaluation is based on:Published experimental data stored in a program databasePredicted values based on molecular structure andcorresponding-states methodsUser supplied data, if anyThe primary focus of the current version is pure organic compounds comprised of the elements: C, H, N, O, F, Cl, Br, I, S, and P. The principles upon which the ThermoData Engine software are based are fully discussed in two articles.1,2 The first article describes the foundations of TDE while the second describes the extension of TDE for dynamic equation-of-state evaluation and online updating. Online updating is not available in Aspen Plus.ThermoData Engine is the first software fully implementing all major principles of the concept of dynamic data evaluation formulated at NIST Thermodynamic Research Center (TRC). This concept requires the development of large electronic databases capable of storing essentially all raw experimental data known to date with detailed descriptions of relevant metadata and uncertainties. The combination of these databases with expert software designed primarily to generate recommended data based on available raw experimental data and their uncertainties leads to the possibility of producing data compilations automatically to order, forming a dynamic data infrastructure. The NIST TRC SOURCE data archival system currently containing more than 3 million experimental data points is used in conjunction with ThermoData Engine as a comprehensive storage facility for experimental thermophysical and thermochemical property data. The SOURCE database is continually updated and is the source for the experimental database used with TDE.The ThermoData Engine software incorporates all major stages of the concept implementation, including data retrieval, grouping, normalization, sorting, consistency enforcement, fitting, and prediction. The ThermoData Engine emphasizes enforcement of consistency between related properties (including those obtained from predictions), and incorporates a large variety of models for fitting properties. Predicted values are provided using the following set of Prediction MethodsThe experimental database containing raw property data for a very large number of components (over 17,000 compounds) is included automatically with Aspen Plus/Aspen Properties. Results of the TDE evaluations –model parameters –can be saved to the Aspen Plus simulation and used in process calculations. Experimental data can also be saved to the simulation and used with the Aspen Plus Data Regression System, if needed, for example, to fit other property models, or to fit data over limited temperature ranges that correspond to the process conditions of interest.Note: AspenTech has provided the regression results for much of this data in the NIST-TRC databank. You can use this databank to gain mostof the advantage of NIST without spending the time to run TDE dynamically. The models linked below (used in many property methods) provide access to these properties when the NIST-TRC databank is used. 
See NIST TDE vs. NIST-TRC Databank for more information.Note: NIST TDE is a complementary technology of the existing Property Estimation System of Aspen Plus. The two features work independently of each other and will co-exist. However, we anticipate that TDE will continue to be enhanced with additional raw data and new or improved estimation methods and will be used in preference to the Property Estimation System in the future.The Aspen Plus - TDE interface covers the following properties of pure molecular compounds. Most of them can be estimated for new compounds based on molecular structure, using the methods listed below. Where multiple methods are listed for a property, they are ranked for accuracy for each compound class based on the data in the experimental database, and the highest-ranked one for the given structure is automatically selected.Single-Valued PropertiesTemperature-Dependent PropertiesReferences1.ThermoData Engine (TDE): Software Implementation of theDynamic Data Evaluation Concept, J. Chem. Inf. Model., 45 (4), 816 -838, 2005./doc/bf527fa31b5f312b3169a45177232f60ddcce795.html /TDEarticle.pdf 2.ThermoData Engine (TDE): Software Implementation of theDynamic Data Evaluation Concept. 2. Equations of State onDemand and Dynamic Updates over the Web, J. Chem. Inf. Model., 47, 1713-1754, 2007. /doc/bf527fa31b5f312b3169a45177232f60ddcce795.html /TDEarticle2.pdf 3.K. G. Joback, R. C. Reid. Estimation of Pure-Component Propertiesfrom Group-Contributions. Chem. Eng. Comm. 1987, 57, 233-243.4.L. Constantinou, R. Gani. New Group-Contribution Method forEstimating Properties of Pure Compounds. AIChE J. 1994, 40,1697-1710.5.J. Marrero-Morejon, E. Pardillo-Fontdevila. Estimation of PureCompound Properties Using Group-Interaction Contributions.AIChE J. 1999, 45, 615-621.6.G. M. Wilson, L. V. Jasperson. Critical Constants T c, P c. EstimationBased on Zero, First, Second-Order Methods. AIChE Meeting, New Orleans, LA, 1996.7. D. Ambrose, J. Walton. Vapor-Pressures up to TheirCritical-Temperatures of Normal Alkanes and Alkanols. Pure Appl.Chem. 1989, 61, 1395-1403.8.T. Yamada, R. D. Gunn. Saturated Liquid Molar Volumes. TheRackett Equation. J. Chem. Eng. Data 1973, 18, 234-236.9.L. Riedel. Chem.-Ing.-Tech. 1954, 26, 259-264. As modified in: J. L.Hales, R. Townsend. J. Chem. Thermodyn. 1972, 4, 763-772.10.B. E. Poling, J. M. Prausnitz, J. P. O'Connell. The Properties of Gasesand Liquids, 5th ed.; McGraw-Hill: New York, 2001.11.S. R. S. Sastri, K. K. Rao. A New Group Contribution Method forPredicting Viscosity of Organic Liquids. Chem. Eng. J. Bio. Eng. J.1992, 50, 9-25.12.T. H. Chung, M. Ajlan, L. L. Lee, K. E. Starling, GeneralizedMultiparameter Correlation for Nonpolar and Polar FluidTransport-Properties. Ind. Eng. Chem. Res. 1988, 27, 671-679.13.B. E. Poling, J. M. Prausnitz, J. P. O'Connell. The properties of Gasesand Liquids, 5th ed.; McGraw-Hill: New York, 2001 (page 9.9 forlow-pressure gas and page 9.35 Lucas model for high-pressure).14.T. H. Chung, L. L. Lee, K. E. Starling. Applications of Kinetic GasTheories and Multiparameter Correlation for Prediction of DiluteGas Viscosity and Thermal-Conductivity. Ind. Eng. Chem. Fund.1984, 23, 8-13.See AlsoNIST TDE vs. NIST-TRC DatabankUsing the NIST ThermoData EngineNIST TDE Data EvaluationNIST TDE Data Evaluation MethodologyThe NIST ThermoData Engine (TDE) uses dynamic data evaluation to determine the data to be used in regressing property parameters from the collected raw experimental data in NIST's database. 
The data evaluation is broken into several phases. The data are broken into four blocks:Phase diagram: triple point, critical temperature, phase boundary pressureVolumetric: critical density, saturated & single phase density, volumetric coefficientsEnergetic: energy differences, energy derivatives, speed of sound Other: transport properties, surface tension, refraction The blocks are first processed individually. The following thermodynamic consistency tests are performed within the phase diagram, volumetric, and energetic data:Vapor pressures of phases must be equal at triple points, and slope/enthalpy change must be consistentCondensed phase boundaries must converge to the triple pointGas and liquid saturation density curves must converge at the critical temperatureFirst derivative of saturated density must trend toward infinity at the critical temperatureSingle-phase densities must converge to saturated densities Then, the vapor pressure, saturated density, and enthalpy of vaporization are checked for consistency, and the other data is processed.See AlsoAbout the NIST ThermoData Engine (TDE)NIST TDE vs. NIST-TRC DatabankIn addition to the raw property data available with NIST TDE, the Aspen Physical Property System includes the NIST-TRC databank, which contains parameters regressed with TDE for compounds for which a significant amount of data was available. NIST-TRC and associated property models available in Aspen Plus provide all that most users need to use property data from NIST in their simulations.NIST TDE provides additional capabilities for users who need them:You can perform dynamic data evaluation using the raw property database delivered with Aspen Physical Property System. You can trace back to the original data sources.You can save the data into Aspen Plus to perform additional data regressions beyond those automated by TDE, such as fitting to adifferent property model or fitting data over a differenttemperature range which corresponds to the process conditionsof interest.Note: The NIST-TRC databank is only available when using the Aspen Properties Enterprise Database. Starting in version V7.0, Aspen Plus and Aspen Properties are configured to use the enterprise database when installed.Using TDE ResultsPure component resultsOn the left side of the TDE Pure Results window under Properties for component ID is a list of the property parameters available, with All at the top. Selecting All displays a summary of the parameter results. ForT-dependent parameters, a + is displayed; you can click this to open the display of the estimated values for each element of these parameters.Selecting any parameter displays details about that parameter on a multi-tab display, including any experimental data and estimated property values. In the display of experimental data, one column indicates which data points were used in regression and which were rejected as outliers.With the Experimental Data, Predicted Values, or Evaluated Results tab of any T-dependent parameter open, in the Home tab of the ribbon, in the Plot group, you can click Prop-T in the ribbon to generate a plot of that data vs. temperature. The plot displays all available experimental data and predicted values along with the curve of evaluated values.If you want to save this data as part of your simulation, you must click Save Parameters to save it on Parameters and Data forms. 
See Saving results to forms, below.

Binary mixture results

On the left side of the TDE Binary Results window is a list of the property parameters available, with Data for ID(1) and ID(2) at the top. Clicking Data for ID(1) and ID(2) displays a summary of the parameter results. The retrieved parameters appear in a tree at the left; selecting categories in the tree displays a summary of the data available under that category. Selecting the individual numbered results displays the experimental data. Double-clicking a row in any of the summary views also displays its experimental data.

With any experimental data set open, in the Home tab of the ribbon, the Plot group displays buttons for ways you can plot that data.

If you want to save this data as part of your simulation, you must click Save Data to save it on Data forms. See Saving results to forms, below.

Once you have saved some results to forms, you can click Data Regression to create a data regression case with this data. See NIST TDE Data Evaluation/Regression.

Click the Consistency Test tab to run consistency tests on the VLE data. See NIST TDE VLE Consistency Test for details.

Saving results to forms

Click Save Parameters or Save Data to save any of the pure component TDE results and most categories of pure component or binary experimental data in forms in your current Aspen Plus or Aspen Properties run. A dialog box appears, allowing you to select which parameters you want to save data for. For pure component experimental data, a checkbox (selected by default) lets you specify to save only accepted data; if this is selected, then experimental data points which were rejected by TDE are not saved to forms. For binary data, a checkbox allows you to save both the data and its uncertainty. The data is saved to:

Methods | Parameters | Pure Component | TDE-1 form (scalar parameter values)
Methods | Parameters | Pure Component | Parameter Name forms (T-dependent parameter values)
Data | Pure-Comp forms (pure component experimental data)
Data | Mixture forms (binary experimental data)

Note: TDE results are only saved if you use Save Parameters or Save Data. Otherwise, they are discarded when you close the window. Values are saved in SI units. These are treated as user-entered parameters.

See Also
NIST TDE Data Evaluation
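The follow-up regression described above, fitting saved data to a different property model or over a different temperature range, can be sketched outside Aspen Plus with a short script. The example below is illustrative only: the vapor-pressure points are invented, the Antoine form is just one possible target model, and this is not the Aspen Plus or TDE regression engine.

```python
# Minimal sketch of re-fitting saved vapor-pressure data to an Antoine-type model
# over a chosen temperature range. Illustrative only; the data points below are
# invented and this is not the Aspen Plus / TDE regression engine.
import numpy as np
from scipy.optimize import curve_fit

def antoine(T, A, B, C):
    """log10(P/kPa) = A - B / (T/K + C)"""
    return A - B / (T + C)

# "Saved" experimental data: temperature (K) and vapor pressure (kPa)
T_data = np.array([300.0, 320.0, 340.0, 360.0, 380.0, 400.0])
P_data = np.array([3.5, 9.6, 23.0, 49.5, 97.0, 177.0])

# Restrict the fit to the temperature window that matches the process conditions
T_lo, T_hi = 320.0, 400.0
mask = (T_data >= T_lo) & (T_data <= T_hi)

popt, pcov = curve_fit(antoine, T_data[mask], np.log10(P_data[mask]),
                       p0=(6.0, 1500.0, -40.0))
A, B, C = popt
print(f"A = {A:.3f}, B = {B:.1f}, C = {C:.1f}")
print("P(350 K) ~", 10 ** antoine(350.0, *popt), "kPa")
```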
Full-range analysis of the elastic-plastic model of the pull-out interface between geosynthetics and soil

㊀第45卷第12期煤㊀㊀炭㊀㊀学㊀㊀报Vol.45㊀No.12㊀㊀2020年12月JOURNAL OF CHINA COAL SOCIETYDec.㊀2020㊀移动阅读杜常博,易富.筋土拉拔界面弹塑性模型的全过程分析[J].煤炭学报,2020,45(12):4062-4073.DU Changbo,YI Fu.Full-range analysis of elastic-plastic model of pull-out interface between geosynthetics and soil[J].Journal of China Coal Society,2020,45(12):4062-4073.筋土拉拔界面弹塑性模型的全过程分析杜常博1,易㊀富2,3(1.辽宁工程技术大学土木工程学院,辽宁阜新㊀123000;2.辽宁工程技术大学建筑与交通学院,辽宁阜新㊀123000;3.煤炭科学研究总院,北京㊀100013)摘㊀要:基于筋土拉拔界面理想弹塑性模型基础上,针对土工合成材料拉拔时出现的应变硬化和应变软化现象,提出了双线性剪应力-位移弹塑性硬化模型和三线性剪应力-位移弹塑性软化模型,将应变硬化筋材的筋土拉拔过程分为弹性阶段㊁弹性-硬化过渡阶段和完全硬化阶段,将应变软化筋材的筋土拉拔过程分为弹性阶段㊁弹性-软化过渡阶段㊁完全软化阶段㊁软化-残余过渡阶段和完全残余阶段;为了验证2种弹塑性模型的准确性与适用性,将2种弹塑性模型预测结果与试验结果进行对比,并引入几种经典的筋土界面模型进行对比分析;通过筋土界面基本控制方程,分别推导了拉拔荷载下应变硬化和应变软化2种塑性变形特征筋材不同阶段界面拉力㊁剪应力和位移的计算表达式,很好的反映了筋土拉拔界面的渐进性破坏;同时,为直观反映2种类型筋材拉拔试验过程中界面不同拉拔阶段的受力演化规律,对界面剪应力在不同拉拔阶段的分布规律进行分析㊂研究结果表明:2种弹塑性模型预测结果与拉拔试验数据吻合良好,能够较好的反映筋土界面的硬化或软化特性,且计算更为简便,具有更好的适用性,验证了所提出2种弹塑性模型对筋材在拉拔界面中渐进破坏分析的有效性;在筋材的拉拔过程中,2种弹塑性模型的过渡阶段一般均不明显㊂研究结果能够全面反映不同应变类型筋土界面的渐进性破坏㊂关键词:拉拔界面;弹塑性模型;土工合成材料;应变硬化;应变软化中图分类号:TU47㊀㊀㊀文献标志码:A㊀㊀㊀文章编号:0253-9993(2020)12-4062-12收稿日期:2019-10-17㊀㊀修回日期:2019-12-20㊀㊀责任编辑:陶㊀赛㊀㊀DOI :10.13225/ki.jccs.2019.1419㊀㊀基金项目:国家自然科学基金资助项目(51774163)㊀㊀作者简介:杜常博(1992 ),男,辽宁阜新人,讲师,博士㊂E -mail:duchangbo2839@163.com㊀㊀通讯作者:易㊀富(1978 ),男,河北张北人,教授,博士生导师,博士㊂E -mail:yifu9716@163.comFull-range analysis of elastic-plastic model of pull-out interface betweengeosynthetics and soilDU Changbo 1,YI Fu 2,3(1.College of Civil Engineering ,Liaoning Technical University ,Fuxin ㊀123000,China ;2.College of Architecture and Transportation ,Liaoning Technical Uni-versity ,Fuxin ㊀123000,China ;3.China Coal Research Institute ,Beijing ㊀100013,China )Abstract :Based on the ideal elastic-plastic model of reinforced soil pull-out interface,aiming at the strain hardening and strain softening phenomena of geosynthetics during pull-out,a bilinear shear stress-displacement elastic-plastic hardening model and a trilinear shear stress-displacement elastic-plastic softening model were proposed.The pull-out process of strain hardening reinforcement was divided into three stages:elastic stage,elastic-hardening transition stage and pure hardening stage.The pull-out process of strain softening reinforcement was divided into elastic stage,elastic-softening transition stage,pure softening stage,softening-residual transition stage and pure residual stage.To verify theaccuracy and applicability of the two elastic-plastic models,the predicted results were compared with the experimental第12期杜常博等:筋土拉拔界面弹塑性模型的全过程分析results,and several classical models of pull-out interface between reinforcement and soil were introduced for compara-tive analysis.Based on the basic control equation of reinforcement-soil interface,the expressions of interface tension, shear stress and displacement at different stages of reinforcement of two kinds plastic deformation,strain hardening and strain softening,under pull-out load were derived respectively,which well reflect the progressive failure of the pull-out interface between reinforcement and soil.Moreover,to directly reflect the stress evolution law of the two types of rein-forcement at different pull-out stages of reinforcement-soil interface,the evolution law of interface shear stress at differ-ent pull-out stages was analyzed.The results show that the predicted results of the two elastic-plastic models are in good agreement with the pull-out test data,which can better reflect the hardening or softening characteristics of the pull-out interface between reinforcement and soil,and the calculation is more simple and has better 
applicability,it al-so verifies the validity of the proposed two elastic-plastic models for the progressive failure analysis of reinforcement at the pull-out interface.In the pull-out process of reinforcement,the transition stage of the two elastic-plastic models is not obvious.The results can comprehensively reflect the progressive failure of reinforcement-soil interface with different strain types,and provide a theoretical support for the study of the interface characteristics of reinforcement-soil pull-out.Key words:pull-out interface;elastic-plastic model;geosynthetics;strain hardening;strain softening㊀㊀土工合成材料在加筋工程中的应变硬化与应变软化是土工合成材料常见力学特性,关于筋土界面相互作用的试验研究[1-2]中都强调了筋材的这一显著特性㊂史旦达等[3]对比单㊁双向土工格栅加筋工况,认为单向格栅加筋时的直剪和拉拔曲线表现为应变软化型,而双向格栅加筋的2种试验曲线一般表现为应变硬化型;而杨敏等[4]研究土工布与黄土界面摩擦作用时发现,直剪试验曲线表现为硬化型,拉拔曲线表现成软化型㊂一般情况下,对筋土界面特性理论方面研究需考虑剪应力与位移之间的关系[5-7],刘续等[8]和易富等[9]认为在拉拔位移较小的情况下剪应力和位移呈线性关系;GURUNG等[10]采用双曲线计算模型分析界面剪应力与位移的变化关系;ESTERHULZEN 等[11]提出了峰值前和峰值后都采用双曲线表示的位移软化模型;林明飞等[12]将剪应力-位移曲线前后2部分均采用双曲线模型进行模拟;王军等[13]在试验基础上提出一种能够描述筋土界面力学性能的组合本构模型,该模型包含峰值及残余强度包络线㊁峰值前的双曲线模型㊁峰值后位移软化模型和界面剪胀模型;林伟岸等[14]将峰值前㊁峰值后塑性软化和塑性流动的剪应力与位移变化均采用直线模拟;而张鹏等[15]提出三阶段弹塑性剪应力-位移模型,峰值前采用双曲线模拟,峰值后的塑性软化和塑性流动阶段采用直线模拟㊂上述筋拉拔土界面模型虽能较好的模拟界面拉拔行为,但未能考虑筋材拉拔全过程不同阶段界面的的渐进破坏特性㊂为了真实描述筋土界面拉拔过程中不同阶段的渐进破坏作用特性,国内外学者针对筋材应变软化特性提出了很多计算模型,如ZHU等[16]和CHEN 等[17]通过三参数模型推导了筋土界面轴力和剪应力在不同拉拔阶段的解析表达式㊂在关于筋土界面特性的研究中,对于应变硬化特性只有较少计算模型,而且还未有同时包含筋材应变硬化和应变软化2种塑性变形特征的计算模型㊂笔者在筋土拉拔界面理想弹塑性模型[18]基础上,针对加筋材料拉拔曲线的应变硬化和应变软化2种形式(以下简称应变硬化筋材和应变软化筋材),提出应变硬化筋材在拉拔过程中经历3个连续阶段,应变软化筋材在拉拔过程中经历5个连续阶段,分别推导出2种类型筋材拉拔试验过程中不同拉拔阶段的拉力㊁剪应力和位移的分布规律,同时还分析了界面剪应力在不同拉拔阶段的演化规律,以期为实际加筋工程中筋材的选择与设计提供参考㊂1㊀筋土界面基本方程图1为筋材拉拔试验示意图,试验中,试验槽底部和侧向边界固定,在承压板上可以施加竖向上覆压力㊂筋材的长度和厚度分别为L,t,拉伸模量为E㊂设筋材在x处的剪应力为τ,在其上取长度为d x 的微单元体进行分析,宽度取筋材的单位宽度,忽略筋材的边界效应,则根据受力平衡可以得出(T+d T)-T+2τ(d x+εd x)=0(1)式中,T为筋材在x处单位宽度拉力;εd x为微元体的单元变形长度;ε为应变㊂根椐应变的定义,在x处筋材应变可写为ε=-d u d x(2)3604煤㊀㊀炭㊀㊀学㊀㊀报2020年第45卷式中,u 为x 处筋材相对位移㊂假定应变与单位宽度的拉力线性相关,即ε=T /(Et )(3)㊀㊀由式(1)~(3)可以求出Et d 2ud x2+2τ(ε-1)=0(4)㊀㊀一般在拉拔过程中的实际应变ε很小[19],可忽略,故式(4)近似表示为Et d 2ud x2-2τ=0(5)㊀㊀式(5)即为筋土界面基本方程,对研究筋土界面特性具有重要意义㊂图1㊀筋材拉拔示意Fig.1㊀Pull-out sketch of reinforcement根据文献[20],界面剪应力峰值τp 可由式(6)计算:τp =σn tan φᶄgs =(q s +γh )tan φᶄgs(6)式中,σn 为法向应力,σn =q s +γh ;q s 为附加应力;γ为试验填料的容重;h 为加筋以上填料的铺筑高度;φᶄgs 为界面综合摩擦角㊂2㊀应变硬化筋材拉拔界面分析2.1㊀硬化筋材模型应变硬化筋材界面剪应力与位移试验曲线达到峰值前可近似表现为弹性关系,之后表现为应变硬化特征㊂将这种曲线形式简化为如图2所示的双线性线性剪应力-位移(τ-u )关系[21]㊂由图2可见,第1阶段(OA 段),以直线表示剪应力达到峰值前的剪应力与位移关系;第2阶段(AB 段),以直线表示筋材的应变硬化㊂图2中,K s1和K s2分别为OA 段和AB 段的斜率,也叫做界面剪切刚度,u p 为对应的拉拔位移,K s1=τp /u p ㊂当筋材拉拔端剪应力增大到筋材极限剪应力τult 时,筋材发生破坏,定义两者之间的比值为破坏比R f[15],即R f =τp /τult ,其值一般在0.5~1.0㊂τ=K s1u ,0ɤu ɤu pK s2(u -u p )+τp ,u >u p{(7)图2㊀双线性界面剪应力与位移(τ-u )关系Fig.2㊀Relationship between interface shear stress and displacement of bilinear (τ-u )㊀㊀基本假定:根据上述应变硬化筋材理论模型的定义,认为拉拔荷载下筋土界面将经历弹性阶段㊁弹性-硬化过渡阶段㊁完全硬化阶段,分别对应图3中的Ⅰ,Ⅱ,Ⅲ阶段(L e ,L h 分别为筋材拉拔过程中弹性区长度和硬化区长度)㊂通过理论计算可得到各个拉拔阶段拉力㊁剪应力和位移的计算表达式㊂图3㊀拉拔模型分析中应变硬化筋材渐进拉拔过程Fig.3㊀Progressive pull-out process of strain hardeningreinforcement in pull-out model analysis4604第12期杜常博等:筋土拉拔界面弹塑性模型的全过程分析2.2㊀硬化模型验证为验证硬化模型的准确性与适用性,引入几种经典的筋土界面模型进行对比分析,包括理想弹塑性模型[18]㊁双曲线模型[10]㊂笔者选取文献[3]中法向应力为100kPa㊁相对密度为70%条件下双向土工格栅与砂土的拉拔试验结果进行模拟验证,模拟参数见表1,模拟结果如图4所示㊂从图4可以看出,理想弹性模型和双曲线模型无法体现界面的硬化过程,而双线性剪应力-位移弹塑性硬化模型能够较好的反映界面硬化特性,而且计算更为简便,具有更好的适用性㊂硬化模型可适用于加筋土拉拔试验时试验曲线呈现应变硬化特征情况时,该模型能够合理的模拟硬化型筋材的拉拔行为,但图中阶段Ⅱ即弹性-硬化过渡阶段区间很小,这是由于该筋材弹性模量很大而加筋长度较小导致筋材的渐进性破坏不明显㊂双向土工格栅加筋土产生应变硬化现象的主要原因是由于双向格栅横肋的阻挡作用㊂表1㊀拉拔试验的模拟参数[3]Table 1㊀Simulation parameters of pull-out test [3]σn /kPa L /m t /mm E /GPa K s1/(MPa㊃m -1)K s2/(MPa㊃m -1)tan φᶄgs 1000.2254.230.470.56图4㊀双向土工格栅拉拔试验结果[3]Fig.4㊀Pull-out test results 
of bidirectional geogrids inreference[3]2.3㊀硬化拉拔界面全历程分析2.3.1㊀弹性阶段(Ⅰ阶段)当0ɤu <u p 时,剪应力和位移呈弹性关系,二者关系满足关系式τ=K s1u ,联立式(5)和(7)可得到此阶段的控制方程d 2Td x2-α2T =0(8)式中,α=2K s1/(Et )㊂解式(8)可得T e (x )=C 1exp(-αx )+C 2exp(αx )(9)式中,T e (x )为筋材在弹性阶段的拉力;C 1,C 2为积分常数㊂拉拔试验中,拉拔端即x =0的拉力为T 01,在自由端即x =L 的拉力为0,则有边界条件T (x =0)=T 01T (x =L )=0{(10)㊀㊀将边界条件式(10)代入式(9)可求得C 1=exp(αL )exp(αL )-exp(-αL )T01C 2=-exp(-αL )exp(αL )-exp(-αL )T01ìîí(11)则可求得Ⅰ阶段的拉力表达式:T e (x )=T 01sinh α(L -x )sinh αL(12)㊀㊀根据式(5)和(7)可得到Ⅰ阶段相应的剪应力τe (x )和位移u e (x )关系式τe (x )=αT 012cosh α(L -x )sinh αL(13)u e (x )=αT 012K s1cosh α(L -x )sinh αL(14)㊀㊀让x =0代入到式(14)中可得到拉拔端的位移(即拉拔位移u e0)的变换式为T 01=2tanh αLαK s1u e0(15)㊀㊀当u e0=u p 时,τp =K s1u e0,由式(15)可得弹性阶段与弹性-硬化过渡阶段的临界拉力T c eh0,也就是在弹性阶段的最大拉力为T c eh0=2τp tanh αLα(16)2.3.2㊀弹性-硬化过渡阶段(Ⅱ阶段)随着拉力的不断增长,界面剪应力从拉拔端逐渐向尾部传递,直至达到峰值,然后拉拔端开始发生塑性特征,出现应变硬化现象,进入Ⅱ阶段㊂定义临界点P (x =L h )划分弹性区和硬化区,当0ɤx <L h ,界面处于Ⅱ阶段硬化区,而当L h <x ɤL ,界面处于Ⅱ阶段弹性区(其中,L h 为硬化区的长度)㊂(1)弹性区(L h <x ɤL )㊂Ⅱ阶段弹性区界面5604煤㊀㊀炭㊀㊀学㊀㊀报2020年第45卷拉力㊁剪应力和位移的分布规律与完全弹性阶段相似,可得T e (x )=T 02sinh α(L -x )sinh α(L -L h )(17)τe (x )=αT 022cosh α(L -x )sinh α(L -L h )(18)u e (x )=αT 022K s1cosh α(L -x )sinh α(L -L h )(19)式中,T 02为过渡点P 的拉拔力㊂考虑到过渡点P 的界面剪应力等于峰值剪应力,则可以得到T 02=2τp tanh α(L -L h )α(20)㊀㊀(2)硬化区(0ɤx ɤL h )㊂Ⅱ阶段硬化区界面剪应力与剪切位移的关系由式(7)定义,联立式(5)和(7)可得d 2Td x2-β2T =0(21)式中,β=2K s2/(Et )㊂T h x )=C 3exp(-βx )+C 4exp(βx )(22)式中,T h (x )为筋材在完全弹性阶段的拉力;C 3,C 4为积分常数㊂考虑边界条件:T h (x =0)=T 03T h (x =L h )=T e (x =L h ){(23)式中,T 03为弹性硬化阶段的极限拉力㊂将式(23)边界条件代入式(22)可求得C 3=-T 03exp(-βL h )exp(βL h )-exp(-βL h )+㊀㊀2τp tanh α(L -L h )α[exp(βL h )-exp(-βL h )]C 4=T 03exp(βL h )exp(βL h )-exp(-βL h )-㊀㊀2τp tanh α(L -L h )α[exp(βL h )-exp(-βL h )]ìîí(24)则可求得Ⅱ阶段拉力㊁剪应力和位移的表达式分别为T h (x )=T 03sinh β(L h -x )sinh βL h +2τp tanh α(L -L h )αsinh βxsinh βL h(25)τh (x )=βT 032cosh β(L h -x )sinh βL h+2βτp tanh α(L -L h )αcosh βxsinh βL h(26)u h (x )=βT 032K s2cosh β(L h -x )sinh βL h+2βτp tanh α(L -L h )αK s2cosh βx sinh βL h -τp K s2+u p (27)㊀㊀由于弹性区与硬化区的过渡点P 剪应力连续,即τe (x =L h )=τh (x =L h ),则可以求得T 03:T 03=2τp sinh βL h β+2τp tanh α(L -L h )αcosh βL h(28)㊀㊀当L h =L 时,由式(28)可得弹性-硬化过渡阶段与完全硬化阶段的临界拉力T c h0为T c h0=2τp sinh βLβ(29)2.3.3㊀完全硬化阶段(Ⅲ阶段)类似于Ⅱ阶段硬化区的分析,式(22)仍然适用于完全硬化阶段,边界条件为T s (x =0)=T 04T s (x =L )=0{(30)式中,T 04为完全硬化阶段的极限拉力㊂Ⅱ阶段的拉力㊁剪应力和位移表达式为T h (x )=T 04sinh β(L -x )sinh βL(31)τh (x )=βT 042cosh β(L -x )sinh βL(32)u h (x )=βT 042K s2cosh β(L -x )sinh βL -τp K s2+u p(33)㊀㊀将x =0代入式(33)中得到此阶段拉拔位移u h0的变换式为T 04=2tanh βLβ[K s2(u h0-u p )+τp ](34)式中,u h0为完全硬化阶段x =0时筋材的拉拔位移㊂在完全硬化阶段,界面拉拔端拉拔力和剪应力均增大,当筋材拉拔端处的剪应力增大到筋材的极限应力τult 时,筋材发生破坏,则τult =K s2(u h0-u p )+τp ,结合R f =τp /τult ,所以式(34)可写成T 04=2τp tanh βLR f β(35)㊀㊀综上,应变硬化筋材拉拔试验过程中的3个阶段均得到了封闭解,每2个阶段间临界拉力的结果见表2㊂2.4㊀界面剪应力分布规律为直观反映应变硬化筋材拉拔试验过程中界面不同拉拔阶段的受力演化规律,对3个阶段界面剪应力分布进行分析㊂所选取的模型参数见表1㊂为简化分析,对模型参数进行归一化处理,归一化筋材位置为X =x /L ,归一化界面剪应力为ρ=τ/τp ㊂6604第12期杜常博等:筋土拉拔界面弹塑性模型的全过程分析表2㊀硬化拉拔模型2个阶段间的临界拉力Table 2㊀Critical tensile force between two stages ofhardening pull-out model拉拔阶段Ⅰ-ⅡⅡ-Ⅲ临界拉拔力表达式T c eh0=2τp tanh αLαT c h0=2τp sinh βLβ㊀㊀计算得到τp =σn tan φᶄgs =56kPa,α=2K s1/(Et )=0.92,β=2K s2/(Et )=0.31㊂根据所给出的模型参数,代入以下各式:2.4.1㊀弹性阶段代入式(13)得到:ρ=cosh αL (1-X )cosh αL,代入式(16)求得完全弹性与弹性-硬化过渡段的临界拉力T c eh0=2τp tanh αLα=22.15kN /m㊂2.4.2㊀弹性-硬化过渡阶段(1)弹性区(0.5ɤX ɤ1.0)㊂令L h =L /2代入式(18)得:ρ=cosh αL (1-X )cosh(αL /2);(2)硬化区(0ɤX ɤ0.5)㊂将L h =L /2代入式(26)得到ρ=cosh βL (0.5-X )+βtanh(αL /2)αsinh(βL /2)ˑcosh βL 2cosh βL (0.5-X )-cosh βLX éëùû;由式(29)可求得弹性-硬化过渡与完全硬化阶段的临界拉力T c h0=2τp sinh βLβ=22.41kN /m㊂2.4.3㊀完全硬化阶段代入式(32)得到ρ=cosh βL (1-X )R f cosh βL;代入式(35)求得完全硬化阶段的极限拉力T 04=2τp tanh βLR f β=27.96kN 
/m㊂根据上述界面剪应力计算表达式,可得应变硬化筋材拉拔过程中弹性㊁弹性-硬化过渡和完全硬化3个阶段的界面剪应力演化规律,如图5所示㊂由图5可知:当T ɤ22.15kN /m 时,界面处于弹性阶段,剪应力从拉拔端到自由端呈非线性减小,当T =22.15kN /m 时,界面处于弹性与弹性-硬化过渡的临界阶段,拉拔端达到剪应力峰值56kPa;当22.15kN /mɤT ɤ22.41kN /m 时,界面处于弹性-硬化过渡阶段,此阶段拉拔端剪应力达到峰值后带动自由端继续增大,剪应力从拉拔端到自由端整体呈减小趋势,当T =22.41kN /m 时,即自由端也达到峰值后,界面将进入完全硬化阶段;当22.41kN /m ɤT ɤ27.96kN /m 时,界面处于完全硬化阶段,此阶段拉拔端剪应力继续增大,直至T =27.96kN /m 时,拉拔端剪应力达到筋材极限应力,筋材发生破坏㊂图5㊀拉拔过程各阶段剪应力演化规律(硬化筋材)Fig.5㊀Evolution of shear stress in different stages of pull-outprocess (hardened reinforcement)3㊀应变软化筋材拉拔界面分析3.1㊀软化筋材模型应变软化筋材界面剪应力与位移试验曲线达到峰值前可近似表现为弹性关系,之后表现为塑性软化和塑性流动㊂本文将这种曲线特征简化为如图6所示的三线性剪应力-位移关系(τ-u )㊂由图6可知,第1阶段(OA 段),以直线表示剪应力达到峰值前剪应力与位移关系;第2阶段(AB 段),以直线表示筋材的应变软化;第3阶段(BC 段),以水平直线表示筋材的塑性流动㊂图6中,k s1为OA 段斜率,k s1=τp /u p ;k s2为AB 段斜率,τr 为界面残余剪应力,u r 为对应位移,k s2=(τr -τp )/(u r -u p )<0;类比破坏比的定义,将两者之间的比值定义为Rᶄf ,即Rᶄf=τr /τp ㊂7604煤㊀㊀炭㊀㊀学㊀㊀报2020年第45卷τ=k s1u,0ɤuɤu pk s2(u-u p)+τp,u p<uɤu rRᶄf k s1u p,u>u rìîí(36)图6㊀三线性界面剪应力与位移(τ-u)关系Fig.6㊀Relationship between interface shear stressand displacement of trilinear(τ-u)㊀㊀基本假定:将应变软化筋材筋土拉拔过程划分为弹性㊁弹性-软化过渡㊁完全软化㊁软化-残余过渡和完全残余5个阶段[16],分别对应图7(a)~(e)(L s为筋材拉拔过程中软化区的长度)㊂通过理论计算可得到各个拉拔阶段的界面拉力㊁剪应力和位移计算表达式㊂3.2㊀软化模型验证笔者选取文献[3]中法向应力100kPa压实度为0.85条件下单向土工格栅与黏性土的拉拔试验结果进行模拟,模拟参数见表3㊂为验证软化模型的准确性与适用性,引入理想弹塑性模型[18]和弹性-指数软化模型[19]2种经典筋土界面模型进行对比分析,模拟结果如图8所示㊂从图8可以看出,理想弹性模型在无法体现界面的软化过程,而弹性-指数软化模型和三线性剪应力-位移弹塑性软化模型能够较好的反映界面软化特性,但三线性软化模型更为计算简便,表达式简洁,具有更好的适用性㊂在加筋土拉拔试验时试验曲线出现应变软化特征时可适用于此软化模型,该模型能够合理的模拟应变软化筋材界面的拉拔特性,而且图8中阶段Ⅱ和Ⅳ两个过渡阶段区间不明显㊂单向格栅加筋土产生应变软化现象是因为在加筋过程中当拉拔力随着拉拔位移的增大达到峰值时,筋材整体出现滑动而发生应变软化特征㊂图7㊀拉拔模型分析中应变软化筋材渐进拉拔过程Fig.7㊀Progressive pull-out process of strain softening reinforcement in pull-out model analysis表3㊀拉拔试验的模拟参数[3]Table3㊀Simulation parameters of pull-out test in reference[3]σn/kPa L/m t/mm E/GPa k s1/(MPa㊃m-1)k s2/(MPa㊃m-1)tanφgs Rᶄf 1000.220.41.73-0.520.210.643.3㊀软化拉拔界面全历程分析3.3.1㊀弹性阶段(Ⅰ阶段)与上述应变硬化型筋材在弹性阶段的分布规律相同,联立式(5)和(36)可得到此阶段的拉力㊁剪应力和位移关系表达式为8604第12期杜常博等:筋土拉拔界面弹塑性模型的全过程分析图8㊀单向土工格栅拉拔试验结果[3]Fig.8㊀Pull-out test results of unidirectional geogrids [3]T e (x )=T 01sinh α(L -x )sinh αL(37)τe (x )=αT 012cosh α(L -x )sinh αL(38)u e (x )=αT 012k s1cosh α(L -x )sinh αL(39)㊀㊀将x =0代入式(39)中可得拉拔位移u e0的变换式T 01=2tanh αLαk s1u e0(40)㊀㊀当u e0=u p 时,τp =k s1u e0,由式(40)可得弹性阶段与弹性-硬化过渡阶段的临界拉力T c eh0,也就是在弹性阶段的最大拉力T c es0=2τp tanh αLα(41)3.3.2㊀弹性-软化过渡阶段(Ⅱ阶段)界面剪应力逐渐增大并从拉拔端向尾部传递,直至拉拔端剪应力达到峰值,开始发生应变软化现象,进入Ⅱ阶段㊂定义临界点Q 1(x =L s )划分弹性区和软化区,当0ɤx <L s ,界面处于Ⅱ阶段软化区,而当0ɤx <L s ,界面处于Ⅱ阶段弹性区(其中,L s 为软化区的长度)㊂(1)弹性区(L s ɤx ɤL )㊂Ⅱ阶段弹性区界面拉力㊁剪应力和位移的分布规律与完全弹性阶段相似,则可以得到T e (x )=T 02sinh α(L -x )sinh α(L -L s )(42)τe (x )=αT 022cosh α(L -x )sinh α(L -L s )(43)u e (x )=αT 022k s1cosh α(L -x )sinh α(L -L s )(44)㊀㊀考虑到过渡点Q 1的界面剪应力等于峰值剪应力,则可以得到T 02=2τp tanh α(L -L s )α(45)㊀㊀(2)软化区(0ɤx ɤL s )㊂软化区界面剪应力与剪切位移的关系由式(36)定义,联立式(45)和式(36)可得d 2T d x 2+β2T =0(46)式中,β=-2k s2/(Et )>0㊂解式(46)可得T s (x )=Cᶄ3cos βx +Cᶄ4sin βx(47)式中,T s (x )为筋材在完全弹性阶段的拉力;Cᶄ3,Cᶄ4为积分常数㊂考虑边界条件:T s (x =0)=T 03T s (x =L s )=T e (x =L s ){(48)㊀㊀将式(48)边界条件代入到式(47)可得Cᶄ3=T 03Cᶄ4=2τp tanh α(L -L s )αsin βL s -T 03cot βL sìîí(49)㊀㊀由于弹性区与软化区的过渡点Q 1的剪应力连续,即τe (x =L s )=τs (x =L s ),则可以求得T 03:T 03=2τpsin βL s β+tanh α(L -L s )αcos βL séëùû(50)则可求得Ⅱ阶段软化区拉力㊁剪应力和位移的表达式:T s (x )=2τpéësin β(L s -x )β+tanh α(L -L s )αˑcos β(L s -x )ùû(51)τs (x )=τp éëcos β(L s -x )-βtanh α(L -L s )αˑsin β(L s -x )ùû(52)u s (x )=τp k s2éëcos β(L s -x )-βtanh α(L -L s )αsin β(L s -x )ùû-τp k s2+u p (53)㊀㊀当L s =L 时,由式(50)可得弹性-软化过渡阶段与完全软化阶段的临界拉拔力T c s0:T c s0=2τp sin βLβ(54)3.3.3㊀完全软化阶段(Ⅲ阶段)类似与Ⅱ阶段软化区的分析,式(47)仍然适用9604煤㊀㊀炭㊀㊀学㊀㊀报2020年第45卷于完全软化阶段,边界条件为T s (x =0)=T 04T s (x =L )=0{(55)则完全软化阶段的解为T s 
(x )=T 04(cos βx -cot βL sin βx )(56)τs (x )=βT 042(sin βx +cot βL cos βx )(57)u s (x )=βT 042k s2(sin βx +cot βL cos βx )-τp k s2+u p(58)㊀㊀将x =0代入式(58)可得此阶段拉拔位移u s0的变换式:T 04=2tan βLβ[k s2(u s0-u p )+τp ](59)㊀㊀在完全软化阶段,界面拉力和剪应力均减小,当u s0=u r 时,τr =k s2(u s0-u p )+τp ,由式(59)可得弹性-软化过渡阶段与完全软化阶段的临界拉拔力T c sr0,也就是完全软化阶段的最小拉拔力为T c sr0=2τr tan βLβ(60)3.3.4㊀软化-残余过渡阶段(Ⅳ阶段)当拉拔力减小到T c sr0,筋材拉拔端开始进入残余状态并逐渐向筋材尾部延伸,进入Ⅳ阶段㊂定义过渡点Q 2(x =L r )划分软化区和残余区,当0ɤx <L r ,界面处于Ⅱ阶段残余区,而当L r <x ɤL ,界面处于Ⅱ阶段软化区(其中,L r 为残余区长度)㊂(1)软化区(L s ɤx ɤL )㊂在完全软化阶段得到拉力㊁剪应力和位移的分布规律与软化-残余过渡阶段相似,根据式(56)~(59)可以得到Ⅳ阶段弹性区的拉力㊁剪应力和位移的关系式为T s (x )=T 05[cos β(x -L r )-cot β(L -L r )ˑsin β(x -L r )](61)τs (x )=βT 052[sin β(x -L r )+cot β(L -L r )ˑcos β(x -L r )](62)u s (x )=βT052k s2[sin β(x -L r )+cot β(L -L r )ˑcos β(x -L r )]-τp k s2+u p(63)㊀㊀由于过渡点Q 2处的界面剪应力等于残余剪应力τs (x =L r )=τr ,则根据式(61)可以得到T 05=2τr tan β(L -L r )β(64)㊀㊀(2)残余区(0ɤx ɤL r )㊂残余区界面剪应力与残余抗剪强度相等,由式(5)和(36)可得d Td x=-2τr (65)㊀㊀考虑边界条件:T r (x =0)=T 06T r (x =L r )=T s (x =L r ){(66)式中,T r (x )为筋材在完全残余阶段的拉力㊂则可得T 06=2τr tan β(L -L r )β+2τr L r (67)T r (x )=2τr (L r -x )+2τrtan β(L -L r )β(68)u r (x )=τr Et x 2-2τr Et éëL r x +tan β(L -L r )βx ùû+Cᶄ5(69)Cᶄ5=τr L 2r Et -2τr L r βEt tan β(L -L r )+τr -τp k s2+u p (70)㊀㊀将x =0,L r =L 代入式(67)得到软化-残余过渡阶段与完全残余阶段的临界拉力T c r0:T c r0=2τr L (71)3.3.5㊀完全残余阶段(Ⅴ阶段)将x =0,L r =L 代入式(69)得到筋材拉拔端的临界剪切位移:u cr0=τr L 2Et +τr -τpk s 2+u p (72)㊀㊀假定筋材在拉拔端的位移为uᶄ0,则剪切位移的分布可导出为u r (x )=τr Et (x 2-L 2)-2τr L Et (x -L )+τr-τpk s2+u p +(uᶄ0-u c r0)=uᶄ0+2τr L Et(x 2-2Lx )(73)㊀㊀此阶段拉力T c r0和界面剪应力τr 保持常数㊂因此,拉力在筋材不同位置的分布为T r (x )=2τr (L -x )(74)㊀㊀综上,应变软化筋材拉拔试验过程中的5个阶段均得到了封闭解,每2个阶段间临界拉力的结果见表4㊂3.4㊀界面剪应力分布规律为直观反映应变软化筋材筋土界面不同拉拔阶段的受力演化规律,对5个阶段界面剪应力的分布进行分析,所选的模型参数见表3㊂计算得到τp =σn tan φᶄgs =21kPa,τr =Rᶄf τp=0704第12期杜常博等:筋土拉拔界面弹塑性模型的全过程分析13.44kPa,α=2k s1/(Et )=2.08,β=-2k s2/(Et )=1.14㊂根据所给出的模型参数,代入以下各式㊂表4㊀软化拉拔模型两个阶段间临界拉力Table 4㊀Critical tensile force between two stages ofsoftening pull-out model拉拔阶段临界拉拔力表达式Ⅰ-ⅡT c es0=2τp tanh αLαⅡ-ⅢT c s0=2τp sin βLβⅢ-ⅣT c sr0=2τr tan βLβⅣ-ⅤT c s0=2τr L㊀㊀(1)弹性阶段㊂代入式(38)得到:ρ=cosh αL (1-X )cosh αL,代入式(41)求得完全弹性阶段与弹性-软化过渡阶段的临界拉力T c es0=2τp tanh αLα=7.95kN /m㊂㊀㊀(2)弹性-软化过渡阶段㊂①弹性区(0.5ɤX ɤ1)㊂令L s =L /2代入式(45)得到ρ=cosh αL (1-X )cosh(αL /2);㊀㊀②软化区(0ɤX ɤ0.5)㊂将L s =L /2代入式(52)得到ρ=cos βL (0.5-X )-βtanh(αL /2)αsinβL (0.5-X )㊂由式(54)可求得弹性-软化过渡阶段与完全软化阶段的临界拉力T c s0=2τp sin βLβ=8.33kN /m㊂㊀㊀(3)完全软化阶段㊂代入式(57)得到ρ=Rᶄf (tan βL sin βLX +cos βLX ),代入式(60)求得完全软化阶段与软化-残余阶段的临界拉力T c sr0=2τr tan βLβ=5.47kN /m㊂(4)软化-残余过渡阶段㊂①软化区(0.5ɤX ɤ1)㊂令L r =L /2代入式(62)得到ρ=Rᶄf [tan (βL /2)sin βL (X -0.5)+cos βL (X -0.5)];㊀㊀②残余区(0ɤX ɤ0.5)㊂进入残余区的界面剪应力ρ=Rᶄf =0.64㊂由式(71)可求得软化-残余过渡阶段与完全残余阶段的临界拉力T c s0=2τr L =5.38kN /m㊂(5)完全残余阶段㊂完全残余阶段界面剪应力ρ=Rᶄf=0.64㊂根据上述界面剪应力计算表达式,可得土工合成材料拉拔过程中弹性阶段㊁弹性-软化过渡阶段㊁完全软化阶段㊁软化-残余过渡阶段和完全残余阶段的界面剪应力演化规律,如图9所示㊂图9㊀拉拔过程各阶段剪应力演化规律(软化筋材)Fig.9㊀Evolution of shear stress in different stages of pull-out process soil (softened reinforcement)1704煤㊀㊀炭㊀㊀学㊀㊀报2020年第45卷㊀㊀由图9可知,当Tɤ7.95kN/m时,界面处于弹性阶段,剪应力从拉拔端到自由端呈非线性减小,当T=7.95kN/m时,界面处于弹性与弹性-软化过渡的临界阶段,拉拔端达到剪应力峰值21kPa;当7.95kN/mɤTɤ8.33kN/m时,界面处于弹性-软化过渡阶段,剪应力整体呈先增大后减小变化趋势,峰值点位于弹性区与软化区交界;随着拉力增大,软化区长度也不断增大,自由端剪应力逐渐向峰值靠近,当Tɤ8.33kN/m时,自由端也达到峰值后,界面将进入完全软化阶段;当5.47kN/mɤTɤ8.33kN/m 
时,界面处于完全软化阶段,剪应力从拉拔端到自由端呈非线性增大,当Tɤ5.47kN/m时,拉拔端剪应力达到残余应力,界面将进入软化-残余过渡阶段㊂当5.38kN/mɤTɤ5.47kN/m时,界面处于软化-残余过渡阶段阶段,此阶段拉拔端开始进入残余状态并逐渐向自由端过渡;当Tɤ5.38kN/m时,即自由端也达到残余应力后,界面将进入完全残余阶段,此阶段剪应力保持残余应力不变㊂4㊀结㊀㊀论(1)针对土工合成材料拉拔时出现的应变硬化和应变软化现象,提出了双线性剪应力-位移弹塑性硬化模型和三线性剪应力-位移弹塑性软化模型,将应变硬化筋材的筋土拉拔过程分为弹性阶段㊁弹性-硬化过渡阶段和完全硬化阶段,将应变软化筋材的筋土拉拔过程分为弹性阶段㊁弹性-软化过渡阶段㊁完全软化阶段㊁软化-残余过渡阶段和完全残余阶段㊂(2)通过界面基本控制方程,分别推导了拉拔荷载下应变硬化和应变软化2种塑性变形特征筋材不同阶段界面拉力㊁剪应力和位移的解析解;同时分析了界面剪应力在不同拉拔阶段的演化规律,反映了筋土界面的渐进性破坏㊂(3)2组模型预测结果与拉拔试验数据基本吻合,验证了所提出2种弹塑性模型对筋材在拉拔界面中渐进破坏分析的有效性;在筋材的拉拔过程中,一般过渡阶段不明显㊂参考文献(References):[1]㊀施建勇,钱学德,朱月兵.垃圾填埋场土工合成材料的界面特性试验方法研究[J].岩土工程学报,2010,32(5):699-692.SHI Jianyong,QIAN Xuede,ZHU Yuebing.Experimental methods for interface behaviors of geosynthetics in landfills[J].Chinese Jour-nal of Geotechnical Engineering,2010,32(5):699-692. [2]㊀林伟岸,张宏伟,詹良通,等.土工膜/土工织物界面大型斜坡模型试验研究[J].岩土工程学报,2012,34(10):1590-1596.LIN Weian,ZHANG Hongwei,ZHAN Liangtong,et rge-scaleramp model tests on geomembrane/geotextile interface[J].Chinese Journal of Geotechnical Engineering,2012,34(10):1590-1596.[3]㊀史旦达,刘文白,水伟厚.单双向塑料土工格栅与不同填料界面作用特性对比试验研[J].岩土力学,2009,30(8):2237-2244.SHI Danda,LIU Wenbai,SHUI parative experimental studies of interface characteristics between uniaxial/biaxial plas-tic geogrids and different soils[J].Rock and Soil Mechanics,2009, 30(8):2237-2244.[4]㊀杨敏,李宁,刘新星,等.土工布加筋土界面摩擦特性试验研究[J].西安理工大学报,2016,32(1):46-51.YANG Min,LI Ning,LIU Xinxing,et al.Experimental research on interface frictional behaviors of the geotextile-reinforced soil[J].Journal of Xi an University of Technology,2016,32(1):46-51.[5]㊀SAWICKI A.Modelling of geosynthetic reinforcement in soil retai-ning walls[J].Geosynthetics International,1998,5(3):327-345.[6]㊀陈榕,栾茂田,赵维,等.土工格栅拉拔试验及筋材摩擦受力特性研究[J].岩土力学,2009,30(4):960-964.CHEN Rong,LUAN Maotian,ZHAO Wei,et al.Research on pull-out test and frictional resistance characteristic of geogrids[J].Rock and Soil Mechanics,2009,30(4):960-964.[7]㊀LIU H Y,HAN H Y,AN H M,et al.Hybrid finite-discrete ele-ment modelling of asperity degradation and gouge grinding during di-rect shearing of rough rock joints[J].International Journal of Coal Science&Technology,2016,3(3):295-310.[8]㊀刘续,唐晓武,申昊,等.加筋土结构中筋材拉拔力的分布规律研究[J].岩土工程学报,2013,35(4):800-804.LIU Xu,TANG Xiaowu,SHEN Hao,et al.Stress distribution of rein-forcement of reinforced soil structures under drawing force[J].Chi-nese Journal of Geotechnical Engineering,2013,35(4):800-804.[9]㊀易富,张旖旎,王世杰,等.土工格栅与尾矿拉拔试验及筋材拉力研究[J].硅酸盐通报,2018,37(6):1836-1840,1851.YI Fu,ZHANG Yini,WANG Shijie.Research on pull-out test and tensile of geogrids and tailings[J].Bulletin of the Chinese Ce-ramic Society,2018,37(6):1836-1840,1851.[10]㊀GURUNG N,IWAO Y,MADHAV M R.Pullout test model for ex-tensible reinforcement[J].International Journal for Numerical andAnalytical Methods in Geomechanics,1999,23(12):1337-1348.[11]㊀ESTERHUIZEN J B,FLIZ G M,DUNCAN J M.Constitutive behav-ior of geosynthetic interface[J].Journal of Geotechnical and Geo-environmental Engineering,2001,127(10):834-840. [12]㊀李明飞,郑效峰,那达慕,等.土工合成材料界面应变软化特性的一种本构新模型[J].沈阳工业大学学报,2015,37(1):97-101.LI Mingfei,ZHENG Xiaofeng,NA Damu.A new constitutive modelfor strain softening behavior in geosynthetic interface[J].Journalof Shenyang University of Technology,2015,37(1):97-101.[13]㊀王军,林旭,符洪涛.砂土-格栅筋土界面特性的本构模型研究[J].岩土力学,2014,35(S2):75-84.WANG Jun,LIN Xu,FU Hongtao.Study of constitutive model ofsand-geogrid interface behavior in geogrid/geotextile reinforced soil[J].Rock and Soil Mechanics,2014,35(S2):75-84. [14]㊀林伟岸,朱斌,陈云敏,等.考虑界面软化特性的垃圾填埋场斜坡上土工膜内力分析[J].岩土力学,2008,29(8):2063-2069.2704。
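As a quick numerical cross-check of the bilinear hardening model derived above, the short script below evaluates the peak interface shear stress (Eq. (6)) and the critical pull-out forces at the end of the elastic stage and of the elastic-hardening transition (Eqs. (16) and (29)), using the Table 1 parameters as read here: sigma_n = 100 kPa, L = 0.2 m, t = 2 mm, E = 5 GPa, Ks1 = 4.23 MPa/m, Ks2 = 0.47 MPa/m, tan(phi'_gs) = 0.56. The script is an illustration based on those equations, not code from the paper.

```python
# Quick numerical check of the critical pull-out forces of the bilinear hardening
# model (Eqs. (16) and (29)), using parameters read from Table 1 for the biaxial
# geogrid case. Illustrative script only; not from the paper.
import math

sigma_n = 100e3      # normal stress, Pa
L = 0.2              # reinforcement length, m
t = 2e-3             # reinforcement thickness, m
E = 5e9              # tensile modulus, Pa
Ks1 = 4.23e6         # elastic interface shear stiffness, Pa/m
Ks2 = 0.47e6         # hardening interface shear stiffness, Pa/m
tan_phi = 0.56       # interface friction coefficient

tau_p = sigma_n * tan_phi                 # peak interface shear stress, Eq. (6)
alpha = math.sqrt(2.0 * Ks1 / (E * t))    # elastic-stage parameter, 1/m
beta = math.sqrt(2.0 * Ks2 / (E * t))     # hardening-stage parameter, 1/m

# Critical pull-out force (per unit width) at the end of the elastic stage, Eq. (16)
T_eh0 = 2.0 * tau_p * math.tanh(alpha * L) / alpha
# Critical pull-out force at the end of the elastic-hardening transition, Eq. (29)
T_h0 = 2.0 * tau_p * math.sinh(beta * L) / beta

print(f"tau_p = {tau_p/1e3:.1f} kPa, alpha = {alpha:.2f} 1/m, beta = {beta:.2f} 1/m")
print(f"T_c_eh0 = {T_eh0/1e3:.2f} kN/m   (paper quotes 22.15 kN/m)")
print(f"T_c_h0  = {T_h0/1e3:.2f} kN/m   (paper quotes 22.41 kN/m)")
```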
Application of multi-streamer towing technology in marine seismic acquisition

1 Terminology

To help the reader understand the technical terms used in this article, the key terms are briefly introduced below, so as to deepen understanding of how towing systems are applied in marine geophysical exploration.

Seismic survey vessel: offshore seismic work is carried out by a seismic crew organized around a vessel. The vessel is fitted with a seismic acquisition and recording system, a source control system, air compressors, and navigation and positioning equipment; using dedicated marine streamers and receivers, seismic waves are excited and recorded continuously while the vessel is under way. The vessel must have enough horsepower to tow all of this equipment, and since efficiency is the priority during acquisition, the vessel's power and maneuverability determine acquisition efficiency.

Streamer separation: the crossline distance between adjacent streamers, typically 100 m in conventional operations. At sea, the separation between streamers is determined mainly by the length of the ropes connecting them.

Source sub-array separation: a source is generally assembled from several sub-arrays, and the spacing between them is defined as the sub-array separation. The source system is usually spread with self-deflecting spreaders: deflector vanes are mounted on the outermost sub-array, ropes of a set length are used between the sub-arrays, and the sub-arrays on the same side are towed together so that the whole source is deployed.

Streamer (spread): in marine seismic exploration, the streamer is towed behind the vessel to receive the seismic signals. Because its specific gravity is essentially the same as that of seawater, it can, with the help of depth controllers, be set at any depth, and is therefore also called a neutrally buoyant streamer. Besides the marine receivers, it contains magnetic compasses, compass birds, acoustic birds, RGPS units, pressure sensors, depth sensors, and so on. At present, solid digital neutrally buoyant streamers made by Sercel are mainly used.

Spreader (paravane): a large deflecting device used in marine seismic exploration. The vessel's spreader winch tows the spreader on a high-strength Dyneema rope; the spreader, fitted with deflector vanes, sits at an angle to the tow rope and is pushed outward by the water flow, thereby pulling the streamer outward. The spreaders most commonly used on multi-streamer vessels at present are the BOAR 48 and the MODE 53. Vessel 721 uses MODE 53 spreaders.

Head float: a float with 2 t of buoyancy, used to control the depth of the streamer's head. It is connected to the lead-in section by a depth rope so that the streamer head is held at the required depth.

2 Principle of multi-source, multi-streamer underwater towing

The traction spreading method is currently the most widely used way of spreading underwater systems on marine seismic vessels, and it is used extensively to spread gun arrays and streamers. The principle of traction spreading is to use underwater spreading devices to pull the streamers or gun arrays outward.
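To make the towing geometry concrete, the sketch below computes the crossline tow-point offsets of a symmetric multi-streamer spread (conventional 100 m separation) and the divergence angle at which a paravane must hold the outermost line for a given layback. It is a simplified, illustrative calculation only (straight ropes, no current or feathering), not software used aboard any vessel; the 8-streamer count and 500 m layback are assumed example values.

```python
# Simplified geometry of a symmetric multi-streamer spread. Illustration only:
# straight tow ropes, no feathering or current; values are typical examples.
import math

def spread_positions(n_streamers, separation_m):
    """Crossline tow-point offsets (m) for a spread centered on the vessel track."""
    half_width = (n_streamers - 1) * separation_m / 2.0
    return [i * separation_m - half_width for i in range(n_streamers)]

def divergence_angle_deg(crossline_offset_m, layback_m):
    """Angle between the vessel track and the line from the stern to the tow point."""
    return math.degrees(math.atan2(abs(crossline_offset_m), layback_m))

if __name__ == "__main__":
    offsets = spread_positions(n_streamers=8, separation_m=100.0)  # conventional 100 m separation
    print("tow-point offsets (m):", offsets)
    # The paravane (spreader) has to hold the outermost line at roughly this angle
    print("outer divergence angle:",
          round(divergence_angle_deg(offsets[-1], layback_m=500.0), 1), "deg")
```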
Experimental evaluation of interface doses in the presence of air cavities compared with treatment planning algorithms a…B.H.Shahine b)Department of Physics and Astronomy,University of British Columbia,6224Agricultural Road,Vancouver BC,V6T1Z1,CanadaM.S.A.L.Al-Ghazi c)Fraser Valley Cancer Centre,British Columbia Cancer Agency,1375096th Avenue,Surrey BC,V3V1Z2,CanadaE.El-KhatibVancouver Cancer Centre,British Columbia Cancer Agency,600West10th Avenue,Vancouver BC,V5Z4E6,Canada and Department of Physics and Astronomy,University of British Columbia,6224Agricultural Road,Vancouver BC,V6T1Z1Canada͑Received20February1998;accepted for publication18December1998͒A series of experiments were carried out to simulate air cavities in a polystyrene phantom.Dosewas measured at air/polystyrene interface and as a function of depth.Results of experiments werecompared to calculations done using three treatment planning systems.These systems employBatho,modified Batho,and the equivalent tissue–air-ratio methods for inhomogeneity corrections.The measured interface dose decreased by55%for a5cm air gap,5ϫ5cm2field size,and6MVphotons.This has been attributed to a lack of electronic equilibrium and dispersion of secondaryparticles transported through the air gap.These results are at variance with predictions of calcula-tions using three treatment planning systems,for which only a10%decrease was calculated.Thisis because the calculation algorithms employed do not incorporate electron transport.Further ex-periments were conducted to study the contribution of scatter from sides of the walls of the cavities.Dose measurements as a function of depth were also performed to investigate the effect of primaryfluence attenuation.The Batho algorithm did not show any sensitivity to the position of air gapsidewalls.This points to the need for proper inclusion of disequilibrium effects and shape ofinhomogeneity.©1999American Association of Physicists in Medicine.͓S0094-2405͑99͒00303-X͔Key words:interface dosimetry,inhomogeneity corrections,treatment planning,radiotherapyI.INTRODUCTIONSimplistic radiation dose calculations at a point usually pre-suppose that the point is located in a region where electronic equilibrium exists.At interfaces of materials of different density and atomic number this assumption breaks down.A loss of both longitudinal and lateral electronic equilibrium occurs,the extent of which depends on the energy,radiation field size,and the range of charged particles set in motion. As the x-ray energy is increased,the range of charged par-ticles set in motion also increases.Therefore,as the beam energy is increased the region of electronic disequilibrium becomes significant particularly for low density tissues. 
These effects are clinically important in the thoracic region where the lung,a low density organ,occupies a large vol-ume.Accurate dosimetry is critical to reduce toxicity in radia-tion therapy.Dose measurements and calculations both within and beyond inhomogeneities have been extensively reported.1–3The most commonly used dose correction meth-ods for lung inhomogeneity correct the dose for changes in photonfluence but do not account for changes in charged particle transport and therefore may not accurately predict the dose in interface regions.1,4,5These methods are:the ratio of tissue–air-ratios͑TARs͒,the Batho or power law correction,6the equivalent TAR͑ETAR͒method7or varia-tions thereof.8The introduction of a low density material alters the ra-diation transport in a complex manner.The primary transmit-ted radiation is increased because the lower density material attenuates the beam less.However,there are less scattered photons by the lower density material.This produces a de-crease in dose.There is also less charged particles set in motion with larger range.The result is that inhomogeneity corrections will be greater for smallerfield sizes and lower x-ray energies.4,5Besides the lung,other regions of low density are air cavi-ties situated in the upper respiratory tract.These may be larger than1cm in cross section.The dose received by tis-sues situated at the interface of the air channel or beyond is of clinical significance and may influence outcome of treat-ments with curative intent.Several studies have reported un-derdosing at interfaces due to loss of electronic equilibrium.9,10The degree of underdosing depends on the energy of the radiation beam,field size,and geometry of the air cavity.Directly at the interface the dose is lower for higher energy and smallerfield size.11–13Beyond the inter-face the dose becomes greater than its corresponding value in homogeneous tissue.13In an investigation of air cavity inter-face dose in a humanoid phantom the variation in dose with energy was found to be clinically insignificant,however,a strong dependence on the shape and geometry of the air cav-ity is reported.14The present study was initiated to quantify the extent of discrepancy between measurements and calculations using three treatment planning systems and to assess the dose at the air/polystyrene interface and at depths for varying air gap geometries.II.MATERIALS AND METHODS A.Experimental procedureIrradiations were done with 6and 18MV x rays produced by a Philips SL20linear accelerator ͑Elekta Oncology Systems/Philips Medical Systems,Sweden and UK ͒.Field sizes of 5ϫ5and 10ϫ10cm 2at a source to chamber distance ͑SCD ͒of 100cm were used.The experimental setup is de-picted in Fig. 
1. A parallel plate air ionization chamber (Markus PTW model 30-329, Victoreen Nuclear Associates, Carle Place, NY) was used for the measurements. This chamber has a polyethylene window of thickness 2.7 mg/cm^2, a 5.4-mm-diam circular collector, a nominal volume of 0.05 cm^3, and a 2 mm electrode separation. The chamber is located at the air/polystyrene interface at a constant distance of 100 cm from the source (SCD setup). The effective point of measurement is the proximal surface of the chamber. The polarity effect was investigated and found to be less than 1%. The air gap was varied from 1 to 5 cm with 3 cm polystyrene supported by sidewalls above the air gap. The dose D_i was measured at the air/polystyrene interface and the dose D_h was measured at 3 cm of homogeneous phantom depth without an air gap. The relative dose was then calculated as D_i/D_h. In the first experiment, the lateral dimension of the air gap was made larger than the radiation field size [Fig. 1(a)]. D_i/D_h was calculated using several dose calculation programs and compared to measured values. In the second experiment the lateral dimension of the air gap was varied from 0x0 cm^2 (no air gap) to 20x20 cm^2 [Fig. 1(b)]. The lateral dimensions of the air cavity were altered to investigate the contribution of scattered radiation on the interface dose for the 5x5 cm^2 field size. The thickness (t) of the air gap used was 3 and 5 cm for the second experiment. The dose normalization point was situated in a homogeneous polystyrene phantom at depths 3 cm + t for the 0x0 cm^2 air gap. The effect of primary photon attenuation was investigated by adding polystyrene layers on top of the point of measurement, keeping the SCD constant at 100 cm. Measurements were carried out for air gaps of 3 and 5 cm height and varying lateral dimensions in the third experiment. Polystyrene sheets were added to increase the depth of measurement up to 4 cm [Fig. 1(c)]. The lateral dimensions of the air gap were increased from 0 cm to larger than the radiation field size (infinity). Again dose normalization was at 0x0 cm^2 air gap and depth 3 cm + t.

B. Treatment planning systems

Three treatment planning systems were used for the relative dose calculation. These are: Cadplan (Varian Oncology Systems, Varian/Dosetek, Zug, Switzerland), General Electric Target 2 (General Electric, Milwaukee, WI), and an in-house developed treatment planning system (Xplan, Vancouver Cancer Centre, B.C., Canada). The Cadplan system employs three methods of inhomogeneity correction which are user selectable: the Batho, modified Batho, and the equivalent TAR (ETAR) methods. The modified Batho method uses only the descending portion of the TAR/TPR curve. For high energy photons, the buildup depth can be several centimeters. In this region the TAR/TPR value is not valid.15 This calculation does not reflect build-down and rebuildup at interfaces. Mathematically, the correction factors are represented as follows; for the Batho method [Cadplan External Beam Modeling Reference Manual, version 2.6.2, pp. 70-75 (Varian/Dosetek, Zug, Switzerland, May 1995)]:

$$ CF = K_N \prod_{m=1}^{N} T(d_m, A)^{(\mu_m - \mu_{m-1})/\mu_0}, \qquad (1) $$

where the path to the calculation point is divided into N layers of 1 cm thickness, T is the TAR value at depth d_m, m is the index for inhomogeneity boundaries, d_m is the distance between the grid point and the m-th inhomogeneity boundary, A is the field size, \mu_m is the linear attenuation coefficient for the m-th inhomogeneity layer, \mu_0 is the linear attenuation coefficient for water, and

$$ K_N = \frac{(\mu_{en}/\rho)_N}{(\mu_{en}/\rho)_0} \qquad (2) $$

is the ratio of the mass energy absorption coefficient for the N-th layer to that for water. In the modified Batho method the depth of D_max is added to the depth d_m, which gives

$$ CF = K_N \prod_{m=1}^{N} T(d_m + D_{max}, A)^{(\mu_m - \mu_{m-1})/\mu_0}. \qquad (3) $$

Therefore the Batho and the modified Batho models differ only in the depth definition for the TAR/TPR value. The second system studied is the GE Target 2. It uses the Batho method with the capability of doing three-dimensional (3D) dose calculation. Finally, an in-house system (Xplan) implemented at the Vancouver Cancer Centre was also used.

FIG. 1. Experimental setup used for the three experiments is shown. The closed rectangle represents the position of the Markus chamber.
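As a rough numerical illustration of the power-law correction in Eqs. (1)-(3), the sketch below evaluates one common textbook statement of the Batho correction for a point lying below a single slab of relative density rho; the TAR curve and the slab geometry are invented placeholder values, and this is not code from Cadplan, GE Target 2, or Xplan. Note that in the experiments described here the amount of polystyrene above the detector is fixed, so the primary path length does not change with the air gap and the calculated corrections stay close to unity; the slab example below only shows how the power-law product is formed.

```python
# Illustrative single-slab Batho power-law correction for a point in water below
# a slab of relative density rho (a common textbook special case of Eq. (1)).
# The TAR curve and geometry are made-up placeholders; this is not any treatment
# planning system's code.
import math

def batho_cf_slab(tar, d_near, d_far, rho, d_max=0.0):
    """
    tar    : callable tar(depth_cm) -> TAR/TPR for the field size of interest
    d_near : distance (cm) from the calculation point to the nearer slab boundary
    d_far  : distance (cm) from the calculation point to the far slab boundary
    rho    : slab density relative to water (air ~ 0, lung ~ 0.3)
    d_max  : add the buildup depth here to mimic the "modified Batho" variant, Eq. (3)
    """
    return (tar(d_near + d_max) / tar(d_far + d_max)) ** (1.0 - rho)

# Placeholder TAR curve with a short buildup region followed by ~exponential falloff.
tar = lambda d: (1.0 - math.exp(-1.5 * (d + 0.2))) * math.exp(-0.05 * d)

# Point 2 cm below a 3 cm air cavity (slab boundaries at 2 cm and 5 cm above it).
print("Batho CF          :", round(batho_cf_slab(tar, 2.0, 5.0, rho=0.001), 3))
print("modified Batho CF :", round(batho_cf_slab(tar, 2.0, 5.0, rho=0.001, d_max=1.5), 3))
```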
than the radiation field size ͓Fig.1͑b ͔͒,photons scattered from the walls of the cavity as well as electron transport,contribute to the overall behavior of the absolute dose with change in the air gap thickness.Variation in the lateral di-mensions of the cavity will eventually influence this behav-ior.As the lateral dimensions of the cavity were increased from 0ϫ0cm 2͑no cavity ͒to 2ϫ2cm 2the relative dose increased since the primary photon fluence is no longer at-tenuated.For both 6MV ͓Fig.4͑a ͔͒and 18MV ͓Fig.4͑b ͔͒the dose reaches a maximum at lateral dimensions of 2ϫ2cm 2.It decreases afterwards due to a smaller number of pho-tons scattered by the walls and reaching the detector.Fur-thermore,the increase in the air gap thickness resulted at first in a higher dose because of greater scatter from the thicker walls;but as the air gap dimension approaches radiationfieldF IG parison of calculated and measured relative dose (D i /D h )for 6MV photons is shown for ͑a ͒10ϫ10cm 2and ͑b ͒5ϫ5cm 2field sizes.Calculations of relative dose using the Batho,modified Batho,and ETAR correction methods as implemented on Cadplan lie between 1and 0.9while measurements with the Markus chamber show a decrease down to 0.45.The three methods show the same trend giving higher relative dose for the larger field size.Greater agreement is seen between the modified Batho and the ETAR methods.Results are normalized to unity at 0cm air gap with 3cm of overlying polystyrene.size dimension,thicker air gaps result in a lower dose.The latter is consistent with the results of the first experiment ͑Fig.2͒where the phenomenon is attributed to a reduction in electron transport.Calculations using the Batho and the modified Batho methods ͓Eqs.͑1͒and ͑3͒,respectively ͔did not show sensitivity to the dimension of the sidewalls as expected.Figures 5͑a ͒–5͑d ͒show the variation of dose as a function of depth for 10ϫ10cm 2͓͑a ͒and ͑b ͔͒and 5ϫ5cm 2͓͑c ͒and ͑d ͔͒fields irradiated with 6MV photons using air gap thick-ness of 3cm ͓Figs.5͑a ͒and 5͑c ͔͒and 5cm ͓Figs.5͑b ͒and 5͑d ͔͒.Figures 6͑a ͒–6͑d ͒show the results of the same irradia-tion conditions for 18MV photons.There was a buildup in the dose only for cases where lateral wall dimensions ap-proach or exceed the field size and this buildup was more significant for the smaller field size and larger air gaps.This is consistent with experiment one where a significant drop in dose is observed as the air gap increases.The maximum doseoccurs when the thickness of polystyrene approaches the maximum range of primary electrons for the respective en-ergies investigated ͑6and 18MV ͒in this second buildup region.Furthermore,for lateral wall dimensions studied the relative dose converges at 2cm for 6MV and 4cm of depth for 18MV below the air/polystyrene interface.At this depth,scattered photons ͑from the walls ͒are all attenuated making the size of the sidewalls irrelevant at larger depths.IV.CONCLUSIONSThe perturbation of dose distributions by air cavities was investigated as a function of photon energy,field size,size and shape of the cavity,and depth of the point of measure-ment for varying geometries.Data presented in this report show alteration in the dose at air/polystyrene interface for different air gap thickness as determined bycommercialF IG .3.Relative dose calculation is shown as a function of air gap thickness using the Batho correction method as implemented on the Cadplan,GE Target 2,and Xplan treatment planning systems.A 6MV beam and two field sizes ͑a ͒10ϫ10cm 2and ͑b ͒5ϫ5cm 
2wereused.F IG .4.Measurements of the relative dose as a function of air gap side for a 5ϫ5cm 2field size are shown.Sides were varied from 0ϫ0͑no gap ͒cm 2to 20ϫ20cm 2for two different air gap thicknesses ͑3and 5cm ͒:͑a ͒6MV photons and ͑b ͒18MV photons.The dose was normalized to the dose in homogeneous phantom with 6cm ͑t ϭ3cm ͒and 8cm ͑t ϭ5cm ͒of polysty-rene over the detector.treatment planning systems using the Batho,modified Batho,and ETAR methods compared to experiments.The latter demonstrated a significant underdosing down to a relative dose at the interface of 0.45for a 5cm air gap while the treatment planning systems overpredicted the relative dose ͑0.9͒.These differences are due to the fact that electronic disequilibrium is not accounted for by the algorithms used in the treatment planning systems.This requires increasing the monitor units by the factor of ͑calculated dose ͒/͑measured dose ͒to achieve the intended dose at the interface.Geom-etries of air gap widths ranging from 2ϫ2to 20ϫ20cm 2demonstrated the effect of the scatter component of the dose from the walls and its magnitude at interfaces.The under-dosing observed may be significant at depths of several cen-timeters from the interface for smaller field sizes and air gaps approaching the field size in their lateral dimensions.TheseF IG .5.Measurements of the relative dose as a function of depth and air gap thickness for 6MV photons are shown.Different sizes of air gap side-walls were used ranging from zero to infinity in which case the sides are much larger than the field size.For a 10ϫ10cm 2field size:͑a ͒3cm and ͑b ͒5cm air gap thickness;for a 5ϫ5cm 2field size:͑c ͒3cm and ͑d ͒5cm air gap thickness.The depth is relative to the interface.The dose was normal-ized to the dose in homogeneous phan-tom with 6cm ͓͑a ͒and ͑c ͔͒and 8cm ͓͑b ͒and ͑d ͔͒of polystyrene over thedetector.F IG .6.Measurements of the relative dose as a function of depth and air gap thickness for 18MV photons are shown.Different sizes of air gap side-walls were used ranging from zero to infinity in which case the sides are much larger than the field size.For a 10ϫ10cm 2field size:͑a ͒3cm and ͑b ͒5cm air gap thickness;for a 5ϫ5cm 2field size:͑c ͒3cm and ͑d ͒5cm air gap thickness.The depth is relative to the interface.The dose was normalized to the dose in homogeneous phantom with 6cm ͓͑a ͒and ͑c ͔͒and 8cm ͓͑b ͒and ͑d ͔͒of polystyrene over the detec-tor.findings point to the need for proper inclusion of electron disequilibrium effects at interfaces and would be possible with Monte Carlo simulation and superposition/convolution algorithms.5This is an area of active investigation.16,17 ACKNOWLEDGMENTSThe authors are grateful to Dr.K.Yuen for fruitful dis-cussions and the referees for their constructive comments.B.H.S.is a recipient of a Natural Sciences and Engineering Research Council of Canada͑NSERC͒postgraduate scholar-ship.a͒Reference to commercial products does not imply endorsement by the authors.b͒Corresponding author;electronic mail:shahine@physics.ubc.cac͒Present address:Department of Radiation Oncology,University of Cali-fornia,Irvine,101The City Drive South,Orange,California92868.1J.J.Battista,T.R.Mackie,E.El-Khatib,and J.W.Scrimger,‘‘Lung dose corrections for6MV and15MV X-rays:Anomalies,’’Proceedings of the Eighth International Conference on the Use of Computers in Radiation Therapy,Toronto,Canada,9–12July1984͑unpublished͒.2J.R.Cunningham,‘‘Current and future development of tissue inhomoge-neity corrections for photon beam clinical dosimetry with 
the use of CT,’’in Computed Tomography in Radiation Therapy,edited by C.C.Ling,C.C.Rogers,and R.J.Morton͑Raven,New York,1983͒.3E.El-Khatib,‘‘The present status of lung dosimetry in radiation treatment planning,’’Med.Dosim.10,9–15͑1985͒.4T.R.Mackie,E.El-Khatib,J.J.Battista,J.Scrimger,J.Van Dyk,and J. R.Cunningham,‘‘Lung dose corrections for6MV and15MV x rays,’’Med.Phys.12,327–332͑1985͒.5T.R.Mackie,J.W.Scrimger,J.J.Battista,and E.El-Khatib,‘‘A con-volution method for calculating dose in situations of lateral electronic disequilibrium,’’Med.Phys.11,397͑1984͒.6H.F.Batho,‘‘Lung corrections in cobalt60beam therapy,’’J.Can. Assoc.Radiol.15,39–83͑1964͒.7M.R.Sontag and J.R.Cunningham,‘‘Corrections to absorbed dose calculations for tissue inhomogeneities,’’Med.Phys.4,431–436͑1977͒. 8J.W.Wong and J.A.Purdy,‘‘On methods of inhomogeneity corrections for photon transport,’’Med.Phys.17,807–814͑1990͒.9E.R.Epp,A.L.Boyer,and K.P.Doppke,‘‘Underdosing of lesions resulting from lack of electronic equilibrium in upper respiratory air cavi-ties irradiated by10MV x-ray beams,’’Int.J.Radiat.Oncol.,Biol.,Phys. 2,613–619͑1977͒.10P.M.Ostwald,T.Kron,and C.Hamilton,‘‘Assessment of mucosal un-derdosing in larynx irradiation,’’Int.J.Radiat.Oncol.,Biol.,Phys.36, 181–187͑1996͒.11M.E.J.Young and R.O.Kornelsen,‘‘Dose corrections for low-density tissue inhomogeneities and air channels for10MV x rays,’’Med.Phys. 10,450–455͑1983͒.12J.L.Beach,M.S.Mendiondo,and O.A.Mendiondo,‘‘A comparison of air-cavity inhomogeneity effects for cobalt-60,6and10MV x-ray beams,’’Med.Phys.14,140–144͑1987͒.13E.E.Klein,L.M.Chin,R.K.Rice,and B.J.Mijnheer,‘‘The influence of air cavities on interface doses for photon beams,’’Int.J.Radiat.On-col.,Biol.,Phys.27,419–427͑1993͒.14A.Niroomand-Rad,K.W.Harter,S.Thobejane,and K.Bertrand,‘‘Air cavity effects on the radiation dose to the larynx using Co-60,6MV,and 10MV photon beams,’’Int.J.Radiat.Oncol.,Biol.,Phys.29,1139–1146͑1994͒.15R.A.Fox and S.Webb,‘‘A comparison of inhomogeneity correction methods with Monte Carlo data,’’Australas Phys.Eng.Sci.Med.85, 452–462͑1979͒.16N.Papanikolaou,T.R.Mackie,C.Merger-Wells,M.Gehring,and P. Reckwerdt,‘‘Investigation of the convolution method for polyenergetic spectra,’’Med.Phys.20,1327–1336͑1993͒.17C.M.Wells et al.,‘‘Measurements of the electron dose distribution near inhomogeneities using a plastic scintillation detector,’’Int.J.Radiat.On-col.,Biol.,Phys.29,1157–1165͑1994͒.。