Preform Characterization in VARTM Process Model Development
[Repost] MRT Projection Parameter Settings and Principles
This is very useful. Original article: "MRT投影参数设置及原理" by 吉圈圈.

Projection conversion with MRT involves four topics: 1. parameter settings; 2. common pitfalls; 3. the underlying principles; 4. how to choose a projection and datum.

[1] Projection conversion settings

Albers projection parameters for China:
- Coordinate system: geodetic coordinate system
- Projection: Albers conic equal-area projection with two standard parallels
- Southern standard parallel: 25°N
- Northern standard parallel: 47°N
- Central meridian: 105°E
- Origin of coordinates: the intersection of 105°E and the equator
- False northing: 0; false easting: 0

Inputs in MRT:
- Smajor: 6370997
- Sminor: 6370997
- STDPR1: 25.0
- STDPR2: 47.0
- CenMer: 105.0
- OriginLat: 0.0

Notes: Smajor is the semi-major axis of the ellipsoid and depends on the datum; the 6370997 above is the radius of the spherical Earth. Sminor is the semi-minor axis; CenMer is the longitude of the central meridian; OriginLat is the latitude of the projection origin.

Albers projection parameters for Canada (inputs in MRT):
- PARAMETER["False_Easting", 0]
- PARAMETER["False_Northing", 0]
- PARAMETER["longitude_of_center", -96]  (corresponds to CenMer)
- PARAMETER["Standard_Parallel_1", 50]
- PARAMETER["Standard_Parallel_2", 70]
with Smajor and Sminor as above.

To look up other regions, just Google "region name + albers", e.g. "canada albers equal area".

For the Lambert Azimuthal projection, take the center latitude and longitude as the midpoint of your own study area.
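As a check on these parameter choices, the spherical Albers equal-area conic can be computed directly from Snyder's standard formulas with exactly the values above (R = 6370997 m, standard parallels 25°N/47°N, central meridian 105°E, origin latitude 0°). The Beijing coordinate is only an illustration:

```python
import math

# Spherical Albers equal-area conic (Snyder's formulas), using the China
# parameters given above: standard parallels 25N / 47N, central meridian
# 105E, origin latitude 0, sphere radius 6370997 m.
R, PHI1, PHI2, LAM0, PHI0 = 6370997.0, 25.0, 47.0, 105.0, 0.0

def albers(lon, lat):
    """Project geographic coordinates (degrees) to Albers x, y (metres)."""
    p1, p2 = math.radians(PHI1), math.radians(PHI2)
    n = (math.sin(p1) + math.sin(p2)) / 2          # cone constant
    C = math.cos(p1) ** 2 + 2 * n * math.sin(p1)
    rho = R * math.sqrt(C - 2 * n * math.sin(math.radians(lat))) / n
    rho0 = R * math.sqrt(C - 2 * n * math.sin(math.radians(PHI0))) / n
    theta = n * math.radians(lon - LAM0)
    return rho * math.sin(theta), rho0 - rho * math.cos(theta)

print(albers(105.0, 0.0))   # the origin maps to (0.0, 0.0)
print(albers(116.4, 39.9))  # e.g. Beijing, for illustration
```

The projected origin comes out at exactly (0, 0), matching the false easting/northing of 0 in the settings above.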
A Walkthrough of the mdetr Code

mdetr is an object-detection model, an improved variant built on the DETR architecture, aimed at improving detection performance and efficiency. In this article we go through mdetr's code and implementation details to help readers better understand and use this detection model.

First, let us look at the mdetr architecture. The mdetr model consists of three main parts: an encoder, a decoder, and a Transformer module. The encoder extracts features from the input image, typically with a pretrained convolutional neural network such as ResNet. The decoder is a sequence model built from Transformer modules that learns the relationship between objects and their positions in the detection task. The Transformer module performs object classification and position prediction.

Below we explain the mdetr code implementation step by step.
First we import the required libraries and modules:

```python
import torch
import torchvision
import torch.nn.functional as F
from torch import nn
from torch.optim import Adam
# As given in the original text; note that current torchvision releases
# do not actually ship DETR under this path.
from torchvision.models.detection.detr import DETR, Matcher, BipartiteMatcher, SetCriterion
```

Next we define the main structure of the mdetr model:

```python
class MDETR(nn.Module):
    def __init__(self, num_classes, num_queries):
        super().__init__()
        self.num_classes = num_classes
        self.num_queries = num_queries
        self.detr = DETR(num_classes=num_classes, num_queries=num_queries)
        self.matcher = Matcher(cost_class=1, cost_bbox=5, cost_giou=2)
```

In this code we define an MDETR class that inherits from nn.Module.
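The matcher above assigns predictions to ground-truth targets by minimizing a combined matching cost, the same bipartite-matching idea DETR uses. A tiny framework-free sketch of that idea (the cost matrix values are invented for illustration):

```python
from itertools import permutations

def match(cost):
    """Brute-force bipartite matching. cost[i][j] is the cost of assigning
    prediction i to target j; returns, per target, the prediction index of
    the minimum-total-cost assignment."""
    n_pred, n_tgt = len(cost), len(cost[0])
    best, best_perm = float("inf"), None
    for perm in permutations(range(n_pred), n_tgt):
        total = sum(cost[p][t] for t, p in enumerate(perm))
        if total < best:
            best, best_perm = total, perm
    return list(best_perm)

# Toy cost matrix: 3 predictions x 2 targets (values invented for illustration)
cost = [[0.9, 0.1],
        [0.2, 0.8],
        [0.5, 0.6]]
print(match(cost))  # [1, 0]: target 0 gets prediction 1, target 1 gets prediction 0
```

In real DETR-style models the Hungarian algorithm replaces this brute-force scan, and each entry combines classification, box-distance, and GIoU terms weighted as in `cost_class`, `cost_bbox`, and `cost_giou` above.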
SAS Clinical: A Compilation of 50 Interview Questions

Q: How can a SAS program be validated?
A: Write OPTIONS OBS=0 at the start of the code; and if the code runs in PC SAS, problems are flagged in the log with highlighted colors. These are two ways of validating a SAS program.

Q: Which statement does not perform automatic conversions in comparisons?
A: The WHERE statement: automatic conversions cannot be performed because WHERE-statement variables must exist in the data set.

Q: Explain the SUBSTR function.
A: SUBSTR extracts a substring from, or replaces the contents of, a character value.

Q: Explain PROC SORT.
A: PROC SORT sorts a SAS data set by one or more variables so that a new data set can be prepared for further use.

Q: Explain PROC UNIVARIATE.
A: PROC UNIVARIATE performs elementary numeric analysis and examines how the data are distributed.

Q: What is the CALL PRXFREE routine?
A: CALL PRXFREE is used with character-string matching; it frees the memory allocated for a Perl regular expression.

Q: Explain the APPEND procedure.
A: APPEND means adding at the end, so in SAS terms it adds one SAS data set to the end of another.

Q: Explain the BMDP procedure.
A: The BMDP procedure is used for analyzing data.

Q: Define RUN-group processing.
A: RUN-group processing submits a PROC step with a RUN statement without ending the procedure.

Q: Explain BY-group processing.
A: BY-group processing uses the BY statement to process data that are indexed, grouped, or ordered on the BY variables.

Q: What does the CALENDAR procedure do?
A: It displays data from a SAS data set in a monthly calendar format.

Q: Which functions are used for character handling?
A: UPCASE and LOWCASE.

Q: What is the use of the DIVIDE function?
A: DIVIDE returns the result of a division.

Q: Explain the TRANSLATE function.
A: TRANSLATE replaces specific characters in a string with the characters you specify.

Q: Explain the BOR function.
A: BOR is a bitwise logical operation that returns the bitwise logical OR of two arguments.

Q: Explain the CALL PRXCHANGE routine.
A: It performs pattern-matching replacement.

Q: Explain the ANYDIGIT function.
A: It searches a character string and returns the position of the first digit found.

Q: What is the CALL MISSING routine?
A: It assigns missing values to the specified character or numeric variables.

Q: What is the ALTER data set option?
A: It assigns an ALTER password, which stops users from changing the file.

Q: Explain the COMPRESS data set option.
A: It compresses the data set in the new output.

Q: Define formats.
A: A format is an instruction that SAS uses to write data values.

Q: How are variable formats handled by PROC COMPARE?
A: PROC COMPARE compares unformatted values.

Q: What are the features of the SAS system?
A: IPv6 support, new TrueType fonts, extended time notation, restart mode, universal printing, checkpoint mode, and ISO 8601 support.

Q: What is the use of $BASE64X?
A: It converts character data into ASCII text using base64 encoding.

Q: Explain the VFORMATX function.
A: It returns the format assigned to the value of the given expression.

Q: Define the STD function.
A: It returns the standard deviation of the nonmissing arguments.

Q: What is debugging?
A: Debugging is the technique of testing program logic, with the help of a debugger.

Q: Explain the FILECLOSE data set option.
A: It defines tape positioning when the data set is closed.

Q: What does ODS stand for?
A: Output Delivery System.

Q: What does CDISC stand for?
A: Clinical Data Interchange Standards Consortium.

Q: Which method is used to copy blocks of data?
A: The block I/O method.

Q: What is the procedure for copying an entire library?
A: A COPY statement followed by an input data library and an output data library.

Q: Define the MAX() function.
A: MAX() returns the largest value.

Q: What is the use of the SYSRC function?
A: It returns a system error number.

Q: What is SAS, and what functions does it perform?
A: SAS (Statistical Analysis System) is an integrated set of software solutions that helps users analyze data. It can:
- change, manipulate, analyze, and retrieve data;
- perform numerical analysis;
- write reports;
- improve quality.

Q: What is the basic structure of SAS programming?
A: SAS programs consist of:
- the DATA step, which retrieves and manipulates data;
- the PROC step, which interprets the data.

Q: What is the DATA step?
A: Its main function is to create SAS data sets by manipulating data.

Q: What is the PDV?
A: The Program Data Vector is the area of memory where the SAS system builds data sets, one observation at a time. When a program is executed, an input buffer is created to read the data values and assign them to their respective variables.

Q: What is the difference between the NODUPKEY and NODUP options?
A: NODUP checks for and removes identical observations. NODUPKEY checks all BY-variable values and eliminates any duplicates it finds.

Q: What is the use of PROC SUMMARY?
A: PROC SUMMARY is the same as PROC MEANS, i.e., it gives descriptive statistics, but it does not produce output by default; the PRINT option must be given for it to produce output.

Q: What are PROC PRINT and PROC CONTENTS used for?
A: PROC PRINT outputs a listing of the values of some or all of the variables in a SAS data set. PROC CONTENTS reports the structure of the data set rather than the data values.

Q: What does PROC GLM do?
A: PROC GLM performs covariance analysis, variance analysis, and multivariate and repeated-measures analysis of variance.

Q: What are SAS informats?
A: An informat is an instruction that SAS uses to read data values, e.g., to input data from external files.

Q: What does the CATX function do?
A: CATX inserts delimiters, removes trailing and leading blanks, and returns a concatenated character string.

Q: What is the use of PROC GPLOT?
A: PROC GPLOT identifies the data set that contains the plot variables. It has more options and can therefore create more colorful and fancier graphics.

Q: How do you sort in descending order?
A: Use the DESCENDING keyword in the PROC SORT code.

Q: What do the PUT and INPUT functions do?
A: INPUT converts character values into numeric values; PUT converts numeric values into character values.

Q: What is the difference between VAR B1 - B3 and VAR B1 -- B3?
A: A single dash specifies consecutively numbered variables; a double dash specifies the variables positionally between them in the data set. For example, given a data set with variables ID NAME B1 B2 C1 B3, VAR B1 - B3 returns B1 B2 B3, while VAR B1 -- B3 returns B1 B2 C1 B3.

Q: What is the basic syntax style in SAS?
A: The points important for running a SAS program are:
- a DATA statement, which names your data set;
- an INPUT statement, which describes the names of the variables in your data set;
- every statement ends with a semicolon (;);
- spaces separate words and statements.

Q: What are the special input delimiters?
A: DLM and DSD.

Q: What is the difference between a format and an informat?
A: A format is used to write data, e.g., WORDDATE18. and WEEKDATEw.; an informat is used to read data, e.g., COMMAw., DOLLARw., MMDDYYw., DATEw., TIMEw., PERCENTw.

Q: Describe any one SAS function.
A: TRIM removes trailing blanks from a character expression: Str1 = 'my'; Str2 = 'dog'; Result = TRIM(Str1) || Str2; gives Result = 'mydog'.

Q: What is the Program Data Vector (PDV) and what are its functions?
A:
- The PDV is a logical area in memory.
- SAS creates a data set one observation at a time.
- The input buffer is created at compile time to hold a record from an external file.
- The PDV is created following the creation of the input buffer.
- SAS builds the data set in the PDV area of memory.

Q: Compare SAS, Stata, and SPSS.
A: Each package offers its own unique strengths and weaknesses. As a whole, SAS, Stata, and SPSS form a set of tools that can be used for a wide variety of statistical analyses. With Stat/Transfer it is very easy to convert data files from one package to another in a matter of seconds or minutes, so there can be quite an advantage to switching packages depending on the nature of your problem. For example, for mixed models you might choose SAS, for logistic regression Stata, and for analysis of variance SPSS. If you frequently perform statistical analysis, consider making all of these packages part of your toolkit for data analysis.

Q: What are the uses of SAS?
A: SAS/ETS software provides tools for a wide variety of applications in business, government, and academia. Major uses of SAS/ETS procedures are economic analysis, forecasting, economic and financial modeling, time series analysis, financial reporting, and manipulation of time series data. The common theme relating these applications is time series data: SAS/ETS is useful whenever it is necessary to analyze or predict processes that take place over time, or to analyze models that involve simultaneous relationships. Although SAS/ETS is most closely associated with business, finance, and economics, time series data also arise in many other fields; it is useful whenever time dependencies, simultaneous relationships, or dynamic processes complicate data analysis. For example, an environmental quality study might use its time series analysis tools to analyze pollution emissions data, and a pharmacokinetic study might use its features for nonlinear systems to model the dynamics of drug metabolism in different tissues.

Q: How do I create a SAS data set with compressed observations?
A: Use the COMPRESS=YES option as an output DATA set option or in an OPTIONS statement. Compressing a data set reduces its size by reducing repeated consecutive characters or numbers to 2-byte or 3-byte representations. To uncompress observations, use a DATA step to copy the data set with COMPRESS=NO for the new data set. The advantages of a compressed SAS data set are reduced storage requirements and fewer input/output operations when reading from and writing to the data set during processing. The disadvantages are that the SAS observation number cannot be used to access an observation, and the CPU time required to prepare compressed observations for input/output is increased by the overhead of compressing and expanding them. (Note: if there are few repeated characters, a data set can occupy more space in compressed form than in uncompressed form, due to the higher overhead per observation.) For more details on SAS compression see "SAS Language: Reference, Version 6, First Edition, Cary, NC: SAS Institute Inc., 1990".

Q: How can we minimize the space requirement for a huge data set in SAS for Windows?
A: When working with large data sets, you can take the following steps to reduce space requirements:
- split the huge data set into smaller data sets;
- clean up your working space as much as possible at each step;
- use data set options (KEEP=, DROP=) or statements (KEEP, DROP) to limit to only the variables needed;
- use an IF statement or OBS= to limit the number of observations;
- use WHERE=, WHERE, or an index to optimize the WHERE expression and limit the number of observations in PROC and DATA steps;
- use LENGTH to limit the bytes of variables;
- use the _NULL_ data set name when you don't need to create a data set;
- compress data sets using system or data set options (COMPRESS=YES or COMPRESS=BINARY);
- use SQL to merge, summarize, sort, etc. rather than a combination of PROC steps and DATA steps with temporary data sets.
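The NODUP / NODUPKEY distinction above is easy to mimic outside SAS: NODUP drops an observation only when it is identical to the one before it, while NODUPKEY keeps just the first observation per BY-key. A Python sketch of the two behaviors (toy records, not SAS itself):

```python
def nodup(rows):
    """SAS NODUP-like: drop a row only if identical to the preceding row."""
    out = []
    for r in rows:
        if not out or r != out[-1]:
            out.append(r)
    return out

def nodupkey(rows, key):
    """SAS NODUPKEY-like: keep only the first row for each BY-key value."""
    seen, out = set(), []
    for r in rows:
        if r[key] not in seen:
            seen.add(r[key])
            out.append(r)
    return out

rows = [{"id": 1, "x": 10}, {"id": 1, "x": 10}, {"id": 1, "x": 20}, {"id": 2, "x": 30}]
print(len(nodup(rows)))           # 3: only the exact repeat is dropped
print(len(nodupkey(rows, "id")))  # 2: one row per id
```

As in SAS, both behaviors assume the rows are already sorted on the key.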
Composite Molding and Processing Technology

Vacuum Bagging
1. Process
Part blank → vacuum-bag sealing → evacuation → cure → finished part
2. Characteristics: 1) the process is simple and needs no special equipment; 2) the pressure is low, at most 0.1 MPa, so it is suitable only for composite parts less than 1.5 mm thick.
Pressure-Bag Molding
Molding pressure: 0.25-0.5 MPa
Vacuum-Bag/Autoclave Molding
Prepreg Molding (prepreg lay-up)
Basic steps:
Equipment: requirements are higher than for RTM, with a large capital investment.
Compression Molding
A process in which composite sheet or molding compound is placed in a matched metal mold; under heat and pressure the material fills the cavity and cures, and the part is then demolded.
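The molding pressure translates into a press clamp-force requirement of F = p × A over the part's projected area. A quick sketch with illustrative numbers (pressure and part size are assumed for illustration, not from the text):

```python
# Clamp force for compression molding: F = p * A over the projected area.
# The pressure and part dimensions are assumed illustration values.
p = 7.0e6         # molding pressure, Pa (7 MPa)
area = 0.3 * 0.2  # projected part area, m^2
force_kN = p * area / 1e3
print(f"required clamp force = {force_kN:.0f} kN")
```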
Process flow: preheat the mold, weigh the molding compound → apply release agent, preheat the charge → load the mold → press, demold → part → inspection → post-treatment
Variants: BMC molding; SMC and TMC molding; prepreg molding (lamination)
Bagging materials: perforated film, solid film, porous fabric
Vacuum bagging system: breather, release, bleeder, and resin-permeable layers
- Suited to composites with high fiber content (>60%)
- Both simple and complex part geometries can be processed
- Both high-strength and high-stiffness composites can be processed
- Labor-intensive; not suited to high-volume production
- High part cost
- Widely used for advanced composites in aviation and military applications
Spray-Up Molding (spray-up process)
Filament winding includes:
- Dry winding: prepreg tow → heat to soften → wind
- Wet winding: fiber tow → resin impregnation → wind
- Semi-dry winding: fiber tow → resin impregnation → drying → wind
Very well suited to making tubular products such as pressure vessels, pipes, rocket-motor cases, nozzles, and chemical storage tanks. Combined with a CAD system, parts of more complex shape can also be wound.
Basic steps
① Roving spools are placed on the creel; ② several rovings are threaded through the guide eyes; ③ the curing agent and resin are mixed in a container and poured into the resin impregnation bath; ④ release agent and a gel coat are applied to the winding drum, and the winding drum is placed …
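In the winding step, fiber is usually laid along geodesic paths on the mandrel, for which Clairaut's relation r·sin(α) = constant ties the local radius r to the winding angle α. A small sketch of how the angle steepens as the radius shrinks (the radii and starting angle are illustrative, not from the text):

```python
import math

def geodesic_angle(r, r0, alpha0_deg):
    """Winding angle (degrees) at radius r for a geodesic fiber path that
    makes angle alpha0 at radius r0 (Clairaut: r * sin(alpha) = const)."""
    s = r0 * math.sin(math.radians(alpha0_deg)) / r
    if s > 1.0:
        raise ValueError("geodesic cannot reach this radius")
    return math.degrees(math.asin(s))

# A mandrel wound at 55 degrees at radius 100 mm: the same geodesic
# steepens toward 90 degrees as the radius shrinks near the polar opening.
for r in (100.0, 95.0, 90.0, 85.0):
    print(f"r = {r:5.1f} mm  alpha = {geodesic_angle(r, 100.0, 55.0):5.1f} deg")
```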
prefeR 0.1.3 Package User Guide

Package 'prefeR', October 14, 2022

- Type: Package
- Title: R Package for Pairwise Preference Elicitation
- Version: 0.1.3
- Date: 2022-04-24
- Author: John Lepird
- Maintainer: John Lepird <****************.edu>
- Description: Allows users to derive multi-objective weights from pairwise comparisons, which research shows is more repeatable, transparent, and intuitive than other techniques. These weights can be used to rank existing alternatives or to define a multi-objective utility function for optimization.
- License: MIT + file LICENSE
- Imports: mcmc, methods, entropy
- Suggests: testthat, knitr, rmarkdown
- VignetteBuilder: knitr
- RoxygenNote: 7.1.2
- Encoding: UTF-8
- URL: https:///jlepird/prefeR, https://jlepird.github.io/prefeR/
- NeedsCompilation: no
- Repository: CRAN
- Date/Publication: 2022-04-24 23:00:02 UTC

R topics documented: .calculateLogProb, .estimateEntropy, .getLogIndifProb, .getLogStrictProb, .sampleEntropy, BayesPrefClass, Exp, Flat, infer, Normal, prefEl, suggest, %=%, %>%, %<%

.calculateLogProb: Calculates the log probability of seeing a given set of preferences
Usage: .calculateLogProb(x, p)
Arguments: x, a guess for our weight vector; p, an object of the Bayes preference class.
Value: A scalar log-likelihood of the guess x.

.estimateEntropy: Calculates the expected posterior entropy of the prefEl object if x and y are compared. It ignores the odds of indifference preferences, as using them would increase runtime 50% without much gain.
Usage: .estimateEntropy(p, currentGuess, x, y)
Arguments: p, an object of class BayesPrefClass; currentGuess, the current best estimate for our weight vector; x, possible comparison 1; y, possible comparison 2.

.getLogIndifProb: Evaluates the likelihood of the observed indifference preferences
Usage: .getLogIndifProb(x, pref, p)
Arguments: x, the underlying data; pref, the stated preference; p, the preference elicitation object.

.getLogStrictProb: Evaluates the likelihood of the observed strict preferences
Usage: .getLogStrictProb(x, pref, p)
Arguments: as for .getLogIndifProb.

.sampleEntropy: Calculates the entropy of a matrix of samples
Usage: .sampleEntropy(X)
Arguments: X, a matrix where each row is a sample of the variables in different columns.

BayesPrefClass: An object containing all data necessary for preference elicitation
Fields: data, a matrix or dataframe of data; priors, a list of functions that give the prior on each variable; sigma, a scalar value to use for the confusion factor (default 0.1); Sigma (internal use only), a matrix of sigma * diag(ncol(data)); strict, a list of lists of preferences (for each element x, x[[1]] > x[[2]]); indif, a list of lists of indifference preferences (for each element x, x[[1]] = x[[2]]); weights, a vector of weights determined by the inference algorithm.
Methods: addPref(x) adds a preference created using %>%, %<%, or %=%; infer(estimate = "recommended") calls the "infer" function to guess weights; rank() calculates the utility of each row in our dataset; suggest(maxComparisons = 10) calls the "suggest" function.

Exp: A convenience function for generating Exponential priors
Usage: Exp(mu = 1)
Arguments: mu, the mean of the exponential distribution, i.e. 1/rate.
Value: A function yielding the log-PDF at x of an exponential distribution with the given statistics.
See also: Flat(), Normal().
Example: Exp(1)(1) == dexp(1, 1, log = TRUE)

Flat: A convenience function for generating a flat prior
Usage: Flat()
Value: The zero function.
See also: Exp(), Normal().
Example: Flat()(1) == 0

Normal: A convenience function for generating Normal priors
Usage: Normal(mu = 0, sigma = 1)
Arguments: mu, the mean of the normal distribution; sigma, the standard deviation of the prior.
Value: A function yielding the log-PDF at x of a normal distribution with the given statistics.
See also: Exp(), Flat().
Example: Normal(0, 1)(1) == dnorm(1, log = TRUE)

infer: A function that estimates the user's underlying utility function
Usage: infer(p, estimate = "recommended", nbatch = 1000)
Arguments: p, a BayesPrefClass instance; estimate, the type of posterior point-estimate returned (valid options are "recommended" (default), "MAP", and "mean"); nbatch, the number of samples if using Monte Carlo estimates (defaults to 1000).
Value: A vector of parameters that best fits the observed preferences.
Examples:

```r
p <- prefEl(data = data.frame(c(1, 0, 1), c(0, 1, 1), c(1, 1, 1)),
            priors = c(Normal(0, 1), Exp(0.5), Flat()))
p$addPref(1 %>% 2)
infer(p, estimate = "MAP")
```

prefEl: A shortcut to create objects of the class BayesPrefClass
Usage: prefEl(data = NA, priors = list(), ...)
Arguments: data, a matrix or dataframe of data (each column should be a variable, each row an observation); priors, a list of functions that give the prior on each variable (e.g. see help(Flat)); ..., other parameters to pass to the class constructor (not recommended).
Examples:

```r
p <- prefEl(data = data.frame(x = c(1, 0, 1), y = c(0, 1, 1)),
            priors = c(Normal(0, 1), Flat()))
```

suggest: Suggests a good comparison for the user to make next
Usage: suggest(p, maxComparisons = 10)
Arguments: p, an object of class BayesPrefClass; maxComparisons, the maximum number of possible comparisons to check (default: 10).
Value: A two-element vector of recommended comparisons.

%=%, %>%, %<%: Helper functions to add preferences in a user-friendly way
Usage: a %=% b (indifferent between a and b); a %>% b (prefer row a to row b); a %<% b (prefer row b to row a).
Examples: 1 %=% 2 (indifferent between 1 and 2); 1 %>% 2 (prefer row 1 to row 2); 1 %<% 2 (prefer row 2 to row 1).
SPM Training Course 05 (Justin): Multiple Testing

• extrinsic smoothness
  – resampling during preprocessing
  – matched filter theorem: deliberate additional smoothing to increase SNR
Multiple tests

(figure: the same null-hypothesis test is repeated at every voxel, each with its own statistic)

    t = contrast of estimated parameters / variance estimate
What is the problem?
Inference at a single voxel

Decision: H0 vs. H1, i.e. zero vs. non-zero activation, based on the statistic

    t = contrast of estimated parameters / variance estimate
Inference at a single voxel: small volume correction (SVC)

Computing the expected Euler characteristic (EC) with respect to the search volume and the threshold u, for a search region Ω ⊂ R³:

    E[χ(A_u)] = λ(Ω) |Λ|^(1/2) (u² − 1) exp(−u²/2) / (2π)²
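The EC formula above can be evaluated numerically to find a corrected threshold: pick the smallest u whose expected EC drops below the desired family-wise error level. A small sketch, where `lam` stands in for λ(Ω)|Λ|^(1/2) and its value (1000) is purely illustrative, not from the slides:

```python
import math

def expected_ec(u, lam):
    """Expected Euler characteristic of the excursion set above threshold u
    for a smooth Gaussian field on a 3-D search region, per the formula
    above; lam stands for lambda(Omega) * |Lambda|^(1/2)."""
    return lam * (u**2 - 1) * math.exp(-u**2 / 2) / (2 * math.pi) ** 2

def rft_threshold(lam, alpha=0.05):
    """Smallest threshold u with expected_ec(u, lam) <= alpha (bisection)."""
    lo, hi = math.sqrt(3.0), 10.0  # start above the mode of the EC curve
    while hi - lo > 1e-6:
        mid = (lo + hi) / 2
        if expected_ec(mid, lam) > alpha:
            lo = mid
        else:
            hi = mid
    return hi

lam = 1000.0  # illustrative smoothness-scaled search volume
u = rft_threshold(lam)
print(f"corrected threshold u = {u:.2f}, E[EC] = {expected_ec(u, lam):.4f}")
```

For high thresholds E[EC] approximates the family-wise error probability, which is why solving E[EC] = 0.05 gives the corrected threshold.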
Pattern Recognition and Machine Learning: Review Notes (Instructor: Wen Wen)
Some issues worth mentioning
Wen Wen, School of Computer Science, Guangdong University of Technology
Complexity of a Pattern Recognition System: An Example

"Use an optical sensor to collect information and automatically distinguish the species of fish on a conveyor belt." Fish Classification: Sea Bass / Salmon

An example: distinguishing sea bass from salmon. Stated abstractly, this involves:
• the pattern recognition system
• the design cycle
Preprocessing involves:
Overlap in the histograms is small compared to the length feature.
Decision boundary
The cost of misclassification
Model complexity
Generalization
Partition the feature space into two regions by finding the decision boundary that minimizes the error.
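As a toy version of finding such an error-minimizing boundary with a single "length" feature, a 1-D threshold can be chosen by exhaustive scan (the fish lengths below are invented for illustration):

```python
def best_threshold(salmon, bass):
    """Scan candidate thresholds t; classify x < t as salmon, x >= t as bass;
    return the (threshold, error count) minimizing training errors."""
    candidates = sorted(set(salmon) | set(bass))
    best_t, best_err = None, float("inf")
    for t in candidates:
        err = sum(x >= t for x in salmon) + sum(x < t for x in bass)
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err

salmon = [4.0, 5.1, 5.5, 6.0, 6.2]  # shorter on average
bass = [6.1, 7.0, 7.4, 8.0, 9.3]    # longer on average
t, err = best_threshold(salmon, bass)
print(f"threshold = {t}, training errors = {err}")
```

With overlapping classes no threshold achieves zero error, which is exactly why the notes go on to weigh misclassification costs and model complexity.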
Optical Character Recognition (typography)
A v t u I h D U w K
A new human-computer interaction system: do you see pattern recognition in it?
Vision
Glossary of Fluent Terminology
Grid menu

- Read (read a file): scheme, journal, profile
- Write (save a file)
- Import: bring in data from another program
- Interpolate: interpolate data
- Hardcopy: export a copy of the image
- Batch options: a group of options
- Save layout: save the current layout
- Check: check the grid
- Info (report): size; memory usage; zones; partitions
- Polyhedra: Convert domain; Convert skewed cells
- Merge; Separate
- Fuse: Merge combines boundaries with the same conditions into one, while Fuse welds two fully coincident mesh boundaries into an interior boundary. For example, in turbomachinery, when computing several blades you only need to generate the mesh for one blade passage; after copying it, Fuse the coincident periodic boundaries. Note that both commands are irreversible, so save the case before using them.
- Zone: append case file; Replace; delete; deactivate; Surface mesh
- Reorder: Domain; zones; Print bandwidth
- Scale (unit conversion); Translate; Rotate; Smooth/Swap

Models

- Solver: pressure-based or density-based; implicit or explicit
- Space: 2D; axisymmetric; axisymmetric swirl; 3D
- Time: steady; unsteady
- Velocity formulation: absolute; relative
- Gradient option: cell-based; node-based; least-squares cell-based
The nmw Package: Understanding Nonlinear Mixed Effects Modeling with NONMEM
Package 'nmw', May 10, 2023

- Version: 0.1.5
- Title: Understanding Nonlinear Mixed Effects Modeling for Population Pharmacokinetics
- Description: This shows how NONMEM(R) software works. NONMEM's classical estimation methods like 'First Order (FO) approximation', 'First Order Conditional Estimation (FOCE)', and 'Laplacian approximation' are explained.
- Depends: R (>= 3.5.0), numDeriv
- ByteCompile: yes
- License: GPL-3
- Copyright: 2017-, Kyun-Seop Bae
- Author: Kyun-Seop Bae
- Maintainer: Kyun-Seop Bae <********>
- URL: https:///package=nmw
- NeedsCompilation: no
- Repository: CRAN
- Date/Publication: 2023-05-10 03:40:02 UTC

R topics documented: nmw-package, AddCox, CombDmExPc, CovStep, EstStep, InitStep, TabStep, TrimOut

nmw-package: Understanding Nonlinear Mixed Effects Modeling for Population Pharmacokinetics

Details: This package explains the 'First Order (FO) approximation' method, the 'First Order Conditional Estimation (FOCE)' method, and the 'Laplacian (LAPL)' method of NONMEM software.

Author(s): Kyun-Seop Bae <********>

References:
1. NONMEM Users Guide.
2. Wang Y. Derivation of various NONMEM estimation methods. J Pharmacokinet Pharmacodyn. 2007.
3. Kang D, Bae K, Houk BE, Savic RM, Karlsson MO. Standard Error of Empirical Bayes Estimate in NONMEM(R) VI. K J Physiol Pharmacol. 2012.
4. Kim M, Yim D, Bae K. R-based reproduction of the estimation process hidden behind NONMEM Part 1: First order approximation method. 2015.
5. Bae K, Yim D. R-based reproduction of the estimation process hidden behind NONMEM Part 2: First order conditional estimation. 2016.

Examples:

```r
DataAll = Theoph
colnames(DataAll) = c("ID", "BWT", "DOSE", "TIME", "DV")
DataAll[, "ID"] = as.numeric(as.character(DataAll[, "ID"]))

nTheta = 3
nEta = 3
nEps = 2

THETAinit = c(2, 50, 0.1)
OMinit = matrix(c(0.2, 0.1, 0.1,
                  0.1, 0.2, 0.1,
                  0.1, 0.1, 0.2), nrow = nEta, ncol = nEta)
SGinit = diag(c(0.1, 0.1))

LB = rep(0, nTheta)       # Lower bound
UB = rep(1000000, nTheta) # Upper bound

FGD = deriv(~ DOSE/(TH2*exp(ETA2))*TH1*exp(ETA1)/(TH1*exp(ETA1) - TH3*exp(ETA3))
              *(exp(-TH3*exp(ETA3)*TIME) - exp(-TH1*exp(ETA1)*TIME)),
            c("ETA1", "ETA2", "ETA3"),
            function.arg = c("TH1", "TH2", "TH3", "ETA1", "ETA2", "ETA3", "DOSE", "TIME"),
            func = TRUE, hessian = TRUE)
H = deriv(~ F + F*EPS1 + EPS2, c("EPS1", "EPS2"),
          function.arg = c("F", "EPS1", "EPS2"), func = TRUE)

PRED = function(THETA, ETA, DATAi) {
  FGDres = FGD(THETA[1], THETA[2], THETA[3], ETA[1], ETA[2], ETA[3],
               DOSE = 320, DATAi[, "TIME"])
  Gres = attr(FGDres, "gradient")
  Hres = attr(H(FGDres, 0, 0), "gradient")
  if (e$METHOD == "LAPL") {
    Dres = attr(FGDres, "hessian")
    Res = cbind(FGDres, Gres, Hres, Dres[, 1, 1], Dres[, 2, 1], Dres[, 2, 2], Dres[, 3, ])
    colnames(Res) = c("F", "G1", "G2", "G3", "H1", "H2",
                      "D11", "D21", "D22", "D31", "D32", "D33")
  } else {
    Res = cbind(FGDres, Gres, Hres)
    colnames(Res) = c("F", "G1", "G2", "G3", "H1", "H2")
  }
  return(Res)
}

### First Order Approximation Method
### (Commented out for the CRAN CPU time)
# InitStep(DataAll, THETAinit = THETAinit, OMinit = OMinit, SGinit = SGinit,
#          LB = LB, UB = UB, Pred = PRED, METHOD = "ZERO")
# (EstRes = EstStep())  # 4 sec
# (CovRes = CovStep())  # 2 sec
# PostHocEta()          # Using e$FinalPara from EstStep()
# TabStep()

### First Order Conditional Estimation with Interaction Method
# InitStep(DataAll, THETAinit = THETAinit, OMinit = OMinit, SGinit = SGinit,
#          LB = LB, UB = UB, Pred = PRED, METHOD = "COND")
# (EstRes = EstStep())  # 2 min
# (CovRes = CovStep())  # 1 min
# get("EBE", envir = e)
# TabStep()

### Laplacian Approximation with Interaction Method
# InitStep(DataAll, THETAinit = THETAinit, OMinit = OMinit, SGinit = SGinit,
#          LB = LB, UB = UB, Pred = PRED, METHOD = "LAPL")
# (EstRes = EstStep())  # 4 min
# (CovRes = CovStep())  # 1 min
# get("EBE", envir = e)
# TabStep()
```

AddCox: Add a covariate column to an existing NONMEM dataset

Description: A new covariate column can be added to an existing NONMEM dataset.
Usage: AddCox(nmData, coxData, coxCol, dateCol = "DATE", idCol = "ID")
Arguments: nmData, an existing NONMEM dataset; coxData, a data table containing a covariate column; coxCol, the covariate column name in the coxData table; dateCol, the date column name in the NONMEM dataset and the covariate data table; idCol, the ID column name in the NONMEM dataset and the covariate data table.
Details: It first carries the missing data forward; if NA remains, it carries backward.
Value: A new NONMEM dataset containing the covariate column.

CombDmExPc: Combine the demographics (DM), dosing (EX), and DV (PC) tables into a new NONMEM dataset

Usage: CombDmExPc(dm, ex, pc)
Arguments: dm, a demographics table (one row per subject); ex, an exposure table (drug administration/dosing history); pc, a DV (dependent variable) or PC (drug concentration) table.
Value: A new NONMEM dataset.

CovStep: Covariance step

Description: Calculates standard errors and various variance matrices from e$FinalPara after the estimation step.
Usage: CovStep()
Details: Because EstStep uses nonlinear optimization, the covariance step is separated from the estimation step. It calculates the variance-covariance matrix of the estimates on the original scale.
Value: Time (consumed time); Standard Error (of the estimates, in theta, omega, sigma order); Covariance Matrix of Estimates (inverse(R) x S x inverse(R) by default); Correlation Matrix of Estimates; Inverse Covariance Matrix of Estimates; Eigen Values (of the covariance matrix); R Matrix (NONMEM's R matrix, the second derivative of the log-likelihood function with respect to the estimation parameters); S Matrix (NONMEM's S matrix, the sum of the individual cross-products of the first derivative of the log-likelihood function with respect to the estimation parameters).
See also: EstStep, InitStep.
Examples: `# Only after InitStep and EstStep: CovStep()`

EstStep: Estimation step

Description: This estimates under the conditions set by InitStep.
Usage: EstStep()
Details: It does not take arguments; all necessary inputs are stored in the e environment. It assumes "INTERACTION" between eta and epsilon for the "COND" and "LAPL" options. The output is basically the same as NONMEM output.
Value: Initial OFV (initial value of the objective function); Time (time consumed for this step); Optim (the raw output from the optim function); Final Estimates (final estimates on the original scale).
See also: InitStep.
Examples: `# Only after InitStep: EstStep()`

InitStep: Initialization step

Description: Receives the parameters for the estimation and stores them in the e environment.
Usage: InitStep(DataAll, THETAinit, OMinit, SGinit, LB, UB, Pred, METHOD)
Arguments: DataAll, data for all subjects (it should contain the columns the Pred function uses); THETAinit, initial theta values; OMinit, initial omega matrix; SGinit, initial sigma matrix; LB, lower bounds for the theta vector; UB, upper bounds for the theta vector; Pred, prediction function name; METHOD, one of the estimation methods "ZERO", "COND", or "LAPL".
Details: The prediction function should return not only prediction values (F or IPRED) but also G (the first derivative with respect to the etas) and H (the first derivative of Y with respect to epsilon). For "LAPL", the prediction function should also return the second derivative with respect to eta. "INTERACTION" is TRUE for the "COND" and "LAPL" options and FALSE for "ZERO". The omega matrix should be a full block matrix; the sigma matrix should be diagonal.
Value: This does not return values, but stores the necessary values in the environment e.
Examples (with the same data, FGD, H, and PRED setup as in the package example above):

```r
### First Order Approximation Method
InitStep(DataAll, THETAinit = THETAinit, OMinit = OMinit, SGinit = SGinit,
         LB = LB, UB = UB, Pred = PRED, METHOD = "ZERO")

### First Order Conditional Estimation with Interaction Method
InitStep(DataAll, THETAinit = THETAinit, OMinit = OMinit, SGinit = SGinit,
         LB = LB, UB = UB, Pred = PRED, METHOD = "COND")

### Laplacian Approximation with Interaction Method
InitStep(DataAll, THETAinit = THETAinit, OMinit = OMinit, SGinit = SGinit,
         LB = LB, UB = UB, Pred = PRED, METHOD = "LAPL")
```

TabStep: Table step

Description: This produces a standard results table.
Usage: TabStep()
Details: It does not take arguments; all necessary inputs are stored in the e environment.
Value: A table with ID, TIME, DV, PRED, RES, WRES, and the derivatives of G and H. If the estimation method is other than 'ZERO' (first-order approximation), it also includes CWRES, CIPREDI (formerly IPRED), and CIRESI (formerly IRES).
See also: EstStep.
Examples: `# Only after EstStep: TabStep()`

TrimOut: Trimming and beautifying the NONMEM original OUTPUT file

Description: TrimOut removes unnecessary parts from the NONMEM original OUTPUT file.
Usage: TrimOut(inFile, outFile = "PRINT.OUT")
Arguments: inFile, the original untidy NONMEM OUTPUT file name; outFile, the output file name to be written.
Details: The original NONMEM OUTPUT file contains unnecessary parts such as the CONTROL file content, start/end times, license info, and print-control characters such as "+", "0", "1". This function trims those.
Value: outFile is written in the current working folder or the designated folder. Returns TRUE if the process was smooth.
Fluent Interface: Chinese-English Reference (translated)

File menu:
Read (read a file): scheme; journal; profile
Write (save a file)
Import: bring data in from another solver or program
Interpolate: interpolate data
Hardcopy: save a copy (picture) of the current view
Batch options: a group of batch-run options
Save layout: save the window layout

Grid menu:
Check: check the grid
Info: report size; memory usage; zones; partitions
Polyhedra: Convert domain; Convert skewed cells
Merge; Separate; Fuse. (Merge combines boundaries that have the same conditions into one. Fuse merges two mesh boundaries that coincide exactly into an interior boundary. For example, in turbomachinery, when computing multiple blades, only one blade-passage mesh needs to be generated; after copying it, the coincident periodic boundaries are simply fused. Note that both commands are irreversible, so save the case before using them.)
Zone: append case file; Replace; delete; deactivate
Surface Mesh
Reorder: Domain; zones; Print bandwidth
Scale (unit conversion); Translate; Rotate; Smooth/Swap

Solver panel (Define > Models > Solver):
Solver: Pressure Based / Density Based
Space: 2D / Axisymmetric / Axisymmetric Swirl / 3D
Velocity Formulation: Absolute / Relative
Formulation: Implicit / Explicit
Time: Steady / Unsteady
Porous Formulation: Superficial Velocity / Physical Velocity
Gradient Option: Green-Gauss Cell Based / Green-Gauss Node Based
Preform Characterization in VARTM Process Model Development

Brian W. Grimsley (I), Roberto J. Cano (I), Pascal Hubert (II), Alfred C. Loos (III), Charles B. Kellen (IV) and Brian J. Jensen (I)

(I) NASA Langley Research Center, Hampton, Virginia 23681
(II) McGill University, Montreal, Quebec, Canada H3A2K6
(III) Michigan State University, East Lansing, Michigan 48824
(IV) Old Dominion University, Norfolk, VA 23508

This paper is declared a work of the U.S. Government and is not subject to copyright protection in the United States.

ABSTRACT

Vacuum-Assisted Resin Transfer Molding (VARTM) is a Liquid Composite Molding (LCM) process where both resin injection and fiber compaction are achieved under pressures of 101.3 kPa or less. Originally developed over a decade ago for marine composite fabrication, VARTM is now considered a viable process for the fabrication of aerospace composites (1,2). In order to optimize and further improve the process, a finite element analysis (FEA) process model is being developed to include the coupled phenomena of resin flow, preform compaction and resin cure. The model input parameters are obtained from resin and fiber-preform characterization tests. In this study, the compaction behavior and the Darcy permeability of a commercially available carbon fabric are characterized. The resulting empirical model equations are input to the 3-Dimensional Infiltration, version 5 (3DINFIL v.5) process model to simulate infiltration of a composite panel.

KEY WORDS: Vacuum-Assisted RTM, Characterization, Finite Element Analysis

1.0 INTRODUCTION

Unlike other LCM processes such as resin transfer molding (RTM), the VARTM process allows fabrication of composite parts without the use of any externally supplied pressure. Both transfer of the matrix resin and compaction of the part are achieved using atmospheric pressure alone. Therefore, the upper tool of the matched metal mold used in RTM is replaced in the VARTM
Flow of the resin into the part is enhanced through the use of a resin distribution medium (3,4). The highly-permeable medium induces resin flow through the thickness of the part and reduces filling times. Work under the Twenty-First Century Aircraft Technology (TCAT) program at NASA Langley Research Center has focused on further developing the VARTM process for fabrication of composite structures for high-performance aircraft. The development and use of a finite element process model is an important part of this research. The process model allows sensitivity analyses to determine the influence of intrinsic (e.g. constitutive material properties) and extrinsic (e.g. cure cycle) parameters on the final part quality (5). In addition, a model which accurately predicts infiltration, cure and final part dimensions greatly reduces the cost associated with the trial and error procedure commonly used in manufacturing to develop a suitable processing cycle. The process model 3DINFIL predicts three-dimensional flow evolution, part injection times, and final part dimensions using a finite element/control volume technique (6). Previously developed for RTM, the 3DINFIL model has been modified for application to the VARTM process by coupling the flow sub-model to a compaction sub-model (6). This is critical to accurately predicting the particular resin flow behavior caused by the presence of a flexible bag and low compaction pressures found in the VARTM process. The key material input properties required by the model are the preform compaction behavior and the permeability. These two properties are coupled by the state of the preform, such as the fiber volume fraction and the saturation. In this work, the compaction and permeability of a preform specimen containing four stacks of the SAERTEX®2 multi-axial, warp-knit (MAWK) carbon fabric preforms were characterized for the range of fiber volume fractions found in typical VARTM conditions. 
The resulting empirical equations were used as material input parameters in 3DINFIL to simulate the flow front evolution in a four-stack, 30.5 cm x 61.0 cm flat composite panel. The results of the simulation are compared with flow front measurements obtained from infiltration experiments using a glass tool.

(Use of trade names or manufacturers does not constitute an official endorsement, either expressed or implied, by the National Aeronautics and Space Administration.)

2.0 BACKGROUND

Previous work at NASA LaRC and by others (7-9) has led to a better understanding of the unique compaction phenomena associated with the VARTM process. In VARTM, the following equation accounts for the transverse equilibrium inside the mold cavity during impregnation:

    P_ATM = P_R + P_F    [1]

where P_ATM is the applied atmospheric pressure, P_R is the resin pressure and P_F is the fiber pressure (the pressure supported by the preform). In the dry condition, the preform essentially supports the entire external pressure, P_F = P_ATM, and a maximum dry debulking deformation of the preform is reached. During infiltration, two deformation mechanisms are present in the wet area of the preform: wetting compaction and spring-back. The wetting compaction is caused
The deformation experienced during spring-back has been shown to include both elastic and time-dependent, viscoelastic recovery (10). The curves also exhibit a permanent deformation due to nesting of the fabric. At any time during the infiltration process, the net compaction of the preform depends on the relative magnitude of the wetting and the spring-back deformation mechanisms. Other work (11) to characterize the compaction of non-crimp, knitted carbon fabrics has shown that the fiber volume of the specimens at pressures ranging from 50 kPa to 700 kPa was influenced by the number of layers or “stacks” of the knitted performs in the specimen. It was found that the amount of nesting that occurs increases with the amount of stacks in a specimen. The evidence of this inter-stack influence on fiber volume would require both compaction and permeability characterization specific to the stacking sequence and thickness of the preform to be infused. Flow of resin through fibrous media can be modeled using Darcy’s law (12, 13). Modeling the flow in the VARTM process differs from the traditional RTM process. The flexible vacuum bag and varying pressure inside the mold cavity, result in a variation of the preform thickness and, hence, the fiber volume fraction of the preform during the impregnation process. As explained above, the variation of compaction pressure during infiltration necessitates measurement of the preform permeability at varying fiber volumes. Permeability is defined as the resistance to flow through porous media and is often related to the porosity using an empirical model. 
The threedimensional form of Darcy’s Law for an anisotropic material is written as: ⎡ S xx ⎡q x ⎤ ⎢q ⎥ = 1 ⎢ S ⎢ y ⎥ η ⎢ xy ⎢ S xz ⎢qz ⎥ ⎣ ⎦ ⎣ S xy S yy S yz ⎡ ⎤ S xz ⎤ ⎢∂P ∂x ⎥ ⎥ S yz ⎥ ⎢∂P ⎥ ⎢ ∂y ⎥ S zz ⎥ ⎢∂P ⎥ ⎦ ⎣ ∂z ⎦[2]where Sij are the components of the permeability tensor, qi are the components of the superficial velocity vector, η is the viscosity of the fluid and P is the pressure. For preform architectures that are orthotropic, the components Sxy, Sxz and Syz are zero and Equation [2] simplifies to:∂P ∂x η ∂P 1 q y = S yy ∂y η ∂P 1 q z = S zz η ∂z qx = 1 S xx[3]In this work, the x-direction runs parallel to the advancing flow front in a VARTM panel infiltration. The y-direction is 90° and in-plane to x. The z-direction is the transverse, or throughthickness flow direction.3.0 MATERIALSThe carbon fabric studied in this work is a multi-axial warp-knit (MAWK) fabric supplied by SAERTEX®. The material is composed of seven plies of both AS-4 and IM-7 carbon fibers with a total areal weight of of 1423 g/m2. The plies are stacked, not woven, and then knitted with an alternating polyester tricot/chain knit thread in the stacking sequence described in Table 1. In this study, the 0° fiber tows were in the fabric x-direction. A fiber density of 1.78 g/cc was used for both the biaxial and MAWK fabrics.Table 1 Ply Stacking Sequence in SAERTEX MAWK Fabric. Ply Number 1 2 3 4 5 6 7 Yarn Material 3K-AS4 3K-AS4 12K-IM7 6K-AS4 12K-IM7 3K-AS4 3K-AS4 Yarn Orientation +45° -45° 0° 90° 0° +45° -45° Areal Weight g/m2 156 156 314 171 314 156 156The wetting fluid used in the experiments was an SAE 40 motor oil supplied by Valvoline. The fluid was chosen for its relative ease of use, safe handling and low cost. The temperature of the oil during permeability and compaction experiments varied from day to day depending on the room temperature. Prior to each test the viscosity was measured using a Brookfield viscometer. 
For oil temperatures ranging from 21.4°C to 25.4°C, the Brookfield viscosity varied from 0.34 Pa·s to 0.24 Pa·s, respectively.

4.0 EXPERIMENTAL

Compaction and permeability characterization experiments were carried out on specimens containing four stacks of the MAWK fabric. Both in-plane (S_xx and S_yy) and through-the-thickness (S_zz) permeability constants were determined for fiber volume fractions ranging from 45% to 58%.

4.1 Compaction

In previous work (14), difficulties were encountered in accurately determining the initial thickness of the preform specimens. The initial thickness is a key parameter in formulating the compaction model and therefore in accurate flow predictions. In an effort to solve this problem, a separate approach was taken to determine the initial "uncompacted" thickness as well as a starting thickness at some predetermined pressure. To accomplish this, a SATEC® T1000 test frame with a TRANSDUCER TECHNIQUES® 2.2x10^-2 kN load cell was utilized. The fixture consisted of two circular parallel plates with diameters of 1.60 cm and 10.1 cm for the top and bottom plates, respectively. After zeroing the fixture, a 2.54 cm x 2.54 cm specimen was placed between the two plates. The fixture was closed until the slightest load was registered, i.e. less than 2.2x10^-3 N. The test was then started: the specimen was loaded at a strain rate of 1.3x10^-3 mm/sec until the load reached the predetermined loading point, in this case 0.334 N, which corresponds to 1.70 kPa (0.5 in Hg). At this point the strain rate was decreased to 1.3x10^-4 mm/sec to allow for the viscoelastic response of the fabric. The strain rate was cycled at these rates between 0.311 N and 0.334 N until the load overcame the relaxation and continued to increase above the 0.334 N limit. Cycling the strain rate between the upper and lower bounds of load simulated a pressure hold at 1.70 kPa inside the vacuum bag.
The thickness measured at this stage was used as the initial preform thickness for the fiber volume fraction calculation. The remainder of the dry compaction curve was obtained by compressing the specimen in a vacuum bag.

The vacuum bag compaction experiments were conducted in both dry and wet conditions to develop an understanding of the compaction response of the MAWK fabric at the low pressures experienced during VARTM processing. An instrumented aluminum tool was used to measure pressure and displacement of the preform. As described in previous work (7), pressure sensors (Omega Engineering, Inc., Series PX102) were mounted at the tool surface beneath the fiber preform. Linear Variable Displacement Transducers (LVDT, Omega Engineering, Inc., Series 400) were supported above the vacuum-bagged preform by a rigid beam. Sensor outputs were recorded by a PC-based data acquisition system using LabVIEW® software.

The preform specimens were carefully cut into 15 cm x 27 cm rectangles, weighed and placed on the tool. For the sake of continuity, the compaction specimens were laid up on the tool and bagged like a conventional VARTM infiltration. The distribution medium, containing three layers of nylon mesh screen, was placed beneath the resin inlet distribution tubing and atop the preform to a point within 2.54 cm of the LVDT and pressure sensor. Therefore, the thickness of the medium was not included in the calculation of the resulting curves. The specimens were placed on the tool surface so that the advancing flow front would contact the pressure sensor at the same time that the preform under the LVDT was wetted.

For the dry preform tests, the bagged preform was evacuated via one port located approximately 12 cm from the preform. The displacement was recorded at preset pressure levels from 1.70 kPa to 101 kPa after allowing the displacement to reach steady state at each of these compaction levels (approximately 200 seconds).
In the wet compaction test, the resin and vacuum lines were placed following the procedure for a typical VARTM infiltration described in reference (7). The vacuum bag was initially evacuated to 0.5 in Hg (1.7 kPa compaction pressure) to zero the LVDT. For the dry preform test, the vacuum bag was then steadily evacuated to full vacuum (101 kPa compaction pressure). Once the preform displacement stabilized, the bag was steadily vented back to 0.5 in Hg. For the wet preform test, the bagged preform was initially impregnated with wetting fluid by applying a small pressure gradient to the preform. Once the preform was fully impregnated, full vacuum was applied to the bag until a steady-state compaction deformation was measured. Then, the bag was slowly vented by opening the inlet and allowing resin to flow back into the preform. The results are displayed in the next section.

4.2 Permeability

Experiments were conducted to characterize the in-plane (S_xx and S_yy) and the through-thickness (S_zz) permeability at fiber volumes ranging from 45% to 58%. The tests were performed at or below the VARTM injection pressure of 101 kPa. Side-view schematic diagrams of the in-plane and through-thickness permeability test fixtures are shown in Figure 2-A) and B). Both the in-plane and through-thickness fixtures were essentially rigid steel molds instrumented with diaphragm pressure sensors and LVDTs. The fixtures were mounted in a compression test frame. The in-plane fixture was designed to characterize preform specimens 15 cm x 15.3 cm at thicknesses ranging from 0.2 cm to 2.5 cm.

Figure 1 In-plane (A) and through-thickness (B) permeability fixtures. (Labeled components: compaction force, LVDTs, steady-state, inlet and advancing-front pressure sensors, steel screen, resin distribution plate, fluid inlet, and preform.)
The mesh was added to the fixture after initial tests at the lower fiber volumes showed that the preform specimens were sliding across the ground surface of the fixture. The mesh was placed on both ends of the preform to ensure identical boundary conditions at both the resin filling and exiting sides. The in-plane fixture utilized two LVDTs to ensure uniform thickness across the 15.0 cm length of the specimen. Four pressure sensors were contained in this fixture. The sensor located at the inlet side was utilized in this study for permeability characterization under steady-state conditions. The remaining three pressure sensors can be used for advancing-front characterization. In testing the MAWK fabric, the specimen was placed so that the 0° rovings were length-wise, or parallel to the direction of resin flow for determination of Sxx permeability. For Syy, the specimen was placed so that the 0° rovings were perpendicular to the direction of flow. The through-thickness, Szz, fixture (Figure 2B) was designed to test fabric specimens 5.08 cm x 5.08 cm up to 3.20 cm thick. The concept is identical to that of the in-plane fixture except that the fluid flowed into and out of the specimen via a rigid distribution plunger and base plates, which rested against the 25.81 cm2 surface on both sides of the specimen. The plate contained a hole-pattern with 0.50 cm holes drilled every 0.64 cm. NATIONAL INSTRUMENTS® data acquisition hardware and LABVIEW® software are used to record the pressure difference (inlet and outlet), ∆P, mass flow rate, M, and thickness, t. The fiber volume fraction, Vf, is calculated according to Equation 4:Vf = FAW t ∗ ρF[4]where FAW is the fiber areal weight of the preform specimen and ρF is the density of the fabric. The superficial or filter velocity, q, is calculated according to Equation 5.q= M ρR A[5]ρR is the density of the fluid and A is the cross-sectional area of the preform normal to the flow. 
The permeability constant, S, is then calculated at each compaction level by the relation from Darcy’s Law:S= qη L ∆P[6]where η is the viscosity of the fluid and L is the length of the preform specimen.5.0 CHARACTERIZATION RESULTS5.1 Compaction In Figure 3 a typical compaction test data set is shown for four stacks of the MAWK fabric including dry compaction, dry unloading and wet unloading modes. The independent variable “Pressure” is plotted on the ordinate to provide ease of view. The fiber lubrication is noted by the increase in fiber volume fraction at 101 kPa after the fabric specimen is wetted. The wet unloading portion of the curve is a quasi-static representation of the infusion of the fabric during a VARTM infiltration. The hysteresis phenomenon is evidenced by the difference in the starting point of the dry compaction curve at Vf = 47% and the ending point of the dry unloading curve at Vf = 52%. This difference of 5% represents the permanent or inelastic deformation of the fabric. The region of the dry unloading curves from Vf = 55% to 52% represents the elastic response of the fabric.Figure 2 Initial thickness determination performed in test frame.120Dry Compaction100Dry Release Wet ReleasePressure (kPa)806040200 0.460.480.50.520.540.56Fiber Volume FractionFigure 3 Compaction behavior of MAWK fabric in dry and wet conditions.Since no general constitutive model is available to describe the compaction behavior of this type of preform, the relationship between the compressive strain in the preform and the appliedpressure is obtained by fitting the compaction curve data to an empirical model. Equation 7 shows the relationship between the fiber volume fraction, Vf and the strain, ε, where φ is the initial preform porosity.Vf = 1−φ 1− ε[7]Prior to a VARTM infusion the fiber preform is compacted beneath the vacuum bag. Thus, the compressive strain of the preform can be calculated by curve fitting to the dry compaction results. 
As the flow front moves thru the preform, thus wetting the fabric, the local net pressure applied to the preform decreases according to Equation 1. Therefore the strain in the wet preform is determined by fitting an empirical model to the wet unloading test results. The resulting equations were fit to data from four dry-compaction and wet-unloading tests:ε dry = a dry (1 − eε wet = a wet + bwet (( bdry * Pf )))[8] [9]Pf c wet + Pfwhere Pf is the net pressure applied to the preform and the averaged constants for four samples are given in Table 2 below.Table 2 Compaction Model Parameters.Constant Dry Compressive Strain, εdry Max strain (at Pf = 101kPa), adry Curve fit constant, bdry Min strain (at Pf = 10kPa), awet Wet Compressive Strain, εwet Curve fit constant, bwet Curve fit constant, cwetValue 0.151 -0.034 0.122 0.035 14.076The results of the dry and wet compaction model equations as well as the statistical deviation are plotted in Figure 4.0.560.54Fiber Volume0.520.50.48Dry Model Wet Model0.46 0 10 20 30 40 50 60 70 80 90 100 110Pressure (kPa)Figure 4 Resulting curves and error from dry and wet compaction model equations.5.2 Permeability A power law equation was used to fit the permeability data as a function of the fiber volume fraction:S = a(V f ) b[10]where S is the permeability in m2, Vf is the fiber volume fraction and a and b are the empirical constants. Table 3 shows the constants obtained from fitting Equation 10 to the measured experimental data. Figure 5 shows the comparison between the permeability model and the measure data. 
The resulting transverse, Szz, permeabiltiy constant values are signficantly lower than those found for the in-plane directions.Table 3 Permeability model empirical constants.a In-plane, parallel to knitting, Sxx In-plane, normal to knitting, Syy Through the thickness, Szz 2.97 x 10-13 1.53 x 10-12 3.56 x 10-15b -6.58 -4.24 -8.641E-131E-121E-111E-100.420.460.50.540.580.62Fiber VolumeP e r m e a b i l i t y (m 2)Figure 5 Permeability experimental and model curves for four stacks of MAWK fabric.Both the permeability and compaction empirical models were implemented in the finite element code 3DINFIL as part of the material database.6.0 FINITE ELEMENT SIMULATIONSThe process model 3DINFIL was originally developed for RTM to simulate the flow of a resin through a three-dimensional anisotropic preform. The resin is assumed to be an incompressable, Newtonian fluid. The governing differential equation for flow is based on Darcy’s law and solved using the Finite Element / Control Volume (FE/CV) technique. The FE/CV method eliminates the need for remeshing of the resin-filled domain for each time step, thus the flow simulation can be performed rapidly and efficiently. Song et.al.(6) modified the existing model to account for the variation in fiber volume fraction during VARTM infiltration due to the flexible tooling and varying pressure inside the mold cavity. The new model 3DINFIL-5.0 features a numerical compaction sub-model, which is coupled to the existing Darcy flow model. At each time step, the resin pressure distribution is obtained from the flow model and the pressure supported by the preform is computed using Equation 1, listed previously. The transverse strain is then calculated using the relations developed in the empical compaction models Equations 8 and 9. The resulting values for strain are converted to fiber volume fraction using Equation 7. The result are input back to the flow model. 
The fluid velocity is then calculated using Darcy’s law and the empirical equations developed for permeabilty, Equation 10.In this work proceeding two-dimensional simulations of a flat preform are conducted with the empirical models developed in this study for compaction and permeability. The panel contained four stacks of MAWK fabric having dimensions of 30.0 cm x 60.0 cm. The initial porosity value of 0.532 used in the simulations was computed from the thickness value found in the load framecompaction test. The in-plane and transverse permeability values used for the distribution media were 8.31x10-9 m2 and 5.49x10-10 m2, respectively. Figure 6 depicts the geometry and boundary conditions used in the simulations. In the first simulation the resin was supplied to the panel as a point source and a single node was flagged as saturated with resin pressure of 101 kPa. In the second simulation the resin was supplied as a line source and hence all of the nodes at the resin side boundary were saturated. In both cases the medium was located 2.54 cm from the vacuum side of the panel.Simulation #2(Line Source)Figure 6Finite element model geometry and boundary conditions.The results of the simulations are compared to a glass tool infiltration experiment conducted using four stacks of the MAWK fabric cut to identical dimensions as those in the model geometry. The preform was infiltrated using the same fluid (oil) that was used in both the wet compaction and permeability characterization. The viscosity measured at the time of the infiltration was input to the model.Good agreement was obtained between the simulations and the experiment for the flow front evolution at the top and the bottom of the preform. The point-source boundary condition used in Simulation #1 resulted in a closer prediction of the final fill time for both the top and bottom surface, however this model did not capture the beginning of the curve on the bottom side. 
Observations of the actual flow in the panel indicate that there is initially a small amount of in-plane flow on the bottom surface of the resin side. At approximately 40 seconds into the infiltration, the in-plane flow becomes dominated by flow in the transverse. Future work will involve further investigation of how specifically the placement of the resin source influences the model predictions.10203040506070050100150200250300350400Time (sec)P o s i t i o n (c m )Figure 7 Finite element model simulation results and glass tool infiltration experiment data.7.0 CONCLUSIONSThe development and use of a finite element process model plays a key role in developing a complete understanding and therefore fully maximizing the potential of the VARTM process. Unlike the well characterized RTM process, a method to accurately predict flow in VARTM infiltration has not been developed in any sense that is widely accepted. This work focused on the development of more accurate empirical models to characterize the compaction behavior and permeability of a non-crimp, multi-axial carbon fabric. A new method to measure the initial thickness of the preformed was employed. The resulting compaction model equations fit the data well and showed little variability for different test samples. The preform permeability and compaction models were incorporated in the 3DINFIL process model. The resulting simulations accurately predicted the flow front evolution measured for a four stack thick panel of MAWK fabric.8.0REFERENCES1 .L.R. Thomas, A.K. Miller and A.L. Chan, SAMPE International Symposium, 47, 570, (2002).2 .J. M. Criss, S.C Parsons and R.W. Koon, SAMPE International Symposium, 49, 1049, (2004).3 .W.H. Seemann, (1990), U.S. Patent 4,902,215.4 .W.H. Seemann, (1994), U.S. Patent 5,316,462.5 .G. Fernlund, A. Poursartip, K. Nelson, M. Wilenski and F. Swanstrom, SAMPE International Symposium, 44,1744, (1999).6 .X. 
Song, “Vacuum Assisted Resin Transfer Modeling (VARTM): Model Development and Verification”, Ph.D.Dissertation, Department of Engineering Science and Mechanics, Virginia Polytechnic Institute and State University, Blacksburg, VA, (2003).7 . B.W. Grimsley, P. Hubert, X. Song, R.J. Cano, A.C. Loos and R.B. Pipes, SAMPE International TechnicalConference, 33, 140, (2001).8 . A. Hammami, Polymer Composites, 22, (3), 337, (2001).9 .M. Andersson, S. Lundstrom, B.R. Gebart and R. Langstrom, 8th International Conference on Fiber ReinforcedComposites, 113, (2000).10 .A. Somasheka, S. Bickerton, D. Battacharyya, 7th International Conference on Flow Processes in CompositeMaterials, , 425, (2004).11 .T. Kruckenberg and R. Paton, 7th International Conference on Flow Processes in Composite Materials, 418,(2004).12 .A.C. Loos, J.R. Sayre, R.D. McGrane and B.W. Grimsley, SAMPE International Symposium, 46, 1049, (2001).13 . T. Sadiq, R. Parnas and S. Advani, International SAMPE Technical Conference, 24, 674, (1992).14 .B.W. Grimsley, R.J. Cano, B.J. Jensen, A.C. Loos and P. Hubert, 18th Annual Technical Conference of theAmerican Society of Composite, (2003).。