Graduation Thesis Foreign-Literature Translation: Use of Artificial Neural Networks for Determining the Leveling Action Point at the Auto-leveling Draw Frame (original text + Chinese translation)


Performance Analysis and Production Practice of the FA322 Auto-leveling Drawing Frame

Ji Yijun, Nantong Shuanghong Textile Co., Ltd.

Abstract: This paper introduces the main performance characteristics and specifications of the FA322 (USG) auto-leveling high-speed drawing frame.

Under identical cotton-blend conditions, combed pure-cotton 40s yarn was produced to compare the sliver and yarn quality of two machine configurations: the FA322 (USG) (single passage) and the FA306 (two passages).

The paper also presents practical experience with, and a performance analysis of, the FA322 (USG) single-passage drawing frame, together with the correct use of auto-leveling and the issues that require attention.

Keywords: drawing frame; auto-leveling; performance; quality; comparison; experience

In January 2002 our company purchased and installed two FA322 drawing frames with auto-leveling from Shaanxi Baocheng New-type Textile Co., Ltd.

Several years of production practice have shown that this machine delivers excellent quality indices for combed pure-cotton knitting yarn at actual production speeds of up to 400 m/min. It offers notable advantages in improving evenness, stabilizing yarn count deviation, lowering yarn weight CV% and single-end strength CV%, reducing sporadic nep and long thick-place faults, reducing yarn hairiness, and improving the physical quality of knitted fabric.

1. Main Technical Performance and Specifications

1.1 The USG auto-leveling device of the FA322 drawing frame uses open-loop control.

At the input of each delivery, the machine has an independent tongue-and-groove (concave-convex) scanning roller pair. Variations in the thickness of the fed sliver displace the convex roller; a displacement transducer converts this mechanical movement into a signal that is fed to the USG unit.

In the USG control unit, the measured (actual) value is compared with the nominal value. After signal processing, a servo motor is commanded and, through a differential gear train, the break-draft ratio is varied. This keeps the deviation of the delivered sliver weight below 1% even when the fed sliver weight fluctuates within ±25%.

Online sliver monitoring is performed by a trumpet (condenser) funnel with a built-in preamplifier that measures the weight of the delivered sliver continuously, precisely, and rapidly.

The measured value is compared with the preset nominal value and controlled online by computer. The system can display the sliver count deviation (A%) and sliver weight unevenness (CV%); alarm limits can be set with automatic machine stop, and the sliver quality data can be shown at any time as tables or charts.
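The A% and CV% figures displayed by the monitor can be illustrated with a short sketch. This is not the USG software; the function name and alarm limits are assumptions for illustration only.

```python
# Illustrative sketch: compute sliver count deviation A% and weight CV%
# from a series of measured sliver weights, and check them against
# user-set alarm limits (limit values here are arbitrary examples).

def monitor(weights_g_per_m, nominal_g_per_m, a_limit=1.5, cv_limit=2.5):
    n = len(weights_g_per_m)
    mean = sum(weights_g_per_m) / n
    # A%: relative deviation of the mean weight from the nominal value
    a_pct = (mean - nominal_g_per_m) / nominal_g_per_m * 100
    # CV%: coefficient of variation of the measured weights
    var = sum((w - mean) ** 2 for w in weights_g_per_m) / n
    cv_pct = (var ** 0.5) / mean * 100
    stop = abs(a_pct) > a_limit or cv_pct > cv_limit
    return round(a_pct, 2), round(cv_pct, 2), stop
```

With weights close to nominal the machine keeps running; a large mean deviation trips the stop flag.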

1.2 On the FA322 (USG) drawing frame, the high-speed gears at the head and tail ends run in oil-bath lubrication inside closed gearboxes, while all other drives use toothed belts. Top clearing uses two sections of actively rotating clearer-cloth sleeves with cleaning combs; bottom clearing uses oscillating nitrile scrapers. Both top and bottom rollers run on needle bearings for smooth operation. The top rollers are coated directly with nitrile rubber; the vulcanized cot bonds firmly to the core, resists deformation, and is durable, with good moisture absorption and release, antistatic behavior, wear resistance, and elasticity.

Auto-leveling Effect of Modern Drawing Frames and Spinning Trials

Xie Jiyan

Journal: Light and Textile Industry and Technology
Year (volume), issue: 2010, 39(1)
Abstract: Through a series of leveling-effect and spinning tests on the auto-leveling devices of modern drawing frames, the performance of typical auto-leveling devices is analyzed, and ways to improve the leveling effect and the resulting quality benefit are proposed, as a reference for cotton mills applying auto-leveling drawing frames and setting the related process parameters.
Pages: 4 (pp. 7-10)
Author: Xie Jiyan
Affiliation: Guangzhou Vocational College of Science, Technology and Trade, Panyu, Guangdong 511442, China
Language: Chinese
CLC number: TS103.22+4
Related literature:
1. Auto-leveling technology of modern drawing frames
2. Performance characteristics and service experience of the FA322B auto-leveling drawing frame
3. Analysis of the auto-leveling effect of the HSR1000 drawing frame
4. Experience in improving the auto-leveling effect of the FA326A drawing frame
5. Effect analysis of retrofitting an auto-leveling device on the finisher drawing frame

Control of Thick Sections and Large Thick Sections on the USTER Auto-leveling Drawing Frame

Cotton Textile Technology, February 2021
Chen Hongkui¹, Liu Jianzhong¹, Feng Yurong¹, Cui Jie², Liu Wenguo² (1. Wuzhong Deyue Textile Science and Technology Co., Ltd., Wuzhong, Ningxia 751100; 2. Dezhou Hengfeng Textile Co., Ltd., Dezhou, Shandong 253517)

Abstract: The threshold setting and control practice for TP-N thick sections and large thick sections on the USTER auto-leveling drawing frame are discussed. The concept, characteristics, and common countermeasures for TP-N thick sections and large thick sections are introduced; worked examples explain in detail the causes of, and solutions for, drawing-frame TP-N thick sections and large thick sections; the influence of large thick sections on quality is described; and threshold settings with high, medium, and low sensitivity are proposed, together with an efficient operating method for machine tenters to locate large thick sections. It is concluded that establishing a sound analysis system for large thick sections and making full use of the auto-leveling drawing frame can continuously improve and stabilize a mill's quality level.

Keywords: USTER; auto-leveling; drawing frame; TP-N thick section; large thick section; threshold setting
CLC number: TS104.2+4  Document code: B  Article ID: 1000-7415(2021)02-0007-04

1 The Concept of TP-N Thick Sections and Large Thick Sections

The USTER auto-leveling system defines two classes of thick places: TP-N thick sections and large thick sections [1].
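As an illustration only (the USTER TP-N classification logic itself is proprietary), a thick place can be modeled as a stretch of the sliver mass signal exceeding the nominal level by more than a sensitivity threshold; raising the threshold then flags fewer, more severe events:

```python
# Hedged sketch of threshold-based thick-place detection. A thick place
# is flagged wherever the mass signal exceeds nominal by more than
# `threshold_pct`; contiguous over-threshold samples form one event.

def thick_places(signal, nominal, threshold_pct):
    events, start = [], None
    for i, m in enumerate(signal):
        over = (m - nominal) / nominal * 100 > threshold_pct
        if over and start is None:
            start = i                     # event begins
        elif not over and start is not None:
            events.append((start, i - 1))  # event ends
            start = None
    if start is not None:
        events.append((start, len(signal) - 1))
    return events
```

A low threshold (high sensitivity) catches both moderate and severe thick places; a high threshold (low sensitivity) keeps only the severe ones, mirroring the high/medium/low sensitivity settings discussed above.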

The Working Principle and Applications of Auto-leveling (Self-tuning Regulation)

1. Introduction

Auto-leveling is an engineering technique that adjusts system parameters automatically in order to control a process and optimize its stability.

It is widely applied in industrial automation, power systems, communications, and other fields.

2. Working Principle

Auto-leveling is based on feedback control: the system's output is measured continuously and compared with the desired value, and the system parameters are adjusted according to the error signal until the system reaches the desired steady state.
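A minimal numeric sketch of this feedback loop is given below; the proportional gain and first-order plant model are illustrative assumptions, not tied to any specific system.

```python
# Minimal closed-loop sketch: measure the output, compare it with the
# setpoint, and derive the next control action from the error signal.

def run_loop(setpoint, steps=50, gain=0.5):
    output = 0.0
    for _ in range(steps):
        error = setpoint - output   # compare measurement with desired value
        control = gain * error      # adjust the control from the error
        output += 0.8 * control     # simple first-order plant response
    return output
```

Each iteration shrinks the error by a constant factor, so the output converges to the setpoint, which is the steady state the text describes.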

3. Applications

Examples of auto-leveling in different fields:

3.1 Industrial automation

- In factory automation control systems, auto-leveling adjusts equipment parameters automatically to keep the control system running stably and to optimize production efficiency.

- On automated production lines, it adjusts process parameters automatically to match different product requirements, improving product quality and consistency.

3.2 Power systems

- In power systems, auto-leveling regulates the generators to keep the grid running stably.

- It adjusts the generator excitation automatically as the grid load changes, keeping the grid voltage stable and the frequency accurate.

3.3 Communications

- In adaptive modems, auto-leveling adjusts the modem parameters automatically according to channel conditions and transmission requirements, improving link quality and data rate.

- In wireless systems, it adjusts antenna direction and transmit power automatically to improve reception and transmission.

3.4 Intelligent transportation

- Auto-leveling is widely used in intelligent transportation: traffic flow and road conditions are monitored in real time to adjust traffic-light timing and intervals, improving throughput and reducing congestion.

- In motorway toll systems, it adjusts the number of open lanes at a toll station according to traffic volume, increasing capacity.

3.5 Smart homes

- In smart-home systems, auto-leveling adjusts indoor temperature, lighting, music, and other devices automatically according to the residents' habits and needs, providing a comfortable living environment.

- It can also switch the operating mode of air purifiers automatically according to indoor air quality, keeping the air clean.

4. Summary

Auto-leveling is an engineering technique based on feedback control that continuously measures and adjusts system parameters to achieve automatic control and optimize stability.

It is widely applied in industrial automation, power systems, communications, intelligent transportation, smart homes, and many other fields.

(Complete version) Application of Artificial Neural Networks in Cognitive-Science Research: Graduation-Project Proposal and Foreign-Literature Translation

Undergraduate graduation project (thesis) foreign-literature translation (with original text). School: School of Mechanical and Control Engineering. Topic: report on the application of artificial neural networks in cognitive-science research. Major (direction): Automation (Control). Class: Student: Supervisor: Date:

A Simplified Approach to Designing a Fuzzy Logic Controller for an Underwater Vehicle
K. Ishaque, S. S. Abdullah, S. M. Ayob, Z. Salam (Faculty of Electrical Engineering, Universiti Teknologi Malaysia, UTM 81310, Skudai, Johor Bahru, Malaysia)

Abstract: The performance of a fuzzy logic controller (FLC) is determined by its inference rules.

In most cases an FLC uses a large rule base so that the precision of its control action is enhanced; however, evaluating a large rule base takes considerable computing time, so an FLC deployed in practice must be fast and efficient.

This paper describes a simplified design approach for a fuzzy logic controller for an underwater vehicle known as a deep submergence rescue vehicle (DSRV). The approach converts the conventional two-input FLC (CFLC) into a single-input fuzzy logic controller (SIFLC).

The SIFLC simplifies the inference rules, chiefly by simplifying the tuning of the control parameters.

The controller was simulated on the MATLAB/Simulink platform using the Marine Systems Simulator (MSS).

In the simulations, wave disturbances were applied to the DSRV.

For the same inputs, the SIFLC produced responses identical to those of Mamdani- and Sugeno-type controllers, while requiring only a very small tuning effort.

Its execution time is less than that of the CFLC by two orders of magnitude.

Keywords: fuzzy logic controller; signed-distance method; single-input fuzzy logic control; underwater vehicle

1 Introduction

An unmanned underwater vehicle is an autonomous, robot-like device that can perform underwater tasks such as search-and-rescue operations, surveying, surveillance, inspection, repair, and maintenance.

Application of Auto-leveling Technology on High-speed Drawing Frames (Part One)

Fei Qing (China Textile Academy)
Cotton Textile Technology, Vol. 34, No. 7, July 2006

Abstract: The auto-leveling device on a drawing frame is used mainly to control sliver weight deviation and to improve sliver evenness, particularly the weight unevenness.

0 Introduction

The basic task of cotton spinning is to produce yarn with a specified evenness and strength, i.e. yarn meeting defined quality requirements, so evenness must be controlled during production. In recent years cotton-spinning equipment has changed greatly: the process flow has been shortened, and production has become continuous, high-speed, and automated, with new spinning systems appearing, such as the blowroom-carding unit, high-production cards, high-speed drawing frames, high-efficiency combers, and rotor spinning. These developments, together with progress in electronic technology, increase the need for automatic control of the semi-finished products. In drawing-frame auto-leveling today, medium-to-long-segment and closed-loop leveling schemes are essentially obsolete; short-segment and ultra-short-segment leveling has become the mainstream, and mixed-loop leveling is the direction of development.
Auto-leveling Technology of Modern Drawing Frames

Overview: Since the 1980s, the continuing integration of computer technology, sensor technology, and variable-frequency drive technology into textile machinery has made it highly sophisticated, and the drawing frame is no exception. Through continuous improvement, the modern drawing frame now provides online auto-leveling of sliver evenness, automatic monitoring of thick-place faults, fully automatic draft adjustment, and automatic adjustment of the drafting-roller gauge, forming a computer-monitored control system. In addition, the machine's negative-pressure cleaning function has been improved and an independently driven automatic can changer added, making the drawing frame still more complete.

The advances in the quality-assurance systems of the new drawing frames shown at ITMA 2007 in Munich were particularly striking. Several countries produce drawing frames, including Germany, Japan, Italy, the UK, and Switzerland. Taking the Rieter (Switzerland) RSB series and the Trützschler (Germany) TD-03 series of auto-leveling drawing frames as examples, the auto-leveling technology of these high-tech machines is discussed below.

Keywords: auto-leveling; open-loop auto-leveling; closed-loop auto-leveling; sensor; servo motor; main draft zone

1. The Rieter RSB series of auto-leveling drawing frames

The RSB-series auto-leveling drawing frames developed by Rieter, and their predecessors — the RSB951 imported by China early on, and the RSB-D30, RSB-D35, RSB-40JI, RSB-401, and SB-D11 imported since the end of the 20th century — have made very significant progress in further improving sliver quality, and their processing technology has changed greatly.

The mechanism characteristics and service experience of the RSB series are introduced below. The quality of the sliver produced by the drawing frame is vital to yarn and fabric quality.

In recent years these machines have advanced markedly in both sliver quality and processing technology:

1. The maximum delivery speed of the RSB-40 series has reached 1100 m/min (SB-D11 drawing frame).

2. Auto-leveling system

2.1 Choice of the auto-leveling configuration. Auto-levelers come in open-loop, closed-loop, and mixed-loop forms. On a drawing frame, the open-loop arrangement has two advantages: the sliver speed at the feed-side measuring point is much lower than the delivery speed, and the fed fiber layer is relatively thick, so it measures evenness more accurately than a closed-loop system. In the newest auto-levelers the leveling circuitry is fully digital, and leveling is based not on time but on the length of fed sliver passing the scanning rollers: advanced USTER levelers scan every 1.5 mm of sliver, some even every 1 mm, and one scan takes only milliseconds. With this speed, precision, and high leveling frequency, fed sliver varying within ±25% can generally be leveled to within ±1%. For these reasons most drawing-frame levelers today are open-loop, i.e. with the measuring point and the adjusting system both on the feed side.
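The proportional principle behind this open-loop correction can be sketched as follows. This is a simplification: a real leveler also times the correction to the transport delay between the scanning rollers and the draft point, which is omitted here, and the names are illustrative.

```python
# Hedged sketch of open-loop leveling: the measured feed thickness sets
# the draft ratio so that the delivered sliver mass stays at nominal.

def corrected_draft(measured_feed_g_per_m, nominal_feed_g_per_m, base_draft):
    # A thicker feed needs proportionally more draft, a thinner feed less.
    return base_draft * (measured_feed_g_per_m / nominal_feed_g_per_m)
```

With this rule, delivered mass (feed mass divided by draft) is constant regardless of the feed variation, which is how a ±25% input fluctuation can be flattened toward ±1%.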

Use and Maintenance of the RSB-D45C Drawing Frame

Tang Mingke (Linqing Huaxing Textile Co., Ltd., Liaocheng, Shandong)

The Rieter RSB-D45C auto-leveling drawing frame uses an open-loop leveling system: a tongue-and-groove roller pair measures the volume variation of the fed slivers, from which a suitable servo-motor speed is computed so that the sliver receives the appropriate draft in the drafting zone, correcting the sliver weight deviation and improving the evenness CV%.

Extensive trials by the author have shown that auto-leveling can also completely eliminate the piecing wave of combed sliver.

However, the auto-leveling system depends heavily on the synchronized interplay of mechanical parts, drive components, electronics, and software, so the machine must be kept in good condition and run with a sound process setup.

1 Parameter settings for auto-leveling

Whenever the raw material, machine speed, sliver weight, number of doublings, or draft ratio changes, or after gauge adjustment or replacement of process components, the three leveling parameters — leveling action point, leveling intensity, and low-speed adjustment — must be set anew.

1.1 Leveling action point

The leveling action point (LAP) is the position in the main draft zone at which a fault detected by the tongue-and-groove rollers is leveled out.

The LAP is set using the machine's automatic LAP search function.

Enter menu 20.1, change 'No' to 'Yes' after 'leveling action point' to enter menu 20.3, change the can as prompted by the computer, and press Start to run the search.

When the search is complete, enter 'Yes' at the '???' after 'accept new value', then change the can; the LAP search is finished.

1.2 Leveling intensity

The leveling intensity determines how strongly the leveling system corrects a detected fault.

It is set by running three trials at intensities n, n+1, and n-1. For each of the three settings, weigh either three 10 m lengths or six 5 m lengths of sliver, enter the data in menu 20.4, enter 'Yes' after 'calculate', and change the '???' after 'accept' to 'Yes'.

If the computer asks for a retest, repeat the weighing procedure above until it displays 'accept new value'.
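The machine computes the new intensity internally from the n, n+1, n-1 weighings; the exact algorithm is not documented here. One plausible illustration (an assumption, not Rieter's method) is a linear fit of mean sliver weight against intensity setting, solved for the intensity that yields the nominal weight:

```python
# Illustrative only: least-squares line through (intensity, mean weight)
# pairs, then solve for the intensity giving the nominal sliver weight.

def suggest_intensity(trials, nominal):
    """trials: dict mapping intensity setting -> mean sliver weight (g/m)."""
    xs, ys = list(trials.keys()), list(trials.values())
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    # invert the fitted line weight = my + slope * (intensity - mx)
    return mx + (nominal - my) / slope
```

For example, if raising the intensity by one step lowers the mean weight by 0.2 g/m, a nominal weight 0.1 g/m above the current mean calls for an intensity half a step lower.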

1.3 Low-speed adjustment

The low-speed adjustment is the compensation applied to the leveling of detected faults at different machine speeds.

It can be set either by the system's recommended adjustment or manually.

1.3.1 System-recommended adjustment

Weigh three 10 m lengths or six 5 m lengths of sliver in normal running and again in jog (inching) operation, and compute the weight difference between the two groups. If the difference is within ±5%, no adjustment is needed. If it exceeds ±5%, enter the two weights in menu 20.5, enter 'Yes' after 'calculate', change the '???' after 'accept' to 'Yes', and repeat the test until the difference is within ±5%.
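The ±5% decision above reduces to a simple comparison of the two group means; a sketch follows (the function name is an assumption, not machine terminology):

```python
# Sketch of the low-speed check: compare mean sliver weight in normal
# run and jog run; adjustment is needed only when the relative
# difference exceeds the limit (5% per the procedure above).

def needs_low_speed_adjustment(normal_weights, jog_weights, limit_pct=5.0):
    mean_n = sum(normal_weights) / len(normal_weights)
    mean_j = sum(jog_weights) / len(jog_weights)
    diff_pct = (mean_j - mean_n) / mean_n * 100
    return abs(diff_pct) > limit_pct, round(diff_pct, 2)
```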


Textile Research Journal Article

Use of Artificial Neural Networks for Determining the Leveling Action Point at the Auto-leveling Draw Frame

Assad Farooq and Chokri Cherif
Institute of Textile and Clothing Technology, Technische Universität Dresden, Dresden, Germany

Abstract

Artificial neural networks, with their ability to learn from data, have been successfully applied in the textile industry. The leveling action point is one of the important auto-leveling parameters of the drawing frame and strongly influences the quality of the manufactured yarn. This paper reports a method of predicting the leveling action point using artificial neural networks. Various variables affecting the leveling action point were selected as inputs for training the artificial neural networks, with the aim of optimizing the auto-leveling by limiting the leveling action point search range. The Levenberg-Marquardt algorithm is incorporated into back-propagation to accelerate the training, and Bayesian regularization is applied to improve the generalization of the networks. The results obtained are quite promising.

Key words: artificial neural networks, auto-leveling, draw frame, leveling action point

The evenness of the yarn plays an increasingly significant role in the textile industry, and sliver evenness is one of the critical factors when producing quality yarn. Sliver evenness is also the major criterion for assessing the operation of the draw frame. In principle, there are two approaches to reducing sliver irregularities. One is to study the drafting mechanism and identify the causes of the irregularities, so that means may be found to reduce them. The other, more valuable, approach is to use auto-levelers [1], since in most cases doubling is inadequate to correct the variations in the sliver. Controlling sliver irregularities can lower the dependence on card sliver uniformity, ambient conditions, and frame parameters.

At the auto-leveler draw frame (RSB-D40), the thickness variations in the fed sliver are continually monitored by a mechanical device (a tongue-groove roll) and subsequently converted into electrical signals. The measured values are transmitted to an electronic memory with a variable, time-delayed response. The time delay allows the draft between the mid-roll and the delivery roll of the draw frame to be adjusted at exactly the moment when the defective sliver piece, which had been measured by a pair of scanning rollers, finds itself at the point of draft. At this point, a servo motor operates depending on the amount of variation detected in the sliver piece. The distance that separates the scanning-roller pair from the point of draft is called the zero point of regulation, or the leveling action point (LAP), as shown in Figure 1. This leads to the calculated correction being applied to the corresponding defective material [2,3]. In auto-leveling draw frames, the machine settings and process-controlling parameters must be optimized, especially when the fiber material or batch changes.
The LAP is the most important auto-leveling parameter, and it is influenced by various parameters such as feeding speed, material, break-draft gauge, main-draft gauge, feeding tension, break draft, and the setting of the sliver-guiding rollers.

Figure 1 Schematic diagram of an auto-leveler drawing frame.

Previously, sliver samples had to be produced with different settings, taken to the laboratory, and examined on the evenness tester until the optimum LAP was found (manual search). The auto-leveler draw frame RSB-D40 implements an automatic search function for determining the optimum LAP. During this function, the sliver is automatically scanned by temporarily adjusting to different LAPs, and the resulting values are recorded. During this process, the quality parameters are constantly monitored, and an algorithm automatically calculates the optimum LAP by selecting the point with the minimum sliver CV%. At present a search range of 120 mm is scanned, i.e. 21 points are examined using 100 m of sliver in each case; 2100 m of sliver is therefore needed to carry out the search function. This is a very time-consuming method, accompanied by material and production losses, and hence it directly affects cost. In this work, we have tried to establish the possibility of predicting the LAP using artificial neural networks, in order to limit the automatic search span and reduce the above-mentioned disadvantages.

Artificial Neural Networks

The motivation for using artificial neural networks lies in their flexibility and information-processing power, which conventional computing methods do not have. A neural network can solve a problem "by experience and learning" from the input-output patterns provided by the user. In the field of textiles, artificial neural networks (mostly using back-propagation) have been extensively studied during the last two decades [4-6].
In the field of spinning, previous research has concentrated on predicting yarn properties and spinning-process performance using fiber properties, or a combination of fiber properties and machine settings, as the input of the neural networks [7-12].

Back-propagation is the supervised learning technique most frequently used for artificial neural network training. The back-propagation algorithm is based on the Widrow-Hoff delta learning rule, in which the weight adjustment is carried out through the mean square error of the output response to the sample input [13]. The set of these sample patterns is repeatedly presented to the network until the error value is minimized. The back-propagation algorithm uses the steepest-descent method, which is essentially a first-order method, to determine a suitable direction of gradient movement.

Overfitting

The goal of neural network training is to produce a network which produces small errors on the training set, and which also responds properly to novel inputs. When a network performs as well on novel inputs as on training-set inputs, the network is said to generalize well. The generalization capacity of the network is largely governed by the network architecture (the number of hidden neurons), which plays a vital role during training. A network that is not complex enough to learn all the information in the data is said to be underfitted, while a network complex enough to fit the "noise" in the data overfits. "Noise" means variation in the target values that is unpredictable from the inputs of a specific network. All standard neural network architectures, such as the fully connected multi-layer perceptron, are prone to overfitting. Moreover, it is very difficult to acquire noise-free data from the spinning industry, owing to the dependence of the end products on inherent material variations, environmental conditions, etc. Early stopping is the technique most commonly used to tackle this problem.
This involves dividing the training data into three sets, i.e. a training set, a validation set, and a test set, with the drawback that a large part of the data (the validation set) can never be part of the training.

Regularization

The other solution to overfitting is regularization, which improves generalization by constraining the size of the network weights. MacKay [14] discussed a practical Bayesian framework for back-propagation networks, which consistently produced networks with good generalization.

The initial objective of the training process is to minimize the sum of squared errors:

$$E_D = \sum_{i=1}^{n} (t_i - a_i)^2 \qquad (1)$$

where $t_i$ are the targets and $a_i$ are the neural network responses to the respective targets. Typically, training aims to reduce the sum of squared errors $F = E_D$. Regularization, however, adds a further term, giving the objective function

$$F = \beta E_D + \alpha E_W \qquad (2)$$

In equation (2), $E_W$ is the sum of squares of the network weights, and $\alpha$ and $\beta$ are objective-function parameters. The relative size of the objective-function parameters dictates the emphasis of training. If $\alpha \ll \beta$, the training algorithm will drive the errors smaller. If $\alpha \gg \beta$, training will emphasize weight-size reduction at the expense of network errors, thus producing a smoother network response [15].

The Bayesian school of statistics is based on a different view of what it means to learn from data, in which probability is used to represent uncertainty about the relationship being learned. Before seeing any data, prior opinions about what the true relationship might be can be expressed in a probability distribution over the network weights that define this relationship. After the program sees the data, the revised opinions are captured by a posterior distribution over the network weights.
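The two terms of the regularized objective in equations (1) and (2) can be computed directly; a toy numeric sketch (the values are illustrative, not from the paper):

```python
# Numeric sketch of the regularized objective F = beta*E_D + alpha*E_W:
# sum-of-squared-errors plus a penalty on the size of the weights.

def objective(targets, outputs, weights, alpha, beta):
    e_d = sum((t - a) ** 2 for t, a in zip(targets, outputs))  # equation (1)
    e_w = sum(w ** 2 for w in weights)                         # weight penalty
    return beta * e_d + alpha * e_w                            # equation (2)
```

With a large `alpha` relative to `beta`, large weights dominate the objective and training is pushed toward smaller weights, i.e. the smoother response described above.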
Network weights that seemed plausible before, but which do not match the data very well, will now be seen as much less likely, while the probability of weight values that do fit the data well will have increased [16].

In the Bayesian framework the weights of the network are considered random variables. After the data are taken, the posterior probability function for the weights can be updated according to Bayes' rule:

$$P(w \mid D, \alpha, \beta, M) = \frac{P(D \mid w, \beta, M)\, P(w \mid \alpha, M)}{P(D \mid \alpha, \beta, M)} \qquad (3)$$

In equation (3), $D$ represents the data set, $M$ is the particular neural network model used, and $w$ is the vector of network weights. $P(w \mid \alpha, M)$ is the prior probability, which represents our knowledge of the weights before any data are collected; $P(D \mid w, \beta, M)$ is the likelihood function, i.e. the probability of the data occurring given the weights $w$; and $P(D \mid \alpha, \beta, M)$ is a normalization factor, which guarantees that the total probability is 1 [15].

In this study, we employed the MATLAB Neural Networks Toolbox function "trainbr", which incorporates the Levenberg-Marquardt algorithm and Bayesian regularization (Bayesian learning) into back-propagation, to train the neural network; this reduces the computational overhead of approximating the Hessian matrix and produces good generalization. The algorithm provides a measure of the network parameters (weights and biases) being effectively used by the network. The effective number of parameters should remain the same irrespective of the total number of parameters in the network, which eliminates the guesswork required in determining the optimum network size.

Experimental

The experimental data were obtained from Rieter, Ingolstadt, the manufacturer of the draw frame RSB-D40 [17]. For these experiments, the material selection and experimental design were based on the frequency of use of particular materials in the spinning industry.
For example, carded cotton is the most frequently used material, so it was used as a standard, and the experiments were performed on carded cotton with all possible settings, which was not the case with the other materials. Also, because not all the materials could be processed with the same roller pressure and draft settings, different spin plans were designed. The materials and their processing plans are given in Table 1.

The standard acclimatization procedure was applied to all the materials, and the standard procedure for the auto-leveling settings (sliver linear density, LAP, leveling intensity) was adopted. A comparison of manual and automatic searches was performed, and the better CV% results were achieved by the automatic search function of the RSB-D40. The LAP searches were therefore carried out with the Rieter Quality Monitor (RQM). An abstract depiction of the experimental model is shown in Figure 2.

Figure 2 Abstract neural network model.

A point to be considered here is that the machine offers no possibility of directly adjusting the major LAP-influencing parameter, the feeding speed. The feeding speed was therefore taken to be related to the delivery speed and the number of doublings according to equation (4). The delivery speed was varied between 300 and 1100 m/min and the doublings from 5 to 7 to achieve different values of the feeding speed:

$$\text{Feeding speed} = \frac{\text{Delivered count} \times \text{Delivery speed}}{\text{Doublings} \times \text{Feed count}} \qquad (4)$$

Training and Testing Sets

For training the neural network, the experimental data were divided into three phases. The first phase comprised the experimental data for the initial compilation of the data and the subsequent analysis. Prior knowledge of the parameters influencing the LAP, i.e. feeding speed, delivery speed, break draft, gauges of break and main draft, and the settings of the sliver guide, was used to select the data.
The first phase therefore contained the experiments in which the standard settings were taken as a foundation and one LAP-influencing parameter was changed in each experiment.

In the second phase, experiments were selected in which more than one influencing parameter was changed, and the network was allowed to learn the complex interactions. This selection was made on the basis of the ascertained influencing parameters, with the aim of increasing or decreasing the LAP length. The third phase involved the experiments conducted on the pilot-scale machine. These pilot-scale experiments were carried out by the machine manufacturer to obtain the response for different settings, so these results were selected to assess the performance of the neural networks.

Pre-processing of Data

Normalizing the input and target variables tends to make the training process better behaved by improving the numerical condition of the problem. It can also make training faster and reduce the chance of getting stuck in local minima. Hence, because of the large spans of the network-input data, the inputs and targets were scaled for better performance. At first the inputs and targets were normalized to the interval [-1, 1], which did not show any promising results. Afterwards the data were normalized to the interval [0, 1], and the networks were trained successfully.

Neural Network Training

We trained five different neural networks to predict the LAP by gradually increasing the number of training data sets. The data sets were divided into training and test sets as shown in Table 2. Training was performed with the training sets, and the test sets were reserved to judge the prediction performance of the neural network in the form of an error.
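Equation (4) and the [0, 1] scaling can be sketched numerically; the counts and speeds below are illustrative values, not data from the paper:

```python
# Equation (4): derive feeding speed from delivery speed, doublings,
# and the linear densities (counts) of fed and delivered sliver.
def feeding_speed(delivered_count, delivery_speed, doublings, feed_count):
    return delivered_count * delivery_speed / (doublings * feed_count)

# Min-max scaling of a variable to the interval [0, 1], as used for the
# network inputs and targets before training.
def scale01(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]
```

With equal fed and delivered counts, six doublings at 600 m/min delivery give a feeding speed of 100 m/min, which matches the intuition that the feed side runs much more slowly than the delivery side.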
Figure 3 depicts the training performance of neural network NN 5.

Figure 3 Training performance of NN 5.

Results and Discussion

As already mentioned with Table 2, different combinations of the data sets were used to train the five networks while keeping a close eye on their test performance, i.e. their performance on unseen data. As Bayesian regularization is said to eliminate the guesswork over the appropriate number of hidden layers and hidden neurons, between 14 and 22 hidden neurons, arranged in two layers, were selected to train the networks. The basic idea behind the selection of the network topology is that the network should be complex enough to learn all the relationships in the data, since the possibility of overfitting was tackled with regularization. The behavior during training and the performance on the test sets can be seen in Figures 4-8.

Figure 4 Testing performance of NN 1.
Figure 5 Testing performance of NN 2.
Figure 6 Testing performance of NN 3.
Figure 7 Testing performance of NN 4.
Figure 8 Testing performance of NN 5.

The mean absolute error was calculated to compare the predicted and actual values, both during training and during testing, and the results are presented in Figure 9. The overall results can be explained by the presence or absence of input-output interactions in the training data and by the increase in prediction performance with the number of training data sets. Figure 9 shows the increase in testing error, even with an increasing number of training data sets, when training and testing were performed on data sets from different phases (NN 2 and NN 4, see Table 2). However, the error showed a downward trend when part of a phase's data was used to train the network and the remainder was used for testing, as in NN 3 and NN 5 in comparison with NN 2 and NN 4, respectively.
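The mean absolute error used for these comparisons is simply the average absolute deviation between predicted and actual LAPs; a sketch with illustrative values (in mm, not the paper's data):

```python
# Mean absolute error between actual and predicted leveling action
# points; a deviation within about 3 mm is negligible per the paper.

def mean_absolute_error(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
```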
The presence of different input-output interactions in the different phases explains this trend. The exceptional behavior of NN 1 with respect to the above is attributed to the relatively small number of data sets in phase 1.

Figure 9 Error comparison between training and test sets.

To assess the goodness of fit of NN 5, a 10-fold cross-validation was performed, i.e. using 90% of the data for training and 10% for testing, repeating the training 10 times and testing the network each time on 10% of unseen data. An average R² = 0.9622 was obtained. The same procedure was adopted for 80% training and 20% test sets, and the calculated value of R² was 0.9470. This decrease in performance is due to only 80% of the data sets being available for training. Nevertheless, these values confirm a very good fit of the NN 5 model.

Conclusion

The artificial neural network model was developed, and the networks were trained, at the Institute of Textile and Clothing Technology, Technische Universität Dresden. The use of Bayesian regularization to reduce the testing error for practical applications has shown quite promising results. In the testing performance of NN 5 shown in Table 3, a maximum deviation of about 2 mm is observable, which falls well within the 3 mm range that is negligible for determination of the LAP. It can be concluded that neural networks can be applied in the future for quick computation of the LAP, with the advantages of fast adjustment and savings of material and time. The accuracy of the computation can lead to a better sliver CV% and better yarn quality.

Table 3 Test performance of NN 5.
