Realization of a Rapidly Reconfigurable Robotic Workcell
Advanced Topics in Data Science

Data science is a rapidly evolving field that encompasses a wide range of advanced topics. In this article, we will explore some of the most cutting-edge and complex concepts in data science, including machine learning, deep learning, natural language processing, and big data.

Machine learning is a crucial aspect of data science that involves the development of algorithms that can learn from data and make predictions or decisions based on it. It spans a wide range of techniques, including supervised learning, unsupervised learning, and reinforcement learning. Supervised learning involves training a model on labeled data, while unsupervised learning involves finding patterns and relationships in unlabeled data. Reinforcement learning, on the other hand, involves training a model to make decisions in a dynamic environment in order to maximize some notion of cumulative reward.

Deep learning is a subfield of machine learning that focuses on the development of artificial neural networks, which are inspired by the structure of the human brain. These networks learn to represent data in multiple layers of increasingly abstract representations, allowing them to excel at tasks such as image and speech recognition, natural language processing, and reinforcement learning. Deep learning has been a major driver of progress in fields such as computer vision and natural language processing, and has led to major breakthroughs in areas such as autonomous vehicles, medical imaging, and language translation.

Natural language processing (NLP) is a branch of artificial intelligence that focuses on the interaction between computers and humans through natural language. NLP enables computers to understand, interpret, and generate human language in a valuable way. It involves a wide range of techniques and methods, including text mining, sentiment analysis, language modeling, and machine translation, and it is an essential technology for many applications, including chatbots, virtual assistants, and language translation services.

Big data refers to volumes of data so large and complex that traditional data processing applications are inadequate to deal with them. This topic involves the collection, storage, and analysis of large and complex data sets using advanced computing and statistical techniques. Big data has a wide range of applications, including predictive analytics, risk modeling, fraud detection, and personalized marketing, and it is crucial for understanding and making decisions based on large, complex data sets.

In conclusion, advanced topics in data science encompass a wide range of complex and cutting-edge concepts, including machine learning, deep learning, natural language processing, and big data. These topics are crucial for understanding and analyzing large and complex data sets, and they have applications in fields such as computer vision, speech recognition, language translation, and predictive analytics. As the field of data science continues to evolve, professionals must stay abreast of these advanced topics in order to remain competitive in the industry.
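To make the contrast between supervised and unsupervised learning concrete, here is a minimal Python sketch; the dataset and the two model choices are illustrative assumptions, not methods prescribed by the article:

```python
# Supervised vs. unsupervised learning on the same data (illustrative choices).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: fit on labeled data, then predict labels for held-out data.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: find structure in the same data without using the labels.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", [int((kmeans.labels_ == k).sum()) for k in range(3)])
```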
Science in China (English Edition) Template

1. Identification of Wiener systems with nonlinearity being piecewise-linear function HUANG YiQing, CHEN HanFu, FANG HaiTao
2. A novel algorithm for explicit optimal multi-degree reduction of triangular surfaces HU QianQian, WANG GuoJin
3. New approach to the automatic segmentation of coronary artery in X-ray angiograms ZHOU ShouJun, YANG Jun, CHEN WuFan, WANG YongTian
4. Novel Ω-protocols for NP DENG Yi, LIN DongDai
5. Non-coherent space-time code based on full diversity space-time block coding GUO YongLiang, ZHU ShiHua
6. Recursive algorithm and accurate computation of dyadic Green's functions for stratified uniaxial anisotropic media WEI BaoJun, ZHANG GengJi, LIU QingHuo
7. A blind separation method of overlapped multi-components based on time varying AR model CAI QuanWei, WEI Ping, XIAO XianCi
8. Joint multiple parameters estimation for coherent chirp signals using vector sensor array WEN Zhong, LI LiPing, CHEN TianQi, ZHANG XiXiang
9. Vision implants: An electrical device will bring light to the blind NIU JinHai, LIU YiFei, REN QiuShi, ZHOU Yang, ZHOU Ye, NIU Shuaibin

1. Combining search space partition and abstraction for LTL model checking PU Fei, ZHANG WenHui
2. Dynamic replication of Web contents Amjad Mahmood
3. On global controllability of affine nonlinear systems with a triangular-like structure SUN YiMin, MEI ShengWei, LU Qiang
4. A fuzzy model of predicting RNA secondary structure SONG DanDan, DENG ZhiDong
5. Randomization of classical inference patterns and its application WANG GuoJun, HUI XiaoJing
6. Pulse shaping method to compensate for antenna distortion in ultra-wideband communications WU XuanLi, SHA XueJun, ZHANG NaiTong
7. Study on modulation techniques free of orthogonality restriction CAO QiSheng, LIANG DeQun
8. Joint-state differential detection algorithm and its application in UWB wireless communication systems ZHANG Peng, BI GuangGuo, CAO XiuYing
9. Accurate and robust estimation of phase error and its uncertainty of 50 GHz bandwidth sampling circuit ZHANG Zhe, LIN MaoLiu, XU QingHua, TAN JiuBin
10. Solving SAT problem by heuristic polarity decision-making algorithm JING MingE, ZHOU Dian, TANG PuShan, ZHOU XiaoFang, ZHANG Hua

1. A novel formal approach to program slicing ZHANG YingZhou
2. On Hamiltonian realization of time-varying nonlinear systems WANG YuZhen, Ge S. S., CHENG DaiZhan
3. Primary exploration of nonlinear information fusion control theory WANG ZhiSheng, WANG DaoBo, ZHEN ZiYang
4. Center-configuration selection technique for the reconfigurable modular robot LIU JinGuo, WANG YueChao, LI Bin, MA ShuGen, TAN DaLong
5. Stabilization of switched linear systems with bounded disturbances and unobservable switchings LIU Feng
6. Solution to the Generalized Champagne Problem on simultaneous stabilization of linear systems GUAN Qiang, WANG Long, XIA BiCan, YANG Lu, YU WenSheng, ZENG ZhenBing
7. Supporting service differentiation with enhancements of the IEEE 802.11 MAC protocol: Models and analysis LI Bo, LI JianDong, Roberto Battiti
8. Differential space-time block-diagonal codes LUO ZhenDong, LIU YuanAn, GAO JinChun
9. Cross-layer optimization in ultra wideband networks WU Qi, BI JingPing, GUO ZiHua, XIONG YongQiang, ZHANG Qian, LI ZhongCheng
10. Searching-and-averaging method of underdetermined blind speech signal separation in time domain XIAO Ming, XIE ShengLi, FU YuLi
11. New theoretical framework for OFDM/CDMA systems with peak-limited nonlinearities WANG Jian, ZHANG Lin, SHAN XiuMing, REN Yong

1. Fractional Fourier domain analysis of decimation and interpolation MENG XiangYi, TAO Ran, WANG Yue
2. A reduced state SISO iterative decoding algorithm for serially concatenated continuous phase modulation SUN JinHua, LI JianDong, JIN LiJun
3. On the linear span of the p-ary cascaded GMW sequences TANG XiaoHu
4. De-interlacing technique based on total variation with spatial-temporal smoothness constraint YIN XueMin, YUAN JianHua, LU XiaoPeng, ZOU MouYan
5. Constrained total least squares algorithm for passive location based on bearing-only measurements WANG Ding, ZHANG Li, WU Ying
6. Phase noise analysis of oscillators with Sylvester representation for periodic time-varying modulus matrix by regular perturbations FAN JianXing, YANG HuaZhong, WANG Hui, YAN XiaoLang, HOU ChaoHuan
7. New optimal algorithm of data association for multi-passive-sensor location system ZHOU Li, HE You, ZHANG WeiHua
8. Application research on the chaos synchronization self-maintenance characteristic to secret communication WU DanHui, ZHAO ChenFei, ZHANG YuJie
9. The changes on synchronizing ability of coupled networks from ring networks to chain networks HAN XiuPing, LU JunAn
10. A new approach to consensus problems in discrete-time multiagent systems with time-delays WANG Long, XIAO Feng
11. Unified stabilizing controller synthesis approach for discrete-time intelligent systems with time delays by dynamic output feedback LIU MeiQin

1. Survey of information security SHEN ChangXiang, ZHANG HuanGuo, FENG DengGuo, CAO ZhenFu, HUANG JiWu
2. Analysis of affinely equivalent Boolean functions MENG QingShu, ZHANG HuanGuo, YANG Min, WANG ZhangYi
3. Boolean functions of an odd number of variables with maximum algebraic immunity LI Na, QI WenFeng
4. Pirate decoder for the broadcast encryption schemes from Crypto 2005 WENG Jian, LIU ShengLi, CHEN KeFei
5. Symmetric-key cryptosystem with DNA technology LU MingXin, LAI XueJia, XIAO GuoZhen, QIN Lei
6. A chaos-based image encryption algorithm using alternate structure ZHANG YiWei, WANG YuMin, SHEN XuBang
7. Impossible differential cryptanalysis of advanced encryption standard CHEN Jie, HU YuPu, ZHANG YueYu
8. Classification and counting on multi-continued fractions and its application to multi-sequences DAI ZongDuo, FENG XiuTao
9. A trinomial type of σ-LFSR oriented toward software implementation ZENG Guang, HE KaiCheng, HAN WenBao
10. Identity-based signature scheme based on quadratic residues CHAI ZhenChuan, CAO ZhenFu, DONG XiaoLei
11. Modular approach to the design and analysis of password-based security protocols FENG DengGuo, CHEN WeiDong
12. Design of secure operating systems with high security levels QING SiHan, SHEN ChangXiang
13. A formal model for access control with supporting spatial context ZHANG Hong, HE YePing, SHI ZhiGuo
14. Universally composable anonymous Hash certification model ZHANG Fan, MA JianFeng, SangJae MOON
15. Trusted dynamic level scheduling based on Bayes trust model WANG Wei, ZENG GuoSun
16. Log-scaling magnitude modulated watermarking scheme LING HeFei, YUAN WuGang, ZOU FuHao, LU ZhengDing
17. A digital authentication watermarking scheme for JPEG images with superior localization and security YU Miao, HE HongJie, ZHANG JiaShu
18. Blind reconnaissance of the pseudo-random sequence in DS/SS signal with negative SNR HUANG XianGao, HUANG Wei, WANG Chao, LÜ ZeJun, HU YanHua

1. Analysis of security protocols based on challenge-response LUO JunZhou, YANG Ming
2. Notes on automata theory based on quantum logic QIU DaoWen
3. Optimality analysis of one-step OOSM filtering algorithms in target tracking ZHOU WenHui, LI Lin, CHEN GuoHai, YU AnXi
4. A general approach to attribute reduction in rough set theory ZHANG WenXiu, QIU GuoFang, WU WeiZhi
5. Multiscale stochastic hierarchical image segmentation by spectral clustering LI XiaoBin, TIAN Zheng
6. Energy-based adaptive orthogonal FRIT and its application in image denoising LIU YunXia, PENG YuHua, QU HuaiJing, YIN Yong
7. Remote sensing image fusion based on Bayesian linear estimation GE ZhiRong, WANG Bin, ZHANG LiMing
8. Fiber soliton-form 3R regenerator and its performance analysis ZHU Bo, YANG XiangLin
9. Study on relationships of electromagnetic band structures and left/right handed structures GAO Chu, CHEN ZhiNing, WANG YunYi, YANG Ning
10. Study on joint Bayesian model selection and parameter estimation method of GTD model SHI ZhiGuang, ZHOU JianXiong, ZHAO HongZhong, FU Qiang
Die-on-wafer and Wafer-level 3D Integration for Millimeter-Wave Smart Antenna Transceivers

Die-on-wafer and Wafer-level 3D Integration for Millimeter-Wave Smart Antenna Transceivers
M.M. Hella, S. Devarajan, J.-Q. Lu, K. Rose and R.J. Gutmann
Center for Integrated Electronics, Rensselaer Polytechnic Institute, Troy, New York 12180, ***************.edu

Abstract — A three-dimensional (3D) IC technology platform for high-performance, heterogeneous integration of silicon ICs for mm-wave smart antenna transceivers is presented. The platform uses dielectric adhesive bonding of fully-processed, wafer-to-wafer aligned ICs, followed by a three-step thinning process and copper damascene patterning to form inter-wafer interconnects. A low noise amplifier (LNA), power amplifier (PA), and analog-to-digital converter (ADC) are designed in an RF-enhanced SiGe BiCMOS process to operate in the 24 GHz ISM band. These critical design blocks serve as a step towards the realization of a complete system integrated with I/O matching networks, switches, antennas, and digital processing in a 3D configuration.

I. INTRODUCTION

The next wave of wireless communications seeks to improve data rates and channel capacity by employing larger bandwidths with higher efficiencies. One promising technology to attain this goal involves the use of smart-antenna technology, whereby multiple antennas are combined intelligently at the transmitter and the receiver, both at the subscriber and at the base station. Various forms of multiple antenna systems provide solutions for communications and radars, such as multiple-input-multiple-output (MIMO) diversity transceivers and synthetic aperture radars (SARs) [1]. The industrial, scientific, and medical (ISM) band at 24 GHz is regarded as a potential candidate for such applications. Traditionally, communications systems working in the microwave/mm-wave band are realized using multiple microwave modules implemented mainly in GaAs, adding to overall cost and complexity. It is envisioned that single-chip silicon-based technologies will replace current solutions, in a way similar to the trend that commercial cellular and PCS systems have taken for their implementation. System integration is the main key in the development of any low cost/high performance wireless networking system [2-3].

The major drivers behind 3D integration for mm-wave applications are the impact of interconnect losses at these frequencies (for example, the interconnect loss for a flip-chip packaged circuit is near 1.2 dB at 60 GHz [4]) and the prospect of reconfigurable/smart silicon-based transceivers that interface with CMOS memory-intensive digital processors and possibly NMOS-based imagers.

In this paper, various issues related to 3D integration for mm-wave transceivers are addressed. The 3D technology platform is presented in Section II. Some basic building blocks in the transceiver chain, including a SiGe-based low noise amplifier (LNA), a power amplifier (PA) and a high performance SiGe analog-to-digital converter (ADC), are then introduced.

II. 3D IC TECHNOLOGY PLATFORM

Die-to-die, die-to-wafer and wafer-to-wafer approaches are in various stages of research and development [5]. Alternative wafer-to-wafer technology platforms are under development involving oxide-to-oxide bonding, copper-to-copper bonding, and dielectric adhesive bonding [5]. Our dielectric adhesive bonding approach accommodates wafer distortions and interface contaminants; in addition, a handling wafer is not required, and wafers are thinned only after bonding to a host wafer. A three-wafer stack depicting our IC technology platform is shown in Figure 1(a) [6].
Fully processed wafers are aligned to within a micron after spin coating a micron-thick layer of benzocyclobutene (BCB) and soft baking the BCB to remove volatile components. The wafer pair is then bonded together in a bonder with a specified ambient, temperature and pressure cycle. After bonding, the top-side donor wafer is thinned by backside grinding, polishing and selective etching. Finally, inter-wafer interconnects are formed by copper damascene patterning. The upper level device wafer can be integrated in a similar process flow.

Fig. 1. (a) Schematic of a 3D integration platform, showing the wafer bonding interface, vertical inter-wafer vias (plug- and bridge-type), and "face-to-face" and "face-to-back" bonding; (b) three-wafer/three-die stack for a SiGe-based mm-wave transceiver.

An attractive wafer-level partitioning, depicted in Figure 1(b), is to have the top wafer in a three-wafer stack be a thermal-coefficient-of-expansion (TCE) matched glass, in which high-Q passives can be processed (inductors, with or without magnetic thin films, high density capacitors with high dielectric constant thin films, and/or multiple antennas for beam forming applications); the middle wafer is a SiGe-based transceiver wafer, with vias connecting to the high-Q passives in the upper wafer; the bottom layer is the CMOS-based processor and memory. This partitioning is particularly attractive for mm-wave applications, since the interconnect length between the core of the transceiver and both the passives in the upper layer and the digital control in the bottom layer can be controlled. This allows extensive computing capabilities as well as minimum interconnect losses.

The BCB-based bond has a critical adhesion energy between 25 and 35 J/m², depending upon bonding conditions [6], well above the 5-10 J/m² required for IC processing. Moreover, inter-wafer via chains have been fabricated that demonstrate the validity of the process flow with micron-sized vias and 1-µm wafer-to-wafer alignment, as described in detail elsewhere [6].

The impact of our bonding and thinning processes on IC interconnects (copper with oxide and copper with ultra-low-k dielectric) has been investigated with SEMATECH [7], and on 130 nm SOI CMOS devices and test circuits having four-level copper/low-k interconnects with Freescale [8]. While the ultra-low-k dielectric structure shows some change due to its fragile structure, changes in resistance and line-to-line leakage are small [7]. CMOS device and circuit parameters (threshold voltage, subthreshold leakage and ring oscillator delay) vary by less than one-third of the original 10%-90% spread across the wafer [8]. A FIB-SEM cross-section of an SOI CMOS wafer BCB-bonded to a prime Si wafer after a double-bonding/thinning process is shown in Figure 2 [8].

While 3D die stacks with micron-size, through-wafer vias may have performance comparable to a wafer-level 3D implementation, the manufacturing cost will be higher due to die handling and the die-by-die stack processing of the vertical interconnects. Monolithic wafer-level 3D implementations are more challenging than system-in-a-package (SiP) until a viable manufacturing base is established. However, the performance advantages of short inter-wafer interconnects, high integration density, and low interconnectivity cost make monolithic 3D attractive for future wireless networking solutions.
III. BASIC BUILDING BLOCKS IN SIGE BICMOS FOR THE 24 GHZ TRANSCEIVER

Having presented our current 3D technology platform, it is worth noting that in RF/mm-wave applications, the techniques of SiP and Multi-Chip-Module (MCM) are currently being pursued as more-realistic cost-performance solutions. However, the long-term cost of either 2D or 3D die-stack packaging solutions is affected by chip handling and assembly. Clearly, wafer-to-wafer implementations are a longer-term solution, but they also have the lowest cost for high-volume products, since chip handling is minimized and vertical interconnectivity is maximized by a batch monolithic process.

In the following subsections, the designs of an ADC, an LNA, and a PA are presented. These are the active circuit blocks that will interface with both the bottom and upper layers.

3.1. A SiGe-based Analog-to-Digital Converter

The increasingly challenging requirements on ADC performance posed by 1) new high-bandwidth standards, 2) the trend of low-IF single-heterodyne receivers, and 3) advanced power amplifier linearization techniques call for the device flexibility available only in BiCMOS technologies [9]. The impact of including an ADC with an RF/microwave transceiver IC on one wafer and combining it with a digital processing IC in a second wafer is significant, particularly for smart/reconfigurable wireless terminals.

A conventional pipeline A/D converter is designed using gain-of-2 sample/hold (S/H) amplifiers realized with an operational transconductance amplifier (OTA) in a negative feedback loop using precise-value capacitors. IBM's SiGe 6HP technology, which provides 47 GHz SiGe HBTs and 250 nm node CMOS, was used. The ADC chip architecture, micrograph, and a summary of measured results are shown in Figure 3 [10].

Fig. 2. FIB-SEM cross-section of an SOI CMOS wafer BCB-bonded to a prime Si wafer after the double-bonding/thinning process [8].

Fig. 3. SiGe BiCMOS pipeline A/D converter: block diagram, chip photograph, and chip layout (3x3 mm). Measured pipeline ADC performance: resolution 12 bits, sampling rate 34 MS/s. Simulated OTA performance: DC gain 88 dB, unity gain frequency 430 MHz, settling time (0.01%) 10 ns.

High DC gain, fast-settling, low noise OTAs capable of driving large sampling capacitors without sacrificing output swing are needed for realizing high-performance pipelined ADCs. A folded cascode configuration using SiGe NPN HBTs as cascodes with PMOS inputs resulted in a wide-bandwidth, high-gain, fast-settling OTA. The 34 MS/s sampling rate with 12 bit resolution was limited by capacitor mismatch and the lack of self-calibration techniques [10]. More recently, an improved SiGe BiCMOS OTA was designed that uses a triple-cascode architecture and NMOS-NPN SiGe HBT Darlington inputs with cascode SiGe HBTs to achieve a fast settling response, with a predicted 115 MS/s sampling rate at 12 bit resolution [11]; with digital self-calibration [12] using a 7-bit pipeline seed, 205 MS/s is predicted. Using the A/D figure-of-merit (FoM) from the 2003 ITRS [13], we obtain a conservative estimate of 2.2 x 10^3 GHz/W without self-calibration and 4.0 x 10^3 GHz/W with self-calibration, both using the 6HP process introduced in 2000. In comparison, the 2003 ITRS predicts CMOS A/D converters to reach a FoM of 2.2 x 10^3 GHz/W in 2009 and 4.0 x 10^3 GHz/W in 2012 [13].
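To relate the quoted FoM numbers to resolution and speed, here is a quick sketch using the ITRS-style A/D figure of merit, FoM = 2^bits x f_s / P. The paper reports only the resulting FoM values, so the ~0.21 W power below is an assumed value chosen for illustration:

```python
# ITRS-style ADC figure of merit: FoM = 2**bits * f_sample / power.
# The 0.21 W power is an assumption for illustration; the paper quotes only
# the resulting FoM values of roughly 2.2e3 and 4.0e3 GHz/W.
def adc_fom_ghz_per_watt(bits, f_sample_hz, power_w):
    return (2 ** bits) * f_sample_hz / power_w / 1e9

print(adc_fom_ghz_per_watt(12, 115e6, 0.21))  # ~2.2e3 GHz/W (no self-calibration)
print(adc_fom_ghz_per_watt(12, 205e6, 0.21))  # ~4.0e3 GHz/W (with self-calibration)
```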
3.2. Low Noise Amplifier

The LNA designed is a typical common-emitter amplifier with inductive degeneration and an isolation cascode. To get sufficient gain, two identical stages were cascaded, similar to the design presented in [14]. Hence, each stage was designed to have 50 ohm input and output matching. While the design presented in [14] used a 120 GHz process to realize a 24 GHz SiGe LNA, we were able to realize similar performance with a 60 GHz fT process through careful component optimization. The designed LNA is shown in Figure 4. The input transistor Q1 is inductively degenerated with Ls to provide good input matching with a 50 ohm real part. The bias current density is determined for a low noise figure, and Q1 is sized for input matching along with Ls and Lg. Q2 is used to provide better isolation between the input transistor and the output node. While in typical cascaded systems there is no need to match the output impedance of the first stage and the input impedance of the second stage to 50 ohms, Guan [14] suggests that at a high frequency like 24 GHz, sensitivity to variations in other adjacent blocks can be minimized by matching each to 50 ohms. Also, in a two-stage LNA design, the first stage can be exactly replicated if it is designed with 50 ohm input and output matching. C1, C2 and Ld are sized for matching the output of the first stage to 50 ohms. Once the first stage is optimized, it is replicated in order to obtain a high gain (S21). The simulated plots of S11, S21 and NF are shown in Figure 4. It is worth noting that the 6.1 dB noise figure can be lowered to around 4 dB if the on-chip spiral inductors can be replaced with higher quality factor inductors. We anticipate that this can be realized in the 3D configuration by placing high quality passives on TCE-matched glass in the upper layer.

Fig. 4. 24 GHz SiGe LNA and simulated S-parameter / NF curves.
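To make the input-matching condition of Section 3.2 concrete, here is a first-order, textbook-style calculation, not taken from the paper: with inductive degeneration, the real part of the input impedance is approximately 2*pi*fT*Ls, and Lg + Ls is chosen to resonate the base-emitter capacitance C_pi at 24 GHz. The C_pi value below is an assumed placeholder, since the paper does not report device parameters:

```python
# First-order design equations for an inductively degenerated LNA input match.
# Standard approximation: Re{Zin} ~= 2*pi*fT*Ls; Lg + Ls resonates C_pi at f0.
import math

f0 = 24e9       # operating frequency (Hz), the paper's target band
fT = 60e9       # transit frequency of the SiGe process, from the paper
Z0 = 50.0       # target real input impedance (ohms)
C_pi = 120e-15  # ASSUMED base-emitter capacitance (F); set by Q1 sizing/bias

Ls = Z0 / (2 * math.pi * fT)                     # degeneration inductor
Lg = 1 / ((2 * math.pi * f0) ** 2 * C_pi) - Ls   # series base inductor
print(f"Ls ~= {Ls*1e12:.0f} pH, Lg ~= {Lg*1e9:.2f} nH")
```

With these assumptions the degeneration inductor comes out near 0.13 nH, which is the order of magnitude one expects for an on-chip spiral at this frequency.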
3.3. 24 GHz Power Amplifier

A 2-stage single-ended class AB power amplifier is designed using the 0.18 µm FETs available in the SiGe BiCMOS technology used. High fT FETs are used rather than the high-breakdown HBTs, since the latter have a lower fT of 24 GHz. Input, output, and inter-stage matching are implemented on-chip using an inductor line formed of the top metal layer over a deep trench to isolate the inductor from the substrate. This technique generates small value inductors with high quality factor. The amplifier has been simulated with the effect of parasitics, including ground inductances, as shown in Figure 5 (a) and (b). Using 5 ground bonding pads with their typical packaging parasitic inductances, the PA can deliver 11 dBm of maximum output power. The output power is estimated to increase to 14 dBm, with around a 6 dB increase in gain, by decreasing the ground inductance. Since the matching inductors are already implemented on-chip with high quality factor, 3D integration will not enhance the performance of the amplifier.

IV. SUMMARY AND CONCLUSIONS

We have presented our 3D integration platform and its application to mm-wave smart antenna transceivers. Basic test blocks targeting the 24 GHz ISM band are designed to serve as a step towards the realization of the complete system integrated with I/O matching networks, switches, and antennas. Simulation results from the various blocks indicate the possible improvements in power gain, output power, and noise figure as the quality factor of the inductors increases. Although the relative increase in performance does not justify the higher cost of 3D integration, the partitioning capability, the possibility of integrating multiple antennas and switches on the top layer, and the integration of processors with higher computational power can prove 3D to be a worthy long-term solution.

Another possible application is the concept of digitally assisted RF/analog design, where the performance of each RF/analog block can be optimized in real time by monitoring its output and applying digital techniques for performance improvement. This requires high interconnect capacity, which, if done in 2D, can pose cross-talk issues and consume more area. 3D, on the other hand, can provide vertical interconnects from the digital processing core in the bottom layer to each block in the transceiver chain in the intermediate layer.

ACKNOWLEDGEMENT

This research is partially supported through the Interconnect Focus Center for Hyperintegration, funded by MARCO, DARPA and NYSTAR.

REFERENCES

[1] X. Guan, H. Hashemi, and A. Hajimiri, "A Fully Integrated 24-GHz Eight-Element Phased-Array Receiver in Silicon," IEEE J. of Solid-State Circuits, vol. 39, no. 12, pp. 2311-2320, Dec. 2004.
[2] A. Smolders, N. Pulsford, P. Philippe, and F. Van Straten, "RF SiP: The Next Wave for Wireless System Integration," Proc. of the IEEE Radio Frequency Integrated Circuits Symposium, pp. 233-236, May 2004.
[3] R. Tummala and J. Laskar, "Gigabit Wireless: System-on-a-Package Technology," Proceedings of the IEEE, vol. 92, no. 2, Feb. 2004.
[4] M. Oprysko, "Building Millimeter-Wave Circuits in Silicon," Workshop on Advances in RF and High-Speed System Integration, IEEE Radio and Wireless Conference, Atlanta, 2004.
[5] J.-Q. Lu, T.S. Cale and R.J. Gutmann, "Dielectric Adhesive Wafer Bonding for Back-End Wafer-Level 3D Hyper-integration," in Dielectrics for Nanosystems: Materials, Science, Processing, Reliability, and Manufacturing, eds. R. Singh, H. Iwai, R.R. Tummala, and S.C. Sun, pp. 312-323, ECS PV 2004-04, 2004.
[6] J.-Q. Lu, A. Jindal, Y. Kwon, J.J. McMahon, K.-W. Lee, R.P. Kraft, B. Altemus, D. Cheng, E. Eisenbraun, T.S. Cale, and R.J. Gutmann, "3D System-on-a-Chip using Dielectric Glue Bonding and Cu Damascene Inter-Wafer Interconnects," in Thin Film Materials, Processes, and Reliability, eds. S. Mathad, T.S. Cale, D. Collins, M. Engelhardt, F. Leverd, and H.S. Rathore, pp. 381-389, ECS Proc. Vol. PV 2003-13, 2003.
[7] J.-Q. Lu, A. Jindal, Y. Kwon, J.J. McMahon, M. Rasco, R. Augur, T.S. Cale, and R.J. Gutmann, "Evaluation Procedures for Wafer Bonding and Thinning of Interconnect Test Structures for 3D ICs," IEEE International Interconnect Technology Conference (IITC), pp. 74-76, June 2003.
[8] R.J. Gutmann, J.-Q. Lu, S. Pozder, Y. Kwon, A. Jindal, M. Celik, J.J. McMahon, K. Yu and T.S. Cale, "A Wafer-Level 3D IC Technology Platform," Advanced Metallization Conference 2003 (AMC 2003), eds. G.W. Ray, T. Smy, T. Ohta and M. Tsujimura, pp. 19-26, MRS Proceedings, 2004.
[9] A. Zanchi, F. Tsay, and I. Papantonopoulos, "Impact of Dielectric Relaxation on a 14b Pipeline ADC in 3V SiGe BiCMOS," ISSCC Digest of Technical Papers, pp. 330-331, Feb. 2003.
[10] S. Devarajan, M. Hourihan and K. Rose, "High-speed 12-bit pipeline A/D converter for high-speed image capture," SRC SiGe Design Challenge - Phase 2, July 2003.
[11] S. Devarajan, R.J. Gutmann and K. Rose, "An 87 dB, 2.3 GHz SiGe BiCMOS operational transconductance amplifier," IEEE International Symposium on Circuits and Systems, pp. 1293-1296, May 2004.
[12] A. Karanicolas, Ph.D. Thesis, Massachusetts Institute of Technology, 1994.
[13] International Technology Roadmap for Semiconductors (ITRS): 2003 Edition, Semiconductor Industry Association, 2003.
[14] X. Guan, H. Hashemi and A. Hajimiri, "A Fully Integrated 24-GHz Eight-Element Phased-Array Receiver in Silicon," IEEE Journal of Solid-State Circuits, vol. 39, no. 12, pp. 2311-2320, Dec. 2004.
A Brief Introduction to Real-ESRGAN

Real-ESRGAN (Enhanced Super-Resolution Generative Adversarial Network) is a state-of-the-art image super-resolution model that uses a generative adversarial network (GAN) to enhance the visual quality of low-resolution images. It has achieved remarkable results in image upscaling, noise reduction, and artifact removal. A key innovation in Real-ESRGAN is its use of a perceptual loss function, which encourages the model to focus on perceptually relevant details in the image, leading to more realistic and visually pleasing results. The model has been widely used in a variety of applications, including image editing, video enhancement, and medical imaging.
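To illustrate the perceptual-loss idea, here is a minimal PyTorch sketch of a VGG-feature loss of the kind used in ESRGAN-family models. The layer index and the L1 distance are illustrative choices, not Real-ESRGAN's exact configuration:

```python
# Sketch of a VGG-based perceptual loss: compare deep feature maps of the
# super-resolved output and the ground-truth image instead of raw pixels.
import torch
import torch.nn as nn
from torchvision.models import vgg19

class PerceptualLoss(nn.Module):
    def __init__(self, layer_index=35):  # a deep conv activation (assumed choice)
        super().__init__()
        self.features = vgg19(weights="IMAGENET1K_V1").features[:layer_index].eval()
        for p in self.features.parameters():
            p.requires_grad = False     # the feature extractor stays frozen
        self.l1 = nn.L1Loss()

    def forward(self, sr, hr):
        return self.l1(self.features(sr), self.features(hr))

loss_fn = PerceptualLoss()
sr = torch.rand(1, 3, 128, 128)  # placeholder super-resolved image
hr = torch.rand(1, 3, 128, 128)  # placeholder high-resolution target
print(loss_fn(sr, hr).item())
```

Because the comparison happens in a deep feature space, textures that look right are rewarded even when they do not match the target pixel for pixel, which is what makes GAN-based super-resolution outputs look sharper than L2-trained ones.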
The Principles of Deep Synthesis Technology (English Essay)

Deep synthesis technology, also known as deepfake technology, is a method of creating realistic-looking but entirely fake videos or images. It uses advanced machine learning algorithms to manipulate and combine existing images and videos to create new, often convincing, content. This technology has been used in various fields, including entertainment, politics, and social media, and has raised concerns about its potential for misuse and abuse.

The principle behind deep synthesis technology is the use of deep learning algorithms, particularly generative adversarial networks (GANs), to analyze and synthesize visual and audio data. GANs consist of two neural networks, a generator and a discriminator, which work together to create and evaluate the authenticity of the synthesized content. The generator creates fake content, while the discriminator tries to distinguish between real and fake content. Through this process of competition and collaboration, GANs can produce highly realistic and convincing deepfakes.

One of the most well-known examples of deep synthesis technology is the creation of fake videos of public figures, such as politicians and celebrities, saying or doing things they never actually did. These deepfakes can be incredibly convincing, making it difficult for viewers to discern the truth. This has raised concerns about the potential for deep synthesis technology to be used for misinformation and propaganda, as well as for malicious purposes such as blackmail and fraud.

Despite the potential for misuse, deep synthesis technology also has legitimate applications. For example, it can be used in the film industry to create realistic special effects or to bring deceased actors back to the screen. It can also be used in the field of medicine to generate realistic medical simulations for training purposes. However, the ethical and legal implications of deep synthesis technology must be carefully considered and regulated to prevent its misuse.

In conclusion, deep synthesis technology is a powerful and potentially dangerous tool that has the ability to create highly realistic fake content. While it has legitimate applications, its potential for misuse and abuse raises serious concerns. As this technology continues to develop, it is important for society to have a conversation about its implications and to establish guidelines for its responsible use.
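The generator/discriminator game described above can be shown in a few lines. This is a toy sketch on synthetic 2-D data; the network sizes, data, and hyperparameters are all illustrative, and a real deepfake model would be far larger:

```python
# Minimal GAN training step: the discriminator learns to tell real from fake,
# the generator learns to fool it. Toy 2-D data, not a production model.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, 2) + torch.tensor([4.0, 4.0])  # "real" samples
    fake = G(torch.randn(64, 8))                          # generated samples

    # Discriminator update: label real as 1, generated as 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: push the discriminator to label fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```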
English Essays on Artificial Intelligence by Middle School Students

Six sample essays are provided below for the reader's reference.

Essay 1

Artificial Intelligence: The Future is Here!

Hi everyone! I'm a 7th grader and I've been really interested in artificial intelligence (AI) lately. It seems like something straight out of a sci-fi movie, but it's actually becoming a reality right before our eyes. Let me tell you what I've learned about this fascinating and mind-boggling technology.

First off, what exactly is AI? Basically, it refers to computer systems that can perform tasks that normally require human intelligence, like learning, reasoning, problem-solving, and even creativity. These systems use complex algorithms and massive amounts of data to "learn" and make decisions, just like our brains do. Crazy, right?

One of the most well-known examples of AI is virtual assistants like Siri, Alexa, and Google Assistant. These helpful little robots can understand our voice commands, look up information for us, set reminders, play music, and even crack jokes sometimes (though their sense of humor could use some work!). But AI goes way beyond virtual assistants.

Self-driving cars are another incredible application of AI. These vehicles use sensors, cameras, and advanced software to navigate roads, avoid obstacles, and make driving decisions without any human input. Companies like Tesla, Waymo, and Uber are racing to perfect this technology and make our roads safer. Imagine never having to worry about distracted or drunk drivers again!

AI is also transforming fields like healthcare and scientific research. Smart diagnostic systems can analyze medical images and data to detect diseases earlier and more accurately than human doctors. And AI algorithms can sift through massive datasets and spot patterns that lead to new scientific discoveries, from better drugs to cleaner energy solutions.

Personally, I can't wait to see what AI has in store for the future. Maybe one day, we'll have robot tutors that can customize lessons just for us based on how we learn best. Or AI assistants that can help with our homework and answer any question we have. Heck, AI might even be able to compose essays for us (though I doubt it could make them as entertaining as this one!).

Speaking of the future, some scientists are working on artificial general intelligence (AGI): AI systems that can match or exceed human intelligence across all domains. We're still probably decades away from AGI, but if we ever achieve that level of AI, it could lead to a technological singularity where progress happens at an unimaginable pace. Whole industries and ways of life could be transformed overnight.

As exciting as AGI sounds, it's also a little scary to think about. What if superintelligent AI systems become uncontrollable or decide humans are a threat? Will we become obsolete and get taken over by our own creations, like in the Terminator movies? I really hope the AI researchers are taking safety seriously and putting safeguards in place.

Well, those are just some of my thoughts on this wild and rapidly evolving field of AI. Whether you find it thrilling or terrifying, there's no denying that it's going to have a huge impact on all of our lives in the years ahead. We might as well buckle up and enjoy the ride into our AI-powered future!

Essay 2

The Fascinating World of Artificial Intelligence

Hey there! My name is Alex, and I'm a 13-year-old student who's super interested in technology, especially artificial intelligence (AI). AI is like really smart computer programs that can do amazing things like understand human language, recognize images and speech, and even beat human masters at complex games like chess and Go.

I first learned about AI a couple of years ago when I saw a video of this crazy robot that could walk around and do backflips and stuff. I thought that was so cool! Then I started reading about how AI can be used for all sorts of helpful tasks like assisting doctors in diagnosing diseases, controlling self-driving cars, and providing suggestions for movies or products you might like based on your interests.

At first, some of the technical details about AI went over my head. Like how AI systems use things called neural networks that are inspired by the human brain to process data in a way that mimics how we learn and make decisions. But the more I read, the more fascinated I became.

One type of AI I find particularly interesting is called machine learning. Basically, instead of being programmed with tons of rules like traditional software, machine learning systems can study data and examples to figure things out on their own. It's like how we learn language and skills as babies by observing patterns rather than following strict rules. With enough data to train on, machine learning can allow AI to do amazing things like understand natural human speech, translate between languages, and recognize faces, objects and even emotions in images and video.

Speaking of recognizing images, another awesome AI capability is computer vision. By analyzing digital images and videos, computer vision algorithms can automatically identify people, objects, text, scenery and activities. They can even track the movement of things over time. It's thanks to computer vision that AI can power so much modern facial recognition for security and photo tagging on social media. Self-driving cars also rely heavily on computer vision to detect other vehicles, pedestrians, traffic signals and road conditions.

While those are some of the current major applications, the possibilities for AI seem almost limitless going forward. I could see AI being used to help solve challenging problems like climate change by analyzing environmental data and testing potential solutions through simulation. AI tutors and personalized learning tools could transform education by adapting to each student's unique needs and learning style. AI might even help us communicate with animals by interpreting their vocalizations and behaviors!

Those are valid concerns, but I don't think we should be afraid of AI overall. We just need to make sure it's developed responsibly and its applications are guided by ethics around protecting people's privacy, preventing harm, and respecting human rights. With the proper care, AI can be an amazing tool to help solve humanity's greatest challenges.

Personally, I'd love to have a career in the field of AI once I'm older. It would be so rewarding to help advance this incredible technology in ways that improve people's lives. Maybe I could work on creating AI assistants to help people with disabilities, or AI systems to diagnose diseases earlier through analyzing medical scans and data. Or who knows, perhaps I could even contribute towards the development of artificial general intelligence (AGI) - an AI that can think, learn and reason just as flexibly as the human mind!

Even if I don't directly work in AI, I know it's a field that will increasingly intersect with almost every career and industry in the future. So it's definitely something all students like me should learn about so we can make the most of AI's potential. At the very least, we need to understand AI well enough to not be replaced by it, ha!

In all seriousness though, I don't think we should view AI as a threat to human jobs or humanity itself. Instead, we should see it as an amazing tool that can collaborate with us and empower us to achieve so much more. I mean, we've already used inventions like the printing press, steam engine, and computers to massively expand human knowledge, productivity and reach. AI will take that even further by amplifying our intelligence in incredible new ways.

AI may seem like something from science fiction, but the foundations for it are very real thanks to decades of work by computer scientists, mathematicians, cognitive scientists and others. I'm so excited to see where the latest advancements in machine learning, neural networks and other AI capabilities lead. From smarter digital assistants to new scientific and medical breakthroughs, I really think AI will help create a better world and push humanity forward.

Those are just my thoughts as a kid fascinated by AI and its vast potential! I'm sure there's still so much about this field that I have to learn. But I'm looking forward to it and can't wait to see what the future of artificial intelligence has in store. Hopefully you found my perspective interesting, even if it's not the most advanced take on the topic. Let me know if you have any other questions - I'm always eager to learn more!

Essay 3

The Awesome World of AI

Hi there! My name is Jamie and I'm a 7th grader at Central Middle School. Today I want to tell you all about artificial intelligence, or AI for short. AI is something that seems like science fiction, but it's very real and growing more important every day. Simply put, AI refers to machines that can think and learn like humans.

One type of AI that you've probably heard of is virtual assistants like Siri, Alexa, and Google Assistant. These helpful programs use AI to understand our voices and respond to our questions and commands. Let's say I ask Alexa "What's the weather going to be like this weekend?" Alexa will check the online weather forecasts, process that information, and give me a summary in plain English. Amazing!

AI assistants can do all sorts of useful tasks like setting reminders, converting units, playing music, and even telling jokes. My mom uses the AI on her smartphone to make grocery lists, find recipes, get directions, and more. She says AI assistants are like having a super smart personal assistant that never gets tired or takes a day off.

But AI can do way more than just be a virtual helper. It's being used in self-driving cars that can sense the road and navigate without a human driver. AI software can analyze medical scans and test results to help doctors diagnose diseases. And AI algorithms are used by websites like Netflix to recommend shows you'll probably enjoy based on your viewing history and preferences.

One of the most fascinating areas of AI is machine learning. This is where the AI software can study huge amounts of data to detect patterns and make predictions all by itself, just like how our brains learn over time from experience. For example, an AI could examine millions of past home sales to figure out the biggest factors that influence housing prices. Or it could analyze thousands of security camera videos to get really good at recognizing suspicious behavior.

Machine learning is how AI systems are trained to master skills like recognizing spoken words, identifying objects in images, translating between languages, and playing complex games like chess and Go. The more data the AI has to learn from, the smarter and more capable it becomes. This is letting AI take on challenges that were incredibly difficult to program using traditional software rules and logic.

There's also the challenge of making AI systems that are robust, unbiased, and aligned with human ethics and values. We need to make sure the AI doesn't learn harmful biases from the data it's trained on, and that it remains under meaningful human control. We wouldn't want an AI that was racist or sexist, or that could be misused by bad people to cause harm.

Some people worry that AI will eventually become super intelligent and turn against its human creators. But many AI researchers think we're nowhere close to that level of general AI yet, and that we'll have plenty of warning if it starts happening so we can shape AI positively. I think it's important not to be afraid of new technologies, but to learn about

Essay 4

The Brilliant World of AI

My name is Alex and I'm in the 8th grade. I'm really interested in technology, especially artificial intelligence, or AI for short. AI is all about creating computer systems that can perform tasks that normally require human intelligence. Things like learning, problem-solving, decision-making, recognizing speech and images, and so on. AI is becoming super advanced and it's going to change the world in amazing ways!

One of the coolest areas of AI is machine learning. This is where computers can learn and improve from data without being explicitly programmed. It's kind of like how we learn - through experiences. With machine learning, computers study huge amounts of data to find patterns and insights. They use algorithms to build models that allow them to make predictions or decisions. The more data they have, the better they get!

A common use of machine learning is for things like product recommendations on sites like Amazon and Netflix. Have you ever noticed how Netflix seems to know exactly what movies and shows you'll like? That's machine learning hard at work! The algorithms study your viewing history and preferences to personalize the recommendations just for you.

But machine learning can do way more than just product recs. It's being used for all kinds of amazing applications like detecting fraud, improving cyber security, forecasting weather, making medical diagnoses, and even composing music or artwork! The possibilities are mind-blowing.

Another fascinating area of AI is natural language processing, or NLP. This is what allows computers to understand, interpret and generate human language. Virtual assistants like Siri, Alexa and Google Assistant all use NLP to communicate with us. When you ask Alexa to add an item to your shopping list or to play your favorite music, it comprehends your speech and intent through NLP.

NLP is also what powers real-time translation apps and software. You know how on Google Translate you can have whole conversations translated instantly across languages? That's next-level NLP at work! The technology is analyzing the languages, context and even things like idioms and slang to produce smooth, natural translations. It's like real-life universal translators from science fiction!

Computer vision is another awesome application of AI that allows machines to identify and process images and videos just like humans can. It combines machine learning with understanding the visual world. Computer vision already helps power face recognition for tagging friends in pics on social media. But it also has way bigger uses like aiding self-driving cars to "see" the road, assisting doctors to diagnose diseases from scan images, and tons of applications for security and surveillance.

Speaking of self-driving cars, they simply wouldn't be possible without AI! Autonomous vehicles rely on multiple AI capabilities like computer vision, sensor data processing, navigation, path planning and decision making. There's no way conventional programming could account for the infinite number of potential scenarios a self-driving car could encounter on the roads. But with advanced AI systems, they can dynamically analyze situations and make smart decisions in real time while driving.

AI is also bringing huge improvements to areas like robotics, manufacturing, logistics and more through machine learning, planning and perception. Robots can be trained using AI to intelligently coordinate and carry out complex physical tasks and processes. It allows systems to constantly adapt and optimize in ways old-school programming could never match.

What really excites me most about AI though, is the potential it has to help solve humanitarian issues and push forward scientific breakthroughs. There are already examples of AI being used for good in areas like:
Protecting the environment by monitoring deforestation, air and water pollution, wildlife populations etc.
Tackling hunger and food insecurity by optimizing crop sustainability and yields
Providing quality education for all through intelligent tutoring systems and adaptive learning
Advancing healthcare through drug discovery, treatment design, and preventive care
Mitigating climate change by modeling impacts and solutions

And those are just a few examples! With AI's incredible processing power, predictive capabilities and never-ending learning potential, I'm confident it will unlock solutions to our biggest global challenges that we can't even imagine yet.

But if we get it right, artificial intelligence will be one of the most transformative forces for good in human history! I can't wait to see how AI continues evolving and changing the world for the better as I get older. Maybe I'll even end up having a career developing these incredible technologies one day. For now though, I'll just keep learning everything I can about AI and spread the word about why it's so brilliant!

Essay 5

The Exciting World of Artificial Intelligence

Hi there! My name is Jamie, and I'm a student in middle school. Recently, I've become really interested in a fascinating topic called artificial intelligence, or AI for short. Let me tell you all about it!

AI is like having a super-smart robot friend that can help you with all sorts of things. It's a technology that allows machines to think and learn like humans do. Isn't that amazing? These machines, called AI systems, can process information, recognize patterns, make decisions, and even come up with creative ideas, just like our brains do, but way faster and more efficiently!

One of the coolest things about AI is that it can learn from experience, just like we do. For example, if you show an AI system a bunch of pictures of dogs, it can study those pictures and learn to recognize dogs in other images or even in real life. The more data and examples you give it, the better it gets at its task. It's like playing a game over and over until you master it, but for an AI, it happens much quicker!

AI has already made its way into our daily lives in so many ways. Have you ever used a virtual assistant like Siri or Alexa? Those are AI systems that can understand your voice commands and help you with tasks like setting alarms, getting weather updates, or even cracking jokes. Speaking of jokes, some AI systems are now so advanced that they can write stories, poems, and even funny one-liners!

But AI isn't just about fun and games; it's also being used to solve serious problems and make our lives better. For instance, AI can help doctors diagnose diseases more accurately by analyzing medical images and data. It can also help scientists study climate change and find ways to protect our environment. In fact, AI is being used in almost every field imaginable, from finance and transportation to education and entertainment.

Personally, I think AI is one of the most exciting technologies of our time. Just imagine having a robot tutor that can explain complex concepts in a way that's easy to understand, or a virtual friend that can play games with you and never gets bored. The possibilities are endless!

But what do you think about AI? Do you find it fascinating or a little bit scary? Maybe a mix of both? Either way, I encourage you to learn more about it because it's shaping the world we live in, and who knows, you might even end up working with AI systems in the future!

Well, that's all from me for now. I've gotta run and catch up on my favorite AI-generated cartoon series. Until next time, stay curious and keep exploring the amazing world of technology!

Essay 6

Artificial Intelligence: The Future is Here

Have you ever wondered what the future will be like? I think about it a lot. Will we have flying cars and jet packs? Will robots do all our chores and homework for us? The idea of advanced technology has always fascinated me, especially artificial intelligence, or AI.

AI is basically computer software that can think and learn kind of like a human brain. It can look at data, see patterns, and make decisions without being directly programmed for every situation. AI is used in lots of things we interact with every day like Google searches, Siri and Alexa voice assistants, and even Netflix movie recommendations.

But AI is going to be so much more than that. Scientists are working on making AI that can drive cars, diagnose diseases, create art and music, and even tutor students better than human teachers! Just imagine an AI math tutor that could look at how you are solving problems and give you customized help and practice for the areas you are struggling with most. How cool would that be?

Some people are worried that advanced AI could become smarter than humans and take over the world like in the Terminator movies. But most experts say we are still very far away from anything like that. Current AI is extremely good at specific narrow tasks, but it can't reason about the world like a human can.
Example sentences with "hardware"
1. BDHHI's newest brand, K2, is a commercial hardware line of door hardware and exit devices.
2. Strive to optimize the software and hardware environment for project construction.
3. It is more expensive to upgrade hardware than its software counterpart.
4. Farther down the road is the Fuzhong hardware store and furniture wholesaler.
5. H.264 HD video is hardware-decoded via the GPU.
6. Some prosectors actually use pruning shears from a hardware store, which are much less expensive.
7. Design of the Hardware for the Embedded Communication Controller Chip MPC850
8. Host switching in case of hardware failure.
9. Old hardware companies want a slice of the software sashimi.
Research on a Reconfigurable Snake-like Robot
...the characteristics of the friction coefficient with the ground. Based on the above idea, the prototype
has regular stripes cut into the bottom surface of the mechanism to increase the ratio of normal to tangential friction coefficients.

Design of the flexible connection unit
Realization of the motions

A snake's locomotion mode is chosen selectively, because each mode is usually highly effective for a particular environment. Based on the planar and spatial kinematic models of the snake robot, the prototype realizes four typical locomotion modes of natural snakes, shown in panels (a) to (d) of the figure.

Figure: Realization of the typical locomotion modes of the snake robot (panels (a) to (d)).
(a) Serpentine motion. The serpentine motion of the snake robot (panel (a)) is a locomotion mode of comparatively high efficiency; its motion ... is produced by the twisting action between the units. Representative mechanisms of this type at present include the snake mechanism with flexible joint units and the snake robot mechanism composed of two-degree-of-freedom modules [#,(]. A modular reconfigurable robot is composed of many modules; these modules ...

... consists mainly of a fixed plate, an intelligent control unit, a movable plate, a snake-skin-imitating bottom surface, and a connecting plate. The fixed plate, the movable plate, and the connecting plate are made of aluminium alloy. The intelligent control unit consists of a control board and a DC servo motor; the whole unit is mounted on the fixed plate, and the movable ...

... when the wave propagation frequency is twice the wave propagation frequency in the horizontal plane, the speed of the lateral motion and the speed of the tangential motion are essentially equal, and the maximum speed reaches !+!# m/s.
Conclusions

The novel reconfigurable snake robot mechanism proposed in this paper has flexible connecting links that can adapt to changes in ground shape, and a bottom surface whose friction characteristics resemble those of a snake's ventral scales. It is manually reconfigurable: when the single-degree-of-freedom joints are connected with their axes mutually parallel, the mechanism performs planar motion; when the single-degree-of-freedom joints are connected with their axes alternately perpendicular, the resulting snake robot has two-degree-of-freedom joints and can move in three-dimensional space. Planar and spatial kinematic models of the snake robot were established, and planar serpentine motion, rectilinear motion, and concertina motion, as well as spatial sidewinding, were realized.
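The paper's own kinematic models are not reproduced in this excerpt. As an illustration of how a serpentine gait is typically commanded on such a chain of single-degree-of-freedom joints, here is a standard serpenoid-curve joint schedule in the style of Hirose; all parameter values are illustrative, not the prototype's:

```python
# Serpenoid-style joint schedule for serpentine locomotion: each joint follows
# a sine wave with a fixed phase lag to its neighbor, producing a body wave.
import math

N = 8                       # number of single-DOF joints (assumed)
A = math.radians(40)        # winding amplitude (assumed)
beta = 2 * math.pi / N      # phase lag between adjacent joints (one body wave)
omega = 2 * math.pi * 0.5   # temporal frequency, rad/s (assumed)
gamma = 0.0                 # turning bias; a nonzero value steers the robot

def joint_angles(t):
    """Commanded angle of each joint at time t for the serpentine gait."""
    return [A * math.sin(omega * t + i * beta) + gamma for i in range(N)]

for t in (0.0, 0.5, 1.0):
    print(t, [round(math.degrees(q), 1) for q in joint_angles(t)])
```

Sidewinding, as described in the results above, is commonly obtained by driving the horizontal and vertical joint sets with two such waves at different frequencies or phases.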
3GPP TS 36.331 V13.2.0 (2016-06)
Technical Specification
3rd Generation Partnership Project;
Technical Specification Group Radio Access Network;
Evolved Universal Terrestrial Radio Access (E-UTRA);
Radio Resource Control (RRC);
Protocol specification
(Release 13)

The present document has been developed within the 3rd Generation Partnership Project (3GPP™) and may be further elaborated for the purposes of 3GPP. The present document has not been subject to any approval process by the 3GPP Organizational Partners and shall not be implemented. This Specification is provided for future development work within 3GPP only. The Organizational Partners accept no liability for any use of this Specification. Specifications and reports for implementation of the 3GPP™ system should be obtained via the 3GPP Organizational Partners' Publications Offices.

Keywords
UMTS, radio

3GPP
Postal address
3GPP support office address
650 Route des Lucioles - Sophia Antipolis
Valbonne - FRANCE
Tel.: +33 4 92 94 42 00 Fax: +33 4 93 65 47 16
Internet

Copyright Notification
No part may be reproduced except as authorized by written permission. The copyright and the foregoing restriction extend to reproduction in all media.
© 2016, 3GPP Organizational Partners (ARIB, ATIS, CCSA, ETSI, TSDSI, TTA, TTC). All rights reserved.
UMTS™ is a Trade Mark of ETSI registered for the benefit of its members
3GPP™ is a Trade Mark of ETSI registered for the benefit of its Members and of the 3GPP Organizational Partners
LTE™ is a Trade Mark of ETSI currently being registered for the benefit of its Members and of the 3GPP Organizational Partners
GSM® and the GSM logo are registered and owned by the GSM Association
Bluetooth® is a Trade Mark of the Bluetooth SIG registered for the benefit of its members

Contents
Foreword
1 Scope
2 References
3 Definitions, symbols and abbreviations
3.1 Definitions
3.2 Abbreviations
4 General
4.1 Introduction
4.2 Architecture
4.2.1 UE states and state transitions including inter RAT
4.2.2 Signalling radio bearers
4.3 Services
4.3.1 Services provided to upper layers
4.3.2 Services expected from lower layers
4.4 Functions
5 Procedures
5.1 General
5.1.1 Introduction
5.1.2 General requirements
5.2 System information
5.2.1 Introduction
5.2.1.1 General
5.2.1.2 Scheduling
5.2.1.2a Scheduling for NB-IoT
5.2.1.3 System information validity and notification of changes
5.2.1.4 Indication of ETWS notification
5.2.1.5 Indication of CMAS notification
5.2.1.6 Notification of EAB parameters change
5.2.1.7 Access Barring parameters change in NB-IoT
5.2.2 System information acquisition
5.2.2.1 General
5.2.2.2 Initiation
5.2.2.3 System information required by the UE
5.2.2.4 System information acquisition by the UE
5.2.2.5 Essential system information missing
5.2.2.6 Actions upon reception of the MasterInformationBlock message
5.2.2.7 Actions upon reception of the SystemInformationBlockType1 message
5.2.2.8 Actions upon reception of SystemInformation messages
5.2.2.9 Actions upon reception of SystemInformationBlockType2
5.2.2.10 Actions upon reception of SystemInformationBlockType3
5.2.2.11 Actions upon reception of SystemInformationBlockType4
5.2.2.12 Actions upon reception of SystemInformationBlockType5
5.2.2.13 Actions upon reception of SystemInformationBlockType6
5.2.2.14 Actions upon reception of SystemInformationBlockType7
5.2.2.15 Actions upon reception of SystemInformationBlockType8
5.2.2.16 Actions upon reception of SystemInformationBlockType9
5.2.2.17 Actions upon reception of SystemInformationBlockType10
5.2.2.18 Actions upon reception of SystemInformationBlockType11
5.2.2.19 Actions upon reception of SystemInformationBlockType12
5.2.2.20 Actions upon reception of SystemInformationBlockType13
5.2.2.21 Actions upon reception of SystemInformationBlockType14
5.2.2.22 Actions upon reception of SystemInformationBlockType15
5.2.2.23 Actions upon reception of SystemInformationBlockType16
5.2.2.24 Actions upon reception of SystemInformationBlockType17
5.2.2.25 Actions upon reception of SystemInformationBlockType18
5.2.2.26 Actions upon reception of SystemInformationBlockType19
5.2.3 Acquisition of an SI message
5.2.3a Acquisition of an SI message by BL UE or UE in CE or a NB-IoT UE
5.3 Connection control
5.3.1 Introduction
5.3.1.1 RRC connection control
5.3.1.2 Security
5.3.1.2a RN security
5.3.1.3 Connected mode mobility
5.3.1.4 Connection control in NB-IoT
5.3.2 Paging
5.3.2.1 General
5.3.2.2 Initiation
5.3.2.3 Reception of the Paging message by the UE
5.3.3 RRC connection establishment
5.3.3.1 General
5.3.3.1a Conditions for establishing RRC Connection for sidelink communication/ discovery
5.3.3.2 Initiation
5.3.3.3 Actions related to transmission of RRCConnectionRequest message
5.3.3.3a Actions related to transmission of RRCConnectionResumeRequest message
5.3.3.4 Reception of the RRCConnectionSetup by the UE
5.3.3.4a Reception of the RRCConnectionResume by the UE
5.3.3.5 Cell re-selection while T300, T302, T303, T305, T306, or T308 is running
5.3.3.6 T300 expiry
5.3.3.7 T302, T303, T305, T306, or T308 expiry or stop
5.3.3.8 Reception of the RRCConnectionReject by the UE
5.3.3.9 Abortion of RRC connection establishment
5.3.3.10 Handling of SSAC related parameters
5.3.3.11 Access barring check
5.3.3.12 EAB check
5.3.3.13 Access barring check for ACDC
5.3.3.14 Access Barring check for NB-IoT
5.3.4 Initial security activation
5.3.4.1 General
5.3.4.2 Initiation
5.3.4.3 Reception of the SecurityModeCommand by the UE
5.3.5 RRC connection reconfiguration
5.3.5.1 General
5.3.5.2 Initiation
5.3.5.3 Reception of an RRCConnectionReconfiguration not including the mobilityControlInfo by the UE
5.3.5.4 Reception of an RRCConnectionReconfiguration including the mobilityControlInfo by the UE (handover)
5.3.5.5 Reconfiguration failure
5.3.5.6 T304 expiry (handover failure)
5.3.5.7 Void
5.3.5.7a T307 expiry (SCG change failure)
5.3.5.8 Radio Configuration involving full configuration option
5.3.6 Counter check
5.3.6.1 General
5.3.6.2 Initiation
5.3.6.3 Reception of the CounterCheck message by the UE
5.3.7 RRC connection re-establishment
5.3.7.1 General
5.3.7.2 Initiation
5.3.7.3 Actions following cell selection while T311 is running
5.3.7.4 Actions related to transmission of RRCConnectionReestablishmentRequest message
5.3.7.5 Reception of the RRCConnectionReestablishment by the UE
5.3.7.6 T311 expiry
5.3.7.7 T301 expiry or selected cell no longer suitable
5.3.7.8 Reception of RRCConnectionReestablishmentReject by the UE
5.3.8 RRC connection release
5.3.8.1 General
5.3.8.2 Initiation
5.3.8.3 Reception of the RRCConnectionRelease by the UE
5.3.8.4 T320 expiry
5.3.9 RRC connection release requested by upper layers
5.3.9.1 General
5.3.9.2 Initiation
5.3.10 Radio resource configuration
5.3.10.0 General
5.3.10.1 SRB addition/ modification
5.3.10.2 DRB release
5.3.10.3 DRB addition/ modification
5.3.10.3a1 DC specific DRB addition or reconfiguration
5.3.10.3a2 LWA specific DRB addition or reconfiguration
5.3.10.3a3 LWIP specific DRB addition or reconfiguration
5.3.10.3a SCell release
5.3.10.3b SCell addition/ modification
5.3.10.3c PSCell addition or modification
5.3.10.4 MAC main reconfiguration
5.3.10.5 Semi-persistent scheduling reconfiguration
5.3.10.6 Physical channel reconfiguration
5.3.10.7 Radio Link Failure Timers and Constants reconfiguration
5.3.10.8 Time domain measurement resource restriction for serving cell
5.3.10.9 Other configuration
5.3.10.10 SCG reconfiguration
5.3.10.11 SCG dedicated resource configuration
5.3.10.12 Reconfiguration SCG or split DRB by drb-ToAddModList
5.3.10.13 Neighbour cell information reconfiguration
5.3.10.14 Void
5.3.10.15 Sidelink dedicated configuration
5.3.10.16 T370 expiry
5.3.11 Radio link failure related actions
5.3.11.1 Detection of physical layer problems in RRC_CONNECTED
5.3.11.2 Recovery of physical layer problems
5.3.11.3 Detection of radio link failure
5.3.12 UE actions upon leaving RRC_CONNECTED
5.3.13 UE actions upon PUCCH/ SRS release request
5.3.14 Proximity indication
5.3.14.1 General
5.3.14.2 Initiation
5.3.14.3 Actions related to transmission of ProximityIndication message
5.3.15 Void
5.4 Inter-RAT mobility
5.4.1 Introduction
5.4.2 Handover to E-UTRA
5.4.2.1 General
5.4.2.2 Initiation
5.4.2.3 Reception of the RRCConnectionReconfiguration by the UE
5.4.2.4 Reconfiguration failure
5.4.2.5 T304 expiry (handover to E-UTRA failure)
5.4.3 Mobility from E-UTRA
5.4.3.1 General
5.4.3.2 Initiation
5.4.3.3 Reception of the MobilityFromEUTRACommand by the UE
5.4.3.4 Successful completion of the mobility from E-UTRA
5.4.3.5 Mobility from E-UTRA failure
5.4.4 Handover from E-UTRA preparation request (CDMA2000)
5.4.4.1 General
5.4.4.2 Initiation
5.4.4.3 Reception of the HandoverFromEUTRAPreparationRequest by the UE
5.4.5 UL handover preparation transfer (CDMA2000)
5.4.5.1 General
5.4.5.2 Initiation
5.4.5.3 Actions related to transmission of the ULHandoverPreparationTransfer message
5.4.5.4 Failure to deliver the ULHandoverPreparationTransfer message
5.4.6 Inter-RAT cell change order to E-UTRAN
5.4.6.1 General
5.4.6.2 Initiation
5.4.6.3 UE fails to complete an inter-RAT cell change order
5.5 Measurements
5.5.1 Introduction
5.5.2 Measurement configuration
5.5.2.1 General
5.5.2.2 Measurement identity removal
5.5.2.2a Measurement identity autonomous removal
5.5.2.3 Measurement identity addition/ modification
5.5.2.4 Measurement object removal
5.5.2.5 Measurement object addition/ modification
5.5.2.6 Reporting configuration removal
5.5.2.7 Reporting configuration addition/ modification
5.5.2.8 Quantity configuration
5.5.2.9 Measurement gap configuration
5.5.2.10 Discovery signals measurement timing configuration
5.5.2.11 RSSI measurement timing configuration
5.5.3 Performing measurements
5.5.3.1 General
5.5.3.2 Layer 3 filtering
5.5.4 Measurement report triggering
5.5.4.1 General
5.5.4.2 Event A1 (Serving becomes better than threshold)
5.5.4.3 Event A2 (Serving becomes worse than threshold)
(136)5.5.4.4Event A3 (Neighbour becomes offset better than PCell/ PSCell) (136)5.5.4.5Event A4 (Neighbour becomes better than threshold) (137)5.5.4.6Event A5 (PCell/ PSCell becomes worse than threshold1 and neighbour becomes better thanthreshold2) (138)5.5.4.6a Event A6 (Neighbour becomes offset better than SCell) (139)5.5.4.7Event B1 (Inter RAT neighbour becomes better than threshold) (139)5.5.4.8Event B2 (PCell becomes worse than threshold1 and inter RAT neighbour becomes better thanthreshold2) (140)5.5.4.9Event C1 (CSI-RS resource becomes better than threshold) (141)5.5.4.10Event C2 (CSI-RS resource becomes offset better than reference CSI-RS resource) (141)5.5.4.11Event W1 (WLAN becomes better than a threshold) (142)5.5.4.12Event W2 (All WLAN inside WLAN mobility set becomes worse than threshold1 and a WLANoutside WLAN mobility set becomes better than threshold2) (142)5.5.4.13Event W3 (All WLAN inside WLAN mobility set becomes worse than a threshold) (143)5.5.5Measurement reporting (144)5.5.6Measurement related actions (148)5.5.6.1Actions upon handover and re-establishment (148)5.5.6.2Speed dependant scaling of measurement related parameters (149)5.5.7Inter-frequency RSTD measurement indication (149)5.5.7.1General (149)5.5.7.2Initiation (150)5.5.7.3Actions related to transmission of InterFreqRSTDMeasurementIndication message (150)5.6Other (150)5.6.0General (150)5.6.1DL information transfer (151)5.6.1.1General (151)5.6.1.2Initiation (151)5.6.1.3Reception of the DLInformationTransfer by the UE (151)5.6.2UL information transfer (151)5.6.2.1General (151)5.6.2.2Initiation (151)5.6.2.3Actions related to transmission of ULInformationTransfer message (152)5.6.2.4Failure to deliver ULInformationTransfer message (152)5.6.3UE capability transfer (152)5.6.3.1General (152)5.6.3.2Initiation (153)5.6.3.3Reception of the UECapabilityEnquiry by the UE (153)5.6.4CSFB to 1x Parameter transfer (157)5.6.4.1General (157)5.6.4.2Initiation (157)5.6.4.3Actions related to transmission of CSFBParametersRequestCDMA2000 message (157)5.6.4.4Reception of the CSFBParametersResponseCDMA2000 message (157)5.6.5UE Information (158)5.6.5.1General (158)5.6.5.2Initiation (158)5.6.5.3Reception of the UEInformationRequest message (158)5.6.6 Logged Measurement Configuration (159)5.6.6.1General (159)5.6.6.2Initiation (160)5.6.6.3Reception of the LoggedMeasurementConfiguration by the UE (160)5.6.6.4T330 expiry (160)5.6.7 Release of Logged Measurement Configuration (160)5.6.7.1General (160)5.6.7.2Initiation (160)5.6.8 Measurements logging (161)5.6.8.1General (161)5.6.8.2Initiation (161)5.6.9In-device coexistence indication (163)5.6.9.1General (163)5.6.9.2Initiation (164)5.6.9.3Actions related to transmission of InDeviceCoexIndication message (164)5.6.10UE Assistance Information (165)5.6.10.1General (165)5.6.10.2Initiation (166)5.6.10.3Actions related to transmission of UEAssistanceInformation message (166)5.6.11 Mobility history information (166)5.6.11.1General (166)5.6.11.2Initiation (166)5.6.12RAN-assisted WLAN interworking (167)5.6.12.1General (167)5.6.12.2Dedicated WLAN offload configuration (167)5.6.12.3WLAN offload RAN evaluation (167)5.6.12.4T350 expiry or stop (167)5.6.12.5Cell selection/ re-selection while T350 is running (168)5.6.13SCG failure information (168)5.6.13.1General (168)5.6.13.2Initiation (168)5.6.13.3Actions related to transmission of SCGFailureInformation message (168)5.6.14LTE-WLAN Aggregation (169)5.6.14.1Introduction (169)5.6.14.2Reception of LWA configuration (169)5.6.14.3Release of LWA configuration 
(170)5.6.15WLAN connection management (170)5.6.15.1Introduction (170)5.6.15.2WLAN connection status reporting (170)5.6.15.2.1General (170)5.6.15.2.2Initiation (171)5.6.15.2.3Actions related to transmission of WLANConnectionStatusReport message (171)5.6.15.3T351 Expiry (WLAN connection attempt timeout) (171)5.6.15.4WLAN status monitoring (171)5.6.16RAN controlled LTE-WLAN interworking (172)5.6.16.1General (172)5.6.16.2WLAN traffic steering command (172)5.6.17LTE-WLAN aggregation with IPsec tunnel (173)5.6.17.1General (173)5.7Generic error handling (174)5.7.1General (174)5.7.2ASN.1 violation or encoding error (174)5.7.3Field set to a not comprehended value (174)5.7.4Mandatory field missing (174)5.7.5Not comprehended field (176)5.8MBMS (176)5.8.1Introduction (176)5.8.1.1General (176)5.8.1.2Scheduling (176)5.8.1.3MCCH information validity and notification of changes (176)5.8.2MCCH information acquisition (178)5.8.2.1General (178)5.8.2.2Initiation (178)5.8.2.3MCCH information acquisition by the UE (178)5.8.2.4Actions upon reception of the MBSFNAreaConfiguration message (178)5.8.2.5Actions upon reception of the MBMSCountingRequest message (179)5.8.3MBMS PTM radio bearer configuration (179)5.8.3.1General (179)5.8.3.2Initiation (179)5.8.3.3MRB establishment (179)5.8.3.4MRB release (179)5.8.4MBMS Counting Procedure (179)5.8.4.1General (179)5.8.4.2Initiation (180)5.8.4.3Reception of the MBMSCountingRequest message by the UE (180)5.8.5MBMS interest indication (181)5.8.5.1General (181)5.8.5.2Initiation (181)5.8.5.3Determine MBMS frequencies of interest (182)5.8.5.4Actions related to transmission of MBMSInterestIndication message (183)5.8a SC-PTM (183)5.8a.1Introduction (183)5.8a.1.1General (183)5.8a.1.2SC-MCCH scheduling (183)5.8a.1.3SC-MCCH information validity and notification of changes (183)5.8a.1.4Procedures (184)5.8a.2SC-MCCH information acquisition (184)5.8a.2.1General (184)5.8a.2.2Initiation (184)5.8a.2.3SC-MCCH information acquisition by the UE (184)5.8a.2.4Actions upon reception of the SCPTMConfiguration message (185)5.8a.3SC-PTM radio bearer configuration (185)5.8a.3.1General (185)5.8a.3.2Initiation (185)5.8a.3.3SC-MRB establishment (185)5.8a.3.4SC-MRB release (185)5.9RN procedures (186)5.9.1RN reconfiguration (186)5.9.1.1General (186)5.9.1.2Initiation (186)5.9.1.3Reception of the RNReconfiguration by the RN (186)5.10Sidelink (186)5.10.1Introduction (186)5.10.1a Conditions for sidelink communication operation (187)5.10.2Sidelink UE information (188)5.10.2.1General (188)5.10.2.2Initiation (189)5.10.2.3Actions related to transmission of SidelinkUEInformation message (193)5.10.3Sidelink communication monitoring (195)5.10.6Sidelink discovery announcement (198)5.10.6a Sidelink discovery announcement pool selection (201)5.10.6b Sidelink discovery announcement reference carrier selection (201)5.10.7Sidelink synchronisation information transmission (202)5.10.7.1General (202)5.10.7.2Initiation (203)5.10.7.3Transmission of SLSS (204)5.10.7.4Transmission of MasterInformationBlock-SL message (205)5.10.7.5Void (206)5.10.8Sidelink synchronisation reference (206)5.10.8.1General (206)5.10.8.2Selection and reselection of synchronisation reference UE (SyncRef UE) (206)5.10.9Sidelink common control information (207)5.10.9.1General (207)5.10.9.2Actions related to reception of MasterInformationBlock-SL message (207)5.10.10Sidelink relay UE operation (207)5.10.10.1General (207)5.10.10.2AS-conditions for relay related sidelink communication transmission by sidelink relay UE (207)5.10.10.3AS-conditions for relay 
PS related sidelink discovery transmission by sidelink relay UE (208)5.10.10.4Sidelink relay UE threshold conditions (208)5.10.11Sidelink remote UE operation (208)5.10.11.1General (208)5.10.11.2AS-conditions for relay related sidelink communication transmission by sidelink remote UE (208)5.10.11.3AS-conditions for relay PS related sidelink discovery transmission by sidelink remote UE (209)5.10.11.4Selection and reselection of sidelink relay UE (209)5.10.11.5Sidelink remote UE threshold conditions (210)6Protocol data units, formats and parameters (tabular & ASN.1) (210)6.1General (210)6.2RRC messages (212)6.2.1General message structure (212)–EUTRA-RRC-Definitions (212)–BCCH-BCH-Message (212)–BCCH-DL-SCH-Message (212)–BCCH-DL-SCH-Message-BR (213)–MCCH-Message (213)–PCCH-Message (213)–DL-CCCH-Message (214)–DL-DCCH-Message (214)–UL-CCCH-Message (214)–UL-DCCH-Message (215)–SC-MCCH-Message (215)6.2.2Message definitions (216)–CounterCheck (216)–CounterCheckResponse (217)–CSFBParametersRequestCDMA2000 (217)–CSFBParametersResponseCDMA2000 (218)–DLInformationTransfer (218)–HandoverFromEUTRAPreparationRequest (CDMA2000) (219)–InDeviceCoexIndication (220)–InterFreqRSTDMeasurementIndication (222)–LoggedMeasurementConfiguration (223)–MasterInformationBlock (225)–MBMSCountingRequest (226)–MBMSCountingResponse (226)–MBMSInterestIndication (227)–MBSFNAreaConfiguration (228)–MeasurementReport (228)–MobilityFromEUTRACommand (229)–Paging (232)–ProximityIndication (233)–RNReconfiguration (234)–RNReconfigurationComplete (234)–RRCConnectionReconfiguration (235)–RRCConnectionReconfigurationComplete (240)–RRCConnectionReestablishment (241)–RRCConnectionReestablishmentComplete (241)–RRCConnectionReestablishmentReject (242)–RRCConnectionReestablishmentRequest (243)–RRCConnectionReject (243)–RRCConnectionRelease (244)–RRCConnectionResume (248)–RRCConnectionResumeComplete (249)–RRCConnectionResumeRequest (250)–RRCConnectionRequest (250)–RRCConnectionSetup (251)–RRCConnectionSetupComplete (252)–SCGFailureInformation (253)–SCPTMConfiguration (254)–SecurityModeCommand (255)–SecurityModeComplete (255)–SecurityModeFailure (256)–SidelinkUEInformation (256)–SystemInformation (258)–SystemInformationBlockType1 (259)–UEAssistanceInformation (264)–UECapabilityEnquiry (265)–UECapabilityInformation (266)–UEInformationRequest (267)–UEInformationResponse (267)–ULHandoverPreparationTransfer (CDMA2000) (273)–ULInformationTransfer (274)–WLANConnectionStatusReport (274)6.3RRC information elements (275)6.3.1System information blocks (275)–SystemInformationBlockType2 (275)–SystemInformationBlockType3 (279)–SystemInformationBlockType4 (282)–SystemInformationBlockType5 (283)–SystemInformationBlockType6 (287)–SystemInformationBlockType7 (289)–SystemInformationBlockType8 (290)–SystemInformationBlockType9 (295)–SystemInformationBlockType10 (295)–SystemInformationBlockType11 (296)–SystemInformationBlockType12 (297)–SystemInformationBlockType13 (297)–SystemInformationBlockType14 (298)–SystemInformationBlockType15 (298)–SystemInformationBlockType16 (299)–SystemInformationBlockType17 (300)–SystemInformationBlockType18 (301)–SystemInformationBlockType19 (301)–SystemInformationBlockType20 (304)6.3.2Radio resource control information elements (304)–AntennaInfo (304)–AntennaInfoUL (306)–CQI-ReportConfig (307)–CQI-ReportPeriodicProcExtId (314)–CrossCarrierSchedulingConfig (314)–CSI-IM-Config (315)–CSI-IM-ConfigId (315)–CSI-RS-Config (317)–CSI-RS-ConfigEMIMO (318)–CSI-RS-ConfigNZP (319)–CSI-RS-ConfigNZPId (320)–CSI-RS-ConfigZP (321)–CSI-RS-ConfigZPId 
(321)–DMRS-Config (321)–DRB-Identity (322)–EPDCCH-Config (322)–EIMTA-MainConfig (324)–LogicalChannelConfig (325)–LWA-Configuration (326)–LWIP-Configuration (326)–RCLWI-Configuration (327)–MAC-MainConfig (327)–P-C-AndCBSR (332)–PDCCH-ConfigSCell (333)–PDCP-Config (334)–PDSCH-Config (337)–PDSCH-RE-MappingQCL-ConfigId (339)–PHICH-Config (339)–PhysicalConfigDedicated (339)–P-Max (344)–PRACH-Config (344)–PresenceAntennaPort1 (346)–PUCCH-Config (347)–PUSCH-Config (351)–RACH-ConfigCommon (355)–RACH-ConfigDedicated (357)–RadioResourceConfigCommon (358)–RadioResourceConfigDedicated (362)–RLC-Config (367)–RLF-TimersAndConstants (369)–RN-SubframeConfig (370)–SchedulingRequestConfig (371)–SoundingRS-UL-Config (372)–SPS-Config (375)–TDD-Config (376)–TimeAlignmentTimer (377)–TPC-PDCCH-Config (377)–TunnelConfigLWIP (378)–UplinkPowerControl (379)–WLAN-Id-List (382)–WLAN-MobilityConfig (382)6.3.3Security control information elements (382)–NextHopChainingCount (382)–SecurityAlgorithmConfig (383)–ShortMAC-I (383)6.3.4Mobility control information elements (383)–AdditionalSpectrumEmission (383)–ARFCN-ValueCDMA2000 (383)–ARFCN-ValueEUTRA (384)–ARFCN-ValueGERAN (384)–ARFCN-ValueUTRA (384)–BandclassCDMA2000 (384)–BandIndicatorGERAN (385)–CarrierFreqCDMA2000 (385)–CarrierFreqGERAN (385)–CellIndexList (387)–CellReselectionPriority (387)–CellSelectionInfoCE (387)–CellReselectionSubPriority (388)–CSFB-RegistrationParam1XRTT (388)–CellGlobalIdEUTRA (389)–CellGlobalIdUTRA (389)–CellGlobalIdGERAN (390)–CellGlobalIdCDMA2000 (390)–CellSelectionInfoNFreq (391)–CSG-Identity (391)–FreqBandIndicator (391)–MobilityControlInfo (391)–MobilityParametersCDMA2000 (1xRTT) (393)–MobilityStateParameters (394)–MultiBandInfoList (394)–NS-PmaxList (394)–PhysCellId (395)–PhysCellIdRange (395)–PhysCellIdRangeUTRA-FDDList (395)–PhysCellIdCDMA2000 (396)–PhysCellIdGERAN (396)–PhysCellIdUTRA-FDD (396)–PhysCellIdUTRA-TDD (396)–PLMN-Identity (397)–PLMN-IdentityList3 (397)–PreRegistrationInfoHRPD (397)–Q-QualMin (398)–Q-RxLevMin (398)–Q-OffsetRange (398)–Q-OffsetRangeInterRAT (399)–ReselectionThreshold (399)–ReselectionThresholdQ (399)–SCellIndex (399)–ServCellIndex (400)–SpeedStateScaleFactors (400)–SystemInfoListGERAN (400)–SystemTimeInfoCDMA2000 (401)–TrackingAreaCode (401)–T-Reselection (402)–T-ReselectionEUTRA-CE (402)6.3.5Measurement information elements (402)–AllowedMeasBandwidth (402)–CSI-RSRP-Range (402)–Hysteresis (402)–LocationInfo (403)–MBSFN-RSRQ-Range (403)–MeasConfig (404)–MeasDS-Config (405)–MeasGapConfig (406)–MeasId (407)–MeasIdToAddModList (407)–MeasObjectCDMA2000 (408)–MeasObjectEUTRA (408)–MeasObjectGERAN (412)–MeasObjectId (412)–MeasObjectToAddModList (412)–MeasObjectUTRA (413)–ReportConfigEUTRA (422)–ReportConfigId (425)–ReportConfigInterRAT (425)–ReportConfigToAddModList (428)–ReportInterval (429)–RSRP-Range (429)–RSRQ-Range (430)–RSRQ-Type (430)–RS-SINR-Range (430)–RSSI-Range-r13 (431)–TimeToTrigger (431)–UL-DelayConfig (431)–WLAN-CarrierInfo (431)–WLAN-RSSI-Range (432)–WLAN-Status (432)6.3.6Other information elements (433)–AbsoluteTimeInfo (433)–AreaConfiguration (433)–C-RNTI (433)–DedicatedInfoCDMA2000 (434)–DedicatedInfoNAS (434)–FilterCoefficient (434)–LoggingDuration (434)–LoggingInterval (435)–MeasSubframePattern (435)–MMEC (435)–NeighCellConfig (435)–OtherConfig (436)–RAND-CDMA2000 (1xRTT) (437)–RAT-Type (437)–ResumeIdentity (437)–RRC-TransactionIdentifier (438)–S-TMSI (438)–TraceReference (438)–UE-CapabilityRAT-ContainerList (438)–UE-EUTRA-Capability (439)–UE-RadioPagingInfo (469)–UE-TimersAndConstants 
(469)–VisitedCellInfoList (470)–WLAN-OffloadConfig (470)6.3.7MBMS information elements (472)–MBMS-NotificationConfig (472)–MBMS-ServiceList (473)–MBSFN-AreaId (473)–MBSFN-AreaInfoList (473)–MBSFN-SubframeConfig (474)–PMCH-InfoList (475)6.3.7a SC-PTM information elements (476)–SC-MTCH-InfoList (476)–SCPTM-NeighbourCellList (478)6.3.8Sidelink information elements (478)–SL-CommConfig (478)–SL-CommResourcePool (479)–SL-CP-Len (480)–SL-DiscConfig (481)–SL-DiscResourcePool (483)–SL-DiscTxPowerInfo (485)–SL-GapConfig (485)。
Knowledge-Based Systems
Ultsch, A. & Korus, D. "Integration of Neural Networks with Knowledge-Based Systems", Proc. IEEE Int. Conf. Neural Networks, Perth, Australia, 1995.

Integration of Neural Networks with Knowledge-Based Systems

Alfred Ultsch, Dieter Korus
Department of Mathematics/Informatics, University of Marburg
Hans-Meerwein-Straße/Lahnberge, D-35032 Marburg, F. R. Germany
email: ultsch or korus@mathematik.uni-marburg.de
http://www.uni-marburg.de/~wina/

ABSTRACT
Existing prejudices of some Artificial Intelligence researchers against neural networks are hard to break. One of their most important arguments is that neural networks are not able to explain their decisions. Further, they claim that neural networks are not able to solve the variable binding problem for unification. We show in this paper that neural networks and knowledge-based systems need not be competitive, but are capable of complementing each other. The disadvantages of the one paradigm are the advantages of the other and vice versa. We show several ways to integrate both paradigms in the areas of explorative data analysis, knowledge acquisition, introspection, and unification. Our approach to such hybrid systems has been proven in real-world applications.

1. Introduction
The successful application of knowledge-based systems in different areas such as diagnosis, construction and planning shows the usefulness of a symbolic knowledge representation. However, this representation implies problems in processing data from natural processes. Normally such data are results of measurements and therefore have no straightforward kind of symbolic representation [1]. Knowledge-based systems often fall short in handling inconsistent and noisy data. It is also difficult to formalize knowledge in domains where 'a priori' rules are unknown. Often the performance in 'learning from examples' and 'dealing with untypical situations' (graceful degradation) is insufficient. The rules used by conventional expert systems are said to be able to represent complex concepts only approximately [4]. In such complex systems inconsistent and context-dependent rules (cases) may result in unacceptable errors. In addition, it is almost impossible for experts to describe their knowledge, which they acquired from many examples by experience, entirely in symbolic form [6].
State-of-the-art knowledge-based system technology is based on symbolic processing. An acknowledged shortcoming of current computational techniques is their brittleness, often arising from the inability of first-order logic to capture adequately the dynamics of a changing and incompletely known environment. An important property of knowledge stored in symbolic form is that it can be interpreted and communicated to experts. The limits of such an approach, however, become quite evident when sensor data or measurement data, for example from physical processes, are handled. Inconsistent data frequently force symbolic systems into an undefined state. Another hard problem in knowledge-based system design is the acquisition of knowledge. It is well known that it is almost impossible for an expert to describe his domain-specific knowledge entirely in the form of rules or other knowledge representation schemes. In addition, it is very difficult or even impossible to describe expertise acquired by experience.
Neural networks claim to avoid most of the disadvantages of knowledge-based systems described above. These systems, which rely on a distributed knowledge representation, are able to develop a concise representation of complex concepts.
It is possible to learn knowledge from experience directly [4]. Characteristic attributes of connectionist systems are the ability of generalization and graceful degradation; e.g., they are able to process inconsistent and noisy data. In addition, neural networks compute the most plausible output for each input. Neural networks, however, also have their disadvantages. It is difficult to provide an explanation of the behaviour of a neural network because of the distributed knowledge representation. Therefore expertise learned by neural networks is not available in a form that is intelligible for human beings as well as for knowledge-based systems. It seems to be difficult to describe or to interpret this kind of information. In knowledge-based systems, on the other hand, it is easy to describe and to verify the underlying concepts.

2. Integration of Neural Networks with Knowledge-Based Systems
Indications are that neural networks provide fault tolerance and noise resistance. They adapt to unstable and largely unknown environments as well. Their weakness lies in a reliance on data-intensive training algorithms, with little opportunity to integrate available, discrete knowledge. At present, neural networks are relatively successful in applications dealing with subsymbolic raw data, in particular if the data is noisy or inconsistent. Such subsymbolic-level processing seems to be appropriate for dealing with perception tasks and perhaps even with tasks that call for combined perception and cognition. Neural networks are able to learn structures of an input set without using a priori information. Unfortunately they cannot explain their behavior because a distributed representation of the knowledge is used. They can only tell about the knowledge by showing responses to a given input.
Both approaches to modelling brain-like information processing, knowledge-based systems and neural networks, are complementary in the sense that traditional knowledge-based systems are a top-down approach starting from high-level cognitive functions, whereas neural networks are a bottom-up approach on a biophysical basis of neurons and synapses. It is a matter of fact that the symbolic as well as the subsymbolic aspects of information processing are essential to systems dealing with real-world tasks. Integrating neural networks and knowledge-based systems is certainly a challenging task [10]. Beside these general considerations, several specific tasks have to be solved. The most important are - without claiming completeness:
Structure Detection by Collective Behavior: In the real world people continuously have to deal with raw and subsymbolic data, which is characterized by the property that one single element does not have a meaning (interpretation) of itself alone. The question is how to transform the subsymbolic data into a symbolic form. Unsupervised learning neural networks can adapt to structures inherent in the data. They exhibit the property to produce their structure during learning by the integration (overlay) of many case data. But they have the disadvantage that they cannot be interpreted by looking at the activity or weights of single neurons. Because of this we need tools to detect the structure in large neural networks.
Integrated Knowledge Acquisition: Knowledge acquisition is one of the biggest problems in artificial intelligence.
A knowledge-based system may therefore not be able to diagnose a case which an expert is able to. The question is how to extract experience from a set of examples for the use of knowledge-based systems. Under Integrated Knowledge Acquisition we understand subsymbolic approaches, i.e. the usage of neural networks, to gain symbolic knowledge. Neural networks can easily process subsymbolic raw data by handling noisy and inconsistent data. An intrinsic property of neural networks is, however, that no high-level knowledge can be identified in the trained neural network. The central problem for Integrated Knowledge Acquisition is therefore how to transform whatever a neural network has learned into a symbolic form.
Introspection: Under introspection we understand methods and techniques whereby a knowledge-based system observes its own behaviour and improves its performance. This approach can be realized using neural networks that observe the sequence of steps an expert system takes in the derivation of a conclusion. This is often called control knowledge. When the observed behaviour of the expert system is appropriately encoded, a neural network can learn how to avoid misleading paths and how to arrive faster at its conclusions.
Unification: One type of integrated reasoning is the realization of an important part of the reasoning process, the unification, using neural networks. Unification plays a central role in logic programming (e.g. in the language Prolog) and is also a central feature for the implementation of many knowledge-based systems. The idea of this approach is to realize the matching and unification part of the reasoning process in a suitable neural network.

3. Structure Detection by Collective Behavior
One of the neural network types we use for representing subsymbolic raw data in large distributed neural networks is the Self-Organizing Feature Map (SOFM) by Kohonen [5]. It has the ability to map a high-dimensional feature space onto a usually two-dimensional grid of neurons. The important feature of this mapping is that adjacent points in the data space are mapped onto adjacent neurons in the grid, conserving the distribution of the input data. In normal applications we use 64 by 64, 128 by 128 or 256 by 256 neurons. Due to the presentation of the input data in the learning phase, the SOFM adapts to the structure inherent in the data. On the map, neighbouring neurons form regions which correspond to similar input vectors. These neighbourhoods form disjoint regions, thus classifying the input vectors.
But looking at the learned SOFM as it is, one is not able to see much structure in the neural network, especially when processing a large amount of data with high dimensionality. In addition, automatic detection of the classification is difficult because the SOFM converges to an equal distribution of the neurons on the map. So a special visualization tool, the so-called "unified distance matrix methods", short U-matrix methods, was developed [19] to graphically visualize the structure of the SOFM in a three-dimensional landscape (fig. 1). The simplest U-matrix method is to calculate for each neuron the mean of the distances to its (at most) 8 neighbours and add this value as the height of the neuron in a third dimension; a minimal sketch of this computation is given below. Other methods, e.g., also consider the position of the reference vectors on the map.
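The following is a minimal sketch of the simplest U-matrix variant described above: given a trained SOFM weight grid, it computes for each neuron the mean Euclidean distance to its (at most) 8 grid neighbours. The 4x4 toy map and the use of NumPy are our own illustration, not part of the paper.

```python
import numpy as np

def u_matrix(weights: np.ndarray) -> np.ndarray:
    """U-matrix heights for a SOFM.

    weights: (rows, cols, dim) array of reference vectors.
    Returns a (rows, cols) array where each entry is the mean
    Euclidean distance from that neuron's reference vector to
    those of its (at most) 8 grid neighbours.
    """
    rows, cols, _ = weights.shape
    heights = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            dists = []
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == 0 and dc == 0:
                        continue
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        dists.append(np.linalg.norm(weights[r, c] - weights[nr, nc]))
            heights[r, c] = np.mean(dists)
    return heights

# Toy example: a 4x4 map over 3-dimensional data (illustrative only).
rng = np.random.default_rng(0)
toy_map = rng.normal(size=(4, 4, 3))
print(u_matrix(toy_map))
```

High "walls" in the returned grid separate clusters, while neurons sharing a low "valley" respond to similar inputs, matching the landscape interpretation in the text.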
Using a U-matrix method we get, with the help of interpolation and other visualisation techniques, a three-dimensional landscape with walls and valleys. Neurons which belong to the same valley are quite similar and may belong to the same class; walls separate different classes (fig. 1). Unlike in other classification algorithms, the number of expected classes need not be known a priori. Also, subclasses of larger classes can be detected. Single neurons in deep valleys indicate possible outliers. These visualizations are implemented together with a complete toolbox to show the interpolated U-matrices in three dimensions, with different interpolation methods, different colour tables, different perspectives, with clipping, tiled or single top view, the position of the reference vectors, the identification of single reference vectors to identify possible outliers or special cases, the drawing of class borders, labeling of clusters, and, in addition, single component maps, which show the distribution of a single feature on the SOFM. For example, using a data set containing blood analysis values from 20 patients (20 vectors with 11 real-valued components) selected from a set of 1500 patients [3], it turned out that the clustering corresponds nicely with the different patients' diagnoses.

4. Integrated Knowledge Acquisition
In the previous section we presented the combination of the Self-Organizing Feature Map (SOFM) by Kohonen [5] and the U-matrix methods [19] to detect structure in large neural networks with collective behaviour representing the structure of the input data. As a result we are able to classify the input data. To acquire knowledge out of this neuronal classification, we developed an inductive machine learning algorithm, called sig*.

Fig. 1. U-Matrix

Fuzzy logic, based on fuzzy set theory [20], opens the possibility to model and process vague knowledge in knowledge-based systems. This offers, for example, the chance to explain the decision-making process of human experts derived from vague or uncertain information. In addition, some problems of traditional knowledge-based systems, like dealing with exceptions in rule-based systems, can be solved by fuzzy logic. Further, because of the generalisation ability of neural networks, fuzzy theory is well suited to express the vague knowledge of learned neural networks. To take advantage of these properties we expanded our system by extracting membership functions out of neural networks, which are used to transfer the knowledge into fuzzy rules.

5. Neural Unification
We have investigated an approach that is close to the problem representation. The main idea is to use Kohonen's Self-Organizing Feature Maps (SOFM) [5] for the representation of the atoms and functors in a term. SOFM have the property that similar input data (in this case atoms and functors) are represented in a close neighborhood in the feature map (relating to their semantical context). For each atom, resp. functor, in the logical statement the input vector for the SOFM is generated as follows: each component of the feature vector represents the number of occurrences of the given atom/functor in a (sub-)term, whereby the number after the feature term refers to the arity of the term. The length of the vector is the number of possible (sub-)terms; a small sketch of this encoding follows.
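As an illustration of the occurrence-count encoding just described, here is a minimal sketch. The nested-tuple term representation and all helper names are our own illustrative choices, not prescribed by the paper.

```python
# Terms are modelled as nested tuples: ("f", arg1, ...) for a functor
# application, plain strings for atoms.

def subterms(term):
    """Enumerate a term and all of its sub-terms."""
    yield term
    if isinstance(term, tuple):
        for arg in term[1:]:
            yield from subterms(arg)

def symbol(term):
    """Name/arity key of the outermost atom or functor, e.g. 'f/2'."""
    if isinstance(term, tuple):
        return f"{term[0]}/{len(term) - 1}"
    return f"{term}/0"

def occurrences(sym, term):
    """Count how often atom/functor `sym` occurs within `term`."""
    return sum(1 for sub in subterms(term) if symbol(sub) == sym)

def encode(sym, possible_subterms):
    """Input vector for one atom/functor: one component per possible
    (sub-)term, counting the symbol's occurrences in that (sub-)term,
    so the vector length equals the number of possible (sub-)terms."""
    return [occurrences(sym, t) for t in possible_subterms]

# Toy example: the statement contains the single term f(a, g(a, b)).
term = ("f", "a", ("g", "a", "b"))
possible = list(dict.fromkeys(subterms(term)))  # f(a,g(a,b)), a, g(a,b), b
print(encode("a/0", possible))  # -> [2, 1, 1, 0]
print(encode("g/2", possible))  # -> [1, 0, 1, 0]
```

Vectors like these, one per atom or functor, are what the Input Feature Map described next is trained on.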
The training of the SOFM with these input vectors results in a map called the Input Feature Map (IFM).
A specially designed relaxation network, called Cube, performs the unification by determining the most common unifier. For each argument position of the unifying terms a layer of neurons is constructed, having the same topology as the IFM. For each occurrence of a variable in the given Prolog program a vector of neurons, called Variable Vector, is constructed. The encoding of the vectors is the same as for the input vector of the IFM. Cube and Variable Vectors are connected through neurons leading to and from a vector of neurons called Pairs. Each neuron in the Pairs vector encodes two argument positions that have to be unified. Lateral connections between the pairing neurons activate identical argument positions. With a simple threshold neuron operating on the Pairs neurons the occur check can be realised [15]. The activation functions of the different neurons are constructed such that if the network has reached a stable state (relaxation process), the unification process can be performed [15]. In order to actually calculate the most common unifier a special SOFM, called Output Feature Map (OFM), is constructed. Weights of this feature map are the activations of the Cube neurons. If the Variable Vector of a variable is used as input pattern to the OFM, the neuron representing an instance of that variable responds.
In our network, tests like occur check and clash are implemented such that they can be calculated in parallel and during the relaxation process. Unification is performed via a relaxation neural network. If this network has reached a stable state the most common unifier can be read out using the OFM. It can be proven that our network performs the unification process precisely [15]. Real-world applications of logic programming, in particular in expert systems, require more than exact reasoning capabilities. In order to perform fuzzy unification the AND resp. OR neurons of the relaxation networks have to be modified. Instead of the AND resp. OR function in the neurons with connections from the Variable Vectors to the Pairs neurons, the activation function is changed to the minimum resp. maximum of the two input activations [15]. We have tested the system with different programs consisting of a small Prolog database, simplification of algebraic terms, symbolic differentiation, and the traveling salesman problem [15].

6. Introspection
Many symbolic knowledge processing systems rely on programs that are able to perform symbolic proofs. Interpreters for the programming language Prolog are examples of such programs. The usage of Prolog interpreters for symbolic proofs, however, implies a certain proof strategy. In case of failure of a partial goal, the interpreter backtracks systematically to the last choice made without analyzing the cause of failure. Even for simple programs, this implicit control strategy is not sufficient to obtain efficient computations. Neural networks can be used to automatically optimize symbolic proofs without the need of an explicit formulation of control knowledge [16]. We have realized an approach to learn and store control knowledge in a neural network. Input to the neural network is the Prolog clause to be proved. The output is an encoded structural description of the subgoal that is to be proved next; a toy sketch of this idea is given below.
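A heavily simplified sketch of the idea: record (encoded goal, selected clause) pairs from successful proofs, and at query time propose the clause whose recorded goal encoding is nearest to the current goal. The paper trains ART1, backpropagation, and SOFM networks for this mapping; the nearest-neighbour memory below is only our stand-in to show the data flow, and all names and encodings are hypothetical.

```python
import numpy as np

class ControlKnowledge:
    """Toy stand-in for the clause-selection network: remembers which
    clause solved which (encoded) partial goal."""

    def __init__(self):
        self.goal_codes = []   # feature vectors of partial goals
        self.clause_ids = []   # clause chosen in each recorded situation

    def record(self, goal_code, clause_id):
        """Store one clause-selection situation from a successful proof."""
        self.goal_codes.append(np.asarray(goal_code, dtype=float))
        self.clause_ids.append(clause_id)

    def suggest(self, goal_code):
        """Propose the clause used for the most similar recorded goal;
        the interpreter falls back to Prolog's default order on failure."""
        goal = np.asarray(goal_code, dtype=float)
        dists = [np.linalg.norm(goal - g) for g in self.goal_codes]
        return self.clause_ids[int(np.argmin(dists))]

# Hypothetical goal encodings (e.g. argument-type features) and clause ids.
ck = ControlKnowledge()
ck.record([1, 0, 2], clause_id=3)
ck.record([0, 1, 0], clause_id=1)
print(ck.suggest([1, 0, 1]))  # -> 3: try clause 3 first for this goal
```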
In order to make a comparison we have realized three different neural networks for that problem [16]: ART1 extended to a supervised learning mode [2]; backpropagation [8]; and Kohonen's Self-Organizing Feature Maps (SOFM) [5].
A meta-interpreter generates training patterns for the neural network. It encodes successful Prolog proofs. Trained with these examples of proofs, the neural network generalizes a control strategy to select clauses. Another meta-interpreter, called the generating meta-interpreter (GMI), is asked to prove a goal. The GMI constructs the optimal proof for the given goal, i.e. the proof with the minimal number of resolutions. The optimal proof is found by generating all possible proofs and comparing them with reference to the number of resolutions. For an optimal proof each clause-selection situation is recorded. A clause-selection situation is described by the features of the partial goal to be proved and the clause which is selected to solve that particular goal. The clause is described by a unique identification, and two different sorts of information concerning the structure of arguments are used: the types of arguments and their possible identity. For the types of arguments a hierarchical ordering of the possible argument types is used. The encoder takes the clause-selection situation and produces a training pattern for the neural network. The encoding preserves similarities among the types. The neural network is trained with the encoded training patterns until it is able to reproduce the choice of a clause for a partial goal. A query is passed to an optimizing meta-interpreter (OMI). For each partial goal the OMI presents the description of the partial goal as input to the neural network and obtains a candidate clause for resolution. With this candidate the resolution is attempted. If resolution fails, the OMI uses the Prolog search strategy as default.
Our system allows generalizing the learned control knowledge to new programs. In order to do this, structural similarities between the new program and the learned one are used to generate a mapping of the corresponding selection situations of different programs.
We have tested our approach using several different Prolog programs, for example programs for map coloring, travelling salesman, symbolic differentiation and a little expert system [16]. It turned out that almost all neural networks were in principle able to learn a proof strategy. Best results in reproducing learned strategies were obtained with the modified ART1 network, which reproduced the optimal number of resolutions for a known proof. For queries of the same type (same program) that were not used as training data, however, the SOFM turned out to be the best. A proof strategy using this neural network averaged slightly over the optimal number of resolutions even for completely new programs, but well below the number of resolutions a Prolog interpreter needs. The backpropagation network we used was the worst in both cases [16].

7. Summary
We showed several meaningful ways to integrate neural networks with knowledge-based systems. Concerning Neural Unification we have studied a neural unification algorithm using Self-Organizing Feature Maps by Kohonen.
This neural unification algorithm is capable of doing ordinary unification with neural networks, whereby important problems like the occur check and the calculation of a most common unifier can be handled in parallel. In Introspection we have tested several different neural networks for their ability to detect and learn proof strategies. A modified Self-Organizing Feature Map has been identified to yield the best results concerning the reproduction of proofs made before and for generalizing to completely new programs. In the field of Structure Detection we have developed a combined toolbox to detect structures that a Self-Organizing Feature Map has learned from subsymbolic raw data by the collective behaviour of assemblies of neurons. Data stemming from measurements with typically high dimensionality can be analyzed by using an apt visualization of a Self-Organizing Feature Map (U-matrix methods). The detected structures can be reformulated in the form of symbolic rules for Integrated Knowledge Acquisition by a sophisticated machine learning algorithm, sig*.
The usage of neural networks for integrated subsymbolic and symbolic knowledge acquisition realizes a new type of learning from examples. Unsupervised learning neural networks are capable of extracting regularities from data with the help of apt visualization techniques. Due to the distributed subsymbolic representation, neural networks are, however, not able to explain their inferences. Our system avoids this disadvantage by extracting symbolic rules out of the neural network. It is possible to give an explanation of the inferences made by the neural networks. By exploiting the properties of the neural networks the system is also able to effectively handle noisy and incomplete data. Algorithms for neural unification allow an efficient realization of the central part of a symbolic knowledge processing system and may also be used for neural approximative reasoning. Introspection with neural networks frees the user and programmer of knowledge processing systems from formulating control knowledge explicitly.

8. Acknowledgement
We thank Ms. G. Guimaraes, Mr. H. Li, and Mr. V. Weber for the helpful discussions. We thank all the students of the University of Dortmund, Germany who worked on preliminary versions of our systems. This research has been supported in part by the German Ministry of Research and Technology, project WiNA (contract No. 413-5839-01 IN 103 C/3) and by the Bennigsen-Foerde prize of NRW.

9. References
[1] K.H. Becks, W. Burchard, A.B. Cremers, A. Heuker, A. Ultsch "Using Activation Networks for Analogical Ordering of Considerations: One Method for Integrating Connectionist and Symbolic Processing" in Eckmiller/Hartmann/Hauske (Eds.), pp. 465-469, 1990.
[2] G.A. Carpenter, S. Grossberg "Self-Organization of Stable Category Recognition Codes for Analog Input Patterns" Applied Optics, Vol. 26, pp. 4919-4930, 1987.
[3] G. Deichsel, H.J. Trampisch "Clusteranalyse und Diskriminanzanalyse" Gustav Fischer Verlag, Stuttgart, 1985.
[4] W.R. Hutchison, K.R. Stephens "Integration of Distributed and Symbolic Knowledge Representations" Proc. IEEE Intern. Conf. on Neural Networks, San Diego, CA, p. 395, 1987.
[5] T. Kohonen "Self-Organization and Associative Memory" Springer, Berlin, 1989 (3rd ed.).
[6] M.C. Mozer "RAMBOT: A Connectionist Expert System that Learns by Example" Proc. IEEE Intern. Conf.
on Neural Networks, San Diego, CA, Vol. 2, p. 693, 1987.
[7] G. Palm, A. Ultsch, K. Goser, U. Rückert "Knowledge Processing in Neural Architecture" in Delgado-Frias/Moore (Eds.) VLSI for Neural Networks and Artificial Intelligence, Plenum Publ., New York, 1993.
[8] D.E. Rumelhart, J.L. McClelland (Eds.) "Parallel Distributed Processing" MIT Press, 1986.
[9] M. Schweizer, P.M.B. Foehn, J. Schweizer, A. Ultsch "A Hybrid Expert System for Avalanche Forecasting" Proc. Intl. Conf. ENTER 94 - Inform. Communic. Technol. in Tourism, Innsbruck, pp. 148-153, 1994.
[10] A. Ultsch "Connectionist Models and their Integration with Knowledge-Based Systems" Technical Report No. 396, Univ. of Dortmund, Germany, 1991 (in German).
[11] A. Ultsch "Self-organizing Neural Networks for Knowledge Acquisition" Proc. European Conf. on AI (ECAI), Wien, Austria, pp. 208-210, 1992.
[12] A. Ultsch "Knowledge Acquisition with Self-Organizing Neural Networks" Proc. Intl. Conf. on Artificial Neural Networks (ICANN), Brighton, UK, pp. 735-740, 1992.
[13] A. Ultsch "Self-Organized Feature Maps for Monitoring and Knowledge Acquisition of a Chemical Process" Proc. Intl. Conf. on Artificial Neural Networks (ICANN), Amsterdam, Netherlands, pp. 864-867, 1993.
[14] A. Ultsch, G. Guimaraes, D. Korus, H. Li "Knowledge Extraction from Artificial Neural Networks and Applications" TAT & World Transputer Congress 93, Aachen, Germany, Springer, pp. 194-203, 1993.
[15] A. Ultsch, G. Guimaraes, V. Weber "Self Organizing Feature Maps for Logical Unification" Proc. Intl. Joint Conf. AI, Portugal, 1994.
[16] A. Ultsch, R. Hannuschka, U. Hartmann, M. Mandischer, V. Weber "Optimizing logical proofs with connectionist networks" in Kohonen et al. (Eds.) Proc. ICANN, Helsinki, Elsevier, pp. 585-590, 1991.
[17] A. Ultsch, D. Korus, T.O. Kleine "Neural Networks in Biochemical Analysis" Abstract, Intl. Conf. Biochemical Analysis 95, publ. in: Europ. Journal of Clinical Chemistry and Clinical Biochemistry, Vol. 33, No. 4, Berlin, pp. A144-A145, April 1995.
[18] A. Ultsch, H. Li "Automatic Acquisition of Symbolic Knowledge from Subsymbolic Neural Networks" Proc. Intl. Conf. on Signal Processing, Peking, China, pp. 1201-1204, 1993.
[19] A. Ultsch, H.P. Siemon "Self-Organizing Neural Networks for Exploratory Data Analysis" Proc. Conf. Soc. for Information and Classification, Dortmund, Germany, 1992.
[20] L.A. Zadeh "Fuzzy Sets" Information and Control 8, pp. 338-353, 1965.
Architecture for Rapidly Reconfigurable Robot Workcell

I-Ming Chen*, Peter Chen+, Guilin Yang+, Weihai Chen+, In-Gyu Kang+, Song Huat Yeo*, Guang Chen*

*School of Mechanical and Production Engineering, Nanyang Technological University, Nanyang Ave, Singapore 639798
+Automation Technology Division, Gintic Institute of Manufacturing Technology, 71 Nanyang Drive, Singapore 638075

ABSTRACT
A reconfigurable robot workcell is a collection of standardized components, such as actuators, rigid links, tools, fixtures, sensors, and transport systems. These components can be rapidly assembled and configured to form a robotic workcell for a specific task. In this article, we describe the architecture of this type of reconfigurable workcell based on component technology. Both hardware and software aspects of the workcell are described. In addition, the control and simulation environment for the robot workcell, SEMORS, is introduced.

1. INTRODUCTION
An automated manufacturing workcell usually consists of a collection of material processing and handling devices such as CNC machines, robots, part feeders, conveyors, and sensors. These devices are traditionally designed and commissioned with the intent that they will be operated with few significant changes for as long as the specific product is being manufactured. Certain flexibility has been given to these devices through programmable controllers. However, it is time-consuming and not cost-effective to reconfigure them for other products. Such drawbacks reduce the attractiveness of automation systems for manufacturers that are involved in high-variety, low-volume production. For manufacturers adopting fixed automation systems, not reacting fast enough to market changes will place them at a disadvantage.
The Nanyang Technological University and Gintic Institute of Manufacturing Technology are currently involved in a project to improve agility and flexibility in automation systems by developing a "rapidly reconfigurable" robot workcell. The key to the concept of reconfigurability in automation lies in component-based technology. Ultimately, workcells will be made of standard modular components, such as actuators, links, end-effectors, fixtures, and sensors. These components can be rapidly assembled and configured to form a robotic workcell according to specific task requirements. Based on this component-based concept, components can be reused for different workcell configurations. Also, the initial installation cost of such a system can be moderated by purchasing only the minimum required parts. Maintenance and upgrading of the system become very easy: malfunctioning or outdated components are simply replaced. Most important of all, converting a manufacturing line from one product to another can be very fast and easy, in order to keep up with the rapidly changing marketplace.

2. HARDWARE COMPONENTS
2.1 Modular Robots
The major element in the workcell is the modular robot system. A modular robot can assume various robot geometries and DOFs for different task specifications. It consists of a pool of standard modules: actuators, rigid links, and end-effectors. Basically, the actuator modules are self-contained compact mechatronic drives with built-in motors, motor controllers, amplifiers, and a communication bus. The actuators can be flexibly stacked together to form a multi-axis robot manipulator or can be used independently as individual motion control elements on the factory floor. They are connected through standard mechatronic interfaces and communication networks.
The link modules are rigid connectors with various geometrical shapes. From experience, the geometry of the rigid links should be customizable based on vendor specifications in order to fit the robot into a specific workspace.
There exist several prototype modular robot systems in various research institutions [1,2,3,4]. For reliability purposes, we build the workcell based on one commercial source, the MoRSE modular robot system, as shown in Figure 1. The main MoRSE components are the modular drives, which contain internal PID control and a Controller Area Network (CAN) bus communication system. Modular drives are daisy-chained on the CAN bus and can communicate with a PC controller. The overall system is a distributed system with the PC controller monitoring and coordinating the activities of the local module controllers. The link units are rigid connectors that can be customized by the end-user. The connection between the link and actuator modules is through bolt screws that can be easily disconnected. The setup of a workcell with several modular robots is rendered as a computer simulation shown in Figure 2. We will show that the modular robot can be configured as a serial manipulator to perform parts assembly tasks and also as a high-stiffness parallel manipulator to perform light machining tasks.

2.2 Workcell Devices
The entire robot workcell consists of modular robot systems, material handling systems, sensors, device controllers and other peripheral devices. The hardware environment for both simulation and actual workcell implementation is illustrated in Figure 3. This is a PC-based system consisting of a workcell supervisor, a data server, a graphical workstation, and a group of workcell devices along with the modular robot that are controlled by their respective device controllers (e.g., the industrial PC). We also assume that the devices in the workcell other than the robot can be reconfigured. Therefore, the workcell components are connected to a workcell-wide communication network.

§ Supervisory Control Network
A digital network facilitates communication among the workcell components. The choice of network implementation will ultimately depend on the bandwidth requirement, the connectivity of the network with the device controllers, and the reliability of the networking protocol. For demonstration purposes, we use the 10 Mb Ethernet protocol.

§ Workcell Supervisor
During workcell operation, the supervisor receives reports from the device controllers regarding the status of the devices, and then, based on the status reports, issues instructions to the device controllers. The device controllers, upon receiving the instructions, activate the device to execute the instructions.

§ Device Controllers
A device controller is essentially an interface between the device and the workcell supervisor. Since devices of different types from different vendors have their particular characteristics of operation, a device controller's function is to hide such particularity from the workcell supervisor to lessen the analytical and computational burden of the supervisor, thus improving the "reconfigurability" of the workcell.
Operationally, a device controller performs the following three types of tasks: (i) reporting device status, (ii) processing instructions from the workcell supervisor, and (iii) controlling the device to execute the instruction.

§ Data Server
The data server stores (i) system data (such as the kinematics and dynamics models of a robot) generated during off-line workcell programming, and (ii) run-time data relevant to the operation of the workcell. It receives data from two sources: the workcell supervisor and the device controllers. The data from the supervisor mainly concerns the coordination of the workcell, while the data from the device controllers reflects the real-time status of the devices. The data server also maintains real-time communication with the graphical workstation. It provides the graphical workstation with the necessary data for visual display of the workcell activities.

§ Graphical Workstation
The workstation executes a simulation program to display the real-time status of the workcell. It obtains the model data of the workcell devices from the data server, and the data concerning the real-time status of the devices from the device controllers. It is expected that the simulation program being executed on the graphical workstation will also support a user interface. This interface will allow a user to view in real time the operation of the workcell from different perspectives and at different levels of detail.

Figure 2: Workcell layout
Figure 3: Workcell hardware

3. SOFTWARE COMPONENTS
The software environment will support both on-line and off-line operations. On-line operation refers to workcell activities (such as sensing, actuating, and graphical display of workcell status) that occur during the execution of a task (such as polishing a part), while off-line operations refer to activities associated with preparation for on-line operations or with processing of information after a task execution.
To ensure that the component-based concept is applicable workcell-wide, the architecture of the workcell software adopts the reusable component software concept. There are several commercially available component software formats, such as COM and CORBA. In this project, we follow the standard of COM closely. Figure 4 depicts the software components. Each block represents a program or a set of data. Shaded blocks are programs associated with off-line operations, non-shaded blocks represent on-line operations, and 3D blocks represent system data. The arrows indicate directions of information flow. Note that in actual implementation, some of the information may flow from one component to another through the data server. Also, some information needed by various processes may be obtained from the data server; the specific routes of such information flow are not indicated in the overall representation depicted in Figure 4.

Figure 4: Software components

3.1 Modular Robot Level
§ Robot Kinematics Model
A kinematics model of a robot specifies the geometrical relationships among the links and joints of the robot. Here, the robot kinematics algorithm is formulated based on the Local Product-of-Exponentials (POE) formula [4]: a local coordinate representation of the joint axes is established for a general robot configuration, and the forward kinematics can be expressed as a product of matrix exponentials; a minimal sketch is given below. For the modular robot system, no specific robot geometry and configuration will be given in advance.
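As an illustration of the product-of-exponentials form, the sketch below computes the forward kinematics of a planar 2-DOF arm as T(theta) = exp(xi_1 theta_1) exp(xi_2 theta_2) M. The twist values and link lengths are our own toy example; the paper's local POE formulation differs in how frames are assigned, so this is only a generic POE sketch.

```python
import numpy as np
from scipy.linalg import expm

def twist_hat(v, w):
    """4x4 matrix form of a twist with linear part v and angular part w."""
    W = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])
    xi = np.zeros((4, 4))
    xi[:3, :3] = W
    xi[:3, 3] = v
    return xi

def revolute_twist(axis, point):
    """Twist of a revolute joint rotating about `axis` through `point`."""
    w = np.asarray(axis, dtype=float)
    q = np.asarray(point, dtype=float)
    return twist_hat(v=-np.cross(w, q), w=w)

def forward_kinematics(twists, thetas, M):
    """POE forward kinematics: T = expm(xi_1 th_1) ... expm(xi_n th_n) @ M."""
    T = np.eye(4)
    for xi, th in zip(twists, thetas):
        T = T @ expm(xi * th)
    return T @ M

# Toy planar 2R arm with unit link lengths (illustrative values only).
xi1 = revolute_twist(axis=[0, 0, 1], point=[0, 0, 0])
xi2 = revolute_twist(axis=[0, 0, 1], point=[1, 0, 0])
M = np.eye(4)
M[0, 3] = 2.0  # home pose: end-effector at (2, 0, 0)

T = forward_kinematics([xi1, xi2], [np.pi / 2, 0.0], M)
print(np.round(T[:3, 3], 3))  # -> [0. 2. 0.]
```

Because each module contributes one exponential factor, reconfiguring the robot amounts to rebuilding this product from the assembled modules' joint data, which is what makes automatic model generation possible.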
§ Robot Dynamics Model
A dynamics model of a robot specifies the dynamical behavior of the robot. A forward dynamics model specifies the motion of the robot under actuation, while an inverse dynamics model specifies the actuation required for the robot to achieve a certain motion trajectory. The parameters of a dynamics model are usually represented symbolically; their specific values can be determined (analytically or experimentally) to "characterize" the model. As with the robot kinematics models, we have developed automatic model generation for robot dynamics based on recursive Newton-Euler algorithms [6].

§ Kinematic Calibration Algorithm
The process of determining and refining the values of the kinematic parameters of the robot model is referred to as kinematic calibration. For modular robots, frequent module reconfiguration introduces mechanical errors into the nominal kinematic model, so kinematic calibration is necessary. The robot calibration follows the local POE model. Differential transformation theory and linear superposition principles are utilized in the calibration model, and an iterative least-squares algorithm is used to identify the robot's actual kinematic parameters. Because local coordinates are used, this algorithm is easy to set up and can deal with robots of different joint types and arbitrary DOFs. Furthermore, the local POE model varies smoothly with changes of the joint axes, which makes the model singularity-free [7].
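The calibration step can be pictured as repeated linear least-squares fits. The sketch below is a generic iterative loop over a kinematic parameter vector, using a finite-difference sensitivity matrix in place of the differential-transformation Jacobian of [7]; the measurement model and parameterization are assumptions made purely for illustration.

# Generic iterative least-squares calibration sketch (illustrative; the
# method of [7] derives the Jacobian analytically from the local POE model).
import numpy as np

def calibrate(predict, p0, q_samples, y_measured, iters=20, h=1e-6):
    """Refine kinematic parameters p so predict(p, q) matches measurements.

    predict(p, q) -> predicted end-effector measurement for joint config q
    p0            -> nominal parameter vector from the module/CAD data
    q_samples     -> joint configurations used in the experiment
    y_measured    -> stacked vector of corresponding measured poses
    """
    p = np.asarray(p0, float)
    for _ in range(iters):
        y = np.concatenate([predict(p, q) for q in q_samples])
        r = y_measured - y                       # residual to be reduced
        # sensitivity of the stacked prediction w.r.t. each parameter
        A = np.column_stack([
            (np.concatenate([predict(p + h * e, q) for q in q_samples]) - y) / h
            for e in np.eye(len(p))
        ])
        dp, *_ = np.linalg.lstsq(A, r, rcond=None)  # least-squares update
        p = p + dp
        if np.linalg.norm(dp) < 1e-10:
            break
    return p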
3.2 Workcell Level

§ Workcell Task Planner
A workcell task is one that specifies a complete operation of the workcell; for instance, a statement such as Polish Part A is a workcell task. Workcell tasks (the input to the task planner) are usually generated by a factory-level production planner; in the current implementation, a user generates them. The task planner takes a set of workcell tasks as its input and generates a set of device tasks to be performed by the workcell. A device task is one that can be interpreted and executed by a device controller. For instance, an instruction to the robot such as Execute Command Sequence #1 is a device task, where the sequence labeled #1 may contain a set of specific robot commands.

A workcell task must be decomposed into a set of device tasks for execution. For example, assume that a part is moved around continuously by a PLC-controlled conveyor and can be stopped for a robot to perform the polishing task. To execute the workcell task Polish Part A, the device tasks (sent from the workcell supervisor to the device controllers) may be specified as follows:

PLC: Execute Program #2
§ Stop the pallet if part presence is detected by the sensor.
§ Report to the workcell supervisor that the part is ready.

Robot Controller: Execute Task #1
§ Approach the workpiece.
§ Polish the workpiece.
§ Retreat.
§ Report to the workcell supervisor that polishing is done.

PLC: Execute Sequence #3
§ Release the pallet to the conveyor.
§ Report to the workcell supervisor that the part has been released.

§ Robot Configuration Optimization Algorithm
The task planner generates a set of device tasks associated with a given workcell task. A set of device tasks for a robot may consist of robot commands such as Move A 0.2, which can be interpreted as moving the robot end-effector from its current location to a new location A at 20% of the maximum rated speed. Given a set of such robot commands, the robot configuration optimization algorithm generates an "optimal" robot configuration, which specifies the DOF, the type of joints, etc. It is based on such a configuration that a modular robot is assembled to perform the tasks. Several task-optimal configuration algorithms have been proposed for modular robots. Fundamentally, finding the most suitable task-oriented robot configuration is a design optimization problem, and genetic algorithms and other artificial intelligence techniques are employed to solve it [8,9].

§ Workcell Layout Optimization Algorithm
For a given workcell task, the workcell layout optimization algorithm determines the workcell devices necessary for performing the task. It also determines a suitable arrangement of the workcell devices such that the device tasks (generated by the task planner) can be carried out effectively. The output of this algorithm is a set of data (indicated by the 3D block Workcell Layout) specifying the devices required and their respective deployment locations.

§ Task Scheduler
The task scheduler decides the sequence in which a set of workcell tasks is to be executed. Tasks that have no deadlines can be executed in any order. For tasks with specific deadlines, the schedulability of the task set should first be examined; if the tasks are schedulable, scheduling algorithms such as Earliest-Deadline-First (EDF) can be used to yield an execution sequence, as sketched below. Tasks that are both periodic and subject to hard deadlines require more complicated scheduling algorithms [10]. A full version of the task scheduler must be able to deal with all three cases.
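The following sketch covers the aperiodic, hard-deadline case: order the tasks by earliest deadline and verify that each one finishes in time. It is a minimal non-preemptive illustration with hypothetical task fields, not the scheduler of [10].

# Earliest-Deadline-First sketch for aperiodic, non-preemptive workcell
# tasks (hypothetical fields; durations would come from task estimates).
from dataclasses import dataclass

@dataclass
class WorkcellTask:
    name: str
    duration: float    # estimated execution time, seconds
    deadline: float    # absolute deadline, seconds from now

def edf_schedule(tasks):
    """Return tasks in EDF order, or None if the set is not schedulable."""
    order = sorted(tasks, key=lambda t: t.deadline)
    finish = 0.0
    for t in order:
        finish += t.duration
        if finish > t.deadline:      # some task would miss its deadline
            return None
    return order

if __name__ == "__main__":
    plan = edf_schedule([
        WorkcellTask("Polish Part A", duration=40.0, deadline=120.0),
        WorkcellTask("Assemble Part B", duration=25.0, deadline=60.0),
    ])
    print([t.name for t in plan])    # EDF runs Part B first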
§ Workcell Supervisor
The workcell supervisor is a program that coordinates the activities of the various devices in order to execute a set of device tasks. It issues instructions to the workcell devices based on the current workcell status and ensures that the instructions are executed properly. Error detection and recovery algorithms can be incorporated into the workcell supervisor to improve its robustness.

§ Device Controller
The instructions from the supervisor to a device controller are at the task level: they indicate what needs to be done by the device but contain no specific information as to how the task is to be carried out. For instance, an instruction to the robot might be specified as Execute Assembly Task #1 (or simply a number, say, 1), which instructs the robot to execute a pre-programmed assembly task labeled Task #1. The device controller of the robot (i.e., the industrial PC in this case) must process the instruction locally by retrieving (from local memory or from the workcell database) the robot commands associated with the instruction and sending these commands to the robot so that the task is executed properly. For a device with rudimentary functions, such as a conveyor, local processing of the instruction by the device controller (e.g., the PLC) can be relatively simple.

§ Workcell Database
The database stores data generated from both off-line and on-line operations. These include the workcell layout, the robot kinematics and dynamics models, the graphical workcell model, and data collected by the device controllers and the workcell supervisor during workcell simulation and actual operation.

§ Graphical Display
The graphical display is a program that manages the 3D representation of the workcell activities. It also serves as a simulation tool for visual verification of workcell operations.

4. CONTROL AND SIMULATION

4.1 Control
Two levels of control exist in the workcell: supervision of the workcell activities, and real-time control of the individual workcell devices. The workcell supervisor performs supervisory control: it coordinates the workcell activities by collecting reports from the device controllers regarding the status of the workcell and then, based on these reports, dispatching device-task instructions to the device controllers. The device controllers control the individual devices; each is equipped with its own real-time control system to ensure that its assigned tasks are completed properly.
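A supervisory loop of this kind can be summarized in a few lines. The sketch below is the supervisor-side counterpart of the device-controller sketch in Section 2.2; the status fields, the network calls, and the simple dispatch rule are hypothetical placeholders for the actual coordination logic.

# Supervisor-side coordination loop sketch (hypothetical API; real error
# detection and recovery would replace the simple dispatch rule below).
import time

def supervise(network, device_tasks, poll_period=0.1):
    """Dispatch each device task once the target device reports idle."""
    pending = list(device_tasks)          # [(device_id, instruction), ...]
    while pending:
        device_id, instruction = pending[0]
        status = network.request_status(device_id)   # assumed network call
        if status.get("state") == "idle":
            network.send(device_id, {"instruction": instruction})
            pending.pop(0)                # move on to the next device task
        time.sleep(poll_period)           # supervisory polling, not real-time

Note that real-time control remains with the device controllers; the supervisor only sequences task-level instructions, which keeps it independent of any particular device.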
4.2 Simulation
Simulation is conducted in a power-off setting in which all workcell components are operational except the actual devices (which are turned off). The simulated activities of the workcell are displayed graphically on the graphical workstation. During on-line operation, the workstation displays the workcell status in synchronization with the physical events taking place in the workcell.

4.3 SEMORS
The robot control and simulation environment (SEMORS) is a software application, currently under development, that manages the off-line programming and real-time operation of the robot. It enables a user to
§ Construct the kinematics and dynamics models of a robot from a set of predefined modules,
§ Graphically generate a desired trajectory for the robot end-effector,
§ Compose a robot task sequence,
§ Select a control law for the robot to execute the desired trajectory and task,
§ Specify how output data are to be stored,
§ Run a simulation, and
§ Execute the task.

The general layout of SEMORS is illustrated in Figure 5. It consists of a set of menus and three display areas. Display Area I shows the tools and resources (such as the set of modules available for assembling a new robot) required during off-line programming. Display Area II shows the graphical results of the user's programming, e.g., the robot as it is being assembled. The User Input Area is for prompting and accepting user input.

Figure 5: GUI of SEMORS

SEMORS is intended to be a uniform interface for all modular robots. It will be used both for simulation and for on-line execution of a task, regardless of whether the robot is executing (or is simulated to be executing) the task as a stand-alone application or as part of a workcell process.

5. DATA MANAGEMENT AND PRESENTATION

The main resource for data management and presentation is the data server: all data pertaining to the operation of the workcell are stored on this server. The device controllers and the workcell supervisor may store some information locally to facilitate timely operations. Workcell data can be displayed on the workcell supervisor, the data server, and the graphical workstation, each of which may display the set of data associated with its own functionality. Unless an industrial PC is used as the device controller for the conveyor and sensors, data associated with these devices can be displayed on either the workcell supervisor or the data server.

6. SUMMARY

We have presented a new type of reconfigurable robotic workcell for rapid deployment and agile manufacturing. The key idea is to design the entire workcell on component technology, covering not only the hardware but also the software and control. Components are connected through standard hardware and software interfaces. The central element of the workcell is the modular reconfigurable robot system, which can be configured into any geometry and any number of DOFs, including both serial and parallel configurations. We are currently in the first stage of the implementation: building the simulation, control, and hardware environment for a single reconfigurable robot workcell, i.e., a SEMORS-enabled modular robot. Ultimately, we will demonstrate a set of fully connected reconfigurable robot workcells performing various applications, including light machining tasks (e.g., polishing, grinding, and deburring) and assembly tasks (e.g., odd-parts insertion).

Acknowledgement: This project is supported by Gintic Institute of Manufacturing Technology, Singapore, under Upstream Project U97-A006.

7. REFERENCES

[1] R. Cohen, M. Lipton, M. Dai, B. Benhabib, Conceptual Design of a Modular Robot. ASME J. Mechanical Design, Vol. 114, pp. 117-125, 1992.
[2] T. Fukuda, S. Nakagawa, Dynamically Reconfigurable Robot System. Proceedings of IEEE Conf. Robotics & Automation, pp. 1581-1586, 1988.
[3] C.J.J. Paredis, H.B. Brown, R.W. Casciola, J.E. Moody, P.K. Khosla, A Rapidly Deployable Manipulator System. International Workshop on Some Critical Issues in Robotics, Singapore, 1995.
[4] I.M. Chen, G. Yang, Configuration Independent Kinematics for Modular Robots. Proceedings of IEEE Conf. Robotics & Automation, pp. 1845-1849, 1995.
[5] I.M. Chen, G. Yang, Inverse Kinematics for Modular Reconfigurable Robots. Proceedings of IEEE Conf. Robotics & Automation, 1998.
[6] I.M. Chen, G. Yang, Automatic Model Generation for Modular Reconfigurable Robot Dynamics. ASME J. Dynamic Systems, Measurement, and Control, to appear.
[7] I.M. Chen, G. Yang, Kinematic Calibration of Modular Reconfigurable Robots Using Product-of-Exponentials Formula. J. Robotic Systems, Vol. 14, No. 11, pp. 807-821, 1997.
[8] I.M. Chen, J. Burdick, Determining Task Optimal Modular Robot Assembly Configurations. Proceedings of IEEE Conf. Robotics & Automation, pp. 132-137, 1995.
[9] G. Yang, I.M. Chen, Reduced DOF Modular Robot Configurations. ICARCV, Singapore, 1998.
[10] C.Y. Chen, I.M. Chen, Scheduling of Reconfigurable Workcells for Hard-real-time Manufacturing Operation. ICARCV, 1998.