

Paper Preparation Guidelines


In the following you will find guidelines and a template for preparing your full paper for the Journal of Technical Acoustics electronically. We suggest that you print and read this information carefully before downloading the template file.

General Information

All manuscripts may be in English or in Chinese. All manuscripts must provide the title, authors, abstract and keywords in both English and Chinese.

Funding category and project number. To indicate the significance of the topic studied in the paper, if the research is supported by a grant, state the exact funding category and project number at the bottom-left corner of the first page.

For example: National Natural Science Foundation of China project (grant number); National "863" High-Tech Research and Development Program of China project (grant number).

If there is no funding, please state that the research in the paper received no grant support.

Author biography. The biography should include the author's name, gender, year of birth, degree or professional title, current main research interests and principal achievements; place it in a footnote on the first page.

The Chinese and English versions of each author's affiliation must correspond exactly.

Give the affiliation in full, down to the department or school.

For example, "Department of Computer Science and Technology, Tsinghua University" should not be shortened to "Computer Department, Tsinghua University", and "Department of Computer Science and Engineering, Zhejiang University" should not be shortened to "Computer Department, Zhejiang University".

The paper must be 4~8 printed pages in length. To achieve the best viewing experience, we strongly encourage you to use the Times New Roman font (the Word template uses Times New Roman); this gives the Proceedings a uniform look. The paper should be in the following format:

- 4~8 printed pages (standard A4 size: 210 mm by 297 mm)
- Single column
- Printed in black ink
- Do NOT include headers and footers; page numbers will be added automatically when the Proceedings are printed
- The first page should have the paper title, author(s) and affiliation(s) centered on the page
- Follow the style of the included sample paper with regard to title, authors, affiliations, abstract, headings and subheadings
- Print the paper on white paper and check that the positioning (left and top margins) as well as other layout features are correct

Microsoft Word Template

You may write your paper using the Microsoft Word template file (for Microsoft Word 2000 or later). You will find the following file: sxjs_template.doc (MS Word template). Note, however, that you must submit the paper as a PDF file including all authors' signatures, confirming that every author has read the paper.

Additional Instructions

Paper Title - The paper title must be centered on the page and set in boldface letters. Only the first letter of the title should be capitalized.

Authors' Name(s) - The authors' name(s) and affiliation(s) appear centered below the paper title. If space permits, please include a mailing address here. The templates indicate the area where the title and author information should go. These items need not be strictly confined to the number of lines indicated; papers with multiple authors and affiliations, for example, may require two or more lines for this information.

Authors' English names - Write the surname entirely in capital letters, capitalize the first letter of the given name, and join the two characters of a two-character given name with a hyphen.

NuMicro N9H30 Series Development Board User Manual

NuMicro® Family
Arm® ARM926EJ-S Based
NuMaker-HMI-N9H30 User Manual
Evaluation Board for NuMicro® N9H30 Series

The information described in this document is the exclusive intellectual property of Nuvoton Technology Corporation and shall not be reproduced without permission from Nuvoton. Nuvoton is providing this document only for reference purposes of NuMicro microcontroller and microprocessor based system design. Nuvoton assumes no responsibility for errors or omissions. All data and specifications are subject to change without notice. For additional information or questions, please contact Nuvoton Technology Corporation.

Table of Contents
1 OVERVIEW
1.1 Features
1.1.1 NuMaker-N9H30 Main Board Features
1.1.2 NuDesign-TFT-LCD7 Extension Board Features
1.2 Supporting Resources
2 NUMAKER-HMI-N9H30 HARDWARE CONFIGURATION
2.1 NuMaker-N9H30 Board - Front View
2.2 NuMaker-N9H30 Board - Rear View
2.3 NuDesign-TFT-LCD7 - Front View
2.4 NuDesign-TFT-LCD7 - Rear View
2.5 NuMaker-N9H30 and NuDesign-TFT-LCD7 PCB Placement
3 NUMAKER-N9H30 AND NUDESIGN-TFT-LCD7 SCHEMATICS
3.1 NuMaker-N9H30 - GPIO List Circuit
3.2 NuMaker-N9H30 - System Block Circuit
3.3 NuMaker-N9H30 - Power Circuit
3.4 NuMaker-N9H30 - N9H30F61IEC Circuit
3.5 NuMaker-N9H30 - Setting, ICE, RS-232_0, Key Circuit
3.6 NuMaker-N9H30 - Memory Circuit
3.7 NuMaker-N9H30 - I2S, I2C_0, RS-485_6 Circuit
3.8 NuMaker-N9H30 - RS-232_2 Circuit
3.9 NuMaker-N9H30 - LCD Circuit
3.10 NuMaker-N9H30 - CMOS Sensor, I2C_1, CAN_0 Circuit
3.11 NuMaker-N9H30 - RMII_0_PF Circuit
3.12 NuMaker-N9H30 - RMII_1_PE Circuit
3.13 NuMaker-N9H30 - USB Circuit
3.14 NuDesign-TFT-LCD7 - TFT-LCD7 Circuit
4 REVISION HISTORY

List of Figures
Figure 1-1 Front View of NuMaker-HMI-N9H30 Evaluation Board
Figure 1-2 Rear View of NuMaker-HMI-N9H30 Evaluation Board
Figure 2-1 Front View of NuMaker-N9H30 Board
Figure 2-2 Rear View of NuMaker-N9H30 Board
Figure 2-3 Front View of NuDesign-TFT-LCD7 Board
Figure 2-4 Rear View of NuDesign-TFT-LCD7 Board
Figure 2-5 Front View of NuMaker-N9H30 PCB Placement
Figure 2-6 Rear View of NuMaker-N9H30 PCB Placement
Figure 2-7 Front View of NuDesign-TFT-LCD7 PCB Placement
Figure 2-8 Rear View of NuDesign-TFT-LCD7 PCB Placement
Figure 3-1 GPIO List Circuit
Figure 3-2 System Block Circuit
Figure 3-3 Power Circuit
Figure 3-4 N9H30F61IEC Circuit
Figure 3-5 Setting, ICE, RS-232_0, Key Circuit
Figure 3-6 Memory Circuit
Figure 3-7 I2S, I2C_0, RS-485_6 Circuit
Figure 3-8 RS-232_2 Circuit
Figure 3-9 LCD Circuit
Figure 3-10 CMOS Sensor, I2C_1, CAN_0 Circuit
Figure 3-11 RMII_0_PF Circuit
Figure 3-12 RMII_1_PE Circuit
Figure 3-13 USB Circuit
Figure 3-14 TFT-LCD7 Circuit

List of Tables
Table 2-1 LCD Panel Combination Connector (CON8) Pin Function
Table 2-2 Three Sets of Indication LED Functions
Table 2-3 Six Sets of User SW, Key Matrix Functions
Table 2-4 CMOS Sensor Connector (CON10) Function
Table 2-5 JTAG ICE Interface (J2) Function
Table 2-6 Expand Port (CON7) Function
Table 2-7 UART0 (J3) Function
Table 2-8 UART2 (J6) Function
Table 2-9 RS-485_6 (SW6~8) Function
Table 2-10 Power on Setting (SW4) Function
Table 2-11 Power on Setting (S2) Function
Table 2-12 Power on Setting (S3) Function
Table 2-13 Power on Setting (S4) Function
Table 2-14 Power on Setting (S5) Function
Table 2-15 Power on Setting (S7/S6) Function
Table 2-16 Power on Setting (S9/S8) Function
Table 2-17 CMOS Sensor Connector (CON9) Function
Table 2-18 CAN_0 (SW9~10) Function

1 OVERVIEW

The NuMaker-HMI-N9H30 is an evaluation board for GUI application development. It consists of two parts: a NuMaker-N9H30 main board and a NuDesign-TFT-LCD7 extension board. The NuMaker-HMI-N9H30 is designed for project evaluation, prototype development and validation with HMI (Human Machine Interface) functions.

The NuMaker-HMI-N9H30 integrates a touchscreen display, voice input/output, rich serial port services and I/O interfaces, and provides multiple external storage options.

The NuDesign-TFT-LCD7 plugs into the main board via the DIN_32x2 extension connector. It carries one 7" LCD with a resolution of 800x480, a 24-bit RGB interface and an embedded 4-wire resistive touch panel.

Figure 1-1 Front View of NuMaker-HMI-N9H30 Evaluation Board
Figure 1-2 Rear View of NuMaker-HMI-N9H30 Evaluation Board

1.1 Features

1.1.1 NuMaker-N9H30 Main Board Features

● N9H30F61IEC chip: LQFP216-pin MCP package with 64 MB DDR
● SPI Flash: W25Q256JVEQ (32 MB), for quad-mode booting or storage
● NAND Flash: W29N01HVSINA (128 MB), for booting or storage
● One Micro-SD/TF card slot, serving either as an SD memory card for data storage or as an SDIO (Wi-Fi) device
● Two COM ports:
– One DB9 RS-232 port on UART_0, using a 75C3232E transceiver, for debugging and system development
– One DB9 RS-232 port on UART_2, using a 75C3232E transceiver, for user applications
● 22 GPIO expansion ports, including seven sets of UART functions
● JTAG interface for software development
● Microphone input and earphone/speaker output through a 24-bit stereo audio codec (NAU88C22) on the I2S interface
● Six user-configurable push buttons
● Three status-indication LEDs
● SN65HVD230 transceiver for CAN bus communication
● MAX3485 transceiver for RS-485 device connection
● One buzzer for program applications
● Two RJ45 ports, Ethernet 10/100 Mbps MAC with IP101GR PHY
● USB_0 usable as Device/Host and USB_1 usable as Host, supporting pen drives, keyboards, mice and printers
● Over-voltage and over-current protection using an APL3211A chip
● RTC battery socket for a CR2032 cell; the battery voltage can be measured via ADC0
● System power supplied by a 5 V DC adaptor or USB VBUS

1.1.2 NuDesign-TFT-LCD7 Extension Board Features

● 7" 800x480 panel with 4-wire resistive touch and a 24-bit RGB888 interface
● DIN_32x2 extension connector

1.2 Supporting Resources

For sample code and an introduction to the NuMaker-N9H30, please refer to the N9H30 BSP: https:///products/gui-solution/gui-platform/numaker-hmi-n9h30/?group=Software&tab=2

Visit NuForum for further discussion about the NuMaker-HMI-N9H30: /viewforum.php?f=31

2 NUMAKER-HMI-N9H30 HARDWARE CONFIGURATION

2.1 NuMaker-N9H30 Board - Front View

Figure 2-1 Front View of NuMaker-N9H30 Board (labelled: combination connector CON8; six user SWs K1~6; three indication LEDs LED1~3; power supply switch SW_POWER1; audio codec U10; microphone M1; NAND Flash U9; RS-232 transceivers U6, U12; RS-485 transceiver U11; CAN transceiver U13)

Figure 2-1 shows the main components and connectors on the front side of the NuMaker-N9H30 board.
The following lists the components and connectors visible in the front view:

● NuMaker-N9H30 and NuDesign-TFT-LCD7 combination connector (CON8). This panel connector supports a 4-/5-wire resistive or capacitive touch panel with a 24-bit RGB888 interface.

Connector  GPIO pin of N9H30  Function
CON8.1     -                  Power 3.3V
CON8.2     -                  Power 3.3V
CON8.3     GPD7               LCD_CS
CON8.4     GPH3               LCD_BLEN
CON8.5     GPG9               LCD_DEN
CON8.7     GPG7               LCD_HSYNC
CON8.8     GPG6               LCD_CLK
CON8.9     GPD15              LCD_D23(R7)
CON8.10    GPD14              LCD_D22(R6)
CON8.11    GPD13              LCD_D21(R5)
CON8.12    GPD12              LCD_D20(R4)
CON8.13    GPD11              LCD_D19(R3)
CON8.14    GPD10              LCD_D18(R2)
CON8.15    GPD9               LCD_D17(R1)
CON8.16    GPD8               LCD_D16(R0)
CON8.17    GPA15              LCD_D15(G7)
CON8.18    GPA14              LCD_D14(G6)
CON8.19    GPA13              LCD_D13(G5)
CON8.20    GPA12              LCD_D12(G4)
CON8.21    GPA11              LCD_D11(G3)
CON8.22    GPA10              LCD_D10(G2)
CON8.23    GPA9               LCD_D9(G1)
CON8.24    GPA8               LCD_D8(G0)
CON8.25    GPA7               LCD_D7(B7)
CON8.26    GPA6               LCD_D6(B6)
CON8.27    GPA5               LCD_D5(B5)
CON8.28    GPA4               LCD_D4(B4)
CON8.29    GPA3               LCD_D3(B3)
CON8.30    GPA2               LCD_D2(B2)
CON8.31    GPA1               LCD_D1(B1)
CON8.32    GPA0               LCD_D0(B0)
CON8.33    -                  -
CON8.34    -                  -
CON8.35    -                  -
CON8.36    -                  -
CON8.37    GPB2               LCD_PWM
CON8.39    -                  VSS
CON8.40    -                  VSS
CON8.41    ADC7               XP
CON8.42    ADC3               Vsen
CON8.43    ADC6               XM
CON8.44    ADC4               YM
CON8.45    -                  -
CON8.46    ADC5               YP
CON8.47    -                  VSS
CON8.48    -                  VSS
CON8.49    GPG0               I2C0_C
CON8.50    GPG1               I2C0_D
CON8.51    GPG5               TOUCH_INT
CON8.52    -                  -
CON8.53    -                  -
CON8.54    -                  -
CON8.55    -                  -
CON8.56    -                  -
CON8.57    -                  -
CON8.58    -                  -
CON8.59    -                  VSS
CON8.60    -                  VSS
CON8.61    -                  -
CON8.62    -                  -
CON8.63    -                  Power 5V
CON8.64    -                  Power 5V
Table 2-1 LCD Panel Combination Connector (CON8) Pin Function

● Power supply switch (SW_POWER1): the system powers on when the SW_POWER1 button is pressed.

● Three indication LEDs:

LED   Color  Description
LED1  Red    System power is cut off and LED1 lights when the input voltage exceeds 5.7 V or the current exceeds 2 A
LED2  Green  Power normal state
LED3  Green  Controlled by the GPH2 pin
Table 2-2 Three Sets of Indication LED Functions

● Six user switches (K1~6) forming a key matrix for user definition (a scanning sketch is given at the end of this subsection):

Key  GPIO pins of N9H30
K1   GPF10 Row0, GPB4 Col0
K2   GPF10 Row0, GPB5 Col1
K3   GPE15 Row1, GPB4 Col0
K4   GPE15 Row1, GPB5 Col1
K5   GPE14 Row2, GPB4 Col0
K6   GPE14 Row2, GPB5 Col1
Table 2-3 Six Sets of User SW, Key Matrix Functions

● NAND Flash (128 MB): Winbond W29N01HVSINA (U9)
● Microphone (M1): sound input through the Nuvoton NAU88C22 codec
● Audio codec chip (U10): Nuvoton NAU88C22, connected to the N9H30 over the I2S interface
– SW6/SW7/SW8 set to 1-2: RS-485_6 function, routed to the 2P terminals (CON5 and J5)
– SW6/SW7/SW8 set to 2-3: I2S function, routed to the NAU88C22 (U10)
● CMOS sensor connector (CON10, SW9~10)
– SW9~10 set to 1-2: CAN_0 function, routed to the 2P terminal (CON11)
– SW9~10 set to 2-3: CMOS sensor function, routed to the CMOS sensor connector (CON10)

Connector  GPIO pin of N9H30  Function
CON10.1    -                  VSS
CON10.2    -                  VSS
CON10.3    -                  Power 3.3V
CON10.4    -                  Power 3.3V
CON10.5    -                  -
CON10.6    -                  -
CON10.7    GPI4               S_PCLK
CON10.8    GPI3               S_CLK
CON10.9    GPI8               S_D0
CON10.10   GPI9               S_D1
CON10.11   GPI10              S_D2
CON10.12   GPI11              S_D3
CON10.13   GPI12              S_D4
CON10.14   GPI13              S_D5
CON10.15   GPI14              S_D6
CON10.16   GPI15              S_D7
CON10.17   GPI6               S_VSYNC
CON10.18   GPI5               S_HSYNC
CON10.19   GPI0               S_PWDN
CON10.20   GPI7               S_nRST
CON10.21   GPG2               I2C1_C
CON10.22   GPG3               I2C1_D
CON10.23   -                  VSS
CON10.24   -                  VSS
Table 2-4 CMOS Sensor Connector (CON10) Function
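The key matrix in Table 2-3 shares two column lines (GPB4, GPB5) across three row lines (GPF10, GPE15, GPE14). The Python sketch below shows one conventional way to scan such a matrix. It is only an illustration: gpio_write and gpio_read are hypothetical placeholders for whatever GPIO API the N9H30 BSP provides, and the active-low row/column wiring is an assumption, not something stated in this manual.

    # Hypothetical key-matrix scan for Table 2-3 (K1~K6).
    # gpio_write/gpio_read are placeholder callables; the real N9H30 BSP
    # API and the active-low assumption must be checked against the BSP.
    ROWS = ["GPF10", "GPE15", "GPE14"]   # Row0..Row2
    COLS = ["GPB4", "GPB5"]              # Col0..Col1
    KEYS = {("GPF10", "GPB4"): "K1", ("GPF10", "GPB5"): "K2",
            ("GPE15", "GPB4"): "K3", ("GPE15", "GPB5"): "K4",
            ("GPE14", "GPB4"): "K5", ("GPE14", "GPB5"): "K6"}

    def scan_keys(gpio_write, gpio_read):
        """Drive one row low at a time and sample both columns."""
        pressed = []
        for row in ROWS:
            for r in ROWS:
                gpio_write(r, 1)             # deselect all rows
            gpio_write(row, 0)               # select this row (active low)
            for col in COLS:
                if gpio_read(col) == 0:      # column pulled low -> key down
                    pressed.append(KEYS[(row, col)])
        return pressed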
2.2 NuMaker-N9H30 Board - Rear View

Figure 2-2 Rear View of NuMaker-N9H30 Board (labelled: 5V in CON1; RS-232 DB9 CON2, CON6; expand port CON7; speaker output J4; earphone output CON4; buzzer BZ1; system reset SW5; SPI Flash U7, U8; JTAG ICE J2; power protection IC U1; N9H30F61IEC U5; micro SD slot CON3; RJ45 CON12, CON13; USB1 host CON15; USB0 device/host CON14; CAN_0 terminal CON11; CMOS sensor connector CON9; power-on setting SW4, S2~S9; RS-485_6 terminal CON5; RTC battery BT1; RMII PHY U14, U16)

Figure 2-2 shows the main components and connectors on the rear side of the NuMaker-N9H30 board. The following lists the components and connectors visible in the rear view:

● +5V in (CON1): 5 V input from the power adaptor
● JTAG ICE interface (J2):

Connector  GPIO pin of N9H30  Function
J2.1       -                  Power 3.3V
J2.2       GPJ4               nTRST
J2.3       GPJ2               TDI
J2.4       GPJ1               TMS
J2.5       GPJ0               TCK
J2.6       -                  VSS
J2.7       GPJ3               TDO
J2.8       -                  RESET
Table 2-5 JTAG ICE Interface (J2) Function

● SPI Flash (32 MB): Winbond W25Q256JVEQ (U7); only one of U7 or U8 can be used
● System reset (SW5): the system resets when the SW5 button is pressed
● Buzzer (BZ1): controlled by the GPB3 pin of the N9H30
● Speaker output (J4): sound output through the NAU88C22 chip
● Earphone output (CON4): sound output through the NAU88C22 chip
● Expand port for user use (CON7):

Connector  GPIO pin of N9H30  Function
CON7.1     -                  Power 3.3V
CON7.2     -                  Power 3.3V
CON7.3     GPE12              UART3_TXD
CON7.4     GPH4               UART1_TXD
CON7.5     GPE13              UART3_RXD
CON7.6     GPH5               UART1_RXD
CON7.7     GPB0               UART5_TXD
CON7.8     GPH6               UART1_RTS
CON7.9     GPB1               UART5_RXD
CON7.10    GPH7               UART1_CTS
CON7.11    GPI1               UART7_TXD
CON7.12    GPH8               UART4_TXD
CON7.13    GPI2               UART7_RXD
CON7.14    GPH9               UART4_RXD
CON7.15    -                  -
CON7.16    GPH10              UART4_RTS
CON7.17    -                  -
CON7.18    GPH11              UART4_CTS
CON7.19    -                  VSS
CON7.20    -                  VSS
CON7.21    GPB12              UART10_TXD
CON7.22    GPH12              UART8_TXD
CON7.23    GPB13              UART10_RXD
CON7.24    GPH13              UART8_RXD
CON7.25    GPB14              UART10_RTS
CON7.26    GPH14              UART8_RTS
CON7.27    GPB15              UART10_CTS
CON7.28    GPH15              UART8_CTS
CON7.29    -                  Power 5V
CON7.30    -                  Power 5V
Table 2-6 Expand Port (CON7) Function

● UART0 selection (CON2, J3):
– RS-232_0 function, connected to the DB9 female connector (CON2) for debug message output
– GPE0/GPE1 connected to the 2P terminal (J3)

Connector  GPIO pin of N9H30  Function
J3.1       GPE1               UART0_RXD
J3.2       GPE0               UART0_TXD
Table 2-7 UART0 (J3) Function

● UART2 selection (CON6, J6):
– RS-232_2 function, connected to the DB9 female connector (CON6)
– GPF11~14 connected to the 4P terminal (J6)

Connector  GPIO pin of N9H30  Function
J6.1       GPF11              UART2_TXD
J6.2       GPF12              UART2_RXD
J6.3       GPF13              UART2_RTS
J6.4       GPF14              UART2_CTS
Table 2-8 UART2 (J6) Function

● RS-485_6 selection (CON5, J5, SW6~8):
– SW6~8 set to 1-2: RS-485_6 function, routed to the 2P terminals (CON5 and J5)
– SW6~8 set to 2-3: I2S function, routed to the NAU88C22 (U10)

Connector       GPIO pin of N9H30  Function
SW6: 1-2 short  GPG11              RS-485_6_DI
SW6: 2-3 short                     I2S_DO
SW7: 1-2 short  GPG12              RS-485_6_RO
SW7: 2-3 short                     I2S_DI
SW8: 1-2 short  GPG13              RS-485_6_ENB
SW8: 2-3 short                     I2S_BCLK
Table 2-9 RS-485_6 (SW6~8) Function

● Power-on setting (SW4, S2~9):

SW           State    Function
SW4.2/SW4.1  ON/ON    Boot from USB
SW4.2/SW4.1  ON/OFF   Boot from eMMC
SW4.2/SW4.1  OFF/ON   Boot from NAND Flash
SW4.2/SW4.1  OFF/OFF  Boot from SPI Flash
Table 2-10 Power on Setting (SW4) Function

SW  State  Function
S2  Short  System clock from 12 MHz crystal
S2  Open   System clock from UPLL output
Table 2-11 Power on Setting (S2) Function

SW  State  Function
S3  Short  Watchdog Timer OFF
S3  Open   Watchdog Timer ON
Table 2-12 Power on Setting (S3) Function

SW  State  Function
S4  Short  GPJ[4:0] used as GPIO pins
S4  Open   GPJ[4:0] used as JTAG ICE interface
Table 2-13 Power on Setting (S4) Function

SW  State  Function
S5  Short  UART0 debug message ON
S5  Open   UART0 debug message OFF
Table 2-14 Power on Setting (S5) Function
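When S5 is shorted (Table 2-14), debug messages are emitted on UART0, which is routed to the CON2 DB9 port. A minimal host-side reader using the pyserial package is sketched below; the device path and the 115200 baud setting are assumptions (a common choice for debug consoles), not values taken from this manual.

    # Read the UART0 debug console from a host PC (pyserial).
    # '/dev/ttyUSB0' and 115200 baud are assumptions; adjust for your
    # adapter and board. pyserial defaults to 8N1 framing.
    import serial

    with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1) as port:
        while True:
            line = port.readline()           # one debug line, or b"" on timeout
            if line:
                print(line.decode(errors="replace").rstrip())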
SW     State        Function
S7/S6  Short/Short  NAND Flash page size 2 KB
S7/S6  Short/Open   NAND Flash page size 4 KB
S7/S6  Open/Short   NAND Flash page size 8 KB
S7/S6  Open/Open    Ignore
Table 2-15 Power on Setting (S7/S6) Function

SW     State        Function
S9/S8  Short/Short  NAND Flash ECC type BCH T12
S9/S8  Short/Open   NAND Flash ECC type BCH T15
S9/S8  Open/Short   NAND Flash ECC type BCH T24
S9/S8  Open/Open    Ignore
Table 2-16 Power on Setting (S9/S8) Function

● CMOS sensor connector (CON9, SW9~10):
– SW9~10 set to 1-2: CAN_0 function, routed to the 2P terminal (CON11)
– SW9~10 set to 2-3: CMOS sensor function, routed to the CMOS sensor connector (CON9)

Connector  GPIO pin of N9H30  Function
CON9.1     -                  VSS
CON9.2     -                  VSS
CON9.3     -                  Power 3.3V
CON9.4     -                  Power 3.3V
CON9.5     -                  -
CON9.6     -                  -
CON9.7     GPI4               S_PCLK
CON9.8     GPI3               S_CLK
CON9.9     GPI8               S_D0
CON9.10    GPI9               S_D1
CON9.11    GPI10              S_D2
CON9.12    GPI11              S_D3
CON9.13    GPI12              S_D4
CON9.14    GPI13              S_D5
CON9.15    GPI14              S_D6
CON9.16    GPI15              S_D7
CON9.17    GPI6               S_VSYNC
CON9.18    GPI5               S_HSYNC
CON9.19    GPI0               S_PWDN
CON9.20    GPI7               S_nRST
CON9.21    GPG2               I2C1_C
CON9.22    GPG3               I2C1_D
CON9.23    -                  VSS
CON9.24    -                  VSS
Table 2-17 CMOS Sensor Connector (CON9) Function

● CAN_0 selection (CON11, SW9~10):
– SW9~10 set to 1-2: CAN_0 function, routed to the 2P terminal (CON11)
– SW9~10 set to 2-3: CMOS sensor function, routed to the CMOS sensor connectors (CON9, CON10)

SW               GPIO pin of N9H30  Function
SW9: 1-2 short   GPI3               CAN_0_RXD
SW9: 2-3 short                      S_CLK
SW10: 1-2 short  GPI4               CAN_0_TXD
SW10: 2-3 short                     S_PCLK
Table 2-18 CAN_0 (SW9~10) Function

● USB0 Device/Host Micro-AB connector (CON14); CON14 pin 4 ID = 1 selects Device, ID = 0 selects Host
● USB1 Host Type-A connector (CON15)
● RJ45_0 connector with LED indicators (CON12), RMII PHY IP101GR (U14)
● RJ45_1 connector with LED indicators (CON13), RMII PHY IP101GR (U16)
● Micro-SD/TF card slot (CON3)
● SoC CPU: Nuvoton N9H30F61IEC (U5)
● RTC battery (BT1, J1), 3.3 V powered; the battery voltage can be measured via ADC0
● The RTC power has three sources:
– Shared with the 3.3 V I/O power
– Battery socket for a CR2032 cell (BT1)
– External connector (J1)
● Board version: 2.1

2.3 NuDesign-TFT-LCD7 - Front View

Figure 2-3 Front View of NuDesign-TFT-LCD7 Board

Figure 2-3 shows the main components and connectors on the front side of the NuDesign-TFT-LCD7 board: a 7" 800x480 panel with 4-wire resistive touch and a 24-bit RGB888 interface.

2.4 NuDesign-TFT-LCD7 - Rear View

Figure 2-4 Rear View of NuDesign-TFT-LCD7 Board

Figure 2-4 shows the main components and connectors on the rear side of the NuDesign-TFT-LCD7 board: the NuMaker-N9H30 and NuDesign-TFT-LCD7 combination connector (CON1).

2.5 NuMaker-N9H30 and NuDesign-TFT-LCD7 PCB Placement

Figure 2-5 Front View of NuMaker-N9H30 PCB Placement
Figure 2-6 Rear View of NuMaker-N9H30 PCB Placement
Figure 2-7 Front View of NuDesign-TFT-LCD7 PCB Placement
Figure 2-8 Rear View of NuDesign-TFT-LCD7 PCB Placement

3 NUMAKER-N9H30 AND NUDESIGN-TFT-LCD7 SCHEMATICS

3.1 NuMaker-N9H30 - GPIO List Circuit
Figure 3-1 shows the N9H30F61IEC GPIO list circuit.
Figure 3-1 GPIO List Circuit

3.2 NuMaker-N9H30 - System Block Circuit
Figure 3-2 shows the system block circuit.
Figure 3-2 System Block Circuit

3.3 NuMaker-N9H30 - Power Circuit
Figure 3-3 shows the power circuit.
Figure 3-3 Power Circuit

3.4 NuMaker-N9H30 - N9H30F61IEC Circuit
Figure 3-4 shows the N9H30F61IEC circuit.
Figure 3-4 N9H30F61IEC Circuit

3.5 NuMaker-N9H30 - Setting, ICE, RS-232_0, Key Circuit
Figure 3-5 shows the setting, ICE,
RS-232_0 and key circuit.
Figure 3-5 Setting, ICE, RS-232_0, Key Circuit

3.6 NuMaker-N9H30 - Memory Circuit
Figure 3-6 shows the memory circuit.
Figure 3-6 Memory Circuit

3.7 NuMaker-N9H30 - I2S, I2C_0, RS-485_6 Circuit
Figure 3-7 shows the I2S, I2C_0 and RS-485_6 circuit.
Figure 3-7 I2S, I2C_0, RS-485_6 Circuit

3.8 NuMaker-N9H30 - RS-232_2 Circuit
Figure 3-8 shows the RS-232_2 circuit.
Figure 3-8 RS-232_2 Circuit

3.9 NuMaker-N9H30 - LCD Circuit
Figure 3-9 shows the LCD circuit.
Figure 3-9 LCD Circuit

3.10 NuMaker-N9H30 - CMOS Sensor, I2C_1, CAN_0 Circuit
Figure 3-10 shows the CMOS sensor, I2C_1 and CAN_0 circuit.
Figure 3-10 CMOS Sensor, I2C_1, CAN_0 Circuit

3.11 NuMaker-N9H30 - RMII_0_PF Circuit
Figure 3-11 shows the RMII_0_PF circuit.
Figure 3-11 RMII_0_PF Circuit

3.12 NuMaker-N9H30 - RMII_1_PE Circuit
Figure 3-12 shows the RMII_1_PE circuit.
Figure 3-12 RMII_1_PE Circuit

3.13 NuMaker-N9H30 - USB Circuit
Figure 3-13 shows the USB circuit.
Figure 3-13 USB Circuit

3.14 NuDesign-TFT-LCD7 - TFT-LCD7 Circuit
Figure 3-14 shows the TFT-LCD7 circuit.
Figure 3-14 TFT-LCD7 Circuit

4 REVISION HISTORY

Date        Revision  Description
2022.03.24  1.00      Initial version

Important Notice

Nuvoton products are neither intended nor warranted for usage in systems or equipment, any malfunction or failure of which may cause loss of human life, bodily injury or severe property damage. Such applications are deemed "Insecure Usage".

Insecure Usage includes, but is not limited to: equipment for surgical implementation, atomic energy control instruments, airplane or spaceship instruments, the control or operation of dynamic, brake or safety systems designed for vehicular use, traffic signal instruments, all types of safety devices, and other applications intended to support or sustain life.

All Insecure Usage shall be made at the customer's risk, and in the event that third parties lay claims against Nuvoton as a result of the customer's Insecure Usage, the customer shall indemnify the damages and liabilities thus incurred by Nuvoton.

Papercutting (English Essay)

Papercutting is an ancient and intricate art form that has been practiced for centuries in various cultures around the world, particularly in China, where it is known as Jianzhi. This art form involves cutting designs or patterns into paper using various tools, such as scissors or knives. It is a popular folk art that is often used for decorative purposes and is commonly associated with festivals and celebrations.

Introduction to Papercutting: Papercutting is a traditional craft that has been passed down through generations. It is a form of visual storytelling, where artists use paper as their canvas and their tools as their brushes. The art form is known for its precision and the ability to create detailed and complex designs from a single sheet of paper.

Historical Background: The origins of papercutting can be traced back to the 6th century in China. It was initially used for religious and ceremonial purposes, such as adorning doors and windows during the Chinese New Year to ward off evil spirits. Over time, the art form evolved and became a popular form of folk art, with each region developing its unique styles and motifs.

Techniques and Tools: The basic tools for papercutting include a sharp pair of scissors and a small, sharp knife. More advanced artists may also use specialized tools like chisels and punches to create intricate designs. The paper used can vary from simple red paper, which is traditional, to more colorful and textured papers.

Designs and Motifs: Papercutting designs often feature traditional motifs that symbolize good fortune, happiness, and longevity. Common themes include animals like dragons and phoenixes, which represent power and grace, and plants like peonies, which symbolize wealth and honor. Other designs may depict scenes from folklore, historical events, or everyday life.

Cultural Significance: Papercutting holds a significant place in various cultural celebrations. During the Chinese New Year, red papercuttings are hung in homes and public spaces to bring good luck and ward off evil spirits. In weddings, papercuttings are used to decorate the wedding chamber and symbolize the union of the couple. It is also used in funerals to honor the deceased and guide their spirit to the afterlife.

Modern Developments: In contemporary times, papercutting has gained international recognition and has been adapted by artists worldwide. Modern papercutting artists experiment with new themes, materials, and techniques, pushing the boundaries of this traditional art form. It has also been integrated into various other art forms, such as fashion, where papercut designs are used to create unique textiles, and animation, where the art form is brought to life through digital means.

Conclusion: Papercutting is more than just a craft; it is a cultural treasure that reflects the creativity and heritage of the people who practice it. As it continues to evolve, papercutting remains a testament to the enduring appeal of traditional art forms in the modern world.

Future Prospects: The future of papercutting looks promising as it continues to be embraced by new generations and cultures. With the rise of social media, papercutting has found a new platform to showcase its beauty and complexity, attracting a global audience. As artists continue to innovate and share their work, papercutting is poised to maintain its relevance and charm in the world of art and culture.

Attribute-Enhanced Face Recognition with Neural Tensor Fusion Networks (Paper)

Attribute-Enhanced Face Recognition with Neural Tensor Fusion Networks

Guosheng Hu1, Yang Hua1,2, Yang Yuan1, Zhihong Zhang3, Zheng Lu1, Sankha S. Mukherjee1, Timothy M. Hospedales4, Neil M. Robertson1,2, Yongxin Yang5,6
1 AnyVision  2 Queen's University Belfast  3 Xiamen University  4 The University of Edinburgh  5 Queen Mary University of London  6 Yang's Accounting Consultancy Ltd
{guosheng.hu, yang.hua, yuany, steven, rick}@, N.Robertson@, zhihong@, t.hospedales@, yongxin@yang.ac

Abstract

Deep learning has achieved great success in face recognition; however, deep-learned features still have limited invariance to strong intra-personal variations such as large pose changes. It is observed that some facial attributes (e.g. eyebrow thickness, gender) are robust to such variations. We present the first work to systematically explore how the fusion of face recognition features (FRF) and facial attribute features (FAF) can enhance face recognition performance in various challenging scenarios. Despite the promise of FAF, we find that in practice existing fusion methods fail to leverage FAF to boost face recognition performance in some challenging scenarios. Thus, we develop a powerful tensor-based framework which formulates feature fusion as a tensor optimisation problem. It is non-trivial to directly optimise this tensor due to the large number of parameters to optimise. To solve this problem, we establish a theoretical equivalence between low-rank tensor optimisation and a two-stream gated neural network. This equivalence allows tractable learning using standard neural network optimisation tools, leading to accurate and stable optimisation. Experimental results show the fused feature works better than individual features, thus proving for the first time that facial attributes aid face recognition. We achieve state-of-the-art performance on three popular databases: MultiPIE (cross pose, lighting and expression), CASIA NIR-VIS 2.0 (cross-modality environment) and LFW (uncontrolled environment).

1. Introduction

Face recognition has advanced dramatically with the advent of bigger datasets, and improved methodologies for generating features that are variant to identity but invariant to covariates such as pose, expression and illumination.
Figure 1: A sample attribute list is given (col. 1) which pertains to the images of the same individual at different poses (col. 2). While the similarity scores for each dimension vary in the face recognition feature (FRF) set (col. 3), the face attribute feature (FAF) set (col. 4) remains very similar. The fused features (col. 5) are more similar and a higher similarity score (0.89) is achieved.

Deep learning methodologies [41, 40, 42, 32] have proven particularly effective recently, thanks to end-to-end representation learning with a discriminative face recognition objective. Nevertheless, the resulting features still show imperfect invariance to the strong intra-personal variations in real-world scenarios. We observe that facial attributes provide a robust invariant cue in such challenging scenarios. For example, gender and ethnicity are likely to be invariant to pose and expression, while eyebrow thickness may be invariant to lighting and resolution. Overall, face recognition features (FRF) are very discriminative but less robust, while facial attribute features (FAF) are robust but less discriminative. Thus these two features are potentially complementary, if a suitable fusion method can be devised. To the best of our knowledge, we are the first to systematically explore the fusion of FAF and FRF in various face recognition scenarios. We empirically show that this fusion can greatly enhance face recognition performance.

Though facial attributes are an important cue for face recognition, in practice we find that the existing fusion methods, including early (feature) and late (score) fusion, cannot reliably improve the performance [34]. In particular, while offering some robustness, FAF is generally less discriminative than FRF. Existing methods cannot synergistically fuse such asymmetric features, and usually lead to worse performance than achieved by the stronger feature (FRF) only. In this work, we propose a novel tensor-based fusion framework that is uniquely capable of fusing the very asymmetric FAF and FRF. Our framework provides a more powerful and robust fusion approach than existing strategies by learning from all interactions between the two feature views. To train the tensor in a tractable way given the large number of required parameters, we formulate the optimisation with an identity-supervised objective by constraining the tensor to have a low-rank form. We establish an equivalence between this low-rank tensor and a two-stream gated neural network. Given this equivalence, the proposed tensor is easily optimised with standard deep neural network toolboxes.
Our technical contributions are:

• This is the first work to systematically investigate and verify that facial attributes are an important cue in various face recognition scenarios. In particular, we investigate face recognition with extreme pose variations, i.e. ±90° from frontal, showing that attributes are important for performance enhancement.

• A rich tensor-based fusion framework is proposed. We show that the low-rank Tucker decomposition of this tensor-based fusion has an equivalent Gated Two-stream Neural Network (GTNN), allowing easy yet effective optimisation by neural network learning. In addition, we bring insights from neural networks into the field of tensor optimisation. The code is available: https:///yanghuadr/Neural-Tensor-Fusion-Network

• We achieve state-of-the-art face recognition performance using the fusion of face (our newly designed 'LeanFace' deep learning feature) and attribute-based features on three popular databases: MultiPIE (controlled environment), CASIA NIR-VIS 2.0 (cross-modality environment) and LFW (uncontrolled environment).

2. Related Work

Face Recognition. The face representation (feature) is the most important component in contemporary face recognition systems. There are two types: hand-crafted and deep learning features. Widely used hand-crafted face descriptors include Local Binary Pattern (LBP) [26], Gabor filters [23], etc. Compared to pixel values, these features are variant to identity and relatively invariant to intra-personal variations, and thus they achieve promising performance in controlled environments. However, they perform less well on face recognition in uncontrolled environments (FRUE). There are two main routes to improve FRUE performance with hand-crafted features: one is to use very high dimensional features (dense sampling features) [5], and the other is to enhance the features with downstream metric learning.

Unlike hand-crafted features, where (in)variances are engineered, deep learning features learn the (in)variances from data. Recently, convolutional neural networks (CNNs) achieved impressive results on FRUE. DeepFace [44], a carefully designed 8-layer CNN, is an early landmark method. Another well-known line of work is DeepID [41] and its variants DeepID2 [40] and DeepID2+ [42]. The DeepID family uses an ensemble of many small CNNs trained independently using different facial patches to improve the performance. In addition, some CNNs originally designed for object recognition, such as VGGNet [38] and Inception [43], were also used for face recognition [29, 32]. Most recently, a center loss [47] was introduced to learn more discriminative features.

Facial Attribute Recognition. Facial attribute recognition (FAR) is also well studied. A notable early study [21] extracted carefully designed hand-crafted features, including aggregations of colour spaces and image gradients, before training an independent SVM to detect each attribute.
As for face recognition, deep learning features now outperform hand-crafted features for FAR. In [24], face detection and attribute recognition CNNs are carefully designed, and the output of the face detection network is fed into the attribute network. An alternative to purpose-designing CNNs for FAR is to fine-tune networks intended for object recognition [56, 57]. From a representation learning perspective, the features supporting different attribute detections may be shared, leading some studies to investigate multi-task learning of facial attributes [55, 30]. Since different facial attributes have different prevalence, multi-label/multi-task learning suffers from label imbalance, which [30] addresses using a mixed objective optimization network (MOON).

Face Recognition using Facial Attributes. Detected facial attributes can be applied directly to authentication. Facial attributes have been applied to enhance face verification, primarily in the case of cross-modal matching, by filtering [19, 54] (requiring potential FRF matches to have the correct gender, for example), model switching [18], or aggregation with conventional features [27, 17]. [21] defines 65 facial attributes and proposes binary attribute classifiers to predict their presence or absence. The vector of attribute classifier scores can then be used for face recognition. There has been little work on attribute-enhanced face recognition in the context of deep learning. One of the few exploits CNN-based attribute features for authentication on mobile devices [31]: local facial patches are fed into carefully designed CNNs to predict different attributes; after CNN training, SVMs are trained for attribute recognition, and the vector of SVM scores provides the new feature for face verification.

Fusion Methods. Existing fusion approaches can be classified into feature-level (early fusion) and score-level (late fusion). Score-level fusion fuses the similarity scores after computation based on each view, either by simple averaging [37] or by stacking another classifier [48, 37]. Feature-level fusion can be achieved by either simple feature aggregation or subspace learning. For aggregation approaches, fusion is usually performed by simple element-wise averaging or product (the dimensions of the features have to be the same) or concatenation [28]. For subspace learning approaches, the features are first concatenated, then the concatenated feature is projected to a subspace in which the features should better complement each other. These subspace approaches can be unsupervised or supervised. Unsupervised fusion does not use the identity (label) information to learn the subspace, e.g. Canonical Correlation Analysis (CCA) [35] and Bilinear Models (BLM) [45]. In comparison, supervised fusion uses the identity information, e.g. Linear Discriminant Analysis (LDA) [3] and Locality Preserving Projections (LPP) [9].

Neural Tensor Methods. Learning tensor-based computations within neural networks has been studied for full [39] and decomposed [16, 52, 51] tensors. However, aside from differing applications and objectives, the key difference is that we establish a novel equivalence between a rich Tucker [46] decomposed low-rank fusion tensor and a gated two-stream neural network. This allows us to achieve expressive fusion while maintaining tractable computation and a small number of parameters, and crucially permits easy optimisation of the fusion tensor through standard toolboxes.
Motivation. Facial attribute features (FAF) and face recognition features (FRF) are complementary. However, in practice we find that existing fusion methods often cannot effectively combine these asymmetric features so as to improve performance. This motivates us to design a more powerful fusion method, as detailed in Section 3. Based on our neural tensor fusion method, in Section 5 we systematically explore the fusion of FAF and FRF in various face recognition environments, showing that FAF can greatly enhance recognition performance.

3. Fusing attribute and recognition features

In this section we present our strategy for fusing FAF and FRF. Our goal is to input FAF and FRF and output the fused discriminative feature. The fusion method presented here performs significantly better than the existing ones introduced in Section 2. Below we detail our tensor-based fusion strategy.

3.1. Modelling

Single Feature. We start from a standard multi-class classification problem setting: assume we have M instances, and for each we extract a D-dimensional feature vector (the FRF) as {x^(i)} for i = 1, ..., M. The label space contains C unique classes (person identities), so each instance is associated with a corresponding C-dimensional one-hot encoded label vector {y^(i)}. Assuming a linear model W, the prediction \hat{y}^{(i)} is produced by the dot product of the input x^(i) and the model W:

    \hat{y}^{(i)} = x^{(i)\top} W.    (1)

Multiple Features. Suppose that apart from the D-dimensional FRF vector, we can also obtain an instance-wise B-dimensional facial attribute feature z^(i). Then the input for the i-th instance is a pair {x^(i), z^(i)}. A simple approach is to redefine x^(i) := [x^(i), z^(i)] and directly apply Eq. (1), thus modelling weights for both FRF and FAF features. Here we propose instead a non-linear fusion method via the following formulation:

    \hat{y}^{(i)} = \mathcal{W} \times_1 x^{(i)} \times_3 z^{(i)}    (2)

where \mathcal{W} is the fusion model parameter in the form of a third-order tensor of size D x C x B. The notation \times_n is the tensor dot product (also known as tensor contraction), and the subscript indicates the axis along which the contraction operates. With Eq. (2), the optimisation problem is formulated as:

    \min_{\mathcal{W}} \frac{1}{M} \sum_{i=1}^{M} \ell\big(\mathcal{W} \times_1 x^{(i)} \times_3 z^{(i)},\; y^{(i)}\big)    (3)

where \ell(.,.) is a loss function. This trains the tensor \mathcal{W} to fuse FRF and FAF features so that identity is correctly predicted.
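As a concrete illustration of Eq. (2), the following NumPy sketch (not from the paper; the sizes are small arbitrary values, whereas the paper uses D = B = 256 and C equal to the number of training identities) evaluates the bilinear prediction with a full fusion tensor. The D*C*B parameter count it exposes is exactly what Section 3.2 sets out to reduce.

    # Eq. (2): y_hat = W x_1 x x_3 z with a full D x C x B fusion tensor.
    # einsum contracts mode 1 (d) with x and mode 3 (b) with z, leaving
    # a C-dimensional score vector, one score per identity.
    import numpy as np

    D, C, B = 64, 100, 64
    rng = np.random.default_rng(0)
    W = rng.standard_normal((D, C, B))   # full fusion tensor: D*C*B parameters
    x = rng.standard_normal(D)           # face recognition feature (FRF)
    z = rng.standard_normal(B)           # facial attribute feature (FAF)

    y_hat = np.einsum("dcb,d,b->c", W, x, z)
    print(y_hat.shape)                   # (100,)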
3.2. Optimisation

The proposed tensor \mathcal{W} provides a rich fusion model. However, compared with W, \mathcal{W} is B times larger (D x C versus D x C x B) because of the introduction of the B-dimensional attribute vector. It is also almost B times larger than training a matrix W on the concatenation [x^(i), z^(i)]. It is therefore problematic to directly optimise Eq. (3), because the large number of parameters of \mathcal{W} makes training slow and leads to overfitting. To address this, we propose a tensor decomposition technique and a neural network architecture that solve an equivalent optimisation problem, described in the following two subsections.

3.2.1 Tucker Decomposition for Feature Fusion

To reduce the number of parameters of \mathcal{W}, we place a structural constraint on it. Motivated by the famous Tucker decomposition [46] for tensors, we assume that \mathcal{W} is synthesised from

    \mathcal{W} = \mathcal{S} \times_1 U^{(D)} \times_2 U^{(C)} \times_3 U^{(B)}.    (4)

Here \mathcal{S} is a third-order tensor of size K_D x K_C x K_B, U^{(D)} is a matrix of size K_D x D, U^{(C)} is a matrix of size K_C x C, and U^{(B)} is a matrix of size K_B x B. By restricting K_D << D, K_C << C and K_B << B, we effectively reduce the number of parameters from D x C x B to K_D K_C K_B + K_D D + K_C C + K_B B if we learn {\mathcal{S}, U^{(D)}, U^{(C)}, U^{(B)}} instead of \mathcal{W}. When \mathcal{W} is needed for making predictions, we can always synthesise it from those four small factors. In the context of tensor decomposition, (K_D, K_C, K_B) is usually called the tensor's rank, an analogous concept to the rank of a matrix in matrix decomposition.

Note that, despite the existence of other tensor decomposition choices, Tucker decomposition offers greater flexibility in terms of modelling, because we have three hyper-parameters K_D, K_C, K_B corresponding to the axes of the tensor. In contrast, the other famous decomposition, CP [10], has one hyper-parameter K for all axes of the tensor.

By substituting Eq. (4) into Eq. (2), we have

    \hat{y}^{(i)} = \mathcal{S} \times_1 U^{(D)} \times_2 U^{(C)} \times_3 U^{(B)} \times_1 x^{(i)} \times_3 z^{(i)}.    (5)

Through some rearrangement, Eq. (5) can be simplified to

    \hat{y}^{(i)} = \mathcal{S} \times_1 (U^{(D)} x^{(i)}) \times_2 U^{(C)} \times_3 (U^{(B)} z^{(i)}).    (6)

Furthermore, we can rewrite Eq. (6) as

    \hat{y}^{(i)} = \big((U^{(D)} x^{(i)}) \otimes (U^{(B)} z^{(i)})\big) S_{(2)}^{\top} U^{(C)},    (7)

where \otimes is the Kronecker product and the term \big((U^{(D)} x^{(i)}) \otimes (U^{(B)} z^{(i)})\big) S_{(2)}^{\top} is the fused feature. Since U^{(D)} x^{(i)} and U^{(B)} z^{(i)} result in K_D- and K_B-dimensional vectors respectively, (U^{(D)} x^{(i)}) \otimes (U^{(B)} z^{(i)}) produces a K_D K_B vector. S_{(2)} is the mode-2 unfolding of \mathcal{S}, which is a K_C x K_D K_B matrix, and its transpose S_{(2)}^{\top} is a matrix of size K_D K_B x K_C.

The Fused Feature. From Eq. (7), the explicit fused representation of the face recognition (x^(i)) and facial attribute (z^(i)) features is obtained. The fused feature \big((U^{(D)} x^{(i)}) \otimes (U^{(B)} z^{(i)})\big) S_{(2)}^{\top} is a vector of dimensionality K_C, and the matrix U^{(C)} plays the role of a "classifier" given this fused feature. Given {x^(i), z^(i), y^(i)}, the matrices {U^{(D)}, U^{(B)}, U^{(C)}} and the tensor \mathcal{S} are computed (learned) during model optimisation (training). During testing, the prediction \hat{y}^{(i)} is obtained with the learned {U^{(D)}, U^{(B)}, U^{(C)}, \mathcal{S}} and the two test features {x^(i), z^(i)}, following Eq. (7).
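The NumPy sketch below (again with small illustrative sizes and ranks, not the paper's settings) synthesises the full tensor from the Tucker factors of Eq. (4) and checks that the factorised path of Eq. (7) gives the same prediction, making the parameter saving explicit.

    # Low-rank fusion, Eq. (4)-(7): the factorised path must match the
    # full-tensor path. Sizes/ranks are illustrative only.
    import numpy as np

    D, C, B = 64, 100, 64
    KD, KC, KB = 8, 16, 8
    rng = np.random.default_rng(0)
    S = rng.standard_normal((KD, KC, KB))      # core tensor
    UD = rng.standard_normal((KD, D))          # U(D)
    UC = rng.standard_normal((KC, C))          # U(C), the "classifier"
    UB = rng.standard_normal((KB, B))          # U(B)
    x = rng.standard_normal(D)                 # FRF
    z = rng.standard_normal(B)                 # FAF

    # Eq. (7): Kronecker of the projections, then S_(2)^T, then U(C).
    S2 = np.transpose(S, (1, 0, 2)).reshape(KC, KD * KB)  # mode-2 unfolding
    fused = np.kron(UD @ x, UB @ z) @ S2.T     # K_C-dim fused feature
    y_low_rank = fused @ UC

    # Eq. (4) then Eq. (2): synthesise the full W and contract directly.
    W = np.einsum("pqr,pd,qc,rb->dcb", S, UD, UC, UB)
    y_full = np.einsum("dcb,d,b->c", W, x, z)
    print(np.allclose(y_low_rank, y_full))     # True: the two paths agree
    # Parameters: full tensor 64*100*64 = 409600, versus factors
    # 8*16*8 + 8*64 + 16*100 + 8*64 = 3648.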
3.2.2 Gated Two-stream Neural Network (GTNN)

A key advantage of reformulating Eq. (5) into Eq. (7) is that we can now find a neural network architecture that does exactly the computation of Eq. (7), which would not be obvious if we stopped at Eq. (5). Before presenting this neural network, we need to introduce a new deterministic layer (i.e. one without any learnable parameters).

The Kronecker Product Layer takes two arbitrary-length input vectors {u, v}, where u = [u_1, u_2, ..., u_P] and v = [v_1, v_2, ..., v_Q], and outputs a vector of length PQ as [u_1 v_1, u_1 v_2, ..., u_1 v_Q, u_2 v_1, ..., u_P v_Q].

Figure 2: Gated two-stream neural network to implement low-rank tensor-based fusion. The architecture computes Eq. (7), with the Tucker decomposition in Eq. (4). The network is identity-supervised at train time, and the feature in the fusion layer is used as the representation for verification.

Using the introduced Kronecker layer, Fig. 2 shows the neural network that computes Eq. (7) — that is, the neural network that performs recognition using tensor-based fusion of two features (such as FAF and FRF), based on the low-rank assumption in Eq. (4). We denote this architecture a Gated Two-stream Neural Network (GTNN), because it takes two streams of inputs and performs gating [36] (multiplicative) operations on them. The GTNN is trained in a supervised fashion to predict identity. In this work, we use a multi-task loss: softmax loss and center loss [47] for joint training. From the viewpoint of the GTNN, the fused feature is the output of the penultimate layer, which is of dimensionality K_C.

So far, the advantage of using the GTNN is obvious. Direct use of Eq. (5) or Eq. (7) requires manual derivation and implementation of an optimiser, which is non-trivial even for decomposed matrices (2D tensors) [20]. In contrast, the GTNN is easily implemented with modern deep learning packages, where auto-differentiation and gradient-based optimisation are handled robustly and automatically.
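A compact PyTorch sketch of the GTNN of Fig. 2 follows: two linear streams, a parameter-free batched Kronecker (outer-product) layer, and the linear "classifier" U(C). It is a sketch under assumed sizes (D = B = 256 as in the paper; the ranks are arbitrary), and the paper's joint softmax/center-loss training is replaced by a plain cross-entropy placeholder.

    # Gated Two-stream Neural Network (Fig. 2), minimal PyTorch sketch.
    import torch
    import torch.nn as nn

    class GTNN(nn.Module):
        def __init__(self, D=256, B=256, C=1000, KD=16, KB=16, KC=32):
            super().__init__()
            self.u_d = nn.Linear(D, KD, bias=False)        # U(D)
            self.u_b = nn.Linear(B, KB, bias=False)        # U(B)
            self.s = nn.Linear(KD * KB, KC, bias=False)    # S_(2)^T
            self.u_c = nn.Linear(KC, C, bias=False)        # U(C), classifier

        def forward(self, x, z):
            a, b = self.u_d(x), self.u_b(z)                # project both streams
            # Kronecker layer (no learnable parameters): per-sample outer
            # product of the two projections, flattened to K_D*K_B.
            k = (a.unsqueeze(2) * b.unsqueeze(1)).flatten(1)
            fused = self.s(k)                              # K_C-dim fused feature
            return self.u_c(fused), fused                  # logits + representation

    net = GTNN()
    x = torch.randn(8, 256)                                # batch of FRF vectors
    z = torch.randn(8, 256)                                # batch of FAF vectors
    logits, fused = net(x, z)
    loss = nn.functional.cross_entropy(logits, torch.randint(0, 1000, (8,)))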
Supervised method.GTNN isflexibly supervised by any desired neural network loss function.For example,the fusion method can be trained with losses known to be ef-fective for face representation learning:identity-supervised softmax,and centre-loss[47].Alternative methods are ei-ther unsupervised[8,27],constrained in the types of super-vision they can exploit[3,17],or only stack scores rather than improving a learned representation[48,37].There-fore,they are relatively ineffective at learning how to com-bine the two-source information in a task-specific way. Extensibility.Our GTNN naturally can be extended to deeper architectures.For example,the pre-extracted fea-tures,i.e.,x and z in Fig.2,can be replaced by two full-sized CNNs without any modification.Therefore,poten-tially,our methods can be integrated into an end-to-end framework.4.Integration with CNNs:architectureIn this section,we introduce the CNN architectures used for face recognition(LeanFace)designed by ourselves and facial attribute recognition(AttNet)introduced by[50,30]. LeanFace.Unlike general object recognition,face recognition has to capture very subtle difference between people.Motivated by thefine-grain object recognition in [4],we also use a large number of convolutional layers at early stage to capture the subtle low level and mid-level in-formation.Our activation function is maxout,which shows better performance than its competitors[50].Joint supervi-sion of softmax loss and center loss[47]is used for training. The architecture is summarised in Fig.3.AttNet.To detect facial attributes,our AttNet uses the ar-chitecture of Lighten CNN[50]to represent a face.Specifi-cally,AttNet consists of5conv-activation-pooling units fol-lowed by a256D fully connected layer.The number of con-volutional kernels is explained in[50].The activation func-tion is Max-Feature-Map[50]which is a variant of maxout. We use the loss function MOON[30],which is a multi-task loss for(1)attribute classification and(2)domain adaptive data balance.In[24],an ontology of40facial attributes are defined.We remove attributes which do not characterise a specific person,e.g.,‘wear glasses’and‘smiling’,leaving 17attributes in total.Once each network is trained,the features extracted from the penultimate fully-connected layers of LeanFace(256D) and AttNet(256D)are extracted as x and z,and input to GTNN for fusion and then face recognition.5.ExperimentsWefirst introduce the implementation details of our GTNN method.In Section5.1,we conduct experiments on MultiPIE[7]to show that facial attributes by means of our GTNN method can play an important role on improv-Table1:Network training detailsImage size BatchsizeLR1DF2EpochTraintimeLeanFace128x1282560.0010.15491hAttNet0.050.8993h1Learning rate(LR)2Learning rate drop factor(DF).ing face recognition performance in the presence of pose, illumination and expression,respectively.Then,we com-pare our GTNN method with other fusion methods on CA-SIA NIR-VIS2.0database[22]in Section5.2and LFW database[12]in Section5.3,respectively. Implementation Details.In this study,three networks (LeanFace,AttNet and GTNN)are discussed.LeanFace and AttNet are implemented using MXNet[6]and GTNN uses TensorFlow[1].We use around6M training face thumbnails covering62K different identities to train Lean-Face,which has no overlapping with all the test databases. 
AttNet is trained using CelebA[24]database.The input of GTNN is two256D features from bottleneck layers(i.e., fully connected layers before prediction layers)of LeanFace and AttNet.The setting of main parameters are shown in Table1.Note that the learning rates drop when the loss stops decreasing.Specifically,the learning rates change4 and2times for LeanFace and AttNet respectively.Dur-ing test,LeanFace and AttNet take around2.9ms and3.2ms to extract feature from one input image and GTNN takes around2.1ms to fuse one pair of LeanFace and AttNet fea-ture using a GTX1080Graphics Card.5.1.Multi-PIE DatabaseMulti-PIE database[7]contains more than750,000im-ages of337people recorded in4sessions under diverse pose,illumination and expression variations.It is an ideal testbed to investigate if facial attribute features(FAF) complement face recognition features(FRF)including tra-ditional hand-crafted(LBP)and deeply learned features (LeanFace)to improve the face recognition performance–particularly across extreme pose variation.Settings.We conduct three experiments to investigate pose-,illumination-and expression-invariant face recogni-tion.Pose:Uses images across4sessions with pose vari-ations only(i.e.,neutral lighting and expression).It covers pose with yaw ranging from left90◦to right90◦.In com-parison,most of the existing works only evaluate perfor-mance on poses with yaw range(-45◦,+45◦).Illumination: Uses images with20different illumination conditions(i.e., frontal pose and neutral expression).Expression:Uses im-ages with7different expression variations(i.e.,frontal pose and neutral illumination).The training sets of all settings consist of the images from thefirst200subjects and the re-maining137subjects for testing.Following[59,14],in the test set,frontal images with neural illumination and expres-sion from the earliest session work as gallery,and the others are probes.Pose.Table2shows the pose-robust face recognition (PRFR)performance.Clearly,the fusion of FRF and FAF, namely GTNN(LBP,AttNet)and GTNN(LeanFace,At-tNet),works much better than using FRF only,showing the complementary power of facial features to face recognition features.Not surprisingly,the performance of both LBP and LeanFace features drop greatly under extreme poses,as pose variation is a major factor challenging face recognition performance.In contrast,with GTNN-based fusion,FAF can be used to improve both classic(LBP)and deep(Lean-Face)FRF features effectively under this circumstance,for example,LBP(1.3%)vs GTNN(LBP,AttNet)(16.3%), LeanFace(72.0%)vs GTNN(LeanFace,AttNet)(78.3%) under yaw angel−90◦.It is noteworthy that despite their highly asymmetric strength,GTNN is able to effectively fuse FAF and FRF.This is elaborately studied in more detail in Sections5.2-5.3.Compared with state-of-the-art methods[14,59,11,58, 15]in terms of(-45◦,+45◦),LeanFace achieves better per-formance due to its big training data and the strong gener-alisation capacity of deep learning.In Table2,2D meth-ods[14,59,15]trained models using the MultiPIE images, therefore,they are difficult to generalise to images under poses which do not appear in MultiPIE database.3D meth-ods[11,58]highly depend on accurate2D landmarks for 3D-2D modellingfitting.However,it is hard to accurately detect such landmarks under larger poses,limiting the ap-plications of3D methods.Illumination and expression.Illumination-and expression-robust face recognition(IRFR and ERFR)are also challenging research topics.LBP is the most widely used handcrafted features for IRFR[2]and 
ERFR[33].To investigate the helpfulness of facial attributes,experiments of IRFR and ERFR are conducted using LBP and Lean-Face features.In Table3,GTNN(LBP,AttNet)signifi-cantly outperforms LBP,80.3%vs57.5%(IRFR),77.5% vs71.7%(ERFR),showing the great value of combining fa-cial attributes with hand-crafted features.Attributes such as the shape of eyebrows are illumination invariant and others, e.g.,gender,are expression invariant.In contrast,LeanFace feature is already very discriminative,saturating the perfor-mance on the test set.So there is little room for fusion of AttrNet to provide benefit.5.2.CASIA NIR-VIS2.0DatabaseThe CASIA NIR-VIS2.0face database[22]is the largest public face database across near-infrared(NIR)images and visible RGB(VIS)images.It is a typical cross-modality or heterogeneous face recognition problem because the gallery and probe images are from two different spectra.The。


Acta Crystallographica Section D
Biological Crystallography
ISSN 0907-4449

Notes for authors 2003

1. Scientific scope

Section D of Acta Crystallographica welcomes the submission of papers covering any aspect of structural biology, particularly structures of biological macromolecules. In addition to new structure determinations, preliminary data on unit-cell dimensions and space groups will be considered for publication, provided suitable diffraction photographs (or their equivalent), together with an estimate of resolution, are included. Also, articles on crystal growth of biological macromolecules are welcomed, and refinements of known structures may be published if the information content warrants it. For all structural papers, sufficient evidence should be provided to convince the referees that the interpretations of the diffraction data and electron-density maps are correct, within the resolution of the analysis.

2. Categories of contributions

Contributions should conform to the general editorial style of the journal.

2.1. Research Papers

Full-length Research Papers should not normally exceed 15 journal pages (about 15000 words).

2.2. Short Communications

Short Communications are intended for the presentation of topics of limited scope, or for preliminary announcements of novel research findings. They are not intended for interim reports of work in progress, and must report results that are of scientific value in their own right. Short Communications should not exceed two journal pages (about 1500 words). A maximum of two figures and two tables of appropriate size are permitted.

2.3. Crystallization Papers

These are short papers which report the crystallization of novel, important or difficult-to-crystallize biological macromolecules, or new crystallization techniques. In general, a submission will only be considered if the structure of the macromolecule has not already been published. Crystallization Papers should not normally exceed two journal pages (about 1500 words). Authors should take into account the evaluation criteria given in §11. Crystallization Papers should be submitted to one of the Crystallization Co-editors, whose addresses appear on the inside front cover of each issue.

2.4. Structural Genomics Papers

This category of papers provides rapid reporting of structural genomics research. Full details can be found at http://journals./d/services/structuralgenomics/.

2.5. Lead Articles

Lead Articles are authoritative, comprehensive and forward-looking reviews of major areas of research interest. They are always commissioned by the Section Editors, on the advice of the Editorial Board. Suggestions for suitable topics and potential author(s) are welcomed by the Section Editors for discussion with the Board. The Section Editors will discuss the treatment of the topic, the length of the Article and the delivery date of the manuscript with the invited author(s); Lead Articles will be refereed in the normal manner.

2.6. Topical Reviews

A Topical Review is a short, highly focused survey covering a relatively narrow area of current research interest. It should not aim to be comprehensive, but a brief introduction should provide historical perspective and a brief conclusion should indicate likely future directions. Topical Reviews will be limited to about ten journal pages (10000 words) except in special agreed circumstances. Shorter reviews on rapidly evolving areas are also actively encouraged. They will be commissioned by the Section Editors either personally, or following a formal proposal by prospective author(s). Topical Reviews will be refereed in the normal way.

2.7. Letters to the Editor
EditorThese may deal with non-technicalaspects of crystallography,its role,itspropagation,the proper function of itsSocieties etc.,or may make a technicalobservation that would usefully be broughtto a wider audience.Letters should besubmitted to one of the Section Editors or tothe Editor-in-chief of Acta Crystallographicaonly.2.8.Scientific CommentComments of general scienti®c interest tothe readership are welcomed.These shouldnot normally exceed two journal pages andshould be submitted as in x3.2.9.Meeting ReportsThese are normally invited.Prospectiveauthors interested in writing such itemsshould®rst contact the Section Editors.2.10.New Commercial ProductsAnnouncements of new commercialproducts are published free of charge.Thedescriptions,up to300words or theequivalent if a®gure is included,should givethe manufacturer's full address.2.11.ObituariesThese will be commissioned by theSection Editors.3.Submission and handling ofmanuscriptsPapers should be submitted in one of twoways:as hard copy directly to a Co-editoror Section Editor or electronically viathe web at /services/submit.html.3.1.Hard-copy submissionManuscripts and®gures should beprepared using the®le formats listed in x3.9.Three paper copies and the electronic®le(s)should be submitted;authors are remindedto keep an exact copy of the submission forlater editorial adjustments and for checkingproofs.Unless stated otherwise in x2,thesubmission should be sent to a SectionEditor or any of the Co-editors taking intoaccount their areas of expertise.On accep-tance,an electronic version of the®nalmanuscript will be required by the Editorial Of®ce.Contact details for the editors are avail-able at /d/services/ editors.html.3.2.Electronic submission Manuscripts and®gures should be prepared using the®le formats listed in x3.9. 
Full details of the submission procedure can be found at /services/ submit.html and authors should®rst check this page to see if the service is available.nguages of publicationActa Crystallographica Section D will publish papers in English,French,German and Russian.3.4.Handling of manuscriptsAll contributions will be seen by referees (normally two)before they can be accepted for publication.The editor to whom the manuscript is assigned is responsible for choosing referees and for accepting or rejecting the paper.This responsibility includes decisions on the®nal form of the paper and interpretation of these Notes when necessary.If changes to a manuscript requested by a Section Editor,Co-editor or the editorial staff are not received within two months of transmittal to the author,the submission will automatically be withdrawn.Should the manuscript require further revision,this would normally be expected to be completed within one month of the revision having been requested.Any subsequent communication of the material will be treated as a new submission in the editorial process.For accepted papers,it is the responsi-bility of the Managing Editor to prepare the paper for printing.This may involve corre-spondence with the authors and/or the responsible editor in order to resolve ambi-guities or to obtain satisfactory®gures or tables.The date of acceptance that will appear on the published paper is the date on which the Managing Editor receives the last item required.Correspondence will be sent to the author who submitted the paper unless the Managing Editor is informed of some other suitable arrangement.On rare occasions an editor may consider that a paper is better suited to a section of Acta Crystallographica other than that speci®ed by the author(s),to the Journal of Applied Crystallography or to the Journal of Synchrotron Radiation.Any change to the section or journal of publication will only bemade after full discussion with the commu-nicating author.3.5.Author's warrantyThe submission of a paper is taken as animplicit guarantee that the work is original,that it is the author(s)own work,that allauthors concur with and are aware of thesubmission,that all workers involved in thestudy are listed as authors or given propercredit in the acknowledgements,that themanuscript has not already been published(in any language or medium),and that it isnot being considered and will not be offeredelsewhere while under consideration for anIUCr journal.The inclusion of material in aninformal publication,e.g.a preprint serveror a newsletter,does not preclude publica-tion in an IUCr journal.All authors will berequired to sign off the®nal version of thepaper.Important considerations related topublication have been given in the ethicalguidelines published in Acc.Chem.Res.(2002),35,74±76.3.6.CopyrightExcept as required otherwise by nationallaws,an author must sign and submit a copyof the Transfer of Copyright Agreementform for each manuscript before it can beaccepted.During the electronic submissionprocess,authors will be asked to transfercopyright electronically.3.7.Author grievance procedureAn author who believes that a paper hasbeen unjusti®ably treated by the Co-editormay appeal initially to a Section Editor andthen to the Editor-in-chief if still aggrievedby the decision.3.8.Contact e-mail addressThe contact author must provide ane-mail address for editorial communicationsand despatch of electronic proofs.3.9.File formatThe manuscript should be prepared usingT E X,L A T E X or Word.Authors are encour-aged 
to use the templates available from theEditorial Of®ce by e-mail(med@)orby ftp(from the`templates'directory).AllWord submissions should be accompaniedby an RTF(rich text format)®le.Figures may be provided in PostScript,encapsulated PostScript or TIFF formats.The resolution of bitmap graphics should bea minimum of600d.p.i.3.10.File transferFor electronic submissions the®lesshould be uploaded via the web.Full detailsof this procedure are given at http:///services/submit.html.For hard-copy submissions®nal electronic®les must have a®lename constructed fromthe reference number supplied by the Co-editor.Files should be given the extensionsFtex,Fdo and Frtf as appropriate.Illus-trations should be given the extensions Fps,Feps or Ftif.Multiple®les for the samesubmission should be uniquely identi®ede.g.xzIHVUfigIFps,xzIHVUfigPFps,xzIHVUFdo ,etc.where xz1087is the refer-ence number.Only after acceptance of the paper by theresponsible editor should the®nal electronicversion of the paper be sent to the EditorialOf®ce in Chester.This may be via the web(see above),by e-mail(med@),ondiskette or by ftp as described below.4.Abstract and synopsisAll contributions must be accompanied byan English language Abstract and a one ortwo sentence Synopsis of the main®ndingsof the paper for inclusion in the Table ofContents for the relevant issue.The Abstractshould state as speci®cally and as quantita-tively as possible the principal resultsobtained.The Abstract should be suitable forreproduction by abstracting services withoutchange in wording.It should not repeatinformation given in the title.Ordinarily200words suf®ce for Abstracts of ResearchPapers,Lead Articles and Topical Reviews,and100words for shorter contributions.Itshould make no reference to tables,diagrams,atom numbers or formulaecontained in the paper.It should not containfootnotes.Numerical information given inthe Abstract should not be repeated in thetext.It should not include the use of`we'or`I'.(i)On your workstation enter:ftp ftpFiu rForg(ii)Wait for x me F F F:promptand enter: nonymous(iii)Wait for sswordXprompt and enter:your eEm ilddress(iv)Wait for ftpb promptand enter:d in omingGd(v)Transfer a®le from youraccount(e.g.j29.ps)as anidenti®able name(e.g.xz1087®g1.ps):put jPWFpsxzIHVUfigIFps(vi)Wait for ftpb prompt before sending another®le(vii)Finish off the ftp sessionby entering: ye(viii)Send an e-mail to Chester(meddiu rForg)with a list of the®les transferred by ftpLiterature references in an Abstract are discouraged.If a reference is unavoidable,it should be suf®ciently full within the Abstract for unambiguous identi®cation,e.g.[Terwil-liger(1994).Acta Cryst.D50,17±23].5.Diagrams and photographs(`figures')Figures should be prepared using one of the ®le formats listed in x3.9.The choice of tables and®gures should be optimized to produce the shortest printed paper consistent with clarity.Duplicate presentation of the same information in both tables and®gures is to be avoided,as is redundancy with the text.Authors of protein structure papers are requested to submit a picture of the C chain trace.This will be helpful for referees and may be deposited.In addition,a diagram of the®t of a side chain is helpful to the reader in terms of assessing the resolu-tion and map quality.Fibre data should contain appropriate information such as a photograph of the data.As primary diffraction data cannot be satisfactorily extracted from such®gures,the basic digital diffraction data should be deposited.5.1.QualityElectronic®les in the formats listed in x3.9 are 
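The boxed ftp procedure is an interactive session. Purely as an aside, and not part of the journal's notes, the same anonymous upload could be scripted with Python's standard ftplib, using the host, directory and example file names from the box (the 2003-era server may of course no longer accept such uploads):

from ftplib import FTP

# Anonymous upload to the IUCr incoming directory, mirroring steps (i)-(vii).
with FTP("ftp.iucr.org") as ftp:
    ftp.login(user="anonymous", passwd="your.email@example.org")  # e-mail as password
    ftp.cwd("incoming/d")
    with open("j29.ps", "rb") as fh:
        ftp.storbinary("STOR xz1087fig1.ps", fh)  # store under the reference-number name
# Step (viii), e-mailing the file list to med@iucr.org, remains manual.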
5. Diagrams and photographs ('figures')
Figures should be prepared using one of the file formats listed in §3.9. The choice of tables and figures should be optimized to produce the shortest printed paper consistent with clarity. Duplicate presentation of the same information in both tables and figures is to be avoided, as is redundancy with the text. Authors of protein structure papers are requested to submit a picture of the Cα chain trace. This will be helpful for referees and may be deposited. In addition, a diagram of the fit of a side chain is helpful to the reader in terms of assessing the resolution and map quality. Fibre data should contain appropriate information such as a photograph of the data. As primary diffraction data cannot be satisfactorily extracted from such figures, the basic digital diffraction data should be deposited.

5.1. Quality
Electronic files in the formats listed in §3.9 are essential for high-quality reproduction. The resolution of bitmap graphics should be a minimum of 600 d.p.i. Where electronic files are not available, hard-copy greyscale or colour images should be provided as glossy prints. Laser printer or photocopier output will generally be unsatisfactory for reproduction of such diagrams.

5.2. Size
Diagrams should be as small as possible consistent with legibility. They will normally be sized so that the greatest width including lettering is less than the width of a column in the journal.

5.3. Lettering and symbols
Fine-scale details and lettering must be large enough to be clearly legible (ideally 1.5–3 mm in height) after the whole diagram has been reduced to one column width. Lettering should be kept to a minimum; descriptive matter should be placed in the legend.

5.4. Numbering and legends
Diagrams should be numbered in a single series in the order in which they are referred to in the text. A list of the legends ('figure captions') should be included in the manuscript.

5.5. Stereofigures
Atom labelling when included should be on both left and right views in stereo perspective. Both views should be incorporated into a single figure.

5.6. Colour figures
Figures in colour are accepted at no cost to the author provided that the editor agrees that they improve the understanding of the paper. At the editor's discretion, figures printed in black and white may appear in colour in Crystallography Journals Online.

6. Tables
Authors submitting in Word should use the Word table editor to prepare tables.

6.1. Use of tables
Extensive numerical information is generally most economically presented in tables. Text and diagrams should not be redundant with the tables.

6.2. Design, numbering and size
Tables should be numbered in a single series of arabic numerals in the order in which they are referred to in the text. They should be provided with a caption. Tables should be carefully designed to occupy a minimum of space consistent with clarity.

7. Mathematics and letter symbols
Authors submitting in Word should use the Word equation editor to prepare displayed mathematical equations. The use of the stop (period) to denote multiplication should be avoided except in scalar products. Generally no sign is required but, when one is, a multiplication sign (×) should be used. Vectors should be in bold type and tensors should be in bold-italic type. Greek letters should not be spelled out. Care should be taken not to cause confusion by using the same letter symbol in two different meanings. Gothic, script or other unusual lettering should be avoided. Another typeface may be substituted if that used by the author is not readily available. Equations, including those in published Appendices, should be numbered in a single series.

8. Multimedia
Multimedia additions to a paper (e.g. time-lapse sequences, three-dimensional structures) are welcomed; they will be made available via Crystallography Journals Online.

9. Nomenclature

9.1. Crystallographic nomenclature
Authors should follow the general recommendations produced by the IUCr Commission on Crystallographic Nomenclature (see reports at http://www.iucr.org/iucr-top/comm/cnom/). Atoms of the same chemical species within an asymmetric unit should be distinguished by an appended arabic numeral. Chemical and crystallographic numbering should be in agreement wherever possible. When it is necessary to distinguish crystallographically equivalent atoms in different asymmetric units, the distinction should be made by lower-case roman numeral superscripts (i.e. i, ii, iii etc.) to the original atom labels. Space groups should be designated by the Hermann–Mauguin symbols. Standard cell settings, as listed in Volume A of International Tables for Crystallography, should be used unless objective reasons to the contrary are stated. When a non-standard setting is used, the list of equivalent positions should be given. Hermann–Mauguin symbols should also be used for designating point groups and molecular symmetry. It is helpful if the origin used is stated explicitly where there is a choice. The choice of axes should normally follow the recommendations of the Commission on Crystallographic Data [Kennard et al. (1967). Acta Cryst. 22, 445–449]. A symbol such as 123 or hkl without brackets is understood to be a reflection, (123) or (hkl) a plane or set of planes, [123] or [uvw] a direction, {hkl} a form and <uvw> all crystallographically equivalent directions of the type [uvw]. Other bracket notations should be explicitly defined.

9.2. Nomenclature of chemical compounds etc.
Chemical formulae and nomenclature should conform to the rules of nomenclature established by the International Union of Pure and Applied Chemistry (IUPAC), the International Union of Biochemistry and Molecular Biology (IUBMB), the International Mineralogical Association and other appropriate bodies. As far as possible the crystallographic nomenclature should correspond to the systematic name. Any accepted trivial or non-systematic name may be retained, but the corresponding systematic (IUPAC) name should also be given.

9.3. Units
The International System of Units (SI) is used except that the ångström (symbol Å, defined as 10^-10 m) is generally preferred to the nanometre (nm) or picometre (pm) as the appropriate unit of length. Recommended prefixes of decimal multiples should be used rather than '×10^n'.

10. References
References to published work must be indicated by giving the authors' names followed immediately by the year of publication, e.g. Neder & Schulz (1998) or (Neder & Schulz, 1998). Where there are three or more authors the reference in the text should be indicated in the form Smith et al. (1998) or (Smith et al., 1998) etc. (all authors should be included in the full list). In the reference list, entries for journals [abbreviated in the style of Chemical Abstracts (the abbreviations Acta Cryst., J. Appl. Cryst. and J. Synchrotron Rad. are exceptions)], books, multi-author books, computer programs, personal communications and undated documents should be arranged alphabetically and conform with the following style:

Brünger, A. T. (1992a). X-PLOR. Version 3.1. A System for X-ray Crystallography and NMR. Yale University, Connecticut, USA.
Brünger, A. T. (1992b). Nature (London), 355, 472–474.
Collaborative Computational Project, Number 4 (1994). Acta Cryst. D50, 760–763.
Crowther, R. A. (1972). The Molecular Replacement Method, edited by M. G. Rossmann, pp. 173–178. New York: Gordon and Breach.
International Union of Crystallography (1999). (IUCr) Crystallography Journals Online, http://journals.iucr.org.
International Union of Crystallography (2001). (IUCr) Structure Reports Online, http://journals.iucr.org/e/journalhomepage.html.
Sheldrick, G. M. (2003). Acta Cryst. D59. In the press.
Yariv, J. (1983). Personal communication.

Note that inclusive page numbers must be given. Identification of individual structures in the paper by use of database reference (identification) codes should be accompanied by a full citation of the original literature in the reference list. However, in tables containing more than ten such reference codes, citation in the reference list is not required.

11. Evaluation criteria
The information required in manuscripts is given below.

11.1. Crystallization data
A list of data recommended for inclusion in Crystallization Papers can be found on the web at http://journals.iucr.org/d/services/crystallization/.

11.2. Resolution
The effective resolution should be described clearly. Values of the internal agreement of the data, Rmerge, together with the multiplicity (i.e. the average number of measurements for each reflection from which Rmerge is calculated), the percentage of data with I > 3σ(I) and the percentage completeness of the data are required for the overall data set and the highest resolution shell, together with the limits of that shell in Å. For high-quality data obtained with synchrotron radiation, values of Rmerge < 20%, completeness > 93% and observable data > 70% should be achievable for the highest resolution shell. A complete table listing the above criteria as a function of resolution should also be submitted, but will normally be included in the supplementary material, see §13.

11.3. Unrefined structures
Adequate experimental details should be provided to convince referees that the interpretation is correct, within the resolution of the analysis. If heavy-atom derivatives were used, sufficient data should be provided for evaluation of the quality of those derivatives. The fit of the model to the electron-density maps used to determine the structure should be shown or described by quantitative indicators, such as real-space residuals.

11.4. Refined structures
For refined structures the data required depend on the effective resolution of the analysis. The following should be included. A final Ramachandran plot is important and should be provided for review purposes. The paper should include a brief statement of the percentage of amino acids in allowed, additionally allowed and disallowed regions of the plot. The r.m.s. deviations in B values within each residue's main-chain and side-chain atoms should be included. The crystallographic R index should be tabulated as a function of resolution and Rfree should also be included. Adequate details should be provided regarding the steps followed in constructing the model and refining the structure. Also requested are: the number of solvent atoms; solvent B values; the history and salient details of the refinement methods employed, including the resolution ranges that were used at various stages of refinement; the restraints used; a description of how the thermal parameters were treated; and how the solvent sites were selected and handled during refinement. It should be clear if van der Waals distances were restrained. Hydrogen-bonding patterns within the protein should be described, including the number of hydrogen-bond donors not involved in hydrogen bonding and any unsatisfied buried main-chain hydrogen bonds. Any structural features that are considered somewhat unusual should be described. Examples include cis peptide bonds; unoccupied volume inside the protein; buried charge groups that are not involved in salt bridges or reasonable hydrogen-bonding environments; unusual locations of glycine and proline residues; unusual distributions of polar and hydrophobic groups within the molecule; and unusual bond lengths, bond angles, planes, and intra- and intermolecular contacts.

12. Small-molecule structure determinations
Papers that report the results of crystal structure determinations of small molecules must report the associated experimental data as required in Notes for Authors for Section C of Acta Crystallographica. These data should be supplied as a single electronic file in CIF format. The CIF will be checked in Chester for internal consistency.

13. Supplementary publication procedure (deposition)

13.1. Purpose and scope
Parts of some papers are of interest to only a small number of readers, and the cost of printing these parts is not warranted. Arrangements have therefore been made for such material to be made available from the IUCr electronic archive via Crystallography Journals Online or to be deposited with the Protein Data Bank, the Nucleic Acid Database and the ICDD as appropriate.

13.2. IUCr electronic archive
All material for deposition in the IUCr electronic archive should be supplied electronically. Non-structural information, which may include: details of the experimental procedure; details of the stages of structure refinement; details of mathematical derivations given only in outline in the main text and in mathematical Appendices; lengthy discussion of points that are not of general interest or that do not lead to definite conclusions but that do have significant value; and additional diagrams, should be supplied in one of the formats given in §3.9. Structural information (for small-molecule structures) should be supplied in CIF format; structure factors should be supplied as .fcf files.

13.3. Macromolecular structures
Authors should follow the deposition recommendations of the IUCr Commission on Biological Macromolecules [Acta Cryst. (2000), D56, 2]. For all structural studies of macromolecules, coordinates and structure factors must be deposited with the Protein Data Bank or the Nucleic Acid Database if a total molecular structure has been reported. Authors must supply the Protein Data Bank/Nucleic Acid Database reference codes before the paper can be published.

13.4. Crystallization data
For Crystallization Papers, authors are recommended to deposit their data with the relevant database.

14. Powder diffraction data
Authors of powder diffraction papers should consult the notes provided at the online CIF help page (http://journals.iucr.org/c/services/cifhelp.html). For papers that present the results of powder diffraction profile fitting or refinement (Rietveld) methods, the primary diffraction data, i.e. the numerical intensity of each measured point on the profile as a function of scattering angle, will be deposited.

15. Crystallography Journals Online
All IUCr journals are available on the web via Crystallography Journals Online: http://journals.iucr.org/. Full details of author services can be found at http://journals.iucr.org/d/services/authorservices.html.

15.1. Electronic status information
Authors may obtain information about the current status of their papers at http://journals.iucr.org/services/status.html.

15.2. Proofs
Proofs will be provided in portable document format (pdf). The correspondence author will be notified by e-mail when the proofs are ready for downloading.

15.3. Reprints
After publication, the correspondence author will be able to download the electronic reprint of the published article, free of charge. Authors will also be able to order printed reprints at the proof stage.

Histology and Embryology English Courseware: Connective Tissue

1.3 Classification of Connective Tissue
Connective tissue proper
• Loose connective tissue (L.C.T.)
• Dense connective tissue (D.C.T.)
• Adipose tissue
• Reticular tissue
⑤ Fat cell
---structure: large, round or polygonal; flattened ovoid nucleus located on one side of the cell; thin layer of cytoplasm; a large lipid droplet
---function: synthesize and store fat

⑥ Undifferentiated mesenchymal cell
---structure: similar to fibrocyte
---function: multi-differentiating potential

⑦ Leukocytes (blood cells)
---Granulocytes: neutrophil, eosinophil, basophil
---Agranulocytes: lymphocyte (B, T), monocyte

Mechanism of mast cells in allergic reactions
In allergic reactions, the mediators released by mast cells cause:
• increased capillary permeability (毛细血管通透性), so plasma fluid leaks out of the vessels (血浆液体溢出), producing edema (水肿)
• bronchial smooth-muscle spasm (支气管平滑肌痉挛), producing asthma (哮喘)
• vasodilation (血管扩张), so blood pressure (BP) falls, producing shock (休克)

Identify the following cells.

Paper | Content | Marks (% of total) | Purpose
Reading (1 hour) | 5 parts | 25% | Shows you can deal confidently with different types of text, such as business publications and correspondence.
Writing (45 minutes) | 2 parts | 25% | Requires you to be able to produce two different pieces of writing, such as letters, reports, proposals and emails.
Listening (about 40 minutes including transfer time) | 3 parts | 25% | Requires you to be able to follow and understand a range of spoken materials, such as interviews, discussions and presentations.
Speaking (14 minutes per pair of candidates) | 3 parts | 25% | Tests your ability to communicate effectively in face-to-face situations. You will take the Speaking test with one or two other candidates.

What's in the Reading paper?
The Cambridge English: Business Vantage Reading paper has different types of texts and questions. In Part 1, you may be required to read one long text divided into four sections, or four shorter, related texts.

Summary
Time allowed: 1 hour
Number of parts: 5
Number of questions: 45
Marks: 25% of total
Lengths of texts: 150-550 words per text
Texts may be from: newspaper and magazine articles, reports, advertisements, letters, messages, brochures, guides, manuals, etc.

Parts 1-5

Part 1 (Matching)
What's in Part 1? Either four short texts on a related topic or one text divided into four sections, and a series of statements. You have to match each statement to the text or section where you can find the information.
What do I have to practise? Reading - scanning for gist and specific information.
How many questions are there? 7
How many marks do I get? One mark for each correct answer.
Practise Part 1
Now try Part 1 from the sample Cambridge English: Business Vantage Reading paper.

Part 2 (Matching)
What's in Part 2? A text with gaps and some sentences (A-G). Each gap represents a missing sentence. You have to read the text and the sentences and decide which sentence belongs in each gap.
What do I have to practise? Reading - understanding text structure.
How many questions are there? 5
How many marks do I get? One mark for each correct answer.
Practise Part 2
Now try Part 2 from the sample Cambridge English: Business Vantage Reading paper.

Part 3 (Multiple choice)
What's in Part 3? A single text with six comprehension questions. You have to read the text and choose the right answer for each question (A, B, C or D).
What do I have to practise? Reading for gist and specific information.
How many questions are there? 6
How many marks do I get? One mark for each correct answer.
Practise Part 3
Now try Part 3 from the sample Cambridge English: Business Vantage Reading paper.

Part 4 (Multiple-choice cloze)
What's in Part 4? A text with gaps. Each gap represents one word or phrase. You have to read the text and choose the right word or phrase to fill each gap from a choice of four (A, B, C or D).
What do I have to practise? Reading - vocabulary and structure.
How many questions are there? 15
How many marks do I get? One mark for each correct answer.
Practise Part 4
Now try Part 4 from the sample Cambridge English: Business Vantage Reading paper.

Part 5 (Proof-reading)
What's in Part 5? A text in which some lines are correct and some lines have an extra, unnecessary word. If the line is correct, you write 'CORRECT' on your answer sheet. If the line is not correct, you have to write down the extra word.
What do I have to practise? Reading - understanding sentence structure and finding errors.
How many questions are there? 12
How many marks do I get? One mark for each correct answer.
Practise Part 5
Now try Part 5 from the sample Cambridge English: Business Vantage Reading paper.

DOs and DON'Ts
DOs
1. Pay attention to the complete meaning of the sentences in Part 1.
2. Read the whole text in Part 2 and try to predict what kind of information is missing from each of the gaps, before working on the extracts.
3. Look very carefully at the pronouns in the extracts in Part 2. They must refer correctly to the nouns before and/or after the gap in the text.
4. Notice linking words and phrases in Part 2. For example, 'however' or 'but' must link two contrasting ideas.
5. Regularly check your answers in Part 2. If you are finding a question difficult, perhaps you have already used the correct answer to that question in the wrong place. Always leave enough time to double-check answers against the text.
6. Pay attention to the general theme of the paragraphs in Part 3.
7. Read the text and questions very carefully in Part 3. Remember that the options A-D in the question may mean something very similar to the text, but not the same.
8. Read the question or stem very carefully in Part 3. Perhaps all of the options occur somewhere in the text, but only one of them is correct with that particular question.
9. Keep vocabulary lists and try to use new vocabulary that you learn. This will be particularly useful for Part 4.
10. Look carefully at the sentences in Part 4. Does the word you have chosen usually go together with a certain preposition or grammatical structure? Does it make a good collocation with the surrounding words?
11. Remember that the extra word in Part 5 has to be grammatically wrong and not just unnecessary.
12. In Part 5, write your answer in capital letters.
DON'Ts
13. Don't choose an answer in Part 1 just because you find matching words. There are usually some similarities between sections and you need to make sure that your choice matches the complete meaning of the question.
14. Don't forget that tenses in the Part 2 extracts need to fit logically with those already present in the text.
15. Don't choose more than one letter for any of the answers in Parts 1-4.

FAQs (Frequently Asked Questions)
What aspects of reading are tested in this paper?
You are tested on your ability to understand gist, detail and the text structure and to identify main points and specific information. You are also tested on vocabulary, understanding discourse features and the ability to identify errors.
How many marks is the Reading paper worth?
The Reading paper is worth 25% of the total score.
How long should I spend on each part?
There is no time limit for each task; some tasks may take longer than others and you should be aware of how long you need for different tasks. However, it's worth remembering that some tasks have more items and are, therefore, worth more marks.
How do I answer the Reading paper?
In this paper, you put the answers on an answer sheet by filling in a lozenge (a kind of box) or by writing a one-word answer on your answer sheet in pencil.

What's in the Writing paper?
The Cambridge English: Business Vantage Writing paper has different types of texts and questions. In one part you may have to read one long text or two or more shorter, related texts.

Summary
Time allowed: 45 minutes
Number of parts: 2
Number of questions: 2 compulsory questions
Marks: 25% of total

Parts 1-2

Part 1
What's in Part 1? A description of a business situation. You have to write an internal company communication using the information we give you.
What do I have to practise? Writing a message, memo or email: giving instructions, explaining a development, asking for comments, requesting information, agreeing to requests, etc.
How many questions are there? 1 compulsory question
How much do I have to write? 40-50 words
Practise Part 1
Now try Part 1 from the sample Cambridge English: Business Vantage Writing paper.

Part 2
What's in Part 2? Some material (letter, fax, email, note, notice, advert, graph, chart) to read. You have to write a piece of business correspondence, a report or a proposal based on the information.
What do I have to practise? Writing business correspondence (e.g. explaining, apologising, reassuring, complaining), reports (e.g. describing, summarising) or proposals (e.g. describing, summarising, recommending, persuading).
How many questions are there? 1 compulsory question
How much do I have to write? 120-140 words
Practise Part 2
Now try Part 2 from the sample Cambridge English: Business Vantage Writing paper.

DOs and DON'Ts
DOs
Read the question carefully and underline the important parts.
Make a plan before you start writing.
Write clearly and concisely.
Write so that the examiner can read the answer.
Check that you have included all the content elements.
Add relevant ideas and information of your own in Part 2.
Remember which format to use (email, report, etc.).
Use the correct style or register (e.g. formal/informal).
Use a range of business words and expressions.
Structure your writing with good linkers such as 'firstly', 'also', 'however', 'moreover', 'nevertheless' and so on.
Write in paragraphs.
Check the question and your work again after you have finished writing.
DON'Ts
Don't use white correction fluid but do cross out mistakes with a single line.
Don't forget to divide your time appropriately between the two questions. Remember that Part 1 is marked out of 10 and Part 2 out of 20.
Don't panic if other people in the exam start writing straight away. It's better to read the question carefully and plan before you start writing.
Don't copy too many words and phrases from the question paper; try to use your own words. Don't repeat the same words and structures too often.
Don't waste time writing addresses for a letter, as they are not required.

FAQs (Frequently Asked Questions)
How many answers do I need to produce?
Two.
In what ways is Part 1 different from Part 2?
In Part 1, the task requires internal communication (writing to somebody within the same company), which may be a note, message, memo or email. In Part 2, the task may be a business letter, fax or email, or a report or proposal.
Is the input different in Part 1 than Part 2?
Yes. In Part 1, the input is a situation with instructions for what to write plus the layout of the task type. In Part 2, there are one or more pieces of input. These could be in the form of business correspondence (letter, fax or email), internal communication (note, memo or email), or visuals such as graphs, charts, adverts, notices, etc. The layout is given if the task is to write a fax or an email.
How many marks is each question worth?
Part 2 is worth twice as many marks as Part 1. The scores are converted to provide a mark out of 10 for Part 1 and a mark out of 20 for Part 2.
How many marks does the Writing paper carry in total?
The Writing paper is worth a total of 30 marks (25% of the total score).
Where do I write my answers?
In the question booklet. This booklet also contains enough space for you to write your rough work.
What if I write less than the number of words stated in the task?
If you write an answer which is too short, it may not have an adequate range of language and may not provide all the information required.
What if I write more than the number of words stated in the task?
You should not worry if you write slightly more than the word limit, but if you write far more than the word limit, your message may become unclear, and have a negative effect on the reader.
How is the Writing paper marked?
Writing Examiners mark candidate answers in a secure online marking environment. The software randomly allocates candidate answers to Examiners so they assess scripts from a variety of countries and centres. The software allows for examiners' marking to be monitored for quality and consistency.

What's in the Listening paper?
The Cambridge English: Business Vantage Listening paper has three parts. For each part you have to listen to a recorded text or texts and answer some questions. You hear each recording twice.

Summary
Time allowed: about 40 minutes, including time to transfer your answers onto the answer sheet
Number of parts: 3
Number of questions: 30
Marks: 25% of total

Parts 1-3

Part 1 (Note completion)
What's in Part 1? Three conversations or answer machine messages. For each recording, you have to listen and fill in four gaps in a short text, such as a form.
What do I have to practise? Listening and noting specific information.
How many questions are there? 12
How many marks do I get? One mark for each correct answer.
Practise Part 1
Now try Part 1 from the sample Cambridge English: Business Vantage Listening paper.

Part 2 (Matching)
What's in Part 2? Two sets of five short monologues (recordings of one person speaking). All the monologues have a similar theme. Each set of monologues has a list of eight items (A-H) and you have to match each speaker to one of the items.
What do I have to practise? Listening to identify topic, context, function, etc.
How many questions are there? 10
How many marks do I get? One mark for each correct answer.
Practise Part 2
Now try Part 2 from the sample Cambridge English: Business Vantage Listening paper.

Part 3 (Multiple choice)
What's in Part 3? One longer conversation or monologue (interview, discussion, presentation, etc.) and some comprehension questions. You have to listen to the recording and choose the right answer (A, B or C) for each question.
What do I have to practise? Listening for details and main ideas.
How many questions are there? 8
How many marks do I get? One mark for each correct answer.
Practise Part 3
Now try Part 3 from the sample Cambridge English: Business Vantage Listening paper.

DOs and DON'Ts
DOs
In the time before the first listening, read the instructions and task carefully, think about what you are going to hear, and underline key words in the instructions and questions.
Use the second listening to check, confirm or alter your answers from the first listening (remember that changing an answer in Part 2 may affect other answers in the same task).
Remember that in Part 1, spelling should be correct (British or American spelling).
Concentrate on an in-depth understanding of what is said in Parts 2 and 3.
Remember that in Part 2, the five answers in each task should be different.
Answer all the questions: you won't lose marks for wrong answers, and there's a chance that you'll guess correctly.
Carefully copy your answers in pencil onto the answer sheet during the 10 minutes at the end of the test.
DON'Ts
Don't leave any answers blank.
Don't spend too long thinking about a question: leave it until the second listening.
Don't attempt to rephrase unnecessarily what you hear in Part 1.
Don't forget to pay attention to anything that appears after the gap in Part 1 questions.
In Part 1, don't repeat information or words that already appear before or after the gap. For example, if the word 'days' is after the gap, don't write 'days' in your answer.
In Part 3, don't forget that you should only choose the option that actually answers the question; even if an option is true, it may not answer the question that has been asked.

FAQs (Frequently Asked Questions)
What sort of material is used in the test?
The recordings are scripted. They all deal with business topics and situations. Nearly all have one or two speakers.
How useful is exam preparation for improving my listening ability?
The exam tests listening skills that are required for most purposes, not only in business, so exam preparation is valuable, even if you are not taking the exam.
Can the Listening test be taken separately?
No. Cambridge English: Business Vantage consists of four papers testing listening, reading, writing and speaking. All four papers need to be taken in the same exam period, in order to pass the exam.

Face-to-face Speaking test

What's in the Speaking paper?
The Cambridge English: Business Vantage Speaking test has three parts and you take it together with another candidate. There are two examiners. One of the examiners conducts the test and the other examiner listens to what you say and takes notes.

Summary
Time allowed: 14 minutes per pair of candidates
Number of parts: 3
Marks: 25% of total
You have to talk: with the examiner; with the other candidate; on your own

Parts 1-3

Part 1 (Conversation)
What's in Part 1? Conversation with the examiner. The examiner first asks general and then more business-related questions. You will have to talk briefly about yourself, your home, interests and job.
What do I have to practise? Giving personal information. Talking about present circumstances, past experiences and future plans, expressing opinions, speculating, etc.
How long do we have to speak? About 3 minutes
Practise Part 1
Now try Part 1 from the sample Cambridge English: Business Vantage Speaking paper.

Part 2 (Mini-presentation)
What's in Part 2? A 'mini-presentation' on a business theme. The examiner gives you a choice of three topics (A, B or C). You have 1 minute to prepare to give a speech lasting approximately 1 minute. Listen carefully when your partner speaks, as the examiner will ask you a question about what your partner says.
What do I have to practise? Making a longer speech on your own. Giving information, and expressing and justifying opinions.
How long do we have to speak? About 6 minutes
Practise Part 2
Now try Part 2 from the sample Cambridge English: Business Vantage Speaking paper.

Part 3 (Discussion)
What's in Part 3? A discussion with the other candidate on a business-related topic. The examiner gives you a topic to discuss and you have to talk to the other candidate about the situation and decide together what to do.
What do I have to practise? Expressing and justifying opinions, speculating, comparing and contrasting, agreeing and disagreeing, etc.
How long do we have to speak? About 5 minutes
Practise Part 3
Now try Part 3 from the sample Cambridge English: Business Vantage Speaking paper.

DOs and DON'Ts
DOs
Get plenty of speaking practice in small groups, especially on topics that are likely to be used in the exam.
Listen to native (or good) speakers of English doing similar tasks.
Collect and keep records of words and phrases that are useful for carrying out the exam tasks.
Ask for clarification if you don't understand the instructions/task.
Speak clearly and loudly enough for the examiners to hear you.
Avoid long silences and frequent pauses.
Listen to your partner and respond appropriately.
Make sure turn-taking (taking turns to speak and listen to each other) is as natural as possible.
Use all the opportunities you're given in the test to speak, and extend your responses whenever possible.
DON'Ts
Don't memorise and practise long answers for Part 1.
Don't try to talk much more than your partner or interrupt in an impolite way.
Don't worry about not knowing a word or phrase: explain what you mean using other words.
Don't worry too much about making mistakes: you don't have to be word perfect.
Don't just respond to what has been said: be prepared to give your own ideas, ask for your partner's opinion and develop your partner's ideas and contributions.

FAQs (Frequently Asked Questions)
How many marks is the Speaking test worth?
It is worth 25% of the total marks for the Cambridge English: Business Vantage examination.
What should I do if I don't understand a question?
Ask! Good communication involves asking, in an appropriate way, when you don't understand.
I communicate well but am not always very accurate in grammar and vocabulary. Can I still pass the Speaking paper?
For the Vantage level, you need to be accurate enough in your grammar and choice of words to get your meaning across. In other words, you do not need to be accurate all the time to pass. Also, remember that grammar and vocabulary is only one of the four areas that are assessed in the exam. The others are discourse management, pronunciation and interactive communication.
What sort of topics might I be expected to speak about?
Some examples: personal information, the office, general business environment and routine, entertainment of clients, travel and conference meetings, using the telephone, health and safety, buying and selling, management skills, promotion, training courses. These topics are spread across the four components of the exam (Reading, Writing, Listening and Speaking).
Is it an advantage to know your partner in the Speaking test?
No. You should be encouraged to change partners in class so that you get used to interacting with a variety of people, including people you don't know well.
What if I am paired with someone much better than me?
As all students are assessed on their own performance and not on how they compare with their exam partner, this would not be a problem. Similarly, if you have difficulty in understanding your partner, your grade will not suffer. It is important to try to communicate and interact with your partner, whatever their level.
How much do I need to know about business to be successful in the Speaking test?
The Cambridge English: Business Certificates were developed to test English language in a business context. They are not focused on any specific branch, e.g. banking or computing, but you are expected to be familiar with a wide range of business situations and the vocabulary appropriate to them.
What if I don't know anything about any of the topics for the 1-minute presentation?
You do not need specialist knowledge for the topics used. There are three topics for you to choose from. The first topic is always the most general and is suitable for people with little or no working experience. The second topic is more specific to work contexts, and the third is most suited to people with experience of work situations. You are marked on your language and not on your knowledge of the topic or the originality of your ideas.
What happens if two candidates are 'mismatched', e.g. a shy person with a more dominant one?
Examiners know how to deal with this situation, and give both of the candidates an opportunity to speak: make sure you take this opportunity. It is important both to talk and to give the other candidate the chance to talk. The examiner can use the questions after the Part 3 task to encourage a quieter student to speak more.
Does the interview always have a 2:2 format?
No. If there is an uneven number of candidates, a group of three is allowed.

Gear Parameters: Chinese-English Glossary (齿轮参数中英对照)

English gear parameter : Chinese gear parameter
number
normal module : 法向模数
transverse module : 端面模数
addendum modification factor : 变位系数
reference diameter : 分度圆直径
tip diameter : 齿顶圆直径
root diameter : 齿根圆直径
base diameter : 基圆直径
prefinishing diameter : 剃前渐开线起始圆直径
control form diameter : 渐开线检查有效直径
effective outside diameter : 渐开线终止圆直径
lead
normal pressure angle : 法向压力角
transverse pressure angle : 端面压力角
helix angle : 分度圆螺旋角
base helix angle : 基圆螺旋角
hand
circular thickness
chordal addendum
normal circular thickness (dp) after finishing heat treatment : 分度圆弦齿厚
base tangent measurement after finishing heat treatment : 公法线长度
dimension over diameter balls (pin) after finishing heat treatment
center distance : 中心距
tool tip radius : 刀具齿顶圆角半径
tooth tip fillet angle, referred axis : 齿顶倒角
number, mating gear : 配对齿轮齿数
gear : 齿轮
gear pair : 齿轮副
gear pair, parallel axes : 平行轴齿轮副
gear pair, intersecting axes : 相交轴齿轮副
gear train : 齿轮系
planetary gear train : 行星齿轮系
gear transmission : 齿轮传动
mating gears : 配对齿轮
pinion : 小齿轮
wheel (larger gear) : 大齿轮
driving gear : 主动齿轮
driven gear : 从动齿轮
planet gear : 行星齿轮
planet carrier : 行星架
sun gear : 太阳轮
external gear : 外齿轮
internal gear : 内齿轮
centre distance : 中心距
shaft angle : 轴交角
line of centres : 连心线
speed reducing gear pair : 减速齿轮副
speed increasing gear pair : 增速齿轮副
gear ratio (transmission ratio) : 传动比
axial plane : 轴平面
datum plane : 基准平面
pi
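As a worked illustration of how the geometric parameters above relate to one another (this snippet is ours, not part of the glossary), the main diameters of an involute gear follow from the module, tooth number, pressure angle, helix angle and addendum modification factor; standard full-depth tooth proportions are assumed:

import math

def involute_gear_dimensions(m_n, z, alpha_n_deg=20.0, beta_deg=0.0, x=0.0):
    """Derive the main diameters in the glossary from basic gear data.

    m_n         : normal module (法向模数), mm
    z           : number of teeth
    alpha_n_deg : normal pressure angle (法向压力角); 20 deg is the usual standard
    beta_deg    : reference helix angle (分度圆螺旋角); 0 for a spur gear
    x           : addendum modification factor (变位系数)
    Standard full-depth proportions (addendum 1.0*m_n, dedendum 1.25*m_n)
    are assumed; tip shortening for large shifts is ignored.
    """
    beta = math.radians(beta_deg)
    alpha_t = math.atan(math.tan(math.radians(alpha_n_deg)) / math.cos(beta))
    m_t = m_n / math.cos(beta)              # transverse module (端面模数)
    d = m_t * z                             # reference diameter (分度圆直径)
    return {
        "transverse module": m_t,
        "transverse pressure angle (deg)": math.degrees(alpha_t),  # 端面压力角
        "reference diameter": d,
        "tip diameter": d + 2.0 * m_n * (1.0 + x),    # 齿顶圆直径
        "root diameter": d - 2.0 * m_n * (1.25 - x),  # 齿根圆直径
        "base diameter": d * math.cos(alpha_t),       # 基圆直径
    }

# e.g. a 30-tooth helical gear, normal module 2 mm, 15 deg helix:
# involute_gear_dimensions(2.0, 30, beta_deg=15.0)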
Koichi ITO†a), Masahiko HIRATSUKA††, Takafumi AOKI†, Members, and Tatsuo HIGUCHI†††, Fellow

Manuscript received June 28, 2005. Manuscript revised October 7, 2005. Final manuscript received November 15, 2005.
† The authors are with the Department of Computer and Mathematical Sciences, Graduate School of Information Sciences, Tohoku University, Sendai-shi, 980-8579 Japan.
†† The author is with the Sendai National College of Technology, Sendai-shi, 989-3128 Japan.
††† The author is with the Department of Electronics, Faculty of Engineering, Tohoku Institute of Technology, Sendai-shi, 982-8577 Japan.
a) E-mail: ito@aoki.ecei.tohoku.ac.jp
DOI: 10.1093/ietfec/e89-a.3.735
SUMMARY This paper presents a shortest path search algorithm using a model of excitable reaction-diffusion dynamics. In our previous work, we have proposed a framework of Digital Reaction-Diffusion System (DRDS)—a model of a discrete-time discrete-space reaction-diffusion system useful for nonlinear signal processing tasks. In this paper, we design a special DRDS, called an “excitable DRDS,” which emulates excitable reaction-diffusion dynamics and produces traveling waves. We also demonstrate an application of the excitable DRDS to the shortest path search problem defined on two-dimensional (2-D) space with arbitrary boundary conditions. key words: reaction-diffusion system, nonlinear dynamics, shortest path search, excitable dynamics
1. Introduction
Living organisms can create a remarkable variety of structures to realize intelligent functions. In embryology, the development of patterns and forms is sometimes called Morphogenesis. In 1952, Alan Turing suggested that a system of chemical substances, called morphogens, reacting together and diffusing through a tissue, is adequate to account for the main phenomena of morphogenesis [1]. Recently, model-based studies of morphogenesis employing computer simulations have begun to attract much attention in mathematical biology [2], [3]. From an engineering viewpoint, the insights into morphogenesis provide important concepts for devising a new class of intelligent signal processing functions inspired by biological pattern formation phenomena [4], [5]. From this viewpoint, we have proposed a framework of Digital Reaction-Diffusion System (DRDS)—a discrete-time discrete-space reaction-diffusion dynamical system—for designing signal processing models exhibiting active pattern/texture formation capability. In our previous papers [6], [7], some applications of DRDS to biological texture generation and fingerprint image enhancement/restoration have already been discussed.
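For orientation, the DRDS of [6], [7] can be summarized schematically (the notation here is our plausible reconstruction and may differ in detail from the original papers) as an explicit discrete-time, discrete-space update of a two-morphogen reaction-diffusion system:

    u_{t+1}(i, j) = u_t(i, j) + dt * [ f(u_t(i, j), v_t(i, j)) + D_u * L{u_t}(i, j) ]
    v_{t+1}(i, j) = v_t(i, j) + dt * [ g(u_t(i, j), v_t(i, j)) + D_v * L{v_t}(i, j) ]

where u and v are the morphogen concentrations on the 2-D grid, f and g are the nonlinear reaction kinetics, D_u and D_v are the diffusion coefficients, and L{.} is a discrete Laplacian such as the 5-point stencil. Choosing f and g appropriately makes the same lattice produce Turing-type static patterns or, as in Section 2 below, excitable traveling waves.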
The DRDS can simulate a variety of reaction-diffusion dynamics by changing its nonlinear reaction kinetics. This paper describes the design of an excitable DRDS based on FitzHugh-Nagumo-type dynamics [2]; the designed DRDS creates excitable traveling waves exhibiting the following characteristics: (i) the waves propagate with a constant velocity, and (ii) they vanish in collisions with other waves without any other interaction.

The goal of this paper is to propose an algorithm for shortest path search in two-dimensional (2-D) space using the excitable DRDS. We first define a 2-D map (specifying collision-free space and blocked space) as a boundary condition for the excitable DRDS, and initiate a traveling wave at the starting point in the map. The traveling wave propagates through the map, splitting into different groups of wavefronts at branch points. A snapshot of the wavefronts represents an equidistant surface measured from the starting point. By tracing back the equidistant surfaces of different time steps, we can find the shortest path from the starting point to any specified destinations.

So far, there are some papers discussing the mechanism of finding the collision-free shortest path in a 2-D map using excitable reaction-diffusion dynamics. In the papers [8]-[13], a real chemical reaction, called the Belousov-Zhabotinsky (BZ) reaction, is employed as an excitable medium to generate traveling waves for path finding. The use of real chemical media for performing practical computing tasks has the weakness of limited stability in its operation. Also, the size and complexity of maps that can be handled in chemical computers may be limited. The other related papers basically employ continuous-time models of excitable dynamics, including a PDE (Partial Differential Equation) model [14] and circuit models [11], [15]. All these works focus on the mechanism of generating equidistant surfaces for the given map by using excitable chemical waves and describe only simple examples of small maps. On the other hand, this paper describes a concrete algorithm for shortest path search (including the process of tracing back the equidistant surfaces). The proposed algorithm is based on the discrete-time discrete-space model of DRDS, which is easily implemented in digital computers, and can be applied to arbitrary maps of practical size and complexity.

This paper is organized as follows: Section 2 defines an excitable DRDS, and presents some examples of wave propagation in the excitable DRDS. Section 3 describes a shortest path search algorithm using the excitable DRDS. Section 4 demonstrates some experiments for the shortest path
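The paper gives no program listing, so the following NumPy sketch is our illustrative reconstruction of the idea, not the authors' implementation: the FitzHugh-Nagumo parameter values, the explicit-Euler time step, the arrival threshold and the greedy trace-back rule are all assumptions made for this example.

import numpy as np

def shortest_path_excitable(mask, start, goal, dt=0.05, D=1.0,
                            eps=0.08, beta=0.7, gamma=0.8, max_steps=100000):
    """Shortest collision-free path on a 2-D map via an excitable DRDS.

    mask  : 2-D bool array, True = free space, False = blocked; the outer
            border is assumed blocked so the wave never leaves the map.
    start : (row, col) cell where the traveling wave is initiated.
    goal  : (row, col) destination cell.
    """
    U_REST, V_REST = -1.1994, -0.6243         # resting state of the kinetics below
    u = np.full(mask.shape, U_REST)
    v = np.full(mask.shape, V_REST)
    arrival = np.full(mask.shape, -1, dtype=np.int64)
    r0, c0 = start
    u[r0 - 1:r0 + 2, c0 - 1:c0 + 2] = 2.0     # suprathreshold patch nucleates one wave
    arrival[start] = 0

    for t in range(1, max_steps):
        uw = np.where(mask, u, U_REST)        # blocked cells are clamped to rest
        lap = (np.roll(uw, 1, 0) + np.roll(uw, -1, 0) +
               np.roll(uw, 1, 1) + np.roll(uw, -1, 1) - 4.0 * uw)
        # FitzHugh-Nagumo-type excitable kinetics, explicit Euler in time
        du = dt * (u - u ** 3 / 3.0 - v + D * lap)
        dv = dt * eps * (u + beta - gamma * v)
        u, v = u + du, v + dv
        u[~mask], v[~mask] = U_REST, V_REST   # enforce the blocked-space boundary
        front = mask & (arrival < 0) & (u > 0.0)
        arrival[front] = t                    # cells first excited at step t form
        if arrival[goal] >= 0:                # the t-th equidistant surface
            break
    else:
        raise RuntimeError("the wave never reached the goal")

    # Trace back the equidistant surfaces: starting from the goal, repeatedly
    # step to the 4-neighbour with the earliest arrival time until the start
    # appears (ties between equally early neighbours are not handled here).
    path, cell = [goal], goal
    while cell != start:
        r, c = cell
        nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        cell = min((p for p in nbrs if arrival[p] >= 0), key=lambda p: arrival[p])
        path.append(cell)
    return path[::-1]

# Example: an empty 60 x 60 room with a blocked border.
# free = np.zeros((60, 60), dtype=bool); free[1:-1, 1:-1] = True
# route = shortest_path_excitable(free, (5, 5), (54, 54))

Because the excitable wave travels at an approximately constant speed and annihilates rather than reflects on collision, the arrival-time field behaves like a geodesic distance map of the free space; that is what lets the greedy descent above recover a shortest collision-free route, in the spirit of the trace-back of equidistant surfaces described in Section 3.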