Image Retrieval Using Modified Generic Fourier Descriptors
Climbing ability and body surface morphology of the spotless stout newt

Jiao Qing, Li Meng, Dai Qingwen, Wang Xiaolei (College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu 210016, China). Climbing ability and body surface morphology of the spotless stout newt [J]. Jiangsu Agricultural Sciences, 2020, 48(15): 260-264. doi:10.15889/j.issn.1002-1302.2020.15.048

Abstract: In nature, the spotless stout newt can easily cling to many different types of wet, slippery wall surfaces.

Wavelet optimization for content-based image retrieval in medical databases
G. Quellec a,c, M. Lamard b,c,*, G. Cazuguel a,c, B. Cochener b,c,d, C. Roux a,c
* Corresponding author. Address: Univ Bretagne Occidentale, Brest F-29200, France. Tel.: +33 298018110; fax: +33 298018124. E-mail addresses: gwenole.quellec@telecom-bretagne.eu (G. Quellec), mathieu.lamard@univ-brest.fr (M. Lamard), guy.cazuguel@telecom-bretagne.eu (G. Cazuguel), beatrice.cochener@chu-brest.fr (B. Cochener), christian.roux@telecom-bretagne.eu (C. Roux). 1361-8415/$ - see front matter © 2009 Elsevier B.V. All rights reserved. doi:10.1016/j.media.2009.11.004
Medical Image Analysis xxx (2009) xxx–xxx

UTMI+ Low Pin Interface (ULPI) Specification
Revision 1.1
October 20, 2004

Revision History

Revision  Date                Comment/Issue
0.9       November 12, 2003   Pre-release.
1.0rc1    January 3, 2004     Introduce PHY interface "modes". Update interface timings. Clarify 4-bit data clocking. Clarify sending of RX CMDs and interrupts. Introduce AutoResume feature. Route int pin to data(3) during 6-pin Serial Mode. Explain VBUS thresholds. Add T&MT diagram and updated text. Add new section to explain how PHY is aborted by Link. Various clarifications.
1.0rc2    January 13, 2004    Add block diagram. Tighten interface timing. Modify suspend protocol to more closely resemble UTMI. Add SPKR_L and SPKR_MIC to signal list and T&MT connector. Various clarifications.
1.0rc3    January 19, 2004    Specify that PHY must send RX CMD after Reset. Link + PHY clock startup time of no more than 5.6 ms for a peripheral is now mandatory. PHY output delay reduced from 10 ns to 9 ns. Added link decision time numbers for low speed. Various clarifications.
1.0       February 2, 2004    1.0rc3 adopted as 1.0 release.
1.1rc1    September 1, 2004   Various clarifications and fixes to hold time numbers, sending RX CMDs, FsLsSerialMode, Vbus control and monitoring, Test_J and Test_K signalling, Low Power Mode, Hostdisconnect, ID detection, HS SOF packets, interrupts, Carkit Mode, interface protection, No SYNC/EOP mode, linestate filtering, and AutoResume.
1.1rc2    October 4, 2004     Re-arranged text in section 3.8.7.3. Updated contributors list.
1.1       October 20, 2004    1.1rc2 adopted as 1.1 release.

The present Specification has been circulated for the sole benefit of legally-recognized Promoters, Adopters and Contributors of the Specification. All rights are expressly reserved, including but not limited to intellectual property rights under patents, trademarks, copyrights and trade secrets.
The respective Promoter's, Adopter's or Contributor's agreement entered into by Promoters, Adopters and Contributors sets forth their conditions of use of the Specification.

Promoters
ARC International Inc.
Conexant Systems, Inc.
Mentor Graphics Corporation
Philips
SMSC
TransDimension, Inc.

Contributors
Bart Vertenten (Philips)
Batuhan Okur (Philips)
Bill Anderson (Motorola)
Bill McInerney (TransDimension)
Brian Booker (Cypress)
Chris Belanger (ARC)
Chris Kolb (ARC)
Chris Schell (Philips)
Chung Wing Yan (Philips)
Dave Sroka (Philips)
Dave Wang (Philips)
David Wooten (TransDimension)
David Mackay (SMSC)
Eric Kawamoto (Philips)
Farran Frazier (Conexant)
Farooq Hassan (Conexant)
Frank
Fred Roberts (Synopsys)
Hyun Lee (TransDimension)
Ian Parr (Mentor)
Jay Standiford (TransDimension)
Jerome Tjia (Philips)
Mark Saunders (Mentor)
Mohamed Benromdhane (Conexant)
Morgan Monks (SMSC)
Nabil Takla (ISI)
Peter Tengstrand (ARC)
Ramanand Mandayam (Conexant)
Rob Douglas (Mentor)
Saleem Mohamed (Synopsys) (Author)
Shaun Reemeyer (Philips)
Simon Nguyen (Cypress)
Subramanyam Sankaran (Philips)
Sue Vining (Texas Instruments)
Terry Remple (Qualcomm)
Timothy Chen (Conexant)
Vincent Chang (Conexant)

Questions should be emailed to lpcwg@.

Table of Contents

1. Introduction
   1.1 General
   1.2 Naming Convention
   1.3 Acronyms and Terms
   1.4 References
2. Generic Low Pin Interface
   2.1 General
   2.2 Signals
   2.3 Protocol
       2.3.1 Bus Ownership
       2.3.2 Transferring Data
       2.3.3 Aborting Data
3. UTMI+ Low Pin Interface
   3.1 General
   3.2 Signals
   3.3 Block Diagram
   3.4 Modes
   3.5 Power On and Reset
   3.6 Interrupt Event Notification
   3.7 Timing
       3.7.1 Clock
       3.7.2 Control and Data
   3.8 Synchronous Mode
       3.8.1 ULPI Command Bytes
       3.8.2 USB Packets
       3.8.3 Register Operations
       3.8.4 Aborting ULPI Transfers
       3.8.5 USB Operations
       3.8.6 Vbus Power Control (internal and external)
       3.8.7 OTG Operations
   3.9 Low Power Mode
       3.9.1 Data Line Definition For Low Power Mode
       3.9.2 Entering Low Power Mode
       3.9.3 Exiting Low Power Mode
       3.9.4 False Resume Rejection
   3.10 Full Speed / Low Speed Serial Mode (Optional)
       3.10.1 Data Line Definition For FsLsSerialMode
       3.10.2 Entering FsLsSerialMode
       3.10.3 Exiting FsLsSerialMode
   3.11 Carkit Mode (Optional)
   3.12 Safeguarding PHY Input Signals
4. Registers
   4.1 Register Map
   4.2 Immediate Register Set
       4.2.1 Vendor ID and Product ID
       4.2.2 Function Control
       4.2.3 Interface Control
       4.2.4 OTG Control
       4.2.5 USB Interrupt Enable Rising
       4.2.6 USB Interrupt Enable Falling
       4.2.7 USB Interrupt Status
       4.2.8 USB Interrupt Latch
       4.2.9 Debug
       4.2.10 Scratch Register
       4.2.11 Carkit Control
       4.2.12 Carkit Interrupt Delay
       4.2.13 Carkit Interrupt Enable
       4.2.14 Carkit Interrupt Status
       4.2.15 Carkit Interrupt Latch
       4.2.16 Carkit Pulse Control
       4.2.17 Transmit Positive Width
       4.2.18 Transmit Negative Width
       4.2.19 Receive Polarity Recovery
       4.2.20 Reserved
       4.2.21 Access Extended Register Set
       4.2.22 Vendor-specific
   4.3 Extended Register Set
   4.4 Register Settings for all Upstream and Downstream signalling modes
5. T&MT Connector
   5.1 General
   5.2 Daughter-card (UUT) Specification

Figures
Figure 1 – LPI generic data bus ownership
Figure 2 – LPI generic data transmit followed by data receive
Figure 3 – Link asserts stp to halt receive data
Figure 4 – Creating a ULPI system using wrappers
Figure 5 – Block diagram of ULPI PHY
Figure 6 – Jitter measurement planes
Figure 7 – ULPI timing diagram
Figure 8 – Clocking of 4-bit data interface compared to 8-bit interface
Figure 9 – Sending of RX CMD
Figure 10 – USB data transmit (NOPID)
Figure 11 – USB data transmit (PID)
Figure 12 – PHY drives an RX CMD to indicate EOP (FS/LS LineState timing not to scale)
Figure 13 – Forcing a full/low speed USB transmit error (timing not to scale)
Figure 14 – USB receive while dir was previously low
Figure 15 – USB receive while dir was previously high
Figure 16 – USB receive error detected mid-packet
Figure 17 – USB receive error during the last byte
Figure 18 – USB HS, FS, and LS bit lengths with respect to clock
Figure 19 – HS transmit-to-transmit packet timing
Figure 20 – HS receive-to-transmit packet timing
Figure 21 – Register write
Figure 22 – Register read
Figure 23 – Register read or write aborted by USB receive during TX CMD byte
Figure 24 – Register read turnaround cycle or register write data cycle aborted by USB receive
Figure 25 – USB receive in same cycle as register read data. USB receive is delayed
Figure 26 – Register read followed immediately by a USB receive
Figure 27 – Register write followed immediately by a USB receive during stp assertion
Figure 28 – Register read followed by a USB receive
Figure 29 – Extended register write
Figure 30 – Extended register read
Figure 31 – Extended register read aborted by USB receive during extended address cycle
Figure 32 – PHY aborted by Link asserting stp. Link performs register write or USB transmit
Figure 33 – PHY aborted by Link asserting stp. Link performs register read
Figure 34 – Link aborts PHY. Link fails to drive a TX CMD. PHY re-asserts dir
Figure 35 – Hi-Speed Detection Handshake (Chirp) sequence (timing not to scale)
Figure 36 – Preamble sequence (D+/D- timing not to scale)
Figure 37 – LS Suspend and Resume (timing not to scale)
Figure 38 – FS Suspend and Resume (timing not to scale)
Figure 39 – HS Suspend and Resume (timing not to scale)
Figure 40 – Low Speed Remote Wake-Up from Low Power Mode (timing not to scale)
Figure 41 – Full Speed Remote Wake-Up from Low Power Mode (timing not to scale)
Figure 42 – Hi-Speed Remote Wake-Up from Low Power Mode (timing not to scale)
Figure 43 – Automatic resume signalling (timing not to scale)
Figure 44 – USB packet transmit when OpMode is set to 11b
Figure 45 – RX CMD Vbus indication source (V A_VBUS_VLD ≤ Vbus)
Figure 46 – Entering low power mode
Figure 47 – Exiting low power mode when PHY provides output clock
Figure 48 – Exiting low power mode when Link provides input clock
Figure 49 – PHY stays in Low Power Mode when stp de-asserts before clock starts
Figure 50 – PHY re-enters Low Power Mode when stp de-asserts before dir de-asserts
Figure 51 – Interface behaviour when entering Serial Mode and clock is powered down
Figure 52 – Interface behaviour when entering Serial Mode and clock remains powered
Figure 53 – Interface behaviour when exiting Serial Mode and clock is not running
Figure 54 – Interface behaviour when exiting Serial Mode and clock is running
Figure 55 – PHY interface protected when the clock is running
Figure 56 – Power up sequence when PHY powers up before the link. Interface is protected
Figure 57 – PHY automatically exits Low Power Mode with interface protected
Figure 58 – Link resumes driving ULPI bus and asserts stp because clock is not running
Figure 59 – Power up sequence when link powers up before PHY (ULPI 1.0 compliant links)
Figure 60 – Recommended daughter-card configuration (not to scale)

Tables
Table 1 – LPI generic interface signals
Table 2 – PHY interface signals
Table 3 – Mode summary
Table 4 – Clock timing parameters
Table 5 – ULPI interface timing
Table 6 – Transmit Command (TX CMD) byte format
Table 7 – Receive Command (RX CMD) byte format
Table 8 – USB specification inter-packet timings
Table 9 – PHY pipeline delays
Table 10 – Link decision times
Table 11 – OTG Control Register power control bits
Table 12 – Vbus comparator thresholds
Table 13 – RX CMD VbusValid over-current conditions
Table 14 – Vbus indicators in the RX CMD required for typical applications
Table 15 – Interface signal mapping during Low Power Mode
Table 16 – Serial Mode signal mapping for 6-pin FsLsSerialMode
Table 17 – Serial Mode signal mapping for 3-pin FsLsSerialMode
Table 18 – Carkit signal mapping
Table 19 – Register map
Table 20 – Register access legend
Table 21 – Vendor ID and Product ID register description
Table 22 – Function Control register
Table 23 – Interface Control register
Table 24 – OTG Control register
Table 25 – USB Interrupt Enable Rising register
Table 26 – USB Interrupt Enable Falling register
Table 27 – USB Interrupt Status register
Table 28 – USB Interrupt Latch register
Table 29 – Rules for setting Interrupt Latch register bits
Table 30 – Debug register
Table 31 – Scratch register
Table 32 – Carkit Control register
Table 33 – Carkit Interrupt Delay register
Table 34 – Carkit Interrupt Enable register
Table 35 – Carkit Interrupt Status register
Table 36 – Carkit Interrupt Latch register
Table 37 – Carkit Pulse Control
Table 38 – Transmit Positive Width
Table 39 – Transmit Negative Width
Table 40 – Receive Polarity Recovery
Table 41 – Upstream and downstream signalling modes
Table 42 – T&MT connector pin view
Table 43 – T&MT connector pin allocation
Table 44 – T&MT pin description

1. Introduction

1.1 General
This specification defines a generic PHY interface in Chapter 2. In Chapter 3, the generic interface is applied to the UTMI+ protocol, reducing the pin count for discrete USB transceiver implementations supporting On-The-Go, host, and peripheral application spaces.

1.2 Naming Convention
Emphasis is placed on normal descriptive text using underlined Arial font, e.g. must. Signal names are represented using the lowercase bold Arial font, e.g. clk. Registers are represented using initial caps, bold Arial font, e.g. OTG Control. Register bits are represented using initial caps, bold italic Arial font, e.g. USB Interrupt Enable Falling.
1.3 Acronyms and Terms
A-device: Device with a Standard-A or Mini-A plug inserted into its receptacle
B-device: Device with a Standard-B or Mini-B plug inserted into its receptacle
DRD: Dual-Role Device
FPGA: Field Programmable Gate Array
FS: Full Speed
HNP: Host Negotiation Protocol
HS: Hi-Speed
Link: ASIC, SIE, or FPGA that connects to a ULPI transceiver
LPI: Low Pin Interface
LS: Low Speed
OTG: On-The-Go
PHY: Physical Layer (Transceiver)
PLL: Phase Locked Loop
SE0: Single Ended Zero
SIE: Serial Interface Engine
SRP: Session Request Protocol
T&MT: Transceiver and Macrocell Tester
ULPI: UTMI+ Low Pin Interface
USB: Universal Serial Bus
USB-IF: USB Implementers Forum
UTMI: USB 2.0 Transceiver Macrocell Interface
UUT: Unit Under Test

1.4 References
[Ref 1] Universal Serial Bus Specification, Revision 2.0
[Ref 2] On-The-Go Supplement to the USB 2.0 Specification, Revision 1.0a
[Ref 3] USB 2.0 Transceiver Macrocell Interface (UTMI) Specification, v1.05
[Ref 4] UTMI+ Specification, Revision 1.0
[Ref 5] CEA-2011, OTG Transceiver Specification
[Ref 6] CEA-936A, Mini-USB Analog Carkit Interface Specification
[Ref 7] USB 2.0 Transceiver and Macrocell Tester (T&MT) Interface Specification, Version 1.2

2. Generic Low Pin Interface

2.1 General
This section describes a generic low pin interface (LPI) between a Link and a PHY. Interface signals are defined and the basic communication protocol is described. The generic interface can be used as a common starting point for defining multiple application-specific interfaces. Chapter 3 defines the UTMI+ Low Pin Interface (ULPI), which is based on the generic interface described here. For ULPI implementations, the definitions in Chapter 3 override anything defined in Chapter 2.

2.2 Signals
The LPI transceiver interface signals are described in Table 1. The interface described here is generic, and can be used to transport many different data types.
Depending on the application, the data stream can be used to transmit and receive packets, access a register set, generate interrupts, and even redefine the interface itself. All interface signals are synchronous when clock is toggling, and asynchronous when clock is not toggling. The data stream definition is application-specific and should be explicitly defined for each application space to ensure interoperability.

Control signals dir, stp, and nxt are specified with the assumption that the PHY is the master of the data bus. If required, an implementation can define the Link as the master; in that case, the control signal directions and protocol must be reversed.

PHY Interface signals:

clock (I/O): Interface clock. Both directions are allowed. All interface signals are synchronous to clock.

data (I/O): Bi-directional data bus, driven low by the Link during idle. Bus ownership is determined by dir. The Link and PHY initiate data transfers by driving a non-zero pattern onto the data bus. LPI defines interface timing for single-edge data transfers with respect to the rising edge of clock. An implementation may optionally define double-edge data transfers with respect to both rising and falling edges of clock.

dir (OUT): Direction. Controls the direction of the data bus. When the PHY has data to transfer to the Link, it drives dir high to take ownership of the bus. When the PHY has no data to transfer, it drives dir low and monitors the bus for Link activity. The PHY pulls dir high whenever the interface cannot accept data from the Link, for example, when the internal PHY PLL is not stable.

stp (IN): Stop. The Link asserts this signal for one clock cycle to stop the data stream currently on the bus. If the Link is sending data to the PHY, stp indicates that the last byte of data was on the bus in the previous cycle. If the PHY is sending data to the Link, stp forces the PHY to end its transfer, de-assert dir, and relinquish control of the data bus to the Link.

nxt (OUT): Next. The PHY asserts this signal to throttle the data. When the Link is sending data to the PHY, nxt indicates when the current byte has been accepted by the PHY; the Link places the next byte on the data bus in the following clock cycle. When the PHY is sending data to the Link, nxt indicates when a new byte is available for the Link to consume.

Table 1 – LPI generic interface signals

2.3 Protocol

2.3.1 Bus Ownership
The PHY is the master of the LPI bi-directional data bus. Ownership of the data bus is determined by the dir signal from the PHY, as shown in Figure 1. When dir is low, the Link can drive data on the bus. When dir is high, the PHY can drive data on the bus. A change in dir causes a turnaround cycle on the bus, during which neither Link nor PHY can drive the bus. Data during the turnaround cycle is undefined and must be ignored by both Link and PHY. The dir signal can be used to directly control the data output buffers of both PHY and Link.

Figure 1 – LPI generic data bus ownership

2.3.2 Transferring Data
As shown in the first half of Figure 2, the Link continuously drives the data bus to 00h during idle. The Link transmits data to the PHY by driving a non-zero value on the data bus. To signal the end of data transmission, the Link asserts stp in the cycle following the last data byte. In the second half of Figure 2, the Link receives data when the PHY asserts dir. The PHY asserts dir only when it has data to send to the Link, and keeps dir low at all other times. The PHY drives data to the Link after the turnaround cycle. The nxt signal can be used by the PHY to throttle the data during transmit and receive.
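The Link-to-PHY transmit handshake just described (bus idles at 00h, the first byte must be non-zero, nxt throttles acceptance, stp asserts for one cycle after the last byte) can be sketched as a toy cycle-level model. The function name and the modelling of nxt as a per-cycle predicate are illustrative assumptions, not part of the specification:

```python
# Toy cycle-level model of the generic LPI Link-to-PHY transmit protocol.
IDLE = 0x00

def link_transmit(payload, nxt):
    """Return the per-cycle (data, stp) trace for a Link-to-PHY transfer.

    `nxt` maps a cycle number to a bool, modelling the PHY's nxt signal:
    False means the PHY has not accepted the byte, so the Link holds it
    on the bus for another cycle.
    """
    assert payload and payload[0] != IDLE, "Link leaves idle with a non-zero byte"
    trace, cycle, i = [], 0, 0
    while i < len(payload):
        trace.append((payload[i], 0))   # byte on the data bus, stp low
        if nxt(cycle):                  # nxt asserted: PHY accepted the byte
            i += 1
        cycle += 1
    trace.append((IDLE, 1))             # stp for one cycle after the last byte
    return trace

# PHY accepts a byte only every other cycle, so each byte is held twice
trace = link_transmit([0x40, 0xA5], nxt=lambda c: c % 2 == 1)
```

With nxt held high on every cycle, each byte would occupy exactly one cycle, giving a back-to-back transfer.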
During transmit, nxt may be asserted in the same cycle that the Link asserts stp.

Figure 2 – LPI generic data transmit followed by data receive

2.3.3 Aborting Data
The PHY can assert dir to interrupt any data being transmitted by the Link. If the Link needs to interrupt data being received from the PHY, it asserts stp for one clock cycle, as shown in Figure 3. This causes the PHY to unconditionally(1) de-assert dir and accept a complete data transmit from the Link. The PHY may re-assert dir again only when the data transmit from the Link has completed.

Figure 3 – Link asserts stp to halt receive data

(1) The PHY will not de-assert dir if the ULPI interface is not usable, for example, if the internal PLL is not stable.

3. UTMI+ Low Pin Interface

3.1 General
This section describes how any UTMI+ core can be wrapped to convert it to the smaller LPI interface. The generic interface described in Chapter 2 is used as a starting point. This section always overrides anything stated in Chapter 2. While this specification details support of UTMI+ Level 3, PHY implementers may choose to support any of the Levels defined in UTMI+.

ULPI defines a PHY-to-Link interface of 8 or 12 signals that allows a lower pin count option for connecting to an external transceiver that may be based on the UTMI+ specification. The pin count reduction is achieved by having relatively static UTMI+ signals be accessed through registers, and by providing a bi-directional data bus that carries USB data and provides a means of accessing register data on the ULPI transceiver.

This specification relies on concepts and terminology that are defined in the UTMI+ specification [Ref 4]. Specifically, if a ULPI PHY design is based on an internal UTMI+ core, then that core must implement the following UTMI+ features:

• Linestate must accurately reflect D+/D- to within 2-3 clocks.
It is up to individual Link designers to use Linestate to time bus events.

• Filtering to prevent spurious SE0/SE1 states appearing on Linestate due to skew between D+ and D-. Filtering of 14 clock cycles is required in Low Speed mode, and 2 clock cycles in Full Speed and Hi-Speed modes.

• The PHY must internally block the USB receive path during transmit. The receive path can be unblocked when the internal Squelch (HS) or SE0-to-J (FS/LS) is seen.

• TxReady must be used for all types of data transmitted, including Chirp.

• Due to noise on the USB, it is possible that RxActive asserts and then de-asserts without any valid data being received, and RxValid will not assert. The Link should operate normally with these data-less RxActive assertions.

As shown in Figure 4, a PHY or Link based on this specification can be implemented as an almost transparent wrapper around existing UTMI+ IP cores, preserving the original UTMI+ packet timing while reducing pin count and leaving all functionality intact. This should not be taken to imply that other implementations are not possible.

Figure 4 – Creating a ULPI system using wrappers

3.2 Signals
Table 2 describes the ULPI interface on the PHY. The PHY is always the master of the ULPI bus. USB and Miscellaneous signals may vary with each implementation and are given only as a guide to PHY designers.

PHY Interface:

clock (I/O): Interface clock. The PHY must be capable of providing a 60 MHz output clock. Support for an input 60 MHz clock is optional. If the PHY supports both clock directions, it must not use the ULPI control and data signals for setting the clock direction.

data (I/O): Data bus. Driven to 00h by the Link when the ULPI bus is idle. Two bus widths are allowed:
• 8-bit data timed on the rising edge of clock.
• (Optional) 4-bit data timed on rising and falling edges of clock.

dir (OUT): Controls the direction of the data bus(2). The PHY pulls dir high whenever the interface cannot accept data from the Link.
For example, when the internal PLL is not stable. This applies whether the Link or the PHY is the clock source.

stp (IN): The Link must assert stp to signal the end of a USB transmit packet or a register write operation, and optionally to stop any receive. The stp signal must be asserted in the cycle after the last data byte is presented on the bus.

nxt (OUT): The PHY asserts nxt to throttle all data types, except register read data and the RX CMD. Identical to RxValid during USB receive, and TxReady during USB transmit. The PHY also asserts nxt and dir simultaneously to indicate USB receive activity (RxActive), if dir was previously low. The PHY is not allowed to assert nxt during the first cycle of the TX CMD driven by the Link.

USB Interface:

D+ (I/O): D+ pin of the USB cable. Required.
D- (I/O): D- pin of the USB cable. Required.
ID (IN): ID pin of the USB cable. Required for OTG-capable PHYs.
VBUS (I/O): VBUS pin of the USB cable. Required for OTG-capable PHYs. Required for driving VBUS and for the VBUS comparators.

Miscellaneous:

XI (IN): Crystal input pin. Vendors should specify supported crystal frequencies.
XO (OUT): Crystal output pin.
C+ (I/O): Positive terminal of the charge pump capacitor.
C- (I/O): Negative terminal of the charge pump capacitor.
SPKR_L (IN): Optional Carkit left/mono speaker input signal.
SPKR_MIC (I/O): Optional Carkit right speaker input or microphone output signal.
RBIAS (I/O): Bias current resistor.

Table 2 – PHY interface signals

(2) UTMI+ wrapper developers should note that data bus control has been reversed from UTMI to ensure that USB data reception is not interrupted by the Link.

3.3 Block Diagram
An example block diagram of a ULPI PHY is shown in Figure 5. This example is based on an internal UTMI+ Level 3 core [Ref 4], which can interface to peripheral, host, and On-The-Go Link cores.
A description of each major block is given below.

Figure 5 – Block diagram of ULPI PHY

UTMI+ Level 3 PHY core
The ULPI PHY may contain a core that is compliant to any UTMI+ level [Ref 4]. Signals for 16-bit data buses are not supported in ULPI. While Figure 5 shows the typical blocks for a Level 3 UTMI+ core, the PHY vendor must specify the intended UTMI+ level, and provide the functionality necessary for compliance to that level.

ULPI PHY Wrapper
The ULPI PHY wrapper of Figure 5 reduces the UTMI+ interface to the Low Pin Interface described in this document. All signals shown on the UTMI+ Level 3 PHY core are reduced to the ULPI interface signals clock, data, dir, stp, and nxt. The Register Map stores the relatively static signals of the UTMI+ interface.

Crystal Oscillator and PLL
When a crystal is attached to the PHY, the internal clock(s) and the external 60 MHz interface clock are generated from the internal PLL. When no crystal is attached, the PHY may optionally generate the internal clock(s) from an input 60 MHz clock provided by the Link.

General Biasing
Internal analog circuits require an accurate bias current. This is typically generated using an external, accurate reference resistor.

DrvVbusExternal and ExternalVbusIndicator
The PHY may optionally control an external VBUS power source via the optional pin DrvVbusExternal. For example, the external supply could be a charge pump or a 5 V power supply controlled using a power switch. The external supply is controlled by the DrvVbus and the optional DrvVbusExternal bits in the OTG Control register. The polarity of the DrvVbusExternal output pin is implementation dependent.

If control of an external VBUS source is provided, the PHY may optionally provide a VBUS power source feedback signal on the optional pin ExternalVbusIndicator. If this pin is provided, its use is defined by the optional control bits in the OTG Control and Interface Control registers.
See Section 3.8.6.3 for further detail.

Power-On-Reset
A power-on-reset circuit must be provided in the PHY. When power is first applied to the PHY, the power-on-reset will reset all circuitry and leave the ULPI interface in a usable state.

Carkit Option
The PHY may optionally support Carkit Mode [Ref 6]. While in Carkit Mode, the PHY routes speaker and microphone signals between the Link and the USB cable. In carkit mono mode, SPKR_L inputs a mono speaker signal and SPKR_MIC outputs the microphone signal, MIC. In carkit stereo mode, SPKR_L inputs the left speaker signal, and SPKR_MIC inputs the right speaker signal, SPKR_R.

3.4 Modes
The ULPI interface can operate in one of five independent modes, listed in Table 3. The interface is in Synchronous Mode by default. Other modes are enabled by bits in the Function Control and Interface Control registers. In Synchronous Mode, the data bus carries commands and data. In other modes, the data pins are redefined with different functionality. Synchronous Mode and Low Power Mode are mandatory.

Synchronous Mode: This is the normal mode of operation. The clock is running and is stable with the characteristics defined in section 3.6. The ULPI interface carries commands and data that are synchronous to clock.

Low Power Mode: The PHY is powered down with the clock stopped. The PHY keeps dir asserted, and the data bus is redefined to carry LineState and interrupts. See section 3.9 for more information.

6-pin FS/LS Serial Mode (optional): The data bus is redefined to 6-pin serial mode, including 6 pins to transmit and receive serial USB data, and 1 pin to signal interrupt events. The clock can be enabled or disabled. This mode is valid only for implementations with an 8-bit data bus. See section 3.10 for more information.

3-pin FS/LS Serial Mode (optional): The data bus is redefined to 3-pin serial mode, including 3 pins to transmit and receive serial USB data, and 1 pin to signal interrupt events.
The clock can be enabled or disabled. See section 3.10 for more information.

Carkit Mode (optional): The data bus is redefined to Carkit mode [Ref 6], including 2 pins for serial UART data, and 1 pin to signal interrupt events. The clock may optionally be stopped. See section 3.11 for more information.

Table 3 – Mode summary
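The mode rules summarized in Table 3 (Synchronous and Low Power Modes mandatory, 6-pin serial mode only valid with an 8-bit bus, serial and Carkit modes optional) can be captured in a small sketch. The enum and helper below are hypothetical Link-side bookkeeping for illustration; the specification defines no such API:

```python
# Hypothetical model of the five ULPI interface modes from Table 3.
from enum import Enum, auto

class UlpiMode(Enum):
    SYNCHRONOUS = auto()   # default; bus carries commands and data
    LOW_POWER = auto()     # clock stopped; data bus carries LineState and interrupts
    SERIAL_6PIN = auto()   # optional FS/LS serial mode; 8-bit bus only
    SERIAL_3PIN = auto()   # optional FS/LS serial mode
    CARKIT = auto()        # optional Carkit mode [Ref 6]

# Synchronous Mode and Low Power Mode are mandatory for every implementation.
MANDATORY = {UlpiMode.SYNCHRONOUS, UlpiMode.LOW_POWER}

def allowed_modes(bus_width_bits, supports_serial=False, supports_carkit=False):
    """Return the set of modes a given implementation may use."""
    modes = set(MANDATORY)
    if supports_serial:
        modes.add(UlpiMode.SERIAL_3PIN)
        if bus_width_bits == 8:        # 6-pin serial requires the 8-bit data bus
            modes.add(UlpiMode.SERIAL_6PIN)
    if supports_carkit:
        modes.add(UlpiMode.CARKIT)
    return modes

# A 4-bit-bus PHY with serial support cannot offer 6-pin serial mode
modes = allowed_modes(bus_width_bits=4, supports_serial=True)
```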

Title: Document Management Best Practices: Ensuring Smooth Workflow Efficiency

Introduction:
Efficient management of documents plays a critical role in the success of any organization. Document management involves the creation, distribution, and storage of documents in an organized and easily accessible manner. This document provides an overview of best practices for managing documents effectively, focusing on the importance of clear file naming conventions, proper folder organization, version control, and security measures to ensure smooth workflow efficiency.

I. Clear File Naming Conventions:
1. Consistency: Establishing and following consistent file naming conventions is crucial for easy retrieval and organization of documents. Use a standardized format that reflects the content and purpose of the document.
2. Descriptive and Meaningful: Choose file names that clearly reflect the content of the document. Avoid generic terms or ambiguous abbreviations, as these can lead to confusion and difficulty in finding the desired files.
3. Use Keywords: Incorporate relevant keywords in file names to improve searchability. Including dates or project codes within the file names can also aid categorization and sorting.

II. Proper Folder Organization:
1. Intuitive Structure: Create a logical and intuitive folder structure to help users navigate the document repository effortlessly. Consider organizing folders by project, department, or document type.
2. Limit Subfolder Depth: Avoid excessively deep hierarchical structures, as they can lead to cumbersome navigation and reduced efficiency. Aim to keep the folder structure simple and easily understandable.
3. Avoid Duplication: Discourage replicating documents across folders. Instead, use shortcuts or symbolic links to keep documents in a single location while still making them available in relevant folders.

III. Version Control:
1. Document Versioning: Implement a systematic version control system to track changes made to documents over time. This allows users to access older versions if needed and ensures the integrity of the document management process.
2. Clear Version Naming: Use a clear and consistent naming convention to differentiate between versions of the same document. Incorporate information such as revision numbers, dates, or initials to avoid confusion.

IV. Security Measures:
1. Password Protection: Implement password protection for sensitive documents to ensure confidentiality and prevent unauthorized access.
2. User Permissions: Assign appropriate access rights to different users or groups, limiting access to sensitive or confidential materials. Regularly review and update permissions to maintain data security.
3. Backup and Recovery: Regularly back up documents to a secure location to prevent the loss of critical information in case of hardware failure, natural disasters, or other unforeseen events. Implement a robust recovery plan to restore files quickly.

Conclusion:
Effective document management is essential for maintaining workflow efficiency and ensuring easy access to critical information. By implementing clear file naming conventions, organizing folders logically, establishing version control practices, and applying security measures, organizations can streamline their document management processes. These best practices not only save time and effort but also enhance collaboration, productivity, and overall operational efficiency. With careful consideration and implementation of these strategies, organizations can establish a document management system that empowers employees and promotes business success.
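The naming and versioning conventions above can be sketched in a couple of helpers. The exact pattern (project code, ISO date, hyphenated keyword slug, and a `_vN` revision suffix) is an illustrative choice, not one the text mandates:

```python
# Illustrative helpers for keyword-rich file names and version suffixes.
import datetime
import re

def make_file_name(project_code, description, ext, date=None):
    """Build names like 'ACME_2023-04-01_quarterly-report-q1.pdf'."""
    date = date or datetime.date.today()
    # normalise the description: lowercase words joined by hyphens
    slug = re.sub(r"[^a-z0-9]+", "-", description.lower()).strip("-")
    return f"{project_code}_{date.isoformat()}_{slug}.{ext}"

def next_version(filename, existing):
    """Given 'spec.docx' and ['spec_v1.docx', 'spec_v2.docx'], return 'spec_v3.docx'."""
    stem, _, ext = filename.rpartition(".")
    pattern = re.compile(re.escape(stem) + r"_v(\d+)\." + re.escape(ext) + r"$")
    numbers = [int(m.group(1)) for f in existing for m in [pattern.match(f)] if m]
    return f"{stem}_v{max(numbers, default=0) + 1}.{ext}"

name = make_file_name("ACME", "Quarterly Report: Q1", "pdf",
                      date=datetime.date(2023, 4, 1))
```

Embedding the date and project code up front keeps files sortable and searchable, while the version suffix avoids silently overwriting earlier revisions.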
Research Summary

by Jun Yang

My areas of interest include multimedia information retrieval, Web search, digital libraries, computer vision, and multimedia databases, which share a common theme: the management of multimedia data. This interest originates from the insight that, without effective access and management tools, the pervasive and ever-expanding body of multimedia information will become more frustrating and less valuable to end users. Starting from the third year of my undergraduate study, I have worked in the broad area of multimedia for four years at four different research institutions: the Microsoft Visual Perception Lab of Zhejiang University; Siemens in Vienna, Austria; Microsoft Research Asia; and the Dept. of Computer Engineering and Information Technology at City University of Hong Kong. I have published over 15 refereed papers in international conferences, journals, and book chapters, and built several prototype systems. In this document, I summarize my research achievements by domain, with reference to my representative publications.

1. Multi-modal Information Retrieval

This work is motivated by the conservative "one system, one media" framework observed among existing multimedia information retrieval systems, i.e., each system can deal with only a single type of media based on a single type of knowledge (e.g., content-based image retrieval). To remedy this limitation, we advocate a "one system for all" framework by proposing multi-modal information retrieval, where "multi-modal" is defined at three levels: (1) multiple types of media data (text, images, videos, etc.) are retrieved in an integrated manner; (2) multiple sources of knowledge are explored; and (3) multiple retrieval approaches/techniques are employed.
Two research projects along this direction are described below:

• Octopus – an aggressive search mechanism for multi-modal information [1]: Octopus is a mechanism for aggressive retrieval of multi-modal data (i.e., a mixture of text, images, videos, etc.) in an integrated manner. It is based on a multifaceted knowledge base built on a layered graph model (LGM), which describes the relevance relationships among media objects deduced from low-level features, contextual knowledge (e.g., hyperlinks), and user-system interactions. Link analysis, a technique used extensively in Web search, is applied to the LGM to find media objects relevant to user queries. Furthermore, an incremental relevance feedback technique updates the knowledge base by learning from user-system interactions, enhancing retrieval performance in a "hill-climbing" manner. Octopus supports a highly flexible retrieval scenario, where users are free to submit any media objects as query examples and receive any media objects as results. Our recent work has addressed the interface design of Octopus [15].

• CoSEEM – a cooperative search engine for multimedia in digital libraries [2,3]: As the predecessor of Octopus, CoSEEM is a retrieval framework for multimedia information in digital libraries. It focuses on the use of uniform semantic descriptions (keywords) to retrieve various types of media objects in an integrated manner. A learning-from-elements strategy propagates and updates the descriptive keywords associated with media objects, and a cross-media search mechanism searches for media objects by combining their low-level features and semantic descriptions.
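The papers above do not spell out the link-analysis computation here, but the idea of propagating relevance from a query object along weighted links can be sketched as follows. In this Python sketch, the toy graph, the edge weights, and the PageRank-style damping factor are illustrative assumptions, not the actual LGM construction from the Octopus work:

```python
# Toy relevance propagation over a weighted graph of media objects.
# The graph, weights, damping factor, and iteration count are invented
# for illustration; they are not the LGM construction from the papers.
graph = {
    "query_img": {"textA": 0.8, "videoB": 0.5},
    "textA": {"query_img": 0.8, "imgC": 0.6},
    "videoB": {"query_img": 0.5, "imgC": 0.3},
    "imgC": {"textA": 0.6, "videoB": 0.3},
}

def propagate(graph, seed, iterations=20, damping=0.85):
    """Spread relevance from `seed` along weighted links (PageRank-style)."""
    scores = {node: 0.0 for node in graph}
    scores[seed] = 1.0
    for _ in range(iterations):
        new = {}
        for node in graph:
            incoming = 0.0
            for src, nbrs in graph.items():
                if node in nbrs:
                    # Each object passes its score to its neighbours in
                    # proportion to the link weights.
                    incoming += scores[src] * nbrs[node] / sum(nbrs.values())
            teleport = 1.0 if node == seed else 0.0
            new[node] = (1 - damping) * teleport + damping * incoming
        scores = new
    return scores

scores = propagate(graph, "query_img")
ranked = sorted(scores, key=scores.get, reverse=True)  # most relevant first
```

Ranking all nodes by the resulting score returns media objects of any type for a query of any type, matching the "any media in, any media out" scenario described above.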
2. Semantics- and Content-Based Image Retrieval

Due to the "semantic gulf" between low-level features and high-level user queries, content-based image retrieval (CBIR) is still of limited practicability in general settings, while its killer applications in specific domains are yet to be found. In view of this, my research on CBIR emphasizes the role of human-computer interaction in achieving semantics-based and personalized image retrieval. The specific techniques I have investigated toward this goal include lexical thesauri [4,5], user profiling [6], a graph-theoretic model [7], and "peer indexing" [8,9], as summarized below:

• Thesaurus-aided image retrieval and browsing [4,5]: This approach exploits a lexical thesaurus (specifically, WordNet) to support fuzzy matching in keyword-based image retrieval. By examining the similarity between different keywords, our approach can match a keyword query with images annotated by different but related keywords (e.g., matching a query for "animal" with images annotated with "tiger"), which simple keyword matching cannot do. Although WordNet has been used extensively in IR, our approach is unique in combining semantic keywords with low-level image features for image retrieval. Moreover, we propose a dynamic semantic hierarchy, automatically constructed from WordNet, to support image navigation by semantic subject.

• Personalized image retrieval based on user profiling [6]: The objective of this work is personalized image retrieval based on a synergy of relevance feedback and information filtering/recommendation techniques. Specifically, a "common profile" and a set of "user profiles" are constructed from user feedback to model the common knowledge and the personal views of individual users, respectively. Our profile-based image retrieval approach enables "learning from others" by exploring the common profile, as well as "learning from history" by exploring user profiles. The retrieval results therefore strike a balance between matching the common sense of the entire user community and catering to the personal interests of each individual user.

• A graph-theoretic model for image retrieval [7]: In an attempt to remedy the limitations of traditional "memoryless" relevance feedback techniques, we have proposed a graph-theoretic model for incremental relevance feedback in image retrieval. A two-layered graph model memorizes the semantic correlations among images progressively derived from user feedback, and link analysis is used to explore the graph model for image retrieval. This approach outperforms traditional approaches in both short-term (intra-session) and long-term (inter-session) performance.

• Data- and user-adaptive image retrieval based on "peer indexing" [8]: Peer indexing is based on an intuitive idea: indexing an image by its semantically related peer images. The peer index of an image, a list of weighted peer images, can be acquired from user feedback by the proposed learning strategy. Because a peer image is analogous to a keyword (a "visual keyword"), mature IR techniques (e.g., the TF/IDF weighting scheme and the cosine similarity metric) can be applied to image retrieval based on peer indexing, in cooperation with low-level image features. Our recent work along this direction has focused on data- and user-adaptive image retrieval [9] using two-level peer indexing.

3. Vector-Based Media (Flash™) Management

Recent years have witnessed the phenomenal growth of Flash, a vector-based animation format set forth by Macromedia Inc., which has over 440 million viewers worldwide.
This remarkable popularity justifies the need to investigate the management of Flash, which is critical to better utilization of the enormous Flash resource but has unfortunately been overlooked by the research community. We therefore propose FLAME (FLash Access and Management Environment) [10,11], which covers a variety of management issues for Flash animations. Currently, FLAME consists of three functional components: (1) a content-based retrieval component, which addresses the indexing, retrieval, and query specification of Flash animations by exploring the content characteristics of their embedded media ingredients, spatio-temporal features, and user interactions; (2) a classification component, which automatically classifies Flash animations into predefined categories, such as MTV, commercial advertisement, cartoon, and e-postcard, based on their content characteristics; and (3) a segmentation component, which partitions long Flash animations into shot/scene structures defined similarly to their counterparts in video segmentation. Further issues to be explored under FLAME include a Flash search engine, copyright protection, and sample-based Flash authoring.

4. Multimedia Database

My primary goal in this area is to apply database technology to the efficiency and scalability problems that plague data-intensive multimedia information systems. One specific problem is the "semantic gap" between semantics-intensive multimedia applications and conventional databases, which are inadequate for modeling the context-dependent semantics of multimedia data. We have proposed MediaView [12,13], an extended object-oriented view mechanism, to bridge this semantic gap. Specifically, this mechanism captures the dynamic semantics of multimedia using a modeling construct named media view, which formulates a customized context where heterogeneous media objects with related semantics are characterized by semantic properties and relationships.

Another proposal is a self-adaptive semantic schema mechanism (SSM) for multimedia databases [14]. The SSM is implemented on an object-oriented data model in which classes are organized into a semantic hierarchy. As its unique feature, the SSM supports adaptive schema evolution, expanding with new classes and/or compacting by removing inefficient classes when the conditions of predefined ECA rules are satisfied. This self-adaptive evolution strategy allows a data schema to be automatically optimized for each particular multimedia application (especially multimedia retrieval systems), achieving a dynamic, application-specific balance between modeling capability and efficiency.

5. Video-Based Human Animation

To overcome the shortcomings of conventional human animation techniques, we have proposed a video-based human animation approach [16]. Given a video clip containing human motion, we first recognize and track the human joints across a sequence of video frames with the aid of Kalman filtering and morph-block matching. From the recognized joints, we construct the corresponding 3-D human motion skeleton sequence under perspective projection, using camera calibration techniques and knowledge of human anatomy. Finally, a motion library is established by annotating multiform motion attributes, which animators can browse and query. This approach offers rich source material, low computational cost, efficient production, and realistic animation results.

6. Video Segmentation

Segmentation of video clips serves as the basis of video indexing and retrieval.
We have developed a prototype system for parsing video clips, especially news videos, into sequences of shots and scenes. Shot boundaries are detected by examining the difference between the color histograms of consecutive frames using the "twin-comparison" algorithm, which is robust in detecting gradual transitions (zoom, fade in/out, dissolve, etc.). In particular, for news videos with a prior model of the temporal video structure, we group the segmented shots into higher-level units such as news stories, weather forecasts, and commercials.

References:
1. Jun Yang, Qing Li, Yueting Zhuang, "Octopus: Aggressive Search of Multi-Modality Data Using Multifaceted Knowledge Base", Proc. of 11th Int'l Conf. on World Wide Web, pp. 54-64, Hawaii, USA, May 2002.
2. Jun Yang, Yueting Zhuang, Qing Li, "Search for Multi-Modality Data in Digital Libraries", Proc. of 2nd IEEE Pacific-Rim Conf. on Multimedia, pp. 482-489, Beijing, China, 2001.
3. Jun Yang, Yueting Zhuang, Qing Li, "Multi-Modal Retrieval for Multimedia Digital Libraries: Issues, Architecture, and Mechanisms", Proc. of Int'l Workshop on Multimedia Information Systems, pp. 81-88, Capri, Italy, 2001.
4. Jun Yang, Liu Wenyin, Hongjiang Zhang, Yueting Zhuang, "Thesaurus-Aided Approach for Image Retrieval and Browsing", Proc. of 2nd IEEE Int'l Conf. on Multimedia and Expo, pp. 313-316, Tokyo, Japan, 2001.
5. Jun Yang, Liu Wenyin, Hongjiang Zhang, Yueting Zhuang, "An Approach to Semantics-Based Image Retrieval and Browsing", Proc. of 7th Int'l Conference on Distributed Multimedia Systems, Taiwan, 2001.
6. Qing Li, Jun Yang, Yueting Zhuang, "Web-Based Multimedia Retrieval: Balancing out between Common Knowledge and Personalized Views", Proc. of 2nd Int'l Conf. on Web Information System Engineering, pp. 92-101, Kyoto, Japan, 2001.
7. Yueting Zhuang, Jun Yang, Qing Li, "A Graphic-Theoretic Model for Incremental Relevance Feedback in Image Retrieval", Proc. of 2002 Int'l Conf. on Image Processing, New York, Sep. 2002.
8. Jun Yang, Qing Li, Yueting Zhuang, "Image Retrieval and Relevance Feedback Using Peer Indexing", Proc. of 2002 IEEE Int'l Conf. on Multimedia and Expo, Lausanne, Switzerland, Aug. 2002.
9. Jun Yang, Qing Li, Yueting Zhuang, "Modeling Data and User Characteristics by Peer Indexing in Content-Based Image Retrieval", The 9th Int'l Conf. on Multimedia Modeling, Taiwan, 2003. (accepted)
10. Jun Yang, Qing Li, Liu Wenyin, Yueting Zhuang, "Search for Flash Movies on the Web", Proc. of the 3rd Int'l Conf. on Web Information Systems Engineering, Workshop on Mining for Enhanced Web Search, Singapore, 2002.
11. Jun Yang, Qing Li, Liu Wenyin, Yueting Zhuang, "FLAME: A Generic Framework for Content-Based Flash Retrieval", Proc. of the 4th Int'l Workshop on Multimedia Information Retrieval, in conjunction with ACM Multimedia 2002, Juan-les-Pins, France, 2002.
12. Qing Li, Jun Yang, Yueting Zhuang, "MediaView: A Semantic View Mechanism for Multimedia Modeling", Proc. of the 3rd IEEE Pacific-Rim Conf. on Multimedia, Taiwan, Dec. 2002. (accepted)
13. Qing Li, Jun Yang, Yueting Zhuang, "Chapter 9: A Semantic Data Modeling Mechanism for Multimedia Databases", in Multimedia Information Retrieval and Management, edited by Hong-Jiang Zhang et al.
14. Jun Yang, Qing Li, Yueting Zhuang, "A Self-Adaptive Semantic Schema Mechanism for Multimedia Databases", SPIE Photonics Asia: Electronic Imaging and Multimedia Technology III, Proc. vol. 4926, pp. 69-79, Shanghai, China, Oct. 2002.
15. Jun Yang, Qing Li, Yueting Zhuang, "A Multimodal Information Retrieval System: Mechanism and Interface", IEEE Trans. on Multimedia. (submitted)
16. Zhuang Yueting, Liu Xiaoming, Pan Yunhe, Yang Jun, "Human Three-Dimension Motion Skeleton Reconstruction of Motion Image Sequence", Journal of Computer-Aided Design & Computer Graphics, 12(4), 245-251, 2002. (in Chinese)
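The twin-comparison shot detection described in Section 6 can be sketched as follows. In this Python sketch, the two thresholds and the synthetic per-frame histogram-difference sequence are illustrative assumptions rather than values from the prototype system; only the core idea is shown: a high threshold flags abrupt cuts, while accumulated moderate differences flag gradual transitions.

```python
def twin_comparison(diffs, t_low=0.1, t_high=0.5):
    """Sketch of the twin-comparison idea: a frame-to-frame histogram
    difference above t_high marks an abrupt cut; a run of differences
    above t_low whose accumulated sum exceeds t_high marks a gradual
    transition (zoom, fade, dissolve). Thresholds are illustrative."""
    boundaries = []
    acc, start = 0.0, None
    for i, d in enumerate(diffs):
        if d >= t_high:                  # abrupt cut
            boundaries.append(("cut", i))
            acc, start = 0.0, None
        elif d >= t_low:                 # candidate gradual transition
            if start is None:
                start = i
            acc += d
            if acc >= t_high:
                boundaries.append(("gradual", start))
                acc, start = 0.0, None
        else:                            # candidate ended without confirming
            acc, start = 0.0, None
    return boundaries

# Synthetic difference sequence: one hard cut, then a slow dissolve.
diffs = [0.02, 0.03, 0.8, 0.02, 0.15, 0.2, 0.18, 0.03]
print(twin_comparison(diffs))  # [('cut', 2), ('gradual', 4)]
```

In the real system the differences come from color histograms of consecutive frames, and the detected boundaries feed the news-story grouping step described above.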
Clinical Trial English Vocabulary

English-Chinese Glossary of Technical Terms and Abbreviations (专业术语、缩略语中英对照表)

Abbreviations (abbreviation, English full name, Chinese full name):
ABE  Average Bioequivalence  平均生物等效性
AC  Active Control  阳性对照
ADE  Adverse Drug Event  药物不良事件
ADR  Adverse Drug Reaction  药物不良反应
AE  Adverse Event  不良事件
AI  Assistant Investigator  助理研究者
ALB  Albumin  白蛋白
ALD  Approximate Lethal Dose  近似致死剂量
ALP  Alkaline Phosphatase  碱性磷酸酶
ALT  Alanine Aminotransferase  丙氨酸转氨酶
ANDA  Abbreviated New Drug Application  简化新药申请
ANOVA  Analysis of Variance  方差分析
AST  Aspartate Aminotransferase  天冬氨酸转氨酶
ATR  Attenuated Total Reflection  衰减全反射法
BA  Bioavailability  生物利用度
BE  Bioequivalence  生物等效性
BMI  Body Mass Index  体质指数
BUN  Blood Urea Nitrogen  血尿素氮
CATD  Computer-Assisted Trial Design  计算机辅助试验设计
CDER  Center for Drug Evaluation and Research  药品评价和研究中心
CFR  Code of Federal Regulations  美国联邦法规
CI  Co-Investigator  合作研究者
CI  Confidence Interval  可信区间
COI  Coordinating Investigator  协调研究者
CRC  Clinical Research Coordinator  临床研究协调者
CRF  Case Report/Record Form  病历报告表/病例记录表
CRO  Contract Research Organization  合同研究组织
CSA  Clinical Study Application  临床研究申请
CTA  Clinical Trial Application  临床试验申请
CTP  Clinical Trial Protocol  临床试验方案
CTR  Clinical Trial Report  临床试验报告
CTX  Clinical Trial Exemption  临床试验免责
CHMP  Committee for Medicinal Products for Human Use  人用药品委员会
DSC  Differential Scanning Calorimetry  差示扫描热量计
DSMB  Data Safety and Monitoring Board  数据安全及监控委员会
EDC  Electronic Data Capture  电子数据采集系统
EDP  Electronic Data Processing  电子数据处理系统
EWP  Europe Working Party  欧洲工作组
FDA  Food and Drug Administration  美国食品与药品管理局
FR  Final Report  总结报告
GCP  Good Clinical Practice  药物临床试验质量管理规范
GLP  Good Laboratory Practice  药物非临床试验质量管理规范
GLU  Glucose  葡萄糖
GMP  Good Manufacturing Practice  药品生产质量管理规范
HEV  Health Economic Evaluation  健康经济学评价
IB  Investigator's Brochure  研究者手册
IBE  Individual Bioequivalence  个体生物等效性
IC  Informed Consent  知情同意
ICF  Informed Consent Form  知情同意书
ICH  International Conference on Harmonisation  国际协调会议
IDM  Independent Data Monitoring  独立数据监察
IDMC  Independent Data Monitoring Committee  独立数据监察委员会
IEC  Independent Ethics Committee  独立伦理委员会
IND  Investigational New Drug  新药临床研究
IRB  Institutional Review Board  机构审查委员会
ITT  Intention-to-Treat  意向性分析
IVD  In Vitro Diagnostic  体外诊断
IVRS  Interactive Voice Response System  互动语音应答系统
LD50  Median Lethal Dose  半数致死剂量
LLOQ  Lower Limit of Quantitation  定量下限
LOCF  Last Observation Carried Forward  最接近一次观察的结转
LOQ  Limit of Quantitation  检测限
MA  Marketing Approval/Authorization  上市许可证
MCA  Medicines Control Agency  英国药品监督局
MHW  Ministry of Health and Welfare  日本卫生福利部
MRT  Mean Residence Time  平均滞留时间
MTD  Maximum Tolerated Dose  最大耐受剂量
ND  Not Detectable  无法定量
NDA  New Drug Application  新药申请
NCE  New Chemical Entity  新化学实体
NIH  National Institutes of Health  国家卫生研究所(美国)
NMR  Nuclear Magnetic Resonance  核磁共振
PD  Pharmacodynamics  药效动力学
PI  Principal Investigator  主要研究者
PK  Pharmacokinetics  药物动力学
PL  Product License  产品许可证
PMA  Pre-Market Approval (Application)  上市前许可(申请)
PP  Per Protocol  符合方案集
PSI  Statisticians in the Pharmaceutical Industry  制药业统计学家协会
QA  Quality Assurance  质量保证
QAU  Quality Assurance Unit  质量保证部门
QC  Quality Control  质量控制
QWP  Quality Working Party  质量工作组
RA  Regulatory Authorities  监督管理部门
REV  Revision  修订
SA  Site Assessment  现场评估
SAE  Serious Adverse Event  严重不良事件
SAP  Statistical Analysis Plan  统计分析计划
SAR  Serious Adverse Reaction  严重不良反应
SD  Source Data/Document  原始数据/文件
SD  Subject Diary  受试者日记
SDV  Source Data Verification  原始数据核准
SEL  Subject Enrollment Log  受试者入选表
SFDA  State Food and Drug Administration  国家食品药品监督管理局
SI  Sponsor-Investigator  申办研究者
SI  Sub-Investigator  助理研究者
SIC  Subject Identification Code  受试者识别代码
SOP  Standard Operating Procedure  标准操作规程
SPL  Study Personnel List  研究人员名单
SSL  Subject Screening Log  受试者筛选表
T&R  Test and Reference Product  受试和参比试剂
T-BIL  Total Bilirubin  总胆红素
T-CHO  Total Cholesterol  总胆固醇
TG  Thromboglobulin  血小板球蛋白
Tmax  Time of Maximum Concentration  达峰时间
TP  Total Protein  总蛋白
UAE  Unexpected Adverse Event  预料外不良事件
WHO  World Health Organization  世界卫生组织
WHO-ICDRA  WHO International Conference of Drug Regulatory Authorities  WHO国际药品管理当局会议

General terms:
Aberrant result  异常结果
Absorption phase  吸收相
Absorption  吸收
Accuracy  准确度
Accurate  精密度
Administer  给药
Amendment  修正案
Approval  批准
Assess  估计
Audit  稽查
Audit Report  稽查报告
Auditor  稽查员
Analytical run/batch  分析批
Benefit  获益
Bias  偏性,偏倚
Bioequivalence  生物等效
Biosimilar/Follow-on biologics  生物仿制药
Blank Control  空白对照
Blind codes  编制盲底
Blind review  盲态检查/盲态审核
Blinding method  盲法
Blinding/masking  盲法/设盲
Block size  每段的长度
Block  层/分段
BCS (Biopharmaceutics Classification System)  生物药剂学分类系统
Carryover effect  延滞效应
Case history  病历
Clinical equivalence  临床等效性
Clinical study  临床研究
Clinical Trial Report  临床试验报告
Comparison  对照
Compensation  补偿,赔偿金
Compliance  依从性
Concomitant  伴随的
Conduct  行为
Confidence level  置信水平
Consistency test  一致性检验
Contract/agreement  协议/合同
Control group  对照组
Coordinating Committee  协调委员会
Crossover design  交叉设计
Cross-over Study  交叉研究
Cure  痊愈
Data management  数据管理
Descriptive statistical analysis  描述性统计分析
Dichotomies  二分类
Dispense  分布
Deviation  偏差
Documentation  记录/文件
Dosage forms  剂型
Dose dumping  剂量倾卸(药物迅速释放入血而达到危险浓度)
Dose-reaction relation  剂量-反应关系
Double blinding  双盲
Double dummy  双模拟
Drop out  脱落
Effectiveness  疗效
Elimination phase  消除相
Emergency envelope  应急信件
Enantiomers  对映体
End point  终点
Endpoint criteria/measurement  终点指标
Enterohepatic recycling  肠肝循环
Essential Documentation  必需文件
Ethical  伦理的
Ethics committee  伦理委员会
Evaluate  评估
Exclusion Criteria  排除标准
Excretion  排泄
Expedite  促进
Extrapolated  外推的
Essentially similar product  基本相似药物
Factorial design  析因设计
Failure  无效,失败
Financing  财务,资金
Final point  终点
First-pass metabolism  首过代谢
Fixed-dose procedure  固定剂量法
Full analysis set  全分析集
GC-FTIR  气相色谱-傅利叶红外联用
GC-MS  气相色谱-质谱联用
Generic drug  通用名药
Gene mutation  基因突变
Genotoxicity tests  遗传毒性试验
Global assessment variable  全局评价变量
Group sequential design  成组序贯设计
Hypothesis test  假设检验
Highly permeable  高渗透性
Highly soluble  高溶解度
Highly variable drug  高变异性药物
HVDP (Highly Variable Drug Product)  高变异药物制剂
Identification  鉴别,身份证
Improvement  好转
In vitro  体外
In vivo  体内
Inclusion Criteria  入选标准
Information Gathering  信息收集
Initial Meeting  启动会议
Inspection  检查/视察
Institution Inspection  机构检查
Instruction  指令,说明
Integrity  完整,正直
Intercurrent  中间发生的,间发的
Inter-individual variability  个体间变异性
Interim analysis  期中分析
Investigational Product  试验药物
Investigator  研究者
Involve  引起,包括
IR (infrared spectroscopy)  红外吸收光谱
Innovator Product  原创药
Ka (absorption rate constant)  吸收速率常数
LC-MS  液相色谱-质谱联用
Logarithmic transformation  对数转换
Logic check  逻辑检查
Lost to follow-up  失访
Mask  面具,掩饰
Matched pair  匹配配对
Metabolism  代谢
Missing value  缺失值
Mixed effect model  混合效应模式
Modified release products  改良释放剂型
Monitor  监查员
Monitoring Plan  监察计划
Monitoring Report  监察报告
MS-MS  质谱-质谱联用
Multi-center Trial  多中心试验
Negative  阴性,否定的
Non-clinical Study  非临床研究
Non-inferiority  非劣效性
Non-linear Pharmacokinetics  非线性药代动力学
Non-parametric statistics  非参数统计方法
NTID (Narrow Therapeutic Index Drug)  窄治疗指数制剂
Obedience  依从性
Open-blinding  非盲
Open-label  非盲
Original Medical Record  原始医疗记录
Outcome Assessment  结果评价
Outcome measurement  结果指标
Outlier  离群值
OIP  经口服吸收药物
Parallel group design  平行组设计
Parameter estimation  参数估计
Parametric statistics  参数统计方法
Patient file  病人档案
Patient History  病历
Per protocol (PP)  符合方案集
Permeability  渗透性
Pharmacodynamic characteristics  药效学特征
Pharmacokinetic characteristics  药代学特征
Placebo Control  安慰剂对照
Placebo  安慰剂
Polytomies  多分类
Post-dosing postures  给药后坐姿
Potential  潜在的
Power  检验效能
Precision  精密度
Preclinical Study  临床前研究
Precursor  母体前体
Premature  过早的,早发
Primary endpoint  主要终点
Primary variable  主要变量
Prodrug  药物前体
Protocol amendment  方案补正
Protocol Amendments  修正案
Protocol  试验方案
Quality Control Sample  质控样品
Rapidly dissolving  快速溶出
Racemates  外消旋物
Randomization  随机/随机化
Range check  范围检查
Rating scale  量表
Recruit  招募,新会员
Replication  可重复
Retrieval  取回,补修
Revise  修正
Risk  风险
Run-in  准备期
Safety evaluation  安全性评价
Safety set  安全性评价的数据集
Sample Size  样本量、样本大小
Sampling schedules  采血计划
Scale of ordered categorical ratings  有序分类指标
Secondary variable  次要变量
Sequence  试验次序
Seriousness  严重性
Severity  严重程度
Significance level  检验水准
Simple randomization  简单随机
Single Blinding  单盲
Site audit  试验机构稽查
Solubility  溶解度
Specificity  特异性
Specify  叙述,说明
Sponsor-investigator  申办研究者
Standard curve  标准曲线
Statistical model  统计模型
Statistical tables  统计分析表
Steady state  稳态
Storage  储存
Stratified  分层
Study Audit  研究稽查
Study Site  研究中心
Subgroup  亚组
Sub-investigator  助理研究者
Subject Enrollment Log  受试者入选表
Subject Enrollment  受试者入选
Subject Identification Code List  受试者识别代码表
Subject Recruitment  受试者招募
Subject Screening Log  受试者筛选表
Subject  受试者
Submit  交付,委托
Superiority test  优效性检验
Supplemental  增补的
Supra-bioavailability  超生物利用度(试验药的生物利用度大于对照药)
Survival analysis  生存分析
System Audit  系统稽查
SmPC (Summary of Product Characteristics)  药品说明书
Standard Sample  标准样品
Target variable  目标变量
Treatment group  试验组
Trial error  试验误差
Trial Initial Meeting  试验启动会议
Trial Master File  试验总档案
Trial Objective  试验目的
Trial site  试验场所
Triple Blinding  三盲
Two one-sided tests  双单侧检验
Therapeutic equivalence  治疗等效性
Un-blinding  破盲/揭盲
Verify  查证、核实
Visual analogue scale  直观类比打分法
Vulnerable subject  弱势受试者
Wash-out Period  洗脱期
Well-being  福利,健康
Withdraw  撤回,取消

Pharmacokinetic parameters (药代动力学参数):
Ae(0-t): cumulative amount of unchanged drug excreted in urine from the time of dosing to time t (给药到t时尿中排泄的累计原形药)
ACM, 2005: 51-58.[2]Akgül C B, Rubin D L, Napel S, et al. Content-based image retrieval in radiology: currentstatus and future directions[J]. Journal of Digital Imaging, 2011, 24(2): 208-222.[7]Zhang D, Islam M, Lu G. Structural image retrieval using automatic image annotation and region based inverted file[J]. Journal of Visual Communication and Image Representation, 2013, 24(7): 1087-1098.[8]Datta R, Joshi D, Li J, et al. Image retrieval: Ideas, influences, and trends of the new age[J]. ACM Computing Surveys (CSUR), 2008, 40(2): 111-115.[9]Li ZX, Shi ZP, Li ZQ, Shi ZZ. A survey of semantic mapping in image retrieval[J]. Journal of Computer-Aided Design and Computer Graphics, 2008,20(8):1085−1096 (in Chinese with English abstract).[10]Zhang D, Islam M M, Lu G. A review on automatic image annotation techniques[J]. Pattern Recognition, 2012, 45(1): 346-362.[11] Li J, Wang J Z. Automatic linguistic indexing of pictures by a statistical modeling approach[J]. Pattern Analysis and Machine Intelligence,2003, 25(9): 1075-1088.[12] Chang E, Goh K, Sychay G, et al. CBSA: content-based soft annotation for multimodal image retrieval using Bayes point machines[J]. Circuits and Systems for Video Technology,2003, 13(1): 26-38.[13]Jeon J, Lavrenko V, Manmatha R. Automatic image annotation and retrieval using cross-media relevance models[C]//Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval. ACM,2003: 119−126.[14] Lavrenko V, Manmatha R, Jeon J. A Model for Learning the Semantics of Pictures[C]//NIPS. 2003: 11-18.[15]Feng S L, Manmatha R, Lavrenko V. Multiple bernoulli relevance models for image and video annotation[C]//Proceedings of the IEEE Computer Society conference onComputer Vision and Pattern Recognition.IEEE, 2005: 51-58.[16]Tian D, Zhao X, Shi Z. An Efficient Refining Image Annotation Technique by Combining Probabilistic Latent Semantic Analysis and Random Walk Model[J]. 
Intelligent Automation & Soft Computing, 2014 (ahead-of-print): 1-11.。

pylon Release Notes
Version 2.3.0
Document ID Number: AW00013312
Revision Date: September 16, 2010
Subject to Change Without Notice
© Basler Vision Technologies

Installation Information

You can find detailed information on installing the Basler pylon software in the Installation and Setup Guide for Cameras Used with Basler's pylon API (AW000611xx000). You can download the guide free of charge from the Basler website.

Restricted Compatibility of the API

The programming interface for pylon 2.x is in some respects not backwards compatible with pylon 1.x. The pylon Programmer's Guide and API Reference describes the modifications needed to rebuild an application that was developed using pylon 1.0.

When updating an application from a pylon version older than pylon 2.2, the application's Visual Studio project settings must be adjusted with respect to the compiler include search path, the linker library search path, and the linker's "Additional Dependencies" settings. References to the GENICAM_ROOT or GENICAM_ROOT_V1_1 environment variables must be replaced by the PYLON_GENICAM_ROOT variable introduced with the pylon 2.2 release. If the import libraries for pylon or GenICam are explicitly listed in the linker configuration settings of your project, these references should be removed, since some of the libraries have been renamed. Your application should include the PylonIncludes.h header file rather than explicitly specify the import libraries; by including PylonIncludes.h, the required import libraries are linked to your application automatically. Refer to the "Building Applications with pylon" and "Migrating from Previous pylon Versions" sections of the pylon for Windows Programmer's Guide and API Reference for detailed information about the required project settings.

The Microsoft Windows 2000 operating system is no longer supported since pylon 2.2.
Version 2.3.0

New Features

Runtime
• Support of Windows 7.

C++ SDK
• Improved device enumeration and device creation. When enumerating cameras, a filter can be applied so that only devices matching given criteria are enumerated. Refer to the "Applying a Filter when Enumerating Cameras" section in the pylon Programmer's Guide and API Reference. The "Creating Specific Cameras" section describes how to create devices by specifying device properties such as serial number or IP address.
• The pylon version can be retrieved at runtime. Use the GetPylonVersion() function to query the version of the pylon installation. The VersionInfo class serves as a helper class for comparing version numbers.
• The camera classes used for camera configuration have been extended to support new camera features.
• CPixelTypeMapper class. The CPixelTypeMapper class provides a mapping of camera-specific pixel format values (e.g., "Mono8") to PixelType enum values (e.g., PixelType_Mono8).

IEEE 1394 Transport Layer
• Updated the camera description file for Basler cameras with IEEE 1394 interface. Camera features introduced with the latest firmware versions are supported.

Camera Link Transport Layer
• Specifying the maximum baud rate. By default, pylon uses the maximum possible baud rate reported by the Camera Link frame grabber. For some frame grabbers, however, communication at higher baud rates may be unreliable. The "probe device" functionality of the CL Configurator tool tests the reliability of the serial port currently probed. If there are any communication errors, it repeats the test at lower baud rates, shows a warning message, and suggests lowering the maximum baud rate to ensure proper communication. In addition, the CL Configurator tool provides a dialog for manually setting the maximum baud rate.
• 64-bit support for Camera Link. The 64-bit pylon installer installs both a 32-bit and a 64-bit version of the Transport Layer.
This allows operating Basler aviator cameras regardless of whether the frame grabber manufacturer provides 32-bit or 64-bit versions of the DLLs required for the serial communication.

Pylon Viewer
• A 64-bit version of the pylon Viewer is available. The 64-bit version is required to access Camera Link cameras if the frame grabber manufacturer provides only 64-bit versions of the DLLs required for the serial communication.
• Save images as .tiff files. Grabbed images can be saved as .tiff files. Saving .tiff images with 16 bits per pixel is supported.

Pylon NeuroCheck Driver
For NeuroCheck 6.0, a driver based on pylon is available. By installing pylon 2.3 and the pylon NeuroCheck driver, Basler GigE and IEEE 1394 cameras are supported by the NeuroCheck application. The pylon NeuroCheck driver can be downloaded from the Basler website. When installed, the assembly is visible in the Add Reference dialog in Visual Studio 2010.

Fixed Bugs

IEEE 1394 Transport Layer
• Frequent bus resets could have caused an IEEE 1394 camera to stop grabbing or to disappear from the IEEE 1394 bus.
• Occasionally, grabbing images failed with a "Not enough storage is available to process this command" error.

SDK
• When compiling pylon applications using Microsoft Visual Studio 2010, you may have received compilation errors.
• Including Basler1394Camera.h in your project prevented the delay loading of GCBase.DLL, resulting in a "Missing DLL" error message when starting the application.
• Fixed an issue where the installation of the Filter Driver on Windows 7 could have caused a bluescreen.
• Closing the stream grabber or calling FinishGrab() without stopping the image acquisition caused the camera to stream data although the required resources were already freed.
• The camera's trigger settings were not saved to file.

Installer
• On 64-bit Windows 7, the installer always requested a reboot after installing IEEE 1394 drivers.
• The installer occasionally hung when updating the environment.
• After uninstalling the Direct Show filter, it was still visible in the list of available Direct Show filters.

Documentation
• C++ camera class documentation: grouping of class members was broken.

SpeedOMeter
• The SpeedOMeter occasionally hung when closed while grabbing was still active.

Samples
• The VB6 and .NET viewer samples did not display Bayer images correctly.

Known Restrictions

Installation of .NET Features Fails When the .NET Framework Is Not Already Installed
Synopsis: Installing the pylon .NET runtime and SDK features fails if there is no Framework 2.0 installed.
Solution: Install the .NET 2.0 Framework. The installer can be downloaded from /downloads/details.aspx?displaylang=en&FamilyID=0856eacb-4362-4b0d-8edd-aab15c5e04f5. Note: Windows Vista or later have the .NET 2.0 Framework preinstalled.

The pylon::CLock and pylon::AutoLock Classes Are Deprecated
Synopsis: These classes are also provided by the GenApi SDK, which is part of the pylon SDK. For that reason, the pylon versions will be omitted in the next releases.
Solution: If you are using the Pylon::CLock or the Pylon::AutoLock classes in your application, include the <genapi/Synch.h> file instead of <pylon/Synch.h> and use the GenApi::CLock and GenApi::AutoLock classes instead of the Pylon::CLock and Pylon::AutoLock classes.

"missing driver" Error Message After Uninstalling pylon
Synopsis: When you uninstall a previous pylon version (pylon 2.2 or earlier), you may get an error message stating that drivers for IEEE 1394 Camera/Generic Desktop Camera could not be found or installed. Windows may sometimes even ask for a driver disk.
Solution: The error message is displayed because, after the pylon setup has uninstalled its drivers for IEEE 1394 cameras, Windows tries to restore the previous drivers for these cameras.
You can safely ignore this message, or click "Cancel" if Windows asks for a driver disk during uninstall.

Missing pylon/GenICam DLLs when Starting the pylon Viewer or Any Other Application Using pylon
Synopsis: When the pylon installation is performed under the Administrator's account, under some circumstances the changes to the environment variables don't take effect immediately. This leads to error messages that pylon DLLs cannot be found when starting a pylon-based application (e.g., the pylon Viewer complains about a missing PylonUtility_MD.dll).
Workaround: When installing pylon as Administrator, log off and on again after the pylon installation is finished. After logging on again, the environment changes become effective.

Restrictions on 64-Bit Operating Systems
Synopsis: In general, due to limited hardware or operating system resources, it is not possible to enqueue an arbitrary number of image buffers at a time. For the IEEE 1394 camera driver, there are limitations due to the restricted DMA resources provided by the OHCI chip sets. The actual amount of memory that can be fed into the driver's input queues depends on the currently available amount of free physical memory below 4 GByte. For the GigE drivers there are similar restrictions. Feeding in too much memory will result in exceptions and can cause a high CPU load and extremely reduced system performance.
Solution: You can use an arbitrary amount of memory for grabbing, but you must not register and feed all buffers into the driver's input queue simultaneously (using the RegisterBuffer() and QueueBuffer() methods). We recommend registering and enqueuing sufficient memory buffers for receiving images for a certain period (e.g., 250 ms). Each time a filled buffer is retrieved from the driver's output queue (using the RetrieveResult() method), it can be deregistered (using the DeregisterBuffer() method). Then a new buffer can be registered and fed into the driver's input queue.
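The recommended buffer rotation can be pictured with a self-contained toy model. Only the method names QueueBuffer and RetrieveResult echo the pylon API described above; the mock driver, its queue capacity, and the buffer type are hypothetical stand-ins, and registration/deregistration is folded into queueing/retrieving for brevity:

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <vector>

// Toy stand-in for a driver with a bounded input queue (limited DMA-like
// resources). Not the pylon API; names are borrowed for illustration only.
struct MockDriver {
    std::size_t capacity = 0;                        // resource limit
    std::deque<std::vector<unsigned char>*> input;   // driver's input queue

    bool QueueBuffer(std::vector<unsigned char>* buf) {
        if (input.size() >= capacity) return false;  // feeding in too much memory fails
        input.push_back(buf);
        return true;
    }
    std::vector<unsigned char>* RetrieveResult() {   // oldest buffer comes back "filled"
        std::vector<unsigned char>* buf = input.front();
        input.pop_front();
        buf->assign(buf->size(), 0xFF);              // pretend image data arrived
        return buf;
    }
};

// Grab `frames` images while never holding more than `window` buffers in the
// driver's queue: retrieve a filled buffer, process it, then re-queue it.
int GrabWithRotation(MockDriver& drv, std::size_t window, int frames) {
    std::vector<std::vector<unsigned char>> pool(window,
                                                 std::vector<unsigned char>(1024));
    for (auto& b : pool) drv.QueueBuffer(&b);        // enqueue only a bounded window
    int grabbed = 0;
    for (int i = 0; i < frames; ++i) {
        std::vector<unsigned char>* done = drv.RetrieveResult();
        ++grabbed;                                   // ...image processing would go here...
        drv.QueueBuffer(done);                       // rotate the buffer back in
    }
    return grabbed;
}
```

With a window sized for a short period of frames, the application can grab indefinitely without ever exceeding the driver's resource limit.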
See the pylon Programmer's Guide and API Reference for an explanation of the concepts of the queues for the drivers.

Applications Using a Previous Version of pylon (pylon 2.2 or earlier) May Not Compile
Synopsis: Applications using a previous pylon version (pylon 2.2 or earlier) may not compile when pylon 2.3 is used.
Solution: For information on how to adjust the code, see the pylon Programmer's Guide and API Reference.

MFC Applications Report Memory Leaks
Synopsis: In debug mode, MFC applications using the pylon API report a huge number of memory leaks.
Workaround: The pylon runtime libraries free the reported memory blocks only after the MFC memory tracking feature has dumped the assumed memory leaks. Use the PylonTerminate() method to explicitly free memory resources allocated by the pylon runtime system. In order to terminate the application properly, call PylonTerminate() in the ExitInstance() method of your application object. Due to a design problem of the MFC memory tracking feature, the memory leaks are dumped before DLLs such as the pylon runtime libraries are unloaded/freed, which in turn frees the global static data allocated by these DLLs. As a result, even after calling PylonTerminate(), the MFC memory tracking feature reports false memory leaks. As a workaround, the memory tracking feature can be disabled in the application's InitInstance() method:

    BOOL CWinMyApp::InitInstance()
    {
    #ifdef _DEBUG
        // Disable tracking of memory for the scope of InitInstance()
        AfxEnableMemoryTracking(FALSE);
    #endif // _DEBUG
        return TRUE;
    }

Unable to See Multiple GigE Cameras in the pylon IP Configuration Tool
Synopsis: When using multiple GigE cameras connected to more than one network adapter in your PC, not all cameras are listed in the pylon IP Configuration tool.
Solution: When using multiple network adapters or multiport network adapters, ensure that a different subnet is configured for each adapter (network interface).
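The "different subnet per adapter" rule can be illustrated with a short check (plain C++, not part of pylon; both function names are invented for this example): two interfaces collide exactly when their addresses agree on all bits selected by the netmask.

```cpp
#include <cassert>
#include <cstdint>

// Build a 32-bit IPv4 address from its dotted-quad octets.
constexpr std::uint32_t IPv4(std::uint32_t a, std::uint32_t b,
                             std::uint32_t c, std::uint32_t d) {
    return (a << 24) | (b << 16) | (c << 8) | d;
}

// Two interfaces are in the same subnet when their masked addresses match;
// GigE device discovery breaks down in exactly that situation.
constexpr bool SameSubnet(std::uint32_t ipA, std::uint32_t ipB,
                          std::uint32_t mask) {
    return (ipA & mask) == (ipB & mask);
}
```

For example, 192.168.1.10/24 and 192.168.1.20/24 share a subnet, while 192.168.1.10/24 and 192.168.2.20/24 do not; adapters left on automatic IP addressing all land in 169.254.0.0/16 and therefore always collide.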
When two or more adapters are configured for the same subnet, GigE cameras cannot be found. Be aware that configuring more than one network adapter for automatic IP addressing causes all of these adapters to share the same subnet (169.254.0.0/16), and this will lead to non-discoverable GigE cameras.

Unable to Establish a Connection to a GigE Camera Using a 100 MBit Network Adapter
Synopsis: When using a 100 MBit network adapter with a GigE camera, a connection can only be established if the speed and duplex mode settings of the network adapter are set to Auto.
Workaround: The Installation and Setup Guide for Cameras Used with Basler's pylon API describes how to install the Basler network drivers and how to configure the network adapter. Ensure that the speed and duplex mode settings of the network adapter are set to Auto.

Unable to Establish a Connection to a GigE Camera When Using Multiple Network Adapters (I)
Synopsis: When multiple network adapters are used on one PC, with multiple adapters included in the same subnet, connections to cameras cannot be established.
Workaround: Assign fixed IP addresses to your network adapters. Ensure that each adapter is in a different subnet.

Unable to Establish a Connection to a GigE Camera When Using Multiple Network Adapters (II)
Synopsis: A connection to a camera cannot be established when more than one adapter is configured for automatic IP addressing.
Workaround: Assign fixed IP addresses to your network adapters. Ensure that each adapter is in a different subnet.

Unable to Establish a Connection to a GigE Camera When Using Debug Libraries
Synopsis: Using the debug version of the pylon GigE transport layer library sets the heartbeat timeout to 5 minutes. This allows single-stepping during debugging without losing the connection to the camera. When restarting the program after an anomalous termination, or after terminating the application in the debugger, no connection to the camera can be established.
A "Device is controlled by another application" error message is issued.
Workaround: Temporarily remove the network cable to the camera. If this is not convenient, a tool is available that shuts down the connection to the camera after an application has aborted. Contact Basler technical support to obtain the tool.

GigE Camera Detection Fails If More than One Device Is Configured for AutoIP (e.g., if a Mobile Device Is Connected)
Synopsis: An attached GigE camera configured for automatic IP addressing is not detected when a further device is also configured for automatic IP addressing, for example a mobile device such as a PDA using MS ActiveSync®.
Workaround: Assign fixed IP addresses to the camera and to the network adapter the camera is connected to. Make sure that the adapter and the camera are in the same subnet.

GigE Camera Unreachable
Synopsis: The camera is temporarily not reachable because the MS Windows® operating system adds an incorrect entry to the routing table when a camera is removed while a connection to it exists.
Workaround: Use Microsoft's route utility to detect and remove the incorrect route. The "route print" command shows all entries. Remove the entry for your camera using the "route delete <camera ip>" command.

Unable to Grab Images
Synopsis: When the pylon drivers for GigE and IEEE 1394 cameras are not installed, pylon-based applications fail to grab images. A "Failed to open stream grabber" error message is issued.
Workaround: Depending on whether you use GigE and/or IEEE 1394 cameras, install the appropriate pylon drivers as described in the Installation and Setup Guide for Cameras Used with Basler's pylon API.

Samples May Report an Invalid PixelFormat Enumeration Value When Using an ace Camera
Synopsis: Some samples may report an invalid PixelFormat value when using an ace camera.
Workaround: Some samples explicitly use the PixelFormat value "Mono16", which may not be supported by all ace cameras.
Change all occurrences of "Mono16" to "Mono12", which is a pixel format supported by all ace cameras.

Known Bugs

Bluescreen May Occur When Uninstalling pylon Software
Synopsis: Uninstalling the pylon software while an IEEE 1394 camera is connected to the IEEE 1394 bus may cause a bluescreen.
Workaround: Make sure to unplug the IEEE 1394 cable from the camera before you start uninstalling the pylon software. If you cannot unplug the cable, switch off the camera power.

IEEE 1394 Transport Layer: A Deadlock Might Occur When Closing an IEEE 1394 Camera While the Camera Is Still Acquiring Images
Synopsis: If you close a 1394 camera while an acquisition is still active, the Close() function may deadlock and not return.
Workaround: Ensure that an AcquisitionStop command is issued before closing the camera device.

Pylon Viewer: Process Hangs When Closing the Viewer
Synopsis: Under some circumstances, the pylon Viewer process keeps running after closing the Viewer application. The GUI of the Viewer is no longer visible, but the process is still visible in the Task Manager.
Workaround: Currently, there is no workaround other than killing the process using the Windows Task Manager.

GigE Filter Driver: Dropped Frames
Synopsis: If the first packet of an image frame (the so-called data leader packet) is missed, the complete frame will be dropped, since the filter driver generates no resend requests for this data leader packet.
Workaround: Use a network adapter that is compatible with the performance driver. When the performance driver is used, a frame will be received even if its first packet was missed. When a compatible network adapter cannot be used, optimize the transport layer parameters of the filter driver and reduce the network traffic. As a rule of thumb, set the packet size to its maximum.
See also the camera User's Manual and the Installation and Setup Guide for Cameras Used with Basler's pylon API.

GigE Filter Driver: RAS, VPN Is Not Working
Synopsis: The RAS and VPN services do not work with the Basler filter driver.
Workaround: Unbind the pylon GigE Vision filter driver from the adapter that is used for the RAS and/or VPN service. Do not share a network connection between GigE Vision devices and the RAS and/or VPN service. How to unbind a filter driver is explained in the Installation and Setup Guide for Cameras Used with Basler's pylon API.

SpeedOMeter May Show Error Messages When Unplugging the Camera While Acquiring Images
Synopsis: When you start the SpeedOMeter, select a camera, press Start, and unplug the camera, you may get several error messages.
Workaround: Press Stop to end the acquisition before unplugging the camera.

Version 2.2.1

New Features

SDK & Runtime
• 64-bit support for PylonC and the .NET API.

IEEE 1394 Transport Layer
• The device info objects contain valid serial numbers. In previous versions, the serial number of an IEEE 1394 camera could only be retrieved after creating and opening a pylon device object. With pylon 2.2.1, the serial number is available by just enumerating camera devices; the device info objects for IEEE 1394 cameras now contain the serial numbers. A serial number can be retrieved by calling the CDeviceInfo::GetSerialNumber() function. Note: The serial number information is only valid if the camera is not opened by an application when enumerating the device.

Camera Link Transport Layer
• Support of the aviator camera's Acquisition Status feature.

Fixed Bugs

SDK
• The assembly is not listed in the Visual Studio "Add Reference" dialog.
• .NET API: XML help is not available.
• Including PylonGigEIncludes.h causes compiler errors (references to non-existing files).
• The PylonGigE DLL is not linked automatically (missing #pragma comment lib).
• The PylonUtilities DLL is not linked automatically (missing #pragma comment lib).
• Missing #pragma comment lib for user32.dll and shell32.dll.
• VS 2008: COMPILER define redefinition when not including PylonIncludes.h first.

Installer
• The GenICam x64 bin folder is not added to the PATH environment variable.
• The 64-bit runtime redistributables for IEEE 1394 and for GigE cannot be installed.

Camera Link Transport Layer
• I/O errors during XML file download are ignored, causing corrupted XML files to be added to the XML file cache.

Camera Link Transport Layer and GigE Transport Layer
• If the XML file download fails, an XML file stored in the file system is used.

Direct Show / Twain
• The tree view control does not respect increment constraints of certain camera features.

Known Restrictions
For the 2.2.1 release, there are the same restrictions as for the 2.2.0 release (see below).

Known Bugs
For the 2.2.1 release, there are the same known bugs as for the 2.2.0 release (see below).

Version 2.2.0

New Features

Pylon
• Support for the aviator Camera Link cameras. The pylon Viewer tool can be used to configure aviator cameras. In order to use aviator cameras, you must run the CLConfiguration tool to select the serial ports to be probed for aviator cameras. The CLConfiguration tool is started automatically when running the pylon Viewer for the first time.
• Update to GenICam 2.0. With the newer GenICam version, the creation of pylon device objects is much faster than in previous pylon versions.
• Facility to avoid conflicts with non-Basler software based on GenICam 2.0. The pylon installer no longer sets a system-wide GENICAM_ROOT_VX_Y or other GenICam-related environment variables. Setting these environment variables globally could have influenced the installation of non-Basler software based on GenICam. For backward compatibility reasons, the pylon installer by default extends the PATH environment variable so that the GenICam DLLs shipped with pylon can be found at runtime.
If that causes conflicts with non-Basler software, deselect or uninstall the Extend PATH Environment Variable feature. Refer to the "Delay Loading of DLLs and Bootstrapping" section of the pylon for Windows API Reference and Programmer's Guide for more information about how pylon-based applications set up the environment at runtime.

SDK
• Support for Camera Link. This release provides a new transport layer for aviator Camera Link cameras. This transport layer supports camera configuration only. Grabbing images with the Camera Link transport layer is not supported; to grab images, use the API provided by your frame grabber manufacturer.
• Pylon C API. Libraries and header files of the pylon C API are provided as an alternative to the pylon C++ API. As part of the installation procedure, a set of sample C programs is installed at <Program Files Folder>\Basler\Pylon 2.2\Samples\C. Project files for Microsoft Visual Studio 6.0 and make files for the Borland C++ command-line compiler are provided.
• Pylon .NET API. The pylon 2.2.0 installer installs a .NET assembly that allows building pylon-based applications using .NET languages. A set of C# sample programs is installed at <Program Files Folder>\Basler\Pylon 2.2\Samples\C#.
• Pylon VB6 API. A type library for building pylon-based applications with Visual Basic 6 is provided. A set of VB6 sample programs is installed at <Program Files Folder>\Basler\Pylon 2.2\Samples\VB.

IEEE 1394 Transport Layer
• New camera configuration XML files for Basler FireWire cameras. New features introduced with the latest firmware updates are supported. The new XML files are optimized to reduce delays when starting image acquisition.

GigE Transport Layer
• Support for ace cameras.
• Heartbeat timeout adjustable via an environment variable. Refer to the pylon Programmer's Guide and API Reference for more details.
Fixed Bugs

Pylon Viewer
• The Viewer crashes when collapsing the feature tree.
• The Viewer crashes when changing the user level.
• Under some circumstances, drop-down boxes in the feature tree are not updated.

Installer
• Failed to detect older pylon installations.
• Failed to uninstall network drivers.
• Failed to install due to remaining network drivers from previous pylon installations. The pylon 2.2 installer now detects and removes remaining network drivers from previous installations.

SDK
• No delay loading possible due to exported string constants.

GigE
• Deadlocks when closing a GigE device.
• Failed to reopen a GigE event grabber.
• Device discovery failed if there were no network interfaces enabled.
• Device discovery failed if non-camera devices sent unexpected responses to device discovery requests.

1394
• Fixed a handle leak in the 1394 transport layer.
• Memory leak when using the chunk parser or event grabber.
• Failed to set the ExposureTimeBase feature for A102f, A31xf, and A60xf cameras.

Direct Show
• Broken support of Bayer and YUV pixel formats.
• AOI settings were reset when starting image acquisition.

Known Restrictions

The pylon Installer Requires the Microsoft .NET Framework 2.0
Synopsis: Installing the pylon .NET runtime and SDK features fails if there is no Framework 2.0 installed.
Solution: Install the .NET 2.0 Framework. The installer can be downloaded from /downloads/details.aspx?displaylang=en&FamilyID=0856eacb-4362-4b0d-8edd-aab15c5e04f5. Windows Vista or higher have the .NET 2.0 Framework preinstalled.

The pylon::CLock and pylon::AutoLock Classes Are Deprecated
Synopsis: These classes are also provided by the GenApi SDK, which is part of the pylon SDK.
For that reason, the pylon versions will be omitted in the next releases.
Solution: If you are using the Pylon::CLock or the Pylon::AutoLock classes in your application, include the <genapi/Synch.h> file instead of <pylon/Synch.h> and use the GenApi::CLock and GenApi::AutoLock classes instead of the Pylon::CLock and Pylon::AutoLock classes.

"missing driver" Error Message After Uninstalling pylon 2.2
Synopsis: When you uninstall pylon 2.2, you may get a message stating that drivers for 1394 Camera/Generic Desktop Camera could not be found or installed. Sometimes Windows may even ask for a driver disk.
Solution: This error message is displayed because, after the pylon setup has uninstalled its drivers for 1394 cameras, Windows tries to restore the previous drivers for these cameras. You can safely ignore these messages, or click "Cancel" if Windows asks for a driver disk during uninstall.

When Installing as Administrator, Changes to the Environment Are Not Effective Immediately
Synopsis: When the pylon installation is performed under the Administrator's account, under some circumstances the changes to the environment variables don't take effect immediately. This leads to error messages that pylon DLLs cannot be found when starting a pylon-based application (e.g., the pylon Viewer complains about a missing PylonUtility_MD.dll).
Workaround: When installing pylon as Administrator, log off and on again after the pylon installation has finished. After logging on again, the environment changes become effective.

Restrictions on 64-Bit Operating Systems
Synopsis: In general, due to limited hardware or operating system resources, it is not possible to enqueue an arbitrary number of image buffers at a time. For the IEEE 1394 camera driver, there are limitations due to the restricted DMA resources provided by the OHCI chip sets. The actual amount of memory that can be fed into the driver's input queues depends on the currently available amount of free physical memory below 4 GByte.
For the GigE drivers there are similar restrictions. Feeding in too much memory will result in exceptions and can cause a high CPU load and extremely reduced system performance.
Solution: You can use an arbitrary amount of memory for grabbing, but you must not register and feed all buffers into the driver's input queue simultaneously (using the RegisterBuffer() and QueueBuffer() methods). We recommend registering and enqueuing sufficient memory buffers for receiving images for a certain period (e.g., 250 ms). Each time a filled buffer is retrieved from the driver's output queue (using the RetrieveResult() method), it can be deregistered (using the DeregisterBuffer() method). Then a new buffer can be registered and fed into the driver's input queue. See the Programmer's Guide and API Reference for an explanation of the concepts of the drivers' queues.

Applications Using an Old pylon Version May Not Compile
Synopsis: Applications using a pylon version older than pylon 2.0 may not compile when pylon 2.2 is used.
Solution: For information on how to adjust the code, see the Programmer's Guide and API Reference.

MFC Applications Report Memory Leaks
Synopsis: In debug mode, MFC applications using the pylon API report a huge number of memory leaks.
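The recommended buffer rotation can be illustrated with a small simulation. The snippet below is plain Python with a mock driver standing in for the camera driver; the method names mirror RegisterBuffer()/QueueBuffer()/RetrieveResult()/DeregisterBuffer() from the text but are illustrative, not actual pylon C++ API calls:

```python
from collections import deque

class MockDriver:
    """Toy stand-in for a camera driver: filled buffers come out in FIFO order."""
    def __init__(self):
        self.input_queue = deque()
        self.registered = set()

    def register_buffer(self, buf_id):
        self.registered.add(buf_id)

    def queue_buffer(self, buf_id):
        assert buf_id in self.registered, "buffer must be registered first"
        self.input_queue.append(buf_id)

    def retrieve_result(self):
        # In a real driver this would block until an image has been grabbed.
        return self.input_queue.popleft()

    def deregister_buffer(self, buf_id):
        self.registered.discard(buf_id)

def grab(driver, total_frames, queue_depth=8):
    """Keep at most `queue_depth` buffers enqueued at any time, rotating them."""
    next_id = 0
    grabbed = []
    # Register and enqueue only enough buffers for a short period.
    for _ in range(queue_depth):
        driver.register_buffer(next_id)
        driver.queue_buffer(next_id)
        next_id += 1
    while len(grabbed) < total_frames:
        buf = driver.retrieve_result()   # filled buffer from the output queue
        grabbed.append(buf)
        driver.deregister_buffer(buf)    # release it ...
        driver.register_buffer(next_id)  # ... and feed in a fresh one
        driver.queue_buffer(next_id)
        next_id += 1
    return grabbed

frames = grab(MockDriver(), total_frames=20, queue_depth=8)
print(len(frames))
```

The point of the pattern is that an arbitrary number of frames can be grabbed while the driver never holds more than a small, fixed number of buffers.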
Image Retrieval Using Modified Generic Fourier Descriptors

Atul Sajjanhar¹, Guojun Lu², Dengsheng Zhang²
¹ School of Information Technology, Deakin University, 221 Burwood Highway, Burwood, VIC 3125, Australia. atuls@.au
² Gippsland School of Computing and Information Technology, Monash University, Northways Road, Churchill, VIC 3842, Australia. {guojun.lu, dengsheng.zhang}@.au

Abstract: Generic Fourier Descriptors have been used for image retrieval [12]. In this paper, we propose a modification to Generic Fourier Descriptors. We have performed experiments to compare the performance of the proposed method with the standard method. Tests were performed on Set B of the MPEG-7 Still Images Content Set [13]. The experimental results show the effectiveness of the proposed method.

Keywords: Fourier descriptors, CBIR, shape descriptors

1. Introduction

Much research is being done to develop tools for analyzing images based on their content and representing them in a manner that allows the images to be searched based on these representations. Content-based image retrieval (CBIR) allows users to retrieve images using queries based on sketches, user-constructed query images, color and texture patterns, layout or structural descriptions, and other example images or iconic and graphical information. Retrieval of images based on the shape of objects in images is an important part of CBIR.

Approaches for shape representation and retrieval can be broadly classified into contour-based and region-based. Some of the region-based methods are moment invariants [3] and the grid-based method [4]. Some of the contour-based methods are polygonal approximation [5], the autoregressive model [6], Fourier Descriptors [8], distance histograms [1], and chain code [10].

Recently, the Generic Fourier Descriptors method was proposed by Zhang and Lu [12] for region-based matching of shapes.
In this paper, we modify the Generic Fourier Descriptors method and perform experiments to test the effectiveness of the proposed method. In Section 2, we describe Generic Fourier Descriptors for image retrieval. In Section 3, we describe the proposed enhancement to GFD. The experimental setup and results are presented in Section 4. We provide the conclusion in Section 5.

2. Generic Fourier Descriptors

Generic Fourier Descriptors (GFD) have been used for image retrieval based on region-based shape matching [12]. In GFD, the feature vectors are created by extracting spectral information in the frequency domain. The Fourier transform is applied to the polar raster sampled shape image. Consider the image shown in Figure 1. To obtain the GFD for the image, the image is first plotted in polar coordinates. The polar image of Figure 1 is shown in Figure 2.

Figure 1: An Image in Cartesian Coordinates
Figure 2: Polar Image

Before obtaining the polar image, the image is normalized for scale. A 2-D DFT is applied to the rectangular region in the polar coordinates to obtain the Fourier coefficients, which are used to construct the feature vectors for shape representation and similarity measure [8][9].

3. Proposed Method

We draw an analogy from Color Coherence Vectors (CCV) proposed by Pass and Zabih [11]. CCV is used for image retrieval based on color. Pass and Zabih [11] defined the color coherence of pixels as the degree to which pixels of that color are members of a large similarly colored region. Pixels are classified as coherent or incoherent: coherent pixels are part of a sizable contiguous region of similar color, while incoherent pixels are not.

In the case of shape representation, we define the "connectivity" of pixels in the image. To compute the connectivity of the pixels, the pixels which are set are identified, and the state of the nearest 8-neighbours is computed for each of these pixels.
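This neighbour count can be computed as a sum of shifted copies of the binary image; a minimal NumPy sketch (the small test pattern is illustrative):

```python
import numpy as np

def connectivity(img):
    """For each set pixel, count the set pixels among its nearest 8-neighbours."""
    p = np.pad(img, 1)                      # zero border so edge pixels work
    h, w = img.shape
    counts = np.zeros((h, w), dtype=int)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if (dy, dx) != (0, 0):          # skip the pixel itself
                counts += p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return np.where(img > 0, counts, 0)     # only set pixels get a connectivity

img = np.array([[1, 1, 0],
                [1, 1, 0],
                [0, 0, 1]])
# The 2x2 block pixels touch each other; the lone corner pixel touches only one.
print(connectivity(img))
```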
The connectivity of a pixel is obtained as the number of set pixels amongst its nearest 8-neighbours. The connectivity information of the pixels in Figure 1 is shown in the figure below.

Figure 3: Connectivity Information for Image in Figure 1

Figure 3 provides additional information for the image in Figure 1. The additional information is added in the Cartesian coordinates. Hence, we obtain a z-axis which provides information regarding the connectivity of pixels. For each set pixel within the image, the connectivity can take values 0 through 8. A connectivity of 0 indicates that none of the nearest 8-neighbours are set; a connectivity of 8 indicates that all of the nearest 8-neighbours are set.

Figure 4: Connectivity Information in Cylindrical Coordinates

Figure 4 represents the image in cylindrical coordinates. Cylindrical coordinates $(r, \theta, \phi)$ are obtained from the 3D Cartesian coordinates $(x, y, z)$ as shown below:

$$r = \sqrt{(x - x_c)^2 + (y - y_c)^2} \qquad (1)$$

$$\theta = \arctan\left(\frac{y - y_c}{x - x_c}\right) \qquad (2)$$

$$\phi = z \qquad (3)$$

where $(x_c, y_c)$ is the centroid of the 2D Cartesian image and $z$ represents the connectivity of pixel $(x, y)$.

The feature vectors are constructed from the cylindrical coordinates by computing the 2D-DFT for each value of $\phi$ in Eqn. 3. The 2D-DFT of the polar coordinates for each value of $\phi$ is defined as:

$$PF_\phi(\rho, \tau) = \sum_{r} \sum_{\theta} f(r, \theta, \phi)\, e^{-j2\pi\left(\frac{r}{R}\rho + \frac{\theta}{T}\tau\right)} \qquad (4)$$

where $R$ and $T$ are the radial and angular resolution, and $r$, $\theta$ are obtained from Eqn. 1 and Eqn. 2. This gives a set of 9 feature vectors for each image.

The difference between two images is computed as the sum of the Euclidean distances between corresponding feature vectors:

$$Dist(F_1, F_2) = \sum_{i=0}^{8} \sqrt{\sum_{j=0}^{RT-1} \left(f_{1,i,j} - f_{2,i,j}\right)^2} \qquad (5)$$

where $f_{x,i,j}$ is a descriptor within the feature vector of image $x$, $0 \le i \le 8$ is the connectivity, and $0 \le j < RT$, where $R$, $T$ are the radial and angular resolution.

4. Experimental Results

Experiments were conducted on item number S8 within the MPEG-7 Still Images Content Set [13].
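Before turning to the data, the Section 3 pipeline (Eqns. 1-5: connectivity levels, one polar 2-D DFT slice per level, and the summed Euclidean distance) can be sketched end to end in NumPy. The resolutions R and T and the two toy shapes are illustrative choices, not values from the paper:

```python
import numpy as np

def connectivity(img):
    """Number of set pixels among the nearest 8-neighbours of each set pixel."""
    p = np.pad(img, 1)
    h, w = img.shape
    counts = np.zeros((h, w), dtype=int)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if (dy, dx) != (0, 0):
                counts += p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return np.where(img > 0, counts, 0)

def polar_feature(layer, centroid, R=8, T=16):
    """Polar raster sample one connectivity layer and apply the 2-D DFT (Eqn. 4).
    Returns the R*T magnitude spectrum as a flat feature vector."""
    yc, xc = centroid
    ys, xs = np.nonzero(layer)
    grid = np.zeros((R, T))
    if len(ys):
        r = np.hypot(xs - xc, ys - yc)                        # Eqn. 1
        theta = np.arctan2(ys - yc, xs - xc) % (2 * np.pi)    # Eqn. 2
        r_bin = np.minimum((r / (r.max() + 1e-9) * R).astype(int), R - 1)
        t_bin = np.minimum((theta / (2 * np.pi) * T).astype(int), T - 1)
        np.add.at(grid, (r_bin, t_bin), 1)
    return np.abs(np.fft.fft2(grid)).ravel()

def features(img, R=8, T=16):
    """One polar-DFT feature vector per connectivity level phi = 0..8 (Eqn. 3)."""
    conn = connectivity(img)
    ys, xs = np.nonzero(img)
    centroid = (ys.mean(), xs.mean())
    return [polar_feature((img > 0) & (conn == phi), centroid, R, T)
            for phi in range(9)]

def dist(f1, f2):
    """Sum of Euclidean distances between corresponding vectors (Eqn. 5)."""
    return sum(np.linalg.norm(a - b) for a, b in zip(f1, f2))

# Toy query: a filled square matches itself better than a thin cross.
square = np.zeros((32, 32), dtype=int); square[8:24, 8:24] = 1
cross = np.zeros((32, 32), dtype=int); cross[15:17, 4:28] = 1; cross[4:28, 15:17] = 1
fs, fc = features(square), features(cross)
print(dist(fs, fs), "<", dist(fs, fc))
```

Note that splitting the shape into 9 connectivity layers before the transform is what distinguishes this sketch from plain GFD, which applies a single polar DFT to the whole shape.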
S8 is a collection of trademark images, originally provided by the Korean Industrial Property Office. It consists of 3621 still images and is divided into 6 sets for testing. We performed experiments on Set B, which is used for subjective testing. Set B consists of 2801 shapes from the whole database; 682 of these shapes are manually sorted into 10 classes by MPEG-7.

We have obtained the ranks of relevant images for the GFD method and the proposed method. The GFD method is represented by "polar" and the proposed method by "cylindrical" within the legends. In Figures 5-8, we have plotted the recall-precision for queries performed on the classes within Set B.

Figure 5: Query Shape #1171
Figure 6: Query Shape #1225
Figure 7: Query Shape #1154
Figure 8: Query Shape #1185

5. Conclusions

We note that the data set does not contain intricate images. In Figures 3 and 4, we see that the pixel density is high for connectivity = 0 and connectivity = 8. We believe that the relative improvement in the effectiveness of the proposed method will be higher with an increase in pixel densities for intermediate connectivities.

In this paper, an enhancement to Generic Fourier Descriptors has been proposed. The proposed method generates information-rich feature vectors. We have tested the proposed method on the MPEG-7 Still Images Content Set, and the experiments show its effectiveness. In the future, we will study the efficiency of the proposed method.

References

1. Fan, S., Shape Representation and Retrieval using Distance Histograms, Technical Report, University of Alberta, 2001.
2. Sajjanhar, A., Lu, G., Effect of Spatial Coherence on Shape Retrieval, CISST'03, Las Vegas, 23-26 June 2003.
3. Hu, M. K., Visual pattern recognition by moment invariants, IRE Transactions on Information Theory, IT-8, pp. 179-187, February 1962.
4. Sajjanhar, A. and Lu, G., A Grid Based Shape Indexing and Retrieval Method, The Australian Computer Journal, Vol. 29, No. 4, pp. 131-140, November 1997.
5. Pavlidis, T. P., Polygonal approximation by Newton's method, IEEE Transactions on Computers, C-26(8), pp. 800-807, August 1977.
6. Kashyap, R. L., Chellappa, R., Stochastic models for closed boundary analysis: Representation and reconstruction, IEEE Transactions on Information Theory, IT-27(5), pp. 627-637, September 1981.
7. Kauppinen, H., Seppanen, T., Pietikainen, M., An Experimental Comparison of Autoregressive and Fourier-Based Descriptors in 2D Shape Classification, IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(2), pp. 201-207, February 1995.
8. Persoon, E. and Fu, K. S., Shape Discrimination Using Fourier Descriptors, IEEE Transactions on Systems, Man and Cybernetics, SMC-7(3), pp. 170-179, March 1977.
9. Zahn, C. T. and Roskies, R. Z., Fourier descriptors for plane closed curves, IEEE Transactions on Computers, 21(3), pp. 269-281, 1972.
10. Freeman, H., Computer Processing of Line-Drawing Images, Computing Surveys, 6(1), pp. 57-97, March 1974.
11. Pass, G. and Zabih, R., Histogram refinement for content-based image retrieval, IEEE Workshop on Applications of Computer Vision, pp. 96-102, December 1996.
12. Zhang, D. S. and Lu, G., Generic Fourier Descriptors for Shape-based Image Retrieval, IEEE International Conference on Multimedia and Expo, Lausanne, Switzerland, August 26-29, 2002.
13. MPEG-7, http://ipsi.fraunhofer.de/delite/Projects/MPEG7/Documents/N2466.html