

Oracle SQL Developer Oracle TimesTen In-Memory Database Support Release Notes


Oracle® SQL Developer
Oracle TimesTen In-Memory Database Support Release Notes
Release 19.2
F22572-01
November 2019

This document provides late-breaking information as well as information that is not yet part of the formal documentation.

This document contains the following sections:
■ Changes in this release
■ Supported TimesTen releases and platforms
■ Known problems and limitations
■ Documentation Accessibility

1 Changes in this release

This section lists changes between releases:
■ Changes for Release 19.2 from Release 19.1
■ Changes for Release 19.1 from Release 18.2
■ Changes for Release 18.2 from Release 4.1
■ Changes for Release 4.1 from Release 4.0
■ Changes for Release 4.0 from Release 3.1

1.1 Changes for Release 19.2 from Release 19.1
■ This release now supports Oracle Java Development Kit (JDK) 11.

1.2 Changes for Release 19.1 from Release 18.2
■ There are new features. See New features in release 19.1.

1.2.1 New features in release 19.1
Features supported in TimesTen Scaleout Release 18.1.2.1 or later:
■ For stop or unload database operations, you can opt to force all user connections to disconnect if the initial attempt to terminate such connections fails.

Features supported in TimesTen Scaleout Release 18.1.1.3 or later:
■ The Connection Details dialog now displays information on the element, host, instance, connection name, process ID, and type of connection for the selected connection. It also displays additional information on client/server or proxy connections.

1.3 Changes for Release 18.2 from Release 4.1
■ There are new features. See New features in release 18.2.

1.3.1 New features in release 18.2
Features supported in TimesTen In-Memory Database 18.1 or later:
■ You can use SQL Developer to work with TimesTen Scaleout databases. For more information, see "Working with TimesTen Scaleout" in the Oracle SQL Developer Oracle TimesTen In-Memory Database Support User's Guide.

1.4 Changes for Release 4.1 from Release 4.0
■ There are new features. 
See New features in release 4.1.

1.4.1 New features in release 4.1
Features supported in TimesTen Release 11.2.1.0 or later:
■ You can set a custom substitution character that TimesTen uses for variable substitution. By default, the set define attribute is set to off and uses the substitution character &. For more information on the set define ttIsql attribute, see "ttIsql" in the Oracle TimesTen In-Memory Database Reference.

1.5 Changes for Release 4.0 from Release 3.1
■ When you create a named connection to connect to a TimesTen database, the User specified connection type is now called Advanced.
■ You can set the Autocommit option directly from the New/Select Database Connection dialog. This new autocommit option is set at the connection level. In previous releases, the autocommit option was set at the running instance level.
■ There are new features. See New features in release 4.0.

1.5.1 New features in release 4.0
Features supported in TimesTen Release 11.2.2.5 or later:
■ You can capture ttStats snapshots and generate ttStats reports that compare two snapshots. A ttStats snapshot is a collection of performance metrics. TimesTen collects performance metrics from TimesTen system tables, system views, and built-in procedures.
■ There are new pre-defined Performance reports. Performance reports show statistics information and statistics snapshots for the TimesTen database. These reports are available in the Performance category of the TimesTen Reports.

Features supported in TimesTen Release 11.2.2.4 or later:
■ SQL Developer supports the TimesTen Index Advisor, which can evaluate a SQL workload and recommend indexes. 
The indexes that the TimesTen Index Advisor recommends can improve the performance of joins, single-table scans, and ORDER BY or GROUP BY operations.
■ You can load data using parallel threads from an Oracle database into a TimesTen database without creating a cache grid, cache group, or cache table.

2 Supported TimesTen releases and platforms

SQL Developer 19.2 is available on 64-bit Microsoft Windows, Linux systems, and macOS. SQL Developer 19.2 supports Oracle TimesTen In-Memory Database 11.2.2.8.0 and later, TimesTen Application-Tier Database Cache 11.2.2.8.0 and later, and TimesTen In-Memory Database 18.1, and can be used to connect to a TimesTen database that resides on any platform that is supported by the TimesTen software.

In a client/server environment, if you connect to the TimesTen server with an older release of the TimesTen client, then new features added to the newer release of the server are not supported.

SQL Developer 19.2 requires Oracle Java Development Kit (JDK) 8 or 11, with a minimum version of 1.8 update 121.

3 Known problems and limitations

■ When you use some Java 8 versions on Linux, the pie chart in the table distribution tab cannot be seen. The drop-down list for the pie chart and the bar chart may look distorted. As a workaround, select the bar chart to visualize the data.
■ To use SQL Developer 19.2 on Windows to manage a grid or database from TimesTen Scaleout, install a TimesTen Client (version 11.2.2.8.0 or higher) and set your environment variables with the ttenv script. By default, the ttenv script sets the CLASSPATH environment variable to:
installation_dir\lib\ttjdbc5.jar;
However, using ttjdbc5.jar with Oracle Java Development Kit (JDK) 8 causes the Java Virtual Machine to crash. 
Depending on the available .jar file, set your CLASSPATH environment variable to use either installation_dir\lib\ttjdbc7.jar; or installation_dir\lib\ttjdbc8.jar;.
■ If you do not select "Autocommit," then TimesTen SQL operations within the Connections navigator are not always automatically committed. You must issue an explicit commit by either selecting Commit or by issuing the commit command in SQL Worksheet. If TimesTen encounters errors in your transaction, you must explicitly roll back the transaction by either selecting Rollback or by issuing the rollback command in SQL Worksheet.
If "Autocommit" is selected, then TimesTen SQL operations within the Connections navigator are automatically committed. A transaction in the SQL Worksheet is also automatically committed if there are no open tables in the transaction. If there are open tables, then the transaction in the worksheet is not automatically committed and you must issue an explicit commit by either selecting Commit or by issuing the commit command in SQL Worksheet.
To enable "Autocommit," select Tools, then select Preferences. In the Preferences dialog box, click the + to the left of the Database node to expand the node and select Advanced.
■ Setting the passthrough level to a value other than 0 can affect the SQL operations in the Connections navigator. Make sure this setting is reset to 0 when switching from issuing passthrough operations in SQL Worksheet back to the Connections navigator.
You can also create an unshared worksheet, which uses a separate database connection from the Connections navigator, so that a particular passthrough level setting applies only to that worksheet. 
From a shared worksheet, click the Unshared SQL Worksheet icon or press Ctrl+Shift+N to create an unshared worksheet.
■ If you click the + to the left of the Indexes node to view the list of indexes and then click the name of the index with the characteristics you want to view, you cannot view the index's DDL statement, because the SQL tab is missing. To view the CREATE INDEX statement, from the index's underlying table or materialized view, click the SQL tab.
■ TimesTen error 2963 (Inconsistent datatypes: (NUMBER,CHAR) are not compatible in expression) is returned when you attempt to specify a numeric value filter on a numeric column. To specify a filter value for a numeric column, click the + to the left of the Tables node to view the list of tables, and then click the name of a table that contains a numeric column. In the Data tab of the table, click the name of a numeric column. In the Filter field, enter a numeric value and press the Enter key.
■ Before you export data from a table in a format that is compatible with the TimesTen ttBulkCp utility, set the format for DATE and TIMESTAMP data using the following instructions:
1. Select Tools, then select Preferences.
2. In the Preferences dialog box, click the + to the left of the Database node to expand the node.
3. Select NLS.
4. In the Date Format field, specify RRRR-MM-DD
5. In the Timestamp Format field, specify DD-MON-RRRR HH24:MI:SSXFF
To export data from a table in a format that the ttBulkCp utility recognizes, right-click the name of the table and select Unload. After you select Unload, step 1 (Source/Destination) of the Unload Wizard appears. In the Format drop-down menu, select ttbulkcp. After you complete step 1, click the Next button to advance to step 2 (Specify Data) of the Unload Wizard. After you complete step 2, click the Next button to advance to step 3 (Unload Summary). 
Click the Finish button to complete the data export operation. You can change the DATE and TIMESTAMP data formats back to their original settings after the data has been exported.
■ The hierarchical profiling and debugging of PL/SQL procedures and functions in a TimesTen database are not supported.

4 Documentation Accessibility

For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at /pls/topic/lookup?ctx=acc&id=docacc.

Access to Oracle Support
Oracle customers that have purchased support have access to electronic support through My Oracle Support. For information, visit /pls/topic/lookup?ctx=acc&id=info or visit /pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.

Oracle SQL Developer Oracle TimesTen In-Memory Database Support Release Notes, Release 19.2
F22572-01
Copyright © 2008, 2019, Oracle and/or its affiliates. All rights reserved.

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then the following notice is applicable:

U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs. No other rights are granted to the U.S. Government.

This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

This software or hardware and documentation may provide access to or information about content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services, except as set forth in an applicable agreement between you and Oracle.
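Illustrative only, not an Oracle tool: the ttjdbc jar-selection workaround from the "Known problems and limitations" section above can be scripted. This sketch assumes the installation_dir\lib layout quoted in the release notes; the function name and preference order (ttjdbc8 over ttjdbc7, ttjdbc5 skipped because it crashes the JVM under JDK 8) are the author's own framing of that workaround.

```python
from pathlib import Path

def pick_ttjdbc_classpath(installation_dir: str) -> str:
    """Return a CLASSPATH entry for the newest usable ttjdbc jar.

    ttjdbc5.jar is deliberately skipped: per the release notes it crashes
    the JVM under JDK 8. Prefer ttjdbc8.jar, then fall back to ttjdbc7.jar.
    """
    lib = Path(installation_dir) / "lib"
    for name in ("ttjdbc8.jar", "ttjdbc7.jar"):  # preference order
        candidate = lib / name
        if candidate.exists():
            return f"{candidate};"  # trailing ';' as shown in the release notes
    raise FileNotFoundError(f"no usable ttjdbc jar found under {lib}")
```

With a hypothetical client install under C:\TimesTen\tt1122, the returned value would be assigned to CLASSPATH before launching SQL Developer.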

Dell EMC Networking S4048T-ON Switch Data Sheet


The Dell EMC Networking S4048T-ON switch is the industry's latest data center networking solution, empowering organizations to deploy modern workloads and applications designed for the open networking era. Businesses who have made the transition away from monolithic proprietary mainframe systems to industry standard server platforms can now enjoy even greater benefits from Dell EMC open networking platforms. By using industry-leading hardware and a choice of leading network operating systems to simplify data center fabric orchestration and automation, organizations can tailor their network to their unique requirements and accelerate innovation.

These new offerings provide the needed flexibility to transform data centers. High-capacity network fabrics are cost-effective and easy to deploy, providing a clear path to the software-defined data center of the future with no vendor lock-in.

The S4048T-ON supports the open source Open Network Install Environment (ONIE) for zero-touch installation of alternate network operating systems, including the feature-rich Dell Networking OS.

High density 1/10G BASE-T switch
The Dell EMC Networking S-Series S4048T-ON is a high-density 100M/1G/10G/40GbE top-of-rack (ToR) switch purpose-built for applications in high-performance data center and computing environments. Leveraging a non-blocking switching architecture, the S4048T-ON delivers line-rate L2 and L3 forwarding capacity within a conservative power budget. The compact S4048T-ON design provides industry-leading density of 48 dual-speed 1/10G BASE-T (RJ45) ports, as well as six 40GbE QSFP+ up-links to conserve valuable rack space and simplify the migration to 40Gbps in the data center core. Each 40GbE QSFP+ up-link can also support four 10GbE (SFP+) ports with a breakout cable. 
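The port counts above line up with the fabric-capacity and current-draw figures quoted in the specifications later in this data sheet; a quick arithmetic sanity check (all figures taken from this document):

```python
# Aggregate port bandwidth of the S4048T-ON (figures from this data sheet)
base_t_gbps = 48 * 10   # 48 ports at 10GbE
qsfp_gbps = 6 * 40      # six 40GbE QSFP+ uplinks

half_duplex_gbps = base_t_gbps + qsfp_gbps      # 720 Gbps, one direction
full_duplex_tbps = 2 * half_duplex_gbps / 1000  # 1.44 Tbps, both directions

# Max. current draw follows from max. power divided by line voltage
amps_at_100v = 460 / 100  # 4.6 A at 460W/100VAC
amps_at_200v = 460 / 200  # 2.3 A at 460W/200VAC

print(half_duplex_gbps, full_duplex_tbps, amps_at_100v, amps_at_200v)
```

The 720 Gbps half-duplex and 1.44 Tbps full-duplex results match the switch fabric capacity stated in the performance section.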
In addition, the S4048T-ON incorporates multiple architectural features that optimize data center network flexibility, efficiency and availability, including I/O panel to PSU airflow or PSU to I/O panel airflow for hot/cold aisle environments, and redundant, hot-swappable power supplies and fans. The S4048T-ON supports the feature-rich Dell Networking OS, VLT, network virtualization features such as VRF-lite and VXLAN Gateway, and support for the Dell Embedded Open Automation Framework.
• The S4048T-ON is the only switch in the industry that supports traditional network-centric virtualization (VRF) and hypervisor-centric virtualization (VXLAN). The switch fully supports L2 VXLAN.
• The S4048T-ON also supports Dell EMC Networking's Embedded Open Automation Framework, which provides enhanced network automation and virtualization capabilities for virtual data center environments.
• The Open Automation Framework comprises a suite of interrelated network management tools that can be used together or independently to provide a network that is flexible, available and manageable while helping to reduce operational expenses.

Key applications
• Dynamic data centers ready to make the transition to software-defined environments
• High-density 10GBase-T ToR server access in high-performance data center environments
• Lossless iSCSI storage deployments that can benefit from innovative iSCSI & DCB optimizations that are unique to Dell Networking switches

When running Dell Networking OS9, Active Fabric™ implementation for large deployments in conjunction with the Dell EMC Z-Series creates a flat, two-tier, nonblocking 10/40GbE data center network design:
• High-performance SDN/OpenFlow 1.3 enabled with the ability to interoperate with industry standard OpenFlow controllers
• As a high-speed VXLAN Layer 2 Gateway that connects hypervisor-based overlay networks with nonvirtualized infrastructure

Key features - general
• 48 dual-speed 1/10G BASE-T (RJ45) ports and six 40GbE (QSFP+) uplinks (totaling 72 10GbE ports with breakout cables) with OS support
• 1.44Tbps (full-duplex) non-blocking switching fabric delivers line-rate performance under full load with sub-600ns latency
• I/O panel to PSU airflow or PSU to I/O panel airflow
• Supports the open source ONIE for zero-touch installation of alternate network operating systems
• Redundant, hot-swappable power supplies and fans

DELL EMC NETWORKING S4048T-ON SWITCH: Energy-efficient 10GBASE-T top-of-rack switch optimized for data center efficiency

Key features with Dell EMC Networking OS9
• Scalable L2 and L3 Ethernet switching with QoS and a full complement of standards-based IPv4 and IPv6 features, including OSPF, BGP and PBR (Policy Based Routing) support
• VRF-lite enables sharing of networking infrastructure and provides L3 traffic isolation across tenants
• Increase VM mobility region by stretching L2 VLAN within or across two DCs with unique VLT capabilities like Routed VLT and VLT Proxy Gateway
• VXLAN gateway functionality support for bridging the nonvirtualized and the virtualized overlay networks with line-rate performance
• Embedded Open Automation Framework adding automated configuration and provisioning capabilities to simplify the management of network environments. Supports Puppet agent for DevOps
• Modular Dell Networking OS software delivers inherent stability as well as enhanced monitoring and serviceability functions
• Enhanced mirroring capabilities including 1:4 local mirroring, Remote Port Mirroring (RPM), and Encapsulated Remote Port Mirroring (ERPM). 
Rate shaping combined with flow-based mirroring enables the user to analyze fine-grained flows
• Jumbo frame support for large data transfers
• 128 link aggregation groups with up to 16 members per group, using enhanced hashing
• Converged network support for DCB, with priority flow control (802.1Qbb), ETS (802.1Qaz), DCBx and iSCSI TLV
• S4048T-ON supports RoCE and Routable RoCE to enable convergence of compute and storage on Active Fabric
• User port stacking support for up to six units and unique mixed-mode stacking that allows stacking of S4048-ON with S4048T-ON to provide a combination of 10G SFP+ and RJ45 ports in a stack.

Physical
48 fixed 10GBase-T ports supporting 100M/1G/10G speeds
6 fixed 40 Gigabit Ethernet QSFP+ ports
1 RJ45 console/management port with RS232 signaling
1 USB 2.0 type A to support mass storage device
1 Micro-USB 2.0 type B serial console port
1 8 GB SSD module
Size: 1RU, 1.71" x 17.09" x 18.11" (4.35 x 43.4 x 46 cm) (H x W x D)
Weight: 23 lbs (10.43 kg)
ISO 7779 A-weighted sound pressure level: 65 dB at 77°F (25°C)
Power supply: 100–240V AC 50/60Hz
Max. thermal output: 1568 BTU/h
Max. current draw per system: 4.6 A at 460W/100VAC, 2.3 A at 460W/200VAC
Max. power consumption: 460 Watts
Typical power consumption: 338 Watts
Max. operating specifications:
Operating temperature: 32°F to 113°F (0°C to 45°C)
Operating humidity: 5 to 90% (RH), non-condensing
Max. non-operating specifications:
Storage temperature: –40°F to 158°F (–40°C to 70°C)
Storage humidity: 5 to 95% (RH), non-condensing

Redundancy
Hot-swappable redundant power
Hot-swappable redundant fans

Performance General
Switch fabric capacity: 1.44Tbps (full-duplex), 720Gbps (half-duplex)
Forwarding capacity: 1080 Mpps
Latency: 2.8 µs
Packet buffer memory: 16MB
CPU memory: 4GB

OS9 performance:
MAC addresses: 160K
ARP table: 128K
IPv4 routes: 128K
IPv6 hosts: 64K
IPv6 routes: 64K
Multicast routes: 8K
Link aggregation: 16 links per group, 128 groups
Layer 2 VLANs: 4K
MSTP: 64 instances
VRF-Lite: 511 instances
LAG load balancing: based on layer 2, IPv4 or IPv6 headers
Latency: sub-3µs
QoS data queues: 8
QoS control queues: 12
Ingress ACL: 16K
Egress ACL: 1K
QoS: default 3K entries, scalable to 12K

IEEE compliance with Dell Networking OS9
802.1AB LLDP
802.1D Bridging, STP
802.1p L2 Prioritization
802.1Q VLAN Tagging, Double VLAN Tagging, GVRP
802.1Qbb PFC
802.1Qaz ETS
802.1s MSTP
802.1w RSTP
802.1X Network Access Control
802.3ab Gigabit Ethernet (1000BASE-T)
802.3ac Frame Extensions for VLAN Tagging
802.3ad Link Aggregation with LACP
802.3ae 10 Gigabit Ethernet (10GBase-X) with QSA
802.3ba 40 Gigabit Ethernet (40GBase-SR4, 40GBase-CR4, 40GBase-LR4) on optical ports
802.3u Fast Ethernet (100Base-TX)
802.3x Flow Control
802.3z Gigabit Ethernet (1000Base-X) with QSA
802.3az Energy Efficient Ethernet
ANSI/TIA-1057 LLDP-MED
Force10 PVST+
Max MTU 9216 bytes

RFC and I-D compliance with Dell Networking OS9
General Internet protocols
768 UDP
793 TCP
854 Telnet
959 FTP
General IPv4 protocols
791 IPv4
792 ICMP
826 ARP
1027 Proxy ARP
1035 DNS (client)
1042 Ethernet Transmission
1305 NTPv3
1519 CIDR
1542 BOOTP (relay)
1812 Requirements for IPv4 Routers
1918 Address Allocation for Private Internets
2474 Diffserv Field in IPv4 and IPv6 Headers
2596 Assured Forwarding PHB Group
3164 BSD Syslog
3195 Reliable Delivery for Syslog
3246 Expedited Assured Forwarding
4364 VRF-lite (IPv4 VRF with OSPF, BGP, IS-IS and V4 multicast)
5798 VRRP
General IPv6 protocols
1981 Path MTU Discovery Features
2460 Internet Protocol, Version 6 (IPv6) Specification
2464 Transmission of IPv6 Packets over Ethernet Networks
2711 IPv6 Router Alert Option
4007 IPv6 Scoped Address Architecture
4213 Basic Transition Mechanisms for IPv6 Hosts and Routers
4291 IPv6 Addressing Architecture
4443 ICMP for IPv6
4861 Neighbor Discovery for IPv6
4862 IPv6 Stateless Address Autoconfiguration
5095 Deprecation of Type 0 Routing Headers in IPv6
IPv6 Management support (telnet, FTP, TACACS, RADIUS, SSH, NTP)
VRF-Lite (IPv6 VRF with OSPFv3, BGPv6, IS-IS)
RIP
1058 RIPv1
2453 RIPv2
OSPF (v2/v3)
1587 NSSA
2154 OSPF Digital Signatures
2328 OSPFv2
2370 Opaque LSA
4552 Authentication/Confidentiality for OSPFv3
5340 OSPF for IPv6
IS-IS
1142 Base IS-IS Protocol
1195 IPv4 Routing
5301 Dynamic hostname exchange mechanism for IS-IS
5302 Domain-wide prefix distribution with two-level IS-IS
5303 Three-way handshake for IS-IS pt-to-pt adjacencies
5304 IS-IS MD5 Authentication
5306 Restart signaling for IS-IS
5308 IS-IS for IPv6
5309 IS-IS point-to-point operation over LAN
draft-isis-igp-p2p-over-lan-06
draft-kaplan-isis-ext-eth-02
BGP
1997 Communities
2385 MD5
2545 BGP-4 Multiprotocol Extensions for IPv6 Inter-Domain Routing
2439 Route Flap Damping
2796 Route Reflection
2842 Capabilities
2858 Multiprotocol Extensions
2918 Route Refresh
3065 Confederations
4360 Extended Communities
4893 4-byte ASN
5396 4-byte ASN representations
draft-ietf-idr-bgp4-20 BGPv4
draft-michaelson-4byte-as-representation-05 4-byte ASN Representation (partial)
draft-ietf-idr-add-paths-04.txt ADD PATH
Multicast
1112 IGMPv1
2236 IGMPv2
3376 IGMPv3
MSDP, PIM-SM, PIM-SSM
Security
2404 The Use of HMAC-SHA-1-96 within ESP and AH
2865 RADIUS
3162 RADIUS and IPv6
3579 RADIUS support for EAP
3580 802.1X with RADIUS
3768 EAP
3826 AES Cipher Algorithm in the SNMP User Base Security Model
4250, 4251, 4252, 4253, 4254 SSHv2
4301 Security Architecture for IPSec
4302 IPSec Authentication Header
4303 ESP Protocol
4807 IPsec Security Policy DB MIB
draft-ietf-pim-sm-v2-new-05 PIM-SM

Data center bridging
802.1Qbb Priority-Based Flow Control
802.1Qaz Enhanced Transmission Selection (ETS)
Data Center Bridging eXchange (DCBx)
DCBx Application TLV (iSCSI, FCoE)

Network management
1155 SMIv1
1157 SNMPv1
1212 Concise MIB Definitions
1215 SNMP Traps
1493 Bridges MIB
1850 OSPFv2 MIB
1901 Community-Based SNMPv2
2011 IP MIB
2096 IP Forwarding Table MIB
2578 SMIv2
2579 Textual Conventions for SMIv2
2580 Conformance Statements for SMIv2
2618 RADIUS Authentication MIB
2665 Ethernet-Like Interfaces MIB
2674 Extended Bridge MIB
2787 VRRP MIB
2819 RMON MIB (groups 1, 2, 3, 9)
2863 Interfaces MIB
3273 RMON High Capacity MIB
3410 SNMPv3
3411 SNMPv3 Management Framework
3412 Message Processing and Dispatching for the Simple Network Management Protocol (SNMP)
3413 SNMP Applications
3414 User-based Security Model (USM) for SNMPv3
3415 VACM for SNMP
3416 SNMPv2
3417 Transport mappings for SNMP
3418 SNMP MIB
3434 RMON High Capacity Alarm MIB
3584 Coexistence between SNMP v1, v2 and v3
4022 IP MIB
4087 IP Tunnel MIB
4113 UDP MIB
4133 Entity MIB
4292 MIB for IP
4293 MIB for IPv6 Textual Conventions
4502 RMONv2 (groups 1, 2, 3, 9)
5060 PIM MIB
ANSI/TIA-1057 LLDP-MED MIB
Dell_ITA.Rev_1_1 MIB
draft-grant-tacacs-02 TACACS+
draft-ietf-idr-bgp4-mib-06 BGP MIBv1
IEEE 802.1AB LLDP MIB
IEEE 802.1AB LLDP DOT1 MIB
IEEE 802.1AB LLDP DOT3 MIB
sFlowv5 sFlowv5 MIB (version 1.3)
DELL-NETWORKING-SMI
DELL-NETWORKING-TC
DELL-NETWORKING-CHASSIS-MIB
DELL-NETWORKING-PRODUCTS-MIB
DELL-NETWORKING-SYSTEM-COMPONENT-MIB
DELL-NETWORKING-TRAP-EVENT-MIB
DELL-NETWORKING-COPY-CONFIG-MIB
DELL-NETWORKING-IF-EXTENSION-MIB
DELL-NETWORKING-FIB-MIB
DELL-NETWORKING-FPSTATS-MIB
DELL-NETWORKING-LINK-AGGREGATION-MIB
DELL-NETWORKING-MSTP-MIB
DELL-NETWORKING-BGP4-V2-MIB
DELL-NETWORKING-ISIS-MIB
DELL-NETWORKING-FIPSNOOPING-MIB
DELL-NETWORKING-VIRTUAL-LINK-TRUNK-MIB
DELL-NETWORKING-DCB-MIB
DELL-NETWORKING-OPENFLOW-MIB
DELL-NETWORKING-BMP-MIB
DELL-NETWORKING-BPSTATS-MIB

IT Lifecycle Services for Networking
Experts, insights and ease
Our highly trained experts, with innovative tools and proven processes, help you transform your IT investments into strategic advantages.
Plan & Design: Let us analyze your multivendor environment and deliver a comprehensive report and action plan to build upon the existing network and improve performance.
Deploy & Integrate: Get new wired or wireless network technology installed and configured with ProDeploy. Reduce costs, save time, and get up and running.
Educate: Ensure your staff builds the right skills for long-term success. Get certified on Dell EMC Networking technology and learn how to increase performance and optimize infrastructure.
Manage & Support: Gain access to technical experts and quickly resolve multivendor networking challenges with ProSupport. Spend less time resolving network issues and more time innovating.
Optimize: Maximize performance for dynamic IT environments with Dell EMC Optimize. Benefit from in-depth predictive analysis, remote monitoring and a dedicated systems analyst for your network.
Retire: We can help you resell or retire excess hardware while meeting local regulatory guidelines and acting in an environmentally responsible way.
Learn more at /lifecycleservices
Learn more at /Networking

Regulatory compliance
Safety
CUS UL 60950-1, Second Edition
CSA 60950-1-03, Second Edition
EN 60950-1, Second Edition
IEC 60950-1, Second Edition Including All National Deviations and Group Differences
EN 60825-1, 1st Edition
EN 60825-1 Safety of Laser Products Part 1: Equipment Classification Requirements and User's Guide
EN 60825-2 Safety of Laser Products Part 2: Safety of Optical Fibre Communication Systems
FDA Regulation 21 CFR 1040.10 and 1040.11
Emissions
International: CISPR 22, Class A
Australia/New Zealand: AS/NZS CISPR 22: 2009, Class A
Canada: ICES-003:2016 Issue 6, Class A
Europe: EN 55022: 2010+AC:2011 / CISPR 22: 2008, Class A
Japan: VCCI V-3/2014.04, Class A & V4/2012.04
USA: FCC CFR 47 Part 15, Subpart B:2009, Class A
RoHS
All S-Series components are EU RoHS compliant.
Certifications
Japan: VCCI V3/2009 Class A
USA: FCC CFR 47 Part 15, Subpart B:2009, Class A
Available with US Trade Agreements Act (TAA) compliance
USGv6 Host and Router Certified on Dell Networking OS 9.5 and greater
IPv6 Ready for both Host and Router
UCR DoD APL (core and distribution ALSAN switch)
Immunity
EN 300 386 V1.6.1 (2012-09) EMC for Network Equipment
EN 55022, Class A
EN 55024: 2010 / CISPR 24: 2010
EN 61000-3-2: Harmonic Current Emissions
EN 61000-3-3: Voltage Fluctuations and Flicker
EN 61000-4-2: ESD
EN 61000-4-3: Radiated Immunity
EN 61000-4-4: EFT
EN 61000-4-5: Surge
EN 61000-4-6: Low Frequency Conducted Immunity

Oracle Communications Consulting (OCC) Automated Test Suite, Version 1.0


1 DATA SHEET | Oracle Communications Consulting (OCC) powers up Automated Test Suite | Version 1.0
Copyright © 2020, Oracle and/or its affiliates | Confidential – Public

Oracle Communications Consulting (OCC) powers up Automated Test Suite

Manual testing has classically come at a high cost of resources to run and maintain, is time-consuming, lacks proper coverage and is error-prone due to repetitiveness. This has led to the introduction and attractiveness of automating these tests. Automation testing is used to improve the execution speed of verification, checks or any other repeatable tasks in the software development, integration and deployment lifecycle.

The return on investment between manual versus automated testing has been a balancing act. Historically, automated testing has required investment in tool costs, setup of test cases, and then the maintenance and updating of those tools and test cases.

Today's service providers need a test automation tool that improves their time-to-market, reduces costs, extends their testing scope, and in turn increases their quality and their customers' experience. In addition, they need to roll out new software versions and new platforms quickly, test configuration changes, and expand and grow their networks, all while maintaining core network resiliency and high availability as they roll out new service plans. According to a recent survey by GitLab with over 3,600 respondents from 21 countries, 47% claim testing is their number one reason for delays, 74% have shifted testing earlier in the deployment process, 35% claim they are half-way there to implementing automated testing and only 12% have implemented full test automation.

The Oracle Communications Automated Testing Suite (ATS) is software used on the system under test to check whether the system is functioning as expected; it provides end-to-end and regression testing of 4G & 5G scenarios, including interworking test cases and Network Function (NF) emulation. 
This Jenkins-based open-source automation server is flexible enough that the user can create additional test cases using the APIs provided by the framework to automate all manner of tasks related to building, testing, delivering or deploying software.

ORACLE COMMUNICATIONS CONSULTING ACTIVATES OC-ATS
Leveraging decades of experience in core network solutions deployment and testing, and in-depth knowledge of telco and IT protocols and call flows, our consultants are equipped and ready to customize and expand the capabilities of your OC-ATS for automated functional testing, regression testing, load testing and test process management. Building on the test cases included in the suite and custom-developed testing accelerators, OCC works with our customers to develop bespoke test cases and reports specific to your network and business needs.

Oracle Communications Consulting maximizes your investment in OC-ATS by:
• ATS installation and configuration
• Custom test case and report development
• Test case and report modifications according to customer network and services evolution
• Benchmarking and capacity planning testing
• Traffic generation for specific call testing or network scenario customization
• Regression test execution simplification with scheduling & custom reports

CONNECT WITH US
Call +1.800.ORACLE1 or visit . Outside North America, find your local office at /contact.

Copyright © 2020, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only, and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed either directly or indirectly by this document. 
This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.

This device has not been authorized as required by the rules of the Federal Communications Commission. This device is not, and may not be, offered for sale or lease, or sold or leased, until authorization is obtained.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

Disclaimer: This document is for informational purposes. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described in this document may change and remains at the sole discretion of Oracle Corporation.

CI/CD PIPELINE METHODOLOGY

Recognizing that each customer's network and needs are different, OCC designs CI/CD pipelines specific to those needs. For example, in the case of a new software release, ATS use-case tests are deployed automatically against the new software.

ORACLE COMMUNICATIONS CONSULTING EXTENDS THE FUNCTIONALITY AND CAPABILITIES OF YOUR OC-ATS

As your network traffic evolves, so must your OC-ATS test cases and reports. It is important that OC-ATS undergoes regular test case reviews and that adjustments are made or new use-case tests are developed.
In today's virtualized and cloud-native environments, where 4G/5G applications are no longer deployed on proprietary hardware, changes in underlying environments can happen, often outside the control of service providers. It is critically important that regression testing is powerful, is customized to your needs, and delivers meaningful reports and data. The ability to quickly deploy new test cases is especially critical for interworking and policy-rule additions, which can be developed expeditiously, run daily, and cover the subscriber/subscription lifecycle.

These OC-ATS extensions include, but are not limited to: end-to-end use cases, customer ad hoc use cases, enhanced use-case validation, a test scheduler and reports, and a test-creation user interface. In addition, OCC can enhance OC-ATS with additional capabilities such as network topology, NF simulators, chaos testing, hybrid/public cloud hosting, and integration with the customer's toolchain.

TRUST THE EXPERTS

OCC possesses both intimate knowledge of the elements under test in your 4G/5G network and of the ATS framework, allowing it to quickly design and develop the test cases and reports your company needs to continually check that your solution is running smoothly and optimally with minimal use of your resources. In today's cloud environments, it is necessary to continually check the integrity and health of the entire end-to-end solution and to alert on any issues that network or element changes have introduced. This is only possible with continual automated testing, powered by Oracle Communications Consulting.

Oracle Communications Consulting extends your OC-ATS with:

- Evolution and maintenance
- Ad hoc test cases
- Powerful regression testing
- Extended capabilities
- Out-of-the-box standalone NF test cases
- Modifications
- Test scheduler
- Custom reports

Introduction to Common Oracle 10g Tools

UPDATE table_name SET column1=value1 WHERE condition;
Common Commands

Various options, such as fonts, colors, and autocommit behavior, can be set by editing the SQL*Plus configuration file.

Setting environment variables, such as `ORACLE_HOME` and `PATH`, makes it convenient to invoke SQL*Plus.

Configuration and Environment Variables
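As a sketch of the configuration-file approach mentioned above: SQL*Plus runs the site profile `glogin.sql` and the user profile `login.sql` automatically at startup, so session options can be set there once (the specific values below are illustrative, not defaults):

```sql
-- login.sql: executed automatically each time SQL*Plus starts
SET PAGESIZE 100       -- rows per page of query output
SET LINESIZE 200       -- characters per output line
SET AUTOCOMMIT OFF     -- require an explicit COMMIT after DML
SET TIMING ON          -- report elapsed time for each statement
DEFINE _EDITOR = vi    -- editor invoked by the EDIT command
```

The same SET commands can also be issued interactively inside any SQL*Plus session; the profile file simply saves retyping them.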
Overview

Overview and Features

First, download the installation package for your operating system from the Oracle website, then follow the prompts to install.

After installation, some basic configuration is needed, such as setting environment variables and configuring network connectivity, to ensure that you can connect to the Oracle database.

Installation

Configuration

Installation and Configuration

Usage

SQL Developer makes it easy to connect to an Oracle database and perform all kinds of database operations; its built-in script editor and debugger can be used to write and debug SQL scripts.

Management

Oracle Data Pump can be managed and monitored through tools such as Oracle Enterprise Manager.

Usage and Management
Oracle Automatic Workload Repository (AWR)
Overview and Features

Oracle Automatic Workload Repository (AWR) is an important component of the Oracle database that collects, processes, and stores database performance statistics.

Configuration

Installation and Configuration

Usage

Using AWR mainly involves querying the performance statistics it stores and generating performance reports; queries and reports can be produced with tools such as Oracle Enterprise Manager (OEM) or SQL*Plus.

Management

Managing AWR mainly involves monitoring its operation, periodically purging expired statistics, and adjusting AWR configuration parameters as needed. Its storage usage should also be watched to ensure there is enough space for the performance statistics.
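As a sketch of the SQL*Plus route: the DBMS_WORKLOAD_REPOSITORY package and the awrrpt.sql script below are the standard ones shipped with the database; the retention and interval values are illustrative, and snapshot IDs will differ per system:

```sql
-- Take an AWR snapshot on demand
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- List recent snapshots to pick a begin/end pair for a report
SELECT snap_id, begin_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id;

-- Generate an AWR report (prompts for format and snapshot range)
@?/rdbms/admin/awrrpt.sql

-- Adjust retention and collection interval, e.g. keep 30 days
-- of snapshots (43200 minutes) taken hourly (60 minutes)
EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(retention => 43200, interval => 60);
```

MODIFY_SNAPSHOT_SETTINGS covers the "adjust configuration parameters" part of AWR management; purging old snapshots is handled automatically once retention is set.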

Oracle's default transaction isolation level


Contents:
1. Introduction to Oracle's default transaction isolation level
2. Setting the isolation level
3. Effects of the default isolation level
4. An example

[Introduction] To guarantee data consistency and integrity, the Oracle database uses transaction isolation levels to constrain concurrent access.

A transaction isolation level is the mechanism a database system uses to control what each transaction can see of the changes made by other, concurrently executing transactions.

Oracle's default transaction isolation level is READ COMMITTED. Oracle also supports the SERIALIZABLE level and read-only transactions; it does not support READ UNCOMMITTED, so a dirty read (reading another transaction's uncommitted data) is never possible.

[Setting the isolation level] The isolation level can be changed for a single transaction or for a whole session, for example:

```
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE;
```

[Effects of the default level] Under READ COMMITTED, each query sees only data that was committed before the query began, and readers never block writers. Within one transaction, however, the same query run twice may return different results if another transaction commits in between, so non-repeatable reads and phantom reads remain possible; when transaction-level read consistency is required, the SERIALIZABLE level should be used instead.

[An example] Suppose two transactions, A and B, access the same database concurrently.

Transaction A reads a row; transaction B then modifies that row and commits.

Under the default READ COMMITTED level, if transaction A issues the same query again it now sees B's committed change (a non-repeatable read). Under SERIALIZABLE, transaction A would continue to see the data as it stood at the start of its own transaction.
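A minimal two-session sketch of how Oracle's READ COMMITTED isolation level behaves in practice (the table and column names are illustrative):

```sql
-- Setup (either session)
CREATE TABLE accounts (id NUMBER PRIMARY KEY, balance NUMBER);
INSERT INTO accounts VALUES (1, 100);
COMMIT;

-- Session A: first read inside an open transaction
SELECT balance FROM accounts WHERE id = 1;   -- sees 100

-- Session B: modify the row and commit while A's transaction is still open
UPDATE accounts SET balance = 200 WHERE id = 1;
COMMIT;

-- Session A: the same query again, still in the same transaction
SELECT balance FROM accounts WHERE id = 1;
-- At READ COMMITTED (the default) this now sees 200: a non-repeatable read.
-- Had session A run SET TRANSACTION ISOLATION LEVEL SERIALIZABLE first,
-- it would still see 100 until its own transaction ended.
```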

Oracle ATG Repository Guide, Version 11.0


Version 11.0
Repository Guide

Oracle ATG
One Main Street
Cambridge, MA 02142
USA

Repository Guide
Product version: 11.0
Release date: 01-10-14
Document identifier: AtgRepositoryGuide1402071827

Copyright © 1997, 2014 Oracle and/or its affiliates. All rights reserved.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:

U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs. No other rights are granted to the U.S. Government.

This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.

For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at /pls/topic/lookup?ctx=acc&id=docacc.

Access to Oracle Support: Oracle customers have access to electronic support through My Oracle Support. For information, visit /pls/topic/lookup?ctx=acc&id=info or visit /pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.

Table of Contents
1. Introduction
2. Repository API
3. Repository Queries
4. SQL Repository Overview
5. SQL Repository Architecture
6. SQL Repository Data Models
7. SQL Repository Item Properties
8. SQL Repository Queries
9. Localizing SQL Repository Definitions
10. SQL Repository Caching
11. External SQL Repository Caching
12. Developing and Testing an SQL Repository
13. SQL Repository Reference
14. SQL Content Repositories
15. Repository Loader
16. Purging Repository Items
17. Repository Web Services
18. Composite Repositories
19. Secured Repositories
20. LDAP Repositories

1 Introduction

Data access is a large part of most Internet applications. Oracle ATG Web Commerce Data Anywhere Architecture™ provides a unified view of content and data across a business for organizations and their customers.
The core of the Oracle ATG Web Commerce Data Anywhere Architecture is the Repository API. Through the Repository API, you can employ a single approach to accessing disparate data types, including SQL databases, LDAP directories, content management systems, and file systems.

With Oracle ATG Web Commerce Data Anywhere, the application logic created by developers uses the same approach to interact with data regardless of the source of that data. One of the most powerful aspects of this architecture is that the source of the data is hidden behind the Oracle ATG Web Commerce Repository abstraction. It is easy to change from a relational data source to an LDAP directory, as none of the application logic needs to change. After data is retrieved from a data source, it is transformed into an object-oriented representation. Manipulation of the data can be done using simple getPropertyValue and setPropertyValue methods. The Repository API ties in closely with Oracle ATG Web Commerce's targeting APIs, so you can retrieve items from the repository based on a variety of targeting rules, as well as retrieving specific identified items.

The figure below provides a high-level overview of the Oracle ATG Web Commerce Data Anywhere Architecture.

Oracle ATG Web Commerce Data Anywhere Architecture offers several advantages over standard data access methods such as Java Data Objects (JDO), Enterprise JavaBeans (EJB), and Java Database Connectivity (JDBC). Among the differences:

Data source independence
Oracle ATG Web Commerce Data Anywhere Architecture provides access to relational database management systems, LDAP directories, and file systems using the same interfaces. This insulates application developers from schema changes and also from the storage mechanism. Data can even move from a relational database to an LDAP directory without requiring recoding.
Java Data Objects support data source independence, but it is up to vendors to provide an LDAP implementation.

Fewer lines of Java code
Less code leads to faster time-to-market and reduced maintenance cost. Persistent data types created with Oracle ATG Web Commerce Data Anywhere are described in an XML file, with no Java code required.

Unified view of all customer interactions
A unified view of customer data (gathered by web applications, call center applications, and ERP systems) can be provided without copying data into a central data source. This unified view of customer data leads to a coherent and consistent customer experience.

Maximum performance
Intelligent caching of data objects ensures excellent performance and timely, accurate results. The JDO and EJB standards rely on a vendor implementation of caching that might not be available.

Simplified transactional control
The key to overall system performance is minimizing the impact of transactions while maintaining the integrity of your data. In addition to full Java Transaction API (JTA) support, Oracle ATG Web Commerce Data Anywhere lets both page developers and software engineers control the scope of transactions with the same transactional modes—required, supports, never—used by EJB deployment engineers.

Fine-grained access control
You can control who has access to which data at the data type, data object, or even individual property level with Access Control Lists (ACLs).

Integration with ATG product suites
Oracle ATG Web Commerce personalization, scenarios, commerce, portal, and content administration applications all make use of repositories for data access. A development team is free to use EJBs alongside Oracle ATG Web Commerce technology, but the easiest way to leverage investment in Oracle ATG Web Commerce technology is to follow the example set by the solution sets.
The Oracle ATG Web Commerce solution sets satisfy all their data access needs with repositories.

2 Repository API

The Oracle ATG Web Commerce Repository API (atg.repository.*) is the foundation of persistent object storage, user profiling, and content targeting in Oracle ATG Web Commerce products. A repository is a data access layer that defines a generic representation of a data store. Application developers use this generic representation to access data by using only interfaces such as Repository and RepositoryItem. Repositories access the underlying data storage device through a connector, which translates the request into whatever calls are needed to access that particular data store. Connectors for relational databases and LDAP directories are provided out-of-the-box. Connectors use an open, published interface, so additional custom connectors can be added if necessary.

Developers use repositories to create, query, modify, and remove repository items. A repository item is like a JavaBean, but its properties are determined dynamically at runtime. From the developer's perspective, the available properties in a particular repository item depend on the type of item they are working with. One item might represent the user profile (name, address, phone number), while another might represent the metadata associated with a news article (author, keywords, synopsis).

The purpose of the Repository interface system is to provide a unified perspective for data access. For example, developers can use targeting rules with the same syntax to find people or content.

Applications that use only the Repository interfaces to access data can interface to any number of back-end data stores solely through configuration. Developers do not need to write a single interface or Java class to add a new persistent data type to an application.

Each repository connects to a single data store, but multiple repositories can coexist within Oracle ATG Web Commerce products, where various applications and subsystems use different repositories or share the same repository. For example, the security system can be directed to maintain its list of usernames and passwords in an SQL database by pointing the security system at an SQL repository. Later, the security system can be changed to use an LDAP directory by reconfiguring it to point to an LDAP repository. Which repositories you use depends on the data access needs of your application, including the possible requirement to access data in a legacy data store.

The Oracle ATG Web Commerce platform includes the following models for repositories:

• SQL repositories use Oracle ATG Web Commerce's Generic SQL Adapter (GSA) connector to map between Oracle ATG Web Commerce and the data in an SQL database. You can use an SQL repository to access content, user profiles, application security information, and more.

• SQL profile repository, included in the Oracle ATG Web Commerce Personalization module, uses the Generic SQL Adapter connector to map user data that is contained in an SQL database. See the Personalization Programming Guide.

• LDAP Repositories (page 319) use the Oracle ATG Web Commerce LDAP connector to access user data in an LDAP directory. See the LDAP Repositories (page 319) chapter.

• Composite Repositories (page 279) let you use multiple data stores as sources for a single repository.

Applying Oracle 10g Automation Features to Performance Tuning of the China Unicom BSS


ORACLE 10G自动化特性在联通BSS系统性能优化工作中的应用摘要:在介绍Oracle 9i版本下性能优化的基本方法基础上,着重论述了Oracle 10g版本中的一些自动化特性,以及应用10g中的自动化特性进行数据库系统性能优化的方法和案例。

关键词:Oracle 自动化;10G 优化;ADDMOracle数据库是中国联通核心业务支撑系统中最主要的关系型数据库管理系统。

在联通BSS业务支撑系统的体系结构中,绝大多数的子系统都是典型的在线事务处理系统(OLTP系统),对业务处理的实时性和性能要求非常高。

基于以上情况,对Oralce数据库及BSS应用系统进行不断地性能调整和优化,就成为联通业务支撑工作中一个必不可少的关键性工作。

Oracle 10G中很多性能优化工作可以借助自动化的工具方便快捷地进行,使性能优化工作的效率和效果得以大幅提升。

1Oracle 9i环境下性能优化的典型方法和问题1.1Oracle 9i环境下典型的性能优化步骤通常情况下Oracle 9i环境下的性能优化工作的步骤如下:①用户或应用维护人员反馈:“某个应用场景下,系统反应很慢”;②数据库管理员(DBA)人工观测该应用是否确实比较慢,或者根据以往记录的性能测试数据来比较,确认该性能问题是否确实存在;③使用统计信息收集工具包-StatsPack包(以下简称SP 包),在可能会发生性能问题的时间段,进行性能统计信息搜集(即进行“snapshot快照”);④使用SP包,生成SP分析报告;⑤DBA阅读SP分析报告,结合自己的经验,手工在浩如烟海的信息中找到可能存在的性能瓶颈;⑥对于性能瓶颈,DBA结合自己的经验,手工设计优化方案;⑦DBA或应用系统开发人员,进行数据库配置、数据库对象或应用程序的调整;⑧调整完成后,再次进行性能表现的观测,如果问题没有得到解决,则再次从步骤三开始进行新一轮的过程,直至问题得以解决。

Figure 1 illustrates the typical performance-optimization steps of the Oracle 9i era.

1.2 Problems with the Oracle 9i performance-optimization method

As the steps above make clear, performance optimization under 9i is handled entirely by people: every judgment and decision depends on a human. This is not only very time- and labor-intensive, it also places high demands on the skills of the optimization staff and extremely high demands on the DBA's experience; often, only a DBA who is also familiar with the business can carry out effective optimization work.

Understanding the UNDO_RETENTION Parameter in Oracle 10g in Depth


Every database needs a way to manage rollback, or undo, data.

When a DML statement has executed but the user has not yet committed (COMMIT) the change, and the user decides not to keep the change, the modification must be undone and the data rolled back to its state before the change; this is done using data known as undo records.

Using undo records, we can:

1. Roll back a transaction with the ROLLBACK statement, undoing the data changes made by DML operations
2. Recover the database
3. Provide read consistency
4. Analyze data as of an earlier point in time with Oracle Flashback Query
5. Recover from logical corruption with the Oracle Flashback features

Automatic Undo Management (AUM) in Oracle 10g: in Oracle 10g, rollback segments can be managed automatically by setting configuration parameters.

To enable automatic management of the undo space, you must first specify automatic undo mode in init.ora or in the SPFILE.

Next, you need to create a dedicated tablespace to hold undo information; this ensures that users do not store undo information in the SYSTEM tablespace.

You also need to choose a retention period for undo.

To implement AUM, three parameters must be configured: UNDO_MANAGEMENT, UNDO_TABLESPACE, and UNDO_RETENTION. Checking the current settings of these initialization parameters:

    SQL> show parameter undo_tablespace;
    NAME                TYPE        VALUE
    ------------------- ----------- ----------
    undo_tablespace     string      UNDOTBS1
    SQL> show parameter undo_management;
    NAME                TYPE        VALUE
    ------------------- ----------- ----------
    undo_management     string      AUTO
    SQL> show parameter undo_retention;
    NAME                TYPE        VALUE
    ------------------- ----------- ----------
    undo_retention      integer     900

Descriptions of the initialization parameters:

UNDO_MANAGEMENT: If AUTO, use automatic undo management. The default is MANUAL.

UNDO_TABLESPACE: An optional dynamic parameter specifying the name of an undo tablespace. This parameter should be used only when the database has multiple undo tablespaces and you want to direct the database instance to use a particular undo tablespace.

UNDO_RETENTION: The UNDO_RETENTION parameter is ignored for a fixed-size undo tablespace. The database may overwrite unexpired undo information when tablespace space becomes low. For an undo tablespace with the AUTOEXTEND option enabled, the database attempts to honor the minimum retention period specified by UNDO_RETENTION. When space is low, instead of overwriting unexpired undo information, the tablespace auto-extends. If the MAXSIZE clause is specified for an auto-extending undo tablespace, when the maximum size is reached, the database may begin to overwrite unexpired undo information.

If the initialization parameter UNDO_MANAGEMENT is set to AUTO, Oracle 10g enables AUM.


Oracle at TREC 10: Filtering and Question-Answering

Shamim Alpha, Paul Dixon, Ciya Liao, Changwen Yang
Oracle Corporation
500 Oracle Parkway M/S 4op8
Redwood Shores, CA 94065 USA
trec@

Abstract: Oracle's objective in TREC-10 was to study the behavior of Oracle information retrieval in previously unexplored application areas. The software used was Oracle9i Text[1], Oracle's full-text retrieval engine integrated with the Oracle relational database management system, and the Oracle PL/SQL procedural programming language. Runs were submitted in the filtering and Q/A tracks. For the filtering track we submitted three runs, in adaptive filtering, batch filtering, and routing. By comparing the TREC results, we found that the concepts (themes) extracted by Oracle Text can be used to aggregate document information content to simplify statistical processing. Oracle's Q/A system integrated information retrieval (IR) and information extraction (IE). The Q/A system relied on a combination of document and sentence ranking in IR, named-entity tagging in IE, and shallow-parsing-based classification of questions into pre-defined categories.

1. Filtering based on Theme Signature

As a first-time filtering track participant, Oracle submitted runs for adaptive filtering, batch filtering, and routing this year. Only linear-utility optimized runs were submitted for adaptive filtering and batch filtering. The filtering system is built on the Oracle 9i database with PL/SQL, an Oracle-supported database access language. Since the routing sub-task outputs the top 1000 ranked documents per category, and the training process and similarity-score calculation algorithm are the same for batch filtering and routing, we will focus our discussion on batch filtering and adaptive filtering.

The filtering system can be divided into three parts based on functionality:

a. Theme Vector Generation
b. Training
c. Classification

Theme Vector Generation

Theme vector generation generates a theme vector for each document.
It is built-in functionality of Oracle Text, the information retrieval component of the Oracle database[2]. A theme vector containing a list of themes (concepts) and associated weights carries all the information about a document used in classification. Themes are normalized words that have meanings individually and are extracted based on the Oracle Text knowledge base. The knowledge base is built in-house and contains about 425 thousand concepts classified into 2000 major categories. These categories are organized hierarchically under six top terms: business and economics, science and technology, geography, government and military, social environment, and abstract ideas and concepts. This knowledge base is built to support concept search and retrieval. For this TREC work, the ConText knowledge base was employed in our filtering system to preprocess documents and generate concept terms. Although the Oracle Text user-extensible knowledge base functionality allows users to modify the built-in knowledge base using a user-specified thesaurus, we used the knowledge base without any modification. We believe augmenting the knowledge base with domain-specific information could improve filtering performance. In theme generation, known phrases are recognized using a greedy algorithm, and unknown words and proper-name phrases are recognized and treated as themes. Words and phrases are normalized to their canonical forms. Every normalized term is a potential theme for a document.

Theme weights are used to rank the semantic significance of themes to the aggregate document content. Themes are assigned initial weights based on their lexical flags in the knowledge base. Next, several factors derived from the structure of a document and the frequency of the theme in the document are employed to modify the initial weights of the themes.
For example, the first few terms inside a sentence have higher weights than the terms at the end of sentences, to account for "fronting" and sentence focus.

Generated theme vectors are normalized to unit length before being sent to the training or classification process. This normalization can be written as

    w_j^n = w_j / sqrt( sum_j w_j^2 )

where w_j^n and w_j are the j-th component (j-th theme term) weight of theme vector w after and before unit normalization, respectively.

Our prior experience demonstrates that themes are superior to text tokens in representing text documents of medium to large size for classification purposes. Oracle Text first tokenizes documents and then processes these tokens using a greedy maximal-match algorithm to generate themes. A brief description of the process of generating themes from tokens may shed some light on why themes are superior to tokens in classification. After finding a token, Oracle Text gets the part-of-speech information from the knowledge base, or finds phrases based on the greedy algorithm and the lexical knowledge base. If the token is a noun, a canonical form is used as the normalized form for this token, such as "tastefulness" with canonical form "tasting" and "dull-headedness" with canonical form "stupidity". If the token is a non-noun, a base form is found from the knowledge base, or by morphology if the token does not exist in the knowledge base. After that, a normalized noun form is used as the theme form for the non-noun base form. For example, "steadied" has the base form "steady", which corresponds to the normalized form "steadiness". The following differences between themes and tokens may contribute to their different behaviors in classification:

1. Themes can handle phrases, while tokens cannot without a lexicon.
2. Themes are represented with normalized forms of concepts, while tokens are surface forms with or without stemming.
Word normalization is mostly based on lexical knowledge, while stemming of a token is mostly based on morphology.
3. The weight of a theme expresses the lexical information of a term, its locations in a document, and term frequency. The weight of a token typically only includes term-frequency information.

For the classification task no parent themes (broader terms) were used. Whether or not parent themes improve the learning quality is actually an open question. On one side, a specific word should be more important for representing a document, and a parent theme may act as a common word. On the other hand, one of the parent themes may tell exactly what a document is about. However, that might depend on the level of the parent theme and on whether the hierarchy of the knowledge base represents the same knowledge hierarchy as the classification application. We intend to investigate this issue thoroughly in the future.

Training

The training process calculates the summation of all relevant theme vectors for each category. The summation result serves as the original theme vector for the category. Because of accumulation, the number of themes in the category theme vector can be large. Experiments show that removing some common themes and less frequent themes from the category theme vector can improve classification accuracy. Theme reduction can also reduce resource usage and improve classification performance. We adopt a two-step theme reduction. The first step is to choose the top 400 themes with the highest theme weights in the category theme vector. As mentioned earlier, the theme weight obtained from Oracle Text combines information about the lexical significance, word position inside a sentence, and occurrence frequency inside the document. Those top 400 themes in the category theme vector are the most frequently occurring and significant words for the category.
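As an illustration, the unit normalization and the per-category accumulation with top-k truncation described above can be sketched as follows. This is an illustrative Python sketch with our own function names, not the paper's PL/SQL implementation; theme vectors are assumed to be sparse {theme: weight} dictionaries.

```python
import math
from collections import Counter

def normalize(vec):
    """Scale a sparse theme vector {theme: weight} to unit length."""
    norm = math.sqrt(sum(w * w for w in vec.values()))
    return {t: w / norm for t, w in vec.items()} if norm else dict(vec)

def category_vector(doc_vectors, top_k=400):
    """Sum the (normalized) theme vectors of a category's relevant
    documents, keeping only the top_k heaviest themes."""
    total = Counter()
    for vec in doc_vectors:
        total.update(normalize(vec))
    return dict(total.most_common(top_k))
```

A category vector built this way still carries combined Oracle Text weights; the second reduction step and the reweighting for classification are described below.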
Another rationale for choosing themes by weight is that words with little meaning have lower weights and therefore can be removed.

The first step of theme selection, based on theme weight, may choose some themes which are common to many categories. These common themes are not specific to one category and may add noise to the classification process. The second step of theme reduction is to choose themes which are more specific to one category. We use a chi-square test for theme selection [3]. Specifically, we choose a theme if the null hypothesis that this theme is independent of the considered category can be shown to be false. A theme is chosen if:

    N (N r_t - n_t R)^2 / ( R n_t (N - R) (N - n_t) ) > 3.84

where
    N is the total number of training documents,
    R is the number of training documents in this category,
    n_t is the number of training documents containing this word,
    r_t is the number of training documents in this category and containing this word.
The value 3.84 is chosen because the confidence level of the chi-square test is 0.95.

With the chi-square test, the average theme vector size can be reduced to 280. In the original category theme vector, the weight is the summation of each document's theme weights; those weights help us choose the top 400 themes for the category. However, during the classification process, we use Robertson-Sparck Jones weights [4] as the term weights in category theme vectors. The weights are calculated based on the statistical characteristics of the training set and the relevant category:

    log [ (r_t + 0.5)(N - R - n_t + r_t + 0.5) / ( (n_t - r_t + 0.5)(R - r_t + 0.5) ) ]

This formula is obtained from the Bayesian statistical model.
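The chi-square selection test and the Robertson-Sparck Jones weight described above can be sketched as follows. This is an illustrative Python sketch under the paper's definitions of N, R, n_t, and r_t; the function names are ours, not Oracle's.

```python
import math

def chi_square_select(N, R, n_t, r_t, critical=3.84):
    """Keep a theme if the chi-square statistic for theme/category
    independence exceeds the 95%-confidence critical value (3.84)."""
    denom = R * n_t * (N - R) * (N - n_t)
    if denom == 0:
        return False
    chi2 = N * (N * r_t - n_t * R) ** 2 / denom
    return chi2 > critical

def rsj_weight(N, R, n_t, r_t):
    """Robertson-Sparck Jones term weight with the usual 0.5 smoothing."""
    return math.log((r_t + 0.5) * (N - R - n_t + r_t + 0.5)
                    / ((n_t - r_t + 0.5) * (R - r_t + 0.5)))
```

A term concentrated in the category (r_t close to both R and n_t) passes the test and receives a positive weight; a term that mostly occurs outside the category receives a negative weight.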
The Robertson-Sparck Jones weight is the component weight for one term in estimating the log-odds that a given document belongs to the considered category, under the assumption that terms are independent [5].

Classification

Before classification, category theme vectors are normalized to unit length. In classification, the similarity score S between an incoming document and each category is calculated as the dot product between the document theme vector vd and the category theme vector vc, that is, S = vd.vc. The document is classified into the categories for which the similarity scores are larger than the corresponding category thresholds. The predefined thresholds are determined from relevance information, either from the training set in batch filtering or from feedback in adaptive filtering.

Threshold Determination

Batch filtering

Each category has its own threshold to determine whether a document can be classified into it based on the similarity score. To determine the threshold for one category, we use the classification module to calculate the similarity score between all training documents and the considered category. For any given threshold x, we can build the following contingency table, since we know the actual categories of each training document:

                   Relevant   Not Relevant
    Retrieved      R+         N+
    Not Retrieved  R-         N-

We can define a utility (goal) function of the above four numbers, say f(R+, N+, R-, N-, x). x appears explicitly in the function because R+, R-, N+ and N- are all functions of the threshold x.
The threshold is chosen to maximize the function f:

    Threshold = x: max_x f(R+, N+, R-, N-, x)

In TREC-10, we submitted the batch filtering run based on the optimization function of linear utility, which is f(R+, N+) = T10U = 2R+ - N+.

In implementation, one can generate an array of the training documents sorted by similarity score to the given category in decreasing order. The relevance information of the documents located before any given document in the sorted array determines R+ and N+ at the threshold value equal to the similarity score of that document. For each document in the sorted array, one can then calculate the T10U function value at the threshold equal to that document's similarity score, based on the computed R+ and N+. Because the array is sorted with decreasing similarity scores, one can draw a curve of T10U vs. threshold. As the threshold decreases from its largest value, the T10U values first increase, because more relevant documents are located at the positions with larger similarity scores, and then decrease after reaching a peak. The peak position corresponds to the similarity score whose value is the optimal threshold maximizing the T10U function. This calculation assumes that the similarity-score distribution and the T10U quantity of the training set are similar to those of the test set.

Adaptive training

In adaptive filtering, we first built initial category theme vectors by training on an initial training set, which contains two relevant documents per category. The training process is the same as discussed above. The initial category threshold is set to 40% of the minimum similarity score of the two relevant documents with the considered category. We then classify the test documents in batch mode, with each batch containing 2000 documents coming from the test set stream.
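The T10U threshold sweep described under threshold determination can be sketched as follows. This is an illustrative Python sketch; the function name and the (score, is_relevant) pair layout are assumptions, not the original implementation.

```python
def optimal_threshold(scored_docs):
    """scored_docs: list of (similarity_score, is_relevant) pairs.
    Sweep candidate thresholds in decreasing score order and return
    the score that maximizes T10U = 2*R+ - N+."""
    ranked = sorted(scored_docs, key=lambda p: p[0], reverse=True)
    best_t10u, best_threshold = float("-inf"), None
    r_plus = n_plus = 0
    for score, relevant in ranked:
        # Lowering the threshold to this score retrieves this document too.
        if relevant:
            r_plus += 1
        else:
            n_plus += 1
        t10u = 2 * r_plus - n_plus
        if t10u > best_t10u:
            best_t10u, best_threshold = t10u, score
    return best_threshold, best_t10u
```

The single pass over the sorted array traces exactly the T10U-vs-threshold curve described above and keeps the peak.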
After classification of each batch, feedback information, including the relevance judgments and the similarity scores, is sent to adaptive training; see Fig. 1.

Adaptive training includes updating the category theme vectors and the category thresholds. In order to update the category theme vector, we have to maintain the original category theme vectors, which are the theme vectors before any theme selection, with theme weights from the summation of Oracle Text theme weights. To keep the number of themes in the category theme vector from becoming too large, we limit the size of each original category theme vector to a maximum of 2000. The extra feedback training document theme vectors are added to the original category theme vectors using the Widrow-Hoff algorithm [6]:

    w_j^n = w_j - 2z (w . x_i - y_i) x_i,j

where w_j and w_j^n are the weights for the j-th component of the category theme vector before and after adaptive training, respectively. x_i is the theme vector of the i-th feedback document, and y_i is the relevance judgment of the i-th feedback document with respect to the considered category, with y_i = 0 denoting not relevant and y_i = 1 denoting relevant. w.x_i denotes the dot product between the theme vector w and x_i. z > 0 is the learning rate and is set to 0.2.

The Widrow-Hoff algorithm generates a list of updated themes and weights. We maintain only the top 2000 highest-weight themes for each category. The weights here are quantities calculated from Oracle Text theme weights. We apply the theme selections and employ Robertson-Sparck Jones weights as the category theme vector weights for classification, as discussed in the training section above.

[Figure 1. Adaptive filtering diagram: documents pass through theme vector generation and classification against category theme vectors and category thresholds, producing suggested categories; feedback information drives Widrow-Hoff training of the original category theme vectors, followed by theme selection and Robertson-Sparck Jones weighting, as well as threshold modification.]

Threshold Modification

Thresholds can be calculated based on the relevance information and similarity scores of all previous feedback documents in the way we discussed in the threshold determination section. However, that calculation may take an unacceptably long time. Instead, we adopt a simple method to adjust the existing thresholds based solely on the current feedback information.

Thresholds can be adjusted by calculating the optimal threshold for the extra feedback training set, as discussed in the threshold determination section. We denote this optimal threshold as optimal_threshold_extra_training; the updated threshold is then:

    updated_threshold = old_threshold + C (optimal_threshold_extra_training - old_threshold)

where C is a learning parameter and is set to 0.3. We note that the feedback batch size and the learning parameter C are related parameters: if the feedback batch size is small, the optimal threshold for the extra feedback documents may vary a lot, and one then chooses a smaller C. C has to be chosen such that the updated thresholds change with the feedback process in a systematic and stable way.

Submission Result and Discussions

Oracle submitted three runs. They are listed in Table 1 and Table 2, with the adaptive and batch runs in Table 1 and routing in Table 2. The numbers in parentheses are the median values over all participants. The median values are the (N/2+1)-th value in a sorted decreasing list if the number of participants N is even. Except for the precision of the batch filtering run, all numbers in our submitted runs are above the median.

We note that routing behaves better than batch filtering. The fact that the batch filtering system has only one more component (thresholding) than routing implies that our threshold determination is not very good for batch filtering. In batch filtering, the threshold cannot be adjusted: once a threshold is determined, it is used to classify the whole test set without any adjustment. So the initial threshold determination is critical.
However, it is interesting to note that the same simple method of determining the threshold behaves quite well in adaptive filtering, when comparing our adaptive filtering result with others.

Our training, classification, and thresholding methods are all well-known methods, but our system performs better than the medians, especially in adaptive filtering. One explanation for this might be the linguistic suite in Oracle Text and the knowledge base we used to process documents. The theme vector we get from Oracle Text contains more information than just text tokens and their occurrence frequencies in the document. A theme vector holds a list of normalized terms. This term normalization could reduce the size of the collection thesaurus and make it easier to match different terms expressing the same concept. The weight of a theme contains not only occurrence-frequency information but also lexical information. In conclusion, the combination of these linguistic functionalities and the appropriate engineering of some well-known learning methods is believed to have made our system successful.

Table 1: Adaptive and batch filtering results with T10U optimization. The numbers in parentheses are the median values for all participants.

    Run label    Run type  Optimization  Precision      Recall         T10SU          F-beta
                                         (median)       (median)       (median)       (median)
    oraAU082201  adaptive  T10U          0.538 (0.462)  0.495 (0.213)  0.291 (0.137)  0.519 (0.273)
    oraBU082701  batch     T10U          0.556 (0.618)  0.353 (0.293)  0.249 (0.247)  0.450 (0.448)

2. Question Answering based on Information Retrieval and Information Extraction

Questions can be classified into pre-defined categories.
Typical categories are: person names, organization names, dates, locations (cities, countries, states, provinces, continents), numbers, times, meanings of acronyms and abbreviations, weights, lengths, temperatures, speeds, manners, durations, products, reasons, etc. [7][8]

Information extraction (IE) techniques allow us to extract lists of semantic categories from text automatically [9], such as person names, organization names, dates, locations, and durations, which are subsets of the whole set of pre-defined question categories. If a question category is covered by IE, finding the locations of answer candidates becomes easier: the remaining task is to rank the list of answer candidates extracted by IE. Otherwise, a number of heuristics are employed to locate the answer candidates and rank them.

Overview of the Oracle Q/A system:

Our Q/A system consists of three major components, shown in Figure 2: (1) question processor, (2) sentence ranking, (3) answer extraction.

Question Processor:

Its role is to: (a) classify a question into a list of pre-defined semantic categories, (b) extract content words from a question and send them to Oracle to retrieve relevant documents.

To classify a question, the first step is to determine its question type. The following wh-words are used to determine the question types: who, why, where, whom, what, when, how much money, how much, how many, how (rich, long, big, tall, hot, far, fast, large, old, wide, etc.).

A list of heuristics helps map the question types to the pre-defined semantic categories:

(1) who is (was) "person name" => occupation
(2) other "who" types => person name
(3) how rich, how much money, how much + VBD (VBP, VBZ, MD) => money expression
(4) other "how much" types => number
(5) how hot (cold) => temperature
(6) how fast => speed
(7) how old => age
(8) how long => period of time or length
(9) how big => length or square measure or cubic measure
(10) how tall (wide, far) => length

Table 2: Routing result.
The numbers in parentheses are the median values for all participants.

    Run label    Run type  Mean average precision (median)
    oraRO082801  Routing   0.104 (0.082)

[Figure 2: Architecture of the Oracle Q/A System]

A complicated problem is to map the question type "what" to its semantic category. Here, a part-of-speech (POS) tagger is used to assign the most appropriate part of speech to each word in a question based on the contextual information [10]. The head noun of the first noun phrase in a question is used to decide its semantic category. For example: "What costume designer decided that Michael Jackson should only wear one glove?" The head noun of the first noun phrase is "designer". Using WordNet's lexicon [11], one finds that "designer" is a person, so the semantic category of this question is "person name". If the head noun of the first noun phrase in a question is a stop word, then the head noun of the second noun phrase is used to decide the semantic category. For example: "What was the name of the first Russian astronaut to do a spacewalk?" The head noun of the first noun phrase is "name" (a stop word), so the head noun of the second noun phrase, "astronaut", is used to decide the semantic category. Similarly, WordNet's API can tell that its semantic category is "person name".

When extracting a list of keywords from a question, our principle is to extract all content words but ignore all non-content words. The distinction between these two types of words is that content words should appear in the relevant documents, while non-content words should not. At the least, stop words and stop phrases (such as: how much, what time, what country) belong to the non-content words. Furthermore, a list of heuristics is helpful for distinguishing content words from non-content words.
For example: "What is the length of coastline of the state of Alaska?" and "What is the Illinois state flower?" The word "state" is a non-content word in the first question but a content word in the second. Removing as many non-content words as possible makes the retrieved documents focus more on the subject topic of the question, and is very helpful for extracting the right answers from the retrieved documents.

[Figure 2 diagram: the question processor performs content-word extraction and question categorization; content words are sent to the Oracle search engine over the TREC index; retrieved documents go through sentence segmentation; if the question category is among the IE categories, sentence filtering and sentence ranking feed the IE-based answer extractor; otherwise sentence ranking feeds the non-IE-based answer extractor.]
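The wh-word heuristics (1)-(10) above might be sketched as a simple ordered rule table. This is an illustrative Python sketch; the regular-expression patterns are simplified assumptions of ours, not Oracle's actual rules, and the POS-based handling of "what" questions is omitted.

```python
import re

# First matching rule wins; ordering mirrors heuristics (1)-(10) above.
RULES = [
    (r"^[Ww]ho (is|was) [A-Z]", "occupation"),              # (1) who is <person name>
    (r"(?i)^whom?\b", "person name"),                       # (2) other "who" types
    (r"(?i)^how (rich|much money)\b", "money expression"),  # (3)
    (r"(?i)^how much\b", "number"),                         # (4)
    (r"(?i)^how (hot|cold)\b", "temperature"),              # (5)
    (r"(?i)^how fast\b", "speed"),                          # (6)
    (r"(?i)^how old\b", "age"),                             # (7)
    (r"(?i)^how long\b", "period of time or length"),       # (8)
    (r"(?i)^how big\b", "length or square/cubic measure"),  # (9)
    (r"(?i)^how (tall|wide|far)\b", "length"),              # (10)
]

def question_category(question):
    """Return the first matching semantic category, or None
    (e.g. for "what" questions, which need the POS-based mapping)."""
    q = question.strip()
    for pattern, category in RULES:
        if re.match(pattern, q):
            return category
    return None
```

Rule (1) is checked before rule (2) so that "Who is/was <Name>" questions map to occupation rather than person name.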
According to our experiments, it is suitable to extract long answers (250 bytes) from ranked paragraphs, but to extract short answers (50 bytes), the paragraphs must be further segmented into sentences.Ranking the segmented sentences is based on the following information: (1) the number of unique content words in a sentence (2) tf and idf of each content word (3) total number of content words in a query (4) the smallest window size which contains all the unique content words in the sentence.Our information extractor (IE) has two modules: one used for sentence filtering, the other used for answer extraction (IE-based answer extractor). If the semantic category of a question is covered by the IE, the IE is used for sentence filtering. Only selected sentences which satisfy the IE, are the candidates of the sentence ranking. For example, if the semantic category of a question is "person name", only the sentences which include at least one person name will participate the sentence ranking, all the rest of sentences are filtered out from answer extraction, because they do not include answers of the question. The IE was also integrated with sentence segmentation algorithm. The standard sentence delimiters are "?!.", followed by one or more spaces, then followed by a word whose first letter is a capital letter. There are many exceptional cases, such as Mr. Steve, St. Louis. The IE could recognize these exceptional cases, and guarantee the success of the sentence segmentation.Answer Extraction:After the sentences are ranked, top five of them are used to extract the answers. From previous description, our IE only covers a subset of the whole semantic categories. If the answer type of a question belongs to the subset, it is easy to extract answers using the IE. Otherwise, we concluded a number of heuristics, which help to extract answers. The sentence ranking algorithm can find the smallest window in a sentence, which contains all the content words in the sentence. 
This window divides the sentence into three parts: (1) the words in front of the window, (2) the words after the window and (3) the words inside of the window. According to our observation, the priorities of the three parts are (1) (3) (2). We further observed that in (1) and (3), the words closer to the windows have higher priority than others. Based on these observations, we picked up certain percent of words from each part of the sentence according to their priorities to form the final answers.Other Linguistic Processing:(1) acronyms and abbreviations: like other advanced search engines, our system also does limited automatic query expansion, mainly for queries with acronyms, abbreviations, etc. It expanded (a) acronyms of geographical terms, such as "U.S. = United States", "N.C. = North Carolina" (b) abbreviations of organization names, such as "YMCA = young mens christian association", "NBS = national bureau of standards"(2) stemming: Oracle's search engine does not use Porter's stemmer. Our stemmer is more conservative, which obtains good precision, may hurt recall a little bit. To remedy this problem, extra stemming wasadded in rare situations. For example, "When did Hawaii become a state?", the main verb was stemmed as "$become".(3) Information Extractor (IE): an information extractor was created over the last few months to recognize(a) person names (b) organization names (c) dates (d) number (e) locations (f) money expression (g) time (h) temperature (I) speed (j) weight (k) length (l) square measure (m) cubic measure (n) age, etc. Performance Evaluation:A question answering system was created based on information retrieval and information extraction. Our study shows that traditional IR technique are not only useful to rank documents, but also to rank paragraphs and sentences. 
Finding the smallest window from a sentence which contains all the content words in it, is very helpful to extract answers when its semantic category is not covered by the IE, the window size is also an important factor to decide the sentence rank.The following table shows the evaluation result provided by NIST for our systemstrict lenientNIST score 0.477 0.491% of correct answers 60.77% 62.60%% of correct first answers 40.04% 40.85%The current (Oracle 9i) knowledge base is designed for information retrieval; for Q/A track, we found it nec-essary to expand the lexicon to cover wh-focus ontological facets.3. Web TrackAs preparation, we investigated the TREC-10 web task using TREC-9 web track documents and queries. We also attempted to productize lessons learnt from our participation in Trec8 adhoc manual task. A set of dif-ferent collections including TREC Web and Adhoc collections helped us in our effort to formulate generic techniques applicable across domain. Due to resource constraints, we were unable to work on Trec10 web track. Here we summarize our findings based on older collections.Our experiments in link analysis using Oracle intranet data indicate that link analysis adds little value to intranet search. Link analysis is a technique that helps bring order to an unorganized collection lacking cen-tral authority (such as web) by using popularity measure. A organized intranet will have clearly defined authorities for different subject matters.IDF weighting used in tf-idf scoring is not very effective when the collection is pretty large (a couple of mil-lion documents) and number of terms in the queries is pretty high. If the queries are free-text queries, IDF weighting fails to distinguish between important and unimportant terms. Weighting techniques which weight terms inversely proportional to a factor of the frequency ratios (x times as rare terms get y times as much weight) seem to perform better in this situation. 
We saw significant improvement in R-precision by adopting this technique.As the number of documents increases, the number of distinct score values supported by a system becomes important. Until recently Oracle Text used 100 distinct integers in the range of 1 to 100 for scoring. We found that allowing a million distinct values improves system IR quality computed in average precision by improving tie splitting. Even though number of relevant documents retrieved did not increase very signifi-cantly (about 3-4%), average precision increased by 10-15% (for example, Trec9 web track average preci-sion improved from 0.11 to 0.125).。
