Converting Between System Representations
02-bits-bytes-ints

Bits, Bytes, and Integers
Introduction to Computer Systems, 2nd Lecture, Sep 18, 2014
Instructors: Xiangqun Chen, Junlin Lu, Guangyu Sun, Xuetao Guan

Today: Bits, Bytes, and Integers
▪ Representing information as bits
▪ Bit-level manipulations
▪ Integers: representation (unsigned and signed); conversion and casting; expanding and truncating; addition, negation, multiplication, shifting; summary
▪ Representations in memory, pointers, strings

Binary Representations
Base 2 number representation:
▪ Represent 15213 (decimal) as 11101101101101 (binary)
▪ Represent 1.20 (decimal) as 1.0011001100110011[0011]... (binary)
▪ Represent 1.5213 x 10^4 as 1.1101101101101 (binary) x 2^13
(Figure: electronic signals encode the two bit values as voltage ranges, roughly 0.0-0.5 V for 0 and 2.8-3.3 V for 1.) Binary is the most practical system to use!

Encoding Byte Values
Byte = 8 bits
▪ Binary: 00000000 to 11111111
▪ Decimal: 0 to 255
▪ Hexadecimal: 00 to FF
▪ Base-16 number representation uses the characters '0' to '9' and 'A' to 'F'
▪ Write FA1D37B (hex) in C as 0xFA1D37B or 0xfa1d37b

Byte-Oriented Memory Organization
Programs refer to data by address:
▪ Conceptually, envision memory as a very large array of bytes (in reality it is not, but you can think of it that way)
▪ An address is like an index into that array
▪ A pointer variable stores an address
Note: the system provides a private address space to each "process":
▪ Think of a process as a program being executed
▪ So a program can clobber its own data, but not that of others

Machine Words
Any given computer has a "word size":
▪ Nominal size of integer-valued data, and of addresses
▪ Most current machines use 32 bits (4 bytes) as the word size
▪ This limits addresses to 4 GB (2^32 bytes), which is becoming too small for memory-intensive applications, leading to the emergence of computers with a 64-bit word size
▪ Machines still support multiple data formats: fractions or multiples of the word size, always an integral number of bytes

Word-Oriented Memory Organization
Addresses specify byte locations:
▪ The address of a word is the address of its first byte
▪ Addresses of successive words differ by 4 (32-bit words) or 8 (64-bit words)
(Figure: bytes at addresses 0000-0015 grouped into 32-bit words at addresses 0000, 0004, 0008, 0012; addresses such as 0001 or 0002 fall inside a word.)

Byte Ordering
Conventions:
▪ Big endian (Sun, PPC Mac, Internet): the least significant byte has the highest address
▪ Little endian (x86): the least significant byte has the lowest address

Byte Ordering Example
Variable x has the 4-byte value 0x01234567, and &x is 0x100:
▪ Big endian:    addresses 0x100..0x103 hold 01 23 45 67
▪ Little endian: addresses 0x100..0x103 hold 67 45 23 01

General Boolean Algebras
Operate on bit vectors; operations are applied bitwise:
    01101001 & 01010101 = 01000001
    01101001 | 01010101 = 01111101
    01101001 ^ 01010101 = 00111100
             ~ 01010101 = 10101010
All of the properties of Boolean algebra apply.

Shift Operations
Left shift x << y:
▪ Shift bit-vector x left y positions; throw away extra bits on the left, fill with 0's on the right
Right shift x >> y:
▪ Shift bit-vector x right y positions; throw away extra bits on the right
▪ Logical shift: fill with 0's on the left
▪ Arithmetic shift: replicate the most significant bit on the left
Undefined behavior: shift amount < 0 or >= word size
Examples:
    x = 01100010: x << 3 = 00010000; logical x >> 2 = 00011000; arithmetic x >> 2 = 00011000
    x = 10100010: x << 3 = 00010000; logical x >> 2 = 00101000; arithmetic x >> 2 = 11101000
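The byte-ordering and shift examples above can be checked directly in C. The following is a minimal sketch, not part of the original lecture; it prints the bytes of an int starting at the lowest address and reproduces the 8-bit shift examples. Note that converting 0xa2 to signed char and right-shifting a negative value are implementation-defined in C; the comments show the results typical compilers produce.

    #include <stdio.h>

    /* Print the bytes of an object, lowest address first. On a little-endian
       x86 machine the least significant byte prints first; on a big-endian
       machine the most significant byte prints first. */
    static void show_bytes(const unsigned char *p, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            printf(" %.2x", p[i]);
        printf("\n");
    }

    int main(void)
    {
        int x = 0x01234567;
        show_bytes((const unsigned char *)&x, sizeof x);   /* "67 45 23 01" on x86 */

        unsigned char a = 0x62;   /* 0110 0010 */
        unsigned char b = 0xa2;   /* 1010 0010 */
        printf("%.2x %.2x\n", (a << 3) & 0xff, (b << 3) & 0xff);  /* 10 10 */
        printf("%.2x %.2x\n", a >> 2, b >> 2);                    /* 18 28: the logical-shift rows */
        printf("%.2x\n", (unsigned char)((signed char)b >> 2));   /* e8: arithmetic shift, typical */
        return 0;
    }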
Encoding Integers
Unsigned:          B2Uw(X) = Σ_{i=0}^{w−1} x_i · 2^i
Two's complement:  B2Tw(X) = −x_{w−1} · 2^{w−1} + Σ_{i=0}^{w−2} x_i · 2^i
The C type short is 2 bytes long, e.g. short int x = 15213; short int y = -15213;
Sign bit: for two's complement, the most significant bit indicates the sign — 0 for nonnegative, 1 for negative.

Numeric Ranges
Unsigned values:
▪ UMin = 0 (bit pattern 000...0)
▪ UMax = 2^w − 1 (bit pattern 111...1)
Two's complement values:
▪ TMin = −2^(w−1) (bit pattern 100...0)
▪ TMax = 2^(w−1) − 1 (bit pattern 011...1)
▪ Minus 1 has bit pattern 111...1
Values for w = 16:
    UMax  =  65535   hex FF FF   binary 11111111 11111111
    TMax  =  32767   hex 7F FF   binary 01111111 11111111
    TMin  = -32768   hex 80 00   binary 10000000 00000000
    -1               hex FF FF   binary 11111111 11111111
    0                hex 00 00   binary 00000000 00000000

Values for Different Word Sizes
    w       8        16         32               64
    UMax    255      65,535     4,294,967,295    18,446,744,073,709,551,615
    TMax    127      32,767     2,147,483,647    9,223,372,036,854,775,807
    TMin    -128     -32,768    -2,147,483,648   -9,223,372,036,854,775,808
Observations:
▪ |TMin| = TMax + 1 (asymmetric range)
▪ UMax = 2 * TMax + 1
C programming: #include <limits.h> declares constants such as ULONG_MAX, LONG_MAX, LONG_MIN; the values are platform specific.

Negation: Complement & Increment
Claim: for two's complement, ~x + 1 == -x.
Example (w = 16): x = 15213 is 0011 1011 0110 1101; ~x is 1100 0100 1001 0010 (-15214); ~x + 1 is 1100 0100 1001 0011 (-15213).

Unsigned & Signed Numeric Values
▪ Equivalence: same encodings for nonnegative values
▪ Uniqueness: every bit pattern represents a unique integer value, and each representable integer has a unique bit encoding
▪ The mappings can be inverted: U2B(x) = B2U^-1(x) gives the bit pattern for an unsigned integer; T2B(x) = B2T^-1(x) gives the bit pattern for a two's-complement integer
Example for w = 4 (pattern X, B2U(X), B2T(X)):
    0000  0  0      1000   8  -8
    0001  1  1      1001   9  -7
    0010  2  2      1010  10  -6
    0011  3  3      1011  11  -5
    0100  4  4      1100  12  -4
    0101  5  5      1101  13  -3
    0110  6  6      1110  14  -2
    0111  7  7      1111  15  -1

Mapping Between Signed & Unsigned
T2U and U2T keep the bit representation and reinterpret it. For w = 4, patterns 0000-0111 map to 0-7 under both interpretations; patterns 1000-1111 map to 8-15 unsigned but -8 to -1 signed, a difference of +/- 16 = 2^4.

Relation Between Signed & Unsigned
ux = x when x >= 0, and ux = x + 2^w when x < 0: the large negative weight of the sign bit becomes a large positive weight. Equivalently, T2Uw(x) = x + x_{w−1} · 2^w.

Conversion Visualized
Two's complement to unsigned is an ordering inversion: negative values map to large positive values. The two's-complement range [TMin, TMax] maps onto the unsigned range [0, UMax], with -1 -> UMax, -2 -> UMax − 1, ..., TMin -> TMax + 1.
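As a quick check of the B2U/B2T formulas above, here is a small C sketch (not from the lecture) that reinterprets the 16-bit pattern 0xC493 both ways; it reproduces the unsigned value 50323 and the two's-complement value -15213, which differ by exactly 2^16.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint16_t bits = 0xC493;   /* the bit pattern used for -15213 in the examples */

        /* B2U16 weights bit 15 as +2^15; B2T16 weights it as -2^15. */
        unsigned b2u = bits;                                      /* 50323 */
        int      b2t = (bits & 0x7FFF) - ((bits >> 15) * 32768);  /* -15213 */

        printf("B2U16(0xC493) = %u\n", b2u);
        printf("B2T16(0xC493) = %d\n", b2t);
        printf("difference    = %d  (= 2^16)\n", (int)(b2u - (unsigned)b2t));
        return 0;
    }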
Signed vs. Unsigned in C
Constants:
▪ By default, constants are considered to be signed integers
▪ Unsigned if they have a "U" suffix: 0U, 4294967259U
Casting:
▪ Explicit casting between signed and unsigned is the same as U2T and T2U:
    int tx, ty;
    unsigned ux, uy;
    tx = (int) ux;
    uy = (unsigned) ty;
▪ Implicit casting also occurs via assignments and procedure calls:
    tx = ux;
    uy = ty;

Casting Surprises
Expression evaluation:
▪ If a single expression mixes unsigned and signed values, the signed values are implicitly cast to unsigned
▪ This includes the comparison operations <, >, ==, <=, >=
Examples for w = 32 (TMin = -2,147,483,648, TMax = 2,147,483,647); the Relation column shows which comparison holds, and the Evaluation column shows whether it is evaluated signed or unsigned (a short C sketch appears at the end of this section):
    Constant 1       Constant 2       Relation   Evaluation
    0                0U               ==         unsigned
    -1               0                <          signed
    -1               0U               >          unsigned
    2147483647       -2147483647-1    >          signed
    2147483647U      -2147483647-1    <          unsigned
    -1               -2               >          signed
    (unsigned) -1    -2               >          unsigned
    2147483647       2147483648U      <          unsigned

Summary: Casting Signed ↔ Unsigned, Basic Rules
▪ The bit pattern is maintained but reinterpreted
▪ Can have unexpected effects: adding or subtracting 2^w
▪ In an expression containing both signed and unsigned int, the int is cast to unsigned!

Sign Extension
Task: given a w-bit signed integer x, convert it to a (w+k)-bit integer with the same value.
Rule: make k copies of the sign bit: X' = x_{w−1}, ..., x_{w−1}, x_{w−1}, x_{w−2}, ..., x_0 (k copies of the MSB).

Sign Extension Example
    short int x = 15213;    int ix = (int) x;
    short int y = -15213;   int iy = (int) y;
    x  =  15213   hex 3B 6D         binary 00111011 01101101
    ix =  15213   hex 00 00 3B 6D   binary 00000000 00000000 00111011 01101101
    y  = -15213   hex C4 93         binary 11000100 10010011
    iy = -15213   hex FF FF C4 93   binary 11111111 11111111 11000100 10010011
When converting from a smaller to a larger signed integer data type, C automatically performs sign extension.

Summary: Expanding, Truncating — Basic Rules
Expanding (e.g., short int to int):
▪ Unsigned: zeros added; signed: sign extension; both yield the expected result
Truncating (e.g., unsigned to unsigned short):
▪ Unsigned/signed: bits are truncated and the result reinterpreted
▪ Unsigned: mod operation; signed: similar to mod

Unsigned Addition
Operands: w bits, u + v. True sum: w+1 bits. Discard the carry: w bits, UAddw(u, v).
The standard addition function ignores the carry output and implements modular arithmetic: s = UAddw(u, v) = (u + v) mod 2^w.
(Figure: for 4-bit operands the true sum Add4(u, v) increases linearly with u and v and forms a planar surface; the modular sum UAdd4(u, v) wraps around, at most once, whenever the true sum >= 2^w.)

Two's Complement Addition
Operands: w bits, u + v. True sum: w+1 bits. Discard the carry: w bits, TAddw(u, v).
TAdd and UAdd have identical bit-level behavior. Signed vs. unsigned addition in C:
    int s, t, u, v;
    s = (int) ((unsigned) u + (unsigned) v);
    t = u + v;
will give s == t.
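Before moving on to overflow, here is the short C sketch promised above (added for illustration, not from the slides; it assumes a 32-bit int). It shows two of the surprise comparisons from the table and the sign extension of -15213 from short to int.

    #include <stdio.h>

    int main(void)
    {
        /* Mixing signed and unsigned: the signed operand is converted to
           unsigned, so -1 becomes UMax and the comparisons go the "wrong" way. */
        printf("-1 < 0U           : %d\n", -1 < 0U);                     /* 0 */
        printf("2147483647U < TMin: %d\n", 2147483647U < -2147483647 - 1); /* 1 */

        /* Widening a signed short sign-extends, preserving the value. */
        short sy = -15213;
        int   iy = sy;
        printf("iy = %d (0x%08X)\n", iy, (unsigned) iy);   /* -15213 (0xFFFFC493) */
        return 0;
    }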
TAdd Overflow
Functionality: the true sum requires w+1 bits; drop the extra bit and treat the remaining bits as a two's-complement integer. When the true sum falls outside the representable range, the result wraps:
▪ Positive overflow (PosOver): if u + v >= 2^(w−1), the result is u + v − 2^w (it becomes negative)
▪ Negative overflow (NegOver): if u + v < −2^(w−1), the result is u + v + 2^w (it becomes positive)
(Figure: for 4-bit two's-complement values, which range from -8 to +7, TAdd4(u, v) wraps around at most once in each direction.)

Multiplication
Goal: compute the product of two w-bit numbers x and y, either signed or unsigned. Exact results can be bigger than w bits:
▪ Unsigned: up to 2w bits; result range 0 <= x * y <= (2^w − 1)^2 = 2^(2w) − 2^(w+1) + 1
▪ Two's complement, minimum (negative): up to 2w − 1 bits; result range x * y >= (−2^(w−1)) * (2^(w−1) − 1) = −2^(2w−2) + 2^(w−1)
▪ Two's complement, maximum (positive): up to 2w bits, but only for (TMinw)^2; result range x * y <= (−2^(w−1))^2 = 2^(2w−2)
Maintaining exact results would require expanding the word size with each product computed; this is done in software if needed, e.g. by "arbitrary precision" arithmetic packages.

Unsigned Multiplication in C
Operands: w bits. True product: 2w bits, u · v. Discard the high w bits to get UMultw(u, v).
The standard multiplication function ignores the high-order w bits and implements modular arithmetic: UMultw(u, v) = (u · v) mod 2^w.

Signed Multiplication in C
Operands: w bits. True product: 2w bits. Discard the high w bits to get TMultw(u, v).
The standard multiplication function ignores the high-order w bits, some of which differ between signed and unsigned multiplication; the lower bits are the same.

Power-of-2 Multiply with Shift
u << k gives u * 2^k, for both signed and unsigned operands (true product w+k bits; discard k bits to get UMultw(u, 2^k) or TMultw(u, 2^k)).
Examples:
▪ u << 3 == u * 8
▪ (u << 5) − (u << 3) == u * 24
Most machines shift and add faster than they multiply; the compiler generates this code automatically.

Unsigned Power-of-2 Divide with Shift
u >> k gives ⌊u / 2^k⌋; uses a logical shift.

Signed Power-of-2 Divide with Shift
x >> k gives ⌊x / 2^k⌋; uses an arithmetic shift. This rounds in the wrong direction when x < 0.

Correct Power-of-2 Divide
For the quotient of a negative number by a power of 2, we want ⌈x / 2^k⌉ (round toward 0). Compute it as ⌊(x + 2^k − 1) / 2^k⌋; in C: (x + (1 << k) - 1) >> k. This biases the dividend toward 0 (see the sketch at the end of this section).
▪ Case 1, no rounding needed: the low k bits of x are all 0, so adding the bias 2^k − 1 only sets bits that the shift discards; biasing has no effect.
▪ Case 2, rounding: the carry out of the low k bits increments the kept bits, so biasing adds 1 to the final result, giving round-toward-zero behavior.

Arithmetic: Basic Rules
Addition:
▪ Unsigned/signed: normal addition followed by truncation; the same operation at the bit level
▪ Unsigned: addition mod 2^w — mathematical addition plus a possible subtraction of 2^w
▪ Signed: modified addition mod 2^w (result in the proper range) — mathematical addition plus a possible addition or subtraction of 2^w
Multiplication:
▪ Unsigned/signed: normal multiplication followed by truncation; the same operation at the bit level
▪ Unsigned: multiplication mod 2^w
▪ Signed: modified multiplication mod 2^w (result in the proper range)
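The biased divide above is the detail most worth testing. Below is a small C sketch (illustrative, not from the lecture; the helper name div_pow2 is made up) comparing a plain arithmetic shift with the biased form; the biased form matches C's / operator, which rounds toward zero. Right-shifting a negative value is implementation-defined, but typical compilers shift arithmetically as shown.

    #include <stdio.h>

    /* Divide x by 2^k rounding toward zero, applying the bias 2^k - 1
       to negative dividends as described above. */
    static int div_pow2(int x, int k)
    {
        int bias = (1 << k) - 1;
        return (x < 0 ? x + bias : x) >> k;
    }

    int main(void)
    {
        printf("%d\n", -15213 >> 4);          /* -951: plain shift rounds away from zero */
        printf("%d\n", div_pow2(-15213, 4));  /* -950: matches -15213 / 16 */
        printf("%d\n", div_pow2(15213, 4));   /*  950 */
        printf("%d\n", 15213 << 3);           /*  121704 = 15213 * 8 */
        return 0;
    }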
FortiConverter Migration Tool Datasheet

DATA SHEET: FortiConverter™ — Multi-vendor Configuration Conversion for FortiOS
(FortiGuard Security Services; FortiCare Worldwide 24x7 Support)

Major Features
§ Allows migration to FortiOS solutions.
§ Eases the pain of vendor transition.
§ Translation of complex policy sets.
§ Removal of historical configuration errors.
§ Automatic validation of new configurations.
Whether in timelines, costs, or manpower, FortiConverter provides substantial advantages.

Configuration Validation
Fortinet believes that transitioning to next-generation security platforms should be as seamless as possible. For this reason, we have developed the FortiConverter software solution.
Configuration changes can introduce errors, which accumulate over time. But because these errors have not caused problems, they are often missed or overlooked in a large, complex configuration deployed across an organization. With FortiConverter, validation of these configurations is a simple process. Manually inspecting for errors can take hundreds of hours of expert analysis, which is cost-prohibitive, even if available. With FortiConverter, the software identifies and then removes incorrect or redundant configuration elements during the conversion process.

Key Features & Benefits
Multi-vendor Support — Conversion from Alcatel-Lucent, Cisco, Juniper, Check Point, Palo Alto Networks, and SonicWall. A single tool for converting from all the supported vendors.
Automated Conversion — Configuration conversion is performed according to conversion rules automatically, with a small amount of fine tuning to complete the process. Human error in the conversion process is minimized.
Error Correction — With many complex configurations, errors creep in over time. FortiConverter identifies these errors so that the new platform can operate at maximum efficiency at the required level of security. Also avoids copying unneeded objects into the new configuration.
Full Support — The FortiConverter standard license gives access to all vendors and supports configuration conversion of any size and complexity.
FortiGate to FortiGate — Can migrate configurations between FortiGate devices to minimize the risk associated with network upgrades. Provides the ability to split VDOMs into individual config files. Facilitates migration when the source platform is not supported in 5.x and the target platform is not supported in 4.x. This feature is available with the trial license.

TUNING FEATURE MATRIX
Base network objects — Interface (Interface, Zone), Address (Subnet, Range, FQDN, Group), Service (TCP/UDP, ICMP, Other, Group), Schedule (Once, Recur, Group), and NAT (VIP, IP Pool) — all support Show, Edit, Add, Delete, Cascade Update, Policy Locate, and Policy Tooltip.
Policies support Show, Edit, Add, Delete, Filter, Reorder, and Edit/Add.
NAT policies (Source NAT, Destination NAT, Static NAT, NAT Rule) support Show and Policy Locate.
ORDER INFORMATION
Product: FortiConverter — SKU: FC-10-CON01-401-01-12 — 1 Year Multi-vendor Configuration Conversion Tool (requires MS Windows) to create FortiOS configuration files.
Product: FortiConverter — SKU: FC-10-CON01-401-02-12 — 1 Year Renewal Multi-vendor Configuration Conversion Tool (requires MS Windows) to create FortiOS configuration files.
BRIDGE-MIB

BRIDGE-MIB DEFINITIONS ::= BEGINIMPORTSCounter,TimeTicksFROM RFC1155-SMImib-2FROM RFC1213-MIBOBJECT-TYPEFROM RFC-1212TRAP-TYPEFROM RFC-1215;-- All representations of MAC addresses in this MIB Module -- use, as a textual convention (i.e. this convention does -- not affect their encoding), the data type:MacAddress ::= OCTET STRING (SIZE (6))-- a 6 octet address-- in the-- "canonical"-- order-- defined by IEEE 802.1a, i.e., as if it were transmitted -- least significant bit first, even though 802.5 (in-- contrast to other n802.x protocols) requires MAC-- addresses to be transmitted most significant bit first. ---- 16-bit addresses, if needed, are represented by setting -- their upper 4 octets to all 0's, i.e., AAFF would be-- represented as 00000000AAFF.-- Similarly, all representations of Bridge-Id in this MIB -- Module use, as a textual convention (i.e. this-- convention does not affect their encoding), the data-- type:BridgeId ::= OCTET STRING (SIZE (8))-- the-- Bridge-Identifier-- as used in the-- Spanning Tree-- Protocol to uniquely identify a bridge. Its first two-- octets (in network byte order) contain a priority-- value and its last 6 octets contain the MAC address-- used to refer to a bridge in a unique fashion-- (typically, the numerically smallest MAC address-- of all ports on the bridge).-- Several objects in this MIB module represent values of-- timers used by the Spanning Tree Protocol. In this-- MIB, these timers have values in units of hundreths of-- a second (i.e. 1/100 secs).-- These timers, when stored in a Spanning Tree Protocol's -- BPDU, are in units of 1/256 seconds. Note, however,-- that 802.1D-1990 specifies a settable granularity of-- no more than 1 second for these timers. To avoid-- ambiguity, a data type is defined here as a textual-- convention and all representation of these timers-- in this MIB module are defined using this data type. 
An -- algorithm is also defined for converting between the-- different units, to ensure a timer's value is not-- distorted by multiple conversions.-- The data type is:Timeout ::= INTEGER-- a STP timer in units of 1/100 seconds-- To convert a Timeout value into a value in units of-- 1/256 seconds, the following algorithm should be used:---- b = floor( (n * 256) / 100)---- where:-- floor = quotient [ignore remainder]-- n is the value in 1/100 second units-- b is the value in 1/256 second units---- To convert the value from 1/256 second units back to-- 1/100 seconds, the following algorithm should be used:---- n = ceiling( (b * 100) / 256)---- where:-- ceiling = quotient [if remainder is 0], or-- quotient + 1 [if remainder is non-zero]-- n is the value in 1/100 second units-- b is the value in 1/256 second units---- Note: it is important that the arithmetic operations are-- done in the order specified (i.e., multiply first, divide-- second).dot1dBridge OBJECT IDENTIFIER-- 1.3.6.1.2.1.17 -- ::= { mib-217 }-- groups in the Bridge MIBdot1dBase OBJECT IDENTIFIER-- 1.3.6.1.2.1.17.1 -- ::= { dot1dBridge1 }dot1dStp OBJECT IDENTIFIER-- 1.3.6.1.2.1.17.2 -- ::= { dot1dBridge2 }dot1dSr OBJECT IDENTIFIER-- 1.3.6.1.2.1.17.3 -- ::= { dot1dBridge3 }-- separately documenteddot1dTp OBJECT IDENTIFIER-- 1.3.6.1.2.1.17.4 -- ::= { dot1dBridge4 }dot1dStatic OBJECT IDENTIFIER-- 1.3.6.1.2.1.17.5 -- ::= { dot1dBridge5 }-- the dot1dBase group-- Implementation of the dot1dBase group is mandatory for all-- bridges.dot1dBaseBridgeAddress OBJECT-TYPESYNTAX MacAddressACCESS read-onlySTATUS mandatoryDESCRIPTION"The MAC address used by this bridge when it mustbe referred to in a unique fashion. It isrecommended that this be the numerically smallestMAC address of all ports that belong to thisbridge. However it is only required to be unique.When concatenated with dot1dStpPriority a uniqueBridgeIdentifier is formed which is used in theSpanning Tree Protocol."REFERENCE "IEEE 802.1D-1990: Sections 6.4.1.1.3 and 3.12.5" -- 1.3.6.1.2.1.17.1.1 -- ::= { dot1dBase1 }dot1dBaseNumPorts OBJECT-TYPESYNTAX INTEGERACCESS read-onlySTATUS mandatoryDESCRIPTION"The number of ports controlled by this bridgingentity."REFERENCE "IEEE 802.1D-1990: Section 6.4.1.1.3"-- 1.3.6.1.2.1.17.1.2 -- ::= { dot1dBase2 }dot1dBaseType OBJECT-TYPESYNTAX INTEGER {unknown(1),transparent-only(2),sourceroute-only(3),srt(4) }ACCESS read-onlySTATUS mandatoryDESCRIPTION"Indicates what type of bridging this bridge canperform. 
If a bridge is actually performing acertain type of bridging this will be indicated byentries in the port table for the given type."-- 1.3.6.1.2.1.17.1.3 -- ::= { dot1dBase3 }-- The Generic Bridge Port Tabledot1dBasePortTable OBJECT-TYPESYNTAX SEQUENCE OF Dot1dBasePortEntryACCESS not-accessibleSTATUS mandatoryDESCRIPTION"A table that contains generic information aboutevery port that is associated with this bridge.Transparent, source-route, and srt ports areincluded."-- 1.3.6.1.2.1.17.1.4 -- ::= { dot1dBase4 }dot1dBasePortEntry OBJECT-TYPESYNTAX Dot1dBasePortEntryACCESS not-accessibleSTATUS mandatoryDESCRIPTION"A list of information for each port of thebridge."REFERENCE "IEEE 802.1D-1990: Section 6.4.2, 6.6.1"INDEX {dot1dBasePort}-- 1.3.6.1.2.1.17.1.4.1 -- ::= { dot1dBasePortTable1 }Dot1dBasePortEntry ::= SEQUENCE {dot1dBasePort INTEGER,dot1dBasePortIfIndex INTEGER,dot1dBasePortCircuit OBJECT IDENTIFIER,dot1dBasePortDelayExceededDiscards Counter,dot1dBasePortMtuExceededDiscards Counter}dot1dBasePort OBJECT-TYPESYNTAX INTEGER (1..65535)ACCESS read-onlySTATUS mandatoryDESCRIPTION"The port number of the port for which this entrycontains bridge management information."-- 1.3.6.1.2.1.17.1.4.1.1 -- ::= { dot1dBasePortEntry1 }dot1dBasePortIfIndex OBJECT-TYPESYNTAX INTEGERACCESS read-onlySTATUS mandatoryDESCRIPTION"The value of the instance of the ifIndex object,defined in MIB-II, for the interface correspondingto this port."-- 1.3.6.1.2.1.17.1.4.1.2 -- ::= { dot1dBasePortEntry2 }dot1dBasePortCircuit OBJECT-TYPESYNTAX OBJECT IDENTIFIERACCESS read-onlySTATUS mandatoryDESCRIPTION"For a port which (potentially) has the same valueof dot1dBasePortIfIndex as another port on thesame bridge, this object contains the name of anobject instance unique to this port. For example,in the case where multiple ports correspond one-to-one with multiple X.25 virtual circuits, thisvalue might identify an (e.g., the first) objectinstance associated with the X.25 virtual circuitcorresponding to this port.For a port which has a unique value ofdot1dBasePortIfIndex, this object can have thevalue { 0 0 }."-- 1.3.6.1.2.1.17.1.4.1.3 -- ::= { dot1dBasePortEntry3 }dot1dBasePortDelayExceededDiscards OBJECT-TYPESYNTAX CounterACCESS read-onlySTATUS mandatoryDESCRIPTION"The number of frames discarded by this port dueto excessive transit delay through the bridge. Itis incremented by both transparent and sourceroute bridges."REFERENCE "IEEE 802.1D-1990: Section 6.6.1.1.3" -- 1.3.6.1.2.1.17.1.4.1.4 -- ::= { dot1dBasePortEntry4 }dot1dBasePortMtuExceededDiscards OBJECT-TYPESYNTAX CounterACCESS read-onlySTATUS mandatoryDESCRIPTION"The number of frames discarded by this port dueto an excessive size. It is incremented by bothtransparent and source route bridges."REFERENCE "IEEE 802.1D-1990: Section 6.6.1.1.3" -- 1.3.6.1.2.1.17.1.4.1.5 -- ::= { dot1dBasePortEntry5 }-- the dot1dStp group-- Implementation of the dot1dStp group is optional. It is-- implemented by those bridges that support the Spanning Tree-- Protocol.dot1dStpProtocolSpecification OBJECT-TYPESYNTAX INTEGER {unknown(1),decLb100(2),ieee8021d(3) }ACCESS read-onlySTATUS mandatoryDESCRIPTION"An indication of what version of the SpanningTree Protocol is being run. The value'decLb100(2)' indicates the DEC LANbridge 100Spanning Tree protocol. IEEE 802.1dimplementations will return 'ieee8021d(3)'. 
Iffuture versions of the IEEE Spanning Tree Protocolare released that are incompatible with thecurrent version a new value will be defined."-- 1.3.6.1.2.1.17.2.1 -- ::= { dot1dStp1 }dot1dStpPriority OBJECT-TYPESYNTAX INTEGER (0..65535)ACCESS read-writeSTATUS mandatoryDESCRIPTION"The value of the write-able portion of the BridgeID, i.e., the first two octets of the (8 octetlong) Bridge ID. The other (last) 6 octets of theBridge ID are given by the value ofdot1dBaseBridgeAddress."REFERENCE "IEEE 802.1D-1990: Section 4.5.3.7"-- 1.3.6.1.2.1.17.2.2 -- ::= { dot1dStp2 }dot1dStpTimeSinceTopologyChange OBJECT-TYPESYNTAX TimeTicksACCESS read-onlySTATUS mandatoryDESCRIPTION"The time (in hundredths of a second) since thelast time a topology change was detected by thebridge entity."REFERENCE "IEEE 802.1D-1990: Section 6.8.1.1.3" -- 1.3.6.1.2.1.17.2.3 -- ::= { dot1dStp3 }dot1dStpTopChanges OBJECT-TYPESYNTAX CounterACCESS read-onlySTATUS mandatoryDESCRIPTION"The total number of topology changes detected bythis bridge since the management entity was lastreset or initialized."REFERENCE "IEEE 802.1D-1990: Section 6.8.1.1.3" -- 1.3.6.1.2.1.17.2.4 -- ::= { dot1dStp4 }dot1dStpDesignatedRoot OBJECT-TYPESYNTAX BridgeIdACCESS read-onlySTATUS mandatoryDESCRIPTION"The bridge identifier of the root of the spanningtree as determined by the Spanning Tree Protocolas executed by this node. This value is used asthe Root Identifier parameter in all ConfigurationBridge PDUs originated by this node."REFERENCE "IEEE 802.1D-1990: Section 4.5.3.1"-- 1.3.6.1.2.1.17.2.5 -- ::= { dot1dStp5 }dot1dStpRootCost OBJECT-TYPESYNTAX INTEGERACCESS read-onlySTATUS mandatoryDESCRIPTION"The cost of the path to the root as seen fromthis bridge."REFERENCE "IEEE 802.1D-1990: Section 4.5.3.2"-- 1.3.6.1.2.1.17.2.6 -- ::= { dot1dStp6 }dot1dStpRootPort OBJECT-TYPESYNTAX INTEGERACCESS read-onlySTATUS mandatoryDESCRIPTION"The port number of the port which offers thelowest cost path from this bridge to the rootbridge."REFERENCE "IEEE 802.1D-1990: Section 4.5.3.3"-- 1.3.6.1.2.1.17.2.7 -- ::= { dot1dStp7 }dot1dStpMaxAge OBJECT-TYPESYNTAX TimeoutACCESS read-onlySTATUS mandatoryDESCRIPTION"The maximum age of Spanning Tree Protocolinformation learned from the network on any portbefore it is discarded, in units of hundredths ofa second. This is the actual value that thisbridge is currently using."REFERENCE "IEEE 802.1D-1990: Section 4.5.3.4" -- 1.3.6.1.2.1.17.2.8 -- ::= { dot1dStp8 }dot1dStpHelloTime OBJECT-TYPESYNTAX TimeoutACCESS read-onlySTATUS mandatoryDESCRIPTION"The amount of time between the transmission ofConfiguration bridge PDUs by this node on any portwhen it is the root of the spanning tree or tryingto become so, in units of hundredths of a second.This is the actual value that this bridge iscurrently using."REFERENCE "IEEE 802.1D-1990: Section 4.5.3.5" -- 1.3.6.1.2.1.17.2.9 -- ::= { dot1dStp9 }dot1dStpHoldTime OBJECT-TYPESYNTAX INTEGERACCESS read-onlySTATUS mandatoryDESCRIPTION"This time value determines the interval lengthduring which no more than two Configuration bridgePDUs shall be transmitted by this node, in unitsof hundredths of a second."REFERENCE "IEEE 802.1D-1990: Section 4.5.3.14" -- 1.3.6.1.2.1.17.2.10 -- ::= { dot1dStp10 }dot1dStpForwardDelay OBJECT-TYPESYNTAX TimeoutACCESS read-onlySTATUS mandatoryDESCRIPTION"This time value, measured in units of hundredthsof a second, controls how fast a port changes itsspanning state when moving towards the Forwardingstate. 
The value determines how long the portstays in each of the Listening and Learningstates, which precede the Forwarding state. Thisvalue is also used, when a topology change hasbeen detected and is underway, to age all dynamicentries in the Forwarding Database. [Note thatthis value is the one that this bridge iscurrently using, in contrast todot1dStpBridgeForwardDelay which is the value thatthis bridge and all others would start usingif/when this bridge were to become the root.]"REFERENCE "IEEE 802.1D-1990: Section 4.5.3.6" -- 1.3.6.1.2.1.17.2.11 -- ::= { dot1dStp11 }dot1dStpBridgeMaxAge OBJECT-TYPESYNTAX Timeout (600..4000)ACCESS read-writeSTATUS mandatoryDESCRIPTION"The value that all bridges use for MaxAge whenthis bridge is acting as the root. Note that802.1D-1990 specifies that the range for thisparameter is related to the value ofdot1dStpBridgeHelloTime. The granularity of thistimer is specified by 802.1D-1990 to be 1 second.An agent may return a badValue error if a set isattempted to a value which is not a whole numberof seconds."REFERENCE "IEEE 802.1D-1990: Section 4.5.3.8" -- 1.3.6.1.2.1.17.2.12 -- ::= { dot1dStp12 }dot1dStpBridgeHelloTime OBJECT-TYPESYNTAX Timeout (100..1000)ACCESS read-writeSTATUS mandatoryDESCRIPTION"The value that all bridges use for HelloTime whenthis bridge is acting as the root. Thegranularity of this timer is specified by 802.1D-1990 to be 1 second. An agent may return abadValue error if a set is attempted to a valuewhich is not a whole number of seconds."REFERENCE "IEEE 802.1D-1990: Section 4.5.3.9" -- 1.3.6.1.2.1.17.2.13 -- ::= { dot1dStp13 }dot1dStpBridgeForwardDelay OBJECT-TYPESYNTAX Timeout (400..3000)ACCESS read-writeSTATUS mandatoryDESCRIPTION"The value that all bridges use for ForwardDelaywhen this bridge is acting as the root. Note that802.1D-1990 specifies that the range for thisparameter is related to the value ofdot1dStpBridgeMaxAge. 
The granularity of thistimer is specified by 802.1D-1990 to be 1 second.An agent may return a badValue error if a set isattempted to a value which is not a whole numberof seconds."REFERENCE "IEEE 802.1D-1990: Section 4.5.3.10" -- 1.3.6.1.2.1.17.2.14 -- ::= { dot1dStp14 }-- The Spanning Tree Port Tabledot1dStpPortTable OBJECT-TYPESYNTAX SEQUENCE OF Dot1dStpPortEntryACCESS not-accessibleSTATUS mandatoryDESCRIPTION"A table that contains port-specific informationfor the Spanning Tree Protocol."-- 1.3.6.1.2.1.17.2.15 -- ::= { dot1dStp15 }dot1dStpPortEntry OBJECT-TYPESYNTAX Dot1dStpPortEntryACCESS not-accessibleSTATUS mandatoryDESCRIPTION"A list of information maintained by every portabout the Spanning Tree Protocol state for thatport."INDEX {dot1dStpPort}-- 1.3.6.1.2.1.17.2.15.1 -- ::= { dot1dStpPortTable1 }Dot1dStpPortEntry ::= SEQUENCE {dot1dStpPort INTEGER,dot1dStpPortPriority INTEGER,dot1dStpPortState INTEGER,dot1dStpPortEnable INTEGER,dot1dStpPortPathCost INTEGER,dot1dStpPortDesignatedRoot BridgeId,dot1dStpPortDesignatedCost INTEGER,dot1dStpPortDesignatedBridge BridgeId,dot1dStpPortDesignatedPort OCTET STRING,dot1dStpPortForwardTransitions Counter}dot1dStpPort OBJECT-TYPESYNTAX INTEGER (1..65535)ACCESS read-onlySTATUS mandatoryDESCRIPTION"The port number of the port for which this entrycontains Spanning Tree Protocol managementinformation."REFERENCE "IEEE 802.1D-1990: Section 6.8.2.1.2" -- 1.3.6.1.2.1.17.2.15.1.1 -- ::= { dot1dStpPortEntry1 }dot1dStpPortPriority OBJECT-TYPESYNTAX INTEGER (0..255)ACCESS read-writeSTATUS mandatoryDESCRIPTION"The value of the priority field which iscontained in the first (in network byte order)octet of the (2 octet long) Port ID. The otheroctet of the Port ID is given by the value ofdot1dStpPort."REFERENCE "IEEE 802.1D-1990: Section 4.5.5.1"-- 1.3.6.1.2.1.17.2.15.1.2 -- ::= { dot1dStpPortEntry2 }dot1dStpPortState OBJECT-TYPESYNTAX INTEGER {disabled(1),blocking(2),listening(3),learning(4),forwarding(5),broken(6) }ACCESS read-onlySTATUS mandatoryDESCRIPTION"The port's current state as defined byapplication of the Spanning Tree Protocol. Thisstate controls what action a port takes onreception of a frame. If the bridge has detecteda port that is malfunctioning it will place thatport into the broken(6) state. For ports whichare disabled (see dot1dStpPortEnable), this objectwill have a value of disabled(1)."REFERENCE "IEEE 802.1D-1990: Section 4.5.5.2"-- 1.3.6.1.2.1.17.2.15.1.3 -- ::= { dot1dStpPortEntry3 }dot1dStpPortEnable OBJECT-TYPESYNTAX INTEGER {enabled(1),disabled(2) }ACCESS read-writeSTATUS mandatoryDESCRIPTION"The enabled/disabled status of the port."REFERENCE "IEEE 802.1D-1990: Section 4.5.5.2"-- 1.3.6.1.2.1.17.2.15.1.4 -- ::= { dot1dStpPortEntry4 }dot1dStpPortPathCost OBJECT-TYPESYNTAX INTEGER (1..65535)ACCESS read-writeSTATUS mandatoryDESCRIPTION"The contribution of this port to the path cost ofpaths towards the spanning tree root which includethis port. 
802.1D-1990 recommends that thedefault value of this parameter be in inverseproportion to the speed of the attached LAN."REFERENCE "IEEE 802.1D-1990: Section 4.5.5.3" -- 1.3.6.1.2.1.17.2.15.1.5 -- ::= { dot1dStpPortEntry5 }dot1dStpPortDesignatedRoot OBJECT-TYPESYNTAX BridgeIdACCESS read-onlySTATUS mandatoryDESCRIPTION"The unique Bridge Identifier of the Bridgerecorded as the Root in the Configuration BPDUstransmitted by the Designated Bridge for thesegment to which the port is attached."REFERENCE "IEEE 802.1D-1990: Section 4.5.5.4" -- 1.3.6.1.2.1.17.2.15.1.6 -- ::= { dot1dStpPortEntry6 }dot1dStpPortDesignatedCost OBJECT-TYPESYNTAX INTEGERACCESS read-onlySTATUS mandatoryDESCRIPTION"The path cost of the Designated Port of thesegment connected to this port. This value iscompared to the Root Path Cost field in receivedbridge PDUs."REFERENCE "IEEE 802.1D-1990: Section 4.5.5.5" -- 1.3.6.1.2.1.17.2.15.1.7 -- ::= { dot1dStpPortEntry7 }dot1dStpPortDesignatedBridge OBJECT-TYPESYNTAX BridgeIdACCESS read-onlySTATUS mandatoryDESCRIPTION"The Bridge Identifier of the bridge which thisport considers to be the Designated Bridge forthis port's segment."REFERENCE "IEEE 802.1D-1990: Section 4.5.5.6" -- 1.3.6.1.2.1.17.2.15.1.8 -- ::= { dot1dStpPortEntry8 }dot1dStpPortDesignatedPort OBJECT-TYPESYNTAX OCTET STRING (SIZE (2))ACCESS read-onlySTATUS mandatoryDESCRIPTION"The Port Identifier of the port on the Designated Bridge for this port's segment."REFERENCE "IEEE 802.1D-1990: Section 4.5.5.7" -- 1.3.6.1.2.1.17.2.15.1.9 -- ::= { dot1dStpPortEntry9 }dot1dStpPortForwardTransitions OBJECT-TYPESYNTAX CounterACCESS read-onlySTATUS mandatoryDESCRIPTION"The number of times this port has transitionedfrom the Learning state to the Forwarding state."-- 1.3.6.1.2.1.17.2.15.1.10 -- ::= { dot1dStpPortEntry10 }-- the dot1dTp group-- Implementation of the dot1dTp group is optional. It is-- implemented by those bridges that support the transparent-- bridging mode. A transparent or SRT bridge will implement-- this group.dot1dTpLearnedEntryDiscards OBJECT-TYPESYNTAX CounterACCESS read-onlySTATUS mandatoryDESCRIPTION"The total number of Forwarding Database entries,which have been or would have been learnt, buthave been discarded due to a lack of space tostore them in the Forwarding Database. If thiscounter is increasing, it indicates that theForwarding Database is regularly becoming full (acondition which has unpleasant performance effectson the subnetwork). If this counter has asignificant value but is not presently increasing,it indicates that the problem has been occurringbut is not persistent."REFERENCE "IEEE 802.1D-1990: Section 6.7.1.1.3"-- 1.3.6.1.2.1.17.4.1 -- ::= { dot1dTp1 }dot1dTpAgingTime OBJECT-TYPESYNTAX INTEGER (10..1000000)ACCESS read-writeSTATUS mandatoryDESCRIPTION"The timeout period in seconds for aging outdynamically learned forwarding information.802.1D-1990 recommends a default of 300 seconds."REFERENCE "IEEE 802.1D-1990: Section 6.7.1.1.3"-- 1.3.6.1.2.1.17.4.2 -- ::= { dot1dTp2 }-- The Forwarding Database for Transparent Bridgesdot1dTpFdbTable OBJECT-TYPESYNTAX SEQUENCE OF Dot1dTpFdbEntryACCESS not-accessibleSTATUS mandatoryDESCRIPTION"A table that contains information about unicastentries for which the bridge has forwarding and/orfiltering information. 
This information is usedby the transparent bridging function indetermining how to propagate a received frame."-- 1.3.6.1.2.1.17.4.3 -- ::= { dot1dTp3 }dot1dTpFdbEntry OBJECT-TYPESYNTAX Dot1dTpFdbEntryACCESS not-accessibleSTATUS mandatoryDESCRIPTION"Information about a specific unicast MAC addressfor which the bridge has some forwarding and/orfiltering information."INDEX {dot1dTpFdbAddress}-- 1.3.6.1.2.1.17.4.3.1 -- ::= { dot1dTpFdbTable1 }Dot1dTpFdbEntry ::= SEQUENCE {dot1dTpFdbAddress MacAddress,dot1dTpFdbPort INTEGER,dot1dTpFdbStatus INTEGER}dot1dTpFdbAddress OBJECT-TYPESYNTAX MacAddressACCESS read-onlySTATUS mandatoryDESCRIPTION"A unicast MAC address for which the bridge hasforwarding and/or filtering information."REFERENCE "IEEE 802.1D-1990: Section 3.9.1, 3.9.2" -- 1.3.6.1.2.1.17.4.3.1.1 -- ::= { dot1dTpFdbEntry1 }dot1dTpFdbPort OBJECT-TYPESYNTAX INTEGERACCESS read-onlySTATUS mandatoryDESCRIPTION"Either the value '0', or the port number of theport on which a frame having a source addressequal to the value of the corresponding instanceof dot1dTpFdbAddress has been seen. A value of'0' indicates that the port number has not beenlearned but that the bridge does have someforwarding/filtering information about thisaddress (e.g. in the dot1dStaticTable).Implementors are encouraged to assign the portvalue to this object whenever it is learned evenfor addresses for which the corresponding value of dot1dTpFdbStatus is not learned(3)."-- 1.3.6.1.2.1.17.4.3.1.2 -- ::= { dot1dTpFdbEntry2 }dot1dTpFdbStatus OBJECT-TYPESYNTAX INTEGER {other(1),invalid(2),learned(3),self(4),mgmt(5) }ACCESS read-onlySTATUS mandatoryDESCRIPTION"The status of this entry. The meanings of thevalues are:other(1) : none of the following. This would include the case where some otherMIB object (not the correspondinginstance of dot1dTpFdbPort, nor an entry in the dot1dStaticTable) isbeing used to determine if and how frames addressed to the value ofthe corresponding instance ofdot1dTpFdbAddress are beingforwarded.invalid(2) : this entry is not longer valid(e.g., it was learned but has since aged-out), but has not yet beenflushed from the table.learned(3) : the value of the correspondinginstance of dot1dTpFdbPort waslearned, and is being used.self(4) : the value of the correspondinginstance of dot1dTpFdbAddressrepresents one of the bridge'saddresses. 
The correspondinginstance of dot1dTpFdbPortindicates which of the bridge'sports has this address.mgmt(5) : the value of the correspondinginstance of dot1dTpFdbAddress isalso the value of an existinginstance of dot1dStaticAddress."-- 1.3.6.1.2.1.17.4.3.1.3 -- ::= { dot1dTpFdbEntry3 }-- Port Table for Transparent Bridgesdot1dTpPortTable OBJECT-TYPESYNTAX SEQUENCE OF Dot1dTpPortEntryACCESS not-accessibleSTATUS mandatoryDESCRIPTION"A table that contains information about everyport that is associated with this transparentbridge."-- 1.3.6.1.2.1.17.4.4 -- ::= { dot1dTp4 }dot1dTpPortEntry OBJECT-TYPESYNTAX Dot1dTpPortEntryACCESS not-accessibleSTATUS mandatoryDESCRIPTION"A list of information for each port of atransparent bridge."INDEX {dot1dTpPort}-- 1.3.6.1.2.1.17.4.4.1 -- ::= { dot1dTpPortTable1 }Dot1dTpPortEntry ::= SEQUENCE {dot1dTpPort INTEGER,dot1dTpPortMaxInfo INTEGER,dot1dTpPortInFrames Counter,dot1dTpPortOutFrames Counter,dot1dTpPortInDiscards Counter}dot1dTpPort OBJECT-TYPESYNTAX INTEGER (1..65535)ACCESS read-onlySTATUS mandatoryDESCRIPTION"The port number of the port for which this entrycontains Transparent bridging managementinformation."-- 1.3.6.1.2.1.17.4.4.1.1 -- ::= { dot1dTpPortEntry1 }-- It would be nice if we could use ifMtu as the size of the-- largest INFO field, but we can't because ifMtu is defined-- to be the size that the (inter-)network layer can use which-- can differ from the MAC layer (especially if several layers-- of encapsulation are used).dot1dTpPortMaxInfo OBJECT-TYPESYNTAX INTEGERACCESS read-onlySTATUS mandatoryDESCRIPTION"The maximum size of the INFO (non-MAC) field thatthis port will receive or transmit."-- 1.3.6.1.2.1.17.4.4.1.2 -- ::= { dot1dTpPortEntry2 }dot1dTpPortInFrames OBJECT-TYPESYNTAX CounterACCESS read-onlySTATUS mandatoryDESCRIPTION"The number of frames that have been received bythis port from its segment. Note that a framereceived on the interface corresponding to thisport is only counted by this object if and only ifit is for a protocol being processed by the localbridging function, including bridge managementframes."REFERENCE "IEEE 802.1D-1990: Section 6.6.1.1.3" -- 1.3.6.1.2.1.17.4.4.1.3 -- ::= { dot1dTpPortEntry3 }dot1dTpPortOutFrames OBJECT-TYPESYNTAX CounterACCESS read-onlySTATUS mandatoryDESCRIPTION"The number of frames that have been transmittedby this port to its segment. Note that a frametransmitted on the interface corresponding to thisport is only counted by this object if and only ifit is for a protocol being processed by the localbridging function, including bridge managementframes."REFERENCE "IEEE 802.1D-1990: Section 6.6.1.1.3" -- 1.3.6.1.2.1.17.4.4.1.4 -- ::= { dot1dTpPortEntry4 }dot1dTpPortInDiscards OBJECT-TYPESYNTAX CounterACCESS read-onlySTATUS mandatoryDESCRIPTION"Count of valid frames received which werediscarded (i.e., filtered) by the ForwardingProcess."REFERENCE "IEEE 802.1D-1990: Section 6.6.1.1.3" -- 1.3.6.1.2.1.17.4.4.1.5 -- ::= { dot1dTpPortEntry5 }-- The Static (Destination-Address Filtering) Database-- Implementation of this group is optional.dot1dStaticTable OBJECT-TYPESYNTAX SEQUENCE OF Dot1dStaticEntryACCESS not-accessibleSTATUS mandatoryDESCRIPTION"A table containing filtering informationconfigured into the bridge by (local or network)management specifying the set of ports to whichframes received from specific ports and containingspecific destination addresses are allowed to beforwarded. 
The value of zero in this table as theport number from which frames with a specificdestination address are received, is used tospecify all ports for which there is no specificentry in this table for that particulardestination address. Entries are valid forunicast and for group/broadcast addresses."REFERENCE "IEEE 802.1D-1990: Section 6.7.2"-- 1.3.6.1.2.1.17.5.1 -- ::= { dot1dStatic1 }dot1dStaticEntry OBJECT-TYPESYNTAX Dot1dStaticEntryACCESS not-accessibleSTATUS mandatoryDESCRIPTION"Filtering information configured into the bridgeby (local or network) management specifying theset of ports to which frames received from aspecific port and containing a specificdestination address are allowed to be forwarded."REFERENCE "IEEE 802.1D-1990: Section 6.7.2"INDEX {dot1dStaticAddress,dot1dStaticReceivePort}-- 1.3.6.1.2.1.17.5.1.1 -- ::= { dot1dStaticTable1 }Dot1dStaticEntry ::= SEQUENCE {dot1dStaticAddress MacAddress,dot1dStaticReceivePort INTEGER,dot1dStaticAllowedToGoTo OCTET STRING,dot1dStaticStatus INTEGER}。
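The Timeout textual convention near the top of this module spells out how to convert between the MIB's 1/100-second units and the BPDU's 1/256-second units: multiply first, then divide, flooring in one direction and taking the ceiling in the other. A small C sketch of that arithmetic (the helper names are hypothetical, chosen only for this example) might look like:

    #include <stdio.h>

    /* Timeout (1/100 s units) -> BPDU units (1/256 s): b = floor((n * 256) / 100). */
    static long hundredths_to_256ths(long n)
    {
        return (n * 256) / 100;        /* integer division floors for n >= 0 */
    }

    /* BPDU units (1/256 s) -> Timeout (1/100 s units): n = ceiling((b * 100) / 256). */
    static long hundredths_from_256ths(long b)
    {
        return (b * 100 + 255) / 256;  /* adding (divisor - 1) yields the ceiling */
    }

    int main(void)
    {
        long n = 2000;                             /* 20 seconds, e.g. a MaxAge value */
        long b = hundredths_to_256ths(n);          /* 5120 */
        printf("%ld/100 s -> %ld/256 s -> %ld/100 s\n",
               n, b, hundredths_from_256ths(b));   /* round-trips to 2000 */
        return 0;
    }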
Computer Systems

2.2
Integer Representations
Figure 2.17. For small (< 2^(w−1)) numbers, the conversion from unsigned to signed preserves the numeric value. Large (≥ 2^(w−1)) numbers are converted to negative values.
C supports a variety of integral data types—ones that represent finite ranges of integers. These are shown in Figures along with the ranges of values they can have for “typical” 32- and 64-bit machines.
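The exact ranges depend on the platform, and C exposes them through <limits.h>. A minimal program to print a few of them (added here for illustration, not part of the book text) is:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* Ranges of some integral types on this platform. */
        printf("char : %d .. %d\n", CHAR_MIN, CHAR_MAX);
        printf("short: %d .. %d\n", SHRT_MIN, SHRT_MAX);
        printf("int  : %d .. %d\n", INT_MIN, INT_MAX);
        printf("long : %ld .. %ld\n", LONG_MIN, LONG_MAX);
        printf("unsigned max      : %u\n", UINT_MAX);
        printf("unsigned long max : %lu\n", ULONG_MAX);
        return 0;
    }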
To summarize: for values x in the range 0 ≤ x < 2^(w−1), we have T2Uw(x) = x and U2Tw(x) = x. That is, numbers in this range have identical unsigned and two's-complement representations. For values outside of this range, the conversions either add or subtract 2^w.
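A short C sketch of this rule for w = 16 follows (added for illustration; converting an out-of-range unsigned value back to short is strictly implementation-defined, though common platforms wrap as shown):

    #include <stdio.h>

    int main(void)
    {
        /* Within 0 .. 2^15 - 1 the two interpretations agree. */
        short x = 15213;
        printf("T2U16(%d)  = %u\n", x, (unsigned) (unsigned short) x);   /* 15213 */

        /* Outside that range they differ by exactly 2^16 = 65536. */
        short y = -15213;
        printf("T2U16(%d) = %u\n", y, (unsigned) (unsigned short) y);    /* 50323 */

        unsigned short u = 50323;
        printf("U2T16(%u) = %d\n", (unsigned) u, (int) (short) u);       /* -15213 */
        return 0;
    }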
Omega SPRTX-SS Series Product Datasheet

SPRTX-SS SERIES
Industrial RTD Connector/Transmitter

The OMEGA® 2-wire SPRTX-SS Series temperature transmitters are high performance, low cost, industrial transmitters designed for direct connection to most CIP (clean-in-place) sanitary Pt100 probes and sensors that incorporate an M12 style connector. All models feature an encapsulated micro-miniature signal conditioner built into a sealed connector housing. The signal conditioner converts the resistive change of a 100 Ω, 0.00385 RTD probe or sensor into an industry-standard 2-wire, 4 to 20 mA analog output across a dedicated temperature range. This analog output can be sent hundreds of feet away from the location of your sensor (probe) to an indicating device, controller, PLC, computer, data logger or chart recorder. Your SPRTX-SS temperature transmitter has been factory calibrated to provide maximum performance and requires no field adjustments.

Omega's SPRTX-SS transmitters are available in M12-1 and M12-2 models, allowing for more flexible international use. The SPRTX-SS1-M12-1 and SPRTX-SS2-M12-1 are designed for use commonly in North America. The SPRTX-SS1-M12-2 and SPRTX-SS2-M12-2 are geared towards European applications.

    Model No.          Description*                         Range
    SPRTX-SS1-M12-1    Pt100 Transmitter, Wiring Option 1   -99 to 208°C (-146 to 406°F)
    SPRTX-SS1-M12-2    Pt100 Transmitter, Wiring Option 2   -99 to 208°C (-146 to 406°F)
    SPRTX-SS2-M12-1    Pt100 Transmitter, Wiring Option 1   2 to 569°C (36 to 1056°F)
    SPRTX-SS2-M12-2    Pt100 Transmitter, Wiring Option 2   2 to 569°C (36 to 1056°F)
    * See Figure 1 for Wiring Option information

Unpacking
Remove the packing list and verify that you have received all your equipment. If you have any questions about the shipment, please call Customer Service at 1-800-622-2378 or 203-359-1660. On the web you can find us at: e-mail: ******************
When you receive the shipment, inspect the container and equipment for any signs of damage. Note any evidence of rough handling in transit. Immediately report any damage to the shipping agent. Save the packing material and carton in the event reshipment is necessary.
The following items are supplied in the box with your SPRTX Connector/Transmitter:
• This User's Manual, # M-4755 (1 ea.)

Recommended Accessories
Power Supply, OMEGA Part No. PSU-93
Shielded 2-conductor cable (100 ft), OMEGA Part No. TX2-100

Figure 1. Wiring Options

Introduction/Safety
Your SPRTX-SS Temperature Transmitter has been designed for ease of use and flexibility. It is important that you read this User's Manual completely and follow all safety precautions before operating your unit.

Precautions
1. FOLLOW ALL SAFETY PRECAUTIONS AND OPERATING INSTRUCTIONS OUTLINED IN THIS MANUAL.
2. ENSURE THE PROBE/M12 CONNECTOR CONNECTION IS ALWAYS FULLY TIGHTENED DURING USE.
3. ADD ADDITIONAL SAFEGUARDS TO YOUR SYSTEM IN CRITICAL APPLICATIONS WHERE DAMAGE OR INJURY MAY RESULT FROM PROBE/CONNECTOR SEPARATION OR FAILURE.
4. NEVER EXPOSE THE CONNECTOR/MODULE BODY TO AMBIENT TEMPERATURES ABOVE 85°C (185°F) OR BELOW -40°C (-40°F). DAMAGE MAY RESULT.
5. DO NOT OPERATE IN FLAMMABLE OR EXPLOSIVE ENVIRONMENTS.
6. DO NOT USE IN HUMAN MEDICAL OR NUCLEAR APPLICATIONS.
7. NEVER OPERATE WITH A POWER SOURCE OTHER THAN WHAT IS SPECIFIED IN THIS MANUAL.
8. REMOVE AND/OR DISCONNECT THE POWER SOURCE BEFORE ATTEMPTING INSTALLATION OR MAINTENANCE.
9. ALWAYS OPERATE YOUR UNIT WITH THE SHIELD WIRE CONNECTED TO EARTH GROUND.
10. INSTALLATION AND WIRING SHOULD BE DONE BY TRAINED PROFESSIONALS ONLY.
11. DO NOT OPEN OR DISASSEMBLE YOUR UNIT. There are no user serviceable parts inside your unit.
Attempting to open, repair or service your unit will void your warranty.

Theory of Operation
A 4-20 mA loop is a series loop in which a transmitter varies the current flow depending on the input to the transmitter. With the SPRTX-SS, the amount of current allowed to flow in the loop varies with the resistance change caused by changes in the temperature being measured by the sanitary RTD sensor (probe). Some advantages of a current output over a voltage output are that the measured signal is less susceptible to electrical noise interference and that the loop can support more than one measuring instrument, as long as the maximum loop resistance is not exceeded.
A typical application utilizing a current loop will normally consist of a power supply, the transmitter, and a meter, recorder or controller to measure the current flow. The loop resistance is the sum of the measuring instruments and wire used. The maximum allowable loop resistance for the SPRTX-SS to function properly is found by using the following formula:
    Rmax = (power supply voltage − 9 volts) / 0.02 amps
For applications that require a voltage output, the 4-20 mA signal from the SPRTX-SS can be converted in the field by adding a 250 Ohm shunt resistor that will convert the transmitter's output to a 1-5 Vdc signal when wired correctly.

Figure 2. SPRTX-SS Operation

Mounting Your SPRTX to Probes
The SPRTX-SS Series of connector/transmitters is designed for quick connection to sanitary RTD sensors and probes. See the figure for correct usage.

Protection from High Ambient Temperatures
Your SPRTX-SS Connector/Transmitter assembly can be damaged if exposed to ambient temperatures above 85°C (185°F). Some applications may require that you shield the SPRTX-SS unit from radiated heat as shown in the figure. You should always use a probe whose length allows for a safe distance of 76 mm (3") or more between the body of the SPRTX-SS and your source of heat.

Figure 4. SPRTX-SS Temperature/Protection

Transmitter Wiring Examples
Converting from 4 to 20 mA output to 1 to 5 Vdc output (3-wire): supply voltage must be increased to 15 Vdc.
(Figure: 3-wire conversion using a shunt resistor across a DC voltage input meter with a 24 V power supply; the bare shield wire must be connected to earth ground, with the red and black wires carrying the loop.)

Figure 5. Transmitter Wiring: 2-Wire, 4 to 20 mA Output

Temperature to Analog Output Calculations
Models SPRTX-SS1-M12-1, SPRTX-SS1-M12-2:
    Temp °C   Temp °F   RTD Resistance (Ω)   Output mA
    -99       -146      60.67                4.00
    -50       -58       80.31                6.55
    0         32        100.00               9.16
    25        77        109.73               10.46
    50        122       119.40               11.77
    75        167       128.99               13.07
    100       212       138.51               14.37
    125       257       147.95               15.67
    150       302       157.33               16.98
    175       347       166.62               18.28
    208       406       178.80               20.00

Models SPRTX-SS2-M12-1, SPRTX-SS2-M12-2:
    Temp °C   Temp °F   RTD Resistance (Ω)   Output mA
    -18       0         92.95                3.54
    2         36        100.78               4.00
    25        77        109.73               4.65
    50        122       119.40               5.35
    100       212       138.51               6.77
    150       302       157.33               8.18
    200       392       175.86               9.59
    250       482       194.10               11.00
    300       572       212.05               12.41
    400       752       247.09               15.23
    500       932       280.98               18.10
    569       1056      303.68               20.00

Calibration/Service
Your transmitter has been factory calibrated to meet or exceed the specifications outlined in this manual. No field adjustments are needed or possible on your unit.
If your unit should become damaged or malfunction, please contact Omega Customer Service. On the web you can find us at: e-mail: ******************

Specifications
Supply Voltage: 9 to 24 Vdc @ 30 mA max
Max Load: Rmax (Ω) = (V supply − 9 V) / 0.02 A
Max Input Lead Resistance: 50V
Output: Linearized 4 to 20 mA
Accuracy: ±0.5% of full scale @ 23°C (73°F)
Repeatability: ±0.25°C (±0.5°F)
Temperature Effect: ±0.0022 mA/°C (±0.0012 mA/°F)
Temperature Input Range (by model):
    SPRTX-SS1: -99 to 208°C (-146 to 406°F)
    SPRTX-SS2: 2 to 569°C (36 to 1056°F)
Input: 3-wire Pt100 (α = 0.00385), 4-wire compatible
Probe/Sensor Input Connection: M12, 4-pin female (connects to RTD probes having an M12, 4-pin male connection)
Transmitter Operating Temp: -40 to 85°C (-40 to 185°F)
Output Connection: 2-wire, shielded polyurethane cable, 4 m (12') included
Approvals: CE Marked
Max Loop Resistance: Ohms = (V supply − 9 V) / 0.02 A
Dimensions: 76 mm L x 20 mm D (3 x 0.8") without cable
Weight: 110 g (0.25 lb) max with cable
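Two pieces of arithmetic recur in this manual: the maximum loop resistance formula and the linearized 4 to 20 mA output. The C sketch below is not from the manual; the helper names and the assumption of a strictly linear mapping across the SPRTX-SS1 range are introduced only for illustration.

    #include <stdio.h>

    /* Maximum allowable loop resistance: Rmax (ohms) = (supply voltage - 9 V) / 0.02 A. */
    static double rmax_ohms(double supply_volts)
    {
        return (supply_volts - 9.0) / 0.02;
    }

    /* Convert a measured loop current to temperature, assuming the linearized
       4-20 mA output spans the SPRTX-SS1 range of -99 to 208 degC. */
    static double sprtx_ss1_temp_c(double milliamps)
    {
        const double t_lo = -99.0, t_hi = 208.0;
        return t_lo + (milliamps - 4.0) * (t_hi - t_lo) / 16.0;
    }

    int main(void)
    {
        printf("Rmax with a 24 V supply: %.0f ohms\n", rmax_ohms(24.0));  /* 750  */
        printf("12.00 mA reads as %.1f degC\n", sprtx_ss1_temp_c(12.0));  /* 54.5 */
        printf("20.00 mA reads as %.1f degC\n", sprtx_ss1_temp_c(20.0));  /* 208.0 */
        return 0;
    }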
LIMITATION OF LIABILITY: The remedies of purchaser set forth herein are exclusive, and the total liability of OMEGA with respect to this order, whether based on contract, warranty, negligence, indemnification, strict liability or otherwise, shall not exceed the purchase price of the component upon which liability is based. In no event shall OMEGA be liable for consequential, incidental or special damages.
CONDITIONS: Equipment sold by OMEGA is not intended to be used, nor shall it be used: (1) as a "Basic Component" under 10 CFR 21 (NRC), used in or with any nuclear installation or activity; or (2) in medical applications or used on humans. Should any Product(s) be used in or with any nuclear installation or activity, medical application, used on humans, or misused in any way, OMEGA assumes no responsibility as set forth in our basic WARRANTY/DISCLAIMER language, and, additionally, purchaser will indemnify OMEGA and hold OMEGA harmless from any liability or damage whatsoever arising out of the use of the Product(s) in such a manner.

RETURN REQUESTS/INQUIRIES
Direct all warranty and repair requests/inquiries to the OMEGA Customer Service Department. BEFORE RETURNING ANY PRODUCT(S) TO OMEGA, PURCHASER MUST OBTAIN AN AUTHORIZED RETURN (AR) NUMBER FROM OMEGA'S CUSTOMER SERVICE DEPARTMENT (IN ORDER TO AVOID PROCESSING DELAYS). The assigned AR number should then be marked on the outside of the return package and on any correspondence. The purchaser is responsible for shipping charges, freight, insurance and proper packaging to prevent breakage in transit.
FOR WARRANTY RETURNS, please have the following information available BEFORE contacting OMEGA:
1. Purchase Order number under which the product was PURCHASED,
2. Model and serial number of the product under warranty, and
3. Repair instructions and/or specific problems relative to the product.
FOR NON-WARRANTY REPAIRS, consult OMEGA for current repair charges. Have the following information available BEFORE contacting OMEGA:
1. Purchase Order number to cover the COST of the repair,
2. Model and serial number of the product, and
3. Repair instructions and/or specific problems relative to the product.
OMEGA's policy is to make running changes, not model changes, whenever an improvement is possible. This affords our customers the latest in technology and engineering.
OMEGA is a registered trademark of OMEGA ENGINEERING, INC.
© Copyright 2013 OMEGA ENGINEERING, INC. All rights reserved. This document may not be copied, photocopied, reproduced, translated, or reduced to any electronic medium or machine-readable form, in whole or in part, without the prior written consent of OMEGA ENGINEERING, INC.
Core Concepts of Data Visualization

Data visualization is a powerful tool in the modern world, allowing us to make sense of the vast amounts of information that surround us. By converting complex data into visual representations, we can more easily identify patterns, trends, and insights that would otherwise be obscured. At the heart of effective data visualization are a set of core concepts that guide the design and implementation of these visual tools.

One of the fundamental principles of data visualization is the notion of encoding. This refers to the process of mapping data attributes to visual elements such as position, size, color, and shape. The choices made in this encoding process can have a significant impact on the clarity and effectiveness of the visualization. For example, using color to represent categorical variables can help viewers quickly distinguish between different groups, while using position to show numerical values along an x-y axis can highlight relationships and trends.

Another key concept in data visualization is the idea of hierarchy and focus. Effective visualizations often prioritize the most important information, drawing the viewer's attention to the core insights while relegating less critical details to the periphery. This can be achieved through techniques such as emphasizing key data points, using negative space strategically, and employing visual hierarchies to guide the viewer's eye.

Closely related to the idea of hierarchy is the concept of simplicity. While data visualizations can be powerful tools, they can also quickly become overwhelming if they are overly complex or cluttered. Successful visualizations strike a balance, presenting just the right amount of information to convey the essential insights without drowning the viewer in a sea of data. This often requires careful curation and editing, as well as a deep understanding of the audience and their needs.

Another critical component of effective data visualization is the concept of context. Visualizations do not exist in a vacuum; they are designed to be interpreted within a broader framework of knowledge and understanding. By providing relevant contextual information, such as labels, annotations, and supplementary text, visualizations can become more meaningful and impactful for the viewer.

In addition to these core conceptual elements, data visualization also relies on a variety of technical and design principles. These include the use of appropriate chart types, the application of color theory, the optimization of visual layouts, and the consideration of accessibility and inclusivity. Mastering these technical skills is essential for creating visualizations that are not only aesthetically pleasing but also highly functional and effective.

Ultimately, the true power of data visualization lies in its ability to transform complex information into insights that can drive decision-making, influence behavior, and spark innovation. By understanding and applying the core concepts of data visualization, practitioners can create visualizations that are not only visually compelling but also deeply meaningful and impactful.

As the volume and complexity of data continue to grow, the importance of effective data visualization will only increase. Professionals in a wide range of fields, from business and finance to science and healthcare, are increasingly relying on these visual tools to make sense of the world around them.
By embracing the core principles of data visualization, we can harness the power of data to unlock new discoveries, solve pressing challenges, and create a better future for all.
Scientific Literature
Oxford University Computing Laboratory

Abstract. The goal of the Provably Correct Systems project (ProCoS) is to develop a mathematical basis for development of embedded, realtime, computer systems. This survey paper introduces the specification languages and verification techniques for four levels of development: Requirements definition and control design; Transformation to a systems architecture with program designs and their transformation to programs; Compilation of real-time programs to conventional processors; and Compilation of programs to hardware.

[Table from the original (garbled in extraction), relating development activities to their documents and languages: Requirements — requirements in RL (natural, informal language); System design — system specs in SL; Program design — program source in PL; then either hardware synthesis — circuit diagram, netlist — or compilation — machine code in ML.]
Digital Image Processing and Edge DetectionDigital Image ProcessingInterest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation; and processing of image data for storage, transmission, and representation for autonomous machine perception.An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pixels, and pixels. Pixel is the term most widely used to denote the elements of a digital image.Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spec- trum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images. These include ultra- sound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications.There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vi- sion, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image (which yields asingle number) would not be considered an image processing operation. On the other hand, there are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. This area itself is a branch of artificial intelligence (AI) whose objective is to emulate human intelligence. The field of AI is in its earliest stages of infancy in terms of development, with progress having been much slower than originally anticipated. The area of image analysis (also called image understanding) is in be- tween image processing and computer vision.There are no clearcut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high level processes. Low-level processes involve primitive opera- tions such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are images. Mid-level processing on images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. 
A midlevel process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects). Finally, higher level processing involves “making sense” of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with vision.Based on the preceding comments, we see that a logical place of overlap between image processing and image analysis is the area of recognition of individual regions or objects in an image. Thus, what we call in this book digital image processing encompasses processes whose inputs and outputs are images and, in addition, encompasses processes that extract attributes from images, up to and including the recognition of individual objects. As a simple illustration to clarify these concepts, consider the area of automated analysis of text. The processes of acquiring an image of the area containing the text, preprocessing that image, extracting(segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters are in the scope of what we call digital image processing in this book. Making sense of the content of the page may be viewed as being in the domain of image analysis and even computer vision, depending on the level of complexity implied by the statement “making sense.”As will become evident shortly, digital image processing, as we have defined it, is used successfully in a broad range of areas of exceptional social and economic value.The areas of application of digital image processing are so varied that some form of organization is desirable in attempting to capture the breadth of this field. One of the simplest ways to develop a basic understanding of the extent of image processing applications is to categorize images according to their source (e.g., visual, X-ray, and so on). The principal energy source for images in use today is the electromagnetic energy spectrum. Other important sources of energy include acoustic, ultrasonic, and electronic (in the form of electron beams used in electron microscopy). Synthetic images, used for modeling and visualization, are generated by computer. In this section we discuss briefly how images are generated in these various categories and the areas in which they are applied.Images based on radiation from the EM spectrum are the most familiar, especially images in the X-ray and visual bands of the spectrum. Electromagnet- ic waves can be conceptualized as propagating sinusoidal waves of varying wavelengths, or they can be thought of as a stream of massless particles, each traveling in a wavelike pattern and moving at the speed of light. Each massless particle contains a certain amount (or bundle) of energy. Each bundle of energy is called a photon. If spectral bands are grouped according to energy per photon, we obtain the spectrum shown in fig. below, ranging from gamma rays (highest energy) at one end to radio waves (lowest energy) at the other. The bands are shown shaded to convey the fact that bands of the EM spectrum are not distinct but rather transition smoothly from one to theother.Image acquisition is the first process. Note that acquisition could be as simple as being given an image that is already in digital form. 
Generally, the image acquisition stage involves preprocessing, such as scaling.Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is when we increase the contrast of an image because “it looks better.” It is important to keep in mind that enhancement is a very subjective area of image processing. Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a “good”enhancement result.Color image processing is an area that has been gaining in importance because of the significant increase in the use of digital images over the Internet. It covers a number of fundamental concepts in color models and basic color processing in a digital domain. Color is used also in later chapters as the basis for extracting features of interest in an image.Wavelets are the foundation for representing images in various degrees of resolution. In particular, this material is used in this book for image data compression and for pyramidal representation, in which images are subdivided successively into smaller regions.Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it.Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. This is true particularly in uses of the Internet, which are characterized by significant pictorial content. Image compression is familiar (perhaps inadvertently) to most users of computers in the form of image , such as the jpg used in the JPEG (Joint Photographic Experts Group) image compression standard.Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape. The material in this chapter begins a transition from processes that output images to processes that output image attributes.Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a longway toward successful solution of imaging problems that require objects to be identified individually. On the other hand, weak or erratic segmentation algorithms almost always guarantee eventual failure. In general, the more accurate the segmentation, the more likely recognition is to succeed.Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels separating one image region from another) or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. The first decision that must be made is whether the data should be represented as a boundary or as a complete region. 
Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections. Regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. In some applications, these representations complement each other. Choosing a representation is only part of the solution for trans- forming raw data into a form suitable for subsequent computer processing. A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another.Recognition is the process that assigns a label (e.g., “vehicle”) to an object based on its descriptors. As detailed before, we conclude our coverage of digital image processing with the development of methods for recognition of individual objects.So far we have said nothing about the need for prior knowledge or about the interaction between the knowledge base and the processing modules in Fig 2 above. Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database. This knowledge may be as simple as detailing regions of an image where theinformation of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base also can be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem or an image database containing high-resolution satellite images of a region in connection with change-detection applications. In addition to guiding the operation of each processing module, the knowledge base also controls the interaction between modules. This distinction is made in Fig 2 above by the use of double-headed arrows between the processing modules and the knowledge base, as opposed to single-headed arrows linking the processing modules.Edge detectionEdge detection is a terminology in image processing and computer vision, particularly in the areas of feature detection and feature extraction, to refer to algorithms which aim at identifying points in a digital image at which the image brightness changes sharply or more formally has discontinuities.Although point and line detection certainly are important in any discussion on segmentation,edge detection is by far the most common approach for detecting meaningful discounties in gray level.Although certain literature has considered the detection of ideal step edges, the edges obtained from natural images are usually not at all ideal step edges. Instead they are normally affected by one or several of the following effects:1.focal blur caused by a finite depth-of-field and finite point spread function; 2.penumbral blur caused by shadows created by light sources of non-zero radius; 3.shading at a smooth object edge; 4.local specularities or interreflections in the vicinity of object edges.A typical edge might for instance be the border between a block of red color and a block of yellow. In contrast a line (as can be extracted by a ridge detector) can be a small number of pixels of a different color on an otherwise unchanging background. For a line, there maytherefore usually be one edge on each side of the line.To illustrate why edge detection is not a trivial task, let us consider the problem of detecting edges in the following one-dimensional signal. 
Here, we may intuitively say that there should be an edge between the 4th and 5th pixels.If the intensity difference were smaller between the 4th and the 5th pixels and if the intensity differences between the adjacent neighbouring pixels were higher, it would not be as easy to say that there should be an edge in the corresponding region. Moreover, one could argue that this case is one in which there are several edges.Hence, to firmly state a specific threshold on how large the intensity change between two neighbouring pixels must be for us to say that there should be an edge between these pixels is not always a simple problem. Indeed, this is one of the reasons why edge detection may be a non-trivial problem unless the objects in the scene are particularly simple and the illumination conditions can be well controlled.There are many methods for edge detection, but most of them can be grouped into two categories,search-based and zero-crossing based. The search-based methods detect edges by first computing a measure of edge strength, usually a first-order derivative expression such as the gradient magnitude, and then searching for local directional maxima of the gradient magnitude using a computed estimate of the local orientation of the edge, usually the gradient direction. The zero-crossing based methods search for zero crossings in a second-order derivative expression computed from the image in order to find edges, usually the zero-crossings of the Laplacian of the zero-crossings of a non-linear differential expression, as will be described in the section on differential edge detection following below. As a pre-processing step to edge detection, a smoothing stage, typically Gaussian smoothing, is almost always applied (see also noise reduction).The edge detection methods that have been published mainly differ in the types of smoothing filters that are applied and the way the measures of edge strength are computed. As many edge detection methods rely on the computation of image gradients, they also differ in the types of filters used for computing gradient estimates in the x- and y-directions.Once we have computed a measure of edge strength (typically the gradient magnitude), the next stage is to apply a threshold, to decide whether edges are present or not at an image point. The lower the threshold, the more edges will be detected, and the result will be increasingly susceptible to noise, and also to picking out irrelevant features from the image. Conversely a high threshold may miss subtle edges, or result in fragmented edges.If the edge thresholding is applied to just the gradient magnitude image, the resulting edges will in general be thick and some type of edge thinning post-processing is necessary. For edges detected with non-maximum suppression however, the edge curves are thin by definition and the edge pixels can be linked into edge polygon by an edge linking (edge tracking) procedure. On a discrete grid, the non-maximum suppression stage can be implemented by estimating the gradient direction using first-order derivatives, then rounding off the gradient direction to multiples of 45 degrees, and finally comparing the values of the gradient magnitude in the estimated gradient direction.A commonly used approach to handle the problem of appropriate thresholds for thresholding is by using thresholding with hysteresis. This method uses multiple thresholds to find edges. We begin by using the upper threshold to find the start of an edge. 
Once we have a start point, we then trace the path of the edge through the image pixel by pixel, marking an edge whenever we are above the lower threshold. We stop marking our edge only when the value falls below our lower threshold. This approach makes the assumption that edges are likely to be in continuous curves, and allows us to follow a faint section of an edge we have previously seen, without meaning that every noisy pixel in the image is marked down as an edge. Still, however, we have the problem of choosing appropriate thresholding parameters, and suitable thresholding values may vary over the image.

Some edge-detection operators are instead based upon second-order derivatives of the intensity. This essentially captures the rate of change in the intensity gradient. Thus, in the ideal continuous case, detection of zero-crossings in the second derivative captures local maxima in the gradient.

We can come to the conclusion that, to be classified as a meaningful edge point, the transition in gray level associated with that point has to be significantly stronger than the background at that point. Since we are dealing with local computations, the method of choice to determine whether a value is "significant" or not is to use a threshold. Thus we define a point in an image as being an edge point if its two-dimensional first-order derivative is greater than a specified threshold; a set of such points that are connected according to a predefined criterion of connectedness is by definition an edge. The term edge segment generally is used if the edge is short in relation to the dimensions of the image. A key problem in segmentation is to assemble edge segments into longer edges. An alternate definition, if we elect to use the second derivative, is simply to define the edge points in an image as the zero crossings of its second derivative. The definition of an edge in this case is the same as above. It is important to note that these definitions do not guarantee success in finding edges in an image. They simply give us a formalism to look for them. First-order derivatives in an image are computed using the gradient. Second-order derivatives are obtained using the Laplacian.
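To make the search-based approach concrete, the following sketch computes a first-order (Sobel) gradient magnitude and applies a single global threshold. It is a minimal illustration only, not the hysteresis method described above; the image variable I (assumed to be a grayscale image stored as a double matrix scaled to [0,1]) and the threshold value 0.2 are assumptions for the example.

Gx = conv2(I, [-1 0 1; -2 0 2; -1 0 1], 'same');   % horizontal gradient estimate (Sobel)
Gy = conv2(I, [-1 -2 -1; 0 0 0; 1 2 1], 'same');   % vertical gradient estimate (Sobel)
G  = sqrt(Gx.^2 + Gy.^2);                          % edge strength (gradient magnitude)
edges = G > 0.2;                                   % mark points above a global threshold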
Talking Robots With LEGO MindStormsAlexander Koller Saarland University Saarbr¨u cken,Germany koller@coli.uni-sb.de Geert-Jan M.Kruijff Saarland University Saarbr¨u cken,Germany gj@coli.uni-sb.deAbstractThis paper shows how talking robots can be built from off-the-shelf components,based on the Lego MindStorms robotics platform.We present four robots that students created asfinal projects in a seminar we supervised.Because Lego robots are so affordable,we argue that it is now feasible for any dialogue researcher to tackle the interesting chal-lenges at the robot-dialogue interface.1IntroductionEver since KarelˇCapek introduced the word“robot”in his1921novel Rossum’s Universal Robots and the subsequent popularisation through Issac Asi-mov’s books,the idea of building autonomous robots has captured people’s imagination.The cre-ation of an intelligent,talking robot has been the ul-timate dream of Artificial Intelligence from the very start.Yet,although there has been a tremendous amount of AI research on topics such as control and navigation for robots,the issue of integrat-ing dialogue capabilities into a robot has only re-cently started to receive attention.Early successes were booked with Flakey(Konolige et al.,1993), a voice-controlled robot which roamed the corri-dors of SRI.Since then,thefield of socially in-teractive robots has established itself(see(Fong et al.,2003)).Often-cited examples of such inter-active robots that have a capability of communi-cating in natural language are the humanoid robot R OBOVIE(Kanda et al.,2002)and robotic mu-seum tour guides like R HINO(Burgard et al.,1999) (Deutsches Museum Bonn),its successor M INERVA touring the Smithsonian in Washington(Thrun et al.,2000),and R OBOX at the Swiss National Ex-hibition Expo02(Siegwart and et al,2003).How-ever,dialogue systems used in robotics appear to be mostly restricted to relatively simplefinite-state, query/response interaction.The only robots in-volving dialogue systems that are state-of-the-art in computational linguistics(and that we are aware of)are those presented by Lemon et al.(2001),Sidner et al.(2003)and Bos et al.(2003),who equipped a mobile robot with an information state based dia-logue system.There are two obvious reasons for this gap be-tween research on dialogue systems in robotics on the one hand,and computational linguistics on the other hand.One is that the sheer cost involved in buying or building a robot makes traditional robotics research available to only a handful of re-search sites.Another is that building a talking robot combines the challenges presented by robotics and natural language processing,which are further ex-acerbated by the interactions of the two sides.In this paper,we address at least thefirst prob-lem by demonstrating how to build talking robots from affordable,commercial off-the-shelf(COTS) components.We present an approach,tested in a seminar taught at the Saarland University in Win-ter2002/2003,in which we combine the Lego MindStorms system with COTS software for speech recognition/synthesis and dialogue modeling.The Lego MindStorms1system extends the tra-ditional Lego bricks with a central control unit(the RCX),as well as motors and various kinds of sen-sors.It provides a severely limited computational platform from a traditional robotics point of view, but comes at a price of a few hundred,rather than tens of thousands of Euros per kit.Because Mind-Storms robots can beflexibly connected to a dia-logue system running on a PC,this means that af-fordable robots are now 
available to dialogue re-searchers.We present four systems that were built by teams of three students each under our supervision,and use off-the-shelf components such as the Mind-Storms kits,a dialogue system,and a speech recog-niser and synthesis system,in addition to commu-nications software that we ourselves wrote to link all the components together.It turns out that using 1LEGO and LEGO MindStorms are trademarks of the LEGO Company.this accessible technology,it is possible to create basic but interesting talking robots in limited time (7weeks).This is relevant not only for future re-search,but can also serve as a teaching device that has shown to be extremely motivating for the stu-dents.MindStorms are a staple in robotics educa-tion (Yu,2003;Gerovich et al.,2003;Lund,1999),but to our knowledge,they have never been used as part of a language technology curriculum.The paper is structured as follows.We first present the basic setup of the MindStorms system and the software architecture.Then we present the four talking robots built by our students in some de-tail.Finally,we discuss the most important chal-lenges that had to be overcome in building them.We conclude by speculating on further work in Sec-tion 5.2ArchitectureLego MindStorms robots are built around a pro-grammable microcontroller,the RCX.This unit,which looks like an oversized yellow Lego brick,has three ports each to attach sensors and motors,an infrared sender/receiver for communication with the PC,and 32KB memory to store the operating system,a programme,anddata.Figure 1:Architecture of a talking Lego robot.Our architecture for talking robots (Fig.1)con-sists of four main modules:a dialogue system,a speech client with speech recognition and synthesis capabilities,a module for infrared communication between the PC and the RCX,and the programme that runs on the RCX itself.Each student team had to specify a dialogue,a speech recognition gram-mar,and the messages exchanged between PC and RCX,as well as the RCX control programme.All other components were off-the-shelf systems that were combined into a larger system by us.The centrepiece of the setup is the dialogue system.We used the DiaWiz system byCLTFigure 2:The dialogue system.Sprachtechnologie GmbH 2,a proprietary frame-work for defining finite-state dialogues (McTear,2002).It has a graphical interface (Fig.2)that al-lows the user to draw the dialogue states (shown as rectangles in the picture)and connect them via edges.The dialogue system connects to an arbitrary number of “clients”via sockets.It can send mes-sages to and receive messages from clients in each dialogue state,and thus handles the entire dialogue management.While it was particularly convenient for us to use the CLT system,it could probably re-placed without much effort by a V oiceXML-based dialogue manager.The client that interacts most directly with the user is a module for speech recognition and synthe-sis.It parses spoken input by means of a recogni-tion grammar written in the Java Speech Grammar Format,3and sends an extremely shallow semantic representation of the best recognition result to the dialogue manager as a feature structure.The out-put side can be configured to either use a speech synthesiser,or play back recorded WA V files.Our implementation assumes only that the recognition and synthesis engines are compliant with the Java Speech API 4.The IR communication module has the task of converting between high-level messages that the 
di-2http://www.clt-st.de3/products/java-media/speech/forDevelopers/JSGF/4/products/java-media/speech/Figure3:A robot playing chess.alogue manager and the RCX programme exchange and their low-level representations that are actually sent over the IR link,in such a way that the user need not think about the particular low-level details. The RCX programme itself is again implemented in Java,using the Lejos system(Bagnall,2002).Such a programme is typically small(tofit into the mem-ory of the microcontroller),and reacts concurrently to events such as changes in sensor values and mes-sages received over the infrared link,mostly by con-trolling the motors and sending messages back to the PC.3Some Robots3.1Playing ChessThefirst talking robot we present plays chess against the user(Fig.3).It moves chess pieces on a board by means of a magnetic arm,which it can move up and down in order to grab and release a piece,and can place the arm under a certain posi-tion by driving back and forth on wheels,and to the right and left on a gear rod.The dialogue between the human player and the robot is centred around the chess game:The human speaks the move he wants to make,and the robot confirms the intended move,and announces check and checkmate.In order to perform the moves for the robot,the dialogue manager connects to a spe-cialised client which encapsulates the GNU Chess system.5In addition to computing the moves that the robot will perform,the chess programme is also used in disambiguating elliptical player inputs. Figure4shows the part of the chess dialogue model that accepts a move as a spoken command from the player.The Input node near the top waits for the speech recognition client to report that it 5/software/chess/chess.htmlFigure4:A small part of the Chess dialogue.<cmd>=[<move>]<piece><to><squareTo> |...<squareTo>=<colTo><rowTo><colTo>=[a wie]anton{colTo:a}|[b wie]berta{colTo:b}|... <rowTo>=eins{rowTo:1}|zwei{rowTo:2}|...Figure5:A small part of the Chess grammar. understood a player utterance as a command.An excerpt from the recogniser grammar is shown in Fig.5:The grammar is a context-free grammar in JSGF format,whose production rules are annotated with tags(in curly brackets)representing a very shallow semantics.The tags for all production rules used in a parse tree are collected into a table.The dialogue manager then branches depend-ing on the type of the command given by the user.If the command specified the piece and target square,e.g.“move the pawn to e4”,the recogniser will return a representation like{piece="pawn" colTo="e"rowTo="4"},and the dialogue will continue in the centre branch.The user can also specify the source and target square.If the player confirms that the move command was recognised correctly,the manager sends the move description to the chess client(the“send move”input nodes near the bottom),which can dis-ambiguate the move description if necessary,e.g. 
by expanding moves of type“move the pawn toe4”to moves of type “move from e2to e4”.Note that the reference “the pawn”may not be globally unique,but if there is only one possible referent that could perform the requested move,the chess client resolves this automatically.The client then sends a message to the RCX,which moves the piece using the robot arm.It up-dates its internal data structures,as well as the GNU Chess representations,computes a move for itself,and sends this move as another message to the RCX.While the dialogue system as it stands already of-fers some degree of flexibility with regard to move phrasings,there is still plenty of open room for im-provements.One is to use even more context infor-mation,in order to understand commands like “take it with the rook”.Another is to incorporate recent work on improving recognition results in the chess domain by certain plausibility inferences (Gabsdil,2004).3.2Playing a Shell GameFigure 6introduces Luigi Legonelli .The robot rep-resents a charismatic Italian shell-game player,and engages a human player in style:Luigi speaks Ger-man with a heavy Italian accent,lets the human player win the first round,and then tries to pull sev-eral tricks either to cheat or to keep the player inter-ested in thegame.Figure 6:A robot playing a shell game.Luigi’s Italian accent was obtained by feeding transliterated German sentences to a speech synthe-sizer with an Italian voice.Although the resulting accent sounded authentic,listeners who were unfa-miliar with the accent had trouble understanding it.For demonstration purposes we therefore decided to use recorded speech instead.To this end,the Italian student on the team lent his voice for the different sentences uttered by Luigi.The core of Luigi’s dialogue model reflects the progress of game play in a shell game.At the start,Luigi and the player settle on a bet (between 1and 10euros),and Luigi shows under which shell the coin is.Then,Luigi manipulates the shells (see also below),moving them (and the coin)around the board,and finally asks the player under which shell the player believes the coin is.Upon the player’s guess Luigi lifts the shell indicated by the player,and either loudly exclaims the unfairness of life (if he has lost)or kindly inquires after the player’s visual capacities (in case the player has guessed wrong).At the end of the turn,Luigi asks the player whether he wants to play again.If the player would like to stop,Luigi tries to persuade the player to stay;only if the player is persistent,Luigi will end the game and beat a hasty retreat.(1)rob “Ciao,my name is Luigi Legonelli.Do you feel like a little game?”usr “Yes ...”rob “The rules are easy.I move da cuppa,you know,cuppa?You look,say where coin is.How much money you bet?”usr “10Euros.”rob (Luigi moves the cups/shells)rob “So,where is the coin?What do youthink,where’s the coin?”usr “Cup 1”rob “Mamma mia!You have won!Whotold you,where is coin?!Another game?Another game!”usr “No.”rob “Come!Play another game!”usr “No.”rob “Okay,ciao signorina!Police,muchpolice!Bye bye!”The shells used in the game are small cups with a metal top (a nail),which enables Luigi to pick them up using a “hand”constructed around a magnet.The magnet has a downward oriented,U-shaped construction that enables Luigi to pick up two cups at the same time.Cups then get moved around the board by rotating the magnet.By magnetizing the nail at the top of the cup,not only the cup butalso the coin (touched by the tip of the nail)can be moved.When asked to show whether the 
coin is un-der a particular shell,one of Luigi’s tricks is to keep the nail magnetized when lifting a cup –thus also lifting the coin,giving off the impression that there was no coin under the shell.The Italian accent,the android shape of the robot,and the ’authentic’behavior of Luigi all contributed to players genuinely getting engaged in the game.After the first turn,having won,most players ac-knowledged that this is an amusing Lego construc-tion;when they were tricked at the end of the sec-ond turn,they expressed disbelief;and when we showed them that Luigi had deliberately cheated them,astonishment.At that point,Luigi had ceased to be simply an amusing Lego construction and had achieved its goal as an entertainment robot that can immerse people into its game.3.3Exploring a pyramidThe robot in Figure 7,dubbed “Indy”,is inspired by the various robots that have been used to explore the Great Pyramids in Egypt (e.g.Pyramid Rover 6,UPUAUT 7).It has a digital videocamera (webcam)and a lamp mounted on it,and continually transmits images from inside the pyramid.The user,watch-ing the images of the videocamera on a computer screen,can control the robot’s movements and the angle of the camera byvoice.Figure 7:A robot exploring a pyramid.Human-robot interaction is crucial to the explo-ration task,as neither user nor robot has a com-plete picture of the environment.The robot is aware of the environment through an (all-round)array of touch-sensors,enabling it to detect e.g.openings in walls;the user receives a more detailed picture,but6/news/news.jsp?id=ns999928057only of the environment straight ahead of the robot (due to the frontal orientation of the camera).The dialogue model for Indy defines the possible interaction that enables Indy and the user to jointly explore the environment.The user can initiate a di-alogue to control the camera and its orientation (by letting the robot turn on the spot,in a particular di-rection),or to instruct the robot to make particular movements (i.e.turn left or right,stop).3.4Traversing a labyrinthA variation on the theme of human-robot interaction in navigation is the robot in Figure 8.Here,the user needs to guide a robot through a labyrinth,specified by thick black lines on a white background.The task that the robot and the human must solve col-laboratively is to pick up objects randomly strewn about the maze.The robot is able to follow the black lines lines (the “path”)by means of an array of three light sensors at itsfront.Figure 8:A robot traversing a labyrinth.Both the user and the robot can take the initia-tive in the dialogue.The robot,capable of spotting crossings (and the possibilities to go straight,left and/or right),can initiate a dialogue asking for di-rections if the user had not instructed the robot be-forehand;see Example 2.(2)rob (The robot arrives at a crossing;itrecognizes the possibility to go either straight or left;there are no current in-structions)rob “I can go left or straight ahead;whichway should I go?”usr “Please go right.”rob “I cannot go right r “Please go straight.”rob “Okay.”The user can give the robot two different types of directions:in-situ directions(as illustrated in Ex-ample2)or deictic directions(see Example3be-low).This differentiates the labyrinth robot from the pyramid robot described in§3.3,as the latter could only handle in-situ directions.(3)usr“Please turn left at the next crossing.”rob“Okay”rob(The robot arrives at a crossing;it recognizes the possibility to go eitherstraight or left;it was told to go left atthe 
next crossing)rob(The robot recognizes it can go left and does so,as instructed)4DiscussionThefirst lesson we can learn from the work de-scribed above is that affordable COTS products in dialogue and robotics have advanced to the point that it is feasible to build simple but interesting talk-ing robots with limited effort.The Lego Mind-Storms platform,combined with the Lejos system, turned out to be aflexible and affordable robotics framework.More“professional”robots have the distinct advantage of more interesting sensors and more powerful on-board computing equipment,and are generally more physically robust,but Lego MindStorms is more than suitable for robotics ex-perimentation under controlled circumstances. Each of the robots was designed,built,and pro-grammed within twenty person-weeks,after an ini-tial work phase in which we created the basic in-frastructure shown in Figure1.One prerequisite of this rather efficient development process was that the entire software was built on the Java platform, and was kept highly modular.Speech software ad-hering to the Java Speech API is becoming avail-able,and plugging e.g.a different JSAPI-compliant speech recogniser into our system is now a matter of changing a line in a configurationfile. However,building talking robots is still a chal-lenge that combines the particular problems of dia-logue systems and robotics,both of which introduce situations of incomplete information.The dialogue side has to robustly cope with speech recognition er-rors,and our setup inherits all limitations inherent in finite-state dialogue;applications having to do e.g. with information seeking dialogue would be better served with a more complex dialogue model.On the other hand,a robot lives in the real world,and has to deal with imprecisions in measuring its po-sition,unexpected obstacles,communications with the PC breaking off,and extremely limited sensory information about its surroundings.5ConclusionThe robots we developed together with our stu-dents were toy robots,looked like toy robots,and could(given the limited resources)only deal with toy examples.However,they confirmed that there are affordable COTS components on the market with which we can,even in a limited amount of time,build engaging talking robots that capture the essence of various(potential)real-life applications. 
The chess and shell game players could be used as entertainment robots.The labyrinth and pyramid robots could be extended into tackling real-world exploration or rescue tasks,in which robots search for disaster victims in environments that are too dangerous for rescuers to venture into.8Dialogue capabilities are useful in such applications not just to communicate with the human operator,but also possibly with disaster victims,to check their condi-tion.Moreover,despite the small scale of these robots, they show genuine issues that could provide in-teresting lines of research at the interface between robotics and computational linguistics,and in com-putational linguistics as such.Each of our robots could be improved dramatically on the dialogue side in many ways.As we have demonstrated that the equipment for building talking robots is affordable today,we invite all dialogue researchers to join us in making such improvements,and in investigat-ing the specific challenges that the combination of robotics and dialogue bring about.For instance,a robot moves and acts in the real world(rather than a carefully controlled computer system),and suffers from uncertainty about its surroundings.This limits the ways in which the dialogue designer can use vi-sual context information to help with reference res-olution.Robots,being embodied agents,present a host of new challenges beyond the challenges we face in computational linguistics.The interpretation of language needs to be grounded in a way that is both based in perception,and on conceptual struc-tures to allow for generalization over experiences. Naturally,this problem extends to the acquisition of language,where approaches such as(Nicolescu and Matari´c,2001;Carbonetto and Freitos,2003; Oates,2003)have focused on basing understanding entirely in sensory data.Another interesting issue concerns the interpreta-tion of deictic references.Research in multi-modal 8See also /robocuprescue/interfaces has addressed the issue of deictic refer-ence,notably in systems that allow for pen-input (see(Oviatt,2001)).Embodied agents raise the complexity of the issues by offering a broader range of sensory input that needs to be combined(cross-modally)in order to establish possible referents. Acknowledgments.The authors would like to thank LEGO and CLT Sprachtechnologie for pro-viding free components from which to build our robot systems.We are deeply indebted to our stu-dents,who put tremendous effort into designing and building the presented robots.Further information about the student projects(including a movie)is available at the course website,http://www.coli.uni-sb.de/cl/courses/lego-02.ReferencesBrian Bagnall.2002.Core Lego Mindstorms Pro-gramming.Prentice Hall,Upper Saddle River NJ.Johan Bos,Ewan Klein,and Tetsushi Oka.2003. Meaningful conversation with a mobile robot.In Proceedings of the10th EACL,Budapest.W.Burgard, A.B.Cremers, D.Fox, D.H¨a hnel, kemeyer, D.Schulz,W.Steiner,and S.Thrun.1999.Experiences with an interactive museum tour-guide robot.Artificial Intelligence, 114(1-2):3–55.Peter Carbonetto and Nando de Freitos.2003.Why can’t Jos´e talk?the problem of learning se-mantic associations in a robot environment.In Proceedings of the HLT-NAACL2003Workshop on Learning Word Meaning from Non-Linguistic Data,pages54–61,Edmonton,Canada. 
Terrence W Fong,Illah Nourbakhsh,and Kerstin Dautenhahn.2003.A survey of socially interac-tive robots.Robotics and Autonomous Systems, 42:143–166.Malte bining acoustic confi-dences and pragmatic plausibility for classifying spoken chess move instructions.In Proceedings of the5th SIGdial Workshop on Discourse and Dialogue.Oleg Gerovich,Randal P.Goldberg,and Ian D. Donn.2003.From science projects to the en-gineering bench.IEEE Robotics&Automation Magazine,10(3):9–12.Takayuki Kanda,Hiroshi Ishiguro,Tetsuo Ono,Mi-chita Imai,and Ryohei Nakatsu.2002.Develop-ment and evaluation of an interactive humanoid robot”robovie”.In Proceedings of the IEEE In-ternational Conference on Robotics and Automa-tion(ICRA2002),pages1848–1855.Kurt Konolige,Karen Myers,Enrique Ruspini,and Alessandro Saffiotti.1993.Flakey in ac-tion:The1992aaai robot competition.Techni-cal Report528,AI Center,SRI International,333 Ravenswood Ave.,Menlo Park,CA94025,Apr. Oliver Lemon,Anne Bracy,Alexander Gruenstein, and Stanley Peters.2001.A multi-modal dia-logue system for human-robot conversation.In Proceedings NAACL2001.Henrik Hautop Lund.1999.AI in children’s play with LEGO robots.In Proceedings of AAAI1999 Spring Symposium Series,Menlo Park.AAAI Press.Michael McTear.2002.Spoken dialogue technol-ogy:enabling the conversational user interface. ACM Computing Surveys,34(1):90–169. Monica N.Nicolescu and Maja J.Matari´c.2001. Learning and interacting in human-robot do-mains.IEEE Transactions on Systems,Man and Cybernetics,31.Tim Oates.2003.Grounding word meanings in sensor data:Dealing with referential un-certainty.In Proceedings of the HLT-NAACL 2003Workshop on Learning Word Meaning from Non-Linguistic Data,pages62–69,Edmonton, Canada.Sharon L.Oviatt.2001.Advances in the robust processing of multimodal speech and pen sys-tems.In P.C.Yuen,Y.Y.Tang,and P.S.Wang, editors,Multimodal InterfacesB for Human Ma-chine Communication,Series on Machine Per-ception and Artificial Intelligence,pages203–218.World Scientific Publisher,London,United Kingdom.Candace L.Sidner,Christopher Lee,and Neal Lesh. 2003.Engagement by looking:Behaviors for robots when collaborating with people.In Pro-ceedings of the7th workshop on the semantics and pragmatics of dialogue(DIABRUCK).R.Siegwart and et al.2003.Robox at expo.02: A large scale installation of personal robots. Robotics and Autonomous Systems,42:203–222. S.Thrun,M.Beetz,M.Bennewitz,W.Burgard, A.B.Cremers,F.Dellaert,D.Fox,D.H¨a hnel, C.Rosenberg,N.Roy,J.Schulte,and D.Schulz. 2000.Probabilistic algorithms and the interactive museum tour-guide robot minerva.International Journal of Robotics Research,19(11):972–999. Xudong Yu.2003.Robotics in education:New platforms and environments.IEEE Robotics& Automation Magazine,10(3):3.。
Converting Between System Representations
State space to transfer function
    Zeros at infinity
Transfer function to state space
State space to zero/pole and transfer function to zero/pole
Pole/zero to state space and pole/zero to transfer function
A dynamic system is most commonly described in one of three ways: by a set of state-space equations and the corresponding matrices, by a transfer function with the numerator and denominator polynomials, or by a list of poles and zeros and the associated gain. From time to time, it is useful to convert between these various representations. Matlab can do these conversions quickly and easily.
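For a concrete preview, the same simple system can be written in each of the three forms. This is only an illustrative sketch — the second-order system (s + 1)/(s^2 + 3s + 2) is an assumed example, not one used elsewhere in this tutorial — but it shows the kind of conversion commands (here tf2zp and tf2ss) that the sections below cover in more detail:

num = [1 1];                    % transfer function numerator: s + 1
den = [1 3 2];                  % transfer function denominator: s^2 + 3s + 2
[z,p,k] = tf2zp(num,den)        % zero/pole/gain form: z = -1, p = -1 and -2, k = 1
[A,B,C,D] = tf2ss(num,den)      % one possible state-space realization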
State space to transfer function: To begin with, suppose you have a set of state-space equations and you would like to convert them to the equivalent transfer function. This is done using the command
[num,den] = ss2tf(A,B,C,D,iu)

This command creates the numerator and denominator of the transfer function for the iu'th input. Note that in most of the systems considered in this tutorial, there is only one input and therefore the "iu" term does not need to be included. For example, suppose you had the following set of state equations:
x1' = x2
x2' = -(b/m) x2 + (1/m) u
y = x2

with m = 1000, b = 50, and u = 500.

If you want to change this to a transfer function, just run the following m-file:
A = [0 1;
     0 -0.05];
B = [0;
     0.001];
C = [0 1];
D = 0;

[num,den] = ss2tf(A,B,C,D)

Matlab should return the following to the command window:

num =
         0    0.0010         0

den =
    1.0000    0.0500         0

This is the way Matlab presents

        0.001 s + 0
    --------------------
     s^2 + 0.05 s + 0
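As an optional sanity check (a sketch, not part of the worked example), note that the denominator returned by ss2tf is the characteristic polynomial of A, so its roots should match the eigenvalues of A computed directly:

roots(den)      % returns 0 and -0.05, the poles of the transfer function
eig(A)          % returns the same values, the eigenvalues of A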
You now have the transfer function that describes the system. As you can see this command is pretty easy to use. Here are some notes about ss2tf:
- The numerator, num, will have as many rows as there are outputs (or rows in the matrix C).
- The numerator and denominator are returned in descending powers of s.
- Care must be taken to check the numerator and denominator, as zeros at infinity may produce erroneous transfer functions.
Zeros at infinity

This last point needs some further explanation. We say that a system has zeros at infinity if the limit as s -> infinity of the value of the transfer function is equal to zero; this happens whenever you have more poles than zeros. You will see this in the root locus plot as asymptotes which go to infinity (the number of asymptotes is equal to the number of zeros at infinity). Matlab sometimes computes these zeros at infinity as being large finite numbers. When this happens, some of the coefficients in the numerator that are supposed to be zero end up being very small numbers. It may not seem like a big deal, but it can cause errors when trying to use the transfer function later on. You should always check your transfer function, and if numbers that are 0.0000 show up that are supposed to be zero, rewrite the numerator into Matlab to compensate.
A good example of this is given by the following set of state equations:
If you enter this into Matlab using the following m-file:

A = [0      1        0       0;
     0  -0.1818   2.6727     0;
     0      0        0       1;
     0  -4.545   31.1818     0];
B = [0;
     1.8182;
     0;
     4.5455];
C = [1 0 0 0;
     0 0 1 0];
D = [0;
     0];

[num,den] = ss2tf(A,B,C,D)
You should get the following transfer function:

num =
         0    0.0000    1.8182   -0.0000  -44.5460
         0    0.0000    4.5455    0.0000         0

den =
    1.0000    0.1818  -31.1818   -4.4541         0
If you look at the numerator, the first and last element of each row are 0, while the second and fourth element in each row are 0.0000. If you look closer at each of these elements, you will find that they are not zero, but in fact some very small number. To see this, enter any of the following commands into the Matlab command window: num(1,2), num(1,4), num(2,2) or num(2,4). You should get something similar to the following as an answer: 7.1054e-15, -6.2172e-15, 1.2434e-14, or 4.4409e-15. Look at the roots of the numerator polynomials using roots(num(1,:)) and you will see the roots of the numerator which are almost at infinity but not quite.
This numerical inconsistency can be eliminated by adding the following line after the ss2tf command to get rid of the numbers that are not supposed to be there:
num = [num(1,3) 0 num(1,5);
       num(2,3) 0 num(2,5)];
Now all of the small numbers have been replaced with zeros. Always make sure to look at your transfer function and understand what it means before you use it in the design process.
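If you would rather not retype the coefficients by hand, a more general approach is to zero out every coefficient whose magnitude falls below a small tolerance. This is only a sketch — the 1e-10 tolerance is an assumed value and should be chosen to suit the scale of your coefficients:

tol = 1e-10;                 % assumed cutoff for "numerically zero"
num(abs(num) < tol) = 0;     % clear the spurious small numerator coefficients
den(abs(den) < tol) = 0;     % the denominator can be cleaned the same way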
Transfer function to state space: The reverse of the command ss2tf is the tf2ss command, which converts a transfer function of a system into state-space form. The command is issued like this:
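A sketch of the call, reusing the numerator and denominator found in the first example above; tf2ss takes the polynomial coefficient vectors and returns one valid state-space realization, which in general will not be the same A, B, C, D you started from:

num = [0 0.001 0];            % 0.001 s
den = [1 0.05 0];             % s^2 + 0.05 s
[A,B,C,D] = tf2ss(num,den)    % returns a state-space realization of this transfer function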