2008_Min-Max decoding for non binary LDPC codes
LTE_3GPP_36.213-860 (Chinese version)

3GPP
Release 8
3GPP TS 36.213 V8.6.0 (2009-03)
Copyright Notification
No part may be reproduced except as authorized by written permission. The copyright and the foregoing restriction extend to reproduction in all media.
© 2009, 3GPP Organizational Partners (ARIB, ATIS, CCSA, ETSI, TTA, TTC). All rights reserved.
UMTS™ is a Trade Mark of ETSI registered for the benefit of its members.
3GPP™ is a Trade Mark of ETSI registered for the benefit of its Members and of the 3GPP Organizational Partners.
LTE™ is a Trade Mark of ETSI currently being registered for the benefit of its Members and of the 3GPP Organizational Partners.
GSM® and the GSM logo are registered and owned by the GSM Association.
Hong Kong Identity Card Numbers

For this example, d = (1 0 0)^T. The error occurred at the 4th position. D is called a parity check matrix.
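The syndrome computation can be sketched in code. The fragment does not spell out the matrix D, so the (7,4) Hamming parity-check matrix below is an assumed illustration of how a syndrome d = D·r^T locates a single error; the matrix and codeword are not taken from the source.

```python
# Hypothetical parity-check matrix for the (7,4) Hamming code, with column j
# equal to the binary representation of j (row 1 = least significant bit).
D = [
    [1, 0, 1, 0, 1, 0, 1],   # checks positions 1, 3, 5, 7
    [0, 1, 1, 0, 0, 1, 1],   # checks positions 2, 3, 6, 7
    [0, 0, 0, 1, 1, 1, 1],   # checks positions 4, 5, 6, 7
]

def syndrome(received):
    """Compute d = D . r^T over GF(2)."""
    return [sum(h * r for h, r in zip(row, received)) % 2 for row in D]

codeword = [1, 0, 1, 0, 1, 0, 1]      # a valid codeword: syndrome is all zeros
received = codeword[:]
received[3] ^= 1                      # flip the 4th position

d = syndrome(received)
# Reading the syndrome as a binary number gives the error position directly:
position = d[0] + 2 * d[1] + 4 * d[2]
print(d, position)                    # [0, 0, 1] 4
```

With this column ordering the syndrome, read as a binary number, names the errored position, which is why the 4th position is identified above.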
Probability of Correct Transmission
Suppose we are transmitting a 4-digit message over a binary channel with q = 0.9. The probability that the message is received correctly is, with no coding, q^4 = 0.6561; with a repetition code:
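The arithmetic can be checked directly. The repetition-code figure is cut off in the text, so the 3-repetition majority-vote value below is a hedged illustration under that assumption, not the source's number.

```python
from math import comb

q = 0.9   # per-digit probability of correct transmission
k = 4     # message length

# No coding: every digit must arrive intact.
p_no_coding = q ** k                            # 0.6561, as in the text

# Assumed 3-repetition code with majority voting: a digit decodes correctly
# when at most one of its three copies is flipped.
p_digit = q**3 + comb(3, 1) * q**2 * (1 - q)    # 0.972
p_repetition = p_digit ** k                     # about 0.8926

print(p_no_coding, p_digit, round(p_repetition, 4))
```

The repetition code triples the transmission length but raises the whole-message success probability from about 0.66 to about 0.89 at this q.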
X354670(?)
9(58) + 8(33) + 7(3) + 6(5) + 5(4) + 4(6) + 3(7) + 2(0) + z = 902 + z must be divisible by 11, so z = 0. Modular arithmetic simplifies the computation: z ≡ −(9α + 8β + 7a + 6b + 5c + 4d + 3e + 2f) ≡ 2α + 3β + 4a + 5b + 6c + 7d + 8e + 9f (mod 11). Therefore z ≡ 2(58) + 3(33) + 4(3) + 5(5) + 6(4) + 7(6) + 8(7) + 9(0) ≡ 2(3) + 3(0) + 1 + 3 + 2 + 9 + 1 + 0 = 22 ≡ 0 (mod 11), so X354670(0) is a valid Hong Kong Identity Card number.
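The check can be automated. This sketch follows the worked example above: letters take the values A=10 … Z=35 (so X=33), and a missing second prefix character is given the value 58. These conventions are read off the example itself, not from any official specification.

```python
def hkid_check_digit(prefix, digits):
    """Weighted mod-11 check digit, following the worked example in the text."""
    values = [58] * (2 - len(prefix))                   # pad a one-letter prefix
    values += [ord(c) - ord('A') + 10 for c in prefix]  # A=10 ... X=33 ... Z=35
    values += [int(d) for d in digits]
    weighted = sum(w * v for w, v in zip(range(9, 1, -1), values))
    return (-weighted) % 11       # the z that makes the total divisible by 11

z = hkid_check_digit("X", "354670")
print(z)   # 0, so X354670(0) is valid, matching the example
```

A result of 10 would be written as the letter "A" on a real card; the sketch just returns the residue.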
a1 + a3 + a5 + a7 + a9 + a11 + a13 + 3(a2 + a4 + a6 + a8 + a10 + a12 ) ≡ 0 (mod 10)
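This alternating 1/3 weighting modulo 10 is the same rule used by EAN-13/ISBN-13 barcodes. A quick sketch (the sample number is an illustration, not from the text):

```python
def weighted_mod10_valid(digits):
    """Check a1 + a3 + ... + a13 + 3*(a2 + a4 + ... + a12) == 0 (mod 10)."""
    total = sum(d if i % 2 == 0 else 3 * d for i, d in enumerate(digits))
    return total % 10 == 0

# 4006381333931 is a commonly cited valid EAN-13 number.
print(weighted_mod10_valid([4, 0, 0, 6, 3, 8, 1, 3, 3, 3, 9, 3, 1]))   # True
# Any single-digit change breaks the congruence:
print(weighted_mod10_valid([4, 0, 0, 6, 3, 8, 1, 3, 3, 3, 9, 3, 2]))   # False
```

Because the weights 1 and 3 are both coprime to 10, every single-digit error changes the sum modulo 10 and is therefore detected.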
Chapter 5: Lossless Coding

Theorem 5.1 (Kraft Inequality). A necessary and sufficient condition for the existence of a binary prefix code whose codewords have lengths n_1, n_2, ..., n_L is

  sum_{i=1}^{L} 2^{-n_i} <= 1.
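The inequality is easy to check numerically. A small sketch, with code lengths chosen purely for illustration:

```python
def kraft_sum(lengths):
    """Return sum of 2^(-n_i) for the given codeword lengths."""
    return sum(2.0 ** -n for n in lengths)

# {0, 10, 110, 111} is a binary prefix code with lengths 1, 2, 3, 3:
print(kraft_sum([1, 2, 3, 3]))    # 1.0 -- the inequality holds, with equality

# No binary prefix code can have lengths 1, 2, 2, 2:
print(kraft_sum([1, 2, 2, 2]))    # 1.25 > 1, so the Kraft inequality fails
```

Equality (sum exactly 1) corresponds to a complete code tree: every leaf of the tree is a codeword, as in the first example.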
Kraft theorem
Variable-length coding theorem (Shannon's First Theorem)
Kraft theorem. Question: find a real-time, uniquely decodable code. Method: study the code-partition condition of the code.
Code lengths: l_i. Source alphabet: X_i ∈ {x_1, x_2, ..., x_r}.
How can we code losslessly? Assuming the statistical characteristics of the source can be ignored, the code must satisfy the following condition.
(3) Node ↔ part of a codeword
(4) Terminal node ↔ end of a codeword
(5) Number of branches ↔ code length
(6) Non-full branch tree ↔ variable-length code
(7) Full branch tree ↔ fixed-length code
Real-time uniquely decodable code. Introduction: the concept of a "code tree". Conclusion: present the condition that the real-time
SunSet™ T1 Test Set — T1 Specifications

SPECIFICATIONS

Connectors
Bantam jacks (Eq Tx, Eq Rx, Fac Tx, Fac Rx)
8-pin mini DIN RS232C serial port, DTE

Access
Single Mode
DSX Monitor: 100 Ω
Bridged Monitor: > 1000 Ω
Terminated: 100 Ω
Terminated Loop: 100 Ω
Bridged Loop: > 1000 Ω
DSX Monitor Loop: 100 Ω
Dual Mode
Thru A/B, Split A/B, Split E/F, Loop E/F, Mon E/F

Termination
Thru, Split, Loop: 100 Ω
Mon: > 1000 Ω

Transmitter
Framing: SF-D4, ESF, SLC-96, T1DM
Coding: AMI, B8ZS
Line Build Out (LBO): 0, 7.5, 15 dB
DSX pre-equalization: 0 to 655 ft, 133 ft per step
Clock: internal (1.544 MHz ± 5 ppm), looped, external
Pulse shape to Telcordia TR-TSY-000499; references: G.703, CB113, CB119, CB132, CB143, PUB62508, PUB62411

Transmit Patterns
Repeating: 3 in 24, 1 in 8 (1:7), all 1s, 1 in 16, 55 octet, alt 1010, all 0s, T1-T6, DDS1-DDS6
User programmable pattern, 1 to 2048 bits
Store up to 10 programmable patterns with alphanumeric names
Pseudo random: QRS, PRBS, n = 6, 7, 9, 11, 15, 20, 23
Test pattern inversion
Insert errors: BPV, logic, frame errors; programmable error burst 1 to 9999 counts, or error rate 2 × 10⁻³ to 1 × 10⁻⁹

Receiver
Input sensitivity
Terminate, Bridge: +6 to -36 dB cable loss
DSX MON: -15 to -30 dB, resistive
Coding: AMI, B8ZS, auto
Framing: SF, ESF, SLC-96, T1DM, auto frame
Frequency range: 1542 kHz to 1546 kHz
Auto pattern synchronization
Received pattern sync independent of transmitted pattern
Programmable loss of frame criteria, error averaging interval

Basic Measurements
Summary Measurements
Elapsed time, remaining time, framing, line coding, transmitted pattern, received pattern, BPV count and rate, bit error count and rate, framing bit error count, pulse level (dB), CRC-6 block error count, line frequency, errored second count and percent, severely errored second count and percent, error free second percent, available second percent, unavailable second count and percent
Logical Error Measurements
Bit error count and current rate, average bit error rate since start, bit slips, bit errored seconds and percent, severely bit errored seconds and percent, available seconds and percent, unavailable seconds and percent, degraded minutes count and percent, loss of sync seconds count and percent
Signal Measurements
Signal available seconds count and percent, loss of signal seconds count and percent, low density seconds count, excess 0s seconds count, AIS seconds count, signal unavailable seconds percent
Simplex current: 1 to 150 mA, ± 1 mA ± 5%
Receive bit rate: 1542 to 1546 kbps, ± 1 bps, ± clock source accuracy, external or internal clock
Receive level (volts and dBdsx)
Peak to peak: 60 mV to 15 V, ± 10 mV, ± 5%
Positive pulse: 30 mV to 7.5 V, ± 10 mV, ± 5%
Negative pulse: -30 mV to -7.5 V, ± 10 mV, ± 5%
Line Error Measurements
BPV count and rate (current and average), BPV errored seconds count and percent, BPV SES count and percent, BPV AS count and percent, BPV UAS count and percent, BPV degraded minutes count and percent
Path - Frame Measurements
Frame bit error count and rate (current and average), frame slip count, OOF second count, COFA count, frame synch loss seconds, yellow alarm second count, frame error second count and percent, frame severely errored second count and rate, frame available second count and percent, frame unavailable second count and percent
Path - CRC-6 Measurements
CRC-6 block error count and rate (current and average), CRC-6 errored second count and percent, CRC-6 severely errored second count and percent, CRC-6 available second count and percent, CRC-6 unavailable second count and percent
Frequency Measurements
Moving bar graph of slip rate, received signal frequency, max frequency, min frequency, clock slips, frame slips, max positive wander, max negative wander
Other Measurements
View Received Data
View T1 data in binary, hex, ASCII
Shows data in bytes by time slot
Shows 8 time slots per display page
Captures 256 consecutive time slots as test pattern
Propagation Delay
Measure round trip propagation delay in unit intervals ± 1 UI, with translation to microseconds and one-way distance over cable

Quick Test I and II
2 programmable automated loopback tests that save time when performing standardized acceptance tests
Bridge Tap
Automated transmission and measurement of 21 different patterns to identify possible bridge taps at some point on line

Loopbacks
Loopback Control, In-band
CSU, NIU, 100000
10 programmable user patterns, 1 to 32 bits
Loopback Control, ESF Facility Data Link
Payload, Line, Network
10 programmable user patterns, 1 to 32 bits
Westell & Teltrend Looping Devices Control (SW1010)
Automated looping of Westell and Teltrend line and central office repeaters. Includes SF and ESF modes, arm, loop up/down, loopback query, sequential loopback, power loop query, span power down/up, unblocking.

Voice Frequency Capability
Monitor speaker with volume control
Built-in microphone for talk
View all 24 channel A, B (C, D) bits
Control A, B (C, D) bits (E&M ground/loop start, FXO, FXS, on/off hook, wink)
Generator: 404, 1004, 1804, 2713, 2804 Hz @ 0 dBm and -13 dBm
DTMF dialing, 32 digits, 10 sets preprogrammable speed dial numbers
Programmable tone and interdigital period
Companding law: µ-law
Hitless drop and insert
Programmable idle channel A, B (C, D) bits
Selectable idle channel code, 7F or FF hex

VF Level, Freq & Noise Measurement (SW111)
Generator: 50 to 3950 Hz @ 1 Hz step; +3 to -60 dBm @ 1 dBm step
Level, Freq measurements: 50 to 3950 Hz, +3 dBm to -60 dBm
Noise: 3 kHz flat, C-message, C-notch, S/N

MF/DTMF/DP Dialing, Decoding and Analysis (SW141)
MF/DTMF/DP dialing
Programmable DP %break and interdigital period @ 10 pps
MF/DTMF decode up to 40 received digits. Analyze number, high/low frequencies, high/low levels, twist, tone period, interdigital time.
DP decode up to 40 digits. Analyze number, %break, PPS, interdigital time.

Signaling Analysis
Live: graphical display of A, B (C, D) signaling state changes
Trigger: programmable A, B (C, D) trigger state to start analysis on the opposite side
MFR1: timing analysis of signaling transition states and decoding of dialed digits
MFR1M: modified MFR1 CO switches signaling analysis
MIXTONE: decode a signaling sequence that has both MF and DTMF digits

Fractional T1 (SW105, SW1010)
Error measurements, channel configuration verification
N×64 kbps, N×56 kbps, N = 1 to 24
Sequential, alternating, or random channels
Auto scan and auto configure to any FT1 order
Scan for active channels
Rx and Tx do not need to be same channels
Hitless drop and insert
Programmable idle channel A, B (C, D) bits
Selectable idle channel code, 7F or FF hex

ESF Facility Data Link (SW107, SW1010)
Read and send T1.403 messages on FDL (PRM and BOM)
Automatic HDLC protocol handling
YEL ALM, LLB ACT, LLB DEA, PLB ACT, PLB DEA
AT&T 54016, 24 hr performance report retrieval
T1.403, 24 hour PRM collection per 15 min interval

SLC-96 Data Link (SW107, SW1010)
Send and receive message
WP1, WP1B, NOTE formats
Alarms, switch-to-protect, far end loop
To Telcordia TR-TSY-000008 specifications
SLC-96 FEND loop

CSU/NI Emulation (SW106, SW1010)
Bidirectional (equipment and facility directions)
CSU/NI replacement emulation
Responds to loopback commands, inband and datalink
Graphic indication of incoming signal status in both directions
Simultaneous display of T1 line measurements
Automatic generation of AIS
Loopbacks
Facility: line and payload loopback
Equipment: line loopback
Simultaneous loopbacks in both directions
Local and remote loopback control

Remote Control (SW100)
VT100 emulation with same graphical interface used by test set
Circuit status table provides current & historical information on test set LEDs
Uses test set's serial port at 9600 baud, 8-pin mini DIN
Serial port cannot be connected to printer during remote control

Westell PM NIU and MSS (SW120)
Supports Westell performance monitoring network interface unit and maintenance switch system with ramp
Set/query NIU time and date. Query performance data by hour or all.
Reset performance registers. Read data over ramp line. Perform maintenance switch function for Westell and Teltrend.

Pulse Mask Analysis (SW130)
Scan period: 800 ns
Measurements: pass/fail, ns rise time, ns fall time, ns pulse width, % overshoot, % undershoot
Resolution: 1 ns or 1%, as applicable
Masks: ANSI T1.102, T1.403, AT&T CB119, Pub 62411
Pulse/mask display: test set screen and SS118 printer

DDS Basic Package (SW170)
Choose receive and transmit time slots independently
Test rates: 2.4, 4.8, 9.6, 19.2, 56, 64 kbps
Patterns: 2047, 511, 127, 63, all 1s, all 0s, DDS-1, DDS-2, DDS-3, DDS-4, DDS-5, DDS-6, 8-bit user
Loopbacks: latching, interleaved, CSU, DSU, OCU, DSO-DP, 8-bit user
Measurements: bit errors, bit error rate
Control code send/receive: abnormal, mux out of sync, idle
Access mode: loopback tests require intrusive access to T1

Teleos & Switched 56 Tests (SW144)
Switched 56 call set up: supervision and dialing
Send test patterns: 2047, 511, 127, 63, all 1s, all 0s, FOX, DDS1-6, USER
Bit error, bit error rate measurement
Teleos signaling sequence timing analysis and dial digits decoding

GENERAL
Operating temperature: 0 ˚C to 50 ˚C
Operating humidity: 5% to 90%, noncondensing
Storage temperature: -20 ˚C to 70 ˚C
Size: 2.4" (max) x 4.2" (max) x 10.5"
Weight: 2.7 lb [1.2 kg]
Battery operation time: 2.5 hr nominal
AC operation: 110V/120V @ 60 Hz, or 220V/240V @ 50/60 Hz

ORDERING INFORMATION
Test Set
SS100  SunSet T1 Chassis. Includes battery charger, User's Manual, instrument stand. Software cartridge must be ordered separately. CLEI: T1TUW04HAA, CPR: 674488

Software Options
SW1000  Software T1. Includes basic measurements, loopback control, test patterns send/rcv, bridge tap, propagation delay, quick test. Also includes VF channel capabilities: talk/listen, view/control A, B (C, D), DTMF dialing, send 5 tones at 2 levels. CLEI: T1TUW01HAA, CPR: 674485
SW1010  Software FT1. Includes all Software T1 features and adds: Fractional T1, Teltrend/Westell looping device control, CSU/NIU emulation, ESF/SLC-96 data link control. CLEI: T1TUW02HAA, CPR: 674486
SW100  Remote Control. Graphical, menu-driven VT100 emulation. Includes SS115 & SS122
SW105  Fractional T1. Purchased with SW1000 only
SW106  CSU/NIU Emulation. Purchased with SW1000 only
SW107  ESF & SLC-96 Data Link Send and Receive. Purchased with SW1000 only
SW111  VF Level, Frequency & Noise Measurement
SW120  Westell Maintenance Switch, PM NIU, RAMP. Purchased with SW1010 only
SW130  Pulse Mask Analysis
SW141  MF/DTMF/DP Dialing, Decoding, and Analysis
SW144  Teleos/Northern Switched 56 tests
SW170  Basic DDS Package

Accessories
SS101  Carrying Case
SS104  Cigarette Lighter Battery Charger
SS105  Repeater Extender
SS106  Single Bantam to Single Bantam Cable, 6'
SS107  Dual Bantam to Dual Bantam Cable, 6'
SS108  Single Bantam to Single 310 Cable, 6'
SS109  Single Bantam to Probe Clip Cable, 6'
SS110  Dual Bantam to 15-pin D Connector Cable, Male, 6'
SS111  Dual Bantam to 15-pin D Connector Cable, Female, 6'
SS112  Dual Bantam to 8-position Modular Plug Cable, 6'
SS113A AC Battery Charger, 120VAC
SS113B AC Battery Charger, 110VAC
SS114  SunSet T1 User's Manual
SS115  DIN-8 to RS232C Printer Cable
SS115B DIN-8 to DB-9 Printer Cable
SS116  Instrument Stand
SS117A Printer Paper, 5 rolls, for SS118B/C
SS118B High Capacity Thermal Printer with 110 VAC charger. Includes SS115B.
SS118C High Capacity Thermal Printer with 220 VAC charger. Includes SS115B.
SS121A SunSet AC Charger, 230VAC, 50/60 Cycle, European style connector
SS121B SunSet AC Charger, 220VAC, 50/60 Cycle, 3-prong IEC connector
SS121C SunSet AC Charger, 240VAC, 50/60 Cycle, 3-prong IEC connector
SS122  Null Modem Adapter, DB-25
SS122A Null Modem Adapter, DB-9
SS123A SunSet Jacket
SS125  SunSet T1 Training Tape, English
SS130A Removable SunSet Rack Mount, 19"/23"
SS130B Permanent SunSet Rack Mount, 19"/23"
SS132  Two Single Bantams to 4-position Modular Plug Cable

Note: Specifications subject to change without notice.
© 2001 Sunrise Telecom Incorporated. All rights reserved. Printed in USA.
Code of Practice for the Application of Foamed Lightweight Soil

Contents
1. General provisions
2. Terms and symbols
2.1 Terms
2.2 Symbols
3. Materials and properties
3.1 Materials
3.2 Properties
4. Design
4.1 General provisions
4.2 Performance design
4.3 Structure design
4.4 Subsidiary engineering design
4.5 Design calculation
5. Mix proportion
5.1 General provisions
5.2 Mix proportion calculation
5.3 Mix proportion trial mix
5.4 Mix proportion adjustment
6. Engineering construction
6.1 Pouring preparation
6.2 Pouring
6.3 Subsidiary engineering construction
6.4 Curing
7. Quality inspection and acceptance
7.1 General provisions
7.2 Quality inspection
7.3 Quality acceptance
Appendix A: Test of foaming agent performance
Appendix B: Wet density test
Appendix C: Adaptability test
Appendix D: Flow value test
Appendix E: Air-dry density and saturated density test
Appendix F: Compressive strength and saturated compressive strength test
Appendix G: Tables for quality inspection and acceptance
Explanation of wording in this code
List of normative standards
Descriptive provisions

1. General Provisions
1.0.1 This code is formulated to standardize the design and construction of foamed lightweight soil and to unify quality inspection standards, so as to ensure that foamed lightweight soil fill works are safe and serviceable, technically advanced, and economical.
Low Latency Decoding of EG LDPC Codes

Publication History:– 1. First printing, TR-2005-036, June 2005
Low Latency Decoding of EG LDPC Codes
Juntan Zhang, Jonathan S. Yedidia and Marc P. C. Fossorier
MITSUBISHI ELECTRIC RESEARCH LABORATORIES
This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright © Mitsubishi Electric Research Laboratories, Inc., 2005. 201 Broadway, Cambridge, Massachusetts 02139
Research Statement
Parikshit Gopalan

My research focuses on fundamental algebraic problems such as polynomial reconstruction and interpolation arising from various areas of theoretical computer science. My main algorithmic contributions include the first algorithm for list-decoding a well-known family of codes called Reed-Muller codes [13], and the first algorithms for agnostically learning parity functions [3] and decision trees [11] under the uniform distribution. On the complexity-theoretic side, my contributions include the best-known hardness results for reconstructing low-degree multivariate polynomials from noisy data [12] and the discovery of a connection between representations of Boolean functions by polynomials and communication complexity [2].

1 Introduction

Many important recent developments in theoretical computer science, such as probabilistic proof checking, deterministic primality testing and advancements in algorithmic coding theory, share a common feature: the extensive use of techniques from algebra. My research has centered around the application of these methods to problems in Coding theory, Computational learning, Hardness of approximation and Boolean function complexity. While at first glance these might seem like four research areas that are not immediately related, there are several beautiful connections between them. Perhaps the best illustration of these links is the noisy parity problem, where the goal is to recover a parity function from a corrupted set of evaluations. The seminal Goldreich-Levin algorithm solves a version of this problem; this result initiated the study of list-decoding algorithms for error-correcting codes [5]. An alternate solution is the Kushilevitz-Mansour algorithm [19], which is a crucial component in algorithms for learning decision trees and DNFs [17]. Håstad's ground-breaking work on the hardness of this problem has revolutionized our understanding of inapproximability [16]. All these results rely on insights into the Fourier structure
of Boolean functions. As I illustrate below, my research has contributed to a better understanding of these connections, and yielded progress on some important open problems in these areas.

2 Coding Theory

The broad goal of coding theory is to enable meaningful communication in the presence of noise, by suitably encoding the messages. The natural algorithmic problem associated with this task is that of decoding, or recovering the transmitted message from a corrupted encoding. The last twenty years have witnessed a revolution with the discovery of several powerful decoding algorithms for well-known families of error-correcting codes. A key role has been played by the notion of list-decoding, a relaxation of the classical decoding problem where we are willing to settle for a small list of candidate transmitted messages rather than insisting on a unique answer. This relaxation allows one to break the classical half-the-minimum-distance barrier for decoding error-correcting codes. We now know powerful list-decoding algorithms for several important code families; these algorithms have also made a huge impact on complexity theory [5, 15, 23].

List-Decoding Reed-Muller Codes: In recent work with Klivans and Zuckerman, we give the first such list-decoding algorithm for a well-studied family of codes known as Reed-Muller codes, obtained from low-degree polynomials over the finite field F2 [13]. The highlight of this work is that our algorithm is able to tolerate error-rates which are much higher than what is known as the Johnson bound in coding theory. Our results imply new combinatorial bounds on the error-correcting capability of these codes. While Reed-Muller codes have been studied extensively in both the coding theory and computer science communities, our result is the first to show that they are resilient to remarkably high error-rates. Our algorithm is based on a novel view of the Goldreich-Levin algorithm as a reduction from list-decoding to unique-decoding; our view readily extends to polynomials
of arbitrary degree over any field. Our result complements recent work on the Gowers norm, showing that Reed-Muller codes are testable up to large distances [21].

Hardness of Polynomial Reconstruction: In the polynomial reconstruction problem, one is asked to recover a low-degree polynomial from its evaluations at a set of points, where some of the values could be incorrect. The reconstruction problem is ubiquitous in both coding theory and computational learning. Both the noisy parity problem and the Reed-Muller decoding problem are instances of this problem. In joint work with Khot and Saket, we address the complexity of this problem and establish the first hardness results for multivariate polynomials of arbitrary degree [12]. Previously, the only hardness known was for degree 1, which follows from the celebrated work of Håstad [16]. Our work introduces a powerful new algebraic technique called global folding which allows one to bypass a module called consistency testing that is crucial to most hardness results. I believe this technique will find other applications.

Average-Case Hardness of NP: Algorithmic advances in decoding of error-correcting codes have helped us gain a deeper understanding of the connections between worst-case and average-case complexity [23, 24]. In recent work with Guruswami, we use this paradigm to explore the average-case complexity of problems in NP against algorithms in P [8]. We present the first hardness amplification result in this setting by giving a construction of an error-correcting code where most of the symbols can be recovered correctly from a corrupted codeword by a deterministic algorithm that probes very few locations in the codeword. The novelty of our work is that our decoder is deterministic, whereas previous algorithms for this task were all randomized.

3 Computational Learning

Computational learning aims to understand the algorithmic issues underlying how we learn from examples, and to explore how the complexity of learning is influenced by factors such
as the ability to ask queries and the possibility of incorrect answers. Learning algorithms for a concept class typically rely on understanding the structure of that class, which naturally ties learning to Boolean function complexity. Learning in the presence of noise has several connections to decoding from errors. My work in this area addresses the learnability of basic concept classes such as decision trees, parities and halfspaces.

Learning Decision Trees Agnostically: The problem of learning decision trees is one of the central open problems in computational learning. Decision trees are also a popular hypothesis class in practice. In recent work with Kalai and Klivans, we give a query algorithm for learning decision trees with respect to the uniform distribution on inputs in the agnostic model: given black-box access to an arbitrary Boolean function, our algorithm finds a hypothesis that agrees with it on almost as many inputs as the best decision tree [11]. Equivalently, we can learn decision trees even when the data is corrupted adversarially; this is the first polynomial-time algorithm for learning decision trees in a harsh noise model. Previous decision-tree learning algorithms applied only to the noiseless setting. Our algorithm can be viewed as the agnostic analog of the Kushilevitz-Mansour algorithm [19]. The core of our algorithm is a procedure to implicitly solve a convex optimization problem in high dimensions using approximate gradient projection.

The Noisy Parity Problem: The noisy parity problem has come to be widely regarded as a hard problem. In work with Feldman et al., we present evidence supporting this belief [3]. We show that in the setting of learning from random examples (without queries), several outstanding open problems such as learning juntas, decision trees and DNFs reduce to restricted versions of the problem of learning parities with random noise. Our result shows that, in some sense, noisy parity captures the gap between learning from random examples
and learning with queries, as it is believed to be hard in the former setting and is known to be easy in the latter. On the positive side, we present the first non-trivial algorithm for the noisy parity problem under the uniform distribution in the adversarial noise model. Our result shows that, somewhat surprisingly, adversarial noise is no harder to handle than random noise.

Hardness of Learning Halfspaces: The problem of learning halfspaces is a fundamental problem in computational learning. One could hope to design algorithms that are robust even in the presence of a few incorrectly labeled points. Indeed, such algorithms are known in the setting where the noise is random. In work with Feldman et al., we show that the setting of adversarial errors might be intractable: given a set of points where 99% are correctly labeled by some halfspace, it is NP-hard to find a halfspace that correctly labels even 51% of the points [3].

4 Prime versus Composite Problems

My thesis work focuses on new aspects of an old and famous problem: the difference between primes and composites. Beyond basic problems like primality and factoring, there are many other computational issues that are not yet well understood. For instance, in circuit complexity, we have excellent lower bounds for small-depth circuits with mod 2 gates, but the same problem for circuits with mod 6 gates is wide open. Likewise in combinatorics, set systems where the sizes of the sets need to satisfy certain modular conditions are well studied. Again the prime case is well understood, but little is known for composites. In all these problems, the algebraic techniques that work well in the prime case break down for composites.

Boolean function complexity: Perhaps the simplest class of circuits for which we have been unable to show lower bounds is small-depth circuits with And, Or and Mod m gates where m is composite; indeed this is one of the frontier open problems in circuit complexity. When m is prime, such bounds were proved by Razborov and
Smolensky [20, 22]. One reason for this gap is that we do not fully understand the computational power of polynomials over composites; Barrington et al. were the first to show that such polynomials are surprisingly powerful [1]. In joint work with Bhatnagar and Lipton, we solve an important special case: when the polynomials are symmetric in their variables [2]. We show an equivalence between computing Boolean functions by symmetric polynomials over composites and multi-player communication protocols, which enables us to apply techniques from communication complexity and number theory to this problem. We use these techniques to show tight degree bounds for various classes of functions where no bounds were known previously. Our viewpoint simplifies previously known results in this area, and reveals new connections to well-studied questions about Diophantine equations.

Explicit Ramsey Graphs: A basic open problem regarding polynomials over composites is: can asymmetry in the variables help us compute a symmetric function with low degree? I show a connection between this question and an important open problem in combinatorics, which is to explicitly construct Ramsey graphs, or graphs with no large cliques and independent sets [6]. While good Ramsey graphs are known to exist by probabilistic arguments, explicit constructions have proved elusive. I propose a new algebraic framework for constructing Ramsey graphs and showed how several known constructions can all be derived from this framework in a unified manner. I show that all known constructions rely on symmetric polynomials, and that such constructions cannot yield better Ramsey graphs. Thus the question of symmetry versus asymmetry of variables is precisely the barrier to better constructions by such techniques.

Interpolation over Composites: A basic problem in computational algebra is polynomial interpolation, which is to recover a polynomial from its evaluations. Interpolation and related algorithmic tasks which are easy for primes become much
harder, even intractable, over composites. This difference stems from the fact that over primes, the number of roots of a polynomial is bounded by its degree, but no such theorem holds for composites. In lieu of this theorem I presented an algorithmic bound; I show how to compute a bound on the degree of a polynomial given its zero set [7]. I use this to give the first optimal algorithms for interpolation, learning and zero-testing over composites. These algorithms are based on new structural results about the zeroes of polynomials. These results were subsequently useful in ruling out certain approaches for better Ramsey constructions [6].

5 Other Research Highlights

My other research work spans areas of theoretical computer science ranging from algorithms for massive data sets to computational complexity. I highlight some of this work below.

Data Stream Algorithms: Algorithmic problems arising from complex networks like the Internet typically involve huge volumes of data. This has led to increased interest in highly efficient algorithmic models like sketching and streaming, which can meaningfully deal with such massive data sets. A large body of work on streaming algorithms focuses on estimating how sorted the input is. This is motivated by the realization that sorting the input is intractable in the one-pass data stream model. In joint work with Krauthgamer, Jayram and Kumar, we presented the first sub-linear space data stream algorithms to estimate two well-studied measures of sortedness: the distance from monotonicity (or Ulam distance for permutations), and the length of the Longest Increasing Subsequence, or LIS. In more recent work with Anna Gál, we prove optimal lower bounds for estimating the length of the LIS in the data-stream model [4]. This is established by proving a direct-sum theorem for the communication complexity of a related problem. The novelty of our techniques is the model of communication that they address. As a corollary, we obtain a separation between two models of
communication that are commonly studied in relation to data stream algorithms.

Structural Properties of SAT Solutions: The solution space of random SAT formulae has been studied with a view to better understanding connections between computational hardness and phase transitions from satisfiable to unsatisfiable. Recent algorithmic approaches rely on connectivity properties of the space and break down in the absence of connectivity. In joint work with Kolaitis, Maneva and Papadimitriou, we consider the problem: given a Boolean formula, do its solutions form a connected subset of the hypercube? We classify the worst-case complexity of various connectivity properties of the solution space of SAT formulae in Schaefer's framework [14]. We show that the jump in computational hardness is accompanied by a jump in the diameter of the solution space from linear to exponential.

Complexity of Modular Counting Problems: In joint work with Guruswami and Lipton, we address the complexity of counting the roots of a multivariate polynomial over a finite field F_q modulo some number r [9]. We establish a dichotomy showing that the problem is easy when r is a power of the characteristic of the field and intractable otherwise. Our results give several examples of problems whose decision versions are easy, but whose modular counting versions are hard.

6 Future Research Directions

My broad research goal is to gain a complete understanding of the complexity of problems arising in coding theory, computational learning and related areas; I believe that the right tools for this will come from Boolean function complexity and hardness of approximation. Below I outline some of the research directions I would like to pursue in the future.

List-decoding algorithms have allowed us to break the unique-decoding barrier for error-correcting codes. It is natural to ask if one can perhaps go beyond the list-decoding radius and solve the problem of finding the codeword nearest to a received word at even higher error rates.
On the negative side, we do not currently know any examples of codes where one can do this. But I think that recent results on Reed-Muller codes do offer some hope [13, 21]. Algorithms for solving the nearest codeword problem, if they exist, could also have exciting implications in computational learning. There are concept classes which are well-approximated by low-degree polynomials over finite fields lying just beyond the threshold of what is currently known to be learnable efficiently [20, 22]. Decoding algorithms for Reed-Muller codes that can tolerate very high error rates might present an approach to learning such concept classes.

One of the challenges in algorithmic coding theory is to determine whether known algorithms for list-decoding Reed-Solomon codes [15] and Reed-Muller codes [13, 23] are optimal. This raises both computational and combinatorial questions. I believe that my work with Khot et al. represents a good first step towards understanding the complexity of the decoding/reconstruction problem for multivariate polynomials. Proving similar results for univariate polynomials is an excellent challenge which seems to require new ideas in hardness of approximation.

There is a large body of work proving strong NP-hardness results for problems in computational learning. However, all such results only address the proper learning scenario, where the learning algorithm is restricted to produce a hypothesis from some particular class H, which is typically the same as the concept class C. In contrast, known learning algorithms are mostly improper algorithms which could use more complicated hypotheses. For hardness results that are independent of the hypothesis H used by the algorithm, one currently has to resort to cryptographic assumptions. In ongoing work with Guruswami and Raghavendra, we are investigating the possibility of proving NP-hardness for improper learning.

Finally, I believe that there are several interesting directions to explore in the agnostic learning model. An exciting
insight in this area comes from the work of Kalai et al., who show that L1 regression is a powerful tool for noise-tolerant learning [18]. A powerful paradigm in computational learning is to prove that the concept has some kind of polynomial approximation and then recover the approximation. Algorithms based on L1 regression require a weaker polynomial approximation in comparison with previous algorithms (which use L2 regression), but use more powerful machinery for the recovery step. Similar ideas might allow us to extend the boundaries of efficient learning even in the noiseless model; this is a possibility I am currently exploring.

Having worked in areas ranging from data stream algorithms to Boolean function complexity, I view myself as both an algorithm designer and a complexity theorist. I have often found that working on one aspect of a problem gives insights into the other; indeed much of my work has originated from such insights ([12] and [13], [10] and [4], [6] and [7]). I find that this is increasingly the case across several areas in theoretical computer science. My aim is to maintain this balance between upper and lower bounds in my future work.

References

[1] D. A. Barrington, R. Beigel, and S. Rudich. Representing Boolean functions as polynomials modulo composite numbers. Computational Complexity, 4:367-382, 1994.
[2] N. Bhatnagar, P. Gopalan, and R. J. Lipton. Symmetric polynomials over Z_m and simultaneous communication protocols. Journal of Computer & System Sciences (special issue for FOCS'03), 72(2):450-459, 2003.
[3] V. Feldman, P. Gopalan, S. Khot, and A. K. Ponnuswami. New results for learning noisy parities and halfspaces. In Proc. 47th IEEE Symp. on Foundations of Computer Science (FOCS'06), 2006.
[4] A. Gál and P. Gopalan. Lower bounds on streaming algorithms for approximating the length of the longest increasing subsequence. In Proc. 48th IEEE Symp. on Foundations of Computer Science (FOCS'07), 2007.
[5] O. Goldreich and L. Levin. A hard-core predicate for all one-way functions. In Proc. 21st ACM Symposium on the Theory of
Computing (STOC'89), pages 25-32, 1989.
[6] P. Gopalan. Constructing Ramsey graphs from Boolean function representations. In Proc. 21st IEEE Symposium on Computational Complexity (CCC'06), 2006.
[7] P. Gopalan. Query-efficient algorithms for polynomial interpolation over composites. In Proc. 17th ACM-SIAM Symposium on Discrete Algorithms (SODA'06), 2006.
[8] P. Gopalan and V. Guruswami. Deterministic hardness amplification via local GMD decoding. Submitted to 23rd IEEE Symp. on Computational Complexity (CCC'08), 2008.
[9] P. Gopalan, V. Guruswami, and R. J. Lipton. Algorithms for modular counting of roots of multivariate polynomials. In Latin American Symposium on Theoretical Informatics (LATIN'06), 2006.
[10] P. Gopalan, T. S. Jayram, R. Krauthgamer, and R. Kumar. Estimating the sortedness of a data stream. In Proc. 18th ACM-SIAM Symposium on Discrete Algorithms (SODA'07), 2007.
[11] P. Gopalan, A. T. Kalai, and A. R. Klivans. Agnostically learning decision trees. In Proc. 40th ACM Symp. on Theory of Computing (STOC'08), 2008.
[12] P. Gopalan, S. Khot, and R. Saket. Hardness of reconstructing multivariate polynomials over finite fields. In Proc. 48th IEEE Symp. on Foundations of Computer Science (FOCS'07), 2007.
[13] P. Gopalan, A. R. Klivans, and D. Zuckerman. List-decoding Reed-Muller codes over small fields. In Proc. 40th ACM Symp. on Theory of Computing (STOC'08), 2008.
[14] P. Gopalan, P. G. Kolaitis, E. N. Maneva, and C. H. Papadimitriou. Computing the connectivity properties of the satisfiability solution space. In Proc. 33rd Intl. Colloquium on Automata, Languages and Programming (ICALP'06), 2006.
[15] V. Guruswami and M. Sudan. Improved decoding of Reed-Solomon and Algebraic-Geometric codes. IEEE Transactions on Information Theory, 45(6):1757-1767, 1999.
[16] J. Håstad. Some optimal inapproximability results. J. ACM, 48(4):798-859, 2001.
[17] J. Jackson. An efficient membership-query algorithm for learning DNF with respect to the uniform distribution. Journal of Computer and System Sciences, 55:414-440, 1997.
[18] A. T. Kalai, A. R. Klivans, Y. Mansour, and R. A. Servedio. Agnostically learning halfspaces. In Proc. 46th IEEE Symp. on Foundations of Computer Science, pages 11-20, 2005.
[19] E. Kushilevitz and Y. Mansour. Learning decision trees using the Fourier spectrum. SIAM Journal on Computing, 22(6):1331-1348, 1993.
[20] A. Razborov. Lower bounds for the size of circuits of bounded depth with basis {∧, ⊕}. Mathematical Notes of the Academy of Sciences of the USSR, (41):333-338, 1987.
[21] A. Samorodnitsky. Low-degree tests at large distances. In Proc. 39th ACM Symposium on the Theory of Computing (STOC'07), pages 506-515, 2007.
[22] R. Smolensky. Algebraic methods in the theory of lower bounds for Boolean circuit complexity. In Proc. 19th Annual ACM Symposium on Theory of Computing (STOC'87), pages 77-82, 1987.
[23] M. Sudan, L. Trevisan, and S. P. Vadhan. Pseudorandom generators without the XOR lemma. J. Comput. Syst. Sci., 62(2):236-266, 2001.
[24] L. Trevisan. List-decoding using the XOR lemma. In Proc. 44th IEEE Symposium on Foundations of Computer Science (FOCS'03), pages 126-135, 2003.
The Decision Reliability of MAP, Log-MAP, Max-Log-MAP and SOVA Algorithms for Turbo Codes

Lucian Andrei Perişoară and Rodica Stoian

Abstract—In this paper, we study the reliability of decisions of the MAP, Log-MAP, Max-Log-MAP and SOVA decoding algorithms for turbo codes, in terms of the a priori information, a posteriori information, extrinsic information and channel reliability. We also analyze how important an accurate estimate of the channel reliability factor is to the good performance of the iterative turbo decoder. The simulations are made for a parallel concatenation of two recursive systematic convolutional codes with a block interleaver at the transmitter, an AWGN channel, and iterative decoding with the mentioned algorithms at the receiver.

Keywords—Convolutional Turbo Codes, Channel Reliability, Decision Reliability, Extrinsic Information, Iterative Decoding.

I. INTRODUCTION

In communication systems, like cellular, satellite and computer fields, the information is represented as a sequence of binary digits. The binary message is modulated to an analog signal and transmitted over a communication channel affected by noise that corrupts the transmitted signal. Channel coding is used to protect the information from noise and to reduce the number of error bits.

One of the most used channel codes are convolutional codes, with the decoding strategy based on the Viterbi algorithm. The advantages of convolutional codes are used in Turbo Codes (TC), which can achieve performances within 2 dB of channel capacity [1]. These codes are a parallel concatenation of two Recursive Systematic Convolutional (RSC) codes separated by an interleaver.
The performance of turbo codes is due to the parallel concatenation of the component codes, the interleaver schemes and the iterative decoding using Soft Input Soft Output (SISO) algorithms [2], [3].

In this paper we study the decision reliability problem for turbo coding schemes in the case of two different decoding strategies: the Maximum A Posteriori (MAP) algorithm and the Soft Output Viterbi Algorithm (SOVA). For the MAP algorithm we also consider two improved versions, named the Log-MAP and Max-Log-MAP algorithms. The first one is a simplified algorithm which offers the same optimal performance with a reasonable complexity. The second one and the SOVA are less complex again, but give a slightly degraded performance.

The paper is organized as follows. In Section II, the turbo encoder is presented. In Section III, the turbo decoder is explained in detail, presenting firstly the iterative decoding principle (turbo principle) and specifying the concepts of a priori information, a posteriori information, extrinsic information, channel reliability and source reliability. Then, we review the MAP, Log-MAP, Max-Log-MAP and SOVA decoding algorithms, for which we discuss the decision reliability. Section IV analyzes the influence of the channel reliability factor on decoding performance for the mentioned decoding algorithms. Section V presents some simulation results which we obtained.

Manuscript received December 10, 2008. This work was supported in part by the Romanian National University Research Council (CNCSIS) under the Grant type TD (young doctoral students), no. 24. L. A. Perişoară is with the Applied Electronics and Information Engineering Department, Politehnica University of Bucharest, Romania (e-mail: lucian@orfeu.pub.ro). R. Stoian is with the Applied Electronics and Information Engineering Department, Politehnica University of Bucharest, Romania (e-mail: rodica@orfeu.pub.ro).

II.
THE TURBO CODING SCHEME

The turbo encoder can use two different or identical Recursive Systematic Convolutional (RSC) codes, connected in parallel, see Fig. 1. The first encoder operates on the input bits, represented by the frame u, in their original order, while the second encoder operates on the input bits which are permuted by the interleaver, the frame u' [4]. The output of the turbo encoder is represented by the frame:

$$\mathbf{v} = (\mathbf{u}, \mathbf{c}_1, \mathbf{c}_2) = (u_1, c_{1,1}, c_{2,1},\ u_2, c_{1,2}, c_{2,2},\ \ldots,\ u_k, c_{1,k}, c_{2,k}), \qquad(1)$$

where the frame c1 is the output of the first RSC and the frame c2 is the output of the second RSC. If the input frame u is of length k and the output frame x is of length n, then the encoder rate is R = k/n.

Fig. 1. The turbo encoder with rate 1/3.

For block encoding, data is segmented into non-overlapping blocks of length k, with each block encoded (and decoded) independently. This scheme imposes the use of a block interleaver, with the constraint that the RSCs must begin in the same state for each new block. This requires either trellis termination or trellis truncation. Trellis termination needs appending extra symbols (usually named tail bits) to the input frame to ensure that the shift registers of the constituent RSC encoders start and end at the same zero state. If the encoder has code rate 1/3, then it maps k data bits into 3k coded bits plus 3m tail bits. Trellis truncation simply involves resetting the state of the RSCs for each new block.

The interleaver used for parallel concatenation is a device that permutes coordinates either on a block basis (a generalized "block" interleaver) or on a sliding window basis (a generalized "convolutional" interleaver).
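As an illustration of the parallel concatenation just described, here is a minimal Python sketch of a rate-1/3 turbo encoder. The constituent code and interleaver are assumptions for the example: a memory-2 RSC with feedback polynomial 1 + D + D^2 and feedforward polynomial 1 + D^2, and a caller-supplied permutation; the paper's own generators are specified in Section V.

```python
def rsc_parity(bits):
    # Memory-2 RSC: feedback 1 + D + D^2, feedforward 1 + D^2 (assumed).
    s1, s2 = 0, 0                 # shift register, zero initial state
    parity = []
    for u in bits:
        a = u ^ s1 ^ s2           # feedback (recursive) bit a_j
        parity.append(a ^ s2)     # feedforward taps 1 and D^2
        s1, s2 = a, s1
    return parity

def turbo_encode(u, perm):
    # Rate 1/3: systematic stream plus two parity streams. Eq. (1) would
    # interleave them symbol by symbol; the streams are kept separate here.
    p1 = rsc_parity(u)
    p2 = rsc_parity([u[i] for i in perm])   # second RSC sees permuted bits
    return u, p1, p2
```

For an input frame of k bits the sketch emits 3k bits (no tail bits are appended), i.e. rate 1/3 before termination overhead.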
The interleaver ensures that the set of code sequences generated by the turbo code has nice weight properties, which reduces the probability that the decoder will mistake one codeword for another. The output codeword $\mathbf{v}=(\mathbf{u},\mathbf{c}_1,\mathbf{c}_2)$ is then modulated, for example with Binary Phase Shift Keying (BPSK), resulting in the sequence $\mathbf{x}=(\mathbf{x}^s,\mathbf{x}^{p1},\mathbf{x}^{p2})$, which is transmitted over an Additive White Gaussian Noise (AWGN) channel.

It is known that turbo codes are the best practical codes due to their performance at low SNR. One reason for their better performance is that turbo codes produce high weight code words [4]. For example, if the input sequence u is originally low weight, the systematic u and parity c1 outputs may produce a low weight codeword. However, the parity output c2 is less likely to be a low weight codeword due to the interleaver in front of it. The interleaver shuffles the input sequence u in such a way that, when introduced to the second encoder, it is more likely to produce a high weight codeword. This is ideal for the code, because high weight code words result in better decoder performance.

III. THE TURBO DECODING SCHEME

Let $\mathbf{y}=(\mathbf{y}^s,\mathbf{y}^{p1},\mathbf{y}^{p2})$ be the received sequence of length n, where the vector $\mathbf{y}^s$ is formed only by the received information symbols $y_j^s$, and $\mathbf{y}^{p1}=(y_1^{p1},\ldots,y_k^{p1})$ and $\mathbf{y}^{p2}=(y_1^{p2},\ldots,y_k^{p2})$ by the received parity symbols. These three streams are applied to the input of the turbo decoder presented in Fig. 2.

At time j, decoder 1, using the partial received information $(y_j^s, y_j^{p1})$, makes its decision and outputs the a posteriori information $L^+(x_j^s)$. Then the extrinsic information is computed: $L^e(x_j^s) = L^+(x_j^s) - L^-(x_j^s) - L_c y_j^s$. Decoder 2 makes its decision based on the extrinsic information $L^e(x_j^s)$ from decoder 1 and the received information $(y_j^{s\prime}, y_j^{p2})$.
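The extrinsic-information exchange between the two decoders can be sketched as a loop. Here `siso` is a hypothetical stand-in for any component decoder (MAP, Log-MAP, Max-Log-MAP or SOVA): it takes Lc-scaled systematic and parity channel values plus a priori LLRs and returns a posteriori LLRs. The function names and signature are illustrative, not from the paper.

```python
def turbo_iterate(ys, yp1, yp2, siso, perm, n_iter=8):
    # ys, yp1, yp2: channel LLRs (already multiplied by Lc)
    k = len(ys)
    inv = [0] * k
    for i, p in enumerate(perm):
        inv[p] = i                          # inverse permutation
    apriori = [0.0] * k                     # first iteration: no a priori info
    for _ in range(n_iter):
        post1 = siso(ys, yp1, apriori)
        # extrinsic of decoder 1: a posteriori minus a priori minus channel term
        ext1 = [post1[j] - apriori[j] - ys[j] for j in range(k)]
        ys_i = [ys[p] for p in perm]        # interleaved systematic values
        post2 = siso(ys_i, yp2, [ext1[p] for p in perm])
        ext2 = [post2[i] - ext1[perm[i]] - ys_i[i] for i in range(k)]
        apriori = [ext2[inv[j]] for j in range(k)]   # de-interleave
    return [post2[inv[j]] for j in range(k)]         # final a posteriori LLRs
```

Hard decisions are then taken from the signs of the returned LLRs, as described below.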
The term $L^+(x_j^{s\prime})$ is the a posteriori information derived from decoder 2 and used by decoder 1 as a priori information about the received sequence, noted with $L^-(x_j^s)$. Now the second iteration can begin, and the first decoder decodes the same channel symbols, but now with additional information about the value of the input symbols, provided by the second decoder in the first iteration. After some iterations, the algorithm converges and the extrinsic information values remain the same. Now the decision about the message bits $u_j$ is made based on the a posteriori values $L^+(x_j^s)$.

Fig. 2. The turbo decoder.

Each constituent decoder operates based on the Logarithm Likelihood Ratio (LLR).

A. The Decision Reliability of MAP Decoder

Bahl, Cocke, Jelinek and Raviv proposed the Maximum A Posteriori (MAP) decoding algorithm for convolutional codes in 1974 [1]. The iterative decoder developed by Berrou et al. [5] in 1993 has received greatly increased attention. In their paper, they considered the iterative decoding of two RSC codes concatenated in parallel through a non-uniform interleaver, and the MAP algorithm was modified to minimize the sequence error probability instead of the bit error probability. Because of its increased complexity, the MAP algorithm was simplified in [6] and the optimal MAP algorithm called the Log-MAP algorithm was developed.

The LLR of a transmitted bit is defined as [2]:

$$L(x_j^s) \stackrel{def}{=} \log\frac{P(x_j^s=+1)}{P(x_j^s=-1)},$$

where the sign of the LLR $L(x_j^s)$ indicates whether the bit $x_j^s$ is more likely to be +1 or -1, and the magnitude of the LLR gives an indication of the correct value of $x_j^s$. The term $L^-(x_j^s)$ is defined as the a priori information about $x_j^s$.

In channel coding theory we are interested in the probability that $x_j^s = \pm 1$, based or conditioned on some received sequence $y_j^s$.
Hence, we use the conditional LLR:

$$L^+(x_j^s) \stackrel{def}{=} L(x_j^s|y_j^s) = \log\frac{P(x_j^s=+1\,|\,y_j^s)}{P(x_j^s=-1\,|\,y_j^s)}. \qquad(2)$$

The conditional probabilities $P(x_j^s=\pm 1|y_j^s)$ are the a posteriori probabilities of the decoded bit $x_j^s$, and $L^+(x_j^s)$ is the a posteriori information about $x_j^s$: the information that the decoder gives us, including the received frame, the a priori information for the systematic symbols $y_j^s$ and the a priori information for the symbol $x_j^s$. It is the output of the MAP algorithm. In addition, we will use the conditional LLR $L(y_j^s|x_j^s)$, based on the probability that the receiver's output would be $y_j^s$ when the transmitted bit $x_j^s$ was either +1 or -1:

$$L(y_j^s|x_j^s) \stackrel{def}{=} \log\frac{P(y_j^s|x_j^s=+1)}{P(y_j^s|x_j^s=-1)}. \qquad(3)$$

For an AWGN channel using BPSK modulation, we can write the conditional probability density functions [7]:

$$P(y_j^s|x_j^s=\pm 1) = \exp\left[-\frac{E_b}{N_0}\left(y_j^s \mp a\right)^2\right], \qquad(4)$$

where $E_b$ is the transmitted energy per bit, $a$ is the fading amplitude and $N_0/2$ is the noise variance. We can rewrite (3) as follows:

$$L(y_j^s|x_j^s) = -\frac{E_b}{N_0}\left[(y_j^s-a)^2-(y_j^s+a)^2\right] = 4a\frac{E_b}{N_0}\,y_j^s \stackrel{not.}{=} L_c\,y_j^s, \qquad(5)$$

where $a$ is the fading amplitude and $N_0$ is the noise power. For nonfading AWGN channels $a = 1$ and $L_c = 4E_b/N_0$. The ratio $E_b/N_0$ is defined as the Signal to Noise Ratio (SNR) of the channel.

The extrinsic information can be computed as [1], [2], [9]:

$$L^e(x_j^s) = \log\frac{P(x_j^s=+1|y_j^s)}{P(x_j^s=-1|y_j^s)} - \log\frac{P(x_j^s=+1)}{P(x_j^s=-1)} - \log\frac{P(y_j^s|x_j^s=+1)}{P(y_j^s|x_j^s=-1)} = L^+(x_j^s) - L^-(x_j^s) - L_c y_j^s. \qquad(6)$$

The a posteriori information defined in (2) can be written as the following [1], [10]:

$$L^+(x_j^s) = \log\frac{\sum^+ \alpha_{j-1}(s')\,\gamma_j^e(s',s)\,\beta_j(s)}{\sum^- \alpha_{j-1}(s')\,\gamma_j^e(s',s)\,\beta_j(s)}, \qquad(7)$$

where $\sum^+$ is the summation over all possible transition branch pairs $(s',s)$ in the trellis, at time j, given the transmitted symbol $x_j^s = +1$.
Analogously, $\sum^-$ is for the transmitted symbol $x_j^s = -1$. The forward and backward terms, represented in Fig. 3 as transitions between two consecutive states of the trellis, can be computed recursively as follows [7], [10], [11]:

$$\alpha_j(s) = \sum_{s'} \alpha_{j-1}(s')\,\gamma_j(s',s), \qquad(8)$$

$$\beta_{j-1}(s') = \sum_{s} \beta_j(s)\,\gamma_j(s',s). \qquad(9)$$

For systematic codes, which is our case, the branch transition probabilities $\gamma_j(s',s)$ are given by the relation:

$$\gamma_j(s',s) = \exp\left[\frac{1}{2}L^-(x_j^s)\,x_j^s + \frac{1}{2}L_c\,x_j^s y_j^s\right]\cdot\gamma_j^e(s',s), \qquad(10)$$

where:

$$\gamma_j^e(s',s) = \exp\left[\frac{1}{2}L_c\,x_j^{p1} y_j^{p1} + \frac{1}{2}L_c\,x_j^{p2} y_j^{p2}\right]. \qquad(11)$$

At each iteration and for each frame y, $L^+(x_j^s)$ is computed at the output of the second decoder and the decision is made, symbol by symbol, $j = 1\ldots k$, based on the sign of $L^+(x_j^s)$, the original information bit $u_j$ being estimated as [2], [3]:

$$\hat{u}_j = \mathrm{sign}\left\{L^+(x_j^s)\right\}. \qquad(12)$$

In the iterative decoding procedure, the extrinsic information $L^e(x_j^s)$ is permuted by the interleaver and becomes the a priori information $L^-(x_j^s)$ for the next decoder.

B. The Decision Reliability of Max-Log-MAP Decoder

The MAP algorithm as described in the previous section is much more complex than the Viterbi algorithm and, with hard decision outputs, performs almost identically to it. Therefore, for almost 20 years it was largely ignored. However, its application in turbo codes renewed interest in this algorithm. Its complexity can be dramatically reduced, without affecting its performance, by using the sub-optimal Max-Log-MAP algorithm proposed in [12]. This technique simplifies the MAP algorithm by transferring the recursions into the log domain and invoking the approximation:

$$\ln\left(\sum_i e^{x_i}\right) \approx \max_i\,(x_i), \qquad(13)$$

where $\max_i(x_i)$ means the maximum value of $x_i$.
If we note:

$$A_j(s) = \ln\alpha_j(s), \qquad(14)$$

$$B_j(s) = \ln\beta_j(s), \qquad(15)$$

and:

$$G_j(s',s) = \ln\gamma_j(s',s), \qquad(16)$$

then the equations (8), (9) and (10) can be written as:

$$A_j(s) = \ln\alpha_j(s) = \ln\sum_{s'}\exp\left(A_{j-1}(s')+G_j(s',s)\right) \approx \max_{s'}\left(A_{j-1}(s')+G_j(s',s)\right), \qquad(17)$$

$$B_{j-1}(s') = \ln\beta_{j-1}(s') = \ln\sum_{s}\exp\left(B_j(s)+G_j(s',s)\right) \approx \max_{s}\left(B_j(s)+G_j(s',s)\right), \qquad(18)$$

$$G_j(s',s) = C + \frac{1}{2}x_j^s L^-(x_j^s) + \frac{1}{2}L_c\,x_j^s y_j^s. \qquad(19)$$

Fig. 3. Trellis state transitions for $\alpha_j(s)$ and $\beta_{j-1}(s')$.

Finally, the a posteriori LLR $L^+(x_j^s)$ which the Max-Log-MAP algorithm calculates is:

$$L^+(x_j^s) \approx \max_{(s',s),\,u_j=+1}\left(A_{j-1}(s')+G_j(s',s)+B_j(s)\right) - \max_{(s',s),\,u_j=-1}\left(A_{j-1}(s')+G_j(s',s)+B_j(s)\right). \qquad(20)$$

In [12] and [13] the authors show that the complexity of the Max-Log-MAP algorithm is bigger than two times that of a classical Viterbi algorithm. Unfortunately, the storage requirements are much greater for the Max-Log-MAP algorithm, due to the need to store both the forward and backward recursively calculated metrics $A_j(s)$ and $B_j(s)$ before the $L^+(x_j^s)$ values can be calculated.

C. The Decision Reliability of Log-MAP Decoder

The Max-Log-MAP algorithm gives a slight degradation in performance compared to the MAP algorithm, due to the approximation of (13). When used for the iterative decoding of turbo codes, Robertson found this degradation to result in a drop in performance of about 0.35 dB [12]. However, the approximation of (13) can be made exact by using the Jacobian logarithm:

$$\ln\left(e^{x_1}+e^{x_2}\right) = \max(x_1,x_2) + \ln\left(1+e^{-|x_1-x_2|}\right) = \max(x_1,x_2) + f\left(|x_1-x_2|\right) = g(x_1,x_2), \qquad(21)$$

where $f(\delta)$ can be thought of as a correction term. The maximization in (17) and (18) is then completed by the correction term $f(\delta)$ in (21). This means that the exact, rather than approximate, values of $A_j(s)$ and $B_j(s)$ are calculated. For binary trellises, the maximization will be done only for two terms.
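The Jacobian logarithm and its table-driven approximation can be illustrated with a minimal sketch. The eight-entry table mirrors the one Robertson et al. describe, but the particular grid spacing chosen here is an assumption.

```python
import math

def max_star(x1, x2):
    # Exact Jacobian logarithm, Eq. (21): returns ln(e^x1 + e^x2)
    return max(x1, x2) + math.log(1.0 + math.exp(-abs(x1 - x2)))

# Eight stored correction values f(d) = ln(1 + e^-d) on [0, 5) (assumed grid)
STEP = 5.0 / 8
TABLE = [math.log(1.0 + math.exp(-(i + 0.5) * STEP)) for i in range(8)]

def max_star_lut(x1, x2):
    # Look-up-table version: max plus a quantized correction term
    d = abs(x1 - x2)
    f = TABLE[int(d / STEP)] if d < 5.0 else 0.0
    return max(x1, x2) + f
```

Nesting `max_star` as in (22) reproduces ln Σ e^{x_i} exactly, while replacing it with a plain `max` gives the Max-Log-MAP approximation (13).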
Therefore, we can correct the approximations in (17) and (18) by adding the term $f(\delta)$, where $\delta$ is the magnitude of the difference between the metrics of the two merging paths. This is the basis of the Log-MAP algorithm proposed by Robertson, Villebrun and Hoeher in [12]. For more than two terms we must generalize the previous equation, by nesting the $g(x_1,x_2)$ operations as follows:

$$\ln\sum_{i=1}^{n} e^{x_i} = g\left(x_n,\,g\left(x_{n-1},\,\ldots,\,g\left(x_3,\,g(x_2,x_1)\right)\right)\right). \qquad(22)$$

The correction term $f(\delta)$ need not be computed for every value of $\delta$, but can instead be stored in a look-up table. In [12], Robertson found that such a look-up table need contain only eight values of $\delta$, ranging between 0 and 5. This means that the Log-MAP algorithm is only slightly more complex than the Max-Log-MAP algorithm, but it gives exactly the same performance as the MAP algorithm. Therefore, it is a very attractive algorithm to use in the component decoders of an iterative turbo decoder.

D. The Decision Reliability of SOVA Decoder

The MAP algorithm has a high computational complexity for providing the Soft Input Soft Output (SISO) decoding. However, we easily obtain the optimal a posteriori probabilities for each decoded symbol. The Viterbi algorithm provides the Maximum Likelihood (ML) decoding for convolutional codes, with optimal sequence estimation. The conventional Viterbi decoder has two main drawbacks for a serial decoding scheme: the inner Viterbi decoder produces bursts of error bits and a hard decision output, which degrades the performance of the outer Viterbi decoder [3]. Hagenauer and Hoeher modified the classical Viterbi algorithm and provided a substantially less complex and suboptimal alternative in their Soft Output Viterbi Algorithm (SOVA). The performance improvement is obtained if the Viterbi decoders are able to produce reliability values or soft outputs by using a modified metric [14].
These reliability values are passed on to the subsequent Viterbi decoders as a priori information.

In soft decision decoding, the receiver doesn't assign a zero or a one to each received symbol from the AWGN channel, but uses multi-bit quantized values for the received sequence y, because the channel alphabet is greater than the source alphabet [3]. In this case, the metric derived from the Maximum Likelihood principle is used instead of the Hamming distance. For an AWGN channel, soft decision decoding produces a gain of 2-3 dB over hard decision decoding, and an eight-level quantization offers enough performance in comparison with infinite bit quantization [7].

The original Viterbi algorithm searches for an information sequence u that maximizes the a posteriori probability $P(\mathbf{s}|\mathbf{y})$, s being the state sequence generated by the message u. Using the Bayes theorem, and taking into account that the received sequence y is fixed for the metric computation and can be discarded, the maximization of $P(\mathbf{s}|\mathbf{y})$ is:

$$\max_{\mathbf{u}}\left\{P(\mathbf{s}|\mathbf{y})\right\} = \max_{\mathbf{u}}\left\{P(\mathbf{y}|\mathbf{s})\,P(\mathbf{s})\right\}. \qquad(23)$$

For a systematic code, this relation can be expanded to:

$$\max_{\mathbf{u}}\left\{\prod_{j=1}^{k} P\left((y_j^s,y_j^{p1},y_j^{p2})\,|\,s_{j-1},s_j\right)P(s_j)\right\}. \qquad(24)$$

Taking into account that:

$$P\left((y_j^s,y_j^{p1},y_j^{p2})\,|\,s_{j-1},s_j\right) = P(y_j^s|x_j^s)\cdot P(y_j^{p1}|x_j^{p1})\cdot P(y_j^{p2}|x_j^{p2}), \qquad(25)$$

where $(s_{j-1},s_j)$ denotes the transition between the states at time j-1 and the states at time j, the SOVA metric is obtained from (24) as [15]:

$$M_j = M_{j-1} + \sum x_j^*\,\log\frac{P(y_j^*|x_j^*=+1)}{P(y_j^*|x_j^*=-1)} + u_j\,\log\frac{P(u_j=1)}{P(u_j=0)}, \qquad(26)$$

where $x_j^* = (u_j, c_{1,j}, c_{2,j})$ is the RSC output code word at time j, at the channel input, and $y_j^* = (y_j^s, y_j^{p1}, y_j^{p2})$ is the channel output.
The summation is made for each pair of information symbols $(u_j, y_j^s)$ and for each pair of parity symbols $(c_{1,j}, y_j^{p1})$ and $(c_{2,j}, y_j^{p2})$.

According to [14] and [7], the relation (26) can be reduced to:

$$M_j = M_{j-1} + L_c\,\mathbf{x}_j^*\cdot\mathbf{y}_j^* + u_j\,L(u_j), \qquad(27)$$

where the source reliability $L(u_j)$, defined in (26), is the log-likelihood ratio of the binary symbol $u_j$. The sign of $L(u_j)$ is the hard decision of $u_j$ and the magnitude of $L(u_j)$ is the decision reliability.

According to [10], the SOVA metric includes values from the past metric $M_{j-1}$, the channel reliability $L_c$ and the source reliability $L(u_j)$, as an a priori value. If the channel is very good, the second term in (27) is greater than the third term and the decoding relies on the received channel values. If the channel is very bad, the decoding relies on the a priori information $L(u_j)$.

If $M_j^1$, $M_j^2$ are the two metrics of the survivor path and the concurrent path in the trellis at time j, then the metric difference is defined as [7]:

$$\Delta_j = \frac{1}{2}\left(M_j^1 - M_j^2\right) \ge 0. \qquad(28)$$

The probability of path m at time j is related to its metric as:

$$P(\text{path } m) = P(\mathbf{s}_j^m) = \exp\left(M_j^m/2\right), \qquad(29)$$

where $\mathbf{s}_j^m$ is a state vector and $M_j^m$ is the metric. The probability of choosing the survivor path is:

$$P(\text{correct}) = \frac{P(\text{path } 1)}{P(\text{path } 1)+P(\text{path } 2)} = \frac{e^{\Delta_j}}{1+e^{\Delta_j}}. \qquad(30)$$

The reliability of this path decision is calculated as:

$$\log\frac{P(\text{correct})}{1-P(\text{correct})} = \Delta_j. \qquad(31)$$

The reliability values along the survivor path, for a particular node at time j, are denoted $\Delta_j^d$, where d is the distance from the current node at time j. If the survivor path bit at distance d is the same as the associated bit on the competing path, then there would be no error if the competing path were chosen, and the reliability value remains unchanged. To improve the reliability values, an updating process must be used (32), so the "soft" value of a decision symbol is approximated as:

$$L(u'_{j-d}) \approx u'_{j-d}\cdot\min_{i=0\ldots d}\,\Delta_j^i. \qquad(33)$$
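The path-reliability relations (30), (31) and the traceback update (33) can be sketched numerically; the function names below are illustrative.

```python
import math

def p_correct(delta):
    # Eq. (30): probability that the survivor-path decision was correct
    return math.exp(delta) / (1.0 + math.exp(delta))

def decision_reliability(p):
    # Eq. (31): log-likelihood reliability of that decision; inverts (30)
    return math.log(p / (1.0 - p))

def updated_soft_value(u, deltas):
    # Eq. (33): a decided bit u (+1/-1) is only as reliable as the weakest
    # metric difference among the merges whose competing path disagrees on it
    return u * min(deltas)
```

A metric difference of 0 gives P(correct) = 1/2 (a coin flip), while a large difference drives P(correct) toward 1.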
The SOVA algorithm described in this section is the least complex of all the SISO decoders discussed here. In [12], Robertson shows that the SOVA algorithm is about half as complex as the Max-Log-MAP algorithm. However, the SOVA algorithm is also the least accurate of the algorithms described in this section and, when used in an iterative turbo decoder, performs about 0.6 dB worse than a decoder using the MAP algorithm. If we represent the outputs of the SOVA algorithm, they are significantly more noisy than those from the MAP algorithm, so an increased number of decoding iterations must be used for SOVA to obtain the same performance as for the MAP algorithm. The same results are reported for the iterative decoding (turbo decoding) of turbo product codes, which are based on two concatenated Hamming block codes rather than on convolutional codes [19].

IV. THE INFLUENCE OF L_C ON DECODING PERFORMANCE

In this section we analyze how important an accurate estimate of the channel reliability factor $L_c$ is to the good performance of an iterative turbo decoder which uses the MAP, SOVA, Max-Log-MAP and Log-MAP algorithms.

In the MAP algorithm, the channel inputs and the a priori information are used to calculate the transition probabilities from one state to another, which are then used to calculate the forward and backward recursion terms [2], [8]. Finally, the a posteriori information $L^+(x_j^s)$ is computed and the decision about the original message is made based on it. In the iterative decoding with the MAP algorithm, the channel reliability is calculated from the received channel values. At the first iteration, decoder 1 has no a priori information available ($L^-(x_j^s)$ is zero) and the output of the algorithm is calculated based on channel values.
If an incorrect value of $L_c$ is used, the decoder will make more decision errors, and the extrinsic information at the output of the first decoder will have incorrect values relative to the soft channel inputs [16].

In the SOVA algorithm, the channel values are used to recursively calculate the metric $M_j$ for the current state s along a path from the metric $M_{j-1}$ for the previous state along that path, added to an a priori information term and to a cross-correlation term between the transmitted and the received channel values, $x_j^*$ and $y_j^*$, using (27). The channel reliability factor $L_c$ is used to scale this cross-correlation. When we use an incorrect value of $L_c$, e.g. $L_c = 1$, we are scaling the channel values applied to the inputs of the component decoders by a factor of one instead of the correct value of $L_c$. This has the effect of scaling all the metrics by the same factor, and the metric differences are also scaled by the same factor. This scaling of the metrics does not affect the path chosen by the algorithm as the survivor path or as the Maximum Likelihood (ML) path, so the hard decisions given by the algorithm are not affected by using an incorrect value of $L_c$ [16]-[18].

In the iterative decoding with the SOVA algorithm, in the first iteration we assume that no a priori information about the transmitted bits is available to the decoder (the a priori information is zero), so the first component decoder takes only the channel values. If the channel reliability factor is incorrect, the channel values are scaled, the extrinsic information will also be scaled by the same factor, and the a priori information for the second decoder will also be scaled. Because of the linearity of the SOVA, the effect of using an incorrect value of the channel reliability factor is that the output LLR from the decoder is scaled by a constant factor.
The relative importance of the two inputs to the decoder, the a priori information and the channel information, will not change, since the LLRs for both these sources of information are scaled by the same factor. In the final iteration, the soft outputs from the final component decoder will have the same sign as those that would have been calculated using the correct value of $L_c$. So, the hard outputs from a turbo decoder using the SOVA algorithm are not affected by the channel reliability factor $L_c$ [16].

The Max-Log-MAP algorithm has the same linearity that is found in the SOVA algorithm. Instead of one metric, two metrics $A_j(s)$ and $B_j(s)$ are now calculated, for the forward and backward recursions, see (17), (18) and (19), where only simple additions of the cross-correlations of the transmitted and received symbols are used. But if an incorrect value of the channel reliability factor is used, all the metrics are simply scaled by a factor, as in the SOVA algorithm. The soft outputs, given by the differences in metrics between different paths, will also be scaled by the same factor, with the sign unchanged, and the final hard decisions given by the turbo decoder will not be affected.

The Log-MAP algorithm is identical to the Max-Log-MAP algorithm, except for the correction term $f(\delta)=\ln\left(1+e^{-\delta}\right)$ used in the calculation of the forward and backward metrics $A_j(s)$ and $B_j(s)$ and the soft output LLRs. The function $f(\delta)$ is not a linear function; it decreases asymptotically towards zero as $\delta$ increases. Hence, the linearity that is present in the Max-Log-MAP and SOVA algorithms is not present in the Log-MAP algorithm. This non-linearity determines more hard decision errors of the component decoders if the channel reliability factor $L_c$ is incorrect, and the extrinsic information derived from the first component decoder has incorrect amplitudes, which becomes the a priori information for the second decoder in the first iteration.
In subsequent iterations, both decoders receive a-priori information with incorrect amplitudes relative to the soft channel inputs.

In iterative decoding with the Log-MAP algorithm, the exchange of extrinsic information from one component decoder to the other produces a rapid decrease in the BER as the number of iterations increases. When an incorrect value of Lc is used, no such rapid fall in the BER occurs, because of the incorrect scaling of the a-priori information relative to the channel inputs. In fact, the performance of the decoder is then largely unaffected by the number of iterations used.

For wireless communications, some of which are modeled as Multiple Input Multiple Output (MIMO) systems [23], the channel is considered to be a Rayleigh or Rician fading channel. If the Channel State Information (CSI) is not known at the receiver, a natural approach is to estimate the channel impulse response and to use the estimated values to compute the channel reliability factor Lc required by the MAP algorithm to calculate the correct decoding metric.

In [20], the degradation in the performance of a turbo decoder using the MAP algorithm is studied when the channel SNR is not correctly estimated. The authors propose a method for blind estimation of the channel SNR, using the ratio of the average squared received channel value to the square of the average of the magnitudes of the received channel values. They also show that using these estimates for the SNR gives almost identical turbo decoder performance to that obtained using the true SNR. In [8], the authors propose a simple estimation scheme for Lc based on statistics computed over a block of matched filter outputs. The channel estimator includes the error variance of the channel estimates.
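The blind statistic described for [20] is easy to sketch. The code below (a hedged illustration with BPSK over AWGN and invented noise levels, not the full SNR inversion from the paper) computes the ratio of the average squared received value to the square of the average magnitude; it approaches 1 at high SNR and grows as the noise increases, which is what makes it usable as an SNR indicator.

```python
import random

def snr_ratio_statistic(samples):
    # Ratio statistic: E[y^2] / (E[|y|])^2, estimated from a block of
    # received values.  For BPSK in AWGN it tends to 1 as SNR -> infinity
    # and increases with the noise variance.
    m2 = sum(v * v for v in samples) / len(samples)
    m1 = sum(abs(v) for v in samples) / len(samples)
    return m2 / (m1 * m1)

def bpsk_awgn(sigma, n=20000, seed=1):
    # +/-1 BPSK symbols plus Gaussian noise of standard deviation sigma.
    rng = random.Random(seed)
    return [rng.choice((-1.0, 1.0)) + rng.gauss(0.0, sigma) for _ in range(n)]

z_high_snr = snr_ratio_statistic(bpsk_awgn(sigma=0.1))  # close to 1
z_low_snr = snr_ratio_statistic(bpsk_awgn(sigma=1.0))   # noticeably larger
assert z_high_snr < z_low_snr
```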
In [24], the Minimum Mean Squared Error (MMSE) estimation criterion is used, and an iterative joint MMSE channel estimation and MAP decoding scheme is studied.

None of the above works requires a training sequence with pilot symbols to estimate the channel reliability factor Lc. Other studies, such as [22] and [25], use pilot symbols to estimate the channel parameters. In [22] it is shown that it is not necessary to estimate the channel SNR for a turbo decoder with the Max-Log-MAP or SOVA algorithms, and that if the MAP or the Log-MAP algorithm is used, the value of Lc does not have to be very close to the true value for a good BER performance to be obtained.

V. SIMULATION RESULTS

This section presents simulation results for turbo code ensembles with the MAP, Max-Log-MAP, Log-MAP and SOVA decoding algorithms. The turbo encoder is the same for the four decoding algorithms and is described by two identical RSC codes with constraint length 3 and generator polynomials G_f = 1 + D² and G_b = 1 + D + D². No tail bits and no puncturing are used. The two constituent encoders are parallel concatenated by a classical block interleaver, with dimensions varying according to the frame size.
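For concreteness, a constraint-length-3 rate-1/2 RSC encoder of the kind described above can be sketched as follows. Note that the exact tap mapping is an assumption here: the sketch takes the feedback polynomial as 1 + D + D² and the feedforward polynomial as 1 + D², the conventional choice for this constraint length.

```python
def rsc_encode(bits):
    """Rate-1/2 recursive systematic convolutional encoder, constraint length 3.

    Assumed polynomials: feedback 1 + D + D^2, feedforward 1 + D^2.
    Returns a list of (systematic_bit, parity_bit) pairs.
    """
    s1 = s2 = 0  # two memory cells: s1 = D, s2 = D^2
    out = []
    for u in bits:
        fb = u ^ s1 ^ s2   # recursive input (feedback taps D and D^2)
        p = fb ^ s2        # parity output (feedforward taps 1 and D^2)
        out.append((u, p))
        s1, s2 = fb, s1    # shift the register
    return out

# Impulse response of the recursive encoder: the parity stream does not die out.
assert rsc_encode([1, 0, 0, 0]) == [(1, 1), (0, 1), (0, 1), (0, 0)]
```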
IndraDrive Series Fault Codes
Error Messages
F9001 Error internal function call
F9002 Error internal RTOS function call
F9003 Watchdog
F9004 Hardware trap
F8000 Fatal hardware error
F8010 Autom. commutation: Max. motion range when moving back
F8011 Commutation offset could not be determined
F8012 Autom. commutation: Max. motion range
F8013 Automatic commutation: Current too low
F8014 Automatic commutation: Overcurrent
F8015 Automatic commutation: Timeout
F8016 Automatic commutation: Iteration without result
F8017 Automatic commutation: Incorrect commutation adjustment
F8018 Device overtemperature shutdown
F8022 Enc. 1: Enc. signals incorr. (can be cleared in ph. 2)
F8023 Error mechanical link of encoder or motor connection
F8025 Overvoltage in power section
F8027 Safe torque off while drive enabled
F8028 Overcurrent in power section
F8030 Safe stop 1 while drive enabled
F8042 Encoder 2 error: Signal amplitude incorrect
F8057 Device overload shutdown
F8060 Overcurrent in power section
F8064 Interruption of motor phase
F8067 Synchronization PWM-Timer wrong
F8069 +/-15Volt DC error
F8070 +24Volt DC error
F8076 Error in error angle loop
F8078 Speed loop error
F8079 Velocity limit value exceeded
F8091 Power section defective
F8100 Error when initializing the parameter handling
F8102 Error when initializing power section
F8118 Invalid power section/firmware combination
F8120 Invalid control section/firmware combination
F8122 Control section defective
F8129 Incorrect optional module firmware
F8130 Firmware of option 2 of safety technology defective
F8133 Error when checking interrupting circuits
F8134 SBS: Fatal error
F8135 SMD: Velocity exceeded
F8140 Fatal CCD error
F8201 Safety command for basic initialization incorrect
F8203 Safety technology configuration parameter invalid
F8813 Connection error mains choke
F8830 Power section error
F8838 Overcurrent external braking resistor
F7010 Safely-limited increment exceeded
F7011 Safely-monitored position, exceeded in pos. direction
F7012 Safely-monitored position, exceeded in neg. direction
F7013 Safely-limited speed exceeded
F7020 Safe maximum speed exceeded
F7021 Safely-limited position exceeded
F7030 Position window Safe stop 2 exceeded
F7031 Incorrect direction of motion
F7040 Validation error parameterized - effective threshold
F7041 Actual position value validation error
F7042 Validation error of safe operation mode
F7043 Error of output stage interlock
F7050 Time for stopping process exceeded
8.3.15 F7051 Safely-monitored deceleration exceeded
8.4 Travel Range Errors (F6xxx)
8.4.1 Behavior in the Case of Travel Range Errors
8.4.2 F6010 PLC Runtime Error
8.4.3 F6024 Maximum braking time exceeded
8.4.4 F6028 Position limit value exceeded (overflow)
8.4.5 F6029 Positive position limit exceeded
8.4.6 F6030 Negative position limit exceeded
8.4.7 F6034 Emergency-Stop
8.4.8 F6042 Both travel range limit switches activated
8.4.9 F6043 Positive travel range limit switch activated
8.4.10 F6044 Negative travel range limit switch activated
8.4.11 F6140 CCD slave error (emergency halt)
8.5 Interface Errors (F4xxx)
8.5.1 Behavior in the Case of Interface Errors
8.5.2 F4001 Sync telegram failure
8.5.3 F4002 RTD telegram failure
8.5.4 F4003 Invalid communication phase shutdown
8.5.5 F4004 Error during phase progression
8.5.6 F4005 Error during phase regression
8.5.7 F4006 Phase switching without ready signal
8.5.8 F4009 Bus failure
8.5.9 F4012 Incorrect I/O length
8.5.10 F4016 PLC double real-time channel failure
8.5.11 F4017 S-III: Incorrect sequence during phase switch
8.5.12 F4034 Emergency-Stop
8.5.13 F4140 CCD communication error
8.6 Non-Fatal Safety Technology Errors (F3xxx)
8.6.1 Behavior in the Case of Non-Fatal Safety Technology Errors
8.6.2 F3111 Refer. missing when selecting safety related end pos
8.6.3 F3112 Safe reference missing
8.6.4 F3115 Brake check time interval exceeded
Troubleshooting Guide | Rexroth IndraDrive | Electric Drives and Controls | Bosch Rexroth AG
8.6.5 F3116 Nominal load torque of holding system exceeded
8.6.6 F3117 Actual position values validation error
8.6.7 F3122 SBS: System error
8.6.8 F3123 SBS: Brake check missing
8.6.9 F3130 Error when checking input signals
8.6.10 F3131 Error when checking acknowledgment signal
8.6.11 F3132 Error when checking diagnostic output signal
8.6.12 F3133 Error when checking interrupting circuits
8.6.13 F3134 Dynamization time interval incorrect
8.6.14 F3135 Dynamization pulse width incorrect
8.6.15 F3140 Safety parameters validation error
8.6.16 F3141 Selection validation error
8.6.17 F3142 Activation time of enabling control exceeded
8.6.18 F3143 Safety command for clearing errors incorrect
8.6.19 F3144 Incorrect safety configuration
8.6.20 F3145 Error when unlocking the safety door
8.6.21 F3146 System error channel 2
8.6.22 F3147 System error channel 1
8.6.23 F3150 Safety command for system start incorrect
8.6.24 F3151 Safety command for system halt incorrect
8.6.25 F3152 Incorrect backup of safety technology data
8.6.26 F3160 Communication error of safe communication
8.7 Non-Fatal Errors (F2xxx)
8.7.1 Behavior in the Case of Non-Fatal Errors
8.7.2 F2002 Encoder assignment not allowed for synchronization
8.7.3 F2003 Motion step skipped
8.7.4 F2004 Error in MotionProfile
8.7.5 F2005 Cam table invalid
8.7.6 F2006 MMC was removed
8.7.7 F2007 Switching to non-initialized operation mode
8.7.8 F2008 RL The motor type has changed
8.7.9 F2009 PL Load parameter default values
8.7.10 F2010 Error when initializing digital I/O (-> S-0-0423)
8.7.11 F2011 PLC - Error no. 1
8.7.12 F2012 PLC - Error no. 2
8.7.13 F2013 PLC - Error no. 3
8.7.14 F2014 PLC - Error no. 4
8.7.15 F2018 Device overtemperature shutdown
8.7.16 F2019 Motor overtemperature shutdown
8.7.17 F2021 Motor temperature monitor defective
8.7.18 F2022 Device temperature monitor defective
8.7.19 F2025 Drive not ready for control
8.7.20 F2026 Undervoltage in power section
8.7.21 F2027 Excessive oscillation in DC bus
8.7.22 F2028 Excessive deviation
8.7.23 F2031 Encoder 1 error: Signal amplitude incorrect
8.7.24 F2032 Validation error during commutation fine adjustment
8.7.25 F2033 External power supply X10 error
8.7.26 F2036 Excessive position feedback difference
8.7.27 F2037 Excessive position command difference
8.7.28 F2039 Maximum acceleration exceeded
8.7.29 F2040 Device overtemperature 2 shutdown
8.7.30 F2042 Encoder 2: Encoder signals incorrect
8.7.31 F2043 Measuring encoder: Encoder signals incorrect
8.7.32 F2044 External power supply X15 error
8.7.33 F2048 Low battery voltage
8.7.34 F2050 Overflow of target position preset memory
8.7.35 F2051 No sequential block in target position preset memory
8.7.36 F2053 Incr. encoder emulator: Pulse frequency too high
8.7.37 F2054 Incr. encoder emulator: Hardware error
8.7.38 F2055 External power supply dig. I/O error
8.7.39 F2057 Target position out of travel range
8.7.40 F2058 Internal overflow by positioning input
8.7.41 F2059 Incorrect command value direction when positioning
8.7.42 F2063 Internal overflow master axis generator
8.7.43 F2064 Incorrect cmd value direction master axis generator
8.7.44 F2067 Synchronization to master communication incorrect
8.7.45 F2068 Brake error
8.7.46 F2069 Error when releasing the motor holding brake
8.7.47 F2074 Actual pos. value 1 outside absolute encoder window
8.7.48 F2075 Actual pos. value 2 outside absolute encoder window
8.7.49 F2076 Actual pos. value 3 outside absolute encoder window
8.7.50 F2077 Current measurement trim wrong
8.7.51 F2086 Error supply module
8.7.52 F2087 Module group communication error
8.7.53 F2100 Incorrect access to command value memory
8.7.54 F2101 It was impossible to address MMC
8.7.55 F2102 It was impossible to address I2C memory
8.7.56 F2103 It was impossible to address EnDat memory
8.7.57 F2104 Commutation offset invalid
8.7.58 F2105 It was impossible to address Hiperface memory
8.7.59 F2110 Error in non-cyclical data communic. of power section
8.7.60 F2120 MMC: Defective or missing, replace
8.7.61 F2121 MMC: Incorrect data or file, create correctly
8.7.62 F2122 MMC: Incorrect IBF file, correct it
8.7.63 F2123 Retain data backup impossible
8.7.64 F2124 MMC: Saving too slowly, replace
8.7.65 F2130 Error comfort control panel
8.7.66 F2140 CCD slave error
8.7.67 F2150 MLD motion function block error
8.7.68 F2174 Loss of motor encoder reference
8.7.69 F2175 Loss of optional encoder reference
8.7.70 F2176 Loss of measuring encoder reference
8.7.71 F2177 Modulo limitation error of motor encoder
8.7.72 F2178 Modulo limitation error of optional encoder
8.7.73 F2179 Modulo limitation error of measuring encoder
8.7.74 F2190 Incorrect Ethernet configuration
8.7.75 F2260 Command current limit shutoff
8.7.76 F2270 Analog input 1 or 2, wire break
8.7.77 F2802 PLL is not synchronized
8.7.78 F2814 Undervoltage in mains
8.7.79 F2815 Overvoltage in mains
8.7.80 F2816 Softstart fault power supply unit
8.7.81 F2817 Overvoltage in power section
8.7.82 F2818 Phase failure
8.7.83 F2819 Mains failure
8.7.84 F2820 Braking resistor overload
8.7.85 F2821 Error in control of braking resistor
8.7.86 F2825 Switch-on threshold braking resistor too low
8.7.87 F2833 Ground fault in motor line
8.7.88 F2834 Contactor control error
8.7.89 F2835 Mains contactor wiring error
8.7.90 F2836 DC bus balancing monitor error
8.7.91 F2837 Contactor monitoring error
8.7.92 F2840 Error supply shutdown
8.7.93 F2860 Overcurrent in mains-side power section
8.7.94 F2890 Invalid device code
8.7.95 F2891 Incorrect interrupt timing
8.7.96 F2892 Hardware variant not supported
8.8 SERCOS Error Codes / Error Messages of Serial Communication
9 Warnings (Exxxx)
9.1 Fatal Warnings (E8xxx)
9.1.1 Behavior in the Case of Fatal Warnings
9.1.2 E8025 Overvoltage in power section
9.1.3 E8026 Undervoltage in power section
9.1.4 E8027 Safe torque off while drive enabled
9.1.5 E8028 Overcurrent in power section
9.1.6 E8029 Positive position limit exceeded
9.1.7 E8030 Negative position limit exceeded
9.1.8 E8034 Emergency-Stop
9.1.9 E8040 Torque/force actual value limit active
9.1.10 E8041 Current limit active
9.1.11 E8042 Both travel range limit switches activated
9.1.12 E8043 Positive travel range limit switch activated
9.1.13 E8044 Negative travel range limit switch activated
9.1.14 E8055 Motor overload, current limit active
9.1.15 E8057 Device overload, current limit active
9.1.16 E8058 Drive system not ready for operation
9.1.17 E8260 Torque/force command value limit active
9.1.18 E8802 PLL is not synchronized
9.1.19 E8814 Undervoltage in mains
9.1.20 E8815 Overvoltage in mains
9.1.21 E8818 Phase failure
9.1.22 E8819 Mains failure
9.2 Warnings of Category E4xxx
9.2.1 E4001 Double MST failure shutdown
9.2.2 E4002 Double MDT failure shutdown
9.2.3 E4005 No command value input via master communication
9.2.4 E4007 SERCOS III: Consumer connection failed
9.2.5 E4008 Invalid addressing command value data container A
9.2.6 E4009 Invalid addressing actual value data container A
9.2.7 E4010 Slave not scanned or address 0
9.2.8 E4012 Maximum number of CCD slaves exceeded
9.2.9 E4013 Incorrect CCD addressing
9.2.10 E4014 Incorrect phase switch of CCD slaves
9.3 Possible Warnings When Operating Safety Technology (E3xxx)
9.3.1 Behavior in Case a Safety Technology Warning Occurs
9.3.2 E3100 Error when checking input signals
9.3.3 E3101 Error when checking acknowledgment signal
9.3.4 E3102 Actual position values validation error
9.3.5 E3103 Dynamization failed
9.3.6 E3104 Safety parameters validation error
9.3.7 E3105 Validation error of safe operation mode
9.3.8 E3106 System error safety technology
9.3.9 E3107 Safe reference missing
9.3.10 E3108 Safely-monitored deceleration exceeded
9.3.11 E3110 Time interval of forced dynamization exceeded
9.3.12 E3115 Prewarning, end of brake check time interval
9.3.13 E3116 Nominal load torque of holding system reached
9.4 Non-Fatal Warnings (E2xxx)
9.4.1 Behavior in Case a Non-Fatal Warning Occurs
9.4.2 E2010 Position control with encoder 2 not possible
9.4.3 E2011 PLC - Warning no. 1
9.4.4 E2012 PLC - Warning no. 2
9.4.5 E2013 PLC - Warning no. 3
9.4.6 E2014 PLC - Warning no. 4
9.4.7 E2021 Motor temperature outside of measuring range
9.4.8 E2026 Undervoltage in power section
9.4.9 E2040 Device overtemperature 2 prewarning
9.4.10 E2047 Interpolation velocity = 0
9.4.11 E2048 Interpolation acceleration = 0
9.4.12 E2049 Positioning velocity >= limit value
9.4.13 E2050 Device overtemp. prewarning
9.4.14 E2051 Motor overtemp. prewarning
9.4.15 E2053 Target position out of travel range
9.4.16 E2054 Not homed
9.4.17 E2055 Feedrate override S-0-0108 = 0
9.4.18 E2056 Torque limit = 0
9.4.19 E2058 Selected positioning block has not been programmed
9.4.20 E2059 Velocity command value limit active
9.4.21 E2061 Device overload prewarning
9.4.22 E2063 Velocity command value > limit value
9.4.23 E2064 Target position out of num. range
9.4.24 E2069 Holding brake torque too low
9.4.25 E2070 Acceleration limit active
9.4.26 E2074 Encoder 1: Encoder signals disturbed
9.4.27 E2075 Encoder 2: Encoder signals disturbed
9.4.28 E2076 Measuring encoder: Encoder signals disturbed
9.4.29 E2077 Absolute encoder monitoring, motor encoder (encoder alarm)
9.4.30 E2078 Absolute encoder monitoring, opt. encoder (encoder alarm)
9.4.31 E2079 Absolute enc. monitoring, measuring encoder (encoder alarm)
9.4.32 E2086 Prewarning supply module overload
9.4.33 E2092 Internal synchronization defective
9.4.34 E2100 Positioning velocity of master axis generator too high
9.4.35 E2101 Acceleration of master axis generator is zero
9.4.36 E2140 CCD error at node
9.4.37 E2270 Analog input 1 or 2, wire break
9.4.38 E2802 HW control of braking resistor
9.4.39 E2810 Drive system not ready for operation
9.4.40 E2814 Undervoltage in mains
9.4.41 E2816 Undervoltage in power section
9.4.42 E2818 Phase failure
9.4.43 E2819 Mains failure
9.4.44 E2820 Braking resistor overload prewarning
9.4.45 E2829 Not ready for power on
Research on Visual Inspection Algorithms for Defects in Textured Objects
Abstract
In the highly competitive field of industrial automation, machine vision plays a decisive role in product quality control, and its application to defect detection has become increasingly common. Compared with conventional inspection techniques, automated visual inspection systems are more economical, faster, more efficient and safer. Textured objects are ubiquitous in industrial production: substrates used for semiconductor assembly and LED packaging, printed circuit boards in modern electronic systems, and cloth and fabrics in the textile industry can all be regarded as objects with texture features. This thesis focuses on defect detection techniques for textured objects, aiming to provide efficient and reliable detection algorithms for their automated inspection.

Texture is an important feature for describing image content, and texture analysis has been successfully applied to texture segmentation and texture classification. This work proposes a defect detection algorithm based on texture analysis and reference comparison. The algorithm tolerates image registration errors caused by object distortion and is robust to the influence of texture. It is designed to provide rich and physically meaningful descriptions of the detected defect regions, such as their size, shape, brightness contrast and spatial distribution. When a reference image is available, the algorithm can be applied to both homogeneously and non-homogeneously textured objects, and it also achieves good results on non-textured objects.

Throughout the detection process we employ steerable-pyramid texture analysis and reconstruction. Unlike traditional wavelet texture analysis, we add a tolerance-control algorithm in the wavelet domain to handle object distortion and texture influence, achieving robustness to both. Finally, steerable-pyramid reconstruction ensures that the physical properties of the defect regions are recovered accurately. In the experimental stage we tested a series of images of practical application value; the results show that the proposed defect detection algorithm for textured objects is efficient and easy to implement.
Keywords: defect detection, texture, object distortion, steerable pyramid, reconstruction
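The reference-comparison idea behind the thesis can be sketched in a few lines. The code below is a heavily simplified, hypothetical stand-in: it uses a box-filter band-pass residual in place of the steerable-pyramid decomposition, a single scale instead of a full pyramid, and invented image values; it only illustrates the principle of comparing band-pass content of a test image against a reference with a tolerance threshold.

```python
import numpy as np

def box_blur(img):
    # 3x3 box filter with edge padding (crude low-pass stand-in).
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def defect_map(test, ref, tol=0.2):
    # Compare band-pass residuals of test and reference images; pixels whose
    # residual difference exceeds the tolerance are flagged as defects.
    # The tolerance plays the role of the thesis's distortion/texture
    # tolerance control, in a much cruder form.
    band_test = test - box_blur(test)
    band_ref = ref - box_blur(ref)
    return np.abs(band_test - band_ref) > tol

ref = np.zeros((8, 8))
test = ref.copy()
test[4, 4] = 1.0  # a single bright "defect" pixel

assert defect_map(test, ref)[4, 4]       # defect pixel is flagged
assert not defect_map(test, ref)[0, 0]   # clean pixel is not
```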
In this paper we propose several new algorithms for decoding non-binary LDPC codes, one of which is called the Min-Max algorithm. They are all independent of thermal noise estimation errors and perform quasi-optimal decoding, meaning that they present a very small performance loss with respect to the optimal iterative decoding (Sum-Product). We also propose two implementations of the Min-Max algorithm, both in the LLR domain, so that the decoding is computationally stable: a "standard implementation" whose complexity scales as the square of the Galois field's cardinality, and a reduced-complexity "selective implementation".
Notations related to LDPC codes:
• H ∈ M_{M,N}(GF(q)), the q-ary parity-check matrix of the code.
• C, the set of codewords of the LDPC code.
• C_n(a), the set of codewords whose n-th coordinate equals a, for given 1 ≤ n ≤ N and a ∈ GF(q).
• x = (x1, x2, . . . , xN), a q-ary codeword transmitted over the channel.
ISIT 2008, Toronto, Canada, July 6 - 11, 2008
Min-Max decoding for non binary LDPC codes
Valentin Savin, CEA-LETI, MINATEC, Grenoble, France, valentin.savin@cea.fr
The Sum-Product algorithm can be efficiently implemented in the probability domain using binary Fourier transforms [4] and its complexity is dominated by O(q log2 q) sum and product operations for each check node processing, where q is the cardinality of the Galois field of the non-binary LDPC code. The Min-Sum decoding can be implemented either in the log-probability domain or in the log-likelihood ratio (LLR) domain and its complexity is dominated by O(q2) sum operations for each check node processing. In the LLR domain, a reduced selective implementation of the Min-Sum decoding, called Extended Min-Sum, was proposed in [5], [6]. Here “selective” means that the check-node processing uses the incoming messages concerning only a part of the Galois field elements. Non binary LDPC codes were also investigated in [7], [8], [9].
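The O(q²) cost of the Min-Sum check-node processing is easy to see in code. The sketch below is a hedged illustration (a pairwise check-node step with toy messages, taking q a power of 2 so that field addition reduces to bitwise XOR; parity-check coefficients are taken as 1 for simplicity): for each pair of incoming symbols the messages are added, giving q² sum operations.

```python
def min_sum_check(m1, m2, q=4):
    """Pairwise Min-Sum check-node step over GF(q), q a power of 2.

    m1, m2: incoming messages (non-negative LLR-style costs), one per symbol.
    Returns out[a] = min over (a1, a2) with a1 + a2 = a of m1[a1] + m2[a2],
    where '+' in the constraint is the field addition (bitwise XOR here).
    The double loop makes the O(q^2) complexity explicit.
    """
    out = [float("inf")] * q
    for a1 in range(q):
        for a2 in range(q):
            s = m1[a1] + m2[a2]
            if s < out[a1 ^ a2]:
                out[a1 ^ a2] = s
    return out

# Toy messages over GF(4): cost 0 marks the most likely symbol.
assert min_sum_check([0, 3, 1, 2], [0, 1, 2, 3]) == [0, 1, 1, 2]
```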
The paper is organized as follows. In the next section we briefly review several realizations of the Min-Sum algorithm for non binary LDPC codes. It is intended to keep the paper self-contained but also to justify some of our choices regarding the new decoding algorithms introduced in section III. The implementation of the Min-Max decoder is discussed in section IV. Section V presents simulation results and section VI concludes this paper.
The following notations will be used throughout the paper.
Notations related to the Galois field:
• GF(q) = {0, 1, . . . , q − 1}, the Galois field with q elements, where q is a power of a prime number. Its elements will be called symbols, to distinguish them from ordinary integers.
• a, s, x will be used to denote GF(q)-symbols.
• a, s, x (in bold) will be used to denote vectors of GF(q)-symbols. For instance, a = (a1, . . . , aI) ∈ GF(q)^I, etc.
This makes the Min-Max decoding very attractive for practical purposes.
Notations related to the Tanner graph:
• H, the Tanner graph of the code.
• n ∈ {1, 2, . . . , N}, a variable node of H.
• m ∈ {1, 2, . . . , M}, a check node of H.
• H(n), the set of check nodes adjacent to the variable node n.
• H(m), the set of variable nodes adjacent to the check node m.
• L(m), the set of local configurations verifying the check node m, i.e. the set of sequences of GF(q)-symbols a = (a_n)_{n∈H(m)} verifying the linear constraint ∑_{n∈H(m)} h_{m,n} a_n = 0, where h_{m,n} is the corresponding entry of H.
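For intuition, the Min-Max check-node rule can be sketched alongside the Min-Sum one: based on the algorithm's name and the paper's description of it as a low-complexity variant, the sum of incoming messages is replaced by their maximum, while the minimization over local configurations is kept. The toy code below (a pairwise step over GF(4), with field addition reduced to XOR and unit parity-check coefficients assumed) is an illustration, not the paper's full decoder.

```python
def min_max_check(m1, m2, q=4):
    """Pairwise Min-Max check-node step over GF(q), q a power of 2.

    out[a] = min over (a1, a2) with a1 XOR a2 = a of max(m1[a1], m2[a2]).
    Same O(q^2) structure as Min-Sum, but 'sum' is replaced by 'max',
    which keeps the outgoing messages in the range of the incoming ones.
    """
    out = [float("inf")] * q
    for a1 in range(q):
        for a2 in range(q):
            v = max(m1[a1], m2[a2])
            if v < out[a1 ^ a2]:
                out[a1 ^ a2] = v
    return out

# Same toy messages as before: note the bounded output values.
assert min_max_check([0, 3, 1, 2], [0, 1, 2, 3]) == [0, 1, 1, 1]
```

One design consequence worth noting: because the output is a max of inputs rather than a sum, message magnitudes cannot grow from iteration to iteration, which is one reason the LLR-domain implementations are computationally stable.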
Abstract—Iterative decoding of non-binary LDPC codes is currently performed using either the Sum-Product or the Min-Sum algorithms, or slightly different versions of them. In this paper, several low-complexity quasi-optimal iterative algorithms are proposed for decoding non-binary codes. The Min-Max algorithm is one of them, and it has the benefit of two possible LLR-domain implementations: a standard implementation, whose complexity scales as the square of the Galois field's cardinality, and a reduced-complexity implementation called selective implementation, which makes the Min-Max decoding very attractive for practical purposes.