An Introduction to Hard-Core DSP and Its Implementation


DSP Overview (Selected)

Digital signal processing (Digital Signal Processing, DSP) is an emerging discipline that draws on many other fields of study and is applied across a wide range of areas.

Since the 1960s, with the rapid development of computers and information technology, digital signal processing has emerged and advanced quickly. Digital signal processing is a way of handling real-world signals, represented as sequences of numbers, by applying mathematical techniques to transform them or to extract information from them. Over the past two decades and more, it has found extremely wide use in communications and other fields. Semiconductor vendors such as Texas Instruments and Freescale are particularly strong in this area.

DSP Microprocessors

A DSP (digital signal processor) is a specialized microprocessor: a device that handles large amounts of information in the form of digital signals.

It works by receiving an analog signal and converting it into a digital signal of 0s and 1s. The digital signal is then modified, pruned, or enhanced, and elsewhere in the system the digital data is translated back into analog form or into the format the real environment requires. A DSP is not only programmable; it can execute tens of millions of complex instructions per second in real time, far outpacing general-purpose microprocessors, which makes it an increasingly important chip in the digital world. Its two most notable strengths are its powerful data-processing capability and its high execution speed.

A DSP microprocessor (chip) typically has the following characteristics:

(1) It can complete one multiply and one add in a single instruction cycle.
(2) Program and data spaces are separate, so instructions and data can be accessed simultaneously.
(3) It has fast on-chip RAM, usually accessible as two blocks at once over independent data buses.
(4) It provides hardware support for low-overhead or zero-overhead loops and branches.
(5) It offers fast interrupt handling and hardware I/O support.
(6) It has multiple hardware address generators that operate within a single cycle.
(7) It can execute several operations in parallel.
(8) It supports pipelining, so instruction fetch, decode, and execution can overlap.
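Characteristics (1), (4), and (8) are what make the inner loops of DSP code fast. As a hedged illustration, the plain C dot-product kernel below is the kind of loop a DSP compiler maps onto single-cycle multiply-accumulate instructions and zero-overhead hardware loops; the function name and the use of `float` are illustrative choices, not taken from the text.

```c
#include <stddef.h>

/* Dot product: the canonical DSP kernel. On a DSP, each iteration
 * ideally becomes one multiply-accumulate (MAC) issued every cycle,
 * with the loop count handled by zero-overhead loop hardware. */
float dot_product(const float *x, const float *h, size_t n)
{
    float acc = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        acc += x[i] * h[i];   /* one multiply and one add per iteration */
    }
    return acc;
}
```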

Of course, compared with a general-purpose microprocessor, a DSP chip's other general-purpose capabilities are relatively weak.

DSP Principles and Application Techniques

Digital signal processing (DSP) is a technology for processing digital signals, widely used in fields such as communications, audio processing, and image processing.

This article introduces the principles of DSP, its application techniques, and its concrete uses in different fields.

I. DSP Principles and Basic Concepts

Digital signal processing converts continuous signals into discrete ones and then processes and analyzes them on a computer. It rests on a few basic concepts: sampling, quantization, and digital coding.

1. Sampling: the analog signal is sampled at a chosen rate, turning the continuous signal into a series of sample points and yielding a discrete signal sequence.
2. Quantization: each sample is quantized, that is, mapped to a discrete numeric value representing its amplitude.
3. Digital coding: the quantized samples are mapped to binary codes, giving the signal a digital representation.
4. Digital filtering: filtering the digital signal can remove noise, enhance the signal, and so on.
5. Digital transforms: transforms such as the Fourier transform and the discrete Fourier transform (DFT) are applied to analyze the signal in the frequency domain.
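To make item 5 concrete, here is a minimal sketch of a direct DFT magnitude computation in C. It is the textbook O(N^2) form of the definition rather than an FFT, and the function and variable names are illustrative only.

```c
#include <math.h>
#include <stddef.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Direct DFT: magnitude of bin k for a real input block x[0..n-1],
 * |X[k]| = | sum_i x[i] * exp(-j*2*pi*k*i/n) |.
 * An FFT computes the same result in O(n log n); this O(n^2) form
 * simply mirrors the definition used for frequency-domain analysis. */
double dft_magnitude(const double *x, size_t n, size_t k)
{
    double re = 0.0, im = 0.0;
    for (size_t i = 0; i < n; ++i) {
        double phase = -2.0 * M_PI * (double)k * (double)i / (double)n;
        re += x[i] * cos(phase);
        im += x[i] * sin(phase);
    }
    return sqrt(re * re + im * im);
}
```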

II. DSP Application Techniques

DSP technology is used widely across many fields. The sections below describe specific DSP techniques in communications, audio processing, and image processing.

1. DSP in communications

In communications, DSP plays an essential role. Digital modulation and demodulation are among its core applications: through them, analog signals are converted to digital signals for transmission and are demodulated back to analog form at the receiver. DSP is also widely used for audio coding and decoding, signal enhancement, and digital filtering.

2. DSP in audio processing

In audio processing, DSP is used for noise reduction and audio effects, such as ambient noise suppression, echo cancellation, and equalization. DSP also enables higher-level audio techniques such as speech recognition and speech synthesis.

3. DSP in image processing

In image processing, DSP is applied to image compression, enhancement, and recognition. Image compression encodes and decodes images to reduce their data volume, enabling efficient transmission and storage. Image enhancement improves image quality through operations such as filtering, sharpening, and denoising.

DSP Principles and Applications: What Is It For?

Overview

Digital signal processing (DSP) is a technology that converts analog signals into digital signals and then processes them.

It uses digital algorithms to filter, compress, encode, decode, enhance, and analyze signals. DSP technology is widely applied in media processing, communications, audio, video, radar, medical imaging, and other fields. This article introduces the principles of DSP and explores its applications in different areas.

DSP Principles

Digital signal processing rests on sampling and quantizing the signal and then applying digital algorithms.

The basic DSP processing flow is as follows:

1. Signal sampling and quantization: the analog signal is sampled by an analog-to-digital converter (ADC), converting it into a discrete digital signal. The captured samples are then quantized, i.e. represented as discrete numeric values.

2. Digital filtering: digital filtering is one of the core DSP operations. Digital algorithms filter the signal with low-pass, high-pass, band-pass, and other filters; filtering can remove noise, enhance the signal, and so on.

3. Algorithmic processing: DSP applies a variety of digital algorithms to the signal. Common examples include the FFT (fast Fourier transform), FIR (finite impulse response) filters, and IIR (infinite impulse response) filters. These algorithms implement coding and decoding, compression, enhancement, and other functions; a short FIR sketch follows this list.

4. Digital demodulation and synthesis: in communications, DSP can demodulate digital signals back into analog form or synthesize analog signals into digital form. This capability is important in wireless communications, audio processing, and related areas.
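As a hedged illustration of item 3, the sketch below implements a basic FIR filter in C: each output sample is a weighted sum of the most recent input samples. The coefficient values and function names are placeholders, not taken from the text; in practice the taps would come from a filter-design tool.

```c
#include <stddef.h>

/* Basic FIR filter: y[n] = sum_{k=0}^{T-1} h[k] * x[n-k].
 * Samples before the start of the block are treated as zero. */
void fir_filter(const float *x, float *y, size_t n,
                const float *h, size_t taps)
{
    for (size_t i = 0; i < n; ++i) {
        float acc = 0.0f;
        for (size_t k = 0; k < taps && k <= i; ++k) {
            acc += h[k] * x[i - k];   /* multiply-accumulate over the taps */
        }
        y[i] = acc;
    }
}

/* Example: a 4-tap moving-average low-pass filter (placeholder taps).
 * Usage: fir_filter(x, y, n, kAvgTaps, 4); */
static const float kAvgTaps[4] = { 0.25f, 0.25f, 0.25f, 0.25f };
```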

DSP Applications

Digital signal processing technology has important uses in a great many fields.

Here are a few examples from the major application areas.

1. Media processing

• Audio processing: DSP can filter audio signals, reduce noise, and apply effects; it is widely used in music production and audio equipment.
• Video processing: DSP can compress, encode, and decode video, enabling high-definition playback and transmission.

2. Communications

• Wireless communications: DSP plays a major role in wireless systems, handling digital demodulation, signal processing, and coding/decoding, and underpinning the development of modern communication technology.
• Speech recognition and synthesis: DSP enables speech recognition and synthesis, widely used in smartphones, intelligent assistants, and similar devices.

3. Audio equipment

• Audio amplifiers: DSP can be used in the design and tuning of audio amplifiers to deliver a better listening experience.

DSP Chips: Principles, Development, and Applications (Courseware)

1. What is a DSP chip?

A DSP chip (digital signal processing chip) is an integrated circuit designed specifically for digital signal processing.

It offers strong computing power and high processing speed, and is widely used in audio signal processing, image processing, communication systems, radar signal processing, and other fields.

2. How a DSP chip works

Using efficient algorithms and hardware accelerators, a DSP chip samples, compresses, encodes, filters, analyzes, demodulates, and decodes the incoming digital signal to produce the desired output. Roughly, it works as follows:

其工作原理大致如下:1.信号采样:DSP芯片将输入的连续模拟信号通过采样电路转换为离散数字信号。

2.数字信号处理:DSP芯片使用内置的运算器和指令集,对采样到的数字信号进行各种算法处理,如滤波、频域变换、时域变换等。

3.运算加速:为了提高处理速度,DSP芯片通常配备专门的硬件加速器,如DSP协处理器、FPGA等,来协助完成复杂的计算任务。

4.输出处理:处理后的数字信号经过解码、解调等步骤后,再通过解调电路将其还原为模拟信号,输出到外部设备或其他系统中。
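The four steps above describe a block-oriented pipeline: samples come in, an algorithm runs on each block, and results go out. As a hedged sketch (the ADC/DAC access functions are hypothetical placeholders, not a real driver API), a typical frame-based loop looks like this:

```c
#include <stddef.h>
#include <stdint.h>

#define FRAME_LEN 256

/* Hypothetical board-support functions; a real system would use the
 * vendor's ADC/DAC or DMA driver instead. */
extern void adc_read_frame(int16_t *buf, size_t len);
extern void dac_write_frame(const int16_t *buf, size_t len);

/* Placeholder per-frame algorithm: apply a fixed gain with saturation. */
static void process_frame(int16_t *buf, size_t len)
{
    for (size_t i = 0; i < len; ++i) {
        int32_t v = (int32_t)buf[i] * 2;          /* gain of 2 */
        if (v > 32767)  v = 32767;                /* saturate to 16 bits */
        if (v < -32768) v = -32768;
        buf[i] = (int16_t)v;
    }
}

void dsp_main_loop(void)
{
    int16_t frame[FRAME_LEN];
    for (;;) {                                    /* steps 1-4, repeated */
        adc_read_frame(frame, FRAME_LEN);         /* 1: sample in */
        process_frame(frame, FRAME_LEN);          /* 2-3: run the algorithm */
        dac_write_frame(frame, FRAME_LEN);        /* 4: output */
    }
}
```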

3. Development and applications of DSP chips

3.1 Audio signal processing

DSP chips are used very widely in audio, for audio coding and decoding, audio effects, speech recognition, and more. With suitable digital algorithms, a DSP chip can deliver high-quality audio processing and real-time audio effects, improving the user experience.

For audio coding, DSP chips support decoding and encoding of common formats such as MP3, AAC, and WAV. Compressing and decompressing audio signals shrinks file sizes and improves storage and transmission efficiency.

3.2 Image processing

DSP chips are increasingly important in image processing. Their high-speed, parallel computing capability supports image filtering, edge detection, enhancement, compression, and similar functions. Typical algorithms include the Fourier transform, the discrete cosine transform, edge detection, and image segmentation; all of them can be implemented efficiently on a DSP chip, letting users quickly obtain results for a wide range of image-processing needs.
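As one concrete, hedged example of the edge-detection step mentioned above, the C sketch below applies the 3x3 Sobel operator to a grayscale image; the image layout (row-major, 8-bit) and the function name are assumptions made for illustration.

```c
#include <stdint.h>
#include <stdlib.h>

/* Sobel edge detection on an 8-bit grayscale image (row-major).
 * Border pixels are skipped; clear the output buffer before calling. */
void sobel_edges(const uint8_t *in, uint8_t *out, int width, int height)
{
    static const int gx[3][3] = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} };
    static const int gy[3][3] = { {-1,-2,-1}, { 0, 0, 0}, { 1, 2, 1} };

    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            int sx = 0, sy = 0;
            for (int j = -1; j <= 1; ++j) {
                for (int i = -1; i <= 1; ++i) {
                    int p = in[(y + j) * width + (x + i)];
                    sx += gx[j + 1][i + 1] * p;   /* horizontal gradient */
                    sy += gy[j + 1][i + 1] * p;   /* vertical gradient */
                }
            }
            int mag = abs(sx) + abs(sy);          /* cheap magnitude estimate */
            out[y * width + x] = (uint8_t)(mag > 255 ? 255 : mag);
        }
    }
}
```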

3.3 Communication systems

DSP chips play a key role in communication systems, where signals must be modulated, demodulated, filtered, encoded, and decoded.

DSP Chips: Principles, Development, and Applications

1. What is a DSP chip?

A DSP chip (digital signal processor) is a processor chip dedicated to accelerating the processing and computation of digital signals. It typically consists of high-speed arithmetic units, data memory, and input/output interfaces, giving it fast and efficient signal-processing capability.

DSP chips are widely used in audio, video, communications, radar, medical, and other fields, and are an important tool for real-time signal processing.

2. How a DSP chip works

The working principle of a DSP chip can be summarized in a few steps.

2.1 Signal sampling

The chip first samples the input, converting the continuous analog signal into a discrete digital signal. Common approaches include periodic and non-periodic sampling; choosing an appropriate sampling rate and sampling resolution preserves the essential characteristics of the original signal.
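To make "sampling resolution" concrete, here is a hedged sketch of a uniform quantizer in C: it maps a normalized sample in [-1, 1] to a signed integer code with a chosen number of bits. The function name and the ±1.0 normalization are illustrative assumptions, not taken from the text.

```c
#include <stdint.h>

/* Uniformly quantize a sample in [-1.0, 1.0] to a signed `bits`-bit code.
 * With bits = 12 the codes range from -2048 to 2047; more bits means
 * finer amplitude resolution and less quantization error. */
int32_t quantize_uniform(double sample, unsigned bits)
{
    int32_t max_code = (1 << (bits - 1)) - 1;   /* e.g. 2047 for 12 bits */
    int32_t min_code = -(1 << (bits - 1));      /* e.g. -2048 for 12 bits */

    if (sample > 1.0)  sample = 1.0;            /* clip out-of-range input */
    if (sample < -1.0) sample = -1.0;

    int32_t code = (int32_t)(sample * max_code + (sample >= 0 ? 0.5 : -0.5));
    if (code > max_code) code = max_code;
    if (code < min_code) code = min_code;
    return code;
}
```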

2.2 Digital signal processing

Once the sampled signal has been converted by the ADC (analog-to-digital converter) into digital form, the DSP chip begins processing it. This stage involves a series of algorithms and operations such as filtering, transforms, coding, decoding, and gain control. DSP chips usually integrate multiple arithmetic units, such as multipliers, adders, and shifters, so they can execute these signal-processing algorithms quickly and efficiently.
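Many DSP chips do this arithmetic in fixed point rather than floating point, which is exactly what the multiplier/shifter combination is for. As a hedged sketch, the helper below multiplies two Q15 values (16-bit fractions in [-1, 1)) using a multiply, a rounding add, a shift, and saturation; the naming follows common convention rather than any particular chip's intrinsics.

```c
#include <stdint.h>

/* Multiply two Q15 fixed-point values (int16_t holding value/32768).
 * The 32-bit product is in Q30; adding 2^14 rounds to nearest, shifting
 * by 15 returns to Q15, and the result is saturated to int16_t range. */
static int16_t q15_mul(int16_t a, int16_t b)
{
    int32_t prod = (int32_t)a * (int32_t)b;     /* Q15 * Q15 -> Q30 */
    prod += 1 << 14;                            /* round to nearest */
    prod >>= 15;                                /* back to Q15 */
    if (prod > 32767)  prod = 32767;            /* saturation is only hit by */
    if (prod < -32768) prod = -32768;           /* the (-1.0) * (-1.0) case  */
    return (int16_t)prod;
}
```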

2.3 Data storage

During processing the chip must store its input and output data. Its memory typically comprises program memory, data memory, and registers: program memory holds the chip's software, data memory holds inputs, outputs, and intermediate results, and registers hold temporary values and control information used during computation.

2.4 Output reconstruction

After the processing algorithms finish, the chip passes the output data through a DAC (digital-to-analog converter) to reconstruct a continuous analog signal. The reconstruction stage can include filtering, amplification, and similar steps as needed to obtain a high-quality analog output.

3. Development and applications of DSP chips

With their fast, efficient signal-processing capability, DSP chips are widely used in the following fields.

3.1 Communications

DSP chips are used throughout communication systems, including wireless, mobile, and optical-fiber communications. They handle modulation and demodulation of radio signals, signal compression, and decoding, enabling high-quality audio and video communication.
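As a hedged illustration of digital demodulation, the sketch below performs a simple quadrature (I/Q) downconversion in C: the received samples are multiplied by a local cosine and sine at the carrier frequency, shifting the signal of interest down to baseband; a low-pass filter (such as the FIR sketch earlier) would normally follow. The parameter names are illustrative, and a real receiver adds filtering, decimation, and synchronization.

```c
#include <math.h>
#include <stddef.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Quadrature downconversion: mix the received real signal x[n] with a
 * local oscillator at carrier frequency fc (Hz), sampled at fs (Hz).
 * The outputs i[n], q[n] are the in-phase and quadrature baseband
 * components (they still contain double-frequency terms until they are
 * low-pass filtered). */
void iq_downconvert(const double *x, double *i, double *q,
                    size_t n, double fc, double fs)
{
    for (size_t k = 0; k < n; ++k) {
        double phase = 2.0 * M_PI * fc * (double)k / fs;
        i[k] = x[k] * cos(phase);    /* in-phase mixer */
        q[k] = -x[k] * sin(phase);   /* quadrature mixer */
    }
}
```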

DSP Chips: Principles and Applications (Paper)

Introduction

• A DSP chip (Digital Signal Processor) is a special-purpose integrated circuit used mainly to process digital signals; it plays an important role in applications with demanding real-time requirements.
• This paper introduces the basic principles of DSP chips and surveys their applications across various fields.

Principles of DSP chips

• A DSP chip is hardware dedicated to digital signal processing; its internal architecture and execution model differ from those of a general-purpose microprocessor.
• Through parallel execution, hardware acceleration, and related techniques, a DSP chip delivers efficient digital signal processing.
• Internally, a DSP chip contains an arithmetic logic unit (ALU), a digital signal processing core (DSP core), memory, and other major blocks.

Application areas of DSP chips

1. Communications

• DSP chips play an important role in communications, mainly in wireless communication, audio signal processing, and image and video processing.
• In modems, DSP chips efficiently handle modulation, demodulation, and related signal-processing tasks, providing stable and reliable communication quality.
• In mobile communications, DSP chips are widely used in handsets, base stations, and other equipment to deliver high-speed data transfer, audio processing, speech recognition, and similar functions.

2. Automotive electronics

• DSP chips are also widely used in automotive electronics, for example in in-vehicle entertainment and navigation systems.
• For in-vehicle audio, DSP chips perform noise reduction, sound balancing, and effects processing, improving the listening experience.
• In navigation systems, DSP chips handle speech recognition and command processing, providing accurate and reliable navigation.

3. Video and image processing

• DSP chips are highly valuable in video and image processing, for example in video coding/decoding, image processing, and computer vision.
• In video coding, DSP chips efficiently compress and decompress video, enabling smooth playback.
• In image processing, DSP chips perform filtering, edge detection, image recognition, and other operations, producing more refined results.

4. Industrial automation

• DSP chips are also important in industrial automation, for example in robot control, motion control, and industrial monitoring.
• In robot control, DSP chips handle tasks such as trajectory planning and dynamics control, providing flexible and efficient control.

DSP Chips: Principles, Development, and Applications

1. Overview of DSP chips

A DSP (Digital Signal Processor) chip is an integrated circuit dedicated to digital signal processing. With efficient, fast computation and a specialized instruction set, it can acquire, process, and output digital signals. DSP chips are widely used in audio, video, communications, image processing, and other fields.

2. Principles of DSP chips

Compared with general-purpose microprocessors, DSP chips differ mainly in the following respects.

2.1 Architecture

A DSP chip's architecture typically uses multiple parallel processing units to support complex signal-processing algorithms. A typical DSP chip has three main parts: a control unit, a data unit, and a peripheral controller. The control unit coordinates the operation of the whole system, the data unit executes the algorithmic computation, and the peripheral controller manages communication between the chip and external devices.

2.2 Computational capability

DSP chips have strong computational capability thanks to their dedicated hardware accelerators and instruction sets. They typically provide efficient multiply-accumulate (MAC) units and parallel data paths, allowing several operations to complete in a single clock cycle and speeding up signal processing.

2.3 Specialized instruction set

A DSP chip's instruction set is usually optimized for common signal-processing algorithms such as filtering, transforms, and coding. These instructions operate directly on data and carry out complex computations, reducing programming effort and execution time. A small filtering example follows.
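As a hedged sketch of the kind of filtering kernel such instructions are optimized for, here is a second-order IIR section (a "biquad") in Direct Form II transposed, written in plain C; the coefficient names follow common textbook convention and the values would come from a filter-design tool.

```c
/* One second-order IIR section (biquad), Direct Form II transposed:
 *   y[n] = b0*x[n] + s1
 *   s1   = b1*x[n] - a1*y[n] + s2
 *   s2   = b2*x[n] - a2*y[n]
 * Coefficients are normalized so that a0 = 1. */
typedef struct {
    float b0, b1, b2;   /* feed-forward coefficients */
    float a1, a2;       /* feedback coefficients (a0 assumed to be 1) */
    float s1, s2;       /* filter state */
} biquad_t;

static float biquad_step(biquad_t *f, float x)
{
    float y = f->b0 * x + f->s1;
    f->s1   = f->b1 * x - f->a1 * y + f->s2;
    f->s2   = f->b2 * x - f->a2 * y;
    return y;
}

/* Process a block of samples in place. */
static void biquad_process(biquad_t *f, float *buf, unsigned n)
{
    for (unsigned i = 0; i < n; ++i) {
        buf[i] = biquad_step(f, buf[i]);
    }
}
```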

2.4 Memory structure

DSP chips usually provide dedicated high-speed memory, split into data memory and program memory. Data memory holds input and output data, while program memory holds the program instructions. This structure improves access speed and computational efficiency.

3. Development and applications of DSP chips

3.1 Audio processing

DSP chips are widely used in audio processing, for example in audio coding and decoding, audio enhancement, audio filtering, and effects processing. Using a DSP chip raises both the speed and the quality of audio processing, giving audio devices and applications a better user experience.

3.2 Video processing

DSP chips also play an important role in video processing. In video coding and decoding, for example, they provide efficient compression and decompression algorithms, enabling high-quality transmission and storage of images. They are also used for video enhancement, image processing, and real-time video analytics.

DSP Chips: Principles, Development, and Applications (PDF)

1. Basic principles of DSP chips

• Definition: a digital signal processing (DSP) chip is an integrated circuit dedicated to processing digital signals.
• Function: by processing digital signals, a DSP chip implements filtering, transforms, modulation, demodulation, coding, decoding, compression, decompression, and other mathematical operations and algorithms.
• Architecture: internally, a DSP chip typically comprises a digital-signal-processing core, memory, computation units, a clock/control unit, and I/O interfaces.
• Advantages: compared with general-purpose microprocessors, DSP chips offer higher computation speed and lower power consumption, making them better suited to audio, video, image, and speech signals with demanding real-time requirements.

2. Development tools and environment for DSP chips

• Development tools: common DSP development tools include CCS (Code Composer Studio), Keil MDK (Microcontroller Development Kit), and MATLAB.
• Development environment: DSP development requires a computer plus supporting hardware and software, including the development tools, a compiler, an emulator, and a debugger.
• Development languages: DSP chips are programmed mainly in C and assembly, along with chip-specific languages and instruction sets.
• Basic development steps: project planning, system design, algorithm development, coding, debugging and testing, and performance optimization.

3. Application areas of DSP chips

DSP chips are used widely in many fields; some of the main areas include the following.

3.1 Communications

• Wireless communication systems: DSP chips implement key techniques such as digital modulation, demodulation, coding, decoding, and signal processing, for example in 5G and mobile communication systems.
• Audio and speech processing: DSP chips implement compression, coding/decoding, noise reduction, speech recognition, and speech synthesis for audio and speech signals, and are widely used in phones, headphones, and speakers.

3.2 Audio and video processing

• Multimedia coding and decoding: DSP chips implement key techniques for encoding, transmitting, decoding, and rendering audio/video signals, for example the MP3, AAC, H.264, and H.265 standards.


Title: Hard Core DSP – What it is and how to make it happen
Author: Lynn Patterson
Title: VP Product Development
Date: 6/11/98

OVERVIEW

In recent years Digital Signal Processing technology has been applied to a variety of types of processing applications. Generally these can be classified as non-real-time, soft real-time, and hard-core real-time applications.

Non-real-time DSP refers to applications where the huge FLOP capacity of the DSP is put to work on historical data. The data was collected and archived for processing at a later time. The data is stored on some type of mass storage media and job-processed in a compute center. Some examples are seismic evaluation, image enhancement, and intelligent signal extraction applications.

Soft real-time DSP refers to applications where data arrives to the system from a "sensor" as it is sampled, an algorithm is applied to that data, and results are posted. This process repeats continuously. In a "soft" system, the processing node may not be able to fully process all data without some tuning of the system. This on-the-fly adjustment can be implemented in several ways.

1. The data source can be throttled – that is, there is some handshaking mechanism from the processing back to the source that triggers the source to slow the rate at which it delivers data to the system for processing.
2. The system employs the elasticity in the system buffers to hold the additional data samples over the steady-state rate. Essentially, one or several blocks of data are queued up while one block requires longer for processing than the time line allotted. The ability to allow for oversized buffering space is typically difficult, and typically the worst-case scenario cannot be accounted for.
3. Data is dropped. If the processing node cannot accept the samples or block of samples, it is dropped and is not retrievable.
4. Additional processing nodes are applied to the data stream. This requires the system to have a real-time dynamic architecture.
5. The algorithm applied to the data adjusts to require a reduced processing load under peak conditions. Depending on the nature of the adjustment, this may or may not differentiate an application as soft or hard real-time.

In all of these cases the "performance" of the system may vary over time but the system does not fail. Consider the case where the system throttles itself: the overall performance drops since the system does not run at full speed. For the case of elastic buffers, dropped data may ultimately result if the processing cannot "catch up". Therefore, for cases 2 and 3, if data is dropped, the algorithm has a reduced set of data to work on and it should be expected that the quality of the result is decreased. The fourth case is rarely possible in real-time systems. However, if the system did accommodate this, it is reasonable to assume that the additional processing is taken from another system task and the overall performance of the system is reduced because of that. The final case also implies a decrease in the quality of the result since a reduced algorithm was implemented.

Consider the example of an image inspection system: if throttling is implemented, the full algorithm is applied but the rate of inspection for the system is decreased, and hence so is the performance of the product. If data is dropped, fewer frames are averaged and the quality of the image is reduced. I will assume additional processors cannot be employed, as this is an embedded system that was built to cost guidelines.
A reduced algorithm would result in less accurate analysis of the image, or in processing being skipped on part of an image.

Hard-core DSP refers to applications where there is an absolute guarantee that the processing will keep up with the real-time data flow, even under worst-case conditions. That includes the peak data arrival rates and the most calculation-intensive algorithm conditions. The algorithm may have some loading adjustment built in for quick calculation; this is acceptable if it is part of the system design. Hard-core DSP frequently refers to applications where data flows are fixed by the system requirements and the processing must accommodate them, as opposed to allowing the processing power to define the system performance and adjusting the data. Data is processed in blocks. If the calculations on a previous block are not finished and results posted within the specified time window, input data on the next incoming block is generally lost. In the hard-core world, dropped data usually puts the validity of the system's output at risk; that is, the system is deemed to fail if it cannot process the entire stream of input data. Lastly, the system typically requires a tight coupling of the data to the processing; that is, a very low latency between sampling and processing is required. It is interesting to note that many soft real-time applications must be treated as hard-core designs if the system performance parameters are set at absolute limits.

The rest of this paper outlines several issues that must be considered when architecting a hard-core DSP system based on the SHARC processor. How these areas are affected by the specific system approach where the real-time I/O arrives at the processor via the SHARC serial ports is then considered.

Hard-core Design Issues

When implementing an application with real-time data flow, three main issues must always be addressed. First is the bandwidth on the processor's data buses, second is the latency associated with distributing data throughout the system, and third is the DSP core loading associated with moving the data. These issues will be addressed relative to the SHARC processor when the I/O stream into the processor arrives via the SHARC serial ports.

Processor data bus

The SHARC processor is designed to be clustered in groups of up to 6. When clustered, all of these processors share a common parallel off-chip data bus. This data bus is used for inter-processor communications and accesses to off-chip memory. In addition, the real-time I/O data is often read over this bus. The bandwidth on this external cluster bus is therefore a precious commodity when implementing an application.

With the SHARC processor, there is a second data bus that must be evaluated when considering bus loading: the I/O data bus. This is a parallel data bus internal to the SHARC that carries all the data moved via DMAs in the SHARC. Serial data that is sent and received via DMAs is carried over this bus and is thus worth evaluating.

Data Distribution Latency

Each SHARC processor has two full-duplex serial ports. Each can be programmed to operate as either a standard synchronous serial port or in TDM mode. In TDM mode, data is transmitted in frames with a specific number of time slots. Each slot in every frame contains the data to or from one specific I/O channel. This is repeated every frame. The 1688s presents and receives its data as a TDM stream.
The SHARC can be programmed to receive any slots on the incoming TDM stream and to output any slots on the outgoing TDM stream. All other slots are ignored. For example, one SHARC processor can be programmed to input slots 1 and 2 from the TDM stream into its internal memory, and another SHARC can be programmed to input channels 3 and 4 into its internal memory. Inside the SHARCs, only the data from the specified channels is packed into an input array at consecutive memory addresses. Therefore, the SHARC application is presented only with the data from the channels it is interested in. Another noteworthy point is that the channels that are input/output can be changed at any time, so the application can change on the fly the input or output channels on which it processes. There are no restrictions on the number of SHARCs that can input the same digitized data from the 1688s; that is, any number of SHARCs on the serial chain, from zero to all, can receive the data for any channel. However, for any given output channel, only one SHARC should output data, to avoid contention.

Processor Loading

To operate the SHARC processor serial ports in TDM mode, the serial ports are programmed with several key facts such as frame size and the requested input/output channels. Also, the user can set up chained DMA transfers that continuously input/output data to/from the SHARC processor via the serial port. Typically, a double-buffer scheme is used for the input/output data. An interrupt can trigger the core after each buffer is received. These DMAs are set up once, and there is no additional code overhead required to keep them functioning.
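To show what that double-buffer ("ping-pong") scheme looks like from the application's point of view, here is a hedged, generic C sketch. It deliberately uses hypothetical names (`serial_rx_block_done_isr`, `process_block`) rather than real SHARC register or driver calls: the DMA engine fills one buffer while the core processes the other, and an interrupt at the end of each block swaps the roles.

```c
#include <stdint.h>

#define BLOCK_LEN 256

/* Two receive buffers: the DMA fills one while the core works on the other. */
static int32_t rx_buf[2][BLOCK_LEN];
static volatile int ready_buf = -1;   /* index of the buffer ready to process */

/* Hypothetical application routine that consumes one block of samples. */
extern void process_block(const int32_t *samples, unsigned len);

/* Hypothetical ISR, assumed to run each time the chained DMA finishes
 * filling rx_buf[which]. On a real SHARC this would be tied to the
 * serial-port DMA completion interrupt configured at initialization. */
void serial_rx_block_done_isr(int which)
{
    ready_buf = which;                /* hand the filled buffer to the core */
}

void processing_loop(void)
{
    for (;;) {
        while (ready_buf < 0) {
            /* idle or do background work; the DMA keeps streaming meanwhile */
        }
        int which = ready_buf;
        ready_buf = -1;
        /* The DMA is now filling the other buffer, so this block is stable. */
        process_block(rx_buf[which], BLOCK_LEN);
    }
}
```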
Evaluation of a System Solution for SHARC and Serial-Port Data Systems

A powerful system solution, very applicable for systems with many channels and lower sample rates, can be created by using the SHARC serial ports. Ixthos's products provide systems that integrate various combinations of analog input and output channels and SHARCs in an integrated system solution. Examples of solutions that can be provided in a single VME slot are up to 32 analog input channels (16-bit, 200 kHz) and 16 SHARCs, or 16 analog input and 16 analog output channels (16-bit, 48 kHz) and 16 SHARCs. The following discussion considers the second case (16 inputs, 16 outputs, and 16 SHARCs) relative to the above architecture issues. The module that provides the I/O is referred to as the IXI1688s, and the processor base card is referred to as the IXZ16.

IXZ16/IXI1688s – Processor Data Bus Loading Evaluation

When using the 1688s on any of the IXZ16 cards, there is no loading on the external cluster bus for any of the SHARCs, since the data is delivered to the SHARCs over the serial bus. This means the full cluster bandwidth is available for inter-processor communications and off-chip memory accesses.

The 1688s does add some minimal loading to the internal I/O data bus. This loading depends on the number of channels the specific SHARC is processing. Even if a specific SHARC processes half of the 16 input and 16 output channels, this loading is less than 3% of the I/O data bus's capacity. (I/O data bus capacity is 160 MB/s. 1688s loading is 48k samples/sec/channel * 16 channels * 4 bytes/sample for data + 48k controls/sec per 2 channels * 8 channels * 4 bytes/control for control = 3.9 MB/s. Net loading = 3.9 MB/s / 160 MB/s = 2.5%. Note: on the output serial stream, one control word must be transmitted for every 2 channels.)

IXZ16/1688s – Data Distribution Latency

In the IXZ family, the customer can configure several clusters, or all of the SHARCs on a board, to be ganged on one serial chain. There are also methods to extend this serial chain to other IXZ base cards. Therefore, the user can configure the system to have a variable number of processors all inputting/outputting data off of the same serial chain. This serial chain is received at all processors at essentially the same time. (Only transmission delays skew this; there is no buffering of the data.)

There is no FIFO holding the data output from the 1688s. Each sample is transmitted in the appropriate TDM slot as it is formed by the sigma-delta converter.

The power of this data distribution method is that all processors receive the data essentially simultaneously, no processor needs to be burdened with distributing data to other processors, and multiple processors can receive the same input channel automatically. All of these facts lead to a minimal latency for distributing data to any number of processors in a system.

IXZ16/1688s – Processor Loading

The SHARC processors on the IXZ base card send and receive all data over the serial ports. Therefore, other than an initial setup of the serial ports and the launching of the DMAs, there is NO loading on the DSP core to move the I/O data. The data simply appears in the internal memory of the SHARC and is output from the internal memory of the SHARC. This is a powerful feature!

IXZ16/1688s – System Configuration Notes

With all of the above configurations it is possible to extend the set of processors that have access to the input digital TDM stream. That is, not only the SHARCs on the base card populated with the 1688s module, but other base cards as well, can have direct access to the TDM stream of digitized analog input values; the system scales to additional processing nodes as required.

This product offering is available as a commercial-level product, and in an 8-SHARC configuration for rugged military applications.

Conclusions

When designing a hard-core real-time DSP application, many issues other than counting the FLOPs of the system must be considered: specifically, how data is going to move in the system, what impact this data movement has on valuable system resources, and whether the latency associated with this data distribution is acceptable for the system requirements. The SHARC is a powerful data-moving processor, and by using its full capabilities the best system solution is created. Ixthos has a wide variety of product offerings that integrate I/O and DSP processing for creating lean hard-core systems.
