OpenCV Reference Manual - CvAux Reference Manual

OpenCV 4 Computer Vision Application Programming Manual

Title: The OpenCV 4 Computer Vision Application Programming Manual from a Deep Learning Perspective

Computer vision has become one of the hottest topics in today's technology landscape.

As a technology that uses computers, cameras, and similar devices to recognize, analyze, and process images or video, computer vision is widely applied across many fields.

As an open-source computer vision library, OpenCV plays a pivotal role in this field.

This article evaluates and discusses the OpenCV 4 Computer Vision Application Programming Manual from a deep learning perspective, aiming to give readers a deeper understanding and a starting point for further study.

1. Overview of the OpenCV 4 Computer Vision Application Programming Manual

The OpenCV 4 Computer Vision Application Programming Manual is an in-depth guide covering OpenCV application development and computer vision techniques.

By systematically introducing computer vision fundamentals, the principles of OpenCV application development, and practical case studies, the book aims to help readers master state-of-the-art computer vision techniques and to support their study and work in this field.

Writing from a deep learning perspective, the author gives a comprehensive introduction to the computer vision content of OpenCV 4, including deep learning frameworks, image processing, visual features, and object detection and recognition.

The book also includes many case studies and practical applications, so readers can understand the application scenarios and technical characteristics of OpenCV 4 through hands-on practice.

2. Evaluation from a Deep Learning Perspective

From a deep learning perspective, we first need a comprehensive evaluation of the OpenCV 4 Computer Vision Application Programming Manual.

In terms of both depth and breadth, the book covers the major areas of computer vision, including image processing, object detection, and image recognition, spanning everything from fundamentals to practical applications.

It also describes in detail how deep learning is applied in computer vision, giving readers a complete picture of recent progress in the field.

The writing reflects the author's deep understanding of deep learning: frameworks and algorithmic principles are introduced and analyzed in detail, providing readers with solid background knowledge and practical guidance.

Once again, the numerous case studies and real applications let readers learn, through practice, how deep learning techniques are used in computer vision.

OpenCV Documentation


Matrix and vector manipulation and linear algebra routines (products, solvers, eigenvalues, SVD).

Various dynamic data structures (lists, queues, sets, trees, graphs).


5.2 Accessing matrix elements
  5.2.1 (1) Suppose you need to access element (i, j) of a 2D floating-point matrix.
  5.2.2 (2) Indirect access:
  5.2.3 (3) Direct access (assuming the matrix rows are aligned to 4 bytes):
  5.2.4 (4) Direct access (when there may be alignment gaps between rows):
  5.2.5 (5) Direct access to an initialized matrix:
5.3 Matrix/vector operations
  5.3.1 (1) Matrix-matrix operations:
  5.3.2 (2) Element-wise matrix operations:
  5.3.3 (3) Vector products:
  5.3.4 (4) Single-matrix operations:
  5.3.5 (5) Solving non-homogeneous linear systems:
  5.3.6 (6) Eigenvalues and eigenvectors (square matrices):
6 Video processing
  6.1 Capturing a frame from a video stream
    6.1.1 (1) OpenCV can capture frames from a camera or from a video file (AVI format).
    6.1.2 (2) Initialize a camera capture:
    6.1.3 (3) Initialize a video-file capture:
    6.1.4 (4) Capture a frame:
    6.1.5 (5) Release the capture:
  6.2 Getting/setting video stream information
    6.2.1 (1) Get capture device properties:
    6.2.2 (2) Get frame properties:
    6.2.3 (3) Set the position of the first frame to grab from a video file:
  6.3 Saving a video file
    6.3.1 (1) Initialize a video writer:
    6.3.2 (2) Write frames to the video file:
    6.3.3 (3) Release the video writer:
(A short code sketch of these access patterns follows the outline.)
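The outline above only names the access patterns, so here is a minimal sketch of the matrix-access part, assuming the legacy OpenCV C API (cv.h); the matrix size, indices, and values are arbitrary, and the video-capture calls are shown only as a comment.

/*--------------------------------------------------------------------------*/
#include <cv.h>

int main(void)
{
    /* A 3x4 single-channel 32-bit floating-point matrix. */
    CvMat* M = cvCreateMat(3, 4, CV_32FC1);
    int i = 1, j = 2;
    double v;
    float* row;

    /* (2) Indirect access: cvmGet/cvmSet work on single-channel float matrices. */
    cvmSet(M, i, j, 7.5);
    v = cvmGet(M, i, j);

    /* (4) Direct access that allows for possible row-alignment gaps:
       step is the row stride in bytes, so walk data.ptr first, then cast. */
    row = (float*)(M->data.ptr + i * M->step);
    row[j] = (float)v + 1.0f;

    /* Section 6 (video): with highgui.h the same C API captures frames like this:
         CvCapture* cap   = cvCaptureFromCAM(0);   // or cvCaptureFromFile("in.avi")
         IplImage*  frame = cvQueryFrame(cap);     // frame is owned by the capture
         cvReleaseCapture(&cap);
    */

    cvReleaseMat(&M);
    return 0;
}
/*--------------------------------------------------------------------------*/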

OpenCV User Guide

Contents
1 Introduction
2 The structure of OpenCV
3 Installation and configuration under VC 6
3.1 Installing OpenCV (omitted)
3.2 Configuring the Windows environment variables
4 VC++ environment settings
5 Creating a project to start OpenCV programming
6 Reading and displaying images
7 Accessing image pixels
8 Accessing matrix elements
9 Processing your own data in OpenCV
10 Example programs
10.1 Tracking a rotating point with a Kalman filter
10.2 Background modeling
10.3 Video I/O
10.4 Matrix operations
10.5 Contour detection

1 Introduction
OpenCV (Intel® Open Source Computer Vision Library) is a computer vision library developed by Intel for application developers. It contains a large number of functions for common problems in computer vision, such as motion analysis and tracking, face recognition, 3D reconstruction, and object recognition.

At the time of writing, the latest version of the library is OpenCV 4.0; the library and related materials can be obtained free of charge from /projects/opencvlibrary.

In addition, questions and experiences about using OpenCV can be discussed at /group/OpenCV.

Compared with other image libraries, OpenCV is open source, and developers are free to call the processing functions it provides.

OpenCV contains more than 500 processing functions and has strong image and matrix processing capabilities, which greatly reduces developers' programming workload and improves development efficiency and program reliability.

OpenCV is also highly portable: developers can work on both MS-Windows and Linux, and the library is fast and convenient to use.

2 The structure of OpenCV
OpenCV currently consists of the following parts:
Cxcore: basic functions (basic operations on the various data types, etc.)
Cv: image processing and computer vision (image processing, structural analysis, motion analysis, object tracking, pattern recognition, camera calibration)
Highgui: user interaction (GUI, image and video I/O, system call functions)
Cvaux: experimental functions (view morphing, 3D tracking, PCA, HMM)
There is also cvcam, which has been dropped from the Linux version.
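As a concrete illustration of how these modules divide the work (Cxcore data types plus Highgui image I/O and windows, i.e. the "reading and displaying images" topic from the contents above), here is a minimal sketch assuming the legacy C API; the file name lena.jpg is only a placeholder.

/*--------------------------------------------------------------------------*/
#include <cv.h>
#include <highgui.h>

int main(void)
{
    /* Highgui: load an image from disk (color)... */
    IplImage* img = cvLoadImage("lena.jpg", CV_LOAD_IMAGE_COLOR);
    if (!img)
        return -1;

    /* ...and show it in a window until a key is pressed. */
    cvNamedWindow("demo", CV_WINDOW_AUTOSIZE);
    cvShowImage("demo", img);
    cvWaitKey(0);

    cvDestroyWindow("demo");
    cvReleaseImage(&img);   /* Cxcore: release the IplImage */
    return 0;
}
/*--------------------------------------------------------------------------*/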

OpenCV Reference Manual - CvImage Class Reference

CvImage Class Reference
Before using CvImage, include the cv.h header: #include <cv.h>

Contents: 1 CvImage::CvImage, 2 CvImage::~CvImage, 3 CvImage::clone, 4 CvImage::create, 5 CvImage::release, 6 CvImage::clear, 7 CvImage::attach, 8 CvImage::detach, 9 CvImage::load, 10 CvImage::read, 11 CvImage::save, 12 CvImage::write, 13 CvImage::show, 14 CvImage::is_valid, 15 CvImage::width, 16 CvImage::height, 17 CvImage::size, 18 CvImage::roi_size, 19 CvImage::roi, 20 CvImage::coi, 21 CvImage::set_roi, 22 CvImage::reset_roi, 23 CvImage::set_coi, 24 CvImage::depth, 25 CvImage::channels, 26 CvImage::pix_size, 27 CvImage::data, 28 CvImage::step, 29 CvImage::origin, 30 CvImage::roi_row, 31 operator overloads

CvImage::CvImage
CvImage::CvImage();
CvImage::CvImage(CvSize size, int depth, int channels);
CvImage::CvImage(IplImage* pIplImg);
CvImage::CvImage(const CvImage& cvImg);
CvImage::CvImage(const char* filename, const char* imgname=0, int color=-1);
CvImage::CvImage(CvFileStorage* fs, const char* mapname, const char* imgname);
CvImage::CvImage(CvFileStorage* fs, const char* seqname, int idx);
The constructors create an image; the first form is the default constructor. (A short usage sketch follows.)
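Based only on the constructors and member list above, a usage sketch might look like the following; it assumes the legacy CvImage wrapper that ships with the cv.h-era headers, and lena.jpg is a placeholder file name.

/*--------------------------------------------------------------------------*/
#include <cv.h>        /* the manual above says cv.h is required for CvImage */
#include <highgui.h>   /* for the window used by show() and for cvWaitKey    */
#include <stdio.h>

int main()
{
    /* Constructor listed above:
       CvImage(const char* filename, const char* imgname=0, int color=-1);
       color = -1 loads the image as-is. */
    CvImage img("lena.jpg", 0, -1);

    if (!img.is_valid())                     /* member 14 in the contents */
        return -1;

    printf("%d x %d, %d channel(s)\n",
           img.width(), img.height(), img.channels());

    img.show("CvImage demo");                /* member 13: CvImage::show */
    cvWaitKey(0);
    return 0;                                /* the destructor releases the data */
}
/*--------------------------------------------------------------------------*/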

Introduction to OpenCV

Basic Data Structures
1. The image structure  2. Matrix and vector structures  3. Other data structures  (1) Representing points  (2) Representing rectangle dimensions  (3) Representing rectangles with an offset

Image structure
All images in OpenCV use the same structure, IplImage; the structure itself is described in the slides that follow. IplImage is in fact borrowed from the definition in IPP, the image processing toolkit Intel released earlier. However, since IPP is not an open-source project while OpenCV essentially adopts this rather complex image structure, the copyright implications remain to be examined.
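A short sketch tying the IplImage structure together with the point, size, and rectangle types from the list above, again assuming the legacy C API; the sizes, coordinates, and fill color are arbitrary.

/*--------------------------------------------------------------------------*/
#include <cv.h>

int main(void)
{
    /* Every OpenCV 1.x image is an IplImage: 320x240, 8-bit, 3 channels. */
    IplImage* img = cvCreateImage(cvSize(320, 240), IPL_DEPTH_8U, 3);

    /* (1) a point, (2) rectangle dimensions, (3) a rectangle with an offset */
    CvPoint p  = cvPoint(10, 20);
    CvSize  sz = cvSize(100, 50);
    CvRect  rc = cvRect(p.x, p.y, sz.width, sz.height);

    /* Use the rectangle as a region of interest and fill it. */
    cvSetImageROI(img, rc);
    cvSet(img, cvScalar(0, 255, 0, 0), 0);
    cvResetImageROI(img);

    cvReleaseImage(&img);
    return 0;
}
/*--------------------------------------------------------------------------*/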

OpenCV Basics
1. OpenCV naming conventions  2. Basic data structures  3. Using and manipulating matrices  4. Using and manipulating images  5. Data structures and data operations

OpenCV Naming Conventions
Learning Resources
There are now plenty of resources on OpenCV, and the simplest approach is to search online, for example by typing "OpenCV" into the Google search engine. Some pointers:
(1) Reference manuals. English: open the file <opencv-root>/docs/index.htm. Chinese: see the web page / .
(2) Online resources. Official site: /technology/computing/opencv/ . Chinese official site: / . Software download: /projects/opencvlibrary/ .
(3) Books. See the book "OpenCV教程" (OpenCV Tutorial) published by Beihang University Press.
1. Function naming convention
The general form of a function name is cvActionTargetMod(...), where Action is the core operation (for example Set or Create), Target is the target image area (for example a contour or polygon), and Mod is an optional modifier (for example the variable type).

OpenCV Example Programs User Guide

Contents
1 About OpenCV
2 OpenCV module tests
3 TI official demo program
3.1 Description
3.2 Building the demo
3.3 Running the demo
4 Sobel edge detection
5 Canny edge detection
6 VideoCapture image acquisition

1 About OpenCV
OpenCV stands for Open Source Computer Vision Library.

OpenCV is a cross-platform computer vision library released under the BSD license (open source), and it runs on Linux, Windows, Android, and Mac OS.

It is lightweight and efficient: it consists of a set of C functions and a small amount of C++ code, provides interfaces for languages such as Python, Ruby, and MATLAB, and implements many common algorithms in image processing and computer vision.

The TI AM57x series of processors supports the following 14 OpenCV modules: core, imgproc, calib3d, features2d, objdetect, photo, video, flann, imgcodecs, ml, shape, stitching, superres, videoio.
OpenCV library path: SDK root directory/linux-devkit/sysroots/armv7ahf-neon-linux-gnueabi/usr/lib/
OpenCV header path: SDK root directory/linux-devkit/sysroots/armv7ahf-neon-linux-gnueabi/usr/include/opencv2
TI official test instructions: /index.php/OpenCV_AM57_Test_Instructions

2 OpenCV module tests
This demo program is provided by TI to test each of the OpenCV modules supported by the AM5728.
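Sections 4 to 6 of this guide cover Sobel/Canny edge detection and VideoCapture acquisition. The TI sources are not reproduced here, but a minimal sketch of that kind of demo with the OpenCV C++ API looks roughly as follows; the camera index 0, the Canny thresholds, and the ESC-to-quit convention are assumptions, not taken from the TI code.

/*--------------------------------------------------------------------------*/
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);              // open the default camera
    if (!cap.isOpened())
        return -1;

    cv::Mat frame, gray, edges;
    for (;;) {
        cap >> frame;                     // grab one frame
        if (frame.empty())
            break;

        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::Canny(gray, edges, 50, 150);  // Canny edge detection

        cv::imshow("edges", edges);
        if (cv::waitKey(30) == 27)        // ESC quits
            break;
    }
    return 0;
}
/*--------------------------------------------------------------------------*/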

OpenCV Reference Manual - Machine Learning Reference Manual

CvStatModel::save
Saves the model to a file
void CvStatModel::save( const char* filename, const char* name=0 );
The save method stores the complete model state, under the specified name or a default name (depending on the particular class), in the specified XML or YAML file. The method uses the data persistence functions of cxcore.
If both layouts are supported, the tflag parameter of this function can take the following values: tflag = CV_ROW_SAMPLE means that the feature vectors are stored as rows, while tflag = CV_COL_SAMPLE means that they are stored as columns. The training data must be in 32FC1 (32-bit floating-point, single-channel) format. The responses are usually stored as a vector, one row or one column, in 32SC1 format (classification problems only) or in 32FC1 format, with one value per input feature vector (although some algorithms, such as certain kinds of neural networks, produce a whole response vector per sample). A short sketch of this layout follows.
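As a sketch of this layout together with the save method documented above, assuming the legacy C++ ML classes (CvSVM is used here simply because it is a CvStatModel subclass that supports save(); the sample count, feature values, and default SVM parameters are all made up):

/*--------------------------------------------------------------------------*/
#include <opencv2/core/core.hpp>
#include <opencv2/ml/ml.hpp>     // legacy ML module (CvStatModel and friends)

int main()
{
    // 10 samples with 2 features each, one feature vector per row
    // (the CV_ROW_SAMPLE layout described above), stored as 32FC1.
    cv::Mat train_data(10, 2, CV_32FC1);
    cv::randu(train_data, cv::Scalar::all(0), cv::Scalar::all(1));

    // One response per sample: class labels 0/1 in a single column.
    cv::Mat responses(10, 1, CV_32FC1);
    for (int i = 0; i < 10; ++i)
        responses.at<float>(i, 0) = (float)(i % 2);

    CvSVM svm;
    svm.train(train_data, responses);    // default parameters, row samples

    svm.save("model.xml");               // CvStatModel::save: writes XML/YAML
    return 0;
}
/*--------------------------------------------------------------------------*/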
virtual void write( CvFileStorage* storage, const char* name )=0;
virtual void read( CvFileStorage* storage, CvFileNode* node )=0;
};
In the declaration above, some methods are commented out. In fact, these methods do not share a single API (apart from the default constructor); nevertheless, the syntax and the definitions described later in this section have enough in common that they may as well be part of the base class.
2 Normal Bayes classifier: 2.1 CvNormalBayesClassifier, 2.2 CvNormalBayesClassifier::train, 2.3 CvNormalBayesClassifier::predict
3 K Nearest Neighbors: 3.1 CvKNearest, 3.2 CvKNearest::train, 3.3 CvKNearest::find_nearest, 3.4 Example: using kNN to classify a 2D sample set drawn from a mixture of Gaussians

Cv Reference Manual, Part 2 - Contour Processing

Cv Structural Analysis
The material below is the Chinese translation of the CV section of the OpenCV user reference. Thanks go to Z.M.Zhang for translating the pattern recognition and camera calibration / 3D reconstruction parts, and to Y.C.WEI for the thorough, unified revision of the whole text.

Contents
1 Contour processing functions: 1.1 ApproxChains, 1.2 StartReadChainPoints, 1.3 ReadChainPoint, 1.4 ApproxPoly, 1.5 BoundingRect, 1.6 ContourArea, 1.7 ArcLength, 1.8 CreateContourTree, 1.9 ContourFromContourTree, 1.10 MatchContourTrees
2 Computational geometry: 2.1 MaxRect, 2.2 CvBox2D, 2.3 PointSeqFromMat, 2.4 BoxPoints, 2.5 FitEllipse, 2.6 FitLine, 2.7 ConvexHull2, 2.8 CheckContourConvexity, 2.9 CvConvexityDefect, 2.10 ConvexityDefects, 2.11 PointPolygonTest, 2.12 MinAreaRect2, 2.13 MinEnclosingCircle, 2.14 CalcPGH
3 Planar subdivisions: 3.1 CvSubdiv2D, 3.2 CvQuadEdge2D, 3.3 CvSubdiv2DPoint, 3.4 Subdiv2DGetEdge, 3.5 Subdiv2DRotateEdge, 3.6 Subdiv2DEdgeOrg, 3.7 Subdiv2DEdgeDst, 3.8 CreateSubdivDelaunay2D, 3.9 SubdivDelaunay2DInsert, 3.10 Subdiv2DLocate, 3.11 FindNearestPoint2D, 3.12 CalcSubdivVoronoi2D, 3.13 ClearSubdivVoronoi2D

Contour processing functions

ApproxChains
Approximates Freeman chains with polygonal curves
CvSeq* cvApproxChains( CvSeq* src_seq, CvMemStorage* storage,
                       int method=CV_CHAIN_APPROX_SIMPLE,
                       double parameter=0, int minimal_perimeter=0, int recursive=0 );
src_seq: pointer to the chain, which may refer to other chains.
storage: storage for the resulting polylines.
method: approximation method (see the description of cvFindContours).
parameter: method parameter (currently not used).
minimal_perimeter: only chains whose perimeter is greater than minimal_perimeter are approximated.
(A short example combining several of the routines listed here follows.)
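As a sketch of how several of the routines listed above fit together (cvFindContours feeding ApproxPoly and BoundingRect), assuming the legacy C API; the input file name and the threshold are placeholders.

/*--------------------------------------------------------------------------*/
#include <cv.h>
#include <highgui.h>
#include <stdio.h>

int main(void)
{
    /* cvFindContours needs an 8-bit single-channel image; it also modifies it. */
    IplImage* img = cvLoadImage("shapes.png", CV_LOAD_IMAGE_GRAYSCALE);
    if (!img)
        return -1;
    cvThreshold(img, img, 128, 255, CV_THRESH_BINARY);

    CvMemStorage* storage = cvCreateMemStorage(0);
    CvSeq* contours = 0;

    cvFindContours(img, storage, &contours, sizeof(CvContour),
                   CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));

    for (CvSeq* c = contours; c != 0; c = c->h_next) {
        /* ApproxPoly: simplify each contour with the Douglas-Peucker method. */
        CvSeq* poly = cvApproxPoly(c, sizeof(CvContour), storage,
                                   CV_POLY_APPROX_DP, 3.0, 0);
        CvRect r = cvBoundingRect(poly, 0);   /* BoundingRect from the list above */
        printf("polygon with %d points, bounding box %dx%d\n",
               poly->total, r.width, r.height);
    }

    cvReleaseMemStorage(&storage);
    cvReleaseImage(&img);
    return 0;
}
/*--------------------------------------------------------------------------*/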

CvAux Reference Manual

Contents
1 Stereo correspondence: 1.1 FindStereoCorrespondence
2 View Morphing Functions: 2.1 MakeScanlines, 2.2 PreWarpImage, 2.3 FindRuns, 2.4 DynamicCorrespondMulti, 2.5 MakeAlphaScanlines, 2.6 MorphEpilinesMulti, 2.7 PostWarpImage, 2.8 DeleteMoire
3 3D Tracking Functions: 3.1 3dTrackerCalibrateCameras, 3.2 3dTrackerLocateObjects
4 Eigen Objects (PCA) Functions: 4.1 CalcCovarMatrixEx, 4.2 CalcEigenObjects, 4.3 CalcDecompCoeff, 4.4 EigenDecomposite, 4.5 EigenProjection
5 Embedded Hidden Markov Models Functions: 5.1 CvHMM, 5.2 CvImgObsInfo, 5.3 Create2DHMM, 5.4 Release2DHMM, 5.5 CreateObsInfo, 5.6 ReleaseObsInfo, 5.7 ImgToObs_DCT, 5.8 UniformImgSegm, 5.9 InitMixSegm, 5.10 EstimateHMMStateParams, 5.11 EstimateTransProb, 5.12 EstimateObsProb, 5.13 EViterbi, 5.14 MixSegm

Stereo correspondence

FindStereoCorrespondence
Computes the disparity map for a pair of rectified images
cvFindStereoCorrespondence( const CvArr* leftImage, const CvArr* rightImage,
                            int mode, CvArr* depthImage, int maxDisparity,
                            double param1, double param2, double param3,
                            double param4, double param5 );
leftImage: left image; must be an 8-bit grayscale image.
rightImage: right image; must be an 8-bit grayscale image.
mode: algorithm to use (currently only CV_DISPARITY_BIRCHFIELD is supported).
depthImage: output disparity map, an 8-bit grayscale image.
maxDisparity: maximum possible disparity; the closer an object is, the larger its disparity.
param1, param2, param3, param4, param5: algorithm parameters. param1 is the constant occlusion penalty, param2 is the match reward, param3 defines a highly reliable region (a set of contiguous pixels whose reliability is at least param3), param4 defines a moderately reliable region, and param5 defines a slightly reliable region. If some parameters are omitted, default values are used. In the Birchfield algorithm param1 = 25, param2 = 5, param3 = 12, param4 = 15, param5 = 25 (these values come from "Depth Discontinuities by Pixel-to-Pixel Stereo", Stanford University Technical Report STAN-CS-TR-96-1573, July 1996).
The function cvFindStereoCorrespondence computes the disparity map for two rectified grayscale images.

Example: computing the disparity of an image pair

/*--------------------------------------------------------------------------*/
IplImage* srcLeft  = cvLoadImage("left.jpg", 1);
IplImage* srcRight = cvLoadImage("right.jpg", 1);
IplImage* leftImage  = cvCreateImage(cvGetSize(srcLeft),  IPL_DEPTH_8U, 1);
IplImage* rightImage = cvCreateImage(cvGetSize(srcRight), IPL_DEPTH_8U, 1);
IplImage* depthImage = cvCreateImage(cvGetSize(srcRight), IPL_DEPTH_8U, 1);
cvCvtColor(srcLeft,  leftImage,  CV_BGR2GRAY);
cvCvtColor(srcRight, rightImage, CV_BGR2GRAY);
cvFindStereoCorrespondence( leftImage, rightImage, CV_DISPARITY_BIRCHFIELD,
                            depthImage, 50, 15, 3, 6, 8, 15 );
/*--------------------------------------------------------------------------*/

The images used in this example can be downloaded from /pics/left.jpg and /pics/right.jpg .

View Morphing Functions

MakeScanlines
Calculates scanline coordinates for two cameras from the fundamental matrix
void cvMakeScanlines( const CvMatrix3* matrix, CvSize img_size, int* scanlines1,
                      int* scanlines2, int* lengths1, int* lengths2, int* line_count );
matrix: fundamental matrix.
img_size: size of the image.
scanlines1: pointer to the array of calculated scanlines of the first image.
scanlines2: pointer to the array of calculated scanlines of the second image.
lengths1: pointer to the array of calculated lengths (in pixels) of the first image scanlines.
lengths2: pointer to the array of calculated lengths (in pixels) of the second image scanlines.
line_count: pointer to the variable that stores the number of scanlines.
The function cvMakeScanlines finds the coordinates of scanlines for two images. It returns the number of scanlines. The function does nothing except calculating the number of scanlines if the pointers scanlines1 or scanlines2 are equal to zero.

PreWarpImage
Rectifies an image
void cvPreWarpImage( int line_count, IplImage* img, uchar* dst,
                     int* dst_nums, int* scanlines );
line_count: number of scanlines for the image.
img: image to prewarp.
dst: data to store for the prewarped image.
dst_nums: pointer to the array of lengths of scanlines.
scanlines: pointer to the array of coordinates of scanlines.
The function cvPreWarpImage rectifies the image so that the scanlines in the rectified image are horizontal.
The output buffer of size max(width,height)*line_count*3 must be allocated before calling the function.

FindRuns
Retrieves scanlines from a rectified image and breaks them down into runs
void cvFindRuns( int line_count, uchar* prewarp1, uchar* prewarp2,
                 int* line_lengths1, int* line_lengths2,
                 int* runs1, int* runs2,
                 int* num_runs1, int* num_runs2 );
line_count: number of scanlines.
prewarp1: prewarp data of the first image.
prewarp2: prewarp data of the second image.
line_lengths1: array of lengths of scanlines in the first image.
line_lengths2: array of lengths of scanlines in the second image.
runs1: array of runs in each scanline of the first image.
runs2: array of runs in each scanline of the second image.
num_runs1: array of numbers of runs in each scanline of the first image.
num_runs2: array of numbers of runs in each scanline of the second image.
The function cvFindRuns retrieves scanlines from the rectified image and breaks each scanline down into several runs, that is, series of pixels of almost the same brightness.

DynamicCorrespondMulti
Finds correspondence between two sets of runs of two warped images
void cvDynamicCorrespondMulti( int line_count, int* first, int* first_runs,
                               int* second, int* second_runs,
                               int* first_corr, int* second_corr );
line_count: number of scanlines.
first: array of runs of the first image.
first_runs: array of numbers of runs in each scanline of the first image.
second: array of runs of the second image.
second_runs: array of numbers of runs in each scanline of the second image.
first_corr: pointer to the array of correspondence information found for the first runs.
second_corr: pointer to the array of correspondence information found for the second runs.
The function cvDynamicCorrespondMulti finds correspondence between two sets of runs of two images. Memory must be allocated before calling this function. The memory size for one array of correspondence information is max(width,height)*numscanlines*3*sizeof(int).

MakeAlphaScanlines
Calculates the coordinates of scanlines in an image from the virtual camera
void cvMakeAlphaScanlines( int* scanlines1, int* scanlines2,
                           int* scanlinesA, int* lengths,
                           int line_count, float alpha );
scanlines1: pointer to the array of the first scanlines.
scanlines2: pointer to the array of the second scanlines.
scanlinesA: pointer to the array of the scanlines found in the virtual image.
lengths: pointer to the array of lengths of the scanlines found in the virtual image.
line_count: number of scanlines.
alpha: position of the virtual camera (0.0 - 1.0).
The function cvMakeAlphaScanlines finds the coordinates of scanlines for the virtual camera at the given camera position. Memory must be allocated before calling this function. The memory size for the array of correspondence runs is numscanlines*2*4*sizeof(int).
The memory size for the array of scanline lengths is numscanlines*2*4*sizeof(int).

MorphEpilinesMulti
Morphs two pre-warped images using information about stereo correspondence
void cvMorphEpilinesMulti( int line_count, uchar* first_pix, int* first_num,
                           uchar* second_pix, int* second_num,
                           uchar* dst_pix, int* dst_num,
                           float alpha, int* first, int* first_runs,
                           int* second, int* second_runs,
                           int* first_corr, int* second_corr );
line_count: number of scanlines in the prewarp image.
first_pix: pointer to the first prewarp image.
first_num: pointer to the array of numbers of points in each scanline of the first image.
second_pix: pointer to the second prewarp image.
second_num: pointer to the array of numbers of points in each scanline of the second image.
dst_pix: pointer to the resulting morphed warped image.
dst_num: pointer to the array of numbers of points in each line.
alpha: virtual camera position (0.0 - 1.0).
first: first sequence of runs.
first_runs: pointer to the number of runs in each scanline of the first image.
second: second sequence of runs.
second_runs: pointer to the number of runs in each scanline of the second image.
first_corr: pointer to the array of correspondence information found for the first runs.
second_corr: pointer to the array of correspondence information found for the second runs.
The function cvMorphEpilinesMulti morphs two pre-warped images using information about the correspondence between the scanlines of the two images.

PostWarpImage
Warps the rectified, morphed image back
void cvPostWarpImage( int line_count, uchar* src, int* src_nums,
                      IplImage* img, int* scanlines );
line_count: number of scanlines.
src: pointer to the prewarped virtual image.
src_nums: number of scanlines in the image.
img: resulting unwarped image.
scanlines: pointer to the array of scanline data.
The function cvPostWarpImage warps the resultant image from the virtual camera by storing its rows across the scanlines whose coordinates are calculated by cvMakeAlphaScanlines.

DeleteMoire
Deletes moire in a given image
void cvDeleteMoire( IplImage* img );
img: image.
The function cvDeleteMoire deletes moire from the given image. The post-warped image may have black (uncovered) points because of possible holes between neighboring scanlines. The function deletes moire (black pixels) from the image by substituting neighboring pixels for black pixels. If all the scanlines are horizontal, the function may be omitted.

3D Tracking Functions
This section discusses functions for tracking objects in 3D space using a stereo camera. Besides the C API, there are a DirectShow 3dTracker filter and the wrapper application 3dTracker. Here you may find a description of how to test the filter on sample data.

3dTrackerCalibrateCameras
Simultaneously determines the position and orientation of multiple cameras
CvBool cv3dTrackerCalibrateCameras( int num_cameras,
                                    const Cv3dTrackerCameraIntrinsics camera_intrinsics[],
                                    CvSize checkerboard_size,
                                    IplImage* samples[],
                                    Cv3dTrackerCameraInfo camera_info[] );
num_cameras: the number of cameras to calibrate. This is the size of each of the three array parameters.
camera_intrinsics: camera intrinsics for each camera, such as determined by CalibFilter.
checkerboard_size: the width and height (in number of squares) of the checkerboard.
samples: images from each camera, with a view of the checkerboard.
camera_info: filled in with the results of the camera calibration.
This is passed into 3dTrackerLocateObjects to do tracking.
The function cv3dTrackerCalibrateCameras searches for a checkerboard of the specified size in each of the images. For each image in which it finds the checkerboard, it fills in the corresponding slot in camera_info with the position and orientation of the camera relative to the checkerboard and sets the valid flag. If it finds the checkerboard in all the images, it returns true; otherwise it returns false. This function does not change the members of the camera_info array that correspond to images in which the checkerboard was not found. This allows you to calibrate each camera independently, instead of simultaneously. To accomplish this, do the following:
1. clear all the valid flags before calling this function the first time;
2. call this function with each set of images;
3. check all the valid flags after each call. When all the valid flags are set, calibration is complete.
Note that this method works well only if the checkerboard is rigidly mounted; if it is handheld, all the cameras should be calibrated simultaneously to get an accurate result. To ensure that all cameras are calibrated simultaneously, ignore the valid flags and use the return value to decide when calibration is complete.

3dTrackerLocateObjects
Determines the 3D location of tracked objects
int cv3dTrackerLocateObjects( int num_cameras, int num_objects,
                              const Cv3dTrackerCameraInfo camera_info[],
                              const Cv3dTracker2dTrackedObject tracking_info[],
                              Cv3dTrackerTrackedObject tracked_objects[] );
num_cameras: the number of cameras.
num_objects: the maximum number of objects found by any camera (also the maximum number of objects returned in tracked_objects).
camera_info: camera position and location information for each camera, as determined by 3dTrackerCalibrateCameras.
tracking_info: the 2D position of each object as seen by each camera. Although this is specified as a one-dimensional array, it is actually a two-dimensional array: const Cv3dTracker2dTrackedObject tracking_info[num_cameras][num_objects]. The id field of any unused slots must be -1. Ids need not be ordered or consecutive.
tracked_objects: filled in with the results.
The function cv3dTrackerLocateObjects determines the 3D position of tracked objects based on the 2D tracking information from multiple cameras and the camera position and orientation information computed by 3dTrackerCalibrateCameras. It locates any objects with the same id that are tracked by more than one camera. It fills in the tracked_objects array and returns the number of objects located. The id fields of any unused slots in tracked_objects are set to -1.

Eigen Objects (PCA) Functions
The functions described in this section do PCA analysis and compression for a set of 8-bit images that may not all fit into memory at once.
If your data fits into memory and the vectors are not 8-bit (or you want a simpler interface), use cvCalcCovarMatrix, cvSVD and cvGEMM to do PCA.

CalcCovarMatrixEx
Calculates the covariance matrix for a group of input objects
void cvCalcCovarMatrixEx( int object_count, void* input, int io_flags,
                          int iobuf_size, uchar* buffer, void* userdata,
                          IplImage* avg, float* covar_matrix );
object_count: number of source objects.
input: pointer either to the array of IplImage input objects or to the read callback function, according to the value of the parameter io_flags.
io_flags: input/output flags.
iobuf_size: input/output buffer size.
buffer: pointer to the input/output buffer.
userdata: pointer to the structure that contains all necessary data for the callback functions.
avg: averaged object.
covar_matrix: covariance matrix; an output parameter that must be allocated before the call.
The function cvCalcCovarMatrixEx calculates the covariance matrix of the input object group using a previously calculated averaged object. Depending on the io_flags parameter it may be used either in direct access or callback mode. If io_flags is not CV_EIGOBJ_NO_CALLBACK, the buffer must be allocated before calling the function.

CalcEigenObjects
Calculates the orthonormal eigen basis and the averaged object for a group of input objects
void cvCalcEigenObjects( int nObjects, void* input, void* output, int ioFlags,
                         int ioBufSize, void* userData, CvTermCriteria* calcLimit,
                         IplImage* avg, float* eigVals );
nObjects: number of source objects.
input: pointer either to the array of IplImage input objects or to the read callback function, according to the value of the parameter ioFlags.
output: pointer either to the array of eigen objects or to the write callback function, according to the value of the parameter ioFlags.
ioFlags: input/output flags.
ioBufSize: input/output buffer size in bytes; the size is zero if unknown.
userData: pointer to the structure that contains all necessary data for the callback functions.
calcLimit: criteria that determine when to stop the calculation of eigen objects.
avg: averaged object.
eigVals: pointer to the eigenvalues array in descending order; may be NULL.
The function cvCalcEigenObjects calculates the orthonormal eigen basis and the averaged object for a group of input objects. Depending on the ioFlags parameter it may be used either in direct access or callback mode. Depending on the parameter calcLimit, the calculation is finished either after the first calcLimit.max_iter dominating eigen objects are retrieved or when the ratio of the current eigenvalue to the largest eigenvalue falls below the calcLimit.epsilon threshold. The value calcLimit->type must be CV_TERMCRIT_NUMB, CV_TERMCRIT_EPS, or CV_TERMCRIT_NUMB | CV_TERMCRIT_EPS. The function returns the real values calcLimit->max_iter and calcLimit->epsilon. The function also calculates the averaged object, which must be created previously. Calculated eigen objects are arranged according to the corresponding eigenvalues in descending order. The parameter eigVals may be equal to NULL if eigenvalues are not needed. The function cvCalcEigenObjects uses the function cvCalcCovarMatrixEx.
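The note at the top of this section recommends cvCalcCovarMatrix, cvSVD and cvGEMM as the simpler route when the data fits in memory. A rough sketch of that route follows; the number of samples, the vector length, and the dummy data are invented, and the final projection with cvGEMM is only indicated in a comment.

/*--------------------------------------------------------------------------*/
#include <cv.h>

int main(void)
{
    enum { N = 4, DIM = 16 };
    /* N sample vectors of length DIM, each stored as a 1xDIM float image. */
    IplImage* vecs[N];
    int i;
    for (i = 0; i < N; i++) {
        vecs[i] = cvCreateImage(cvSize(DIM, 1), IPL_DEPTH_32F, 1);
        cvSet(vecs[i], cvScalar(i + 1, 0, 0, 0), 0);   /* dummy data */
    }

    CvMat* cov = cvCreateMat(DIM, DIM, CV_32FC1);
    CvMat* avg = cvCreateMat(1, DIM, CV_32FC1);

    /* Covariance of the sample set around its mean. */
    cvCalcCovarMatrix((const CvArr**)vecs, N, cov, avg,
                      CV_COVAR_NORMAL | CV_COVAR_SCALE);

    /* SVD of the symmetric covariance matrix gives its eigen decomposition:
       W holds the values, the rows of U (transposed) the basis vectors. */
    CvMat* W = cvCreateMat(DIM, 1, CV_32FC1);
    CvMat* U = cvCreateMat(DIM, DIM, CV_32FC1);
    cvSVD(cov, W, U, 0, CV_SVD_U_T);

    /* ... project (sample - avg) onto the leading rows of U with cvGEMM ... */

    cvReleaseMat(&U); cvReleaseMat(&W);
    cvReleaseMat(&avg); cvReleaseMat(&cov);
    for (i = 0; i < N; i++) cvReleaseImage(&vecs[i]);
    return 0;
}
/*--------------------------------------------------------------------------*/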
CalcDecompCoeff
Calculates the decomposition coefficient of an input object
double cvCalcDecompCoeff( IplImage* obj, IplImage* eigObj, IplImage* avg );
obj: input object.
eigObj: eigen object.
avg: averaged object.
The function cvCalcDecompCoeff calculates one decomposition coefficient of the input object using the previously calculated eigen object and the averaged object.

EigenDecomposite
Calculates all decomposition coefficients for an input object
void cvEigenDecomposite( IplImage* obj, int eigenvec_count, void* eigInput,
                         int ioFlags, void* userData, IplImage* avg, float* coeffs );
obj: input object.
eigenvec_count: number of eigen objects.
eigInput: pointer either to the array of IplImage input objects or to the read callback function, according to the value of the parameter ioFlags.
ioFlags: input/output flags.
userData: pointer to the structure that contains all necessary data for the callback functions.
avg: averaged object.
coeffs: calculated coefficients; an output parameter.
The function cvEigenDecomposite calculates all decomposition coefficients for the input object using the previously calculated eigen object basis and the averaged object. Depending on the ioFlags parameter it may be used either in direct access or callback mode.

EigenProjection
Calculates the object projection onto the eigen sub-space
void cvEigenProjection( void* input_vecs, int eigenvec_count, int io_flags,
                        void* userdata, float* coeffs, IplImage* avg, IplImage* proj );
input_vecs: pointer either to an array of IplImage input objects or to a callback function, depending on io_flags.
eigenvec_count: number of eigenvectors.
io_flags: input/output flags; see CalcEigenObjects above.
userdata: pointer to the structure that contains all necessary data for the callback functions.
coeffs: previously calculated decomposition coefficients.
avg: average vector, calculated by cvCalcEigenObjects.
proj: projection onto the eigen sub-space.
The function cvEigenProjection calculates an object projection onto the eigen sub-space or, in other words, restores an object using the previously calculated eigen object basis, the averaged object, and the decomposition coefficients of the restored object. Depending on the io_flags parameter it may be used either in direct access or callback mode.

Embedded Hidden Markov Models Functions
In order to support embedded models, the user must define structures to represent a 1D HMM and a 2D embedded HMM model.

CvHMM
Embedded HMM structure
typedef struct _CvEHMM
{
    int level;
    int num_states;
    float* transP;
    float** obsProb;
    union
    {
        CvEHMMState* state;
        struct _CvEHMM* ehmm;
    } u;
} CvEHMM;
level: level of the embedded HMM. If level == 0, the HMM is most external. In a 2D HMM there are two types of HMM: one external and several embedded. The external HMM has level == 1, embedded HMMs have level == 0.
num_states: number of states in the 1D HMM.
transP: state-to-state transition probability, a square matrix (num_state x num_state).
obsProb: observation probability matrix.
state: array of HMM states. For the last-level HMM, that is, an HMM without embedded HMMs, the HMM states are real.
ehmm: array of embedded HMMs.
If the HMM is not last-level, then the HMM states are not real and they are HMMs.
For the representation of observations the following structure is defined:

CvImgObsInfo
Image observation structure
typedef struct CvImgObsInfo
{
    int obs_x;
    int obs_y;
    int obs_size;
    float** obs;
    int* state;
    int* mix;
} CvImgObsInfo;
obs_x: number of observations in the horizontal direction.
obs_y: number of observations in the vertical direction.
obs_size: length of every observation vector.
obs: pointer to the observation vectors, stored consecutively; the number of vectors is obs_x*obs_y.
state: array of indices of the states assigned to every observation vector.
mix: index of the mixture component corresponding to the observation vector within an assigned state.
