Vision Research Phantom Miro High-Speed Camera Systems


An Efficient Multi-modal Fusion Detection Method Based on Driving Scenes


Journal of Shenyang Ligong University, Vol. 43, No. 3, Jun. 2024
Received: 2023-03-28. Funding: Liaoning Province Key Science and Technology Innovation Base Joint Open Fund (2021-KF-12-05). Authors: Li Dongyu (1999-), male, master's student; Gao Hongwei (1978-), corresponding author, male, professor, Ph.D., whose research interests include computer vision measurement, image processing, and recognition. Article ID: 1003-1251(2024)03-0018-08.

Efficient Multi-modal Fusion Detection Method Based on Driving Scenes

LI Dongyu, WANG Xuna, GAO Hongwei
(School of Automation and Electrical Engineering, Shenyang Ligong University, Shenyang 110159, China)

Abstract: Object detection is an important component of autonomous driving. To address the problem that a single visible-light image cannot meet the demands of real driving-scene detection under low-light conditions, and to further improve detection accuracy, a traffic-scene detection network for fused infrared and visible images is proposed, called AM-YOLOv5 for short. The improved Repvgg structure in AM-YOLOv5 strengthens the network's ability to learn features of fused images. In addition, a self-attention mechanism is introduced at the end of the backbone network and a new spatial pyramid module (SimSPPFCSPC) is proposed to capture information more fully; to raise inference speed, a new convolution (GS convolution) is used at the front of the neck. Experimental results show that AM-YOLOv5 reaches 69.35% mAP0.5 on fused images of the FLIR dataset, a detection-accuracy improvement of 1.66% over the original YOLOv5s without any sacrifice in inference speed.

Keywords: object detection; multi-modal fusion; driving scenes; image fusion
CLC number: TP391.41. Document code: A. DOI: 10.3969/j.issn.1003-1251.2024.03.003

Continuous advances in intelligent transportation, the Internet of Things, and artificial intelligence have driven the rapid development of autonomous driving. Autonomous vehicles must perceive and recognize their surroundings in real time to make correct driving decisions. Object detection algorithms extract information about the current driving scene from visible-light images captured by on-board cameras, including obstacle positions, other vehicles, and pedestrians [1]. Everyday driving involves complex, constantly changing road conditions, and under weak illumination visible images carry insufficient target information [2]; a single visual sensor therefore cannot support all-weather detection, so object detection on multi-modal images has attracted growing attention. Visible images contain rich detail but are easily affected by ambient light; infrared images highlight targets and resist interference but lack environmental detail. A fused visible-infrared image combines the characteristic information of both modalities, which greatly benefits detection, and compared with visible-LiDAR fusion it retains more visual information and is easier to deploy. However, fused visible-infrared images carry more complex feature information than either single modality, placing higher demands on the accuracy and inference speed of detection algorithms.

The YOLO family proposed by Redmon et al. [3], especially YOLOv5, performs excellently at object detection. YOLOv5 uses a CSPDarknet53 backbone, a PANet neck, and the YOLO head. Balancing accuracy with fast detection, it has become representative of single-stage detectors; still, when handling fused visible-infrared images of complex traffic environments, the original network's feature-learning capacity is insufficient. This paper takes YOLOv5 as the base network and improves it for everyday autonomous-driving needs, yielding AM-YOLOv5, to solve object detection on fused images of complex driving scenes. The algorithm adds more efficient structures and modules for feature extraction and learning: a multi-branch learning structure and an improved spatial pyramid module (SimSPPFCSPC) are introduced into the backbone, providing multi-branch learning capacity that improves feature learning without hurting inference speed and raises computational efficiency. For targets that may be occluded in images, a multi-attention C3 module is added to the backbone so the network fully acquires global information and rich context. In addition, GS convolution is used in the neck to optimize network parameters for efficiency and better multi-scale feature learning.

1 AM-YOLOv5

1.1 Overall network structure
AM-YOLOv5 integrates an improved Repvgg module (Replite) into the YOLOv5 backbone, introduces a self-attention mechanism at the backbone's end, and adds the more efficient multi-attention C3 module (C3TR) and SimSPPFCSPC. On this basis, the ordinary convolutions at the front of the neck are replaced with GS convolutions. The structure of AM-YOLOv5 is shown in Fig. 1.

1.2 Efficient feature-extraction backbone

1.2.1 Improved Repvgg module
Backbone design is crucial for detecting objects in fused multi-modal images of driving scenes. Multi-branch training networks [4] achieve higher performance, but increased memory occupancy slows inference. A plain single-path network, having no branches, has weaker feature extraction, but can release the memory occupied by an input as soon as each operation completes, trading performance for higher inference efficiency. The structural re-parameterization proposed by Ding et al. [5] effectively resolves this conflict. As shown in Fig. 2(a), the training structure consists of several Repvgg blocks, each generally comprising a 3×3 convolution, a 1×1 convolution, and an identity branch. In the inference structure, Fig. 2(b), the original multi-branch structure is equivalently transformed by re-parameterization into a single path of 3×3 convolutions. In convolutional networks without re-parameterization, convolution-layer parameters do not change between training and inference; the relevant formulas are:

P_c = (K^2 × C + b) × N    (1)
W_out = (W_in + 2p − w)/s + 1    (2)
H_out = (H_in + 2p − h)/s + 1    (3)
D_out = k    (4)

where P_c is the total parameter count of the convolution layer; K is the kernel size; C the number of input channels; N the number of filters; K^2 × C the number of weights; b the number of biases; W_in and H_in the input dimensions; W_out, H_out, and D_out the output dimensions after convolution; k the number of kernels; p the padding; s the stride; and w and h the kernel dimensions.

Re-parameterization changes the convolution parameters accordingly: the convolution layer is fused with its batch-normalization (BN) layer, and the identity branch is converted, as follows:

W'_i = (γ_i / σ_i) W_i    (5)
b'_i = −(μ_i γ_i) / σ_i + β_i    (6)

where W_i are the pre-conversion convolution weights; σ_i is the BN standard deviation; μ_i the BN running mean; γ_i and β_i the learned BN scale and shift; W'_i and b'_i the fused convolution weights and bias; and i ranges over all channels.

Layers with residual and concatenation connections should not contain an identity branch, which would create more gradient diversity across feature maps [6]. This study therefore proposes an improved Repvgg module that removes the identity branch, keeping only the two convolution branches; it is named the Replite module and replaces the two 3×3 convolution blocks at the end of the YOLOv5 backbone, converting a multi-branch training structure into a single-path inference structure and improving the network's learning of detail features without noticeably increasing inference time. In training, Fig. 3(a), the multi-branch structure of a 3×3 and a 1×1 convolution can be viewed as an ensemble of shallow models, which helps alleviate vanishing gradients and improves model accuracy and training performance. At inference, the same re-parameterization as Repvgg folds the 1×1 convolution into the 3×3 convolution, returning to a single path, Fig. 3(b).

1.2.2 Multi-attention C3 module
To let the network fully acquire global information and rich context and improve detection on fused images, this study, inspired by Vision Transformer, introduces self-attention into YOLOv5 to improve detection of occluded objects in real scenes [7]. The Transformer module, Fig. 4, consists of input embedding, positional encoding, and a Transformer encoder. Each Transformer encoder [8] contains a multi-head attention module, a feed-forward layer, and residual connections; multi-head attention attends more closely to pixels and acquires context, as follows:

MultiHead(Q, K, V) = Concat(head_1, …, head_h) W^O    (7)
head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)    (8)

where MultiHead is the multi-head attention operation; Attention the attention operation; Concat the concatenation operation; W_i^Q ∈ R^{d_model×d_k}, W_i^K ∈ R^{d_model×d_k}, W_i^V ∈ R^{d_model×d_v} (1 ≤ i ≤ h); W^O ∈ R^{h·d_v×d_model} is the output weight matrix; d_model is the model dimension, d_k the key dimension, and d_v the value dimension; K, V, and Q are the keys, values, and queries; and h is the number of heads (parallel attention layers).

The positional encoding is a fully connected layer that preserves position information. The input embedding reshapes and flattens the 2-D image into a sequence fed to the Transformer encoder. The final embedding is obtained by adding the positional encoding to the input embedding, and is then fed to the encoder. This paper combines the C3 module with the Transformer into the multi-attention C3 module (C3TR); only the C3 module at the end of the backbone is replaced by C3TR, i.e., the Bottleneck inside C3 is replaced by the Transformer module. The C3TR structure is shown in Fig. 5. Deploying the Transformer at the low-resolution end of the backbone gives the network richer context and better global information, improving detection accuracy on large objects and influencing all detection heads. Placing it at the backbone's end also avoids excessive memory use, improving detection efficiency while lowering model complexity.

1.2.3 Improved spatial pyramid module
This study proposes an improved spatial pyramid module, SimSPPFCSPC, based on the structures of SPPF and SPPCSPC [6], combining the two and further optimizing them for fused-image detection in driving scenes to achieve better feature learning; its structure is shown in Fig. 6. Features entering SimSPPFCSPC are split into two branches: one applies standard convolutions, the other a fast spatial pyramid pooling (SPPF) structure; the two branches are finally merged. Compared with sequential processing, this structure reduces computation and improves accuracy. In addition, using 3×3 max-pooling layers in the SPPF branch preserves object-feature learning while saving computation and somewhat reducing the risk of overfitting.

Let the input of SimSPPFCSPC be X ∈ R^{C×W×H} and its output Y ∈ R^{C×W×H}, where C is the number of channels and W and H are the width and height of the feature map; let Conv_i (i = 1, …, 7) be the convolution layers and Concat the concatenation layer. Then the output Y can be expressed as:

Y = Conv_7(Concat(Conv_2(X), Y_S))    (9)
Y_M = (MXP_1(X_1), MXP_2(Y_1), MXP_3(Y_2))    (10)
Y_S = Conv_6(Conv_5(Concat(Y_M, X_1)))    (11)
X_1 = Conv_4(Conv_3(Conv_1(X)))    (12)

where Y_S is the output of the SPPF branch; X_1 the input to the SPPF structure; the max-pooling layers MXP_i (i = 1, 2, 3) have 3×3 kernels; Y_1 and Y_2 are the outputs of MXP_1 and MXP_2; and Y_M collects the outputs of all max-pooling branches.

1.3 Performance-balanced neck
In practical applications of fused-image detection for autonomous driving, balancing accuracy and speed is essential. To further raise speed, this paper considered replacing the standard convolutions in the neck with the widely used depthwise-separable convolution (DSC) [9-10] to reduce parameters; the DSC structure is shown in Fig. 7. Image channels are separated during convolution and each is processed independently, lowering the layer's parameter count; however, the loss of inter-channel information weakens feature fusion, which is unfavorable for training and detection on fused images. This paper instead introduces GS convolution [11], Fig. 8, which combines standard convolution with the depthwise (channel-wise, DW) convolution of DSC and finally applies a shuffle operation to redistribute information, promoting the learning of more correct features and making full use of feature information. GS convolution strikes a good balance between speed and accuracy, reducing the network's computation (FLOPs) and parameters without sacrificing performance.

Let the input of GS convolution be X_GS ∈ R^{C×W×H} and its output Y_GS ∈ R^{C×W×H}; let Conv be the ordinary convolution layer, DWC the depthwise convolution layer, Concat the concatenation layer, and SHF the shuffle. Then:

Y_GS = SHF(Concat(Conv(X_GS), DWC(Y_C)))    (13)

where Y_C is the output of the ordinary convolution, and the kernel parameters of Conv and DWC are C×C′×1×1 and C×C′×5×1 respectively, with C′ = C/2. This paper replaces the 1×1 convolutions of the FPN structure in the neck with GS convolutions to balance deployed accuracy and speed. GS convolution is deployed only in the neck: in a convolutional network, spatial information is gradually transferred into the channels, and the spatial compression and channel expansion of feature maps can lose semantic information during this transfer; GS convolution preserves hidden inter-channel connections and thus part of the semantics, but full-network deployment could obstruct data flow and increase inference time, so a partial-deployment strategy is adopted to mitigate this.

2 Experimental results and analysis

2.1 Dataset
The FLIR dataset, released in recent years for training neural networks in the autonomous-driving field, contains visible images and annotated infrared images (captured in daytime and at night), 14,000 in total. To realize multi-modal driving scenes and fully validate the proposed improvements, this study uses the aligned FLIR dataset [12] (https://paperswithcode.com/dataset/flir-aligned), selecting 4,489 visible-infrared image pairs with an even distribution of daytime and nighttime road conditions. All image pairs are fused with a non-subsampled shearlet transform (NSST) image-fusion algorithm [13], and the resulting fused FLIR images form the experimental dataset: 3,476 images for training and 1,013 for validation and testing. Only three common annotation classes are used: car, bicycle, and person; the fused-image annotations are identical to those of the visible and infrared images, and every image is 640×512 pixels.

2.2 Evaluation metrics
The proposed AM-YOLOv5 is compared with representative detectors and evaluated in ablation studies using mean average precision (mAP), computation (FLOPs), parameter count, and inference time. mAP is the mean of per-class AP, with larger values indicating higher accuracy; FLOPs reflects model complexity; inference time is the time needed to infer one image, with smaller values indicating faster inference.

2.3 Implementation details
The NSST algorithm is invoked in Matlab to produce the fused FLIR images. In the detection experiments the network input is 640×640×3; the fused FLIR images are fed to the network for end-to-end weight training, after which the trained model performs detection on the corresponding images. All training starts from pretrained YOLOv5s weights, with an initial learning rate of 0.01, momentum of 0.937, batch size of 8, weight decay of 0.0005, and 100 training epochs; all other parameters follow YOLOv5 defaults. Experiments ran on an Intel i7-11800 CPU, 32 GB RAM, and an NVIDIA RTX 3070 GPU, under Windows 11 with Matlab R2020b, OpenCV 3.4.10, and PyTorch 1.10.1.

2.4 Comparison with other detectors
Different detection algorithms were trained on the fused FLIR images; the results are compared in Table 1. The proposed method reaches 69.35% mAP0.5, about 1.66% above the original YOLOv5s, and surpasses representative algorithms such as SSD [14], CenterNet [15], and Faster-RCNN [16] by more than 14% mAP0.5 each. It also exceeds the mAP0.5 of YOLOXs, indicating that YOLOXs handles fused images poorly.

Table 1  Comparison with other object-detection algorithms on fused images of the FLIR dataset
Network | mAP0.5 / % | FLOPs / 10^9 | Parameters / MB
SSD | 51.86 | 62.8 | 26.29
Faster-RCNN | 52.84 | 939.6 | 28.48
CenterNet | 55.47 | 69.9 | 32.67
YOLOXs | 66.99 | 21.7 | 8.05
YOLOv5s | 67.69 | 15.8 | 7.02
AM-YOLOv5 | 69.35 | 21.2 | 13.41

Fig. 9 compares the visualized detection results of YOLOv5s and AM-YOLOv5 on fused FLIR images; in column (a), yellow boxes mark objects YOLOv5s missed and green boxes mark its false detections. AM-YOLOv5 detects pedestrians, bicycles, and cars that YOLOv5s missed at long range and in weak light, detects occluded objects, improves detection accuracy, and effectively reduces false detections.

2.5 Ablation study
To further demonstrate the improvements for multi-modal driving-scene detection, components were added to or swapped into YOLOv5s on the fused FLIR images; the results are compared in Table 2.

1) Effect of the improved Repvgg module. mAP0.5 on the FLIR dataset rises from 67.69% to 68.50%. The multi-branch structure slightly increases the total parameter count, but because the branches are fused during inference, inference time does not increase and in fact drops; adopting the Replite module for the accuracy gain is worthwhile.

2) Effect of the multi-attention C3 module. Adding C3TR alone improves mAP0.5 by 0.88% over the original YOLOv5s with very small increases in parameters and FLOPs; stacked on configuration B, accuracy rises another 0.41%. Because the module is placed appropriately, it does not occupy much memory: parameters barely change, FLOPs fall by 0.2×10^9, and inference time is nearly identical to the model without it, effectively improving detection efficiency and proving C3TR effective.

3) Effect of the improved spatial pyramid module. Adding SimSPPFCSPC alone improves mAP0.5 by 0.61% over the original model; stacked on configuration F, mAP0.5 rises to 69.27%. Because two branches are combined, parameters and FLOPs increase noticeably; however, relative to a sequential structure this design already reduces computation and raises accuracy, inference time does not grow sharply, and the module's overall gain is larger.

4) Effect of GS convolution. Added alone, the parameter count falls without sacrificing accuracy, thanks to GS convolution's lightweight structure and near-standard-convolution behavior. Stacked on configuration G, FLOPs fall by 0.1×10^9, parameters fall by 0.08 MB, and inference time on FLIR drops slightly. Because GS convolution is used only locally, features are fully exploited, little semantic information is lost, and accuracy rises slightly.

These results show that the modules introduced in this network achieve the expected detection-performance gains.

Table 2  Ablation results on fused images of the FLIR dataset
Method | mAP0.5 / % | mAP0.5:0.95 / % | FLOPs / 10^9 | Parameters / MB | Inference / ms
A  YOLOv5s | 67.69 | 32.31 | 15.8 | 7.02 | 9.3
B  A + Replite | 68.50 | 33.02 | 16.3 | 7.06 | 7.9
C  A + C3TR | 68.57 | 33.18 | 16.1 | 7.06 | 8.8
D  A + SimSPPFCSPC | 68.30 | 32.49 | 21.5 | 13.48 | 9.3
E  A + GSConv | 67.97 | 33.15 | 16.2 | 6.98 | 8.3
F  B + C3TR | 68.91 | 33.15 | 16.1 | 7.06 | 8.7
G  F + SimSPPFCSPC | 69.27 | 32.74 | 21.3 | 13.49 | 9.5
H  G + GSConv | 69.35 | 32.77 | 21.2 | 13.41 | 9.1

2.6 Analysis of AM-YOLOv5 detection results
The visible grayscale images and fused images of the FLIR training set were each fed to AM-YOLOv5 for training; the trained weights were saved and the test-set images were detected, with partial results shown in Fig. 10. Four representative groups are listed. In the first group, because the fused images inherit the advantages of infrared, all pedestrians are clearly highlighted and detected. Fused images also overcome overexposure caused by light reflection and weaken glare: in the second group, oncoming headlights reflect off the road into the area where a pedestrian stands, yet the pedestrian is recognized in the fused image but not in the visible one. The third group moves from a dark tunnel into daylight: vehicles inside the tunnel are recognized in the visible image, but the road beyond shows almost no trace of vehicles, whereas in the fused image the targets in the distant strong light and the environmental details are also visible. In the last group, under normal daylight, all images show good detection performance, but the targets detected in the fused images have higher confidence. By combining the strengths of visible and infrared imaging, the advantage of fused images for object detection is clearly demonstrated, and the proposed algorithm shows good detection performance.

3 Conclusion
This paper proposes AM-YOLOv5, a high-performance object-detection algorithm for multi-modal autonomous-driving scenes. Its backbone adopts the Replite module, converting the multi-branch training structure into a single-path inference structure and improving accuracy without affecting speed; the added C3TR and SimSPPFCSPC modules raise computational efficiency and further improve accuracy; and the new convolution used in the neck balances the network's accuracy and speed well. Compared with the original YOLOv5s on fused FLIR images, AM-YOLOv5 improves mAP0.5 by 1.66% with a slight increase in total parameters but no sacrifice in inference speed, essentially matching expectations.

References
[1] WANG Q. Deep-learning-based perception algorithms for autonomous driving [D]. Hangzhou: Zhejiang University, 2022. (in Chinese)
[2] ZHU W B, YUAN J, ZHU S H, et al. Sequence-enhancement-based human detection and posture recognition of mobile robots in low illumination scenes [J]. Robot, 2022, 44(3): 299-309. (in Chinese)
[3] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: unified, real-time object detection [C]//2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE, 2016: 779-788.
[4] NORKOBIL SAYDIRASULOVICH S, ABDUSALOMOV A, JAMIL M K, et al. A YOLOv6-based improved fire detection approach for smart city environments [J]. Sensors, 2023, 23(6): 3161.
[5] DING X H, ZHANG X Y, MA N, et al. RepVGG: making VGG-style ConvNets great again [C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, TN, USA: IEEE, 2021: 13733-13742.
[6] JIANG K L, XIE T Y, YAN R, et al. An attention mechanism-improved YOLOv7 object detection algorithm for hemp duck count estimation [J]. Agriculture, 2022, 12(10): 1659.
[7] YU N J, FAN X B, DENG T M, et al. Ship detection algorithm in complex backgrounds via multi-head self-attention [J]. Journal of Zhejiang University (Engineering Science), 2022, 56(12): 2392-2402. (in Chinese)
[8] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need [C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, California, USA: ACM, 2017: 6000-6010.
[9] CHOLLET F. Xception: deep learning with depthwise separable convolutions [C]//2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, HI, USA: IEEE, 2017: 1800-1807.
[10] YANG X G, GAO F, LU R T, et al. Lightweight aerial object detection method based on improved YOLOv5 [J]. Information and Control, 2022, 51(3): 361-368. (in Chinese)
[11] HU J E, WANG Z B, CHANG M J, et al. PSG-Yolov5: a paradigm for traffic sign detection and recognition algorithm based on deep learning [J]. Symmetry, 2022, 14(11): 2262.
[12] ZHANG H, FROMONT E, LEFEVRE S, et al. Multispectral fusion for object detection with cyclic fuse-and-refine blocks [C]//2020 IEEE International Conference on Image Processing (ICIP). Abu Dhabi, United Arab Emirates: IEEE, 2020: 276-280.
[13] ZHANG Q. Research on NSST-based infrared and visible image fusion algorithms [D]. Xi'an: Xidian University, 2020. (in Chinese)
[14] LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot MultiBox detector [C]//European Conference on Computer Vision. Cham: Springer, 2016: 21-37.
[15] DUAN K W, BAI S, XIE L X, et al. CenterNet: keypoint triplets for object detection [C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Seoul, Korea (South): IEEE, 2020: 6568-6577.
[16] REN S Q, HE K M, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.
(Responsible editor: He Xiaojun)
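The conv-BN fusion and branch merging used by the paper's Replite re-parameterization can be sketched in NumPy. This is a minimal sketch, not the authors' code: the tensor shapes, the bias-free convolution, and the 1e-5 epsilon are assumptions for illustration.

```python
import numpy as np

def fuse_conv_bn(W, gamma, beta, mu, var, eps=1e-5):
    """Fold a BN layer into the preceding bias-free conv:
    W' = (gamma / sigma) * W,  b' = beta - mu * gamma / sigma."""
    sigma = np.sqrt(var + eps)
    scale = gamma / sigma                       # one factor per output channel
    return W * scale[:, None, None, None], beta - mu * scale

def merge_replite_branches(W3, b3, W1, b1):
    """Merge a 3x3 branch and a 1x1 branch into one equivalent 3x3 conv:
    zero-pad the 1x1 kernel to 3x3 (centered) and sum weights and biases."""
    W1p = np.pad(W1, ((0, 0), (0, 0), (1, 1), (1, 1)))
    return W3 + W1p, b3 + b1
```

After both steps, a two-branch training block collapses into a single 3×3 convolution whose output matches the original block exactly, which is why inference speed does not suffer.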

Product Introduction: Shenzhen Ruishiwei Technology Co., Ltd. (深圳视锐视威科技有限公司)


Main features: uses a 10-30× high-definition color camera.

Stable and reliable, low power consumption, easy to install.

Built-in full-screen OSD menu; compatible with multiple control protocols; adjustable baud rate.

Vector drive technology and precise stepper-motor microstepping enable accurate positioning and smooth, stable image movement.

Some models offer intelligent tracking: cruise tracking, preset-position tracking, and directly triggered tracking.

Infrared range up to 30-150 m.

IP66 waterproof rating; RS485 communication protocol.

Suitable locations: building surveillance, bank security, urban roads, airports, station monitoring, power utilities, warehouses, and similar sites.
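The features above mention RS485 PTZ control with support for multiple protocols and adjustable baud rates. One very common RS485 PTZ protocol is Pelco-D, whose 7-byte command frame can be built as below. The datasheet does not name the specific protocols these cameras support, so Pelco-D here is only a typical example:

```python
def pelco_d_frame(address, cmd1, cmd2, data1, data2):
    """Build a 7-byte Pelco-D frame: sync (0xFF), camera address,
    command 1, command 2, data 1 (pan speed), data 2 (tilt speed),
    and a checksum = sum of the five payload bytes mod 256."""
    body = [address, cmd1, cmd2, data1, data2]
    checksum = sum(body) % 256
    return bytes([0xFF] + body + [checksum])

# Pan right at speed 0x20 for camera address 1 (cmd2 bit 0x02 = pan right)
frame = pelco_d_frame(0x01, 0x00, 0x02, 0x20, 0x00)
```

The resulting bytes would be written to the RS485 serial port at the camera's configured baud rate.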

[Product photos] Main features: uses a 1/3-inch SONY SUPER HAD CCD with DSP digital processing.

• Compact appearance, easy installation.

• 1/3-inch SONY SUPER HAD CCD with DSP digital processing. • 3.6 mm fixed-focus lens.

• Nighttime infrared range of 20-30 m. • High-quality aluminum housing with fine machining and good heat-dissipation design.

Kit contents (4-channel): • 1× 4-channel real-time H.264 DVR (DVR-8404A) • 4× 20 m infrared cameras (IR-287, 420/600 TVL) • 4× 20 m power & video cables • 1× dedicated power supply • 1× 1-to-5 power splitter cable • 1× remote control and 1× USB mouse • 1× 2 TB hard disk (optional)

Kit contents (4/8-channel): • 1× 4/8-channel real-time H.264 DVR (DVR-8404A) • 4× 20 m infrared cameras (IR-287, 420 TVL) • 4× 20 m power & video cables • 1× dedicated power supply • 1× 1-to-5 power splitter cable • 1× remote control and 1× USB mouse • 1× 2 TB hard disk (optional)

Shenzhen Ruishiwei Technology Co., Ltd. Contact: ; Tel: ; Fax: ; Address: Ruishiwei Building, No. 11, Xialingpai Industrial Zone, Dalang Community, Longhua, Bao'an District, Shenzhen. Thank you for watching!

Cognex Vision Software Manual


Now you can get the world's top vision software, no matter which camera, frame grabber, or direct-connect digital standard you use.

Support for all image-capture needs

Broad Camera Support
VisionPro captures images from hundreds of industrial cameras, covering the complete range of video formats and acquisition requirements. Strategic Cognex relationships with major camera suppliers enable early support of new cameras and technology. VisionPro provides open camera support, as well as configuration and diagnostic tools. This enables customers to configure, analyze, and modify cameras for numerous acquisition platforms.

Acquisition Independence
VisionPro software provides pre-configured, tightly integrated acquisition from both Cognex hardware and direct-connect digital cameras. For images from any other source, such as microscopes or 3rd-party frame grabbers, VisionPro provides a flexible acquisition architecture. This allows customers and vision partners to develop custom interfaces for any programmable image source.

Direct-Connect Technology
GigE Vision® acquisition provides a broad range of digital cameras with attractive features and high performance. Direct-connect technology takes advantage of the latest PC architectures to provide reliable image acquisition without a traditional frame grabber.

Frame Grabber
Both Camera Link® and analog frame grabbers can be used with VisionPro. This provides fail-safe image capture with image buffers, advanced error detection, and dedicated multi-channel hardware.

Acquisition Alliance
Cognex maintains strategic relationships with major camera suppliers through our Acquisition Alliance program. Through the Acquisition Alliance, Cognex establishes cooperative sales and marketing efforts and strong technical relationships, leading to rapid integration of new cameras with VisionPro.

Cognex supports hundreds of industrial cameras and video formats covering the complete range of acquisition requirements typically used in machine vision.

TOTAL HARDWARE INDEPENDENCE

Notes:
*1 Cognex Designer is only available with the Development dongle, VC5, CC24 Comm Card, or 8704E GigE Framegrabber
*2 3D Measurement Tools are exclusive to the Cognex Displacement Sensor series

Introduction to Ultra-High-Speed Cameras


Scientists at the Henry Samueli School of Engineering and Applied Science at the University of California have developed an ultra-high-speed camera that can capture 6.1 million photos per second, with a shutter speed as fast as one 440-trillionth of a second.

It will be used to tackle many difficult scientific research problems.

Separately, the US company Vision Research has released the world's fastest 4-megapixel high-speed camera, the Phantom V640: a large 2560×1600 CMOS sensor, a full-frame capture rate of 1,400 frames per second and a maximum rate of 300,000 frames per second, dual independent HD-SDI 4:2:2 and 4:4:4 ports enabling four digital video output modes, and support for 256 GB and 512 GB CineMag storage to meet the needs of very long recordings.

The UCLA camera illuminates the subject with laser beams of different frequencies.

Each pixel of the camera carries an independent signal; after these signals are amplified and processed, an image is formed.

The scientists named this technique STEAM (serial time-encoded amplified microscopy).

STEAM currently offers a resolution of only 3,000 pixels, but the research group is developing a megapixel-class version capable of capturing 100 million images per second.
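The serial time-encoding idea described above, where each spatial position is carried in its own time slot of a single amplified waveform read by one fast detector, can be illustrated with a purely toy 1-D simulation. The scene values and gain below are invented for illustration and are not taken from the article:

```python
import numpy as np

# Toy 1-D scene: reflectivity at 8 spatial positions along one scan line
scene = np.array([0.1, 0.9, 0.4, 0.0, 0.7, 0.2, 0.8, 0.5])

# Dispersion maps each position (wavelength) to its own time slot,
# serializing the whole line into one waveform on a single detector
waveform = scene.copy()            # time slot t carries position t's signal

gain = 100.0                       # optical amplification before sampling
sampled = gain * waveform          # what the fast digitizer records

# Reconstruction: de-serialize the time slots back into a line image
line_image = sampled / gain
```

The point of the amplification step is that a single very fast photodetector, rather than a pixel array, does all the work; the 2-D image is built up line by line.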

Applications: Ultra-high-speed cameras have wide uses. For example, to develop a low-noise washing machine, slow-motion footage can be analyzed to study the vibration of noise-damping parts and build a quieter product; in car crash tests, they can verify whether the way an airbag inflates has harmful effects on the human body.

They can also be used to investigate the causes of machine failures in bottling plants, and to observe in detail processes such as concrete fracture, the beating of insect wings, and the shock waves of explosions, letting people see much of the previously unseen world.

Significance: this research was funded by the US Department of Defense.

The funded projects include research on a flow cytometer, a technology to be used for blood analysis.

Conventional blood analyzers can count flowing blood cells and measure their size, but because of limits on imaging speed, existing technology cannot photograph the cells in detail.

To find diseased cells in the human body, the cells must be imaged.

At present, cell imaging can only be performed on small sampled volumes of blood.

The new technology will be able to image cells directly in fast-flowing blood, helping to detect small numbers of diseased cells in the bloodstream, such as tumor cells, as early as possible.

Technical Specification Comparison for the US Phantom Series of High-Speed Digital Cameras


Per-model specifications (as recoverable from the source table; blank cells were lost in extraction):

v711: 1280×800 @ 7,530 fps (full frame); up to 1,400,000 fps; 1,024,000 pixels
v1210: 1280×800 @ 12,600 fps; up to 820,000 fps
v1610: 1280×800 @ 16,600 fps
v2010: (no further data in source)
v642 Broadcast: 2560×1600 @ 1,450 fps
eX4: 800×600 @ 1,260 fps; up to 111,100 fps; minimum resolution 32×8
Miro3: 800×600 @ 1,200 fps; up to 111,111 fps; 480,000 pixels
Miro-AE: 800×600 @ 1,265 fps; up to 111,110 fps; minimum resolution 128×8
Miro-AE HD: 1920×1080 @ 335 fps; up to 1,390 fps; 2,073,600 pixels; 5.5 μm pixel size
v311: 1280×800 @ 3,250 fps; up to 500,000 fps
v411: 1280×800 @ 4,200 fps; up to 600,000 fps; minimum resolution 128×8; 20 μm pixels; 25.6×16.0 mm sensor; 8/12-bit; ISO 20,000 (mono) / 2,500 (color)
v611: 1280×800 @ 6,242 fps; up to 1,000,000 fps

Trigger options (varying by model): selectable trigger point (pre/post-trigger recording); IBAT; on-camera trigger button; LCD touchscreen buttons; hardware trigger (TTL or +28 VDC, or BNC); software trigger; Burst mode.

CineMag recording times at 4096×2160: at 125 fps, 1 TB (10 min) or 2 TB (20 min); at 24 fps, 1 TB (50 min) or 2 TB (100 min).
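The CineMag recording times quoted above can be sanity-checked with a rough throughput model. The 1.5 bytes per pixel below assumes 12-bit raw pixels packed two-per-three-bytes and ignores format overhead, which is an assumption rather than a published Phantom figure:

```python
def record_minutes(width, height, fps, storage_bytes, bytes_per_px=1.5):
    """Approximate recording time: storage / (pixels per frame * bytes per pixel * fps)."""
    rate = width * height * bytes_per_px * fps   # bytes written per second
    return storage_bytes / rate / 60.0

t_125 = record_minutes(4096, 2160, 125, 1e12)   # ~10 min on a 1 TB CineMag
t_24 = record_minutes(4096, 2160, 24, 1e12)     # ~52 min, close to the quoted 50 min
```

Doubling the storage doubles the time, which matches the 1 TB vs. 2 TB pairs in the table.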

Ruishi Technology: Lorex 4K Ultra HD Exterior Wall Camera Manual


Features:
• 4K (8MP) Ultra HD delivers four times the detail of 1080p¹ for the clearest visual evidence possible (resolution settings must be manually switched to 4K)
• Programmable dual motion-activated LED warning lights warn away would-be intruders
• Remote-triggered siren to discourage trespassing and alert others
• Advanced dual motion detection technology increases accuracy
• The latest H.265 compression technology reduces video file sizes by up to 50% to save valuable hard drive space
• Color Night Vision™ delivers full color nighttime video for improved recognition of people or objects in low light conditions²
• IR night vision range up to 130 ft (40 m) in ambient lighting and 90 ft (27 m) in total darkness³
• Smart IR for improved recognition of close-up objects or people in the dark
• True HDR gives you picture clarity and detail under high-contrast lighting conditions
• Super wide angle 128° field of view (diagonal)
• Built-in microphone and speaker for 2-way talk⁴
• Includes two mounts for multiple indoor and outdoor mounting options
• Simple camera installation using a single CAT5e cable with Power over Ethernet (PoE)
• Weatherproof IP66 rated⁵ with cold-climate capability (-22°F / -30°C)
• Weatherproof Ethernet connector cover for a protective seal against the elements

STOP CRIME BEFORE IT EVEN STARTS
Lorex Active Deterrence cameras offer a new level of security coverage for your home or business. Warn off would-be intruders with dual programmable LED lights, a remote-triggered siren, and 2-way talk. The super wide angle viewing allows you to cover more area with a single camera.

LNB8105X Series: 4K Active Deterrence Network Security Camera. 2-way talk; 128° field of view; 4K (8MP) Ultra High Definition; night vision 130/90 ft (40/27 m).

Package contents: camera, 1× ceiling mount / tabletop stand, 1× wall mount, 1× 60 ft (18 m) CAT5e in-wall rated UL Ethernet cable, 1× mounting kit, quick start guide.

Specifications:
Image sensor: 1/2.5" 8MP. Video format: NTSC / PAL. Effective pixels: H 3840 × V 2160. Resolution: 8MP (3840×2160) @ 15 fps¹. Scan system: progressive. Sync system: internal. S/N ratio: 44 dB (AGC off). Iris: fixed. AES shutter speed: 1/3(4) ~ 1/100,000 s. Min. illumination: 0.7 lux without IR LED, 0 lux with IR LED. Video output: IP. Audio: built-in microphone & speaker⁴. Lens / lens type: 2.8 mm F2.0 / fixed. Field of view (diagonal): 128°. Termination: RJ45 Ethernet / 12 V DC power barrel (optional). IR LED type: 850 nm. Night vision range: 130 ft (40 m) / 90 ft (27 m)³. Color Night Vision™: yes². Power requirement: PoE (Power over Ethernet) / 12 V DC. Power consumption: max. 600 mA / 7.2 W. Operating temperature range: -22°F ~ 122°F / -30°C ~ 50°C. Operating humidity range: <95% RH. Environmental rating: IP66 (indoor / outdoor)⁵. Dimensions (W × D × H) with ceiling mount / tabletop stand: 3.0" × 3.8" × 4.7" / 75 × 98 × 119 mm. Dimensions (W × D × H) with wall mount: 3.0" × 4.4" × 3.1" / 75 × 113 × 78 mm. Weight: 1.4 lbs / 0.64 kg.

Disclaimers:
1. Default resolution settings must be manually changed to 4K (8MP) in order to record or view 4K video. Compatible with select Lorex LNR Series NVRs. For the most up-to-date list of compatible recorders, visit /compatibility
2. Full color nighttime video typically switches to black & white IR night vision below 1 lux to ensure optimal low-light image quality.
3. Stated IR illumination range is based on ideal conditions in typical outdoor nighttime ambient lighting and in total darkness. Actual range and image clarity depend on installation location, viewing area, and the light reflection / absorption level of the object. In low light, the camera will switch to black and white.
4. Audio recording is disabled by default. Audio recording without consent is illegal in certain jurisdictions. Lorex Technology does not assume liability for any use of its products that fails to conform with local laws.
5. Not intended for submersion in water. Installation in a sheltered location recommended.

Setup diagram: camera → PoE switch → HD NVR → router.

© 2019 Lorex Technology. As our product is subject to continuous improvement, Lorex Technology & subsidiaries reserve the right to modify product design, specifications & prices without notice and without incurring any obligation. E&OE. Lorex Corporation, 999 Corporate Blvd. Suite 110, Linthicum, MD 21090, United States. 3-02202019 (19-0072-LOR)
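The H.265 feature above claims up to 50% smaller video files than older codecs. A quick way to see what that means for hard-drive sizing is to divide capacity by bitrate; the 8 Mbps stream bitrate below is a hypothetical example, not a figure from this datasheet:

```python
def recording_days(drive_tb, mbps):
    """Days of continuous 24/7 recording a drive can hold at a given bitrate."""
    bytes_per_day = mbps * 1e6 / 8 * 86400   # bits/s -> bytes/day
    return drive_tb * 1e12 / bytes_per_day

h264_days = recording_days(2, 8.0)        # hypothetical 8 Mbps H.264 stream, 2 TB drive
h265_days = recording_days(2, 8.0 * 0.5)  # same stream at "up to 50%" smaller with H.265
```

Under the 50%-reduction claim, recording time on the same drive simply doubles.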

Design and Implementation of a High-Speed Laser Scanning Control System for a Confocal Microscope


Design and Implementation of a High-Speed Laser Scanning Control System for a Confocal Microscope
Hu Maohai, Yang Xiaochun (School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China)

Abstract: Based on traditional optical scanning, and through the effective combination of a high-frequency resonant scanner and a galvanometer scanner, a novel high-speed laser scanning method with a sampling rate of 4 MS/s is proposed. The hardware platform of the control system was built around a microcontroller, and both the PC-client and microcontroller software were developed. Experimental results show that the control system is fast, stable, and reliable, and can be applied to a confocal microscope to realize real-time scanning imaging.

Journal: Journal of Applied Optics, 2011, 32(4): 797-800.
Keywords: laser scanning; confocal microscope; microcontroller; control system
CLC numbers: TN202; TH742

Laser confocal scanning microscopy offers notably high lateral and axial spatial resolution [1-5] and is widely used in research fields such as the semiconductor industry, materials science, biology, and medicine.
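The abstract above pairs a sinusoidal resonant scanner (fast axis) with a galvanometer ramp (slow axis), sampled at 4 MS/s. The sketch below generates one frame's drive waveforms under that scheme; the 8 kHz resonant frequency, 512-line frame, and bidirectional (one line per half swing) scanning are assumptions for illustration, not values from the paper:

```python
import numpy as np

FS = 4_000_000          # 4 MS/s pixel sampling rate (from the abstract)
F_RES = 8000            # assumed resonant-scanner frequency, Hz
LINES = 512             # assumed lines per frame

# Bidirectional scan: one image line per half period of the resonant mirror
frame_t = LINES / (2 * F_RES)                 # frame duration, s
t = np.arange(int(frame_t * FS)) / FS         # sample clock for one frame

fast = np.sin(2 * np.pi * F_RES * t)          # resonant (fast) axis position
slow = np.linspace(-1.0, 1.0, t.size)         # galvanometer ramp (slow) axis

frame_rate = 1.0 / frame_t                    # resulting frame rate
```

With these assumed numbers, one frame is 128,000 samples long and the system runs at about 31 frames per second; in practice the sinusoidal fast axis also requires pixel-timing correction near the turnaround points, which is part of what such a controller must handle.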

H2-Series High-Performance Wide-Angle Infrared Camera Product Manual


Product line: VB-H761LVE (H2), VB-H760VE (H2), VB-H751LE (H2), VB-M741LE (H2), VB-M740E (H2)

• Wide-angle IR bullet camera with advanced specifications, ideal for use in extreme temperatures and zero-light conditions. It features full-HD image capture, resistance to harsh weather, built-in analytics, exceptional low-light imaging, and Hydrophilic Coating II paint.
• 1/3" CMOS sensor for full-HD images and exceptional performance in low-light conditions
• Built-in infrared LEDs provide detailed monochrome images up to 30 m
• IP66 and NEMA 250 Type 4X compliance for protection against harsh environmental conditions
• Hydrophilic Coating II paint prevents reduced visibility during and after rain and resists fouling by sand or dirt
• Motion Adaptive Noise Reduction and Area-Specific Data Size Reduction (ADSR) technologies for data-size reduction and image optimization
• Intelligent alarm detection with 8 built-in analytics profiles
• Compatible with ONVIF® Profile S and ONVIF® Profile G

All Weather Model; PoE; Intelligent Function; 124.3° horizontal view angle

VB-H751LE (H2)
Compatible accessories: PC640-VB; PA-V18 AC power supply (available with a European 2-pin plug and a UK 3-pin plug)

Package contents: VB-H751LE (H2) main unit; weather shield; "Before using this camera" leaflet; guide with EAC support; safety cable and safety-cable fixing screw; ceiling plate and 4 ceiling-plate fixing screws; wrench; template; multi-cable with waterproof rubber seal (I/O, audio, power); waterproof sealing tape; LAN cable cover; waterproof rubber seal for the LAN cable; cable tie; packing fixture; grounding screw; installation guide; safety precautions; warranty card

All dimensions are in mm (in.)

Sales start date: October 2019

Optional accessories (product name, Mercury code, EAN code):
Pendant mounting kit PC640-VB: 0719C001AA, 4549292043532
Conduit box CB740-VB: 0712C001AA, 4549292043464
AC adapter PA-V18 (E): 8362B002AA, 4960999986791
AC adapter PA-V18 (GB): 8362B003AA, 4960999986319

Measurement / logistics information for network camera VB-H751LE (H2), 3748C001AA:
EA (each): qty 1; 238 × 468 × 289 mm; net 2.6 kg; gross 4.1 kg
CT (carton): qty 2; 488 × 600 × 274 mm; net 5.1 kg; gross 9.6 kg
EP (Euro pallet): no data

Product details: network camera VB-H751LE (H2), Mercury code 3748C001AA, EAN code 4549292149104


Vision Research Phantom Miro models: M / R / LC120; M140 / M340; M / R / LC310; M / R / LC320S; Airborne HD; Airborne; eX4; Miro 3; C210J / C210

Phantom Miro C210J / C210
• 2,000 fps at 1280 × 1024
• 12-bit 1.3-megapixel CMOS sensor
• ISO: monochrome 5,000T / 2,500D; color 640T / 640D
• Hi-G: 170 G shock, 17 Grms vibration
• Small: 73 mm × 73 mm × 72 mm (H × W × D)
• Modular: connect to a Miro Junction Box for multi-camera configurations

The Phantom® Miro® C210J and C210 are small, light, and rugged digital high-speed cameras designed to meet the most demanding applications. The cameras have a 12-bit 1/2-inch CMOS sensor and achieve up to 2,000 frames per second (fps) at a full resolution of 1280 × 1024. They are also modular, and can be connected to a Miro Junction Box to create multi-camera configurations as simple or as complex as needed.

Small, light & rugged: At 73 mm × 73 mm × 72 mm and weighing just over 1 lb (0.5 kg), the C210 and C210J are 3-inch cubes small enough to fit into the tightest places. Strategically placed mounting holes securely position the cameras, even in difficult places. The cameras were tested at 170 G (IAW MIL-STD-810G) for shock and 17 Grms (IAW MIL-STD-202G) for vibration, and are rugged enough to withstand the most challenging environments.

Image protection: The C210J and C210 are specifically designed to protect valuable images if a cable is severed during an experiment. Each camera has local memory, including an internal, non-removable 128 GB CineFlash® for image storage. The cameras can be set to save all images immediately to the non-volatile CineFlash.
An internal battery provides up to 30 minutes of back-up power to allow images to be saved in the event AC power is lost.

Modular and flexible: The Miro C210 has connectors compatible with other Phantom Miro cameras for power and control, allowing it to operate as a stand-alone camera. The Miro C210J is designed to operate through a single cable to Vision Research's new Miro Junction Box (JBox). The JBox is a flexible hub designed not only to operate the cameras but to create almost any multi-camera configuration imaginable. Each JBox can accept up to six cameras, or use any one of the camera ports to daisy-chain or tree-branch to another JBox, significantly increasing the number of cameras in a configuration. The Miro C210, as well as other Phantom Miro cameras, can also connect to the Miro JBox, offering users additional configuration flexibility.

Motion analysis: The Miro C210J and C210 benefit from Vision Research Phantom Camera Control (PCC) software, including the motion-analysis tools in the software. PCC can perform timing, position, distance, velocity, angle, and angular-speed measurements and provide a full suite of calculations on the data.

What's in the box, Miro C210J:
• PCC software and Getting Started manual not included (included with the JBox)

What's in the box, Miro C210:
• Getting Started manual
• Power supply
• Ethernet cable
• MiniBoB
• PCC software

Phantom Flex
Digital cinema camera with extreme flexibility at 2.5K pixel resolution
• Academy Award®-winning technology
• Shoot 10-2,570 fps at 1920 × 1080
• ISO color 1,250T, 1,600D
• HQ Mode for ultimate image quality
• Raw digital and/or video workflow solutions
• Flexible lens solutions
Depending on the shooting mode and resolution, the Flex is capable of shooting from 5 frames-per-second (fps) to over 10,750 fps.The Phantom Flex offers two user-selectable shooting modes, each adapted to aparticular shooting environment. In Standard Mode, the Phantom Flex is just like any other Phantom digital high-speed camera. Shoot at resolutions up to 2560x1600 pixels atanywhere from 10 frames-per-second up to 1,455 frames-per-second (fps). Maximum speed increases as the resolution decreases – up to 2,570 fps at 1920x1080, 5,350 fps at 1280x720 and 10,750 fps at 640x480.In Phantom HQ Mode, Vision Research's proprietary Academy Award® winning image enhancement technology results in electronic image stability for stable blacks, low noise, higher dynamic range and repeatable shots at all settings without the need for pre-shot black references. Maximum frame rates in HQ mode are about half those in Standard mode, which means that in HQ Mode Flex captures images at speeds up to 1,275 fps at 1920x1080 or 2,640 fps at 1280x720.The Phantom Flex supports multiple workflows: a raw digital workflow, a video workflow, or combination of both for maximum control and flexibility.With a video workflow, the high-speed digital camera offers a video signal on the dual-link HD-SDI ports independent of the camera resolution. Set the resolution to 2560x1440 (16:9), and the camera will automatically scale the oversampled image when rendering the video signal. This technique increases the dynamic range and decreases noise in the video signal.The Phantom Flex high-speed digital camera accepts a wide range of industry standard lenses. 
35 mm (PL, Canon EOS, Nikon F), Super 16 mm, and 2/3" lenses are all compatible.

Key features:
• Up to 2,570 fps at 1920 × 1080 in Standard Mode
• 12-bit pixel depth
• HQ Mode provides the ultimate in image stability under changing shooting conditions
• Phantom CineMag & CineMag II compatible; the CineMag interface has a field-replaceable pin array
• 2 × 4:2:2 HD-SDI video ports, configurable as dual-link 4:4:4 video (4:4:4 not available at 60 fps video formats)
• Global electronic shutter to 1 μs (shutter angles in HQ Mode depend on frame rate and resolution)
• Multi-cine capable via segmented memory
• Internal mechanical shutter for hands-free and remote Current Session References
• On-camera controls for camera modes, settings, playback, edit & save
• Frame synchronization to an external signal allows multiple cameras to be synchronized, essential for stereo 3D recording
• Three 12 V DC, 1.5 A auxiliary power outputs for powering external devices (one on the viewfinder port), 4 A maximum load
• External trigger signal on the camera connector panel and both 12 V DC power ports
• Genlock for synchronizing video playback, essential for 3D video workflows

What's in the box:
• Power supply
• Ethernet cable
• Phantom PCC software
• Spare CineMag interface pin array
• Case
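The quoted frame-rate/resolution trade-off (maximum speed rising as resolution falls) roughly follows a constant pixel-throughput model: the sensor can read out only so many pixels per second. The sketch below estimates maximum fps at a reduced resolution from the full-frame spec. This is a rough model of my own, not a Vision Research formula; real cameras deviate from it, especially at small resolutions where per-row readout overheads dominate (e.g. the model overestimates the quoted 5,350 fps at 1280 × 720):

```python
def est_max_fps(base_res, base_fps, res):
    """Constant pixel-throughput model: fps scales inversely with pixel count."""
    throughput = base_res[0] * base_res[1] * base_fps   # pixels read per second
    return throughput / (res[0] * res[1])

# From the Flex full-frame spec of 1,455 fps at 2560 x 1600:
fps_1080 = est_max_fps((2560, 1600), 1455, (1920, 1080))   # model ~2,874 vs quoted 2,570
```

Used the other way around, the same relation explains why halving the line count roughly doubles the achievable frame rate on this class of sensor.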
