A Memory-Efficient KinectFusion Using Octree

Ming Zeng, Fukai Zhao, Jiaxiang Zheng, and Xinguo Liu
State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China, 310058
mingzeng85@, xgliu@

Abstract. KinectFusion is a real-time 3D reconstruction system based on a low-cost moving depth camera and commodity graphics hardware. It represents the reconstructed surface as a signed distance function and stores it in uniform volumetric grids. Though the uniform grid representation has advantages for parallel computation on the GPU, it requires a huge amount of GPU memory. This paper presents a memory-efficient implementation of KinectFusion. The basic idea is to design an octree-based data structure on the GPU and store the signed distance function on its data nodes. Based on the octree structure, we redesign the reconstruction update and surface prediction steps to make full use of GPU parallelism. In the reconstruction update step, we first perform "add nodes" operations in a level-order manner and then update the signed distance function. In the surface prediction step, we adopt a top-down ray tracing method to estimate the surface of the scene. In our experiments, our method uses less than 10% of the memory of KinectFusion while remaining fast. Consequently, our method can reconstruct scenes 8 times larger than the original KinectFusion on the same hardware setup.

Keywords: Octree, GPU, KinectFusion, 3D Reconstruction.

1 Introduction

3D reconstruction of real scenes is an important research area in computer vision and computer graphics. For decades, researchers have developed many methods for efficient and accurate reconstruction. One popular approach reconstructs scenes by fusing depth maps from different views. In particular, Newcombe et al. [9] and Izadi et al. [6] proposed KinectFusion, which uses a commodity depth camera, the Kinect [8], to scan and model the dense surface of a room-size (about 4m × 4m × 4m) scene in real time. The algorithm leverages the parallel computing ability of the modern GPU to track the pose of the Kinect and fuses the depth map of each
frame into a scene volume. KinectFusion uses a uniformly divided volume and stores the data of all voxels in this volume. In a real scene, however, a large amount of space is not occupied by the object's surface, so KinectFusion wastes a great deal of GPU memory, which hinders scene reconstruction at larger scales. To solve this problem, we introduce an octree structure to store the scene data efficiently. Based on the octree, we propose an algorithm to maintain the octree and integrate depth maps online. Our method fully exploits the hierarchical structure of the octree and the parallelism of the GPU. We adopt different traversal manners to update and trace the octree for optimized performance. When updating the volume, we traverse the octree in a level-order manner to exploit GPU parallelism. When predicting the scene surface, we traverse the octree in a top-down way to skip large empty spaces. Benefiting from these careful designs, our method outperforms KinectFusion in both memory and computational efficiency.

* Corresponding author.
S.-M. Hu and R.R. Martin (Eds.): CVM 2012, LNCS 7633, pp. 234–241, 2012.
© Springer-Verlag Berlin Heidelberg 2012

2 Related Work

This section introduces related work on 3D reconstruction and octree construction on the GPU. We survey only the works most relevant to this paper.

3D Reconstruction. 3D reconstruction techniques capture scene information with RGB cameras or depth cameras and then reconstruct the geometry of the scene. Many researchers capture RGB images of a scene and use structure from motion (SfM) and multi-view stereo (MVS) to locate the cameras and recover a sparse point cloud of the scene [2,11]. This kind of method is time consuming and unable to obtain a dense scene surface. Recently, some studies have used RGB sequences to reconstruct the dense surface [12,13,10] of a small scene (about 1m × 1m × 1m) in real time.

Depth cameras are able to capture a depth map of the current scene on each frame.
Leveraging this ability, we can align depth maps to form the whole scene surface. The most popular method for this task is Iterative Closest Point (ICP) [1], which is widely used to register views of depth maps into a global coordinate frame. With ICP, the KinectFusion system proposed by Newcombe et al. [9] and Izadi et al. [6] leverages a depth camera and the GPU to reconstruct a dense surface of a room-size (about 4m × 4m × 4m) scene in real time. Based on KinectFusion, Whelan et al. [15] proposed "Kintinuous", a system that supports modeling of unbounded regions by shifting the volume and extracting meshes continuously. This system uses KinectFusion as a building block and extends it to support large-scale scanning. In contrast to [15], our method improves the core parts of KinectFusion, and it can serve as a building block of large-scale scanning systems like Kintinuous.

Octree Construction on the GPU. An octree adaptively splits the space of the scene according to the scene's complexity, so it uses memory efficiently [3]. Despite the simplicity of its definition, it is hard to maintain with the parallelism of the GPU due to the sparseness of its nodes [16]. Sun et al. [14] built an octree with only leaf nodes to store volume data and accelerated photon tracing based on it. Zhou et al. [16] constructed a whole octree structure on the GPU to accelerate Poisson reconstruction [7].
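The core of each ICP iteration mentioned above, finding the rigid transform that best aligns a set of paired points, has a closed-form solution. The following is an illustrative Python sketch using the standard Kabsch/SVD solution; it is our own illustration of the registration step, not the KinectFusion tracking code.

```python
import numpy as np

def kabsch(P, Q):
    """Closed-form rigid alignment of paired 3D points (3 x N arrays):
    returns (R, t) minimizing ||R @ P + t - Q||. Full ICP alternates
    this step with nearest-neighbor re-pairing of the two point sets."""
    cP = P.mean(axis=1, keepdims=True)       # centroid of source points
    cQ = Q.mean(axis=1, keepdims=True)       # centroid of target points
    H = (P - cP) @ (Q - cQ).T                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

Given exact correspondences, one call recovers the ground-truth rotation and translation; with noisy depth data, the result improves over the iterations of ICP as the pairings are refined.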
3 Overview

The overviews of KinectFusion and our method are shown in Figure 1 (left). KinectFusion contains four main stages: Surface Measurement, Camera Pose Estimation, Reconstruction Update, and Surface Prediction (see [9,6]). Our method adopts a similar flowchart. However, to overcome the large memory consumption of the uniform voxel grid in KinectFusion, we introduce an octree structure to compactly organize the scene data. Based on the octree, we design new algorithms for Reconstruction Update and Surface Prediction that exploit GPU parallelism.

Fig. 1. Overview and data structure of our method. Left: overview of KinectFusion and the improvements of our method; the red dashed rectangle indicates the improved parts. Right: illustration of the octree structure.

In the following sections, we first introduce the octree structure in Section 4, followed by Reconstruction Update and Surface Prediction. In Section 7, we describe implementation details and experimental results, after which we give the conclusion.

4 Octree Structure

Our octree structure is stored in arrays, one array per layer of the octree. As illustrated in Figure 1 (right), there are three kinds of layers in our octree structure: Branch Layer, Middle Layer, and Data Layer. All possible nodes in the branch layer are allocated and can be randomly accessed. Nodes of the other layers are not fully allocated and can only be visited via links between adjacent layers.

– Branch Layer: The branch layer contains all nodes of its layer. Each node stores the index idChild of its first child. If the node has no children, idChild is set to −1.
– Middle Layer: Nodes in middle layers record both idChild and a shuffled xyz key [16]. The shuffled xyz key of a node at depth D is defined as a bit string x1y1z1 x2y2z2 ... xDyDzD, where each 3-bit code xiyizi encodes one of the eight subregions at depth i. idChild can be used to traverse an octree in the
depth-first manner, while the shuffled xyz key can be used for level-order traversal.
– Data Layer: Each node in this layer stores its shuffled xyz key and the scene data [9,6]: the signed distance field value SDF and a weight w.

In this data structure, the branch layer is the starting layer, and it is not necessarily the root layer. We use the symbol OT(T, L) to denote an octree with its branch layer at depth T and its data layer at depth L. Accordingly, OF(T, L) denotes our algorithm based on OT(T, L), and we write OF_n for our algorithm with an n-depth octree. To represent the octree structure, we use the following arrays to organize the above data:

– NodeBuf is a pre-allocated two-dimensional array storing the nodes of each level. The i-th node at level L is stored in NodeBuf[L][i].
– IdStart is a pre-allocated one-dimensional array recording the current starting index of the available buffer of each level, shown as black arrows in the right of Figure 1. The current starting index of level L is recorded in IdStart[L].

Algorithm 1. Add Nodes for the Octree at Level L
 1: // Step 1: Predict whether a node needs further split
 2: for node O_i at level L in parallel do
 3:   if O_i in view frustum
 4:     v_g ← PosFromNode(O_i, L)
 5:     v ← WorldCoord2CamCoord(v_g)
 6:     p ← perspective project vertex v
 7:     dir ← v_g − v_cam
 8:     sdf ← ‖v_cam − v_g‖ − D(p, dir)
 9:     if IsSplit(O_i, sdf) and O_i has no child
10:       ScanIn[i] = 1
11:     else
12:       ScanIn[i] = 0
13:     end if
14:   end if
15: end for
16: // Step 2: Scan the split flag array
17: (ScanOut, nNewNodeCount) ← Scan(ScanIn, +)
18: // Step 3: Assign child index and compute xyz key
19: for O_i at level L in parallel
20:   if ScanIn[i] == 1
21:     idx = IdStart[L+1] + (ScanOut[i] << 3)
22:     NodeBuf[L][i].idChild = idx
23:     key = O_i.xyzkey
24:     for k = 0 to 7 do
25:       NodeBuf[L+1][idx + k].xyzkey = (key << 3) | k
26:     end for
27:   end if
28: end for
29: // Step 4: Update IdStart
30: IdStart[L+1] += 8 ∗ nNewNodeCount

5 Reconstruction Update Based on Octree

There are two operations in the reconstruction update step: adding nodes for new scene data, and updating the SDF data.

5.1 Add Nodes for the Octree

To
add new nodes for the octree, we traverse it in a top-down, level-order manner. We split nodes and assign child indices level by level, starting from the branch layer and moving towards the data layer, one level at a time. For level L, the pseudocode of the procedure is listed in Algorithm 1.

At the first step, we predict in parallel whether each node needs a further split. For each node in the view frustum, we calculate the signed distance sdf from its center to the current depth map (Lines 4 to 8). Then, given sdf, we use the function IsSplit (described later) to predict node splits and mark 1/0 in an auxiliary array ScanIn. At Step 2, we run a parallel prefix sum (Scan) [5] with the add operation "+" on ScanIn to compute the unique id of each newly split node. At Step 3, we assign to each node to be split the index of its first child. At this step, we also compute and store the shuffled xyz key of each new child node from the shuffled xyz key of its parent and the child's relative position among the eight children. Finally, at Step 4, we update the index of the first available node in the memory pool of the child layer.

In the function IsSplit, we assume the scene near the node is approximately planar. Denote the distance between the center of the node and the scene surface as sdf, and the diagonal length of the node as d; then the distances from all points within the node to the surface must lie in [sdf − d/2, sdf + d/2]. Consider further that the distance field stored in the scene volume is truncated to [−U, U], where U is the maximal truncation value. To ensure a correct split prediction, the node splits iff [sdf − d/2, sdf + d/2] intersects with [−U, U], i.e.
sdf ∈ [−U − d/2, U + d/2].

5.2 Update SDF for Octree

After updating the structure of the octree, the current depth map should be integrated into the scene volume to update the signed distance function. Only the nodes at the data layer need to be updated to integrate the depth map. We adopt an integration algorithm similar to that of [6], except that we compute the positions of data nodes from their shuffled xyz keys, while [6] computes them directly from grid indices.

6 Surface Prediction Based on Octree

After the structure and data update of the octree, as in [9,6], a ray caster traverses the scene volume to estimate the scene surface. Algorithm 2 lists the pseudocode.

Algorithm 2. Surface Prediction
 1: for each pixel u in the imaging plane in parallel do
 2:   Raydir ← direction of the ray
 3:   g ← first voxel along Raydir at the finest level
 4:   Raystep ← 0
 5:   while voxel g within volume bounds do
 6:     // determine the marching step
 7:     xyzkey ← Grid2Key(g)
 8:     l ← level of the branch layer
 9:     node ← Key2Node(xyzkey, l, NodeBuf, null)
10:     while node has children do
11:       l++
12:       node ← Key2Node(xyzkey, l, NodeBuf, node)
13:     end while
14:     Raystep ← NextStepLen(g, Raydir, node)
15:     // march forward
16:     gprev ← g
17:     g += Raydir · Raystep
18:     // estimate position and normal
19:     if l == level of the finest layer then
20:       if zero crossing from gprev to g then
21:         P ← estimate surface position
22:         N ← estimate surface normal
23:         break
24:       end if
25:     end if
26:   end while
27: end for
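The step-length computation in Lines 6–14 of Algorithm 2 can be sketched on the CPU as follows: descend from the branch layer along the shuffled xyz key until reaching the deepest allocated node covering the sample position, then take a step equal to that node's edge length. This is a minimal illustration under assumed toy dimensions (branch layer at depth 2, data layer at depth 4); the helper names mirror the pseudocode, but the constants and the dense branch-layer indexing are our assumptions, not the paper's CUDA implementation.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy layout: branch layer at depth 2 (4^3 nodes, fully allocated),
// data layer at depth 4 (a 16^3 virtual grid).
constexpr int kBranchDepth = 2;
constexpr int kDataDepth   = 4;

struct Node { int idChild = -1; };  // -1: node has no children

// Shuffled xyz key x1y1z1 x2y2z2 ... xDyDzD of the finest-level cell (x,y,z).
uint32_t Grid2Key(int x, int y, int z) {
    uint32_t key = 0;
    for (int d = kDataDepth - 1; d >= 0; --d)
        key = (key << 3) |
              ((((x >> d) & 1) << 2) | (((y >> d) & 1) << 1) | ((z >> d) & 1));
    return key;
}

// 3-bit child code the key selects when stepping down to `depth`.
int ChildCode(uint32_t key, int depth) {
    return (key >> (3 * (kDataDepth - depth))) & 7;
}

// Branch-layer index of the key; the branch layer is dense, so the leading
// 3*kBranchDepth bits of the key address it directly.
int BranchIndex(uint32_t key) {
    return static_cast<int>(key >> (3 * (kDataDepth - kBranchDepth)));
}

// Descend from the branch layer to the deepest allocated node covering the
// key and return its level (the Key2Node loop, Lines 8-13 of Algorithm 2).
int DeepestLevel(const std::vector<std::vector<Node>>& nodeBuf, uint32_t key) {
    int level = kBranchDepth;
    const Node* node = &nodeBuf[level][BranchIndex(key)];
    while (node->idChild != -1) {
        ++level;
        node = &nodeBuf[level][node->idChild + ChildCode(key, level)];
    }
    return level;
}

// Marching step in finest-cell units: the edge length of the most compact
// node, so the ray skips the whole unallocated region in one step (Line 14).
int NextStepLen(int level) { return 1 << (kDataDepth - level); }
```

In this toy setup, a ray sampling an empty corner (no children below the branch layer) advances four finest cells per step, while near allocated nodes it falls back to single-cell steps, which is why the ray caster can "bravely" march across large free space.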
Each ray marches forward until it crosses the object's surface. Lines 6 to 14 determine the current number of finest-level steps to march forward. In Line 7, Grid2Key computes the xyzkey. Then, in Lines 8 to 13, with the xyzkey, we adopt a top-down way to find the most compact octree node holding the position g. The most compact node for g contains no surface data, so the ray can "bravely" march across it. Accordingly, in Line 14, the function NextStepLen computes the marching step from the depth (level) of the most compact node. After marching forward, if the ray crosses the surface, we estimate the position and normal of the hit point between the ray and the object's surface.

The position and normal of the intersection point are estimated by trilinear interpolation [9,6]. Normals of points on the two sides of the zero-crossing surface can be calculated by forward/backward differences according to the relative position of the node among its siblings.

7 Implementation and Experiments

7.1 Implementation Details

We have implemented the whole pipeline of our algorithm in C++ with Nvidia CUDA, and tested it on a desktop computer with an Intel Core 2 Duo E7400 2.80 GHz CPU (using one core) and a Nvidia GeForce GTX 480 graphics card. For the operator Scan, we use the implementation provided by the highly optimized GPU library CUDPP [4].

All data structures are stored in global memory. For a layer at depth D of an OT(T, L), the node count N of the GPU memory buffer is

    N = 2^(3D),       D ≤ T
        μ · 2^(3D),   D > T        (1)

where μ is the pre-allocated proportion of all possible nodes at that depth. In our current implementation, μ is set to 0.05 for depths no deeper than 9, and 0.03 for depth 10.

7.2 Experiments

This section compares our method with KinectFusion from three aspects: memory consumption, computation time, and reconstruction of large scenes.

Memory Comparison. We compare memory consumption between KinectFusion and our method on a depth map sequence "chairs" (Figure 2). For
KinectFusion, we adopt the 512^3 resolution (KF512); for our method, we test both a 9-depth (OF9) and a 10-depth octree (OF10). In this experiment, the branch layers of both OF9 and OF10 are set at depth 7. Figure 2 middle shows the memory consumption on each frame of "chairs" with KF512, OF9, and OF10. KF512 costs 512 MB of memory constantly, while OF9 and OF10 increase their memory as the camera scans new parts of the scene. At the beginning, OF9 and OF10 pre-allocate 81.6 MB and 362.6 MB, respectively. Finally, OF9 uses 55.6 MB, and OF10 uses 227.4 MB. At 512^3 resolution, the memory consumption of OF9 is 81.6/512 ≈ 15.9% of KF512. At 1024^3 resolution, KinectFusion would consume 4 GB of GPU memory, which is ≥ 10 times larger than that of our method. Figure 2 right gives details of the memory consumption of each layer.

Time Comparison. We show time comparisons on a test scene in Figure 3, where OF(7,9) processes a frame in about 19 ms, and OF(7,10) processes a frame in less than 25 ms. Both are faster than KF512. OF(7,9) gains about a 2× speedup over KF512 on the improved parts. This is because KF512 needs to update all of its 512^3 voxels, while our method only updates existing nodes and can skip large empty spaces in the ray-casting stage.

Large Scale Scene. Our octree-based method uses memory efficiently, and OF10 can reach a maximal resolution of 1024^3, which is able to robustly track the camera in a large scale scene and reconstruct it. We scan an office in an 8 m × 8 m × 8 m bounding box using OF10, capturing about 3800 frames of depth maps. The scene data costs 299 MB of the 363 MB pre-allocated GPU memory, and the mesh extracted from the scene volume contains 6200K triangle faces and 3400K vertices. As shown in Figure 4, though the size of the scene is large, the reconstructed model still possesses abundant details.

Fig. 2. Memory comparison on a static scene "chairs"

Fig. 3. Processing time for a test scene. Left: the Phong-shaded rendering and the normal map of the test scene, respectively. Right: the processing time of each stage for different
methods.

Fig. 4. Reconstruction result of a large scene and two zoom-in parts

8 Conclusion and Future Work

We propose an octree-based KinectFusion. Our method represents the scene data in an octree structure and maintains this octree through "add node" operations according to changes of the scene. We also modify KinectFusion to adapt to the octree representation so as to highly utilize the parallel computation ability of the GPU. Experiments show that our method costs only about 10% of the memory of the original KinectFusion and runs about 2× faster than the original KinectFusion on the improved parts. Our method can reconstruct 3D scenes 8 times larger than those of KinectFusion.

The system can be extended in several ways. One is to design a more efficient memory management solution to replace the current pre-allocation method. Another possible improvement may come from combining our method with Kintinuous [15]. A straightforward way is to use our method as a building block in Kintinuous, which may also require some modifications both to the data structures and to the volume shifting algorithm of Kintinuous.

Acknowledgments. We would like to thank the anonymous reviewers for their valuable comments, and thank Bo Jiang, Zizhao Wu and Xuan Cheng for proofreading and video editing. This work was partially supported by NSFC (No. 60970074), China 973 Program (No. 2009CB320801), Fok Ying Tung Education Foundation and the Fundamental Research Funds for the Central Universities.

References

1. Chen, Y., Medioni, G.: Object modeling by registration of multiple range images. Image and Vision Computing (IVC) 10(3), 145–155 (1992)
2. Fitzgibbon, A.W., Zisserman, A.: Automatic camera recovery for closed or open image sequences. In: Burkhardt, H.-J., Neumann, B. (eds.) ECCV 1998. LNCS, vol. 1406, pp. 311–326. Springer, Heidelberg (1998)
3. Frisken, S.F., Perry, R.N., Rockwood, A.P., Jones, T.R.: Adaptively sampled distance fields: a general representation of shape for computer graphics. In: Proceedings of the 27th Annual Conference on Computer
Graphics and Interactive Techniques, SIGGRAPH 2000, pp. 249–254. ACM Press/Addison-Wesley Publishing Co., New York (2000)
4. Harris, M., Owens, J.D., Sengupta, S., Zhang, Y., Davidson, A.: CUDPP homepage (2007), /developer/cudpp
5. Harris, M., Sengupta, S., Owens, J.D.: Parallel prefix sum (scan) with CUDA, ch. 39. Addison Wesley (August 2007)
6. Izadi, S., Kim, D., Hilliges, O., Molyneaux, D., Newcombe, R., Kohli, P., Shotton, J., Hodges, S., Freeman, D., Davison, A., Fitzgibbon, A.: KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera. In: Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, UIST 2011, pp. 559–568. ACM, New York (2011)
7. Kazhdan, M., Bolitho, M., Hoppe, H.: Poisson surface reconstruction. In: Proceedings of the Fourth Eurographics Symposium on Geometry Processing, SGP 2006, pp. 61–70. Eurographics Association, Aire-la-Ville (2006)
8. Microsoft: Microsoft Kinect project (2010), /kinect
9. Newcombe, R.A., Izadi, S., Hilliges, O., Molyneaux, D., Kim, D., Davison, A.J., Kohli, P., Shotton, J., Hodges, S., Fitzgibbon, A.: KinectFusion: Real-time dense surface mapping and tracking. In: Proceedings of the IEEE/ACM International Symposium on Mixed and Augmented Reality, pp. 127–136 (2011)
10. Newcombe, R.A., Lovegrove, S., Davison, A.J.: DTAM: Dense tracking and mapping in real-time. In: International Conference on Computer Vision, pp. 2320–2327 (2011)
11. Pollefeys, M., Van Gool, L., Vergauwen, M., Verbiest, F., Cornelis, K., Tops, J., Koch, R.: Visual modeling with a hand-held camera. International Journal of Computer Vision 59(3), 207–232 (2004)
12. Newcombe, R.A., Davison, A.J.: Live dense reconstruction with a single moving camera. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1498–1505 (June 2010)
13. Stühmer, J., Gumhold, S., Cremers, D.: Real-time dense geometry from a handheld camera. In: Goesele, M., Roth, S., Kuijper, A., Schiele, B., Schindler, K. (eds.) DAGM 2010. LNCS, vol. 6376, pp. 11–20. Springer, Heidelberg (2010)
14. Sun, X., Zhou, K., Stollnitz, E., Shi, J., Guo, B.: Interactive relighting of dynamic refractive objects. ACM
Trans. Graph. 27(3), 35:1–35:9 (2008)
15. Whelan, T., McDonald, J., Kaess, M., Fallon, M., Johannsson, H., Leonard, J.J.: Kintinuous: Spatially extended KinectFusion. In: RSS Workshop on RGB-D: Advanced Reasoning with Depth Cameras (July 2012)
16. Zhou, K., Gong, M., Huang, X., Guo, B.: Data-parallel octrees for surface reconstruction. IEEE Transactions on Visualization and Computer Graphics (TVCG) 17(5), 669–681 (2011)
