Reconstruction of 3D Models from Intensity Images and Partial Depth


A Tooth Modeling Method Based on Crown and Root Features


Li Zhanli; Gao Tianyu; Li Hong'an; Wu Fanfei

Abstract: Three-dimensional (3D) scanning and CT-based 3D reconstruction are currently the mainstream approaches to tooth modeling. 3D scanning, however, can only capture 3D data for the crown; the root, which is wrapped by the gum, cannot be acquired. CT-based reconstruction yields a complete tooth model, but the process is time-consuming and costly. To overcome these shortcomings, a tooth modeling method based on crown and root features is proposed. Using odontometric (tooth measurement) data, the method establishes association rules between crown and root data, and constructs the 3D model with Cardinal spline interpolation and a tangent-weighted Laplacian mesh deformation algorithm. Comparing the modeling results with the original data, the experiments show that the method is efficient and feasible.

Journal: Journal of Graphics (《图学学报》), 2017, 38(4), pp. 483-488. Keywords: tooth root; odontometry; 3D modeling; spline curve. Authors' affiliation: College of Computer Science and Technology, Xi'an University of Science and Technology, Xi'an 710054, China. Language: Chinese. CLC number: TP391.

As quality of life improves, people pay increasing attention to their appearance as well as to their health.
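The abstract mentions Cardinal spline interpolation for building contours from sparse measurement points. A minimal sketch of uniform Cardinal spline interpolation follows; the tension value, sample points and sampling density are illustrative assumptions, not values from the paper:

```python
import numpy as np

def cardinal_spline(points, tension=0.5, samples_per_segment=20):
    """Interpolate control points with a uniform Cardinal spline.

    `tension` scales the tangents m_i = tension * (P[i+1] - P[i-1]);
    tension = 0.5 gives the familiar Catmull-Rom spline.
    """
    P = np.asarray(points, dtype=float)
    # duplicate the endpoints so every segment has two neighbouring tangents
    P = np.vstack([P[0], P, P[-1]])
    curve = []
    for i in range(1, len(P) - 2):
        m0 = tension * (P[i + 1] - P[i - 1])   # tangent at segment start
        m1 = tension * (P[i + 2] - P[i])       # tangent at segment end
        t = np.linspace(0.0, 1.0, samples_per_segment, endpoint=False)[:, None]
        h00 = 2 * t**3 - 3 * t**2 + 1          # cubic Hermite basis functions
        h10 = t**3 - 2 * t**2 + t
        h01 = -2 * t**3 + 3 * t**2
        h11 = t**3 - t**2
        curve.append(h00 * P[i] + h10 * m0 + h01 * P[i + 1] + h11 * m1)
    curve.append(P[-2][None, :])               # include the final control point
    return np.vstack(curve)

# Example: a handful of (x, y) samples along a hypothetical root cross-section contour
contour = cardinal_spline([(0, 0), (1, 2), (3, 2.5), (4, 1), (5, 0)])
```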

Digital Image Processing (Gonzalez), Lecture Slides, Chapter 11: Representation and Description

Representation: describing an image in other forms that are more suitable than the image itself.
Benefits:
- Easier to understand
- Requires less memory and is faster to process
- More "ready to be used"
(Figure: building the upper hull -- check the last 3 points from Lupper; "Turn Right OK!" means the triple makes a right turn and is kept.)

Algorithm (cont.)
For the lower side of a convex hull:
7. Put the points p_n and p_(n-1) in a list Llower, with p_n as the first point.
8. For i = n-2 down to 1, ...
The First Difference of a Chain Code
Problem of a chain code: a chain code sequence depends on a starting point.
Solution: treat a chain code as a circular sequence and redefine the starting point so that the resulting sequence of numbers forms an integer of minimum magnitude.
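Both normalizations described above can be made concrete in a few lines; a minimal sketch (the example chain code is illustrative):

```python
def min_magnitude_rotation(code):
    """Rotate a chain code (treated as circular) so it forms the integer of minimum magnitude."""
    rotations = [code[i:] + code[:i] for i in range(len(code))]
    return min(rotations)  # lexicographic min of equal-length digit tuples == minimum integer

def first_difference(code, directions=8):
    """Circular first difference: direction changes between consecutive code elements."""
    return [(code[i] - code[i - 1]) % directions for i in range(len(code))]

chain = (0, 1, 0, 3, 3, 2, 2, 1)          # hypothetical 8-directional chain code
print(min_magnitude_rotation(chain))       # starting-point normalization
print(first_difference(chain))             # rotation-invariant representation
```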
Remove the first and the last points from Llower. Append Llower to Lupper, resulting in the list L, and return L.
(Figure: building the lower hull -- the last 3 points from Llower make a left turn, "Turn Left: not OK!", so the middle point is deleted.)

Project Overview


Cumuli, Panorama, and Vanguard Project Overview

R. Mohr (1), R. Buschmann (2), L. Falkenhagen (3), L. Van Gool (4), and R. Koch (3, 4)

(1) INRIA, ZIRST - 655 avenue de l'Europe, 38330 Montbonnot, France
(2) Siemens AG, ZT IK2, 81730 München, Germany
(3) University of Hannover, Germany
(4) K.U. Leuven, 3001 Leuven, Belgium

In: Reinhard Koch, Luc Van Gool (Eds.): SMILE'98, LNCS 1506, pp. 1-13. Springer-Verlag, Berlin Heidelberg, 1998.

Abstract. This overview summarizes the goals of the European projects Cumuli, Panorama, and Vanguard and references the various contributions in this volume. There are several overlaps between the projects, which all evolve around the geometric analysis of scenes from multiple images. All projects attempt to reconstruct the geometry and visual appearance of complex 3D scenes that may be static or dynamic. While Cumuli and Vanguard deal with images from uncalibrated cameras and unrestricted camera positions for general scenes, Panorama focusses on a highly calibrated setup used to capture 3D person models. Cumuli and Vanguard developed techniques for handling multi-view relations, object tracking and camera calibration, image- and geometry-based view synthesis, and 3D model generation. Interaction with the modeled scene and mixing of virtual and real objects leads to Virtual/Augmented Reality applications in Vanguard and Cumuli, while the Panorama approach is tuned to fully automatic scene analysis for visual communication and 3D telepresence. Visualisation aspects are handled by Vanguard and Panorama with the development of auto-stereoscopic displays.

1 The Cumuli Project

Cumuli ("Computational Understanding of MULtiple Images") is an Esprit Long Term Research project focusing on multi-image geometry and its applications to 3D industrial metrology. It can be seen as a follow-up to the successful Esprit-BRA project Viva, which involved three of Cumuli's academic partners: Lund University (G. Sparr), Inria Sophia-Antipolis (O. Faugeras) and Inria-Imag in Grenoble (R. Mohr). On the industrial side, Cumuli's partners have expertise in image-based metrology: Innovativ Vision Image Systems in Linköping are specialists in motion measurement from high speed image sequences, e.g. car crash testing, and Imetric, based in Courgenay (Switzerland), are world leaders in high precision industrial photogrammetry from still images. The final partner, Fraunhofer IGD in Darmstadt and Munich, specializes in visual modelling for augmented and virtual reality. D. Wang from the Leibniz laboratory in Grenoble has also joined the academic team, bringing his expertise in algebraic proof for geometry.

Viva provided deep insights into the geometry of 3D perception with uncalibrated cameras. Cumuli builds on this, considering on the one hand auto-calibration, Euclidean structure and extensions to more general types of image features, and on the other the special problems introduced by image sequences. It also addresses a new basic research problem: how can we automate the geometric reasoning (consistency, integration of diverse types of information, ...) required to build large 3D models from image data.
1.1 Multi-camera Geometry, Discrete Images

The first part of Cumuli concentrates on the geometry of discrete sets of images of static scenes. The goal is to recover 3D scene structure under different assumptions:

- Unknown camera parameters and scenes: the basic structure of uncalibrated multi-camera geometry has been well studied in the past, so here we focus on a unification of the theory on one side, and practical numerical methods on the other;
- Partial camera or scene knowledge: in many applications some prior information is available (e.g. camera calibrations, the knowledge that points lie on a plane or curve, ...). Here we work on using this knowledge to extend the range or quality of the reconstruction, for example creating methods that allow reconstruction from the minimum amount of image data;
- Non-point-like image features: to date, much of the scientific and technical work has focused on point-like primitives. In many applications lines, curves, and surfaces are also important. The goal is to use the strong geometric constraints that such primitives induce to relax the conditions for reconstruction, and to improve its accuracy.

Some of our recent results improve our understanding of multi-camera geometry, for example [3,9]. Corresponding efficient reconstruction procedures have also been obtained, for instance [6,9].

Autocalibration is a potentially important means of moving towards Euclidean 3D structure. The basic autocalibration constraint has been reformulated in terms of the absolute dual quadric [8,2] and the direction frame formalisms [10], and this has led to much stabler numerical algorithms. Autocalibration has also been considered under weaker conditions, for instance when the skew is the only constant camera parameter [3]. A complete characterization of the camera motions for which autocalibration is necessarily degenerate is given in [7]. Considering geometric features, one major extension is the fact that silhouettes of general surfaces can be used to compute both relative positioning (Euclidean case) and projective reconstruction [4]. Line features have been used in a fast linear algorithm for the affine camera case based on a '1D camera' (vanishing point to image line at infinity projection) [5].

The demonstrator for this part of the project will be integrated by Imetric. It will use the different types of features to initialize a photogrammetric bundle adjustment, for 3D metrology in scenes like the one illustrated in fig. 1. The goal is to allow more flexible system initialization, minimizing the amount of prior information and scene instrumentation required.

Fig. 1. A point scene of an industrial environment.

1.2 3D Perception from Continuous Image Sequences

The second part of the project considers image sequences and non-rigid motion estimation. Although the same underlying theory applies to discrete and continuous images, the practical implementations tend to be rather different. For one thing, the geometry inherited from the discrete case becomes degenerate when the views are very close or the inter-image motions are aligned (see fig. 2).
The appropriate incremental geometry must be developed and integrated with the handling of uncertainty, compensating the degeneracy of close views with the redundancy of many images. A key problem is to specialize the multi-image matching constraints to the continuous case. An important result is that third-order continuous constraints are needed to reconstruct the scene and motion [1]. Tracking tools and several efficient reconstruction methods are also being developed to deal with the large volume of image data.

Innovativ Vision will integrate a demonstrator of the tools coming out of this part of the project. The aim is to reconstruct the non-rigid 3D motion of points on a car or a test dummy during a car crash test, with as little prior calibration as possible. Fig. 2 shows some typical data provided by a high speed camera. Some camera vibration occurs, so static points in the scene will be tracked to provide a reference frame.

Fig. 2. Image of a car crash sequence.

1.3 Algebraic Geometric Reasoning from Image Data

The final part of Cumuli addresses a new area: the use of automated algebraic and geometric reasoning tools in visual reconstruction. The geometric models in computer vision are more than just coordinates: they are complex networks of incidence relations, constraints, etc. To build such models from weak or uncertain data, we need methods that make efficient use of any known constraints on the environment, to reduce the uncertainty and improve the consistency and quality of the generated models. Often, many such constraints are available. They are often used informally when building models by hand, but it is currently difficult to automate this process. In particular, we are studying the integration of digital site maps with vision data, as these are an important source of constraints in urban modelling (see The Use of Reality Models for Augmented Reality Applications).

Another main focus is the use of automated geometric reasoning techniques for computer vision applications. However, although the need to manipulate abstract geometric knowledge (the constraints) strongly motivates their use, they must be adapted to the particular requirements of computer vision. One illustration of this approach is the automatic derivation of minimal parametrizations of constrained 3D geometric models (see Imposing Euclidean Constraints during the Self-Calibration Process). This is a crucial step towards enforcing the constraints, but given the intrinsic uncertainty and the potential complexity of the 3D models involved in computer vision (many primitives, complicated constraints), to actually impose the constraints the algebraic techniques need to be coupled to efficient numerical methods (see Euclidean and Affine Structure/Motion for Uncalibrated Cameras from Affine Shape and Subsidiary Information).

2 The Panorama Project

This paper gives an overview of the ACTS Panorama project and, additionally, a more detailed look at the 3-D reconstruction approach developed in Panorama. The Panorama consortium consists of 14 European partners from universities, research institutes and industry. The objective of the Panorama project is to enhance the visual information exchange in telecommunications with 3-D telepresence. The main challenges of Panorama are:

- a multiview camera and an autostereoscopic display which allow for movement of the observer (multi-viewpoint capability);
- a real-time imaging system using special purpose hardware for analysis, vector coding and interpolation of intermediate views;
- an imaging system based on offline image analysis using 3D model objects and state-of-the-art 3D graphics computers;
- application studies in field trials.
2.1 Goals of Panorama

The ultimate goal for future telecommunication is highly effective interpersonal information exchange. The effectiveness of telecommunication is greatly enhanced by 3-D telepresence. In this concept it is crucial that visual information is presented in such a way that the viewer is under the impression of actually being physically close to the party with whom the communication takes place. Existing systems realise 3-D telepresence by stereoscopic imaging and display technologies. A natural 3-D impression is achieved if the camera positions correspond to the observer's eye positions, and if the observer is stationary.

The objective of the Panorama project is to enhance the visual information exchange in telecommunication by alleviating two major drawbacks of existing 3-D telepresence systems. In the first place, an autostereoscopic display is realised that spatially separates the left and right view images according to the observer's eye positions, instead of the current techniques which require wearing polarised or active LCD glasses. Secondly, the communication system allows for movements of the observer while providing a 3-D view, which makes it possible to look around objects.

In order to look around objects, intermediate images are synthesised at the receiver side appropriate to the observer's head position. The synthesis of intermediate views uses the captured trinocular image sequences and 3-D scene information that is obtained by means of image analysis. Two different image synthesis approaches are considered in Panorama. A high-quality but complex off-line system based on 3-D reconstruction of dynamic scenes is realised that uses explicit 3-D models. With this approach, synthesis of intermediate images is carried out on-line using existing 3-D computer graphics hardware. On the other hand, a real-time system is developed which is composed of a disparity estimator, a disparity field, video and audio codecs for transmission over the Dutch and German National Hosts, and disparity-compensated image synthesis of intermediate views.

The Panorama project demonstrates its achievements by application studies which are realised by demonstrators, prototype subsystems or services. Two demonstrators for video communication systems are currently being developed, one for each of the investigated image synthesis approaches. Since other 3-D imaging applications can also greatly benefit from the developed technologies, three additional application studies are performed in Panorama in the fields of medicine, automation in 3-D modelling of industrial environments, and 3-D program production.

2.2 3-D Reconstruction Approach in Panorama

One of the two image synthesis approaches developed in Panorama for the generation of intermediate views is based on 3-D reconstruction and 3-D computer graphics. This approach uses a calibrated trinocular camera setup which is arranged around the autostereoscopic screen showing the other communication party (fig. 3). A calibration approach has been developed in Panorama [11] which is based on the Tsai calibration [12]. It exploits geometric constraints of the trinocular camera setup in order to manage calibration with a relatively simple planar calibration pattern.

The developed 3-D reconstruction approach for dynamic scenes represents the scene using explicit 3-D models, which are defined by 3-D shape, surface colour and 3-D motion parameters.
This kind of representation is known from computer graphics [13,14]. In 3-D reconstruction, additional data like observation points and a data memory for information from preceding time instances are required. Thus, Panorama developed a new 3-D scene representation that supports 3-D reconstruction and is still compatible with computer graphics. This scene representation is currently implemented in more than 100 C++ classes.

The 3-D reconstruction can be subdivided into two phases: the initialisation phase, where new objects occurring in the scene are analysed, and an update phase, where temporal changes of already reconstructed objects are analysed and the quality of the 3-D models is successively improved.

In the initialisation phase, a set of depth maps and 3-D edges are estimated from the input image triples [15]. The depth maps are back-projected into 3-D space, resulting in a cloud of 3-D points. These points are interpolated together with the 3-D edges into a coherent surface model. This model is further approximated using a triangular mesh that represents the shape of the objects [16]. The texture of the objects is represented by mapping parts of the image triple onto each triangle [17].

The update phase starts by analysing the motion of the objects, assuming that most of the changes result from the motion of rigid objects [18,19]. In the case of persons, the assumption of rigidity requires a subdivision of the 3-D models into articulated components. This subdivision is based on an evaluation of local 3-D motion information which is estimated in a robust manner for each triangle [20]. After compensating the motion of the object components, the shape of the 3-D models is updated. This update considers flexible deformations inside the models and evaluates the changes at object borders [21]. Finally, the texture of the 3-D models is updated, compensating for the changes in the objects' surface colour. The resulting 3-D models are stored in order to analyse the next image triple and are transmitted to the receiver (see Improving Block-based Disparity Estimation by Considering the Non-uniform Distribution of the Estimation Error, Multi-Camera Acquisitions for High-Accuracy 3D Reconstruction, and Integration of Multiple Range Maps through Consistency Processing).

2.3 Telepresence Visualization

At the receiver, two views of the scene are visualised on an autostereoscopic display according to the observer's head position. Two techniques for visualisation of the 3-D models were developed. On the one hand, interfaces to common computer graphics file formats like VRML1, VRML2 and OpenInventor were built, which enable visualisation with commercially available viewers. Since these file formats provide only limited representations of temporal changes, a new viewer for direct visualisation of the Panorama 3-D scene representation and a file format based on OpenGL were additionally developed. With this viewer, shape and texture updates can also be easily visualised.

A distributed development of common 3-D reconstruction software is performed in Panorama. The software is integrated and tested at one site to guarantee the interfacing and functionality. The overall approach is tested using image sequences of typical video communication scenes. The tests proved to provide realistically looking synthesised images of the other communication party for reasonable observer movements. An example of such a synthesised image is given in fig. 4 (left). Due to the fixed camera positions that show the other communication party from the front side, the 3-D models are not closed at their back.
Therefore, the observer should not move his head outside the area that is covered by the real camera triple.

Fig. 4. (left) Image synthesised from an automatically generated 3-D model; (right) seamless integration of an automatically generated 3-D model into a virtual environment.

Fig. 4 (right) shows that the automatically generated 3-D models can be seamlessly integrated into virtual environments. This allows an integration of 3-D telecommunication into 3-D applications with communication requirements, e.g. 3-D teleshopping or cooperative CAD. Beyond that, the seamless integration of automatically generated 3-D models and virtual environments will make it possible to design comfortable and intuitive teleconference systems where participants are brought together in a 3-D virtual conference room, giving the participants the impression of being physically close to each other. The project itself and a number of recent results are also documented online at http://www.tnt.uni-hannover.de/project/eu/panorama/

3 The Vanguard Project

Vanguard stands for Visualisation Across Networks using Graphics and Uncalibrated Acquisition of Real Data. The expertise of the consortium is in the fields of uncalibrated tracking and 3D reconstruction (University of Oxford, K.U. Leuven), image-based view synthesis and mosaicing (University of Jerusalem), augmented reality systems (IGD Darmstadt) and 3D displays (SHARP Laboratories Europe). The primary goal of this project is automatic 3D model building from video sequences, and the use of these models for the rendering of scenes in telepresence applications. The scenes may contain both real and synthetic objects. This goal entails extracting both object geometry and surface descriptions (reflectance) at a level suitable for high quality graphical rendering.

3.1 Approach

Two key points underlie our approach:

- The camera motion is unknown and unconstrained;
- The camera internal parameters (e.g. focal length) are unknown.

Here, the goal is to arrive at 3D models from multiple images taken without any prior calibration of the internal or external camera parameters. These points are essential for flexible and general purpose model acquisition, because camera motion and calibration are usually not recorded (think of extracting 3D models of buildings from old newsreel footage, or from an unknown image sequence on the net). The 3D geometrical models, together with appropriate surface information, facilitate:

- Graphical rendering of novel views, and sequences of views, of the scenes.
- Integration of synthetic and real models and sequences.

The basic Vanguard modules and the interaction between vision and graphics are sketched in fig. 5. The main modules are:

Computer Vision. This module handles all computer vision tasks such as tracking of features, self-calibration, and surface reconstruction [22,23]. It forms the backbone of the Vanguard technology (see the contributions Metric 3D Surface Reconstruction from Uncalibrated Image Sequences, Automatic 3D Model Construction for Turn-Table Sequences, and Matching and Reconstruction from Widely Separated Views). Another important direction here is the development of image-based view synthesis methods [25] and of panoramic image representations [24].
3D Geometry. The geometrical models as produced by the vision tasks need post-processing like mesh retriangulation and reduction [26], compact representation, model fusion and editing (see Fitting Geometrical Deformable Models to Registered Range Images).

Fig. 5. Schematic overview of the Vanguard modules (block diagram: an image sequence from an uncalibrated camera feeds the Computer Vision module of tracking, modelling and surface properties; 3D geometry and real-world models then drive the telepresence applications Stereo Look-around (WP 6.1), Collaborative Design (WP 6.2), Augmented Reality (WP 6.3) and Walk-through (WP 6.4), supported by distributed computing, tracking and display technologies, and interactive 3D graphics).

Surface properties. In addition to the geometric properties, Vanguard aims at extracting surface albedo, reflectance and texture properties. This information is used to control the rendering and to further increase the realism of the generated models.

3.2 Applications

All modules mentioned above are tuned towards a set of applications that will use the extracted real-world models and enhance them with AR/VR techniques. The project has focussed on three major application areas:

3D surface modeling from an image sequence. The goal of this very general application is to develop strategies for easy and efficient geometric modeling. Based on images of the scene alone, highly realistic metric models of specific objects or scenes (e.g. buildings, landscapes) are generated that can be used for visualisation and demonstration purposes. As a test case for this approach, a reconstruction of parts of the archaeological excavation site in Sagalassos, Turkey, was performed. An example of 3D modeling shows the reconstruction of a corner of the Roman bath at the Sagalassos site, based on five uncalibrated views (fig. 6).

Stereo Visualisations of Objects and Scenes. Given a monocular (single camera) sequence, the objective is to generate the image sequences that would have been seen had a stereo rig moved along the same path. The stereo sequence will be used as an image source to drive 3D displays (e.g. shutter glasses on a workstation, and SHARP's autostereoscopic displays). By creating views that are "in-between" the discrete set of images, an active "look-around" system can be built: a pseudo-holographic display.

Fig. 6. Two images of the Roman bath sequence, reconstructed shape and textured model.

Collaborative Scene Visualisation. The objective of this application is to walk around objects with a video camera and then extract a 3D model. The objects will then be graphically rendered together with possible arrangements of previously modeled (e.g. CAD) objects and surroundings. Remote sites will be able to jointly change the configurations of real and virtual objects. The application will serve as a trial using ISDN or ATM (see Applying Augmented Reality Techniques in the Field of Interactive Collaborative Design).

These applications represent a significant integration of state-of-the-art computer vision and computer graphics techniques. Many other application areas could benefit from these capabilities: non-invasive surgery; heritage preservation; archaeologists recording layouts and discovered artefacts at an excavation (a video sequence at each stage providing the basis for subsequent virtual reality walkthroughs); accident investigators recording a scene before it is cleared; architecture visualization; collaborative design; teleshopping, etc. Walkthrough, look-around and model acquisition also have applications in the games industry, e.g. to create more realistic backgrounds than simple graphically rendered ones, to immerse the player more fully (by changing the scene when his/her viewing perspective changes), and to provide virtual objects.
Finally, although not explored here, the work opens up the prospect of intelligent keying (the integration of two video sequences) based on the geometries of the viewed scenes; it also allows virtual "tele-porting" of objects, since a cloned model can be cut at a distant location using the geometry and surface description provided by a video of the original.

References

1. K. Åström and A. Heyden. Continuous time matching constraints for image streams. International Journal of Computer Vision, to appear, 1998.
2. A. Heyden and K. Åström. Euclidean reconstruction from constant intrinsic parameters. In Proceedings of the 13th International Conference on Pattern Recognition, Vienna, Austria, volume I, pages 339-343, August 1996.
3. A. Heyden and K. Åström. Euclidean reconstruction from image sequences with varying and unknown focal length and principal point. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Puerto Rico, USA, pages 438-443. IEEE Computer Society Press, June 1997.
4. F. Kahl and K. Åström. Motion estimation in image sequences using the deformation of apparent contours. In Proceedings of the 6th International Conference on Computer Vision, Bombay, India, pages 939-944, 1998.
5. L. Quan and T. Kanade. Affine structure from line correspondences with uncalibrated affine cameras. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(8):834-845, August 1997.
6. G. Sparr. Simultaneous reconstruction of scene structure and camera locations from uncalibrated image sequences. In Proceedings of the 13th International Conference on Pattern Recognition, Vienna, Austria, volume I, pages 328-333. IEEE Computer Society Press, August 1996.
7. P. Sturm. Vision 3D non calibrée: contributions à la reconstruction projective et étude des mouvements critiques pour l'auto-calibrage. Thèse de doctorat, Institut National Polytechnique de Grenoble, December 1997.
8. B. Triggs. Autocalibration and the absolute quadric. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Puerto Rico, USA, pages 609-614. IEEE Computer Society Press, June 1997.
9. B. Triggs. Linear projective reconstruction from matching tensors. Image and Vision Computing, 15(8):617-625, August 1997.
10. B. Triggs. Autocalibration from planar scenes. In Proceedings of the 5th European Conference on Computer Vision, Freiburg, Germany, 1998.
11. F. Pedersini et al. "Calibration and Self-Calibration of Multi-Ocular Camera Systems". Int. Workshop on Synthetic-Natural Hybrid Coding and Three Dimensional Imaging (IWSNHC3DI'97), Rhodes, Greece, 1997.
12. R. Y. Tsai. "A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses". Journal of Robotics and Automation, Vol. RA-3, No. 4, August 1987, pp. 323-344.
13. "The Virtual Reality Modeling Language Specification", Version 2.0, Final Working Draft, ISO/IEC WD 14772, July 28, 1996, /moving-worlds/spec/
14. J. D. Foley, A. van Dam, S. K. Feiner, J. F. Hughes. "Computer Graphics". Addison-Wesley, 1990.
15. L. Falkenhagen. "Hierarchical Block-Based Disparity Estimation Considering Neighbourhood Constraints". Int. Workshop on Synthetic-Natural Hybrid Coding and Three Dimensional Imaging (IWSNHC3DI'97), Rhodes, Greece, 1997.
16. T. Riegel et al. "3-D Shape Approximation for Objects in Multiview Image Sequences". Int. Workshop on Synthetic-Natural Hybrid Coding and Three Dimensional Imaging (IWSNHC3DI'97), Rhodes, Greece, 1997.
17. W. Niem, H. Broszio. "Mapping Texture from Multiple Camera Views onto 3D Object Models for Computer Animation". Int. Workshop on Stereoscopic and 3D Imaging (IWS3DI'95), Santorini, Greece, Sept. 1995.
18. R. Koch. "Dynamic 3-D Scene Analysis through Synthesis Feedback Control". IEEE Transactions on PAMI, Vol. 12, No. 6, June 1993.
19. L. Falkenhagen. "3-D Motion Estimation of 3-D Model Objects". ACTS-Panorama Project Deliverable, AC092/UH/D006, 1996.
20. D. Tzovaras et al. "3D Motion Estimation of Small Surface Patches". ACTS-Panorama Project Deliverable, AC092/UT/D013, 1997.
21. A. Kopernik et al. "3D Shape Update and Subdivision of 3D Model Objects, First Results". ACTS-Panorama Project Deliverable, AC092/UT/D014, 1997.
22. P. Torr, A. Fitzgibbon and A. Zisserman. "Maintaining Multiple Motion Model Hypotheses Over Many Views to Recover Matching and Structure". To appear in International Journal of Computer Vision, 1998.
23. M. Pollefeys, R. Koch and L. Van Gool. "Self-Calibration and Metric Reconstruction in spite of Varying and Unknown Internal Camera Parameters". To appear in International Journal of Computer Vision, 1998.
24. S. Avidan and A. Shashua. "Novel View Synthesis in Tensor Space". IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 1997.
25. Rousso, S. Peleg, I. Finci, and A. Rav-Acha. "Universal Mosaicing using Pipe Projection". In Sixth International Conference on Computer Vision, Bombay, India, 1998.
26. P. J. Neugebauer and K. Klein. "Adaptive Triangulation of Objects Reconstructed from Multiple Range Images". Proc. IEEE Visualization '97, Phoenix, Arizona.

Introduction to CT Imaging Principles


Table Speed & Pitch
Table speed is defined as the distance the table travels, in mm, per 360° rotation. Pitch = table feed per rotation / collimation.
Example table feeds: 10 mm/rot, 15 mm/rot, 20 mm/rot.
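Using this definition, pitch follows directly from table feed and collimation; a small sketch with the feed values above and an assumed 10 mm collimation:

```python
def pitch(table_feed_mm_per_rotation, collimation_mm):
    """Pitch = table feed per 360-degree rotation divided by the collimated slice width."""
    return table_feed_mm_per_rotation / collimation_mm

for feed in (10, 15, 20):                    # table feed values from the slide, in mm/rot
    print(f"feed {feed} mm/rot, 10 mm collimation -> pitch {pitch(feed, 10.0):.1f}")
# prints pitch 1.0, 1.5 and 2.0
```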
X-ray generation, data acquisition, reconstruction & post-processing
How does CT Work?
The X-ray beam passes through a collimator and therefore penetrates only an axial layer of the object, called a slice.
Slip-ring Scanner
Computed Tomography
CT Basics
Principle of Spiral CT
Scan Parameters & Image Quality
Optimizing Injection Protocols
Clinical Applications
Contiguous image reconstruction: the reconstruction increment equals the slice thickness, so consecutive images have no overlap and no gaps.
Overlapping image reconstruction: the reconstruction increment is smaller than the slice thickness, so consecutive images overlap.
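The relation between slice thickness, increment and overlap can be sketched numerically; the scan length and values below are illustrative:

```python
def reconstruction_positions(scan_length_mm, increment_mm):
    """Image positions along the scan range for a given reconstruction increment."""
    n = int(scan_length_mm // increment_mm) + 1
    return [i * increment_mm for i in range(n)]

slice_thickness = 5.0                                          # mm
contiguous = reconstruction_positions(100.0, slice_thickness)  # increment = thickness: no overlap, no gaps
overlapping = reconstruction_positions(100.0, 3.0)             # increment < thickness: images overlap
overlap_mm = slice_thickness - 3.0                             # 2 mm of overlap between adjacent images
print(len(contiguous), len(overlapping), overlap_mm)
```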
What is a spiral scan? Just four "C"s:
Continuously rotating tube/detector

The Hamlyn Centre: Guang-Zhong Yang's Laboratory


Predictive Cardiac Motion Modeling and Correction with PLSR: predictive cardiac motion modeling and correction based on partial least squares regression (PLSR), extracting intrinsic relationships between three-dimensional (3D) cardiac deformation due to respiration and multiple one-dimensional, real-time measurable surface intensity traces at the chest or abdomen. See IEEE TMI 23(10), 2004.
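A minimal sketch of the regression step described here, using scikit-learn's PLSRegression to map surface intensity traces to deformation parameters; the array shapes and synthetic data are illustrative assumptions, not the setup of the cited paper:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
# X: N samples of k one-dimensional surface intensity traces (chest/abdomen)
# Y: N samples of the corresponding 3D cardiac deformation parameters (here 12 per sample)
X = rng.normal(size=(200, 4))
Y = X @ rng.normal(size=(4, 12)) + 0.05 * rng.normal(size=(200, 12))

pls = PLSRegression(n_components=3)       # number of latent components is a modelling choice
pls.fit(X[:150], Y[:150])                 # train on the first 150 samples
Y_pred = pls.predict(X[150:])             # predict deformation from new surface traces
print(Y_pred.shape)                       # (50, 12)
```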
Myocardial Strain and Strain Rate Analysis: virtual tagging with MR myocardial velocity mapping (IEEE TMI); strain rate analysis with constrained myocardial velocity restoration; review of methods for measuring intrinsic myocardial mechanics (JMRI).
Atheroma Imaging and Analysis: the use of selective volume excitation for high resolution vessel wall imaging (JMRI, 2003;17(5):572-80); 3D morphological modeling of the arterial wall; feature-reduction-based atheroma classification.
Volume Selective Coronary Imaging: a locally focused MR imaging method for 3-D zonal echo-planar coronary angiography using volume selective RF excitation. Spatially variable resolution was used for delineating coronary arteries and reducing the effect of residual signals caused by the imperfect excitation profile of the RF pulse. The use of variable resolution enabled the derivation of basis functions with variable spatial characteristics pertaining to regional object details, and a significantly smaller number of phase-encoded signal measurements was needed for image reconstruction. Gatehouse PD, Keegan J, Yang GZ, Firmin DN. Magn Reson Med, 2001 Nov;46(5):1031-6. Yang GZ, Burger P, Gatehouse PD, Firmin DN. Magn Reson Med, 41:171-178, 1999. Yang GZ, Gatehouse PD, Keegan J, Mohiaddin RH, Firmin DN. Magn Reson Med, 39:833-842, 1998.

Autonomous Driving (English-language slides)


Motion Planning
It determines the specific actions, including acceleration, braking, and steering, that the vehicle needs to take along the planned path
Global Path Planning
This technique plans a complete route for the vehicle from the start to the destination, considering all possible traffic scenarios and objectives
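Global path planning of this kind is typically a graph search over a road or grid map; a minimal grid-based Dijkstra sketch (the grid, costs and coordinates are hypothetical, not from the slides):

```python
import heapq

def dijkstra(grid, start, goal):
    """Shortest path on a 4-connected occupancy grid (0 = free, 1 = blocked)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    # reconstruct the route from goal back to start
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(dijkstra(grid, (0, 0), (2, 0)))
```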
Semantic Segmentation
It allows the vehicle to understand the scene in detail by assigning semantic meaning to different parts of the environment
3D Reconstruction
Level 4
High automation: the vehicle can handle most or all driving tasks without human intervention, but only within specific geographic regions and weather conditions.
Public transportation
Autonomous vehicles can be used for shared rides, shuttle services, or even fully automated bus systems, providing effective and sustainable transportation options for urban areas

A space-sweep approach to true multi-image matching


1 Introduction

This paper considers the problem of multi-image stereo reconstruction, namely the recovery of static 3D scene structure from multiple, overlapping images taken by perspective cameras with known extrinsic (pose) and intrinsic (lens) parameters. The dominant paradigm is to first determine corresponding 2D image features across the views, followed by triangulation to obtain a precise estimate of 3D feature location and shape. The first step, solving for matching features across multiple views, is by far the most difficult. Unlike motion sequences, which exhibit a rich set of constraints that lead to efficient matching techniques based on tracking, determining feature correspondences from a set of widely-spaced views is a challenging problem.

2 True Multi-Image Matching

2.1 Definition

This section presents, for the first time, a set of conditions that a stereo matching technique should meet to be called a "true multi-image" method. By this we mean that the technique truly operates in a multi-image manner, and is not just a repeated application of two- or three-camera techniques.
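Once correspondences are available, the triangulation step with known camera matrices reduces to a linear least-squares problem; a minimal DLT sketch (the projection matrices and image points below are placeholders, not from the paper):

```python
import numpy as np

def triangulate(projections, image_points):
    """Linear (DLT) triangulation of one 3D point from its projections in several views.

    projections  : list of 3x4 camera matrices P_i (known intrinsics and pose)
    image_points : list of (x, y) observations of the same point, one per view
    """
    rows = []
    for P, (x, y) in zip(projections, image_points):
        rows.append(x * P[2] - P[0])   # each view contributes two linear constraints
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                          # right singular vector of the smallest singular value
    return X[:3] / X[3]                 # back from homogeneous coordinates

# Hypothetical example: two axis-aligned cameras observing the point (0, 0, 5)
K = np.diag([800.0, 800.0, 1.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])  # 0.2 m baseline
X_true = np.array([0.0, 0.0, 5.0, 1.0])
pts = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
print(triangulate([P1, P2], pts))       # approximately [0, 0, 5]
```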

Research on 3D Reconstruction Based on Binocular Line Structured Light and Its Key Technologies

3D reconstruction based on binocular line structured light is a common 3D reconstruction method that is widely used in computer vision and image processing.

This article discusses the basic principle and key technologies of binocular line structured light 3D reconstruction.

I. Basic principle. Binocular line structured light 3D reconstruction rests on the following principle: by projecting light with a specific spatial code, capturing images with cameras, and processing and analyzing those images, the 3D shape and depth of objects in the scene can be inferred.

II. Key technologies.

1. Binocular imaging. Binocular imaging is the foundation of binocular line structured light reconstruction. By using two physically separated cameras, different viewpoints of the scene can be captured, providing more information and improving the accuracy and stability of the reconstruction.

2. Line structured light projection. Line structured light projection is the core technology of binocular line structured light reconstruction. Projecting structured light with a specific code forms a series of light stripes or bands in the scene, which produce corresponding patterns in the camera images. The depth of the object surface can then be inferred by analyzing the distortion or shape change of the structured light in the images.

3. Structured light coding. Structured light coding is an important component of binocular line structured light reconstruction. Introducing a code into the structured light increases the distinguishability of the light stripes or bands and thereby improves reconstruction accuracy. Common coding schemes include gray-level coding, sinusoidal coding, and correction coding.

4. Image acquisition and processing. Binocular line structured light reconstruction requires acquiring and processing image data. Image acquisition involves camera calibration, synchronization and triggering, to ensure the accuracy and stability of the binocular system. Image processing includes denoising, rectification and texture mapping, in order to extract valid structured light information for the subsequent 3D reconstruction.

5. 3D reconstruction algorithms. The 3D reconstruction algorithm is the core of binocular line structured light reconstruction. Common algorithms include triangulation, stereo matching, and point cloud stitching. These algorithms analyze the structured light images from different viewpoints and infer the 3D shape and depth of objects through matching and computation.
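For the stereo-matching part, the depth of a matched point follows from the calibrated baseline, focal length and measured disparity; a minimal sketch (the parameter values are illustrative):

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Depth Z = f * B / d for a rectified stereo pair (pinhole camera model)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_length_px * baseline_m / disparity_px

# e.g. a stripe point matched with 48 px disparity, 1200 px focal length, 10 cm baseline
print(depth_from_disparity(48.0, 1200.0, 0.10))   # -> 2.5 m
```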

6. Point cloud processing and visualization. The reconstruction result is usually presented as a point cloud model. Point cloud processing involves filtering, registration and segmentation, used to remove noise, merge overlapping point clouds, and extract object surfaces. Point cloud visualization presents the point cloud data in an intuitive form that is easy to inspect and understand.
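A short sketch of typical point cloud filtering and visualization using the Open3D library; the file name and parameter values are placeholders:

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")                  # hypothetical reconstructed point cloud
pcd = pcd.voxel_down_sample(voxel_size=0.005)              # thin the cloud to a 5 mm grid
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20,   # drop isolated noise points
                                        std_ratio=2.0)
o3d.visualization.draw_geometries([pcd])                   # interactive viewer
```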

In summary, 3D reconstruction based on binocular line structured light is a widely used 3D reconstruction method. It projects structured light with a specific code, combines binocular imaging with image processing, and infers the 3D shape and depth of objects by analyzing the structured light information in the images.

Comparison of High-Resolution and Low-Resolution 3D-SPACE Sequences in Magnetic Resonance Cholangiopancreatography

Xiao Jianming; Peng Tao; Wang Zongyong; Wang Na

Abstract. Objective: To compare the performance of the three-dimensional Sampling Perfection with Application-optimized Contrasts using different flip angle Evolutions (3D-SPACE) sequence at high and low resolution in Magnetic Resonance Cholangiopancreatography (MRCP). Methods: Forty-four examinees underwent MRCP on a Siemens Avanto 1.5T MRI scanner, each scanned with both the high-resolution and the low-resolution 3D-SPACE sequence. Two senior radiologists evaluated and scored the images, and the scores were analyzed statistically with the Wilcoxon rank-sum test. Results: Small lesions were displayed more clearly in the original high-resolution 3D-SPACE images, while the low-resolution 3D-SPACE sequence was faster to acquire. There was no statistically significant difference in the quality of the Maximum Intensity Projection (MIP) reconstructions (common bile duct score difference: P = 0.899; left and right hepatic ducts and main pancreatic duct: P = 0.623). Conclusion: For examinees with poor cooperation, the low-resolution 3D-SPACE sequence more easily yields acceptable images.

Journal: China Medical Devices (《中国医疗设备》), 2017, 32(3), pp. 77-79, 102. Keywords: magnetic resonance cholangiopancreatography; variable-flip-angle fast spin echo; maximum intensity projection; resolution; 3D-SPACE. Authors' affiliation: Department of Radiology, Affiliated Hospital of Chengdu University, Chengdu 610081, China. Language: Chinese. CLC numbers: R445.2; R575.7.

In MRCP examinations, combining multiple sequences allows their strengths to complement one another and effectively improves the diagnostic performance of MRI for hepatobiliary disease [1-3].
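The MIP reconstruction mentioned above is simply a per-pixel maximum along the projection direction; a minimal NumPy sketch with a synthetic volume:

```python
import numpy as np

volume = np.random.default_rng(0).random((64, 256, 256))  # synthetic (slices, rows, cols) MR volume
mip = volume.max(axis=0)                                   # project along the slice direction
print(mip.shape)                                           # (256, 256)
```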

Validation of Highway Water Film Thickness Models in Rainy Weather and Driving Safety

Wang Yizuo; Li Guangyuan; Zhang Zeyao; Deng Peng

Abstract: The water film that forms on road surfaces in rainy weather can cause vehicles to skid and lead to traffic accidents. To address this problem, artificial rainfall simulation tests were used to obtain a regression equation for the water film thickness on cement concrete pavement, and existing domestic and international water film thickness models were validated. The results show that water film thickness increases with rainfall intensity and drainage length and decreases with pavement slope. Under the same conditions, the water film on cement concrete pavement is thicker than on asphalt pavement, mainly because the hydrophilicity of cement concrete increases the flow resistance of surface water, so more water accumulates and the film thickens. Water film thicknesses on expressways in the Xi'an area were then calculated for different rainstorm return periods. Under the five-year return period rainstorm intensity of 1.954 mm/min, the water film thickness does not exceed the critical value of 2.35 mm at which hydroplaning occurs, so a design speed of 80 km/h can ensure driving safety in rain.

Journal: Science Technology and Engineering (《科学技术与工程》), 2017, 17(29), pp. 128-132. Keywords: cement concrete pavement; asphalt pavement; water film thickness; rainfall test; regression model; hydrophilicity; driving safety. Authors' affiliation: Aeronautics and Astronautics Engineering College, Air Force Engineering University, Xi'an 710038, China. Language: Chinese. CLC number: U416.216.

When driving in rainy conditions, a water film forms on the road surface.
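The regression step described in the abstract can be sketched as fitting a power-law model h = k * I^a * L^b * S^c to measured film thicknesses; the functional form, starting values and data below are illustrative assumptions, not the paper's fitted equation:

```python
import numpy as np
from scipy.optimize import curve_fit

def film_thickness(X, k, a, b, c):
    """Power-law model: thickness from rainfall intensity I, drainage length L, slope S."""
    I, L, S = X
    return k * I**a * L**b * S**c

# hypothetical measurements: (rain intensity mm/min, drainage length m, slope %) -> thickness mm
I = np.array([0.5, 1.0, 1.5, 2.0, 1.0, 1.0])
L = np.array([5.0, 5.0, 5.0, 5.0, 10.0, 15.0])
S = np.array([2.0, 2.0, 2.0, 2.0, 1.5, 1.0])
h = np.array([0.8, 1.2, 1.5, 1.8, 1.7, 2.3])

params, _ = curve_fit(film_thickness, (I, L, S), h, p0=(1.0, 0.5, 0.5, -0.3))
print(params)   # fitted k, a, b, c
```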


Reconstruction of 3D Models from Intensity Images and Partial Depth

Luz A. Torres-Méndez and Gregory Dudek
Center for Intelligent Machines, McGill University, Montreal, Quebec H3A 2A7, Canada
{latorres, dudek}@cim.mcgill.ca

Abstract

This paper addresses the probabilistic inference of geometric structures from images; specifically, the synthesis of range data to enhance the reconstruction of a 3D model of an indoor environment by using video images and (very) partial depth information. In our method, we interpolate the available range data using statistical inferences learned from the concurrently available video images and from those (sparse) regions where both range and intensity information is available. The spatial relationships between the variations in intensity and range can be efficiently captured by the neighborhood system of a Markov Random Field (MRF). In contrast to classical approaches to depth recovery (i.e. stereo, shape from shading), we can afford to make only weak prior assumptions regarding specific surface geometries or surface reflectance functions, since we compute the relationship between existing range data and the images we start with. Experimental results show the feasibility of our method.

Introduction

This paper presents an algorithm for estimating depth information from a combination of color (or achromatic) intensity data and a limited amount of known depth data. Surface depth recovery is one of the classical standard vision problems, both because of its scientific and pragmatic value. The problem of inferring the 3D layout of space is a critical problem in robotics and computer vision, and is often cited as a significant cognitive skill. In the vision community, solutions to such "shape from X" problems are often based on strong prior assumptions regarding the physical properties of the objects in the scene (such as matte or Lambertian reflectance properties). In the robotics community, such depth inference is often performed using sophisticated but costly hardware solutions.

While several elegant algorithms for depth recovery have been developed, the use of laser range data in many applications has become commonplace due to their simplicity and reliability (but not their elegance, cost or physical robustness). In robotics, for example, the use of range data combined with visual information has become a key methodology, but it is often hampered by the fact that range sensors that provide complete (2-1/2D) depth maps with a resolution akin to that of a camera are prohibitively costly. Stereo cameras can produce volumetric scans that are economical, but they often require calibration or produce range maps that are either incomplete or of limited resolution.

We seek to reconstruct suitable 3D models from sparse range data sets while simultaneously facilitating the data acquisition process. We present an efficient algorithm to estimate 3D dense data from a combination of video images and a limited amount of observed range data. It has been shown in (Lee, Pedersen, & Mumford 2001) that although there are clear differences between optical and range images, they do have similar second-order statistics and scaling properties (i.e. they both have similar structure when viewed as random variables). Our motivation is to exploit this fact, and also the fact that both video imaging and limited range sensing are ubiquitous, readily-available technologies, while complete volume scanning remains prohibitive on most mobile platforms. Please note that we are not simply inferring a few missing pixels, but synthesizing a complete range map from as little as a few laser scans across the environment.

Our method is based on learning a statistical model of the (local) relationship between the observed range data and the variations in the intensity image, and using this model to compute unknown depth values. This can be regarded as a form of shape-from-shading (depth inference from variations in surface shading) based on statistical learning, although traditional shape-from-shading is quite different from this approach in its technical details. Here, we approximate the composite of range and intensity at each point as a Markov process. Unknown depth values are then inferred by using the statistics of the observed range data to determine the behavior of the Markov process. The presence of intensity where range data is being inferred is crucial, since intensity provides knowledge of surface smoothness and variations in depth. Our approach learns that knowledge directly from the observed data, without having to hypothesize constraints that might be inapplicable to a particular environment.
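A heavily simplified sketch of the kind of neighborhood-based inference described above: missing depth values are filled in by finding, among pixels where both intensity and depth are known, the location whose joint intensity/depth neighborhood best matches the neighborhood of the pixel being synthesized. This is a non-parametric stand-in for sampling the MRF; the window size, scan order and synthetic data are illustrative choices, not the authors' exact procedure:

```python
import numpy as np

def synthesize_depth(intensity, depth, known, half_window=2):
    """Fill unknown depth values using joint intensity/depth neighborhood matching.

    intensity : (H, W) float array, available everywhere
    depth     : (H, W) float array, valid only where `known` is True
    known     : (H, W) boolean mask of observed range measurements
    """
    H, W = intensity.shape
    depth = depth.copy()
    filled = known.copy()
    w = half_window
    candidates = [(r, c) for r in range(w, H - w) for c in range(w, W - w) if known[r, c]]

    def patch(img, r, c):
        return img[r - w:r + w + 1, c - w:c + w + 1]

    # simple raster scan over the unknown pixels (the paper grows outward from known data)
    for r in range(w, H - w):
        for c in range(w, W - w):
            if filled[r, c]:
                continue
            best, best_cost = None, np.inf
            for rr, cc in candidates:
                # compare intensity neighborhoods, plus depth wherever it is already filled
                cost = np.sum((patch(intensity, r, c) - patch(intensity, rr, cc)) ** 2)
                mask = patch(filled, r, c)
                cost += np.sum(mask * (patch(depth, r, c) - patch(depth, rr, cc)) ** 2)
                if cost < best_cost:
                    best, best_cost = (rr, cc), cost
            depth[r, c] = depth[best]
            filled[r, c] = True
    return depth

# tiny synthetic example: a ramp scene, with depth observed only on a few scanlines
rng = np.random.default_rng(1)
intensity = np.tile(np.linspace(0, 1, 32), (32, 1)) + 0.01 * rng.normal(size=(32, 32))
true_depth = 2.0 + 3.0 * np.tile(np.linspace(0, 1, 32), (32, 1))
known = np.zeros((32, 32), dtype=bool)
known[::8, :] = True                        # sparse horizontal laser scans
estimate = synthesize_depth(intensity, np.where(known, true_depth, 0.0), known)
```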

Previous Work

We base our range estimation process on the assumption that the pixels constituting both the range and intensity images acquired in an environment can be regarded as the results of pseudo-random processes, but that these random processes
