A Detailed Explanation of the SIFT Algorithm
The ASIFT Algorithm

1. Introduction
Object recognition is the process of finding a target object in an image or video. In object recognition applications, feature extraction and feature selection come first, and they are among the steps that most strongly determine the final result. If a complete application is viewed as a system, then features are the data, derived from the raw input, that are passed on to the rest of the system. Whether a satisfactory result can ultimately be obtained often depends heavily on whether the most representative features can be found. Good features greatly reduce the difficulty of the subsequent stages; poor features interfere with later processing and can even cause the system to produce wrong output. Good features also lower the complexity of the downstream processing, which is what makes wide application possible.

SIFT (Scale-Invariant Feature Transform) is an algorithm for detecting and describing local image features. It has excellent properties and can be used for accurate feature matching. Its applications are wide-ranging and include object recognition, 3D reconstruction, expression recognition, and video and motion tracking. SIFT is built from two major components and uses a number of refinements to improve its performance. This section briefly introduces the classic SIFT algorithm and SIFT-based object recognition, as groundwork for the later introduction of the ASIFT algorithm.

SIFT is a method for detecting and describing local image features. To recognise and describe an object in an image, features associated with that object must be extracted from the image; these features can then be used to pick the target out from the many background objects. For the target to be identified reliably, matching must remain accurate under changes of viewpoint, illumination, distance, and noise. SIFT uses different techniques to address these problems: robustness to object distance, size, and noise is handled by the Gaussian scale space, while viewpoint and illumination changes are handled by the feature descriptor. SIFT was not originally designed with all of these problems in mind; the algorithm was proposed first, and its many good properties were recognised afterwards. Before SIFT appeared, several of its underlying techniques had already matured; SIFT successfully integrated them into a single whole and added some innovations of its own.

The two main components of SIFT are scale-space detection and the feature descriptor. Feature points in an image generally stand out from their surroundings, for example by lying at extrema or at locations where the rate of change of intensity is large. Typical feature types include points, blobs, edges, and ridges.
Principles of the SIFT Keypoint Detection Algorithm

SIFT (Scale-Invariant Feature Transform) is an algorithm for detecting and describing local features in an image. It works by searching for keypoints across different scales of a scale space and computing a local descriptor for each keypoint, which enables image feature matching and recognition.

1. Scale-space construction. SIFT first builds a scale space with a Gaussian pyramid so that feature points can be detected at different scales. The Gaussian pyramid is a series of images obtained by repeatedly downsampling the original image, each level being half the size of the previous one. A Gaussian filter is applied at every level, producing a series of smoothed images.

2. Keypoint detection. Within the constructed scale space, SIFT detects keypoints using the Difference of Gaussians (DoG). A DoG image is obtained by subtracting two adjacent smoothed images and responds strongly to edges and corners. Local extrema in the DoG images are taken as candidate keypoints.

3. Keypoint localisation. To locate keypoints accurately, SIFT refines the position of each scale-space extremum. It fits the DoG response with a Taylor expansion and solves for the extremum of the fit (using its second-order derivatives) to obtain a precise keypoint position. At the same time, low-contrast keypoints and keypoints caused by edge responses are filtered out.

4. Orientation assignment. To make the descriptor rotation invariant, SIFT assigns a dominant orientation to each keypoint. It builds a histogram of the gradient orientations of the pixels around the keypoint, finds the dominant gradient direction, and takes it as the keypoint's orientation. This gives the descriptor a degree of stability under rotation.

5. Feature description. A descriptor is built from the region around each keypoint to represent its local appearance. SIFT divides the region around the keypoint into several sub-regions and computes a gradient-orientation histogram in each one. The histograms are then concatenated into a 128-dimensional feature vector.

Through the above steps, SIFT detects a large number of keypoints in an image and produces a 128-dimensional descriptor for each one. These descriptors are scale invariant, rotation invariant, and robust to illumination changes, and can be used for image matching, object recognition, 3D reconstruction, and other applications.
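A minimal sketch of this pipeline using OpenCV's built-in SIFT implementation (OpenCV >= 4.4 exposes it as cv2.SIFT_create; the image file name is a placeholder):

```python
import cv2

# Placeholder input image; any greyscale image will do.
img = cv2.imread("example.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                          # default parameters
keypoints, descriptors = sift.detectAndCompute(img, None)

print(len(keypoints))          # number of detected keypoints
print(descriptors.shape)       # (num_keypoints, 128): one 128-d vector per keypoint
print(keypoints[0].pt,         # sub-pixel location
      keypoints[0].size,       # scale at which the keypoint was detected
      keypoints[0].angle)      # assigned dominant orientation, in degrees
```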
SIFT Keypoint Parameters

SIFT (Scale-Invariant Feature Transform) is a feature extraction algorithm used in image processing and computer vision; it extracts feature points that remain stable across changes of scale and rotation. In SIFT, a keypoint is a point in the image with distinctive local structure; keypoints can be used for image registration, object recognition, and other computer vision tasks. The keypoint-related parameters of the SIFT algorithm include:

1. Scale-space parameter (octaves): SIFT builds a scale-space pyramid of the image with Gaussian filtering; the octaves parameter sets the number of pyramid levels and therefore the range of scales over which keypoints are extracted.

2. Scale parameter (sigma): the standard deviation of the Gaussian filter, which controls how strongly the image is smoothed and the scale of the feature points.

3. Threshold parameters (contrastThreshold and edgeThreshold): the contrast and edge-response thresholds used to filter keypoints; they control the quality and number of keypoints extracted.

4. Orientation parameter (orientationBins): the number of histogram bins used to compute a keypoint's dominant orientation, which improves rotation invariance.

When using SIFT, adjusting these parameters affects the number, quality, and stability of the extracted keypoints, so they should be chosen according to the specific application and the characteristics of the images. SIFT implementations also come with default parameter values, which can be tuned for the situation at hand to obtain the best feature extraction results.
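As a sketch, this is roughly how the parameters above map onto OpenCV's SIFT constructor. The names and values shown are OpenCV's documented defaults; note that OpenCV does not expose every knob mentioned in the text (for example, the number of orientation-histogram bins is fixed internally), so the correspondence is only approximate:

```python
import cv2

sift = cv2.SIFT_create(
    nfeatures=0,             # 0 = keep every keypoint that survives filtering
    nOctaveLayers=3,         # scales per octave; the number of octaves is derived from image size
    contrastThreshold=0.04,  # reject low-contrast extrema
    edgeThreshold=10,        # reject edge-like responses (principal-curvature ratio)
    sigma=1.6,               # Gaussian sigma of the base level of the first octave
)
```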
Main Steps of SIFT Feature Extraction

SIFT (Scale-Invariant Feature Transform) is an algorithm that extracts stable, scale-invariant feature points from an image; it is widely used in computer vision and image processing. SIFT feature extraction consists of the following main steps:

1. Scale-space construction (scale-space pyramid): SIFT first filters the original image with Gaussian blurs of different scales, producing a series of image pyramids known as the scale space. This is done because objects in an image show different levels of detail at different scales. Building the Gaussian pyramid yields a series of images with progressively increasing blur.

2. Scale-space extrema detection: within the scale space, SIFT searches every scale of the image for local extrema to detect candidate keypoints. These keypoints are typically stable in both space and scale, so they retain a degree of invariance under changes of scale and rotation.

3. Keypoint localisation: to localise keypoints more precisely, SIFT detects candidate keypoint positions in a DoG (Difference of Gaussians) pyramid. Each DoG image is obtained by subtracting adjacent scales of the Gaussian pyramid, and extrema detected in the DoG images give the candidate keypoints.

4. Orientation assignment: once candidate keypoint positions have been determined, SIFT computes the gradient magnitude and orientation in a neighbourhood around each keypoint and uses a gradient-orientation histogram to determine the keypoint's dominant orientation. This makes the subsequent description step more robust to rotation.

5. Feature description: after the orientation has been assigned, SIFT builds a scale- and rotation-invariant local descriptor from the neighbourhood around each keypoint. The descriptor is composed of gradient-orientation histograms that capture the local image structure around the keypoint.

6. Feature matching: once the descriptors have been generated, a matching algorithm can be used to compare the feature points of two images and find the corresponding keypoint pairs.
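A minimal matching sketch for step 6, using OpenCV's brute-force matcher together with Lowe's ratio test (the file names and the 0.75 ratio are illustrative choices, not fixed parts of SIFT):

```python
import cv2

img1 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)    # placeholder images
img2 = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with L2 distance; knnMatch returns the two nearest
# neighbours so an ambiguous match can be rejected by the ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in knn if m.distance < 0.75 * n.distance]

print(f"{len(good)} matches kept out of {len(knn)}")
```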
SIFT Algorithm in Detail

2. Gaussian Blur

The SIFT algorithm looks for keypoints across different scale spaces, and the scale space is obtained with Gaussian blurring. Lindeberg and others have shown that the Gaussian convolution kernel is the only kernel that can realise the scale transformation, and that it is the only linear kernel. This section therefore first introduces the Gaussian blur algorithm.

2.1 The two-dimensional Gaussian function

Gaussian blur is an image filter. It uses the normal distribution (the Gaussian function) to compute a blur template and convolves that template with the original image, thereby blurring the image. The N-dimensional normal distribution is:
G(r) = \frac{1}{(\sqrt{2\pi}\,\sigma)^{N}} e^{-r^{2}/(2\sigma^{2})}    (1-1)
Here σ is the standard deviation of the normal distribution; the larger σ is, the more blurred (smoother) the image becomes. r is the blur radius, i.e. the distance from a template element to the centre of the template. For a two-dimensional template of size m*n, the Gaussian value of the template element (x, y) is:
G(x, y) = \frac{1}{2\pi\sigma^{2}} e^{-\left((x - m/2)^{2} + (y - n/2)^{2}\right)/(2\sigma^{2})}
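A small NumPy sketch of how such an m*n Gaussian template can be built from the element formula above (the final normalisation to unit sum is the usual practical step when the template is used for blurring, and is an assumption rather than part of the formula):

```python
import numpy as np

def gaussian_template(m, n, sigma):
    """Build an m x n Gaussian template with (m/2, n/2) as its centre."""
    x = np.arange(m).reshape(-1, 1)          # row index of each template element
    y = np.arange(n).reshape(1, -1)          # column index of each template element
    g = np.exp(-((x - m / 2) ** 2 + (y - n / 2) ** 2) / (2 * sigma ** 2))
    g /= 2 * np.pi * sigma ** 2              # the 1/(2*pi*sigma^2) factor from the formula
    return g / g.sum()                       # normalise so the weights sum to 1

print(gaussian_template(5, 5, 1.6))
```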
3.1 Scale-Space Theory

The idea of a scale space was first proposed by Iijima in 1962 and was later popularised by Witkin, Koenderink, and others; it is now widely used in computer vision.

The basic idea of scale-space theory is to introduce a parameter regarded as scale into the image-processing model, obtain a sequence of multi-scale scale-space representations by varying this parameter continuously, extract the main scale-space contours of this sequence, and use those contours as feature vectors for edge and corner detection and for feature extraction at different resolutions.

The scale-space approach places traditional single-scale image processing within a dynamic framework of continuously varying scale, which makes it easier to capture the essential characteristics of an image. The images in the scale space become progressively more blurred as the scale increases, which simulates how a target forms on the retina as the observer moves from near to far.

A scale space satisfies visual invariance. This invariance can be interpreted as follows. When we observe an object with our eyes, the brightness and contrast of the retinal image change as the illumination of the background changes, so the scale-space operator is required to analyse the image independently of its grey level and contrast, i.e. to satisfy grey-level invariance and contrast invariance. On the other hand, when the relative position of the observer and the object changes with respect to some fixed coordinate frame, the position, size, angle, and shape of the retinal image change, so the scale-space operator must also be independent of the image's position, size, angle, and affine transformation, i.e. it must satisfy translation invariance, scale invariance, Euclidean invariance, and affine invariance.
Comparison of the SIFT, SURF, ORB, and FAST Feature Extraction Algorithms

In computer vision, SIFT (Scale-Invariant Feature Transform), SURF (Speeded Up Robust Features), ORB (Oriented FAST and Rotated BRIEF), and FAST (Features from Accelerated Segment Test) are all commonly used feature extraction algorithms. This section compares these algorithms, focusing on their principles, their characteristics, and their strengths and weaknesses in different application scenarios.

1. SIFT. SIFT was proposed by Lowe in 1999 as an algorithm for obtaining local image features. Its focus is on extracting scale-invariant features; it achieves image matching through scale-space construction together with keypoint detection and description. SIFT is rotation invariant, scale invariant, and robust to illumination changes. However, it is computationally expensive and not well suited to real-time applications.
2. SURF. SURF was proposed by Bay et al. in 2006 as an accelerated, robust feature extraction algorithm.
Compared with SIFT, SURF mainly speeds up scale-space construction, keypoint detection, and descriptor generation. It detects scale-space extrema using the image's Hessian matrix and computes descriptors from Haar wavelet responses. SURF offers good scale and rotation invariance while being faster to compute. However, it extracts features less reliably from images with low local contrast.
3. ORB. ORB was proposed by Rublee et al. in 2011 as a feature extraction algorithm that balances speed and descriptive power. It combines the FAST keypoint detector with the BRIEF descriptor and adds rotation invariance and a degree of scale invariance.
When detecting keypoints with FAST, ORB compares the intensity of each candidate pixel against a contiguous arc of pixels on a surrounding circle, which improves the stability of the detection. When generating descriptors, ORB estimates each keypoint's orientation (using an intensity-centroid measure rather than gradient histograms) and uses it to steer the BRIEF descriptor.
ORB is fast, simple, and reliable, which makes it well suited to real-time applications.
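A rough sketch of how such a comparison might be run in OpenCV, timing SIFT against ORB on one image (SURF lives in the opencv-contrib xfeatures2d module and may be unavailable in patent-free builds, so it is omitted here; the file name and the ORB keypoint budget are placeholders):

```python
import time
import cv2

img = cv2.imread("test.png", cv2.IMREAD_GRAYSCALE)

detectors = [("SIFT", cv2.SIFT_create()),
             ("ORB", cv2.ORB_create(nfeatures=5000))]

for name, det in detectors:
    t0 = time.perf_counter()
    kp, des = det.detectAndCompute(img, None)
    dt = time.perf_counter() - t0
    # ORB descriptors are 32-byte binary strings, SIFT descriptors are 128-d float vectors.
    print(f"{name}: {len(kp)} keypoints, descriptor size {des.shape[1]}, {dt * 1000:.1f} ms")
```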
Feature Point Detection Algorithms: SIFT and SURF

SIFT and SURF are two classic algorithms for detecting and describing image feature points. They are widely used in image processing, computer vision, and pattern recognition. Both algorithms are introduced below, together with a more detailed account of their principles and applications.

I. SIFT (Scale-Invariant Feature Transform). SIFT was proposed by Lowe in 1999 for detecting and describing image feature points. It extracts scale-independent feature points by analysing the local structure of the image, and it offers scale invariance, rotation invariance, and a degree of affine invariance.

1. Feature point detection. SIFT first detects feature points with a difference-of-Gaussians pyramid. This pyramid is obtained by blurring the image with Gaussian kernels of different scales and then taking differences of the results, which yields an approximation to the image's Laplacian pyramid. Feature point locations are found by searching this pyramid for local maxima and minima.

2. Feature point description. Once a feature point's location is known, SIFT represents its local appearance with gradient histograms. It first computes the gradient magnitude and orientation of every pixel in the neighbourhood of the feature point, then divides the neighbourhood into several sub-regions and accumulates the distribution of gradient magnitudes and orientations in each sub-region, and finally concatenates these statistics into a vector that serves as the feature point's descriptor.

3. Feature point matching. SIFT matches feature points by computing the Euclidean distance between their descriptors. The smaller the distance, the more similar the two feature points, so each feature point is paired with the feature point whose descriptor is closest to its own.
II. SURF (Speeded Up Robust Features). SURF was proposed by Bay et al. in 2006 as an improvement on SIFT for detecting and describing image feature points. It speeds up the computation of feature points and improves their stability, increasing both the real-time performance and the robustness of the algorithm.
1. Feature point detection. SURF detects feature points using the Hessian matrix. The Hessian matrix is the matrix of second-order derivatives of the image; by computing its determinant and trace, the local maxima and minima of the image are found, which gives the positions of the feature points.
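As a rough illustration of a Hessian-based interest measure, the sketch below computes a determinant-of-Hessian response map from Gaussian-smoothed image derivatives. SURF proper approximates these second derivatives with box filters over an integral image and searches across scales; the plain Sobel derivatives, the sigma value, the 0.9 weight on the mixed term, and the file name here are simplifications and assumptions:

```python
import cv2
import numpy as np

def hessian_response(gray, sigma=2.0):
    """Determinant-of-Hessian response of a Gaussian-smoothed image."""
    img = cv2.GaussianBlur(gray.astype(np.float32), (0, 0), sigma)
    dx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
    dy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
    dxx = cv2.Sobel(dx, cv2.CV_32F, 1, 0, ksize=3)   # second derivatives obtained by
    dyy = cv2.Sobel(dy, cv2.CV_32F, 0, 1, ksize=3)   # chaining first-order Sobel filters
    dxy = cv2.Sobel(dx, cv2.CV_32F, 0, 1, ksize=3)
    return dxx * dyy - (0.9 * dxy) ** 2              # 0.9 mimics SURF's correction weight

gray = cv2.imread("test.png", cv2.IMREAD_GRAYSCALE)  # placeholder image
resp = hessian_response(gray)
print(resp.min(), resp.max())
```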
The SIFT Feature Extraction Algorithm

SIFT (Scale-Invariant Feature Transform) is an algorithm for analysing local image features. It extracts keypoints from an image and describes them, which makes it useful for image matching, object recognition, and similar applications. This section describes the principle and procedure of the SIFT algorithm in detail.

1. Scale-space construction. SIFT first builds the scale space of the image using Gaussian filtering so that keypoints can be detected at different scales. The Gaussian filtering is applied as a series of Gaussian convolutions, and after each round the image is downsampled (its resolution is reduced), giving images at different scales.

2. Keypoint detection. Once the scale space has been built, SIFT detects keypoints using a difference operation. The image is smoothed with Gaussians and difference images are computed between adjacent scales. Extrema are then detected in the difference images, i.e. pixels that are local maxima or minima, and these pixels are the image's keypoints.

3. Precise keypoint localisation. Keypoints are localised precisely by fitting the local image around each keypoint. SIFT uses a highly robust approach based on the orientation and magnitude of the gradients around the keypoint: it computes the gradient magnitude and orientation in the keypoint's neighbourhood, builds a gradient histogram, and then finds the local peaks of this histogram to determine the keypoint's orientation.

4. Keypoint description. The keypoint description step extracts a feature vector from the region around each keypoint for later matching and recognition. SIFT uses a local descriptor: it divides the image region around the keypoint into small sub-regions, computes a gradient-orientation histogram in each sub-region, and then concatenates these histograms into a relatively high-dimensional feature vector.

5. Feature vector matching. After the descriptors have been computed, SIFT matches them with an approximate nearest-neighbour search. In practice, a KD-tree or brute-force matching is used to find the most similar feature vectors between two images. Computing the distances between feature vectors identifies the most similar matching pairs.
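A small sketch of KD-tree-based approximate matching for step 5 using OpenCV's FLANN wrapper (algorithm=1 selects FLANN's KD-tree index; the file names, tree count, check count, and the 0.7 ratio are illustrative choices):

```python
import cv2

img1 = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE)     # placeholder images
img2 = cv2.imread("img2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

index_params = dict(algorithm=1, trees=5)    # FLANN KD-tree index
search_params = dict(checks=50)              # leaf checks per query (speed/accuracy trade-off)
flann = cv2.FlannBasedMatcher(index_params, search_params)

matches = flann.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
print(len(good), "matches kept after the ratio test")
```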
6. Scale invariance. SIFT is scale invariant, i.e. it is robust to scaling, rotation, and changes of viewpoint. This is because the feature extraction analyses the image at multiple scales and the descriptor is built from the gradient information around each keypoint.
SIFT: Scale Invariant Feature Transform

The algorithm
SIFT is quite an involved algorithm. It has a lot going on and can become confusing, so I've split up the entire algorithm into multiple parts. Here's an outline of what happens in SIFT.

Constructing a scale space
This is the initial preparation. You create internal representations of the original image to ensure scale invariance. This is done by generating a "scale space".

LoG Approximation
The Laplacian of Gaussian is great for finding interesting points (or key points) in an image. But it's computationally expensive. So we cheat and approximate it using the representation created earlier.

Finding keypoints
With the super fast approximation, we now try to find key points. These are maxima and minima in the Difference of Gaussian images we calculate in step 2.

Get rid of bad key points
Edges and low contrast regions are bad keypoints. Eliminating these makes the algorithm efficient and robust. A technique similar to the Harris Corner Detector is used here.

Assigning an orientation to the keypoints
An orientation is calculated for each key point. Any further calculations are done relative to this orientation. This effectively cancels out the effect of orientation, making it rotation invariant.

Generate SIFT features
Finally, with scale and rotation invariance in place, one more representation is generated. This helps uniquely identify features. Let's say you have 50,000 features. With this representation, you can easily identify the feature you're looking for (say, a particular eye, or a sign board).

That was an overview of the entire algorithm. Over the next few days, I'll go through each step in detail. Finally, I'll show you how to implement SIFT in OpenCV!

What do I do with SIFT features?
After you run through the algorithm, you'll have SIFT features for your image. Once you have these, you can do whatever you want. Track images, detect and identify objects (which can be partly hidden as well), or whatever you can think of. We'll get into this later as well.

But the catch is, this algorithm is patented. So, it's good enough for academic purposes. But if you're looking to make something commercial, look for something else! [Thanks to aLu for pointing out SURF is patented too]

1. Constructing a scale space
Real world objects are meaningful only at a certain scale. You might see a sugar cube perfectly on a table. But if looking at the entire Milky Way, then it simply does not exist. This multi-scale nature of objects is quite common in nature. And a scale space attempts to replicate this concept on digital images.

Scale spaces
Do you want to look at a leaf or the entire tree? If it's a tree, get rid of some detail from the image (like the leaves, twigs, etc.) intentionally. While getting rid of these details, you must ensure that you do not introduce new false details. The only way to do that is with the Gaussian blur (it was proved mathematically, under several reasonable assumptions).

So to create a scale space, you take the original image and generate progressively blurred out images. Here's an example: look at how the cat's helmet loses detail. So do its whiskers.

Scale spaces in SIFT
SIFT takes scale spaces to the next level. You take the original image, and generate progressively blurred out images. Then, you resize the original image to half size. And you generate blurred out images again. And you keep repeating. Here's what it would look like in SIFT: images of the same size (vertical) form an octave. Above are four octaves. Each octave has 5 images.
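A sketch of this octave construction in Python with OpenCV. The 4 octaves, 5 blur levels, starting sigma of 1.6, and k = sqrt(2) factor follow the values suggested later in the text; the file name and the nearest-neighbour downsampling are assumptions:

```python
import cv2

def build_gaussian_octaves(img, num_octaves=4, scales_per_octave=5,
                           sigma0=1.6, k=2 ** 0.5):
    """Progressively blur the image, then halve it and repeat, octave by octave."""
    octaves = []
    base = img.astype("float32")
    for _ in range(num_octaves):
        sigma = sigma0
        octave = []
        for _ in range(scales_per_octave):
            octave.append(cv2.GaussianBlur(base, (0, 0), sigma))  # one blur level
            sigma *= k                                            # next level is k times blurrier
        octaves.append(octave)
        base = cv2.resize(base, (base.shape[1] // 2, base.shape[0] // 2),
                          interpolation=cv2.INTER_NEAREST)        # half-size base for next octave
    return octaves

img = cv2.imread("cat.png", cv2.IMREAD_GRAYSCALE)                 # placeholder image
octaves = build_gaussian_octaves(img)
print(len(octaves), "octaves of", len(octaves[0]), "images each")
```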
The individual images are formed because of the increasing "scale" (the amount of blur).

The technical details
Now that you know things the intuitive way, I'll get into a few technical details.

Octaves and Scales
The number of octaves and scales depends on the size of the original image. While programming SIFT, you'll have to decide for yourself how many octaves and scales you want. However, the creator of SIFT suggests that 4 octaves and 5 blur levels are ideal for the algorithm.

The first octave
If the original image is doubled in size and antialiased a bit (by blurring it), then the algorithm produces about four times more keypoints. The more keypoints, the better!

Blurring
Mathematically, "blurring" is referred to as the convolution of the Gaussian operator and the image. Gaussian blur has a particular expression or "operator" that is applied to each pixel. What results is the blurred image:

L(x, y, σ) = G(x, y, σ) * I(x, y)

The symbols:
L is a blurred image
G is the Gaussian blur operator
I is an image
x, y are the location coordinates
σ is the "scale" parameter. Think of it as the amount of blur. The greater the value, the greater the blur.
The * is the convolution operation in x and y. It "applies" Gaussian blur G onto the image I.

This is the actual Gaussian blur operator:

G(x, y, σ) = \frac{1}{2\pi\sigma^{2}} e^{-(x^{2}+y^{2})/(2\sigma^{2})}

Amount of blurring
The amount of blurring in each image is important. It goes like this. Assume the amount of blur in a particular image is σ. Then, the amount of blur in the next image will be k*σ. Here k is whatever constant you choose. This is a table of σ's for my current example. See how each σ differs by a factor of sqrt(2) from the previous one.

Summary
In the first step of SIFT, you generate several octaves of the original image. Each octave's image size is half the previous one. Within an octave, images are progressively blurred using the Gaussian blur operator. In the next step, we'll use all these octaves to generate Difference of Gaussian images.

2. LoG Approximation
In the previous step, we created the scale space of the image. The idea was to blur an image progressively, shrink it, blur the small image progressively, and so on. Now we use those blurred images to generate another set of images, the Difference of Gaussians (DoG). These DoG images are great for finding interesting key points in the image.

Laplacian of Gaussian
The Laplacian of Gaussian (LoG) operation goes like this. You take an image, and blur it a little. And then, you calculate second order derivatives on it (or, the "Laplacian"). This locates edges and corners on the image. These edges and corners are good for finding keypoints.

But the second order derivative is extremely sensitive to noise. The blur smoothes out the noise and stabilizes the second order derivative. The problem is, calculating all those second order derivatives is computationally intensive. So we cheat a bit.

The Con
To generate Laplacian of Gaussian images quickly, we use the scale space. We calculate the difference between two consecutive scales. Or, the Difference of Gaussians. Here's how: these Difference of Gaussian images are approximately equivalent to the Laplacian of Gaussian. And we've replaced a computationally intensive process with a simple subtraction (fast and efficient). Awesome!

These DoG images come with another little goodie. These approximations are also "scale invariant". What does that mean?

The Benefits
Just the Laplacian of Gaussian images aren't great. They are not scale invariant. That is, they depend on the amount of blur you do. This is because of the Gaussian expression.
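Continuing the earlier sketch, the DoG images can be produced by subtracting consecutive blur levels inside each octave (the octaves variable is assumed to come from the build_gaussian_octaves sketch above):

```python
def difference_of_gaussians(octave):
    """Subtract consecutive blur levels of one octave to approximate the LoG."""
    return [octave[i + 1] - octave[i] for i in range(len(octave) - 1)]

# With 5 blurred images per octave this yields 4 DoG images per octave.
dog_octaves = [difference_of_gaussians(o) for o in octaves]
print(len(dog_octaves[0]), "DoG images in the first octave")
```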
(Don't panic.) See the σ² in the denominator? That's the scale. If we somehow get rid of it, we'll have true scale independence. So, if the Laplacian of a Gaussian is represented as ∇²G, then the scale invariant Laplacian of Gaussian is σ²∇²G.

But all these complexities are taken care of by the Difference of Gaussian operation. The resultant images after the DoG operation are already multiplied by the σ². Great, eh! Oh, and it has also been proved that this scale invariant thingy produces much better trackable points! Even better!

Side effects
You can't have benefits without side effects. You know the DoG result is multiplied with σ². But it's also multiplied by another number. That number is (k-1). This is the k we discussed in the previous step. But we'll just be looking for the location of the maximums and minimums in the images. We'll never check the actual values at those locations. So, this additional factor won't be a problem to us. (Even if you multiply throughout by some constant, the maxima and minima stay at the same location.)

Example
Here's a gigantic image to demonstrate how this Difference of Gaussians works.

3. Finding keypoints
Up till now, we have generated a scale space and used the scale space to calculate the Difference of Gaussians. Those are then used to calculate Laplacian of Gaussian approximations that are scale invariant. I told you that they produce great key points. Here's how it's done!

Finding key points is a two part process:
1. Locate maxima/minima in DoG images
2. Find subpixel maxima/minima

Locate maxima/minima in DoG images
The first step is to coarsely locate the maxima and minima. This is simple. You iterate through each pixel and check all its neighbours. The check is done within the current image, and also the one above and below it. X marks the current pixel; the green circles mark the neighbours. This way, a total of 26 checks are made. X is marked as a "key point" if it is the greatest or least of all 26 neighbours.

Usually, a non-maxima or non-minima position won't have to go through all 26 checks. A few initial checks will usually be sufficient to discard it. Note that keypoints are not detected in the lowermost and topmost scales. There simply aren't enough neighbours to do the comparison. So simply skip them!

Once this is done, the marked points are the approximate maxima and minima. They are "approximate" because the maxima/minima almost never lie exactly on a pixel. They lie somewhere between the pixels. But we simply cannot access data "between" pixels. So, we must mathematically locate the subpixel location. Here's what I mean: the red crosses mark pixels in the image, but the actual extreme point is the green one.

Find subpixel maxima/minima
Using the available pixel data, subpixel values are generated. This is done by the Taylor expansion of the image around the approximate key point:

D(\mathbf{x}) = D + \frac{\partial D^{T}}{\partial \mathbf{x}} \mathbf{x} + \frac{1}{2} \mathbf{x}^{T} \frac{\partial^{2} D}{\partial \mathbf{x}^{2}} \mathbf{x}

We can easily find the extreme points of this equation (differentiate and equate to zero). On solving, we'll get subpixel key point locations. These subpixel values increase the chances of matching and the stability of the algorithm.

Example
Here's a result I got from the example image I've been using till now. The author of SIFT recommends generating two such extrema images. So, you need exactly 4 DoG images. To generate 4 DoG images, you need 5 Gaussian blurred images. Hence the 5 levels of blur in each octave. In the image, I've shown just one octave. This is done for all octaves.
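A small sketch of the coarse 26-neighbour extremum check described above (dog_below, dog_current, dog_above are assumed to be three consecutive DoG images of the same octave, e.g. from the earlier sketch):

```python
import numpy as np

def is_extremum(dog_below, dog_current, dog_above, r, c):
    """True if pixel (r, c) of the middle DoG image is the maximum or minimum
    of its 26 neighbours (8 in its own image, 9 above and 9 below)."""
    patch = np.stack([dog_below[r - 1:r + 2, c - 1:c + 2],
                      dog_current[r - 1:r + 2, c - 1:c + 2],
                      dog_above[r - 1:r + 2, c - 1:c + 2]])
    value = dog_current[r, c]
    return value == patch.max() or value == patch.min()
```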
Also, this image just shows the first part of keypoint detection. The Taylor series part has been skipped.

Summary
Here, we detected the maxima and minima in the DoG images generated in the previous step. This is done by comparing neighbouring pixels in the current scale, the scale "above" and the scale "below". Next, we'll reject some keypoints detected here. This is because they either don't have enough contrast or they lie on an edge.

4. Get rid of bad key points
The previous step produces a lot of key points. Some of them lie along an edge, or they don't have enough contrast. In both cases, they are not useful as features. So we get rid of them. The approach is similar to the one used in the Harris Corner Detector for removing edge features. For low contrast features, we simply check their intensities.

Removing low contrast features
This is simple. If the magnitude of the intensity (i.e., without sign) at the current pixel in the DoG image (that is being checked for minima/maxima) is less than a certain value, it is rejected. Because we have subpixel keypoints (we used the Taylor expansion to refine keypoints), we again need to use the Taylor expansion to get the intensity value at subpixel locations. If its magnitude is less than a certain value, we reject the keypoint.

Removing edges
The idea is to calculate two gradients at the keypoint, both perpendicular to each other. Based on the image around the keypoint, three possibilities exist. The image around the keypoint can be:
- A flat region: if this is the case, both gradients will be small.
- An edge: here, one gradient will be big (perpendicular to the edge) and the other will be small (along the edge).
- A "corner": here, both gradients will be big.

Corners are great keypoints. So we want just corners. If both gradients are big enough, we let it pass as a key point. Otherwise, it is rejected. Mathematically, this is achieved by the Hessian matrix. Using this matrix, you can easily check if a point is a corner or not.

If you're interested in the math, first check the posts on the Harris Corner Detector. A lot of the same math is used in SIFT. In the Harris Corner Detector, two eigenvalues are calculated. In SIFT, efficiency is increased by just calculating the ratio of these two eigenvalues. You never need to calculate the actual eigenvalues.

Example
Here's a visual example of what happens in this step: both extrema images go through the two tests, the contrast test and the edge test. They reject a few keypoints (sometimes a lot), and thus we're left with a lower number of keypoints to deal with.

Summary
In this step, the number of keypoints was reduced. This helps increase efficiency and also the robustness of the algorithm. Keypoints are rejected if they have low contrast or if they are located on an edge. In the next step we'll assign an orientation to all the keypoints that passed both tests.

5. Assigning an orientation to the keypoints
After step 4, we have legitimate key points. They've been tested to be stable. We already know the scale at which each keypoint was detected (it's the same as the scale of the blurred image), so we have scale invariance. The next thing is to assign an orientation to each keypoint. This orientation provides rotation invariance. The more invariance you have, the better.

The idea
The idea is to collect gradient directions and magnitudes around each keypoint. Then we figure out the most prominent orientation(s) in that region.
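A sketch of the edge rejection just described, following Lowe's trick of comparing the trace and determinant of the 2x2 Hessian (built here with simple finite differences of a DoG image) instead of computing the eigenvalues; the threshold of 10 is the commonly used value:

```python
def passes_edge_test(dog, r, c, edge_threshold=10.0):
    """Reject edge-like keypoints: keep (r, c) only if the ratio of principal
    curvatures, bounded via trace^2/det of the local Hessian, is small enough."""
    dxx = dog[r, c + 1] + dog[r, c - 1] - 2 * dog[r, c]
    dyy = dog[r + 1, c] + dog[r - 1, c] - 2 * dog[r, c]
    dxy = (dog[r + 1, c + 1] - dog[r + 1, c - 1]
           - dog[r - 1, c + 1] + dog[r - 1, c - 1]) / 4.0
    trace = dxx + dyy
    det = dxx * dyy - dxy * dxy
    if det <= 0:                       # curvatures have opposite signs: reject
        return False
    return trace ** 2 / det < (edge_threshold + 1) ** 2 / edge_threshold
```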
And we assign this orientation (or orientations) to the keypoint. Any later calculations are done relative to this orientation. This ensures rotation invariance. The size of the "orientation collection region" around the keypoint depends on its scale. The bigger the scale, the bigger the collection region.

The details
Now for the little details about collecting orientations. Gradient magnitudes and orientations are calculated using these formulae:

m(x, y) = \sqrt{(L(x+1, y) - L(x-1, y))^{2} + (L(x, y+1) - L(x, y-1))^{2}}
\theta(x, y) = \tan^{-1}\frac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)}

The magnitude and orientation are calculated for all pixels around the keypoint. Then, a histogram is created. In this histogram, the 360 degrees of orientation are broken into 36 bins (each 10 degrees). Let's say the gradient direction at a certain point (in the "orientation collection region") is 18.759 degrees; then it will go into the 10-19 degree bin. And the "amount" that is added to the bin is proportional to the magnitude of the gradient at that point.

Once you've done this for all pixels around the keypoint, the histogram will have a peak at some point. Above, you see the histogram peaks at 20-29 degrees. So, the keypoint is assigned orientation 3 (the third bin). Also, any peaks above 80% of the highest peak are converted into a new keypoint. This new keypoint has the same location and scale as the original, but its orientation is equal to the other peak. So, orientation can split up one keypoint into multiple keypoints.

The Technical Details
Magnitudes: saw the gradient magnitude image above? In SIFT, you need to blur it by an amount of 1.5*sigma.
Size of the window: the window size, or the "orientation collection region", is equal to the size of the kernel for Gaussian blur of amount 1.5*sigma.

Summary
To assign an orientation we use a histogram and a small region around the keypoint. Using the histogram, the most prominent gradient orientation(s) are identified. If there is only one peak, it is assigned to the keypoint. If there are multiple peaks above the 80% mark, they are all converted into new keypoints (with their respective orientations). Next, we generate a highly distinctive "fingerprint" for each keypoint. Here's a little teaser: this fingerprint, or "feature vector", has 128 different numbers.
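A short sketch of the 36-bin orientation histogram described above. magnitude and angle_deg are assumed to be NumPy arrays of gradient magnitudes and orientations (in degrees) over the collection region, e.g. as returned by cv2.cartToPolar; the Gaussian weighting of the magnitudes is omitted for brevity:

```python
import numpy as np

def dominant_orientations(magnitude, angle_deg, peak_ratio=0.8):
    """Accumulate gradient magnitudes into 36 bins of 10 degrees and return the
    centre of every bin whose peak is within peak_ratio of the highest peak."""
    hist = np.zeros(36)
    bins = (angle_deg.astype(int) % 360) // 10
    np.add.at(hist, bins.ravel(), magnitude.ravel())
    peaks = np.where(hist >= peak_ratio * hist.max())[0]
    return [b * 10 + 5 for b in peaks]       # each extra peak would spawn a new keypoint
```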
6. Generate SIFT features
Now for the final step of SIFT. Till now, we had scale and rotation invariance. Now we create a fingerprint for each keypoint. This is to identify a keypoint. If an eye is a keypoint, then using this fingerprint, we'll be able to distinguish it from other keypoints, like ears, noses, fingers, etc.

The idea
We want to generate a very unique fingerprint for the keypoint. It should be easy to calculate. We also want it to be relatively lenient when it is being compared against other keypoints. Things are never EXACTLY the same when comparing two different images.

To do this, a 16×16 window is taken around the keypoint. This 16×16 window is broken into sixteen 4×4 windows. Within each 4×4 window, gradient magnitudes and orientations are calculated. These orientations are put into an 8-bin histogram. Any gradient orientation in the range 0-44 degrees adds to the first bin, 45-89 to the next bin, and so on. And (as always) the amount added to the bin depends on the magnitude of the gradient.

Unlike before, the amount added also depends on the distance from the keypoint. So gradients that are far away from the keypoint will add smaller values to the histogram. This is done using a "Gaussian weighting function". This function simply generates a smooth falloff (it's like a 2D bell curve). You multiply it with the magnitude of the orientations, and you get a weighted result. The farther away, the lesser the magnitude.

Doing this for all 16 pixels, you would've "compiled" 16 totally random orientations into 8 predetermined bins. You do this for all sixteen 4×4 regions. So you end up with 4x4x8 = 128 numbers. Once you have all 128 numbers, you normalize them (just like you would normalize a vector in school, divide by the root of the sum of squares). These 128 numbers form the "feature vector". This keypoint is uniquely identified by this feature vector.

You might have seen that in the pictures above, the keypoint lies "in between". It does not lie exactly on a pixel. That's because it does not. The 16×16 window takes orientations and magnitudes of the image "in-between" pixels. So you need to interpolate the image to generate orientation and magnitude data "in between" pixels.

Problems
This feature vector introduces a few complications. We need to get rid of them before finalizing the fingerprint.
- Rotation dependence: the feature vector uses gradient orientations. Clearly, if you rotate the image, everything changes; all gradient orientations also change. To achieve rotation independence, the keypoint's rotation is subtracted from each orientation. Thus each gradient orientation is relative to the keypoint's orientation.
- Illumination dependence: if we threshold numbers that are big, we can achieve illumination independence. So, any number (of the 128) greater than 0.2 is changed to 0.2. This resultant feature vector is normalized again. And now you have an illumination independent feature vector!

Summary
You take a 16×16 window of "in-between" pixels around the keypoint. You split that window into sixteen 4×4 windows. From each 4×4 window you generate a histogram of 8 bins, each bin corresponding to 0-44 degrees, 45-89 degrees, etc. Gradient orientations from the 4×4 regions are put into these bins. This is done for all 4×4 blocks. Finally, you normalize the 128 values you get. To solve a few problems, you subtract the keypoint's orientation and also threshold the value of each element of the feature vector to 0.2 (and normalize again).
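A tiny sketch of the final normalisation and the 0.2 clamp described above (the 1e-7 guard against division by zero is an added safety detail, not part of the original description):

```python
import numpy as np

def finalize_descriptor(raw_128):
    """Normalise the 128-d vector, clamp each element at 0.2, then renormalise."""
    v = np.asarray(raw_128, dtype=np.float32)
    v /= np.linalg.norm(v) + 1e-7     # unit length: divide by root of sum of squares
    v = np.minimum(v, 0.2)            # reduce the influence of large gradient magnitudes
    v /= np.linalg.norm(v) + 1e-7     # renormalise after clamping
    return v

print(finalize_descriptor(np.random.rand(128))[:5])
```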