An eigenvalue problem for even order tensors
Mathematical English Vocabulary (数学专业英语词汇)

数学 mathematics, maths(BrE), math(AmE) 公理 axiom 定理 theorem 计算 calculation 运算 operation 证明 prove 假设 hypothesis, hypotheses(pl.) 命题 proposition

算术 arithmetic 加 plus(prep.), add(v.), addition(n.) 被加数 augend, summand 加数 addend 和 sum 减 minus(prep.), subtract(v.), subtraction(n.) 被减数 minuend 减数 subtrahend 差 remainder 乘 times(prep.), multiply(v.), multiplication(n.) 被乘数 multiplicand, faciend 乘数 multiplicator 积 product 除 divided by(prep.), divide(v.), division(n.) 被除数 dividend 除数 divisor 商 quotient 等于 equals, is equal to, is equivalent to 大于 is greater than 小于 is less than 大于等于 is greater than or equal to 小于等于 is less than or equal to 运算符 operator 平均数 mean 算术平均数 arithmetic mean 几何平均数 geometric mean (n个数之积的n次方根) 倒数 reciprocal (x的倒数为1/x)

有理数 rational number 无理数 irrational number 实数 real number 虚数 imaginary number 数字 digit 数 number 自然数 natural number 整数 integer 小数 decimal 小数点 decimal point 分数 fraction 分子 numerator 分母 denominator 比 ratio 正 positive 负 negative 零 null, zero, nought, nil 十进制 decimal system 二进制 binary system 十六进制 hexadecimal system 权 weight, significance 进位 carry 截尾 truncation 四舍五入 round 下舍入 round down 上舍入 round up 有效数字 significant digit 无效数字 insignificant digit

代数 algebra 公式 formula, formulae(pl.) 单项式 monomial 多项式 polynomial, multinomial 系数 coefficient 未知数 unknown, x-factor, y-factor, z-factor 等式,方程式 equation 一次方程 simple equation 二次方程 quadratic equation 三次方程 cubic equation 四次方程 quartic equation 不等式 inequation 阶乘 factorial 对数 logarithm 指数,幂 exponent 乘方 power 二次方,平方 square 三次方,立方 cube 四次方 the power of four, the fourth power n次方 the power of n, the nth power 开方 evolution, extraction 二次方根,平方根 square root 三次方根,立方根 cube root 四次方根 the root of four, the fourth root n次方根 the root of n, the nth root (sqrt(2)≈1.414, sqrt(3)≈1.732, sqrt(5)≈2.236) 常量 constant 变量 variable

坐标系 coordinates 坐标轴 x-axis, y-axis, z-axis 横坐标 x-coordinate 纵坐标 y-coordinate 原点 origin 象限 quadrant 截距(有正负之分) intercept (方程的)解 solution

几何 geometry 点 point 线 line 面 plane 体 solid 线段 segment 射线 ray 平行 parallel 相交 intersect 角 angle 角度 degree 弧度 radian 锐角 acute angle 直角 right angle 钝角 obtuse angle 平角 straight angle 周角 perigon 底 base 边 side 高 height 三角形 triangle 锐角三角形 acute triangle 直角三角形 right triangle 直角边 leg 斜边 hypotenuse 勾股定理 Pythagorean theorem 钝角三角形 obtuse triangle 不等边三角形 scalene triangle 等腰三角形 isosceles triangle 等边三角形 equilateral triangle 四边形 quadrilateral 平行四边形 parallelogram 矩形 rectangle 长 length 宽 width 周长 perimeter 面积 area 相似 similar 全等 congruent

三角 trigonometry 正弦 sine 余弦 cosine 正切 tangent 余切 cotangent 正割 secant 余割 cosecant 反正弦 arc sine 反余弦 arc cosine 反正切 arc tangent 反余切 arc cotangent 反正割 arc secant 反余割 arc cosecant

集合 set, aggregate 元素 element 空集 empty set 子集 subset 交集 intersection 并集 union 补集 complement 映射 mapping 函数 function 定义域 domain, field of definition 值域 range 单调性 monotonicity 奇偶性 parity 周期性 periodicity 图象 image

数列,级数 series 微积分 calculus 微分 differential 导数 derivative 极限 limit 无穷大 infinite(a.), infinity(n.) 无穷小 infinitesimal 积分 integral 定积分 definite integral 不定积分 indefinite integral 复数 complex number 矩阵 matrix 行列式 determinant

圆 circle 圆心 centre(BrE), center(AmE) 半径 radius 直径 diameter 圆周率 pi 弧 arc 半圆 semicircle 扇形 sector 环 ring 椭圆 ellipse 圆周 circumference 轨迹 locus, loci(pl.)
平行六面体 parallelepiped 立方体 cube 四面体 tetrahedron 五面体 pentahedron 六面体 hexahedron 七面体 heptahedron 八面体 octahedron 九面体 enneahedron 十面体 decahedron 十一面体 hendecahedron 十二面体 dodecahedron 二十面体 icosahedron 多面体 polyhedron 旋转 rotation 轴 axis 球 sphere 半球 hemisphere 底面 undersurface 表面积 surface area 体积 volume 空间 space 双曲线 hyperbola 抛物线 parabola

菱形 rhomb, rhombus, rhombi(pl.), diamond 正方形 square 梯形 trapezoid 直角梯形 right trapezoid 等腰梯形 isosceles trapezoid 五边形 pentagon 六边形 hexagon 七边形 heptagon 八边形 octagon 九边形 enneagon 十边形 decagon 十一边形 hendecagon 十二边形 dodecagon 多边形 polygon 正多边形 equilateral polygon

相位 phase 周期 period 振幅 amplitude 内心 incentre(BrE), incenter(AmE) 外心 excentre(BrE), excenter(AmE) 旁心 escentre(BrE), escenter(AmE) 垂心 orthocentre(BrE), orthocenter(AmE) 重心 barycentre(BrE), barycenter(AmE) 内切圆 inscribed circle 外切圆 circumcircle

统计 statistics 平均数 average 加权平均数 weighted average 方差 variance 标准差 root-mean-square deviation, standard deviation 比例 proportion 百分比 percent 百分点 percentage 百分位数 percentile 排列 permutation 组合 combination 概率,或然率 probability 分布 distribution 正态分布 normal distribution 非正态分布 abnormal distribution 图表 graph 条形统计图 bar graph 柱形统计图 histogram 折线统计图 broken line graph 曲线统计图 curve diagram 扇形统计图 pie diagram

abscissa 横坐标 absolute value 绝对值 acute angle 锐角 adjacent angle 邻角 addition 加 algebra 代数 altitude 高 angle bisector 角平分线 arc 弧 area 面积 arithmetic mean 算术平均值(总和除以总数) arithmetic progression 等差数列(等差级数) arm 直角三角形的股 at 总计(乘法) average 平均值 base 底 be contained in 位于...上 bisect 平分 center 圆心 chord 弦 circle 圆形 circumference 圆周长 circumscribe 外切,外接 clockwise 顺时针方向 closest approximation 最相近似的 combination 组合 common divisor 公约数,公因子 common factor 公因子 complementary angles 余角(二角和为90度) composite number 合数(可被除1及本身以外其它的数整除) concentric circle 同心圆 cone 圆锥(体积=1/3*pi*r*r*h) congruent 全等的 consecutive integer 连续的整数 coordinate 坐标的 cost 成本 counterclockwise 逆时针方向 cube 1.立方数 2.立方体(体积=a*a*a, 表面积=6*a*a) cylinder 圆柱体 decagon 十边形 decimal 小数 decimal point 小数点 decreased 减少 decrease to 减少到 decrease by 减少了 degree 角度 define 1.定义 2.化简 denominator 分母 denote 代表,表示 depreciation 折旧 distance 距离 distinct 不同的 dividend 1.被除数 2.红利 divided evenly 整除 divisible 可整除的 division 1.除 2.部分 divisor 除数 down payment 预付款,定金 equation 方程 equilateral triangle 等边三角形 even number 偶数 expression 表达 exterior angle 外角 face (立体图形的)某一面 factor 因子 fraction 1.分数 2.比例 geometric mean 几何平均值(N个数的乘积再开N次方) geometric progression 等比数列(等比级数) have left 剩余 height 高 hexagon 六边形 hypotenuse 斜边 improper fraction 假分数 increase 增加 increase by 增加了 increase to 增加到 inscribe 内切,内接 intercept 截距 integer 整数 interest rate 利率 in terms of... 用...表达 interior angle 内角 intersect 相交 irrational 无理数 isosceles triangle 等腰三角形 least common multiple 最小公倍数 least possible value 最小可能的值 leg 直角三角形的股 length 长 list price 标价 margin 利润 mark up 涨价 mark down 降价 maximum 最大值 median, medium 中数(把数字按大小排列：若为奇数项，则中间那项就为中数；若为偶数项，则中间两项的算术平均值为中数。)
Interpreting Eigenvalues

Eigenvalues are a fundamental concept in linear algebra with a wide range of applications in fields such as physics, engineering, computer science, and statistics. In this article, we delve into the intricacies of eigenvalues, discussing their meaning, properties, computation, and significance.

To begin, let's define eigenvalues formally. Given a square matrix A, an eigenvalue λ is a scalar that satisfies the equation Ax = λx, where x is a non-zero vector known as the eigenvector corresponding to λ. Intuitively, an eigenvalue is the scalar factor by which the eigenvector is stretched or compressed when the matrix A acts upon it.

One of the key properties of eigenvalues is that they determine essential characteristics of a matrix. For example, they provide information about the matrix's determinant: the determinant of A is equal to the product of its eigenvalues. Moreover, the trace of a matrix (the sum of its diagonal elements) is equal to the sum of its eigenvalues.

Eigenvalues also play a crucial role in solving systems of linear equations. When solving the equation Ax = b, where b is a known vector, eigenvalues help determine whether the system has a unique solution, no solution, or infinitely many solutions: A is invertible, and the system has a unique solution for every b, exactly when zero is not an eigenvalue of A.

To compute eigenvalues, we often use the characteristic polynomial of the matrix, defined as det(A − λI), where I is the identity matrix. Setting this polynomial equal to zero gives the characteristic equation, whose roots are the eigenvalues. Solving this equation directly can be difficult, especially for larger matrices, so numerical methods such as the iterative power method or the QR algorithm are employed.

Eigenvalues have several important applications. In physics, they are used to calculate the vibrational modes of structures, determine stability in dynamic systems, and solve quantum mechanics problems. In engineering, eigenvalues are employed to analyze structural behavior, design electrical circuits, and optimize signal processing algorithms. In computer science, eigenvalues aid in clustering data, dimensionality reduction, and image compression. In statistics, eigenvalues are utilized in factor analysis, principal component analysis, and data visualization techniques.

Additionally, eigenvalues have an intrinsic geometric interpretation. They can be viewed as the stretching or shrinking factors of a linear transformation of a vector space. An eigenvalue greater than 1 stretches the space along its corresponding eigenvector, while an eigenvalue between 0 and 1 shrinks it. In the case of a zero eigenvalue, the corresponding eigenvector lies in the null space of the matrix, meaning that it is annihilated by the linear transformation.

In conclusion, eigenvalues are a fundamental concept in linear algebra with wide-ranging applications in diverse fields. They provide essential information about a matrix, enabling us to understand its behavior, solve systems of equations, and analyze various phenomena. The computation and interpretation of eigenvalues offer valuable insight into the underlying principles of mathematical and scientific phenomena.
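The defining relation Ax = λx and the determinant/trace identities above are easy to verify numerically. The following minimal sketch uses NumPy and a small example matrix of my own choosing; the article itself names no particular library:

```python
import numpy as np

# Illustration of the properties discussed above (NumPy assumed for convenience).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(A)   # eigenpairs of A

# The defining relation: A x = lambda x for each eigenpair.
for lam, x in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ x, lam * x)

# det(A) is the product of the eigenvalues; trace(A) is their sum.
assert np.isclose(np.linalg.det(A), np.prod(eigvals))
assert np.isclose(np.trace(A), np.sum(eigvals))

print(eigvals)   # [5. 2.] for this A
```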
A Numerical Method for the Best Approximation Solution of the Inverse Eigenvalue Problem for Symmetric Matrices (对称矩阵特征值反问题的最佳逼近解的一种数值解法)

对称矩阵特征值反问题的最佳逼近解的一种数值解法何欢;孙合明;左环【摘要】利用复合最速下降法,给出了对称矩阵特征值反问题AX=XΛ有解和无解两种情况下最佳逼近解的通用数值算法,对任意给定的初始矩阵A0,经过有限步迭代可以得到对称矩阵特征值反问题的最佳逼近解,并分别给出有解和无解两种情况下的数值实例,证明了此算法的可行性.另外,结合投影算法,可以用此算法来求解其它凸约束下矩阵特征值反问题的最佳逼近解,从而扩大了此算法的求解范围.%By applying the hybrid steepest descent method, this paper gives a general numerical algorithm to find the optimal approximation solution to inverse eigenvalue problem, AX = X(A), for symmetric matrices. For any given initial matrix, the optimal approximation can be derived by finite iteration steps. Some numerical examples are provided to illustrate the feasibility of the algorithm. Moreover, combined with projection algorithm, the numerical algorithm can also be used to calculate the optimal approximation solution to other convex constrained inverse eigenvalue problem, thus extending the applicable scope of this algorithm.【期刊名称】《四川师范大学学报(自然科学版)》【年(卷),期】2012(035)004【总页数】5页(P473-477)【关键词】复合最速下降法;特征值反问题;最佳逼近【作者】何欢;孙合明;左环【作者单位】河海大学理学院,江苏南京211100;河海大学理学院,江苏南京211100;河海大学理学院,江苏南京211100【正文语种】中文【中图分类】O24特征值反问题及其最佳逼近已被广泛地研究与应用.L.Zhang[1]首次提出特征值反问题的对称解及其最佳逼近问题;Z.Y.Peng[2]用谱分解的方法解决了厄尔米特反自反矩阵的特征值反问题及其最佳逼近;郭丽杰等[3]和梁俊平等[4]利用矩阵的奇异值分解解决了二次特征值反问题对称解及其最佳逼近;Y.B.Deng等[5]讨论了对称矩阵的特征值反问题有解的条件,并在有解的情况给出了通解形式及其最佳逼近;F.Z.Zhou等[6]研究了正交对称矩阵的特征值反问题有解的条件及其最佳逼近;于蕾等[7]利用正交对称矩阵的特殊性质,给出了一类对称正交反对称矩阵特征值反问题的最佳逼近解的数值算法;Z.Y.Liu等[8]解决了中心厄尔米特矩阵特征值反问题及其最佳逼近;S.F.Yuan等[9]研究了在谱约束下三对角化对称和三对角化双对称矩阵的特征值反问题及其最佳逼近;郭丽杰[10]和陈亚波[11]利用奇异值分解分别得出子矩阵约束下矩阵特征值反问题的对称、反对称解及其最佳逼近.本文拟给出凸约束下的矩阵特征值反问题最佳逼近解的通用数值解法.记Rn×m表示全体实n×m矩阵的集合,Rn×n代表全体实对称n×n矩阵的集合,AT是A的转置矩阵,‖·‖F表示矩阵的Frobenius范数,H代表一实希尔伯特空间.下面给出特征值反问题及其最佳逼近问题.问题1(a) 给定矩阵X∈Rn×m,Λ=diag(λ1,…,λm)∈Rm×m,求A∈Rn×n使得由于实际情况下X和Λ来自实验数据,所以问题1(a)通常无解.问题1(b) 给定矩阵X∈Rn×m,Λ=diag(λ1,…,λm)∈Rm×m,求A∈Rn×n使得问题2 假设SE是问题1解的集合,对给定的∈Rn×n,求A*∈SE使得复合最速下降法[12-14]作为一个三步优化算法,首次提出是为了最小化实希尔伯特空间里非扩展映射不动点集合上的一些凸函数.目前复合最速下降法已经被成功地用来计算对于给定的对称矩阵的最佳逼近解[15],以及被成功应用到图像记忆[16].本文拟利用复合最速下降法,求问题1有解和无解两种情况下问题2的最佳逼近解A*.1 用复合最速下降法求解问题2定义1.1 设U为H的一个开子集,映射Φ:H→R∪{∞},如果对于所有的u∈U,存在a(u)∈H使得则称映射Φ:H→R∪{∞}是Gateaux可微,称Φ':U→H:u|→a(u)为Φ在U上的Gateaux导数.定义1.2 映射T:H→H,若则映射T:H→H为非扩展映射.特别地,若存在非空集合S⊂H,κ>0,使得对于所有的x,y∈S,恒有则T:H→H在S⊂H上κ-Lipschitzian.定义1.3 非空集合S⊂H,映射Φ:H→H在S⊂H上是单调的,如果存在η>0,使得对于所有的u,v∈S,恒有则称Φ:H→H在S⊂H上η-强单调.设则Ψ(A)=min,Θ(A)=min,分别与问题1和问题2等价.易知其中,B=XΛ.要求解问题2,首先,证明下面的引理.引理1.4 Ψ(A)是凸函数.证明∀A1,A2∈SRn×n,α∈[0,1],则有所以引理1.4得证.引理1.5 Θ(A)是凸函数.证明∀A1,A2∈Rn×n,α∈[0,1],则有所以引理1.5得证.引理1.6 Ψ'(A)满足κ-Lipschitzian.证明Ψ'(A)=AXXT-BXT,其中,B=XΛ,则有其中,κ=‖X‖2,所以引理1.6得证.引理1.7 Θ'(A)满足γ-Lipschitzian且η-强单调.证明那么存在γ≥1,0<η≤1满足引理1.7得证.定理 1.8(复合最速下降法[12-13]) 设 T:H→H是一非扩展映射,且Fix(T)≠Ø.假设Θ:H→R∪{∞}是一凸函数,Θ':H→H在T(H)上满足γ-Lipschitzian和η-强单调.如果非负实数序列(λn)n>1⊂[0,∞)满足:或者(λn)n>1⊂[0,∞)满足:那么对任意的u0∈H,强收敛到唯一解u*∈Fix(T),且定理1.9[14]设K⊂H是一闭凸子集.假设(I)Ψ:H→R∪{∞}是Gateaux可微凸函数,其G-导数Ψ':H→H满足κ-Lipschitzian;(II)Θ:H→R∪{∞}是Gateaux可微凸函数,其G-导数Θ':H→H在T(H)上γ-Lipschitzian和η-强单调,则T:=PK(I-vΨ')是非扩展映射,其中,v∈(0,2/κ]. 另外如果则对任意的u0∈H,应用复合最速下降法迭代公式un+1:=T(un)-λn+1Θ'(T(un))得到的序列(un)n>1强收敛到点.当问题1的解集SE非空时,很容易得到SE是一个闭凸集[17].定理1.10 令那么KΨ是一个闭凸集;Ψ(A)为凸函数且其G-导数Ψ'(A)满足κ-Lipschitzian;Θ(A)为凸函数且其G-导数Θ'(A)满足γ-Lipschitzian;并且η-强单调.对任意的v∈(0,2/κ],T:=PK(I-vΨ')是非扩展映射应用复合最速下降法得到的序列(An)n>1强收敛到点即问题2的解,其中T(An)=PK(An-v(AnXXT-BXT)),PK是到凸集K的投影,λn+1满足定理1.8的条件.证明该定理的条件已证明,仅需证明迭代公式如下:其中定理1.10得证.2 算法和数值例子根据定理1.10,得到下面的数值算法,可以求问题2的解A*.算法2.11)输入2)随机选择初始矩阵A0;3)计算B=XΛ;4)计算κ=‖X‖2,v=1/κ,n=0;5)λn+1=1/(n+1),根据计算An+1,其中6)若‖An+1-An‖≤10-10,A*=An+1,停止迭代;否则,令n=n+1,转5).现在将给出一些数值例子来说明结果,所有的实验数据都由Matlab 7.0计算得到. 
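Before the numerical examples, here is a compact sketch of Algorithm 2.1 in Python/NumPy (the paper's own experiments were run in Matlab 7.0). The projection P_K onto the symmetric matrices, the random initial A0, and the function name are assumptions of this sketch, chosen to match my reading of the paper's constraint set:

```python
import numpy as np

def optimal_approximation(X, Lam, A_tilde, tol=1e-10, max_iter=100000):
    """Hybrid steepest descent iteration of Algorithm 2.1 (a sketch).

    Finds the matrix closest to A_tilde (Frobenius norm) among the symmetric
    minimizers of ||A X - X Lam||_F."""
    n = X.shape[0]
    B = X @ Lam                                  # step 3: B = X * Lambda
    kappa = np.linalg.norm(X, 2) ** 2            # step 4: kappa = ||X||^2
    nu = 1.0 / kappa                             #         nu = 1/kappa in (0, 2/kappa]
    P_K = lambda M: (M + M.T) / 2.0              # projection onto SR^{n x n} (assumed K)
    A = np.random.rand(n, n)                     # step 2: arbitrary initial A0
    for k in range(1, max_iter + 1):             # step 5, with lambda_k = 1/k
        T = P_K(A - nu * (A @ X @ X.T - B @ X.T))    # T(A) = P_K(A - nu * Psi'(A))
        A_next = T - (1.0 / k) * (T - A_tilde)       # subtract lambda_k * Theta'(T)
        if np.linalg.norm(A_next - A, 'fro') <= tol: # step 6: stopping rule
            return A_next
        A = A_next
    return A
```

Replacing P_K with the projection onto a different closed convex set K gives the extension to other convex-constrained inverse eigenvalue problems that the paper describes.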
例2.2取得到问题2的解A*,并且得到‖A*X-XΛ‖=说明:例2.2中的X和Λ通过某一已知矩阵的特征值分解所得,结果表明在问题1有解的情况下此算法是可行的.例2.3 取X、Λ和并求得A*的值并有8.861 1.说明:例2.3表明通过取部分特征值和特征向量(问题1无解的情况)此算法是可行的.通过上面的例子表明提出的数值算法用来求解问题2是可行的.进而,可以用此算法去求解其它凸约束下的矩阵特征值反问题最佳逼近解.参考文献[1]Zhang L.A class of inverse eigenvalue problems of symmetric matrices[J].Num Math J Chin Univ,1990,12(1):65-71.[2]Peng Z Y.The inverse eigenvalue problem for Hermitian anti-reflexive matrices and its approximation[J].Appl Math Comput,2005,162:1377-1389.[3]郭丽杰,周硕.二次特征值反问题的对称次反对称解及其最佳逼近[J].吉林大学学报:理学版,2009,47(6):1185-1190.[4]梁俊平,卢琳璋.二次特征值反问题的中心斜对称解及其最佳逼近[J].福建师范大学学报:自然科学版,2006,22(3):10-14.[5]Deng Y B,Hu X Y,Zhang L.The solvability conditions for the inverse eigenvalue problem of the symmetrizable matrices[J].J Comput ApplMath,2004,163:101-106.[6]Zhou F Z,Hu X Y,Zhang L.The solvability conditions for the inverse problems of symmetric ortho-symmetric matrices[J].Appl Math Comput,2004,154:153-166.[7]于蕾,张凯院,周丙常.一类对称正交反对称矩阵反问题的最佳逼近[J].数学的实践与认识,2008,38(8):158-163.[8]Liu Z Y,Tan Y X,Tian Z L.Generalized inverse eigenvalue problemfor centrohermitian matrices[J].J Shanghai Univ:Eng Ed,2004,8(4):448-453.[9]Yuan S F,Liao A P,Lei Y.Inverse eigenvalue problems of tridiagonal symmetric matrices and tridiagonal bisymmetric matrices[J].Comput Math Appl,2008,55:2521-2532.[10]郭丽杰.子矩阵约束下矩阵反问题的对称解及其最佳逼近[J].东北电力大学学报,2006,26(4):74-78.[11]陈亚波.子阵约束下矩阵方程反问题的实反对称解及其最佳逼近[J].湖南农业大学学报:自然科学版,2002,28(5):444-446.[12]Yamada I,Ogura N,Yamashita Y,et al.Quadratic optimization of fixed points of nonexpansive mappings in Hilbert space[J].Num Funct Anal Optim,1998,19:165-190.[13]Yamada I.The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings[C]//Butnariu D,Censor Y,Reich S.Inherently Parallel Algorithm for Feasibility and Optimization and Their Applications.New York:Elsevier,2001:473-504.[14]Yamada I,Ogura N,Shirakawa N.A numerically robust hybrid steepest descent method for the convexly constrained generalized inverse problems[C]//Nashed Z,Scherzer O.Inverse Problems,Image Analysis,and Medical Imaging.Contemporary Mathematics,2002,313:269-305. [15]Slavakis K,Yamada I,Sakaniwa putation of symmetric positive definite Toeplitz matrices by the hybrid steepest descent method [J].Signal Processing,2003,83:1135-1140.[16]Sun H M,Hasegawa H,Yamada I.Multidimensional associative memory neural network to recall nearest pattern from input[C]//Nonlinear Signal and Image Processing.Sapporo:IEEE-Eurasip,2005:39.[17]Paulo J,Ferreira S G.The existence and uniqueness of the minimum norm solution to certain linear and nonlinear problems[J].Signal Processing,1996,55:137-139.。
Computing the Largest Eigenvalue of a Matrix by the Power Method (幂法求矩阵最大特征值)

Abstract: Many problems in physics, mechanics, and engineering reduce mathematically to matrix eigenvalue problems, and in some engineering and physics applications only the largest eigenvalue of the matrix (the dominant eigenvalue) and the corresponding eigenvector are required. For this kind of eigenvalue problem, the power method is an effective tool.
The power method is an iterative method for computing the largest eigenvalue of a real matrix A; its greatest advantage is its simplicity.
It is well suited to sparse matrices, although convergence is sometimes very slow.
The algorithm is implemented in Java.
The program consists of three main parts: the first converts the matrix into a system of linear equations; the second finds the component of largest modulus of the eigenvector; the third is the power-method routine.
Its basic flow is that the power-method routine calls the matrix-to-linear-system conversion and then obtains the result through a series of checks and iterations.
关键词：幂法；矩阵最大特征值；Java；迭代

POWER METHOD TO CALCULATE THE MAXIMUM EIGENVALUE OF A MATRIX

Key words: power method; largest eigenvalue of a matrix; Java; iteration

Contents:
1 The power method: 1.1 Basic idea; 1.2 Normalization
2 Outline design: 2.1 Background; 2.2 Flow; 2.3 Environment
3 Detailed program design: 3.1 Part 1, converting the matrix into a linear system; 3.2 Part 2, the maximal component of the eigenvector; 3.3 Part 3, the power-method routine
4 Execution and results: 4.1 Process; 4.2 Results; 4.3 Analysis
5 Lessons learned
References
Appendix: source code

1 The power method

Suppose the n × n matrix A has n linearly independent eigenvectors v₁, v₂, …, vₙ with corresponding eigenvalues λ₁, λ₂, …, λₙ satisfying |λ₁| > |λ₂| ≥ … ≥ |λₙ|.

1.1 Basic idea

Since {v₁, v₂, …, vₙ} is a basis of Cⁿ, any x⁽⁰⁾ ≠ 0 can be written as the linear combination

$$x^{(0)} = \sum_{i=1}^{n} a_i v_i,$$

so that

$$A^k x^{(0)} = \sum_{i=1}^{n} a_i A^k v_i = \sum_{i=1}^{n} a_i \lambda_i^k v_i = \lambda_1^k \Big[ a_1 v_1 + \sum_{i=2}^{n} a_i \big(\tfrac{\lambda_i}{\lambda_1}\big)^k v_i \Big].$$

If a₁ ≠ 0 then, since |λᵢ/λ₁| < 1 for i ≥ 2, for sufficiently large k we have A^k x⁽⁰⁾ ≈ λ₁^k a₁ v₁ = c v₁, an eigenvector belonging to λ₁. On the other hand, writing max(x) = xᵢ where |xᵢ| = ‖x‖∞, for sufficiently large k

$$\frac{\max(A^k x^{(0)})}{\max(A^{k-1} x^{(0)})} \approx \frac{\lambda_1^k \max(a_1 v_1)}{\lambda_1^{k-1} \max(a_1 v_1)} = \lambda_1.$$

If a₁ = 0 then, owing to round-off error, some iterate will acquire a nonzero component in the direction of v₁, and continued iteration still yields approximations to λ₁ and the corresponding eigenvector.
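This iteration translates directly into code. The report's own program is written in Java (given in its appendix, not reproduced here); the following Python/NumPy version is a sketch for illustration, with a test matrix of my own choosing:

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=1000):
    """Power method with the normalization of Section 1: each iterate is scaled
    by max(y), the component of largest modulus, so the scale factors tend to
    the dominant eigenvalue lambda_1 and the iterates to its eigenvector."""
    x = np.ones(A.shape[0])              # starting vector x^(0) != 0
    lam_old = 0.0
    for _ in range(max_iter):
        y = A @ x                        # y = A x^(k)
        lam = y[np.argmax(np.abs(y))]    # max(y): entry with |y_i| = ||y||_inf
        x = y / lam                      # normalize so that max(x) = 1
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, x

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, v = power_method(A)
print(lam, v)    # dominant eigenvalue 3.0, eigenvector proportional to (1, 1)
```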
The Algebraic Eigenvalue Problem (代数特征值问题)

CLARENDON PRESS • OXFORD 1965
Contents
1. THEORETICAL BACKGROUND
Introduction
Definitions
Eigenvalues and eigenvectors of the transposed matrix
Distinct eigenvalues
Similarity transformations
Multiple eigenvalues and canonical forms for general matrices
Defective system of eigenvectors
The Jordan (classical) canonical form
The elementary divisors
Companion matrix of the characteristic polynomial of A
Non-derogatory matrices
The Frobenius (rational) canonical form
Relationship between the Jordan and Frobenius canonical forms
Equivalence transformations
Lambda matrices
Elementary operations
Smith's canonical form
The highest common factor of k-rowed minors of a λ-matrix
Invariant factors of (A − λI)
The triangular canonical form
Hermitian and symmetric matrices
Elementary properties of Hermitian matrices
Complex symmetric matrices
Reduction to triangular form by unitary transformations
Quadratic forms
Necessary and sufficient conditions for positive definiteness
Differential equations with constant coefficients
Solutions corresponding to non-linear elementary divisors
Differential equations of higher order
Second-order equations of special form
Explicit solution of By″ = −Ay
Equations of the form (AB − λI)x = 0
The minimum polynomial of a vector
The minimum polynomial of a matrix
Cayley-Hamilton theorem
Relation between minimum polynomial and canonical forms
Principal vectors
Elementary similarity transformations
Properties of elementary matrices
Reduction to triangular canonical form by elementary similarity transformations
Elementary unitary transformations
Elementary unitary Hermitian matrices
Reduction to triangular form by elementary unitary transformations
Normal matrices
Commuting matrices
Inverse Eigenvalue Problems for Several Classes of Nonnegative Matrices (几类非负矩阵特征值反问题)

The Inverse Eigenvalue Problem for Several Classes of Nonnegative MatricesA DissertationSubmitted for the Degree of MasterOn computational mathematicsby Tian YuUnder the Supervision ofProf. Wang Jinlin(College of Mathematics and Information Sciences)Nanchang Hangkong University, Nanchang, ChinaJune, 2011摘 要非负矩阵理论一直是矩阵理论中最活跃的研究领域之一,在数学、自然科学的其他分支以及社会科学中都广泛涉及到,例如博弈论、Markov链(随机矩阵)、概率论、概率算法、数值分析、离散分布、群论、matrix scaling、小振荡弹性系统(振荡矩阵)和经济学等等.近年来,特征值反问题是矩阵理论研究的热点,本文将就非负矩阵特征值反问题(NIEP)这一问题进行研究.文章主要研究几类特殊形式的非负矩阵特征值反问题,得到了相关问题的充分必要条件和一些充分条件,进而给出了这几类特殊形式的非负矩阵特征值反问题数值算法,并通过数值算例来验证相关定理的正确性以及算法的准确性.主要工作如下: 第一章是绪论部分,阐述了非负矩阵特征值反问题的重要意义和发展历程,介绍国内外研究现状.第二章,研究非负三对角矩阵特征值反问题.首先对三阶非负三对角矩阵特征值反问题,分几种情形进行讨论,解决了三阶非负三对角矩阵特征值反问题,得到了三阶非负三对角矩阵特征值反问题有解的充分必要条件.然后对n阶非负三对角矩阵特征值反问题,通过非负三对角矩阵截断矩阵特征多项式,并结合Jacobi 矩阵特征值的关系,得到了非负三对角矩阵的特征值的相关性质,并最终解决了非负三对角矩阵特征值反问题.第三章,研究非负五对角矩阵特征值反问题.三阶非负五对角矩阵,即是三阶非负矩阵,文中给出了其特征值反问题有解的充分必要条件,而对于n阶非负五对角矩阵特征值反问题,由于其复杂性,文中仅给出了它的一些充分条件.第四章,研究非负循环矩阵特征值反问题.首先总结了NIEP近些年来取得的研究成果,提出实循环矩阵特征值反问题,并成功解决了实循环矩阵特征值反问题,得到其充分必要条件.最后在实循环矩阵特征值反问题的基础上提出非负循环矩阵特征值反问题,得到了充分条件和相关推论.第五章,根据第二、三、四章的结论给出相关算法和实例.第六章,在总结全文的同时,提出了需要进一步研究的问题.关键词:特征值,反问题,非负三对角矩阵,非负五对角矩阵,非负循环矩阵AbstractThe theory of nonnegative matrices has always been one of the most active research areas in the matrix theory and has been widely applied in mathematics and other branches of natural and social sciences. There are, for example, game theory, Markov chains (stochastic martices), theory of probability, probabilistic algorithms, numerical analysis, discrete distribution, group theory, matrix scaling, theory of small osillations of elastic systems (oscillation marrices), economics and so on. In recent years, the inverse eigenvalue problem comes to be the focus of the matrix theory. This thesis will study the inverse eigenvalue problem for nonnegative matrices (NIEP). The major researches of this theisis focus on the inverse eigenvalue problem for several special classes of nonnegative matrices, the necessary and sufficient conditions and some sufficient conditions of which are derived. Moreover, the numerical algorithms of the inverse eigenvalue problem for these special classes of nonnegative matrices are given, the accuracy of which together with the correcteness of related theories is testified by several numerical examples. The main procedures of this theisis are as follows:In the first chapter, the significance and the development of the inverse eigenvalue problem for nonnegative matrices are addressed, and the research situation home and abroad is introduced.In the second chapter, the inverse eigenvalue problem for nonnegative tridiagonal matrices is studied. First, the inverse eigenvalue problem for 33⨯ nonnegative tridiagonal matrices is solved by discussion of a variety of situations. Moveover ,the necessary and sufficient conditions of the solutions of the inverse eigenvalue problem for 33⨯ nonnegative tridiagonal matrices are derived. Then, the properties of eigenvalue of n n ⨯ nonnegative tridiagonal matrices are derived by characteristic polynomial of truncated matrices of nonnegative tridiagonal matrices, with the combination of the relationship between eigenvalues of Jacobi matrix. Finally, the inverse eigenvalue problem for nonnegative tridiagonal matrices is solved.In the third chapter, the inverse eigenvalue problem for nonnegative five-diagonal matrices is studied. 
33⨯ nonnegative five-diagonal matrices is also 33⨯ nonnegative matrices, the necessary and sufficient conditions of the solutions of the inverse eigenvalue problem for which are given in this thesis. For the inverseeigenvalue problem for n nnonnegative five-diagonal matrices, only some sufficient conditions are given because of its complexity.In the fourth chapter, the inverse eigenvalue problem for nonnegative circulant matrices is studied. First, some remarkable conclusions of the inverse eigenvalue problem for nonnegative matrices in recent years are summarized. Then, the inverse eigenvalue problem for real circulant matrices is advanced and successfully solved, the necessary and sufficient conditions of which are given also. Finally, the inverse eigenvalue problem for nonnegative circulant matrices is advanced based on the inverse eigenvalue problem for real circulant matrices, whose sufficient conditions and some relevant conclusions are given.In the fifth chapter, some algorithms and numerical examples are given based on the conclusions derived in the previous three chapters.In the sixth chapter, the summary of the paper is given and the future research work is put forward.Key words: eigenvalue, inverse problem, nonnegative tridiagonal matrices, nonnegative five-diagonal matrices, nonnegative circulant matrices目录第一章 绪论 (1)1.1选题的依据与意义 (1)1.2非负矩阵特征值反问题的研究现状 (2)1.3研究的主要内容 (3)第二章 非负三对角矩阵特征值反问题 (5)2.1引言 (5)2.2三阶非负三对角矩阵特征值反问题 (6)n阶非负三对角矩阵特征值反问题 (24)2.3第三章 非负五对角矩阵特征值反问题 (33)3.1引言 (33)3.2非负五对角矩阵特征值反问题相关结论 (33)第四章 非负循环矩阵特征值反问题 (38)4.1引言 (38)4.2一类特殊矩阵的特征值反问题 (40)4.3非负循环矩阵特征值反问题 (42)第五章 算法设计及实例 (45)5.1非负三对角矩阵特征值反问题算法 (45)5.2非负五对角矩阵特征值反问题算法 (47)5.3实循环矩阵特征值反问题算法 (49)5.4非负循环矩阵特征值反问题算法 (50)第六章 总结与展望 (53)6.1全文总结 (53)6.2工作展望 (53)参考文献 (54)攻读硕士学位期间发表的论文 (57)致 谢 (58)IV第一章 绪论1.1 选题的依据与意义反问题,顾名思义是相对于正问题而言的,它是根据事物的演化结果,由可观测的现象来探求事物的内部规律或所受的外部影响,由表及里,索隐探秘.在数学中有着许多反问题,例如已知两个自然数的乘积,如何求这两个自然数;已知导数,如何求原函数;已知一个角的三角函数值,如何求这个角的度数,等等.近些年来,人们在生活、工业生产、科学探索中经常遇到反问题,对反问题的研究也越来越受到重视.事实上,对于一般问题来说,反问题要比正问题复杂.如前面提到的求角度数问题,已知一个角,求其三角函数值是唯一的,但如果只知道一个角的三角函数值而不对这个角加以约束,这样的角将会有无穷多个,因而反问题的解一般来说不唯一.另外,反问题的解也极不稳定.因此,对反问题的研究主要包括以下几个方面:存在性、唯一性、稳定性、数值方法和实际应用.矩阵特征值反问题(又称代数特征值反问题或逆特征值问题),就是根据已给定的特征值和/或特征向量等信息来确定一个矩阵,使得该矩阵满足所给的条件.矩阵特征值反问题的来源非常广泛.它不仅来自于数学物理反问题的离散化,而且来自固体力学、粒子物理、量子物理、结构设计、系统参数识别、自动控制等许多领域.由于矩阵特征值反问题的应用广泛性,因而自从此类问题被提出来的几十年里,受到了大量学者的深入研究,得到了一系列优秀成果.本文研究的非负矩阵特征值反问题正是在此期间提出来的,它作为矩阵特征值反问题的一个重要分支,尤其是在概率统计、随机分布、系统分析方面有着重要应用.所谓非负矩阵特征值反问题就是根据已给定的特征值信息来确定一个非负矩阵,使得该非负矩阵满足所给的条件.例如在概率统计中提出一类随机矩阵(矩阵的元素行和为1),这类矩阵在Markov链中有着重要应用,假如对矩阵的特征值有某些特殊要求,能否构造和如何构造出此类矩阵?非负矩阵特征值反问题从提出到现在的几十年间,虽然受到了大量学者的研究,但由于其复杂性,目前仍存在大量的疑难问题尚未解决,这也是它吸引众多学者研究的魅力所在.因而从以上可以看出对非负矩阵特征值的研究无论是对数学本身的发展还是对其它科学的发展都有着重要的意义及广阔的前景.1.2 非负矩阵特征值反问题的研究现状非负矩阵特征值反问题的提出始于上个世纪50年代,它是由矩阵特征值反问题抽离出来的一个子问题.1937年,Kolmogorov [1]首先提出了给定一个复数z 何时为某个非负矩阵特征值的问题.1949年,Suleimanova [2]扩展了Kolmogorov 提出的问题,称为非负矩阵特征值反问题(简称NIEP),即寻找以一组复数12{,,,}n σλλλ= 为特征值的n 阶非负矩阵A ,并且假若能够找到这样一个矩阵A ,就说矩阵A 实现了σ.Kolmogorov 问题显然很容易回答,Minc [3]给出了解答,即对于33⨯阶正循环矩阵,总可以找到一个这样的矩阵使得给定的复数z 作为它的特征值.然而NIEP 从提出至今仍未得到很好地解决,为此一些学者首先从NIEP 的必要条件开始研究.Loewy 和Londow [4]、Johnson [5]给出文献[6]中NIEP 的四个必要条件,其中最后一个条件称为JLL 条件.1998年,Laffey 和Meehan [7]又对奇数阶非负矩阵进行了讨论,给出了奇数阶非负矩阵迹为零的JLL 条件.由于一般的n 阶NIEP 无法直接解答,一批学者考虑了低阶矩阵的情形.1978年,Loewy 和Londow [4]完全解决3n =时的NIEP,给出了四个充分必要条件.45n =、时的NIEP,目前只解决了迹为零的情形.1996年,Reams [8]解决了4n =时迹为零的情形,即:令1234{,,,}σλλλλ=为一组复数,假若120,0,S S =≥30S ≥和2244S S ≤(这里的41k k i i S λ==∑),则必存在一个4阶非负矩阵能够实现σ.1999年,Laffey 和Meehan [9]解决了5n =时迹为零的情形.上面介绍了6n <的情形,然而当6n ≥时,NIEP 却是一个极大地挑战,到目前为止,未见任何形式的解答.虽然NIEP 未曾从正面给出很好的解答,但却吸引大批学者对12{,,}n σλλλ= 
,的特殊形式作出深入探讨,这其中包括H.Suleimanova、H.Perfect、R.Kellogg、Salzman、Guo Wuwen 等等.Suleimanova [2]证明0(2,3,,)i i n λ≤= 的σ可被实现的充分必要条件是10ni i λ=≥∑.Kellogg [10]对σ的序列进行分块研究,给出了某些符合要求的分块可被实现.Guo Wuwen [11-12]对已可实现的σ修正做了研究,其中修正后可被实现与σ中最大的数有着密切关系,文献[12]定理3.1的结论尤为重要,它在研究扩展σ可被实现中被广泛引用.另外值得一提的是Ricardo.Soto、Alberto.Borobia、Julio.Moro 三位近十年来在非负矩阵特征值反问题上做了大量深入的研究,文献[13-18]集中反映了他们在这一块的研究成果.上面介绍了NIEP,如果把上面的非负矩阵换成非负对称矩阵,则称为非负对称矩阵特征值反问题(简称SNIEP);如果把上面的一组复数12{,,}n σλλλ= ,换成一组实数,则称为非负矩阵实特征值反问题(简称RNIEP).SNIEP和RNIEP都是NIEP的子问题,它们是研究NIEP的重要组成部分,虽然两者研究的都是实特征值,但它们并不完全等价.一般地,当5n≥时,这是两个完全不同的问题.目前,当4n≤时,SNIEP已被完全解决,当5n=时,R.Loewy和J.J.Mcdonald在文献[9]中做了详细的讨论.而当6n≥时,尚无人解决.文献[19-21]给出了SNIEP的相关结论.4n=时的RNIEP已被解决,事实上,Loewy和Londow在文献[4]中给出的NIEP四个必要条件也是4阶RNIEP的充分条件.当5n≥时,目前尚未有所突破.另外,文献[2,10,22,23,24,25]给出了RNIEP的相关结论.随机矩阵和双随机矩阵作为非负矩阵的两种特殊形式,在研究NIEP中有着极为重要的应用,这里把它们归为一类问题,即随机和双随机矩阵特征值反问题.Johnson[26]证明了如果一个非负矩阵A有正Perron根ρ,则存在一个随机矩阵与1Aρ同谱.1981年,Soules[27]给出了一种构造对称双随机矩阵的方法并得到构造对称双随机矩阵的充分条件.以上是NIEP的主要研究的方向,由于NIEP的复杂性和作者的水平限度,可能衍生出更多的小问题,本文没有一一涉及到,在此后面将不再叙述.此外,由于NIEP研究不够成熟,关于它的数值计算目前研究的不多.Robert.Orsi[28]利用交错射影的思想构造出一种迭代方法来计算非负矩阵特征值反问题,但需指出的是这种迭代并不一定会能得出好的结果,仍需要找到好的判定条件.O.Rojo等在文献[29-30]中通过快速Fourier变化巧妙地得到一种构造对称非负矩阵的方法,大大节省计算时间,这种方法通过在Matlab上实现,证明效率是非常高的.目前,国内尚无对此方面的研究的相关文献.从以上可以看出,虽然非负矩阵特征值反问题的研究得到了一定的成果,但仍有大量的问题需要解决,本文将从几类特殊矩阵来探讨此类问题,进一步促进此方向的研究.例如:能否给出非负(对称)三对角矩阵的特征值反问题的充要条件以及如何实现?如何实现非负循环矩阵的特征值反问题?等等.1.3 研究的主要内容本文研究几类特殊形式的非负矩阵特征值反问题,得到了相关问题的充分必要条件和一些充分条件,进而给出这几种特殊形式的非负矩阵特征值反问题算法,并通过数值算例来验证相关定理的正确性以及算法的准确性.主要工作如下:第一章是绪论部分,阐述了非负矩阵特征值反问题的重要意义和发展历程,介绍国内外研究现状.第二章,研究非负三对角矩阵特征值反问题.首先对三阶非负三对角矩阵特征值反问题,分几种情形进行讨论,解决了三阶非负三对角矩阵特征值反问题,得到了三阶非负三对角矩阵特征值反问题有解的充分必要条件.然后对n阶非负三对角矩阵特征值反问题,通过非负三对角矩阵截断矩阵特征多项式,并结合Jacobi矩阵特征值的关系,得到了非负三对角矩阵的特征值的相关性质,并最终解决了非负三对角矩阵特征值反问题.第三章,研究非负五对角矩阵特征值反问题.三阶非负五对角矩阵,即是三阶非负矩阵,文中给出了其特征值反问题有解的充分必要条件,而对于n阶非负五对角矩阵特征值反问题,由于其复杂性,文中仅给出了它的一些充分条件.第四章,研究非负循环矩阵特征值反问题.首先总结了NIEP近些年来取得的研究成果,提出实循环矩阵特征值反问题,并成功解决了实循环矩阵特征值反问题,得到其充分必要条件.最后在实循环矩阵特征值反问题的基础上提出非负循环矩阵特征值反问题,得到了充分条件和相关推论.第五章,根据第二、三、四章的结论给出相关算法和实例.第六章,在总结全文的同时,提出了需要进一步研究的问题.南昌航空大学硕士学位论文 第二章 非负矩阵特征值反问题第二章 非负三对角矩阵特征值反问题2.1 引言在控制论、振动理论、结构设计中经常要求根据已给的特征值/或特征向量来构造矩阵,即是特征值反问题(或特征值逆问题).三对角矩阵作为一类特殊矩阵,在实际问题中常出现,是研究矩阵理论的一个重要方面,因而有必要对其特征值反问题进行研究.文章的引言部分已给出了非负矩阵特征值反问题的研究现状,可以看出对于非负三对角矩阵的特征值反问题一直缺乏研究,本章将对这一问题进行研究.首先给出如下定义.定义 2.1.1 设n 阶实三对角矩阵形式如下:11112211100n n n n n n x y z x y T z x y z x ----⎡⎤⎢⎥⎢⎥⎢⎥=⎢⎥⎢⎥⎢⎥⎣⎦. (1)若0(1,2,,)i i y z i n =>= ,则称n T 为Jacobi 矩阵;(2)若0,0,0i i i x y z ≥≥≥,则称n T 为非负三对角矩阵;(3)若0,0i i i x y z ≥=≥,则称n T 为非负对称三对角矩阵;若0,0i i i x y z ≥=>,则称n T 为非负Jacobi 矩阵.非负三对角矩阵特征值反问题:给定一组复数12{,,,}n σλλλ= ,寻找非负三对角矩阵A 以σ为特征值,并且假设能够找到这样一个矩阵,就说矩阵A 实现了σ.下面再给出两个引理.引理 2.1.1[31](广义Perron 定理) 设A 是一个n n ⨯阶非负矩阵.定义Perron 根如下:()max{:()}A A ρλλσ=∈.则()A ρ为A 的特征值,并且其相应的特征向量0x ≥(即向量x 的每个元素均大于等于零).引理 2.1.2[4] 设123{,,}σλλλ=是一个由复数构成的序列,并且假设σ满足如下条件: (i)13max{:}i i i λλσσ≤≤∈∈; (ii)σσ=;(iii)11230s λλλ=++≥; (iv)2123s s ≤.则σ能被一个非负矩阵A 实现.2.2 三阶非负三对角矩阵特征值反问题设12{,,,}n σλλλ= 是一个由n 个复数构成的序列,文献[6]给出由Loewy 和Londow [4]、Johnson [5]得到的NIEP 四个必要条件,显然这四个条件对非负三对角矩阵特征值反问题也适用,即(i)Perron 根max{:}i i ρλλσσ=∈∈; (ii)σσ=;(iii)定义1(1,2,)nk k i i s k λ===∑ ,则有0k s ≥;(iv)(JLL 条件)1(,1,2,)m m kk m s n s k m -≤= .二阶非负矩阵特征值反问题有如下结论.引理2.2.1 给定两个数12,λλ,则12{,}σλλ=可以被非负矩阵实现的充分必要条件是12,λλ均为实数(不妨设12λλ≥)并且12λλ≥.证明 首先可以证明这两个数是实数.实矩阵的特征值如果是复数(虚部不为零),则会以共轭对的形式出现,不妨将12,λλ设为,(x yi x yi i +-=.假设σ可以被实现,则存在一个非负矩阵A 以12,λλ为特征值.令非负矩阵a c A d b ⎡⎤=⎢⎥⎣⎦(,,,a b c d 均大于等于零),则2()acI A a b ab cd d bλλλλλ---==-++---. (2-1) 由式(2-1)知,有1220a b x λλ+=+=≥, (2-2) 2212ab cd x y λλ=-=+. 
(2-3)由式(2-2)中20a b x +=≥,根据均值不等式的关系知ab 的最大值为2x .而由式(2-3)有222ab x y cd x =++≥,显然当12,λλ是复数时,20,y ab x ≠>,矛盾.故12,λλ不可能是复数.充分性.当12λλ≥时,可以分为两种情形讨论即20λ≥和20λ<.而120λλ==时,显然可以被零矩阵实现.当20λ≥时,σ可以被1200λλ⎡⎤⎢⎥⎣⎦实现.当20λ≤时,可以取定,a b 均大于等于零使得式(2-2)成立,这时120ab cd λλ=-≤,显然可以取无数个均大于等于零,c d 使得式(2-3)成立.这样就存在一个矩阵a c d b ⎡⎤⎢⎥⎣⎦实现σ. 必要性.由于12λλ≥,故只需证12λλ<时,σ不能被现实即可.当12λλ<时,由式(2-2)有120a b λλ+=+<,而,a b 均大于等于零,矛盾.证毕.引理 2.2.2 给定三个实数123,,λλλ,如果123(2,3),i i λλλλ≥=≥和1230λλλ++≥,则123{,,}σλλλ=可被非负矩阵A 实现.证明 分三种情形讨论.当0(1,2,3)i i λ≥=时,令123000000A λλλ⎡⎤⎢⎥=⎢⎥⎢⎥⎣⎦,则A 可实现σ.当1230λλλ≥≥≥时,令13131313202202200A λλλλλλλλλ+-⎡⎤⎢⎥⎢⎥-+⎢⎥=⎢⎥⎢⎥⎢⎥⎢⎥⎣⎦,则A 可实现σ. 当1230λλλ≥≥≥时,令1231231231230A λλλλλλλλλλλλ⎡+++-⎢=+-++⎢⎢⎢⎥⎣⎦,则A 可实现σ.证毕.定理 2.2.3 给定一组实数12{,,,}n σλλλ= 12()n λλλ≥≥≥ ,1n 表示其中0(1,2,,)i i n λ>= 的个数,2n 表示0(1,2,,)i i n λ<= 的个数.如果12n n ≥且120(1,2,,)i n i i n λλ+-+≥= ,则12{,,,}n σλλλ= 可以被非负三对角矩阵实现.证明 由引理2.2.1知120(1,2,,)i n i i n λλ+-+≥= 时,1{,}(1,2,,i n i i σλλ+-==2)n 可以被一个二阶非负矩阵2(1,2,,)i A i n = 实现,而2210(1,2,,i i n n n λ≥=++),则22112{,,,}n n n σλλλ++= 可以被非负三对角矩阵22112{,,,}n n n diag λλλ++ 实现.因而12{,,}n σλλλ= ,可以被非负三对角矩阵22211212{,,,,,,,,n n n n diag A A A λλλ++ 12}n n n --0实现,其中12n n n --0表示12n n n --阶零矩阵.证毕.推论 2.2.4 给定一组实数12{,,,}n σλλλ= 12()n λλλ≥≥≥ ,1n 表示其中0(1,2,,)i i n λ>= 的个数,2n 表示0(1,2,,)i i n λ<= 的个数,11{1,2,,}n Γ= 对应的正特征值112222,,,,{1,2,,}n n n n n n λλλΓ=-+-+ 对应的负特征值2212,,,n n n n n λλλ-+-+ .如果12n n ≥,对于2Γ中的每个数j 都能在1Γ中找到一个数i 使得1220(1,2,,,1,2,,)i j i n j n n n n n λλ+≥==-+-+ 且每个i 对应一个j ,则12{,,,}n σλλλ= 可以被非负三对角矩阵实现.推论 2.2.5 给定一组实数12{,,,}n σλλλ= 12()n λλλ≥≥≥ ,1n 表示其中0(1,2,,)i i n λ>= 的个数,2n 表示0(1,2,,)i i n λ<= 的个数,11{1,2,,}n Γ= 对应的正特征值112,,,n λλλ ,222{1,2,,}n n n n n Γ=-+-+ 对应的负特征值2212,,,n n n n n λλλ-+-+ .如果12n n ≥,对于2Γ中的每个数j 都能在1Γ中找到一个数i 使得1220(1,2,,,1,2,,)i j i n j n n n n n λλ+≥==-+-+ 且每个i 对应一个j ,则12{,,,}n σλλλ= 可以被非负对称三对角矩阵实现.定理2.2.6 给定一组实数123123{,,}()σλλλλλλ=≥≥,如果3(1,2)i i λλ≥=和1230λλλ++>,假若123{,,}σλλλ=能被非负三对角矩阵111222300a b A c a b c a ⎡⎤⎢⎥=⎢⎥⎢⎥⎣⎦实现,则A 中的13,a a 均不能为零.证明 设非负三对角矩阵111222300a b A c a b c a ⎡⎤⎢⎥=⎢⎥⎢⎥⎣⎦能够实现123{,,}σλλλ=,即123,,λλλ是矩阵A 的三个特征值.首先给出矩阵A 的特征多项式.11122231232211133212312132312322111332123121323112212312231100()()()()()()()()()()()a b I A c a b c a a a a b c a b c a a a a a a a a a a a a a b c a b c a a a a a a a a a a b c b c a a a a b c a b c λλλλλλλλλλλλλλλλλ---=-----=-------=-+++++-----=-+++++---++.由根与系数的关系知,有下列成立,123123a a a λλλ++=++, (2-4) 1213231213231122a a a a a a b c b c λλλλλλ++=++--, (2-5)123123122311a a a a b c a b c λλλ=--. (2-6)令123112132321233111,,,d d d b c t λλλλλλλλλλλλ++=++===和222b c t =,显然由3(1,2)i i λλ≥=和1230λλλ++>知10,0(2,3),0(1,2)i i d d i t i ><=≥=,则式(2-4)、式(2-5)和式(2-6)可改写成如下:1123d a a a =++, (2-7) 212132312d a a a a a a t t =++--, (2-8) 31231231d a a a a t a t =--. (2-9) 下面用反证法证明13,a a 均不能为零.显然13,a a 不能同时为零,否则式(2-9)不成立.由于式(2-7)、式(2-8)和式(2-9)中的13,a a 是一个对称的关系,故不妨假设10a =.当130,0a a =≠时,有12322312331d a a d a a t t d a t=+⎧⎪=--⎨⎪=-⎩.(2-10) 由式(2-10)可得133223233//t d a t a a d d a =-⎧⎨=-+⎩. (2-11) 再来分析22313,,,,a a d d λ之间的关系.3(1,2)i i λλ≥=和1230λλλ++>,由123λλλ≥≥可知20λ>且1232λλλλ++≤.由式(2-10)有23120,a a d λ≤≤<和332d λ>.将213a d a =-带入式(2-11),有2133233()/t d a a d d a =--+. (2-12)将式(2-12)可以看做成2t 关于3a 的函数,对2t 关于3a 进行求导,可得'2213332/t d a d a =--. (2-13)显然'2t 在31(0,]a d ∈上有'20t >,而2t 又是关于31(0,]a d ∈上的连续函数,故2t在31a d =时取得最大值,这时3221d t d d =-+. 
(2-14) 将12311213232,d d λλλλλλλλλ++=++=和1233d λλλ=带入式(2-14)中,可得32211231213231231213231231231232221232133121231232212123123121232123312()()()()()()2()()()[(d t d d λλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλ=-+=-+++++-+++++=++-+-+-+-=++-+-+-+=++-+++=1212312332312123132312123)]()[()()]()()()().λλλλλλλλλλλλλλλλλλλλλλλλλ+++-++++=++-+++=++ (2-15)由式(2-15)可知,当130,0a a =≠时,20t <.因为123{,,}σλλλ=能被非负三对角矩阵111222300a b c a b c a ⎡⎤⎢⎥⎢⎥⎢⎥⎣⎦实现时,总有11122200t b c t b c =≥⎧⎨=≥⎩,矛盾.因而13,a a 均不能为零.证毕.定理 2.2.7 给定一组全不为零的实数123{,,}σλλλ=123()λλλ≥≥,如果3(1,2)i i λλ≥=和1230λλλ++=,则123{,,}σλλλ=不能被非负三对角矩阵111222300a b A c a b c a ⎡⎤⎢⎥=⎢⎥⎢⎥⎣⎦实现.证明 设非负三对角矩阵111222300a b A c a b c a ⎡⎤⎢⎥=⎢⎥⎢⎥⎣⎦能够实现123{,,}σλλλ=,即123,,λλλ是矩阵A 的三个特征值.同定理2.2.7的证明类似,可以给出矩阵A 的特征多项式,并由根与系数的关系可以得到式(2-4)、式(2-5)和式(2-6).对于式(2-4),当1230λλλ++=时,由于0,1,2,3i a i ≥=,可知123a a a ==0=.而123,,λλλ全不为零和3(1,2)i i λλ≥=可知0(1,2)i i λ>=和30λ<.对于式(2-6)左边=1230λλλ<,右边=1231223110a a a a b c a b c --=,左右不相等,矛盾.故123{,,}σλλλ=不能被非负三对角矩阵111222300a b A c a b c a ⎡⎤⎢⎥=⎢⎥⎢⎥⎣⎦实现.证毕. 定理2.2.8 给定一组实数123123{,,}()σλλλλλλ=≥≥,如果3(1,2)i i λλ≥=和1230λλλ++>,则123{,,}σλλλ=不能被非负三对角矩阵111222000a b A c b c a ⎡⎤⎢⎥=⎢⎥⎢⎥⎣⎦实现. 证明 设非负三对角矩阵矩阵111222000a b A c b c a ⎡⎤⎢⎥=⎢⎥⎢⎥⎣⎦能够实现123{,,}σλλλ=,即123,,λλλ是矩阵A 的三个特征值.首先给出矩阵A 的特征多项式.11122212221112321212221112321212112212221100()()()()()()()()().a b I A c b c a a a b c a b c a a a a a b c a b c a a a a a b c b c a b c a b c λλλλλλλλλλλλλλλλλ---=----=------=-++----=-++--++由根与系数的关系知,有下列成立:12312a a λλλ++=+, (2-16) 121323121122a a b c b c λλλλλλ++=--, (2-17) 123122211a b c a b c λλλ=--. (2-18)令123112132321233111,,,d d d b c t λλλλλλλλλλλλ++=++===和222b c t =,显然由3(1,2)i i λλ≥=和1230λλλ++>知10,0(2,3),0(1,2)i i d d i t i ><=≥=,则式(2-16) 、式(2-17)和式(2-18)可改写成如下:112d a a =+, (2-19) 21212d a a t t =--, (2-20) 31221d a t a t =--. (2-21)先讨论12a a =的情形.当12a a =时,由式(2-19)可知1122da a ==,则式(2-20)和式(2-21)可化为:21212/4t t d d +=-, (2-22)1123()2dt t d +=-. (2-23)这里式(2-22)和式(2-23)两式中的12t t +必须相等,因而有231212/4d d d d -=-. (2-24) 将123112132321233,,d d d λλλλλλλλλλλλ++=++==代入式(2-24)中可得到只关于123,,λλλ的方程,即2123123121323123312312132312312333322222123123123123233222221231231232332()/4(),()4()()80,3()3()143()4(()(3)λλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλ-++-++=++++-+++++=+++++++++-++++++22333222212312312323322222211232322331232222112323123)0,()()()0,(())()()0,(())()(())0.λλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλ=++-+---+=--++-+--=---+--=上式最终可化为22123123()(())0λλλλλλ----=. (2-25)由式(2-25)知要使得式(2-22)和式(2-23)两式中的12t t +相等,就必须满足1230λλλ--=或22123()0λλλ--=,故可得123λλλ=+或123()λλλ=±-.已知1233,(1,2)i i λλλλλ≥≥≥=和1230λλλ++>,显然无论是123λλλ=+还是123()λλλ=±-均不满足已知条件,因而12a a ≠.下面讨论12a a ≠的情形.结合式(2-19)、式(2-20)和式(2-21)联解,可得 2111312111()2a d a d a d t a d -+-=-, (2-26)21113122111211()()2a d a d a d t a d a d a d -+-=----. (2-27)对于式(2-26)和式(2-27)可以看成12,t t 关于1111(0,)(,)22d da d ∈ 的函数,下面把式(2-26)和式(2-27)分在两个区间上讨论.(i)当11(0,2da ∈时,先讨论1t ,令21111312()H a d a d a d =-+-,实际上1H 就是1t 的分子部分.因为20d <,所以有21211113()2d dH a d a d <-+-.由式(2-14)3210d d d -+<知312d d d <,这样2121111()2d d H a d a <-+.令2122111()2d dH a d a =-+,则在11(0,]2d a ∈上有12H H <.对2H 关于1a 求导,可得'2211111123(23)H a d a a d a =-=-,显然在112(0,)3d a ∈上有'20H >,故在11(0,2d a ∈上有'20H >,因而2H 在11(0,2d a ∈上单调递增,又因2H 在112d a =处有定义,则当112da =时,2H 取得最大值,且22111212112(((4)2228d d d d dH d d d =-+=+. 
(2-28)由1233,(1,2)i i λλλλλ≥≥≥=和1230λλλ++>可知1232λλλλ++≤,则12d λ≤.将1231d λλλ++=和1213232d λλλλλλ++= 代入式(2-28)中,可得21212222121323222312312132321223121323(4)8[4()]8[()()3()]8[()()3()]80.d H d d λλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλ=+≤+++=++++++=+++++< 这样,在11(0,)2d a ∈上就有20H <,故1H 在11(0,2da ∈上同样也有10H <.因为在11(0,)2d a ∈上1120a d -<,则有10t >. 下面再来讨论2t .将2t 通分化简,得3221111121232112()2a a d a d d d d d t a d -+-++-=-. (2-29)令32231111121232()H a a d a d d d d d =-+-++-,对3H 关于1a 进行求导,得到'2231111234H a a d d d =-+--.显然在11(0,2d a ∈上,'3H 单调递增,且当10a =时,'3H 取得最小值212d d --.将1231d λλλ++=和1213232d λλλλλλ++=代入212d d --中,前面已说明12d λ≤,因而可得到2212123121323221213232231231223()()()()()()()0.d d λλλλλλλλλλλλλλλλλλλλλλλλλλ--=-++-++>--++=-+-+=-++>由上面可以看到'3H 在11(0,)2d a ∈上有'30H >.因此3H 在11(0,)2da ∈上单调递增,又3H 在10a =处有定义,3123(0)0H d d d =->,故3H 在11(0,2da ∈上有30H >,则2t 在区间11(0,2da ∈有20t <.(ii)当111(,)2da d ∈时,同样先分析1t ,直接对1H 求导可得,'21111223H a d a d =--2111122()()a d a a d =--+. (2-30) 对于式(2-30)中右边第二项有221222221213231223()()()()()0.a d d λλλλλλλλλλλλ-+>-+=-+++=-++>因而在111(,)2d a d ∈上有'10H >,故1H 在此区间上单调递增,又1H 在11a d =处有定义,则在11a d =处取得最大值,即11312()0H d d d d =-<.因此,在区间111(,)2d a d ∈上,有10H <,又1120a d ->,则10t <. 下面再来分析2t .对3H 求导,可得 '2231111234()H a a d d d =-+-+211111123()()a d a a d d d =-+-+. (2-31) 对于式(2-31)中右边第三项有221222()()0d d d λ-+>-+>.因而在111(,)2d a d ∈上有'30H >,故3H 在此区间上单调递增,又3H 在112d a =处取得最小值,即322111*********1121232112123()(2(()2222()24()20.d d d dH d d d d d d d d d d d d da d d d d =-+-++-=-++->-+++-> 因此,在区间111(,)2d a d ∈上,有30H >,又1120a d ->,则20t >. 通过对式(2-26)和式(2-27)在1111(0,)(,)22d da d ∈ 上的分析,可以得出当11(0,)2d a ∈时,120,0t t ><;当111(,)2da d ∈时,120,0t t <>.因而当12a a ≠时,12,t t 无法满足同时大于等于零.这样,以上的推导就证明了不存在非负三对角矩阵111222000a b A c b c a ⎡⎤⎢⎥=⎢⎥⎢⎥⎣⎦能够实现123{,,}σλλλ=.证毕.定理2.2.9 给定一组实数123123{,,}()σλλλλλλ=≥≥,如果3(1,2)i i λλ≥=和1230λλλ++>,则123{,,}σλλλ=不能被非负三对角矩阵111222300a b A c a b c a ⎡⎤⎢⎥=⎢⎥⎢⎥⎣⎦实现,其中123,,a a a 全不为零.证明 设非负三对角矩阵111222300a b A c a b c a ⎡⎤⎢⎥=⎢⎥⎢⎥⎣⎦能够实现123{,,}σλλλ=,其中123,,a a a 均不为零,即123,,λλλ是矩阵A 的三个特征值.首先给出矩阵A 的特征多项式.11122231232211133212312132312322111332123121323112212312231100()()()()()()()()()()()a b I A c a b c a a a a b c a b c a a a a a a a a a a a a a b c a b c a a a a a a a a a a b c b c a a a a b c a b c λλλλλλλλλλλλλλλλλ---=-----=-------=-+++++-----=-+++++---++.由根与系数的关系知,有式(2-4)、式(2-5)和式(2-6)成立.令123112132321233111,,,d d d b c t λλλλλλλλλλλλ++=++===和222b c t =,由3(1,2)i i λλ≥=和1230λλλ++>知10,0(2,3),0(1,2)i i d d i t i ><=≥=,则式(2-4)、式(2-5)和式(2-6)可改写成式(2-7)、式(2-8)和式(2-9).下面分两种情形讨论:13a a =和13a a ≠.(i)当13a a =时.由式(2-8)和(2-9)式分别得到212121323212122t t a a a a a a d a a a d +=++-=+-, (2-32) 2123121a a d t t a -+=. (2-33)显然式(2-32)和式(2-33)都有120t t +>,下面证明两式不可能相等.令23221231111231121211()2a a d a a d a d d f a a a a d a a --+-=+--=-.对于上式中的分子部分,令3241111123()H a a a d a d d =-+-.对4H 求导可得'24111232H a a d d =-+.令'40H =得1a =4H 的两个极值点分别在(,0)-∞和12(,)3d+∞上,因而4H 在区间1(0,2d 单调.因为1212a a d +=,则显然112d a <.当10a =时,43(0)0H d =->.当112d a =时,333111211243(0)()024282d d d d d d d H d =-+->-->.因此在区间1(0,)2d 内无法找到1a 满足41()0H a =,即找不到1a 使得1()0f a =,则式(2-32)和式(2-33)不相等.故当13a a =时,无法找到非负三对角矩阵111222300a b A c a b c a ⎡⎤⎢⎥=⎢⎥⎢⎥⎣⎦满足条件. (ii)当13a a ≠时.由式(2-7)、式(2-8)和式(2-9)联解,得 21233231121323231()a a a d d a t a a a a a a d a a ++-=++---, (2-34) 2123323231()a a a d d a t a a ++-=-. (2-35) 将式(2-34)通分可得:2121323*********131()()()a a a a a a d a a a a a d d a t a a ++---+-+=-212332131()a a a d d a a a -+-+=-. 
(2-36) ①当13a a <时.由式(2-36)可知21233211312121131321121123131()2(2)422.a a a d d a t a a d d a d a a d d d d d d a a a a -+-+=--->----+>=--因为221221213231223222()()()0d d d λλλλλλλλλλλ+<+++=+++<,则10t >. 由式(2-35)有212332323121113231321121123131()42(2)4240.a a a d d a t a a d d d d d a a d d d d d d a a a a ++-=-+-<-++<=<--②当13a a >时.对于式(2-36)分子部分有2221121123321112()(2)0424d d d da a a d d a d d d -+-+>--=-+>.因而10t <.对于式(2-33)分子部分有322112112332312()(2)0424d d d d a a a d d a d d ++-<+=+<.因而20t >.由以上的分析可以得出无论123,,a a a 如何取值均不能满足12,t t 均大于等于零.这样,就证明了找不到一个非负三对角矩阵能够实现123{,,}σλλλ=.证毕.由定理2.2.6、定理2.2.7、定理2.2.8和定理2.2.9可以得出下面的结论. 推论2.2.10 给定一组实数123123{,,}()σλλλλλλ=≥≥,如果3(1,2)i i λλ≥=和1230λλλ++≥,则123{,,}σλλλ=不能被非负三对角矩阵111222300a b A c a b c a ⎡⎤⎢⎥=⎢⎥⎢⎥⎣⎦实现. 推论2.2.11 推论2.2.3、定理2.2.6、定理2.2.7、定理2.2.8、定理2.2.9和推论2.2.10的结论中非负三对角矩阵均可改为非负对称三对角矩阵,结论依然成立.注:推论2.2.10和2.2.11实际上也是对广义Perron 定理[31]一种验证. 定理 2.2.12 给定三个实数123,,λλλ,如果132(2,3),0i i λλλλ≥=≤<,和1230λλλ++≥,则123{,,}σλλλ=不能被非负三对角矩阵111222300a b A c a b c a ⎡⎤⎢⎥=⎢⎥⎢⎥⎣⎦实现. 证明 设非负三对角矩阵111222300a b A c a b c a ⎡⎤⎢⎥=⎢⎥⎢⎥⎣⎦能够实现123{,,}σλλλ=,即123,,λλλ是矩阵A 的三个特征值.首先给出矩阵A 的特征多项式.11122231232211133212312132312322111332123121323112212312231100()()()()()()()()()()()a b I A c a b c a a a a b c a b c a a a a a a a a a a a a a b c a b c a a a a a a a a a a b c b c a a a a b c a b c λλλλλλλλλλλλλλλλλ---=-----=-------=-+++++-----=-+++++---++.由根与系数的关系知,有式(2-4)、式(2-5)和式(2-6)成立.。
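The two-by-two construction used in the proof of Lemma 2.2.1 above can be checked directly. The sketch below (Python/NumPy; the symmetric choice c = d is one of the many choices the proof allows) realizes {λ₁, λ₂} with λ₁ ≥ |λ₂| by a nonnegative matrix:

```python
import numpy as np

def realize_pair(lam1, lam2):
    """Realize {lam1, lam2} (lam1 >= |lam2|) by a 2x2 nonnegative matrix.

    Following the proof of Lemma 2.2.1: take a = b = (lam1+lam2)/2 >= 0 and
    c, d >= 0 with c*d = a*b - lam1*lam2; here we pick the symmetric c = d."""
    if lam1 < abs(lam2):
        raise ValueError("need lam1 >= |lam2|")
    a = (lam1 + lam2) / 2.0
    c = np.sqrt(a * a - lam1 * lam2)     # = (lam1 - lam2)/2 >= 0
    return np.array([[a, c],
                     [c, a]])

A = realize_pair(5.0, -3.0)              # -> [[1, 4], [4, 1]], entrywise >= 0
print(np.sort(np.linalg.eigvals(A)))     # -> [-3.  5.]
```

Stacking such 2×2 blocks together with the remaining nonnegative eigenvalues on the diagonal gives exactly the block-diagonal realization used in Theorem 2.2.3.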
Linear Algebra Teaching Materials: Chapter 4

THE EIGENVALUE PROBLEM

A scalar λ is an eigenvalue of A if and only if A − λI is singular; any nonzero n × 1 vector X satisfying AX = λX is called an eigenvector corresponding to λ. Equivalently,

A − λI is singular ⟺ det(A − λI) = 0 ⟺ the homogeneous system (A − λI)X = 0 has a nonzero solution ⟺ r(A − λI) < n,

and in that case (A − λI)X = 0 has infinitely many solutions, so eigenvectors are determined only up to a nonzero scalar factor.

Step 1: find all scalars λ such that A − λI is singular.
Step 2: given a scalar λ such that A − λI is singular, solve (A − λI)X = 0 for the nonzero solutions X.

Example: Find all eigenvalues and eigenvectors of A, where

A = | 5  −6 |
    | 2  −2 |

Solution: The matrix A − λI has the form

A − λI = | 5−λ   −6  |
         |  2   −2−λ |

(1) A − λI is singular if and only if (5 − λ)(−2 − λ) + 12 = 0, i.e. λ² − 3λ + 2 = 0. Since λ² − 3λ + 2 = (λ − 2)(λ − 1), it follows that A − λI is singular if and only if λ₁ = 2 or λ₂ = 1.

(2) For λ₁ = 2: AX = λX becomes (A − 2I)X = 0, i.e. 3x₁ − 6x₂ = 0 and 2x₁ − 4x₂ = 0, so X = t(2, 1)ᵀ for any t ≠ 0. Similarly, for λ₂ = 1: (A − I)X = 0 gives 4x₁ − 6x₂ = 0 and 2x₁ − 3x₂ = 0, so X = t(3, 2)ᵀ for any t ≠ 0.
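The worked example can be confirmed numerically; a minimal check with NumPy (a tooling choice, not part of the original notes):

```python
import numpy as np

# Numerical check of the example above.
A = np.array([[5.0, -6.0],
              [2.0, -2.0]])

lam, V = np.linalg.eig(A)
print(lam)                                # eigenvalues 2 and 1 (order may vary)
for l, x in zip(lam, V.T):
    assert np.allclose(A @ x, l * x)      # AX = lambda X for each eigenpair
```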
Algebraic eigenvalue problems

6.0. Introduction 113 Chapter 6Algebraic eigenvalue problemsDas also war des Pudels Kern! G OETHE.6.0. IntroductionDetermination of eigenvalues and eigenvectors of matrices is one of the most important problems of numerical analysis. Theoretically, the problem has been reduced to finding the roots of an algebraic equation and to solving linear homogeneous systems of equations. In practical computation, as a rule, this method is unsuitable, and better methods must be applied.When there is a choice between different methods, the following questions should be answered:(a)Are both eigenvalues and eigenvectors asked for, or are eigenvalues alone sufficient?(b)Are only the absolutely largest eigenvalue(s) of interest?(c)Does the matrix have special properties (real symmetric, Hermitian, and so on)?If the eigenvectors are not needed less memory space is necessary, and further, if only the largest eigenvalue is wanted, a particularly simple technique can be used. Except for a few special cases a direct method for computation of the eigenvalues from the equation is never used. Further it turns out that practically all methods depend on transforming the initial matrix one way or other without affecting the eigenvalues. The table on p. 114 presents a survey of the most important methods giving initial matrix, type of transformation, and transformation matrix. As a rule, the transformation matrix is built up successively, but the resulting matrix need not have any simple properties, and if so, this is indicated by a horizontal line. It is obvious that such a compact table can give only a superficial picture; moreover, in some cases the computation is performed in two steps. Thus a complex matrix can be transformed to a normal matrix following Eberlein, while a normal matrix can be diagonalized following Goldstine-Horwitz. Incidentally, both these procedures can be performed simultaneously giving a unified method as a result. Further, in some cases we have recursive techniques which differ somewhat in principle from the other methods.It is not possible to give here a complete description of all these methods because of the great number of special cases which often give rise to difficulties. However, methods which are important in principle will be treated carefully114 Algebraic eigenvalue problems6.1. The power method 115 and in other cases at least the main features will be discussed. On the whole we can distinguish four principal groups with respect to the kind of transformation used initially:1.Diagonalization,2.Almost diagonalization (tridiagonalization),3.Triangularization,4.Almost triangularization (reduction to Hessenberg form).The determination of the eigenvectors is trivial in the first case and almost trivialin the third case. In the other two cases a recursive technique is easily established which will work without difficulties in nondegenerate cases. To a certain amount we shall discuss the determination of eigenvectors, for example, Wilkinson's technique which tries to avoid a dangerous error accumulation. Also Wielandt's method, aiming at an improved determination of approximate eigenvectors, will be treated.6.1. The power methodWe assume that the eigenvalues of are where Now we let operate repeatedly on a vector which we express as a linear combination of the eigenvectors(6.1.1) Then we haveand through iteration we obtain(6.1.2). 
For large values of the vectorwill converge toward that is, the eigenvector of The eigenvalue is obtained as(6.1.3) where the index signifies the component in the corresponding vector. The rate of convergence is determined by the quotient convergence is faster the116 Algebraic eigenvalue problemssmaller is. For numerical purposes the algorithm just described can be formulated in the following way. Given a vector we form two other vectors, and(6.1.4)The initial vector should be chosen in a convenient way, often one tries vector with all components equal to 1.E XAMPLEStarting fromwe find thatandAfter round-off, we getIf the matrix is Hermitian and all eigenvalues are different, the eigenvectors, as shown before, are orthogonal. Let be the vector obtained after iterations:We suppose that all are normalized:6.1. The power method 117 Then we haveandFurther,When increases, all tend to zero,and with, we get Rayleigh's quotient(6.1.5) ExampleWith andwe obtain for and 3,,and respectively, compared with the correct value The corresponding eigenvector isThe quotients of the individual vector components give much slower convergence; for example,The power method can easily be modified in such a way that certain other eigenvalues can also be computed. If, for example,has an eigenvalue then has an eigenvalue Using this principle, we can produce the two outermost eigenvalues. Further, we know that is an eigenvalue of and analogously that is an eigenvalue of If we know that an eigenvalue is close to we can concentrate on that, since becomes large as soon as is close toWe will now discuss how the absolutely next largest eigenvalue can be calculated if we know the largest eigenvalue and the corresponding eigenvector Let be the first row vector of and form(6.1.6)Here is supposed to be normalized in such a way that the first component is Hence the first row of is zero. Now let and be an eigenvalue and the corresponding eigenvector with the first component of equal to Then118 Algebraic eigenvalue problems we havesince and(note that the first component of as well as of is 1).Thus is an eigenvalue and is an eigenvector of Since has the first component equal to 0, the first column of is irrelevant, and in fact we need consider only the-matrix, which is obtained when the first row and first column of are removed. We determine an eigenvector of this matrix, and by adding a zero as first component, we get a vector Then we obtain from the relationMultiplying with we find and hence When and have been determined, the process, which is called deflation, can be repeated.E XAMPLEThe matrixhas an eigenvalue and the corresponding eigenvectoror normalized,Without difficulty we findNow we need consider onlyand we find the eigenvalues which are also eigenvalues of6.1. The power method 119 the original matrix The two-dimensional eigenvector belonging to isand henceSince we get andWith we findand Hence andand all eigenvalues and eigenvectors are known.If is Hermitian, we have when Now suppose thatand form(6.1.7) It is easily understood that the matrix has the same eigenvalues and eigenvectors as exceptwhich has been replaced by zero. In fact, we haveand and so on. Then we can again use the power method on the matrix120 Algebraic eigenvalue problems With the starting vectorwe find the following values for Rayleigh's quotient:and compared with the correct valueIf the numerically largest eigenvalue of a real matrix is complex,then must also be an eigenvalue. 
It is also clear that if is the eigenvector belonging to then is the eigenvector belonging toNow suppose that we use the power method with a real starting vectorThen we form with so large that the contributions from all the other eigenvectors can be neglected. Further, a certain component of is denoted by Thenwhere and the initial component of corresponding to is Hencewhere we have put Now we formHence(6.1.8) Then we easily findIn particular, if that is, if the numerically largest eigenvalues are ofthe form with real then we have the simpler formula(6.1.10)6.2. Jacobi's methodIn many applications we meet the problem of diagonalizing real, symmetric matrices. This problem is particularly important in quantum mechanics.In Chapter 3 we proved that for a real symmetric matrix all eigenvalues are real, and that there exists a real orthogonal matrix such that is diagonal. We shall now try to produce the desired orthogonal matrix as a— product of very special orthogonal matrices. Among the off-diagonal elements6.2. Jacobi's method 121 we choose the numerically largest element:The elementsand form a submatrix which can easily be transformed to diagonal form. We putand get(6.2.1) Now choose the angle such that that is, tan This equationgives 4 different values of and in order to get as small rotations as possible we claimPuttingandwe obtain:since the angle must belong to the first quadrant if tan and to the fourth quadrant if tan Hence we have for the anglewhere the value of the arctan-function is chosen between After a few simple calculations we get finally:(6.2.2)(Note that andWe perform a series of such two-dimensional rotations; the transformation matrices have the form given above in the elements and and are identical with the unit matrix elsewhere. Each time we choose such values and that We shall show that with the notation the matrix for increasing will approach a diagonal122 Algebraic eigenvalue problems matrix with the eigenvalues of along the main diagonal. Then it is obvious that we get the eigenvectors as the corresponding columns of since we have that is, Let be the column vector of and the diagonal element of Then we haveIf is denoted by we know from Gershgorin's theorem that for some value of and if the process has been brought sufficiently far, every circle defined in this way contains exactly one eigenvalue. Thus it is easy to see when sufficient accuracy has been attained and the procedure can be discontinued.The convergence of the method has been examined by von Neumann and Goldstine in the following way. We put and, as before,The orthogonal transformation affects only the row and column and the row and column. Taking only off-diagonal elements into account, we find for and relations of the formand hence Thus will be changed only through the cancellation of the elements and that is,Since was the absolutely largest of all off-diagonal elements, we haveandHence we get the final estimate,(6.2.3)After iterations,has decreased with at least the factor and for a sufficiently large we come arbitrarily close to the diagonal matrix containing the eigenvalues.In a slightly different modification, we go through the matrix row by row performing a rotation as soon as Here is a prescribed tolerance which, of course, has to be changed each time the whole matrix has been passed. This modification seems to be more powerful than the preceding one. The method was first suggested by Jacobi. It has proved very efficient for diagonalization of real symmetric matrices on automatic computers.6.2. 
Jacobi's method 123 ExampleChoosing we obtain, tan andAfter the first rotation, we haveHere we take and obtain tan andAfter the second rotation we haveand after 10 rotations we haveAfter rotations the diagonal elements are and while the remaining elements are equal to to decimals accuracy. The sum of the diagonal elements is and the product in good agreement with the exact characteristic equation:Generalization to Hermitian matrices, which are very important in modern physics, is quite natural. As has been proved before, to a given Hermitian matrix we can find a unitary matrix such that becomes a diagonal matrix. Apart from trivial factors, a two-dimensional unitary matrix has the formA two-dimensional Hermitian matrix124 Algebraic eigenvalue problems is transformed to diagonal form by wherePutting we separate the real and imaginary parts and then multiply the resulting equations, first by and then by and and finally add them together. Using well-known trigonometric formulas, we get(6.2.4) In principle we obtain from the first equation and then can be solved from the second. Rather arbitrarily we demand and hencewhereSince the remaining equation has the solutionwith and Now we want to choose according to in order to get as small a rotation as possible which impliesThe following explicit solution is now obtained (note that and cannot both be equal to because then would already be diagonal):(6.2.5) As usual the value of the arctan-function must be chosen between and6.3. Givens' method 125The element can now be writtenand consequently:(6.2.6) If we get and recover the result in Jacobi's method.This procedure can be used repeatedly on larger Hermitian matrices, where the unitary matrices differ from the unit matrix only in four places. In the places and we introduce the elements of our two-dimensional matrix. The product of the special matrices is a new unitary matrix approaching when is increased.Finally we mention that a normal matrix (defined through can always be diagonalized with a unitary matrix. The process can be performed following a technique suggested by Goldstine and Horwitz which is similar to the method just described for Hermitian matrices. The reduction of an arbitrary complex matrix to normal form can be accomplished through a method given by Patricia Eberlein In practice, both these processes are performed simultaneously.6.3. Givens' methodAgain we assume that the matrix is real and symmetric. In Givens' method we can distinguish among three different phases. The first phase is concerned with orthogonal transformations, giving as result a band matrix with unchanged characteristic equation. In the second phase a sequence of, functions is generated, and it is shown that it forms a Sturm sequence, the last member of which is the characteristic polynomial. With the aid of the sign changes in this sequence, we can directly state how many roots larger than the inserted value the characteristic equation has. By testing for a number of suitable values we can obtain all the roots. During the third phase, the eigenvectors are computed. The orthogonal transformations are performed in the following order. The elements and define a two-dimensional subspace, and we start by performing a rotation in this subspace. This rotation affects all elements in the second and third rows and in the second and third columns. 
However, the quantity defining the orthogonal matrixis now determined from the condition and not, as in Jacobi's method, by We have and The next rotation is performed in the (2, 4)-plane with the new126 Algebraic eigenvalue problemsdetermined from that is, tan that the element was changed during the preceding Now all elements in the second and fourth rows and in the second and fourth columns are changed, and it should be particularly observed that the element is not affected. In the same way, we make the elements equal to zero by rotations in the-planes.Now we pass to the elements and they are all set to zero by rotations in the planesDuring the first of these rotations, the elements in the third and fourth rows and in the third and fourth columns are changed, and we must examine what happens to the elements and which were made equal to zero earlier. We findFurther, we get and By now the procedure should be clear, and it is easily understood that we finally obtain a band matrix, that is, such a matrix that In this special case we have Now we put(6.3.1)has been obtained from by a series of orthogonal transformations,with In Chapter it was proved that and have the same eigenvalues and further that, if is an eigenvector of and an eigenvector of(both with the same eigenvalue), then we have Thus the problem has been reduced to the computation of eigenvalues and eigenvectors of the band matrixWe can suppose that all otherwise could be split into two determinants of lower order Now we form the following sequence of functions:(6.3.2)with and We find at once that which can be interpreted as the determinant of the-element in the matrix6.3. Givens' method 127 Analogously, we have which is the-minor ofBy induction, it is an easy matter to prove that is the characteristic polynomial.Next we shall examine the roots of the equation For we have the only root.For we observe that, Hence we have two real roots and with, for example,For we will use a method which can easily be generalized to an induction proof. Then we write and obtain from (6.3.2):Now it suffices to examine the sign of in a few suitable points:We see at once that the equation has three real roots and such thatIn general, if has the roots and the roots thenwhereBy successively putting and we find that has different signs in two arbitrary consecutive points. Hence has real roots, separated by the roots ofWe are now going to study the number of sign changes in the sequenceIt is evident that and Suppose that and are two such real numbers that in the closed interval Then obviously First we examine what happens if the equation has a root in the interval. From it follows for thatHence and have different signs, and clearly this is also true in an interval Suppose, for example, that then we may have the following combination of signs:Hence, the number of sign changes does not change when we pass through a root of When however, the situation is different.128 Algebraic eigenvalue problemsSuppose, for example, that& odd. Denoting the roots of by and the roots of by we haveThen we see that Now we let increase until it reaches the neighborhood of |where we find the following scheme:Hence Then we let increase again (now a sign change of may appear, but, as shown before, this does not affect until we reach the neighborhood of where we haveand hence Proceeding in the same way through all the rootswe infer that the number of sign changes decreases by one unit each time a root is passed. 
Hence we have proved: if $N(\lambda)$ is the number of eigenvalues of the matrix $B$ which are larger than $\lambda$, then

(6.3.3)  $N(\lambda) = V(\lambda).$

The sequence is called a Sturm sequence. The described technique makes it possible to compute all eigenvalues in a given interval ("telescope method").

For the third phase, computation of the eigenvectors, we shall follow J. H. Wilkinson. Let $\lambda$ be an exact eigenvalue of $B$. Thus we search for a vector $v$ such that $(B - \lambda I)v = 0$. Since this is a homogeneous system in $n$ variables, and since $\det(B - \lambda I) = 0$, we can obtain a nontrivial solution by choosing $n - 1$ equations and determining the components of $v$ (apart from a constant factor); the remaining equation must then be automatically satisfied. In practical work it turns out, even for quite well-behaved matrices, that the result to a large extent depends on which equation was excluded from the system. Essentially, we can say that the serious errors which appear on an unsuitable choice of equation to be excluded depend on numerical compensations; thus round-off errors achieve a dominant influence.

Let us assume that the $r$th equation is excluded, while the others are solved by elimination. The solution (supposed to be exact) satisfies the equations used for elimination but gives an error when inserted into the excluded equation. Actually, since we had to use an approximation $\tilde\lambda$ instead of the exact eigenvalue, and since constant factors may be omitted, the system we have solved can be written in a simpler way:

(6.3.5)  $(B - \tilde\lambda I)\,v = e_r,$

where $e_r$ is a column vector with the $r$th component equal to $1$ and the others equal to $0$. If the eigenvectors of $B$ are $v_1, v_2, \ldots, v_n$, the vector $e_r$ can be expressed as a linear combination, that is,

(6.3.6)  $e_r = \sum_{i=1}^{n} c_i v_i,$

and from (6.3.5) we get

(6.3.7)  $v = \sum_{i=1}^{n} \frac{c_i}{\lambda_i - \tilde\lambda}\, v_i.$

Now let $\tilde\lambda \to \lambda_1$, and we obtain

(6.3.8)  $v \approx \frac{c_1}{\lambda_1 - \tilde\lambda}\, v_1.$

Under the assumption that $c_1 \ne 0$, our solution approaches $v_1$ as $\tilde\lambda \to \lambda_1$ (apart from trivial factors). However, it may well happen that $c_1$ is of the same order of magnitude as the round-off errors (that is, the vector $e_r$ is almost orthogonal to $v_1$), and under such circumstances it is clear that the vector in (6.3.8) cannot be a good approximation of $v_1$. Wilkinson suggests that (6.3.5) be replaced by

(6.3.9)  $(B - \tilde\lambda I)\,v = b,$

where we have the vector $b$ at our disposal. This system is solved by Gaussian elimination, where it should be observed that the equations are permuted properly to make each pivot element as large as possible. The resulting triangular system is written

(6.3.10)  $Uv = c,$

where, as a rule, most of the coefficients of $U$ are zero. Since the $c_i$ have been obtained from the $b_i$, which we had at our disposal, we could as well choose the constants $c_i$ deliberately. It seems to be a reasonable choice to take all $c_i$ equal to $1$; no eigenvector should then be disregarded. Thus we choose

(6.3.11)  $c_1 = c_2 = \cdots = c_n = 1.$

The system is solved, as usual, by back-substitution, and last, the vector $v$ is normalized. Even on rather pathological matrices, good results have been obtained by Givens' method.
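Wilkinson's device is, in modern terms, inverse iteration. A sketch assuming NumPy follows; the right-hand side of ones plays the role of (6.3.11), NumPy's `solve` performs the pivoted elimination of (6.3.10), and the tiny guard shift (to keep the matrix invertible when $\tilde\lambda$ is nearly exact) is our own addition.

```python
import numpy as np

def inverse_iteration(B, lam, steps=3):
    """Eigenvector for an approximate eigenvalue lam, cf. (6.3.9)-(6.3.11)."""
    n = B.shape[0]
    M = B - lam * np.eye(n)
    M = M + np.eye(n) * 1e-12 * max(1.0, np.linalg.norm(B, np.inf))  # guard shift
    v = np.ones(n)                      # all constants equal to one
    for _ in range(steps):
        v = np.linalg.solve(M, v)       # LU with partial pivoting
        v = v / np.linalg.norm(v)
    return v
```

One or two steps normally suffice when `lam` comes from the bisection of the second phase.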
6.4. Householder's method

This method, also, has been designed for real, symmetric matrices. We shall essentially follow the presentation given by Wilkinson. The first step consists of reducing the given matrix $A$ to a band matrix. This is done by orthogonal transformations representing reflections, denoted by $P$, with the general structure

(6.4.1)  $P = I - 2ww^T.$

Here $w$ is a column vector such that

(6.4.2)  $w^Tw = 1.$

It is evident that $P$ is symmetric. Further, we have $P^2 = I - 4ww^T + 4w(w^Tw)w^T = I$; that is, $P$ is also orthogonal.

The matrix $P$ acting as an operator can be given a simple geometric interpretation. Let $P$ operate on a vector $x$ from the left: $Px = x - 2(w^Tx)w$. In Fig. 6.4 the line is perpendicular to the unit vector $w$ in the plane defined by $w$ and $x$; the distance from the endpoint of $x$ to the line is $w^Tx$, and the mapping $x \mapsto Px$ means a reflection in a plane perpendicular to $w$.

[Figure 6.4: reflection of a vector in the plane perpendicular to the unit vector $w$.]

Those vectors $w$ which will be used are constructed with the first components zero, or $w_r = (0, \ldots, 0, x_r, x_{r+1}, \ldots, x_n)^T$. With this choice we form $P_r = I - 2w_rw_r^T$; further, by (6.4.2) we have $x_r^2 + x_{r+1}^2 + \cdots + x_n^2 = 1$. Now put $A_1 = A$ and form successively

(6.4.3)  $A_{r+1} = P_rA_rP_r, \qquad r = 1, 2, \ldots, n-2.$

At the first transformation, we get zeros in the positions $(1,3), (1,4), \ldots, (1,n)$ and in the corresponding places in the first column. The final result will become a band matrix as in Givens' method. The matrix $A_r$ contains $n - r - 1$ elements in the $r$th row which must be reduced to zero by the transformation; this gives $n - r - 1$ equations for the $n - r$ nonzero elements of the vector, and further we have the condition that the sum of the squares must be one.

We carry through one step of the computation in an example with $n = 4$. The transformation $A_2 = P_1A_1P_1$ must produce zeros instead of $a_{13}$ and $a_{14}$. Obviously, the matrix $P_1$ has the following form:

$P_1 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1-2x_2^2 & -2x_2x_3 & -2x_2x_4 \\ 0 & -2x_2x_3 & 1-2x_3^2 & -2x_3x_4 \\ 0 & -2x_2x_4 & -2x_3x_4 & 1-2x_4^2 \end{pmatrix}.$

Since in the first row of $P_1$ only the first element is not zero, the $(1,3)$-element of $A_2$, for example, can become zero only if the corresponding element is zero already in $A_1P_1$. Putting $K = a_{12}x_2 + a_{13}x_3 + a_{14}x_4$, we find that the first row of $A_1P_1$ has the following elements:

$a_{11}, \quad a_{12} - 2Kx_2, \quad a_{13} - 2Kx_3, \quad a_{14} - 2Kx_4.$

Now we claim that

(6.4.4)  $a_{13} - 2Kx_3 = 0, \qquad a_{14} - 2Kx_4 = 0.$

Since we are performing an orthogonal transformation, the sum of the squares of the elements in a row is invariant, and hence, putting $\sigma^2 = a_{12}^2 + a_{13}^2 + a_{14}^2$, we obtain

(6.4.5)  $a_{12} - 2Kx_2 = \pm\sigma.$

Multiplying (6.4.5) by $x_2$ and the relations (6.4.4) by $x_3$ and $x_4$, and finally adding them together, we find that the sum of the first three terms is $K$, and further $K - 2K(x_2^2+x_3^2+x_4^2) = \pm\sigma x_2$. Hence

(6.4.6)  $K = \mp\sigma x_2.$

Inserting this into (6.4.5), we find that $x_2^2 = \tfrac{1}{2}\left(1 \mp a_{12}/\sigma\right)$, and from (6.4.4), $x_3 = a_{13}/(2K)$ and $x_4 = a_{14}/(2K)$. In the general case, two square roots have to be evaluated, one for $\sigma$ and one for $x_2$. Since we have $x_2$ in the denominator, we obtain the best accuracy if $x_2$ is large. This is accomplished by choosing a suitable sign for the square-root extraction for $\sigma$. Thus the quantities ought to be defined as follows:

(6.4.7)  $\sigma = \operatorname{sign}(a_{12})\sqrt{a_{12}^2 + a_{13}^2 + a_{14}^2}, \qquad x_2 = \left[\tfrac{1}{2}\left(1 + \frac{a_{12}}{\sigma}\right)\right]^{1/2}.$

The sign for this last square root is irrelevant, and we choose plus. Hence we obtain for $x_3$ and $x_4$:

(6.4.8)  $x_3 = \frac{a_{13}}{2\sigma x_2}, \qquad x_4 = \frac{a_{14}}{2\sigma x_2}.$

The end result is a band matrix whose eigenvalues and eigenvectors are computed exactly as in Givens' method. In order to get an eigenvector $u$ of $A$, an eigenvector $v$ of the band matrix has to be multiplied by the matrix $P_1P_2\cdots P_{n-2}$; this should be done by iteration:

(6.4.9)  $u = P_1\bigl(P_2(\cdots(P_{n-2}\,v))\bigr).$
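A compact sketch of the reduction, assuming NumPy; the sign choice mirrors (6.4.7)-(6.4.8), while the function name and the full-matrix updates (rather than the cheaper vectorized updates used in production codes) are ours.

```python
import numpy as np

def householder_tridiagonalize(A):
    """Reduce a real symmetric matrix to tridiagonal form by reflections
    P = I - 2 w w^T, cf. (6.4.1)-(6.4.8)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for r in range(n - 2):
        x = A[r, r + 1:]                               # row elements to condense
        norm = np.linalg.norm(x)
        if norm == 0.0:
            continue                                   # nothing to annihilate
        sigma = np.sign(x[0]) * norm if x[0] != 0 else norm
        w = np.zeros(n)
        w_lead = np.sqrt(0.5 * (1.0 + x[0] / sigma))   # cf. (6.4.7)
        w[r + 1] = w_lead
        w[r + 2:] = x[1:] / (2.0 * sigma * w_lead)     # cf. (6.4.8)
        P = np.eye(n) - 2.0 * np.outer(w, w)           # cf. (6.4.1)
        A = P @ A @ P
    return A
```

One can check that the produced `w` satisfies $w^Tw = 1$ exactly as required by (6.4.2), and that each step zeros one row and column outside the band.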
6.5. Lanczos' method

The reduction of real symmetric matrices to tridiagonal form can be accomplished through the methods devised by Givens and Householder. For arbitrary matrices a similar reduction can be performed by a technique suggested by Lanczos. In this method two systems of vectors are constructed, $\tilde x_1, \tilde x_2, \ldots$ and $y_1, y_2, \ldots$, which are biorthogonal; that is, for $i \ne j$ we have $\tilde x_i^Ty_j = 0$. The initial vectors $\tilde x_1$ and $y_1$ can be chosen arbitrarily, though in such a way that $\tilde x_1^Ty_1 \ne 0$. The new vectors are formed according to the rules

$y_{j+1} = Ay_j - \sum_{k=1}^{j} c_{jk}\,y_k, \qquad \tilde x_{j+1} = A^T\tilde x_j - \sum_{k=1}^{j} d_{jk}\,\tilde x_k.$

The coefficients are determined from the biorthogonality condition: demanding $\tilde x_i^Ty_{j+1} = 0$ for $i \le j$, we form $\tilde x_i^TAy_j = \sum_k c_{jk}\,\tilde x_i^Ty_k = c_{ji}\,\tilde x_i^Ty_i$, so that $c_{ji} = \tilde x_i^TAy_j / \tilde x_i^Ty_i$; analogously for the $d_{ji}$. Let us now consider the numerator in the expression for $c_{ji}$ when $i \le j - 2$: it equals $(A^T\tilde x_i)^Ty_j$, and since $A^T\tilde x_i$ is a combination of $\tilde x_1, \ldots, \tilde x_{i+1}$, it vanishes because of the biorthogonality. Hence we have $c_{ji} = 0$ for $i \le j - 2$, and similarly $d_{ji} = 0$ under the same condition. In this way the following simpler three-term formulas are obtained:

$y_{j+1} = Ay_j - a_j\,y_j - b_{j-1}\,y_{j-1}, \qquad \tilde x_{j+1} = A^T\tilde x_j - a_j\,\tilde x_j - \tilde b_{j-1}\,\tilde x_{j-1}.$

If the vectors $y_j$ are considered as columns in a matrix $Y$, and if further a tridiagonal matrix $J$ is formed from the coefficients $a_j$ and $b_j$, with ones in the remaining diagonal, then we can simply write $AY = YJ$, and $Y^{-1}AY = J$, provided the vectors are linearly independent. If similar matrices are formed from the vectors $\tilde x_j$ and from the coefficients $a_j$, $\tilde b_j$, we get the analogous relation for $A^T$.

Certain complications may arise; for example, some $\tilde x_{j+1}$ or $y_{j+1}$ may become zero, but it can also happen that $\tilde x_j^Ty_j = 0$ even if $\tilde x_j \ne 0$ and $y_j \ne 0$. The simplest way out is to choose other initial vectors, even if it is sometimes possible to get around the difficulties by modifying the formulas themselves.

Obviously, Lanczos' method can be used also with real symmetric or Hermitian matrices. Then one chooses just one sequence of vectors, which must form an orthogonal system. For closer details, particularly concerning the determination of the eigenvectors, Lanczos' paper should be consulted; a detailed discussion of the degenerate cases is given by Causey and Gregory.

Here we also mention one more method for tridiagonalization of arbitrary real matrices, first given by La Budde. Space limitations prevent us from a closer discussion, and instead we refer to the original paper.

6.6. Other methods

Among other interesting methods we mention the $LR$-method. Starting from a matrix $A_1 = A$, we split it into two triangular matrices, $A_1 = L_1R_1$ with $l_{ii} = 1$, and then we form $A_2 = R_1L_1$. Since $A_2 = R_1L_1 = L_1^{-1}A_1L_1$, the new matrix has the same eigenvalues as $A_1$. Then we treat $A_2$ in the same way as $A_1$, and so on, obtaining a sequence of matrices which in general converges toward an upper triangular matrix. If the eigenvalues are real, they will appear in the main diagonal. Even the case in which complex eigenvalues are present can be treated without serious complications. Closer details are given by the method's inventor, H. Rutishauser.

Here we shall also examine the more general eigenvalue problem $Ax = \lambda Bx$, where $A$ and $B$ are symmetric and, further, $B$ is positive definite. Then we can split $B$ according to $B = LL^T$, where $L$ is a lower triangular matrix. Hence $Ax = \lambda LL^Tx$ and $L^{-1}A(L^T)^{-1}y = \lambda y$, where $y = L^Tx$. Since $L^{-1}A(L^T)^{-1}$ is symmetric, the problem has been reduced to the usual type treated before.
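The Cholesky reduction at the end of the section is a three-line computation in practice. A minimal sketch assuming NumPy; the function name is ours.

```python
import numpy as np

def generalized_symmetric_eig(A, B):
    """Reduce A x = lambda B x (A, B symmetric, B positive definite) to an
    ordinary symmetric eigenproblem via B = L L^T, as described above."""
    L = np.linalg.cholesky(B)            # B = L @ L.T
    Linv = np.linalg.inv(L)
    C = Linv @ A @ Linv.T                # C = L^{-1} A L^{-T}, symmetric
    lam, Y = np.linalg.eigh(C)           # ordinary symmetric problem
    X = np.linalg.solve(L.T, Y)          # back-transform: x = L^{-T} y
    return lam, X
```

The columns of `X` satisfy $Ax = \lambda Bx$ up to round-off, since $y = L^Tx$ diagonalizes the transformed problem.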
6.7. Complex matrices

For computing eigenvalues and eigenvectors of arbitrary complex matrices (real nonsymmetric matrices also fall naturally into this group), we shall first discuss a triangularization method suggested by Lotkin and Greenstadt. The method depends on the lemma by Schur stating that for each square matrix $A$ there exists a unitary matrix $U$ such that $U^{-1}AU = T$, where $T$ is a (lower or upper) triangular matrix (see Section 3.7). In practical computation one tries to find $U$ as a product of essentially two-dimensional unitary matrices, using a procedure similar to that described for Hermitian matrices in Section 6.2. It is possible to give examples for which the method does not converge (the sum of the squares of the absolute values of the subdiagonal elements is not monotonically decreasing), but in practice convergence is obtained in many cases.

We start by examining the two-dimensional case and put down the matrix and the unitary rotation in the form (6.7.1). From $UU^* = I$ we get the unitarity conditions on the parameters. Further, we suppose that $U^{-1}AU = B$, and obtain the transformed elements (6.7.2). Clearly the trace is invariant, $b_{11} + b_{22} = a_{11} + a_{22}$. Claiming that the subdiagonal element of $B$ vanish, we find a quadratic equation for the rotation parameter with the solution (6.7.3). Here we conveniently choose the sign that makes the rotation as small as possible; the parameter is then obtained directly from the elements of $A$. Normally, we must take the square root of a complex number, and this can be done by the formula

$\sqrt{x + iy} = \pm\left(\sqrt{\frac{r+x}{2}} + i\,\operatorname{sign}(y)\sqrt{\frac{r-x}{2}}\right), \qquad r = \sqrt{x^2 + y^2}.$

When the rotation parameter has been determined, we get the elements of $U$ from (6.7.4).

Now we pass to the main problem and assume that $A$ is an arbitrary complex matrix. We choose that element below the main diagonal which is largest in absolute value.
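The complex square root quoted above can be coded directly from its real and imaginary parts; a small sketch assuming NumPy, with the function name our own.

```python
import numpy as np

def complex_sqrt(z):
    """Square root of a complex number via the real-arithmetic formula above."""
    x, y = z.real, z.imag
    r = np.hypot(x, y)                     # r = sqrt(x^2 + y^2)
    u = np.sqrt((r + x) / 2.0)
    v = np.copysign(np.sqrt((r - x) / 2.0), y)
    return complex(u, v)                   # the other root is its negative
```

For example, `complex_sqrt(complex(-4, 0))` returns `2j`, and squaring the result of `complex_sqrt(3 - 4j)` recovers `3 - 4j` to round-off.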
Linear and Multilinear Algebra, 2015. DOI: 10.1080/03081087.2015.1071311

An eigenvalue problem for even order tensors with its applications

Lu-Bin Cui (a), Chuan Chen (b), Wen Li (c) and Michael K. Ng (b)

(a) Henan Engineering Laboratory for Big Data Statistical Analysis and Optimal Control, School of Mathematics and Information Sciences, Henan Normal University, XinXiang, P.R. China; (b) Department of Mathematics, Hong Kong Baptist University, Hong Kong, China; (c) School of Mathematical Sciences, South China Normal University, Guangzhou, China

Communicated by R.-C. Li. (Received 6 August 2014; accepted 6 July 2015)

In this paper, we study an eigenvalue problem for even order tensors. Using the matrix unfolding of even order tensors, we can establish the relationship between a tensor eigenvalue problem and a multilevel matrix eigenvalue problem. By considering the higher order singular value decomposition of a tensor, we show that the higher order singular values are the square roots of the eigenvalues of the product of the tensor and its conjugate transpose. This result is similar to that in the matrix case. We also study an eigenvalue problem for Toeplitz/circulant tensors, and give lower and upper bounds for the eigenvalues of Toeplitz tensors. An application in image restoration is also discussed.

Keywords: tensors; eigenvalues; eigenvectors; higher order singular value decomposition; multilevel matrices; Toeplitz tensors; circulant tensors

AMS Subject Classifications: 15A18; 15A69

1. Introduction

A tensor is a multidimensional array. Let $\mathbb{C}$ be the complex field. An $m$th-order $n$-dimensional tensor $\mathcal{A}$ consisting of $n^m$ entries in $\mathbb{C}$ is denoted by

(1)  $\mathcal{A} = (a_{i_1,i_2,\ldots,i_m}), \qquad a_{i_1,i_2,\ldots,i_m} \in \mathbb{C}, \quad 1 \le i_k \le n, \quad k = 1,2,\ldots,m.$

In the following discussion, we also use $\mathcal{A}(i_1,i_2,\ldots,i_m)$ to denote the $(i_1,i_2,\ldots,i_m)$th entry of $\mathcal{A}$. The tensor eigenpair was introduced by Lim [1] and Qi [2] independently in 2005. At present, the tensor eigenvalue problem has become a hot topic because of its applications in diffusion tensor imaging, higher order Markov chains, data mining, etc.; see e.g. [1-13]. Below is the definition of eigenvalues of tensors.[2]

Definition 1.1  Let $\mathcal{A}$ be a real $m$th-order $n$-dimensional tensor. If $x \in \mathbb{C}^n\setminus\{0\}$ and $\lambda \in \mathbb{C}$ satisfy

(2)  $\mathcal{A}x^{m-1} = \lambda x^{[m-1]},$

then we call $\lambda$ an eigenvalue of $\mathcal{A}$, and $x$ its corresponding eigenvector. In particular, if $x$ is real, then $\lambda$ is also real. Here $x^T$ is the transpose of $x$,

$\mathcal{A}x^{m-1} := \Bigl(\sum_{i_2,\ldots,i_m=1}^{n} a_{i_1,i_2,\ldots,i_m}\, x_{i_2}\cdots x_{i_m}\Bigr)_{1\le i_1\le n},$

and $x^{[m-1]} = \bigl(x_i^{m-1}\bigr)_{1\le i\le n}$ with $x = (x_1,x_2,\ldots,x_n)^T$.

It is clear that when $m = 2$, the above definition is the same as that of eigenvalues and eigenvectors of real matrices. Hence, this tensor eigenvalue can be regarded as a generalization of matrix eigenvalues. According to Definition 1.1, a tensor eigenvalue problem is equivalent to solving a set of multivariate polynomials in the variables $x_1, x_2, \ldots, x_n$ and an unknown $\lambda$. In general, the tensor eigenvalue problem given by Definition 1.1 is NP-hard.[4]
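As a quick illustration of Definition 1.1, the following sketch (assuming NumPy; the helper names are ours, not the paper's) evaluates $\mathcal{A}x^{m-1}$ and the residual of (2) for a candidate eigenpair.

```python
import numpy as np

def tensor_map(A, x):
    """(A x^{m-1})_{i_1} = sum over i_2..i_m of a_{i_1 i_2 ... i_m} x_{i_2}...x_{i_m}."""
    y = A
    for _ in range(A.ndim - 1):
        y = y @ x              # contract the trailing index with x, m-1 times
    return y

def residual(A, lam, x):
    """Residual norm of A x^{m-1} = lambda x^{[m-1]}, cf. (2)."""
    m = A.ndim
    return np.linalg.norm(tensor_map(A, x) - lam * x ** (m - 1))
```

For $m = 2$ this reduces to the ordinary matrix eigenvalue residual $\|Ax - \lambda x\|$.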
However, it is interesting to note that there are some other ways to define tensor eigenvalues. For example, in computational mechanics [14-16] and signal processing [17], the eigenvalue of a fourth-order symmetric tensor $\mathcal{C}$ was introduced as follows in the study of the elasticity of isotropic materials:

(3)  $\mathcal{C}\cdot E = \Bigl(\sum_{k,l=1}^{n} C_{ijkl}\,E_{kl}\Bigr) = \lambda E,$

where $\mathcal{C}$ is a fourth-order $n$-dimensional symmetric tensor and $E$ is an $n\times n$ square matrix. Here, the symmetry means that $C_{ijkl} = C_{klij}$ holds for all $i,j,k,l \in \{1,2,\ldots,n\}$. Obviously, the eigenvalue problem given by (3) can be considered as a linear transformation by a fourth-order square tensor acting on a second-order square tensor (a matrix), which generalizes the idea of the matrix case. Recently, Qi [18] extended (3) to a $(2m)$th-order square tensor:

Definition 1.2 [17,18]  Let $\mathcal{A}$ be a complex $(2m)$th-order $n$-dimensional tensor. If $\mathcal{X}$ is a nonzero complex $m$th-order $n$-dimensional tensor and $\lambda \in \mathbb{C}$ satisfy

(4)  $\mathcal{A}\cdot\mathcal{X} = \lambda\mathcal{X},$

where

(5)  $(\mathcal{A}\cdot\mathcal{X})_{i_1,\ldots,i_m} = \sum_{j_1=1}^{n}\sum_{j_2=1}^{n}\cdots\sum_{j_m=1}^{n} a_{i_1,\ldots,i_m,j_1,\ldots,j_m}\, x_{j_1,\ldots,j_m}, \qquad 1\le i_k\le n,\ 1\le k\le m,$

then we call $\lambda$ and $\mathcal{X}$ an eigenvalue and an eigentensor of $\mathcal{A}$.

It is noted that if $m = 1$, then $\mathcal{A}$ is a square matrix and Definition 1.2 reduces to the matrix eigenvalue. It is also noted that Definition 1.2 is very different from the one in (2). From now on, unless otherwise specified, an eigenvalue means an eigenvalue given by Definition 1.2.
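The action (5) is a single tensor contraction, which NumPy expresses directly; a minimal sketch (names ours), with a toy check of (4) in the matrix case $m = 1$:

```python
import numpy as np

def tensor_dot(A, X):
    """The action (5): contract the last m indices of A with X."""
    m = X.ndim
    return np.tensordot(A, X, axes=m)

# Toy check of (4) for m = 1, where A is a matrix and X a vector:
A = np.array([[2.0, 1.0], [1.0, 2.0]])
lam, V = np.linalg.eigh(A)
X = V[:, 0]
assert np.allclose(tensor_dot(A, X), lam[0] * X)
```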
A natural question is: why study this tensor eigenvalue problem? In fact, for high-dimensional problems the data have an inherent tensor structure.

[Figure 1: A stack of brain MRI images. The first and second images in the first row are different, but the parts in the red circles in these two images are similar.]

The tensor structure means that the data are high dimensional, and the different slices of the data may be related to one another. For example, the human brain is a three-dimensional structure: any point in the brain can be localized on the x, y and z planes, and the brain can be cut on any of these planes, named the coronal, the horizontal and the sagittal plane. Figure 1 shows a stack of brain MRI images on the horizontal plane. The data of such a stack is high dimensional, but the slices of the brain are not independent. As we know, the brain is made up of many specialized regions; for example, thinking and voluntary movements are controlled by the telencephalon. If we want to reconstruct a part of the telencephalon, a single horizontal MRI slice is not enough; many slices are needed, and together they contain the information of that part, so they are related. If we process the MRI images slice by slice, we may lose some of the information carried by the tensor structure. It is therefore necessary to study this tensor eigenvalue problem.

As we know, the matrix unfolding of a tensor is a useful tool for studying tensor problems. For example, in [19] the authors used the matrix unfolding to solve multilinear systems in quantum mechanical models and high-dimensional PDEs. In this paper, we apply the matrix unfolding technique to study the eigenvalue problem. The singular values of a matrix $A$ are the square roots of the eigenvalues of $A^*A$. In multilinear algebra, the higher order singular value decomposition (HOSVD) [20] is a generalization of the SVD of a matrix. Naturally, one may ask: what is the relationship between the eigenvalue problem in Definition 1.2 and the HOSVD? One contribution of this paper is to establish this relationship. More precisely, we show that the singular values and the associated singular vectors of $\mathcal{A}$ are just the eigenvalues and the associated eigentensors of $\mathcal{A}^*\mathcal{A}$. Here $\mathcal{A}^*$ represents the conjugate transpose of $\mathcal{A}$, and the juxtaposition refers to the multiplication of two tensors; their definitions will be given in Section 2.

Another contribution of this paper is the study of eigenvalue problems for Toeplitz/circulant tensors, which can be applied to image processing.[21,22] In particular, we construct eigentensors to diagonalize circulant tensors and obtain their eigenvalues. For Toeplitz tensors, we present lower and upper bounds for the eigenvalues based on generating functions.

The remainder of the paper is organized as follows. In Section 2, we give the basic properties of the tensor eigenvalue problem and demonstrate the relationship between the tensor eigenvalue problem and the HOSVD. In Section 3, we study Toeplitz and circulant tensors, analyse their eigenvalues, and give an application in image restoration. Concluding remarks are given in Section 4.

2. Properties of tensor eigenvalues

2.1. Unfolding operations

Suppose that $\mathcal{A}$ is a $(2m)$th-order $n$-dimensional tensor. We can reorder $\mathcal{A}$ as a square matrix using the square matrix unfolding of tensors. In [23], Kofidis et al. employed the square matrix unfolding of tensors to study the problem of the best rank-one approximation of a super-symmetric tensor. Here, super-symmetry means that $\mathcal{A}_{i_1,i_2,\ldots,i_m} = \mathcal{A}_{i_1',i_2',\ldots,i_m'}$, where $(i_1',i_2',\ldots,i_m')$ is any permutation of $(i_1,i_2,\ldots,i_m)$, $i_k \in \{1,2,\ldots,n\}$.

Definition 2.1 [23]  Let $\mathcal{A}$ be a $(2m)$th-order $n$-dimensional tensor. The square matrix unfolding of $\mathcal{A}$ with an ordering $P$ is an $n^m$-by-$n^m$ matrix $A_P$ whose $(k,h)$th entry is given by

$A_P(k,h) = \mathcal{A}(i_1,i_2,\ldots,i_m,j_1,j_2,\ldots,j_m),$

with

$k = n^{m-1}(i_1'-1) + n^{m-2}(i_2'-1) + \cdots + n(i_{m-1}'-1) + i_m',$
$h = n^{m-1}(j_1'-1) + n^{m-2}(j_2'-1) + \cdots + n(j_{m-1}'-1) + j_m',$

where $P$ is the permutation matrix corresponding to the ordering $P$: $(i_1',i_2',\ldots,i_m') = (i_1,i_2,\ldots,i_m)P$ and $(j_1',j_2',\ldots,j_m') = (j_1,j_2,\ldots,j_m)P$.

Let us consider a simple example of the square matrix unfolding with the natural ordering $I$; the permutation matrix $P$ is then just the identity matrix. Suppose that $\mathcal{A} = (a_{i,j,k,l})$ is a 4th-order three-dimensional tensor. The square matrix unfolding of $\mathcal{A}$ with the natural ordering $I$ is the $3^2$-by-$3^2$ matrix

(6)  $A_I(k,h) = a_{i_1,i_2,j_1,j_2}, \qquad k = 3(i_1-1)+i_2, \quad h = 3(j_1-1)+j_2.$

Since $\mathcal{A}$ is a 4th-order tensor, we have one other square matrix unfolding, whose permutation matrix is

$P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$

We note that $(i_1',i_2') = (i_1,i_2)P = (i_2,i_1)$. The corresponding square matrix unfolding of $\mathcal{A}$ is the $3^2$-by-$3^2$ matrix

(7)  $A_P(k,h) = a_{i_1,i_2,j_1,j_2}, \qquad k = 3(i_2-1)+i_1, \quad h = 3(j_2-1)+j_1.$

Remark 1  In Definition 1.2, the tensor is a $(2m)$th-order $n$-dimensional tensor. We may generalize this definition to a $(2m)$th-order $n_1\times n_2\times\cdots\times n_m\times n_1'\times n_2'\times\cdots\times n_m'$-dimensional tensor, where $n_i' = n_i$, $i = 1,\ldots,m$. In this paper, we study the eigenvalue problem of a $(2m)$th-order $n$-dimensional tensor, but the results can easily be generalized to that setting.
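In NumPy, the natural-ordering unfolding of Definition 2.1 is just a reshape, because the index $k = n^{m-1}(i_1-1)+\cdots+i_m$ is exactly row-major (C-order) linearization; other orderings first permute the axes. A minimal sketch (names ours):

```python
import numpy as np

def square_unfold(A):
    """Natural-ordering square matrix unfolding of Definition 2.1."""
    m = A.ndim // 2
    n = A.shape[0]
    return A.reshape(n ** m, n ** m)       # C order matches k and h above

def square_unfold_ordered(A, perm):
    """Unfolding under an ordering P, given as a permutation of range(m);
    the same permutation is applied to the row and column index groups."""
    m = A.ndim // 2
    axes = list(perm) + [p + m for p in perm]
    return square_unfold(np.transpose(A, axes))
```

For the 4th-order example above, `square_unfold_ordered(A, [1, 0])` produces the matrix of (7).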
Given two different orderings $P$ and $P'$, it is interesting to note that $A_P$ and $A_{P'}$ are similar via a permutation matrix, called a perfect shuffle permutation.[24]

Proposition 2.2  Suppose $P$ and $P'$ are two different orderings. Then there exists a permutation matrix $\Pi_{P,P'}$ such that

(8)  $\Pi_{P,P'}\, A_P\, \Pi_{P,P'}^T = A_{P'}.$

For example, (6) and (7) are the two square matrix unfoldings of $\mathcal{A}$ with two different orderings, and it is easy to check that the corresponding perfect shuffle permutation matrix $\Pi_{P,I}$ satisfies $\Pi_{P,I}\,A_P\,\Pi_{P,I}^T = A_I$. Without loss of generality, we can assume that the natural ordering $I$ is used in the square matrix unfolding of tensors in the following discussion; for simplicity, we denote $A_I$ by $A$.

We remark that the square matrix unfolding of a $(2m)$th-order $n$-dimensional tensor is a multilevel matrix with $m$ levels. More precisely, at the first level, $A$ is an $n$-by-$n$ block matrix with $n^{m-1}$-by-$n^{m-1}$ blocks:

(9)  $A = A^{(1)} = \begin{pmatrix} A^{(2)}(1,1) & A^{(2)}(1,2) & \cdots & A^{(2)}(1,n) \\ A^{(2)}(2,1) & A^{(2)}(2,2) & \cdots & A^{(2)}(2,n) \\ \vdots & \vdots & \ddots & \vdots \\ A^{(2)}(n,1) & A^{(2)}(n,2) & \cdots & A^{(2)}(n,n) \end{pmatrix},$

where the $(i,j)$th block $A^{(2)}(i,j)$ is a second-level matrix, itself an $n$-by-$n$ block matrix with $n^{m-2}$-by-$n^{m-2}$ blocks:

$A^{(2)}(i,j) = \begin{pmatrix} A^{(3)}(i,1,j,1) & \cdots & A^{(3)}(i,1,j,n) \\ \vdots & \ddots & \vdots \\ A^{(3)}(i,n,j,1) & \cdots & A^{(3)}(i,n,j,n) \end{pmatrix}, \qquad 1\le i,j\le n.$

In general, the $\ell$th-level matrix is an $n$-by-$n$ block matrix with $n^{m-\ell}$-by-$n^{m-\ell}$ blocks given by

$A^{(\ell)}(i_1,\ldots,i_{\ell-1},j_1,\ldots,j_{\ell-1}) = \Bigl(A^{(\ell+1)}(i_1,\ldots,i_{\ell-1},s,\,j_1,\ldots,j_{\ell-1},t)\Bigr)_{s,t=1}^{n}, \qquad 2\le\ell\le m.$

It is clear that when $\ell = m$, $A^{(m+1)}(i_1,\ldots,i_{m-1},j_1,\ldots,j_{m-1})$ is an $n$-by-$n$ matrix. We will discuss multilevel Toeplitz and circulant matrices in Section 3.

To change the new tensor eigenvalue problem into a multilevel matrix eigenvalue problem, we also need to change the eigentensor into a column vector.

Definition 2.3  Let $\mathcal{X}$ be an $m$th-order $n$-dimensional tensor. The vectorization of $\mathcal{X}$ with an ordering $P$ is an $n^m$-vector $x_P$ whose $j$th entry is given by

$x_P(j) = \mathcal{X}_{i_1,i_2,\ldots,i_m}, \qquad 1\le i_k\le n,\ 1\le k\le m,$

with $j = \sum_{k=1}^{m-1} n^{m-k}(i_k'-1) + i_m'$, where $P$ is the permutation matrix corresponding to the ordering $P$: $(i_1',i_2',\ldots,i_m') = (i_1,i_2,\ldots,i_m)P$.

Using the same ordering $P$ on $\mathcal{A}$ and $\mathcal{X}$, we have the following characterization of the tensor eigenvalue problem.

Proposition 2.4  The tensor eigenvalue problem in (4) is equivalent to the matrix eigenvalue problem

$A_P\,x_P = \lambda\, x_P.$
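Proposition 2.4 makes the problem computable with an ordinary dense eigensolver; a sketch assuming NumPy (names ours), where each eigenvector is reshaped back into an $m$th-order eigentensor:

```python
import numpy as np

def tensor_eigenpairs(A):
    """Solve (4) via Proposition 2.4: eigendecompose the natural unfolding,
    then reshape each eigenvector into an m-th order eigentensor."""
    m = A.ndim // 2
    n = A.shape[0]
    lam, V = np.linalg.eig(A.reshape(n ** m, n ** m))
    tensors = [V[:, i].reshape((n,) * m) for i in range(V.shape[1])]
    return lam, tensors
```

Each returned pair satisfies (4) up to round-off, as can be checked with the `tensor_dot` sketch given after Definition 1.2.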
problem in Definition 1.2is solvable and computable.2.2.Relationship with HOSVDIn this subsection,we will establish the relationship between the proposed tensor eigen-value problem and HOSVD [20]of a tensor.In numerical multilinear algebra,there are many applications for HOSVD [20].The computational procedure of HOSVD involves the calculation of singular value decomposition of matrices with respect to tensor unfolding at different indices.The HOSVD of A is given byA =S ×1U 1×2U 2×3···×2m U 2m ,(10)where S is a (2m )th-order n -dimensional all-orthogonal and ordering tensor and U k are n -by-n unitary matrices.Here,the multiplication ×k of a tensor A with a matrix U k is a (2m )th-order n -dimensional tensor given by(A ×k U k )i 1,...,i k −1,j k ,i k +1,...,i 2m =n i k =1a i 1,...,i k −1,i k ,i k +1,...,i 2m U k (j k ,i k ),for 1≤i l ,j k ≤n and 1≤l ≤2m .Next,we will show that the HOSVD of A can provide the information for the eigenvalues and their associated eigentensors for the multiplication of A and its conjugate transpose.Let us first define the conjugate transpose of a tensor.Definition 2.6Let A be a (2m )th-order n -dimensional tensor.A ∗is called the conjugate transpose of A where its entry is given by a j 1,...,j m ,i 1,...,i m for 1≤i k ,j k ≤n and 1≤k ≤m .A is called Hermitian ifa i 1,...,i m ,j 1,...,j m =a j 1,...,j m ,i 1,...,i m ,1≤i k ,j k ≤n ,1≤k ≤m ,i.e.A =A ∗.D o w n l o a d e d b y [U n i v e r s i t y o f H o n g K o n g L i b r a r i e s ] a t 03:04 07 A u g u s t 2015The contraction product of two (2m )th-order n -dimensional tensors (the multiplication of two square tensors)can be defined as follows:(A B )i 1,...,i m ,j 1,...,j m =n k 1,...,k m =1a i 1,...,i m ,k 1,...,k mb k 1,...,k m ,j 1,...,j m ,(11)for 1≤i l ,j l ≤n and 1≤l ≤m .Indeed,the contraction product of two square tensorscan be expressed in terms of the multiplication of two multilevel matrices.Also we have (A B )(:,...,:,j 1,...,j m )=A ·B (:,...,:,j 1,...,j m ),1≤j k ≤n ,1≤k ≤m .(12)Proposition 2.7Let A and B be (2m )th-order n-dimensional tensors.If their corre-sponding matrix unfolding are A P and B P under the ordering P,then the multilevel matrix of A B under the ordering P is equal to A P B P .Proof Using Proposition 2.2,it is sufficient to consider the natural ordering.The (i ,j )entry of AB isn mk =1A (i ,k )B (k ,j )=n k 1,...,k m =1a i 1,...,i m ,k 1,...,k mb k 1,...,k m ,j 1,...,j m .Therefore,the multilevel matrix AB is the square matrix unfolding of A B under thenatural ordering. 
Because $\mathcal{A}$ has a HOSVD given in (10), we can make use of the decomposition to express the multilevel matrix $A$ corresponding to $\mathcal{A}$ under the natural ordering:

$A = (U_1\otimes U_2\otimes\cdots\otimes U_m)\,S\,\bigl(U_{m+1}^T\otimes U_{m+2}^T\otimes\cdots\otimes U_{2m}^T\bigr),$

where $S$ is the square matrix unfolding of $\mathcal{S}$ under the natural ordering. This implies that

(13)  $A^*A = (\bar U_{m+1}\otimes\bar U_{m+2}\otimes\cdots\otimes\bar U_{2m})\,S^*S\,\bigl(U_{m+1}^T\otimes U_{m+2}^T\otimes\cdots\otimes U_{2m}^T\bigr),$

where $\bar U_k$ is the matrix whose entries are the complex conjugates of the entries of $U_k$. We make use of the singular value decomposition of $S$, i.e. $S = QDW^*$, where $Q$ and $W$ are unitary matrices and $D$ is a real non-negative diagonal matrix. Therefore, we have

(14)  $S^*S = WD^2W^*.$

By substituting (14) into (13), we obtain

(15)  $A^*A = (\bar U_{m+1}\otimes\bar U_{m+2}\otimes\cdots\otimes\bar U_{2m})\,WD^2W^*\,\bigl(U_{m+1}^T\otimes U_{m+2}^T\otimes\cdots\otimes U_{2m}^T\bigr).$

It is clear that the eigenvectors of $A^*A$ are given by the unitary matrix

$V = (\bar U_{m+1}\otimes\bar U_{m+2}\otimes\cdots\otimes\bar U_{2m})\,W,$

or the eigentensors of $\mathcal{A}^*\mathcal{A}$ are just given by

$\mathcal{V} = \mathcal{W}\times_1\bar U_{m+1}\times_2\bar U_{m+2}\times_3\cdots\times_m\bar U_{2m},$

where $\mathcal{V}$ and $\mathcal{W}$ are $(2m)$th-order $n$-dimensional tensors whose square matrix unfoldings are equal to $V$ and $W$, respectively. Correspondingly, the eigenvalues of $A^*A$ are given by the eigenvalues $D^2$ of $S^*S$ (or $\mathcal{S}^*\mathcal{S}$), i.e. the squares of the singular values $D$ of $S$. In other words, the square root of an eigenvalue of $\mathcal{A}^*\mathcal{A}$ is just a singular value of the square matrix unfolding $S$ of the core tensor $\mathcal{S}$ of $\mathcal{A}$. We thus obtain the eigen-decomposition of $\mathcal{A}^*\mathcal{A}$:

$(\mathcal{A}^*\mathcal{A})\,\mathcal{V} = \mathcal{V}\,\mathcal{D},$

where $\mathcal{D}$ is a $(2m)$th-order $n$-dimensional diagonal tensor with entries

$\mathcal{D}_{i_1,\ldots,i_m,j_1,\ldots,j_m} = \begin{cases} D^2_{k,k}, & k = \sum_{l=1}^{m-1}(i_l-1)n^{m-l} + i_m, \quad i_l = j_l,\ 1\le l\le m,\\ 0, & \text{otherwise}. \end{cases}$
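The statement can be verified numerically without computing a full HOSVD, because the unitary Kronecker factors in (13)-(15) leave the singular values of the unfolding unchanged; a sketch assuming NumPy (test data and names ours):

```python
import numpy as np

m, n = 2, 3
A = np.random.rand(*(n,) * (2 * m)) + 1j * np.random.rand(*(n,) * (2 * m))
N = n ** m
Amat = A.reshape(N, N)                         # natural-ordering unfolding

# A* has entries conj(a_{j..i..}); its unfolding is the conjugate transpose.
Astar = np.conj(np.transpose(A, list(range(m, 2 * m)) + list(range(m))))
prod = np.tensordot(Astar, A, axes=m)          # contraction product A* A

eigs = np.sort(np.linalg.eigvalsh(prod.reshape(N, N)))
svals = np.sort(np.linalg.svd(Amat, compute_uv=False) ** 2)
assert np.allclose(eigs, svals)                # eigenvalues = squared singular values
```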
Example 1  Let $\mathcal{A}$ be a 4th-order three-dimensional tensor with

A(1,:,1,:) = [-0.1050 -0.1447 0.1417; 0.0592 0.0821 0.0888; 0.1414 0.1262 0.0567],
A(1,:,2,:) = [-0.2156 -0.0306 0.2127; 0.2208 0.0832 0.0224; 0.3945 0.2600 -0.0458],
A(1,:,3,:) = [0.2830 -0.8334 -0.0230; -0.6153 0.1710 0.4950; -0.8596 -0.3439 0.5920],
A(2,:,1,:) = [0.2631 -0.4137 -0.4889; 0.8970 0.1507 0.2317; -0.0113 0.3217 0.5491],
A(2,:,2,:) = [0.2353 -0.6650 -0.7135; 1.3066 0.4597 0.4064; -0.1215 0.5602 0.8921],
A(2,:,3,:) = [0.6910 0.2683 -0.0154; 0.0391 -1.0960 -0.3063; 0.4814 -0.4065 -0.3998],
A(3,:,1,:) = [-0.1438 -0.2205 0.0344; -0.0674 0.2719 0.3131; 0.0330 0.2564 0.2619],
A(3,:,2,:) = [-0.3521 -0.2077 0.0935; -0.0886 0.3047 0.4961; 0.1673 0.4424 0.3168],
A(3,:,3,:) = [0.6487 -0.5308 -0.1976; -0.0471 0.4314 -0.1700; -0.5455 -0.3052 0.3088].

The unitary matrices in the HOSVD of $\mathcal{A}$ are given by

U1 = [-0.5875 0.4445 -0.6762; -0.4185 -0.8821 -0.2163; -0.6926 0.1559 0.7043],
U2 = [-0.2879 -0.8442 0.4522; -0.9295 0.3601 0.0804; -0.2307 -0.3971 -0.8883],
U3 = [-0.5579 -0.1280 -0.8200; -0.7333 -0.3866 0.5593; -0.3885 0.9133 0.1218],
U4 = [-0.5798 -0.5521 0.5992; -0.2881 0.8268 0.4830; -0.7621 0.1074 -0.6385].

The core tensor $\mathcal{S}$ of $\mathcal{A}$ has the entries

S(1,:,1,:) = [0.9172 -0.0540 -0.0119; -0.2858 0.5308 0.3371; 0.7572 -0.7792 0.1622],
S(1,:,2,:) = [0.6020 -0.2290 -0.4427; -0.2630 -0.9133 0.1067; 0.6541 0.1524 -0.9619],
S(2,:,1,:) = [0.7537 0.9340 -0.7943; -0.3804 0.1299 -0.3112; 0.5678 -0.5688 0.5285],
S(2,:,2,:) = [0.6892 -0.8258 0.0046; -0.7482 0.5383 0.7749; 0.4505 -0.9961 0.8173],
S(3,:,1,:) = [0.0759 0.4694 -0.1656; 0 0 0; 0 0 0],
S(3,:,2,:) = [0.0838 0.0782 -0.8687; 0 0 0; 0 0 0],
S(1,:,3,:) = S(2,:,3,:) = S(3,:,3,:) = 0 (the 3-by-3 zero matrix).

The square matrix unfolding of $\mathcal{S}$ with the natural ordering is given by

S = [0.9172 -0.0540 -0.0119 0.6020 -0.2290 -0.4427 0 0 0;
     -0.2858 0.5308 0.3371 -0.2630 -0.9133 0.1067 0 0 0;
     0.7572 -0.7792 0.1622 0.6541 0.1524 -0.9619 0 0 0;
     0.7537 0.9340 -0.7943 0.6892 -0.8258 0.0046 0 0 0;
     -0.3804 0.1299 -0.3112 -0.7482 0.5383 0.7749 0 0 0;
     0.5678 -0.5688 0.5285 0.4505 -0.9961 0.8173 0 0 0;
     0.0759 0.4694 -0.1656 0.0838 0.0782 -0.8687 0 0 0;
     0 0 0 0 0 0 0 0 0;
     0 0 0 0 0 0 0 0 0].

Then we obtain $A = (U_1\otimes U_2)\,S\,(U_3^T\otimes U_4^T)$. Next, we find a unitary matrix $W$ such that $W^*S^*SW = \Sigma^2$ is a diagonal matrix, where

W = [-0.6130 0.0214 0.0793 -0.4162 0.6646 0.0505 0 0 0;
     0.0598 0.3451 -0.7759 0.1160 0.1726 0.4817 0 0 0;
     0.0105 -0.0074 0.5060 0.5747 0.2650 0.5858 0 0 0;
     -0.5777 -0.0002 0.0490 -0.1615 -0.6725 0.4307 0 0 0;
     0.3679 -0.6915 -0.0601 -0.4354 0.0630 0.4352 0 0 0;
     0.3892 0.6343 0.3599 -0.5170 -0.0447 0.2177 0 0 0;
     0 0 0 0 0 0 1 0 0;
     0 0 0 0 0 0 0 1 0;
     0 0 0 0 0 0 0 0 1].

The matrix $V$ is given by $V = (U_3\otimes U_4)\,W$:

V = [-0.2301 0.0182 -0.4107 -0.2941 0.1375 0.0148 -0.4913 0.4527 0.4754;
     -0.2132 -0.1198 0.2202 -0.2032 -0.0729 -0.4156 -0.3961 -0.6780 0.2362;
     -0.2901 0.0471 0.2955 -0.0312 0.2965 0.2551 0.5235 -0.0881 0.6249;
     -0.3821 -0.1425 -0.5880 -0.3919 0.1091 0.0980 0.3351 -0.3088 -0.3243;
     -0.4241 -0.0995 0.2655 -0.1442 -0.1448 -0.6207 0.2701 0.4624 -0.1611;
     -0.4318 0.1665 0.4481 -0.1298 0.2701 0.4271 -0.3571 0.0601 -0.4262;
     0.2057 0.7764 -0.0650 -0.1806 0.4249 -0.3501 0.0730 -0.0672 -0.0706;
     0.5118 -0.3494 0.2637 -0.7061 0.1740 0.0523 0.0588 0.1007 -0.0351;
     0.0298 -0.4475 -0.0685 0.3857 0.7557 -0.2439 -0.0778 0.0131 -0.0928],

and

$\Sigma^2 = \operatorname{diag}(6.0484,\ 3.8184,\ 3.0304,\ 1.1585,\ 0.0968,\ 0.0240,\ 0,\ 0,\ 0).$

Therefore, the eigenvalues of $\mathcal{A}^*\mathcal{A}$ are 6.0484, 3.8184, 3.0304, 1.1585, 0.0968, 0.0240, 0, 0, 0, and the eigentensors are given by $\mathcal{V}$, whose entries can be constructed correspondingly from the columns of $V$.

3. Toeplitz and circulant tensors

In this section, we study the eigenvalue problem for specially structured tensors: Toeplitz and circulant tensors.

Definition 3.1  A $(2m)$th-order $n$-dimensional tensor $\mathcal{T} = (t_{i_1,\ldots,i_m,j_1,\ldots,j_m})$ is called a Toeplitz tensor if

(16)  $t_{i_1,\ldots,i_m,j_1,\ldots,j_m} = r_{j_1-i_1,\ldots,j_m-i_m}, \qquad 1\le i_k,j_k\le n,\ 1\le k\le m,$

where $\mathcal{R} = (r_{i_1,\ldots,i_m})$ is an $m$th-order $(2n-1)$-dimensional tensor (indexed from $1-n$ to $n-1$).
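Building a Toeplitz tensor from its generator $\mathcal{R}$ is a direct translation of (16); a sketch assuming NumPy, where we shift the generator index by $n-1$ so that it is stored with non-negative offsets (names ours):

```python
import numpy as np

def toeplitz_tensor(R):
    """Build the (2m)th-order Toeplitz tensor of (16) from an m-th order
    generator R of dimension 2n-1; R[d] stores r_{d-(n-1)}, so that
    t[i.., j..] = R[j1-i1+n-1, ..., jm-im+n-1]."""
    m = R.ndim
    n = (R.shape[0] + 1) // 2
    T = np.empty((n,) * (2 * m))
    for idx in np.ndindex(*T.shape):
        i, j = idx[:m], idx[m:]
        T[idx] = R[tuple(j[k] - i[k] + n - 1 for k in range(m))]
    return T
```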
1)=⎛⎜⎜⎜⎜⎝T (3)(k 1,0)T (3)(k 1,1)···T (3)(k 1,n −1)T (3)(k 1,−1)T (3)(k 1,0)···T (3)(k 1,n −2)............T (3)(k 1,1−n )T (3)(k 1,2−n )···T (3)(k 1,0)⎞⎟⎟⎟⎟⎠,1−n ≤k 1≤n −1.In general,the th-level matrix is an n -by-n block matrix with n m − -by-n m − blocks given byT ( )(k 1,k 2,...,k −1)=⎛⎜⎜⎜⎜⎝T ( +1)(k 1,k 2,...,k −1,0)T ( +1)(k 1,k 2,...,k −1,1)···T ( +1)(k 1,k 2,...,k −1,n −1)T ( +1)(k 1,k 2,...,k −1,−1)T ( +1)(k 1,k 2,...,k −1,0)···T ( +1)(k 1,k 2,...,k −1,n −2)............T ( +1)(k 1,k 2,...,k −1,1−n )T ( +1)(k 1,k 2,...,k −1,2−n )···T ( +1)(k 1,k 2,...,k −1,0)⎞⎟⎟⎟⎟⎠,D o w n l o a d e d b y [U n i v e r s i t y o f H o n g K o n g L i b r a r i e s ] a t 03:04 07 A u g u s t 2015。