

Glossary of terms (16496)

1. Microvillus: a fine, finger-like projection of the cell membrane and cytoplasm on the free surface of an epithelial cell. Its core contains microfilaments. Microvilli enlarge the cell's surface area and aid the cell's absorptive function.

2. Cilium: a slender projection extending from the free surface of an epithelial cell. Its core contains microtubules in the "9+2" arrangement (nine peripheral doublets around a central pair). Cilia beat in a coordinated direction and serve a cleansing and protective role.

3. Isogenous group: chondrocytes in the deeper zones of cartilage often lie in clusters. Because the cells of such a cluster arise by division from a single chondrocyte, the cluster is called an isogenous group.

4. Osteon (Haversian system): the main load-bearing and nutritional unit of the diaphysis of a long bone, located between the outer and inner circumferential lamellae. An osteon is a long cylinder whose axis is a central canal containing blood vessels, nerves and endosteum. Around the canal lie multiple concentric lamellae of bone; osteocytes sit within and between the lamellae and are interconnected by canaliculi.

5. Sarcomere: the segment of a myofibril between two adjacent Z lines, consisting of 1/2 I band + A band + 1/2 I band. The sarcomere is the basic structural unit of contraction and relaxation in skeletal and cardiac muscle.

6. Neuron: also called a nerve cell; the structural and functional unit of the nervous system, divided into a cell body and processes.

7. Synapse: a specialized cell junction between two neurons, or between a neuron and a non-neuronal cell. Transmission across the synapse passes excitatory or inhibitory signals between cells.

8. Lymphoid nodule (lymphoid follicle): a spherical or ovoid mass of densely packed lymphoid tissue, composed mainly of B cells, with a well-defined border. After antigenic stimulation the nodule enlarges and a pale-staining germinal center appears, containing many dividing B cells together with macrophages, follicular dendritic cells and Th cells. A nodule with a germinal center is a secondary lymphoid nodule; one without is a primary lymphoid nodule.

9. Blood-thymus barrier: the barrier structure between the blood and the thymic cortex. It consists of five layers: (1) continuous capillary endothelium, with tight junctions between endothelial cells; (2) the continuous basal lamina surrounding the endothelium; (3) the perivascular space, which contains macrophages; (4) the basal lamina of the thymic epithelial cells; (5) a continuous layer of thymic epithelial cells.

gotty parameters

GoTTY is a command-line tool, usually run on a server, that lets you reach and operate the server's terminal over the network by serving the TTY as a web page.

Commonly used GoTTY parameters:

- `-a, --address`: IP address to listen on (default: "0.0.0.0", i.e. open on all interfaces); environment variable `$GOTTY_ADDRESS`.
- `-p, --port`: port to listen on (default: 8080); environment variable `$GOTTY_PORT`.
- `-m, --path`: base path (default: "/"); environment variable `$GOTTY_PATH`.
- `-w, --permit-write`: allow clients to write to the TTY (default: false; mind the security implications); environment variable `$GOTTY_PERMIT_WRITE`.
- `-c, --credential`: credential for basic authentication (e.g. username:password; disabled by default); environment variable `$GOTTY_CREDENTIAL`.
- `-r, --random-url`: append a random string to the URL (default: false; enabling it is recommended); environment variable `$GOTTY_RANDOM_URL`.
- `--random-url-length`: length of the random URL string (default: 8); environment variable `$GOTTY_RANDOM_URL_LENGTH`.

These parameters can be adjusted to balance functionality against security. When using GoTTY, read the documentation or reference manual carefully to make sure it is configured correctly.
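As a sketch of how the parameters above combine (the flag values, the credentials, and the `bash` command here are illustrative assumptions, not recommendations):

```shell
# Hypothetical invocation: bind to localhost only, require basic auth,
# randomize the URL, and allow keyboard input to the shared shell.
GOTTY_ADDRESS=127.0.0.1   # overrides the wide-open default 0.0.0.0
GOTTY_PORT=8080
CMD="gotty --address $GOTTY_ADDRESS --port $GOTTY_PORT \
--permit-write --random-url --credential admin:secret bash"
echo "$CMD"
```

In practice, prefer setting `$GOTTY_CREDENTIAL` in the environment rather than passing the credential on the command line, where it would be visible to `ps`.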

A doggerel for the first 100 digits of pi

The first 100 digits:
π = 3.14159 26535 89793 23846
26433 83279 50288 41971
69399 37510 58209 74944
59230 78164 06286 20899
86280 34825 34211 70679

The little story below uses Mandarin homophones for the digits to memorize the first 100 decimal places. Each phrase sounds like the run of digits in parentheses; the puns do not survive translation, so the Chinese is kept.

First, picture a drinker feasting wildly at a mountain temple and "drinking himself dead" in a gully [first 30 digits]:
山巅一寺一壶酒 (3.14159, "on the mountain top, a temple and a jug of wine"), 尔乐苦煞吾 (26535), 把酒吃 (897), 酒杀尔 (932), 杀不死 (384), 乐而乐 (626), 死了算罢了 (43383), 儿弃沟 (279).

Next, the feelings of the "dead" man's father on hearing of his son's "death" [15 digits]:
吾疼儿 (502), 白白死已够凄矣 (8841971), 留给山沟沟 (69399).

Then the father searching the gully for his son [15 digits]:
山拐我腰痛 (37510), 我怕儿冻久 (58209), 凄事久思思 (74944).

Finally the father finds his son in the gully, revives him, and the son mends his ways [last 40 digits]:
吾救儿 (592), 山洞拐 (307), 不宜留 (816). 四邻乐 (406), 儿不乐 (286), 儿疼爸久久 (20899). 爸乐儿不懂 (86280). "三思吧 (348)!" 儿悟 (25). 三思而依依 (34211), 妻等乐其久 (70679).

Moutai purchase quiz answer template

The full template comprises four samples for the reader's reference. Sample 1:

I. Basic company information
1. Kweichow Moutai is a Chinese liquor company with a long history and very high name recognition, headquartered in Maotai, Guizhou.
2. Its main product is Moutai liquor, one of China's best-known baijiu brands.

II. Company history
1. The company was founded in 1951 and is one of China's earliest liquor enterprises.
2. Over years of development, Moutai has built a strong reputation on international markets with its distinctive brewing process and high product quality.

III. Product characteristics
1. Moutai is made from quality ingredients using a traditional brewing process; its taste is pure, with a long finish.
2. Its distinctive flavor and aroma are much loved by consumers.

IV. Development strategy
1. The company is committed to building a premium baijiu brand and to raising brand awareness and market share at home and abroad.
2. It keeps increasing R&D investment and launching new products to meet the needs of different consumer segments.

V. Market performance
1. Domestic sales are strong, with annual sales volume growing steadily.
2. The company has also made some headway internationally, with products sold overseas.

VI. Future plans
1. Moutai will maintain its premium brand image and keep improving product quality and service.
2. It will strengthen cooperation with distributors, expand its sales network, and open up new markets.

Sample questions:
1. When was the Moutai company founded? A. 1951 B. 1960 C. 1970
2. What is Moutai's main product? A. Baijiu B. Beer C. Red wine
3. How are Moutai's domestic sales? A. Growing steadily B. Declining C. Fluctuating
4. What is the company's development strategy? A. Raising brand awareness B. Lowering product quality C. Shrinking the sales network

The above is a template for the Moutai purchase quiz; adapt and refine it to your situation. Good luck with the quiz!

Sample 2:

I. Moutai basics:
1. Where is Moutai produced? A: Moutai is produced in Maotai town, Zunyi, Guizhou province.
2. How long is Moutai's history? A: Moutai's history can be traced back nearly 200 years.
3. Why is Moutai honored as the "national liquor"? A: Because of its distinctive brewing process and historical tradition, Moutai is honored as China's "national liquor".

"344": a trending term explained

"344" is a popular internet phrase that originated in a particular style of expression on the Chinese internet. Its meaning is not fixed; it varies with context and with the user's own reading. Below it is explained from several angles.

1. As internet slang:
Online, 344 is commonly read as "爱你死" ("love you to death"), with the digit 3 standing for "love" and 4 for "death". It is an exaggerated, humorous way to express extreme fondness for someone or something.

2. In everyday life:
Beyond its use as slang, 344 can also be understood as a way of expressing feeling. In real life, people may use it to convey deep love and attachment to family, friends, or a partner.

3. In numeric contexts:
In certain specialized fields, 344 may carry other meanings. For example, in some programming languages 344 might denote a particular data type or variable, and in mathematics or statistics it might appear as a specific number or as part of a pattern.

4. Social and cultural significance:
Beyond the readings above, 344 can be seen as a symbol of social and cultural change: it represents the inventiveness and diversity with which people express feelings in the internet age, and reflects the shift toward conveying emotion through online language and emoji.

In short, this trending term carries multiple meanings: an internet catchphrase, an expression of deep affection in daily life, and occasionally other domain-specific senses. It is part of internet culture and social communication, showing the diversity with which people express feelings in the language of numbers.

A jingle for the first 100 digits of pi

A jingle version in Mandarin homophones (the digit groups precede the phrases that encode them; the puns are kept in Chinese):

3.14159 26535 8979 3238 4626: 山巅一寺一壶酒, 尔乐苦煞吾, 把酒吃, 酒杀尔, 杀不死, 乐尔乐.
43383 27950 28841 97169 39937: 死珊珊, 霸占二妻. 救吾灵儿吧! 不只要救妻, 一路救三舅, 救三妻.
51058 20974 94459 2307: 吾一拎我爸, 二拎舅 (其实就是撕吾舅耳), 三拎妻.
81640 62862 08998 6: 不要溜! 司令溜, 儿不溜! 儿拎爸, 久久不溜!
28034 82534 21170 679 (8): 饿不拎, 闪死爸, 而吾真是饿矣! 要吃人肉? 吃酒吧!

An English version of the pi mnemonic, in which the length of each word gives a digit:

3.14159: Now I, even I, would celebrate
26535: In rhymes inapt, the great
8979: Immortal Syracusan, rivaled nevermore,
32384: Who in his wondrous lore,
626: Passed on before,
4338: Left men his guidance
3279: How to circles mensurate

The origin of pi

π = 3.14159 26535 89793 23846 26433 83279 50288 41971 69399 37510 58209 74944 59230 78164 06286 20899 86280 34825 34211 70679

Ancient Egypt already knew of the circle ratio some 4,000 years ago; who first discovered it cannot be verified. In China, the Zhoubi Suanjing records "a circumference of three for a diameter of one", taking π = 3. In the Wei-Jin period, Liu Hui approximated the circle by regular polygons with ever more sides (the polygon-cutting method, 割圆术) and obtained π ≈ 3.1416. In the Han dynasty, Zhang Heng deduced that π squared divided by 16 equals 5/8, i.e. π = √10 (about 3.162); though not very accurate, this value was simple and easy to grasp, and so enjoyed a vogue across Asia for a time. Wang Fan (229-267) arrived at another value, 3.156, but no one knows how he obtained it. In the 5th century, Zu Chongzhi and his son, working with a regular 24,576-gon, obtained the approximation 355/113, which differs from the true value by less than three parts in ten million; this record was not broken for a thousand years. In India, around 530 CE, the mathematician Aryabhata used the perimeter of a 384-gon to compute π ≈ √9.8684. Brahmagupta, by a different route, inferred that π equals the square root of 10. Fibonacci computed π ≈ 3.1418, and Viète, using Archimedes' method, computed 3.1415926535.

Maoduhe fresh bamboo liquor

The name "Maoduhe" (毛渡河)

The Maotai ferry crossing lies in Maotai town, home of the "national liquor". It was the main crossing used by the First Front Army of the Chinese Workers' and Peasants' Red Army (the Central Red Army) in its four crossings of the Chishui River. The four-crossings campaign vividly embodied Mao Zedong's military thought ("political power grows out of the barrel of a gun") and his flexible, mobile strategy and tactics; it is celebrated as a feat in our army's history and as a shining example in world military history of a smaller force defeating a larger one.

At the time, a family named Wang made its living brewing liquor at the Maotai crossing, and their distillery had been seized by local despots. After leading his troops across the Chishui for the fourth time, Mao Zedong had the distillery returned to the Wang family. In gratitude for the Red Army's kindness, the Wangs named the liquor from their distillery "Maoduhe" (毛渡河, "Mao crossed the river"), to carry forward the spirit of the Long March, commemorate the Red Army's heroic deeds, and instruct and inspire later generations.

Seventy cents (七毛)

You drew a one-dimensional barcode on my arm and wrote beneath it: 0.7¥, this dummy is taken. You said I was worth only seventy cents; for seventy cents I could be sold to you...

You said it was an Interleaved 2 of 5 code. I worked at it for a long time without figuring out what it meant. You told me: it reads "b520", and you taught me the encoding. I thought the scheme wasn't quite right, but it was fun, because it was a barcode that belonged only to me.

You were about to go to N City for an internship, and we would be apart for two months. We took the same train home: sleeper berths, facing bunks. My ride was a day and a night; yours, two days and two nights. By day we sat in the seats by the window, watching the scenery, chatting, snacking, drinking cola. On past train trips I could barely endure until my stop; this time, the hours on the train before parting felt remarkably short.

It grew dark, then late. The next morning I would get off. However deep our tenderness, how could we hold back this companionship trickling away? I said, we're tired, let's sleep. Facing each other, holding hands, we drifted into dreams.

A little after five in the morning, I woke. By my pillow lay a four-leaf clover. You had folded it: four petals made of four pale-pink hearts. I looked up; you were still asleep. I carefully put the lucky clover away and got up to pack my luggage; I was to get off at six. When I came back from washing up, you had woken after all. I had meant not to wake you: you still had a whole day on the train and would be tired; besides, I didn't want you to watch my back as I left.

I went to ask the conductor to exchange my ticket. He said the train was running an hour and a half late. Wonderful. We had another hour and a half. We sat by the window. Your eyes seemed brimming with longing, or perhaps written over with sorrow. You hugged me and said, don't hang around with other boys, and don't let some grad-student senior steal you away next term... I said, impossible; grad students and PhD students are such prize catches.

The train slowed. It was pulling into the station. The conductor called me to get off. I stepped onto the familiar ground of home. I saw you standing at the carriage door, with the same look in your eyes as before. Before I could read it, the train pulled away. In our hearts we each said: goodbye.

After leaving the station, I got your text message: "I'm giving you a four-leaf clover. Count one leaf each month; we'll be apart four months, and when you've counted all the leaves, I'll be back."

In that instant I was deeply moved. The moment the tears came, two words were already printed deep in my heart: waiting for you. I will wait for you to come back.


Équipe de Recherche DALI, Laboratoire LP2A, EA 3679, Université de Perpignan

Applications of fast and accurate summation in computational geometry

Stef Graillat

16 mai 2005. Research Report No RR2005-03.
Université de Perpignan, 52 avenue Paul Alduy, 66860 Perpignan cedex, France.
Téléphone: +33 (0)4.68.66.20.64. Télécopieur: +33 (0)4.68.66.22.87. Adresse électronique: dali@univ-perp.fr.
Author's address: Laboratoire LP2A, Université de Perpignan (graillat@univ-perp.fr, http://gala.univ-perp.fr/~graillat).

Abstract

In this paper, we present a recent algorithm given by Ogita, Rump and Oishi [39] for accurately computing the sum of n floating point numbers. They also give a computational error bound for the computed result. We apply this algorithm to computing determinants, and more particularly to computing robust geometric predicates used in computational geometry. We improve existing results that use either multiprecision libraries or extended large accumulators.

Key words: accurate summation, finite precision, floating-point arithmetic, computational geometry, robust geometric predicate, determinant

Résumé

Dans ce papier, nous présentons un algorithme récent de Ogita, Rump et Oishi [39] pour sommer de façon précise n nombres flottants. Dans leur papier, Ogita, Rump et Oishi fournissent aussi une borne calculable de l'erreur commise par rapport au résultat exact. Nous appliquons cet algorithme pour calculer des déterminants de matrices et en particulier pour calculer des prédicats géométriques utilisés en géométrie algorithmique. Nous améliorons les algorithmes existants qui utilisent soit la multiprécision soit des accumulateurs très larges.

Mots-clés: sommation précise, précision fine, arithmétique flottante, géométrie algorithmique, prédicat géométrique, déterminant

AMS Subject Classifications: 15-04, 65G99, 65G50, 65-04

1 Introduction

Floating point summation is one of the most basic operations in scientific computing, and many algorithms have been developed [3, 13, 17, 18, 25, 26, 31, 27, 28, 32, 33, 34, 36, 35, 39, 40, 41, 42, 43, 44, 46, 47]. A good survey of these algorithms is presented in chapter 4 of the book [22] and in the article [21].

Algorithms that make decisions based on geometric tests, such as determining which side of a line a point falls on, often fail due to roundoff error. A solution to these problems is to use software implementations of exact arithmetic, often at great expense. Improving the speed of correct geometric computation has received recent attention [4, 11, 30], but the proposals take integer or rational inputs of small precision. These methods do not seem to be usable if it is important to use ordinary floating point inputs.

A possible way to improve the accuracy is to increase the working precision. For this purpose, some multiprecision libraries have been developed. One can divide those libraries into three classes:

- Arbitrary precision libraries using a multiple-digit format, where a number is expressed as a sequence of digits coupled with a single exponent. Examples of this format are Bailey's MPFUN [5, 6], Brent's MP [9], MPFR [2] and GMP [1].
- Extended fixed precision libraries using the multiple-component format, but with a limited number of components. Examples of this format are Bailey's double-double [7], Briggs' doubledouble [10] (double-double numbers are represented as an unevaluated sum of a leading double and a trailing double) and quad-double [20] (quad-double numbers are represented as an unevaluated sum of four IEEE doubles).
- Arbitrary precision libraries using a multiple-component format, where a number is expressed as an unevaluated sum of ordinary floating point words. Examples of this format are Priest's [42, 43] and Shewchuk's [45, 46].

Shewchuk [45, 46] used an arbitrary precision library to obtain fast C implementations of four geometric predicates: the 2D and 3D orientation and incircle tests. The inputs are single or double precision floating point numbers. The speed of these algorithms is due to two features. First, they employ fast algorithms for arbitrary precision arithmetic, and second, they are adaptive: the running time depends on the degree of uncertainty of the result.

Recently, Demmel and Hida presented algorithms using a wider accumulator [17, 18]. Floating point numbers p_i, 1 ≤ i ≤ n, given in working precision with f bits in the mantissa, are added in an extra-precise accumulator with F bits, F > f. Some algorithms are presented with and without sorting of the input data. The authors give a detailed analysis of the accuracy of the computed result depending on f, F and the number of summands.

Those algorithms bear one or more of the following disadvantages:

- sorting of the input data is necessary, either
  - by absolute value or
  - by exponent,
- besides working precision, some extra (higher) precision is necessary,
- access to the mantissa and/or exponent is necessary.

Each of those properties can slow down the performance significantly and restrict application to specific computer architectures or compilers.

In this paper, we use recent algorithms from Ogita, Rump and Oishi [39] to provide fast and accurate algorithms to compute determinants of matrices and geometric predicates. The advantage of these algorithms is that they use one working precision and are adaptive. If the computed result has the wanted relative error, we stop; otherwise, we continue the computation, still in the same working precision. Contrary to Demmel and Hida [18], we do not need an extra-precise floating point format, and contrary to Shewchuk [45, 46], we do not need the renormalization steps that slow down the performance.

The rest of the paper is organized as follows. In Section 2, we present the basic notation we will use in the rest of the paper, in particular for floating point arithmetic. In Section 3, we recall the so-called error-free transformations introduced by Ogita, Rump and Oishi [39]. In Section 4, we present the summation algorithms of Ogita, Rump and Oishi from [39]; they designed accurate and fast algorithms to compute the sum of floating point numbers. Section 3 and Section 4 borrow heavily from Ogita, Rump and Oishi [39]. In Section 5, we present applications of those algorithms to computing determinants of matrices and robust geometric predicates used in computational geometry.

2 Notations

Throughout the paper, we assume a floating point arithmetic adhering to the IEEE 754 floating point standard [23]. We do not address issues of overflow and underflow [16, 19]. The set of floating point numbers is denoted by F, and the relative rounding error by eps. For IEEE 754 double precision we have eps = 2^-53.

We denote by fl(·) the result of a floating point computation, where all operations inside the parentheses are done in floating point working precision. Floating point operations in IEEE 754 satisfy [22]

fl(a ∘ b) = (a ∘ b)(1 + ε) for ∘ ∈ {+, −, ·, /} and |ε| ≤ eps.

This implies that

(2.1) |a ∘ b − fl(a ∘ b)| ≤ eps |a ∘ b| and |a ∘ b − fl(a ∘ b)| ≤ eps |fl(a ∘ b)| for ∘ ∈ {+, −, ·, /}.

One can notice that a ∘ b ∈ R and fl(a ∘ b) ∈ F, but in general we do not have a ∘ b ∈ F. It is known that for the basic operations +, −, ·, the approximation error of a floating point operation is still a floating point number (see for example [15]):

(2.2) x = fl(a ± b) ⇒ a ± b = x + y with y ∈ F,
      x = fl(a · b) ⇒ a · b = x + y with y ∈ F.

These are error-free transformations of the pair (a, b) into the pair (x, y).

We use standard notation for error estimations. The quantities γ_n are defined as usual [22] by

γ_n := n eps / (1 − n eps) for n ∈ N.

3 Error-free transformations

Fortunately, the quantities x and y in (2.2) can be computed exactly in floating point arithmetic. For the algorithms, we use Matlab-like [24] notation. For addition, we can use the following algorithm by Knuth [29, Thm B, p. 236].

Algorithm 3.1 (Knuth [29]). Error-free transformation of the sum of two floating point numbers.

function [x, y] = TwoSum(a, b)
  x = fl(a + b)
  z = fl(x − a)
  y = fl((a − (x − z)) + (b − z))

Another algorithm to compute an error-free transformation is the following algorithm from Dekker [15]. The drawback of this algorithm is that we only have x + y = a + b if |a| ≥ |b|.

Algorithm 3.2 (Dekker [15]). Error-free transformation of the sum of two floating point numbers.

function [x, y] = FastTwoSum(a, b)
  x = fl(a + b)
  y = fl((a − x) + b)

For the error-free transformation of a product, we first need to split the input arguments into two parts. Let p be given by eps = 2^-p and define s = ⌈p/2⌉. For example, if the working precision is IEEE 754 double precision, then p = 53 and s = 27. The following algorithm by Dekker [15] splits a floating point number a ∈ F into two parts x and y such that

a = x + y, with x and y nonoverlapping and |y| ≤ |x|.

Algorithm 3.3 (Dekker [15]). Error-free split of a floating point number into two parts.

function [x, y] = Split(a)
  factor = 2^s + 1
  c = fl(factor · a)
  x = fl(c − (c − a))
  y = fl(a − x)

With this function, an algorithm from Veltkamp (see [15]) enables one to compute an error-free transformation for the product of two floating point numbers. This algorithm returns two floating point numbers x and y such that

a · b = x + y with x = fl(a · b).

Algorithm 3.4 (Veltkamp [15]). Error-free transformation of the product of two floating point numbers.

function [x, y] = TwoProduct(a, b)
  x = fl(a · b)
  [a1, a2] = Split(a)
  [b1, b2] = Split(b)
  y = fl(a2 · b2 − (((x − a1 · b1) − a2 · b1) − a1 · b2))

The following theorem summarizes the properties of the algorithms TwoSum and TwoProduct.

Theorem 3.1 (Ogita, Rump and Oishi [39, Thm. 3.4]). Let a, b ∈ F and let x, y ∈ F be such that [x, y] = TwoSum(a, b) (Algorithm 3.1). Then, also in the presence of underflow,

(3.3) a + b = x + y, x = fl(a + b), |y| ≤ eps |x|, |y| ≤ eps |a + b|.

Let a, b ∈ F and let x, y ∈ F be such that [x, y] = TwoProduct(a, b) (Algorithm 3.4). Then, if no underflow occurs,

(3.4) a · b = x + y, x = fl(a · b), |y| ≤ eps |x|, |y| ≤ eps |a · b|,

and, in the presence of underflow,

(3.5) a · b = x + y + 5η, x = fl(a · b), |y| ≤ eps |x| + 5 eta, |y| ≤ eps |a · b| + 5 eta, with |η| ≤ eta.

The TwoProduct algorithm can be rewritten in a very simple way if a Fused Multiply-and-Add (FMA) operation is available on the targeted architecture [37, 8]. This means that for a, b, c ∈ F, the result of FMA(a, b, c) is the nearest floating point number to a · b + c ∈ R.

Algorithm 3.5 (Ogita, Rump and Oishi [39, Algo. 3.5]). Error-free transformation of the product of two floating point numbers using an FMA.

function [x, y] = TwoProductFMA(a, b)
  x = a · b
  y = FMA(a, b, −x)

If we suppose that we have ADD3(a, b, c), which computes the nearest floating point number to the exact sum a + b + c ∈ R for a, b, c ∈ F, then the algorithm TwoSum can be replaced by the following one.

Algorithm 3.6 (Ogita, Rump and Oishi [39, Algo. 3.6]). Error-free transformation of the sum of two floating point numbers using ADD3.

function [x, y] = TwoSumADD3(a, b)
  x = a + b
  y = ADD3(a, b, −x)

An error-free algorithm for the FMA has recently been given by Boldo and Muller [8]. The approximation can no longer be represented by a single floating-point number; it is now the sum of two floating point numbers.

Algorithm 3.7 (Boldo and Muller [8]). Error-free transformation of the FMA of three floating point numbers.

function [x, y, z] = ErrFMA(a, b, c)
  x = fl(a · b + c)
  [u1, u2] = TwoProduct(a, b)
  [α1, α2] = TwoSum(c, u2)
  [β1, β2] = TwoSum(u1, α1)
  γ = fl(fl(β1 − x) + β2)
  [y, z] = TwoSum(γ, α2)

4 Summation

Let floating point numbers p_i ∈ F, 1 ≤ i ≤ n, be given. The aim of this section is to present algorithms from Ogita, Rump and Oishi [39] that compute a good approximation of the sum s = Σ p_i. In [39], they cascade the TwoSum algorithm and sum up the errors to improve the result of the ordinary floating point sum fl(Σ p_i). Their cascading algorithm summing up the error terms is as follows.

Algorithm 4.1 (Ogita, Rump and Oishi [39, Algo. 4.1]). Cascaded summation.

function res = Sum2s(p)
  π_1 = p_1; σ_1 = 0
  for i = 2 : n
    [π_i, q_i] = TwoSum(π_{i−1}, p_i)
    σ_i = fl(σ_{i−1} + q_i)
  res = fl(π_n + σ_n)

This is a compensated summation [22]. One can rewrite Algorithm 4.1 so as to overwrite the vector entries. This can be done as follows.

Algorithm 4.2 (Ogita, Rump and Oishi [39, Algo. 4.3]). Error-free vector transformation for summation.

function p = VecSum(p)
  for i = 2 : n
    [p_i, p_{i−1}] = TwoSum(p_i, p_{i−1})

Using the ideas of Algorithm 4.2, Algorithm 4.1 can be written in the following equivalent way.

Algorithm 4.3 (Ogita, Rump and Oishi [39, Algo. 4.4]). Cascaded summation, equivalent to Algorithm 4.1.

function res = Sum2(p)
  for i = 2 : n
    [p_i, p_{i−1}] = TwoSum(p_i, p_{i−1})
  res = fl((Σ_{i=1}^{n−1} p_i) + p_n)

Proposition 4.1 (Ogita, Rump and Oishi [39, Prop. 4.5]). Suppose Algorithm 4.3 (Sum2) is applied to floating point numbers p_i ∈ F, 1 ≤ i ≤ n, set s := Σ p_i ∈ R and S := Σ |p_i|, and suppose n eps < 1. Then, also in the presence of underflow,

(4.6) |res − s| ≤ eps |s| + γ_{n−1}^2 S.

The error bound (4.6) for the result res of Algorithm 4.3 is not computable, since it involves the exact value s of the sum. The following result [39, Cor. 4.7] computes a valid error bound in floating point in rounding to nearest, which is also less pessimistic.

Proposition 4.2 (Ogita, Rump and Oishi [39, Cor. 4.7]). Let floating point numbers p_i ∈ F, 1 ≤ i ≤ n, be given. Append the statements

  if 2n eps ≥ 1, error('dimension too large'), end
  β = (2n eps / (1 − 2n eps)) · (Σ_{i=1}^{n−1} |p_i|)
  err = eps |res| + (β + (2 eps^2 |res| + 3 eta))

(to be executed in working precision) to Algorithm 4.3 (Sum2). If the error message is not triggered, err satisfies

res − err ≤ Σ p_i ≤ res + err.

A computation shows that the algorithm Sum2 transforms a vector p_i that is ill-conditioned with respect to summation into a new vector with identical sum but with a condition number improved by a factor eps. This is why it is interesting to cascade the error-free transformation. The algorithm is as follows.

Algorithm 4.4 (Ogita, Rump and Oishi [39, Algo. 4.8]). Summation as in K-fold precision by (K−1)-fold error-free transformation.

function res = SumK(p, K)
  for k = 1 : K − 1
    p = VecSum(p)
  res = fl((Σ_{i=1}^{n−1} p_i) + p_n)

The following proposition gives an error estimate for Algorithm 4.4.

Proposition 4.3 (Ogita, Rump and Oishi [39, Prop. 4.10]). Let floating point numbers p_i ∈ F, 1 ≤ i ≤ n, be given and assume 4n eps ≤ 1. Then, also in the presence of underflow, the result res of Algorithm 4.4 (SumK) satisfies, for K ≥ 3,

|res − s| ≤ (eps + 3 γ_{n−1}^2) |s| + γ_{2n−2}^K S,

where s := Σ p_i and S := Σ |p_i|.

5 Applications in robust computational geometry

5.1 Accurate computation of determinants

The literature on the computation of accurate determinants is quite large (see for example [13, 12, 14] and the references therein). These works often use multiprecision arithmetic to compute the determinant accurately. Here, we present an algorithm to compute the determinant of a floating-point matrix using only one working precision (IEEE 754 double precision), computing the result up to a given relative error ε.

Let us study the case of a 2 × 2 determinant

det2 = det [a b; c d] = ad − bc.

Using TwoProduct, we can write [x, y] = TwoProduct(a, d) and [z, t] = TwoProduct(b, c). Then we have det2 = x + y − z − t. We have transformed the problem of computing the determinant into the problem of computing a sum accurately. We can then apply the accurate summation Algorithm 4.1 to compute this sum.

We can do the same thing, for example, with a 3 × 3 determinant

det3 = det [a_{1,1} a_{1,2} a_{1,3}; a_{2,1} a_{2,2} a_{2,3}; a_{3,1} a_{3,2} a_{3,3}] = Σ_{σ ∈ S_3} signature(σ) a_{1,σ(1)} · a_{2,σ(2)} · a_{3,σ(3)},

using ThreeProduct to transform each product a_{1,σ(1)} · a_{2,σ(2)} · a_{3,σ(3)} into a sum of four floating point numbers.

Algorithm 5.1. Error-free transformation of the product of three floating point numbers.

function [x, y, z, t] = ThreeProduct(a, b, c)
  [p, e] = TwoProduct(a, b)
  [x, y] = TwoProduct(p, c)
  [z, t] = TwoProduct(e, c)

This can be used to compute, for example, the area of a planar triangle or the volume of a tetrahedron, as shown in [38].

The following algorithm computes the determinant of a matrix up to a given relative error. We suppose we have a function DetVector that transforms the computation of the determinant into a summation as described above. We then compute the sum together with an error bound. If this error bound is smaller than the desired relative error ε, we stop; otherwise, we continue the computation.

Algorithm 5.2. Algorithm to compute the determinant up to a relative error ε.

function resdet = det(A, ε)
  p = DetVector(A)
  p = VecSum(p)
  res = p_n
  β = (2n eps / (1 − 2n eps)) · (Σ_{i=1}^{n−1} |p_i|)
  err = eps |res| + (β + 2 eps^2 |res|)
  while (err > ε |res|)
    p = VecSum(p)
    res = p_n
    β = (2n eps / (1 − 2n eps)) · (Σ_{i=1}^{n−1} |p_i|)
    err = eps |res| + (β + 2 eps^2 |res|)
  resdet = p_n

5.2 Robust geometric predicates

An application requiring guaranteed accuracy is the computation of geometric predicates. Some algorithms, like Delaunay triangulation and mesh generation, need self-consistent geometric tests. Shewchuk [45, 46] gives a survey of the issues involved and presents an adaptive (and arbitrary) precision floating point arithmetic to solve this problem. Most of these geometric predicates require the computation of the sign of a small determinant. Recent work on this issue includes [45, 46, 4, 11, 30, 18].

Consider for example the predicate Orient3D, which determines whether a point D is to the left or to the right of the oriented plane defined by the points A, B and C. The result of the predicate depends on the sign of the determinant

Orient3D(a, b, c, d) = sign det [a_x a_y a_z 1; b_x b_y b_z 1; c_x c_y c_z 1; d_x d_y d_z 1]
                     = sign det [a_x−d_x a_y−d_y a_z−d_z; b_x−d_x b_y−d_y b_z−d_z; c_x−d_x c_y−d_y c_z−d_z].

The computed result has the same sign as the exact result if the relative error is less than one. As a consequence, it is sufficient to compute an approximate value of the determinant with a relative error less than one.

Recently, Demmel and Hida [17, 18] provided another method to certify the sign of a small determinant. They used an accurate summation algorithm based on large accumulators. Here, we use the algorithms developed in [39] together with the accurate computation of an error bound. Indeed, the determinant can be evaluated as a sum of 24 monomials of the form ±a_i b_j c_k. Each monomial can be expressed as the sum of four numbers with the algorithm ThreeProduct. It follows that the determinant can be expressed as a sum of 4 × 24 = 96 terms. We can then apply a summation algorithm until the relative error is less than one. This can be done using the error bound of Proposition 4.2. In the following algorithm, we suppose we have a function DetVector that transforms the matrix of the determinant into a vector whose sum is the determinant (using ThreeProduct). We denote by n the length of the vector (here n = 96).

Algorithm 5.3. Algorithm to compute the predicate Orient3D(a, b, c, d).

function sign = Orient3D(A)
  p = DetVector(A)
  p = VecSum(p)
  res = p_n
  β = (2n eps / (1 − 2n eps)) · (Σ_{i=1}^{n−1} |p_i|)
  err = eps |res| + (β + 2 eps^2 |res|)
  while (err > |res|)
    p = VecSum(p)
    res = p_n
    β = (2n eps / (1 − 2n eps)) · (Σ_{i=1}^{n−1} |p_i|)
    err = eps |res| + (β + 2 eps^2 |res|)
  sign = sign(p_n)

6 Conclusion

We have treated one geometric predicate, Orient3D. Of course, the same techniques can easily be applied to compute other predicates, like InCircle and InSphere for example (see [45, 46]). The potential of our algorithms is in providing a fast and simple way to extend slightly the precision of critical variables in numerical algorithms. The techniques used here are simple enough to be coded directly in numerical algorithms, avoiding function call overhead and conversion costs.

References

[1] GMP, the GNU Multi-Precision library. Available at URL=/gmp/.
[2] MPFR, the Multiprecision Precision Floating Point Reliable library. Available at URL=.
[3] I. J. Anderson. A distillation algorithm for floating-point summation. SIAM J. Sci. Comput., 20(5):1797–1806 (electronic), 1999.
[4] F. Avnaim, J.-D. Boissonnat, O. Devillers, F. P. Preparata, and M. Yvinec. Evaluating signs of determinants using single-precision arithmetic. Algorithmica, 17(2):111–132, 1997.
[5] David H. Bailey. Algorithm 719: multiprecision translation and execution of Fortran programs. ACM Trans. Math. Softw., 19(3):288–319, 1993.
[6] David H. Bailey. A Fortran 90-based multiprecision system. ACM Trans. Math. Softw., 21(4):379–387, 1995.
[7] David H. Bailey. A Fortran-90 double-double library, 2001. Available at URL=http:///~dhbailey/mpdist/index.html.
[8] Sylvie Boldo and Jean-Michel Muller. Some functions computable with a fused-mac. In Proceedings of the 17th Symposium on Computer Arithmetic, Cape Cod, USA, 2005.
[9] Richard P. Brent. A Fortran multiple-precision arithmetic package. ACM Trans. Math. Softw., 4(1):57–70, 1978.
[10] Keith Briggs. Doubledouble floating point arithmetic, 1998. Available at URL=http:///keithmbriggs/doubledouble.html.
[11] H. Brönnimann and M. Yvinec. Efficient exact evaluation of signs of determinants. Algorithmica, 27(1):21–56, 2000.
[12] Hervé Brönnimann and Mariette Yvinec. Efficient exact evaluation of signs of determinants. In SCG '97: Proceedings of the thirteenth annual symposium on Computational geometry, pages 166–173, New York, NY, USA, 1997. ACM Press.
[13] Kenneth L. Clarkson. Safe and effective determinant evaluation. In 33rd Annual Symposium on Foundations of Computer Science (FOCS), Proceedings, Pittsburgh, PA, USA, October 24–27, 1992, pages 387–395. IEEE Computer Society Press, Washington, DC.
[14] Marc Daumas and Claire Finot. Division of floating point expansions with an application to the computation of a determinant. J. UCS, 5(6):323–338, 1999.
[15] T. J. Dekker. A floating-point technique for extending the available precision. Numer. Math., 18:224–242, 1971.
[16] James Demmel. Underflow and the reliability of numerical software. SIAM J. Sci. Statist. Comput., 5(4):887–919, 1984.
[17] James Demmel and Yozo Hida. Accurate and efficient floating point summation. SIAM J. Sci. Comput., 25(4):1214–1248 (electronic), 2003.
[18] James Demmel and Yozo Hida. Fast and accurate floating point summation with application to computational geometry. Numer. Algorithms, 37(1-4):101–112, 2004.
[19] John R. Hauser. Handling floating-point exceptions in numeric programs. ACM Trans. Program. Lang. Syst., 18(2):139–174, 1996.
[20] Yozo Hida, Xiaoye S. Li, and David H. Bailey. Algorithms for quad-double precision floating point arithmetic. In Proc. 15th IEEE Symposium on Computer Arithmetic, pages 155–162. IEEE Computer Society Press, Los Alamitos, CA, USA, 2001.
[21] Nicholas J. Higham. The accuracy of floating point summation. SIAM J. Sci. Comput., 14(4):783–799, 1993.
[22] Nicholas J. Higham. Accuracy and Stability of Numerical Algorithms. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, second edition, 2002.
[23] IEEE Standard for Binary Floating-Point Arithmetic, ANSI/IEEE Standard 754-1985. Institute of Electrical and Electronics Engineers, New York, 1985. Reprinted in SIGPLAN Notices, 22(2):9–25, 1987.
[24] The MathWorks Inc. MATLAB User's Guide, 2000.
[25] M. Jankowski, A. Smoktunowicz, and H. Woźniakowski. A note on floating-point summation of very many terms. Information Processing and Cybernetics-EIK, 19(9):435–440, 1983.
[26] M. Jankowski and H. Woźniakowski. The accurate solution of certain continuous problems using only single precision arithmetic. BIT, 25:635–651, 1985.
[27] W. Kahan. Further remarks on reducing truncation errors. Commun. ACM, 8(1):40, 1965.
[28] W. Kahan. A survey of error analysis. In Proc. IFIP Congress, Ljubljana, Information Processing 71, pages 1214–1239, Amsterdam, The Netherlands, 1972. North-Holland.
[29] Donald E. Knuth. The Art of Computer Programming, Volume 2, Seminumerical Algorithms. Addison-Wesley, Reading, MA, USA, third edition, 1998.
[30] Shankar Krishnan, Mark Foskey, Tim Culver, John Keyser, and Dinesh Manocha. PRECISE: efficient multiprecision evaluation of algebraic roots and predicates for reliable geometric computation. In SCG '01: Proceedings of the seventeenth annual symposium on Computational geometry, pages 274–283, New York, NY, USA, 2001. ACM Press.
[31] Xiaoye S. Li, James W. Demmel, David H. Bailey, Greg Henry, Yozo Hida, Jimmy Iskandar, William Kahan, Suh Y. Kang, Anil Kapur, Michael C. Martin, Brandon J. Thompson, Teresa Tung, and Daniel J. Yoo. Design, implementation and testing of extended and mixed precision BLAS. ACM Trans. Math. Softw., 28(2):152–205, 2002.
[32] Seppo Linnainmaa. Analysis of some known methods of improving the accuracy of floating-point sums. BIT, 14:167–202, 1974.
[33] Seppo Linnainmaa. Software for doubled-precision floating-point computations. ACM Trans. Math. Software, 7(3):272–283, 1981.
[34] Michael A. Malcolm. On accurate floating-point summation. Commun. ACM, 14(11):731–736, 1971.
[35] Ole Møller. Note on quasi double-precision. BIT, 5:251–255, 1965.
[36] Ole Møller. Quasi double-precision in floating point addition. BIT, 5:37–50, 1965.
[37] Yves Nievergelt. Scalar fused multiply-add instructions produce floating-point matrix arithmetic provably accurate to the penultimate digit. ACM Trans. Math. Software, 29(1):27–48, 2003.
[38] Yves Nievergelt. Analysis and applications of Priest's distillation. ACM Trans. Math. Softw., 30(4):402–433, 2004.
[39] Takeshi Ogita, Siegfried M. Rump, and Shin'ichi Oishi. Accurate sum and dot product. SIAM J. Sci. Comput., 2005. To appear.
[40] M. Pichat. Correction d'une somme en arithmétique à virgule flottante. Numer. Math., 19:400–406, 1972.
[41] Michèle Pichat. Contributions à l'étude des erreurs d'arrondi en arithmétique à virgule flottante. PhD thesis, Université Scientifique et Médicale de Grenoble, Grenoble, France, 1976.
[42] D. M. Priest. Algorithms for arbitrary precision floating point arithmetic. In P. Kornerup and D. W. Matula, editors, Proceedings of the 10th IEEE Symposium on Computer Arithmetic (Arith-10), pages 132–144, Grenoble, France, 1991. IEEE Computer Society Press, Los Alamitos, CA.
[43] Douglas M. Priest. On Properties of Floating Point Arithmetics: Numerical Stability and the Cost of Accurate Computations. PhD thesis, Mathematics Department, University of California, Berkeley, CA, USA, November 1992. ftp:///pub/theory/priest-thesis.ps.Z.
[44] D. R. Ross. Reducing truncation errors using cascading accumulators. Commun. ACM, 8(1):32–33, 1965.
[45] Jonathan Richard Shewchuk. Robust adaptive floating-point geometric predicates. In SCG '96: Proceedings of the twelfth annual symposium on Computational geometry, pages 141–150, New York, NY, USA, 1996. ACM Press.
[46] Jonathan Richard Shewchuk. Adaptive precision floating-point arithmetic and fast robust geometric predicates. Discrete Comput. Geom., 18(3):305–363, 1997.
[47] Jack M. Wolfe. Reducing truncation errors by programming. Commun. ACM, 7(6):355–356, 1964.
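The error-free transformations and the cascaded summation at the heart of the report above are short enough to transcribe directly. The following sketch is my own Python rendering (Python floats are IEEE 754 binary64, so eps = 2^-53 as in the paper); the function names are mine, and the code ignores the underflow/overflow caveats discussed in the report.

```python
def two_sum(a, b):
    """Knuth's TwoSum (Algorithm 3.1): a + b = x + y exactly, x = fl(a + b)."""
    x = a + b
    z = x - a
    y = (a - (x - z)) + (b - z)
    return x, y

def split(a):
    """Dekker's Split (Algorithm 3.3), with s = 27 for IEEE 754 double."""
    factor = 2.0 ** 27 + 1.0
    c = factor * a
    hi = c - (c - a)
    return hi, a - hi

def two_product(a, b):
    """Veltkamp/Dekker TwoProduct (Algorithm 3.4): a * b = x + y exactly,
    barring underflow and overflow."""
    x = a * b
    a1, a2 = split(a)
    b1, b2 = split(b)
    y = a2 * b2 - (((x - a1 * b1) - a2 * b1) - a1 * b2)
    return x, y

def sum2(p):
    """Cascaded (compensated) summation, as in Algorithm 4.1 (Sum2s)."""
    pi_, sigma = p[0], 0.0
    for x in p[1:]:
        pi_, q = two_sum(pi_, x)
        sigma += q  # accumulate the exact rounding errors
    return pi_ + sigma

def det2(a, b, c, d):
    """2x2 determinant ad - bc via the transformation of Section 5.1:
    turn both products into exact sums, then sum the four terms accurately."""
    x, y = two_product(a, d)
    z, t = two_product(b, c)
    return sum2([x, y, -z, -t])
```

For example, `sum2([1e16, 1.0, -1e16])` returns the exact sum 1.0, where ordinary left-to-right floating point summation returns 0.0; similarly, `det2` recovers determinants that are lost entirely to cancellation in the naive formula.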
