

European Signal Processing Conference, EUSIPCO'94, Edinburgh, Scotland, 13–16 September 1994


Modelling of Fractal Coding Schemes

Bernd Hürtgen and Frank Müller
Institut für Elektrische Nachrichtentechnik, RWTH (Rheinisch-Westfälische Technische Hochschule) Aachen, 52056 Aachen, Germany
Phone: +49-241-807677 (Hürtgen) / +49-241-807681 (Müller), Fax: +49-241-8888196
Email: huertgen@ient.rwth-aachen.de, mueller@ient.rwth-aachen.de

Abstract - Fractal techniques applied in the area of signal coding suffer from the lack of an analytical description, especially where the question of convergence at the decoder is concerned. Addressing that question directly requires an expensive eigenvalue calculation on the transformation matrix during the encoding process, which is in general computationally infeasible. In this paper the conditional equation for the eigenvalues is determined for a rather general coding scheme. This allows formulation of the probability density function of the largest eigenvalue, which in turn determines whether or not the reconstruction converges. These results are not only important for evaluating the convergence properties of various coding schemes but are also valuable for optimizing their encoding parameters.

1. Introduction

Fractal techniques have been known for several years, especially in three distinct fields of application: segmentation, signal modelling, and coding. Our attention is focused on signal coding, to which Barnsley [1, 2] originally contributed the major ideas. The lack of a practical algorithm suitable for automatic encoding and modelling of grey-level images at common compression ratios was filled by Jacquin [3], who proposed a block-based implementation. A recent review of fractal coding schemes may be found in [4], and an excellent mathematical foundation based on the theory of finite-dimensional vector spaces in [5]. Numerous supplements and improvements of Jacquin's scheme have been reported. The objective
of this paper is to provide a method for judging the different proposals and for estimating the influence of distinct design parameters on the convergence of the decoding process. For this purpose a probabilistic approach based upon the transformation matrix is introduced.

As derived below, a necessary and sufficient condition for a convergent decoding process is that all eigenvalues of the transformation matrix lie within the unit circle. Since calculating the eigenvalues is expensive, it is in general infeasible to do so during the encoding process. Nevertheless, quantifying the probability of divergence, given the type of coding scheme and its design parameters, is desirable. This way one can quantify the probability of divergent transformations without actually determining the eigenvalues. This is done by regarding the eigenvalues as random variables and modelling their probability density function (pdf), which depends upon the coding scheme used and its design parameters.

The paper is organized as follows: the mathematical background of fractal coding is introduced in Section 2. Section 3 is concerned with the distribution of the eigenvalues and is divided into three parts. Topics 3.1 and 3.2 treat two special cases for which a rather simple analytical solution of the characteristic equation is possible, while topic 3.3 deals with a more general case for which no analytical description has yet been found. The paper concludes with a short summary.

2. Theory

The basic coding principle emerges from a blockwise-defined affine transformation which composes a signal from parts of itself, exploiting natural self-similarities as a special form of redundancy for compression purposes. Fractal coding schemes can be viewed as a form of vector quantization with a signal-dependent codebook. Let the signal be a vector of samples. This signal is segmented into non-overlapping blocks.
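As a purely illustrative companion to this description (not the authors' implementation; the 1-D setting, the function name and the least-squares fit of the affine parameters are assumptions), the per-block matching can be sketched as follows: for every codebook entry, the optimal scaling and offset are computed and the entry with the smallest squared-error distortion, in the spirit of (1), is kept.

```python
import numpy as np

def encode_block(block, codebook, s_max=1.0):
    """Match one signal block against every codebook entry.

    For each entry c, the least-squares optimal scaling s and offset o
    minimizing ||block - (s*c + o)||^2 are computed; the entry with the
    smallest distortion wins.  Returns (index, s, o, distortion)."""
    best = None
    for k, c in enumerate(codebook):
        c0 = c - c.mean()
        var = float(c0 @ c0)
        s = 0.0 if var == 0.0 else float((block - block.mean()) @ c0) / var
        s = float(np.clip(s, -s_max, s_max))  # restriction on the scaling coefficient
        o = block.mean() - s * c.mean()
        d = float(np.sum((block - (s * c + o)) ** 2))
        if best is None or d < best[3]:
            best = (k, s, o, d)
    return best

# Toy example: the first codebook entry reproduces the block exactly.
codebook = [np.array([0.0, 1.0, 2.0, 3.0]), np.ones(4)]
block = 0.5 * codebook[0] + 2.0
k, s, o, d = encode_block(block, codebook)
```

The closed-form scaling and offset follow from ordinary least squares; real schemes additionally search over geometric transformations of the codebook entries.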
Then for each of these blocks one codebook entry from a set of entries is selected which, after scaling and the addition of an offset, minimizes some predefined distortion measure (1) over all blocks. The codebook is generated from the signal itself by means of a codebook construction matrix, which is mainly determined by the type of coding scheme. If one operator denotes the 'fetch' of a codebook entry from the codebook and another the 'put' of the modified codebook entry into the approximation, the mapping process for the entire image may be formulated as in (2). This represents an affine transformation consisting of a linear part and a non-linear offset, which together form the fractal code for the original signal. A very simple coding scheme may be constructed if the mapping is contractive, or at least eventually contractive, i.e. if for some iterate the contractivity condition (3) holds. Then the decoder can construct a unique fixed point from the fractal code without any knowledge about the codebook.
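The decoder's fixed-point construction can be illustrated with a toy linear-algebra sketch (an invented two-dimensional example, not the paper's scheme): iterating the affine map drives any starting signal to the same attractor.

```python
import numpy as np

def decode(L, t, n_iter=60, x0=None):
    """Iterate x <- L x + t.  When the linear part L is (eventually)
    contractive, the iterates converge from any starting signal to the
    unique fixed point, with no reference to the original codebook."""
    x = np.zeros(len(t)) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = L @ x + t
    return x

# Toy fractal code: linear part with spectral radius 0.5 plus an offset.
L = np.array([[0.25, 0.25], [0.25, 0.25]])
t = np.array([1.0, 1.0])
x_star = decode(L, t)
# The fixed point solves (I - L) x = t, and is reached from any start:
assert np.allclose(x_star, np.linalg.solve(np.eye(2) - L, t))
assert np.allclose(decode(L, t, x0=[50.0, -7.0]), x_star)
```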
This is guaranteed by Banach's fixed-point theorem, which states that the sequence of iterates converges for any arbitrary initial signal to a unique fixed point. If contractivity is assumed, the collage theorem (e.g. [5, 6]) ensures that the fixed point itself is close to the original signal (4). As can be seen from (4), the collage theorem also motivates the mapping process at the encoder, which minimizes the approximation error. Because not the original signal itself but a fixed point close to it is encoded, fractal coding schemes are sometimes also termed attractor coding. A coding gain can be achieved if the fractal code, which serves as the representation of the fixed point in the fractal domain, can be expressed with fewer bits than the original signal itself.

For affine transformations it can be shown that a necessary and sufficient condition for contractivity is that the spectral radius of the linear part, i.e. the largest eigenvalue in magnitude, is smaller than one [5, 7]. One can show that this demand is equivalent to eventual contractivity. Control over convergence can therefore be obtained by determining the eigenvalues of the linear part, which is in general a very difficult task; analytical solutions are given only for some rather simple coding schemes, e.g. [7, 8, 5]. Contractivity is always ensured if the magnitudes of all scaling coefficients are strictly smaller than one. As reported by several authors, e.g. [9, 10], a less stringent restriction on the scaling coefficients improves reconstruction quality and convergence speed; on the other hand, contractivity of the transformation is then no longer guaranteed.
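A small numerical illustration of this point (the matrices are invented for the example, using a generic eigenvalue routine): a transformation may contain a scaling coefficient with magnitude above one and still be eventually contractive, as long as the spectral radius of the linear part stays below one.

```python
import numpy as np

def spectral_radius(L):
    """Largest eigenvalue magnitude of the linear part."""
    return float(np.max(np.abs(np.linalg.eigvals(L))))

# A length-2 mapping cycle with one scaling coefficient above 1 is still
# eventually contractive, since rho = sqrt(1.2 * 0.6) ~ 0.85 < 1 ...
L_ok = np.array([[0.0, 1.2],
                 [0.6, 0.0]])
assert spectral_radius(L_ok) < 1.0

# ... whereas a coefficient of 1.5 on a cycle of length 1 diverges.
L_bad = np.diag([1.5, 0.2])
assert spectral_radius(L_bad) > 1.0
```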
Our investigations have shown that the scaling coefficients may be regarded as statistically independent, uniformly distributed random variables. Since the eigenvalues are solely determined by the scaling coefficients and the structure of the linear part, they too are random variables. The following section illustrates, by means of three distinct applications, how the probability density function (pdf) of the eigenvalues can be derived for various choices of the design parameters. Whereas for the first two schemes, described in topics 3.1 and 3.2, an analytical solution of the characteristic equation is given, for the later and more general scheme the pdf is approximated. The pdf of the eigenvalues determines the probability of divergent transformations, and so the influence of various design parameters on contractivity can be quantified.

3. Applications

As mentioned above, contractivity is determined by the largest eigenvalue in magnitude of the transformation matrix. Due to its huge dimension, straightforward determination by solving the characteristic equation is infeasible. Instead, the specific structure of the matrix must be exploited in order to find an exact and quick solution. For a rather general coding approach this is done in [8], so this paper only concisely summarizes the results.

We start from the coding approach published in [11]. The basic idea is to find, for consecutive blocks within the signal, another block such that after some geometric transformation, scaling and an additional offset, the distortion measure (1) after the mapping process becomes as small as possible. As shown in [8], the largest eigenvalue of the transformation matrix for the entire signal is then determined by (5), where the index denotes one of the mapping cycles. Each of these cycles can be treated independently of all others, because the part of the signal belonging to one cycle is not regarded as a codebook entry by any other cycle, and
vice versa. The length of a cycle equals the number of codebook entries involved in that cycle.

For simplification this paper considers two important special cases from the literature. The first, published in [12], treats each block of the signal independently of all others; the length of every mapping cycle therefore equals one. We call these non-concatenated coding schemes. The second does not geometrically scale the codebook entries, or does so only by subsampling; one can show that this makes the scaling parameter of the geometric transformation equal to one. Such schemes have been thoroughly investigated in [5] and are called decimating coding schemes.

3.1 Non-concatenated coding scheme

The coding schemes described in this section are characterized by cycles of length one. Equation (5) can then be simplified, so that the eigenvalue of one cycle is determined by (6). As assumed above, the scaling coefficients are uniformly distributed; if they are also statistically independent, the pdf of the eigenvalues equals the repeated convolution of the pdf of the scaling parameter. Introducing the rect-function (7), the pdf of the scaling coefficient can be written as a rect-pulse (8), and its repeated convolution with itself yields the pdf of the eigenvalues (9).

[Figure 1: Probability density function for some common values of the design parameters.]

The mapping results in a divergent reconstruction if and only if the largest eigenvalue of the linear part lies outside the unit circle. Since the pdf is an even function, the probability of divergence can easily be determined by integrating it (10). Figure 2 shows this probability for some different parameters.

3.2 Decimating coding scheme

For these schemes the pdf of the eigenvalues is given by (12). Since the probability of an eigenvalue exceeding one equals that of it falling below minus one, the probability of divergence can again be obtained by integrating the pdf, so that finally (13) results. As can be seen from Figure 3, long mapping cycles are advantageous for a convergent reconstruction process.

Summarizing the results of topics 3.1 and 3.2, one can state that a fractal coding scheme which is optimized with
respect to the contractivity of the transformation should combine geometric scaling of the codebook entries with long mapping cycles. A typical representative of such a scheme is described in the following topic.

3.3 General coding scheme

Jacquin's original proposal for the encoding of grey-level images is the most general one as far as the possible variations of the transformation matrix are concerned. To the knowledge of the authors, no exact formulation of the eigenvalues has been found yet, only an upper bound derived from the Euclidean operator norm [13]. An analytical derivation of the pdf of the eigenvalues, as performed in the previous two topics, is therefore not possible. Instead, the pdf of the largest eigenvalue is approximated: the transformation matrix is generated and its largest eigenvalue evaluated, and over a large number of experiments the probability of divergence is approximated by the relative frequency (14) of experiments in which the magnitude of the largest eigenvalue exceeds one. Results of our computer simulations are depicted in Figure 4. One can see that a simple decimation matrix results in significantly more divergent transformations than a matrix which averages two or more samples. The differences between larger averaging parameters are less significant; the choice of a suitable averaging parameter is therefore mainly determined by the associated computational burden of the encoding process.

[Figure 4: Probability of divergent reconstruction for Jacquin's coding scheme.]

4. Summary

In this paper the transformation matrices of fractal coding schemes have been examined. Since all eigenvalues of the transformation matrix must lie within the unit circle, the probability of divergent reconstruction sequences at the decoder can be quantified by determining the pdf of the eigenvalues. The conditional equation for the eigenvalues has been given for a rather general class of coding schemes. By modelling the scaling parameters as
uniformly distributed, statistically independent random variables, the eigenvalues themselves become functions of random variables. Their pdfs have been modelled for the two special cases of non-concatenated and decimating coding schemes. This allows specification of the probability of divergent reconstruction at the decoder and an appropriate choice of some design parameters. For Jacquin's original proposal no simple conditional equation for the eigenvalues has been found so far; the corresponding pdf is therefore approximated by the relative frequency, over many repetitions of an experiment, of largest eigenvalues whose magnitude exceeds one.

References

[1] M. F. Barnsley, V. Ervin, D. Hardin, and J. Lancaster, "Solution of an inverse problem for fractals and other sets," Proceedings of the National Academy of Sciences USA, vol. 83, pp. 1975–1977, Apr. 1986.
[2] M. F. Barnsley and J. H. Elton, "A new class of Markov processes for image encoding," Advances in Applied Probability, vol. 20, pp. 14–22, 1988.
[3] A. E. Jacquin, "Fractal image coding based on a theory of iterated contractive image transformations," in Proceedings SPIE Visual Communications and Image Processing '90, vol. 1360, pp. 227–239, 1990.
[4] A. E. Jacquin, "Fractal image coding: A review," Proceedings of the IEEE, vol. 81, pp. 1451–1465, Oct. 1993.
[5] L. Lundheim, Fractal Signal Modelling for Source Coding. PhD thesis, Universitetet i Trondheim, Norges Tekniske Høgskole, 1992.
[6] Y. Fisher, E. W. Jacobs, and R. D. Boss, "Fractal image compression using iterated transforms," in Image and Text Compression (J. A. Storer, ed.), ch. 2, pp. 35–61, Kluwer Academic Publishers, 1992.
[7] B. Hürtgen, "Contractivity of fractal transforms for image coding," Electronics Letters, vol. 29, pp. 1749–1750, Sept. 1993.
[8] B. Hürtgen and S. F. Simon, "On the problem of convergence in fractal coding schemes," in Proceedings of the IEEE International Conference on Image Processing ICIP'94, vol. 3, Austin, Texas, USA, pp. 103–106, Nov. 1994.
[9] Y. Fisher, E. W. Jacobs, and R. D. Boss, "Iterated transform image
compression," Tech. Rep. 1408, Naval Ocean Systems Center, San Diego, CA, Apr. 1991.
[10] E. W. Jacobs, Y. Fisher, and R. D. Boss, "Image compression: A study of the iterated transform method," Signal Processing, vol. 29, pp. 251–263, 1992.
[11] D. M. Monro, "Class of fractal transforms," Electronics Letters, vol. 29, no. 4, pp. 362–363, 1993.
[12] D. M. Monro and F. Dudbridge, "Fractal approximation of image blocks," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing ICASSP'92, vol. 3, pp. 485–488, 1992.
[13] B. Hürtgen and T. Hain, "On the convergence of fractal transforms," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing ICASSP'94, vol. 5, pp. 561–564, 1994.

RECEPTION ARRANGEMENT FOR A RADIO SIGNAL


Patent title: RECEPTION ARRANGEMENT FOR A RADIO SIGNAL
Inventor: KOERNER, HEIKO
Application number: EP03787718.0
Filing date: 2003-07-22
Publication number: EP1525729A1
Publication date: 2005-04-27
Applicant: INFINEON TECHNOLOGIES AG
Address: St.-Martin-Strasse 53, 81669 München, DE
Country: DE
Agent: Epping Hermann & Fischer

Abstract: The invention relates to a reception arrangement for a radio signal, comprising a down-conversion mixer (3) with a downstream signal-processing chain and a demodulator (6). An RSSI signal is obtained between the mixer (3) and the demodulator (6), for example in a limiter (5). This signal is high-pass filtered, averaged, and supplied to a comparator (15). From the level of the RSSI signal processed in this way, the recognition unit (8) can easily detect whether useful signals, amplitude-modulated signals or frequency-modulated signals are present, and the demodulator (6) is controlled accordingly by a control unit (9). With this principle, recognition can be carried out especially easily and precisely, and the demodulator adjusted according to whether an amplitude-modulated or a frequency-modulated signal is received.
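The recognition principle can be sketched in a few lines of signal-processing pseudocode (the first-difference filter, the threshold value and the synthetic RSSI traces are invented for illustration; the actual arrangement operates on hardware RSSI levels): an AM signal's envelope varies, so its RSSI retains high-frequency content after high-pass filtering, while an FM signal's constant envelope does not.

```python
import numpy as np

def detect_modulation(rssi, threshold=0.05):
    """Toy version of the recognition chain: high-pass filter the RSSI,
    average the magnitude, and compare against a threshold."""
    hp = np.diff(rssi)           # crude first-difference high-pass filter
    level = np.mean(np.abs(hp))  # averaging stage feeding the comparator
    return "AM" if level > threshold else "FM"

t = np.linspace(0.0, 1.0, 1000)
am_rssi = 1.0 + 0.5 * np.sin(2 * np.pi * 50 * t)  # varying envelope
fm_rssi = np.ones_like(t)                          # constant envelope
assert detect_modulation(am_rssi) == "AM"
assert detect_modulation(fm_rssi) == "FM"
```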

Homomorphic Evaluation of the AES Circuit


Homomorphic Evaluation of the AES Circuit

Craig Gentry, IBM Research
Shai Halevi, IBM Research
Nigel P. Smart, University of Bristol

February 16, 2012

Abstract

We describe a working implementation of leveled homomorphic encryption (without bootstrapping) that can evaluate the AES-128 circuit. Our current implementation takes about a week to evaluate an entire AES encryption operation, using NTL (over GMP) as our underlying software platform, and running on a large-memory machine. Using SIMD techniques, we can process close to 100 blocks in each evaluation, yielding an amortized rate of roughly 2 hours per block.

For this implementation we developed both AES-specific optimizations as well as several "generic" tools for FHE evaluation. These include (among others) a different variant of the Brakerski-Vaikuntanathan key-switching technique that does not require reducing the norm of the ciphertext vector, and a method of implementing the Brakerski-Gentry-Vaikuntanathan modulus-switching transformation on ciphertexts in CRT representation.

Keywords: AES, Fully Homomorphic Encryption, Implementation

The first and second authors are sponsored by DARPA under agreement number FA8750-11-C-0096. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. Distribution Statement "A" (Approved for Public Release, Distribution Unlimited). The third author is sponsored by DARPA and AFRL under agreement number FA8750-11-2-0079; the same disclaimers apply. He is also supported by the European Commission through the ICT Programme under Contract ICT-2007-216676 ECRYPT II and via an ERC Advanced Grant ERC-2010-AdG-267188-CRIPTO, by EPSRC via grant COED-EP/I03126X, and by a Royal Society Wolfson Merit Award. The views and conclusions
contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the European Commission or EPSRC.

Contents

1 Introduction
2 Background
  2.1 Notations and Mathematical Background
  2.2 BGV-type Cryptosystems
  2.3 Computing on Packed Ciphertexts
3 General-Purpose Optimizations
  3.1 A New Variant of Key Switching
  3.2 Modulus Switching in Evaluation Representation
  3.3 Dynamic Noise Management
  3.4 Randomized Multiplication by Constants
4 Homomorphic Evaluation of AES
  4.1 Homomorphic Evaluation of the Basic Operations
  4.2 Implementing the Permutations
  4.3 Performance Details
References
A More Details
  A.1 Plaintext Slots
  A.2 Canonical Embedding Norm
  A.3 Double CRT Representation
  A.4 Sampling From A_q
  A.5 Canonical Embedding Norm of Random Polynomials
B The Basic Scheme
  B.1 Our Moduli Chain
  B.2 Modulus Switching
  B.3 Key Switching
  B.4 Key Generation, Encryption, and Decryption
  B.5 Homomorphic Operations
C Security Analysis and Parameter Settings
  C.1 Lower-Bounding the Dimension
    C.1.1 LWE with Sparse Key
  C.2 The Modulus Size
  C.3 Putting It Together
D Further AES Implementation Methods
E Scale(c, q_t, q_{t-1}) in Double-CRT Representation

1 Introduction

In his breakthrough result [11], Gentry demonstrated that fully homomorphic encryption was theoretically possible, assuming the hardness of some problems in integer lattices. Since then many different improvements have been made, proposing new variants, improving efficiency, suggesting other hardness assumptions, etc. Some of these works were accompanied by implementations [20, 12, 7, 21, 16], but all the implementations so far were either "proofs of concept" that can compute only one basic operation at a time (at great cost), or special-purpose implementations limited to evaluating very simple functions. In this work we report on the first implementation powerful enough to support
an "interesting real-world circuit". Specifically, we implemented a variant of the leveled FHE-without-bootstrapping scheme of Brakerski, Gentry, and Vaikuntanathan [4] (BGV), with support for circuits deep enough that we can evaluate an entire AES-128 encryption operation.

Why AES? We chose to shoot for an evaluation of AES since it seems like a natural benchmark: AES is widely deployed and used extensively in security-aware applications (so it is "practically relevant" to implement it), and the AES circuit is nontrivial on the one hand, but on the other hand not astronomical. Moreover, the AES circuit has a regular (and quite "algebraic") structure, which is amenable to parallelism and optimizations. Indeed, for these same reasons AES is often used as a benchmark for implementations of protocols for secure multi-party computation (MPC), for example [19, 8, 14, 15]. Using the same yardstick to measure FHE and MPC protocols is quite natural, since these techniques target similar application domains and in some cases both techniques can be used to solve the same problem.

Beyond being a natural benchmark, homomorphic evaluation of AES decryption also has interesting applications: when data is encrypted under AES and we want to compute on that data, homomorphic AES decryption would transform the AES-encrypted data into FHE-encrypted data, and then we could perform whatever computation we wanted. (Such applications were alluded to in [16, 21].)

Our Contributions. Our implementation is based on a variant of the ring-LWE scheme of BGV [4, 6, 5], using the techniques of Smart and Vercauteren (SV) [21] and Gentry, Halevi and Smart (GHS) [13], and we introduce many new optimizations. Some of our optimizations are specific to AES; these are described in Section 4. Most of our optimizations, however, are more general-purpose and can be used for homomorphic evaluation of other circuits; these are described in Section 3.

Many of our general-purpose optimizations are aimed at reducing the number of FFTs and CRTs that we need to
perform, by reducing the number of times that we need to convert polynomials between coefficient and evaluation representations. Since the cryptosystem is defined over a polynomial ring, many of the operations involve various manipulations of integer polynomials, such as modular multiplications, additions, and Frobenius maps. Most of these operations can be performed more efficiently in evaluation representation, where a polynomial is represented by the vector of values it assumes at all the roots of the ring polynomial (for example, polynomial multiplication is just point-wise multiplication of the evaluation values). On the other hand, some operations in BGV-type cryptosystems (such as key switching and modulus switching) seem to require coefficient representation, where a polynomial is represented by listing all its coefficients.¹ Hence a "naive implementation" of FHE would need to convert the polynomials back and forth between the two representations, and these conversions turn out to be the most time-consuming part of the execution.

In our implementation we keep ciphertexts in evaluation representation at all times, converting to coefficient representation only when needed for some operation, and then converting back. We describe variants of key switching and modulus switching that can be implemented while keeping almost all the polynomials in evaluation representation. Our key-switching variant has another advantage, in that it significantly reduces the size of the key-switching matrices in the public key. This is particularly important since the main limiting factor for evaluating deep circuits turns out to be the ability to keep the key-switching matrices in memory. Other optimizations that we present are meant to reduce the number of modulus-switching and key-switching operations that we need

¹ The need for coefficient representation ultimately stems from the fact that the noise in the ciphertexts is small in coefficient representation but not in evaluation representation.
to do. This is done by tweaking some operations (such as multiplication by a constant) to get a slower noise increase, by "batching" some operations before applying key switching, and by attaching to each ciphertext an estimate of its "noisiness", in order to support better noise bookkeeping.

Our Implementation. Our implementation was based on the NTL C++ library running over GMP. We utilized a machine with Intel Xeon CPUs running at 2.0 GHz with 18 MB cache and, most importantly, 256 GB of RAM.² Memory was our main limiting factor in the implementation. With this machine it took us just under eight days to compute a single-block AES encryption using an implementation choice which minimizes the amount of memory required; this is roughly two orders of magnitude faster than what could be done with the Gentry-Halevi implementation [12]. The computation was performed on ciphertexts that could hold 1512 plaintext slots each, where each slot holds an element of F_{2^8}. This means that we can compute ⌊1512/16⌋ = 94 AES operations in parallel, which gives an amortized time per block of roughly two hours.

We note that there are a multitude of optimizations that one can perform on our basic implementation.
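The slot-packing and amortization arithmetic quoted above can be checked directly (the eight-day wall-clock figure is taken from the text):

```python
# Each ciphertext holds 1512 plaintext slots over F_{2^8}; one AES state
# occupies 16 slots (bytes), so floor(1512 / 16) blocks fit in parallel.
slots, state_bytes = 1512, 16
blocks_in_parallel = slots // state_bytes
assert blocks_in_parallel == 94

wall_clock_hours = 8 * 24                            # "just under eight days"
amortized_hours = wall_clock_hours / blocks_in_parallel
assert 1.9 < amortized_hours < 2.1                   # roughly two hours per block
```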
Most importantly, we believe that by using the "bootstrapping as optimization" technique from BGV [4] we can speed up the AES performance by an additional order of magnitude. Also, there are great gains to be had by making better use of parallelism: unfortunately, the NTL library (which serves as our underlying software platform) is not thread-safe, which severely limits our ability to utilize the multi-core functionality of modern processors (our test machine has 24 cores). We expect that by utilizing many threads we can speed up some of our (higher-memory) AES variants by as much as a 16x factor, just by letting each thread compute a different S-box lookup.

Organization. In Section 2 we review the main features of BGV-type cryptosystems [5, 4], and briefly survey the techniques for homomorphic computation on packed ciphertexts from SV and GHS [21, 13]. Then in Section 3 we describe our "general-purpose" optimizations on a high level, with additional details provided in Appendices A and B. A brief overview of AES and a high-level description (and performance numbers) of one of our AES-specific implementations is provided in Section 4, with details of alternative implementations provided in Appendix D.

2 Background

2.1 Notations and Mathematical Background

For an integer q we identify the ring Z/qZ with the interval (−q/2, q/2] ∩ Z, and we use [z]_q to denote the reduction of the integer z modulo q into that interval. Our implementation utilizes polynomial rings defined by cyclotomic polynomials, A = Z[X]/Φ_m(X). The ring A is the ring of integers of the m-th cyclotomic number field Q(ζ_m). We let A_q := A/qA = Z[X]/(Φ_m(X), q) for the (possibly composite) integer q, and we identify A_q with the set of integer polynomials of degree up to φ(m)−1 reduced modulo q.

² This machine was BlueCrystal Phase 2; the authors would like to thank the University of Bristol's Advanced Computing Research Centre (https:///) for access to this facility.

Coefficient vs. Evaluation Representation. Let m, q be two integers such that Z/qZ contains a
primitive m-th root of unity, and denote one such primitive m-th root of unity by ζ ∈ Z/qZ. Recall that the m-th cyclotomic polynomial splits into linear terms modulo q:

  Φ_m(X) = ∏_{i ∈ (Z/mZ)*} (X − ζ^i)  (mod q).

For an element a ∈ A_q, we consider two ways of representing it. Viewing a as a degree-(φ(m)−1) polynomial, a(X) = Σ_{i<φ(m)} a_i X^i, we can just list all the coefficients in order, a = ⟨a_0, a_1, ..., a_{φ(m)−1}⟩ ∈ (Z/qZ)^{φ(m)}. We call this the coefficient representation of a. For the other representation we consider the values that the polynomial a(X) assumes at all primitive m-th roots of unity modulo q, b_i = a(ζ^i) mod q for i ∈ (Z/mZ)*. The b_i's in order also yield a vector b ∈ (Z/qZ)^{φ(m)}, which we call the evaluation representation of a. Clearly these two representations are related via b = V_m · a, where V_m is the Vandermonde matrix over the primitive m-th roots of unity modulo q. We remark that for all i we have the equality (a mod (X − ζ^i)) = a(ζ^i) = b_i; hence the evaluation representation of a is just a polynomial Chinese-remaindering representation.

In both evaluation and coefficient representations, an element a ∈ A_q is represented by a φ(m)-vector of integers in Z/qZ. If q is composite then each of these integers can itself be represented either using the standard binary encoding of integers or using Chinese remaindering relative to the factors of q. We usually use the standard binary encoding for the coefficient representation and Chinese remaindering for the evaluation representation. (Hence the latter representation is really a double-CRT representation, relative to both the polynomial factors of Φ_m(X) and the integer factors of q.)

2.2 BGV-type Cryptosystems

Our implementation uses a variant of the BGV cryptosystem due to Gentry, Halevi and Smart, specifically the one described in [13, Appendix D] (in the full version). In this cryptosystem both ciphertexts and secret keys are vectors over the polynomial ring A, and the native plaintext space is the
space of binary polynomials A_2. (More generally it could be A_p for some fixed p ≥ 2, but in our case we will always use A_2.)

At any point during the homomorphic evaluation there is some "current integer modulus q" and "current secret key s", which change from time to time. A ciphertext c is decrypted using the current secret key s by taking the inner product over A_q (with q the current modulus) and then reducing the result modulo 2 in coefficient representation. Namely, the decryption formula is

  a ← [ [⟨c, s⟩ mod Φ_m(X)]_q ]_2.    (1)

The polynomial [⟨c, s⟩ mod Φ_m(X)]_q is called the "noise" in the ciphertext c. Informally, c is a valid ciphertext with respect to secret key s and modulus q if this noise has "sufficiently small norm" relative to q. The meaning of "sufficiently small" is whatever is needed to ensure that the noise does not wrap around q when performing homomorphic operations; in our implementation we keep the norm of the noise always below some pre-set bound (which is determined in Appendix C.2).

The specific norm that we use to evaluate the magnitude of the noise is the "canonical embedding norm reduced mod q", as described in [13, Appendix D] (in the full version). This is useful for getting smaller parameters, but for the purpose of presentation the reader can think of it as the Euclidean norm of the noise in coefficient representation. More details are given in the appendices. We refer to the norm of the noise as the noise magnitude.

The central feature of BGV-type cryptosystems is that the current secret key and modulus evolve as we apply operations to ciphertexts. We apply five different operations to ciphertexts during homomorphic evaluation. Three of them (addition, multiplication, and automorphism) are "semantic operations" that we use to evolve the plaintext data encrypted under those ciphertexts. The other two (key switching and modulus switching) are used for "maintenance": these operations do not change the plaintext at all, they only change
the current key or modulus (respectively),and they are mainly used to control the complexity of the evaluation.Below we briefly describe each of these five operations on a high level.For the sake of self-containment,we also describe key generation and encryption in Appendix B.More detailed description can be found in [13,Appendix D].Addition.Homomorphic addition of two ciphertext vectors with respect to the same secret key and mod-ulus q is done just by adding the vectors over A q .If the two arguments were encrypting the plaintext polynomials a 1,a 2∈A 2then the sum will be an encryption of a 1+a 2∈A 2.This operation has no effect on the current modulus or key,and the norm of the noise is at most the sum of norms from the noise in the two arguments.Multiplication.Homomorphic multiplication is done via tensor product over A q .In principle,if the two arguments have dimension n over A q then the product ciphertext has dimension n 2,each entry in the output computed as the product of one entry from the first argument and one entry from the second.3This operation does not change the current modulus,but it changes the current key:If the two input ciphertexts are valid with respect to the dimension-n secret key vector s ,encrypting the plaintext polynomi-als a 1,a 2∈A 2,then the output is valid with respect to the dimension-n 2secret key s which is the tensor product of s with itself,and it encrypt the polynomial a 1·a 2∈A 2.The norm of the noise in the product ciphertext can be bounded in terms of the product of norms of the noise in the two arguments.The specific bound depends on the norm in use,for our choice of norm function the norm of the product is no larger than the product of the norms of the two arguments.Key Switching.The public key of BGV-type cryptosystems includes additional components to enable converting a valid ciphertext with respect to one key into a valid ciphertext encrypting the same plaintext with respect to another key.For example,this is used to 
convert the product ciphertext which is valid with respect to a high-dimension key back to a ciphertext with respect to the original low-dimension key.To allow conversion from dimension-n key s to dimension-n key s (both with respect to the same modulus q ),we include in the public key a matrix W =W [s →s ]over A q ,where the i ’th column of W is roughly an encryption of the i ’th entry of s with respect to s (and the current modulus).Then given a valid ciphertext c with respect to s ,we roughly compute c =W ·c to get a valid ciphertext with respect to s .In some more detail,the BGV key switching transformation first ensures that the norm of the ciphertext c itself is sufficiently low with respect to q .In [4]this was done by working with the binary encoding of c ,and one of our main optimization in this work is a different method for achieving the same goal (cf.Section 3.1).Then,if the i ’th entry in s is s i ∈A (with norm smaller than q ),then the i ’th column of W [s →s ]is an n -vector w i such that [ w i ,s mod Φm (X )]q =2e i +s i for a low-norm polynomial e i ∈A .Denoting e =(e 1,...,e n ),this means that we have s W =s +2e over A q .For any ciphertext vector c ,setting c =W ·c ∈A q we get the equation[ c ,s mod Φm (X )]q =[s W c mod Φm (X )]q =[ c ,s +2 c ,e mod Φm (X )]qSince c ,e ,and [ c ,s mod Φm (X )]q all have low norm relative to q ,then the addition on the right-hand side does not cause a wrap around q ,hence we get [[ c ,s mod Φm (X )]q ]2=[[ c ,s mod Φm (X )]q ]2,as needed.The key-switching operation changes the current secret key from s to s ,and does not change the current modulus.The norm of the noise is increased by at most an additive factor of 2 c ,e .3It was shown in [6]that over polynomial rings this operation can be implemented while increasing the dimension only to 2n −1rather than to n 2.4Modulus Switching.The modulus switching operation is intended to reduce the norm of the noise,to compensate for the noise increase that results from 
all the other operations.To convert a ciphertext c with respect to secret key s and modulus q into a ciphertext c encrypting the same thing with respect to the same secret key but modulus q ,we roughly just scale c by a factor q /q (thus getting a fractional ciphertext),then round appropriately to get back an integer ciphertext.Specifically c is a ciphertext vector satisfying(a)c =c (mod 2),and (b)the “rounding error term”τdef =c −(q /q )c has low norm.Converting cto c is easy in coefficient representation,and one of our optimizations is a method for doing the same in evaluation representation (cf.Section 3.2)This operation leaves the current key s unchanged,changes the current modulus from q to q ,and the norm of the noise is changed as n ≤(q /q ) n + τ·s .Note that if the key s has low norm and q is sufficiently smaller than q ,then the noise magnitude decreases by this operation.A BGV-type cryptosystem has a chain of moduli,q 0<q 1···<q L −1,where fresh ciphertexts are with respect to the largest modulus q L −1.During homomorphic evaluation every time the (estimated)noise grows too large we apply modulus switching from q i to q i −1in order to decrease it back.Eventually we get ciphertexts with respect to the smallest modulus q 0,and we cannot compute on them anymore (except by using bootstrapping).Automorphisms.In addition to adding and multiplying polynomials,another useful operation is convert-ing the polynomial a (X )∈A to a (i )(X )def =a (X i )mod Φm (X ).Denoting by κi the transformationκi :a →a (i ),it is a standard fact that the set of transformations {κi :i ∈(Z /m Z )∗}forms a group under composition (which is the Galois group G al (Q (ζm )/Q )),and this group is isomorphic to (Z /m Z )∗.In [4,13]it was shown that applying the transformations κi to the plaintext polynomials is very useful,some more examples of its use can be found in our Section 4.Denoting by c (i ),s (i )the vector obtained by applying κi to each entry in c ,s ,respectively,it was 
shown in [4,13]that if s is a valid ciphertext encrypting a with respect to key s and modulus q ,then c (i )is a valid ciphertext encrypting a (i )with respect to key s (i )and the same modulus q .Moreover the norm of noise remains the same under this operation.We remark that we can apply key-switching to c (i )in order to get an encryption of a (i )with respect to the original key s .2.3Computing on Packed CiphertextsSmart and Vercauteren observed [20,21]that the plaintext space A 2can be viewed as a vector of “plaintext slots”,by an application the polynomial Chinese Remainder Theorem.Specifically,if the ring polynomial Φm (X )factors modulo 2into a product of irreducible factors Φm (X )= −1j =0F j (X )(mod 2),then a plaintext polynomial a (X )∈A 2can be viewed as encoding different small polynomials,a j =a mod F j .Just like for integer Chinese Remaindering,addition and multiplication in A 2correspond to element-wise addition and multiplication of the vectors of slots.The effect of the automorphisms is a little more involved.When i is a power of two then the transforma-tions κi :a →a (i )is just applied to each slot separately.When i is not a power of two the transformation κi has the effect of roughly shifting the values between the different slots.For example,for some parameters we could get a cyclic shift of the vector of slots:If a encodes the vector (a 0,a 1,...,a −1),then κi (a )(for some i )could encode the vector (a −1,a 0,...,a −2).This was used in [13]to devise efficient procedures for applying arbitrary permutations to the plaintext slots.We note that the values in the plaintext slots are not just bits,rather they are polynomials modulo the irreducible F j ’s,so they can be used to represents elements in extension fields GF (2d ).In particular,in some of our AES implementations we used the plaintext slots to hold elements of GF (28),and encrypt one5byte of the AES state in each slot.Then we can use an adaption of the techniques from [13]to permute the 
slots when performing the AES row-shift and column-mix.3General-Purpose OptimizationsBelow we summarize our optimizations that are not tied directly to the AES circuit and can be used also in homomorphic evaluation of other circuits.Underlying many of these optimizations is our choice of keeping ciphertext and key-switching matrices in evaluation (double-CRT)representation.Our chain of moduli is defined via a set of primes of roughly the same size,p 0,...,p L −1,all chosen such that Z /p i Z has a m ’th roots of unity.(In other words,m |p i −1for all i .)For i =0,...,L −1we then define our i ’th modulus as q i = i j =0p i .The primes p 0and p L −1are special (p 0is chosen to ensure decryption works,and p L −1is chosen to control noise immediately after encryption),however all other primes p i are of size 217≤p i ≤220if L <100,see Appendix C.In the t -th level of the scheme we have ciphertexts consisting of elements in A q t (i.e.,polynomialsmodulo (Φm (X ),q t )).We represent an element c ∈A q t by a φ(m )×(t +1)“matrix”of its evaluationsat the primitive m -th roots of unity modulo the primes p 0,...,p t .Computing this representation from the coefficient representation of c involves reducing c modulo the p i ’s and then t +1invocations of the FFT algorithm,modulo each of the p i (picking only the FFT coefficients corresponding to (Z /m Z )∗).To convert back to coefficient representation we invoke the inverse FFT algorithm t +1times,each time padding the φ(m )-vector of evaluation point with m −φ(m )zeros (for the evaluations at the non-primitive roots of unity).This yields the coefficients of t +1polynomials modulo (X m −1,p i )for i =0,...,t ,we then reduce each of these polynomials modulo (Φm (X ),p i )and apply Chinese Remainder interpolation.We stress that we try to perform these transformations as rarely as we can.3.1A New Variant of Key SwitchingAs described in Section 2,the key-switching transformation introduces an additive factor of 2 c ,e in the 
noise,where c is the input ciphertext and e is the noise component in the key-switching matrix.To keep the noise magnitude below the modulus q ,it seems that we need to ensure that the ciphertext c itself has low norm.In BGV [4]this was done by representing c as a fixed linear combination of small vectors,i.e.c = i 2i c i with c i the vector of i ’th bits in c .Considering the high-dimension ciphertextc ∗=(c 0|c 1|c 2|···)and secret key s ∗=(s |2s |4s |···),we note that we have c ∗,s ∗ = c ,s ,and c ∗has low norm (since it consists of 0-1polynomials).BGV therefore included in the public key the matrix W =W [s ∗→s ](rather than W [s →s ]),and had the key-switching transformation computes c ∗from c and sets c =W ·c ∗.When implementing key-switching,there are two drawbacks to the above approach.First,this increases the dimension (and hence the size)of the key switching matrix.This drawback is fatal when evaluating deep circuits,since having enough memory to keep the key-switching matrices turns out to be the limiting factor in our ability to evaluate these deep circuits.Another drawback is it seems that this key-switching procedure requires that we first convert c to coefficient representation in order to compute the c i ’s,then convert each of the c i ’s back to evaluation representation before multiplying by the key-switching matrix.In level t of the circuit,this seem to require Ω(t log q t )FFTs.In this work we propose a different variant:Rather than manipulating c to decrease its norm,we instead temporarily increase the modulus q .To that end we recall that for a valid ciphertext c ,encrypting plaintext a with respect to s and q ,we have the equality c ,s =2e +a over A q ,for a low-norm polynomial e .6This equality,we note,implies that for every odd integer p we have the equality c ,p s =2e +a ,holding over A pq ,for the “low-norm”polynomial e (namely e =p ·e +p −12a ).Clearly,when considered relativeto secret key p s and modulus pq ,the noise in c is p times 
larger than it was relative to s and q .However,since the modulus is also p times larger,we maintain that the noise has norm sufficiently smaller than the modulus.In other words,c is still a valid ciphertext that encrypts the same plaintext a with respect to secret key p s and modulus pq .By taking p large enough,we can ensure that the norm of c (which is independent of p )is sufficiently small relative to the modulus pq .We therefore include in the public key a matrix W =W [p s →s ]modulo pq for a large enough odd integer p .(Specifically we need p ≈q √m .)Given a ciphertext c ,valid with respect to s and q ,we apply the key-switching transformation simply by setting c =W ·c over A pq .The additive noise term c ,e that we get is now small enough relative to our large modulus pq ,thus the resulting ciphertext c is valid with respect to s and pq .We can now switch the modulus back to q (using our modulus switching routine),hence getting a valid ciphertext with respect to s and q .We note that even though we no longer break c into its binary encoding,it seems that we still need to recover it in coefficient representation in order to compute the evaluations of c mod p .However,since we do not increase the dimension of the ciphertext vector,this procedure requires only O (t )FFTs in level t (vs.O (t log q t )=O (t 2)for the original BGV variant).Also,the size of the key-switching matrix is reduced by roughly the same factor of log q t .Our new variant comes with a price tag,however:We use key-switching matrices relative to a larger modulus,but still need the noise term in this matrix to be small.This means that the LWE problem under-lying this key-switching matrix has larger ratio of modulus/noise,implying that we need a larger dimension to get the same level of security than with the original BGV variant.In fact,since our modulus is more than squared (from q to pq with p >q ),the dimension is increased by more than a factor of two.This translates to more than doubling 
of the key-switching matrix,partly negating the size and running time advantage that we get from this variant.We comment that a hybrid of the two approaches could also be used:we can decrease the norm of c only somewhat by breaking it into digits (as opposed to binary bits as in [4]),and then increase the modulus somewhat until it is large enough relative to the smaller norm of c .We speculate that the optimal setting in terms of runtime is found around p ≈√q ,but so far did not try to explore this tradeoff.3.2Modulus Switching in Evaluation RepresentationGiven an element c ∈A q t in evaluation (double-CRT)representation relative to q t = t j =0p j ,we wantto modulus-switch to q t −1–i.e.,scale down by a factor of p t ;we call this operation Scale (c,q t ,q t −1)The output should be c ∈A ,represented via the same double-CRT format (with respect to p 0,...,p t −1),such that (a)c ≡c (mod 2),and (b)the “rounding error term”τ=c −(c/p t )has a very low norm.As p t is odd,we can equivalently require that the element c †def=p t ·c satisfy(i)c †is divisible by p t ,(ii)c †≡c (mod 2),and(iii)c †−c (which is equal to p t ·τ)has low norm.Rather than computing c directly,we will first compute c †and then set c ←c †/p t .Observe that once we compute c †in double-CRT format,it is easy to output also c in double-CRT format:given the evaluations for c †modulo p j (j <t ),simply multiply them by p −1t mod p j .The algorithm to output c †in double-CRT format is as follows:7。
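The double-CRT algorithm itself is cut off here, but the effect of the scaling step is easy to illustrate in coefficient representation. The sketch below (our own illustration, not the paper's exact procedure) rounds each coefficient of c to the nearest integer of the correct parity, so that the output c' satisfies both requirements (a) c' ≡ c (mod 2) and (b) each rounding-error coefficient has magnitude at most 1:

```python
# Illustrative sketch (not the paper's double-CRT algorithm): modulus switching
# in coefficient representation. Given a coefficient vector c and an odd p_t,
# produce c2 with c2 = c (mod 2) componentwise, where each entry of c2 is the
# closest integer of that parity to c/p_t, so tau = c2 - c/p_t stays small.

def scale_coeffs(c, pt):
    assert pt % 2 == 1, "p_t must be odd"
    out = []
    for x in c:
        base = round(x / pt)          # nearest integer to x / p_t
        if (base - x) % 2 != 0:       # fix parity: we need c2 = c (mod 2)
            # step to the neighbouring integer on the side of x / p_t
            base += 1 if x / pt > base else -1
        out.append(base)
    return out

c  = [1000003, -777777, 12, 2**20 + 1]
pt = 257                              # small odd modulus, for illustration only
c2 = scale_coeffs(c, pt)

for x, y in zip(c, c2):
    assert (y - x) % 2 == 0           # (a) parity preserved mod 2
    assert abs(y - x / pt) <= 1.0     # (b) rounding error |tau_i| <= 1
```

Because the nearest integer of a prescribed parity is always within distance 1 of the true quotient, the rounding error term stays bounded independently of the size of the coefficients, which is exactly why the noise shrinks by the factor p_t.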

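The "plaintext slots" of Section 2.3 can also be made concrete with a toy example of our own (not code from the paper). For m = 15 we have φ(15) = 8, and Φ_15(X) mod 2 factors into two irreducible quartics, so A_2 has two slots, each holding an element of GF(2^4):

```python
# Toy illustration of plaintext slots for m = 15 (our own example). A GF(2)
# polynomial is stored as a Python int whose bit i is the coefficient of X^i.
# Phi_15(X) mod 2 = X^8+X^7+X^5+X^4+X^3+X+1 = F0 * F1 with the irreducible
# quartics F0 = X^4+X+1 and F1 = X^4+X^3+1, giving two slots.

PHI15 = 0b110111011    # X^8+X^7+X^5+X^4+X^3+X+1
F0, F1 = 0b10011, 0b11001

def gf2_mod(a, mod):
    """Remainder of GF(2) polynomial division of a by mod."""
    dm = mod.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= mod << (a.bit_length() - 1 - dm)
    return a

def gf2_mulmod(a, b, mod):
    """Carry-less (GF(2)) polynomial product of a and b, reduced mod `mod`."""
    res = 0
    while b:
        if b & 1:
            res ^= a
        a <<= 1
        b >>= 1
    return gf2_mod(res, mod)

def slots(x):
    """CRT decomposition of x in A_2: (x mod F0, x mod F1)."""
    return (gf2_mod(x, F0), gf2_mod(x, F1))

a, b = 0b10110101, 0b01100111            # two plaintexts of degree < 8
sa, sb = slots(a), slots(b)
prod = gf2_mulmod(a, b, PHI15)           # multiplication in A_2 ...
assert slots(prod) == (gf2_mulmod(sa[0], sb[0], F0),
                       gf2_mulmod(sa[1], sb[1], F1))   # ... acts slot-wise
assert slots(a ^ b) == (sa[0] ^ sb[0], sa[1] ^ sb[1])  # addition too
```

The assertions check exactly the property the text states: ring addition and multiplication modulo Φ_m(X) correspond to element-wise operations on the vector of slots.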
A Geometric Proof of Confluence by Decreasing Diagrams


SEN-R0007 March 31, 2000
Report SEN-R0007 ISSN 1386-369X CWI P.O. Box 94079 1090 GB Amsterdam The Netherlands
ABSTRACT The criterion for confluence using decreasing diagrams is a generalization of several well-known confluence criteria in abstract rewriting, such as the strong confluence lemma. We give a new proof of the decreasing diagram theorem based on a geometric study of infinite reduction diagrams, arising from unsuccessful attempts to obtain a confluent diagram by tiling with elementary diagrams. 2000 Mathematics Subject Classification: 68Q42, 52C20. 1999 ACM Computing Classification System: F.1.1, F.4.2. Keywords and Phrases: abstract rewriting, tiling, confluence. Note: Work of the first and second author carried out under project SEN2.

Reconstruction of the Large Scale Structure


4 Constrained realizations
Standard CDM realizations constrained by ZCAT and the IRAS (1.9 Jy) density field are presented here. Fig. 1 shows plots of the overdensity and velocity fields superimposed. The upper two plots correspond to the supergalactic plane SGZ = 0, and the lower ones show the SGY = 0 plane, which coincides with the Galactic plane. Thus, the SGY = 0 plots yield the reconstructed perturbation field at the Zone of Avoidance. The two left plots are constrained by ZCAT, and the two right ones are based on IRAS sampled at the same points as ZCAT. The overall structure produced by ZCAT and IRAS is generally similar; however, there are some marked differences. First, the amplitude of the ZCAT field is higher than that of IRAS by roughly a factor of 2. The two reconstructions recover the Perseus-Pisces (PP) supercluster and find a similar structure, yet they differ with respect to the Centaurus-Hydra complex. The ZCAT map places the peak of that structure at a distance of 35 h⁻¹ Mpc, compared with the 45 h⁻¹ Mpc found from IRAS, which coincides with the Great Attractor (GA) solution [12] (see ref. [7] for a detailed discussion). A more detailed graphical analysis of the structure at the Zone of Avoidance is presented in Figs. 2 and 3, where the overdensity field is plotted in Galactic spherical projection at distances of R = 40 h⁻¹ Mpc (Fig. 2) and R = 60 h⁻¹ Mpc (Fig. 3). Here we focus on the b ≈ 0 region. The R = 40 h⁻¹ Mpc IRAS and ZCAT maps are similar and reproduce the PP and Hydra-Centaurus (or GA) complexes. New features are found at R = 60 h⁻¹ Mpc. In the ZCAT map the GA hardly extends beyond this radius, and only one extension is found, which peaks at l = 15° and b = 5° at that distance. The structure there is statistically robust and is found in all realizations we have made. The proximity of that direction to the Galactic center makes optical verification of that structure very difficult.
The IRAS R = 60 h⁻¹ Mpc reconstructed map shows a much richer structure, as the GA extends beyond that distance at l ≈ 325°, b = 0°. A new peak is found here at l = 285°, b = 5°, and inspection of Fig. 1 shows it comes from an extension of the GA complex. Interestingly, a recent optical redshift survey of that region has found an indication of an excess of galaxies that coincides with the peak reported here (Kraan-Korteweg and Woudt [9]). Again this peak is statistically robust and has been reproduced in all realizations made by us.

A Spline-Curve Reconstruction Machining Algorithm for Continuous Micro Line Segments (连续微段样条曲线重构加工算法)


A Spline-Curve Reconstruction Machining Algorithm for Continuous Micro Line Segments. Tao Jia'an; Chen Sheng; Huang Yuliang; Shi Qun. [Abstract] To address the loss of machining efficiency caused by large fluctuations of the composite velocity when machining complex contours in continuous micro-line-segment mode, a new spline-curve reconstruction algorithm suited to micro-segment machining is proposed. The method comprises a spline curve with a fast recursive formulation, together with velocity planning and fast recursive interpolation machining based on that curve.

Experiments show that, while keeping acceleration continuous, the algorithm reduces frequent acceleration/deceleration through spline reconstruction and velocity planning, thereby improving machining efficiency; the fast recursion speeds up the interpolation computation, and the interpolation points pass exactly through the micro-segment knots, guaranteeing machining accuracy and improving the performance of the CNC system.

[English abstract] In complex contour machining with the continuous micro-line mode, machining efficiency is reduced by large fluctuations of the composite velocity. To solve this problem, a curve-reconstruction micro-line machining algorithm is presented. The algorithm includes a new spline with a rapid recursive formula, and a method of velocity planning and recursive interpolation machining based on this spline. The experimental results show that the algorithm reduces frequent acceleration/deceleration and improves machining efficiency through spline reconstruction and velocity planning, on the condition that acceleration remains continuous during machining. Meanwhile, the computational speed of interpolation is improved by the fast recursion, and the interpolation points pass precisely through the micro-line knots, so machining accuracy is ensured and the performance of the CNC system is improved.
[Journal] Computer Integrated Manufacturing Systems (计算机集成制造系统)
[Year (volume), issue] 2012, 18(6)
[Pages] 5 (pp. 1195-1199)
[Keywords] spline curve; micro segment; acceleration/deceleration; composite velocity; interpolation; algorithm
[Authors] Tao Jia'an; Chen Sheng; Huang Yuliang; Shi Qun
[Affiliations] School of Communication and Information Engineering, Shanghai University, Shanghai 200072, China; School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200072, China
[Language] Chinese
[CLC classification] TG659
0 Introduction
In computer-aided manufacturing (CAM) systems for NC machining of complex surfaces, curves are approximated by straight lines, generating large numbers of tiny line-segment G-code blocks, which the CNC system then interprets and executes to drive the axes.
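The introduction describes CAM systems approximating a curve by many tiny straight segments emitted as G-code. A minimal sketch of that idea (names, the example curve, and the tolerance are our own, not taken from the paper) subdivides a parametric curve until the chord error of each segment is within tolerance, then emits G01 micro-line blocks:

```python
# Illustrative sketch (assumptions: the curve, tolerance and helper names are
# ours): approximate a parametric curve by micro line segments whose chord
# error stays below a tolerance, then emit the segments as G01 blocks --
# the kind of micro-line G-code the introduction says CAM systems generate.
import math

def curve(t):
    """Example curve: a quarter of a unit circle."""
    return (math.cos(t), math.sin(t))

def chord_error(t0, t1):
    """Deviation of the curve midpoint from the chord midpoint (an estimate)."""
    x0, y0 = curve(t0); x1, y1 = curve(t1)
    xm, ym = curve((t0 + t1) / 2)
    return math.hypot(xm - (x0 + x1) / 2, ym - (y0 + y1) / 2)

def micro_lines(t0, t1, tol=1e-4):
    """Recursively split [t0, t1] until each chord is within `tol`."""
    if chord_error(t0, t1) <= tol:
        return [curve(t1)]
    tm = (t0 + t1) / 2
    return micro_lines(t0, tm, tol) + micro_lines(tm, t1, tol)

pts = [curve(0.0)] + micro_lines(0.0, math.pi / 2)
gcode = [f"G01 X{x:.4f} Y{y:.4f}" for x, y in pts[1:]]
print(len(gcode), "micro-line blocks; first:", gcode[0])
```

A tighter tolerance produces more and shorter segments, which is precisely the regime where the paper's spline reconstruction and velocity planning pay off: long runs of short G01 blocks otherwise force frequent acceleration and deceleration.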

Agronomy English 3 Review Questions (农学专业英语3复习题)


Accrue: to accumulate or arise naturally. Deplete: to use up, exhaust. Deterioration: worsening, degradation. Duration: the length of time something lasts. Elimination: removal. Fragile: easily broken, brittle. Hammering: pounding, beating. Humid: moist, damp. Impetus: a driving force, stimulus. Irrefutable: impossible to refute or disprove. No-tillage: farming without plowing. Salination: the accumulation of salts (in soil). Mulch: a protective covering spread on the soil. Stubble-mulch: a mulch of crop stubble or residue. Amenity: a pleasant feature or facility. Deceleration: slowing down. Elasticity: flexibility, the ability to stretch and recover. Incremental: increasing gradually. Infrastructure: basic facilities and systems. Ingredient: a component or constituent. Pronounced: marked, distinct. Scenario: an outline or projected sequence of events. Strident: harsh, sharp. Subsector: a subdivision of a sector. Substantially: considerably; in essence. Welfare: well-being. Acquisition: the act of acquiring; a thing acquired. Cytoplasmic: of the cytoplasm. Cytoplasmic male sterility. Genotype. Hybrid: produced by crossing. Hybridization: cross-breeding. Pollinate: to transfer pollen.
Since the 1940s, and particularly during the last thirty years, maize and wheat have become increasingly important in developing countries. In the early 1960s, the developing world accounted for just over a third of global maize production and less than 30% of global wheat output. By the early 1990s, developing countries accounted for more than 45% of total world production of both cereals. Maize production in developing countries grew at an annual rate of 3.6% from 1961 to 1995; wheat production grew at an even faster rate.

Inflation Targets and Debt Accumulation in a Monetary Union


In‡ation T argets and Debt Accumulation in aMonetary Union¤Roel BeetsmaUniversity of Amsterdam and CEPR yns BovenbergTilburg University and CEPR zOctober1999AbstractThis paper explores the interaction between centralized monetary policyand decentralized…scal policy in a monetary union.Discretionary mone-tary policy su¤ers from a failure to commit.Moreover,decentralized…scalpolicymakers impose externalities on each other through the in‡uence oftheir debt policies on the common monetary policy.These imperfectionscan be alleviated by adopting state-contingent in‡ation targets(to com-bat the monetary policy commitment problem)and shock-contingent debttargets(to internalize the externalities due to decentralized…scal policy).Keywords:discretionary monetary policy,decentralized…scal policy, monetary union,in‡ation targets,debt targets.JEL Codes:E52,E58,E61,E62.¤We thank David Vestin and the participants of the EPRU Workshop“Structural Change and European Economic Integration”for helpful comments on an earlier version of this paper. 
The usual disclaimer applies.y Mailing address Roel Beetsma:Department of Economics,University of Amsterdam, Roetersstraat11,1018WB Amsterdam,The Netherlands(phone:+31.20.5255280;fax: +31.20.5254254;e-mail:Beetsma@fee.uva.nl).z Mailing address Lans Bovenberg:Department of Economics,Tilburg University,P.O.Box 90153,5000LE Tilburg,The Netherlands(phone:+31.13.4662912;fax:+31.13.4663042;e-mail: A.L.Bovenberg@kub.nl).1.IntroductionAlthough the Maatricht Treaty has laid the institutional foundations for European Monetary Union(EMU),how these institutions can best be operated in practice remains to be seen in the coming years.For example,the European Central Bank (ECB)has announced a two-tier monetary policy strategy based on a reference value for money growth and an indicator that is based on a number of other measures,such as output gaps,in‡ation expectations,etcetera(see European Central Bank,1999).Over time the ECB may well shift to implicit targeting of in‡ation.Indeed,a number of economists has argued(e.g,see Svensson,1998) that also the Bundesbank has pursued such a strategy.Furthermore,how the Excessive De…cit Procedure and the Stability and Growth Pact(see Beetsma and Uhlig,1999)will work in practice is not yet clear.This paper deals with the interaction between in‡ation targets and constraints on decentralized…scal policy in a monetary union.To do so,we extend our earlier work on the interaction between a common monetary policy and decentralized …scal policies in a monetary union.In particular,in Beetsma and Bovenberg (1999)we showed that monetary uni…cation raises debt accumulation,because in a monetary union countries only partly internalize the e¤ects of their debt policies on future monetary policy.This additional debt accumulation is actually welfare enhancing(if the governments share societal preferences).We showed that,in the absence of shocks,making the central bank su¢ciently conservative(in the sense of Rogo¤,1985,that is by imposing on the central 
bank a loss function that attaches a su¢ciently high weight to price stability)can lead the economy to the second-best equilibrium.However,this is no longer the case in the presence of common shocks,as the economies are confronted with a trade o¤between credibility and‡exibility.While Beetsma and Bovenberg(1999)emphasized the e¤ects of lack of com-mitment in monetary policy,this paper introduces another complication in the form of strategic interactions between decentralized…scal policymakers who have di¤erent views on the stance of the common monetary policy.1These di¤erent views originate in di¤erences among the economies in the monetary union.In particular,we allow for systematic di¤erences in labour and product market dis-tortions,public spending requirements and initial public debt levels.We also allow for idiosyncratic stochastic shocks hitting the countries.In combination with the decentralization of…scal policy these di¤erences lead to con‡icts about the preferred future stance of the common monetary policy.In particular,coun-tries that su¤er from severe distortions in labor and commodity markets,feature 1Our earlier model incorporated another potential distortion:the possibility that govern-ments discount the future at a higher rate than their societies do.We ignore this distortion throughout the current paper.2higher public spending or initial debt levels or are hit by worse shocks prefer a laxer future stance of monetary policy.These con‡icts about monetary policy induce individual governments to employ their debt policy strategically,so as to induce the union’s central bank to move monetary policy into the direction they prefer.This strategic behavior imposes negative externalities on other countries, thereby producing welfare losses.In contrast to Beetsma and Bovenberg(1999),we do not address the distor-tions in the model by making the common central bank su¢ciently conservative. 
Instead,we focus on state-contingent in‡ation targets which,in contrast to a con-servative central bank,can lead the economy to the second-best equilibrium if countries are identical.Hence,as stressed by Svensson(1997)in a model with-out…scal policy and debt accumulation,in‡ation targets eliminate the standard credibility-‡exibility trade-o¤.If…scal policy is decentralized to heterogeneous countries,however,the optimal state-contingent in‡ation targets need to be com-plemented by(country-speci…c)debt targets to establish the second best.In this way,in‡ation targets address the lack of commitment in monetary policy,while debt targets eliminate strategic interaction among heterogeneous governments with di¤erent views about the common monetary policy stance.The remainder of this paper is structured as follows.Section2presents the model.Section3discusses the second-best equilibrium in which not only monetary but also…scal policy is centralized and in which monetary policy is conducted under commitment.This is the second-best optimum that can be attained under monetary uni…cation,assuming that the supranational authorities attach an equal weight to the preferences of each of the participating countries.Section4derives the equilibrium for the case of a common,discretionary monetary policy with decentralized…scal policies.Section5explores institutional arrangements(i.e. 
in‡ation targets and public debt targets)that may alleviate the welfare losses arising from the lack of monetary policy commitment and the wasteful strategic interaction among the decentralized governments.Finally,Section6concludes the main body of this paper.The derivations are contained in the appendix.2.The modelA monetary union,which is small relative to the rest of the world,is formed by n countries.2A common central bank(CCB)sets monetary policy for the entire union,while…scal policy is determined at a decentralized,national level by the n governments.There are two periods.2Monetary uni…cation is taken as given.Hence,we do not explore the incentives of countries to join a monetary union.3Workers are represented by trade unions who aim for some target real wage rate(e.g.see Alesina and Tabellini,1987,and Jensen,1994).They set nominal wages so as to minimize the expected squared deviation of the realized real wage rate from this target.Monetary policy(i.e.,the in‡ation rate)is selected after nominal wages have been…xed.In each country,…rms face a standard production function with decreasing returns to scale in labour.Output in period t is taxed at a rate¿it.Therefore,output in country i in periods1and2,respectively,is given by3x i1=º(¼1¡¼e1¡¿i1)¡¹¡²i;(2.1)x i2=º(¼2¡¼e2¡¿i2);(2.2) where¹represents a common union-wide shock,while²i stands for an idiosyn-cratic shock that solely hits country i.¼et denotes the in‡ation rate for period texpected at the start of period t(that is,before period t shocks have materialized, but after period t¡1;t¡2;::shocks have hit).We assume that E[²i]=0;8i; E[¹]=0;E[²i²j]=0;8j=i;and that¹²´1P n i=1²i=0.4The variances of¹and ²i are given by¾2¹and¾2²,respectively.We abstract from shocks in the secondperiod,because they would not a¤ect debt accumulation.Each country features a social welfare function which is shared by the govern-ment of that country.Hence,governments are benevolent.In particular,the loss function of government i is 
de…ned over in‡ation,output and public spending:V S;i=12X t=1¯t¡1h®¼¼2t+(x it¡~x it)2+®g(g it¡~g it)2i;0<¯ 1;®¼;®g>0:(2.3)Welfare losses increase in the deviations of in‡ation,(log)output and government spending(g it is government spending as a share of output in the absence of distor-tions)from their targets(or…rst-best levels or“bliss points”).For convenience, the target level for in‡ation corresponds to price stability.The target level for output is denoted by~x it>0.Two distortions reduce output below this optimal level.First,the output tax¿it drives a wedge between the social and private bene…ts of additional output.Second,market power enables unions to drive the real wage above its level in the absence of distortions.Hence,even in the ab-sence of taxes,output is below the…rst-best output level~x it>0.The…rst-best 3Details on the derivations of these output equations can be found in Beetsma and Bovenberg (1999).4Without this assumption,the mean¹²of the²’s would play the same role as¹does.In the outcomes given below,¹would then be replaced by^¹´¹+¹².For convenience,we assume that ¹²=0.4level of government spending,~g it,can be interpreted as the optimal share of non-distortionary output to be spent on public goods if(non-distortionary)lump-sum taxes would be available(see Debelle and Fischer,1994).The target levels for output and government spending can di¤er across countries.Parameters®¼and ®g correspond to the weights of the price stability and government spending ob-jectives,respectively,relative to the weight of the output objective.Finally,¯denotes society’s subjective discount factor.Government i’s budget constraint can be approximated by(e.g.,see Appendix A in Beetsma and Bovenberg,1999):g it+(1+½)d i;t¡1=¿it+¼t+d it;(2.4) where d i;t¡1represents the amount of public debt carried over from the previous period into period t,while d it stands for the amount of debt outstanding at the end of period t.All public debt is real,matures after one period,and is 
sold on the world capital market against a real rate of interest of ρ. This interest rate is exogenous because the countries making up the monetary union are small relative to the rest of the world.⁵ τ_it and χ (a constant) stand for, respectively, distortionary tax revenue and real holdings of base money as shares of non-distortionary output. All countries share equally in the seigniorage revenues of the CCB, so that the seigniorage revenues accruing to country i amount to χπ_t. [Footnote 5: In the following, we will occasionally explore what happens when the number of union participants becomes infinitely large (i.e., n → ∞) in order to strengthen the intuition behind our results. In these exercises the real interest rate remains beyond the control of union-level policymakers.]

We combine (2.4) with the expression for output, (2.1) or (2.2), to eliminate τ_it. The resulting equation can be rewritten to yield the government financing requirement of period t:

  GFR_it ≡ K̃_it + (1+ρ) d_i,t−1 − d_it + δ_t (μ + ε_i)/ν
         = [(x̃_it − x_it)/ν] + χπ_t + (g̃_it − g_it) + (π_t − π_t^e),   (2.5)

where δ_t is an indicator function, such that δ_1 = 1 and δ_2 = 0, and where K̃_it ≡ g̃_it + x̃_it/ν. The government financing requirement, GFR_it, consists of three components. The first component, K̃_it, amounts to the government spending target, g̃_it, and an output subsidy aimed at offsetting the implicit output tax due to labor- or product-market distortions, x̃_it/ν. The second component involves net debt-servicing costs, (1+ρ) d_i,t−1 − d_it. The final component (in period 1 only) is the stochastic shock (scaled by ν), (μ + ε_i)/ν. The last right-hand side of (2.5) represents the sources of finance: the shortfall (scaled by ν) of output from its target (henceforth referred to as the output gap), (x̃_it − x_it)/ν, seigniorage revenues, χπ_t, the shortfall of government spending from its target (henceforth referred to as the spending gap), g̃_it − g_it, and the inflation surprise, π_t − π_t^e.

All public debt is paid off at the end of the second period (d_i2 = 0, i = 1, …, n). Under this assumption, while taking the discounted (to period one) sums
of the left- and right-hand sides of (2.5) (t = 1, 2), we obtain the intertemporal government financing requirement:

  IGFR_i ≡ F̃_i + (μ + ε_i)/ν = Σ_{t=1}^2 (1+ρ)^{−(t−1)} [ (x̃_it − x_it)/ν + χπ_t + (g̃_it − g_it) + (π_t − π_t^e) ],   (2.6)

where F̃_i ≡ K̃_i1 + (1+ρ) d_i0 + K̃_i2/(1+ρ) stands for the deterministic component of the intertemporal government financing requirement.

Monetary policy is delegated to a common central banker (CCB), who has direct control over the union's inflation rate. One could assume that the CCB has certain intrinsic preferences regarding the policy outcomes. Alternatively, and this is the interpretation we prefer, one could assume that the CCB is assigned a loss function by means of an appropriate contractual agreement. More specifically, this agreement shapes the CCB's incentives in such a way (by appropriately specifying its salary and other benefits – for example, possible reappointment – conditional on its performance) that it chooses to minimize the following loss function:

  V_CCB = (1/2) Σ_{t=1}^2 β^{t−1} { α_π (π_t − π_t*)² + (1/n) Σ_{i=1}^n [ (x_it − x̃_it)² + α_g (g_it − g̃_it)² ] },   (2.7)

where π_t* is the inflation target in period t, which may be different from the socially-optimal inflation rate, which was set at zero. If π_1* = π_2* = 0, the CCB's objective function corresponds to an equally-weighted average of the individual societies' objective functions. We assume that π_2* is a linear function of d_i1, i = 1, …, n. This linearity assumption suffices for our purposes: we will see later on that the optimal second-period inflation target is indeed a linear function of d_i1, i = 1, …, n. The optimal first-period inflation target will be a function of d_i0, which is exogenous.

3. The second-best equilibrium

As a benchmark for the remainder of the analysis, we discuss the equilibrium resulting from centralized fiscal and monetary policies under commitment. Monetary policy is set by the CCB. Fiscal policy is conducted by a centralized fiscal authority, which minimizes:

  V_U ≡ (1/n) Σ_{i=1}^n V_S,i,   (3.1)

where the V_S,i are given by (2.3), i = 1, …, n. Equation (3.1) assumes that countries have equal bargaining
power as regards the fiscal policy decisions taken at the union level. Government spending is residually determined, so that the CCB, when it selects monetary policy, internalizes the government budget constraints. The resulting equilibrium is Pareto optimal. In the sequel, we refer to this equilibrium as the second-best equilibrium. In the absence of first-best policies (such as the use of lump-sum taxation and the elimination of product- and labor-market distortions), it is the equilibrium with the smallest possible welfare loss (3.1), given monetary unification. The derivation of the second-best equilibrium is contained in Appendix A.

3.1. Inflation, the output gap and the public spending gap

Table 1 contains the outcomes for inflation, the output gap,⁶ x̃_it − x_it, and the spending gap, g̃_it − g_it. We write each of these outcomes as the sum of two deterministic and two stochastic components. F̃_Δi is the deviation of country i's deterministic component of its intertemporal government financing requirement from the cross-country average, defined by F̃. Formally, F̃ ≡ (1/n) Σ_{j=1}^n F̃_j and F̃_Δi ≡ F̃_i − F̃. The factor between square brackets in each of the entries of Table 1 makes clear how, within a given period, the government financing requirement is distributed over the financing sources (seigniorage, the output gap, the spending gap and an inflation surprise). Indeed, for each period these factors add up to unity, both across the deterministic and across the stochastic components. For example, for the first period one has: [Footnote 6: Throughout, we present the outcome for the output gap instead of the outcome for the tax rate. The reason is that, in contrast to the latter, the former directly enters the welfare loss functions.]

  [(x̃_i1 − x_i1)/ν] + χπ_1 + (g̃_i1 − g_i1) + (π_1 − π_1^e)
    = [β*(1+ρ) / (1 + β*(1+ρ))] (F̃ + F̃_Δi) + [β*(1+ρ)(P*/P) / (1 + β*(1+ρ)(P*/P))] (μ/ν) + [β*(1+ρ) / (1 + β*(1+ρ))] (ε_i/ν)
    = K̃_i1 + (1+ρ) d_i0 − d^S_i1 + (μ + ε_i)/ν,   (3.2)

where d^S_i1 is the second-best debt level. The last equality can be checked by substituting (3.4)-(3.7) into (3.3) (all given below) and substituting the resulting expression into the
last line of (3.2). For each of the outcomes, the terms that follow the factor in square brackets regulate the intertemporal allocation of the intertemporal government financing requirement.

The coefficients of the common stochastic shock μ/ν (in the fourth column of Table 1, γ_2) differ in two ways from the coefficients of the common deterministic component of the intertemporal government financing requirement F̃ (in the second column of Table 1, γ_0). The first difference is with respect to the first-period, intratemporal, allocation of the government financing requirement over the financing sources. The deterministic components of the government financing requirement are anticipated and thus correctly incorporated in expected inflation. The common shock, in contrast, is unanticipated and, hence, not taken into account when inflation expectations are formed. The predetermination of the inflation expectation is exploited by the central policymakers so as to finance part of this common shock through an inflation surprise. Indeed, whereas the coefficient of π_1 − π_1^e is zero in the second column in Table 1, this coefficient is positive in the fourth column, indicating that part of the common shock is financed through an inflation surprise in the first period. With surprise inflation absorbing part of the common shock, the output gap and the spending gap have to absorb a smaller share of this shock.

In the second period, the allocation over the financing sources for the stochastic component μ is the same as for the deterministic component F̃. The reason is that the first-period shock μ has materialized before second-period inflation expectations are formed. The effect of μ on the second-period outcomes will thus be perfectly anticipated. Indeed, the share of μ that is transmitted into the second period through debt policy becomes part of the deterministic component of the second-period government financing requirement (when viewed from the start of the second period).

The second way in which the coefficient of the stochastic shock μ differs from the coefficient of F̃ involves
the intertemporal allocation of the government financing requirement. In particular, the share of μ absorbed in the first period (relative to the second period) is larger than that of F̃ (β*(P*/P)c_1 > β*c_0 and c_1 < c_0, where c_0 and c_1 are defined in Table 1). The reason is again that first-period inflation expectations are predetermined when the stochastic shock hits. This enables the policymakers to absorb a relatively large share of the stochastic shock in the first period through an inflation surprise.

The responses of the output and government spending gaps to F̃_Δi and ε_i differ from the responses to F̃ and μ. Since inflation is attuned to cross-country averages, it cannot respond to country-specific circumstances as captured by F̃_Δi and ε_i. Accordingly, taxes (the output gap) and the government spending gap have to fully absorb these country-specific components of the government financing requirements.

3.2. Public debt policy

The solution for debt accumulation in the second-best equilibrium can be written as:

  d^S_i1 = d̄^{e,S}_1 + d^{Δ,e,S}_i1 + d̄^{d,S}_1 + d^{δ,S}_i1,   (3.3)

where

  d̄^{e,S}_1 = { [K̃_1 + (1+ρ) d̄_0 − K̃_2] + (1 − β*) K̃_2 } / [1 + β*(1+ρ)],   (3.4)

  d^{Δ,e,S}_i1 = { [K̃_Δi1 + (1+ρ) d_Δi0 − K̃_Δi2] + (1 − β*) K̃_Δi2 } / [1 + β*(1+ρ)],  n > 1;
              = 0,  n = 1,   (3.5)

  d̄^{d,S}_1 = [ 1 / (1 + β*(1+ρ)(P*/P)) ] (μ/ν),   (3.6)

  d^{δ,S}_i1 = [ 1 / (1 + β*(1+ρ)) ] (ε_i/ν),  n > 1;
             = 0,  n = 1,   (3.7)

where the superscript "S" stands for "second-best equilibrium", the superscript "e" denotes the expectation of a variable, an upper bar above a variable indicates its cross-country average (except for variables carrying a tilde, like K̃_1, where the cross-country average is indicated by dropping the country index), a superscript "Δ" denotes an idiosyncratic deviation of a deterministic variable from its cross-country average (for example, K̃_Δi1 ≡ K̃_i1 − K̃_1), a superscript "d" denotes the response to a common shock, a superscript "δ" indicates the response to an idiosyncratic shock, and where

  β* ≡ β(1+ρ),   (3.8)
  P ≡ χ²/α_π + 1/ν² + 1/α_g,   P* ≡ (χ+1)²/α_π + 1/ν² + 1/α_g.

Hence, optimal debt accumulation (3.3) is the sum of two deterministic components and two stochastic components. The component d̄^{e,S}_1 optimally distributes
over time the absorption of the cross-country averages of the deterministic components of the government financing requirements. Therefore, it is common across countries. The country-specific components d^{Δ,e,S}_i1 intertemporally distribute the idiosyncratic deterministic components of the government financing requirements. The common (across countries) component d̄^{d,S}_1 represents the optimal debt response to the common shock μ, while d^{δ,S}_i1 stands for the optimal debt response to the country-specific shock, ε_i.

The debt response to the common shock is less active than the response to the idiosyncratic shock (since P*/P > 1). The common inflation rate can exploit the predetermination of inflation expectations only in responding to the common shock, because the common inflation rate cannot be attuned to idiosyncratic shocks. Hence, the share of the common shock that can be absorbed in the first period can be larger than the corresponding share of the idiosyncratic shock. Public debt thus needs to respond less vigorously to the common shock.

4. Discretionary monetary policy with decentralized fiscal policy

This section introduces two distortions compared with the second-best equilibrium explored in the previous section. First, the CCB is no longer able to commit to monetary policy announcements. Second, fiscal policy is decentralized to individual governments, which may result in wasteful strategic interaction among heterogeneous governments.

From now on, the timing of events in each period is as follows. At the start of the period, the institutional parameters are set. That is, an inflation target is imposed on the CCB for the coming period and, if applicable, the debt targets on the individual governments are set. The inflation target may be conditioned on the state of the world. In particular, the inflation target may depend on the average debt level in the union.⁷ Furthermore, the debt target, which represents [Footnote 7: The optimal inflation target can either be optimally reset at the start of each period, or] the amount of
public debt that a government has to carry over into the next period, may be shock-contingent.⁸ After the institutional parameters have been set, inflation expectations are determined (through the nominal wage-setting process). Third, the shock(s) materialize. Fourth, taking inflation expectations as given, the CCB selects the common inflation rate and the fiscal authorities simultaneously select taxes and, in the absence of a debt target, public debt. Each of the players takes the other players' policies at this stage as given. Finally, public spending levels are residually determined. As a result, the CCB internalizes the effect of its policies on the government budget constraints.

This section explores the outcomes under pure discretion, i.e. in the absence of both inflation targets (i.e., π_1* = π_2* = 0) and debt targets. The complete derivation of the equilibrium is contained in Appendix B. The suboptimality of the resulting equilibrium compared to the second best motivates the exploration of inflation and debt targets in Section 5.

4.1. Inflation, the output gap and the public spending gap

Table 2 contains the solutions for the inflation rate, the output gap and the spending gap. The main difference compared to the outcomes under the second-best equilibrium (see Table 1) is that, for a given amount of debt d_i1 to be carried over into the second period, expected first-period inflation (and, hence, seigniorage if χ > 0) will be higher (compare the term between the square parentheses in the second column and the second row of Table 2 with the corresponding term in Table 1 and observe that [χ(χ+1)/α_π]/S > (χ²/α_π)/P, where S ≡ χ(χ+1)/α_π + 1/ν² + 1/α_g). The source of the higher expected inflation rate under pure discretion is the inability to commit to a stringent monetary policy, which yields the familiar inflation bias (Barro and Gordon, 1983). The outcomes for inflation, the output gap and the spending gap deviate from the outcomes under the second-best equilibrium also because debt accumulation under pure discretion differs from debt
accumulation under the second best. These differences are discussed below.

4.2. Public debt policy

Government i's debt can, analogous to (3.3), be written as:

  d^D_i1 = d̄^{e,D}_1 + d^{Δ,e,D}_i1 + d̄^{d,D}_1 + d^{δ,D}_i1,   (4.1)

[Footnote 7, continued: be determined according to a state-contingent rule selected at the beginning of the first period. These two alternative interpretations yield equivalent results. Footnote 8: Debt at the end of the second period is restricted to be zero. Hence, the second period features a debt target of zero.]

where the superscript "D" is used to indicate the solution of the purely discretionary equilibrium with decentralized fiscal policies and where

  d̄^{e,D}_1 = { [K̃_1 + (1+ρ) d̄_0 − K̃_2] + [1 − β*(S*/S)] K̃_2 } / [1 + β*(1+ρ)(S*/S)],   (4.2)

  d^{Δ,e,D}_i1 = { [K̃_Δi1 + (1+ρ) d_Δi0 − K̃_Δi2] + [1 − β*(Q/S)] K̃_Δi2 } / [1 + β*(1+ρ)(Q/S)],  if n > 1;
              = 0,  if n = 1,   (4.3)

  d̄^{d,D}_1 = [ 1 / (1 + β*(1+ρ)(S*/S)(P*/S)) ] (μ/ν),   (4.4)

  d^{δ,D}_i1 = [ 1 / (1 + β*(1+ρ)(Q/S)) ] (ε_i/ν),  if n > 1;
             = 0,  if n = 1,   (4.5)

and where

  S ≡ χ(χ+1)/α_π + 1/ν² + 1/α_g,   (4.6)
  S* ≡ χ(χ+1)/α_π + (χ+1)/(n α_π) + 1/ν² + 1/α_g,
  Q ≡ [(n−1)/n][χ(χ+1)/α_π] + 1/ν² + 1/α_g.

4.2.1. Response to the common deterministic components of the government financing requirements

Positive analysis: This subsection explores the solution for expected average debt d̄^{e,D}_1 in (4.2). Whereas current inflation expectations are predetermined at the moment that debt is selected, future inflation expectations still need to be determined. A reduction in debt reduces the future government financing requirement and, thus, the tax rate in the future. This, in turn, weakens the CCB's incentive to raise future inflation in order to protect employment. Hence, by restraining debt accumulation, governments help to reduce future inflation expectations, which are endogenous from a first-period perspective. The reduction in future inflation expectations implies a lower inflation bias in the future. In other words, asset accumulation is an indirect way to enhance the commitment of a central bank to low future inflation.
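The comparative statements above rest on simple inequalities among the composite coefficients defined in (3.8) and (4.6). The short Python sketch below is purely illustrative (the parameter values are hypothetical, not a calibration from the paper): it confirms that P*/P > 1, that the first-period seigniorage-financing share is larger under discretion whenever χ > 0, and that S* > S and Q < S for n > 1.

```python
# Numerical illustration of the composite coefficients in (3.8) and (4.6).
# The parameter values below are hypothetical; the paper does not calibrate them.

def coefficients(chi, alpha_pi, alpha_g, nu, n):
    """Return (P, P_star, S, S_star, Q) as defined in Eqs. (3.8) and (4.6)."""
    common = 1.0 / nu**2 + 1.0 / alpha_g
    P      = chi**2 / alpha_pi + common                  # second best, (3.8)
    P_star = (chi + 1.0)**2 / alpha_pi + common          # (3.8)
    S      = chi * (chi + 1.0) / alpha_pi + common       # discretion, (4.6)
    S_star = S + (chi + 1.0) / (n * alpha_pi)            # (4.6)
    Q      = ((n - 1.0) / n) * chi * (chi + 1.0) / alpha_pi + common  # (4.6)
    return P, P_star, S, S_star, Q

chi, alpha_pi = 0.1, 1.0
P, P_star, S, S_star, Q = coefficients(chi, alpha_pi, alpha_g=1.0, nu=1.0, n=10)

# P*/P > 1: the second-best debt response to the common shock is less active
# than the response to the idiosyncratic shock (Section 3.2).
assert P_star / P > 1.0

# [chi(chi+1)/alpha_pi]/S > (chi^2/alpha_pi)/P: for given debt, expected
# first-period seigniorage financing is larger under discretion (Section 4.1).
assert (chi * (chi + 1.0) / alpha_pi) / S > (chi**2 / alpha_pi) / P

# S* > S and Q < S hold for any n > 1 and chi > 0.
assert S_star > S and Q < S
print(round(P_star / P, 3))  # 1.597 for these hypothetical values
```

Since each inequality reduces to χ(χ+1) > χ², the particular values chosen here are immaterial as long as χ > 0.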


arXiv:astro-ph/0701490v1  17 Jan 2007

TP-DUT/2007-1

Reconstruction of a Deceleration Parameter from the Latest Type Ia Supernovae Gold Dataset

Lixin Xu*, Chengwu Zhang, Baorong Chang, and Hongya Liu
School of Physics & Optoelectronic Technology, Dalian University of Technology, Dalian, 116024, P.R. China

In this paper, a parameterized deceleration parameter q(z) = 1/2 − a/(1+z)^b is reconstructed from the latest type Ia supernovae gold dataset. It is found out that the transition redshift from decelerated expansion to accelerated expansion is at z_T = 0.35 (+0.14/−0.07) at the 1σ confidence level in this parameterization. The best fit values of the parameters with 1σ errors are a = 1.56 (+0.99/−0.55) and b = 3.82 (+3.70/−2.27).

PACS numbers: 98.80.-k, 98.80.Es
Keywords: Cosmology; dark energy

I. INTRODUCTION

Recent observations of high-redshift Type Ia supernovae indicate that our universe is undergoing accelerated expansion, which is one of the biggest challenges in present cosmological research [1, 2, 3, 4, 5, 6]. Meanwhile, this suggestion is strongly confirmed by the observations from WMAP [7, 8, 9, 10] and the Large Scale Structure survey [11]. To understand the late-time accelerated expansion of the universe, a large class of models has been proposed by assuming the existence of an extra energy component with negative pressure which dominates at late times and pushes the universe into accelerated expansion. In principle, a natural candidate for dark energy could be a small cosmological constant Λ, which has the constant equation of state (EOS) w_Λ = −1. However, there exist serious theoretical problems: the fine tuning and coincidence problems. To overcome the coincidence problem, dynamic dark energy models have been proposed, such as quintessence [12], phantom [13], quintom [14], k-essence [15], Chaplygin gas [16], holographic dark energy [17], etc., as alternative candidates. Another approach to studying dark energy is an almost model-independent one, i.e., the parameterized equation of state of dark energy, which is
implemented by giving the concrete form of the equation of state of dark energy directly, such as w(z) = w_0 + w_1 z [18] or w(z) = w_0 + w_1 z/(1+z) [19, 20]. With the latest gold dataset, it is found that the EOS of dark energy evolves slowly, if at all, i.e. |∂w/∂z| ≪ 1, and rapidly evolving dark energy is ruled out [23]. Also, the data favor a quintom-like dark energy, i.e. one crossing the cosmological constant boundary. In all, it is an effective method to test and rule out dark energy models.

As known, the universe is now dominated by dark energy and is undergoing accelerated expansion. However, in the past, the universe was dominated by dark matter and underwent a decelerated epoch. So, inspired by this idea, a parameterized deceleration parameter is presented in an almost model-independent way by giving a concrete form of the deceleration parameter which is positive in the past and changes into negative recently [22, 24, 25]. Moreover, it is interesting and important to know the transition redshift z_T from decelerated expansion to accelerated expansion. This is the main point to be explored in this paper.

The structure of this paper is as follows. In Section II, a parameterized deceleration parameter is constrained by the latest 182 SNe Ia data points compiled by Riess et al. [23]. Section III is the conclusion.

II. RECONSTRUCTION OF DECELERATION PARAMETER

We consider a flat FRW cosmological model containing dark matter and dark energy with the metric

  ds² = −dt² + a²(t) dx².   (1)

The Friedmann equation of the flat universe is written as

  H² = (8πG/3) ρ,   (2)

where ρ is the total energy density and the Hubble parameter is

  H ≡ ȧ/a.   (3)

The deceleration parameter

  q ≡ −ä/(aH²)   (4)

gives

  Ḣ = −(1+q)H².   (5)

By using the relation a_0/a = 1+z, the relation of H and q, i.e., Eq. (5), can be written in its integration form

  H(z) = H_0 exp{ ∫_0^z [1 + q(u)] d ln(1+u) },   (6)

where the subscript "0" denotes the current values of the variables. If the function q(z) is given, the evolution of the Hubble parameter is obtained. In this paper, we consider a parameterized deceleration parameter [22],

  q(z) = 1/2 − a/(1+z)^b,   (7)

where a, b are constants which can be determined from the current observational constraints. From Eq. (7), it can be seen that in the limit z → ∞ the deceleration parameter q → 1/2, which is the value of the deceleration parameter in the dark
matter dominated epoch. And, the current value of the deceleration parameter is determined by q_0 = 1/2 − a. With the Eq. (7) form of the deceleration parameter, the Hubble parameter is written in the form

  H(z) = H_0 (1+z)^{3/2} exp{ a [(1+z)^{−b} − 1] / b }.   (8)

From this explicit expression for the Hubble parameter, it can be seen that this mechanism can also be viewed as a parametrization of the Hubble parameter. Now, we can constrain the model by the supernovae observations.

We will use the latest released supernovae datasets to constrain the parameterized deceleration parameter Eq. (7). The gold dataset contains 182 SNe Ia data points [23], obtained by discarding all SNe Ia with z < 0.0233 and all SNe Ia with quality = 'Silver'. These 182 data points are used to constrain our model. A constraint from SNe Ia can be obtained by fitting the distance modulus μ(z),

  μ_th(z) = 5 log_10(D_L(z)) + M̄,   (9)

where D_L(z) is the Hubble-free luminosity distance H_0 d_L(z), with

  d_L(z) = (1+z) ∫_0^z dz′ / H(z′),   (10)

  M̄ = M + 5 log_10( H_0^{−1} / Mpc ) + 25 = M − 5 log_10 h + 42.38,   (11)

where M is the absolute magnitude of the object (SNe Ia). With the SNe Ia datasets, the best fit values of the parameters in dark energy models can be determined by minimizing

  χ²_SneIa(p_s) = Σ_{i=1}^N [μ_obs(z_i) − μ_th(z_i)]² / σ_i²,   (12)

where σ_i is the total uncertainty of the observed distance modulus.
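As a quick consistency check of the derivation above, the closed form (8) can be compared with a direct numerical evaluation of the integral representation (6), and the transition redshift follows from q(z_T) = 0, i.e. (1+z_T)^b = 2a. The sketch below is illustrative only; it uses the best-fit values a = 1.56, b = 3.82 quoted in the abstract.

```python
import math

a, b = 1.56, 3.82   # best-fit values quoted in the abstract

def q(z):
    """Parameterized deceleration parameter, Eq. (7)."""
    return 0.5 - a / (1.0 + z)**b

def E_closed(z):
    """Dimensionless Hubble parameter H(z)/H0 from the closed form, Eq. (8)."""
    return (1.0 + z)**1.5 * math.exp(a * ((1.0 + z)**(-b) - 1.0) / b)

def E_integral(z, steps=20000):
    """H(z)/H0 from Eq. (6): exp( int_0^z [1 + q(u)] dln(1+u) ), trapezoidal rule."""
    f = lambda u: (1.0 + q(u)) / (1.0 + u)   # integrand after dln(1+u) = du/(1+u)
    h = z / steps
    s = 0.5 * (f(0.0) + f(z)) + sum(f(i * h) for i in range(1, steps))
    return math.exp(h * s)

# The closed form (8) agrees with the integral representation (6):
assert abs(E_closed(1.0) - E_integral(1.0)) < 1e-6

# Transition redshift from q(z_T) = 0, i.e. (1 + z_T)^b = 2a:
z_T = (2.0 * a)**(1.0 / b) - 1.0
assert abs(q(z_T)) < 1e-9
print(round(z_T, 2))   # 0.35, matching the quoted transition redshift
```

This also makes the error budget transparent: z_T depends on the fitted (a, b) only through (2a)^(1/b), which is how the asymmetric 1σ interval on z_T arises from the asymmetric intervals on a and b.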
almost model-independent way to test and rule out some existent dark energy models.z q zFIG.2:The evolution of decelerated parameter with respect to the redshift z .The center solid lines is plotted with the best fit value,where the shadows denote the 1σregion.AcknowledgmentsL.Xu is supported by DUT (3005-893321)and NSF (10647110).H.Liu is supported by NSF (10573003)and NBRP (2003CB716300)of P.R.China.[1]A.G.Riess,et.al.,Observational evidence from supernovae for an accelerating universe and a cosmological constant,1998Astron.J.1161009,astro-ph/9805201.[2]S.Perlmutter,et.al.,Measurements of omega and lambda from 42high-redshift supernovae,1999Astrophys.J.517565,astro-ph/9812133.[3]J.L.Tonry,et.al.,Cosmological Results from High-z Supernovae ,2003Astrophys.J.5941,astro-ph/0305008;[4]R.A.Knop,et.al.,New Constraints on ΩM ,ΩΛ,and w from an Independent Set of Eleven High-Redshift Supernovae Observed with HST,astro-ph/0309368.[5]B.J.Barris,et.al.,23High Redshift Supernovae from the IfA Deep Survey:Doubling the SN Sample at z >0.7,2004Astrophys.J.602571,astro-ph/0310843.[6]A.G.Riess,et.al.,Type Ia Supernova Discoveries at z >1From the Hubble Space Telescope:Evidence for Past Deceleration and Constraints on Dark Energy Evolution,astro-ph/0402512.[7]P.de Bernardis,et.al.,A Flat Universe from High-Resolution Maps of the Cosmic Microwave Background Radiation,2000Nature 404955,astro-ph/0004404[8]S.Hanany,et.al.,MAXIMA-1:A Measurement of the Cosmic Microwave Background Anisotropy on angular scales of 10arcminutes to 5degrees,2000Astrophys.J.545L5,astro-ph/0005123.[9]D.N.Spergel et.al.,First Year Wilkinson Microwave Anisotropy Probe (WMAP)Observations:Determination of Cosmo-logical Parameters,2003Astrophys.J.Supp.148175,astro-ph/0302209.[10]D.N.Spergel et al 2006,astro-ph/0603449.[11]M.Tegmark et al.,Phys.Rev.D 69(2004)103501,astro-ph/0310723;M.Tegmark et al.,Astrophys.J.606(2004)702,astro-ph/0310725.[12]I.Zlatev,L.Wang,and P.J.Steinhardt ,Quintessence,Cosmic Coincidence,and 
the Cosmological Constant,1999Phys.Rev.Lett.82896,astro-ph/9807002;P.J.Steinhardt,L.Wang ,I.Zlatev,Cosmological Tracking Solutions,1999Phys.Rev.D 59123504,astro-ph/9812313;M.S.Turner ,Making Sense Of The New Cosmology,2002Int.J.Mod.Phys.A 17S1180,astro-ph/0202008;V.Sahni ,The Cosmological Constant Problem and Quintessence,2002,Class.Quant.Grav.193435,astro-ph/0202076.[13]R.R.Caldwell,M.Kamionkowski,N.N.Weinberg,Phantom Energy:Dark Energy with w <−1Causes a Cosmic Dooms-day,2003Phys.Rev.Lett.91071301,astro-ph/0302506;R.R.Caldwell ,A Phantom Menace?Cosmological consequences of a dark energy component with super-negative equation of state,2002Phys.Lett.B 54523,astro-ph/9908168;P.Singh,M.Sami,N.Dadhich,Cosmological dynamics of a phantomfield,2003Phys.Rev.D6*******,hep-th/0305110;J.G.Hao,X.Z.Li,Attractor Solution of Phantom Field,2003Phys.Rev.D6*******,gr-qc/0302100.[14]Feng B et al2005Phys.Lett.B607(1-2)35.[15]Armendariz-Picon,T.Damour,V.Mukhanov,k-Inflation,1999Physics Letters B458209;M.Malquarti,E.J.Copeland,A.R.Liddle,M.Trodden,A new view of k-essence,2003Phys.Rev.D6*******;T.Chiba,Tracking k-essence,2002Phys.Rev.D6*******,astro-ph/0206298.[16]A.Y.Kamenshchik,U.Moschella,and V.Pasquier,Phys.Lett.B511(2001)265,gr-qc/0103004;N.Bilic,G.B.Tupper,and R.D.Viollier,Phys.Lett.B535(2002)17[astro-ph/0111325];M.C.Bento,O.Bertolami,and A.A.Sen,Phys.Rev.D66(2002)043507,gr-qc/0202064.[17]M.Li,Phys.Lett.B603(2004)1,hep-th/0403127;K.Ke and M.Li,Phys.Lett.B606(2005)173,hep-th/0407056;Y.Gong,Phys.Rev.D70(2004)064029,hep-th/0404030;Y.S.Myung,Phys.Lett.B610(2005)18,hep-th/0412224;Q.G.Huang and M.Li,JCAP0408(2004)013,astro-ph/0404229;Q.G.Huang,M.Li,JCAP0503(2005)001,hep-th/0410095;Q.G.Huang and Y.Gong,JCAP0408(2004)006,astro-ph/0403590;Y.Gong,B.Wang and Y.Z.Zhang,Phys.Rev.D 72(2005)043510,hep-th/0412218;Z.Chang,F.-Q.Wu,and X.Zhang,astro-ph/0509531.[18]A.R.Cooray and 
D.Huterer,Astrophys.J.513L95(1999).[19]M.Chevallier,D.Polarski,Int.J.Mod.Phys.D10213(2001);gr-qc/0009008.[20]E.V.Linder,Phys.Rev.Lett.90091301(2003).[21]B.F.Gerke and G.Efstathiou,Mon.Not.Roy.Astron.Soc.33533(2002).[22]L.Xu,H.Liu and Y.Ping,Reconstruction of Five-dimensional Bounce cosmological Models From Deceleration Factor,Int.Jour.Theor.Phys.45,869-876,(2006),astro-ph/0601471.[23]A.G.Riess et al.,astro-ph/0611572.[24]N.Banerjee,S.Das,Acceleration of the universe with a simple trigonometric potential,astro-ph/0505121.[25]Y.Gong,A.Wang,Reconstruction of the deceleration parameter and the equation of state of dark energy,astro-ph/0612196.[26]U.Alam,V.Sahni,T.D.Saini and A.A.Starobinsky,astro-ph/0406672.H.Wei,N.N.Tang,S.N Zhang,Reconstructionof Hessence Dark Energy and the Latest Type Ia Supernovae Gold Dataset astro-ph/0612746.。
