Mobile Radio Channels Modeling in Matlab

RADIOENGINEERING, VOL. 12, NO. 4, DECEMBER 2003
2. The Mobile Radio Channel

The mobile radio channel is characterized by two types of fading effects: large-scale fading and small-scale fading [1], [2]. Large-scale fading is the slow variation of the mean (distance-dependent) signal power over time. It depends on the presence of obstacles in the signal path and on the position of the mobile unit. Large-scale fading is assumed to be a slow process and is commonly modeled as having lognormal statistics. Small-scale fading is also called Rayleigh or Rician fading because, if a large number of reflective paths is encountered, the received signal envelope is described by a Rayleigh or a Rician probability density function (PDF) [3]. The small-scale fading under consideration is assumed to be flat fading (i.e., there is no intersymbol interference). It is also assumed that the fading level remains approximately constant for (at least) one signaling interval. With this model of the fading channel, the main difference with respect to an AWGN channel is that the fading amplitudes are now Rayleigh- or Rician-distributed random variables, whose values affect the amplitude (and hence the power) of the received signal. The fading amplitudes can be modeled by a Rician or a Rayleigh distribution, depending on the presence or absence of a specular signal component. Fading is Rayleigh if the multiple reflective paths are large in number and there is no dominant line-of-sight (LOS) propagation path; if there is also a dominant LOS path, the fading is Rician-distributed. The fading amplitude r_i at the i-th time instant can be represented as in (1). The Rician K-factor is defined as

$$K = \frac{\beta^2}{2\sigma_0^2} \qquad (2)$$

where $\beta$ is the amplitude of the specular (LOS) component and $\sigma_0^2$ is the variance of each quadrature component of the scattered signal. The best- and worst-case Rician fading channels, associated with K-factors of K = ∞ and K = 0, are the Gaussian channel with a strong LOS path and the Rayleigh channel with no LOS path, respectively. So the Rayleigh fading channel can be considered a special case of a Rician fading channel with K = 0. The Rician PDF is given by [3]
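For illustration, the following Python sketch (not from the paper; the parameterisation in terms of a mean envelope power omega is an added assumption) draws Rician fading amplitudes for a given K-factor, with K = 0 reproducing Rayleigh fading:

```python
import numpy as np

def rician_fading(n, K, omega=1.0, rng=None):
    """Draw n Rician fading amplitudes with K-factor K and mean power omega.

    The LOS (specular) power is K/(K+1)*omega and the scattered power is
    omega/(K+1), so that beta^2 / (2*sigma_0^2) = K; K = 0 is Rayleigh fading.
    """
    rng = np.random.default_rng(rng)
    beta = np.sqrt(K * omega / (K + 1.0))        # specular amplitude
    sigma0 = np.sqrt(omega / (2.0 * (K + 1.0)))  # per-quadrature std dev
    x = rng.normal(beta, sigma0, n)              # in-phase component
    y = rng.normal(0.0, sigma0, n)               # quadrature component
    return np.hypot(x, y)                        # envelope r = sqrt(x^2 + y^2)

r_rayleigh = rician_fading(100_000, K=0.0)   # no LOS path
r_rician   = rician_fading(100_000, K=10.0)  # strong LOS path
print(r_rayleigh.mean(), r_rician.mean())
```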
Optimisation of Foundry Processes

Application of a Multi-Objective Genetic Algorithm and a Neural Network to the Optimisation of Foundry Processes

G. Meneghetti*, V. Pediroda**, C. Poloni**
* Engin Soft Trading Srl, Italy
** Dipartimento di Energetica, Università di Trieste, Italy

Abstract

The aim of this work was the analysis and optimisation of a ductile iron casting using the FRONTIER software. Five geometrical and technological variables were chosen in order to maximise three design objectives. The calculations were performed using the MAGMASOFT software, devoted to the simulation of foundry processes and based on fluid-dynamic, thermal and metallurgical theoretical approaches. The results are critically discussed by comparing the traditional and the optimised solutions.

1. Introduction

A very promising field for computer simulation techniques is the foundry industry. The possibility of reliably estimating both the fluid-dynamic, thermal and microstructural evolution of castings (from the pouring of the molten alloy into the mould to complete solidification) and their final properties is very attractive. If the final microstructure, and hence the mechanical properties, of a casting can be predicted by numerical simulation, an a-priori optimisation of the process parameters (whose number is usually high) can be carried out by exploring different technological solutions, with significant improvements in product quality, in the management of human and economic resources, and in time savings. This approach is quite new in the foundry field, and this work presents an exploratory project aimed at the process optimisation of an industrial ductile iron casting.

2. The simulation of foundry processes and fundamental equations

From a theoretical point of view, a foundry process can be considered as a sequence of events [1-4]:
- the filling of a cavity by a molten alloy, described by fluid-dynamics laws (Navier-Stokes equations);
- the solidification and cooling of the alloy, according to the heat transfer laws (Fourier equation);
- the solid-state transformations, governed by thermodynamics and kinetics.

A full understanding of the whole foundry process requires an investigation of all three phenomena. However, under some hypotheses (regular filling of the mould cavity, homogeneous temperature distribution at the end of filling, etc.), analysing only the solidification and the solid-state transformation can lead to reliable estimates of the final microstructure and of the properties of the casting. The accuracy of the solidification simulation depends mainly on:
- the use of proper thermophysical properties for the materials involved in the process, taking into account their change with temperature;
- the correct definition of the initial and boundary conditions, with particular regard to the heat transfer coefficients.

From a numerical point of view, the solidification process can be investigated by means of a pure heat flow calculation described by Fourier's law of unsteady heat conduction:

$$\frac{\partial}{\partial t}\left(\rho c_p T\right) = \frac{\partial}{\partial x_j}\left[\lambda \frac{\partial T}{\partial x_j}\right]$$

However, a more accurate evaluation requires incorporating the additional heat transport by convective movement of mass due to the temperature-dependent shrinkage of the solidifying mush. For that, temperature-dependent density functions are needed, so that the shrinkage can be calculated based on the actual temperature loss.
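As a minimal illustration of the conduction-only model, here is an explicit finite-difference sketch of the 1D Fourier equation with constant properties; the material values and geometry are illustrative only, and this is in no way MAGMASOFT's solver:

```python
import numpy as np

# Explicit finite-difference sketch of the 1D Fourier equation
# rho*cp * dT/dt = d/dx (lambda * dT/dx), with constant properties.
rho, cp, lam = 7000.0, 600.0, 30.0   # density, specific heat, conductivity
L, nx = 0.1, 51                      # bar length [m], grid points
dx = L / (nx - 1)
alpha = lam / (rho * cp)             # thermal diffusivity
dt = 0.4 * dx**2 / alpha             # explicit scheme is stable for <= 0.5

T = np.full(nx, 1300.0)              # initial melt temperature [degC]
T[0] = T[-1] = 20.0                  # boundaries held at ambient

for _ in range(2000):
    # interior update: T_j += alpha*dt/dx^2 * (T_{j+1} - 2*T_j + T_{j-1})
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

print(f"centre temperature after {2000 * dt:.1f} s: {T[nx // 2]:.1f} degC")
```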
The total metal shrinkage within one time interval leads to a corresponding metal volume flowing from the feeder into the casting through the feeder-neck. The actual temperature distribution in the feeder-neck can be calculated from the following equation:

$$\frac{\partial}{\partial t}\left(\rho c_p T\right) + \frac{\partial}{\partial x_j}\left(\rho c_p u_j T\right) = \frac{\partial}{\partial x_j}\left[\lambda \frac{\partial T}{\partial x_j}\right] + S$$

where the second term on the left-hand side is the convective term, the first term on the right-hand side is the conductive term, and S denotes an additional internal heat source. The additional heat transport by convective movement of mass means that feeding may last much longer than predicted by heat flow based on conduction alone. In any case, when the feeder-neck freezes at a certain temperature, the feeding mechanism locks. The solidification of every other portion of the casting, now insulated, then takes place independently, and the feed metal required during solidification comes from the remaining liquid. The final volume shrinkage results in a certain porosity, typically located at the hot spots.

From the point of view of real industrial interest, the above phenomena and the related equations can be approached only numerically: complex 3D geometries have to be taken into account, as well as the temperature-dependent thermophysical properties of the materials and the production and process parameters. Finite elements, finite differences, control volumes, or a combination of these are the typical methods implemented in software packages [2-3]. The final result of the simulation is knowledge of the actual feeding conditions, which is the basis for correctly sizing the feeders. It must be recalled that this knowledge-based approach is often bypassed by the use of empirical rules; in most cases the optimisation of the feeder size is not really performed (so the feeders are simply oversized), or it is carried out by means of expensive in-field trial-and-correction procedures.

The analyses were performed using the MAGMASOFT® software, specifically devoted to the simulation of foundry processes and based on fluid-dynamic, thermal and metallurgical theoretical approaches. In particular, MAGMASOFT has a module, named MAGMAIron, devoted to the simulation of mould filling, casting solidification and solid-state transformation, with the related mechanical properties (such as hardness, tensile strength and Young's modulus) of cast irons [8].

3. Optimisation tool

Formally, the optimisation problem addressed can be stated as follows: minimise $F_j(X)$, $j = 1, \ldots, n$, with respect to $X$, subject to $c_i(X) \ge 0$, $i = 1, \ldots, m$, where $X$ is the vector of design variables, $F(X) = (F_1(X), \ldots, F_n(X))$ collects the objectives, and the $c_i(X)$ are the constraints. FRONTIER's optimisation methods are based on Genetic Algorithms (GAs) and hill-climbing methods. These allow the user to combine the efficient local convergence of traditional hill climbers with the strengths of GAs: their robustness to noise, discontinuities and multimodality, their ability to address multiple-objective cases directly, and their suitability for parallelisation.

GA GENERAL STRUCTURE. A GA has the following stages:
1. initialise a population of candidate designs and evaluate them; then
2. create a new population by selecting some individuals, which reproduce or mutate, and evaluate this new population.
Stage 2 is repeated until termination (a toy version of this loop is sketched below).
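The following Python toy shows the initialise/select/mutate/re-evaluate cycle with a Pareto-dominance tournament. It is not FRONTIER's MOGA; the objectives, mutation scheme and population size are stand-ins for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def objectives(x):
    # Two toy objectives to minimise; stand-ins for expensive simulations.
    return np.array([np.sum(x**2), np.sum((x - 2.0)**2)])

def dominates(a, b):
    # Pareto dominance: a is no worse everywhere and strictly better somewhere.
    return np.all(a <= b) and np.any(a < b)

pop = rng.uniform(-1.0, 3.0, size=(16, 5))          # population of 16 designs
for generation in range(8):
    f = np.array([objectives(x) for x in pop])
    children = []
    for _ in range(len(pop)):
        i, j = rng.integers(len(pop), size=2)       # binary Pareto tournament
        # if neither dominates, keep j (arbitrary tie-break for this toy)
        winner = pop[i] if dominates(f[i], f[j]) else pop[j]
        children.append(winner + rng.normal(0.0, 0.1, winner.shape))  # mutate
    pop = np.array(children)

# report the non-dominated (Pareto) members of the final population
f = np.array([objectives(x) for x in pop])
pareto = [i for i in range(len(pop))
          if not any(dominates(f[j], f[i]) for j in range(len(pop)) if j != i)]
print("Pareto members:", pareto)
```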
GA MECHANISMS. Design variables are encoded into chromosomes as lists of integers. Although there is an inherent accuracy limitation in using integer values, this is not significant, since accuracy can easily be refined afterwards using classical optimisation techniques. The initial selection of candidates is important, especially when evaluations are so expensive that not many can be afforded in the whole optimisation. In FRONTIER, initialisation can be done either by reading a user-defined set, by random choice, or by using a Sobol algorithm [9] to generate a uniformly distributed quasi-random sequence. The optimisation can also be restarted from a previous population.

The critical operators governing GA performance are selection, reproduction by crossover, and reproduction by mutation. Four selection operators are provided, all based on the concept of Pareto dominance: (1) Local Geographic Selection; (2) Pareto Tournament Selection; (3) Pareto Tournament Directional Selection; and (4) Local Pareto Selection. The user can choose among these, though (4) is recommended for use with either type of crossover, and (2) for generating the proportion of the population that is sent to the next generation unmodified. Most emphasis in FRONTIER is on directional crossover, which makes use of a detected direction of improvement and has some parallels with the Nelder and Mead simplex algorithm. A classical two-point crossover algorithm is also provided. Mutation, when chosen, is carried out by randomly selecting a design variable to mutate and then randomly assigning it a value from the set of all possibilities. In all cases, the GA probabilities can be selected by the user in place of the recommended defaults. All the algorithms are described in more detail in [10].

OPERATIONAL USER CHOICES. Traditional GAs generate a complete new population of designs from the existing set at each generation. This can be done in FRONTIER using its MOGA algorithm. An alternative strategy is to use steady-state reproduction via the MOGASTD algorithm, in which only a few individuals are replaced at each generation; this strategy is more likely to retain the best individuals. The FRONTIER algorithm removes any duplicates generated. Population sizes are under the user's control; FRONTIER case-study work has usually used populations of 16 to 64, owing to the computational expense of the design evaluations. Classical hill climbers can be chosen by the user, and not only to refine a GA solution: they can be adopted from the start of the optimisation if the user can formulate the problem suitably and is confident that the conditions are appropriate.

Returning to the problem of expensive design evaluations, many researchers have made use of response surfaces, which interpolate a set of computed design evaluations. The surface can then be used to provide objective functions that are much faster to evaluate. Interpolation of nonlinear functions in many variables using polynomial or spline functions rapidly becomes intractable, however. FRONTIER provides a response surface option based on a neural net with two nodal planes. Tests have shown this to be an extremely effective strategy when closely combined with the GA so as to provide a continuous update of the neural net.

FITNESS AND CONSTRAINT HANDLING. The objective values themselves are used as fitness values. Optionally, the user can supply weights to combine these into a single quantity. Constraints are normally used to compute a penalty decrementing the fitness.
Alternatively, the combined constraint penalty can be nominated as an extra objective to be minimised.

PARALLELISATION OF THE GA. The multithreading features of Java have been used to parallelise FRONTIER's GAs. The same code is usable in a parallel or a sequential environment, which enhances portability. Multithreading is used to run concurrent design evaluations, with analyses executed in parallel as far as possible on the user's available computational resources.

DECISION SUPPORT. Even when there are a number of conflicting objectives to consider, we are likely to want to choose only one design. The Pareto-boundary set of designs provides the candidates for this final choice. To proceed further, the designer needs to focus on the comparative importance of the individual objectives. The role of decision support in FRONTIER is to help the designer do this by moving to a single composite objective that combines the original objectives in a way that accurately reflects his preferences.

LOCAL UTILITY APPROACH. A wide range of methods has been tried for multiple-criteria decision making. The main technique used in FRONTIER is the Local Utility Approach (LUTA) [11]. This avoids asking the designer to weight the objectives directly against each other (though he can if he wishes); rather, it asks him to consider some of the designs that have already been evaluated and to state which he prefers, without needing to give reasons. The algorithm then proceeds in two stages. First it decides whether the preferences given are internally consistent, and guides the designer to change them if they are not. Then it proposes a 'common currency' objective measure, termed a utility, defined as the sum of a set of piecewise-linear utility functions, one for each individual objective. The preference information the designer has provided can then be stated as a set of inequality relations between the utilities of designs. The algorithm uses the feasible region formed by these constraints to calculate the most typical composite utility function consistent with the designer's preferences. The LUTA technique can be invoked after accumulating a comprehensive set of Pareto-boundary designs over a number of optimisation iterations. The advantage of this approach is that focusing attention on the part of the Pareto boundary of most interest can yield considerable computational savings, by avoiding the computation of information about the whole boundary. In practice, FRONTIER has generally applied the LUTA technique after a set number of design evaluations, after which the utility function is passed to a local hill climber to rapidly refine a solution.

4. Object of the study and adopted optimisation procedure

The component investigated is a textile machine guide, for which both mechanical and integrity requirements are prescribed. These requirements are satisfied, respectively, by reaching proper hardness values and by minimising the porosity content. Furthermore, from the industrial point of view, it is fundamental to maximise the process yield by reducing the feeder size. The chemical composition of the ductile iron is the following (wt%):

C      Si     Mn    P      S       Cu     Sn     Ni     Cr     Mg
3.55   2.77   0.13  0.038  0.0037  0.048  0.045  0.017  0.030  0.035

The liquidus and solidus temperatures are 1155 °C and 1120 °C, respectively.
The thermophysical properties of the material (thermal conductivity, density, specific heat, viscosity) are already implemented in the MAGMASOFT materials database. The GA optimisation process was started from a configuration of the casting system that was already the result of optimisation through foundry practice.

Only the solidification process was taken into account in the simulation, since it was considered to be the stage most affected by the selected technological variables; the temperature of the cast at the beginning of the solidification process was therefore set as a constant. Moreover, the gating system was neglected in the simulation, since its influence on the heat flow involved in the solidification process was considered negligible. As a consequence, the numerical model considers only the cast, the feeder and the feeder-neck (see Fig. 1, which refers to the starting casting system). The mesh was chosen so as to balance accuracy against calculation time; a number of metal cells ranging from 9000 to 12000 (for a total number of cells of approximately 200000) was obtained in each analysed model.

Five technological variables governing the solidification process were taken into account, with the following ranges of variation:
1. temperature of the cast at the beginning of the solidification process, 1300 °C < T_init < 1380 °C;
2. heat transfer coefficient (HTC) between cast and sand mould, 400 W/m²K < HTC < 1200 W/m²K;
3. feeder height, 80 mm < H_f < 180 mm;
4. feeder diameter, 30 mm < D_f < 80 mm;
5. section area of the feeder-neck, 175 mm² < A_n < 490 mm².

These variables were considered representative of the foundry technology and significant for optimising the following design objectives:
1. hardness of the material in a particular portion of the cast;
2. casting weight (i.e., raw cast + feeder + feeder-neck);
3. porosity.

The aim of the analysis was to maximise the hardness and to minimise the total casting weight and the porosity. No constraints were defined for this analysis.

Generally speaking, the optimisation procedure should be performed by running one MAGMASOFT simulation for each generated individual. That implies the possibility of assigning all the input parameters and starting the analysis via command files; similarly, the output files should be available as ASCII files from which the output parameters can be extracted. However, at this stage a completely open interface between MAGMASOFT and FRONTIER is still not available, so another solution was adopted. First, 64 analyses were performed in order to obtain sufficient information over the whole variable domain. An interpolation algorithm was then used to build a response surface model based on a neural network 'trained' on the available results, as illustrated in the sketch below. It was verified that the approximation error is lower than 1% for all the available solutions, with the exception of a single point where it is slightly lower than 5%. The response surface model was then used in the subsequent optimisation procedure to calculate the design objectives; in this way, the time-consuming work of running one MAGMASOFT interactive session for each simulation was avoided. Concerning the genetic algorithm, a mix of classical and directional crossover was used. The first population was created in a deterministic way.
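A hedged sketch of the surrogate idea follows: train a small neural network on a batch of simulator runs, then let the optimiser query the cheap surrogate. A synthetic function stands in for the 64 MAGMASOFT runs, and the network size is an arbitrary choice, not the one used in the study:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Stand-in for 64 expensive simulator runs: 5 design variables -> 1 objective.
def expensive_simulation(X):
    return np.sin(X[:, 0]) + X[:, 1]**2 - X[:, 2] * X[:, 3] + 0.5 * X[:, 4]

X_train = rng.uniform(0.0, 1.0, size=(64, 5))
y_train = expensive_simulation(X_train)

# Small MLP surrogate ("trained" response surface).
surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                         random_state=0).fit(X_train, y_train)

# The optimiser now queries the surrogate instead of the simulator.
X_test = rng.uniform(0.0, 1.0, size=(1000, 5))
best = X_test[np.argmin(surrogate.predict(X_test))]
print("surrogate minimiser:", np.round(best, 3))
```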
5. Results and discussion

The first optimisation task was run for 4 generations with 16 individuals per generation. Since a complete simulation required about 20 minutes of CPU time on an HP C200 workstation, the total CPU time was about 21 hours and 20 minutes. Figs. 2, 3 and 4 report the solutions obtained. It can be noted that the hardness values increase from the first to the fourth generation, while the weights decrease. The same is not true of the porosity, whose values seem less inclined to converge towards an optimum: the same range between the minimum and the maximum value is maintained in both the first and the last generation. Fig. 2 also illustrates the strong correlation between the casting weight and the hardness; this correlation is due to the particular geometry of the casting under examination and to the position where the hardness value was determined. The dependency between these two variables is in any case favourable, the hardness increasing as the casting weight decreases owing to the changed cooling conditions. Figs. 3 and 4 show that the other variables are not correlated with each other. From all three figures it can be noted that the optimisation algorithm tends to place a greater number of solutions in a specific area of the design-objectives plane, where the optimum solution can be expected to be located.

As mentioned before, the second optimisation step was performed using an approximation function consisting of three independent neural networks (one for each design objective) fitted to the results of the first optimisation procedure. To explore the variable domain more extensively, an optimisation task was then run for 8 generations with 16 individuals per generation. Figs. 5, 6 and 7 report the solutions obtained. Comparing this set of figures with the corresponding previous ones (Figs. 2, 3 and 4), it can be noted that the GA reached better solutions, located in the top-right area of each diagram. Since the raw casting weight was about 2.5 kg and was not influenced by any of the chosen variables, the total casting weight was never lower than about 3 kg.

All the design objectives were further processed to obtain the results in the form of a Pareto frontier. The Pareto set, reported in Table 1, consists of 11 non-dominated solutions. A direct comparison among them allowed three solutions to be identified (numbers 4, 7 and 8 in Table 1) that seemed to reach the best compromise among the three objectives. These solutions were checked by running three MAGMAIron simulations. The comparison between the design objectives as predicted by the response surface model and as calculated by MAGMAIron is reported in Table 2. The hardness values are predicted with good accuracy by the neural network, while the porosity values do not match those calculated by MAGMAIron satisfactorily. Nevertheless, the optimum sets of variables (4, 7 and 8) reported in Table 1, together with the objectives calculated by MAGMAIron, were compared with the set of variables corresponding to present foundry practice. The results, reported in Table 3, suggest decreasing the heat transfer coefficient and the feeder size and increasing the feeder-neck section in order to reach the objectives; the initial temperature is already very near the optimised value. Finally, Fig. 8 compares the sizes of the feeders, highlighting how much bigger the feeder currently adopted is with respect to that of the optimised solution.
6. Conclusions

FRONTIER was applied to the MAGMASOFT code, which enables the numerical simulation of mould filling and of the solidification of castings. To date it has not been possible to interface FRONTIER with MAGMASOFT directly, since this software does not accept command files for inputting design parameters. Consequently, an initial optimisation procedure running MOGA for 4 generations with 16 individuals per generation was performed, and a neural network was built from the available design objectives. A second optimisation task running FRONTIER for 8 generations with 16 individuals per generation was then performed, and some design objectives belonging to the Pareto set were checked by running MAGMASOFT simulations. The following conclusions can be drawn:
- In this application the hardness could be increased from 207 HB to 220 HB and the casting weight reduced from 4.53 kg to 3.11 kg, with a slight increase in porosity from 1.27% to 1.80%.
- The approximation that could be reached with the neural network is probably limited by the small number of available 'training points', considering that five design variables were treated. In fact, one of the three design objectives was not predicted satisfactorily when compared with the solution obtained by MAGMASOFT.

References

[1] M.C. Flemings: "Solidification Processing", McGraw-Hill, New York (1974).
[2] ASM Metals Handbook, 9th ed., vol. 15: Casting (1988), ASM, Metals Park, Ohio.
[3] P.R. Sahm, P.N. Hansen: "Numerical simulation and modelling of casting and solidification processes for foundry and cast-house", CIATF (1984).
[4] D.M. Stefanescu: "Critical review of the second generation of solidification models for castings: macro transport - transformation kinetics codes", Proc. Conf. "Modeling of Casting, Welding and Advanced Solidification Processes VI", TMS (1993), pp. 3-20.
[5] T. Overfelt: "The manufacturing significance of solidification modeling", Journal of Metals, 6 (1992), pp. 17-20.
[6] T. Overfelt: "Sensitivity of a steel plate solidification model to uncertainties in thermophysical properties", Proc. Conf. "Modelling of Casting, Welding and Advanced Solidification Processes VI", pp. 663-670.
[7] F. Bonollo, N. Gramegna: "L'applicazione delle proprietà termofisiche dei materiali nei codici di simulazione numerica dei processi di fonderia", Proc. Conf. "La misura delle grandezze fisiche" (1997), Faenza, pp. 285-299.
[8] MAGMAIron User Manual.
[9] C. Poloni, V. Pediroda: "GA coupled with computationally expensive simulations: tools to improve efficiency", in "Genetic Algorithms and Evolution Strategies in Engineering and Computer Science", J. Wiley and Sons (1998).
[10] P. Bratley, B.L. Fox: "Algorithm 659: Implementing Sobol's quasirandom sequence generator", ACM Transactions on Mathematical Software, vol. 14, no. 1 (March 1988), pp. 88-100.
[11] P. Sen, J.B. Yang: "Multiple-criteria decision-making in design selection and synthesis", Journal of Engineering Design, vol. 6, no. 3 (1995), pp. 207-230.
[12] I.L. Svensson, E. Lumback: "Computer simulation of the solidification of castings", Proc. Conf. "State of the art of computer simulation of casting and solidification processes", Strasbourg (1986), pp. 57-64.
[13] I.L. Svensson, M. Wessen, A. Gonzales: "Modelling of structure and hardness in nodular cast iron castings at different silicon contents", Proc. Conf. "Modeling of Casting, Welding and Advanced Solidification Processes VI", TMS (1993), pp. 29-36.
[14] E. Fras, W. Kapturkiewicz, A.A. Burbielko: "Computer modeling of fine graphite eutectic grain formation in the casting central part", Proc. Conf. "Modeling of Casting, Welding and Advanced Solidification Processes VI", TMS (1993), pp. 261-268.
[15] D.M. Stefanescu, G. Uphadhya, D. Bandyopadhyay: "Heat transfer-solidification kinetics modeling of solidification of castings", Metallurgical Transactions, 21A (1990), pp. 997-1005.
[16] H. Tian, D.M. Stefanescu: "Experimental evaluation of some solidification kinetics-related material parameters required in modeling of solidification of Fe-C-Si alloys", Proc. Conf. "Modeling of Casting, Welding and Advanced Solidification Processes VI", TMS (1993), pp. 639-646.
[17] S. Viswanathan, V.K. Sikka, H.D. Brody: "The application of quality criteria for the prediction of porosity in the design of casting processes", Proc. Conf. "Modeling of Casting, Welding and Advanced Solidification Processes VI", TMS (1993), pp. 285-292.
[18] S. Viswanathan: "Industrial applications of solidification technology", Journal of Metals, 3 (1996), p. 19.
[19] F. Bonollo, S. Odorizzi: "Casting on the screen - Simulation as a casting tool", Benchmark, 2 (1998), pp. 26-29.
[20] F. Bonollo, N. Gramegna, L. Kallien, D. Lees, J. Young: "Simulazione dei processi di fonderia e ottimizzazione dei getti: due casi applicativi", Proc. XIV Assofond Conf. (1996), Baveno.
[21] F. Bonollo, N. Gramegna, S. Odorizzi: "Modellizzazione di processi di fonderia", Fonderia, 11/12 (1997), pp. 50-54.
[22] F.J. Bradley, T.M. Adams, R. Gadh, A.K. Mirle: "On the development of a model-based knowledge system for casting design", Proc. Conf. "Modeling of Casting, Welding and Advanced Solidification Processes VI", TMS (1993), pp. 161-168.
[23] G. Upadhya, A.J. Paul, J.L. Hill: "Optimal design of gating & risering for casting: an integrated approach using empirical heuristics and geometrical analysis", Proc. Conf. "Modeling of Casting, Welding and Advanced Solidification Processes VI", TMS (1993), pp. 135-142.
[24] T.E. Morthland, P.E. Byrne, D.A. Tortorelli, J.A. Dantzig: "Optimal riser design for metal castings", Metallurgical Transactions, 26B (1995), pp. 871-885.
[25] N. Gramegna: "Colata a gravità in ghisa sferoidale", Engin Soft Trading Internal Report (1996).

Fig. 1: MAGMASOFT materials database and the adopted mesh for the cast and the feeder.

Figs. 2, 3 and 4: solutions in the design-objectives space (hardness vs. casting weight, hardness vs. porosity, casting weight vs. porosity) obtained using the MAGMASOFT software.

Table 1: Pareto set extracted from the 128 available solutions obtained with the neural network.

N°   T_init (°C)   HTC (W/m²K)   H_feeder (mm)   D_feeder (mm)   A_neck (mm²)   Hardness (HB)   Casting weight (kg)   Porosity (%)
1    1300          1200          86              30              194            217             2.90                  4.60
2    1380          811           97              36              289            215             3.34                  0.87
3    1341          1037          87              32              276            218             3.17                  2.35
4    1352          460           105             33              400            218             3.38                  0.70
5    1371          940           80              30              176            219             3.09                  3.75
6    1335          1200          85              31              225            216             3.01                  3.93
7    1365          400           89              32              341            220             3.64                  0.77
8    1336          400           112             31              400            219             3.47                  0.43
9    1362          814           84              31              315            217             3.11                  2.78
10   1346          1009          89              32              278            219             3.18                  2.60
11   1335          1059          85              31              225            217             3.10                  1.90
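A sketch of the non-dominance test used to extract a Pareto set, applied here to the three shortlisted rows of Table 1 (hardness is negated so that all objectives are minimised). This is a generic filter for illustration, not FRONTIER's implementation:

```python
import numpy as np

# Objectives per candidate: (-hardness, weight, porosity) -- all to minimise.
# Values taken from rows 4, 7 and 8 of Table 1.
F = np.array([
    [-218.0, 3.38, 0.70],
    [-220.0, 3.64, 0.77],
    [-219.0, 3.47, 0.43],
])

def pareto_mask(F):
    """Boolean mask of non-dominated rows (all objectives minimised)."""
    n = len(F)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                keep[i] = False   # row i is dominated by row j
                break
    return keep

print(pareto_mask(F))   # the three rows are mutually non-dominated: all True
```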
Toeplitz and Circulant Matrices

1.1 Toeplitz and Circulant Matrices

A Toeplitz matrix is an n × n matrix with constant diagonals, i.e., a matrix $A_n = [t_{k-j};\; k, j = 0, 1, \ldots, n-1]$ of the form

$$A_n = \begin{pmatrix} t_0 & t_{-1} & t_{-2} & \cdots & t_{-(n-1)} \\ t_1 & t_0 & t_{-1} & & \\ t_2 & t_1 & t_0 & & \vdots \\ \vdots & & & \ddots & \\ t_{n-1} & \cdots & & & t_0 \end{pmatrix}. \qquad (1.1)$$

Such matrices arise in many applications. For example, suppose that

$$x = (x_0, x_1, \ldots, x_{n-1})'$$

is a column vector. Toeplitz matrices also arise in solutions to differential and integral equations, spline functions, and problems and methods in physics, mathematics, statistics, and signal processing. A common special case of Toeplitz matrices, one that will result in significant simplification and play a fundamental role in developing more general results, arises when every row of the matrix is a right cyclic shift of the row above it, so that $t_k = t_{-(n-k)} = t_{k-n}$ for $k = 1, 2, \ldots, n-1$. In this case the picture becomes

$$C_n = \begin{pmatrix} t_0 & t_{-1} & t_{-2} & \cdots & t_{-(n-1)} \\ t_{-(n-1)} & t_0 & t_{-1} & & \\ t_{-(n-2)} & t_{-(n-1)} & t_0 & & \vdots \\ \vdots & & & \ddots & t_{-1} \\ t_{-1} & t_{-2} & \cdots & & t_0 \end{pmatrix}. \qquad (1.2)$$

A matrix of this form is called a circulant matrix.
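The cyclic-shift structure is what makes circulant matrices so tractable: they are diagonalised by the discrete Fourier transform. A short numerical check (using SciPy's constructors, with an arbitrary first column) follows:

```python
import numpy as np
from scipy.linalg import circulant, toeplitz

# First column of a 4x4 circulant: each row is a cyclic shift of the one above.
c = np.array([1.0, 2.0, 3.0, 4.0])
C = circulant(c)

# Eigenvalues of a circulant are the DFT of its first column,
# and its eigenvectors are the DFT basis vectors.
eig_fft = np.fft.fft(c)
eig_direct = np.linalg.eigvals(C)
print(np.sort_complex(eig_fft))
print(np.sort_complex(eig_direct))   # same values up to ordering and round-off

# A general Toeplitz matrix built from the same data, for comparison.
T = toeplitz(c)
print(T[0])   # first row: constant diagonals, but no cyclic wrap-around
```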
Probability and Stochastic Processes

Probability and stochastic processes are two fundamental concepts in the field of mathematics and statistics. They play a crucial role in understanding and predicting the uncertain behavior of various systems and processes. In this essay, I will delve into the significance of probability and stochastic processes, their applications in real-world scenarios, and their impact on decision-making and risk management.

First and foremost, probability is the likelihood of a particular event or outcome occurring. It is a measure of uncertainty and is used to quantify the chances of different events taking place. Probability theory provides a framework for analyzing random phenomena and making informed decisions in the presence of uncertainty. Whether it's predicting the outcome of a coin toss or estimating the likelihood of a stock price movement, probability theory forms the foundation for understanding and dealing with uncertainty in various domains.

Stochastic processes, on the other hand, are mathematical models used to describe the evolution of random variables over time. They are essential for modeling and analyzing systems that evolve in a random or unpredictable manner. Stochastic processes find applications in a wide range of fields, including finance, engineering, biology, and telecommunications. For instance, in finance, stochastic processes are used to model the movement of asset prices, while in telecommunications they are used to analyze the behavior of data traffic in networks.

The significance of probability and stochastic processes extends beyond the realm of mathematics and statistics. These concepts have profound implications for decision-making and risk management in business and everyday life. In business, probabilistic models are used to assess the potential risks and returns associated with investment decisions. By quantifying the uncertainty involved, businesses can make more informed choices and minimize their exposure to risk. Similarly, in everyday life, an understanding of probability and stochastic processes can help individuals make better decisions in situations involving uncertainty, such as choosing insurance policies or making investment decisions.

Moreover, the applications of probability and stochastic processes are not limited to the business world. They also play a crucial role in various scientific and engineering disciplines. For instance, in physics, probabilistic models are used to describe the behavior of subatomic particles, while in biology, stochastic processes are employed to model the dynamics of populations and the spread of diseases. In engineering, these concepts are used to analyze the reliability and performance of complex systems, such as communication networks and manufacturing processes.

In addition to their practical applications, probability and stochastic processes have contributed to the development of advanced technologies and scientific breakthroughs. For example, the field of quantum mechanics, which describes the behavior of particles at the atomic and subatomic levels, relies heavily on probabilistic models to make predictions about the behavior of particles. Similarly, in the field of machine learning and artificial intelligence, probabilistic models are used to make predictions and decisions in the presence of uncertainty, leading to advancements in areas such as natural language processing, image recognition, and autonomous systems.
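As a concrete instance of the asset-price modeling mentioned above, here is a short simulation of geometric Brownian motion; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Geometric Brownian motion, a standard stochastic-process model of an
# asset price: dS = mu*S dt + sigma*S dW. Parameters are illustrative.
S0, mu, sigma = 100.0, 0.05, 0.2    # initial price, drift, volatility
T, n_steps, n_paths = 1.0, 252, 10_000
dt = T / n_steps

Z = rng.standard_normal((n_paths, n_steps))
log_paths = np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z,
                      axis=1)
S_T = S0 * np.exp(log_paths[:, -1])   # terminal prices after one year

# Estimated probability the price ends above its starting value.
print("P(S_T > S_0) ~", np.mean(S_T > S0))
```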
In conclusion, probability and stochastic processes are fundamental concepts that underpin our understanding of uncertainty and randomness in various domains. Their significance extends beyond the realm of mathematics and statistics, influencing decision-making, risk management, and the advancement of technology and science. As we continue to grapple with uncertainty in our increasingly complex world, a deep understanding of probability and stochastic processes will be essential for making informed decisions and driving innovation.
Target Recognition

ISSN 1751-8784. IET Radar Sonar Navig., 2009, Vol. 3, Iss. 4, pp. 328-340. doi: 10.1049/iet-rsn.2008.0146. © The Institution of Engineering and Technology 2009.

Figure 7: Four target PSVs corresponding to four hypotheses in Band #1.

... select transmission waveforms in a single experiment of a dual-band CR ... be transmitted in that band continued until the experiment was terminated. The following figures detail the path taken by the dual-band MI waveform over a particular experiment, chosen such that it had taken quite a few ...

Figure 12: Probability update history of the four hypotheses in a single experiment of dual-band CR.
Analysis of the Vibration Characteristics of Viscoelastic Structures Based on the Randomness of Material Parameters (Gui Hongbin)

JOURNAL OF VIBRATION AND SHOCK, Vol. 21, No. 4, 2002

Analysis of the vibration characteristics of viscoelastic structures based on the randomness of material parameters

Gui Hongbin (1), Zhao Deyou (2), Jin Xianding (1)
(1. Institute of Structural Mechanics, School of Naval Architecture and Ocean Engineering, Shanghai Jiao Tong University, Shanghai 200030; 2. Department of Naval Architecture, Dalian University of Technology, Dalian 116024)

Abstract: The vibration characteristics of viscoelastic structures are analyzed on the basis of the randomness of the viscoelastic material. The influence of the randomness of the modulus model on the natural frequencies and modal loss factors of the structure is studied. Three forms of model randomness are examined: the constant complex modulus model, the Kelvin-Voigt model, and the three-parameter standard rheological model. The results show that the randomness of the viscoelastic material parameters has a considerable influence on the modal loss factors of viscoelastic structures. Stochastic analysis of viscoelastic structures is therefore necessary.

Keywords: viscoelasticity, random structures, vibration, Monte Carlo method
CLC number: O327

0. Introduction

Traditional engineering structural analysis is usually carried out with deterministic mechanical models, in which the structural parameters entering the calculation are deterministic quantities. In the actual analysis and computation, the original structural system is essentially replaced by a mean-parameter system in some sense. Existing research shows that such an analysis gives results that agree reasonably well with reality only when the variability of the original system with respect to this model system is small; otherwise, the analysis can hardly capture the response of the original system even in the sense of the mean response. Stochastic structural analysis is thus one of the fundamental directions in the development of engineering structural analysis theory. Li Jie [1], Chen Suhuan [2] and Zhang Xiangwei [3] have all treated the dynamic analysis of random structures in dedicated works, and reference [4] reviews the research on the dynamic analysis of random structures from 1987 to 1998. Dynamic analyses of viscoelastic structures that account for structural randomness, however, are rare. Reference [5] studied the vibration of a viscoelastic rod considering the relations between random parameters; the dynamic equation considered there is of the conventional M-C-K form, and the randomness of the stiffness and damping is attributed to the randomness of the rod diameter.

For different viscoelastic models, this paper examines the influence of the randomness of material parameters on the natural frequencies and modal loss factors. A direct Monte Carlo method is used for the analysis; the basic random variables (the input random variables) are all taken to be normally distributed, and the coefficient of variation is used as the criterion for comparing the dispersion of the random variables.
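The following sketch illustrates the direct Monte Carlo idea on a single-degree-of-freedom system with a constant complex modulus (complex stiffness k* = k(1 + i*eta)). It is not the paper's finite-element model, and the parameter values and coefficients of variation are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Direct Monte Carlo for a 1-DOF system with complex stiffness k(1 + i*eta).
n = 20_000
m = 1.0                                   # mass [kg]
k = rng.normal(1.0e4, 0.05 * 1.0e4, n)    # stiffness, 5% coeff. of variation
eta = rng.normal(0.2, 0.10 * 0.2, n)      # material loss factor, 10% c.o.v.

lam = k * (1.0 + 1j * eta) / m            # complex eigenvalue omega*^2
omega_n = np.sqrt(lam.real)               # natural frequency [rad/s]
eta_modal = lam.imag / lam.real           # modal loss factor

# Compare the dispersion of the outputs via their coefficients of variation.
for name, x in [("omega_n", omega_n), ("modal loss factor", eta_modal)]:
    print(f"{name}: mean = {x.mean():.4g}, c.o.v. = {x.std() / x.mean():.3f}")
```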
2020-2021 Academic Year: Shanghai Pengpu No. 3 Middle School, Senior Three English Mid-Term Examination (Second Semester), with Reference Answers
Part One: Reading (two sections, 40 points)

Section One (15 questions; 2 points each, 30 points)

Read the following passages and choose the best answer to each question from the four options A, B, C and D.

A

Overnight French Toast

What You'll Need
- 16-ounce loaf of French bread
- 5 eggs
- 1 1/2 cups milk
- 1/2 cup half-and-half
- 1/3 cup maple syrup
- 1/2 teaspoon salt
- foil
- 2 tablespoons melted butter (for topping)
- 2 tablespoons maple syrup (for topping)

What to Do
- With an adult's help, cut the bread into 1-inch slices.
- Place the eggs, milk, half-and-half, maple syrup, and salt into a large bowl. Stir the mixture until blended.
- Place the sliced bread into a baking dish. Pour the mixture over the bread and press the slices into it. Cover the dish with foil and refrigerate overnight.
- Remove the dish from the refrigerator at least one hour before baking. Ask an adult for help to preheat the oven to 375°F. Bake the French toast for 35 minutes or until golden brown.
- For the topping, combine the melted butter and 2 tablespoons of maple syrup. Pour it over the French toast before serving.

1. How much salt will you need to make a French toast?
A. 1/3 cup. B. 1/2 teaspoon. C. 2 tablespoons. D. 16 ounces.

2. How will you use foil?
A. Place the sliced bread. B. Cover the dish. C. Remove the dish. D. Eat the French toast.

3. Who is the passage written for?
A. Teachers. B. Parents. C. Cooks. D. Kids.

B

Scientists at the Massachusetts Institute of Technology (MIT) have turned spider webs into music, creating a strange soundtrack that could help them better understand how spiders build their complex creations and even how they communicate.

The MIT team worked with Berlin-based artist Tomas Saraceno to take 2D (two-dimensional) laser scans of a spider web, which were linked together and made into a mathematical model that could recreate the web in 3D in VR (virtual reality). They also worked with MIT's music department to create the virtual instrument.

"Even though the web looks really random, there actually are a lot of internal structures, and you can visualize them and you can look at them, but it's really hard for the human imagination or human brain to grasp all these structural details," said MIT engineering professor Markus Buehler, who presented the work on Monday at a virtual meeting of the American Chemical Society.

Listening to the music while moving through the VR spider web lets you see and hear these structural changes and gives a better idea of how spiders see the world, he told CNN. "Spiders use vibrations as a way to locate themselves, to communicate with other spiders, and so the idea of thinking really like a spider would experience the world was something that was very important to us as spider material scientists," Buehler said.

Spiders are able to build their webs without shelves or supports, so having a better idea of how they work could lead to the development of advanced new 3D printing techniques. "The reason why I did that is I wanted to be able to get information really from the spider world, which is very weird and mysterious," Buehler explained. In addition to the scientific value, Buehler said the webs are musically interesting and that you can hear the sounds the spider creates during construction. "It's unusual and eerie and scary, but finally beautiful," he described.

4. What have MIT scientists done according to the passage?
A. They have translated spider webs into sounds. B. They have made a mathematical model to produce webs. C. They have created a soundtrack to catch spiders. D. They have known how spiders communicate.
5. What can we know about spider webs from paragraph 3?
A. Their structures are beautiful and clear. B. Professor Markus Buehler knows them well. C. The American Chemical Society presents the result. D. They are complex for people to figure out.

6. In which field will the study be helpful?
A. Virtual reality. B. Printing. C. Painting. D. Film-making.

7. What is the main idea of the passage?
A. It tells us that the music created by spiders is scary. B. It shows how the researchers carry out the experiment. C. It presents a new and creative way to study spiders. D. It explains why scientists did the experiment.

C

The Native Americans of northern California were highly skilled at basketry, using the reeds, grasses, barks, and roots they found around them to fashion articles of all sorts and sizes: not only trays, containers, and cooking pots, but hats, boats, fish traps, baby carriers, and ceremonial objects.

Of all these experts, none excelled the Pomo, a group who lived on or near the coast during the 1800's, and whose descendants continue to live in parts of the same region to this day. They made baskets three feet in diameter and others no bigger than a thimble. The Pomo people were masters of decoration. Some of their baskets were completely covered with shell pendants; others with feathers that made the baskets' surfaces as soft as the breasts of birds. Moreover, the Pomo people made use of more weaving techniques than did their neighbors. Most groups made all their basketwork by twining: the twisting of a flexible horizontal material, called a weft, around stiffer vertical strands of material, the warp. Others depended primarily on coiling, a process in which a continuous coil of stiff material is held in the desired shape with tight wrapping of flexible strands. Only the Pomo people used both processes with equal ease and frequency. In addition, they made use of four distinct variations on the basic twining process, often employing more than one of them in a single article.

Although a wide variety of materials was available, the Pomo people used only a few. The warp was always made of willow, and the most commonly used weft was sedge root, a woody fiber that could easily be separated into strands no thicker than a thread. For color, the Pomo people used the bark of redbud for their twined work and dyed bulrush root for black in coiled work. Though other materials were sometimes used, these four were the staples in their finest basketry.

If the basketry materials used by the Pomo people were limited, the designs were amazingly varied. Every Pomo basket maker knew how to produce from fifteen to twenty distinct patterns that could be combined in a number of different ways.

8. The word "fashion" in paragraph 1 is closest in meaning to ______.
A. maintain B. organize C. trade D. create

9. What is the author's main point in paragraph 2?
A. The neighbors of the Pomo people tried to improve on the Pomo basket weaving techniques. B. The Pomo people were the most skilled basket weavers in their region. C. The Pomo people learned their basket weaving techniques from other Native Americans. D. The Pomo baskets have been handed down for generations.

10. According to the passage, the relationship between redbud and twining is most similar to the relationship between ______.
A. bulrush and coiling B. weft and warp C. willow and feathers D. sedge and weaving

11. Which of the following statements about Pomo baskets can be best inferred from the passage?
A. Baskets produced by other Native Americans were less varied in design than those of the Pomo. B. Baskets produced by Pomo weavers were primarily for ceremonial and religious purposes. C. There were a very limited number of basket-making materials available to the Pomo people. D. The basket-making production of the Pomo people has been increasing over the years.

D

Itzhak Perlman was born in Tel Aviv, in what was then Palestine, in 1945. Today he lives in New York City, but his music has made him a citizen of the world. He has played in almost every major city. He has won many Grammy Awards for his recordings. He has also won Emmy Awards for his work on television.

Itzhak Perlman suffered from polio at the age of four. The disease damaged his legs. He uses a wheelchair or walks with the aid of crutches on his arms. But none of this stopped him from playing the violin. He began as a young child. He took his first lessons at the Music Academy of Tel Aviv. Very quickly, his teachers recognized that he had a special gift.

At thirteen he went to the United States to appear on television. His playing earned him the financial aid to attend the Juilliard School in New York. In 1964 Itzhak Perlman won the Leventritt Competition in that city. His international fame had begun.

His music is full of power and strength. It can be sad or joyful, loud or soft. But critics say it is not the music alone that makes his playing so special. They say he is able to communicate the joy he feels in playing, and the emotions that great music can deliver.

Anyone who has attended a performance by Itzhak Perlman will tell you that it is exciting to watch him play. His face changes as the music from his violin changes. He looks sad when the music seems sad. He smiles and closes his eyes when the music is light and happy. He often looks dark and threatening when the music seems dark and threatening.

12. According to the passage, what do we know about Itzhak Perlman?
A. He is 75 years old today. B. He was born in New York City. C. He has some achievements in music. D. He was a rich citizen of the world.

13. When Itzhak Perlman first learned music, his teachers ________.
A. ignored his talents B. thought he was fit to learn music C. had pity on him D. didn't want to accept him

14. What makes Itzhak Perlman's playing special according to critics?
A. The emotions he communicates in his playing. B. The style in which he plays his music. C. The kind of music he plays. D. The power and strength in his music.

15. How do people feel when they hear Itzhak Perlman play?
A. Moved. B. Calm. C. Funny. D. Excited.

Section Two (5 questions; 2 points each, 10 points)

Read the following passage and choose the option that best fits each blank from the options given after the passage.
Primary Degradation of PVC

- One proposed mechanism: degradation proceeds by ionization of chlorine followed by rapid elimination of a proton. Problem: this mechanism cannot explain the real polyene distribution, nor the catalytic effect of HCl.
- Structural irregularities, such as tertiary or allylic chlorine atoms, increase the degradation; initial rates of degradation are proportional to the content of these irregularities.
- ... levels off. In the absence of oxygen, the molecular weight and viscosity increase; at some degradation point, the melt viscosity increases considerably, indicating crosslinking.

Most Important Classes of Stabilizers

- Lead-based stabilizers
- Alkyltin stabilizers
- Mixed-metal stabilizers
- Alkyl phosphite stabilizers

Stabilizer function: scavenging the hydrogen chloride generated during degradation; the stabilizer should react with HCl to protect the primary stabilizers.
Probability and Stochastic Processes
Probability and stochastic processes are important concepts in the field of mathematics and have applications in various areas such as engineering, finance, and computer science. In this response, I will discuss the significance of probability and stochastic processes from multiple perspectives, highlighting their practical applications, theoretical foundations, and potential limitations.

From a practical perspective, probability and stochastic processes play a crucial role in decision-making under uncertainty. Whether it is predicting the weather, estimating the risk of a financial investment, or designing a reliable communication system, the ability to quantify and analyze uncertainty is essential. Probability theory provides a framework for modeling and analyzing random events, enabling us to make informed decisions based on the likelihood of different outcomes. Stochastic processes, on the other hand, allow us to model systems that evolve over time in a probabilistic manner, providing valuable insights into the behavior of complex systems.

In the field of engineering, probability and stochastic processes are used extensively in reliability analysis and system design. By modeling the failure rates of components and the interactions between them, engineers can evaluate the reliability of a system and identify potential weaknesses. This information is crucial for designing robust systems that can withstand uncertainties and minimize the risk of failure. Stochastic processes, such as Markov chains and queuing theory, are also used to model and analyze various engineering systems, including communication networks, manufacturing processes, and transportation systems.

From a financial perspective, probability and stochastic processes are essential tools for risk management and investment analysis. Financial markets are inherently uncertain, and understanding the probabilistic nature of asset prices and returns is crucial for making informed investment decisions. By modeling the behavior of financial variables using stochastic processes, such as geometric Brownian motion or jump-diffusion processes, analysts can estimate the probabilities of different market scenarios and assess the risk associated with different investment strategies. This information is invaluable for portfolio management, option pricing, and hedging strategies.

From a theoretical perspective, probability theory and stochastic processes provide a rigorous mathematical foundation for understanding randomness and uncertainty. Probability theory, with its axioms and theorems, allows us to reason logically about uncertain events and make precise statements about their probabilities. Stochastic processes, as mathematical models for random phenomena, provide a framework for studying the long-term behavior of systems and analyzing their statistical properties. This theoretical understanding is not only important for practical applications but also for advancing our knowledge in various scientific disciplines, including physics, biology, and the social sciences.

However, it is important to acknowledge the limitations of probability and stochastic processes. Firstly, these concepts are based on assumptions and simplifications that may not always hold in real-world situations. For example, many stochastic models assume that the underlying processes are stationary and independent, which may not be true in practice.
Secondly, probability and stochastic processes can only provide probabilistic predictions and estimates, rather than deterministic outcomes. This inherent uncertainty means that even with the best models and data, there will always be a degree of unpredictability. Lastly, the accuracy of probabilistic and stochastic models relies heavily on the availability and quality of data. In situations where data is limited or unreliable, the predictions and estimates obtained from these models may be less accurate or even misleading.

In conclusion, probability and stochastic processes are fundamental concepts with wide-ranging applications and theoretical significance. They provide a powerful framework for quantifying and analyzing uncertainty, enabling us to make informed decisions and understand the behavior of complex systems. From practical applications in engineering and finance to theoretical foundations in mathematics and science, probability and stochastic processes play a crucial role in our understanding of the world. However, it is important to recognize their limitations and the inherent uncertainties they entail. By embracing uncertainty and using probability and stochastic processes as tools for reasoning and decision-making, we can navigate the complexities of the world with greater confidence and understanding.
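As a concrete instance of the Markov-chain models mentioned above, the following sketch computes and then empirically verifies the stationary distribution of a two-state chain; the transition probabilities are illustrative:

```python
import numpy as np

# A two-state discrete-time Markov chain; P[i, j] is the probability of
# moving from state i to state j. The numbers are made up for illustration.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()
print("stationary distribution:", pi)        # analytic answer: [0.8, 0.2]

# Cross-check by simulating a long trajectory of the chain.
rng = np.random.default_rng(7)
state, counts = 0, np.zeros(2)
for _ in range(100_000):
    state = rng.choice(2, p=P[state])
    counts[state] += 1
print("empirical frequencies:", counts / counts.sum())
```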
Discourse Patterns of Complex Adaptation
Discourse: Patterns of Complex Adaptation
Complexity, Society, and Liberty
© Copyright 1996, Chaos Limited. All rights reserved.
Glenda H. Eoyang, Chaos Limited, 50 East Golden Lake Road, Circle Pines, MN 55014, eoyang@delphi.com
Brenda Fiala Stewart, University of Minnesota, 5244 Beaver Street, White Bear Lake, MN 55110-6539, fiala003@maroon.tc.umn.edu
Abstract This paper attempts to contribute to an understanding of the dynamics of conversation in a learning community as a special case of complex adaptive systems. The transcript of a conversation is coded according to categories drawn from complex adaptive systems. The time series of the conversation is analyzed for patterns of participation by various speakers and patterns of sequence of types of comments (self-similarity, difference, and self-organization). Best-fit linear time series models are estimated and correlated to the experiences of speakers in the conversation. A phase-space diagram of the participants' contributions is generated and analyzed.
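A minimal sketch of the sequence analysis the abstract describes, assuming a hypothetical three-category coding of comments; the tabulated lag-one transitions below are a simple stand-in for the paper's phase-space diagram:

```python
import numpy as np

# Hypothetical coded conversation: each comment is tagged with one of the
# paper's three categories. The codes and probabilities here are made up.
SELF_SIMILARITY, DIFFERENCE, SELF_ORGANIZATION = 0, 1, 2
rng = np.random.default_rng(3)
codes = rng.choice(3, size=200, p=[0.5, 0.3, 0.2])

# Lag-one transition counts: entry [i, j] counts how often a comment of
# type i is immediately followed by one of type j -- a tabular analogue
# of a lag-one phase-space plot of the comment sequence.
transitions = np.zeros((3, 3), dtype=int)
for a, b in zip(codes[:-1], codes[1:]):
    transitions[a, b] += 1
print(transitions)

# Row-normalising gives the empirical P(next type | current type).
print(transitions / transitions.sum(axis=1, keepdims=True))
```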
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 39, NO. 4, JULY 1993
Proper Complex Random Processes with Applications to Information Theory
Fredy D. Neeser, Student Member, IEEE, and James L. Massey, Fellow, IEEE
Abstract: The "covariance" of complex random variables and processes, when defined consistently with the corresponding notion for real random variables, is shown to be determined by the usual (complex) covariance together with a quantity called the pseudo-covariance. A characterization of uncorrelatedness and wide-sense stationarity in terms of covariance and pseudo-covariance is given. Complex random variables and processes with a vanishing pseudo-covariance are called proper. It is shown that properness is preserved under affine transformations and that the complex multivariate Gaussian density assumes a natural form only for proper random variables. The maximum-entropy theorem is generalized to the complex multivariate case. The differential entropy of a complex random vector with a fixed correlation matrix is shown to be maximum if and only if the random vector is proper, Gaussian and zero-mean. The notion of circular stationarity is introduced. For the class of proper complex random processes, a discrete Fourier transform correspondence is derived relating circular stationarity in the time domain to uncorrelatedness in the frequency domain. As an application of the theory, the capacity of a discrete-time channel with complex inputs, proper complex additive white Gaussian noise, and a finite complex unit-sample response is determined. This derivation is considerably simpler than an earlier derivation for the real discrete-time Gaussian channel with intersymbol interference, whose capacity is obtained as a by-product of the results for the complex channel.

Index Terms: Proper complex random processes, circular stationarity, intersymbol interference, capacity.

Manuscript received November 20, 1991; revised October 30, 1992. This work was presented in part at the Information Theory Workshop, Oberwolfach, Germany, April 5-11, 1992. The authors are with the Signal and Information Processing Laboratory, ETH-Zentrum/ISI, CH-8092 Zürich, Switzerland. IEEE Log Number 9208096.

I. INTRODUCTION

The purpose of this paper is to provide a rounded treatment of certain complex random variables and processes, which we will call proper, and to show their usefulness in statistical communication theory. It will be shown, for instance, that the probability density function of a complex Gaussian random vector assumes the anticipated 'natural' form only for proper random vectors. The convenience of proper complex random variables will be demonstrated by the computation of capacity for the complex baseband equivalent of linear bandpass communication channels with memory and additive white Gaussian noise (AWGN).

Linear bandpass channels are usually represented in baseband by an equivalent two-dimensional channel with two quadrature inputs and outputs [1], [2]. In general, for a passband channel with memory, the quadrature components interfere, so the two-dimensional equivalent baseband channel does not reduce to a pair of independent quadrature channels as in the memoryless case. To simplify notation, most communication engineers describe the equivalent baseband channel in terms of complex signals and complex impulse responses. Formulations of linear systems for complex-valued signals are also increasingly employed in adaptive signal processing; see, e.g., [3]. Somewhat paradoxically, one finds very few treatments of complex random variables and processes in the literature. In fact, many investigators resort to the two-dimensional real representation of systems with complex signals whenever a probabilistic treatment is needed. Notable exceptions are Doob [4], who gives considerable attention to complex Gaussian random processes, and Wooding [5], who first derived the complex multivariate Gaussian density by assuming certain covariance relations, which are equivalent to properness in our terminology.

The organization of this paper is as follows. In Section II, we characterize second-order statistical properties, such as uncorrelatedness and wide-sense stationarity, of complex random variables and processes. We show that to specify the four covariances arising between the real and imaginary parts of two complex random variables $X$ and $Y$, one needs both the conventional covariance $c_{XY} \triangleq E[(X - m_X)(Y - m_Y)^*]$ and the unconventional quantity $\tilde{c}_{XY} \triangleq E[(X - m_X)(Y - m_Y)]$, which we will call the pseudo-covariance. Complex random variables and processes with a vanishing pseudo-covariance will be called proper. In Section III, we justify the terminology "proper" by demonstrating several natural results for the class of proper complex random variables and processes that do not hold in general. For instance, the probability density function and the entropy of a proper complex Gaussian random vector are specified solely by the vector of means and the matrix of (conventional) covariances. It is also shown that for bandpass communication channels with real wide-sense stationary noise, the complex noise at the demodulator output is proper. In Section IV, we prove a general discrete Fourier transform correspondence between circular stationarity in the time domain and uncorrelatedness in the frequency domain for sequences of proper complex random variables. An application of this correspondence is provided in Section V, where an earlier derivation of capacity for discrete-time Gaussian channels with memory [23] is considerably simplified by first generalizing to complex channels.
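A short numerical check of the covariance/pseudo-covariance distinction; the estimator names and the two test distributions are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# A proper complex Gaussian: i.i.d. real and imaginary parts of equal variance.
z_proper = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# An improper one: unequal variances in the two quadrature components.
z_improper = 1.2 * rng.standard_normal(n) + 0.3j * rng.standard_normal(n)

def cov_and_pseudocov(z):
    zc = z - z.mean()
    c = np.mean(zc * np.conj(zc))   # covariance        E[(Z - m)(Z - m)*]
    p = np.mean(zc * zc)            # pseudo-covariance E[(Z - m)(Z - m)]
    return c, p

for name, z in [("proper", z_proper), ("improper", z_improper)]:
    c, p = cov_and_pseudocov(z)
    # properness means the pseudo-covariance vanishes (up to sampling error)
    print(name, "covariance:", np.round(c, 3),
          "pseudo-covariance:", np.round(p, 3))
```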