DIMENSION REDUCTION IN l1 Practical Procedures for Dimension Reduction in l1
state variables 英文 定义

In the realm of computer science and programming, state variables serve as fundamental building blocks for modeling systems and processes that evolve over time. They embody the essence of dynamic behavior in software applications, enabling developers to capture and manipulate various aspects of an object or system's condition at any given moment. This essay delves into the concept of state variables from multiple perspectives, providing a detailed definition, discussing their roles and significance, examining their implementation across various programming paradigms, exploring their impact on program design, and addressing the challenges they introduce.**Definition of State Variables**At its core, a state variable is a named data item within a program or computational system that maintains a value that may change over the course of program execution. It represents a specific aspect of the system's state, which is the overall configuration or condition that determines its behavior and response to external stimuli. The following key characteristics define state variables:1. **Persistence:** State variables retain their values throughout the lifetime of an object or a program's execution, unless explicitly modified. These variables hold onto information that persists beyond a single function call or statement execution.2. **Mutability:** State variables are inherently mutable, meaning their values can be altered by program instructions. This property allows programs to model evolving conditions or track changes in a system over time.3. **Contextual Dependency:** The value of a state variable is dependent on the context in which it is accessed, typically determined by the object or scope to which it belongs. This context sensitivity ensures encapsulation and prevents unintended interference with other parts of the program.4. **Time-variant Nature:** State variables reflect the temporal dynamics of a system, capturing how its properties or attributes change in response to internal operations or external inputs. They allow programs to model systemswith non-static behaviors and enable the simulation of real-world scenarios with varying conditions.**Roles and Significance of State Variables**State variables play several critical roles in software development, contributing to the expressiveness, versatility, and realism of programs:1. **Modeling Dynamic Systems:** State variables are instrumental in simulating real-world systems with changing states, such as financial transactions, game characters, network connections, or user interfaces. By representing the relevant attributes of these systems as state variables, programmers can accurately model complex behaviors and interactions over time.2. **Enabling Data Persistence:** In many applications, maintaining user preferences, application settings, or transaction histories is crucial. State variables facilitate this persistence by storing and updating relevant data as the program runs, ensuring that users' interactions and system events leave a lasting impact.3. **Supporting Object-Oriented Programming:** In object-oriented languages, state variables (often referred to as instance variables) form an integral part of an object's encapsulated data. They provide the internal representation of an object's characteristics, allowing objects to maintain their unique identity and behavior while interacting with other objects or the environment.4. **Facilitating Concurrency and Parallelism:** State variables underpin the synchronization and coordination mechanisms in concurrent and parallel systems. They help manage shared resources, enforce mutual exclusion, and ensure data consistency among concurrently executing threads or processes.**Implementation Across Programming Paradigms**State variables find expression in various programming paradigms, each with its own idiomatic approach to managing and manipulating them:1. **Object-Oriented Programming (OOP):** In OOP languages like Java, C++, or Python, state variables are typically declared as instance variables withina class. They are accessed through methods (getters and setters), ensuring encapsulation and promoting a clear separation of concerns between an object's internal state and its external interface.2. **Functional Programming (FP):** Although FP emphasizes immutability and statelessness, state management is still necessary in practical applications. FP languages like Haskell, Scala, or Clojure often employ monads (e.g., State monad) or algebraic effects to model stateful computations in a pure, referentially transparent manner. These constructs encapsulate state changes within higher-order functions, preserving the purity of the underlying functional model.3. **Imperative Programming:** In imperative languages like C or JavaScript, state variables are directly manipulated through assignment statements. Control structures (e.g., loops and conditionals) often rely on modifying state variables to drive program flow and decision-making.4. **Reactive Programming:** Reactive frameworks like React or Vue.js utilize state variables (e.g., component state) to manage UI updates in response to user interactions or data changes. These frameworks provide mechanisms (e.g., setState() in React) to handle state transitions and trigger efficient UI re-rendering.**Impact on Program Design**The use of state variables significantly influences program design, both positively and negatively:1. **Modularity and Encapsulation:** Well-designed state variables promote modularity by encapsulating relevant information within components, objects, or modules. This encapsulation enhances code organization, simplifies maintenance, and facilitates reuse.2. **Complexity Management:** While state variables enable rich behavioral modeling, excessive or poorly managed state can lead to complexity spirals. Convoluted state dependencies, hidden side effects, and inconsistent state updates can make programs difficult to understand, test, and debug.3. **Testing and Debugging:** State variables introduce a temporal dimension to program behavior, necessitating thorough testing across different states and input scenarios. Techniques like unit testing, property-based testing, and state-machine testing help validate state-related logic. Debugging tools often provide features to inspect and modify state variables at runtime, aiding in diagnosing issues.4. **Concurrency and Scalability:** Properly managing shared state is crucial for concurrent and distributed systems. Techniques like lock-based synchronization, atomic operations, or software transactional memory help ensure data consistency and prevent race conditions. Alternatively, architectures like event-driven or actor-based systems minimize shared state and promote message-passing for improved scalability.**Challenges and Considerations**Despite their utility, state variables pose several challenges that programmers must address:1. **State Explosion:** As programs grow in size and complexity, the number of possible state combinations can increase exponentially, leading to a phenomenon known as state explosion. Techniques like state-space reduction, model checking, or static analysis can help manage this complexity.2. **Temporal Coupling:** State variables can introduce temporal coupling, where the correct behavior of a piece of code depends on the order or timing of state changes elsewhere in the program. Minimizing temporal coupling through decoupled designs, immutable data structures, or functional reactive programming can improve code maintainability and resilience.3. **Caching and Performance Optimization:** Managing state efficiently is crucial for performance-critical applications. Techniques like memoization, lazy evaluation, or cache invalidation strategies can optimize state access and updates without compromising correctness.4. **Debugging and Reproducibility:** Stateful programs can be challenging to debug due to their non-deterministic nature. Logging, deterministic replaysystems, or snapshot-based debugging techniques can help reproduce and diagnose issues related to state management.In conclusion, state variables are an indispensable concept in software engineering, enabling programmers to model dynamic systems, maintain data persistence, and implement complex behaviors. Their proper utilization and management are vital for creating robust, scalable, and maintainable software systems. While they introduce challenges such as state explosion, temporal coupling, and debugging complexities, a deep understanding of state variables and their implications on program design can help developers harness their power effectively, ultimately driving innovation and progress in the field of computer science.。
基于广义旁瓣相消的新降维方法

基于广义旁瓣相消的新降维方法张涛麟,廖桂生,曾操,薄保林(西安电子科技大学雷达信号处理国家重点实验室,陕西西安710071)摘要:针对阵列信号中部分自适应信号处理问题,提出了一种新的基于广义旁瓣相消的降维方法。
该方法利用波束形成无法对加性白噪声进行抑制的特点,根据广义旁瓣相消器的结构,设计出一种新的阻塞降维矩阵,使辅助通道内仅存干扰信号,从而增强主辅通道关于干扰信号的相关性,更有利于抑制干扰,同时也达到了降维的效果。
另一方面,在存在阵列误差的情况下,借助稳健的波束形成算法,对阻塞降维矩阵进行校正,既使算法对模型误差不敏感,又便于进行降维后的部分自适应处理,降低运算量。
最后通过大量的计算机仿真验证了所提方法的有效性和优越性。
关键词:部分自适应;广义旁瓣相消器;降维;阻塞矩阵;模型误差;稳健性中图分类号:T N911.7;T N957.51文献标识码:A文章编号:1672-2337(2007)03-0213-07Dimension-Reduction Method Based on GSCZH ANG T ao-lin,LIAO Gu-i sheng,ZENG Cao,BO Bao-lin(N ationa l K e y L ab f or R adar S ignal Pr oce ssing,X idian Univ er sity,X i p an710071,China)Abstract:A iming at the pr oblem o f par tial adaptiv ity in arr ay sig nal processing,a no vel dimension-re-duction method based o n G SC(General Sidelo be Canceller)is pr esented in this paper.A new block and d-i mensio n-r eduction matr ix is pr oposed using the st ructur e of GSC and the propert y that beamfo rming has no effect for addit ive white gaussian no ise,thus the coherence abo ut interfer ences is impro ved acco rding ly. T herefo re,not o nly can int erferences be suppr essed,but satisfactor y dimension-reductio n can be acquir ed a-l so.T he pr oposed method has low computatio nal complex it y due to the desig ned blo ck matrix and is not sens-i tive to the sy stem err or by using robust pro cessing.Simulation results demonstrate its v alidity and super io rit y at the end of the paper.Key words:partial adaptiv ity;GSC(G ener al sidelobe canceller);dimension-reduction;blo ck matr ix; system err or s;ro bustness1引言自适应波束形成已广泛用于雷达、声纳、通信、声学、地震学、医学等众多领域。
创意产业发展[英文]
![创意产业发展[英文]](https://img.taocdn.com/s3/m/ff9a5fdf84254b35eefd34a7.png)
The development dimension
8
Chapter III: Analysing the creative economy
- Need for systematic analysis, consistent methodology, reliable statistics and qualitative indicators
(UNCTAD)
4
Creative Economy
Is a set of knowledge-based economic activities with cross-cutting linkages to the overall economy
Creative Industries
Are tangible goods and intangible services with creative content, economic value and market objectives
9
Chapter IV: Towards an evidence-based assessment of the creative economy
- Reliable benchmark: international base using trade data
- Operational model: universal comparative analysis to all countries
Globalization re-shaped patterns of world cultural consumption in a world dominated by images, sounds, texts and symbols Connectivity influencing society life-style and the way creative products are created, reproduced and commercialized Shift towards a more holistic approach to development strategies 1 interface between economics, culture and technology
基于filter+wrapper模式的特征选择算法

Feature selection algorithm based on Filter + Wrapper pattern
Zhou Chuanhua1, 2, Liu Zhicai1, Ding Jing’an1, Zhou Jiayi3
(1. School of Management Science & Engineering Anhui University of Technology, Maanshan AnHui 243002, China; 2. School of Computer Science & Technology, University of Science &Technology of China, Hefei 230026, China; 3. Graduate School of Information, Production & Systems Waseda University, Tokyo, Japan) Abstract: Feature selection is one of the most important issues in data mining, machine learning and pattern recognition. Aiming at the problem of preference of traditional information gain algorithm in feature selection when the class and feature are unevenly distributed, this paper proposes a new feature selection algorithm based on information gain ratio and random forest. The proposed algorithm combined with the advantages of Filter and Wrapper modes. First, a comprehensive measurement of features is carried out from two aspects of information correlation and classification ability. Second, Sequential Forward Selection (SFS) strategy is used to select the features, and the classification accuracy is used as the evaluation index to measure the feature subset. Finally, obtain the optimal feature subset. The experimental results show that the proposed algorithm can not only achieve the effect of dimension reduction in feature space, but also effectively improve the classification performance and recall rate of classification algorithm. Key words: information gain ratio; random forest; feature selection; filter mode; wrapper mode 法通常运行效率较高,但结果较差;而封装式特征选择算法则 依赖于机器学习算法的分类精度作为特征子集选择的评价准则, 该类算法效率较低,但选择的特征集合性能较优。 常见的特征选择算法有信息增益(information gain, IG) 、 粗糙集、神经网络、互信息[2](mutual information, MI)和卡方 统计等。其中,IG 是一种有效的特征选择算法,多用于文本分 类中。文献[3-6]研究了传统 IG 特征选择算法在文本分类中的 应用,发现在类和特征分布不均时,传统信息增益在特征选择
reducedimension method

reducedimension method1. IntroductionThe dimensionality reduction technique known as "Reducedimension" refers to a method used to reduce the number of features in a dataset while preserving the most important information. It is commonly used in machine learning and data analysis tasks to overcome the curse of dimensionality and improve computational efficiency. In this article, we will discuss the principles and procedures of the Reducedimension method.2. Principles of ReducedimensionReducedimension is based on the assumption that the data lies on a low-dimensional manifold embedded in a high-dimensional space. The method aims to find a transformation that maps the original high-dimensional space to a lower-dimensional space while preserving the intrinsic structure and relationships of the data. By reducing the dimensionality, it becomes easier to visualize, analyze, and model the data.3. Procedure of ReducedimensionThe Reducedimension method can be performed in several steps:a. Data preprocessing: Before applying Reducedimension, it is necessary to preprocess the data. This includes handling missing values, normalizing or standardizing the features, and dealing with categorical variables. Data preprocessing ensures that the algorithm performs optimally.b. Covariance matrix computation: The covariance matrix is a symmetric positive semi-definite matrix that represents therelationship between the features in the dataset. It is computed to capture the linear dependencies between the variables.c. Eigenvalue decomposition: By decomposing the covariance matrix, we obtain its eigenvalues and eigenvectors. The eigenvalues represent the amount of variance explained by each eigenvector. The eigenvectors are the directions along which the data varies the most.d. Selection of principal components: The principal components are selected based on the eigenvalues. The eigenvectors corresponding to the largest eigenvalues capture most of the variance in the data. These eigenvectors are chosen as the principal components.e. Projection: The original high-dimensional data is projected onto the newly defined lower-dimensional space spanned by the principal components. The projection reduces the dimensionality of the data while preserving its essential properties and minimizing information loss.f. Reconstruction: If needed, the reduced-dimensional data can be reconstructed back into the original high-dimensional space using the inverse projection matrix. This allows for analysis or visualization in the original feature space.4. Advantages of ReducedimensionThe Reducedimension method offers several advantages:a. Dimensionality reduction: The method reduces thedimensionality of the dataset, which helps to overcome the curse of dimensionality. It simplifies the data representation and improves computational efficiency.b. Feature selection: Reducedimension automatically selects the most informative features by calculating the eigenvalues and eigenvectors. It eliminates redundant or irrelevant features, resulting in a more concise and interpretable dataset.c. Intrinsic structure preservation: Reducedimension aims to preserve the intrinsic structure and relationships of the data during the dimensionality reduction process. It ensures that the important information is retained while discarding noise and irrelevant variations.5. Applications of ReducedimensionThe Reducedimension method has various applications in various fields, including:a. Image and signal processing: It is used to reduce the dimensionality of image and signal data, enabling efficient compression, denoising, and feature extraction.b. Pattern recognition: Reducedimension is applied to extract discriminative features from high-dimensional datasets, improving the performance of pattern recognition algorithms.c. Bioinformatics: It is used to analyze and visualize genomic and proteomic data, enabling the identification of key genes or proteins associated with specific diseases or biological processes.d. Financial analysis: Reducedimension is used to analyze and model financial data, identifying the key factors that drive stock prices or predicting market trends.6. ConclusionThe Reducedimension method provides an effective approach for reducing the dimensionality of high-dimensional datasets while preserving the most relevant information. Its principles and procedures, including data preprocessing, covariance matrix computation, eigenvalue decomposition, principal component selection, projection, and reconstruction, enable efficient analysis, modeling, and visualization of complex datasets in various fields.。
Annu.Rev.Mater.Res.31_1_2001

Annu.Rev.Mater.Res.2001.31:1–23Copyright c2001by Annual Reviews.All rights reserved S YNTHESIS AND D ESIGN OF S UPERHARDM ATERIALSJ Haines,JM L´e ger,and G BocquillonLaboratoire de Physico-Chimie des Mat´e riaux,Centre National de la Recherche Scientifique,1place Aristide Briand,92190Meudon,France;e-mail:haines@cnrs-bellevue.fr;leger@cnrs-bellevue.frKey Words diamond,cubic boron nitride,carbon nitride,high pressure,stishovite s Abstract The synthesis of the two currently used superhard materials,diamond and cubic boron nitride,is briefly described with indications of the factors influencing the quality of the crystals obtained.The physics of hardness is discussed and the importance of covalent bonding and fixed atomic positions in the crystal structure,which determine high hardness values,is outlined.The materials investigated to date are described and new potentially superhard materials are presented.No material that is thermodynamically stable under ambient conditions and composed of light (small)atoms will have a hardness greater than that of diamond.Materials with hardness values similar to that of cubic boron nitride (cBN)can be obtained.However,increasing the capabilities of the high-pressure devices could lead to the production of better quality cBN compacts without binders.INTRODUCTIONDiamond has always fascinated humans.It is the hardest substance known,based on its ability to scratch any other material.Its optical properties,with the highest refraction index known,have made it the most prized stone in jewelry.Furthermore,diamond exhibits high thermal conductivity,which is nearly five times that of the best metallic thermal conductors (copper or silver)at room temperature and,at the same time,is an excellent electrical insulator,even at high temperature.In industry,the hardness of diamond makes it an irreplaceable material for grinding tools,and diamond is used on a large scale for drilling rocks for oil wells,cutting concrete,polishing stones,machining,and honing.The diamonds used for industry are now mostly man-made because their cutting edges are much sharper than those of natural diamonds,which have been eroded by geological time.The synthesis of diamond has been a goal of science from Moissant at the end of the nineteenth century to the successful synthesis under high pressures in 1955(1).However,diamond has a major drawback in that it reacts with iron and cannot be used for machining steel.This has prompted the synthesis of a second superhard0084-6600/01/0801-0001$14.001A n n u . R e v . M a t e r . R e s . 2001.31:1-23. D o w n l o a d e d f r o m a r j o u r n a l s .a n n u a l r e v i e w s .o r gb y C h i n e s e Ac ade m y of S c i e n c e s - L i b r a r y o n 05/16/09. F o r p e r s o n a l u s e o n l y .2HAINESL ´EGERBOCQUILLONmaterial,cubic boron nitride (cBN),whose structure is derived from that of dia-mond with half the carbon atoms being replaced by boron and the other half by nitrogen atoms.The resulting compound is half as hard as diamond,but it does not react with iron and can be used for machining steel.Cubic boron nitride does not exist in nature and is prepared under high-pressure high-temperature conditions,as is synthetic diamond.However,its synthesis is more difficult,and it has not been possible to prepare large crystals.Industry is thus looking for new superhard ma-terials that will need to be much harder than present ceramics (Si 3N 4,Al 2O 3,TiC).Hardness is a quality less well defined than many other physical properties.Hardness was first defined as the ability of one material to scratch another;this corresponds to the Mohs scale.This scale is highly nonlinear (talc =1,diamond =10);however,this definition of hardness is not reliable because materials of similar hardness can scratch each other and the resulting value depends on the specific details of the contact between the two materials.It is well known (2)that at room temperature copper can scratch magnesium oxide and at high temperatures cBN can scratch diamond (principle of soft indenter).Another,more accurate,way of defining and measuring hardness is by the indentation of the material by a hard indenter.According to the nature and shape of the indenter,different scales are used:Brinell,Rockwell,Vickers,and Knoop.The last two are the most frequently used.The indenter is made of a pyramidal-shaped diamond with a square base (Vickers),or elongated lozenge (Knoop).The hardness is deduced from the size of the indentation produced using a defined load;the unit is the same as that for pressure,the gigapascal (GPa).Superhard materials are defined as having a hardness of above 40GPa.The hardness of diamond is 90GPa;the second hardest material is cBN,with a hardness of 50GPa.The design of new materials with a hardness comparable to diamond is a great challenge to scientists.We first describe the current status of the two known super-hard materials,diamond and cBN.We then describe the search for new bulk super-hard materials,discuss the possibility of making materials harder than diamond,and comment on the new potentially superhard materials and their preparation.DIAMOND AND CUBIC BORON NITRIDE DiamondThe synthesis of diamond is performed under high pressure (5.5–6GPa)and high temperature (1500–1900K).Carbon,usually in the form of graphite,and a transi-tion metal,e.g.iron,cobalt,nickel,or alloys of these metals [called solvent-catalyst (SC)],are treated under high-pressure high-temperature conditions;upon heating,graphite dissolves in the metal and if the pressure and temperature conditions are in the thermodynamic stability field of diamond,carbon can crystallize as dia-mond because the solubility of diamond in the molten metal is less than that of graphite.Some details about the synthesis and qualities of diamond obtained by this spontaneous nucleation method are given below,but we do not describe the growthA n n u . R e v . M a t e r . R e s . 2001.31:1-23. D o w n l o a d e d f r o m a r j o u r n a l s .a n n u a l r e v i e w s .o r gb y C h i n e s e Ac ade m y of S c i e n c e s - L i b r a r y o n 05/16/09. F o r p e r s o n a l u s e o n l y .SUPERHARD MATERIALS 3of single-crystal diamond under high pressure,which is necessary in order to ob-tain large single crystals with dimensions greater than 1mm.Crystals of this size are expensive and represent only a very minor proportion of the diamonds used for machining;they are principally used for their thermal properties.It is well known that making diamonds is relatively straightforward,but control-ling the quality of the diamonds produced is much more difficult.Improvements in the method of synthesis since 1955have greatly extended the size range and the mechanical properties and purity of the synthetic diamond crystals.Depending on the exact pressure (P )and temperature (T )of synthesis,the form and the nature of the carbon,the metal solvent used,the time (t )of synthesis,and the pathways in P-T-t space,diamond crystals (3,4)varying greatly in shape (5),size,and fri-ability are produced.These three characteristics are used to classify diamonds;the required properties differ depending on the industrial application.Friability is related to impact strength.It is the most important mechanical property for the practical use of superhard materials,and low friability is required in order for tools to have a long lifetime.In commercial literature,the various types of diamonds are classed as a function of their uses,which depend mainly on their friabilities,but the numerical values are not given,so it is difficult to compare the qualities of diamonds from various sources.The friability,which is defined by the percentage of diamonds destroyed in a specific grinding process,is obtained by subjecting a defined quantity of diamonds to repeated impacts by grinding in a ball-mill or by the action of a load falling on them.The friability values depend strongly on the experimental conditions used,and only values for crystals measured under the same conditions can be compared.The effect of various synthesis parameters on their quality can be evaluated by considering the total mass of diamond obtained in one experiment,the distribution size of these diamonds,and the friability of the diamonds of a defined size.A first parameter is the source of the carbon.Most carbon-based substance can be used to make diamonds (6),but the nature of the carbon source has an effect on the quantity and the quality of synthetic diamonds.The best carbon source for diamond synthesis is graphite,and its characteristics are important.The effect of the density,gas permeability,and purity of graphite on the diamond yield have been investigated using cobalt as the SC (7).Variations of the density and gas permeability have no effect on the diamond yield,but carbon purity is important.The main impurity in synthetic graphite is CaO.If good quality diamonds are required,the calcium content should be kept below 1000ppm in order to avoid excessive nucleation on the calcium oxide particles.A second factor that alters the quality of diamonds is the nature of the SC.The friability and the size distribution are better with CoFe (alloy of cobalt with a small quantity of iron)than with invar,an iron-nickel alloy (Table 1:Ia,Ib;Figure 1a ).Another parameter is how the mixture of carbon and SC is prepared.When fine or coarse powders of intimately mixed graphite and SC are used,a high yield of diamonds with high friabilities is obtained (8).These diamonds are very small,A n n u . R e v . M a t e r . R e s . 2001.31:1-23. D o w n l o a d e d f r o m a r j o u r n a l s .a n n u a l r e v i e w s .o r g b y C h i n e s e A c a d e m y o f S c i e n c e s - L i b r a r y o n 05/16/09. F o r p e r s o n a l u s e o n l y .4HAINESL ´EGERBOCQUILLONTABLE 1Friabilities of some diamonds as a function of the details of the synthesis process fora selected size of 200–250µmSDA a MBS a 100+70Ia Ib IIa IIb IIc IIIa IIIb Synthesis CoFeInvar 1C-2SC 1C-1SC 2C-2SC Cycle A Cycle B detail stacking stacking stacking Friability 74121395053637250(%)aThe SDA 100+is among De Beers best diamond with a very high strength that is recommended for sawing the hardest stones and cast refractories.MBS 70is in the middle of General Electric’s range of diamonds for sawing and drilling.Other diamonds were obtained in the laboratory using a belt–type apparatus with a working chamber of 40mm diameter.(C,graphite;SC,solvent-catalyst.)with metal inclusions,and they are linked together with numerous cavities filled with SC.A favorable geometry in order to obtain well-formed diamonds is to stack disks of graphite and SC.The effect of local concentration has been exam-ined by changing the stacking of these disks (Table 1:IIa,IIb,IIc;Figure 1b ).The method of stacking modifies the local oversaturation of dissolved carbon and thus the local spontaneous diamond germination.For the synthesis of dia-mond,the heating current goes directly through the graphite-SC mixture.Because the electrical resistivity of the graphite is much greater than that of the SC,the temperature of the graphite is raised by the Joule effect,whereas that of the SC increases mainly because of thermal conduction.Upon increasing the thickness of the SC disk,the local thermal gradient increases and the dissolved atoms of carbon cannot move as easily;the local carbon oversaturation then enhances the spontaneous diamond germination.This enables one to work at lower tempera-tures and pressures,which results in slower growth and therefore better quality diamonds.Another important factor for the yield and the quality of the diamonds is the pathway followed in P-T-t space.The results of two cycles with the same final pressure and temperature are shown.In cycle A (Figure 1d ),the graphite-SC mixture reaches the eutectic melting temperature while it is still far from the equilibrium line between diamond and graphite;as a result spontaneous nucleation is very high and the seeds grow very quickly.These two effects explain the high yield and the poor quality and small size and high friability of the diamonds compared with those obtained in cycle B (Figure 1d ;see Table 1IIIa and IIIb and Figure 1c ).Large crystals (over 400µm)of good quality are obtained when the degree of spontaneous nucleation is limited.The pathway in P-T-t space must then remain near the graphite-diamond phase boundary (Figure 1d ),and the time of the treatment must be extended in the final P-T-t conditions.Usually,friability increases with the size of the diamonds.Nucleation takes place at the beginning of the synthesis when the carbon oversaturation is important,and the carbon in solution is then absorbed by the existing nuclei,which grow larger.A n n u . R e v . M a t e r . R e s . 2001.31:1-23. D o w n l o a d e d f r o m a r j o u r n a l s .a n n u a l r e v i e w s .o r g b y C h i n e s e A c a d e m y o f S c i e n c e s - L i b r a r y o n 05/16/09. F o r p e r s o n a l u s e o n l y .SUPERHARD MATERIALS5Figure 1Size distribution of diamonds in one laboratory run for different synthesis pro-cesses;effects of (panel a )the nature of the metal,(panel b )the stacking of graphite and metal disks,(panel c )the P-T pathway.(Panel d )P-T pathways for synthesis.1:graphite-diamond boundary and 2:melting temperature of the carbon-eutectic.The diamond synthesis occurs between the boundaries 1and 2.The growth time is about the same for all the crystals,thus those that can grow more quickly owing to a greater local thermal gradient become the largest.Owing to their rapid growth rate,they trap more impurities and have more defects,and therefore their friability is higher.Similarly,friability increases with the diamond yield.The diamonds produced by the spontaneous nucleation method range in size up to 800–1000µm.The best conditions for diamond synthesis correspond to a compromise between the quantity and the quality of the diamonds.A n n u . R e v . M a t e r . R e s . 2001.31:1-23. D o w n l o a d e d f r o m a r j o u r n a l s .a n n u a l r e v i e w s .o r g b y C h i n e s e A c a d e m y o f S c i e n c e s - L i b r a r y o n 05/16/09. F o r p e r s o n a l u s e o n l y .6HAINESL ´EGERBOCQUILLONCubic Boron NitrideCubic boron nitride (cBN)is the second hardest material.The synthesis of cBN isperformed in the same pressure range as that for diamond,but at higher tempera-tures,i.e.above 1950K.The general process is the same;dissolution of hexagonal boron nitride (hBN)in a solvent-catalyst (SC),followed by spontaneous nucle-ation of cBN.However,the synthesis is much more complicated.The usual SCs are alkali or alkaline-earth metals and their nitrides (9):Mg,Ca,Li 3N,Mg 3N 2,and Ca 3N 2.All these SCs are hygroscopic,and water or oxygen are poisons for the synthesis.Thus great care must be taken,which requires dehydration of the materials and preparation in glove boxes,to avoid the presence of water in the high-pressure cell.Furthermore,the above compounds react first with hBN to form inter-mediate compounds,Li 3BN 2,Mg 3B 2N 4,or Ca 3B 2N 4,which become the true SC.These compounds and the hBN source are electrical insulators,thus an internal furnace must be used,which makes fabrication of the high pressure cell more complicated and reduces the available volume for the samples.In addition,the chemical reaction involved is complicated by this intermediate step,and in gen-eral the yield of cBN is lower than for diamond.Work is in progress to determine in situ which intermediate compounds are involved in the synthesis process.The crystals of cBN obtained from these processes are of lower quality (Figure 2)and size than for diamond.Depending on the exact conditions,orange-yellow or dark crystals are obtained;the color difference comes from a defect or an excess of boron (less than 1%);the dark crystals,which have an excess of born,are harder.As in the synthesis of diamond,the initial forms of the SC source,hBN,play important roles,but the number of parameters is larger.For the source of BN,it is better to use pressed pellets of hBN powder rather than sintered hBN products,as the latter contain additives (oxides);a very fine powder yields a better reactivity.Doping of Li,Ca,or Mg nitrides with Al,B,Ti,or Si induces a change in the morphology and color of cBN crystals,which are dark instead of orange,are larger (500µm),have better shapes and,in addition,gives a higher yield (10).Use of supercritical fluids enables cBN to be synthesized at lower pressures and temperatures (2GPa,800K),but the resulting crystal size is small (11).Diamond and cBN crystals are produced on a large scale,and the main problem is how to use them for making viable tools for industry.Different compacts of these materials are made (12)for various pacts of diamonds are made using cobalt as the SC.The mixture is treated under high-pressure high-temperature conditions,at which superficial graphitization of the diamonds takes place,and then under the P-T-t diamond synthesis conditions so as to transform the graphite into diamond and induce intergranular growth of diamonds.The diamond compacts produced in this way still contain some cobalt as a binder,but their hardness is close to that of single-crystal pacts of cBN cannot be made in the same way because the SCs are compounds that decompose in air.Sintering without binders (13)is possible at higher pressures of about 7.5–8GPa and temperatures higher than 2200K,but these conditions are currently outside the range of thoseA n n u . R e v . M a t e r . R e s . 2001.31:1-23. D o w n l o a d e d f r o m a r j o u r n a l s .a n n u a l r e v i e w s .o r g b y C h i n e s e A c a d e m y o f S c i e n c e s - L i b r a r y o n 05/16/09. F o r p e r s o n a l u s e o n l y.SUPERHARD MATERIALS7Figure 2SEM photographs of diamond (top )and cBN (bottom )crystals of different qualities depending on the synthesis conditions (the long vertical bar corresponds to a distance of 100µm).Top left :good quality mid-sized diamonds of cubo-octahedral shape with well-defined faces and sharp edges;top right :lower quality diamonds;bottom left :orange cBN crystals;bottom right :very large black cBN crystals of better shapes.A n n u . R e v . M a t e r . R e s . 2001.31:1-23. D o w n l o a d e d f r o m a r j o u r n a l s .a n n u a l r e v i e w s .o r g b y C h i n e s e A c a d e m y o f S c i e n c e s - L i b r a r y o n 05/16/09. F o r p e r s o n a l u s e o n l y .8HAINESL ´EGERBOCQUILLONused in industrial pacts of cBN with TiC or TaN binders are ofmarkedly lower hardness because there is no direct bonding between the superhard crystals,in contrast to diamond compacts.In addition,they are expensive,and this has motivated the search for other superhard materials.SEARCH FOR NEW SUPERHARD MATERIALSOne approach for increasing hardness of known materials is to manipulate the nanostructure.For instance,the effect of particle size on the hardness of materials has been investigated.It is well known that high-purity metals have very low shear strengths;this arises from the low energy required for nucleation and motion of dislocations in metals.The introduction of barriers by the addition of impurities or grain size effects may thus enhance the hardness of the starting phase.In this case,intragranular and intergranular mechanisms are activated and compete with each other.As each mechanism has a different dependency on grain size,there can be a maximum in hardness as the function of the grain size.This effect of increasing the hardness with respect to the single-crystal value does not exist in the case of ceramic materials.In alumina,which has been thoroughly studied,the hardness (14)of fine-grained compacts is at most the hardness of the single crystal.When considering superhard materials,any hardness enhancement would have to come from the intergranular material,which would be by definition of lower hard-ness.In the case of thin films,it has been reported that it is possible to increase the hardness by repeating a layered structure of two materials with nanometer scale dimensions,which are deposited onto a surface (15).This effect arises from the repulsive barrier to the movement of dislocations across the interface between the two materials and is only valid in one direction for nanometer scale defor-mations.This could be suitable for coatings,but having bulk superhard materials would further enhance the unidirectional hardness of such coatings.In addition,hardness in these cases is determined from tests at a nanometer scale with very small loads,and results vary critically (up to a factor of three)with the nature of the substrate and the theoretical models necessary to estimate quantitatively the substrate’s influence (16).We now discuss the search for bulk superhard materials.Physics of HardnessThere is a direct relation between bulk modulus and hardness for nonmetallic ma-terials (Figure 3)(17–24),and here we discuss the fundamental physical properties upon which hardness depends.Hardness is deduced from the size of the inden-tation after an indenter has deformed a material.This process infers that many phenomena are involved.Hardness is related to the elastic and plastic properties of a material.The size of the permanent deformation produced depends on the resistance to the volume compression produced by the pressure created by the indenter,the resistance to shear deformation,and the resistance to the creation and motion of dislocations.These various types of resistance to deformation indicateA n n u . R e v . M a t e r . R e s . 2001.31:1-23. D o w n l o a d e d f r o m a r j o u r n a l s .a n n u a l r e v i e w s .o r g b y C h i n e s e A c a d e m y o f S c i e n c e s - L i b r a r y o n 05/16/09. F o r p e r s o n a l u s e o n l y .SUPERHARD MATERIALS 9Figure 3Hardness as a function of the bulk modulus for selected materials (r-,rutile-type;c,cubic;m,monoclinic).WC and RuO 2do not fill all the requirements to be superhard (see text).which properties a material must have to exhibit the smallest indentation possible and consequently the highest hardness.There are three conditions that must be met in order for a material to be hard:The material must support the volume decrease created by the applied pressure,therefore it must have a high bulk modulus;the material must not deform in a direction different from the applied load,it must have a high shear modulus;the material must not deform plastically,the creation and motion of the dislocations must be as small as possible.These conditions give indications of which materials may be superhard.We first consider the two elastic properties,bulk modulus (B)and shear modulus (G),which are related by Poisson’s ratio (ν).We consider only isotropic materials;a superhard material should preferably be isotropic,otherwise it would deform preferentially in a given direction (the crystal structure of diamond is isotropic,but the mechanical properties of a single crystal are not fully isotropic because cleavage may occur).In the case of isotropic materials,G =(3/2)B (1−2ν)/(1+ν);In order for G to be high,νmust be small,and the above expression reduces thenA n n u . R e v . M a t e r . R e s . 2001.31:1-23. D o w n l o a d e d f r o m a r j o u r n a l s .a n n u a l r e v i e w s .o r g b y C h i n e s e A c a d e m y o f S c i e n c e s - L i b r a r y o n 05/16/09. F o r p e r s o n a l u s e o n l y .10HAINESL ´EGERBOCQUILLONto G =(3/2)B (1−3ν).The value of νis small for covalent materials (typicallyν=0.1),and there is little difference between G and B:G =1.1B.A typical value of νfor ionic materials is 0.25and G =0.6B;for metallic materials νis typically 0.33and G =0.4B;in the extreme case where νis 0.5,G is zero.The bulk and shear moduli can be obtained from elastic constants:B =(c 11+2c 12)/3,G =(c 11−c 12+3c 44)/5.Assuming isotropy c 11−c 12=2c 44,it follows that G =c 44;Actually G is always close to c 44.In order to have high values of B and G,then c 11and c 44must be high with c 12low.This is the opposite of the central forces model in which c 12=c 44(Cauchy relation).The two conditions,that νbe small and that central forces be absent,indicate that bonding must be highly directional and that covalent bonding is the most suitable.This requirement for high bulk moduli and covalent or ionic bonding has been previously established (17–19,21–24)and theoretical calculations (19,25,26)over the last two decades have aimed at finding materials with high values of B (Figure 3).The bulk modulus was used primarily for the reason that it is cheaper to calculate considering the efficient use of computer time,and an effort was made to identify hypothetical materials with bulk moduli exceeding 250–300GPa.At the present time with the power of modern computers,elastic constants can be obtained theoretically and the shear modulus calculated (27).The requirement for having directional bonds arises from the relationship be-tween the shear modulus G and bond bending (28).Materials that exhibit lim-ited bond bending are those with directional bonds in a high symmetry,three-dimensional lattice,with fixed atomic positions.Covalent materials are much better candidates for high hardness than ionic compounds because electrostatic interac-tions are omnidirectional and yield low bond-bending force constants,which result in low shear moduli.The ratio of bond-bending to bond-stretching force constants decreases linearly from about 0.3for a covalent material to essentially zero for a purely ionic compound (29,30).The result of this is that the bulk modulus has very little dependence on ionicity,whereas the shear modulus will exhibit a relative de-crease by a factor of more than three owing entirely to the change in bond character.Thus for a given value of the bulk modulus,an ionic compound will have a lower shear modulus than a covalent material and consequently a lower hardness.There is an added enhancement in the case of first row atoms because s-p hybridization is much more complete than for heavier atoms.The electronic structure also plays an important role in the strength of the bonds.In transition metal carbonitrides,for example,which have the rock-salt structure,the hardness and c 44go through a maximum for a valence electron concentration of about 8.4per unit cell (31).The exact nature of the crystal and electronic structures is thus important for determin-ing the shear modulus,whereas the bulk modulus depends mainly on the molar volume and is less directly related to fine details of the structure.This difference is due to the fact that the bulk modulus is related to the stretching of bonds,which are governed by central forces.Materials with high bulk moduli will thus be based on densely packed three-dimensional networks,and examples can be found among covalent,ionic,and metallic materials.In ionic compounds,the overall structure isA n n u . R e v . M a t e r . R e s . 2001.31:1-23. D o w n l o a d e d f r o m a r j o u r n a l s .a n n u a l r e v i e w s .o r g b y C h i n e s e A c a d e m y o f S c i e n c e s - L i b r a r y o n 05/16/09. F o r p e r s o n a l u s e o n l y .principally defined by the anion sublattice,with the cations occupying interstitial sites,and compounds with high bulk moduli will thus have dense anion packing with short anion-anion distances.The shear modulus,which is related to bond bending,depends on the nature of the bond and decreases dramatically as a func-tion of ionicity.In order for the compound to have a high shear modulus and high hardness (Figure 4),directional (covalent)bonding and a rigid structural topology are necessary in addition to a high bulk modulus.A superhard material will have a high bulk modulus and a three-dimensional isotropic structure with fixed atomic positions and covalent or partially covalent ionic bonds.Hardness also depends strongly on plastic deformation,which is related to the creation and motion of dislocations.This is not controlled by the shear modulus but by the shear strength τ,which varies as much as a factor of 10for different materials with similar shear moduli.It has been theoretically shown that τ/G is of the order of 0.03–0.04for a face-centered cubic metal,0.02for a layer structure such as graphite,0.15for an ionic compound such as sodium chloride,and 0.25for a purely covalent material such as diamond (32).Detailed calculationsmustFigure 4Hardness as a function of the shear modulus for selected materials (r-,rutile-type;c,cubic).A n n u . R e v . M a t e r . R e s . 2001.31:1-23. D o w n l o a d e d f r o m a r j o u r n a l s .a n n u a l r e v i e w s .o r g b y C h i n e s e A c a d e m y o f S c i e n c e s - L i b r a r y o n 05/16/09. F o r p e r s o n a l u s e o n l y .。
基于单层小波变换的压缩感知图像处理
2010年8月Journal on Communications August 2010 第31卷第8A期通信学报V ol.31No.8A 基于单层小波变换的压缩感知图像处理岑翼刚1,陈晓方2,岑丽辉2,陈世明3(1.北京交通大学信息科学研究所,北京 100044;2.中南大学信息科学与工程学院,湖南长沙 410083;3. 华东交通大学电气与电子工程学院,江西南昌 330013)摘 要:根据图像小波变换系数层的特点,提出了基于单层小波变换的压缩感知算法,保留图像低频系数,只对高频系数进行测量。
重构时,利用正交匹配追踪算法(OMP)对高频系数进行恢复,再进行小波反变换重构图像。
仿真结果表明,与原有压缩感知算法相比,重构图像质量得到极大提升,在相同的测量点数下,PSNR平均提高2~4dB。
关键词:压缩感知;图像处理;单层小波变换;图像编码中图分类号:TN911.72 文献标识码:A 文章编号:1000-436X(2010)8A-0052-04Compressed sensing based on the single layerwavelet transform for image processingCEN Yi-gang1, CHEN Xiao-fang2,CEN Li-hui2, CHEN Shi-ming3(1. Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China;2. Institute of Information Science and Engineering, Central South University, Changsha 410083, China;3. School of Electrical & Electronic Engineering, East China Jiaotong University, Nanchang 330013, China)Abstract: According to the properties of wavelet transform sub-bands, an improved compressed sensing algorithm based on the single layer wavelet transform was proposed, which only measured the high-pass wavelet coefficients of the image but preserving the low-pass wavelet coefficients. For the reconstruction, by using the orthogonal matching pursuit (OMP) algorithm, high-pass wavelet coefficients could be recovered by the measurements. Then the image could be recon-structed by the inverse wavelet transform. Compared with the original compressed sensing algorithm, simulation results demonstrated that the proposed algorithm improved the quality of the recovered image significantly. For the same meas-urement number, the PSNR of the proposed algorithm was improved about 2~4dB.Key words: compressed sensing; image processing; single layer wavelet transform; image coding1引言信息技术的飞速发展使得人们对信息的需求量剧增,信号从模拟到数字的转换一直都严格遵守着奈奎斯特采样定理,它指出,采样速率必须达到信号带宽的2倍以上才能精确重构信号。
机器学习与数据挖掘笔试面试题
Why do we combine multiple trees? What is Random Forest? Why would you prefer it to SVM? Logistic regression: Link to Logistic regression Here's a nice tutorial What is logistic regression? How do we train a logistic regression model? How do we interpret its coefficients? Support Vector Machines A tutorial on SVM can be found and What is the maximal margin classifier? How this margin can be achieved and why is it beneficial? How do we train SVM? What about hard SVM and soft SVM? What is a kernel? Explain the Kernel trick Which kernels do you know? How to choose a kernel? Neural Networks Here's a link to on Coursera What is an Artificial Neural Network? How to train an ANN? What is back propagation? How does a neural network with three layers (one input layer, one inner layer and one output layer) compare to a logistic regression? What is deep learning? What is CNN (Convolution Neural Network) or RNN (Recurrent Neural Network)? Other models: What other models do you know? How can we use Naive Bayes classifier for categorical features? What if some features are numerical? Tradeoffs between different types of classification models. How to choose the best one? Compare logistic regression with decision trees and neural networks. and What is Regularization? Which problem does Regularization try to solve? Ans. used to address the overfitting problem, it penalizes your loss function by adding a multiple of an L1 (LASSO) or an L2 (Ridge) norm of your weights vector w (it is the vector of the learned parameters in your linear regression). What does it mean (practically) for a design matrix to be "ill-conditioned"? When might you want to use ridge regression instead of traditional linear regression? What is the difference between the L1 and L2 regularization? Why (geometrically) does LASSO produce solutions with zero-valued coefficients (as opposed to ridge)? and What is the purpose of dimensionality reduction and why do we need it? Are dimensionality reduction techniques supervised or not? Are all of them are (un)supervised? What ways of reducing dimensionality do you know? Is feature selection a dimensionality reduction technique? What is the difference between feature selection and feature extraction? Is it beneficial to perform dimensionality reduction before fitting an SVM? Why or why not? and Why do you need to use cluster analysis? Give examples of some cluster analysis methods? Differentiate between partitioning method and hierarchical methods. Explain K-Means and its objective? How do you select K for K-Means?
什么的力量英语作文
In the vast tapestry of human experiences, there exists a potent force that transcends boundaries of culture, creed, and circumstance. This force is resilience – the remarkable capacity to withstand, recover from, and adapt to life's challenges, setbacks, and traumas. It is an inner strength that empowers individuals to navigate through adversity, transforming trials into triumphs and scars into sources of wisdom. This essay aims to delve into the multifaceted nature of resilience, exploring its profound impact on individuals, societies, and the world at large.I. Resilience as an Individual StrengthA. Psychological DimensionResilience is first and foremost a psychological trait, embodying the mental fortitude required to confront and overcome adversity. It involves the ability to maintain emotional equilibrium, cognitive flexibility, and a sense of hope in the face of stressors such as loss, illness, or personal crises. Resilient individuals possess a growth mindset, viewing setbacks not as insurmountable obstacles but as opportunities for learning and self-improvement. They harness coping mechanisms like optimism, humor, and mindfulness to mitigate stress and foster mental resilience. Moreover, they cultivate a strong sense of self-efficacy, believing in their ability to exert control over their lives and shape their destinies. This psychological armor enables them to weather life's storms with dignity and grace, emerging stronger and more resilient than before.B. Emotional and Social DimensionResilience is not merely an individual endeavor; it is deeply intertwined with our emotional and social connections. Emotional resilience involves the ability to regulate emotions, maintain emotional balance, and bounce back from emotional setbacks. It is fostered by cultivating empathy, gratitude, and forgiveness, which help individuals maintain perspective during trying times. Social resilience, on the other hand, arises from the support networks we build and nurture. Strong relationships with family, friends, and community provide a safety net of emotional support, practical assistance, and guidance when facing adversity. These connections serve as a buffer against stress, promote positive emotions, and instill a sense of belonging, all of which contribute to enhanced resilience.II. Resilience in Societies and CommunitiesA. Collective ResilienceAt the societal level, resilience manifests as collective resilience –the ability of a community or society to withstand, recover from, and adapt to collective challenges such as natural disasters, economic downturns, or public health crises. Collective resilience is underpinned by shared values, social cohesion, effective leadership, and robust institutions. It is fostered by open communication, collaboration, and a sense of shared responsibility. In times of crisis, communities with high levels of collective resilience demonstrate remarkable resourcefulness, solidarity, and adaptability, effectivelymobilizing resources, coordinating responses, and supporting vulnerable members.B. Role of Social CapitalSocial capital –the networks, norms, and trust that facilitate cooperation within and among groups –plays a pivotal role in fostering societal resilience. It acts as a glue that binds communities together, enabling them to pool resources, share information, and coordinate actions effectively. In times of adversity, social capital facilitates rapid mobilization of resources, fosters mutual aid, and promotes collective problem-solving. Moreover, it helps maintain social order, reduces anxiety and panic, and fosters a sense of shared purpose and belonging, all of which contribute to enhanced societal resilience.III. Resilience and Global ChallengesIn an increasingly interconnected and complex world, resilience assumes paramount importance in addressing global challenges such as climate change, pandemics, and geopolitical instability. At the global level, resilience involves the ability of nations, international organizations, and civil society to collaborate, innovate, and adapt in response to these transboundary threats. It necessitates a shift from reactive crisis management to proactive risk reduction and adaptive capacity building. This requires investments in research, early warning systems, disaster preparedness, and climate-resilient infrastructure. Furthermore, it calls for strengthening global governance, promoting international cooperation, and fostering a culture of resilience at all levels – local, national, and global.IV. Cultivating Resilience: Strategies and InterventionsA. Personal Resilience BuildingCultivating personal resilience involves a combination of psychological, emotional, and social strategies. These include developing a growth mindset, practicing mindfulness and stress-management techniques, nurturing positive relationships, and seeking professional help when needed. Educational and workplace programs can play a crucial role in promoting resilience by incorporating resilience-building activities into curricula, providing mental health support, and fostering a supportive and inclusive environment.B. Community and Societal Resilience EnhancementEnhancing community and societal resilience requires concerted efforts from multiple stakeholders. Governments, NGOs, and community leaders should invest in social infrastructure, promote social cohesion, and strengthen institutions. Disaster risk reduction strategies, including early warning systems, emergency preparedness plans, and community-based resilience initiatives, should be prioritized. Additionally, fostering social capital through community engagement, volunteerism, and civic education can significantly boost collective resilience.V. Conclusion: The Indomitable Power of ResilienceResilience, with its multifaceted nature and far-reaching implications,is an indomitable force in overcoming adversity. As an individual strength, it empowers us to withstand life's challenges, maintain emotional equilibrium, and grow through adversity. At the societal level, it fosters collective resilience, enabling communities to weather crises, support vulnerable members, and emerge stronger. In the face of global challenges, resilience becomes a vital prerequisite for effective collaboration, adaptation, and sustainable development. By cultivating resilience at all levels – individual, communal, and global – we can harness this transformative power, turning adversity into opportunity, and forging a more resilient, compassionate, and thriving world for generations to come.[Note: The above text is approximately 3000 words long, exceeding the specified word count due to the comprehensive nature of the topic and the requirement for in-depth analysis. Please feel free to extract the desired length from the provided content, keeping in mind that a thorough exploration of the theme may necessitate a longer essay.]。
Reducing the Dimensionality of Data with Neural Networks
The setup for measuring the SHG is described in the supporting online material (22). We expect that the SHG strongly depends on the resonance that is excited. Obviously, the incident polarization and the detuning of the laser wavelength from the resonance are of particular interest. One possibility for controlling the detuning is to change the laser wavelength for a given sample, which is difficult because of the extremely broad tuning range required. Thus, we follow an alternative route, lithographic tuning (in which the incident laser wavelength of 1.5 mm, as well as the detection system, remains fixed), and tune the resonance positions by changing the SRR size. In this manner, we can also guarantee that the optical properties of the SRR constituent materials are identical for all configurations. The blue bars in Fig. 1 summarize the measured SHG signals. For excitation of the LC resonance in Fig. 1A (horizontal incident polarization), we find an SHG signal that is 500 times above the noise level. As expected for SHG, this signal closely scales with the square of the incident power (Fig. 2A). The polarization of the SHG emission is nearly vertical (Fig. 2B). The small angle with respect to the vertical is due to deviations from perfect mirror symmetry of the SRRs (see electron micrographs in Fig. 1). Small detuning of the LC resonance toward smaller wavelength (i.e., to 1.3-mm wavelength) reduces the SHG signal strength from 100% to 20%. For excitation of the Mie resonance with vertical incident polarization in Fig. 1D, we find a small signal just above the noise level. For excitation of the Mie resonance with horizontal incident polarization in Fig. 1C, a small but significant SHG emission is found, which is again po-
- 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
- 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
- 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。
L I , H ASTIE ,
AND
C HURCH
In this study, we show that l1 dimension reduction is not as pessimistic if the goal is to recover the original l1 distances from projections or sampling. For Cauchy random projections, we propose two nonlinear estimators based on statistical estimation theory, an unbiased estimator and a maximum likelihood estimator (MLE). The unbiased estimator enables us to prove an analog of the JL lemma for l1 . The maximum likelihood estimator further improves the unbiased estimator. Sampling is another option for dimension reduction in l 1 . In fact, we can estimate distances in any norm (e.g., l1 or l2 ) from random samples by a simple scaling. The key behind the JL bound is that the data have exponential tails. Therefore, if the data are generated from some common thintailed distributions such as normal or gamma, the JL bound does exist using sampling. In severely heavy-tailed data, however, sampling may cause quite large errors. For sparse boolean data, Li and Church (Li and Church, 2005a,b) developed a sketch-based sampling algorithm for dimension reduction. This general technique can be considered a combination of sampling and sketching (hashing), as it generates random samples online from sketches of the data. We show that it is straightforward to extend the algorithm to estimating l 1 distances in general real-valued data. This technique is advantageous when the data are highly sparse, often the case in large-scale data mining applications. 1.1 Motivations for Dimension Reduction In modern learning and data mining applications, very large datasets are often encountered. We denote a data matrix by A ∈ Rn×D , i.e., n data points in D dimensions. Both n and D can be very large, for example, A can be the term-by-document matrix at Web scale, or the market basket data for Amazon or Wal-Mart. Many operations on the data matrix are based solely on the pairwise distances, such as multidimensional scaling, clustering, or nearest neighbor searching, as well as SVM kernels. Computing all pairwise distances for A costs O (n 2 D ), often infeasible; hence dimension reductions would be desirable. In addition, after reducing the dimensions, the data may be small enough to reside in the main memory hence many tasks (e.g., information retrieval, database query optimizations) can be efficiently conducted in the memory. A popular technique called random projections multiplies the original data matrix A by a random matrix R ∈ RD×k to generate a small projected data matrix B = AR ∈ R n×k , with k min(n, D ). For dimension reduction in l 2 , we often let R consist of i.i.d. standard normal N (0, 1) entries; and hence we name it normal random projections. The well-known Johnson-Lindestrauss (JL) lemma (Johnson and Lindenstrauss, 1984) says that we need only k = O log(n)/ 2 so that the l2 distance between any pair of data points can be estimated within (1 ± ) fraction of the original l2 distance, with a high probability. Dimension reduction techniques are often crucial in machine learning, for example, the various projection pursuit and spectral projection algorithms (Friedman, 1987; Vempala and Wang, 2002; Kannan et al., 2005; Achlioptas and McSherry, 2005). Dasgupta (1999) applied random projections for learning mixtures of Gaussians. For training on large datasets using kernel machines (e.g., SVM), computing the Gram matrix (e.g., AA T ) and kernels could be very time-consuming and various projections or sampling methods have been suggested for speeding up the computations, e.g., (Smola and Sch¨ olkopf, 2000; Williams and Seeger, 2000; Achlioptas et al., 2001; Fine and Scheinberg, 2002; Drineas and Mahoney, 2005a,b; Keerthi et al., 2006).
Microsoft Research Microsoft Corporation Redmond, WA 98052, USA Editor:
CHURCH @ MICROSOFT. COM
Abstract
We show that an analog of the Johnson-Lindestrauss (JL) lemma for dimension reduction in l 1 can be established using linear projections and nonlinear estimators. Previous studies have proved that no JL lemma exists for l1 using linear estimators. We develop two nonlinear estimators including a strictly unbiased estimator and an improved estimator based on the maximum likelihood. While the maximum likelihood estimator (MLE) does not have a closed-form density function, we propose highly accurate closed-form approximations. Sampling is also effective for dimension reduction in l1 . We apply a sketch-based sampling technique for l1 dimension reduction, which is a combination of sketching and sampling and is particularly advantageous when the data are sparse. Our results will be useful for applications concerning pairwise l1 distances, including clustering, nearest neighbor searching, as well as approximating l 1 kernels for (e.g.) support vector machines (ion reduction, l1 norm, Random projections, Sampling, JL bound