Intelligent Water Management System (IoT Infrastructure): A Guide to Smart Water Management for Plants

Internet of Things Based Intelligent Water Management System for Plants

Isbat Uzzin Nadhori1,*, M. Udin Harun Al Rasyid1, Ahmad Syauqi Ahsan1, Bintang Refani Mauludi1
1 Informatics and Computer Engineering Department, Politeknik Elektronika Negeri Surabaya, Indonesia

ABSTRACT
Water plays an important role for crops: every crop needs water to survive, and the amount of water a crop needs differs across regions and seasons. Calculating the amount of water a crop needs precisely requires careful analysis of the available supporting data. In practice, water needs are often met based on soil moisture alone, without adjusting for weather data, so water is frequently wasted, for example by watering during periods of high rainfall. We therefore need a system that can determine the required watering volume for crops based on crop conditions, watering schedules, and weather data. This research aims to build a crop monitoring system that determines the right watering volume by considering soil moisture, air temperature and humidity, watering schedules, and weather data using the fuzzy method. Based on our experiments, the system successfully monitors crops, displays watering-volume notifications when crop conditions are not normal, and indicates when to water based on the weather.

Keywords: smart water management, water monitoring, IoT, sensor, Fuzzy

1. INTRODUCTION
Water has a major role in the plant body: it is a constituent of protoplasm, a solvent for nutrients, a substance that participates directly in metabolism, and a driver of cell enlargement and elongation [1]. All crops need water to survive, but the amount of water needed differs between crops, regions, and seasons. Calculating and estimating how much water a crop needs requires careful and thorough analysis of available supporting data such as climate data, the irrigated-area environment, crop types and cropping patterns, soil types, rainfall data, and other meteorological data [2]. Meanwhile, farmers still water their crops manually, without considering the factors above. With this manual method water is often wasted or, conversely, the crops receive too little water. Neither is good for crop growth, so a system is needed that can calculate and supply the right amount of water for crops. To solve the problem of supplying water to crops appropriately, several researchers have worked in this field with various parameters, approaches, hardware, platforms, and analytical methods.

Vijay et al. [3] proposed an intelligent agricultural monitoring and irrigation system built on a ThingSpeak and NodeMCU based IoT platform. The system monitors temperature and humidity to optimize water use; the sensor data are sent to the IoT platform and analyzed with Matlab, and if a value falls below the threshold a notification is sent to the user via email. Chen Yuanyuan et al. [4] proposed intelligent water-saving irrigation based on ZigBee and Wi-Fi. The system monitors soil conditions with several soil moisture sensors placed in specific planting areas, and the monitoring results are used to decide when to start and when to stop irrigation. Maria Gemel et al.
[5] proposed a water management system that uses temperature, humidity, and soil moisture sensors to collect data on crop and soil conditions. The data are then used to determine the exact water requirements for tomato and eggplant crops; overall this yields a total saving of 44% in water consumption, and the crops are visually healthier than with traditional watering methods. R. Kondaveti et al. [6] proposed an automatic irrigation system with precise rainfall-prediction algorithms that can help determine which crops are suitable for planting in a particular area. Automatic irrigation waters the crops when needed by activating an electric motor, saving water and electricity, which is very beneficial for farmers. Jiaxing Xie et al. [7] conducted a study to predict the water requirement for longan-garden irrigation based on three environmental factors: air temperature, soil moisture content, and light intensity. The data are processed with a backpropagation neural network whose weights and thresholds are optimized by a genetic algorithm, and the resulting model predicts irrigation water requirements from environmental factors in longan plantations. S. Kumar [8] proposed a lawn watering system using soil moisture sensors and weather forecast data. The soil moisture sensor reports the water content in the soil; if soil moisture falls below a certain level the watering system activates automatically. Weather forecast data provide rain information, so if rain is predicted, watering is delayed by one to two days. The weather prediction data are obtained from the Indian government weather website, which provides forecasts for the next 6 days as well as 24-hour weather information.

Based on this prior research, we propose a real-time water-demand monitoring system for crops that combines sensor data (soil moisture, temperature, and humidity), watering-schedule data, and weather data to determine the watering volume using the fuzzy method.

2. PROPOSED SYSTEM DESIGN
The solution we propose aims to determine the right watering volume for crops based on crop conditions, watering schedules, and weather predictions, with the required volume of water as output. The proposed system consists of four parts, as shown in Figure 1.

Figure 1. Proposed system design

The first part is a sensor system that monitors soil moisture, air temperature, and air humidity around the crops; it consists of a capacitive soil moisture sensor and a temperature and humidity sensor (DHT11). The two sensors are connected to an Arduino Uno to obtain soil moisture, air temperature, and air humidity data. The data are sent via an ESP8266-01 Wi-Fi module connected to the Arduino Uno to the server (the second part), where they are processed with the fuzzy logic method to obtain the volume of water needed by the crops. The server also requires the watering time and schedule for the crop type as well as weather data to determine water requirements and watering times more accurately. The latest weather data are obtained through a weather API (the third part), which is used to forecast whether it will rain today. The results of the processing on the server are displayed as a visualization of watering needs in the fourth part.
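As a rough illustration of how the server-side part of this design could tie the pieces together, the sketch below combines a sensor reading, the watering schedule, and a rain-forecast flag before invoking the fuzzy calculation. The function and field names (for example `decide_watering`, `fuzzy_watering_volume`, `rain_expected`) are placeholders assumed for illustration, not the paper's actual implementation.

```python
from typing import Callable, Dict, Set

def decide_watering(reading: Dict[str, float], hour: int, schedule_hours: Set[int],
                    rain_expected: bool,
                    fuzzy_watering_volume: Callable[[float, float], float]) -> float:
    """Hypothetical server-side decision step combining the three data sources
    described above: sensor data, the watering schedule, and the weather forecast."""
    if hour not in schedule_hours:
        return 0.0                    # outside the watering schedule
    if rain_expected:
        return 0.0                    # skip watering when rain is forecast for today
    # Otherwise derive the volume from soil moisture and temperature via fuzzy logic.
    return fuzzy_watering_volume(reading["soil_moisture"], reading["temperature"])

# Example call with a dummy fuzzy function standing in for the inference of Section 3.
volume = decide_watering(
    reading={"soil_moisture": 42.0, "temperature": 31.5, "humidity": 70.0},
    hour=6,
    schedule_hours={6, 16},
    rain_expected=False,
    fuzzy_watering_volume=lambda moisture, temp: 250.0,
)
print(volume)  # 250.0 with the dummy function
```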
3. EXPERIMENTAL STUDY
In this system there are three fuzzy variables used in the fuzzification process: the soil moisture, temperature, and volume variables used in decision making. The soil moisture sensor observes the moisture value in the soil, expressed in %RH. Soil moisture data are divided into three categories: dry, moist, and wet. The fuzzy set of the soil moisture variable is described by the membership function shown in Figure 2.

Figure 2. Fuzzy set of the soil moisture variable (%RH)

The temperature sensor observes the air temperature around the monitored environment. The temperature data are divided into five categories ranging from cold to hot. The fuzzy set of the temperature variable is described by the membership function shown in Figure 3.

Figure 3. Fuzzy set of the temperature variable (℃)

The volume set is the result set used to determine the final output of this fuzzy process.

Figure 4. Fuzzy set of the volume variable (mL)

After the fuzzification stage, fuzzy rules are formed to express the relationship between input and output. The operator used to connect two inputs is the AND operator, while the operator that maps input to output is IF-THEN. The number of rules is obtained by multiplying the number of memberships of each fuzzy variable; in this study, 15 rules were formed from the two input parameters. Examples of the rules are shown in Table 1.

Table 1. Fuzzy logic rules

After obtaining the rules used in the inference process, the next step is to aggregate, or combine, the outputs of all the rules. This composition stage produces the α-predicate of each rule. After the composition stage, the next step is defuzzification, which computes the crisp output as the weighted average of all z values:

    z = (α1·z1 + α2·z2 + ⋯ + αn·zn) / (α1 + α2 + ⋯ + αn)    (1)

The following describes the preparation of the trial environment. System testing should be carried out on agricultural land that has "bedengan" (raised beds on which plants grow). A "bedengan" generally has a width of 100 cm, a length adapted to soil conditions, a height of approximately 20 cm, and a distance of 100 cm between beds; see Figure 5 for an illustration.

Figure 5. Overview of a typical "bedengan"

Planting crops in a "bedengan" has its own rules: one "bedengan" consists of two rows spaced 60-70 cm apart, and the crop holes are 60 cm apart. Figure 6 illustrates system testing on a "bedengan" of 1 m2. The system should be tested on open agricultural land so that water requirements and weather can be tested in real time; however, due to time constraints, the trial was carried out on polybags using the same "bedengan" concept and calculations. The polybag tests are shown in Figures 6, 7, and 8.

Figure 6. "Bedengan" in an area of 1 m2
Figure 7. Transition of trials from agricultural land to polybags
Figure 8. Trial prototype
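To make the fuzzification, rule evaluation, and weighted-average defuzzification described in this section concrete, the sketch below implements a minimal Mamdani-style inference for two inputs (soil moisture and temperature) and one output (watering volume). The membership-function breakpoints, category names, crisp output levels, and the three example rules are illustrative assumptions, not the paper's actual parameter values or rule base.

```python
def tri(x, a, b, c):
    """Triangular membership: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Assumed membership functions (breakpoints are placeholders, not the paper's values).
moisture_sets = {"dry": (0, 0, 40), "moist": (30, 55, 80), "wet": (70, 100, 100)}
temp_sets = {"cool": (0, 0, 22), "normal": (20, 27, 34), "hot": (32, 45, 45)}
volume_z = {"low": 100.0, "medium": 300.0, "high": 500.0}   # crisp output levels (mL), assumed

# A few illustrative IF-THEN rules, inputs combined with AND (min).
rules = [
    ("dry", "hot", "high"),
    ("dry", "normal", "medium"),
    ("moist", "normal", "low"),
]

def watering_volume(moisture_pct, temp_c):
    """Weighted-average defuzzification, as in Eq. (1): z = sum(alpha_i*z_i) / sum(alpha_i)."""
    num, den = 0.0, 0.0
    for m_set, t_set, out in rules:
        alpha = min(tri(moisture_pct, *moisture_sets[m_set]),
                    tri(temp_c, *temp_sets[t_set]))          # AND operator
        num += alpha * volume_z[out]
        den += alpha
    return num / den if den > 0 else 0.0

print(watering_volume(35.0, 33.0))   # about 260 mL with these placeholder sets
```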
System testing was carried out on polybags with a height of 20 cm and an area of 50 cm x 50 cm, following the same concept as the beds on agricultural land. The crops were placed in open land, not indoors or in the shade. The experiment was carried out for 4 days. Watering is done twice a day if the weather that day is sunny and the soil moisture value is below the normal limit; once a day if rainfall that day is low and soil moisture is below the normal limit; and not at all if high rainfall is predicted for that day. The data for each scenario are stored in a database used to analyze the results of the system trials.

Table 2 contains the crop monitoring data collected over the 4 days of polybag trials. The monitored parameters were soil moisture, air temperature, and air humidity, measured before watering the crops. Table 2 shows that the soil moisture value is lower than the optimum range for eggplant, which should be 60%-80%. The monitoring results also show that the temperature and humidity values are fairly constant. Table 3 shows the performance of the system: on the first and second days there was no rain, so watering in the morning and evening was still carried out, and notifications worked according to the watering schedule based on weather and crop conditions. Table 4 contains data from the 2-day trial: a date column showing when the experiment was carried out, a watering-time column, a watering-volume column, and soil moisture, air temperature, and humidity columns containing the values measured after watering.

Table 2. Monitoring before watering
Table 3. Notifications based on existing conditions
Table 4. Monitoring after watering

There are two application interfaces for this system, web-based and Android-based, used to monitor the condition of the crops and display the amount of water they need. The web application shows crop conditions in detail, making it easier for farmers to take action, and includes a graph of the last 100 data points. The Android application makes it easier for farmers to monitor their crops at any time and provides notifications when watering is needed. The interfaces of the two applications are shown in Figures 9 and 10.

Figure 9. Web application interface
Figure 10. Android application interface

4. DISCUSSION AND CONCLUSION
The first day of the experiment was used to observe the condition of the crops, which did not match the ideal conditions; the crops gradually reached the ideal conditions by the fourth day. The experiments were carried out on eggplant crops, which in general have an optimum soil moisture of 60% RH - 80% RH. On the fourth day the calculated volume was appropriate, because after applying that volume the soil moisture was between 60% RH and 80% RH. The implementation of crop-condition monitoring on the device works in real time.
After conducting several trials in the actual crop environment, it can be concluded that this application has succeeded in monitoring crops and displaying watering-volume notifications when crop conditions are not normal or when it is time to water the crops based on the weather.

ACKNOWLEDGMENTS
This research was supported in part by the Ministry of Research and Technology of the Republic of Indonesia, under the Higher Education Excellence Applied Research scheme 'Penelitian Dasar Unggulan Perguruan Tinggi', Grant No. B/112/E3/RA.00/2021.

REFERENCES
[1] Saccon, P., Water for agriculture, irrigation management. Applied Soil Ecology, 123, 793-796, 2018.
[2] Sun, J., Kang, Y., Wan, S., Hu, W., Jiang, S., & Zhang, T., Soil salinity management with drip irrigation and its effects on soil hydraulic properties in north China coastal saline soils. Agricultural Water Management, 115, 10-19, 2012.
[3] Vijay, Anil Kumar Saini, Susmita Banerjee and Himanshu Nigam, An IoT Instrumented Smart Agricultural Monitoring and Irrigation System, International Conference on Artificial Intelligence and Signal Processing (AISP), Vellore Institute of Technology Andhra Pradesh and IEEE Guntur Subsection, India, 10-12 January 2020.
[4] Chen Yuanyuan, Zhang Zuozhuang, Research and Design of Intelligent Water-saving Irrigation Control System Based on WSN, IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA), Dalian, China, 27-29 June 2020.
[5] Maria Gemel B. Palconit, Edgar B. Macachor, Markneil P. Notarte, Wenel L. Molejon, Arwin Z. Visitacion, Marife A. Rosales, Elmer P. Dadios, IoT-Based Precision Irrigation System for Eggplant and Tomato, International Symposium on Computational Intelligence and Industrial Applications (ISCIIA2020), CITIC Jingling Hotel Beijing, Beijing, China, Oct. 31-Nov. 3, 2020.
[6] Revanth Kondaveti, Akash Reddy, Supreet Palabtla, Smart Irrigation System Using Machine Learning and IOT, International Conference on Vision Towards Emerging Trends in Communication and Networking (ViTECoN), Vellore, Tamil Nadu, India, 30-31 March 2019.
[7] Jiaxing Xie, Guoslicng Hu, Chuting L, Peng Gao, Daozong Sun, Xiuyun Xue, Xin X, Jianmei Liu, Huazhong Lu, Weixing Wang, Irrigation Prediction Model with BP Neural Network Improved by Genetic Algorithm in Orchards, International Conference on Advanced Computational Intelligence, Guilin, China, June 7-9, 2019.
[8] C. Kamienski, J.-P. Soininen, M. Taumberger et al., "Smart water management platform: IoT-based precision irrigation for agriculture," Sensors, vol. 19, no. 2, p. 276, 2019.
[9] Sudheer Kumar Nagothu, Weather based Smart watering system using soil sensor and GSM, World Conference on Futuristic Trends in Research and Innovation for Social Welfare, Karpagam College of Engineering, Coimbatore, Tamil Nadu, India, 29 February & 1 March 2016.
Machine Learning Problem Bank

I. Maximum Likelihood

1. ML estimation of an exponential model (10 points)
A Gaussian distribution is often used to model data on the real line, but is sometimes inappropriate when the data are often close to zero but constrained to be nonnegative. In such cases one can fit an exponential distribution, whose probability density function is

    p(x) = (1/b) e^{-x/b}

Given N observations x_i drawn from such a distribution:
(a) Write down the likelihood as a function of the scale parameter b.
(b) Write down the derivative of the log likelihood.
(c) Give a simple expression for the ML estimate of b.

2. The same question for a Poisson distribution:

    p(x | θ) = (θ^x e^{-θ}) / x!,   x = 0, 1, 2, ...

    l(θ) = Σ_{i=1}^{N} log p(x_i | θ) = Σ_{i=1}^{N} [ x_i log θ - θ - log x_i! ]
         = ( Σ_{i=1}^{N} x_i ) log θ - Nθ - Σ_{i=1}^{N} log x_i!

3.

II. Bayes
Suppose that in a multiple-choice exam a candidate knows the correct answer with probability p and guesses with probability 1 - p. Assume that a candidate who knows the answer answers the question correctly with probability 1, while a candidate who guesses picks the correct answer with probability 1/m, where m is the number of choices.
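A quick numerical check of the closed-form answers implied above: for both the exponential and the Poisson model the ML estimate is the sample mean, and the Bayes question asks for the posterior probability that a candidate knew the answer given a correct response. The "true" parameter values below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

# (I.1) Exponential, p(x) = (1/b) e^{-x/b}: solving dl/db = 0 gives b_hat = mean(x).
x_exp = rng.exponential(scale=2.5, size=10_000)
print("b_hat     =", round(x_exp.mean(), 3), "(true 2.5)")

# (I.2) Poisson: l(theta) = (sum x_i) log(theta) - N*theta - sum log(x_i!), so theta_hat = mean(x).
x_pois = rng.poisson(lam=4.0, size=10_000)
print("theta_hat =", round(x_pois.mean(), 3), "(true 4.0)")

# (II) Bayes: P(knows | correct) = p / (p + (1 - p)/m).
p, m = 0.6, 4
posterior = p / (p + (1 - p) / m)
print("P(knows | correct) =", round(posterior, 3))
```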
Learning Fuzzy Cognitive Maps

This note is only a record of what I have learned, for easy reference later. It mainly draws on 迟亚雄 (Chi Yaxiong), 模糊认知图智能学习算法与应用研究 [D] (Research on Intelligent Learning Algorithms and Applications of Fuzzy Cognitive Maps, doctoral dissertation).

Basic theory and methods

Hebbian-based learning methods
Core idea: the order and manner in which neurons are activated determine how the weights change; if two neurons activate asynchronously the weight between them is decreased, and if they activate synchronously it is increased.
Characteristic: these methods always depend on expert knowledge.
The Nonlinear Hebbian Learning algorithm (NHL) [1] needs experts to intervene in the initialization of the concepts, for example by suggesting the fuzzy value of each concept, the allowed range of those fuzzy values, and the causal relationships between concepts.
Data-driven Nonlinear Hebbian Learning (DDNHL) [2] is similar to NHL, but it exploits observable data during learning to improve the quality of the FCM model.
A learning algorithm that combines ensemble learning with NHL [3] first trains the model with NHL and then uses ensemble learning to improve performance. In terms of the accuracy of the learned fuzzy cognitive maps, this combination outperforms DDNHL.
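To make the Hebbian idea concrete, here is a small Oja-style update for an FCM weight matrix in Python. This is an illustrative sketch of the general principle (co-activated concepts reinforce their connection, a decay term keeps the weights bounded, and only the expert-defined edges are tuned), not the exact NHL rule of [1]; the learning rate, decay factor, and toy 3-concept map are arbitrary.

```python
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

def fcm_step(A, W, lam=1.0):
    """One FCM state update: A_j(t+1) = f(sum_i w_ij * A_i(t))."""
    return sigmoid(A @ W, lam)

def hebbian_update(A_prev, A_next, W, eta=0.01, gamma=0.98):
    """Oja-style nonlinear Hebbian step (illustrative, not the exact NHL rule)."""
    mask = (W != 0).astype(float)                 # only adapt existing causal edges
    outer = np.outer(A_prev, A_next)              # co-activation term A_i(t) * A_j(t+1)
    dW = eta * (outer - (A_next ** 2) * W)        # Hebbian growth minus Oja forgetting
    return np.clip((gamma * W + dW) * mask, -1.0, 1.0)

# Tiny 3-concept example with an expert-suggested initial weight matrix.
W = np.array([[0.0, 0.5, 0.0],
              [0.0, 0.0, -0.4],
              [0.3, 0.0, 0.0]])
A = np.array([0.6, 0.4, 0.5])
for _ in range(20):
    A_next = fcm_step(A, W)
    W = hebbian_update(A, A_next, W)
    A = A_next
print(np.round(W, 3))
```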
Learning methods based on evolutionary computation
Hebbian methods rely heavily on expert knowledge; the aim of evolutionary computation is to learn fuzzy cognitive maps directly from data by searching for the optimal map.
Evolutionary algorithms are heuristic algorithms; common heuristics include genetic algorithms, particle swarm optimization, simulated annealing, and ant colony optimization.
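A minimal sketch of this idea: candidate weight matrices are scored by how well the resulting FCM reproduces an observed sequence of concept states, then selected, averaged (crossover), and mutated. The population size, mutation scale, number of generations, and the synthetic data generator below are arbitrary choices for illustration, not any specific published algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate(W, a0, steps):
    """Roll an FCM forward from initial state a0."""
    states = [a0]
    for _ in range(steps):
        states.append(sigmoid(states[-1] @ W))
    return np.array(states)

def fitness(W_flat, data, n):
    """Negative one-step reconstruction error of the candidate map on the data."""
    W = W_flat.reshape(n, n)
    pred = sigmoid(data[:-1] @ W)
    return -np.mean((pred - data[1:]) ** 2)

# Synthetic "observed" data generated from a hidden true map (placeholder problem).
n = 4
W_true = rng.uniform(-1, 1, (n, n)) * (rng.random((n, n)) < 0.4)
data = simulate(W_true, rng.random(n), steps=30)

# Minimal real-coded genetic algorithm over flattened weight matrices in [-1, 1].
pop = rng.uniform(-1, 1, (60, n * n))
for gen in range(200):
    scores = np.array([fitness(ind, data, n) for ind in pop])
    parents = pop[np.argsort(scores)[-30:]]                                   # truncation selection
    children = (parents[rng.integers(0, 30, 30)] + parents[rng.integers(0, 30, 30)]) / 2  # crossover
    children += rng.normal(0, 0.05, children.shape)                           # mutation
    pop = np.clip(np.vstack([parents, children]), -1, 1)

best = pop[np.argmax([fitness(ind, data, n) for ind in pop])].reshape(n, n)
print(np.round(best, 2))
```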
Open problems
When a fuzzy cognitive map is scaled up, learning becomes a high-dimensional optimization problem: the number of weight relationships to determine grows quadratically with the number of concepts, the resulting search space grows exponentially, and the curse of dimensionality sets in.
From an optimization perspective, with a fixed number of training samples, increasing the number of decision variables leads to overfitting.
Once the data reach a certain dimensionality, finding the optimal solution, or even just matching the performance attainable in lower dimensions, requires a geometrically growing amount of data.
The network density of real fuzzy cognitive maps is much lower than the density of the network models learned by these algorithms.
Fuzzy cognitive map learning based on neural networks and evolutionary computation

References
[1] Papageorgiou E., Stylios C., Groumpos P. Fuzzy Cognitive Map Learning Based on Nonlinear Hebbian Rule. In: AI 2003: Advances in Artificial Intelligence. Springer Berlin Heidelberg, 2003.
[2] Stach W., Kurgan L. A., Pedrycz W. Data-driven Nonlinear Hebbian Learning method for Fuzzy Cognitive Maps. In: IEEE International Conference on Fuzzy Systems. IEEE, 2008.
[3] Papageorgiou E. I., Kannappan A. Fuzzy cognitive map ensemble learning paradigm to solve classification problems: Application to autism identification. Applied Soft Computing Journal, 2012, 12(12): 3798-3809.
[ ] 迟亚雄. 模糊认知图智能学习算法与应用研究 [D].
Stock Price Prediction Based on BP Neural Network and High-Order Fuzzy Cognitive Maps

Intelligent Computer and Applications, Vol. 13, No. 8, Aug. 2023
Article ID: 2095-2163(2023)08-0100-08    CLC number: TP391    Document code: A

Stock price prediction based on BP neural network and high-order fuzzy cognitive maps
WANG Yuhan1, ZHANG Yameng2, WEI Guoliang1
(1 Business School, University of Shanghai for Science and Technology, Shanghai 200093, China; 2 School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China)

[Abstract] The problem of financial market forecasting has been an important issue of considerable interest to academics and related practitioners. Owing to the nonlinearity, uncertainty, and complexity of financial markets, fuzzy cognitive maps have been proposed and applied to financial market forecasting. However, most models based on fuzzy cognitive maps ignore the influence of historical prediction errors on model accuracy. For this situation, the paper proposes a new two-stage prediction model: a stock price prediction model based on a back-propagation neural network (BPNN) and high-order fuzzy cognitive maps (HFCMs), which fully considers the influence of the prediction error on prediction accuracy. First, the HFCMs model is used to forecast the historical data and obtain the corresponding error data set, which serves as the basis for constructing the BPNN error-prediction model. Then, the final prediction is obtained by combining the HFCMs model with the BPNN error-prediction model. The optimal weight matrix of the model is obtained with the time-varying acceleration coefficient particle swarm optimization (TVAC-PSO) algorithm. Finally, experiments on five financial data sets show that the proposed model achieves higher prediction accuracy.
[Keywords] high-order fuzzy cognitive maps; BP neural network; financial time series; forecasting; error

About the authors: WANG Yuhan (2001-), female, undergraduate student; research interests: artificial intelligence, big data analytics. ZHANG Yameng (1992-), female, Ph.D. candidate; research interests: artificial intelligence, big data analytics. WEI Guoliang (1973-), male, Ph.D., professor, doctoral supervisor; research interests: data analysis, nonlinear system control, image processing.
Corresponding author: ZHANG Yameng, Email: 155****1987@163.com
Received: 2022-09-02

0. Introduction
As an important component of economics, financial markets reflect current and future economic conditions and provide investment references for practitioners in the financial field. To better grasp the direction of future economic development, forecasting financial data has long been an important research question, but the nonlinearity, non-stationarity, uncertainty, and complexity of financial data make accurate prediction quite difficult.

Many methods are now available for forecasting financial data; they can be broadly divided into traditional and modern models [1]. Traditional models mainly include the autoregressive model (AR), the autoregressive integrated moving average model (ARIMA), and the autoregressive conditional heteroskedasticity model (ARCH). Reference [2] builds an autoregressive model on past information to predict future returns. Reference [3] uses an ARIMA model to forecast the Philippine peso - US dollar exchange rate. Reference [4] uses an ARCH-type model to forecast the volatility of nine RMB exchange rates. These models laid a solid foundation for financial time-series forecasting, but they still cannot accurately capture the high nonlinearity and complexity of the market.

In recent years, thanks to the development of computer technology, many modern forecasting methods have been proposed and applied to financial data, and fuzzy logic has attracted wide attention for its ability to handle data uncertainty. For example, reference [5] proposes a fuzzy grey prediction method that combines fuzzy techniques with grey theory to predict stock prices, and reference [6] introduces a new fuzzy measure and builds a time-variant fuzzy time-series structure for forecasting stock indices. Fuzzy cognitive maps (FCMs), which combine fuzzy logic and neural networks, have been widely used in the economic field: reference [7] proposes an FCMs model with functional weights for stock price prediction; reference [8] applies an FCMs model combining dynamic time warping and fuzzy c-means to stock-price time series; and reference [9] combines fuzzy cognitive maps with a measure-progression strategy to predict daily stock means.

Although existing FCMs models have achieved much in financial market prediction, they still have shortcomings. For example, the feature sequences extracted by the wavelet transform in [10] may be redundant, and [11] first decomposes a univariate series into several series; both factors may affect the robustness of the prediction. Moreover, current FCMs forecasting methods rely mainly on historical data and do not flexibly exploit the errors produced when predicting that historical data. Reference [12] constructs an error-prediction model on top of an existing prediction model to improve accuracy. Based on this idea, and treating the historical prediction error as a factor that affects the model's predictive ability, this paper proposes a method combining a BP neural network with an HFCMs model to forecast the financial market. The model takes the raw stock prices in a financial data set as input and outputs predicted stock prices. First, the financial data set is preprocessed to a normalized standard. Then, an HFCMs model is built to forecast the data set, and a BP neural network is used to construct the corresponding error-prediction model. Finally, the two models are combined into a BPNN-HFCMs model to predict stock prices. The optimal weight matrix is obtained with the time-varying acceleration coefficient particle swarm optimization (TVAC-PSO) algorithm.

The rest of the paper is organized as follows. Section 1 introduces the structure and methodology of the model, Section 2 presents experiments on a series of financial data sets, and Section 3 summarizes the strengths and weaknesses of the model.
1. Stock price prediction method based on BP neural network and high-order fuzzy cognitive maps

This section describes in detail the methodology for predicting stock prices with a high-order fuzzy cognitive map (HFCMs) model and how a back-propagation neural network (BPNN) is used to further optimize the model error. First, the TVAC-PSO algorithm is used to find the optimal parameters of the HFCMs model and obtain a preliminary prediction of the stock price. Then, a BP neural network is used to build an error-prediction model that predicts the prediction error of the HFCMs model, thereby improving the prediction accuracy.

1.1 High-order fuzzy cognitive maps (HFCMs)
Fuzzy cognitive maps were first proposed by Kosko and colleagues in 1986 as an extension of cognitive maps. Extending the three-valued logical relationships {-1, 0, 1} between concepts in a cognitive map to fuzzy relationships on the interval [-1, 1] forms the underlying logic of fuzzy cognitive maps. In general, an FCM is a directed graph consisting of N concept nodes (entities) and a set of weighted edges; the nodes are connected by weighted edges that express the causal relationships between concepts. The causal relationship between nodes Ci and Cj is represented by a fuzzy membership value wij, so an FCM with n concept nodes is uniquely determined by an n x n matrix W = (wij), with wij taking values in [-1, 1]. In general, wij follows these rules:
(1) wij > 0 means node Ci has a positive influence on node Cj: if Ci changes, Cj changes in the same direction, and the degree of change is given by wij.
(2) wij < 0 means node Ci has a negative influence on node Cj: if Ci changes, Cj changes in the opposite direction, and the degree of change is given by wij.
(3) wij = 0 means node Ci has no influence on node Cj.

Figure 1. Schematic diagram of the structure of an FCMs model

To make the concepts more intuitive, Figure 1 depicts a simple FCMs model with five nodes. The state value of node Cj at time (t+1) is obtained from Eq. (1):

    Aj(t+1) = Φ( Σ_{i=1}^{N} wij Ai(t) ),   j = 1, 2, ..., N    (1)

where Ai(t) is the state value of node Ci at time t, Aj(t+1) is the state value of node Cj at time t+1, N is the number of concept nodes, wij is the connection weight from node Ci to node Cj, and Φ(·) is a nonlinear activation function that maps the data into a given range such as [0, 1]. The two most common activation functions are the sigmoid function and the hyperbolic tangent, given by Eqs. (2) and (3):

    Φ(x) = 1 / (1 + e^{-η1 x})    (2)
    Φ(x) = tanh(η2 x) = (e^{η2 x} - e^{-η2 x}) / (e^{η2 x} + e^{-η2 x})    (3)

where η1 and η2 are positive constants; the sigmoid and the hyperbolic tangent map the input to [0, 1] and [-1, 1], respectively.

Building on Eq. (1) and exploiting historical information, Stach proposed the HFCMs model, which reduces the influence of long-term lag on the prediction results. HFCMs differ from FCMs mainly in the definition of the state value; the HFCMs model is expressed by Eq. (4):

    Aj(t+1) = Φ( Σ_{i=1}^{N} ( wij^(1) Ai(t) + wij^(2) Ai(t-1) + ... + wij^(m) Ai(t-m+1) ) + bi ),   j = 1, 2, ..., N    (4)

where wij^(m) is the connection weight from node Ci to node Cj at lag m, bi is a bias term, and the other symbols have the same meaning as in Eq. (1). Comparing Eq. (1) and Eq. (4), it is easy to see that when m = 1 the HFCMs model reduces to the FCMs model.

The weight matrices wij of the model are optimized with the time-varying acceleration coefficient particle swarm optimization (TVAC-PSO) algorithm. TVAC-PSO is a population-based adaptive learning algorithm in which each individual is called a particle and the memory of the best positions and velocities of all particles in the swarm is retained. Through repeated iterations the particles keep changing their state until an optimal or equilibrium state is reached. The velocity vid and position xid of particle i in the d-dimensional search space are updated as:

    vid = w · vid + c1 · rand · (pbid - xid) + c2 · rand · (gb - xid)    (5)
    xid = xid + vid    (6)

where rand is a random number uniformly distributed in [0, 1], pbid is the best position found so far by particle i in dimension d, and gb is the global best position. The coefficients w, c1, and c2 are defined as:

    w = (wmax - wmin) · (iterm - iter) / iterm + wmin    (7)
    c1 = (c1f - c1i) · i / MAX + c1i    (8)
    c2 = (c2f - c2i) · i / MAX + c2i    (9)

where iterm and iter are the maximum and current iteration numbers, c1f, c1i, c2f, and c2i are constants, i is the current iteration, and MAX is the total number of iterations. TVAC-PSO obtains the best weight matrix by minimizing the following loss function:

    J1 = (1/T) Σ_{t=1}^{T} Σ_{i=1}^{N} ( Âi(t) - Ai(t) )^2 + (λ/2) Σ_{j=1}^{N} wij^2
    s.t.  -1 < wij < 1,   i, j = 1, 2, ..., N    (10)

where λ is a regularization parameter in [0, 1]. The TVAC-PSO procedure helps the HFCMs model achieve higher prediction accuracy.

1.2 Back-propagation neural network (BPNN)
Since the HFCMs model has a certain prediction error, this paper further improves accuracy by predicting that error a second time, using a back-propagation neural network (BPNN) as the error-prediction model. Neural networks imitate the activation and transmission process of human neurons. A BP neural network propagates outputs forward and errors backward; it is a basic neural network with an input layer, a hidden layer, and an output layer. Each neuron is connected to the neurons of the previous layer, collects the information they transmit, turns it into a scalar through an affine transformation z, applies an activation function (Eqs. (2), (3)), and passes the result to the next layer. The affine transformation z can be written as:

    zi = Wi · a_{i-1} + bi    (11)

where a_{i-1} is the input of the layer, Wi is the weight matrix of layer i, and bi is the bias vector. The weight matrices Wi and the corresponding bias vectors bi of each layer are the parameters to be trained. For ease of understanding, a three-layer BP neural network is shown in Figure 2. To obtain the best parameters, training first initializes the weight matrices and bias vectors and then back-propagates the error with gradient descent, gradually reducing the gap between the model output and the actual value so that the weight matrices Wi and the corresponding bias vectors bi approach their optimal values.

Figure 2. Schematic diagram of the structure of a BPNN

1.3 BPNN-HFCMs
The BP neural network above is used to construct an error-prediction model for the HFCMs, which is then combined with the HFCMs model to obtain the BPNN-HFCMs model for forecasting financial data sets. For ease of understanding, Figure 3 shows the flowchart of the model.

Figure 3. Flowchart of the model structure
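As an illustration of Eqs. (2), (4), and (7)-(9) above, the sketch below computes one high-order state update and the time-varying PSO coefficient schedules. The order m, the toy weight matrices, and all schedule constants (wmax, wmin, c1i, c1f, c2i, c2f) are placeholder values chosen for illustration, not the parameters used in the paper.

```python
import numpy as np

def sigmoid(x, eta1=1.0):
    """Eq. (2): maps inputs to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-eta1 * x))

def hfcm_step(history, W_lags, b):
    """Eq. (4): A(t+1) = sigmoid( sum_k A(t-k+1) @ W^(k) + b ) for an order-m HFCM.
    history[0] is A(t), history[1] is A(t-1), ..., history[m-1] is A(t-m+1)."""
    total = sum(a @ Wk for a, Wk in zip(history, W_lags)) + b
    return sigmoid(total)

def tvac_coefficients(it, max_it, w_max=0.9, w_min=0.4,
                      c1i=2.5, c1f=0.5, c2i=0.5, c2f=2.5):
    """Eqs. (7)-(9): the inertia weight decays linearly with the iteration count,
    while c1 shrinks and c2 grows over time."""
    w = (w_max - w_min) * (max_it - it) / max_it + w_min
    c1 = (c1f - c1i) * it / max_it + c1i
    c2 = (c2f - c2i) * it / max_it + c2i
    return w, c1, c2

# Toy order-2 HFCM with 3 concepts (all values are placeholders).
rng = np.random.default_rng(0)
W_lags = [rng.uniform(-1, 1, (3, 3)), rng.uniform(-1, 1, (3, 3))]
b = np.zeros(3)
A_t, A_tm1 = np.array([0.5, 0.6, 0.4]), np.array([0.4, 0.5, 0.5])
print("A(t+1) =", np.round(hfcm_step([A_t, A_tm1], W_lags, b), 3))
print("TVAC coefficients at iteration 50/200:", tvac_coefficients(50, 200))
```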
The BPNN-HFCMs model consists of a model-building part and a prediction part. In the model-building part, the raw stock prices and the maximum number of iterations NG are the inputs. First, 50% of the historical data are used to train HFCMs model 1 with the TVAC-PSO algorithm, and the trained HFCMs model 1 is then used to predict the remaining 50% of the historical data. Comparing the true values of that remaining 50% with the predicted values yields the prediction-error data set, which the BPNN algorithm uses to construct the error-prediction model. The output of this part is the predicted price. In the prediction part, the raw stock prices are again the input. First, 100% of the historical data are used to train HFCMs model 2, which gives the prediction for day T; the trained BPNN model then takes the data of day T-1 as input and outputs the prediction error for day T. Combining the day-T prediction error with the day-T prediction gives the final prediction for day T, which is the output of the model.

1.4 Algorithms
This subsection presents the proposed prediction system, i.e. the HFCMs prediction model and the BPNN-HFCMs model, in detail. For brevity, the pseudocode is given in Algorithm 1 and Algorithm 2.

Algorithm 1. HFCMs model
Input: raw stock prices, maximum number of iterations NG
Output: predicted prices
1: Obtain the historical data and normalize the stock prices to the range [-1, 1]
2: Randomly initialize the velocity vid and position xid of all particles in the TVAC-PSO algorithm
3: for g = 1 to NG do
4:   Update the velocities and positions of all particles with Eqs. (5) and (6)
5:   Compute the value of the loss function J1
6:   Check whether the stopping condition is satisfied
7: end for
8: Obtain the HFCMs model framework
9: Predict future prices with the HFCMs model framework

Algorithm 2. BPNN-HFCMs model
Input: raw stock prices
Output: final predicted prices
1: Obtain the historical data
2: Using 50% of the historical data as input, construct HFCMs model 1 according to Algorithm 1
3: Following Algorithm 1, use HFCMs model 1 to predict the remaining 50% of the historical data
4: Train the BPNN error-prediction model with the prediction errors and the actual values as input
5: Use the HFCMs model to predict the stock price and the BPNN error-prediction model to predict the error at the corresponding time point
6: Combine the predictions and predicted errors from step 5 to obtain the final prediction at each time point

2. Experiments
To verify the effectiveness of the BPNN-HFCMs model for financial time-series prediction and evaluate its performance, this section tests the model on five financial data sets. Section 2.1 describes the experimental setup and evaluation metrics, and Section 2.2 presents the experimental results.

2.1 Experimental setup and evaluation metrics
All experiments were run on a Windows 10 laptop with a 2.38 GHz CPU, 16 GB of memory, and Python 3.10. The study is set up as follows.

(1) Data sets. Five financial time-series data sets downloaded from Yahoo Finance are used to validate the prediction model: SP500, RUT, N225, HSI, and DJI. Each data set contains the daily opening, highest, lowest, and closing prices from January 4, 2010 to October 22, 2020. The overall model has two parts: training the BPNN error-prediction model from the HFCMs prediction results, and combining the HFCMs prediction of the true values with the BPNN prediction of the errors to obtain the final prediction. To train the BPNN, the HFCMs model first predicts 50% of the data set, and these HFCMs predictions become the BPNN input; this 50% is then split into training and test sets at an 80%:20% ratio for error prediction and combined with the HFCMs predictions. Overall, 10% of each data set is used as the test set and the remaining 90% for training. Table 1 gives descriptive statistics of the five data sets.

Table 1. Statistical description of the closing price of each stock
Dataset   Count   Max         Min         Mean        Std
SP500     2721    3580.840    1022.580    2050.590    659.476
RUT       2721    1740.750    580.490     1158.8981   316.3638
N225      2665    24270.619   8160.009    16130.340   5040.198
HSI       2668    33154.121   16250.269   23760.891   3129.092
DJI       2721    29551.419   9686.480    18262.766   5496.737

(2) Baseline models. To verify the predictive ability of the model, HFCMs, FCMs, and BPNN are selected as baselines, described as follows.
1) HFCMs: HFCMs are a common prediction method in finance, so they are used as a baseline; the optimal weight matrix of the HFCMs model is obtained with the TVAC-PSO algorithm.
2) FCMs: FCMs are one of the basic prediction models and are therefore used as a baseline.
3) BPNN: BPNN is one of the classic neural-network prediction models and can essentially be regarded as a regression model built from the training samples; the four basic prices of each stock (open, close, high, and low) are used as input to predict the closing price.

(3) Evaluation metrics. The experiments are evaluated with the mean absolute error (MAE) and the root mean square error (RMSE), both of which measure the difference between the true and predicted stock prices:

    MAE = (1/T) Σ_{i=1}^{T} | yi - ŷi |    (12)
    RMSE = sqrt( (1/T) Σ_{i=1}^{T} ( yi - ŷi )^2 )    (13)

where T is the total number of predicted points and yi and ŷi are the actual and predicted values, respectively. The smaller the MAE and RMSE, the better the model performs.

2.2 Experimental results
This subsection presents and analyzes the experimental results. First, to make the predictions more accurate, the stock prices are mapped into the interval [0, 1] before the proposed model is trained, and each normalized financial data set is divided into three parts: a training set, a validation set, and a test set. Considering the heavy computational load that a high-order HFCMs model places on the computer, the order of the HFCMs used here is two, which reduces the amount of computation. The proposed BPNN-HFCMs model is used to forecast each financial data set and is compared with the baseline models. Tables 2 and 3 compare the MAE and RMSE of the different models on each data set, and Figures 4 and 5 are box plots of the MAE and RMSE values obtained over multiple runs of the BPNN-HFCMs model on each data set.

Table 2. Comparison of MAE for each model
Dataset   HFCMs      BPNN-HFCMs   FCMs       BPNN
SP500     63.7176    45.0588      64.0541    63.7874
RUT       29.9369    26.0957      36.9781    35.0225
N225      415.2766   318.6681     419.9975   418.5336
HSI       301.4185   298.8813     316.9396   304.4796
DJI       605.7794   425.0222     635.8673   606.5479
E图5㊀BPNN-HFCMs模型在各数据集上的RMSE值Fig.5㊀TheRMSEvalueofBPNN-HFCMsmodeloneachdataset㊀㊀从表2 表3可以看出,传统FCMs模型的预测效果最差,BPNN的预测效果次之,而BPNN-HFCMs模型对于各个金融数据集的预测效果则要优于本文选取的3个基线模型的效果㊂根据图4图5,可以认为BPNN-HFCMs模型的预测结果是较为稳定的㊂因此可以认为,在股票价格预测中,BPNN-HFCMs模型是一种较好的预测方法㊂㊀㊀并且,通过对比BPNN-HFCMs模型与HFCMs模型的预测结果,可以得出以下结论:用BP神经网络对HFCMs模型的预测误差做进一步预测,对于提高模型的最终预测精度是有效的㊂根据表2和表3中2个模型在5个数据集预测上的MAE和RMSE值,可以得出对于数据集SP500㊁N225和DJI,其MAE㊁RMSE值均有了显著的降低㊂对于SP500,预测结果的MAE值降低了29.2%,RMSE值降低了24.6%;对于N225,预测结果的MAE值降低了23.2%,RMSE值降低了20.1%;对于DJI,预测结果的MAE值降低了24.6%,RMSE值降低了17.5%㊂尽管金融数据集RUT和HSI在预测结果的MAE㊁RMSE值上降幅并不大,RUT上的降幅为12.8%和1.5%,HSI上的降幅为0.8%和0.6%,但BPNN-HFCMs模型的预测结果依然优于HFCMs模型㊂㊀㊀此外,本文还应用了Diebold-Mariano检验(DM检验)来验证2个模型之间在预测能力上的差异㊂DM检验的无效假设是:2个被测模型之间的预测能力没有明显的差异㊂其中,将第i个模型在t时刻的预测误差定义为ei,t=yi,t-y^i,t,则DM统计数据可由式(14)运算得到:DM=g2πfg(0)/T(14)g-=1TðTt=1δt(15)㊀㊀其中,fg(0)表示fg(0)的一致估计值,即频率为0的损耗差的频谱密度㊂根据DM检验的p值可以判断2个模型预测能力是否存在差异,若p值小于0.05,则2个模型的预测能力存在差异㊂BPNN-HFCMs模型与其他模型之间DM检验的p值见表4㊂根据表4可得,本文提出的BPNN-HFCMs模型与HFCMs㊁FCMs㊁BPNN模型的预测能力存在显著差异㊂㊀㊀为了更好地观察模型BPNN-HFCMs的预测效果,模型在各数据集测试集上的预测结果与真实值的对比如图6所示㊂㊀㊀由图6可知,BPNN-HFCMs模型得到的预测值与真实值较为接近,但由于金融市场中诸多不确定性因素的存在,依然无法实现对金融数据集的零误501第8期王雨涵,等:基于BP神经网络和高阶模糊认知图的股票价格预测差预测㊂但本文提出的模型与其他作为对照的模型相比,依然取得了较为理想的预测效果㊂表4㊀BPNN-HFCMs模型与其他模型之间DM检验的p值Tab.4㊀Thep-valueoftheDMtestbetweentheBPNN-HFCMsmodelandothermodels数据集度量HFCMsFCMsBPNNSP500RMSE0.0000.0000.000MAE0.0000.0000.000RUTRMSE0.0000.0000.000MAE0.0000.0000.000N225RMSE0.0000.0000.000MAE0.0000.0000.000HSIRMSE0.0000.0000.000MAE0.0000.0000.000DJIRMSE0.0000.0000.000MAE0.0000.0000.00038003600340032003000280026002400220020001800P r i c e121416181101121141161181201221241261r a w p r e d i c t T i m e㊀(a)SP500180017001600150014001300120011001000900800P r i c e121416181101121141161181201221241261r a w p r e d i c t T i m e㊀(b)RUT24800238002280021800208001980018800178001680015800P r i c e121416181101121141161181201221241261r a wp r e d i c tT i m e㊀(c)N22530000290002800027000260002500024000230002200021000P r i c e121416181101121141161181201221241261r a wp r e d i c tT i m e㊀(d)HSI310002900027000250002300021000190001700015000P r i c e121416181101121141161181201221241261r a w p r e d i c tT i m 
e㊀(e)DJI图6㊀BPNN-HFCMs模型在各数据集上预测结果与初始值对比图Fig.6㊀ThecomparisonofpredictoutcomesandinitialvaluesoneachdatasetofBPNN-HFCMs3㊀结束语本文提出了一个基于BP神经网络和高阶模糊认知图的股票价格预测模型来预测金融市场㊂通过对初步预测值和真实值的误差构建的误差预测模型来提高预测的准确性㊂模型的最佳权重矩阵通过TVAC-PSO算法得出㊂该模型主要有以下2个优点:(1)在预测中充分考虑了预测误差对模型精度的影响㊂(2)采用TVAC-PSO算法得到模型的最佳权重矩阵㊂最后,通过对5个金融数据集的实验,验证了该模型的有效性㊂参考文献[1]HUANGYusheng,GAOYelin,GANYan,etal.Anewfinancialdataforecastingmodelusinggeneticalgorithmandlongshort-termmemorynetwork[J].Neurocomputing,2021,425:207-218.[2]董雪建.试用自回归模型预测股票市场的收益[J].时代金融,2016(12):127-128.[3]TOLEDONI.Theautoregressiveintegratedmovingaveragemodelinforecastingphilippinepeso-UnitedStatesdollarexchangerates[J].JournalofPhysics:ConferenceSeries,2021,1936(1):012002.(下转第113页)601智㊀能㊀计㊀算㊀机㊀与㊀应㊀用㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀第13卷㊀[4]XIEXunwei,ZHANGYongjun,XIAOLing,etal.Anovelextendedphasecorrelationalgorithmbasedonlog-Gaborfilteringformultimodalremotesensingimageregistration[J].InternationalJournalofRemoteSensing,2019,40(14):5429–5453.[5]DETONED,MALISIEWICZT,RABINOVICHA.SuperPoint:Self-supervisedInterestpointdetectionanddescription[J].arXivpreprintarXiv:1712.07629,2018.[6]SARLINPE,DETONED,MALISIEWICZT,etal.SuperGlue:LearningfeaturematchingwithgraphNeuralNetworks[C]//2020IEEE/CVFConferenceonComputerVisionandPatternRecognition(CVPR).Seattle,WA,USA:IEEE,2020:4937-4946.[7]EFEU,INCEKG,ALATANAA.DFM:Aperformancebaselinefordeepfeaturematching[J].2021IEEE/CVFConferenceonComputerVisionandPatternRecognitionWorkshops(CVPRW).Nashville,TN,USA:IEEE,2021:4279-4288.[8]SCHONBERGERJL,HARDMEIERH,SATTLERT,etal.Comparativeevaluationofhand-craftedandlearnedlocalfeatures[C]//2017IEEEConferenceonComputerVisionandPatternRecognition(CVPR).Honolulu,HI:IEEE,2017:6959-6968.[9]ZHANGHan,NIWeiping,YANWeidong,etal.Registrationofmultimodalremotesensingimagebasedondeepfullyconvolutionalneuralnetwork[J].IEEEJournalofSelectedTopicsinAppliedEarthObservationsandRemoteSensing,2019,12(8):3028-3042.[10]QUANDou,WANGShuang,GUYu,etal.Deepfeaturecorrelationlearningformulti-modalremotesensingimageregistration[J].IEEETransactionsonGeoscienceandRemoteSensing,2022,60:1-16.[11]VASWANIA,SHAZEERN,PARMARN,etal.Attentionisallyouneed[C]//AdvancesinNeuralInformationProcessingSystem.LongBeach:NIPSFoundation,2017:5998-6008.[12]OPPENHEIMAV,LIMJS.Theimportanceofphaseinsignals[J].ProceedingsoftheIEEE,1981,69(5):529-541.[13]MORRONEMC,OWENSRA.Featuredetectionfromlocalenergy[J].PatternRecognitionLetters,1987,6(5):303-313.[14]KOVESIP.Imagefeaturesfromphasecongruency[J].QuarterlyJournal,1999,1(3):1-27.[15]JIANGXingyu,MAJiayi,XIAOGuobao,etal.Areviewofmultimodalimagematching:Methodsandapplications[J].InformationFusion,2021,73:22-71.[16]姚永祥,张永军,万一,等.顾及各向异性加权力矩与绝对相位方向的异源影像匹配[J].武汉大学学报(信息科学版),2021,46(11):1727-1736.[17]FANZhongli,LIUYuxian,LIUYuxuan,etal.3MRS:Aneffectivecoarse-to-finematchingmethodformultimodalremotesensingimagery[J].RemoteSensing,2022,14(3):478.[18]QUANDou,WANGShuang,GUYu,etal.Deepfeaturecorrelationlearningformulti-modalremotesensingimageregistration[J].TransactionsonGeoscience&RemoteSensing,2022,60:1-16.(上接第106页)[4]茆训诚,崔百胜,王周伟.多元旋转自回归条件异方差模型的构建与应用研究 
以九种人民币汇率波动为例[J].上海经济研究,2014(01):70-82.[5]WANGYF.Predictingstockpriceusingfuzzygreypredictionsystem[J].ExpertSystemsWithApplications,2002,22(1):33-38.[6]JILANITA,BURNEYSMA.Arefinedfuzzytimeseriesmodelforstockmarketforecasting[J].PhysicaA:StatisticalMechanicsanditsApplications,2008,387(12):2857-2862.[7]KARATZINISGD,BOUTALISYS.Fuzzycognitivenetworkswithfunctionalweightsfortimeseriesandpatternrecognitionapplications[J].AppliedSoftComputingJournal,2021,106:107415.[8]FENGGuoliang,ZHANGLiyong,YANGJianhua,etal.Long-termpredictionoftimeseriesusingfuzzycognitivemaps[J].EngineeringApplicationsofArtificialIntelligence,2021,102:104274.[9]马楠,杨炳儒,邱正强,等.基于测度递进的模糊认知图及其应用[J].计算机工程与设计,2012,33(05):1958-1962.[10]YANGShanchao,LIUJing.Time-seriesforecastingbasedonhigh-orderfuzzycognitivemapsandwavelettransform[J].IEEETransactionsonFuzzySystems,2018,26(6):3391-3402.[11]LIUZongdong,LIUJing.Arobusttimeseriespredictionmethodbasedonempiricalmodedecompositionandhigh-orderfuzzycognitivemaps[J].Knowledge-BasedSystems,2020,203(prepublish).[12]HUANGYusheng,GAOYelin,GANYan,etal.Anewfinancialdataforecastingmodelusinggeneticalgorithmandlongshort-termmemorynetwork[J].Neurocomputing,2020(prepublish).[13]林春梅,何跃,汤兵勇,等.模糊认知图在股票市场预测中的应用研究[J].计算机应用,2006,26(01):195-197,201.311第8期石添鑫,等:一种基于交叉注意力机制多模态遥感图像匹配网络。
The meaning of "configurational analysis"

configurational analysis 意思Configurational Analysis is an analytical framework used to examine the relationships between different elements or variables within a system or organization. It seeks to understand how these variables interact with each other and influence the overall performance or outcome. This article will delve into the concept of configurational analysis, its key components, and how it can be applied in various contexts.1. Introduction to Configurational Analysis:Configurational Analysis is based on the assumption that the configuration or arrangement of different elements in a system is crucial in determining the overall performance. It emphasizes the interdependencies and interactions between these elements, rather than focusing solely on individual variables. By examining the patterns and structures that emerge from these relationships, configurational analysis aims to uncover the underlying mechanisms or configurations that lead to specific outcomes.2. Key Components of Configurational Analysis:There are several key components of configurational analysis that need to be considered when conducting this type of analysis:2.1 Variables:Configurational analysis involves the identification and selection of relevant variables. These variables can be quantitative or qualitative in nature, and they should represent different aspects or dimensions of the system or organization under study. It is important to select variables that are meaningful and have a significant impact on the overall performance or outcome.2.2 Configurations:Configurations refer to the patterns or structures that emerge from the relationships between variables. These configurations can be simple or complex in nature, and they can take various forms, such as causal chains, feedback loops, or interaction effects. Configurations may involve both direct and indirect relationships between variables, and they can be represented graphically using causal maps or influence diagrams.2.3 Conditions:Conditions refer to the contextual factors or circumstances that shape the relationships between variables. These conditions can include external factors such as market conditions, regulatory environment, or technological advancements, as well as internal factors such as organizational culture, leadership style, or resource availability. Conditions can have a moderating or mediating effect on the relationships between variables and may influence the emergence of specific configurations.2.4 Outputs:Outputs refer to the desired or intended outcomes of the system or organization under study. These outputs can be tangible or intangible in nature, and they can be measured using various performance indicators or metrics. Outputs can include financial performance, customer satisfaction, employee engagement, or social impact, depending on the context or objectives of the analysis.3. Applying Configurational Analysis:Configurational analysis can be applied in various contexts, including business organizations, public institutions, social systems, or environmental systems. It can be used to understand the factors that contribute to success or failure, to identify critical leverage points or bottlenecks, or to design interventions or strategies for improvement.3.1 Business Organizations:In business organizations, configurational analysis can be used to understand the factors that drive profitability, market share, or customer loyalty. 
It can help identify the key resources, capabilities, or processes that contribute to competitive advantage and to design effective business models or strategies. Configurational analysis can also be used to analyze organizational culture, leadership style, or decision-making processes and to understand how these factors influence employee motivation, innovation, or performance.3.2 Public Institutions:In public institutions or government agencies, configurational analysis can be used to analyze the factors that contribute to effective policy implementation, service delivery, or citizen engagement. It can help identify the key drivers of public value and to design effective governance structures or accountability mechanisms. Configurational analysis can also be used to analyze the factors that contribute to the success or failure of public-private partnerships or collaborative networks.3.3 Social Systems:In social systems or communities, configurational analysis can beused to analyze the factors that contribute to social cohesion, social capital, or community resilience. It can help identify the key relationships, norms, or values that shape collective action or social change. Configurational analysis can also be used to analyze the factors that contribute to inequality, social exclusion, or conflict and to design interventions or policies for social justice or inclusion.3.4 Environmental Systems:In environmental systems or ecosystems, configurational analysis can be used to analyze the factors that contribute to ecological resilience, biodiversity, or sustainable resource management. It can help identify the key interactions, feedback loops, or thresholds that shape ecosystem dynamics or vulnerability. Configurational analysis can also be used to analyze the factors that contribute to environmental degradation, climate change, or natural disasters and to design interventions or policies for environmental sustainability or adaptation.4. Challenges and Limitations:4.1 Complexity:Configurational analysis involves dealing with complexity and uncertainty, as it seeks to understand and model the relationships between multiple variables and their configurations. This complexity can make it challenging to identify the most relevant variables or to determine the causal relationships between them. It often requires the use of advanced analytical methods, such as fuzzy set analysis or qualitative comparative analysis, to handle the complexity and uncertainty inherent in configurational analysis.4.2 Context-Specific:Configurational analysis is context-specific, meaning that the configurations and relationships identified in one context may not be applicable or transferable to other contexts. This context-specificity can limit the generalizability of findings and may require the replication or validation of results in different settings or conditions. It also makes it challenging to compare or aggregate configurational analysis across different studies or contexts.4.3 Data Availability:Configurational analysis relies on the availability of relevant and accurate data to identify and analyze the relationships between variables. However, data collection and analysis can be time-consuming and resource-intensive, especially when dealing with complex and dynamic systems. It may also be challenging to quantify or measure certain variables or to obtain reliable data for all relevant variables. These data limitations can affect the validity and reliability of configurational analysis findings.5. 
Conclusion:Configurational analysis is a powerful approach for understanding and analyzing complex systems or organizations. By examining the relationships between variables and their configurations, configurational analysis can provide valuable insights into the underlying mechanisms or configurations that lead to specific outcomes. However, configurational analysis also presents challenges in terms of complexity, context-specificity, and data availability. Despite these limitations, configurational analysis offers a unique and holistic perspective on the interdependencies and interactions that shape the performance or outcomes of systems or organizations.
Common English Vocabulary in Machine Learning and Artificial Intelligence

机器学习与人工智能领域中常用的英语词汇1.General Concepts (基础概念)•Artificial Intelligence (AI) - 人工智能1)Artificial Intelligence (AI) - 人工智能2)Machine Learning (ML) - 机器学习3)Deep Learning (DL) - 深度学习4)Neural Network - 神经网络5)Natural Language Processing (NLP) - 自然语言处理6)Computer Vision - 计算机视觉7)Robotics - 机器人技术8)Speech Recognition - 语音识别9)Expert Systems - 专家系统10)Knowledge Representation - 知识表示11)Pattern Recognition - 模式识别12)Cognitive Computing - 认知计算13)Autonomous Systems - 自主系统14)Human-Machine Interaction - 人机交互15)Intelligent Agents - 智能代理16)Machine Translation - 机器翻译17)Swarm Intelligence - 群体智能18)Genetic Algorithms - 遗传算法19)Fuzzy Logic - 模糊逻辑20)Reinforcement Learning - 强化学习•Machine Learning (ML) - 机器学习1)Machine Learning (ML) - 机器学习2)Artificial Neural Network - 人工神经网络3)Deep Learning - 深度学习4)Supervised Learning - 有监督学习5)Unsupervised Learning - 无监督学习6)Reinforcement Learning - 强化学习7)Semi-Supervised Learning - 半监督学习8)Training Data - 训练数据9)Test Data - 测试数据10)Validation Data - 验证数据11)Feature - 特征12)Label - 标签13)Model - 模型14)Algorithm - 算法15)Regression - 回归16)Classification - 分类17)Clustering - 聚类18)Dimensionality Reduction - 降维19)Overfitting - 过拟合20)Underfitting - 欠拟合•Deep Learning (DL) - 深度学习1)Deep Learning - 深度学习2)Neural Network - 神经网络3)Artificial Neural Network (ANN) - 人工神经网络4)Convolutional Neural Network (CNN) - 卷积神经网络5)Recurrent Neural Network (RNN) - 循环神经网络6)Long Short-Term Memory (LSTM) - 长短期记忆网络7)Gated Recurrent Unit (GRU) - 门控循环单元8)Autoencoder - 自编码器9)Generative Adversarial Network (GAN) - 生成对抗网络10)Transfer Learning - 迁移学习11)Pre-trained Model - 预训练模型12)Fine-tuning - 微调13)Feature Extraction - 特征提取14)Activation Function - 激活函数15)Loss Function - 损失函数16)Gradient Descent - 梯度下降17)Backpropagation - 反向传播18)Epoch - 训练周期19)Batch Size - 批量大小20)Dropout - 丢弃法•Neural Network - 神经网络1)Neural Network - 神经网络2)Artificial Neural Network (ANN) - 人工神经网络3)Deep Neural Network (DNN) - 深度神经网络4)Convolutional Neural Network (CNN) - 卷积神经网络5)Recurrent Neural Network (RNN) - 循环神经网络6)Long Short-Term Memory (LSTM) - 长短期记忆网络7)Gated Recurrent Unit (GRU) - 门控循环单元8)Feedforward Neural Network - 前馈神经网络9)Multi-layer Perceptron (MLP) - 多层感知器10)Radial Basis Function Network (RBFN) - 径向基函数网络11)Hopfield Network - 霍普菲尔德网络12)Boltzmann Machine - 玻尔兹曼机13)Autoencoder - 自编码器14)Spiking Neural Network (SNN) - 脉冲神经网络15)Self-organizing Map (SOM) - 自组织映射16)Restricted Boltzmann Machine (RBM) - 受限玻尔兹曼机17)Hebbian Learning - 海比安学习18)Competitive Learning - 竞争学习19)Neuroevolutionary - 神经进化20)Neuron - 神经元•Algorithm - 算法1)Algorithm - 算法2)Supervised Learning Algorithm - 有监督学习算法3)Unsupervised Learning Algorithm - 无监督学习算法4)Reinforcement Learning Algorithm - 强化学习算法5)Classification Algorithm - 分类算法6)Regression Algorithm - 回归算法7)Clustering Algorithm - 聚类算法8)Dimensionality Reduction Algorithm - 降维算法9)Decision Tree Algorithm - 决策树算法10)Random Forest Algorithm - 随机森林算法11)Support Vector Machine (SVM) Algorithm - 支持向量机算法12)K-Nearest Neighbors (KNN) Algorithm - K近邻算法13)Naive Bayes Algorithm - 朴素贝叶斯算法14)Gradient Descent Algorithm - 梯度下降算法15)Genetic Algorithm - 遗传算法16)Neural Network Algorithm - 神经网络算法17)Deep Learning Algorithm - 深度学习算法18)Ensemble Learning Algorithm - 集成学习算法19)Reinforcement Learning Algorithm - 强化学习算法20)Metaheuristic Algorithm - 元启发式算法•Model - 模型1)Model - 模型2)Machine Learning Model - 机器学习模型3)Artificial Intelligence Model - 人工智能模型4)Predictive Model - 预测模型5)Classification Model - 分类模型6)Regression Model - 回归模型7)Generative Model - 生成模型8)Discriminative Model - 判别模型9)Probabilistic Model - 概率模型10)Statistical Model - 统计模型11)Neural Network Model - 神经网络模型12)Deep Learning Model - 
深度学习模型13)Ensemble Model - 集成模型14)Reinforcement Learning Model - 强化学习模型15)Support Vector Machine (SVM) Model - 支持向量机模型16)Decision Tree Model - 决策树模型17)Random Forest Model - 随机森林模型18)Naive Bayes Model - 朴素贝叶斯模型19)Autoencoder Model - 自编码器模型20)Convolutional Neural Network (CNN) Model - 卷积神经网络模型•Dataset - 数据集1)Dataset - 数据集2)Training Dataset - 训练数据集3)Test Dataset - 测试数据集4)Validation Dataset - 验证数据集5)Balanced Dataset - 平衡数据集6)Imbalanced Dataset - 不平衡数据集7)Synthetic Dataset - 合成数据集8)Benchmark Dataset - 基准数据集9)Open Dataset - 开放数据集10)Labeled Dataset - 标记数据集11)Unlabeled Dataset - 未标记数据集12)Semi-Supervised Dataset - 半监督数据集13)Multiclass Dataset - 多分类数据集14)Feature Set - 特征集15)Data Augmentation - 数据增强16)Data Preprocessing - 数据预处理17)Missing Data - 缺失数据18)Outlier Detection - 异常值检测19)Data Imputation - 数据插补20)Metadata - 元数据•Training - 训练1)Training - 训练2)Training Data - 训练数据3)Training Phase - 训练阶段4)Training Set - 训练集5)Training Examples - 训练样本6)Training Instance - 训练实例7)Training Algorithm - 训练算法8)Training Model - 训练模型9)Training Process - 训练过程10)Training Loss - 训练损失11)Training Epoch - 训练周期12)Training Batch - 训练批次13)Online Training - 在线训练14)Offline Training - 离线训练15)Continuous Training - 连续训练16)Transfer Learning - 迁移学习17)Fine-Tuning - 微调18)Curriculum Learning - 课程学习19)Self-Supervised Learning - 自监督学习20)Active Learning - 主动学习•Testing - 测试1)Testing - 测试2)Test Data - 测试数据3)Test Set - 测试集4)Test Examples - 测试样本5)Test Instance - 测试实例6)Test Phase - 测试阶段7)Test Accuracy - 测试准确率8)Test Loss - 测试损失9)Test Error - 测试错误10)Test Metrics - 测试指标11)Test Suite - 测试套件12)Test Case - 测试用例13)Test Coverage - 测试覆盖率14)Cross-Validation - 交叉验证15)Holdout Validation - 留出验证16)K-Fold Cross-Validation - K折交叉验证17)Stratified Cross-Validation - 分层交叉验证18)Test Driven Development (TDD) - 测试驱动开发19)A/B Testing - A/B 测试20)Model Evaluation - 模型评估•Validation - 验证1)Validation - 验证2)Validation Data - 验证数据3)Validation Set - 验证集4)Validation Examples - 验证样本5)Validation Instance - 验证实例6)Validation Phase - 验证阶段7)Validation Accuracy - 验证准确率8)Validation Loss - 验证损失9)Validation Error - 验证错误10)Validation Metrics - 验证指标11)Cross-Validation - 交叉验证12)Holdout Validation - 留出验证13)K-Fold Cross-Validation - K折交叉验证14)Stratified Cross-Validation - 分层交叉验证15)Leave-One-Out Cross-Validation - 留一法交叉验证16)Validation Curve - 验证曲线17)Hyperparameter Validation - 超参数验证18)Model Validation - 模型验证19)Early Stopping - 提前停止20)Validation Strategy - 验证策略•Supervised Learning - 有监督学习1)Supervised Learning - 有监督学习2)Label - 标签3)Feature - 特征4)Target - 目标5)Training Labels - 训练标签6)Training Features - 训练特征7)Training Targets - 训练目标8)Training Examples - 训练样本9)Training Instance - 训练实例10)Regression - 回归11)Classification - 分类12)Predictor - 预测器13)Regression Model - 回归模型14)Classifier - 分类器15)Decision Tree - 决策树16)Support Vector Machine (SVM) - 支持向量机17)Neural Network - 神经网络18)Feature Engineering - 特征工程19)Model Evaluation - 模型评估20)Overfitting - 过拟合21)Underfitting - 欠拟合22)Bias-Variance Tradeoff - 偏差-方差权衡•Unsupervised Learning - 无监督学习1)Unsupervised Learning - 无监督学习2)Clustering - 聚类3)Dimensionality Reduction - 降维4)Anomaly Detection - 异常检测5)Association Rule Learning - 关联规则学习6)Feature Extraction - 特征提取7)Feature Selection - 特征选择8)K-Means - K均值9)Hierarchical Clustering - 层次聚类10)Density-Based Clustering - 基于密度的聚类11)Principal Component Analysis (PCA) - 主成分分析12)Independent Component Analysis (ICA) - 独立成分分析13)T-distributed Stochastic Neighbor Embedding (t-SNE) - t分布随机邻居嵌入14)Gaussian Mixture Model (GMM) - 高斯混合模型15)Self-Organizing Maps (SOM) - 自组织映射16)Autoencoder - 自动编码器17)Latent Variable - 潜变量18)Data Preprocessing - 
数据预处理19)Outlier Detection - 异常值检测20)Clustering Algorithm - 聚类算法•Reinforcement Learning - 强化学习1)Reinforcement Learning - 强化学习2)Agent - 代理3)Environment - 环境4)State - 状态5)Action - 动作6)Reward - 奖励7)Policy - 策略8)Value Function - 值函数9)Q-Learning - Q学习10)Deep Q-Network (DQN) - 深度Q网络11)Policy Gradient - 策略梯度12)Actor-Critic - 演员-评论家13)Exploration - 探索14)Exploitation - 开发15)Temporal Difference (TD) - 时间差分16)Markov Decision Process (MDP) - 马尔可夫决策过程17)State-Action-Reward-State-Action (SARSA) - 状态-动作-奖励-状态-动作18)Policy Iteration - 策略迭代19)Value Iteration - 值迭代20)Monte Carlo Methods - 蒙特卡洛方法•Semi-Supervised Learning - 半监督学习1)Semi-Supervised Learning - 半监督学习2)Labeled Data - 有标签数据3)Unlabeled Data - 无标签数据4)Label Propagation - 标签传播5)Self-Training - 自训练6)Co-Training - 协同训练7)Transudative Learning - 传导学习8)Inductive Learning - 归纳学习9)Manifold Regularization - 流形正则化10)Graph-based Methods - 基于图的方法11)Cluster Assumption - 聚类假设12)Low-Density Separation - 低密度分离13)Semi-Supervised Support Vector Machines (S3VM) - 半监督支持向量机14)Expectation-Maximization (EM) - 期望最大化15)Co-EM - 协同期望最大化16)Entropy-Regularized EM - 熵正则化EM17)Mean Teacher - 平均教师18)Virtual Adversarial Training - 虚拟对抗训练19)Tri-training - 三重训练20)Mix Match - 混合匹配•Feature - 特征1)Feature - 特征2)Feature Engineering - 特征工程3)Feature Extraction - 特征提取4)Feature Selection - 特征选择5)Input Features - 输入特征6)Output Features - 输出特征7)Feature Vector - 特征向量8)Feature Space - 特征空间9)Feature Representation - 特征表示10)Feature Transformation - 特征转换11)Feature Importance - 特征重要性12)Feature Scaling - 特征缩放13)Feature Normalization - 特征归一化14)Feature Encoding - 特征编码15)Feature Fusion - 特征融合16)Feature Dimensionality Reduction - 特征维度减少17)Continuous Feature - 连续特征18)Categorical Feature - 分类特征19)Nominal Feature - 名义特征20)Ordinal Feature - 有序特征•Label - 标签1)Label - 标签2)Labeling - 标注3)Ground Truth - 地面真值4)Class Label - 类别标签5)Target Variable - 目标变量6)Labeling Scheme - 标注方案7)Multi-class Labeling - 多类别标注8)Binary Labeling - 二分类标注9)Label Noise - 标签噪声10)Labeling Error - 标注错误11)Label Propagation - 标签传播12)Unlabeled Data - 无标签数据13)Labeled Data - 有标签数据14)Semi-supervised Learning - 半监督学习15)Active Learning - 主动学习16)Weakly Supervised Learning - 弱监督学习17)Noisy Label Learning - 噪声标签学习18)Self-training - 自训练19)Crowdsourcing Labeling - 众包标注20)Label Smoothing - 标签平滑化•Prediction - 预测1)Prediction - 预测2)Forecasting - 预测3)Regression - 回归4)Classification - 分类5)Time Series Prediction - 时间序列预测6)Forecast Accuracy - 预测准确性7)Predictive Modeling - 预测建模8)Predictive Analytics - 预测分析9)Forecasting Method - 预测方法10)Predictive Performance - 预测性能11)Predictive Power - 预测能力12)Prediction Error - 预测误差13)Prediction Interval - 预测区间14)Prediction Model - 预测模型15)Predictive Uncertainty - 预测不确定性16)Forecast Horizon - 预测时间跨度17)Predictive Maintenance - 预测性维护18)Predictive Policing - 预测式警务19)Predictive Healthcare - 预测性医疗20)Predictive Maintenance - 预测性维护•Classification - 分类1)Classification - 分类2)Classifier - 分类器3)Class - 类别4)Classify - 对数据进行分类5)Class Label - 类别标签6)Binary Classification - 二元分类7)Multiclass Classification - 多类分类8)Class Probability - 类别概率9)Decision Boundary - 决策边界10)Decision Tree - 决策树11)Support Vector Machine (SVM) - 支持向量机12)K-Nearest Neighbors (KNN) - K最近邻算法13)Naive Bayes - 朴素贝叶斯14)Logistic Regression - 逻辑回归15)Random Forest - 随机森林16)Neural Network - 神经网络17)SoftMax Function - SoftMax函数18)One-vs-All (One-vs-Rest) - 一对多(一对剩余)19)Ensemble Learning - 集成学习20)Confusion Matrix - 混淆矩阵•Regression - 回归1)Regression Analysis - 回归分析2)Linear Regression - 线性回归3)Multiple Regression - 多元回归4)Polynomial Regression - 多项式回归5)Logistic Regression - 逻辑回归6)Ridge Regression - 
岭回归7)Lasso Regression - Lasso回归8)Elastic Net Regression - 弹性网络回归9)Regression Coefficients - 回归系数10)Residuals - 残差11)Ordinary Least Squares (OLS) - 普通最小二乘法12)Ridge Regression Coefficient - 岭回归系数13)Lasso Regression Coefficient - Lasso回归系数14)Elastic Net Regression Coefficient - 弹性网络回归系数15)Regression Line - 回归线16)Prediction Error - 预测误差17)Regression Model - 回归模型18)Nonlinear Regression - 非线性回归19)Generalized Linear Models (GLM) - 广义线性模型20)Coefficient of Determination (R-squared) - 决定系数21)F-test - F检验22)Homoscedasticity - 同方差性23)Heteroscedasticity - 异方差性24)Autocorrelation - 自相关25)Multicollinearity - 多重共线性26)Outliers - 异常值27)Cross-validation - 交叉验证28)Feature Selection - 特征选择29)Feature Engineering - 特征工程30)Regularization - 正则化2.Neural Networks and Deep Learning (神经网络与深度学习)•Convolutional Neural Network (CNN) - 卷积神经网络1)Convolutional Neural Network (CNN) - 卷积神经网络2)Convolution Layer - 卷积层3)Feature Map - 特征图4)Convolution Operation - 卷积操作5)Stride - 步幅6)Padding - 填充7)Pooling Layer - 池化层8)Max Pooling - 最大池化9)Average Pooling - 平均池化10)Fully Connected Layer - 全连接层11)Activation Function - 激活函数12)Rectified Linear Unit (ReLU) - 线性修正单元13)Dropout - 随机失活14)Batch Normalization - 批量归一化15)Transfer Learning - 迁移学习16)Fine-Tuning - 微调17)Image Classification - 图像分类18)Object Detection - 物体检测19)Semantic Segmentation - 语义分割20)Instance Segmentation - 实例分割21)Generative Adversarial Network (GAN) - 生成对抗网络22)Image Generation - 图像生成23)Style Transfer - 风格迁移24)Convolutional Autoencoder - 卷积自编码器25)Recurrent Neural Network (RNN) - 循环神经网络•Recurrent Neural Network (RNN) - 循环神经网络1)Recurrent Neural Network (RNN) - 循环神经网络2)Long Short-Term Memory (LSTM) - 长短期记忆网络3)Gated Recurrent Unit (GRU) - 门控循环单元4)Sequence Modeling - 序列建模5)Time Series Prediction - 时间序列预测6)Natural Language Processing (NLP) - 自然语言处理7)Text Generation - 文本生成8)Sentiment Analysis - 情感分析9)Named Entity Recognition (NER) - 命名实体识别10)Part-of-Speech Tagging (POS Tagging) - 词性标注11)Sequence-to-Sequence (Seq2Seq) - 序列到序列12)Attention Mechanism - 注意力机制13)Encoder-Decoder Architecture - 编码器-解码器架构14)Bidirectional RNN - 双向循环神经网络15)Teacher Forcing - 强制教师法16)Backpropagation Through Time (BPTT) - 通过时间的反向传播17)Vanishing Gradient Problem - 梯度消失问题18)Exploding Gradient Problem - 梯度爆炸问题19)Language Modeling - 语言建模20)Speech Recognition - 语音识别•Long Short-Term Memory (LSTM) - 长短期记忆网络1)Long Short-Term Memory (LSTM) - 长短期记忆网络2)Cell State - 细胞状态3)Hidden State - 隐藏状态4)Forget Gate - 遗忘门5)Input Gate - 输入门6)Output Gate - 输出门7)Peephole Connections - 窥视孔连接8)Gated Recurrent Unit (GRU) - 门控循环单元9)Vanishing Gradient Problem - 梯度消失问题10)Exploding Gradient Problem - 梯度爆炸问题11)Sequence Modeling - 序列建模12)Time Series Prediction - 时间序列预测13)Natural Language Processing (NLP) - 自然语言处理14)Text Generation - 文本生成15)Sentiment Analysis - 情感分析16)Named Entity Recognition (NER) - 命名实体识别17)Part-of-Speech Tagging (POS Tagging) - 词性标注18)Attention Mechanism - 注意力机制19)Encoder-Decoder Architecture - 编码器-解码器架构20)Bidirectional LSTM - 双向长短期记忆网络•Attention Mechanism - 注意力机制1)Attention Mechanism - 注意力机制2)Self-Attention - 自注意力3)Multi-Head Attention - 多头注意力4)Transformer - 变换器5)Query - 查询6)Key - 键7)Value - 值8)Query-Value Attention - 查询-值注意力9)Dot-Product Attention - 点积注意力10)Scaled Dot-Product Attention - 缩放点积注意力11)Additive Attention - 加性注意力12)Context Vector - 上下文向量13)Attention Score - 注意力分数14)SoftMax Function - SoftMax函数15)Attention Weight - 注意力权重16)Global Attention - 全局注意力17)Local Attention - 局部注意力18)Positional Encoding - 位置编码19)Encoder-Decoder Attention - 编码器-解码器注意力20)Cross-Modal Attention - 跨模态注意力•Generative Adversarial Network (GAN) - 
•Generative Adversarial Network (GAN) - 生成对抗网络: 1) Generative Adversarial Network (GAN) - 生成对抗网络; 2) Generator - 生成器; 3) Discriminator - 判别器; 4) Adversarial Training - 对抗训练; 5) Minimax Game - 极小极大博弈; 6) Nash Equilibrium - 纳什均衡; 7) Mode Collapse - 模式崩溃; 8) Training Stability - 训练稳定性; 9) Loss Function - 损失函数; 10) Discriminative Loss - 判别损失; 11) Generative Loss - 生成损失; 12) Wasserstein GAN (WGAN) - Wasserstein GAN(WGAN); 13) Deep Convolutional GAN (DCGAN) - 深度卷积生成对抗网络(DCGAN); 14) Conditional GAN (cGAN) - 条件生成对抗网络(cGAN); 15) StyleGAN - 风格生成对抗网络; 16) CycleGAN - 循环生成对抗网络; 17) Progressive Growing GAN (PGGAN) - 渐进式增长生成对抗网络(PGGAN); 18) Self-Attention GAN (SAGAN) - 自注意力生成对抗网络(SAGAN); 19) BigGAN - 大规模生成对抗网络; 20) Adversarial Examples - 对抗样本
•Encoder-Decoder - 编码器-解码器: 1) Encoder-Decoder Architecture - 编码器-解码器架构; 2) Encoder - 编码器; 3) Decoder - 解码器; 4) Sequence-to-Sequence Model (Seq2Seq) - 序列到序列模型; 5) State Vector - 状态向量; 6) Context Vector - 上下文向量; 7) Hidden State - 隐藏状态; 8) Attention Mechanism - 注意力机制; 9) Teacher Forcing - 强制教师法; 10) Beam Search - 束搜索; 11) Recurrent Neural Network (RNN) - 循环神经网络; 12) Long Short-Term Memory (LSTM) - 长短期记忆网络; 13) Gated Recurrent Unit (GRU) - 门控循环单元; 14) Bidirectional Encoder - 双向编码器; 15) Greedy Decoding - 贪婪解码; 16) Masking - 遮盖; 17) Dropout - 随机失活; 18) Embedding Layer - 嵌入层; 19) Cross-Entropy Loss - 交叉熵损失; 20) Tokenization - 令牌化
•Transfer Learning - 迁移学习: 1) Transfer Learning - 迁移学习; 2) Source Domain - 源领域; 3) Target Domain - 目标领域; 4) Fine-Tuning - 微调; 5) Domain Adaptation - 领域自适应; 6) Pre-Trained Model - 预训练模型; 7) Feature Extraction - 特征提取; 8) Knowledge Transfer - 知识迁移; 9) Unsupervised Domain Adaptation - 无监督领域自适应; 10) Semi-Supervised Domain Adaptation - 半监督领域自适应; 11) Multi-Task Learning - 多任务学习; 12) Data Augmentation - 数据增强; 13) Task Transfer - 任务迁移; 14) Model Agnostic Meta-Learning (MAML) - 与模型无关的元学习(MAML); 15) One-Shot Learning - 单样本学习; 16) Zero-Shot Learning - 零样本学习; 17) Few-Shot Learning - 少样本学习; 18) Knowledge Distillation - 知识蒸馏; 19) Representation Learning - 表征学习; 20) Adversarial Transfer Learning - 对抗迁移学习
•Pre-trained Models - 预训练模型: 1) Pre-trained Model - 预训练模型; 2) Transfer Learning - 迁移学习; 3) Fine-Tuning - 微调; 4) Knowledge Transfer - 知识迁移; 5) Domain Adaptation - 领域自适应; 6) Feature Extraction - 特征提取; 7) Representation Learning - 表征学习; 8) Language Model - 语言模型; 9) Bidirectional Encoder Representations from Transformers (BERT) - 基于Transformer的双向编码器表示; 10) Generative Pre-trained Transformer (GPT) - 生成式预训练转换器; 11) Transformer-based Models - 基于转换器的模型; 12) Masked Language Model (MLM) - 掩蔽语言模型; 13) Cloze Task - 填空任务; 14) Tokenization - 令牌化; 15) Word Embeddings - 词嵌入; 16) Sentence Embeddings - 句子嵌入; 17) Contextual Embeddings - 上下文嵌入; 18) Self-Supervised Learning - 自监督学习; 19) Large-Scale Pre-trained Models - 大规模预训练模型
•Loss Function - 损失函数 (illustrated in the sketch below): 1) Loss Function - 损失函数; 2) Mean Squared Error (MSE) - 均方误差; 3) Mean Absolute Error (MAE) - 平均绝对误差; 4) Cross-Entropy Loss - 交叉熵损失; 5) Binary Cross-Entropy Loss - 二元交叉熵损失; 6) Categorical Cross-Entropy Loss - 分类交叉熵损失; 7) Hinge Loss - 合页损失; 8) Huber Loss - Huber损失; 9) Wasserstein Distance - Wasserstein距离; 10) Triplet Loss - 三元组损失; 11) Contrastive Loss - 对比损失; 12) Dice Loss - Dice损失; 13) Focal Loss - 焦点损失; 14) GAN Loss - GAN损失; 15) Adversarial Loss - 对抗损失; 16) L1 Loss - L1损失; 17) L2 Loss - L2损失; 18) Quantile Loss - 分位数损失
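To ground two of the loss functions listed above, here is a small illustrative NumPy implementation of mean squared error and binary cross-entropy; the sample arrays are made up for the example:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Squared Error between targets and predictions."""
    return np.mean((y_true - y_pred) ** 2)

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    """Binary Cross-Entropy for predicted probabilities in (0, 1)."""
    p = np.clip(p_pred, eps, 1 - eps)   # avoid log(0)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y_reg_true = np.array([3.0, -0.5, 2.0])
y_reg_pred = np.array([2.5,  0.0, 2.1])
print(mse(y_reg_true, y_reg_pred))            # 0.17

y_cls_true = np.array([1, 0, 1, 1])
y_cls_prob = np.array([0.9, 0.2, 0.6, 0.8])
print(binary_cross_entropy(y_cls_true, y_cls_prob))
```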
•Activation Function - 激活函数: 1) Activation Function - 激活函数; 2) Sigmoid Function - Sigmoid函数; 3) Hyperbolic Tangent Function (Tanh) - 双曲正切函数; 4) Rectified Linear Unit (ReLU) - 修正线性单元; 5) Parametric ReLU (PReLU) - 参数化ReLU; 6) Exponential Linear Unit (ELU) - 指数线性单元; 7) Swish Function - Swish函数; 8) Softplus Function - Softplus函数; 9) Softmax Function - Softmax函数; 10) Hard Tanh Function - 硬双曲正切函数; 11) Softsign Function - Softsign函数; 12) GELU (Gaussian Error Linear Unit) - GELU(高斯误差线性单元); 13) Mish Function - Mish函数; 14) CELU (Continuous Exponential Linear Unit) - CELU(连续指数线性单元); 15) Bent Identity Function - 弯曲恒等函数; 16) Gaussian Error Linear Units (GELUs) - 高斯误差线性单元; 17) Adaptive Piecewise Linear (APL) - 自适应分段线性函数; 18) Radial Basis Function (RBF) - 径向基函数
•Backpropagation - 反向传播: 1) Backpropagation - 反向传播; 2) Gradient Descent - 梯度下降; 3) Partial Derivative - 偏导数; 4) Chain Rule - 链式法则; 5) Forward Pass - 前向传播; 6) Backward Pass - 反向传播; 7) Computational Graph - 计算图; 8) Neural Network - 神经网络; 9) Loss Function - 损失函数; 10) Gradient Calculation - 梯度计算; 11) Weight Update - 权重更新; 12) Activation Function - 激活函数; 13) Optimizer - 优化器; 14) Learning Rate - 学习率; 15) Mini-Batch Gradient Descent - 小批量梯度下降; 16) Stochastic Gradient Descent (SGD) - 随机梯度下降; 17) Batch Gradient Descent - 批量梯度下降; 18) Momentum - 动量; 19) Adam Optimizer - Adam优化器; 20) Learning Rate Decay - 学习率衰减
•Gradient Descent - 梯度下降: 1) Gradient Descent - 梯度下降; 2) Stochastic Gradient Descent (SGD) - 随机梯度下降; 3) Mini-Batch Gradient Descent - 小批量梯度下降; 4) Batch Gradient Descent - 批量梯度下降; 5) Learning Rate - 学习率; 6) Momentum - 动量; 7) Adaptive Moment Estimation (Adam) - 自适应矩估计; 8) RMSprop - 均方根传播; 9) Learning Rate Schedule - 学习率调度; 10) Convergence - 收敛; 11) Divergence - 发散; 12) Adagrad - 自适应学习速率方法; 13) Adadelta - 自适应增量学习率方法; 14) Adamax - 自适应矩估计的扩展版本; 15) Nadam - Nesterov Accelerated Adaptive Moment Estimation; 16) Learning Rate Decay - 学习率衰减; 17) Step Size - 步长; 18) Conjugate Gradient Descent - 共轭梯度下降; 19) Line Search - 线搜索; 20) Newton's Method - 牛顿法
•Learning Rate - 学习率: 1) Learning Rate - 学习率; 2) Adaptive Learning Rate - 自适应学习率; 3) Learning Rate Decay - 学习率衰减; 4) Initial Learning Rate - 初始学习率; 5) Step Size - 步长; 6) Momentum - 动量; 7) Exponential Decay - 指数衰减; 8) Annealing - 退火; 9) Cyclical Learning Rate - 循环学习率; 10) Learning Rate Schedule - 学习率调度; 11) Warm-up - 预热; 12) Learning Rate Policy - 学习率策略; 13) Learning Rate Annealing - 学习率退火; 14) Cosine Annealing - 余弦退火; 15) Gradient Clipping - 梯度裁剪; 16) Adapting Learning Rate - 适应学习率; 17) Learning Rate Multiplier - 学习率倍增器; 18) Learning Rate Reduction - 学习率降低; 19) Learning Rate Update - 学习率更新; 20) Scheduled Learning Rate - 定期学习率
•Batch Size - 批量大小: 1) Batch Size - 批量大小; 2) Mini-Batch - 小批量; 3) Batch Gradient Descent - 批量梯度下降; 4) Stochastic Gradient Descent (SGD) - 随机梯度下降; 5) Mini-Batch Gradient Descent - 小批量梯度下降; 6) Online Learning - 在线学习; 7) Full-Batch - 全批量; 8) Data Batch - 数据批次; 9) Training Batch - 训练批次; 10) Batch Normalization - 批量归一化; 11) Batch-wise Optimization - 批量优化; 12) Batch Processing - 批量处理; 13) Batch Sampling - 批量采样; 14) Adaptive Batch Size - 自适应批量大小; 15) Batch Splitting - 批量分割; 16) Dynamic Batch Size - 动态批量大小; 17) Fixed Batch Size - 固定批量大小; 18) Batch-wise Inference - 批量推理; 19) Batch-wise Training - 批量训练; 20) Batch Shuffling - 批量洗牌
•Epoch - 训练周期 (illustrated in the sketch below): 1) Training Epoch - 训练周期; 2) Epoch Size - 周期大小; 3) Early Stopping - 提前停止; 4) Validation Set - 验证集; 5) Training Set - 训练集; 6) Test Set - 测试集; 7) Overfitting - 过拟合; 8) Underfitting - 欠拟合; 9) Model Evaluation - 模型评估; 10) Model Selection - 模型选择; 11) Hyperparameter Tuning - 超参数调优; 12) Cross-Validation - 交叉验证; 13) K-fold Cross-Validation - K折交叉验证; 14) Stratified Cross-Validation - 分层交叉验证; 15) Leave-One-Out Cross-Validation (LOOCV) - 留一法交叉验证; 16) Grid Search - 网格搜索; 17) Random Search - 随机搜索; 18) Model Complexity - 模型复杂度; 19) Learning Curve - 学习曲线; 20) Convergence - 收敛
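The epoch, batch-size, learning-rate, validation-set, and early-stopping terms above come together in an ordinary mini-batch gradient-descent training loop. The sketch below, on a toy linear-regression problem with invented data, is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

X_train, y_train = X[:160], y[:160]          # training set
X_val, y_val = X[160:], y[160:]              # validation set

w = np.zeros(3)
learning_rate, batch_size, max_epochs, patience = 0.05, 16, 200, 5
best_val, wait = np.inf, 0

for epoch in range(max_epochs):
    idx = rng.permutation(len(X_train))      # shuffle once per epoch
    for start in range(0, len(X_train), batch_size):
        b = idx[start:start + batch_size]    # one mini-batch
        grad = 2 * X_train[b].T @ (X_train[b] @ w - y_train[b]) / len(b)
        w -= learning_rate * grad            # gradient-descent weight update
    val_loss = np.mean((X_val @ w - y_val) ** 2)
    if val_loss < best_val - 1e-6:
        best_val, wait = val_loss, 0
    else:
        wait += 1
        if wait >= patience:                 # early stopping
            break
print(epoch, w)
```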
3. Machine Learning Techniques and Algorithms (机器学习技术与算法)
•Decision Tree - 决策树: 1) Decision Tree - 决策树; 2) Node - 节点; 3) Root Node - 根节点; 4) Leaf Node - 叶节点; 5) Internal Node - 内部节点; 6) Splitting Criterion - 分裂准则; 7) Gini Impurity - 基尼不纯度; 8) Entropy - 熵; 9) Information Gain - 信息增益; 10) Gain Ratio - 增益率; 11) Pruning - 剪枝; 12) Recursive Partitioning - 递归分割; 13) CART (Classification and Regression Trees) - 分类回归树; 14) ID3 (Iterative Dichotomiser 3) - 迭代二叉树3; 15) C4.5 (successor of ID3) - C4.5(ID3的后继者); 16) C5.0 (successor of C4.5) - C5.0(C4.5的后继者); 17) Split Point - 分裂点; 18) Decision Boundary - 决策边界; 19) Pruned Tree - 剪枝后的树; 20) Decision Tree Ensemble - 决策树集成
•Random Forest - 随机森林: 1) Random Forest - 随机森林; 2) Ensemble Learning - 集成学习; 3) Bootstrap Sampling - 自助采样; 4) Bagging (Bootstrap Aggregating) - 装袋法; 5) Out-of-Bag (OOB) Error - 袋外误差; 6) Feature Subset - 特征子集; 7) Decision Tree - 决策树; 8) Base Estimator - 基础估计器; 9) Tree Depth - 树深度; 10) Randomization - 随机化; 11) Majority Voting - 多数投票; 12) Feature Importance - 特征重要性; 13) OOB Score - 袋外得分; 14) Forest Size - 森林大小; 15) Max Features - 最大特征数; 16) Min Samples Split - 最小分裂样本数; 17) Min Samples Leaf - 最小叶节点样本数; 18) Gini Impurity - 基尼不纯度; 19) Entropy - 熵; 20) Variable Importance - 变量重要性
•Support Vector Machine (SVM) - 支持向量机: 1) Support Vector Machine (SVM) - 支持向量机; 2) Hyperplane - 超平面; 3) Kernel Trick - 核技巧; 4) Kernel Function - 核函数; 5) Margin - 间隔; 6) Support Vectors - 支持向量; 7) Decision Boundary - 决策边界; 8) Maximum Margin Classifier - 最大间隔分类器; 9) Soft Margin Classifier - 软间隔分类器; 10) C Parameter - C参数; 11) Radial Basis Function (RBF) Kernel - 径向基函数核; 12) Polynomial Kernel - 多项式核; 13) Linear Kernel - 线性核; 14) Quadratic Kernel - 二次核; 15) Gaussian Kernel - 高斯核; 16) Regularization - 正则化; 17) Dual Problem - 对偶问题; 18) Primal Problem - 原始问题; 19) Kernelized SVM - 核化支持向量机; 20) Multiclass SVM - 多类支持向量机
•K-Nearest Neighbors (KNN) - K-最近邻: 1) K-Nearest Neighbors (KNN) - K-最近邻; 2) Nearest Neighbor - 最近邻; 3) Distance Metric - 距离度量; 4) Euclidean Distance - 欧氏距离; 5) Manhattan Distance - 曼哈顿距离; 6) Minkowski Distance - 闵可夫斯基距离; 7) Cosine Similarity - 余弦相似度; 8) K Value - K值; 9) Majority Voting - 多数投票; 10) Weighted KNN - 加权KNN; 11) Radius Neighbors - 半径邻居; 12) Ball Tree - 球树; 13) KD Tree - KD树; 14) Locality-Sensitive Hashing (LSH) - 局部敏感哈希; 15) Curse of Dimensionality - 维度灾难; 16) Class Label - 类标签; 17) Training Set - 训练集; 18) Test Set - 测试集; 19) Validation Set - 验证集; 20) Cross-Validation - 交叉验证
•Naive Bayes - 朴素贝叶斯: 1) Naive Bayes - 朴素贝叶斯; 2) Bayes' Theorem - 贝叶斯定理; 3) Prior Probability - 先验概率; 4) Posterior Probability - 后验概率; 5) Likelihood - 似然; 6) Class Conditional Probability - 类条件概率; 7) Feature Independence Assumption - 特征独立假设; 8) Multinomial Naive Bayes - 多项式朴素贝叶斯; 9) Gaussian Naive Bayes - 高斯朴素贝叶斯; 10) Bernoulli Naive Bayes - 伯努利朴素贝叶斯; 11) Laplace Smoothing - 拉普拉斯平滑; 12) Add-One Smoothing - 加一平滑; 13) Maximum A Posteriori (MAP) - 最大后验概率; 14) Maximum Likelihood Estimation (MLE) - 最大似然估计; 15) Classification - 分类; 16) Feature Vectors - 特征向量; 17) Training Set - 训练集; 18) Test Set - 测试集; 19) Class Label - 类标签; 20) Confusion Matrix - 混淆矩阵
•Clustering - 聚类: 1) Clustering - 聚类; 2) Centroid - 质心; 3) Cluster Analysis - 聚类分析; 4) Partitioning Clustering - 划分式聚类; 5) Hierarchical Clustering - 层次聚类; 6) Density-Based Clustering - 基于密度的聚类; 7) K-Means Clustering - K均值聚类; 8) K-Medoids Clustering - K中心点聚类; 9) DBSCAN (Density-Based Spatial Clustering of Applications with Noise) - 基于密度的空间聚类算法; 10) Agglomerative Clustering - 聚合式聚类; 11) Dendrogram - 系统树图; 12) Silhouette Score - 轮廓系数; 13) Elbow Method - 肘部法则; 14) Clustering Validation - 聚类验证; 15) Intra-cluster Distance - 类内距离; 16) Inter-cluster Distance - 类间距离; 17) Cluster Cohesion - 类内连贯性; 18) Cluster Separation - 类间分离度; 19) Cluster Assignment - 聚类分配; 20) Cluster Label - 聚类标签
•K-Means - K-均值 (illustrated in the sketch below): 1) K-Means - K-均值; 2) Centroid - 质心; 3) Cluster - 聚类; 4) Cluster Center - 聚类中心; 5) Cluster Assignment - 聚类分配; 6) Cluster Analysis - 聚类分析; 7) K Value - K值; 8) Elbow Method - 肘部法则; 9) Inertia - 惯性; 10) Silhouette Score - 轮廓系数; 11) Convergence - 收敛; 12) Initialization - 初始化; 13) Euclidean Distance - 欧氏距离; 14) Manhattan Distance - 曼哈顿距离; 15) Distance Metric - 距离度量; 16) Cluster Radius - 聚类半径; 17) Within-Cluster Variation - 类内变异; 18) Cluster Quality - 聚类质量; 19) Clustering Algorithm - 聚类算法; 20) Clustering Validation - 聚类验证
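As an illustration of the K-Means vocabulary above (centroid, cluster assignment, convergence, inertia), here is a minimal NumPy implementation run on synthetic two-cluster data; it is a sketch, not a production algorithm:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain K-Means: assign points to the nearest centroid, then recompute centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]   # initialization from data points
    for _ in range(n_iter):
        # cluster assignment: index of the nearest centroid (Euclidean distance)
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break                                              # convergence
        centroids = new_centroids
    inertia = ((X - centroids[labels]) ** 2).sum()             # within-cluster variation
    return labels, centroids, inertia

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
labels, centroids, inertia = kmeans(X, k=2)
print(centroids, round(inertia, 2))
```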
•Dimensionality Reduction - 降维: 1) Dimensionality Reduction - 降维; 2) Feature Extraction - 特征提取; 3) Feature Selection - 特征选择; 4) Principal Component Analysis (PCA) - 主成分分析; 5) Singular Value Decomposition (SVD) - 奇异值分解; 6) Linear Discriminant Analysis (LDA) - 线性判别分析; 7) t-Distributed Stochastic Neighbor Embedding (t-SNE) - t-分布随机邻域嵌入; 8) Autoencoder - 自编码器; 9) Manifold Learning - 流形学习; 10) Locally Linear Embedding (LLE) - 局部线性嵌入; 11) Isomap - 等度量映射; 12) Uniform Manifold Approximation and Projection (UMAP) - 均匀流形逼近与投影; 13) Kernel PCA - 核主成分分析; 14) Non-negative Matrix Factorization (NMF) - 非负矩阵分解; 15) Independent Component Analysis (ICA) - 独立成分分析; 16) Variational Autoencoder (VAE) - 变分自编码器; 17) Sparse Coding - 稀疏编码; 18) Random Projection - 随机投影; 19) Neighborhood Preserving Embedding (NPE) - 保持邻域结构的嵌入; 20) Curvilinear Component Analysis (CCA) - 曲线成分分析
•Principal Component Analysis (PCA) - 主成分分析 (illustrated in the sketch below): 1) Principal Component Analysis (PCA) - 主成分分析; 2) Eigenvector - 特征向量; 3) Eigenvalue - 特征值; 4) Covariance Matrix - 协方差矩阵。
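The PCA entries that close the glossary (eigenvector, eigenvalue, covariance matrix) can be demonstrated in a few lines of NumPy; the data matrix below is synthetic and purely illustrative:

```python
import numpy as np

# Illustrative PCA: centre the data, form the covariance matrix, take its
# eigenvectors (principal components) ordered by eigenvalue, and project.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) @ np.array([[2.0, 0.0, 0.0],
                                          [0.8, 1.0, 0.0],
                                          [0.1, 0.1, 0.2]])

X_centered = X - X.mean(axis=0)
cov = np.cov(X_centered, rowvar=False)            # covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues / eigenvectors
order = np.argsort(eigvals)[::-1]                 # sort by explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 2
X_reduced = X_centered @ eigvecs[:, :k]           # project onto the top-k components
explained = eigvals[:k] / eigvals.sum()
print(X_reduced.shape, explained)
```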
Chapter 3: Fuzzy Cognitive Maps

3.1 Cognitive Maps
Causal knowledge typically involves many interacting entities and the relationships among them, and the lack of powerful analytical tools has made such knowledge difficult to handle. In this situation, other techniques, including qualitative reasoning, have been applied to the processing of causal knowledge. The cognitive map is one such qualitative reasoning technique.
Cognitive maps are an emerging research area. As a form of computational intelligence, they provide an effective soft-computing tool for supporting adaptive behaviour based on prior knowledge, and research on them draws on many disciplines, including fuzzy mathematics, fuzzy inference, uncertainty theory, and neural networks. Their distinctive strengths are that they can exploit a system's prior knowledge, that the sub-maps of a complex system combine by simple addition, and that they can represent dynamic causal systems with feedback, which are difficult to express with tree structures, Bayesian networks, or Markov models. A cognitive map also gives an easy bird's-eye view of how the entities in a system interact and which other entities each of them is causally related to.
A cognitive map is normally composed of concepts and the relations between concepts. Concepts (drawn as nodes) can represent the system's actions, causes, effects, goals, feelings, inclinations, and trends; they reflect the system's attributes, performance, and qualities. Relations between concepts express causal relationships and are drawn as directed arcs, with the arrowhead indicating the direction of the causal link.
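To make this description concrete, the sketch below encodes a small cognitive map as a signed weight matrix over concepts and iterates a commonly used fuzzy-cognitive-map update (squashing the weighted sum of incoming influences with a sigmoid). The concept names, weights, and the exact update rule are illustrative assumptions, not taken from this chapter:

```python
import numpy as np

# Hypothetical 4-concept map: weights[i, j] is the causal influence of concept i
# on concept j; positive = reinforcing, negative = inhibiting, 0 = no relation.
concepts = ["rainfall", "soil moisture", "irrigation volume", "crop stress"]
weights = np.array([
    [0.0,  0.7, -0.6,  0.0],
    [0.0,  0.0, -0.5, -0.8],
    [0.0,  0.6,  0.0, -0.4],
    [0.0,  0.0,  0.5,  0.0],
])

def step(state, W, f=lambda x: 1.0 / (1.0 + np.exp(-x))):
    # One common FCM iteration: each concept takes the squashed weighted sum of
    # the concepts pointing at it, plus its own previous activation.
    return f(state + state @ W)

state = np.array([0.8, 0.3, 0.2, 0.1])   # initial concept activations in [0, 1]
for _ in range(20):                      # iterate until the map settles
    state = step(state, weights)
print(dict(zip(concepts, state.round(3))))
```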
3.2 A Brief History of Cognitive Maps
The cognitive map was first proposed by Tolman in 1948 in "Cognitive Maps in Rats and Men". Its original purpose was to provide a model for psychology; cognitive maps were later applied in many other directions and fields. Cognitive maps came to be described as directed graphs, that is, sets of nodes connected by arcs, although different researchers attach different meanings to the arcs and nodes. In 1955, Kelly, building on personal construct theory, proposed cognitive maps in which the relations between concepts are three-valued: "+" and "-" mark causal influences of opposite direction between concepts, and "0" marks the absence of a causal relation. The cognitive maps that Axelrod presented in 1976 in Structure of Decision: The Cognitive Maps of Political Elites come closer to a dynamic system than Kelly's.
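For illustration only, Kelly's three-valued scheme can be written down as a matrix whose entries are +1, -1, or 0; the concepts and signs below are invented:

```python
# Hypothetical three-concept map in Kelly's three-valued style: entry [i][j] is
# +1 for a positive causal link from concept i to concept j, -1 for a negative
# link, and 0 for no causal relation at all.
concepts = ["advertising", "sales", "inventory"]
relation = [
    [0, +1,  0],   # advertising increases sales
    [0,  0, -1],   # sales draw inventory down
    [0,  0,  0],   # inventory has no modelled effect here
]
for i, src in enumerate(concepts):
    for j, dst in enumerate(concepts):
        if relation[i][j] != 0:
            sign = "+" if relation[i][j] > 0 else "-"
            print(f"{src} --{sign}--> {dst}")
```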
Translation of the ERDAS Remote Sensing Interface

一、主页(一)Information1、ContentsContentsRetrieverGeocoder2、MetadataView/Edit Image Metadata View/Edit Point Cloud Metadata View/Edit Vector Metadata View/Edit Annotation Metadata View/Edit NITF MetadataView/Edit IMAGINE HFAEdit Image Metadata3、SelectSelectSelect by BoxSelect by LineSelect by EllipseSelect by PolygonFollow HyperlinksSelector PropertiesPick Properties4、InquireInquire BoxInquire ShapeInquire Color5、MeasureMeasure(二)Edit1、Cut2、Copy3、Paste4、Delete5、Undo6、Paste From Selected Object (三)Extent1、Fit to Frame2、ResetReset (一)信息内容:内容检索地理编码元数据:查看/编辑影像元数据查看/编辑点云元数据查看/编辑矢量元数据查看/编辑注记元数据查看NITF元数据查看IMAGINE HFA文件内容元数据编辑选择:选择通过拉框选择通过画线选择通过画椭圆选择通过画多边形选择跟踪超链接选择器属性采集器属性查询:查询框查询光标形状查询光标颜色测量:测量(二)编辑剪切复制粘贴删除撤销由选中内容粘贴(三)内容全景显示重置:重置(七)Roam1、HorizontalHorizontalVerticalUser-Defined2、Speed Down3、Speed4、Speed UpSpeed UpSpeed Reset5、Go to Start6、Step Backwards7、Reverse8、Stop9、Start/Pause10、Step Forwards11、Go to End12、Snail TrailSnail TrailMerge Snail Trails13、Roam Properties(七)漫游水平:水平垂直自定义减速速度加速:加速重置速度从头开始快退倒退停止开始/暂停快进到最后追踪:追踪合并追踪漫游属性二、Manage Data管理数据(一)Catalog1、Hexagon ContentHexagon ContentImage Catalog(二)Conversion1、Coordinate Calculator2、Import Data3、Export Data4、GeoMedia ToolsShapefiles to WarehouseWarehouse to ShapefilesGeoMedia Utilities5、Pixels To ASC||6、ASC|| to Pixels7、Graphical Importer(一)目录海克斯康内容:海克斯康内容影像目录(二)转换坐标计算器数据导入数据导出空间数据仓库工具:Shp转仓库仓库转Shp空间数据仓库工具栅格转文本文本转栅格GeoRaster导入器8、GeoRaster Manager9、Imagizer Data Prep(三)VectorizeRaster to Shape to Shape to Annotation (四)Rasterize1、Vector to RasterVector to RasterAnnotation to Raster(五)Image1、Edit Image Metadata2、Pyramids &StatisticsCompute Pyramids and Statistics Create ThumbnailsProcess Footprints and RSETS3、Compare Images4、Create New Image5、Create ECW Transparency(六)NITF/NSIF1、NITFView NITF MetadataExtract Shape LASDPPDB WorkstationMake RPF TOCCIB JPEG 2000 ExporterRPF Frame SearchMake ECRG/ECIB TOCECRG/ECIB Frame Search(七)Office Tools1、Send to PowerPoint2、Send to Word3、Send to GeoPDF4、Send to JPEG GeoRaster管理器IMAGIZER数据预处理(三)矢量化栅格转shp:栅格转shp栅格转注记(四)栅格化矢量转栅格:矢量转栅格注记转栅格(五)影像编辑元数据金字塔和统计:计算金字塔和统计创建缩略图处理范围线和金字塔影像比较创建新影像创建ECW透明度(六)NITF/NSIFNITF:查看NITF元数据提取Shp文件提取LAS文件合并数字式目标定位数据库工作站生成影像目录CIB JPEG 2000输出RPF帧搜索制作ECRG/ECIB目录表ECRG/ECIB帧搜索(七)Office工具发送到PPT发送到Word发送到GeoPDF发送到JPEG三、Raster栅格(一)Resolution (一)分辨率·Two Layer Union Operators Zonal AttributesMatrix UnionSummary Report of Matrix Overlay by Min or Max Index by Weighted Sum (七)Scientific1、FunctionsTwo Image Functions Single Image Functions2、Fourier Analysis Fourier TransformFourier Transform Editor Inverse Fourier Transform Fourier Magnitude ·双层联合计算区域分析矩阵分析归纳分析叠加分析加权分析(七)科学的函数分析:两个影像函数单个影像函数傅里叶分析:傅里叶变换傅里叶变换编辑傅里叶逆变换傅里叶幅值计算四、Vector矢量(一)Manage1、Copy Vector Layer2、Rename Vector Layer3、Delete Vector Layer4、Buffer Analysis5、Attribute to Annotation(二)ER Mapper Vector to Shape 1、Reproject Shape Shape Elevation (三)Raster To VectorRaster to Shapefile (一)管理复制矢量重命名矢量删除矢量缓冲区分析属性转注记ER映射矢量转Shp Shape重投影Shp裁切高程重投影(二)栅格转矢量栅格转Shp五、Terrain地形(一)Manage (一)管理六、Toolbox工具箱(一)Common1、IMAGINE Photogrammetry2、Image Equalizer3、Spatial Model Editor Spatial Model EditorLaunch Spatial Model4、Model MakerModel MakerModel Librarian5、MosaicMosaicProMosaicPro from 2D View Mosaic ExpressUnchip NITF6、AutoSync Workstation AutoSync Workstation Georeferencing wizardEdge Match WizardOpen AutoSync Project7、Stereo AnalystStereo AnalystAuto-Texturize from Block Texel MapperExport 3D shape KML Extended Features to Ground 8、MapsMap Series ToolMap Database ToolEdit Composition Paths9、VirtualGISVirtual World EditorCreate MovieRecord Flight Path with GPS 
Create TIN Mesh (一)通用图像摄影测量影像匀光器空间模型编辑器:空间模型编辑器发射空间模型空间建模:空间建模模型库管理影像镶嵌:启动专业镶嵌镶嵌视窗显示影像镶嵌快车合并自动配准:自动配准地理参考向导边缘匹配向导打开自动配准工程立体分析:立体分析自动纹理纹理编辑器输出Shp为KML构建地面实体地图工具:图幅地图地图数据库工具修改制图文件路径虚拟GIS:虚拟世界编辑器录像录制通过GPS点定义飞行路径建立不规则三角网七、Help帮助(一)Reference Library1、Help2、About IMAGINE3、Reference booksHexGeoWiki4、WorkflowsCommon WorkflowsSpatial Modeler WorkflowsClassification WorkflowsPhotogrammetry WorkflowsPoint Cloud WorkflowsZonal Change WorkflowsRectification WorkflowsMap Making WorkflowsMosaic WorkflowsVector WorkflowsModel Maker WorkflowsNITF Workflows5、User GuidesAAIC User GuideAutonomous Spectral Image Processing User GuideAutoSync User GuideDeltaCue User GuideHyperspectral User GuideIMAGINE Objective User GuideIMAGIZER Data Prep User GuideIMAGIZER Viewer User Guide Photogrammetry Suite Contents Operational Radar User GuideRader InterferometryStereo Analyst User GuideSubpixel Classifier User GuideVirtual GIS User GuideInstallation and Configuration Guide6、Spatial ModelingSpatial Model EditorModel Maker(Legacy) (一)相关阅览帮助关于IMAGINE参考书:希格维基工作流:常见工作流空间建模器工作流分类工作流摄影测量工作流点云工作流分区变化工作流整流工作流地图制图工作流镶嵌工作流矢量工作流模型制作工作流NITF工作流使用指南:AAIC使用指南自主光谱图像处理用户指南自动配准使用指南变化检测使用指南高光谱使用指南IMAGINE面向对象使用指南IMAGIZER数据准备使用指南IMAGIZER视窗使用指南摄影测量套件目录操作雷达用户指南雷达干涉立体分析使用指南子像元分类使用指南虚拟GIS使用指南安装与配置指南空间建模:空间模型编辑器模型制作者(Legacy)Spatial Modeler Language(Legacy)Graphical Models Reference Guide7、Language ReferenceERDAS Macro Language8、Release NotesERDAS IMAGINE Issues ResolvedAutonomous Spectral Image ProcessingRelease Notes9、ERDAS IMAGINE Release Notes(二)Search Commands1、Search2、Search Box(三)Page1、Previous2、Next空间建模语言(Legacy)图估模型的参考指南建模和定制:ERDAS宏语言发布说明:ERDAS IMAGINE问题解决自主光谱图像处理发布说明ERDAS IMAGINE发行说明(二)搜索命令搜索搜索框(三)页码向前向后八、Multispectral(一)Enhancement1、Adjust RadiometryGeneral ContrastBrightness/ContrastPhotography EnhancementsPiecewise ContrastBreakpointsLoad BreakpointsSave BreakpointsData Scaling2、Discrete DRADiscrete DRADRA Properties(二)Brightness Contrast1、Contrast Down/Up2、Brightness Down/Up(三)Sharpness1、Sharpness Down/Up2、FilteringConvolution FilteringStatistical FilteringReset Convolution(四)Bands1、Sensor Types2、Common Band Combinations (五)View1、Set Resampling Method2、Pixel Transparency(六)Utilities1、Subset & ChipCreate Subset ImageNITF ChipMaskDice ImageImage Slicer2、Spectral Pro Pro Pro Pro Features3、Pyramids & Statistics Compute Pyramids && Statistics Compute Statistics on Window Generate RSETs(七)Transform & Orthocorrect 1、Transform & OrthoOrtho Using Existing ModelOrtho With Model Selection Transform Using Existing Model Create Affine CalibrationPerform Affine Resample Resample Pixel Size2、Control Points3、Single Point4、Check Accuracy(八)Edit1、Fill2、Offset3、Interpolate九、Drawing(一)Edit1、Cut2、Copy3、Paste4、Delete5、Undo6、Paste from Selected Object (二)Insert Geometry1、Point2、Insert Tic3、Arc4、Create Freehand Polyline5、Rectangle6、Polygon7、Ellipse8、Create Concentric Rings9、Text10、Place GeoPoint11、Place GeoPoint Properties12、GrowGrowGrowing Properties13、EasyTrace14、Lock15、Layer Creation Options (三)Modify1、Enable Editing2、SelectSelectSelect by BoxSelect by LineSelect by EllipseSelect by PolygonFollow HyperlinksSelector PropertiesPick Properties3、LineLineReshapeReplace a portion of a lineSplineDensifyGeneralizeJoinSplit4、AreaAreaReshapeSplit polygon with PolylineReplace a portion of a polygon Append to existing polygonInvert Region5、Vector Options(四)Insert Map Element1、Map GridMap GridUTM GridGeographic GridMGRS Grid Zone ElementDeconflict Grid TicmarksGrid Tic Modifier toolGrid Preferences2、Scale Bar3、Legend4、North ArrowNorth ArrowDefault North Arrow 
Style5、Dynamic ElementsDynamic ElementsDynamic Text EditorConvert to Text(五)Font/Size1、Font Face2、Font/Symbol Unit Type3、Font/Symbol Size4、Font/Symbol Units5、Bold6、Italic7、Underline8、Colors(六)Locking1、Lock Annotation OrientationLock Annotation OrientationReset Annotation Orientation to Screen Reset Annotation Orientation to Map Lock Annotation Orientation Set Default (七)Styles1、Object Style GalleryAdd to GalleryCustomize Styles2、Customize Styles(八)Shape1、Area Fill2、Line Color3、Line StyleLine Thickness1 pt2 pt4 pt6 ptLine PatternSolid LineDotted LineDashed LineDashed Dotted Line OutlineNo OutlineArrowsNo ArrowStart ArrowEnd ArrowBoth Ends(九)Arrange1、ArrangeOrder ObjectsBring to FrontBring ForwardSend To BackSend BackwardGroup ObjectsGroupUngroupPosition ObjectsRotate North十、Format(一)Insert Geometry1、point2、Insert Tic3、Arc4、Create Freehand Polyline5、Rectangle6、Polygon7、Ellipse8、Create Concentric Rings9、Text10、Place GeoPoint11、Place GeoPoint Properties12、GrowGrowGrowing Properties13、EasyTrace14、Lock15、Layer Creation Options (二)Text1、Text GalleryAdd to Gallery(三)Font1、Font Face2、Font Unit Type3、Font Size4、Font Units5、Bold6、Italic7、Underline8、Colors(四)Symbol1、Symbol Size2、Symbol Units3、Symbol Unit Type(五)Locking1、Lock Annotation OrientationLock Annotation OrientationReset Annotation Orientation to Screen Reset Annotation Orientation to Map Lock Annotation Orientation Set Default (六)Styles1、Object Style GalleryAdd to GalleryCustomize Styles2、Customize Styles(七)Shape1、Area Fill2、Line Color3、Line StyleLine Thickness1 pt2 pt4 pt6 ptLine PatternSolid LineDotted LineDashed LineDashed Dotted Line OutlineNo OutlineArrowsNo ArrowStart ArrowEnd ArrowBoth Ends(八)Arrange1、Bring to Front Bring to FrontBring Forward2、Send to Back Send to BackSend Backward3、GroupGroupUngroup4、AlignAlign Horizontal Left Align Horizontal Center Align Horizontal Right Align Vertically Top Align Vertically Center Align Vertically Bottom Distribute Horizontally Distribute Vertically Alignment..5、FlipFlip VerticallyFlip Horizontally6、Rotate North(十一)Table(一)View1、Show AttributesShow AttributesFrom View Attribute2、Switch Table View(二)Drive1、Drive Viewer to first selected item2、Drive to previous selected feature3、Drive to next selected feature4、Drive Viewer to last selected item5、Zoom to Item(三)Column1、Unselect Columns2、Select All Columns3、Invert Column Selection4、Add Class Name5、Add Area(四)Row1、Unselect Rows2、Select All Rows3、Invert Row Selection4、Criteria(五)Query1、Merge2、Colors3、Column Properties(六)Edit1、Edit Column Next2、Edit Row Next。