Time Sensitive Sequential Myopic Information Gathering
Chiu-Che Tseng
Department of Computer Science and Engineering
University of Texas at Arlington
Arlington, Texas 76011
tseng@

Piotr J. Gmytrasiewicz
Department of Computer Science and Engineering
University of Texas at Arlington
Arlington, Texas 76011
piotr@

Abstract
DIGS (Decision Support Information Gathering System) uses the value of information to guide the information gathering process and uses the gathered information to provide decision recommendations to the human user. DIGS uses an influence diagram as a modeling tool. In this paper, we create a model that represents the investment scenario of a novice stock investor. Using the sequential myopic information gathering technique, DIGS generates a sequence of information gathering actions. The actions depend on each other in that the action DIGS executes at time t_1 is based on the results of the previous action at time t_0. DIGS also employs a stopping mechanism for the information gathering actions based on the information value and time constraints. Thus, DIGS can be used as an anytime system. Compared to a pre-generated sequence of actions, our technique has the flexibility to react to the gathered information and to use it to guide the subsequent gathering actions. Therefore, our system can adapt to the newly acquired information and avoid the computational complexity of planning the series of actions in advance.

1. Introduction
An influence diagram is a compact representation emphasizing the qualitative features of a decision problem [13]. We describe a Decision Support Information Gathering System (DIGS) that uses influence diagrams as models of the decision-making situations of the user, and suggests to the user how best to retrieve information related to his or her decision situations. Thus, we assume that a user is engaged in making a decision, and that there are many alternative information sources that can be used to aid this process. Since the information may not be available for free, and it may take substantial time to deliver the information to the user, our system evaluates beforehand whether consulting an information source, such as a WWW page, is worthwhile. Furthermore, in order to take advantage of multiple information sources, we develop an information gathering strategy that executes a sequence of information gathering actions to better aid the user's decision-making process.

Recent approaches to similar problems include Matheson (1990), Jensen and Dittmer (1997), and Jensen and Liang (1992); these approaches concentrate on obtaining the most valuable test to perform. Horvitz et al. (1993) present an approach that deals with time-critical information display. Zilberstein and Lesser (1996) use a nonmyopic approach for gathering small amounts of information. In this paper, we describe a method that combines the temporal cost, an important factor in time-critical domains such as the stock market, with information value theory to guide the information gathering process. Our paper proposes a sequential myopic strategy that incorporates the temporal cost of information access delay using an influence diagram. We use this strategy to avoid the computational infeasibility of the nonmyopic approach and the inaccuracy of decisions based on the query result from a single information source.
We apply the sequential myopic strategy to the information gathering process and demonstrate the idea using a simple stock investor application.

To accomplish its task, DIGS has to consider the following factors.

Information costs. Some information providers may charge for the information provided, either per line or per tuple. The system has to calculate the cost of the information gathering and keep it within the user's budget. Moreover, the system needs the cost of information in order to determine whether it is worthwhile to continue the information gathering process.

Time constraint. In realistic situations, say financial or defense-related, the quality of decisions deteriorates with time delay. Once an opportunity is missed, one may not be able to take advantage of the situation again. The system has to be able to perform its duty under these time constraints, which means that it has to gather the information and present it to the human user before it is too late.

Quality of the information. Information providers can seldom provide perfect information regarding the situation at hand. The information often contains some degree of uncertainty and inaccuracy. The system has to be able to assess and reason with the reliability of the information it can expect from a given source.

DIGS uses the notion of value of information, as defined in decision theory, to guide the information gathering process, and is able to provide the user with a decision suggestion. The system's five major modules are shown in Figure 1.

Knowledge base - contains the information about the information sources.
User model - stores a library of influence diagrams that represent the human decision models.
Strategy module - produces the sequential information gathering strategy.
Executor - performs the actual information gathering actions.
Interface - provides communication with the human user.

Figure 1. Decision Support Information Gathering System

In the remainder of this paper, Section 2 provides background on influence diagrams; Section 3 addresses the basic value of information theory and the time-sensitive myopic strategy; Section 4 describes each module of the system; Section 5 illustrates the implementation of time-sensitive sequential information gathering using a novice stock investor example; and Section 6 contains conclusions and some future enhancements.

2. Influence Diagrams
The Decision Support Information Gathering System (DIGS) uses influence diagrams as models of the decision-making situation of the user. An influence diagram can be viewed as a Bayesian network with decision and utility nodes, where the value of each decision variable is not determined probabilistically, but rather is computed to meet some optimization objective.

Influence diagrams are extensions of Bayesian networks, introduced by Howard and Matheson [13], with three types of nodes: chance nodes, decision nodes, and utility nodes. Chance nodes, usually shown as ovals, represent random variables in the environment. Decision nodes, usually shown as squares, represent the choices available to the decision-maker. Utility nodes, shown as diamond or flattened hexagon shapes, represent the usefulness of the consequences of the decisions, measured on a numerical scale called utility. The arcs in the graph have different meanings based on their destinations. The arcs that point to utility or chance nodes are dependency arcs that represent probabilistic or functional dependence. The arcs that point to decision nodes are informational arcs; they imply that the parent nodes will be known to the decision-maker before the decision is made.
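To make the three node types concrete, the sketch below shows one way a toy influence diagram of this kind could be encoded in plain Python. The node names, state spaces, and numbers are illustrative only; they are not taken from the paper's Figure 2 model.

```python
# A minimal, illustrative encoding of an influence diagram as plain Python data.
# Chance, decision, and utility nodes mirror the taxonomy described in Section 2;
# all names and numbers here are hypothetical.

influence_diagram = {
    "chance": {
        # Chance node: prior belief over the future stock trend.
        "FutureTrend": {"values": ("up", "down"), "prior": {"up": 0.5, "down": 0.5}},
    },
    "decision": {
        # Decision node: the investor's action.
        "Invest": {"options": ("buy", "hold")},
    },
    "utility": {
        # Utility node: payoff as a function of its parents (decision and chance node).
        "Payoff": {
            "parents": ("Invest", "FutureTrend"),
            "table": {
                ("buy", "up"): 100.0, ("buy", "down"): -80.0,
                ("hold", "up"): 0.0, ("hold", "down"): 0.0,
            },
        },
    },
    # Informational arcs: for each decision, the nodes observed before it is made.
    "informational_arcs": {"Invest": []},
}
```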
There are several methods for determining the optimal decision policy from an influence diagram. One, by Howard and Matheson [13], consists of converting the influence diagram to a decision tree and solving for the optimal policy within the tree, using the exp-max labeling process. Another, by Shachter [14], consists of eliminating nodes from the diagram through a series of value-preserving transformations. Each transformation leaves the expected utility intact, and at every step the modified graph is still an influence diagram. Shachter proved that the transformations affect neither the optimal decision policy nor its expected value. Pearl [10] has suggested a hybrid method that uses a branch-and-bound technique to prune the search space of the decision tree converted from the influence diagram.

3. Sequential Myopic Strategy Approach

3.1 Value of Imperfect Information
Before performing an information gathering task, DIGS uses the values of imperfect information to decide which pieces of information are worth retrieving. The system uses an influence diagram, described further in Section 4, to calculate the net value of retrieving imperfect information for each information source in the diagram.

Suppose that the current information available is C, which can be represented as beliefs regarding the values of the different random variables (chance nodes) in the influence diagram. Let Result_i(A) be the i-th possible outcome of the user's action A. The current best action, a, is the one with the highest expected utility computed with information C:

EU(a | C) = max_A Σ_i U(Result_i(A)) P(Result_i(A) | C, Do(A))    (1)

Now consider the situation in which the user can get additional information that will provide some evidence, X_n. The value of the best action a_{X_n} after obtaining the evidence X_n is:

EU(a_{X_n} | C, X_n) = max_A Σ_i U(Result_i(A)) P(Result_i(A) | C, X_n, Do(A))    (2)

But X_n is a random variable whose value is currently unknown, so we must average over all possible values X_n^k that we might discover for X_n, using the system's current belief about its value. The Value of Information (VI) on node X_n is:

VI(X_n) = [ Σ_k P(X_n = X_n^k | C) EU(a_{X_n^k} | C, X_n = X_n^k) ] - EU(a | C)    (3)

The VI defined above is the value of perfect information but, interestingly, it can be made to express the value originating from unreliable information sources (see Section 5).

Another effect that we must consider is the cost. The cost function of the investor domain can be viewed as a combination of two sub-costs. One is the monetary cost, the actual price we have to pay for the information; the other is the opportunity cost, or the temporal cost. The temporal cost is the cost of delaying action in the possibly urgent situation the decision-maker is in. For example, consulting Charles Schwab's SchwabNOW Online Investments costs $6 per report, but getting the report and reasoning about its results takes time during which the investor might be losing money. In Section 3.3, we describe how to calculate the temporal cost in more detail.
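As an illustration of equations (1)-(3), the following sketch evaluates the value of information of a single imperfect report by enumeration, before any costs are subtracted. The prior, utilities, and source reliability P(report | trend) are made-up numbers for the toy model above, not values used in the paper.

```python
# Illustrative evaluation of equations (1)-(3) for a binary FutureTrend variable.
# REPORT_MODEL is a hypothetical reliability model of one information source.

PRIOR = {"up": 0.5, "down": 0.5}                    # current belief C over FutureTrend
UTILITY = {("buy", "up"): 100.0, ("buy", "down"): -80.0,
           ("hold", "up"): 0.0, ("hold", "down"): 0.0}
REPORT_MODEL = {"up": {"bullish": 0.8, "bearish": 0.2},     # P(report | true trend)
                "down": {"bullish": 0.3, "bearish": 0.7}}

def best_eu(belief):
    """Equations (1)/(2): expected utility of the best action under a given belief."""
    return max(sum(belief[s] * UTILITY[(a, s)] for s in belief)
               for a in ("buy", "hold"))

def value_of_information(prior=PRIOR):
    """Equation (3): average the best post-observation EU over all possible reports."""
    vi = -best_eu(prior)
    for report in ("bullish", "bearish"):
        p_report = sum(prior[s] * REPORT_MODEL[s][report] for s in prior)
        posterior = {s: prior[s] * REPORT_MODEL[s][report] / p_report for s in prior}
        vi += p_report * best_eu(posterior)
    return vi

print(round(value_of_information(), 2))   # positive when the report can change the decision
```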
We define the net value of retrieving imperfect information, NVI, to be:

NVI(X_n) = VI(X_n) - C_{X_n}    (4)
C_{X_n} = CM_{X_n} + CT_{X_n}    (5)

where C_{X_n} is the total cost, CM_{X_n} is the monetary cost of source X_n, and CT_{X_n} is the temporal cost of source X_n.

3.2 Sequential Myopic Strategy
In order to produce a sequence of information gathering actions, the system would have to consider all possible ordered sequences of such actions. This, however, is intractable. Therefore, a myopic strategy that gathers one piece of information at a time avoids the burden of the nonmyopic approach. There has been some investigation of the accuracy of the myopic strategy; Kalagnanam and Henrion [7] showed that a myopic policy is optimal when the decision-maker's utility function is linear and the relationship between hypotheses and evidence is deterministic. Gorry [6] demonstrated that the use of a myopic approach does not significantly diminish the diagnostic accuracy of an expert system for congenital heart disease.

For the information gathering task, we use the sequential myopic approach as a way to avoid the computational burden of the nonmyopic approach and to overcome the disadvantage of not gathering enough information before making the decision. We cannot guarantee optimality, since the conditions described by Kalagnanam and Henrion do not hold in our domain; however, we believe that our strategy will be a good approximation to optimality, as in Gorry's congenital heart disease example. An additional advantage is that our design functions as an anytime system that can produce a result for the human user at any point in time.

The information sources are ranked according to the information value of each imperfect source. The top-ranking source is used to direct the information retrieval agents that reside in the executor module of our system to perform the information retrieval task from that source. After the first information gathering action, the system can decide either to continue gathering information from other sources or to stop and give the human user a decision suggestion, given the newly acquired information. In order to define a criterion for the system to stop performing information gathering actions, we define the value of gathering (VG). The value of gathering is the maximum value of information that can be obtained from the yet unexplored information sources, based on all of the information obtained previously. After an information gathering action is completed, the system incorporates the newly acquired information and re-calculates the net value of imperfect information for the remaining information sources:

VG = max_{X_n ∉ E} [ VI(X_n | E) - C_{X_n} ]    (6)

where E is the previously acquired information.

The system will continue to calculate and perform information gathering actions, incorporating the new information into the system to provide the human user with a more refined and accurate decision suggestion. The stopping criterion for the system is VG < 0, or any time the decision-maker requests the current best decision suggestion. Thus, our system can be used as an anytime system, producing a result at any point during its operation.
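The control loop of Section 3.2 can be summarized in a few lines. In the sketch below, vi_of, cost_of, and query stand in for the influence-diagram re-evaluation, the cost model of equations (4)-(5), and the executor's retrieval agents; they are placeholders for illustration, not functions from the actual DIGS implementation.

```python
# Sketch of the sequential myopic gathering loop (Section 3.2, equations (4)-(6)).
# vi_of(source, evidence) -> VI(X_n | E); cost_of(source) -> CM_Xn + CT_Xn;
# query(source) -> the retrieved finding. All three are caller-supplied placeholders.

def gather_sequentially(sources, vi_of, cost_of, query):
    evidence = {}                                   # E: findings obtained so far
    remaining = set(sources)
    while remaining:
        # Net value of imperfect information for each unexplored source, given E.
        nvi = {s: vi_of(s, evidence) - cost_of(s) for s in remaining}
        best = max(nvi, key=nvi.get)
        if nvi[best] < 0:                           # stopping criterion: VG < 0
            break
        evidence[best] = query(best)                # gather, incorporate, then re-rank
        remaining.remove(best)
    return evidence                                 # basis for the decision suggestion
```

Because the user can interrupt the loop at any iteration and ask for the current best decision, the same structure also supports the anytime behavior described above.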
3.3 Temporal Cost
The actual monetary cost of accessing a source of information is easily included in our system. However, representing the temporal cost requires a number of external factors. Horvitz and Barry [2] and Gmytrasiewicz and Durfee [15] explored the idea of using the cost of time, or urgency, in their systems. Their approaches took the cost of time as a discount factor in their calculations, but this is not always appropriate in the financial domain. For our stock investor example, we defined a runtime algorithm that calculates the temporal cost. The algorithm requires constant monitoring of the stock price to provide the data needed for the calculation. Our algorithm only looks back one time period, due to the myopic strategy we use for the system. For a highly uncertain quantity like the stock price, our algorithm provides an approximate estimate of the temporal cost, but we intend to investigate this estimate further to provide more realistic values.

Temporal Cost Algorithm
For all information sources X_n:
    Trend = | stock-price(t_0) - stock-price(t_{-1}) | / (t_0 - t_{-1})
    CT_{X_n} = Σ_i Trend × p(Process-time_{X_n}^i) × Process-time_{X_n}^i

Here, the Trend parameter represents the positive amount of stock price movement within some time period. stock-price(t) is the actual stock price at time t (t_0 represents the current time and t_{-1} the previous time step), and Process-time_{X_n}^i is a possible value of the sum of the information turnaround time and the time needed to include the information from source X_n in the decision-making model. Since Process-time_{X_n} is not constant, due to variable network conditions, the process time is described as a probability distribution over its discretized possible values. For each information source X_n, the probability distribution of Process-time_{X_n} can be obtained from the prior access delay times of that source, which are stored in the knowledge base module. Thus, CT_{X_n} is the expected temporal cost for the information source X_n.
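A direct transcription of the Temporal Cost Algorithm might look as follows; the price samples and the discretized delay distribution are hypothetical examples of what would be read from the price monitor and the knowledge base.

```python
# Sketch of the Temporal Cost Algorithm: CT_Xn = sum_i Trend * p(delay_i) * delay_i,
# where Trend is the absolute price change per unit time over the last period.
# The numbers below are invented for illustration.

def temporal_cost(price_now, price_prev, dt, delay_distribution):
    """Expected temporal cost of consulting one source, given its delay distribution."""
    trend = abs(price_now - price_prev) / dt
    return sum(prob * delay * trend for delay, prob in delay_distribution.items())

# Example: P(Process-time = 30 s) = 0.6, P(60 s) = 0.3, P(120 s) = 0.1 for one source.
print(temporal_cost(price_now=101.5, price_prev=100.0, dt=60.0,
                    delay_distribution={30.0: 0.6, 60.0: 0.3, 120.0: 0.1}))
```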
4. System Modules
The user model of DIGS uses the influence diagram as the representation of the human user's decision model. We built the influence diagrams for the decision-making of a stock investor by consulting a human expert in the field. The model can be stored in a library contained in the knowledge base and reused whenever a similar situation occurs. The model reflects the decision criteria and the attitude toward risk of the stock investor, as in Figure 2. The user model requests the needed information from the knowledge base (CPTs, etc.) and outputs the additional information source query suggestions to the strategy module to form an information gathering plan. Using the information value criteria, the system returns the information from the source(s) to the user to assist the investor in making the investment decisions.

The user model coordinates with the strategy module in order to provide the sequential information gathering plan for the executor to execute. The model includes the chance nodes that represent the results obtained from the investment advisor sources, such as First Call, Zacks Investment Inc., etc., that can be accessed by the system. The model also contains the external variables of the domain, such as the Future stock trend node, which represents the future price movement of the stock. The other part of the model to be elicited was the utility model, which is used to compare possible outcomes as a function of the decisions. The utility was expressed as the monetary gain or loss to the stock investor, and it is determined by the price difference, the buy/sell decision, and the information selection decision.

To represent our model in Howard canonical form [12], we used a group of deterministic nodes, such as Zacks Result, to represent the information query results; each such node is determined by the chance node that models the corresponding source's opinion and by the information selection decision node. The conditional probability tables, each associated with an information query result node, represent the accuracy of each information source. They can be obtained from the historic accuracy data for each source and are stored in the knowledge base module. The conditional probability tables of the Price Diff node (representing the possible values of the price change for the particular stock within one week) can also be obtained from historical data and are stored in the knowledge base module. For the probability table of the Future stock trend node, we assume a uniform probability distribution over its values.

DIGS uses this model to choose among the information sources (e.g., Zacks Investment Inc., etc.). It also returns the retrieved information (the recommendations from the financial experts on the web) to the user in order to assist the user in making the investment decision for a certain stock.

Figure 2. Novice stock investor user model

The strategy module handles the sequential information gathering process and calculates the total cost of the gathering process (temporal cost plus monetary cost). It uses the user model described above and incorporates the newly acquired information into it. The module is also responsible for checking the stopping criterion of the system and directs the executor module in the sequential information gathering.

The knowledge base contains the information about the alternative information sources that are not directly included in the influence diagram. This includes data on the sources' turnaround times, the historic prediction accuracy of each source, and their availability. It also lists the type of information the sources provide and the information needed to interact with the sources.

The executor module contains the retrieval agents that are used by the DIGS system to get the information from the sources. These agents are responsible for generating the visual reports from their information gathering results. The executor module then sends the reports generated by the retrieval agents to the interface module.

The interface module handles the interaction between the human user and the system. Further, the module displays the information that the executor module gathers and the decision suggestion from the system.
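The per-source data that the strategy module needs from the knowledge base could be organized roughly as shown below; the field names and values are invented for illustration and are not the paper's actual schema.

```python
# Hypothetical knowledge base record for one information source. The fields mirror
# what Section 4 says is stored: monetary cost, access delay history, historical
# accuracy, availability, and how to interact with the source.

SOURCE_CATALOG = {
    "Zacks": {
        "monetary_cost": 0.0,                             # CM_Xn, dollars per query
        "delay_distribution": {30.0: 0.7, 90.0: 0.3},     # P(Process-time), seconds
        "accuracy": {"up": {"bullish": 0.75, "bearish": 0.25},   # P(report | true trend)
                     "down": {"bullish": 0.35, "bearish": 0.65}},
        "available": True,
        "query_interface": "zacks-retrieval-agent",        # placeholder agent identifier
    },
}
```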
5. Implementation
We constructed the experimental prototype of the DIGS system for the financial domain using the belief network library Netica API provided by Norsys Inc. and the internet agent building platform LiveAgent Pro from AgentSoft. DIGS returns the buy/sell suggestion to the user based on the query result from the additional information source. The decision on which additional information source to query is provided by the user model. DIGS then either continues to gather more information, based on the method described in the previous sections, or stops the gathering process. Based on the additional information, the system suggests that the user take an appropriate action. In order to account for imperfections of information sources, we represent them as separate nodes in the network, causally connected to the node about which they provide information. The strength of this connection represents the faithfulness of the information source and correlates the actual value of the node with the values reported by the information source. We have included this effect in our implementation, as depicted below, in the domain of stock investor decision-making.

We tested the DIGS system using the user models described in Section 4. Below is a run from the financial domain prototype.

Novice Stock Investor Scenario
An investor is to decide whether to invest in a certain stock. He or she wants to gather valuable (cost-sensitive) and useful information before making such a decision. DIGS's model for the stock investor is responsible for recommending the best information source(s) from which to retrieve information given the current information; the decision suggestion on whether or not to invest in the company's stock is based on the additional information.

In this run, the investor is looking at the stock of IBM. Given this information, DIGS calculated the information values for each of the additional sources (see Table 1) and suggested obtaining the information from CDAInvest (CDA Investnet Inc.). Thus, CDA Investnet will provide the most valuable query result in this situation.

Table 1. The net imperfect information value for each source
Information Sources | Net Value of Imperfect Information
CDA Invest | 138.25
SP | 76.529
FC | -10
Innovest | 54.609
Zacks | 120.04

In this case, the query returned the value "neutral" for CDAInvest's expert opinion of IBM's stock trend. After obtaining the information from CDAInvest, DIGS incorporates that information into the user model (see Figure 3) and calculates the value of information of the remaining information sources (see Table 2). In this case, DIGS performs the information gathering action on the next information source, which is Zacks (Zacks Investment Inc.). DIGS will continue gathering information from the information sources until the stopping criterion is met. At this point, DIGS will return a decision suggestion to the user.

Table 2. The net imperfect information value for the remaining sources
Information Sources | Net Value of Imperfect Information
SP | 2.5847
FC | -10
Innovest | 0
Zacks | 22.771

6. Conclusion and Future Work
We presented our work on the use of myopic sequential information gathering, which uses the net value of imperfect information to guide the information gathering process and uses the previously gathered information to provide decision recommendations to human users. Our technique provides a time-sensitive myopic way to perform a sequence of information gathering actions, and provides an alternative to the traditional myopic analyses for determining the next best action to take. In our future work, we will improve the current model of the temporal cost to represent the situation more accurately. Furthermore, we intend to expand our system to handle multiple stocks and portfolio allocation problems. We will also test DIGS in the stock market in order to evaluate its performance under realistic conditions.

Figure 3. The user model after incorporating the first retrieved information

References
[1] D. Heckerman, E. Horvitz and B. Middleton, "An Approximate Nonmyopic Computation for Value of Information", IEEE Transactions on Pattern Analysis and Machine Intelligence, 1993.
[2] E. Horvitz and M. Barry, "Display of Information for Time-Critical Decision Making", Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann, San Francisco, CA, 1995, pp. 296-305.
[3] F. V. Jensen, An Introduction to Bayesian Networks, Springer-Verlag, New York, NY, 1996.
[4] F. V. Jensen and J. Liang, "drHugin: A system for hypothesis driven myopic data request", Technical Report R-92-2021, Department of Mathematics and Computer Science, Aalborg University, 1992.
[5] F. V. Jensen and J. Liang, "drHugin: A system for value of information in Bayesian networks", Proceedings of the 1994 Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, 1994, pp. 178-183.
[6] G. Gorry and G. Barnett, "Experience with a model of sequential diagnosis", Computers and Biomedical Research, 1968.
[7] J. Kalagnanam and M. Henrion, "A comparison of decision analysis and expert rules for sequential diagnosis", Uncertainty in Artificial Intelligence 4, 1990, pp. 271-281.
[8] J. Grass and S. Zilberstein, "Value-Driven Information Gathering", AAAI-97 Workshop on Building Resource-Bounded Reasoning Systems, 1997.
[9] J. E. Matheson, "Using Influence Diagrams to Value Information and Control", Influence Diagrams, Belief Nets and Decision Analysis, 1990, pp. 25-48.
[10] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Revised Second Printing, Morgan Kaufmann, San Francisco, CA, 1997.
[11] R. A. Howard, "Information value theory", IEEE Transactions on Systems Science and Cybernetics, 1966.
[12] R. A. Howard, "From Influence to Relevance to Knowledge", Influence Diagrams, Belief Nets and Decision Analysis, 1990, pp. 3-23.
[13] R. A. Howard and J. E. Matheson, "Influence Diagrams", The Principles and Applications of Decision Analysis, Vol. II, Strategic Decisions Group, Menlo Park, CA, 1984.
[14] R. D. Shachter, "Evaluating Influence Diagrams", Operations Research, 1987, pp. 871-872.
[15] O. Etzioni, "Moving Up the Information Food Chain: Deploying Softbots on the World Wide Web", AAAI-96 invited talk, MIT Press, Cambridge, MA, 1996.
[16] P. J. Gmytrasiewicz and E. H. Durfee, "Elements of a Utilitarian Theory of Knowledge and Action", IJCAI, 1993, pp. 396-402.
[17] S. J. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, Prentice Hall, Englewood Cliffs, New Jersey, 1995.
[18] S. Zilberstein and V. Lesser, "Intelligent information gathering using decision models", Technical Report 96-35, Computer Science Department, University of Massachusetts, 1996.