Design and Analysis of an MST-Based Topology Control Algorithm


Thesis: The Design and Implementation of a Region-Based Parallel Language


Bradford L. Chamberlain /homes/bradbrad@P.O. Box 95621 Seattle, WA 98145-2621 (206) 324-6336QuickSilver Technology 700 5th Avenue, Suite 5300Seattle, WA 98104(206) 749-1009Research InterestsCompilers, language design, and algorithms, as well as their influence on one another. Parallel computing, with anemphasis in these areas.EducationPh.D., Computer Science and Engineering, University of WashingtonThesis:The Design and Implementation of a Region-Based ParallelLanguageAdvisor:Lawrence SnyderSeptember 2001M.S., Computer Science and Engineering, University of Washington May 1995B.S. (with honors), Computer Science, Stanford University June 1992Awards and HonorsUSENIX Scholar2000 Intel Foundation Graduate Fellowship1997–1998 Cray Undergraduate Fellowship1991–1992 Tau Beta Pi1991 Eagle Scout May 1988Professional ExperienceSenior Software Developer QuickSilver Technology September 2001 – present QuickSilver Technology is a startup company that is developing novel hardware for adaptable embeddedcomputing. I am a member of QST's tools group, which develops assemblers, compilers, and debuggingcapabilities for programming such architectures. My responsibilities have included the design, prototyping, andsimulation of a special-purpose language for our custom hardware nodes. I have also participated in thecompiler's design and implementation. My duties also include the creation of documents and presentations forinternal demos and discussion. The bulk of my code development with QST was done in C++ and Java in anExtreme Programming environment.Research Assistant University of Washington, Deptartment of CS&E, ZPL group September 1992 – 2001 Worked with Lawrence Snyder on ZPL, a novel parallel programming language developed at UW. Since itsinception, I have been a primary contributor both to ZPL's design and its implementation via the ZPL compilerand runtime system. My contributions to ZPL's language design aided in its evolution from modest support ofJacobi-style kernels to its current support for applications utilizing hierarchical and sparse arrays. My workfocused on the concept of the region, a language-based index set useful for specifying parallel array-basedcomputation. For example, I studied the benefits of regions and their role in supporting a syntax-basedperformance model. I also developed language support for regions with replicated, privatized, hierarchical, andsparse dimensions.I served as the chief architect and implementor of ZPL's runtime libraries from their inception in 1993 until thedate of my graduation. I also worked on the ZPL compiler during that time, serving as a primary implementorsince 1997, the year of its initial release. My work in ZPL's library support included development of theIronman communication interface which supports the portable specification of high-performancecommunication. I also developed data structures for storing distributed regions and arrays with dense, sparse,hierarchical, replicated, and privatized dimensions. 
I created periodic public releases of the compiler on the web,and provided user support for researchers who utilized ZPL in their own work.Research Assistant University of Washington, Department of CS&E, GRAIL group June 1994 – May 1996 With Tony DeRose and David Salesin, I developed and implemented a technique for automatically acceleratingthe rendering of complex static scenes using a spatial hierarchy.Research Assistant Microsoft Research, User Interfaces Group, Persona Project June – September 1993 Worked on support for a real-time 3D agent by writing a fast inter-machine communication interface and bylaying groundwork for its speech recognition capabilities.Software Development Engineer Microsoft, Windows NT Services for Macintosh June – October 1992 Developed a Macintosh RPC client to enable automated testing of networking software between Windows NTmachines and Macintoshes.Programmer Los Alamos National Laboratory June – August 1991 Developed library support for rapid development of data-entry forms. Used these forms to improve the userinterface of radar sounding programs.Undergraduate Researcher Utah State University, Department of E&CE June – August 1990 As part of NSF's Research Experience for Undergraduates program, wrote a library for real-time plotting ofatmospheric data during collection using radar soundings of the ionosphere.Engineering Aid David Taylor Research Center, Annapolis MD June – August 1989 Maintained and improved data collection programs used for Thermo-Gravimetric Analysis.Teaching ExperienceDistance Learning Course Designer Univ. of Washington Educational Outreach July 2000 – August 2001 Designed the curriculum for UWEO's first data structures course for distance learning students. Created a dozenself-paced lessons, each with a programming or written assignment, as well as introductory materials and exams.Volunteer Tutor University of Washington, Department of CS&E January 1995 – June 2000 Tutored women and minority CS&E undergraduates in a number of subjects including compilers, datastructures, algorithms, theory, and software engineering.Instructor University of Washington, Department of CS&E Spring and Autumn 1999 For two quarters, taught the department's undergraduate data structures course for non-majors (CSE 373).Prepared and presented three lectures a week, designed written and programming assignments, managed twoTAs, held office and lab hours, designed and graded exams. Received excellent student evaluations.Head Teaching Assistant University of Washington, Department of CS&E Autumn 1996 Served as the head TA for CSE 143, the department's second introductory programming course. Managed fiveTAs, organized weekly staff meetings, created and graded weekly quizzes, taught discussion sections for TAswho were unable to do so, and helped with the grading of exams.Teaching Assistant University of Washington, Department of CS&E Winter 1996 Served as the TA for CSE 341, the department's programming languages course for majors. Taught twodiscussion sections twice a week each, providing further explanation and practice with concepts taught inlectures. Graded assignments, helped create assignments and exams. Languages covered included ML, Scheme,Prolog, and Smalltalk.Peer Tutor Stanford University, Center for Teaching and Learning January 1990 – June 1992 Served as a peer tutor for students requesting help in a number of courses in Computer Science, Physics, andMathematics.PublicationsJournal PublicationsHigh-level Language Support for User-defined Reductions. 
Steven J. Deitz, Bradford L. Chamberlain, andLawrence Snyder, to appear in a special issue of The Journal of Supercomputing, (originally published inProceedings of the LACSI Symposium, October 2001).ZPL: A Machine Independent Programming Language for Parallel Computers. Bradford L. Chamberlain,Sung-Eun Choi, E Christopher Lewis, Calvin Lin, Lawrence Snyder, and W. Derrick Weathersby. IEEETransactions on Software Engineering, 26(3):197–211, March 2000.The Case for High-Level Parallel Programming in ZPL. Bradford L. Chamberlain, Sung-Eun Choi, EChristopher Lewis, Lawrence Snyder, W. Derrick Weathersby, and Calvin Lin. IEEE Computational Scienceand Engineering, 5(3):76–86, July–September 1998.Refereed Conference PublicationsArray Language Support for Parallel Sparse Computation. Bradford L. Chamberlain and Lawrence Snyder.In 15th ACM International Conference on Supercomputing (ICS’01), pages 133–145, June 2001.Eliminating Redundancies in Sum-of-Product Array Computations. Steven J. Deitz, Bradford L.Chamberlain, and Lawrence Snyder. In 15th ACM International Conference on Supercomputing (ICS’01), pages65–77, June 2001.A Comparative Study of the NAS MG Benchmark across Parallel Languages and Architectures. BradfordL. Chamberlain, Steven J. Deitz, and Lawrence Snyder. In Proceedings of the 2000 ACM/IEEE SupercomputingConference on High Performance Networking and Computing (SC2000), November 2000.Regions: An Abstraction for Expressing Array Computation. Bradford L. Chamberlain, E ChristopherLewis, Calvin Lin, and Lawrence Snyder. In Proceedings of the 1999 ACM/SIGAPL International Conferenceon Array Programming Languages (APL99), pages 41–49, August 1999.Problem Space Promotion and Its Evaluation as a Technique for Efficient Parallel Computation. BradfordL. Chamberlain, E Christopher Lewis, and Lawrence Snyder. In Proceedings of the 13th ACM InternationalConference on Supercomputing (ICS’99), pages 311–318, June 1999.Portable Performance of Data Parallel Languages. Ton Ngo, Lawrence Snyder, and Bradford Chamberlain.In Proceedings of the 1997 ACM/IEEE Supercomputing Conference on High Performance Networking andComputing (SC97), November 1997.Fast Rendering of Complex Environments Using A Spatial Hierarchy. Bradford Chamberlain, TonyDeRose, Dani Lischinski, David Salesin, and John Snyder. In Proceedings of the 22nd Annual GraphicsInterface Conference (GI’96), pages 132–141, May 1996.Reviewed Workshop PublicationsLanguage Support for Pipelining Wavefront Computations. Bradford L. Chamberlain, E Christopher Lewis,and Lawrence Snyder. In Proceedings of the 12th International Workshop on Languages and Compilers forParallel Computing (LCPC’99), pages 318–332, August 1999.ZPL’s WYSIWYG Performance Model. Bradford L. Chamberlain, Sung-Eun Choi, E Christopher Lewis,Calvin Lin, Lawrence Snyder, and W. Derrick Weathersby. In Proceedings of High-Level ParallelProgramming Models and Supportive Environments (HIPS’98), pages 50–61, March 1998.A Compiler Abstraction for Machine Independent Parallel Communication Generation. Bradford L.Chamberlain, Sung-Eun Choi, and Lawrence Snyder. In Proceedings of the 10th International Workshop onLanguages and Compilers for Parallel Computing (LCPC’97), pages 261–276, August 1997.Factor-Join: A Unique Approach to Compiling Array Languages for Parallel Machines. Bradford L.Chamberlain, Sung-Eun Choi, E Christopher Lewis, Calvin Lin, Lawrence Snyder, and W. 
Derrick Weathersby.In Proceedings of the 9th International Workshop on Languages and Compilers for Parallel Computing(LCPC’96), pages 481–500, August 1996.Technical ReportsParallel Language Support for Multigrid Algorithms. Bradford L. Chamberlain, Steven Deitz, and LawrenceSnyder. University of Washington Technical Report UW-CSE-99-11-03, November 1999.A Region-based Approach for Sparse Parallel Computing. Bradford L. Chamberlain, E Christopher Lewis,and Lawrence Snyder. University of Washington Technical Report UW-CSE-98-11-01, November 1998.Graph Partitioning Algorithms for Distributing Workloads of Parallel Computations(generals exam).Bradford L. Chamberlain. University of Washington Technical Report UW-CSE-98-10-03, October 1998.ZPL vs. HPF: A Comparison of Performance and Programming Style. Calvin Lin, Lawrence Snyder, RuthAnderson, Bradford L. Chamberlain, Sung-Eun Choi, George Forman, E Christopher Lewis, and W. DerrickWeathersby. University of Washington Technical Report UW-CSE-95-11-05, November 1995.Miscellaneous PublicationsArticles for the Arctic Region Supercomputer Center’s HPC/Cray T3E Users’ Group Newsletter:"I/O Algorithms on Cray T3E," Issue 214, February 2001."Comparison of Languages for Multi-Grid Methods," Issues 188–189, February 2000."MPI Message Tags on Yukon," Issue 161, January 1999."ZPL: New Parallel Language Available," Issue 122, July 1997.Poster: "A Region-based Approach for Sparse Parallel Computing" At 9th International Workshop onLanguages and Compilers for Parallel Computing, University of California San Diego, August 1996.Invited Talks"A Region-based Implementation of the NAS MG Benchmark," Cray Inc.October 2000 "Support for Multigrid Applications in ZPL," Los Alamos Nat'l Lab, ACL Seminar. March 2000 ServiceSurrogate Advisor, Steven Deitz (while advisor is on sabbatical) 2001–2002 Co-advisor, Steven Deitz, Maria Gullickson (qualifying grad students) 1999–2000 Referee, IEEE TVCG, ICS, PaCT, IPPS/SPDPStudent Volunteer, SCxy Conferences November 1995 and 1997 Graduate Student Coordinator, UW CS&E1996–1997 Chairperson, Prospective Student Committee, UW CS&E1996–1997 Volunteer, UPC Shelter for Homeless Teens July 1993 – November 1996 Editor, Mossy Bits' first online edition (UW CS&E grad creative arts journal) 1995–1996 Member, Undergraduate Admissions Committee, UW CS&E1994–1995 SoftwareZPL Compiler and Runtime SystemI was the primary implementor of the ZPL compiler from 1997 to 2001 and of the runtime libraries sincedevelopment began in 1993. ZPL has been publicly available since the initial release of the compiler (version1.10) in July 1997. The most recent release (version 1.16) was made in August 2000, spanning twelvearchitectures and six communication configurations. Installations are publicly available on the web at/research/zpl/, and I provide user support via the mailing aliases:zpl-bugs@ and zpl-info@.Technical SkillsLanguages:Expert C/C++ programmer. Experience with Java, Fortran 90, Perl, System C, ML, High Performance Fortran, Co-Array Fortran, Single Assignment C, Pascal, Lisp, Scheme, Smalltalk, and Prolog.Platforms:Extensive experience with UNIX and Windows-based machines, as well as numerous parallel platforms: Cray T3D/T3E, IBM SP, SGI Origin, Sun Enterprise, high-performance Linux clusters, Intel Paragon,KSR-2, BBN GP1000, and iPSC/2.Libraries:Experienced user of the MPI, PVM, SHMEM, and NX parallel libraries.PersonalBorn in Stanford, CA; raised in Annapolis, MD and Northern Idaho. 
An avid trombone player and proponent of the notion that comics are an art form to be reckoned with. Often found appreciating great film and music. Regular attendee of University Presbyterian Church.

References

Professor Lawrence Snyder, Department of Computer Science and Engineering, University of Washington, Box 352350, Seattle, WA 98195-2350, (206) 543-9265, snyder@

Dr. Burton Smith, Cray Inc., 411 First Avenue S, Suite 600, Seattle, WA 98104-2860, (206) 701-2000, burton@

Professor Craig Chambers, Department of Computer Science and Engineering, University of Washington, Box 352350, Seattle, WA 98195-2350, (206) 685-2094, chambers@

Professor Calvin Lin, Department of Computer Sciences, The University of Texas at Austin, Austin, TX 78712-1188, (512) 471-9560, lin@

Professor Carl Ebeling, Department of Computer Science and Engineering, University of Washington, Box 352350, Seattle, WA 98195-2350, (206) 543-9342, ebeling@

Michelle Sheiman, primary manager at QuickSilver Technology, sheiman@

(additional contact information available upon request)

Computer-Aided Design and Analysis


Computer-Aided Design and Analysis As a seasoned writer, I understand the importance of creating compelling and engaging content that resonates with readers on a deep level. When it comes to discussing topics like Computer-Aided Design and Analysis, it's essential to convey not only the technical aspects but also the emotional impact and significance of such technologies. Computer-Aided Design (CAD) has revolutionized the way we create and design products, buildings, and more. The ability to create detailed 3D models and simulations has significantly enhanced the efficiency and accuracy of the design process. Engineers and designers can now visualize their ideas in a virtual space before bringing them to life, leading to better outcomes and fewer errors. This technological advancement has not only saved time and resources but has also opened up new possibilities for innovation and creativity. On the other hand, Computer-Aided Analysis (CAA) plays a crucial role in evaluating and optimizing designs for performance and functionality. By running simulations and tests on CAD models, engineers can identify potential issues and make necessary adjustments before production. This not only ensures the safety and reliability of the final product but also allows for continuous improvement and refinement throughout the design process. The ability to predict how a design will perform in real-world conditions is invaluable, especially in industries where failure is not an option. However, with great power comes great responsibility. The reliance on CAD and CAA tools can sometimes lead to complacency and a lack of critical thinking. It's essential for designers and engineers to remember that these tools are aids, not substitutes for human judgment and creativity. While CAD and CAA can streamline the design process and improve efficiency, they should always be used in conjunction with human expertise and experience. The human touch is irreplaceable when it comes to problem-solving, innovation, and pushing the boundaries of what is possible. Moreover, the advancement of CAD and CAA technologies has raised concerns about job displacement and the future of work. As these tools become more sophisticated and automated, there is a fear that traditional design and analysis roles may become obsolete. It's crucial for professionals in these fields to adapt and upskill to remain relevant in a rapidly evolving technological landscape. Embracing new tools and technologies can enhancejob performance and open up new opportunities for growth and development. In conclusion, Computer-Aided Design and Analysis have transformed the way we design, create, and analyze products and structures. These tools have enhanced efficiency, accuracy, and innovation in various industries, leading to significant advancements in technology and design. However, it's essential to remember the importance of human judgment and creativity in conjunction with these tools to ensure optimal outcomes. By embracing the benefits of CAD and CAA while maintaining a human-centric approach, we can harness the full potential of these technologies and continue to push the boundaries of what is possible.。

1. Data Mining – Practical Machine Learning Tools and Techniques with Java


COURSE DESCRIPTION

Department and Course Number: CSc 177
Course Coordinator: Meiliu Lu
Course Title: Data Warehousing and Data Mining
Total Credits: 3

Catalog Description: Data mining is the automated extraction of hidden predictive information from databases. Data mining has evolved from several areas including: databases, machine learning, algorithms, information retrieval, and statistics. Data warehousing involves data preprocessing, data integration, and providing on-line analytical processing (OLAP) tools for the interactive analysis of multidimensional data, which facilitates effective data mining. This course introduces data warehousing and data mining techniques and their software tools. Topics include: data warehousing, association analysis, classification, clustering, numeric prediction, and selected advanced data mining topics. Prerequisite: CSC 134 and Stat 50.

Textbook: Data Mining – Concepts and Techniques, Han and Kamber, Morgan Kaufmann, 2001.

References:
1. Data Mining – Practical Machine Learning Tools and Techniques with Java Implementations, Witten and Frank, Morgan Kaufmann, 2000.
2. Data Mining – Introductory and Advanced Topics, Dunham, Prentice Hall, 2003.

Course Goals
Study various subjects in data warehousing and data mining that include:
•Basic concepts on knowledge discovery in databases
•Concepts, model development, schema design for a data warehouse
•Data extraction, transformation, loading techniques for data warehousing
•Concept description: input characterization and output analysis for data mining
•Data preprocessing
•Core data mining algorithms design, implementation and applications
•Data mining tools and validation techniques

Prerequisites by Topic
Thorough understanding of:
•Entity-relationship analysis
•Physical design of a relational database
•Probability and statistics – estimation, sampling distributions, hypothesis tests
•Concepts of algorithm design and analysis
Basic understanding of:
•Relational database normalization techniques
•SQL
Exposure to:
•Bayesian theory
•Regression

Major Topics Covered in the Course
1. Introduction to the process of knowledge discovery in databases
2. Basic concepts of data warehousing and data mining
3. Data preprocessing techniques: selection, extraction, transformation, loading
4. Data warehouse design and implementation: multidimensional data model, case study using Oracle technology
5. Machine learning schemes in data mining: finding and describing structure patterns (models) in data, informing future decisions
6. Information theory and statistics in data mining: from entropy to regression
7. Data mining core algorithms: statistical modeling, classification, clustering, association analysis
8. Credibility: evaluating what has been learned from training data and predicting model performance on new data, evaluation methods, and evaluation metrics
9. Weka: a set of commonly used machine learning algorithms implemented in Java for data mining
10. C5 and Cubist: decision tree and model tree based data mining tools
11. Selected advanced topics based on students' interests such as: web mining, text mining, statistical learning
12. Case studies of real data mining applications (paper survey and invited speaker)

Laboratory Projects
1. Design and implement a data warehouse database (4 weeks)
2. Explore extraction, transformation, loading tasks in data warehousing (1 week)
3. Explore data mining tools and algorithms implementation (3 weeks)
4. Design and implement a data mining application (3 weeks)

Expected Outcomes
Thorough understanding of:
•Process and tasks for knowledge discovery in databases.
•Differences between a data warehouse (OLAP) and operational databases (OLTP).
•Multidimensional data model design and development.
•Techniques for data extraction, transformation, and loading.
•Machine learning schemes in data mining.
•Mining association rules (Apriori).
•Classification and prediction (statistical based: Naïve Bayes, regression trees and model trees; distance based: KNN; decision tree based: 1R, ID3, CART; covering algorithm: Prism).
•Cluster analysis (hierarchical algorithms: single link, average link, and complete link; partitional algorithms: MST, K-means; probability based algorithm: EM).
•Use of data mining tools: C5, Cubist, Weka.
Basic understanding of:
•Data warehouse architecture.
•Information theory and statistics in data mining.
•Credibility analysis and performance evaluation.
Exposure to:
•Mining complex types of data: multimedia, spatial, and temporal
•Statistical learning theory
•Support vector machines and ANN

Estimated CSAB Category Content (units, core/advanced)
•Data Structures: .2
•Algorithms: 1
•Software Design: .5
•Concepts of Programming Languages: .2
•Computer Organization & Architecture: —

Oral and Written Communications
1. Three written reports (term project proposal, research paper review, and term project report).
2. Two oral presentations (10 minutes for paper review and 15-20 minutes for term project).

Social and Ethical Issues
No significant component.

Theoretical Content
1. Data warehouse schema and data cube computation.
2. Information theory and statistics in data mining.
3. Data mining algorithms and their output model performance prediction.
4. Evaluation metrics (confusion matrix, cost matrix, F measure, ROC curve).

Analysis and Design
1. Design of a data warehouse.
2. Design of a process of ETL (Extraction, Transformation, Loading).
3. Design of a data mining application.
4. Analysis of performance of a data warehouse.
5. Analysis and comparison of data mining schema.
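To make the classification and credibility topics above concrete, the short sketch below trains a decision-tree classifier and evaluates it with a held-out test set and a confusion matrix. It is an illustrative example only: the course's laboratory tools are Weka, C5, and Cubist, whereas this sketch uses Python with scikit-learn, and the Iris dataset stands in for whatever data a term project would use.

```python
# Illustrative only: the course labs use Weka/C5/Cubist; this shows the same
# train-then-evaluate workflow in Python with scikit-learn on a stand-in dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, classification_report

X, y = load_iris(return_X_y=True)

# Hold out a test set so credibility is judged on data the model has not seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# An entropy-based decision tree, similar in spirit to ID3/CART-style learners.
model = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

# Evaluation metrics from the course: confusion matrix and per-class measures.
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
```

Cross-validation (e.g., sklearn's cross_val_score) is the usual next step when a single hold-out split is too small to give a reliable performance estimate.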

Advanced Structural Analysis and Design


Advanced Structural Analysis and Design Advanced Structural Analysis and Design is a field that encompasses the design and analysis of structures that are subjected to various types of loads, such as gravity, wind, earthquake, and temperature. It is a critical field that requires a deep understanding of the behavior of different materials, such as concrete, steel, and timber, under different loading conditions. In this essay, I will discuss the various challenges and requirements of advanced structural analysis and designfrom multiple perspectives. From the perspective of a structural engineer, advanced structural analysis and design require a deep understanding of the principles of mechanics, mathematics, and physics. Structural engineers need to be able to analyze the behavior of different materials and structures under various loading conditions and design structures that can withstand these loads. They need to be familiar with various design codes and standards, such as the American Concrete Institute (ACI) and the American Institute of Steel Construction (AISC), and ensure that their designs comply with these codes. From the perspective of a construction manager, advanced structural analysis and design require careful planning and coordination. Construction managers need to ensure that the design is constructible and that the construction process is efficient and safe. They needto work closely with the structural engineer to ensure that the design is feasible and that the construction process does not compromise the safety and integrity of the structure. From the perspective of a building owner, advanced structural analysis and design require a balance between cost, safety, and aesthetics. Building owners need to ensure that the structure is safe and meets all applicable codes and standards, but they also need to consider the cost of construction and the aesthetic value of the structure. They need to work closely with thestructural engineer and the architect to ensure that the design meets their needs and expectations. From the perspective of a material supplier, advancedstructural analysis and design require the production of high-quality materialsthat meet the specifications and requirements of the structural engineer. Material suppliers need to ensure that their products are reliable and consistent and that they can provide the necessary technical support to the structural engineer during the design and construction process. From the perspective of a governmentregulator, advanced structural analysis and design require the enforcement of codes and standards that ensure the safety and integrity of structures. Government regulators need to ensure that the design and construction of structures comply with applicable codes and standards and that the structures are safe for the public to use. They also need to ensure that the structural engineer and the construction team are qualified and licensed to perform their respective roles. In conclusion, advanced structural analysis and design is a critical field that requires a deep understanding of the behavior of different materials andstructures under various loading conditions. It requires careful planning and coordination between the structural engineer, the construction manager, the building owner, the material supplier, and the government regulator. It also requires a balance between cost, safety, and aesthetics. 
Advanced structural analysis and design plays a crucial role in ensuring the safety and integrity of structures and the well-being of the public.

MO.Affinity Analysis Software Manual


Software Manual MO.Affinity Analysis

Contents
1. System Requirements
2. Term Definitions
3. General Layout
4. Saving and Exporting Data
5. Analysis Setup in the Home Screen
6. Data Selection
7. Dose Response Fit
8. Compare Results

The MO.Affinity Analysis software allows straightforward analysis and evaluation of MicroScale Thermophoresis data. It allows quantification of binding parameters such as dissociation constants (Kd) or EC50 values and easy comparison of results, e.g. for one target protein binding to different compounds. The MO.Affinity Analysis software guides the user through all important steps from data selection to evaluation by using a clearly organized submenu layout in the task bar. Creating an analysis file will retain the chosen settings for data analysis. Additionally, the software allows inspection and exporting of both raw and processed data at any step during data analysis. This manual explains the main functions integrated in the MO.Affinity Analysis software.

1. System Requirements
If the necessary licenses have been purchased, MO.Affinity Analysis software can be installed on computers meeting the following requirements:
Operating system: Windows 7/10 Professional 64 bit
CPU: Intel Core i5 or better
RAM: 8 GB or more
Hard disk: 20 GB or more free disk space available
Display resolution: 1600 x 900 or better
Software: Microsoft .NET 4.5.1 framework (included in installer of MO.Affinity Analysis software)
Operating system language: English or German
An external computer mouse is necessary to access all software features.

2. Term Definitions
Target: The fluorescent molecule. The concentration of the target molecule is constant throughout a dilution series.
Ligand: Non-fluorescent binding partner. The ligand concentration is varied by serial dilution.
MST Trace: MST fluorescence signal over time. A typical MST trace contains an initial detection of the sample fluorescence (by default recorded for 3-5 seconds), followed by activation of the MST power to induce the temperature gradient and subsequent detection of thermophoretic changes in fluorescence (by default recorded for 20-30 seconds). Finally, MST power is deactivated and back diffusion of fluorescent molecules is monitored (recorded for a short period only).
MST Run: A run includes a series of MST traces, typically of a fluorescent target molecule versus a serial dilution of a ligand.
Merge Set: A series of replicates of MST runs, with identical MST power, LED/excitation power as well as target concentrations. Data within one Merge Set will be averaged and error bars will be calculated and displayed. Note that the ligand concentrations do not necessarily have to be identical and can vary between merged MST runs.
Analysis Set: A complete dataset consisting of a number of Merge Sets or single MST runs, for parallel analysis and direct comparison. Dose response curves of runs and Merge Sets in Analysis Sets can be compared in the same charts with the MO.Affinity Analysis software. All runs contained in one Analysis Set are analyzed with the same evaluation parameters.
Analysis (file): A single Analysis Set, or a collection of Analysis Sets. The analysis can be saved at any time. An analysis file can be used to integrate a larger number of MST experiments for a comprehensive and systematic data analysis.
Raw Data: All fluorescence data recorded by the Monolith instrument: MST traces, capillary scans and shapes, initial fluorescence values and bleaching rates.
All raw data can be viewed in the Raw Data Inspection tool (see Figure 1 and section 3, point 6).Please note that the capillary scan displayed is the scan recorded before the MST measurement, not afterwards (applicable to MST measurements performed with MO.Control software).Figure 1: MST Raw Data Inspection: (A) Properties, (B) MST Traces, (C) Capillary Scan, (D) Capillary Shape, (E) Initial Fluorescence, (F) Bleaching Rate. Please note that the capillary scan displayed is the scan recorded before the MST measurement, not afterwards (applicable to MST measurements performed with MO.Control software).3. G eneral LayoutAll major functions of the MO.Affinity Analysis software are organized in the task bar:Four tabs guide the user through the process of MST data analysis:1. Home2. Data Selection3. Dose Response Fit4. Compare ResultsIt is recommended to complete all four steps in this order to ensure proper documentation and analysis of MST experiments. Movement between tabs during the analysis process is possible, e.g. to add additional files, edit names of Analysis sets, etc.Additional buttons in the task bar are:5. Quick saving of the analysis file.6. Raw Data Inspection is available at any time during the analysis process by selecting theRaw Data Inspection button on the top right of the window. This will open a separate window which displays all experiment-associated settings and meta-data, as well as detailed charts of raw MST traces, capillary scans, overlays of capillary shapes, initial fluorescence values and bleaching rates. Selected runs and traces will be highlighted in both, the MO.Affinity Analysis main window as well as in the Raw Data Inspection window. For detailed views of Raw Data Inspection options, see Figure 1.7. Alerts will be displayed on the top right of the main window. Alerts include experimentalinconsistencies as well as warnings about potential inconsistencies during data processing and fitting.Context-related supporting information, such as term definitions and equations, can be found when clicking the buttons located on each page.Anything you do in the software can be undone by pressing Ctrl + Z.4. S aving and Exporting DataThe MO.Affinity Analysis software allows for saving the current analysis at any time, using the drop-down menu in the top left (click ), the quick save button in the task bar or navigation back to the Home tab.Moreover, chart and tabular data can be exported where indicated using the export buttons (click ), which are located in the Dose Response Fit and Compare Results submenus as well as in the Raw Data Inspection section. Available image formats for export are .svg, .pdf and .png. Notethat .svg and .pdf contain vector graphics which can be processed by graphic editing software. Tabular data are saved in .xlsx or .csv format. Results can also be saved as a condensed report in .pdf format on the Dose Response Fit and Compare Results screens.5. A nalysis Setup in the Home ScreenIn the Home screen, create a new MST analysis or load a preexisting analysis file. Analysis files are saved in the .nta format. Changes in an analysis can be saved at any time.When creating a new analysis, enter an analysis name and optionally add comments, e.g. purpose of the analysis, assay conditions etc. Recently opened analysis files are listed chronologically. Start adding raw data to your analysis here, or in the next tab Data Selection.6. 
D ata SelectionSee Figure 2 for a summary of the options you have for Data Selection.To add MST runs to the analysis, use the drag-and-drop function for .ntp, .ntdb or .moc files, or use the load function to browse folders and select single or multiple files. MST runs of the selected file will appear in the Data Selection window as thumbnails of normalized MST Traces with a description of name, experiment settings and date. By changing the View option in the top panel, you can alter the presentation of the MST data thumbnails to Dose Response, Capillary Shape or Initial Fluorescence.Before data analysis is performed, choose the type of analysis. Toanalyze binding by MST, click on the MST button in the Choose AnalysisType panel. In cases of ligand-induced fluorescence changes where thefluorescence values of each capillary are used to determine bindingconstants, click the Initial Fluorescence button.Note: Use the Initial Fluorescence analysis if there is a ligand-concentration dependent change in sample fluorescence >±20 %. Referto the User Starting Guide or the MO.Control software for moreinformation.Note:The data points presented in the “Dose Response” thumbnail viewcorrespond to the F norm values determined after the MST powerdependent time intervals (see section 7).For further analysis and determination of binding parameters, MST runs are combined into Analysis Sets. Clicking the “Auto-Append” button will create a single Analysis Set which contains all loaded MST runs as independent single runs. Alternatively, MST runs can be added by clicking the symbol on the bottom right of each MST run thumbnail. This automatically creates a new Merge Set.To add replicate runs to a Merge Set, drag-and-drop them there directly. Merged runs will be displayed with average values and error bars in the Dose Response Fit screen. The software allows the merging of runs if the runs were collected using the same- LED/excitation power- MST power- Capillary type- Acquisition mode (fluorescence channel) and optics module (optics contained in a Monolith NT.Automated vs optics contained in a Monolith NT.115/NT.115Pico/belFree)Figure 2: Create Analysis and Merge Sets from MST runs in the Data Selection menu.Depending on the software used to perform the measurement, not all of these criteria may be saved in the file. As a result, it may not be possible to merge data collected with different software. When the user tries to add an incompatible run to a Merge Set, the software will reject the run and display an incompatibility message. To create a custom number of different Merge and Analysis Sets, drag-and-drop the MST run thumbnails.Hint: Analysis Sets and Merge Sets can also be rearranged by simple dragging and dropping.Please note that names of Merge and Analysis Set can be edited for better description and documentation by clicking the pen symbol on the respective flyout. The flyout appears upon mouse-over in the respective Analysis Set field (see screenshot below). Also, after an Analysis Set was created the analysis mode can still be switched between MST- and initial fluorescence analysis by selecting the respective button. Another button allows to switch to the expert mode for analysis (see next section for more details). Once Analysis Sets are created, binding data can be quantified.7. D ose Response FitThe Dose Response fit window allows for fitting MST data to obtain either dissociation constants (K d s, using the law of mass action) or EC50 values (using the Hill equation). 
In the window, normalized MST traces as well as corresponding dose response plots of the selected MST data are shown. Figure 3 summarizes the data analysis and fitting workflow.By selecting either an Analysis Set or a Merge Set on the left, the respective MST traces and their dose response plots are displayed. By default, F norm-values in the dose response plot are calculated from the ratio of normalized fluorescence F0/F1, where F0corresponds to the normalized fluorescence prior to MST activation. F1is by default determined after an optimal MST power-dependent time interval which yields the best signal-to-noise ratio.Use the mouse or the arrow keys to navigate through the analysis tree in the left panel. The right arrow key expands an Analysis Set or Merge Set, while the left arrow key collapses it.Data fitting is performed instantly after selecting the respective fit routine (K d Model or Hill Model). Fitting requires initial values, which are determined automatically by the software (shown as Guess values in the fit model). Known parameters, such as target concentration, need to be fixed by checking the Fix checkbox. In some cases it may be required to guide the fitting algorithm by manually entering initial Guess values.Note: The Hill fit should only be used if theinteraction involves a cooperative bindingmode. A 1:1 interaction should always be fittedusing the K d Model.After a fit is performed, a range of statisticalparameters is automatically calculated anddisplayed. For definitions, fit equations andmore information, click the button.Replicates within one Merge Set are displayedas average values and error bars representingthe standard deviation. Fits are applied to theaverage values. In order to get an errorestimation on the resulting K d, fit the replicatesindividually and use this data to performstatistics.For in-depth data evaluation and fit refinement, single runs and MST traces can be highlighted either by selection on the left, or by clicking on the respective MST trace or data point in the graphs. After highlighting, outliers can be excluded from the fit (either greyed out or invisible ).Figure 3: Analysis of binding constants and EC50-values.For a more in-depth evaluation of MST data, activate the Raw Data Inspection . Here, effects such as sample aggregation, adsorption to capillary walls or fluorescence intensity variations in the titration series can be easily identified. Please see the MO.Control software for more information on sample quality and assay optimization.When preparing Merge Sets for presentation that containnon-binding negative control interactions, move themouse cursor over the name of the Merge Set. A flyoutwill appear with a chain link symbol. Clicking this symbolinitializes all enabled fit parameters as nonbinder, whichmeans a horizontal line will be drawn through the pointsat their average value. This allows the comparison ofnonbinders with binders in the Compare Results view. Torevoke non-binder status, navigate to the data fittingsection of the respective run and untick all unneededcheckboxes.As an alternative to using the default analysis settings, the positions of F1 and F0 can be manually adjusted after enabling the Expert Mode for the Analysis Set (see Figure 4). Using this mode, the F1 and F0 cursors can be placed anywhere along the MST timetraces. 
The Expert Mode should only be used if the default analysis procedure did not yield satisfying results.Figure 4: Activation of the Expert Mode and visualization of different cursor settings.Similarly, when working with an Initial Fluorescence Analysis Set, the Expert Mode can be enabled to analyze ligand-dependent photobleaching effects (bleaching rate). Please contact your NanoTemper Technologies Support for more information.Chart visuals: Chart colors can be changed in the Data Selection, Dose Response Fit and Compare Results sections. All charts in the Dose Response Fit and Compare Results sections can be zoomed and adjusted for optimal visualization. Use the Zoom Extent button to adjust all data in the chart to the chart size. Zooming in-and-out of the chart is performed by scrolling the mouse wheel. Horizontal or vertical zooming can be performed by pressing shift or control on the keyboard while scrolling, respectively. Click and hold the mouse wheel and move the mouse to drag the chart (see Figure 5).Figure 5: Mouse control of chart visualization.8. C ompare ResultsThe Compare Results tab allows for a side-by-side comparison of MST runs and Merge Sets within an Analysis Set. In this tab, data and fitting results can also be exported in tabular and graphic format, including all binding data and the algorithms used. By selecting an Analysis Set on the left, all included data are plotted in the same chart. Selection of a Merge Set or a single MST run will highlight the selected experiments and grey-out the remaining experiments in the Analysis Set.Dropdown menus to change the Fit Model (K d or Hill) and display type (F norm, ∆F norm, Fraction Bound) are located on the top of the dose response chart. While the Fraction Bound normalization is best suited for a direct comparison of binding affinities, the ∆F norm normalization provides additional information about amplitude size and direction (please contact your NanoTemper Technologies Customer Support for more information). Both charts and chart raw data can be exported as an image file (.svg, .png or .pdf) or text file (.csv or .xlsx) for further use and external analysis.In addition to the visualization options, the Compare Results menu also includes a table summarizing number of averaged experiments (n), fit parameters, affinities and fit quality.Finally, the Generate Full Report button summarizes all charts and tables into one single PDF. Click the Generate Full Report button in the Dose Response Fit view to obtain a report even with unfitted data.ContactNanoTemper Technologies GmbHFloessergasse 481369 MunichGermanyPhone: +49 (0)89 4522895 0Fax: +49 (0)89 4522895 60***********************MicroScale Thermophoresis™ is a trademark.NanoTemper® and Monolith® are registered trademarks.NanoTemper® and Monolith® are registered in the U.S. Patentand Trademark Office.V10_2018-03-14。
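As a supplement to the Dose Response Fit description in section 7, the sketch below fits a dose response series with the two model forms named there: the law of mass action for a Kd and the Hill equation for an EC50. This is a minimal illustration and not the MO.Affinity Analysis implementation; the variable names, the fixed target concentration, the synthetic dilution series, and the normalization of the response to fraction bound are all assumptions made for the example.

```python
# Minimal sketch, not MO.Affinity internals: standard Kd (law of mass action)
# and Hill fits of a dose response series, using NumPy/SciPy. The data here are
# synthetic and the response is assumed already normalized to fraction bound.
import numpy as np
from scipy.optimize import curve_fit

target_conc = 5e-9  # fixed concentration of the fluorescent target, in M (assumed)

def fraction_bound_kd(ligand, kd):
    """Law of mass action: exact fraction of target bound vs. ligand concentration."""
    t = target_conc
    s = t + ligand + kd
    return (s - np.sqrt(s ** 2 - 4 * t * ligand)) / (2 * t)

def hill(ligand, ec50, n):
    """Hill equation for cooperative binding (fraction bound)."""
    return ligand ** n / (ec50 ** n + ligand ** n)

# 16-point serial (1:2) dilution of the ligand and a noisy synthetic response.
ligand = 1e-4 / 2 ** np.arange(16)
rng = np.random.default_rng(1)
response = fraction_bound_kd(ligand, 2e-7) + rng.normal(0.0, 0.02, ligand.size)

kd_fit, _ = curve_fit(fraction_bound_kd, ligand, response, p0=[1e-7])
hill_fit, _ = curve_fit(hill, ligand, response, p0=[1e-7, 1.0])

print(f"Kd model:   Kd   = {kd_fit[0]:.2e} M")
print(f"Hill model: EC50 = {hill_fit[0]:.2e} M, Hill coefficient n = {hill_fit[1]:.2f}")
```

As the manual notes, the Hill model is only appropriate for cooperative interactions; a 1:1 interaction should always be fit with the Kd model.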

NEURAL NETWORKS FOR RAPID DESIGN AND ANALYSIS


AIAA-98-1779 NEURAL NETWORKS FOR RAPID DESIGN AND ANALYSISDean W. Sparks, Jr.* and Peiman G. MaghamiNASA Langley Research Center, Hampton, VA 23681-0001AbstractArtificial neural networks have been employed for rapid and efficient dynamics and control analysis of flexible systems. Specifically, feedforward neural networks are designed to approximate nonlinear dynamic components over prescribed input ranges, and are used in simulations as a means to speed up the overall time response analysis process. To capture the recursive nature of dynamic components with artificial neural networks, recurrent networks, which use state feedback with the appropriate number of time delays, as inputs to the networks, are employed. Once properly trained, neural networks can give very good approximations to nonlinear dynamic components, and by their judicious use in simulations, allow the analyst the potential to speed up the analysis process considerably. To illustrate this potential speed up, an existing simulation model of a spacecraft reaction wheel system is executed, first conventionally, and then with an artificial neural network in place.I ntroductionThe overall design process for aerospace systems typically consists of the following steps: design, analysis and evaluation. If the evaluation is not satisfactory, the process is repeated until a satisfactory design is obtained. Dynamics and control analyses, which define the critical performance of many aerospace systems, are particularly important. Generally, all_____________________________* Aerospace Technologist, Guidance and Control Branch.Senior Research Engineer, Guidance and Control Branch, Senior Member, AIAA.Copyright Ó 1998 by the American Institute of Aeronautics and Astronautics, Inc. No copyright is asserted in the United States under Title 17, U.S. Code. The U.S. Government has a royalty-free license to exercise all rights under the copyright claimed herein for Governmental purposes. All other rights are reserved by the copyright owner.aerospace systems experience excitations resulting from internal and external disturbances, for example, aerodynamic turbulence encountered by aircraft or instrument scanning in space systems. Excessive vibrations due to turbulent aerodynamics could diminish the ride quality or safety of an aircraft. In space systems, excessive vibrations could be detrimental to its science instruments which usually require consistently steady pointing in a specified direction for a prescribed time duration. Typically, in the course of the design of an aerospace system, as the definitions and the designs of the system and its components mature, several detailed dynamics and controls analyses are performed in order to insure that all mission requirements are being met. These analyses, although necessary, have historically been very time consuming and costly due to the large number of disturbance scenarios involved, and the extent of time domain simulations that need to be carried out. For example, a typical pointing performance analysis for a space system might require several months or more, which can amount to a considerable drain on the time and resources of a space mission.It is anticipated that artificial neural networks (ANNs) can be used to significantly speed up the design and analysis process of aerospace systems. This paper will focus on the application of ANNs in approximating nonlinear dynamic components in simulations, in order to reduce overall time domain analysis time and compute effort. 
Initial work has shown that ANNs, once properly trained, can be used in place of nonlinear dynamical systems in simulations. These ANNs can give very good approximations of the systemsÕ outputs, and they can drastically reduce computational burden in running the overall simulation.A numerical example of a dynamical system simulation with an ANN is presented, and comparisons between conventional (i.e., without the ANN) simulation times, in terms of computer processing unit (CPU) seconds, versus simulation times with the ANN in place is made.The paper is organized as follows. After this introduction section, a brief description on conventional dynamics analysis is given. Next, discussions onneural networks, their use in approximating functional relationships, together with a typical design outline, is presented. Then, numerical results of an example application of an ANN in a simulation is reported. Finally, a conclusions section closes the paper.Conventional Dynamics AnalysisConventional dynamics analysis can be divided into two categories: time domain analysis and frequency domain analysis. Both are used to determine specific characteristics of a system performance, but as implied by their respective names, the characteristics are either defined in terms of time or as a function of frequency. In this paper, the emphasis will be on time domain analysis. Time domain analysis tries to compute the transient and steady state time responses of a system given specific inputs. Examples of typical system response characteristics, which are studied in time domain analysis, include transient system response maximum overshoot, rise and settling times. Another is the systemÕs steady state performance, which is usually defined by some metric on the steady state error between the system response and a reference signal. If the system is simple enough, i.e., linear, of very low order and has relatively few inputs and outputs, like a single-input, single-output (SISO) system, its responses can be obtained by direct solution of the system equations which describe the model. However, most realistic system models are of high order and/or nonlinear, which precludes a direct solution. The usual procedure in this case is to construct a simulation of the system to obtain the time responses, via integration (e.g., Runge-Kutta methods) of the systemÕs equations of motion. There are available several simulation-based packages, such as MATRIXx/System Build and MATLAB/Simulink, which can perform whatever time domain analysis is required. However, even with these tools, computing time response solutions can be expensive both in terms of time and effort, depending upon a number of factors, such as the order of the system, the number of inputs and outputs, the level of nonlinearities, the type and level of disturbance inputs and/or reference signals, and the kind of integration selected.Whatever type of analyses need to be done, it would be highly beneficial to the analyst to be able to rapidly assess the effects on system time response performance due to the almost inevitable design changes that a system will undergo during its lifetime. During the design phase of an aerospace system, almost all components go through some level of change, with each change having the potential to affect the performance of the overall system to some degree. In many instances, these changes are expected to affect the performance of the system to such a degree as to warrant a partial or full analysis of its performance. 
In the area of spacecraft dynamics and controls, these types of changes include: changes in the inertia or flexibility of the structural components which would affect the dynamic characteristics of the spacecraft; changes in the characteristics of the external and internal disturbances that may act on the spacecraft while it is in orbit; or changes in the control system design, hardware, and software. For example, for a reaction wheel system, changes could include: wheel size, nonlinear friction characteristics, or wheel speed internal controller design. Now, depending on the nature and extent of these changes, there may be a need to reevaluate the controlled dynamical responses of the system. The computational time and cost associated with each of these performance analyses (i.e., executing conventional time simulations) may be substantial. The cost can be exorbitant especially if the analysis has to be repeated several times during the design phase. One approach to this problem is to use artificial neural networks (ANNs) to help speed up the analysis.Rapid Analysis with ANNsThe motivation behind the use of ANNs is to speed up the analysis process substantially. The main use of ANNs lies with their ability to approximate functional relationships, specifically nonlinear relationships. This can be either a static relationship, one that does not involve time explicitly, or a dynamic relationship, which explicitly does involve time. Dynamic approximations via ANNs can be achieved by using the appropriate time delays and feedback of the output back to the input, which is defined as recurrence. Such networks are referred to as recurrent networks1,2. In any case, to an ANN, there is no distinction between a static or dynamic map, there is just input/output data. For example, an ANN could be designed to approximate the dynamic behavior of a nonlinear component, e.g., the mapping between the nonlinear torque output of a spacecraft reaction wheel and its angular wheel speed and input torque command. Once such a network is trained, the torque output of the wheel, for given wheel speed and torque command inputs, can be easily obtained by simulating the ANN. One application of ANNs is to use them to speed up the simulation process and therefore, the overall analysis time. For example, ANNs can be designed to approximate the outputs of a continuous-time, nonlinear system, with outputs computed for a specified discrete step. This way, thetraditional continuous-time integration (e.g. Runge-Kutta) of the nonlinear dynamics can be replaced by discrete-time nonlinear algebraic updates, with reasonable accuracy. Although the initial training time for an ANN may be long, it can be performed during off hours, in a semi-automated manner, without much direct involvement by the designer. Also, once an ANN has been designed to represent a dynamic component, it can be stored in a component library and recalled for use in future analyses.The successful design of an ANN depends on the proper training of the network. The training of a network involves the judicious selection of points in the input variable space, which along with the corresponding output points, constitute the training set. In the reaction wheel example, in order to properly train an ANN approximation, it is important that the input points, i.e., the wheel speed and commanded torque values, completely cover the range of possible values for both. 
In addition, it is important that enough points are selected such that they cover areas where fine resolution in the design space is required, i.e., areas where small variations in input data cause large variations in the corresponding output data. Of course, there will be the inevitable trade-off between selecting enough points for good training and keeping the number of training points down to practical levels for computation.

Before proceeding, it is important to restate here that the true advantage of using ANNs lies with modeling nonlinear relationships. Although one can certainly use ANNs to represent linear systems, there will be no gain, in terms of reductions in compute time and effort, in their use over conventional representations of the same linear systems. One can always take any pure linear, dynamical system and rewrite it as a series of output difference equations, which are functions of appropriate time-delayed output feedbacks and input signals. It turns out that the coefficients of these system output equations are equivalent to the "weighting coefficients" (which are defined in the following subsection) of pure linear ANNs, with the "bias" parameters (see following subsection) set to zeros. Thus, there would be no point in training ANNs to represent linear dynamical systems. Therefore, the work reported in this paper will only cover representing nonlinear systems with ANNs. In the following subsections, a brief overview of ANNs and the training of a specific type of ANN that was used in this work, are presented.

Artificial Neural Networks (ANNs)
Artificial neural networks (ANNs) have grown into a large field since their inception, and a complete discussion on them is beyond the scope of this paper. Instead, this section will present a very brief description on ANNs. ANNs were developed as an attempt to mimic the process of the human brain. They consist of groups of elements (called neurons) which perform specific computations on incoming data, with interconnections which permit data flow from one group of neurons to the next, similar to the way groups of biological neurons receive and transmit information through dendrites and axons, respectively, in a brain. Like their biological counterparts, ANNs can be trained to perform a variety of tasks, such as modeling functional relationships. The parameters of the ANN, when presented with the appropriate input and output data related to a specific functional relationship, can be adjusted such that the ANN can give a good representation of that relationship. This feature is particularly useful when the relationship is nonlinear and/or not well defined, and thus difficult to model by conventional means. Also ANNs, by their very nature, are a perfect fit for efficient parallel computations on digital computers. Though there are several types of ANNs, in this paper, only the feedforward ANN will be discussed.

A typical feedforward ANN is depicted in Figure 1, with m inputs and n_p outputs, and each circle, or node, representing a single neuron.

Figure 1. Typical feedforward ANN (input layer, hidden layers 1 and 2, output layer; inputs 1 to m, outputs 1 to n_p).

The name feedforward implies that the data flow is one way (forward) and there are no feedback paths between neurons. The output of each neuron from one column is an input to each neuron of the next column.
Using the typical naming convention, each column of neurons is called a layer, the initial column where the inputs come into the ANN is called the input layer, and the last layer, i.e., where the outputs come out of the ANN, is denoted as the output layer. All other layers in between are called hidden layers. These ANNs can have as many layers as desired, and each hidden layer can have as many neurons as desired. Each neuron can be modeled as shown in Figure 2, with n being the number of inputs to the neuron. (Figure 2. Representation of a neuron in the feedforward ANN.)

Associated with each of the n inputs is some adjustable scalar weight, w_i, i = 1, 2, ..., n, which multiplies that input. In addition, an adjustable bias value, b, can be added to the summed scaled inputs. These combined inputs are then fed into an activation function, which produces the output of the neuron. The activation function can take on many forms to shape the output; three of the more common functions are linear, tan sigmoid, and log sigmoid, as shown in Figure 3. The linear activation function simply outputs the input; the tan sigmoid function is the hyperbolic tangent function, with output values in [-1, 1] for inputs in (-∞, +∞); while the log sigmoid is also a nonlinear function, which can be written as y = 1/(1 + e^(-x)), with the output values, y, in the range [0, 1], given inputs, x, in the range (-∞, +∞). During training, the set of weights and bias terms associated with the neurons are adjusted until the output of the ANN matches, to within some specified level of tolerance, the true outputs for the same inputs.

The objective is to design a feedforward network to map the functional relationship between a set of input points and a corresponding set of output points, or target points. To accomplish this task, a feedforward network, like the one shown in Figure 1, but with only one hidden layer, is considered. The input layer has n_c nodes, corresponding to the elements of the input vector, while the output layer has n_p nodes, which correspond to the elements in the output vector. The number of nodes in the hidden layer is arbitrary; however, it has to be large enough to guarantee convergence of the network to the functional relationship that it is to approximate. Once the number of nodes in the hidden layer has been chosen, the network design is reduced to adjusting, or training, the weighting coefficients and biases. The parameters of feedforward networks are usually trained using either a gradient method named the back propagation method [1, 2], or a pseudo-Newtonian approach, such as the Levenberg-Marquardt [3] technique. Typically, in these methods, the weights and biases are trained to minimize some cost function of the error of the network. The network error is defined as the difference between the output of the true system and that of its ANN approximation, for a given set of inputs. The cost function is usually taken as the sum squared error of the network over all of the input points.
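A minimal sketch of the network just described, with a tan-sigmoid hidden layer and a pure linear output layer, is shown below. The layer sizes and the random weights are placeholders for values that a training procedure would produce; the log-sigmoid function is included only to show its form.

```python
import numpy as np

rng = np.random.default_rng(0)

n_c, n_hidden, n_p = 2, 10, 1               # input, hidden, and output sizes (assumed)
W1 = rng.standard_normal((n_hidden, n_c))   # hidden-layer weights (untrained placeholders)
b1 = np.zeros((n_hidden, 1))                # hidden-layer biases
W2 = rng.standard_normal((n_p, n_hidden))   # output-layer weights
b2 = np.zeros((n_p, 1))                     # output-layer biases

def logsig(x):
    """Log-sigmoid activation: maps (-inf, +inf) to [0, 1]."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(U):
    """Forward pass for an n_c x q input matrix U: tan-sigmoid hidden layer,
    pure linear output layer, returning the n_p x q output matrix Y_p."""
    hidden = np.tanh(W1 @ U + b1)   # tan sigmoid = hyperbolic tangent, range [-1, 1]
    return W2 @ hidden + b2         # linear output layer

if __name__ == "__main__":
    U = rng.uniform(-1.0, 1.0, size=(n_c, 5))   # five normalized input columns
    print(forward(U))
```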
If q sets of points (e.g., points taken for q time samples) are used for training the network, then the input U to the network would be an n_c x q matrix, with each column corresponding to a set of input points for a given time sample, and the output Y_p would be an n_p x q matrix, with each column of Y_p corresponding to that of U. Now the cost function, in terms of the sum squared error of the network, can be written as

E = sum_{k=1}^{q n_p} e(k)^2 = sum_{r=1}^{q} sum_{j=1}^{n_p} (Y_d(j,r) - Y_p(j,r))^2,  (1)

where Y_d is an n_p x q matrix of the target outputs. The typical procedure is to keep updating the weights and biases until the error E goes below some specified tolerance level. At this point, the feedforward network is considered trained.

It has been shown in the literature that a feedforward network with only one hidden layer can approximate a continuous function to any degree of accuracy [4-6]. It is obvious that this capability carries over to networks with more than one hidden layer. The use of feedforward ANNs has some advantages over conventional approximation techniques, such as polynomials and splines. For example, polynomials are hard to implement in hardware due to signal saturation, and if they are of higher order, there may be stability problems in determining the coefficients. ANNs, on the other hand, are very amenable to hardware implementation. As a matter of fact, to date, several VLSI chips based on multilayer neural network architectures are available [7, 8].

Reaction Wheel Model Example

In order to illustrate the feasibility of using ANNs to approximate dynamic components, a model of a reaction wheel assembly, consisting of three reaction wheels, one each for the roll, pitch, and yaw axes of a spacecraft, was selected as a test application. Figure 4 shows the block representation of a reaction wheel model in this assembly; this model was used for all three wheels. This model is fairly simple in nature, and consists of the following: the input, T_com, is the torque command (in units of N-m) to the reaction wheel, which is updated every 1.024 seconds; T_act, in N-m, is the actual torque output of the wheel, which includes a nonlinear viscous friction torque; the wheel momentum, M_w; and the angular wheel speed, W_sp, which is converted into units of revolutions per minute (RPM). The parameter J_w is the wheel inertia. The actual torque output of the wheel, T_act, is the combination of the torque command and the viscous friction torque, T_fric (which takes the opposite sign of that of the wheel speed W_sp):

T_act = T_com - T_fric * sign(W_sp).  (2)

Two different nonlinear functions were used to model the wheel friction torque: a quadratic function in terms of wheel speed, and an exponential function in terms of wheel speed. In each case, an ANN was designed to approximate the dynamics of the wheel. The results are presented in the following subsections.

Quadratic Friction Function

In the first case, the wheel viscous friction torque was modeled with a quadratic function in terms of W_sp, which is given below:

T_fric = aw_0 + aw_1 * W_sp + aw_2 * W_sp^2,  (3)

where aw_0, aw_1, and aw_2 are constant coefficients for this wheel. The above wheel model is continuous and nonlinear, and in the past has been simulated using a Runge-Kutta (2,3) variable step size integration for accurate, but time consuming, integration. The error tolerance for the integration was set at 10^-6, the minimum step size was set at 10^-8 seconds, and the maximum step size at one time sample of 1.024 seconds. Note that the tight error tolerance was required for solution accuracy.
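For illustration, the sketch below sets up a wheel model of the form of Eqs. (2) and (3) and propagates it over one 1.024-second sample with a variable-step Runge-Kutta (2,3) pair, using SciPy's RK23 (Bogacki-Shampine) solver as a stand-in for the Simulink routine. The inertia and friction coefficients are placeholders; the values used in the original study are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

J_W = 0.05                        # wheel inertia, kg*m^2 (placeholder value)
A0, A1, A2 = 1e-3, 1e-5, 1e-7     # quadratic friction coefficients (placeholders)

def wheel_ode(t, y, t_com):
    """Continuous-time wheel model: y[0] is the wheel speed in RPM.
    T_act = T_com - T_fric * sign(W_sp), with T_fric quadratic in W_sp."""
    w_sp = y[0]
    t_fric = A0 + A1 * abs(w_sp) + A2 * w_sp ** 2      # placeholder friction magnitude
    t_act = t_com - t_fric * np.sign(w_sp)
    dw_dt = (t_act / J_W) * (60.0 / (2.0 * np.pi))      # rad/s^2 converted to RPM/s
    return [dw_dt]

def propagate(w_sp0, t_com, t_step=1.024):
    """One time-sample propagation with a variable-step RK(2,3) pair and a
    tight error tolerance, mirroring the integration described above."""
    sol = solve_ivp(wheel_ode, (0.0, t_step), [w_sp0], args=(t_com,),
                    method="RK23", rtol=1e-6, atol=1e-6, max_step=t_step)
    return sol.y[0, -1]

if __name__ == "__main__":
    print(propagate(w_sp0=250.0, t_com=0.05))
```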
One way to speed up the simulation was to convert the continuous-time model into a discrete-time model, and then use discrete updates every 1.024 seconds to propagate the system state equations. However, as will be discussed later in this section, the direct discrete simulation of this model results in unacceptable inaccuracies because of the nonlinear friction torque component.

To try to keep the speed advantage of discrete update simulations, and still maintain reasonable accuracy in the wheel model outputs, an ANN was trained to map the functional relationship from the torque input command at the k-th discrete time step, T_com(k), and the wheel speed at the k-th time step, W_sp(k), to the wheel speed at the next time step, W_sp(k+1). In other words, the wheel speed from one time step to the next was approximated. Figure 5 depicts the discrete-time model of the reaction wheel, with a single-hidden-layer ANN (the hidden layer with a tan sigmoid activation function, the output layer with a pure linear function) computing W_sp(k+1). A unit delay is in place to obtain the current (k-th step) wheel speed. The friction torque computation was done the same way as in Figure 4. It was felt that there would be no real advantage gained in substituting a separate ANN to replace the simple quadratic friction function (Eq. 3), which was a static map. Since the same model was used for each of the three wheels in the assembly, the same ANN could be used for each wheel.

Before the ANN for the wheel speed could be trained, the appropriate input/output training data had to be generated. To accomplish this, proper data points for both the torque command T_com and the wheel speed W_sp, the two inputs to the ANN, had to be selected first. With this specific wheel model, the expected operating range for T_com was assumed to be +/- 0.1 N-m, and +/- 300 RPM for W_sp. To get adequate coverage of data points over these ranges, the T_com points were taken in equally-spaced increments of 0.005 N-m, while the W_sp points were taken in increments of 3.0 RPM. With these ranges and increments, the total number of training input pairs (T_com, W_sp) was 8,241. The corresponding training output, or target, points were then computed by taking each training input pair and running a MATLAB (v5.0)/Simulink (v2.0) simulation of the continuous reaction wheel model (Figure 4) over a specified time interval [0, T_step]; the wheel speed value at time T_step was recorded as the desired target point for that specific training pair. As mentioned earlier, for this reaction wheel model, T_step was set to 1.024 seconds. Each Simulink simulation used the second-order, three-function-evaluation-per-step Bogacki-Shampine variable-step integration routine; the minimum and maximum step sizes allowed were set at 10^-8 and 1.024 seconds, respectively, and the relative and absolute error tolerance parameters were set to 10^-6. Each T_com input value was held constant over the integration range [0, 1.024], while the corresponding W_sp input value was entered as the initial wheel speed value (i.e., at time 0) in the pure integrator block.

After the training data was generated, the ANN could be trained. Prior to the actual ANN training, both the input and output training data were normalized with respect to their absolute maximum values; by keeping the training data in the [-1, 1] range, more efficient use of the ANN training routines was obtained.
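The grid-based training-set generation described above might look like the following sketch. The `propagate` stub stands in for the continuous-model simulation (for example, the RK23 propagation sketched earlier, or a Simulink run); the grid spacing reproduces the 41 x 201 = 8,241 input pairs, and both inputs and targets are normalized to [-1, 1].

```python
import numpy as np

def propagate(w_sp0, t_com, t_step=1.024):
    """Stand-in for the continuous-model propagation over [0, t_step];
    replace with the real simulation when generating actual targets."""
    return w_sp0 + 400.0 * t_step * t_com   # placeholder dynamics, illustration only

# Training grid: T_com over +/- 0.1 N-m in 0.005 N-m steps, W_sp over +/- 300 RPM in 3 RPM steps.
t_com_grid = np.arange(-0.1, 0.1 + 1e-9, 0.005)     # 41 values
w_sp_grid = np.arange(-300.0, 300.0 + 1e-9, 3.0)    # 201 values -> 41 * 201 = 8241 pairs

pairs = np.array([(t, w) for t in t_com_grid for w in w_sp_grid])
targets = np.array([propagate(w, t) for t, w in pairs])

# Normalize inputs and targets to [-1, 1] by their absolute maxima before training.
U = (pairs / np.abs(pairs).max(axis=0)).T            # 2 x 8241 input matrix
Y_d = (targets / np.abs(targets).max())[None, :]     # 1 x 8241 target matrix

print(U.shape, Y_d.shape)   # (2, 8241) (1, 8241)
```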
The training led to a feedforward ANN with one 10-neuron hidden layer (using a tan sigmoid activation function) and a pure linear output layer. The training was performed using the standard 'trainlm' function from the MATLAB Neural Network Toolbox, which is based on the Levenberg-Marquardt training algorithm [Ref. 3]. Running on a Sun Ultra-2 workstation, the training of this ANN completed in less than 2.0 (elapsed time) hours. The training reduced the sum squared error (see Eq. (1)) of the ANN down to a level of 2.98 x 10^-5, which was deemed acceptable. Once the training was completed, the final ANN weights and bias numbers were scaled back to their true values. In checking the accuracy of the approximation achieved by this ANN, given the training input points, the mean percent error between the true target points and the ANN output points was 0.063%, and only 1.07% of the points had errors greater than 1%.

Once the ANN-based model of the reaction wheel assembly was developed, its performance, in terms of accuracy and execution CPU time, was evaluated in several discrete-time simulations under a specific set of torque command input (T_com) profiles. These 6000-second torque command profiles for the roll, pitch, and yaw axis wheels, respectively, shown in Figure 6, were a series of 1.024-second-wide pulses. In addition, a small random signal was added to the pitch axis wheel torque command. These could be typical torque command profiles required to counter the motions of scanning instruments on a spacecraft, for example. (Figure 6. Roll, pitch, and yaw torque input command profiles.) The original continuous-time reaction wheel assembly model, as shown in Figure 4, was also simulated using the Simulink second-order, three-function-evaluation-per-step Bogacki-Shampine variable-step integration routine, with the minimum and maximum step sizes set at 10^-8 and 1.024 seconds, respectively. The relative and absolute error tolerance parameters were set to 10^-6. The results of this simulation were considered the 'true' results, against which the other simulations were tested. The average execution time for this 'true model' simulation was 15.01 CPU seconds on the Sun Ultra-2.

Using the ANN-based model of the reaction wheel assembly, two different discrete-time simulations, both running at a discrete update period of 1.024 seconds, were performed to see if meaningful reductions in simulation execution times could be achieved without sacrificing accuracy down to unacceptable levels. Table 1 contains the execution times (in CPU seconds) and the rms and maximum absolute errors (as compared to the 'true model' results from above) for three discrete-time simulations.

Table 1. Discrete-time simulation results for the quadratic friction case.

  Simulation                  CPU sec   rms error (RPM)   Max. error (RPM)
  ANN MATLAB function         6.49
    roll axis wheel                     0.0011            0.0092
    pitch axis wheel                    0.0049            0.0790
    yaw axis wheel                      0.0263            0.0417
  ANN MEX file                0.35
    roll axis wheel                     0.0011            0.0092
    pitch axis wheel                    0.0049            0.0790
    yaw axis wheel                      0.0263            0.0417
  discrete MATLAB function    1.71
    roll axis wheel                     1.8437            41.5683
    pitch axis wheel                    1.3968            80.6037
    yaw axis wheel                      4.2915            63.6718

First, a MATLAB function file version of the ANN-based discrete-time model was written; this function was executed in MATLAB v5.0. The average execution time was 6.49 CPU seconds. In comparing the wheel speed outputs from this discrete function file simulation with those from the 'true model', very good agreement was observed; the root-mean-square (rms) errors between the 'true' and ANN-based discrete-time simulation outputs, over the length of the simulation, were 0.0011 RPM for the roll axis wheel, 0.0049 RPM for the pitch axis wheel, and 0.0263 RPM for the yaw axis wheel. These results indicated that, although acceptable simulation accuracies were achieved with the ANN-based model, the execution time speedup was only a factor of 2.3. This was somewhat expected, because the friction nonlinearity was fairly benign, i.e., the Runge-Kutta integration did not have to take many steps to converge to the solution. More reduction in execution time can be achieved if another compute language is used, one with faster loop execution capability. To do this, the ANN-based wheel model simulation was written in FORTRAN-77, for execution as a MEX file called by MATLAB. MEX files are dynamically linked subroutines which MATLAB can load and execute like regular MATLAB functions.

The third simulation was just a pure discrete-time simulation (zero-order-hold integration, with no ANN) of the wheel assembly, sampled at 1.024 seconds. This simulation was also written as a MATLAB function and executed in MATLAB v5.0. The results in Table 1 show that although the pure discrete wheel model simulation executes at a faster rate, its accuracy leaves much to be desired. Figure 7 shows the roll axis wheel speed output time histories for this case, from the 'true model' Simulink simulation (top), the ANN-based model MEX file simulation (middle), and the pure discrete-time model simulation (bottom). In these simulations, the initial angular speed of all three wheels was 250 RPM. (Figure 7. Roll axis wheel speed simulation results.) Clearly, the ANN-based MEX file simulation results matched the 'true model' results much better than did the pure discrete-time simulation results. The combination of the nonlinearity and the rapid dynamics caused by the pulse command profile made it difficult for the pure discrete model to accurately match the 'true model' simulation at the update period of 1.024 seconds. On the other hand, the ANN-based wheel model simulations, while executing more slowly than the pure discrete model simulation, gave much more accurate results. The ANN-based wheel model MEX file simulation gave wheel output results which were very comparable to the 'true model' results, while executing about 40 times faster, which was a fairly significant speedup.

Exponential Friction Function

In the second case, the wheel viscous friction torque was modeled with an exponential function in terms of W_sp, which is given below:

T_fric = aw_0 * e^(aw_1 * W_sp),  (4)

where aw_0 and aw_1 are constant coefficients for this wheel. As in the quadratic friction function case, an ANN-based model of the reaction wheel assembly was designed. Another 10-node feedforward ANN was trained in the exact manner as reported in the previous case. Running on the Sun Ultra-2 workstation, the training of this ANN completed in less than 2.0 (elapsed time) hours. The training reduced the sum squared error of this ANN down to 3.009 x 10^-4, which was deemed acceptable.
In checking the accuracy of this ANN, given the training input points, the mean percent error between the true target points and the ANN output points was 0.419%, and only 1.32% of the points had errors greater than 1%.

An Overview of "A Brief Discussion of Quality by Design"

QbD: The CMC Pilot Program
In June 2005, FDA's ONDQA announced the launch of a CMC pilot program to show pharmaceutical companies how to present CMC information, covering (i) the principles of applying QbD and (ii) product and process understanding. The program enabled FDA to evaluate new drug applications developed under the QbD concept, and to obtain broader public input and information while FDA redrafted its quality assessment system for the pharmaceutical industry.
By the end of 2010, a total of 21 NDAs, 18 INDs, and 9 supplements had been submitted under the program.

QbD: Generic Drugs
In August 2005, FDA's Office of Generic Drugs published a QbR-QoS template of review requirements for generic drugs as part of the launch of the 21st-century GMP initiative. In January 2008, the QbR-QoS came into use. In December 2011, the Office of Generic Drugs released a QbD example for an extended-release tablet.

I. Quality Target Product Profile (QTPP)
Definition: the quality profile that a product must possess in order to achieve the safety and efficacy described in its labeling.
Three steps for establishing a product's QTPP:
- Identify the quality attributes by studying the marketed reference product.
- Determine the critical quality attributes and explain their rationale (justification).
- Summarize and establish the quality target product profile.

Table of common quality target product profile attributes
From January 2013, all generic drug applications must be based on the QbD concept.

QbD: Formulation and Process Development
Definition (ICH Q8): Quality by Design (QbD) is the development of a product through systematic design and research. During development, thorough scientific study, comprehensive quality risk management, and comprehensive process controls are used to achieve the predefined product quality objectives.

QbD: Formulation and Process Development
ICH Q8
Make a new Table for CQA

Establishing the QTPP for a Generic Drug, Step 2: Determine the Critical Quality Attributes
- List all quality attributes: physical properties, identification, assay, content uniformity, dissolution, degradation products, residual solvents, water content, microbial limits, and so on.
- Quality attributes are divided into critical and non-critical attributes.
- Critical quality attributes are defined according to the severity of their impact on product safety and efficacy.
- During product development, only those critical quality attributes that are strongly affected by changes in the formulation and process parameters are studied further.
- Some critical quality attributes (for example, identification) do not change with formulation or process parameters, and therefore are not the attributes on which development studies focus.

A Combined Complex Eigenvalue Analysis and Design of Experiments (DOE) Approach to the Study of Disc Brake Squeal (Translation)

Graduation Project (Thesis) Translation of Foreign-Language Material
Department: Mechatronics and Information Engineering
Major: Mechanical Design, Manufacturing and Automation
Class:          Name:          Student ID:
Source: International Journal of Engineering, Science and Technology, Vol. 1, No. 1, 2009, pp. 254-271
Attachments: 1. Original text; 2. Translation
March 2013

A Combined Complex Eigenvalue Analysis and Design of Experiments (DOE) Approach to the Study of Disc Brake Squeal

Abstract: This paper presents an investigation of the brake-pad factors that influence disc brake squeal, combining finite element simulation with statistical regression techniques.

Complex eigenvalue analysis (CEA) has been widely used to predict unstable frequencies in brake system models, and the finite element model is correlated with experimental modal tests.

An input-output relationship between the geometry of the brake pad and the disc is constructed so that the squeal propensity of the disc brake can be predicted for various geometric configurations.

The influence of various factors, namely the Young's modulus of the back plate, the back plate thickness, the chamfer, the distance between two slots, and the slot width and angle, is studied using the design of experiments (DOE) technique.

A predictive mathematical model was developed on the basis of the most influential factors, and verification simulation experiments demonstrated its adequacy.

The predicted results show that the squeal propensity can be reduced by increasing the Young's modulus of the back plate and by adding modified chamfers on both sides of the friction material.

With slot configurations introduced, the combined CEA and DOE modeling approach to brake squeal was found, through verification tests, to be statistically adequate.

This combined approach should be useful at the design stage of disc brakes.

Keywords: disc brake squeal, finite element analysis, experimental modal analysis, design of experiments

1. Introduction
Brake squeal is a noise problem caused by vibrations arising from friction-induced dynamic instability (Akay, 2002).

During braking, the friction force between the pad and the disc can induce dynamic instability in the system.

Brake squeal usually occurs in the frequency range between 1 and 20 kHz.

Squeal is a complex phenomenon, partly because it depends strongly on many parameters and partly because of the mechanical interactions in the brake system.

The mechanical interactions are considered to be very complicated because of the nonlinear contact effects at the friction interface.

The occurrence of squeal is intermittent or random.

Under certain conditions, even when a car is brand new, it often produces squeal noise, and extensive research aimed at eliminating the noise has been carried out.
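Because complex eigenvalue analysis is central to the approach introduced above, the sketch below shows, for a made-up two-degree-of-freedom pad/disc model with friction-induced asymmetric stiffness, how the complex modes are extracted and how a positive real part (negative damping) flags a squeal-prone mode. The matrices are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical 2-DOF pad/disc model: M x'' + C x' + K x = 0, where friction
# coupling makes K asymmetric and can destabilize a mode.
M = np.diag([1.0, 1.0])
C = np.diag([0.002, 0.002])
K = np.array([[2.0e6,  1.2e5],
              [-0.9e5, 3.0e6]])    # asymmetric coupling term (assumed)

# State-space form [x; x']: the eigenvalues of A are the complex modes.
n = M.shape[0]
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])

eigvals = np.linalg.eigvals(A)
for lam in eigvals[np.imag(eigvals) > 0]:           # one of each conjugate pair
    freq_hz = np.imag(lam) / (2.0 * np.pi)
    zeta = -np.real(lam) / abs(lam)                 # modal damping ratio
    unstable = np.real(lam) > 0                     # positive real part => squeal-prone mode
    print(f"{freq_hz:8.1f} Hz   zeta = {zeta:+.4f}   unstable = {unstable}")
```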

Structural Analysis and Design

Structural analysis and design play a crucial role in the construction and engineering industry. It involves the study of the behavior and performance of structures, ensuring their safety, reliability, and sustainability. This field encompasses a wide range of elements, including materials, loads, forces, and environmental considerations. In this discussion, we will delve into the significance of structural analysis and design from various perspectives, exploring its technical, practical, and ethical dimensions.

From a technical standpoint, structural analysis and design involve the application of engineering principles to create safe and efficient structures. Engineers use sophisticated software and mathematical models to simulate the behavior of different materials and structural components under various conditions. By analyzing these simulations, they can optimize the design to ensure that the structure can withstand the expected loads and environmental factors. This process is essential for preventing structural failures, which can have catastrophic consequences in terms of human safety and economic losses.

Moreover, structural analysis and design also play a pivotal role in ensuring the sustainability and environmental impact of structures. With growing concerns about climate change and resource depletion, engineers are increasingly focusing on creating eco-friendly and energy-efficient designs. This involves using sustainable materials, optimizing the use of resources, and minimizing the carbon footprint of structures. By integrating these considerations into the design process, engineers can contribute to a more sustainable built environment, aligning with global efforts to combat climate change.

Beyond the technical aspects, the practical implications of structural analysis and design are equally significant. Well-designed structures not only ensure safety but also contribute to the overall functionality and aesthetics of the built environment. Whether it's a bridge, a skyscraper, or a residential building, the design of structures has a profound impact on the lives of people. Aesthetically pleasing and well-designed structures can enhance the quality of urban landscapes, instill a sense of pride in communities, and contribute to the overall well-being of society.

Furthermore, ethical considerations are paramount in structural analysis and design. Engineers have a responsibility to uphold ethical standards and prioritize the safety and welfare of the public. This involves adhering to building codes and regulations, conducting thorough risk assessments, and being transparent about the limitations of a design. Additionally, ethical engineering practices also encompass considerations for social equity and accessibility. Engineers should strive to create inclusive designs that accommodate the needs of all individuals, including those with disabilities, and contribute to the overall welfare of society.

In conclusion, structural analysis and design are integral components of the construction and engineering industry, with far-reaching implications for safety, sustainability, functionality, and ethical responsibility. By approaching this field from a multidimensional perspective, we can appreciate its profound impact on the built environment and society as a whole. As we continue to advance technologically and face evolving challenges, the role of structural analysis and design will remain pivotal in shaping a safer, more sustainable, and ethically responsible built environment.

MEMS Exam Key Points

1What’s MST ?what’s micro-machine ? MEMS, NEMS, micro-system? and explain them simply ?(P2)答MEMS is simultaneously a toolbox, a physical product, and a methodology, all in one:A micro-system is an intelligent miniaturized system comprising sensing, processing and actuating functions. These would normally combine two or more of the following: electrical, mechanical, optical, chemical, biological, magnetic or other properties integrated into a single or multichip hybrid.MST is Microsystems technology, it's a name of technology called in Europe, and the technology is called microelectromechanical systems (MEMS). NEMS: Nano-Electromechanical system, it’s feature sizes in 1-10 nm, combined mechanical and electrical, new effect devices and systems based on nano structure. Micro machining is the set of design and fabrication tools that precisely machine and form structure and elements at a scale well below the limits of our human perspective faculties – the micro-scale.MEMS micro- electro – mechanical systems is a technology that in its most general form can be defined as miniaturized mechanical and electro-mechanical elements that are made using the techniques of micro-fabrication.2,What are advantages of microtechnology? Why are different governments interested in MEMS?答: Advantages:∙It is a brand-new field has to consider a variety of physical fields of the mixing action research, compared with the traditional mechanical technologies.∙The size much smaller, 0.1~100um, its thickness much smaller.∙Use the silicon material, which has good electrical performance, strength, hardness, and Young Modules are similar with iron. Good heat transfer rate.∙Can use in production of the mature of IC technology, process, make high-volume , low-cost production.∙Higher level functions, integrate smaller function together into one package for great utility.∙Brings cost benefits directly through low unit pricing by cutting silica and maintenance costs.Micro electrical mechanical structures and systems miniature devices that enable the operation of complex systems. They exist today in many environments, especially auto motive, medical, consumer, industrial and aerospace. Their potential for future penetration into a broad range of applications is real supported by strong development activities. At many companies and fabrication processes. The development of MEMS is inherently inter disciplinary, necessitating and understanding of the tool box as well as of the end application.3 What are main application fields of MEMS? Explain these fields respectively (speak out at least four fields). 
You may take some example to support your ideas.(P3)答:Four application fields.(1).In the commercial application1)Drug delivery systems.2)RF and wireless electronics.3)Engine and propulsion control.(2).In the military application1)Head-and night-display systems.2)Low-power,high-density mass data storage devices.3)Embedded sensors and actuators for condition-based maintenance.(3).In car industry application1)MEMS pressure sensors.2)MEMS brake sensors.3)MEMS acceleration sensors.4)Auto motive safety braking and suspension systems.(4).In biochemical and medical application1)Miniature biochemical analytical.2)Invasive and noninvasive biomedical sensors.3)Medical imaging.4)minimally invasive surgery.4 Introduction one basic process tools.(P34)答: OxidationHigh-quality amorphous silicon dioxide is obtained by oxidizing silicon in either dty oxygen or in steam at elevated temperatures(8500C-11500C).Oxidation mechanism have showing final oxide thickness as function of temperature,oxidizing environment,and time are widely available.Thermal oxidation of silicon generates compressive stress in the silicon dioxide film.There are two reasons for the stress:silicon dioxide molecules take more volume than silicon atoms,and there is a mismatch between the coefficients of thermal expansion of silicon and silicon dioxide.The compressive stress depends on the total thickness of the silicon dioxide layer and can reach hundreds of Mpa.As a result,thermally grown oxide films cause bowing of the underlying substrate.Moreover,freestanding membranes and suspended cantilevers made of thermally grown silicon oxide tend to warp or curl due to stress variation through the thickness of the film.5 Introduction photolithography process.(P40)答: Lithography involves three sequential steps:∙Application of photoresist, which is photosensitive emulsion layer;∙Optical exposure to print an image of the mask onto the resist;∙Immersion in an aqueous developer solution to dissolve the exposed resist and render visible the latent image.Photolithography is the process of transferring shapes on a mask to the surface of a silicon wafer. The steps involved in the photolithography process are wafer cleaning; barrier layer formation; photoresist application; soft baking; mask alignment; exposure and development; and hard-baking. 6.Brief explanations of the difference between isotropic(各向同性的) and anisotropic(各项异性的).(P45)答:Isotropic etchants etch uniformly in all directions, resulting in rounded cross-sectional features. By contrast, anisotropic etchants etch in some directions preferentially over others, resulting in trenches or cavities delineated by flat and well-defined surfaces, which need not be perpendicular to the surface of the wafer. The etch medium (wet versus dry) plays a role in selecting a suitable etch method. Wet etch in aqueous solution offer the advantage of low-cost batch fabricaton--25to50 100-mm-diameter wafers can be etched simultaneously—and can be either of the isotropic or anisotropic type. Dry etching involves the use of reactant gases, usually in a low-pressure plasma, but nonplasma gas-phase etching is also used to a small degree. It can be isotropic or vertical. The equipment for dry etching is specialized and requires the plumbing of ultra-clean pipes to bring high-purity reactant gases into the vacuum chamber.Isotropic etchants etch uniformly .in all directions, resulting in rounded cross-sectional features. 


Design and Analysis of an MST-Based Topology Control AlgorithmNing Li,Jennifer C.Hou,and Lui ShaDepartment of Computer ScienceUniversity of Illinois at Urbana-ChampaignUrbana,IL61801{nli,jhou,lrs}@Abstract—In this paper,we present a Minimum Spanning Tree (MST)based topology control algorithm,called Local Minimum Spanning Tree(LMST),for wireless multi-hop networks.In this algorithm,each node builds its local minimum spanning tree independently and only keeps on-tree nodes that are one-hop away as its neighbors in thefinal topology.We analytically prove several important properties of LMST:(1)the topology derived under LMST preserves the network connectivity;(2)the node degree of any node in the resulting topology is bounded by6;and (3)the topology can be transformed into one with bi-directional links(without impairing the network connectivity)after removal of all uni-directional links.These results are corroborated in the simulation study.I.I NTRODUCTIONTopology control and management–how to determine the transmission power of each node so as to maintain network connectivity while consuming the minimum possible power–has emerged to be one of the most important issues in wireless multi-hop networks[1].Instead of transmitting using the max-imum possible power,nodes in a wireless multi-hop network collaboratively determine their transmission power and define the topology of the wireless network by the neighbor relation under certain criteria.This is in contrast to the“traditional”network in which each node transmits using its maximum transmission power and the topology is built implicitly by routing protocols(that update their routing caches as timely as possible)[2][3]without considering the power issue. Not until recently has the issue of topology/power control with respect to maintaining network connectivity,optimizing network spatial reuse,and mitigating MAC-level interference attracted much attention.The importance of topology control lies in the fact that it critically affects the system performance in several ways. For one,as shown in[4],it affects network spatial reuse and hence the traffic carrying capacity.Choosing too large a power level results in excessive interference,while choosing too small a power level results in a disconnected network. 
Power control also effects the energy usage of communication, thus impacts on battery life,a critical resource in many mobile applications.In addition,topology control also impacts on contention for the medium.Collisions can be mitigated as much as possible by choosing the smallest transmission power subject to maintaining network connectivity[5][6].Several topology control algorithms[5],[7]–[9]have been proposed to create a power-efficient network topology in wireless multi-hop networks with limited mobility.We will summarize the existing work in Section II.Some of the algorithms require explicit propagation channel models(e.g., [9]),while others incur significant message exchanges(e.g., [5]).Their ability to maintain the topology in the case of mobility is also rather limited.In this paper,we propose a Minimum Spanning Tree(MST) based topology control algorithm,called Local Minimum Spanning Tree(LMST),for multi-hop wireless networks with limited mobility.The topology is constructed by each node building its local MST independently(with the use of infor-mation locally collected)and only keeping one-hop on-tree nodes as neighbors.The contributions of this paper include:(i) the topology constructed under LMST preserves the network connectivity,(ii)the degree of any node in the resulting topology is bounded by6;and(iii)the resulting topology can be converted into one with only bi-directional links(after removal of uni-directional links).Feature(ii)is desirable because a small node degree reduces the MAC-level contention and interference.The capability of forming a topology that consists of only bi-directional links is important for link level acknowledgments,and critical for packet transmissions and retransmissions over the unreliable wireless medium.Bi-directional links are also important for the medium access control mechanisms such as RTS/CTS in IEEE802.11.The rest of the paper is organized as follows.The related work isfirstly summarized in Section II.Then we present the LMST algorithm in Section III,and prove its properties: preservation of network connectivity,bound on the node degree,and construction of topology with only bi-directional links,in Section IV.The frequency to update the topology in case of limited mobility is determined under a probabilistic model in Section IV.Finally,we present a simulation-based performance study in Section V,and conclude the paper in Section VI.II.R ELATED W ORKAs mentioned in the previous section,several topology control algorithms have been proposed in the literature,among which the relay-region and enclosure-based approach[9],CBTC(α)[7],COMPOW[5],and CONNECT[8]may have received the most attention.Several broadcast/multicast al-gorithms for ad-hoc wireless networks([10][11][12][13] [14])have also attempted to maintain some type of overlay topology,upon which a multicast tree/mesh can be built.The issue of constructing an overlay topology to facilitate multicast tree/mesh building is outside the scope of this paper.Relay-region and enclosure-based approach(R&M): Rodoplu et al.[9]introduced the notion of relay region and enclosure for the purpose of power control.For any node i that intends to transmit to node j,node j is said to lie in the relay region of a third node r,if node i will consume less power when it chooses to relay through node r instead of transmitting directly to node j.The enclosure of node i is then defined as the union of the complement of relay regions of all the nodes that node i can reach by using its maximal transmission 
power.It is shown that the network is strongly connected if every node maintains links with the nodes in its enclosure and the resulting topology is a minimum power topology.A two-phase distributed protocol was then devised tofind the minimum power topology for a static network.In thefirst phase,each node i executes local search tofind the enclosure graph.This is done by examining neighbor nodes which a node can reach by using its maximal power and keeping only those do not lie in the relay regions of previously found nodes.In the second phase,each node runs the distributed Bellman-Ford shortest path algorithm upon the enclosure graph,using the power consumption as the cost metric.When a node completes the second phase,it can either start data transmission or enter the sleep mode to conserve power.To deal with limited mobility,each node periodically exe-cutes the distributed protocol tofind the enclosure graph.This algorithm assumes that there is only one data sink(destination) in the network,which may not hold in practice.Also,an explicit propagation channel model is needed to compute the relay region.CONNECT and its extension:Ramanathan et al.[8]pre-sented two centralized algorithms to minimize the maximum power used per node while maintaining the(bi)connectivity of the network.CONNECT is a simple greedy algorithm that iteratively merges different components until only one remains.Augmenting a connected network to a bi-connected network is done by BICONN-AUGMENT,which uses the same idea as in CONNECT to iteratively build the bi-connected network.In addition,a post-processing phase can be applied to ensure per-node minimality by deleting redundant connections. Two distributed heuristics are introduced for mobile net-works.In LINT,each node is configured with three parameters -the“desired”node degree d d,a high threshold d h on the node degree,and a low threshold d l.Every node will periodically check the number of active neighbors and change its power level accordingly,so that the node degree is kept within the thresholds.LILT further improves LINT by overriding the high threshold when the topology change indicated by the routing update results in undesirable connectivity.Both CONNECT and BICONN-AUGMENT are centralizedalgorithms that requires global information,thus cannot bedirectly deployed in the case of mobility.On the other hand,the proposed heuristics LINT and LILT cannot guarantee thepreservation of the network connectivity.COMPOW:Narayanaswamy et al.[5]developed a powercontrol protocol,called COMPOW.The authors argued thatif each node uses the smallest common power required tomaintain the network connectivity,the traffic carrying capacityof the entire network is maximized,the battery life is extended,and the contention at the MAC layer is reduced.In COMPOWeach node runs several routing daemons in parallel,one foreach power level.Each routing daemon maintains its ownrouting table by exchanging control messages at the specifiedpower level.By comparing the entries in different routingtables,each node can determine the smallest common powerthat ensures the maximal number of nodes are connected.Specifically,let N(P i)denote the number of entries in the routing table corresponding to the power level P i.Then theadequate power level for data packets is simply set to thesmallest power level P i for which N(P i)=N(P max).The major drawback of COMPOW is its significant messageoverhead,since each node runs multiple daemons,each ofwhich has to exchange link state information with the 
coun-terparts at other POW also tends to use higherpower in the case of unevenly distributed nodes.Finally,sincethe common power is collaboratively determined by the allnodes inside the network,global reconfiguration is required inthe case of node joining/leaving.CBTC(α):The work that comes closest to our work isCBTC(α)[7].CBTC(α)is a two-phase algorithm in whicheach nodefinds the minimum power p such that transmittingwith p ensures that it can reach some node in every coneof degreeα.The algorithm has been analytically shown topreserve the network connectivity ifα<5π/6.It has alsoensured that every link between nodes is bi-directional.Several optimizations to the basic algorithm are also dis-cussed,which include:(i)a shrink-back operation can beadded at the end to allow a boundary node to broadcast withless power,if doing so does not reduce the cone coverage;(ii)ifα<2π/3,asymmetric edges can be removed whilemaintaining the network connectivity;and(iii)if there existsan edge from u to v1and from u to v2,respectively,the longeredge can be removed while preserving connectivity,as longas d(v1,v2)<max{d(u,v1),d(u,v2)}.An event-driven strategy is proposed to reconfigure thenetwork topology in the case of mobility.Each node is notifiedwhen any neighbor leaves/joins the neighborhood and/or theangle changes.The mechanism used to realize this requiresstate to be kept at,and message exchanges among,neighboringnodes.The node then determines whether it needs to rerun thetopology control algorithm.Other power-efficient topology control work:There alsoexists work in generating power-efficient topology in wirelessnetworks.By following a probabilistic approach,Santi etal.derived the suitable common transmission range whichpreserves network connectivity,and established the lower and upper bounds on the probability of connectedness[6].In[15], a“backbone protocol”is proposed to manage large wireless ad hoc networks,in which a small subset of nodes is selected to construct the backbone.In[16],a method of calculating the power-aware connected dominating sets was proposed to establish an underlying topology for the network.III.T HE MST-B ASED T OPOLOGY C ONTROL A LGORITHM In this section,wefirst outline a set of guidelines for devising topology control algorithms.Then we present a distributed topology control algorithm called LMST(Local Minimum Spanning Tree).A.Design GuidelinesThe following guidelines are essential to an effective topol-ogy control algorithm:1)The network connectivity should be preserved withthe use of minimal possible power.This is the most important objective of topology control algorithms. 
2)The algorithm should be distributed.This is due to thefact that there is,in general,no central authority in a wireless multi-hop network,thus each node has to make its decision based on the information it has collected from the network.3)To be less susceptible to the impact of mobility,thealgorithm should depend only on the information col-lected locally,e.g.,information collected within one hop.Algorithms that depend only on local information also incur less message overhead/delay in the process of collecting information.4)It is desirable that all links are bi-directional.As men-tioned in Section I,bi-directional links facilitate link-level acknowledgment,proper operation of the RTS/CTS mechanism,and ensures existence of reverse paths[5].5)It is also desirable that the node degree in the topologyderived under the algorithm is small.A small node degree may help to mitigate the well known hidden and exposed terminal problems,1as there will not be so many nodes that have to be silenced in a communication activity.B.The LMST AlgorithmTo facilitate discussion of the proposed algorithm,wefirst define the following terms.We denote the network topology constructed under the common maximum transmission range d max as an undirected simple graph G=(V,E)in the plane, where V is the set of nodes in the network and E={(u,v): d(u,v)≤d max,u,v∈V}is the edge set of G.A unique id (such as an IP/MAC address)is assigned to each node.For 1The hidden terminal problem refers to the situation in which a station is hidden when it is within the transmission range of the intended receiver node of the packet but out of the range of the sender node,where the exposed terminal problem refers to the situation in which a station is exposed when it is within the transmission range of the sender node,but out of the range of the receiver.notational simplicity,we denote id(v i)=i.We also define the Visible Neighborhood NV u(G)of node u as follows.Definition1(Visible Neighborhood):The visible neighbor-hood NV u(G)is the set of nodes that node u can reach by using the maximum transmission power,i.e.,NV u(G)= {v∈V(G):d(u,v)≤d max}.For each node u∈V(G), let G u=(V u,E u)be an induced subgraph of G such that V u=NV u.The proposed algorithm is composed of the following three phases:information collection,topology construction, and determination of transmission power,and an optional optimization phase:construction of topology with only bi-directional edges.We assume that the propagation channel is symmetric and obstacle-free,and each node is equipped with the ability to gather its location information via,for example, GPS for outdoor applications and pseudolite[17]for indoor applications.1)Information Exchange:The information needed by each node u in the topology construction process is the information of all nodes in NV u(G).This can be obtained by having each node broadcast periodically a Hello message using its maximal transmission power.The information contained in a Hello message should at least include the node id and the position of the node.These periodic messages can be sent either in the data channel or in a separate control channel.2 The time interval between two broadcasts of Hello messages depends on the level of nodal mobility,and will be determined by the probabilistic model to be introduced in Section IV-B.2)Topology Construction:After obtaining the information of visible neighborhood NV u(G),each node u applies Prim’s Algorithm[18]independently to obtain its local minimum spanning tree T u=(V(T u),E(T u))of G 
u. Note that the time complexity of Prim's algorithm is O(n log n + e log n) = O(e log n), where n is the number of nodes and e is the number of edges in G_u. This can be improved using Fibonacci heaps to O(e + n log n) [19]. Two points are worth mentioning here. Firstly, to build a power-efficient minimum spanning tree, the weight of an edge should be the transmission power between the two nodes. As power consumption is, in general, of the form c·d^r, r ≥ 2, i.e., a strictly increasing function of the Euclidean distance, it suffices to use the Euclidean distance as the weight function. The same minimum spanning tree will result. Secondly, the minimum spanning tree derived under Prim's algorithm may not be unique if there exist multiple edges with the same weight. The uniqueness is necessary for the proof of connectivity, thus we refine the weight function as follows:

Definition 2 (Weight Function): Given two edges (u1, v1) and (u2, v2), the weight function d': E → R is defined as:
d'(u1, v1) > d'(u2, v2) ⇔ d(u1, v1) > d(u2, v2)
or (d(u1, v1) = d(u2, v2) && max{id(u1), id(v1)} > max{id(u2), id(v2)})
or (d(u1, v1) = d(u2, v2) && max{id(u1), id(v1)} = max{id(u2), id(v2)} && min{id(u1), id(v1)} > min{id(u2), id(v2)}).

The weight function d' guarantees that in each step of Prim's algorithm, the choice of the minimum weight edge is unique; thus the local minimum spanning tree T_u constructed by node u is unique. After node u builds a minimum spanning tree to span its visible neighborhood, it will determine its neighbors. To facilitate discussion, we define the Neighbor Relation and the Neighbor Set:

Definition 3 (Neighbor Relation and Neighbor Set): Node v is a neighbor of node u's, denoted u→v, if and only if (u, v) ∈ E(T_u). u↔v if and only if u→v and v→u. That is, node v is a neighbor of node u's if and only if node v is on node u's minimum spanning tree, T_u, and is "one hop" away from node u. The neighbor set N(u) of node u is N(u) = {v ∈ V(G_u): u→v}.

The neighbor relation defined above is not symmetric, i.e., u→v does not necessarily imply v→u. Figure 1 gives such an example. There are altogether 6 nodes, V = {u, v, w1, w2, w3, w4}, where d(u, v) = d < d_max, d(u, w4) < d_max, d(u, w_i) > d_max, i = 1, 2, 3, and d(v, w_j) < d_max, j = 1, 2, 3, 4. Since NV_u = {u, v, w4}, it can be obtained from T_u that u→v and u→w4. Also NV_v = {u, v, w1, w2, w3, w4}, thus v→w1. Here we have u→v but not v→u. The network topology under LMST is all the nodes in V and their individually perceived neighbor relations (note that it is not a simple superposition of all local MSTs).

Definition 4 (Topology G0): The topology, G0, derived under LMST is a directed graph G0 = (V0, E0), where V0 = V, E0 = {(u, v): u→v, u, v ∈ V(G)}.

3) Determination of Transmission Power: Assume that the maximal transmission power is known and is the same for all nodes. By measuring the receiving power of Hello messages, each node can determine the specific power levels it needs to reach its neighbors. (Each node can also piggyback its location information in data packets to reduce the number of Hello exchanges.) This approach can be applied to any propagation channel model. In what follows, we first describe two common propagation models, and then explain how we determine the transmission power.

In the Free Space propagation model, the relation between the power used to transmit packets, P_t, and the power received, P_r, is given as

P_r = P_t · G_t · G_r · λ² / ((4πd)² · L),  (1)

where G_t is the antenna gain of the transmitter, G_r is the antenna gain of the receiver, λ is the wavelength, d is the distance between the antenna of the transmitter and that of the receiver, and L is the system loss. In the Two-Ray Ground propagation model, the relation between P_t and P_r is

P_r = P_t · G_t · G_r · h_t² · h_r² / (d⁴ · L),  (2)

where G_t is the antenna gain of the transmitter, G_r is the antenna gain of the receiver, h_t is the antenna height of the transmitter, h_r is the antenna height of the receiver, d is the distance between the antenna of the transmitter and that of the receiver, and L is the system loss. In general, the relation between P_t and P_r is of the following form:

P_r = P_t · G,  (3)

where G is a function of G_t, G_r, h_t, h_r, λ, d, α, L, and is time-invariant if all the above parameters are time-invariant. At the information exchange stage, each node broadcasts its position using the maximal transmission power P_max. When node A receives the position information from node B, it measures the receiving power P_r and obtains G:

G = P_r / P_max.  (4)

Henceforth node A needs to transmit using at least P_th / G = P_th · P_max / P_r so that node B can receive its messages, where P_th is the power threshold to correctly understand the message. A broadcast to all neighbors requires a power level that can reach the farthest neighbor. Here we introduce the notion of Radius:

Definition 5 (Radius of Node u): The radius, r_u, of node u is defined as the distance between node u and its farthest neighbor (in terms of Euclidean distance), i.e., r_u = max{d(u, v): v ∈ N(u)}.

4) Construction of Topology with Only Bi-Directional Edges: As illustrated in Figure 1, some links in G0 may be uni-directional. As mentioned in Section III-A, it is desirable to obtain network topologies consisting of only bi-directional edges. There are two possible solutions: (i) to enforce all the uni-directional links in G0 to become bi-directional; or (ii) to delete all the uni-directional links in G0. We term the two new topologies G0+ and G0−, respectively. Specifically, each node u probes each of its neighbors in the neighbor set N(u) to find out whether or not the corresponding edge is uni-directional, and in the case of a uni-directional edge, either deletes the edge (for G0−) or notifies its neighbor to add the reverse edge (for G0+). In Section IV, we will prove that both new topologies preserve the desirable properties of G0. There exists a trade-off between the two choices: the latter gives a comparatively simpler topology, and hence is more efficient in terms of spatial reuse, while the former keeps more routing redundancy.

IV. THEORETICAL BASE OF LMST

In this section, we state and prove several desirable properties of the network topology derived by LMST. We also determine, with the use of a probabilistic model, how often the neighborhood information should be exchanged and the topology should be updated.

A. Properties of LMST

Definition 8 (Cone): As shown in Figure 2, a cone (u, α, v) is the region in the plane that lies between OA and OB, where ∠COA = ∠COB = α/2.

1) Degree Bound: It has been observed that any minimum spanning tree of a finite set of points in the plane has a maximum node degree of six [20]. We prove this property (which will serve as the base for the proof of Theorem 3) independently in the context of topology control.

Lemma 1: Given three nodes u, v, w ∈ V(G0) satisfying d'(u, v) > d'(u, w) and d'(u, v) > d'(v, w), then u ↛ v.

Proof: If d(u, v) > d_max, then u ↛ v. Thus, we only
Thus node w j cannot reside inside Cone(u,2π/3,w i).That is,as shown in Figure3,node u cannot have neighbors other than node w1inside Cone(u,2π/3,w1).By induction on the rank of nodes in N(u),the maximal number of neighbors that u can have is no greater than6,i.e.,deg(u)≤6.Note that Figure3also depicts the only scenario in which deg(u)=6 occurs.In wireless multi-hop networks,one observation is that less node degree usually results in less contention and interfer-ence.The degree bound obtained in Theorem1can be very important to scheduling algorithms.As matter of fact,several TDMA-based scheduling algorithms have been proposed to maximize the spatial reuse and minimize frame length[21] [22],most of which require that the maximum degree must be bounded.2)Network Connectivity:We prove that the topology,G0, derived under LMST preserves the network connectivity of G.For any two nodes u,v∈V(G0),node u is said to be connected to node v(denoted u⇔v)if there exists a path (w0=u,w1,...,w m−1,w m=v)such that w j↔w j+1,j= 0,1,···,m−1,where w k∈V(G0),k=0,1,···,m.It follows that u⇔w if u⇔v and v⇔w.Lemma2:For any node pair[u,v],u,v∈V(G0),if d(u,v)≤d max then u⇔v.Proof:For all the node pairs[u,v]satisfying d(u,v)≤d max and u,v∈V(G0),sort them in the increasing order of d (u,v),i.e.,d (u1,v1)<d (u2,v2)<···<d (u l,v l). We prove by induction on the rank of the node pairs in theordering.1)Basis :k =1,the first pair [u 1,v 1]satisfies d (u 1,v 1)min u,v ∈V (G 0){d (u,v )}and d (u 1,v 1)≤d max .u ↔v ,which means u ⇔v .2)Induction :Assume Lemma 2holds for all [u i ,v i ],i =1,2,···,k −1.Now we prove also holds for the node pair [u k ,v k ].We consider cases:•Case 1:u k ↔v k ,which implies u k ⇔v k .•Case 2:Either u k v k or v k u k ,or Assume u k v k ,without loss of Since v k ∈NV u k ,there exists a unique p =(w 0=u k ,w 1,w 2,···,w m −1,w m =from node u k to node v k ,where (w i ,w i +1)E (T u k ),i =0,1,···,m −1.Since T u k is unique minimum spanning tree of G u k ,we d (w i ,w i +1)<d (u k ,v k );otherwise we can struct another spanning tree with a less weight,replacing edge (w i ,w i +1)with (u k ,v k )and all the other edges in T u k unchanged.Applying induction hypothesis to each pair [w i ,w i +1],i 0,1,···,m −1,we have w i ⇔w i +1,thus u k ⇔Theorem 2:G 0preserves the connectivity of G ,i.e.,G 0connected if G is connected.Proof:Suppose G is connected.We prove by contra-diction that G 0derived under LMST is a strongly connected graph.Assume G 0is not strongly connected.Among all the node pairs [u,v ]satisfying u v ,there exists a node pair with the minimum distance,i.e.,we can find [v 0,v 1]such that d (v 0,v 1)=min u,v ∈V (G 0){d (u,v ):u v }.Since G is connected,d (v 0,v 1)≤d max .By Lemma 2,v 0⇔v 1,which leads to the contradiction.3)G +0and G −0Preserve Properties of G 0:G +0is an undirected graph,thus all the edges are bi-directional.Sinceall the links in G 0are preserved in G +0,it follows that G +0preserves the connectivity of G 0.Now we prove that the degree of any node in G +0is also bounded by 6.Notice that this is not a simple property of the MST because G +0may not be an MST due to those edges added.Theorem 3:The degree of any node in G +0is bounded by6,i.e.,deg (u )≤6,∀u ∈V (G +0).Proof:For any node u ∈V (G +0),denote N +(u )={(u,v )∈E (G +0)}.We prove by contradiction that if v ∈N +(u )in G +0,there does not exist any other node w ∈N +(u )that lies inside Cone (u,2π/3,v ).Assume that such a node w ∈N +(u )exists.We consider four cases:•Case 1:u →v ,u →w in G 0.This is proved in Theorem 
1.•Case 2:u →v ,u w ,but w →u in G 0.We have d (u,w )>d (u,v );otherwise d (u,v )>d (v,w )and d (u,v )>d (u,w ),which implies u v by Lemma 1.Thus,d (u,w )>d (v,w )and d (u,w )>d (u,v ),which implies w v by Lemma 1.This contradicts the assumption that w →u .contradicts the assumption that w ∈Cone (u,2π/3,v ).•Case 4:u v but v →u ,and u w but w →u in G 0.As shown in Figure 4,we have d (u,w )>d (u,v );otherwise d (u,v )>d (v,w )and d (u,v )>d (u,w ),which implies u v by Lemma 1.Thus d (u,w )>d (v,w )and d (u,w )>d (u,v ),which implies w v by Lemma 1.Now we have proved that there does not exist any neighbor other than v that lies inside Cone (u,2π/3,v )in G +ing the same arguments as in Theorem 1,it is easy to see that the maximal number of neighbors that u can have is no greater than 6,i.e.,deg (u )≤6.Since G −0is derived from G 0by deleting uni-directional links,it is easy to see that the degree of any node in G −0is also bounded by 6.We now prove that G −0preserves the connectivity of G .Theorem 4:G −0preserves the connectivity of G ,i.e.,G −0is connected if G is connected.Proof:If a node pair [u,v ],u,v ∈V (G 0)satisfies d (u,v )≤d max ,by Lemma 2,there exists a path p =(w 0=u,w 1,w 2,...,w m −1,w m =v )such that w j ↔w j +1,j =0,1,···,m −1,where w k ∈V (G 0),k =0,1,···,m .The same result holds for G −0since all links in p are bi-directional and the removal of uni-directional links does not affect the existence of such a path.Following the same line of argument as presented in Theorem 2,one can prove that G −0preserves the connectivity of G .B.Determination of Information Exchange PeriodWe determine the time interval between two information exchanges (i.e.,two broadcasts of Hello messages)by a probabilistic model with the following assumptions:。
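To make the topology-construction phase of Section III concrete, the sketch below builds, for a single node u, the local minimum spanning tree of its visible neighborhood with Prim's algorithm, using the tie-breaking weight function of Definition 2, and then extracts the neighbor set N(u) and the radius r_u. The coordinates, node ids, and d_max value are made-up inputs; this is an illustration of the procedure described above, not the authors' implementation.

```python
import math

D_MAX = 10.0   # common maximum transmission range (assumed)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def heavier(pos, e1, e2):
    """True if edge e1 is heavier than e2 under Definition 2: compare by
    Euclidean distance, then by max node id, then by min node id."""
    key = lambda e: (dist(pos[e[0]], pos[e[1]]), max(e), min(e))
    return key(e1) > key(e2)

def local_mst_neighbors(u, pos):
    """Phase 2 of LMST for node u: run Prim's algorithm over the visible
    neighborhood NV_u and keep the on-tree nodes that are one hop from u."""
    nv = [v for v in pos if dist(pos[u], pos[v]) <= D_MAX]   # NV_u, includes u
    in_tree, edges = {u}, []
    while len(in_tree) < len(nv):
        best = None
        for a in in_tree:                                     # candidate crossing edges of G_u
            for b in nv:
                if b in in_tree or dist(pos[a], pos[b]) > D_MAX:
                    continue
                e = (a, b)
                if best is None or heavier(pos, best, e):
                    best = e
        if best is None:          # visible neighborhood not connected within d_max
            break
        in_tree.add(best[1])
        edges.append(best)
    neighbors = {b for a, b in edges if a == u} | {a for a, b in edges if b == u}
    radius = max((dist(pos[u], pos[v]) for v in neighbors), default=0.0)
    return neighbors, radius

if __name__ == "__main__":
    positions = {1: (0.0, 0.0), 2: (4.0, 1.0), 3: (8.0, 0.5), 4: (3.0, 7.0), 5: (20.0, 20.0)}
    print(local_mst_neighbors(1, positions))    # node 5 lies outside NV_1 and is ignored
```

Running the procedure at every node, and keeping each node's one-hop on-tree nodes as its outgoing links, yields the directed topology G0 described in Definition 4.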
