Table of Contents
Table of contents entry required by RSC

In the submission requirements of the RSC (Royal Society of Chemistry), the Table of Contents entry refers to a list of the section and key paragraph headings of a paper.
It usually appears at the beginning of the paper so that readers can quickly grasp the paper's content and structure.
When writing a paper, the headings of the individual sections and key paragraphs should be organized into a concise list, arranged in an order that follows the paper's logical structure and relative importance.
Each heading can be accompanied by the corresponding page number so that readers can quickly locate the content they are interested in.
The RSC submission requirements specify formatting and layout conventions for the Table of Contents, such as font, font size, line spacing, and alignment.
For the specific formatting requirements, consult the RSC author guidelines or contact the editorial office.
In short, the Table of Contents is an important part of a paper: it helps readers quickly understand its content and structure and makes reading more efficient.
It should therefore be prepared carefully and follow the relevant conventions and requirements.

Proposal: Floating-Point Numbers in SmalltalkDavid N. SmithIBM T J Watson Research Center30 Saw Mill River RoadHawthorne, NY 10598914-784-7716Email: dnsmith@17 November 1996Table Of ContentsTable Of Contents1Summary1Floating-Point Numbers1Where We Are Today1IEEE Standard2Other Float Formats2This Proposal2Smalltalk Support for Floating-Point Numbers2Classes3Lengths3Exceptions3Special Values Testing4Literals4Conversions6Floating-Point Memory Areas6Printing7Constants8Rounding9Machine Parameters10Library Issues11Portability of Extensions11References11From July 1996 X3J20 Working Draft12Implementation Notes for 64-Bit Platforms13SummaryThis is a proposal for extending the floating-point number support in Smalltalk. It proposes full sup-port for IEEE floating-point numbers of three lengths, and support for exceptions, literals, conver-sions, constants, printing, and library additions and operability. The support of formats other thanIEEE is briefly considered.Note: An earlier version of this proposal was presented at the OOPSLA ’96 Workshop on Smalltalk Ex-tensions in San Jose, California, in October 1996.Floating-Point NumbersWhere We Are TodayThe draft ANSI Smalltalk standard [ANSI] proposes (August ’96) three lengths of floats. (See ’From July1996 X3J20 Working Draft’ on page 12.) While this is a welcome step, additional work is necessary tomake Smalltalk fully support floating-point numbers at a level required for serious scientific and en-gineering computation.Floating-point Numbers in Smalltalk, David N. Smith Page 1IEEE StandardThe IEEE floating-point standards are [IEEE85] and [IEEE87]. The 1987 standard and the 1985 standard define the same binary floating-point standard; the 1987 standard adds decimal floating-point for usein calculators. Both IEEE standards define four floating-point number sizes, as shown in Table 1: ’For-mats of IEEE Floating-Point Numbers’.The single and double widths are commonly implemented and supported. Double extended with 128bits is supported on some platforms.The 32-bit and 64-bit floating-point formats, from [IEEE85] are:The ’s’ field is the sign bit; the exponent field is a biased exponent, and the fraction field is the binaryfractional-part of the number less the leading 1-bit.Other Float FormatsVirtually all new hardware designs support IEEE floating-point number formats. The days of roll-your-own are gone. However, some older system designs still exist and do not use IEEE formats. These in-clude IBM S/390, for which a Smalltalk implementation exists. S/390 supports three widths, a 32-bitsingle width, a 64-bit double width, and a 128-bit extended (double-double) width. It is hexadecimalbased rather than binary.This ProposalThis proposal is intended to suggest areas in which Smalltalk needs additional support in order to pro-vide proper floating-point number support.Smalltalk Support for Floating-Point NumbersIEEE standard 754 and 854 arithmetic can be exploited in several ways in numerical software, providedthat proper access to the arithmetic’s features is available in the programming environment. Unfortu-nately, although most commercially significant floating point processors at least nearly conform to theIEEE standards, language standards and compilers generally provide poor support (exceptions includethe Standard Apple Numerics Environment (SANE), Apple’s PowerPC numerics environment, and Sun’sSPARCstation compilers).--- [Higham96] Page 492Smallt alk should support t hree lengt hs of float ing-point values, single, double, and an ext endedlength. 
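As an aside (this sketch is not part of the original proposal), the IEEE field layout described earlier, a sign bit, a biased exponent, and a fraction with an implicit leading 1-bit, can be inspected directly by reinterpreting the 64-bit pattern of a double as an integer. Python is used here purely for brevity, and the helper name is illustrative:

```python
import struct

def ieee754_double_fields(x):
    """Illustrative helper: split an IEEE 754 double into sign, biased exponent, fraction."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]  # raw 64-bit pattern
    sign = bits >> 63                   # 1-bit sign field ('s' above)
    exponent = (bits >> 52) & 0x7FF     # 11-bit biased exponent (bias 1023)
    fraction = bits & ((1 << 52) - 1)   # 52-bit fraction; leading 1-bit is implicit
    return sign, exponent, fraction

print(ieee754_double_fields(1.0))    # (0, 1023, 0)
print(ieee754_double_fields(-2.5))   # (1, 1024, 1125899906842624)
```

Packing with ">f" and unpacking with ">I" gives the corresponding 1/8/23-bit split of the 32-bit single format.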
It should completely support the IEEE standard, when IEEE format numbers are available, butnot exclude other formats where present.are shown in the four white data columns. Double-double is used by Apple on the PowerPC Mac-intosh [Apple96]; it consists of two double precision values taken together as a single 128 bit num-ber. Quad is an example of a 128 bit double extended format from one hardware implementation.s exponent fraction 182432-bit widths:1115264-bit widths:There are some problems with matching a language specification to hardware, and with compatibilitywith all existing Smalltalk systems. Since most existing Smalltalk systems don't do floats well, I've onlytried to provide some conversion path; that is, one might need to run a filter on the source of existingcode on some systems when imported into an ANSI Smalltalk system.ClassesFloating-point classes in this document are assumed to be the following:<realNumber>Float No instances, but has class methodsFloatSingleFloatDoubleFloatExtendedEach class should always be present. When a precision is not supported, the class should support thenearest precision. For example, on a plat form wit h no ext ended precision support, FloatExtended would be equivalent to FloatDouble, and on a platform with no single precision support, FloatSingle would be equivalent to FloatDouble. This allows development of code on one platform that can be runon another platform with different hardware.Class Float is suggested as a named superclass of the other classes in order to have a common class towhich inquiry messages are sent.Note that the current ANSI draft has a different class hierarchy.LengthsMachines which support only two lengths of floating-point number must make a choice as to whichshould be called which. If the IEEE floating-point standard is used, then the decisions are straight for-ward.FloatSingle would be single; it takes 4 bytes. FloatDouble would be double; it takes 8 bytes. FloatExtend-ed would be double extended, double-double, or quad, if any are present. If extended is missing, thenan implementation should substitute double, with or without a warning.On machines not implementing IEEE floating-point numbers, similar choices should be made usingthese guidelines:•Lengths which are similar to IEEE short should be short.•Lengths which are similar to IEEE double should be double.•Lengths which are significantly longer than double should be extended.Some fictitious machine might have two lengths of floats: 8 bytes and 16 bytes. Smalltalk should sup-port double and extended, forcing all single values to double.ExceptionsFloating-point arithmetic requires error detection support for overflow, underflow, and others. It mayalso require a way to test a result without raising an (expensive) exception. For example, one shouldbe able to write:xy := x div: y onUnderflow: 0.0d0.and/or:xy := x div: y onUnderflow: [ :result | 0.0d0 ].This should not generate any kind of underflow condition, but should simply answer 0.0d0 if an un-derflow occurs. The IEEE standard defines a number of conditions which may nor may not need toraise an exception depending on how they are computed. There are five exceptions; possible messagesfor performing operations and catching these exceptions is:Floating-point Numbers in Smalltalk, David N. 
Smith Page 3x operation: y onUnderflow: resultOrBlockx operation: y onOverflow: resultOrBlockx div: y onZeroDivide: resultOrBlockx operation: y onInvalid: resultOrBlockx operation: y onInexact: resultOrBlockThese correspond to what IEEE calls trap handlers. When a block is specified the result of the opera-tion is passed as a parameter. Note that not all operations can raise all exceptions.If no handler is specified, the default is to proceed with no exception. A set of five flags is maintainedwhich shows if one of the exceptions has occurred. It is reset only on request. Messages to test and setthese flags might be:Float exceptionStatus Answer a value holding all five flagsFloat exceptionStatus:Set the statusFloat clearExceptionStatus Clear all status flagsThe value answered by exceptionStatus is an integer with a value in the range 0 to 2r11111.Bit Mask Exception Message to answer the mask Message to answer the index 12r00001Invalid Float invalidExceptionMask Float invalidExceptionIndex 22r00010ZeroDivide Float zeroDivideExceptionMask Float zeroDivideExceptionIndex 32r00100Overflow Float overflowExceptionMask Float overflowExceptionIndex 42r01000Underflow Float underflowExceptionMask Float underflowExceptionIndex 52r10000Inexact Float inexactExceptionMask Float inexactExceptionIndex The actual bit values and masks may be implementation defined; their values are for illustration.On non-IEEE platforms, these status flags should be simulated when possible and feasible, but not theextent that performance is adversely affected.ExamplesPerform some action if a zero divide exception has occurred:(Float exceptionStatus bitAt: Float zeroDivideExceptionIndex) = 1 ifTrue: [ ... ].Clear the zero divide exception status bit:Float exceptionStatus: (Float exceptionStatus clearBit: Float zeroDivideExceptionMask )Special Values TestingSome IEEE result s include ’numbers’ which represent (signed) infinit y, not a number (NaN), andsigned zero. Tests to detect these are needed:x isNaNx isNotaNumberx isInfinitex isFinitex isNegativeInfinityx isPositiveInfinityx isZerox notZerox isPositiveZerox isNegativeZeroWhile some of these are simple comparisons with standardized values, others, in particular NaN, isnot. These should answer an appropriate value on hardware which does not support the test. (See also’Constants’ on page 8.)Some method of enquiring about the floating point support should be present. At least the followingshould be present:Float isIEEELiteralsSimple float literals have a period, but no exponent.1.2 3.14159272 12345.678901234567890Floating-point Numbers in Smalltalk, David N. Smith Page 5Short float literals have a period and an ’ e ’ indicating an exponent.1.2e0 3.14159272e0 1.2345678901234567890e4Note that last value may loose many digits of precision since it is forced to be short.Double float literals have a period and a ’d ’ indicating an exponent. 1.2d0 3.14159272d0 1.2345678901234567890d4Extended float literals have a period and an ’ x ’ (or ’ q ’ as in the draft standard) indicating an exponent. 1.2x0 3.14159272x0 1.2345678901234567890x4Simple floats are short, unless they contain enough digits that a longer size is needed to representthem. Thus:The value:12345.678901 is probably not equal to:12345.678901e0, but is probably equal to: 12345.678901d0.Since the value has more digits than a short float most likely has, it is made into a double.1 However, the size specified by the ' e ', ' d ', or ' x ' is always honored even if digits must be discarded. 
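The width sensitivity of literals described above is easy to demonstrate outside Smalltalk as well. The following sketch is not from the proposal; it uses Python, with a struct round trip standing in for a single-precision literal, to show that 12345.678901 survives intact in a double but loses digits when forced into a single. The helper name is illustrative:

```python
import struct

def as_single(x):
    """Illustrative helper: round a double to the nearest IEEE single, then widen it back."""
    return struct.unpack(">f", struct.pack(">f", x))[0]

value = 12345.678901               # fits a double, but has too many digits for a single
print(value == as_single(value))   # False
print(repr(as_single(value)))      # 12345.6787109375  (digits discarded, as with 'e0')
print(repr(value))                 # 12345.678901      (kept, as with 'd0')
```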
It is assumed that the programmer knows what she is doing in such cases, and such control over constantsis needed when writing certain kinds of library code. (I write more places than some platforms allowso that I'm assured of having enough places for the one with the largest size.)RadixRadix floats:16r1.2d15The value and exponent are hexadecimal 2r1.110110d1011 Both are in binaryRadix specification of floats is rarely needed but when it is needed, such as in building floating pointlibrary routines, it is critical to have it. Some existing systems allow radix floats, but have decimal ex-ponents. This proposal uses the same radix for exponents and uses the radix as the exponent base:fraction * (radix raisedToInteger: exponent)Thus:Base 2 and 16 can be used to specify the bits of constants precisely. This isFloatDouble pi : 16r3.243F6A8885A30d0For the implementer of basic algorithms, a way to specify the exact bits in an IEEE (or other) formatnumber should be available.16r400921FB54442D18 asIEEEFloatDouble "FloatDouble pi"IntegersExponents with no radix point should indicate integers, not floats:123e10 Should be: 1230000000000There is no good reason to make such numbers be floats, since it easy to add a radix point. On theother hand, there is no good way to write large and readable integer values unless exponents on inte-gers are allowed. (Note that some Smalltalk implementations support this today.)Literal Evaluated as Equivalent to2r1.0e12r1.0 * (2 raisedToInteger: 1) 2.016r1.0e116r1.0 * (16 raisedToInteger: 1)16.0ConversionsOther float lengthsOperations on floats of one length should produce results of the same length. Operations where thelength of one operand is longer than another should produce results of the length of the longer op-erand.Short-short:1.2e0 + 1.3e0produces:2.5e0Short-double:1.2e0 + 1.3d0produces:2.5d0Integers and DecimalsConversions from integer or decimal values to specified floating-point widths are:x asFloatSinglex asFloatDoublex asFloatExtendedCoercingNon-floating-point values should be coerced to the same length as the floating-point value whichforces the coercion. In this example, the integer 2 should be converted to a single precision value:1.0 + 2In cases where the other value will loose precision, such as in:1.0 + 1234567890123456789 " Produces a FloatSingle result"then the programmer should force a conversion to the appropriate floating-point precision:1.0 + 1234567890123456789 asFloatDouble " Produces a FloatDouble result "StringsConversions from string to numeric values should support all form of numeric literals and should fol-low the same rules as for compiling a literal with the same form. That is, given a numeric literal, itshould not matter whether it is compiled or converted from a string. The results should be equal.aString asNumber Convert a numeric literal in aString to a number; raise an ex-ception if aString does not contain a valid literal.aString asNumberIfError: aBlock Convert a numeric literal in aString to a number; if aString doesnot contain a valid literal, evaluate aBlock, passing the positionof the first character found to be invalid.Current Implementation. An implement at ion of asNumber can be found as readNumberString in [Smith95] pages 258-260. It converts floating point number literals to fractions, but only supports dec-imal radix.Floating-Point Memory AreasUsers of floating-point values frequently work with arrays of values. 
Implementing such arrays usingclass Array really provides an array of object pointers to floats stored elsewhere. This not only causesmemory usage to be much higher2, but imposes significant overhead in each floating-point number access. Further, such arrays cannot be readily passed to external routines, such as libraries written in FORTRAN.Smalltalk already has the concept of an object memory area, an area which holds many objects of a giv-en (primitive) class. Strings are a memory area for characters in which the ’raw’ value of a character is3stored in each element of the string, not an object pointer to a character.3.Thanks to Alan Knight for pointing this out with respect to floating-point values.In a similar fashion, Smalltalk needs to support floating-point memory. A new set of collection classesis needed.Object<somewhereAppropriate><floatingPointArray>FloatSingleArrayFloatDoubleArrayFloatExtendedArrayInstances of these classes need to have both the basic indexed collection protocol and an arithmeticprotocol, including both simple element-by-element operations as well as operations to support ma-trix computations. (Such operations need to be either inlined or done by a primitive on platforms thatcare about floating-point performance.) Operations on floating-point arrays need to be able to specifya target array:a plus:b into: cInstances should be stored in a way that is compatible with common scientific languages so that FOR-TRAN and/or C libraries can be called to directly operate on the values.MapsIn addition, there needs to be some way to map a floating-point array into a multi-dimensional ma-trix, possibly by simply specifying how indexing should be done. It should be easy to reference, with-out copying the values, a new floating-point array which represents some subpart, such as a columnor row.| fpa twoD row1 row2 row3 |fpa := FloatSingleArray new: 64.fpa atAllPut: 0.0.twoD := FloatMap on: fpa rows: 4.row1 := twoD at: 1." A reference to row 1; does not hold the data "row2 := twoD at: 2.row3 := twoD at: 3.row2 plus: row3 into: row1" Assigns all 16 values in row1 "The standard does not need to provide full matrix operations at this time, but does need to specify thelow level primitives from which more complex operations can be built.PrintingSupport for printing floating-point numbers is basically missing from most implementations; print-String usually produces something unacceptable: it has no ability to specify the width of the result,the precision of the result, or the format of the result. New floating point printing methods shouldallow a great degree of control over floating-point printing.There are two classes of methods proposed here: fixed width, used where the number of character po-sitions is known and specified, and variable width, used where the amount of precision required isknown and specified.Fixed Width PrintingFixed width printing prints values without an exponent whenever possible. In general, fixed widthprinting provides the most readable result with the maximum precision possible in the given width.aFloat printStringWidth: m Format aFloat into m characters.aFloat printStringWidth: m decimalPlaces: n Format aFloat into m characters, with n places to theright of the decimal point.Floating-point Numbers in Smalltalk, David N. 
Smith Page 7Variable Width PrintingaFloat printWithoutExponent Format to the full precision of the value and without an exponent; the result may be extremely long for verylarge or small exponents.0.0000000000012345678901234567aFloat printScientific Format with an exponent always present. 0.12345678901234567e-11aFloat printScientific: pSame, but precision limited to p decimal digits. 0.12345678e-11 for p=8.aFloat printEngineering Format with an exponent always present; the expo-nent is zero or a multiple of 3 or -3.12.345678901234567e-9aFloat printEngineering: pSame, but precision limited to p decimal digits. 12.345678e-9 for p=8.aFloat printMaximumPrecisionPrints the number to its full precision, so that the re-sulting string, if compiled, would produce a value equal to the receiver. (See [Burger96]). An exponent is present when required.aFloat printStringRadix: anInteger Format in radix anInteger with an exponent always present. Always provides the full precision of the val-ue. Radix 2 and 16 thus show the exact bits in the val-ue.416r1.DEALe0The standard printOn: method should answer the same value asprintMaximumPrecision .Current Implementation. An implemen t a tion of printScientific: (as printStringDigits:), printString-Width:, and printStringWidth:decimalPlaces: can be found in [Smith95] pages 260-273.ConstantsWhile it seems desirable to use a pool to hold floating point constants, there needs to be three differ-ent precisions, one for each floating-point class. It thus seems better to use class messages to fetch con-stant values.•FloatSingle pi •Float pi Not in the proposal; implementers can make it answer what it always did.•FloatDouble piOver180Answers: pi/180(Used internally in degressToRadians )•FloatDouble piUnder180Answers: 180/pi (Used internally in radiansToDegrees )•FloatExtended e •Special floating-point values, so that hand coded functions can answer the same kind of values ashardware operations, including:FloatDouble positiveInfinity FloatSingle negativeInfinity FloatExtended positiveZero FloatExtended negativeZero FloatSingle quietNaN FloatDouble signallingNaNFloatDouble signallingNaN: implementationDependentBitsresults of floating-point calculations which seem to answer unexpected results.Usage ExamplesMatching precisions. The correct precision of pi is selected to match the unknown precision of x: x * x class piNegative infinities. In some collection of floating-point numbers, negativeInfinity is used when finding the largest element. Note that it even could be said to work ’correctly’ when the collection is empty.maximumElement| max |max := self class negativeInfinity.self do: [ :element |element > max ifTrue: [ max := element ] ].^ maxUninitialized values. 
Initialize values in floating-point memory areas to signallingNan when they are created so that failure of a user to initialize elements can be detected.new: anInteger^ super newatAllPut: self class elementClass signallingNan;yourselfRoundingThe IEEE floating-point standard specifies four kinds of rounding:•Round to Nearest: This is the default; it produces the values nearest to the infinite-precision true result.Rounding of results can be directed to positive infinity, negative infinity, and zero.•Round to Positive Infinity: The value is equal to, or the next possible value greater than the infi-nite-precision true result.•Round to Negative Infinity: The value is equal to, or the next possible value less than the infinite-precision true result.•Round to Zero: For positive results, the value is equal to, or the next possible value less than the infinite-precision true result. For negative results, the value is equal to, or the next possible valuegreater than the infinite-precision true result.While most computations will use the default round-to-nearest mode, some computations use otherkinds of rounding. One example is interval arithmetic, in which two simultaneous calculations areperformed, one rounding to positive infinity and the other to negative infinity.Support for setting the rounding mode needs to be fast and simple, since it can be called frequently.One possibility is to add protocol to class Float:roundToNearest Set the rounding modesroundToPositiveInfinityroundToNegativeInfinityroundToZeroand:roundingMode Answer the current rounding moderoundingMode:Reset the rounding modeThe value answered by roundingMode is implementation dependent. Its values are defined by the ex-pressions:Float roundToNearestModeFloat roundToPositiveInfinityModeFloat roundToNegativeInfinityModeFloat roundToZeroModeFloating-point Numbers in Smalltalk, David N. Smith Page 9Convenience Methods. For convenience, class Float might provide protocol which saves the current rounding mode, sets a new mode, performs some operations, and restores the rounding mode: roundToNearest: aBlockroundToPositiveInfinity: aBlockroundToNegativeInfinity: aBlockroundToZero: aBlock}The basic operat ions, addit ion(+), subt ract ion (-), mult iplicat ion (*), and division (/) might have rounding versions (where the character • indicates one of the basic operations)a •b Round using the current rounding modea •~b Round to nearesta •>b Round to positive infinitya •<b Round to negative infinitya •=b Round to zeroFor example, (a *> b) would be functionally equivalent to:(Float roundToPositiveInfinity: [ a * b ])or:( [| mode result |mode := Float roundingMode.Float roundToPositiveInfinity.result := a * b.Float roundingMode: mode.result ] value )Example. This method is from a fictitious FloatDoubleInterval class:* anInterval| low high mode |mode := Float roundingMode.Float roundToNegativeInfinity.low := self lowest * anInterval lowest.Float roundToPositiveInfinityhigh := self highest * anInterval highest.Float rounding: mode.^ self class lowest: low highest: highMachine ParametersThere are a number of machine parameters which are suggested by various authors. See [Press92] and [Cody 88]. 
These include the floating point radix (2 for IEEE, 16 for S/390), number of digits in each width, largest and smallest floating-point numbers in each width, and others.These might be implemented as:FloatSingle digits On IEEE it produces: 24FloatDouble digits On IEEE it produces: 53Float base On IEEE it produces: 2FloatSingle base The sameFloatSingle maximumValue On IEEE it produces: 3.40e38FloatDouble maximumValue On IEEE it produces: 1.79e308FloatDouble guardDigitBits The number of bits of guard digits presentBut note these cases:Float digits An error; Float is abstractFloat maximumValue An error; Float is abstractSee [Press92] and [Cody 88] for more information and more parameters.Library IssuesISO 10967 Language Independent Arithmetic[ISO95] defines a number of library routines that should be supported in all languages. When the stan-dard is finished (it is now a draft) it will provide ’bindings’ for eight common languages (Ada, BASIC,C, Common Lisp, Fortran, Modula-2, Pascal, and PL/I), as the companion [ISO94] does for arithmetic operations.While Smalltalk is not defined by [ISO95], the recommendations of it should be followed as closely aspossible, at least in part since it is a significant attempt to standardize across languages.Most of the library is already a part of most Smalltalk implementations; while some functions aremissing, what is truly missing from Smalltalk is a precise definition of the functions, and a completeset of operations. (For example, the common arctan2 function is typically missing from Smalltalk im-plementations.)IEEE Library AdditionsThe Appendix to [IEEE87] lists a number of functions that languages should support on conforminghardware. These include copying the sign, next representable neighbor, test for infinite and NaN, andothers.Current Implementation. See [Cody93] for source for a C implementation.Portability of ExtensionsThe proposed ANSI Smalltalk standard indicates that subclasses and extensions to standard classes arenot portable.However, no provisions for numeric computation will ever be complete. Users will have extensionsand small vendors will market extensions. Since these extensions are quite necessary for the use ofSmallt alk in various scient ific, engineering, and financial areas, and since t he market s are alwayssmall, it is mandatory that the effort to port from one implementation to another not be extremelyhigh.The standard should specify that numeric classes be implemented in such a way as to assist such port-ability. The features that must be specified include:• A complete class hierarchy, with few if any abstract protocols.•Specification of coercion techniques and methods, including the ways in which certain kinds of numbers are determined to be more general than others.References[ANSI]American National Standards Institute draft Smalltalk Language Standard.[Apple96]’Chapter 2 - Floating-Point Data Formats’ in PowerPC Numerics, an HTML document: /dev/techsupport/insidemac/PPCNumerics/PPCNumerics-15.html#HEADING15-0[Burger96]Burger, Robert G, and R. Kent Dybvig. ’Printing Floating-Point Numbers Quickly andAccurately’ in Proceedings of the SIGPLAN ’96 Conference on Programming Language De-sign and Implementation. Also at:/hyplan/burger/FP-Printing-PLDI96.ps.gz[Cody88]Cody, W.J. ’Algorithm 665. MACHAR: A Subroutine to Dynamically Determine Ma-chine Parameters’, ACM Transactions on Mathematical Software, Vol. 14, No. 4, De-cember 1988, pp. 302-311. Software at:ftp:///netlib/toms/665.Z[Cody93]Cody, W.J. and J. T. Coonen. 
’Algorithm 722: Functions to Support the IEEE Standard for Binary Floating-Point Arithmetic’, ACM Transactions on Mathematical Software, Vol. 19, No. 4, pages 443-451, December 1993.
Table of contents example - a reply

Table of contents example: a reply to a question raised by a reader.
[Table of contents example] Article title: How to Create an Effective Table of Contents. Introduction: a table of contents is an organized structure that guides readers to specific parts of a long article or book.
This article explains in detail how to create an effective table of contents and provides an example to help readers understand it better.
Part 1: The purpose and importance of a table of contents
- Explain the role a table of contents plays in a document and why an effective one matters to readers.
- Emphasize that a good table of contents helps readers quickly locate the information they need and improves the reading experience.
Part 2: Steps for creating a table of contents
1. Decide what the table of contents should include: list all chapters, subsections, and other relevant information to determine its structure.
2. Create a heading for each chapter and subsection: give each one a meaningful title, and make sure the headings form a hierarchy that reflects how the content is organized.
3. Record the page numbers: determine the starting page of each chapter and subsection and add it to the table of contents.
4. Create the table of contents page: add a new page at the beginning of the document and build the table of contents on it.
Part 3: A table of contents example
Below is a sample table of contents that shows how it can serve as an effective reading guide:

Table of Contents
1. Introduction (1)
2. Part 1: The purpose and importance of a table of contents (2)
   2.1 The role of a table of contents in a document (2)
   2.2 The benefits an effective table of contents brings to readers (3)
3. Part 2: Steps for creating a table of contents (5)
   3.1 Decide what to include (5)
   3.2 Create headings for each chapter and subsection (6)
   3.3 Record the page numbers (7)
   3.4 Create the table of contents page (8)
4. Part 3: A table of contents example (10)

Conclusion: creating an effective table of contents helps readers browse and navigate a long document more easily.
By following the steps described in this article, readers can create a table of contents that is clearly structured and easy to use.
Using a table of contents to guide readers makes a document easier to understand and improves the reading experience.
Table of Contents

目录Table of Contents翻译的原则Principles of Translation中餐Chinese Food冷菜类Cold Dishes热菜类Hot Dishes猪肉Pork牛肉Beef羊肉Lamb禽蛋类Poultry and Eggs菇菌类Mushrooms鲍鱼类Ablone鱼翅类Shark’s Fins海鲜类Seafood蔬菜类Vegetables豆腐类Tofu燕窝类Bird’s Nest Soup羹汤煲类Soups主食、小吃Rice, Noodles and Local Snacks西餐Western Food头盘及沙拉Appetizers and Salads汤类Soups禽蛋类Poultry and Eggs牛肉类Beef猪肉类Pork羊肉类Lamb鱼和海鲜Fish and Seafood面、粉及配菜类Noodles, Pasta and Side Dishes面包类Bread and Pastries甜品及其他西点Cakes, Cookies and Other Desserts中国酒Chinese Alcoholic Drinks黄酒类Yellow Wine 白酒类Liquor 啤酒Beer葡萄酒Wine洋酒Imported Wines开胃酒Aperitif 白兰地Brandy威士忌Whisky金酒Gin朗姆酒Rum伏特加Vodka龙舌兰Tequila利口酒Liqueurs清酒Sake啤酒Beer鸡尾酒Cocktails and Mixed Drinks餐酒Table Wine饮料Non-Alcoholic Beverages矿泉水Mineral Water咖啡Coffee茶Tea茶饮料Tea Drinks果蔬汁Juice碳酸饮料Sodas混合饮料Mixed Drinks其他饮料Other Drinks冰品Ice•recipe 配方cookbook 菜谱ingredients 配料cook 烹调raw (adj.)生的cooked (adj.)熟的fried (adj.)油煎的fresh (adj.)新鲜的•cook 烹调bake 烘烤fry 油煎boil 煮沸broil 烤roast 烘烤simmer 炖,煨saute 煎炒•heat 加热cool 冷却freeze - froze 冻结melt 融化burn - burned / burnt 烧焦boil 煮沸•add 掺加include 包括remove 除去replace 代替mix 混合combine 结合stir 搅拌•spread 涂开sprinkle 撒slice切片 dice 切成块chop 剁,切细stuff 充填⏹烹饪方法英语:⏹shallow fry煎, shallow-fried 煎的, stir-fry 炒,deep fry 炸, toasted烤的(如面包),grilled 铁扒烤的,steam (蒸), stew/braise (炖,焖),boil(煮), roast/broil (烤), bake, smoke (熏), pickle (腌), barbecue (烧烤),翻译的原则一、以主料为主、配料为辅的翻译原则1、菜肴的主料和配料主料(名称/形状)+ with + 配料如:白灵菇扣鸭掌Mushrooms with Duck Webs2、菜肴的主料和配汁主料 + with/in + 汤汁(Sauce)如:冰梅凉瓜Bitter Melon in Plum Sauce二、以烹制方法为主、原料为辅的翻译原则1、菜肴的做法和主料做法(动词过去分词)+主料(名称/形状)如:火爆腰花Sautéed Pig Kidney2、菜肴的做法、主料和配料做法(动词过去分词)+主料(名称/形状)+ 配料如:地瓜烧肉Stewed Diced Pork and Sweet Potatoes3、菜肴的做法、主料和汤汁做法(动词过去分词)+主料(名称/形状)+ with/in +汤汁如:京酱肉丝Sautéed Shredded Pork in Sweet Bean Sauce三、以形状、口感为主、原料为辅的翻译原则1、菜肴形状或口感以及主配料形状/口感 + 主料如:玉兔馒头 Rabbit-Shaped Mantou脆皮鸡Crispy Chicken2、菜肴的做法、形状或口感、做法以及主配料做法(动词过去分词)+ 形状/口感 + 主料 + 配料如:小炒黑山羊Sautéed Sliced Lamb with Pepper and Parsley四、以人名、地名为主,原料为辅的翻译原则1、菜肴的创始人(发源地)和主料人名(地名)+ 主料如:麻婆豆腐Mapo Tofu (Sautéed Tofu in Hot and Spicy Sauce)广东点心Cantonese Dim Sum2、介绍菜肴的创始人(发源地)、主配料及做法做法(动词过去式)+ 主辅料+ + 人名/地名 + Style如:北京炒肝Stewed Liver, Beijing Style北京炸酱面Noodles with Soy Bean Paste, Beijing Style五、体现中国餐饮文化,使用汉语拼音命名或音译的翻译原则1、具有中国特色且被外国人接受的传统食品,本着推广汉语及中国餐饮文化的原则,使用汉语拼音。
Table of Contents

©S A N SI ns t i tu te 2003,A ut ho rr et a in sf u l l ri g h t s .GIAC Security EssentialsPractical Assignment Version 1.4bOnlineSubmitted by: Tan Koon Yaw©S A N SI ns t i tu te 2003,A ut ho rr et a in sf u l l ri g h t s .Table of ContentABSTRACT..........................................................................................................1 1. INTRODUCTION. (1)2. INITIAL RESPONSE (2)3. EVIDENCE GATHERING..............................................................................3 4. PROTECTING THE VOLATILE INFORMATION..........................................3 5. CREATING A RESPONSE TOOLKIT.. (4)6. GATHERING THE EVIDENCE (7)7. SCRIPTING THE INITIAL RESPONSE (15)8. IDENTIFICATION OF FOOTPRINTS (15)9. WHAT’S NEXT?..........................................................................................16 10.WRAPPING UP (16)REFERENCES (18)APPENDIX A (19)©S A N SI ns t i tu te 2003,A ut ho rr et a in sf u l l ri g h t s .Windows Responder’s GuideAbstractWhen a system encounters an incident, there is a need to handle the case properly to gather evidence and investigate the cause. Initial response is the stage where preliminary information is gathered to determine whether there is any breach of security and the possible causes if any. This paper provides the first responder guide to handle incident occur on a Windows platform system.In this paper, we will discuss what are the issues one needs to consider during the initial response stage. There are critical evidence that need to be protected and gathered during the initial response stage. We will hence discuss what are the tools that can be used to gather the necessary evidence and how to collect them appropriately. Finally, we will explore areas that one needs to look out for during the investigation on the evidence collected. 1. IntroductionWhen a system encounters an incident, the common reaction among most people will be to panic and jump straight into the system to find out the cause and hopefully try to get it back to normal working condition as soon as possible. Such knee-jerk reactions is especially so for systems supporting critical business operations. However, such actions may tamper with the evidence and even lead to a lost of information causing potential implications. This is especially critical if the recourse actions involve legal proceedings. Hence it is very important to establish a set of proper and systematic procedures to preserve all evidence during this critical initial response stage.Not every incident will lead to a full investigation or legal proceeding. However, in the event when a security breach has taken place, proper handling of the system is necessary. However, one should always bear in mind that different incidents might require different procedures to resolve.In most cases, not all systems can afford the downtime to carry a fullinvestigation before knowing the most possible cause. Initial response is the stage of preliminary information gathering to determine the probable causes and the next appropriate response. Responders should be equipped with the right knowledge on how and what information to collect without disrupting the services. During the initial response, it is also critical to capture the volatile evidence on the live system before they are lost.This paper will cover the initial response focusing on the windows platform, how and what evidence should be collected and analyzed quickly. 
We will begin the discussion on what is initial response, what are the potential issues need to be©S A N SI ns t i tu te 2003,A ut ho rr et a in sf u l l ri g h t s .considered, what to do and what not to do during the stage of initial response. To carry out the initial response successfully, the responder needs to prepare a set of tools to gather the evidence. We will list out some of the essential tools that a responder should be equipped and run through how and what evidence should be collected. This paper will not cover the forensic investigative analysis process. However, areas to look out for footprints of intrusion on the system will be discussed. 2. Initial ResponseInitial response is the stage where preliminary information is gathered todetermine whether there is any breach of security, and if so, to determine the possible breach and assess the potential impact. This will allow one to determine what is the next course of action, whether to let the system continues its operation or arrange for immediate isolation for a full investigation.During the initial response stage, the following questions (Who, What, When, Where, How) should be asked: • Who found the incident? • How was the incident discovered? • When did the incident occur? • What was the level of damage? • Where was the attack initiated? • What techniques were being used to compromise the system?There should be a well-documented policy and procedures on how different types of incidents should be handled. It is also important to understand thepolicies and response posture. The level of success to solve an incident does not depend only on the ability to uncover evidence from the system but also the ability to follow proper methodology during the incident response and evidence gathering stage.When one suspects a system is compromised, the natural question is to ask whether to bring the system offline, power off the system or let it remains. For a compromised system, do you intend to collect evidence and trace the attacker or just patch the system and life goes on? There is no right answer to this. It really depends on the organization business needs and response plan. For example, when one suspects the attacker is still on the system, you may not want to alert him/her by pulling the system offline immediately, but let the system remains and continue to monitor the his/her activities before taking appropriate actions.However, for system that contains sensitive information, there may be a need to pull the system offline immediately before incurring further damage.©S A N SI ns t i tu te 2003,A ut ho rr et a in sf u l l ri g h t s .3. Evidence GatheringElectronic media is easily manipulated, thus a responder needs to be carefulwhen handling evidence. The basic principles to keep in mind when gathering the evidence is to perform as little operations on the system as possible and maintain a detailed documentation on every single steps on what have been done to the system.Majority of the security incidents do not lead to civil or criminal proceedings. However, it is to the best interest of the organization to treat the incidents with the mindset that every action you take during incident response may later lead to legal proceeding or one day under the scrutiny of individuals who desire to discredit your techniques, testimony or basic finding skills.Maintaining a chain of custody is important. 
Chain of custody establishes arecord of who handle the evidence, how the evidence is handled and the integrity of the evidence is maintained.When you begin to collect the evidence, record what you have done and the general findings in a notebook together with the date and time. Use a tape recorder if necessary. Note that the system that you are working on could be rootkited.Keep in mind that there are things to avoid doing on the system: • Writing to the original media • Killing any processes • Meddling the timestamp • Using untrusted tools • Meddling the system (reboot, patch, update, reconfigure the system). 4. Protecting the Volatile InformationWhen the system is required to undergo the computer forensic process, it is necessary to shutdown the system in order to make bit-level image of the drive. There are discussions on how system should be shutdown, and we are not going to cover this in details here. However, by shutting down the system, a great deal of information will be lost. These are the volatile information, which include the running processes, network connection and memory content. It is therefore essential to capture the volatile information on the live system before they are lost.©S A N SI ns t i tu te 2003,A ut ho rr et a in sf u l l ri g h t s .The order of volatility is as follows: • Registers, cache contents • Memory contents • State of network connections • State of running processes • Contents of file system and hard drives • Contents of removable and backup mediaFor the first four content, the information are lost or modified if the system is shutdown or rebooted.Some of the volatile evidence that are important to gather are: • System date and time • Current running and active processes • Current network connections • Current open ports • Applications that listening on the open sockets • Current logon usersSuch volatile evidence is important, as it will provide the critical first hand information, which may make or break a case. In some cases, some hackers may have tools that run in memory. Gathering such evidence is therefore necessary as part of the initial response procedure. 5. Creating a Response ToolkitPreserving evidence and ensuring those evidence that you gather is correct is very important. There is a need to ensure the programs and tools that one uses to collect the evidence are trusted. Burning them into a CD-ROM media will be ideal to carry them around when responding to incidents. The responder should always be equipped with the necessary programs beforehand. This will shorten the response time and enable a more successful response effort.There are many tools available that can be used to gather evidence from the system. Below is a list of tools that you should minimally be equipped with. There could be more depending how much you wish to carry out prior to bit-level imaging of the media. The important is to harvest the volatile information first. Those residing on the media could still be retrieved during the forensic analysis on the media image.You need to ensure the tools that you used will not alter any data or timestamp of files in the system. It is therefore important to create a response disk that has allthe dependencies covered. The utility, filemon, could be used to determine the files being accessed and affected by each of the tool used.Below is the set of response tools you should prepare:© S A N SI n s t i tu te 2003,A ut ho rr et a in sf u l l ri g h t s .6. 
Gathering the EvidenceA critical question to ask someone when you encounter a live system is whether the system has been rebooted. It will be great news if the answer is no, but a yes reply is usually not a surprise.Albeit the system has been rebooted and caused some vital information to be lost, it is still a good practice to carry out the initial response steps to gather the evidence prior to shutting down the system, as you will never know there could still be some other footprints around.Step One: Open a Trusted Command ShellThe first step is to ensure all the tools are run from a trusted command shell.Initiate a command shell from the Start Menu. Run the trusted command prompt from the trusted tools from the CD you have prepared.All subsequent commands should then be run over this trusted shell.©S A N SI ns t i tu te 2003,A ut ho rr et a in sf u l l ri g h t s .Key fingerprint = AF19 FA27 2F94 998D FDB5 DE3D F8B5 06E4 A169 4E46 Step Two: Prepare the Collection SystemRemember that you should not write the evidence collected to the original media. A simple way is to write the data to a floppy disk. However, some of the evidence collected may exceed the disk space of the floppy disk. One simple way is to pipe the data over the network to your responder’s system. To do this, we could use the popular known “TCP/IP Swiss Army Knife” tool, netcat, to perform the job.The process of setting up the netcat is first by setting up the netcat listener on your responder’s system.D:\>nc -l -p 55555 >> evidence.txtThe above command open a listening port on your responder’s system and redirect anything received to evidence.txt. The switch -l indicates listening mode. The listener will close the socket when it receives data. To allow the listener to continue to listen harder after the first data is captured, use the -L switch instead. Thus, you can choose whether to create a new file for each command or appending all evidence gathered into one single file by using the appropriate switch. The switch -p allows you to select the port for the listener. You could choose any other port.When the listener is ready, you can start to pipe the evidence to the responder’s system by executing the following (assuming E Drive is the CD ROM Drive):E:\>nc <IP address of responder’s system> <port> -e <command>ORE:\<command> | nc <IP address of responder’s system> <port>For example, if you want to pipe the directory listing to the responder’s system (with IP address 10.1.2.3), you execute:E:\> nc 10.1.2.3 55555 -e dirORE:\dir | nc 10.1.2.3 55555Note that the evidence pipes through netcat is in clear. If you prefer to encrypt the channel (for example, you suspect there is a sniffer on the network), you can use cryptcat. Cryptcat is the standard netcat enhanced with twofish encryption. It is used in the same way as netcat. 
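If netcat is not available on the responder's workstation, the listener side can be improvised in a few lines of a scripting language. The sketch below is not part of the original toolkit; it is a rough Python equivalent of nc -L -p 55555 >> evidence.txt, intended to run on the responder's system (never on the suspect machine). The port and file name mirror the example above; the function name and HOST variable are illustrative:

```python
import socket

HOST, PORT = "0.0.0.0", 55555          # same port as in the netcat example above

def run_listener(outfile="evidence.txt"):
    """Append everything received on PORT to outfile, one connection at a time."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        while True:                     # keep listening, like the -L switch
            conn, addr = srv.accept()
            with conn, open(outfile, "ab") as out:
                out.write(("--- evidence from %s ---\n" % addr[0]).encode())
                while True:
                    chunk = conn.recv(4096)
                    if not chunk:
                        break
                    out.write(chunk)

if __name__ == "__main__":
    run_listener()
```

As with netcat, this channel is unencrypted; hash the resulting file (for example with md5sum) as soon as collection finishes so its integrity can be demonstrated later.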
Note that the secret is hardcoded to be "metallica" (use the -k option to change this key).u te 2003,A ut ho rr et a in sf u l l ri g h t s .Figure 1: Using netcat to collect evidenceStep Three: Collect Volatile EvidenceNow you can start running your toolkit to collect the volatile evidence.The necessary evidence to collect is: • Basic system information • Running processes • Open sockets • Network connections • Network shares • Network usersThe system date and time should be recorded before and after collecting the evidence.D:\>nc –l –p 55555 >> evidence.txt E:\>nc 10.1.2.3 55555 –e <command> E:\><command> | nc 10.1.2.3 55555o rr et a in sf u l l ri g h t s .Some of the evidence gathered may seem normal but when all the evidence are collected, they provide a good picture of the system. From there, one can trace the normal and unusual processes, connections and files occurring in the system.Step Four: Collect Pertinent LogsAfter gather the volatile information, the next thing is to gather the pertinent logs. While this information is not considered to be volatile and could be retrieved during the forensic investigation, getting these information will still be helpful to get the first hand knowledge of the cause. Note that bit-level image of the media could take a while and during this period, investigation can be started on these logs first.The pertinent logs to gather are: • Registry • Events logs • Relevant application logs©S A N SI ns t i tu te 2003,A ut ho rr et a in sf u l l ri g h t s .Note that an attacker can make use of the NTFS stream to hide files. For example, the following will allow the attacker to hide the file, hack_file.exe, in web.log.C:\> cp hack_file.exe web.log:hack_file.exeThe file size of web.log will not change. To identify stream file, use the streams command.To obtain the stream file, you just need to reverse the process:C:\> cp web.log:hack_file.exe hack_file.exe©S e 2003,A ut ho rr et a in sf ul l ri g h t s .Stream file can be executed by START command:C:\> start web.log:hack_file.exeEvent logs and other application logs are next to collect. They could be piped over to the responder’s system using the cat utility. The default locations are as follows: After the files are captured into the responder’s system, you should make a md5sum on the files to ensure the integrity of the files are not tampered when carry out subsequent investigation.Step Five: Perform additional network surveillanceWhere possible, it is good to monitor closely any connection to the system subsequently, especially if you suspect the attacker might return. Running a sniffer program on another system to monitor the network activities on that suspected system would be good.©S AN SI ns t i tu te 2003,A ut ho rr et a in sf u l l ri g h t s .7. Scripting the Initial ResponseThe commands used to gather the evidence can be written in a batch file. This will make the job of the responder easier and at the same time avoid mistyping the command. A simple way to create a script is to create a text file and give a .bat extension to it. This will give us a very neat way to collect evidence from the system. For example, we could key in the following as a single text file with file name ir.bat:8. Identification of FootprintsYou have now collected: • Basic system information • Running processes • Open sockets • Network connections • Network shares • Network users • Pertinent logsThe next step is to identify the footprints. 
During the review, one should look out for the following: • Check for hidden or unusual files • Check for unusual processes and open sockets • Check for unusual application requests • Examine any jobs running©S A N SI ns t i tu te 2003,A ut ho rr et a in sf u l l ri g h t s .• Analyze trust relationship • Check for suspicious accounts • Determine the patch level of the systemWhenever there is any suspicious observation, take note of the event andtimestamp. Correlate the event with other logs based on related files, processes, relationship, keywords and timestamp. The timestamp will also be useful tocorrelate with external logs such as the logs from firewall and intrusion detection system. Any suspected events should not be left out.If one is analyzing IIS records, note that it uses UTC time. This is supposed to help to synchronize when running servers in multiple time zones. Windows calculates UTC time by offsetting the value of the system clock with the system time zone. Take note of this when you correlate the entries of the IIS logs with timestamp of other logs.The Registry provides a good audit trail: • Find software installed in the past • Determine security posture of the machine • Determine DLL Trojan and startup programs • Determine Most Recently Used (MRU) Files information 9. What’s Next?Based on initial response finding, one should be able to determine the possible cause of the security breach and decide the next course of action whether to: • Perform a full bit-level imaging for full investigation; • Call the law enforcer; or • Get the system back to normal (reinstall, patch and harden the system).For bit-level disk image, there are tools out that that could perform an excellent job. Encase and SafeBack are two of the commercial tools that you could consider for image acquisition and restoration, data extraction, and computer forensic analysis. Another tool that you can consider is dd, which is free. dd is a utility that comes with most Unix platform. Now it has ported to Windows platform as well and you can get it at /.10. Wrapping UpIn the event of any incident, having a proper initial response plan and procedure is important to ensure the evidence gathered is intact and at the same time do not tamper the evidence as far as possible. Volatile information is critical to©S A N SI ns t i tu te 2003,A ut ho rr et a in sf u l l ri g h t s .protect and ensure they are collected first before they are lost. Sometimes such information may make or break a case.By having a good preparation to response to any security incidents will save a lot of time and effort in handling cases. Planning ahead is necessary for initial response. Never rush to handle an incident without any preparation.Having said all these, the next step after a good preparation is practice. The actions taken during the stage of initial response is critical. Do not wait for an incident to occur before you start to kick in your established plan, checklist and toolkit. Remember practice makes perfect.©S A N SI ns t i tu te 2003,A ut ho rr et a in sf u l l ri g h t s .ReferencesH. 
Carvey, “Win2K First Responder’s Guide”, 5 September 2002, URL: /infocus/1624Jamie Morris, “Forensics on the Windows Platform, Part One”, 28 January 2003, URL: /infocus/1661Stephen Barish, “Windows Forensics: A Case Study, Part One”, 31 December 2002, URL: /infocus/1653Stephen Barish, “Windows Forensics - A Case Study: Part Two”, 5 March 2003, URL: /infocus/1672Mark Burnett, “Maintaining Credible IIS Log Files”, 13 November 2002, URL: /infocus/1639Norman Haase, “Computer Forensics: Introduction to Incident Response and Investigation of Windows NT/2000”, 4 December 2001, URL: /rr/incident/comp_forensics3.phpLori Willer, “Computer Forensics”, 4 May 2001, URL: /rr/incident/comp_forensics2.phpKelvin Mandia and Chris Prosise, “Incident Response: Investigating Computer Crime”, Osborne/McGraw-Hill, July 2001, ISBN: 0-07-213182-9//////©S A N SI ns t i tu te 2003,A ut ho rr et a in sf u l l ri g h t s .Appendix AFigure A-1: envFigure A-2: psinfo©S A N SI ns t i tu te 2003,A ut ho rr et a in sf u l l ri g ht s .Figure A-3: psuptimeFigure A-4: net startFigure A-5: pslist©S A N SI ns t i tu te 2003,A ut ho rr et a in sf u l l ri g h t s .Figure A-6: pulistFigure A-7: psservice©S A N SI ns t i tu te 2003,A ut ho rr et a in sf u l l ri g h t s .Figure A-8: listdllsFigure A-9: fport©S A N SI ns t i tu te 2003,A ut ho rr et a in sf u l l ri g h t s .Figure A-10: Last Access TimeFigure A-11: Last Modification Time©S A N SI ns t i tu te 2003,A ut ho rr et a in sf u l l ri g h t s .Figure A-12: Last Create TimeFigure A-13: hfind。
Table of Contents

Booting Linux from DiskOnChip HOWTORohit Agarwal<rohdimp_24@>Vishnu Swaminathan<Vishnu.Swaminathan@>20060907Revision HistoryRevision 1.02006−09−07Revised by: MGLast review for LDP publicationThis document discusses how to make the Flash Drives Linux bootable. We will describe how to boot from such a drive, instead of from the normal hard drive.1. Introduction (1)1.1. Why this document? (1)1.2. NFTL vs. INFTL (1)1.3. Practical goals (2)2. Reference configuration (3)3. Assumptions (4)4. Using M−Systems DiskOnChip 2000 TSOP as an additional storage drive in Linux (5)4.1. Step 1: Patch the Kernel (5)4.2. Step 2: Compile the Kernel (6)4.3. Step 3: Create Nodes (8)4.4. Step 4: Reboot with the new kernel (8)4.5. Step 5: Insert M−Systems Driver/Module in the new Kernel (9)4.6. Step 6: Create a filesystem on the DiskOnChip (9)4.7. Step 7: Mount the newly created partition to start accessing DOC (9)5. Install Linux and LILO on DiskOnChip (11)5.1. Step 1: Copying the DOC firmware onto DiskOnChip (11)5.2. Step 2: Format DiskOnChip using Dos Utilities (12)5.3. Step 3: Patch and Compile the kernel 2.4.18 (12)5.4. Step 4: Create nodes (12)5.5. Step 5: Modify the /etc/module.conf file (12)5.6. Step 6: Create the initrd image (13)5.7. Step 7: Insert the DOC driver into the new kernel (14)5.8. Step 8: Create a filesystem on the DiskOnChip (15)5.9. Step 9: Build Root Filesystem on the DiskOnChip (15)5.10. Step 10: Use rdev to specify the DOC root filesystem location to kernel image (16)5.11. Step 11: Compile lilo−22.3.2 (16)5.12. Step 12: Copy the boot.b file into boot directory of DOC (17)5.13. Step 13: Modify the /etc/lilo.conf file (18)5.14. Step 14: Store the new LILO configuration on the DiskOnChip (18)5.15. Step 15: Modify etc/fstab of DiskOnChip root file system (19)5.16. Step16: Update Firmware (19)5.17. Step17: BOOT from DiskOnChip (19)6. Install Development ToolChain on DiskOnChip (20)6.1. Step1: Obtain the latest copy of root_fs_i386.ext2 (20)6.2. Step2: Replace the root filesystem of the DiskOnChip (20)6.3. Step3: Modify etc/fstab of DiskOnChip root file system (21)6.4. Step4: Reboot (21)7. References (22)A. Output of dinfo (23)B. License (24)C. About Authors (25)D. Dedications (26)1. Introduction1.1. Why this document?DiskOnChip (DOC) is a flash drive that is manufactured by M−Systems. The use of flash drives is emerging as a substitute for Hard Disks in embedded devices. Embedded Linux is gaining popularity as the operating system of choice in the embedded systems community; as such, there is an increased demand for embedded systems that can boot into Linux from flash drives.Much of the documentation currently available on the subject is either incorrect or incomplete; the presentation of the information which is provided by such documents is likely to confuse novice users. 1.2. NFTL vs. INFTLAnother fundamental problem is that most of the documents assume the DiskOnChip to be a NFTL (NAND Flash Translation Layer) device, and proceed to describe the booting process for NFTL devices. DiskOnChip architectures come in two variants, each of which requires different booting procedures: NFTL and INFTL (Inverse NFTL). Dan Brown, who has written a boot loader known as DOCBoot, explains the differences between these variants in a README document, which is included with the DOCBoot package:/pub/people/dwmw2/mtd/cvs/mtd/docboot/.An INFTL device is organized as follows:IPLMedia HeaderPartition 0 (BDK or BDTL)(Optional) Partition 1(BDK or BDTL).. 
Up to at most Partition 3Under Linux MTD partitions are created for each partition listed in the INFTL partition table. Thus up to 5 MTD devices are created.By contrast the NFTL device is organized as follows:FirmwareMedia HeaderBDTL DataUnder Linux, normally two MTD devices will be created.Booting Linux from DiskOnChip HOWTOAccording to the above excerpt, the process used by the boot loader when fetching the kernel image for an INFTL device is different from the method used for NFTL devices, since both devices have different physical layouts. (repetitive)Using a 2.4.x kernel for an INFTL DiskOnChip device is complicated by the lack of native support inpre−2.6.x kernels (although native NFTL support is present). Such functionality is only available by patching the kernel; an approach which is ill−advised.Patching the kernel with external INFTL support is discouraged; the developers of the MTD driver, the open source driver available for DiskOnChip, are apprehensive of this approach as well. For more information on this matter, feel free to peruse the mailing list conversation on the subject at/pipermail/linux−mtd/2004−August/010165.html.The drivers that provide native INFTL support in the 2.6.x kernels failed to identify the DiskonChip device used for this exercise, and the following message was reported by the system:INFTL no longer supports the old DiskOnChip drivers loaded via docprobe.Please use the new diskonchip driver under the NAND subsystem.So then we decided to use the drivers provided by M−Systems (manufacturer of DiskOnChip). However, according to the documentation provided by the vendor on these drivers, they were designed for NFTL devices only. As such, we decided to write this HOWTO which will address the use of INFTL devices. We have taken special care to remove any ambiguity in the steps and also tried to give reasons for the need of a particular step so as to make things logically clear. We have explained things in such a way that a person with less experience on Linux can also follow the steps.1.3. Practical goalsThis document aims to act as a guide to:•Use M−Systems DiskOnChip 2000 TSOP as an additional storage drive along with an IDE HDDrunning Linux on it.Install Linux on DiskOnChip 2000 TSOP and boot Linux from it.•Install the Development Tool−Chain so as to compile and execute programs directly on DiskOnChip.•The method described here has been tested for DiskOnChip 2000 TSOP 256MB and DiskOnChip 2000 TSOP 384MB.2. Reference configurationWe used the following hard− and software:1.VIA Eden CPU 1GHz clock speed 256MB RAM2.RTD Enhanced Phoenix − AwardBIOS CMOS Setup Utility (v6.00.04.1601)Kernel 2.4.18 source code downloaded from /pub/linux/kernel/v2.43.4.256 MB M−Systems DiskOnChip 2000 TSOP (MD2202−D256)5.M−Systems TrueFFS Linux driver version 5.1.4 fromhttp://www.m−/site/en−US/Support/SoftwareDownload/Driver+Download.htm?driver=linux_binary.5_1_46.LILO version 22.3.2 (distributed with driver)7.DiskOnChip DOS utilities version 5.1.4 and BIOS driver version 5.1.4 fromhttp://www.m−/site/en−US/Support/SoftwareDownload/TrueFFS5.x/BIOSDOSdriverandtools.htm8.Dual bootable Hard Disk with Knoppix 3.9 and Windows XP using Grub 0.96 as the Boot Loader9.GNU GCC−2.95.3Latest root_fs_i386 image from /downloads/root_fs_i386.ext2.bz2 or10./downloads/root_fs_i386.ext2.tar.gz3. 
3. Assumptions
We have made some assumptions related to working directories and mount points which we would like to mention before listing the entire procedure for putting Linux on the DiskOnChip.
• We will perform all compilation in /usr/src of the host machine, so the necessary files must be downloaded into that directory.
• All the commands listed are executed assuming /usr/src as the present working directory.
• We will mount the DiskOnChip partition on /mnt/doc.
• The names of the directories will be exactly the same as the files that have been downloaded, so the document gives the actual paths as they were created on the host system.
• DiskOnChip and DOC are used interchangeably to mean the M-Systems DiskOnChip 2000 TSOP.
• The DOS utilities have been downloaded and saved in a Windows partition directory.
4. Using M-Systems DiskOnChip 2000 TSOP as an additional storage drive in Linux
The following are the steps performed for this purpose.
4.1. Step 1: Patch the Kernel
Download a fresh copy of kernel 2.4.18 from /pub/linux/kernel/v2.4.
The kernel downloaded from the site does not have support for the M-Systems driver, so we need to add this functionality. This is done by applying a patch to the kernel.
The steps to conduct the patching are as follows:
1. Untar the kernel source file and the M-Systems TrueFFS Linux driver version 5.1.4. If the source code is in .tar.gz format, use
tar -xvzf linux-2.4.18.tar.gz
If the source code is in .tar.bz2 format, use
bunzip2 linux-2.4.18.tar.bz2
After using bunzip2, you will get a file named linux-2.4.18.tar. Untar it using the command
tar -xvf linux-2.4.18.tar
Unarchiving the driver is done using the command
tar -xvzf linux_binary.5_1_4.tgz
This results in the creation of two directories: linux and linux_binary.5_1_4.
2. The TrueFFS Linux driver package contains three different folders:
♦ Documentation: this contains a PDF document describing the various functions of TrueFFS.
♦ dformat_5_1_4_37: this contains a utility, dformat, which is used to update the firmware on the DiskOnChip (DOC) and to create low-level partitions on the DOC.
♦ doc-linux-5_1_4_20: this contains patches, initrd scripts and other utilities.
3. Now apply the patch to the kernel. We will use the linux-2_4_7-patch file that is present in linux_binary.5_1_4/doc-linux-5_1_4_20/driver. The following commands are used for this purpose:
cd linux_binary.5_1_4/doc-linux-5_1_4_20/driver
patch -p1 -d /usr/src/linux < linux-2_4_7-patch
This will create a directory named doc in the linux/drivers/block directory.
The patch created the doc directory, but did not copy into it the source files of the M-Systems driver, which are necessary in order to build the driver. So execute the following command:
cp linux_binary.5_1_4/doc-linux-5_1_4_20/driver/doc/* /usr/src/linux/drivers/block/doc
4. Kernel version: the patch will fail for kernels other than 2.4.18, since the source files to which the patch applies may differ between kernel versions. The patch has been provided specifically for kernel 2.4.18.
Before moving on to Step 2, do the following:
• Log in as root.
• Make sure that the gcc version is 2.95.3, else the build will fail. Use gcc --version to check this. If your gcc version is different, compile gcc-2.95.3. Refer to .columns/20020316 for this purpose.
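For convenience, here is the whole of Step 1 collected into one copy-and-paste sequence. This is purely a recap of the commands above and assumes both archives were downloaded to /usr/src:
cd /usr/src
tar -xvzf linux-2.4.18.tar.gz     # or: bunzip2 linux-2.4.18.tar.bz2 && tar -xvf linux-2.4.18.tar
tar -xvzf linux_binary.5_1_4.tgz
cd linux_binary.5_1_4/doc-linux-5_1_4_20/driver
patch -p1 -d /usr/src/linux < linux-2_4_7-patch
cd /usr/src
cp linux_binary.5_1_4/doc-linux-5_1_4_20/driver/doc/* /usr/src/linux/drivers/block/doc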
4.2. Step 2: Compile the Kernel
Complete the following tasks to compile the kernel:
1. cd linux
make menuconfig
Check for the following options:
♦ In the "Block devices" menu, select:
◊ M-Systems driver as module, i.e. (M)
◊ Loopback device support as built-in, i.e. (*)
◊ RAM disk support as built-in, i.e. (*)
◊ Initial RAM disk (initrd) support as built-in, i.e. (*)
♦ In the "Processor type and features" menu, select "Disable Symmetric Multiprocessor Support".
♦ In the "File systems" menu, select:
◊ Ext3 journaling file system support as built-in
◊ DOS FAT fs support as built-in (a)
◊ MSDOS fs support as built-in (b)
◊ VFAT (Windows-95) fs support as built-in (c)
File System Menu: options (a), (b) and (c) should be activated if you want to mount your MS Windows partition, else they can be left out. It is, however, generally recommended to use them.
An excellent resource on kernel compilation is the Kernel Rebuild Guide.
2. The configuration file linux/.config should essentially contain the following lines (only a part of the config file is shown):
#
# Loadable module support
#
CONFIG_MODULES=y
CONFIG_MODVERSIONS=y
CONFIG_KMOD=y
#
# Processor type and features
#
# CONFIG_SMP is not set
#
# Memory Technology Devices (MTD)
#
# CONFIG_MTD is not set
#
# Block devices
#
# CONFIG_BLK_DEV_FD is not set
# CONFIG_BLK_DEV_XD is not set
# CONFIG_PARIDE is not set
# CONFIG_BLK_CPQ_DA is not set
# CONFIG_BLK_CPQ_CISS_DA is not set
# CONFIG_BLK_DEV_DAC960 is not set
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_NBD is not set
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_SIZE=4096
CONFIG_BLK_DEV_INITRD=y
CONFIG_BLK_DEV_MSYS_DOC=m
#
# File systems
#
# CONFIG_QUOTA is not set
# CONFIG_AUTOFS_FS is not set
# CONFIG_AUTOFS4_FS is not set
CONFIG_EXT3_FS=y
CONFIG_FAT_FS=y
CONFIG_MSDOS_FS=y
# CONFIG_UMSDOS_FS is not set
CONFIG_VFAT_FS=y
# CONFIG_EFS_FS is not set
# CONFIG_JFFS_FS is not set
# CONFIG_JFFS2_FS is not set
# CONFIG_CRAMFS is not set
CONFIG_TMPFS=y
# CONFIG_RAMFS is not set
CONFIG_ISO9660_FS=y
# CONFIG_JOLIET is not set
# CONFIG_HPFS_FS is not set
CONFIG_PROC_FS=y
# CONFIG_DEVFS_FS is not set
# CONFIG_DEVFS_MOUNT is not set
# CONFIG_DEVFS_DEBUG is not set
CONFIG_DEVPTS_FS=y
# CONFIG_QNX4FS_FS is not set
# CONFIG_QNX4FS_RW is not set
# CONFIG_ROMFS_FS is not set
CONFIG_EXT2_FS=y
3. make dep
4. make bzImage
5. make modules
6. make modules_install
7. Copy the newly created bzImage to the /boot directory and name it vmlinuz-2.4.18, using this command:
cp arch/i386/boot/bzImage /boot/vmlinuz-2.4.18
Check for lib/modules/2.4.18/kernel/drivers/block/doc.o. This is the M-Systems driver that we need to access the DiskOnChip.
4.3. Step 3: Create Nodes
Now we will create the block devices that are required to access the DOC. These block devices will use the M-Systems driver that was built in Section 4.2 to access the DOC. The script mknod_fl in linux_binary.5_1_4/doc-linux-5_1_4_20/driver is used for this purpose.
We need to create the block devices with a major number of 62. For this purpose we pass the argument 62 while creating the nodes:
./mknod_fl 62
This will create the following devices in /dev/msys with major number 62:
fla ... fla4
flb ... flb4
flc ... flc4
fld ... fld4
4.4. Step 4: Reboot with the new kernel
In order to have the DiskOnChip recognized by Linux, we need to insert the DOC driver module into the kernel. Since the currently running kernel doesn't have support for the M-Systems driver, we need to boot into the new kernel we just compiled in Step 2.
For this purpose we need to add the following entry to the /boot/grub/menu.lst file:
title Debian GNU/Linux, Kernel 2.4.18
root (hd0,7)
kernel /boot/vmlinuz-2.4.18 root=/dev/hda8
savedefault
boot
Here (hd0,7) is the partition holding the kernel image vmlinuz-2.4.18 and /dev/hda8 is the partition holding the root filesystem. These partitions may vary from one system to another.
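If you are unsure which values to use on your own machine, standard tools will show them; a quick check (the device names below are only examples, not part of the original procedure):
df /boot            # filesystem holding the kernel image, e.g. /dev/hda8 maps to GRUB's (hd0,7)
df /                # filesystem holding the root directory (the root= value for the kernel line)
fdisk -l /dev/hda   # full partition table of the first IDE disk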
Now reboot and choose the kernel 2.4.18 option (the kernel that was compiled in Step 2) in the GRUB menu to boot into the new kernel.
4.5. Step 5: Insert the M-Systems Driver/Module into the new Kernel
The M-Systems driver by default gets loaded with major number 100, but our newly created nodes (see Section 4.3) have a major number of 62. Therefore we need to insert this module with a major number of 62. This can be done in either of two ways:
1. While inserting the module using insmod, also specify the major number that should be assigned to the module, otherwise it will take the default major number of 100:
insmod doc major=62
2. Add the following line to /etc/modules.conf:
options doc major=62
Then use modprobe doc to insert the module.
Check for the correct loading of the module using the lsmod command without options.
4.6. Step 6: Create a filesystem on the DiskOnChip
Before we can start using the DiskOnChip we need to create a filesystem on it. We will create an ext2 filesystem since it is small in size.
This involves a hidden step of making partitions on the DOC using fdisk. The actual steps are as follows:
1. fdisk /dev/msys/fla
This command will ask to create partitions. Create a primary partition number 1 with start cylinder 1 and final cylinder 1002.
Check the partition table, which should look like this:
Device Boot Start End Blocks Id System
/dev/msys/fla1 1 1002 255984 83 Linux
2. Make the filesystem on /dev/msys/fla1 with the command
mke2fs -c /dev/msys/fla1
where fla1 is the first primary partition on the DOC. (We have created only one partition in order to avoid unnecessary complexity.)
4.7. Step 7: Mount the newly created partition to start accessing the DOC
Create a new mount point for the DiskOnChip in the /mnt directory:
mkdir /mnt/doc
Mount the DOC partition on the newly created directory:
mount -t auto /dev/msys/fla1 /mnt/doc
You will now be able to read and write to the DOC as an additional storage drive.
When you reboot your system, make the DOC available by inserting the driver into the kernel (see Section 4.5) and mounting the device.
5. Install Linux and LILO on DiskOnChip
In this section we will learn how to install the Linux operating system on an unformatted DOC and boot from it using LILO as the boot loader.
To get to this state, a procedure will be discussed. Some steps in this procedure resemble steps discussed previously in this document. Even so, this should be considered a separate procedure, rather than a continuation of the steps in Section 4.
In general, for a device to boot into Linux, it should have the following components:
• Kernel image
• Root filesystem
• Boot loader to load the kernel image into memory
This section will fulfill these three requirements. The following steps should be followed to achieve the goal of this section.
5.1. Step 1: Copying the DOC firmware onto the DiskOnChip
We will use the dformat utility from linux_binary.5_1_4/dformat_5_1_4_37.
M-Systems does not provide the firmware for using the DOC on Linux platforms. We address this problem by making a copy of the firmware shipped with the M-Systems DOS utilities into this directory ("dos utilities" is the term used by M-Systems, so we have also used this name).
On our system we copied it by mounting the Windows partition and extracting it from there:
mount -t auto /dev/hda5 /mnt/d
cp /mnt/d/dos\ utilities/doc514.exb linux_binary.5_1_4/dformat_5_1_4_37/
Now format the drive, using the dformat utility from linux_binary.5_1_4/dformat_5_1_4_37/:
cd linux_binary.5_1_4/dformat_5_1_4_37/
./dformat -WIN:D000 -S:doc514.exb
D000 specifies the address of the DiskOnChip in the BIOS.
The following is the BIOS (RTD Enhanced Phoenix - AwardBIOS CMOS Setup Utility, v6.00.04.1601) setting on our system. The Integrated Peripherals section of the BIOS menu should have:
SSD Socket #1 to Bios Extension
Bios Ext. Window size 8k
Bios Ext. window [D000:0000]
Fail safe Boot ROM [Disabled]
The Bios Ext. Window denotes the address of your DiskOnChip.
BIOSes: the setting may be different depending upon your BIOS version.
Now shut down the system and boot into Windows XP. From now on you will notice the TrueFFS message and some delay before the GRUB menu appears.
5.2. Step 2: Format the DiskOnChip using the DOS Utilities
Boot into Windows XP. We will use the M-Systems DOS utilities for formatting the DiskOnChip. The DOS utility dformat will copy the firmware to the DOC and then format it as a FAT16 device.
Using the command prompt, run the following command from the DOS utilities folder (assuming that you have already downloaded the DOS utilities):
dformat /WIN:D000 /S:doc514.exb
Check the DOC partition using another utility called dinfo. A sample dinfo output is given in the appendix. Again, shut down the system and boot back into Linux.
Always shut down: after formatting you should always do a full shutdown (power off) and not just a reboot.
Even though Step 1 and Step 2 seem to be the same, the only difference being that Step 1 is done from Linux and Step 2 from Windows XP, they both have to be done.
5.3. Step 3: Patch and compile the kernel 2.4.18
This has to be performed in exactly the same manner as described in Section 4.1 and Section 4.2. Also add an entry for the new kernel in /boot/grub/menu.lst as described in Section 4.4.
5.4. Step 4: Create nodes
This is done using the same procedure as described in Section 4.3.
5.5. Step 5: Modify the /etc/modules.conf file
The file /etc/modules.conf has to be modified, adding this line at the end of the file:
options doc major=62
This is required since our nodes use a major number of 62, while the doc driver module uses a major number of 100. When creating the initrd image, the driver will be loaded with a major number of 100 (instead of 62) if you do not edit the module configuration file. This will make it impossible for the nodes to use the driver. The reason for using the initrd image will be explained in the next step.
The mkinitrd_doc script from linux_binary.5_1_4/doc-linux-5_1_4_20/driver reads the /etc/modules.conf file and looks for anything that has been specified for the DOC driver regarding the major number. By default, mkinitrd_doc will create an initrd image that loads the DOC module with a major number of 100. However, with the modification we have made to the /etc/modules.conf file, the initrd image will load the module with a major number of 62.
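Before building the initrd image in the next step, it is worth confirming that the option is really in place. A simple check (the expected output is exactly the line added above):
grep doc /etc/modules.conf
# expected: options doc major=62
# if you change this file later, re-run mkinitrd_doc, because the initrd image bakes the option in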
5.6. Step 6: Create the initrd image
Run the mkinitrd_doc script from linux_binary.5_1_4/doc-linux-5_1_4_20/driver/:
./mkinitrd_doc
This may give warning messages similar to the following, which can be safely ignored:
cp: cannot stat '/sbin/insmod.static': No such file or directory
cp: cannot stat '/dev/systty': No such file or directory
Check for the newly created initrd image, initrd-2.4.18.img, in the /boot directory. Running the mkinitrd_doc script produces this image.
The reason for making an initrd image is that the provided M-Systems driver cannot be built into the kernel, which leaves no option other than adding it as a loadable module. If we want to boot from the DOC, the kernel must know how to access the DOC at boot time in order to find /sbin/init in the root filesystem on the DOC (the root filesystem is necessary to get the Linux system up). In the Linux boot sequence, /sbin/init is the file (a command, actually) that the kernel looks for in order to start various services and, finally, give the login shell to the user. The figure below illustrates the problem:
Figure 1. Why we need an initrd image
5.7. Step 7: Insert the DOC driver into the new kernel
Reboot the system and boot into the newly created kernel. Now insert the doc module:
modprobe doc
This will give the following messages:
fl : Flash disk driver for DiskOnChip
fl: DOC devices(s) found: 1
fl: _init:registed device at major 62
...
To access the DOC, ensure that the major number assigned to the nodes is 62. If a major number of 100 is assigned instead, check whether /etc/modules.conf was successfully modified. If it was not, then repeat Section 5.5. You must then also repeat Section 5.6, because the initrd image depends on /etc/modules.conf; if the DOC entry in this file is incorrect, the initrd image will be useless.
5.8. Step 8: Create a filesystem on the DiskOnChip
Perform Section 4.6. This is required to create partitions on the DOC.
5.9. Step 9: Build the root filesystem on the DiskOnChip
Before starting with this step, make sure that you have not mounted /dev/msys/fla1 on any mount point, as this step involves reformatting the DiskOnChip.
Also, in order to understand the details of the root filesystem, refer to The Linux Bootdisk HOWTO available at .
We will use the mkdocimg script located in linux_binary.5_1_4/doc-linux-5_1_4_20/build. We will also use the redhat-7.1.files directory, located in the same directory (i.e. build), which contains the list of the files that will be copied into the root filesystem created on the DOC.
./mkdocimg redhat-7.1.files
This step will take a few minutes to complete.
Now mount the /dev/msys/fla1 partition on the mount point /mnt/doc and check the files that have been created:
mount -t auto /dev/msys/fla1 /mnt/doc
cd /mnt/doc
The following directories are created on the DOC as a result of running the script:
bin dev sbin etc lib usr home mnt tmp var boot
The most important is the boot directory. It contains vmlinuz-2.4.18 and initrd-2.4.18.img, which are copied from the host's /boot directory. This directory is required when booting from the DiskOnChip.
Apart from these files there are some other files which must be deleted:
• System.map-2.4.18
• boot.3E00
These two files are created later by LILO.
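No command is given above for removing these two files; assuming they ended up in the DOC's boot directory alongside the kernel image, the following would remove them (the paths are our assumption, so adjust them if the files live elsewhere):
rm /mnt/doc/boot/System.map-2.4.18
rm /mnt/doc/boot/boot.3E00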
The redhat-7.1.files directory contains a list of the files and directories that will be created when we use the mkdocimg script. This script does not create all the files that are necessary for the root filesystem on the DOC. So replace the directories created by the mkdocimg script with the corresponding directories of the / filesystem (the root filesystem that is currently running).
The directories under /, such as etc, sbin, bin and so on, contain a lot of files that are not useful and ideally should not be copied while building the root filesystem for the DOC. But since we have not discussed which files are essential and which can be removed, we suggest copying the entire contents of the directories. We know this is a clumsy way of building the root filesystem and will take an unnecessary amount of space; bear with us, as in the next section we will explain how to put the development tools on the DOC and will then remove the useless files from the DOC's root filesystem. If you know how to build a root filesystem, we encourage you to copy only the essential files. The following is the set of commands we used to modify the root filesystem:
rm -rf /mnt/doc/sbin
rm -rf /mnt/doc/etc
rm -rf /mnt/doc/lib
rm -rf /mnt/doc/dev
cp -rf /sbin /mnt/doc
cp -rf /etc /mnt/doc
cp -rf /dev /mnt/doc
cp -rf /lib /mnt/doc
rm -rf /mnt/doc/lib/modules
Now our filesystem is ready. The total size occupied by this filesystem will be about 35 MB.
5.10. Step 10: Use rdev to specify the DOC root filesystem location to the kernel image
This step is required to tell the kernel we compiled in Step 3 where the DOC root filesystem is located. The step can be avoided by giving the root filesystem location in the boot loader configuration file, but we had some problems making the kernel locate the root filesystem at boot time, so we recommend executing this command:
rdev /boot/vmlinuz-2.4.18 /dev/msys/fla1
5.11. Step 11: Compile lilo-22.3.2
We are going to use LILO as the boot loader, since it is the only boot loader that can read an INFTL device without many changes to the boot loader source code. For more information on how LILO and other boot loaders operate, refer to .
We need to compile the lilo-22.3.2 source code to get the executable file for LILO. We will use the source code from linux_binary.5_1_4/doc-linux-5_1_4_20/lilo/lilo-22.3.2.
Before starting the build we need to do the following:
1. Create a soft link to the kernel 2.4.18 source code with the name linux. When you untar the file linux-2.4.18.tar.gz it creates a directory named linux, so we need to rename the directory linux to linux-2.4.18 before creating a soft link with the same name:
mv linux linux-2.4.18
ln -s linux-2.4.18 linux
If the above steps are not done, the build might fail.
2. Patch the file linux_binary.5_1_4/doc-linux-5_1_4_20/lilo/lilo-22.3.2/common.h. The lilo-22.3.2 source code that comes with the M-Systems linux_binary.5_1_4.tgz is buggy, as one of the variables, PAGE_SIZE, is not defined. We need to patch the LILO source code as follows. Add the following lines in common.h after the line #include "lilo.h":
+ #ifndef PAGE_SIZE
+ #define PAGE_SIZE 4096U
+ #endif
#define O_NOACCESS 3
where "+" indicates the lines to be added (the existing #define O_NOACCESS line is shown only as context).
3. Make sure that the gcc version is 2.95.3 by using gcc --version.
Now we can start the build process. Run
make clean && make
This will create a new LILO executable, linux_binary.5_1_4/doc-linux-5_1_4_20/lilo/lilo-22.3.2/lilo. Copy this LILO executable to /sbin/lilo and /mnt/doc/sbin/lilo:
cp linux_binary.5_1_4/doc-linux-5_1_4_20/lilo/lilo-22.3.2/lilo /sbin/lilo
cp linux_binary.5_1_4/doc-linux-5_1_4_20/lilo/lilo-22.3.2/lilo /mnt/doc/sbin/lilo
5.12. Step 12: Copy the boot.b file into the boot directory of the DOC

Table of Contents
Part 1 Basic knowledge of mechanics
Unit 1 General equilibrium conditions of system
Reading Material 1 Static analysis of beams
Unit 2 Stress and Strain
Reading Material 2 Shear force and bending moment in beams
Unit 3 Normal stress and shear stress
Reading Material 3 Theories of strength
Unit 4 Membrane stresses in shell of revolution
Reading Material 4 Stresses in cylindrical shells due to internal pressure
Unit 5 Mechanical vibrations
Reading Material 5 Static and dynamic balance of rotating bodies
Part 2 Metallic materials
Unit 6 Metals
Reading Material 6 Stainless steel
Unit 7 Properties of materials
Reading Material 7 Standard mechanical tests
Unit 8 Manufacturing engineering processes
Reading Material 8 Examples of manufacturing processes
Unit 9 Internal structure of steel
Reading Material 9 Heat treatment of steel
Unit 10 Corrosion of metals
Reading Material 10 Corrosion control
Part 3 Process industry
Unit 11 Chemical engineering
Reading Material 11 Chemical industry
Unit 12 Transport phenomena in process industry
Reading Material 12 Principles of momentum transfer
Unit 13 Principles of heat transfer
Reading Material 13 Principle of mass transfer
Unit 14 Unit operation in chemical engineering
Reading Material 14 Evaporation
Unit 15 Chemical reaction engineering
Reading Material 15 Chemical industry and environmental protection
Part 4 Process equipment
Unit 16 Pressure vessels and their components
Reading Material 16 Pressure vessel codes
Unit 17 Design of pressure vessel
Reading Material 17 Stress categories
Unit 18 Distilling equipment
Reading Material 18 Packed tower
Unit 19 Types of heat exchangers
Reading Material 19 Shell and tube heat exchanger
Unit 20 Types of reactors
Reading Material 20 Basic stirred tank design
Part 5 Process machinery
Unit 21 Pumps
Reading Material 21 The centrifugal pump
Unit 22 Pumping equipment for gases
Reading Material 22 Reciprocating compressors and their applications
Unit 23 Solid liquid separation
Reading Material 23 Centrifugal filtration
Unit 24 Valves
Reading Material 24 Four types of valve
Unit 25 Seal classification
Reading Material 25 Primary sealing components
Part 6 Process control
Unit 26 Introduction to process control 1
Reading Material 26 Introduction to process control 2
Unit 27 General concept of process control
Reading Material 27 Control categories
Unit 28 Process control equipment
Reading Material 28 Measuring devices
Unit 29 The modes of control action 1
Reading Material 29 The modes of control action 2
Unit 30 Process control system
Reading Material 30 Typical control systems
Table of Contents Format

Table of Contents (bold, small-two size [小二], centered)
(Leave one blank line here, using the blank-line method described under "paragraph settings" below)
Acknowledgements (bold, size four [四号]) (i)
(Leave one blank line here, using the same blank-line method)
Abstract (bold, size four) (ii)
摘要 [Chinese abstract] (宋体 SimSun, size four, bold) (iii)
(Leave one blank line here, using the same blank-line method)
Introduction (宋体 SimSun, bold, size four) (1)
I. Nature of Translation (bold, size four; one space after the period in "I.") (2)
1.1 Translation Is a Science (small four [小四], not bold, indented) (2)
1.2 Translation Is an Art (same as above) (4)
II. Prose Cognition (bold, size four) (10)
2.1 What Is Prose? (small four, not bold, indented) (10)
2.2 What Are the Characteristics of Prose? (small four, not bold) (10)
III. Aesthetics & Translation (11)
Conclusion (bold, size four) (20)
Bibliography (bold, size four) (21)
(On this page, English text uses Times New Roman and Chinese text uses 宋体 SimSun; page margins are 2.5 at the top and left and 2.0 at the bottom and right; apart from the centered "Table of Contents" line, all other lines use distributed (justified) alignment; leave one space between the numeral's period and the title; sub-level numbers align with the first word of the line above; this page needs no page number)
Standard way to create the blank line within a paragraph: select the text, click "Paragraph" in the menu, and set one blank line "before" or "after" the paragraph.
A blank line made this way has a moderate height and looks neat and tidy.
Final Report: A Data Retrieval, Analysis, and Visualization Tool for Hydrological Calibrations and Applications at NOAA National Weather Service
University PI: Yao Liang
Advanced Research Institute
Dept. of Electrical and Computer Engineering
Virginia Tech
4300 Wilson Blvd., Suite 750
Arlington, VA 22203
Email: yaliang@
NWS: Thomas Adams at Ohio RFC
Project period: 01/01/04 – 05/31/05
Award: $9,064
Table of Contents
Chapter 1. Introduction
Chapter 2. HIDE: Mechanisms, Architecture and Implementation
Chapter 3. Accomplishments
References
Chapter 1. Introduction
1.1. Objectives, rationale, approach, significance, and expected benefits
The advent of global observing systems (e.g., satellite remote sensing) and global field programs has generated unprecedented amounts of critical multi-variate data for scientific studies. Such new kinds of data are creating new opportunities to improve hydrologic calibrations, weather prediction skills, and the understanding of critical environmental interactions. In particular, modern information technology (IT), such as data visualization and data mining, may significantly improve our hydrological and weather predictions and calibration skills through easier access to various data sources.
Objectives:
In this project, we focused on developing a web-enabled, XML-based data retrieval, analysis, and visualization tool (HIDE, the Hydrological Integrated Data Environment) to facilitate easier access to various data sources with heterogeneous data formats, as needed for improving hydrological model calibration processes and other hydrological research and application activities at the NWS. The objectives for this project are:
(1) Coherent management, sharing, and visualization of various heterogeneous data sources with dramatically different formats and structures, with extensibility, scalability, uniformity, and transparency. The data tool allows full or partial direct access, retrieval, display, and analysis of the NWS River Forecast System (NWSRFS) IHFS database. Easy direct access to and retrieval from the OFS data source could significantly facilitate the analysis of model calibration and post-event inspection, fill possible data gaps, and obtain reservoir operation information for hydrologic forecasts at NWS.
(2) Rapid search and access of massive amounts of data by geographically distributed researchers, by specifying query conditions, browsing, analyzing, aggregating, and visualizing queried data, and customizing the formats of retrieved data. Also, the retrieved data can be organized as time series for individual locations to facilitate future distributed hydrological modeling and calibration, improving hydrologic forecasts at interior locations.
(3) Analysis and visualization of the accessed data before retrieving it, which facilitates the data selection process.
(4) Platform independence with a consistent and unified user-friendly interface, based on any type of web browser, such as Netscape or Internet Explorer.
Rationale:
Operational research plays a critical role in the operational hydrologic forecast mission at the NOAA National Weather Service (NOAA NWS) River Forecast Centers (RFCs), which are uniquely mandated amongst Federal agencies to provide forecasts for the Nation's rivers, issuing daily and other forecasts at over 4,000 points across the contiguous United States and Alaska.
The operational component of the mission is performed at 13 River Forecast Centers and approximately 120 Weather Forecast Offices at strategic locations across the United States. The NWS Office of Hydrologic Development supports the operational mission by developing, implementing, and maintaining hydrologic models and systems. The forecast models used are developed and calibrated for specific rivers and streams based on historical events. They are conditioned and constrained operationally using current observations and, in the case of operational ensemble forecasts, with historical data as well. Therefore, delayed, inaccurate, inconsistent, incomplete or insufficient data used for calibrating the forecast models can cause significant problems in the forecast process and for the forecasters who operate them. In order to make it easier to access different data sources needed for improving the hydrological model calibration processes for achieving better forecast skills, significant efforts have been made to develop a broad set of tools over the years at the Hydrology Laboratory of the NWS. However, these developed tools are limited for specific data sources at present. As new data sources emerge and research with new models progresses, corresponding tools for exploring, accessing, analyzing these new heterogeneous data sources have not been developed. Clearly, if such challenges and issues in data acquisition, processing, and management are not sufficiently addressed, significant improvement in hydrologic research cannot be achieved.One of the possible solutions to the above mentioned problem is to develop data integration models in the information system super-imposed on the specific scientific domain. This methodology works well as long as the models can satisfy the constraints and rules of the domain, with the advent of ontology to characterize the semantic nature of the data. However, the relative autonomy nature of the data collecting organizations, e.g. US Geological survey [5] and the presence of legacy systems prove to be a hindrance in achieving complete semantic interoperability. Our approach, an integration solution achieves semantic and structural interoperability through a generic concept “DataNodes” arranged as a DataNode tree. The generic nature of the DataNodes can very well be applied in the context of the semantic heterogeneity and structural heterogeneity of the registered data sources. The implementation of our DataNode tree model relies on a metadata model for data semantics and logic organization, which eliminates/reduces the collaborative effort required from the participating data sources. The approach is applied onto the datasets from USGS and NWSRFS IHFS database.Benefits:This work directly addresses two key concerns in data management and analysis: (1) new data analysis techniques with wide application and usefulness in teaching, research, or operational forecasting, and (2) facilitate improvement of hydrological forecast through improvement of hydrological model calibration. The tool developed as part of this project could significantly eliminate current burdens and efforts of accessing massive data of diverse data sources (e.g., data searching, selecting, retrieving, aggregating, preprocessing, and analyzing). Therefore, the data tool could offer great potentials to improve operational hydrological forecasts by facilitating the hydrological calibration processes and research. 
Also, the data tool has great potential to benefit other RFCs through a future plan of national implementation of its successful components. The features supported by the data tool are:•Access and manage very large volumes of heterogeneous data from distributed diverse sources with distinct different structures and formats promptly, intelligently, and efficiently,•Easily adopt changes, over times, of transmitted data structures and formats due to rapid advancement in data transmission technologies (i.e., open system architecture), •Handle multi-resolution data outputs and provide various customizable data output formats,•Provide users with easy and uniform GUI (Graphical User Interface) type of data access across heterogeneous data sources without worrying about computer platform and different data formats and structures used in any data sources,•Merge datasets obtained from diverse data sources as needed, and provide various data analysis methods, and•Provide various 2-D and 3-D visualization methods to visualize data from heterogeneous data sources and modeling results.Chapter 2 .HIDE: Mechanisms, Architecture and Implementation The complexities in data integration are attributed to various heterogeneities of the data as well as the data systems. One can define several kinds of heterogeneity leading to several levels of interoperability: system, syntax, structure and semantic [4]. In this classification, the machine readable aspects of data representation falls into syntactic heterogeneity, and the representational heterogeneity in terms of data modeling falls into structural heterogeneity. Semantic heterogeneity relates to the difference in the semantics of the datasets to be relevant to semantic interoperability. The semantic interoperability requires that system understands the semantics of the information request as well as those of the information sources and satisfy the request as much as it can. The work in [2], list some of the key issues to be considered in a scientific data integration scenario: the nature of the datasets (structure, schema, syntax etc), publishing methodology which is often non-standardized resulting in an “annotation pipeline”, and the fluid and dynamic federation of the datasources.Current research trends in a scientific domain are exploring the possibilities of using ontologies for semantic interoperability. The success of these efforts depends on acomprehensive ontology design complete with domain specific generic rules. Ontology is an explicit specification of conceptualization [3]. Consequently, an ontology defines a set of terms of the domain (e.g.: classes, objects, relations, properties) and formal axioms which constrain the interpretation and well formed use of the terms. Defining ontologies helps systems (agents) in communication and collaboration, to bridge the semantic gap between information systems in a scientific domain.The integration architecture of HIDE utilizes partial collaborative effort from autonomous data sources. A user can define various characteristics such as data organization, structure, and semantics of the datasets to be integrated through the metadata standard. Our approach achieves a balance between the requirements of autonomous control over data by hydrologic data sources and a uniform infrastructure satisfying semantic needs. The uniqueness of our approach is to combine the ontology aspects of the domain and the logical data organization exclusive to a data source through a tree model “DataNode Trees”. 
Consequently, this helps in attaining a "virtual information space" that is uniform across all data sources. The tree model provides the flexibility of easy query evaluation and representation. Furthermore, the model can be specified using our metadata standards.
2.1. Methods and Mechanisms
Data Integration Model
In our data integration model we integrate heterogeneous data by defining a generic concept, the "DataNode". A DataNode, the smallest unit of information integration, can represent an ontological concept such as precipitation, or a structural element such as a database, table, file, or data source. Based on the temporal-spatial nature of hydrologic scientific data, a DataNode in the hydrology datasets can be modeled as a Time-Space-Attribute node. In other words, a DataNode essentially represents Time-Space-Attribute information. The Time aspect of the model corresponds to factors such as realtime, monthly, or daily, while the Space aspect characterizes spatial features such as states, watersheds, or latitude-longitude ranges. The Attribute aspect of the model facilitates defining the special variables or features that often qualify a unit of hydrologic information. For example, when modeling precipitation data, the variable "precipitation" can be considered an attribute. Similarly, when defining a unit of hydrologic information such as the data source USGS, the feature "USGS" can be expressed as an attribute.
The associations between the various DataNodes in our system conceptualize either a generic ontology of the domain (domain view) or a logical organization of data (user view). The views in the system define the relations between DataNodes as hierarchical and are presented to the user as DataNode sub-trees. These views are later combined, using Adaptation DataNodes, into a DataNode tree (Figure 1). Generalizing the semantic and structural nuances of a data source through DataNodes and the corresponding DataNode tree presents transparency to the user. Hence, the user can pose high-level queries independent of the DataNode tree, which are translated into DataNode sub-queries and executed.
Figure 1. Views: the domain view and the logical views of the data sources USGS and NWS. The domain view of a DataNode tree can be defined for various users with different levels of permissions, while a logical view is unique to a data source.
Search and Query
Searching and querying a dataset is translated into "search" and "query" operations on the DataNodes of the DataNode trees in our system. Based on the user-defined parameters, our search engine traces the search from the root DataNode down to any intermediate or leaf node (Figure 2). If the found DataNode is not satisfactory to the user, the system provides a guided search by taking the user through each level of the DataNode tree. Once the DataNodes are properly identified, the user can pose a query to the query interfaces of those DataNodes. The DataNode query at a leaf node, referred to as an "atomic query", is of special interest, as it performs the actual query at the data source. A DataNode query at the intermediate levels of the logical view is an "aggregation query", composed of multiple sub-queries that are delegated to the node's immediate children. The child DataNodes apply the sub-queries, and the procedure continues down the tree until the leaf nodes are reached. At the leaf node, a DataNode "aggregation query" is transformed into an "atomic query".
The results of all sub-DataNode queries are aggregated or joined at the intermediate DataNodes.
Figure 2: A query trace for the query "What is the precipitation distribution of realtime water data for the state Alabama from the USGS data source".
Metadata Standards
Keeping the design objectives of flexibility and extensibility, we have used the concept of metadata for defining our DataNode trees, the query interfaces, and other necessary information. The XML language is used to describe the metadata. Our XML metadata standard can be classified into four categories:
1. Describe the DataNode and DataNode tree.
2. Define the query model and its translation to the query model of the data source.
3. Define the syntactic nature of the data.
4. Define data transformations, if required.
The first standard is used to define a DataNode and DataNode tree. In this specification, the user needs to include information about the DataNode such as its name, documentation (URLs, web links), vocabulary (a collection of similar names), links to the specifications of its children, and other additional information. The disparity of interfaces across data sources makes locating and accessing data cumbersome. Hence one of our primary objectives is to provide a uniform representation of the user interfaces irrespective of the complexity of the underlying data source. This is achieved through our 2nd standard: in this specification, the user can define a query interface for each DataNode. In addition, each data source uses various methodologies for storing and accessing its data, for instance websites, databases, or files and directories. Hence, we use a translation mechanism which transforms the query posed to the DataNode into a query against the data source. The 3rd standard is used to define the syntactic details of the data, such as ASCII files, comma-separated values, etc. To address the need for a common data model for different data, our system implements three types of data models: temporal, temporal-spatial, and spatial. Our final standard can be used to transform the data to these models. A sample of our specification is shown in Figure 3.
<?xml version="1.0"?>
<DataNode xmlns:xsi="/2001/XMLSchema-instance"
    xsi:noNamespaceSchemaLocation='c:\IDAM\xml\xsd\funcModelSchema.xsd'>
  <identifiedAs>
    <name> River Forecasting </name>
    <label> river forecasting </label>
    <index> rfc,river,forecasting,center </index>
  </identifiedAs>
  <documentedBy>
    <url> </url>
    <history> River forecast data from sources such as Ohio River forecast center </history>
  </documentedBy>
  <DataNodeMembers>
    <DataNodeMember>comet/ohio/ohiorfc_funcModel.xdms</DataNodeMember>
  </DataNodeMembers>
</DataNode>
Figure 3: The description specification (1st standard) for the DataNode "Ohio RFC" in Figure 2.
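For comparison, a query-interface specification (the 2nd standard) follows the same DataNode-oriented pattern. The element and attribute names below are purely illustrative and are not the actual HIDE schema; they only sketch the kind of search conditions, output parameters, and source translation described above:
<?xml version="1.0"?>
<!-- hypothetical sketch: element names are illustrative, not the real HIDE metadata schema -->
<QueryInterface dataNode="River Forecasting">
  <!-- search conditions exposed to the user -->
  <condition name="state" type="string"/>
  <condition name="dateRange" type="temporal"/>
  <!-- output parameters the user may request -->
  <output name="precipitation"/>
  <output name="stage"/>
  <!-- how a DataNode query is translated into the data source's own query -->
  <translation target="datasource"/>
</QueryInterface>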
2.2. Architecture
There are various strategies for addressing the extensibility of a system. One such approach is to use a modular architecture in which the system is composed of small, autonomous components that collaborate to achieve a common goal. The beauty of this architecture is that one can add or replace any component without affecting the others. Based on this aspect, it fits the nature of the HIDE system well (Figure 4).
Figure 4: The modular architecture of HIDE.
Application services
A hydrologist is often required to perform various kinds of data analysis on the data. Some of the analysis tools used are visualization (2D, 2D-3D transforms), search engines, forecasting tools, and modelers. The application services module of HIDE is used to present these services. As the interface between each module is open, new application services can be identified and plugged in without significant changes to the other modules.
Data Models
The interim DataModels of HIDE (temporal, spatial, and temporal-spatial) facilitate a common representation of heterogeneous data in the system. The definition of these models is based on the nature of hydrology data. The models define a uniform representation of the data while incorporating the data integration aspects as well. For instance, precipitation data from USGS are characterized differently from precipitation data from NWS. Addressing this requirement, the DataModels of HIDE are linked to the corresponding DataNodes. One can say the DataNode tree in the system acts as the core engine driving the various modules.
Query Engine
One of the key processes in data integration is the evaluation of the query from the user against the data sources. The query engine module, as the name suggests, acts as the query processor of the system. It transforms the user-level query into an intermediate query and performs the search and query operations. Query evaluation is a complicated process that involves traversal of the tree and can often become a bottleneck in the system. Therefore, successful completion of query evaluation depends on the optimized creation of the DataNode tree from the metadata.
Transformation Engine
The complex nature of scientific datasets can be described using various standards such as DODS [2004]. Each of these standards has a distinct data model. For example, DODS has defined the data types 'Structure', 'Sequence', 'Arrays' and 'Grids' for modeling various kinds of scientific data. Although the basic nature of the data is temporal-spatial, a certain amount of transformation is required to bring the data from the source into the interim DataModels of HIDE. The transformation engine performs this transformation.
Access Engine
The Access Engine module transforms the intermediate query of the HIDE system into a query against the external data source. It then poses the transformed query to the data source and retrieves the data.
2.3. Implementation
We have implemented our system in a Java environment. Our system is web-enabled and uses a client-server approach, with an Apache Tomcat server running at the back end. By using a web-based method with a Java runtime environment, we were able to address the objective of platform independence in our system. Some of the features provided in the system are search and query, visualization tools, a history of retrieved data, and saving the data to local file systems.
In the current version of our system, two types of heterogeneous data from the data sources USGS and NWSRFS OFS are integrated. We have also provided the functionality of saving the data to OH datacard format.
The features of the system are built and supported by the model DataNode trees. Some of them are explained below.
Search Engine: We have developed a search engine, similar to "Google", which can be used to find the datasets. The primary functionality of the search engine is to find appropriate datasets based on the search string provided by the user. The search is carried out on the model DataNode trees, traversing from the root node to the appropriate nodes corresponding to the search string. The user can view a list of DataNodes supplemented with sample information after the search is performed.
Upon selecting a DataNode, the user can perform a query through the query interface provided. The search screen after performing a search for the Ohio RFC dataset is shown below.
Figure 5: Search Engine.
Upon selecting the DataNode, the user can view the query interface for the node. These interfaces are developed based on the XML-based query configuration metadata provided by the system integrator. In the metadata, the user can include various search conditions and possible output parameters. It should be noted that the modifications to the system will be minimal if a different storage mechanism, for instance a database, is used; hence the query interfaces remain the same as in previous versions.
Data Views: Upon entering the search conditions and possible output parameters in the query interface, HIDE sends the query to the data engine(s), retrieves the results, performs the necessary transformations and integration across multiple data engines, and displays the results. Although the present version of the HIDE system has the capability to issue SQL queries to any database, it does not have the necessary tables to show the forecasted and current data (we are waiting for inputs from Ohio RFC). Hence the DataViews of the USGS data are shown below (Figure 7).
Recently Viewed Data: This functionality allows the user to keep track of all the datasets which he or she has recently used. The following screen shows the recently viewed datasets.
Figure 7: DataViews of USGS data.
Figure 8: Recently used Data.
Visualization tools: This functionality allows the user to apply our remote visualization tools to the data to be retrieved. The tools provided in the system are 2D plots for temporal data (Figure 9a) and 3D plots for spatial data (Figure 9b).
Figure 9 (a): Visualization tool, 2D plot. The left window shows the temporal plots of parameters such as precipitation; the right (zoom) window shows the zoomed range (purple to green) of the left window. The mean, standard deviation, and minimum for each parameter are also shown.
Figure 9 (b): Visualization tool, 3D plot for precipitation data. The right window shows various options, such as animation, color bar, and grey scaling, that can be applied to the plot.
Chapter 3. Accomplishments
The focus of our project was to develop a web-enabled, XML-based data integration, retrieval, analysis and visualization tool. This tool enables hydrology scientists at NWS to access, analyze and share massive amounts of distributed heterogeneous data, as well as NWS's own data, more resourcefully and efficiently. The accomplishments of this project are summarized as follows:
1. A data integration model, based on DataNode trees, for integrating data with semantic, structural, system and syntactic heterogeneities from autonomous data sources. The model is generic and can be applied to other domains as well.
2. The implementation of the model: HIDE can be used to integrate various data, from the internal NWSRFS to external data sources such as USGS, dynamically. This is necessary because only the queried data is integrated, while the huge dataset remains at the data source.
3. The layered architecture of HIDE promotes simplicity and extensibility, making the addition of new application services and new data models easier and simpler.
4. An XML-based implementation of the system helps address the objective of flexibility. Hence the addition of a new DataNode or dataset to the system involves the mere inclusion of new metadata files.
This helps tremendously, as the system need not be modified for every dataset change.
5. The implementation of the remote data visualization tools for online visualization of both 2-D and 3-D data.
6. Additional enhancements, such as SQL capability, facilitating the integration of data held in databases. Therefore, data in any data storage mechanism, such as databases, files, websites, etc., can be integrated by HIDE. We extended the HIDE system so that it will be able to integrate the data from the IHFS database, and provided all the functionalities of the HIDE system, such as search, query, data views, and visualizations. However, due to the dramatic change in the data storage and management platform of the NWS Ohio RFC during our project period, the VT team has not yet been able to receive the NWS database schema and the data of the requisite tables (current and forecasted data), and therefore the data from the IHFS database have not yet actually been integrated into HIDE. We expect to receive the related data schemas and data from Ohio RFC in the near future, so that we can integrate Ohio RFC's actual data and test on it.
References
[1] S.S. Anand and A.G. Büchner, "Decision Support Using Data Mining", Financial Times Management, ISBN 0-273-63269-8, pp. -168, London: Pitman Publishers, 1998.
[2] O. Boucelma et al., "Report on the EDBT'02 panel on scientific data integration," ACM SIGMOD Record, vol. 31, no. 4, Dec. 2002, pp. 107-112.
[3] T. Gruber, "Toward principles for the design of ontologies used for knowledge sharing," Int'l J. Human-Computer Studies, vol. 43, no. 5-6, Dec. 1995, pp. 907-928.
[4] A. Sheth, "Changing Focus on Interoperability in Information Systems: From System, Syntax, Structure to Semantics," Interoperating Geographic Information Systems, M.F. Goodchild et al., eds., Kluwer Publishers, 1998.
[5] USGS, U.S. Geological Survey, 2004; /nwis.