2014 University Graduation Project, Warehouse Management System Database (Computer Science): Foreign Reference Literature, Original Text and Translation
Warehouse Management System Foreign-Literature Translation: English Source Text

Warehouse Management Systems (WMS)

The evolution of warehouse management systems (WMS) is very similar to that of many other software solutions. Initially a system to control movement and storage of materials within a warehouse, the role of WMS is expanding to include light manufacturing, transportation management, order management, and complete accounting systems. To use the grandfather of operations-related software, MRP, as a comparison, material requirements planning (MRP) started as a system for planning raw material requirements in a manufacturing environment. Soon MRP evolved into manufacturing resource planning (MRPII), which took the basic MRP system and added scheduling and capacity planning logic. Eventually MRPII evolved into enterprise resource planning (ERP), incorporating all the MRPII functionality with full financials and customer and vendor management functionality. Now, whether WMS evolving into a warehouse-focused ERP system is a good thing or not is up for debate. What is clear is that the expansion of the overlap in functionality between Warehouse Management Systems, Enterprise Resource Planning, Distribution Requirements Planning, Transportation Management Systems, Supply Chain Planning, Advanced Planning and Scheduling, and Manufacturing Execution Systems will only increase the level of confusion among companies looking for software solutions for their operations.

Even though WMS continues to gain added functionality, the initial core functionality of a WMS has not really changed. The primary purpose of a WMS is to control the movement and storage of materials within an operation and process the associated transactions. Directed picking, directed replenishment, and directed put-away are the key to WMS. The detailed setup and processing within a WMS can vary significantly from one software vendor to another; however, the basic logic will use a combination of item, location, quantity, unit of measure, and order information to determine where to stock, where to pick, and in what sequence to perform these operations.

At a bare minimum, a WMS should:
- Have a flexible location system.
- Utilize user-defined parameters to direct warehouse tasks and use live documents to execute these tasks.
- Have some built-in level of integration with data collection devices.

Do You Really Need WMS?

Not every warehouse needs a WMS. Certainly any warehouse could benefit from some of the functionality, but is the benefit great enough to justify the initial and ongoing costs associated with WMS? Warehouse Management Systems are big, complex, data-intensive applications. They tend to require a lot of initial setup, a lot of system resources to run, and a lot of ongoing data management to continue to run. That's right, you need to "manage" your warehouse "management" system. Oftentimes, large operations will end up creating a new IS department with the sole responsibility of managing the WMS.

The Claims:
- WMS will reduce inventory!
- WMS will reduce labor costs!
- WMS will increase storage capacity!
- WMS will increase customer service!
- WMS will increase inventory accuracy!

The Reality:

The implementation of a WMS along with automated data collection will likely give you increases in accuracy, reduction in labor costs (provided the labor required to maintain the system is less than the labor saved on the warehouse floor), and a greater ability to service the customer by reducing cycle times. Expectations of inventory reduction and increased storage capacity are less likely.
While increased accuracy and efficiencies in the receiving process may reduce the level of safety stock required, the impact of this reduction will likely be negligible in comparison to overall inventory levels. The predominant factors that control inventory levels are lot sizing, lead times, and demand variability. It is unlikely that a WMS will have a significant impact on any of these factors. And while a WMS certainly provides the tools for more organized storage, which may result in increased storage capacity, this improvement will be relative to just how sloppy your pre-WMS processes were.

Beyond labor efficiencies, the determining factors in deciding to implement a WMS tend to be more often associated with the need to do something to service your customers that your current system does not support (or does not support well), such as first-in-first-out, cross-docking, automated pick replenishment, wave picking, lot tracking, yard management, automated data collection, automated material handling equipment, etc.

Setup

The setup requirements of WMS can be extensive. The characteristics of each item and location must be maintained either at the detail level or by grouping similar items and locations into categories. An example of item characteristics at the detail level would include exact dimensions and weight of each item in each unit of measure the item is stocked in (each, cases, pallets, etc.) as well as information such as whether it can be mixed with other items in a location, whether it is rackable, max stack height, max quantity per location, hazard classifications, finished goods or raw material, fast versus slow mover, etc. Although some operations will need to set up each item this way, most operations will benefit by creating groups of similar products. For example, if you are a distributor of music CDs you would create groups for single CDs and double CDs, maintaining the detailed dimension and weight information at the group level and only needing to attach the group code to each item. You would likely need to maintain detailed information on special items such as boxed sets or CDs in special packaging. You would also create groups for the different types of locations within your warehouse. An example would be to create three different groups (P1, P2, P3) for the three different-sized forward picking locations you use for your CD picking. You then set up the quantity of single CDs that will fit in a P1, P2, and P3 location, the quantity of double CDs that fit in a P1, P2, P3 location, etc. You would likely also be setting up case quantities and pallet quantities of each CD group, and quantities of cases and pallets per each reserve storage location group.

If this sounds simple, it is... well... sort of. In reality most operations have a much more diverse product mix and will require much more system setup. And setting up the physical characteristics of the product and locations is only part of the picture. You have set up enough so that the system knows where a product can fit and how many will fit in that location. You now need to set up the information needed to let the system decide exactly which location to pick from, replenish from/to, and put away to, and in what sequence these events should occur (remember WMS is all about "directed" movement). You do this by assigning specific logic to the various combinations of item/order/quantity/location information that will occur.
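As a rough illustration of the CD-distributor example above, the group-level capacity setup might be captured in a structure like the following. This is only a sketch: the group names (P1, P2, P3, single and double CDs) come from the example, while the specific quantities, item numbers, and field names are invented for illustration.

```python
# Hypothetical group-level setup: how many units of each item group fit in
# each location group. The quantities are illustrative, not from the article.
location_capacity = {
    # location group -> {item group: units that fit}
    "P1": {"single_cd": 30,  "double_cd": 18},
    "P2": {"single_cd": 60,  "double_cd": 36},
    "P3": {"single_cd": 120, "double_cd": 72},
}

item_master = {
    # item -> its group; detailed dimensions are kept once per group
    "CD-1001": "single_cd",
    "CD-2002": "double_cd",
}

def units_that_fit(item: str, location_group: str) -> int:
    """Return how many units of an item fit in a location of the given group."""
    return location_capacity[location_group][item_master[item]]

print(units_that_fit("CD-1001", "P2"))  # -> 60
```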
Below I have listed some of the logic used in determining actual locations and sequences.

Location Sequence. This is the simplest logic; you simply define a flow through your warehouse and assign a sequence number to each location. In order picking this is used to sequence your picks to flow through the warehouse; in put-away the logic would look for the first location in the sequence in which the product would fit.

Zone Logic. By breaking down your storage locations into zones you can direct picking, put-away, or replenishment to or from specific areas of your warehouse. Since zone logic only designates an area, you will need to combine this with some other type of logic to determine the exact location within the zone.

Fixed Location. Logic uses predetermined fixed locations per item in picking, put-away, and replenishment. Fixed locations are most often used as the primary picking location in piece-pick and case-pick operations; however, they can also be used for secondary storage.

Random Location. Since computers cannot be truly random (nor would you want them to be), the term random location is a little misleading. Random locations generally refer to areas where products are not stored in designated fixed locations. Like zone logic, you will need some additional logic to determine exact locations.

First-in-first-out (FIFO). Directs picking from the oldest inventory first.

Last-in-first-out (LIFO). Opposite of FIFO. I didn't think there were any real applications for this logic until a visitor to my site sent an email describing their operation that distributes perishable goods domestically and overseas. They use LIFO for their overseas customers (because of longer in-transit times) and FIFO for their domestic customers.

Pick-to-clear. Logic directs picking to the locations with the smallest quantities on hand. This logic is great for space utilization.

Reserved Locations. This is used when you want to predetermine specific locations to put away to or pick from. An application for reserved locations would be cross-docking, where you may specify that certain quantities of an inbound shipment be moved to specific outbound staging locations or directly to an awaiting outbound trailer.

Maximize Cube. Cube logic is found in most WMS systems; however, it is seldom used. Cube logic basically uses unit dimensions to calculate cube (cubic inches per unit) and then compares this to the cube capacity of the location to determine how much will fit. Now if the units are capable of being stacked into the location in a manner that fills every cubic inch of space in the location, cube logic will work. Since this rarely happens in the real world, cube logic tends to be impractical.

Consolidate. Looks to see if there is already a location with the same product stored in it with available capacity. May also create additional moves to consolidate like product stored in multiple locations.

Lot Sequence. Used for picking or replenishment, this will use the lot number or lot date to determine locations to pick from or replenish from.

It's very common to combine multiple logic methods to determine the best location. For example, you may choose to use pick-to-clear logic within first-in-first-out logic when there are multiple locations with the same receipt date. You also may change the logic based upon current workload. During busy periods you may choose logic that optimizes productivity, while during slower periods you switch to logic that optimizes space utilization.
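To make that combination concrete, here is a small sketch of FIFO with a pick-to-clear tiebreaker. The data layout and function name are hypothetical; an actual WMS would drive this from its own location tables and directed-task engine.

```python
from datetime import date

# Hypothetical on-hand records: (location, receipt_date, quantity_on_hand)
locations = [
    ("A-01-02", date(2014, 5, 1), 40),
    ("A-03-05", date(2014, 5, 1), 12),
    ("B-02-01", date(2014, 5, 7), 90),
]

def pick_location(records):
    """FIFO first (oldest receipt date), then pick-to-clear (smallest quantity)."""
    return min(records, key=lambda r: (r[1], r[2]))

print(pick_location(locations))
# -> ('A-03-05', datetime.date(2014, 5, 1), 12): oldest date, smallest quantity
```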
Other Functionality/Considerations

Wave Picking/Batch Picking/Zone Picking. Support for various picking methods varies from one system to another. In high-volume fulfillment operations, picking logic can be a critical factor in WMS selection. See my article on Order Picking for more info on these methods.

Task Interleaving. Task interleaving describes functionality that mixes dissimilar tasks such as picking and put-away to obtain maximum productivity. Used primarily in full-pallet-load operations, task interleaving will direct a lift truck operator to put away a pallet on his/her way to the next pick. In large warehouses this can greatly reduce travel time, not only increasing productivity, but also reducing wear on the lift trucks and saving on energy costs by reducing lift truck fuel consumption. Task interleaving is also used with cycle counting programs to coordinate a cycle count with a picking or put-away task.

Integration with Automated Material Handling Equipment. If you are planning on using automated material handling equipment such as carousels, ASRS units, AGVS, pick-to-light systems, or sortation systems, you'll want to consider this during the software selection process. Since these types of automation are very expensive and are usually a core component of your warehouse, you may find that the equipment will drive the selection of the WMS. As with automated data collection, you should be working closely with the equipment manufacturers during the software selection process.

Advanced Shipment Notifications (ASN). If your vendors are capable of sending advanced shipment notifications (preferably electronically) and attaching compliance labels to the shipments, you will want to make sure that the WMS can use this to automate your receiving process. In addition, if you have requirements to provide ASNs for customers, you will also want to verify this functionality.

Yard Management. Yard management describes the function of managing the contents (inventory) of trailers parked outside the warehouse, or the empty trailers themselves. Yard management is generally associated with cross-docking operations and may include the management of both inbound and outbound trailers.

Labor Tracking/Capacity Planning. Some WMS systems provide functionality related to labor reporting and capacity planning. Anyone that has worked in manufacturing should be familiar with this type of logic. Basically, you set up standard labor hours and machine (usually lift truck) hours per task and set the available labor and machine hours per shift. The WMS system will use this info to determine capacity and load. Manufacturing has been using capacity planning for decades with mixed results. The need to factor in efficiency and utilization to determine rated capacity is an example of the shortcomings of this process. Not that I'm necessarily against capacity planning in warehousing; I just think most operations don't really need it and can avoid the disappointment of trying to make it work. I am, however, a big advocate of labor tracking for individual productivity measurement. Most WMS maintain enough data to create productivity reporting. Since productivity is measured differently from one operation to another, you can assume you will have to do some minor modifications here (usually in the form of custom reporting).
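The capacity-and-load arithmetic described above is simple enough to sketch. The task names, standard hours, and shift figures below are invented for illustration only.

```python
# Standard hours per task type (labor, lift truck) -- illustrative values only.
standards = {
    "pallet_putaway": {"labor_hrs": 0.10, "truck_hrs": 0.10},
    "case_pick":      {"labor_hrs": 0.03, "truck_hrs": 0.00},
    "cycle_count":    {"labor_hrs": 0.05, "truck_hrs": 0.02},
}

# Planned task counts for one shift, and the resources available that shift.
planned = {"pallet_putaway": 300, "case_pick": 2500, "cycle_count": 100}
available = {"labor_hrs": 8 * 12, "truck_hrs": 8 * 5}  # 12 pickers, 5 trucks, 8-hour shift

load = {"labor_hrs": 0.0, "truck_hrs": 0.0}
for task, count in planned.items():
    for resource, hrs in standards[task].items():
        load[resource] += hrs * count

for resource in load:
    utilization = load[resource] / available[resource]
    print(f"{resource}: load {load[resource]:.1f} h of {available[resource]} h "
          f"({utilization:.0%})")
```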
Integration with Existing Accounting/ERP Systems. Unless the WMS vendor has already created a specific interface with your accounting/ERP system (such as those provided by an approved business partner), you can expect to spend some significant programming dollars here. While we are all hoping that integration issues will be magically resolved someday by a standardized interface, we aren't there yet. Ideally you'll want an integrator that has already integrated the WMS you chose with the business software you are using. Since this is not always possible, you at least want an integrator that is very familiar with one of the systems.

WMS + everything else = ? As I mentioned at the beginning of this article, a lot of other modules are being added to WMS packages. These would include full financials, light manufacturing, transportation management, purchasing, and sales order management. I don't see this as a unilateral move of WMS from an add-on module to a core system, but rather an optional approach that has applications in specific industries such as 3PLs. Using ERP systems as a point of reference, it is unlikely that this add-on functionality will match the functionality of best-of-breed applications available separately. If warehousing/distribution is your core business function and you don't want to have to deal with the integration issues of incorporating separate financials, order processing, etc., you may find these WMS-based business systems are a good fit.

Implementation Tips

Outside of the standard "don't underestimate", "thoroughly test", "train, train, train" implementation tips that apply to any business software installation, it's important to emphasize that WMS are very data dependent and restrictive by design. That is, you need to have all of the various data elements in place for the system to function properly. And, when they are in place, you must operate within the set parameters.

When implementing a WMS, you are adding an additional layer of technology onto your system. And with each layer of technology there is additional overhead and additional sources of potential problems. Now don't take this as a condemnation of Warehouse Management Systems. Coming from a warehousing background I definitely appreciate the functionality WMS have to offer, and, in many warehouses, this functionality is essential to their ability to serve their customers and remain competitive. It's just important to note that every solution has its downsides, and having a good understanding of the potential implications will allow managers to make better decisions related to the levels of technology that best suit their unique environment.
Management Systems Graduation Project: Foreign-Literature Translation

What's New in the .NET Compact Framework 2.0

The .NET Compact Framework version 2.0 provides many improvements over the previous release, the .NET Compact Framework 1.0. Although the improvements are broad, they share common goals: improving developer productivity, providing stronger compatibility with the full .NET Framework, and increasing support for device features. This article provides a high-level summary of the changes and improvements in the .NET Compact Framework 2.0.

User Interface

The small screens of mobile devices demand that applications use the available space efficiently. In the past this required developers to spend a great deal of time designing and implementing the application's user interface. Recent advances in mobile display capabilities, such as high resolution and multiple-orientation support, make user interface development even more challenging. To simplify the task of creating application user interfaces, the .NET Compact Framework 2.0 provides many new features in this area.

Windows Forms Controls

At the center of the user interface are controls, and the .NET Compact Framework 2.0 provides many new ones. The new controls include controls targeted specifically at devices as well as controls that the .NET Compact Framework now shares with the full .NET Framework.

MonthCalendar. The MonthCalendar control is a customizable calendar control for displaying dates, and it is useful for giving users a graphical way to select a date.

DateTimePicker. The DateTimePicker control is a customizable control for displaying date and time information and for letting users enter it. Because it combines a compact display with a graphical date-selection format, it is especially well suited to mobile-device applications. When displaying information, the DateTimePicker control resembles a text box; however, when the user selects a date, a pop-up calendar similar to the MonthCalendar control may be displayed.

WebBrowser. The WebBrowser control encapsulates the device's web browser, providing powerful display capabilities and exposing many events. Besides allowing your application to supply customized behavior in response to these events, these events also let your application track the user's interaction with the content shown in the web browser.
English Reference: Copy of the Original Text and Translation

English Reference: Copy of the Original Text and Translation. Major: Electrical Engineering and Automation. Student: Cao Liqian. Student ID: 100062630. Supervisor: Gao Jingge. Completed: June 2014.

Relays: The Programmable Logic Controller

Early machines were controlled by mechanical means using cams, gears, levers and other basic mechanical devices. As the complexity grew, so did the need for a more sophisticated control system. This system contained wired relay and switch control elements. These elements were wired as required to provide the control logic necessary for the particular type of machine operation. This was acceptable for a machine that never needed to be changed or modified, but as manufacturing techniques improved and plant changeover to new products became more desirable and necessary, a more versatile means of controlling this equipment had to be developed. Hardwired relay and switch logic was cumbersome and time-consuming to modify. Wiring had to be removed and replaced to provide for the new control scheme required. This modification was difficult and time-consuming to design and install, and any small "bug" in the design could be a major problem to correct, since that also required rewiring of the system. A new means to modify control circuitry was needed. The development and testing ground for this new means was the U.S. auto industry. The time period was the late 1960's and early 1970's, and the result was the programmable logic controller, or PLC. Automotive plants were confronted with a change in manufacturing techniques every time a model changed and, in some cases, for changes on the same model if improvements had to be made during the model year. The PLC provided an easy way to reprogram the wiring rather than actually rewiring the control system.

The PLC that was developed during this time was not very easy to program. The language was cumbersome to write and required highly trained programmers. These early devices were merely relay replacements and could do very little else. The PLC has, at first gradually and in recent years rapidly, developed into a sophisticated and highly versatile control system component. Units today are capable of performing complex math functions, including numerical integration and differentiation, and operate at the fast microprocessor speeds now available. Older PLCs were capable of only handling discrete inputs and outputs (that is, on-off type signals), while today's systems can accept and generate analog voltages and currents as well as a wide range of voltage levels and pulsed signals. PLCs are also designed to be rugged. Unlike their personal computer cousin, they can typically withstand vibration, shock, elevated temperatures, and electrical noise to which manufacturing equipment is exposed.

As more manufacturers become involved in PLC production and development, and PLC capabilities expand, the programming language is also expanding. This is necessary to allow the programming of these advanced capabilities. Also, manufacturers tend to develop their own versions of ladder logic language (the language used to program PLCs). This complicates learning to program PLCs in general, since one language cannot be learned that is applicable to all types. However, as with other computer languages, once the basics of PLC operation and programming in ladder logic are learned, adapting to the various manufacturers' devices is not a complicated process.
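As a rough, language-agnostic illustration of the "reprogram rather than rewire" idea described above (real PLCs are programmed in ladder logic, not Python), a relay-replacement rung such as "run the motor when the start button is pressed and the guard is closed" can be expressed as data evaluated on every scan, so changing the control scheme means editing the table rather than the panel wiring. Every name, rung, and I/O point in this sketch is invented for illustration.

```python
# Toy "scan cycle": read inputs, evaluate rung logic, produce outputs.
# The rung table stands in for ladder logic; changing behaviour means
# changing this table, not rewiring relays.
rungs = {
    "motor_run":   lambda i: i["start_pb"] and i["guard_closed"] and not i["stop_pb"],
    "alarm_light": lambda i: not i["guard_closed"],
}

def scan(inputs: dict) -> dict:
    """One PLC-style scan: evaluate every rung against the current inputs."""
    return {output: rung(inputs) for output, rung in rungs.items()}

print(scan({"start_pb": True, "stop_pb": False, "guard_closed": True}))
# {'motor_run': True, 'alarm_light': False}
```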
Most system designers eventually settle on one particular manufacturer that produces a PLC that is personally comfortable to program and has the capabilities suited to his or her area of applications.

It should be noted that in usage, a programmable logic controller is generally referred to as a "PLC" or "programmable controller". Although the term "programmable controller" is generally accepted, it is not abbreviated "PC" because the abbreviation "PC" is usually used in reference to a personal computer. As we will see in this chapter, a PLC is by no means a personal computer.

Programmable controllers (the shortened name used for programmable logic controllers) are much like personal computers in that the user can be overwhelmed by the vast array of options and configurations available. Also, like personal computers, the best teacher of which one to select is experience. As one gains experience with the various options and configurations available, it becomes less confusing to be able to select the unit that will best perform in a particular application.
Graduation Project: Foreign-Literature Translation

June 2014 Undergraduate Graduation Project Foreign-Literature Translation. School code: 10128. Title: Packet Handling Hardware Support.

Packet Handling Hardware Support

Reference: Texas Instruments, CC1101 Low-Power Sub-1 GHz RF Transceiver (datasheet), 2013.

The CC1101 has built-in hardware support for packet-oriented radio protocols.

In transmit mode, the packet handler can be configured to add the following elements to the packet stored in the TX FIFO:
● A programmable number of preamble bytes
● A two-byte synchronization (sync) word. It can be duplicated to give a 4-byte sync word (recommended). It is not possible to insert only a preamble or only a sync word
● A CRC checksum computed over the data field.

The recommended setting is a 4-byte preamble and a 4-byte sync word, except for the 500 kBaud data rate, where the recommended preamble length is 8 bytes. In addition, the following can be implemented on the data field and the optional 2-byte CRC checksum:
● Whitening of the data with a PN9 sequence
● Forward Error Correction (FEC) by the use of interleaving and coding of the data (convolutional coding)

In receive mode, the packet handling support will de-construct the data packet by implementing the following (if enabled):
● Preamble detection
● Sync word detection
● CRC computation and CRC check
● One-byte address check
● Packet length check (length byte checked against a programmable maximum length)
● De-whitening
● De-interleaving and decoding

Optionally, two status bytes (see Table 27 and Table 28) with the RSSI value, Link Quality Indication, and CRC status can be appended in the RX FIFO.

Table 27: Received Packet Status Byte 1 (first byte appended after the data)
  Bits 7:0  RSSI    RSSI value

Table 28: Received Packet Status Byte 2 (second byte appended after the data)
  Bit 7     CRC_OK  1: CRC for received data OK (or CRC disabled); 0: CRC error in received data
  Bits 6:0  LQI     Indicating the link quality

Note: Register fields that control the packet handling features should only be altered when the CC1101 is in the IDLE state.

1. Data Whitening

From a radio perspective, the ideal over-the-air data are random and DC-free. This results in the smoothest power distribution over the occupied bandwidth. This also gives the regulation loops in the receiver uniform operating conditions (no data dependencies). Real data often contain long sequences of zeros and ones. In these cases, performance can be improved by whitening the data before transmitting and de-whitening the data in the receiver. With the CC1101 this can be done automatically. By setting PKTCTRL0.WHITE_DATA=1, all data except the preamble and the sync word will be XOR-ed with a 9-bit pseudo-random (PN9) sequence before being transmitted. This is shown in Figure 16. At the receiver end, the data are XOR-ed with the same pseudo-random sequence. In this way, the whitening is reversed, and the original data appear in the receiver. The PN9 sequence is initialized to all 1's.
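The XOR whitening described above is easy to emulate on the host side. The sketch below assumes the widely used PN9 generator (a 9-bit LFSR with polynomial x^9 + x^5 + 1, seeded with all ones); the exact bit ordering is an assumption, and the datasheet's Figure 16 remains the authoritative definition, so treat this only as an illustration of the XOR-and-reverse idea.

```python
def pn9_bytes(n: int) -> bytes:
    """Generate n bytes of a PN9 sequence (LFSR x^9 + x^5 + 1, seeded with all ones).
    Bit/byte ordering is an assumption; the datasheet's Figure 16 is authoritative."""
    state = 0x1FF
    out = []
    for _ in range(n):
        out.append(state & 0xFF)
        for _ in range(8):
            fb = (state ^ (state >> 5)) & 1
            state = (state >> 1) | (fb << 8)
    return bytes(out)

def whiten(data: bytes) -> bytes:
    """XOR the payload with the PN9 sequence; applying it twice restores the data."""
    return bytes(d ^ p for d, p in zip(data, pn9_bytes(len(data))))

payload = b"\x00\x00\x00\x00"               # long runs of zeros are the worst case
print(whiten(payload).hex())                 # whitened, pseudo-random looking
print(whiten(whiten(payload)) == payload)    # True: de-whitening reverses it
```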
2. Packet Format

The format of the data packet can be configured and consists of the following items (see Figure 17):
● Preamble
● Synchronization word
● Optional length byte
● Optional address byte
● Payload
● Optional 2-byte CRC

The preamble pattern is an alternating sequence of ones and zeros (10101010...). The minimum length of the preamble is programmable through the value of MDMCFG1.NUM_PREAMBLE. When enabling TX, the modulator will start transmitting the preamble. When the programmed number of preamble bytes has been transmitted, the modulator will send the sync word and then data from the TX FIFO, if data is available. If the TX FIFO is empty, the modulator will continue to send preamble bytes until the first byte is written to the TX FIFO. The modulator will then send the sync word and then the data bytes.

The synchronization word is a two-byte value set in the SYNC1 and SYNC0 registers. The sync word provides byte synchronization of the incoming packet. A one-byte sync word can be emulated by setting the SYNC1 value to the preamble pattern. It is also possible to emulate a 32-bit sync word by setting MDMCFG2.SYNC_MODE to 3 or 7. The sync word will then be repeated twice.

The CC1101 supports both constant packet length protocols and variable length protocols. Variable or fixed packet length mode can be used for packets up to 255 bytes. For longer packets, infinite packet length mode must be used.

Fixed packet length mode is selected by setting PKTCTRL0.LENGTH_CONFIG=0. The desired packet length is set by the PKTLEN register. This value must be different from 0.

In variable packet length mode, PKTCTRL0.LENGTH_CONFIG=1, the packet length is configured by the first byte after the sync word. The packet length is defined as the payload data, excluding the length byte and the optional CRC. The PKTLEN register is used to set the maximum packet length allowed in RX. Any packet received with a length byte whose value is greater than PKTLEN will be discarded. The PKTLEN value must be different from 0, and the length byte written to the TX FIFO must be different from 0.

With PKTCTRL0.LENGTH_CONFIG=2, the packet length is set to infinite and transmission and reception will continue until turned off manually. As described in the next section, this can be used to support packet formats with a different length configuration than natively supported by the CC1101. One should make sure that TX is not turned off during the transmission of the first half of any byte. Refer to the CC1101 Errata Notes [4] for more details.

Note: The minimum packet length supported (excluding the optional length byte and CRC) is one byte of payload data.

2.1 Arbitrary Length Field Configuration

The packet length register, PKTLEN, can be reprogrammed during receive and transmit. In combination with fixed packet length mode (PKTCTRL0.LENGTH_CONFIG=0), this opens the possibility of supporting a different length field configuration than the one natively supported for variable length packets (in variable packet length mode the length byte is the first byte after the sync word). At the start of reception, the packet length is set to a large value. The MCU reads out enough bytes to interpret the length field in the packet. Then the PKTLEN value is set according to this value. The end of packet will occur when the byte counter in the packet handler is equal to the PKTLEN register. Thus, the MCU must be able to program the correct length before the internal counter reaches the packet length.
2.2 Packet Length > 255

The packet automation control register, PKTCTRL0, can be reprogrammed during TX and RX. This opens the possibility of transmitting and receiving packets that are longer than 256 bytes while still being able to use the packet handling hardware support. At the start of the packet, the infinite packet length mode (PKTCTRL0.LENGTH_CONFIG=2) must be active. On the TX side, the PKTLEN register is set to mod(length, 256). On the RX side, the MCU reads out enough bytes to interpret the length field in the packet and sets the PKTLEN register to mod(length, 256). When fewer than 256 bytes of the packet remain, the MCU disables infinite packet length mode and activates fixed packet length mode. When the internal byte counter reaches the PKTLEN value, the transmission or reception ends (the radio enters the state determined by TXOFF_MODE or RXOFF_MODE). Automatic CRC appending/checking can also be used (by setting PKTCTRL0.CRC_EN=1).

When, for example, a 600-byte packet is to be transmitted, the MCU should do the following (see also Figure 18):
● Set PKTCTRL0.LENGTH_CONFIG=2.
● Pre-program the PKTLEN register to mod(600, 256) = 88.
● Transmit at least 345 bytes (600 - 255), for example by filling the 64-byte TX FIFO six times (384 bytes transmitted).
● Set PKTCTRL0.LENGTH_CONFIG=0.
● The transmission ends when the packet counter reaches 88. A total of 600 bytes are transmitted.
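A sketch of the host-side bookkeeping for the 600-byte example follows; the only arithmetic involved is the mod-256 split and the point at which the MCU may switch from infinite to fixed length mode. Register writes are shown through a placeholder function, since the actual SPI access depends on the MCU and driver in use.

```python
def write_reg(name: str, value: int) -> None:
    # Placeholder for the MCU's SPI register write; not a real driver call.
    print(f"{name} <= 0x{value:02X}")

def plan_long_tx(total_length: int, fifo_size: int = 64) -> None:
    """Host-side steps for transmitting a packet longer than 255 bytes."""
    write_reg("PKTCTRL0.LENGTH_CONFIG", 2)              # infinite packet length mode
    write_reg("PKTLEN", total_length % 256)              # 600 % 256 == 88
    must_send_in_infinite_mode = total_length - 255      # 345 bytes for a 600-byte packet
    fills = -(-must_send_in_infinite_mode // fifo_size)  # ceil division: 6 FIFO fills
    print(f"stay in infinite mode for at least {must_send_in_infinite_mode} bytes "
          f"({fills} x {fifo_size}-byte FIFO fills = {fills * fifo_size} bytes)")
    write_reg("PKTCTRL0.LENGTH_CONFIG", 0)               # then switch to fixed length mode

plan_long_tx(600)
```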
3. Packet Filtering in Receive Mode

The CC1101 supports three different types of packet filtering: address filtering, maximum length filtering, and CRC filtering.

3.1 Address Filtering

Setting PKTCTRL1.ADR_CHK to any value other than zero enables the packet address filter. The packet handler engine will compare the destination address byte in the packet with the programmed node address in the ADDR register, and with the 0x00 broadcast address when PKTCTRL1.ADR_CHK=10, or with both the 0x00 and 0xFF broadcast addresses when PKTCTRL1.ADR_CHK=11. If the received address matches a valid address, the packet is received and written into the RX FIFO. If the address match fails, the packet is discarded and receive mode is restarted (regardless of the MCSM1.RXOFF_MODE setting).

If the received address matches a valid address when using infinite packet length mode and address filtering is enabled, 0xFF will be written into the RX FIFO, followed by the address byte and then the payload data.

3.2 Maximum Length Filtering

In variable packet length mode, PKTCTRL0.LENGTH_CONFIG=1, the PKTLEN.PACKET_LENGTH register value is used to set the maximum allowed packet length. If the received length byte has a larger value than this, the packet is discarded and receive mode is restarted (regardless of the MCSM1.RXOFF_MODE setting).

3.3 CRC Filtering

The filtering of a packet when the CRC check fails is enabled by setting PKTCTRL1.CRC_AUTOFLUSH=1. The CRC auto-flush function will flush the entire RX FIFO if the CRC check fails. After auto-flushing the RX FIFO, the next state depends on the MCSM1.RXOFF_MODE setting.

When using the auto-flush function, the maximum packet length is 63 bytes in variable packet length mode. Note that when PKTCTRL1.APPEND_STATUS is enabled, the maximum allowed packet length is reduced by two bytes in order to make room in the RX FIFO for the two status bytes appended at the end of the packet. Since the entire RX FIFO is flushed when the CRC check fails, the previously received packet must be read out of the FIFO before receiving the current packet. The MCU must not read from the current packet until the CRC has been checked as OK.

4. Packet Handling in Transmit Mode

The payload that is to be transmitted must be written into the TX FIFO. The first byte written must be the length byte when variable packet length is enabled. The length byte has a value equal to the payload of the packet (including the optional address byte). If address recognition is enabled on the receiver, the second byte written to the TX FIFO must be the address byte.

If fixed packet length is enabled, the first byte written to the TX FIFO should be the address (assuming the receiver uses address recognition).

The modulator will first send the programmed number of preamble bytes. If data is available in the TX FIFO, the modulator will send the two-byte (optionally 4-byte) sync word followed by the payload in the TX FIFO. If CRC is enabled, the checksum is calculated over all the data pulled from the TX FIFO, and the result is sent as two extra bytes following the payload data. If the TX FIFO runs empty before the complete packet has been transmitted, the radio will enter the TXFIFO_UNDERFLOW state. The only way to exit this state is by issuing an SFTX strobe. Writing to the TX FIFO after it has underflowed will not restart TX mode.

If whitening is enabled, everything following the sync words will be whitened. This is done before the optional FEC/interleaver stage. Whitening is enabled by setting PKTCTRL0.WHITE_DATA=1.

If FEC/interleaving is enabled, everything following the sync words will be scrambled by the interleaver and FEC encoded before being modulated. FEC is enabled by setting MDMCFG1.FEC_EN=1.

5. Packet Handling in Receive Mode

In receive mode, the demodulator and packet handler will search for a valid preamble and the sync word. When found, the demodulator has obtained synchronization and will receive the first payload byte.

If FEC/interleaving is enabled, the FEC decoder will start to decode the first payload byte. The interleaver will de-scramble the bits before any other processing is done to the data. If whitening is enabled, the data will be de-whitened at this stage.

When variable packet length mode is enabled, the first byte is the length byte. The packet handler stores this value as the packet length and receives the number of bytes indicated by the length byte. If fixed packet length mode is used, the packet handler will accept the programmed number of bytes.

Next, the packet handler optionally checks the address and only continues the reception if the address matches. If automatic CRC check is enabled, the packet handler computes the CRC and matches it against the appended CRC checksum.

At the end of the payload, the packet handler will optionally write two extra packet status bytes (see Table 27 and Table 28) that contain the CRC status, link quality indication, and RSSI value.
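The two appended status bytes from Table 27 and Table 28 are simple to unpack once a packet has been read from the RX FIFO. The sketch below follows the bit layout given in those tables; converting the raw RSSI field to dBm requires the datasheet's RSSI-offset formula, which is not reproduced here, so the function returns only the raw fields.

```python
def parse_status_bytes(status1: int, status2: int) -> dict:
    """Unpack the two status bytes appended after the payload (Tables 27 and 28)."""
    return {
        "rssi_raw": status1 & 0xFF,        # byte 1, bits 7:0 - raw RSSI value
        "crc_ok":   bool(status2 & 0x80),  # byte 2, bit 7    - CRC OK (or CRC disabled)
        "lqi":      status2 & 0x7F,        # byte 2, bits 6:0 - link quality indication
    }

print(parse_status_bytes(0x34, 0xAF))
# {'rssi_raw': 52, 'crc_ok': True, 'lqi': 47}
```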
6. Packet Handling in Firmware

When implementing a packet-oriented radio protocol in firmware, the MCU needs to know when a packet has been received/transmitted. Additionally, for packets longer than 64 bytes, the TX FIFO needs to be refilled while in TX and the RX FIFO read out while in RX. This means that the MCU needs to know the number of bytes that can be read from or written to the RX FIFO and TX FIFO respectively. There are two possible solutions for getting the necessary status information:

a) Interrupt-Driven Solution

The GDO pins can be used in both RX and TX to give an interrupt when a sync word has been received/transmitted or when a complete packet has been received/transmitted, by setting IOCFGx.GDOx_CFG=0x06. In addition, there are two configurations for the IOCFGx.GDOx_CFG register that can be used as an interrupt source to provide information on how many bytes are in the RX FIFO and TX FIFO respectively. The IOCFGx.GDOx_CFG=0x02 and IOCFGx.GDOx_CFG=0x03 configurations are associated with the TX FIFO. See Table 41 for more information.

b) SPI Polling

The PKTSTATUS register can be polled at a given rate to get information about the current GDO2 and GDO0 values respectively. The RXBYTES and TXBYTES registers can be polled at a given rate to get information about the number of bytes in the RX FIFO and TX FIFO respectively. Alternatively, the number of bytes in the RX FIFO and the TX FIFO can be read from the chip status byte returned on the MISO line each time a header byte, data byte, or command strobe is sent on the SPI bus.

It is recommended to employ an interrupt-driven solution, since high-rate SPI polling reduces the RX sensitivity. Furthermore, as explained in Section 10.3 and the CC1101 Errata Notes [4], when using SPI polling there is a small, but finite, probability that a single read from the PKTSTATUS, RXBYTES, and TXBYTES registers is corrupt. The same is the case when reading the chip status byte. Refer to the TI website for SW examples ([9] and [10]).
Graduation Project (Thesis): Translations of Foreign References

Wuhan Polytechnic University, Graduation Project (Thesis) Foreign Reference Translation, Class of 2011. Source of the original text: IBM SYSTEMS JOURNAL, VOL 35, NOS 3&4, 1996. Thesis title: Design and Implementation of a Music and Image Browser. School (Department): Computer and Information Engineering. Major: Computer Science and Technology. Student: Guo Qian. Student ID: 070501103. Supervisor: Feng Hongcai. Translation requirements: 1. The translated content must be related to the thesis topic (or major); 2. The foreign-literature translation must be no fewer than 4,000 Chinese characters.

Research on Data Hiding Techniques

Data hiding, a form of steganography, embeds data into digital media for the purposes of identification, annotation, and copyright protection. Several constraints affect this process, however: first, the quantity of data that needs to be hidden; and second, the need for this hidden data to remain reliable while the "host" signal is subjected to distortion, for example lossy compression, as well as the degree to which the data must be immune to interception, modification, or removal by a third party. We explore both traditional and novel techniques for addressing the data-hiding problem, and evaluate these techniques in three applications: copyright protection, tamper-proofing, and augmented data embedding.

Digital media can now be obtained very conveniently, which potentially improves their portability, the efficiency with which information is presented, and the accuracy of that presentation. The downside of convenient data access is twofold: an increased chance of copyright violation, and a greater possibility that the content will be tampered with or modified. The goal of this work is to investigate provisions for protecting intellectual property, indications of content modification, and means of adding annotations.

Data hiding represents a class of processes used to insert data, such as copyright information, into various forms of media such as images, audio, or text, using the smallest perceivable change to the "host" signal. In other words, the embedded data should be invisible and inaudible to a human observer. Note that data hiding, while similar to compression, is quite distinct from encryption and decryption. Its goal is not to restrict or regulate access to the "host" signal, but rather to ensure that the embedded data remain intact and recoverable.

Two important applications of data hiding in digital media are providing proof of copyright and assurance of content integrity. Therefore, the data should stay hidden in the "host" signal even if that signal is subjected to degrading manipulation such as filtering, resampling, cropping, or lossy compression.
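As a minimal illustration of the embedding idea (not one of the techniques evaluated in the paper), the sketch below hides bits in the least-significant bits of image samples. LSB replacement meets the "imperceptible to a human observer" requirement, but it does not survive the lossy compression or resampling that the text says real applications must withstand; robust schemes use other carriers.

```python
def embed_lsb(samples, bits):
    """Hide one bit in the least-significant bit of each 8-bit sample."""
    return [(s & 0xFE) | b for s, b in zip(samples, bits)] + samples[len(bits):]

def extract_lsb(samples, n_bits):
    """Recover the first n_bits hidden bits."""
    return [s & 1 for s in samples[:n_bits]]

host = [137, 201, 64, 90, 155, 33, 17, 250]   # e.g. grayscale pixel values
message = [1, 0, 1, 1]
stego = embed_lsb(host, message)
print(stego)                   # host samples change by at most 1
print(extract_lsb(stego, 4))   # [1, 0, 1, 1]
```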
Foreign Literature and Translation: Warehouse Management System (FMS)

Foreign Literature and Translation: Warehouse Management System (FMS)

Overview

This paper presents an RFID-based warehouse management system (FMS) that is scalable and efficient and can be used in a variety of environments. Based on tag-tracking technology, the system automatically monitors the items in the warehouse, improving the efficiency of inventory management. In addition, the system provides multiple quality-control and security measures to ensure that the items in the warehouse are effectively managed and protected.

System Components

The system consists of several components, chiefly RFID readers/writers, tags, sensors, a database, and a user interface. The RFID readers and tags are used to monitor the location and quantity of items in the warehouse. The sensors detect the warehouse's environmental conditions, such as temperature and humidity. The database stores and manages item information and also provides data analysis and reporting functions. The user interface offers a visual, interactive view so that users can monitor the state of the warehouse's items in real time.
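As a sketch of how the reader, tag, and database components described above might interact, the fragment below records tag reads against an in-memory inventory view. All identifiers and the event format are invented; a real deployment would use the reader vendor's API and a persistent database.

```python
from collections import defaultdict
from datetime import datetime

# location -> {tag EPC: last seen timestamp}; stands in for the system's database
inventory = defaultdict(dict)

def on_tag_read(reader_location: str, tag_epc: str) -> None:
    """Handle one read event from an RFID reader and update the inventory view."""
    inventory[reader_location][tag_epc] = datetime.now()

# Simulated read events from two hypothetical readers.
on_tag_read("dock-1", "EPC-000123")
on_tag_read("dock-1", "EPC-000124")
on_tag_read("rack-A3", "EPC-000123")   # the same pallet seen again deeper in the warehouse

for location, tags in inventory.items():
    print(location, sorted(tags))
```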
System Advantages

Compared with traditional warehouse management practices, the system offers the following advantages:
- Real-time monitoring of the location and quantity of items in the warehouse.
- Less manual work, with improved efficiency and accuracy.
- Multiple quality-control and security measures that ensure items are effectively managed and protected.
- High scalability, suitable for many environments.

System Applications

The system can be applied widely across industries and settings, for example:
- Warehousing and logistics
- Pharmaceuticals and life sciences
- Industrial manufacturing
- Customer service and retail

Conclusion

The warehouse management system (FMS) is an efficient RFID-based management system offering real-time monitoring, quality control, and security protection. It can be applied in many industries and settings and is a warehouse management approach worth promoting.
Graduation Thesis: English References and Translations

Inventory managementInventory ControlOn the so-called "inventory control", many people will interpret it as a "storage management", which is actually a big distortion.The traditional narrow view, mainly for warehouse inventory control of materials for inventory, data processing, storage, distribution, etc., through the implementation of anti-corrosion, temperature and humidity control means, to make the custody of the physical inventory to maintain optimum purposes. This is just a form of inventory control, or can be defined as the physical inventory control. How, then, from a broad perspective to understand inventory control? Inventory control should be related to the company's financial and operational objectives, in particular operating cash flow by optimizing the entire demand and supply chain management processes (DSCM), a reasonable set of ERP control strategy, and supported by appropriate information processing tools, tools to achieved in ensuring the timely delivery of the premise, as far as possible to reduce inventory levels, reducing inventory and obsolescence, the risk of devaluation. In this sense, the physical inventory control to achieve financial goals is just a means to control the entire inventory or just a necessary part; from the perspective of organizational functions, physical inventory control, warehouse management is mainly the responsibility of The broad inventory control is the demand and supply chain management, and the whole company's responsibility.Why until now many people's understanding of inventory control, limited physical inventory control? The following two reasons can not be ignored: First, our enterprises do not attach importance to inventory control. Especially those who benefit relatively good business, as long as there is money on the few people to consider the problem of inventory turnover. Inventory control is simply interpreted as warehouse management, unless the time to spend money, it may have been to see the inventory problem, and see the results are often very simple procurement to buy more, or did not do warehouse departments .Second, ERP misleading. Invoicing software is simple audacity to call it ERP, companies on their so-called ERP can reduce the number of inventory, inventory control, seems to rely on their small software can get. Even as SAP, BAAN ERP world, the field of these big boys, but also their simple modules inside the warehouse management functionality is defined as "inventory management" or "inventory control." This makes the already not quite understand what our inventory control, but not sure what is inventory control.In fact, from the perspective of broadly understood, inventory control, should include the following:First, the fundamental purpose of inventory control. We know that the so-called world-class manufacturing, two key assessment indicators (KPI) is, customer satisfaction and inventory turns, inventory turns and this is actually the fundamental objective of inventory control.Second, inventory control means. 
Increase inventory turns, relying solely on the so-called physical inventory control is not enough, it should be the demand and supply chain management process flow of this large output, and this big warehouse management processes in addition to including this link, the more important The section also includes: forecasting and order processing, production planning and control, materials planning and purchasing control, inventory planning and forecasting in itself, as well as finished products, raw materials, distribution and delivery of the strategy, and even customs management processes. And with the demand and supply chain management processes throughout the process, it is the information flow and capital flow management. In other words, inventory itself is across the entire demand and supply management processes in all aspects of inventory control in order to achieve the fundamental purpose, it must control all aspects of inventory, rather than just manage the physical inventory at hand.Third, inventory control, organizational structure and assessment. Since inventory control is the demand and supply chain management processes, output, inventory control to achieve the fundamental purpose of this process must be compatible with a rational organizational structure. Until now, we can seethat many companies have only one purchasing department, purchasing department following pipe warehouse. This is far short of inventory control requirements. From the demand and supply chain management process analysis, we know that purchasing and warehouse management is the executive arm of the typical, and inventory control should focus on prevention, the executive branch is very difficult to "prevent inventory" for the simple reason that they assessment indicators in large part to ensure supply (production, customer). How the actual situation, a reasonable demand and supply chain management processes, and thus set the corresponding rational organizational structure and is a question many of our enterprises to exploreThe role of inventory controlInventory management is an important part of business management. In the production and operation activities, inventory management must ensure that both the production plant for raw materials, spare parts demand, but also directly affect the purchasing, sales of share, sales activities. To make an inventory of corporate liquidity, accelerate cash flow, the security of supply under the premise of minimizing Yaku funds, directly affects the operational efficiency. 
Ensure the production and operation needs of the premise, so keep inventories at a reasonable level; dynamic inventory control, timely, appropriate proposed order to avoid over storage or out of stock; reduce inventory footprint, lower total cost of inventory; control stock funds used to accelerate cash flow.Problems arising from excessive inventory: increased warehouse space and inventory storage costs, thereby increasing product costs; take a lot of liquidity, resulting in sluggish capital, not only increased the burden of payment of interest, etc., would affect the time value of money and opportunity income; finished products and raw materials caused by physical loss and intangible losses; a large number of enterprise resource idle, affecting their rational allocation and optimization; cover the production, operation of the whole process of the various contradictions and problems, is not conducive to improve the management level.Inventory is too small the resulting problems: service levels caused a decline in the profit impact of marketing and corporate reputation; production system caused by inadequate supply of raw materials or other materials, affecting the normal production process; to shorten lead times, increase the number of orders, so order (production) costs; affect the balance of production and assembly of complete sets.NotesInventory management should particularly consider the following two questions:First, according to sales plans, according to the planned production of the goods circulated in the market, we should consider where, how much storage.Second, starting from the level of service and economic benefits to determine how to ensure inventories and supplementary questions.The two problems with the inventory in the logistics process functions. In general, the inventory function:(1) to prevent interrupted. Received orders to shorten the delivery of goods from the time in order to ensure quality service, at the same time to prevent out of stock.(2) to ensure proper inventory levels, saving inventory costs.(3) to reduce logistics costs. Supplement with the appropriate time interval compatible with the reasonable demand of the cargo in order to reduce logistics costs, eliminate or avoid sales fluctuations.(4) ensure the production planning, smooth to eliminate or avoid sales fluctuations.(5) display function.(6) reserve. Mass storage when the price falls, reduce losses, to respond to disasters and other contingencies.About the warehouse (inventory) on what the question, we must consider the number and location. If the distribution center, it should be possible according to customer needs, set at an appropriate place; if it is stored incentral places to minimize the complementary principle to the distribution centers, there is no place certain requirements. When the stock base is established, will have to take into account are stored in various locations in what commodities.库存管理库存控制在谈到所谓“库存控制”的时候,很多人将其理解为“仓储管理”,这实际上是个很大的曲解。
Graduation Project Foreign-Literature Translation (Original + Translation)

Environmental Problems Caused by Istanbul Subway Excavation and Suggestions for Remediation
Ibrahim Ocak

Abstract: Many environmental problems caused by subway excavations have inevitably become an important point in city life. These problems can be categorized as transporting and stocking of excavated material, traffic jams, noise, vibrations, piles of dust and mud, and lack of supplies. Although these problems cause many difficulties, the most pressing for a big city like Istanbul is excavation, since the other listed difficulties result from it. Moreover, these problems are environmentally and regionally restricted to the period over which construction projects are underway and disappear when construction is finished. Currently, in Istanbul, there are nine subway construction projects in operation, covering approximately 73 km in length; over 200 km are to be constructed in the near future. The amount of material excavated from the ongoing construction projects amounts to approximately 12 million m3. In this study, the problems caused by subway excavation, primarily the problem of excavation waste (EW), are analyzed and suggestions for remediation are offered.
河北工程大学毕业论文(设计)英文参考文献原文复印件及译文数据仓库数据仓库为商务运作提供结构与工具,以便系统地组织、理解和使用数据进行决策。
大量组织机构已经发现,在当今这个充满竞争、快速发展的世界,数据仓库是一个有价值的工具。
在过去的几年中,许多公司已花费数百万美元,建立企业范围的数据仓库。
许多人感到,随着工业竞争的加剧,数据仓库成了必备的最新营销武器——通过更多地了解客户需求而保住客户的途径。
“那么”,你可能会充满神秘地问,“到底什么是数据仓库?”数据仓库已被多种方式定义,使得很难严格地定义它。
宽松地讲,数据仓库是一个数据库,它与组织机构的操作数据库分别维护。
数据仓库系统允许将各种应用系统集成在一起,为统一的历史数据分析提供坚实的平台,对信息处理提供支持。
按照W. H. Inmon,一位数据仓库系统构造方面的领头建筑师的说法,“数据仓库是一个面向主题的、集成的、时变的、非易失的数据集合,支持管理决策制定”。
这个简短、全面的定义指出了数据仓库的主要特征。
四个关键词,面向主题的、集成的、时变的、非易失的,将数据仓库与其它数据存储系统(如,关系数据库系统、事务处理系统、和文件系统)相区别。
让我们进一步看看这些关键特征。
(1) 面向主题的:数据仓库围绕一些主题,如顾客、供应商、产品和销售组织。
数据仓库关注决策者的数据建模与分析,而不是构造组织机构的日常操作和事务处理。
因此,数据仓库排除对于决策无用的数据,提供特定主题的简明视图。
(2) 集成的:通常,构造数据仓库是将多个异种数据源,如关系数据库、一般文件和联机事务处理记录,集成在一起。
使用数据清理和数据集成技术,确保命名约定、编码结构、属性度量的一致性等。
(3) 时变的:数据存储从历史的角度(例如,过去5-10 年)提供信息。
数据仓库中的关键结构,隐式或显式地包含时间元素。
(4) 非易失的:数据仓库总是物理地分离存放数据;这些数据源于操作环境下的应用数据。
由于这种分离,数据仓库不需要事务处理、恢复和并行控制机制。
通常,它只需要两种数据访问:数据的初始化装入和数据访问。
概言之,数据仓库是一种语义上一致的数据存储,它充当决策支持数据模型的物理实现,并存放企业决策所需信息。
数据仓库也常常被看作一种体系结构,通过将异种数据源中的数据集成在一起而构造,支持结构化和启发式查询、分析报告和决策制定。
“好”,你现在问,“那么,什么是建立数据仓库?”根据上面的讨论,我们把建立数据仓库看作构造和使用数据仓库的过程。
数据仓库的构造需要数据集成、数据清理、和数据统一。
利用数据仓库常常需要一些决策支持技术。
这使得“知识工人”(例如,经理、分析人员和主管)能够使用数据仓库,快捷、方便地得到数据的总体视图,根据数据仓库中的信息做出准确的决策。
有些作者使用术语“建立数据仓库”表示构造数据仓库的过程,而用术语“仓库DBMS”表示管理和使用数据仓库。
我们将不区分二者。
“组织机构如何使用数据仓库中的信息?”许多组织机构正在使用这些信息支持商务决策活动,包括:(1)、增加顾客关注,包括分析顾客购买模式(如,喜爱买什么、购买时间、预算周期、消费习惯);(2)、根据季度、年、地区的营销情况比较,重新配置产品和管理投资,调整生产策略;(3)、分析运作和查找利润源;(4)、管理顾客关系、进行环境调整、管理合股人的资产开销。
从异种数据库集成的角度看,数据仓库也是十分有用的。
许多组织收集了形形色色数据,并由多个异种的、自治的、分布的数据源维护大型数据库。
集成这些数据,并提供简便、有效的访问是非常希望的,并且也是一种挑战。
数据库工业界和研究界都正朝着实现这一目标竭尽全力。
对于异种数据库的集成,传统的数据库做法是:在多个异种数据库上,建立一个包装程序和一个集成程序(或仲裁程序)。
这方面的例子包括IBM 的数据连接程序和Informix的数据刀。
当一个查询提交客户站点,首先使用元数据字典对查询进行转换,将它转换成相应异种站点上的查询。
然后,将这些查询映射和发送到局部查询处理器。
由不同站点返回的结果被集成为全局回答。
这种查询驱动的方法需要复杂的信息过滤和集成处理,并且与局部数据源上的处理竞争资源。
这种方法是低效的,并且对于频繁的查询,特别是需要聚集操作的查询,开销很大。
对于异种数据库集成的传统方法,数据仓库提供了一个有趣的替代方案。
数据仓库使用更新驱动的方法,而不是查询驱动的方法。
这种方法将来自多个异种源的信息预先集成,并存储在数据仓库中,供直接查询和分析。
与联机事务处理数据库不同,数据仓库不包含最近的信息。
然而,数据仓库为集成的异种数据库系统带来了高性能,因为数据被拷贝、预处理、集成、注释、汇总,并重新组织到一个语义一致的数据存储中。
在数据仓库中进行的查询处理并不影响在局部源上进行的处理。
此外,数据仓库存储并集成历史信息,支持复杂的多维查询。
这样,建立数据仓库在工业界已非常流行。
1.操作数据库系统与数据仓库的区别由于大多数人都熟悉商品关系数据库系统,将数据仓库与之比较,就容易理解什么是数据仓库。
联机操作数据库系统的主要任务是执行联机事务和查询处理。
这种系统称为联机事务处理(OLTP)系统。
它们涵盖了一个组织的大部分日常操作,如购买、库存、制造、银行、工资、注册、记帐等。
另一方面,数据仓库系统在数据分析和决策方面为用户或“知识工人”提供服务。
这种系统可以用不同的格式组织和提供数据,以便满足不同用户的形形色色需求。
这种系统称为联机分析处理(OLAP)系统。
OLTP 和OLAP 的主要区别概述如下。
(1) 用户和系统的面向性:OLTP 是面向顾客的,用于办事员、客户、和信息技术专业人员的事务和查询处理。
OLAP 是面向市场的,用于知识工人(包括经理、主管、和分析人员)的数据分析。
(2) 数据内容:OLTP 系统管理当前数据。
通常,这种数据太琐碎,难以方便地用于决策。
OLAP 系统管理大量历史数据,提供汇总和聚集机制,并在不同的粒度级别上存储和管理信息。
这些特点使得数据容易用于见多识广的决策。
(3) 数据库设计:通常,OLTP 系统采用实体-联系(ER)模型和面向应用的数据库设计。
而OLAP 系统通常采用星形或雪花模型和面向主题的数据库设计。
(4) 视图:OLTP 系统主要关注一个企业或部门内部的当前数据,而不涉及历史数据或不同组织的数据。
相比之下,由于组织的变化,OLAP 系统常常跨越数据库模式的多个版本。
OLAP 系统也处理来自不同组织的信息,由多个数据存储集成的信息。
由于数据量巨大,OLAP 数据也存放在多个存储介质上。
(5)、访问模式:OLTP 系统的访问主要由短的、原子事务组成。
这种系统需要并行控制和恢复机制。
然而,对OLAP系统的访问大部分是只读操作(由于大部分数据仓库存放历史数据,而不是当前数据),尽管许多可能是复杂的查询。
OLTP 和OLAP 的其它区别包括数据库大小、操作的频繁程度、性能度量等。
2.但是,为什么需要一个分离的数据仓库“既然操作数据库存放了大量数据”,你注意到,“为什么不直接在这种数据库上进行联机分析处理,而是另外花费时间和资源去构造一个分离的数据仓库?”分离的主要原因是提高两个系统的性能。
操作数据库是为已知的任务和负载设计的,如使用主关键字索引和散列,检索特定的记录,和优化“罐装的”查询。
另一方面,数据仓库的查询通常是复杂的,涉及大量数据在汇总级的计算,可能需要特殊的数据组织、存取方法和基于多维视图的实现方法。
在操作数据库上处理OLAP 查询,可能会大大降低操作任务的性能。
此外,操作数据库支持多事务的并行处理,需要加锁和日志等并行控制和恢复机制,以确保一致性和事务的强健性。
通常,OLAP 查询只需要对数据记录进行只读访问,以进行汇总和聚集。
如果将并行控制和恢复机制用于这OLAP 操作,就会危害并行事务的运行,从而大大降低OLTP 系统的吞吐量。
最后,数据仓库与操作数据库分离是由于这两种系统中数据的结构、内容和用法都不相同。
决策支持需要历史数据,而操作数据库一般不维护历史数据。
在这种情况下,操作数据库中的数据尽管很丰富,但对于决策,常常还是远远不够的。
决策支持需要将来自异种源的数据统一(如,聚集和汇总),产生高质量的、纯净的和集成的数据。
相比之下,操作数据库只维护详细的原始数据(如事务),这些数据在进行分析之前需要统一。
由于两个系统提供很不相同的功能,需要不同类型的数据,因此需要维护分离的数据库。
Data warehousing provides architectures and tools for business executives to sy stematically organize, understand, and use their data to make strategic decisions. A lar ge number of organizations have found that data warehouse systems are valuable tools in today's competitive, fast evolving world. In the last several years, many firms have spent millions of dollars in building enterprise-wide data warehouses. Many people f eel that with competition mounting in every industry, data warehousing is the latest m ust-have marketing weapon —— a way to keep customers by learning more about the ir needs.“So", you may ask, full of intrigue, “what exactly is a data warehouse?"Data warehouses have been defined in many ways, making it difficult to formulat e a rigorous definition. Loosely speaking, a data warehouse refers to a database that is maintained separately from an organization's operational databases. Data warehouse s ystems allow for the integration of a variety of application systems. They support info rmation processing by providing a solid platform of consolidated, historical data for a nalysis.According to W. H. Inmon, a leading architect in the construction of data wareho use systems, “a data warehouse is a subject-oriented, integrated, time-variant, and non volatile collection of data in support of management's decision making process." This short, but comprehensive definition presents the major features of a data warehouse. T he four keywords, subject-oriented, integrated, time-variant, and nonvolatile, distingui sh data warehouses from other data repository systems, such as relational database systems, transaction processing systems, and file systems. Let's take a closer look at each of these key features.(1).Subject-oriented: A data warehouse is organized around major subjects, such as customer, vendor, product, and sales. Rather than concentrating on the day-to-day o perations and transaction processing of an organization, a data warehouse focuses on t he modeling and analysis of data for decision makers. Hence, data warehouses typical ly provide a simple and concise view around particular subject issues by excluding data that are not useful in the decision support process.(2) Integrated: A data warehouse is usually constructed by integrating multiple he terogeneous sources, such as relational databases, flat files, and on-line transaction rec ords. Data cleaning and data integration techniques are applied to ensure consistency i n naming conventions, encoding structures, attribute measures, and so on.(3).Time-variant: Data are stored to provide information from a historical pers pective (e.g., the past 5-10 years). Every key structure in the data warehouse contains, either implicitly or explicitly, an element of time.(4)Nonvolatile: A data warehouse is always a physically separate store of data tra nsformed from the application data found in the operational environment. Due to this separation, a data warehouse does not require transaction processing, recovery, and co ncurrency control mechanisms. It usually requires only two operations in data accessi ng: initial loading of data and access of data.In sum, a data warehouse is a semantically consistent data store that serves as a p hysical implementation of a decision support data model and stores the information on which an enterprise needs to make strategic decisions. 
A data warehouse is also often viewed as an architecture, constructed by integrating data from multiple heterogeneou s sources to support structured and/or ad hoc queries, analytical reporting, and decisio n making.“OK", you now ask, “what, then, is data warehousing?"Based on the above, we view data warehousing as the process of constructing and using data warehouses. The construction of a data warehouse requires data integratio n, data cleaning, and data consolidation. The utilization of a data warehouse often nec essitates a collection of decision support technologies. This allows “knowledge worke rs" (e.g., managers, analysts, and executives) to use the warehouse to quickly and con veniently obtain an overview of the data, and to make sound decisions based on infor mation in the warehouse. Some authors use the term “data warehousing" to refer onlyto the process of data warehouse construction, while the term warehouse DBMS is use d to refer to the management and utilization of data warehouses. We will not make thi s distinction here.“How are organizations using the information from data warehouses?" Many org anizations are using this information to support business decision making activities, in cluding:(1) increasing customer focus, which includes the analysis of customer buying pa tterns (such as buying preference, buying time, budget cycles, and appetites for spendi ng),(2) repositioning products and managing product portfolios by comparing the per formance of sales by quarter, by year, and by geographic regions, in order to fine-tune production strategies,(3) analyzing operations and looking for sources of profit,(4) managing the customer relationships, making environmental corrections, and managing the cost of corporate assets.Data warehousing is also very useful from the point of view of heterogeneous dat abase integration. Many organizations typically collect diverse kinds of data and main tain large databases from multiple, heterogeneous, autonomous, and distributed infor mation sources. To integrate such data, and provide easy and efficient access to it is hi ghly desirable, yet challenging.Much effort has been spent in the database industry and research community tow ards achieving this goal.The traditional database approach to heterogeneous database integration is to buil d wrappers and integrators (or mediators) on top of multiple, heterogeneous databases . A variety of data joiner and data blade products belong to this category. When a quer y is posed to a client site, a metadata dictionary is used to translate the query into quer ies appropriate for the individual heterogeneous sites involved. These queries are then mapped and sent to local query processors. The results returned from the different sit es are integrated into a global answer set. This query-driven approach requires comple x information filtering and integration processes, and competes for resources with pro cessing at local sources. It is inefficient and potentially expensive for frequent queries, especially for queries requiring aggregations.Data warehousing provides an interesting alternative to the traditional approach o f heterogeneous database integration described above. Rather than using a query-driven approach, data warehousing employs an update-driven approach in which informati on from multiple, heterogeneous sources is integrated in advance and stored in a ware house for direct querying and analysis. 
Unlike on-line transaction processing databases, data warehouses do not contain the most current information. However, a data warehouse brings high performance to the integrated heterogeneous database system since data are copied, preprocessed, integrated, annotated, summarized, and restructured into one semantic data store. Furthermore, query processing in data warehouses does not interfere with the processing at local sources. Moreover, data warehouses can store and integrate historical information and support complex multidimensional queries. As a result, data warehousing has become very popular in industry.

1. Differences between operational database systems and data warehouses

Since most people are familiar with commercial relational database systems, it is easy to understand what a data warehouse is by comparing these two kinds of systems.

The major task of on-line operational database systems is to perform on-line transaction and query processing. These systems are called on-line transaction processing (OLTP) systems. They cover most of the day-to-day operations of an organization, such as purchasing, inventory, manufacturing, banking, payroll, registration, and accounting. Data warehouse systems, on the other hand, serve users or "knowledge workers" in the role of data analysis and decision making. Such systems can organize and present data in various formats in order to accommodate the diverse needs of different users. These systems are known as on-line analytical processing (OLAP) systems.

The major distinguishing features between OLTP and OLAP are summarized as follows.

(1) Users and system orientation: An OLTP system is customer-oriented and is used for transaction and query processing by clerks, clients, and information technology professionals. An OLAP system is market-oriented and is used for data analysis by knowledge workers, including managers, executives, and analysts.

(2) Data contents: An OLTP system manages current data that, typically, are too detailed to be easily used for decision making. An OLAP system manages large amounts of historical data, provides facilities for summarization and aggregation, and stores and manages information at different levels of granularity. These features make the data easier to use in informed decision making.

(3) Database design: An OLTP system usually adopts an entity-relationship (ER) data model and an application-oriented database design. An OLAP system typically adopts either a star or a snowflake model, and a subject-oriented database design.

(4) View: An OLTP system focuses mainly on the current data within an enterprise or department, without referring to historical data or data in different organizations. In contrast, an OLAP system often spans multiple versions of a database schema, due to the evolutionary process of an organization. OLAP systems also deal with information that originates from different organizations, integrating information from many data stores. Because of their huge volume, OLAP data are stored on multiple storage media.

(5) Access patterns: The access patterns of an OLTP system consist mainly of short, atomic transactions. Such a system requires concurrency control and recovery mechanisms.
However, accesses to OLAP systems are mostly read-only operations (since most data warehouses store historical rather than up-to-date information), although many could be complex queries.

Other features which distinguish OLTP and OLAP systems include database size, frequency of operations, performance metrics, and so on.

2. But why have a separate data warehouse?

"Since operational databases store huge amounts of data," you observe, "why not perform on-line analytical processing directly on such databases instead of spending additional time and resources to construct a separate data warehouse?"

A major reason for such a separation is to help promote the high performance of both systems. An operational database is designed and tuned for known tasks and workloads, such as indexing and hashing using primary keys, searching for particular records, and optimizing "canned" queries. On the other hand, data warehouse queries are often complex. They involve the computation of large groups of data at summarized levels, and may require the use of special data organization, access, and implementation methods based on multidimensional views. Processing OLAP queries in operational databases would substantially degrade the performance of operational tasks.

Moreover, an operational database supports the concurrent processing of several transactions. Concurrency control and recovery mechanisms, such as locking and logging, are required to ensure the consistency and robustness of transactions. An OLAP query often needs read-only access to data records for summarization and aggregation. Concurrency control and recovery mechanisms, if applied to such OLAP operations, may jeopardize the execution of concurrent transactions and thus substantially reduce the throughput of an OLTP system.

Finally, the separation of operational databases from data warehouses is based on the different structures, contents, and uses of the data in these two systems. Decision support requires historical data, whereas operational databases do not typically maintain historical data. In this context, the data in operational databases, though abundant, are usually far from complete for decision making. Decision support requires consolidation (such as aggregation and summarization) of data from heterogeneous sources, resulting in high-quality, cleansed, and integrated data. In contrast, operational databases contain only detailed raw data, such as transactions, which need to be consolidated before analysis. Since the two systems provide quite different functionalities and require different kinds of data, it is necessary to maintain separate databases.
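The contrast between the two workloads can be sketched as follows. This is only an illustration built on a hypothetical star schema (a sales_fact table plus a customer_dim dimension; all table names, columns, and figures are invented for the sketch): the OLTP-style access is a short, "canned" primary-key lookup over detailed records, while the OLAP-style access is a read-only aggregation over historical data, summarized by region and year.

```python
import sqlite3

db = sqlite3.connect(":memory:")

# A hypothetical star schema: one fact table referencing one dimension table.
db.executescript("""
CREATE TABLE customer_dim (customer_id INTEGER PRIMARY KEY, name TEXT, region TEXT);
CREATE TABLE sales_fact   (customer_id INTEGER, year INTEGER, amount REAL);
""")
db.executemany("INSERT INTO customer_dim VALUES (?, ?, ?)",
               [(1, "ACME", "east"), (2, "Globex", "west")])
db.executemany("INSERT INTO sales_fact VALUES (?, ?, ?)",
               [(1, 2012, 500.0), (1, 2013, 700.0), (2, 2013, 300.0)])

# OLTP-style access: a short, "canned" primary-key lookup on detailed data.
print(db.execute("SELECT name FROM customer_dim WHERE customer_id = ?",
                 (1,)).fetchone())

# OLAP-style access: a read-only aggregation over historical data,
# summarized by region and year across the star schema.
for row in db.execute("""
    SELECT d.region, f.year, SUM(f.amount)
    FROM sales_fact f JOIN customer_dim d ON f.customer_id = d.customer_id
    GROUP BY d.region, f.year
    ORDER BY d.region, f.year
"""):
    print(row)
```

In a real deployment the fact and dimension tables would live in the separate warehouse store, so aggregations like the one above never compete with the transactional workload for locks, logs, or buffer space, which is precisely the separation argued for in this section.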