Translations of English References in Computer Science - English Papers

Foreign Literature and Translation for Computer Science Majors

Microsoft Visual Studio

Visual Studio is a development environment from Microsoft. It can be used to create Windows applications and web applications for the Windows platform, as well as web services, smart-device applications, and Office add-ins.

Visual Studio is an integrated development environment (IDE) from Microsoft. It is used to develop console and graphical user interface applications, including Windows Forms applications, websites, web applications, and web services, in both native code and managed code, for all platforms supported by Microsoft Windows, Windows Mobile, Windows CE, the .NET Framework, the .NET Compact Framework, and Microsoft Silverlight.

Visual Studio includes a code editor supporting IntelliSense as well as code refactoring.

The integrated debugger works both as a source-level debugger and as a machine-level debugger.

Other built-in tools include a forms designer for building GUI applications, a web designer, a class designer, and a database schema designer.

It accepts plug-ins that enhance functionality at almost every level, including adding support for source-control systems (such as Subversion and Visual SourceSafe) and adding new toolsets such as editors and visual designers for domain-specific languages, or tools for other aspects of the software development lifecycle (for example, Team Explorer, the client for Team Foundation Server).

Visual Studio supports different programming languages by means of language services, which allow the code editor and debugger to support (to varying degrees) nearly any programming language, provided a language-specific service exists.

Built-in languages include C/C++ (via Visual C++), VB.NET (via Visual Basic .NET), C# (via Visual C#), and F# (as of Visual Studio 2010). Support for other languages such as M, Python, and Ruby, among others, is available through separately installed language services.

It also supports XML/XSLT, HTML/XHTML, JavaScript, and CSS. Editions of Visual Studio tailored to particular languages also exist: Microsoft Visual Basic, Visual J#, Visual C#, and Visual C++.

Computer Science Papers in English (Chinese-English Bilingual) [Featured]

Sparse Representation for Computer Vision and Pattern Recognition

Abstract: Techniques from sparse signal representation are beginning to have a significant impact in computer vision, often in non-traditional applications where the goal is not only to obtain a compact, high-fidelity representation of the observed signal, but also to extract semantic information.

The choice of dictionary plays a key role in bridging this gap: unconventional dictionaries consisting of, or learned from, the training samples themselves provide the key to obtaining state-of-the-art results and to attaching semantic meaning to sparse signal representations.

Understanding the good performance of such unconventional dictionaries in turn demands new algorithmic and analytical techniques.

This paper highlights a few representative examples of how the interaction between sparse signal representation and computer vision can enrich both fields, and raises a number of open questions for further study.

Sparse representation has proven to be an extremely powerful tool for acquiring, representing, and compressing high-dimensional signals.

Its success is mainly due to the fact that important classes of signals (such as audio and images) naturally have sparse representations with respect to fixed bases, or concatenations of such bases.

Moreover, efficient and provably effective algorithms based on convex optimization are available for computing such representations.
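For reference, the optimization problem these convex methods address can be written in a generic form; the formulation below is the standard one from the sparse representation literature and is included here only as an illustration. Given an observed signal y and a dictionary D (a fixed basis or a concatenation of bases), one seeks the coefficient vector x with the fewest nonzero entries:

\[ \min_{x}\; \|x\|_{0} \quad \text{subject to} \quad y = Dx \]

Because this ℓ0 problem is combinatorial, it is usually relaxed to the convex ℓ1 problem

\[ \min_{x}\; \|x\|_{1} \quad \text{subject to} \quad \|y - Dx\|_{2} \le \varepsilon \]

which can be solved efficiently and, under suitable conditions on D, recovers the same sparse solution.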

While these successes in classical signal processing are encouraging, in computer vision we are often more interested in the content or semantics of an image than in a compact, high-fidelity representation.

One might therefore reasonably ask whether sparse representation can be useful at all for vision tasks.

The answer has largely been positive: over the past few years, variations and extensions of ℓ1 minimization have been applied to many vision tasks.

The ability of sparse representations to reveal semantic information comes in large part from a simple but important property of the data: although the images themselves live in very high-dimensional spaces, in many applications images of the same class exhibit degenerate, low-dimensional structure.

That is, they lie on or near low-dimensional subspaces or submanifolds.

If a collection of representative samples of that distribution can be found, we should expect a typical sample to have a sparse representation with respect to such a basis.

However, to apply sparse representation successfully in computer vision, we usually have to face an additional problem: how to choose the basis correctly.

This choice differs from the traditional setting in signal processing, where a prescribed basis with good properties can simply be assumed.

In computer vision we often have to learn a task-specific dictionary from sample images, and we have to work with whatever coherence that learned dictionary happens to have.

Therefore, we need to extend the existing theory and algorithms of sparse representation to these new situations.

Automatic face recognition remains one of the most challenging application areas and open problems in computer vision.

Recently, sparse representation has made significant progress on this problem, both in its theoretical foundations and in experiments.

Computer Science Literature Translation

毕业设计(论文)外文资料原文及译文原文出处:《Cloud Computing》作者:M ichael Miller以下为英文原文Beyond the Desktop: An Introduction to Cloud ComputingIn a world that sees new technological trends bloom and fade on almost a daily basis, one new trend promises more longevity. This trend is called cloud computing, and it will change the way you use your computer and the Internet.Cloud computing portends a major change in how we store information and run applications. Instead of running programs and data on an individual desktop computer, everything is hosted in the “cloud”—a nebulous assemblage of computers and servers accessed via the Internet. Cloud computing lets you access all your applications and documents from anywhere in the world, freeing you from the confines of the desktop and making it easier for group members in different locations to collaborate.The emergence of cloud computing is the computing equivalent of the electricity revolution of a century ago. Before the advent of electrical utilities, every farm and business produced its own electricity from freestanding generators. After the electrical grid was created, farms and businesses shut down their generators and bought electricity from the utilities, at a much lower price (and with much greater reliability) than they could produce on their own.Look for the same type of revolution to occur as cloud computing takes hold. Thedesktop-centric notion of computing that we hold today is bound to fall by the wayside as we come to expect the universal access, 24/7 reliability, and ubiquitous collaboration promised by cloud computing.It is the way of the future.Cloud Computing: What It Is—and What It Isn’tWith traditional desktop computing, you run copies of software programs on each computer you own. The documents you create are stored on the computer on which they were created. Although documents can be accessed from other computers on the network, they can’t be accessed by computers outside the network.The whole scene is PC-centric.With cloud computing, the software programs you use aren’t run from your personal computer, but are rather stored on servers accessed via the Internet. If your computer crashes, the software is still available for others to use. Same goes for the documents you create; they’re stored on a collection of servers accessed via the Internet. Anyone with permission can not only access the documents, but can also edit and collaborate on those documents in real time. Unlike traditional computing, this cloud computing model isn’t PC-centric, it’sdocument-centric. Which PC you use to access a document simply isn’t important.But that’s a simplification. Let’s look in more detail at what cloud computing is—and, just as important, what it isn’t.What Cloud Computing Isn’tFirst, cloud computing isn’t network computing. With network computing,applications/documents are hosted on a single company’s server and accessed over the company’s network. Cloud computing is a lot bigger than that. It encompasses multiple companies, multiple servers, and multiple networks. Plus, unlike network computing, cloud services and storage are accessible from anywhere in the world over an Internet connection; with network computing, access is over the company’s network only.Cloud computing also isn’t traditional outsourcing, where a company farms out (subcontracts) its computing services to an outside firm. 
While an outsourcing firm might host a company’s data or applications, those documents and programs are only accessible to the company’s employees via the company’s network, not to the entire world via the Internet.So, despite superficial similarities, networking computing and outsourcing are not cloud computing.What Cloud Computing IsKey to the definition of cloud computing is the “cloud”itself. For our purposes, the cloud is a large group of interconnected computers. These computers can be personal computers or network servers; they can be public or private. For example, Google hosts a cloud that consists of both smallish PCs and larger servers. Google’s cloud is a private one (that is, Google owns it) that is publicly accessible (by Google’s users).This cloud of computers extends beyond a single company or enterprise. The applications and data served by the cloud are available to broad group of users, cross-enterprise andcross-platform. Access is via the Internet. Any authorized user can access these docs and apps from any computer over any Internet connection. And, to the user, the technology and infrastructure behind the cloud is invisible. It isn’t apparent (and, in most cases doesn’t matter) whether cloud services are based on HTTP, HTML, XML, JavaScript, or other specific technologies.It might help to examine how one of the pioneers of cloud computing, Google, perceives the topic. From Google’s perspective, there are six key properties of cloud computing:·Cloud computing is user-centric.Once you as a user are connected to the cloud, whatever is stored there—documents, messages, images, applications, whatever—becomes yours. In addition, not only is the data yours, but you can also share it with others. In effect, any device that accesses your data in the cloud also becomes yours.·Cloud computing is task-centric.Instead of focusing on the application and what it can do, the focus is on what you need done and how the application can do it for you., Traditional applications—word processing, spreadsheets, email, and so on—are becoming less important than the documents they create.·Cloud computing is powerful. Connecting hundreds or thousands of computers together in a cloud creates a wealth of computing power impossible with a single desktop PC. ·Cloud computing is accessible. Because data is stored in the cloud, users can instantly retrieve more information from multiple repositories. You’re not limited to a single source of data, as you are with a desktop PC.·Cloud computing is intelligent. With all the various data stored on the computers in a cloud, data mining and analysis are necessary to access that information in an intelligent manner.·Cloud computing is programmable.Many of the tasks necessary with cloud computing must be automated. For example, to protect the integrity of the data, information stored on a single computer in the cloud must be replicated on other computers in the cloud. If that one computer goes offline, the cloud’s programming automatically redistributes that computer’s data to a new computer in the cloud.All these definitions behind us, what constitutes cloud computing in the real world?As you’ll learn throughout this book, a raft of web-hosted, Internet-accessible,group-collaborative applications are currently available, with many more on the way. Perhaps the best and most popular examples of cloud computing applications today are the Google family of applications—Google Docs & Spreadsheets, Google Calendar, Gmail, Picasa, and the like. 
All of these applications are hosted on Google’s servers, are accessible to any user with an Internet connection, and can be used for group collaboration from anywhere in the world.In short, cloud computing enables a shift from the computer to the user, from applications to tasks, and from isolated data to data that can be accessed from anywhere and shared with anyone. The user no longer has to take on the task of data management; he doesn’t even have to remember where the data is. All that matters is that the data is in the cloud, and thus immediately available to that user and to other authorized users.From Collaboration to the Cloud: A Short History of CloudComputingCloud computing has as its antecedents both client/server computing and peer-to-peer distributed computing. It’s all a matter of how centralized storage facilitates collaboration and how multiple computers work together to increase computing power.Client/Server Computing: Centralized Applications and StorageIn the antediluvian days of computing (pre-1980 or so), everything operated on the client/server model. All the software applications, all the data, and all the control resided on huge mainframe computers, otherwise known as servers. If a user wanted to access specific data or run a program, he had to connect to the mainframe, gain appropriate access, and then do his business while essentially “renting”the program or data from the server.Users connected to the server via a computer terminal, sometimes called a workstation or client. This computer was sometimes called a dumb terminal because it didn’t have a lot (if any!) memory, storage space, or processing power. It was merely a device that connected the user to and enabled him to use the mainframe computer.Users accessed the mainframe only when granted permission, and the information technology (IT) staff weren’t in the habit of handing out access casually. Even on a mainframe computer, processing power is limited—and the IT staff were the guardians of that power. Access was not immediate, nor could two users access the same data at the same time.Beyond that, users pretty much had to take whatever the IT staff gave them—with no variations. Want to customize a report to show only a subset of the normal information? Can’t do it. Want to create a new report to look at some new data? You can’t do it, although the IT staff can—but on their schedule, which might be weeks from now.The fact is, when multiple people are sharing a single computer, even if that computer is a huge mainframe, you have to wait your turn. Need to rerun a financial report? No problem—if you don’t mind waiting until this afternoon, or tomorrow morning. There isn’t always immediate access in a client/server environment, and seldom is there immediate gratification.So the client/server model, while providing similar centralized storage, differed from cloud computing in that it did not have a user-centric focus; with client/server computing, all the control rested with the mainframe—and with the guardians of that single computer. It was not a user-enabling environment.Peer-to-Peer Computing: Sharing ResourcesAs you can imagine, accessing a client/server system was kind of a “hurry up and wait”experience. The server part of the system also created a huge bottleneck. 
All communications between computers had to go through the server first, however inefficient that might be.The obvious need to connect one computer to another without first hitting the server led to the development of peer-to-peer (P2P) computing. P2P computing defines a network architecture inwhich each computer has equivalent capabilities and responsibilities. This is in contrast to the traditional client/server network architecture, in which one or more computers are dedicated to serving the others. (This relationship is sometimes characterized as a master/slave relationship, with the central server as the master and the client computer as the slave.)P2P was an equalizing concept. In the P2P environment, every computer is a client and a server; there are no masters and slaves. By recognizing all computers on the network as peers, P2P enables direct exchange of resources and services. There is no need for a central server, because any computer can function in that capacity when called on to do so.P2P was also a decentralizing concept. Control is decentralized, with all computers functioning as equals. Content is also dispersed among the various peer computers. No centralized server is assigned to host the available resources and services.Perhaps the most notable implementation of P2P computing is the Internet. Many of today’s users forget (or never knew) that the Internet was initially conceived, under its original ARPAnet guise, as a peer-to-peer system that would share computing resources across the United States. The various ARPAnet sites—and there weren’t many of them—were connected together not as clients and servers, but as equals.The P2P nature of the early Internet was best exemplified by the Usenet network. Usenet, which was created back in 1979, was a network of computers (accessed via the Internet), each of which hosted the entire contents of the network. Messages were propagated between the peer computers; users connecting to any single Usenet server had access to all (or substantially all) the messages posted to each individual server. Although the users’connection to the Usenet server was of the traditional client/server nature, the relationship between the Usenet servers was definitely P2P—and presaged the cloud computing of today.That said, not every part of the Internet is P2P in nature. With the development of the World Wide Web came a shift away from P2P back to the client/server model. On the web, each website is served up by a group of computers, and sites’visitors use client software (web browsers) to access it. Almost all content is centralized, all control is centralized, and the clients have no autonomy or control in the process.Distributed Computing: Providing More Computing PowerOne of the most important subsets of the P2P model is that of distributed computing, where idle PCs across a network or across the Internet are tapped to provide computing power for large, processor-intensive projects. It’s a simple concept, all about cycle sharing between multiple computers.A personal computer, running full-out 24 hours a day, 7 days a week, is capable of tremendous computing power. Most people don’t use their computers 24/7, however, so a good portion of a computer’s resources go unused. Distributed computing uses those resources.When a computer is enlisted for a distributed computing project, software is installed on the machine to run various processing activities during those periods when the PC is typically unused. 
The results of that spare-time processing are periodically uploaded to the distributedcomputing network, and combined with similar results from other PCs in the project. The result, if enough computers are involved, simulates the processing power of much larger mainframes and supercomputers—which is necessary for some very large and complex computing projects.For example, genetic research requires vast amounts of computing power. Left to traditional means, it might take years to solve essential mathematical problems. By connecting together thousands (or millions) of individual PCs, more power is applied to the problem, and the results are obtained that much sooner.Distributed computing dates back to 1973, when multiple computers were networked togetherat the Xerox PARC labs and worm software was developed to cruise through the network looking for idle resources. A more practical application of distributed computing appeared in 1988, when researchers at the DEC (Digital Equipment Corporation) System Research Center developed software that distributed the work to factor large numbers among workstations within their laboratory. By 1990, a group of about 100 users, utilizing this software, had factored a 100-digit number. By 1995, this same effort had been expanded to the web to factor a 130-digit number.It wasn’t long before distributed computing hit the Internet. The first major Internet-based distributed computing project was , launched in 1997, which employed thousands of personal computers to crack encryption codes. Even bigger was SETI@home, launched in May 1999, which linked together millions of individual computers to search for intelligent life in outer space.Many distributed computing projects are conducted within large enterprises, using traditional network connections to form the distributed computing network. Other, larger, projects utilize the computers of everyday Internet users, with the computing typically taking place offline, and then uploaded once a day via traditional consumer Internet connections.Collaborative Computing: Working as a GroupFrom the early days of client/server computing through the evolution of P2P, there has been a desire for multiple users to work simultaneously on the same computer-based project. This type of collaborative computing is the driving force behind cloud computing, but has been aroundfor more than a decade.Early group collaboration was enabled by the combination of several different P2P technologies. The goal was (and is) to enable multiple users to collaborate on group projects online, in real time.To collaborate on any project, users must first be able to talk to one another. In today’s environment, this means instant messaging for text-based communication, with optionalaudio/telephony and video capabilities for voice and picture communication. Most collaboration systems offer the complete range of audio/video options, for full-featured multiple-user video conferencing.In addition, users must be able to share files and have multiple users work on the same document simultaneously. Real-time whiteboarding is also common, especially in corporate andeducation environments.Early group collaboration systems ranged from the relatively simple (Lotus Notes and Microsoft NetMeeting) to the extremely complex (the building-block architecture of the Groove Networks system). 
Most were targeted at large corporations, and limited to operation over the companies’private networks.Cloud Computing: The Next Step in CollaborationWith the growth of the Internet, there was no need to limit group collaboration to asingle enterprise’s network environment. Users from multiple locations within a corporation, and from multiple organizations, desired to collaborate on projects that crossed company and geographic boundaries. To do this, projects had to be housed in the “cloud”of the Internet, and accessed from any Internet-enabled location.The concept of cloud-based documents and services took wing with the development of large server farms, such as those run by Google and other search companies. Google already had a collection of servers that it used to power its massive search engine; why not use that same computing power to drive a collection of web-based applications—and, in the process, provide a new level of Internet-based group collaboration?That’s exactly what happened, although Google wasn’t the only company offering cloud computing solutions. On the infrastructure side, IBM, Sun Systems, and other big iron providers are offering the hardware necessary to build cloud networks. On the software side, dozens of companies are developing cloud-based applications and storage services.Today, people are using cloud services and storage to create, share, find, and organize information of all different types. Tomorrow, this functionality will be available not only to computer users, but to users of any device that connects to the Internet—mobile phones, portable music players, even automobiles and home television sets.The Network Is the Computer: How Cloud Computing WorksSun Microsystems’s slogan is “The network is the computer,”and that’s as good as any to describe how cloud computing works. In essence, a network of computers functions as a single computer to serve data and applications to users over the Internet. The network exists in the “cloud”of IP addresses that we know as the Internet, offers massive computing power and storage capability, and enables widescale group collaboration.But that’s the simple explanation. Let’s take a look at how cloud computing works in more detail.Understanding Cloud ArchitectureThe key to cloud computing is the “cloud”—a massive network of servers or even individual PCs interconnected in a grid. These computers run in parallel, combining the resources of eachto generate supercomputing-like power.What, exactly, is the “cloud”? Put simply, the cloud is a collection of computers and servers that are publicly accessible via the Internet. This hardware is typically owned and operated by a third party on a consolidated basis in one or more data center locations. The machines can run any combination of operating systems; it’s the processing power of the machines that matter, not what their desktops look like.As shown in Figure 1.1, individual users connect to the cloud from their own personal computers or portable devices, over the Internet. To these individual users, the cloud is seen as a single application, device, or document. The hardware in the cloud (and the operating system that manages the hardware connections) is invisible.FIGURE 1.1How users connect to the cloud.This cloud architecture is deceptively simple, although it does require some intelligent management to connect all those computers together and assign task processing to multitudes of users. 
As you can see in Figure 1.2, it all starts with the front-end interface seen by individual users. This is how users select a task or service (either starting an application or opening a document). The user’s request then gets passed to the system management, which finds the correct resources and then calls the system’s appropriate provisioning services. These services carve out the necessary resources in the cloud, launch the appropriate web application, and either creates or opens the requested document. After the web application is launched, the system’s monitoring and metering functions track the usage of the cloud so that resources are apportioned and attributed to the proper user(s).FIGURE 1.2The architecture behind a cloud computing system.As you can see, key to the notion of cloud computing is the automation of many management tasks. The system isn’t a cloud if it requires human management to allocate processes to resources. What you have in this instance is merely a twenty-first-century version ofold-fashioned data center–based client/server computing. For the system to attain cloud status, manual management must be replaced by automated processes.Understanding Cloud StorageOne of the primary uses of cloud computing is for data storage. With cloudstorage, data is stored on multiple third-party servers, rather than on the dedicated servers used in traditional networked data storage.When storing data, the user sees a virtual server—that is, it appears as if the data is stored in a particular place with a specific name. But that place doesn’t exist in reality. It’s just a pseudonym used to reference virtual space carved out of the cloud. In reality, the user’s data could be stored on any one or more of the computers used to create the cloud. The actual storage location may even differ from day to day or even minute to minute, as the cloud dynamically manages available storage space. But even though the location is virtual, the user sees a “static”location for his data—and can actually manage his storage space as if it were connected to his own PC.Cloud storage has both financial and security-associated advantages.Financially, virtual resources in the cloud are typically cheaper than dedicated physical resources connected to a personal computer or network. As for security, data stored in the cloud is secure from accidental erasure or hardware crashes, because it is duplicated across multiple physical machines; since multiple copies of the data are kept continually, the cloud continues to function as normal even if one or more machines go offline. If one machine crashes, the data is duplicated on other machines in the cloud.Understanding Cloud ServicesAny web-based application or service offered via cloud computing is called a cloud service. Cloud services can include anything from calendar and contact applications to word processing and presentations. Almost all large computing companies today, from Google to Amazon to Microsoft, are developing various types of cloud services.With a cloud service, the application itself is hosted in the cloud. An individual user runs the application over the Internet, typically within a web browser. The browser accesses the cloud service and an instance of the application is opened within the browser window. Once launched, the web-based application operates and behaves like a standard desktop application. The only difference is that the application and the working documents remain on the host’s cloud servers.Cloud services offer many advantages. 
If the user’s PC crashes, it doesn’t affect either the host application or the open document; both remain unaffected in the cloud. In addition, an individual user can access his applications and documents from any location on any PC. He doesn’t have to have a copy of every app and file with him when he moves from office to home to remote location. Finally, because documents are hosted in the cloud, multiple users can collaborate on the same document in real time, using any available Internet connection. Documents are no longer machine-centric. Instead, they’re always available to any authorized user.Companies in the Cloud: Cloud Computing TodayWe’re currently in the early days of the cloud computing revolution. Although many cloud services are available today, more and more interesting applications are still in development. That said, cloud computing today is attracting the best and biggest companies from across the computing industry, all of whom hope to establish profitable business models based in the cloud.As discussed earlier in this chapter, perhaps the most noticeable company currently embracing the cloud computing model is Google. As you’ll see throughout this book, Google offers a powerful collection of web-based applications, all served via its cloud architecture. Whether you want cloud-based word processing (Google Docs), presentation software (Google Presentations), email (Gmail), or calendar/scheduling functionality (Google Calendar), Google has an offering. And best of all, Google is adept in getting all of its web-based applications to interface with each other; their cloud services are interconnected to the user’s benefit.Other major companies are also involved in the development of cloud services. Microsoft, for example, offers its Windows Live suite of web-based applications, as well as the Live Mesh initiative that promises to link together all types of devices, data, and applications in a common cloud-based platform. Amazon has its Elastic Compute Cloud (EC2), a web service that provides cloud-based resizable computing capacity for application developers. IBM has established a Cloud Computing Center to deliver cloud services and research to clients. And numerous smaller companies have launched their own webbased applications, primarily (but not exclusively) to exploit the collaborative nature of cloud services.As we work through this book, we’ll examine many of these companies and their offerings. All you need to know for now is that there’s a big future in cloud computing—and everybody’s jumping on the bandwagon.Why Cloud Computing MattersWhy is cloud computing important? There are many implications of cloud technology, for both developers and end users.For developers, cloud computing provides increased amounts of storage and processing power to run the applications they develop. Cloud computing also enables new ways to access information, process and analyze data, and connect people and resources from any location anywhere in the world. In essence, it takes the lid off the box; with cloud computing, developers are no longer boxed in by physical constraints.For end users, cloud computing offers all those benefits and more. A person using a web-based application isn’t physically bound to a single PC, location, or network. His applications and documents can be accessed wherever he is, whenever he wants. Gone is the fear of losing data if a computer crashes. Documents hosted in the cloud always exist, no matter what happens to the user’s machine. 
And then there’s the benefit of group collaboration. Users from around the world can collaborate on the same documents, applications, and projects, in real time. It’s a whole new world of collaborative computing, all enabled by the notion of cloud computing. And cloud computing does all this at lower costs, because the cloud enables more efficient sharing of resources than does traditional network computing. With cloud computing, hardware doesn’t have to be physically adjacent to a firm’s office or data center. Cloud infrastructure can be located anywhere, including and especially areas with lower real estate and electricity costs. In addition, IT departments don’t have to engineer for peak-load capacity, because the peak load can be spread out among the external assets in the cloud. And, because additional cloud resources are always at the ready, companies no longer have to purchase assets for infrequent intensive computing tasks. If you need more processing power, it’s always there in the cloud—and accessible on a cost-efficient basis.
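The replication behavior described above, where data kept in the cloud is duplicated across several machines so that one machine going offline loses nothing, can be sketched in a few lines of Java. This is a toy illustration with invented class and method names, not any cloud provider's actual storage API:

import java.util.*;

// Toy sketch: a "cloud" store that keeps every object on several nodes,
// so the data survives when any single node goes offline.
class ReplicatedStore {
    private final List<Map<String, byte[]>> nodes = new ArrayList<>();
    private final int replicas;

    ReplicatedStore(int nodeCount, int replicas) {
        for (int i = 0; i < nodeCount; i++) nodes.add(new HashMap<>());
        this.replicas = replicas;
    }

    // Write the object to `replicas` different nodes chosen by hashing the key.
    void put(String key, byte[] data) {
        for (int i = 0; i < replicas; i++) {
            int node = Math.abs((key.hashCode() + i) % nodes.size());
            nodes.get(node).put(key, data.clone());
        }
    }

    // Read from the first replica that still holds the object.
    byte[] get(String key) {
        for (int i = 0; i < replicas; i++) {
            int node = Math.abs((key.hashCode() + i) % nodes.size());
            byte[] data = nodes.get(node).get(key);
            if (data != null) return data;
        }
        return null;
    }

    // Simulate a machine going offline: its contents vanish, but get()
    // still succeeds because other replicas hold copies.
    void failNode(int node) {
        nodes.get(node).clear();
    }
}

A caller would construct the store with, say, new ReplicatedStore(5, 3), put and get objects by key, and still read its data back after failNode() wipes any single node, which is the essence of the "programmable" cloud property described earlier.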

Foreign Literature Translation for a Computer Science Graduation Thesis

附录(英文翻译)Rich Client Tutorial Part 1The Rich Client Platform (RCP) is an exciting new way to build Java applications that can compete with native applications on any platform. This tutorial is designed to get you started building RCP applications quickly. It has been updated for Eclipse 3.1.2By Ed Burnette, SASJuly 28, 2004Updated for 3.1.2: February 6, 2006IntroductionTry this experiment: Show Eclipse to some friends or co-workers who haven't seen it before and ask them to guess what language it is written in. Chances are, they'll guess VB, C++, or C#, because those languages are used most often for high quality client side applications. Then watch the look on their faces when you tell them it was created in Java, especially if they are Java programmers.Because of its unique open source license, you can use the technologies that went into Eclipse to create your own commercial quality programs. Before version 3.0, this was possible but difficult, especially when you wanted to heavily customize the menus, layouts, and other user interface elements. That was because the "IDE-ness" of Eclipse was hard-wired into it. Version 3.0 introduced the Rich Client Platform (RCP), which is basically a refactoring of the fundamental parts of Eclipse's UI, allowing it to be used for non-IDE applications. Version 3.1 updated RCP with new capabilities, and, most importantly, new tooling support to make it easier to create than before.If you want to cut to the chase and look at the code for this part you can find it in the accompanying zip file. Otherwise, let's take a look at how to construct an RCP application.Getting startedRCP applications are based on the familiar Eclipse plug-in architecture, (if it's not familiar to you, see the references section). Therefore, you'll need to create a plug-in to be your main program. Eclipse's Plug-in Development Environment (PDE) provides a number of wizards and editors that take some of the drudgery out of the process. PDE is included with the Eclipse SDK download so that is the package you should be using. Here are the steps you should follow to get started.First, bring up Eclipse and select File > New > Project, then expand Plug-in Development and double-click Plug-in Project to bring up the Plug-in Project wizard. On the subsequent pages, enter a Project name such as org.eclipse.ui.tutorials.rcp.part1, indicate you want a Java project, select the version of Eclipse you're targeting (at least 3.1), and enable the option to Create an OSGi bundle manifest. Then click Next >.Beginning in Eclipse 3.1 you will get best results by using the OSGi bundle manifest. In contrast to previous versions, this is now the default.In the next page of the Wizard you can change the Plug-in ID and other parameters. Of particular importance is the question, "Would you like to create a rich client application?". Select Yes. The generated plug-in class is optional but for this example just leave all the other options at their default values. Click Next > to continue.If you get a dialog asking if Eclipse can switch to the Plug-in Development Perspective click Remember my decision and select Yes (this is optional).Starting with Eclipse 3.1, several templates have been provided to make creating an RCP application a breeze. We'll use the simplest one available and see how it works. Make sure the option to Create a plug-in using one of the templates is enabled, then select the Hello RCP template. This isRCP's equivalent of "Hello, world". 
Click Finish to accept all the defaults and generate the project (see Figure 1). Eclipse will open the Plug-in Manifest Editor. The Plug-in Manifest editor puts a friendly face on the various configuration files that control your RCP application.Figure 1. The Hello World RCP project was created by a PDE wizard.Taking it for a spinTrying out RCP applications used to be somewhat tedious. You had to create a custom launch configuration, enter the right application name, and tweak the plug-ins that were included. Thankfully the PDE keeps track of all this now. All you have to do is click on the Launch an Eclipse Application button in the Plug-in Manifest editor's Overview page. You should see a bare-bones Workbench start up (see Figure 2).Figure 2. By using thetemplates you can be up andrunning anRCPapplication inminutes.Making it aproductIn Eclipse terms a product is everything that goes with your application, including all the other plug-ins it depends on, a command to run the application (called the native launcher), and any branding (icons, etc.) that make your application distinctive. Although as we've just seen you can run a RCP application without defining a product, having one makes it a whole lot easier to run the application outside of Eclipse. This is one of the major innovations that Eclipse 3.1 brought to RCP development.Some of the more complicated RCP templates already come with a product defined, but the Hello RCP template does not so we'll have to make one.In order to create a product, the easiest way is to add a product configuration file to the project. Right click on the plug-in project and select New > Product Configuration. Then enter a file name for this new configuration file, such as part1.product. Leave the other options at their default values. Then click Finish. The Product Configuration editor will open. This editor lets you control exactly what makes up your product including all its plug-ins and branding elements.In the Overview page, select the New... button to create a new product extension. Type in or browse to the defining plug-in(org.eclipse.ui.tutorials.rcp.part1). Enter a Product ID such as product, and for the Product Application selectorg.eclipse.ui.tutorials.rcp.part1.application. Click Finish to define the product. Back in the Overview page, type in a new Product Name, for example RCP Tutorial 1.In Eclipse 3.1.0 if you create the product before filling inthe Product Name you may see an error appear in the Problems view. The error will go away when you Synchronize (see below). This is a known bug that is fixed in newer versions. Always use the latest available maintenance release for the version of Eclipse you're targeting!Now select the Configuration tab and click Add.... Select the plug-in you just created (org.eclipse.ui.tutorials.rcp.part1) and then click on Add Required Plug-ins. Then go back to the Overview page and press Ctrl+S or File > Save to save your work.If your application needs to reference plug-ins that cannot be determined until run time (for example the tomcat plug-in), then add them manually in the Configuration tab.At this point you should test out the product to make sure it runs correctly. In the Testing section of the Overview page, click on Synchronize then click on Launch the product. If all goes well, the application should start up just like before.Plug-ins vs. featuresOn the Overview page you may have noticed an option that says the product configuration is based on either plug-ins or features. 
The simplest kind of configuration is one based on plug-ins, so that's what this tutorial uses. If your product needs automatic update or Java Web Start support, then eventually you should convert it to use features. But take my advice and get it working without them first.Running it outside of EclipseThe whole point of all this is to be able to deploy and run stand-alone applications without the user having to know anything about the Java and Eclipse code being used under the covers. For a real application you may want to provide a self-contained executable generated by an install program like InstallShield or NSIS. That's really beyond the scope of this article though, so we'll do something simpler.The Eclipse plug-in loader expects things to be in a certain layout so we'll need to create a simplified version of the Eclipse install directory. This directory has to contain the native launcher program, config files,and all the plug-ins required by the product. Thankfully, we've given the PDE enough information that it can put all this together for us now.In the Exporting section of the Product Configuration editor, click the link to Use the Eclipse Product export wizard. Set the root directory to something like RcpTutorial1. Then select the option to deploy into a Directory, and enter a directory path to a temporary (scratch) area such as C:\Deploy. Check the option to Include source code if you're building an open source project. Press Finish to build and export the program.The compiler options for source and class compatibility in the Eclipse Product export wizard will override any options you have specified on your project or global preferences. As part of the Export process, the plug-in is code is recompiled by an Ant script using these options.The application is now ready to run outside Eclipse. When you're done you should have a structure that looks like this in your deployment directory:RcpTutorial1| .eclipseproduct| eclipse.exe| startup.jar+--- configuration| config.ini+--- pluginsmands_3.1.0.jarorg.eclipse.core.expressions_3.1.0.jarorg.eclipse.core.runtime_3.1.2.jarorg.eclipse.help_3.1.0.jarorg.eclipse.jface_3.1.1.jarorg.eclipse.osgi_3.1.2.jarorg.eclipse.swt.win32.win32.x86_3.1.2.jarorg.eclipse.swt_3.1.0.jarorg.eclipse.ui.tutorials.rcp.part1_1.0.0.jarorg.eclipse.ui.workbench_3.1.2.jarorg.eclipse.ui_3.1.2.jarNote that all the plug-ins are deployed as jar files. This is the recommended format starting in Eclipse 3.1. Among other things this saves disk space in the deployed application.Previous versions of this tutorial recommended using a batch file or shell script to invoke your RCP program. It turns out this is a bad idea because you will not be able to fully brand your application later on. For example, you won't be able to add a splash screen. Besides, theexport wizard does not support the batch file approach so just stick with the native launcher.Give it a try! Execute the native launcher (eclipse or eclipse.exe by default) outside Eclipse and watch the application come up. The name of the launcher is controlled by branding options in the product configuration.TroubleshootingError: Launching failed because the org.eclipse.osgi plug-in is not included...You can get this error when testing the product if you've forgotten to list the plug-ins that make up the product. 
In the Product Configuration editor, select the Configuration tab, and add all your plug-ins plus all the ones they require as instructed above.Compatibility and migrationIf you are migrating a plug-in from version 2.1 to version 3.1 there are number of issues covered in the on-line documentation that you need to be aware of. If you're making the smaller step from 3.0 to 3.1, the number of differences is much smaller. See the References section for more information.One word of advice: be careful not to duplicate any information in both plug-in.xml and MANIFEST.MF. Typically this would not occur unless you are converting an older plug-in that did not use MANIFEST.MF into one that does, and even then only if you are editing the files by hand instead of going through the PDE.ConclusionIn part 1 of this tutorial, we looked at what is necessary to create a bare-bones Rich Client application. The next part will delve into the classes created by the wizards such as the WorkbenchAdvisor class. All the sample code for this part may be found in the accompanying zip file.ReferencesRCP Tutorial Part 2RCP Tutorial Part 3Eclipse Rich Client PlatformRCP Browser example (project org.eclipse.ui.examples.rcp.browser)PDE Does Plug-insHow to Internationalize your Eclipse Plug-inNotes on the Eclipse Plug-in ArchitecturePlug-in Migration Guide: Migrating to 3.1 from 3.0Plug-in Migration Guide: Migrating to 3.0 from 2.1译文:Rich Client教程第一部分The Rich Client Platform (RCP)是一种创建Java应用程序的令人兴奋的新方法,可以和任何平台下的自带应用程序进行竞争。
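As a preview of the code the Hello RCP template generates, and of the WorkbenchAdvisor class mentioned above, a minimal Eclipse 3.1-style entry point looks roughly like the sketch below. This is an approximation written from the tutorial's description rather than the wizard's literal output, and the perspective ID is an assumed placeholder:

package org.eclipse.ui.tutorials.rcp.part1;

import org.eclipse.core.runtime.IPlatformRunnable;
import org.eclipse.swt.widgets.Display;
import org.eclipse.ui.PlatformUI;
import org.eclipse.ui.application.WorkbenchAdvisor;

// Sketch of the RCP entry point an Eclipse 3.1 RCP template generates.
public class Application implements IPlatformRunnable {
    public Object run(Object args) throws Exception {
        Display display = PlatformUI.createDisplay();
        try {
            // Hand control to the Workbench; the advisor configures it.
            int code = PlatformUI.createAndRunWorkbench(display,
                    new ApplicationWorkbenchAdvisor());
            return code == PlatformUI.RETURN_RESTART
                    ? IPlatformRunnable.EXIT_RESTART
                    : IPlatformRunnable.EXIT_OK;
        } finally {
            display.dispose();
        }
    }
}

// The advisor tells the Workbench which perspective to show first.
class ApplicationWorkbenchAdvisor extends WorkbenchAdvisor {
    public String getInitialWindowPerspectiveId() {
        // Assumed perspective ID; the wizard generates one for the plug-in.
        return "org.eclipse.ui.tutorials.rcp.part1.perspective";
    }
}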

English Literature for Computer Science

What Is an Object?Objects are key to understanding object-oriented technology. You can look around you now and see many examples of real-world objects: your dog, your desk, your television set, your bicycle.Real-world objects share two characteristics: They all have state and behavior. For example, dogs have state (name, color, breed, hungry) and behavior (barking, fetching, wagging tail). Bicycles have state (current gear, current pedal cadence, two wheels, number of gears) and behavior (braking, accelerating, slowing down, changing gears).Software objects are modeled after real-world objects in that they too have state and behavior. A software object maintains its state in one or more variables.A variable is an item of data named by an identifier. A software object implements its behavior with methods. A method is a function (subroutine) associated with an object.Definition:An object is a software bundle of variables and related methods. You can represent real-world objects by using software objects. You might want to represent real-world dogs as software objects in an animation program or a real-world bicycle as a software object in the program that controls an electronic exercise bike. You can also use software objects to model abstract concepts. For example, an event is a common object used in window systems to represent the action of a user pressing a mouse button or a key on the keyboard. The following illustration is a common visual representation of a software object.A software object.Everything the software object knows (state) and can do (behavior) is expressed by the variables and the methods within that object. A software object that modelsyour real-world bicycle would have variables that indicate the bicycle's current state: Its speed is 18 mph, its pedal cadence is 90 rpm, and its current gear is 5th. These variables are formally known as instance variables because they contain the state for a particular bicycle object; in object-oriented terminology, a particular object is called an instance. The following figure illustrates a bicycle modeled as a software object.A bicycle modeled as a softwareobject.In addition to its variables, the software bicycle would also have methods to brake, change the pedal cadence, and change gears. (It would not have a method for changing its speed because the bike's speed is just a side effect of which gear it's in and how fast the rider is pedaling.) These methods are known formally as instance methods because they inspect or change the state of a particular bicycle instance.Object diagrams show that an object's variables make up the center, or nucleus, of the object. Methods surround and hide the object's nucleus from other objects in the program. Packaging an object's variables within the protective custody of its methods is called encapsulation. This conceptual picture of an object —a nucleus of variables packaged within a protective membrane of methods — is an ideal representation of an object and is the ideal that designers of object-oriented systems strive for. However, it's not the whole story.Often, for practical reasons, an object may expose some of its variables or hide some of its methods. In the Java programming language, an object can specify one of four access levels for each of its variables and methods. The access level determines which other objects and classes can access that variable or method. 
Refer to the Controlling Access to Members of a Class section for details.Encapsulating related variables and methods into a neat software bundle is a simple yet powerful idea that provides two primary benefits to software developers:Modularity:The source code for an object can be written and maintainedindependently of the source code for other objects. Also, an objectcan be easily passed around in the system. You can give your bicycleto someone else, and it will still work.Information-hiding: An object has a public interface that otherobjects can use to communicate with it. The object can maintain privateinformation and methods that can be changed at any time withoutaffecting other objects that depend on it. You don't need to understanda bike's gear mechanism to use it.What Is a Message?A single object alone generally is not very useful. Instead, an object usually appears as a component of a larger program or application that contains many other objects. Through the interaction of these objects, programmers achieve higher-order functionality and more complex behavior. Your bicycle hanging from a hook in the garage is just a bunch of metal and rubber; by itself, it is incapable of any activity; the bicycle is useful only when another object (you) interacts with it (by pedaling).Software objects interact and communicate with each other by sending messages to each other. When object A wants object B to perform one of B's methods, object A sends a message to object B (see the following figure).Objects interact by sending each other messages.Sometimes, the receiving object needs more information so that it knows exactly what to do; for example, when you want to change gears on your bicycle, you have to indicate which gear you want. This information is passed along with the message as parameters.Messages use parameters to pass alongextra information that the objectneeds —in this case, which gear thebicycle should be in.These three parts are enough information for the receiving object to perform the desired method. No other information or context is required.Messages provide two important benefits:An object's behavior is expressed through its methods, so (aside fromdirect variable access) message passing supports all possibleinteractions between objects.Objects don't need to be in the same process or even on the same machineto send messages back and forth and receive messages from each other. What Is a Class?In the real world, you often have many objects of the same kind. For example, your bicycle is just one of many bicycles in the world. Using object-orientedterminology, we say that your bicycle object is an instanceof the class of objects known as bicycles. Bicycles have some state (current gear, current cadence, two wheels) and behavior (change gears, brake) in common. However, each bicycle's state is independent of and can be different from that of other bicycles.When building them, manufacturers take advantage of the fact that bicycles share characteristics, building many bicycles from the same blueprint. It would be very inefficient to produce a new blueprint for every bicycle manufactured.In object-oriented software, it's also possible to have many objects of the same kind that share characteristics: rectangles, employee records, video clips, and so on. 
Like bicycle manufacturers, you can take advantage of the fact that objects of the same kind are similar and you can create a blueprint for those objects.A software blueprint for objects is called a class (see the following figure).A visual representation of a class.Definition: A class is a blueprint that defines the variables and the methods common to all objects of a certain kind.The class for our bicycle example would declare the instance variables necessary to contain the current gear, the current cadence, and so on for each bicycle object. The class would also declare and provide implementations for the instance methods that allow the rider to change gears, brake, and change the pedaling cadence, as shown in the next figure.The bicycle class.After you've created the bicycle class, you can create any number of bicycleobjects from that class. When you create an instance of a class, the system allocates enough memory for the object and all its instance variables. Each instance gets its own copy of all the instance variables defined in the class, as the next figure shows.MyBike and YourBike are two different instances of the Bike class. Each instance has its own copy of the instance variables defined in the Bike class but has different values for these variables.In addition to instance variables, classes can define class variables. A class wariable contains information that is shared by all instances of the class. For example, suppose that all bicycles had the same number of gears. In this case, defining an instance variable to hold the number of gears is inefficient; each instance would have its own copy of the variable, but the value would be the same for every instance. In such situations, you can define a class variable that contains the number of gears (see the following figure); all instances share this variable. If one object changes the variable, it changes for all other objects of that type.YourBike, an instance of Bike, has access to the numberOfGears variable in the Bike class; however, the YourBike instance does not have a copy of this class variable.A class can also declare class methods You can invoke a class method directly from the class, whereas you must invoke instance methods on a particular instance.The Understanding Instance and Class Members section discusses instance variables and methods and class variables and methods in detail.Objects provide the benefit of modularity and information-hiding. Classes provide the benefit of reusability. Bicycle manufacturers use the same blueprint over and over again to build lots of bicycles. Software programmers use the same class, and thus the same code, over and over again to create many objects.Objects versus ClassesYou've probably noticed that the illustrations of objects and classes look very similar. And indeed, the difference between classes and objects is often the source of some confusion. In the real world, it's obvious that classes are not themselves the objects they describe; that is, a blueprint of a bicycle is not a bicycle. However, it's a little more difficult to differentiate classes and objects in software. This is partially because software objects are merelyelectronic models of real-world objects or abstract concepts in the first place. But it's also because the term object is sometimes used to refer to both classes and instances.In illustrations such as the top part of the preceding figure, the class is not shaded because it represents a blueprint of an object rather than the object itself. 
In comparison, an object is shaded, indicating that the object exists and that you can use it.What Is Inheritance?Generally speaking, objects are defined in terms of classes. You know a lot about an object by knowing its class. Even if you don't know what a penny-farthing is, if I told you it was a bicycle, you would know that it had two wheels, handlebars, and pedals.Object-oriented systems take this a step further and allow classes to be defined in terms of other classes. For example, mountain bikes, road bikes, and tandems are all types of bicycles. In object-oriented terminology, mountain bikes, road bikes, and tandems are all subclasses of the bicycle class. Similarly, the bicycle class is the supclasses of mountain bikes, road bikes, and tandems. This relationship is shown in the following figure.The hierarchy of bicycle classes.Each subclass inherits state (in the form of variable declarations) from the superclass. Mountain bikes, road bikes, and tandems share some states: cadence, speed, and the like. Also, each subclass inherits methods from the superclass. Mountain bikes, road bikes, and tandems share some behaviors — braking and changing pedaling speed, for example.However, subclasses are not limited to the states and behaviors provided to them by their superclass. Subclasses can add variables and methods to the ones they inherit from the superclass. Tandem bicycles have two seats and two sets of handlebars; some mountain bikes have an additional chain ring, giving them a lower gear ratio.Subclasses can also override inherited methods and provide specialized implementations for those methods. For example, if you had a mountain bike with an additional chain ring, you could override the "change gears" method so that the rider could shift into those lower gears.You are not limited to just one layer of inheritance. The inheritance tree, or class hierardry, can be as deep as needed. Methods and variables are inherited down through the levels. In general, the farther down in the hierarchy a class appears, the more specialized its behavior.Note:Class hierarchies should reflect what the classes are, not how they're implemented. When implementing a tricycle class, it might be convenient to make it a subclass of the bicycle class —after all, both tricycles and bicycles have a current speed and cadence. However, because a tricycle is not a bicycle, it's unwise to publicly tie the two classes together. It could confuse users, make the tricycle class have methods (for example, to change gears) that it doesn't need, and make updating or improving the tricycle class difficult.The Object class is at the top of class hierarchy, and each class is its descendant (directly or indirectly). A variable of type Object can hold a reference to any object, such as an instance of a class or an array. Object provides behaviors that are shared by all objects running in the Java Virtual Machine. For example, all classes inherit Object's toString method, which returns a string representation of the object. The Managing Inheritance section covers the Object class in detail.Inheritance offers the following benefits:Subclasses provide specialized behaviors from the basis of commonelements provided by the superclass. Through the use of inheritance,programmers can reuse the code in the superclass many times.Programmers can implement superclasses called abstract classes thatdefine common behaviors. 
The abstract superclass defines and maypartially implement the behavior, but much of the class is undefinedand unimplemented. Other programmers fill in the details withspecialized subclasses.What Is an Interface?In general, an interface is a device or a system that unrelated entities use to interact. According to this definition, a remote control is an interface between you and a television set, the English language is an interface between two people, and the protocol of behavior enforced in the military is the interface between individuals of different ranks.Within the Java programming language, an interface is a type, just as a class is a type. Like a class, an interface defines methods. Unlike a class, an interface never implements methods; instead, classes that implement the interface implement the methods defined by the interface. A class can implement multiple interfaces.The bicycle class and its class hierarchy define what a bicycle can and cannot do in terms of its "bicycleness." But bicycles interact with the world on other terms. For example, a bicycle in a store could be managed by an inventory program. An inventory program doesn't care what class of items it manages as long as each item provides certain information, such as price and tracking number. Instead of forcing class relationships on otherwise unrelated items, the inventory program sets up a communication protocol. This protocol comes in the form of a set of method definitions contained within an interface. The inventory interface would define, but not implement, methods that set and get the retail price, assign a tracking number, and so on.计算机专业中英文文献翻译To work in the inventory program, the bicycle class must agree to this protocol by implementing the interface. When a class implements an interface, the class agrees to implement all the methods defined in the interface. Thus, the bicycle class would provide the implementations for the methods that set and get retail price, assign a tracking number, and so on.You use an interface to define a protocol of behavior that can be implemented by any class anywhere in the class hierarchy. Interfaces are useful for the following:Capturing similarities among unrelated classes without artificiallyforcing a class relationshipDeclaring methods that one or more classes are expected to implementRevealing an object's programming interface without revealing itsclassModeling multiple inheritance, a feature of some object-orientedlanguages that allows a class to have more than one superclass。
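To tie these ideas together, here is a short, illustrative Java sketch based on the bicycle example used throughout this text; the class names, fields, and the Tracked interface are invented for illustration and are not part of any library:

// Illustrative sketch: a class with state (instance variables) and behavior
// (instance methods), a shared class variable, a subclass, and an interface.

// An interface defines a protocol of behavior without implementing it;
// an inventory program could rely on it without knowing the item's class.
interface Tracked {
    void setTrackingNumber(String id);
    String getTrackingNumber();
}

class Bicycle implements Tracked {
    // Class variable: one copy shared by every Bicycle instance.
    static int numberOfGears = 18;

    // Instance variables: each object keeps its own copy of this state.
    int cadence = 0;
    int speed = 0;
    int gear = 1;
    private String trackingNumber = "";

    // Instance methods: behavior that inspects or changes this object's state.
    void changeCadence(int newValue) { cadence = newValue; }
    void changeGear(int newValue)    { gear = newValue; }
    void applyBrakes(int decrement)  { speed -= decrement; }
    void speedUp(int increment)      { speed += increment; }

    // Implementing the interface means agreeing to provide all its methods.
    public void setTrackingNumber(String id) { trackingNumber = id; }
    public String getTrackingNumber()        { return trackingNumber; }
}

// A subclass inherits the superclass's state and behavior, adds what it
// needs, and may override inherited methods with specialized versions.
class MountainBike extends Bicycle {
    int seatHeight = 40;

    @Override
    void changeGear(int newValue) {
        // Specialized implementation: clamp to the valid range of gears.
        gear = Math.max(1, Math.min(newValue, numberOfGears));
    }
}

class Demo {
    public static void main(String[] args) {
        Bicycle myBike = new MountainBike();   // an instance of the subclass
        myBike.changeCadence(90);              // sending a "message" with a parameter
        myBike.changeGear(5);
        myBike.setTrackingNumber("SKU-001");
        System.out.println(myBike.speed + " " + myBike.gear + " " + myBike.getTrackingNumber());
    }
}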


Foreign Literature Translation for Computer Science (Complete)

Graduation Project (Thesis) Foreign Literature Translation
Major: Computer Science and Technology
Name: Wang Chengming
Student No.: 06120186
Source: The History of the Internet
Attachments: 1. Original text  2. Translation of the original text

Attachment 1: Original Text

The History of the Internet

The Beginning - ARPAnet

The Internet started as a project by the US government. The object of the project was to create a means of communication between long-distance points in the event of a nationwide emergency or, more specifically, nuclear war. The project was called ARPAnet, and it is what the Internet started as. Because it was funded specifically for military communication, the engineers responsible for ARPAnet had no idea of the possibilities of an "Internet."

By definition, an 'Internet' is four or more computers connected by a network.

ARPAnet achieved its network by using a protocol called TCP/IP. The basic idea behind this protocol was that if information sent over a network failed to get through on one route, it would find another route to work with, as well as establishing a means for one computer to "talk" to another computer, regardless of whether it was a PC or a Macintosh.

By the '80s ARPAnet, just years away from becoming the more well known Internet, had 200 computers. The Defense Department, satisfied with ARPAnet's results, decided to fully adopt it into service, and connected many military computers and resources into the network. ARPAnet then had 562 computers on its network. By the year 1984, it had over 1,000 computers on its network.

In 1986 ARPAnet (supposedly) shut down, but only the organization shut down; the existing networks still existed between the more than 1,000 computers. It shut down due to a failed link-up with the NSF, which wanted to connect its five countrywide supercomputers into ARPAnet.

With the funding of the NSF, new high-speed lines were successfully installed at line speeds of 56k (a normal modem nowadays) through telephone lines in 1988. By that time, there were 28,174 computers on the (by then decided) Internet. In 1989 there were 80,000 computers on it; soon after, there were 290,000.

Another network was built to support the incredible number of people joining. It was constructed in 1992.

Today - The Internet

Today, the Internet has become one of the most important technological advancements in the history of humanity. Everyone wants to get 'on line' to experience the wealth of information of the Internet. Millions of people now use the Internet, and it's predicted that by the year 2003 every single person on the planet will have Internet access. The Internet has truly become a way of life in our time and era, and is evolving so quickly it's hard to determine where it will go next, as computer and network technology improve every day.

HOW IT WORKS:

It's a standard thing: people using the Internet, shopping, playing games, conversing in virtual Internet environments.

The Internet is not a 'thing' itself. The Internet cannot just "crash." It functions the same way as the telephone system, only there is no Internet company that runs the Internet.

The Internet is a collection of millions of computers that are all connected to each other, or have the means to connect to each other. The Internet is just like an office network, only it has millions of computers connected to it.

The main thing about how the Internet works is communication. How does a computer in Houston know how to access data on a computer in Tokyo to view a webpage?

Internet communication, communication among computers connected to the Internet, is based on a language. This language is called TCP/IP.
TCP/IP establishes a language for a computer to access and transmit data over the Internet system.

But TCP/IP assumes that there is a physical connection between one computer and another. This is not usually the case. There would have to be a network wire that went to every computer connected to the Internet, but that would make the Internet impossible to access.

The physical connection that is required is established by way of modems, phone lines, and other cable connections (like cable modems or DSL). Modems on computers read and transmit data over established lines, which could be phone lines or data lines. The actual hard-core connections are established among computers called routers.

A router is a computer that serves as a traffic controller for information.

To explain this better, let's look at how a standard computer might view a webpage.

1. The user's computer dials into an Internet Service Provider (ISP). The ISP might in turn be connected to another ISP, or have a straight connection into the Internet backbone.

2. The user launches a web browser like Netscape or Internet Explorer and types in an Internet location to go to.

3. Here's where the tricky part comes in. First, the computer sends data about its data request to a router. A router is a very high-speed, powerful computer running special software. The collection of routers in the world makes up what is called a "backbone," on which all the data on the Internet is transferred. The backbone presently operates at a speed of several gigabytes per second. Such a speed compared to a normal modem is like comparing the heat of the sun to the heat of an ice cube. Routers handle data that is going back and forth. A router puts small chunks of data into packages called packets, which function similarly to envelopes. So, when the request for the webpage goes through, it uses TCP/IP protocols to tell the router what to do with the data, where it's going, and overall where the user wants to go.

4. The router sends these packets to other routers, eventually leading to the target computer. It's like whisper down the lane (only the information remains intact).

5. When the information reaches the target web server, the web server then begins to send the web page back. A web server is the computer where the webpage is stored, running a program that handles requests for the webpage and sends the webpage to whoever wants to see it.

6. The webpage is put in packets, sent through routers, and arrives at the user's computer, where the user can view the webpage once it is assembled.

The packets which contain the data also contain special information that lets routers and other computers know how to reassemble the data in the right order.
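As a rough illustration of steps 1–6 above, the following sketch opens a TCP connection to a web server and issues a plain HTTP/1.0 request; TCP/IP takes care of splitting the request and the response into packets and reassembling them. The host name example.com is only a placeholder.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class FetchPage {
    public static void main(String[] args) throws IOException {
        String host = "example.com";                 // placeholder web server
        try (Socket socket = new Socket(host, 80)) { // the connection the routers carry (steps 3-4)
            PrintWriter out = new PrintWriter(socket.getOutputStream());
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));

            // The browser-equivalent request, expressed in the HTTP protocol (step 3).
            out.print("GET / HTTP/1.0\r\n");
            out.print("Host: " + host + "\r\n\r\n");
            out.flush();

            // The web server sends the page back and we read it line by line (steps 5-6).
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```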
With millions of web pages and millions of users, using the Internet is not always easy for a beginning user, especially for someone who is not entirely comfortable with using computers. Below you can find tips, tricks, and help on how to use the main services of the Internet.

Before you access webpages, you must have a web browser to actually be able to view the webpages. Most Internet Access Providers provide you with a web browser in the software they usually give to customers. The fact that you are reading this right now means that you have a web browser. The two most used browsers are Netscape Communicator and Microsoft Internet Explorer; Netscape can be found on Netscape's website, and MSIE on Microsoft's.

Next you must be familiar with actually using webpages. A webpage is a collection of hyperlinks, images, text, forms, menus, and multimedia. To "navigate" a webpage, simply click the links it provides or follow its own instructions (for example, if it has a form you need to use, it will probably instruct you how to use it). Basically, everything about a webpage is made to be self-explanatory. That is the nature of a webpage: to be easily navigable.

"Oh no! A 404 error! 'Cannot find web page?'" is a common remark made by new web users. Sometimes websites have errors, but an error on a website is not the user's fault, of course. A 404 error means that the page you tried to go to does not exist. This could be because the site is still being constructed and the page hasn't been created yet, or because the site author made a typo in the page. There's nothing much to do about a 404 error except e-mail the site administrator (of the page you wanted to go to) and tell him/her about the error.

A Javascript error is the result of a programming error in the Javascript code of a website. Not all websites utilize Javascript, but many do. Javascript is different from Java, and most browsers now support Javascript. If you are using an old version of a web browser (Netscape 3.0, for example), you might get Javascript errors because sites utilize Javascript versions that your browser does not support. So, you can try getting a newer version of your web browser.
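Since a 404 simply means the server could not find the requested page, a program can detect it by reading the status code of the HTTP response. Here is a minimal sketch using Java's standard HttpURLConnection class; the address is only a placeholder.

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class CheckPage {
    public static void main(String[] args) throws Exception {
        // Placeholder address; any page path that does not exist will do for a test.
        URL url = new URL("http://example.com/no-such-page.html");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        int code = conn.getResponseCode();   // 200 = OK, 404 = page not found
        if (code == HttpURLConnection.HTTP_NOT_FOUND) {
            System.out.println("404: the page does not exist on this server.");
        } else {
            System.out.println("Server answered with status " + code);
        }
        conn.disconnect();
    }
}
```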
E-mail stands for Electronic Mail, and that's what it is. E-mail enables people to send letters, and even files and pictures, to each other.

To use e-mail, you must have an e-mail client, which is just like a personal post office, since it retrieves and stores e-mail. Secondly, you must have an e-mail account. Most Internet Service Providers provide e-mail account(s) for free. Some services offer free e-mail, like Hotmail and Geocities. After configuring your e-mail client with your POP3 and SMTP server address (your e-mail provider will give you that information), you are ready to receive mail.

An attachment is a file sent in a letter. If someone sends you an attachment and you don't know who it is, don't run the file, ever. It could be a virus or some other kind of nasty program. You can't get a virus just by reading e-mail; you'll have to physically execute some form of program for a virus to strike.

A signature is a feature of many e-mail programs. A signature is added to the end of every e-mail you send out. You can put a text graphic, your business information, anything you want.

Imagine that a computer on the Internet is an island in the sea. The sea is filled with millions of islands. This is the Internet. Imagine an island communicates with other islands by sending ships to other islands and receiving ships. The island has ports to accept and send out ships.

A computer on the Internet has access nodes called ports. A port is just a symbolic object that allows the computer to operate on a network (or the Internet). This method is similar to the island/ocean symbolism above.

Telnet refers to accessing ports on a server directly with a text connection. Almost every kind of Internet function, like accessing web pages, "chatting," and e-mailing, is done over a Telnet connection.

Telnetting requires a Telnet client. A telnet program comes with the Windows system, so Windows users can access telnet by typing "telnet" (without the quotes) in the run dialog. Linux has it built into the command line: telnet. A popular telnet program for Macintosh is NCSA Telnet.

Any server software (web page daemon, chat daemon) can be accessed via telnet, although it is not usually meant to be accessed in such a manner. For instance, it is possible to connect directly to a mail server and check your mail by interfacing with the e-mail server software, but it's easier to use an e-mail client (of course).

There are millions of webpages that come from all over the world, yet how will you know what the address of a page you want is?

Search engines save the day. A search engine is a very large website that allows you to search its own database of websites. For instance, if you wanted to find a website on dogs, you'd search for "dog" or "dogs" or "dog information." Here are a few search engines:

1. Altavista - Web spider & Indexed
2. Yahoo - Web spider & Indexed Collection
3. Excite - Web spider & Indexed
4. Lycos - Web spider & Indexed
5. Metasearch - Multiple search

A web spider is a program used by search engines that goes from page to page, following any link it can possibly find. This means that a search engine can literally map out as much of the Internet as its own time and speed allow for.

An indexed collection uses hand-added links. For instance, on Yahoo's site you can click on Computers & the Internet, then on Hardware, then on Modems, etc., and along the way through the sections there are sites available which relate to the section you're in.

Metasearch searches many search engines at the same time, finding the top choices from about 10 search engines, making searching a lot more effective.

Once you are able to use search engines, you can effectively find the pages you want.

With the arrival of networking and multi-user systems, security has always been on the mind of system developers and system operators. Since the dawn of AT&T and its phone network, hackers have been known by many, hackers who find ways all the time of breaking into systems. It used to not be that big of a problem, since networking was limited to big corporate companies or government computers that could afford the necessary computer security.

The biggest problem nowadays is personal information. Why should you be careful while making purchases via a website? Let's look quickly at how the Internet works.

The user is transferring credit card information to a webpage. Looks safe, right? Not necessarily. As the user submits the information, it is being streamed through a series of computers that make up the Internet backbone. The information is in little chunks, in packages called packets. Here's the problem: while the information is being transferred through this big backbone, what is preventing a "hacker" from intercepting this data stream at one of the backbone points?

Big Brother is not watching you if you access a web site, but users should be aware of potential threats while transmitting private information. There are methods of enforcing security, like password protection and, most importantly, encryption.

Encryption means scrambling data into a code that can only be unscrambled on the "other end."
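As an illustration of scrambling data so that only the holder of the key can unscramble it, here is a small sketch using Java's standard javax.crypto API with the DES algorithm that the next paragraph discusses. DES is used here only because the article mentions it; it is no longer considered a strong cipher, and the plaintext string is just a placeholder.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class ScrambleDemo {
    public static void main(String[] args) throws Exception {
        // The single key that both locks and unlocks the data.
        KeyGenerator keyGen = KeyGenerator.getInstance("DES");
        SecretKey key = keyGen.generateKey();

        Cipher cipher = Cipher.getInstance("DES");

        // Scramble ("encrypt") on the sender's end...
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] scrambled = cipher.doFinal("credit card number".getBytes("UTF-8"));

        // ...and unscramble ("decrypt") on the other end with the same key.
        cipher.init(Cipher.DECRYPT_MODE, key);
        byte[] original = cipher.doFinal(scrambled);

        System.out.println(new String(original, "UTF-8"));
    }
}
```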
Browsers like Netscape Communicator and Internet Explorer feature encryption support for making on-line transfers. Some encryptions work better than others. The most advanced encryption system is called DES (Data Encryption Standard), and it was adopted by the US Defense Department because it was deemed so difficult to 'crack' that they considered it a security risk if it were to fall into another country's hands.

DES uses a single key of information to unlock an entire document. The problem is, there are roughly 72 quadrillion (2^56) possible keys to use, so it is a highly difficult system to break. One document was cracked and decoded, but it was a combined effort of 14,000 computers networked over the Internet that took a while to do it, so most hackers don't have that many resources available.

Attachment 2: Translation of the Foreign Material

The History of the Internet

The Beginning - ARPAnet

The Internet was developed by the US government as a project.

Computer Science Paper (Chinese-English)


Understanding Web Addresses

You can think of the World Wide Web as a network of electronic files stored on computers all around the world. Hypertext links these resources together. Uniform Resource Locators, or URLs, are the addresses used to locate these files. The information contained in a URL gives you the ability to jump from one web page to another with just a click of your mouse. When you type a URL into your browser or click on a hypertext link, your browser is sending a request to a remote computer to download a file.

What does a typical URL look like? Here are some examples:

/ The home page for study English.
ftp:///pub/ A directory of files at MIT available for downloading.
news:rec.gardens.roses A newsgroup on rose gardening.

The first part of a URL (before the two slashes) tells you the type of resource or method of access at that address. For example:

- http - a hypertext document or directory
- gopher - a gopher document or menu
- ftp - a file available for downloading or a directory of such files
- news - a newsgroup
- telnet - a computer system that you can log into over the Internet
- WAIS - a database or document in a Wide Area Information Search database
- file - a file located on a local drive (your hard drive)
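The pieces of a URL described above can also be pulled apart programmatically; for example, Java's java.net.URL class exposes the protocol, host, and path. The address used below is only a placeholder with the typical protocol://host/path shape.

```java
import java.net.URL;

public class UrlParts {
    public static void main(String[] args) throws Exception {
        // Placeholder address illustrating the protocol://host/path structure.
        URL url = new URL("http://www.example.com/pub/index.html");

        System.out.println("Protocol: " + url.getProtocol()); // e.g. http or ftp
        System.out.println("Host:     " + url.getHost());     // the remote computer
        System.out.println("Path:     " + url.getPath());     // the file being requested
    }
}
```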

Chinese-English Literature Translation for Computer Science Majors


In the past decade the business environment has changed dramatically. The world has become a small and very dynamic marketplace. Organizations today confront new markets, new competition and increasing customer expectations. This has put a tremendous demand on manufacturers to:

1) Lower total costs in the complete supply chain
2) Shorten throughput times
3) Reduce stock to a minimum
4) Enlarge product assortment
5) Improve product quality
6) Provide more reliable delivery dates and higher service to the customer
7) Efficiently coordinate global demand, supply and production

Thus today's organizations have to constantly re-engineer their business practices and procedures to be more and more responsive to customers and competition. In the 1990s information technology and business process re-engineering, used in conjunction with each other, emerged as important tools which give organizations the leading edge.

ERP Systems Evolution

The focus of manufacturing systems in the 1960s was on inventory control. Most of the software packages then (usually customized) were designed to handle inventory based on traditional inventory concepts. In the 1970s the focus shifted to MRP (Material Requirements Planning) systems, which translated the Master Schedule built for the end items into time-phased net requirements for the sub-assemblies, components and raw materials planning and procurement.

In the 1980s the concept of MRP-II (Manufacturing Resource Planning) evolved, which was an extension of MRP to shop floor and distribution management activities. In the early 1990s, MRP-II was further extended to cover areas like Engineering, Finance, Human Resources, Project Management, etc., i.e. the complete gamut of activities within any business enterprise. Hence, the term ERP (Enterprise Resource Planning) was coined.

In addition to system requirements, ERP addresses technology aspects like client/server distributed architecture, RDBMS, object-oriented programming, etc.

ERP Systems - Bandwidth

ERP solutions address broad areas within any business, like Manufacturing, Distribution, Finance, Project Management, Service and Maintenance, and Transportation. A seamless integration is essential to provide visibility and consistency across the enterprise.

An ERP system should be sufficiently versatile to support different manufacturing environments like make-to-stock, assemble-to-order and engineer-to-order. The customer order decoupling point (CODP) should be flexible enough to allow the co-existence of these manufacturing environments within the same system. It is also very likely that the same product may migrate from one manufacturing environment to another during its product life cycle.

The system should be complete enough to support both discrete as well as process manufacturing scenarios. The efficiency of an enterprise depends on the quick flow of information across the complete supply chain, i.e. from the customer to manufacturers to suppliers. This places demands on the ERP system to have rich functionality across all areas like sales, accounts receivable, engineering, planning, inventory management, production, purchase, accounts payable, quality management, distribution planning and external transportation. EDI (Electronic Data Interchange) is an important tool in speeding up communications with trading partners.

More and more companies are becoming global and focusing on down-sizing and decentralizing their business. ABB and Northern Telecom are examples of companies which have business spread around the globe.
For these companies to manage their business efficiently, ERP systems need to have extensive multi-site management capabilities. The complete financial accounting and management accounting requirements of the organization should be addressed. It is necessary to have centralized or de-centralized accounting functions with complete flexibility to consolidate corporate information.

After-sales service should be streamlined and managed efficiently. A strong EIS (Enterprise Information System) with extensive drill-down capabilities should be available for the top management to get a bird's-eye view of the health of their organization and help them to analyze performance in key areas.

Evaluation Criteria

Some important points to be kept in mind while evaluating ERP software include:

1) Functional fit with the company's business processes
2) Degree of integration between the various components of the ERP system
3) Flexibility and scalability
4) Complexity; user friendliness
5) Quick implementation; shortened ROI period
6) Ability to support multi-site planning and control
7) Technology; client/server capabilities, database independence, security
8) Availability of regular upgrades
9) Amount of customization required
10) Local support infrastructure
11) Availability of reference sites
12) Total costs, including cost of license, training, implementation, maintenance, customization and hardware requirements

ERP Systems - Implementation

The success of an ERP solution depends on how quickly the benefits can be reaped from it. This necessitates rapid implementations which lead to shortened ROI periods. The traditional approach to implementation has been to carry out a Business Process Re-engineering exercise and define a "TO BE" model before the ERP system implementation. This led to mismatches between the proposed model and the ERP functionality, the consequence of which was customizations, extended implementation time frames, higher costs and loss of user confidence.

ERP Systems - The Future

The Internet represents the next major technology enabler, which allows rapid supply chain management between multiple operations and trading partners. Most ERP systems are enhancing their products to become "Internet enabled" so that customers worldwide can have direct access to the supplier's ERP system. ERP systems are also building in Workflow Management functionality, which provides a mechanism to manage and control the flow of work by monitoring logistic aspects like workload, capacity, throughput times, work queue lengths and processing times.

Translation 1

In the past decade the business environment has changed dramatically.
