Hadoop分布式文件系统:架构和设计外文翻译
hadoop组件原理

Hadoop is an open-source software framework for distributed storage and processing of large sets of data. Hadoop是一个用于分布式存储和处理大型数据集的开源软件框架。
It is designed to scale from single servers to thousands of machines, and it is known for its reliability and ability to handle massive amounts of data. 它设计用来从单个服务器扩展到成千上万台机器,并以其可靠性和处理海量数据的能力而著称。
Hadoop is widely used in industries such as banking, healthcare, entertainment, and social media for various purposes including data warehousing, log processing, data analytics, and more. Hadoop广泛应用于银行、医疗保健、娱乐和社交媒体等行业,主要包括数据仓库、日志处理、数据分析等各种用途。
The core components of Hadoop include Hadoop Distributed File System (HDFS) for storage and Apache MapReduce for processing. Hadoop的核心组件包括Hadoop分布式文件系统(HDFS)用于存储以及Apache MapReduce用于处理。
HDFS is designed to store large files across multiple machines in a fault-tolerant manner, while MapReduce is a programming model for processing and generating large datasets. HDFS旨在以容错的方式跨多台机器存储大文件,而MapReduce是一种用于处理和生成大型数据集的编程模型。
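为了直观说明HDFS(存储)与MapReduce(计算)这两个核心组件如何配合使用,下面给出一段示意性的作业提交代码。这只是一个草图:词频统计使用了Hadoop自带的TokenCounterMapper和IntSumReducer,输入、输出路径由命令行参数指定,作业名等均为假设的示例。

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");        // 作业名仅为示例
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(TokenCounterMapper.class);          // Hadoop自带的分词计数Mapper
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);              // Hadoop自带的求和Reducer
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));  // 输入目录位于HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1]));// 输出目录位于HDFS
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

编译打包后可通过 hadoop jar 提交,输入与输出目录都存放在HDFS上,计算由MapReduce框架调度执行。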
Hadoop分布式文件系统-架构和设计要点(翻译)

一、前提和设计目标
1、硬件错误是常态,而非异常情况。HDFS可能由成百上千的server组成,任何一个组件都有可能失效,因此错误检测和快速、自动的恢复是HDFS的核心架构目标。
2、运行在HDFS上的应用与一般的应用不同,它们以流式读取为主,做批量处理;比起数据访问的低延迟,更关键的是数据访问的高吞吐量。
3、HDFS以支持大数据集合为目标,一个存储在上面的典型文件大小一般都在千兆至T字节,一个单一HDFS实例应该能支撑数以千万计的文件。
4、HDFS应用对文件要求的是write-once-read-many访问模型。
一个文件经过创建、写,关闭之后就不需要改变。
这一假设简化了数据一致性问题,使高吞吐量的数据访问成为可能。
典型的如MapReduce框架,或者一个web crawler应用都很适合这个模型。
5、移动计算的代价比之移动数据的代价低。
一个应用请求的计算,离它操作的数据越近就越高效,这在数据达到海量级别的时候更是如此。
将计算移动到数据附近,比之将数据移动到应用所在显然更好,HDFS提供给应用这样的接口。
6、在异构的软硬件平台间的可移植性。
二、Namenode和Datanode
HDFS采用master/slave架构。
一个HDFS集群由一个Namenode和一定数目的Datanode组成。
Namenode是一个中心服务器,负责管理文件系统的namespace和客户端对文件的访问。
集群中一般每个节点运行一个Datanode,负责管理其所在节点上附带的存储。
在内部,一个文件其实分成一个或多个block,这些block存储在Datanode集合里。
Namenode执行文件系统的namespace操作,例如打开、关闭、重命名文件和目录,同时决定block到具体Datanode节点的映射。
Datanode在Namenode的指挥下进行block 的创建、删除和复制。
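下面给出一段示意性的Java代码(仅为草图:假设fs.defaultFS已在配置文件中指向集群,/demo等路径为虚构示例),演示客户端通过FileSystem API发起的namespace操作;这些操作的元数据修改由Namenode完成,文件数据本身并不经过Namenode。

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class NamespaceDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // 假设fs.defaultFS已在core-site.xml中配置,例如hdfs://namenode:8020
        FileSystem fs = FileSystem.get(conf);

        Path dir = new Path("/demo");                    // 示例路径,可按需替换
        fs.mkdirs(dir);                                  // 创建目录
        fs.rename(dir, new Path("/demo-renamed"));       // 重命名目录
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());        // 列出根目录下的条目
        }
        fs.delete(new Path("/demo-renamed"), true);      // 递归删除
        fs.close();
    }
}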
Hadoop分布式文件系统HDFS

public static FileSystem get(final URI uri, final Configuration conf, String user)
        throws IOException, InterruptedException

public static FileSystem get(Configuration conf) throws IOException
JAVA API接口
• 输出文件内容示例
public void displayData(String[] args) throws IOException {
    String uri = args[0];
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create(uri), conf);
    InputStream in = fs.open(new Path(uri));
    // 将文件内容复制到标准输出;4096为缓冲区大小,true表示复制完成后关闭流
    IOUtils.copyBytes(in, System.out, 4096, true);
}

管理网络中跨多台计算机存储的文件系统叫分布式文件系统。
它能解决的问题:数据集的大小超过一台物理计算机的存储能力。
实现上的困难:面向网络实现、节点故障问题、配置管理问题、对用户来说透明的访问。
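与上面"输出文件内容"的读取示例相对应,下面给出一个向HDFS写入文件的示意代码(仅为草图,目标URI与写入的内容均为假设的示例):

import java.io.OutputStream;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteDemo {
    public static void main(String[] args) throws Exception {
        String uri = args[0];                         // 例如 hdfs://namenode:8020/demo/out.txt
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        OutputStream out = fs.create(new Path(uri));  // 创建新文件并获得输出流
        out.write("hello hdfs".getBytes("UTF-8"));    // 写入示例内容
        out.close();                                  // 关闭输出流后,本次写入完成
        fs.close();
    }
}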
设计假设和目标
• 高容错性
  故障检测和快速自动恢复是HDFS的核心架构目标。
• 优化流式数据访问方式
  着重批处理性能,强调高吞吐量的数据访问,而非交互式业务应用那样的单条记录查询速度。
• 大数据集
  常规文件大小达到GB~TB数量级,支持高数据访问带宽和包含大量节点的集群。
设计假设和目标
• 一次写入、多次读取(write-once-read-many)
  可在文件尾部追加数据,可截断数据(追加写入的示意代码见下)。
• 迁移计算代码比迁移数据消耗小
• 可移植性好
  可跨异构硬件移植、跨不同软件平台移植;HDFS由Java实现。
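针对上面提到的"可在文件尾部追加数据",下面是一个追加写入的示意片段(仅为草图:假设运行在支持append的Hadoop 2.x集群上且目标文件已存在,/demo/log.txt为虚构路径;截断可使用类似的fs.truncate方法):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/demo/log.txt");       // 示例路径
        FSDataOutputStream out = fs.append(file);    // 在已有文件尾部追加写入
        out.write("new record\n".getBytes("UTF-8"));
        out.close();
        fs.close();
    }
}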
Hadoop三种架构介绍及搭建

apache hadoop三种架构介绍(StandAlone、伪分布、分布式环境介绍以及安装)

1、StandAlone环境搭建

运行服务               服务器IP
NameNode              192.168.221.100
SecondaryNameNode     192.168.221.100
DataNode              192.168.221.100
ResourceManager       192.168.221.100
NodeManager           192.168.221.100

第一步:下载apache hadoop并上传到服务器
下载链接:
解压命令:
cd /export/software
tar -zxvf hadoop-2.7.5.tar.gz -C ../servers/

第二步:修改配置文件

修改core-site.xml,第一台机器执行以下命令:
cd /export/servers/hadoop-2.7.5/etc/hadoop
vim core-site.xml

<configuration>
<property><name>fs.defaultFS</name><value>hdfs://192.168.221.100:8020</value></property>
<property><name>hadoop.tmp.dir</name><value>/export/servers/hadoop-2.7.5/hadoopDatas/tempDatas</value></property>
<!-- 缓冲区大小,实际工作中根据服务器性能动态调整 -->
<property><name>io.file.buffer.size</name><value>4096</value></property>
<!-- 开启hdfs的垃圾桶机制,删除掉的数据可以从垃圾桶中回收,单位为分钟 -->
<property><name>fs.trash.interval</name><value>10080</value></property>
</configuration>

修改hdfs-site.xml,第一台机器执行以下命令:
cd /export/servers/hadoop-2.7.5/etc/hadoop
vim hdfs-site.xml

<configuration>
<!-- NameNode存储元数据信息的路径,实际工作中,一般先确定磁盘的挂载目录,然后多个目录用逗号进行分割 -->
<!-- 集群动态上下线
<property><name>dfs.hosts</name><value>/export/servers/hadoop-2.7.4/etc/hadoop/accept_host</value></property>
<property><name>dfs.hosts.exclude</name><value>/export/servers/hadoop-2.7.4/etc/hadoop/deny_host</value></property>
-->
<property><name>dfs.namenode.secondary.http-address</name><value>node01:50090</value></property>
<property><name>dfs.namenode.http-address</name><value>node01:50070</value></property>
<property><name>dfs.namenode.name.dir</name><value>file:///export/servers/hadoop-2.7.5/hadoopDatas/namenodeDatas,file:///export/servers/hadoop-2.7.5/hadoopDatas/namenodeDatas2</value></property>
<!-- 定义dataNode数据存储的节点位置,实际工作中,一般先确定磁盘的挂载目录,然后多个目录用逗号进行分割 -->
<property><name>dfs.datanode.data.dir</name><value>file:///export/servers/hadoop-2.7.5/hadoopDatas/datanodeDatas,file:///export/servers/hadoop-2.7.5/hadoopDatas/datanodeDatas2</value></property>
<property><name>dfs.namenode.edits.dir</name><value>file:///export/servers/hadoop-2.7.5/hadoopDatas/nn/edits</value></property>
<property><name>dfs.namenode.checkpoint.dir</name><value>file:///export/servers/hadoop-2.7.5/hadoopDatas/snn/name</value></property>
<property><name>dfs.namenode.checkpoint.edits.dir</name><value>file:///export/servers/hadoop-2.7.5/hadoopDatas/dfs/snn/edits</value></property>
<property><name>dfs.replication</name><value>3</value></property>
<property><name>dfs.permissions</name><value>false</value></property>
<property><name>dfs.blocksize</name><value>134217728</value></property>
</configuration>

修改hadoop-env.sh,第一台机器执行以下命令:
cd /export/servers/hadoop-2.7.5/etc/hadoop
vim hadoop-env.sh
export JAVA_HOME=/export/servers/jdk1.8.0_181

修改mapred-site.xml,第一台机器执行以下命令:
cd /export/servers/hadoop-2.7.5/etc/hadoop
vim mapred-site.xml

<configuration>
<property><name>mapreduce.framework.name</name><value>yarn</value></property>
<property><name>mapreduce.job.ubertask.enable</name><value>true</value></property>
<property><name>mapreduce.jobhistory.address</name><value>node01:10020</value></property>
<property><name>mapreduce.jobhistory.webapp.address</name><value>node01:19888</value></property>
</configuration>

修改yarn-site.xml,第一台机器执行以下命令:
cd /export/servers/hadoop-2.7.5/etc/hadoop
vim yarn-site.xml

<configuration>
<property><name>yarn.resourcemanager.hostname</name><value>node01</value></property>
<property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
<property><name>yarn.log-aggregation-enable</name><value>true</value></property>
<property><name>yarn.log-aggregation.retain-seconds</name><value>604800</value></property>
</configuration>

修改mapred-env.sh,第一台机器执行以下命令:
cd /export/servers/hadoop-2.7.5/etc/hadoop
vim mapred-env.sh
export JAVA_HOME=/export/servers/jdk1.8.0_181

修改slaves,第一台机器执行以下命令:
cd /export/servers/hadoop-2.7.5/etc/hadoop
vim slaves
localhost

第三步:启动集群
要启动 Hadoop 集群,需要启动 HDFS 和 YARN 两个模块。
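原文到此没有给出具体的启动命令。作为参考,下面给出Hadoop 2.7.5单机环境下一种常见的启动方式(仅为示意,假设仍在/export/servers/hadoop-2.7.5目录下操作,首次启动前需要先格式化NameNode):

cd /export/servers/hadoop-2.7.5/
bin/hdfs namenode -format                            # 仅在第一次启动前执行,初始化NameNode元数据
sbin/start-dfs.sh                                    # 启动HDFS:NameNode、DataNode、SecondaryNameNode
sbin/start-yarn.sh                                   # 启动YARN:ResourceManager、NodeManager
sbin/mr-jobhistory-daemon.sh start historyserver     # 可选:启动JobHistoryServer(对应上面配置的19888端口)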
hadoop分布式存储平台外文翻译文献

hadoop分布式存储平台外文翻译文献(文档含中英文对照即英文原文和中文翻译)原文:TechnicalIssuesofForensicInvestigationsinCloudComputing EnvironmentsDominik BirkRuhr-UniversityBochum HorstGoertzInstituteforITSecurityBochum, GermanyRuhr-University BochumHorstGoertzInstitute forITSecurity Bochum,GermanyAbstract—Cloud Computing is arguably one of the mostdiscussed information technologiestoday. It presentsmany promising technological andeconomicalopportunities. However, many customers remain reluctant tomove t hei r bus i n ess I T infr a structure com pl etely t o th e cl o ud. O n e of theirma i n concernsisCloudSecurityandthethreatoftheunknown.CloudService Provide rs(CSP) encourage this perception by not letting their customerssee whatis behindtheirvirtualcurtain.Aseldomlydiscussed,butinthisregard highly relevant open issue is the ability to perform digital investigations.This c onti nue s to fuel insecu ri ty on the si d es of both pro vi dersa nd c us t omers.CloudForensicsconstitutesanewanddisruptivechallengeforinvestigators.D ue to the decentralized nature of data processing in the cloud,traditional approachestoevidencecollectionandrecoveryareno longerpractical. This paperfocusesonthetechnicalaspectsofdigitalforensicsindistributedcloud e nvir onments. W e c ont ribute b y ass es sing whether it i s possible for t he customerofcloudcomputingservicestoperformatraditionaldigital investigation from a technical point of view. Furthermore we discusspossible solutions and possible new methodologies helping customers to perform such investigations.I.INTRODUCTIONAlthough the cloud might appear attractive to small as well as tolargecompanies, it does not comealongwithoutits own uniqueproblems.Outsourcingsensitive corporatedata intothe cloudraisesconcerns regardingtheprivacyandsecurityofdata.Securitypolicies,companiesmainpillarc oncern ing s e curity, cann ot be ea s il y d eployed into distributed,virtualize d cloud environments. This situation is further complicated bytheunknown physicallocationofthe companie’s assets.Normally,if a securityincident occurs,thecorporatesecurityteamwantstobeabletoperformtheirownin vestigationwithoutdependencyonthirdparties.Inthecloud,thisisnotpos sible any m ore: The CSP obtain s all t he pow e r over the env i ronmentandthus controls the sources of evidence. In the best case, a trusted thirdpartyactsasa trustee and guarantees for the trustworthiness of theCSP.Furthermore,theimplementationofthe technicalarchitecture andcircumstanceswithincloudcomputingenvironmentsbiasthewayani nvestigation may b e proc es sed. I n detail, evidenc e dat a hasto be i nte r pret ed by an investigator in a We would like to thank the reviewers for thehelpful commentsandDennisHeinson(CenterforAdvancedSecurityResearch Darmstadt - CASED) for the profound discussions regarding the legalaspects ofcloudforensics.propermannerwhichishardlybepossibleduetothelack of circumstantial information. For auditors, this situation does notchange:Questions who accessed specific data and information cannot be answeredby t hec us tomers,i fnocorrespondingl og sareav a ila bl e.Withthei nc rea s i n g demandforusingthepowerofthecloudforprocessingals osensible informationand data,enterprises face the issue of DataandProcess Provenance in the cloud [ 10]. Digital provenance, meaning meta-data that describesthe ancestry or history of a digital object, is a crucialfeaturefor f oren s i c i nve stigations. In combination w ith a suitableauthentication sche m e,it provides information about who created and who modified what kind of data in the cloud. 
These are crucial aspects for digital investigations indistributed environmentssuchasthecloud.Unfortunately,theaspectsofforensic invest igations in distributed environment have so far been mostly neglectedby t he res e a rc h commun i ty. Current discuss i on c ent e rs mostly a rounds e curity,privacy and data protection issues [ 35], [ 9], [ 12]. The impact of forensic investigations on cloud environments was little noticed albeit mentionedby the authors of[ 1]in2009:”[...] to ourknowledge,noresearchhasbeen published on how cloud computing environments affect digitalartifacts,a ndon acquisit i on l og i s t i cs a nd lega l i s sues rela t ed t oc l oudcomputing environments.”This statementis also confirmed by other authors [34],[36],[ 40] stressing that further research on incident handling, evidence trackingand accountabilityincloudenvironmentshastobedone.Atthe same time,binedwiththefa ct t hat i nforma t iont e chnologyin c re a singl y transcende nt s pe oples’p ri vateand professional life, thus mirroring more and more of peoples’actions,itbecomes apparent that evidence gathered from cloud environments will beof high significance to litigation or criminal proceedings in the future. Within thiswork, we focus the notionof cloud forensics by addressing the technical issuesof for e nsi c s in a ll three m a j or c loud ser vi ce m od els a ndcons i dercross-disciplinary aspects. Moreover, we address the usability of varioussources of evidence for investigative purposes and propose potential solutions to the issues from a practical standpoint. This work should be considered as asurveying discussion of an almost unexploredresearch area. The paperisor g an iz e d a s fol lows:Wedi s c us stherela t edw or ka n dt h e f unda m enta l technical background information of digital forensics, cloudcomputing and the fault model in section II and III. In section IV, wefocus on the technical issues of cloud forensics and discuss the potential sources and natureofdigital evidenceaswellasinvestigationsin XaaSenvironmentsincludingthe cross-disciplinary aspects. We conclude in sectionV.II.RELATEDWORKV ariousworkshavebe e npubli s he d in t hefi e ldofclo ud se c ur i tya n dp riva cy[9],[35],[30 ]focussingonaspectsforprotectingdatainmulti-tenant,virtualized environments. Desired security characteristics forcurrentcloud infrastructuresmainlyrevolvearoundisolationofmulti-tenantplatforms[12],security of hypervisors in order to protect virtualized guest systems andsecure ne tworki nfra struc t ures[32].Albe i td i g ita l provenance,desc r ib i ngthean ce s try of digital objects, still remains a challenging issue for cloud environments, several works have already been published in this field [ 8], [10]contributing totheissuesofcloudforensis.Withinthiscontext,cryptographicproofsfor v erifying data integrity mainly in cloud storage offers have beenproposed,y et l a cki n g ofpractic a li m pl em entations[24],[37],[23].Tradi t iona l computer forensics hasalreadywellresearchedmethodsforvariousfieldsofapplication[ 4], [5], [6], [ 11], [13]. Also the aspects of forensics in virtual systemshave been addressed by several works [2], [ 3], [20] including the notionofvirtual introspection [ 25]. Inaddition, the NIST already addressed WebService F orensics [ 22] which has a hug e im pa ct on investig a ti on processes in c loudcomputing environments. 
In contrast, the aspects of forensic investigationsincloud environments have mostly been neglected by both the industry andthe researchcommunity.Oneofthefirstpapersfocusingonthistopicwaspublishedby Wolthusen[40]afterBebeeetalalready introduced problemsw i t hin c loud e nvi ronments [ 1]. Wol t husen s t re s s ed t hat t here is an i nhere nt strong need for interdisciplinary work linking the requirements andconceptsofevidence arising from the legal field to what can be feasiblyreconstructed andinferredalgorithmicallyorinanexploratorymanner.In2010, Grobaueretal [ 36] published a paper discussing the issues of incident response in cloude nvironments- un f ortunatelyno s pe ci f i c i s sue s a nd s ol uti onsof cloudforensics have been proposed which will be done within thiswork.III.TECHNICALBACKGROUNDA. Traditional DigitalForensicsThe notion of Digital Forensics is widely known as the practice of identifying,e xt rac ti ng andconsi de r i ngevide n cefromdi g i t almedia.U nf ortunately,dig italevidence is both fragile and volatile and therefore requires the attentionof specialpersonnelandmethodsinordertoensurethat evidencedatacanbe proper isolated and evaluated. Normally, the process of a digitalinvestigation can be separatedinto three different steps each having its ownspecificpurpose:1)In the Securing Phase, the majorintention is the preservation of evidence for an alys is. T h e d ata has t o be col l ec t ed in a ma nne r that ma x imize s its integrity. This is normally done by abitwise copy of the original media. As can be imagined, this represents a huge problem in the field of cloudcomputing where you never know exactly where your data is andadditionallydonot have access to any physical hardware.However,thesnapshot t ec hnol ogy,di s c us s e di n s e c ti on IV-B3,provi de sapowerful t oolt ofr e ez e system states and thus makes digital investigations, at least in IaaSscenarios,theoreticallypossible.2)We refer to the Analyzing Phase as the stage in which the data is sifted and combined.Itisinthisphasethatthedatafrommultiplesystemsorsourcesis p ull e dto g et he r to c r e ate as complete a pic t ure a nd event r econst r uc t ionas possible.Especiallyindistributedsysteminfrastructures, thismeans that bits and pieces of data are pulled together for deciphering the real story ofwhat happened and for providing a deeper look into the data.3)Finally, at the end of the examination and analysis of the data, theresultsof t heprevi ous phasesw i ll be r ep r o cessedint heP re s entati o n P ha s e.T he r ep ort,cr eated in this phase, is a compilation of all the documentation andevidencefromtheanalysisstage.Themainintentionofsuchareportisthatitcontain s allresults,itiscompleteandcleartounderstand.Apparently,thesuccessofthesethreesteps stronglydependsonthefirststage.Ifitis notpossible tos e cur e the com pl ete s et ofe vidence data, no exhausti ve analysi s w ill bepossible. However, in real world scenarios often only a subset of theevidencedatacanbesecuredbytheinvestigator.Inaddition,animportantdefinition in the general context of forensics is the notion of a Chain of Custody. 
This chainclarifies how and where evidence is stored and who takes possession ofit.E spe c iallyforc a se s w h ic h areb ro ughtt o c ou rt it iscrucia l thatt he chainofcustody ispreserved.B.CloudComputingAccordingtotheNIST[16],cloudcomputingisamodel forenablingconvenient,on-demandnetworkaccesstoasharedpoolof configurablec omput i ng resources(e.g., networks, se rv ers, storage, a ppli c at i onsandservices) that can be rapidly provisioned and released with minimalCSP interaction. The new raw definition of cloud computing brought several new characteristics such as multi-tenancy, elasticity, pay-as-you-go andreliability.Within this work, the following three models are used: In the Infrastructureas aService(IaaS)model,thecustomerisusingthevirtualmachineprovidedbytheCSP for installing his own system on it. The system can be used likeany o ther physical com pu ter w i th a few lim i tations. Ho w eve r,t he additive customerpoweroverthesystemcomesalongwithadditionalsecurity obligations. Platform as a Service (PaaS) offerings provide the capabilityto deploy application packages created using the virtual development environment supported by the CSP. Forthe efficiency of software de velopmentprocessth i s s e rv icem ode lcanbepropel l ent.Inth e S oft w a reas aService(SaaS)model,the customermakesuseofaservicerunbytheCSPon a cloud infrastructure. In most of the cases this service canbe accessed throughanAPIforathinclientinterfacesuchas a web browser.Closed-source public SaaS offers such as Amazon S 3 and GoogleMailcan onlybe usedi nt hepublicdepl oy ment m odelleading t ofurtherissue sco ncerning security, privacy and the gathering of suitable evidences. Furthermore, two main deployment models, private and public cloud haveto monpubliccloudsaremadeavailabletothe general public. The corresponding infrastructure is owned by one organizationacting a sa C SPand of fe ri ngser vi ces t oits c usto m ers.I n cont ra st,t he pri va tecl ou disexclusively operated for an organization but may not providethescalabilityand agility of public offers. The additional notions of community andhybrid cloud are notexclusively covered within this work. However, independentlyfromthespecific modelused,themovementofapplicationsand data tothec loud c ome sa longwit h limitedcontrolfort he custom e ra bou tt h eappl i ca t ionitself, the data pushed into the applications and also about theunderlyingtechnicalinfrastructure.C.FaultModelBeitanaccountforaSaaSapplication,adevelopmentenvironment(PaaS)ora vi rt ual ima g e of an Iaa S environm e nt, sys t ems in th e clou d can b e a f fec t edby inconsistencies. Hence, for both customer and CSP it is crucial to havetheability to assign faults to the causing party, even in the presence of Byzantine behavior [33]. Generally, inconsistencies can be caused bythefollowing tworeasons:1)M a l ic i ous ly Inte nde dF a ult sInternal or external adversaries with specific malicious intentions cancause faultsoncloudinstancesorapplications.Economicrivalsaswellasformer empl oyees can be the reason for these faults and state a constant threatto customersandCSP.Inthismodel,alsoamaliciousCSPisincludedalbeitheis assu med to be rare in real world scenarios. Additionally, from thetechnicalpointof view, the movement of computing power to a virtualized,multi-tenant e nvironme nt c a npos e fu rt herthreadsandri s kst o thesyste m s.Onere a sonfor thisi sthatifasinglesystemorserviceinthecloudiscompromised,allother guest systems and even the host system are at risk. 
Hence, besides the needfor furthersecurity measures, precautions for potential forensic investigations have to be taken intoconsideration.2)U ni n tent i onal F aul t s Inconsistenciesintechnicalsystemsorprocessesintheclouddonothave implicitly to be caused by malicious intent. Internal communication errorsor humanfailurescanleadtoissuesintheservicesofferedtothecostumer(i.e. loss or modification of data). Although these failures are notcaused i nte nt iona l ly, both t he CS P and t he custom e r have a stron g int e nti on to discover the reasons and deploy correspondingfixes.IV.TECHNICALISSUES Digitalinvestigationsareaboutcontrolofforensicevidencedata.Fromthe technicalstand point,thisdatacanbeavailableinthreedifferentstates:atrest,i n motion or in execut i o n. Da t a at re st i s repr e sent ed by a l locat e d diskspace.Whether the data is stored in a database or in a specific file format, itallocatesdisk space. Furthermore, if a file is deleted, the disk space is de-allocatedfor theoperatingsystembutthedataisstillaccessiblesincethediskspacehasnotbeen re-allocated and overwritten. This fact is often exploited byinvestigatorsw hi c hexplorethesede-a l l ocated d iskspa c eonha rddi s ks.I n c a set heda taisi n motion, data is transferred from one entity to another e.g. a typical filetransferover a network can be seen as a data in motion scenario.Severalencapsulated protocolscontainthedataeachleavingspecifictracesonsystemsan dnetworkdeviceswhichcaninreturnbeusedbyinvestigators.Datacanbeloadedintom em or ya nd e x ecutedasap ro c e ss.In t his c as e,the da taisneithe r atres t orinmotion but in execution. Onthe executing system, processinformation,machine instruction and allocated/de-allocated data can beanalyzed by creating a snapshot of the current system state. In thefollowing sections, wepoint out the potential sources for evidential data in cloud environments anddi s c us s t he te c hnical issue s ofdigit a l i nve stiga t ions in XaaSenvironm e ntsaswell as suggest several solutions to theseproblems.A.Sources and Nature ofEvidenceConcerning the technical aspects of forensic investigations, the amountof potential evidence available to the investigator strongly diverges between the different cloud service anddeployment models. The virtualmachine(VM),hostingin most of the cases the server application, provides severalpiecesof i nforma t ionthatc oul dbeusedbyi nve s t iga t ors.Onth e netw orkl evel,n e twork componentscanprovideinf ormationaboutpossiblecommunicationchannels between different parties involved. The browser on the client, acting often as the user agent for communicating with the cloud, also contains a lot of information that could be used as evidence in a forensicinvestigation.I nd epe ndentlyfromthe u sedm ode l,thefol l ow i n g t hr e e compone ntscouldac tas sourcesfor potential evidentialdata.1)VirtualCloudInstance:TheVMwithinthecloud,wherei.e.dataisstored orprocesses arehandled,containspotentialevidence[2],[3].Inmostofthe cases, it is the place where an incident happened and hence providesagood s t a rti ngpo int for afor e nsici nve stig a ti o n.The V Mi ns ta nc e c anbeaccess e dby both, the CSP and the customer whois running the instance. Furthermore,virtual introspection techniques [25] provide access to the runtime state ofthe VM via the hypervisor and snapshot technology suppliesa powerful technique for the customer to freeze specific states of the VM. 
Therefore,virtual i ns t anc e scanb es t i ll runn ingduri n g a na lys i s whichle a dstothec a s e ofli v einvestigations [41] or can be turned off leading to static image analysis. InSaaS and PaaS scenarios, the ability to access the virtual instance for gathering evidential informationis highly limited or simply notpossible.2)Network Layer: Traditional network forensics isknowna sthe anal ys is of net w ork tra f fic logs fo r trac i ng e v e nt s t hat h a ve oc curred inthe past. Since the differentISO/OS I networklayers provide severalinformation on protocols and communication between instances within aswell aswithinstancesoutsidethecloud[4],[5],[6],networkforensics istheoreticallyalsofeasiblein cloud environments. However inpractice,ordi nary CSP currently do not provide any log d ata f rom th e network componentsusedbythe customer’s instancesorapplications.Forinstance, in case of a malware infection of an IaaSVM, it will be difficult forthe investigator to getany form of routing information and network logdata ingeneral which is crucial for further investigative steps. This situationgetse ven more complica te d in case of Paa So r SaaS. So again, the sit ua tiono f gathering forensic evidence is strongly affected by the support theinvestigator receives from thecustomer and the CSP.3)Client System: On the system layer of the client, it completely depends on theusedmodel(IaaS,PaaS,SaaS)ifandwherepotentialevidencecouldbe extracted. In most of the scenarios, the user agent (e.g. the web browser)ontheclient system is the only application that communicates with the servicein t he c l oud. Thi s e s peci a lly holds for SaaS appl ic ati ons which are use d and controlled by the web browser. But also in IaaSscenarios, the administration interface is often controlled via the browser. Hence, in an exhaustiveforensic investigation, the evidence data gathered from the browserenvironment [7]should not beomitted. a)B rows e r F o rensics: Generally, the circumst a nces leading to aninve s t i gation haveto be differentiated: In ordinary scenarios, themaingoal of an investigationofthewebbrowseristodetermineifauserhasbeenvictimofa crime.Inco mplexSaaSscenarioswithhighclient-serverinteraction,this constitutes a difficult task. Additionally, customers strongly make useof t hird-part y extens i ons [17] w hi ch can b e a bu s e d for malic i ous pu rposes.He n ce,theinvestigatormightwanttolookformalicious extensions, searches performed, websites visited, files downloaded, information entered in formsor stored in local HTML5 stores, web-based email contents andpersistent browser cookies for gathering potential evidence data. Within this context, itis i nevitable to investi g ate the appe a r a nce of mal i ci o us J a vaScript [18] l eadi n g toe.g. unintended AJAX requests and hence modified usage ofadministrationinterfaces. Generally, the web browser contains a lot of electronicevidence datathatcouldbeusedtogiveananswertobothoftheabovequestions-evenif the private mode is switched on [19].B. Investigat i ons i n X a aSEnvironmentsTraditionaldigital forensicmethodologiespermit investigators toseizeequipment and perform detailed analysis on the media and data recovered [11].In a distributedinfrastructure organization like the cloudcomputingenvironment, investigators are confrontedwith an entirely different situation.T he y have no longer t he opt i on of s e izing physic a l data stora g e. 
Dataandprocesses of the customer are dispensed over an undisclosed amount ofvirtualinstances, applications and network elements. Hence, it is in question whether preliminary findingsof the computer forensic community in the fieldof digitalforensicsapparently havetoberevisedandadaptedtothenewe nvironment. Wi t hin t h i s sectio n, s pecific issues of inve s t ig ations inS a aS,PaaSand IaaSenvironments will be discussed. In addition,cross-disciplinary issueswhichaffectseveralenvironmentsuniformly,willbetakeninto consideration. We also suggest potential solutions to the mentionedproblems.1)SaaS Environments: Especially in the SaaS model, the customer does not obtain any control of the underlying operating infrastructure such as network,servers,operatingsystemsortheapplicationthatisused.Thismeansthatno d eepervie w in t o the s ystem and it s unde r lying infrastructure i s provi d ed tot hecustomer. Only limited userspecific application configuration settings can be controlled contributing to the evidences which can be extractedfromtheclient(seesectionIV-A3).In alotof casesthisurgestheinvestigatorto rely on high-level logs which are eventually provided by the CS P. Giventhe c ase t ha t the CS P doe s not run any logging a ppl i ca t ion, the c ustomer hasno opportunity tocreateanyusefulevidencethroughthe installationof any toolkit or logging tool. These circumstances do not allow a validforensic investigationandleadtothe assumptionthatcustomersofSaaSoffersdonot have any chance to analyze potentialincidences.a)Data Pr ove nance: The notio n of Digita l Provenanc e is known a sm eta-dat athatdescribestheancestryorhistoryofdigitalobjects.Secureprovenancethat records ownershipandprocesshistoryofdataobjectsisvitaltothesuccessof data forensics in cloud environments, yet it is still a challenging issue today[ 8]. Albeit data provenance is of high significance also for IaaSandPaaS,it s t a te s ahugepro bl ems p ecifica l l y forSaaS-ba s eda p plica t i o ns:Curr e nt g l o ba l acting public SaaS CSP offer Single Sign-On (SSO) access control to the setoftheir services. Unfortunately in case of an account compromise, mostofthe CSPdonotofferanypossibilityforthecustomertofigureoutwhichdataandinformati onhasbeenaccessedbytheadversary.Forthevictim,thissituationc an hav e tr e mendous impac t: If sensitive dat a has been compr om ised, i ti sunclear which data has been leaked and which has not been accessed bytheadversary. Additionally, data could be modified or deleted byanexternal adversaryorevenbytheCSPe.g.duetostoragereasons.Thecustomerhasnoabil ity to proof otherwise. Secure provenance mechanisms fordistributede nvironm e ntscan improve this sit u ation but have no tbeenp r actic a llyimplemented by CSP [10]. Suggested Solution: In private SaaS scenariosthissituation is improved by the fact that the customer and the CSP areprobably under the sameauthority. Hence, logging and provenance mechanismscouldbeimplemented which contribute to potential investigations. Additionally, thee xac t loca t io n o f theserversandt he dataisknownatanyti m e.P ublicS aa SCSP should offer additional interfaces for the purpose of compliance,forensics,operationsandsecuritymatterstotheircustomers.ThroughanAPI ,the customers should have the ability to receive specific informationsuch asaccess,errorandeventlogsthatcouldimprovetheirsituationincaseofan investigation. 
Furthermore, dueto the limited ability of receiving forensicinformationfrom the server and proofing integrity of stored data inSaaS s cen a rios, the c li e n t ha s t o c on tribut e t o t his process. This could bea c hieved byimplementingProofsofRetrievability(POR)inwhichaverifier(client)is en abled to determine that a prover(server) possesses a file or data object andit canbe retrievedunmodified[24].ProvableData Possession(PDP) techniques[ 37] could be used to verify that an untrusted server possesses the originaldata w i t houtt he need fo rthec l ienttore t ri e vei t.Alt houg ht h esec r yptogra phi c proofs have not been implemented by any CS P, the authors of [ 23] introduced a new data integrity verification mechanism for SaaS scenarios whichcould also be used for forensicpurposes. 2)PaaSEnvironments:OneofthemainadvantagesofthePaaSmodelisthat t he developed s oft wa r e applica t ion is u nde r t he c ontrol of t hecus t omer a nd exceptforsomeCSP,thesourcecodeoftheapplicationdoesnothavetoleave t he local development environment. Given these circumstances,thecustomer obtainstheoreticallythepowertodictatehowthe applicationinteractswith other dependencies such as databases, storage entities etc. CSP normallyclaim t hi s tr a nsfer is encrypt e d but t his statem e n t can hardly be verifi e d bythecustomer. Since the customer has the ability to interact with the platform overa prepared API, system states and specific application logs can beextracted.However potential adversaries, which can compromise the application duringruntime,shouldnotbeabletoaltertheselogfilesafterwards.SuggestedS olution: Depending on the runtime environment,logging mechanisms couldbeimplemented which automatically sign and encrypt the log informationbefore itstransfertoacentralloggingserverunderthecontrolofthecustomer. Additional signing and encrypting could prevent potential eavesdroppers fromb eingabl e to v i e wandalter l ogdatainformation on theway t othel o gg i n g server. Runtime compromise of an PaaSapplication by adversaries couldbemonitored by push-only mechanisms for log data presupposing that theneeded informationtodetectsuchanattackare logged. Increasingly, CSPofferingPaaSsolutionsgivedeveloperstheabilitytocollectandstoreavarietyofdi agn os t ics dat a i nahighlyconfigura bl eway w ith t heh el po f r unt imefeaturesets [38].3)IaaSEnvironments: As expected, even virtual instances in the cloudget compromised by adversaries. Hence, the ability to determine how defenses in thevirtualenvironment failedandtowhatextent the affected systems have been compromised is crucial not only for recovering from an incident. Alsoforensicinvestigations gain leverage from such information and contributeto r e s ili en ce a gai n stfutur e a t tac ks onthesy s tems.F r o mt hef or e ns ic poi ntof view,IaaSinstancesdoprovidemuchmoreevidencedatausableforpotent ial forensics than PaaSand SaaS models do. This fact is causedthrough theabilityofthecustomertoinstallandsetuptheimageforforensicpurposes before an incident occurs. Hence, as proposed for PaaSenvironments, logdata a nd other f orensic ev i de nc e infor m at i on could be s i gned and e nc r ypt e d befo reit istransferred to third-party hosts mitigating the chance that amaliciously motivatedshutdownprocessdestroysthevolatiledata.Although,IaaS enviro nments provide plenty of potential evidence, it has to beemphasized t hat the customer VM is in the end s till unde r t he cont rol of the CS P. He controls the hypervisor which is e.g. 
responsible for enforcing hardware boundariesand routing hardware requests amongdifferent VM.Hence,besides the security responsibilities of the hypervisor, he exertstremendous controloverhow customer’s VMcommunicatewiththehardwareand t heor etica l ly c ani nt erven e ex e cutedpr oc e s sesont he hos t edvirtuali ns t a ncethrough virtual introspection [ 25]. This could also affect encryption orsigningprocesses executed on the VM and therefore leading to theleakage ofthe secretkey.Althoughthisriskcanbedisregardedinmostof thecases, theimpact on the security of high security environments istremendous.a)Sna p s hot Analysi s: T r a d ition a l fo r e ns i c s expect t arge t ma c hines tob e powered down to collect an image (dead virtual instance). Thissituationcompletely changed with the advent of the snapshot technology whichis supported byall popular hypervisors such as Xen, VMware ESX andHyper-V.A snapshot, also referred to as the forensic image of a VM,providesa powerful t ool with wh i ch a v irtua l instance ca n be c l oned byoneclickincludingalsotherunning system’s memory.Duetotheinvention of the snapshot technology, systems hosting crucial business processes donot have to be powered down for forensic investigation purposes. The investigatorsimply createsand loads a snapshot of the target VM foranalysis(l ivevi rt ual i n stance). Thi s beha vi or is especi a lly i mportant for sc e nariosi n which a downtime of a system is not feasible or practical due to existingSLA.However the information whether the machine is running or has been properly powered down is crucial [ 3] for the investigation. Live investigationsof running virtual instancesbecome more common providing evidence data that is not available on powered down systems. The technique of liveinvestigation。
Hadoop分布式文件系统架构和设计

Hadoop分布式文件系统:架构和设计
引言
云计算(cloud computing),是由位于网络上的一组服务器把其计算、存储、数据等资源以服务的形式提供给请求者以完成信息处理任务的方法和过程。
在此过程中被服务者只是提供需求并获取服务结果,对于需求被服务的过程并不知情。
同时服务者以最优利用的方式动态地把资源分配给众多的服务请求者,以求达到最大效益。
Hadoop分布式文件系统(HDFS)被设计成适合运行在通用硬件(commodity hardware)上的分布式文件系统。
它和现有的分布式文件系统有很多共同点。
但同时,它和其他的分布式文件系统的区别也是很明显的。
HDFS是一个高度容错性的系统,适合部署在廉价的机器上。
HDFS能提供高吞吐量的数据访问,非常适合大规模数据集上的应用。
一、前提和设计目标
1、hadoop和云计算的关系
云计算是由位于网络上的一组服务器把其计算、存储、数据等资源以服务的形式提供给请求者以完成信息处理任务的方法和过程。
针对海量文本数据处理,为实现快速文本处理响应,缩短海量数据为辅助决策提供服务的时间,可以基于Hadoop云计算平台,建立HDFS分布式文件系统存储海量文本数据集,根据文本词频利用MapReduce原理建立分布式索引,以分布式数据库HBase存储关键词索引,并提供实时检索,实现对海量文本数据的分布式并行处理。实验结果表明,Hadoop框架为大规模数据的分布式并行处理提供了很好的解决方案。
2 流式数据访问运行在HDFS上的应用和普通的应用不同,需要流式访问它们的数据集。
HDFS的设计中更多的考虑到了数据批处理,而不是用户交互处理。
比之数据访问的低延迟问题,更关键的在于数据访问的高吞吐量。
3 大规模数据集运行在HDFS上的应用具有很大的数据集。
HDFS上的一个典型文件大小一般都在G字节至T字节。
hadoop英文参考文献

Hadoop是一个开源的分布式计算平台,它基于Google的MapReduce算法和Google文件系统(GFS)的思想,能够处理大规模的数据集。
对于研究Hadoop的人来说,阅读Hadoop的英文参考文献是非常必要的。
下面是一些Hadoop的英文参考文献:
1. Apache Hadoop: A Framework for Running Applications on Large Clusters Built of Commodity Hardware. This paper describes the architecture of Hadoop and its components, including the Hadoop Distributed File System (HDFS) and MapReduce.
2. Hadoop MapReduce: Simplified Data Processing on Large Clusters. This paper provides an overview of the MapReduce programming model and how it can be used to process large data sets on clusters of commodity hardware.
3. Hadoop Distributed File System. This paper provides a detailed description of the Hadoop Distributed File System (HDFS), including its architecture, design goals, and implementation.
4. Hadoop Security Design. This paper describes the security features of Hadoop, including authentication, authorization, and auditing.
5. Hadoop Real World Solutions Cookbook. This book provides practical examples of how Hadoop can be used to solve real-world problems, including data processing, data warehousing, and machine learning.
6. Hadoop in Practice. This book provides practical guidance on how to use Hadoop to solve data analysis problems, including data cleaning, data modeling, and data visualization.
7. Hadoop: The Definitive Guide. This book provides a comprehensive overview of Hadoop and its components, including HDFS, MapReduce, and YARN. It also includes practical examples and best practices for using Hadoop.
8. Pro Hadoop. This book provides a deep dive into Hadoop and its ecosystem, including HDFS, MapReduce, YARN, and a variety of tools and frameworks for working with Hadoop.
9. Hadoop Operations. This book provides guidance on how to deploy, manage, and monitor Hadoop clusters in production environments, including best practices for performance tuning, troubleshooting, and security.
以上是一些Hadoop的英文参考文献,它们涵盖了Hadoop的各个方面,对于学习和使用Hadoop都是非常有帮助的。
hadoop介绍讲解

Hadoop是一个由Apache软件基金会开发的开源分布式系统。
它的目标是处理大规模数据集。
Hadoop可以更好地利用一组连接的计算机和硬件来存储和处理海量数据集。
Hadoop主要由Hadoop分布式文件系统(HDFS)和MapReduce两部分组成。
以下是hadoop的详细介绍。
1. Hadoop分布式文件系统(HDFS)
HDFS是Hadoop的分布式文件系统。
HDFS将大量数据分成小块并在多个机器上进行存储,从而使数据更容易地管理和处理。
HDFS适合在大规模集群上存储和处理数据。
它被设计为高可靠性,高可用性,并且容错性强。
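作为补充说明,客户端可以通过FileSystem API查询一个文件的各个数据块分别存放在哪些节点上。下面是一个示意片段(仅为草图,/demo/big-file.dat为虚构路径):

import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        FileStatus status = fs.getFileStatus(new Path("/demo/big-file.dat")); // 示例路径
        // 查询该文件全部数据块所在的Datanode主机
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.println(block.getOffset() + " -> " + Arrays.toString(block.getHosts()));
        }
        fs.close();
    }
}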
2. MapReduce
MapReduce是Hadoop中的计算框架。
它分为两个阶段:Map和Reduce。
Map阶段将输入数据划分为不同的片段,并把这些片段分发到不同的机器上并行处理;Reduce阶段接收Map阶段产生的中间结果,并将其汇总、组合,生成最终结果。
MapReduce框架按照数据可以并行处理的方式对作业进行拆分,最终的输出结果由Reduce阶段汇总组装而成。
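下面以经典的词频统计(WordCount)为例,给出Map和Reduce两个阶段的示意实现(仅为草图,基于Hadoop的org.apache.hadoop.mapreduce新API,类名为示例):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {
    // Map阶段:把每行文本拆分成单词,输出<单词, 1>
    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Reduce阶段:把同一单词的所有计数累加,得到最终词频
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }
}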
3. Hadoop生态系统
Hadoop是一个开放的生态系统,其包含了许多与其相关的项目。
这些项目包括Hive,Pig,Spark等等。
Hive是一个SQL on Hadoop工具,用于将SQL语句转换为MapReduce作业。
Pig是另一个运行在Hadoop之上的数据处理工具,它是一个基于Pig Latin脚本语言的高级并行运算系统,可以用于处理大量数据。
Spark是一个快速、通用的大数据处理引擎,相比MapReduce延迟更低,数据处理效率更高。
4. Hadoop的优点
Hadoop是一个灵活、可扩展且具有成本优势的平台,它可以高效地处理大规模的数据集。
同时,它开放、模块化的体系结构使得其在大数据环境下无论是数据处理还是与其他开发者的协作都非常便利。
5. 总结
Hadoop是一个很好的大数据处理工具,并且在行业中得到了广泛的应用。
外文翻译原文来源The Hadoop Distributed File System: Architecture and Design 中文译文Hadoop分布式文件系统:架构和设计姓名 XXXX学号 ************2013年4月8 日英文原文The Hadoop Distributed File System: Architecture and DesignSource:/docs/r0.18.3/hdfs_design.html IntroductionThe Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems. However, the differences from other distributed file systems are significant. HDFS is highly fault-tolerant and is designed to be deployed onlow-cost hardware. HDFS provides high throughput access to application data and is suitable for applications that have large data sets. HDFS relaxes a few POSIX requirements to enable streaming access to file system data. HDFS was originally built as infrastructure for the Apache Nutch web search engine project. HDFS is part of the Apache Hadoop Core project. The project URL is/core/.Assumptions and GoalsHardware FailureHardware failure is the norm rather than the exception. An HDFS instance may consist of hundreds or thousands of server machines, each storing part of the file system’s data. The fact that there are a huge number of components and that each component has a non-trivial probability of failure means that some component of HDFS is always non-functional. Therefore, detection of faults and quick, automatic recovery from them is a core architectural goal of HDFS.Streaming Data AccessApplications that run on HDFS need streaming access to their data sets. They are not general purpose applications that typically run on general purpose file systems. HDFS is designed more for batch processing rather than interactive use by users. The emphasis is on high throughput of data access rather than low latency of data access. POSIX imposes many hard requirements that are notneeded for applications that are targeted for HDFS. POSIX semantics in a few key areas has been traded to increase data throughput rates.Large Data SetsApplications that run on HDFS have large data sets. A typical file in HDFS is gigabytes to terabytes in size. Thus, HDFS is tuned to support large files. It should provide high aggregate data bandwidth and scale to hundreds of nodes in a single cluster. It should support tens of millions of files in a single instance.Simple Coherency ModelHDFS applications need a write-once-read-many access model for files. A file once created, written, and closed need not be changed. This assumption simplifies data coherency issues and enables high throughput data access. AMap/Reduce application or a web crawler application fits perfectly with this model. There is a plan to support appending-writes to files in the future.“Moving Computation is Cheaper than Moving Data”A computation requested by an application is much more efficient if it is executed near the data it operates on. This is especially true when the size of the data set is huge. This minimizes network congestion and increases the overall throughput of the system. The assumption is that it is often better to migrate the computation closer to where the data is located rather than moving the data to where the application is running. HDFS provides interfaces for applications to move themselves closer to where the data is located.Portability Across Heterogeneous Hardware and Software PlatformsHDFS has been designed to be easily portable from one platform to another. 
This facilitates widespread adoption of HDFS as a platform of choice for a large set of applications.NameNode and DataNodesHDFS has a master/slave architecture. An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients. In addition, there are a number of DataNodes, usually one per node in the cluster, which manage storage attached to the nodes that they run on. HDFS exposes a file system namespace and allows user data to be stored in files. Internally, a file is split into one or more blocks and these blocksare stored in a set of DataNodes. The NameNode executes file system namespace operations like opening, closing, and renaming files and directories. It also determines the mapping of blocks to DataNodes. The DataNodes are responsible for serving read and write requests from the file system’s clients. The DataNodes also perform block creation, deletion, and replication upon instruction from the NameNode.The NameNode and DataNode are pieces of software designed to run on commodity machines. These machines typically run a GNU/Linux operating system (OS). HDFS is built using the Java language; any machine that supports Java can run the NameNode or the DataNode software. Usage of the highly portable Java language means that HDFS can be deployed on a wide range ofmachines. A typical deployment has a dedicated machine that runs only the NameNode software. Each of the other machines in the cluster runs one instance of the DataNode software. The architecture does not preclude running multiple DataNodes on the same machine but in a real deployment that is rarely the case.The existence of a single NameNode in a cluster greatly simplifies the architecture of the system. The NameNode is the arbitrator and repository for all HDFS metadata. The system is designed in such a way that user data never flows through the NameNode.The File System NamespaceHDFS supports a traditional hierarchical file organization. A user or an application can create directories and store files inside these directories. The file system namespace hierarchy is similar to most other existing file systems; one can create and remove files, move a file from one directory to another, or rename a file. HDFS does not yet implement user quotas or access permissions. HDFS does not support hard links or soft links. However, the HDFS architecture does not preclude implementing these features.The NameNode maintains the file system namespace. Any change to the file system namespace or its properties is recorded by the NameNode. An application can specify the number of replicas of a file that should be maintained by HDFS. The number of copies of a file is called the replication factor of that file. This information is stored by the NameNode.Data ReplicationHDFS is designed to reliably store very large files across machines in a large cluster. It stores each file as a sequence of blocks; all blocks in a file except the last block are the same size. The blocks of a file are replicated for fault tolerance. The block size and replication factor are configurable per file. An application can specify the number of replicas of a file. The replication factor can be specified at file creation time and can be changed later. Files in HDFS are write-once and have strictly one writer at any time.The NameNode makes all decisions regarding replication of blocks. 
It periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster.Receipt of a Heartbeat implies that the DataNode is functioning properly. A Blockreport contains a list of all blocks on a DataNode.Replica Placement: The First Baby StepsThe placement of replicas is critical to HDFS reliability and performance. Optimizing replica placement distinguishes HDFS from most other distributed file systems. This is a feature that needs lots of tuning and experience. The purpose of a rack-aware replica placement policy is to improve data reliability, availability, and network bandwidth utilization. The current implementation for the replica placement policy is a first effort in this direction. The short-term goals of implementing this policy are to validate it on production systems, learn more about its behavior, and build a foundation to test and research more sophisticated policies.Large HDFS instances run on a cluster of computers that commonly spread across many racks. Communication between two nodes in different racks has to go through switches. In most cases, network bandwidth between machines in the same rack is greater than network bandwidth between machines in different racks.The NameNode determines the rack id each DataNode belongs to via the process outlined in Rack Awareness. A simple but non-optimal policy is to place replicas on unique racks. This prevents losing data when an entire rack fails and allows use of bandwidth from multiple racks when reading data. This policy evenly distributes replicas in the cluster which makes it easy to balance load on component failure. However, this policy increases the cost of writes because a write needs to transfer blocks to multiple racks.For the common case, when the replication factor is three, HDFS’s placement policy is to put one replica on one node in the local rack, another on a different node in the local rack, and the last on a different node in a different rack. This policy cuts the inter-rack write traffic which generally improves write performance. The chance of rack failure is far less than that of node failure; this policy does not impact data reliability and availability guarantees. However, it does reduce the aggregate network bandwidth used when reading data since a block is placed in only two unique racks rather than three. With this policy, the replicas of a file do not evenly distribute across the racks. One third of replicas are on one node, two thirds of replicas are on one rack, and the other third are evenly distributed across the remaining racks. This policy improves write performance without compromising data reliability or read performance.The current, default replica placement policy described here is a work in progress. Replica SelectionTo minimize global bandwidth consumption and read latency, HDFS tries to satisfy a read request from a replica that is closest to the reader. If there exists a replica on the same rack as the reader node, then that replica is preferred to satisfy the read request. If angg/ HDFS cluster spans multiple data centers, then a replica that is resident in the local data center is preferred over any remote replica.SafemodeOn startup, the NameNode enters a special state called Safemode. Replication of data blocks does not occur when the NameNode is in the Safemode state. The NameNode receives Heartbeat and Blockreport messages from the DataNodes. A Blockreport contains the list of data blocks that a DataNode is hosting. 
Each block has a specified minimum number of replicas. A block is considered safely replicated when the minimum number of replicas of that data block has checked in with the NameNode. After a configurable percentage of safely replicated data blocks checks in with the NameNode (plus an additional 30 seconds), the NameNode exits the Safemode state. It then determines the list of data blocks (if any) that still have fewer than the specified number of replicas. The NameNode then replicates these blocks to other DataNodes.The Persistence of File System MetadataThe HDFS namespace is stored by the NameNode. The NameNode uses a transaction log called the EditLog to persistently record every change that occurs to file system metadata. For example, creating a new file in HDFS causes the NameNode to insert a record into the EditLog indicating this. Similarly, changing the replication factor of a file causes a new record to be inserted into the EditLog. The NameNode uses a file in its local host OS file system to store the EditLog. The entire file system namespace, including the mapping of blocks to files and file system properties, is stored in a file called the FsImage. The FsImage is stored as a file in the NameNode’s local file system too.The NameNode keeps an image of the entire file system namespace and file Blockmap in memory. This key metadata item is designed to be compact, such that a NameNode with 4 GB of RAM is plenty to support a huge number of files and directories. When the NameNode starts up, it reads the FsImage and EditLog from disk, applies all the transactions from the EditLog to the in-memory representation of the FsImage, and flushes out this new version into a new FsImage on disk. It can then truncate the old EditLog because its transactions have been applied to the persistent FsImage. This process is called a checkpoint. In the current implementation, a checkpoint only occurs when the NameNode starts up. Work is in progress to support periodic checkpointing in the near future.The DataNode stores HDFS data in files in its local file system. The DataNode has no knowledge about HDFS files. It stores each block of HDFS data in a separatefile in its local file system. The DataNode does not create all files in the same directory. Instead, it uses a heuristic to determine the optimal number of files per directory and creates subdirectories appropriately. It is not optimal to create all local files in the same directory because the local file system might not be able to efficiently support a huge number of files in a single directory. When a DataNode starts up, it scans through its local file system, generates a list of all HDFS data blocks that correspond to each of these local files and sends this report to the NameNode: this is the Blockreport.The Communication ProtocolsAll HDFS communication protocols are layered on top of the TCP/IP protocol. A client establishes a connection to a configurable TCP port on the NameNode machine. It talks the ClientProtocol with the NameNode. The DataNodes talk to the NameNode using the DataNode Protocol. A Remote Procedure Call (RPC) abstraction wraps both the Client Protocol and the DataNode Protocol. By design, the NameNode never initiates any RPCs. Instead, it only responds to RPC requests issued by DataNodes or clients.RobustnessThe primary objective of HDFS is to store data reliably even in the presence of failures. 
The three common types of failures are NameNode failures, DataNode failures and network partitions.Data Disk Failure, Heartbeats and Re-ReplicationEach DataNode sends a Heartbeat message to the NameNode periodically. A network partition can cause a subset of DataNodes to lose connectivity with the NameNode. The NameNode detects this condition by the absence of a Heartbeat message. The NameNode marks DataNodes without recent Heartbeats as dead and does not forward any new IO requests to them. Any data that was registered to a dead DataNode is not available to HDFS any more. DataNode death may cause the replication factor of some blocks to fall below their specified value. The NameNode constantly tracks which blocks need to be replicated and initiates replication whenever necessary. The necessity for re-replication may arise due to many reasons: a DataNode may become unavailable, a replica may become corrupted, a hard disk on a DataNode may fail, or the replication factor of a file may be increased.Cluster RebalancingThe HDFS architecture is compatible with data rebalancing schemes. A scheme might automatically move data from one DataNode to another if the free space on a DataNode falls below a certain threshold. In the event of a sudden high demand for a particular file, a scheme might dynamically create additional replicas and rebalance other data in the cluster. These types of data rebalancing schemes are not yet implemented.Data IntegrityIt is possible that a block of data fetched from a DataNode arrives corrupted. This corruption can occur because of faults in a storage device, network faults, or buggy software. The HDFS client software implements checksum checking on the contents of HDFS files. When a client creates an HDFS file, it computes a checksum of each block of the file and stores these checksums in a separate hidden file in the same HDFS namespace. When a client retrieves file contents it verifies that the data it received from each DataNode matches the checksum stored in the associated checksum file. If not, then the client can opt to retrieve that block from another DataNode that has a replica of that block.Metadata Disk FailureThe FsImage and the EditLog are central data structures of HDFS. A corruption of these files can cause the HDFS instance to be non-functional. For this reason, the NameNode can be configured to support maintaining multiple copies of the FsImage and EditLog. Any update to either the FsImage or EditLog causes each of the FsImages and EditLogs to get updated synchronously. This synchronous updating of multiple copies of the FsImage and EditLog may degrade the rate of namespace transactions per second that a NameNode can support. However, this degradation is acceptable because even though HDFS applications are very data intensive in nature, they are not metadata intensive. When a NameNode restarts, it selects the latest consistent FsImage and EditLog to use.The NameNode machine is a single point of failure for an HDFS cluster. If the NameNode machine fails, manual intervention is necessary. Currently, automatic restart and failover of the NameNode software to another machine is not supported.SnapshotsSnapshots support storing a copy of data at a particular instant of time. One usage of the snapshot feature may be to roll back a corrupted HDFS instance to a previously known good point in time. HDFS does not currently support snapshots but will in a future release.Data OrganizationData BlocksHDFS is designed to support very large files. 
Applications that are compatible with HDFS are those that deal with large data sets. These applications write their data only once but they read it one or more times and require these reads to be satisfied at streaming speeds. HDFS supports write-once-read-many semantics on files. A typical block size used by HDFS is 64 MB. Thus, an HDFS file is chopped up into 64 MB chunks, and if possible, each chunk will reside on a different DataNode.StagingA client request to create a file does not reach the NameNode immediately. In fact, initially the HDFS client caches the file data into a temporary local file. Application writes are transparently redirected to this temporary local file. When the local file accumulates data worth over one HDFS block size, the client contacts the NameNode. The NameNode inserts the file name into the file system hierarchy and allocates a data block for it. The NameNode responds to the client request with the identity of the DataNode and the destination data block. Then the client flushes the block of data from the local temporary file to the specified DataNode. When a file is closed, the remaining un-flushed data in the temporary local file is transferred to the DataNode. The client then tells the NameNode that the file is closed. At this point, the NameNode commits the file creation operation into a persistent store. If the NameNode dies before the file is closed, the file is lost.The above approach has been adopted after careful consideration of target applications that run on HDFS. These applications need streaming writes to files. If a client writes to a remote file directly without any client side buffering, the network speed and the congestion in the network impacts throughput considerably. This approach is not without precedent. Earlier distributed file systems, e.g. AFS, have used client side caching to improve performance. APOSIX requirement has been relaxed to achieve higher performance of data uploads.Replication PipeliningWhen a client is writing data to an HDFS file, its data is first written to a local file as explained in the previous section. Suppose the HDFS file has a replication factor of three. When the local file accumulates a full block of user data, the client retrieves a list of DataNodes from the NameNode. This list contains the DataNodes that will host a replica of that block. The client then flushes the data block to the first DataNode. The first DataNode starts receiving the data in small portions (4 KB), writes each portion to its local repository and transfers that portion to the second DataNode in the list. The second DataNode, in turn starts receiving each portion of the data block, writes that portion to its repository and then flushes that portion to the third DataNode. Finally, the third DataNode writes the data to its local repository. Thus, a DataNode can be receiving data from the previous one in the pipeline and at the same time forwarding data to the next one in the pipeline. Thus, the data is pipelined from one DataNode to the next.AccessibilityHDFS can be accessed from applications in many different ways. Natively, HDFS provides a Java API for applications to use. A C language wrapper for this Java API is also available. In addition, an HTTP browser can also be used to browse the files of an HDFS instance. Work is in progress to expose HDFS through the WebDAV protocol.FS ShellHDFS allows user data to be organized in the form of files and directories. 
It provides a commandline interface called FS shell that lets a user interact with the data in HDFS. The syntax of this command set is similar to other shells (e.g. bash, csh) that users are already familiar with. Here are some sample action/command pairs:FS shell is targeted for applications that need a scripting language to interact with the stored data.DFSAdminThe DFSAdmin command set is used for administering an HDFS cluster. These are commands that are used only by an HDFS administrator. Here are some sample action/command pairs:Browser InterfaceA typical HDFS install configures a web server to expose the HDFS namespace through a configurable TCP port. This allows a user to navigate the HDFS namespace and view the contents of its files using a web browser.Space ReclamationFile Deletes and UndeletesWhen a file is deleted by a user or an application, it is not immediately removed from HDFS. Instead, HDFS first renames it to a file in the /trash directory. The file can be restored quickly as long as it remains in /trash. A file remains in/trash for a configurable amount of time. After the expiry of its life in /trash, the NameNode deletes the file from the HDFS namespace. The deletion of a file causes the blocks associated with the file to be freed. Note that there could be an appreciable time delay between the time a file is deleted by a user and the time of the corresponding increase in free space in HDFS.A user can Undelete a file after deleting it as long as it remains in the /trash directory. If a user wants to undelete a file that he/she has deleted, he/she can navigate the /trash directory and retrieve the file. The /trash directory contains only the latest copy of the file that was deleted. The /trash directory is just like any other directory with one special feature: HDFS applies specified policies to automatically delete files from this directory. The current default policy is to delete files from /trash that are more than 6 hours old. In the future, this policy will be configurable through a well defined interface.Decrease Replication FactorWhen the replication factor of a file is reduced, the NameNode selects excess replicas that can be deleted. The next Heartbeat transfers this information to the DataNode. The DataNode then removes the corresponding blocks and the corresponding free space appears in the cluster. Once again, there might be a time delay between the completion of the setReplication API call and the appearance of free space in the cluster.中文译本原文地址:/docs/r0.18.3/hdfs_design.html一、引言Hadoop分布式文件系统(HDFS)被设计成适合运行在通用硬件(commodity hardware)上的分布式文件系统。