Solaris MC: A Multi-Computer OS


Solaris 10 Operating System Datasheet


Enterprises are under tremendous pressure to do more with less: roll out new business services faster, fit more servers into the same space, and comply with new regulations, all while budgets shrink and headcount is frozen. Can an operating system really help you address these issues and turn IT into a business advantage? The answer is yes, with the Solaris™ Operating System.

The Solaris OS is the strategic platform for today's demanding enterprise. It is the only open operating system that has delivered proven results, running everything from mission-critical enterprise databases to high-performance Web farms, from large-scale SMP systems to industry-standard x86 systems from HP, IBM, Dell, and Sun. For customers facing challenging business and technical requirements, such as lowering costs, simplifying system administration, and maintaining high service levels, the Solaris 10 OS is the ideal cross-platform choice. Its innovative built-in features deliver breakthrough virtualization and utilization, high availability, advanced security, and industry-leading performance to meet these stringent requirements, all at a great price.

Highlights
The Solaris™ Operating System meets and exceeds expectations for:
• Virtualization: Optimize resource utilization to deliver predictable service levels with Solaris Containers
• Networking: Attain near-wire-speed throughput with the open, programmable Solaris networking stack
• Security: Implement a secure foundation for deploying services with Solaris leading-edge security features
• Availability: Increase uptime with Predictive Self Healing

Ten things to know about the Solaris OS

1. Great product
The constant demonstrated innovation within the Solaris OS pays off by delivering benefits that can save companies time, hardware costs, and power and cooling, while preserving investments in software and training. In short: innovation matters, because it saves you money.

2. Great price
Solaris 10 support pricing is 20% to 50% lower than equivalent support from other open OS vendors. No-cost end-user licensing lowers barriers to entry, while overall efficiency lowers costs of operation.

3. Open source
The Solaris OS code base is the foundation of the OpenSolaris™ open source community. In addition, the Solaris OS includes the leading Web 2.0 open source packages, ready to run and optimized for the more than 1,000 x64 and SPARC system platforms supported by Solaris 10.

4. Application compatibility, guaranteed
The Solaris OS delivers binary compatibility from release to release and source compatibility between SPARC® and x86 processors, and with the Solaris Application Guarantee backing it, it is something you can count on. And for the ultimate in conversion ease, use Solaris 8 and Solaris 9 Containers on Solaris 10, a "Physical to Virtual" way to quickly and easily run your existing application environments on the latest SPARC systems.

5. One Solaris: same features on hundreds of systems
With a single source code base, the Solaris OS runs on x86- and SPARC-processor-based systems and delivers the same features on all platforms. You can develop and optimize applications on the Solaris OS for use on over 1,000 system models from leading vendors such as Sun, HP, IBM, and Dell.
6. Designed to run securely all the time
The leading-edge security features in the Solaris 10 OS help you reduce the risk of intrusions, secure your applications and data, assign the minimum set of privileges and roles needed by users and applications, and control access to data based on its sensitivity label. Solaris 10 has been independently evaluated at EAL4+ against three Protection Profiles, one of the highest levels of Common Criteria certification.

7. Designed for observability
Solaris Dynamic Tracing (DTrace) technology makes it fast and easy to identify performance bottlenecks, especially on production systems. System administrators can use it to troubleshoot even the most difficult problems in minutes instead of days; developers can use it to optimize applications, with significant performance gains possible: real-world use has yielded increases of up to 50 times previous performance.

8. Designed for virtualization
Solaris 10 has powerful virtualization features built in at no additional charge. With Solaris Containers, you can maintain a one-application-per-virtual-server deployment model while consolidating dozens or even hundreds of applications onto one server and OS instance. Share hardware resources while maintaining predictable service levels; increase utilization rates and cut system and licensing costs while gaining the ability to quickly provision and move workloads from system to system. Logical Domains and Xen-based paravirtualization support add even more virtualization flexibility.

9. Designed for high availability
Predictive Self Healing is a key feature in the Solaris 10 OS that helps you increase system and service availability. It automatically detects, diagnoses, and isolates system and software faults before they cause downtime. And it spans the full range from diagnosis to recovery on SPARC, AMD Opteron™ and Athlon, and Intel® Xeon® and Core Duo processor-based systems.

10. Designed for performance
The Solaris 10 OS has set over 244 price/performance records since its release, unleashing even more power from existing applications. Download the latest Sun™ Studio compilers and developer tools to bring even greater performance to your applications.

For business, industry, and developers
The Solaris 10 OS offers the technology, flexibility, and versatility you need to get down to business immediately, whether you are a small developer, a large enterprise, or anything in between.

OpenSolaris participation and OS release
More than an open source project, OpenSolaris is also a community and a Web site for collaboration, and it now provides a supported, leading-edge release every six months. The OpenSolaris release, along with Solaris source code, downloads, developer tools, mailing lists, user groups, and events, is available from the OpenSolaris site. OpenSolaris technology features a single source base for SPARC and x86 platforms. It includes the key innovations delivered in the Solaris 10 OS, as well as providing access to new technologies as they are being developed. The OpenSolaris project and release provide a low-risk option for evaluating emerging OS technologies, plus an excellent opportunity to participate in shaping the direction of the Solaris OS.

Development tools
Developers need integrated, ready-to-use tools that are compatible with all the environments in which they must deploy applications. With that in mind, Sun includes popular software tools from the free and open source world and complements them with access to key Sun developer technologies, such as the Sun Studio compilers and tools, and unique Solaris 10 utilities such as DTrace.

Solaris 10 technologies
With the Solaris OS, you get compelling new features that your applications can take advantage of immediately with few, if any, changes. Binary and source compatibility with previous releases also helps make it easier to move to Solaris 10 from earlier releases of Solaris.
DTrace
System administrators, integrators, and developers can use the dynamic instrumentation and tracing capabilities in the Solaris OS to see what is really going on in the system. Solaris DTrace can be used safely on production systems, without modifying applications. It is a powerful tool that gives a comprehensive view of the entire system, from kernel to application, even for applications running in a Java™ Virtual Machine. This level of insight reduces the time for diagnosing problems from days and weeks to minutes and hours, and ultimately reduces the time to fix those problems.
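As a minimal illustration of DTrace (a generic one-liner, not taken from this datasheet, and requiring appropriate privileges), the following counts system calls by process name until interrupted:

# dtrace -n 'syscall:::entry { @[execname] = count(); }'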
Solaris Containers
Solaris Containers is an OS-level virtualization technology built into the Solaris 10 OS. Using flexible, software-defined boundaries to isolate software applications and services, this breakthrough approach allows multiple private execution environments to be created within a single instance of the Solaris OS. Each environment has its own identity, including a discrete network stack, separate from the underlying hardware, so it behaves as if it is running on its own system, making consolidation simple, safe, and secure. By dynamically controlling application and resource priorities, businesses can define and achieve predictable service levels. System administrators can easily meet changing requirements by quickly provisioning new Solaris Containers or moving them from system to system, or from disk to disk within the same system, as capacity or configuration needs change. Containers can be patched in parallel, increasing patching speed by up to 300% on systems with multiple containers configured; this also raises the bar on the number of containers that can realistically be run on a system. Containers can also emulate other environments, including prior Solaris releases such as Solaris 8 and Solaris 9, and offer support for Linux applications.

In addition to Solaris Containers, Sun also offers Logical Domains (LDoms), a hardware partitioning technology that allows multiple instances of the Solaris OS to run on a single Sun CoolThreads™ server.

Solaris ZFS
The Solaris ZFS file system is designed from the ground up to deliver a general-purpose file system that spans from the desktop to the datacenter. Anyone who has ever lost important files, run out of space on a partition, spent weekends adding new storage to servers, tried to grow or shrink a file system, or experienced data corruption knows the limitations of traditional file systems and volume managers. Solaris ZFS addresses these challenges efficiently and with minimal manual intervention.
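To make this concrete, here is a minimal generic sketch of ZFS administration; the pool name tank and disk c0t0d0 are placeholders for illustration:

# zpool create tank c0t0d0
# zfs create tank/home
# zfs set quota=10g tank/home
# zfs list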
Predictive Self Healing
Predictive Self Healing is an innovative capability in the Solaris 10 OS that automatically diagnoses, isolates, and helps you recover from many hardware and application faults. As a result, business-critical applications and essential system services can continue uninterrupted in the event of software failures, major hardware component failures, and even software configuration problems.
• Solaris Fault Manager continuously monitors data relating to hardware and software errors. It automatically and silently detects and diagnoses the underlying problem and can automatically take the faulty component offline on SPARC, Intel Xeon, and AMD Opteron processor-based systems. Easy-to-understand diagnostic messages link to articles in Sun's knowledge base to help clearly guide administrators through corrective tasks requiring human intervention.
• The Solaris Service Management Facility (SMF) creates a standardized control mechanism for application services by turning them into first-class objects that administrators can observe and manage in a uniform way. These services can automatically be restarted if they are accidentally terminated by an administrator, fail as the result of a software programming error, or are interrupted by an underlying hardware problem.

Performance
Optimizing performance and efficiency in Solaris 10 is the result of many factors: underlying technologies, system configuration and utilization, tools, applications, and system tuning. An enhanced networking stack minimizes latency and offers improved network performance for most applications out of the box. With DTrace, you can delve deeply into today's complex systems when troubleshooting systemic problems or diagnosing performance bottlenecks, in real time and on the fly. Additional built-in technologies that help deliver increased application performance include:
• High-performance networking stack
• File system performance
• Tools and libraries
• Multiple page-size support (MPSS)
• Memory placement optimization (MPO)

Security
Security is more than a mix of technologies; it is an ongoing discipline. Sun understands this and continues its 20-year commitment to enhancing security in the Solaris OS. Solaris User and Process Rights Management plus Solaris Containers enable the secure hosting of hundreds of applications and multiple customers on the same system. Administrators can use features such as Secure by Default to minimize and harden the Solaris OS even more. Additionally, Solaris Trusted Extensions provides true multi-level security for the first time in a commercial-grade OS, running all your existing applications and supported on over 1,000 different system models.
• Verify your system's integrity by employing Solaris Secure Execution and file verification features
• Reduce risk by granting only the privileges needed for users and processes
• Simplify administration and increase privacy and performance by using the standards-based Solaris Cryptographic Framework
• Secure your system using dynamic service profiles, including a built-in, reduced-exposure network services profile
• Control access to data based on its sensitivity level by using the labeled security technology in Solaris Trusted Extensions

Networking
Exponential growth in Web connectivity, services, and applications is generating a critical need for increased network performance. With the Solaris 10 OS, Sun meets current and future networking challenges by significantly improving network performance without requiring changes to existing applications. The Solaris 10 OS speeds application performance via the Network Layer 7 Cache and enhanced TCP/IP and UDP/IP performance. The latest networking technologies, such as 10-Gigabit Ethernet and hardware offloading, are supported out of the box. Additionally, the Solaris 10 OS supports current IPv6 specifications, high availability, streaming, and Voice over IP (VoIP) networking through extended routing and protocol support, meeting the carrier-grade needs of a growing customer base.

Platform choice
The Solaris 10 OS is optimized for Sun and third-party systems running 64-bit SPARC, AMD, and Intel processors. This makes it possible to create horizontally and vertically scaled infrastructures and offers the flexibility to easily add compute resources. The OS runs on hardware ranging from laptops and single-board computers to datacenter and grid installations, while serving applications ranging from military command-and-control systems to telecommunications switchgear and stock trading.

Interoperability
The Solaris 10 OS provides interoperability from the desktop to the datacenter across a range of hardware systems, operating platforms, and technologies, making it the ideal platform for today's heterogeneous compute environments. Not only does it interoperate with both Linux and Microsoft Windows, it also supports popular open source applications and open standards such as Universal Description, Discovery, and Integration (UDDI); Simple Object Access Protocol (SOAP); Web Services Description Language (WSDL); and eXtensible Markup Language (XML).
• Source and binary compatibility for Linux applications and interoperability with Microsoft Windows systems
• Includes Perl, PHP, and other widely used scripting languages
• Includes Apache, Samba, sendmail, IP Filter, BIND, and other popular open source software
• Supports Java application development and deployment with the Java Platform, Enterprise Edition (Java EE) and Java Platform, Standard Edition (Java SE)
• Includes authentication support for LDAP-based directory servers and Kerberos-based infrastructures

© 2009 Sun Microsystems, Inc. All rights reserved. Sun, Sun Microsystems, Solaris, OpenSolaris, Java, and CoolThreads are trademarks or registered trademarks of Sun Microsystems, Inc. or its subsidiaries in the United States and other countries. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the US and other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. Intel® Xeon® is a trademark or registered trademark of Intel Corporation or its subsidiaries in the United States and other countries. Information subject to change without notice. SunWIN #420130 Lit. #SWDS12147-4 09/09. Sun Microsystems, Inc., 4150 Network Circle, Santa Clara, CA 95054 USA. Phone 1-650-960-1300 or 1-800-555-9SUN.

FAQ for the Solaris[tm] Operating Environment on the Intel Platform


• How do I bind multiple (virtual) IP addresses to a single network interface card?
• When building source files under the Solaris[tm] 7 Operating Environment on the x86 platform, I always get the error "/usr/ucb/cc: language optional software package not installed" and the compiler aborts. What should I do?
• How do we mount a Windows NT file system on a Solaris server?
• Where can we obtain the Hardware Compatibility List (HCL) for the Solaris Operating Environment?
• Where can we download the Solaris DCA (Device Configuration Assistant) boot diskette?
• Where can we obtain official free patches for the Solaris Operating Environment?
• Where can we download hardware drivers from third-party vendors?
• How do I set an IP address for a network card under the Solaris Operating Environment on the Intel platform?
• Where can I get more information about Sun's Solaris certification?
• What are the possible reasons that the Solaris Operating Environment cannot mount CD-ROM and diskette devices?
• Is there a tool to configure monitor properties for the Solaris Operating Environment on the Intel platform?
• I am installing the Solaris 8 Operating Environment on the Intel platform. When I boot the system from the DCA, I always get the following error: "Warning: ACPI tables not in reclaim memory / prom_panic: Kmem_free block already free / Entering boot debugger [12ff05]". How do I disable ACPI in the Solaris 8 Operating Environment on x86?
• Where can I get the Solstice AdminSuite[tm] software for the Solaris 8 Operating Environment on the Intel platform?
• Why do I get the error "RPC: Program not registered" on my NFS partition?
• How can I have two FTP ports on a Solaris server?
• How can I temporarily prevent users from logging in while the superuser performs administrative tasks or shuts down the system?
• I want to display a message banner before any user logs in to the system.
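By way of illustration for the first question in this list, a common approach on Solaris of this era was to plumb logical interfaces on top of the physical interface; this is a sketch, assuming the NIC is hme0 and using placeholder addresses:

# ifconfig hme0:1 plumb
# ifconfig hme0:1 192.168.10.2 netmask 255.255.255.0 up
# ifconfig -a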

Solaris System Administration Commands and Related Technology Reference (Unix Systems)


A -----------------------------------------------------------------------------------
ab2admin — command-line interface for AnswerBook2 administration
ab2cd — run the AnswerBook2 server from the Documentation CD
ab2regsvr — register an AnswerBook2 document server with a federated naming service
accept, reject — accept or reject print requests
acct — overview of accounting and miscellaneous accounting commands
acctcms — process accounting command
acctcon, acctcon1, acctcon2 — connect-time accounting
acctdisk — convert accounting data into total accounting records
acctdusg — compute disk resource consumption by login
acctmerg — merge or add total accounting files
accton — append process accounting records to an existing file
acctprc, acctprc1, acctprc2 — process accounting
acctsh, chargefee, ckpacct, dodisk, lastlogin, monacct, nulladm, prctmp, prdaily, prtacct, shutacct, startup, turnacct — shell procedures for accounting
acctwtmp — write a utmpx record to a file
adbgen — generate adb scripts
add_drv — add a new device driver to the system
add_install_client — script to add or remove clients for network installation
add_to_install_server — script to copy scripts from additional Solaris CDs to an existing network install server
addbadsec — map out defective disk blocks
admintool — system administration with a graphical user interface
afbconfig, SUNWafb_config — configure the AFB graphics accelerator
aliasadm — manipulate the NIS+ aliases map
allocate — device allocation
amiserv — AMI key server
answerbook2_admin — AnswerBook2 GUI administration tool
arp — display and control address resolution
aset — control or restrict access to system files and directories
aset.restore — restore file systems affected by ASET
aspppd, aspppls — asynchronous PPP link manager
audit — control the behavior of the audit daemon
auditconfig — audit configuration
auditd — control the generation and location of audit trail files
auditreduce — merge and select audit trail records from audit trail files
audit_startup — audit subsystem initialization script
auditstat — display kernel audit statistics
audit_warn — audit daemon warning script
automount — install automatic mount points
automountd — autofs mount/unmount daemon
autopush — configure a list of automatically pushed STREAMS modules
B -----------------------------------------------------------------------------------
bdconfig — configure the buttons and dials stream
boot — start the system kernel or a standalone program
bootparamd — boot parameter server
bsmconv, bsmunconv — enable or disable BSM
busstat — report bus-related performance statistics
C -----------------------------------------------------------------------------------
cachefslog — CacheFS logging
cachefspack — pack files and file systems into the cache
cachefsstat — CacheFS statistics
cachefswssize — determine the working-set size for a cache file system
captoinfo — convert termcap descriptions into terminfo descriptions
cfgadm — configuration administration
cfgadm_ac — EXX00 memory system administration
cfgadm_pci — configuration administration command for PCI hot-plugging
cfgadm_scsi — SCSI hardware-specific cfgadm commands
cfgadm_sysctrl — EX00 system board administration
cfsadmin — administer the disk space used for caching file systems with CacheFS
cg14config — configure the SX/CG14 graphics accelerator device
chargefee — accounting shell procedure
check-hostname — check whether sendmail can determine the system's fully qualified host name
check-permissions — check permissions for mail rerouting
check — script to validate the rules in a JumpStart rules file
chown — change owner
chroot — change the root directory for a command
ckpacct — accounting command that periodically checks the size of /var/adm/pacct
clear_locks — clear locks held by an NFS client
clinfo — display cluster information
closewtmp — put a record for a defunct process into the /var/adm/wtmpx file
clri, dcopy — clear inodes
comsat — biff server
consadm — select or display the auxiliary console device
conv_lp — convert LP configuration
conv_lpd — convert LPD configuration
coreadm — core file administration
cpustat — monitor system behavior using CPU performance counters
crash — examine system images
cron — clock daemon
cvcd — virtual console daemon
D -----------------------------------------------------------------------------------
dcopy — clear inodes
dd — convert and copy files
deallocate — device deallocation
devattr — display device attributes
devconfig — configure device attributes
devfree — release devices from exclusive use
devfsadm — administration command for /dev and /devices
devfseventd — kernel event notification daemon for devfsadmd
devinfo — print device-specific information
devlinks — add /dev entries for miscellaneous devices and pseudo-devices
devnm — device name
devreserv — reserve devices for exclusive use
df — display the number of free disk blocks and files
df_ufs — report free disk space on UFS file systems
dfmounts — display information on mounted resources
dfmounts_nfs — display information on mounted NFS resources
dfshares — list available resources from remote or local systems
dfshares_nfs — list available NFS resources from remote systems
dhcpagent — DHCP client daemon
dhcpconfig — DHCP service administration command
dhcpmgr — graphical interface for managing the DHCP service
dhtadm — DHCP configuration table administration command
disks — create /dev entries for hard disks attached to the system
diskscan — perform surface analysis
dispadmin — process scheduler administration
dmesg — collect system diagnostic messages to form an error log
dmi_cmd — DMI command-line interface
dmiget — command-line DMI retrieval command
dminfo — report information about a device entry in a device maps file
dmispd — Sun Solstice Enterprise DMI service provider
dodisk — shell procedure, invoked by the clock daemon, that performs disk accounting
domainname — display or set the current domain name
dr_daemon — Enterprise 10000 dynamic reconfiguration daemon
drvconfig — configure the /devices directory
du — summarize disk usage
dumpadm — configure operating system crash dumps
E -----------------------------------------------------------------------------------
edquota — edit user quotas for UFS file systems
eeprom — EEPROM display and load command
F -----------------------------------------------------------------------------------
fbconfig — frame buffer configuration command
fdetach — detach a name from a STREAMS-based file descriptor
fdisk — create or modify fixed disk partition tables
ff — list file names and statistics for a file system
ff_ufs — list file names and statistics for a UFS file system
ffbconfig — configure the FFB graphics accelerator
fingerd — remote user information server
firmware — bootable firmware programs and firmware commands
fmthard — populate the volume table of contents on hard disks
fncheck — check consistency between FNS data and NIS+ data
fncopy — copy FNS contexts
fncreate — create FNS contexts
fncreate_fs — create FNS file system contexts
fncreate_printer — create new printers in the FNS namespace
fndestroy — destroy FNS contexts
fnselect — select a specific naming service for the FNS initial context
fnsypd — update FNS contexts on an NIS master server
format — disk partitioning and maintenance command
fsck — check and repair file systems
fsck_cachefs — check the integrity of data cached with CacheFS
fsck_s5fs — file system consistency check and interactive repair
fsck_udfs — file system consistency check and interactive repair
fsck_ufs — file system consistency check and interactive repair
fsdb — file system debugger
fsdb_udfs — UDFS file system debugger
fsdb_ufs — UFS file system debugger
fsirand — install random inode generation numbers
fstyp — determine file system type
ftpd — File Transfer Protocol server
fuser — identify processes by file or file structure
fwtmp, wtmpfix — manipulate connect accounting records
G -----------------------------------------------------------------------------------
gencc — create a front end to the cc command
getdev — list devices by category
getdgrp — list device groups that contain matching devices
getent — get entries from administrative databases
gettable — get DoD Internet format host tables from hosts
getty — set terminal type, modes, speed, and line discipline
getvol — verify device accessibility
GFXconfig — configure the PGX32 (Raptor GFX) graphics accelerator
groupadd — add or create new group definitions on the system
groupdel — delete group definitions from the system
groupmod — modify group definitions on the system
grpck — password and group file checker
gsscred — add, remove, and list gsscred table entries
gssd — generate and validate GSS-API tokens for kernel RPC
H -----------------------------------------------------------------------------------
halt, poweroff — stop the processor
hostconfig — configure a system's host parameters
htable — convert DoD Internet format host tables
I -----------------------------------------------------------------------------------
id — return user identity
ifconfig — configure network interface parameters
in.comsat, comsat — biff server
in.dhcpd — DHCP server
in.fingerd, fingerd — remote user information server
in.ftpd, ftpd — File Transfer Protocol server
in.lpd — BSD print protocol adaptor
in.named, named — Internet domain name server
in.ndpd — IPv6 autoconfiguration daemon
in.rarpd, rarpd — DARPA Reverse Address Resolution Protocol server
in.rdisc, rdisc — network router discovery daemon
in.rexecd, rexecd — remote execution server
in.ripngd — IPv6 network routing daemon
in.rlogind, rlogind — remote login server
in.routed, routed — network routing daemon
in.rshd, rshd — remote shell server
in.rwhod, rwhod — system status server
in.talkd, talkd — server for the talk program
in.telnetd, telnetd — DARPA TELNET protocol server
in.tftpd, tftpd — Internet Trivial File Transfer Protocol server
in.tnamed, tnamed — DARPA trivial name server
in.uucpd, uucpd — UUCP server
inetd — Internet services daemon
infocmp — compare or print terminfo descriptions
init, telinit — process control initialization
init.wbem — start and stop the CIM Boot Manager
install — install commands
install_scripts — Solaris software installation scripts
installboot — install boot blocks in a disk partition
installf — add files to the software installation database
Intro, intro — introduction to maintenance commands and application programs
iostat — report I/O statistics
ipsecconf — configure system-wide IPsec policy
ipseckey — manually manipulate the IPsec SA database
K -----------------------------------------------------------------------------------
kadb — kernel debugger
kdmconfig — configure or unconfigure keyboard, display, and mouse options
kerbd — generate and validate Kerberos tickets for kernel RPC
kernel — UNIX system executable file containing the basic operating system services
keyserv — server for storing private encryption keys
killall — kill all active processes
ktkt_warnd — Kerberos warning daemon
kstat — display kernel statistics
L -----------------------------------------------------------------------------------
labelit — list or provide labels for file systems
labelit_hsfs — list or provide labels for HSFS file systems
labelit_udfs — list or provide labels for UDF file systems
labelit_ufs — list or provide labels for UFS file systems
lastlogin — show the last date each person logged in
ldap_cachemgr — LDAP daemon that caches server and client information for NIS lookups
ldapclient, ldap_gen_profile — initialize an LDAP client, or create an LDIF of an LDAP client profile
link, unlink — link and unlink files and directories
list_devices — list allocatable devices
listdgrp — list members of a device group
listen — network listener daemon
llc2_loop — loopback diagnostics for testing drivers, adapters, and networks
lockd — network lock daemon
lockfs — change or report file system locks
lockstat — report kernel lock statistics
lofiadm — administer files available as block devices through lofi
logins — list user and system login information
lpadmin — configure the LP print service
lpfilter — administer filters used with the LP print service
lpforms — administer forms used with the LP print service
lpget — get printing configuration
lpmove — move print requests
lpsched — start the LP print service
lpset — set printing configuration in /etc/printers.conf or FNS
lpshut — stop the LP print service
lpsystem — register remote systems with the print service
lpusers — set printing queue priorities
luxadm — administration program for the SENA, RSM, and SSA subsystems
M -----------------------------------------------------------------------------------
m64config — configure the M64 graphics accelerator
mail.local — store mail in a mailbox
makedbm — make a dbm file, or get a text file from a dbm file
makemap — create database maps for sendmail
mibiisa — Sun SNMP agent
mk — remake the binary system and commands from source code
mkfifo — make FIFO special files
mkfile — create a file
mkfs — construct a file system
mkfs_pcfs — construct a FAT file system
mkfs_udfs — construct a UDFS file system
mkfs_ufs — construct a UFS file system
mknod — make special files
modify_install_server — script to replace the miniroot on an existing network install server
modinfo — display information about loaded kernel modules
modload — load kernel modules
modunload — unload modules
mofcomp — compile MOF files into CIM classes
monacct — accounting program invoked monthly
monitor — SPARC system PROM monitor
mount, umount — mount or unmount file systems and remote resources
mountall, umountall — mount or unmount multiple file systems
mount_cachefs — mount CacheFS file systems
mountd — server that answers NFS mount requests and NFS access checks
mount_hsfs — mount HSFS file systems
mount_nfs — mount remote NFS resources
mount_pcfs — mount PCFS file systems
mount_s5fs — mount s5 file systems
mount_tmpfs — mount tmpfs file systems
mount_udfs — mount UDFS file systems
mount_ufs — mount UFS file systems
mount_xmemfs — mount xmemfs file systems
mpstat — report per-processor statistics
msgid — generate message IDs
mvdir — move a directory
N -----------------------------------------------------------------------------------
named-bootconf — convert configuration files to the format used by BIND 8.1
named-xfer — ancillary agent for inbound zone transfers
named — Internet domain server
ncheck — generate a list of path names versus i-numbers
ncheck_ufs — generate path names versus i-numbers for UFS file systems
ndd — get and set driver configuration parameters
netstat — show network status
newfs — construct a new UFS file system
newkey — create a new Diffie-Hellman key pair in the publickey database
nfsd — NFS daemon
nfslogd — NFS logging daemon
nis_cachemgr — NIS+ command to cache location information about NIS+ servers
nfsstat — display NFS statistics
nisaddcred — create NIS+ credentials
nisaddent — create NIS+ tables from the corresponding /etc files or NIS maps
nisauthconf — NIS+ security configuration
nisbackup — back up NIS+ directories
nisclient — initialize NIS+ credentials for NIS+ principals
nisd — NIS+ service daemon
nisd_resolv — NIS+ service daemon
nisinit — NIS+ client and server initialization command
nislog — display the contents of the NIS+ transaction log
nispasswdd — NIS+ password update daemon
nisping — send pings to NIS+ servers
nispopulate — populate the NIS+ tables in an NIS+ domain
nisprefadm — NIS+ command to set server preferences for NIS+ clients
nisrestore — restore NIS+ directory backups
nisserver — set up NIS+ servers
nissetup — initialize an NIS+ domain
nisshowcache — NIS+ command to print the contents of the shared cache file
nisstat — report NIS+ server statistics
nisupdkeys — update the public keys in NIS+ directories
nlsadmin — network listener service administration
nscd — name service cache daemon
nslookup — query name servers interactively
nstest — DNS test shell
nsupdate — update DNS name servers
ntpdate — set the local date and time using NTP
ntpq — standard NTP query program
ntptrace — trace a chain of NTP hosts back to their master time source
nulladm — create file names with mode 664, ensuring that the owner and group are adm
O -----------------------------------------------------------------------------------
obpsym — kernel symbolic debugging for OpenBoot firmware
ocfserv — OCF server
P -----------------------------------------------------------------------------------
parse_dynamic_clustertoc — parse clustertoc files based on dynamic entries
passmgmt — password files management
patchadd — apply patch packages to a Solaris system
patchrm — remove patch packages and restore previously saved files
pbind — control and query the bindings of processes to processors
pcmciad — PCMCIA user daemon
pfinstall — test installation profiles
pgxconfig, GFXconfig — configure the PGX32 (Raptor GFX) graphics accelerator
ping — send ICMP (ICMP6) ECHO_REQUEST packets to network hosts
pkgadd — transfer software packages to the system
pkgask — store answers to request scripts
pkgchk — check the accuracy of software package installation
pkgrm — remove software packages from the system
pmadm — port monitor administration
pmconfig — configure the power management system
pntadm — DHCP network table administration command
ports — create /dev and inittab entries for serial lines
powerd — power management daemon
poweroff — stop the processor
praudit — print the contents of audit trail files
prctmp, prdaily, prtacct — print various accounting files
printmgr — graphical user interface for managing printers on a network
prstat — report statistics on active processes
prtconf — print system configuration information
prtdiag — display system diagnostic information
prtvtoc — report information about disk geometry and partitioning
psradm — change the operational status of processors
psrinfo — display information about processors
psrset — create and manage processor sets
putdev — edit the device table
putdgrp — edit the device group table
pwck, grpck — password and group file checkers
pwconv — install and update /etc/shadow with information from /etc/passwd
Q -----------------------------------------------------------------------------------
quot — summarize file system ownership
quota — display users' disk quotas and usage on UFS file systems
quotacheck — UFS file system quota consistency checker
quotaon, quotaoff — turn UFS file system quotas on and off
R -----------------------------------------------------------------------------------
rarpd — DARPA Reverse Address Resolution Protocol server
rdate — set the system date from a remote host
rdisc — network router discovery daemon
re-preinstall — install the JumpStart software on a system
reboot — restart the operating system
reject — reject print requests
rem_drv — remove a device driver from the system
removef — remove files from the software database
repquota — summarize quotas for UFS file systems
restricted_shell — restricted shell command interpreter
rexd — RPC-based remote execution server
rexecd — remote execution server
rlogind — remote login server
rm_install_client — script to remove clients from network installation
rmmount — removable media mounter for CD-ROM and diskette
rmt — remote magtape protocol module
roleadd — administer new role accounts
roledel — delete a role's login
rolemod — modify existing role accounts
route — manually manipulate the routing tables
routed — network routing daemon
rpc.bootparamd, bootparamd — boot parameter server
rpc.nisd, nisd — NIS+ service daemon
rpc.nisd_resolv, nisd_resolv — NIS+ service daemon
rpc.nispasswdd, nispasswdd — NIS+ password update daemon
rpc.rexd, rexd — RPC-based remote execution server
rpc.rstatd, rstatd — kernel statistics server
rpc.rusersd, rusersd — network username server
rpc.rwalld, rwalld — network rwall server
rpc.sprayd, sprayd — spray server
rpc.yppasswdd, yppasswdd — server for modifying the NIS password file
rpc.ypupdated, ypupdated — server for changing NIS information
rpcbind — universal addresses to RPC program number mapper
rpcinfo — report RPC information
rpld — RPL server for network booting of IA (x86) systems
rquotad — remote quota server
rsh — restricted shell
rshd — remote shell server
rstatd — kernel statistics server
rtc — manage the real-time clock and GMT lag
runacct — run daily accounting
rusersd — network username server
rwall — write to all users on the network
rwalld — network rwall server
rwhod — system status server
S -----------------------------------------------------------------------------------
sa1, sa2, sadc — system activity report package
sac — service access controller
sacadm — service access controller administration
sadc — system activity report package
sadmind — distributed system administration daemon
saf — service access facility
sar, sa1, sa2, sadc — system activity report package
savecore — save operating system crash dumps
sendmail — send mail over the Internet
server_upgrade — upgrade clients of heterogeneous OS servers
setmnt — establish the mount table
setuname — change system information
setup_install_server — script to copy the Solaris CDs to disk
share — make local resources available for mounting by remote systems
share_nfs — make NFS file systems available for mounting by remote systems
shareall, unshareall — share or unshare multiple resources
showmount — show all remote mounts
showrev — show machine and software revision information
shutacct — turn off process accounting at system shutdown
shutdown — shut down the system or change the system state
slpd — Service Location Protocol daemon
smartcard — configure and administer smart cards
smrsh — restricted shell for sendmail
snmpdx — Sun Solstice Enterprise Master Agent
snmpXdmid — Sun Solstice Enterprise SNMP-DMI mapper
snoop — capture and inspect network packets
soconfig — configure the transport providers used by sockets
soladdapp — add applications to the Solstice application registry
soldelapp — remove applications from the Solstice application registry
solstice — access system administration tools through a graphical user interface
spray — spray packets
sprayd — spray server
ssaadm — administration program for SPARCstorage Array and SPARCstorage RSM disk systems
startup — turn on process accounting at startup
statd — network status monitor
strace — print STREAMS trace messages
strclean — STREAMS error logger cleanup program
strerr — STREAMS error logger daemon
sttydefs — maintain line settings and hunt sequences for TTY ports
su — become superuser or another user
sulogin — access single-user mode
suninstall — install the Solaris operating environment
swap — swap administrative interface
swmtool — install, upgrade, and remove software packages
sxconfig — configure contiguous memory for the SX video subsystem
sync — update the super block
syncinit — set serial line interface operating parameters
syncloop — synchronous line loopback test program
syncstat — report driver statistics from a synchronous serial link
sys-unconfig — undo a system's configuration
sysdef — output system definition
sysidconfig — execute and define system configuration applications
sysidtool, sysidnet, sysidns, sysidsys, sysidroot, sysidpm — system configuration
syslogd — log system messages
T -----------------------------------------------------------------------------------
talkd — server for the talk program
tapes — create /dev entries for tape devices
taskstat — print the status of ASET tasks
tcxconfig — configure the S24 (TCX) frame buffer
telinit — process control initialization
telnetd — DARPA TELNET protocol server
tftpd — Internet Trivial File Transfer Protocol server
tic — terminfo compiler
tnamed — DARPA trivial name server
traceroute — print the route packets take to a network host
ttyadm — format and output port-monitor-specific information
ttymon — port monitor for terminal ports
tunefs — tune an existing file system
turnacct — turn process accounting on or off
U -----------------------------------------------------------------------------------
uadmin — administrative control
ufsdump — incremental file system dump
ufsrestore — incremental file system restore
umount — unmount file systems and remote resources
umountall — unmount multiple file systems
unlink — unlink files and directories
unshare — make local resources unavailable for mounting by remote systems
unshare_nfs — make local NFS file systems unavailable for mounting by remote systems
unshareall — unshare all resources
useradd — administer new user logins or roles on the system
userdel — delete user logins from the system
usermod — modify user login or role information on the system
utmp2wtmp — create entries in the /var/adm/wtmpx file produced by runacct
utmpd — utmpx monitoring daemon
uucheck — check the UUCP directories and permissions file
uucico — file transport program for the UUCP system
uucleanup — clean up the UUCP spool directories
uucpd — UUCP server
uusched — scheduler for the UUCP file transport program
Uutry, uutry — attempt to contact a remote system in debugging mode
uuxqt — execute remote command requests
V -----------------------------------------------------------------------------------
vmstat — report virtual memory statistics
volcopy — make image copies of file systems
volcopy_ufs — make image copies of UFS file systems
vold — volume management daemon for CD-ROM and diskette devices
W -----------------------------------------------------------------------------------
wall — write to all users
wbemadmin — start the Sun WBEM user manager
wbemlogviewer — start the WBEM log viewer
whodo — report who is doing what
wtmpfix — manipulate connect accounting records
X -----------------------------------------------------------------------------------
xntpd — Network Time Protocol daemon
xntpdc — special NTP query program
Y -----------------------------------------------------------------------------------
ypbind — NIS binder process
ypinit — set up NIS clients
ypmake — rebuild NIS databases
yppasswdd — server for modifying the NIS password file
yppoll — return the current version of an NIS map on an NIS server host
yppush — force the propagation of a changed NIS map
ypserv, ypxfrd — NIS server and binder processes
ypset — point ypbind at a particular server
ypstart, ypstop — start and stop NIS services
ypupdated — server for changing NIS information
ypxfr, ypxfr_1perday, ypxfr_1perhour, ypxfr_2perday — transfer NIS maps from an NIS server to a host
ypxfrd — NIS server and binder process
Z -----------------------------------------------------------------------------------
zdump — time zone dumper
zic — time zone compiler

Solaris 10 Two-Node Cluster Configuration

Log in to the 10.71.100.165 system as root and run the following commands to ping the two IP addresses connected to the public network, checking network connectivity.
#ping 10.71.100.210
#ping 10.71.100.211
If the network is unreachable, check whether the network card or the network itself is physically disconnected.
11.2
This section describes the configuration process for the two-node system. The INFOX GW system uses a two-node active/standby network configuration.
----End
After the resource group has been started successfully, the two-node system is up and running. Run the following command to view the status of the cluster resource groups.
root@infox01# scstat -g
Output:
-- Resource Groups and Resources --
Group Name Resources
Resources: infox_rg    server_ip infox_app infoxdg_rs datadg_rs oracle_svr oracle_lsnr
When the database and the application are deployed together, the files under the /clustershell/appora/sun_sc3.1 directory are:
sun_sc3.1
sun_sc3.1/etc
sun_sc3.1/etc/HW.smc
sun_sc3.1/bin
sun_sc3.1/bin/gethostnames
sun_sc3.1/bin/smgw_mon_start.ksh
scrgadm -a -j infoxdg_rs -g infox-rg -t SUNW.HAStoragePlus:2 -x GlobalDevicePaths="infox" -x FileSystemMountPoints='/export/home/infoxshare' -y Resource_dependencies=server_ip
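After the resource is registered, the resource group can be brought online; a minimal sketch, assuming Sun Cluster 3.1's scswitch command and the infox-rg group name used above:

root@infox01# scswitch -Z -g infox-rg
root@infox01# scstat -g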

Mobile Communications Operating System: Solaris Applications, Chapter 2

Installing Solaris 9 for x86
If the server has a keyboard and monitor attached, skip this step. If you are installing Solaris 9 from a terminal, the following prompt appears:
What type of terminal are you using?
1) ANSI Standard CRT
2) DEC VT52
3) DEC VT100
4) Heathkit 19
5) Lear Siegler ADM31
......
Type the number of your choice and press Return:
This selects the terminal type; here we choose "3) DEC VT100".
The next prompt is:
Select a Locale
0. English (C - 7-bit ASCII)
1. Albania (ISO8859-2)
2. Australia (ISO8859-1)
3. Belgium-Flemish (ISO8859-1)
......
Press Return to show more choices.
Please make a choice (0 - 59), or press h or ? for help:
This selects the locale; here we choose "0. English (C - 7-bit ASCII)".
Components of the Solaris Operating System
Kernel
The kernel is the core of the operating system. Its main functions are to manage the system's devices, memory, processes, and daemons; to provide the interface between system programs and the system hardware; and to execute all commands.
Shell
Bourne shell ($), Korn shell ($), C shell (%).
File structure
The Solaris file structure is a hierarchical directory tree, similar to the DOS file structure: directories, subdirectories, and files organized together for specific purposes.

An Introduction to Solaris




Each line in the file contains seven fields:
loginID:x:UID:GID:comment:home_directory:login_shell
• Login ID: the user's login name, at most eight characters (also called the login name or user name).
• x: a placeholder for the user's encrypted password, which is kept in the /etc/shadow file.
• UID: the UID used by the system to identify the user. UID numbers for users range from 100 to 60000. Values 0 through 99 are reserved for system accounts; UID 60001 is reserved for the nobody account and UID 60002 for the noaccess account. Duplicate UIDs are allowed but should be avoided: if two users have the same UID, they have identical access to each other's files.
• GID: the GID used by the system to identify the user's primary group. GID numbers for users range from 100 to 60000 (those between 0 and 99 are reserved for system accounts).
• comment: the user's full name.
• home directory: the full pathname of the user's home directory.
• login shell: the user's login shell, which can be /bin/sh, /bin/ksh, /bin/csh, /bin/zsh, /bin/bash, or /bin/tcsh.
By default, if a user does not have a password, they are automatically prompted to enter a new password during the initial login.
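For illustration, a hypothetical /etc/passwd entry following this format (the user jdoe, UID 1001, group 100, and paths are invented for the example):

jdoe:x:1001:100:Jane Doe:/export/home/jdoe:/bin/ksh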

Solaris Operating System Requirements and Installation Guide

Sun™ Enterprise™ 6x00/5x00/4x00/3x00 Systems DIMM Installation Guide

Requirements
• The 2GB memory option (256MB DIMMs) requires that the system flash PROM software be version 3.2.24 or a compatible version before the memory modules are installed.
• Currently, for Solaris 2.5.1 operating environments, configurations are limited to a maximum total memory of 56GB; configurations that exceed 56GB are not supported for this version. Solaris 2.6 operating environments, together with patch 105181-19, support a maximum configuration of 60GB. There is no limitation for systems running Solaris 7 or 8 operating environments (the maximum achievable configuration is 60GB).
• Do not mix different densities (8, 32, 128, or 256 Mbytes) in a bank.
• To install the Solaris 2.6 operating environment, use the "Operating Environment Installation CD (part number 704-7076-10) February 2000" that comes with your system (not required for installing the Solaris 2.5.1, Solaris 7, or Solaris 8 operating environments). For Solaris 2.5.1 operating environments, the /usr/platform/sun4u/sbin/prtdiag command displays erroneous memory capacity; use software patch 104595-09 to correct this problem.

Suggestions for Maximum Performance
• If there is more than one CPU/Memory board, install one bank of DIMMs on each board first. Then install the second bank on any board. It does not matter whether the first bank is bank 0 or bank 1.
• Begin with the largest-density banks first (256MB DIMMs), continue with medium-density banks (32 or 128MB DIMMs), and finish with the smallest-density banks (8MB DIMMs). It may be necessary to move some banks to meet this guideline.
• If there are remaining banks, fill the second banks on the boards in the same order as the first banks.

Removing the CPU/Memory Board
Caution: If the message "NOTICE: Hot Plug not supported in this system" is displayed during boot, do NOT attempt to remove or install a board while the system is powered on.
If your system supports the hot-plug feature, a board is in low-power mode and ready for removal if one of the following is true:
• All three status LEDs are off.
• The Service LED is lit, and the Power LED and the Running LED are off.
Caution: Use a padded ESD mat to prevent breakage of parts mounted on the bottom of the board, and use a grounding wrist strap to prevent static damage.
Start here if the board is already in the system:
1. If the board is in low-power mode, skip this step and go to Step 2. If the board is not in low-power mode, halt the system and turn off power before proceeding.
2. With a Phillips #1 screwdriver, turn the two quarter-turn locking screws to the unlocked position.
3. Pull the ends of both extraction levers toward you, then pull the board out of the card cage. Do not let the components on the board catch on any surrounding surfaces as you pull the board.
Caution: The heatsinks on the board may be hot. Handle with care.
4. Place the board on a padded ESD mat to prevent breakage of parts mounted on the bottom of the board.

Installing DIMMs
Install a set of DIMMs as a complete bank of eight DIMMs. There are two interleaved banks; the socket numbers (Jxx00 and Jxx01) are marked on the board. Bank 0 uses sockets J3100, J3200, J3300, J3400, J3500, J3600, J3700, and J3800; bank 1 uses sockets J3101, J3201, J3301, J3401, J3501, J3601, J3701, and J3801.
1. Open the DIMM socket by pressing down on the ejector levers at both ends of the socket.
2. Align the two notches at the bottom of the DIMM with the two tabs in the socket.
3. Push the DIMM firmly down into the socket.
4. Lock the DIMM in place by pushing both ejector levers into the upright position.
Note: Make sure the DIMM and connector are free of dust and debris. If necessary, gently clean them using the dry, stiff brush supplied.

Installing the CPU/Memory Board
Caution: The heatsinks on the board can be damaged by incorrect handling. Do not touch the heatsinks while installing or moving the board; hold the board only by the edges. If a heatsink is loose or broken, obtain a replacement board. The heatsinks can also be damaged by improper packaging; when storing or shipping the board, ensure that the heatsinks have sufficient protection.
1. If you are installing a new board, refer to the Sun Enterprise server system reference manual that came with your system for the rules for selecting a board slot.
2. Open the extraction levers by pulling the ends of both levers toward you.
3. Insert the board in the card cage slot.
• For a 4-slot or 5-slot card cage, orient the board with the component side to the right.
• For a 16-slot or 8-slot card cage: for front slot installation, orient the board with the component side down; for rear slot installation, orient the board with the component side up.
4. Push the board into the card cage, then simultaneously press both extraction levers to seat the board on the centerplane. Pushing both levers simultaneously avoids twisting the board and bending the connector pins.
Caution: Do not press on the board front panel to seat it; doing so will damage the connector pins.
Caution: When inserting a board into slot 4 or slot 10 of a 16-slot card cage, lift the board slightly to avoid damage to the centerplane connectors.
5. With a Phillips #1 screwdriver, turn the two quarter-turn locking screws to the locked position.
6. Reboot the system now, or schedule a later time to reboot when system disruption will be minimized. The system cannot use the new board until the system is rebooted.
7. If the system is running, look for a system message similar to the following example (for a CPU/Memory board in slot 5):
NOTICE: CPU Board Hotplugged into Slot 5
NOTICE: Board 5 is ready to remove

Part Number: 802-5032-15, Revision A, January 2000
Sun Microsystems Computer Company, 901 San Antonio Road, Palo Alto, CA 94303-4900 USA, 650-960-1300

A Brief Introduction to Solaris 11 Network Management

Network management in Solaris 11 differs considerably from Solaris 10 and earlier Solaris releases. For historical continuity, traditional commands such as ifconfig have been partly retained, but those commands alone are no longer sufficient to manage Solaris 11 networking.

Two important approaches to Solaris 11 network management: Solaris 11 manages networking through profiles, which gives rise to two management modes, manual management and automatic management.

The network management mode is determined at Solaris 11 installation time by the configuration mode chosen; both the default fixed configuration (DefaultFixed network NCP) and the automatic configuration (Automatic NCP) can be selected during installation.

If the default configuration mode, that is, the DefaultFixed NCP, is used, the dladm and ipadm commands are used for network management.

If the automatic configuration mode (the Automatic NCP) is used, the netcfg and netadm commands are used for network management.

During the initial installation of Solaris 11, a GUI installation activates the automatic configuration mode, whereas a text-mode installation lets you choose automatic, manual, or none for network configuration.

Manual management method: in this mode, Solaris 11 networking is configured with the dladm and ipadm commands and the DefaultFixed NCP is active. Use netadm list to check which management mode is in use.
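Illustrative, abridged output under the manual mode (the exact set of profiles varies by system):

# netadm list
TYPE        PROFILE        STATE
ncp         Automatic      disabled
ncp         DefaultFixed   online
loc         DefaultFixed   online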

Configuration files maintained in Solaris 10 and earlier Solaris releases, such as /etc/defaultdomain, /etc/hostname.*, /etc/nodename, and /etc/nsswitch.conf, are managed through SMF in Solaris 11. To change the host name on Solaris 11, therefore, you no longer edit files such as nodename.
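For example, the node name is now an SMF property; a sketch, with newhost as a placeholder host name:

# svccfg -s svc:/system/identity:node setprop config/nodename = astring: newhost
# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node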

a. Configuring a network interface. In Solaris 10 and earlier releases, an interface was typically brought up with ifconfig interfacename plumb, after which ifconfig was used to configure the IP address, IPMP, and so on.
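In Solaris 11 under the DefaultFixed NCP, the equivalent is done with dladm and ipadm; a minimal sketch, assuming a datalink named net0 and a placeholder address:

# dladm show-phys
# ipadm create-ip net0
# ipadm create-addr -T static -a 192.168.1.10/24 net0/v4
# ipadm show-addr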

Chapter 9: The Solaris System

Commands
This section introduces the ps command and the process tools under /usr/proc/bin. The ps and grep commands can be combined to search for specific information.
1) The commands in /usr/proc/bin:
pstop pid — stop (suspend) a process
prun pid — restart a stopped process
ptime pid — time a process using microstate accounting
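A quick combined illustration, assuming /usr/proc/bin is in the PATH; the process name and PID 2134 are placeholders:

# ps -ef | grep sendmail
# pstop 2134
# prun 2134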
9.3 Solaris Process Management
This chapter covers the process monitoring commands and the priority control commands.
Solaris process states include created, initial, ready (active or quiescent), running, blocked (active or quiescent), and terminated. All processes in Solaris are created using the traditional UNIX fork/exec process-creation model. This model was part of the original UNIX design, and to this day virtually every UNIX version has implemented it. The only exception in the Solaris environment is the creation of four daemons at boot time: the memory scheduler (sched), the init process (init), the page management daemon, and fsflush. These processes are created by an internal kernel function, newproc().
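To observe the fork/exec model directly, one generic option (the traced command is arbitrary) is to follow a shell with truss and filter for process-creation calls:

# truss -f sh -c /usr/bin/date 2>&1 | egrep 'fork|exec'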

Oracle Solaris Operating System: Related Hardware and Software Support Details

ORACLE SOLARIS CLUSTER INTEROPERABILITY MATRIX: EMC VPLEX

Storage Supported:
EMC VPLEX VS1/VS2, GeoSynchrony 5.3, 5.4, 5.4.1, 5.5, Metro Cluster
EMC VPLEX VS6, GeoSynchrony 6.0, Metro Cluster
Restrictions apply. For specifics, please consult EMC's Solaris Host Connectivity Guide.

Host Adapters:
SG-XPCI2FC-EM4, SG-XPCI2FC-EM4-Z, SG-XPCI2FC-QF4, SG-XPCIE2FC-EM4, SG-XPCIE2FC-QF4, SG-XPCIEFCGBE-E8-Z, SG-XPCIEFCGBE-Q8-Z
16 Gb HBAs: 7101674, 7101682, 7101684, 7101690

SAN Switches: [...]

Oracle Solaris: 11.1 (plus minimum SRU 5.5), 11.2, 11.3, 11.4 (see note 9)
Multi-Path Driver: EMC PowerPath 5.5 P06; 6.0 through 6.0 P02
Oracle Solaris Cluster: OSC 4.1, 4.2, 4.3, 4.4
Maximum Number of Nodes: 6
The Oracle SAN driver is bundled in the OS for Oracle Solaris 11 releases.

Note 9: Some legacy SPARC servers are not supported on 11.4. See the following link for details: https:///technetwork/systems/end-of-notices/eonsolaris11392732.html#11.4

Solaris MC: A Multi-Computer OSYousef A. Khalidi Jose M. Bernabeu Vlada Matena Ken Shirriff Moti ThadaniSMLI TR-95-48November 1995Abstract:Solaris MC is a prototype distributed operating system for multi-computers (i.e., clusters of nodes) that pro-vides a single-system image: a cluster appears to the user and applications as a single computer running the Solaris™ operating system. Solaris MC is built as a set of extensions to the base Solaris UNIX ® sys-tem and provides the same ABI/API as Solaris, running unmodified applications. The components of Solaris MC are implemented in C++ through a CORBA-compliant object-oriented system with all new ser-vices defined by the IDL definition language. Objects communicate through a runtime system that borrows from Solaris doors and Spring subcontracts. Solaris MC is designed for high availability: if a node fails, the remaining nodes remain operational. Solaris MC has a distributed caching file system with UNIX consis-tency semantics, based on the Spring virtual memory and file system architecture. Process operations are extended across the cluster, including remote process execution and a global /proc file system. The exter-nal network is transparently accessible from any node in the cluster. The prototype is fairly complete—we regularly exercise the system by running multiple copies of an off-the-shelf commercial database system.email addresses:yousef.khalidi@ josep@iti.upv.esvlada.matena@ ken.shirriff@ moti.thadani@M/S 29-012550 Garcia AvenueMountain View, CA 940431IntroductionSolaris MC1 is a prototype operating system for a multi-computer, a cluster of computing n o d e s c o n n e c t e d b y a h i g h-s p e e d interconnect. The Solaris MC operating system provides a single system image, making the cluster look like a single machine to the user, to applications, and to the network. By extending operating system abstractions across the cluster, Solaris MC preserves the existing Solaris ABI/API and runs existing Solaris 2.x applications and device drivers without modification.The decision to design a cluster operating system was motivated by trends in hardware technology. Traditional bus-based symmetric multiprocessors (SMP) are limited in the number of processors, memory, and I/O bandwidth that they can support. As processor speed increases, traditional SMPs will support an even smaller number of CPUs. Powerful, modular, and scalable computing systems can be built using inex-pensive computing nodes coupled with high-speed interconnection networks. Such clus-tered systems can take the form of loosely-1. Solaris MC is the internal name of a research project at Sun Microsystems Laboratories. More information on the project can be obtained from http:/ //research/solaris-mc.coupled systems, built out of workstations [1], massively-parallel systems (e.g., [24]), or perhaps as a collection of small SMPs interconnected through a low-latency high-bandwidth network.The key to using clustered systems is to provide a single-system image operating system allowing them to be used as general purpose computers. Cluster systems in the past have been mostly used for custom-built parallel and distributed applications, and sometimes as specialized database systems. However, to fully exploit the potential of clustered systems, we believe that they have to be usable as general purpose computers, running existing applications without modi-fication. Moreover, clustered systems have to be easy to administer and maintain. 
The fact that the computer is actually built out of multiple computing nodes should be invis-ible to the user. Finally, since clustered systems are built out of many components, the clustered system should be highly-avail-able and should be able to tolerate the failure of any one component.Our goals are to make a cluster of nodes that may or may not share memory appear as a single general purpose multiprocessor. It should be seen as a single machine by appli-Solaris MC: A Multi-Computer OSYousef A. Khalidi Jose M. Bernabeu Vlada Matena Ken Shirriff Moti ThadaniSun Microsystems Laboratories2550 Garcia AvenueMountain View, CA 94043cations, users, and administrators. We want this while preserving object code compati-bility (the ABI), minimizing changes to kernel code, requiring minimal or no change to device drivers, and supporting high avail-ability.Solaris MC has several interesting features. It:•Extends existing Solaris operating systemSolaris MC is built on top of the Solaris operating system. Most of Solaris MC consists of loadable modules extending the Solaris OS, and minimizes the modifi-cations to the existing Solaris kernel.Thus, Solaris MC shows how an existing, widely-used operating system can be extended to support clusters.•Maintains ABI/API complianceExisting the application and device driver binaries run unmodified on Solaris MC.To provide this feature, Solaris MC has a global file system, extends process opera-tions across all the nodes, allows transpar-ent access to remote devices, and makes the cluster appear as a single machine on the network.•Supports high availabilityThe Solaris MC architecture provides fault-containment at the level of an indi-vidual node in the multi-computer. Solaris MC runs a separate kernel on each node.A failure of a node does not cause thewhole system to fail. A failed node is detected and system services are reconfig-ured to use the remaining nodes. Only the programs that were using the resources of the failed node are affected by the failure.Solaris MC does not introduce new failure modes into UNIX.•Uses C++, IDL, and CORBA in the kernelSolaris MC illustrates how the CORBA (common object request broker architec-ture) object model can be used to extend an existing UNIX operating system to a distributed OS. At the same time, it also shows the advantages of implementing strong interfaces for kernel components by using IDL (interface definition lan-guage). Finally, Solaris MC illustrates how C++ can be used for kernel develop-ment, coexisting with previous code.•Leverages Spring technologySolaris MC illustrates how the distributed techniques developed by the Spring OS[15] can be migrated into a commercialoperating system. Solaris MC imports from Spring the idea of using a CORBA-compliant object model [18] as the com-munication mechanism, the Spring virtual memory and file system architecture [7, 10, 9], and the use of C++ as the imple-mentation language. One can view Solaris MC as a transition from the centralized Solaris operating system toward a more modular and distributed OS like Spring. Solaris MC uses ideas from earlier distrib-uted operating systems such as Sprite [19], LOCUS [20], OSF/1 AD TNC [26], MOS [2], and Spring. One key difference from other systems is that Solaris MC shows how a commercial operating system can be extended to a cluster while keeping the existing application base. In addition, Solaris MC uses an object-oriented approach to define new kernel components. Solaris MC also has a stronger emphasis on high avail-ability. 
Finally, Solaris MC uses new tech-niques for making the cluster appear as a single machine to the external network.The remainder of this paper is structured as follows. Section 2 explains the global file system. Section 3 describes how process management is globalized. Section 4 explains how I/O devices are made global,vnode1vnode2kernelvnode/VFSvnode/VFS (b) Solaris MC (a) Standard Solaris vnode1vnode2kernelvnode/VFSobject invocationproxy layer object implementationvnode1vnode2kernel vnode/VFS vnode/VFS (c) Solaris MC with cachingobject invocationproxy layer object implementationcachesproviders Figure 1. Extending File System Interfaces for Solaris MC. (a) In Solaris, the kernel accesses files through the VFS/vnode operations. (b) In Solaris MC, the VFS/vnode operations are converted by a proxy layer into object invocations . The invoked object may reside on any node in the system. The invoked object performs a local VFS/vnode operation on the underlying file system. Neither the kernel nor the existing file systems have to be modified to run under Solaris MC. (c) Caching is used in Solaris MC to improve performance. Solaris MC supports caching of file pages, directory information, file attributes, and mount points.and Section 5 explains how network opera-tions are made transparent. Section 6discusses the object-based communication model of Solaris MC, and explains CORBA and IDL. Section 7 briefly describes how Solaris MC provides high availability.Section 8 provides the current status of Solaris MC, Section 9 compares Solaris MC to other distributed operating systems, and Section 10 concludes the paper.2Global File SystemSolaris MC uses a global file system to make file accesses location transparent—a process can open a file located anywhere in the system and processes on all nodes can use the same pathname to locate a file. The global file system uses coherency protocols to preserve the UNIX file access semantics even if the file is accessed concurrently from multiple nodes. This file system, called the proxy file system (PXFS), is built on top of the existing Solaris file system at the vnode [11] interface. This interface allows PXFS tob e i m p l e m e n t e d w i t h o u t k e r n e l modifications. The PXFS file system provides extensive caching for high performance using the caching approach from Spring [7], and provides zero-copy bulk I/O movement to move large data objects efficiently. This section discusses these features of PXFS in more detail.PXFS interposes on file operations at the vnode/VFS interface and forwards them to the vnode layer where the file resides, as shown in Figure 1. Besides files, PXFS also provides access to other types of vnodes,such as directories, symbolic links, special devices, streams, swap files, fifos, and Solaris doors.2 Because PXFS is built on top of the existing file system, it can leverage off the existing file system code. This is an important difference from distributed file2. Solaris doors is a new IPC mechanism in Solaris 2.5 that is based on the Spring IPC mechanism [15].systems such as Sprite or Spring that rewrite the entire file system.PXFS uses extensive caching on the clients to reduce the number of remote object invo-cations. Figure 2 shows the objects used in the file paging and attribute caching proto-cols. The design of PXFS was influenced by the Spring file system and its caching archi-tecture [7, 17, 16]. 
A client cache is imple-mented through a cached object on the client to manage the cached data and a cacher object on the server to maintain consistency. For data, the client has a memcache object and the server has a mempager object. For attributes, the client has a attrcache object and the server has a attrprov object.As an example, suppose a process on Client 1 wishes to page in a page from a file. A memcache is a vnode in addition to being an IDL object, so it can accept GETPAGE and PUTPAGE operations from the Solaris virtual memory system. The memcachevnode is used as the paged vnode for the VOP_MAP operations on the proxy vnode. Memcache searches the local cache for the page. If it is not available, memcache requests the page from the associated mempager. The mempager checks the other mempagers to see if another client has the page, to maintain consistency. Finally, the page is obtained from the backing server vnode. Thus, PXFS has control over global page coherence.The PXFS coherency protocol is token-based and allows a page to be cached read-only by multiple caches or read-write by a single cache. If a dirty page is transferred from one node to another, it is first written to the stable storage on the server to avoid losing updates due to crashes of unrelated nodes. Similarly, an attribute cache is also protected by a reader-writer token. The token is also used to enforce atomicity of read/ write system calls on regular files. Token management is integrated with data transfer for better performance.Directory caching and caching of mount points is done in a fashion similar to attribute caching. Directory operations that create or remove objects are implemented as write-through to be reflected synchronously in stable storage on the server.PXFS has a “bulkio” object handler to perform zero-copy transfers between nodes of large data (file pages, uioread/uiowrite data) if the hardware interconnect has suffi-cient support. For example, if a process takes a page fault, it allocates a page in the local cache and invokes the page_in method on the mempager. The server then allocates a kernel buffer and reads the data from the disk into the buffer. The data is then transferred using the bulkio handler directly into the page on the client. If the underlying hard-ware supports shared memory, the server can Figure 2. File Paging and Attribute Caching.Each client has a memcache object to cache data and an attrcache object to cache attributes. The server has corresponding mempager and attrprov objects to provide the data and attributes. The file object is an IDL object implementing the file protocol. The server vnode provides the underlying file storage.Client 1Servermemcache1proxymempagersserver vnodeattrcache1Client 2memcache2proxyattrcache2 attrprovvnode vnodeList of attributeprovidersList of memorypagersfileobject......map the client page and read data from the disk directly into the page without the need for an intermediate buffer on the server. By using a separate handler for bulk I/O, no changes to the PXFS client or server code are necessary to port PXFS to a different interconnect; only the bulkio handler has to be ported to take full advantage of the hard-ware.3Global Process Management Global process management in Solaris MC extends OS process operations so that the location of a process is transparent to the user. While the threads of a single process m u s t b e o n t h e s a m e(p o s s i b l y multiprocessor) node, a process can reside on any node. 
3 Global Process Management

Global process management in Solaris MC extends OS process operations so that the location of a process is transparent to the user. While the threads of a single process must be on the same (possibly multiprocessor) node, a process can reside on any node. The design goals of process management are to support POSIX semantics for process operations while providing good performance, supplying high availability, and minimizing changes to the existing Solaris kernel. This section discusses the implementation of process management and how it transparently provides signals on global process ids, distributed waits, the /proc file system, and process migration.

Process management is implemented in a kernel module above the existing Solaris kernel code that manages the global view of processes. As illustrated in Figure 3, this layer consists of a virtual process (vproc) object for each local process, and a node manager for the node. The vproc maintains state such as the parent and children of the process. The node manager keeps track of the local processes and the other nodes. Additional objects manage process groups and sessions.

Figure 3. The data structures of the global process layer. Each node has a node manager object that has a list of all processes created or residing on the node and a list of the other nodes. Each process has a virtual process (vproc) object associated with it. When a process migrates, the old vproc is left behind to forward any operations. The vprocs keep track of the parent/child relationships of the processes.

The global process layer interacts with the rest of the system in several ways. First, process-related system calls are redirected to this layer. Second, a small number of hooks were added to the kernel to call this layer when appropriate. Finally, the vproc layers on different nodes communicate through IDL interfaces. Process management was made more difficult by the lack of an existing kernel interface (analogous to vnodes for the file system). We are exploring whether the vproc interface can be extended into a flexible kernel interface useful for other system extensions.

Process identifiers (pids) in Solaris MC use a single global pid space and encode the home node of the process in the top bits. Thus, an arbitrary process can be located from its pid by contacting the home node, which knows the current location; this location can then be cached. The signal delivery code, for instance, uses the pid to deliver signals to a process no matter where it resides. The pid encoding also ensures that processes on different nodes will not be created with the same pid. The same pid is used inside and outside the kernel; Solaris MC does not use distinct local (internal) pids and global (external) pids.
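As an illustration of the pid encoding, the following self-contained C++ sketch assumes 8 bits of node number and 24 bits of local pid; the field widths are not specified by the paper and all names here are hypothetical:

    #include <cstdint>
    #include <cstdio>

    // The home node lives in the top bits of the pid, so any node can find a
    // process's home from the pid alone; widths below are assumed.
    constexpr int kNodeShift = 24;
    constexpr std::uint32_t kLocalMask = (1u << kNodeShift) - 1;

    std::uint32_t make_global_pid(std::uint32_t node, std::uint32_t local_pid) {
        return (node << kNodeShift) | (local_pid & kLocalMask);
    }

    std::uint32_t home_node(std::uint32_t pid)  { return pid >> kNodeShift; }
    std::uint32_t local_part(std::uint32_t pid) { return pid & kLocalMask; }

    int main() {
        std::uint32_t pid = make_global_pid(3, 1234);
        // To signal this process, contact node 3, which knows (and lets the
        // caller cache) the process's current location even after migration.
        std::printf("home=%u local=%u\n", home_node(pid), local_part(pid));
        return 0;
    }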
Waits pose problems for a cluster because a parent and child may be on separate nodes. In Solaris MC, distributed waits are implemented by having the child inform the parent of each state change (exit, stopped, continued, or debugged). The parent keeps track of the state of each child, and wait operations use this local copy.

In the Solaris OS, the /proc pseudo file system provides access to each process in the system; it is used by ps and the debugger, for instance. In Solaris MC, the /proc file system is extended to cover all processes in the cluster. Code for /proc in the PXFS file system merges the local /proc file systems into a global /proc. Thus, directory operations on /proc show the process entries in all the local /procs, and lookup operations are redirected to the appropriate node.

Solaris MC currently supports remote execution of processes and will soon support remote forks and migration of existing processes. For a remote fork or migration, most of the process's state will be moved automatically through the consistency mechanism of PXFS. A "shadow vproc" is left behind when a process migrates; any operations received by the shadow vproc are forwarded to the vproc on the node where the process resides. Policy decisions on load balancing will be built on top of the migration mechanisms; one possibility is to use a migration daemon, as in Sprite [5], that decides which nodes should receive processes. However, we believe that the main use of process migration will be for planned shutdown of cluster nodes rather than fine-grained load balancing across the cluster; load balancing will largely be managed by the placement of processes at exec time.

Global process management in Solaris MC will support high availability: the failure of a node will not interfere with processes on other nodes. While the processes on a failed node will die, the rest of the system will continue after a recovery phase. Parents and children will be notified appropriately of process failures. A new node will take over as home node for the failed node, and migrated processes that originated on the failed node will then use the new node as home.

4 I/O Subsystem

The I/O subsystem makes it possible to access any I/O device from any node in the multi-computer without regard to the physical attachment of devices to nodes. Applications are able to access I/O devices as local devices even when the devices are physically attached to a node different from the one on which the application is running. Several areas require attention to ensure this access:

• Device configuration: the Solaris OS provides dynamically loadable and configurable device drivers. Solaris MC transparently provides a consistent view of device configurations through a distributed device server that is notified when a new device is configured into the system on a particular node. When the device driver corresponding to the newly configured device is invoked on a different node, it is loaded on that node using the DDI/DKI device interfaces defined for the Solaris OS. Different nodes in the system may have different devices attached and different sets of drivers/modules loaded in kernel memory at any point in time. The device server distributes the functionality of the Solaris modctl() interface, which handles the loading and unloading of dynamically loadable modules. Module configuration routines such as make_devname() add new device names to the device server. Module control interfaces such as mod_hold_dev_by_major(), ddi_name_to_major(), and ddi_major_to_name() look up the distributed device database rather than local data structures.

• Uniform device naming: device numbers provide information about the location (i.e., node number) of the device in the system, in addition to the type of device and the instance or unit number of the device. The operating system associates a location with every device special file. When a device is opened, the open() is directed to the node to which the physical device is attached.

• Providing process context to device drivers: device drivers require access to process context for data transfer and credentials checking. In Solaris MC, the calling process may be on a different node than the node on which the driver executes; consequently, the process context in which the driver runs is different from the process context of the calling process. The operating system provides a logical equivalence between the two processes so that device drivers can function without modification.

The Streams framework poses additional problems, which are not discussed in detail here due to space limitations. Solaris MC allows Streams device drivers and modules that use procedural interfaces to work unchanged in the new environment. Some modules, however, do not strictly obey the Streams interface; they may either be modified to run on Solaris MC, or they may be confined to one node in the cluster.
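The following C++ sketch illustrates two of the points above: a device number that encodes the owning node alongside the driver (major) number and instance, and a distributed device server consulted instead of per-node tables. The field widths, class names, and lookup API are assumptions made purely for illustration and do not reflect the actual Solaris MC data structures:

    #include <cstdint>
    #include <map>
    #include <string>

    using dev_t32 = std::uint32_t;

    // Assumed layout: 8 bits of node, 12 bits of major, 12 bits of instance.
    constexpr dev_t32 make_dev(std::uint32_t node, std::uint32_t major,
                               std::uint32_t instance) {
        return (node << 24) | (major << 12) | instance;
    }
    constexpr std::uint32_t dev_node(dev_t32 d) { return d >> 24; }

    // Stand-in for the distributed device server that interfaces such as
    // ddi_name_to_major()/ddi_major_to_name() would consult cluster-wide.
    class DeviceServer {
        std::map<std::string, std::uint32_t> name_to_major;
    public:
        void register_driver(const std::string& name, std::uint32_t major) {
            name_to_major[name] = major;   // a node configured a new device
        }
        std::uint32_t major_for(const std::string& name) {
            return name_to_major.at(name); // same answer on every node
        }
    };

    // open() on a device special file is directed to the owning node.
    void open_device(dev_t32 dev) {
        std::uint32_t node = dev_node(dev);
        (void)node;  // forward the open to `node`, where the driver is loaded
    }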
5 Networking

The networking subsystem in Solaris MC creates a single system image environment for networking applications. The operating system ensures that network connectivity is the same for every application, regardless of which node the application runs on. This goal is achieved with minimal impact on the existing network subsystem implementation and without any changes to applications.

We considered three approaches for handling network traffic. The first approach was to perform all network protocol processing on a single node; this approach, however, is not scalable to large numbers of nodes. The second approach was to run network protocols over the interconnection backplane; this approach requires each node to have a separate network address, which prevents transparency. The third approach, which we took, was to use a packet filter to route packets to the proper node and perform protocol processing on that node.

Our approach creates the illusion that the set of real network interfaces available in the system is local to each node in the system. Applications are unaware of the real location of each network device, and their view of the network is the same from every node in the system. When an application transmits data over an illusory network device on a node, the framework forwards the outgoing network packet to the real device. Similarly, on the input side, the framework forwards packets from the node to which the real network device is attached to the node where the appropriate application is running. The advantages of our design are that (a) protocol processing is not limited to those nodes that have network devices, (b) only one new module has to be written to handle networking for most protocol stacks, and (c) changes to the protocol stacks are minimized.

There are three key components of the Solaris MC networking subsystem:

• Demultiplexing of incoming packets to the "correct" node: incoming packets are first received on the node to which the network adapter is physically attached. The data may, however, be addressed to an application running on a different node. Solaris MC includes an enhanced implementation of the programmable Mach packet filter [14, 25], which extracts relevant information from each packet and matches it against state information maintained by the host system. Once the destination node within the multi-computer system is discovered, the packet is delivered to that node over the system interconnect.

• Multiplexing of outgoing packets from various nodes onto a network device: all protocol processing for outgoing packets is performed on the node on which the endpoint for the network connection exists.
The layer that passes data to the device driver transparently uses remote device access to send the data over the physical medium.

• Global management of the network name space: network services are accessed through a service access point (or sap). (For TCP/IP, the saps are simply ports.) Providing a single system image of the sap name space requires coordination between the various nodes. In Solaris MC, a database that maps service access points to nodes within the multi-computer is maintained by the SAPServer, which ensures that the same sap is not simultaneously allocated by different nodes in the system.

The structure of the networking system is shown in Figure 4. The mc_net module is the packet filter that creates the illusion of a local lower stream corresponding to a remote physical network device in the system. The mc_net module is pushed above the cloneable network device driver by the Solaris MC network configuration utilities. The network stack, with the exception of the mc_net module, is oblivious to the location of the network device within the multi-computer system. In the figure, the SAPServer is shown independent of a node for clarity; in reality, it is provided on one or more of the nodes of the system.

Figure 4. Multi-computer Networking Set-up. The mc_net packet filter makes the le0 network device appear local to the application process. TCP/IP protocol processing occurs on node B, preventing node A from becoming a bottleneck. Solid lines show data traffic and dotted lines show service access port control communication.

Solaris MC networking also provides the ability to replicate network services to provide higher throughput and lower response times. This is achieved by extending the API to allow multiple processes to register themselves as servers for a particular service. The network subsystem then chooses a particular process when a service request is received. For example, rlogin, telnet, and http servers are by default replicated on each node. Each new connection to these services is sent to a different node in the cluster based on a load balancing policy (currently, a simple round-robin load distribution policy). This allows the cluster to be used as an HTTP server, for example, with all nodes handling requests in parallel.

Other features of the Solaris MC networking subsystem are the management of global state in the network protocols, such as network statistics maintained for network management agents, and network state information acquired from routers or peers on the network. In the former case, the network management agents are modified to collect information from all the component nodes of the multi-computer; in the latter case, information collected on any node is broadcast to the other nodes.
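The following C++ sketch models how sap registration and the round-robin policy described above might fit together; it is an illustration of the mechanism, not the SAPServer or mc_net implementation, and all names are invented:

    #include <cstddef>
    #include <cstdint>
    #include <map>
    #include <vector>

    using Port = std::uint16_t;   // for TCP/IP, the sap is simply a port
    using NodeId = int;

    class SapTable {
        std::map<Port, std::vector<NodeId>> servers;  // replicated registrations
        std::map<Port, std::size_t> next;             // round-robin cursor

    public:
        // A process registers as a server for a sap; replicas may register on
        // many nodes (e.g., the default rlogin/telnet/http servers).
        bool register_server(Port sap, NodeId node) {
            servers[sap].push_back(node);
            return true;  // a real SAPServer would reject conflicting claims
        }

        // Called by the packet filter on the node owning the adapter: pick
        // the destination node for a new connection to this sap.
        NodeId route_new_connection(Port sap) {
            auto& nodes = servers.at(sap);
            NodeId chosen = nodes[next[sap] % nodes.size()];
            ++next[sap];                              // simple round-robin
            return chosen;                            // forward over interconnect
        }
    };

A round-robin cursor per sap is the simplest policy consistent with the text; more elaborate load balancing policies could be substituted behind the same routing call.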
6 Communication and Programming Infrastructure

Solaris MC is built from a set of components on top of the basic Solaris kernel. These components include most OS services, from file system support to global process management and networking management. The programming and communications framework provides support for implementing the components and for the communication between them. The framework includes a programming model, a compiler, and runtime support for component implementation.

6.1 Programming model

Solaris MC components require a mechanism for accessing them both locally and remotely, and for determining when a component is no longer used by the rest of the system. At the same time, it is essential that each new component have a clearly specified interface, permitting its maintenance and evolution. These two requirements led us to adopt an object-oriented approach to the design of Solaris MC.

From the available possibilities we decided to adopt the CORBA [18] object model as the best suited for our purposes. CORBA is an architecture with mechanisms for objects to make requests and receive responses in a heterogeneous distributed environment, somewhat similar to RPCs. CORBA provides a strong separation between interfaces and implementations. In CORBA, an interface is basically a set of operations, and each object accepts requests for the operations defined by its associated interface. How a given object implements an interface is up to the implementor of the particular object. CORBA also includes reference counting: in order to perform a request on an object, the client code must obtain a reference to that object, allowing the system to keep track of the number of references.

Interfaces are defined using CORBA's IDL [23]. IDL allows the definition of interfaces by specifying the set of operations the interface accepts (similar to C function declarations), as well as the set of exceptions any given operation may raise. Interfaces can be composed using interface inheritance mechanisms, including multiple inheritance. Client and server object implementation code can be written in any programming language for which a mapping from IDL has been established; currently there are a few such languages, including C and C++. We decided to use C++, as it provided the best match for the CORBA object model.

Every major component of Solaris MC is defined by one or more IDL-specified object types. All interactions among the components are carried out by issuing requests for the operations defined in each component's interface. Such requests are carried out independently of the location of the object instance by using our own ORB (Object Request Broker), or run time. When the invocation is local (within the same address space), the request is carried out as an efficient local call; otherwise the ORB forwards the request to the node where the object resides.
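To give a flavor of the approach, the following hypothetical sketch pairs a small IDL interface (shown as a comment) with the abstract C++ class it might map to, including the reference counting that CORBA requires. The interface is invented for illustration and is not an actual Solaris MC definition:

    #include <atomic>

    // Hypothetical IDL for a component interface:
    //
    //   interface vproc {
    //     void send_signal(in long sig) raises (NoSuchProcess);
    //     long get_ppid();
    //   };

    struct NoSuchProcess {};   // an IDL exception maps to a C++ type

    // An IDL interface maps to an abstract C++ class; the run time supplies
    // reference counting so a server can reclaim an object once no client
    // anywhere in the cluster holds a reference to it.
    class vproc {
        std::atomic<int> refcount{1};
    public:
        virtual ~vproc() = default;
        virtual void send_signal(long sig) = 0;   // may raise NoSuchProcess
        virtual long get_ppid() = 0;

        void add_ref() { ++refcount; }
        void release() { if (--refcount == 0) delete this; }
    };

A server component subclasses such a generated base to implement the operations, while clients invoke them through object references without knowing on which node the implementation resides.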
