SUN metadb count / disk replacement: replacing a failed disk without breaking the mirror

Replacing a failed system disk under DiskSuite

Environment:
Solaris 2.X
SDS 4.X
failed boot disk (c0t0d0)
mirror disk (c0t1d0)

The system disk c0t0d0 has failed with a hardware fault, but the system has a mirror disk, c0t1d0, built with SDS.


The procedure for replacing the disk and recovering the system is as follows:

1 Boot the system from the mirror disk
When the system can no longer boot normally, drop to the OpenBoot PROM (the ok prompt) and use devalias to find the mirror disk:
ok devalias
sds-mirror /pci@1f,4000/scsi@3/disk@1,0
sds-root /pci@1f,4000/scsi@3/disk@0,0
net /pci@1f,4000/network@1,1
disk /pci@1f,4000/scsi@3/disk@0,0
cdrom /pci@1f,4000/scsi@3/disk@6,0:f
...

Then boot from the mirror disk:

ok boot sds-mirror
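
If the sds-mirror alias is not defined on your machine (it is normally created with nvalias when the mirror is first set up), you can boot from the mirror by its full device path instead, taken from the devalias output above:

ok boot /pci@1f,4000/scsi@3/disk@1,0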

There is a catch here. DiskSuite will only start normally if more than 50% of its
state database replicas are available. In this example, suppose that at installation
time two database replicas were created on each of the slices c0t0d0s7 and c0t1d0s7,
four replicas in total. With c0t0d0 broken, only the two on c0t1d0 remain available,
which does not meet the more-than-50% requirement, so the boot still needs manual
intervention: enter single-user mode and delete the two failed state database replicas:

Boot device: /pci@1f,4000/scsi@3/disk@1,0 File and args:
SunOS Release 5.8 Version Generic_108528-15 64-bit
Copyright 1983-2001 Sun Microsystems, Inc. All rights reserved.
WARNING: md: d10: /dev/dsk/c0t0d0s0 needs maintenance
WARNING: forceload of misc/md_trans failed
WARNING: forceload of misc/md_raid failed
WARNING: forceload of misc/md_hotspares failed
configuring IPv4 interfaces: hme0.
Hostname: app01

metainit: stale databases

Insufficient metadevice database replicas located.

Use metadb to delete databases which are broken.
Ignore any "Read-only file system" error messages.
Reboot the system when finished to reload the metadevice database.
After reboot, repair any broken database replicas which were deleted.

Type control-d to proceed with normal startup,
(or give root password for system maintenance): ******

single-user privilege assigned to /dev/console.
Entering System Maintenance Mode

《Note the boot messages above: because the more-than-50% state replica requirement is not
met, the system cannot come up in its normal run state.》


# metadb -i
flags first blk block count
M p unknown unknown /dev/dsk/c0t0d0s7
M p unknown unknown /dev/dsk/c0t0d0s7
a m p lu 16 1034 /dev/dsk/c0t1d0s7
a p l 16 1034 /dev/dsk/c0t1d0s7
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors

《The replicas on c0t0d0s7 carry the M status flag, meaning DiskSuite can no longer access them.》

# metadb -d c0t0d0s7
《Delete the failed replicas》


# metadb -i
flags first blk block count
a m p lu 16 1034 /dev/dsk/c0t1d0s7
a p l 16 1034 /dev/dsk/c0t1d0s7
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors
《Verify》

# reboot -- sds-mirror
《Reboot the system》
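
As an optional hardening step, assuming sds-root and sds-mirror are permanent NVRAM aliases (created with nvalias rather than a temporary devalias), you can tell the PROM to fall back to the mirror automatically whenever the primary disk does not respond:

# eeprom boot-device="sds-root sds-mirror"

At the next boot the PROM will try each device in the list in order.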

2 Check the system state
Once the system is up, identify the metadevices on the failed disk and write down the device names that need to be replaced:
# metastat
d0: Mirror
Submirror 0: d10
State: Needs maintenance
Submirror 1: d20
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 13423200 blocks

d10: Submirror of d0
State: Needs maintenance
Invoke: metareplace d0 c0t0d0s0
Size: 13423200 blocks
Stripe 0:
Device Start Block Dbase State Hot Spare
c0t0d0s0 0 No Maintenance


d20: Submirror of d0
State: Okay
Size: 13423200 blocks
Stripe 0:
Device Start Block Dbase State Hot Spare
c0t1d0s0 0 No Okay


d1: Mirror
Submirror 0: d11
State: Needs maintenance
Submirror 1: d21
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 2100000 blocks

d11: Submirror of d1
State: Needs maintenance
Invoke: metareplace d1 c0t0d0s1
Size: 2100000 blocks
Stripe 0:
Device Start Block Dbase State Hot Spare
c0t0d0s1 0 No Maintenance


d21: Submirror of d1
State: Okay
Size: 2100000 blocks
Stripe 0:
Device Start Block Dbase State Hot Spare
c0t1d0s1 0 No Okay


d4: Mirror
Submirror 0: d14
State: Needs maintenance
Submirror 1: d24
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 2100000 blocks

d14: Submirror of d4
State: Needs maintenance
Invoke: metareplace d4 c0t0d0s4
Size: 2100000 blocks
Stripe 0:
Device Start Block Dbase State Hot Spare
c0t0d0s4 0 No Maintenance


d24: Submirror of d4
State: Okay
Size: 2100000 blocks
Stripe 0:
Device Start Block Dbase State Hot Spare
c0t1d0s4 0 No Okay

You need to write down this information:
d10 -- c0t0d0s0
d11 -- c0t0d0s1
d14 -- c0t0d0s4
These three devices are the ones that need to be replaced.
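
A quick way to collect this list, assuming (as in the output above) that metastat prints an Invoke: line for every submirror that needs maintenance:

# metastat | grep Invoke
    Invoke: metareplace d0 c0t0d0s0
    Invoke: metareplace d1 c0t0d0s1
    Invoke: metareplace d4 c0t0d0s4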


3 Replace the failed disk and restore the DiskSuite configuration

Pull out the bad disk, insert the new one, and format the new disk with the same
partition table as the mirror disk (c0t1d0). One command does this conveniently:
# prtvtoc /dev/rdsk/c0t1d0s2 | fmthard -s - /dev/rdsk/c0t0d0s2
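
To double-check that the label was copied correctly, print the new disk's VTOC and compare it with the mirror's:

# prtvtoc /dev/rdsk/c0t0d0s2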

But do not forget this step: install the boot block on the new disk:
# installboot /usr/platform/sun4u/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0
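
The bootblk path above is for the sun4u platform. If you are not sure which platform directory applies, the usual idiom from the Solaris documentation substitutes it at run time:

# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0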

Next, create two state database replicas on the new disk:
# metadb -a -c 2 /dev/dsk/c0t0d0s7
# metadb -i
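
metadb -i should now list four replicas again, two on each disk. A quick check, assuming the replicas live on slice 7 as in this example:

# metadb | grep c0t0d0s7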

The last step is to restore DiskSuite's original mirror configuration:
# metareplace -e d0 c0t0d0s0
d0: device c0t0d0s0 is enabled

# metareplace -e d1 c0t0d0s1
d1: device c0t0d0s1 is enabled

# metareplace -e d4 c0t0d0s4
d4: device c0t0d0s4 is enabled

Once this is done, you can watch with metastat as the new submirrors begin to resynchronize.
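
One simple way to follow the progress, assuming your SDS release prints a "Resync in progress" line in metastat output (the exact wording can vary between releases):

# metastat | grep -i "resync in progress"

Re-run it periodically until all three mirrors report Okay.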


This procedure is easy to test: if you have DiskSuite installed with a mirrored system disk,
you can walk through the steps above yourself, provided you have made a good backup first.



A supplementary note:
=========================================================
Starting with version 4.2.1, DiskSuite can boot with exactly 50% of the state database
replicas available, which means the manual intervention described above for a single failed disk can be avoided.

This new SDS feature requires adding a parameter to the file /etc/system:
# echo "set md:mirrored_root_flag=1" >> /etc/system
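
To confirm the line really was appended (a typo here would silently disable the feature):

# grep mirrored_root_flag /etc/system
set md:mirrored_root_flag=1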

For example: with two mirrored disks carrying two state database replicas each, losing one
disk leaves exactly 50% of the replicas. As long as you are running SDS 4.2.1 and
/etc/system has been set as above, the system will still boot normally.

This new feature is important and very useful: when mirroring with SDS you no longer have
to worry about how many replicas are best placed on each side. The manual calls it:

----- "50% boot" behaviour of DiskSuite 4.2.1

===============================================================
