This post is quite simple, but it may help some of you in the process of discovering and adding new disks to ASM diskgroups on Linux servers. I tend to treat my blog as a place to keep procedures, commands and howtos for future reference. My memory is not perfect; the more I learn, the more I forget, so I need a place to keep it all alive somehow.
I have ordered a 50GB disk for my Oracle Restart / single-instance DB with ASM environment and will add it to the +DATA diskgroup. Let's go through the steps with short comments.
To discover new disks I like to leverage the sg3_utils RPM package (it must be installed). It contains the rescan-scsi-bus.sh script (the name is self-explanatory). I run it to see what is new or changed.
[root@server.dba24.pl ~]# rescan-scsi-bus.sh
Scanning SCSI subsystem for new devices
Scanning host 0 for all SCSI target IDs, all LUNs
 Scanning for device 0 0 0 0 ...
OLD: Host: scsi0 Channel: 00 Id: 00 Lun: 00
     Vendor: HITACHI  Model: OPEN-V           Rev: 8001
     Type:   Direct-Access                    ANSI SCSI revision: 03
 Scanning for device 0 0 0 1 ...
OLD: Host: scsi0 Channel: 00 Id: 00 Lun: 01
     Vendor: HITACHI  Model: OPEN-V           Rev: 8001
     Type:   Direct-Access                    ANSI SCSI revision: 03
 Scanning for device 0 0 0 2 ...
NEW: Host: scsi0 Channel: 00 Id: 00 Lun: 02
     Vendor: HITACHI  Model: OPEN-V           Rev: 8001
     Type:   Direct-Access                    ANSI SCSI revision: 03
Scanning host 1 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
 Scanning for device 1 0 0 0 ...
OLD: Host: scsi1 Channel: 00 Id: 00 Lun: 00
     Vendor: HP       Model: LOGICAL VOLUME   Rev: 6.64
     Type:   Direct-Access                    ANSI SCSI revision: 05
 Scanning for device 1 3 0 0 ...
OLD: Host: scsi1 Channel: 03 Id: 00 Lun: 00
     Vendor: HP       Model: P420i            Rev: 6.64
     Type:   RAID                             ANSI SCSI revision: 05
Scanning host 2 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
Scanning host 3 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
Scanning host 4 for all SCSI target IDs, all LUNs
 Scanning for device 4 0 0 0 ...
OLD: Host: scsi4 Channel: 00 Id: 00 Lun: 00
     Vendor: HITACHI  Model: OPEN-V           Rev: 8001
     Type:   Direct-Access                    ANSI SCSI revision: 03
 Scanning for device 4 0 0 1 ...
OLD: Host: scsi4 Channel: 00 Id: 00 Lun: 01
     Vendor: HITACHI  Model: OPEN-V           Rev: 8001
     Type:   Direct-Access                    ANSI SCSI revision: 03
 Scanning for device 4 0 0 2 ...
NEW: Host: scsi4 Channel: 00 Id: 00 Lun: 02
     Vendor: HITACHI  Model: OPEN-V           Rev: 8001
     Type:   Direct-Access                    ANSI SCSI revision: 03
2 new or changed device(s) found.
0 remapped or resized device(s) found.
0 device(s) removed.
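By the way, if sg3_utils is not installed and you cannot add packages, the same scan can be triggered through the generic sysfs interface. A minimal sketch (this is the standard kernel interface, nothing specific to this server):

# Ask every SCSI host adapter to rescan all channels, targets and LUNs;
# "- - -" is the wildcard triple: channel, target id, lun.
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "${host}/scan"
done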
Notice these lines: two devices (LUNs) have been discovered. In reality this is one disk, visible through two SAN paths.
 Scanning for device 0 0 0 2 ...
NEW: Host: scsi0 Channel: 00 Id: 00 Lun: 02
     Vendor: HITACHI  Model: OPEN-V           Rev: 8001
     Type:   Direct-Access                    ANSI SCSI revision: 03
.....
NEW: Host: scsi4 Channel: 00 Id: 00 Lun: 02
     Vendor: HITACHI  Model: OPEN-V           Rev: 8001
     Type:   Direct-Access                    ANSI SCSI revision: 03
.....
2 new or changed device(s) found.
Let's see how it looks when listed with the multipath command:
[root@server.dba24.pl ~]# multipath -ll
asm_1065_200G_data_01 (365060e4007e3e3000030e3e300001065) dm-7 HITACHI ,OPEN-V
size=200G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 0:0:0:1 sdy 65:128 active ready running
  `- 4:0:0:1 sdj 8:144  active ready running
mpathl (365060e4007e3e3000030e3e300000099) dm-9 HITACHI ,OPEN-V
size=50G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 0:0:0:2 sdv 65:80 active ready running
  `- 4:0:0:2 sdw 65:96 active ready running
The new disk with LUN ID 2 has been discovered on scsi0 and scsi4. But... oh yes, I should have ordered 200G instead of 50G.
I need to call the storage guy again.
After a few hours he claims the disk has been resized. I think we could simply use the rescan script again, but let's check how Red Hat says to do it:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/DM_Multipath/online_device_resize.html
Rescan the devices:
[root@server.dba24.pl ~]# echo 1 > /sys/block/sdv/device/rescan
[root@server.dba24.pl ~]# echo 1 > /sys/block/sdw/device/rescan
Resize your multipath device by executing the multipathd resize command:
[root@server.dba24.pl ~]# multipathd -k'resize map mpathl'
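The -k'...' syntax just sends a single command to the running daemon. If you prefer, the same can be done from multipathd's interactive shell:

# Interactive equivalent of the one-liner above
multipathd -k
multipathd> resize map mpathl
multipathd> exit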
Now let's check that it has been resized:
[root@server.dba24.pl ~]# multipath -ll
asm_1065_200G_data_01 (365060e4007e3e3000030e3e300001065) dm-7 HITACHI ,OPEN-V
size=200G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 0:0:0:1 sdy 65:128 active ready running
  `- 4:0:0:1 sdj 8:144  active ready running
mpathl (365060e4007e3e3000030e3e300000099) dm-9 HITACHI ,OPEN-V
size=200G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 0:0:0:2 sdv 65:80 active ready running
  `- 4:0:0:2 sdw 65:96 active ready running
Bingo. As you can see above, the disk has been successfully resized.
Now let's add the following lines to /etc/multipath.conf (inside the multipaths section) to give the new LUN a friendly alias:
multipath {
        wwid  365060e4007e3e3000030e3e300000099
        alias asm_0099_200G_data_02
}
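If you want to double-check the WWID before editing the file, you can read it straight from one of the SCSI paths (sdv from the listing above; on RHEL 7 scsi_id lives under /usr/lib/udev):

# Print the WWID of the LUN behind /dev/sdv; it should match the
# 365060e... identifier shown by multipath -ll.
/usr/lib/udev/scsi_id --whitelisted --device=/dev/sdv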
Reload the multipathd configuration and list the LUNs again:
[root@server.dba24.pl ~]# service multipathd restart
Redirecting to /bin/systemctl restart multipathd.service
[root@server.dba24.pl ~]# multipath -ll
asm_1065_200G_data_01 (365060e4007e3e3000030e3e300001065) dm-7 HITACHI ,OPEN-V
size=200G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 0:0:0:1 sdy 65:128 active ready running
  `- 4:0:0:1 sdj 8:144  active ready running
asm_0099_200G_data_02 (365060e4007e3e3000030e3e300000099) dm-9 HITACHI ,OPEN-V
size=200G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 0:0:0:2 sdv 65:80 active ready running
  `- 4:0:0:2 sdw 65:96 active ready running
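A restart works, but on a busy system you may prefer asking the running daemon to re-read its configuration instead:

# Reload /etc/multipath.conf without restarting multipathd.
multipathd -k"reconfigure"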
Perfect. Just in case, let's check whether there are any partitions on the disk:
[root@server.dba24.pl ~]# fdisk -l /dev/mapper/asm_0099_200G_data_02

Disk /dev/mapper/asm_0099_200G_data_02: 214.7 GB, 214748364800 bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Empty, so it is safe to create a new partition. This is my way of preparing disks for ASM. I am not sure it is still required, but some of my old notes claim it is good to create a partition on a LUN to protect it from being accidentally overwritten (the partition table marks the device as in use). As I remember it was required on Linux only, but I am not sure at the moment.
[root@server.dba24.pl ~]# parted /dev/mapper/asm_0099_200G_data_02 mklabel gpt mkpart primary "1 -1"
[root@server.dba24.pl ~]# fdisk -l /dev/mapper/asm_0099_200G_data_02

Disk /dev/mapper/asm_0099_200G_data_02: 214.7 GB, 214748364800 bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x2e234de2

                             Device Boot      Start         End      Blocks   Id  System
/dev/mapper/asm_0099_200G_data_02p1           2048   419430399   209714176   83  Linux
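One caveat: on multipath devices the p1 mapping under /dev/mapper does not always show up right after partitioning. If it is missing, re-read the partition table; either of these should do (kpartx ships with device-mapper-multipath):

# Create the device-mapper entry for the new partition...
kpartx -a /dev/mapper/asm_0099_200G_data_02
# ...or ask the kernel to re-read the partition table.
partprobe /dev/mapper/asm_0099_200G_data_02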
To be sure the device we are adding to ASM has the proper owner and group (grid:asmadmin), let's check it:
[root@server.dba24.pl ~]# ls -l /dev/mapper/asm_0099_200G_data_02p1
lrwxrwxrwx 1 root root 8 Sep  1 19:28 /dev/mapper/asm_0099_200G_data_02p1 -> ../dm-10
[root@server.dba24.pl ~]# ls -l /dev/dm-10
brw-rw---- 1 grid asmadmin 253, 52 Sep  9 16:28 /dev/dm-10
Permissions are OK. How do I make it get the owner:group automatically? By configuring udev here:
[root@server.dba24.pl ~]# cat /etc/udev/rules.d/99-multipath-oracle.rules
KERNEL=="dm-*", ENV{DM_NAME}=="asm*p1", OWNER="grid", GROUP="asmadmin", MODE="0660"
Any device-mapper device whose DM name matches asm*p1 gets the appropriate attributes automatically.
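After adding or changing a rule you can apply it to already existing devices without a reboot, using the standard udev tooling:

# Re-read the rules files and replay "change" events for block devices
# so the new OWNER/GROUP/MODE settings take effect immediately.
udevadm control --reload-rules
udevadm trigger --type=devices --subsystem-match=block --action=change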
OK, we have everything we need now. Log in to the ASM instance and add the disk to +DATA.
First, let's check whether the disk is visible as a CANDIDATE disk (ready to be added):
[grid@server.dba24.pl ~]$ . .+ASM
[grid@server.dba24.pl ~]$ sqlplus / as sysasm

SQL*Plus: Release 12.1.0.2.0 Production on Thu Sep 1 19:47:25 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option

SQL> select path from v$asm_disk where header_status='CANDIDATE';

PATH
--------------------------------------------------------------------------------
/dev/mapper/asm_0099_200G_data_02p1
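If the disk does not show up as a CANDIDATE, the usual suspect is the discovery string; check that it covers /dev/mapper (this environment is assumed to use a /dev/mapper/* pattern):

SQL> show parameter asm_diskstring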
Visible, great! Let's finally add it!
SQL> alter diskgroup data add disk '/dev/mapper/asm_0099_200G_data_02p1' rebalance power 3;

Diskgroup altered.

SQL> select * from v$asm_operation;

GROUP_NUMBER OPERA PASS      STAT      POWER     ACTUAL      SOFAR   EST_WORK
------------ ----- --------- ---- ---------- ---------- ---------- ----------
  EST_RATE EST_MINUTES ERROR_CODE                                       CON_ID
---------- ----------- -------------------------------------------- ----------
           1 REBAL REBALANCE RUN          3          3       1778     156685
     15292          10                                                       0

           1 REBAL COMPACT   WAIT         3          3          0          0
         0           0
The rebalance process with power 3 has begun.
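Once the REBAL row disappears from v$asm_operation (EST_MINUTES down to 0), a quick sanity check confirms the diskgroup grew and the new disk is online. A sketch of the queries I would run:

-- Diskgroup capacity after the rebalance.
SQL> select name, state, total_mb, free_mb from v$asm_diskgroup where name = 'DATA';

-- The new disk should be CACHED / ONLINE.
SQL> select path, mount_status, mode_status from v$asm_disk where group_number = 1;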
That's more or less it!