This is the seventh part of my RAC lab series. I will show you, step by step, how to prepare the Openfiler servers so that Oracle 12c GI and RAC can later be properly installed on the OEL 7.x nodes.

RAC lab Part 1 – Installing the Ubuntu 16.04 desktop
RAC lab Part 2 – Virtualbox installation and configuration
RAC lab Part 3 – VMs configuration
RAC lab Part 4 – Installing the Openfiler software
RAC lab Part 5 – Installing OEL 7.x on VM
RAC lab Part 6 – OEL 7.x configuration for Oracle 12c GI & RAC DB installation
RAC lab Part 7 – Openfilers configuration and cloning
RAC lab Part 8 – OEL 7.x prepare storage
RAC lab Part 9 – Clone first RAC node as 2nd node and prepare config.
RAC lab Part 10 – Installing Grid Infrastructure
RAC lab Part 11 – Installing Database Software
RAC lab Part 12 – Creating a Container Database

 
 
Documentation and other sources
 
 
 
Basic openfiler1 configuration
 

I am not going to install the VirtualBox Guest Additions on Openfiler, because they do not compile cleanly on its OS version, so I need another source of time synchronization. You can sync Openfiler's OS clock with external time sources such as time.nist.gov, 0.pool.ntp.org, 1.pool.ntp.org or 2.pool.ntp.org.
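If you want to check from the shell that a time server is reachable before configuring anything in the GUI, a quick query like the one below should do (a minimal sketch, assuming the ntpdate utility is present on the Openfiler appliance):

# query the time server without setting the clock (-q = query only)
[root@openfiler1 ~]# ntpdate -q time.nist.gov

# show the resulting system time
[root@openfiler1 ~]# date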

  • Set time synchronization on openfiler1

Log in to Openfiler's web GUI at https://192.168.1.10:446 and go to System -> Clock Setup.

 

screenshot-from-2016-11-23-21-12-47

 
  • Set “time.nist.gov” in the server field and press Setup synchronization. Time should be set now. You can also pick your timezone.

1-openfiler1beforeclonesettimesynchronization

 
  • Set up /etc/hosts. First of all, the file should include the RAC nodes' IPs on the storage network – we need them for network communication and for the network ACLs we will define later. A quick reachability check follows the listing.
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       openfiler1.dba24.pl openfiler1 localhost.localdomain localhost openfiler1
::1             localhost6.localdomain6 localhost6

# Openfilers

192.168.1.10 openfiler1
192.168.1.11 openfiler2
192.168.1.12 openfiler3

192.168.10.10 openfiler1-spriv
192.168.10.11 openfiler2-spriv
192.168.10.12 openfiler3-spriv

192.168.1.100 macieksrv.dba24.pl macieksrv

# storage network
192.168.10.21 oel7rac1n1-spriv.dba24.pl oel7rac1n1-spriv
192.168.10.23 oel7rac1n2-spriv.dba24.pl oel7rac1n2-spriv
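A quick reachability test of the storage-network names (here only oel7rac1n1 exists yet, so that is the one worth pinging) confirms the entries are usable:

# ping the first RAC node over the storage network using the name from /etc/hosts
[root@openfiler1 ~]# ping -c 2 oel7rac1n1-spriv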
 
 
  • Disable DNS queries in SSHD

Add the following line at the end of /etc/ssh/sshd_config and restart the sshd daemon

UseDNS no
[root@openfiler1 ~] cat /etc/ssh/sshd_config|grep DNS
UseDNS no

[root@openfiler1 ~] /etc/init.d/sshd restart
Stopping sshd:                                             [  OK  ]
Starting sshd:                                             [  OK  ]
 
  • Enable the iSCSI target service via the GUI (click to enlarge)

2-enabling_services-before-clone-screenshot

Check that the iscsi-target service is running

[root@openfiler1 /]# service iscsi-target status
ietd (pid 23622) is running...
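It is also worth making sure the target service comes back after a reboot. On Openfiler's SysV-style init this can be checked and, if needed, enabled with chkconfig (a small sketch, assuming the chkconfig tooling is present on the appliance):

# list the runlevels in which iscsi-target starts
[root@openfiler1 /]# chkconfig --list iscsi-target

# enable it at boot if it is off
[root@openfiler1 /]# chkconfig iscsi-target on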
 
  • Add the Oracle RAC nodes to the network access list so that Openfiler can be accessed from our RAC nodes (click to enlarge)

     

    3-enable-network-access-for-rac-nodes-before-clone-screenshot-from-2016-11-28-15-30-25_networkaccessforracnodes

 
 
Prepare storage for RAC nodes
 
 
  • At the very beginning we need to decide exactly what our openfilers should provide for the RAC nodes.

We need an iSCSI target on each of the three openfilers, through which the iSCSI LUNs will be presented:

 
Openfiler Name   iSCSI Target Name
openfiler1       iqn.2006-01.com.openfiler:op1.rac1.asm
openfiler2       iqn.2006-01.com.openfiler:op2.rac1.asm
openfiler3       iqn.2006-01.com.openfiler:op3.rac1.asm
 

We need LUNs as in the following table; openfiler3 is going to serve only the 3rd voting disk.

 
Volume Name        Volume Description                     Volume Size   LUN type   OP1   OP2   OP3
rac1-asm-crs-01    RAC1 CRS volume 1 (OCR, VOTE, OCRDG)   10016 MB      iSCSI       x
rac1-asm-crs-02    RAC1 CRS volume 2 (OCR, VOTE, OCRDG)   10016 MB      iSCSI             x
rac1-asm-crs-03    RAC1 CRS volume 3 (3rd VOTE)           512 MB        iSCSI                   x
rac1-asm-data-01   RAC1 DATA 01 volume                    2016 MB       iSCSI       x     x
rac1-asm-data-02   RAC1 DATA 02 volume                    2016 MB       iSCSI       x     x
rac1-asm-data-03   RAC1 DATA 03 volume                    2016 MB       iSCSI       x     x
rac1-asm-data-04   RAC1 DATA 04 volume                    2016 MB       iSCSI       x     x
rac1-asm-fra-01    RAC1 FRA 01 volume                     1024 MB       iSCSI       x     x
rac1-asm-fra-02    RAC1 FRA 02 volume                     1024 MB       iSCSI       x     x
rac1-asm-fra-03    RAC1 FRA 03 volume                     1024 MB       iSCSI       x     x
rac1-asm-fra-04    RAC1 FRA 04 volume                     1024 MB       iSCSI       x     x
 
 
  • I have clicked the Volumes tab. As you can see there are no volume groups and not even any physical volumes yet. Openfiler uses LVM as its base tool for storage management. The screen offers a direct link to the page that helps create a physical volume (click to enlarge)

You can also navigate to this page via the Volumes section -> Block Devices

 

4-createnewphysicalvolumeurlbeforeclone

 
  • Mark the new partition on the disk as Primary – Physical Volume and set its boundaries to cover the whole disk (click to enlarge)
 

5-create-partition-on-empty-disk-viewofemptypartitions-list-in-block-device-_dev_sdbbeforeclone

 
  • The new partition has been added (click to enlarge)
 

6-new-partition-added-to-devsdb-beforeclone

 

Verify at the OS level that the PV has been added

 
[root@openfiler1 ~] pvdisplay 
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               -
  PV Size               47.68 GiB / not usable 26.54 MiB
  Allocatable           yes 
  PE Size               32.00 MiB
  Total PE              1525
  Free PE               832
  Allocated PE          693
  PV UUID               2MWlwa-5fTL-qg97-d7Eq-zPhS-dDeD-qI7BfN

It shows the same information as the GUI.

 
  • Navigate to Volumes -> Volume Groups and create the racvg group. Select the freshly created PV as the base PV for the volume group (click to enlarge)
 

7-create-new-volume-group-racvg_beforeaddition-beforeclone

 
  • The racvg volume group has been created (click to enlarge)
 

8-new-diskgroup-racvg-after_additionbeforeclone

 
  • Our well-known PV is its one and only member PV (click to enlarge)
 

9-show-pvs-of-racvg-beforeclone
Let's see how it looks at the OS level

 
[root@openfiler1 ~] vgdisplay
  --- Volume group ---
  VG Name               racvg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  14
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                9
  Open LV               9
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               47.66 GiB
  PE Size               32.00 MiB
  Total PE              1525
  Alloc PE / Size       693 / 21.66 GiB
  Free  PE / Size       832 / 26.00 GiB
  VG UUID               RFeoeP-t66d-sj11-dKOu-2iwf-Y3zS-Orm1BS
 
  • Create all the volumes listed in the table above. An example of adding the first volume, rac1-asm-crs-01, is shown below; a CLI equivalent is sketched after the screenshot (click to enlarge)
 

10-add-new-volume-rac1_crs1_asm_beforeaddingbeforeclone
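For reference, everything the GUI does here maps to plain LVM commands. A rough CLI equivalent for the physical volume, the volume group and the first two volumes is sketched below; in this lab I stick to the web GUI, because it also keeps Openfiler's own volume metadata in sync, so treat this only as an illustration of what happens underneath:

# create the physical volume and the racvg volume group (32 MiB extents, as pvdisplay showed)
pvcreate /dev/sdb1
vgcreate -s 32M racvg /dev/sdb1

# create the CRS volume and the first DATA volume from the table above
lvcreate -L 10016M -n rac1-asm-crs-01 racvg
lvcreate -L 2016M -n rac1-asm-data-01 racvg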

 
  • When I finished, the LUN list looked like the one below (click to enlarge)
 

11-logicalvolumesprepqared_openfiler1beforeclone

 

Check how it shows up at the OS level

 
[root@openfiler1 ~] lvscan
  ACTIVE            '/dev/racvg/rac1-asm-crs-01' [9.78 GiB] inherit
  ACTIVE            '/dev/racvg/rac1-asm-data-01' [1.97 GiB] inherit
  ACTIVE            '/dev/racvg/rac1-asm-data-02' [1.97 GiB] inherit
  ACTIVE            '/dev/racvg/rac1-asm-data-03' [1.97 GiB] inherit
  ACTIVE            '/dev/racvg/rac1-asm-data-04' [1.97 GiB] inherit
  ACTIVE            '/dev/racvg/rac1-asm-fra-01' [1.00 GiB] inherit
  ACTIVE            '/dev/racvg/rac1-asm-fra-02' [1.00 GiB] inherit
  ACTIVE            '/dev/racvg/rac1-asm-fra-03' [1.00 GiB] inherit
  ACTIVE            '/dev/racvg/rac1-asm-fra-04' [1.00 GiB] inherit

# or with more details
[root@openfiler1 ~] lvdisplay
  --- Logical volume ---
  LV Name                /dev/racvg/rac1-asm-crs-01
  VG Name                racvg
  LV UUID                gHzQ4T-3d48-kRTy-Bsup-gkgE-kxNk-S2WCtr
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                9.78 GiB
  Current LE             313
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/racvg/rac1-asm-data-01
  VG Name                racvg
  LV UUID                CnLO8y-sO0N-YxjQ-14nb-aQAY-qR1U-Zhqvdc
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.97 GiB
  Current LE             63
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Logical volume ---
  LV Name                /dev/racvg/rac1-asm-data-02
  VG Name                racvg
  LV UUID                s162a8-H7GV-o03x-T2oT-8oN3-GSo2-NwlL0B
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.97 GiB
  Current LE             63
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

  --- Logical volume ---
  LV Name                /dev/racvg/rac1-asm-data-03
  VG Name                racvg
  LV UUID                Qd2XsS-2VZb-ihpV-8vFQ-8POu-TXTV-nWQl07
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.97 GiB
  Current LE             63
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3

  --- Logical volume ---
  LV Name                /dev/racvg/rac1-asm-data-04
  VG Name                racvg
  LV UUID                tfpQQH-ivvO-LInD-sxSe-INMl-fxZV-Os5peK
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.97 GiB
  Current LE             63
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4

  --- Logical volume ---
  LV Name                /dev/racvg/rac1-asm-fra-01
  VG Name                racvg
  LV UUID                jMoj2g-Adx8-7blO-TnM4-Vt3W-foEq-viR5RA
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.00 GiB
  Current LE             32
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:5

  --- Logical volume ---
  LV Name                /dev/racvg/rac1-asm-fra-02
  VG Name                racvg
  LV UUID                6hGBNH-moz7-UTBy-kyEb-fGvG-0BUA-C0Wudc
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.00 GiB
  Current LE             32
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6

  --- Logical volume ---
  LV Name                /dev/racvg/rac1-asm-fra-03
  VG Name                racvg
  LV UUID                cASlD2-QGI2-aFix-GIjE-OWzr-gFNE-fpCr5C
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.00 GiB
  Current LE             32
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:7

  --- Logical volume ---
  LV Name                /dev/racvg/rac1-asm-fra-04
  VG Name                racvg
  LV UUID                hqre5d-5Fod-1aDt-6XJG-XcZH-j2G2-tjjLew
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.00 GiB
  Current LE             32
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:8

 
  • Create the iSCSI target, which you can think of as the access point through which iSCSI initiators (servers) reach the LUNs (click to enlarge)
 

12-create-iscsitarget-beforeclone

 
  • iSCSI target properties can be modified, but we don’t need to touch anything here (click to enlarge)
 

13-optionsvisibleafteriscsciadditionbeforeclone

 
  • Map LUNs to the iSCSI target (click to enlarge)
 

14-maplunsiscitargetlunmappingsonlyonebeforeclone

 
  • Set network ACLs to allow access from RAC nodes to this target (click to enlarge)
 

15-set-networkaclsforiscsitargetiscisionlyonenetworkaclsbeforeclone

 
  • Set the user and password – these credentials will allow the servers to access the LUNs via the iSCSI target (click to enlarge)
 

16-settingchapforrac1op1targetbeforeclone

 

Let's see how it looks at the system level

 
[root@openfiler1 ~] cat /etc/ietd.conf
#####   WARNING!!! - This configuration file generated by Openfiler. DO NOT MANUALLY EDIT.  #####  


Target iqn.2006-01.com.openfiler:op1.rac1.asm
        HeaderDigest None
        DataDigest None
        MaxConnections 1
        InitialR2T Yes
        ImmediateData No
        MaxRecvDataSegmentLength 131072
        MaxXmitDataSegmentLength 131072
        MaxBurstLength 262144
        FirstBurstLength 262144
        DefaultTime2Wait 2
        DefaultTime2Retain 20
        MaxOutstandingR2T 8
        DataPDUInOrder Yes
        DataSequenceInOrder Yes
        ErrorRecoveryLevel 0
        Lun 0 Path=/dev/racvg/rac1-asm-crs-01,Type=blockio,ScsiSN=gHzQ4T-3d48-kRTy,ScsiId=gHzQ4T-3d48-kRTy,IOMode=wt
        Lun 1 Path=/dev/racvg/rac1-asm-data-01,Type=blockio,ScsiSN=CnLO8y-sO0N-YxjQ,ScsiId=CnLO8y-sO0N-YxjQ,IOMode=wt
        Lun 2 Path=/dev/racvg/rac1-asm-data-02,Type=blockio,ScsiSN=s162a8-H7GV-o03x,ScsiId=s162a8-H7GV-o03x,IOMode=wt
        Lun 3 Path=/dev/racvg/rac1-asm-data-03,Type=blockio,ScsiSN=Qd2XsS-2VZb-ihpV,ScsiId=Qd2XsS-2VZb-ihpV,IOMode=wt
        Lun 4 Path=/dev/racvg/rac1-asm-data-04,Type=blockio,ScsiSN=tfpQQH-ivvO-LInD,ScsiId=tfpQQH-ivvO-LInD,IOMode=wt
        Lun 5 Path=/dev/racvg/rac1-asm-fra-01,Type=blockio,ScsiSN=jMoj2g-Adx8-7blO,ScsiId=jMoj2g-Adx8-7blO,IOMode=wt
        Lun 6 Path=/dev/racvg/rac1-asm-fra-02,Type=blockio,ScsiSN=6hGBNH-moz7-UTBy,ScsiId=6hGBNH-moz7-UTBy,IOMode=wt
        Lun 7 Path=/dev/racvg/rac1-asm-fra-03,Type=blockio,ScsiSN=cASlD2-QGI2-aFix,ScsiId=cASlD2-QGI2-aFix,IOMode=wt
        Lun 8 Path=/dev/racvg/rac1-asm-fra-04,Type=blockio,ScsiSN=hqre5d-5Fod-1aDt,ScsiId=hqre5d-5Fod-1aDt,IOMode=wt

## or look here
[root@openfiler1 ~] cat /proc/net/iet/volume
tid:1 name:iqn.2006-01.com.openfiler:op1.rac1.asm
	lun:0 state:0 iotype:blockio iomode:wt path:/dev/racvg/rac1-asm-crs-01
	lun:1 state:0 iotype:blockio iomode:wt path:/dev/racvg/rac1-asm-data-01
	lun:2 state:0 iotype:blockio iomode:wt path:/dev/racvg/rac1-asm-data-02
	lun:3 state:0 iotype:blockio iomode:wt path:/dev/racvg/rac1-asm-data-03
	lun:4 state:0 iotype:blockio iomode:wt path:/dev/racvg/rac1-asm-data-04
	lun:5 state:0 iotype:blockio iomode:wt path:/dev/racvg/rac1-asm-fra-01
	lun:6 state:0 iotype:blockio iomode:wt path:/dev/racvg/rac1-asm-fra-02
	lun:7 state:0 iotype:blockio iomode:wt path:/dev/racvg/rac1-asm-fra-03
	lun:8 state:0 iotype:blockio iomode:wt path:/dev/racvg/rac1-asm-fra-04
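
IET also exposes the active initiator sessions, which will come in handy later to confirm that a RAC node has actually logged in (assuming the standard IET proc interface):

# no sessions yet - only the target itself is listed until an initiator logs in
[root@openfiler1 ~] cat /proc/net/iet/session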
 
 
  • Access is denied to all initiators by default after any configuration change – BUG
 

First check allowed initiators

 

# First take a look at allowed IPs
[root@openfiler1 iet]# cat /etc/initiators.allow

# PLEASE DO NOT MODIFY THIS CONFIGURATION FILE!
#       This configuration file was autogenerated
#       by Openfiler. Any manual changes will be overwritten
#       Generated at: Tue Nov 29 11:42:06 CET 2016

iqn.2006-01.com.openfiler:op1.rac1.asm  192.168.10.21/24, 192.168.10.23/24

# End of Openfiler configuration

Now look at the deny list. As you can see, all addresses have been denied access to the openfiler's target

[root@openfiler1 iet]# cat /etc/initiators.deny
# PLEASE DO NOT MODIFY THIS CONFIGURATION FILE!
#       This configuration file was autogenerated
#       by Openfiler. Any manual changes will be overwritten
#       Generated at: Tue Nov 29 11:42:06 CET 2016
iqn.2006-01.com.openfiler:op1.rac1.asm ALL

# End of Openfiler configuration
 

To fix this, comment out the line containing ALL

 
[root@openfiler1 iet]# cat /etc/initiators.deny

# PLEASE DO NOT MODIFY THIS CONFIGURATION FILE!
#       This configuration file was autogenerated
#       by Openfiler. Any manual changes will be overwritten
#       Generated at: Tue Nov 29 11:42:06 CET 2016
#iqn.2006-01.com.openfiler:op1.rac1.asm ALL

# End of Openfiler configuration
 

Unfortunately this file is regenerated each time the configuration is modified, so remember to edit /etc/initiators.deny again after every change.
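If you get tired of editing the file by hand, a one-liner along these lines (hypothetical, matching the target name used here) can re-comment the offending entry after each GUI change:

# comment out the blanket deny for our target after Openfiler regenerates the file
sed -i 's|^iqn.2006-01.com.openfiler:op1.rac1.asm ALL|#&|' /etc/initiators.deny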

 

Openfiler1 is ready! Let's discover the disks on oel7rac1n1 (our only existing RAC node at this point).

 
 
iSCSI storage settings on OEL 7.x
 
 

Now that the LUNs are ready on openfiler1, we need to discover and configure them at the operating system level.

  • Login to oel7rac1n1 and check its initiator name
[root@oel7rac1n1 ~] cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1988-12.com.oracle:e942b9ba36f
 
  • Discover targets presented by openfiler1
[root@oel7rac1n1 log]# iscsiadm -m discovery -t sendtargets -p openfiler1-spriv
192.168.10.10:3260,1 iqn.2006-01.com.openfiler:op1.rac1.asm
192.168.1.10:3260,1 iqn.2006-01.com.openfiler:op1.rac1.asm

The command also starts the iscsid service if it is not already running.
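On OEL 7 the iSCSI client is managed by systemd; to make sure the daemon and the automatic node logins survive reboots, you can enable the relevant units (a sketch, assuming the stock iscsi-initiator-utils unit names):

# iscsid maintains the sessions, iscsi performs the automatic logins at boot
[root@oel7rac1n1 ~] systemctl enable iscsid iscsi
[root@oel7rac1n1 ~] systemctl status iscsid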

 

The following command displays the target information that is now stored in the discovery database:

 
[root@oel7rac1n1 log]# iscsiadm -m discoverydb -t st -p openfiler1-spriv
# BEGIN RECORD 6.2.0.873-35
discovery.startup = manual
discovery.type = sendtargets
discovery.sendtargets.address = openfiler1-spriv
discovery.sendtargets.port = 3260
discovery.sendtargets.auth.authmethod = None
discovery.sendtargets.auth.username = <empty>
discovery.sendtargets.auth.password = <empty>
discovery.sendtargets.auth.username_in = <empty>
discovery.sendtargets.auth.password_in = <empty>
discovery.sendtargets.timeo.login_timeout = 15
discovery.sendtargets.use_discoveryd = No
discovery.sendtargets.discoveryd_poll_inval = 30
discovery.sendtargets.reopen_max = 5
discovery.sendtargets.timeo.auth_timeout = 45
discovery.sendtargets.timeo.active_timeout = 30
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
# END RECORD
 
  • In order to see and access the LUNs behind the target we need to log in to it
[root@oel7rac1n1 ~] iscsiadm -m node -T iqn.2006-01.com.openfiler:op1.rac1.asm -p openfiler1-spriv -l
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:op1.rac1.asm, portal: 192.168.10.10,3260] (multiple)
iscsiadm: Could not login to [iface: default, target: iqn.2006-01.com.openfiler:op1.rac1.asm, portal: 192.168.10.10,3260].
iscsiadm: initiator reported error (24 - iSCSI login failed due to authorization failure)
iscsiadm: Could not log into all portals
 

As you may remember, I have set a password for this target. Let's check our config on the initiator side.

 
[root@oel7rac1n1 ~] iscsiadm -m node -p openfiler1-spriv
# BEGIN RECORD 6.2.0.873-35
node.name = iqn.2006-01.com.openfiler:op1.rac1.asm
node.tpgt = 1
node.startup = automatic
node.leading_login = No
iface.hwaddress = <empty>
iface.ipaddress = <empty>
iface.iscsi_ifacename = default
iface.net_ifacename = <empty>
iface.gateway = <empty>
iface.subnet_mask = <empty>
iface.transport_name = tcp
iface.initiatorname = <empty>
iface.state = <empty>
iface.vlan_id = 0
iface.vlan_priority = 0
iface.vlan_state = <empty>
iface.iface_num = 0
iface.mtu = 0
iface.port = 0
iface.bootproto = <empty>
iface.dhcp_alt_client_id_state = <empty>
iface.dhcp_alt_client_id = <empty>
iface.dhcp_dns = <empty>
iface.dhcp_learn_iqn = <empty>
iface.dhcp_req_vendor_id_state = <empty>
iface.dhcp_vendor_id_state = <empty>
iface.dhcp_vendor_id = <empty>
iface.dhcp_slp_da = <empty>
iface.fragmentation = <empty>
iface.gratuitous_arp = <empty>
iface.incoming_forwarding = <empty>
iface.tos_state = <empty>
iface.tos = 0
iface.ttl = 0
iface.delayed_ack = <empty>
iface.tcp_nagle = <empty>
iface.tcp_wsf_state = <empty>
iface.tcp_wsf = 0
iface.tcp_timer_scale = 0
iface.tcp_timestamp = <empty>
iface.redirect = <empty>
iface.def_task_mgmt_timeout = 0
iface.header_digest = <empty>
iface.data_digest = <empty>
iface.immediate_data = <empty>
iface.initial_r2t = <empty>
iface.data_seq_inorder = <empty>
iface.data_pdu_inorder = <empty>
iface.erl = 0
iface.max_receive_data_len = 0
iface.first_burst_len = 0
iface.max_outstanding_r2t = 0
iface.max_burst_len = 0
iface.max_outstanding_r2t = 0
iface.max_burst_len = 0
iface.chap_auth = <empty>
iface.bidi_chap = <empty>
iface.strict_login_compliance = <empty>
iface.discovery_auth = <empty>
iface.discovery_logout = <empty>
node.discovery_address = openfiler1-spriv
node.discovery_port = 3260
node.discovery_type = send_targets
node.session.initial_cmdsn = 0
node.session.initial_login_retry_max = 8
node.session.xmit_thread_priority = -20
node.session.cmds_max = 128
node.session.queue_depth = 32
node.session.nr_sessions = 1
node.session.auth.authmethod = None
node.session.auth.username = <empty>
node.session.auth.password = <empty>
node.session.auth.username_in = <empty>
node.session.auth.password_in = <empty>
node.session.timeo.replacement_timeout = 120
node.session.err_timeo.abort_timeout = 15
node.session.err_timeo.lu_reset_timeout = 30
node.session.err_timeo.tgt_reset_timeout = 30
node.session.err_timeo.host_reset_timeout = 60
node.session.iscsi.FastAbort = Yes
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.session.iscsi.DefaultTime2Retain = 0
node.session.iscsi.DefaultTime2Wait = 2
node.session.iscsi.MaxConnections = 1
node.session.iscsi.MaxOutstandingR2T = 1
node.session.iscsi.ERL = 0
node.conn[0].address = 192.168.10.10
node.conn[0].port = 3260
node.conn[0].startup = manual
node.conn[0].tcp.window_size = 524288
node.conn[0].tcp.type_of_service = 0
node.conn[0].timeo.logout_timeout = 15
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.auth_timeout = 45
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
node.conn[0].iscsi.MaxXmitDataSegmentLength = 0
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
node.conn[0].iscsi.HeaderDigest = None
node.conn[0].iscsi.IFMarker = No
node.conn[0].iscsi.OFMarker = No
# END RECORD
 

Among all the parameters shown above there are three we have to take care of:
- node.session.auth.authmethod
- node.session.auth.password
- node.session.auth.username

They have to be set according to the target settings

 
iscsiadm -m node -p openfiler1-spriv -T iqn.2006-01.com.openfiler:op1.rac1.asm --op update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -p openfiler1-spriv -T iqn.2006-01.com.openfiler:op1.rac1.asm --op update -n node.session.auth.username -v iscsi_rac
iscsiadm -m node -p openfiler1-spriv -T iqn.2006-01.com.openfiler:op1.rac1.asm --op update -n node.session.auth.password -v ***********
 

Now check the config again

 
maciek@oel7rac1n1.dba24.pl ~ $ sudo iscsiadm -m node  -p openfiler1-spriv|grep node.session.auth
node.session.auth.authmethod = CHAP
node.session.auth.username = iscsi_rac
node.session.auth.password = ********
node.session.auth.username_in = <empty>
node.session.auth.password_in = <empty>
 

Authentication method (CHAP), user and password are set. Let's try to log in again.

 
[root@oel7rac1n1 ~] iscsiadm -m node -T iqn.2006-01.com.openfiler:op1.rac1.asm -p openfiler1-spriv -l
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:op1.rac1.asm, portal: 192.168.10.10,3260] (multiple)
Login to [iface: default, target: iqn.2006-01.com.openfiler:op1.rac1.asm, portal: 192.168.10.10,3260] successful.
 

Login successful
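To correlate the new /dev/sd* devices with the LUN numbers, you can also look at the persistent by-path names udev creates for iSCSI devices (a quick check, relying on the default udev rules on OEL 7):

# each symlink encodes the portal IP, the target IQN and the LUN number
[root@oel7rac1n1 ~] ls -l /dev/disk/by-path/ | grep iscsi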

 
  • Check discovered luns
 
[root@oel7rac1n1 ~] iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-870
version 6.2.0.873-35
Target: iqn.2006-01.com.openfiler:op1.rac1.asm (non-flash)
        Current Portal: 192.168.10.10:3260,1
        Persistent Portal: 192.168.10.10:3260,1
                **********
                Interface:
                **********
                Iface Name: default
                Iface Transport: tcp
                Iface Initiatorname: iqn.1988-12.com.oracle:e942b9ba36f
                Iface IPaddress: 192.168.10.21
                Iface HWaddress: <empty>
                Iface Netdev: <empty>
                SID: 4
                iSCSI Connection State: LOGGED IN
                iSCSI Session State: LOGGED_IN
                Internal iscsid Session State: NO CHANGE
                *********
                Timeouts:
                *********
                Recovery Timeout: 120
                Target Reset Timeout: 30
                LUN Reset Timeout: 30
                Abort Timeout: 15
                *****
                CHAP:
                *****
                username: iscsi_rac
                password: ********
                username_in: <empty>
                password_in: ********
                ************************
                Negotiated iSCSI params:
                ************************
                HeaderDigest: None
                DataDigest: None
                MaxRecvDataSegmentLength: 262144
                MaxXmitDataSegmentLength: 131072
                FirstBurstLength: 262144
                MaxBurstLength: 262144
                ImmediateData: No
                InitialR2T: Yes
                MaxOutstandingR2T: 1
                ************************
                Attached SCSI devices:
                ************************
                Host Number: 10 State: running
                scsi10 Channel 00 Id 0 Lun: 0
                        Attached scsi disk sdb          State: running
                scsi10 Channel 00 Id 0 Lun: 1
                        Attached scsi disk sdc          State: running
                scsi10 Channel 00 Id 0 Lun: 2
                        Attached scsi disk sdd          State: running
                scsi10 Channel 00 Id 0 Lun: 3
                        Attached scsi disk sde          State: running
                scsi10 Channel 00 Id 0 Lun: 4
                        Attached scsi disk sdf          State: running
                scsi10 Channel 00 Id 0 Lun: 5
                        Attached scsi disk sdg          State: running
                scsi10 Channel 00 Id 0 Lun: 6
                        Attached scsi disk sdh          State: running
                scsi10 Channel 00 Id 0 Lun: 7
                        Attached scsi disk sdi          State: running
                scsi10 Channel 00 Id 0 Lun: 8
                        Attached scsi disk sdj          State: running

 

Ok! Disks have arrived

 
  • Now reboot the system and check whether the OS logs in to the target and discovers all the disks automatically
 
[root@oel7rac1n1 ~] iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-870
version 6.2.0.873-35
Target: iqn.2006-01.com.openfiler:op1.rac1.asm (non-flash)
        Current Portal: 192.168.10.10:3260,1
        Persistent Portal: 192.168.10.10:3260,1
                **********
                Interface:
                **********
                Iface Name: default
                Iface Transport: tcp
                Iface Initiatorname: iqn.1988-12.com.oracle:e942b9ba36f
                Iface IPaddress: 192.168.10.21
                Iface HWaddress: <empty>
                Iface Netdev: <empty>
                SID: 1
                iSCSI Connection State: LOGGED IN
                iSCSI Session State: LOGGED_IN
                Internal iscsid Session State: NO CHANGE
                *********
                Timeouts:
                *********
                Recovery Timeout: 120
                Target Reset Timeout: 30
                LUN Reset Timeout: 30
                Abort Timeout: 15
                *****
                CHAP:
                *****
                username: iscsi_rac
                password: ********
                username_in: <empty>
                password_in: ********
                ************************
                Negotiated iSCSI params:
                ************************
                HeaderDigest: None
                DataDigest: None
                MaxRecvDataSegmentLength: 262144
                MaxXmitDataSegmentLength: 131072
                FirstBurstLength: 262144
                MaxBurstLength: 262144
                ImmediateData: No
                InitialR2T: Yes
                MaxOutstandingR2T: 1
                ************************
                Attached SCSI devices:
                ************************
                Host Number: 7  State: running
                scsi7 Channel 00 Id 0 Lun: 0
                        Attached scsi disk sdb          State: running
                scsi7 Channel 00 Id 0 Lun: 1
                        Attached scsi disk sdc          State: running
                scsi7 Channel 00 Id 0 Lun: 2
                        Attached scsi disk sdd          State: running
                scsi7 Channel 00 Id 0 Lun: 3
                        Attached scsi disk sde          State: running
                scsi7 Channel 00 Id 0 Lun: 4
                        Attached scsi disk sdf          State: running
                scsi7 Channel 00 Id 0 Lun: 5
                        Attached scsi disk sdg          State: running
                scsi7 Channel 00 Id 0 Lun: 6
                        Attached scsi disk sdh          State: running
                scsi7 Channel 00 Id 0 Lun: 7
                        Attached scsi disk sdi          State: running
                scsi7 Channel 00 Id 0 Lun: 8
                        Attached scsi disk sdj          State: running
 

Looks OK, so the configuration of openfiler1 is correct and complete. I will now clone the openfiler1 VM as openfiler2.
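The automatic re-login after a reboot works because node.startup is already set to automatic in the node record shown earlier; if it were not, it could be switched with the usual iscsiadm update syntax (same target and portal as above):

iscsiadm -m node -T iqn.2006-01.com.openfiler:op1.rac1.asm -p openfiler1-spriv --op update -n node.startup -v automatic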

 
 
Clone openfiler1 VM as openfiler2 and configure openfiler2.
 
  • Clone openfiler1 as openfiler2
 
# Shutdown openfiler1 before cloning
vboxuser1@macieksrv.dba24.pl ~ $ vboxmanage clonevm openfiler1 --name openfiler2 --groups "/Openfilers" --basefolder /vbox-repo1/metadata --register
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Machine has been successfully cloned as "openfiler2"

# check vm information
vboxuser1@macieksrv.dba24.pl ~ $ vboxmanage showvminfo openfiler2
Name:            openfiler2
Groups:          /Openfilers
Guest OS:        Other Linux (64-bit)
UUID:            b3251226-cb4f-4fc7-a907-18f2c0958d66
Config file:     /vbox-repo1/metadata/Openfilers/openfiler2/openfiler2.vbox
Snapshot folder: /vbox-repo1/metadata/Openfilers/openfiler2/Snapshots
Log folder:      /vbox-repo1/metadata/Openfilers/openfiler2/Logs
Hardware UUID:   b3251226-cb4f-4fc7-a907-18f2c0958d66
Memory size:     2048MB
Page Fusion:     on
VRAM size:       256MB
CPU exec cap:    100%
HPET:            off
Chipset:         ich9
Firmware:        BIOS
Number of CPUs:  4
PAE:             on
Long Mode:       on
Triple Fault Reset: off
APIC:            on
X2APIC:          on
CPUID Portability Level: 0
CPUID overrides: None
Boot menu mode:  message and menu
Boot Device (1): DVD
Boot Device (2): HardDisk
Boot Device (3): HardDisk
Boot Device (4): Not Assigned
ACPI:            on
IOAPIC:          on
BIOS APIC mode:  APIC
Time offset:     0ms
RTC:             local time
Hardw. virt.ext: on
Nested Paging:   on
Large Pages:     off
VT-x VPID:       on
VT-x unr. exec.: on
Paravirt. Provider: Default
Effective Paravirt. Provider: KVM
State:           powered off (since 2016-11-29T12:53:48.036000000)
Monitor count:   1
3D Acceleration: off
2D Video Acceleration: off
Teleporter Enabled: off
Teleporter Port: 0
Teleporter Address:
Teleporter Password:
Tracing Enabled: off
Allow Tracing to Access VM: off
Tracing Configuration:
Autostart Enabled: off
Autostart Delay: 0
Default Frontend:
Storage Controller Name (0):            IDE Controller
Storage Controller Type (0):            PIIX4
Storage Controller Instance Number (0): 0
Storage Controller Max Port Count (0):  2
Storage Controller Port Count (0):      2
Storage Controller Bootable (0):        on
Storage Controller Name (1):            SATA Controller
Storage Controller Type (1):            IntelAhci
Storage Controller Instance Number (1): 0
Storage Controller Max Port Count (1):  30
Storage Controller Port Count (1):      5
Storage Controller Bootable (1):        on
IDE Controller (1, 0): /usr/share/virtualbox/VBoxGuestAdditions.iso (UUID: ad7ca697-6768-4fbb-adea-ea7e54bfe642)
SATA Controller (1, 0): /vbox-repo1/metadata/Openfilers/openfiler2/openfiler2-disk1.vdi (UUID: 22fba3ed-524c-4420-bdd5-3e19d29c416b)
SATA Controller (2, 0): /vbox-repo1/metadata/Openfilers/openfiler2/openfiler2-disk2.vdi (UUID: 825a7f27-dda8-4449-ba51-675cdfc99f50)
NIC 1:           MAC: 080027170282, Attachment: Bridged Interface 'wlx98ded00b5b05', Cable connected: on, Trace: off (file: none), Type: 82540EM, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 2:           MAC: 0800274D2A23, Attachment: Internal Network 'storage-internal', Cable connected: on, Trace: off (file: none), Type: 82540EM, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 3:           disabled
........
NIC 36:           disabled
Pointing Device: PS/2 Mouse
Keyboard Device: PS/2 Keyboard
UART 1:          disabled
UART 2:          disabled
UART 3:          disabled
UART 4:          disabled
LPT 1:           disabled
LPT 2:           disabled
Audio:           enabled (Driver: PulseAudio, Controller: AC97, Codec: STAC9700)
Clipboard Mode:  Bidirectional
Drag and drop Mode: Bidirectional
VRDE:            enabled (Address 0.0.0.0, Ports 5010-5020, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
Video redirection: disabled
VRDE property: TCP/Ports  = "5010-5020"
VRDE property: TCP/Address = <not set>
VRDE property: VideoChannel/Enabled = <not set>
VRDE property: VideoChannel/Quality = <not set>
VRDE property: VideoChannel/DownscaleProtection = <not set>
VRDE property: Client/DisableDisplay = <not set>
VRDE property: Client/DisableInput = <not set>
VRDE property: Client/DisableAudio = <not set>
VRDE property: Client/DisableUSB = <not set>
VRDE property: Client/DisableClipboard = <not set>
VRDE property: Client/DisableUpstreamAudio = <not set>
VRDE property: Client/DisableRDPDR = <not set>
VRDE property: H3DRedirect/Enabled = <not set>
VRDE property: Security/Method = <not set>
VRDE property: Security/ServerCertificate = <not set>
VRDE property: Security/ServerPrivateKey = <not set>
VRDE property: Security/CACertificate = <not set>
VRDE property: Audio/RateCorrectionMode = <not set>
VRDE property: Audio/LogPath = <not set>
USB:             disabled
EHCI:            disabled
XHCI:            disabled

USB Device Filters:

<none>

Bandwidth groups:  <none>

Shared folders:  <none>

Video capturing:    not active
Capture screens:    0
Capture file:       /vbox-repo1/metadata/Openfilers/openfiler2/openfiler2.webm
Capture dimensions: 1024x768
Capture rate:       512 kbps
Capture FPS:        25

Guest:

Configured memory balloon size:      0 MB


 
  • Move the virtual disks to the /vbox-repo1/disks directory and unify their names. Set the correct snapshot directory
 

As you can see below, the virtual disks of the cloned machine have been placed in the metadata directory, and their file names follow a slightly different format. I want to fix that.

 
SATA Controller (1, 0): /vbox-repo1/metadata/Openfilers/openfiler2/openfiler2-disk1.vdi (UUID: 22fba3ed-524c-4420-bdd5-3e19d29c416b)
SATA Controller (2, 0): /vbox-repo1/metadata/Openfilers/openfiler2/openfiler2-disk2.vdi (UUID: 825a7f27-dda8-4449-ba51-675cdfc99f50)
 

Move the disks and change snapshot directory

 
vboxuser1@macieksrv.dba24.pl ~ $ vboxmanage modifymedium /vbox-repo1/metadata/Openfilers/openfiler2/openfiler2-disk1.vdi --move /vbox-repo1/disks/openfiler2_localOSdisk1.vdi
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Move medium with UUID 22fba3ed-524c-4420-bdd5-3e19d29c416b finished

vboxuser1@macieksrv.dba24.pl ~ $ vboxmanage modifymedium /vbox-repo1/metadata/Openfilers/openfiler2/openfiler2-disk2.vdi --move /vbox-repo1/disks/openfiler2_functionaldisk1.vdi
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Move medium with UUID 825a7f27-dda8-4449-ba51-675cdfc99f50 finished

# set snapshots directory
vboxuser1@macieksrv.dba24.pl ~ $ vboxmanage modifyvm "openfiler2" --snapshotfolder /vbox-repo1/snapshots
 

This is how it looks after the change:

 
SATA Controller (1, 0): /vbox-repo1/disks/openfiler2_localOSdisk1.vdi (UUID: 22fba3ed-524c-4420-bdd5-3e19d29c416b)
SATA Controller (2, 0): /vbox-repo1/disks/openfiler2_functionaldisk1.vdi (UUID: 825a7f27-dda8-4449-ba51-675cdfc99f50)
 
  • Let's now start openfiler2 to change its IPs and other settings.
 
vboxuser1@macieksrv.dba24.pl ~ $ vboxmanage startvm openfiler2
Waiting for VM "openfiler2" to power on...
VM "openfiler2" has been successfully started.
 
  • Log in as root to the openfiler2 VM using the VM console - the network is not ready yet because all the MACs were regenerated during the clone process, so we must configure the network manually.

17-openfiler2_beforenetworkchange_29_11_2016_14_35_32

 
  • Replace openfiler1 with openfiler2 in /etc/hosts for 127.0.0.1
 
[root@openfiler1 ~] cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       openfiler2.dba24.pl openfiler2 localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6

# Openfilers

192.168.1.10 openfiler1
192.168.1.11 openfiler2
192.168.1.12 openfiler3

192.168.10.10 openfiler1-spriv
192.168.10.11 openfiler2-spriv
192.168.10.12 openfiler3-spriv

192.168.1.100 macieksrv.dba24.pl macieksrv

# storage network
192.168.10.21 oel7rac1n1-spriv.dba24.pl oel7rac1n1-spriv
192.168.10.23 oel7rac1n2-spriv.dba24.pl oel7rac1n2-spriv
 
  • Change hostname in /etc/sysconfig/network - replace openfiler1 with openfiler2
[root@openfiler1 ~] cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=openfiler2.dba24.pl
 
  • Modify MAC addresses and IPs
 

Look for VM's MAC addresses (vboxmanage showvminfo)
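On the host they can be pulled straight out of the VM definition, for example like this (the same approach is used later for openfiler3):

vboxuser1@macieksrv.dba24.pl ~ $ vboxmanage showvminfo openfiler2 | grep MAC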

 
NIC 1:           MAC: 080027170282, Attachment: Bridged Interface 'wlx98ded00b5b05', Cable connected: on, Trace: off (file: none), Type: 82540EM, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 2:           MAC: 0800274D2A23, Attachment: Internal Network 'storage-internal', Cable connected: on, Trace: off (file: none), Type: 82540EM, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
 

Change MAC in /etc/sysconfig/network-scripts/ifcfg-eth0 for bridged network

 
[root@openfiler1 ~] cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
MTU=1500
USERCTL=no
ONBOOT=yes
BOOTPROTO=dhcp
HWADDR=08:00:27:17:02:82
 

Change MAC and IP in /etc/sysconfig/network-scripts/ifcfg-eth1 for storage network

 
[root@openfiler1 ~] cat /etc/sysconfig/network-scripts/ifcfg-eth1
# Intel Corporation 82540EM Gigabit Ethernet Controller
DEVICE=eth1
BOOTPROTO=static
BROADCAST=192.168.10.255
HWADDR=08:00:27:4D:2A:23
IPADDR=192.168.10.11
NETMASK=255.255.255.0
NETWORK=192.168.10.0
ONBOOT=yes
 

Replace the MACs accordingly in /etc/udev/rules.d/70-persistent-net.rules

 
[root@openfiler1 ~] cat /etc/udev/rules.d/70-persistent-net.rules 

# This file was automatically generated by the /lib/udev/write_net_rules
# program run by the persistent-net-generator.rules rules file.
#
# You can modify it, as long as you keep each rule on a single line.

# Intel Corporation 82540EM Gigabit Ethernet Controller
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:17:02:82", ATTR{type}=="1", NAME="eth0"
# iNTEL Corporation 82540EM Gigabit Ethernet Controller
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:4d:2a:23", ATTR{type}=="1", NAME="eth1"
 

Reload udev rules and network

 
[root@openfiler1 ~] udevadm control --reload-rules
[root@openfiler1 ~] udevadm trigger
[root@openfiler1 ~] service network restart
 

You should now have the correct IPs assigned to your network adapters. Remember that the bridged interface should get its fixed IP from the DHCP server.
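A quick look at the interfaces confirms the addresses before rebooting (a minimal check; Openfiler's older userland still ships ifconfig):

# eth1 should show 192.168.10.11, eth0 should have picked up its address from the DHCP server
[root@openfiler1 ~] ifconfig -a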

 
  • Restart server - you should see the login screen with correct IP
 

18-openfiler2_1

 
  • Login to web GUI
 

19-openfiler2_2_guiloginopenfiler2afternetworkchange

 
  • Navigate to the Volumes tab. As you can see below, all the LUNs are exactly the same as those on openfiler1. Unfortunately we cannot use them, because their scsi_ids would not differ. You must remove all the LUNs and re-create them on openfiler2 so that they receive new, unique scsi_ids. Do it using the LUN specification table for openfiler2 and map them to a new iSCSI target "iqn.2006-01.com.openfiler:op2.rac1.asm" (see the sketch after the screenshot).
 

20-openfiler2_4_volumesviewfirstlogin2016-11-29-15-41-36
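If you want to see for yourself why the cloned volumes cannot be reused, ask the initiator for the SCSI identifier of one of the already-presented devices; Openfiler derives the ScsiSN/ScsiId of each LUN from the LV UUID (visible in /etc/ietd.conf earlier), and the cloned LVs on openfiler2 carry exactly the same UUIDs, so they would report the same values (a sketch, assuming the standard scsi_id location on OEL 7):

# print the unique SCSI identifier that the initiator sees for the first openfiler1 LUN
[root@oel7rac1n1 ~] /usr/lib/udev/scsi_id -g -u /dev/sdb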

 
  • Take a look at openfiler2's current network settings.
 

21-openfiler2_5_guinetworkconfigview-2016-11-29-15-39-33

 
  • Map the new LUNs to "iqn.2006-01.com.openfiler:op2.rac1.asm"
 

22-openfiler2_6_lunsmapped2016-11-29-17-40-24

 
  • Allow network access to the target for oel7rac1n1 and oel7rac1n2.
 

23-openfiler2_7_networkacls-2016-11-29-15-51-46

 
  • Set user and password for CHAP target authentication.
 

24-openfiler2_8_setuserpassword-2016-11-29-15-51-07

 
  • Check how the presented LUNs look in the system - just for fun!
 
# LVM volumes for LUNS
[root@openfiler2 ~] lvscan 
  ACTIVE            '/dev/racvg/rac1-asm-data-01' [1.97 GiB] inherit
  ACTIVE            '/dev/racvg/rac1-asm-data-02' [1.97 GiB] inherit
  ACTIVE            '/dev/racvg/rac1-asm-data-03' [1.97 GiB] inherit
  ACTIVE            '/dev/racvg/rac1-asm-data-04' [1.97 GiB] inherit
  ACTIVE            '/dev/racvg/rac1-asm-fra-01' [1.00 GiB] inherit
  ACTIVE            '/dev/racvg/rac1-asm-fra-02' [1.00 GiB] inherit
  ACTIVE            '/dev/racvg/rac1-asm-fra-03' [1.00 GiB] inherit
  ACTIVE            '/dev/racvg/rac1-asm-fra-04' [1.00 GiB] inherit
  ACTIVE            '/dev/racvg/rac1-asm-crs-02' [9.78 GiB] inherit

# luns presented by iscsi target
[root@openfiler2 ~] cat /proc/net/iet/volume
tid:1 name:iqn.2006-01.com.openfiler:op2.rac1.asm
	lun:0 state:0 iotype:blockio iomode:wt path:/dev/racvg/rac1-asm-crs-02
	lun:1 state:0 iotype:blockio iomode:wt path:/dev/racvg/rac1-asm-data-01
	lun:2 state:0 iotype:blockio iomode:wt path:/dev/racvg/rac1-asm-data-02
	lun:3 state:0 iotype:blockio iomode:wt path:/dev/racvg/rac1-asm-data-03
	lun:4 state:0 iotype:blockio iomode:wt path:/dev/racvg/rac1-asm-data-04
	lun:5 state:0 iotype:blockio iomode:wt path:/dev/racvg/rac1-asm-fra-01
	lun:6 state:0 iotype:blockio iomode:wt path:/dev/racvg/rac1-asm-fra-02
	lun:7 state:0 iotype:blockio iomode:wt path:/dev/racvg/rac1-asm-fra-03
	lun:8 state:0 iotype:blockio iomode:wt path:/dev/racvg/rac1-asm-fra-04

 
 
iSCSI storage settings on OEL 7.x for openfiler2 luns
 
 

Now that the LUNs are ready on openfiler2, we need to discover and configure them at the operating system level.

  • Login to oel7rac1n1 and discover targets presented by openfiler2
[root@oel7rac1n1 ~] iscsiadm -m discovery -t sendtargets -p openfiler2-spriv
192.168.10.11:3260,1 iqn.2006-01.com.openfiler:op2.rac1.asm
192.168.1.11:3260,1 iqn.2006-01.com.openfiler:op2.rac1.asm
 

The command also starts the iscsid service if it is not already running.

 

The following command displays information about the targets that is now stored in the discovery database

 
[root@oel7rac1n1 ~] iscsiadm -m discoverydb -t st -p openfiler2-spriv
# BEGIN RECORD 6.2.0.873-35
discovery.startup = manual
discovery.type = sendtargets
discovery.sendtargets.address = openfiler2-spriv
discovery.sendtargets.port = 3260
discovery.sendtargets.auth.authmethod = None
discovery.sendtargets.auth.username = <empty>
discovery.sendtargets.auth.password = <empty>
discovery.sendtargets.auth.username_in = <empty>
discovery.sendtargets.auth.password_in = <empty>
discovery.sendtargets.timeo.login_timeout = 15
discovery.sendtargets.use_discoveryd = No
discovery.sendtargets.discoveryd_poll_inval = 30
discovery.sendtargets.reopen_max = 5
discovery.sendtargets.timeo.auth_timeout = 45
discovery.sendtargets.timeo.active_timeout = 30
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
# END RECORD
 
  • In order to see and access the LUNs behind the target we need to log in to it. First we need to set the authentication method and credentials
[root@oel7rac1n1 ~] iscsiadm -m node -p openfiler2-spriv -T iqn.2006-01.com.openfiler:op2.rac1.asm --op update -n node.session.auth.username -v iscsi_rac
[root@oel7rac1n1 ~] iscsiadm -m node -p openfiler2-spriv -T iqn.2006-01.com.openfiler:op2.rac1.asm --op update -n node.session.auth.authmethod -v CHAP
[root@oel7rac1n1 ~] iscsiadm -m node -p openfiler2-spriv -T iqn.2006-01.com.openfiler:op2.rac1.asm --op update -n node.session.auth.password -v *****

# display auth settings
[root@oel7rac1n1 ~] iscsiadm -m node -p openfiler2-spriv -T iqn.2006-01.com.openfiler:op2.rac1.asm|grep node.session.auth
node.session.auth.authmethod = CHAP
node.session.auth.username = iscsi_rac
node.session.auth.password = ********
node.session.auth.username_in = <empty>
node.session.auth.password_in = <empty>
 

Login to the iqn.2006-01.com.openfiler:op2.rac1.asm target

 
[root@oel7rac1n1 ~] iscsiadm -m node -T iqn.2006-01.com.openfiler:op2.rac1.asm -p openfiler2-spriv -l
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:op2.rac1.asm, portal: 192.168.10.11,3260] (multiple)
Login to [iface: default, target: iqn.2006-01.com.openfiler:op2.rac1.asm, portal: 192.168.10.11,3260] successful.
 

oel7rac1n1 successfully logged into the target

 
  • Check discovered luns
 
[root@oel7rac1n1 ~] iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-870
version 6.2.0.873-35
Target: iqn.2006-01.com.openfiler:op1.rac1.asm (non-flash)
        Current Portal: 192.168.10.10:3260,1
        Persistent Portal: 192.168.10.10:3260,1
                **********
                Interface:
                **********
                Iface Name: default
                Iface Transport: tcp
                Iface Initiatorname: iqn.1988-12.com.oracle:e942b9ba36f
                Iface IPaddress: <empty>
                Iface HWaddress: <empty>
                Iface Netdev: <empty>
                SID: 1
                iSCSI Connection State: TRANSPORT WAIT
                iSCSI Session State: FREE
                Internal iscsid Session State: REOPEN
                *********
                Timeouts:
                *********
                Recovery Timeout: 120
                Target Reset Timeout: 30
                LUN Reset Timeout: 30
                Abort Timeout: 15
                *****
                CHAP:
                *****
                username: iscsi_rac
                password: ********
                username_in: <empty>
                password_in: ********
                ************************
                Negotiated iSCSI params:
                ************************
                HeaderDigest: None
                DataDigest: None
                MaxRecvDataSegmentLength: 262144
                MaxXmitDataSegmentLength: 131072
                FirstBurstLength: 262144
                MaxBurstLength: 262144
                ImmediateData: No
                InitialR2T: Yes
                MaxOutstandingR2T: 1
                ************************
                Attached SCSI devices:
                ************************
                Host Number: 7  State: running
                scsi7 Channel 00 Id 0 Lun: 0
                        Attached scsi disk sdb          State: transport-offline
                scsi7 Channel 00 Id 0 Lun: 1
                        Attached scsi disk sdc          State: transport-offline
                scsi7 Channel 00 Id 0 Lun: 2
                        Attached scsi disk sdd          State: transport-offline
                scsi7 Channel 00 Id 0 Lun: 3
                        Attached scsi disk sde          State: transport-offline
                scsi7 Channel 00 Id 0 Lun: 4
                        Attached scsi disk sdf          State: transport-offline
                scsi7 Channel 00 Id 0 Lun: 5
                        Attached scsi disk sdg          State: transport-offline
                scsi7 Channel 00 Id 0 Lun: 6
                        Attached scsi disk sdh          State: transport-offline
                scsi7 Channel 00 Id 0 Lun: 7
                        Attached scsi disk sdi          State: transport-offline
                scsi7 Channel 00 Id 0 Lun: 8
                        Attached scsi disk sdj          State: transport-offline
Target: iqn.2006-01.com.openfiler:op2.rac1.asm (non-flash)
        Current Portal: 192.168.10.11:3260,1
        Persistent Portal: 192.168.10.11:3260,1
                **********
                Interface:
                **********
                Iface Name: default
                Iface Transport: tcp
                Iface Initiatorname: iqn.1988-12.com.oracle:e942b9ba36f
                Iface IPaddress: 192.168.10.21
                Iface HWaddress: <empty>
                Iface Netdev: <empty>
                SID: 3
                iSCSI Connection State: LOGGED IN
                iSCSI Session State: LOGGED_IN
                Internal iscsid Session State: NO CHANGE
                *********
                Timeouts:
                *********
                Recovery Timeout: 120
                Target Reset Timeout: 30
                LUN Reset Timeout: 30
                Abort Timeout: 15
                *****
                CHAP:
                *****
                username: iscsi_rac
                password: ********
                username_in: <empty>
                password_in: ********
                ************************
                Negotiated iSCSI params:
                ************************
                HeaderDigest: None
                DataDigest: None
                MaxRecvDataSegmentLength: 262144
                MaxXmitDataSegmentLength: 131072
                FirstBurstLength: 262144
                MaxBurstLength: 262144
                ImmediateData: No
                InitialR2T: Yes
                MaxOutstandingR2T: 1
                ************************
                Attached SCSI devices:
                ************************
                Host Number: 9  State: running
                scsi9 Channel 00 Id 0 Lun: 0
                        Attached scsi disk sdk          State: running
                scsi9 Channel 00 Id 0 Lun: 1
                        Attached scsi disk sdl          State: running
                scsi9 Channel 00 Id 0 Lun: 2
                        Attached scsi disk sdm          State: running
                scsi9 Channel 00 Id 0 Lun: 3
                        Attached scsi disk sdn          State: running
                scsi9 Channel 00 Id 0 Lun: 4
                        Attached scsi disk sdo          State: running
                scsi9 Channel 00 Id 0 Lun: 5
                        Attached scsi disk sdp          State: running
                scsi9 Channel 00 Id 0 Lun: 6
                        Attached scsi disk sdq          State: running
                scsi9 Channel 00 Id 0 Lun: 7
                        Attached scsi disk sdr          State: running
                scsi9 Channel 00 Id 0 Lun: 8
                        Attached scsi disk sds          State: running

As openfiler1 is down, the state of its LUNs is transport-offline. Openfiler2's disks were discovered correctly. Restart the oel7rac1n1 VM to check whether the OS logs in to the target automatically.
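Once openfiler1 is powered on again, the REOPEN state should resolve itself and the session should recover automatically; if it does not, the manual login used earlier works as well:

[root@oel7rac1n1 ~] iscsiadm -m node -T iqn.2006-01.com.openfiler:op1.rac1.asm -p openfiler1-spriv -l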

 

OK, let's clone openfiler1 again, this time as openfiler3.

 
 
Clone openfiler1 VM as openfiler3 and configure openfiler3.
 
  • Clone openfiler1 as openfiler3
 
# Shutdown openfiler1 before cloning
vboxuser1@macieksrv.dba24.pl ~ $ vboxmanage clonevm openfiler1 --name openfiler3 --groups "/Openfilers" --basefolder /vbox-repo1/metadata --register
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Machine has been successfully cloned as "openfiler3"
 
  • Move the virtual disks to the /vbox-repo1/disks directory and unify their names. Set the correct snapshot directory.
 

As you can see below, the virtual disks of the cloned machine have been placed in the metadata directory, and their file names follow a slightly different format. I want to fix that.

 
vboxuser1@macieksrv.dba24.pl ~ $ vboxmanage showvminfo openfiler3|grep vbox
Config file:     /vbox-repo1/metadata/Openfilers/openfiler3/openfiler3.vbox
Snapshot folder: /vbox-repo1/metadata/Openfilers/openfiler3/Snapshots
Log folder:      /vbox-repo1/metadata/Openfilers/openfiler3/Logs
SATA Controller (1, 0): /vbox-repo1/metadata/Openfilers/openfiler3/openfiler3-disk1.vdi (UUID: 37530cdf-685b-4597-a4e1-00d84470bcd6)
SATA Controller (2, 0): /vbox-repo1/metadata/Openfilers/openfiler3/openfiler3-disk2.vdi (UUID: 665c3546-0e3d-4c45-b368-93d43d4cd6c9)
Capture file:       /vbox-repo1/metadata/Openfilers/openfiler3/openfiler3.webm
 

Move the disks and set snapshots directory.

 
vboxuser1@macieksrv.dba24.pl ~ $ vboxmanage modifymedium /vbox-repo1/metadata/Openfilers/openfiler3/openfiler3-disk1.vdi --move /vbox-repo1/disks/openfiler3_localOSdisk1.vdi
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Move medium with UUID 37530cdf-685b-4597-a4e1-00d84470bcd6 finished

vboxuser1@macieksrv.dba24.pl ~ $ vboxmanage modifymedium /vbox-repo1/metadata/Openfilers/openfiler3/openfiler3-disk2.vdi --move /vbox-repo1/disks/openfiler3_functionaldisk1.vdi
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Move medium with UUID 665c3546-0e3d-4c45-b368-93d43d4cd6c9 finished

# set snapshots directory
vboxuser1@macieksrv.dba24.pl ~ $ vboxmanage modifyvm "openfiler3" --snapshotfolder /vbox-repo1/snapshots
 

This is how it looks after the change:

 
vboxuser1@macieksrv.dba24.pl ~ $ vboxmanage showmediuminfo /vbox-repo1/disks/openfiler3_localOSdisk1.vdi
UUID:           37530cdf-685b-4597-a4e1-00d84470bcd6
Parent UUID:    base
State:          created
Type:           normal (base)
Location:       /vbox-repo1/disks/openfiler3_localOSdisk1.vdi
Storage format: VDI
Format variant: dynamic default
Capacity:       10240 MBytes
Size on disk:   1669 MBytes
Encryption:     disabled
In use by VMs:  openfiler3 (UUID: 13dfabc0-5c72-4b4c-b06f-82b6c4729907)

vboxuser1@macieksrv.dba24.pl ~ $ vboxmanage showmediuminfo /vbox-repo1/disks/openfiler3_functionaldisk1.vdi
UUID:           665c3546-0e3d-4c45-b368-93d43d4cd6c9
Parent UUID:    base
State:          created
Type:           normal (base)
Location:       /vbox-repo1/disks/openfiler3_functionaldisk1.vdi
Storage format: VDI
Format variant: dynamic default
Capacity:       51200 MBytes
Size on disk:   1488 MBytes
Encryption:     disabled
In use by VMs:  openfiler3 (UUID: 13dfabc0-5c72-4b4c-b06f-82b6c4729907)
 
  • Let's now start openfiler3 to change its IPs and other settings.
 
# Let's start openfiler3 now 
vboxuser1@macieksrv.dba24.pl ~ $ vboxmanage startvm openfiler3
Waiting for VM "openfiler3" to power on...
VM "openfiler3" has been successfully started.
 
  • Log in as root to the openfiler3 VM using the VM console - the network is not ready yet because all the MACs were regenerated during the clone process, so we must configure the network manually.

17-openfiler2_beforenetworkchange_29_11_2016_14_35_32

 
  • Replace openfiler1 with openfiler3 in /etc/hosts for 127.0.0.1
 
[root@openfiler1 ~] cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       openfiler3.dba24.pl openfiler3 localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6

# Openfilers

192.168.1.10 openfiler1
192.168.1.11 openfiler2
192.168.1.12 openfiler3

192.168.10.10 openfiler1-spriv
192.168.10.11 openfiler2-spriv
192.168.10.12 openfiler3-spriv

192.168.1.100 macieksrv.dba24.pl macieksrv

# storage network
192.168.10.21 oel7rac1n1-spriv.dba24.pl oel7rac1n1-spriv
192.168.10.23 oel7rac1n2-spriv.dba24.pl oel7rac1n2-spriv
 
  • Change hostname in /etc/sysconfig/network - replace openfiler1 with openfiler3
[root@openfiler1 ~] cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=openfiler3.dba24.pl
 
  • Modify MAC addresses and IPs
 

Look up the VM's MAC addresses with vboxmanage showvminfo

 
vboxuser1@macieksrv.dba24.pl ~ $ vboxmanage showvminfo openfiler3 |grep MAC
NIC 1:           MAC: 08002774AF05, Attachment: Bridged Interface 'wlx98ded00b5b05', Cable connected: on, Trace: off (file: none), Type: 82540EM, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 2:           MAC: 08002700C105, Attachment: Internal Network 'storage-internal', Cable connected: on, Trace: off (file: none), Type: 82540EM, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
 

Change the MAC in /etc/sysconfig/network-scripts/ifcfg-eth0 for the bridged network

 
[root@openfiler1 ~] cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
MTU=1500
USERCTL=no
ONBOOT=yes
BOOTPROTO=dhcp
HWADDR=08:00:27:74:AF:05
 

Change the MAC and IP in /etc/sysconfig/network-scripts/ifcfg-eth1 for the storage network

 
[root@openfiler1 ~] cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
BROADCAST=192.168.10.255
HWADDR=08:00:27:00:C1:05
IPADDR=192.168.10.12
NETMASK=255.255.255.0
NETWORK=192.168.10.0
ONBOOT=yes
 

Replace the MACs accordingly in /etc/udev/rules.d/70-persistent-net.rules (a scripted alternative for all three files follows the listing)

 
[root@openfiler1 ~] cat /etc/udev/rules.d/70-persistent-net.rules 

# This file was automatically generated by the /lib/udev/write_net_rules
# program run by the persistent-net-generator.rules rules file.
#
# You can modify it, as long as you keep each rule on a single line.

# Intel Corporation 82540EM Gigabit Ethernet Controller
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:74:AF:05", ATTR{type}=="1", NAME="eth0"
# Intel Corporation 82540EM Gigabit Ethernet Controller
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:00:C1:05", ATTR{type}=="1", NAME="eth1"
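
If you would rather script the MAC replacements than edit the three files manually, a rough sketch - OLD_MAC_ETH0 and OLD_MAC_ETH1 are placeholders for the addresses inherited from openfiler1, and the new values come from the showvminfo output above:

# placeholders: OLD_MAC_ETH0 / OLD_MAC_ETH1 are whatever MACs the files still contain after cloning
[root@openfiler1 ~] sed -i 's/OLD_MAC_ETH0/08:00:27:74:AF:05/g' /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/udev/rules.d/70-persistent-net.rules
[root@openfiler1 ~] sed -i 's/OLD_MAC_ETH1/08:00:27:00:C1:05/g' /etc/sysconfig/network-scripts/ifcfg-eth1 /etc/udev/rules.d/70-persistent-net.rules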
 

Reload udev rules and network

 
[root@openfiler1 ~] udevadm control --reload-rules
[root@openfiler1 ~] udevadm trigger
[root@openfiler1 ~] service network restart
 

You should now have the correct IPs assigned to your network adapters. Remember that the bridged interface should get its fixed IP from the DHCP server.
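
A quick way to confirm this from the console (assuming the stock net-tools ifconfig is present on the Openfiler appliance):

# check the addresses actually assigned to both interfaces
[root@openfiler1 ~] ifconfig eth0 | grep "inet addr"
[root@openfiler1 ~] ifconfig eth1 | grep "inet addr"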

 
  • Restart the server - you should see the login screen with the correct IP
 

25-openfiler3-screenafternetworkchanges_29_11_2016_16_59_58

 
  • Log in to the web GUI
 

19-openfiler2_2_guiloginopenfiler2afternetworkchange

 
  • Navigate to the Volumes tab. Remove all the volumes and create only one for the 3rd voting disk
 

27-openfiler3-createquorum3rdvotedisk

 
  • Map the new LUN to the new iSCSI target "iqn.2006-01.com.openfiler:op3.rac1.asm"
 

28-openfiler3_lunmappings2016-11-29-17-46-13

 
  • Allow network access to the target for oel7rac1n1 and oel7rac1n2.
 

29-openfiler3-screenshot-from-2016-12-06-20-45-28

 
  • Set the user and password for CHAP target authentication (a sketch of the resulting target config follows the screenshot).
 

30-openfiler3-screenshot-from-2016-12-06-20-45-47
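
For the curious: these GUI settings end up driving the iSCSI Enterprise Target daemon (ietd) we enabled earlier. As an assumption about how Openfiler renders them, the resulting target stanza - conventionally kept in /etc/ietd.conf - would look roughly like the sketch below; the password is a placeholder and the exact layout may differ on your Openfiler build.

# illustrative ietd stanza only - not copied from this lab
Target iqn.2006-01.com.openfiler:op3.rac1.asm
        IncomingUser iscsi_rac <password>
        Lun 0 Path=/dev/racvg/rac1-asm-crs-03,Type=blockio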

 
  • Check how the presented LUN looks in the system - just for fun again!
 
# LVM volumes for LUNS
[root@openfiler3 ~]# lvscan
  ACTIVE            '/dev/racvg/rac1-asm-crs-03' [512.00 MiB] inherit

# luns presented by iscsi target
[root@openfiler3 ~] cat /proc/net/iet/volume

 
 
iSCSI storage settings on OEL 7.x for the openfiler3 LUN
 
 

As the LUN is now ready on openfiler3, we need to go through the discovery and configuration steps at the operating system level.

  • Log in to oel7rac1n1 and discover the targets presented by openfiler3
[root@oel7rac1n1 ~] iscsiadm -m discovery -t sendtargets -p openfiler3-spriv
192.168.10.12:3260,1 iqn.2006-01.com.openfiler:op3.rac1.asm
192.168.1.12:3260,1 iqn.2006-01.com.openfiler:op3.rac1.asm
 

The command also starts the iscsid service if it is not already running.
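
On OEL 7.x you can confirm that with systemd, for example:

# check the initiator daemon on the RAC node (it should be active once discovery has run)
[root@oel7rac1n1 ~]# systemctl status iscsid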

 

The following command displays the information about the target that is now stored in the discovery database:

 
[root@oel7rac1n1 ~]# iscsiadm -m discoverydb -t st -p openfiler3-spriv
# BEGIN RECORD 6.2.0.873-35
discovery.startup = manual
discovery.type = sendtargets
discovery.sendtargets.address = openfiler3-spriv
discovery.sendtargets.port = 3260
discovery.sendtargets.auth.authmethod = None
discovery.sendtargets.auth.username = <empty>
discovery.sendtargets.auth.password = <empty>
discovery.sendtargets.auth.username_in = <empty>
discovery.sendtargets.auth.password_in = <empty>
discovery.sendtargets.timeo.login_timeout = 15
discovery.sendtargets.use_discoveryd = No
discovery.sendtargets.discoveryd_poll_inval = 30
discovery.sendtargets.reopen_max = 5
discovery.sendtargets.timeo.auth_timeout = 45
discovery.sendtargets.timeo.active_timeout = 30
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
# END RECORD
 
  • In order to see and access the LUN behind the target we need to log in to it. First we need to set the authentication method and credentials
[root@oel7rac1n1 ~] iscsiadm -m node -p openfiler3-spriv -T iqn.2006-01.com.openfiler:op3.rac1.asm --op update -n node.session.auth.username -v iscsi_rac
[root@oel7rac1n1 ~] iscsiadm -m node -p openfiler3-spriv -T iqn.2006-01.com.openfiler:op3.rac1.asm --op update -n node.session.auth.authmethod -v CHAP
[root@oel7rac1n1 ~] iscsiadm -m node -p openfiler3-spriv -T iqn.2006-01.com.openfiler:op3.rac1.asm --op update -n node.session.auth.password -v *****

# display auth. settings
[root@oel7rac1n1 ~] iscsiadm -m node -p openfiler3-spriv -T iqn.2006-01.com.openfiler:op3.rac1.asm|grep node.session.auth
node.session.auth.authmethod = CHAP
node.session.auth.username = iscsi_rac
node.session.auth.password = ********
node.session.auth.username_in = <empty>
node.session.auth.password_in = <empty>
 

Log in to the iqn.2006-01.com.openfiler:op3.rac1.asm target

 
[root@oel7rac1n1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:op3.rac1.asm -p openfiler3-spriv -l
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:op3.rac1.asm, portal: 192.168.10.12,3260] (multiple)
Login to [iface: default, target: iqn.2006-01.com.openfiler:op3.rac1.asm, portal: 192.168.10.12,3260] successful.
 

oel7rac1n1 successfully logged into the target
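
Two optional checks to see that a new SCSI disk appeared for this target (the /dev/disk/by-path naming is my assumption about how udev labels iSCSI devices on OEL 7.x):

# list SCSI block devices, including the iSCSI-attached ones
[root@oel7rac1n1 ~]# lsblk --scsi

# the by-path symlinks contain the target IQN, so the new LUN is easy to spot
[root@oel7rac1n1 ~]# ls -l /dev/disk/by-path/ | grep op3.rac1.asm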

 
  • Check the discovered LUNs
 
[root@oel7rac1n1 ~]# iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-870
version 6.2.0.873-35
Target: iqn.2006-01.com.openfiler:op1.rac1.asm (non-flash)
        Current Portal: 192.168.10.10:3260,1
        Persistent Portal: 192.168.10.10:3260,1
                **********
                Interface:
                **********
                Iface Name: default
                Iface Transport: tcp
                Iface Initiatorname: iqn.1988-12.com.oracle:e942b9ba36f
                Iface IPaddress: 192.168.10.21
                Iface HWaddress: <empty>
                Iface Netdev: <empty>
                SID: 1
                iSCSI Connection State: LOGGED IN
                iSCSI Session State: LOGGED_IN
                Internal iscsid Session State: NO CHANGE
                *********
                Timeouts:
                *********
                Recovery Timeout: 120
                Target Reset Timeout: 30
                LUN Reset Timeout: 30
                Abort Timeout: 15
                *****
                CHAP:
                *****
                username: iscsi_rac
                password: ********
                username_in: <empty>
                password_in: ********
                ************************
                Negotiated iSCSI params:
                ************************
                HeaderDigest: None
                DataDigest: None
                MaxRecvDataSegmentLength: 262144
                MaxXmitDataSegmentLength: 131072
                FirstBurstLength: 262144
                MaxBurstLength: 262144
                ImmediateData: No
                InitialR2T: Yes
                MaxOutstandingR2T: 1
                ************************
                Attached SCSI devices:
                ************************
                Host Number: 7  State: running
                scsi7 Channel 00 Id 0 Lun: 0
                        Attached scsi disk sdb          State: running
                scsi7 Channel 00 Id 0 Lun: 1
                        Attached scsi disk sdc          State: running
                scsi7 Channel 00 Id 0 Lun: 2
                        Attached scsi disk sdd          State: running
                scsi7 Channel 00 Id 0 Lun: 3
                        Attached scsi disk sde          State: running
                scsi7 Channel 00 Id 0 Lun: 4
                        Attached scsi disk sdf          State: running
                scsi7 Channel 00 Id 0 Lun: 5
                        Attached scsi disk sdg          State: running
                scsi7 Channel 00 Id 0 Lun: 6
                        Attached scsi disk sdh          State: running
                scsi7 Channel 00 Id 0 Lun: 7
                        Attached scsi disk sdi          State: running
                scsi7 Channel 00 Id 0 Lun: 8
                        Attached scsi disk sdj          State: running
Target: iqn.2006-01.com.openfiler:op2.rac1.asm (non-flash)
        Current Portal: 192.168.10.11:3260,1
        Persistent Portal: 192.168.10.11:3260,1
                **********
                Interface:
                **********
                Iface Name: default
                Iface Transport: tcp
                Iface Initiatorname: iqn.1988-12.com.oracle:e942b9ba36f
                Iface IPaddress: 192.168.10.21
                Iface HWaddress: <empty>
                Iface Netdev: <empty>
                SID: 3
                iSCSI Connection State: LOGGED IN
                iSCSI Session State: LOGGED_IN
                Internal iscsid Session State: NO CHANGE
                *********
                Timeouts:
                *********
                Recovery Timeout: 120
                Target Reset Timeout: 30
                LUN Reset Timeout: 30
                Abort Timeout: 15
                *****
                CHAP:
                *****
                username: iscsi_rac
                password: ********
                username_in: <empty>
                password_in: ********
                ************************
                Negotiated iSCSI params:
                ************************
                HeaderDigest: None
                DataDigest: None
                MaxRecvDataSegmentLength: 262144
                MaxXmitDataSegmentLength: 131072
                FirstBurstLength: 262144
                MaxBurstLength: 262144
                ImmediateData: No
                InitialR2T: Yes
                MaxOutstandingR2T: 1
                ************************
                Attached SCSI devices:
                ************************
                Host Number: 9  State: running
                scsi9 Channel 00 Id 0 Lun: 0
                        Attached scsi disk sdk          State: running
                scsi9 Channel 00 Id 0 Lun: 1
                        Attached scsi disk sdl          State: running
                scsi9 Channel 00 Id 0 Lun: 2
                        Attached scsi disk sdm          State: running
                scsi9 Channel 00 Id 0 Lun: 3
                        Attached scsi disk sdn          State: running
                scsi9 Channel 00 Id 0 Lun: 4
                        Attached scsi disk sdo          State: running
                scsi9 Channel 00 Id 0 Lun: 5
                        Attached scsi disk sdp          State: running
                scsi9 Channel 00 Id 0 Lun: 6
                        Attached scsi disk sdq          State: running
                scsi9 Channel 00 Id 0 Lun: 7
                        Attached scsi disk sdr          State: running
                scsi9 Channel 00 Id 0 Lun: 8
                        Attached scsi disk sds          State: running
Target: iqn.2006-01.com.openfiler:op3.rac1.asm (non-flash)
        Current Portal: 192.168.10.12:3260,1
        Persistent Portal: 192.168.10.12:3260,1
                **********
                Interface:
                **********
                Iface Name: default
                Iface Transport: tcp
                Iface Initiatorname: iqn.1988-12.com.oracle:e942b9ba36f
                Iface IPaddress: 192.168.10.21
                Iface HWaddress: <empty>
                Iface Netdev: <empty>
                SID: 4
                iSCSI Connection State: LOGGED IN
                iSCSI Session State: LOGGED_IN
                Internal iscsid Session State: NO CHANGE
                *********
                Timeouts:
                *********
                Recovery Timeout: 120
                Target Reset Timeout: 30
                LUN Reset Timeout: 30
                Abort Timeout: 15
                *****
                CHAP:
                *****
                username: iscsi_rac
                password: ********
                username_in: <empty>
                password_in: ********
                ************************
                Negotiated iSCSI params:
                ************************
                HeaderDigest: None
                DataDigest: None
                MaxRecvDataSegmentLength: 262144
                MaxXmitDataSegmentLength: 131072
                FirstBurstLength: 262144
                MaxBurstLength: 262144
                ImmediateData: No
                InitialR2T: Yes
                MaxOutstandingR2T: 1
                ************************
                Attached SCSI devices:
                ************************
                Host Number: 10 State: running
                scsi10 Channel 00 Id 0 Lun: 0
                        Attached scsi disk sdt          State: running


As openfiler1 is down, the state of its LUNs is transport-offline. Openfiler2's disks were discovered correctly. Start openfiler1, and after a few minutes restart the oel7rac1n1 VM to check whether the OS logs in to all of the targets automatically.
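
On OEL 7.x, nodes created through sendtargets discovery normally default to automatic startup, but if you want to force (or just confirm) that behaviour, a small sketch using the same iscsiadm syntax as above:

# make the login to the openfiler3 target automatic at boot
[root@oel7rac1n1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:op3.rac1.asm -p openfiler3-spriv --op update -n node.startup -v automatic

# after the reboot, a quick way to confirm all three sessions are back
[root@oel7rac1n1 ~]# iscsiadm -m session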

 
 
 
 

Storage for our RAC nodes has been successfully prepared!!

 
 
 
What do you think?? Please post your comments :)
Source: my experience and the World Wide Web

About the author

 
Maciej Tokar

An Oracle technology geek and crazy long distance runner, DBA24 Owner
Senior Oracle DBA / Consultant / [OCP10g, OCP12c, OCE RAC 10g] / [experience: 9y+]
Currently working for Bluegarden (Oslo Norway) by Miratech Group
Past: Mastercard / Trevica by Britenet, Citi International PLC, PZU

 