Note: Configuration and testing were performed on FAS2240-2 and FAS2240-4 equipment.
Contents:
- Change NetApp Filer Name
- How to Configure iSCSI on AIX Using the iSCSI Initiator
- How to Configure iSCSI on Windows 7 & Windows Server Using the iSCSI Initiator
Change NetApp Filer Name
To change the filer name of a NetApp system, run the setup command on the filer and enter the details. The values entered during installation are shown as defaults, and any of them can be changed by typing the new information on the same line. Running setup this way also resolves any mismatch showing in your /etc/rc file.
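Before running setup, you can check the current name and inspect the startup file for the mismatch yourself; a quick sketch using standard 7-Mode commands:

hostname> hostname          (displays the current filer name)
hostname> rdfile /etc/rc    (prints the startup configuration file)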
Temporarily disable the cluster first:
hostname> cf disable
Then run the setup command and follow the procedure:
hostname> setup
The setup command will rewrite the /etc/rc, /etc/exports, /etc/hosts, /etc/hosts.equiv, /etc/dgateways, /etc/nsswitch.conf, and /etc/resolv.conf files, saving the original contents of these files in .bak files (e.g. /etc/exports.bak).
Are you sure you want to continue? [yes]
NetApp Release 8.1RC3 7-Mode: Wed Feb 15 19:28:21 PST 2012
System ID: ****** (hostname); partner ID: (hostname)
System Serial Number: 650000020671 (hostname)
System Rev: B1
System Storage Configuration: Multi-Path HA
System ACP Connectivity: Full Connectivity
slot 0: System Board
        Processors: 4
        Processor type: Intel(R) Xeon(R) CPU C3528 @ 1.73GHz
        Memory Size: 6144 MB
        Memory Attributes: Hoisting
                           Normal ECC
        Controller: B
        Service Processor Status: Online
slot 0: Internal 10/100 Ethernet Controller
        e0M MAC Address: mac-address (auto-100tx-fd-up)
        e0P MAC Address: mac-address (auto-100tx-fd-up)
slot 0: Quad Gigabit Ethernet Controller 82580
        e0a MAC Address: mac-address (auto-1000t-fd-up)
        e0b MAC Address: mac-address (auto-1000t-fd-up)
        e0c MAC Address: mac-address (auto-1000t-fd-up)
        e0d MAC Address: mac-address (auto-1000t-fd-up)
slot 0: Interconnect HBA: Mellanox IB MT25204
slot 0: SAS Host Adapter 0a
        48 Disks: 26880.0GB
        1 shelf with IOM3, 1 shelf with IOM6E
slot 0: SAS Host Adapter 0b
        48 Disks: 26880.0GB
        1 shelf with IOM3, 1 shelf with IOM6E
slot 0: Intel ICH USB EHCI Adapter u0a (0xdf101000)
        boot0 Micron Technology Real SSD eUSB 2GB, class 0/0, rev 2.00/11.10, addr 2 1936MB 512B/sect (0FF0022700155706)
slot 1: Dual 10 Gigabit Ethernet Controller IX1-SFP+
        e1a MAC Address: mac-address (auto-unknown-down)
        e1b MAC Address: mac-address (auto-unknown-down)
Please enter the new hostname [old-hostname]: new_hostname
Invalid hostname.
A valid hostname consists of alphanumeric characters [a-zA-Z0-9] and dash [-].
Please enter the new hostname [old-hostname]: new-hostname
Do you want to enable IPv6? [n]:
Do you want to configure interface groups? [y]:
Number of interface groups to configure? [1]
Name of interface group #1 [vport1]:
Is vport1 a single [s], multi [m] or a lacp [l] interface group? [m]
Is vport1 to use IP-based [i], MAC-based [m], Round-robin based [r] or Port based [p] load balancing? [m]
Number of links for vport1? [4]
Name of link #1 for vport1 [e0c]:
Name of link #2 for vport1 [e0a]:
Name of link #3 for vport1 [e0b]:
Name of link #4 for vport1 [e0d]:
Please enter the IP address for Network Interface vport1 [default, entered earlier during installation]:
Please enter the netmask for Network Interface vport1 [255.255.0.0]:
Should interface group vport1 take over a partner interface group during failover? [n]: y
Please enter the partner interface name to be taken over by vport1 []: vport2
Please enter media type for vport1 {100tx-fd, tp-fd, 100tx, tp, auto (10/100/1000)} [auto]:
Please enter the IP address for Network Interface e1a []:
Should interface e1a take over a partner IP address during failover? [n]:
Please enter the IP address for Network Interface e1b []:
Should interface e1b take over a partner IP address during failover? [n]:
e0M is a Data ONTAP dedicated management port.
NOTE: Dedicated management ports cannot be used for data
protocols (NFS, CIFS, iSCSI, NDMP or Snap*),
and if they are configured they should be on an isolated management LAN.
The default route will use dedicated mgmt ports only as the last resort,
since data protocol traffic will be blocked by default.
Please enter the IP address for Network Interface e0M [default, entered earlier during installation]:
Please enter the netmask for Network Interface e0M [255.255.0.0]:
Should interface e0M take over a partner IP address during failover? [n]: e0M
Please answer "y" or "n".
Should interface e0M take over a partner IP address during failover? [n]: y
Please enter the IPv4 address or interface name to be taken over by e0M []: e0M
Would you like to continue setup through the web interface? [n]:
Please enter the name or IP address of the IPv4 default gateway [default, entered earlier during installation]:
The administration host is given root access to the filer's
/etc files for system administration. To allow /etc root access
to all NFS clients, enter 'all' below.
Please enter the name or IP address of the administration host [default, entered earlier during installation]: 10.0.*.*
Please enter timezone [Asia/Karachi]:
Where is the filer located? []: Asia/Karachi
What language will be used for multi-protocol files (Type ? for list)?: en_US
Setting language on volume vol0
The new language mappings will be available after reboot
Language set on volume vol0
Setting language on volume lun_500GB_vol
Wed Feb 27 09:40:15 PKT [hostname:vol.language.changed:info]: Language on volume vol0 changed to en_US
The new language mappings will be available after reboot
Language set on volume lun_500GB_vol
Setting language on volume lun_500GB_vol
Wed Feb 27 09:40:15 PKT [hostname:vol.language.changed:info]: Language on volume lun_500GB_vol changed to en_US
The new language mappings will be available after reboot
Language set on volume lun_500GB_vol
Enter the root directory for HTTP files [/vol/vol0/home/http]:
Wed Feb 27 09:40:16 PKT [hostname:vol.language.changed:info]: Language on volume lun_500GB_vol changed to en_US
Do you want to run DNS resolver? [y]:
Please enter DNS domain name [test.com]:
You may enter up to 3 nameservers.
Please enter the IP address for first nameserver [10.0.*.*]:
Do you want another nameserver? [n]:
Do you want to run NIS client? [n]:
The Service Processor (SP) provides remote management capabilities including console redirection, logging and power control. It also extends autosupport by sending additional system event alerts. Your autosupport settings are used for sending these alerts via email over the SP LAN interface.
Would you like to configure the SP LAN interface [y]:
Would you like to enable DHCP on the SP LAN interface [n]:
Please enter the IP address for the SP [default, entered earlier during installation]:
Please enter the netmask for the SP [255.255.0.0]:
Please enter the IP address for the SP gateway [default, entered earlier during installation]:
The mail host is required by your system to send SP alerts and local autosupport email.
Please enter the name or IP address of the mail host [10.0.*.*]:
You may use the autosupport options to configure alert destinations.
Now type 'reboot' for changes to take effect.
hostname> reboot
new-hostname>
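Since the cluster was disabled before running setup, remember to re-enable it once the filer is back up; a short sketch:

new-hostname> cf enable
new-hostname> cf status    (confirm the cluster pair is enabled and healthy)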
How to Configure iSCSI on AIX Using the iSCSI Initiator
This section walks through configuring iSCSI on AIX. We will use the AIX iSCSI software initiator and a NetApp storage device.
The following iSCSI filesets were installed by default.
# oslevel -s
5300-05-CSP-0000
# lslpp -l | grep -i iscsi
  devices.common.IBM.iscsi.rte  5.3.10.0  COMMITTED  Common iSCSI Files
  devices.iscsi.disk.rte        5.3.10.1  COMMITTED  iSCSI Disk Software
  devices.iscsi.tape.rte        5.3.0.30  COMMITTED  iSCSI Tape Software
  devices.iscsi_sw.rte          5.3.10.1  COMMITTED  iSCSI Software Device Driver
                                5.3.0.50  COMMITTED  IBM 1 Gigabit-TX iSCSI TOE
  devices.pci.14102203.rte      5.3.7.0   COMMITTED  IBM 1 Gigabit-TX iSCSI TOE
                                5.3.0.50  COMMITTED  1000 Base-SX PCI-X iSCSI TOE
  devices.pci.1410cf02.rte      5.3.7.0   COMMITTED  1000 Base-SX PCI-X iSCSI TOE
  devices.pci.1410d002.com      5.3.10.0  COMMITTED  Common PCI iSCSI TOE Adapter
                                5.3.10.0  COMMITTED  1000 Base-TX PCI-X iSCSI TOE
  devices.pci.1410d002.rte      5.3.7.0   COMMITTED  1000 Base-TX PCI-X iSCSI TOE
                                5.3.0.50  COMMITTED  IBM 1 Gigabit-SX iSCSI TOE
  devices.pci.1410e202.rte      5.3.7.0   COMMITTED  IBM 1 Gigabit-SX iSCSI TOE
  devices.pci.77102e01.diag     5.3.0.0   COMMITTED  1000 Base-TX PCI-X iSCSI TOE
  devices.pci.77102e01.rte      5.3.7.0   COMMITTED  PCI-X 1000 Base-TX iSCSI TOE
  devices.common.IBM.iscsi.rte  5.3.10.0  COMMITTED  Common iSCSI Files
  devices.iscsi_sw.rte          5.3.10.1  COMMITTED  iSCSI Software Device Driver
  devices.pci.1410d002.com      5.3.9.0   COMMITTED  Common PCI iSCSI TOE Adapter
  devices.pci.1410d002.rte      5.3.7.0   COMMITTED  1000 Base-TX PCI-X iSCSI TOE
The iSCSI software initiator enables AIX to access storage devices using TCP/IP on Ethernet network adapters. There are two virtual Ethernet adapters (VEAs) in this LPAR.
# lsdev -Cc adapter | grep ent
ent0 Available 0A-08 2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)
ent1 Available 0A-09 2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)
Two virtual SCSI (VSCSI) disks are used for rootvg. These
disks map to logical volumes on internal SAS drives in the VIOS.
# lsdev -Cc disk
hdisk0 Available 00-08-02     IBM MPIO DS5000 Array Disk
hdisk1 Available 04-08-00-3,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 04-08-00-4,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 04-08-00-5,0 16 Bit LVD SCSI Disk Drive
hdisk4 Available 04-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk5 Available 04-08-01-5,0 16 Bit LVD SCSI Disk Drive
hdisk6 Available 04-08-01-8,0 16 Bit LVD SCSI Disk Drive
# lsvg
rootvg
vgtest
# lspv
hdisk0 00012902b1e0df94 rootvg active
hdisk1 000129025cc60212 None
hdisk2 000129024ccb7c4f None
hdisk3 000129027f42893f None
hdisk4 00012902b29ed100 None
hdisk5 00012902511624a4 None
hdisk6 00012902cedeb3cc vgtest active
Before I can discover my new iSCSI LUN, I must first configure my AIX iSCSI initiator (the iscsi0 device) so that I can connect to the storage device. Essentially, I need to supply an iSCSI qualified name (iqn). This gives my AIX system a unique identity, which the NetApp uses to verify that I am the correct host to assign storage to. The iqn used in the following command was given to me by my storage administrator.
# chdev -l iscsi0 -a initiator_name=iqn.1986-03.com.ibm:aix1
# lsattr -El iscsi0
disc_filename  /etc/iscsi/targets       Configuration file        False
disc_policy    file                     Discovery Policy          True
initiator_name iqn.1986-03.com.ibm:aix1 iSCSI Initiator Name      True
isns_srvnames  auto                     iSNS Servers IP Addresses True
isns_srvports                           iSNS Servers Port Numbers True
max_targets    16                       Maximum Targets Allowed   True
num_cmd_elems  200
The next step is to update the /etc/iscsi/targets file on my
AIX system. This file must contain the hostname or IP address of the storage
device providing the iSCSI LUN. The iSCSI port, listening on the storage
server, is also entered. The default port is 3260. The last two entries
identify the iqn of the storage system and a password. It is not always
necessary to use a password but in this case, our storage administrator has set
one, so we must specify it when we attempt to connect to the device.
# cd /etc/iscsi/
# tail -1 targets
10.0.9.71 3260 iqn.1992-08.com.netapp:sn.1789745030 "netapp1234"
Add this line by editing the “targets” file.
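The general format of a targets entry is shown below, a sketch based on the entry above; the quoted password field is only required when the target enforces a CHAP secret:

# /etc/iscsi/targets entry format:
# <hostname-or-IP> <port> <target-iqn> ["CHAP secret"]
10.0.9.71 3260 iqn.1992-08.com.netapp:sn.1789745030 "netapp1234"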
In this example, the en0 interface is connected to our
“storage” network. The interface was configured according to the IBM
recommendations on iSCSI performance with AIX. Jumbo frames (MTU set to 9000)
and largesend are enabled on the interface, along with larger values for
tcp_sendspace and tcp_recvspace. We also disabled the Nagle algorithm and
enabled tcp_nodelay.
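A minimal sketch of that tuning, assuming the interface-specific network options (ISNO) are applied to en0; the values match those visible in the ifconfig output below, but verify them against IBM's current guidance for your environment:

# Set interface-specific network options on en0 (takes effect immediately)
ifconfig en0 tcp_sendspace 262144 tcp_recvspace 262144 tcp_nodelay 1 rfc1323 1
# Disable the Nagle algorithm system-wide, persistent across reboots
no -p -o tcp_nagle_limit=0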
# ifconfig en0
en0: flags=1e080863,4c0
        inet 10.2.6.11 netmask 0xffffff80 broadcast 10.2.6.127
        tcp_sendspace 262144 tcp_recvspace 262144 tcp_nodelay 1 rfc1323 1
mtu 9000 Maximum IP Packet Size for This Device True
# chdev -Pl ent0 -a jumbo_frames=yes
# chdev -Pl en0 -a mtu=9000
# no -a | grep nagle_limit
tcp_nagle_limit = 0
On the server, we enabled jumbo_frames, largesend and large_receive. The SEA (Shared Ethernet Adapter) device is e.g. ent11 and the backing device is e.g. ent9 (which is in fact an LACP aggregated link). The aggregated link device, e.g. ent9, consists of two physical 1 Gb Ethernet ports, ent0 and ent1.
$ chdev -dev ent11 -attr largesend=1
$ lsdev -dev ent0 -attr
$ lsdev -dev ent0 -attr | grep -i large
large_send yes Enable hardware TX TCP resegmentation True
$ lsdev -dev ent1 -attr | grep -i large
large_send yes Enable hardware TX TCP resegmentation True
$ entstat -all ent11
NETAPP STEPS
1) After logging in to the NetApp controller, expand Protocols, click on iSCSI, and check whether the iSCSI service is running. If it is not, click Start, then refresh and confirm the status shows:
iSCSI Service: iSCSI service is running
2) Expand “Storage”, click on “LUNs”, then click on “Initiator Groups” >> “Create” and fill in the following parameters: Name, Operating System, and the supported protocol type for this group (iSCSI or FC/FCoE). Then click the Initiators tab in the same window, click “Add”, add the iqn of the AIX host, which is “iqn.1986-03.com.ibm:aix1”, and click OK. After that, click “Create”. The iqn is now added to the NetApp; the next step is to assign a LUN (disk).
3) Expand “Storage”, click on “LUNs” >> “Create” (a window will appear) >> “Next”, and fill in the LUN parameters (name, operating system type, and size). If “Thin Provisioned” is selected, space is allocated as it is used; otherwise, the space is allocated now. Thin provisioning is recommended for increasing utilization where each LUN is unlikely to use all of its allocated space.
Click Next and fill in the volume parameters. The wizard automatically chooses the aggregate with the most free space for creating the flexible volume for the LUN, but you can choose a different aggregate, or select an existing volume/qtree in which to create your LUN (Create a new flexible volume in: Aggregate Name, Volume Name).
Move to the next window by clicking Next and select your initiator.
Initiators Mapping: you can connect your LUN to the initiator hosts by selecting from the initiator group and by optionally providing a LUN ID for the initiator group.
Click Next; a summary of the parameters entered in the previous steps is shown, followed by confirmation that LUN creation is complete. Click “Finish”.
Now we run the cfgmgr command on the AIX system to configure our new iSCSI disk.
# cfgmgr -vl iscsi0
----------------
attempting to configure device 'iscsi0'
Time: 0 LEDS: 0x25b0
invoking /usr/lib/methods/cfgiscsi -l iscsi0
Number of running methods: 1
----------------
Completed method for: iscsi0, Elapsed time = 1
return code = 0
****************** stdout ***********
hdisk7
****************** no stderr ***********
----------------
Time: 1 LEDS: 0x539
Number of running methods: 0
----------------
attempting to configure device 'hdisk7'
Time: 1 LEDS: 0x25f3
invoking /usr/lib/methods/cfgscsidisk -l hdisk7
Number of running methods: 1
----------------
Completed method for: hdisk7, Elapsed time = 5
return code = 0
****************** no stdout ***********
****************** no stderr ***********
----------------
Time: 6 LEDS: 0x539
Number of running methods: 0
----------------
calling savebase
return code = 0
****************** no stdout ***********
****************** no stderr ***********
Configuration time: 7 seconds
We now have a new iSCSI disk, hdisk7.
# lsdev -Cc disk | grep -i iscsi
hdisk7 Available Other iSCSI Disk Drive
# lspv
hdisk0 00012902b1e0df94 rootvg active
hdisk1 000129025cc60212 None
hdisk2 000129024ccb7c4f None
hdisk3 000129027f42893f None
hdisk4 00012902b29ed100 None
hdisk5 00012902511624a4 None
hdisk6 00012902cedeb3cc vgtest active
hdisk7 0001290271f18f38 None
# lspath
Enabled hdisk0 fscsi0
Enabled hdisk0 fscsi0
Enabled hdisk0 fscsi1
Enabled hdisk0 fscsi1
Enabled hdisk1 scsi0
Enabled hdisk2 scsi0
Enabled hdisk3 scsi0
Enabled hdisk4 scsi0
Enabled hdisk5 scsi1
Enabled hdisk6 scsi1
If there were any problems with the iSCSI configuration, either at the storage end or at the AIX end, I would see an error in the AIX error report after running cfgmgr, similar to the one shown below.
D3EF661B 0429100711 T H iscsi0 COMMUNICATIONS SUBSYSTEM FAILURE
This error could be the result of a misconfigured /etc/iscsi/targets file, e.g. incorrect format, wrong password, etc.
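To dig into such a failure, the full entry can be pulled from the AIX error log by its identifier; a sketch using the identifier from the report above:

# List recent errors, then show full detail for the iSCSI failure entry
errpt | head -5
errpt -a -j D3EF661B | more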
The default queue depth for the disks was 8. You may consider changing this value for better performance, although in our environment we found that changing to a larger value did not help; in fact, it had a negative impact.
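If you do want to experiment, the queue depth can be changed with chdev; a hedged sketch (the -P flag defers the change until the next reboot, which is useful when the disk is in use):

# Change the queue depth on the iSCSI disk; benchmark before adopting,
# since a larger value hurt performance in this environment
chdev -l hdisk7 -a queue_depth=8 -P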
# lsattr -El hdisk7
clr_q         no                                   Device CLEARS its Queue on error True
host_addr     10.0.9.71                            Hostname or IP Address           False
location                                           Location Label                   True
lun_id        0x0                                  Logical Unit Number ID           False
max_transfer  0x40000                              Maximum TRANSFER Size            True
port_num      0xcbc                                PORT Number                      False
pvid          0001290271f18f380000000000000000     Physical volume identifier       False
q_err         yes                                  Use QERR bit                     True
q_type        simple                               Queuing TYPE                     True
queue_depth   1                                    Queue DEPTH                      True
reassign_to   120                                  REASSIGN time out value          True
rw_timeout    30                                   READ/WRITE time out value        True
start_timeout 60                                   START unit time out value        True
target_name   iqn.1992-08.com.netapp:sn.1789745030 Target NAME                      False
# netstat -na | grep 3260 (check whether port 3260 is in use; it shows nothing if the service is disabled)
# mkvg -S -y iscsivg hdisk7
0516-1254 mkvg: Changing the PVID in the ODM.
iscsivg
$ smit lv
Add a Logical Volume
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
                                                    [Entry Fields]
  Logical volume NAME                               []
* VOLUME GROUP name                                 iscsivg
* Number of LOGICAL PARTITIONS                      [512]           #
  PHYSICAL VOLUME names                             [hdisk7]        +
  Logical volume TYPE                               [jfs2]          +
  POSITION on physical volume                       middle          +
  RANGE of physical volumes                         minimum         +
  MAXIMUM NUMBER of PHYSICAL VOLUMES                [2]             #
    to use for allocation
  Number of COPIES of each logical                  1               +
    partition
  Mirror Write Consistency?                         active          +
  Allocate each logical partition copy              yes             +
    on a SEPARATE physical volume?
  RELOCATE the logical volume during                yes             +
    reorganization?
  Logical volume LABEL                              []
  MAXIMUM NUMBER of LOGICAL PARTITIONS              [512]           #
  Enable BAD BLOCK relocation?                      yes             +
  SCHEDULING POLICY for writing/reading             parallel        +
    logical partition copies
  Enable WRITE VERIFY?                              no              +
  File containing ALLOCATION MAP                    []
  Stripe Size?                                      [Not Striped]   +
  Serialize IO?                                     no              +

F1=Help     F2=Refresh     F3=Cancel     F4=List
F5=Reset    F6=Command     F7=Edit       F8=Image
F9=Shell    F10=Exit       Enter=Do

lv00 created successfully
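For reference, the equivalent command line for the SMIT screen above is roughly the following sketch; since the crfs command that follows refers to the logical volume as iscsilv, you may prefer to name it explicitly rather than accept an auto-assigned name such as lv00:

# Create a jfs2-type logical volume of 512 logical partitions on hdisk7
# in volume group iscsivg, naming it iscsilv explicitly
mklv -t jfs2 -y iscsilv iscsivg 512 hdisk7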
# crfs -v jfs2 -d iscsilv -m /iscsifs -a logname=INLINE -A yes
# smit fs
Add an Enhanced Journaled File System
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
                                                    [Entry Fields]
  Volume group name                                 iscsivg
  SIZE of file system
          Unit Size                                 Megabytes       +
*         Number of units                           [600]           #
* MOUNT POINT                                       [/iscsifs]
  Mount AUTOMATICALLY at system restart?            yes             +
  PERMISSIONS                                       read/write      +
  Mount OPTIONS                                     []              +
  Block Size (bytes)                                4096            +
  Logical Volume for Log                                            +
  Inline Log size (MBytes)                          []              #
  Extended Attribute Format                         Version 1       +
  ENABLE Quota Management?                          no              +

F1=Help     F2=Refresh     F3=Cancel     F4=List
F5=Reset    F6=Command     F7=Edit       F8=Image
F9=Shell    F10=Exit       Enter=Do
COMMAND STATUS
Command: running    stdout: yes    stderr: no
Before command completion, additional instructions may appear below.
File system created successfully.
614176 kilobytes total disk space.
New File System size is 1228800
# varyonvg iscsivg
# mount /iscsifs
# df -g | grep iscsi
/dev/iscsilv 749.50 748.65 1% 4 1% /iscsifs
# lspv
hdisk0 00012902b1e0df94 rootvg active
hdisk1 000129025cc60212 None
hdisk2 000129024ccb7c4f None
hdisk3 000129027f42893f None
hdisk4 00012902b29ed100 None
hdisk5 00012902511624a4 None
hdisk6 00012902cedeb3cc vgtest active
hdisk7 0001290271f18f38 iscsivg
It was interesting to see that there was a single TCP session open between the AIX LPAR and the NetApp filer.
# netstat -na | grep 3260
tcp4 0 0 10.0.10.60.35718 10.0.9.71.3260 ESTABLISHED
We confirmed that largesend was in fact being used on the AIX LPAR by checking the output from the netstat command.
# netstat -p tcp | grep -i large
178509 large sends
1291861075 bytes sent using largesend
2751348 bytes is the biggest largesend
Based on the recommendations on the IBM website, we disabled auto-varyon on the volume group.
# chvg -an iscsivg
# lsvg iscsivg
VOLUME GROUP: iscsivg                VG IDENTIFIER: 00f6675800004c000000012f9ee01030
VG STATE: active                     PP SIZE: 512 megabyte(s)
VG PERMISSION: read/write            TOTAL PPs: 1499 (767488 megabytes)
MAX LVs: 256                         FREE PPs: 0 (0 megabytes)
LVs: 1                               USED PPs: 1499 (767488 megabytes)
OPEN LVs: 1                          QUORUM: 2 (Enabled)
TOTAL PVs: 1                         VG DESCRIPTORS: 2
STALE PVs: 0                         STALE PPs: 0
ACTIVE PVs: 1                        AUTO ON: no
MAX PPs per VG: 32768                MAX PVs: 1024
LTG size (Dynamic): 256 kilobyte(s)  AUTO SYNC: no
HOT SPARE: no                        BB POLICY: relocatable
PV RESTRICTION: none
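With auto-varyon disabled, the volume group must be varied on and the filesystem mounted after each boot, once the network and the iSCSI devices are available. A hedged sketch of a boot-time script; the path /etc/rc.iscsi_mount and its invocation (e.g. from /etc/inittab) are assumptions, not part of the original procedure:

#!/bin/ksh
# /etc/rc.iscsi_mount (hypothetical): bring the iSCSI volume group online
# after boot, once iscsi0 and the network are up
cfgmgr -l iscsi0     # rediscover the iSCSI disk(s)
varyonvg iscsivg     # activate the volume group
mount /iscsifs       # mount the JFS2 filesystem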
How to Configure iSCSI on Windows 7 & Windows Server Using the iSCSI Initiator
Connecting Windows 7 to an iSCSI SAN
How to configure Windows 7 to connect to an iSCSI SAN.
Introduction
This article assumes that you already have an iSCSI SAN up and running. Besides that, I assume that you (or your SAN admin) have already created an iSCSI share on that SAN and that the iSCSI volume has not yet been formatted with any operating system.
So, you have Windows 7 up and running, but what do you do if you want to connect it to a free iSCSI SAN such as OpenFiler, following the instructions above? You might even just want to connect it to an existing iSCSI SAN where your storage admin has already created a LUN for you.
Now that we have some
background, let us configure Windows 7 to connect to an iSCSI SAN…
Configuring iSCSI in Windows 7
To
get started, you need to run the iSCSI Initiator that is installed by default
in Windows 7. You can access it in a couple of different ways.
One
option is to access it through the Windows 7 Control Panel.
Once inside control panel, on the address bar navigation, click on All Control Panel Items, then Administrative Tools, as seen in Figure 1.
From
there, you need to run the iSCSI Initiator (also in Figure 1).
Figure 1: Running the iSCSI Initiator from Windows 7 Control Panel / Administrative Tools
The alternative to running the iSCSI Initiator through that path is to execute it by name. All you need to run is iscsicpl.exe. As you see in Figure 2, you can do this by going to Start and entering iscsicpl.exe in the search box.
Either way, you will arrive at the same destination: first the iSCSI warning that you see in Figure 3, and then our real destination, the iSCSI Initiator Properties, which you will see in Figure 4.
Assuming this is the first time you have attempted to run an iSCSI-related application, you should see the warning message in Figure 3. This is just saying that the iSCSI service has not been started, and it is asking whether you want to start it. Click Yes.
Figure 3: Starting the iSCSI Initiator Service
Finally,
we reach the iSCSI Initiator
Properties that we want to
configure, shown in Figure 4.
Figure 4: Connecting to an iSCSI server using the iSCSI Initator
Now, what you want to do is connect the iSCSI initiator to the iSCSI target. Enter the domain name or IP address of your iSCSI target. In our case, it is the circled 10.0.9.71 (iscsi-san).
Next,
in Figure 5, you will be asked which of the discovered targets you want to
connect to.
Figure 5: Connecting to the iSCSI Target
Once you select it and click Connect, your iSCSI SAN volume will be added to Windows, and you can click OK.
You should see the
connections that you requested in the iSCSI Initiator (as you see in Figure 6).
Figure 6: Successfully Connected to Window iSCSI SAN
Now, for reliability of the iSCSI volume, you should go into the Volumes and Devices tab and click Auto Configure. This will make the new iSCSI volume more “resilient”.
Figure 7: Connecting the iSCSI Device to the server
Then click OK to close the iSCSI Initiator Properties.
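For what it's worth, the same discovery and login can be scripted with the built-in iscsicli tool instead of the GUI; a sketch, substituting our portal address and the target iqn from above:

REM Add the target portal, list the discovered targets, then log in
iscsicli QAddTargetPortal 10.0.9.71
iscsicli ListTargets
iscsicli QLoginTarget iqn.1992-08.com.netapp:sn.1789745030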
NETAPP STEPS
1) After logging in to the NetApp controller, expand Protocols, click on iSCSI, and check whether the iSCSI service is running. If it is not, click Start, then refresh and confirm the status shows:
iSCSI Service: iSCSI service is running
2) Expand “Storage”, click on “LUNs”, then click on “Initiator Groups” >> “Create” and fill in the following parameters: Name, Operating System, and the supported protocol type for this group (iSCSI or FC/FCoE). Then click the Initiators tab in the same window, click “Add”, and add the iqn of the Windows machine, which is “iqn.1991-05.com.microsoft:shafqaatkhan-pc” (it can be checked by opening the iSCSI Initiator and its Configuration tab). Click OK, and after that click “Create”. The iqn is now added to the NetApp; the next step is to assign a LUN (disk).
3) Expand “Storage”, click on “LUNs” >> “Create” (a window will appear) >> “Next”, and fill in the LUN parameters (name, operating system type, and size). If “Thin Provisioned” is selected, space is allocated as it is used; otherwise, the space is allocated now. Thin provisioning is recommended for increasing utilization where each LUN is unlikely to use all of its allocated space.
Click Next and fill in the volume parameters. The wizard automatically chooses the aggregate with the most free space for creating the flexible volume for the LUN, but you can choose a different aggregate, or select an existing volume/qtree in which to create your LUN (Create a new flexible volume in: Aggregate Name, Volume Name).
Move to the next window by clicking Next and select your initiator.
Initiators Mapping: you can connect your LUN to the initiator hosts by selecting from the initiator group and by optionally providing a LUN ID for the initiator group.
Click Next; a summary of the parameters entered in the previous steps is shown, followed by confirmation that LUN creation is complete. Click “Finish”.
Back in Windows
Now, go into Computer Management and click on Disk Management.
Assuming this is the first time any iSCSI initiator (the Windows PC) has connected to this volume, you should see that a new disk has been found. You will be told that you must initialize the new disk before you can use it, as you see in Figure 8.
Figure 8
Click OK to initialize the newly found disk.
Now,
notice the new disk in Storage Manager (shown as Disk 1 but it could be a
different number on your system).
In
Figure 9, below you can see that the disk is now Online but it is Unallocated.
Figure 9: New Unallocated Disk
Now what you need to do is right-click on the unallocated disk and click New Simple Volume, as you can see in Figure 10, below.
Figure 10: Creating a new simple volume
This
brings up the New Simple Volume
Wizard.
In the New Simple Volume Wizard, you define how much space will be allocated to the volume and the drive letter the new volume will have. In the next window, specify the size of the new simple volume; I maxed it out with all the space the volume offered, 204765 MB, or about 200 GB. Next, assign a drive letter, and finally format it with NTFS. At this point, you will see the finalization screen, asking you to confirm what you are about to do. If you have configured everything correctly, click Finish.
You will see that the disk is being formatted, and then you should see a new Healthy (Primary Partition) that is formatted with the NTFS filesystem, as you see in Figure 15, below.
Figure 15: New Volume Created
Now
that the new volume is created, let us go to the new volume inside My Computer.
Figure 16: My Computer showing the new volume
With
that, we are done!
We successfully connected Windows 7 to an iSCSI SAN, in this case our NetApp filer. With all the benefits that a SAN provides, Windows 7 (and all the other Windows devices that can now connect to the SAN) will let you get a lot more done!
Issue: a second disk is assigned to the same system
When I switched on my second hard drive today, the following issue was displayed:
The disk is offline because it has a signature collision with another disk that is online
Running DiskPart as administrator confirmed that this was the issue.
Microsoft DiskPart version 6.1.7600
Copyright (C) 1999-2008 Microsoft Corporation.
On computer:

DISKPART> list disk

  Disk ###  Status    Size    Free     Dyn  Gpt
  --------  --------  ------  -------  ---  ---
  Disk 0    Online    232 GB  1024 KB
  Disk 1    Online    232 GB  1024 KB
  Disk 2    No Media  0 B     0 B
  Disk 3    Online    10 GB   0 B
  Disk 4    Offline   10 GB   0 B

DISKPART> select disk 4
Disk 4 is now the selected disk.
DISKPART> uniqueid disk
Disk ID: 11B58F8E
DISKPART> select disk 3
Disk 3 is now the selected disk.
DISKPART> uniqueid disk
Disk ID: 11B58F8E
As you can see, disk 3 and disk 4 both have the 11B58F8E signature, hence the collision. Assigning a new unique ID to the selected disk resolves it:
DISKPART> uniqueid disk ID=11B58F8D
Then I brought the disk back online using Disk Management, and everything went back to normal.
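Alternatively, the offline disk can be brought back online from DiskPart itself; a short sketch:

DISKPART> select disk 4
DISKPART> attributes disk clear readonly
DISKPART> online disk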