Vmware Workstation + Suse Linux 11 SP3 + db2 purescale V10.5 (Part 5)

This series documents in detail the installation of DB2 pureScale 10.5 on VMware Workstation. If you run into problems while following along, feel free to add me on WeChat (84077708) and I will do my best to help.
  

In the previous post, Vmware Workstation + Suse Linux 11 SP3 + db2 purescale V10.5 (Part 4), most of the operating-system configuration was already completed. This post covers the three remaining important pieces:

1. iSCSI server configuration

2. iSCSI client configuration

3. Passwordless SSH login configuration

     

1. iSCSI server configuration (node01)
As explained while creating the virtual machines and installing their operating systems, node01 acts as the iSCSI server, and both node01 and node02 act as iSCSI clients. The configuration is as follows (see the package note below):
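Note: the steps below assume the iscsitarget package (which provides the iSCSI Enterprise Target daemon ietd and /etc/ietd.conf) is already installed on node01. If it is not, it can be installed from the SLES media, for example:

node01:~ # zypper install iscsitarget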

The iSCSI server configuration file is /etc/ietd.conf (on the node01 virtual machine). Append the following to the end of the file:
  

Target iqn.2012-06.com.ibm:pureScaleDisk01
        Lun 0 Path=/dev/sda3,Type=fileio,ScsiId=3456789012,ScsiSN=456789012
Target iqn.2012-06.com.ibm:pureScaleDisk02
        Lun 1 Path=/dev/sda4,Type=fileio,ScsiId=1234567890,ScsiSN=345678901
  
The ScsiId and ScsiSN parameters above were added deliberately: every iSCSI disk must carry a WWID/WWN, and the DB2 pureScale installer checks for it during installation. If it cannot find one, it reports the disk as invalid.
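After saving /etc/ietd.conf, restart the target service so that the two LUNs are actually exported. The iSCSI Enterprise Target exposes its state under /proc/net/iet, which makes for a quick sanity check (rciscsitarget is the standard SLES shorthand for /etc/init.d/iscsitarget):

node01:~ # rciscsitarget restart
node01:~ # cat /proc/net/iet/volume

Both target IQNs, each with its LUN and backing path, should appear in the output.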
  

2. iSCSI client configuration (node01, node02)
The configuration file is /etc/init.d/iscsiclient (this file does not exist by default and has to be created by hand). Its contents are as follows:
#! /bin/sh

### BEGIN INIT INFO
# Provides: iscsiclsetup
# Required-Start: $network $syslog $remote_fs smartd
# Required-Stop:
# Default-Start: 3 5
# Default-Stop: 0 1 2 6
# Description: ISCSI client setup
### END INIT INFO

IPLIST="192.168.142.101"

# Shell functions sourced from /etc/rc.status:
#      rc_check         check and set local and overall rc status
#      rc_status        check and set local and overall rc status
#      rc_status -v     ditto but be verbose in local rc status
#      rc_status -v -r  ditto and clear the local rc status
#      rc_failed        set local and overall rc status to failed
#      rc_reset         clear local rc status (overall remains)
#      rc_exit          exit appropriate to overall rc status
. /etc/rc.status

# catch mis-use right here at the start
if [ "$1" != "start" -a "$1" != "stop" -a "$1" != "status" -a "$1" != "restart" -a "$1" != "rescan" -a "$1" != "mountall" ]; then
    echo "Usage: $0 {start|stop|status|restart|rescan|mountall}"
    exit 1
fi

# First reset status of this service
rc_reset

# Mount a single GPFS filesystem via mmmount
iscsimount() {
    rc_reset
    echo -n "Mounting $1: "
    /usr/lpp/mmfs/bin/mmmount $1
    rc_status -v
    return $?
}

# Unmount a single GPFS filesystem via mmumount
iscsiumount() {
    rc_reset
    echo -n "Umounting $1: "
    /usr/lpp/mmfs/bin/mmumount $1
    rc_status -v
    return $?
}

# Check whether a given mountpoint is currently mounted
iscsicheck() {
    rc_reset
    echo -n "Verify if $1 is mounted: "
    mount | grep "on $1\b" > /dev/null
    rc_status -v
    return $?
}

iscsimountall() {
    # Find all fstab lines with gpfs as fstype
    for mountpoint in `grep "gpfs" /etc/fstab | awk '{print $2}'`
    do
        # Only try to mount filesystems that are not currently mounted
        if ! mount | grep "on $mountpoint\b" > /dev/null
        then
            iscsimount $mountpoint || overallstatus=$?
        fi
    done
    return $overallstatus
}

iscsiumountall() {
    # Find all fstab lines with gpfs as fstype
    for mountpoint in `grep "gpfs" /etc/fstab | awk '{print $2}'`
    do
        # Only try to umount filesystems that are currently mounted
        if mount | grep "on $mountpoint\b" > /dev/null
        then
            iscsiumount $mountpoint || overallstatus=$?
        fi
    done
    return $overallstatus
}

iscsicheckall() {
    # Find all fstab lines with gpfs as fstype
    for mountpoint in `grep "gpfs" /etc/fstab | awk '{print $2}'`
    do
        iscsicheck $mountpoint || overallstatus=$?
    done
    return $overallstatus
}

case "$1" in
    start)
        modprobe -q iscsi_tcp
        iscsid
        for IP in $IPLIST
        do
            ping -q $IP -c 1 -W 1 > /dev/null
            RETURN_ON_PING=$?
            if [ ${RETURN_ON_PING} -eq 0 ]; then
                ISCSI_VALUES=`iscsiadm -m discovery -t st -p $IP \
                              | awk '{print $2}' | uniq`
                if [ "${ISCSI_VALUES}" != "" ]; then
                    for target in $ISCSI_VALUES
                    do
                        echo "Logging into $target on $IP"
                        iscsiadm --mode node --targetname $target \
                                 --portal $IP:3260 --login
                    done
                else
                    echo "No iSCSI targets were discovered"
                fi
            else
                echo "iSCSI target is not available"
            fi
        done
        if [ ${RETURN_ON_PING} -eq 0 ]; then
            if [ "${ISCSI_VALUES}" != "" ]; then
                /usr/lpp/mmfs/bin/mmstartup -a &> /dev/null
                iscsimountall
            fi
        fi
        ;;
    stop)
        for IP in $IPLIST
        do
            ping -q $IP -c 1 -W 1 > /dev/null
            RETURN_ON_PING=$?
            if [ ${RETURN_ON_PING} -eq 0 ]; then
                ISCSI_VALUES=`iscsiadm -m discovery -t st --portal $IP \
                              | awk '{print $2}' | uniq`
                if [ "${ISCSI_VALUES}" != "" ]; then
                    for target in $ISCSI_VALUES
                    do
                        echo "Logging out for $target from $IP"
                        iscsiadm -m node --targetname $target \
                                 --portal $IP:3260 --logout
                    done
                else
                    echo "No iSCSI targets were discovered"
                fi
            fi
        done
        if [ ${RETURN_ON_PING} -eq 0 ]; then
            if [ "${ISCSI_VALUES}" != "" ]; then
                iscsiumountall
            fi
        fi
        ;;
    status)
        echo "Running sessions"
        iscsiadm -m session -P 1
        iscsicheckall
        rc_status -v
        ;;
    rescan)
        echo "Perform a SCSI rescan on a session"
        iscsiadm -m session -r 1 --rescan
        rc_status -v
        ;;
    mountall)
        iscsimountall
        rc_status -v
        ;;
    restart)
        ## Stop the service and regardless of whether it was
        ## running or not, start it again.
        $0 stop
        $0 start
        ;;
    *)
        echo "Usage: $0 {start|stop|status|restart|rescan|mountall}"
        exit 1
        ;;
esac

rc_status -r

rc_exit
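Two details are easy to miss before enabling this script: it depends on iscsid and iscsiadm from the open-iscsi package, and chkconfig/insserv will only register it if it is executable. Assuming open-iscsi still needs to be installed:

node01:~ # zypper install open-iscsi
node01:~ # chmod +x /etc/init.d/iscsiclient
node02:~ # zypper install open-iscsi
node02:~ # chmod +x /etc/init.d/iscsiclient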
  
This completes the iSCSI server and client configuration. Next, run the following commands to make the iscsitarget and iscsiclient services start automatically at boot:
node01:/etc/init.d # chkconfig -a iscsitarget
iscsitarget               0:off  1:off  2:off  3:on   4:off  5:on   6:off
node01:/etc/init.d # chkconfig -a iscsiclient
iscsiclient               0:off  1:off  2:off  3:on   4:off  5:on   6:off
  

  
node02:~ # chkconfig -a iscsiclient
iscsiclient               0:off  1:off  2:off  3:on   4:off  5:on   6:off
  

Once all of the above is in place, reboot both servers so that node01 and node02 can correctly recognize the shared disks we configured.
  

Remember: because node01 is the iSCSI server, shut down node02 first and node01 last; when powering back on, start node01 first and then node02.
  

After the reboot, two extra disks are visible on both node01 and node02, as shown below.
The disk information on node01:
  node01:~ # fdisk -l
  

  Disk /dev/sda: 59.1 GB, 59055800320 bytes
  255 heads, 63 sectors/track, 7179 cylinders, total 115343360 sectors
  Units = sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  Disk identifier: 0x000cf2bf
  

Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048     8386559     4192256   82  Linux swap / Solaris
/dev/sda2   *     8386560    52420607    22017024   83  Linux
/dev/sda3        52420608    73383935    10481664   83  Linux
/dev/sda4        73383936   115343359    20979712   83  Linux
  

  Disk /dev/sdb: 21.5 GB, 21483225088 bytes
  64 heads, 32 sectors/track, 20488 cylinders, total 41959424 sectors
  Units = sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  Disk identifier: 0x00000000
  

  Disk /dev/sdb doesn't contain a valid partition table
  

  Disk /dev/sdc: 10.7 GB, 10733223936 bytes
  64 heads, 32 sectors/track, 10236 cylinders, total 20963328 sectors
  Units = sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  Disk identifier: 0x00000000
  

  Disk /dev/sdc doesn't contain a valid partition table
  

The disk information on node02:
  

  node02:~ # fdisk -l
  

  Disk /dev/sda: 26.8 GB, 26843545600 bytes
  255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
  Units = sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  Disk identifier: 0x0004fe76
  

Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048     8386559     4192256   82  Linux swap / Solaris
/dev/sda2   *     8386560    52428799    22021120   83  Linux
  

  Disk /dev/sdb: 21.5 GB, 21483225088 bytes
  64 heads, 32 sectors/track, 20488 cylinders, total 41959424 sectors
  Units = sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  Disk identifier: 0x00000000
  

  Disk /dev/sdb doesn't contain a valid partition table
  

  Disk /dev/sdc: 10.7 GB, 10733223936 bytes
  64 heads, 32 sectors/track, 10236 cylinders, total 20963328 sectors
  Units = sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  Disk identifier: 0x00000000
  

  Disk /dev/sdc doesn't contain a valid partition table
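To confirm which new device belongs to which target, and that each disk exposes the WWID the DB2 pureScale installer will look for, the udev symlinks are the easiest place to check: the by-path names embed the portal IP and target IQN, while the by-id names carry the SCSI identifiers we set in ietd.conf. open-iscsi can show the same mapping per session. For example (the same commands apply on node02):

node01:~ # ls -l /dev/disk/by-path/ | grep iscsi
node01:~ # ls -l /dev/disk/by-id/ | grep -E "sdb|sdc"
node01:~ # iscsiadm -m session -P 3 | grep -E "Target|Attached scsi disk"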
  
3. Passwordless SSH login configuration
Passwordless login for the root user (run ssh-keygen on both nodes, pressing Enter to accept the default key location and an empty passphrase):
  node01:~ # ssh-keygen -t dsa
  node01:~ # ssh-keygen -t rsa
  

  node02:~ # ssh-keygen -t dsa
  node02:~ # ssh-keygen -t rsa
  

  node01:~ # cd .ssh
  
node01:~/.ssh # cat id_dsa.pub >> authorized_keys
node01:~/.ssh # cat id_rsa.pub >> authorized_keys
node01:~/.ssh # scp authorized_keys root@node02:~/.ssh/


node02:~ # cd .ssh

node02:~/.ssh # cat id_dsa.pub >> authorized_keys
node02:~/.ssh # cat id_rsa.pub >> authorized_keys
node02:~/.ssh # scp authorized_keys root@node01:~/.ssh/


node01:~ # ssh node01 date
node01:~ # ssh node02 date
node01:~ # ssh node01.purescale.ibm.local date
node01:~ # ssh node02.purescale.ibm.local date



node02:~ # ssh node01 date
node02:~ # ssh node02 date
node02:~ # ssh node01.purescale.ibm.local date
node02:~ # ssh node02.purescale.ibm.local date
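The first time each of these commands runs, ssh asks you to confirm the remote host key; answer yes so that the entry is recorded in ~/.ssh/known_hosts for both the short and the fully qualified host names. To seed known_hosts non-interactively instead, ssh-keyscan (part of OpenSSH) can be used, for example (run the equivalent on node02 as well):

node01:~ # ssh-keyscan node01 node02 node01.purescale.ibm.local node02.purescale.ibm.local >> ~/.ssh/known_hosts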
  

  
Passwordless login for the db2sdin1 user (the same procedure as for root; afterwards, every ssh ... date command below must print the date without any password or host-key prompt):
db2sdin1@node01:~> ssh-keygen -t dsa
db2sdin1@node01:~> ssh-keygen -t rsa


db2sdin1@node02:~> ssh-keygen -t dsa
db2sdin1@node02:~> ssh-keygen -t rsa


db2sdin1@node01:~> cd .ssh
db2sdin1@node01:~/.ssh> cat id_dsa.pub >> authorized_keys
db2sdin1@node01:~/.ssh> cat id_rsa.pub >> authorized_keys
db2sdin1@node01:~/.ssh> scp authorized_keys db2sdin1@node02:~/.ssh/


db2sdin1@node02:~> cd .ssh

db2sdin1@node02:~/.ssh> cat id_dsa.pub >> authorized_keys
db2sdin1@node02:~/.ssh> cat id_rsa.pub >> authorized_keys
db2sdin1@node02:~/.ssh> scp authorized_keys db2sdin1@node01:~/.ssh/


db2sdin1@node01:~> ssh node01 date
db2sdin1@node01:~> ssh node02 date
db2sdin1@node01:~> ssh node01.purescale.ibm.local date
db2sdin1@node01:~> ssh node02.purescale.ibm.local date



db2sdin1@node02:~> ssh node01 date
db2sdin1@node02:~> ssh node02 date
db2sdin1@node02:~> ssh node01.purescale.ibm.local date
db2sdin1@node02:~> ssh node02.purescale.ibm.local date
     
At this point, the SUSE Linux 11 SP3 operating-system configuration is essentially complete. DB2 pureScale still cannot be installed yet, however: a few finer-grained items must be configured before installation, or the installation will fail. Those are covered in Vmware Workstation + Suse Linux 11 SP3 + db2 purescale V10.5 (Part 6).
  