[Experience Sharing] VMware Workstation + SUSE Linux 11 SP3 + DB2 pureScale V10.5 (Part 5)

This series documents, in detail, the installation of DB2 pureScale 10.5 on VMware Workstation. If you run into problems while following along with these posts, feel free to add me on WeChat (84077708) and I will do my best to help.
  

In the previous post, VMware Workstation + SUSE Linux 11 SP3 + DB2 pureScale V10.5 (Part 4), most of the operating-system level configuration was completed. This post covers the three remaining pieces:

1. iSCSI server configuration

2. iSCSI client configuration

3. Passwordless SSH login configuration

1. iSCSI Server Configuration (node01)
As already noted while creating the virtual machines and installing their operating systems, node01 acts as the iSCSI server, and both node01 and node02 act as iSCSI clients. The configuration is as follows:

The iSCSI server configuration file is /etc/ietd.conf (on the node01 virtual machine). Append the following to the end of the file:
  

Target iqn.2012-06.com.ibm:pureScaleDisk01
    Lun 0 Path=/dev/sda3,Type=fileio,ScsiId=3456789012,ScsiSN=456789012
Target iqn.2012-06.com.ibm:pureScaleDisk02
    Lun 1 Path=/dev/sda4,Type=fileio,ScsiId=1234567890,ScsiSN=345678901
  
Note the ScsiId and ScsiSN parameters added to each LUN. Every iSCSI disk must expose a WWID/WWN: the DB2 pureScale installer checks each disk for one during installation and reports the disk as invalid if none is found.
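
A quick way to sanity-check both ends of this is sketched below. It assumes the iscsitarget (IET) package used in this series, whose exported volumes appear under /proc/net/iet, and the udev scsi_id helper path on SLES 11; both may differ on other distributions.

  # On node01, restart the target service so the new ietd.conf entries
  # take effect, then confirm that both LUNs are exported.
  service iscsitarget restart
  cat /proc/net/iet/volume

  # Later, on a client that has logged in, one way to see the IDs the
  # disk actually reports (and the by-id symlinks udev creates from them):
  /lib/udev/scsi_id --whitelisted --device=/dev/sdb
  ls -l /dev/disk/by-id/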
  

2. iSCSI Client Configuration (node01, node02)
The configuration file is /etc/init.d/iscsiclient (it does not exist by default and must be created manually), with the following content:
#!/bin/sh
### BEGIN INIT INFO
# Provides:          iscsiclsetup
# Required-Start:    $network $syslog $remote_fs smartd
# Required-Stop:
# Default-Start:     3 5
# Default-Stop:      0 1 2 6
# Description:       ISCSI client setup
### END INIT INFO

IPLIST="192.168.142.101"

# Shell functions sourced from /etc/rc.status:
#      rc_check         check and set local and overall rc status
#      rc_status        check and set local and overall rc status
#      rc_status -v     ditto but be verbose in local rc status
#      rc_status -v -r  ditto and clear the local rc status
#      rc_failed        set local and overall rc status to failed
#      rc_reset         clear local rc status (overall remains)
#      rc_exit          exit appropriate to overall rc status
. /etc/rc.status

# Catch mis-use right here at the start
if [ "$1" != "start" -a "$1" != "stop" -a "$1" != "status" -a \
     "$1" != "restart" -a "$1" != "rescan" -a "$1" != "mountall" ]; then
    echo "Usage: $0 {start|stop|status|restart|rescan|mountall}"
    exit 1
fi

# First reset status of this service
rc_reset

# Mount a GPFS filesystem via the GPFS mmmount utility
iscsimount() {
    rc_reset
    echo -n "Mounting $1: "
    /usr/lpp/mmfs/bin/mmmount $1
    rc_status -v
    return $?
}

iscsiumount() {
    rc_reset
    echo -n "Unmounting $1: "
    /usr/lpp/mmfs/bin/mmumount $1
    rc_status -v
    return $?
}

iscsicheck() {
    rc_reset
    echo -n "Verify if $1 is mounted: "
    mount | grep "on $1\b" > /dev/null
    rc_status -v
    return $?
}

iscsimountall() {
    overallstatus=0
    # Find all fstab lines with gpfs as fstype
    for mountpoint in `grep "gpfs" /etc/fstab | awk '{print $2}'`
    do
        # Only try to mount filesystems that are not currently mounted
        if ! mount | grep "on $mountpoint\b" > /dev/null
        then
            iscsimount $mountpoint || overallstatus=$?
        fi
    done
    return $overallstatus
}

iscsiumountall() {
    overallstatus=0
    # Find all fstab lines with gpfs as fstype
    for mountpoint in `grep "gpfs" /etc/fstab | awk '{print $2}'`
    do
        # Only try to umount filesystems that are currently mounted
        if mount | grep "on $mountpoint\b" > /dev/null
        then
            iscsiumount $mountpoint || overallstatus=$?
        fi
    done
    return $overallstatus
}

iscsicheckall() {
    overallstatus=0
    # Find all fstab lines with gpfs as fstype
    for mountpoint in `grep "gpfs" /etc/fstab | awk '{print $2}'`
    do
        iscsicheck $mountpoint || overallstatus=$?
    done
    return $overallstatus
}

case "$1" in
    start)
        modprobe -q iscsi_tcp
        iscsid
        for IP in $IPLIST
        do
            ping -q $IP -c 1 -W 1 > /dev/null
            RETURN_ON_PING=$?
            if [ ${RETURN_ON_PING} -eq 0 ]; then
                ISCSI_VALUES=`iscsiadm -m discovery -t st -p $IP \
                    | awk '{print $2}' | uniq`
                if [ "${ISCSI_VALUES}" != "" ]; then
                    for target in $ISCSI_VALUES
                    do
                        echo "Logging into $target on $IP"
                        iscsiadm --mode node --targetname $target \
                            --portal $IP:3260 --login
                    done
                else
                    echo "No iSCSI targets were discovered"
                fi
            else
                echo "iSCSI target host is not available"
            fi
        done
        if [ ${RETURN_ON_PING} -eq 0 ]; then
            if [ "${ISCSI_VALUES}" != "" ]; then
                /usr/lpp/mmfs/bin/mmstartup -a &> /dev/null
                iscsimountall
            fi
        fi
        ;;
    stop)
        for IP in $IPLIST
        do
            ping -q $IP -c 1 -W 1 > /dev/null
            RETURN_ON_PING=$?
            if [ ${RETURN_ON_PING} -eq 0 ]; then
                ISCSI_VALUES=`iscsiadm -m discovery -t st --portal $IP \
                    | awk '{print $2}' | uniq`
                if [ "${ISCSI_VALUES}" != "" ]; then
                    for target in $ISCSI_VALUES
                    do
                        echo "Logging out of $target on $IP"
                        iscsiadm -m node --targetname $target \
                            --portal $IP:3260 --logout
                    done
                else
                    echo "No iSCSI targets were discovered"
                fi
            fi
        done
        if [ ${RETURN_ON_PING} -eq 0 ]; then
            if [ "${ISCSI_VALUES}" != "" ]; then
                iscsiumountall
            fi
        fi
        ;;
    status)
        echo "Running sessions"
        iscsiadm -m session -P 1
        iscsicheckall
        rc_status -v
        ;;
    rescan)
        echo "Perform a SCSI rescan on a session"
        iscsiadm -m session -r 1 --rescan
        rc_status -v
        ;;
    mountall)
        iscsimountall
        rc_status -v
        ;;
    restart)
        ## Stop the service and regardless of whether it was
        ## running or not, start it again.
        $0 stop
        $0 start
        ;;
    *)
        echo "Usage: $0 {start|stop|status|restart|rescan|mountall}"
        exit 1
esac

rc_status -r

rc_exit
  
This completes the iSCSI server and client configuration. Next, run the following commands to set the iscsitarget and iscsiclient services to start automatically at boot:
  node01:/etc/init.d # chkconfig -a iscsitarget
  iscsitarget               0:off  1:off  2:off  3:on   4:off  5:on   6:off
  node01:/etc/init.d # chkconfig -a iscsiclient
  iscsiclient               0:off  1:off  2:off  3:on   4:off  5:on   6:off
  

  
  node02:~ # chkconfig -a iscsiclient
  iscsiclient               0:off  1:off  2:off  3:on   4:off  5:on   6:off
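
Before relying on the boot-time autostart, it may be worth exercising the new script by hand once on each node. A sketch follows; at this stage the GPFS mount steps are no-ops, since /etc/fstab has no gpfs entries yet:

  # Start the client service manually, then verify that the targets were
  # discovered and logged in; -P 1 matches what the script's "status"
  # branch prints.
  /etc/init.d/iscsiclient start
  iscsiadm -m session
  iscsiadm -m session -P 1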
  

With all of the above in place, reboot the servers so that both node01 and node02 can detect the shared disks we configured.
  

Remember: since node01 is the iSCSI server, shut down node02 first and node01 last; when powering back on, start node01 first, then node02. A minimal way to script the ordered shutdown from node01 is shown below.
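
This sketch assumes the passwordless root SSH configured in section 3 is already in place:

  # Shut the client node down first, then the iSCSI server itself.
  ssh node02 '/etc/init.d/iscsiclient stop; shutdown -h now'
  /etc/init.d/iscsiclient stop
  shutdown -h now
  # When powering back on, boot node01 and wait for iscsitarget to come up
  # before booting node02.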
  

After the reboot, two additional disks are visible on both node01 and node02, as shown below:
Disk information on node01:
  node01:~ # fdisk -l
  

  Disk /dev/sda: 59.1 GB, 59055800320 bytes
  255 heads, 63 sectors/track, 7179 cylinders, total 115343360 sectors
  Units = sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  Disk identifier: 0x000cf2bf
  

  Device Boot      Start         End      Blocks   Id  System
  /dev/sda1            2048     8386559     4192256   82  Linux swap / Solaris
  /dev/sda2   *     8386560    52420607    22017024   83  Linux
  /dev/sda3        52420608    73383935    10481664   83  Linux
  /dev/sda4        73383936   115343359    20979712   83  Linux
  

  Disk /dev/sdb: 21.5 GB, 21483225088 bytes
  64 heads, 32 sectors/track, 20488 cylinders, total 41959424 sectors
  Units = sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  Disk identifier: 0x00000000
  

  Disk /dev/sdb doesn't contain a valid partition table
  

  Disk /dev/sdc: 10.7 GB, 10733223936 bytes
  64 heads, 32 sectors/track, 10236 cylinders, total 20963328 sectors
  Units = sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  Disk identifier: 0x00000000
  

  Disk /dev/sdc doesn't contain a valid partition table
  

Disk information on node02:
  

  node02:~ # fdisk -l
  

  Disk /dev/sda: 26.8 GB, 26843545600 bytes
  255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
  Units = sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  Disk identifier: 0x0004fe76
  

  Device Boot      Start         End      Blocks   Id  System
  /dev/sda1            2048     8386559     4192256   82  Linux swap / Solaris
  /dev/sda2   *     8386560    52428799    22021120   83  Linux
  

  Disk /dev/sdb: 21.5 GB, 21483225088 bytes
  64 heads, 32 sectors/track, 20488 cylinders, total 41959424 sectors
  Units = sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  Disk identifier: 0x00000000
  

  Disk /dev/sdb doesn't contain a valid partition table
  

  Disk /dev/sdc: 10.7 GB, 10733223936 bytes
  64 heads, 32 sectors/track, 10236 cylinders, total 20963328 sectors
  Units = sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  Disk identifier: 0x00000000
  

  Disk /dev/sdc doesn't contain a valid partition table
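
To confirm that sdb and sdc really are the two iSCSI LUNs, and which target each belongs to, the session print mode of iscsiadm can be used on either node; a quick check:

  # -P 3 lists each target together with its attached SCSI disks, e.g.
  # "Attached scsi disk sdb" should appear under pureScaleDisk01.
  iscsiadm -m session -P 3 | grep -E 'Target|Attached scsi disk'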
  
3. Passwordless SSH Login Configuration
Passwordless login for root:
  node01:~ # ssh-keygen -t dsa
  node01:~ # ssh-keygen -t rsa
  

  node02:~ # ssh-keygen -t dsa
  node02:~ # ssh-keygen -t rsa
  

  node01:~ # cd .ssh
  
node01:~/.ssh # cat id_dsa.pub >> authorized_keys
node01:~/.ssh # cat id_rsa.pub >> authorized_keys
node01:~/.ssh # scp authorized_keys  root@node02:~/.ssh/


node02:~ # cd .ssh

node02:~/.ssh # cat id_dsa.pub >> authorized_keys
node02:~/.ssh # cat id_rsa.pub >> authorized_keys
node02:~/.ssh # scp authorized_keys  root@node01:~/.ssh/


node01:~ # ssh node01 date
node01:~ # ssh node02 date
node01:~ # ssh node01.purescale.ibm.local date
node01:~ # ssh node02.purescale.ibm.local date



node02:~ # ssh node01 date
node02:~ # ssh node02 date
node02:~ # ssh node01.purescale.ibm.local date
node02:~ # ssh node02.purescale.ibm.local date
  

  
Passwordless login for db2sdin1:
db2sdin1@node01:~> ssh-keygen -t dsa
db2sdin1@node01:~> ssh-keygen -t rsa


db2sdin1@node02:~> ssh-keygen -t dsa
db2sdin1@node02:~> ssh-keygen -t rsa


db2sdin1@node01:~> cd .ssh
db2sdin1@node01:~/.ssh> cat id_dsa.pub >> authorized_keys
db2sdin1@node01:~/.ssh> cat id_rsa.pub >> authorized_keys
db2sdin1@node01:~/.ssh> scp authorized_keys  db2sdin1@node02:~/.ssh/


db2sdin1@node02:~> cd .ssh

db2sdin1@node02:~/.ssh> cat id_dsa.pub >> authorized_keys
db2sdin1@node02:~/.ssh> cat id_rsa.pub >> authorized_keys
db2sdin1@node02:~/.ssh> scp authorized_keys  db2sdin1@node01:~/.ssh/


db2sdin1@node01:~> ssh node01 date
db2sdin1@node01:~> ssh node02 date
db2sdin1@node01:~> ssh node01.purescale.ibm.local date
db2sdin1@node01:~> ssh node02.purescale.ibm.local date



db2sdin1@node02:~> ssh node01 date
db2sdin1@node02:~> ssh node02 date
db2sdin1@node02:~> ssh node01.purescale.ibm.local date
db2sdin1@node02:~> ssh node02.purescale.ibm.local date
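
The first interactive run of each ssh command above also records the host keys in ~/.ssh/known_hosts, which matters because the pureScale installer requires ssh to work without any prompting. A small helper loop (hypothetical; run as root and again as db2sdin1 on each node) makes the verification repeatable; BatchMode forces ssh to fail instead of prompting, so any missing key or host entry is caught immediately:

  #!/bin/sh
  # Verify passwordless ssh to every short name and FQDN used by the cluster.
  for h in node01 node02 node01.purescale.ibm.local node02.purescale.ibm.local; do
      ssh -o BatchMode=yes "$h" date || echo "passwordless ssh to $h FAILED"
  done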
       
At this point the SUSE Linux 11 SP3 operating-system configuration is essentially complete. DB2 pureScale still cannot be installed yet, however: a few finer details must be configured first, or the installation will fail. Those are covered in VMware Workstation + SUSE Linux 11 SP3 + DB2 pureScale V10.5 (Part 6).
  
