[Experience Sharing] Installing and Configuring Openfiler with DRBD and Heartbeat (HA) -- Part 1

Posted 2015-11-21 10:08:30
  Introduction
  Openfiler is a high-performance operating system tailored for use as a SAN/NAS appliance. This configuration enables two Openfiler appliances to work in an Active/Passive high-availability scenario.
  
  

Requirements

Hardware


  • 2 x boxes that meet Openfiler's minimum hardware specifications.
  • 2 x ethernet interfaces in each box
  • Openfiler 2.3 installation media.
  • Both boxes should have drives of the same size to avoid any replication inconsistencies.
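The equal-size requirement can be checked up front. Below is a minimal sketch; the byte counts are placeholders — on the real boxes you would obtain them with `blockdev --getsize64 /dev/sda` run on each node:

```shell
# Sketch: verify both nodes report the same data-disk size before setting
# up replication. The values below are placeholders; on real hardware,
# fill them in from `blockdev --getsize64 /dev/sda` run on each box.
size_filer01=500107862016
size_filer02=500107862016
if [ "$size_filer01" -eq "$size_filer02" ]; then
    echo "drive sizes match"
else
    echo "WARNING: drive sizes differ - replication may be inconsistent" >&2
fi
```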
  

Software
  Install Openfiler 2.3 on both boxes utilizing a disk setup such as the following:


  • 3 GB root (“/”) partition
  • 2 GB “swap” partition
  • 512 MB “/meta” partition (used for DRBD0)
  • Data partition configured as an unmounted LVM (used for DRBD1)
  

Configuration

Network
  Each Openfiler appliance will have two NICs: one for communicating with the LAN, the other for communicating with the other SAN node (via a direct cable). The first is used for administration and for communicating directly with each node; the second is used for replication.
  A third “virtual” IP address is managed by the Heartbeat service and is the address that computers on the LAN will use.
  The addresses used are listed below:
  filer01


  • LAN Interface (eth0) 192.168.1.18
  • Replication Interface (eth1) 10.188.188.1
  filer02


  • LAN Interface (eth0) 192.168.1.19
  • Replication Interface (eth1) 10.188.188.2
  HA NAS/SAN Address (eth0) 192.168.1.17


  • This is configured in the cluster.xml file (do not attempt to configure it anywhere else)
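As an illustration, the interface configuration for filer01 might look like the following. This is a sketch written to a scratch directory; on a real node these files live in /etc/sysconfig/network-scripts/ (the RHEL-style convention Openfiler uses), and the /24 netmasks are assumptions:

```shell
# Sketch: ifcfg files for filer01's two interfaces, written to a scratch
# directory (on a real node: /etc/sysconfig/network-scripts/).
# The 255.255.255.0 netmasks are assumptions.
netdir=$(mktemp -d)

cat > "$netdir/ifcfg-eth0" <<'EOF'
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.18
NETMASK=255.255.255.0
ONBOOT=yes
EOF

cat > "$netdir/ifcfg-eth1" <<'EOF'
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.188.188.1
NETMASK=255.255.255.0
ONBOOT=yes
EOF

grep -H '^IPADDR' "$netdir"/ifcfg-eth*
```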
  

Hostname Setup
  For both nodes to be able to recognize each other by name, configure the hosts file on each computer.

Modify /etc/hosts (on filer01):

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 filer01 localhost.localdomain localhost
10.188.188.2 filer02
  Modify /etc/hosts (on filer02):

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 filer02 localhost.localdomain localhost
10.188.188.1 filer01
  

SSH Shared keys
  To allow the two Openfiler appliances to talk to each other without having to use a password, use SSH shared keys.
  On filer01:
root@filer01 ~# ssh-keygen -t dsa
  Hit enter at the prompts (don't set a password on the key).
  On filer02:
root@filer02 ~# ssh-keygen -t dsa
  Hit enter at the prompts (don't set a password on the key).
  The above command generates a file called "id_dsa.pub" in ~/.ssh/. This is the public key, which needs to be copied to the other node:
root@filer01 ~# scp .ssh/id_dsa.pub root@filer02:~/.ssh/authorized_keys2

root@filer02 ~# scp .ssh/id_dsa.pub root@filer01:~/.ssh/authorized_keys2
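To avoid the interactive prompts entirely, the key can also be generated non-interactively with -N '' (empty passphrase) and -f (output path). The sketch below uses RSA because recent OpenSSH releases have dropped DSA support; otherwise the procedure is the same as above:

```shell
# Sketch: generate a passphrase-less keypair without any prompts.
# RSA is used here since newer OpenSSH versions no longer accept -t dsa.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$keydir/id_rsa"
ls "$keydir"    # id_rsa  id_rsa.pub
```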
  

Configure DRBD
  DRBD is what will keep the data between the two nodes consistent.
  On filer01:
root@filer01 ~# mv /etc/drbd.conf /etc/drbd.conf.org
  Then modify drbd.conf (version 8) according to the following:

global {
# minor-count 64;
# dialog-refresh 5; # 5 seconds
# disable-ip-verification;
usage-count ask;
}
common {
syncer { rate 100M; }
}
resource cluster_metadata {
protocol C;
handlers {
pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
# outdate-peer "/usr/sbin/drbd-peer-outdater";
}
startup {
# wfc-timeout 0;
degr-wfc-timeout 120; # 2 minutes.
}
disk {
on-io-error detach;
}
net {
after-sb-0pri disconnect;
after-sb-1pri disconnect;
after-sb-2pri disconnect;
rr-conflict disconnect;
}
syncer {
# rate 10M;
# after "r2";
al-extents 257;
}
on filer01 {
device /dev/drbd0;
disk /dev/sda3;
address 10.188.188.1:7788;
meta-disk internal;
}
on filer02 {
device /dev/drbd0;
disk /dev/sda3;
address 10.188.188.2:7788;
meta-disk internal;
}
}
resource vg0drbd {
protocol C;
startup {
wfc-timeout 0; ## Infinite!
degr-wfc-timeout 120; ## 2 minutes.
}
disk {
on-io-error detach;
}
net {
# timeout 60;
# connect-int 10;
# ping-int 10;
# max-buffers 2048;
# max-epoch-size 2048;
}
syncer {
after "cluster_metadata";
}
on filer01 {
device /dev/drbd1;
disk /dev/sda5;
address 10.188.188.1:7789;
meta-disk internal;
}
on filer02 {
device /dev/drbd1;
disk /dev/sda5;
address 10.188.188.2:7789;
meta-disk internal;
}
}
  Both hosts need the same drbd.conf, so the drbd.conf file from filer01 will be copied to filer02:
root@filer01 ~# scp /etc/drbd.conf root@filer02:/etc/drbd.conf
  Initialise metadata on /dev/drbd0 (cluster_metadata) and /dev/drbd1 (vg0drbd) on both nodes:
root@filer01 ~# drbdadm create-md cluster_metadata

root@filer01 ~# drbdadm create-md vg0drbd

root@filer02 ~# drbdadm create-md cluster_metadata

root@filer02 ~# drbdadm create-md vg0drbd
  Note: if the commands above generate errors about needing to zero out the file system, use the following command:
root@filer01 ~# dd if=/dev/zero of=/dev/sda3
  Be careful with this command and make sure it is run against the correct drive.
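Zeroing the entire partition can take a very long time on large disks. In practice, clearing only the start of the device is usually enough to remove an old filesystem signature. This shortcut is an assumption beyond the original note, demonstrated below on a scratch file rather than a real partition:

```shell
# Sketch: wiping only the head of a device is usually enough to clear an
# old filesystem signature. A scratch file stands in for /dev/sda3 here.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=8 status=none        # fake 8 MB partition
printf 'FAKE-FS-SIGNATURE' | dd of="$img" conv=notrunc status=none
dd if=/dev/zero of="$img" bs=1M count=1 conv=notrunc status=none  # wipe first MB
head -c 17 "$img" | tr -d '\0' | wc -c    # prints 0: signature is gone
```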
  Before starting the DRBD service, make sure that the partition used for drbd0 (the cluster_metadata resource in the drbd.conf file) is not already mounted (which it will be by default if it was created during the installation).
root@filer01 ~# umount /dev/sda3
  Now, start DRBD on both hosts:
root@filer01 ~# service drbd start
root@filer02 ~# service drbd start
  If all goes well, they should connect and running "service drbd status" should present output similar to the following:
root@filer1 /# service drbd status
drbd driver loaded OK; device status:

version: 8.0.12 (api:86/proto:86)

GIT-hash: 5c9f89594553e32adb87d9638dce591782f947e3 build by phil@mescal, 2008-04-24 13:29:44

m:res cs st ds p mounted fstype

0:cluster_metadata Connected Secondary/Secondary Inconsistent/Inconsistent C

1:vg0drbd Connected Secondary/Secondary Inconsistent/Inconsistent C
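When scripting around this, the per-resource states can be pulled out of the status output. A minimal sketch, where the field positions assume the drbd 8.0 column layout shown above:

```shell
# Sketch: extract resource name, connection state (cs) and node states (st)
# from `service drbd status` output; assumes the drbd 8.0 column layout.
parse_drbd_status() {
    awk '/^[0-9]+:/ { split($1, a, ":"); print a[2], $2, $3 }'
}

# Fed with the sample output shown above:
parse_drbd_status <<'EOF'
m:res cs st ds p mounted fstype
0:cluster_metadata Connected Secondary/Secondary Inconsistent/Inconsistent C
1:vg0drbd Connected Secondary/Secondary Inconsistent/Inconsistent C
EOF
# -> cluster_metadata Connected Secondary/Secondary
#    vg0drbd Connected Secondary/Secondary
```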
  Once both drbd resources are connected and both nodes are in Secondary state (as above), set a Primary node:
root@filer01 ~# drbdsetup /dev/drbd0 primary -o

root@filer01 ~# drbdsetup /dev/drbd1 primary -o
  This should give you a status result of something like the following:
root@filer1 /# service drbd status
drbd driver loaded OK; device status:

version: 8.0.12 (api:86/proto:86)

GIT-hash: 5c9f89594553e32adb87d9638dce591782f947e3 build by phil@mescal, 2008-04-24 13:29:44

m:res cs st ds p mounted fstype

... sync'ed: 17.9% (247232/297152)K

0:cluster_metadata SyncSource Primary/Secondary UpToDate/Inconsistent C

1:vg0drbd PausedSyncS Primary/Secondary UpToDate/Inconsistent C
  Note: if the vg0drbd LVM is large, it will take a long time to sync (perhaps overnight).
  Enable DRBD to startup at boot:
root@filer01 ~# chkconfig --level 2345 drbd on

root@filer02 ~# chkconfig --level 2345 drbd on
  Now create the cluster_metadata filesystem. This 512 MB partition will hold all of the Openfiler configuration data and the data for the services that should be available in HA (e.g. NFS, iSCSI, SMB).
root@filer01 ~# mkfs.ext3 /dev/drbd0
  Don't add this partition to /etc/fstab; it is managed by Heartbeat (which will be configured shortly).
  


  (Source: http://www.howtoforge.com/installing-and-configuring-openfiler-with-drbd-and-heartbeat)
