_winds posted on 2019-02-01 10:53:07

Basic Application of GlusterFS

  I. Introduction to the GlusterFS base environment
  1. An introduction to the GlusterFS file system and its architecture:
  http://jingyan.baidu.com/article/046a7b3ef65250f9c27fa9d9.html
  

  2. Goals of this lab
  a. Use several older, lower-powered servers to provide an in-house "cloud drive" service
  b. Deploy and configure the GlusterFS server and client
  c. Distribute load across multiple GlusterFS nodes
  

  3. Test environment
  OS: CentOS 6.7 x86_64
  Kernel: 2.6.32-573.el6.x86_64
  Software: glusterfs 3.7.10
  Four servers form a GlusterFS volume using the DHT (distribute) mode; a Windows 10 client connects to it through Samba.
  

  II. GlusterFS server configuration (server01)
  

  1. GlusterFS is fully decentralized, so much of its configuration only needs to be performed on one of the hosts
  

  2. Synchronize against the NTP server (this can also run as a scheduled cron job; see the sketch below)
# ntpdate -u 10.203.10.20
18 Apr 14:16:15 ntpdate: adjust time server 10.203.10.20 offset 0.008930 sec
# hwclock -w
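As the heading notes, the synchronization can instead be scheduled via cron; a minimal sketch, assuming the same NTP server 10.203.10.20:
# crontab -e
# add a line like the following: sync daily at 03:00, then save the time to the hardware clock
0 3 * * * /usr/sbin/ntpdate -u 10.203.10.20 && /sbin/hwclock -w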
  3. Check the records in /etc/hosts
# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.11 server01
192.168.1.12 server02
192.168.1.13 server03
192.168.1.14 server04

  4. Add a dedicated disk to use as the shared volume (LVM could also be configured here)

# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x0ef88f22.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').
Command (m for help): p
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0ef88f22
   Device Boot      Start         End      Blocks   Id  System
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-2610, default 2610):
Using default value 2610
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
# partx /dev/sdb
# 1:        63- 41929649 ( 41929587 sectors, 21467 MB)
# 2:         0-       -1 (        0 sectors,     0 MB)
# 3:         0-       -1 (        0 sectors,     0 MB)
# 4:         0-       -1 (        0 sectors,     0 MB)
# fdisk -l |grep /dev/sdb
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
/dev/sdb1               1        2610    20964793+  83  Linux
# mkfs.ext4 /dev/sdb1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1310720 inodes, 5241198 blocks
262059 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000
Writing inode tables: done                           
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
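For scripted deployments, the interactive fdisk session above can be replaced with a non-interactive equivalent; a sketch, assuming the same empty /dev/sdb:
# parted -s /dev/sdb mklabel msdos mkpart primary 1MiB 100%   # label the disk and create one primary partition
# partx /dev/sdb                                              # re-read the partition table
# mkfs.ext4 /dev/sdb1                                         # format it ext4, as above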
  5. Create the mount point and mount the new partition
# mkdir -p /glusterfs-xfs-mount
# mount /dev/sdb1 /glusterfs-xfs-mount/
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3       193G  7.2G  176G   4% /
tmpfs           932M     0  932M   0% /dev/shm
/dev/sda1       190M   41M  139M  23% /boot
/dev/sdb1        20G   44M   19G   1% /glusterfs-xfs-mount
  6. Make the mount persistent in /etc/fstab
# echo '/dev/sdb1 /glusterfs-xfs-mount ext4 defaults 0 0' >> /etc/fstab
(Note: the partition was formatted ext4 above, so the fstab type must be ext4 despite the "xfs" in the directory name; declaring it as xfs would make mount -a fail.)
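The new entry can be verified without a reboot; a quick check:
# umount /glusterfs-xfs-mount
# mount -a                     # mounts everything in /etc/fstab; an error here means a bad entry
# df -h | grep sdb1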

  7. Add the external yum repository
# cd /etc/yum.repos.d/
# wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
  8. Install the GlusterFS server package and start the service
# yum -y install glusterfs-server
# /etc/init.d/glusterd start
Starting glusterd:                                         [  OK  ]
# chkconfig glusterd on
# chkconfig --list glusterd
glusterd        0:off   1:off   2:on    3:on    4:on    5:on    6:off
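If iptables is enabled on these hosts, the gluster ports must be reachable between all peers; a sketch for CentOS 6 (glusterd listens on TCP 24007, and each brick gets a port from 49152 upward, as the volume status output below shows):
# iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management
# iptables -I INPUT -p tcp --dport 49152:49160 -j ACCEPT   # brick ports, one per brick
# service iptables save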
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3       193G  7.2G  176G   4% /
tmpfs           932M     0  932M   0% /dev/shm
/dev/sda1       190M   41M  139M  23% /boot
/dev/sdb1        20G   44M   19G   1% /glusterfs-xfs-mount
  9. Add the other cluster members (server02 and server03)
# gluster peer status
Number of Peers: 0
# gluster peer probe server02
peer probe: success.
# gluster peer status
Number of Peers: 1
Hostname: server02
Uuid: c58d0715-32ff-4962-90d9-4275fa65793a
State: Peer in Cluster (Connected)
# gluster peer probe server03
peer probe: success.
# gluster peer status
Number of Peers: 2
Hostname: server02
Uuid: c58d0715-32ff-4962-90d9-4275fa65793a
State: Peer in Cluster (Connected)
Hostname: server03
Uuid: 5110d0af-fdd9-4c82-b716-991cf0601b53
State: Peer in Cluster (Connected)

  10. Create the Gluster volume
# gluster volume create dht-volume01 server01:/glusterfs-xfs-mount server02:/glusterfs-xfs-mount server03:/glusterfs-xfs-mount
volume create: dht-volume01: failed: The brick server01:/glusterfs-xfs-mount is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use 'force' at the end of the command if you want to override this behavior.
# echo $?
1
# gluster volume create dht-volume01 server01:/glusterfs-xfs-mount server02:/glusterfs-xfs-mount server03:/glusterfs-xfs-mount force
volume create: dht-volume01: success: please start the volume to access data
# gluster volume start dht-volume01
volume start: dht-volume01: success
# gluster volume status
Status of volume: dht-volume01
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server01:/glusterfs-xfs-mount         49152   0          Y       2948
Brick server02:/glusterfs-xfs-mount         49152   0          Y       2910
Brick server03:/glusterfs-xfs-mount         49152   0          Y       11966
NFS Server on localhost                     N/A       N/A      N       N/A
NFS Server on server02                      N/A       N/A      N       N/A
NFS Server on server03                      N/A       N/A      N       N/A
Task Status of Volume dht-volume01
------------------------------------------------------------------------------
There are no active volume tasks
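Incidentally, the failed create above points at the cleaner practice: make a sub-directory under each mount point and use that as the brick, instead of overriding with 'force'; a sketch:
# mkdir -p /glusterfs-xfs-mount/brick01    # run on every server
# gluster volume create dht-volume01 server01:/glusterfs-xfs-mount/brick01 server02:/glusterfs-xfs-mount/brick01 server03:/glusterfs-xfs-mount/brick01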

  11. Test by writing a 512 MB file
# cd /glusterfs-xfs-mount/
# dd if=/dev/zero of=test.img bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 5.20376 s, 103 MB/s
# ls
lost+found  test.img
(Writing directly into a brick like this is fine for a quick throughput test, but it bypasses GlusterFS; real data should only be written through a client mount.)
  III. GlusterFS server configuration on server02 (server03 is configured the same way)
  

  1. Synchronize the time; the NTP server address here is 10.203.10.20
# ntpdate -u 10.203.10.20
18 Apr 14:27:58 ntpdate: adjust time server 10.203.10.20 offset -0.085282 sec
# hwclock -w  
  2. Check the records in /etc/hosts
# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.11 server01
192.168.1.12 server02
192.168.1.13 server03
192.168.1.14 server04
  3. Set up a dedicated local disk to become part of the GlusterFS volume
# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x927b5e72.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').
Command (m for help): p
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x927b5e72
   Device Boot      Start         End      Blocks   Id  System
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-2610, default 2610):
Using default value 2610
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.  
  4、更新磁盘分区表信息,使磁盘分区和格式化
# partx /dev/sdb
# 1:        63- 41929649 ( 41929587 sectors, 21467 MB)
# 2:         0-       -1 (        0 sectors,     0 MB)
# 3:         0-       -1 (        0 sectors,     0 MB)
# 4:         0-       -1 (        0 sectors,     0 MB)
# fdisk -l|grep /dev/sdb
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
/dev/sdb1               1        2610    20964793+  83  Linux
# mkfs.ext4 /dev/sdb1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1310720 inodes, 5241198 blocks
262059 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000
Writing inode tables: done                           
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3       193G  7.3G  176G   4% /
tmpfs           932M     0  932M   0% /dev/shm
/dev/sda1       190M   41M  139M  23% /boot

  5. Create the mount point and configure the mount
# mkdir -p /glusterfs-xfs-mount
# mount /dev/sdb1 /glusterfs-xfs-mount/
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3       193G  7.3G  176G   4% /
tmpfs           932M     0  932M   0% /dev/shm
/dev/sda1       190M   41M  139M  23% /boot
/dev/sdb1        20G   44M   19G   1% /glusterfs-xfs-mount
# echo '/dev/sdb1 /glusterfs-xfs-mount ext4 defaults 0 0' >> /etc/fstab
(As on server01, the type must be ext4 to match the filesystem actually created.)
  6. Configure the yum repository and install the GlusterFS server package
# cd /etc/yum.repos.d/
# wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
--2016-04-18 14:32:22--  http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
Resolving download.gluster.org... 23.253.208.221, 2001:4801:7824:104:be76:4eff:fe10:23d8
Connecting to download.gluster.org|23.253.208.221|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1049 (1.0K)
Saving to: “glusterfs-epel.repo”
100%[==============================================================>] 1,049       --.-K/s   in 0s      
2016-04-18 14:32:23 (36.4 MB/s) - “glusterfs-epel.repo” saved
# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
Retrieving http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
warning: /var/tmp/rpm-tmp.gaJCKd: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
Preparing...                ###########################################
   1:epel-release         ###########################################
# yum -y install glusterfs-server

  7. Start the glusterd service
# /etc/init.d/glusterd start
Starting glusterd:                                         [  OK  ]
# chkconfig glusterd on
# chkconfig --list glusterd
glusterd        0:off   1:off   2:on    3:on    4:on    5:on    6:off
  8. Check the cluster peers and volume status
# gluster peer status
Number of Peers: 1
Hostname: server01
Uuid: e90a3b54-5a9d-4e57-b502-86f9aad8b576
State: Peer in Cluster (Connected)
# gluster volume status
Status of volume: dht-volume01
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server01:/glusterfs-xfs-mount         49152   0          Y       2948
Brick server02:/glusterfs-xfs-mount         49152   0          Y       2910
Brick server03:/glusterfs-xfs-mount         49152   0          Y       11966
NFS Server on localhost                     2049      0          Y       2932
NFS Server on server01                      2049      0          Y       2968
NFS Server on server03                      2049      0          Y       11986
Task Status of Volume dht-volume01
------------------------------------------------------------------------------
There are no active volume tasks
# ll /glusterfs-xfs-mount/
total 16
drwx------ 2 root root 16384 Apr 18 14:29 lost+found
# cd /glusterfs-xfs-mount/
# dd if=/dev/zero of=server02.img bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 5.85478 s, 91.7 MB/s
# ls
lost+found  server02.img
Because this volume uses the DHT (distribute) mode, each file is stored on exactly one brick, so the data seen on server01 and server02 differs unless identical files are written on both.
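The hash ranges that DHT assigns to each brick can be inspected on the brick root itself; a sketch (getfattr comes from the attr package; trusted.glusterfs.dht holds this brick's slice of the 32-bit hash ring):
# getfattr -d -m . -e hex /glusterfs-xfs-mount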
  

  IV. Manually adding and removing bricks (these operations can be run from any gluster server node)
  

  1. Add the server04 node
# gluster peer probe server04
peer probe: success.
# gluster peer status
Number of Peers: 3
Hostname: server02
Uuid: c58d0715-32ff-4962-90d9-4275fa65793a
State: Peer in Cluster (Connected)
Hostname: server03
Uuid: 5110d0af-fdd9-4c82-b716-991cf0601b53
State: Peer in Cluster (Connected)
Hostname: server04
Uuid: d653b5c2-dac4-428c-bf6f-eea393adbb16
State: Peer in Cluster (Connected)  
  2. Add the new node's brick to dht-volume01
# gluster volume add-brick
Usage: volume add-brick <VOLNAME> [<stripe|replica> <COUNT>] <NEW-BRICK> ... [force]
# gluster volume add-brick dht-volume01 server04:/glusterfs-xfs-mount
volume add-brick: failed: Pre Validation failed on server04. The brick server04:/glusterfs-xfs-mount is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use 'force' at the end of the command if you want to override this behavior.
# gluster volume add-brick dht-volume01 server04:/glusterfs-xfs-mount force
volume add-brick: success
# gluster volume status
Status of volume: dht-volume01
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server01:/glusterfs-xfs-mount         49152   0          Y       2948
Brick server02:/glusterfs-xfs-mount         49152   0          Y       2910
Brick server03:/glusterfs-xfs-mount         49152   0          Y       11966
Brick server04:/glusterfs-xfs-mount         49152   0          Y       2925
NFS Server on localhost                     2049      0          Y       3258
NFS Server on server02                      2049      0          Y       3107
NFS Server on server03                      2049      0          Y       12284
NFS Server on server04                      2049      0          Y       2945
Task Status of Volume dht-volume01
------------------------------------------------------------------------------
There are no active volume tasks  
  3. Remove a brick from the volume
# gluster volume remove-brick dht-volume01 server04:/glusterfs-xfs-mount/
Usage: volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... <start|stop|status|commit|force>
# gluster volume remove-brick dht-volume01 server04:/glusterfs-xfs-mount/ commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
# gluster volume status
Status of volume: dht-volume01
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server01:/glusterfs-xfs-mount         49152   0          Y       2948
Brick server02:/glusterfs-xfs-mount         49152   0          Y       2910
Brick server03:/glusterfs-xfs-mount         49152   0          Y       11966
NFS Server on localhost                     2049      0          Y       3336
NFS Server on server02                      2049      0          Y       3146
NFS Server on server04                      2049      0          Y       2991
NFS Server on server03                      2049      0          Y       12323
Task Status of Volume dht-volume01
------------------------------------------------------------------------------
There are no active volume tasks
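Note that committing the removal directly, as above, abandons whatever data was on that brick. The safer flow migrates the data off first; a sketch:
# gluster volume remove-brick dht-volume01 server04:/glusterfs-xfs-mount start
# gluster volume remove-brick dht-volume01 server04:/glusterfs-xfs-mount status
# gluster volume remove-brick dht-volume01 server04:/glusterfs-xfs-mount commit   # only once status shows completed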
  V. Configure rebalancing
# gluster volume rebalance dht-volume01
Usage: volume rebalance <VOLNAME> {{fix-layout start} | {start [force]|stop|status}}
# gluster volume rebalance dht-volume01 fix-layout start
volume rebalance: dht-volume01: success: Rebalance on dht-volume01 has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 6ce8fd86-dd1e-4ce3-bb44-82532b5055dd
# gluster volume rebalance dht-volume01 fix-layout status
Usage: volume rebalance <VOLNAME> {{fix-layout start} | {start [force]|stop|status}}
# gluster
gluster> volume rebalance dht-volume01 status
        Node    Rebalanced-files    size      scanned    failures    skipped    status                  run time in h:m:s
   ---------    ----------------    ------    -------    --------    -------    --------------------    -----------------
   localhost                   0    0Bytes          0           0          0    fix-layout completed    0:0:0
    server02                   0    0Bytes          0           0          0    fix-layout completed    0:0:0
    server03                   0    0Bytes          0           0          0    fix-layout completed    0:0:0
volume rebalance: dht-volume01: success
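fix-layout only recomputes the directory hash ranges so that new files can land on new bricks; to also migrate existing files onto the new layout, run a full rebalance; a sketch:
# gluster volume rebalance dht-volume01 start
# gluster volume rebalance dht-volume01 status   # wait until every node reports completed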
# cd /glusterfs-xfs-mount/
Disable the built-in gluster NFS server (which is why the NFS rows disappear from the status output below):
# gluster volume set dht-volume01 nfs.disable on
volume set: success
# gluster volume status
Status of volume: dht-volume01
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server01:/glusterfs-xfs-mount         49152   0          Y       2948
Brick server02:/glusterfs-xfs-mount         49152   0          Y       2910
Brick server03:/glusterfs-xfs-mount         49152   0          Y       11966
Task Status of Volume dht-volume01
------------------------------------------------------------------------------
Task               : Rebalance         
ID                   : 6ce8fd86-dd1e-4ce3-bb44-82532b5055dd
Status               : completed         
# touch {1..100}.txt
# ls
100.txt  12.txt  16.txt  20.txt  25.txt  3.txt   55.txt  65.txt  70.txt  75.txt  92.txt       lost+found
10.txt   14.txt  18.txt  21.txt  2.txt   43.txt  57.txt  67.txt  71.txt  77.txt  client2.iso  test.img
11.txt   15.txt  1.txt   22.txt  30.txt  47.txt  61.txt  6.txt   72.txt  88.txt  client.iso
# ls
13.txt  23.txt  28.txt  34.txt  39.txt  46.txt  52.txt  66.txt  76.txt  81.txt  8.txt   93.txt
17.txt  26.txt  29.txt  35.txt  41.txt  4.txt   58.txt  68.txt  79.txt  83.txt  90.txt  94.txt
19.txt  27.txt  33.txt  37.txt  42.txt  51.txt  62.txt  73.txt  80.txt  86.txt  91.txt  97.txt
# cd /glusterfs-xfs-mount/
# ls
24.txt  36.txt  44.txt  49.txt  54.txt  5.txt   64.txt  78.txt  84.txt  89.txt  98.txt  lost+found
31.txt  38.txt  45.txt  50.txt  56.txt  60.txt  69.txt  7.txt   85.txt  95.txt  99.txt  server03.img
32.txt  40.txt  48.txt  53.txt  59.txt  63.txt  74.txt  82.txt  87.txt  96.txt  9.txt   server04.iso
# cd /glusterFS-mount/
# ls
100.txt  18.txt  26.txt  34.txt  42.txt  50.txt  59.txt  67.txt  75.txt  83.txt  91.txt  9.txt
10.txt   19.txt  27.txt  35.txt  43.txt  51.txt  5.txt   68.txt  76.txt  84.txt  92.txt  client2.iso
11.txt   1.txt   28.txt  36.txt  44.txt  52.txt  60.txt  69.txt  77.txt  85.txt  93.txt  client.iso
12.txt   20.txt  29.txt  37.txt  45.txt  53.txt  61.txt  6.txt   78.txt  86.txt  94.txt  lost+found
13.txt   21.txt  2.txt   38.txt  46.txt  54.txt  62.txt  70.txt  79.txt  87.txt  95.txt  server03.img
14.txt   22.txt  30.txt  39.txt  47.txt  55.txt  63.txt  71.txt  7.txt   88.txt  96.txt  server04.iso
15.txt   23.txt  31.txt  3.txt   48.txt  56.txt  64.txt  72.txt  80.txt  89.txt  97.txt  test.img
16.txt   24.txt  32.txt  40.txt  49.txt  57.txt  65.txt  73.txt  81.txt  8.txt   98.txt
17.txt   25.txt  33.txt  41.txt  4.txt   58.txt  66.txt  74.txt  82.txt  90.txt  99.txt
The 100 files created on the client mount are spread across the three bricks, so the distribution function works as intended.

  VI. Configure the gluster client
# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.11 server01
192.168.1.12 server02
192.168.1.13 server03
192.168.1.14 server04
# mkdir -p /glusterFS-mount
# mount -t glusterfs server01:/dht-volume01 /glusterFS-mount/
# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/sda3               193G  7.6G  176G   5% /
tmpfs                   932M   76K  932M   1% /dev/shm
/dev/sda1               190M   41M  139M  23% /boot
server01:/dht-volume01   59G  1.7G   55G   3% /glusterFS-mount
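To make the client mount persist across reboots, it can also go into /etc/fstab; a sketch (_netdev defers the mount until the network is up):
# echo 'server01:/dht-volume01 /glusterFS-mount glusterfs defaults,_netdev 0 0' >> /etc/fstab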
# cd /glusterFS-mount/
# ls
lost+found  server02.img  server03.img  test.img
# dd if=/dev/zero of=client.iso bs=1M count=123
123+0 records in
123+0 records out
128974848 bytes (129 MB) copied, 1.52512 s, 84.6 MB/s
# ls
client.iso  lost+found  server02.img  server03.img  test.img
# dd if=/dev/zero of=client2.iso bs=1M count=456
456+0 records in
456+0 records out
478150656 bytes (478 MB) copied, 8.76784 s, 54.5 MB/s
# ls
client2.iso  client.iso  lost+found  server02.img  server03.img  test.img
# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/sda3               193G  7.2G  176G   4% /
tmpfs                   932M   76K  932M   1% /dev/shm
/dev/sda1               190M   41M  139M  23% /boot
server01:/dht-volume01   40G  1.7G   36G   5% /glusterFS-mount
# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/sda3               193G  7.6G  176G   5% /
tmpfs                   932M   80K  932M   1% /dev/shm
/dev/sda1               190M   41M  139M  23% /boot
server01:/dht-volume01   59G  2.2G   54G   4% /glusterFS-mount
# cd ~
# mount -a
Mount failed. Please check the log file for more details.
Mount failed. Please check the log file for more details.
# ls
anaconda-ks.cfg  Documents  install.log         Music     Public     Videos
Desktop          Downloads  install.log.syslog  Pictures  Templates
# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/sda3               193G  7.6G  176G   5% /
tmpfs                   932M   80K  932M   1% /dev/shm
/dev/sda1               190M   41M  139M  23% /boot
server01:/dht-volume01   79G  2.3G   72G   4% /glusterFS-mount
The change in the volume's total capacity shows that the additional brick was added successfully.
  VII. Share the client's GlusterFS mount with Windows machines via Samba

  1. Install the Samba service

# yum -y install samba
# /etc/init.d/smb restart
Shutting down SMB services:                                [  OK  ]
Starting SMB services:                                     [  OK  ]
# /etc/init.d/nmb restart
Shutting down NMB services:                                [  OK  ]
Starting NMB services:                                     [  OK  ]
# chkconfig smb on
# chkconfig nmb on
# vim /etc/samba/smb.conf
The relevant configuration is as follows:
workgroup = WORKGROUP                      # workgroup name
server string = Samba Server Version %v    # advertised version string
hosts allow = 127. 192.168.1. 10.10.10.    # hosts allowed to connect
log file = /var/log/samba/log.%m           # log location, one file per client
max log size = 50                          # maximum log size, in KB
security = user                            # Samba authentication level
passdb backend = tdbsam
[云盘测试平台]
      comment = yunpan
      browseable = yes
      writable = yes
      public = yes
      path = /glusterFS-mount
      valid users = jifang01               # must match the Samba account created in step 2 below
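Before restarting the services, the configuration syntax can be checked with testparm, which ships with Samba:
# testparm -s    # parses smb.conf and prints the effective share definitions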
# /etc/init.d/smb restart
Shutting down SMB services:                                [  OK  ]
Starting SMB services:                                     [  OK  ]
# /etc/init.d/nmb restart
Shutting down NMB services:                                [  OK  ]
Starting NMB services:                                     [  OK  ]

  2. Configure the Samba user

# adduser jifang01 -s /sbin/nologin
# id jifang01
uid=501(jifang01) gid=501(jifang01) groups=501(jifang01)
# smbpasswd -a jifang01
New SMB password:
Retype new SMB password:
Added user jifang01.
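To confirm the account landed in Samba's password database:
# pdbedit -L | grep jifang01    # lists accounts from the tdbsam backend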

  3. Set permissions on the shared directory
# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/sda3               193G  7.6G  176G   5% /
tmpfs                   932M   72K  932M   1% /dev/shm
/dev/sda1               190M   41M  139M  23% /boot
server01:/dht-volume01   79G  1.8G   73G   3% /glusterFS-mount
# chmod -R 777 /glusterFS-mount/
(777 keeps the test simple; tighten the permissions for production use.)

  4. Map the network drive from a Windows machine and verify (screenshots below)
http://s5.运维网.com/wyfs02/M00/7F/2D/wKiom1cV0eTw86jWAAAoKjgBVv8736.png
http://s5.运维网.com/wyfs02/M01/7F/2D/wKiom1cV0eSgN_mnAABQjzCaHW8235.png
http://s5.运维网.com/wyfs02/M01/7F/2B/wKioL1cV0qPyOrowAABAHp3dml8375.png
http://s5.运维网.com/wyfs02/M02/7F/2D/wKiom1cV0eWgloo-AAAyL27ME3E212.png