When managing a VM Guest on the VM Host Server itself, you can access the complete file system of the VM Host Server in order to attach or create virtual hard disks or to attach existing images to the VM Guest.
However, this is not possible when managing VM Guests from a remote host. For this reason, libvirt supports so-called Storage Pools, which can be accessed from remote machines.
libvirt knows two different types of storage: volumes and pools.

Storage Volume
A storage volume is a storage device that can be assigned to a guest, for example a virtual disk or a CD/DVD/floppy image. Physically (on the VM Host Server) it can be a block device (a partition, a logical volume, etc.) or a file.

Storage Pool
A storage pool is a storage resource on the VM Host Server that can be used for storing volumes.
Physically it can be one of the following types:
File System Directory (dir)
A directory for hosting image files.
The files can be in one of the supported disk formats (raw, qcow2, or qed), or they can be ISO images.
Physical Disk Device (disk)
Use a complete physical disk as storage.
A partition is created for each volume that is added to the pool.
Pre-Formatted Block Device (fs)
Specify a partition to be used in the same way as a file system directory pool (a directory for hosting image files).
The only difference to using a file system directory is the fact that libvirt takes care of mounting the device.
iSCSI Target (iscsi)
Set up a pool on an iSCSI target.
You need to have logged in to the target once before in order to use it with libvirt.
Volume creation on iSCSI pools is not supported; instead, each existing Logical Unit Number (LUN) represents a volume.
Each volume/LUN also needs a valid (empty) partition table or disk label before you can use it.
LVM Volume Group (logical)
Use an LVM volume group as a pool.
You may either use a pre-defined volume group, or create a group by specifying the devices to use.
Storage volumes are created as logical volumes in the group.
Multipath Devices (mpath)
At the moment, multipathing support is limited to assigning existing devices to the guests.
Volume creation or configuring multipathing from within libvirt is not supported.
Network Exported Directory (netfs)
Specify a network directory to be used in the same way as a file system directory pool (a directory for hosting image files).
The only difference to using a file system directory is the fact that libvirt takes care of mounting the directory.
Supported protocols are NFS and glusterfs.
SCSI Host Adapter (scsi)
Use a SCSI host adapter in almost the same way as an iSCSI target.
It is recommended to use a device name from /dev/disk/by-* rather than the simple /dev/sdX, since the latter may change (for example, when adding or removing hardware).
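One practical difference between the pool types above is whether libvirt can create new volumes in them at all. As a quick summary of the list, the following sketch encodes that property per pool type identifier (the table and function names are illustrative, not part of any libvirt API):

```python
# Whether libvirt can create new volumes in a pool of the given type,
# as described in the list above. For iscsi, mpath and scsi pools,
# volumes must pre-exist and cannot be created via libvirt.
VOLUME_CREATION_SUPPORTED = {
    "dir": True,      # new image files in the directory
    "disk": True,     # new partitions on the physical disk
    "fs": True,       # new image files on the mounted device
    "iscsi": False,   # LUNs are managed on the iSCSI server
    "logical": True,  # new logical volumes in the volume group
    "mpath": False,   # only existing multipath devices can be assigned
    "netfs": True,    # new image files on the mounted network share
    "scsi": False,    # existing SCSI LUNs only
}

def can_create_volume(pool_type: str) -> bool:
    """Return True if volumes can be created via libvirt in this pool type."""
    return VOLUME_CREATION_SUPPORTED[pool_type]

print(can_create_volume("dir"))    # True
print(can_create_volume("iscsi"))  # False
```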
Managing Storage with virsh
To create a storage pool, define it in an XML file and pass that file to virsh:

virsh pool-define directory_pool.xml

The definitions for the individual pool types look as follows.
Directory Pool

<pool type="dir">
  <name>virtimages</name>
  <target>
    <path>/var/lib/virt/images</path>
  </target>
</pool>
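The XML above can of course be written by hand; as a sketch, the same definition can also be generated programmatically and then fed to virsh pool-define (the helper name is illustrative; the file and value names match the example above):

```python
import xml.etree.ElementTree as ET

def write_dir_pool_xml(name: str, path: str, filename: str) -> str:
    """Generate a minimal directory pool definition for virsh pool-define.

    Only the elements needed for a dir pool are emitted; other pool
    types additionally require a <source> element.
    """
    pool = ET.Element("pool", type="dir")
    ET.SubElement(pool, "name").text = name
    target = ET.SubElement(pool, "target")
    ET.SubElement(target, "path").text = path
    xml = ET.tostring(pool, encoding="unicode")
    with open(filename, "w") as f:
        f.write(xml)
    return xml

# Values from the example above; afterwards run:
#   virsh pool-define directory_pool.xml
print(write_dir_pool_xml("virtimages", "/var/lib/virt/images",
                         "directory_pool.xml"))
```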
Filesystem pool

This block device will be mounted and files managed in the directory of its mount point.

<pool type="fs">
  <name>virtimages</name>
  <source>
    <!-- example device; replace with the block device to mount -->
    <device path="/dev/VolGroup00/VirtImages"/>
  </source>
  <target>
    <path>/var/lib/virt/images</path>
  </target>
</pool>
Network filesystem pool

Instead of requiring a local block device as the source, it requires the name of a host and the path of an exported directory. It will mount this network filesystem and manage files within the directory of its mount point.

<pool type="netfs">
  <name>virtimages</name>
  <source>
    <!-- example host; replace with your NFS server -->
    <host name="nfs.example.com"/>
    <dir path="/var/lib/virt/images"/>
  </source>
  <target>
    <path>/var/lib/virt/images</path>
  </target>
</pool>
Logical volume pools

This provides a pool based on an LVM volume group. For a pre-defined LVM volume group, simply providing the group name is sufficient, while building a new group requires providing a list of source devices to serve as physical volumes.

<pool type="logical">
  <name>HostVG</name>
  <target>
    <path>/dev/HostVG</path>
  </target>
</pool>
Disk volume pools

This provides a pool based on a physical disk. Volumes are created by adding partitions to the disk.

<pool type="disk">
  <name>sda</name>
  <source>
    <!-- example device, matching the pool name above -->
    <device path="/dev/sda"/>
  </source>
  <target>
    <path>/dev</path>
  </target>
</pool>
iSCSI volume pools

This provides a pool based on an iSCSI target. Volumes must be pre-allocated on the iSCSI server, and cannot be created via the libvirt APIs. Since /dev/XXX names may change each time libvirt logs into the iSCSI target, it is recommended to configure the pool to use /dev/disk/by-path or /dev/disk/by-id for the target path.

<pool type="iscsi">
  <name>virtimages</name>
  <source>
    <!-- example host and IQN; replace with your iSCSI target -->
    <host name="iscsi.example.com"/>
    <device path="iqn.2013-06.com.example:iscsi-pool"/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
SCSI volume pools

This provides a pool based on a SCSI HBA. Volumes are pre-existing SCSI LUNs, and cannot be created via the libvirt APIs. Since /dev/XXX names are not generally stable, it is recommended to configure the pool to use /dev/disk/by-path or /dev/disk/by-id for the target path.

<pool type="scsi">
  <name>virtimages</name>
  <source>
    <!-- example adapter; replace with your HBA -->
    <adapter name="host0"/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
RBD pools
This storage driver provides a pool which contains all RBD images in a RADOS pool. RBD (RADOS Block Device) is part of the Ceph distributed storage project.
This backend only supports Qemu with RBD support. Kernel RBD, which exposes RBD devices as block devices in /dev, is not supported. RBD images created with this storage backend can be accessed through kernel RBD if configured manually, but this backend does not provide mapping for these images.
Images created with this backend can be attached to Qemu guests when Qemu is built with RBD support (since Qemu 0.14.0).

<pool type="rbd">
  <name>myrbdpool</name>
  <source>
    <name>rbdpool</name>
    <!-- example monitor; replace with your Ceph monitor host(s) -->
    <host name="ceph-mon1.example.org" port="6789"/>
  </source>
</pool>
Listing Pools and Volumes
virsh pool-list --details
virsh pool-info POOL
virsh vol-list --details POOL

Starting, Stopping and Deleting Pools

virsh pool-start POOL
virsh pool-destroy POOL
virsh pool-delete POOL

Adding, Cloning and Deleting Volumes in a Storage Pool
virsh vol-create-as virtimages newimage 12G --format qcow2 --allocation 4G
virsh vol-clone NAME_EXISTING_VOLUME NAME_NEW_VOLUME --pool POOL
virsh vol-delete NAME --pool POOL

Using LVM storage devices with libvirt (Chinese-language article):
http://www.ibm.com/developerworks/cn/linux/l-cn-libvirt-lvm/
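A note on the size arguments accepted by virsh vol-create-as (12G for capacity, 4G for allocation in the example above): a rough helper for converting such strings, under the assumption that virsh interprets the single-letter suffixes as powers of 1024 (KiB, MiB, GiB, TiB):

```python
def parse_size(size: str) -> int:
    """Convert a virsh-style size string such as '12G' to bytes.

    Assumes the suffixes K, M, G, T denote powers of 1024;
    a bare number is taken as bytes.
    """
    multipliers = {"K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}
    size = size.strip()
    if size and size[-1].upper() in multipliers:
        return int(size[:-1]) * multipliers[size[-1].upper()]
    return int(size)

print(parse_size("12G"))  # 12884901888
print(parse_size("4G"))   # 4294967296
```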