[Experience Sharing] OpenStack Installation

[iyunv@controller ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=con-net
Check the NIC configuration, then restart the network:
[iyunv@controller ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=192.168.137.188
NETMASK=255.255.255.0
service NetworkManager stop
service network restart
/etc/sysctl.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
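To load these kernel parameters without rebooting (the net.bridge.* keys require the bridge module to be present), run:
sysctl -p
sysctl net.ipv4.ip_forward    # expect: net.ipv4.ip_forward = 1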

/etc/security/limits.conf
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
* soft core unlimited
* hard core unlimited
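These limits take effect on the next login session; to verify from a fresh shell:
ulimit -n    # expect 65535
ulimit -u    # expect 65535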
/etc/selinux/config
SELINUX=disabled
setenforce 0

mv /etc/libvirt/qemu/networks/default.xml /etc/libvirt/qemu/networks/default.xml.bak
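The same result can be achieved with virsh instead of moving the XML file (a minimal alternative, assuming libvirtd is already running):
virsh net-destroy default
virsh net-autostart default --disable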
Delete the persistent net rules so NICs are re-detected on boot:
rm -f /etc/udev/rules.d/70-persistent-net.rules

pvcreate /dev/sdb
vgcreate vgstorage /dev/sdb
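Confirm the volume group that Cinder will use later was created:
pvs              # /dev/sdb should be listed under VG vgstorage
vgs vgstorage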
/etc/sysconfig/iptables
-I INPUT -p tcp --dport 80 -j ACCEPT
-I INPUT -p tcp --dport 3306 -j ACCEPT
-I INPUT -p tcp --dport 5000 -j ACCEPT
-I INPUT -p tcp --dport 5672 -j ACCEPT
-I INPUT -p tcp --dport 8080 -j ACCEPT
-I INPUT -p tcp --dport 8773 -j ACCEPT
-I INPUT -p tcp --dport 8774 -j ACCEPT
-I INPUT -p tcp --dport 8775 -j ACCEPT
-I INPUT -p tcp --dport 8776 -j ACCEPT
-I INPUT -p tcp --dport 8777 -j ACCEPT
-I INPUT -p tcp --dport 9292 -j ACCEPT
-I INPUT -p tcp --dport 9696 -j ACCEPT
-I INPUT -p tcp --dport 15672 -j ACCEPT
-I INPUT -p tcp --dport 55672 -j ACCEPT
-I INPUT -p tcp --dport 35357 -j ACCEPT
-I INPUT -p tcp --dport 11211 -j ACCEPT

/etc/init.d/iptables restart
cd /var/www/html/rabbitmq
rpm --import rabbitmq-signing-key-public.asc
Make sure the hostname resolves locally before installing (RabbitMQ derives its node name from the hostname):
cat /etc/hosts
127.0.0.1 con-net localhost.localdomain localhost4 localhost4.localdomain4
yum install rabbitmq-server
/etc/rabbitmq/enabled_plugins
[rabbitmq_management].
Be careful with the trailing period: the file is an Erlang term, and the dot must not be omitted.
Enable the rabbitmq service at boot:
chkconfig rabbitmq-server on
Start the rabbitmq service:
service rabbitmq-server start
[iyunv@haoning ~]# ps -ef|grep rabb
rabbitmq   4209      1  0 02:46 ?        00:00:00 /usr/lib64/erlang/erts-5.8.5/bin/epmd -daemon
root       4224      1  0 02:46 pts/1    00:00:00 /bin/sh /etc/init.d/rabbitmq-server start
root       4226   4224  0 02:46 pts/1    00:00:00 /bin/bash -c ulimit -S -c 0 >/dev/null 2>&1 ; /usr/sbin/rabbitmq-server
root       4229   4226  0 02:46 pts/1    00:00:00 /bin/sh /usr/sbin/rabbitmq-server
root       4245   4229  0 02:46 pts/1    00:00:00 su rabbitmq -s /bin/sh -c /usr/lib/rabbitmq/bin/rabbitmq-server
rabbitmq   4246   4245  1 02:46 ?        00:00:02 /usr/lib64/erlang/erts-5.8.5/bin/beam.smp -W w -K true -A30 -P 1048576 -- -root /usr/lib64/erlang -progname erl -- -home /var/lib/rabbitmq -- -pa /usr/lib/rabbitmq/lib/rabbitmq_server-3.1.2/sbin/../ebin -noshell -noinput -s rabbit boot -sname rabbit@haoning -boot start_sasl -kernel inet_default_connect_options [{nodelay,true}] -sasl errlog_type error -sasl sasl_error_logger false -rabbit error_logger {file,"/var/log/rabbitmq/rabbit@haoning.log"} -rabbit sasl_error_logger {file,"/var/log/rabbitmq/rabbit@haoning-sasl.log"} -rabbit enabled_plugins_file "/etc/rabbitmq/enabled_plugins" -rabbit plugins_dir "/usr/lib/rabbitmq/lib/rabbitmq_server-3.1.2/sbin/../plugins" -rabbit plugins_expand_dir "/var/lib/rabbitmq/mnesia/rabbit@haoning-plugins-expand" -os_mon start_cpu_sup false -os_mon start_disksup false -os_mon start_memsup false -mnesia dir "/var/lib/rabbitmq/mnesia/rabbit@haoning"
rabbitmq   4343   4246  0 02:46 ?        00:00:00 inet_gethost 4
rabbitmq   4344   4343  0 02:46 ?        00:00:00 inet_gethost 4
root       4441   3844  0 02:49 pts/1    00:00:00 grep rabb
[iyunv@haoning ~]#
The management UI is now reachable at http://192.168.137.188:15672/ (default credentials guest/guest).
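The broker can also be checked from the shell with the standard rabbitmqctl tool:
rabbitmqctl status
rabbitmqctl list_users    # the default guest user should be listed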

yum install mysql mysql-server
vi /etc/my.cnf
Add the following under the [mysqld] section:
default-character-set=utf8
default-storage-engine=InnoDB
service mysqld start
/usr/bin/mysqladmin -u root password 'openstack'
chkconfig mysqld on
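A quick sanity check that MySQL is running and the defaults took effect (root password as set above):
mysql -u root -popenstack -e "SHOW VARIABLES LIKE 'character_set_server';"    # expect utf8
mysql -u root -popenstack -e "SHOW ENGINES;"                                  # InnoDB should be listed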

----------------------------------------------------
yum install -y openstack-keystone openstack-utils
export SERVICE_TOKEN=$(openssl rand -hex 10)
echo $SERVICE_TOKEN >/root/ks_admin_token
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $SERVICE_TOKEN
openstack-config --set /etc/keystone/keystone.conf token provider keystone.token.providers.uuid.Provider;
openstack-config --set /etc/keystone/keystone.conf sql connection mysql://keystone:keystone@192.168.137.188/keystone;
openstack-db --init --service keystone --password keystone --rootpw openstack;
chown -R keystone:keystone /etc/keystone
Start Keystone:
/etc/init.d/openstack-keystone start
chkconfig openstack-keystone on
[iyunv@haoning ~]# ps -ef|grep keystone
keystone  89430      1  5 20:08 ?        00:00:00 /usr/bin/python /usr/bin/keystone-all
root      89440   3844  0 20:08 pts/1    00:00:00 grep keystone

export SERVICE_TOKEN=`cat /root/ks_admin_token`
export SERVICE_ENDPOINT=http://192.168.137.188:35357/v2.0
keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"
keystone endpoint-create --service keystone --publicurl 'http://192.168.137.188:5000/v2.0' --adminurl 'http://192.168.137.188:35357/v2.0' --internalurl 'http://192.168.137.188:5000/v2.0' --region beijing
keystone user-create --name admin --pass openstack
keystone role-create --name admin
keystone tenant-create --name admin
keystone user-role-add --user admin --role admin --tenant admin

vi /root/keystone_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://192.168.137.188:35357/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '
unset SERVICE_TOKEN
unset SERVICE_ENDPOINT
source /root/keystone_admin
keystone user-list
keystone role-list
keystone tenant-list

keystone role-create --name Member
keystone user-create --name usera --pass openstack
keystone tenant-create --name tenanta
keystone user-role-add --user usera --role Member --tenant tenanta
keystone user-create --name userb --pass openstack
keystone tenant-create --name tenantb
keystone user-role-add --user userb --role Member --tenant tenantb
keystone user-list
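For testing as a tenant user, a credentials file analogous to /root/keystone_admin can be created (a sketch using the usera/tenanta accounts created above):
vi /root/keystone_usera
export OS_USERNAME=usera
export OS_TENANT_NAME=tenanta
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://192.168.137.188:5000/v2.0/
Regular users authenticate against the public port 5000 rather than the admin port 35357.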

----------------------------------- Glance installation -----------
yum install -y openstack-glance openstack-utils python-kombu python-anyjson
keystone service-create --name glance --type image --description "Glance Image Service"
keystone endpoint-create --service glance --publicurl "http://192.168.137.188:9292" --adminurl "http://192.168.137.188:9292" --internalurl "http://192.168.137.188:9292" --region beijing
openstack-config --set /etc/glance/glance-api.conf DEFAULT sql_connection mysql://glance:glance@192.168.137.188/glance
openstack-config --set /etc/glance/glance-registry.conf DEFAULT sql_connection mysql://glance:glance@192.168.137.188/glance
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host 192.168.137.188
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name admin
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user admin
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password openstack
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host 192.168.137.188
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name admin
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user admin
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password openstack
openstack-config --set /etc/glance/glance-api.conf DEFAULT notifier_strategy noop;
openstack-db --init --service glance --password glance --rootpw openstack;
chown -R glance:glance /etc/glance
chown -R glance:glance /var/lib/glance
chown -R glance:glance /var/log/glance
chkconfig openstack-glance-api on
chkconfig openstack-glance-registry on
service openstack-glance-api start
service openstack-glance-registry start
source /root/keystone_admin
glance image-list
glance image-create --name "cirros-0.3.1-x86_64" --disk-format qcow2 --container-format bare --is-public true --file cirros-0.3.1-x86_64-disk.img
The image file will be created under /var/lib/glance/images.
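To confirm the upload succeeded:
glance image-list                  # the new image should show status 'active'
ls -lh /var/lib/glance/images/     # one file per image, named by image ID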
--------------------------Cinder----------------------
yum install openstack-cinder openstack-utils python-kombu python-amqplib
source /root/keystone_admin
keystone service-create --name cinder --type volume --description "Cinder Volume Service"
keystone endpoint-create --service cinder --publicurl "http://192.168.137.188:8776/v1/\$(tenant_id)s" --adminurl "http://192.168.137.188:8776/v1/\$(tenant_id)s" --internalurl "http://192.168.137.188:8776/v1/\$(tenant_id)s" --region beijing
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf DEFAULT sql_connection mysql://cinder:cinder@192.168.137.188/cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_group vgstorage
openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.lvm.LVMISCSIDriver
openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend cinder.openstack.common.rpc.impl_kombu
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.137.188
openstack-config --set /etc/cinder/cinder.conf DEFAULT rabbit_host 192.168.137.188
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host 192.168.137.188
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name admin
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user admin
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password openstack
openstack-db --init --service cinder --password cinder --rootpw openstack;
chown -R cinder:cinder /etc/cinder
chown -R cinder:cinder /var/lib/cinder
chown -R cinder:cinder /var/log/cinder
Edit /etc/tgt/targets.conf and add one line:
include /etc/cinder/volumes/*
chkconfig tgtd on
service tgtd start
chkconfig openstack-cinder-api on
chkconfig openstack-cinder-scheduler on
chkconfig openstack-cinder-volume on
service openstack-cinder-api start
service openstack-cinder-scheduler start
service openstack-cinder-volume start
cinder list
cinder create --display-name=vol-1G 1
lvdisplay
Once the volume has been attached to a VM, the iSCSI target can be inspected with:
tgtadm --lld iscsi --op show --mode target
Reference: http://www.cyberciti.biz/tips/howto-setup-linux-iscsi-target-sanwith-tgt.html

--------------- Swift installation -------------------------
yum install xfsprogs openstack-swift-proxy openstack-swift-object openstack-swift-container openstack-swift-account openstack-utils memcached
Swift can simply share the volume group already used by Cinder:
lvcreate --size 5G --name lvswift vgstorage
mkfs.xfs -f -i size=1024 /dev/vgstorage/lvswift
mkdir /sdb1
mount /dev/vgstorage/lvswift /sdb1
vi /etc/fstab
Add:
/dev/vgstorage/lvswift /sdb1 xfs defaults 0 0
mkdir /sdb1/1 /sdb1/2 /sdb1/3 /sdb1/4
for x in {1..4}; do ln -s /sdb1/$x /srv/$x; done
mkdir -p /etc/swift/object-server /etc/swift/container-server /etc/swift/account-server /srv/1/node/sdb1 /srv/2/node/sdb2 /srv/3/node/sdb3 /srv/4/node/sdb4 /var/run/swift
chown -R swift:swift /etc/swift/ /srv/ /var/run/swift/ /sdb1
keystone service-create --name swift --type object-store --description "Swift Storage Service"
keystone endpoint-create --service swift --publicurl "http://192.168.137.188:8080/v1/AUTH_%(tenant_id)s" --adminurl "http://192.168.137.188:8080/v1/AUTH_%(tenant_id)s" --internalurl "http://192.168.137.188:8080/v1/AUTH_%(tenant_id)s" --region beijing
In /etc/xinetd.d/rsync, set:
disable = no
Then restart xinetd: service xinetd restart
/etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 127.0.0.1
[account6012]
max connections = 25
path = /srv/1/node/
read only = false
lock file = /var/lock/account6012.lock
[account6022]
max connections = 25
path = /srv/2/node/
read only = false
lock file = /var/lock/account6022.lock
[account6032]
max connections = 25
path = /srv/3/node/
read only = false
lock file = /var/lock/account6032.lock
[account6042]
max connections = 25
path = /srv/4/node/
read only = false
lock file = /var/lock/account6042.lock
[container6011]
max connections = 25
path = /srv/1/node/
read only = false
lock file = /var/lock/container6011.lock
[container6021]
max connections = 25
path = /srv/2/node/
read only = false
lock file = /var/lock/container6021.lock
[container6031]
max connections = 25
path = /srv/3/node/
read only = false
lock file = /var/lock/container6031.lock
[container6041]
max connections = 25
path = /srv/4/node/
read only = false
lock file = /var/lock/container6041.lock
[object6010]
max connections = 25
path = /srv/1/node/
read only = false
lock file = /var/lock/object6010.lock
[object6020]
max connections = 25
path = /srv/2/node/
read only = false
lock file = /var/lock/object6020.lock
[object6030]
max connections = 25
path = /srv/3/node/
read only = false
lock file = /var/lock/object6030.lock
[object6040]
max connections = 25
path = /srv/4/node/
read only = false
lock file = /var/lock/object6040.lock

mkdir /var/log/swift
chown swift:swift /var/log/swift
[iyunv@haoning log(keystone_admin)]# cat /etc/rsyslog.d/10-swift.conf
# Uncomment the following to have a log containing all logs together
#local1,local2,local3,local4,local5.* /var/log/swift/all.log
# Uncomment the following to have hourly proxy logs for stats processing
#$template HourlyProxyLog,"/var/log/swift/hourly/%$YEAR%%$MONTH%%$DAY%%$HOUR%"
#local1.*;local1.!notice ?HourlyProxyLog
local1.*;local1.!notice /var/log/swift/proxy.log
local1.notice /var/log/swift/proxy.error
local1.* ~
local2.*;local2.!notice /var/log/swift/storage1.log
local2.notice /var/log/swift/storage1.error
local2.* ~
local3.*;local3.!notice /var/log/swift/storage2.log
local3.notice /var/log/swift/storage2.error
local3.* ~
local4.*;local4.!notice /var/log/swift/storage3.log
local4.notice /var/log/swift/storage3.error
local4.* ~
local5.*;local5.!notice /var/log/swift/storage4.log
local5.notice /var/log/swift/storage4.error
local5.* ~

Edit /etc/rsyslog.conf and add one line:
$PrivDropToGroup adm
Then restart rsyslog: service rsyslog restart
mkdir -p /var/log/swift/hourly
chmod -R g+w /var/log/swift
mkdir /etc/swift/bak
cp /etc/swift/account-server.conf /etc/swift/bak
cp /etc/swift/container-server.conf /etc/swift/bak
cp /etc/swift/object-expirer.conf /etc/swift/bak
cp /etc/swift/object-server.conf /etc/swift/bak
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_tenant_name admin
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_user admin
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_password openstack
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_host 192.168.137.188
openstack-config --set /etc/swift/proxy-server.conf filter:keystone operator_roles admin,SwiftOperator,Member
Before the change, /etc/swift/swift.conf contains:
[iyunv@haoning swift(keystone_admin)]# vim /etc/swift/swift.conf
[swift-hash]
swift_hash_path_suffix = %SWIFT_HASH_PATH_SUFFIX%
Run:
openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_suffix $(openssl rand -hex 10)
Afterwards it contains:
[swift-hash]
swift_hash_path_suffix = d2ddc14d9c576c910da3
----------------------------------------------------------
Under /etc/swift/account-server, create four configuration files: 1.conf, 2.conf, 3.conf, 4.conf.
1.conf
[DEFAULT]
devices = /srv/1/node
mount_check = false
disable_fallocate = true
bind_port = 6012
user = swift
log_facility = LOG_LOCAL2
recon_cache_path = /var/cache/swift
eventlet_debug = true
[pipeline:main]
pipeline = recon account-server
[app:account-server]
use = egg:swift#account
[filter:recon]
use = egg:swift#recon
[account-replicator]
vm_test_mode = yes
[account-auditor]
[account-reaper]
2.conf
[DEFAULT]
devices = /srv/2/node
mount_check = false
disable_fallocate = true
bind_port = 6022
user = swift
log_facility = LOG_LOCAL3
recon_cache_path = /var/cache/swift2
eventlet_debug = true
[pipeline:main]
pipeline = recon account-server
[app:account-server]
use = egg:swift#account
[filter:recon]
use = egg:swift#recon
[account-replicator]
vm_test_mode = yes
[account-auditor]
[account-reaper]
[iyunv@haoning account-server(keystone_admin)]# cat 3.conf
[DEFAULT]
devices = /srv/3/node
mount_check = false
disable_fallocate = true
bind_port = 6032
user = swift
log_facility = LOG_LOCAL4
recon_cache_path = /var/cache/swift3
eventlet_debug = true
[pipeline:main]
pipeline = recon account-server
[app:account-server]
use = egg:swift#account
[filter:recon]
use = egg:swift#recon
[account-replicator]
vm_test_mode = yes
[account-auditor]
[account-reaper]
[iyunv@haoning account-server(keystone_admin)]# cat 4.conf
[DEFAULT]
devices = /srv/4/node
mount_check = false
disable_fallocate = true
bind_port = 6042
user = swift
log_facility = LOG_LOCAL5
recon_cache_path = /var/cache/swift4
eventlet_debug = true
[pipeline:main]
pipeline = recon account-server
[app:account-server]
use = egg:swift#account
[filter:recon]
use = egg:swift#recon
[account-replicator]
vm_test_mode = yes
[account-auditor]
[account-reaper]

--------------------------------------------------
Under /etc/swift/container-server, create four files (1.conf, 2.conf, 3.conf, 4.conf) with the following contents.
cd /etc/swift/container-server
[iyunv@haoning container-server(keystone_admin)]# cat 1.conf
[DEFAULT]
devices = /srv/1/node
mount_check = false
disable_fallocate = true
bind_port = 6011
user = swift
log_facility = LOG_LOCAL2
recon_cache_path = /var/cache/swift
eventlet_debug = true
[pipeline:main]
pipeline = recon container-server
[app:container-server]
use = egg:swift#container
[filter:recon]
use = egg:swift#recon
[container-replicator]
vm_test_mode = yes
[container-updater]
[container-auditor]
[container-sync]

[iyunv@haoning container-server(keystone_admin)]# cat 2.conf
[DEFAULT]
devices = /srv/2/node
mount_check = false
disable_fallocate = true
bind_port = 6021
user = swift
log_facility = LOG_LOCAL3
recon_cache_path = /var/cache/swift2
eventlet_debug = true
[pipeline:main]
pipeline = recon container-server
[app:container-server]
use = egg:swift#container
[filter:recon]
use = egg:swift#recon
[container-replicator]
vm_test_mode = yes
[container-updater]
[container-auditor]
[container-sync]

[iyunv@haoning container-server(keystone_admin)]# cat 3.conf
[DEFAULT]
devices = /srv/3/node
mount_check = false
disable_fallocate = true
bind_port = 6031
user = swift
log_facility = LOG_LOCAL4
recon_cache_path = /var/cache/swift3
eventlet_debug = true
[pipeline:main]
pipeline = recon container-server
[app:container-server]
use = egg:swift#container
[filter:recon]
use = egg:swift#recon
[container-replicator]
vm_test_mode = yes
[container-updater]
[container-auditor]
[container-sync]
[iyunv@haoning container-server(keystone_admin)]# cat 4.conf
[DEFAULT]
devices = /srv/4/node
mount_check = false
disable_fallocate = true
bind_port = 6041
user = swift
log_facility = LOG_LOCAL5
recon_cache_path = /var/cache/swift4
eventlet_debug = true
[pipeline:main]
pipeline = recon container-server
[app:container-server]
use = egg:swift#container
[filter:recon]
use = egg:swift#recon
[container-replicator]
vm_test_mode = yes
[container-updater]
[container-auditor]
[container-sync]
-------------------------------------------
cd /etc/swift/object-server
[iyunv@haoning object-server(keystone_admin)]# cat 1.conf
[DEFAULT]
devices = /srv/1/node
mount_check = false
disable_fallocate = true
bind_port = 6010
user = swift
log_facility = LOG_LOCAL2
recon_cache_path = /var/cache/swift
eventlet_debug = true
[pipeline:main]
pipeline = recon object-server
[app:object-server]
use = egg:swift#object
[filter:recon]
use = egg:swift#recon
[object-replicator]
vm_test_mode = yes
[object-updater]
[object-auditor]
[iyunv@haoning object-server(keystone_admin)]# cat 2.conf
[DEFAULT]
devices = /srv/2/node
mount_check = false
disable_fallocate = true
bind_port = 6020
user = swift
log_facility = LOG_LOCAL3
recon_cache_path = /var/cache/swift2
eventlet_debug = true
[pipeline:main]
pipeline = recon object-server
[app:object-server]
use = egg:swift#object
[filter:recon]
use = egg:swift#recon
[object-replicator]
vm_test_mode = yes
[object-updater]
[object-auditor]
[iyunv@haoning object-server(keystone_admin)]# cat 3.conf
[DEFAULT]
devices = /srv/3/node
mount_check = false
disable_fallocate = true
bind_port = 6030
user = swift
log_facility = LOG_LOCAL4
recon_cache_path = /var/cache/swift3
eventlet_debug = true
[pipeline:main]
pipeline = recon object-server
[app:object-server]
use = egg:swift#object
[filter:recon]
use = egg:swift#recon
[object-replicator]
vm_test_mode = yes
[object-updater]
[object-auditor]
[iyunv@haoning object-server(keystone_admin)]# cat 4.conf
[DEFAULT]
devices = /srv/4/node
mount_check = false
disable_fallocate = true
bind_port = 6040
user = swift
log_facility = LOG_LOCAL5
recon_cache_path = /var/cache/swift4
eventlet_debug = true
[pipeline:main]
pipeline = recon object-server
[app:object-server]
use = egg:swift#object
[filter:recon]
use = egg:swift#recon
[object-replicator]
vm_test_mode = yes
[object-updater]
[object-auditor]
--------------------------
vim  /root/remakering.sh
#!/bin/bash
cd /etc/swift
rm -f *.builder *.ring.gz backups/*.builder backups/*.ring.gz
swift-ring-builder object.builder create 10 3 1
swift-ring-builder object.builder add r1z1-192.168.137.188:6010/sdb1 1
swift-ring-builder object.builder add r1z2-192.168.137.188:6020/sdb2 1
swift-ring-builder object.builder add r1z3-192.168.137.188:6030/sdb3 1
swift-ring-builder object.builder add r1z4-192.168.137.188:6040/sdb4 1
swift-ring-builder object.builder rebalance
swift-ring-builder container.builder create 10 3 1
swift-ring-builder container.builder add r1z1-192.168.137.188:6011/sdb1 1
swift-ring-builder container.builder add r1z2-192.168.137.188:6021/sdb2 1
swift-ring-builder container.builder add r1z3-192.168.137.188:6031/sdb3 1
swift-ring-builder container.builder add r1z4-192.168.137.188:6041/sdb4 1
swift-ring-builder container.builder rebalance
swift-ring-builder account.builder create 10 3 1
swift-ring-builder account.builder add r1z1-192.168.137.188:6012/sdb1 1
swift-ring-builder account.builder add r1z2-192.168.137.188:6022/sdb2 1
swift-ring-builder account.builder add r1z3-192.168.137.188:6032/sdb3 1
swift-ring-builder account.builder add r1z4-192.168.137.188:6042/sdb4 1
swift-ring-builder account.builder rebalance

chmod 755 /root/remakering.sh
chown -R swift:swift /srv/
chown -R swift:swift /etc/swift
chown -R swift:swift /sdb1
chown -R swift:swift /var/log/swift
Run the script:
/root/remakering.sh
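To inspect the rings that were just built:
cd /etc/swift
swift-ring-builder account.builder      # prints partitions, replicas and the four devices
swift-ring-builder container.builder
swift-ring-builder object.builder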

Start memcached:
/etc/init.d/memcached start
chkconfig memcached on
Start Swift (the Swift services are not configured to start at boot):
swift-init all start
If startup reports errors, add the following two lines to the [DEFAULT] section of proxy-server.conf:
log_facility = LOG_LOCAL1
log_level = DEBUG
cd /etc/swift
ls -alR
ls -R
Test with the swift client:
[iyunv@haoning srv(keystone_admin)]# swift -v -V 2.0 -A http://192.168.137.188:5000/v2.0 -U admin:admin -K openstack stat
StorageURL: http://192.168.137.188:8080/v1/AUTH_1e8dfdfd938e4e23b8d147504a748272
Auth Token: 766ea165dd0a4965b51d262eecb4a567
Account: AUTH_1e8dfdfd938e4e23b8d147504a748272
Containers: 0
Objects: 0
Bytes: 0
Content-Type: text/plain; charset=utf-8
X-Timestamp: 1434359625.62653
X-Put-Timestamp: 1434359625.62653
[iyunv@haoning srv(keystone_admin)]#

swift post testcontainer
swift upload testcontainer a.txt
swift list testcontainer
swift download testcontainer a.txt
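A quick round-trip check on the object just uploaded:
swift stat testcontainer    # Objects: 1
md5sum a.txt                # the downloaded copy should match the original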
------------------------------------- Horizon installation -------------------------
yum install -y mod_wsgi httpd mod_ssl memcached python-memcached openstack-dashboard
vim /etc/openstack-dashboard/local_settings
Comment out the default local-memory cache block and replace it with memcached:
#CACHES = {
#    'default': {
#        'BACKEND' : 'django.core.cache.backends.locmem.LocMemCache'
#    }
#}
CACHES = {
    'default': {
        'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION' : '127.0.0.1:11211',
    }
}
Also modify the following lines:
ALLOWED_HOSTS = ['*']
OPENSTACK_HOST = "192.168.137.188"
chown -R apache:apache /etc/openstack-dashboard/ /var/lib/openstack-dashboard/
chkconfig httpd on
chkconfig memcached on
service httpd start
service memcached start
The dashboard is now reachable at http://192.168.137.188/dashboard; its configuration lives under /etc/openstack-dashboard/.
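To verify Apache is serving it without opening a browser:
curl -sI http://192.168.137.188/dashboard | head -1    # expect 200 OK or a 302 redirect to the login page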

-----------------------------neutron
yum  install openvswitch
chkconfig openvswitch on
service openvswitch start
ovs-vsctl add-br br-int
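Confirm the integration bridge was created:
ovs-vsctl show       # br-int should be listed
ovs-vsctl list-br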

[iyunv@haoning openstack-dashboard(keystone_admin)]# service openvswitch start
/etc/openvswitch/conf.db does not exist ... (warning).
Creating empty database /etc/openvswitch/conf.db           [  OK  ]
Starting ovsdb-server                                      [  OK  ]
Configuring Open vSwitch system IDs                        [  OK  ]
Inserting openvswitch module                               [  OK  ]
Starting ovs-vswitchd                                      [  OK  ]
Enabling remote OVSDB managers                             [  OK  ]
yum install iproute dnsmasq dnsmasq-utils
If dnsmasq-utils is missing from the configured repos, download it (for example from http://mirrors.lightcloud.cn/) and install the package manually:
rpm -ivh dnsmasq-utils-2.48-6.el6.x86_64.rpm

mysql -u root -popenstack
CREATE DATABASE neutron;
GRANT ALL ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
GRANT ALL ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
FLUSH PRIVILEGES;
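Verify the grants from the address used in the connection string:
mysql -u neutron -pneutron -h 192.168.137.188 -e 'SHOW DATABASES;'    # neutron should appear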

keystone service-create --name neutron --type network --description "Neutron Networking Service"
keystone endpoint-create --service neutron --publicurl "http://192.168.137.188:9696" --adminurl "http://192.168.137.188:9696" --internalurl "http://192.168.137.188:9696" --region beijing
yum install openstack-neutron python-kombu python-amqplib python-pyudev python-stevedore openstack-utils openstack-neutron-openvswitch openvswitch -y

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host 192.168.137.188
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name admin
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user admin
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password openstack
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_kombu
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_host 192.168.137.188
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
openstack-config --set /etc/neutron/neutron.conf DEFAULT control_exchange neutron
openstack-config --set /etc/neutron/neutron.conf database connection mysql://neutron:neutron@192.168.137.188/neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True
Note: if neutron.conf ends up with a duplicate [DEFAULT] section header, delete one of them.

chkconfig neutron-server on
ln -s /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini /etc/neutron/plugin.ini -f
openstack-config --set /etc/neutron/plugin.ini OVS tenant_network_type gre
openstack-config --set /etc/neutron/plugin.ini OVS tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugin.ini OVS enable_tunneling True
openstack-config --set /etc/neutron/plugin.ini OVS local_ip 192.168.137.188
openstack-config --set /etc/neutron/plugin.ini OVS integration_bridge br-int
openstack-config --set /etc/neutron/plugin.ini OVS tunnel_bridge br-tun
openstack-config --set /etc/neutron/plugin.ini SECURITYGROUP firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

chkconfig neutron-openvswitch-agent on
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT use_namespaces True
chkconfig neutron-dhcp-agent on
The host already has NICs eth0 and eth1 (used for the internal networks); eth2 is dedicated to the external network. Run:
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth2
ip addr add 192.168.100.231/24 dev br-ex
ip link set br-ex up
echo "ip addr add 192.168.100.231/24 dev br-ex" >> /etc/rc.local
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT use_namespaces True
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge br-ex
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT enable_metadata_proxy True;
chkconfig neutron-l3-agent on
Modify /etc/neutron/metadata_agent.ini. First comment out the existing auth_region line, then set the region to match the Keystone endpoints created earlier:
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_region beijing
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url http://192.168.137.188:35357/v2.0
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_tenant_name admin
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_user admin
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_password openstack
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip 192.168.137.188
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret awcloud

chkconfig neutron-metadata-agent on
[iyunv@haoning ~]# neutron-db-manage --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head
No handlers could be found for logger "neutron.common.legacy"
INFO  [alembic.migration] Context impl MySQLImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
INFO  [alembic.migration] Running upgrade None -> folsom
INFO  [alembic.migration] Running upgrade folsom -> 2c4af419145b
INFO  [alembic.migration] Running upgrade 2c4af419145b -> 5a875d0e5c
INFO  [alembic.migration] Running upgrade 5a875d0e5c -> 48b6f43f7471
INFO  [alembic.migration] Running upgrade 48b6f43f7471 -> 3cb5d900c5de
INFO  [alembic.migration] Running upgrade 3cb5d900c5de -> 1d76643bcec4
INFO  [alembic.migration] Running upgrade 1d76643bcec4 -> 2a6d0b51f4bb
INFO  [alembic.migration] Running upgrade 2a6d0b51f4bb -> 1b693c095aa3
INFO  [alembic.migration] Running upgrade 1b693c095aa3 -> 1149d7de0cfa
INFO  [alembic.migration] Running upgrade 1149d7de0cfa -> 49332180ca96
INFO  [alembic.migration] Running upgrade 49332180ca96 -> 38335592a0dc
INFO  [alembic.migration] Running upgrade 38335592a0dc -> 54c2c487e913
INFO  [alembic.migration] Running upgrade 54c2c487e913 -> 45680af419f9
INFO  [alembic.migration] Running upgrade 45680af419f9 -> 1c33fa3cd1a1
INFO  [alembic.migration] Running upgrade 1c33fa3cd1a1 -> 363468ac592c
INFO  [alembic.migration] Running upgrade 363468ac592c -> 511471cc46b
INFO  [alembic.migration] Running upgrade 511471cc46b -> 3b54bf9e29f7
INFO  [alembic.migration] Running upgrade 3b54bf9e29f7 -> 4692d074d587
INFO  [alembic.migration] Running upgrade 4692d074d587 -> 1341ed32cc1e
INFO  [alembic.migration] Running upgrade 1341ed32cc1e -> grizzly
INFO  [alembic.migration] Running upgrade grizzly -> f489cf14a79c
INFO  [alembic.migration] Running upgrade f489cf14a79c -> 176a85fc7d79
INFO  [alembic.migration] Running upgrade 176a85fc7d79 -> 32b517556ec9
INFO  [alembic.migration] Running upgrade 32b517556ec9 -> 128e042a2b68
INFO  [alembic.migration] Running upgrade 128e042a2b68 -> 5ac71e65402c
INFO  [alembic.migration] Running upgrade 5ac71e65402c -> 3cbf70257c28
INFO  [alembic.migration] Running upgrade 3cbf70257c28 -> 5918cbddab04
INFO  [alembic.migration] Running upgrade 5918cbddab04 -> 3cabb850f4a5
INFO  [alembic.migration] Running upgrade 3cabb850f4a5 -> b7a8863760e
INFO  [alembic.migration] Running upgrade b7a8863760e -> 13de305df56e
INFO  [alembic.migration] Running upgrade 13de305df56e -> 20ae61555e95
INFO  [alembic.migration] Running upgrade 20ae61555e95 -> 477a4488d3f4
INFO  [alembic.migration] Running upgrade 477a4488d3f4 -> 2032abe8edac
INFO  [alembic.migration] Running upgrade 2032abe8edac -> 52c5e4a18807
INFO  [alembic.migration] Running upgrade 52c5e4a18807 -> 557edfc53098
INFO  [alembic.migration] Running upgrade 557edfc53098 -> e6b16a30d97
INFO  [alembic.migration] Running upgrade e6b16a30d97 -> 39cf3f799352
INFO  [alembic.migration] Running upgrade 39cf3f799352 -> 52ff27f7567a
INFO  [alembic.migration] Running upgrade 52ff27f7567a -> 11c6e18605c8
INFO  [alembic.migration] Running upgrade 11c6e18605c8 -> 35c7c198ddea
INFO  [alembic.migration] Running upgrade 35c7c198ddea -> 263772d65691
INFO  [alembic.migration] Running upgrade 263772d65691 -> c88b6b5fea3
INFO  [alembic.migration] Running upgrade c88b6b5fea3 -> f9263d6df56
INFO  [alembic.migration] Running upgrade f9263d6df56 -> 569e98a8132b
INFO  [alembic.migration] Running upgrade 569e98a8132b -> 86cf4d88bd3
INFO  [alembic.migration] Running upgrade 86cf4d88bd3 -> 3c6e57a23db4
INFO  [alembic.migration] Running upgrade 3c6e57a23db4 -> 63afba73813
INFO  [alembic.migration] Running upgrade 63afba73813 -> 40dffbf4b549
INFO  [alembic.migration] Running upgrade 40dffbf4b549 -> 53bbd27ec841
INFO  [alembic.migration] Running upgrade 53bbd27ec841 -> 46a0efbd8f0
INFO  [alembic.migration] Running upgrade 46a0efbd8f0 -> 2a3bae1ceb8
INFO  [alembic.migration] Running upgrade 2a3bae1ceb8 -> 14f24494ca31
INFO  [alembic.migration] Running upgrade 14f24494ca31 -> 32a65f71af51
INFO  [alembic.migration] Running upgrade 32a65f71af51 -> 66a59a7f516
INFO  [alembic.migration] Running upgrade 66a59a7f516 -> 51b4de912379
INFO  [alembic.migration] Running upgrade 51b4de912379 -> 1efb859137.188
INFO  [alembic.migration] Running upgrade 1efb859137.188 -> 38fc1f6789f8
INFO  [alembic.migration] Running upgrade 38fc1f6789f8 -> 4a666eb208c2
INFO  [alembic.migration] Running upgrade 4a666eb208c2 -> 338d7508968c
INFO  [alembic.migration] Running upgrade 338d7508968c -> 3ed8f075e38a
INFO  [alembic.migration] Running upgrade 3ed8f075e38a -> 3d6fae8b70b0
INFO  [alembic.migration] Running upgrade 3d6fae8b70b0 -> 1064e98b7917
INFO  [alembic.migration] Running upgrade 1064e98b7917 -> 2528ceb28230
INFO  [alembic.migration] Running upgrade 2528ceb28230 -> 3a520dd165d0
INFO  [alembic.migration] Running upgrade 3a520dd165d0 -> 27ef74513d33
INFO  [alembic.migration] Running upgrade 27ef74513d33 -> 49f5e553f61f
INFO  [alembic.migration] Running upgrade 49f5e553f61f -> havana
INFO  [alembic.migration] Running upgrade havana -> e197124d4b9
INFO  [alembic.migration] Running upgrade e197124d4b9 -> 1fcfc149aca4
INFO  [alembic.migration] Running upgrade 1fcfc149aca4 -> 50e86cb2637a
INFO  [alembic.migration] Running upgrade 50e86cb2637a -> ed93525fd003
INFO  [alembic.migration] Running upgrade ed93525fd003 -> 8f682276ee4
INFO  [alembic.migration] Running upgrade 8f682276ee4 -> 1421183d533f
INFO  [alembic.migration] Running upgrade 1421183d533f -> 3d3cb89d84ee
INFO  [alembic.migration] Running upgrade 3d3cb89d84ee -> 4ca36cfc898c
INFO  [alembic.migration] Running upgrade 4ca36cfc898c -> 27cc183af192
INFO  [alembic.migration] Running upgrade 27cc183af192 -> 50d5ba354c23
INFO  [alembic.migration] Running upgrade 50d5ba354c23 -> 157a5d299379
INFO  [alembic.migration] Running upgrade 157a5d299379 -> 3d2585038b95
INFO  [alembic.migration] Running upgrade 3d2585038b95 -> abc88c33f74f
INFO  [alembic.migration] Running upgrade abc88c33f74f -> 1b2580001654
INFO  [alembic.migration] Running upgrade 1b2580001654 -> e766b19a3bb
INFO  [alembic.migration] Running upgrade e766b19a3bb -> f44ab9871cd6
INFO  [alembic.migration] Running upgrade f44ab9871cd6 -> 2eeaf963a447
INFO  [alembic.migration] Running upgrade 2eeaf963a447 -> fcac4c42e2cc
INFO  [alembic.migration] Running upgrade fcac4c42e2cc -> 492a106273f8
[iyunv@haoning ~]#
chown -R neutron:neutron /etc/neutron
The network node runs five main services: neutron-server, neutron-openvswitch-agent, neutron-dhcp-agent, neutron-l3-agent, and neutron-metadata-agent.
chkconfig --list|grep neutron|grep 3:on
/var/log/neutron/
service neutron-openvswitch-agent restart
service neutron-dhcp-agent restart
service neutron-l3-agent restart
service neutron-metadata-agent restart
service neutron-server restart
neutron net-list
If this returns without errors, Neutron is working.
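As an illustrative next step (the names and address range below are examples matching the br-ex subnet configured above), the external network can be created with the standard Neutron CLI:
neutron net-create ext-net --router:external=True
neutron subnet-create ext-net 192.168.100.0/24 --name ext-subnet --gateway 192.168.100.1 --disable-dhcp --allocation-pool start=192.168.100.100,end=192.168.100.200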
---------------------Compute------------------------------
rpm --import http://192.168.137.205/epel-depends/RPM-GPG-KEY-EPEL-6
rpm --import http://192.168.137.205/centos/RPM-GPG-KEY-CentOS-6
rpm --import http://192.168.137.205/rabbitmq/rabbitmq-signing-key-public.asc
rpm --import http://192.168.137.205/rdo-icehouse-b3/RPM-GPG-KEY-RDO-Icehouse
Install Open vSwitch:
chkconfig openvswitch on
service openvswitch start
ovs-vsctl add-br br-int    (create the default integration bridge)
Upgrade the iproute and dnsmasq packages:
yum update -y iproute dnsmasq dnsmasq-utils
yum install -y openstack-nova openstack-utils python-kombu python-amqplib openstack-neutron-openvswitch  python-stevedore
dnsmasq-utils needs to be installed separately (see the note in the Neutron section above).
mysql -u root -popenstack
CREATE DATABASE nova;
GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
FLUSH PRIVILEGES;

keystone service-create --name compute --type compute --description "OpenStack Compute Service"
keystone endpoint-create --service compute --publicurl "http://192.168.137.188:8774/v2/%(tenant_id)s" --adminurl "http://192.168.137.188:8774/v2/%(tenant_id)s" --internalurl "http://192.168.137.188:8774/v2/%(tenant_id)s" --region beijing
openstack-config --set /etc/nova/nova.conf database connection mysql://nova:nova@192.168.137.188/nova;
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host 192.168.137.188;
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.137.188;
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0;
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.137.188;
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://192.168.137.188:6080/vnc_auto.html;
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone;
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend nova.openstack.common.rpc.impl_kombu;
openstack-config --set /etc/nova/nova.conf DEFAULT glance_host 192.168.137.188;
openstack-config --set /etc/nova/nova.conf DEFAULT api_paste_config /etc/nova/api-paste.ini;
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host 192.168.137.188;
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 5000;
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http;
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_version v2.0;
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user admin;
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name admin;
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password openstack;
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis ec2,osapi_compute,metadata;
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver;
openstack-config --set /etc/nova/nova.conf DEFAULT network_manager nova.network.neutron.manager.NeutronManager;
openstack-config --set /etc/nova/nova.conf DEFAULT service_neutron_metadata_proxy True;
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_metadata_proxy_shared_secret awcloud;
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API;
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_use_dhcp True;
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://192.168.137.188:9696;
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_username admin;
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_password openstack;
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_tenant_name admin;
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_region_name beijing;
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://192.168.137.188:5000/v2.0;
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_auth_strategy keystone;
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron;
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver;
openstack-config --set /etc/nova/nova.conf libvirt vif_driver nova.virt.libvirt.vif.LibvirtGenericVIFDriver;
If the CPU does not support hardware virtualization, change the setting in /etc/nova/nova.conf:
virt_type=kvm  ->  virt_type=qemu
chkconfig openstack-nova-consoleauth on
chkconfig openstack-nova-api on
chkconfig openstack-nova-scheduler on
chkconfig openstack-nova-conductor on
chkconfig openstack-nova-compute on
chkconfig openstack-nova-novncproxy on
chown -R neutron:neutron /etc/neutron/*

service openstack-nova-conductor restart
service openstack-nova-api restart
service openstack-nova-scheduler restart
service openstack-nova-compute restart
service openstack-nova-consoleauth restart
service openstack-nova-novncproxy restart
nova-manage db sync
service neutron-openvswitch-agent start
Restart the OVS agent and the Nova services:
service neutron-openvswitch-agent restart
service openstack-nova-conductor restart
service openstack-nova-api restart
service openstack-nova-scheduler restart
service openstack-nova-compute restart
service openstack-nova-consoleauth restart
service openstack-nova-novncproxy restart
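Finally, verify the deployment end to end. The flavor, image, and net-id below are examples; substitute IDs from your own environment:
nova-manage service list    # every service should show :-) in the State column
nova image-list
neutron net-list            # note the ID of a tenant network
nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64 --nic net-id=<NET_ID> testvm
nova list                   # the instance should reach ACTIVE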
