OpenStack Icehouse Source Installation Guide (CentOS 6.5)

1. Hardware Preparation
- One or more physical machines with VT-capable CPUs, at least 4 GB of RAM, and more than 50 GB of free disk space.
- If using a single machine, install VirtualBox or VMware Workstation.
- Prepare two CentOS-6.5-x86_64 virtual machines.

Lab architecture: two nodes, openstack-node1.example.com and openstack-node2.example.com.

2. System Environment Preparation
2.1 Install CentOS-6.5-x86_64
1. Base system: 1 vCPU + 1024 MB RAM + 20 GB disk.
2. Network: two NICs, one Host-Only and one bridged.
3. Package selection: Basic Server.
4. Disable iptables and SELinux.

2.2 NTP time synchronization
[iyunv@openstack-node1 src]# yum install ntp
[iyunv@openstack-node1 src]# service ntpd start
[iyunv@openstack-node1 src]# chkconfig ntpd on

2.3 Kernel parameter tuning
[iyunv@openstack ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
[iyunv@openstack ~]# sysctl -p

2.4 Disable iptables and SELinux
[iyunv@openstack-node1 ~]# /etc/init.d/iptables stop
[iyunv@openstack-node1 ~]# chkconfig iptables off
[iyunv@openstack-node1 ~]# vim /etc/sysconfig/selinux
SELINUX=disabled
[iyunv@openstack ~]# reboot

2.5 Install the EPEL repository and base packages
[iyunv@openstack-node1 ~]# yum install epel-release -y    # install the EPEL repository
Install the following packages on all OpenStack nodes.
[iyunv@openstack-node1 ~]# yum install -y python-pip gcc gcc-c++ make libtool patch automake python-devel libxslt-devel MySQL-python openssl-devel libudev-devel git wget libvirt-python libvirt qemu-kvm gedit python-numdisplay device-mapper bridge-utils libffi-devel libffi lrzsz swig

2.6 OpenStack source package preparation
[iyunv@openstack-node1 ~]# cd /usr/local/src
[iyunv@openstack-node1 src]#
2.6.1 Extract the source packages
[iyunv@openstack-node1 ~]# cd /usr/local/src
[iyunv@openstack-node1 ~]# for i in ./*.tar.gz ; do tar -zxvf $i ; done
2.6.2 Install the required Python packages
[iyunv@openstack-node1 src]# pip install --upgrade pip
[iyunv@openstack-node1 src]# pip install --upgrade setuptools
[iyunv@openstack-node1 src]# vim openstack.txt    # these exact pinned versions are required; anything else causes errors
alembic==0.7.5.post2
amqp==1.4.6
amqplib==1.0.2
anyjson==0.3.3
argparse==1.2.1
Babel==1.3
boto==2.34.0
cffi==1.1.2
cliff==1.14.0
cmd2==0.6.8
cryptography==1.0
debtcollector==0.7.0
decorator==4.0.2
Django==1.6.7
django-appconf==0.6
django-compressor==1.4
django-openstack-auth==1.1.7
dogpile.cache==0.5.6
dogpile.core==0.4.1
ecdsa==0.13
enum34==1.0.4
eventlet==0.15.2
futures==3.0.3
glusterfs-api==3.6.0.54
greenlet==0.4.5
httplib2==0.9
idna==2.0
importlib==1.0.3
iniparse==0.3.1
ipaddress==1.0.14
iso8601==0.1.10
Jinja2==2.7.3
jsonpatch==1.11
jsonpointer==1.9
jsonrpclib==0.1.3
jsonschema==2.5.1
kombu==3.0.7
lesscpy==0.10.2
lockfile==0.10.2
lxml==3.4.2
Mako==1.0.1
MarkupSafe==0.23
monotonic==0.3
msgpack-python==0.4.6
MySQL-python==1.2.3rc1
netaddr==0.7.14
netifaces==0.10.4
networkx==1.8.1
nose==0.10.4
numdisplay==1.5
numpy==1.4.1
oauthlib==1.0.1
ordereddict==1.2
oslo.config==1.4.0
oslo.i18n==1.0.0
oslo.messaging==1.4.1
oslo.rootwrap==1.3.0
oslo.serialization==1.2.0
oslo.utils==1.2.0
oslo.vmware==0.6.0
paramiko==1.15.2
passlib==1.6.5
Paste==2.0.2
PasteDeploy==1.5.2
pbr==0.11.0
ply==3.6
posix-ipc==1.0.0
prettytable==0.7.2
pyasn1==0.1.7
pycadf==0.5.1
pycparser==2.14
pycrypto==2.6.1
pycurl==7.19.0
pygpgme==0.1
pyOpenSSL==0.14
pyparsing==2.0.3
python-ceilometerclient==1.4.0
python-cinderclient==1.1.1
python-glanceclient==0.14.2
python-heatclient==0.6.0
python-keystoneclient==0.11.1
python-neutronclient==2.3.10
python-novaclient==2.20.0
python-swiftclient==2.3.1
python-troveclient==1.2.0
pytz==2015.4
PyYAML==3.11
repoze.lru==0.6
requests==2.6.0
Routes==2.2
rtslib-fb==2.1.57
simplejson==3.8.0
six==1.9.0
SQLAlchemy==0.9.10
sqlalchemy-migrate==0.9.1
sqlparse==0.1.16
stevedore==1.7.0
suds==0.4
taskflow==0.1.3
Tempita==0.5.2
unicodecsv==0.13.0
urlgrabber==3.9.1
warlock==1.1.0
WebOb==1.4
websockify==0.5.1
wrapt==1.10.5
yum-metadata-parser==1.1.2
pluggy==0.3.0
py==1.4.30
All of these dependencies must be installed. If some of them cannot be installed with pip, install those modules with yum instead; for each one installed with yum, delete it from this file and re-run pip against the file (for example, pip install -r openstack.txt). We previously ran into glusterfs-api failing to install from the pip mirror and installed it with yum.
A newer problem: installing django_openstack_auth fails because the testrepository module is missing. Install it first with:
pip install testrepository && pip install django_openstack_auth
Note: because upstream packages keep changing, warnings encountered during later steps can be ignored, but any ERROR must be resolved.
A journey of a thousand miles begins with a single step. Let's start our OpenStack installation; be careful at every step, or you know what happens.

Starting the OpenStack installation
I. Base Services Installation
(1) Database service installation
[iyunv@linux-node1 ~]# yum install mysql-server
[iyunv@linux-node1 ~]# cp /usr/share/mysql/my-medium.cnf /etc/my.cnf
Add the following settings:
[mysqld]
default-storage-engine = innodb
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
[iyunv@linux-node1 ~]# chkconfig mysqld on
[iyunv@linux-node1 ~]# /etc/init.d/mysqld start
[iyunv@linux-node1 ~]# mysqladmin -u root password openstack
(2) Database creation
Keystone database:
[iyunv@linux-node1 ~]# mysql -u root -popenstack -e "create database keystone DEFAULT CHARACTER SET utf8;"
[iyunv@linux-node1 ~]# mysql -u root -popenstack -e "grant all on keystone.* to keystone@'172.16.1.0/255.255.255.0’ identified by 'keystone'; " 测试数据库: [iyunv@linux-node1 ~]# mysql -ukeystone -pkeystone -h 172.16.1.131 Glance 数据库创建 [iyunv@linux-node1 ~]# mysql -u root -popenstack -e "create database glance DEFAULT CHARACTER SET utf8;"
[iyunv@linux-node1 ~]# mysql -u root -popenstack -e "grant all on glance.* to glance@'172.16.1.0/255.255.255.0’identified by 'glance'" 测试数据库: [iyunv@linux-node1 ~]# mysql -uglance -pglance -h 172.16.1.131
Nova database:
[iyunv@linux-node1 ~]# mysql -u root -popenstack -e "create database nova DEFAULT CHARACTER SET utf8;"
Grant access (same pattern as the other services; the test below assumes a nova/nova account):
[iyunv@linux-node1 ~]# mysql -u root -popenstack -e "grant all on nova.* to nova@'172.16.1.0/255.255.255.0' identified by 'nova'"
Test the database:
[iyunv@linux-node1 ~]# mysql -unova -pnova -h 172.16.1.131
Neutron database:
[iyunv@linux-node1 ~]# mysql -u root -popenstack -e "create database neutron DEFAULT CHARACTER SET utf8;"
[iyunv@linux-node1 ~]# mysql -u root -popenstack -e "grant all on neutron.* to neutron@'172.16.1.0/255.255.255.0' identified by 'neutron'"
Test the database:
[iyunv@linux-node1 ~]# mysql -uneutron -pneutron -h 172.16.1.131
Cinder database:
[iyunv@linux-node1 ~]# mysql -u root -popenstack -e "create database cinder DEFAULT CHARACTER SET utf8;"
[iyunv@linux-node1 ~]# mysql -u root -popenstack -e "grant all on cinder.* to cinder@'172.16.1.0/255.255.255.0' identified by 'cinder'"
Test the database:
[iyunv@linux-node1 ~]# mysql -ucinder -pcinder -h 172.16.1.131
(3) Install the message broker (RabbitMQ)
Before installing, configure the hosts file:
[iyunv@openstack-node1 src]# vi /etc/hosts
172.16.1.131 node1 openstack-node1.example.com
172.16.1.132 node2 openstack-node2.example.com
[iyunv@linux-node1 ~]# yum install -y erlang rabbitmq-server
[iyunv@linux-node1 ~]# chkconfig rabbitmq-server on
Enable the web management plugin:
[iyunv@linux-node1 ~]# /usr/lib/rabbitmq/bin/rabbitmq-plugins list
[iyunv@linux-node1 ~]# /usr/lib/rabbitmq/bin/rabbitmq-plugins enable rabbitmq_management
[iyunv@openstack ~]# /etc/init.d/rabbitmq-server restart
The default RabbitMQ username and password are both guest. We leave them unchanged here; customize them if you need to:
[iyunv@linux-node1 ~]# rabbitmqctl change_password guest password    # change the guest password
II. Controller Installation (Keystone)
(1) Keystone installation
[iyunv@openstack-node1 ~]# cd /usr/local/src/
[iyunv@openstack-node1 src]# cd keystone-2014.1.5
[iyunv@openstack-node1 keystone-2014.1.5]# python setup.py install
(2) Keystone configuration
[iyunv@openstack-node1 keystone-2014.1.5]# mkdir /etc/keystone
[iyunv@openstack-node1 keystone-2014.1.5]# mkdir /var/log/keystone
[iyunv@openstack-node1 keystone-2014.1.5]# mkdir /var/run/keystone
(3) Create the configuration files
[iyunv@openstack-node1 keystone-2014.1.5]# cp etc/keystone.conf.sample /etc/keystone/keystone.conf
[iyunv@openstack-node1 keystone-2014.1.5]# cp etc/keystone-paste.ini /etc/keystone/
[iyunv@openstack-node1 keystone-2014.1.5]# cp etc/logging.conf.sample /etc/keystone/logging.conf
[iyunv@openstack-node1 keystone-2014.1.5]# cp etc/policy.json /etc/keystone/
[iyunv@openstack-node1 keystone-2014.1.5]# cp etc/policy.v3cloudsample.json /etc/keystone/
(4) Set the Admin Token
[iyunv@linux-node1 ~]# ADMIN_TOKEN=$(openssl rand -hex 10)
[iyunv@linux-node1 ~]# echo $ADMIN_TOKEN
81b1a5eabff55205f009    # this ID is randomly generated
[iyunv@linux-node1 ~]# vim /etc/keystone/keystone.conf
admin_token=81b1a5eabff55205f009    # fill in the ID generated above
(5) Set up the PKI token
By default OpenStack uses PKI tokens. Create the signing certificates:
[iyunv@linux-node1 ~]# keystone-manage pki_setup --keystone-user root --keystone-group root
[iyunv@linux-node1 ~]# chown -R root:root /etc/keystone/ssl
[iyunv@linux-node1 ~]# chmod -R o-rwx /etc/keystone/ssl
(6) Configure the Keystone database connection
[iyunv@openstack-node1 ~]# vi /etc/keystone/keystone.conf
connection=mysql://keystone:keystone@172.16.1.131/keystone
(7) Sync the Keystone database
[iyunv@linux-node1 ~]# keystone-manage db_sync
Verify that the tables were created correctly:
[iyunv@openstack ~]# mysql -h172.16.1.131 -ukeystone -pkeystone -e "use keystone;show tables;"
(8) Enable Keystone logging
So that Keystone logs can be checked promptly during the lab, set the following parameters in the configuration file.
debug=true
verbose=true
log_dir=/var/log/keystone
log_file=keystone.log
(9) Check the Keystone configuration
[iyunv@openstack-node1 ~]# grep -E -v "^#|^$" /etc/keystone/keystone.conf
[DEFAULT]
admin_token=81b1a5eabff55205f009
debug=true
log_file=keystone.log
log_dir=/var/log/keystone
[assignment]
[auth]
[cache]
[catalog]
[credential]
[database]
connection=mysql://keystone:keystone@172.16.1.131/keystone
[ec2]
[endpoint_filter]
[federation]
[identity]
[kvs]
[ldap]
[matchmaker_ring]
[memcache]
[oauth1]
[os_inherit]
[paste_deploy]
[policy]
[revoke]
[signing]
[ssl]
[stats]
[token]
[trust]
(10) Keystone management
(10.1) Start Keystone
[iyunv@linux-node1 ~]# mkdir /var/log/keystone
[iyunv@linux-node1 ~]# keystone-all --config-file=/etc/keystone/keystone.conf
Run keystone-all directly; if there are no errors and both ports are listening, it is working normally:
[iyunv@openstack-node1 keystone]# netstat -tunlp |grep -E "35357|5000"
(10.2) Add the init script    # the init scripts are shipped alongside this document
[iyunv@linux-node1 ~]# mv openstack-keystone /etc/init.d/
[iyunv@linux-node1 ~]# chmod +x /etc/init.d/openstack-keystone
[iyunv@linux-node1 ~]# chkconfig --add openstack-keystone
[iyunv@linux-node1 ~]# chkconfig openstack-keystone on
[iyunv@openstack ~]# /etc/init.d/openstack-keystone start
From now on, Keystone activity can be checked directly in its log file:
[iyunv@openstack ~]# tail -f /var/log/keystone/keystone.log
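As an optional sanity check (not part of the original steps), the Identity API should answer version requests on both ports once the service is up; a minimal sketch:
# Both requests should return JSON describing the v2.0 API if Keystone is healthy
curl -s http://172.16.1.131:5000/v2.0/
curl -s http://172.16.1.131:35357/v2.0/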
(11) Create the admin user
First we create a superuser, role, and tenant. By default Keystone also creates a special _member_ role, which the Dashboard uses later.
① Before creating users, we need two environment variables to connect to Keystone:
[iyunv@linux-node1 ~]# export OS_SERVICE_TOKEN=$ADMIN_TOKEN
[iyunv@linux-node1 ~]# export OS_SERVICE_ENDPOINT=http://172.16.1.131:35357/v2.0
[iyunv@linux-node1 keystone]# keystone role-list
If the command returns a role listing without errors, Keystone is installed successfully.
② Create the admin user
[iyunv@linux-node1 ~]# keystone user-create --name=admin --pass=admin --email=admin@openstack.com
③ Create the admin role
[iyunv@linux-node1 ~]# keystone role-create --name=admin
④ Create the admin tenant
[iyunv@linux-node1 ~]# keystone tenant-create --name=admin --description="Admin Tenant"
⑤ Link the admin user, role, and tenant
[iyunv@linux-node1 ~]# keystone user-role-add --user=admin --tenant=admin --role=admin
⑥ Link the admin user, the _member_ role, and the admin tenant
keystone user-role-add --user=admin --role=_member_ --tenant=admin
(12) Create a regular user
Next we create a regular user and tenant and link them to the _member_ role. The rest of the lab uses this regular user to manage OpenStack.
[iyunv@linux-node1 ~]# keystone user-create --name=demo --pass=demo --email=demo@openstack.com
[iyunv@linux-node1 ~]# keystone tenant-create --name=demo --description="Demo Tenant"
[iyunv@linux-node1 ~]# keystone user-role-add --user=demo --role=_member_ --tenant=demo
(13) Create the Keystone service and register its endpoint
Remember that every OpenStack component must be registered in Keystone, including Keystone itself.
[iyunv@linux-node1 ~]# keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
# This prints a service id that the next step needs; use your own id rather than blindly copying the one in this document.
[iyunv@linux-node1 ~]# keystone endpoint-create \
--service-id=d58be1a439004ec083de5a484d3ae90e \
--publicurl=http://172.16.1.131:5000/v2.0 \
--internalurl=http://172.16.1.131:5000/v2.0 \
--adminurl=http://172.16.1.131:35357/v2.0
(14) Verify the Keystone installation
[iyunv@linux-node1 ~]# keystone --os-username=admin --os-password=admin \
--os-auth-url=http://172.16.1.131:35357/v2.0 token-get
Verify authorization behaviour by requesting a token for a specific tenant:
[iyunv@linux-node1 ~]# keystone --os-username=admin --os-password=admin --os-tenant-name=admin \
--os-auth-url=http://172.16.1.131:35357/v2.0 token-get
(15) Keystone environment variables
To avoid specifying these options on every command, we store the common values as environment variables.
The environment file below is used when deploying and configuring the other services.
[iyunv@linux-node1 ~]# vi ~/keystone-admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://172.16.1.131:35357/v2.0
[iyunv@linux-node1 ~]# source keystone-admin    # source the file to load the environment variables
III. Controller Installation (Glance)
(1) Glance installation
[iyunv@openstack-node1 keystone]# cd
[iyunv@openstack-node1 ~]# cd /usr/local/src/glance-2014.1.5
[iyunv@openstack-node1 glance-2014.1.5]# python setup.py install
(2) glance-api configuration
[iyunv@linux-node1 ~]# mkdir /etc/glance
[iyunv@linux-node1 ~]# mkdir /var/log/glance
[iyunv@linux-node1 ~]# mkdir /var/lib/glance
[iyunv@linux-node1 ~]# mkdir /var/run/glance
Copy the configuration files from the source tree into /etc/glance:
[iyunv@linux-node1 ~]# cd /usr/local/src/glance-2014.1.5/etc
[iyunv@linux-node1 etc]# cp -r * /etc/glance/
[iyunv@linux-node1 ~]# cd /etc/glance/
[iyunv@linux-node1 glance]# mv logging.cnf.sample logging.cnf
Glance configuration mainly involves the database, RabbitMQ, and Keystone settings; the remaining services follow the same pattern.
(3) RabbitMQ settings
[iyunv@openstack ~]# vim /etc/glance/glance-api.conf    # tip: search for each key in vi and fill in the value in place
notifier_strategy = rabbit
rabbit_host = 172.16.1.131
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = guest
rabbit_password = guest
rabbit_virtual_host = /
rabbit_notification_exchange = glance
rabbit_notification_topic = notifications
rabbit_durable_queues = False
(4) Keystone settings
[keystone_authtoken]
auth_host = 172.16.1.131
auth_port = 35357
auth_protocol = http
admin_tenant_name = admin
admin_user = admin
admin_password = admin
[paste_deploy]
flavor=keystone
The registry configuration file needs the same Keystone settings:
[iyunv@openstack ~]# vim /etc/glance/glance-registry.conf
[keystone_authtoken]
auth_host = 172.16.1.131
auth_port = 35357
auth_protocol = http
admin_tenant_name = admin
admin_user = admin
admin_password = admin
[paste_deploy]
flavor=keystone
(5) Glance database settings
Both glance-api.conf and glance-registry.conf need the connection string set.
[iyunv@openstack ~]# vim /etc/glance/glance-api.conf
connection = mysql://glance:glance@172.16.1.131/glance
[iyunv@openstack ~]# vim /etc/glance/glance-registry.conf
connection = mysql://glance:glance@172.16.1.131/glance
[iyunv@linux-node1 ~]# glance-manage db_sync
If the sync reports errors, it is usually caused by Python dependency version mismatches; double-check the pinned versions in openstack.txt.
Check that the tables were created:
[iyunv@openstack ~]# mysql -h 172.16.1.131 -uglance -pglance -e "use glance;show tables;"
(6) Create the Glance service and endpoint
[iyunv@linux-node1 ~]# keystone service-create --name=glance --type=image --description="OpenStack Image Service"
# This prints a service id that the next step needs; use your own id rather than copying the one below.
keystone endpoint-create \
--service-id=6eead9a9857b43539fd1d291ecd617f8 \
--publicurl=http://172.16.1.131:9292 \
--internalurl=http://172.16.1.131:9292 \
--adminurl=http://172.16.1.131:9292
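If the service id has already scrolled off the screen when you run endpoint-create, it can be recovered with the standard keystone CLI; a small sketch, not part of the original steps (the awk filter assumes the default table output):
# List registered services and pick out the glance service id
keystone service-list
keystone service-list | awk '/ glance / {print $2}'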
(7) Enable Glance logging
Turn on debug output to make troubleshooting easier:
verbose = true
debug = true
(8)手动启动glance [iyunv@linux-node1 ~]# glance-api --config-file=/etc/glance/glance-api.conf
[iyunv@linux-node1 ~]# glance-registry --config-file=/etc/glance/glance-registry.conf
If no ERROR appears, the services are running normally; press Ctrl+C and start them with the init scripts instead. The init scripts are shipped alongside this document.
[iyunv@openstack ~]# mv openstack-glance-* /etc/init.d/
[iyunv@openstack ~]# chmod +x /etc/init.d/openstack-glance-*
[iyunv@openstack ~]# chkconfig --add openstack-glance-api
[iyunv@openstack ~]# chkconfig --add openstack-glance-registry
[iyunv@openstack ~]# chkconfig openstack-glance-api on
[iyunv@openstack ~]# chkconfig openstack-glance-registry on
[iyunv@openstack ~]# /etc/init.d/openstack-glance-api start
[iyunv@openstack ~]# /etc/init.d/openstack-glance-registry start
(9) Test Glance
[iyunv@openstack-node1 ~]# glance image-list
If this returns without errors, Glance is working. Next, upload a qcow2 cloud image (shipped with this document): copy it to the host and run the following command to upload it to Glance.
[iyunv@linux-node1 ~]# glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 --container-format bare --is-public True --file cirros-0.3.2-x86_64-disk.img
If the image shows up as active, congratulations: the Glance service is working and we can move on to the next challenge.
IV. Controller Installation (Compute)
On the controller node we install every Nova service except nova-compute.
(1) Nova installation
[iyunv@openstack-node1 src]# cd /usr/local/src/nova-2014.1.5
[iyunv@openstack-node1 nova-2014.1.5]# python setup.py install
(2) Create the configuration files
[iyunv@openstack-node1 ~]# mkdir /etc/nova
[iyunv@openstack-node1 ~]# mkdir /var/log/nova
[iyunv@openstack-node1 ~]# mkdir /var/lib/nova/instances -p
[iyunv@openstack-node1 ~]# mkdir /var/run/nova
[iyunv@openstack-node1 nova-2014.1.5]# cp -a etc/nova/* /etc/nova/
[iyunv@linux-node1 nova]# cd /etc/nova/
[iyunv@linux-node1 nova]# mv logging_sample.conf logging.conf
(3) Database settings
[iyunv@openstack-node1 ~]# vi /etc/nova/nova.conf
[database]    # find the [database] section and add the connection string there
connection=mysql://nova:nova@172.16.1.131/nova
(4) RabbitMQ settings
[DEFAULT]    # all of the following go in the [DEFAULT] section; do not scatter them elsewhere
rabbit_host=172.16.1.131
rabbit_port=5672
rabbit_use_ssl=false
rabbit_userid=guest
rabbit_password=guest
rabbit_virtual_host=/
rabbit_retry_interval=5
rpc_backend=rabbit
(5) VNC settings
[DEFAULT]    # also in the [DEFAULT] section
novncproxy_base_url=http://172.16.1.131:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=172.16.1.131
vnc_enabled=true
vnc_keymap=en-us
(6) Keystone settings
[keystone_authtoken]    # search for this section and put the options here
auth_host=172.16.1.131
auth_port=35357
auth_protocol=http
auth_uri=http://172.16.1.131:5000/v2.0
identity_uri=http://172.16.1.131:35357
auth_version=v2.0
admin_user=admin
admin_password=admin
admin_tenant_name=admin
(7) Nova database sync
[iyunv@openstack-node1 ~]# nova-manage db sync
Check that the tables were created:
[iyunv@openstack-node1 ~]# mysql -h 172.16.1.131 -unova -pnova -e "use nova;show tables;"
(8) Nova service and endpoint
[iyunv@openstack-node1 ~]# keystone service-create --name=nova --type=compute --description="OpenStack Compute"
# This prints a service id that the next step needs; use your own id rather than copying the one below.
keystone endpoint-create \
--service-id=84201301d85649118b4da96f536a2165 \
--publicurl=http://172.16.1.131:8774/v2/%\(tenant_id\)s \
--internalurl=http://172.16.1.131:8774/v2/%\(tenant_id\)s \
--adminurl=http://172.16.1.131:8774/v2/%\(tenant_id\)s
(9) noVNC installation
All required packages are shipped with this document.
[iyunv@openstack-node1 src]# cd /usr/local/src/
[iyunv@openstack-node1 src]# cp -r noVNC /usr/share/novnc/
(10) Start the Nova services
[iyunv@openstack-node1 ~]# mkdir /var/lib/nova/tmp
[iyunv@openstack-node1 src]# cd /usr/local/src/openstack-inc/control/init.d/
[iyunv@openstack-node1 init.d]# cp openstack-nova-* /etc/init.d/
[iyunv@openstack-node1 init.d]# chmod +x /etc/init.d/openstack-nova-*
[iyunv@openstack-node1 init.d]# for i in {api,cert,conductor,consoleauth,novncproxy,scheduler};do chkconfig --add openstack-nova-$i ;done    # register the services
[iyunv@openstack-node1 init.d]# for i in {api,cert,conductor,consoleauth,novncproxy,scheduler};do chkconfig openstack-nova-$i on ;done    # enable them at boot
[iyunv@openstack-node1 init.d]# for i in {api,cert,conductor,consoleauth,novncproxy,scheduler};do service openstack-nova-$i start ;done    # start the services
(11) Verify the installation
[iyunv@openstack-node1 init.d]# nova service-list
V. Controller Installation (Horizon)
(1) Horizon installation
Use the modified package horizon-change-bug-2014.1.5.tar.gz shipped with this document; the stock Horizon package has a bug.
[iyunv@openstack-node1 ~]# cd /usr/local/src/horizon-2014.1.5
[iyunv@openstack-node1 horizon-2014.1.5]# python setup.py install
(2) Install Apache and mod_wsgi
[iyunv@openstack-node1 horizon-2014.1.5]# cd ..
[iyunv@openstack-node1 src]# yum install -y httpd mod_wsgi
[iyunv@openstack-node1 src]# mv horizon-2014.1.5 /var/www/
[iyunv@openstack-node1 src]# vi /var/www/horizon-2014.1.5/openstack_dashboard/local/local_settings.py
Change the following setting in local_settings.py:
OPENSTACK_HOST = "172.16.1.131"
[iyunv@openstack-node1 src]# chown -R apache:apache /var/www/horizon-2014.1.5/
[iyunv@openstack-node1 src]# vi /etc/httpd/conf.d/horizon.conf
<VirtualHost *:80>
    DocumentRoot /var/www/horizon-2014.1.5/
    ErrorLog /var/log/httpd/horizon_error.log
    LogLevel warn
    CustomLog /var/log/httpd/horizon_access.log combined
    WSGIScriptAlias / /var/www/horizon-2014.1.5/openstack_dashboard/wsgi/django.wsgi
    WSGIDaemonProcess horizon user=apache group=apache processes=3 threads=10 home=/var/www/horizon-2014.1.5
    WSGIApplicationGroup horizon
    SetEnv APACHE_RUN_USER apache
    SetEnv APACHE_RUN_GROUP apache
    WSGIProcessGroup horizon
    Alias /media /var/www/horizon-2014.1.5/openstack_dashboard/static
    <Directory /var/www/horizon-2014.1.5/>
        Options FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
WSGISocketPrefix /var/run/horizon
(3) Start Apache
[iyunv@openstack-node1 src]# /etc/init.d/httpd start;chkconfig httpd on
(4) Test access to Horizon
VI. Controller Installation (Neutron)
(1) Neutron installation
[iyunv@openstack-node1 src]# cd /usr/local/src/neutron-2014.1.5
[iyunv@openstack-node1 neutron-2014.1.5]# python setup.py install
(2) Neutron configuration
[iyunv@openstack-node1 neutron-2014.1.5]# mkdir /etc/neutron/
[iyunv@openstack-node1 neutron-2014.1.5]# mkdir /var/log/neutron/
[iyunv@openstack-node1 neutron-2014.1.5]# mkdir /var/lib/neutron/
[iyunv@openstack-node1 neutron-2014.1.5]# mkdir /var/run/neutron/
[iyunv@openstack-node1 neutron-2014.1.5]# cp -a etc/* /etc/neutron/
[iyunv@openstack-node1 neutron-2014.1.5]# cd /etc/neutron/neutron
[iyunv@openstack-node1 neutron]# mv * ../
[iyunv@openstack-node1 neutron]# cd ..
[iyunv@openstack-node1 neutron]# rm -rf neutron
(3) Neutron database settings (in /etc/neutron/neutron.conf)
[database]
connection = mysql://neutron:neutron@172.16.1.131:3306/neutron
(4) Keystone settings
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_host = 172.16.1.131
auth_port = 35357
auth_protocol = http
admin_tenant_name = admin
admin_user = admin
admin_password = admin
(5) RabbitMQ settings
[DEFAULT]
rabbit_host = 172.16.1.131
rabbit_password = guest
rabbit_port = 5672
rabbit_userid = guest
rabbit_virtual_host = /
(6) Nova-related settings in neutron.conf
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_admin_username = admin
nova_admin_password = admin
(7) Network and logging settings
core_plugin = ml2
service_plugins = router
verbose = true
debug = true
log_file = neutron.log
log_dir = /var/log/neutron
(8) Neutron-related settings in nova.conf
neutron_url=http://172.16.1.131:9696
neutron_admin_username=admin
neutron_admin_password=admin
neutron_admin_tenant_name=admin
neutron_admin_auth_url=http://172.16.1.131:5000/v2.0
neutron_auth_strategy=keystone
linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver
security_group_api=neutron
vif_plugging_is_fatal=false
vif_plugging_timeout=10
After editing nova.conf, restart the Nova services:
[iyunv@openstack-node1 src]# for i in {api,cert,conductor,consoleauth,novncproxy,scheduler};do
/etc/init.d/openstack-nova-$i restart;done
(9) Create the Neutron service and endpoint
[iyunv@openstack-node1 src]# keystone service-create --name neutron --type network --description "OpenStack Networking"
# This prints a service id that the next step needs; use your own id rather than copying the one below.
keystone endpoint-create \
--service-id=c723b24023db4deabd0803fb47375e6b \
--publicurl=http://172.16.1.131:9696 \
--adminurl=http://172.16.1.131:9696 \
--internalurl=http://172.16.1.131:9696
(10) Neutron plugin
Neutron supports many network plugins; this deployment uses the Linux bridge mechanism driver with a flat provider network, as configured below.
① Neutron ML2 configuration (/etc/neutron/plugins/ml2/ml2_conf.ini)
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = flat
mechanism_drivers = linuxbridge
[ml2_type_flat]
flat_networks = physnet1
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
[securitygroup]
② Linux bridge configuration (/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini)
[vlans]
network_vlan_ranges = physnet1
[linux_bridge]
physical_interface_mappings = physnet1:eth0
(11) Neutron start-up test
neutron-server \
--config-file=/etc/neutron/neutron.conf \
--config-file=/etc/neutron/plugins/ml2/ml2_conf.ini \
--config-file=/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
neutron-linuxbridge-agent \
--config-file=/etc/neutron/neutron.conf \
--config-file=/etc/neutron/plugins/ml2/ml2_conf.ini \
--config-file=/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
If the listening port shows up and there are no errors, the configuration is fine. Set up the init scripts:
[iyunv@openstack-node1 src]# cp /usr/local/src/control/init.d/openstack-neutron-* /etc/init.d/
chmod +x /etc/init.d/openstack-neutron-*
chkconfig --add openstack-neutron-server
chkconfig --add openstack-neutron-linuxbridge-agent
/etc/init.d/openstack-neutron-server start
/etc/init.d/openstack-neutron-linuxbridge-agent start
(12) Test the Neutron installation
[iyunv@openstack-node1 src]# neutron agent-list
If this returns without errors, the configuration is working.
(13) Create the flat network
[iyunv@openstack-node1 src]# keystone tenant-list
The --tenant-id used below is the tenant id shown by keystone tenant-list.
[iyunv@openstack-node1 src]# neutron net-create --tenant-id 3f9ba8b583e94181b0873f223517d641 demo_net --shared --provider:network_type flat --provider:physical_network physnet1
(14) Create the subnet
Log in to the Dashboard as the admin user and go to Network -> demo_net -> Create Subnet.
VII. Compute Node Installation (Nova-compute)
Environment preparation
(1) Base package installation
[iyunv@openstack-node2 ~]# yum install -y python-pip gcc gcc-c++ make libtool patch automake \
libxslt-devel MySQL-python openssl-devel kernel kernel-devel libudev-devel python-devel \
git wget lvm2 libvirt-python libvirt qemu-kvm scsi-target-utils gedit \
python-numdisplay device-mapper bridge-utils avahi
(2) libvirtd and messagebus setup
[iyunv@openstack-node2 ~]# /etc/init.d/messagebus restart;/etc/init.d/libvirtd restart;/etc/init.d/avahi-daemon restart;chkconfig libvirtd on;chkconfig messagebus on;chkconfig avahi-daemon on
(3) Nova compute installation
[iyunv@openstack-node2 ~]# cd /usr/local/src/nova-2014.1.5
[iyunv@openstack-node2 nova-2014.1.5]# python setup.py install
[iyunv@openstack-node2 ~]# mkdir /var/log/nova
[iyunv@openstack-node2 ~]# mkdir -p /var/lib/nova/instances
(4) Neutron (Quantum) Linux bridge agent installation
[iyunv@openstack-node2 nova-2014.1.5]# cd /usr/local/src/neutron-2014.1.5
[iyunv@openstack-node2 neutron-2014.1.5]# python setup.py install
[iyunv@openstack-node2 neutron-2014.1.5]# mkdir /var/log/neutron
[iyunv@openstack-node2 neutron-2014.1.5]# mkdir /var/lib/neutron
(5) KVM configuration
[iyunv@openstack-node2 neutron-2014.1.5]# vim /etc/libvirt/qemu.conf
Uncomment and edit the following block:
cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc","/dev/hpet","/dev/net/tun"
(6) Configuration files
Copy the configuration files from the controller node to the compute node, then adjust them. nova.conf needs the following two lines changed:
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=172.16.1.132
In linuxbridge_conf.ini, check whether the NIC on this host is named eth0 or em1:
physical_interface_mappings = physnet1:em1
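A quick way to see which interface names the host actually uses (a sketch, not part of the original steps; adjust physical_interface_mappings to whatever it prints):
# List the interface names known to the kernel (e.g. eth0, em1)
ip -o link show | awk -F': ' '{print $2}'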
(7) Manual start test:
neutron-server \
--config-file=/etc/neutron/neutron.conf \
--config-file=/etc/neutron/plugins/ml2/ml2_conf.ini \
--config-file=/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
neutron-linuxbridge-agent \
--config-file=/etc/neutron/neutron.conf \
--config-file=/etc/neutron/plugins/ml2/ml2_conf.ini \
--config-file=/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
(8) Enable the services at boot
[iyunv@openstack-node2 ~]# cp /usr/local/src/openstack-inc/compute/init.d/openstack-n* /etc/init.d/
[iyunv@openstack-node2 ~]# chkconfig --add openstack-neutron-linuxbridge-agent
[iyunv@openstack-node2 ~]# chkconfig --add openstack-nova-compute
[iyunv@openstack-node2 ~]# chkconfig openstack-neutron-linuxbridge-agent on
[iyunv@openstack-node2 ~]# chkconfig openstack-nova-compute on
[iyunv@openstack-node2 ~]# service openstack-neutron-linuxbridge-agent start
[iyunv@openstack-node2 ~]# service openstack-nova-compute start
Check that the services are up (on the controller):
[iyunv@openstack-node1 src]# nova service-list
[iyunv@openstack-node1 src]# neutron agent-list
At this point we can create our own instances. Log in to the Dashboard at http://172.16.1.131/, click "Launch Instance", and watch the logs; if nothing errors out, the installation is complete.
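If you prefer to boot the first instance from the CLI instead of the Dashboard, a minimal sketch using the clients installed above (the m1.tiny flavor name and the <net-id> placeholder are assumptions; take the real network id from neutron net-list and a real flavor from nova flavor-list):
# Source the admin credentials created in the Keystone section
source ~/keystone-admin
# Find the id of the demo_net network created in the Neutron section
neutron net-list
# Boot a test instance from the CirrOS image uploaded to Glance
# (replace <net-id> with the id printed by neutron net-list)
nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64 --nic net-id=<net-id> test-vm
# Watch the instance reach ACTIVE
nova list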