
[Experience Sharing] Installing OpenStack Pike by following the official guide: Nova installation

[复制链接]

尚未签到

Posted 2017-12-5 07:05:56
  Nova is installed on both the controller node and the compute node; start with the controller node.


1. Prerequisites: create the databases for Nova, plus the account and password Nova will use to connect:




# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';
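The six GRANT statements repeat one pattern per database and host. A small shell sketch (not part of the official guide; it uses this tutorial's 'nova'/'nova' credentials, which you should change in production) can generate them for pasting into the MariaDB prompt:

```shell
# Generate the GRANT statements for the three Nova databases.
# User/password 'nova'/'nova' follow the tutorial; change them in production.
grants=""
for db in nova_api nova nova_cell0; do
  for host in localhost '%'; do
    grants="${grants}GRANT ALL PRIVILEGES ON ${db}.* TO 'nova'@'${host}' IDENTIFIED BY 'nova';
"
  done
done
printf '%s' "$grants"
```

Generating the statements this way also makes it harder to forget one of the `localhost`/`'%'` pairs.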

  2. Authenticate with Keystone by loading the admin credentials:



# source admin-openrc

  3. Create the Compute service credentials:



# openstack user create --domain default --password-prompt nova
Enter password: nova
# openstack role add --project service --user nova admin
# openstack service create --name nova --description "OpenStack Compute" compute

  4. Create the Compute service API endpoints:



# openstack endpoint create --region RegionOne compute public http://192.168.101.10:8774/v2.1
# openstack endpoint create --region RegionOne compute internal http://192.168.101.10:8774/v2.1
# openstack endpoint create --region RegionOne compute admin http://192.168.101.10:8774/v2.1
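The three commands differ only in the interface name, so they can be driven by a loop. This sketch is a dry run that only prints the commands rather than executing them (drop the echo, in a sourced admin environment, to run them for real):

```shell
# Dry run: print the endpoint-create command for each interface.
url="http://192.168.101.10:8774/v2.1"
for iface in public internal admin; do
  echo openstack endpoint create --region RegionOne compute "$iface" "$url"
done
```

The same loop works for the Placement endpoints below by swapping the service name and URL.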

  5. Create a Placement service user and set its password:



# openstack user create --domain default --password-prompt placement
Enter password: placement

  6. Add the placement user to the service project with the admin role:



# openstack role add --project service --user placement admin

  7. Create the Placement API entry in the service catalog (i.e. register placement as a service):




# openstack service create --name placement --description "Placement API" placement

  8. Create the Placement API service endpoints:



# openstack endpoint create --region RegionOne placement public http://192.168.101.10:8778
# openstack endpoint create --region RegionOne placement internal http://192.168.101.10:8778
# openstack endpoint create --region RegionOne placement admin http://192.168.101.10:8778

  9. Install the packages the Nova services depend on:



# yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api

  After the packages are installed, edit the configuration file /etc/nova/nova.conf.

In the [DEFAULT] section, enable only the compute and metadata APIs:




[DEFAULT]
enabled_apis = osapi_compute,metadata

  In the [api_database] and [database] sections (and the further sections shown below):



[api_database]
connection = mysql+pymysql://nova:nova@192.168.101.10/nova_api
[database]
connection = mysql+pymysql://nova:nova@192.168.101.10/nova
[DEFAULT]
transport_url = rabbit://openstack:openstack@192.168.101.10
(RabbitMQ is reached with the openstack account and the password openstack.)
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://192.168.101.10:5000
auth_url = http://192.168.101.10:35357
memcached_servers = 192.168.101.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
(These are the nova user's credentials registered with Keystone.)
[DEFAULT]
my_ip = 192.168.101.10
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

  By default, Compute uses an internal firewall driver. Since the Networking service includes a firewall driver, you must disable the Compute firewall driver by using the nova.virt.firewall.NoopFirewallDriver firewall driver.



[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[glance]
api_servers = http://192.168.101.10:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://192.168.101.10:35357/v3
username = placement
password = placement
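A key placed in the wrong section (e.g. my_ip under [api] instead of [DEFAULT]) fails silently, so a quick grep count is a useful sanity check after editing. This sketch builds a stand-in [DEFAULT] fragment in a temporary file instead of touching the real /etc/nova/nova.conf:

```shell
# Sanity-check sketch: write a minimal [DEFAULT] fragment to a temp file and
# count the expected keys (a stand-in for checking the real /etc/nova/nova.conf).
tmp=$(mktemp)
printf '%s\n' '[DEFAULT]' \
  'enabled_apis = osapi_compute,metadata' \
  'transport_url = rabbit://openstack:openstack@192.168.101.10' \
  'my_ip = 192.168.101.10' \
  'use_neutron = True' \
  'firewall_driver = nova.virt.firewall.NoopFirewallDriver' > "$tmp"
found=$(grep -c '^[a-z_]* = ' "$tmp")
echo "keys found: $found"
rm -f "$tmp"
```

On CentOS the crudini utility can also set such keys non-interactively, e.g. `crudini --set /etc/nova/nova.conf DEFAULT my_ip 192.168.101.10`, which avoids section-placement mistakes entirely.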

  Grant access to the Placement API by appending the following to /etc/httpd/conf.d/00-nova-placement-api.conf:



<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>

  Then restart the httpd service:



# systemctl restart httpd

  Populate the nova-api database:



# su -s /bin/sh -c "nova-manage api_db sync" nova

  Ignore any deprecation messages in this output.


Register the cell0 database:




# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

  Create the cell1 cell:



# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

  Populate the nova database:



# su -s /bin/sh -c "nova-manage db sync" nova

(Screenshot omitted: output of the db sync command; any warnings in it can be ignored.)



Verify that cell0 and cell1 are registered correctly:




# nova-manage cell_v2 list_cells

  Finally, enable and start the Compute services:



# systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

  The Nova controller node is now installed. Next, install Nova on the compute node:


1. On compute node node2, make sure the hosts file resolves both nodes:




[iyunv@node2 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.101.10    node1
192.168.101.11    node2
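A quick way to confirm the name/IP mapping is to parse the entries. This sketch matches against the sample content above rather than the live /etc/hosts:

```shell
# Check the controller mapping, using the sample /etc/hosts content from
# above instead of the live file.
hosts='192.168.101.10    node1
192.168.101.11    node2'
controller_ip=$(printf '%s\n' "$hosts" | awk '$2=="node1"{print $1}')
echo "$controller_ip"
```

Against the real file, `getent hosts node1` performs the same check through the system resolver.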

  2. Time synchronization (using the controller node as the time source)



# yum install chrony

Edit /etc/chrony.conf:



Enable (on the controller, so that compute nodes may sync from it):
allow 192.168.101.0/16
Comment out the default pool servers (on the compute node):
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 192.168.101.10 iburst           (added: point at the controller node)
Then enable and start the service:
systemctl enable chronyd.service
systemctl start chronyd.service
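The chrony.conf edit above can be scripted. This sketch applies the same sed substitution to an inline sample instead of the real /etc/chrony.conf, so it is safe to try anywhere:

```shell
# Comment out the default CentOS pool servers and append the controller,
# applied here to an inline sample instead of /etc/chrony.conf.
sample='server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst'
edited=$(printf '%s\n' "$sample" | sed 's/^server .*\.centos\.pool\.ntp\.org iburst$/#&/')
edited="$edited
server 192.168.101.10 iburst"
printf '%s\n' "$edited"
```

Against the live file, the same sed expression with `-i` (and a backup suffix) performs the edit in place.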

  Verify, e.g. with chronyc sources on the compute node (screenshot omitted: the controller appears as the time source).

  3. Install the required packages on the compute node:



# yum install centos-release-openstack-pike
# yum upgrade
If the upgrade process includes a new kernel, reboot your host to activate it:
# reboot
# yum install python-openstackclient
# yum install openstack-selinux
RHEL and CentOS enable SELinux by default; the openstack-selinux package automatically manages security policies for OpenStack services.

  With the prerequisites in place, install the Compute package itself:



# yum install openstack-nova-compute

  Edit the configuration file /etc/nova/nova.conf:



[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@192.168.101.10
my_ip = 192.168.101.11
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://192.168.101.10:5000
auth_url = http://192.168.101.10:35357
memcached_servers = 192.168.101.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.101.10:6080/vnc_auto.html

[glance]
api_servers = http://192.168.101.10:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://192.168.101.10:35357/v3
username = placement
password = placement

  In the settings above, my_ip (MANAGEMENT_INTERFACE_IP_ADDRESS in the official guide) must be replaced with the compute node's management IP. The compute node here is 192.168.101.11, hence:

my_ip = 192.168.101.11

Determine whether the compute node supports hardware acceleration for virtual machines:




# egrep -c '(vmx|svm)' /proc/cpuinfo

  a. If the result is one or greater, the compute node supports hardware acceleration and the configuration file needs no change. (If the node is itself a virtual machine, CPU virtualization must be enabled for it in the hypervisor.)

b. If the result is 0, the compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.

Edit the [libvirt] section in the /etc/nova/nova.conf file as follows:




[libvirt]
virt_type = qemu
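The decision rule in (a)/(b) amounts to a one-line conditional. In this sketch the flag count is hard-coded as a stand-in for the egrep output, since /proc/cpuinfo differs per host:

```shell
# Pick virt_type for the [libvirt] section from the CPU flag count.
# count=2 is a stand-in for: egrep -c '(vmx|svm)' /proc/cpuinfo
count=2
if [ "$count" -ge 1 ]; then
  virt_type=kvm    # hardware acceleration available: keep the default
else
  virt_type=qemu   # no vmx/svm flags: fall back to software emulation
fi
echo "virt_type = $virt_type"
```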

  Running the check on this node:



[iyunv@node2 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
2

  The node therefore supports hardware virtualization; the default is KVM, so the [libvirt] section needs no change.


Enable and start the Compute service:




# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service

  If the nova-compute service fails to start, check /var/log/nova/nova-compute.log. The error message AMQP server on controller:5672 is unreachable likely indicates that the firewall on the controller node is preventing access to port 5672. Configure the firewall to open port 5672 on the controller node and restart nova-compute service on the compute node.
  With the controller and compute nodes both installed, the compute node must be added to the cell database. Run the following on controller node node1:




# source admin-openrc   (load the admin credentials)
# openstack compute service list --service nova-compute

(Screenshot omitted: the nova-compute service appears in the list.)



Discover the compute hosts:




# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

(Screenshot omitted: discover_hosts output mapping the new compute node.)



Each time you add a compute node, you must run the nova-manage cell_v2 discover_hosts command above on the controller. Alternatively, set a suitable discovery interval in /etc/nova/nova.conf:




[scheduler]
discover_hosts_in_cells_interval = 300

  Finally, verify the deployment on controller node node1:



# source admin-openrc   (load the admin credentials)

  List the Compute service components:



# openstack compute service list

(Screenshot omitted: all Compute services reported as up.)



List the API endpoints in the Identity service to verify connectivity (ignore any warnings in the output):




# openstack catalog list

  List images to verify connectivity with the Image service:



# openstack image list

  Check the cells and placement API are working successfully:



# nova-status upgrade check

(Screenshot omitted: nova-status upgrade check passes.)



Nova is now installed successfully on both the controller and compute nodes.
