Openstack All in One @ centos6.3 install guide
  
  I recently set up OpenStack environments several times on virtual machines and physical machines for development and testing, following the OpenStack Red Hat guide. That guide is out of date in several places, so I am recording the procedure here for future use.
  Due to time constraints, the Cinder and Quantum features are not covered yet. The document lives on GitHub; if you have updates or find problems, feel free to fork it or contact the author.
  This document may become outdated at any time; please check the GitHub link for the latest version.
  OS: CentOS 6.3
  OpenStack version: Folsom (F)

Install Openstack Folsom @ centos_6.3_x86_64

Preparation

Enable the EPEL repository

$rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Disable SELinux

$vim /etc/sysconfig/selinux
SELINUX=disabled
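
  To apply the change without a reboot, SELinux can also be switched to permissive mode for the current session (an optional extra step, not in the original guide):

$setenforce 0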

Disable the firewall

$vim /etc/sysconfig/system-config-firewall
--disabled
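
  If system-config-firewall is not available, stopping the iptables service directly has the same effect (an equivalent alternative, not part of the original guide):

$service iptables stop
$chkconfig iptables off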

Install openstack and all related packages

$yum install -y openstack-utils openstack-keystone python-keystoneclient mysql mysql-server MySQL-python wget openstack-nova openstack-glance memcached qpid-cpp-server openstack-swift openstack-swift-proxy openstack-swift-account openstack-swift-container openstack-swift-object xfsprogs mod-wsgi openstack-dashboard bridge-utils

Configure Keystone

Set up and start MySQL daemon

$chkconfig mysqld on
$service mysqld start

  Set mysql root password

$/usr/bin/mysqladmin -u root password 'root'

Configure the Keystone database
  The keystone configuration file is /etc/keystone/keystone.conf

Check the admin_token in keystone.conf

admin_token=ADMIN
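
  If you prefer not to edit the file by hand, the openstack-config tool from the openstack-utils package installed above can set the same value (an equivalent alternative):

$openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token ADMIN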

Initialize the keystone db with openstack-db

$openstack-db --init --service keystone

  default user/password is keystone/keystone

Sync the database and start the keystone service

$keystone-manage db_sync
$service openstack-keystone start &&  chkconfig openstack-keystone on

Setting up tenants, users, and roles

$wget https://raw.github.com/TieWei/OpenstackFolsomInstall/master/sample_data.sh
$chmod +x sample_data.sh;./sample_data.sh

  default setting is as list :

Tenant      User      Roles     Password
-----------------------------------------
demo        admin     admin     secrete
service     glance    admin     glance
service     nova      admin     nova
service     ec2       admin     ec2
service     swift     admin     swift
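
  For reference, sample_data.sh essentially drives the keystone CLI; creating one tenant, user, and role by hand would look roughly like this (a simplified sketch with placeholder IDs; the real script authenticates with the admin token and also registers the services and endpoints):

$keystone tenant-create --name demo --description "Default tenant"
$keystone user-create --name admin --tenant-id [demo_tenant_id] --pass secrete
$keystone role-create --name admin
$keystone user-role-add --user-id [admin_user_id] --role-id [admin_role_id] --tenant-id [demo_tenant_id]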

Verifying Keystone Installation

Create adminrc file

$vim adminrc
export OS_USERNAME=admin
export OS_PASSWORD=secrete
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://127.0.0.1:35357/v2.0
$source adminrc

Do Verify

$keystone user-list
$keystone endpoint-list
$keystone tenant-list
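
  If the credentials in adminrc are correct, each command returns a populated table. Requesting a token directly is another quick check (optional, not in the original guide):

$keystone token-get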

Configure Swift
  Swift is the object storage service for OpenStack

Edit Swift Configuration Files

edit /etc/swift/swift.conf

[swift-hash]
swift_hash_path_suffix = swifthashcode

Set up an XFS volume (use a file to simulate a disk)

$dd if=/dev/zero of=/srv/swiftdisk bs=100MB count=50   # 5G
$mkfs.xfs -i size=1024  /srv/swiftdisk
$mkdir -p /srv/node/sdb1
$echo "/srv/swiftdisk /srv/node/sdb1 xfs loop,noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
$mount /srv/node/sdb1
$chown -R swift:swift /srv/node/
$df -Th

edit /etc/rsyncd.conf

uid = root
gid = root
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 127.0.0.1
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock

edit /etc/default/rsync (this file is Debian-specific and may not exist on CentOS 6; in that case the rc.local entry below is enough to start the daemon)

RSYNC_ENABLE = true

edit /etc/rc.d/rc.local

rsync --daemon

create /etc/swift/account-server.conf

[DEFAULT]
bind_ip = 127.0.0.1
bind_port = 6002
workers = 2
[pipeline:main]
pipeline = account-server
[app:account-server]
use = egg:swift#account
[account-replicator]
[account-auditor]
[account-reaper]

create /etc/swift/container-server.conf

[DEFAULT]
bind_ip = 127.0.0.1
bind_port = 6001
workers = 2
[pipeline:main]
pipeline = container-server
[app:container-server]
use = egg:swift#container
[container-replicator]
[container-updater]
[container-auditor]

create /etc/swift/object-server.conf

[DEFAULT]
bind_ip = 127.0.0.1
bind_port = 6000
workers = 2
[pipeline:main]
pipeline = object-server
[app:object-server]
use = egg:swift#object
[object-replicator]
[object-updater]
[object-auditor]
[object-expirer]

create /etc/swift/proxy-server.conf

[DEFAULT]
bind_port = 8080
workers = 8
user = swift
[pipeline:main]
pipeline = healthcheck cache authtoken keystone proxy-server
[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true
[filter:cache]
use = egg:swift#memcache
memcache_servers = 127.0.0.1:11211
[filter:catch_errors]
use = egg:swift#catch_errors
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:keystone]
paste.filter_factory = keystone.middleware.swift_auth:filter_factory
operator_roles = admin,SwiftOperator
is_admin = true
cache = swift.cache
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
admin_tenant_name = service
admin_user = swift
admin_password = swift
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
signing_dir = /tmp/keystone-signing-swift

Add to auto-start

$echo "swift-init main start" >> /etc/rc.local
$echo "swift-init rest start" >> /etc/rc.local

Build the rings and start swift services

$cd /etc/swift
$swift-ring-builder account.builder create  10 1 1
$swift-ring-builder container.builder create 10 1 1
$swift-ring-builder object.builder create 10 1 1
$swift-ring-builder account.builder add z1-127.0.0.1:6002/sdb1 100
$swift-ring-builder container.builder add z1-127.0.0.1:6001/sdb1 100
$swift-ring-builder object.builder add z1-127.0.0.1:6000/sdb1 100
$swift-ring-builder account.builder
$swift-ring-builder container.builder
$swift-ring-builder object.builder
$swift-ring-builder account.builder rebalance
$swift-ring-builder container.builder rebalance
$swift-ring-builder object.builder rebalance
$swift-init all start
$service memcached start

Verify swift service

$swift stat
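
  To go one step further than stat, you can round-trip a test object through the proxy (an optional check that reuses the adminrc credentials sourced earlier; the container and file names are arbitrary):

$echo "hello swift" > /tmp/swift-test.txt
$swift upload testcontainer /tmp/swift-test.txt
$swift list testcontainer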

Configure Glance
  Configure the Glance service and set Swift as the Glance back-end storage

Initialize glance db

$openstack-db --init --service glance

  default user/password is glance/glance

Edit the Glance configuration files

edit /etc/glance/glance-api.conf

[DEFAULT]
default_store = swift
swift_store_auth_version = 2
swift_store_auth_address = http://127.0.0.1:35357/v2.0/
swift_store_user = service:swift
swift_store_key = swift
swift_store_create_container_on_put = True
[keystone_authtoken]
#
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = glance
[paste_deploy]
config_file = /etc/glance/glance-api-paste.ini
flavor=keystone

edit /etc/glance/glance-registry.conf

[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = glance
[paste_deploy]
config_file = /etc/glance/glance-registry-paste.ini
flavor=keystone

Start glance services

$glance-manage db_sync
$service openstack-glance-registry start
$service openstack-glance-api start
$chkconfig openstack-glance-registry on
$chkconfig openstack-glance-api on

Verify glance service

$cd ~
$mkdir stackimages
$wget -c https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img -O stackimages/cirros.img
$glance image-create --name=cirros-0.3.0-x86_64 --disk-format=qcow2 --container-format=bare < stackimages/cirros.img
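
  Once the upload finishes, the image should be listed with an active status (an extra verification step, not in the original guide):

$glance image-list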

Configure Nova

Check and enable KVM

$egrep '(vmx|svm)' --color=always /proc/cpuinfo

  Check whether the kvm and kvm-intel modules are loaded

$lsmod | grep kvm

  If they are not loaded:

$modprobe kvm
$modprobe kvm-intel

  To load them automatically at boot, add them to /etc/modules (Debian-style; on CentOS 6 an executable /etc/rc.modules script serves the same purpose):

kvm
kvm-intel

Configuring the SQL Database (MySQL) on the Cloud Controller

$openstack-db --init --service nova

Configuring OpenStack Compute

edit /etc/nova/nova.conf

[DEFAULT]
# LOGS/STATE
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path = /var/lib/nova/tmp
rootwrap_config=/etc/nova/rootwrap.conf
# SCHEDULER
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
# VOLUMES
volume_driver=nova.volume.driver.ISCSIDriver
volume_group=nova-volumes
volume_name_template=volume-%08x
iscsi_helper=tgtadm
# DATABASE
sql_connection=mysql://nova:nova@127.0.0.1/nova
# COMPUTE
libvirt_type=kvm
compute_driver=libvirt.LibvirtDriver
#instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini
# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host=True
# APIS
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host=127.0.0.1
s3_host=127.0.0.1
#QPRD
rpc_backend = nova.openstack.common.rpc.impl_qpid
qpid_hostname=127.0.0.1
# GLANCE
image_service=nova.image.glance.GlanceImageService
glance_api_servers=127.0.0.1:9292
# NETWORK
network_manager=nova.network.manager.VlanManager
force_dhcp_release=False
dhcpbridge_flagfile=/etc/nova/nova.conf
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
dhcpbridge = /usr/bin/nova-dhcpbridge
public_interface=eth0
vlan_interface=eth0
injected_network_template = /usr/share/nova/interfaces.template
# NOVNC CONSOLE
novncproxy_base_url=http://127.0.0.1:6080/vnc_auto.html
vncserver_proxyclient_address=127.0.0.1
vncserver_listen=127.0.0.1
# AUTHENTICATION
auth_strategy=keystone
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova
signing_dirname = /tmp/keystone-signing-nova

  edit /etc/libvirt/qemu.conf

# The user ID for QEMU processes run by the system instance.
user = "nova"
# The group ID for QEMU processes run by the system instance.
group = "nova"

Create the nova-volumes volume group (20G)

$dd if=/dev/zero of=/srv/nova-volumes.img bs=100M count=200 && /sbin/vgcreate nova-volumes `/sbin/losetup --show -f /srv/nova-volumes.img`
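
  You can confirm the volume group was created before continuing (an optional check; note that the loop device set up by losetup is not persistent across reboots):

$vgs nova-volumes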

  edit /etc/tgt/targets.conf

include /var/lib/nova/volumes/*

  then restart tgtd

$service tgtd restart && chkconfig tgtd on

Restart nova services

$service qpidd restart && chkconfig qpidd on
$service libvirtd restart && chkconfig libvirtd on
$nova-manage db sync
$for svc in api objectstore compute network volume scheduler cert; do  service openstack-nova-$svc start ;  chkconfig openstack-nova-$svc on ;  done

Verify nova service
  Check images from glance

$nova-manage service list
$nova image-list

  Creating the Network for Compute VMs

$nova-manage network create --label=private --fixed_range_v4=192.168.20.0/24 --vlan=250 --bridge=br250 --num_networks=1 --network_size=256
$nova-manage network list

Running Virtual Machine Instances
  Disable qpid authentication by editing /etc/qpidd.conf

auth=no
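
  Restart the broker so the change takes effect (qpidd was started earlier, before this edit):

$service qpidd restart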

  Enable SSH and ICMP (ping) in the default security group

$nova secgroup-list
$nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
$nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

  Add a keypair (for SSH)

$mkdir -p ~/.ssh && ssh-keygen
$nova keypair-add --pub_key ~/.ssh/id_rsa.pub mykey
$nova keypair-list

  Start an instance

$nova flavor-list   # you will get [flavor_id]
$nova image-list    # you will get [image_id]
$nova boot --flavor [flavor_id] --image [image_id] --key_name mykey --security_group default cirros
$nova list          # you will get [instance_id]
$virsh list
$nova console-log cirros
# once the instance is ACTIVE you can ping its IP or ssh to it
# log in as the 'cirros' user, default password 'cubswin:)'; use 'sudo' for root
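
  With the keypair injected at boot, you can also log in without a password (a usage example; replace 192.168.20.2 with the fixed IP reported by nova list):

$ssh -i ~/.ssh/id_rsa cirros@192.168.20.2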

  delete the instance

$nova delete [instance_id]

Installing the OpenStack Dashboard
  configure
  edit /etc/openstack-dashboard/local_settings

DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'dash',
'USER': 'dash',
'PASSWORD': 'dash',
'HOST': '127.0.0.1',
'default-character-set': 'utf8'
},
}
CACHE_BACKEND = 'memcached://127.0.0.1:11211/'
OPENSTACK_HOST = "127.0.0.1"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"
OPENSTACK_KEYSTONE_ADMIN_URL = "http://%s:35357/v2.0" % OPENSTACK_HOST

  Initialize the dashboard database (run the SQL statements in a mysql shell, e.g. mysql -u root -p)

mysql> CREATE DATABASE dash;
mysql> GRANT ALL ON dash.* TO 'dash'@'%' IDENTIFIED BY 'dash';
mysql> GRANT ALL ON dash.* TO 'dash'@'localhost' IDENTIFIED BY 'dash';

$/usr/share/openstack-dashboard/manage.py syncdb
$service httpd restart
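
  To keep the dashboard available after a reboot, also enable httpd at boot (not in the original guide):

$chkconfig httpd on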

  Verify
  Log in at http://127.0.0.1/dashboard with user admin and password secrete

END
