[Experience Sharing] Install OpenStack Juno with CentOS 7

  Published: July 23, 2015
  Reference: http://docs.openstack.org/juno/install-guide/install/yum/content/ch_basic_environment.html#basics-messaging-server




  1. Basic Environment


  A. IP info


  controller node IP (and network node IP)


  192.168.2.31 // for management and tunneling


  150.183.249.91 // for external network




  compute node IP


  192.168.2.32


  150.183.249.92




  B. network node network setting




  vi /etc/sysconfig/network-scripts/ifcfg-br-ex




  TYPE="OVSBridge"


  BOOTPROTO="static"


  DEVICE="br-ex"


  ONBOOT="yes"


  IPADDR="150.183.249.91"


  PREFIX="24"


  GATEWAY="xxx.xxx.xxx.x"


  DNS1="xxx.xxx.xxx.xx"




  vi /etc/sysconfig/network-scripts/ifcfg-enp7s0f0




  TYPE="Ethernet"


  BOOTPROTO="none"


  DEFROUTE="yes"


  IPV4_FAILURE_FATAL="no"


  IPV6INIT="yes"


  IPV6_AUTOCONF="yes"


  IPV6_DEFROUTE="yes"


  IPV6_FAILURE_FATAL="no"


  NAME="eth0"


  UUID="8f5a0078-c245-4109-bae9-522976673172"


  DEVICE="enp7s0f0"


  ONBOOT="yes"


  #IPADDR="150.183.249.91"


  PREFIX="24"


  #GATEWAY="xxx.xxx.xxx.x"


  #DNS1="xxx.xxx.xxx.xx"


  IPV6_PEERDNS="yes"


  IPV6_PEERROUTES="yes"


  IPV6_PRIVACY="no"




  ovs-vsctl add-br br-ex


  ovs-vsctl add-port br-ex enp7s0f0


  ping 8.8.8.8 // check internet enabled
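  To double-check the Open vSwitch setup, list the bridges and ports; br-ex should show enp7s0f0 attached:

  ovs-vsctl show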




  C. Check the firewall (important)


  Prefer iptables over firewalld.


  See the link below: install iptables-services and then start iptables.service


  http://heavenkong.blogspot.kr/2015/07/use-iptables-in-rhel-7-centos-7.html
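  In short, the switch from firewalld to iptables described in that post boils down to something like:

  systemctl stop firewalld
  systemctl disable firewalld
  yum install iptables-services
  systemctl enable iptables.service
  systemctl start iptables.service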




  D. Verify connectivity


  This setup uses the neutron networking architecture; verify connectivity between the nodes:


  ping 8.8.8.8


  ping 192.168.2.32 (to compute node)


  ping 192.168.2.31 (from compute node to controller/network node)




  E. Install and configure NTP


  [controller node]


  yum install ntp


  vi /etc/ntp.conf




  


  server NTP_SERVER iburst // on the controller, NTP_SERVER is an upstream NTP server (for example the pool.ntp.org entry below)


  server pool.ntp.org iburst


  restrict -4 default kod notrap nomodify


  restrict -6 default kod notrap nomodify


  Then delete or comment out the existing server and restrict lines.


  systemctl enable ntpd.service


  systemctl start ntpd.service




  [other node]




  yum install ntp


  vi /etc/ntp.conf


   server controller iburst


  Then delete or comment out the existing server lines.




  systemctl enable ntpd.service


  systemctl start ntpd.service




  [Verify on controller node]


  ntpq -c peers


  ntpq -c assoc


  [Verify on other nodes]


  ntpq -c peers




  [iyunv@ivc02 ~]# ntpq -c peers


       remote           refid      st t when poll reach   delay   offset  jitter


  ===============================================


  *controller      218.234.23.44    4 u   43   64    7    0.203   -3.954   0.550




  [iyunv@ivc02 ~]# ntpq -c assoc




  ind assid status  conf reach auth condition  last_event cnt


  ============================================


    1 46623  963a   yes   yes  none  sys.peer    sys_peer  3




  F. Install OpenStack packages


  [all nodes]


  yum install yum-plugin-priorities


  yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm


  yum install http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm


  yum upgrade


  yum install openstack-selinux




  G. Install MariaDB (MySQL) database


  [on controller node]


  yum install mariadb mariadb-server MySQL-python


  vi /etc/my.cnf




  [mysqld]


  ...


  bind-address = 192.168.2.31




  default-storage-engine = innodb


  innodb_file_per_table


  collation-server = utf8_general_ci


  init-connect = 'SET NAMES utf8'


  character-set-server = utf8






  systemctl enable mariadb.service


  systemctl start mariadb.service


  mysql_secure_installation


  mysql -u root -p
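  With bind-address set as above, MariaDB should be listening on the management IP; a quick check:

  ss -tlnp | grep 3306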




  H. Messaging server (using RabbitMQ)


  [on controller node]


  yum install rabbitmq-server




  systemctl enable rabbitmq-server.service


  systemctl start rabbitmq-server.service




  Installation creates a default user named guest; I just updated the password for this user.




  [iyunv@controller ~]# rabbitmqctl change_password guest xxxxxx


  Changing password for user "guest" ...


  ...done.




  Check that the version is 3.3.0 or newer:


  rabbitmqctl status | grep rabbit




  Status of node rabbit@controller ...


   {running_applications,[{rabbit,"RabbitMQ","3.3.5"},




  vi /etc/rabbitmq/rabbitmq.config


  [{rabbit, [{loopback_users, []}]}].




  systemctl restart rabbitmq-server.service




  Then open iptables port 5672:


  iptables -I INPUT -p tcp -m tcp --dport 5672 -j ACCEPT


  iptables-save > /etc/sysconfig/iptables


  systemctl restart iptables.service
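  To confirm the guest account is still present after these changes, list RabbitMQ's users:

  rabbitmqctl list_users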




  2. Install Keystone


  [on controller node]


  A. Create keystone database


  mysql -u root -p


  CREATE DATABASE keystone;   


       GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';


       GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';


  Create an admin token:


  openssl rand -hex 10   


  149b492807a9ee1d1fd1




  B. Install keystone


  yum install openstack-keystone python-keystoneclient


  vi /etc/keystone/keystone.conf






  [DEFAULT]


  ...


  admin_token = 149b492807a9ee1d1fd1




  [database]
  ...
  connection = mysql://keystone:xxx@controller/keystone

  [token]
  ...
  provider = keystone.token.providers.uuid.Provider
  driver = keystone.token.persistence.backends.sql.Token

  [revoke]
  ...
  driver = keystone.contrib.revoke.backends.sql.Revoke




  keystone-manage pki_setup --keystone-user keystone --keystone-group keystone


  chown -R keystone:keystone /var/log/keystone


  chown -R keystone:keystone /etc/keystone/ssl


  chmod -R o-rwx /etc/keystone/ssl


  /bin/sh -c "keystone-manage db_sync" keystone




  systemctl enable openstack-keystone.service


  systemctl start openstack-keystone.service


  (crontab -l -u keystone 2>&1 | grep -q token_flush) || echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1'  >> /var/spool/cron/keystone




  C. Create tenants, users, and roles




  export OS_SERVICE_TOKEN=149b492807a9ee1d1fd1


  export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0




  keystone tenant-create --name admin --description "Admin Tenant"


  keystone user-create --name admin --pass xxx


  keystone role-create --name admin


  keystone user-role-add --user admin --tenant admin --role admin




  keystone tenant-create --name demo --description "Demo Tenant"


  keystone user-create --name demo --tenant demo --pass xxx


  keystone tenant-create --name service --description "Service Tenant"




  D. Create service entity and API endpoint




  keystone service-create --name keystone --type identity --description "OpenStack Identity"


  keystone endpoint-create --service-id $(keystone service-list | awk '/ identity / {print $2}') --publicurl http://controller:5000/v2.0 --internalurl http://controller:5000/v2.0 --adminurl http://controller:35357/v2.0 --region RegionOne




  Make sure that ports 5000 and 35357 are open.
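  The commands below follow the same iptables pattern used for port 5672 above; the later ports 9292, 8774, and 9696 can be opened the same way.

  iptables -I INPUT -p tcp -m tcp --dport 5000 -j ACCEPT
  iptables -I INPUT -p tcp -m tcp --dport 35357 -j ACCEPT
  iptables-save > /etc/sysconfig/iptables
  systemctl restart iptables.service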




  E. Verify keystone




  unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT


  keystone --os-tenant-name admin --os-username admin --os-password xxx --os-auth-url http://controller:35357/v2.0 token-get


  keystone --os-tenant-name admin --os-username admin --os-password xxx --os-auth-url http://controller:35357/v2.0 tenant-list


  keystone --os-tenant-name admin --os-username admin --os-password xxx --os-auth-url http://controller:35357/v2.0 user-list


  keystone --os-tenant-name admin --os-username admin --os-password xxx --os-auth-url http://controller:35357/v2.0 role-list


  keystone --os-tenant-name demo --os-username demo --os-password xxx --os-auth-url http://controller:35357/v2.0 token-get


  keystone --os-tenant-name demo --os-username demo --os-password xxx --os-auth-url http://controller:35357/v2.0 user-list


  The last command fails; that is expected, because the demo user does not have the admin role.






  vi admin-openrc.sh






  export OS_TENANT_NAME=admin


  export OS_USERNAME=admin


  export OS_PASSWORD=xxx


  export OS_AUTH_URL=http://controller:35357/v2.0




  vi demo-openrc.sh




  export OS_TENANT_NAME=demo


  export OS_USERNAME=demo


  export OS_PASSWORD=xxx


  export OS_AUTH_URL=http://controller:35357/v2.0
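  Later sections load these files instead of exporting the variables by hand, for example:

  source admin-openrc.sh
  keystone tenant-list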




  3. Install Glance
  [on controller node]
  A. Create database glance
  mysql -u root -p
  CREATE DATABASE glance;
  GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xxx';
  GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xxx';
  exit
  B. Create service credentials
  source admin-openrc.sh
  keystone user-create --name glance --pass xxx
  keystone user-role-add --user glance --tenant service --role admin
  keystone service-create --name glance --type image --description "OpenStack Image Service"


  keystone endpoint-create --service-id $(keystone service-list | awk '/ image / {print $2}')   --publicurl http://controller:9292   --internalurl http://controller:9292   --adminurl http://controller:9292 --region RegionOne




  open port 9292




  C. Install glance packages


  yum install openstack-glance python-glanceclient


  vi /etc/glance/glance-api.conf






  


  [DEFAULT]




  ...


  notification_driver = noop


  verbose = True


  


  [database]


  ...


  connection = mysql://glance:xxxx@controller/glance


  ... ...








  [keystone_authtoken]


  ...


  auth_uri = http://controller:5000/v2.0


  identity_uri = http://controller:35357


  admin_tenant_name = service


  admin_user = glance


  admin_password = GLANCE_PASS




  [paste_deploy]


  ...


  flavor = keystone


  





  [glance_store]


  ...


  default_store = file


  filesystem_store_datadir = /var/lib/glance/images/
  vi /etc/glance/glance-registry.conf


  [DEFAULT]




  ...


  notification_driver = noop


  verbose = True


  


  [database]


  ...


  connection = mysql://glance:xxxx@controller/glance


  ... ...








  [keystone_authtoken]


  ...


  auth_uri = http://controller:5000/v2.0


  identity_uri = http://controller:35357


  admin_tenant_name = service


  admin_user = glance


  admin_password = GLANCE_PASS




  [paste_deploy]


  ...


  flavor = keystone
  /bin/sh -c "glance-manage db_sync" glance
  systemctl enable openstack-glance-api.service openstack-glance-registry.service
  systemctl start openstack-glance-api.service openstack-glance-registry.service
  D. Verify glance
  mkdir /tmp/images
  cd /tmp/images
  wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1503.qcow2
  This downloads the CentOS 7 qcow2 cloud image.
  glance image-create --name "centos-7-x86_64" --file /tmp/images/CentOS-7-x86_64-GenericCloud-1503.qcow2 --disk-format qcow2 --container-format bare --is-public True --progress
  glance image-list


  4. Install Nova
  [on controller node]
  A. Create nova database
  mysql -u root -p
  CREATE DATABASE nova;
  GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'venus0894';
  GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%'    IDENTIFIED BY 'venus0894';
  exit
  B. Create service credentials
  source admin-openrc.sh
  keystone user-create --name nova --pass NOVA_PASS
  keystone user-role-add --user nova --tenant service --role admin
  keystone service-create --name nova --type compute --description "OpenStack Compute"
  keystone endpoint-create --service-id $(keystone service-list | awk '/ compute / {print $2}') --publicurl http://controller:8774/v2/%\(tenant_id\)s --internalurl http://controller:8774/v2/%\(tenant_id\)s --adminurl http://controller:8774/v2/%\(tenant_id\)s --region RegionOne




  open port 8774




  C. Install nova packages


  yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient




  vi /etc/nova/nova.conf








  [DEFAULT]


  ...


  rpc_backend = rabbit


  rabbit_host = controller


  rabbit_password = RABBIT_PASS




  ...


  auth_strategy = keystone




  ...


  my_ip = 192.168.2.31




  ...


  vncserver_listen = 192.168.2.31


  vncserver_proxyclient_address = 192.168.2.31






  ...


  verbose = True


  


  [database]


  ...


  connection = mysql://nova:NOVA_DBPASS@controller/nova


  




  [keystone_authtoken]


  ...


  auth_uri = http://controller:5000/v2.0


  identity_uri = http://controller:35357


  admin_tenant_name = service


  admin_user = nova
  admin_password = NOVA_PASS


  


  [glance]


  ...


  host = controller






  /bin/sh -c "nova-manage db sync" nova


  systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service




  systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service




  [on compute node]




  D. Install nova


  yum install openstack-nova-compute sysfsutils


  vi /etc/nova/nova.conf








  [DEFAULT]


  ...


  rpc_backend = rabbit


  rabbit_host = controller


  rabbit_password = RABBIT_PASS


  auth_strategy = keystone




  my_ip = 192.168.2.32



  [glance]




  ...


  host = controller
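  Before starting the compute service, it is worth checking whether the node supports hardware-accelerated virtualization, as the referenced guide suggests:

  egrep -c '(vmx|svm)' /proc/cpuinfo   // if this returns 0, set virt_type = qemu in the [libvirt] section of /etc/nova/nova.conf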


  




  systemctl enable libvirtd.service openstack-nova-compute.service



  systemctl start libvirtd.service openstack-nova-compute.service






  [on controller node]






  E. Verify nova



  source admin-openrc.sh



  nova service-list






  5. Install Neutron



  [on controller node]






  A. Create neutron database



  mysql -u root -p



  CREATE DATABASE neutron;



  GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'venus0894';



  GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'venus0894';



  exit






  B. Create the service credentials



  keystone user-create --name neutron --pass NEUTRON_PASS



  keystone user-role-add --user neutron --tenant service --role admin



  keystone service-create --name neutron --type network --description "OpenStack Networking"



  keystone endpoint-create --service-id $(keystone service-list | awk '/ network / {print $2}') --publicurl http://controller:9696 --adminurl http://controller:9696 --internalurl http://controller:9696 --region RegionOne






  Open port 9696






  C. Install neutron packages





  yum install openstack-neutron openstack-neutron-ml2 python-neutronclient which


  vi /etc/neutron/neutron.conf







  [DEFAULT]


  ...


  rpc_backend = rabbit


  rabbit_host = controller


  rabbit_password = RABBIT_PASS


  auth_strategy = keystone




  core_plugin = ml2


  service_plugins = router


  allow_overlapping_ips = True




  notify_nova_on_port_status_changes = True


  notify_nova_on_port_data_changes = True


  nova_url = http://controller:8774/v2


  nova_admin_auth_url = http://controller:35357/v2.0


  nova_region_name = regionOne


  nova_admin_username = nova


  nova_admin_tenant_id = SERVICE_TENANT_ID


  nova_admin_password = NOVA_PASS


  verbose = True


  






  [database]


  ...


  connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron


  




  [keystone_authtoken]


  ...


  auth_uri = http://controller:5000/v2.0


  identity_uri = http://controller:35357


  admin_tenant_name = service


  admin_user = neutron


  admin_password = NEUTRON_PASS
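  SERVICE_TENANT_ID above stands for the ID of the service tenant; with the admin credentials loaded it can be looked up like this:

  source admin-openrc.sh
  keystone tenant-get service
  keystone tenant-list | awk '/ service / {print $2}'   // prints only the ID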


  


  




  vi /etc/neutron/plugins/ml2/ml2_conf.ini










  [ml2]


  ...


  type_drivers = flat,gre


  tenant_network_types = gre


  mechanism_drivers = openvswitch


  






  [ml2_type_gre]


  ...


  tunnel_id_ranges = 1:1000


  


  








  [securitygroup]


  ...


  enable_security_group = True


  enable_ipset = True


  firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver


  


  



  vi /etc/nova/nova.conf




  [DEFAULT]


  ...


  network_api_class = nova.network.neutronv2.api.API


  security_group_api = neutron


  linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver


  firewall_driver = nova.virt.firewall.NoopFirewallDriver


  


  [neutron]




  ...


  url = http://controller:9696


  auth_strategy = keystone


  admin_auth_url = http://controller:35357/v2.0


  admin_tenant_name = service


  admin_username = neutron


  admin_password = NEUTRON_PASS


  


  ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
  /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron
  systemctl restart openstack-nova-api.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service
  systemctl enable neutron-server.service
  systemctl start neutron-server.service
  D. Verify neutron
  neutron ext-list


  [on network node; in this setup the network node is the controller node]
  E. Configure kernel networking
  vi /etc/sysctl.conf

net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
  sysctl -p
  F. Install networking components
  yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
  vi /etc/neutron/neutron.conf




  [DEFAULT]


  ...


  rpc_backend = rabbit


  rabbit_host = controller


  rabbit_password = RABBIT_PASS




  auth_strategy = keystone




  verbose = True


  core_plugin = ml2


  service_plugins = router


  allow_overlapping_ips = True


  




  [keystone_authtoken]


  ...


  auth_uri = http://controller:5000/v2.0


  identity_uri = http://controller:35357


  admin_tenant_name = service


  admin_user = neutron


  admin_password = NEUTRON_PASS


  


  
  vi /etc/neutron/plugins/ml2/ml2_conf.ini




  [ml2]


  ...


  type_drivers = flat,gre


  tenant_network_types = gre


  mechanism_drivers = openvswitch


  [ml2_type_flat]




  ...


  flat_networks = external




  [securitygroup]


  ...


  enable_security_group = True


  enable_ipset = True


  firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver


  [ovs]




  local_ip = 192.168.2.31


  enable_tunneling = True


  bridge_mappings = external:br-ex


  






  [agent]


  ...


  tunnel_types = gre


  


  vi /etc/neutron/l3_agent.ini






  [DEFAULT]


  ...


  interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver


  use_namespaces = True


  external_network_bridge = br-ex


  router_delete_namespaces = True


  verbose = True


  


  vi /etc/neutron/dhcp_agent.ini


  







  [DEFAULT]


  ...


  interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver


  dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq


  use_namespaces = True


  dhcp_delete_namespaces = True


  verbose = True


  


  vi /etc/neutron/metadata_agent.ini






  [DEFAULT]


  ...


  auth_url = http://controller:5000/v2.0


  auth_region = RegionOne


  admin_tenant_name = service


  admin_user = neutron


  admin_password = NEUTRON_PASS




  nova_metadata_ip = controller





  metadata_proxy_shared_secret = METADATA_SECRET



  verbose = True
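  METADATA_SECRET must be the same string here and in nova.conf on the controller below; any random value works, for example one generated the same way as the keystone admin token:

  openssl rand -hex 10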


  


  [on controller node]


  vi /etc/nova/nova.conf






  [neutron]


  ...


  service_metadata_proxy = True


  metadata_proxy_shared_secret = METADATA_SECRET


  systemctl restart openstack-nova-api.service


  


  [on network node]




  systemctl enable openvswitch.service


  systemctl start openvswitch.service


  ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini




  cp /usr/lib/systemd/system/neutron-openvswitch-agent.service \


  /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig




  sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' \


  /usr/lib/systemd/system/neutron-openvswitch-agent.service
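  The sed command simply points the agent unit at the plugin.ini symlink created above; the substitution can be verified with:

  grep plugin.ini /usr/lib/systemd/system/neutron-openvswitch-agent.service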






  systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service \


  neutron-dhcp-agent.service neutron-metadata-agent.service \


  neutron-ovs-cleanup.service




  systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service \


  neutron-dhcp-agent.service neutron-metadata-agent.service


  


  [on controller node]


  G. Verify




  neutron agent-list




  [on compute node]


  H. Configure kernel networking


  vi /etc/sysctl.conf



net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

  sysctl -p




  I. Install neutron compute packages


  yum install openstack-neutron-ml2 openstack-neutron-openvswitch


  vi /etc/neutron/neutron.conf








  [DEFAULT]


  ...


  rpc_backend = rabbit


  rabbit_host = controller


  rabbit_password = RABBIT_PASS


  auth_strategy = keystone




  core_plugin = ml2


  service_plugins = router


  allow_overlapping_ips = True


  verbose = True


  




  [keystone_authtoken]


  ...


  auth_uri = http://controller:5000/v2.0


  identity_uri = http://controller:35357


  admin_tenant_name = service


  admin_user = neutron


  admin_password = NEUTRON_PASS


  


  vi /etc/neutron/plugins/ml2/ml2_conf.ini








  [ml2]


  ...


  type_drivers = flat,gre


  tenant_network_types = gre


  mechanism_drivers = openvswitch






  [ml2_type_gre]


  ...


  tunnel_id_ranges = 1:1000






  [securitygroup]


  ...


  enable_security_group = True


  enable_ipset = True


  firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver






  [ovs]


  ...


  local_ip = 192.168.2.32


  enable_tunneling = True


  [agent]




  ...


  tunnel_types = gre


  





  systemctl enable openvswitch.service


  systemctl start openvswitch.service




  vi /etc/nova/nova.conf






  [DEFAULT]


  ...


  network_api_class = nova.network.neutronv2.api.API


  security_group_api = neutron


  linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver


  firewall_driver = nova.virt.firewall.NoopFirewallDriver


  


  [neutron]




  ...


  url = http://controller:9696


  auth_strategy = keystone


  admin_auth_url = http://controller:35357/v2.0


  admin_tenant_name = service


  admin_username = neutron


  admin_password = NEUTRON_PASS


  


  
  ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
  cp /usr/lib/systemd/system/neutron-openvswitch-agent.service \
  /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig




  sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' \


  /usr/lib/systemd/system/neutron-openvswitch-agent.service




  systemctl restart openstack-nova-compute.service




  systemctl enable neutron-openvswitch-agent.service


  systemctl start neutron-openvswitch-agent.service




  J. Verify


  neutron agent-list


  6. Install Ceilometer


  7. Install Horizon


  8. Launch Instance
