
[Experience Sharing] Placement control and multi-tenancy isolation with OpenStack Cloud: Bare Metal Provisioning

  http://www.mirantis.com/blog/baremetal-provisioning-multi-tenancy-placement-control-isolation/
  
  In a previous post, we introduced the bare-metal use cases for OpenStack Cloud and its capabilities. Here, we're going to talk about how you can apply some of these approaches to a scenario that mixes virtualization with isolation of key components.
  Isolation requirements are pretty common for OpenStack deployments. In fact, one could simply say: "Without proper resource isolation you can wave goodbye to the public cloud". OpenStack tries to fulfill this need in a number of ways, involving (among many other things):

  • GUI & API authentication with Keystone
  • private images in Glance
  • security groups
  However, if we go under the hood of OpenStack, we will see a bunch of well-known open source components, such as KVM, iptables, bridges, and iSCSI shares. How does OpenStack treat these components in terms of security? I could say that it does hardly anything here. It is up to the sysadmin to go to each compute node and harden the underlying components on his own.
  At Mirantis, one OpenStack deployment we dealt with had especially heavy security requirements. All the systems had to comply with several governmental standards that apply to processing sensitive data. Still, we had to provide multi-tenancy. To observe the standards, we decided that for "sensitive" tenants, isolated compute nodes with a hardened configuration should be provided.
  The component responsible for distributing instances across an OpenStack cluster is nova-scheduler. Its most sophisticated scheduler type, called FilterScheduler, allows many instance-placement policies to be enforced through "filters". For a given user request to spawn an instance, the filters determine the set of compute nodes capable of running it. A number of filters are already provided with the default nova-scheduler installation (they are listed here). However, none of them fully satisfied our requirements, so we decided to implement our own and called it "PlacementFilter".
  The main goal of the PlacementFilter is to "reserve" a whole compute node for a single tenant's instances, thus isolating them from other tenants' instances at the hardware level. Upon tenant creation, it can be specified whether the tenant is isolated from others or not (the default). For isolated tenants, only designated compute nodes should be used for provisioning VM instances. We define and assign these nodes to specific tenants manually, by creating a number of host aggregates. In short, host aggregates are a way to group compute nodes with similar capabilities or purpose. The goal of the PlacementFilter is to pick the proper aggregate (set of compute nodes) for a given tenant. Usual (non-isolated) tenants use "shared" compute nodes for VM provisioning. In this deployment we were also using OpenStack to provision bare-metal nodes. Bare-metal nodes are isolated by their nature, so there is no need to designate them to a pool of isolated nodes for isolated tenants. (In fact, this post builds a bit on one of my previous posts about bare-metal provisioning.)
Solution architecture
  During the initial cloud configuration, all servers dedicated to compute should be split into three pools:

  • servers for multi-tenant VMs
  • servers for single-tenant VMs
  • servers for bare-metal provisioning
  Such grouping is required to introduce two types of tenants: "isolated tenants" and "common tenants". For isolated tenants, aggregates are used to create dedicated sets of compute nodes. The aggregates are later taken into account in the scheduling phase by the PlacementFilter.
  The PlacementFilter has two missions:

  • schedule a VM on a compute node dedicated to the specific tenant, or on one of the default compute nodes if the tenant is non-isolated
  • schedule a VM on a bare-metal host if a bare-metal instance was requested (no aggregate is required here, as a bare-metal instance is isolated from other instances by nature, on the hardware level)
  The PlacementFilter passes only bare-metal hosts if the value 'bare_metal' was given for the 'compute_type' parameter in scheduler_hints.
NOTE: We can instruct the scheduler to take our provisioning requirements into account by giving it so-called "hints" (the "--hint" option of the "nova" command); e.g., to specify the compute node's CPU architecture: --hint arch=i386. In the above case, the hint for bare-metal is: nova boot …. --hint compute_type=bare_metal
  If a non-bare-metal instance is requested, the filter searches for the aggregate of the project the instance belongs to and passes only hosts from that aggregate. If no aggregate is found for the project, then a host from the default aggregate is chosen, as sketched below.
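  To make this concrete, below is a much-simplified sketch of what such a filter might look like against the Folsom-era filter API (deriving from BaseHostFilter and overriding host_passes, as stock filters of that time did). This is not the actual PlacementFilter code: the 'project_id' metadata key mirrors the aggregate metadata attached in the setup procedure below, while the 'compute_type' tag used to recognize bare-metal hosts is purely hypothetical, since the post does not describe that mechanism.

    # Simplified sketch -- not the actual Mirantis PlacementFilter code.
    from nova import db
    from nova.scheduler import filters

    class PlacementFilter(filters.BaseHostFilter):
        """Pass only the hosts allowed for the requesting tenant."""

        def host_passes(self, host_state, filter_properties):
            context = filter_properties['context'].elevated()
            hints = filter_properties.get('scheduler_hints') or {}
            spec = filter_properties.get('request_spec', {})
            project_id = spec.get('instance_properties', {}).get('project_id')

            # Metadata of all aggregates this host belongs to; values are sets,
            # e.g. {'project_id': set(['<tenant_id>'])}.
            metadata = db.aggregate_metadata_get_by_host(context,
                                                         host_state.host)

            # Mission 2: a bare-metal instance was requested -- pass only
            # bare-metal hosts. How such hosts are recognized is deployment
            # specific; here we assume a hypothetical 'compute_type' tag.
            if hints.get('compute_type') == 'bare_metal':
                return 'bare_metal' in metadata.get('compute_type', set())

            # Mission 1: hosts reserved for a tenant (tagged with project_id)
            # pass only for that tenant; untagged hosts form the default pool.
            # (Simplification: a complete filter would also keep isolated
            # tenants off the default pool once they own an aggregate.)
            owners = metadata.get('project_id', set())
            if owners:
                return project_id in owners
            return True

  The 'project_id' metadata consulted here is exactly what nova aggregate-set-metadata attaches to each dedicated aggregate in the setup procedure below.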
  The following diagram illustrates how the PlacementFilter works for both bare-metal and virtual instances:
  [Diagram: instance placement for isolated tenants, bare-metal requests, and the default "public" aggregate]

  (1) A member of project#1 requests an instance on his own isolated set of compute nodes. The instance lands within his dedicated host aggregate.
  (2) A member of project#1 requests a bare-metal instance. This time no aggregate is needed, as bare-metal nodes are by nature isolated at the hardware level, so the bare-metal node is taken from the general pool.
  (3) Instances of tenants not assigned to any host aggregate land in the default "public" aggregate, where compute nodes can be shared among tenants' instances.
PlacementFilter setup
  This is the procedure we follow to implement instance placement control:

  • create a default aggregate for non-isolated instances and add compute-nodes to it:
    nova aggregate-create default nova
    nova aggregate-add-host 1 compute-1
  • add a host where <bare-metal driver> runs to the default aggregate.
  • install placement filter from the packages or source code. Add the following flags to nova.conf file:
    --scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
    --scheduler_available_filters=placement_filter.PlacementFilter
    --scheduler_default_filters=PlacementFilter
  • create an isolated tenant:
    keystone tenant-create --name <project_name>
  • create a dedicated aggregate for this tenant:
    nova aggregate-create <aggregate_name> nova
    nova aggregate-set-metadata <aggregate_id> project_id=<tenant_id>
  • add hosts to the dedicated aggregate:
    nova aggregate-add-host <aggregate_id> <host_name>
  • spawn instance:
    nova boot --image <image_id> --flavor <flavor_id> <instance_name>
    (the instance will be spawned on one of the hosts dedicated to the current tenant; a short verification sketch in Python follows this list)
  • spawn bare-metal instance:
    nova boot --image <image_id> --flavor <flavor_id> --hint compute_type=bare_metal <instance_name>
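  Once the instance is active, one way to check that placement worked is to compare the compute host it landed on with the hosts of the tenant's dedicated aggregate. The snippet below is a minimal verification sketch with python-novaclient, not part of the original procedure: the credentials and Keystone endpoint are placeholders, and the OS-EXT-SRV-ATTR:host attribute is only returned when you query with admin credentials.

    # Verification sketch -- placeholder credentials; admin rights assumed.
    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', '<project_name>',
                         'http://127.0.0.1:5000/v2.0/')

    server = nova.servers.find(name='<instance_name>')
    host = getattr(server, 'OS-EXT-SRV-ATTR:host', None)

    # Aggregates whose metadata reserves them for this instance's tenant.
    dedicated = [agg for agg in nova.aggregates.list()
                 if agg.metadata.get('project_id') == server.tenant_id]

    print('instance host: %s' % host)
    print('dedicated hosts: %s' % [agg.hosts for agg in dedicated])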
Summary
  With the advent of FilterScheduler, implementing custom scheduling policies has become quite simple. Filter organization in OpenStack makes it formally as simple as overriding a single function called "host_passes". However, the design of the filter itself can become quite complex and is left to the fantasies of sysadmins/devs (ha!). As for host aggregates, until recently there was no filter that would take them into account (that's why we implemented the PlacementFilter). However, recently (in August 2012) a new filter appeared, called AggregateInstanceExtraSpecsFilter, which seems to do a similar job.
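  For completeness, here is a rough sketch of how the aggregate-plus-flavor approach behind AggregateInstanceExtraSpecsFilter could be wired up with python-novaclient: that filter passes a host only when the extra specs of the requested flavor are satisfied by the metadata of the host's aggregates, so a dedicated flavor per isolated tenant gives an effect similar to our PlacementFilter. The names ('isolated-acme', 'm1.acme', 'compute-2'), the 'reserved_for' key and the credentials are illustrative only, and a novaclient recent enough to expose flavor extra specs (flavor.set_keys) is assumed.

    # Illustrative sketch only -- names, metadata key and credentials are made up.
    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'admin',
                         'http://127.0.0.1:5000/v2.0/')

    # Tag a dedicated aggregate and a dedicated flavor with the same key/value;
    # AggregateInstanceExtraSpecsFilter then only passes hosts whose aggregate
    # metadata matches the flavor's extra specs.
    agg = nova.aggregates.create('isolated-acme', 'nova')
    nova.aggregates.add_host(agg.id, 'compute-2')
    nova.aggregates.set_metadata(agg.id, {'reserved_for': 'acme'})

    flavor = nova.flavors.find(name='m1.acme')  # a flavor created for this tenant
    flavor.set_keys({'reserved_for': 'acme'})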
