
[Experience Share] OpenStack: Analysis of Several Questions About Nova


Posted on 2015-10-11 12:10:20
  1. Tracing how two nova commands are executed:

  nova service-list
  +------------------+--------------+----------+---------+-------+----------------------------+-----------------+
  | Binary           | Host         | Zone     | Status  | State | Updated_at                 | Disabled Reason |
  +------------------+--------------+----------+---------+-------+----------------------------+-----------------+
  | nova-scheduler   | openstack2   | internal | enabled | up    | 2014-02-12T02:17:31.000000 | None            |
  | nova-conductor   | openstack2   | internal | enabled | up    | 2014-02-12T02:17:32.000000 | None            |
  | nova-cert        | openstack2   | internal | enabled | up    | 2014-02-12T02:17:31.000000 | None            |
  | nova-consoleauth | openstack2   | internal | enabled | up    | 2014-02-12T02:17:31.000000 | None            |
  | nova-compute     | openstack2   | nova     | enabled | up    | 2014-02-12T02:17:33.000000 | None            |
  | nova-network     | openstack2   | internal | enabled | up    | 2014-02-12T02:17:29.000000 | None            |
  | nova-compute     | openstack-1  | nova     | enabled | up    | 2014-02-12T02:17:27.000000 | None            |
  | nova-network     | openstack-1  | internal | enabled | up    | 2014-02-12T02:17:32.000000 | None            |
  +------------------+--------------+----------+---------+-------+----------------------------+-----------------+


  nova host-list

  +--------------+-------------+----------+
  | host_name    | service     | zone     |
  +--------------+-------------+----------+
  | openstack2   | scheduler   | internal |
  | openstack2   | conductor   | internal |
  | openstack2   | cert        | internal |
  | openstack2   | consoleauth | internal |
  | openstack2   | compute     | nova     |
  | openstack2   | network     | internal |
  | openstack-1  | compute     | nova     |
  | openstack-1  | network     | internal |
  +--------------+-------------+----------+
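Both commands are plain REST calls against nova-api: in this generation of Nova, `nova service-list` talks to the os-services extension and `nova host-list` to os-hosts. As a minimal sketch (the sample data below is invented), turning an os-hosts-style JSON body — the `{'hosts': [...]}` shape returned by `HostController.index()` shown later — into the rows printed above looks like:

```python
# Invented sample mirroring the {'hosts': [...]} body of the os-hosts API.
response = {'hosts': [
    {'host_name': 'openstack2', 'service': 'scheduler', 'zone': 'internal'},
    {'host_name': 'openstack-1', 'service': 'compute', 'zone': 'nova'},
]}

def format_host_list(body):
    """Flatten the JSON body into (host_name, service, zone) rows,
    the way the CLI prints them."""
    return [(h['host_name'], h['service'], h['zone'])
            for h in body['hosts']]

for name, svc, zone in format_host_list(response):
    print('| %-12s | %-11s | %-8s |' % (name, svc, zone))
```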


  So how does the controller node know the state of each node, and which services each node is running?
  

The nova-api service exposes the following REST API controllers (excerpted; elided code is marked with `# ...`).

HostController provides these methods:

    class HostController(object):
        def __init__(self):
            self.api = compute.HostAPI()
            super(HostController, self).__init__()

        @wsgi.serializers(xml=HostIndexTemplate)
        def index(self, req):
            # ...
            services = self.api.service_get_all(context, filters=filters,
                                                set_zones=True)
            hosts = []
            for service in services:
                hosts.append({'host_name': service['host'],
                              'service': service['topic'],
                              'zone': service['availability_zone']})
            return {'hosts': hosts}

        @wsgi.serializers(xml=HostShowTemplate)
        def show(self, req, id):
            context = req.environ['nova.context']
            host_name = id
            try:
                service = self.api.service_get_by_compute_host(context,
                                                               host_name)
            # ...

        def update(self, req, id, body):
            # ...
            return result

ServiceController provides these methods:

    class ServiceController(object):
        def __init__(self, ext_mgr=None, *args, **kwargs):
            self.host_api = compute.HostAPI()
            self.servicegroup_api = servicegroup.API()
            self.ext_mgr = ext_mgr

        def _get_services(self, req):
            # ...
            services = self.host_api.service_get_all(
                context, set_zones=True)
            # ...

        def index(self, req):
            detailed = self.ext_mgr.is_loaded('os-extended-services')
            services = self._get_services_list(req, detailed)
            return {'services': services}

        @wsgi.deserializers(xml=ServiceUpdateDeserializer)
        @wsgi.serializers(xml=ServiceUpdateTemplate)
        def update(self, req, id, body):
            try:
                self.host_api.service_update(context, host, binary,
                                             status_detail)
            # ...
            return ret_value

As the excerpts show, queries and updates on hosts and services are delegated to HostAPI. Looking at service_get_all and compute_node_get there:

    def service_get_all(self, context, filters=None, set_zones=False):
        # ...
        services = service_obj.ServiceList.get_all(context, disabled,
                                                   set_zones=set_zones)
        # ...
        return ret_services

    def compute_node_get(self, context, compute_id):
        return self.db.compute_node_get(context, int(compute_id))

Both are ultimately implemented as database reads. Note that the services table also records each service's host.
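One detail worth calling out: the State column ("up"/"down") in `nova service-list` is not stored in the table. With the default database servicegroup driver, a service counts as up when its `updated_at` heartbeat is recent enough, i.e. within `service_down_time` (60 seconds by default). A self-contained sketch of that check, assuming those semantics:

```python
import datetime

SERVICE_DOWN_TIME = 60  # seconds; mirrors nova's service_down_time option

def is_up(service, now=None):
    """A service is 'up' if its last heartbeat is within SERVICE_DOWN_TIME.

    `service` is a dict with an 'updated_at' (or 'created_at') datetime,
    standing in for a row from the services table.
    """
    now = now or datetime.datetime.utcnow()
    last = service.get('updated_at') or service['created_at']
    return (now - last).total_seconds() <= SERVICE_DOWN_TIME

now = datetime.datetime(2014, 2, 12, 2, 18, 0)
fresh = {'updated_at': datetime.datetime(2014, 2, 12, 2, 17, 31)}
stale = {'updated_at': datetime.datetime(2014, 2, 12, 2, 10, 0)}
print(is_up(fresh, now), is_up(stale, now))  # True False
```

This is why a crashed nova-compute shows as "down" only after the heartbeat grace period elapses, not immediately.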
  
  So when are the service and host records created?
  First, look at what each service does when it starts up:
  

def main():
    config.parse_args(sys.argv)
    logging.setup("nova")
    utils.monkey_patch()
    server = service.Service.create(binary='nova-scheduler',
                                    topic=CONF.scheduler_topic)
    service.serve(server)
    service.wait()


class Service(service.Service):
    def __init__(self, host, binary, topic, manager, report_interval=None,
                 periodic_enable=None, periodic_fuzzy_delay=None,
                 periodic_interval_max=None, db_allowed=True,
                 *args, **kwargs):
        super(Service, self).__init__()
        self.host = host
        self.binary = binary
        self.topic = topic
        self.manager_class_name = manager
        self.servicegroup_api = servicegroup.API(db_allowed=db_allowed)
        manager_class = importutils.import_class(self.manager_class_name)
        self.manager = manager_class(host=self.host, *args, **kwargs)
        self.report_interval = report_interval
        self.periodic_enable = periodic_enable
        self.periodic_fuzzy_delay = periodic_fuzzy_delay
        self.periodic_interval_max = periodic_interval_max
        self.saved_args, self.saved_kwargs = args, kwargs
        self.backdoor_port = None
        self.conductor_api = conductor.API(use_local=db_allowed)
        self.conductor_api.wait_until_ready(context.get_admin_context())

    def start(self):
        verstr = version.version_string_with_package()
        LOG.audit(_('Starting %(topic)s node (version %(version)s)'),
                  {'topic': self.topic, 'version': verstr})
        self.basic_config_check()
        self.manager.init_host()
        self.model_disconnected = False
        ctxt = context.get_admin_context()
        try:
            self.service_ref = self.conductor_api.service_get_by_args(
                ctxt, self.host, self.binary)
            self.service_id = self.service_ref['id']
        except exception.NotFound:
            self.service_ref = self._create_service_ref(ctxt)
        self.manager.pre_start_hook()
        if self.backdoor_port is not None:
            self.manager.backdoor_port = self.backdoor_port
        self.conn = rpc.create_connection(new=True)
        LOG.debug(_("Creating Consumer connection for Service %s") %
                  self.topic)
        rpc_dispatcher = self.manager.create_rpc_dispatcher(
            self.backdoor_port)
        # Share this same connection for these Consumers
        self.conn.create_consumer(self.topic, rpc_dispatcher, fanout=False)
        node_topic = '%s.%s' % (self.topic, self.host)
        self.conn.create_consumer(node_topic, rpc_dispatcher, fanout=False)
        self.conn.create_consumer(self.topic, rpc_dispatcher, fanout=True)
        # Consume from all consumers in a thread
        self.conn.consume_in_thread()
        self.manager.post_start_hook()
        LOG.debug(_("Join ServiceGroup membership for this service %s")
                  % self.topic)
        # Add service to the ServiceGroup membership group.
        self.servicegroup_api.join(self.host, self.topic, self)
        if self.periodic_enable:
            if self.periodic_fuzzy_delay:
                initial_delay = random.randint(0, self.periodic_fuzzy_delay)
            else:
                initial_delay = None
            self.tg.add_dynamic_timer(self.periodic_tasks,
                                      initial_delay=initial_delay,
                                      periodic_interval_max=
                                      self.periodic_interval_max)

    def _create_service_ref(self, context):
        svc_values = {
            'host': self.host,
            'binary': self.binary,
            'topic': self.topic,
            'report_count': 0
        }
        service = self.conductor_api.service_create(context, svc_values)
        self.service_id = service['id']
        return service

    def __getattr__(self, key):
        manager = self.__dict__.get('manager', None)
        return getattr(manager, key)

    @classmethod
    def create(cls, host=None, binary=None, topic=None, manager=None,
               report_interval=None, periodic_enable=None,
               periodic_fuzzy_delay=None, periodic_interval_max=None,
               db_allowed=True):
        if not host:
            host = CONF.host
        if not binary:
            binary = os.path.basename(sys.argv[0])
        if not topic:
            topic = binary.rpartition('nova-')[2]
        if not manager:
            manager_cls = ('%s_manager' %
                           binary.rpartition('nova-')[2])
            manager = CONF.get(manager_cls, None)
        if report_interval is None:
            report_interval = CONF.report_interval
        if periodic_enable is None:
            periodic_enable = CONF.periodic_enable
        if periodic_fuzzy_delay is None:
            periodic_fuzzy_delay = CONF.periodic_fuzzy_delay
        service_obj = cls(host, binary, topic, manager,
                          report_interval=report_interval,
                          periodic_enable=periodic_enable,
                          periodic_fuzzy_delay=periodic_fuzzy_delay,
                          periodic_interval_max=periodic_interval_max,
                          db_allowed=db_allowed)
        return service_obj

    def kill(self):
        """Destroy the service object in the datastore."""
        self.stop()
        try:
            self.conductor_api.service_destroy(context.get_admin_context(),
                                               self.service_id)
        except exception.NotFound:
            LOG.warn(_('Service killed that has no database entry'))

  
  On startup, each service checks whether a record for itself already exists in the database and creates one if it does not.
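The get-or-create logic in start() / _create_service_ref() can be sketched against a plain dict standing in for the services table (the names below are hypothetical; the real code goes through the conductor API):

```python
# A dict keyed by (host, binary) stands in for the services table.
services_db = {}
_next_id = [1]

def service_get_or_create(host, binary, topic):
    """Mirror of Service.start(): look up the row, create it if missing."""
    key = (host, binary)
    if key not in services_db:  # exception.NotFound in the real code
        services_db[key] = {'id': _next_id[0], 'host': host,
                            'binary': binary, 'topic': topic,
                            'report_count': 0}
        _next_id[0] += 1
    return services_db[key]

ref1 = service_get_or_create('openstack2', 'nova-scheduler', 'scheduler')
ref2 = service_get_or_create('openstack2', 'nova-scheduler', 'scheduler')
print(ref1['id'] == ref2['id'])  # True: a restart reuses the same record
```

This is why restarting a service does not produce duplicate rows in `nova service-list`: the (host, binary) pair already resolves to an existing record.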

Structure of the services table: (screenshots DSC0000.jpg and DSC0001.jpg not preserved)

  2. How does the HostManager in the scheduler track and schedule compute nodes?
  It does so by querying the compute_node table in the database directly.
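As a rough illustration of the idea (a hypothetical simplification, not HostManager's actual interface), filtering and weighing candidate hosts by the resource columns read from compute_node might look like:

```python
# Invented compute_node rows: total vs. used RAM per host.
compute_nodes = [
    {'host': 'openstack2',  'memory_mb': 8192, 'memory_mb_used': 7000},
    {'host': 'openstack-1', 'memory_mb': 8192, 'memory_mb_used': 2048},
]

def pick_host(nodes, ram_mb):
    """Filter hosts with enough free RAM, then pick the one with the most.

    Mimics the filter-then-weigh shape of Nova scheduling in miniature.
    """
    candidates = [n for n in nodes
                  if n['memory_mb'] - n['memory_mb_used'] >= ram_mb]
    if not candidates:
        return None
    return max(candidates,
               key=lambda n: n['memory_mb'] - n['memory_mb_used'])['host']

print(pick_host(compute_nodes, 2048))  # openstack-1
```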
  
  3. When is the data in the compute_node table created and updated?
  The nova-compute service runs a periodic task (via the @periodic_task mechanism) that regularly refreshes its row in compute_node; the data mainly describes the node's current resource usage.
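The periodic refresh can be sketched as a decorator-driven loop. This is a stripped-down imitation of Nova's @periodic_task machinery, with fabricated stats; the real version gathers usage from the hypervisor driver and writes it through the conductor:

```python
def periodic_task(interval):
    """Tag a method to be run every `interval` seconds (simplified)."""
    def wrap(fn):
        fn._periodic_interval = interval
        return fn
    return wrap

class FakeComputeManager(object):
    def __init__(self):
        self.compute_node = {}  # stands in for this host's compute_node row

    @periodic_task(interval=60)
    def update_available_resource(self):
        # Real nova queries the hypervisor here; values fabricated.
        self.compute_node.update({'vcpus_used': 2, 'memory_mb_used': 2048})

def run_periodic_tasks(manager, ticks):
    """Run every tagged method once per tick (no real timer loop)."""
    tasks = [getattr(manager, name) for name in dir(manager)
             if getattr(getattr(manager, name), '_periodic_interval', None)]
    for _ in range(ticks):
        for task in tasks:
            task()

mgr = FakeComputeManager()
run_periodic_tasks(mgr, ticks=1)
print(mgr.compute_node)  # {'vcpus_used': 2, 'memory_mb_used': 2048}
```

In Nova itself the loop is driven by the `add_dynamic_timer` call seen in Service.start() above, so the row stays fresh as long as the service is alive.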
  
  Structure of the compute_node table: (screenshot not preserved)



  

Copyright notice: this is the author's original article; do not reproduce without permission.
