Posted by sunage001 on 2017-7-2 17:44:06

tiger Ops-to-Python-Development — Class Notes Week 24: RabbitMQ Queues, day 82 & 83

RabbitMQ Queues
  Installation: http://www.rabbitmq.com/install-standalone-mac.html
  Install the Python RabbitMQ module:



pip install pika
or
easy_install pika
or
from source

https://pypi.python.org/pypi/pika
  The simplest queue communication

  Configuring a RabbitMQ user, password, and permissions
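The code below authenticates as user lxh with password lxh123456. As a sketch, that user can be created on the broker host with rabbitmqctl (the administrator tag and the "/" vhost are assumptions; adjust to your setup):

```shell
# create the user referenced by the pika examples below
rabbitmqctl add_user lxh lxh123456
# optional: allow management-UI access
rabbitmqctl set_user_tags lxh administrator
# grant configure/write/read permissions on the default vhost
rabbitmqctl set_permissions -p / lxh ".*" ".*" ".*"
```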


  For more RabbitMQ operations, see http://www.cnblogs.com/AloneSword/p/4200051.html
  The send side



#__author: Tiger lee
# -*- coding:utf-8 -*-
#date: 2017/2/18
#IN Python 3.5
import pika

credentials = pika.PlainCredentials('lxh', 'lxh123456')  # plain-text credentials; pika supports other mechanisms too

connection = pika.BlockingConnection(pika.ConnectionParameters('192.168.10.100', credentials=credentials))
channel = connection.channel()
# declare the queue
channel.queue_declare(queue='hello')
# In RabbitMQ a message can never be sent directly to the queue; it always needs to go through an exchange.
channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
print(" [x] Sent 'Hello World!'")
connection.close()
  The receive side



#__author: Tiger lee
# -*- coding:utf-8 -*-
#date: 2017/2/18
#IN Python 3.5
import pika

credentials = pika.PlainCredentials('lxh', 'lxh123456')  # plain-text credentials; pika supports other mechanisms too

connection = pika.BlockingConnection(pika.ConnectionParameters('192.168.10.100', credentials=credentials))
channel = connection.channel()
# You may ask why we declare the queue again -- we have already declared it in our previous code.
# We could avoid that if we were sure that the queue already exists, for example if the send.py
# program was run first. But we're not yet sure which program will be run first. In such cases
# it's a good practice to repeat declaring the queue in both programs.
channel.queue_declare(queue='hello')  # declare the queue

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)

channel.basic_consume(callback,
                      queue='hello',
                      no_ack=True)
print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()

Work Queues

  In this mode, RabbitMQ by default dispatches the producer's (P) messages to the consumers (C) in turn, much like round-robin load balancing.
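The round-robin dispatch described above can be sketched without a broker (a toy model for illustration, not pika):

```python
from itertools import cycle

def dispatch_round_robin(messages, consumers):
    """Hand each message to the next consumer in turn, mimicking RabbitMQ's default dispatch."""
    assignments = {c: [] for c in consumers}
    turn = cycle(consumers)
    for msg in messages:
        assignments[next(turn)].append(msg)
    return assignments

result = dispatch_round_robin(['m1', 'm2', 'm3', 'm4', 'm5'], ['c1', 'c2', 'c3'])
print(result)  # {'c1': ['m1', 'm4'], 'c2': ['m2', 'm5'], 'c3': ['m3']}
```

Note that messages go to consumers strictly in turn, regardless of how busy each consumer is; the fair-dispatch section below addresses that.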
  Producer code



#__author: Tiger lee
# -*- coding:utf-8 -*-
#date: 2017/2/18
#IN Python 3.5
import sys
import pika
import time

credentials = pika.PlainCredentials('lxh', 'lxh123456')  # plain-text credentials; pika supports other mechanisms too

connection = pika.BlockingConnection(pika.ConnectionParameters('192.168.10.100', credentials=credentials))
channel = connection.channel()
# declare the queue
channel.queue_declare(queue='task_queue')
# In RabbitMQ a message can never be sent directly to the queue; it always needs to go through an exchange.

message = ' '.join(sys.argv[1:]) or "Hello World! %s" % time.time()  # take the message from the command line, defaulting to Hello World
channel.basic_publish(exchange='',
                      routing_key='task_queue',
                      body=message,
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # make the message persistent
                      ))
print(" [x] Sent %r" % message)
connection.close()
  Consumer code



#__author: Tiger lee
# -*- coding:utf-8 -*-
#date: 2017/2/18
#IN Python 3.5
import pika, time

credentials = pika.PlainCredentials('lxh', 'lxh123456')  # plain-text credentials; pika supports other mechanisms too

connection = pika.BlockingConnection(pika.ConnectionParameters('192.168.10.100', credentials=credentials))
channel = connection.channel()

def callback(ch, method, properties, body):
    # ch is the channel object, method carries the delivery info, properties is the message metadata
    print(ch, method, properties, body)
    print(" [x] Received %r" % body)
    time.sleep(20)
    print(" [x] Done")
    print("method.delivery_tag", method.delivery_tag)
    ch.basic_ack(delivery_tag=method.delivery_tag)  # confirm that the message has been handled

channel.basic_consume(callback,
                      queue='task_queue')  # note: acks must stay enabled here, or the basic_ack above has no effect
print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
  Now start the producer first, then start three consumers, and send several messages from the producer; you will see the messages being delivered to the consumers in turn.
  Doing a task can take a few seconds. You may wonder what happens if one of the consumers starts a long task and dies with it only partly done. With our current code, once RabbitMQ delivers a message to the consumer it immediately removes it from memory. In this case, if you kill a worker we will lose the message it was just processing. We'll also lose all the messages that were dispatched to this particular worker but were not yet handled.
  But we don't want to lose any tasks. If a worker dies, we'd like the task to be delivered to another worker.
  In order to make sure a message is never lost, RabbitMQ supports message acknowledgments. An ack(nowledgement) is sent back from the consumer to tell RabbitMQ that a particular message has been received and processed and that RabbitMQ is free to delete it.
  If a consumer dies (its channel is closed, its connection is closed, or the TCP connection is lost) without sending an ack, RabbitMQ will understand that the message wasn't processed fully and will re-queue it. If there are other consumers online at the same time, it will then quickly redeliver it to another consumer. That way you can be sure that no message is lost, even if the workers occasionally die.
  There aren't any message timeouts; RabbitMQ will redeliver the message when the consumer dies. It's fine even if processing a message takes a very, very long time.
  Message acknowledgments are turned on by default. In previous examples we explicitly turned them off via the no_ack=True flag. It's time to remove this flag and send a proper acknowledgment from the worker once we're done with a task.



def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)
    time.sleep(body.count(b'.'))  # simulate work: one second per dot in the message body
    print(" [x] Done")
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(callback,
                      queue='hello')
  Using this code we can be sure that even if you kill a worker using CTRL+C while it was processing a message, nothing will be lost. Soon after the worker dies, all unacknowledged messages will be redelivered.

Message durability
  We have learned how to make sure that even if the consumer dies, the task isn't lost (acknowledgments are on by default; use no_ack=True to disable them). But our tasks will still be lost if the RabbitMQ server stops.
  When RabbitMQ quits or crashes it will forget the queues and messages unless you tell it not to. Two things are required to make sure that messages aren't lost: we need to mark both the queue and messages as durable.
  First, we need to make sure that RabbitMQ will never lose our queue. In order to do so, we need to declare it as durable:



channel.queue_declare(queue='hello', durable=True)
  Although this command is correct by itself, it won't work in our setup. That's because we've already defined a queue called hello which is not durable. RabbitMQ doesn't allow you to redefine an existing queue with different parameters and will return an error to any program that tries to do that. But there is a quick workaround - let's declare a queue with a different name, for example task_queue:



channel.queue_declare(queue='task_queue', durable=True)
  This queue_declare change needs to be applied to both the producer and consumer code.
  At that point we're sure that the task_queue queue won't be lost even if RabbitMQ restarts. Now we need to mark our messages as persistent - by supplying a delivery_mode property with a value 2.



channel.basic_publish(exchange='',
                      routing_key="task_queue",
                      body=message,
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # make message persistent
                      ))

Fair message dispatch
  If RabbitMQ simply hands messages to consumers in order without considering their load, a consumer on a weak machine can easily end up with a backlog it can't work through while a well-provisioned consumer stays idle. To solve this, set prefetch_count=1 on each consumer, which tells RabbitMQ: don't send me a new message until I've finished processing the current one.




channel.basic_qos(prefetch_count=1)
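To see why prefetch matters, the two dispatch policies can be contrasted in a toy model (no broker involved; the task costs and worker speeds are made up for illustration):

```python
import heapq

def dispatch(tasks, speeds, prefetch=None):
    """Simulate dispatching tasks (unit costs) to workers with given speeds.

    prefetch=None mimics blind round-robin; prefetch=1 mimics basic_qos(prefetch_count=1):
    each task goes to whichever worker becomes free first.
    Returns each worker's total busy time.
    """
    n = len(speeds)
    busy = [0.0] * n
    if prefetch is None:                       # round-robin, ignoring load
        for i, cost in enumerate(tasks):
            busy[i % n] += cost / speeds[i % n]
    else:                                      # fair dispatch: next free worker takes the task
        heap = [(0.0, i) for i in range(n)]    # (time when free, worker index)
        for cost in tasks:
            free_at, i = heapq.heappop(heap)
            busy[i] += cost / speeds[i]
            heapq.heappush(heap, (free_at + cost / speeds[i], i))
    return busy

tasks = [4, 4, 4, 4, 4, 4]
speeds = [1, 4]                                # worker 1 is four times faster
print(dispatch(tasks, speeds))                 # [12.0, 3.0] - the slow worker still gets half the tasks
print(dispatch(tasks, speeds, prefetch=1))     # [8.0, 4.0]  - the fast worker takes on more tasks
```

With round-robin the slow worker is busy for 12 time units while the fast one idles after 3; with fair dispatch the makespan drops to 8.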
  Complete code with message durability + fair dispatch
  Producer side



#__author: Tiger lee
# -*- coding:utf-8 -*-
#date: 2017/2/21
#IN Python 3.5
import pika
import sys

credentials = pika.PlainCredentials('lxh', 'lxh123456')  # plain-text credentials; pika supports other mechanisms too

connection = pika.BlockingConnection(pika.ConnectionParameters('192.168.10.100', credentials=credentials))
channel = connection.channel()
channel.queue_declare(queue='task_queue', durable=True)
message = ' '.join(sys.argv[1:]) or "Hello World!"
channel.basic_publish(exchange='',
                      routing_key='task_queue',
                      body=message,
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # make message persistent
                      ))
print(" [x] Sent %r" % message)
connection.close()
  Consumer side



#__author: Tiger lee
# -*- coding:utf-8 -*-
#date: 2017/2/21
#IN Python 3.5
import pika
import time

credentials = pika.PlainCredentials('lxh', 'lxh123456')  # plain-text credentials; pika supports other mechanisms too

connection = pika.BlockingConnection(pika.ConnectionParameters('192.168.10.100', credentials=credentials))
channel = connection.channel()
channel.queue_declare(queue='task_queue', durable=True)
print(' [*] Waiting for messages. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)
    time.sleep(body.count(b'.'))
    print(" [x] Done")
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)
channel.basic_consume(callback,
                      queue='task_queue')
channel.start_consuming()

Publish/Subscribe
  The examples so far have essentially been one-to-one: a message could only be sent to one specified queue. Sometimes, though, you want your message to be received by every queue, like a broadcast. That is what exchanges are for.
  An exchange is a very simple thing. On one side it receives messages from producers; on the other it pushes them to queues. An exchange must know exactly what to do with a message it receives: append it to a particular queue? To many queues? Or discard it? The rules are defined by the exchange type.
  An exchange is given a type when it is declared, and the type determines which queues qualify to receive a message:
  fanout: every queue bound to this exchange receives the message
direct: only the queue(s) bound with exactly the message's routing key on this exchange receive the message
topic: every queue whose binding key matches the routing key (the binding key may be a pattern) receives the message

   Wildcard notes: # matches zero or more dot-separated words, * matches exactly one word
      e.g. #.a matches a.a, aa.a, aaa.a, ...
           *.a matches a.a, b.a, c.a, ...
   Note: binding with routing key # on a topic exchange is equivalent to fanout
  headers: the message headers decide which queues the message goes to
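The topic-matching rules above can be sketched in plain Python by matching the binding key and routing key word by word (a toy illustration, not how the broker implements it):

```python
def topic_matches(binding_key, routing_key):
    """True if routing_key matches binding_key under RabbitMQ topic rules:
    '*' matches exactly one word, '#' matches zero or more words."""
    b = binding_key.split('.')
    r = routing_key.split('.')

    def match(i, j):
        if i == len(b):                      # binding exhausted: match iff routing is too
            return j == len(r)
        if b[i] == '#':                      # '#' may absorb zero or more routing words
            return any(match(i + 1, k) for k in range(j, len(r) + 1))
        if j == len(r):                      # routing exhausted but binding is not
            return False
        if b[i] == '*' or b[i] == r[j]:      # '*' matches any single word
            return match(i + 1, j + 1)
        return False

    return match(0, 0)

print(topic_matches('#.a', 'aa.a'))              # True
print(topic_matches('*.a', 'b.a'))               # True
print(topic_matches('*.a', 'b.c.a'))             # False
print(topic_matches('kern.*', 'kern.critical'))  # True
print(topic_matches('#', 'any.thing.at.all'))    # True
```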

  Message publisher



#__author: Tiger lee
# -*- coding:utf-8 -*-
#date: 2017/2/21
#IN Python 3.5
import pika
import sys

credentials = pika.PlainCredentials('lxh', 'lxh123456')  # plain-text credentials; pika supports other mechanisms too

connection = pika.BlockingConnection(pika.ConnectionParameters('192.168.10.100', credentials=credentials))
channel = connection.channel()
channel.exchange_declare(exchange='logs',
                         type='fanout')  # 'logs' is an arbitrary name; fanout means broadcast

message = ' '.join(sys.argv[1:]) or "info: Hello World!"
channel.basic_publish(exchange='logs',
                      routing_key='',
                      body=message)
print(" [x] Sent %r" % message)
connection.close()
  Message subscriber



#__author: Tiger lee
# -*- coding:utf-8 -*-
#date: 2017/2/21
#IN Python 3.5
import pika

credentials = pika.PlainCredentials('lxh', 'lxh123456')  # plain-text credentials; pika supports other mechanisms too

connection = pika.BlockingConnection(pika.ConnectionParameters('192.168.10.100', credentials=credentials))
channel = connection.channel()
channel.exchange_declare(exchange='logs',
                         type='fanout')
result = channel.queue_declare(exclusive=True)  # no queue name given, so RabbitMQ assigns a random one; exclusive=True deletes the queue once its consumer disconnects
queue_name = result.method.queue  # after declaring the temporary queue, fetch its name

channel.queue_bind(exchange='logs',
                   queue=queue_name)  # bind to the exchange, passing in our own queue name
print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] %r" % body)

channel.basic_consume(callback,
                      queue=queue_name,
                      no_ack=True)
channel.start_consuming()

Selective message receiving (exchange type=direct)
  RabbitMQ also supports routing by keyword: queues are bound to the exchange with a keyword, the sender publishes a message to the exchange with a keyword, and the exchange uses that keyword to decide which queue(s) to deliver the message to.
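The direct-routing idea can be modelled as a table from routing key to bound queues (a toy in-process sketch, not a broker):

```python
from collections import defaultdict

class DirectExchange:
    """Toy model of a direct exchange: a message goes to every queue whose
    binding key equals the message's routing key."""
    def __init__(self):
        self.bindings = defaultdict(list)   # routing_key -> [queue name, ...]
        self.queues = defaultdict(list)     # queue name -> delivered messages

    def queue_bind(self, queue, routing_key):
        self.bindings[routing_key].append(queue)

    def publish(self, routing_key, body):
        # messages whose key has no binding are simply dropped
        for queue in self.bindings.get(routing_key, []):
            self.queues[queue].append(body)

ex = DirectExchange()
ex.queue_bind('q_errors', 'error')          # q_errors only wants errors
ex.queue_bind('q_all', 'info')              # q_all binds twice, once per level
ex.queue_bind('q_all', 'error')
ex.publish('error', 'disk failure')
ex.publish('info', 'started')
print(ex.queues['q_errors'])  # ['disk failure']
print(ex.queues['q_all'])     # ['disk failure', 'started']
```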

  Keyword publisher



#__author: Tiger lee
# -*- coding:utf-8 -*-
#date: 2017/2/21
#IN Python 3.5
import pika
import sys

credentials = pika.PlainCredentials('lxh', 'lxh123456')  # plain-text credentials; pika supports other mechanisms too

connection = pika.BlockingConnection(pika.ConnectionParameters('192.168.10.100', credentials=credentials))
channel = connection.channel()
channel.exchange_declare(exchange='direct_logs', type='direct')  # declare a direct exchange

severity = sys.argv[1] if len(sys.argv) > 1 else 'info'  # severity level: take it from the command line, defaulting to info
message = ' '.join(sys.argv[2:]) or 'Hello World!'
channel.basic_publish(exchange='direct_logs',
                      routing_key=severity,  # deliver the message to the queues bound with this key
                      body=message)
print(" [x] Sent %r:%r" % (severity, message))
connection.close()
  Keyword subscriber



#__author: Tiger lee
# -*- coding:utf-8 -*-
#date: 2017/2/21
#IN Python 3.5
import pika
import sys

credentials = pika.PlainCredentials('lxh', 'lxh123456')  # plain-text credentials; pika supports other mechanisms too

connection = pika.BlockingConnection(pika.ConnectionParameters('192.168.10.100', credentials=credentials))
channel = connection.channel()
channel.exchange_declare(exchange='direct_logs', type='direct')  # declare the exchange name and type

result = channel.queue_declare(exclusive=True)  # create a random queue
queue_name = result.method.queue
severities = sys.argv[1:]  # one or more levels may be given, e.g.: info warning error
if not severities:  # if no level was given, print usage and exit
    sys.stderr.write("Usage: %s [info] [warning] [error]\n" % sys.argv[0])
    sys.exit(1)
for severity in severities:  # bind the queue once per requested level
    channel.queue_bind(exchange='direct_logs', queue=queue_name, routing_key=severity)  # only messages of the bound levels will be received
print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] %r:%r" % (method.routing_key, body))

channel.basic_consume(callback,
                      queue=queue_name,
                      no_ack=True)
channel.start_consuming()
  Start the recv client (subscriber)


  Start the send side (publisher)


Finer-grained message filtering
  Although using the direct exchange improved our system, it still has a limitation: it cannot route based on multiple criteria.
  In our logging system we might want to subscribe not only based on severity, but also based on the source that emitted the log. You may know this concept from the syslog unix tool, which routes logs by both severity (info/warn/crit...) and facility (auth/cron/kern...).
  That would give us a lot of flexibility - we may want to listen to just the critical errors coming from 'cron', but also to all logs from 'kern'.


  publisher_send



#__author: Tiger lee
# -*- coding:utf-8 -*-
#date: 2017/2/21
#IN Python 3.5
import pika
import sys

credentials = pika.PlainCredentials('lxh', 'lxh123456')  # plain-text credentials; pika supports other mechanisms too

connection = pika.BlockingConnection(pika.ConnectionParameters('192.168.10.100', credentials=credentials))
channel = connection.channel()
channel.exchange_declare(exchange='topic_logs',
                         type='topic')
routing_key = sys.argv[1] if len(sys.argv) > 1 else 'anonymous.info'
message = ' '.join(sys.argv[2:]) or 'Hello World!'
channel.basic_publish(exchange='topic_logs',
                      routing_key=routing_key,
                      body=message)
print(" [x] Sent %r:%r" % (routing_key, message))
connection.close()
  subscriber_recv



#__author: Tiger lee
# -*- coding:utf-8 -*-
#date: 2017/2/21
#IN Python 3.5
import pika
import sys

credentials = pika.PlainCredentials('lxh', 'lxh123456')  # plain-text credentials; pika supports other mechanisms too

connection = pika.BlockingConnection(pika.ConnectionParameters('192.168.10.100', credentials=credentials))
channel = connection.channel()
channel.exchange_declare(exchange='topic_logs', type='topic')
result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue
binding_keys = sys.argv[1:]  # the keys to bind with
if not binding_keys:  # if no keys were given, print usage and exit
    sys.stderr.write("Usage: %s [binding_key]...\n" % sys.argv[0])
    sys.exit(1)
for binding_key in binding_keys:  # bind the queue once per key
    channel.queue_bind(exchange='topic_logs',
                       queue=queue_name,
                       routing_key=binding_key)
print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] %r:%r" % (method.routing_key, body))

channel.basic_consume(callback,
                      queue=queue_name,
                      no_ack=True)
channel.start_consuming()
  Start the recv side
  To receive all logs, run:



python receive_logs_topic.py "#"
  To receive all logs from the facility "kern":



python receive_logs_topic.py "kern.*"
  Or if you only want to hear about "critical" logs:



python receive_logs_topic.py "*.critical"
  You can create multiple bindings:



python receive_logs_topic.py "kern.*" "*.critical"
  And emit a log with the routing key "kern.critical":



python emit_log_topic.py "kern.critical" "A critical kernel error"

Remote procedure call (RPC)
  RPC - Remote Procedure Call
  To illustrate how an RPC service could be used, we are going to create a simple client class. It will expose a method named call which sends an RPC request and blocks until the answer is received:



fibonacci_rpc = FibonacciRpcClient()
result = fibonacci_rpc.call(4)
print("fib(4) is %r" % result)
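The essence of the pattern - a reply_to queue plus a correlation_id that pairs each answer with its request - can be simulated in-process with plain queues (a toy model for illustration; no broker, threads stand in for the two programs):

```python
import queue
import threading
import uuid

rpc_queue = queue.Queue()  # stands in for 'rpc_queue' on the broker

def server():
    """Consume one request, compute, and publish the answer to the caller's reply queue."""
    body, props = rpc_queue.get()
    result = str(int(body) * 2)  # placeholder computation (the notes use fib)
    props['reply_to'].put((result, props['correlation_id']))

def call(n):
    callback_queue = queue.Queue()  # stands in for the exclusive reply queue
    corr_id = str(uuid.uuid4())     # unique id so the reply can be matched to this request
    rpc_queue.put((str(n), {'reply_to': callback_queue, 'correlation_id': corr_id}))
    body, received_id = callback_queue.get()  # block until the reply arrives
    assert received_id == corr_id             # pair the answer with our request
    return int(body)

threading.Thread(target=server, daemon=True).start()
print(call(21))  # 42
```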

  RPC server



#__author: Tiger lee
# -*- coding:utf-8 -*-
#date: 2017/2/21
#IN Python 3.5
import pika

credentials = pika.PlainCredentials('lxh', 'lxh123456')  # plain-text credentials; pika supports other mechanisms too

connection = pika.BlockingConnection(pika.ConnectionParameters('192.168.10.100', credentials=credentials))
channel = connection.channel()
channel.queue_declare(queue='rpc_queue')  # the queue requests arrive on

def fib(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fib(n - 1) + fib(n - 2)

def on_request(ch, method, props, body):
    n = int(body)
    print(" [.] fib(%s)" % n)
    response = fib(n)  # compute the Fibonacci number

    ch.basic_publish(exchange='',
                     routing_key=props.reply_to,  # props carries the parameters sent by the client
                     properties=pika.BasicProperties(correlation_id=props.correlation_id),
                     body=str(response))  # return the result to the client
    ch.basic_ack(delivery_tag=method.delivery_tag)  # acknowledge the request

channel.basic_qos(prefetch_count=1)
channel.basic_consume(on_request, queue='rpc_queue')  # run on_request for each incoming message
print(" [x] Awaiting RPC requests")
channel.start_consuming()  # receive messages, blocking
  RPC client



#__author: Tiger lee
# -*- coding:utf-8 -*-
#date: 2017/2/21
#IN Python 3.5
import pika
import uuid

class FibonacciRpcClient(object):
    def __init__(self):
        credentials = pika.PlainCredentials('lxh', 'lxh123456')  # plain-text credentials; pika supports other mechanisms too
        self.connection = pika.BlockingConnection(pika.ConnectionParameters('192.168.10.100',
                                                                            credentials=credentials))
        self.channel = self.connection.channel()
        result = self.channel.queue_declare(exclusive=True)
        self.callback_queue = result.method.queue  # a random queue for the RPC results

        self.channel.basic_consume(self.on_response, no_ack=True, queue=self.callback_queue)  # run on_response when a reply arrives

    def on_response(self, ch, method, props, body):
        if self.corr_id == props.correlation_id:
            self.response = body

    def call(self, n):
        self.response = None
        self.corr_id = str(uuid.uuid4())  # a fresh random value per call, unique within this program
        # send the request
        self.channel.basic_publish(exchange='',
                                   routing_key='rpc_queue',
                                   properties=pika.BasicProperties(reply_to=self.callback_queue,  # reply_to names the queue for the answer
                                                                   correlation_id=self.corr_id),  # corr_id is the random string
                                   body=str(n))
        while self.response is None:
            self.connection.process_data_events()  # non-blocking check for new messages; receive one if present
        return int(self.response)

fibonacci_rpc = FibonacciRpcClient()  # create the RPC object (runs __init__)
print(" [x] Requesting fib(30)")
response = fibonacci_rpc.call(30)  # invoke the call method of FibonacciRpcClient
print(" [.] Got %r" % response)
  Example: execute a command remotely and return the result:
  ssh_server_recv



#__author: Tiger lee
# -*- coding:utf-8 -*-
#date: 2017/2/21
#IN Python 3.5
import pika
import subprocess

credentials = pika.PlainCredentials('lxh', 'lxh123456')  # plain-text credentials; pika supports other mechanisms too

connection = pika.BlockingConnection(pika.ConnectionParameters('192.168.10.104', credentials=credentials))
channel = connection.channel()
channel.queue_declare(queue='rpc_queue')  # the queue commands arrive on

def cmd(command):
    cmd_obj = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    cmd_result = cmd_obj.stdout.read() + cmd_obj.stderr.read()
    return cmd_result

def on_request(ch, method, props, body):
    print(" [.] recv cmd(%s)" % body)
    response = cmd(body)  # run the command and capture its output

    ch.basic_publish(exchange='',
                     routing_key=props.reply_to,  # props carries the parameters sent by the client
                     properties=pika.BasicProperties(correlation_id=props.correlation_id),
                     body=response)  # return the result to the client
    ch.basic_ack(delivery_tag=method.delivery_tag)  # acknowledge the request

channel.basic_qos(prefetch_count=1)
channel.basic_consume(on_request, queue='rpc_queue')  # run on_request for each incoming message
print(" [x] Awaiting RPC requests")
channel.start_consuming()  # receive messages, blocking
  ssh_client_send



#__author: Tiger lee
# -*- coding:utf-8 -*-
#date: 2017/2/21
#IN Python 3.5
import pika
import uuid

class FibonacciRpcClient(object):
    def __init__(self):
        credentials = pika.PlainCredentials('lxh', 'lxh123456')  # plain-text credentials; pika supports other mechanisms too
        self.connection = pika.BlockingConnection(pika.ConnectionParameters('192.168.10.104',
                                                                            credentials=credentials))
        self.channel = self.connection.channel()
        result = self.channel.queue_declare(exclusive=True)
        self.callback_queue = result.method.queue  # a random queue for the RPC results

        self.channel.basic_consume(self.on_response, no_ack=True, queue=self.callback_queue)  # run on_response when a reply arrives

    def on_response(self, ch, method, props, body):
        if self.corr_id == props.correlation_id:
            self.response = body

    def call(self, n):
        self.response = None
        self.corr_id = str(uuid.uuid4())  # a fresh random value per call, unique within this program
        # send the request
        self.channel.basic_publish(exchange='',
                                   routing_key='rpc_queue',
                                   properties=pika.BasicProperties(
                                       reply_to=self.callback_queue,  # reply_to names the queue for the answer
                                       correlation_id=self.corr_id,
                                   ),  # corr_id is the random string
                                   body=str(n))
        while self.response is None:
            self.connection.process_data_events()  # non-blocking check for new messages; receive one if present
        return self.response

fibonacci_rpc = FibonacciRpcClient()  # create the RPC object (runs __init__)
print(" [x] Sending cmd")
response = fibonacci_rpc.call("df -h")  # invoke the call method of FibonacciRpcClient
print(" [.] Got cmd result")
print(response.decode("utf-8"))