[Experience Share] The pitfalls I hit crawling Weibo image data into MySQL with Python / storing images in MySQL from Python

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Created by Baoyi on 2017/10/16
from multiprocessing.pool import Pool

import pymysql
import requests
import json
import exifread
from io import BytesIO
import configparser
import hashlib
import logging
import base64

# Configure logging
logging.basicConfig(level=logging.WARNING,
                    format='%(asctime)s %(filename)s[line:%(lineno)d] %(levelname)s %(message)s',
                    datefmt='%a, %d %b %Y %H:%M:%S',
                    filename='weibo.log',
                    filemode='w')

cf = configparser.ConfigParser()
cf.read("ConfigParser.conf")
# Read the MySQL settings from the config file
db_host = cf.get("mysql", "db_host")
db_port = cf.getint("mysql", "db_port")
db_user = cf.get("mysql", "db_user")
db_pass = cf.get("mysql", "db_pass")
db = cf.get("mysql", "db")

# Create the database connection
conn = pymysql.connect(host=db_host, user=db_user, passwd=db_pass, db=db, port=db_port, charset='utf8')
# Get a cursor
cursor = conn.cursor()
# Build the INSERT statements. The forum software mangled the first and third
# statements (everything after a '>' was cut off); the column lists below are
# reconstructed from the .format() calls further down and may not match the
# original post exactly.
insert_blog_sql = (
    "INSERT IGNORE INTO blog(userid, id, text, lat, lng, created_time) "
    "VALUES ('{uid}', '{id}', '{blog_text}', '{lat}', '{lng}', '{created_time}')"
)
insert_pic_sql = (
    "INSERT IGNORE INTO pics(pic_url, pic_bin, md5, exif) VALUES ('{pic_url}','{pic_bin}','{md5}','{exif}')"
)
insert_relationship_sql = (
    "INSERT IGNORE INTO relationship(id, md5) VALUES ('{id}','{md5}')"
)
uid = []
with open('./data/final_id.txt', 'r') as f:
    for i in f.readlines():
        uid.append(i.strip('\r\n'))
# Download the picture data
def handle_pic(pic_url):
    large_pic_url = pic_url.replace('thumbnail', 'large')
    large_bin = requests.get(large_pic_url)
    return large_bin.content
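
# Note (added, not in the original post): unlike the API request below, this
# requests.get() call has no timeout, so one stalled image download can hang a
# worker indefinitely; passing e.g. timeout=(10, 10) here as well would be safer.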
def get_poiid_info(uid):
    try:
        url = 'https://api.weibo.com/2/statuses/user_timeline.json'
        load = {
            'access_token': 'xxxxxxxxxx',
            'uid': uid,
            'count': 100,
            'feature': 2,
            'trim_user': 1
        }
        get_info = requests.get(url=url, params=load, timeout=(10, 10))
        if get_info.status_code != 200:
            logging.warning(ConnectionError)
        info_json = json.loads(get_info.content)
        info_json['uid'] = uid
        statuses = info_json['statuses']
        # Filter and process the statuses
        for status in statuses:
            id = status['idstr']
            if status['geo'] is not None:
                lat = status['geo']['coordinates'][0]
                lng = status['geo']['coordinates'][1]
                pic_urls = status['pic_urls']
                # Check whether the coordinates fall inside Beijing
                if (115.7 < lng < 117.4) and (39.4 < lat < 41.6):
                    # If so, insert the blog row into the database
                    blog_text = status['text'].replace('\'', '\'\'')
                    created_time = status['created_at']
                    try:
                        # (the middle arguments of this call were also cut off
                        # in the post; reconstructed to match the INSERT above)
                        cursor.execute(
                            insert_blog_sql.format(uid=uid, id=id, blog_text=blog_text, lat=lat, lng=lng,
                                                   created_time=created_time))
                    except pymysql.err.OperationalError as e_blog:
                        logging.warning(e_blog.args[1])
                    # conn.commit()
                    # Handle the pictures
                    for pic_url in pic_urls:
                        # Fetch the full-size picture as binary data
                        pic_bin = handle_pic(pic_url['thumbnail_pic'])
                        # Read the EXIF data
                        pic_file = BytesIO(pic_bin)  # wrap the bytes in a file object for exifread
                        tag1 = exifread.process_file(pic_file, details=False, strict=True)
                        tag = {}
                        for key, value in tag1.items():
                            if key not in (
                                    'JPEGThumbnail', 'TIFFThumbnail', 'Filename',
                                    'EXIF MakerNote'):  # drop four bulky, unneeded EXIF attributes
                                tag[key] = str(value)
                        tags = json.dumps(tag)  # the EXIF data serialized as JSON
                        # Compute the MD5 of the picture. The original hashed pic_file.read(),
                        # but exifread had already advanced the file pointer, so that hashed
                        # the wrong bytes; hash the raw buffer instead.
                        MD5 = hashlib.md5(pic_bin).hexdigest()
                        # Encode the binary image as a base64 string before storing it
                        try:
                            cursor.execute(
                                insert_pic_sql.format(pic_url=pic_url['thumbnail_pic'].replace('thumbnail', 'large'),
                                                      pic_bin=str(base64.b64encode(pic_bin))[2:-1],  # strip the b'...' wrapper
                                                      md5=MD5,
                                                      exif=tags))
                        except pymysql.err.OperationalError as e_pic:
                            logging.warning(e_pic.args[1])
                        try:
                            cursor.execute(insert_relationship_sql.format(id=id, md5=MD5))
                        except pymysql.err.OperationalError as e_relation:
                            logging.warning(e_relation)
                        conn.commit()
                else:
                    logging.info(id + " is Not in Beijing")
            else:
                logging.info(id + ' Geo is null')
    except pymysql.err.OperationalError as e:
        logging.error(e.args[1])
def judge_conn(i):
    global conn, cursor
    try:
        conn.ping(True)
        get_poiid_info(i)
    except pymysql.err.OperationalError:
        logging.error('Reconnect')
        conn = pymysql.connect(host=db_host, user=db_user, passwd=db_pass, db=db, port=db_port, charset='utf8')
        cursor = conn.cursor()  # without a fresh cursor, queries would still go to the dead connection
        get_poiid_info(i)
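
# Note (added, not in the original post): Pool() forks worker processes, so each
# worker inherits a copy of the module-level conn and cursor. A MySQL connection
# must not be shared across processes; the fork makes the copies effectively
# per-process here, and conn.ping(True) plus the reconnect above revive copies
# whose inherited socket is unusable, which is likely why judge_conn exists at
# all. A cleaner pattern is to open the connection inside each worker, e.g. via
# Pool(initializer=...).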
def handle_tuple(a_tuple):
    read_uid_set = []
    for i in a_tuple:
        read_uid_set.append(i[0])
    return set(read_uid_set)
if __name__ == '__main__':
    # Skip uids whose blogs are already in the database
    sql_find_uid = (
        "SELECT userid FROM blog"
    )
    cursor.execute(sql_find_uid)
    read_uid_tuple = cursor.fetchall()
    read_list = handle_tuple(read_uid_tuple)
    print(len(read_list))
    new_uid = set(uid).difference(read_list)
    print(len(new_uid))
    pool = Pool()
    pool.map(judge_conn, list(new_uid))
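
The base64 detour above exists because the images are spliced into the SQL string with str.format(), where raw bytes would break the quoting. A cleaner way around that pitfall is to let pymysql do the escaping through query parameters, which also makes the manual quote-doubling of blog_text unnecessary. A minimal sketch, assuming the pics table from above but with pic_bin changed to a MEDIUMBLOB column (my assumption, not the original author's setup):

insert_pic_param_sql = (
    "INSERT IGNORE INTO pics(pic_url, pic_bin, md5, exif) VALUES (%s, %s, %s, %s)"
)

def store_pic(cursor, large_url, pic_bin, md5, exif_json):
    # pic_bin is raw bytes; pymysql escapes it into a binary literal, so no
    # base64 step is needed and the image is stored directly in the BLOB.
    cursor.execute(insert_pic_param_sql, (large_url, pic_bin, md5, exif_json))

Either way, the whole INSERT packet must fit within MySQL's max_allowed_packet, so storing large originals usually also means raising that setting on the server.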
