
[Experience Sharing] Introduction to the Redis Cache Database and Environment Setup


Posted on 2018-11-6 09:46:00
  In a recent project we had to handle high-load data access, so I compared memcached and Redis and settled on Redis, mainly for two reasons: first, the Windows build of Redis is convenient to work with (this particular project, unfortunately, does not allow a Linux system); second, Redis values support more data types (besides strings there are also lists, hashes similar to a HashMap, and so on), which makes it more flexible to use. The rest of this post walks through setting up the Redis environment.
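  Before the setup, here is a minimal sketch of those value types using the Jedis Java client. Jedis, the host/port, and the key names are my own assumptions for illustration and are not part of the original post:

// Minimal sketch of Redis value types via the Jedis Java client.
// Assumes a local Redis server on the default port 6379; key names are made up.
import redis.clients.jedis.Jedis;

import java.util.List;
import java.util.Map;

public class RedisTypesDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // String value
            jedis.set("user:1:name", "alice");
            String name = jedis.get("user:1:name");

            // List value (for example, a recent-activity log)
            jedis.lpush("user:1:actions", "login", "view_page");
            List<String> actions = jedis.lrange("user:1:actions", 0, -1);

            // Hash value (field -> value map, similar to a HashMap)
            jedis.hset("user:1:profile", "email", "alice@example.com");
            jedis.hset("user:1:profile", "city", "Beijing");
            Map<String, String> profile = jedis.hgetAll("user:1:profile");

            System.out.println(name + " " + actions + " " + profile);
        }
    }
}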

  •   Introduction to Redis
  Redis is a high-performance key-value database; reportedly, on Linux 2.6 with a Xeon X3320 2.5 GHz server it can reach about 110,000 SET operations and 81,000 GET operations per second. I prefer to use it as a cache server, that is, client -> cache -> database, which can noticeably improve client response times.
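  As a rough sketch of that client -> cache -> database flow (often called the cache-aside pattern), here is an example assuming the Jedis client and a hypothetical loadUserFromDb() lookup, neither of which comes from the original post:

// Cache-aside sketch: read from Redis first, fall back to the database on a miss.
// The Jedis client, the key naming, and loadUserFromDb() are illustrative assumptions.
import redis.clients.jedis.Jedis;

public class UserCache {
    private final Jedis jedis = new Jedis("127.0.0.1", 6379);

    public String getUserJson(long userId) {
        String key = "user:" + userId;
        String cached = jedis.get(key);
        if (cached != null) {
            return cached;                       // cache hit: skip the database entirely
        }
        String fromDb = loadUserFromDb(userId);  // cache miss: query the real database
        if (fromDb != null) {
            jedis.setex(key, 300, fromDb);       // cache for 5 minutes so stale data ages out
        }
        return fromDb;
    }

    // Placeholder for the real database lookup (hypothetical).
    private String loadUserFromDb(long userId) {
        return "{\"id\":" + userId + ",\"name\":\"alice\"}";
    }
}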

  •   Deployment on Windows
  Download the Windows build of Redis:
  https://github.com/MSOpenTech/Redis
  Unzip it to any directory and create a redis.conf file with the following content:

# Redis configuration file example

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize no

# When run as a daemon, Redis writes a pid file in /var/run/redis.pid by default.
# You can specify a custom pid file location here.
pidfile /var/run/redis.pid

# Accept connections on the specified port, default is 6379
port 6379

# If you want you can bind a single interface, if the bind option is not
# specified all the interfaces will listen for connections.
#
# bind 127.0.0.1

# Close the connection after a client is idle for N seconds (0 to disable)
timeout 300

# Set server verbosity to 'debug'
# it can be one of:
# debug (a lot of information, useful for development/testing)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel debug

# Specify the log file name. Also 'stdout' can be used to force
# the daemon to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile stdout

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16

################################ SNAPSHOTTING  #################################
#
# Save the DB on disk:
#
#   save <seconds> <changes>
#
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#
#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
save 900 1
save 300 10
save 60 10000

# Compress string objects using LZF when dumping .rdb databases?
# By default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes

# The filename where to dump the DB
dbfilename dump.rdb

# By default save/load the DB in/from the working directory
# Note that you must specify a directory not a file name.
dir ./

################################# REPLICATION #################################

# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. Note that the configuration is local to the slave
# so for example it is possible to configure the slave to save the DB with a
# different interval, or to listen to another port, and so on.
#
# slaveof <masterip> <masterport>

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>

################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands.  This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# requirepass foobared

################################### LIMITS ####################################

# Set the max number of connected clients at the same time. By default there
# is no limit, and it's up to the number of file descriptors the Redis process
# is able to open. The special value '0' means no limits.
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 128

# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys with an
# EXPIRE set. It will try to start freeing keys that are going to expire
# in little time and preserve keys with a longer time to live.
# Redis will also try to remove objects from free lists if possible.
#
# If all this fails, Redis will start to reply with errors to commands
# that will use more memory, like SET, LPUSH, and so on, and will continue
# to reply to most read-only commands like GET.
#
# WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
# 'state' server or cache, not as a real DB. When Redis is used as a real
# database the memory usage will grow over the weeks, it will be obvious if
# it is going to use too much memory in the long run, and you'll have the time
# to upgrade. With maxmemory after the limit is reached you'll start to get
# errors for write operations, and this may even lead to DB inconsistency.
#
# maxmemory <bytes>

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. If you can live
# with the idea that the latest records will be lost if something like a crash
# happens this is the preferred way to run Redis. If instead you care a lot
# about your data and don't want that a single record can get lost you should
# enable the append only mode: when this mode is enabled Redis will append
# every write operation received in the file appendonly.log. This file will
# be read on startup in order to rebuild the full dataset in memory.
#
# Note that you can have both the async dumps and the append only file if you
# like (you have to comment the "save" statements above to disable the dumps).
# Still if append only mode is enabled Redis will load the data from the
# log file at startup ignoring the dump.rdb file.
#
# The name of the append only file is "appendonly.log"
#
# IMPORTANT: Check the BGREWRITEAOF command to see how to rewrite the append
# log file in background when it gets too big.

appendonly no

# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only if one second passed since the last fsync. Compromise.
#
# The default is "always" that's the safest of the options. It's up to you to
# understand if you can relax this to "everysec" that will fsync every second
# or to "no" that will let the operating system flush the output buffer when
# it wants, for better performance (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting).

appendfsync always
# appendfsync everysec
# appendfsync no

############################### ADVANCED CONFIG ###############################

# Glue small output buffers together in order to send small replies in a
# single TCP packet. Uses a bit more CPU but most of the times it is a win
# in terms of number of queries per second. Use 'yes' if unsure.
glueoutputbuf yes

# Use object sharing. Can save a lot of memory if you have many common
# strings in your dataset, but performs lookups against the shared objects
# pool so it uses more CPU and can be a bit slower. Usually it's a good
# idea.
#
# When object sharing is enabled (shareobjects yes) you can use
# shareobjectspoolsize to control the size of the pool used in order to try
# object sharing. A bigger pool size will lead to better sharing capabilities.
# In general you want this value to be at least the double of the number of
# very common strings you have in your dataset.
#
# WARNING: object sharing is experimental, don't enable this feature
# in production before Redis 1.0-stable. Still please try this feature in
# your development environment so that we can test it better.
# shareobjects no
# shareobjectspoolsize 1024
  From the command line, change into the directory where Redis is located and run:
  redis-server.exe redis.conf
  The server is now running.
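  To confirm the server is actually reachable, a quick connectivity check can be done from Java, for example with the Jedis client again (an assumption on my part, not part of the original setup):

// Quick connectivity check after starting redis-server.exe.
import redis.clients.jedis.Jedis;

public class RedisPing {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            System.out.println(jedis.ping());            // prints "PONG" if the server is up
            jedis.set("smoke-test", "ok");
            System.out.println(jedis.get("smoke-test")); // prints "ok"
        }
    }
}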

