
[Experience Share] Tomorrow's tomorrow, forever and ever, the unknown — I'll face it all with you ??


Posted on 2017-12-20 16:09:51
  # MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
  # is reached. You can select among six behaviors:
  #
  # volatile-lru -> remove the key with an expire set using an LRU algorithm
  # allkeys-lru -> remove any key according to the LRU algorithm
  # volatile-random -> remove a random key with an expire set
  # allkeys-random -> remove a random key, any key
  # volatile-ttl -> remove the key with the nearest expire time (minor TTL)
  # noeviction -> don't expire at all, just return an error on write operations
  #
  # Note: with any of these policies, Redis will return an error on write
  #       operations when there are no suitable keys for eviction.
  #
  #       At the date of writing these commands are: set setnx setex append
  #       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
  #       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
  #       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
  #       getset mset msetnx exec sort
  #
  # The default is:
  #
  # maxmemory-policy volatile-lru
  # LRU and minimal TTL algorithms are not precise algorithms but approximated
  # algorithms (in order to save memory), so you can select as well the sample
  # size to check. For instance by default Redis will check three keys and
  # pick the one that was used least recently; you can change the sample size
  # using the following configuration directive.
  #
  # maxmemory-samples 3
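The sampled (approximated) LRU described above can be sketched in a few lines of Python. This is only an illustration of the idea, not Redis's actual eviction code; the `last_access` store and function name are hypothetical.

```python
import random

def evict_one_sampled_lru(last_access, samples=3):
    """Approximate LRU: sample `samples` random keys and evict the one
    that was used least recently (smallest last-access timestamp)."""
    candidates = random.sample(list(last_access), min(samples, len(last_access)))
    victim = min(candidates, key=lambda k: last_access[k])
    del last_access[victim]
    return victim

# last_access maps key -> timestamp of the key's most recent use.
last_access = {"a": 1.0, "b": 2.0, "c": 3.0, "d": 4.0}
evict_one_sampled_lru(last_access, samples=3)
# With a sample of 3 out of 4 keys, the evicted key is usually, but not
# always, the globally least-recently-used one -- that is the trade-off
# maxmemory-samples tunes: bigger samples, more precise LRU, more CPU.
```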
  ############################## APPEND ONLY MODE ###############################
  #
  # Note that you can have both the async dumps and the append only file if you
  # like (you have to comment the "save" statements above to disable the dumps).
  # Still if append only mode is enabled Redis will load the data from the
  # log file at startup ignoring the dump.rdb file.
  # Specifies whether to log every update operation. By default Redis writes
  # data to disk asynchronously; if this is not enabled, a power failure may
  # lose the data written during a short window.
  # Redis only syncs its data file according to the "save" conditions above, so
  # some data may exist only in memory for a while. Default: no
  # IMPORTANT: Check the BGREWRITEAOF command to see how to rewrite the append
  # log file in background when it gets too big.
  appendonly no
  # Name of the append only log file. Default: appendonly.aof
  # appendfilename appendonly.aof
  # The fsync() call tells the Operating System to actually write data on disk
  # instead of waiting for more data in the output buffer. Some OSes will really
  # flush data on disk, some others will just try to do it ASAP.
  # Specifies when to flush the update log; there are 3 possible values:
  # no: let the operating system decide when to sync the cached data to disk (fast)
  # always: call fsync() after every update to write the data to disk (slow, safe)
  # everysec: sync once per second (a compromise, and the default)
  appendfsync everysec
  # appendfsync no
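The three policies differ only in when fsync() is issued after a write. A toy Python sketch (class and method names hypothetical, not Redis's AOF code) makes the trade-off concrete:

```python
import os
import time

class ToyAppendLog:
    """Toy append-only log illustrating the three appendfsync policies.
    A sketch of the idea only, not Redis's actual AOF implementation."""

    def __init__(self, path, policy="everysec"):
        assert policy in ("always", "everysec", "no")
        self.f = open(path, "ab")
        self.policy = policy
        self._last_fsync = time.monotonic()

    def append(self, record: bytes):
        self.f.write(record + b"\n")
        self.f.flush()                        # hand the bytes to the OS
        if self.policy == "always":
            os.fsync(self.f.fileno())         # force to disk: slow, safest
        elif self.policy == "everysec":
            now = time.monotonic()
            if now - self._last_fsync >= 1.0:
                os.fsync(self.f.fileno())     # at most ~1s of writes at risk
                self._last_fsync = now
        # policy "no": leave syncing entirely to the OS (fastest)

    def close(self):
        self.f.close()
```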
  # When the AOF fsync policy is set to always or everysec, and a background
  # saving process (a background save or AOF log background rewriting) is
  # performing a lot of I/O against the disk, in some Linux configurations
  # Redis may block too long on the fsync() call. Note that there is no fix for
  # this currently, as even performing fsync in a different thread will block
  # our synchronous write(2) call.
  #
  # In order to mitigate this problem it's possible to use the following option
  # that will prevent fsync() from being called in the main process while a
  # BGSAVE or BGREWRITEAOF is in progress.
  #
  # This means that while another child is saving the durability of Redis is
  # the same as "appendfsync none", which in practical terms means that it is
  # possible to lose up to 30 seconds of log in the worst scenario (with the
  # default Linux settings).
  #
  # If you have latency problems turn this to "yes". Otherwise leave it as
  # "no" that is the safest pick from the point of view of durability.
  no-appendfsync-on-rewrite no
  # Automatic rewrite of the append only file.
  # Redis is able to automatically rewrite the log file implicitly calling
  # BGREWRITEAOF when the AOF log size grows by the specified percentage.
  #
  # This is how it works: Redis remembers the size of the AOF file after the
  # latest rewrite (or if no rewrite happened since the restart, the size of
  # the AOF at startup is used).
  #
  # This base size is compared to the current size. If the current size is
  # bigger than the specified percentage, the rewrite is triggered. Also
  # you need to specify a minimal size for the AOF file to be rewritten; this
  # is useful to avoid rewriting the AOF file even if the percentage increase
  # is reached but it is still pretty small.
  #
  # Specify a percentage of zero in order to disable the automatic AOF
  # rewrite feature.
  auto-aof-rewrite-percentage 100
  auto-aof-rewrite-min-size 64mb
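The trigger rule described above can be written out as a small predicate. This is a sketch of the documented behavior under the defaults shown (100% growth, 64mb floor); the function name is hypothetical:

```python
def should_rewrite_aof(current_size, base_size,
                       percentage=100, min_size=64 * 1024 * 1024):
    """Decide whether an automatic BGREWRITEAOF would trigger: rewrite when
    the AOF grew by `percentage` percent over the size recorded after the
    last rewrite (`base_size`), but never while smaller than `min_size`.
    A percentage of zero disables automatic rewriting."""
    if percentage == 0:
        return False                      # feature disabled
    if current_size < min_size:
        return False                      # still too small to bother
    growth = (current_size - base_size) * 100 / base_size
    return growth >= percentage

MB = 1024 * 1024
should_rewrite_aof(130 * MB, 64 * MB)   # grew ~103% over base -> rewrite
should_rewrite_aof(80 * MB, 64 * MB)    # grew only 25% -> no rewrite
```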
  ################################## SLOW LOG ###################################
  # The Redis Slow Log is a system to log queries that exceeded a specified
  # execution time. The execution time does not include the I/O operations
  # like talking with the client, sending the reply and so forth,
  # but just the time needed to actually execute the command (this is the only
  # stage of command execution where the thread is blocked and can not serve
  # other requests in the meantime).
  #
  # You can configure the slow log with two parameters: one tells Redis
  # what is the execution time, in microseconds, to exceed in order for the
  # command to get logged, and the other parameter is the length of the
  # slow log. When a new command is logged the oldest one is removed from the
  # queue of logged commands.
  # The following time is expressed in microseconds, so 1000000 is equivalent
  # to one second. Note that a negative number disables the slow log, while
  # a value of zero forces the logging of every command.
  slowlog-log-slower-than 10000
  # There is no limit to this length. Just be aware that it will consume memory.
  # You can reclaim memory used by the slow log with SLOWLOG RESET.
  slowlog-max-len 1024
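The two parameters interact exactly as described: a microsecond threshold gates what gets logged, and a bounded queue drops the oldest entry when full. A minimal Python sketch of that behavior (class name hypothetical, not Redis's slowlog.c):

```python
from collections import deque

class ToySlowLog:
    """Sketch of the Slow Log described above: keep the last `max_len`
    commands whose execution time reached `slower_than_us` microseconds.
    A negative threshold disables logging; zero logs every command."""

    def __init__(self, slower_than_us=10000, max_len=1024):
        self.slower_than_us = slower_than_us
        self.entries = deque(maxlen=max_len)   # oldest dropped automatically

    def record(self, command, duration_us):
        if self.slower_than_us < 0:
            return                             # slow log disabled
        if duration_us >= self.slower_than_us:
            self.entries.append((command, duration_us))
```

With `max_len=2`, logging a third slow command silently evicts the first, mirroring "when a new command is logged the oldest one is removed from the queue".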
  ################################ VIRTUAL MEMORY ###############################
  ### WARNING! Virtual Memory is deprecated in Redis 2.4
  ### The use of Virtual Memory is strongly discouraged.
  # Virtual Memory allows Redis to work with datasets bigger than the actual
  # amount of RAM needed to hold the whole dataset in memory.
  # In order to do so frequently used keys are kept in memory while the other keys
  # are swapped into a swap file, similarly to what operating systems do
  # with memory pages.
  # Specifies whether to enable the virtual memory mechanism. Default: no.
  # The VM mechanism stores data in pages: Redis swaps the less-accessed pages
  # (cold data) out to disk, while frequently accessed pages are swapped back
  # from disk into memory.
  # Set vm-enabled to yes and configure the three VM parameters below as needed
  # to turn VM on.
  vm-enabled no
  # vm-enabled yes
  # This is the path of the Redis swap file. As you can guess, swap files
  # can't be shared by different Redis instances, so make sure to use a swap
  # file for every redis process you are running. Redis will complain if the
  # swap file is already in use.
  #
  # The best storage for the Redis swap file is an SSD (solid-state drive).
  # Path of the virtual memory swap file; default is /tmp/redis.swap. It must
  # not be shared between multiple Redis instances.
  # *** WARNING *** if you are using a shared hosting the default of putting
  # the swap file under /tmp is not secure. Create a dir with access granted
  # only to Redis user and configure Redis to create the swap file there.
  vm-swap-file /tmp/redis.swap
  # With vm-max-memory 0 the system will swap everything it can. Not a good
  # default, just specify the max amount of RAM you can in bytes, but it's
  # better to leave some margin. For instance specify an amount of RAM
  # that's more or less between 60 and 80% of your free RAM.
  # All data above vm-max-memory goes to virtual memory. No matter what
  # vm-max-memory is set to, all index data stays in memory (for Redis, the
  # index data is the keys).
  # In other words, when vm-max-memory is set to 0, all values actually live
  # on disk. Default: 0
  vm-max-memory 0
  # The Redis swap file is split into many pages. One object can span multiple
  # pages, but a page cannot be shared by multiple objects, so vm-page-size
  # should be set according to the size of the stored data.
  # If you store many small objects, a page size of 32 or 64 bytes is
  # recommended; for very large objects use a larger page size. If unsure, use
  # the default.
  vm-page-size 32
  # Sets the number of pages in the swap file. Since the page table (a bitmap
  # marking pages as free or in use) is kept in memory, every 8 pages on disk
  # consume 1 byte of RAM.
  # Total swap capacity is vm-page-size * vm-pages.
  #
  # With the default of 32-bytes memory pages and 134217728 pages Redis will
  # use a 4 GB swap file, that will use 16 MB of RAM for the page table.
  #
  # It's better to use the smallest acceptable value for your application,
  # but the default is large in order to work in most conditions.
  vm-pages 134217728
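The 4 GB / 16 MB figures quoted above follow directly from the two directives, and can be checked with a few lines of arithmetic (function name hypothetical):

```python
def vm_sizing(vm_page_size, vm_pages):
    """Swap file size and page-table RAM for the VM directives above:
    swap size = vm-page-size * vm-pages, and the in-memory page table
    (a bitmap) costs 1 byte of RAM for every 8 pages on disk."""
    swap_bytes = vm_page_size * vm_pages
    page_table_bytes = vm_pages // 8
    return swap_bytes, page_table_bytes

# The defaults from the config above: 32-byte pages, 134217728 pages.
swap, table = vm_sizing(32, 134217728)
assert swap == 4 * 1024 ** 3        # 4 GB swap file
assert table == 16 * 1024 ** 2      # 16 MB page table
```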
  # Max number of VM I/O threads running at the same time.
  # These threads are used to read/write data from/to the swap file. Since they
  # also encode and decode objects from disk to memory or the reverse, a bigger
  # number of threads can help with big objects even if they can't help with
  # I/O itself, as the physical device may not be able to cope with many
  # read/write operations at the same time.
  # Number of I/O threads accessing the swap file; it is best not to exceed the
  # number of cores on the machine. If set to 0, all operations on the swap
  # file are serial, which may cause fairly long delays. Default: 4
  vm-max-threads 4
  ############################### ADVANCED CONFIG ###############################
  # Hashes are encoded in a special way (much more memory efficient) when they
  # have at most a given number of elements, and the biggest element does not
  # exceed a given threshold. You can configure these limits with the following
  # configuration directives.
  # Use the special memory-saving hash encoding only while the entry count and
  # the largest element stay below the thresholds below.
  hash-max-zipmap-entries 512
  hash-max-zipmap-value 64
  # Similarly to hashes, small lists are also encoded in a special way in order
  # to save a lot of space. The special representation is only used when
  # you are under the following limits:
  list-max-ziplist-entries 512
  list-max-ziplist-value 64
  # Sets have a special encoding in just one case: when a set is composed
  # of just strings that happen to be integers in radix 10 in the range
  # of 64 bit signed integers.
  # The following configuration setting sets the limit in the size of the
  # set in order to use this special memory saving encoding.
  set-max-intset-entries 512
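The qualification rule above (every member a base-10 signed 64-bit integer, set no larger than the limit) can be sketched as a predicate. This is an approximation for illustration — Python's `int()` is slightly more permissive than Redis's strict integer parser — and the function name is hypothetical:

```python
def qualifies_for_intset(members, max_entries=512):
    """Check whether a set of strings could use the integer-set encoding:
    every member parses as a base-10 signed 64-bit integer and the set
    has at most `max_entries` elements (set-max-intset-entries)."""
    if len(members) > max_entries:
        return False
    for m in members:
        try:
            v = int(m, 10)
        except ValueError:
            return False
        if not (-2**63 <= v <= 2**63 - 1):
            return False          # out of signed 64-bit range
    return True

qualifies_for_intset({"1", "2", "42"})    # all small integers -> True
qualifies_for_intset({"1", "two"})        # non-integer member -> False
```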
  # Similarly to hashes and lists, sorted sets are also specially encoded in
  # order to save a lot of space. This encoding is only used when the length and
  # elements of a sorted set are below the following limits:
  zset-max-ziplist-entries 128
  zset-max-ziplist-value 64
  # Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
  # order to help rehashing the main Redis hash table (the one mapping top-level
  # keys to values). The hash table implementation redis uses (see dict.c)
  # performs a lazy rehashing: the more operations you run into a hash table
  # that is rehashing, the more rehashing "steps" are performed, so if the
  # server is idle the rehashing is never complete and some more memory is used
  # by the hash table.
  #
  # The default is to use this millisecond 10 times every second in order to
  # actively rehash the main dictionaries, freeing memory when possible.
  #
  # If unsure:
  # use "activerehashing no" if you have hard latency requirements and it is
  # not a good thing in your environment that Redis can reply from time to time
  # to queries with 2 milliseconds delay.
  # Specifies whether active rehashing is enabled. Default: yes (enabled)
  activerehashing yes
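The lazy/incremental rehashing idea from dict.c can be sketched as two tables with one entry migrated per operation. This toy class only illustrates the cost-spreading idea, not Redis's actual implementation; all names are hypothetical:

```python
class IncrementalRehashDict:
    """Sketch of lazy rehashing: keys live in an old and a new table, and
    every operation migrates one pending entry, so the total rehash cost is
    spread over many operations instead of one long blocking pause."""

    def __init__(self, items=None):
        self.old = dict(items or {})   # table being drained
        self.new = {}                  # table being filled

    def _rehash_step(self):
        if self.old:
            k, v = self.old.popitem()  # move a single entry per operation
            self.new[k] = v

    def get(self, key):
        self._rehash_step()
        return self.new.get(key, self.old.get(key))

    def set(self, key, value):
        self._rehash_step()
        self.old.pop(key, None)        # never keep a stale copy in `old`
        self.new[key] = value
```

Note the caveat this models: if no operations arrive (an idle server), `old` never drains, which is exactly why active rehashing spends a little CPU time each second to finish the job.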
  ################################## INCLUDES ###################################
  # Include other configuration files. This lets multiple Redis instances on
  # the same host share one common config file while each instance keeps its
  # own specific settings.
  # include /path/to/local.conf
  # include /path/to/other.conf

Original thread: https://www.yunweiku.com/thread-426104-1-1.html