
[Experience] Simple use of the Redis PHP extension


Posted 2015-7-22 11:45:22
  Redis is a key-value storage system. It is similar to Memcached, but supports a richer set of value types.
  1. Download
Redis package: https://code.google.com/p/servicestack/wiki/RedisWindowsDownload
phpredis extension: https://github.com/nicolasff/phpredis/downloads
  2. Install
Extract Redis to a path of your choice, e.g. D:\lostPHP\redis.
Inside that folder, create a new file named redis.conf with the following content:




# Redis configuration file example

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize no

# When run as a daemon, Redis writes a pid file in /var/run/redis.pid by default.
# You can specify a custom pid file location here.
pidfile /var/run/redis.pid

# Accept connections on the specified port, default is 6379
port 6379

# If you want you can bind a single interface, if the bind option is not
# specified all the interfaces will listen for connections.
#
# bind 127.0.0.1

# Close the connection after a client is idle for N seconds (0 to disable)
timeout 300

# Set server verbosity to 'debug'
# it can be one of:
# debug (a lot of information, useful for development/testing)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel debug

# Specify the log file name. Also 'stdout' can be used to force
# the daemon to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile stdout

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16

################################ SNAPSHOTTING  #################################
#
# Save the DB on disk:
#
#   save <seconds> <changes>
#
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#
#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
save 900 1
save 300 10
save 60 10000

# Compress string objects using LZF when dumping .rdb databases?
# By default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes

# The filename where to dump the DB
dbfilename dump.rdb

# By default save/load the DB in/from the working directory.
# Note that you must specify a directory, not a file name.
dir ./

################################# REPLICATION #################################

# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. Note that the configuration is local to the slave
# so for example it is possible to configure the slave to save the DB with a
# different interval, or to listen to another port, and so on.
#
# slaveof <masterip> <masterport>

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>

################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands.  This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# requirepass foobared

################################### LIMITS ####################################

# Set the max number of connected clients at the same time. By default there
# is no limit, and it's up to the number of file descriptors the Redis process
# is able to open. The special value '0' means no limits.
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 128

# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys with an
# EXPIRE set. It will try to start freeing keys that are going to expire
# in little time and preserve keys with a longer time to live.
# Redis will also try to remove objects from free lists if possible.
#
# If all this fails, Redis will start to reply with errors to commands
# that will use more memory, like SET, LPUSH, and so on, and will continue
# to reply to most read-only commands like GET.
#
# WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
# 'state' server or cache, not as a real DB. When Redis is used as a real
# database the memory usage will grow over the weeks, it will be obvious if
# it is going to use too much memory in the long run, and you'll have the time
# to upgrade. With maxmemory after the limit is reached you'll start to get
# errors for write operations, and this may even lead to DB inconsistency.
#
# maxmemory <bytes>

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. If you can live
# with the idea that the latest records will be lost if something like a crash
# happens this is the preferred way to run Redis. If instead you care a lot
# about your data and don't want that a single record can get lost you should
# enable the append only mode: when this mode is enabled Redis will append
# every write operation received in the file appendonly.log. This file will
# be read on startup in order to rebuild the full dataset in memory.
#
# Note that you can have both the async dumps and the append only file if you
# like (you have to comment the "save" statements above to disable the dumps).
# Still if append only mode is enabled Redis will load the data from the
# log file at startup ignoring the dump.rdb file.
#
# The name of the append only file is "appendonly.log"
#
# IMPORTANT: Check the BGREWRITEAOF command to see how to rewrite the append
# log file in background when it gets too big.

appendonly no

# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really
# flush data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only if one second passed since the last fsync. Compromise.
#
# The default is "always" that's the safer of the options. It's up to you to
# understand if you can relax this to "everysec" that will fsync every second
# or to "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting).

appendfsync always
# appendfsync everysec
# appendfsync no

############################### ADVANCED CONFIG ###############################

# Glue small output buffers together in order to send small replies in a
# single TCP packet. Uses a bit more CPU but most of the times it is a win
# in terms of number of queries per second. Use 'yes' if unsure.
glueoutputbuf yes

# Use object sharing. Can save a lot of memory if you have many common
# strings in your dataset, but performs lookups against the shared objects
# pool so it uses more CPU and can be a bit slower. Usually it's a good
# idea.
#
# When object sharing is enabled (shareobjects yes) you can use
# shareobjectspoolsize to control the size of the pool used in order to try
# object sharing. A bigger pool size will lead to better sharing capabilities.
# In general you want this value to be at least double the number of
# very common strings you have in your dataset.
#
# WARNING: object sharing is experimental, don't enable this feature
# in production before Redis 1.0-stable. Still please try this feature in
# your development environment so that we can test it better.
# shareobjects no
# shareobjectspoolsize 1024
  cd into D:\lostPHP\redis and run redis-server.exe redis.conf to start the Redis server with this config file.
Now you can open a new DOS window and run redis-cli.exe -h 127.0.0.1 -p 6379 to get a Redis prompt, where you can issue commands to write and read data.
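For example, a short session at the redis-cli prompt (the key name here is arbitrary):

```
redis 127.0.0.1:6379> SET greeting "hello"
OK
redis 127.0.0.1:6379> GET greeting
"hello"
redis 127.0.0.1:6379> DEL greeting
(integer) 1
```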
To install the PHP extension, copy the downloaded DLL matching your PHP version into PHP's ext directory, enable it in php.ini, and restart the web server.
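The php.ini change is a one-line addition, assuming the downloaded DLL is named php_redis.dll (on Linux the extension file would be redis.so instead):

```ini
extension=php_redis.dll
```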

  3. Usage
Test it with a bit of PHP code.




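A minimal test script using the phpredis extension might look like the following sketch (host, port, key names, and values are just examples):

```php
<?php
// Connect to the local Redis server started above.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// Strings: set a key, read it back.
$redis->set('site', 'yunweiku');
echo $redis->get('site'), "\n";

// Expiring keys: this value disappears after 10 seconds.
$redis->setex('token', 10, 'abc123');

// Lists: clear the key, push two items, read the whole list back.
$redis->del('queue');
$redis->rPush('queue', 'job1');
$redis->rPush('queue', 'job2');
print_r($redis->lRange('queue', 0, -1)); // prints the two queued jobs

$redis->close();
```

If the extension loaded correctly, phpinfo() will show a "redis" section, and the script above will echo the stored string.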
  Finally, here are some annotated notes on the redis.conf settings:



# Run as a daemon?
daemonize yes
# Path and file name of the pid file; defaults to the current directory
pidfile redis.pid
# Default listening port
port 6379
# Disconnect a client after it has been idle this many seconds
timeout 300
# Log verbosity level
loglevel verbose
# Log file name; 'stdout' logs to standard output
logfile stdout
# Number of databases; connections start on DB 0 and can switch with SELECT N
databases 16
# Policy for saving data to disk:
# flush to disk after 900 seconds if at least 1 key changed
save 900 1
# flush to disk after 300 seconds if at least 10 keys changed
save 300 10
# flush to disk after 60 seconds if at least 10000 keys changed
save 60 10000
# Compress data objects when dumping the .rdb database?
rdbcompression yes
# File name for the dumped database
dbfilename dump.rdb
# Redis working directory
dir /home/falcon/redis-2.0.0/
########### Replication #####################
# Replication settings
# slaveof <masterip> <masterport>
# masterauth <master-password>
############## SECURITY ###########
# requirepass foobared
############### LIMITS ##############
# Maximum number of client connections
# maxclients 128
# Maximum memory usage
# maxmemory <bytes>
########## APPEND ONLY MODE #########
# Enable the append-only log?
appendonly no
# Rule for flushing the log to disk
# appendfsync always
appendfsync everysec
# appendfsync no
################ VIRTUAL MEMORY ###########
# Enable virtual memory?
vm-enabled no
# vm-enabled yes
vm-swap-file logs/redis.swap
vm-max-memory 0
vm-page-size 32
vm-pages 134217728
vm-max-threads 4
############# ADVANCED CONFIG ###############
glueoutputbuf yes
hash-max-zipmap-entries 64
hash-max-zipmap-value 512
# Incrementally rehash the main hash table?
activerehashing yes
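Two of the settings above surface directly in the PHP client: if requirepass is enabled you must authenticate after connecting, and databases > 1 lets each connection switch databases. A sketch (the password and DB index are examples only):

```php
<?php
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// Needed only when requirepass is set in redis.conf.
$redis->auth('foobared');

// Switch this connection to DB 1 (valid indexes: 0 .. databases-1).
$redis->select(1);

$redis->set('k', 'v');
$redis->close();
```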
  
  
