|
1. Introduction
Redis is a key/value store whose data is kept in memory, which makes it very fast. I have been working with Redis recently, so I studied some of its basics and am recording them here for future reference.
2. Installation
Reference: http://my.oschina.net/u/273598/blog/100809
Redis can be downloaded from http://redis.io/download; both Windows and Linux builds are available. The steps below cover the Windows installation.
After extracting the Windows Redis package, you will find the following files:

redis-benchmark.exe: performance testing tool; simulates N clients issuing M SET/GET queries concurrently
redis-check-aof.exe: checks the append-only (AOF) log file
redis-check-dump.exe: checks the local dump (RDB) database file
redis-server.exe: the Redis server program
redis-cli.exe: the Redis command-line client (used below)
2.1 Add a redis.conf file
After extracting Redis to a directory of your choice, create a configuration file named redis.conf in the Redis root directory. Its content is shown below; simply copy it into the redis.conf file you created.
# Redis configuration file example
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize no
# When run as a daemon, Redis write a pid file in /var/run/redis.pid by default.
# You can specify a custom pid file location here.
pidfile /var/run/redis.pid
# Accept connections on the specified port, default is 6379
port 6379
# If you want you can bind a single interface, if the bind option is not
# specified all the interfaces will listen for connections.
#
# bind 127.0.0.1
# Close the connection after a client is idle for N seconds (0 to disable)
timeout 300
# Set server verbosity to 'debug'
# it can be one of:
# debug (a lot of information, useful for development/testing)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel debug
# Specify the log file name. Also 'stdout' can be used to force
# the daemon to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile stdout
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT where
# dbid is a number between 0 and 'databases'-1
databases 16
################################ SNAPSHOTTING #################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
save 900 1
save 300 10
save 60 10000
# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes
# The filename where to dump the DB
dbfilename dump.rdb
# For default save/load DB in/from the working directory
# Note that you must specify a directory not a file name.
dir ./
################################# REPLICATION #################################
# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. Note that the configuration is local to the slave
# so for example it is possible to configure the slave to save the DB with a
# different interval, or to listen to another port, and so on.
#
# slaveof <masterip> <masterport>
# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>
################################## SECURITY ###################################
# Require clients to issue AUTH before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# requirepass foobared
################################### LIMITS ####################################
# Set the max number of connected clients at the same time. By default there
# is no limit, and it's up to the number of file descriptors the Redis process
# is able to open. The special value '0' means no limits.
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 128
# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys with an
# EXPIRE set. It will try to start freeing keys that are going to expire
# in little time and preserve keys with a longer time to live.
# Redis will also try to remove objects from free lists if possible.
#
# If all this fails, Redis will start to reply with errors to commands
# that will use more memory, like SET, LPUSH, and so on, and will continue
# to reply to most read-only commands like GET.
#
# WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
# 'state' server or cache, not as a real DB. When Redis is used as a real
# database the memory usage will grow over the weeks, it will be obvious if
# it is going to use too much memory in the long run, and you'll have the time
# to upgrade. With maxmemory after the limit is reached you'll start to get
# errors for write operations, and this may even lead to DB inconsistency.
#
# maxmemory <bytes>
############################## APPEND ONLY MODE ###############################
# By default Redis asynchronously dumps the dataset on disk. If you can live
# with the idea that the latest records will be lost if something like a crash
# happens this is the preferred way to run Redis. If instead you care a lot
# about your data and don't want to that a single record can get lost you should
# enable the append only mode: when this mode is enabled Redis will append
# every write operation received in the file appendonly.log. This file will
# be read on startup in order to rebuild the full dataset in memory.
#
# Note that you can have both the async dumps and the append only file if you
# like (you have to comment the "save" statements above to disable the dumps).
# Still if append only mode is enabled Redis will load the data from the
# log file at startup ignoring the dump.rdb file.
#
# The name of the append only file is "appendonly.log"
#
# IMPORTANT: Check the BGREWRITEAOF to check how to rewrite the append
# log file in background when it gets too big.
appendonly no
# The fsync() call tells the Operating System to actually write data on disk
# instead to wait for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log . Slow, Safest.
# everysec: fsync only if one second passed since the last fsync. Compromise.
#
# The default is "always" that's the safer of the options. It's up to you to
# understand if you can relax this to "everysec" that will fsync every second
# or to "no" that will let the operating system flush the output buffer when
# it want, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting).
appendfsync always
# appendfsync everysec
# appendfsync no
############################### ADVANCED CONFIG ###############################
# Glue small output buffers together in order to send small replies in a
# single TCP packet. Uses a bit more CPU but most of the times it is a win
# in terms of number of queries per second. Use 'yes' if unsure.
glueoutputbuf yes
# Use object sharing. Can save a lot of memory if you have many common
# string in your dataset, but performs lookups against the shared objects
# pool so it uses more CPU and can be a bit slower. Usually it's a good
# idea.
#
# When object sharing is enabled (shareobjects yes) you can use
# shareobjectspoolsize to control the size of the pool used in order to try
# object sharing. A bigger pool size will lead to better sharing capabilities.
# In general you want this value to be at least the double of the number of
# very common strings you have in your dataset.
#
# WARNING: object sharing is experimental, don't enable this feature
# in production before of Redis 1.0-stable. Still please try this feature in
# your development environment so that we can test it better.
# shareobjects no
# shareobjectspoolsize 1024
2.2 Start Redis
First open a cmd window, change to the Redis root directory, and run:
redis-server.exe redis.conf
This starts the Redis server. The server must keep running for Redis to be usable, so leave this cmd window open; closing it shuts the server down.
Open a second cmd window, change to the Redis root directory, and start the client:
redis-cli.exe -h 127.0.0.1 -p 6379
Here -h 127.0.0.1 is the IP address of the machine running the Redis server (use 127.0.0.1 for the local machine), and -p 6379 is the port Redis listens on (6379 by default).
You can now run Redis commands to store key/value pairs and read a value back by its key:
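For example, a minimal session in the client might look like the following (the exact prompt depends on the redis-cli version; the key name and value are arbitrary):
redis> SET mykey "hello"
OK
redis> GET mykey
"hello"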

3. Java client
This section shows how to work with the Redis database from Java code.
First download the Java client library (driver) for Redis, jedis-2.1.0.jar.
A minimal example looks like this:
import redis.clients.jedis.Jedis;

public class MyRedis {
    public static void main(String args[]) {
        // Connect to the Redis server
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        // Password authentication - if you have not set a Redis password, skip this call
        jedis.auth("123456");
        // Simple key-value storage
        jedis.set("key1", "value1");
        System.out.println(jedis.get("key1"));
    }
}
Appendix:
The following, much more detailed walkthrough of using Jedis is reproduced from http://javacrazyer.iteye.com/blog/1840161 so it is easy to consult later.
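Note that the reproduced code relies on a RedisUtil helper class (com.wujintao.redis.util.RedisUtil) that is not included in the original post. A minimal sketch of what such a helper might look like with Jedis 2.x follows; only the method names getJedis(), getPool() and closeJedis() are taken from how the test code calls them, while the pool configuration and connection details are assumptions:
package com.wujintao.redis.util;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class RedisUtil {
    // One shared pool for the whole application; JedisPool is thread safe,
    // individual Jedis instances are not.
    private static final JedisPool pool =
            new JedisPool(new JedisPoolConfig(), "127.0.0.1", 6379);

    public static Jedis getJedis() {
        return pool.getResource();
    }

    public static JedisPool getPool() {
        return pool;
    }

    public static void closeJedis(Jedis jedis) {
        if (jedis != null) {
            pool.returnResource(jedis);
        }
    }
}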
package com.wujintao.redis;
import java.util.Date;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.junit.Test;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;
import redis.clients.jedis.SortingParams;
import com.wujintao.redis.util.RedisUtil;
public class TestCase {
/**
* Using the same Jedis instance from several threads leads to strange errors, but
* creating too many instances is also bad, because it means opening many socket
* connections, which can cause odd failures as well. A single Jedis instance is not
* thread safe. To avoid both problems use JedisPool, a thread-safe pool of network
* connections: take a Jedis instance from the pool when you need one and return it
* when you are done. This avoids those errors and still performs well.
*/
public static void main(String[] args) {
// ...when closing your application:
RedisUtil.getPool().destroy();
}
public static void Hello() {
Jedis jedis = RedisUtil.getJedis();
try {
// Store value "minxr" under key "name"
jedis.set("name", "minxr");
String ss = jedis.get("name");
System.out.println(ss);
// Much like a map: append "jintao" to the value already stored under "name"
jedis.append("name", "jintao");
ss = jedis.get("name");
System.out.println(ss);
// Overwrite the existing value directly
jedis.set("name", "jintao");
System.out.println(jedis.get("name"));
// Delete the record stored under the key
jedis.del("name");
System.out.println(jedis.get("name"));// prints: null
/**
* mset is equivalent to: jedis.set("name","minxr"); jedis.set("jarorwar","aaa");
*/
jedis.mset("name", "minxr", "jarorwar", "aaa");
System.out.println(jedis.mget("name", "jarorwar"));
} catch (Exception e) {
e.printStackTrace();
} finally {
RedisUtil.getPool().returnResource(jedis);
}
}
private void testKey() {
Jedis jedis = RedisUtil.getJedis();
System.out.println("=============key==========================");
// Flush the current database
System.out.println(jedis.flushDB());
System.out.println(jedis.echo("foo"));
// Check whether a key exists
System.out.println(jedis.exists("foo"));
jedis.set("key", "values");
System.out.println(jedis.exists("key"));
}
public static void testString() {
System.out.println("==String==");
Jedis jedis = RedisUtil.getJedis();
try {
// String
jedis.set("key", "Hello World!");
String value = jedis.get("key");
System.out.println(value);
} catch (Exception e) {
e.printStackTrace();
} finally {
RedisUtil.getPool().returnResource(jedis);
}
System.out.println("=============String==========================");
// Flush the current database
System.out.println(jedis.flushDB());
// Store a value
jedis.set("foo", "bar");
System.out.println(jedis.get("foo"));
// Store only if the key does not already exist
jedis.setnx("foo", "foo not exits");
System.out.println(jedis.get("foo"));
// Overwrite the value
jedis.set("foo", "foo update");
System.out.println(jedis.get("foo"));
// Append to the value
jedis.append("foo", " hello, world");
System.out.println(jedis.get("foo"));
// Store a value together with an expiry time (in seconds)
jedis.setex("foo", 2, "foo not exits");
System.out.println(jedis.get("foo"));
try {
Thread.sleep(3000);
} catch (InterruptedException e) {
}
System.out.println(jedis.get("foo"));
// Set a new value and get the old one back
jedis.set("foo", "foo update");
System.out.println(jedis.getSet("foo", "foo modify"));
// Take a substring of the value
System.out.println(jedis.getrange("foo", 1, 3));
System.out.println(jedis.mset("mset1", "mvalue1", "mset2", "mvalue2",
"mset3", "mvalue3", "mset4", "mvalue4"));
System.out.println(jedis.mget("mset1", "mset2", "mset3", "mset4"));
System.out.println(jedis.del(new String[] { "foo", "foo1", "foo3" }));
}
public static void testList() {
System.out.println("==List==");
Jedis jedis = RedisUtil.getJedis();
try {
// Remove any existing content first
jedis.del("messages");
jedis.rpush("messages", "Hello how are you?");
jedis.rpush("messages", "Fine thanks. I'm having fun with redis.");
jedis.rpush("messages", "I should look into this NOSQL thing ASAP");
// Read everything back with jedis.lrange (fetch by range):
// first argument is the key, second the start index, third the end index;
// -1 means up to the last element. jedis.llen returns the length.
List values = jedis.lrange("messages", 0, -1);
System.out.println(values);
} catch (Exception e) {
e.printStackTrace();
} finally {
RedisUtil.getPool().returnResource(jedis);
}
// Flush the current database
System.out.println(jedis.flushDB());
// Add elements
jedis.lpush("lists", "vector");
jedis.lpush("lists", "ArrayList");
jedis.lpush("lists", "LinkedList");
// List length
System.out.println(jedis.llen("lists"));
// Sorted copy of the list (note: plain SORT expects numeric values; use SortingParams.alpha() for strings)
System.out.println(jedis.sort("lists"));
// A range of the list
System.out.println(jedis.lrange("lists", 0, 3));
// Replace the element at a given index
jedis.lset("lists", 0, "hello list!");
// Get the element at a given index
System.out.println(jedis.lindex("lists", 1));
// Remove occurrences of a value
System.out.println(jedis.lrem("lists", 1, "vector"));
// Trim away everything outside the given range
System.out.println(jedis.ltrim("lists", 0, 1));
// Pop from the head of the list
System.out.println(jedis.lpop("lists"));
// The whole list
System.out.println(jedis.lrange("lists", 0, -1));
}
public static void testSet() {
System.out.println("==Set==");
Jedis jedis = RedisUtil.getJedis();
try {
jedis.sadd("myset", "1");
jedis.sadd("myset", "2");
jedis.sadd("myset", "3");
jedis.sadd("myset", "4");
Set setValues = jedis.smembers("myset");
System.out.println(setValues);
// Remove an element
jedis.srem("myset", "4");
System.out.println(jedis.smembers("myset"));// all members of the set
System.out.println(jedis.sismember("myset", "4"));// is "4" still a member of the set?
System.out.println(jedis.scard("myset"));// number of elements in the set
} catch (Exception e) {
e.printStackTrace();
} finally {
RedisUtil.getPool().returnResource(jedis);
}
// Flush the current database
System.out.println(jedis.flushDB());
// Add elements
jedis.sadd("sets", "HashSet");
jedis.sadd("sets", "SortedSet");
jedis.sadd("sets", "TreeSet");
// Is the value a member of the set?
System.out.println(jedis.sismember("sets", "TreeSet"));
// All members
System.out.println(jedis.smembers("sets"));
// Remove a given member
System.out.println(jedis.srem("sets", "SortedSet"));
// Pop a random member
System.out.println(jedis.spop("sets"));
System.out.println(jedis.smembers("sets"));
// Set operations on two sets
jedis.sadd("sets1", "HashSet1");
jedis.sadd("sets1", "SortedSet1");
jedis.sadd("sets1", "TreeSet");
jedis.sadd("sets2", "HashSet2");
jedis.sadd("sets2", "SortedSet1");
jedis.sadd("sets2", "TreeSet1");
// Intersection
System.out.println(jedis.sinter("sets1", "sets2"));
// Union
System.out.println(jedis.sunion("sets1", "sets2"));
// Difference
System.out.println(jedis.sdiff("sets1", "sets2"));
}
public static void sortedSet() {
System.out.println("==SoretedSet==");
Jedis jedis = RedisUtil.getJedis();
try {
jedis.zadd("hackers", 1940, "Alan Kay");
jedis.zadd("hackers", 1953, "Richard Stallman");
jedis.zadd("hackers", 1965, "Yukihiro Matsumoto");
jedis.zadd("hackers", 1916, "Claude Shannon");
jedis.zadd("hackers", 1969, "Linus Torvalds");
jedis.zadd("hackers", 1912, "Alan Turing");
Set setValues = jedis.zrange("hackers", 0, -1);
System.out.println(setValues);
Set setValues2 = jedis.zrevrange("hackers", 0, -1);
System.out.println(setValues2);
} catch (Exception e) {
e.printStackTrace();
} finally {
RedisUtil.getPool().returnResource(jedis);
}
// Flush the current database
System.out.println(jedis.flushDB());
// Add members with scores
jedis.zadd("zset", 10.1, "hello");
jedis.zadd("zset", 10.0, ":");
jedis.zadd("zset", 9.0, "zset");
jedis.zadd("zset", 11.0, "zset!");
// Number of members
System.out.println(jedis.zcard("zset"));
// Score of a member
System.out.println(jedis.zscore("zset", "zset"));
// A range of the sorted set
System.out.println(jedis.zrange("zset", 0, -1));
// Remove a member
System.out.println(jedis.zrem("zset", "zset!"));
// Count members with scores in the given range
System.out.println(jedis.zcount("zset", 9.5, 10.5));
// The whole sorted set
System.out.println(jedis.zrange("zset", 0, -1));
}
public static void testHsh() {
System.out.println("==Hash==");
Jedis jedis = RedisUtil.getJedis();
try {
Map pairs = new HashMap();
pairs.put("name", "Akshi");
pairs.put("age", "2");
pairs.put("sex", "Female");
jedis.hmset("kid", pairs);
List name = jedis.hmget("kid", "name");// the result is a generic List
// jedis.hdel("kid","age"); // delete a single field from the hash
System.out.println(jedis.hmget("kid", "pwd")); // "pwd" was never set, so this returns [null]
System.out.println(jedis.hlen("kid")); // number of fields stored in the hash "kid"
System.out.println(jedis.exists("kid"));// does the key "kid" exist?
System.out.println(jedis.hkeys("kid"));// all field names of the hash
System.out.println(jedis.hvals("kid"));// all values of the hash
Iterator<String> iter = jedis.hkeys("kid").iterator();
while (iter.hasNext()) {
String key = iter.next();
System.out.println(key + ":" + jedis.hmget("kid", key));
}
List values = jedis.lrange("messages", 0, -1);
values = jedis.hmget("kid", new String[] { "name", "age", "sex" });
System.out.println(values);
Set setValues = jedis.zrange("hackers", 0, -1);
setValues = jedis.hkeys("kid");
System.out.println(setValues);
values = jedis.hvals("kid");
System.out.println(values);
pairs = jedis.hgetAll("kid");
System.out.println(pairs);
} catch (Exception e) {
e.printStackTrace();
} finally {
RedisUtil.getPool().returnResource(jedis);
}
// Flush the current database
System.out.println(jedis.flushDB());
// Add fields
jedis.hset("hashs", "entryKey", "entryValue");
jedis.hset("hashs", "entryKey1", "entryValue1");
jedis.hset("hashs", "entryKey2", "entryValue2");
// Does a given field exist?
System.out.println(jedis.hexists("hashs", "entryKey"));
// Get a single field
System.out.println(jedis.hget("hashs", "entryKey"));
// Get several fields at once
System.out.println(jedis.hmget("hashs", "entryKey", "entryKey1"));
// Delete a field
System.out.println(jedis.hdel("hashs", "entryKey"));
// Add an increment to the value of a field
System.out.println(jedis.hincrBy("hashs", "entryKey", 123l));
// All field names
System.out.println(jedis.hkeys("hashs"));
// All values
System.out.println(jedis.hvals("hashs"));
}
public static void testOther() throws InterruptedException {
Jedis jedis = RedisUtil.getJedis();
try {
// keys() accepts glob-style wildcards
System.out.println(jedis.keys("*")); // all keys in the current database, e.g.
// [sose, sanme, name, jarorwar, foo, sname, java framework, user, braand]
System.out.println(jedis.keys("*name"));// keys ending in "name", e.g. [sname, name]
System.out.println(jedis.del("sanmdde"));// delete the key "sanmdde"; returns 1 on success,
// 0 if the deletion failed or the key did not exist
System.out.println(jedis.ttl("sname"));// remaining time to live of the key; -1 means it never expires
jedis.setex("timekey", 10, "min");// store a value with a time to live, given in seconds
Thread.sleep(5000);// after sleeping 5 seconds, about 5 seconds of the TTL remain
System.out.println(jedis.ttl("timekey"));
} finally {
RedisUtil.getPool().returnResource(jedis);
}
}
/**
* SORT:
* sorting a SET combined with values stored in String keys
*/
@org.junit.Test
public void testSort3() {
Jedis jedis = RedisUtil.getJedis();
jedis.del("tom:friend:list", "score:uid:123", "score:uid:456",
"score:uid:789", "score:uid:101", "uid:123", "uid:456",
"uid:789", "uid:101");
jedis.sadd("tom:friend:list", "123"); // tom的好友列表
jedis.sadd("tom:friend:list", "456");
jedis.sadd("tom:friend:list", "789");
jedis.sadd("tom:friend:list", "101");
jedis.set("score:uid:123", "1000"); // 好友对应的成绩
jedis.set("score:uid:456", "6000");
jedis.set("score:uid:789", "100");
jedis.set("score:uid:101", "5999");
jedis.set("uid:123", "{'uid':123,'name':'lucy'}"); // 好友的详细信息
jedis.set("uid:456", "{'uid':456,'name':'jack'}");
jedis.set("uid:789", "{'uid':789,'name':'jay'}");
jedis.set("uid:101", "{'uid':101,'name':'jolin'}");
SortingParams sortingParameters = new SortingParams();
sortingParameters.desc();
// sortingParameters.limit(0, 2);
// Note that GET patterns are applied in order: GET user_name_* GET user_password_*
// and GET user_password_* GET user_name_* return the fields in different positions.
sortingParameters.get("#");// GET also has a special pattern, "GET #",
// which returns the element being sorted itself (the uid in this example).
sortingParameters.get("uid:*");
sortingParameters.get("score:uid:*");
sortingParameters.by("score:uid:*");
// The equivalent redis-cli command is:
// ./redis-cli sort tom:friend:list by score:uid:* get # get uid:* get score:uid:*
List<String> result = jedis.sort("tom:friend:list", sortingParameters);
for (String item : result) {
System.out.println("item..." + item);
}
}
/**
* Fetch objects without sorting: the BY modifier can take a key that does not exist
* as the weight pattern, which makes SORT skip the sorting step. Use this when you
* want to fetch the external objects without paying the cost of sorting.
*
* # make sure fake_key does not exist
* redis> EXISTS fake_key
* (integer) 0
* # use fake_key as the BY pattern: no sorting, just GET name and GET password
* redis> SORT user_id BY fake_key GET # GET user_name_* GET user_password_*
* 1) "222"  2) "hacker"  3) "hey,im in"        # id, user_name, password
* 4) "59230"  5) "jack"  6) "jack201022"
* 7) "2"  8) "huangz"  9) "nobodyknows"
* 10) "1"  11) "admin"  12) "a_long_long_password"
*/
public void testSort4() {
}
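// Illustrative sketch (not in the original post): the Jedis equivalent of the
// redis-cli example in the comment above. The keys user_id, user_name_*,
// user_password_* and fake_key are assumptions carried over from that example.
@Test
public void testSort4Sketch() {
Jedis jedis = RedisUtil.getJedis();
try {
jedis.del("user_id", "user_name_1", "user_password_1", "user_name_2", "user_password_2");
jedis.rpush("user_id", "1");
jedis.rpush("user_id", "2");
jedis.set("user_name_1", "admin");
jedis.set("user_password_1", "a_long_long_password");
jedis.set("user_name_2", "huangz");
jedis.set("user_password_2", "nobodyknows");
SortingParams params = new SortingParams();
params.by("fake_key");// BY a non-existent key: SORT skips the sorting step
params.get("#", "user_name_*", "user_password_*");// id, name, password for each element
List<String> rows = jedis.sort("user_id", params);
System.out.println(rows);
} finally {
RedisUtil.getPool().returnResource(jedis);
}
}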
/**
* Saving the sort result: by default SORT simply returns the sorted result. If you
* want to keep it, pass a key to the STORE option and the result will be saved to
* that key as a list (overwriting the key if it already exists).
*
* redis> EXISTS user_info_sorted_by_level   # make sure the target key does not exist
* (integer) 0
* redis> SORT user_id BY user_level_* GET # GET user_name_* GET user_password_* STORE user_info_sorted_by_level
* (integer) 12                             # 12 elements were stored
* redis> LRANGE user_info_sorted_by_level 0 11   # inspect the stored result
* 1) "59230"  2) "jack"  3) "jack201022"
* 4) "2"  5) "huangz"  6) "nobodyknows"
* 7) "222"  8) "hacker"  9) "hey,im in"
* 10) "1"  11) "admin"  12) "a_long_long_password"
*
* A handy trick is to save the SORT result and give it a time to live with EXPIRE,
* so the result set becomes a cache of the SORT operation: you only need to run SORT
* again once the cached result expires. To implement this correctly you may need a
* lock so that multiple clients do not rebuild the cache at the same time (i.e. run
* SORT and store the result concurrently); see the SETNX command for that.
*/
@Test
public void testSort5() {
// By default SORT treats values as numbers: they are parsed as
// double-precision floats and then compared.
Jedis jedis = RedisUtil.getJedis();
// Basic usage: the simplest form is SORT key.
jedis.lpush("mylist", "1");
jedis.lpush("mylist", "4");
jedis.lpush("mylist", "6");
jedis.lpush("mylist", "3");
jedis.lpush("mylist", "0");
// List list = jedis.sort("mylist");// ascending order by default
SortingParams sortingParameters = new SortingParams();
sortingParameters.desc();
// sortingParameters.alpha();// use the ALPHA modifier when the stored values are strings
// sortingParameters.limit(0, 2);// useful for paging
// Without STORE, SORT returns the sorted result as a list; with STORE, it returns
// the number of elements in the stored result.
jedis.sort("mylist", sortingParameters, "mylist");// store the sorted result under a key; here it overwrites the original key
List list = jedis.lrange("mylist", 0, -1);
for (int i = 0; i < list.size(); i++) {
System.out.println(list.get(i));
}
jedis.sadd("tom:friend:list", "123"); // tom的好友列表
jedis.sadd("tom:friend:list", "456");
jedis.sadd("tom:friend:list", "789");
jedis.sadd("tom:friend:list", "101");
jedis.set("score:uid:123", "1000"); // 好友对应的成绩
jedis.set("score:uid:456", "6000");
jedis.set("score:uid:789", "100");
jedis.set("score:uid:101", "5999");
jedis.set("uid:123", "{'uid':123,'name':'lucy'}"); // 好友的详细信息
jedis.set("uid:456", "{'uid':456,'name':'jack'}");
jedis.set("uid:789", "{'uid':789,'name':'jay'}");
jedis.set("uid:101", "{'uid':101,'name':'jolin'}");
sortingParameters = new SortingParams();
// sortingParameters.desc();
sortingParameters.get("#");// GET 还有一个特殊的规则—— "GET #"
// ,用于获取被排序对象(我们这里的例子是 user_id )的当前元素。
sortingParameters.by("score:uid:*");
jedis.sort("tom:friend:list", sortingParameters, "tom:friend:list");
List<String> result = jedis.lrange("tom:friend:list", 0, -1);
for (String item : result) {
System.out.println("item..." + item);
}
jedis.flushDB();
RedisUtil.closeJedis(jedis);
}
public void testMore(){
// ZRANGE can be used to fetch, for example, the latest 10 items.
// Use LPUSH + LTRIM to make sure only the newest 1000 items are kept.
// HINCRBY key field increment: add increment to the value of field in the hash stored at key.
// With INCRBY, HINCRBY and friends Redis gives you atomic increments, so you can safely keep
// all kinds of counters, reset them with GETSET, or let them expire.
// LREM greet 2 morning : remove the first two occurrences of "morning", scanning from head
// to tail; useful for deleting specific comments.
// zrevrank test a : rank of "a" when the sorted set is ordered by descending score; ranks are
// 0-based indexes, so a rank of 3 means fourth place.
// zrank test a : the opposite, the rank in ascending order.
// zscore test one : the score of member "one" in the sorted set.
// zrevrange test 0 -1 : the sorted set in descending score order; zrange test 0 -1 is ascending.
// ZADD adds one or more member/score pairs to a sorted set. If a member already exists, its
// score is updated and the member is re-inserted so that it ends up in the correct position.
// zrem test one : remove a member from the sorted set.
// A short runnable sketch exercising a few of these commands follows right after this method.
}
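// Illustrative sketch (not in the original post): exercising a few of the sorted-set
// commands described in testMore() above. The key "test" and its members are arbitrary.
@Test
public void testSortedSetRanks() {
Jedis jedis = RedisUtil.getJedis();
try {
jedis.del("test");
jedis.zadd("test", 1, "one");
jedis.zadd("test", 2, "a");
jedis.zadd("test", 3, "b");
System.out.println(jedis.zrank("test", "a"));// ascending rank (0-based): prints 1
System.out.println(jedis.zrevrank("test", "a"));// descending rank (0-based): prints 1
System.out.println(jedis.zscore("test", "one"));// score of member "one": prints 1.0
System.out.println(jedis.zrevrange("test", 0, -1));// members by descending score
jedis.zrem("test", "one");// remove a member
} finally {
RedisUtil.getPool().returnResource(jedis);
}
}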
public List get_latest_comments(int start, int num_items){
// Fetch the latest comments.
// Every new comment id is pushed with: LPUSH latest.comments <comment id>
// The list is then trimmed so that Redis only keeps the newest 5000 comment ids:
// LTRIM latest.comments 0 5000
// Because the list is capped at 5000 ids, this function can always ask Redis first; only when
// the start/count parameters fall outside that range do we need to go to the database.
Jedis jedis = RedisUtil.getJedis();
List id_list = jedis.lrange("latest.comments", start, start + num_items - 1);
if (id_list.size() < num_items) {
// The original code is truncated here; presumably the missing ids are fetched
// from the database when the cached range does not cover the request.
}
RedisUtil.getPool().returnResource(jedis);
return id_list;
}
}
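For completeness, the write path of this "latest comments" pattern (only described in the comments above) might look like the following sketch; the key name and trim length follow those comments, everything else is an assumption:
public void add_comment(Jedis jedis, String commentId) {
    // Push the newest comment id onto the head of the list...
    jedis.lpush("latest.comments", commentId);
    // ...and trim the list so that Redis keeps only the newest 5000 ids.
    jedis.ltrim("latest.comments", 0, 5000);
}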
|