
[Experience Sharing] 200,000,000 Keys in Redis 2.0.0-rc3 (repost)

Posted on July 24, 2010 by Jeremy Zawodny


  I’ve been testing Redis 2.0.0-rc3 in the hopes of upgrading our clusters very soon. I really want to take advantage of hashes and the various tweaks and enhancements in the 2.0 tree.
I was also curious about the per-key memory overhead and wanted to get a
sense of how many keys we’d be able to store in our ten machine
cluster. I assumed (well, hoped) that we’d be able to handle 1 billion
keys, so I decided to put it to the test.
  I installed redis-2.0.0-rc3 (reported as the 1.3.16 development version) on two hosts: host1 (master) and host2 (slave).
  Then I ran two instances of a simple Perl script
on host1:

#!/usr/bin/perl -w
$|++;                    # unbuffer output
use strict;
use Redis;

# Connect to the local Redis instance on the non-default port.
my $r = Redis->new(server => 'localhost:63790') or die "$!";

# Create 100,000,000 keys named "$pid:$n" with random integer values.
# ($$ is this process's pid, so multiple copies don't collide.)
for my $key (1..100_000_000) {
    my $val = int(rand($key));
    $r->set("$$:$key", $val) or die "$!";
}
exit;
__END__
  Basically that creates 100,000,000 keys with randomly chosen integer
values. The keys are “$pid:$num” where $pid is the process id (so I
could run multiple copies). In Perl, the variable $$ is the process id.
Before running the script, I created a “foo” key with the value “bar” to
check that replication was working. Once everything looked good, I
fired up two copies of the script and watched.
  I didn’t time the execution, but I’m pretty sure it took a bit longer
than 1 hour, and definitely less than 2 hours. The final memory usage on
both hosts was right about 24GB.
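That works out to a per-key cost that is easy to sanity-check. A quick sketch of the arithmetic, using the used_memory and key-count figures from the INFO output:

```python
# Rough per-key memory cost: total used_memory divided by key count.
used_memory = 26_063_394_000   # bytes, from INFO on the master
keys = 200_000_001             # db0:keys from INFO

bytes_per_key = used_memory / keys
print(round(bytes_per_key))    # ~130 bytes per key + small integer value
```

So each key plus its small integer value costs on the order of 130 bytes in this build, overhead included.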
  Here’s the output of INFO
from both:
  Master:

redis_version:1.3.16
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
process_id:10164
uptime_in_seconds:10701
uptime_in_days:0
connected_clients:1
connected_slaves:1
blocked_clients:0
used_memory:26063394000
used_memory_human:24.27G
changes_since_last_save:79080423
bgsave_in_progress:0
last_save_time:1279930909
bgrewriteaof_in_progress:0
total_connections_received:19
total_commands_processed:216343823
expired_keys:0
hash_max_zipmap_entries:64
hash_max_zipmap_value:512
pubsub_channels:0
pubsub_patterns:0
vm_enabled:0
role:master
db0:keys=200000001,expires=0
  Slave:

redis_version:1.3.16
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
process_id:5983
uptime_in_seconds:7928
uptime_in_days:0
connected_clients:2
connected_slaves:0
blocked_clients:0
used_memory:26063393872
used_memory_human:24.27G
changes_since_last_save:78688774
bgsave_in_progress:0
last_save_time:1279930921
bgrewriteaof_in_progress:0
total_connections_received:11
total_commands_processed:214343823
expired_keys:0
hash_max_zipmap_entries:64
hash_max_zipmap_value:512
pubsub_channels:0
pubsub_patterns:0
vm_enabled:0
role:slave
master_host:host1
master_port:63790
master_link_status:up
master_last_io_seconds_ago:512
db0:keys=200000001,expires=0
  This tells me that on a 32GB box, it’s not unreasonable to host
200,000,000 keys (if their values are sufficiently small). Since I was
hoping for 100,000,000 with likely larger values, I think this looks very
promising. With a 10 machine cluster, that easily gives us
1,000,000,000 keys.
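A hedged back-of-the-envelope check of that extrapolation, assuming the per-key cost observed in this test (roughly 130 bytes with small integer values) and 32GB boxes:

```python
# Cluster sizing sketch, assuming the observed ~130 bytes/key holds.
bytes_per_key_observed = 26_063_394_000 / 200_000_001   # from INFO above
machines = 10
ram_per_machine = 32 * 2**30          # 32 GB boxes
target_keys = 1_000_000_000

keys_per_machine = target_keys // machines              # 100,000,000
mem_at_observed_cost = keys_per_machine * bytes_per_key_observed
print(round(mem_at_observed_cost / 2**30, 1))  # ~12.1 GB of 32 GB used
```

At the observed cost, 100,000,000 keys per box uses well under half the RAM, which is the headroom that makes larger values plausible.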
  In case you’re wondering, the redis.conf on both machines looked like this.

daemonize yes
pidfile /var/run/redis-0.pid
port 63790
timeout 300
save 900 10000
save 300 1000
dbfilename dump-0.rdb
dir /u/redis/data/
loglevel notice
logfile /u/redis/log/redis-0.log
databases 64
glueoutputbuf yes
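The two save lines control background RDB snapshotting; annotated below (the comments are mine, not in the original file):

```
save 900 10000   # BGSAVE if at least 10,000 writes occurred in the last 900s
save 300 1000    # BGSAVE if at least 1,000 writes occurred in the last 300s
```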
  The resulting dump file (dump-0.rdb) was 1.8GB in size.
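That is strikingly compact next to the in-memory footprint; a rough comparison, assuming the 1.8GB figure:

```python
# On-disk vs in-memory cost per key for the same dataset.
dump_bytes = int(1.8 * 2**30)     # dump-0.rdb, ~1.8 GB
used_memory = 26_063_394_000      # bytes in RAM (from INFO)
keys = 200_000_001

print(round(dump_bytes / keys))         # ~10 bytes/key on disk
print(round(used_memory / dump_bytes))  # RAM is ~13x the RDB size
```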

I’m looking forward to the official 2.0.0 release.

