[Experience Sharing] The impact of SSDs on Redis

Posted on 2015-7-20 14:00:42
Original article: http://antirez.com/news/52

Hello! As promised, today I did some SSD testing.
The setup: a Linux box with 24 GB of RAM and two disks.
A) A spinning disk.
B) An SSD (Intel 320 series).
The idea: what happens if I use the SSD partition as a swap partition and fill Redis with a dataset larger than RAM?
I have wanted to run this test for a long time, especially now that the Redis focus is on RAM only and I have abandoned the idea of targeting disk, for a number of reasons.
I already guessed that the SSD swap setup would perform badly, but I was not expecting it to be *so bad*.
Before testing this setup, let's start by testing Redis in memory on the same box, with a 10 GB data set.
IN MEMORY TEST
===
To start I filled the instance with:
./redis-benchmark -r 1000000000 -n 1000000000 -P 32 set key:rand:000000000000 foo
The write load generated this way is very high: more than half a million SET commands processed per second, using a single core:
instantaneous_ops_per_sec:629782
This is possible because we are using a pipeline of 32 commands at a time (see -P 32), which limits the number of system calls involved in processing the commands, as well as the network latency component.
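To see the effect of pipelining yourself, you can compare the same benchmark with and without the -P option (a minimal sketch; the flags are standard redis-benchmark options, the request count is arbitrary):

# One command per network round trip: every SET pays its own
# write()/read() system calls plus the full network latency.
$ ./redis-benchmark -n 1000000 -t set

# 32 commands per round trip: syscall and latency costs are
# amortized across the whole batch.
$ ./redis-benchmark -n 1000000 -t set -P 32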
After a few minutes I reached 10 GB of memory used by Redis, so I tried to save the DB while still sending the same write load to the server, to see what the additional memory usage due to copy-on-write would be under such stress conditions:
[31930] 07 Mar 12:06:48.682 * RDB: 6991 MB of memory used by copy-on-write
Almost 7 GB of additional memory used, that is, 70% more memory.
Note that this is an interesting value, since it is exactly the worst-case scenario you can get with Redis:
1) A peak load of more than 0.6 million writes per second.
2) Writes completely distributed across the data set: there is no working set in this case, the whole DB is the working set.
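To reproduce the measurement above, a minimal sketch (the log path is an assumption; adjust it to the "logfile" setting in your redis.conf — the grep pattern matches the log line quoted earlier):

# Trigger a background save while the write load is still running.
$ redis-cli bgsave
# When the save completes, the server logs the copy-on-write cost
# (assumed log location):
$ grep "copy-on-write" /var/log/redis/redis-server.log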
But given the enormous copy-on-write pressure exercised by this workload, what is the write performance while the system is saving? To find out, I started a BGSAVE and at the same time started the benchmark again:
$ redis-cli bgsave; ./redis-benchmark -r 1000000000 -n 1000000000 -P 32 set key:rand:000000000000 foo
Background saving started
^Ct key:rand:000000000000 foo: 251470.34
250k ops/sec was the lowest number I was able to get: once copy-on-write starts to happen, there is less and less new copy-on-write every second, and the benchmark soon returns to 0.6 million ops per second.
The number of keys was on the order of 100 million here.
Basically, the result of this test is that with real hardware, persisting to a normal spinning disk, Redis performs very well as long as you have enough RAM for your data and for the additional memory used while saving. No big news so far.
SSD SWAP TEST
===
For the SSD test we still use the spinning disk attached to the system for persistence, so that the SSD works purely as a swap partition.
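For reference, turning an SSD partition into swap is the standard Linux procedure (a minimal sketch; /dev/sdb1 is a hypothetical device name for the SSD partition):

# Format the SSD partition as swap space (hypothetical device name).
$ sudo mkswap /dev/sdb1
# Enable it and confirm the kernel is using it.
$ sudo swapon /dev/sdb1
$ swapon -s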
To fill the instance even more, I just started redis-benchmark again with the same command line: with those parameters, if left running forever, it would set 1 billion keys, which is enough :-)
Since the instance has 24 GB of physical RAM, for the test to be meaningful I wanted to add enough data to reach 50 GB of used memory. To speed up the process of filling the instance I disabled persistence for some time using:
CONFIG SET SAVE ""
While filling the instance, at some point I started a BGSAVE to force some more swapping.
Then when the BGSAVE finished, I started the benchmark again:
$ ./redis-benchmark -r 1000000000 -n 1000000000 -P 32 set key:rand:000000000000 foo
^Ct key:rand:000000000000 foo: 1034.16
As you can see, the results were very bad initially; probably the main hash table had ended up swapped out. After some time it started to perform decently again:
$ ./redis-benchmark -r 1000000000 -n 1000000000 -P 32 set key:rand:000000000000 foo
^Ct key:rand:000000000000 foo: 116057.11
I was able to stop and restart the benchmark multiple times and still get decent performance on restarts, as long as I was not saving at the same time. However, performance continued to be very erratic, jumping between 200k and 50k SETs per second.
... and after 10 minutes ...
It only went from 23 GB of memory used to 24 GB, with 2 GB of the data set swapped on disk.
As soon as a few GB were swapped, performance became simply too poor to be acceptable.
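How much of the Redis process is sitting in swap can be checked from /proc (a minimal sketch; it assumes a single redis-server process, so that pidof returns one PID):

# Per-process swap usage, from the kernel's status file:
$ grep VmSwap /proc/$(pidof redis-server)/status
# System-wide swap activity, one sample per second; the si/so
# columns show pages swapped in and out:
$ vmstat 1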
I then tried with reads:
$ ./redis-benchmark -r 1000000000 -n 1000000000 -P 32 get key:rand:000000000000
^Ct key:rand:000000000000 foo: 28934.12
Same issue: around 30k ops per second for both GET and SET, and *a lot* of swap activity at the same time.
What's worse is that the system was pretty unresponsive as a whole at this point.
At this point I stopped the test: the system was slow enough that filling it further would have required a lot of time, and as more data was swapped, performance only got worse.
WHAT HAPPENS?
===
What happens is simple: Redis is designed to work in an environment where random access to memory is very fast.
Hash tables, and the way Redis objects are allocated, are all based on this assumption.
Now let's take a look at the Intel 320 SSD specifications:
Random write (100% Span) -> 400 IOPS
Random write (8GB Span) -> 23000 IOPS
Basically, what happens is that at some point Redis starts to force the OS to move memory pages between RAM and swap on *every* operation performed, since we are accessing keys at random and there are no spare pages left. Note that even the better figure in the specs, ~23k random IOPS, is in the same ballpark as the ~30k ops/sec observed above, which is what you would expect if nearly every command ends up touching a swapped-out page.
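One way to observe this directly is to count major page faults, i.e. memory accesses that require disk I/O, for the Redis process (a minimal sketch using standard procps tools; again assumes a single redis-server process):

# maj_flt grows every time an access touches a swapped-out page;
# under this workload it climbs with nearly every command served.
$ ps -o pid,min_flt,maj_flt -p $(pidof redis-server)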
CONCLUSION
===
Used this way, Redis is completely useless. Systems designed to work in this kind of setup, like Twitter's fatcache or the recently announced Facebook McDipper, need to be SSD-aware, and can probably work reasonably only when a simple GET/SET/DEL model is used.
I also expect that the pathological case for these systems, that is, evenly distributed writes across a big span, is not going to be excellent because of current SSD limits; but that is exactly the case Redis is trying to solve for most users.
The freedom Redis gets from the use of memory allows us to serve much more complex tasks at very good peak performance and with minimal system complexity and underlying assumptions.
TL;DR: the outcome of this test was expected and Redis is an in-memory system :-)
