[Experience Sharing] Considerations for Multi-Site Clusters in Windows Server 2012 (Part 1)

Posted on 2015-11-04
  
Introduction
One of the features in Windows Server 2012 that has improved the most from previous versions is failover clustering. Windows Server 2012 allows you to build clusters that are more scalable than ever before, while at the same time giving administrators much more freedom to choose a cluster design that makes sense for their own organization, rather than being completely locked into a rigid set of requirements.  
  Although it was previously possible to build a multi-site failover cluster, Windows Server 2012 makes geo clustering much more practical. It is worth noting however, that even though Microsoft has gone to great lengths to make building clusters easier than it has ever been, good cluster design is essential. An improperly designed multi-site cluster will likely suffer from performance problems and may ultimately prove to be less than reliable. That being the case, I decided to write this article series as a way of providing you with some best practices for building multi-site clusters that are based on Windows Server 2012.
Quorum Considerations
  I want to start out by talking about one of the aspects of multi-site clustering that has traditionally proven to be the most challenging. In order for a cluster to function, it has to maintain quorum. This is a fancy way of saying that a minimal number of cluster nodes must remain functional and accessible in order for the cluster to function.
  Windows Server generally uses a Majority Node Set cluster. In a Majority Node Set cluster, a majority of the cluster nodes must be functional in order for the cluster to retain quorum. Microsoft defines the majority as half of the cluster nodes, plus one. If, for example, a Majority Node Set cluster contained four cluster nodes, then Windows would define a node majority as three cluster nodes (half of the cluster nodes plus an additional node).
  The majority node set requirement comes with a couple of implications. For starters, it means that smaller clusters can tolerate the failure of fewer nodes while still retaining quorum. For example, a four node cluster can only tolerate the failure of a single node. On the other hand, a cluster with ten nodes can retain quorum even if up to four nodes fail.
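The quorum arithmetic above can be sketched in a few lines of Python. This is only an illustration of the math, not a Windows API; the function names are mine:

```python
def majority(total_nodes):
    """Nodes required for quorum in a Majority Node Set cluster:
    more than half of the nodes, i.e. floor(n / 2) + 1."""
    return total_nodes // 2 + 1

def max_failures(total_nodes):
    """Node failures the cluster can survive while keeping quorum."""
    return total_nodes - majority(total_nodes)

for n in (4, 10):
    print(f"{n}-node cluster: quorum needs {majority(n)}, "
          f"tolerates {max_failures(n)} failures")
```

For the four-node cluster this yields a quorum of three nodes and a tolerance of one failure; for the ten-node cluster, a quorum of six and a tolerance of four, matching the figures above.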
  Although cluster node planning is really just a matter of basic math (at least in terms of calculating tolerance for node failures), things get a little bit more interesting when you bring a multisite architecture into the picture.
  Imagine for example that your organization has a primary data center and a disaster recovery data center. Now imagine that you decide to build a multisite cluster to handle a mission-critical application. You want to be able to run that application in either data center, so you want to put plenty of cluster nodes in each location.
  As previously mentioned, a Majority Node Set cluster with ten cluster nodes can survive the failure of up to four nodes. With that in mind, let’s pretend that we decided to place five nodes in each of the two data centers. That way, all but one of the cluster nodes could potentially fail in either one of the data centers and the cluster would still retain quorum.
  Although this architecture may at first sound promising, there is a major problem. Imagine what would happen if the WAN link (or the Internet connection, if that’s what you are using) between the two sites failed. In this type of situation, the cluster nodes are not smart enough to tell the difference between a WAN link failure and a mass cluster node failure.
  In this scenario, each datacenter would interpret the WAN link failure as if all of the cluster nodes in the opposite datacenter had failed. In other words, each datacenter thinks that five cluster nodes are down. Remember that in a ten node cluster, six cluster nodes have to remain online in order for the cluster to retain quorum. Each datacenter can only confirm the availability of five nodes, so neither datacenter is able to maintain quorum. Hence the clustered application fails, even though not a single cluster node has actually failed.
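The split-brain outcome can be checked with the same arithmetic. A minimal sketch (again, illustrative Python rather than anything cluster-specific):

```python
def has_quorum(reachable_nodes, total_nodes):
    """A partition retains quorum only if it can see a majority
    of the total cluster nodes (floor(n / 2) + 1)."""
    return reachable_nodes >= total_nodes // 2 + 1

total = 10
site_a, site_b = 5, 5  # five nodes in each datacenter

# WAN link fails: each site can only reach its own five nodes,
# but quorum requires six of the ten.
print(has_quorum(site_a, total))  # False
print(has_quorum(site_b, total))  # False -- neither side keeps quorum
```

Both partitions come up short of the six-node majority, so the whole cluster goes offline even though all ten nodes are healthy.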
  In this nightmare scenario, the WAN link is the cluster’s Achilles heel. It is the one single point of failure that has the potential to bring down the entire cluster. The question is how can you protect your cluster against this sort of thing?
  There are a couple of different schools of thought on preventing a WAN link outage from bringing down the cluster. In the past, a popular option has been to stack the deck in favor of one datacenter or the other. To show you how this works, let’s go back to my earlier example of a ten node cluster that spans two datacenters.
  If the goal is to prevent a WAN link failure from bringing down the cluster then you would need to place an uneven number of cluster nodes in each datacenter. A ten node cluster requires that six nodes remain online in order for the cluster to retain quorum. As such, placing six nodes in the primary datacenter and four nodes in the disaster recovery datacenter will insulate the cluster against a WAN link failure (assuming that all of the nodes in the primary datacenter are online at the time of the failure).
  A second school of thought regarding protecting a Majority Node Set cluster against a WAN link failure is to make use of a third site. This architecture works by placing half of the cluster nodes in the organization’s primary datacenter and half of the cluster nodes in a disaster recovery datacenter. The third location doesn’t actually host a cluster node. Instead, it hosts a non-clustered server that acts as a file share witness.
  A file share witness is a server that acts as a sort of referee in the event of a WAN link failure. To show you how this works, consider our earlier example in which an organization needs to build a multi-site cluster with ten cluster nodes. Now let’s suppose that we decided to put five cluster nodes in the primary datacenter and five cluster nodes in the disaster recovery datacenter.
  In this arrangement all of the same rules apply. The cluster still requires six nodes to be available in order for the cluster to retain quorum. Now suppose that a WAN link failure occurs. Neither datacenter has enough nodes for the cluster to retain quorum. However, all of the cluster nodes know about the file share witness. Therefore, both datacenters will attempt to contact the file share witness server. The datacenter with the functioning WAN connection should be able to establish contact, while the datacenter with the failed connection should not be able to. When a datacenter does establish contact with the file share witness, that server takes the place of a sixth cluster node. In doing so, it allows the cluster to retain quorum in spite of the WAN link failure.
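The witness behavior can be modeled by treating the file share witness as one extra vote, so a ten-node cluster effectively has eleven votes. This is a simplified sketch of the voting logic, not the actual cluster service implementation:

```python
def has_quorum_with_witness(reachable_nodes, witness_reachable, total_votes):
    """With a file share witness, total_votes = nodes + 1. A partition
    keeps quorum if its reachable nodes, plus the witness vote when the
    witness can be contacted, form a majority of all votes."""
    votes = reachable_nodes + (1 if witness_reachable else 0)
    return votes >= total_votes // 2 + 1

total_votes = 10 + 1  # ten cluster nodes plus the file share witness

# WAN split: each site sees only its own five nodes; only the site
# that can still reach the witness gets the tie-breaking extra vote.
print(has_quorum_with_witness(5, True, total_votes))   # True  -- 6 of 11 votes
print(has_quorum_with_witness(5, False, total_votes))  # False -- 5 of 11 votes
```

The witness vote is exactly the "sixth node" described above: the site that reaches it holds six of eleven votes and keeps the cluster online, while the isolated site stops.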
Conclusion
  Although cluster quorum is an extremely important consideration for multi-site clusters, it is far from being the only consideration. Some of the other considerations that must be taken into account are node storage and the availability of cluster resources. I will discuss these issues and more as the series progresses.
  

  The original English articles in this Windows Server 2012 series come from the WindowsNetworking site: http://www.windowsnetworking.com
