[Experience Share] Linux Bonding

  /usr/share/doc/iputils-20071127/README.bonding
  

  ======================================================================
  

  1. Bonding Driver Installation
  

  2. Bonding Driver Options
  

  3. Configuring Bonding Devices
  3.1 Configuration with Sysconfig Support
  3.1.1 Using DHCP with Sysconfig
  3.1.2 Configuring Multiple Bonds with Sysconfig
  3.2 Configuration with Initscripts Support
  3.2.1 Using DHCP with Initscripts
  3.2.2 Configuring Multiple Bonds with Initscripts
  3.3 Configuring Bonding Manually with Ifenslave
  3.3.1 Configuring Multiple Bonds Manually
  3.4 Configuring Bonding Manually via Sysfs
  

  4. Querying Bonding Configuration
  4.1 Bonding Configuration
  4.2 Network Configuration
  

  5. Switch Configuration
  

  6. 802.1q VLAN Support
  

  7. Link Monitoring
  7.1 ARP Monitor Operation
  7.2 Configuring Multiple ARP Targets
  7.3 MII Monitor Operation
  

  8. Potential Trouble Sources
  8.1 Adventures in Routing
  8.2 Ethernet Device Renaming
  8.3 Painfully Slow Or No Failed Link Detection By Miimon
  

  9. SNMP agents
  

  10. Promiscuous mode
  

  11. Configuring Bonding for High Availability
  11.1 High Availability in a Single Switch Topology
  11.2 High Availability in a Multiple Switch Topology
  11.2.1 HA Bonding Mode Selection for Multiple Switch Topology
  11.2.2 HA Link Monitoring for Multiple Switch Topology
  

  12. Configuring Bonding for Maximum Throughput
  12.1 Maximum Throughput in a Single Switch Topology
  12.1.1 MT Bonding Mode Selection for Single Switch Topology
  12.1.2 MT Link Monitoring for Single Switch Topology
  12.2 Maximum Throughput in a Multiple Switch Topology
  12.2.1 MT Bonding Mode Selection for Multiple Switch Topology
  12.2.2 MT Link Monitoring for Multiple Switch Topology
  

  13. Switch Behavior Issues
  13.1 Link Establishment and Failover Delays
  13.2 Duplicated Incoming Packets
  

  14. Hardware Specific Considerations
  14.1 IBM BladeCenter
  

  15. Frequently Asked Questions
  

  16. Resources and Links
  ======================================================================
  

  1. Bonding Driver Installation
  ==============================
  .....
  ....
  ...
  ..
  .
  

  1.1 Configure and build the kernel with bonding
  -----------------------------------------------
.....
....
...
..
.
  

  1.2 Install ifenslave Control Utility
  -------------------------------------
.....
....
...
..
.
  

  2. Bonding Driver Options
  =========================
  Options for the bonding driver are supplied as parameters to the
  bonding module at load time, or are specified via sysfs.
  

  Module options may be given as command line arguments to the
  insmod or modprobe command, but are usually specified in either the
  /etc/modules.conf or /etc/modprobe.conf configuration file, or in a
  distro-specific configuration file (some of which are detailed in the next
  section).
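
  As an illustration only (a minimal sketch; the mode and miimon values
  here are placeholders, not recommendations), loading the driver by
  hand with options on the command line might look like:

  # modprobe bonding mode=active-backup miimon=100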
  

  Details on bonding support for sysfs are provided in the
  "Configuring Bonding Manually via Sysfs" section, below.
  

  The available bonding driver parameters are listed below. If a
  parameter is not specified the default value is used.  When initially
  configuring a bond, it is recommended "tail -f /var/log/messages" be
  run in a separate window to watch for bonding driver error messages.
  

  It is critical that either the miimon or arp_interval and
  arp_ip_target parameters be specified, otherwise serious network
  degradation will occur during link failures.  Very few devices do not
  support at least miimon, so there is really no reason not to use it.
  

  Options with textual values will accept either the text name
  or, for backwards compatibility, the option value.  E.g.,
  "mode=802.3ad" and "mode=4" set the same mode.
  

  The parameters are as follows:
  

  arp_interval
  

  Specifies the ARP link monitoring frequency in milliseconds.
  

  The ARP monitor works by periodically checking the slave
  devices to determine whether they have sent or received
  traffic recently (the precise criteria depend upon the
  bonding mode, and the state of the slave).  Regular traffic is
  generated via ARP probes issued for the addresses specified by
  the arp_ip_target option.
  

  This behavior can be modified by the arp_validate option,
  below.
  

  If ARP monitoring is used in an etherchannel compatible mode
  (modes 0 and 2), the switch should be configured in a mode
  that evenly distributes packets across all links. If the
  switch is configured to distribute the packets in an XOR
  fashion, all replies from the ARP targets will be received on
  the same link which could cause the other team members to
  fail.  ARP monitoring should not be used in conjunction with
  miimon.  A value of 0 disables ARP monitoring.  The default
  value is 0.
  

  arp_ip_target
  

  Specifies the IP addresses to use as ARP monitoring peers when
  arp_interval is > 0.  These are the targets of the ARP request
  sent to determine the health of the link to the targets.
  Specify these values in ddd.ddd.ddd.ddd format.  Multiple IP
  addresses must be separated by a comma.  At least one IP
  address must be given for ARP monitoring to function.  The
  maximum number of targets that can be specified is 16.  The
  default value is no IP addresses.
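
  As a hedged example, a modprobe.conf fragment enabling ARP monitoring
  against two targets might look like the following (the addresses are
  placeholders for reachable hosts on your own network):

  alias bond0 bonding
  options bond0 mode=active-backup arp_interval=1000 arp_ip_target=192.168.1.1,192.168.1.254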
  

  arp_validate
  

  Specifies whether or not ARP probes and replies should be
  validated in the active-backup mode.  This causes the ARP
  monitor to examine the incoming ARP requests and replies, and
  only consider a slave to be up if it is receiving the
  appropriate ARP traffic.
  

  Possible values are:
  

  none or 0
  

  No validation is performed.  This is the default.
  

  active or 1
  

  Validation is performed only for the active slave.
  

  backup or 2
  

  Validation is performed only for backup slaves.
  

  all or 3
  

  Validation is performed for all slaves.
  

  For the active slave, the validation checks ARP replies to
  confirm that they were generated by an arp_ip_target.  Since
  backup slaves do not typically receive these replies, the
  validation performed for backup slaves is on the ARP request
  sent out via the active slave.  It is possible that some
  switch or network configurations may result in situations
  wherein the backup slaves do not receive the ARP requests; in
  such a situation, validation of backup slaves must be
  disabled.
  

  This option is useful in network configurations in which
  multiple bonding hosts are concurrently issuing ARPs to one or
  more targets beyond a common switch.  Should the link between
  the switch and target fail (but not the switch itself), the
  probe traffic generated by the multiple bonding instances will
  fool the standard ARP monitor into considering the links as
  still up.  Use of the arp_validate option can resolve this, as
  the ARP monitor will only consider ARP requests and replies
  associated with its own instance of bonding.
  

  This option was added in bonding version 3.1.0.
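
  Building on the sketch above, validation for all slaves could be
  requested as follows (the target address is again only illustrative):

  options bond0 mode=active-backup arp_interval=1000 arp_ip_target=192.168.1.254 arp_validate=all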
  

  downdelay
  

  Specifies the time, in milliseconds, to wait before disabling
  a slave after a link failure has been detected.  This option
  is only valid for the miimon link monitor.  The downdelay
  value should be a multiple of the miimon value; if not, it
  will be rounded down to the nearest multiple.  The default
  value is 0.
  

  fail_over_mac
  

  Specifies whether active-backup mode should set all slaves to
  the same MAC address (the traditional behavior), or, when
  enabled, change the bond's MAC address when changing the
  active interface (i.e., fail over the MAC address itself).
  

  Fail over MAC is useful for devices that cannot ever alter
  their MAC address, or for devices that refuse incoming
  broadcasts with their own source MAC (which interferes with
  the ARP monitor).
  

  The down side of fail over MAC is that every device on the
  network must be updated via gratuitous ARP, vs. just updating
  a switch or set of switches (which often takes place for any
  traffic, not just ARP traffic, if the switch snoops incoming
  traffic to update its tables) for the traditional method.  If
  the gratuitous ARP is lost, communication may be disrupted.
  

  When fail over MAC is used in conjunction with the mii monitor,
  devices which assert link up prior to being able to actually
  transmit and receive are particularly susceptible to loss of
  the gratuitous ARP, and an appropriate updelay setting may be
  required.
  

  A value of 0 disables fail over MAC, and is the default.  A
  value of 1 enables fail over MAC.  This option is enabled
  automatically if the first slave added cannot change its MAC
  address.  This option may be modified via sysfs only when no
  slaves are present in the bond.
  

  This option was added in bonding version 3.2.0.
  

  lacp_rate
  

  Option specifying the rate at which we'll ask our link partner
  to transmit LACPDU packets in 802.3ad mode.  Possible values
  are:
  

  slow or 0
  Request partner to transmit LACPDUs every 30 seconds
  

  fast or 1
  Request partner to transmit LACPDUs every 1 second
  

  The default is slow.
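
  A minimal sketch of an 802.3ad configuration requesting the faster
  LACPDU rate (this assumes the attached switch ports are configured
  for dynamic link aggregation):

  options bond0 mode=802.3ad miimon=100 lacp_rate=fast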
  

  max_bonds
  

  Specifies the number of bonding devices to create for this
  instance of the bonding driver.  E.g., if max_bonds is 3, and
  the bonding driver is not already loaded, then bond0, bond1
  and bond2 will be created.  The default value is 1.
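
  For example, the following sketch asks a single module load to create
  two bonding devices; note that both bonds then share the same options:

  options bonding max_bonds=2 mode=active-backup miimon=100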
  

  miimon
  

  Specifies the MII link monitoring frequency in milliseconds.
  This determines how often the link state of each slave is
  inspected for link failures.  A value of zero disables MII
  link monitoring.  A value of 100 is a good starting point.
  The use_carrier option, below, affects how the link state is
  determined.  See the High Availability section for additional
  information.  The default value is 0.
  

  mode
  

  Specifies one of the bonding policies. The default is
  balance-rr (round robin).  Possible values are:
  

  balance-rr or 0
  

  Round-robin policy: Transmit packets in sequential
  order from the first available slave through the
  last.  This mode provides load balancing and fault
  tolerance.
  

  active-backup or 1
  

  Active-backup policy: Only one slave in the bond is
  active.  A different slave becomes active if, and only
  if, the active slave fails.  The bond's MAC address is
  externally visible on only one port (network adapter)
  to avoid confusing the switch.
  

  In bonding version 2.6.2 or later, when a failover
  occurs in active-backup mode, bonding will issue one
  or more gratuitous ARPs on the newly active slave.
  One gratuitous ARP is issued for the bonding master
  interface and each VLAN interface configured above
  it, provided that the interface has at least one IP
  address configured.  Gratuitous ARPs issued for VLAN
  interfaces are tagged with the appropriate VLAN id.
  

  This mode provides fault tolerance.  The primary
  option, documented below, affects the behavior of this
  mode.
  

  balance-xor or 2
  

  XOR policy: Transmit based on the selected transmit
  hash policy.  The default policy is a simple [(source
  MAC address XOR'd with destination MAC address) modulo
  slave count].  Alternate transmit policies may be
  selected via the xmit_hash_policy option, described
  below.
  

  This mode provides load balancing and fault tolerance.
  

  broadcast or 3
  

  Broadcast policy: transmits everything on all slave
  interfaces.  This mode provides fault tolerance.
  

  802.3ad or 4
  

  IEEE 802.3ad Dynamic link aggregation.  Creates
  aggregation groups that share the same speed and
  duplex settings.  Utilizes all slaves in the active
  aggregator according to the 802.3ad specification.
  

  Slave selection for outgoing traffic is done according
  to the transmit hash policy, which may be changed from
  the default simple XOR policy via the xmit_hash_policy
  option, documented below.  Note that not all transmit
  policies may be 802.3ad compliant, particularly in
  regards to the packet mis-ordering requirements of
  section 43.2.4 of the 802.3ad standard.  Differing
  peer implementations will have varying tolerances for
  noncompliance.
  

  Prerequisites:
  

  1. Ethtool support in the base drivers for retrieving
  the speed and duplex of each slave.
  

  2. A switch that supports IEEE 802.3ad Dynamic link
  aggregation.
  

  Most switches will require some type of configuration
  to enable 802.3ad mode.
  

  balance-tlb or 5
  

  Adaptive transmit load balancing: channel bonding that
  does not require any special switch support.  The
  outgoing traffic is distributed according to the
  current load (computed relative to the speed) on each
  slave.  Incoming traffic is received by the current
  slave.  If the receiving slave fails, another slave
  takes over the MAC address of the failed receiving
  slave.
  

  Prerequisite:
  

  Ethtool support in the base drivers for retrieving the
  speed of each slave.
  

  balance-alb or 6
  

  Adaptive load balancing: includes balance-tlb plus
  receive load balancing (rlb) for IPV4 traffic, and
  does not require any special switch support.  The
  receive load balancing is achieved by ARP negotiation.
  The bonding driver intercepts the ARP Replies sent by
  the local system on their way out and overwrites the
  source hardware address with the unique hardware
  address of one of the slaves in the bond such that
  different peers use different hardware addresses for
  the server.
  

  Receive traffic from connections created by the server
  is also balanced.  When the local system sends an ARP
  Request the bonding driver copies and saves the peer's
  IP information from the ARP packet.  When the ARP
  Reply arrives from the peer, its hardware address is
  retrieved and the bonding driver initiates an ARP
  reply to this peer assigning it to one of the slaves
  in the bond.  A problematic outcome of using ARP
  negotiation for balancing is that each time that an
  ARP request is broadcast it uses the hardware address
  of the bond.  Hence, peers learn the hardware address
  of the bond and the balancing of receive traffic
  collapses to the current slave.  This is handled by
  sending updates (ARP Replies) to all the peers with
  their individually assigned hardware address such that
  the traffic is redistributed.  Receive traffic is also
  redistributed when a new slave is added to the bond
  and when an inactive slave is re-activated.  The
  receive load is distributed sequentially (round robin)
  among the group of highest speed slaves in the bond.
  

  When a link is reconnected or a new slave joins the
  bond the receive traffic is redistributed among all
  active slaves in the bond by initiating ARP Replies
  with the selected MAC address to each of the
  clients.  The updelay parameter (detailed below) must
  be set to a value equal to or greater than the switch's
  forwarding delay so that the ARP Replies sent to the
  peers will not be blocked by the switch.
  

  Prerequisites:
  

  1. Ethtool support in the base drivers for retrieving
  the speed of each slave.
  

  2. Base driver support for setting the hardware
  address of a device while it is open.  This is
  required so that there will always be one slave in the
  team using the bond hardware address (the
  curr_active_slave) while having a unique hardware
  address for each slave in the bond.  If the
  curr_active_slave fails its hardware address is
  swapped with the new curr_active_slave that was
  chosen.
  

  primary
  

  A string (eth0, eth2, etc) specifying which slave is the
  primary device.  The specified device will always be the
  active slave while it is available.  Only when the primary is
  off-line will alternate devices be used.  This is useful when
  one slave is preferred over another, e.g., when one slave has
  higher throughput than another.
  

  The primary option is only valid for active-backup mode.
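
  A sketch of preferring one slave (eth0 here stands in for whichever
  interface has the higher throughput):

  options bond0 mode=active-backup miimon=100 primary=eth0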
  

  updelay
  

  Specifies the time, in milliseconds, to wait before enabling a
  slave after a link recovery has been detected.  This option is
  only valid for the miimon link monitor.  The updelay value
  should be a multiple of the miimon value; if not, it will be
  rounded down to the nearest multiple.  The default value is 0.
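
  As an illustration, the following keeps both delays as multiples of
  the monitoring interval (the values are arbitrary starting points,
  not recommendations):

  options bond0 miimon=100 downdelay=200 updelay=400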
  

  use_carrier
  

  Specifies whether or not miimon should use MII or ETHTOOL
  ioctls vs. netif_carrier_ok() to determine the link
  status. The MII or ETHTOOL ioctls are less efficient and
  utilize a deprecated calling sequence within the kernel.  The
  netif_carrier_ok() relies on the device driver to maintain its
  state with netif_carrier_on/off; at this writing, most, but
  not all, device drivers support this facility.
  

  If bonding insists that the link is up when it should not be,
  it may be that your network device driver does not support
  netif_carrier_on/off.  The default state for netif_carrier is
  "carrier on," so if a driver does not support netif_carrier,
  it will appear as if the link is always up.  In this case,
  setting use_carrier to 0 will cause bonding to revert to the
  MII / ETHTOOL ioctl method to determine the link state.
  

  A value of 1 enables the use of netif_carrier_ok(), a value of
  0 will use the deprecated MII / ETHTOOL ioctls.  The default
  value is 1.
  

  xmit_hash_policy
  

  Selects the transmit hash policy to use for slave selection in
  balance-xor and 802.3ad modes.  Possible values are:
  

  layer2
  

  Uses XOR of hardware MAC addresses to generate the
  hash.  The formula is
  

  (source MAC XOR destination MAC) modulo slave count
  

  This algorithm will place all traffic to a particular
  network peer on the same slave.
  

  This algorithm is 802.3ad compliant.
  

  layer2+3
  

  This policy uses a combination of layer2 and layer3
  protocol information to generate the hash.
  

  Uses XOR of hardware MAC addresses and IP addresses to
  generate the hash.  The formula is
  

  (((source IP XOR dest IP) AND 0xffff) XOR
  ( source MAC XOR destination MAC ))
  modulo slave count
  

  This algorithm will place all traffic to a particular
  network peer on the same slave.  For non-IP traffic,
  the formula is the same as for the layer2 transmit
  hash policy.
  

  This policy is intended to provide a more balanced
  distribution of traffic than layer2 alone, especially
  in environments where a layer3 gateway device is
  required to reach most destinations.
  

  This algorithm is 802.3ad compliant.
  

  layer3+4
  

  This policy uses upper layer protocol information,
  when available, to generate the hash.  This allows for
  traffic to a particular network peer to span multiple
  slaves, although a single connection will not span
  multiple slaves.
  

  The formula for unfragmented TCP and UDP packets is
  

  ((source port XOR dest port) XOR
   ((source IP XOR dest IP) AND 0xffff))
  modulo slave count
  

  For fragmented TCP or UDP packets and all other IP
  protocol traffic, the source and destination port
  information is omitted.  For non-IP traffic, the
  formula is the same as for the layer2 transmit hash
  policy.
  

  This policy is intended to mimic the behavior of
  certain switches, notably Cisco switches with PFC2 as
  well as some Foundry and IBM products.
  

  This algorithm is not fully 802.3ad compliant.  A
  single TCP or UDP conversation containing both
  fragmented and unfragmented packets will see packets
  striped across two interfaces.  This may result in out
  of order delivery.  Most traffic types will not meet
  this criterion, as TCP rarely fragments traffic, and
  most UDP traffic is not involved in extended
  conversations.  Other implementations of 802.3ad may
  or may not tolerate this noncompliance.
  

  The default value is layer2.  This option was added in bonding
  version 2.6.3.  In earlier versions of bonding, this parameter
  does not exist, and the layer2 policy is the only policy.  The
  layer2+3 value was added for bonding version 3.2.2.
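
  For example, an 802.3ad bond hashing on layer 3+4 information could
  be loaded as sketched below; verify that your peer tolerates the
  noncompliance noted above:

  options bond0 mode=802.3ad miimon=100 xmit_hash_policy=layer3+4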
  

  3. Configuring Bonding Devices
  ==============================
  

  You can configure bonding using either your distro's network
  initialization scripts, or manually using either ifenslave or the
  sysfs interface.  Distros generally use one of two packages for the
  network initialization scripts: initscripts or sysconfig.  Recent
  versions of these packages have support for bonding, while older
  versions do not.
  

  We will first describe the options for configuring bonding for
  distros using versions of initscripts and sysconfig with full or
  partial support for bonding, then provide information on enabling
  bonding without support from the network initialization scripts (i.e.,
  older versions of initscripts or sysconfig).
  

  If you're unsure whether your distro uses sysconfig or
  initscripts, or don't know if it's new enough, have no fear.
  Determining this is fairly straightforward.
  

  First, issue the command:
  

  $ rpm -qf /sbin/ifup
  

  It will respond with a line of text starting with either
  "initscripts" or "sysconfig," followed by some numbers.  This is the
  package that provides your network initialization scripts.
  

  Next, to determine if your installation supports bonding,
  issue the command:
  

  $ grep ifenslave /sbin/ifup
  

  If this returns any matches, then your initscripts or
  sysconfig has support for bonding.
  

  3.1 Configuration with Sysconfig Support
  ----------------------------------------
  

  This section applies to distros using a version of sysconfig
  with bonding support, for example, SuSE Linux Enterprise Server 9.
  

.....
....
...
..
.
  

  3.2 Configuration with Initscripts Support
  ------------------------------------------
  

  This section applies to distros using a recent version of
  initscripts with bonding support, for example, Red Hat Enterprise Linux
  version 3 or later, Fedora, etc.  On these systems, the network
  initialization scripts have knowledge of bonding, and can be configured to
  control bonding devices.  Note that older versions of the initscripts
  package have lower levels of support for bonding; this will be noted where
  applicable.
  

  These distros will not automatically load the network adapter
  driver unless the ethX device is configured with an IP address.
  Because of this constraint, users must manually configure a
  network-script file for all physical adapters that will be members of
  a bondX link.  Network script files are located in the directory:
  

  /etc/sysconfig/network-scripts
  

  The file name must be prefixed with "ifcfg-eth" and suffixed
  with the adapter's physical adapter number.  For example, the script
  for eth0 would be named /etc/sysconfig/network-scripts/ifcfg-eth0.
  Place the following text in the file:
  

  DEVICE=eth0
  USERCTL=no
  ONBOOT=yes
  MASTER=bond0
  SLAVE=yes
  BOOTPROTO=none
  

  The DEVICE= line will be different for every ethX device and
  must correspond with the name of the file, i.e., ifcfg-eth1 must have
  a device line of DEVICE=eth1.  The setting of the MASTER= line will
  also depend on the final bonding interface name chosen for your bond.
  As with other network devices, these typically start at 0, and go up
  one for each device, i.e., the first bonding instance is bond0, the
  second is bond1, and so on.
  

  Next, create a bond network script.  The file name for this
  script will be /etc/sysconfig/network-scripts/ifcfg-bondX where X is
  the number of the bond.  For bond0 the file is named "ifcfg-bond0",
  for bond1 it is named "ifcfg-bond1", and so on.  Within that file,
  place the following text:
  

  DEVICE=bond0
  IPADDR=192.168.1.1
  NETMASK=255.255.255.0
  NETWORK=192.168.1.0
  BROADCAST=192.168.1.255
  ONBOOT=yes
  BOOTPROTO=none
  USERCTL=no
  

  Be sure to change the networking specific lines (IPADDR,
  NETMASK, NETWORK and BROADCAST) to match your network configuration.
  

  For later versions of initscripts, such as that found with Fedora
  7 and Red Hat Enterprise Linux version 5 (or later), it is possible, and,
  indeed, preferable, to specify the bonding options in the ifcfg-bond0
  file, e.g. a line of the format:
  

  BONDING_OPTS="mode=active-backup arp_interval=60 arp_ip_target=+192.168.1.254"
  

  will configure the bond with the specified options.  The options
  specified in BONDING_OPTS are identical to the bonding module parameters
  except for the arp_ip_target field.  Each target should be included as a
  separate option and should be preceded by a '+' to indicate it should be
  added to the list of queried targets, e.g.,
  

  arp_ip_target=+192.168.1.1 arp_ip_target=+192.168.1.2
  

  is the proper syntax to specify multiple targets.  When specifying
  options via BONDING_OPTS, it is not necessary to edit /etc/modules.conf or
  /etc/modprobe.conf.
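
  Putting the pieces together, a complete ifcfg-bond0 that carries its
  options in BONDING_OPTS might look like the following sketch (the
  addresses and option values are placeholders):

  DEVICE=bond0
  IPADDR=192.168.1.1
  NETMASK=255.255.255.0
  ONBOOT=yes
  BOOTPROTO=none
  USERCTL=no
  BONDING_OPTS="mode=active-backup miimon=100 primary=eth0"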
  

  For older versions of initscripts that do not support
  BONDING_OPTS, it is necessary to edit /etc/modules.conf (or
  /etc/modprobe.conf, depending upon your distro) to load the bonding module
  with your desired options when the bond0 interface is brought up.  The
  following lines in /etc/modules.conf (or modprobe.conf) will load the
  bonding module, and select its options:
  

  alias bond0 bonding
  options bond0 mode=balance-alb miimon=100
  

  Replace the sample parameters with the appropriate set of
  options for your configuration.
  

  Finally run "/etc/rc.d/init.d/network restart" as root.  This
  will restart the networking subsystem and your bond link should be now
  up and running.
  

  3.2.1 Using DHCP with Initscripts
  ---------------------------------
  

  Recent versions of initscripts (the versions supplied with Fedora
  Core 3 and Red Hat Enterprise Linux 4, or later versions, are reported to
  work) have support for assigning IP information to bonding devices via
  DHCP.
  

  To configure bonding for DHCP, configure it as described
  above, except replace the line "BOOTPROTO=none" with "BOOTPROTO=dhcp"
  and add a line consisting of "TYPE=Bonding".  Note that the TYPE value
  is case sensitive.
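
  Assuming an initscripts version that also supports BONDING_OPTS, a
  sketch of an ifcfg-bond0 using DHCP would be:

  DEVICE=bond0
  BOOTPROTO=dhcp
  ONBOOT=yes
  USERCTL=no
  TYPE=Bonding
  BONDING_OPTS="mode=active-backup miimon=100"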
  

  3.2.2 Configuring Multiple Bonds with Initscripts
  -------------------------------------------------
  

  Initscripts packages that are included with Fedora 7 and Red Hat
  Enterprise Linux 5 support multiple bonding interfaces by simply
  specifying the appropriate BONDING_OPTS= in ifcfg-bondX where X is the
  number of the bond.  This support requires sysfs support in the kernel,
  and a bonding driver of version 3.0.0 or later.  Other configurations may
  not support this method for specifying multiple bonding interfaces; for
  those instances, see the "Configuring Multiple Bonds Manually" section,
  below.
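
  For instance, a second bond with different options could be described
  by an ifcfg-bond1 along the following lines (a sketch only; adjust the
  addressing and options to suit):

  DEVICE=bond1
  IPADDR=192.168.2.1
  NETMASK=255.255.255.0
  ONBOOT=yes
  BOOTPROTO=none
  USERCTL=no
  BONDING_OPTS="mode=balance-alb miimon=50"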
  

  3.3 Configuring Bonding Manually with Ifenslave
  -----------------------------------------------
  

  This section applies to distros whose network initialization
  scripts (the sysconfig or initscripts package) do not have specific
  knowledge of bonding.  One such distro is SuSE Linux Enterprise Server
  version 8.
  

  The general method for these systems is to place the bonding
  module parameters into /etc/modules.conf or /etc/modprobe.conf (as
  appropriate for the installed distro), then add modprobe and/or
  ifenslave commands to the system's global init script.  The name of
  the global init script differs; for sysconfig, it is
  /etc/init.d/boot.local and for initscripts it is /etc/rc.d/rc.local.
  

  For example, if you wanted to make a simple bond of two e100
  devices (presumed to be eth0 and eth1), and have it persist across
  reboots, edit the appropriate file (/etc/init.d/boot.local or
  /etc/rc.d/rc.local), and add the following:
  

  modprobe bonding mode=balance-alb miimon=100
  modprobe e100
  ifconfig bond0 192.168.1.1 netmask 255.255.255.0 up
  ifenslave bond0 eth0
  ifenslave bond0 eth1
  

  Replace the example bonding module parameters and bond0
  network configuration (IP address, netmask, etc) with the appropriate
  values for your configuration.
  

  Unfortunately, this method will not provide support for the
  ifup and ifdown scripts on the bond devices.  To reload the bonding
  configuration, it is necessary to run the initialization script, e.g.,
  

  # /etc/init.d/boot.local
  

  or
  

  # /etc/rc.d/rc.local
  

  It may be desirable in such a case to create a separate script
  which only initializes the bonding configuration, then call that
  separate script from within boot.local.  This allows for bonding to be
  enabled without re-running the entire global init script.
  

  To shut down the bonding devices, it is necessary to first
  mark the bonding device itself as being down, then remove the
  appropriate device driver modules.  For our example above, you can do
  the following:
  

  # ifconfig bond0 down
  # rmmod bonding
  # rmmod e100
  

  Again, for convenience, it may be desirable to create a script
  with these commands.
  

  

  3.3.1 Configuring Multiple Bonds Manually
  -----------------------------------------
  

  This section contains information on configuring multiple
  bonding devices with differing options for those systems whose network
  initialization scripts lack support for configuring multiple bonds.
  

  If you require multiple bonding devices, but all with the same
  options, you may wish to use the "max_bonds" module parameter,
  documented above.
  

  To create multiple bonding devices with differing options, it is
  preferable to use bonding parameters exported by sysfs, documented in the
  section below.
  

  For versions of bonding without sysfs support, the only means to
  provide multiple instances of bonding with differing options is to load
  the bonding driver multiple times.  Note that current versions of the
  sysconfig network initialization scripts handle this automatically; if
  your distro uses these scripts, no special action is needed.  See the
  section Configuring Bonding Devices, above, if you're not sure about your
  network initialization scripts.
  

  To load multiple instances of the module, it is necessary to
  specify a different name for each instance (the module loading system
  requires that every loaded module, even multiple instances of the same
  module, have a unique name).  This is accomplished by supplying multiple
  sets of bonding options in /etc/modprobe.conf, for example:
  

  alias bond0 bonding
  options bond0 -o bond0 mode=balance-rr miimon=100
  

  alias bond1 bonding
  options bond1 -o bond1 mode=balance-alb miimon=50
  

  will load the bonding module two times.  The first instance is
  named "bond0" and creates the bond0 device in balance-rr mode with an
  miimon of 100.  The second instance is named "bond1" and creates the
  bond1 device in balance-alb mode with an miimon of 50.
  

  In some circumstances (typically with older distributions),
  the above does not work, and the second bonding instance never sees
  its options.  In that case, the second options line can be substituted
  as follows:
  

  install bond1 /sbin/modprobe --ignore-install bonding -o bond1 \
  mode=balance-alb miimon=50
  

  This may be repeated any number of times, specifying a new and
  unique name in place of bond1 for each subsequent instance.
  

  It has been observed that some Red Hat supplied kernels are unable
  to rename modules at load time (the "-o bond1" part).  Attempts to pass
  that option to modprobe will produce an "Operation not permitted" error.
  This has been reported on some Fedora Core kernels, and has been seen on
  RHEL 4 as well.  On kernels exhibiting this problem, it will be impossible
  to configure multiple bonds with differing parameters (as they are older
  kernels, and also lack sysfs support).
  

  3.4 Configuring Bonding Manually via Sysfs
  ------------------------------------------
  

  Starting with version 3.0.0, Channel Bonding may be configured
  via the sysfs interface.  This interface allows dynamic configuration
  of all bonds in the system without unloading the module.  It also
  allows for adding and removing bonds at runtime.  Ifenslave is no
  longer required, though it is still supported.
  

  Use of the sysfs interface allows you to use multiple bonds
  with different configurations without having to reload the module.
  It also allows you to use multiple, differently configured bonds when
  bonding is compiled into the kernel.
  

  You must have the sysfs filesystem mounted to configure
  bonding this way.  The examples in this document assume that you
  are using the standard mount point for sysfs, e.g. /sys.  If your
  sysfs filesystem is mounted elsewhere, you will need to adjust the
  example paths accordingly.
  

  Creating and Destroying Bonds
  -----------------------------
  To add a new bond foo:
  # echo +foo > /sys/class/net/bonding_masters
  

  To remove an existing bond bar:
  # echo -bar > /sys/class/net/bonding_masters
  

  To show all existing bonds:
  # cat /sys/class/net/bonding_masters
  

  NOTE: due to 4K size limitation of sysfs files, this list may be
  truncated if you have more than a few hundred bonds.  This is unlikely
  to occur under normal operating conditions.
  

  Adding and Removing Slaves
  --------------------------
  Interfaces may be enslaved to a bond using the file
  /sys/class/net/<bond>/bonding/slaves.  The semantics for this file
  are the same as for the bonding_masters file.
  

  To enslave interface eth0 to bond bond0:
  # ifconfig bond0 up
  # echo +eth0 > /sys/class/net/bond0/bonding/slaves
  

  To free slave eth0 from bond bond0:
  # echo -eth0 > /sys/class/net/bond0/bonding/slaves
  

  When an interface is enslaved to a bond, symlinks between the
  two are created in the sysfs filesystem.  In this case, you would get
  /sys/class/net/bond0/slave_eth0 pointing to /sys/class/net/eth0, and
  /sys/class/net/eth0/master pointing to /sys/class/net/bond0.
  

  This means that you can tell quickly whether or not an
  interface is enslaved by looking for the master symlink.  Thus:
  # echo -eth0 > /sys/class/net/eth0/master/bonding/slaves
  will free eth0 from whatever bond it is enslaved to, regardless of
  the name of the bond interface.
  

  Changing a Bond's Configuration
  -------------------------------
  Each bond may be configured individually by manipulating the
  files located in /sys/class/net/<bond name>/bonding
  

  The names of these files correspond directly with the command-
  line parameters described elsewhere in this file, and, with the
  exception of arp_ip_target, they accept the same values.  To see the
  current setting, simply cat the appropriate file.
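
  For example, to read back the current mode of bond0 (the output shown
  is typical, listing the mode name followed by its numeric value):

  # cat /sys/class/net/bond0/bonding/mode
  balance-alb 6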
  

  A few examples will be given here; for specific usage
  guidelines for each parameter, see the appropriate section in this
  document.
  

  To configure bond0 for balance-alb mode:
  # ifconfig bond0 down
  # echo 6 > /sys/class/net/bond0/bonding/mode
   - or -
  # echo balance-alb > /sys/class/net/bond0/bonding/mode
  NOTE: The bond interface must be down before the mode can be
  changed.
  

  To enable MII monitoring on bond0 with a 1 second interval:
  # echo 1000 > /sys/class/net/bond0/bonding/miimon
  NOTE: If ARP monitoring is enabled, it will be disabled when MII
  monitoring is enabled, and vice-versa.
  

  To add ARP targets:
  # echo +192.168.0.100 > /sys/class/net/bond0/bonding/arp_ip_target
  # echo +192.168.0.101 > /sys/class/net/bond0/bonding/arp_ip_target
  NOTE:  up to 10 target addresses may be specified.
  

  To remove an ARP target:
  # echo -192.168.0.100 > /sys/class/net/bond0/bonding/arp_ip_target
  

  Example Configuration
  ---------------------
  We begin with the same example that is shown in section 3.3,
  executed with sysfs, and without using ifenslave.
  

  To make a simple bond of two e100 devices (presumed to be eth0
  and eth1), and have it persist across reboots, edit the appropriate
  file (/etc/init.d/boot.local or /etc/rc.d/rc.local), and add the
  following:
  

  modprobe bonding
  modprobe e100
  echo balance-alb > /sys/class/net/bond0/bonding/mode
  ifconfig bond0 192.168.1.1 netmask 255.255.255.0 up
  echo 100 > /sys/class/net/bond0/bonding/miimon
  echo +eth0 > /sys/class/net/bond0/bonding/slaves
  echo +eth1 > /sys/class/net/bond0/bonding/slaves
  

  To add a second bond, with two e1000 interfaces in
  active-backup mode, using ARP monitoring, add the following lines to
  your init script:
  

  modprobe e1000
  echo +bond1 > /sys/class/net/bonding_masters
  echo active-backup > /sys/class/net/bond1/bonding/mode
  ifconfig bond1 192.168.2.1 netmask 255.255.255.0 up
  echo +192.168.2.100 > /sys/class/net/bond1/bonding/arp_ip_target
  echo 2000 > /sys/class/net/bond1/bonding/arp_interval
  echo +eth2 > /sys/class/net/bond1/bonding/slaves
  echo +eth3 > /sys/class/net/bond1/bonding/slaves
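
  After bringing either bond up, its state (active slaves, link status,
  and so on) can be checked by reading the corresponding file under
  /proc/net/bonding, for example:

  # cat /proc/net/bonding/bond1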
  
