[Experience Sharing] Setting DPDK+OVS+QEMU on CentOS

Posted on 2017-6-25 07:22:01
  Environment build steps:
  These packages are needed to build DPDK + OVS:



yum install -y make gcc glibc.i686 libgcc.i686 libstdc++.i686 glibc-devel.i686 libc6-dev-i386 glibc-devel.x86_64 libc6-dev clang autoconf automake libtool cmake python m4 openssl git libpcap-devel pciutils numactl-devel kernel-devel
  First, download the latest DPDK and OVS: http://dpdk.org/download
  git clone https://github.com/openvswitch/ovs
  Unzip these files.
  Then configure and compile DPDK. In config/common_linuxapp, add the following lines:



CONFIG_RTE_LIBRTE_PMD_PCAP=y
CONFIG_RTE_LIBRTE_PMD_RING=y
  DPDK:
  First, note that when a port is added with ovs-vsctl add-port, it is bound to a NIC automatically, by order, so the name dpdk0 should not be changed: the n in dpdk[n] is the port's index among the DPDK-bound NICs.
  The same applies to dpdkvhostuser[n], dpdkr[n], and so on.
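The NIC order that dpdk0, dpdk1, ... will follow can be checked before adding ports. A sketch using the binding script shipped with DPDK 16.04 (assumes $RTE_SDK points at the DPDK source tree, as set further below):

```shell
# List NIC driver bindings; the devices under "Network devices using
# DPDK-compatible driver" appear in the order dpdk0, dpdk1, ... will map to.
cd "$RTE_SDK"                    # e.g. /root/dpdk-16.04
./tools/dpdk_nic_bind.py --status
```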


  • Before starting, the machine must support VT-d and hardware virtualization (vmx on Intel, svm on AMD), and the following line must be added to /etc/fstab (1 GB pages are used here rather than 2 MB because fewer, larger pages mean fewer TLB entries for DPDK's large memory pools):


    nodev /mnt/huge_1GB hugetlbfs pagesize=1GB 0 0
  • Add the kernel boot parameters default_hugepagesz=1GB hugepagesz=1GB hugepages=5 (reserves 5 GB of RAM) by modifying /boot/grub2/grub.cfg, then reboot.
  • Make sure these prerequisites are met: make, gcc, gcc-multilib (glibc.i686, libgcc.i686, libstdc++.i686, and glibc-devel.i686 / libc6-dev-i386; glibc-devel.x86_64 / libc6-dev).
  • (It is quite possible this problem is still unfixed; even v16.04 needs this patch. The patch no. is 945.) Patch usage: patch -p1 < *.patch (for more, see man patch).
  • DPDK can be compiled with its ./tools/setup.sh script, but that is not suggested here since it hides the details; the manual build breaks down as follows:

    • make install T=x86_64-native-linuxapp-gcc
    • cd x86_64-native-linuxapp-gcc
    • vi .config (optionally switch the library output to static libs here; otherwise the paths to the resulting .so files must be set, e.g. via LD_LIBRARY_PATH)
    • make (if .config needs to be changed, make clean && make will do)
    • done.

  • Then set the environment variables:



    export RTE_SDK=/root/dpdk-16.04
    export RTE_TARGET=x86_64-....
  • Use ./tools/setup.sh again to insmod the igb_uio kernel module, then create hugepages, then bind the additional NIC to igb_uio.
  • *Test with the bundled test programs.
  • ready to use
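The setup.sh step above can also be done by hand. A minimal sketch, assuming the DPDK 16.04 layout ($RTE_SDK/$RTE_TARGET as set earlier) and a NIC at PCI address 0000:02:00.0, which is a placeholder you must replace with your own device's address:

```shell
# Load the UIO framework, then the igb_uio module built with DPDK
modprobe uio
insmod "$RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko"

# Mount the 1 GB hugepages (the /etc/fstab entry above makes this persistent)
mkdir -p /mnt/huge_1GB
mount -t hugetlbfs nodev /mnt/huge_1GB -o pagesize=1GB

# Unbind the NIC from its kernel driver and hand it to igb_uio
# (0000:02:00.0 is an example address; find yours with --status)
"$RTE_SDK"/tools/dpdk_nic_bind.py --bind=igb_uio 0000:02:00.0
```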
  OVS installation:


  • Set the environment variables for OVS:


    export DPDK_DIR=$HOME/dpdk-16.04
    export DPDK_TARGET=x86_64-native-linuxapp-gcc
    export DPDK_BUILD=$DPDK_DIR/$DPDK_TARGET
    export OVS_DIR=$HOME/ovs
    export VM_NAME=Centos-vm
    export GUEST_MEM=1024M
    export QCOW2_IMAGE=/root/CentOS7_x86_64.qcow2
    export VHOST_SOCK_DIR=/usr/local/var/run/openvswitch
    export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
  • yum install clang autoconf automake libtool
  • ./boot.sh
  • ./configure --with-dpdk=$DPDK_BUILD
  • make install
  • installation finished
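A quick sanity check that the install landed where the later steps expect it (exact version output varies by OVS release):

```shell
# The binaries should be under /usr/local after `make install`;
# the version banner confirms which build is on the PATH.
/usr/local/sbin/ovs-vswitchd --version
/usr/local/bin/ovs-vsctl --version
```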
  OVS startup:


  • *If OVS was started before, clean up first:


    rm -f /usr/local/etc/openvswitch/conf.db /usr/local/var/run/openvswitch/db.sock
    ovs-appctl -t ovsdb-server exit
    ovs-appctl -t ovs-vswitchd exit
  • source ./setup.sh, whose content is as follows (start the DB server and OVS, and add ports for QEMU to use). Note that some machines have two NUMA nodes; if the DPDK ports are on the second node, the second socket-mem value must be set larger than 0, usually 1024.


    ovsdb-tool create /usr/local/etc/openvswitch/conf.db  \
    /usr/local/share/openvswitch/vswitch.ovsschema
    ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
    --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
    --pidfile --detach
    ovs-vsctl --no-wait init
    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-hugepage-dir=/mnt/huge_1GB
    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024,0"
    ovs-vswitchd unix:$DB_SOCK --pidfile --detach
    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=6
    ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
    ovs-vsctl add-bond br0 dpdkbond dpdk0 dpdk1 -- set Interface dpdk0 type=dpdk -- set Interface dpdk1 type=dpdk
    ovs-vsctl add-port br0 dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser
    ovs-vsctl add-port br0 dpdkvhostuser1 -- set Interface dpdkvhostuser1 type=dpdkvhostuser
  • done
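The pmd-cpu-mask=6 above is a CPU bitmask, not a core count: bit n set means PMD threads may run on core n, so 6 (binary 110) pins them to cores 1 and 2. The arithmetic can be sketched as:

```shell
# Build an OVS pmd-cpu-mask from a list of core IDs.
cores="1 2"
mask=0
for c in $cores; do
  mask=$(( mask | (1 << c) ))   # set bit c for each chosen core
done
printf 'pmd-cpu-mask=0x%x\n' "$mask"   # cores 1 and 2 -> 0x6
```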
  QEMU:


  • Before starting QEMU, a virtual tap/tun device must be created and bridged with the NIC; reference steps:


    yum install bridge-utils               # provides the brctl bridge-management command
    ip link set eno16777736 down           # the NIC connected to the outside network
    brctl addbr br1                        # create a bridge joining the outer NIC and the inner interfaces, letting the guest reach the network
    brctl addif br1 eno16777736
    ip link set dev br1 promisc on         # promiscuous mode on; with it, both the tap and the NIC keep working
    ip link set dev eno16777736 promisc on
    dhclient br1                           # assign an IP address to the bridge; if something goes wrong, make sure dhclient is not already running in the background
    ip link set dev br1 up
    ip link set dev eno16777736 up
    ip tuntap add mode tap tap0
    ip link set dev tap0 promisc on
  • start qemu vm by:


    qemu-system-x86_64 -name $VM_NAME -cpu host -enable-kvm -m $GUEST_MEM \
      -object memory-backend-file,id=mem,size=$GUEST_MEM,mem-path=/mnt/huge_1GB,share=on \
      -numa node,memdev=mem -mem-prealloc -smp sockets=1,cores=2 \
      -drive file=$QCOW2_IMAGE \
      -chardev socket,id=char0,path=$VHOST_SOCK_DIR/dpdkvhostuser0 \
      -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
      -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1,mrg_rxbuf=off \
      -chardev socket,id=char1,path=$VHOST_SOCK_DIR/dpdkvhostuser1 \
      -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
      -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2,mrg_rxbuf=off \
      -net nic,macaddr=00:00:00:00:00:21 \
      -net tap,ifname=tap0,script=no,downscript=no \
      -nographic -snapshot
  • To enable connection between the dpdkvhostuser ports, first set bridge br0 up by: (may not be necessary)


    ip link set dev br0 up
  • set the ip inside the VMs, by:


    ip addr add 192.168.6.xx[n]/24 dev eth0
    ip link set eth0 up
    ip route add default via 192.168.6.1   # this gateway can be found by running traceroute www.baidu.com on the outer machine; the first-hop router is the gateway
  • Done; now host and guest can ping each other, but ssh (SecureCRT) into the VM still fails. This was fixed by modifying the sshd configuration and restarting the sshd service.
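The original does not say which sshd settings were changed. A plausible sketch, assuming the failure was root/password login being disabled by default; the keywords are standard OpenSSH sshd_config options, but verify them against your image before relying on this:

```shell
# Enable root and password logins in /etc/ssh/sshd_config, then restart sshd.
sed -i -e 's/^#\?PermitRootLogin .*/PermitRootLogin yes/' \
       -e 's/^#\?PasswordAuthentication .*/PasswordAuthentication yes/' \
       /etc/ssh/sshd_config
systemctl restart sshd
```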
  Note: logs created by any OVS program can be viewed using journalctl:



journalctl -t ovs-vswitchd
  done
