DC/OS / DCOS_OSS-1529

Missing MTU configuration for VTEP1024

    Details

      Description

      Hi, team.
       
      We're testing DC/OS 1.9.2 on a 10G network, and we decided to raise the network MTU for performance. After changing the hosts' NIC MTU, we edited the overlay_mtu property in /opt/mesosphere/etc/overlay/config/agent.json and agent-master.json.
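
      For reference, the change was roughly the sketch below in both files (only the key we touched is shown, the rest of the file is elided, and the value is just what matched our d-dev bridge):

          {
            ...,
            "overlay_mtu": 8920
          }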
       
      After dcos-mesos-slave was restarted, we noticed that vtep1024's MTU was still 1500, and we could not get the full bandwidth of the 10G network (about 6-7 Gbps). So we set vtep1024's MTU manually and got nearly the full 10G speed (about 9.6 Gbps).
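
      The manual workaround was essentially the iproute2 one-liner below, matching the VTEP to the physical NIC's MTU (the value is illustrative; it should be whatever the underlay supports):

          ip link set dev vtep1024 mtu 9000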
       
      I think the dcos-overlay module needs a configuration option for the VTEP's MTU, and when navstar creates the VTEP it should set the MTU to this value.
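
      To illustrate the request, something along the lines of the sketch below in agent.json / agent-master.json would cover our case. Note that the vtep_mtu key does not exist today; it is only a suggested name, and both values are illustrative:

          {
            ...,
            "overlay_mtu": 8920,
            "vtep_mtu": 9000
          }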
       
      The DC/OS overlay module's endpoint (http://private:5051/overlay-agent/overlay) returns:

      {"ip":"192.10.250.234","overlays":[{"backend":{"vxlan":{"vni":1024,"vtep_ip":"44.128.0.2\/20","vtep_mac":"70:b3:d5:80:00:02","vtep_name":"vtep1024"}},"docker_bridge":{"ip":"9.0.6.0\/23","name":"d-dev"},"info":{"name":"dev","prefix":22,"subnet":"9.0.0.0\/8"},"mesos_bridge":{"ip":"9.0.4.0\/23","name":"m-dev"},"state":{"status":"STATUS_OK"},"subnet":"9.0.4.0\/22"},{"backend":{"vxlan":{"vni":1024,"vtep_ip":"44.128.0.2\/20","vtep_mac":"70:b3:d5:80:00:02","vtep_name":"vtep1024"}},"docker_bridge":{"ip":"10.0.6.0\/23","name":"d-prod"},"info":{"name":"prod","prefix":22,"subnet":"10.0.0.0\/8"},"mesos_bridge":{"ip":"10.0.4.0\/23","name":"m-prod"},"state":{"status":"STATUS_OK"},"subnet":"10.0.4.0\/22"}]}
      

      I think the DC/OS config option "dcos_overlay_mtu" cannot simply be reused for vtep1024: vtep1024's MTU always has to be greater than "dcos_overlay_mtu" (by more than 54 bytes?) to account for the VXLAN overhead. Our current NICs look like this:
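
      For context on the overhead figure: VXLAN over IPv4 adds roughly 50 bytes of encapsulation per packet (inner Ethernet 14 + outer IPv4 20 + UDP 8 + VXLAN 8), or about 54 bytes if the underlay adds an 802.1Q tag. With a 9000-byte MTU on the physical NIC, that leaves about 9000 - 50 = 8950 bytes for the inner packet, so the MTU used for overlay traffic has to sit 50+ bytes below the underlay MTU, which is presumably why one value cannot be applied verbatim to both the overlay bridges and the VTEP.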

      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host 
             valid_lft forever preferred_lft forever
      10: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 9000 qdisc noqueue state DOWN 
          link/ether 02:42:52:7a:b1:eb brd ff:ff:ff:ff:ff:ff
          inet 172.17.0.1/16 scope global docker0
             valid_lft forever preferred_lft forever
          inet6 fe80::42:52ff:fe7a:b1eb/64 scope link 
             valid_lft forever preferred_lft forever
      13: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN qlen 1000
          link/ether 2a:b8:c8:fd:04:9f brd ff:ff:ff:ff:ff:ff
      14: spartan: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
          link/ether 36:d9:13:be:bc:be brd ff:ff:ff:ff:ff:ff
          inet 198.51.100.1/32 scope global spartan
             valid_lft forever preferred_lft forever
          inet 198.51.100.2/32 scope global spartan
             valid_lft forever preferred_lft forever
          inet 198.51.100.3/32 scope global spartan
             valid_lft forever preferred_lft forever
          inet6 fe80::34d9:13ff:febe:bcbe/64 scope link 
             valid_lft forever preferred_lft forever
      15: minuteman: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
          link/ether 86:98:55:dd:8a:a5 brd ff:ff:ff:ff:ff:ff
          inet6 fe80::8498:55ff:fedd:8aa5/64 scope link 
             valid_lft forever preferred_lft forever
      16: d-prod: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
          link/ether 02:42:2a:74:62:1d brd ff:ff:ff:ff:ff:ff
          inet 10.0.6.1/23 scope global d-prod
             valid_lft forever preferred_lft forever
      17: d-dev: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8920 qdisc noqueue state UP 
          link/ether 02:42:6b:05:df:ac brd ff:ff:ff:ff:ff:ff
          inet 9.0.6.1/23 scope global d-dev
             valid_lft forever preferred_lft forever
          inet6 fe80::42:6bff:fe05:dfac/64 scope link 
             valid_lft forever preferred_lft forever
      18: vtep1024: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UNKNOWN qlen 1000
          link/ether 70:b3:d5:80:00:02 brd ff:ff:ff:ff:ff:ff
          inet 44.128.0.2/20 scope global vtep1024
             valid_lft forever preferred_lft forever
      20: veth49dc8e8@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8920 qdisc noqueue master d-dev state UP 
          link/ether e6:1e:dc:b9:91:9c brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet6 fe80::e41e:dcff:feb9:919c/64 scope link 
             valid_lft forever preferred_lft forever
      41: p5p1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
          link/ether a0:36:9f:e9:9c:a0 brd ff:ff:ff:ff:ff:ff
      42: p5p2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP qlen 20000
          link/ether a0:36:9f:e9:9c:a2 brd ff:ff:ff:ff:ff:ff
          inet 192.10.250.234/24 brd 192.10.250.255 scope global p5p2
             valid_lft forever preferred_lft forever
          inet6 fe80::975:6097:51e6:4465/64 scope link 
             valid_lft forever preferred_lft forever
      68: veth7d9ba1a@if67: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8920 qdisc noqueue master d-dev state UP 
          link/ether a6:5e:74:58:b4:a0 brd ff:ff:ff:ff:ff:ff link-netnsid 10
          inet6 fe80::a45e:74ff:fe58:b4a0/64 scope link 
             valid_lft forever preferred_lft forever
      


            People

            • Assignee: Sergey Urbanovich (Inactive)
            • Reporter: minyk
            • Team: DELETE Networking Team
            • Watchers: Deepak Goel, minyk, Sergey Urbanovich (Inactive)

