Ubuntu Bonding vs PPPoE multilink (2/3)
In this post we will measure the performance of balance-rr bonding over two Gigabit Ethernet connections.
This is the second article in a series; you can find the first article here and the last one here.
The overall configuration diagram is shown here:
In this test we will use bonding interfaces with the round-robin (balance-rr) algorithm between Orca and NAS2.
First let's set up the bonding interfaces (the member NICs have to be down before they can be enslaved):
sudo ip link add bond0 type bond mode balance-rr
sudo ip link set enp4s0 down
sudo ip link set enp5s0 down
sudo ip link set enp4s0 master bond0
sudo ip link set enp5s0 master bond0
sudo ip link set bond0 up
sudo ip addr add 192.168.190.11/24 dev bond0
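Note that these ip commands do not survive a reboot; on Ubuntu the same bond can be made persistent with netplan. A minimal sketch, assuming the same interface names and address (not the exact configuration used here), applied with sudo netplan apply:
network:
  version: 2
  ethernets:
    enp4s0:
      dhcp4: no
    enp5s0:
      dhcp4: no
  bonds:
    bond0:
      interfaces: [enp4s0, enp5s0]
      parameters:
        mode: balance-rr
      addresses: [192.168.190.11/24]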
After doing this on both sides, we can check if the connection is up and running:
ping 192.168.190.10
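The state of the bond itself can also be checked through the bonding driver's status file, which shows the active mode and the enslaved interfaces:
cat /proc/net/bonding/bond0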
If everything is OK, let's run the iperf3 tests against NAS2.
Computer  To (Mbit/s)  From (Mbit/s)  To (MB/s)  From (MB/s)
Orca 1360 1290 170 161
Boss 771 939 96 117
Blue 170 939 21 117
Backup 1300 1290 162 161
VM Windows 414 1110 52 139
VM Ubuntu 1330 1010 166 126
We can see that the results are not close to 2 Gbit/s; the best results are achieved on the Linux machines. On the Windows machines sending is very slow, most probably because their TCP congestion control reacts badly to the packet reordering that balance-rr introduces. It is also worth noting that on Boss the sending direction is slower than the speed measured when there was only a single connection between Orca and NAS2.
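On the Linux side we can check which congestion control algorithm is in use and how much reordering the kernel actually sees; a rough sketch (counter names vary somewhat between kernel versions):
# congestion control algorithm used for new TCP connections
sysctl net.ipv4.tcp_congestion_control
# TCP reordering / out-of-order counters (balance-rr tends to reorder packets)
nstat -az | grep -iE 'reord|ofo'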
Now let's do the Samba testing by mounting the share and running IOzone or CrystalDiskMark on it.
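On a Linux client this looks roughly like the following (the share name, mount point, credentials and IOzone parameters are placeholders, not the exact ones used for these numbers):
# mount the NAS2 share over SMB
sudo mkdir -p /mnt/nas2
sudo mount -t cifs //192.168.190.10/share /mnt/nas2 -o username=user,vers=3.0
# sequential write (-i 0) and read (-i 1) test with a 1 MB record size on a 4 GB file
iozone -i 0 -i 1 -r 1m -s 4g -f /mnt/nas2/iozone.tmp
The measured speeds: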
Computer  To (MB/s)  From (MB/s)
Orca 134 126
Boss 76 113
Blue 20 96
Backup 143 141
VM Windows 48 105
VM Ubuntu 114 126
The results are similar: the Windows computers can only write very slowly, while the Linux machines see some benefit.