I am emulating a C7200 with two interfaces connected to tap1 and tap11. On one side is a KVM virtual machine, and on the other is the host and the outside world.
When I send a burst of packets from the outside through the router to the VM, I find they get drip-fed out at intervals of almost exactly 10ms. This seems suspicious: it suggests there is something in the code which is rate-limiting, or forwarding only one packet per timer tick.
Setup:
Code:
KVM VM --< [vnet0] [br1] [tap1] >-- DYNAMIPS --< [tap11] [br-lan] [eth0] >--- outside
Here's what I see using tcpdump when sending a burst of packets through:
Code:
** tap11 (outside) **
15:30:57.709432 IP 10.10.0.241.53366 > 10.10.1.1.1234: UDP, length 12
15:30:57.709457 IP 10.10.0.241.53366 > 10.10.1.1.1234: UDP, length 12
15:30:57.709466 IP 10.10.0.241.53366 > 10.10.1.1.1234: UDP, length 12
15:30:57.709474 IP 10.10.0.241.53366 > 10.10.1.1.1234: UDP, length 12
...
** tap1 (inside) **
15:30:57.717011 IP 10.10.0.241.53366 > 10.10.1.1.1234: UDP, length 12
15:30:57.727148 IP 10.10.0.241.53366 > 10.10.1.1.1234: UDP, length 12
15:30:57.737780 IP 10.10.0.241.53366 > 10.10.1.1.1234: UDP, length 12
15:30:57.748004 IP 10.10.0.241.53366 > 10.10.1.1.1234: UDP, length 12
....
Now, after digging through the code, I found that there is a ptask_sleep_time with a 10ms default. So I tried changing this to 5ms:
Code:
--- a/common/dynamips.c
+++ b/common/dynamips.c
@@ -942,7 +942,7 @@ int main(int argc,char *argv[])
    create_log_file();

    /* Periodic tasks initialization */
-   if (ptask_init(0) == -1)
+   if (ptask_init(5) == -1)
       exit(EXIT_FAILURE);

    /* Create instruction lookup tables */
And hey presto, I now get my packets at intervals of 5ms instead of 10ms:
Code:
** tap11 (outside) **
15:22:03.288594 IP 10.10.0.241.35998 > 10.10.1.1.1234: UDP, length 12
15:22:03.288618 IP 10.10.0.241.35998 > 10.10.1.1.1234: UDP, length 12
15:22:03.288624 IP 10.10.0.241.35998 > 10.10.1.1.1234: UDP, length 12
15:22:03.288637 IP 10.10.0.241.35998 > 10.10.1.1.1234: UDP, length 12
15:22:03.288643 IP 10.10.0.241.35998 > 10.10.1.1.1234: UDP, length 12
15:22:03.288648 IP 10.10.0.241.35998 > 10.10.1.1.1234: UDP, length 12
...
** tap1 (inside) **
15:22:03.294211 IP 10.10.0.241.35998 > 10.10.1.1.1234: UDP, length 12
15:22:03.299343 IP 10.10.0.241.35998 > 10.10.1.1.1234: UDP, length 12
15:22:03.304553 IP 10.10.0.241.35998 > 10.10.1.1.1234: UDP, length 12
15:22:03.309883 IP 10.10.0.241.35998 > 10.10.1.1.1234: UDP, length 12
15:22:03.315210 IP 10.10.0.241.35998 > 10.10.1.1.1234: UDP, length 12
15:22:03.320466 IP 10.10.0.241.35998 > 10.10.1.1.1234: UDP, length 12
...
This implies there is definitely scope for improving performance: if multiple packets are waiting in the same tick, perhaps they could all be processed in one pass?
The reason this matters to me: I use dynamips for teaching labs in which the students have a VM behind the dynamips virtual router (which they configure with things like SNMP, NetFlow, etc.), and all their downloads go through the dynamips router. As a result, they are limited to a throughput of around 140KB/sec, shared between all the VMs behind the same dynamips router, or less when smaller packets such as SNMP are also going back and forth.
Platform: Ubuntu 12.04 (64-bit)
Dynamips version: I see the same with dynamips 0.2.12 from the PPA, and with dynamips-stable built from git (cmake .; make; stable/dynamips -H 7200). The latter is the one I tweaked as above.
I couldn't get dynamips-unstable to work at all: it just ate all the CPU resources it could get, and I got no console messages from the virtual routers.
Additional info:
Code:
=> ver
Dynagen version 0.11.0
hypervisor version(s):
dynamips at s1.ws.nsrc.org:7200 has version 0.2.13-dev-amd64
=> show device r1
Router r1 is running
Hardware is dynamips emulated Cisco 7206VXR NPE-400 with 176 MB RAM
Router's hypervisor runs on s1.ws.nsrc.org:7200, console is on port 2101
Image is shared c7200-1514M4.bin-s1.ws.nsrc.org.ghost with idle-pc value of 0x60608f64
Idle-max value is 1500, idlesleep is 30 ms
128 KB NVRAM, 64 MB disk0 size, 0 MB disk1 size
slot 0 hardware is C7200-IO-2FE with 2 interfaces
FastEthernet0/0 is connected to real TAP tap11 interface
FastEthernet0/1 is connected to real TAP tap1 interface
Is this a known issue, or are there any plans for improvements in this area?
Thanks,
Brian.
P.S. Here is the test program I use to send a burst of UDP packets. Note that "ping -f" is no good here because it too enforces an interval of 10ms per packet when it is sending but not receiving any replies.
Code:
/* Send a burst of 20 small UDP packets back-to-back.
 * Build: gcc -o udpburst udpburst.c
 * Usage: ./udpburst 10.10.1.1        */
#include <sys/socket.h>
#include <netinet/in.h>
#include <sys/types.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>

int main(int argc, char *argv[])
{
    char buf[] = "hello world";   /* 12 bytes including the trailing NUL */
    int i;
    struct sockaddr_in sa;
    int fd;

    if (argc < 2) {
        fprintf(stderr, "Usage: %s addr\n", argv[0]);
        return 1;
    }

    fd = socket(PF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(1234);
    /* inet_aton() returns 0 on failure and does not set errno */
    if (inet_aton(argv[1], &sa.sin_addr) == 0) {
        fprintf(stderr, "inet_aton: invalid address '%s'\n", argv[1]);
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        perror("connect"); return 1;
    }

    for (i = 0; i < 20; i++) {
        if (send(fd, buf, sizeof(buf), 0) <= 0) { perror("send"); return 1; }
    }
    return 0;
}