July 2014 Archives

UDP flow control and asynchronous writes


I don't believe that UDP should require any flow control in the sending application. After all, it's unreliable, and it should be quite OK for any stage of the route from one peer to another to decide to drop a datagram for any reason. However, it seems that, on Windows at least, no datagrams will be dropped between the application and the network interface card (NIC) driver, no matter how heavily you load the system.

Unfortunately most NIC drivers also prefer not to drop datagrams, even if they're overloaded (see here for details of how UDP checksum offloading can cause a NIC to make non-paged pool usage considerably higher). This can lead to situations where a user mode application can bring a box down to non-paged pool exhaustion simply by sending as many datagrams as it can, as fast as it can. It's likely that poorly implemented device drivers are actually at fault here, by failing to handle gracefully the situations where non-paged pool allocations fail; but it's the application that puts these drivers into a situation where they can fail so catastrophically.
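To make the failure mode concrete, here's a minimal sketch (not code from The Server Framework, and deliberately simplified) of an overlapped UDP send loop that never waits for completions. Every WSASendTo() that returns WSA_IO_PENDING keeps its buffer pinned in kernel memory until the driver completes the send, so if the NIC can't keep up, the pending sends simply pile up:

// A deliberately pathological sender: posts overlapped UDP sends
// as fast as it can and never reaps completions. The target
// address and port are placeholders.

#include <winsock2.h>
#include <ws2tcpip.h>
#pragma comment(lib, "ws2_32.lib")

int main()
{
   WSADATA wsaData;
   WSAStartup(MAKEWORD(2, 2), &wsaData);

   SOCKET s = WSASocket(AF_INET, SOCK_DGRAM, IPPROTO_UDP, 0, 0, WSA_FLAG_OVERLAPPED);

   sockaddr_in addr {};
   addr.sin_family = AF_INET;
   addr.sin_port = htons(5050);
   InetPtonA(AF_INET, "192.168.0.1", &addr.sin_addr);

   for (;;)
   {
      // Each iteration allocates a buffer and an OVERLAPPED that
      // stay alive (and pinned by the kernel) until the send
      // completes. Nothing here waits for completions, so slow
      // completion by the NIC driver means ever-growing
      // non-paged pool usage.
      char *buffer = new char[1024];
      WSABUF buf { 1024, buffer };
      OVERLAPPED *overlapped = new OVERLAPPED {};

      WSASendTo(s, &buf, 1, 0, 0,
                reinterpret_cast<sockaddr *>(&addr), sizeof(addr),
                overlapped, 0);

      // Error checking and completion handling (via an I/O
      // completion port) omitted for brevity; that's also where
      // the buffer and OVERLAPPED would be released.
   }
}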

Since the NIC driver and the operating system will not drop datagrams, it's down to the application itself to do so if it senses that it's overloading the NIC. I've recently added code to The Server Framework that lets you configure this kind of behaviour, so that an application can prevent itself from exhausting non-paged pool due to pending datagram writes.
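The framework's actual API is more involved than this, but the idea is simple enough to sketch. The class below is an illustration of the technique, with made-up names rather than the framework's real interface: it counts pending sends and drops datagrams at the application level once a configurable limit is reached, which is a perfectly reasonable thing to do with an unreliable protocol.

// Hypothetical send-side throttle; names and limits are
// illustrative, not The Server Framework's actual API.

#include <atomic>

class DatagramSendThrottle
{
public:
   explicit DatagramSendThrottle(long maxPendingSends)
      : m_maxPendingSends(maxPendingSends), m_pendingSends(0)
   {
   }

   // Call before issuing an overlapped send; returns false if the
   // datagram should be dropped because too many sends are
   // already pending.
   bool BeginSend()
   {
      if (m_pendingSends.fetch_add(1) >= m_maxPendingSends)
      {
         m_pendingSends.fetch_sub(1);

         // Drop at the application level rather than letting the
         // write pile up in non-paged pool.
         return false;
      }
      return true;
   }

   // Call from the I/O completion handler when a send finishes.
   void EndSend()
   {
      m_pendingSends.fetch_sub(1);
   }

private:
   const long m_maxPendingSends;
   std::atomic<long> m_pendingSends;
};

Dropping isn't the only option, of course; you could equally block the sender, or queue the datagram in pageable user-mode memory until some pending sends complete. The important thing is that the limit is enforced before the write reaches the driver, so non-paged pool usage stays bounded no matter how fast the application tries to send.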