I know that UDP is usually recommended for real-time multiplayer games with high data usage.
Most articles on the subject are several years old, and since ~80% of all data transmitted on the internet is TCP, a lot of optimization must have been done for TCP.
This makes me wonder: is UDP still superior in terms of speed and latency? Could recent TCP optimizations have made TCP perform better than UDP?
Answer
No. UDP is still superior in terms of latency, and will always be faster, because of the differing philosophies of the two protocols - assuming your communication data was designed with UDP, or any other lossy channel, in mind.
TCP creates an abstraction in which all network packets arrive, and arrive in the exact order in which they were sent. To implement such an abstraction on a lossy channel, it must perform retransmissions and timeouts, which consume time. If you send two updates over TCP, and a packet of the first update gets lost, you will not see the second update until:
- The loss of the first update is detected.
- A retransmission of the first update is requested.
- The retransmission has arrived and been processed.
It doesn't matter how fast this is done in TCP, because with UDP you simply discard the first update and use the second, newer one, right now. Unlike TCP, UDP does not guarantee that all packets arrive and it does not guarantee that they arrive in order.
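To make that concrete, here is a minimal sketch of the "newest update wins" receive loop in Python (a language choice of mine, not from the original answer). The port number and the sequence-number wire format are assumptions for illustration:

```python
import socket
import struct

# Assumed wire format: a 4-byte big-endian sequence number, then the payload.
SEQ_HEADER = struct.Struct("!I")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))  # placeholder port

latest_seq = -1
while True:
    packet, addr = sock.recvfrom(2048)
    if len(packet) < SEQ_HEADER.size:
        continue  # malformed datagram
    (seq,) = SEQ_HEADER.unpack_from(packet)
    if seq <= latest_seq:
        continue  # stale or duplicate update: a newer one already arrived, discard it
    latest_seq = seq
    payload = packet[SEQ_HEADER.size:]
    # apply_update(payload) would hand the newest state to the game here
```

Note that this sketch ignores 32-bit sequence wraparound; a real game would compare sequence numbers modulo 2^32.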
This requires you to send the right kind of data, and design your communication in such a way that losing data is acceptable.
If you have data where every packet must arrive, and the packets must be processed by your game in the order they were sent, then UDP will not be faster. In fact, using UDP in this case would likely be slower, because you would be reconstructing TCP on top of UDP - in which case you might as well use TCP.
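One common way to make loss acceptable is to send absolute state instead of deltas, so that any lost packet is simply superseded by the next one. A hypothetical packet layout (the fields and format are illustrative, not from the original post):

```python
import struct

# Illustrative absolute-state layout: sequence number plus full position/orientation.
# Because every packet carries the complete state rather than a delta,
# a lost packet is harmless: the next packet fully replaces it.
STATE = struct.Struct("!Ifff")  # seq, x, y, yaw

def encode_state(seq: int, x: float, y: float, yaw: float) -> bytes:
    return STATE.pack(seq, x, y, yaw)

def decode_state(packet: bytes):
    return STATE.unpack_from(packet)  # -> (seq, x, y, yaw)
```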
EDIT - Adding some additional information to incorporate/address some of the comments:
Normally, the packet loss rate on Ethernet is very low, but it becomes much higher once WiFi is involved or if the user has an upload or download in progress. Let's assume a perfectly uniform packet loss of 0.01% (one way, not round-trip). In a first-person shooter, clients should send updates whenever something happens, such as when a mouse movement turns the player, which happens about 20 times per second. They could also send updates per frame or on a fixed interval, which would be 60-120 updates per second. Since these updates are generated at different times, they will/should be sent as one packet per update. In a 16-player game, all 16 players send these 20-120 packets per second to the server, for a total of 320-1920 packets per second. With our packet loss rate of 0.01%, we expect to lose a packet every 5.2-31.25 seconds. In this example we ignore the packets sent from the server to the players, for simplicity.
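The expected loss interval can be checked with a few lines (a sketch reproducing the arithmetic above, assuming the stated loss rate and packet rates):

```python
# With a uniform one-way loss rate of 0.01%, one packet in 10,000 is lost on average.
loss_rate = 0.0001
for updates_per_player in (20, 120):
    packets_per_second = 16 * updates_per_player  # 16 players, client -> server only
    seconds_per_loss = 1 / (packets_per_second * loss_rate)
    print(f"{packets_per_second} pkt/s -> one loss every {seconds_per_loss:.2f} s")
# 320 pkt/s  -> one loss every 31.25 s
# 1920 pkt/s -> one loss every 5.21 s
```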
On every packet we receive after the lost packet, we'll send a DupAck (duplicate acknowledgement), and after the third DupAck the sender will retransmit the lost packet. So the time TCP needs to initiate the retransmit is 3 packets, plus the time it takes for the last DupAck to reach the sender. Then we need to wait for the retransmission to arrive, so in total we wait 3 packets plus 1 round-trip latency. The round-trip latency is typically 0-1 ms on a local network and 50-200 ms on the internet. 3 packets will typically arrive in 25 ms if we send 120 packets per second, and in 150 ms if we send 20 packets per second.
In contrast, with UDP we recover from a lost packet as soon as we get the next packet, so we lose 8.3 ms if we send 120 packets per second, and 50 ms if we send 20 packets per second.
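Putting both recovery times side by side (a sketch of the arithmetic above, assuming fast retransmit after 3 DupAcks and the example RTT figures from the text):

```python
# Compare how long each protocol stalls after a single lost update packet.
for rate in (20, 120):
    interval_ms = 1000 / rate
    udp_recovery = interval_ms  # the very next packet replaces the lost one
    for rtt_ms in (1, 50, 200):
        # 3 packet intervals to trigger the DupAcks, plus one round trip
        tcp_recovery = 3 * interval_ms + rtt_ms
        print(f"{rate} pkt/s, RTT {rtt_ms} ms: "
              f"TCP ~{tcp_recovery:.1f} ms, UDP ~{udp_recovery:.1f} ms")
```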
With TCP things get messier if we also have to consider Nagle's algorithm (if the developer forgets to turn off send coalescing, or can't disable delayed ACK), congestion avoidance, or packet loss bad enough that we must account for multiple lost packets (including lost ACKs and DupAcks). With UDP we can easily write faster code, because quite simply we don't have to care about being a good network citizen the way TCP does.
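If you do end up on TCP, disabling Nagle's algorithm is the standard first step. TCP_NODELAY is a real, portable socket option; the host and port here are placeholders:

```python
import socket

# Turn off send coalescing (Nagle's algorithm) so small, frequent game
# updates are sent immediately instead of being buffered.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
sock.connect(("game.example.com", 7777))  # placeholder server
```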