C sockets client/server lag

I'm programming a C/C++ client/server sockets application. At this point, the client connects to the server every 50 ms and sends a message.

Everything seems to work, but the data flow is not continuous: suddenly the server stops receiving anything, and then gets 5 messages at once... And sometimes everything works...

Does anyone have an idea where this strange behaviour comes from?

Some parts of the code:

Client:

while (true)
{
    if (SDL_GetTicks() - time >= 50)
    {
        socket = new socket();
        socket->write("blah");
        message.clear();
        message = socket->read();
        socket->close();
        delete socket;
        time = SDL_GetTicks();
    }
}

Server:

while (true)
{
    fd_set readfs;
    struct timeval timeout = {0, 0};
    FD_ZERO(&readfs);
    FD_SET(sock, &readfs);
    select(sock + 1, &readfs, NULL, NULL, &timeout);
    if (FD_ISSET(sock, &readfs))
    {
        SOCKADDR_IN csin;
        socklen_t crecsize = sizeof csin;
        SOCKET csock = accept(sock, (SOCKADDR *) &csin, &crecsize);
        sock_err = send(csock, buffer, 32, 0);
        closesocket(csock);
    }
}

Edits:

1. I tried to do

int flag = 1;
setsockopt(socket, IPPROTO_TCP, TCP_NODELAY, (const char *)&flag, sizeof flag);

in both the client and the server, but the problem is still there.

2. Yes, those connections/disconnections are very inefficient, but when I try to write

socket = new socket();
while (true)
{
    if (SDL_GetTicks() - time >= 50)
    {
        socket->write("blah");
        message.clear();
        message = socket->read();
        time = SDL_GetTicks();
    }
}

then the message is only sent (or received) once...

Finally:

I had forgotten to apply TCP_NODELAY to the client socket on the server side. Now it works perfectly! I put the handling in threads so that the sockets stay open. Thank you all :)

Answers


This is what is called the "Nagle delay". The Nagle algorithm makes the TCP stack wait for more data to arrive before actually sending anything to the network, until some timeout expires. So you should either tune the Nagle timeout (http://fourier.su/index.php?topic=249.0) or disable the Nagle delay altogether (http://www.unixguide.net/network/socketfaq/2.16.shtml), so data is sent per send call.


As others have already replied, the delays you see are due to TCP's built-in Nagle algorithm, which can be disabled by setting the TCP_NODELAY socket option.

I would also like to point out that your socket communications are very inefficient due to the constant connects and disconnects. Every time the client connects to the server there's a three-way handshake, and connection tear-down requires four packets to complete. Basically you lose most of the benefits of TCP but incur all of its drawbacks.

It would be much more efficient for each client to maintain a persistent connection to the server. select(2), or even better, epoll(7) on Linux, or kqueue(2) on FreeBSD and Mac, are very convenient frameworks for handling I/O on multiple sockets.


You can use the TCP_NODELAY socket option to force data to be sent immediately.

