Correct way to set the socket send buffer size on Linux?
I have an NIO server that receives small client requests which result in ~1 MB responses. The server uses the following to accept a new client:
SocketChannel clientChannel = server.accept();
clientChannel.configureBlocking(false);
clientChannel.socket().setSendBufferSize(2 * 1024 * 1024);
I then log out a "client connected" line that includes the result of clientChannel.socket().getSendBufferSize().
On Windows, this call changes the client socket's send buffer size from 8 KB to 2 MB. But on Linux, the socket reports a send buffer of 131,071 bytes.
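The clamping can be observed directly by reading the size back after setting it, as the question does. A minimal sketch (the class and method names here are hypothetical, not from the original code):

```java
import java.nio.channels.SocketChannel;

public class SendBufDemo {
    // Requests a send buffer size and returns what the OS actually granted.
    // setSendBufferSize() is only a hint; the kernel may clamp it up or down.
    static int effectiveSendBuffer(int requested) throws Exception {
        try (SocketChannel ch = SocketChannel.open()) {
            ch.socket().setSendBufferSize(requested);
            return ch.socket().getSendBufferSize(); // the effective, possibly clamped, size
        }
    }

    public static void main(String[] args) throws Exception {
        int granted = effectiveSendBuffer(2 * 1024 * 1024);
        System.out.println("requested 2 MiB, granted " + granted + " bytes");
    }
}
```

On a Linux box with default limits, the granted value will typically be far below 2 MiB, matching the 131,071 bytes reported in the question.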
This results in poor performance, as each clientChannel.write() writes only 128 KB at a time, so it takes seven more passes to get all the data written. On Windows, the setSendBufferSize change improved performance significantly.
Linux appears to be configured to allow a large socket send buffer:
$ cat /proc/sys/net/ipv4/tcp_wmem
4096 16384 4194304
The platform is free to adjust the requested buffer size up or down, and that's what Linux appears to be doing. Nothing you can do about that except maybe tune the maxima via kernel configuration.
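One detail worth checking, as an assumption on my part: tcp_wmem governs the kernel's auto-tuning, while an explicit setSendBufferSize() (SO_SNDBUF) request is capped separately by net.core.wmem_max. If that cap is around 128 KB, it would explain the 131,071-byte result. A sketch of inspecting and raising it (requires root):

```shell
# Current cap on explicitly requested send buffers (SO_SNDBUF)
cat /proc/sys/net/core/wmem_max

# Raise the cap to 2 MiB so a setSendBufferSize(2 * 1024 * 1024) request
# can take effect; add to /etc/sysctl.conf to make it persistent
sysctl -w net.core.wmem_max=2097152
```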
Note that my comments in the question you linked about setting buffer sizes above 64 KB apply to the receive buffer, not the send buffer. The receive buffer size affects the TCP window scaling option, which is fixed during the connection handshake, so it needs to be set before the socket is connected.
I don't see why requiring 'more passes' should cause a performance difference large enough to prompt this question. It seems to me you would be better off adjusting the receiver's window size upwards, and doing so prior to connection as above.
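Setting the receive buffer before the handshake, on the server side, means setting it on the listening socket so accepted connections inherit it with window scaling negotiated. A sketch, assuming a hypothetical demo class (not from the original code):

```java
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

public class RecvBufDemo {
    // Requests a large receive buffer on the listening socket BEFORE bind,
    // so the TCP window scale option can be advertised during the handshake
    // for every accepted connection. Returns the granted size.
    static int boundReceiveBuffer(int requested) throws Exception {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.socket().setReceiveBufferSize(requested);
            server.socket().bind(new InetSocketAddress(0)); // ephemeral port
            return server.socket().getReceiveBufferSize();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("granted " + boundReceiveBuffer(2 * 1024 * 1024) + " bytes");
    }
}
```

Setting the receive buffer after accept() would be too late: by then the window scale has already been negotiated (or not) in the handshake.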