Reworked netpool/tcp

Some time ago I wrote a blog post about the TCP socket acceptor pool which I'm developing for an XMPP server, and today I fully reworked it. The main data structures, like Listener and Connection, are still as they were last time, excluding some really little changes in the Listener data structure. I added some API for the acceptor pool, fixed some little bugs, and fully changed the internal structure. I think this post will be useful for newbie Golang developers who want to play with Golang, TCP, and some concurrency. So if you're interested in it, follow this post.

Data structures

There is one main structure in netpool/tcp - Listener. It describes a TCP listener with a pool of socket acceptors. To start a new TCP listener, you must initialize this structure first. It is implemented as:
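The original snippet is not reproduced here, but based on the field list below the structure looks roughly like this sketch; only the field names come from the post, while the concrete types are my assumptions:

```go
package main

import "net"

// Connection is described further down in the post; a minimal stub
// here keeps the sketch self-contained.
type Connection struct {
	Conn net.Conn
	Quit chan bool
}

// Listener describes a TCP listener with a pool of socket acceptors.
type Listener struct {
	Accnb            int                       // acceptors number
	Mc               int                       // maximum number of connections
	Port             int                       // listener port
	Handler          func(string, *Connection) // incoming-message handler
	Lc               chan int                  // listener channel
	OverFlowStrategy int                       // overflow strategy
	Ssl              map[string]string         // SSL options
}
```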

As you can see, it has the following fields:
  • Accnb - acceptors number. If you pass 5, for example, it starts one TCP listener with 5 acceptors
  • Mc - maximum number of connections
  • Port - listener port
  • Handler - function which will handle incoming messages over TCP
  • Lc - listener channel
  • OverFlowStrategy - strategy which describes the listener's behaviour after the maximum number of connections is exceeded
  • Ssl - listener SSL options; just pass an empty map if you don't need it
The fourth parameter - Handler - as I said above, is a function which will handle incoming messages from a client. It has the following type:
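The type itself was not preserved in this post, but from the description it is presumably along these lines (the exact type name is my guess):

```go
package main

// Connection is described just below; stubbed here so the sketch
// stands on its own.
type Connection struct{}

// Handler takes the incoming message and the connection it arrived on.
type Handler func(message string, conn *Connection)
```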

A function with two parameters: the first is the incoming message, and the second is the connection structure:

It describes the current connection and provides a simple API for sending a response, closing the connection, and so on:

The sixth parameter - OverFlowStrategy - can be one of the following:
  • RefuseConnection - the listener will refuse new connections;
  • IncreaseConnection - the listener will be able to handle yet another 128 connections.
That's all the data structures that netpool/tcp provides.

Workflow
The netpool/tcp workflow is pretty simple. It starts one TCP listener and the defined number of connection acceptors. Every acceptor starts in a separate goroutine and waits for new connections:
While accepting, every acceptor gets the current connection count and checks it against the maximum number of connections. If the current connection count is greater than the maximum, the acceptor checks the overflow strategy of the current listener. If it is RefuseConnection, the acceptors will refuse all new connections until some of the currently active connections die. If it is IncreaseConnection, the acceptor will increase the maximum number of connections by another 128 and start a new connection handler. It is implemented as:
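A sketch of that check, assuming the strategies are plain integer constants; the function name checkOverflow is my placeholder:

```go
package main

// Overflow strategies, assumed here to be simple constants.
const (
	RefuseConnection   = iota // refuse new connections over the limit
	IncreaseConnection        // grow the limit by 128 instead
)

// Listener fields relevant to the check, as described earlier.
type Listener struct {
	Mc               int // maximum number of connections
	OverFlowStrategy int
}

// checkOverflow reports whether a new connection may be accepted,
// applying the listener's overflow strategy when over the limit.
func checkOverflow(current int, l *Listener) bool {
	if current <= l.Mc {
		return true // still under the limit
	}
	switch l.OverFlowStrategy {
	case IncreaseConnection:
		l.Mc += 128 // handle yet another 128 connections
		return true
	default: // RefuseConnection
		return false // refuse until active connections die
	}
}
```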

After an acceptor accepts a new connection, it starts a new goroutine for handling this connection. The connection handler goroutine starts an infinite loop and first checks for messages from channels:

There is a select clause for handling multiple channels. The current select clause waits on only one channel, but it is written this way with future channels in mind. It waits for a message from the newConnection.Quit channel; if it receives one, it sends -1 to the main listener loop, which says that one connection was closed, and closes the current connection. If there are no incoming messages from the channel, it waits for incoming messages from the client with:

As I said above, it waits for incoming messages from the client, and if it gets any message, it checks this message. If this message is EOF, which means that the client finished the connection, it does the same as in the previous select clause: it sends information to the main listener loop that one connection was closed, closes the current connection, and returns from the current handler. If there is any message other than EOF, it starts the handler in a goroutine and passes the message and the connection structure to it.

Usage
Using netpool/tcp is pretty simple. You just need to initialize the Listener structure, call startNewTcpConnection with this structure, and write a handler:
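A sketch of what usage might look like; startNewTcpConnection is the name mentioned above, while the stubbed types and the echo handler are my illustration:

```go
package main

// Stubs standing in for the real netpool/tcp definitions, so this
// sketch compiles on its own.
type Connection struct{}

func (c *Connection) Send(response string) {}
func (c *Connection) Close()               {}

const RefuseConnection = 0

type Listener struct {
	Accnb, Mc, Port, OverFlowStrategy int
	Handler                           func(string, *Connection)
	Lc                                chan int
	Ssl                               map[string]string
}

func startNewTcpConnection(l *Listener) {} // provided by netpool/tcp

// handler echoes every incoming message back to the client.
func handler(message string, conn *Connection) {
	conn.Send("you said: " + message)
	conn.Close()
}

func run() {
	listener := &Listener{
		Accnb:            5,    // five acceptors
		Mc:               1024, // up to 1024 connections
		Port:             5222,
		Handler:          handler,
		Lc:               make(chan int),
		OverFlowStrategy: RefuseConnection,
		Ssl:              map[string]string{}, // no SSL
	}
	startNewTcpConnection(listener)
}
```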


That's all. If you're interested in the full source code, you can find it here - netpool/tcp. If you have any questions, suggestions, or anything else, write me a comment or ping me on Twitter - @0xAX

Happy Coding :)
