The C++ framework for developing highly scalable, high performance servers on Windows platforms.

Example Servers - Basic Echo Server

This example shows you how to build the simplest server that you can build. The basic structure of this server is also described in the TCP Socket Server How To document. The server example uses a helper library, ServerCommon, which provides some utility code that many of the example servers use: things such as command line parsing, allocators and server callback implementations that display debug traces to show what they're doing.

This example is shipped with all licensed versions of The Server Framework and it requires the core server framework libraries (see here for licensing options). You can always download the latest version of this example from here; although you will need the correct libraries to be able to build it, you can look at the example code to see how it works and perhaps get ideas from it. A compiled, Unicode, release build of this example is available on request if you require it for performance analysis of the framework.

The ServerMain.cpp file for this example looks something like the following:

 #include "JetByteTools\Admin\Admin.h"

 #include "JetByteTools\SocketTools\WinsockWrapper.h"

 #include "JetByteTools\Win32Tools\Exception.h"
 #include "JetByteTools\Win32Tools\Utils.h"
 #include "JetByteTools\Win32Tools\DebugTrace.h"

 #include "SocketServer.h"

 #include "JetByteTools\SocketTools\FullAddress.h"

 #pragma hdrstop

 #include "JetByteTools\Win32Tools\StringConverter.h"
 #include "JetByteTools\Win32Tools\SEHException.h"

 #include "ServerCommon\IOPool.h"
 #include "ServerCommon\StreamSocketAllocator.h"
 #include "ServerCommon\BufferAllocator.h"
 #include "ServerCommon\CommandLine.h"
 #include "ServerCommon\SimpleServerShutdownHandler.h"

 #include "JetByteTools\IOTools\AsyncFileLog.h"

 #include <iostream>

 ///////////////////////////////////////////////////////////////////////////////
 // Using directives
 ///////////////////////////////////////////////////////////////////////////////

 using JetByteTools::Win32::_tstring;
 using JetByteTools::Win32::CException;
 using JetByteTools::Core::OutputEx;
 using JetByteTools::Win32::CDebugTrace;
 using JetByteTools::Win32::CSEHException;
 using JetByteTools::Win32::CStringConverter;

 using JetByteTools::Socket::CFullAddress;
 using JetByteTools::Socket::CConnectionLimiter;
 using JetByteTools::Socket::ListenBacklog;

 using JetByteTools::IO::CAsyncFileLog;

 using std::cerr;
 using std::endl;

 ///////////////////////////////////////////////////////////////////////////////
 // Program entry point
 ///////////////////////////////////////////////////////////////////////////////

 int main(int /*argc*/, char * /*argv*/[])
 {
    CSEHException::Translator sehTranslator;

    try
    {
       CAsyncFileLog log(_T("EchoServer.log"));

       CDebugTrace::LogInstaller logInstaller(log);

       try
       {
          CCommandLine commandLine(_T("EchoServer"), CCommandLine::TCPServer);

          if (commandLine.Parse())
          {
             CIOPool pool(
                commandLine.NumberOfIOThreads(),
                commandLine.DisplayDebug());

             pool.Start();

             CStreamSocketAllocator socketAllocator(
                commandLine.SocketPoolSize(),
                commandLine.SpinCount(),
                commandLine.DisplayDebug());

             CBufferAllocator bufferAllocator(
                commandLine.BufferSize(),
                commandLine.BufferPoolSize(),
                commandLine.DisplayDebug());

             const CFullAddress address(
                commandLine.Server(),
                commandLine.Port());

             const ListenBacklog listenBacklog = commandLine.ListenBacklog();

             CConnectionLimiter connectionLimiter(commandLine.MaxConnections());

             CSocketServer server(
                commandLine.NoWelcomeMessage() ? "" : "Welcome to echo server\r\n",
                address,
                listenBacklog,
                pool,
                socketAllocator,
                bufferAllocator,
                connectionLimiter);

             server.Start();

             server.StartAcceptingConnections();

             CSimpleServerShutdownHandler shutdownHandler(server);

             shutdownHandler.WaitForShutdownRequest();

             server.WaitForShutdownToComplete();

             pool.WaitForShutdownToComplete();

             bufferAllocator.Flush();

             socketAllocator.ReleaseSockets();
          }
       }
       catch(const CException &e)
       {
          OutputEx(_T("Exception: ") + e.GetDetails());
       }
       catch(const CSEHException &e)
       {
          OutputEx(_T("SEH Exception: ") + e.GetDetails());
       }
       catch(...)
       {
          OutputEx(_T("Unexpected exception"));
       }
    }
    catch(const CException &e)
    {
       const _tstring message = _T("Exception: ") + e.GetDetails();

       cerr << CStringConverter::TtoA(message) << endl;
    }
    catch(const CSEHException &e)
    {
       const _tstring message = _T("SEH Exception: ") + e.GetDetails();

       cerr << CStringConverter::TtoA(message) << endl;
    }
    catch(...)
    {
       cerr << "Unexpected exception" << endl;
    }

    return 0;
 }

We start by setting up a Windows Structured Exception Handling translator which will translate SEH exceptions into C++ exceptions. Next we create and install a debug trace log in the form of an instance of JetByteTools::IO::CAsyncFileLog, which provides a high performance trace file for our log messages. The two levels of exception handling allow us to log exceptions that occur during the creation of our debug trace log to std::cerr, and those that occur after our trace log is created to the trace log itself.
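
For orientation, here's a minimal, standalone sketch of the underlying mechanism. The framework's CSEHException::Translator wraps this idea up in an object; the sketch below uses the raw MSVC _set_se_translator() API directly and is not the framework's code:

 #include <windows.h>
 #include <eh.h>          // _set_se_translator; requires compilation with /EHa

 #include <stdexcept>
 #include <string>

 // A minimal C++ exception type to translate into.
 class CSimpleSEHException : public std::runtime_error
 {
    public :

       explicit CSimpleSEHException(unsigned int code)
          : std::runtime_error("SEH exception, code: " + std::to_string(code))
       {
       }
 };

 // Called by the runtime whenever an SEH exception occurs on this thread;
 // converts the SEH exception into a C++ exception.
 void __cdecl SEHTranslator(unsigned int code, EXCEPTION_POINTERS * /*pInfo*/)
 {
    throw CSimpleSEHException(code);
 }

 int main()
 {
    _set_se_translator(SEHTranslator);     // per-thread; needs /EHa

    try
    {
       volatile int *p = nullptr;

       *p = 42;                            // access violation...
    }
    catch (const CSimpleSEHException &)
    {
       // ...caught here as a normal C++ exception
    }

    return 0;
 }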

Once we're set up to deal with any exceptions, we parse the command line. There are some command line arguments that are required, so running the server without them gives some help:
 EchoServer - v6.4 - Copyright (c) 2009 JetByte Limited
 Usage: EchoServer -port xxxx

 Command line parameters:
  r -port               The port to listen on.
  o -server             The server address to bind to either in IPv4 dotted IP
                        or IPv6 hex formats.
                        Defaults to IPv4 INADDR_ANY.
  o -spinCount          The spin count used in per socket critical sections.
  o -numberOfIOThreads  Defaults to 0 (2 x processors)
  o -socketPoolSize     Defaults to 10.
  o -bufferPoolSize     Defaults to 10.
  o -bufferSize         Defaults to 1024
  o -listenBacklog      Defaults to 5.
  o -maxConnections     Defaults to no limit
  o -noWelcomeMessage   Does not display a welcome message
  o -displayDebug

  r = required, o = optional

Assuming we are passed the correct arguments, at least -port XXX to tell us which TCP port to listen on, the server will begin to construct the objects that it needs in order to run.

First we configure the pool of threads that will be used to perform our I/O. By default our command line parser returns 0 if we don't specify the number of I/O threads to use, which causes the I/O pool to create two threads per CPU core on the machine that the server runs on. This is fine for low numbers of cores (up to 4 or so) but is inappropriate for larger numbers of cores; see JetByteTools::IO::IIOPool for more details.

We then create a socket allocator and a buffer allocator; both of these objects can pool their data structures to improve performance. Sockets are required for each connection and, although reasonably lightweight, take time to construct. Note that this is the simplest way to create a socket allocator and it should work well in most situations; you can, however, share locks between sockets to trade performance for lower resource usage. We talk about how and why you might tune per-socket lock usage here, but in general this allocator construction approach works fine and is a good default choice.

Buffers are required for all data transfer on a connection and, in some server designs, for data flow around the server itself. The allocators can be configured to retain their data structures in a list for later reuse, which can improve performance and reduce memory fragmentation, as the objects are allocated once and then reused rather than being allocated and destroyed on demand. Again this is a tunable parameter and you may wish to profile your server to see what's appropriate; once again you're trading resource usage for performance. If you expect to always have around X connections active then it makes sense to have the socket pool retain X sockets and the buffer pool retain 2 x X buffers (or more, depending on the design of your server). Note that you can ask the allocators to preallocate their data structures at server start up, which means that you'll always have X data structures available in memory.
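
For example, a server that you expect to handle around 1,000 concurrent connections might be started with something like the following (the values are purely illustrative; the switches are those documented above):

 EchoServer -port 5001 -socketPoolSize 1000 -bufferPoolSize 2000 -bufferSize 1024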

Next we pass any server string that we've been provided to an instance of JetByteTools::Socket::CFullAddress. This allows our server to be address family agnostic. You can pass either an IPv4 address (-server 192.168.0.44) or an IPv6 address (-server [ffff:ffff:ffff:ffff:ffff]) and the server will listen appropriately.

Next comes the listenBacklog, which is the maximum length of the queue of pending connections. If set to SOMAXCONN, the underlying service provider responsible for the server's listening socket will set the backlog to a maximum reasonable value. Something small usually works well for most simple servers, and you can increase it if you find that your server is refusing connections. Note that this isn't the number of connections that your server can handle; it's just the number of connections that are queuing to be accepted by the server. The server accepts connections very quickly, so this queue of pending connections can usually be quite small. If you use the Echo Server Test harness to stress test your server you can issue many hundreds of connections at the same time; this is an easy way to make the server fail due to an insufficient listen backlog, but it is not necessarily especially realistic. You can either batch the test harness's connection attempts into "reasonable" sized batches and add a connection batch delay, or you can increase the server's listen backlog.
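
The framework manages the listening socket for you, but for orientation this is where the backlog fits in at the raw Winsock level (a minimal sketch, not framework code; the port number is illustrative):

 #include <winsock2.h>

 #pragma comment(lib, "ws2_32.lib")

 int main()
 {
    WSADATA wsaData;

    if (0 != ::WSAStartup(MAKEWORD(2, 2), &wsaData))
    {
       return 1;
    }

    SOCKET s = ::socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

    sockaddr_in addr = {};

    addr.sin_family = AF_INET;
    addr.sin_port = ::htons(5001);
    addr.sin_addr.s_addr = ::htonl(INADDR_ANY);

    ::bind(s, reinterpret_cast<sockaddr *>(&addr), sizeof(addr));

    // The second argument is the listen backlog; SOMAXCONN asks the
    // service provider to pick a "maximum reasonable" value.
    ::listen(s, SOMAXCONN);

    // ... accept connections here ...

    ::closesocket(s);

    ::WSACleanup();

    return 0;
 }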

Next we create an instance of JetByteTools::Socket::CConnectionLimiter. This is a very important part of a high availability server that needs to (or could need to) service many thousands of concurrent connections. The connection limiter can protect the machine that the server runs on from running out of essential system-wide resources (such as Non-Paged Pool memory), the result of which could be a Blue Screen Of Death, as some drivers are poorly written and fail to operate correctly in low resource situations; see Limiting Resource Usage for more details.
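
Conceptually a connection limiter is just a thread-safe count of active connections; each new connection must acquire a slot, and connections beyond the limit are refused rather than being allowed to consume further resources. A hypothetical sketch of the idea (not the framework's implementation):

 #include <cstddef>
 #include <mutex>

 class CSimpleConnectionLimiter
 {
    public :

       explicit CSimpleConnectionLimiter(const std::size_t maxConnections)
          : m_max(maxConnections), m_active(0)
       {
       }

       // Called when a new connection arrives.
       bool TryAcquireSlot()
       {
          std::lock_guard<std::mutex> lock(m_lock);

          if (m_active >= m_max)
          {
             return false;     // refuse the connection
          }

          ++m_active;

          return true;
       }

       // Called when a connection closes.
       void ReleaseSlot()
       {
          std::lock_guard<std::mutex> lock(m_lock);

          --m_active;
       }

    private :

       std::mutex m_lock;
       const std::size_t m_max;
       std::size_t m_active;
 };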

Finally we're in a position to create our server object. We pass all of the other objects that we've just created to the constructor of the server to configure it and provide it with the services that it requires to do its work. This method of object composition can look fairly complex to those who aren't familiar with it, but, in reality, it's simply a way of connecting together the objects that we rely on. These objects are separate objects so that our server can be configured in versatile ways. Most of these objects are accessed via interfaces so that we can replace the default implementations with custom implementations if we need to in order to meet performance or functionality requirements that were not anticipated when the framework itself was designed. In summary, the need to supply all of these objects to the constructor of our server is a "Good Thing". See Parameterise from Above for more details about this technique.
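
The idea is easy to see in miniature. In the hypothetical sketch below (not framework code) the server depends only on an interface, and main(), the code "above", decides which concrete implementation to plug in:

 #include <iostream>
 #include <string>

 struct ILog
 {
    virtual void Write(const std::string &message) = 0;

    protected :

       ~ILog() = default;
 };

 class CConsoleLog : public ILog
 {
    public :

       void Write(const std::string &message) override
       {
          std::cout << message << std::endl;
       }
 };

 class CServer
 {
    public :

       explicit CServer(ILog &log)    // the dependency is supplied from above
          : m_log(log)
       {
       }

       void Start()
       {
          m_log.Write("server started");
       }

    private :

       ILog &m_log;
 };

 int main()
 {
    CConsoleLog log;       // swap in a file log, a null log, etc. without

    CServer server(log);   // touching the CServer class at all

    server.Start();

    return 0;
 }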

Once we've put the server together we can start it and tell it to start accepting connections. At this point the server is running. Since everything that the server does happens on its own threads, either in the I/O thread pool or in the thread that runs to accept connections, the main application thread is no longer required and can simply sit around and wait until it's time to shut the server down. We create a CSimpleServerShutdownHandler object to do this for us. It creates a few named event objects that can be controlled by the ServerShutdown utility that ships as part of these examples. The shutdown handler waits inside CSimpleServerShutdownHandler::WaitForShutdownRequest() for a request to shut the server down and then returns.
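
The named-event mechanism itself is plain Win32. A minimal sketch of the idea (the event name here is hypothetical; the real handler and the ServerShutdown utility agree on their own names):

 #include <windows.h>
 #include <tchar.h>

 int main()
 {
    // Create a named, manual-reset event that any process on the
    // machine can locate by name.
    HANDLE hShutdownEvent = ::CreateEvent(
       nullptr,
       TRUE,                                // manual reset
       FALSE,                               // initially non-signalled
       _T("EchoServer.ShutdownEvent"));     // hypothetical name

    if (!hShutdownEvent)
    {
       return 1;
    }

    // ... the server runs on its own threads ...

    // Block until some other process signals the event, e.g. with:
    //    HANDLE h = ::OpenEvent(EVENT_MODIFY_STATE, FALSE, _T("EchoServer.ShutdownEvent"));
    //    ::SetEvent(h);
    ::WaitForSingleObject(hShutdownEvent, INFINITE);

    // ... shut the server down here ...

    ::CloseHandle(hShutdownEvent);

    return 0;
 }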

Once the server is asked to shut down we simply tell it to do so and then wait for it to finish and clean up the resources used.

The server itself is also explained in the TCP Socket Server How To document. The CSocketServer class provides a link between the framework and the socket server callbacks that you will implement to act on various network events that happen during the lifetime of the connections to your server. The class definition looks like this:

 class CSocketServer :
    public JetByteTools::Socket::CStreamSocketServer,
    private CStreamSocketServerExCallback
 {
    public :

       CSocketServer(
          const std::string &welcomeMessage,
          const JetByteTools::Socket::IFullAddress &address,
          const JetByteTools::Socket::ListenBacklog listenBacklog,
          JetByteTools::IO::IIOPool &pool,
          JetByteTools::Socket::IAllocateStreamSockets &socketAllocator,
          JetByteTools::IO::IAllocateBuffers &bufferAllocator,
          JetByteTools::Socket::ILimitConnections &connectionLimiter = JetByteTools::Socket::CConnectionLimiter::NoLimitLimiter);

       ~CSocketServer();

    private :

       // Implement just the bits of IStreamSocketServerCallback that we need

       virtual void OnConnectionEstablished(
          JetByteTools::Socket::IStreamSocket &socket,
          const JetByteTools::Socket::IAddress &address);

       virtual void OnReadCompleted(
          JetByteTools::Socket::IStreamSocket &socket,
          JetByteTools::IO::IBuffer &buffer);

       // Our business logic

       void EchoMessage(
          JetByteTools::Socket::IStreamSocket &socket,
          JetByteTools::IO::IBuffer &buffer) const;

       const std::string m_welcomeMessage;

       /// No copies, do not implement
       CSocketServer(const CSocketServer &rhs);
       /// No copies, do not implement
       CSocketServer &operator=(const CSocketServer &rhs);
 };

It's vitally important that we have a destructor and that the destructor calls the WaitForShutdownToComplete() method of the base class. So, ideally, the destructor will look something like this:

 CSocketServer::~CSocketServer()
 {
    WaitForShutdownToComplete();
 }


The reason for this is that we need to be sure that we have shut down the server before our destruction begins, because we have supplied a reference to ourself as the callback interface to the base class. If we didn't wait for the base class to complete its shutdown then we (including our callback interface implementation) would be destroyed before our server base class, and this would be bad... This problem does not manifest in servers that are built without deriving from the server base class; see the callback echo server example for details.
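
The hazard is easy to reproduce in miniature. In the hypothetical sketch below (standard C++, not framework code), if ~CServer() didn't wait then the derived parts of the object, including the OnEvent() implementation, would be destroyed while the base's worker thread might still be calling back into them:

 #include <atomic>
 #include <chrono>
 #include <thread>

 struct ICallback
 {
    virtual void OnEvent() = 0;

    protected :

       ~ICallback() = default;
 };

 class CServerBase
 {
    public :

       explicit CServerBase(ICallback &callback)
          : m_callback(callback), m_stop(false)
       {
       }

       void Start()
       {
          m_worker = std::thread([this]()
          {
             while (!m_stop)
             {
                m_callback.OnEvent();      // calls into the derived class

                std::this_thread::sleep_for(std::chrono::milliseconds(10));
             }
          });
       }

       void WaitForShutdownToComplete()
       {
          m_stop = true;

          if (m_worker.joinable())
          {
             m_worker.join();
          }
       }

       ~CServerBase()
       {
          WaitForShutdownToComplete();     // on its own this runs too late, see below
       }

    private :

       ICallback &m_callback;
       std::atomic<bool> m_stop;
       std::thread m_worker;
 };

 class CServer : private ICallback, public CServerBase
 {
    public :

       CServer() : CServerBase(*this)
       {
       }

       // Without this wait, ~CServer() would complete, our OnEvent()
       // implementation would be gone, and only then would ~CServerBase()
       // stop the worker thread - which might still be calling OnEvent()
       // on the partially destroyed object.
       ~CServer()
       {
          WaitForShutdownToComplete();
       }

    private :

       void OnEvent() override
       {
          // handle the event
       }
 };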

As you can see, we only implement 2 of the callback methods from JetByteTools::Socket::IStreamSocketServerCallback; we inherit from CStreamSocketServerExCallback, which lives in the ServerCommon library and provides some standard debug trace output for the other callback methods. We also inherit from JetByteTools::Socket::CStreamSocketServer, but this is entirely optional and we only do this as a convenience; see EchoServerCallback for an example server which does not inherit from a server base class.

 void CSocketServer::OnConnectionEstablished(
    IStreamSocket &socket,
    const IAddress & /*address*/)
 {
    Output(_T("OnConnectionEstablished"));

    if (socket.TryWrite(m_welcomeMessage.c_str(), GetStringLengthAsDWORD(m_welcomeMessage)))
    {
       socket.TryRead();
    }
 }

Our server attempts to send a message to newly connected clients and then attempts to issue a read. We use TryWrite() and TryRead() so that we don't throw an exception out of the callback if the read or write fails (which it can do if the client disconnects straight after connecting!). It doesn't actually matter if we allow exceptions to leak out of our callback handler, as the framework will handle them for us. The normal processing of a socket connection's lifetime will deal correctly with any connections that close during connection establishment so, in this simple server at least, there's nothing we need to do or worry about.

 void CSocketServer::OnReadCompleted(
    IStreamSocket &socket,
    IBuffer &buffer)
 {
    try
    {
       EchoMessage(socket, buffer);

       socket.Read();
    }
    catch(const CException &e)
    {
       Output(_T("ReadCompleted - Exception - ") + e.GetDetails());
       socket.Shutdown();
    }
    catch(...)
    {
       Output(_T("ReadCompleted - Unexpected exception"));
       socket.Shutdown();
    }
 }

When the read completes our OnReadCompleted() handler is called and we are given a buffer which contains the bytes that were read from the TCP stream. Now, remember, TCP connections are an unstructured stream of bytes, so this buffer may contain one byte or one hundred bytes; it doesn't matter that the client at the other end sent a "packet" of exactly 100 bytes in a single call to send, we can receive any number of those bytes as the result of our read completing. We may, or may not, get the rest of the bytes later on; we probably will, and when testing on a LAN in your office you're unlikely to see much packet fragmentation, but you have to assume that every lump of data that is sent to you could arrive one byte at a time, each as the result of a separate call to OnReadCompleted(). Our "business logic" is simply to echo bytes, so we needn't worry about how many bytes we have read from the TCP stream. Other servers, such as the PacketEchoServer and the SimpleProtocolServer, need to be more careful in how they accumulate bytes for processing.
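
For a concrete feel for what such servers have to do, here's a hypothetical sketch (standard C++, not framework code) of accumulating bytes for a simple length-prefixed protocol; a complete message is only handed to the business logic once all of its bytes have arrived, however many reads that takes:

 #include <cstddef>
 #include <cstdint>
 #include <cstring>
 #include <vector>

 class CMessageAccumulator
 {
    public :

       // Append whatever the latest read delivered - possibly one byte,
       // possibly several messages' worth - and dispatch every complete
       // message. Assumes a 4 byte, host byte order, length prefix.
       template <typename THandler>
       void OnBytesReceived(const uint8_t *pData, const size_t length, THandler handleMessage)
       {
          m_buffer.insert(m_buffer.end(), pData, pData + length);

          while (m_buffer.size() >= sizeof(uint32_t))
          {
             uint32_t messageLength = 0;

             std::memcpy(&messageLength, m_buffer.data(), sizeof(messageLength));

             if (m_buffer.size() < sizeof(messageLength) + messageLength)
             {
                break;      // an incomplete message; wait for the next read
             }

             handleMessage(m_buffer.data() + sizeof(messageLength), messageLength);

             m_buffer.erase(
                m_buffer.begin(),
                m_buffer.begin() + sizeof(messageLength) + messageLength);
          }
       }

    private :

       std::vector<uint8_t> m_buffer;
 };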

 void CSocketServer::EchoMessage(
    IStreamSocket &socket,
    IBuffer &buffer) const
 {
    DEBUG_ONLY(Output(_T("Data Echoed -\r\n") + DumpData(buffer.GetMemory(), buffer.GetUsed(), 60, true)));

    socket.Write(buffer);
 }

The function above is the extent of the "business logic" in this simple server. All we do is display what we're echoing (in debug builds only) and then write it back to the client.

Note that the JetByteTools::Win32::DumpData function isn't the most performant function in the world and it can adversely affect server performance, which is why we only call it in debug builds.
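
DEBUG_ONLY is simply a conditional compilation convenience. One common way such a macro is defined (the framework's actual definition may differ) is:

 #ifdef _DEBUG
 #define DEBUG_ONLY(x) x
 #else
 #define DEBUG_ONLY(x)
 #endif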

Most servers will share much of the structure shown above. The initial setup is pretty much the same for all servers though the actual mechanics will change somewhat if you are hosting your server as a Windows Service (see EchoServerService for more details).
