MyCustomServer new addHandlerUrl: '<scheme>:/<transport>/<hostport>' sstAsUrl; startUp
The scheme specified in the invocation handler URL determines the invocation configuration selected by the SST method invocation infrastructure. The transport, if specified, determines the transport configuration selected by the SST transport services infrastructure. The hostport determines the IP interface on which client requests are accepted and the port on which the server listens for them.
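As a rough illustration of the template above (the host name 'myhost.example.com', the port numbers, and the 'tcp' transport token are placeholder values, not a statement of which schemes or transports are actually registered in a given image), a handler URL may spell out all three parts, or omit the transport and host to accept the defaults:

"Placeholder values only: the scheme and transport tokens must match
 configurations actually registered with the SST infrastructure."
MyCustomServer new
    addHandlerUrl: 'http:/tcp/myhost.example.com:9000' sstAsUrl;
    startUp.

"Short form: scheme only, default transport, all local IP interfaces, port 9988."
MyCustomServer new
    addHandlerUrl: 'http://:9988' sstAsUrl;
    startUp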
(The SstHttpServletEngine is an example of an "application" which makes use of this approach. See the example class SstHttpServerExample for hints on how to configure and run an SstHttpServletEngine. Also, read the HTTP implementation notes that follow later in this document.)
As a "lite" alternative, an application may simply create an instance of SstBasicServer and register a request handler callback message. The example below implements a complete HTTP server which responds to a client request with a simple HTML document containing a dump of the client request headers.
| server handler result |
handler := DirectedMessage
    receiver: [:request :response |
        | htmlStream htmlResponse |
        htmlStream := WriteStream on: (Locale current preferredStringClass new: 128).
        htmlStream nextPutAll: '<html><pre>'.
        request header sstHttpStringOn: htmlStream.
        htmlStream nextPutAll: '</pre></html>'.
        htmlResponse := htmlStream contents.
        response
            header: (SstHttpResponseHeader new
                status: SstHttpConstants::HttpOk;
                version: request header version;
                contentLength: htmlResponse size;
                yourself);
            contents: htmlResponse;
            yourself]
    selector: #value:value:.
(result := (server := SstBasicServer new) addHandlerUrl: 'http://:9988') isSstError
    ifTrue: [result raise].
server requestHandler: handler.
(result := server startUp) isSstError
    ifTrue: [result raise].
server inspect    "-- When finished, send #shutDown"
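If the example evaluates without raising an error, pointing a web browser at port 9988 of the machine running the image should return a page that echoes the browser's request headers. Send #shutDown to the server (for example, from the inspector opened by the last line) when you are finished with it.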
Note that the SST server infrastructure API used in this example did not require any direct manipulation of endpoints, invocation handlers, or dispatchers; this is typically the case. It is nevertheless worthwhile to discuss some of these implementation details.
When the server is sent #startUp, it sends #startUp to each of its configured invocation handlers (typically just one, though potentially one per port or one per transport), and it exports itself into the export space of each of these handlers.
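The following is only an illustrative sketch of that startup sequence, not SST's actual source; the accessors #invocationHandlers and #exportSpace are hypothetical names for the structures just described.

"Illustrative sketch only; #invocationHandlers and #exportSpace are
 hypothetical accessor names, not published SST API."
startUp
    self invocationHandlers do: [:handler |
        handler startUp.                    "start the handler's own machinery"
        handler exportSpace export: self]   "make this server reachable through the handler"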
When an invocation handler is sent #startUp, it initializes its marshaler, dispatcher, local endpoint, and worker manager, and then forks its server process. The server process runs a request-handling loop; its only function is to respond to a new connection request on its listening socket, or to a ready-for-read on one of its connected sockets, by having its worker manager dispatch a worker process to accept the connection or recv the message.
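A sketch of that loop, under the description above, might look like the following; every selector shown here is a hypothetical placeholder for illustration, not part of the published SST API.

"Illustrative sketch of the server process loop described above."
runServerLoop
    [| ready |
    ready := self waitForReadySocket.    "listening socket or a connected socket"
    ready == self listeningSocket
        ifTrue: [self workerManager startWorkerOn: [self acceptConnectionOn: ready]]
        ifFalse: [self workerManager startWorkerOn: [self receiveMessageOn: ready]]] repeat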
In the case of an inbound message, the worker process delegates to the marshaler to create a request object from the received message, and then delegates to the dispatcher for execution of the marshaled request. The dispatcher dispatches a dispatch worker process that actually executes the marshaled request, resulting in the invocation of the server's #basicProcessRequest: method (or in the sending of its configured request handler message).
The user application is expected to return from #basicProcessRequest: with an appropriate transport reply message. On return from #basicProcessRequest:, the invocation handler delegates to its local endpoint (#send:to:) to deliver the reply message to the remote endpoint from which the request arrived.
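Putting the last two paragraphs together, the inbound path can be sketched as follows; apart from #basicProcessRequest: and #send:to:, which the text above names, the selectors here are hypothetical placeholders used only to make the flow concrete.

"Illustrative sketch of inbound request handling and reply delivery."
handleInbound: aMessage from: remoteEndpoint
    | request reply |
    request := self marshaler requestFrom: aMessage.       "unmarshal the received message"
    self dispatcher startWorkerOn: [
        reply := self server basicProcessRequest: request.  "or send the configured request handler message"
        self localEndpoint send: reply to: remoteEndpoint]  "deliver the transport reply message"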