amqsbcg not getting messages from MQ Client - ibm-mq

I managed to send a message from an MQ client to an MQ server. On the MQ client I run amqsputc [queue_local] [name_qmgr], and after typing the message it returns Sample AMQSPUT0 end, which means it was sent properly. But when I try to see the message on the MQ server with amqsbcg [queue_local] [name_qmgr], it throws an error message:
Sample AMQSGET0 start
MQCONNX ended with reason code 2058
This error appears when the queue manager doesn't exist or its name is misspelled, but neither is the case here.
When I verify the local queue it shows CURDEPTH(1), which means there is one message on the queue (it was delivered). But I don't know why it won't let me get the message. The queue manager's error log only shows something like:
the channel AMQ.... connection ended
I checked the channel I configured for this test:
AMQ8414: Display Channel details.
CHANNEL(A03ZCIWAS) CHLTYPE(SVRCONN)
ALTDATE(2017-09-07) ALTTIME(00.35.17)
CERTLABL( ) COMPHDR(NONE)
COMPMSG(NONE)
DESCR(Server-connection to ...)
DISCINT(0) HBINT(300)
KAINT(AUTO) MAXINST(100)
MAXINSTC(90) MAXMSGL(4194304)
MCAUSER(nobody) MONCHL(QMGR)
RCVDATA( ) RCVEXIT( )
SCYDATA( ) SCYEXIT( )
SENDDATA( ) SENDEXIT( )
SHARECNV(10) SSLCAUTH(REQUIRED)
SSLCIPH( ) SSLPEER( )
TRPTYPE(TCP)
AMQ8414: Display Channel details.
CHANNEL(A03ZCIWAS) CHLTYPE(CLNTCONN)
AFFINITY(PREFERRED) ALTDATE(2017-09-07)
ALTTIME(02.40.42) CERTLABL( )
CLNTWGHT(0) COMPHDR(NONE)
COMPMSG(NONE) CONNAME(XX.XX.XX.XX)
DEFRECON(NO)
DESCR(Client connection to ....)
HBINT(300) KAINT(AUTO)
LOCLADDR( ) MAXMSGL(4194304)
MODENAME( ) PASSWORD( )
QMNAME(AEDMQ03A) RCVDATA( )
RCVEXIT( ) SCYDATA( )
SCYEXIT( ) SENDDATA( )
SENDEXIT( ) SHARECNV(10)
SSLCIPH( ) SSLPEER( )
TPNAME( ) TRPTYPE(TCP)
USERID( )
The CONNAME(xx.xx.xx.xx) is the right IP address for the MQ server, and the MQSERVER variable is set like:
MQSERVER=[channel_svrconn]/tcp/'ip_address_MQServer(1414)'
The port is also correct.

The output you provided indicates you are executing amqsget, not amqsbcg.
I also note that your question mentions amqsbcg, not amqsbcgc. The c at the end of the sample name indicates it is the client version of the program:
amqsbcg = server-binding version
amqsbcgc = client version
If you execute either amqsget or amqsbcg and specify a queue manager that is not local to the same server, you will receive a 2058.
The solution is to use amqsgetc or amqsbcgc instead of amqsget or amqsbcg.
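As an illustration (the queue name, queue manager, and IP below are placeholders drawn from the question, not verified values), the client-mode samples pick up the channel from MQSERVER and can be run from the client machine like this:

```shell
# Placeholder channel/host/queue names -- substitute your own.
export MQSERVER="A03ZCIWAS/TCP/xx.xx.xx.xx(1414)"

# Put a message over the client connection:
amqsputc QUEUE.LOCAL AEDMQ03A

# Browse it back over the same client connection (note the trailing "c"):
amqsbcgc QUEUE.LOCAL AEDMQ03A

# amqsbcg (without the "c") uses local server bindings and fails with
# MQRC 2058 when the queue manager does not reside on this machine.
```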

Related

Send and Receive data through same Socket in JZMQ

I am developing a Java multicast application using JZMQ (the PGM protocol).
Is it possible to send and receive data through the same socket?
If ZMQ.PUB is used, only send() works and recv() does not.
If ZMQ.SUB is used, send() doesn't work.
Is there an alternative way to use both send() and recv() on the same socket?
ZMQ.Context context = ZMQ.context(1);
ZMQ.Socket socket = context.socket(ZMQ.PUB);
socket.send(msg);
socket.recv();
Radio broadcast will never deliver your voice into the Main Station
Yes, both archetypes of the ZeroMQ PUB/SUB Scalable Formal Communication Pattern are uni-directional by definition: one side can only .send(), while the other(s) may only listen (and, if configured well, they will).
How to do what you have asked for? (... and forget about having this over pgm://)
Yes, there are ways to use other ZeroMQ archetypes for this: either a single socket over PAIR/PAIR endpoints (capable of both .send() and .recv()), or a pair of (A)->--PUSH/PULL->-(B) + (A)-<-PULL/PUSH-<-(B) channels, so as to construct a bi-directional signalling/messaging channel out of uni-directional archetypes.
You also need to select an appropriate transport class to use in the .bind() + .connect() calls between the configured ZeroMQ endpoints.
// -------------------------------------------------------- HOST-(A)
ZMQ.Context aCONTEXT = ZMQ.context( 1 );
ZMQ.Socket aPubSOCKET = aCONTEXT.socket( ZMQ.PUB );
aPubSOCKET.setsockopt( ZMQ.LINGER, 0 );
// ----------------------
aPubSOCKET.bind( "tcp://*:8001" );
// ----------------------
// set msg = ...;
// ----------------------
aPubSOCKET.send( msg, ZMQ.NOWAIT );
// ...
// ----------------------
aPubSOCKET.close();
aCONTEXT.term();
// ----------------------
The SUB-side has one more duty ...
// -------------------------------------------------------- HOST-(B)
ZMQ.Context aCONTEXT = ZMQ.context( 1 );
ZMQ.Socket aSubSOCKET = aCONTEXT.socket( ZMQ.SUB );
aSubSOCKET.setsockopt( ZMQ.LINGER, 0 );
aSubSOCKET.setsockopt( ZMQ.SUBSCRIBE, "" );
// ----------------------
aSubSOCKET.connect( "tcp://<host_A_IP_address>:8001" );
// ----------------------
// def a msg;
// ----------------------
msg = aSubSOCKET.recv( ZMQ.NOWAIT );
// ...
// ----------------------
aSubSOCKET.close();
aCONTEXT.term();
// ----------------------
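For completeness, the PAIR/PAIR single-socket alternative mentioned above can be sketched as follows. This is a minimal sketch using pyzmq and the inproc:// transport purely so it is self-contained; JZMQ exposes the same ZMQ.PAIR archetype, and tcp:// would be used between hosts (pgm:// is not available for PAIR):

```python
import zmq

ctx = zmq.Context(1)

# One endpoint binds, the other connects; inproc:// keeps the sketch in-process.
a = ctx.socket(zmq.PAIR)
a.bind("inproc://duplex")

b = ctx.socket(zmq.PAIR)
b.connect("inproc://duplex")

a.send_string("ping")          # a PAIR socket can .send() ...
got_on_b = b.recv_string()
b.send_string("pong")          # ... and the very same sockets work the other way
got_on_a = a.recv_string()

print(got_on_b, got_on_a)      # ping pong

for s in (a, b):
    s.close(linger=0)
ctx.term()
```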

Why is ZeroMQ PUB enqueing messages with no connected subscribers? ( Well, "disconnected" SUB-s )

I am seeing a strange behavior using ZMQ_PUB.
I have a producer which .connect()-s to different processes that .bind() on ZMQ_SUB sockets.
The subscribers all .bind(), the publisher .connect()-s.
When the producer starts, it creates a ZMQ_PUB socket and .connect()-s it to the different processes. It then immediately starts sending messages at a regular period.
As expected, if there are no connected subscribers, it drops all messages until a subscriber starts.
The flow then works normally: when a subscriber starts, it receives the messages from that moment on.
Now, the problem is:
I disconnect the subscriber ( stopping the process ).
There are no active subscribers at this point, as I stopped the only one. The producer continues sending messages, which should be dropped, as there are no connected subscribers anymore…
I restart the original subscriber, it binds, the publisher reconnects... and the subscriber receives all messages produced in the meantime !!
So what I see is that the producer enqueued all messages while the subscriber was down. As soon as the socket reconnected, because the subscriber process restarted, it sent all queued messages.
As I understood from here, a publisher should drop all sent messages when there are no connected subscribers:
ZeroMQ examples
"A publisher has no connected subscribers, then it will simply drop all messages."
Why is this happening?
By the way, I am using C++ over linux for these tests.
I tried setting a different identity on the subscriber when it binds, but it didn't work. The publisher still enqueues messages and delivers them all when the subscriber restarts.
Thanks in advance,
Luis
UPDATE:
IMPORTANT UPDATE: Before posting this question I had tried different solutions. One was to set ZMQ_LINGER to 0, which didn't work. I then added ZMQ_IMMEDIATE, and it worked, but I just found out that ZMQ_IMMEDIATE alone does not work; it requires ZMQ_LINGER as well. – Luis Rojas, 3 hours ago
UPDATE:
As per request, I am adding some simple test cases to show my point.
One is a simple subscriber, which runs on command line and receives the uri where to bind, for instance :
$ ./sub tcp://127.0.0.1:50001
The other is a publisher, which receives a list of uris to connect to, for instance :
./pub tcp://127.0.0.1:50001 tcp://127.0.0.1:50002
The subscriber receives up to 5 messages, then closes its socket and exits. We can see in Wireshark the exchange of FIN/ACK both ways, and how the socket moves into the TIME_WAIT state. Then the publisher starts sending SYN, trying to reconnect (which proves that the ZMQ_PUB side knows the connection was closed).
I am explicitly not unsubscribing the socket, just closing it. In my opinion, if the socket is closed, the publisher should automatically end any subscription for that connection.
So what I see is: I start one or more subscribers, then I start the publisher, which starts sending messages. Each subscriber receives 5 messages and exits. In the meantime the publisher continues sending messages, with no connected subscriber. When I restart the subscriber, it immediately receives several messages, because they were queued on the publisher's side. I think those queued messages break the publish/subscribe model, where messages should be delivered only to connected subscribers. If a subscriber closes the connection, messages for that subscriber should be dropped. Even more, when the subscriber restarts, it may decide to subscribe to other messages, but it will still receive those subscribed to by a "previous incarnation" that was bound to the same port.
My proposal is that ZMQ_PUB (in connect mode), upon detecting a socket disconnection, should clear all subscriptions on that socket until it reconnects and the new subscriber decides to resubscribe.
I apologize for any language mistakes; English is not my native language.
Pub's code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libgen.h>
#include <unistd.h>
#include <string>
#include <zeromq/zmq.hpp>

int main( int argc, char *argv[] )
{
    if ( argc < 2 )
    {
        fprintf( stderr, "Usage : %s <remoteUri1> [remoteUri2...]\n",
                 basename( argv[0] ) );
        exit( EXIT_FAILURE );
    }

    std::string pLocalUri( argv[1] );
    zmq::context_t localContext( 1 );
    zmq::socket_t *pSocket = new zmq::socket_t( localContext, ZMQ_PUB );
    if ( NULL == pSocket )
    {
        fprintf( stderr, "Couldn't create socket. Aborting...\n" );
        exit( EXIT_FAILURE );
    }

    int i;
    try
    {
        for ( i = 1; i < argc; i++ )
        {
            printf( "Connecting to [%s]\n", argv[i] );
            pSocket->connect( argv[i] );
        }
    }
    catch( ... )
    {
        fprintf( stderr, "Couldn't connect socket to %s. Aborting...\n", argv[i] );
        exit( EXIT_FAILURE );
    }

    printf( "Publisher up and running... sending messages\n" );
    fflush( NULL );

    int msgCounter = 0;
    do
    {
        try
        {
            char msgBuffer[1024];
            sprintf( msgBuffer, "Message #%d", msgCounter++ );
            zmq::message_t outTask( msgBuffer, strlen( msgBuffer ) + 1 );
            printf( "Sending message [%s]\n", msgBuffer );
            pSocket->send( outTask );
            sleep( 1 );
        }
        catch( ... )
        {
            fprintf( stderr, "Some unknown error occurred. Aborting...\n" );
            exit( EXIT_FAILURE );
        }
    }
    while ( true );

    exit( EXIT_SUCCESS );
}
Sub's code:
#include <stdio.h>
#include <stdlib.h>
#include <libgen.h>
#include <unistd.h>
#include <string>
#include <zeromq/zmq.hpp>

int main( int argc, char *argv[] )
{
    if ( argc != 2 )
    {
        fprintf( stderr, "Usage : %s <localUri>\n", basename( argv[0] ) );
        exit( EXIT_FAILURE );
    }

    std::string pLocalUri( argv[1] );
    zmq::context_t localContext( 1 );
    zmq::socket_t *pSocket = new zmq::socket_t( localContext, ZMQ_SUB );
    if ( NULL == pSocket )
    {
        fprintf( stderr, "Couldn't create socket. Aborting...\n" );
        exit( EXIT_FAILURE );
    }

    try
    {
        pSocket->setsockopt( ZMQ_SUBSCRIBE, "", 0 );
        pSocket->bind( pLocalUri.c_str() );
    }
    catch( ... )
    {
        fprintf( stderr, "Couldn't bind socket. Aborting...\n" );
        exit( EXIT_FAILURE );
    }

    int msgCounter = 0;
    printf( "Subscriber up and running... waiting for messages\n" );
    fflush( NULL );

    do
    {
        try
        {
            zmq::message_t inTask;
            pSocket->recv( &inTask );
            printf( "Message received : [%s]\n", (char *) inTask.data() );
            fflush( NULL );
            msgCounter++;
        }
        catch( ... )
        {
            fprintf( stderr, "Some unknown error occurred. Aborting...\n" );
            exit( EXIT_FAILURE );
        }
    }
    while ( msgCounter < 5 );

    // pSocket->setsockopt( ZMQ_UNSUBSCRIBE, "", 0 ); NOT UNSUBSCRIBING
    pSocket->close();
    exit( EXIT_SUCCESS );
}
Q: Why is this happening?
Because the SUB side is actually still considered connected (not "disconnected" enough).
Yes, it might be surprising, but killing the SUB process, whether on the .bind()- or .connect()-attached side of the socket's transport media, does not mean the finite-state machine of the I/O pump has "moved" into a disconnected state.
Given that, the PUB side has no option but to consider the SUB side still live and connected (even though the process was silently killed beyond the PUB side's line of sight). For such a "distributed" state there is a ZeroMQ protocol-defined behaviour (a PUB-side duty): to collect all the interim messages for the (yes, invisibly dead) SUB-scriber, which the PUB side still considers fair to be alive (it might just be having some temporary intermittent issues somewhere low on the transport I/O level, or some kind of remote CPU resource starvation, or a concurrency-introduced, transiently intermittent { local | remote } blocking state, et al).
So it buffers...
Had your assassination of the SUB-side agent been a bit more graceful (using a zeroised ZMQ_LINGER plus an adequate .close() on the socket resource instance), the PUB side would have recognised the "distributed" system's shift into an indeed DISCONNECTED state, and a due change of behaviour would have happened on the PUB side of the distributed FSA: no messages would be stored for a "visibly" DISCONNECTED SUB, exactly as the documentation states.
A distributed FSA has only quite weak means of recognising state-change events beyond its horizon of localhost controls. KILL-ing a remote process that implements some remarkable part of the distributed FSA is a devastating event, not a method of keeping the system working. A good option for handling such external risks might be ...
Sounds complex?
Oh yes, it is complex, indeed. That's exactly why ZeroMQ solved this for us, so that we are free to enjoy designing our application architectures on top of these (already solved) low-level complexities.
Distributed-system FSA (a system-wide FSA composed of layered sub-FSAs)
To imagine what silently goes on under the hood, picture just a simple tandem pair of FSAs: exactly what the pair of .Context() instances tries to handle for us in the simplest-ever 1:1 PUB/SUB scenario, where the use case KILL-s all the sub-FSAs on the SUB side without giving it a chance to acknowledge the intention to the PUB side. Even the TCP protocol (living on both the PUB side and the SUB side) has several state transitions from the [ESTABLISHED] state to the [CLOSED] state.
(The original answer showed state diagrams here: a quick X-ray view of a distributed system's FSA-of-FSAs, depicting just the TCP-protocol FSA for clarity, plus the PUB-side and SUB-side .socket() instance behaviour FSAs; images courtesy of nanomsg.)
Bind and Connect, although they may seem interchangeable, have specific meanings here.
Option 1:
Change your code this way and there's no problem:
The publisher should bind to an address.
The subscriber should connect to that address.
Because if you bind a subscriber and then interrupt it, there is no way for the publisher to know that the subscriber is unbound, so it queues the messages for the bound port; when you restart on the same port, the queued messages are drained.
Option 2:
But if you want to do it your way, you need to do the following things:
Register an interrupt handler (SIGINT) in the subscriber code
On the interrupt of the subscriber do the following:
unsubscribe the topic
close the sub socket
exit the subscriber process cleanly, preferably with a 0 return code
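The interrupt-handling steps above can be sketched as follows. This is a minimal sketch in Python with pyzmq (the question's code is C++, but the socket options are the same), using inproc:// and raising SIGINT on itself purely so the sketch is self-contained:

```python
import signal
import zmq

ctx = zmq.Context(1)
sub = ctx.socket(zmq.SUB)
sub.setsockopt(zmq.SUBSCRIBE, b"")   # subscribe to everything
sub.setsockopt(zmq.LINGER, 0)        # don't linger on close
sub.bind("inproc://sub-demo")        # the question binds tcp://; inproc keeps this self-contained

running = True

def on_sigint(signum, frame):
    # Only flag the loop to stop; do the cleanup outside the handler.
    global running
    running = False

signal.signal(signal.SIGINT, on_sigint)

# Simulate the operator pressing Ctrl-C so this sketch terminates on its own.
signal.raise_signal(signal.SIGINT)

while running:
    try:
        sub.recv(flags=zmq.NOBLOCK)
    except zmq.Again:
        pass

# The orderly shutdown listed above:
sub.setsockopt(zmq.UNSUBSCRIBE, b"")  # 1. unsubscribe the topic
sub.close()                           # 2. close the sub socket
ctx.term()                            # 3. then exit cleanly with return code 0
```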
UPDATE:
Regarding the point about identity: do not assume that setting an identity will uniquely identify a connection. If it is left to ZeroMQ, it assigns identities to incoming connections using unique arbitrary numbers.
Identities are not generally used for responding back to clients; they are used for responding back to clients when ROUTER sockets are involved, because ROUTER sockets are asynchronous whereas REQ/REP are synchronous. In asynchronous messaging we need to know whom to respond to; it can be a network address, a random number, a UUID, etc.
UPDATE:
I don't consider this an issue with ZeroMQ, because throughout the guide PUB/SUB is explained in such a way that the publisher is generally static (a server, bound to a port) and subscribers come and go along the way (clients which connect to the port).
There is another option which exactly fits your requirement:
ZMQ_IMMEDIATE (formerly ZMQ_DELAY_ATTACH_ON_CONNECT)
Setting the above socket option on the publisher prevents messages from being enqueued when there are no active connections to it.
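A minimal sketch of that option, shown here with pyzmq for brevity (the C++ equivalent is zmq_setsockopt with ZMQ_IMMEDIATE; the endpoint below is a hypothetical address with no live subscriber):

```python
import zmq

ctx = zmq.Context(1)
pub = ctx.socket(zmq.PUB)

# Queue messages only on completed connections; drop otherwise.
pub.setsockopt(zmq.IMMEDIATE, 1)     # a.k.a. ZMQ_DELAY_ATTACH_ON_CONNECT
pub.setsockopt(zmq.LINGER, 0)        # as the asker found, LINGER 0 is also needed

pub.connect("tcp://127.0.0.1:50001") # hypothetical endpoint, nobody listening

# No handshake has completed, so this message is dropped rather than
# buffered for a peer that may never appear.
pub.send_string("dropped-if-nobody-is-there")

immediate = pub.getsockopt(zmq.IMMEDIATE)

pub.close()
ctx.term()
```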

PureScript Halogen and websockets

I'm trying to use purescript-halogen in combination with websockets, but after several attempts I'm unable to make them work together.
I've seen this question on Thermite and websockets and Phil's answer regarding the Driver function. Halogen also has a Driver function, but I need to run the Driver function with the Aff effect, while purescript-websockets-simple uses the Eff effect.
I've no idea how to transform the synchronous callbacks of the websocket package to asynchronous code running in the Aff monad. Do I need to use an AVar? Do I need purescript-coroutines-aff? If so, how do I hook up these parts together?
Thanks in advance for any pointers in the right direction!
In this case you would indeed want to use purescript-aff-coroutines. That will get you a coroutine Producer that you can then hook up to a Consumer that pushes messages into the driver:
module Main where

import Prelude
import Control.Coroutine (Producer, Consumer, consumer, runProcess, ($$))
import Control.Coroutine.Aff (produce)
import Control.Monad.Aff (Aff)
import Control.Monad.Aff.AVar (AVAR)
import Control.Monad.Eff (Eff)
import Control.Monad.Eff.Exception (EXCEPTION)
import Control.Monad.Eff.Var (($=))
import Data.Array as Array
import Data.Either (Either(..))
import Data.Maybe (Maybe(..))
import Halogen as H
import Halogen.HTML.Indexed as HH
import Halogen.Util (runHalogenAff, awaitBody)
import WebSocket (WEBSOCKET, Connection(..), Message(..), URL(..), runMessageEvent, runMessage, newWebSocket)

----------------------------------------------------------------------------
-- Halogen component. This just displays a list of messages and has a query
-- to accept new messages.
----------------------------------------------------------------------------

type State = { messages :: Array String }

initialState :: State
initialState = { messages: [] }

data Query a = AddMessage String a

ui :: forall g. H.Component State Query g
ui = H.component { render, eval }
  where
  render :: State -> H.ComponentHTML Query
  render state =
    HH.ol_ $ map (\msg -> HH.li_ [ HH.text msg ]) state.messages

  eval :: Query ~> H.ComponentDSL State Query g
  eval (AddMessage msg next) = do
    H.modify \st -> { messages: st.messages `Array.snoc` msg }
    pure next

----------------------------------------------------------------------------
-- Websocket coroutine producer. This uses `purescript-aff-coroutines` to
-- create a producer of messages from a websocket.
----------------------------------------------------------------------------

wsProducer :: forall eff. Producer String (Aff (avar :: AVAR, err :: EXCEPTION, ws :: WEBSOCKET | eff)) Unit
wsProducer = produce \emit -> do
  Connection socket <- newWebSocket (URL "ws://echo.websocket.org") []

  -- This part is probably unnecessary in the real world, but it gives us
  -- some messages to consume when using the echo service
  socket.onopen $= \event -> do
    socket.send (Message "hello")
    socket.send (Message "something")
    socket.send (Message "goodbye")

  socket.onmessage $= \event -> do
    emit $ Left $ runMessage (runMessageEvent event)

----------------------------------------------------------------------------
-- Coroutine consumer. This accepts a Halogen driver function and sends
-- `AddMessage` queries in when the coroutine consumes an input.
----------------------------------------------------------------------------

wsConsumer
  :: forall eff
   . (Query ~> Aff (H.HalogenEffects (ws :: WEBSOCKET | eff)))
  -> Consumer String (Aff (H.HalogenEffects (ws :: WEBSOCKET | eff))) Unit
wsConsumer driver = consumer \msg -> do
  driver $ H.action $ AddMessage msg
  pure Nothing

----------------------------------------------------------------------------
-- Normal Halogen-style `main`, the only addition is a use of `runProcess`
-- to connect the producer and consumer and start sending messages to the
-- Halogen component.
----------------------------------------------------------------------------

main :: forall eff. Eff (H.HalogenEffects (ws :: WEBSOCKET | eff)) Unit
main = runHalogenAff do
  body <- awaitBody
  driver <- H.runUI ui initialState body
  runProcess (wsProducer $$ wsConsumer driver)
  pure unit
This should give you a page that almost immediately prints:
hello
something
goodbye
But it is doing everything you need, honest! If you use the producer with a "real" source you'll get something more like what you need.

For an unfinished TCP 3-way handshake, why does the Windows OS report the FD_ACCEPT event to the application?

Test Scenario
I wrote a Windows program which I simply call "simpleServer.exe". This program is just a simulation of a very basic server application: it listens on a port and waits for incoming messages. The listening socket is defined as a TCP stream socket. That's all this program does.
I deployed this exact same program on two different machines, both running Windows 7 Professional 64-bit. These machines act as hosts, and they are stationed on the same network.
Then, using the program nmap from another machine on the same network acting as the client, I ran a SYN scan (the "-sS" option) against the IP and port of the listening simpleServer on each host (one attempt at a time).
(Note that both hosts already had Wireshark running, monitoring TCP packets from the client's IP and to the listening port.)
In Wireshark, on both machines, I saw the TCP packets expected for a SYN scan:
client ----(SYN)----> host
client <--(SYN/ACK)-- host
client ----(RST)----> host
The above packet exchange suggests that the connection was not established.
But only one of the two simpleServer.exe instances printed "new incoming connection" in its logs; the other instance was not alerted of any new incoming connection, hence no logs at all.
Code Snippets
iRetVal = WSAEventSelect( m_Socket, m_hSocketEvent, FD_ACCEPT );
if ( SOCKET_ERROR == iRetVal )
{
    if ( WSAGetLastError() == WSAENOTSOCK )
    {
        return E_SOCKET_INVALID;
    }
    CHKLOGGER( m_pLogger->Log( LOGGER_LOG_ERROR, "GHLSocket::OnAccept() Error while WSAEventSelect(). Error code: ", WSAGetLastError() ) );
#if defined GHLSOCKET_DEBUG_VERSION
    printf( "Error while WSAEventSelect(). Error code: %ld\n", WSAGetLastError() );
#endif
    return E_FAILED_RECV_DATA;
}

// Wait for network events to occur.
dwRetVal = WSAWaitForMultipleEvents( 1,
                                     &m_hSocketEvent,
                                     FALSE,
                                     lTimeout,
                                     TRUE );
if ( WSA_WAIT_TIMEOUT == dwRetVal )
{
    dwReturn = E_TIMEOUT;
    goto CleanUp;
}
if ( WSA_WAIT_FAILED == dwRetVal )
{
    CHKLOGGER( m_pLogger->Log( LOGGER_LOG_ERROR, "GHLSocket::OnAccept() WSAWaitForMultipleEvents() failed. Error code: ", WSAGetLastError() ) );
#if defined GHLSOCKET_DEBUG_VERSION
    printf( "Error in WSAWaitForMultipleEvents() failed. Error code: %ld\n", WSAGetLastError() );
#endif
    dwReturn = E_FAILED_RECV_DATA;
    goto CleanUp;
}

// Parse the results from the network events.
iRetVal = WSAEnumNetworkEvents( m_Socket, m_hSocketEvent, &mEvents );
if ( SOCKET_ERROR == iRetVal )
{
    CHKLOGGER( m_pLogger->Log( LOGGER_LOG_ERROR, "GHLSocket::OnAccept() Error while WSAEnumNetworkEvents(). Error code: ", WSAGetLastError() ) );
#if defined GHLSOCKET_DEBUG_VERSION
    printf( "Error while WSAEnumNetworkEvents(). Error code: %ld\n", WSAGetLastError() );
#endif
    dwReturn = E_FAILED_RECV_DATA;
    goto CleanUp;
}

// ACCEPT event detected.
if ( mEvents.lNetworkEvents & FD_ACCEPT )
{
    // Perform the accept operation.
    *p_SOCKET = accept( m_Socket, NULL, NULL );
}
Help That I Needed
Why the different behavior between two instances of the same application, on different machines with the same OS?

why does nmap show that my tcp server is not listening on the port it should be?

I intend to build on this code, found here
However, I noticed that I can telnet to this server on the local host but can't from another computer. A quick nmap scan reported that nothing was listening on the port I had selected.
For purposes of troubleshooting, I had shut down my firewall, so I've ruled that out as a possible problem.
Clues from haskell windows programmers would be appreciated.
It seems the socket got bound to localhost (127.0.0.1); that's why you are not able to connect to it from another machine and can only connect from the local machine. Create the socket first and then use the bind API to bind it to "any address" (iNADDR_ANY), which binds the socket to every interface available on the local machine.
This is for future new haskellers.
I based my code on this example.
I made improvements based on this reddit thread, and suggestions made above. The import statements are still sloppy, but fixing them is left as the proverbial "exercise for the reader". I invite any additional suggestions leading to improvements.
import Network.Socket
import Control.Monad
import Network
import System.Environment (getArgs)
import System.IO
import Control.Concurrent (forkIO)

main :: IO ()
main = withSocketsDo $ do
    [portStr] <- getArgs
    sock <- socket AF_INET Stream defaultProtocol
    let port = fromIntegral (read portStr :: Int)
        socketAddress = SockAddrInet port 0000
    bindSocket sock socketAddress
    listen sock 1
    putStrLn $ "Listening on " ++ show port
    sockHandler sock

sockHandler :: Socket -> IO ()
sockHandler sock' = forever $ do
    (sock, _) <- Network.Socket.accept sock'
    handle <- socketToHandle sock ReadWriteMode
    hSetBuffering handle NoBuffering
    forkIO $ commandProcessor handle

commandProcessor :: Handle -> IO ()
commandProcessor handle = forever $ do
    line <- hGetLine handle
    let (cmd:arg) = words line
    case cmd of
        "echo" -> echoCommand handle arg
        "add"  -> addCommand handle arg
        _      -> hPutStrLn handle "Unknown command"

echoCommand :: Handle -> [String] -> IO ()
echoCommand handle arg =
    hPutStrLn handle (unwords arg)

addCommand :: Handle -> [String] -> IO ()
addCommand handle [x, y] =
    hPutStrLn handle $ show $ read x + read y
addCommand handle _ =
    hPutStrLn handle "usage: add Int Int"
I usually go with
netstat -an | grep LISTEN
If you see the port listed, something is listening. I can't remember offhand what the lsof command for sockets is, and Google isn't giving up the goods.
