IBM WebSphere MQ connection failure - ibm-mq

I am able to connect to the queue manager from MQ Explorer (Eclipse) with all user accounts. But when I try to connect from a C client with my user account ('muthu'), it throws the error "MQCONN ended with reason code 2035". Note: I am able to access the queue manager using the same C client code when running it as the root user (which is part of the mqm group).
My current setup is:
ALTER QMGR PSNPRES(SAFE)
ALTER QMGR PSMODE (ENABLED)
ALTER QMGR CHLAUTH(DISABLED)
SET CHLAUTH(*) TYPE(BLOCKUSER) USERLIST('*NOACCESS')
DEFINE CHANNEL(SYSTEM.ADMIN.SVRCONN) CHLTYPE(SVRCONN) MCAUSER('sampleuser') REPLACE
ALTER AUTHINFO(SYSTEM.DEFAULT.AUTHINFO.IDPWOS) AUTHTYPE(IDPWOS) CHCKCLNT(OPTIONAL)
REFRESH SECURITY TYPE(CONNAUTH)
The following is the log found in AMQERR01.LOG:
----- cmqxrsrv.c : 2363 -------------------------------------------------------
02/22/17 13:51:13 - Process(353.6) User(root) Program(amqrmppa)
Host(ec060cda2b57) Installation(Installation1)
VRMF(9.0.0.0) QMgr(TMVDEVQM)
AMQ9788: Slow DNS lookup for address '172.17.0.1'.
EXPLANATION:
An attempt to resolve address '172.17.0.1' using the 'getnameinfo' function
call took 20 seconds to complete. This might indicate a problem with the DNS
configuration.
ACTION:
Ensure that DNS is correctly configured on the local system.
If the address was an IP address then the slow operation was a reverse DNS
lookup. Some DNS configurations are not capable of reverse DNS lookups and some
IP addresses have no valid reverse DNS entries. If the problem persists,
consider disabling reverse DNS lookups until the issue with the DNS can be
resolved.
----- amqcrhna.c : 794 --------------------------------------------------------
02/22/17 13:51:33 - Process(353.6) User(root) Program(amqrmppa)
Host(ec060cda2b57) Installation(Installation1)
VRMF(9.0.0.0) QMgr(TMVDEVQM)
AMQ9788: Slow DNS lookup for address '172.17.0.1'.
EXPLANATION:
An attempt to resolve address '172.17.0.1' using the 'getnameinfo' function
call took 20 seconds to complete. This might indicate a problem with the DNS
configuration.
ACTION:
Ensure that DNS is correctly configured on the local system.
If the address was an IP address then the slow operation was a reverse DNS
lookup. Some DNS configurations are not capable of reverse DNS lookups and some
IP addresses have no valid reverse DNS entries. If the problem persists,
consider disabling reverse DNS lookups until the issue with the DNS can be
resolved.
----- amqcrhna.c : 794 --------------------------------------------------------
02/22/17 13:51:33 - Process(353.6) User(root) Program(amqrmppa)
Host(ec060cda2b57) Installation(Installation1)
VRMF(9.0.0.0) QMgr(TMVDEVQM)
AMQ9209: Connection to host '172.17.0.1' for channel 'SYSTEM.DEF.SVRCONN'
closed.
EXPLANATION:
An error occurred receiving data from '172.17.0.1' over TCP/IP. The connection
to the remote host has unexpectedly terminated.
The channel name is 'SYSTEM.DEF.SVRCONN'; in some cases it cannot be determined
and so is shown as '????'.
ACTION:
Tell the systems administrator.
C code gist:
int pub(char *topic_name, char *queue_manager_name, char *message)
{
    /* Declare file and character for sample input */
    FILE *fp;

    /* Declare MQI structures needed */
    MQOD od = {MQOD_DEFAULT};    /* Object Descriptor   */
    MQMD md = {MQMD_DEFAULT};    /* Message Descriptor  */
    MQPMO pmo = {MQPMO_DEFAULT}; /* put message options */
    /** note, sample uses defaults where it can **/

    MQHCONN Hcon;     /* connection handle      */
    MQHOBJ Hobj;      /* object handle          */
    MQLONG CompCode;  /* completion code        */
    MQLONG OpenCode;  /* MQOPEN completion code */
    MQLONG Reason;    /* reason code            */
    MQLONG CReason;   /* reason code for MQCONN */
    MQLONG messlen;   /* message length         */
    char buffer[100]; /* message buffer         */
    char QMName[50];  /* queue manager name     */

    QMName[0] = 0;    /* default */
    strncpy(QMName, queue_manager_name, MQ_Q_MGR_NAME_LENGTH);
    pmo.Options = MQPMO_FAIL_IF_QUIESCING
                | MQPMO_NO_SYNCPOINT;

    /******************************************************************/
    /*                                                                */
    /*   Connect to queue manager                                     */
    /*                                                                */
    /******************************************************************/
    MQCONN(QMName,     /* queue manager     */
           &Hcon,      /* connection handle */
           &CompCode,  /* completion code   */
           &CReason);  /* reason code       */

    /* report reason and stop if it failed */
    if (CompCode == MQCC_FAILED)
    {
        printf("MQCONN ended with reason code %d\n", CReason);
        return (int)CReason;
    }
.................
Any thoughts?
This is a continuation of my issue from Provide anonymous access to IBM WebSphere MQ.

Looking at your MQSC code I see you have defined a channel called SYSTEM.ADMIN.SVRCONN; however, the logs show a channel called SYSTEM.DEF.SVRCONN closing (following the connection failure).
Given that the sample program you posted does not set the MQCNO structure (which is the way to pass a channel name programmatically) and you haven't mentioned a CCDT (which is the other way), I suspect your MQSERVER environment variable is incorrect.
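For illustration, here is a minimal sketch of the MQCNO route, dropping into the pub() gist above; the connection name 'myhost(1414)' is a placeholder for your own host and listener port. (The equivalent MQSERVER setting would be MQSERVER='SYSTEM.ADMIN.SVRCONN/TCP/myhost(1414)'.)

/* Sketch: pass the channel name programmatically via MQCNO/MQCD. */
/* Requires cmqxc.h in addition to cmqc.h.                        */
MQCNO cno = {MQCNO_DEFAULT};
MQCD  cd  = {MQCD_CLIENT_CONN_DEFAULT};

strncpy(cd.ChannelName, "SYSTEM.ADMIN.SVRCONN", MQ_CHANNEL_NAME_LENGTH);
strncpy(cd.ConnectionName, "myhost(1414)", MQ_CONN_NAME_LENGTH); /* placeholder */

cno.ClientConnPtr = &cd;       /* use this client-connection definition  */
cno.Version = MQCNO_VERSION_2; /* ClientConnPtr needs version 2 or later */

MQCONNX(QMName,     /* queue manager     */
        &cno,       /* connect options   */
        &Hcon,      /* connection handle */
        &CompCode,  /* completion code   */
        &CReason);  /* reason code       */

This calls MQCONNX instead of MQCONN; everything after the connect stays the same as in the gist.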

Related

why the "loadBalancingPolicy“ must be used when "healthCheckConfig" in grpc

The code files are: client and server.
The code in question:
var serviceConfig = `{
    "loadBalancingPolicy": "round_robin",
    "healthCheckConfig": {
        "serviceName": ""
    }
}`
Test steps:
1. Run only one server and one client.
2. When using "loadBalancingPolicy": "round_robin", the client can detect the server's "status=NOT_SERVING".
3. When "loadBalancingPolicy": "round_robin" is removed, or "pick_first" is used, the client cannot detect the server's "status=NOT_SERVING".
Health checking is meaningful when there are multiple server addresses; with only one address there is no need to check health status. That is why the round_robin load-balancing policy works together with health checking.
round_robin checks health status, so it sends requests to READY addresses one after another.
The pick_first policy does not support health checking; it uses the first successfully connected server, so every request goes to that single address.
You can read the documentation on health checking and load-balancing policy in LB Policies Can Disable Health Checking When Needed.
To debug the client and server, you can set the environment variables GRPC_GO_LOG_SEVERITY_LEVEL=info and GRPC_GO_LOG_VERBOSITY_LEVEL=99 for more detail on transport and connection events.
When I read the source code carefully, I understood the internal implementation.
pick_first
It implements balancer.Builder and balancer.Balancer itself.
All of ResolverState.Addresses go into a single SubConn; the SubConn holds an addrConn, and the ClientTransport is created with the first address that connects.
Each call to Pick() returns a fixed balancer.PickResult.
round_robin
It passes HealthCheck: true and returns a baseBuilder as the Builder via base.NewBalancerBuilder().
Each address in ResolverState.Addresses creates its own SubConn.
Each call to Pick() advances the internal next index, picks from []balancer.SubConn, and returns a new balancer.PickResult.

Xilinx: send data via UART to a ZedBoard

I am using a ZedBoard, which has a Zynq-7000 All Programmable SoC. I am trying one of the provided examples (it can be imported from the Xilinx SDK) called xuartps_intr_example.c.
This file contains a UART driver used in interrupt mode. The application sends data and expects to receive the same data through the device using local loopback mode.
I would like to adapt that code so that I can send data to my ZedBoard from a terminal or some kind of program that implements serial communication. I have tried using the XUartPs_Recv function to receive data from outside, and sending data from different terminals on my PC (after disabling the console in the Xilinx SDK, since otherwise the serial port is not accessible), but the board is not receiving anything. On the other hand, sending data from the ZedBoard to my PC works properly, and I can see the messages coming from the board in different terminals.
I have attached the source code that I can't quite understand and that I think is giving me problems. My questions are preceded by a hash sign:
XUartPs_SetInterruptMask(UartInstPtr, IntrMask);
XUartPs_SetOperMode(UartInstPtr, XUARTPS_OPER_MODE_LOCAL_LOOP);
#WHAT IS LOCAL LOOPBACK MODE?? DOES THIS PREVENT THE BOARD FROM RECEIVING DATA COMING FROM MY PC?
/*
* Set the receiver timeout. If it is not set, and the last few bytes
* of data do not trigger the over-water or full interrupt, the bytes
* will not be received. By default it is disabled.
*
* The setting of 8 will timeout after 8 x 4 = 32 character times.
* Increase the time out value if baud rate is high, decrease it if
* baud rate is low.
*/
XUartPs_SetRecvTimeout(UartInstPtr, 8);
/*
 * Initialize the send buffer bytes with a pattern and the
 * receive buffer bytes to zero to allow the receive data to be
 * verified
 */
for (Index = 0; Index < TEST_BUFFER_SIZE; Index++) {
    SendBuffer[Index] = (Index % 26) + 'A';
    RecvBuffer[Index] = 0;
}
/*
* Start receiving data before sending it since there is a loopback,
* ignoring the number of bytes received as the return value since we
* know it will be zero
*/
XUartPs_Recv(UartInstPtr, RecvBuffer, TEST_BUFFER_SIZE);
/*
* Send the buffer using the UART and ignore the number of bytes sent
* as the return value since we are using it in interrupt mode.
*/
XUartPs_Send(UartInstPtr, SendBuffer, TEST_BUFFER_SIZE);
#HOW DOES THIS ACTUALLY WORK? WHY DO WE START RECEIVING BEFORE WE SEND ANY BYTES?
/*
* Wait for the entire buffer to be received, letting the interrupt
* processing work in the background, this function may get locked
* up in this loop if the interrupts are not working correctly.
*/
while (1) {
    if ((TotalSentCount == TEST_BUFFER_SIZE) &&
        (TotalReceivedCount == TEST_BUFFER_SIZE)) {
        break;
    }
}
/* Verify the entire receive buffer was successfully received */
for (Index = 0; Index < TEST_BUFFER_SIZE; Index++) {
    if (RecvBuffer[Index] != SendBuffer[Index]) {
        BadByteCount++;
    }
}
/* Set the UART in Normal Mode */
XUartPs_SetOperMode(UartInstPtr, XUARTPS_OPER_MODE_NORMAL);
#WHAT DOES SETTING THE UART IN NORMAL MODE MEAN??
Also, I would like to know whether sending data via the SDK Terminal (Xilinx SDK) to the ZedBoard is possible. So far, every attempt has been unsuccessful.
Thank you in advance
Christian
To receive data on the ZedBoard from a PC terminal, you have to be in normal operation mode, which is the default mode when the PS starts up. Have a look at the Zynq-7000 Technical Reference Manual, page 598, Figure 19-7, to understand the UART operation modes.
LOCAL LOOPBACK MODE does not send anything to the outside application; it just loops its transmitted stream back to itself.
We do not actually start receiving data; we start waiting to receive data.
As #Cesar has correctly mentioned, just change XUARTPS_OPER_MODE_LOCAL_LOOP to XUARTPS_OPER_MODE_NORMAL at the beginning of the code, and you are good to go. NORMAL MODE sends data to the outside application.
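In other words, the fix is this one call, a minimal sketch of the change described above (same call site as in the example code):

/* Normal mode connects the UART RX/TX to the external pins instead of
   looping TX back to RX internally. */
XUartPs_SetOperMode(UartInstPtr, XUARTPS_OPER_MODE_NORMAL);

After this change, XUartPs_Recv should see bytes sent from a PC terminal rather than the board's own transmitted data.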

Libevent does not echo properly when there is a delay

Based on the following code, I built a version of an echo server, but with a threaded delay. I built it because I noticed that upon initial connection, my first send is sent back to the client, but the client does not receive it until a second send. My real-world use case is that I need to send messages to the server, do a lot of processing, and then send the result back, say 10-30 seconds later (it could be hours in some cases).
http://www.wangafu.net/~nickm/libevent-book/Ref8_listener.html
So here is my code. For brevity's sake, I have only included the libevent-related code, not the threading code or other stuff. When debugging, a new connection is set up, the string buffer is filled properly, and debugging reveals that the writes go through successfully.
http://pastebin.com/g02S2RTi
But I only receive the echo from the send-before-last. I send numbers from the client to validate this: when I send a 1 from the client, I receive nothing back from the server via echo, even though the server is definitely writing to the buffer using evbuffer_add (I have also tried this using bufferevent_write_buffer).
When I then send a 2 from the client, I receive the 1 from the previous send. It's like my writes are being cached. I have turned off Nagle.
So, my question is: Does libevent cache sends using the following method?
evbuffer_add( outputBuffer, buffer, length );
Is there a way to flush this cache? Is there some other method to mark the cache as finished or complete? Can I force a send? It never sends on its own; I have even put in delays. Replacing evbuffer_add with send works perfectly every time.
Most likely you are affected by the Nagle algorithm: it buffers outgoing data before sending it to the network. Take a look at this article: TCP/IP options for high-performance data transmission.
Here is an example how to disable buffering:
int flag = 1;
int result = setsockopt(sock,            /* socket affected              */
                        IPPROTO_TCP,     /* set option at TCP level      */
                        TCP_NODELAY,     /* name of option               */
                        (char *) &flag,  /* the cast is historical cruft */
                        sizeof(int));    /* length of option value       */
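Since the question uses bufferevents, here is a sketch of my own (an assumption, not part of the original answer) of applying that option to the descriptor behind an existing socket-based bufferevent named bev:

#include <event2/bufferevent.h>
#include <event2/util.h>
#include <netinet/tcp.h>

/* Sketch: disable Nagle on the fd behind a libevent bufferevent.
   'bev' is assumed to be an already-connected socket bufferevent. */
evutil_socket_t fd = bufferevent_getfd(bev);
int flag = 1;
setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, (char *) &flag, sizeof(flag));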

Boost::asio UDP Broadcast with ephemeral port

I'm having trouble with UDP broadcast transactions under boost::asio, related to the following code snippet. Since I'm trying to broadcast in this instance, deviceIP = "255.255.255.255". devicePort is a specified management port for my device. I want to use an ephemeral local port, so I would prefer, if at all possible, not to have to socket.bind() after the connection; the code supports this for unicast by setting localPort = 0.
boost::asio::ip::address_v4 targetIP = boost::asio::ip::address_v4::from_string(deviceIP);
m_targetEndPoint = boost::asio::ip::udp::endpoint(targetIP, devicePort);

m_ioServicePtr = boost::shared_ptr<boost::asio::io_service>(new boost::asio::io_service);
m_socketPtr = boost::shared_ptr<boost::asio::ip::udp::socket>(new boost::asio::ip::udp::socket(*m_ioServicePtr));

m_socketPtr->open(m_targetEndPoint.protocol());
m_socketPtr->set_option(boost::asio::socket_base::broadcast(true));

// If no local port is specified, the default parameter is 0.
// If a local port is specified, bind to that port.
if (localPort != 0)
{
    boost::asio::ip::udp::endpoint localEndpoint(boost::asio::ip::address_v4::any(), localPort);
    m_socketPtr->bind(localEndpoint);
}

if (m_forceConnect)
    m_socketPtr->connect(m_targetEndPoint);

this->AsyncReceive(); // Register async receive callback and buffer

// Start a thread running the io_service processing loop
m_socketThread = boost::shared_ptr<boost::thread>(new boost::thread(boost::bind(&MyNetworkBase::RunSocketThread, this)));
No matter what I do with the following settings, the transmit works fine, and I can use Wireshark to see the response packets coming back from the device as expected. These response packets are also broadcasts, as the device may be on a different subnet from the PC searching for it.
The issues are extremely strange to my mind, but are as follows:
If I specify the local port and set m_forceConnect = false, everything works fine, and my receive callback fires appropriately.
If I set m_forceConnect = true in the constructor but pass in a local port of 0, the transmit works fine, but my receive callback never fires. I would assume this is because the 'target' (m_targetEndPoint) is 255.255.255.255, and since the device has a real IP, the response packet gets filtered out.
(What I actually want:) If m_forceConnect = false (and data is transmitted using a send_to call) and localPort = 0, thereby taking an ephemeral port, my RX callback immediately fires with error code 10022, which I believe is an "invalid argument" socket error.
Can anyone suggest why I can't use the connection in this manner (not explicitly bound and not explicitly connected)? I obviously don't want to use socket.connect() in this case, as I want to respond to anything I receive. I also don't want to use a predefined port, as I want the user to be able to construct multiple copies of this object without port conflicts.
As some people may have noticed, the overall aim of this is to use the same network-interface base class to handle both the unicast and broadcast cases. Obviously for the unicast version I can perfectly happily call m_socket->connect(), as I know the device's IP, and I receive the responses since they're from the connected IP address; therefore I set m_forceConnect = true, and it all just works.
As all my transmits use send_to, I have also tried socket.connect(endpoint(ip::address_v4::any(), devicePort)), but I get a 'The requested address is not valid in its context' exception when I try it.
I've tried a pretty serious hack:
boost::asio::ip::udp::endpoint localEndpoint(boost::asio::ip::address_v4::any(), m_socketPtr->local_endpoint().port());
m_socketPtr->bind(localEndpoint);
where I extract the initial ephemeral port number and attempt to bind to it, but funnily enough that throws an Invalid Argument exception when I try to bind.
OK, I found a solution to this issue. Under Linux it's not necessary, but under Windows I discovered that if you are neither binding nor connecting, you must have transmitted something before you call async_receive_from(), which is invoked inside my this->AsyncReceive() method.
My solution: make a dummy transmission of an empty string immediately before making the async receive call under Windows, so the modified code becomes:
m_socketPtr->set_option(boost::asio::socket_base::broadcast(true));

// If no local port is specified, the default parameter is 0.
// If a local port is specified, bind to that port.
if (localPort != 0)
{
    boost::asio::ip::udp::endpoint localEndpoint(boost::asio::ip::address_v4::any(), localPort);
    m_socketPtr->bind(localEndpoint);
}

if (m_forceConnect)
    m_socketPtr->connect(m_targetEndPoint);

// A dummy TX is required for the socket to acquire the local port properly under Windows.
// Transmitting an empty string works fine for this, but the TX must take place BEFORE
// the first call to async_receive_from(...).
#ifdef WIN32
m_socketPtr->send_to(boost::asio::buffer("", 0), m_targetEndPoint);
#endif

this->AsyncReceive(); // Register async receive callback and buffer
m_socketThread = boost::shared_ptr<boost::thread>(new boost::thread(boost::bind(&MyNetworkBase::RunSocketThread, this)));
It's a bit of a hack in my book, but it is a lot better than implementing all the machinery needed to defer the async receive call until after the first transmission.
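As an aside (my own suggestion, not part of the original answer, and untested here): explicitly binding to port 0 should make the OS assign the ephemeral local port up front, so the socket is already bound when the first receive is posted:

// Hypothetical alternative: bind to port 0 so the OS assigns an
// ephemeral local port immediately, before the first async receive.
boost::asio::ip::udp::endpoint anyEndpoint(boost::asio::ip::address_v4::any(), 0);
m_socketPtr->bind(anyEndpoint);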

WinSock recv() timeout: setsockopt()-set value + half a second?

I am writing a cross-platform library which, among other things, provides a socket interface, and while running my unit-test suite, I noticed something strange with regard to timeouts set via setsockopt(): On Windows, a blocking recv() call seems to consistently return about half a second (500 ms) later than specified via the SO_RCVTIMEO option.
Is there any explanation for this in the docs that I missed? Searching the web, I was only able to find a single other reference to the problem – could somebody who owns »Windows Sockets Network Programming« by Bob Quinn and Dave Shute look up page 466 for me? Unfortunately, I can only run my tests on Windows Server 2008 R2 right now; does the same strange behavior exist on other Windows versions as well?
From Network Programming for Microsoft Windows by Jones and Ohlund:

SO_RCVTIMEO
optval type: int
Get/Set: both
Winsock version: 1+
Description: Gets or sets the timeout value (in milliseconds) associated with receiving data on the socket.

The SO_RCVTIMEO option sets the receive timeout value on a blocking socket. The timeout value is an integer in milliseconds that indicates how long a Winsock receive function should block when attempting to receive data. If you need to use the SO_RCVTIMEO option and you use the WSASocket function to create the socket, you must specify WSA_FLAG_OVERLAPPED as part of WSASocket's dwFlags parameter. Subsequent calls to any Winsock receive function (such as recv, recvfrom, WSARecv, or WSARecvFrom) block only for the amount of time specified. If no data arrives within that time, the call fails with the error 10060 (WSAETIMEDOUT). If the receive operation does time out, the socket is in an indeterminate state and should not be used.

For performance reasons, this option was disabled in Windows CE 2.1. If you attempt to set this option, it is silently ignored and no failure returns. Previous versions of Windows CE do implement this option.
I'd think the crucial information in this is:

If you need to use the SO_RCVTIMEO option and you use the WSASocket function to create the socket, you must specify WSA_FLAG_OVERLAPPED as part of WSASocket's dwFlags parameter.

I hope this is still useful :)
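A minimal sketch of that combination (the 2000 ms timeout is an arbitrary example value):

#include <winsock2.h>

/* Sketch: create the socket with WSA_FLAG_OVERLAPPED, as the book
   requires for SO_RCVTIMEO, then set a receive timeout in ms. */
SOCKET s = WSASocket(AF_INET, SOCK_STREAM, IPPROTO_TCP,
                     NULL, 0, WSA_FLAG_OVERLAPPED);
DWORD timeoutMs = 2000; /* arbitrary example value */
setsockopt(s, SOL_SOCKET, SO_RCVTIMEO,
           (const char *) &timeoutMs, sizeof(timeoutMs));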
I am having the same problem. I am going to use
patchedTimeout = max(unpatchedTimeout - 500, 1)
Tested this with unpatchedTimeout == 850.
Your problem is not in the recv function's timeout!
If your application has a while loop to check and receive, just put in an if statement that checks the last index of the receive buffer for the '\0' character, to see whether the received string has ended.
Typically, while recv is still receiving, its return value is the size of the received data, so that size can be used to find the last index of the buffer array:
do {
    result = recv(s, buf, len, 0);
    /* result is the number of bytes received, so the last received
       byte sits at index result - 1 */
    if (result > 0 && buf[result - 1] == '\0') {
        break;
    }
} while (result > 0);
