How to suppress "ERROR message: short read (SSL routines, SSL routines), value: 335544539" - boost

Reference:
websocket_client_sync_ssl.cpp
// Read a message into our buffer
ws.read(buffer);
// Close the WebSocket connection
ws.close(websocket::close_code::normal);
Based on my test, ws.close emits the warning below:
ERROR message: short read (SSL routines, SSL routines), value: 335544539
Based on this post about the short read, this error can be safely ignored at the end of the session. I have tried the following method to suppress the warning:
try
{
    boost::system::error_code close_ec;
    ws.close(websocket::close_code::normal, close_ec);
    if (close_ec)
    {
        std::cerr << "ERROR message: " << close_ec.message()
                  << ", value: " << close_ec.value() << std::endl;
    }
}
catch (...)
{
}
However, the ws.close still prints out the warning message.
Question> Is there a way that I can suppress this message?

However, the ws.close still prints out the warning message.
Are you sure? It looks like that's simply coming from the line:
std::cerr << "ERROR message: " << close_ec.message() << ", value: " << close_ec.value() << std::endl;
So, you would check the value of close_ec and conditionally handle it: Short read error - Boost asio synchronous https call
Also, note that some kinds of "short reads" can constitute security errors. Some of the samples have very insightful comments about this:
// `net::ssl::error::stream_truncated`, also known as an SSL "short read",
// indicates the peer closed the connection without performing the
// required closing handshake (for example, Google does this to
// improve performance). Generally this can be a security issue,
// but if your communication protocol is self-terminated (as
// it is with both HTTP and WebSocket) then you may simply
// ignore the lack of close_notify:
//
// https://github.com/boostorg/beast/issues/38
//
// https://security.stackexchange.com/questions/91435/how-to-handle-a-malicious-ssl-tls-shutdown
//
// When a short read would cut off the end of an HTTP message,
// Beast returns the error beast::http::error::partial_message.
// Therefore, if we see a short read here, it has occurred
// after the message has been completed, so it is safe to ignore it.
if (ec == net::ssl::error::stream_truncated)
    ec = {};
else if (ec)
    return;
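Applied to the close call from the question, a minimal sketch of that check might look like this (assuming ws is a Boost.Beast websocket::stream layered over boost::asio::ssl::stream, so the missing close_notify surfaces as ssl::error::stream_truncated):

boost::system::error_code close_ec;
ws.close(websocket::close_code::normal, close_ec);

// The peer skipping the TLS close_notify shows up as an SSL "short read".
// WebSocket is self-terminated, so it is safe to treat this as success.
if (close_ec == boost::asio::ssl::error::stream_truncated)
    close_ec = {};

if (close_ec)
    std::cerr << "ERROR message: " << close_ec.message()
              << ", value: " << close_ec.value() << std::endl;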

Related

OMNeT++: Different results in 'fast' or 'express' mode

Used Versions: OMNeT++ 5.0 with iNET 3.4.0
I created some code which gives me reliable results in ‘step-by-step’ or ‘animated’ simulation mode. The moment I change to ‘fast’ or ‘express’ mode, it gets buggy. The following simplified example will explain my problem:
void MyMacSlave::handleSelfMessage(cMessage *msg)
{
    if (msg == CheckAck) {
        std::cout << "CheckAck: " << msg << std::endl;
    }
    if (msg == transmissionAnnouncement) {
        std::cout << "transmissionAnncouncement: " << msg << std::endl;
    }
    if (msg == transmissionEvent) {
        std::cout << "transmissionEvent: " << msg << std::endl;
    }
    delete msg;
}
This function is called to handle self-messages. Depending on which self-message I receive, I need to run different if checks.
I get this correct output in step-by-step or animated mode:
CheckAck: (omnetpp::cMessage)CheckAck
transmissionAnncouncement: (omnetpp::cMessage)transmissionAnncouncement
transmissionEvent: (omnetpp::cMessage)transmissionEvent
And this is the strange output I get using fast or express mode:
CheckAck: (omnetpp::cMessage)CheckAck
transmissionAnncouncement: (omnetpp::cMessage)transmissionAnncouncement
transmissionAnncouncement: (omnetpp::cMessage)transmissionEvent
transmissionEvent: (omnetpp::cMessage)transmissionEvent
The third output line shows that the self-message is ‘transmissionEvent’, but the ‘if (msg == transmissionAnnouncement)’ is mistakenly considered as true as well.
As shown above I get different simulation results, depending on the simulation mode I am using. What is the reason for the different output? Why is there even a difference?
As Christoph and Rudi mentioned, there was something wrong with the memory allocation: when a pointer is de-allocated and a new object is later allocated at the same address, comparisons against the old pointer can match the wrong message. The different results across running modes are just a symptom of errors of that kind.
In my case it was useful to check for message-kinds like:
if (msg->getKind() == checkAckAckType) {
instead of the pointer comparison used in the original question. I defined the message kinds using a simple enum; a minimal sketch follows below.
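A rough sketch of that approach (the enum values and the message creation line are assumptions for illustration; only checkAckAckType appears in the original code):

#include <omnetpp.h>
#include <iostream>
using namespace omnetpp;

// Hypothetical kind values.
enum SelfMessageKind { checkAckAckType = 1, transmissionAnnouncementType, transmissionEventType };

// Create each self-message with an explicit kind, e.g.
//   CheckAck = new cMessage("CheckAck", checkAckAckType);
// and dispatch on the kind instead of comparing pointers:
void MyMacSlave::handleSelfMessage(cMessage *msg)
{
    if (msg->getKind() == checkAckAckType) {
        std::cout << "CheckAck: " << msg << std::endl;
    }
    delete msg;
}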

outputting a buffer stream

I am simply trying to display a message I receive from a TCP socket, which terminates with "\r\n\r\n".
The program, however, terminates immediately even though the server indicates that the message was transmitted successfully.
void handle_read(const boost::system::error_code& error, size_t bytes_transferred)
{
    std::istream response_stream(&response_);
    std::string incoming;
    std::string res_time = make_daytime_string();
    while (std::getline(response_stream, incoming) && incoming != "\r")
        std::cout << incoming << std::endl;
    std::cout << "message received on " << res_time << std::endl;
}
In Eclipse I see the following in the console:
(exit value = -1)
When the program terminates, if I switch to the Linux console I see the following error:
*** Error in `/home/administrator/Documents/eclipse/Projects/Asynchronous_TCP/Debug/Asynchronous_TCP': double free or corruption (!prev): 0x0000000001040580 ***
Solved
Apparently I only needed to resolve the socket connection.
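For reference, a minimal sketch of reading a "\r\n\r\n"-terminated message with Boost.Asio, keeping the socket and buffer alive as members of a connection object (Client is a made-up name; the resolve/connect steps are omitted):

#include <boost/asio.hpp>
#include <iostream>
#include <string>

class Client
{
public:
    explicit Client(boost::asio::io_context& io) : socket_(io) {}

    // Ask Asio to buffer data until the blank-line terminator arrives, then
    // hand control to handle_read. The Client object (and with it socket_
    // and response_) must stay alive until the handler has run.
    void start_read()
    {
        boost::asio::async_read_until(socket_, response_, "\r\n\r\n",
            [this](const boost::system::error_code& error, std::size_t bytes)
            {
                handle_read(error, bytes);
            });
    }

private:
    void handle_read(const boost::system::error_code& error, std::size_t)
    {
        if (error) return;
        std::istream response_stream(&response_);
        std::string incoming;
        while (std::getline(response_stream, incoming) && incoming != "\r")
            std::cout << incoming << std::endl;
    }

    boost::asio::ip::tcp::socket socket_;
    boost::asio::streambuf response_;
};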

Shell script AT Commands : not able to send sms through serial port

I have the below shell script (expect) where I am trying to send an SMS. I have gone through many Stack Overflow references and found that Ctrl-Z maps to \x1a. However, neither appending it to the message nor sending Ctrl-Z separately to the port helped; it times out later.
The script sends SMS in PDU format. Irrespective of that, I believe this is a generic issue with sending Ctrl-Z to the port. If you feel the script has other errors, please point those out as well.
Also, the length (34) mentioned below is (PDU_LENGTH - 2) / 2 as per the specification. This length does not include the Ctrl-Z character.
at_command = "AT+CMGS=34\r"
message_content = "0011000C810056890......"
Script:
set PROMPT "0"
set timeout "$COMMAND_TIMEOUT"
send "$at_command"
expect {
    "OK"    { puts "Command Accepted\n"; }
    "ERROR" { puts "Command Failed\n"; }
    timeout { puts "Unable to connect to $HOSTIP at $HOSTPORT"; exit 1 }
    "*>*"   { set PROMPT "1"; }
}
if { "$PROMPT" == "1" } {
    send "$message_content"
    send "\x1a"
    expect {
        "OK"    { puts "\nCommand accepted"; }
        "ERROR" { puts "\nCommand failed"; }
        "*>*"   { puts "CTRL-Z dint reach UT. Error..."; }
        "*"     { puts "Unexpected return value received"; }
    }
}
I am very sure the script sends "$message_content" to the port, but it exits immediately after sending it.
OUTPUT:
AT+CMGS=34
>
I did something like this in C# with an SMS gateway module.
I had to switch to PDU mode first!
After that I had to transmit the expected PDU length and finally the PDU itself.
Every command has to be committed with a carriage return (ASCII 13), and the PDU has to be committed with an ASCII 26 (Ctrl-Z) at the end.
Here is a schematic(!) flow of how I did it in C#:
1) Create PDU and get length
int len;
var pdu = PDUGenerator.GetPdu(destination, message, "", out len);
2) Switch to PDUMode
SendToCom("AT+CMGF=0" + System.Convert.ToChar(13));
3) Announce message length
SendToCom("AT+CMGS=" + len + System.Convert.ToChar(13));
4) Send PDU and commit
SendToCom(pdu + System.Convert.ToChar(26));
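For comparison, a rough equivalent of that flow in C++ using boost::asio::serial_port; this is only a sketch, not tested against a real modem, and the device name, the write helper, and the skipped prompt handling are assumptions:

#include <boost/asio.hpp>
#include <string>

int main()
{
    boost::asio::io_context io;
    boost::asio::serial_port port(io, "/dev/ttyUSB0");   // assumed device name

    auto write_to_modem = [&](const std::string& s) {
        boost::asio::write(port, boost::asio::buffer(s));
    };

    // 1) Switch to PDU mode; each command ends with a carriage return (ASCII 13).
    write_to_modem("AT+CMGF=0\r");

    // 2) Announce the PDU length. A real program would read the '>' prompt
    //    from the modem here before continuing.
    write_to_modem("AT+CMGS=34\r");

    // 3) Send the PDU and commit it with Ctrl-Z (ASCII 26), no trailing CR.
    write_to_modem("0011000C810056890......");   // placeholder PDU from the question
    write_to_modem("\x1a");
}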

Multiple applications write to one console - mixed/messed output

I have the following system architecture (cannot be changed - legacy code): one main application invokes one or more other applications, and these applications interact over an IP protocol.
All applications write to one console window. Unfortunately, the console output can get messed up (one character from app 1, the next character from app 2, the next from app 4, etc.).
All applications write to the console via one Logger.dll (which provides static logging functions) using cout/cerr.
Is there a way how I can prevent mixed logging messages in this setup?
Thanks in advance.
EDIT code added:
void Logger::Log(const std::string & componentName, const std::string & Text, LogLevel logLevel, bool logToConsole, bool beep)
{
    std::ostringstream stream;
    switch (logLevel)
    {
    case LOG_INFO:
        if (logToConsole)
        {
            stream << componentName << ": INFO " << Text;
            mx_console.lock(); // this is a static boost::mutex
            std::cout << stream.str() << std::endl;
            std::cout.flush();
            mx_console.unlock();
        }
        break;
    case LOG_STATUS:
        stream << componentName << ": STATUS " << Text;
        mx_console.lock();
        std::cout << stream.str() << std::endl;
        std::cout.flush();
        mx_console.unlock();
        break;
    case LOG_WARNING:
        stream << componentName << ": WARNING " << Text;
        mx_console.lock();
        std::cout << stream.str() << std::endl;
        std::cout.flush();
        mx_console.unlock();
        break;
    default:;
    }

    if (beep)
        Beep(500, 50);
}
Since you have separate logging functionality, you can at minimum use some kind of cross-process locking (a named global mutex, etc.) to avoid interspersing messages from different applications too much. To make the output more readable and grepable, add some identifying information, such as the process name or PID. Wrapping your Logger.dll around an existing logging library sounds like an option as well.
Alternatively, you could have the logging functions just forward messages to your main application and let it sort out the synchronization and interleaving.
Syslog might be a solution for you, as it is designed to handle logs from various sources. Syslog was developed for Unix, but this answer lists versions for Windows.
You can change your logger to log to syslog instead of the console.
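On a POSIX system the change inside Logger::Log is small; a minimal sketch (the "MyApp" identifier is a placeholder, and the Windows ports mentioned above expose a similar interface):

#include <syslog.h>
#include <string>

// Call once at startup to tag all messages with the application name and PID.
void InitLogging()
{
    openlog("MyApp", LOG_PID, LOG_USER);
}

// Inside Logger::Log, replace the std::cout line with something like:
void LogLine(const std::string& line)
{
    syslog(LOG_INFO, "%s", line.c_str());
}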
I have now replaced all the
std::cout << stream.str();
statements with
std::string str = stream.str();
printf(str.c_str());
and the output is no longer messed up character-wise.
But I don't have a good explanation for this behavior; does anybody know why?
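A plausible explanation is that printf hands the whole string to the C runtime in one call, while the synchronized std::cout may forward the data in smaller pieces that output from other processes can slip between. A way to make this explicit, regardless of the stream implementation, is to build the complete line (newline included) and emit it with a single call; a minimal sketch based on the Logger above:

#include <cstdio>
#include <sstream>
#include <string>

void WriteLogLine(const std::string& componentName, const std::string& text)
{
    std::ostringstream stream;
    stream << componentName << ": INFO " << text << '\n';
    const std::string line = stream.str();

    // One fwrite per line: the whole line is flushed to the console together,
    // so concurrent processes can only interleave whole lines.
    std::fwrite(line.data(), 1, line.size(), stdout);
    std::fflush(stdout);
}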

Winsock IRC client connects but does not send data

I'm using the code posted on http://social.msdn.microsoft.com/Forums/en/vcgeneral/thread/126639f1-487d-4755-af1b-cfb8bb64bdf8, but it doesn't send data, just as the first post says. How do I use WSAGetLastError(), as the solution suggests, to find out what's wrong?
I tried the following:
void IRC::SendBuf(char* sendbuf)
{
    int senderror = send(m_socket, sendbuf, sizeof(sendbuf), MSG_OOB);
    if (senderror == ERROR_SUCCESS) {
        printf("Client: The test string sent: \"%s\"\n", sendbuf);
    }
    else {
        cout << "error is: " << senderror << ", WSAGetLastError: " << WSAGetLastError() << endl;
        printf("Client: The test string sent: \"%s\"\n", sendbuf);
    }
}
And the output is: error is: 4, WSAGetLastError: 0
You're evaluating the address of WSAGetLastError instead of calling it. You need to add parentheses in order to actually call that function:
void IRC::SendBuf(char* sendbuf)
{
    int senderror = send(m_socket, sendbuf, strlen(sendbuf), 0);
    if (senderror != SOCKET_ERROR) {
        printf("Client: The test string sent: \"%s\"\n", sendbuf);
    } else {
        cout << "Error is: " << WSAGetLastError() << endl;
    }
}
EDIT: The send() function returns the number of bytes written, not an error code. You need to test the return value against SOCKET_ERROR, as in the updated code above. In your case, send() reports that it successfully sent 4 bytes.
As you noted below, it only sends 4 bytes because that is the size of the sendbuf variable (it's a pointer, not a buffer). If the string in sendbuf is null-terminated, you can use strlen() instead. If it isn't, you probably should add a string length parameter to IRC::SendBuf() itself, as in the sketch below.
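A minimal sketch of that idea as a free helper (SendAll is a made-up name; IRC::SendBuf could forward to it, passing strlen(sendbuf) for null-terminated strings, and the error handling is illustrative):

#include <winsock2.h>
#include <iostream>

// Sends exactly len bytes, looping because send() may transmit fewer
// bytes than requested in one call.
bool SendAll(SOCKET s, const char* data, int len)
{
    int total = 0;
    while (total < len) {
        int sent = send(s, data + total, len - total, 0);
        if (sent == SOCKET_ERROR) {
            std::cout << "Error is: " << WSAGetLastError() << std::endl;
            return false;
        }
        total += sent;
    }
    return true;
}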
