The following approach for connecting to an HTTPS server with OpenSSL seems to work for most sites, but not for sheets.googleapis.com.
// initialize OpenSSL
SSL_library_init();
SSL_load_error_strings();
// create the client context
SSL_CTX* ctx = SSL_CTX_new(SSLv23_client_method());
SSL_CTX_set_ecdh_auto(ctx, 1);
wstring strHostname = L"sheets.googleapis.com";
wstring strPort = L"443";
SOCKET s = http_server_connect(strHostname, strPort);
// make the socket non-blocking for the timeout handling
unsigned long mode = 1;
ioctlsocket(s, FIONBIO, &mode);
SSL* ssl = SSL_new(ctx);
SSL_set_fd(ssl, (int)s);
int res = SSL_ERROR_NONE;
while ((res = SSL_connect(ssl)) != 1)
{
    int error = SSL_get_error(ssl, res);
    fd_set fds;
    FD_ZERO(&fds);
    FD_SET(s, &fds);
    switch (error)
    {
    case SSL_ERROR_WANT_READ:
        // wait until the socket is readable, then retry the handshake
        select(int(s + 1), &fds, NULL, NULL, NULL);
        break;
    case SSL_ERROR_WANT_WRITE:
        // wait until the socket is writable, then retry the handshake
        select(int(s + 1), NULL, &fds, NULL, NULL);
        break;
    case SSL_ERROR_SYSCALL:
        printf("SSL_ERROR_SYSCALL\n");
        break;
    }
}
Running this code, the first SSL_connect call returns SSL_ERROR_WANT_READ as expected, and the second call does not report an error. However, checking the connection with Wireshark shows that the server resets the connection. Using a blocking socket works without any problem.
I'm aware that there is a question with almost the same content which was answered with the code above, but for some reason it does not work for the Google API host. Does anybody have a hint how to overcome this problem?
I've tested it on various OpenSSL versions, including the latest 1.1.1d.
Meanwhile I have figured out the issue: I have to set the hostname for the SSL connection with SSL_set_tlsext_host_name.
The interesting thing is that this information seems to be cached; after removing the call later on, the code still works.
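For reference, a minimal sketch of that fix, placed right after SSL_set_fd() and before the handshake loop (the SSL_set1_host() call is an optional extra available since OpenSSL 1.1.0, not something from the original code):
// tell OpenSSL which hostname to put into the TLS SNI extension;
// servers behind SNI-based virtual hosting reset the handshake without it
SSL_set_tlsext_host_name(ssl, "sheets.googleapis.com");
// optionally also verify that the certificate matches this hostname (OpenSSL 1.1.0+)
SSL_set1_host(ssl, "sheets.googleapis.com");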
Here is the COM port opening part:
// open the COM port for reading and writing
portHandle = CreateFileA(portName, GENERIC_READ | GENERIC_WRITE, 0, NULL, OPEN_EXISTING, 0, NULL);
if (portHandle == INVALID_HANDLE_VALUE)
{
    return -1;
}
COMMCONFIG Win_CommConfig;
COMMTIMEOUTS Win_CommTimeouts;
unsigned long confSize = sizeof(COMMCONFIG);
Win_CommConfig.dwSize = confSize;
// read the current configuration and adjust it: 8 data bits, no parity, no flow control
GetCommConfig(portHandle, &Win_CommConfig, &confSize);
Win_CommConfig.dcb.Parity = 0;
Win_CommConfig.dcb.fRtsControl = RTS_CONTROL_DISABLE;
Win_CommConfig.dcb.fOutxCtsFlow = FALSE;
Win_CommConfig.dcb.fOutxDsrFlow = FALSE;
Win_CommConfig.dcb.fDtrControl = DTR_CONTROL_DISABLE;
Win_CommConfig.dcb.fDsrSensitivity = FALSE;
Win_CommConfig.dcb.fNull = FALSE;
Win_CommConfig.dcb.fTXContinueOnXoff = FALSE;
Win_CommConfig.dcb.fInX = FALSE;
Win_CommConfig.dcb.fOutX = FALSE;
Win_CommConfig.dcb.fBinary = TRUE;
Win_CommConfig.dcb.DCBlength = sizeof(DCB);
if (baudrate != -1)
{
    Win_CommConfig.dcb.BaudRate = baudrate;
}
Win_CommConfig.dcb.ByteSize = 8;
// read/write timeouts in milliseconds
Win_CommTimeouts.ReadIntervalTimeout = 50;
Win_CommTimeouts.ReadTotalTimeoutMultiplier = 0;
Win_CommTimeouts.ReadTotalTimeoutConstant = 110;
Win_CommTimeouts.WriteTotalTimeoutMultiplier = 0;
Win_CommTimeouts.WriteTotalTimeoutConstant = 110;
SetCommConfig(portHandle, &Win_CommConfig, sizeof(COMMCONFIG));
SetCommTimeouts(portHandle, &Win_CommTimeouts);
return 0;
It connects OK, I manage to issue some AT commands and read back OK\n> responses, and even one of the upper-level protocol commands (OBD2: the command is 0100\r) gets a proper answer. But when I attempt other commands such as scanning the supported PIDs (e.g. 0000\n, 0101\n, 0202\n etc.) the whole thing either echoes back whatever I write to it or just times out. Issuing the same sequence of commands from HyperTerminal works properly. These are simulated virtual serial ports, should it matter - http://com0com.sourceforge.net/.
What am I missing? Perhaps some reading / setting / resetting of some of the pins? It has been a while since I last worked with RS-232... Thanks!
EDIT: just tried the ScanTool software at https://www.scantool.net/downloads/diagnostic-software/ and it worked OK too.
e.g 0000\n, 0101\n, 0202\n
This was the issue. It should have been \r at the end, not \n. HyperTerminal worked because the Enter key inserts a \r there on Windows. Probably some validation of the input was done by the connected device, so it got to work even with the wrong terminator character fed in.
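For illustration, a hedged sketch of sending a command with the correct terminator (sendObdCommand is a hypothetical helper; portHandle is the handle opened in the code above):
// hypothetical helper: send one OBD2/ELM327 command terminated with '\r', not '\n'
bool sendObdCommand(HANDLE portHandle, const char* cmd)
{
    char buffer[64];
    int len = snprintf(buffer, sizeof(buffer), "%s\r", cmd);
    DWORD written = 0;
    return WriteFile(portHandle, buffer, (DWORD)len, &written, NULL) && written == (DWORD)len;
}
// usage: sendObdCommand(portHandle, "0100");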
I'm trying to implement a simple FTP client using winsock. I'm having problems trying to download a file. Here's the code I'm using at the moment:
bool FTPHandler::downloadFile(const char * remoteFilePath, const char * filePath) {
if (!isConnected()) {
setErrorMsg("Not connected, imposible to upload file...");
return false;
}
if (usePasiveMode) {
this->pasivePort = makeConectionPasive();
if (this->pasivePort == -1) {
//error msg will be setted by makeConectionPasive()
return false;
}
} else {
setErrorMsg("Unable to upload file not in pasive mode :S");
return false;
}
char * fileName = new char[500];
getFileName(remoteFilePath,fileName);
// Default name and path := current directory and same name as remote.
if (filePath == NULL) {
filePath = fileName;
}
if (!setDirectory(remoteFilePath)) {
return false;
}
char msg[OTHER_BUF_SIZE];
char serverMsg[SERVER_BUF_SIZE];
sprintf(msg,"%s%s\n",RETR_MSG,fileName);
send(sock, msg, strlen(msg), 0);
SOCKET passSocket;
SOCKADDR_IN passServer;
passSocket = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);
if (passSocket == INVALID_SOCKET) {
WSACleanup();
sprintf(errorMsg,"Error trying to create socket (WSA error code: %d)",WSAGetLastError());
return false;
}
passServer.sin_family = PF_INET;
passServer.sin_port = htons(this->pasivePort);
passServer.sin_addr = *((struct in_addr *)gethostbyname(this->host)->h_addr);
memset(server.sin_zero,0,8);
int errorCode = connect(passSocket, (LPSOCKADDR) &passServer, sizeof(struct sockaddr));
int tries = 0;
while (errorCode == SOCKET_ERROR) {
tries++;
if (tries >= MAX_TRIES) {
closesocket(passSocket);
sprintf(errorMsg,"Error trying to create socket");
WSACleanup();
return false;
}
}
char * buffer = (char *) malloc(CHUNK_SIZE);
ofstream f(filePath);
Sleep(WAIT_TIME);
while (int readBytes = recv(passSocket, buffer, CHUNK_SIZE, 0) > 0) {
buffer[readBytes] = '\0';
f.write(buffer,readBytes);
}
f.close();
Sleep(WAIT_TIME);
recv(sock, serverMsg, OTHER_BUF_SIZE, 0);
if (!startWith(serverMsg, FILE_STATUS_OKEY_CODE)) {
sprintf(errorMsg,"Bad response: %s",serverMsg);
return false;
}
return true;
}
That last recv() returns 1 byte several times, and then the method ends and the file that should be around 1Kb is just 23 bytes.
Why isn't recv reading the whole file?
There are all kinds of logic holes and incorrect/missing error handling in this code. You really need to clean up this code in general.
You are passing the wrong sizeof() value to connect(), and not handling an error correctly if connect() fails (your retry loop is useless). You need to use sizeof(sockaddr_in) or sizeof(passServer) instead of sizeof(sockaddr). You are also not initializing passServer correctly.
You are not checking recv() for errors. And on the off chance that recv() actually reads CHUNK_SIZE bytes, you have a buffer overflow that will corrupt memory when you write the null byte into the buffer (which you do not need to do at all), because you are writing it past the end of the buffer.
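For illustration, a minimal sketch of a data-connection read loop with proper error checking (passSocket, CHUNK_SIZE, f and errorMsg are the names from the question's code; the buffer is written out as-is, so no terminating null byte is needed):
char buffer[CHUNK_SIZE];
for (;;) {
    int readBytes = recv(passSocket, buffer, CHUNK_SIZE, 0);
    if (readBytes > 0) {
        f.write(buffer, readBytes);   // write exactly what was received
    } else if (readBytes == 0) {
        break;                        // server closed the data connection: transfer complete
    } else {
        sprintf(errorMsg, "recv failed (WSA error code: %d)", WSAGetLastError());
        break;                        // a real client should also abort the transfer here
    }
}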
If connect() fails, or recv() fails with any error other than a server-side initiated disconnect, you are not telling the server to abort the transfer.
Once you tell the server to go into Passive mode, you need to connect to the IP/Port (not just the Port) that the server tells you, before you then send your RETR command.
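For example, the 227 reply to PASV has the form "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)", so a sketch of extracting both the IP and the port could look like this (serverMsg is assumed to hold the reply line):
int h1, h2, h3, h4, p1, p2;
const char* paren = strchr(serverMsg, '(');
if (paren != NULL && sscanf(paren, "(%d,%d,%d,%d,%d,%d)", &h1, &h2, &h3, &h4, &p1, &p2) == 6) {
    char dataIp[32];
    sprintf(dataIp, "%d.%d.%d.%d", h1, h2, h3, h4);   // data-connection IP from the reply
    int dataPort = p1 * 256 + p2;                     // data-connection port from the reply
    passServer.sin_addr.s_addr = inet_addr(dataIp);
    passServer.sin_port = htons((u_short)dataPort);
}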
Don't forget to send the server a TYPE command so it knows what format to send the file bytes in, such as TYPE A for ASCII text and TYPE I for binary data. If you try to transfer a file in the wrong format, you can corrupt the data. FTP's default TYPE is ASCII, not Binary.
And lastly, since you clearly do not seem to know how to program sockets effectively, I suggest you use the FTP portions of the WinInet library instead of WinSock directly, such as the FtpGetFile() function. Let WinInet handle the details of transferring FTP files for you.
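For completeness, a minimal WinInet sketch (host, credentials and file names are placeholders; error handling omitted):
#include <windows.h>
#include <wininet.h>
#pragma comment(lib, "wininet.lib")

HINTERNET hInet = InternetOpenA("MyFtpClient", INTERNET_OPEN_TYPE_DIRECT, NULL, NULL, 0);
HINTERNET hFtp  = InternetConnectA(hInet, "ftp.example.com", INTERNET_DEFAULT_FTP_PORT,
                                   "user", "password", INTERNET_SERVICE_FTP,
                                   INTERNET_FLAG_PASSIVE, 0);
// download remote.txt into local.txt as binary data
FtpGetFileA(hFtp, "remote.txt", "local.txt", FALSE, FILE_ATTRIBUTE_NORMAL,
            FTP_TRANSFER_TYPE_BINARY, 0);
InternetCloseHandle(hFtp);
InternetCloseHandle(hInet);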
I'm writing a small application with a client and a server - the client sends a question and the server answers.
I managed to do the first part - the server gets the question from the client, do some work and sends back an answer. I just can't figure out how to tell the client to wait for a response from the server.
This is my client code:
char* ipAddress = (char*)malloc(15);
wcstombs(ipAddress, (TCHAR*)argv[1], 15);
DWORD port = wcstod(argv[2], _T('\0'));
DWORD numOfThreads = wcstod(argv[3], _T('\0;'));
DWORD method = wcstod(argv[4], _T('\0;'));
//initialize windows sockets service
WSADATA wsaData;
int iResult = WSAStartup(MAKEWORD(2,2), &wsaData);
assert(iResult==NO_ERROR);
//prepare server address
sockaddr_in server_addr;
server_addr.sin_family = AF_INET;
server_addr.sin_addr.s_addr = inet_addr(ipAddress);
server_addr.sin_port = htons(port);
//create socket
SOCKET hClientSocket= socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
assert(hClientSocket!=INVALID_SOCKET);
//connect to server
int nRes=connect(hClientSocket, (SOCKADDR*)&server_addr, sizeof(server_addr));
assert(nRes!=SOCKET_ERROR);
char* buf = "GET /count.htm HTTP/1.1\r\nHost: 127.0.0.1:666\r\nAccept: text/html,application/xhtml+xml\r\nAccept-Language: en-us\r\nAccept-Encoding: gzip, deflate\r\nUser-Agent: Mozilla/5.0\r\n\r\n";
int nBytesToSend= strlen(buf);
int iPos=0;
while(nBytesToSend)
{
int nSent=send(hClientSocket,buf,nBytesToSend,0);
assert(nSent!=SOCKET_ERROR);
nBytesToSend-=nSent;
iPos+=nSent;
}
closesocket(hClientSocket);
int nLen = sizeof(server_addr);
SOCKET hRecvSocket=accept(hClientSocket,(SOCKADDR*)&server_addr, &nLen);
assert(hRecvSocket!=INVALID_SOCKET);
//prepare buffer for incoming data
char serverBuff[256];
int nLeft=sizeof(serverBuff);
iPos=0;
do //loop till there are no more data
{
int nNumBytes=recv(hRecvSocket,serverBuff+iPos,nLeft,0);
//check if client closed connection
if(!nNumBytes)
break;
assert(nNumBytes!=SOCKET_ERROR);
//update free space and pointer to next byte
nLeft-=nNumBytes;
iPos+=nNumBytes;
}while(1);
The assertion after the SOCKET hRecvSocket=accept(hClientSocket,(SOCKADDR*)&server_addr, &nLen); line fails.
The closesocket and accept calls after your "send" loop - remove those calls. accept is for servers listening for incoming connections, not for clients that are already connected.
After your send() loop completes, go straight into your recv() loop. That should solve your immediate problem:
Also, your send loop is forgetting to reference iPos on the buffer like I think you intended to. This is what you wanted:
int nSent=send(hClientSocket,buf+iPos,nBytesToSend,0);
In network programming, sockets will fail due to network conditions beyond your control. So "asserts" on network calls are not always appropriate. Better to just expect failure and be prepared to handle it. Typically, closing the socket and the active connection is the way to handle most errors.
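Putting those fixes together, a sketch of the corrected send/receive flow (using the names from the question):
// send the full request, advancing through the buffer
int nBytesToSend = strlen(buf);
int iPos = 0;
while (nBytesToSend > 0)
{
    int nSent = send(hClientSocket, buf + iPos, nBytesToSend, 0);
    if (nSent == SOCKET_ERROR)
        break;                 // handle/report the error instead of asserting
    nBytesToSend -= nSent;
    iPos += nSent;
}

// then read the reply on the same connected socket - no accept, no closesocket yet
char serverBuff[256];
int nLeft = sizeof(serverBuff);
iPos = 0;
while (nLeft > 0)
{
    int nNumBytes = recv(hClientSocket, serverBuff + iPos, nLeft, 0);
    if (nNumBytes <= 0)
        break;                 // 0 = server closed the connection, <0 = error
    nLeft -= nNumBytes;
    iPos += nNumBytes;
}

closesocket(hClientSocket);    // close only after the response has been read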
I have a C++ pipe server app and a C# pipe client app communicating via Windows named pipe (duplex, message mode, wait/blocking in separate read thread).
It all works fine (both sending and receiving data via the pipe) until I try and write to the pipe from the client in response to a forms 'textchanged' event. When I do this, the client hangs on the pipe write call (or flush call if autoflush is off). Breaking into the server app reveals it's also waiting on the pipe ReadFile call and not returning.
I tried running the client write on another thread -- same result.
I suspect some sort of deadlock or race condition but can't see where... I don't think I'm writing to the pipe from two places simultaneously.
Update1: tried pipes in byte mode instead of message mode - same lockup.
Update2: Strangely, if (and only if) I pump lots of data from the server to the client, it cures the lockup!?
Server code:
DWORD ReadMsg(char* aBuff, int aBuffLen, int& aBytesRead)
{
DWORD byteCount;
if (ReadFile(mPipe, aBuff, aBuffLen, &byteCount, NULL))
{
aBytesRead = (int)byteCount;
aBuff[byteCount] = 0;
return ERROR_SUCCESS;
}
return GetLastError();
}
DWORD SendMsg(const char* aBuff, unsigned int aBuffLen)
{
DWORD byteCount;
if (WriteFile(mPipe, aBuff, aBuffLen, &byteCount, NULL))
{
return ERROR_SUCCESS;
}
mClientConnected = false;
return GetLastError();
}
DWORD CommsThread()
{
while (1)
{
std::string fullPipeName = std::string("\\\\.\\pipe\\") + mPipeName;
mPipe = CreateNamedPipeA(fullPipeName.c_str(),
PIPE_ACCESS_DUPLEX,
PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
PIPE_UNLIMITED_INSTANCES,
KTxBuffSize, // output buffer size
KRxBuffSize, // input buffer size
5000, // client time-out ms
NULL); // no security attribute
if (mPipe == INVALID_HANDLE_VALUE)
return 1;
mClientConnected = ConnectNamedPipe(mPipe, NULL) ? TRUE : (GetLastError() == ERROR_PIPE_CONNECTED);
if (!mClientConnected)
return 1;
char rxBuff[KRxBuffSize+1];
DWORD error=0;
while (mClientConnected)
{
Sleep(1);
int bytesRead = 0;
error = ReadMsg(rxBuff, KRxBuffSize, bytesRead);
if (error == ERROR_SUCCESS)
{
rxBuff[bytesRead] = 0; // terminate string.
if (mMsgCallback && bytesRead>0)
mMsgCallback(rxBuff, bytesRead, mCallbackContext);
}
else
{
mClientConnected = false;
}
}
Close();
Sleep(1000);
}
return 0;
}
Client code:
public void Start(string aPipeName)
{
mPipeName = aPipeName;
mPipeStream = new NamedPipeClientStream(".", mPipeName, PipeDirection.InOut, PipeOptions.None);
Console.Write("Attempting to connect to pipe...");
mPipeStream.Connect();
Console.WriteLine("Connected to pipe '{0}' ({1} server instances open)", mPipeName, mPipeStream.NumberOfServerInstances);
mPipeStream.ReadMode = PipeTransmissionMode.Message;
mPipeWriter = new StreamWriter(mPipeStream);
mPipeWriter.AutoFlush = true;
mReadThread = new Thread(new ThreadStart(ReadThread));
mReadThread.IsBackground = true;
mReadThread.Start();
if (mConnectionEventCallback != null)
{
mConnectionEventCallback(true);
}
}
private void ReadThread()
{
byte[] buffer = new byte[1024 * 400];
while (true)
{
int len = 0;
do
{
len += mPipeStream.Read(buffer, len, buffer.Length);
} while (len>0 && !mPipeStream.IsMessageComplete);
if (len==0)
{
OnPipeBroken();
return;
}
if (mMessageCallback != null)
{
mMessageCallback(buffer, len);
}
Thread.Sleep(1);
}
}
public void Write(string aMsg)
{
try
{
mPipeWriter.Write(aMsg);
mPipeWriter.Flush();
}
catch (Exception)
{
OnPipeBroken();
}
}
If you are using separate threads, you will be unable to read from the pipe at the same time you write to it. For example, if you do a blocking read from the pipe and then a blocking write from a different thread, the write call will wait/block until the read call has completed, and in many cases, if this is unexpected behavior, your program will become deadlocked.
I have not tested overlapped I/O, but it MAY be able to resolve this issue. However, if you are determined to use synchronous calls then the following models below may help you to solve the problem.
Master/Slave
You could implement a master/slave model in which the client or the server is the master and the other end only responds, which is generally what you will find in the MSDN examples.
In some cases you may find this problematic in the event the slave periodically needs to send data to the master. You must either use an external signaling mechanism (outside of the pipe) or have the master periodically query/poll the slave or you can swap the roles where the client is the master and the server is the slave.
Writer/Reader
You could use a writer/reader model where you use two different pipes. However, you must associate those two pipes somehow if you have multiple clients since each pipe will have a different handle. You could do this by having the client send a unique identifier value on connection to each pipe which would then let the server associate the two pipes. This number could be the current system time or even a unique identifier that is global or local.
Threads
If you are determined to use the synchronous API, you can use threads with the master/slave model if you do not want to be blocked while waiting for a message on the slave side. You will however want to lock the reader after it reads a message (or encounters the end of a series of messages), then write the response (as the slave should), and finally unlock the reader. You can lock and unlock the reader using locking mechanisms that put the thread to sleep, as these are the most efficient.
Security Problem With TCP
The biggest drawback of going with TCP instead of named pipes is security. A TCP stream does not provide any security natively, so if security is a concern you will have to implement it yourself, with the risk of creating a security hole since you would also have to handle authentication yourself. A named pipe can provide security if you set its parameters properly. Also, to note again more clearly: security is no simple matter, and generally you will want to use existing facilities that have been designed to provide it.
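As an illustration of "setting the parameters properly", a hedged sketch of restricting a named pipe via a security descriptor (the SDDL string is an assumption; adjust it to the accounts you actually want to allow):
#include <windows.h>
#include <sddl.h>

// build a SECURITY_ATTRIBUTES granting full access only to built-in Administrators
SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, FALSE };
ConvertStringSecurityDescriptorToSecurityDescriptorA(
    "D:(A;;GA;;;BA)",        // DACL: generic-all for the Administrators group (assumption)
    SDDL_REVISION_1,
    &sa.lpSecurityDescriptor,
    NULL);
// pass &sa as the last argument of CreateNamedPipeA instead of NULL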
I think you may be running into problems with named pipes message mode. In this mode, each write to the kernel pipe handle constitutes a message. This doesn't necessarily correspond with what your application regards a Message to be, and a message may be bigger than your read buffer.
This means that your pipe reading code needs two loops, the inner reading until the current [named pipe] message has been completely received, and the outer looping until your [application level] message has been received.
Your C# client code does have a correct inner loop, reading again if IsMessageComplete is false:
do
{
len += mPipeStream.Read(buffer, len, buffer.Length);
} while (len>0 && !mPipeStream.IsMessageComplete);
Your C++ server code doesn't have such a loop - the equivalent at the Win32 API level is testing for the return code ERROR_MORE_DATA.
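For illustration, a sketch of what such a loop could look like at the Win32 level (the ReadWholeMsg name is hypothetical; mPipe and the buffer arguments follow the question's ReadMsg):
// read one complete named-pipe message, even if it arrives in several chunks
DWORD ReadWholeMsg(HANDLE mPipe, char* aBuff, DWORD aBuffLen, DWORD& aTotalRead)
{
    aTotalRead = 0;
    for (;;)
    {
        DWORD byteCount = 0;
        BOOL ok = ReadFile(mPipe, aBuff + aTotalRead, aBuffLen - aTotalRead, &byteCount, NULL);
        aTotalRead += byteCount;
        if (ok)
            return ERROR_SUCCESS;       // whole message received
        DWORD err = GetLastError();
        if (err != ERROR_MORE_DATA)
            return err;                 // real error (or pipe closed)
        if (aTotalRead == aBuffLen)
            return ERROR_MORE_DATA;     // buffer full; caller needs a bigger buffer
        // ERROR_MORE_DATA with space left: loop and read the rest of the message
    }
}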
My guess is that somehow this is leading to the client waiting for the server to read on one pipe instance, whilst the server is waiting for the client to write on another pipe instance.
It seems to me that what you are trying to do will not work as expected.
Some time ago I was trying to do something that looked like your code and got similar results; the pipe just hung, and it was difficult to establish what had gone wrong.
I would rather suggest using the client in a very simple way (a minimal sketch follows the list):
CreateFile
Write request
Read answer
Close pipe.
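A minimal sketch of that open/write/read/close sequence (the pipe name and the request text are placeholders):
// connect to an existing pipe instance, send one request, read one answer, close
HANDLE pipe = CreateFileA("\\\\.\\pipe\\MyPipe", GENERIC_READ | GENERIC_WRITE,
                          0, NULL, OPEN_EXISTING, 0, NULL);
if (pipe != INVALID_HANDLE_VALUE)
{
    const char request[] = "question";
    char reply[512];
    DWORD written = 0, read = 0;
    WriteFile(pipe, request, sizeof(request), &written, NULL);   // write request
    ReadFile(pipe, reply, sizeof(reply), &read, NULL);           // read answer
    CloseHandle(pipe);                                           // close pipe
}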
If you want two-way communication with clients that are also able to receive unrequested data from the server, you should rather implement two servers. This was the workaround I used: here you can find sources.
I have some cross platform DNS client code that I use for doing end to end SMTP and on windows I can find the current DNS server ip addresses by looking in the registry. On the Mac I can probably use the SystemConfiguration framework as mentioned in the first answer, however the exact method of doing so is not immediately obvious.
For instance SCDynamicStoreCopyDHCPInfo returns some of the dynamic DHCP related data but not the DNS server addresses.
I know it's very late to answer this question, but it may be helpful for others.
This code will help with this task:
#include <SystemConfiguration/SystemConfiguration.h>

SCPreferencesRef prefsDNS = SCPreferencesCreate(NULL, CFSTR("DNSSETTING"), NULL);
CFArrayRef services = SCNetworkServiceCopyAll(prefsDNS);
long servicesCount = CFArrayGetCount(services);
for (long i = 0; i < servicesCount; i++) {
    const SCNetworkServiceRef service = (const SCNetworkServiceRef)CFArrayGetValueAtIndex(services, i);
    CFStringRef interfaceServiceID = SCNetworkServiceGetServiceID(service);
    // build the dynamic-store key "State:/Network/Service/<serviceID>/DNS"
    CFStringRef primaryservicepath = CFStringCreateWithFormat(NULL, NULL, CFSTR("State:/Network/Service/%@/DNS"), interfaceServiceID);
    SCDynamicStoreRef dynRef = SCDynamicStoreCreate(kCFAllocatorSystemDefault, CFSTR("DNSSETTING"), NULL, NULL);
    CFPropertyListRef propList = SCDynamicStoreCopyValue(dynRef, primaryservicepath);
    if (propList) {
        CFDictionaryRef dict = (CFDictionaryRef)propList;
        CFArrayRef addresses = (CFArrayRef)CFDictionaryGetValue(dict, CFSTR("ServerAddresses"));
        long addressesCount = CFArrayGetCount(addresses);
        for (long j = 0; j < addressesCount; j++) {
            CFStringRef address = (CFStringRef)CFArrayGetValueAtIndex(addresses, j);
            // print the DNS server address
            CFShow(address);
        }
        CFRelease(propList);
    }
    CFRelease(dynRef);
    CFRelease(primaryservicepath);
}
CFRelease(services);
CFRelease(prefsDNS);
I know it's been a long time since you needed this, but there is nothing worse than an old unsolved question. You can't access them from "/etc/resolv.conf" because of permission issues. After much searching, and a little luck, I discovered you can get them via the res_ninit() function.
#include <resolv.h>
#include <arpa/inet.h>

// Get the native iOS system resolvers
res_ninit(&_res);
res_state res = &_res;
for (int i = 0; i < res->nscount; i++) {
    sa_family_t family = res->nsaddr_list[i].sin_family;
    int port = ntohs(res->nsaddr_list[i].sin_port);
    if (family == AF_INET) {            // IPv4 address
        char str[INET_ADDRSTRLEN];      // string representation of the address
        inet_ntop(AF_INET, &(res->nsaddr_list[i].sin_addr.s_addr), str, INET_ADDRSTRLEN);
    } else if (family == AF_INET6) {    // IPv6 address
        char str[INET6_ADDRSTRLEN];     // string representation of the address
        inet_ntop(AF_INET6, &(res->nsaddr_list[i].sin_addr.s_addr), str, INET6_ADDRSTRLEN);
    }
}
res_ndestroy(res);
You can use the SystemConfiguration framework. It's in C.
Update: apparently the rest of the framework is harder to use than I thought. Search for the key "State:/Network/Service/ServiceID/DNS", where ServiceID is the ID of the service.
They are also available from
/etc/resolv.conf
You could read from /etc/resolv.conf.