SSL_CTX_new goes into a loop and hangs the application - Windows

I'm working on a C++ client application that uses OpenSSL 1.0.2f to stream data to a server. The call to SSL_CTX_new hangs about 60% of the time, soon after the connection starts. Sometimes the call returns after a while (recovering from the hang after about 30 seconds to a minute), but most of the time it doesn't.
Here is my code:
SSL_library_init();
SSLeay_add_ssl_algorithms();
SSL_load_error_strings();
BIO_new_fp(stderr, BIO_NOCLOSE);

const SSL_METHOD *m_ssl_client_method = TLSv1_2_client_method();
if (m_ssl_client_method)
{
    sslContext = SSL_CTX_new(m_ssl_client_method);
}
This follows the SSL initialization steps given in the OpenSSL wiki.
After profiling with Very Sleepy, I found that the initialization of the random number generator causes the hang: it consumes 100% of the CPU and appears to be stuck in an infinite loop.
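To double-check that the hang really is in PRNG seeding, a small timing check can be run before any SSL setup. This is only a sketch using the standard OpenSSL 1.0.2 RAND calls, not part of the original code:

#include <openssl/rand.h>
#include <chrono>
#include <iostream>

// Sketch: time the entropy-gathering step that the profiler shows hanging.
static void check_rand_seeding()
{
    auto start  = std::chrono::steady_clock::now();
    int  polled = RAND_poll();   // same Windows entropy collection the SSL setup relies on
    auto secs   = std::chrono::duration_cast<std::chrono::seconds>(
                      std::chrono::steady_clock::now() - start).count();
    std::cout << "RAND_poll=" << polled
              << " RAND_status=" << RAND_status()
              << " took " << secs << "s" << std::endl;
}

Calling this once at startup, before SSL_library_init(), should show whether the entropy collection itself is what takes tens of seconds (or whether RAND_status() stays at 0), rather than SSL_CTX_new.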
Here is a snapshot captured from the Very Sleepy tool:
I'm using VC++ with whole-program optimization and the SSE2 instruction set enabled (disabling these optimizations doesn't change the results).
I have come across a thread describing a similar problem, but it doesn't provide a solution, and I haven't found any other threads about this kind of problem. Could someone help me with this?
Thanks in advance.

The problem seemed to be a bug in OpenSSL 1.0.2h; upgrading to the latest version (1.1.0e) solved it.
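For reference, with 1.1.0 the explicit init calls above are no longer needed (the library initializes itself) and the version-specific methods are deprecated. A minimal 1.1.0-style setup looks roughly like this (a sketch, restricting to TLS 1.2 to match the original behaviour):

#include <openssl/ssl.h>

// OpenSSL 1.1.0+: no SSL_library_init()/SSL_load_error_strings() required.
const SSL_METHOD *method = TLS_client_method();          // negotiates the highest shared version
SSL_CTX *ctx = SSL_CTX_new(method);
if (ctx)
{
    SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION);  // keep the old TLS-1.2-only behaviour
}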

Related

Extreme lag while developing on macOS

I'm working on a MacBook Pro running macOS Catalina 10.15.7.
At first I was using VS Code to develop in Go, but the fan (I guess?) started sounding like a turbojet after a while, to the point that the entire OS would shut down on its own. (I don't recall exactly what the message said; it was a black screen with white text saying something like "your CPU utilization was too high and we had to restart the system".)
Today I am trying to run this Python 3 script:
#!/usr/local/bin/python3
import csv
import json
import boto3
import time
from multiprocessing import Pool

dynamodb = boto3.resource('dynamodb', endpoint_url='http://localhost:4566', region_name='us-east-2')
table = dynamodb.Table('myTable')

collection = []
count = 0

with open('items.csv', newline='') as f:
    reader = csv.DictReader(f)
    for row in reader:
        obj = {}
        collection.append({
            "PK": int(row['id']),
            "SK": "product",
            "Name": row['name']
        })

def InsertItem(i):
    table.put_item(Item=i)

if __name__ == '__main__':
    with Pool(processes=25) as pool:
        result = pool.map(InsertItem, collection, 50)
        print(result)
And the same behavior occurs (it does not seem to be related to VS Code now, since I'm running this script directly from the terminal): the fans are extremely noisy, performance drops to almost zero, and I get the lollipop mouse pointer of death (which seems to be an omen of the machine being about to restart itself), and then the process I described above happens again.
Some hints about what is going on:
I'm not the only one having this problem. A teammate who does React development is seeing the same behaviour. (He is using VS Code too, but I think the problem is more generic.)
It seems to appear only with "intensive" tasks. (Please take "intensive" with a grain of salt: I run the very same tasks on my Ubuntu machine with half the RAM and it doesn't even flinch.)
I have been using Macs for years, and I do not recall having this issue.
So my question is: is anyone else noticing something similar? Is there a workaround for this?
Last note: I tested the Python script above last week and it took less than 2 minutes to run. Today, with these issues, it just lingers forever. From the prints I am doing, I can see that it attempts to insert items but freezes without moving forward.

How to stop a machine from sleeping/hibernating for the duration of execution

I have an app written (partially) in Go. As part of its operation it spawns an external process (written in C) and begins monitoring it. This external process can take many hours to complete, so I am looking for a way to prevent the machine from sleeping or hibernating while it is running.
I would like to be able to relinquish this lock afterwards, so that when the process is finished the machine is allowed to sleep/hibernate again.
I am initially targeting Windows, but a cross-platform solution would be ideal (does *nix even hibernate?).
Thanks to Anders for pointing me in the right direction; I put together a minimal example in Go (see below).
Note: polling to reset the timer seems to be the only reliable method. I found that when combining it with the continuous flag it would only take effect for approximately 30 seconds (no idea why). That said, the polling interval in this example is excessive and could probably be increased to 10 minutes (since the minimum hibernation timeout is 15 minutes).
Also, FYI, this is a Windows-specific example:
package main

import (
    "log"
    "syscall"
    "time"
)

// Execution states
const (
    EsSystemRequired = 0x00000001
    EsContinuous     = 0x80000000
)

var pulseTime = 10 * time.Second

func main() {
    kernel32 := syscall.NewLazyDLL("kernel32.dll")
    setThreadExecStateProc := kernel32.NewProc("SetThreadExecutionState")

    pulse := time.NewTicker(pulseTime)

    log.Println("Starting keep alive poll... (silence)")
    for {
        select {
        case <-pulse.C:
            setThreadExecStateProc.Call(uintptr(EsSystemRequired))
        }
    }
}
The above is tested on Windows 7 and 10 (not tested on Windows 8 yet, but presumed to work there too).
Any user request to sleep will override this method; that includes actions such as shutting the lid on a laptop (unless power management settings are changed from their defaults).
These were sensible behaviors for my application.
On Windows, your first step is to try SetThreadExecutionState:
Enables an application to inform the system that it is in use, thereby preventing the system from entering sleep or turning off the display while the application is running
This is not a perfect solution but I assume this is not an issue for you:
The SetThreadExecutionState function cannot be used to prevent the user from putting the computer to sleep. Applications should respect that the user expects a certain behavior when they close the lid on their laptop or press the power button
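In C/C++ the usual pattern is a single call with ES_CONTINUOUS when the long-running work starts and another call to clear it when the work is done. This is only a sketch of the documented API pattern; the Go example above achieves the same thing by polling instead:

#include <windows.h>

// Sketch: keep the system (not the display) awake for the duration of the work.
SetThreadExecutionState(ES_CONTINUOUS | ES_SYSTEM_REQUIRED);

// ... spawn the external process and wait for it to finish ...

// Restore normal power management.
SetThreadExecutionState(ES_CONTINUOUS);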
The Windows 8 connected standby feature is also something you might need to consider. Looking at the power-related APIs, we find this description of PowerRequestSystemRequired:
The system continues to run instead of entering sleep after a period of user inactivity.
This request type is not honored on systems capable of connected standby. Applications should use PowerRequestExecutionRequired requests instead.
If you are dealing with tablets and other small devices, you can try calling PowerSetRequest with PowerRequestExecutionRequired to prevent this, although its description is also not ideal:
The calling process continues to run instead of being suspended or terminated by process lifetime management mechanisms. When and how long the process is allowed to run depends on the operating system and power policy settings.
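A rough sketch of that call sequence (the function name and reason string here are just placeholders; the request is cleared and closed once the job finishes):

#include <windows.h>

// Sketch: hold an execution-required power request while the job runs (Windows 8+).
void run_with_power_request()
{
    wchar_t reasonText[] = L"Monitoring long-running external process";
    REASON_CONTEXT reason = {};
    reason.Version = POWER_REQUEST_CONTEXT_VERSION;
    reason.Flags   = POWER_REQUEST_CONTEXT_SIMPLE_STRING;
    reason.Reason.SimpleReasonString = reasonText;

    HANDLE request = PowerCreateRequest(&reason);
    if (request == INVALID_HANDLE_VALUE)
        return;

    PowerSetRequest(request, PowerRequestExecutionRequired);
    // ... spawn and monitor the external process here ...
    PowerClearRequest(request, PowerRequestExecutionRequired);
    CloseHandle(request);
}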
You might also want to use ShutdownBlockReasonCreate but I'm not sure if it blocks sleep/hibernate.

Clicking a Qt .app vs running the executable from the terminal

I have a Qt GUI that spawns a C++11 server (built with Clang/Xcode) on OS X 10.8.
It does cryptographic proof-of-work mining of a name (single mining thread).
When I launch the .app by clicking it, the process takes 4.5 hours.
When I run the exact same executable inside the .app bundle from the terminal, the process takes 30 minutes.
Question: how do I debug this?
Thank you.
====================================
Even worse:
The mining server is running in the terminal.
If I start the GUI program, which connects to the server and just sends it the "mine" command (over IPC): 4 hours.
If I start a command-line UI that connects to the server and just sends it the "mine" command (over IPC): 30 minutes.
In both cases the server is mining in a tight loop. Corrupt memory? A single CPU is at 100%, as it should be. I can't figure it out.
=========
This variable is used without locking...
volatile bool running = true;
Server thread:
fut = std::async(&Commissioner::generateName, &comish, name, m_priv.get_public_key());
Server loop:
nonce_t reset = std::numeric_limits<nonce_t>::max() - 1000;
while (running && hit < target) {
    if (nt.nonce >= reset)
    {
        nt.utc_sec = fc::time_point::now();
        nt.nonce = 0;
    }
    else { ++nt.nonce; }
    hit = difficulty(nt.id());
}
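If the unsynchronized flag is a concern, the portable replacement for the volatile bool would be std::atomic<bool>. This is just a sketch of the flag itself, not the original code:

#include <atomic>

std::atomic<bool> running{true};   // written by the controller, read by the mining loop

// Mining loop (same shape as above, only the flag type changes):
//   while (running.load(std::memory_order_relaxed) && hit < target) { ... }
//
// Controller thread, to stop mining:
//   running.store(false);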
Evidence is now pointing to deterministic chaotic behavior that is just very sensitive to initial conditions.
The initial condition may be the timestamp data within the object that is hashed during mining.
Mods, please close.

Is it possible to use midiOutLongMsg to play a chord? (Win32 API)

This guy says yes:
http://web.tiscalinet.it/giordy/midi-tech/lowmidi.htm
Same with a really old book from 1998 (Maximum MIDI).
MSDN doesn't mention it.
I'm not getting any sound.
I fill a char buffer with status|note|velocity|status|note|velocity...
I set lpData, dwBufferLength, and dwFlags of a MIDIHDR struct,
call midiOutPrepareHeader (returns MMSYSERR_NOERROR),
and call midiOutLongMsg (returns MMSYSERR_NOERROR).
Still no sound! Spamming midiOutShortMsg works, but will that work on slower machines? Did they change the functionality?
Thanks.
I'm an idiot! I figured it out: Microsoft GS Wavetable Synth does NOT support sending multiple short messages in midiOutLongMsg. The MIDI Mapper DOES!
midiOutShortMsg should be plenty fast, even on slow machines. MIDI interfaces themselves (the hardware, that is, although some software limits itself to the same rate) run at 31,250 baud. This of course ignores any slow code you may have wrapped around your midiOutShortMsg calls.
Anyway, technically you should also be able to get away with one status byte, if the following notes use the same status byte. So, if you want to do note on/off (using velocity 0 for off) and those notes are on the same channel, you could do this:
status|note|velocity|note|velocity|note|velocity|note|velocity
This is called running status.
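For illustration, a C major chord packed into one long message with running status might look like this. It is an untested sketch that opens the MIDI Mapper, which per the answer above does accept packed short messages:

#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

int main()
{
    // C major chord: one status byte (note-on, channel 1), then note/velocity pairs.
    unsigned char chord[] = { 0x90, 60, 100, 64, 100, 67, 100 };

    HMIDIOUT out = nullptr;
    if (midiOutOpen(&out, MIDI_MAPPER, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR)
        return 1;

    MIDIHDR hdr = {};
    hdr.lpData         = reinterpret_cast<LPSTR>(chord);
    hdr.dwBufferLength = sizeof(chord);

    midiOutPrepareHeader(out, &hdr, sizeof(hdr));
    midiOutLongMsg(out, &hdr, sizeof(hdr));
    while (!(hdr.dwFlags & MHDR_DONE)) Sleep(1);   // wait until the driver releases the buffer
    midiOutUnprepareHeader(out, &hdr, sizeof(hdr));
    midiOutClose(out);
    return 0;
}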

OpenAL on Mac OS X: Setting AL_SAMPLE_OFFSET does nothing

At work, we're unable to use alSourcePause() to pause sounds, and in any case we might want to start a sound at an offset.
We're performing a "resume" by calling alSourcei(this->sourceId, AL_SAMPLE_OFFSET, this->sampleOffset); with a sample offset retrieved via alGetSourcei(). We tried AL_SEC_OFFSET, AL_BYTE_OFFSET and AL_SAMPLE_OFFSET, to no avail. We have read that the sound source needs to be in the "initial" state; recreating the source, attaching the buffer, and then attempting to skip also did not help.
Changing the buffer data itself to skip ahead is not a solution, since it complicates looping.
Streaming sounds are skipping on slower machines; we're having trouble implementing multithreaded playing.
Since we're on a tight schedule, what is the best way to skip a portion of a simple sound source on OpenAL on OS X?
Source code is available at our Sourceforge repository.
I recently encountered the same problem in our game engine on OS X (10.6.8). We performed the following steps when resuming playback of a static buffer with a given sample offset, in this order:
alSourceQueueBuffers(mSourceId, 1, mBufferId);
alSourcei(mSourceId, AL_SAMPLE_OFFSET, mSampleOffset);
alSourcePlay(mSourceId);
The source was stopped before that, and all buffers were unqueued. According to the AL 1.1 specs, it should be possible to either
specify the buffer offset when the source is in the stopped state; here, the offset is supposed to be applied upon the next alSourcePlay() call, or
specify the offset on an already playing source, which should result in an immediate skip to the desired position.
(See section 4.3.2 of the official specs at http://connect.creativelabs.com/openal/Documentation/OpenAL%201.1%20Specification.htm )
Reversing the latter two calls in the above sequence (i.e. setting the buffer offset after issuing the alSourcePlay() call) did the trick in our case. Technically, this should be a perfectly valid way to go; however, if the audio thread gets interrupted between these two calls for too long, this could possibly result in audible glitches.
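Concretely, the sequence that worked for us looks like this (same placeholder names as above, with mBufferId taken to be an ALuint, hence the address-of):

alSourceQueueBuffers(mSourceId, 1, &mBufferId);          // re-attach the static buffer
alSourcePlay(mSourceId);                                 // start playback first
alSourcei(mSourceId, AL_SAMPLE_OFFSET, mSampleOffset);   // then jump to the saved position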
