Extreme lag while developing using Mac OS - macos

I'm working on a MacBook Pro, OS: Catalina 10.15.7.
At first I was using VS Code to develop in Go, but the fan (I guess?) started sounding like a turbo jet after a while, up to the point that the entire OS would shut down on its own. (I do not exactly recall what the message said; it was a black screen with white text saying something like "your CPU utilization was too high" and the system had to restart.)
Today I am trying to run this Python 3 script:
#!/usr/local/bin/python3
import csv
import json
import boto3
import time
from multiprocessing import Pool

dynamodb = boto3.resource('dynamodb', endpoint_url='http://localhost:4566', region_name='us-east-2')
table = dynamodb.Table('myTable')

collection = []
count = 0
with open('items.csv', newline='') as f:
    reader = csv.DictReader(f)
    for row in reader:
        obj = {}
        collection.append({
            "PK": int(row['id']),
            "SK": "product",
            "Name": row['name']
        })

def InsertItem(i):
    table.put_item(Item=i)

if __name__ == '__main__':
    with Pool(processes=25) as pool:
        result = pool.map(InsertItem, collection, 50)
        print(result)
And the same behavior occurs (it does not seem to be related to VS Code now, since I'm running this script directly from the terminal): the fans are extremely noisy, the performance drops to almost zero, and I get the spinning beach ball of death (which seems to be an omen of the machine about to restart itself), and the process I mentioned above happens again.
Some hints of what is going on :
I'm not the only one having this problem. Another teammate does React and is seeing the same behaviour. (He is using VS Code too, but I think the problem is more generic.)
It seems to appear only with "intensive" tasks. (And please take "intensive" with a grain of salt: I do the very same tasks on my Ubuntu machine with half the RAM and it does not even flinch.)
I have been using Mac for years, and I do not recall having this issue.
So, my question is, is someone else noticing something similar? Is there some workaround for this?
Last note: I tested the Python script you see above last week, and it did not take even 2 minutes to run. Today, with these issues, it just lingers forever. And I can see from the prints I am doing that it attempts to insert items but freezes without moving forward.
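As an aside on the workload itself: 25 processes each issuing individual put_item calls is a fairly heavy pattern. boto3's batch_writer buffers writes and flushes them in batches over a single connection, which tends to be much lighter on the machine. A minimal sketch under the same assumptions as the script above (local endpoint, table myTable; the row_to_item helper is mine, not from the original script):

```python
import csv

def row_to_item(row):
    # same field mapping as the script above
    return {"PK": int(row["id"]), "SK": "product", "Name": row["name"]}

if __name__ == "__main__":
    import boto3  # requires boto3 and a running local DynamoDB endpoint
    dynamodb = boto3.resource("dynamodb",
                              endpoint_url="http://localhost:4566",
                              region_name="us-east-2")
    table = dynamodb.Table("myTable")
    # batch_writer buffers puts and sends them as batched writes
    with open("items.csv", newline="") as f, table.batch_writer() as batch:
        for row in csv.DictReader(f):
            batch.put_item(Item=row_to_item(row))
```

This is a sketch, not a diagnosis of the OS-level lag, but it avoids spawning 25 processes for what is essentially an I/O-bound job.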

Related

Due to IPython and Windows limitation, python multiprocessing isn't available now. So `number_workers` is changed to 0 to avoid getting stuck

#id first_training
#caption Results from the first training
# CLICK ME
from fastai.vision.all import *
path = untar_data(URLs.PETS)/'images'
def is_cat(x): return x[0].isupper()
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
Due to IPython and Windows limitation, python multiprocessing isn't available now.
So number_workers is changed to 0 to avoid getting stuck
Hi, I am studying with the fastai book and I ran this code without Colab or Paperspace.
But, contrary to what I expected, it is taking a very long time (my computer is a workstation).
I am wondering: if I clear that error, maybe by increasing 'number_workers', would it be much faster than before?
How do I solve this problem?
Thanks
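The warning comes from how Python multiprocessing behaves on Windows and inside IPython: worker processes are started with the spawn method, which re-imports the entry module, so unguarded multiprocessing code can hang, and the library falls back to zero workers to avoid that. A minimal stdlib sketch of the underlying pattern (this is not fastai's internals, just the guard that spawn-based platforms require):

```python
from multiprocessing import Pool

def square(x):
    # stand-in for per-item work done in a worker process
    return x * x

if __name__ == '__main__':
    # Without this guard, spawn-based platforms re-execute the main
    # module in every worker, which can deadlock or error out.
    with Pool(processes=2) as pool:
        print(pool.map(square, [1, 2, 3]))  # prints [1, 4, 9]
```

Running training as a plain script with such a guard (rather than in a notebook on Windows) is typically what lets a higher worker count work again.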

How do I use the bluetooth module in conjunction with the multiprocessing module?

I am using Python 3.5 and OS X 10.13.2.
I am attempting to gather EEG data from a bluetooth EEG recording device while a human is watching a pygame GUI. Because the bluetooth device sends data at about 512 Hz and the pygame GUI updates at about 25Hz, I'm thinking that including the gathering of the bluetooth data and the updating of the GUI in the same loop/process is not appropriate.
Therefore, I want to create two concurrent processes: one for the gathering of bluetooth input and one for displaying/updating the pygame GUI. I think I have figured out how to run a pygame window in a separate process, but creating a separate process using the multiprocessing module that connects to and reads input from my bluetooth device is not working out well.
When I run the code below, execution seems to stop on the line socket.connect(('B0:B4:48:F6:38:A1', 1)): nothing in the connect() function after that call is printed. Is this a problem with Mac OS X, the bluetooth module, Python, or something else? Does anyone know how I can fix this? If not, does anyone have a different way I could approach my initial problem of gathering data from a bluetooth device and updating a GUI in parallel using Python 3.5?
import multiprocessing, time, bluetooth

def connect():
    socket = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
    try:
        socket.connect(('B0:B4:48:F6:38:A1', 1))
        print("connected to device!!")
    except Exception as e:
        print("failed to connect :(")

def wait_and_do_nothing():
    print("just sleeping over here in this process")
    time.sleep(5)

process1 = multiprocessing.Process(target=connect, args=())
process2 = multiprocessing.Process(target=wait_and_do_nothing, args=())
process1.start()
process2.start()
process1.join()
process2.join()
print("finished")
The output of the above code is:
finished
I solved my problem. For one, I should have wrapped the multiprocessing code in an `if __name__ == '__main__'` guard like the one below. Two, I should have set the start method to 'spawn' instead of 'fork', as I do below:
if __name__ == '__main__':
    multiprocessing.set_start_method('spawn', force=True)
    process1 = multiprocessing.Process(target=connect, args=())
    process2 = multiprocessing.Process(target=wait_and_do_nothing, args=())
    process1.start()
    process2.start()
    process1.join()
    process2.join()
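For the original goal (one process reading the device at ~512 Hz, one driving the GUI at ~25 Hz), a multiprocessing.Queue can carry samples between the two processes. A minimal sketch, with the bluetooth read loop replaced by a stand-in producer (the real version would push EEG samples instead of integers):

```python
import multiprocessing

def read_samples(q):
    # stand-in for the bluetooth read loop
    for i in range(5):
        q.put(i)
    q.put(None)  # sentinel: no more data

def consume(q):
    # stand-in for the GUI update loop; drains samples as they arrive
    while True:
        sample = q.get()
        if sample is None:
            break
        print("got sample", sample)

if __name__ == '__main__':
    multiprocessing.set_start_method('spawn', force=True)
    q = multiprocessing.Queue()
    p1 = multiprocessing.Process(target=read_samples, args=(q,))
    p2 = multiprocessing.Process(target=consume, args=(q,))
    p1.start(); p2.start()
    p1.join(); p2.join()
```

Because the producer and consumer run at different rates, the queue absorbs the difference; the GUI side can also drain several samples per frame if it falls behind.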

SSL_CTX_new goes into a loop and hangs the application

I'm working on a client application that uses OpenSSL 1.0.2f for streaming data to the server, written in C++. The call to SSL_CTX_new hangs 60% of the time, soon after the connection starts. Sometimes the call returns after a while (recovering from the hang after about 30 seconds to 1 minute), and most of the time it doesn't.
Here is my code:
SSL_library_init();
SSLeay_add_ssl_algorithms();
SSL_load_error_strings();
BIO_new_fp(stderr, BIO_NOCLOSE);
const SSL_METHOD *m_ssl_client_method = TLSv1_2_client_method();
if (m_ssl_client_method)
{
    sslContext = SSL_CTX_new(m_ssl_client_method);
}
That looks similar to the SSL initialization steps given in the OpenSSL wiki.
After profiling with the Very Sleepy profiler, I found that the initialization of random numbers causes the hang; it consumes 100% of the CPU and appears to go into an infinite loop.
Here is a snapshot captured from the Very Sleepy tool.
I'm using VC++ with whole-program optimization and the SSE2 instruction set enabled (disabling these optimizations doesn't seem to change the results).
I have come across a thread that talks about a similar problem but doesn't provide a solution, and I did not find any other threads about this kind of problem. Could someone help me with this?
Thanks in advance.
The problem seemed to be a possible bug in OpenSSL version 1.0.2h; upgrading to the latest version (1.1.0e) solved it.
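Since the fix was an OpenSSL upgrade, it helps to confirm which OpenSSL build a runtime actually links against before and after upgrading, because a stale DLL on the search path can silently keep the old version in use. For example, from Python (the ssl module exposes the linked library's version):

```python
import ssl

# Version string of the OpenSSL (or LibreSSL) library this runtime
# was linked against, e.g. "OpenSSL 1.1.1k  25 Mar 2021"
print(ssl.OPENSSL_VERSION)

# 5-tuple (major, minor, fix, patch, status) for programmatic checks
print(ssl.OPENSSL_VERSION_INFO)
```

A C++ application can do the equivalent with OpenSSL's own version macros/functions; the point is simply to verify the upgrade actually took effect in the deployed binary.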

Addressing multiple B200 devices through the UHD API

I have 2 B210 radios on USB 3 on a Windows 10 system, using the latest release of the UHD USRP C API with Python 3.6 as the programming environment. I can "sort of" run them simultaneously in separate processes, but I would like to know if it is possible to run them in a single thread. How?
1. Happy to move to Linux if it makes things easier; I'm just more familiar with Windows.
2. "Sort of" = I sometimes get errors, which might be the two processes colliding somewhere down the stack.
The code below illustrates the race condition: sometimes one or both processes fail with error code 40 (UHD_ERROR_ASSERTION) or occasionally code 11 (UHD_ERROR_KEY).
from ctypes import (windll, byref, c_void_p, c_char_p)
from multiprocessing import Process, current_process

def pread(argstring):
    # get handle for device
    usrp = c_void_p(0)
    uhdapi = windll.uhd
    p_str = c_char_p(argstring.encode("UTF8"))
    errNo = uhdapi.uhd_usrp_make(byref(usrp), p_str)
    if errNo != 0:
        print("\r*****************************************************************")
        print("ERROR: ", errNo, " IN: ", current_process())
        print("=================================================================")
    if usrp.value != 0:
        uhdapi.uhd_usrp_free(byref(usrp))
    return

if __name__ == '__main__':
    while True:
        p2 = Process(target=pread, args=("",))
        p1 = Process(target=pread, args=("",))
        p1.start()
        p2.start()
        p1.join()
        p2.join()
        print("end")
Yes, you can have multiple multi_usrp handles.
By the way, note that UHD is natively C++, and the C API is just a wrapper around it. It's designed for generating scripting interfaces like the Python layer you're using (I don't know which interface between Python and the C API you're using; something self-written?).
While it's possible, there's no good reason to call the recv and send functions from the same thread: most modern machines are multi-threaded, and you should make use of that. Real-time SDR is a CPU-intensive task, and you should use all the CPU resources available to get data to and from the driver, so as to avoid overflowing buffers.
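Following the answer's suggestion, a single-process design would give each device its own thread rather than its own process. A minimal stdlib sketch of the shape (the device argument strings and the stream function are hypothetical placeholders for the uhd_usrp_make/recv calls, not real UHD API usage):

```python
import threading
import queue

results = queue.Queue()

def stream(device_args):
    # hypothetical stand-in for opening one device and streaming from it;
    # a real version would call the UHD API here
    results.put((device_args, "done"))

# made-up device argument strings, one thread per radio
threads = [threading.Thread(target=stream, args=(a,))
           for a in ("serial=A", "serial=B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Both handles then live in one process, which avoids the cross-process collisions, while each radio still gets a dedicated CPU-bound loop.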

Clicking qt .app vs running .exe in terminal

I have a Qt GUI that spawns a C++11 clang server on OS X 10.8 (Xcode).
It does cryptographic proof-of-work mining of a name (single mining thread).
When I click the .app, the process takes 4 1/2 hours.
When I run the exact same exe inside the .app folder from the terminal, the process takes 30 minutes.
Question: how do I debug this?
Thank you
====================================
Even worse: with the mining server running in the terminal,
if I start a GUI program that connects to the server and just sends it the "mine" command (IPC): 4 hours.
If I start a CL-UI that connects to the server and just sends it the "mine" command (IPC): 30 minutes.
In both cases the server is mining in a tight loop. Corrupt memory? A single CPU is at 100%, as it should be. Can't figure it out.
=========
This variable is used without locking...
volatile bool running = true;
server thread
fut = std::async(&Commissioner::generateName, &comish, name, m_priv.get_public_key() );
server loop...
nonce_t reset = std::numeric_limits<nonce_t>::max() - 1000;
while (running && hit < target) {
    if (nt.nonce >= reset)
    {
        nt.utc_sec = fc::time_point::now();
        nt.nonce = 0;
    }
    else { ++nt.nonce; }
    hit = difficulty(nt.id());
}
Evidence is now pointing to deterministic chaotic behavior, just very sensitive to initial conditions.
The initial condition may be the timestamp data within the object that is hashed during mining.
Mods, please close.
