serial.serialutil.SerialTimeoutException: Write timeout - esp32

Sketch uses 260925 bytes (24%) of program storage space. Maximum is 1044464 bytes.
Global variables use 27984 bytes (34%) of dynamic memory, leaving 53936 bytes for local variables. Maximum is 81920 bytes.
esptool.py v3.0
Serial port COM3
Connecting...
Traceback (most recent call last):
File "C:/Users/Vicky/AppData/Local/Arduino15/packages/esp8266/hardware/esp8266/3.0.2/tools/pyserial\serial\serialwin32.py", line 325, in write
raise SerialTimeoutException('Write timeout')
serial.serialutil.SerialTimeoutException: Write timeout
Failed uploading: uploading error: exit status 1
I want to upload the Blink sketch to an "ESP-12E" board, but I am getting this error.

Related

ModBus TCP communication using Ruby: failing to implement HelloWorld

I am able to communicate with a ventilation unit (a Helios KWL EC 500 W, which supports holding registers only; the English description starts about halfway through the document) using the modpoll utility v3.4. But I have failed to port even the very first exchange to Ruby and the rmodbus library v1.3.3.
With modpoll, I can request a temperature value with the command
./modpoll -m tcp -a 180 <ipaddr> 0x7630 0x3031 0x3034 0x0000
and then read the data using
./modpoll -m tcp -a 180 -t4:hex -c 8 -0 -1 <ipaddr>
Protocol configuration: MODBUS/TCP
Slave configuration...: address = 180, start reference = 1 (PDU), count = 8
Communication.........: x.x.x.x, port 502, t/o 2.00 s, poll rate 1000 ms
Data type.............: 16-bit register (hex), output (holding) register table
-- Polling slave...
[1]: 0x7630
which outputs 8 16-bit registers, matching the example given in the Helios Modbus documentation. As a very first step, I tried to move just the read part to Ruby. However, my Ruby code times out:
require 'rmodbus'

ModBus::TCPClient.new(ipaddr, 502) do |client|
  client.with_slave(1) do |slave|
    slave.debug = true
    puts slave.holding_registers[180..187]
  end
end
and throws the exception
/var/lib/gems/2.3.0/gems/rmodbus-1.3.3/lib/rmodbus/slave.rb:241:in `rescue in query':
Timed out during read attempt (ModBus::Errors::ModBusTimeout)
from /var/lib/gems/2.3.0/gems/rmodbus-1.3.3/lib/rmodbus/slave.rb:232:in `query'
from /var/lib/gems/2.3.0/gems/rmodbus-1.3.3/lib/rmodbus/slave.rb:164:in `read_holding_registers'
What's wrong?
I am not sure if or how to use the parameters output by modpoll, "address = 180" and "start reference = 1". Is "address" equivalent to the holding register number?
Okay, this one was rather stupid. For the record (and for others who might want to talk to their Helios using rmodbus):
slave.debug = 1
turns on debugging, which prints the byte stream sent to the Modbus slave. The sequence is supposed to start with: transaction number (2 bytes), protocol specifier (2 bytes, always zero), size of the following message (2 bytes), and unit identifier (1 byte, always 180 for the Helios KWL).
The slave needs to be initialized with its unit identifier 180, instead of 1:
client.with_slave(180) do |slave|
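For the record, here is the question's snippet with just that one change applied (a minimal sketch; ipaddr is still a placeholder for the device's IP address):

require 'rmodbus'

ModBus::TCPClient.new(ipaddr, 502) do |client|
  # the Helios KWL answers on unit identifier 180, not 1
  client.with_slave(180) do |slave|
    slave.debug = true                      # print the raw request/response bytes
    puts slave.holding_registers[180..187]  # read 8 holding registers
  end
end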

Error during go build/run execution

I've created a simple Go program: https://gist.github.com/kbl/86ed3b2112eb80522949f0ce574a04e3
It fetches some XML from the internet and then starts X goroutines, where X depends on the file content. In my case that was 1700 goroutines.
My first execution finished with:
$ go run mathandel1.go
2018/01/27 14:19:37 Get https://www.boardgamegeek.com/xmlapi/boardgame/162152?pricehistory=1&stats=1: dial tcp 72.233.16.130:443: socket: too many open files
2018/01/27 14:19:37 Get https://www.boardgamegeek.com/xmlapi/boardgame/148517?pricehistory=1&stats=1: dial tcp 72.233.16.130:443: socket: too many open files
exit status 1
I've tried to increase ulimit to 2048.
Now I'm getting a different error, though the script is the same:
$ go build mathandel1.go
# command-line-arguments
/usr/local/go/pkg/tool/linux_amd64/link: flushing $WORK/command-line-arguments/_obj/exe/a.out: write $WORK/command-line-arguments/_obj/exe/a.out: file too large
What is causing that error? How can I fix that?
You ran ulimit 2048, which changed the maximum file size.
From man bash(1), ulimit section:
If no option is given, then -f is assumed.
This means that you have now set the maximum file size to 2048 units of 1024 bytes (2 MiB), which is not enough for much of anything, and certainly not for a linked Go binary.
I'm guessing you meant to change the limit for number of open file descriptors. For this, you want to run:
ulimit -n 2048
As for the original error (before changing the maximum file size): you're launching 1700 goroutines, each performing an HTTP GET. Each one opens a connection, using a TCP socket, and those sockets count against the open file descriptor limit.
Instead, you should be limiting the number of concurrent downloads. This can be done with a simple worker pool pattern.
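For illustration, here is a minimal sketch of that pattern; the worker count and the URL list are placeholders (the real list would be the ~1700 URLs built from the XML). A fixed number of workers pull URLs from a channel, so only that many sockets are ever open at once:

package main

import (
    "fmt"
    "log"
    "net/http"
    "sync"
)

func main() {
    urls := []string{ /* ... the ~1700 URLs built from the XML ... */ }

    const workers = 20 // at most 20 connections (file descriptors) in flight
    jobs := make(chan string)

    var wg sync.WaitGroup
    wg.Add(workers)
    for i := 0; i < workers; i++ {
        go func() {
            defer wg.Done()
            for url := range jobs {
                resp, err := http.Get(url)
                if err != nil {
                    log.Println(err)
                    continue
                }
                resp.Body.Close() // always release the connection
                fmt.Println(url, resp.Status)
            }
        }()
    }

    for _, u := range urls {
        jobs <- u
    }
    close(jobs)
    wg.Wait()
}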

Trouble with Google API in python autosub

I'm trying to set up autosub to translate subtitles. I checked the GitHub repo and saw this thread, in which others are getting the same errors as me. However, when I tried their solution of enabling the Cloud Translation API, it didn't correct the problem. I am running this command for autosub, but I have it in a script that translates to different languages, which is why there are bash variables in the command.
"$tool_path" -o "$output_file" -F "$sub_format" -C 3 -K "key=$api_key" -S "$language_input" -D "$language_output" "$input_file"
When I run this command, I get the exact same error as in the thread, which is as follows:
Converting speech regions to FLAC files: 100% |################################################################################################################################################################################| Time: 0:00:03
Performing speech recognition: 100% |##########################################################################################################################################################################################| Time: 0:00:45
Exception in thread Thread-3:2% |#### | ETA: 0:00:00
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 389, in _handle_results
task = get()
File "/home/eddy/.local/lib/python2.7/site-packages/oauth2client/_helpers.py", line 133, in positional_wrapper
return wrapped(*args, **kwargs)
TypeError: ('__init__() takes at least 3 arguments (1 given)', <class 'googleapiclient.errors.HttpError'>, ())

Pandas read_csv internal memory usage

I have a 320 MB comma-separated (CSV) file.
To read it in, I use
pd.read_csv(loggerfile, header = 2)
I have 8 GB of RAM (5 GB of which are free), so how can this ever throw an error?
File "C:\Users\me\AppData\Local\Continuum\Anaconda\lib\site-packages\pandas\io\parsers.py", line 443, in parser_f
return _read(filepath_or_buffer, kwds)
File "C:\Users\me\AppData\Local\Continuum\Anaconda\lib\site-packages\pandas\io\parsers.py", line 235, in _read
return parser.read()
File "C:\Users\me\AppData\Local\Continuum\Anaconda\lib\site-packages\pandas\io\parsers.py", line 686, in read
ret = self._engine.read(nrows)
File "C:\Users\me\AppData\Local\Continuum\Anaconda\lib\site-packages\pandas\io\parsers.py", line 1130, in read
data = self._reader.read(nrows)
File "parser.pyx", line 727, in pandas.parser.TextReader.read (pandas\parser.c:7146)
File "parser.pyx", line 777, in pandas.parser.TextReader._read_low_memory (pandas\parser.c:7725)
File "parser.pyx", line 1788, in pandas.parser._concatenate_chunks (pandas\parser.c:21033)
MemoryError
EDIT:
Windows 7 Enterprise, 64-bit
Anaconda 2.0.1 x86 <- perhaps x86_64 would be better?
Still, the memory error occurs way before my memory cap is reached (as seen in Task Manager), even on a 3 GB, 32-bit machine.
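For what it's worth, a 32-bit process on Windows only gets roughly 2 GB of address space, so running out well before 8 GB is plausible. One common way to keep the parser's peak memory down is to read the file in chunks and keep only what is needed; a minimal sketch, where the column names and the chunk size are made up:

import pandas as pd

# Read the 320 MB file piece by piece instead of all at once.
# loggerfile is the same path variable as in the question;
# the selected columns and the chunk size are illustrative.
parts = []
for chunk in pd.read_csv(loggerfile, header=2, chunksize=100000):
    parts.append(chunk[["timestamp", "value"]])  # keep only the columns you need

df = pd.concat(parts, ignore_index=True)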

Tornado app halting regularly for few seconds with 100% CPU

I am trying to troubleshoot an app running on Tornado 2.4 on Ubuntu 11.04 on EC2. It regularly hits 100% CPU and halts on that request for a few seconds.
Any help on this is greatly appreciated.
Symptoms:
top shows 100% CPU just at the time it halts. Normally the server runs at about 30-60% CPU utilization.
It halts every 2-5 minutes, for just one request. I have checked that there are no cron jobs affecting this.
It halts for about 2 to 9 seconds. The problem goes away on restarting Tornado and worsens with Tornado uptime: the longer the server has been up, the longer it halts.
The HTTP requests for which the problem appears do not seem to follow any pattern.
Interestingly, the next request in the log sometimes matches the halting duration and sometimes does not. Example:
00:00:00 GET /some/request ()
00:00:09 GET /next/request (9000ms)
00:00:00 GET /some/request ()
00:00:09 GET /next/request (1ms)
# a 9-second gap in requests is certainly not possible, as clients are constantly polling.
The database (MongoDB) shows no expensive queries and no unusually large number of queries. No page faults. The database is on the same machine, on local disk.
vmstat shows no change in read/write sizes compared to the last few minutes.
Tornado is running behind nginx.
Sending SIGINT when it was most likely halting gives a different stack trace every time. Some of them are below:
Traceback (most recent call last):
File "chat/main.py", line 3396, in <module>
main()
File "chat/main.py", line 3392, in main
tornado.ioloop.IOLoop.instance().start()
File "/home/ubuntu/tornado/tornado/ioloop.py", line 515, in start
self._run_callback(callback)
File "/home/ubuntu/tornado/tornado/ioloop.py", line 370, in _run_callback
callback()
File "/home/ubuntu/tornado/tornado/stack_context.py", line 216, in wrapped
callback(*args, **kwargs)
File "/home/ubuntu/tornado/tornado/iostream.py", line 303, in wrapper
callback(*args)
File "/home/ubuntu/tornado/tornado/stack_context.py", line 216, in wrapped
callback(*args, **kwargs)
File "/home/ubuntu/tornado/tornado/httpserver.py", line 298, in _on_request_body
self.request_callback(self._request)
File "/home/ubuntu/tornado/tornado/web.py", line 1421, in __call__
handler = spec.handler_class(self, request, **spec.kwargs)
File "/home/ubuntu/tornado/tornado/web.py", line 126, in __init__
application.ui_modules.iteritems())
File "/home/ubuntu/tornado/tornado/web.py", line 125, in <genexpr>
self.ui["_modules"] = ObjectDict((n, self._ui_module(n, m)) for n, m in
File "/home/ubuntu/tornado/tornado/web.py", line 1114, in _ui_module
def _ui_module(self, name, module):
KeyboardInterrupt
Traceback (most recent call last):
File "chat/main.py", line 3398, in <module>
main()
File "chat/main.py", line 3394, in main
tornado.ioloop.IOLoop.instance().start()
File "/home/ubuntu/tornado/tornado/ioloop.py", line 515, in start
self._run_callback(callback)
File "/home/ubuntu/tornado/tornado/ioloop.py", line 370, in _run_callback
callback()
File "/home/ubuntu/tornado/tornado/stack_context.py", line 216, in wrapped
callback(*args, **kwargs)
File "/home/ubuntu/tornado/tornado/iostream.py", line 303, in wrapper
callback(*args)
File "/home/ubuntu/tornado/tornado/stack_context.py", line 216, in wrapped
callback(*args, **kwargs)
File "/home/ubuntu/tornado/tornado/httpserver.py", line 285, in _on_headers
self.request_callback(self._request)
File "/home/ubuntu/tornado/tornado/web.py", line 1408, in __call__
transforms = [t(request) for t in self.transforms]
File "/home/ubuntu/tornado/tornado/web.py", line 1811, in __init__
def __init__(self, request):
KeyboardInterrupt
Traceback (most recent call last):
File "chat/main.py", line 3351, in <module>
main()
File "chat/main.py", line 3347, in main
tornado.ioloop.IOLoop.instance().start()
File "/home/ubuntu/tornado/tornado/ioloop.py", line 571, in start
self._handlers[fd](fd, events)
File "/home/ubuntu/tornado/tornado/stack_context.py", line 216, in wrapped
callback(*args, **kwargs)
File "/home/ubuntu/tornado/tornado/netutil.py", line 342, in accept_handler
callback(connection, address)
File "/home/ubuntu/tornado/tornado/netutil.py", line 237, in _handle_connection
self.handle_stream(stream, address)
File "/home/ubuntu/tornado/tornado/httpserver.py", line 156, in handle_stream
self.no_keep_alive, self.xheaders, self.protocol)
File "/home/ubuntu/tornado/tornado/httpserver.py", line 183, in __init__
self.stream.read_until(b("\r\n\r\n"), self._header_callback)
File "/home/ubuntu/tornado/tornado/iostream.py", line 139, in read_until
self._try_inline_read()
File "/home/ubuntu/tornado/tornado/iostream.py", line 385, in _try_inline_read
if self._read_to_buffer() == 0:
File "/home/ubuntu/tornado/tornado/iostream.py", line 401, in _read_to_buffer
chunk = self.read_from_fd()
File "/home/ubuntu/tornado/tornado/iostream.py", line 632, in read_from_fd
chunk = self.socket.recv(self.read_chunk_size)
KeyboardInterrupt
Any tips on how to troubleshoot this are greatly appreciated.
Further observations:
strace -p during the time it hangs produces no output.
ltrace -p during hang time shows only free() calls in large numbers:
free(0x6fa70080) =
free(0x1175f8060) =
free(0x117a5c370) =
It sounds like you're suffering from garbage collection (GC) storms. The behavior you've described is typical of that diagnosis, and the ltrace output further supports the hypothesis.
Lots of objects are being allocated and disposed of in the main/event loop paths exercised by your usage, and the periodic flurries of free() calls result from that.
One possible approach would be to profile your code (or libraries on which you are depending) and see if you can refactor it to use (and re-use) objects from pre-allocated pools.
Another possible mitigation would be to make your own, more frequent calls to trigger garbage collection: more expensive in aggregate, but possibly less costly at each call (a trade-off in exchange for more predictable throughput).
You might be able to use the Python gc module both for investigating the issue more deeply (using gc.set_debug()) and for a simple attempted mitigation (for example, calling gc.collect() after each transaction). You might also try running your application with gc.disable() for a reasonable length of time to see if that further implicates the Python garbage collector. Note that disabling the garbage collector for an extended period will likely let memory use grow until the system starts paging/swapping, so use it only to validate the hypothesis and don't expect it to solve the problem in any meaningful way; it may just defer the problem until the whole system is thrashing and needs to be rebooted.
Here's an example of using gc.collect() in another SO thread on Tornado: SO: Tornado memory leak on dropped connections
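To make that concrete, here is a minimal sketch of wiring those gc hooks into a Tornado 2.4 process; the 10-second interval and the use of DEBUG_STATS are illustrative choices, not a prescription:

import gc
import tornado.ioloop

# Investigate: print collector statistics on every collection run,
# so GC pauses can be correlated with the "halting" requests in the log.
gc.set_debug(gc.DEBUG_STATS)

# Mitigate: trade one large, unpredictable pause for smaller, regular ones
# by triggering collection on a fixed schedule.
def periodic_collect():
    collected = gc.collect()
    if collected:
        print "gc: collected %d objects" % collected  # Python 2, as in the traces above

tornado.ioloop.PeriodicCallback(periodic_collect, 10 * 1000).start()  # every 10 s

# Validate the hypothesis (temporarily only): run with the collector off
# and see whether the stalls disappear while memory use grows.
# gc.disable()

tornado.ioloop.IOLoop.instance().start()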
