I received a weird error saying
[AVCaptureInput attachToFigCaptureSession:]_block_invoke, file
/SourceCache/EmbeddedAVFoundation/EmbeddedAVFoundation-
887.52/Aspen/AVCaptureInput.m, line 129.
What is this error? How did it happen?
It happens when I call session.startRunning(), where session is an AVCaptureSession.
When running Rasa (tried on versions 1.3.3, 1.3.7, 1.3.8) I encounter this timeout exception message almost every time I make a call. I am running a simple program that recognises when a user offers their age, and stores the age in a database through an action response:
Bot loaded. Type a message and press enter (use '/stop' to exit):
Your input -> I am 24 years old
2019-10-10 13:29:33 ERROR asyncio - Task exception was never retrieved
future: <Task finished coro=<configure_app.<locals>.run_cmdline_io() done, defined at /Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/rasa/core/run.py:123> exception=TimeoutError()>
Traceback (most recent call last):
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/rasa/core/run.py", line 127, in run_cmdline_io
server_url=constants.DEFAULT_SERVER_FORMAT.format("http", port)
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/rasa/core/channels/console.py", line 138, in record_messages
async for response in bot_responses:
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/async_generator/_impl.py", line 366, in step
return await ANextIter(self._it, start_fn, *args)
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/async_generator/_impl.py", line 205, in throw
return self._invoke(self._it.throw, type, value, traceback)
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/async_generator/_impl.py", line 209, in _invoke
result = fn(*args)
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/rasa/core/channels/console.py", line 103, in send_message_receive_stream
async for line in resp.content:
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/aiohttp/streams.py", line 40, in __anext__
rv = await self.read_func()
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/aiohttp/streams.py", line 329, in readline
await self._wait('readline')
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/aiohttp/streams.py", line 297, in _wait
await waiter
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/aiohttp/helpers.py", line 585, in __exit__
raise asyncio.TimeoutError from None
concurrent.futures._base.TimeoutError
Transport closed # ('127.0.0.1', 63319) and exception experienced during error handling
Originally I thought this timeout was being caused by using large lookup tables for another part of my Rasa program, but for age recognition I am using a simple regex:
## regex:age
- ^(0?[1-9]|[1-9][0-9]|[1][1-9][1-9])$
Even this causes the timeout.
Please help me solve this. I don't even need to avoid the timeout, I just want to know where I can catch/ignore this exception.
Thanks!
I was fetching data from an API and was getting a timeout error because the data could not be fetched within the default time limit:
Go to the file venv/Lib/site-packages/rasa/core/channels/console.py.
Change the default value of DEFAULT_STREAM_READING_TIMEOUT_IN_SECONDS to more than 10; in my case, changing it to 30 worked.
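For reference, this is roughly what the edited line looks like in console.py (a sketch only; the exact default and surrounding code may differ between Rasa versions):
# venv/Lib/site-packages/rasa/core/channels/console.py
# Raise the stream-reading timeout so slow action responses don't trip it.
DEFAULT_STREAM_READING_TIMEOUT_IN_SECONDS = 30  # shipped default is 10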
Another possible cause is fetching data repeatedly within a short period of time, which can also result in a timeout.
Observations:
When DEFAULT_STREAM_READING_TIMEOUT_IN_SECONDS is set to 10, I get a timeout error.
When DEFAULT_STREAM_READING_TIMEOUT_IN_SECONDS is set to 30 and I keep running rasa shell again and again, I still get a timeout error.
When DEFAULT_STREAM_READING_TIMEOUT_IN_SECONDS is set to 30 and I do not run rasa shell too frequently, it works perfectly.
Make sure that you uncomment the code below
action_endpoint:
url: "http://localhost:5055/webhook"
in endpoints.yml. It is used when your custom actions query a database.
I had the same problem, and it was not solved by increasing the timeout.
Make sure you are sending back a string to the rasa shell from the rasa action server. What I mean is: if you are using text= in utter_message, make sure that the async result is actually a string and not an object or something else. Change the type if required.
dispatcher.utter_message(text='has to be a string')
Running 'rasa shell -vv' showed me that it was receiving an object, which it could not parse; hence the timeout.
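For illustration, here is a minimal custom-action sketch of that conversion (the action name and the query_age_from_db helper are hypothetical stand-ins for your own database code; the point is the str() call before utter_message):
from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


def query_age_from_db(user_id):
    # Hypothetical stand-in for a real database lookup; note it returns a non-string value.
    return 24


class ActionTellAge(Action):
    def name(self) -> str:
        return "action_tell_age"  # hypothetical action name

    def run(self, dispatcher: CollectingDispatcher, tracker: Tracker, domain: dict):
        result = query_age_from_db(tracker.sender_id)
        # Convert whatever the query returned into a plain string before replying;
        # sending a non-string object is what made the shell hang until the timeout.
        dispatcher.utter_message(text=str(result))
        return []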
I can't comment yet, so I'll add a follow-up to Vishal's response here. To check that the webhooks are present and waiting for connections, you can use the -vv command-line switch. This displays all available webhooks at startup. For example:
2020-04-21 14:05:56 DEBUG rasa.core.utils - Available web server routes:
/webhooks/rasa GET custom_webhook_RasaChatInput.health
/webhooks/rasa/webhook POST custom_webhook_RasaChatInput.receive
/webhooks/rest GET custom_webhook_RestInput.health
/webhooks/rest/webhook POST custom_webhook_RestInput.receive
/ GET hello
I have the following code:
print("AAAAAA")
local status, jobj = pcall(json.decode(docTxt))
print("BBBBBB")
The decode call causes a PANIC error, and it results in the following console output:
AAAAAAA
PANIC: unprotected error in call to Lua API (json.lua:166: 'for' initial value must be a number)
The whole program breaks; BBBBBB does not get printed to the console.
Is this normal? Is pcall broken?
I was able to figure it out: it can be configured in the watchdog options for the firmware compiler. I now have it set up so that it reboots on panic.
UPDATE: Confirmed as a bug. For more detail, see the link and details provided by ViralBShah below.
Julia throws a strange error when I add and remove processes (addprocs and rmprocs), but only if I don't do any parallel processing in between. Consider the following example code:
#Set parameters
numCore = 4;
#Add workers
print("Adding workers... ");
addprocs(numCore - 1);
println(string(string(numCore-1), " workers added."));
#Detect number of cores
println(string("Number of processes detected = ", string(nprocs())));
# Do some stuff (COMMENTED OUT)
# XLst = {rand(10, 1) for i in 1:8};
# XMean = pmap(mean, XLst);
#Remove the additional workers
print("Removing workers... ");
rmprocs(workers());
println("Done.");
println("Subroutine complete.");
Note that I've commented out the only code that actually does any parallel processing (the call to pmap). If I run this code on my machine (Julia 0.2.1, Ubuntu 14.04), I get the following output in the console:
Adding workers... 3 workers added.
Number of processes detected = 4
Removing workers... Done.
Subroutine complete.
fatal error on
In [86]: fatal error on 88: ERROR: 87: ERROR: connect: connection refused (ECONNREFUSED)
in yield at multi.jl:1540
connect: connection refused (ECONNREFUSED) in wait at task.jl:117
in wait_connected at stream.jl:263
in connect at stream.jl:878
in Worker at multi.jl:108
in anonymous at task.jl:876
in yield at multi.jl:1540
in wait at task.jl:117
in wait_connected at stream.jl:263
in connect at stream.jl:878
in Worker at multi.jl:108
in anonymous at task.jl:876
The first four lines are printed by my program, and seem to indicate that it runs to completion. But then I get a fatal error. Any ideas?
The most interesting thing about this error is that if I uncomment the code with the call to pmap (i.e., if I actually do some parallel processing), the fatal error goes away.
This issue is being tracked at https://github.com/JuliaLang/julia/issues/7646 and I reproduce the answer by Amit Murthy:
pid 1 does an addprocs(3)
addprocs returns after it has established connections with all 3 new workers.
However, at this time the connections between workers may not have been set up yet, i.e., from pids 3 -> 2, 4 -> 2 and 4 -> 3.
Now pid 1 calls rmprocs(workers()), i.e., pids 2, 3 and 4.
As pid 2 exits, the connection attempt from 4 to 2 results in an error.
Since we have redirected the output of pid 4 to the stdout of pid 1, we see the same error printed.
The system is still in a consistent state, though the printing of said error messages may suggest something amiss.
For some reason I am having trouble with the make test step while installing Vowpal Wabbit. I am getting the following error:
RunTests: test 59: '/usr/bin/timeout 20 ../vowpalwabbit/vw -d train-sets/argmax_data -k -c --passes 20 --search_rollout oracle --search_alpha 1e-8 --search_task argmax --search 2 --holdout_off' failed (exitcode=1)
Anyone have a clue what this could be?
Just run the command that failed (the part in single quotes) directly from the test directory, and the reason becomes obvious.
It is missing a data file:
Reading datafile = test/train-sets/argmax_data
can't open: test/train-sets/argmax_data, error = No such file or directory
vw: std::exception
The issue was introduced in a recent check-in and should soon be fixed (hopefully).
Update (2014-05-31): fixed in the most recent commit.
I've installed CDH 4.2.1 and now I'm trying to access the HUE web UI for the first time. I enter a new user name and password, click Sign Up, and wait and wait, and nothing happens for 20 minutes. If I open another window and try to access the login page, I get a message that the database is locked.
I'm running on a single node. The following is the error message from the second window:
Traceback (most recent call last):
File "/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/share/hue/build/env/lib/python2.6/site-packages/eventlet-0.9.14-py2.6.egg/eventlet/wsgi.py", line 336, in handle_one_response
result = self.application(self.environ, start_response)
File "/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/share/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/core/handlers/wsgi.py", line 245, in __call__
response = middleware_method(request, response)
File "/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/share/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/contrib/sessions/middleware.py", line 36, in process_response
request.session.save()
File "/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/share/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/contrib/sessions/backends/db.py", line 63, in save
obj.save(force_insert=must_create, using=using)
File "/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/share/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/db/models/base.py", line 434, in save
self.save_base(using=using, force_insert=force_insert, force_update=force_update)
File "/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/share/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/db/models/base.py", line 500, in save_base
rows = manager.using(using).filter(pk=pk_val)._update(values)
File "/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/share/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/db/models/query.py", line 491, in _update
return query.get_compiler(self.db).execute_sql(None)
File "/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/share/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/db/models/sql/compiler.py", line 861, in execute_sql
cursor = super(SQLUpdateCompiler, self).execute_sql(result_type)
File "/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/share/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/db/models/sql/compiler.py", line 727, in execute_sql
cursor.execute(sql, params)
File "/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/share/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/db/backends/sqlite3/base.py", line 200, in execute
return Database.Cursor.execute(self, query, params)
DatabaseError: database is locked
Any idea?
Thank you,
Roberto.
The error here means that Hue is trying to connect to SQLite and fails because the first connection is still not finished (and SQLite is not concurrent), so the traceback is not very useful here.
I would look in the Hue logs, especially 'runcpserver.log', to see if there is more information.
Adding 'export DESKTOP_DEBUG=1' to the environment and restarting Hue might give more details.
I would go to http://HUE_SERVER:HUE_PORT/dump_config, look at the 'database' value, delete that file, and run a /opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/share/hue syncdb, or 'sync database' if in CM.
It will recreate the database; make sure no other process is using it.
If it still does not work, I would give MySQL a try: http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/latest/CDH4-Installation-Guide/cdh4ig_topic_15_8.html