SNAP: Simulation and Neuroscience Application Platform - ffmpeg

Is there any documentation/help manual on how to use SNAP (Simulation and Neuroscience Application Platform)?
I wanted to run the Motor Imagery sample scenario with an .avi file as the stimulus instead of the image. How can that be done?
The following error is obtained when using the AlphaCalibration scenario, which includes code to play an .avi file. Any help is appreciated:
:movies:ffmpeg(warning): parser not found for codec indeo4, packets or times may be invalid.
:movies:ffmpeg(warning): max_analyze_duration 5000000 reached at 5000000
:movies(error): Could not open /e/BCI_Feb2014/SNAP-master/src/studies/SampleStudy/bird.avi
:audio(error): Cannot open file: /e/BCI_Feb2014/SNAP-master/src/studies/SampleStudy/bird.avi
:audio(error): Could not open audio /e/BCI_Feb2014/SNAP-master/src/studies/SampleStudy/bird.avi
:movies:ffmpeg(warning): parser not found for codec indeo4, packets or times may be invalid.
:movies:ffmpeg(warning): max_analyze_duration 5000000 reached at 5000000
:movies(error): Could not open /e/BCI_Feb2014/SNAP-master/src/studies/SampleStudy/bird.avi
:gobj(error): Texture "/e/BCI_Feb2014/SNAP-master/src/studies/SampleStudy/bird.avi" exists but cannot be read.
Exception during run(): Could not load texture: bird.avi
Traceback (most recent call last):
File "E:\BCI_Feb2014\SNAP-master\src\framework\latentmodule.py", line 458, in _run_wrap
self.run()
File "modules\BCI\AlphaCalibration.py", line 30, in run
m = self.movie(self.moviefile, block=False, scale=[0.7,0.4],aspect=1.125,contentoffset=[0,0],volume=0.3,timeoffset=self.begintime+t*self.awake_duration,looping=True)
File "E:\BCI_Feb2014\SNAP-master\src\framework\basicstimuli.py", line 348, in movie
tex = self._engine.base.loader.loadTexture(filename)
File "E:\BCI_Feb2014\Panda3D-1.8.0\direct\showbase\Loader.py", line 554, in loadTexture
raise IOError, message
IOError: Could not load texture: bird.avi

The video files are not included in the GitHub repository due to their size. However, your use case should work if you use any other video file that you put on the search path (e.g., next to the other media files; see also Panda3D's notion of content search paths).
There is a SNAP release including media files at ftp://sccn.ucsd.edu/pub/software/LSE-SDK/, which might include the files you're looking for.
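As a rough, standalone sketch (not SNAP's own code), this is how a replacement video could be added to Panda3D's content search path and checked with the same loader call that fails in your traceback; my_stimulus.avi is a hypothetical file name, and the API is the Panda3D 1.8 one your logs point at:
from direct.showbase.ShowBase import ShowBase
from panda3d.core import getModelPath, Filename
base = ShowBase()
# Add the folder holding the video to Panda3D's content search path
# (path taken from the log above; adjust to your own setup).
getModelPath().appendDirectory(Filename.fromOsSpecific(r"E:\BCI_Feb2014\SNAP-master\src\studies\SampleStudy"))
# Same call the SNAP traceback ends in; raises IOError if the file or its codec
# (e.g. indeo4) cannot be handled. my_stimulus.avi is a hypothetical replacement file.
tex = base.loader.loadTexture("my_stimulus.avi")
print(tex)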

Related

Import of a file into Greenplum fails because of one line of data (Navicat)

When importing a file into Greenplum, one line fails and the whole file is not imported. Is there a way to skip the bad line and import the rest of the data into Greenplum successfully?
Here is my SQL statement and the error messages:
copy cjh_test from '/gp_wkspace/outputs/base_tables/error_data_test.csv' using delimiters ',';
ERROR: invalid input syntax for integer: "FE00F760B39BD3756BCFF30000000600"
CONTEXT: COPY cjh_test, line 81, column local_city: "FE00F760B39BD3756BCFF30000000600"
Greenplum has an extension to the COPY command that lets you log errors and set a limit on the number of errors that can occur without stopping the load. Here is an example from the documentation for the COPY command:
COPY sales FROM '/home/usr1/sql/sales_data' LOG ERRORS
SEGMENT REJECT LIMIT 10 ROWS;
That tells COPY that 10 bad rows can be ignored without stopping the load. The reject limit can be a number of rows or a percentage of the load file. You can check the full syntax in psql with: \h copy
If you are loading a very large file into Greenplum, I would suggest looking at gpload or gpfdist (which also support the segment reject limit syntax). COPY is single-threaded through the master server, whereas gpload/gpfdist load the data in parallel to all segments. COPY will be faster for smaller load files, and the others will be faster for load files with millions of rows.
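If you drive the load from Python instead of psql, the same clause works through any PostgreSQL driver. A minimal sketch using psycopg2 (connection details are placeholders; gp_read_error_log is available in newer Greenplum releases for inspecting rejected rows, so check your version's COPY documentation):
import psycopg2
# Connection details are placeholders; point them at your Greenplum master.
conn = psycopg2.connect(dbname="gpadmin", host="mdw", port=5432)
cur = conn.cursor()
# The same COPY extension as above: log errors, tolerate up to 10 bad rows.
cur.execute("""
    COPY sales FROM '/home/usr1/sql/sales_data'
    LOG ERRORS SEGMENT REJECT LIMIT 10 ROWS;
""")
conn.commit()
# In newer Greenplum releases the rejected rows can be read back for inspection.
cur.execute("SELECT * FROM gp_read_error_log('sales');")
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()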

Rasa Timeout Issue

When running Rasa (tried on versions 1.3.3, 1.3.7, 1.3.8) I encounter this timeout exception message almost every time I make a call. I am running a simple program that recognises when a user offers their age, and stores the age in a database through an action response:
Bot loaded. Type a message and press enter (use '/stop' to exit):
Your input -> I am 24 years old
2019-10-10 13:29:33 ERROR asyncio - Task exception was never retrieved
future: <Task finished coro=<configure_app.<locals>.run_cmdline_io() done, defined at /Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/rasa/core/run.py:123> exception=TimeoutError()>
Traceback (most recent call last):
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/rasa/core/run.py", line 127, in run_cmdline_io
server_url=constants.DEFAULT_SERVER_FORMAT.format("http", port)
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/rasa/core/channels/console.py", line 138, in record_messages
async for response in bot_responses:
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/async_generator/_impl.py", line 366, in step
return await ANextIter(self._it, start_fn, *args)
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/async_generator/_impl.py", line 205, in throw
return self._invoke(self._it.throw, type, value, traceback)
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/async_generator/_impl.py", line 209, in _invoke
result = fn(*args)
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/rasa/core/channels/console.py", line 103, in send_message_receive_stream
async for line in resp.content:
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/aiohttp/streams.py", line 40, in __anext__
rv = await self.read_func()
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/aiohttp/streams.py", line 329, in readline
await self._wait('readline')
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/aiohttp/streams.py", line 297, in _wait
await waiter
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/aiohttp/helpers.py", line 585, in __exit__
raise asyncio.TimeoutError from None
concurrent.futures._base.TimeoutError
Transport closed # ('127.0.0.1', 63319) and exception experienced during error handling
Originally I thought this timeout was being caused by using large lookup tables for another part of my Rasa program, but for age recognition I am using a simple regex:
## regex:age
- ^(0?[1-9]|[1-9][0-9]|[1][1-9][1-9])$
Even this causes the timeout.
Please help me solve this. I don't even need to avoid the timeout, I just want to know where I can catch/ignore this exception.
Thanks!
I was fetching data from an API and getting a timeout error because the data could not be fetched within the default time limit:
Open the file venv/Lib/site-packages/rasa/core/channels/console.py.
Change the default value of DEFAULT_STREAM_READING_TIMEOUT_IN_SECONDS from 10 to something higher; in my case, changing it to 30 worked.
Another cause can be fetching data again and again within a short period of time, which can also result in a timeout.
Observations:
When DEFAULT_STREAM_READING_TIMEOUT_IN_SECONDS is set to 10, I get a timeout error.
When DEFAULT_STREAM_READING_TIMEOUT_IN_SECONDS is set to 30 and I keep running rasa shell again and again, I still get a timeout error.
When DEFAULT_STREAM_READING_TIMEOUT_IN_SECONDS is set to 30 and I do not run rasa shell too frequently, it works perfectly.
Make sure that you have uncommented the following in endpoints.yml:
action_endpoint:
url: "http://localhost:5055/webhook"
It is used when your custom actions need to query a database.
I had the same problem and it was not solved by increasing the timeout.
Make sure you are sending back a string to the rasa shell from the rasa action server. What I mean is: if you are using text= in your utter_message, make sure that the (possibly async) result is actually a string and not an object or something else. Change the type if required.
dispatcher.utter_message(text='has to be a string')
Running rasa shell -vv showed me that it was receiving an object, which is why it could not parse it, hence the timeout.
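As an illustration only (the action and slot names here are made up, not from your project), a custom action that always hands a plain string to the dispatcher could look like this:
from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher
class ActionSaveAge(Action):  # hypothetical action name
    def name(self):
        return "action_save_age"
    def run(self, dispatcher: CollectingDispatcher, tracker: Tracker, domain):
        age = tracker.get_slot("age")  # hypothetical slot; may be None or a non-string object
        # Always pass a plain string to utter_message, never an object or coroutine.
        dispatcher.utter_message(text="Saved your age as {}".format(age))
        return []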
I can't comment yet, so I'm adding a follow-up to Vishal's response. To check that the hooks are present and waiting for a connection, you can use the -vv command-line switch. This displays all available hooks at startup. For example:
2020-04-21 14:05:56 DEBUG rasa.core.utils - Available web server routes:
/webhooks/rasa GET custom_webhook_RasaChatInput.health
/webhooks/rasa/webhook POST custom_webhook_RasaChatInput.receive
/webhooks/rest GET custom_webhook_RestInput.health
/webhooks/rest/webhook POST custom_webhook_RestInput.receive
/ GET hello

HDFS: Errno 22 on attempt to edit an existing file in mounted NFS volume

Summary: I mounted an HDFS NFS volume on OS X and it will not let me edit existing files. I can append to files and create files with content, but I cannot "open them with the write flag".
Originally I asked about a particular problem with JupyterLab failing to save notebooks into NFS-mounted volumes, but while trying to dig down to the roots, I realized (hopefully correctly) that it's really about editing existing files.
I mounted HDFS over NFS on OS X and I can access the files, read and write, and so on. JupyterLab, though, can do pretty much everything except actually save notebooks.
I was able to identify the pattern for what's really happening, and the problem boils down to this: you can't open existing files on the NFS volume for writing.
This will work with a new file:
with open("rand.txt", 'w') as f:
f.write("random text")
But if you try to run it again (the file has been created now and the content is there), you'll get the following Exception:
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-15-94a46812fad4> in <module>()
----> 1 with open("rand.txt", 'w') as f:
2 f.write("random text")
OSError: [Errno 22] Invalid argument: 'rand.txt'
I am pretty sure the permissions and all are ok:
with open("seven.txt", 'w') as f:
f.write("random text")
f.writelines(["one","two","three"])
r = open("seven.txt", 'r')
print(r.read())
random textonetwothree
I can also append to files no problem:
aleksandrs-mbp:direct sasha$ echo "Another line of text" >> seven.txt && cat seven.txt
random textonetwothreeAnother line of text
I mount it with the following options:
aleksandrs-mbp:hadoop sasha$ mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync localhost:/ /srv/ti/jupyter-samples/~Hadoop
The Apache documentation suggests that the NFS gateway does not support random writes. I tried looking at the mount documentation but could not find anything specific that points to enforcing sequential writes. I tried playing with different options, but it doesn't seem to help much.
This is the exception I get from JupyterLab when it's trying to save the notebook:
[I 03:03:33.969 LabApp] Saving file at /~Hadoop/direct/One.ipynb
[E 03:03:33.980 LabApp] Error while saving file: ~Hadoop/direct/One.ipynb [Errno 22] Invalid argument: '/srv/ti/jupyter-samples/~Hadoop/direct/.~One.ipynb'
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/notebook/services/contents/filemanager.py", line 471, in save
self._save_notebook(os_path, nb)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/notebook/services/contents/fileio.py", line 293, in _save_notebook
with self.atomic_writing(os_path, encoding='utf-8') as f:
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/contextlib.py", line 82, in __enter__
return next(self.gen)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/notebook/services/contents/fileio.py", line 213, in atomic_writing
with atomic_writing(os_path, *args, log=self.log, **kwargs) as f:
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/contextlib.py", line 82, in __enter__
return next(self.gen)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/notebook/services/contents/fileio.py", line 103, in atomic_writing
copy2_safe(path, tmp_path, log=log)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/notebook/services/contents/fileio.py", line 51, in copy2_safe
shutil.copyfile(src, dst)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/shutil.py", line 115, in copyfile
with open(dst, 'wb') as fdst:
OSError: [Errno 22] Invalid argument: '/srv/ti/jupyter-samples/~Hadoop/direct/.~One.ipynb'
[W 03:03:33.981 LabApp] 500 PUT /api/contents/~Hadoop/direct/One.ipynb?1534835013966 (::1): Unexpected error while saving file: ~Hadoop/direct/One.ipynb [Errno 22] Invalid argument: '/srv/ti/jupyter-samples/~Hadoop/direct/.~One.ipynb'
[W 03:03:33.981 LabApp] Unexpected error while saving file: ~Hadoop/direct/One.ipynb [Errno 22] Invalid argument: '/srv/ti/jupyter-samples/~Hadoop/direct/.~One.ipynb'
This is what I see in the NFS logs at the same time:
2018-08-21 03:05:34,006 ERROR org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3: Setting file size is not supported when setattr, fileId: 16417
2018-08-21 03:05:34,006 ERROR org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3: Setting file size is not supported when setattr, fileId: 16417
Not exactly sure what this means, but if I understand the RFC correctly, this should be part of the implementation:
Servers must support extending the file size via SETATTR.
I understand the complexity behind mounting HDFS and letting clients write all they want while keeping the files distributed and maintaining integrity. Is there, though, a compromise that would enable writes via NFS?
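For what it's worth, based purely on the behaviour described above (new files can be created and appended to, but existing ones cannot be reopened with the write flag), one possible compromise is to delete and recreate a file instead of overwriting it in place. A rough, untested sketch:
import os
def overwrite_on_nfs(path, text):
    # open(path, 'w') fails with Errno 22 on existing files here, but creating
    # a new file works, so remove the old one first and write a fresh copy.
    if os.path.exists(path):
        os.remove(path)
    with open(path, "w") as f:
        f.write(text)
overwrite_on_nfs("rand.txt", "random text")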

expected audio sample rate doesn't match actual?

I am trying to use PocketSphinx to transcribe audio files:
pocketsphinx_continuous -infile 116-288045-0005.flac.wav
but I am getting the errors:
ERROR: "continuous.c", line 136: Input audio file has sample rate [44100],
but decoder expects [16000]
FATAL: "continuous.c", line 165: Failed to process file '116-288045-0005.flac.wav'
due to format mismatch.
Here's one of the audio files I need to transcribe: Download from GitHub
Eventually I will batch-transcribe over 5 hours of audio files like these; currently they all throw the same error.
Here are some stats for one of the files I'm trying to transcribe:
$ soxi 116-288045-0000.flac.wav
Input File : '116-288045-0000.flac.wav'
Channels : 1
Sample Rate : 44100
Precision : 16-bit
Duration : 00:00:10.65 = 469665 samples = 798.75 CDDA sectors
File Size : 939k
Bit Rate : 706k
Sample Encoding: 16-bit Signed Integer PCM
There might be a problem with some of this file's configuration; I've done some pre-processing to merge it with MP3s, convert from FLAC to WAV, and so on.
What's the easiest way now for me to get the transcription working?
Is it possible without resampling the files back down to 16 kHz? Originally the FLAC files had a sample rate of 16 kHz, but I had to merge them with 44.1 kHz MP3 files, so there is now some high-frequency information in them that may be lost if they are resampled to 16 kHz.
Resample the audio to 16000 Hz, then try again.
You can resample like this:
sox file.wav -r 16000 file-16000.wav
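Since over 5 hours of files need the same treatment, the sox call can be scripted. A small sketch assuming sox is on the PATH and the files follow the naming shown above (the output names are illustrative):
import glob
import subprocess
# Resample every *.flac.wav in the current directory to 16 kHz for PocketSphinx.
for path in glob.glob("*.flac.wav"):
    out = path.replace(".wav", "-16000.wav")
    subprocess.run(["sox", path, "-r", "16000", out], check=True)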

USA and Canada with jvectormap

I'm looking for a map of (US + Canada) together with states/provinces respectively.
This is what I've done so far:
Downloaded jVectorMap 1.2.2 from here;
After reading this, installed GDAL and Shapely;
Downloaded 10m Admin 1 package from Natural Earth;
Then, according to this thread, it should be possible to do what I need using the following:
python converter.py --width 900 --country_name_index 12 --country_code_index 18 --longitude0 -100 --where="iso_a2 = 'CA' OR iso_a2 = 'US'" --projection lcc --name us_ca ne_10m_admin_1_states_provinces_shp/ne_10m_admin_1_states_provinces_shp.shp ../jquery-jvectormap-us-ca-lcc-en.js
where the --country_name_index 12 --country_code_index 18 part doesn't make any sense to me, since I'm trying to convert two countries.
Anyway, after running the suggested command I get:
Traceback (most recent call last):
File "converter.py", line 296, in <module>
converter.convert(args['output_file'])
File "converter.py", line 144, in convert
self.loadData()
File "converter.py", line 89, in loadData
self.loadDataSource( sourceConfig )
File "converter.py", line 130, in loadDataSource
shapelyGeometry = shapely.wkb.loads( geometry.ExportToWkb() )
AttributeError: 'module' object has no attribute 'wkb'
I find this really odd, unless I missed something in the installation.
After adding import shapely.wkb to converter.py, I get Alaska named "State" and Yukon named "Territory", and that's it.
What am I missing here?
Thanks for your time.
I had the same problem as you. I solved it by using the shapefile 10m_cultural/ne_10m_admin_1_states_provinces_shp.shp from the Natural Earth "all vector themes" package.
The only downside is that the output JS file is quite big; it easily reaches 2 MB. I'll try a shapefile from a different source next time and let you know, but for now at least this works.
I had the same problem when building continent maps. The fix was to use an older converter.py version (1.1.1 rather than 1.2.2). You still need to pass the --country_name_index and --country_code_index flags, so give them whatever values you want. The map produced is fine.
converter.py 1.1.1 can be found here:
https://github.com/jfhovinne/jvectormap-maps-builder
