Error using VideoReader in MATLAB R2017b on macOS

I am trying to use VideoReader in MATLAB R2017b on macOS 10.13.2 to read a .avi file, but it fails with the following error:
Error using VideoReader/init (line 619)
Could not read file due to an unexpected error. Reason: Cannot Decode
Error in VideoReader (line 172)
obj.init(fileName);
Error in Project1 (line 3)
vidObj = VideoReader('jan28.avi')
I have looked through several related Stack Overflow questions but could not find a solution. How can I read and process this .avi file?
Below is the aviinfo output for the file I'm using:
>> aviinfo('jan28.avi')
Warning: AVIINFO will be removed in a future release. Use VIDEOREADER instead.
> In aviinfo (line 66)
ans =
struct with fields:
Filename: '/Users/nagakukunuru/Matlab_work/jan28.avi'
FileSize: 8642702
FileModDate: '26-Jan-2018 02:13:59'
NumFrames: 1600
FramesPerSecond: 30
Width: 704
Height: 480
ImageType: 'truecolor'
VideoCompression: 'XVID'
Quality: 100
NumColormapEntries: 0
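The aviinfo output shows the file is XVID-compressed. On macOS, VideoReader decodes through the operating system's native media frameworks, which generally cannot decode XVID, which would explain the "Cannot Decode" error. A common workaround is to re-encode the file into a codec macOS understands; the sketch below assumes ffmpeg is installed and on the PATH (e.g. via Homebrew), and the file names follow the question:

```matlab
% Re-encode the XVID AVI into Motion JPEG, which VideoReader on macOS
% can read (this shells out to ffmpeg; install it first, e.g.
% "brew install ffmpeg" -- an assumption about your setup)
system('ffmpeg -i jan28.avi -vcodec mjpeg -q:v 2 -an jan28_mjpeg.avi');

vidObj = VideoReader('jan28_mjpeg.avi');
frame  = readFrame(vidObj);   % should now decode without the init error
```

Re-encoding loses a little quality; if that matters, a lossless intermediate codec or H.264 (-vcodec libx264) is an alternative worth trying.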


Google Assistant on Raspberry Pi 3: "sounddevice.PortAudioError: Error querying device -1"

I installed the Google Assistant SDK on a Raspberry Pi 3. The speaker is a Bluetooth Home Mini; it is paired and connected to the Pi, playback from YouTube works, and Google even reports it as connected.
However, when running the following command in the terminal (inside the virtualenv): googlesamples-assistant-pushtotalk --project-id (not going to paste ID) --device-model-id, I get:
/home/pi/env/lib/python3.5/site-packages/google/auth/crypt/_cryptography_rsa.py:22: CryptographyDeprecationWarning: Python 3.5 support will be dropped in the next release of cryptography. Please upgrade your Python.
import cryptography.exceptions
INFO:root:Connecting to embeddedassistant.googleapis.com
Traceback (most recent call last):
File "/home/pi/env/bin/googlesamples-assistant-pushtotalk", line 8, in <module>
sys.exit(main())
File "/home/pi/env/lib/python3.5/site-packages/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/home/pi/env/lib/python3.5/site-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/home/pi/env/lib/python3.5/site-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/pi/env/lib/python3.5/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/home/pi/env/lib/python3.5/site-packages/googlesamples/assistant/grpc/pushtotalk.py", line 351, in main
flush_size=audio_flush_size
File "/home/pi/env/lib/python3.5/site-packages/googlesamples/assistant/grpc/audio_helpers.py", line 190, in __init__
blocksize=int(block_size/2), # blocksize is in number of frames.
File "/home/pi/env/lib/python3.5/site-packages/sounddevice.py", line 1345, in __init__
**_remove_self(locals()))
File "/home/pi/env/lib/python3.5/site-packages/sounddevice.py", line 762, in __init__
samplerate)
File "/home/pi/env/lib/python3.5/site-packages/sounddevice.py", line 2571, in _get_stream_parameters
info = query_devices(device)
File "/home/pi/env/lib/python3.5/site-packages/sounddevice.py", line 569, in query_devices
raise PortAudioError('Error querying device {0}'.format(device))
sounddevice.PortAudioError: Error querying device -1
When using arecord -l or aplay -l in the terminal, I get the same message for both: "aplay: device_list:270: no soundcards found...".
Also, when running speaker-test -t wav, the test runs but no sound comes out:
speaker-test 1.1.3
Playback device is default
Stream parameters are 48000Hz, S16_LE, 1 channels
WAV file(s)
Rate set to 48000Hz (requested 48000Hz)
Buffer size range from 9600 to 4194304
Period size range from 480 to 4096
Using max buffer size 4194304
Periods = 4
was set period_size = 4096
was set buffer_size = 4194304
0 - Front Left
Time per period = 0.339021
0 - Front Left
Time per period = 0.315553
0 - Front Left
Time per period = 0.315577
(This keeps repeating, but with no sound.)
Finally, the contents of /home/pi/.asoundrc while connected to the speaker are:
pcm.!default {
type plug
slave.pcm {
type bluealsa
device "xx:xx:xx:xx:xx:xx"
profile "a2dp"
}
}
ctl.!default {
type bluealsa
}
And /etc/asound.conf contains a different configuration, also while connected to the same speaker:
pcm.!default {
type asym
capture.pcm "mic"
playback.pcm "speaker"
}
pcm.mic {
type plug
slave.pcm {
type bluealsa
device "xx:xx:xx:xx:xx:xx"
profile "sco"
}
}
pcm.speaker {
type plug
slave.pcm {
type bluealsa
device "xx:xx:xx:xx:xx:xx"
profile "sco"
}
}
I tried copying the contents of /etc/asound.conf into /home/pi/.asoundrc and running speaker-test -t wav, but I get:
speaker-test 1.1.3
Playback device is default
Stream parameters are 48000Hz, S16_LE, 1 channels
WAV file(s)
ALSA lib bluealsa-pcm.c:680:(_snd_pcm_bluealsa_open) Couldn't get BlueALSA transport: No such device
Playback open error: -19, No such device
So, what's the deal?
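The "Couldn't get BlueALSA transport: No such device" error suggests the copied /etc/asound.conf asks BlueALSA for the "sco" profile, which is the headset/microphone profile; many Bluetooth speakers (including one that plays fine over A2DP, as described above) do not expose an SCO transport at all. A sketch of a ~/.asoundrc that keeps a2dp for playback is below; the MAC placeholder and the hw:1,0 capture device are assumptions to adjust for your hardware:

```
pcm.!default {
    type asym
    playback.pcm "speaker"
    capture.pcm "mic"
}
pcm.speaker {
    type plug
    slave.pcm {
        type bluealsa
        device "xx:xx:xx:xx:xx:xx"   # your speaker's MAC address
        profile "a2dp"               # playback profile the speaker supports
    }
}
pcm.mic {
    # a Bluetooth speaker with no SCO transport cannot act as a mic;
    # point capture at a real device such as a USB mic (hw:1,0 is a guess)
    type plug
    slave.pcm "hw:1,0"
}
ctl.!default {
    type bluealsa
}
```

Before retrying speaker-test, it is also worth confirming that the bluealsa daemon is running and the speaker is actually connected (for example with bluetoothctl info), since the same "No such device" error appears when the transport is simply not up.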

Why won't Scilab open an image file?

I am trying to work with image files under Scilab, and I get stuck at the very beginning, unable to load an image file.
I have searched the help system as well as the Web, tried two versions of Scilab (because some of the answers I found say that 6.0 is incompatible with some image functions) and still drew a blank. Whatever I try, the imread function is simply not there.
Here is what I get:
Under Scilab 6.0.2:
--> clear
--> atomsSystemUpdate()
Scanning repository http://atoms.scilab.org/6.0 ... Done
--> atomsInstall("SIVP")
atomsInstallList: The package "SIVP" is not registered.
Please check on the ATOMS repository that it is available for Scilab 6.0 on Windows.
If it is, run atomsSystemUpdate() before trying atomsInstall(..) again.
at line 52 of function atomsError ( C:\Program Files\scilab-6.0.2\modules\atoms\macros\atoms_internals\atomsError.sci line 66 )
at line 78 of function atomsInstallList ( C:\Program Files\scilab-6.0.2\modules\atoms\macros\atoms_internals\atomsInstallList.sci line 117 )
at line 233 of function atomsInstall ( C:\Program Files\scilab-6.0.2\modules\atoms\macros\atomsInstall.sci line 249 )
--> atomsInstall("IPCV")
ans =
[]
--> disp( atomsGetInstalled() );
!IPCV 4.1.2 user SCIHOME\atoms\x64\IPCV\4.1.2 I !
--> im=imread("Kratka220.tif")
Undefined variable: imread
Under Scilab 5.5.2:
-->clear
-->atomsSystemUpdate()
Scanning repository http://atoms.scilab.org/5.5 ... Done
-->atomsInstall("SIVP")
ans =
[]
-->atomsInstall("IPCV")
atomsInstallList: Pakiet IPCV nie jest dostępny.
<this is Polish for "Package IPCV is not available"; I installed 5.5.2 in Polish>
!--error 10000
at line 51 of function atomsError called by :
at line 76 of function atomsInstallList called by :
at line 233 of function atomsInstall called by :
atomsInstall("IPCV")
-->disp( atomsGetInstalled() );
column 1 to 4
!SIVP 0.5.3.2 user SCIHOME\atoms\x64\SIVP\0.5.3.2 !
column 5
!I !
-->im=imread("Kratka220.tif")
!--error 4
Niezdefiniowana zmienna: imread
<this is Polish for "undefined variable">
What am I doing wrong?
After atomsInstall, you have to restart Scilab so that the toolbox is actually loaded.
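As a sketch of the full sequence (the toolbox name and image file name follow the question; atomsLoad is normally unnecessary because installed toolboxes autoload at startup, but it makes the dependency explicit):

```scilab
atomsInstall("IPCV")        // install the toolbox once

// ...restart Scilab at this point; installed toolboxes load at startup...

atomsLoad("IPCV")           // explicit load, in case autoloading is disabled
im = imread("Kratka220.tif");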

HUE File Manager: able to create HDFS subdirectory/folder but unable to upload file to HDFS

I am getting an "Error: Undefined message" in HUE every time I try to upload a file.
I am able to create a subdirectory/folder in HDFS but file uploads are not working.
I tried copying a file to HDFS from the Linux CLI as the hadoop user, and that works.
The Hue user is hadoop; the HDFS directory owner is hadoop:hadoop.
Edit: Adding the error
ERROR Internal Server Error: /filebrowser/upload/file
Traceback (most recent call last):
File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11.20-py2.7.egg/django/core/handlers/exception.py", line 41, in inner
response = get_response(request)
File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11.20-py2.7.egg/django/core/handlers/base.py", line 249, in _legacy_get_response
response = self._get_response(request)
File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11.20-py2.7.egg/django/core/handlers/base.py", line 178, in _get_response
response = middleware_method(request, callback, callback_args, callback_kwargs)
File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11.20-py2.7.egg/django/middleware/csrf.py", line 300, in process_view
request_csrf_token = request.POST.get('csrfmiddlewaretoken', '')
File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11.20-py2.7.egg/django/core/handlers/wsgi.py", line 126, in _get_post
self._load_post_and_files()
File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11.20-py2.7.egg/django/http/request.py", line 299, in _load_post_and_files
self._post, self._files = self.parse_file_upload(self.META, data)
File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11.20-py2.7.egg/django/http/request.py", line 258, in parse_file_upload
return parser.parse()
File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11.20-py2.7.egg/django/http/multipartparser.py", line 269, in parse
self._close_files()
File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11.20-py2.7.egg/django/http/multipartparser.py", line 316, in _close_files
handler.file.close()
AttributeError: 'NoneType' object has no attribute 'close'
[12/Apr/2020 22:48:51 -0700] upload DEBUG HDFSfileUploadHandler receive_data_chunk
[12/Apr/2020 22:48:51 -0700] upload ERROR Not using HDFS upload handler:
[12/Apr/2020 22:48:51 -0700] resource ERROR All 1 clients failed: {'http://IRedactedMyinstanceIdentHere.ap-southeast-1.compute.internal:14000/webhdfs/v1': u'500 Server Error: Internal Server Error for url: http://IRedactedMyinstanceIdentHere.ap-southeast-1.compute.internal:14000/webhdfs/v1/user/hadoop/Test-Data?op=CHECKACCESS&fsaction=rw-&user.name=hue&doas=hadoop\n{"RemoteException":{"message":"java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.CHECKACCESS","exception":"QueryParamException","javaClassName":"com.sun.jersey.api.ParamException$QueryParamException"}}\n'}
[12/Apr/2020 22:48:51 -0700] resource ERROR Caught exception from http://IRedactedMyinstanceIdentHere.ap-southeast-1.compute.internal:14000/webhdfs/v1: 500 Server Error: Internal Server Error for url: http://IRedactedMyinstanceIdentHere.ap-southeast-1.compute.internal:14000/webhdfs/v1/user/hadoop/Test-Data?op=CHECKACCESS&fsaction=rw-&user.name=hue&doas=hadoop
{"RemoteException":{"message":"java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.CHECKACCESS","exception":"QueryParamException","javaClassName":"com.sun.jersey.api.ParamException$QueryParamException"}}
(error 500)
As the error message shows, no match is found for the op query parameter when Hue tries to perform the CHECKACCESS operation:
http://IRedactedMyinstanceIdentHere.ap-southeast-1.compute.internal:14000/webhdfs/v1/user/hadoop/Test-Data?op=CHECKACCESS&fsaction=rw-&user.name=hue&doas=hadoop
{"RemoteException":{"message":"java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.CHECKACCESS","exception":"QueryParamException","javaClassName":"com.sun.jersey.api.ParamException$QueryParamException"}}
This operation is missing from the HttpFS implementation in some Hadoop versions; it is a known bug ("HTTPFS - CHECKACCESS operation missing").
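Since the failing endpoint is the HttpFS gateway on port 14000, one common workaround is to point Hue directly at the NameNode's WebHDFS endpoint, which does implement CHECKACCESS. A sketch of the relevant hue.ini section follows; the host and port are assumptions (use your NameNode's address, with the WebHDFS port typically 50070 on Hadoop 2.x or 9870 on 3.x):

```ini
[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      # point at the NameNode's WebHDFS rather than the HttpFS gateway
      # on port 14000, whose build lacks the CHECKACCESS operation
      webhdfs_url=http://namenode-host:50070/webhdfs/v1
```

Restart Hue after the change and retry the upload; if HttpFS is required in your deployment (e.g. for HA), upgrading to a Hadoop version whose HttpFS supports CHECKACCESS is the alternative.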

MXNet - Augmentations - expected uint8, got float32

I am attempting to use mxnet 1.10/mxnet-cu91 for image classification, and I am currently using mxnet.image.ImageIter to iterate through and preprocess images. Iterating through the images itself works, but I receive the following error when using augmenters (the only exception being ForceResizeAug):
Traceback (most recent call last):
File "image.py", line 22, in <module>
for batch in iterator:
File "/usr/local/lib/python2.7/dist-packages/mxnet/image/image.py", line 1181, in next
data = self.augmentation_transform(data)
File "/usr/local/lib/python2.7/dist-packages/mxnet/image/image.py", line 1239, in augmentation_transform
data = aug(data)
File "/usr/local/lib/python2.7/dist-packages/mxnet/image/image.py", line 659, in __call__
src = t(src)
File "/usr/local/lib/python2.7/dist-packages/mxnet/image/image.py", line 721, in __call__
gray = src * self.coef
File "/usr/local/lib/python2.7/dist-packages/mxnet/ndarray/ndarray.py", line 235, in __mul__
return multiply(self, other)
File "/usr/local/lib/python2.7/dist-packages/mxnet/ndarray/ndarray.py", line 2566, in multiply
None)
File "/usr/local/lib/python2.7/dist-packages/mxnet/ndarray/ndarray.py", line 2379, in _ufunc_helper
return fn_array(lhs, rhs)
File "<string>", line 46, in broadcast_mul
File "/usr/local/lib/python2.7/dist-packages/mxnet/_ctypes/ndarray.py", line 92, in _imperative_invoke
ctypes.byref(out_stypes)))
File "/usr/local/lib/python2.7/dist-packages/mxnet/base.py", line 146, in check_call
raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [20:02:07] src/operator/contrib/../elemwise_op_common.h:123: Check failed: assign(&dattr, (*vec)[i]) Incompatible attr in node at 1-th input: expected uint8, got float32
Stack trace returned 10 entries:
[bt] (0) /usr/local/lib/python2.7/dist-packages/mxnet/libmxnet.so(+0x2ab9a8) [0x7f5c873f09a8]
[bt] (1) /usr/local/lib/python2.7/dist-packages/mxnet/libmxnet.so(+0x2abdb8) [0x7f5c873f0db8]
[bt] (2) /usr/local/lib/python2.7/dist-packages/mxnet/libmxnet.so(+0x2d2078) [0x7f5c87417078]
[bt] (3) /usr/local/lib/python2.7/dist-packages/mxnet/libmxnet.so(+0x2d2b83) [0x7f5c87417b83]
[bt] (4) /usr/local/lib/python2.7/dist-packages/mxnet/libmxnet.so(+0x24c4c1e) [0x7f5c89609c1e]
[bt] (5) /usr/local/lib/python2.7/dist-packages/mxnet/libmxnet.so(+0x24c6e59) [0x7f5c8960be59]
[bt] (6) /usr/local/lib/python2.7/dist-packages/mxnet/libmxnet.so(+0x240539b) [0x7f5c8954a39b]
[bt] (7) /usr/local/lib/python2.7/dist-packages/mxnet/libmxnet.so(MXImperativeInvokeEx+0x63) [0x7f5c8954a903]
[bt] (8) /usr/lib/x86_64-linux-gnu/libffi.so.6(ffi_call_unix64+0x4c) [0x7f5cc334ae40]
[bt] (9) /usr/lib/x86_64-linux-gnu/libffi.so.6(ffi_call+0x2eb) [0x7f5cc334a8ab]
The code needed to replicate the issue is below (shortened for brevity, closely resembles the code provided in the documentation):
import mxnet as mx
import glob

type1_paths = glob.glob('type1/*.jpg')
type1_list = [[1.0, path] for path in type1_paths]
type2_paths = glob.glob('type2/*.JPG')
type2_list = [[2.0, path] for path in type2_paths]
all_paths = type1_list + type2_list

iterator = mx.image.ImageIter(1, (3, 1000, 1000),
                              imglist=all_paths,
                              aug_list=[
                                  mx.image.ColorJitterAug(0.1, 0.1, 0.1),
                              ])

for batch in iterator:
    print batch.data
I am not sure why the error is occurring, as I am not using any custom augmenters that could cause the dtype discrepancy. I've also reproduced the issue with the following augmenters:
RandomGrayAug
HueJitterAug
ContrastJitterAug
SaturationJitterAug
NOTE: In case it matters, the only difference I know of between the loaded jpg/JPG files is that some photos were taken with a phone and others with a DSLR camera.
Please let me know if any additional information would help in diagnosing this.
You're getting this issue because the images are loaded with a data type of uint8, while the augmentations expect float32. The error message reads a little backwards from what you need to do: the failing operation is a multiplication of the input image (uint8) by the augmenter's coefficient (float32), so MXNet complains about the data type of the coefficient instead of the input data. The same applies to the hue, contrast, and saturation augmenters.
To fix this, convert the input image data type to float32 by adding mx.image.CastAug(typ='float32') at the start of your augmenter list:
iterator = mx.image.ImageIter(1, (3, 100, 100),
                              path_root='.',
                              imglist=all_paths,
                              aug_list=[
                                  mx.image.CastAug(typ='float32'),
                                  mx.image.ColorJitterAug(0.1, 0.1, 0.1),
                                  mx.image.CenterCropAug((100, 100))
                              ])
And it's always a good idea to visualize your data after augmentation to confirm the steps are being applied as you expected.

MatConvNet is not compiling for GPUs

I get the following error when I try to compile MatConvNet for GPUs:
vl_compilenn('enableGpu', true)
vl_compilenn: CUDA: MEX config file: '/home/anselmo/Experimentos-Impressora-DeepLearning/matconvnet-master/matlab/src/config/mex_CUDA_glnxa64.sh'
mex: no file name given.
Usage:
MEX [option1 ... optionN] sourcefile1 [... sourcefileN]
[objectfile1 ... objectfileN] [libraryfile1 ... libraryfileN]
Use the -help option for more information, or consult the MATLAB External Interfaces Guide.
Error using mex (line 206)
Unable to complete successfully.
Error in vl_compilenn>mex_compile (line 434)
mex(mopts{:}) ;
Error in vl_compilenn>(parfor body) (line 393)
mex_compile(opts, srcs{i}, toobj(bld_dir,srcs{i}), flags.mexcu) ;
Error in parallel_function (line 470)
F(base, limit, supply(base, limit));
Error in vl_compilenn (line 387)
parfor i = 1:numel(horzcat(lib_src, mex_src))
509 rethrow(E)
What could be causing this?
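The "mex: no file name given" message means mex was invoked with no source file, which in vl_compilenn typically happens when the CUDA toolkit is not found, so the CUDA-specific flags end up empty. A sketch of options worth trying is below; the cudaRoot path is an assumption, so point it at your actual CUDA installation:

```matlab
% Tell vl_compilenn explicitly where CUDA lives and use nvcc directly,
% with verbose output to see the exact mex/nvcc command lines.
% ('/usr/local/cuda' is an assumption; adjust to your install)
vl_compilenn('enableGpu', true, ...
             'cudaRoot', '/usr/local/cuda', ...
             'cudaMethod', 'nvcc', ...
             'verbose', 1)
```

If it still fails, running mex -setup C++ first confirms the MEX compiler itself is configured, and the verbose log will show which invocation is missing its source file.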
