Trouble with Google API in python autosub - bash

I'm trying to set up autosub to translate subtitles. I checked the GitHub repo and saw this thread, where others are getting the same errors as me. However, when I tried their solution of enabling the Cloud Translation API, it didn't correct the problem. I am running the following command for autosub; it lives in a script that translates to different languages, which is why there are bash variables in the command.
"$tool_path" -o "$output_file" -F "$sub_format" -C 3 -K "key=$api_key" -S "$language_input" -D "$language_output" "$input_file"
When I run this command, I get the exact same error as in the thread, which is as follows:
Converting speech regions to FLAC files: 100% |################################################################################################################################################################################| Time: 0:00:03
Performing speech recognition: 100% |##########################################################################################################################################################################################| Time: 0:00:45
Exception in thread Thread-3:2% |#### | ETA: 0:00:00
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib/python2.7/multiprocessing/pool.py", line 389, in _handle_results
    task = get()
  File "/home/eddy/.local/lib/python2.7/site-packages/oauth2client/_helpers.py", line 133, in positional_wrapper
    return wrapped(*args, **kwargs)
TypeError: ('__init__() takes at least 3 arguments (1 given)', <class 'googleapiclient.errors.HttpError'>, ())
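One way to separate an autosub problem from a key problem is to call the Cloud Translation REST endpoint directly with the same key. This is only a sketch, and it assumes the v2 Translate API (my assumption about what autosub uses, not something the traceback confirms):
# Sanity-check the key against the Translation v2 REST endpoint: a JSON
# body with a translation suggests the key and API enablement are fine,
# while a 403 points back at the Cloud project setup.
curl -s "https://translation.googleapis.com/language/translate/v2?key=$api_key&q=hello&target=de"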

Related

HDFS: Errno 22 on attempt to edit an existing file in mounted NFS volume

Summary: I mounted an HDFS NFS volume on OS X and it will not let me edit existing files. I can append and create files with content, but not "open them with the write flag".
Originally, I asked about a particular problem with JupyterLab failing to save notebooks into NFS-mounted volumes, but while trying to dig down to the roots, I realized (hopefully correctly) that it's about editing existing files.
I mounted the HDFS NFS volume on OS X and I can access the files, read and write and whatnot. JupyterLab, though, can do pretty much everything but can't actually save notebooks.
I was able to identify the pattern for what's really happening, and the problem boils down to this: you can't open existing files on the NFS volume for writing.
This will work with a new file:
with open("rand.txt", 'w') as f:
f.write("random text")
But if you try to run it again (the file has been created now and the content is there), you'll get the following Exception:
---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
<ipython-input-15-94a46812fad4> in <module>()
----> 1 with open("rand.txt", 'w') as f:
      2     f.write("random text")
OSError: [Errno 22] Invalid argument: 'rand.txt'
I am pretty sure the permissions and all are ok:
with open("seven.txt", 'w') as f:
f.write("random text")
f.writelines(["one","two","three"])
r = open("seven.txt", 'r')
print(r.read())
random textonetwothree
I can also append to files no problem:
aleksandrs-mbp:direct sasha$ echo "Another line of text" >> seven.txt && cat seven.txt
random textonetwothreeAnother line of text
I mount it with the following options:
aleksandrs-mbp:hadoop sasha$ mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync localhost:/ /srv/ti/jupyter-samples/~Hadoop
The Apache documentation suggests that the NFS gateway does not support random writes. I tried looking at the mount documentation but could not find anything specific that points to enforcing sequential writes. I tried playing with different options, but it doesn't seem to help much.
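Given that creating new files and appending both work, the one crude workaround I have is to replace a file instead of opening it for truncation; a minimal sketch using the same test file as above:
# Delete and recreate rather than truncating in place, since create and
# append are the operations this mount demonstrably supports.
rm -f rand.txt
printf 'random text' > rand.txt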
This is the exception I get from JupyterLab when it's trying to save the notebook:
[I 03:03:33.969 LabApp] Saving file at /~Hadoop/direct/One.ipynb
[E 03:03:33.980 LabApp] Error while saving file: ~Hadoop/direct/One.ipynb [Errno 22] Invalid argument: '/srv/ti/jupyter-samples/~Hadoop/direct/.~One.ipynb'
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/notebook/services/contents/filemanager.py", line 471, in save
    self._save_notebook(os_path, nb)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/notebook/services/contents/fileio.py", line 293, in _save_notebook
    with self.atomic_writing(os_path, encoding='utf-8') as f:
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/contextlib.py", line 82, in __enter__
    return next(self.gen)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/notebook/services/contents/fileio.py", line 213, in atomic_writing
    with atomic_writing(os_path, *args, log=self.log, **kwargs) as f:
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/contextlib.py", line 82, in __enter__
    return next(self.gen)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/notebook/services/contents/fileio.py", line 103, in atomic_writing
    copy2_safe(path, tmp_path, log=log)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/notebook/services/contents/fileio.py", line 51, in copy2_safe
    shutil.copyfile(src, dst)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/shutil.py", line 115, in copyfile
    with open(dst, 'wb') as fdst:
OSError: [Errno 22] Invalid argument: '/srv/ti/jupyter-samples/~Hadoop/direct/.~One.ipynb'
[W 03:03:33.981 LabApp] 500 PUT /api/contents/~Hadoop/direct/One.ipynb?1534835013966 (::1): Unexpected error while saving file: ~Hadoop/direct/One.ipynb [Errno 22] Invalid argument: '/srv/ti/jupyter-samples/~Hadoop/direct/.~One.ipynb'
[W 03:03:33.981 LabApp] Unexpected error while saving file: ~Hadoop/direct/One.ipynb [Errno 22] Invalid argument: '/srv/ti/jupyter-samples/~Hadoop/direct/.~One.ipynb'
This is what I see in the NFS logs at the same time:
2018-08-21 03:05:34,006 ERROR org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3: Setting file size is not supported when setattr, fileId: 16417
2018-08-21 03:05:34,006 ERROR org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3: Setting file size is not supported when setattr, fileId: 16417
I'm not exactly sure what this means, but if I understand the RFC correctly, this should be part of the implementation:
Servers must support extending the file size via SETATTR.
I understand the complexity behind mounting HDFS and letting clients write all they want while keeping the files distributed and maintaining integrity. Is there, though, a compromise that would enable writes via NFS?
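For the JupyterLab side specifically, the traceback above fails inside atomic_writing while copying the notebook to a temporary '.~' file. A hedged mitigation, using the standard notebook server option, is to turn atomic writes off so the file is written in place:
# Disable the atomic-save temp-file copy that trips over the NFS limitation
# (this trades away crash safety on save).
jupyter lab --FileContentsManager.use_atomic_writing=False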

AssertionError when cloning Anaconda base environment

I'm trying to create a clone of my base Anaconda environment for a specific application. I want to use the clone as the base off of which to install application-specific packages. I used the following command to start the clone:
C:\Users\Liam>conda create -n retrievals --clone base
It made it a long way through the cloning process and had just reached 100% on cloning anaconda-5.2.0 when it threw the assertion error below:
# >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<<
Traceback (most recent call last):
  File "C:\Users\Liam\Anaconda3\lib\site-packages\conda\exceptions.py", line 819, in __call__
    return func(*args, **kwargs)
  File "C:\Users\Liam\Anaconda3\lib\site-packages\conda\cli\main.py", line 78, in _main
    exit_code = do_call(args, p)
  File "C:\Users\Liam\Anaconda3\lib\site-packages\conda\cli\conda_argparse.py", line 77, in do_call
    exit_code = getattr(module, func_name)(args, parser)
  File "C:\Users\Liam\Anaconda3\lib\site-packages\conda\cli\main_create.py", line 11, in execute
    install(args, parser, 'create')
  File "C:\Users\Liam\Anaconda3\lib\site-packages\conda\cli\install.py", line 211, in install
    clone(args.clone, prefix, json=context.json, quiet=context.quiet, index_args=index_args)
  File "C:\Users\Liam\Anaconda3\lib\site-packages\conda\cli\install.py", line 72, in clone
    index_args=index_args)
  File "C:\Users\Liam\Anaconda3\lib\site-packages\conda\misc.py", line 277, in clone_env
    force_extract=False, index_args=index_args)
  File "C:\Users\Liam\Anaconda3\lib\site-packages\conda\misc.py", line 78, in explicit
    assert not any(spec_pcrec[1] is None for spec_pcrec in specs_pcrecs)
AssertionError
$ C:\Users\Liam\Anaconda3\Scripts\conda create -n retrievals --clone base
Can anybody explain why this is happening and what I could try to fix it?
P.S. I'm doing this on Windows 10 if that helps at all.
I found a workaround for it. You can just copy the base environment's directory under a new name:
cp -r /opt/conda/envs/base_env /opt/conda/envs/new_env
After that you can activate or update the environment:
conda activate new_env
conda env update --name new_env --file environment.yaml --prune
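An alternative sketch that avoids copying directories by hand is to export the base environment's spec and build the new environment from it (assuming a reasonably recent conda):
# Export the base environment's package spec, then build the new
# environment from that file instead of cloning.
conda env export -n base > base.yaml
conda env create -n new_env -f base.yaml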

CEF Compilation: 64-bit Binaries not generated for Cocoa

CEF branch: 2987
Terminal commands for generating the binary distrib files after all the data gets downloaded:
$ export GYP_DEFINES="proprietary_codecs=1 ffmpeg_branding=Chrome"
$ python /Users/imfinity/Documents/CEF_2987/automate/automate-git.py --download-dir=/Users/imfinity/Documents/CEF_2987/download --branch=2987 --x64-build --force-config --force-build
$ cd /Users/imfinity/-dir/chromium/src/cef/tools
$ ./make_distrib.sh --ninja-build
ERROR:
Traceback (most recent call last):
  File "make_distrib.py", line 468, in
    raise Exception('Missing generated header file: %s' % include)
Exception: Missing generated header file: cef_pack_resources.h
This leads to the creation of an incomplete folder: cef_binary_3.2987.1574.g4232c4c_macosx32
Any help is appreciated!!
I tried this command:
$ export GYP_DEFINES="proprietary_codecs=1 ffmpeg_branding=Chrome"
$ python /Users/imfinity/Documents/CEF_20March/automate/automate-git.py --download-dir /Users/imfinity/Documents/CEF_20March/download --branch=2987 --x64-build --force-config
and finally it worked: the 64-bit binaries were generated. I'm still surprised it worked after 5 different attempts!!
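A hedged side note on the macosx32 folder name: if make_distrib.sh accepts the same --x64-build flag as automate-git.py (an assumption I have not verified for branch 2987), passing it explicitly should point the distrib step at the 64-bit build:
# Assumption: make_distrib supports --x64-build like automate-git does.
$ ./make_distrib.sh --ninja-build --x64-build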

Computing Sky View Factor in GRASS GIS

Hi community,
I'm currently working on my Master's thesis and I have to compute the "sky view factor". Since ESRI ArcMap is not a helpful choice for that, I found that it is fairly easy to compute with GRASS GIS (v7) using the r.skyview command.
But I get an error message in the logfile that I can't really deal with. I hope someone here is experienced with this kind of problem and can help me out.
Here is what the GRASS GIS output says:
(Fri Jan 09 16:17:10 2015)
r.skyview input=Subset#PERMANENT output=Subset_SVF ndir=16 maxdistance=15.0
Unknown module parameter "keyword" at line 21
Unknown module parameter "keyword" at line 22
ERROR: Value <rast> ambiguous for parameter <type>
Valid options: raster,raster_3d,vector,old_vector,ascii_vector,labels,region,group,all
Traceback (most recent call last):
  File "C:\Users\Axel-HP\AppData\Roaming\GRASS7\addons/scripts/r.skyview.py", line 120, in <module>
    sys.exit(main())
  File "C:\Users\Axel-HP\AppData\Roaming\GRASS7\addons/scripts/r.skyview.py", line 82, in main
    old_maps = _get_horizon_maps()
  File "C:\Users\Axel-HP\AppData\Roaming\GRASS7\addons/scripts/r.skyview.py", line 114, in _get_horizon_maps
    pattern=TMP_NAME + "*")[gcore.gisenv()['MAPSET']]
  File "C:\Temp\GRASSGIS7\etc\python\grass\script\core.py", line 1176, in list_grouped
    type=types, pattern=pattern, exclude=exclude).splitlines():
  File "C:\Temp\GRASSGIS7\etc\python\grass\script\core.py", line 425, in read_command
    return handle_errors(returncode, stdout, args, kwargs)
  File "C:\Temp\GRASSGIS7\etc\python\grass\script\core.py", line 308, in handle_errors
    returncode=returncode)
grass.exceptions.CalledModuleError: Module run None
['g.list', '--q', '-m', 'type=rast', 'pattern=tmp_horizon_2340*'] ended with error
Process ended with non-zero return code 1. See errors in the (error) output.
(Fri Jan 09 16:17:11 2015) Command executed (1 sec)
I just tested r.skyview and it is working. There were big changes to GRASS module parameter names recently, which caused the trouble, but now it should work without problems.
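If anyone still sees the old behaviour, a likely fix (assuming the locally installed addon simply predates the rename of 'rast' to 'raster') is to reinstall the addon so it matches the current parameter names:
# Reinstall the r.skyview addon to pick up the updated parameter names.
g.extension extension=r.skyview operation=add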

cloud-init per-boot script on ubuntu ec2-instance

I am trying to start a script with cloud-init on an Ubuntu 11.10 EC2 instance.
I put the script script.sh in the folder /var/lib/cloud/scripts/per-boot.
The content of script.sh is simple:
#/!/bin/sh
echo "test"
After a reboot, I get the following error:
run-parts: failed to exec /var/lib/cloud/scripts/per-boot/script.sh: Exec format error
run-parts: /var/lib/cloud/scripts/per-boot/script.sh exited with return code 1
2012-04-14 19:10:52,642 - cc_scripts_per_boot.py[WARNING]: failed to run-parts in /var/lib/cloud/scripts/per-boot
2012-04-14 19:10:52,648 - __init__.py[WARNING]: Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/__init__.py", line 108, in run_cc_modules
    cc.handle(name, run_args, freq=freq)
  File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/__init__.py", line 72, in handle
    [ name, self.cfg, self.cloud, cloudinit.log, args ])
  File "/usr/lib/python2.7/dist-packages/cloudinit/__init__.py", line 309, in sem_and_run
    func(*args)
  File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/cc_scripts_per_boot.py", line 27, in handle
    util.runparts(runparts_path)
  File "/usr/lib/python2.7/dist-packages/cloudinit/util.py", line 140, in runparts
    raise subprocess.CalledProcessError(sp.returncode,cmd)
CalledProcessError: Command '['run-parts', '--regex', '.*', '/var/lib/cloud/scripts/per-boot']' returned non-zero exit status 1
2012-04-14 19:10:52,648 - __init__.py[ERROR]: config handling of scripts-per-boot, None, [] failed
cloud-init boot finished at Sat, 14 Apr 2012 19:10:52 +0000. Up 3.70 seconds
2012-04-14 19:10:52,672 - cloud-init-cfg[ERROR]: errors running cloud_config [final]: ['scripts-per-boot']
errors running cloud_config [final]: ['scripts-per-boot']
Any ideas how to fix it?
I believe your problem is that #/!/bin/sh is not a valid shebang line. You need to remove the / after the #:
#!/bin/sh
echo "test"
Let me know if you still see the problem after this.
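A quick way to verify the fix, assuming shell access to the instance, is to make the script executable and ask run-parts, which is what cloud-init invokes, to list what it would run:
# Dry-run run-parts the same way cloud-init calls it; --test prints the
# scripts it would execute without running them.
chmod +x /var/lib/cloud/scripts/per-boot/script.sh
run-parts --test --regex '.*' /var/lib/cloud/scripts/per-boot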
