I'm nearly there, but stuck at the last hurdle.
$ /path/to/soffice.bin --version
^ This works both on my local machine (in a Docker container) and on AWS Lambda (with the same container deployed there).
However,
$ /path/to/soffice.bin \
--headless --invisible --nodefault --nofirststartwizard --nolockcheck --nologo --norestore --nosplash \
--convert-to pdf:writer_pdf_Export \
--outdir /tmp \
$filename \
2>&1 || true # avoid exit-on-fail
... fails with:
LibreOffice - dialog 'LibreOfficeDev 6.4 - Fatal Error': 'The application cannot be started.
User installation could not be completed. 'LibreOfficeDev 6.4 - Fatal Error: The application cannot be started.
User installation could not be completed.
Searching on Google, everything points toward a permissions issue with ~/.config/libreoffice.
And there is something strange going on with file permissions on the Lambda runtime.
Maybe it is attempting to read or write to a location to which it doesn't have access.
Is there any way to get it working?
The problem is that Lambda can only write to /tmp, but the default HOME is not /tmp.
adding
export HOME=/tmp
before calling /path/to/soffice.bin
should do the trick.
Also note that the first run reliably fails (the reason is unclear), so you should retry the conversion once.
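Putting it together, a minimal sketch of a wrapper under these assumptions (the soffice path, the $filename variable, and a single retry are illustrative, taken from the question above):
export HOME=/tmp  # Lambda only allows writes under /tmp, so the LibreOffice user profile must live there
for attempt in 1 2; do  # the first run tends to fail, so retry once
  /path/to/soffice.bin \
    --headless --invisible --nodefault --nofirststartwizard --nolockcheck --nologo --norestore --nosplash \
    --convert-to pdf:writer_pdf_Export \
    --outdir /tmp \
    "$filename" && break
done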
I'm trying to set up my environment to learn Azure from the Microsoft Learn page https://learn.microsoft.com/en-us/learn/modules/microservices-data-aspnet-core/environment-setup
but when I run . <(sudo wget -q -O - https://aka.ms/microservices-data-aspnet-core-setup) to pull the repo and run the services, I get the error below:
~/clouddrive/aspnet-learn/modules/microservices-data-aspnet-core/setup ~/clouddrive/aspnet-learn
~/clouddrive/aspnet-learn
bash: /home/username/clouddrive/aspnet-learn/src/deploy/k8s/quickstart.sh: Permission denied
bash: /home/username/clouddrive/aspnet-learn/src/deploy/k8s/create-acr.sh: Permission denied
cat: /home/username/clouddrive/aspnet-learn/deployment-urls.txt: No such file or directory
This used to work, but at some point it broke, and I'm not sure what caused it or how to fix it.
I've tried deleting the storage account and the resources, but that doesn't seem to work. Also, when I delete the storage account, create a new one, and try again, the old data is still there and I need to run a remove first, so somehow this data isn't really being deleted when I delete the storage account. The setup script then tells me:
Before running this script, please remove or rename the existing /home/username/clouddrive/aspnet-learn/ directory as follows:
Remove: rm -r /home/username/clouddrive/aspnet-learn/
Any idea what is wrong here, or how I can actually reset this to work like a new storage account?
Note: I saw some solutions that say to run with sudo for elevated permissions, but I didn't manage to get that to work.
I reproduced this by following the given document
and was able to deploy a modified version of the eShopOnContainers reference app.
Then I executed the same command,
. <(wget -q -O - https://aka.ms/microservices-data-aspnet-core-setup)
and got the same error that you did.
If you run the deploy script without cleaning up the previously created resources/app, you will get the above error.
If you want to re-run the setup script, run the command below first to clean up the resources:
cd ~ && \
rm -rf ~/clouddrive/aspnet-learn && \
az group delete --name eshop-learn-rg --yes
OR
Remove: rm -r /home/username/clouddrive/aspnet-learn/
Rename: mv /home/username/clouddrive/aspnet-learn/ ~/clouddrive/new-name-here/
The above commands remove or rename the existing /home/username/clouddrive/aspnet-learn/ directory.
Now you can run the script again.
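Concretely, after the cleanup above finishes, re-running the same setup command from the question should complete without the permission errors:
. <(wget -q -O - https://aka.ms/microservices-data-aspnet-core-setup)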
On my Windows computer, I use Git Bash to register and start the GitLab runner. gitlab-runner.exe is stored in the C:\Runner directory. I open a Git Bash terminal and cd /c/Runner to register and run the GitLab runner as follows:
If I use the CI/CD >> Schedules UI to start pipelines, everything works fine. But when I use a trigger command like the following, the pipeline job fails.
curl -X POST \
-F token=xxxxxxx \
-F ref=develop \
http://git.xxxxxxxxx/trigger/pipeline
The error message shows the build path as “/c/Runner/C:/Runner/builds...”, which looks wrong. Does anyone know how to fix it? Thank you very much.
BTW: for some reason, I have to use a Bash terminal on Windows to start gitlab-runner.
As a user I want to execute Robot Framework's robot command with some command line options. I put everything in a script to avoid retyping the long command each time - see the example below. On Linux and macOS I can execute this script from any terminal emulator, e.g.
# Linux
. run_local_tests.sh
# Mac OS
./run_local_tests.sh
On Windows, the application associated with the .sh file type (the VS Code editor) opens instead of the robot command being executed, or an error like robot: command not found is returned:
# Windows
.\run_local_tests.sh
# OR
run_local_tests.sh
# OR
bash run_local_tests.sh
shell script - filename: run_local_tests.sh
#!/bin/bash
# Set desired loglevel: NONE (less details), INFO, DEBUG, TRACE (most details)
export LOG_LEVEL=TRACE
# RUN CONTRIBUTION SERVICE TESTS
robot -i CONTRIBUTION -e circleci \
--outputdir results \
--log NONE \
--report NONE \
--output XML/CONTRIBUTION.xml \
--noncritical not-ready \
--flattenkeywords for \
--flattenkeywords foritem \
--flattenkeywords name:_resources.* \
--loglevel $LOG_LEVEL \
--name CONTRI \
robot/CONTRIBUTION_TESTS/
Renaming the script from .sh to .bat doesn't help :(
Entering bash, then activating the venv and calling the script doesn't work either.
What other options are there (without installing additional tools like Cygwin etc.)?
I'm actually trying to answer the same question in the opposite direction (how to trigger/run them on my machine as .sh). Looks like we may help each other out. 8)
I believe this is what you're looking for:
Your file would be run_local_tests.bat
Contents:
@echo off
rem Run robot from the project root, writing output to the given directory
cd C:\path\to\robot\project
call robot -d relative/path/to/test/output/dir relative/path/to/tests
Of course you can use any other valid robot CLI syntax in the call as well. You may have to make it executable too; I'm not sure.
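For reference, here is a rough sketch of the run_local_tests.sh above translated into a batch file, assuming robot is on your PATH; it just reuses the flags from the shell script and is untested on my side:
@echo off
rem Set desired log level: NONE (less details), INFO, DEBUG, TRACE (most details)
set LOG_LEVEL=TRACE
rem RUN CONTRIBUTION SERVICE TESTS
robot -i CONTRIBUTION -e circleci ^
  --outputdir results ^
  --log NONE ^
  --report NONE ^
  --output XML/CONTRIBUTION.xml ^
  --noncritical not-ready ^
  --flattenkeywords for ^
  --flattenkeywords foritem ^
  --flattenkeywords name:_resources.* ^
  --loglevel %LOG_LEVEL% ^
  --name CONTRI ^
  robot/CONTRIBUTION_TESTS/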
I'm using Google Dataproc to initialize a Jupyter cluster.
At first I used the "dataproc-initialization-actions" available on GitHub, and it works like a charm.
This is the cluster create call available in the documentation:
gcloud dataproc clusters create my-dataproc-cluster \
--metadata "JUPYTER_PORT=8124" \
--initialization-actions \
gs://dataproc-initialization-actions/jupyter/jupyter.sh \
--bucket my-dataproc-bucket \
--num-workers 2 \
--properties spark:spark.executorEnv.PYTHONHASHSEED=0,spark:spark.yarn.am.memory=1024m \
--worker-machine-type=n1-standard-4 \
--master-machine-type=n1-standard-4
But I want to customize it, so I took the initialization file and saved it to my own Google Cloud Storage bucket (which is under the same project where I'm trying to create the cluster). Then I changed the call to point to my script instead, like this:
gcloud dataproc clusters create my-dataproc-cluster \
--metadata "JUPYTER_PORT=8124" \
--initialization-actions \
gs://myjupyterbucketname/jupyter.sh \
--bucket my-dataproc-bucket \
--num-workers 2 \
--properties spark:spark.executorEnv.PYTHONHASHSEED=0,spark:spark.yarn.am.memory=1024m \
--worker-machine-type=n1-standard-4 \
--master-machine-type=n1-standard-4
But running this I got the following error:
Waiting on operation [projects/myprojectname/regions/global/operations/cf20
466c-ccb1-4c0c-aae6-fac0b99c9a35].
Waiting for cluster creation operation...done.
ERROR: (gcloud.dataproc.clusters.create) Operation [projects/myprojectname/
regions/global/operations/cf20466c-ccb1-4c0c-aae6-fac0b99c9a35] failed: Multiple
Errors:
- Google Cloud Dataproc Agent reports failure. If logs are available, they can
be found in 'gs://myjupyterbucketname/google-cloud-dataproc-metainfo/231e5160-75f3-
487c-9cc3-06a5918b77f5/my-dataproc-cluster-m'.
- Google Cloud Dataproc Agent reports failure. If logs are available, they can
be found in 'gs://myjupyterbucketname/google-cloud-dataproc-metainfo/231e5160-75f3-
487c-9cc3-06a5918b77f5/my-dataproc-cluster-w-1'..
Well, the files were there, so I think it may not be an access permission problem. The file named "dataproc-initialization-script-0_output" has the following content:
/usr/bin/env: bash: No such file or directory
Any ideas?
Well, I found my answer here.
It turns out the script had Windows line endings instead of Unix line endings.
I made an online conversion using dos2unix and now it runs fine.
With help from #tix I could check that the file was reachable using an SSH connection to the cluster (a successful "gsutil cat gs://myjupyterbucketname/jupyter.sh"),
AND that the initialization file was correctly saved locally in the directory "/etc/google-dataproc/startup-scripts/dataproc-initialization-script-0".
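For anyone hitting the same error, a minimal sketch of the fix, assuming dos2unix and gsutil are installed locally (the bucket name is the one from the question):
# Convert the init script to Unix line endings, then re-upload it to the bucket
dos2unix jupyter.sh
gsutil cp jupyter.sh gs://myjupyterbucketname/jupyter.sh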
I am a beginner with Docker, and I'm trying to make my Transmission container work!
First, I am running Debian 8.1.
To make things work I created these two folders:
mkdir -p /opt/docker/transmission/config
mkdir -p /opt/docker/transmission/downloads
After this I set what I thought were the right permissions:
chown -R root:docker /opt/docker
chmod -R 775 /opt/docker
Finally, I tried to create my container by running:
docker run -d \
--net="host" \
--name="Transmission" \
-e USERNAME="root" \
-e PASSWORD="mdp" \
-v /opt/docker/transmission/config:/config \
-v /opt/docker/transmission/downloads:/downloads \
-v /etc/localtime:/etc/localtime:ro \
gfjardim/transmission
The command docker logs transmission gives:
Couldn't save temporary file "/config/resume/myfile.resume.tmp.hYhTPF": No such file or directory
I guessed that the resume folder inside config was not created, so I created it, but it didn't work.
The message in transmission GUI is:
unable to save resume file: permission denied
I cannot reproduce your errors using your commands. I suggest you delete your /opt/docker/transmission/ directory and then run the docker run command again.
Docker will take care of creating those directories.
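In other words, something along these lines (the docker run command is the same one from the question; note that rm -rf is destructive, so only do this if nothing else lives under that path):
# Remove the pre-created host directories; Docker will create them when mounting the volumes
rm -rf /opt/docker/transmission
# Then re-run the same docker run command
docker run -d \
  --net="host" \
  --name="Transmission" \
  -e USERNAME="root" \
  -e PASSWORD="mdp" \
  -v /opt/docker/transmission/config:/config \
  -v /opt/docker/transmission/downloads:/downloads \
  -v /etc/localtime:/etc/localtime:ro \
  gfjardim/transmission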
I didn't find a solution for the gfjardim/transmission container,
but by switching to a different container it worked right away.
I used the dperson/transmission image from Docker Hub instead, following its instructions there, and it works perfectly.
Thanks for your help.