Google Dataproc initialization script error File not Found - shell

I'm using Google Dataproc to initialize a Jupyter cluster.
At first I used the "dataproc-initialization-actions" available in github, and it works like a charm.
This is the cluster create call from the documentation:
gcloud dataproc clusters create my-dataproc-cluster \
--metadata "JUPYTER_PORT=8124" \
--initialization-actions \
gs://dataproc-initialization-actions/jupyter/jupyter.sh \
--bucket my-dataproc-bucket \
--num-workers 2 \
--properties spark:spark.executorEnv.PYTHONHASHSEED=0,spark:spark.yarn.am.memory=1024m \
--worker-machine-type=n1-standard-4 \
--master-machine-type=n1-standard-4
But I want to customize it, so I copied the initialization file and saved it to my own Google Storage bucket (under the same project where I'm trying to create the cluster). Then I changed the call to point to my script instead, like this:
gcloud dataproc clusters create my-dataproc-cluster \
--metadata "JUPYTER_PORT=8124" \
--initialization-actions \
gs://myjupyterbucketname/jupyter.sh \
--bucket my-dataproc-bucket \
--num-workers 2 \
--properties spark:spark.executorEnv.PYTHONHASHSEED=0,spark:spark.yarn.am.memory=1024m \
--worker-machine-type=n1-standard-4 \
--master-machine-type=n1-standard-4
But running this I got the following error:
Waiting on operation [projects/myprojectname/regions/global/operations/cf20466c-ccb1-4c0c-aae6-fac0b99c9a35].
Waiting for cluster creation operation...done.
ERROR: (gcloud.dataproc.clusters.create) Operation [projects/myprojectname/regions/global/operations/cf20466c-ccb1-4c0c-aae6-fac0b99c9a35] failed: Multiple Errors:
 - Google Cloud Dataproc Agent reports failure. If logs are available, they can be found in 'gs://myjupyterbucketname/google-cloud-dataproc-metainfo/231e5160-75f3-487c-9cc3-06a5918b77f5/my-dataproc-cluster-m'.
 - Google Cloud Dataproc Agent reports failure. If logs are available, they can be found in 'gs://myjupyterbucketname/google-cloud-dataproc-metainfo/231e5160-75f3-487c-9cc3-06a5918b77f5/my-dataproc-cluster-w-1'.
Well, the files were there, so I don't think it's an access permission problem. The file named "dataproc-initialization-script-0_output" has the following content:
/usr/bin/env: bash: No such file or directory
Any ideas?

Well, found my answer here.
Turns out the script had Windows line endings instead of Unix line endings.
I did an online conversion using dos2unix and now it runs fine.
With help from #tix I could check that the file was reachable over an SSH connection to the cluster (a successful "gsutil cat gs://myjupyterbucketname/jupyter.sh"),
AND that the initialization file was correctly saved locally as "/etc/google-dataproc/startup-scripts/dataproc-initialization-script-0".
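For reference, a minimal way to check and fix the line endings locally before re-uploading (standard file/dos2unix/gsutil usage, with the same file and bucket names as above):
file jupyter.sh        # reports "with CRLF line terminators" if the endings are still Windows-style
dos2unix jupyter.sh    # convert to Unix line endings
gsutil cp jupyter.sh gs://myjupyterbucketname/jupyter.sh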

Related

Metaplex uploading error. "path" argument must be string

I'm trying to use Metaplex to upload NFTs and I'm having some issues with the uploading.
I'm running this command:
ts-node c:/server3/NFT/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts upload \ -e devnet \ -k C:\server3\NFT\keypair.json \ -cp config.json \ -c example \ c:/server3/NFT/assets
and getting this error.
Now I know WHY I'm getting the error: it says it's skipping the unsupported file "/server3", which is where the files are located. How do I make it not skip that folder? I believe that's why "path" is returning undefined.
Windows has an issue with multi-line commands. The line continuations are indicated with the \ after every parameter. If you remove the extra \ and leave everything on one line, it should resolve the issue for you:
ts-node c:/server3/NFT/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts upload -e devnet -k C:\server3\NFT\keypair.json -cp config.json -c example c:/server3/NFT/assets
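As a side note (not from the original answer): if you do want to keep the command on multiple lines in a Windows shell, use that shell's own continuation character instead of \ — cmd.exe uses ^ and PowerShell uses a backtick. For example, in cmd.exe:
ts-node c:/server3/NFT/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts upload ^
  -e devnet ^
  -k C:\server3\NFT\keypair.json ^
  -cp config.json ^
  -c example ^
  c:/server3/NFT/assets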

Getting LibreOffice (uninstalled, files only) to work on AWS Lambda

I'm nearly there, but stuck at the last hurdle.
$ /path/to/soffice.bin --version
^ This works both on my local machine (Docker Container) and on (container deployed on) AWS Lambda
However,
$ /path/to/soffice.bin \
--headless --invisible --nodefault --nofirststartwizard --nolockcheck --nologo --norestore --nosplash \
--convert-to pdf:writer_pdf_Export \
--outdir /tmp \
$filename \
2>&1 || true # avoid exit-on-fail
... fails with:
LibreOffice - dialog 'LibreOfficeDev 6.4 - Fatal Error':
'The application cannot be started. User installation could not be completed.'
Searching on Google, everything points towards a permissions issue with ~/.config/libreoffice.
And there is something strange going on with file permissions in the Lambda runtime.
Maybe it is attempting to read or write to a location it doesn't have access to.
Is there any way to get it working?
The problem is that Lambda can only write to /tmp, but the default HOME is not /tmp.
adding
export HOME=/tmp
before calling /path/to/soffice.bin
should do the trick.
Also, note that the first run reliably fails for unknown reasons, so you should handle a retry.
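A minimal sketch of that workaround, reusing the soffice.bin invocation from the question; the two-attempt loop is only there to cover the flaky first run mentioned above:
export HOME=/tmp    # Lambda can only write under /tmp, so point the user profile there

# the first run tends to fail, so try twice
for attempt in 1 2; do
  /path/to/soffice.bin \
    --headless --invisible --nodefault --nofirststartwizard --nolockcheck --nologo --norestore --nosplash \
    --convert-to pdf:writer_pdf_Export \
    --outdir /tmp \
    "$filename" && break
done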

AWS EC2 docker machine with amazonec2 driver - Host already exists

The following command should create a new docker machine on a shiny new Amazon EC2 instance:
docker-machine \
--storage-path /path/to/folder/docker_machines \
create \
--driver amazonec2 \
--amazonec2-access-key <my key> \
--amazonec2-secret-key <my secret> \
--amazonec2-vpc-id <my vpc> \
--amazonec2-region <my region> \
--amazonec2-zone <my AZ> \
--amazonec2-security-group <existing Sec Grp> \
--amazonec2-ami ami-da05a4a0 \
--amazonec2-ssh-keypath /path/to/private/key \
--engine-install-url=https://web.archive.org/web/20170623081500/https://get.docker.com \
awesome-new-docker-machine
I ran this command once and encountered a legitimate problem (bad path to the private key). Once I fixed that and ran the command again, I got this error:
Host already exists: "awesome-new-docker-machine"
However, I can't find this docker machine anywhere:
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
I even tried a docker-machine rm and docker-machine kill just for giggles. No difference.
I can't see a new EC2 instance on Amazon having been created from the first, erroneous run of the command.
How can I "clean up" whatever's existing (somewhere) so I can recreate the machine correctly?
So, it turns out that the first run of the command created some initial artifacts in a new folder awesome-new-docker-machine under /path/to/folder/docker_machines.
Deleting this folder and trying again worked perfectly.
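For reference, a sketch of that cleanup; depending on the docker-machine version the leftover folder may sit directly under the custom storage path or under a machines/ subdirectory, so check what actually exists before deleting:
# inspect the leftover artifacts from the failed create
ls /path/to/folder/docker_machines
# remove the stale machine folder, then re-run the create command
rm -rf /path/to/folder/docker_machines/awesome-new-docker-machine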

How to execute shell commands in pig script on amazon Elastic Map Reduce?

Using a bootstrap action I move some source files to the master node. While creating the job flow through the elastic-mapreduce client, I pass a Pig script that launches an embedded Python script from the source files present on the master node.
I used the following command to create the job flow:
./elastic-mapreduce --create --alive --name "AutoTest" \
--instance-group master --instance-type m1.small \
--instance-count 1 --bid-price 0.20 \
--instance-group core --instance-type m1.small \
--instance-count 2 --bid-price 0.20 \
--log-uri s3n://test/logs \
--bootstrap-action "s3://test/bootstrap-actions/download.sh" \
--pig-script \
--args s3://test/rollups.pig
rollups.pig contains the following line, which launches the embedded Python script:
sh pig automate.py
If I run rollups.pig on my local machine, it launches automate.py successfully, but when I try to run it on Amazon Elastic MapReduce it doesn't work. Why?

Starting jobs with direct calls to Hadoop from within SSH

I've been able to kick off job flows using the elastic-mapreduce Ruby library just fine. Now I have an instance which is still 'alive' after its jobs have finished. I've logged in to it using SSH and would like to start another job, but each of my attempts has failed because Hadoop can't find the input file. I've tried storing the input file locally and on S3.
How can I create new hadoop jobs directly from within my SSH session?
The errors from my attempts:
(first attempt using local file storage, which I'd created by uploading files using SFTP)
hadoop jar hadoop-0.20-streaming.jar \
-input /home/hadoop/mystic/search_sets/test_sample.txt \
-output /home/hadoop/mystic/search_sets/test_sample_output.txt \
-mapper /home/hadoop/mystic/ctmp1_mapper.py \
-reducer /home/hadoop/mystic/ctmp1_reducer.py \
-file /home/hadoop/mystic/ctmp1_mapper.py \
-file /home/hadoop/mystic/ctmp1_reducer.py
11/10/04 22:33:57 ERROR streaming.StreamJob: Error Launching job :Input path does not exist: hdfs://ip-xx-xxx-xxx-xxx.us-west-1.compute.internal:9000/home/hadoop/mystic/search_sets/test_sample.txt
(second attempt using s3):
hadoop jar hadoop-0.20-streaming.jar \
-input s3n://xxxbucket1/test_sample.txt \
-output /home/hadoop/mystic/search_sets/test_sample_output.txt \
-mapper /home/hadoop/mystic/ctmp1_mapper.py \
-reducer /home/hadoop/mystic/ctmp1_reducer.py \
-file /home/hadoop/mystic/ctmp1_mapper.py \
-file /home/hadoop/mystic/ctmp1_reducer.py
11/10/04 22:26:45 ERROR streaming.StreamJob: Error Launching job : Input path does not exist: s3n://xxxbucket1/test_sample.txt
The first will not work. Hadoop will look for that location in HDFS, not local storage. It might work if you use the file:// prefix, like this:
-input file:///home/hadoop/mystic/search_sets/test_sample.txt
I've never tried this with streaming input, though, and it probably isn't the best idea even if it does work.
The second (S3) should work. We do this all the time. Make sure the file actually exists with:
hadoop dfs -ls s3n://xxxbucket1/test_sample.txt
Alternately, you could put the file in HDFS and use it normally. For jobs in EMR, though, I usually find S3 to be the most convenient.
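For completeness, a sketch of that HDFS alternative using the same local paths as the question; the /user/hadoop destination is just an assumed example location:
# copy the local file into HDFS, then point -input at the HDFS path
hadoop dfs -put /home/hadoop/mystic/search_sets/test_sample.txt /user/hadoop/test_sample.txt
hadoop jar hadoop-0.20-streaming.jar \
    -input /user/hadoop/test_sample.txt \
    -output /user/hadoop/test_sample_output \
    -mapper /home/hadoop/mystic/ctmp1_mapper.py \
    -reducer /home/hadoop/mystic/ctmp1_reducer.py \
    -file /home/hadoop/mystic/ctmp1_mapper.py \
    -file /home/hadoop/mystic/ctmp1_reducer.py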
