I'm trying to use Metaplex to upload NFTs and I'm having some issues with the upload.
I'm running this command:
ts-node c:/server3/NFT/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts upload \ -e devnet \ -k C:\server3\NFT\keypair.json \ -cp config.json \ -c example \ c:/server3/NFT/assets
and I'm getting an error.
Now, I know why I'm getting the error: it says it's skipping the unsupported file "/server3", which is where the files are located. How do I make it not skip that folder? I believe that's why the path is returning undefined.
Windows has an issue with multi-line commands. The new lines are indicated with the \ after every parameter. If you remove the extra \ characters and leave everything on one line, it should resolve the issue:
ts-node c:/server3/NFT/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts upload -e devnet -k C:\server3\NFT\keypair.json -cp config.json -c example c:/server3/NFT/assets
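If you do want to keep the command split across several lines on Windows, note that the continuation character is ^ in cmd.exe (and a backtick in PowerShell), not \. A sketch for cmd.exe with the same arguments:
ts-node c:/server3/NFT/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts upload ^
 -e devnet ^
 -k C:\server3\NFT\keypair.json ^
 -cp config.json ^
 -c example ^
 c:/server3/NFT/assets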
I have a question. Is it possible to assign the output of az artifacts universal download to a variable?
I have a Jenkins job with a shell script like this:
az artifacts universal download \
--organization "sampleorganization" \
--project "sampleproject" \
--scope project \
--feed "sample-artifacts" \
--name $PACKAGE \
--version $VERSION \
--debug \
--path .
Then I would like to transfer the file to Artifactory with this:
curl -v -u $ARTIFACTORY_USER:$ARTIFACTORY_PASS -X PUT https://artifactory.com/my/test/repo/my_test_file_{VERSION}
I ran the job but noticed that I passed an empty file to Artifactory. It created my_test_file_{VERSION}, but it was 0 MB. As far as I understand, I just created an empty file with curl. So I would like to pass the output of the az download to the Artifactory repo. Is it possible? How can I do this?
I understand that I need to assign the file output to a variable and pass it to curl, like:
$MyVariableToPass = az artifacts universal download output
And then pass this var to curl.
Is it possible? How can I pass files from the shell job that Jenkins triggers to Artifactory?
Also I am not using any plugin right now.
Please help.
A possible solution is to use a VM as the Jenkins agent and install the Azure CLI inside the VM; then you can run the task on that node. To set a variable from the value of a CLI command when, for example, the output looks like this:
{
"name": "test_name",
....
}
Then you can set the variable like this:
name=$(az ...... --query name -o tsv)
That is on Linux. On Windows (PowerShell), you can set it like this:
$name = $(az ...... --query name -o tsv)
And as far as I know, the command to download the file won't output the content of the file, so this approach isn't suitable if you want to put the content of the file into a variable.
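That said, you may not need a variable at all: --path . drops the package contents into the working directory, and curl can upload the downloaded file directly with -T/--upload-file. A minimal sketch, assuming the package contains a single file called my_test_file (a placeholder name):
az artifacts universal download \
--organization "sampleorganization" \
--project "sampleproject" \
--scope project \
--feed "sample-artifacts" \
--name $PACKAGE \
--version $VERSION \
--path .
# upload the file's content instead of an empty body; note ${VERSION} so the shell expands it
curl -v -u $ARTIFACTORY_USER:$ARTIFACTORY_PASS -T ./my_test_file \
https://artifactory.com/my/test/repo/my_test_file_${VERSION}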
I'm nearly there, but stuck at the last hurdle.
$ /path/to/soffice.bin --version
^ This works both on my local machine (a Docker container) and in the container deployed on AWS Lambda.
However,
$ /path/to/soffice.bin \
--headless --invisible --nodefault --nofirststartwizard --nolockcheck --nologo --norestore --nosplash \
--convert-to pdf:writer_pdf_Export \
--outdir /tmp \
$filename \
2>&1 || true # avoid exit-on-fail
... fails with:
LibreOffice - dialog 'LibreOfficeDev 6.4 - Fatal Error': 'The application cannot be started.
User installation could not be completed. 'LibreOfficeDev 6.4 - Fatal Error: The application cannot be started.
User installation could not be completed.
Searching on Google, everything points towards a permissions issue with ~/.config/libreoffice.
And there is something strange going on with file permissions on the Lambda runtime.
Maybe it is attempting to read or write to a location to which it doesn't have access.
Is there any way to get it working?
The problem is that Lambda can only write to /tmp, but the default HOME is not /tmp.
adding
export HOME=/tmp
before calling /path/to/soffice.bin
should do the trick.
Also, note that the first run will predictably produce an error (the cause is unclear), so you should handle a retry.
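A minimal sketch of both points combined, reusing the soffice.bin call from the question (the || simply reruns the conversion once if the first attempt fails):
export HOME=/tmp   # the only writable location on Lambda
/path/to/soffice.bin --headless --invisible --nodefault --nofirststartwizard \
--nolockcheck --nologo --norestore --nosplash \
--convert-to pdf:writer_pdf_Export --outdir /tmp $filename 2>&1 || \
/path/to/soffice.bin --headless --invisible --nodefault --nofirststartwizard \
--nolockcheck --nologo --norestore --nosplash \
--convert-to pdf:writer_pdf_Export --outdir /tmp $filename 2>&1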
I'm using Google Dataproc to initialize a Jupyter cluster.
At first I used the "dataproc-initialization-actions" available on GitHub, and it worked like a charm.
This is the create-cluster call available in the documentation:
gcloud dataproc clusters create my-dataproc-cluster \
--metadata "JUPYTER_PORT=8124" \
--initialization-actions \
gs://dataproc-initialization-actions/jupyter/jupyter.sh \
--bucket my-dataproc-bucket \
--num-workers 2 \
--properties spark:spark.executorEnv.PYTHONHASHSEED=0,spark:spark.yarn.am.memory=1024m \
--worker-machine-type=n1-standard-4 \
--master-machine-type=n1-standard-4
But I want to customize it, so I took the initialization file and saved it to my Google Storage bucket (which is under the same project where I'm trying to create the cluster). So I changed the call to point to my script instead, like this:
gcloud dataproc clusters create my-dataproc-cluster \
--metadata "JUPYTER_PORT=8124" \
--initialization-actions \
gs://myjupyterbucketname/jupyter.sh \
--bucket my-dataproc-bucket \
--num-workers 2 \
--properties spark:spark.executorEnv.PYTHONHASHSEED=0,spark:spark.yarn.am.memory=1024m \
--worker-machine-type=n1-standard-4 \
--master-machine-type=n1-standard-4
But running this I got the following error:
Waiting on operation [projects/myprojectname/regions/global/operations/cf20466c-ccb1-4c0c-aae6-fac0b99c9a35].
Waiting for cluster creation operation...done.
ERROR: (gcloud.dataproc.clusters.create) Operation [projects/myprojectname/regions/global/operations/cf20466c-ccb1-4c0c-aae6-fac0b99c9a35] failed: Multiple Errors:
 - Google Cloud Dataproc Agent reports failure. If logs are available, they can be found in 'gs://myjupyterbucketname/google-cloud-dataproc-metainfo/231e5160-75f3-487c-9cc3-06a5918b77f5/my-dataproc-cluster-m'.
 - Google Cloud Dataproc Agent reports failure. If logs are available, they can be found in 'gs://myjupyterbucketname/google-cloud-dataproc-metainfo/231e5160-75f3-487c-9cc3-06a5918b77f5/my-dataproc-cluster-w-1'.
Well, the files were there, so I think it may not be an access permission problem. The file named "dataproc-initialization-script-0_output" has the following content:
/usr/bin/env: bash: No such file or directory
Any ideas?
Well, I found my answer here.
It turns out the script had Windows line endings instead of Unix line endings.
I converted it online with dos2unix and now it runs fine.
With help from #tix I was able to check that the file was reachable using an SSH connection to the cluster (a successful "gsutil cat gs://myjupyterbucketname/jupyter.sh"),
AND that the initialization file was correctly saved locally in the directory "/etc/google-dataproc/startup-scripts/dataproc-initialization-script-0".
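For reference, a minimal sketch of the fix, assuming the script sits locally as jupyter.sh before being uploaded back to the bucket from the question:
dos2unix jupyter.sh                     # or: sed -i 's/\r$//' jupyter.sh
gsutil cp jupyter.sh gs://myjupyterbucketname/jupyter.sh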
I am a beginner with Docker, and I am trying to make my Transmission container work!
First, I am running on Debian 8.1.
To make things work I created these 2 folders:
mkdir -p /opt/docker/transmission/config
mkdir -p /opt/docker/transmission/downloads
After this I set the "right" permissions:
chown -R root:docker /opt/docker
chmod -R 775 /opt/docker
Finally, I tried to create my container by running this:
docker run -d \
--net="host" \
--name="Transmission" \
-e USERNAME="root" \
-e PASSWORD="mdp" \
-v /opt/docker/transmission/config:/config \
-v /opt/docker/transmission/downloads:/downloads \
-v /etc/localtime:/etc/localtime:ro \
gfjardim/transmission
The command docker logs transmission gives:
Couldn't save temporary file "/config/resume/myfile.resume.tmp.hYhTPF": No such file or directory
I guessed that the resume folder inside config was not created, so I created it, but it didn't work.
The message in transmission GUI is:
unable to save resume file: permission denied
I cannot reproduce your errors using your commands. I suggest you delete your /opt/docker/transmission/ directory and then run the docker run command again.
Docker will take care of creating those directories.
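In other words, something like this sketch (the run command is the one from the question; the docker rm comes first in case the old container is still registered):
docker rm -f Transmission 2>/dev/null
rm -rf /opt/docker/transmission/
docker run -d \
--net="host" \
--name="Transmission" \
-e USERNAME="root" \
-e PASSWORD="mdp" \
-v /opt/docker/transmission/config:/config \
-v /opt/docker/transmission/downloads:/downloads \
-v /etc/localtime:/etc/localtime:ro \
gfjardim/transmission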
I didn't find a solution for the gfjardim/transmission container, but switching to another image made it work right away.
I followed dperson/transmission on the Docker Hub site, and it works perfectly.
Thanks for your help.
I have built a Hadoop cluster with one master-slave node and the other node as a slave. Now I want to set up Flume on the master machine to collect all the logs of the cluster. However, when I try to install Flume from the tarball, I always get:
Error: Could not find or load main class org.apache.flume.node.Application
So please help me find the answer, or the best way to install Flume on my cluster.
Many thanks!
It is basically because of FLUME_HOME.
Try this command:
$ unset FLUME_HOME
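In other words, check what it points to and clear it before launching the agent again (the agent name and config file below are just placeholders):
$ echo $FLUME_HOME                     # see what it currently points to
$ unset FLUME_HOME
$ bin/flume-ng agent -n agent1 -c conf -f conf/flume.conf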
I know it's been almost a year since this question was asked, but I saw it!
When you start your agent using sudo bin/flume-ng..., make sure to specify the file where the agent configuration is:
--conf-file flume_Agent.conf -> -f conf/flume_Agent.conf
This did the trick!
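For example, a full invocation might look like this (the agent name and paths are whatever your setup uses):
sudo bin/flume-ng agent -n agent1 -c conf -f conf/flume_Agent.conf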
It looks like you are running flume-ng from the /bin folder of the source tree.
After the build, Flume ends up in
/flume-ng-dist/target/apache-flume-1.5.0.1-bin/apache-flume-1.5.0.1-bin
Run flume-ng from there.
I suppose you are trying to run Flume from Cygwin on Windows? If that is the case, I had a similar issue. The problem might be with the flume-ng script.
Find the following line in bin/flume-ng:
$EXEC java $JAVA_OPTS $FLUME_JAVA_OPTS "${arr_java_props[@]}" -cp "$FLUME_CLASSPATH" \
-Djava.library.path=$FLUME_JAVA_LIBRARY_PATH "$FLUME_APPLICATION_CLASS" $*
and replace it with this
$EXEC java $JAVA_OPTS $FLUME_JAVA_OPTS "${arr_java_props[@]}" -cp `cygpath -wp "$FLUME_CLASSPATH"` \
-Djava.library.path=`cygpath -wp $FLUME_JAVA_LIBRARY_PATH` "$FLUME_APPLICATION_CLASS" $*
Notice that the paths have been converted to Windows directories. Java would not be able to resolve the library paths given as cygdrive paths, so they have to be converted to the correct Windows paths with cygpath wherever applicable.
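To see what cygpath does with a path list, for example (the paths here are just an illustration):
$ cygpath -wp "/cygdrive/c/flume/lib:/cygdrive/c/flume/conf"
C:\flume\lib;C:\flume\conf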
Maybe you are using the source files. You first have to compile the source code and generate the binary distribution; then, from inside the binary directory, you can execute: bin/flume-ng agent --conf ./conf/ -f conf/flume.conf -Dflume.root.logger=DEBUG,console -n agent1. You can follow all of this at: https://cwiki.apache.org/confluence/display/FLUME/Getting+Started
I got the same issue before; it was simply because FLUME_CLASSPATH was not set.
The best way to debug is to look at the java command being fired and make sure the Flume lib directory is included in the classpath (-cp).
In the following command it is looking for /lib/*, which is where the flume-ng-*.jar files should be, but that's incorrect because there's nothing in /lib, as you can see in -cp '/staging001/Flume/server/conf://lib/*:/lib/*'. It has to be ${FLUME_HOME}/lib.
/usr/lib/jvm/java-1.8.0-ibm-1.8.0.3.20-1jpp.1.el7_2.x86_64/jre/bin/java -Xms100m -Xmx500m $'-Dcom.sun.management.jmxremote\r' \
-Dflume.monitoring.type=http \
-Dflume.monitoring.port=34545 \
-cp '/staging001/Flume/server/conf://lib/*:/lib/*' \
-Djava.library.path= org.apache.flume.node.Application \
-f /staging001/Flume/server/conf/flume.conf -n client
So, if you look at the flume-ng script,
there is the FLUME_CLASSPATH setup, and if FLUME_CLASSPATH is absent it is built from FLUME_HOME:
# prepend $FLUME_HOME/lib jars to the specified classpath (if any)
if [ -n "${FLUME_CLASSPATH}" ] ; then
FLUME_CLASSPATH="${FLUME_HOME}/lib/*:$FLUME_CLASSPATH"
else
FLUME_CLASSPATH="${FLUME_HOME}/lib/*"
fi
So make sure either of those environment variables is set. With FLUME_HOME set (I'm using systemd):
Environment=FLUME_HOME=/staging001/Flume/server/
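For context, that Environment= line lives under the [Service] section of the unit file (or a drop-in), roughly like this sketch (the drop-in path is just an example):
# /etc/systemd/system/flume.service.d/env.conf
[Service]
Environment=FLUME_HOME=/staging001/Flume/server/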
Here's the working java exec.
/usr/lib/jvm/java-1.8.0-ibm-1.8.0.3.20-1jpp.1.el7_2.x86_64/jre/bin/java -Xms100m -Xmx500m \
$'-Dcom.sun.management.jmxremote\r' \
-Dflume.monitoring.type=http \
-Dflume.monitoring.port=34545 \
-cp '/staging001/Flume/server/conf:/staging001/Flume/server/lib/*:/lib/*' \
-Djava.library.path= org.apache.flume.node.Application \
-f /staging001/Flume/server/conf/flume.conf -n client
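If you are not using systemd, exporting the variable in the shell before launching flume-ng achieves the same thing; a sketch using the paths from above:
export FLUME_HOME=/staging001/Flume/server
bin/flume-ng agent --conf ./conf/ -f conf/flume.conf -n client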