I'm generating Office docs in OpenXML. Part of the process is using zip to combine directories and files into an archive. This works fine locally:
var p = 'cd ' + target + '/; zip -r ../' + this.fname + ' .; cd ..;';
return exec.exec(p, function(err, stdout, stderr) { ... });
But it fails on Heroku Cedar with the error /bin/sh: zip: not found. Logging in via shell (heroku run bash) and running ls /bin, it appears that the zip binary does not exist. gzip does exist, but I think that's different.
Is it possible to run zip on Heroku from a shell process? From the link below it seems like it should be possible. (That article uses Ruby and I use Node, but I think the shell shouldn't care who's calling it?)
Rails: How can I use system zip on Heroku to make a docx from an xml template?
It says here
How to unzip files in a Heroku Buildpack
that though Heroku doesn't include the zip command, the jar command is available.
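For example, something like this might work (a sketch only, assuming the JDK's jar tool is on the dyno's PATH and that target holds the files to archive; mydoc.docx is a placeholder output name):
jar cMf mydoc.docx -C target .
The M flag skips the manifest, so the result is a plain zip-format archive with the contents of target at its root.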
However, why not use an npm package like this one to process your files from within the Node app itself:
https://www.npmjs.org/package/zipfile
I have to deploy files from a git diff to a Salesforce org.
So I have all the file names written in a txt file and need to bring them all to the Salesforce org.
I have the files locally and need to, as I said, deploy them to the Salesforce org.
I tried running sfdx and listing them all, but it gives me
"C:\Program" is not a recognized command
I tried adding """ at the start and """ at the end and separating each file with "","", but it still doesn't work.
I know I can do it from an xml file, but I have the diff in a txt file.
The error sounds like your sfdx isn't installed correctly; you may have to reinstall. Or maybe you had newlines in your command and they messed something up?
You need to read up on the force:source:deploy command, specifically the -p parameter...
Here's a decent example of what you can do. It's a bit boring and repetitive, but it deploys exactly these files and nothing more, not whole folders.
sfdx force:source:deploy -u prod -p "force-app/main/default/objects/MyObject__c/fields/Description__c.field-meta.xml,force-app/main/default/objects/MyObject__c/fields/Amount__c.field-meta.xml,force-app/main/default/objects/MyObject__c/fields/Quantity__c.field-meta.xml,force-app/main/default/classes/MyObjectTriggerHandler.cls" -l RunSpecifiedTests -r "SomeTestClass" --verbose --loglevel fatal
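Since your file names are already one per line in a txt file, you could build that comma-separated -p value from it instead of typing it by hand. A minimal sketch, assuming a file called changed-files.txt with one repository-relative path per line (the file name and org alias are placeholders):
paths=$(paste -sd, changed-files.txt)
sfdx force:source:deploy -u prod -p "$paths" -l RunSpecifiedTests -r "SomeTestClass" --verbose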
There are also some cool sfdx plugins that will generate the xml file for you based on the difference between two commits. Search the list at https://github.com/mshanemc/awesome-sfdx-plugins
How can I load the output generated by my bash script into a GCS location?
My bash command is like:
echo " hello world"
I want this output (hello world) to end up in a location in GCS.
How do I write such a command in Bash?
First, you should follow the Cloud SDK installation instructions in order to use the cp command from the gsutil tool on the machine where you're running the script.
Cloud SDK requires Python; supported versions are Python 3 (preferred, 3.5 to 3.8) and Python 2 (2.7.9 or higher).
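For example, you can check which Python version is available before installing:
python3 --version
If that fails, python --version will show whether an older Python 2 interpreter is present instead.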
Run one of the following:
For the Linux 64-bit archive file, run the following from your command line:
curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-362.0.0-linux-x86_64.tar.gz
For the 32-bit archive file, run:
curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-362.0.0-linux-x86.tar.gz
Depending on your setup, you can choose other installation methods.
Extract the contents of the file to any location on your file system (preferably your home directory). If you would like to replace an existing installation, remove the existing google-cloud-sdk directory and extract the archive to the same location.
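For example, assuming you downloaded the 64-bit archive above into your current directory, the extraction into your home directory might look like this:
tar -xzf google-cloud-sdk-362.0.0-linux-x86_64.tar.gz -C "$HOME"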
Run gcloud init to initialize the SDK:
./google-cloud-sdk/bin/gcloud init
After you have installed the Cloud SDK, you should create a bucket to hold the files containing the output generated by your script.
Use the gsutil mb command and a unique name to create a bucket:
gsutil mb -b on -l us-east1 gs://my-awesome-bucket/
This example uses a bucket named "my-awesome-bucket"; you must choose your own, globally unique bucket name.
Then you can redirect your output to a local file and upload it to Google Cloud Storage like this:
#!/bin/bash
# Name the log file with a Unix timestamp so each run uploads a distinct object
TIMESTAMP=$(date +'%s')
BUCKET="my-awesome-bucket"
echo "Hello world!" > "logfile.$TIMESTAMP.log"
gsutil cp "logfile.$TIMESTAMP.log" "gs://$BUCKET/logfile.$TIMESTAMP.log"
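If you prefer not to write a local file at all, gsutil cp can also read from standard input when you pass - as the source. A small sketch using the same placeholder bucket name:
echo "Hello world!" | gsutil cp - "gs://my-awesome-bucket/logfile.$(date +'%s').log"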
Usually I source all the macros I have for the jobs run on a remote machine using this command:
macros=$\my_directory
But I see someone use a different way to get all the macros for submitting the jobs on a remote machine. He uses this command:
macros=$(dirname $(readlink -f $BASH_SOURCE))
Now I want to know what advantage dirname has over giving the specific macro location. It would be great if you could explain sourcing the macros using dirname.
By using dirname you get the directory where the script is located, so it's easy to source other files that sit next to your script without worrying about specifying the correct path each time the script bundle is relocated.
For instance, if your script contains source $macros/some_script.sh, it will not break when the bundle is located in /usr/local/bin/ or /bin/ or ...
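A minimal sketch of the pattern (other_macros.sh is just a hypothetical sibling file):
#!/bin/bash
# Resolve the directory this script lives in, even when invoked via a symlink
# or from a different working directory, then source a file that sits next to it.
macros=$(dirname "$(readlink -f "$BASH_SOURCE")")
source "$macros/other_macros.sh"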
Regarding $BASH_SOURCE see: https://stackoverflow.com/a/35006505/2146346
I have a shell script that runs aws s3 cp s3://s3file /home/usr/localfile. The file already exists in that directory, so the cp command essentially fetches the latest version from S3.
However, I noticed today that the file was not the latest version; it didn't match the file on S3. Looking at the shell script's stdout from the last two runs, it looks like the command ran - the output is: download: s3://s3file to usr/localfile. But when I compared the copies, they didn't match. The modified timestamp on the file, viewed on the local machine via WinSCP (a file transfer client), hadn't changed either.
I manually ran the command in a shell just now and it copied the file from S3 to the local machine and successfully got the latest copy.
Do I need to add a specific option for this, or is it typical behavior for aws s3 cp not to overwrite an existing file?
I created this simple script that does a backup. I wrote and tested it on Linux, then copied it into my web app's WEB-INF/scripts directory so that it could be run via Java Runtime.exec().
#!/bin/bash
JACCISE_FOLDER="/var/jaccise"
# Remove the previous backup, archive the folder and the database dump, then clean up
rm "$JACCISE_FOLDER/jaccisebackup.zip"
zip -r jaccisefolder.zip "$JACCISE_FOLDER"
mysqldump -ujacc -pxxx jacciseweb > jaccisewebdump.sql
zip jaccisebackup.zip jaccisewebdump.sql
zip jaccisebackup.zip jaccisefolder.zip
rm jaccisewebdump.sql
rm jaccisefolder.zip
cp jaccisebackup.zip "$JACCISE_FOLDER"
But it doesn't work. So I tried to copy it from WEB-INF/scripts to my user directory and run it there to troubleshoot it. The result is that it fails with ": File o directory non esistente" (meaning "No such file or directory"; notice the colon at the beginning). I created another file from scratch, copied and pasted the whole script, and it works. I think this may be related to:
Text encoding
\r\n line-ending differences between Windows (I use Eclipse on Windows to edit everything) and Linux.
How do I solve this deploy problem?
You should check whether the file is executable (chmod +x). Then you should check whether your web server allows the execution of external programs. This might be a security problem and it is likely that the web server prevents the execution; check the web server's logs. The encoding of the file can be changed with the dos2unix command. In order to debug your script you can add "set -x" at the beginning, but I think the script does not start at all.
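A quick way to run those checks on the Linux box (backup.sh is a placeholder for your script's file name, and dos2unix may need to be installed first):
file backup.sh        # reports "with CRLF line terminators" if the file has Windows line endings
dos2unix backup.sh    # convert CRLF line endings to LF in place
chmod +x backup.sh    # make the script executable
bash -x backup.sh     # run it with tracing to see each command as it executes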