I'm trying to set up a Bash script (shl) that will use curl to download a file.
I really can't find a good Bash scripting tutorial, so I need assistance.
I've tried testing it with a Windows bat file that has something like:
curl ${url} > filename    [trying to see it work from Windows]
and getting
Protocol "https" not supported or disabled in libcurl
The URL that I can use to extract the file would look something like this (example only):
https://bigstate.academicworks.com/api/v1/disbursements.csv?per_page=3&fields=id,disbursement_amount,portfolio_name,user_uid,user_display_name,portfolio_code,category_name&token=fcc28431bcb6771437861378aefe4a4474dbf9e503c78fd9a4db05924600c03b
I'm trying to put the file here \aiken\ProdITFileTrans\cofc_aw_disbursement.csv
so my bat file looks like this:
#Echo On
curl --verbose -g ${https://bigstate.academicworks.com/api/v1/disbursements.csv?per_page=3&fields=id,disbursement_amount,portfolio_name,user_uid,user_display_name,portfolio_code,category_name&token=fcc28431bcb6771437861378aefe4a4474dbf9e503c78fd9a4db05924600c03b} >\\aiken\ProdITFileTrans\cofc_aw_disbursement.csv
PAUSE
Again, the goal is to take a working version of this call and put it in a Bash shell script that I can call from ATOMIC/UC4.
Once I have the bash script I want to be able to do a daily download of my file.
Well, perhaps something like:
#!/bin/bash
curl --verbose -g "your-long-url-here" -o /path/to/your/file.csv
Keep the URL in quotes so the shell doesn't interpret the & characters in the query string. Make the file executable (chmod +x).
EDIT: check the Advanced Bash-Scripting Guide for tons of examples. It covers just about everything.
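For the daily download, a minimal sketch of such a script (the URL, token, and output path are placeholders, and mounting \\aiken\ProdITFileTrans at /mnt/aiken is an assumption):
#!/bin/bash
# fetch_disbursements.sh - hypothetical daily export job
# Keep the URL quoted so the shell does not treat the & characters as control operators.
URL="https://bigstate.academicworks.com/api/v1/disbursements.csv?per_page=3&fields=id,disbursement_amount,portfolio_name&token=YOUR_TOKEN"
OUT="/mnt/aiken/ProdITFileTrans/cofc_aw_disbursement.csv"

curl --fail --silent --show-error -g "$URL" -o "$OUT"
Make it executable with chmod +x and schedule it daily from UC4 or cron (for example, a crontab line like 0 6 * * * /path/to/fetch_disbursements.sh).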
Related
I'm currently writing a snakemake pipeline in which I want to include a Perl script.
The script is not written by me; it comes from a GitHub page. I have never worked with Perl before.
I installed Perl (5.32.1) via conda. I have Miniconda installed and am working on my university's Unix server.
The code for my Perl script rule looks like this:
rule r1_filter5end:
    input:
        config["arima_mapping"] + "unprocessed_bam/{sample}_R1.sam"
    output:
        config["arima_mapping"] + "filtered_bam/{sample}_R1.bam"
    params:
    conda:
        "../envs/arima_mapping.yaml"
    log:
        config["logs"] + "arima_mapping/r1_filter5end/{sample}_R1.log"
    threads:
        12
    shell:
        "samtools view --threads {threads} -h {input} -b | perl ../scripts/filter_five_end.pl | samtools -b -o {output} 2> log"
When I run this I receive the following error:
Can't open perl script "../scripts/filter_five_end.pl": no such file
or directory found
From what I learned while researching, the first line of a Perl script (the shebang) sets the path to the Perl executable. The script I downloaded had the following path:
#!/usr/bin/perl
Since I use Perl installed via conda, this is probably wrong, so I set the path to:
#!/home/mi/my_user/miniconda3/bin/perl
However, this still did not work, regardless of whether I call
perl ../scripts/filter_five_end.pl
or
../scripts/filter_five_end.pl
Maybe it's just not possible to run Perl scripts via snakemake?
Has anyone encountered this specific case? ^^
The problem is not with the shebang. The interpreter path in the shebang doesn't matter because you're calling it with perl ../path directly. The shell that this command is executed in will resolve the path to the perl program (which is very likely the conda one) and then run the script, only taking flags (like -T or -w) from the shebang inside the script.
The error message means it cannot find the actual script file. I suspect when you run that shell command, it's in the wrong directory. Try a fully qualified path.
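A quick sanity check, run from the directory in which snakemake is launched (a sketch; the absolute path is a placeholder):
# what the rule's shell command will actually look for:
ls ../scripts/filter_five_end.pl
# a fully qualified path removes the dependence on the working directory:
ls /full/path/to/scripts/filter_five_end.pl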
As stated by OP in their comment:
I forgot that snakemake always looks up files from the Snakefile, not from the directory the rules are saved in.
Not quite an answer but perhaps relevant:
I forgot that snakemake always looks up files from the Snakefile, not from the directory the rules are saved in.
This is not entirely correct, I think. The reference point is the directory set by -d/--directory, which by default is the directory you execute snakemake in:
--directory DIR, -d DIR
Specify working directory (relative paths in the snakefile will use this as their origin). (default: None)
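So, as a hedged example, pinning the working directory when invoking snakemake makes that reference point explicit (the path is a placeholder):
# relative paths such as ../scripts/filter_five_end.pl are then resolved from /path/to/project
snakemake --cores 12 --directory /path/to/project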
This question might be silly, and I think it is something so basic that I can't even find the solution because it might be obvious to everyone.
Here's the thing:
I want to download a file from mega.nz using bash.
I found this bash script on github: https://github.com/tonikelope/megadown/blob/master/megadown
I don't know how to run this.
Tried:
Copy-pasting the contents into a file called "megadown.sh" and then running:
$ bash megadown.sh 'https://mega.nz/#F!BwQy2IAS!AwWpbCPzgLf_5jyj76q7qw'
this returns:
Reading link metadata...
Oooops, something went bad. EXIT CODE (3)
Which tells me that at least the code is running, but I don't know if I am doing it correctly.
This is better than my previous attempt, $ megadown 'URL' (as the documentation suggested), which resulted in "command not found".
First, make sure you have installed the dependencies:
sudo apt-get install openssl curl pv jq
Then try running this command:
bash megadown.sh -o FILE_NAME "LINK"
It will download the file specified by the URL to a file called FILE_NAME.
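Putting it together, a sketch (my_download.bin is a placeholder name; the link is the one from the question):
sudo apt-get install openssl curl pv jq     # dependencies
chmod +x megadown.sh                        # optional: lets you run it as ./megadown.sh
./megadown.sh -o my_download.bin 'https://mega.nz/#F!BwQy2IAS!AwWpbCPzgLf_5jyj76q7qw'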
I need to download and run Firefox through a bash script, so I tried running the commands below:
curl -o ~/firefox.tar.bz2 https://download.mozilla.org/?product=firefox-latest-ssl&os=linux64
tar xjf ~/firefox.tar.bz2
~/firefox/firefox
Yet the very first command already fails to download the tar file.
Note: The OS is Ubuntu 16, and I don't want to use apt-get.
Quote the address; otherwise the shell interprets the ampersand as a control operator and you end up trying to download something different from what you expect. Also, add the -L option to tell cURL to follow redirects:
curl -L -o ~/firefox.tar.bz2 "https://download.mozilla.org/?product=firefox-latest-ssl&os=linux64"
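The full sequence would then look something like this (a sketch; it assumes the archive unpacks into a firefox directory when extracted in your home directory):
cd ~
curl -L -o firefox.tar.bz2 "https://download.mozilla.org/?product=firefox-latest-ssl&os=linux64"
tar xjf firefox.tar.bz2      # unpacks into ./firefox
./firefox/firefox &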
I'm using Parallels because I prefer macOS, but the work we do is all in Visual Studio. We currently have a build.cmd batch file that builds our TypeScript files. Because I'd prefer to work on the Mac side when I can, I thought I would rewrite the script in bash and also get some experience writing a shell script. I have a main build.sh script that runs the other shell scripts, like compile-templates.sh and compile-source.sh. I am having trouble with the compile-source.sh portion now. Currently, the batch file looks like:
echo TypeScript Version:
CALL node_modules\.bin\tsc -v
The TypeScript compiler is included in our solution, so we are all using the same one throughout. In my compile-source.sh, I try to do this:
node_modules/.bin/tsc -v
or this
./node_modules/.bin/tsc -v
And I get permission denied. Is there something I'm doing wrong?
There are several approaches to try. You can use bash to run the script like this:
bash node_modules/.bin/tsc -v
Or you can try to change the permissions on the file:
chmod a+x node_modules/.bin/tsc
This should enable you to run the script like this:
./node_modules/.bin/tsc -v
But in that case, make sure your script starts with a shebang line to tell the system it is a bash script:
#!/usr/bin/env bash
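As a sketch, compile-source.sh could then look like this (run from the solution root; tsconfig.json is an assumption about your project layout):
#!/usr/bin/env bash
set -e                                    # stop on the first failing step
echo "TypeScript Version:"
./node_modules/.bin/tsc -v                # same compiler the Windows build.cmd uses
./node_modules/.bin/tsc -p tsconfig.json  # hypothetical: build using the project's tsconfig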
I am using vimwiki as my local wiki and keep it in git in order to be able to sync it across various PCs. I am trying to automate the process of putting the HTML generated by vimwiki on my server so I can easily look stuff up.
My idea is to check out the repository on a regular basis on the server and have a shell script in place that calls vim and tells it to execute VimwikiAll2HTML, exiting afterwards. I can then symlink the html folder somewhere, point nginx at it, or whatever.
I was able to figure out that I can directly execute a command when calling vim by using the -c parameter:
vim -c "VimwikiAll2HTML" -n index.wiki
This command automatically generates the correct HTML. However, I have to press a key and then quit vim (:q) to get back to the shell, so it doesn't seem suited to running inside a bash script invoked by cron. Can I change the command somehow so that vim exits once the HTML generation has finished? Or is there another way I'm not aware of? I looked into the vimwiki plugin because I thought it might use an external library for HTML generation that I could call from my script, but it seems the plugin does everything by itself.
This command should work:
$ vim -c VimwikiAll2HTML -c q index.wiki
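For the cron use case, a minimal wrapper around that command might look like this (the checkout location /srv/vimwiki is a placeholder, and it assumes the repository is already cloned there):
#!/usr/bin/env bash
# build-wiki.sh - hypothetical nightly job
cd /srv/vimwiki || exit 1
git pull --quiet                          # refresh the checkout
# vim may print "not a terminal" warnings when run without a tty, but should still generate the HTML
vim -c VimwikiAll2HTML -c q index.wiki
A crontab entry such as 0 3 * * * /srv/vimwiki/build-wiki.sh would then regenerate the HTML every night.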