I have a bunch of web resources. To make my life easier I have now added an additional one which returns aliases for all the endpoints. This is quite handy as I also have different environments (host names and ports). I can curl the resource and copy-paste all the aliases (like list="curl ..."), which works fine. But how can I source the aliases directly? Something like
curl "http://localhost:9999/env" | bash
which btw does not work.
EDIT: sample output
alias topics='curl -X GET "http://localhost:9999/bus/api/v1/topics"'
alias stats='curl -X GET "http://localhost:9999/bus/api/v1/topics+stats"'
Just guessing, but I'm pretty sure it will work:
source <(curl "http://localhost:9999/env")
I'm not sure about the curl syntax, I'm just mimicking yours. The point is that curl has to write the alias definitions to its standard output so the shell can process them. Piping into bash doesn't work because the aliases end up defined in a child shell that exits immediately; with process substitution, source reads them into your current shell.
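If your shell doesn't support process substitution (plain sh, for instance), an equivalent one-liner is to eval the output instead; -s just keeps curl's progress meter out of the way (host and port taken from your example):
eval "$(curl -s "http://localhost:9999/env")"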
I've made a simple for loop to make POST requests using curl and save the output to a .txt file.
for ((i=200000; i<=300000; i++)); do
curl -s -X POST -d "do=something&page=$i" "https://example.com/ajax" -o "$i.txt" > /dev/null
done
Currently, the script produces a new output file roughly every 260 ms. Is it possible to make this process even faster?
Have a look at GNU parallel. You can use it to parallelise just about anything, and it works well with curl. Replace your for and while loops with it, and test for optimal performance: more is not always better, and there are diminishing returns beyond a certain point.
Here is a reference to another article that discusses it: Bash sending multiple curl request using GNU parallel
I wanted to add a simple example to my previous post.
parallel -j8 curl -s '{}' < urls >/dev/null
-j8 means use 8 parallel jobs; it can also be left unset, in which case parallel picks a default of roughly one job per CPU core. 'urls' is a text file with a bunch of URLs.
Adapt it as you see fit; it doesn't map one-to-one onto your example above.
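Adapted to the POST loop from the question, a rough sketch could look like this (same example.com URL and page range as above): seq feeds the page numbers to parallel, and {} is substituted into each curl call.
seq 200000 300000 | parallel -j8 'curl -s -X POST -d "do=something&page={}" "https://example.com/ajax" -o "{}.txt"'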
I am trying to iterate through a list and curl each entry; ultimately this is to kick off a list of Jenkins jobs.
So I have a text file whose contents are:
ApplianceInsurance-Tests
BicycleInsurance-Tests
Breakdown-Tests
BridgingLoans-Tests
Broadband-Tests
Business-Loans
BusinessElectric-Tests
BusinessGas-Tests
and I am trying to create a loop which fires a curl command for each line in the txt file:
for fn in `cat jenkins-lists.txt`; do "curl -X POST 'http://user:key#x.x.x.xxx:8080/job/$fn/build"; done
but I keep getting an error: No such file or directory.
I'm getting a little confused.
The body of your do-done loop is quoted wrong: the whole command, including its own quotes, is wrapped in one pair of double quotes, so the shell tries to run that entire string as a single command name. It should be:
curl -X POST "http://user:key#x.x.x.xxx:8080/job/$fn/build"
I'd also recommend:
while read -r fn; do
curl -X POST "http://user:key#x.x.x.xxx:8080/job/$fn/build"
done < jenkins-lists.txt
instead of for fn in $(anything); do .... With the while-read form you don't have to worry about inadvertent word splitting or globbing, and the jenkins-lists file gets read line by line instead of being slurped into memory all at once (not that it matters for such a small file, but why not use a technique that works well more or less regardless of file size?).
If the error had come from curl, it would probably have been html-formatted. The only way I can reproduce the error you describe is by cat-ing a non-existent file.
Double check the name of the jenkins-lists.txt file, and make sure your script is running in the same directory as the file. Or use an absolute path to the file.
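Putting the two suggestions together, a minimal sketch (the absolute path /home/you/jenkins-lists.txt is only a placeholder for wherever the file really lives):
# stop early if the list file isn't where we think it is
[ -f /home/you/jenkins-lists.txt ] || { echo "jenkins-lists.txt not found" >&2; exit 1; }
while read -r fn; do
    curl -X POST "http://user:key#x.x.x.xxx:8080/job/$fn/build"
done < /home/you/jenkins-lists.txt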
I started playing around with curl a few days ago. For some reason I couldn't figure out how to achieve the following.
I would like to keep the original filename given by the output options
-O -J
AND add some kind of variable to it, like a timestamp, source path or whatever. This would avoid the file-overwriting issue and also make further work with the file easier.
Here are a few specs about my setup
Win7 x64
curl 7.37.0
Admin user
just the command line, no PHP or scripts or the like
no scripting solutions please, I need this command in a single line for Selenium automation
C:\>curl --retry 1 --cert c:\certificate.cer --URL https://blabla.com/pdf-file --user username:password --cookie-jar cookie.txt -v -O -J
I've played around with various things I found online, like
-o %(file %H:%s)
-O -J/%date%
-o $(%H) bla#1.pdf
but it always just writes the file out with a literal name like "%(file.pdf" or some other broken name. I guess this points to escaping and quoting issues, but I can't find them right now.
Preferred output:
originalfilename_date_time_source.pdf
Let me know if you get a solution for this.
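One possible single-line workaround in plain cmd.exe (a sketch only, and it gives up -O -J): name the output yourself with -o and build the timestamp from cmd's %DATE%/%TIME% expansion. The offsets below assume a US-style MM/DD/YYYY date, the hour may carry a leading space, and the server-supplied filename is replaced by a fixed base name (pdf-file here), so adjust to taste:
curl --retry 1 --cert c:\certificate.cer --user username:password --cookie-jar cookie.txt "https://blabla.com/pdf-file" -o "pdf-file_%DATE:~-4%%DATE:~-10,2%%DATE:~-7,2%_%TIME:~0,2%%TIME:~3,2%.pdf"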
This is all a little over my head, so please be specific in your responses.
I have successfully performed an HTTPS FORM POST using cURL. Here is the code, simplified:
curl.exe -E cert.pem -k -F file=@"C:\DIR\test.txt" "https://www.example.com/ul_file_curl.ashx"
Here's the problem: I need to make this code upload two files each day, and the names will change every day based on several variables, like date and time of creation.
What I want to do is just replace test.txt with *.txt, but cURL doesn't seem to support wildcards, so how can I accomplish this? Thanks very much in advance.
Edit: This is all done in a Windows environment.
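The post itself notes that curl doesn't expand wildcards, so one common approach is to let the shell do the expansion and call curl once per file. A rough sketch for an interactive cmd.exe prompt (inside a .bat file write %%f instead of %f); directory, certificate and URL are the ones from the question:
for %f in (C:\DIR\*.txt) do curl.exe -E cert.pem -k -F "file=@%f" "https://www.example.com/ul_file_curl.ashx"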
Essentially, I have a standard format for file naming conventions. It breaks down to this:
target_dateUTC_timeUTC_tool
So, for instance, if I run tcpdump on a target of 'foo', then the file would be foo_dateUTC_timeUTC_tcpdump. Simple enough, but a pain for everyone to constantly (and consistently) enter... so I've tried to create a bash script which sets system variables like so:
FILENAME=$TARGET\_$UTCTIME\_$TOOL
Then, I can just call the variable at runtime, like so:
tcpdump -w $FILENAME.lpc
All of this works like a champ. I've got a menu-driven .sh which gives the user the option of viewing the current variables as well as setting them... file generation is a breeze. Unfortunately, once the date/time variable is set, it is locked to its value at the time of creation (naturally). I set the variable like so:
UTCTIME=$(/bin/date --utc +"%Y%m%d_%H%M%Z")
What I really need is either a way to create a variable which updates at runtime, or (more likely) another way to skin this cat.
While scouring for solutions, I came across a similar issue... like this.
But, to be honest, I'm stumped on how to marry the two approaches and create a simple, distributable solution.
The .sh file is posted via pastebin, here.
Use a function:
generate_filename() { echo "${1}_$(/bin/date --utc +"%Y%m%d_%H%M%Z")_$2"; }
And use it like this:
tcpdump -w "$(generate_filename foo tcpdump).lpc"
It's hard to get the function to determine the command name automatically. You can use bash history expansion to grab it and save a couple of characters of typing:
tcpdump -w "$(generate_filename foo !#:0).lpc"
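Another option, if typing the tool name twice is the main annoyance, is a small wrapper (hypothetical, not part of the original script) built on generate_filename above; it reuses the command's own name as the tool part and appends -w at the end, which works as long as the remaining arguments are plain options:
run_capture() {
    # $1 is the target; everything after it is the command to run.
    local target=$1
    shift
    # after the shift, $1 is the command name (e.g. tcpdump)
    "$@" -w "$(generate_filename "$target" "$1").lpc"
}
# usage: run_capture foo tcpdump -i eth0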