For certain reasons I'm writing a bash script in which I need to jump to a certain point in a UTF-8 plaintext web page (this one, to be precise: gutenberg.org/files/2701/2701-0.txt), fetching it with curl. The command I'm currently using is:
curl -s http://gutenberg.org/files/2701/2701-0.txt | say
How could I make it start reading from a certain point in the book (i.e., the start of a chapter)?
You probably want:
url=http://gutenberg.org/files/2701/2701-0.txt
chapter=4
curl -s "$url" | sed -n '/^CHAPTER '"$chapter"'\./,$p' | say
I am unable to install sdkman on my macOS. I referred to sdkman install and Can't install sdkman on Mac OS. Still, I am missing something. Can someone please help me? I am new to macOS and sdkman.
When I go to a bash terminal and type curl -s "https://get.sdkman.io" | bash, it prints the message "failed to write body" on the terminal and opens my bash profile. What am I supposed to do next? I tried to follow the steps mentioned at the above URLs, and even used source as suggested, but I guess something is missing. I actually never wrote anything in my bash profile, so source would not even do anything. I made multiple attempts using what I found online, but sdk version never gives any output; it kept saying sdk command not found. I found online that I needed to upgrade curl; I even did that, still no success. Can someone please write / explain the steps I am missing? I would appreciate it. I did search online, but either the steps are not clear or I am not getting something right. Thanks.
What's most likely happening is that the piped bash closes the read pipe before the preceding curl finishes writing the whole page. When you issue curl -s "https://get.sdkman.io" | bash, as soon as the piped bash has what it wants, it closes the input stream from curl right away. But curl doesn't expect this and throws a "failed writing body" error. You might want to try piping the stream through an intermediary program that always reads the whole page before feeding it to bash. For instance, you can try something like this (running tac twice before piping to bash):
curl -s "https://get.sdkman.io" | tac | tac | bash
tac is a Unix program that concatenates and prints files in reverse. In this case, it reads the entire input and reverses the line order (hence we run it twice). Because it has to read the whole input to find the last line, it will not output anything to bash until curl is finished. bash will still close the read stream when it gets what it needs, but that will only affect tac, which doesn't throw an error.
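If the double-tac trick feels too clever, an alternative along the same lines is to buffer the installer yourself: save it to a temporary file and run bash on that (the temp-file path here is my own choice):

curl -s "https://get.sdkman.io" -o /tmp/sdkman-install.sh && bash /tmp/sdkman-install.sh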
I am trying to iterate through a list and curl each entry; this is ultimately to kick off a list of Jenkins jobs.
So I have a text file whose contents are:
ApplianceInsurance-Tests
BicycleInsurance-Tests
Breakdown-Tests
BridgingLoans-Tests
Broadband-Tests
Business-Loans
BusinessElectric-Tests
BusinessGas-Tests
and I am trying to create a loop in which I fire a curl command for each line in the text file:
for fn in `cat jenkins-lists.txt`; do "curl -X POST 'http://user:key#x.x.x.xxx:8080/job/$fn/build"; done
but I keep getting an error: No such file or directory.
Getting a little confused.
Your do-done body is quoted wrong. It should be:
curl -X POST "http://user:key#x.x.x.xxx:8080/job/$fn/build"
I'd also recommend:
while read -r fn; do
    curl -X POST "http://user:key#x.x.x.xxx:8080/job/$fn/build"
done < jenkins-lists.txt
instead of for fn in $(anything); do .... With the second way you don't have to worry about inadvertent globbing, and the jenkins-lists.txt file may get nicely buffered instead of needing to be read into memory all at once (not that it matters for such a small file, but why not use a technique that works well more or less regardless of file size?).
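If it helps while debugging, here's a sketch of the same loop that also reports the HTTP status Jenkins returns for each job (-w '%{http_code}' is a standard curl write-out option; the URL placeholders are as in the question):

while read -r fn; do
    status=$(curl -s -o /dev/null -w '%{http_code}' -X POST "http://user:key#x.x.x.xxx:8080/job/$fn/build")
    echo "$fn -> HTTP $status"
done < jenkins-lists.txt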
If the error had come from curl, it would probably have been HTML-formatted. The only way I can reproduce the error you describe is by cat-ing a non-existent file.
Double check the name of the jenkins-lists.txt file, and make sure your script is running in the same directory as the file. Or use an absolute path to the file.
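For example, a quick sanity check from the directory the script runs in (the echoed messages are just for illustration):

[ -r jenkins-lists.txt ] && echo "file found" || echo "file missing or unreadable"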
The following situation:
I am on a different Mac (no command history) using the Terminal (bash), and I remember only part of a command, e.g. I'm searching for a command with util in it, having forgotten that it was mdutil.
How can I fuzzy-search for a command in an efficient manner, completely in the terminal, without creating new files?
Typical ways I do it now:
To find that command I could google it, which is not always efficient and needs an internet connection and a browser.
Or hit Tab Tab, see all commands, and scroll through them until I recognize the right one.
Or output all commands to a text file and search in that.
I guess you could do something like this:
oldIFS="$IFS"
IFS=:
for dir in $PATH; do
ls $dir/*util* 2> /dev/null
done
IFS="$oldIFS"
That would loop through all the directories in your $PATH looking for a command that contains util.
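A variant of the same idea that avoids changing the global IFS (bash-specific, reading $PATH into an array):

IFS=: read -ra dirs <<< "$PATH"
for dir in "${dirs[@]}"; do
    ls "$dir"/*util* 2> /dev/null
done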
How about starting with man -k and refining, like this:
man -k util | grep -i meta
Moose::Util::MetaRole(3pm) - Apply roles to any metaclass, as well as the object base class
mdutil(1) - manage the metadata stores used by Spotlight
compgen -ca | grep util
worked best for me. Instead of util you can search for any part of a command.
As gniourf_gniourf said, a better solution would be:
compgen -caX '!*util*'
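In compgen terms, -c completes command names, -a adds aliases, and -X filterpat removes completions matching the pattern, so the leading ! keeps only names matching *util*. Wrapped in a small function (the function name is my own choice):

cmdfind() { compgen -caX "!*$1*"; }
cmdfind util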
I started playing around with curl a few days ago. For some reason I couldn't figure out how to achieve the following.
I would like to get the original filename with the output option
-O -J
AND append some kind of variable to it, like a timestamp, the source path, or whatever. This would avoid the file-overwriting issue and also make further work with the file easier.
Here are a few specs about my setup
Win7 x64
curl 7.37.0
Admin user
just the command line, no PHP or scripts or the like
C:\>curl --retry 1 --cert c:\certificate.cer --url https://blabla.com/pdf-file --user username:password --cookie-jar cookie.txt -v -O -J
I've played around with various things I found online, like
-o %(file %H:%s)
-O -J/%date%
-o $(%H) bla#1.pdf
but it always just prints out the file named like "%(file.pdf" or other weird names. I guess this points to escaping and quoting issues, but I can't find the problem right now.
No scripting solutions please, I need this command in a single line for Selenium automation.
Preferred output:
originalfilename_date_time_source.pdf
Let me know if you get a solution for this.
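One possible sketch, assuming the command can run through a POSIX shell (e.g. Git Bash on Windows) rather than plain cmd.exe: curl's -w '%{filename_effective}' write-out prints the name curl actually saved to (meaningful with -O, and with -J it reflects the server-supplied name), so you can capture it and rename the file in the same line:

f=$(curl -s --retry 1 --user username:password -O -J -w '%{filename_effective}' https://blabla.com/pdf-file) && mv "$f" "${f%.*}_$(date +%Y%m%d_%H%M%S).${f##*.}"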
Is it possible to grab text from an online text file via grep/cat/awk or something else? (in bash)
The way I currently do this is: I download the text file to the drive and grep/cat the file for its text.
curl -o "$TMPDIR"/"text.txt" http://www.example.com/text.txt
cat/grep "$TMPDIR"/text.txt
rm -rf "$TMPDIR"/"text.txt"
Is one of the text grabbers (or another tool) capable of grabbing something from a text file on the internet?
This would get rid of the whole download-file/read-file/delete-file process and replace it with one command, speeding things up considerably if you have a lot of those strings.
I couldn't find anything via the man pages or googling around, maybe you guys know something.
Use curl -o - http://www.example.com/text.txt | grep "something".
-o - tells curl to "download to stdout"; other utilities such as wget, lynx and links have corresponding functionality.
You might try netcat - this is exactly what it was made for.
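For plain HTTP you'd speak the protocol yourself; a rough netcat sketch (the response includes the HTTP headers, so everything up to the first blank line is stripped before grepping):

printf 'GET /text.txt HTTP/1.0\r\nHost: www.example.com\r\n\r\n' | nc www.example.com 80 | sed '1,/^[[:space:]]*$/d' | grep "something"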
You could at least pipe your commands to avoid manually creating a temporary file:
curl … | cat/grep …