Is there any "lynx" like command DOS?
I want to get list of all files in a website.
I searched Google for "lynx dos" and found this: http://www.fdisk.com/doslynx/lynxport.htm
There is a Lynx port for DOS: http://www.rahul.net/dkaufman/lynx2.8.5rel.1-DOSc.zip (from http://www.rahul.net/dkaufman/)
There are also wget and curl builds for DOS (links on the site above) that you can try out; a wget sketch follows below.
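For the original goal of listing all the files on a site, one approach with a wget build is a spider crawl whose log you then search for URLs. This is only a sketch, with example.com as a placeholder:
# --spider crawls without saving pages; the log then records every URL wget saw
wget --spider --recursive --no-verbose --level=2 --output-file=crawl.log http://example.com/
# extract and de-duplicate the URLs (needs a grep, e.g. from the same DOS ports site)
grep -o 'http://[^ ]*' crawl.log | sort -u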
No, there is not.
Telnet would be the closest, but it's very different :)
I'm trying to set up a Bash script (.shl) that will use curl to download a file.
I really can't find a good bash script tutorial. I need assistance.
I've tried testing it with a Windows .bat file that has something like:
curl ${url} > file name [trying to see it work from Windows]
and I'm getting:
Protocol "https" not supported or disabled in libcurl
The URL that I can use to extract the file would look something like this (example only):
https://bigstate.academicworks.com/api/v1/disbursements.csv?per_page=3&fields=id,disbursement_amount,portfolio_name,user_uid,user_display_name,portfolio_code,category_name&token=fcc28431bcb6771437861378aefe4a4474dbf9e503c78fd9a4db05924600c03b
I'm trying to put the file here: \\aiken\ProdITFileTrans\cofc_aw_disbursement.csv
so my .bat file looks like this:
#Echo On
curl --verbose -g ${https://bigstate.academicworks.com/api/v1/disbursements.csv?per_page=3&fields=id,disbursement_amount,portfolio_name,user_uid,user_display_name,portfolio_code,category_name&token=fcc28431bcb6771437861378aefe4a4474dbf9e503c78fd9a4db05924600c03b} >\\aiken\ProdITFileTrans\cofc_aw_disbursement.csv
PAUSE
Again, the goal is to take a working version of this call and put it in a Bash script that I can call from ATOMIC/UC4.
Once I have the bash script I want to be able to do a daily download of my file.
Well, perhaps something like:
#!/bin/bash
curl --verbose -g yourlongurlhere -o /path/to/your/file.csv
Make the file executable (chmod +x).
EDIT: check the Advanced Bash-Scripting Guide for tons of examples. It covers just about everything.
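For the question above, two details matter: the & characters in the query string must be quoted, or the shell (and cmd) will treat them as command separators, and ${...} is bash parameter expansion, not something to wrap a literal URL in. The "Protocol "https" not supported" error typically means the curl build was compiled without SSL support, or that the URL string reaching curl was mangled. A minimal sketch of the daily-download script, with the long query string from the question abbreviated:
#!/bin/bash
# Sketch only: paste the full disbursements URL from the question here; the
# single quotes keep the shell from interpreting the & characters.
url='https://bigstate.academicworks.com/api/v1/disbursements.csv?per_page=3&fields=...&token=...'
# --fail makes curl exit non-zero on HTTP errors, which a scheduler
# such as ATOMIC/UC4 can detect; -o writes the response to a file.
curl --verbose --fail "$url" -o /path/to/cofc_aw_disbursement.csv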
I'm having a frustrating but seemingly simple problem. I was recently pushing some files to GitHub, and now ls has started listing directory contents recursively when I use the basic ls command. However, it only appears to do so in my Google Drive folder; it functions normally in directories outside of Google Drive. I'm not sure if this is connected to something I was doing with git or completely unrelated. I had been working on a GitHub project in my Google Drive when I noticed the issue.
The output of type ls in the Google Drive directory and outside of it is:
ls is hashed (/bin/ls)
Does anyone have any input on how I can get ls to function 'normally' again? I'm not sure how I could've changed its function but it appears I must have. Let me know if there is additional information that would help in understanding the problem.
Thank you in advance
Your ls might be aliased (perhaps in your ~/.bashrc; look inside that file with your editor) by your interactive shell, or it might be a bash function. Check with type ls (using the type builtin).
Use \ls or /bin/ls to get the real ls program.
If your shell is bash, be sure to read the chapter on bash startup files.
Try also using stat(1) and/or some other shell (e.g. zsh, sash, ...).
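A quick diagnostic session along those lines might look like this (a sketch; the unalias line only helps if an alias is actually the culprit):
type ls        # reports whether ls is an alias, a function, or a file like /bin/ls
alias ls       # prints the alias definition, if one exists
\ls            # bypasses any alias for a single invocation
command ls     # bypasses aliases and shell functions
unalias ls     # removes an ls alias for the current session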
SOLVED:
I'm not sure why this was occurring, but the issue seems to be related to having updated a Shiny app through rsconnect() in R. I closed RStudio and now the ls command is working properly again in all directories. I have no idea why this would occur and didn't think that would be related at all. Thanks for the troubleshooting help!
How can I use curl?
Every time, in cmd or PowerShell, I get the error message "Curl command not found". How can this be?
Use Cygwin - it has most of the commands you'll ever need.
Edit: this question looks helpful if it's PowerShell you need specifically; Invoke-WebRequest can be used.
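If it is PowerShell you're in, a minimal download sketch with Invoke-WebRequest (both the URL and the file name are placeholders):
# PowerShell; example.com and file.csv below are placeholders
Invoke-WebRequest -Uri 'https://example.com/file.csv' -OutFile 'file.csv'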
I wrote a bash script that fetches lyrics from a website. The script is here: http://scrippets.wordpress.com/2011/02/01/fetching-lyrics-of-songs-from-the-terminal/ (the indentation in the script is correct, unlike how it looks on the blog).
This script works perfectly well when executed from the terminal. Now I have created a custom keyboard shortcut using Compiz Commands that executes the following command when the right key combination is pressed:
gnome-terminal --working-directory="/home/tapan/sandbox/bash/" --window-with-profile=lyrics -e "/home/tapan/sandbox/bash/lyrics.sh" -t "`rhythmbox-client --print-playing`"
I created a new profile called "lyrics" to give the terminal that opens up a custom look. When I open a terminal with this profile and run the script, it works perfectly fine again. However, when I use the keyboard shortcut to run the custom command, I get the following error:
Pink Floyd - Is There Anybody Out There?
wget: missing URL
Usage: wget [OPTION]... [URL]...
Try `wget --help' for more options.
cat: 3.txt: No such file or directory
I cannot figure out what's wrong. I mean, if it works perfectly well in the terminal normally, why shouldn't this work? Any suggestions?
PS: The script I have written is pretty elementary and noobish, so any suggestions to improve it are also welcome in the comments :)
EDIT: The output has changed a little; now it just shows the name of the song playing and nothing else. Though sometimes it still shows the wget error.
EDIT2: When I run that gnome-terminal command from a terminal, it works. The problem occurs only when running it via the keyboard shortcut using Compiz Commands, or via the run dialog (the Alt+F2 one).
The two wget commands should probably have the URL variables in double quotes. For example, wget -q -U Mozilla -O 1.txt $link should be wget -q -U Mozilla -O 1.txt "$link".
You need to URI-encode your song title so that special characters like '?', '&', '%', and '+' are not passed literally in your URL.
name3=${name2//\?/%3F}   # replace every "?" with its percent-encoded form, %3F
searchq=${name3// /+}    # replace spaces with "+" for the query string
will handle the ?'s. I don't know of a neat general solution in bash without resorting to one-line Perl or Python scripts, though a character-by-character encoder can be written by hand, as sketched below.
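A sketch of such an encoder in pure bash (assuming ASCII input; it passes unreserved characters through, turns spaces into +, and percent-encodes the rest; name2 and searchq are the variables from the script above):
# Sketch of a general percent-encoder in pure bash
urlencode() {
    local s=$1 out= c i
    for (( i = 0; i < ${#s}; i++ )); do
        c=${s:i:1}
        case $c in
            [a-zA-Z0-9.~_-]) out+=$c ;;                 # unreserved: pass through
            ' ') out+=+ ;;                              # space -> + in query strings
            *) printf -v c '%%%02X' "'$c"; out+=$c ;;   # anything else -> %XX
        esac
    done
    printf '%s\n' "$out"
}
searchq=$(urlencode "$name2")   # e.g. encodes the song title fetched earlier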
I aim to filter my Google results right in the terminal, such that I get only Google's definitions.
I am trying to run the following in the Mac terminal:
open http://www.google.com/search?q=define:cars&ie=utf-8&oe=utf-8:en-GB:official&client=vim
A similar command for Firefox is
open http://www.google.com/search?q=define:cars&ie=utf-8&oe=utf-8:en-GB:official&client=firefox-a
Which client can you use to get Google's HTML page on standard output?
To use Google search other than through their web interface, you're almost certainly better off using their API.
However, I think curl is the right tool for downloading a web page, if that's what you have to do (and it probably isn't).
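If you do use curl, two details matter: quote the URL so the shell does not treat each & as a background operator, and send a browser-like user agent, since Google tends to refuse the default one. A sketch:
# -s silences the progress meter; -A sets the user agent string
curl -s -A 'Mozilla/5.0' 'http://www.google.com/search?q=define:cars&ie=utf-8&oe=utf-8'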
"GET"
GET 'http://www.google.com/search?q=define:cars&ie=utf-8&oe=utf-8:en-GB:official&client=vim'
See also "HEAD".
The command is installable on GNU/Linux:
[elcuco@pinky ~]$ rpm -qf `which GET`
perl-libwww-perl-5.808-2mdv2008.1
In theory you could also use "wget" and output to stdout using something like this:
wget http://www.google.com -O - --quiet
However, I cannot get it to work with this example URL.
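A guess at why: Google is known to reject wget's default user agent, and any search URL containing & must be quoted. Overriding the agent may help:
# -U overrides the user agent wget identifies itself with
wget -O - --quiet -U 'Mozilla/5.0' 'http://www.google.com/search?q=define:cars'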