How can I download OracleXE using wget and avoid the login?
I tried applying the logic from this question about Oracle Java, but I couldn't get it to work.
wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn/linux/oracle11g/xe/oracle-xe-11.2.0-1.0.x86_64.rpm.zip
I get:
--2015-10-13 04:51:03-- http://download.oracle.com/otn/linux/oracle11g/xe/oracle-xe-11.2.0-1.0.x86_64.rpm.zip
Resolving download.oracle.com (download.oracle.com)... 206.248.168.160, 206.248.168.139, 206.248.168.160, ...
Connecting to download.oracle.com (download.oracle.com)|206.248.168.160|:80... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://edelivery.oracle.com/akam/otn/linux/oracle11g/xe/oracle-xe-11.2.0-1.0.x86_64.rpm.zip [following]
--2015-10-13 04:51:03-- https://edelivery.oracle.com/akam/otn/linux/oracle11g/xe/oracle-xe-11.2.0-1.0.x86_64.rpm.zip
Resolving edelivery.oracle.com (edelivery.oracle.com)... 23.9.117.183, 23.9.117.183
Connecting to edelivery.oracle.com (edelivery.oracle.com)|23.9.117.183|:443... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://login.oracle.com/pls/orasso/orasso.wwsso_app_admin.ls_login?Site2pstoreToken=v1.2~CA55CD32~7E777A421E00059BE8321AEAF3C29C59D68A2F46E15A49137CE5AAF6D6B46A0C599A4560AD622CF26FFFCF23FF8FC274F021B7E57B08CEF2076FADB1A57BBFB02B991E320BB3A417DDF966B4406E225736912745DE8F5E660631675765D519A5E7FF61481F567ED9C582AEAAEEC6E2A6C59D046AD82EA1C7AA08E9A1EDAFC44D97F22C470FE530A0F58872A00CAFD27012DF4851AD4964085264393C7220CF07817E14ED0B2130ECF06758DB538644A119246C4B65963CD1C825650BE3B3C86C1670EC8F754E943853BE4C58F0A4FD89B1CE14E7110087134765A9EBAA170769C75645798E1D978B944D2D896A564E49CD42478328D8661794E3DC377DBEF9F7C27184E0DFF7EAAB [following]
--2015-10-13 04:51:03-- https://login.oracle.com/pls/orasso/orasso.wwsso_app_admin.ls_login?Site2pstoreToken=v1.2~CA55CD32~7E777A421E00059BE8321AEAF3C29C59D68A2F46E15A49137CE5AAF6D6B46A0C599A4560AD622CF26FFFCF23FF8FC274F021B7E57B08CEF2076FADB1A57BBFB02B991E320BB3A417DDF966B4406E225736912745DE8F5E660631675765D519A5E7FF61481F567ED9C582AEAAEEC6E2A6C59D046AD82EA1C7AA08E9A1EDAFC44D97F22C470FE530A0F58872A00CAFD27012DF4851AD4964085264393C7220CF07817E14ED0B2130ECF06758DB538644A119246C4B65963CD1C825650BE3B3C86C1670EC8F754E943853BE4C58F0A4FD89B1CE14E7110087134765A9EBAA170769C75645798E1D978B944D2D896A564E49CD42478328D8661794E3DC377DBEF9F7C27184E0DFF7EAAB
Resolving login.oracle.com (login.oracle.com)... 209.17.4.8, 209.17.4.8
Connecting to login.oracle.com (login.oracle.com)|209.17.4.8|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2051 (2.0K) [text/html]
Saving to: ‘oracle-xe-11.2.0-1.0.x86_64.rpm.zip’
100%[======================================================================================================================================================>] 2,051 --.-K/s in 0s
2015-10-13 04:51:03 (142 MB/s) - ‘oracle-xe-11.2.0-1.0.x86_64.rpm.zip’ saved [2051/2051]
To download the Oracle Linux zips from a URL directly to your server, you have to:
1 - Log in to Oracle.com with your credentials (https://login.oracle.com/mysso/signon.jsp)
2 - Export cookies.txt with your browser
3 - Copy this file to your server:
scp cookies.txt root@url:/path/
4 - Go to the directory holding cookies.txt, copy the download link, and paste it into your server terminal:
wget --load-cookies=cookies.txt http://download.oracle.com/otn/linux/oracle12c/121020/linuxamd64_12102_database_1of2.zip
wget --load-cookies=cookies.txt http://download.oracle.com/otn/linux/oracle12c/121020/linuxamd64_12102_database_2of2.zip
Check the file size with ls -lah.
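As a quick sanity check (a sketch, using the file names from the commands above): the failed attempt earlier in this question saved only a ~2 KB HTML login page, so make sure the real archives are much larger:
ls -lah linuxamd64_12102_database_1of2.zip linuxamd64_12102_database_2of2.zip
# If either file is only a few KB, it is the login page again and the
# cookies were not accepted.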
Note that wget --header "Cookie: oraclelicense=accept-securebackup-cookie" overrides all other cookies, including the authorization ones.
Instead, you can use a custom cookies.txt file together with --user/--password (tested on the Oracle Archive and OracleXE downloads):
echo .oracle.com TRUE / FALSE 0 oraclelicense accept-securebackup-cookie >cookies.txt
wget -c --load-cookies cookies.txt --trust-server-names --user=SSO_USERNAME --password=SSO_PASSWORD URL
UPD: Attention! cookies.txt is tab-separated! To make sure the tabs reach the file, quote the string: `echo -e '.oracle.com\tTRUE\t/\tFALSE\t0\toraclelicense\taccept-securebackup-cookie' > cookies.txt`
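A complete sketch of this approach (the SSO credentials are placeholders, and printf with a single-quoted format string is used instead of echo -e as a portable way to get literal tabs):
printf '.oracle.com\tTRUE\t/\tFALSE\t0\toraclelicense\taccept-securebackup-cookie\n' > cookies.txt
wget -c --load-cookies cookies.txt --trust-server-names \
     --user="$SSO_USERNAME" --password="$SSO_PASSWORD" \
     http://download.oracle.com/otn/linux/oracle11g/xe/oracle-xe-11.2.0-1.0.x86_64.rpm.zip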
Related
tac FILE | sed -n -e 's/^.*URL: //p' | SEND TO WGET HERE
The one-liner above gives a list of URLs from a file, one per line. I am trying to stream/pipe these into wget directly. Each URL is a thumbnail picture that I need to download in bulk, and I'm trying to write this one-liner to facilitate the process.
This one liner above gives a list of URLs from a file, one per line. I
am trying to (...) pipe these into wget directly.
To do that you can harness the -i file option; if you give - as the file, wget reads the URLs from standard input. From the wget man page:
-i file
--input-file=file
Read URLs from a local or external file. If - is specified as file, URLs are read from the standard input. (...) If this function is used, no URLs need be present on the command line. (...)
So in your case:
command | wget -i -
where command is a command whose output is one URL per line.
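Applied to the one-liner from the question (assuming sed emits exactly one URL per line), that becomes:
tac FILE | sed -n -e 's/^.*URL: //p' | wget -i -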
Use xargs to build the arguments of a command from standard input:
tac FILE | sed -n -e 's/^.*URL: //p' | xargs wget
Here, each whitespace-separated word that xargs reads from standard input is passed as a positional argument to wget.
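With the demo input below, xargs effectively builds a single wget invocation with all extracted URLs as arguments, roughly:
wget https://stackoverflow.com https://google.com https://netflix.com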
Demo:
$ cat FILE
URL: https://google.com https://netflix.com
asdfdas URL: https://stackoverflow.com
$ tac FILE | sed -n -e 's/^.*URL: //p' | xargs wget
--2021-12-30 12:53:17-- https://stackoverflow.com/
Resolving stackoverflow.com (stackoverflow.com)... 151.101.65.69, 151.101.193.69, 151.101.129.69, ...
Connecting to stackoverflow.com (stackoverflow.com)|151.101.65.69|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html.7’
index.html.7 [ <=> ] 175,76K 427KB/s in 0,4s
2021-12-30 12:53:18 (427 KB/s) - ‘index.html.7’ saved [179983]
--2021-12-30 12:53:18-- https://google.com/
Resolving google.com (google.com)... 142.250.184.142, 2a00:1450:4017:80c::200e
Connecting to google.com (google.com)|142.250.184.142|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://www.google.com/ [following]
--2021-12-30 12:53:18-- https://www.google.com/
Resolving www.google.com (www.google.com)... 142.250.187.100, 2a00:1450:4017:807::2004
Connecting to www.google.com (www.google.com)|142.250.187.100|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://consent.google.com/ml?continue=https://www.google.com/&gl=GR&m=0&pc=shp&hl=el&src=1 [following]
--2021-12-30 12:53:19-- https://consent.google.com/ml?continue=https://www.google.com/&gl=GR&m=0&pc=shp&hl=el&src=1
Resolving consent.google.com (consent.google.com)... 216.58.206.206, 2a00:1450:4017:80c::200e
Connecting to consent.google.com (consent.google.com)|216.58.206.206|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html.8’
index.html.8 [ <=> ] 12,16K --.-KB/s in 0,01s
2021-12-30 12:53:19 (1,25 MB/s) - ‘index.html.8’ saved [12450]
--2021-12-30 12:53:19-- https://netflix.com/
Resolving netflix.com (netflix.com)... 54.155.246.232, 18.200.8.190, 54.73.148.110, ...
Connecting to netflix.com (netflix.com)|54.155.246.232|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://www.netflix.com/ [following]
--2021-12-30 12:53:19-- https://www.netflix.com/
Resolving www.netflix.com (www.netflix.com)... 54.155.178.5, 3.251.50.149, 54.74.73.31, ...
Connecting to www.netflix.com (www.netflix.com)|54.155.178.5|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://www.netflix.com/gr-en/ [following]
--2021-12-30 12:53:20-- https://www.netflix.com/gr-en/
Reusing existing connection to www.netflix.com:443.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html.9’
index.html.9 [ <=> ] 424,83K 1003KB/s in 0,4s
2021-12-30 12:53:21 (1003 KB/s) - ‘index.html.9’ saved [435027]
FINISHED --2021-12-30 12:53:21--
Total wall clock time: 4,1s
Downloaded: 3 files, 613K in 0,8s (725 KB/s)
I am trying to use bash and wget to POST form data to a login page and save the cookies after logging in. However, I have to go through an indirect URL that takes me to the login page through two redirects.
I've tried a variety of other methods using curl and wget, to no avail. They all reach the page, but don't actually log in.
All of the StackOverflow questions and articles I've read on the subject claim it's as easy as the wget call below.
wget \
--save-cookies cookies.txt \
--keep-session-cookies \
--post-data "username=$username&password=$password" \
"$indirect_url"
Here is an example wget output:
--2021-07-09 15:57:21-- https://<<indirect.url.1>>/
Resolving <<indirect.url.1>> (<<indirect.url.1>>)... <<indirect.ip.1>>
Connecting to <<indirect.url.1>> (<<indirect.url.1>>)|<<indirect.ip.1>>|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://<<indirect.url.2>> [following]
--2021-07-09 15:57:22-- https://<<indirect.url.2>>
Reusing existing connection to <<indirect.url.2>>:443.
HTTP request sent, awaiting response... 302 Found
Location: https://<<login.url.2>> [following]
--2021-07-09 15:57:22-- https://<<login.url>>
Resolving <<login.url>> (<<login.url>>)... <<login.ip>>
Connecting to <<login.url>> (<<login.url>>)|<<login.ip>>|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html’
index.html [ <=> ] 6.32K --.-KB/s in 0s
2021-07-09 15:57:22 (284 MB/s) - ‘index.html’ saved [6473]
It seems to connect to the login page (200 response) and downloads it, but it doesn't actually log me in, nor is cookies.txt correct (e.g. it contains no post-login cookies).
Are the credentials not getting carried through the redirect?
Is there something else I'm doing wrong?
Any help would be appreciated,
Thank you.
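One thing worth checking (a sketch, not a verified fix): many login pages expect the POST at the form's own action URL and require hidden fields such as a CSRF token, so posting to the indirect URL never reaches the form handler. The field name csrf and the variable login_form_url below are assumptions for illustration, not values taken from your site:
# 1. Fetch the login page once to get a session cookie and the hidden token
#    (the field name "csrf" is a placeholder - inspect the saved HTML).
wget --save-cookies cookies.txt --keep-session-cookies -O login.html "$indirect_url"
csrf=$(grep -o 'name="csrf" value="[^"]*"' login.html | sed 's/.*value="//;s/"$//')
# 2. POST the credentials (plus the token) to the form's action URL,
#    reusing the cookies captured in step 1.
wget --load-cookies cookies.txt --save-cookies cookies.txt --keep-session-cookies \
     --post-data "username=$username&password=$password&csrf=$csrf" \
     -O /dev/null "$login_form_url"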
I am using the following command in my bash script to trigger a Jenkins build:
wget --no-check-certificate "http://<jenkins_url>/view/some_view/job/some_prj/buildWithParameters?token=xxx"
Output:
HTTP request sent, awaiting response... 201 Created
Length: 0
Saving to: “buildWithParameters?token=xxx”
[ <=> ] 0 --.-K/s in 0s
2015-02-20 10:10:46 (0.00 B/s) - “buildWithParameters?token=xxx” saved [0/0]
And then it creates an empty file: “buildWithParameters?token=xxx”
My question is: why does wget create this file, and how do I turn that behavior off?
Most simply:
wget --no-check-certificate -O /dev/null http://foo
This makes wget save the response to /dev/null, effectively discarding it.
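Combined with -q to suppress the progress output, the trigger call becomes (same URL placeholders as in the question):
wget --no-check-certificate -q -O /dev/null "http://<jenkins_url>/view/some_view/job/some_prj/buildWithParameters?token=xxx"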
I'm trying to execute a long list of repetitive commands on Terminal.
The commands look like this:
wget 'http://api.tiles.mapbox.com/v3/localstarlight.hl2o31b8/-180,52,9/1280x1280.png' -O '/Volumes/Alaya/XXXXXXXXX/Downloads/MapTiles/Tile (52.-180) 0.png' \
wget 'http://api.tiles.mapbox.com/v3/localstarlight.hl2o31b8/-177,52,9/1280x1280.png' -O '/Volumes/Alaya/XXXXXXXXX/Downloads/MapTiles/Tile (52.-177) 1.png' \
If I copy the entire list into Terminal, it executes them all but seems to do it in such a rush that some files only get partially downloaded and some are missed entirely. It doesn't seem to take them one by one and wait until each is finished before attempting the next.
I tried putting the entire list into a shell script and running it, but then for some reason it downloads everything yet produces only one file, and looking at the output it seems to be saving every file under the same filename:
2014-03-29 09:56:31 (4.15 MB/s) - `/Volumes/Alaya/XXXXXXXX/Downloads/MapTiles/Tile (52.180) 120.png' saved [28319/28319]
--2014-03-29 09:56:31-- http://%20%0Dwget/
Resolving \rwget... failed: nodename nor servname provided, or not known.
wget: unable to resolve host address ` \rwget'
--2014-03-29 09:56:31-- http://api.tiles.mapbox.com/v3/localstarlight.hl2o31b8/171,52,9/1280x1280.png
Reusing existing connection to api.tiles.mapbox.com:80.
HTTP request sent, awaiting response... 200 OK
Length: 33530 (33K) [image/jpeg]
Saving to: `/Volumes/Alaya/XXXXXXXX/Downloads/MapTiles/Tile (52.180) 120.png'
100%[======================================>] 33,530 --.-K/s in 0.008s
2014-03-29 09:56:31 (3.90 MB/s) - `/Volumes/Alaya/XXXXXXXX/Downloads/MapTiles/Tile (52.180) 120.png' saved [33530/33530]
--2014-03-29 09:56:31-- http://%20%0Dwget/
Resolving \rwget... failed: nodename nor servname provided, or not known.
wget: unable to resolve host address ` \rwget'
--2014-03-29 09:56:31-- http://api.tiles.mapbox.com/v3/localstarlight.hl2o31b8/174,52,9/1280x1280.png
Reusing existing connection to api.tiles.mapbox.com:80.
HTTP request sent, awaiting response... 200 OK
Length: 48906 (48K) [image/jpeg]
Saving to: `/Volumes/Alaya/XXXXXXXX/Downloads/MapTiles/Tile (52.180) 120.png'
100%[======================================>] 48,906 --.-K/s in 0.01s
2014-03-29 09:56:31 (4.88 MB/s) - `/Volumes/Alaya/XXXXXXXX/Downloads/MapTiles/Tile (52.180) 120.png' saved [48906/48906]
--2014-03-29 09:56:31-- http://%20%0Dwget/
Resolving \rwget... failed: nodename nor servname provided, or not known.
wget: unable to resolve host address ` \rwget'
--2014-03-29 09:56:31-- http://api.tiles.mapbox.com/v3/localstarlight.hl2o31b8/177,52,9/1280x1280.png
Reusing existing connection to api.tiles.mapbox.com:80.
HTTP request sent, awaiting response... 200 OK
Length: 45644 (45K) [image/jpeg]
Saving to: `/Volumes/Alaya/XXXXXXXX/Downloads/MapTiles/Tile (52.180) 120.png'
100%[======================================>] 45,644 --.-K/s in 0.01s
2014-03-29 09:56:31 (4.36 MB/s) - `/Volumes/Alaya/XXXXXXXX/Downloads/MapTiles/Tile (52.180) 120.png' saved [45644/45644]
So it's saving every file to this name: Tile (52.180) 120.png
Note that it doesn't do this if I run each command separately... so I don't understand why it's doing that.
Can someone tell me how to execute this list of commands so that it does each one properly?
Thanks!
Your file should look like this:
#!/bin/bash
wget -q 'http://api.tiles.mapbox.com/v3/localstarlight.hl2o31b8/-180,52,9/1280x1280.png' -O 'a.png'
wget -q 'http://api.tiles.mapbox.com/v3/localstarlight.hl2o31b8/-177,52,9/1280x1280.png' -O 'b.png'
BUT... you have a backslash at the end of each wget line, which is a continuation character for long lines and which you don't need. Remove it.
Essentially you are asking wget to get a file, then another file called wget, then another file, and so on. Your script runs only a single wget - the first one. All the other wget commands are treated as arguments to that first wget because of the continuation characters.
You are doing this:
wget URL file wget URL file wget URL file
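You can reproduce the effect with a trivial two-line script (demo.sh is a hypothetical name, just to illustrate the continuation character):
printf 'echo one \\\necho two\n' > demo.sh
bash demo.sh     # prints "one echo two" - a single echo, not two commands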
Quoting from the log you've posted:
http://%20%0Dwget/
This suggests that your script contains CR+LF line endings. Remove those before executing the script:
sed $'s/\r//' scriptname
or
tr -d '\r' < scriptname
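Both commands print the cleaned script to standard output, so redirect to a new file (or use sed -i with GNU sed) and run that instead:
tr -d '\r' < scriptname > scriptname.unix
bash scriptname.unix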
I'm trying to crawl a local site with wget -r but I'm unsuccessful: it just downloads the first page and doesn't go any deeper. By the way, I'm so unsuccessful that it doesn't work for any site I try... :)
I've tried various options but nothing helps. Here's the command I thought would do it:
wget -r -e robots=off --user-agent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.79 Safari/537.4" --follow-tags=a,ref --debug http://rocky:8081/obix
Really, I've no clue. Whatever site or documentation I read about wget tells me that it should simply work with wget -r so I'm starting to think my wget is buggy (I'm on Fedora 16).
Any idea?
EDIT: Here's the output I'm getting for wget -r --follow-tags=ref,a http://rocky:8081/obix/ :
wget -r --follow-tags=ref,a http://rocky:8081/obix/
--2012-10-19 09:29:51--  http://rocky:8081/obix/
Resolving rocky... 127.0.0.1
Connecting to rocky|127.0.0.1|:8081... connected.
HTTP request sent, awaiting response... 200 OK
Length: 792 [text/xml]
Saving to: “rocky:8081/obix/index.html”
100%[==============================================================================>] 792 --.-K/s in 0s
2012-10-19 09:29:51 (86,0 MB/s) - “rocky:8081/obix/index.html” saved [792/792]
FINISHED --2012-10-19 09:29:51--
Downloaded: 1 files, 792 in 0s (86,0 MB/s)
Usually there's no need to give the user-agent.
It should be sufficient to give:
wget -r http://stackoverflow.com/questions/12955253/recursive-wget-wont-work
To see why wget doesn't do what you want, look at the output it gives you and post it here.
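For reference, a typical recursive crawl with a depth limit and without climbing to the parent directory looks like this (standard wget flags, not a verified fix for the obix server, which serves text/xml that wget may not parse for links):
wget -r -l 5 --no-parent http://rocky:8081/obix/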