I want to download all wallpapers from this gallery: http://minus.com/mPxswm5Mn
Googling I found that I could use:
wget -r -A jpg http://minus.com/mPxswm5Mn
However, the page contains only the thumbnails; the wallpaper URLs can be derived from the thumbnail URLs as in the following examples:
Example 1:
Thumbnail url: i3.minus.com/jsyJQqqkzB66W_e.jpg
Wallpaper url: i3.minus.com/isyJQqqkzB66W.jpg
Example 2:
Thumbnail url: i1.minus.com/jNgnESZBOrhfk_e.jpg
Wallpaper url: i1.minus.com/iNgnESZBOrhfk.jpg
Any help is appreciated
This is far from a perfect solution, but you could always use your browser to list all the files you need to download and then use wget to download them:
Run this JavaScript snippet in your browser:
var imgs = document.getElementsByClassName("grid_item_thumb");
var html = "";
for (var i = 0; i < imgs.length; ++i) {
    html += imgs[i].src.replace(/\/j(.*)_e.jpg/, "/i$1.jpg") + ' ';
}
document.getElementsByTagName("body")[0].innerHTML = html;
For your specific page, it will give you this:
http://i1.minus.com/ibb1tmEYMJGWFT.jpg http://i1.minus.com/iNgnESZBOrhfk.jpg http://i3.minus.com/iitJZBqmXOYUq.jpg http://i3.minus.com/iyuCwEjztXsMF.jpg http://i2.minus.com/i0ZQNJf8vb6O2.jpg http://i2.minus.com/i6VRXhptiq1KU.jpg http://i5.minus.com/ib0H6MXLY4JeGL.jpg http://i1.minus.com/ibdwVqCRDKD8R7.jpg http://i3.minus.com/ievFZt5lDS0xy.jpg http://i4.minus.com/iHqfqOEkZDPo0.jpg http://i1.minus.com/iUe1SLFXjtyVe.jpg http://i1.minus.com/i8diPC08rFPIg.jpg http://i4.minus.com/isl4vklSl72pd.jpg http://i3.minus.com/isyJQqqkzB66W.jpg http://i7.minus.com/iriBAKsCaL92j.jpg http://i5.minus.com/im0ZKpI5FAHq0.jpg http://i2.minus.com/ilSVC7K8c0aXK.jpg http://i4.minus.com/ibmQ6r3h8SWYzI.jpg http://i3.minus.com/iVzu4QVH9azWW.jpg http://i2.minus.com/ijYWU36ege0Rr.jpg http://i1.minus.com/iQel0uLQK2Z4z.jpg http://i7.minus.com/ibk24biYBBYAeT.jpg http://i4.minus.com/ieGTVoAZaPbI2.jpg http://i2.minus.com/iuYThOzLn6XsF.jpg http://i2.minus.com/i0SAaVhrZZaL5.jpg http://i3.minus.com/ibztNp9L7zaU3F.jpg http://i5.minus.com/iJCl7B0bRahTr.jpg http://i5.minus.com/iR6Wa5uoxJFeA.jpg http://i2.minus.com/i5QHKNePLD9C0.jpg http://i3.minus.com/ibzAkM1IjGFHvR.jpg http://i2.minus.com/iqYYDjp83txiQ.jpg http://i6.minus.com/i6jFQkoAJYa1R.jpg http://i6.minus.com/ibrPBnHFyGEHQC.jpg http://i3.minus.com/iGzGyXBihCWcQ.jpg http://i4.minus.com/ibqwMh3mFNCirI.jpg http://i3.minus.com/ibvCZejMz05iH4.jpg http://i5.minus.com/iE85MKn3sMyqk.jpg http://i7.minus.com/ipixgQGKKkPw2.jpg http://i5.minus.com/iX5wooBBLIl39.jpg http://i4.minus.com/iNqvRAmIQAphd.jpg http://i6.minus.com/icj3SqCWVXDiV.jpg http://i4.minus.com/iblZFuq9kvs1Nf.jpg http://i1.minus.com/iigEYtFY87N0U.jpg http://i3.minus.com/ibtZHiIuJQTU6w.jpg http://i3.minus.com/iGujEMf2OfWIY.jpg http://i5.minus.com/ib0JPBn8i4VCNx.jpg http://i3.minus.com/ibvTyZm0l6MXiZ.jpg http://i1.minus.com/i5dJh4TYPx9bj.jpg http://i6.minus.com/ibrHdaADSSejvl.jpg http://i1.minus.com/ibcUJxQZCU49BF.jpg http://i1.minus.com/ixcoa5XdfgtCn.jpg http://i5.minus.com/ip52CdwKAz2bR.jpg http://i2.minus.com/i5KMTx22lA7bP.jpg http://i5.minus.com/iuDulPAkX9CGk.jpg http://i5.minus.com/iB3sqxh7pIuH9.jpg http://i5.minus.com/i20DFIO4GVB1h.jpg http://i6.minus.com/ibhcAIwJ8AYgJR.jpg http://i5.minus.com/ioCdy2IZSqmWx.jpg http://i3.minus.com/iby48bA5K2GP0L.jpg http://i1.minus.com/iPfaYCDuz9xwz.jpg http://i2.minus.com/inO9GQr6450Bv.jpg http://i2.minus.com/iiYxzaAKpa6A4.jpg http://i1.minus.com/ibaaCqAGnT5CAu.jpg http://i7.minus.com/ibkc4xjbf0yKEG.jpg http://i5.minus.com/iuDW7LnAKqWrf.jpg http://i5.minus.com/ib1TTRbjEOrVsd.jpg http://i1.minus.com/ivfdsnDvOm0As.jpg http://i2.minus.com/iFR7E3cVR7vFM.jpg http://i2.minus.com/iGRxohksNdi1H.jpg http://i2.minus.com/iGVYyjaP9DWgs.jpg http://i1.minus.com/ibd34cjNZysV9D.jpg http://i6.minus.com/ibhFhdBswSdRbl.jpg http://i4.minus.com/iGqdr0L18P3Kr.jpg http://i5.minus.com/iCEZ5T4sUDjHi.jpg http://i7.minus.com/ivBTUAk6cK26L.jpg http://i2.minus.com/ifO73LUD5tgQp.jpg http://i5.minus.com/i98v2HTQgcjcb.jpg http://i2.minus.com/iFLa4Zo1u8KOu.jpg http://i3.minus.com/iDzGoaFEbOK3X.jpg http://i4.minus.com/ibqzS9GS4z7gCD.jpg http://i4.minus.com/iblogFe9Op8tFk.jpg http://i4.minus.com/i5nnZ6wXLHtgn.jpg http://i4.minus.com/iCpWjYy07z7S1.jpg http://i4.minus.com/iKnkXFPRZB7W.jpg http://i3.minus.com/iby6Dg1zpdY1in.jpg http://i7.minus.com/iABhWmq0upnuz.jpg http://i4.minus.com/ibn52BzbDsG4L5.jpg http://i2.minus.com/icOC6xjysZVpu.jpg http://i4.minus.com/ism8wP4XNCW7M.jpg http://i3.minus.com/iby5hJlpTlO1I3.jpg http://i6.minus.com/ihILTGiearIR6.jpg http://i2.minus.com/idYS2bhiPdDc3.jpg 
http://i6.minus.com/iOj9Omn6HEnYv.jpg http://i4.minus.com/iMmgVQAXqrg4Q.jpg http://i4.minus.com/ibpZhkaaPzDtWU.jpg http://i1.minus.com/ibaOlHIsx0iJMS.jpg http://i6.minus.com/iyrZSX5TswnDk.jpg http://i3.minus.com/ibujJB04GxWwvO.jpg http://i4.minus.com/ibor2bgtCBjMp3.jpg http://i5.minus.com/ij1JjMySkCATm.jpg http://i3.minus.com/ivxToOWGkXko4.jpg http://i4.minus.com/ibllcqgDcJ8ess.jpg http://i6.minus.com/ixji9sK26KiAT.jpg http://i4.minus.com/iboh6KjVVXlIV7.jpg http://i7.minus.com/ibsod54BAoswvp.jpg http://i2.minus.com/ihRIt03KJasW0.jpg http://i4.minus.com/iMnPTCNhtyhtv.jpg
You can then copy-paste this into a console and download the images with wget:
wget http://i1.minus.com/ibb1tmEYMJGWFT.jpg http://i1.minus.com/iNgnESZBOrhfk.jpg http://i3.minus.com/iitJZBqmXOYUq.jpg http://i3.minus.com/iyuCwEjztXsMF.jpg http://i2.minus.com/i0ZQNJf8vb6O2.jpg http://i2.minus.com/i6VRXhptiq1KU.jpg http://i5.minus.com/ib0H6MXLY4JeGL.jpg http://i1.minus.com/ibdwVqCRDKD8R7.jpg http://i3.minus.com/ievFZt5lDS0xy.jpg http://i4.minus.com/iHqfqOEkZDPo0.jpg http://i1.minus.com/iUe1SLFXjtyVe.jpg http://i1.minus.com/i8diPC08rFPIg.jpg http://i4.minus.com/isl4vklSl72pd.jpg http://i3.minus.com/isyJQqqkzB66W.jpg http://i7.minus.com/iriBAKsCaL92j.jpg http://i5.minus.com/im0ZKpI5FAHq0.jpg http://i2.minus.com/ilSVC7K8c0aXK.jpg http://i4.minus.com/ibmQ6r3h8SWYzI.jpg http://i3.minus.com/iVzu4QVH9azWW.jpg http://i2.minus.com/ijYWU36ege0Rr.jpg http://i1.minus.com/iQel0uLQK2Z4z.jpg http://i7.minus.com/ibk24biYBBYAeT.jpg http://i4.minus.com/ieGTVoAZaPbI2.jpg http://i2.minus.com/iuYThOzLn6XsF.jpg http://i2.minus.com/i0SAaVhrZZaL5.jpg http://i3.minus.com/ibztNp9L7zaU3F.jpg http://i5.minus.com/iJCl7B0bRahTr.jpg http://i5.minus.com/iR6Wa5uoxJFeA.jpg http://i2.minus.com/i5QHKNePLD9C0.jpg http://i3.minus.com/ibzAkM1IjGFHvR.jpg http://i2.minus.com/iqYYDjp83txiQ.jpg http://i6.minus.com/i6jFQkoAJYa1R.jpg http://i6.minus.com/ibrPBnHFyGEHQC.jpg http://i3.minus.com/iGzGyXBihCWcQ.jpg http://i4.minus.com/ibqwMh3mFNCirI.jpg http://i3.minus.com/ibvCZejMz05iH4.jpg http://i5.minus.com/iE85MKn3sMyqk.jpg http://i7.minus.com/ipixgQGKKkPw2.jpg http://i5.minus.com/iX5wooBBLIl39.jpg http://i4.minus.com/iNqvRAmIQAphd.jpg http://i6.minus.com/icj3SqCWVXDiV.jpg http://i4.minus.com/iblZFuq9kvs1Nf.jpg http://i1.minus.com/iigEYtFY87N0U.jpg http://i3.minus.com/ibtZHiIuJQTU6w.jpg http://i3.minus.com/iGujEMf2OfWIY.jpg http://i5.minus.com/ib0JPBn8i4VCNx.jpg http://i3.minus.com/ibvTyZm0l6MXiZ.jpg http://i1.minus.com/i5dJh4TYPx9bj.jpg http://i6.minus.com/ibrHdaADSSejvl.jpg http://i1.minus.com/ibcUJxQZCU49BF.jpg http://i1.minus.com/ixcoa5XdfgtCn.jpg http://i5.minus.com/ip52CdwKAz2bR.jpg http://i2.minus.com/i5KMTx22lA7bP.jpg http://i5.minus.com/iuDulPAkX9CGk.jpg http://i5.minus.com/iB3sqxh7pIuH9.jpg http://i5.minus.com/i20DFIO4GVB1h.jpg http://i6.minus.com/ibhcAIwJ8AYgJR.jpg http://i5.minus.com/ioCdy2IZSqmWx.jpg http://i3.minus.com/iby48bA5K2GP0L.jpg http://i1.minus.com/iPfaYCDuz9xwz.jpg http://i2.minus.com/inO9GQr6450Bv.jpg http://i2.minus.com/iiYxzaAKpa6A4.jpg http://i1.minus.com/ibaaCqAGnT5CAu.jpg http://i7.minus.com/ibkc4xjbf0yKEG.jpg http://i5.minus.com/iuDW7LnAKqWrf.jpg http://i5.minus.com/ib1TTRbjEOrVsd.jpg http://i1.minus.com/ivfdsnDvOm0As.jpg http://i2.minus.com/iFR7E3cVR7vFM.jpg http://i2.minus.com/iGRxohksNdi1H.jpg http://i2.minus.com/iGVYyjaP9DWgs.jpg http://i1.minus.com/ibd34cjNZysV9D.jpg http://i6.minus.com/ibhFhdBswSdRbl.jpg http://i4.minus.com/iGqdr0L18P3Kr.jpg http://i5.minus.com/iCEZ5T4sUDjHi.jpg http://i7.minus.com/ivBTUAk6cK26L.jpg http://i2.minus.com/ifO73LUD5tgQp.jpg http://i5.minus.com/i98v2HTQgcjcb.jpg http://i2.minus.com/iFLa4Zo1u8KOu.jpg http://i3.minus.com/iDzGoaFEbOK3X.jpg http://i4.minus.com/ibqzS9GS4z7gCD.jpg http://i4.minus.com/iblogFe9Op8tFk.jpg http://i4.minus.com/i5nnZ6wXLHtgn.jpg http://i4.minus.com/iCpWjYy07z7S1.jpg http://i4.minus.com/iKnkXFPRZB7W.jpg http://i3.minus.com/iby6Dg1zpdY1in.jpg http://i7.minus.com/iABhWmq0upnuz.jpg http://i4.minus.com/ibn52BzbDsG4L5.jpg http://i2.minus.com/icOC6xjysZVpu.jpg http://i4.minus.com/ism8wP4XNCW7M.jpg http://i3.minus.com/iby5hJlpTlO1I3.jpg http://i6.minus.com/ihILTGiearIR6.jpg http://i2.minus.com/idYS2bhiPdDc3.jpg 
http://i6.minus.com/iOj9Omn6HEnYv.jpg http://i4.minus.com/iMmgVQAXqrg4Q.jpg http://i4.minus.com/ibpZhkaaPzDtWU.jpg http://i1.minus.com/ibaOlHIsx0iJMS.jpg http://i6.minus.com/iyrZSX5TswnDk.jpg http://i3.minus.com/ibujJB04GxWwvO.jpg http://i4.minus.com/ibor2bgtCBjMp3.jpg http://i5.minus.com/ij1JjMySkCATm.jpg http://i3.minus.com/ivxToOWGkXko4.jpg http://i4.minus.com/ibllcqgDcJ8ess.jpg http://i6.minus.com/ixji9sK26KiAT.jpg http://i4.minus.com/iboh6KjVVXlIV7.jpg http://i7.minus.com/ibsod54BAoswvp.jpg http://i2.minus.com/ihRIt03KJasW0.jpg http://i4.minus.com/iMnPTCNhtyhtv.jpg
Or, as pointed out by Enissay, the URLs can be saved to a file (urls.txt) and downloaded like this:
wget -i urls.txt
If you are feeling lucky, you could always try to download all the files from your browser using only JavaScript.
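If you prefer to skip the browser step entirely, here is a rough shell sketch of the same idea; it assumes the gallery HTML can be fetched without running JavaScript and that every thumbnail follows the j<ID>_e.jpg pattern shown above:
# Sketch: scrape thumbnail URLs from the gallery page, rewrite them to the
# full-size i<ID>.jpg form, then hand the list to wget.
curl -s http://minus.com/mPxswm5Mn \
  | grep -oE 'i[0-9]\.minus\.com/j[A-Za-z0-9]+_e\.jpg' \
  | sed -E 's|/j([A-Za-z0-9]+)_e\.jpg|/i\1.jpg|; s|^|http://|' \
  | sort -u > urls.txt
wget -i urls.txt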
I have a problem: I'm trying to upload a file in my Cypress test. The test runs successfully, but the file never gets uploaded.
I am using the cypress-file-upload library.
My code:
const filePath = 'teste.pdf'
cy.get(':nth-child(1) > .backgroundColor > :nth-child(2) > :nth-child(1) > .col-auto > .input-group.mb-0 > .custom-file > .row > .form-group > .input-group > .input-group-text').attachFile(filePath)
Result: (screenshot omitted)
My HTML/CSS: (screenshot omitted)
Button: (screenshot omitted)
PS: Sorry for my bad English.
I've tried many CSS selectors, and even XPath, but without success.
I think your target element in this case should be the input element.
Make sure teste.pdf is located in the fixtures folder and try something like:
const filePath = 'teste.pdf'
cy.get('.custom-file-input.form-control-sm.file-input').attachFile(filePath)
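If that class-based selector still misses, a less brittle variant (just a sketch, assuming the form contains a single file input) targets the element by its type:
const filePath = 'teste.pdf'
// Hedged sketch: select the file input by type rather than by class names.
cy.get('input[type="file"]').attachFile(filePath)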
Sphinx defines a role :download: that instructs Sphinx to copy the referenced file to _downloads.
Does Pandoc have a similar feature?
Pandoc does not have that feature built-in, but it can be added with a few lines of Lua:
local prefix = 'media'
local path = pandoc.path
function Code (code)
  if code.attributes.role == 'download' then
    local description, filename = code.text:match '(.*) %<(.*)%>$'
    local mimetype, content = pandoc.mediabag.fetch(filename)
    local mediabag_filename = path.join{
      pandoc.utils.sha1(content),
      path.filename(filename)
    }
    if content and mimetype then
      pandoc.mediabag.insert(mediabag_filename, mimetype, content)
    end
    return pandoc.Link(description, path.join{prefix, mediabag_filename})
  end
end
Use the script by saving it to a file named download-role.lua and then calling pandoc with
pandoc --lua-filter=download-role.lua --extract-media=media ...
This will also work when using Markdown:
`this example script <../example.py>`{role=download}
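Assuming the filter finds ../example.py, the span above is replaced by a link into the extracted media directory; in HTML output it would look roughly like this (the hash component is whatever SHA-1 pandoc computes for the file's contents, shown here only as a placeholder):
<a href="media/<sha1-of-contents>/example.py">this example script</a>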
I am using https://archiverjs.com/docs/ to zip a directory for my PhoneGap application, but I have not yet managed to achieve what I want.
I have a folder structured like this:
- www
  - css
  - images
  - scripts
  - config.xml
  - index.html
Now what I would like to have is a zip file containing the CONTENTS of the www folder but NOT the www folder itself.
BAD
- www.zip
  - www
    - css
    - images
    - scripts
    - config.xml
    - index.html
GOOD
- www.zip
  - css
  - images
  - scripts
  - config.xml
  - index.html
The code I have in place is the following:
var fs = require('fs');
var archiver = require('archiver');

var output = fs.createWriteStream('./www/www.zip');
var archive = archiver('zip', {
    store: true // Sets the compression method to STORE.
});

output.on('close', function() {
    console.log(archive.pointer() + ' total bytes');
    console.log('archiver has been finalized and the output file descriptor has closed.');
});

archive.on('error', function(err) {
    throw err;
});

archive.pipe(output);
archive.directory('www/');
archive.finalize();
I have tried to add something like:
archive.directory('www/*');
but it throws an error.
Any other way I can accomplish that?
Thanks
I found the solution; I just needed to do the following:
archive.directory('www', '/');
Problem solved :)
Thanks
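For reference, more recent releases of archiver also accept false as the destination path, which likewise drops the folder contents at the archive root; this is only a sketch, so check it against the version you actually have installed:
// Hedged sketch: add the contents of www/ at the root of the zip, with no www/ prefix.
archive.directory('www/', false);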
Problem Statement:
I am trying to upload a file through an HTML form using an HTTP POST request and then write it to a file called configuration.xml on my local server. I can only use the stock capabilities of the server, so, as much as I'd love to, I can't use cURL, PHP, Perl, or anything else I'd have to install on the server.
What I have tried is having the HTML form open a CGI file as the form action; all this CGI file does is run the Bash script with the proper HTML formatting. I would run the Bash script directly from the HTML form, but my research led me to believe that this isn't possible without editing .htaccess or other hacky alternatives, which are not roads I want to go down. (If this can be done in a reasonable fashion, please enlighten me!)
Regardless, I am able to successfully run the Bash script. I know this because I put a "touch configuration.xml" command in the script and it creates the file every time. My script can also tell that the request is an HTTP POST, as shown by the echoed text in the browser, but then I can't seem to read any data from the file. I tried echoing the data as well as redirecting the read data to a file, but nothing appeared in the browser and nothing was written to the file I specified. This may very well be me not knowing Bash scripting well enough, or something silly like that, but I really don't know how to proceed from here.
Code:
UploadToServer.html:
<form action="run_script.cgi" method="POST" enctype="multipart/form-data">
    <input type="file" name="file" />
    <input type="submit" name="submit" value="Submit">
</form>
run_script.c:
Note: I compile this to a CGI file with the following command: gcc run_script.c -o run_script.cgi
#include <stdlib.h>
#include <stdio.h>

int main() {
    system("./test.sh &");
    printf("Content-Type: text/html\r\n\r\n");
    printf(""); // print blank line for proper HTML header formatting
    printf("<html>\n");
    printf("</HTML>\n");
}
test.sh:
The non-commented code in the second if statement is from here. The commented code is from here.
#!/bin/bash

touch configuration.xml

if [[ $REQUEST_METHOD = 'POST' ]]; then
    echo "this is a post!"
    if [ "$CONTENT_LENGTH" -gt 0 ]; then
        echo "entered second if statement!"
        # read -n $CONTENT_LENGTH POST_DATA <&0
        # echo "$CONTENT_LENGTH"
        while read line
        do eval "echo ${line}"
        done
    fi
fi
I also tried the approach in the third code block of this post, but didn't get any output. I also looked through this post, but it doesn't seem to grab all the data from the file like I need. I also tried the approach of just using a CGI file, as suggested in this post (http://blog.purplepixie.org/2013/08/cc-cgi-file-upload/), but, once again, no output. I've been looking through the Apache error log as I try new things and no errors come up.
Anybody have any ideas on what I could possibly be doing wrong? Is there a different approach worth looking into? Any suggestions are greatly appreciated!
I figured out how to do it, with some help from my friends. I ended up doing it all in a CGI script and foregoing the Bash component. While this isn't what I asked for in my original question, it gets the job done for me, which is really what the question was asking.
The following is the C code I'm now using to successfully write the file on the server:
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

void print_empty_html_page();

int main() {
    char * req_method_str = getenv("REQUEST_METHOD");
    if (req_method_str != NULL) {
        if (strcmp(req_method_str, "POST") == 0) {
            // process POST arguments
            char * len_str = getenv("CONTENT_LENGTH");
            if (len_str != NULL) {
                int len = atoi(len_str);
                if (len > 0) {
                    FILE * fp;
                    fp = fopen("file.xml", "w");
                    char * postdata = malloc((len + 1) * sizeof(char));
                    fread(postdata, sizeof(char), len, stdin);
                    postdata[len] = '\0';
                    fprintf(fp, "%s\n", postdata);
                    free(postdata);
                    fclose(fp);
                }
                system("sed -e '/Content/d' -e '/[-][-][*][*][*][*][*]/d' -e '/^[s]*$/d' -e '/WebKitFormBoundary/d' -e '/Submit/d' < file.xml > file_trimmed.xml");
                system("rm file.xml");
            }
        }
    }
    print_empty_html_page();
    return 0;
}

void print_empty_html_page() {
    // Send the content type, letting the browser know this is HTML
    printf("Content-type: text/html\r\n\r\n");
    // Header information that prevents browser from caching
    printf("<META HTTP-EQUIV=\"CACHE-CONTROL\" CONTENT=\"NO-CACHE, NO-STORE\">\r\n\r\n");
    // Top of the page
    printf("<html>\n");
    printf("<BODY>\n");
    // Finish up the page
    printf("</BODY></html>\n");
}
Note: This method writes the entire HTTP POST to the file 'file.xml'. The system call to 'sed' is to remove the tags from the HTTP POST that don't correspond to the actual data in the file that was uploaded. If you need to check for additional unwanted lines, just add another -e '/<line_with_expression_to_delete>/d' in that sed call, where line_with_expression_to_delete is the expression you want to match and then delete all lines containing that expression. I couldn't figure out how to delete all the blank lines in the newly uploaded file, even though '/^[s]*$/d' should do that, according to my research. Gonna have to look into that more...
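One aside on the blank-line issue: inside a bracket expression, [s] matches the literal letter s rather than whitespace, so a POSIX character class is needed; something like the following (an untested sketch) should drop the empty lines:
sed -e '/^[[:space:]]*$/d' file.xml > file_trimmed.xml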
Also note: This method only works for uploading text files. It does not work for other file types, such as JPEGs or OGGs.
Hopefully this helps some other people with the same problem. Let me know if you have any questions.
I have an image that users can annotate in the browser. I can access the image using
canvas.toDataURL()
...I'd like to add a 'save' option for the user to save the image on the server.
This question has been answered for PHP...
file_put_contents('test.png', base64_decode(substr($data, strpos($data, ",")+1)));
...what I need is a Seaside callback with the PNG file content.
Is there a way to do this in Seaside?
Johan pointed out that the MIME type declaration has to be removed from the value string. This works in VW (with a string hack to remove 'data:image/png;base64,'):
html jQuery ajax
    callback: [ :value |
        | writestream string |
        writestream := ('c:\data\sketchpad_image.png' asFilename withEncoding: #binary) writeStream.
        string := value copyFrom: 23 to: value size.
        [ writestream nextPutAll: (Seaside.GRPlatform current base64Decode: string) asByteArray ]
            ensure: [ writestream close ] ]
    value: (Javascript.JSStream on: 'sketchpadCanvas.toDataURL()')
Depending on how you want your website to behave, I guess there are multiple ways of doing it. Here is a raw sample of one possibility I can think of, using a jQuery ajax callback:
html jQuery ajax
    callback: [ :value |
        (FileStream newFileNamed: 'test.png')
            nextPutAll: (value copyFrom: (value indexOf: $,) + 1 to: value size) base64Decoded ]
    value: (JSStream on: 'canvas.toDataURL()')
I did not test this myself. Maybe the file stream needs to be sent the #binary message to make a correct PNG file. Let me know if there are troubles.
Hope it helps.
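Tying that caveat together, here is a minimal sketch with the stream switched to binary before writing; it is untested and assumes a Pharo-style FileStream whose binary message answers the receiver and a base64Decoded that answers the raw bytes:
html jQuery ajax
    callback: [ :value |
        | stream |
        "Hedged sketch: write the decoded PNG bytes through a binary stream."
        stream := (FileStream newFileNamed: 'test.png') binary.
        [ stream nextPutAll:
            (value copyFrom: (value indexOf: $,) + 1 to: value size) base64Decoded ]
            ensure: [ stream close ] ]
    value: (JSStream on: 'canvas.toDataURL()')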
Does the file-upload section in the Seaside book solve your problem? Taking the code from the book:
UploadForm>>renderContentOn: html
    html form multipart; with: [
        html fileUpload
            callback: [ :value | self receiveFile: value ].
        html submitButton: 'Send File' ]

UploadForm>>receiveFile: aFile
    | stream |
    stream := (FileDirectory default directoryNamed: 'uploads')
        assureExistence;
        forceNewFileNamed: aFile fileName.
    [ stream binary; nextPutAll: aFile rawContents ]
        ensure: [ stream close ]
I've also published a blog post about how to manage file uploads in a production environment using Seaside and Nginx that may be of interest.