Downloading file using wget directly into an EC2 instance

I am trying to download a zip file using wget directly from my EC2 instance. The command actually runs and a file is downloaded; however, it is a fraction of the size it is supposed to be (it should be 7 GB, but the downloaded file is 14K), and unzip commands do not work on it.
Any ideas? I'd prefer not to download the file to my local computer and then use scp (although if I have to, I guess that's what I'll do).
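A quick way to check what was actually downloaded (a sketch; the filename is assumed): a 14K result from a 7 GB download usually means the server returned an HTML page rather than the archive itself.
file download.zip           # a real archive reports "Zip archive data"; HTML means an error or login page
head -c 300 download.zip    # an HTML page shows up as readable markup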

Found the issue: the website requires authentication before downloading the file, so I believe I just need to add login credentials as parameters to the wget command:
wget --user=user --password=password
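Spelled out, the command would look something like this (the URL is a placeholder). wget can also prompt for the password with --ask-password, which keeps it out of the shell history:
# with the password inline (placeholder URL)
wget --user=USERNAME --password=PASSWORD https://example.com/file.zip
# or prompt for the password interactively
wget --user=USERNAME --ask-password https://example.com/file.zip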

Related

Downloading a file from the Windows CMD line using wget/curl

I have a client [a Windows 10 VM] and a server [say, a Linux-based VM].
I have Apache running on the Linux server.
I have a file on the Linux server that I want to download to my Windows client.
I want to do it in two ways from the Windows CMD:
-Using curl
-Using wget
I tried the following commands in my Windows CMD, but they don't work. Is something wrong with my command line?
curl http://x.x.x.x/home/abc/ -O test.zip
wget http://x.x.x.x/home/abc/ -O test.zip
On the curl side, most probably curl.exe is missing on the Windows client machine or cannot be found. One of the options is to download curl:
Download the curl zip for Windows from this page: https://curl.se/download.html
Unzip it and you will find ..\bin\curl.exe
Also add ...\bin\ to your Path variable in System Settings.
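After updating Path, you can verify from a fresh CMD window that curl is found:
where curl       # prints the location of curl.exe if it is on the Path
curl --version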
To use wget on Windows, you can follow this link; it worked for me: https://medium.com/nerd-for-tech/using-wget-command-in-windows-10-environment-d766b8f526e9
Short tutorial:
Download the GnuWin setup
Install it
Open the wget directory
Add it to the Environment Variables as well.
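With the tools installed, the original commands also need fixing (a sketch; the URL path is an assumption, since /home/abc/ is a filesystem path on the server and is only reachable over HTTP if Apache serves or aliases it). curl's -O takes no argument and reuses the remote filename, so use -o to name the output, and point both tools at the file itself:
curl -o test.zip http://x.x.x.x/abc/test.zip
wget -O test.zip http://x.x.x.x/abc/test.zip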

Install wkhtmltopdf on a cPanel server without root access

I'm trying to use wkhtmltopdf on my cPanel server, but I don't have root access, so I can't put it in /usr/local/bin or /usr/bin/.
So I just put the executable at /home/perso/wkhtmltopdf and made it executable with chmod +x wkhtmltopdf.
But if I try to execute it, for example like this: ./wkhtmltopdf http://www.google.com test.pdf, I get a
bash: ./wkhtmltopdf: cannot execute binary file
Any idea how I can set this up so that I'm able to execute it?
To run wkhtmltopdf, it needs to be installed on the server.
Try asking your hosting provider.
Usually, for security, providers disable the execution of binaries by ordinary users.
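Before asking the provider, a few quick checks from the SSH session can narrow down why the binary won't run (a sketch using the path from the question):
file /home/perso/wkhtmltopdf    # does the binary's architecture match the server?
uname -m                        # the server's architecture, for comparison
mount | grep /home              # a noexec flag here means binaries under /home cannot be executed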

Running wp-cli from a bash script results in a path error

I have successfully installed wp-cli on my remote server and created the "wp" alias. I use PuTTY to connect via SSH, and everything works just fine. First, I used a .user_bashrc file to set the alias with:
alias wp='/www/htdocs/w019d58a/wp-cli.phar'
The path is set in .user_bashrc using:
export PATH=/www/htdocs/w019d58a/:$PATH
However, when I tried to run wp-cli from a bash script, I got a "wp command not found" error. I contacted support, and they recommended a symlink. So, I created a symlink using:
ln -s /www/htdocs/w019d58a/wp-cli.phar wp
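The symlink advice makes sense here: aliases defined in .user_bashrc are not expanded in non-interactive shells, so a bash script never sees the wp alias. As a fallback (a sketch using the phar path from the question), a script can also call the phar directly by absolute path:
#!/bin/bash
# call wp-cli directly, bypassing aliases and PATH entirely
/www/htdocs/w019d58a/wp-cli.phar plugin install akismet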
Everything works but the installation process. I can, for example, install a plugin using:
#!/bin/bash
wp plugin install akismet
Unfortunately, I can't download WordPress via the bash script using:
wp core download --locale=de_DE_formal
I always get the error:
Error: Too many positional arguments:
Error: This does not seem to be a WordPress installation.
Pass --path=path/to/wordpress or run wp core download.
I tried to add the path using:
wp core download --locale=de_DE_formal --path="/www/htdocs/w019d58a"
No luck. I still get the same error.
I can download and install WordPress directly from the console and do further operations using a script. But I can't download and install it from the script due to the path error.
Any ideas how to fix that?
I've just found out that the download works fine:
#!/bin/bash
wp core download --locale=de_DE_formal
It's the config create part that causes trouble:
wp config create --dbname=d123456 --dbuser=d123456 --dbpass=123456 --dbhost=localhost --dbprefix=wplcli_
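One thing worth trying (an assumption, since the real values are redacted here): wp-cli treats stray tokens as positional arguments, so if any of the real values contain spaces or shell-special characters, the unquoted form splits apart and triggers exactly this "Too many positional arguments" error. Quoting every value rules that out:
wp config create --dbname="d123456" --dbuser="d123456" --dbpass="123456" --dbhost="localhost" --dbprefix="wplcli_"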

Does using `wget --mirror --continue` as a cron job strictly make sure that my files mirror those on the server?

I am planning on using wget --mirror --continue in the Windows command prompt (I downloaded a Windows build of wget) to keep downloading all files from a server.
It works fine; it downloads, and I am planning to put this .bat in the Windows Task Scheduler. But since I am not familiar with wget, I have a doubt about what --mirror means: does it also make sure that the files in my local directory strictly mirror those on the server?
Because, what if:
I downloaded the files from the server using wget --mirror
the server deletes all its files
I run wget --mirror again
Will wget also delete all the files in my local?
Sorry I am not sure, and I cannot test since I do not have my own server.
Just a quick answer would be very helpful. Thanks!
I just tested with a VPS billed by the minute, and apparently wget does not delete the local files even when they are gone from the server.
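That matches wget's behavior: --mirror only adds and updates files, and wget has no option to delete local files that have disappeared from the server. If a strict mirror is required, rsync is the usual tool (a sketch, assuming SSH access to the server and placeholder paths; on Windows it can run via WSL or cwRsync):
rsync -av --delete user@server:/remote/dir/ /local/dir/    # --delete removes local files gone from the source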

Installing Dropbox (and using Kirby CMS) on OpenShift

I'm trying to find a way to integrate Kirby CMS with Dropbox running on OpenShift using these tutorials:
http://getkirby.com/blog/kirby-meets-dropbox
http://getkirby.com/forum/how-to/topic:561
I already get stuck installing Dropbox, since I assume I don't really have permission while SSHing:
http://www.dropbox.com/install?os=lnx
So my question: Is there even any way of achieving all that greatness? If no, not even if we get reaaaally creative? If NO, why not? If yes, how?
Thanks a bunch!
I have no experience with Kirby, but here's how to get Dropbox working on OpenShift.
The following is a combination of doing a Dropbox install on a server and doing it in a non-standard location. Everything gets done in $OPENSHIFT_DATA_DIR because that's where you have write privileges.
First, make sure you're in $OPENSHIFT_DATA_DIR
cd $OPENSHIFT_DATA_DIR
Next, download the appropriate version of Dropbox:
wget -O - "https://www.dropbox.com/download?plat=lnx.x86" | tar xzf -
This should give you the .dropbox-dist folder in $OPENSHIFT_DATA_DIR.
Next, tell Dropbox to start the installation process, but tell it that your home directory is actually the $OPENSHIFT_DATA_DIR:
HOME=$OPENSHIFT_DATA_DIR ./.dropbox-dist/dropboxd start -i
Follow the instructions to link your Dropbox account to the OpenShift server. After it's linked, it should start syncing everything in your Dropbox account to $OPENSHIFT_DATA_DIR/Dropbox. This might be a problem if you have too much data in your Dropbox account; if so, you should exclude folders.
You can do that with the CLI script that Dropbox provides. Still in $OPENSHIFT_DATA_DIR, download it:
wget -O dropbox.py "https://www.dropbox.com/download?dl=packages/dropbox.py"
Make sure it's executable:
chmod +x dropbox.py
You need to run it the same way you would Dropbox:
HOME=$OPENSHIFT_DATA_DIR $OPENSHIFT_DATA_DIR/dropbox.py -h
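For example, to stop syncing one folder (the folder name is hypothetical) and then confirm the exclusion list:
HOME=$OPENSHIFT_DATA_DIR $OPENSHIFT_DATA_DIR/dropbox.py exclude add $OPENSHIFT_DATA_DIR/Dropbox/BigFolder
HOME=$OPENSHIFT_DATA_DIR $OPENSHIFT_DATA_DIR/dropbox.py exclude list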
Hope that helps.
You should be able to download/compile/install things into your OPENSHIFT_DATA_DIR (app-root/data) on your gear by using something like ./configure --prefix=~/app-root/data/dropbox. I tried that, but I ran into a missing nautilus-whatever package, which I assume you could download and install in the same fashion; I did not try past that point. As long as whatever you are running can be installed into app-root/data and does not require root permissions to run, you should be able to do it. If you get it going, you could also create a downloadable cartridge to install it more easily.