Windows cmd connection to a remote MySQL database

Is there a way to connect to a MySQL database on a remote server and run SQL queries from the Windows command line?

Yes, you can connect to a different host by running mysql -h 123.45.67.89.
Please note that there are a few security implications:
You will have to grant yourself access. You will need to run something like GRANT ALL ON db_name.table TO 'user'@'your_ip' IDENTIFIED BY 'password'. db_name and table can be *, and your_ip can be the wildcard '%', but beware of opening your server to hackers.
You will have to open your server's firewall if you are not on the same LAN. Again, YMMV, and you should be careful not to open the door to exploits.
You may want to use SSL and secure-auth in order to protect your traffic and credentials.
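For example (host, user, and database names below are placeholders):

rem run once on the server side as an admin user (MySQL 5.x syntax; 8.0 splits this into CREATE USER plus GRANT):
mysql -u admin_user -p -e "GRANT SELECT ON db_name.* TO 'remote_user'@'203.0.113.5' IDENTIFIED BY 'password';"
rem then, from the Windows prompt on the client:
mysql -h 123.45.67.89 -u remote_user -p -D db_name -e "SELECT VERSION();"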
Hope that helps.

MySQL has a command-line client where you can run queries. If you don't want to allow remote connections to the database on the server, you can still script things into a batch file. There are command-line telnet/ssh clients that either accept an external file as a list of commands to run remotely, or let you pass the commands via input stream redirection (the less-than symbol, <).
When opening a connection to a server, most clients are programmed so that the only way to supply the login password is to type it in from the keyboard (yeah, they don't read the default input stream). Things like that make it hard to script. However, it may be possible to set up certificate-based (public key) login on SSH - you'd have to research that.
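A sketch of both routes (host, user, and file names are placeholders; the plink variant assumes key-based login is already set up so no password prompt appears, and that the server-side ~/.my.cnf holds the MySQL credentials):

rem run a file of queries against the remote MySQL server directly:
mysql -h db.example.com -u app_user -p db_name < queries.sql
rem or pipe the same file to the mysql client on the server itself over SSH:
plink -ssh -i mykey.ppk user@db.example.com "mysql db_name" < queries.sql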
If the server that's hosting the MySQL database is also a web server, you could also think about putting a script (PHP, Perl, Python, Ruby - whatever you like) in a password-protected area that would let you execute queries simply by making HTTP(S) requests to that script. Although Windows doesn't ship with a command-line HTTP(S) client, you can always get something like wget.exe and perform the queries with it. Note that if you choose this approach, I strongly advise putting that script under HTTPS - if discovered by a malicious user, it could be lethal to your data.
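For instance (the script name run_query.php and its q parameter are hypothetical; the query itself must be URL-encoded):

wget.exe --http-user=dbweb --http-password=secret -O - "https://example.com/run_query.php?q=SELECT%20VERSION()"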

You could use telnet, or SSH if you want to be more secure.

If MySQL is running on Linux or BSD, you need a Telnet or SSH connection through something like PuTTY.
This will open a command line on the remote server; the command to run there is mysql. There will be issues around authentication of remote users (as you would expect).
If the remote server is running Windows, you have a whole different set of issues.
I'm not sure you can connect to a remote Windows server and control it this way.
Or rather, I'm not sure HOW you could connect to a remote Windows server and use it this way. But no doubt it's possible.

Related

Does anyone know how to issue a SUBMIT command to OpenVMS over an FTP session?

I am currently using Windows telnet to submit files to the OpenVMS queue via a series of SendKeys/application waits through VBA. It works, up until the end user shifts focus away from the telnet window. I would prefer to issue the SUBMITs using an FTP session, where I can script the commands into a batch file and shoot it across FTP. I was able to do something similar with IBM mainframes - through the QUOTE SITE FTP command, setting filetype=jes, followed by a JCL file that would be dropped into the work queue for immediate execution. I can't seem to find anything on the internet relating FTP, OpenVMS, and SUBMIT. I have tried using QUOTE submit/que=... but it does not recognize the command. (SUBMIT works fine under telnet.)
Maybe you can use the Remote Shell Protocol (RSH) to execute a command on a remote node.
You would need a rsh client on windows:
http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/rsh.mspx?mfr=true
And also enable RSH service on VMS via TCPIP$CONFIG
(See OpenVMS documentation http://h71000.www7.hp.com/doc/index.html)
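From the Windows side that could look like this (host, user, queue, and file names are placeholders; SUBMIT/QUEUE is standard DCL, but check your queue name):

rsh vmshost -l vmsuser "SUBMIT/QUEUE=SYS$BATCH DISK:[DIR]MYJOB.COM"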
This works best with a VMS username dedicated to processing inbound FTP files. If you make the LOGIN.COM for that username detect that it's a network connection and submit a batch job that looks for the expected file, gets exclusive access to it with retries (so the FTP transfer is finished), and then processes the file - that has worked for me.
The other option is to put a security ACL on the directory and set up an audit listener - it will receive file-create events via a mailbox message. Then it can do something similar: get exclusive access to the file being created and then process it.

How to prevent running a program on Windows or Unix via the command line

After long research here on Stack Overflow and on the net, I didn't find anything about this. As the title says, how can I do that?
For example: I own a hosted website that lets me manage the database via phpMyAdmin. When I try to connect to my database from the prompt, the connection never succeeds. It could be because of OS settings (right?).
How can I do that (on both OSs)?
Thanks in advance.
If I understand your question correctly:
You have a website with a MySql database hosted on your providers servers.
When you try to use your local PC installation of MySql from a command prompt, it will not let you connect.
I use dreamhost.com and have a similar setup. If I want to use database tools from my local PC to connect to the database, I have to enter my IP address in the db configuration page under "allowed hosts".
Restricting remote database connections to specific IP addresses protects your database from random hacking attempts.
As for your question about restricting command-line execution of a program: that is usually just caused by missing configuration in the environment variables - leaving the path to the executable out of the PATH variable is a common one. You would still be able to run the program by entering the full path to it, as sketched below.
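For example, on Windows (the install path below is just a typical default; adjust it to your machine):

rem call the client by its full path when its directory is not on PATH:
"C:\Program Files\MySQL\MySQL Server 5.5\bin\mysql.exe" -h db.example.com -u me -p
rem or add the directory to PATH for the current session:
set PATH=%PATH%;C:\Program Files\MySQL\MySQL Server 5.5\bin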
It really depends on the error message you get when trying to run the program from the command line.

Receive File via SFTP/SSH and automatically forward to FTP on another server

I'm currently in a situation where I receive flat files via FTP from my clients. A couple of clients have insisted on using SFTP with SSH private keys rather than regular FTP.
What I want to do is setup a web server (preferably in linux/unix but I guess I can do it on a windows server and purchase SFTP server software) that will do the following:
Allow me to set up an SFTP directory for each client with a unique user/pass. Each directory also has to have the public/private SSH key "stuff". I'm a little new to this, but I've googled it.
Once the file is completely uploaded by the client, I want to kick off an event that FTPs that file via regular FTP to my Windows cloud.
These files can be up to 10 MB, so the event that FTPs to the other server can't fire until the file is completely uploaded.
Has anyone set something like this up? Any guidance would be appreciated.
Thanks!
In Linux, you can use incron to monitor the directory the files will be SFTP'd to and have it trigger your FTP job. It's kind of like cron, except that instead of triggering jobs based on time, it does so based on filesystem modifications. In order to trigger only once the entire file has been written, I think you can use IN_CLOSE_WRITE in the inotify mask. Failing that, I suggest configuring each of the events individually to echo a message to a log file and seeing if you can identify one that reliably happens only at the end of the SFTP transfer.
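An incrontab entry for that could look like this (paths and script name are placeholders; in incron's syntax $@ is the watched directory and $# the file name):

/home/client1/upload IN_CLOSE_WRITE /usr/local/bin/forward_ftp.sh $@/$#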
If you're using RedHat, it's not in the standard distribution, but it is in EPEL.
On Windows you could use Titan FTP Server Enterprise Edition, which supports SFTP as well as allows you to define various types of events. When the event is triggered, you could kick off anything you need on a per folder/per account basis.
PS. AFAIK, when it comes to SFTP it is either password authentication or public key authentication (SSH key), but not both.
On your UNIX server, you can configure SSH to use a custom SFTP server that, instead of handling the SFTP protocol itself, opens a new SSH connection to the Windows SFTP server using password authentication and forwards the SFTP traffic there.
Writing the proxy is easy with the right tools, for instance in Perl using the Net::OpenSSH module:
#!/usr/bin/perl
# sftp-proxy-server: forward the incoming SFTP session to the Windows SFTP server
use strict; use warnings;
use Net::OpenSSH;
# fill in your own host and credentials here
my ($windows_server, $user, $passwd) = ('windows.example.com', 'sftpuser', 'secret');
my $ssh = Net::OpenSSH->new($windows_server, user => $user, password => $passwd);
$ssh->error and die $ssh->error;
# run the remote "sftp" subsystem (-s) tied to our stdin/stdout
$ssh->system({ssh_opts => '-s'}, 'sftp');
$ssh->error and die $ssh->error;
You can instruct the SSH server to use that alternative SFTP server by changing the configuration in /etc/ssh/sshd_config and restarting sshd. For instance:
Subsystem sftp /usr/local/bin/sftp-proxy-server
Did you try Apache FTP Server?
I think you can do what you need with the ftplet API.
See:
http://mina.apache.org/ftpserver-project/index.html

Speeding up ssh in batch files

This is my situation:
I have a linux server/media center with a windows client.
My goal is to remote control rhythmbox amongst other things.
I've done this using plink (a Windows-based CLI ssh tool).
The problem is that starting up an ssh session, logging in, and sending a command is understandably slow as hell. When I had a Windows server I used a tool called psexec, which was almost instantaneous.
Is there any way to speed this process up? Either by somehow sending the commands along with the login request, which should show some improvement, or by maintaining a persistent ssh connection which I can reuse (plink disconnects at the end of the command).
More info: on my Windows machine I'm using a .bat file like:
plink -ssh -l username -pw pass myipaddress "/home/username/bin/skip"
On my linux machine the skip bash file is something like:
#!/bin/bash
# needed to get around an X11/DBus error caused by controlling rhythmbox over ssh:
# if it's an ssh connection, copy the DBUS session address so rhythmbox-client can find the session
if [ -n "$SSH_CONNECTION" ]; then
    # one common way to recover the address from the running session (the original line here was garbled)
    export DBUS_SESSION_BUS_ADDRESS=$(grep -z DBUS_SESSION_BUS_ADDRESS "/proc/$(pgrep -x -u "$USER" rhythmbox)/environ" | cut -d= -f2- | tr -d '\0')
fi
rhythmbox-client --next   # the cli wrapper for rhythmbox
Further Research:
The only way to go seems to be keeping an ssh connection open/maintained as a service. This seems doable, as there is demand for it from people setting up ssh tunnels (to bypass firewalls). From there I'd need a way to send commands over this existing connection, or to reuse that connection.
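For reference, OpenSSH's connection multiplexing does exactly this, if an OpenSSH client can be used instead of plink (a sketch; the ControlPath location is just a convention):

ssh -o ControlMaster=auto -o ControlPath=~/.ssh/cm-%r@%h:%p -o ControlPersist=10m username@myipaddress /home/username/bin/skip

The first invocation logs in and keeps the master connection alive in the background for ten minutes; later invocations reuse it and return almost instantly.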
The other option is of course to NOT use ssh. Hell, I already have a connection through Samba file shares and there is no lag there. I bet I could put a service on the Linux side that checks for a modified file, then have an app client-side that modifies said file. Amazingly hacky, but so far it seems like the best option - and by best I mean the only one that cuts control lag. There has got to be a better way than this; I can't be the only nerd using Linux as a media center that wants remote controls. This kind of moves the topic from Stack Overflow to Super User, but that's OK.
You could use an SSH key to get rid of the login part. Alternatively, build yourself a small HTTP server which uses an "exotic" port for controlling your media player (Amarok, btw, has one built in).
Switching to something like mpd will bypass the ssh issue, although I give no guarantee that changing tracks will be any faster.
If anyone is curious, I ended up implementing an HTTP-based server with PHP to execute commands server-side. Client-side I used curl.exe, which lets me have nice clickable buttons without the overhead of a web browser.
It's also nice since it allowed me to implement an in-browser UI, which is great to use from any machine with internet access, even ones that don't have ssh installed. And it works wonderfully from my phone as a remote control (which I can use from a country away if I so choose...).
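Each button then boils down to a one-liner like this (the script name rb.php and its cmd parameter are hypothetical; use HTTPS and authentication, per the warnings above):

curl.exe -u remoteuser:secret "https://mediacenter.example/rb.php?cmd=next"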

How to capture your username on Box A after you have SSHed onto Box B?

Maybe not the best worded question, but hopefully it's a straightforward problem.
The scenario is SSHing from a personal account on box A to a generic account on box B. The script running on box B needs to capture the personal account name for logging purposes. Is there any way of capturing this, either via SSH itself or some information captured by the shell? We are using ssh2 (Reflections), and KornShell (ksh) on Solaris.
If you have full control of the client machine, you can deploy identd to get the username.
Full procedure to get the name from a script (see the sketch below):
Walk up the process tree and find sshd.
Use netstat -p to find the remote IP and port of the connection.
Connect back to the client on port 113 and ask.
You may have to disable privilege separation for this to work as-is; however it should be trivial to modify to work w/o it.
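The last step in shell form (the IP and ports come from the netstat step; per the ident protocol, the query is "port-on-client-machine, port-on-this-machine"):

printf '%s, %s\r\n' "$CLIENT_PORT" "$SSHD_PORT" | nc "$CLIENT_IP" 113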
You can't log the remote username reliably
You can log the IP of the connection (see the SSH_CONNECTION variable)
You could have a standard where they use an alias for ssh that logs the remote username as part of the login process, or where they store their username in a .ssh/environment file (but allowing environments to be set may require ssh/sshd config changes).
alias sshblah='ssh -t blah "REMOTEUSER=$USER exec bash"'
(The quoting is fiddly - $USER has to expand locally while the assignment happens remotely - and it would be different if you use tcsh, etc.)
You can use environment passing in this manner and select which variables you allow to be set. You'd have to get the users to set some alternative to $USER, like REMOTE_USER=$USER, and then allow REMOTE_USER to pass through; a sketch follows. And you're trusting that they don't set it incorrectly, or forget to set it (you can handle that case, with a little annoyance, by modifying this mechanism).
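The pieces involved look like this (OpenSSH's SendEnv/AcceptEnv mechanism; the variable name is just a convention):

# on box A, in ~/.ssh/config:        SendEnv REMOTE_USER
# on box B, in /etc/ssh/sshd_config: AcceptEnv REMOTE_USER
REMOTE_USER=$USER ssh generic@boxB

The script on box B can then read $REMOTE_USER for logging.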
Note that you almost have to trust the connecting client to tell you who the user is - you can make it hard or annoying to spoof the username, but unless you use per-user certificates instead of a generic login/password they all know, you can't verify who connected.
