How can I schedule the transfer of a CSV file resulting from a Snowflake query to an FTP server? [duplicate]

I have a simple question. I want my Windows 7 PC to send a file to a Linux server every minute. This would be easy to do on Linux via cron jobs, but I really have no idea how to do that on Windows.
For example, we have a file called example.txt in c:/programfiles/folder/ on my Windows PC.
I want to send that file to the Linux server, to the folder /home/fold.
Any idea what to use and how? Does Windows have anything similar to cron jobs? What should I use instead of scp? Maybe FTP, but my Linux server only accepts explicit TLS and I have no idea how to make the Windows ftp client do that.
Thanks in advance, and sorry if this is a stupid question.

The Windows equivalent of cron is the Windows Task Scheduler.
The Windows ftp.exe client does not support TLS/SSL. It also does not support passive FTP mode. So you have to use a different client, and once you have to use a different client anyway, you can use SFTP.
So take any scriptable SFTP client (or an FTPS or SCP client if you prefer) and schedule it to run using the Windows Task Scheduler.
For example, with WinSCP you can use a batch file like:
@echo off
winscp.com /log=winscp.log /command ^
"open sftp://user:password@example.com/" ^
"put c:\path\file.txt /path/" ^
"exit"
(WinSCP supports FTPS and SCP too).
Then, in the Windows Control Panel, go to System and Security > Administrative Tools > Schedule Tasks and add a new task.
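Alternatively, the schedule can be created from the command line with schtasks; a minimal sketch, assuming the batch file above is saved as c:\path\upload.bat (a hypothetical path) and should run every minute:
schtasks /create /tn "Upload file" /tr "c:\path\upload.bat" /sc minute /mo 1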
For details see my guide for scheduling file transfers to FTP/SFTP server.
(I'm the author of WinSCP)

Related

tso ftp gives me many more commands than connecting via standard ftp

I connect to a z/OS system via "tso ftp", specifying my user-id, password and the hostname.
The command "help" shows me a list of commands that I am allowed to use, for example "site".
So far so good, but if I connect to the same host via "standard ftp", for example from the Windows command shell, the list of allowed commands is much(!) smaller. Typing e.g. "site" gives me the message: "unknown command".
This seems strange to me, because in my eyes I connected to the same host with the same credentials in just two different ways - but my permissions are quite different!?
Can anyone explain the difference between "tso ftp" and "standard ftp" to me?
Thanks!
You need to use the quote command to send your command to the z/OS FTP server, e.g. quote SITE FILETYPE=JES.
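For illustration, after connecting to the mainframe with a standard command-line ftp client, submitting a job might look like this (the JCL file name is a placeholder; SITE FILETYPE=SEQ switches back to ordinary dataset transfers):
ftp> quote SITE FILETYPE=JES
ftp> put myjob.jcl
ftp> quote SITE FILETYPE=SEQ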
The FTP implementation done by IBM in z/OS differs quite a bit from typical open FTP clients and servers (Windows, Linux, etc.).
It is part of the z/OS Communications Server and has a lot of specific features that allow you to easily switch between MVS and Unix System Services.
For example :
- you can send MVS batch jobs
- you can browse either the MVS or USS file system
- you can invoke the client from the ISPF command shell, the USS shell or a batch job
For more information you can read "Chapter 3. File Transfer Protocol" in this Redbook:
IBM z/OS V2R1 Communications Server TCP/IP Implementation - Volume 2: Standard Applications

Does anyone know how to issue a SUBMIT command to OpenVMS over an FTP session?

I am currently using Windows telnet to submit files to the OpenVMS queue via a series of sendkeys/application waits through VBA. It works, up until the end user shifts focus away from the telnet window. I would prefer to issue the SUBMITs using an FTP session, where I can script the commands into a batch file and shoot it across FTP. I was able to do something similar with IBM mainframes - through the quote site FTP command - setting filetype=jes, followed by a JCL file that would be dropped into the work queue for immediate execution. I can't seem to find anything on the internet related to FTP, OpenVMS, and SUBMIT. I have tried using quote submit/que=... but it does not recognize the command. (SUBMIT works fine under telnet.)
Maybe you can use the Remote Shell Protocol (RSH) to execute a command on the remote node.
You would need an rsh client on Windows:
http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/rsh.mspx?mfr=true
And also enable the RSH service on VMS via TCPIP$CONFIG.
(See OpenVMS documentation http://h71000.www7.hp.com/doc/index.html)
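A rough sketch of what the Windows side could then run, assuming the RSH service is enabled on the VMS box (host name, username and command file are placeholders):
rsh vmshost -l vmsuser "SUBMIT MYJOB.COM /QUEUE=SYS$BATCH"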
This works best with a VMS username dedicated to processing inbound FTP files. In the LOGIN.COM for that username, detect that it is a network connection and submit a batch job that looks for the expected file, gets exclusive access to it with retries (which tells you the FTP transfer is done), and then processes the file. That has worked for me.
The other option is to put a security ACL on the directory and set up an audit listener - it will receive file-create events via a mailbox message. Then it can do something similar: get exclusive access to the file being created and then process it.

Receive File via SFTP/SSH and automatically forward to FTP on another server

I'm currently in a situation where I receive flat files via FTP from my clients. A couple of clients have insisted on the need to use SSH Private Key SFTP rather than regular FTP.
What I want to do is set up a server (preferably on Linux/Unix, but I guess I could do it on a Windows server and purchase SFTP server software) that will do the following:
Allow me to set up an SFTP directory for each client with a unique user/pass. Each directory also has to have the public/private key SSH "stuff". I'm a little new to this, but I've googled it.
Once the file is completely uploaded by the client, I want to kick off an event that ftp's that file via regular FTP to my Windows cloud.
These files can be up to 10 MB, so the event that FTPs the file to the other server can't fire until the file is completely uploaded.
Has anyone set something like this up? Any guidance would be appreciated.
Thanks!
In Linux, you can use incron to monitor the directory the files will be SFTP'd to and have it trigger your FTP job. It's kind of like cron, except that instead of triggering jobs based on time, it does so based on filesystem modifications. In order to trigger only once the entire file has been written, I think you can use IN_CLOSE_WRITE in the inotify mask. Failing this, I suggest configuring each of the events individually to echo a message to a log file and seeing if you can identify one which reliably happens only at the end of the SFTP transfer.
If you're using RedHat, it's not in the standard distribution, but it is in EPEL.
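As a rough sketch (the watched directory and the forwarding script are placeholders), an incrontab entry using IN_CLOSE_WRITE could look like this, where $@ expands to the watched directory and $# to the file name:
/home/client1/incoming IN_CLOSE_WRITE /usr/local/bin/forward-to-windows.sh $@/$#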
On Windows you could use Titan FTP Server Enterprise Edition, which supports SFTP as well as allows you to define various types of events. When the event is triggered, you could kick off anything you need on a per folder/per account basis.
PS. AFAIK, when it comes to SFTP it is either password authentication or public key authentication (SSH key), but not both.
On your UNIX server, you can configure SSH to use a custom SFTP server that, instead of handling the SFTP protocol itself, opens a new SSH connection to the Windows SFTP server using password authentication and forwards the SFTP traffic there.
Writing the proxy is easy with the right tools, for instance in Perl using the Net::OpenSSH module:
#!/usr/bin/perl
# this is the sftp-proxy-server
use strict; use warnings;
use Net::OpenSSH;
# connection details for the Windows SFTP server (placeholders)
my ($windows_server, $user, $passwd) = ('windows.example.com', 'winuser', 'secret');
my $ssh = Net::OpenSSH->new($windows_server, user => $user, password => $passwd);
$ssh->error and die "connection failed: " . $ssh->error;
# run the remote "sftp" subsystem (-s) and forward its traffic over our stdin/stdout
$ssh->system({ssh_opts => '-s'}, 'sftp');
You can instruct the SSH server to use that alternative SFTP server changing the configuration in /etc/ssh/sshd_config. For instance:
Subsystem sftp /usr/local/bin/sftp-proxy-server
Did you try Apache FtpServer?
I think you can do what you need with its ftplet API.
See:
http://mina.apache.org/ftpserver-project/index.html

Speeding up ssh in batch files

This is my situation:
I have a Linux server/media center and a Windows client.
My goal is to remote control rhythmbox, amongst other things.
I've done this using plink (a Windows-based CLI SSH tool).
The problem is that starting up an SSH session, logging in, and sending a command is understandably slow as hell. When I had a Windows server I used a tool called psexec, which was almost instantaneous.
Is there any way to speed this process up? Either by somehow sending the command along with the login request, which should show some improvement, or by maintaining a persistent SSH connection which I can reuse (plink disconnects at the end of the command).
More info: On my Windows machine I'm using a .bat file like:
plink -ssh -l username -pw pass myipaddress "/home/username/bin/skip"
On my linux machine the skip bash file is something like:
# needed to get around an x11 error caused by controlling rhythmbox over ssh:
# if it's an ssh connection, copy the dbus address first
rhythmbox-client --next   # the cli wrapper for rhythmbox
Further Research:
The only way to go seems to be keeping an SSH connection open/maintained as a service. This seems doable, as there is demand for it from people setting up SSH tunnels (to bypass firewalls). From there I'd need a way to send the commands to this existing connection, or to reuse that connection.
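One way to get that persistent connection, assuming an OpenSSH client is available on the Windows side (for example via Cygwin), is SSH connection multiplexing; a rough sketch:
# one-time: open a background master connection
ssh -M -S ~/.ssh/media.sock -fN username@myipaddress
# later commands reuse that connection and return almost instantly
ssh -S ~/.ssh/media.sock username@myipaddress "/home/username/bin/skip"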
The other option is of course to NOT use ssh. Hell, I already have a connection through Samba file shares and there is no lag there. I bet I could put a service on the Linux side that checks for a modified file, then have an app on the client side that modifies said file. Amazingly hacky, but so far it seems like the best option. And by best I mean the only one that cuts control lag. There has got to be a better way than this; I can't be the only nerd using Linux as a media center that wants remote controls. This kind of moves the topic from stackoverflow to superuser, but that's ok.
You could use an SSH key (public-key authentication) to get rid of the login part. Alternatively, build yourself a small HTTP server which uses an "exotic" port for controlling your media player (Amarok, btw, has one built in).
Switching to something like mpd will bypass the ssh issue, although I give no guarantee that changing tracks will be any faster.
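For what it's worth, mpd ships with a small command-line client, mpc, so (assuming mpd is configured to listen on the network) skipping a track from the Windows box could be as simple as:
mpc -h myipaddress next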
If anyone is curious, I ended up implementing an HTTP-based server with PHP to execute commands server side. Client side I used curl.exe, which lets me have nice clickable buttons without the overhead of a web browser.
It's also nice since it allowed me to implement an in-browser UI, which is great to use from any machine with internet access, even ones that don't have ssh installed. And it works wonderfully from my phone as a remote control (which I can use from a country away if I so choose...)
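For illustration only (the URL, credentials and query parameter are hypothetical), each button simply fires something like:
curl.exe -u remoteuser:remotepass "https://mediacenter.example.com/control.php?cmd=next"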

windows cmd connection to remote mysql dbf

Is there a way to connect to a MySQL database on a remote server and run SQL queries using the Windows command line?
Yes, you can connect to a different host by running mysql -h 123.45.67.89.
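For instance, a one-off query from the Windows command line might look like this (credentials, database and table are placeholders; -p prompts for the password):
mysql -h 123.45.67.89 -u remote_user -p -e "SELECT COUNT(*) FROM db_name.my_table;"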
Please note that there are a few security implications:
You will have to grant yourself access. You will need to run something like GRANT ALL ON db_name.table TO 'user'@'your_ip' IDENTIFIED BY 'password'. db_name, table and your_ip can be * but beware of opening your server to hackers.
You will have to open your server's firewall if you are not on the same LAN. Again, ymmv and you should be aware not to open the door to exploits.
You may want to use SSL and use secure-auth in order to protect your traffic and credentials.
Hope that helps.
MySQL has a command-line client where you can run queries. If you don't want to allow remote connections to the database on the server, you can still script things into a batch. There are command-line telnet/ssh clients that either accept an external file as a list of commands to run remotely, or you can pass it with input stream redirection (the less-than symbol).
When opening a connection to the server, most clients are programmed so that the only way to specify the login password is by typing it in from the keyboard (yeah, they don't use the default input stream). Things like that make it hard to script. However, it may be possible to set up certificate-based login on SSH - you'd have to research that.
If the server that's hosting the MySQL database is also a web server, you could also think about putting some script (PHP, Perl, Python, Ruby - whatever you like) in a password-protected area that would let you execute queries by simply making HTTP(S) requests to that script. Although Windows doesn't have a built-in command-line HTTP(S) client, you can always get something like wget.exe and perform the requests with it. Note that if you choose this approach, I strongly advise putting that script behind HTTPS - if discovered by a malicious user, it could be lethal to your data.
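A rough sketch of that approach, with a hypothetical password-protected script at https://example.com/protected/run-query.php that accepts the query in a POST field:
wget.exe --user=dbadmin --password=secret --post-data "sql=SELECT COUNT(*) FROM my_table" -O result.txt https://example.com/protected/run-query.php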
You could use telnet, or SSH if you want to be more secure.
If MySQL is running on Linux or BSD, you need a telnet or SSH connection through something like PuTTY.
This will open a command line on the remote server. The command is mysql. There will be issues around authentication of remote users (as you would expect).
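For example, with PuTTY's command-line companion plink, a query could be run on the remote box in one shot (host, credentials and database are placeholders):
plink -ssh -l shelluser -pw shellpass db.example.com "mysql -u dbuser -pdbpass -e 'SHOW TABLES;' mydb"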
If the remote server is running Windows, you have a whole different set of issues.
I'm not sure you can connect to a remote Windows server and control it this way.
Or rather, I'm not sure HOW you could connect to a remote Windows server and use it this way - but no doubt it's possible.
