Laravel Beanstalkd job cannot connect to remote server via SSH

I've got a workflow in my web application (built in Laravel 4) that looks like this:
1) User uploads a file (up to 50mb or so)
2) File is moved to a temp directory
3) A queued job is created that does the following:
- Uploads the file to Amazon S3
- SSHes into another file processing server and transfers the file to a folder there
- Deletes the temporary file
To connect to the remote server and upload the file within the queued job, I'm using:
SSH::into('processing')->put($localPath, $remotePath);
Everything works fine when I queue this job using the 'sync' driver, so I know the environment and permissions are correct. The problem is, when I switch over to beanstalkd as my queue driver, the job fails with the following:
[2015-01-09 14:15:40] production.ERROR: exception 'RuntimeException' with message 'Unable to connect to remote server.'
Beanstalkd jobs run fine elsewhere in the application (none of the others have ssh commands).
I'm using a username and password for the connection, so it's not a key permissions or passphrase issue. Any ideas?

If you know the file has uploaded OK to S3, why not generate a new job (a step 3b) that runs on the other file processing server and has it download the file from S3, if it needs it?
Other than that, you would need to do more debugging on the SSH upload.
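One place to start that debugging, as a minimal sketch: check whether the host and user the beanstalkd worker actually runs under can reach the processing server at all, since the worker may run under a different system user or environment than your sync-driver test did. The hostname, port, and user below are placeholders.
# on the host that runs the beanstalkd worker, as the same user the worker runs under
nc -zv processing.example.com 22              # is the SSH port reachable from this box?
ssh deploy@processing.example.com 'echo ok'   # does a plain password login work for this user?
If either of those fails for the worker's user but works from your own shell, the problem is environmental rather than anything in the job code.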

Related

How to keep logstash service running when I logout from remote server

I configured the logstash service following the instructions at https://www.elastic.co/guide/en/logstash/current/running-logstash-windows.html (logstash as a service using nssm), but I noticed that the service does not actually keep running once I disconnect from the remote server where I installed it.
Is there a way to fix this problem?
thanks,
g
The same thing also happens when running logstash manually (I mean, running the appropriate .bat file in a command prompt).
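For reference, a rough sketch of the nssm-based install the linked guide describes; the paths and service name are placeholders for whatever your layout is. A service registered this way runs under a Windows service account and keeps running after logoff, whereas a .bat started in your own session dies with that session.
nssm install logstash "C:\logstash\bin\logstash.bat"
nssm set logstash AppDirectory "C:\logstash\bin"
nssm set logstash AppParameters "-f C:\logstash\config\logstash.conf"
nssm start logstash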

Execute actions before specific commands in Apache Guacamole

I set up Apache Guacamole 0.9.14 on my CentOS 7 machine with nginx as a reverse proxy in front of it.
I want to give some of my employees limited access to some of my servers via SSH.
Some of the connections are SFTP-enabled, and to prevent sabotage (deliberate or not) I edited Guacamole's upload function so that a copy of every file uploaded through Guacamole is kept on the Guacamole server itself, alongside the destination server.
I was wondering if I could also keep a copy of files that arrive on the destination servers via wget, curl, etc.
If I could intercept specific commands on the destination servers and perform some actions before they execute (for example, backing files up to the Guacamole server before any rm -rf runs, or keeping a copy of any file fetched with wget), that would be great.
There are more than a thousand servers running different Linux distributions, so editing anything other than the Guacamole server itself is not feasible.
Any idea how to intercept commands on the Guacamole server, especially over SSH, before they are executed?
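Guacamole itself does not expose a hook like this for its SSH sessions (guacd opens the connection directly), but for illustration, the general shape of the wrapper being asked about might look like the following if sessions were routed through an intermediate account on the Guacamole host. Everything here is hypothetical: the destination user, backup directory, and the naive rm -rf parsing are placeholders, not a hardened solution.
#!/bin/bash
# Hypothetical jump-host wrapper: back up the targets of any "rm -rf" to the
# Guacamole server before forwarding the command to the destination host.
# DEST_USER and BACKUP_DIR are placeholders for illustration only.
DEST_HOST="$1"; shift
DEST_USER="deploy"
BACKUP_DIR="/var/backups/guacamole"
CMD="$*"

if [[ "$CMD" == *"rm -rf"* ]]; then
    # copy the paths that are about to be deleted back to this server first
    for path in $(echo "$CMD" | sed 's/.*rm -rf[[:space:]]*//'); do
        scp -r "$DEST_USER@$DEST_HOST:$path" "$BACKUP_DIR/" 2>/dev/null
    done
fi

ssh "$DEST_USER@$DEST_HOST" "$CMD"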

Run post processing commands on remote server from informatica cloud

I am running a job on Informatica Cloud. It picks up a file from a remote server and dumps the data into Salesforce. After the Informatica job finishes, I want to run post-processing commands from Informatica Cloud on the source file, which sits on the remote server. Is that possible?
Files need to be present on the machine where the Agent is installed.
The post-processing command file cannot be in a remote location.
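One hedged workaround, given that constraint: keep the post-processing script on the Agent machine and have it reach out to the remote server over SSH to act on the source file. The host, key path, and file paths below are placeholders.
#!/bin/bash
# Hypothetical post-processing script stored on the Agent machine and referenced
# by the task's post-processing command; it archives the source file remotely.
ssh -i /home/agent/.ssh/id_rsa etl@remote-host \
    "mv /data/incoming/source_file.csv /data/archive/source_file.csv"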

PostgreSQL 9.2 streaming replication recovery.conf

I am working on PostgreSQL 9.2 streaming replication. I have finished setting up the master, and on the standby I want to set the parameters in the recovery.conf file.
But I cannot find that file, so I created a new file named recovery.conf, copied all the contents of the recovery.conf.sample file into it, and edited the parameters.
I saved it, and when I start the PostgreSQL service it gives the error
"service on local computer started and stopped....."
But when I remove the recovery.conf file, the service starts.
I need help.
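For reference, a minimal recovery.conf for a 9.2 standby might look like the sketch below; it must sit in the standby's data directory, and the host, user, password, and trigger path are placeholders. A syntax error or unrecognized parameter left over from the sample file will also stop the server from starting, so it can help to begin with a file this small.
standby_mode = 'on'
primary_conninfo = 'host=192.168.1.10 port=5432 user=replicator password=secret'
# optional: touching this file promotes the standby to master
trigger_file = 'C:/pgsql/failover.trigger'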

Accessing Riak node from a remote machine (riak-admin backup)

While trying to run riak-admin backup riak@ec2-xxx.compute-1.amazonaws.com riak /home/user/backup.dat all on a remote machine (an EC2 instance), I encounter the following error message:
{"init terminating in do_boot",{{nocatch,{could_not_reach_node,'riak#ec2-xxx.compute-1.amazonaws.com'}},[{riak_kv_backup,ensure_connected,1,[{file,"src/riak_kv_backup.erl"},{line,171}]},{riak_kv_backup,backup,3,[{file,"src/riak_kv_backup.erl"},{line,40}]},{erl_eval,do_apply,6,[{file,"erl_eval.erl"},{line,572}]},{init,start_it,1,[]},{init,start_em,1,[]}]}}
I assume there's a connection/permission error, since the same backup command works when run locally on the instance (with a local node IP, of course). I should note that the server (Node.js) can connect remotely to that IP, so port 8098 is open and accessible. Any advice on how to make the backup work remotely?
It would appear that the riak-admin backup command doesn't work remotely - and certainly it's not something I've ever tried to do. I'd recommend setting up a periodic backup (via cron or similar) and then using rsync to pull the backup file down to your local machine.
Alternatively, you could try the following hacky, untested idea as a single script:
#!/bin/bash
ssh ec2-xxx.compute-1.amazonaws.com "riak-admin backup riak@ip-local-ec2 riak /home/user/backup.dat all"
rsync -avP ec2-xxx.compute-1.amazonaws.com:/home/user/backup.dat .
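A sketch of the cron-based approach mentioned above, with the backup scheduled on the instance itself and the rsync pull run from your local machine afterwards (the schedule, node name, and paths are placeholders):
# on the EC2 instance, in crontab -e: run the backup nightly at 02:00
0 2 * * * riak-admin backup riak@ip-local-ec2 riak /home/user/backup.dat all
# later, from your local machine, pull the finished dump down
rsync -avP ec2-xxx.compute-1.amazonaws.com:/home/user/backup.dat .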
