While trying to run riak-admin backup riak@ec2-xxx.compute-1.amazonaws.com riak /home/user/backup.dat all against a remote machine (an EC2 instance), I encounter the following error message:
{"init terminating in do_boot",{{nocatch,{could_not_reach_node,'riak@ec2-xxx.compute-1.amazonaws.com'}},[{riak_kv_backup,ensure_connected,1,[{file,"src/riak_kv_backup.erl"},{line,171}]},{riak_kv_backup,backup,3,[{file,"src/riak_kv_backup.erl"},{line,40}]},{erl_eval,do_apply,6,[{file,"erl_eval.erl"},{line,572}]},{init,start_it,1,[]},{init,start_em,1,[]}]}}
I assume there's a connection/permission error, since the same backup command works if run locally on the instance (with a local node IP, of course). I should note that the server (Node.js) can remotely connect to that IP, so the port (8098) is open and accessible. Any advice on how to make the backup work remotely?
It would appear that the riak-admin backup command doesn't work remotely - and it's certainly not something I've ever tried to do. I'd recommend setting up a periodic backup (via cron or similar) and then using rsync to pull the backup file down to your local machine.
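For example, a nightly crontab entry on the instance could look something like this (the node name, cookie, riak-admin path, and backup path are illustrative, taken from the question):
# Illustrative crontab entry (edit with crontab -e on the instance);
# takes a backup of all nodes every night at 2am.
0 2 * * * /usr/sbin/riak-admin backup riak@ip-local-ec2 riak /home/user/backup.dat all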
Alternatively, you could try the following hacky, untested idea as a single script.
#!/bin/bash
# Run the backup on the instance itself, against its local node name
# (note the cookie argument, "riak", as in the original command):
ssh ec2-xxx.compute-1.amazonaws.com "riak-admin backup riak@ip-local-ec2 riak /home/user/backup.dat all"
# Then pull the resulting backup file down to the local machine:
rsync -avP ec2-xxx.compute-1.amazonaws.com:/home/user/backup.dat .
Whenever I try to execute a shell script via JSch, nothing happens. However, when I execute it through a normal ssh session it works fine. I haven't been able to get a single .sh file to run, regardless of its contents.
I have tried
channelssh.setCommand("/home/exiatron00/Desktop/bash test.sh");
channelssh.setCommand("/home/exiatron00/Desktop/./test.sh");
channelssh.setCommand("/home/exiatron00/Desktop/test.sh");
I don't see anything wrong with your command, so I would have to assume it's your setup.
Are you sure you're even logging into your server? I would check your last logs to make sure you are even connecting.
Are you on the same network as the machine you're attempting to connect to? If you aren't, I would assume your machine is hidden behind a NAT.
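To see what is actually happening, it can also help to read the remote command's output and exit status over the exec channel. A minimal, untested sketch (host, credentials, and the script path are placeholders based on the question):
import com.jcraft.jsch.*;
import java.io.InputStream;

public class RunRemoteScript {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        Session session = jsch.getSession("user", "host", 22); // placeholders
        session.setPassword("password");
        session.setConfig("StrictHostKeyChecking", "no"); // for testing only
        session.connect();

        ChannelExec channel = (ChannelExec) session.openChannel("exec");
        // Invoke the script through a shell rather than as a bare path:
        channel.setCommand("sh /home/exiatron00/Desktop/test.sh");
        channel.setErrStream(System.err); // surface remote stderr
        InputStream in = channel.getInputStream();
        channel.connect();

        int c;
        while ((c = in.read()) != -1) { // print remote stdout
            System.out.print((char) c);
        }
        System.out.println("exit status: " + channel.getExitStatus());

        channel.disconnect();
        session.disconnect();
    }
}
If nothing prints and the exit status is non-zero, the command is reaching the server but failing there, which narrows the search considerably.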
I would like to install the Windows version of Perforce in a network location so that users can call p4 via:
\\somewhere\p4.exe -p server:1666 -c some_client_name sync
where "somewhere" is consistently mapped on all Windows machines. I tried to do this by installing locally, then copying p4.exe to \\somewhere.
On the computer where I installed locally, \\somewhere\p4.exe works just fine. But when I switch to another machine and try to run
\\somewhere\p4.exe -p server:1666 info
I get the following error:
Perforce client error
Connect to server failed; check $P4PORT.
TCP connect to server:1666 failed.
A non-recoverable error occurred during a database lookup.
What does this error mean? I couldn't find any information in the documentation; I suspect I might need another file besides p4.exe. Indeed, when I install Perforce locally on the other machine, using the local p4.exe works, but \\somewhere\p4.exe still does not.
Any pointers?
Thanks!
You shouldn't need any other files besides P4.exe.
The TCP connection error is probably because that other machine isn't able to translate "server" into an IP address.
Try using some of the Windows command line tools to diagnose this, as in:
nslookup server
or
ping server
Also, try changing your test to run:
\\somewhere\p4.exe -p NNN.NNN.NNN.NNN:1666 info
where the "NNN.NNN.NNN.NNN" is the IP address of your server machine.
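If the IP address works, DNS resolution is the culprit. You could also set P4PORT once so users don't have to pass -p on every call; for example (same placeholder IP as above):
rem Set P4PORT for the current cmd session, then test:
set P4PORT=NNN.NNN.NNN.NNN:1666
\\somewhere\p4.exe info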
I've got a continuous integration server (Jenkins) which builds my code (checking for compilation errors), runs tests, and then deploys the files to a remote server (not a war file, but the actual file structure). I do this nightly with a Jenkins plugin which allows me to transfer files via Samba.
Now, what I need to do is run an ant command on the remote server, and after that start the application server there; the application server is started by running a .bat file from the command line.
I'm pretty clueless how to accomplish this. I know Jenkins is capable of running batch commands, but how do I make them run in the context of the remote server and not the context of the build server?
If Jenkins on Windows, remote on *nix, use plink.exe (which is essentially command line PuTTY)
If Jenkins on Windows, remote on Window, use psexec.exe
If Jenkins on *nix, remote on *nix, use ssh
If Jenkins on *nix, remote on Windows: (update 2015-01) Ansible (http://docs.ansible.com/intro_windows.html) has support for calling Windows commands, e.g. PowerShell, from a Unix/Linux machine; see https://github.com/ansible/ansible-examples/blob/master/windows/run-powershell.yml
Tell me what OSes are involved (both on Jenkins and remote), and I will flesh this out further.
Edit:
The download page for psexec.exe lists all command line options. You will want something along the lines of:
psexec \\remotecomputername -u remoteusername -p remotepassword cmd /c <your commands here>
Replace <your commands here> with actual commands as you would execute them from command prompt.
Note that psexec first needs to install a service, and requires an elevated command prompt / admin remote credentials to do so.
Also, you need to run psexec -accepteula once to accept the EULA prompt.
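For the plink route, a hypothetical invocation from a Jenkins "Execute Windows batch command" step might look like this (host, credentials, and the remote commands are placeholders):
rem Run commands on a *nix remote from a Windows Jenkins box:
plink.exe -ssh user@remotehost -pw password "cd /opt/app && ant deploy"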
Following Slav's answer above, here is a simpler solution for Jenkins (*nix) to remote (Windows):
Install an SSH server on your remote windows (MobaSSH home edition worked well for me)
Make sure your Jenkins user, on your Jenkins machine, has the host key needed to open an SSH connection to your remote (you can simply open a terminal and ssh to your remote once, then accept the host key. Make sure it is saved for the Jenkins user).
You can now add an execute shell build phase in your Jenkins job which can SSH to your remote windows machine.
Notes:
The established connection might require some additional work - you might have to set Windows environment variables or map network drives in order for your executed commands or batch files to work properly on your Windows machines.
If you wish to run GUI-related operations, this solution might not be relevant (this follows from my work on running automation tests which require GUI manipulation).
Using Jenkins SSH plugin is an issue, as seen here.
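With that in place, the execute shell build step can be as simple as something like this (host, user, and batch file path are hypothetical):
# Run the remote batch file over SSH from a Jenkins "Execute shell" step:
ssh jenkins@windows-host 'cmd /c "C:\path\to\start-server.bat"'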
1. I installed MobaSSH Home on my remote Windows server.
2. I installed the Jenkins SSH plugin.
3. I edited the shell step, e.g. to go build the project.
4. Something seems to be wrong:
"go: creating work dir: CreateFile C:\WINDOWS\system32\bsh\tmp: The system cannot find the path specified."
I ended up going with a different approach after trying out psexec.exe for a while.
Psexec.exe and copying files over the network were a bit slow and unstable, especially since the domain I work on has a policy of changing passwords every month (which broke the build).
In the end I went with the master/slave approach, which is faster and more stable, since I don't have to use psexec.exe or copy files over the network.
I have to do some migration tasks on three Solaris servers; I have the IP addresses, username and password for each server. The script I have to run does what it has to do with no problems, but it was written to run with the script and the needed directory on the same machine, so I have to change it by adding the necessary connection instructions. However, I am very limited, for the following reasons:
I am not allowed to change or install anything on these systems.
I have only read privileges with the users I have been given.
The output files should be generated on the machine where the script is running, which leads to the next point.
The script has to run on a Solaris machine with bash version 3, so I do not know which versions of the ftp or ssh commands work on this version of Solaris.
I only need the part of the code that makes the connection and finds the needed directory. Any suggestions?
Use sshfs to mount the needed directories of your three servers.
Afterwards you can run the script locally accessing the remote data as local files.
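A minimal sketch, assuming sshfs is available on the machine where the script runs (user, host, and paths are placeholders):
# Mount a remote directory locally over SSH (read access is enough):
mkdir -p /mnt/server1
sshfs user@server1:/needed/dir /mnt/server1
# ... run the script against /mnt/server1 as if the data were local ...
umount /mnt/server1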
Perhaps you could use pdsh (parallel distributed shell) to run the script on the 3 Solaris servers.
I have a Perl script which maps two drives and then proceeds to copy files from one of the drives to the other. The Perl script is located on a Windows box, but we are SSHing from a Linux box into the Windows box to execute the script. When I run the script directly from the Windows box, everything works without issue: the drives are mapped and the files are copied over successfully. When I attempt to execute the script from my Linux box via SSH, the script fails and I get the following output:
The local device name is already in use.
Error mapping source \\xxx.xxx.net\localdirectory
This error occurs when attempting to map the first drive, I don't know if it would fail on the second drive as well since it has not made it that far.
I have several other Perl scripts that are executed this same way (via ssh from Linux to Windows box) and they execute without issue, this is the only one that maps a drive though. This is the code I am using to execute the script:
#!/bin/sh
ssh -t -t user@server "cd /Path/to/Perl/Script; /cygdrive/C/Perl/bin/perl.exe Script.pl"
What user is your ssh daemon running as? Presumably System. That user doesn't have authority to map network drives, as far as I recall. Can you not just do this on the Linux box directly using samba?
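If you go the Samba route, mounting the share from the error message directly on the Linux box might look something like this (the mount point and credentials are placeholders):
# Mount the Windows share on Linux instead of mapping a drive via SSH:
sudo mkdir -p /mnt/localdirectory
sudo mount -t cifs //xxx.xxx.net/localdirectory /mnt/localdirectory -o username=user,password=pass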
In case anyone needs this in the future, we were able to get it working. The issue was due to the SVCCopSSH account being used for the CopSSH service on our Windows machine. We had to stop the CopSSH service, set its Log On account to the network account we were using to SSH from Linux to Windows, and restart the service. This fixed all the issues we were having.