Syncing Pis to a Master in Arch Linux Bash.
I am trying to configure a master Pi as an ntp server (which uses an external pool for time).
I have several child Pis that I need to configure to use the master Pi as their sync source.
Ideally I would like the master to see all the Pis that use it as their time source, and to be able to gather the results of ntpq -pn from all Pis on the master.
I have set up the master to get its time from the pool, and I have three child Pis that are configured to get time from the master's IP.
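For illustration, here is a minimal sketch of that configuration, assuming ntpd and /etc/ntp.conf (the 192.168.1.x addresses are placeholders, not my real ones):
# /etc/ntp.conf on the master Pi: get time from an external pool
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
# allow child Pis on the local subnet to query this server
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

# /etc/ntp.conf on each child Pi: use the master as the only source
server 192.168.1.10 iburst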
Any ideas on how to do this or links would be greatly appreciated.
Thanks,
Ron
It sounds like you're trying to gather the results of running ntpq -pn on all of the Pis. This isn't NTP-specific; it's a general remote-execution task. SSH is the standard, well-documented option; Salt Stack is more advanced and powerful.
SSH
Set up SSH keys between the master and all the other Pis.
Loop over the hosts with: for h in child1 child2; do ssh "$h" ntpq -pn > "${h}.output"; done
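A slightly fuller sketch of that approach, assuming the children are reachable as child1..child3 under the pi user (adjust the names to your setup) and that you want one combined report:
#!/bin/bash
# one-time setup: create a key on the master and copy it to each child
#   ssh-keygen -t ed25519
#   ssh-copy-id pi@child1    # repeat for child2, child3

# collect ntpq output from every child into a single report
for h in child1 child2 child3; do
    echo "== $h =="
    ssh "pi@$h" ntpq -pn
done > ntp_report.txt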
Salt Stack
Set up the Salt Master on the master Pi.
Set up the child Pis as Salt Minions. (blog)
Use Salt to run a command on all minions: salt '*' cmd.run 'ntpq -pn'
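Roughly, the minion side and key acceptance would look like this (the master IP is a placeholder):
# /etc/salt/minion on each child Pi: point the minion at the master
master: 192.168.1.10

# on the master: accept the minions' keys, then run the command everywhere
salt-key -A
salt '*' cmd.run 'ntpq -pn'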
I am aware that you can launch Docker containers remotely in VSCode. Is it possible to do the same with Singularity containers?
Update: the solution to this was published in the same issue (https://github.com/microsoft/vscode-remote-release/issues/3066#issuecomment-1019500216) as before by user oschulz:
As promised, here are some instructions on how to use Singularity with VS-Code Remote SSH via SSH RemoteCommand. The procedure described below makes VS-Code run its remote server component inside a Singularity container instance (other runtimes like Shifter work too).
Acknowledgement: Credit for a lot of this goes to @gipert, who refined my original approach (using a custom SSH script) when support for RemoteCommand became available in VS-Code recently.
Step 1
Use VS-Code >= v1.64 (which includes support for the SSH RemoteCommand setting). Install the Pre-Release version of the Remote SSH extension.
Important: In the VS-Code settings, set "remote.SSH.enableRemoteCommand": true.
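That is, in your user settings.json, something like:
{
    "remote.SSH.enableRemoteCommand": true
}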
Step 2
In your "$HOME/.ssh/config", add something like
Host myimage1~*
  RemoteCommand singularity shell /path/to/image1.sif
  RequestTTY yes

Host myimage2~*
  RemoteCommand singularity shell /path/to/image2.sif
  RequestTTY yes

Host somehost myimage1~somehost myimage2~somehost
  HostName some.host.somewhere
  User your_username

Host otherhost myimage1~otherhost myimage2~otherhost
  HostName some.otherhost.somewhere
  User your_username
Test whether this works using ssh myimage1~somehost. This should drop you into an SSH session inside of an instance of the "/path/to/image1.sif" container image on some.host.somewhere.
Connecting to the remote host with VS-Code: F1 > "Connect to Host" > "myimage1~somehost" should now get you a remote VS-Code session running in the container image as well. The same for "myimage2~somehost", "myimage1~otherhost" and "myimage2~otherhost".
Step 3
However, since VS-Code reuses the remote server instance, that's not sufficient to run multiple container images on the same host at the same time. To get separate (per-container) VS-Code server instances on the same host, add something like this to your VS-Code preferences:
"remote.SSH.serverInstallPath": {
"myimage1~somehost": "~/.vscode-container/myimage1",
"myimage1~otherhost": "~/.vscode-container/myimage1",
"myimage2~somehost": "~/.vscode-container/myimage2",
"myimage2~otherhost": "~/.vscode-container/myimage2"
}
Request to the VS-Code dev team
Could "remote.SSH.serverInstallPath" be controlled via an environment variable? This would allow us to eliminate all these cumbersome "remote.SSH.serverInstallPath" preferences. The environment variable could be set by a container startup script on the remote side (like the one below) automatically, depending on the selected container image.
Other Container runtimes
To use a different container runtime than Singularity (e.g. Shifter, Charliecloud, etc.), simply replace singularity shell /path/to/image1.sif by the appropriate command for your runtime.
On some systems (e.g. with Shifter at NERSC) you may also need to override $XDG_RUNTIME_DIR, since its default location may not be writable from within a container instance. In such cases, it's best to use a custom container run-script like
#!/bin/sh
export XDG_RUNTIME_DIR="${TMPDIR:-/tmp}/`whoami`/run"
exec shifter --image="$1"
So in your SSH config, use
RemoteCommand /my/homedir/.local/bin/run_container image_name
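i.e. a full host entry, by analogy with the ones above (the alias and image name are just examples), might look like:
Host myshifterimage~*
  RemoteCommand /my/homedir/.local/bin/run_container image_name
  RequestTTY yes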
I maintain a little container start-script called cenv that handles $XDG_RUNTIME_DIR (and quite a bit more, including some default bind-mounts) automatically for both Singularity and Shifter (contributions welcome).
Tips and tricks
If things don't work, try "Kill server on remote" from VS-Code and reconnect.
You can also try starting over from scratch with brute force: Close the VS-Code remote connection. Then, from an external terminal, kill the remote VS-Code server instance:
$ ssh somehost
$ kill -9 -1
(Will kill all processes you own on the remote host.)
Remove the ~/.vscode-server directory.
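For example, reconnecting after the kill:
$ ssh somehost rm -rf ~/.vscode-server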
Old:
I believe this is still not supported. Refer to this issue: https://github.com/microsoft/vscode-remote-release/issues/3066, and there are also some ideas for potential workarounds in the same link.
I have an HPC cluster where several webapps are installed in Docker containers; the queue is managed using Torque. Every app submits jobs to the HPC cluster by connecting to it through SSH and then running qsub: ssh user@cluster qsub bla blabla. There are shared folders for exchanging data.
I am not satisfied with this setup and I'd like to know if it is possible to have a master node running in each Docker container and use qsub directly inside it, without going through an SSH connection. I'd prefer to use Torque but I am open to other solutions.
Torque permits multiple submission hosts.
The names or addresses of the hosts should be added to the submit_hosts variable in the Torque server configuration; here is a relevant page from the manual.
qmgr -c 'set server submit_hosts = headnode'
qmgr -c 'set server submit_hosts += app1'
qmgr -c 'set server submit_hosts += app2'
This assumes app1 and app2 are resolvable host names of the Docker containers; you will need to configure name resolution.
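For illustration, a rough sketch of the name resolution and a check that the setting took effect (the addresses below are placeholders):
# /etc/hosts on the machine running pbs_server
10.0.0.11   app1
10.0.0.12   app2

# verify the submit_hosts setting
qmgr -c 'print server' | grep submit_hosts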
For more details and other options see Torque manual.
I have two Windows Server 2012 R2 machines located in one of the client's datacenters. Both servers are domain-joined. They both have RabbitMQ 3.6.0 installed on them. RabbitMQ is running as a Windows Service on both machines. I've been trying to cluster these two machines for a long time now without success. I always get the following error when I try to cluster them.
On the first machine, nodeA, I run the command 'rabbitmqctl join_cluster rabbit@nodeB'. This is what I get:
Clustering node 'rabbit@nodeA' with 'rabbit@nodeB' ...
Error: unable to connect to nodes ['rabbit@nodeB']: nodedown
DIAGNOSTICS
===========
attempted to contact: ['rabbit@nodeB']
rabbit@nodeB:
* connected to epmd (port 4369) on nodeB
* epmd reports node 'rabbit' running on port 25672
* TCP connection succeeded but Erlang distribution failed
* suggestion: hostname mismatch?
* suggestion: is the cookie set correctly?
* suggestion: is the Erlang distribution using TLS?
current node details:
- node name: 'rabbitmq-cli-3892@nodeA'
- home dir: C:\Users\mydirectory
- cookie hash: l+SSu57+cRyAQ03AJdwAbQ==
I've tried this setup with Azure Virtual Machines within an Azure Virtual Network and I succeeded in clustering the two VMs; however, it seems I cannot connect these two (the customer's machines) together.
This is what I have done and ensured:
There isn't any firewall blocking connections
Added host names to the hosts file located at C:\Windows\system32\drivers\etc
Tried to refer to host names as FQDNs without adding anything to the hosts file
Tried to refer to host names with CAPITAL letters and without
Copied the same exact .erlang.cookie to C:\Windows and C:\Users\mydirectory on both machines.
I've read, understood and applied RabbitMQ Clustering Guide https://www.rabbitmq.com/clustering.html
Stopped, restarted, reinstalled RabbitMQ on both machines.
It seems I can't get it to work. On the Azure machines, which were not domain-joined, clustering worked beautifully. I am really running out of options... Any help?
I had the same problem. You need to install RabbitMQ as an admin: uninstall, then reinstall as admin, and it should work fine.
Try to connect to each of the RabbitMQ nodes via a remote shell and check if the value of the cookie is the same (the cookie can be set in 3 different ways; .erlang.cookie is one of them).
erl -remsh 'rabbitmq-cli-3892@nodeA' -name 'test@nodeA'
erlang:get_cookie().
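And the same check on the other machine (assuming the broker node there is named rabbit@nodeB, as the diagnostics suggest):
erl -remsh 'rabbit@nodeB' -name 'test@nodeB'
erlang:get_cookie().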
I'm using Vagrant and Ansible to create my Bitbucket Server on Ubuntu 15.10. I have the server setup complete and working but I have to manually run the start-webapp.sh script to start the server each time I reprovision the server.
I have the following task in my Bitbucket role in Ansible. When I increase the verbosity I can see that I get a positive response from the server saying it will be running at http://localhost/, but when I go to the URL the server isn't up. If I then SSH into the server and run the script myself, I get the exact same response, but after running the script I can see the startup webpage.
- name: Start the Bitbucket Server
  become: yes
  shell: /bitbucket-server/atlassian-bitbucket-4.7.1/bin/start-webapp.sh
Any advice would be great on how to fix this.
Thanks,
Sam
Probably better to change that to an init script and use the service module to start it. For example, see this role for installing bitbucket...
Otherwise, you're subject to HUP and other issues from running processes under an ephemeral session.
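A hedged sketch of that approach, assuming you have installed an init script (or systemd unit) for Bitbucket on the target (the service name "bitbucket" here is an assumption):
- name: Ensure Bitbucket Server is running
  become: yes
  service:
    name: bitbucket   # assumed service name; adjust to your init script
    state: started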
I want to use the parallel capabilities of ipython on a remote computer cluster. Only the head node is accessible from the outside. I have set up ssh keys so that I can connect to the head node with e.g. ssh head and from there I can also ssh into any node without entering a password, e.g. ssh node3. So I can basically run any commands on the nodes by doing:
ssh head ssh node3 command
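(For reference, the same two-hop access can also be expressed as a single jump with an OpenSSH ProxyJump entry, using the same host names as above:
Host node*
  ProxyJump head
so that ssh node3 command works directly from my machine.)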
Now what I really want to do is to be able to run jobs on the cluster from my own computer from ipython. The way to set up the hosts to use in ipcluster is:
send_furl = True
engines = { 'host1.example.com' : 2,
            'host2.example.com' : 5,
            'host3.example.com' : 1,
            'host4.example.com' : 8 }
But since I only have a host name for the head node, I don't think I can do this. One option is to set up ssh tunneling on the head node, but I cannot do this in my case, since it requires enough ports to be open to accommodate all the nodes (and this is not the case). Are there any alternatives?
I use ipcluster on the NERSC clusters by using the PBS queue:
http://ipython.org/ipython-doc/stable/parallel/parallel_process.html#using-ipcluster-in-pbs-mode
In summary, you submit jobs which run mpiexec ipengine (after having launched ipcontroller on the login node). Do you have PBS on your cluster?
This was working fine with IPython 0.10; it is now broken in the 0.11 alpha.
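For reference, a rough sketch of the kind of PBS batch script that submission boils down to (the queue name, node/core counts, and the assumption that ipcontroller is already running on the login node are all placeholders/assumptions):
#!/bin/bash
#PBS -N ipengine
#PBS -l nodes=2:ppn=8
#PBS -q batch
cd "$PBS_O_WORKDIR"
# start one IPython engine per MPI rank; they connect back to the
# ipcontroller that was launched on the login node
mpiexec -n 16 ipengine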
I would set up a VPN server on the master, and connect to that with a VPN client on my local machine. Once established, the virtual private network will allow all of the slaves to appear as if they're on the same LAN as my local machine (on a "virtual" network interface, in a "virtual" subnet), and it should be possible to ssh to them.
You could possibly establish that VPN over SSH ("ssh tunneling", as you mention); other options are OpenVPN and IPsec.
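For example, one hedged way to get that VPN-over-SSH behaviour is a tool like sshuttle, which routes a whole subnet through the head node over a single SSH connection (the 10.0.0.0/24 range and the user name are placeholders):
# forward traffic for the cluster's internal subnet through the head node
sshuttle -r user@head 10.0.0.0/24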
I don't understand what you mean by "this requires enough ports to be open to accommodate all the nodes". You will need: (i) one inbound port on the master, to provide the VPN/tunnel, (ii) inbound SSH on each slave, accessible from the master, (iii) another inbound port on each slave, over which the master drives the IPython engines. Wouldn't (ii) and (iii) be required in any setup? So all we've added is (i).