Connecting 2 laptops for MPI - parallel-processing

I am new to MPI, but after playing around with a few sample MPI programs (in C) I got some familiarity with it. However, when I try to connect two laptops (via the college LAN),
I am unable to get things working.
I am following this link.
I completed all the steps up to "ssh-copy-id node 1".
After giving this command I get an error message saying:
"ERROR: No identities found".
If you can tell me where I went wrong, or can suggest a better way to get
this done, it would be great.
(I want to run an MPI program on two laptops connected via LAN.)

ssh-copy-id is reporting that it cannot find an SSH key in the default location. You either need to specify the location of your public key with the -i option or, if you have no key yet, generate one with:
ssh-keygen
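A minimal sketch of the whole flow, assuming the other laptop is reachable as node1 with user mpiuser (both names are placeholders for your own hostname and account):

```shell
# Generate an ed25519 keypair in the default location (~/.ssh).
# You will be prompted for a passphrase; an empty one allows
# non-interactive MPI launches.
ssh-keygen -t ed25519

# Install the public key on the other laptop (placeholder user/host):
ssh-copy-id mpiuser@node1

# Verify that passwordless login now works:
ssh mpiuser@node1 hostname
```

Repeat the ssh-copy-id step in the other direction if both laptops need to launch MPI processes on each other.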


Make golang program restart itself

I'm writing a tool, and one of its commands allows you to start a new session.
How can I make a Go program restart itself? If your solution is OS-specific, I'm on Linux.
I tried
// exec from os/exec
exec.Command(os.Args[0]).Run()
but it doesn't work. I get a blank input session, which is hard to explain.
My Program Input: session new
:(
:(
Each :( represents a blank line where I'm able to type stuff and hit enter; there are two because I hit enter twice.
I'm expecting:
My Program Input: session new
My Program Input:
Edit: more accurately, I want to make a subprocess of the same program.
You could use a separate monitoring process, like radovskyb/gobeat.
Example:
sudo gobeat -pid=1234 -cmd="go run sendemail.go"
Run with sudo so gobeat will restart the server in the same terminal tty that it originated in. (sudo)
Point gobeat to the process of the running server that you want gobeat to monitor. (gobeat -pid=1234)
Set the cmd flag to run a Go file that will send an email notifying you that the server was restarted. (-cmd="go run sendemail.go")
If you do not want a separate process, then consider implementing a graceful upgrade.
You could use the cloudflare/tableflip library, for instance.

Using SSHMon plugin with Jmeter- Plugin not capturing any stats

I have been working with JMeter for quite some time now and I have been trying to use the SSHMon plugin, but I am stuck: even after configuring it completely it simply says "Waiting for samples" and does not render anything on the graph.
I am trying to execute the command on a Linux box and have passed all the relevant parameters for collecting the stats, but I am still not able to capture anything. Any help or pointers will be appreciated.
I also tried connecting to the Linux box using PuTTY and executing the command, and the command does work; but when I execute the test, the plugin does not capture anything.
Please find the screenshot attached.
In the majority of cases the answer lives in the jmeter.log file; check it for any suspicious entries, as a cause will most probably be identified there if something is not working. Also make sure to actually run your test: SSHMon is a Listener and relies on Sampler results, so if your test is not running, it will not show anything.
As an alternative you can use the JMeter PerfMon plugin, which has an EXEC metric, so you can collect the same numbers; however, PerfMon requires its Server Agent to be up and running on the remote Linux system.
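A quick way to surface problems in the log, assuming JMeter was started from its bin directory so jmeter.log sits in the current directory (adjust the path otherwise):

```shell
# Show the most recent ERROR/WARN entries from the JMeter log:
grep -iE 'error|warn' jmeter.log | tail -n 20
```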
After a lot of trial and error I was able to get SSHMon working. Please find the solution below.
OK guys, so it is a lot trickier than you would expect. I had thought that installing the PerfMon agent on the server was what made JMeter collect the stats for the SSHMon listener, but there is a catch. To start off: installing the PerfMon agent on the servers and then using that plugin to collect the stats works smoothly, and you can definitely use that option. But it requires the agent to be started every time you want to run a test, and if there are multiple servers you will have to restart it on each of them. I am not sure whether there is a way to automate restarting the agent or to keep it running for longer. If you are lazy like me, have installation restrictions on the servers, or are set on using SSHMon, then do the following.
You should always start JMeter with these command-line arguments:
jmeter -H "Proxy" -P "Port" -u "UserName" -a "Password"
The arguments are self-explanatory. Once you do that JMeter will be launched, but wait, it is not done yet!
When you start executing your test, the command prompt in which you started JMeter will show "Kerberos Username [YourUsername]:"; enter the username you use to start JMeter or log in to your system. It will then prompt "Kerberos Password for UserName:"; enter your password and voila!
The thing is, this happens in the background, so you never notice what is happening on the command prompt you used to start JMeter.
Please see below for more clarity.
Kerberos Username [UserName]: UserName
Kerberos Password for UserName: Password
I have attached a screenshot in the question and here as well, showing the issue being resolved; please refer to "Solution ScreenShot". Cheers!!
Hope this helps, guys! :)
Also please upvote the answer if it helps you! :)

Erlang Nodes See Each Other Only After Ping

I am running some Erlang code on macOS, and I have a weird issue. My application is a multi-node app with a single instance of a server that is shared between nodes (global).
The code works perfectly, except for one annoying thing: the different Erlang nodes (I am running each node in a different terminal window) can only communicate with each other after a ping!
So if on terminalA I start the server, and on terminalB I run
erl> global:registered_names().
terminalB will return an empty list, unless, before starting the server on terminalA, I have run a ping (from either one of the terminals).
For example, if I do this in either terminal before starting the server:
erl>net_adm:ping("terminalB").
then I start the server and from the second terminal I list the processes:
erl>global:registered_names().
This time I WILL see the registered process from the second terminal.
Is it possible that the mere net_adm:ping call does some kind of work (like DNS resolving or something like that) that allows the communication?
The nodes in a distributed Erlang system are loosely connected. The
first time the name of another node is used, for example if
spawn(Node,M,F,A) or net_adm:ping(Node) is called, a connection
attempt to that node will be made.
I found this in the Distributed Erlang section of the reference manual: http://www.erlang.org/doc/reference_manual/distributed.html#id85336
I think you should read that page.

Debugging Linux Kernel Module

I have built a Linux kernel module which helps in migrating a TCP socket from one server to another. The module is working perfectly, except that when the importing server tries to close the migrated socket, the whole server hangs and freezes.
I am not able to find the root of the problem; I believe it is something beyond my kernel module code, something I am missing when I recreate the socket on the importing machine and initialize its state. It seems that the system is entering an endless loop, but when I close the socket from the client side, this problem does not appear at all.
So my question: what is the appropriate way to debug the kernel module and figure out why it is freezing? How do I dump error messages, given that in my case I am not able to see anything: once I close the file descriptor related to the migrated socket on the server side, the machine freezes.
Note: I used printk to print all the values, and I am not able to find anything wrong in the code.
Considering your system is freezing, have you checked whether it is under heavy load while migrating the socket (for example via sar reports)? See if you can capture a vmcore (after configuring kdump) and use the crash tool to narrow down the problem. First, install and configure kdump; then you may need to add the following lines to /etc/sysctl.conf and run sysctl -p:
kernel.hung_task_panic=1
kernel.hung_task_timeout_secs=300
Next, get a vmcore/dump of memory:
echo 'c' > /proc/sysrq-trigger # ===> 1
If you still have access to the terminal, use the sysrq trigger to dump the stack traces of all kernel threads into the syslog:
echo 't' > /proc/sysrq-trigger
If your system is hung, try the keyboard hotkeys instead:
Alt+PrintScreen+'c' # ====> same as 1
Other things you may want to try, assuming you have not already tried some of the below:
1. Call dump_stack() in your code.
2. Add printk lines such as printk(KERN_ALERT "Hello msg %ld\n", err); to the code.
3. dmesg -c; dmesg
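Note that writes to /proc/sysrq-trigger only take effect when the magic SysRq interface is enabled; a quick sketch to check and enable it (1 enables all SysRq functions, other values are a bitmask):

```shell
# Check the current SysRq mask (0 = disabled, 1 = all functions enabled):
cat /proc/sys/kernel/sysrq

# Enable all SysRq functions until the next reboot:
echo 1 | sudo tee /proc/sys/kernel/sysrq
```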

Fool a program run from within a shell script into thinking it's reading from a terminal

I'd like to write a shell script that does something like the following
while read line; do
echo $line
done<input.txt | ssh > output.txt
This is a bit pseudo-codey at the moment (the original is at work), but you should be able to tell what it's doing. For simple applications this works a treat, but ssh checks whether its stdin is a terminal.
Is there a way to fool ssh into thinking that the contents of my piped loop are a terminal rather than a pipe?
EDIT: Sorry for not adding this originally; this is intended to allow ssh to log in via the shell script (answering the password prompt).
ssh -t -t will do what you want: this tells ssh to allocate a pseudo-terminal no matter whether it is actually running in one.
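Applied to the loop sketched in the question, with user@host standing in for the real destination:

```shell
# Force pseudo-terminal allocation even though stdin is a pipe;
# the second -t forces it even when ssh itself has no local tty.
while read -r line; do
    printf '%s\n' "$line"
done < input.txt | ssh -t -t user@host > output.txt
```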
Update
This problem (after updating your question and various comments, it became clear you are looking for a convenient way to get public-key authentication in place) could perhaps be solved by 'thinking upside down'.
Instead of trying very hard to get your client's public key onto a server that doesn't yet authenticate the client, you can try to receive an authenticated identity (a private key) from that server.
In simple terms: generate a keypair on the server instead of the client, and then find a way to get the keypair onto the client. The server can put the public key in its authorized_keys in advance, so the client can connect right away.
Chances are that:
the problem of getting the key across is more easily solved (you could even use a 'group' key for access from various clients);
if a less secure mechanism is chosen (convenience over security), at least only the security of the client is reduced, not so much that of the server (directly).
Original answer:
Short answer: nope. (It would be a security hole in ssh, because ssh 'trusts' the tty for password entry, and the tty only.)
Long answer: you could try to subvert or creatively use a terminal emulator (look at script/scriptreplay for inspiration).
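One concrete way to "creatively use" a terminal emulator, assuming the Linux util-linux version of script(1) (the BSD/macOS variant takes different flags): script allocates a pseudo-terminal for the command it runs, so the command sees a tty on its stdin/stdout.

```shell
# Run a command under a pty allocated by script(1); -q suppresses
# script's own banner, and the transcript is discarded to /dev/null.
# 'tty' here just demonstrates that the child really gets a pty.
script -qc 'tty' /dev/null
```

For driving ssh password prompts specifically, the purpose-built expect tool is the more usual choice.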
Why would you want to do it?
