Multiple commands in the same session with SSH

I am currently using the Trilead ssh2 library, and when I try to execute multiple commands (using execCommand) in the same session, I get an "a remote execution has already started" error.
Just wanted to clarify: since a session is limited to one system command, does that mean I can only send one command through execCommand() per connection? Is there any alternative besides chaining multiple commands together with semicolons?
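For what it's worth, this is how the Trilead/Ganymed SSH-2 API is designed: a Session carries exactly one command, but you can open a fresh Session per command over the same Connection, so you avoid both reconnecting and semicolon-chaining. A minimal sketch (host, credentials, and commands are placeholders):

import com.trilead.ssh2.Connection;
import com.trilead.ssh2.Session;
import com.trilead.ssh2.StreamGobbler;

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class MultiCommand {
    public static void main(String[] args) throws Exception {
        Connection conn = new Connection("server.example.com");   // placeholder host
        conn.connect();
        if (!conn.authenticateWithPassword("user", "secret"))     // placeholder credentials
            throw new IllegalStateException("authentication failed");

        // One Session per command; all sessions share the single TCP connection
        for (String cmd : new String[] { "uptime", "df -h" }) {
            Session sess = conn.openSession();
            try {
                sess.execCommand(cmd);
                BufferedReader out = new BufferedReader(
                        new InputStreamReader(new StreamGobbler(sess.getStdout())));
                String line;
                while ((line = out.readLine()) != null)
                    System.out.println(line);
            } finally {
                sess.close();                                     // a Session is single-use
            }
        }
        conn.close();
    }
}

Each openSession() is just a new channel multiplexed over the existing connection, so the per-command overhead is small.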

Continuously monitor an ssh connection using tail -f?

My Situation
I am using the SSH Command sampler to retrieve log info from a remote Linux server using the tail command. A separate thread group logs users in, which causes the log file to be updated with the login information. To get the data I need, I use a Regular Expression Extractor on the response data. For every thread, JMeter creates a new SSH connection to retrieve the new messages in that log file.
Here's my current setup:
Thread Group
- Get random user
- Login User
- SSH into server using tail command
- Extract relevant data
My Question
Instead of SSHing into the Linux server on every thread, causing extra load and extra log messages, I want to connect once at the start of the test and continuously extract from the log file (using the tail -f command, for example, combined with the regex). Is this possible?
I would say that it is not possible or at least not easy.
In order to extract the data from the response using the Regular Expression Extractor you need a SampleResult, which means the SSH Command request must complete. If you use tail -f, the request will never end.
If you want to minimize the number of connections, you can consider using the JSch library to establish the connection once and execute all the commands within the bounds of a single Session (each command on its own exec channel).
If the above makes sense, consider migrating to the JSR223 Sampler and the Groovy language; you can find example code for executing a remote command over SSH in, for example, JSch's Exec.java sample class.
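For illustration, here is a minimal sketch of that pattern in plain Java (which also runs as-is inside a Groovy JSR223 sampler, since Groovy accepts Java syntax). The host, credentials, log path, and polling loop are placeholder assumptions:

import com.jcraft.jsch.ChannelExec;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class TailOverOneSession {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        Session session = jsch.getSession("user", "server.example.com", 22); // placeholders
        session.setPassword("secret");
        session.setConfig("StrictHostKeyChecking", "no"); // acceptable for a test rig only
        session.connect();

        // Reuse the one Session; each command gets its own short-lived exec channel.
        // A bounded "tail -n" completes (unlike "tail -f"), so each poll can produce a result.
        for (int i = 0; i < 3; i++) {
            ChannelExec channel = (ChannelExec) session.openChannel("exec");
            channel.setCommand("tail -n 50 /var/log/app.log");   // placeholder path
            BufferedReader out = new BufferedReader(
                    new InputStreamReader(channel.getInputStream()));
            channel.connect();
            String line;
            while ((line = out.readLine()) != null)
                System.out.println(line);
            channel.disconnect();
            Thread.sleep(1000);                                  // poll interval
        }
        session.disconnect();
    }
}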

Send an argument/command to an already running PowerShell script

Until we can implement our new HEAT SM system, I need to create some workflows to ease our currently manual user-administration processes.
I intend to use PowerShell to execute the actual tasks, but need to use VBScript to send an argument to PowerShell from an app.
My main question on this project is: can an argument be sent to an already running PowerShell process?
Example:
We have a PowerShell menu app that we will launch in the morning and leave running all day.
I would love for there to be a way for PowerShell to listen for commands/args and act on them as they come in.
The reason I want to do it this way is that one of the tasks needs to disable Exchange features, and the script will need to establish a connection to a remote PSSession, which, in our environment, can take between 10 and 45 seconds. If I were to invoke the command directly from HEAT (call-logging software), it would lock up, preventing the tech from moving on to another case until the script terminates.
I have searched all over for similar functionality, but I fear that this is not possible with PowerShell.
Any suggestions?
I had already set up a script following this recommendation, but I was curious whether there was a more seamless approach.
As suggested in one of the comments, by @Tony Hinkle:
I would have the PowerShell script watch for a file, and then have the VBScript create a file containing the arguments. You would either need to start the watcher on another thread (since the menu is waiting for user input), or just use a separate script that in turn starts another instance of the existing PowerShell script with a param used to specify the needed action.

Trigger a mainframe job from Windows machine

I am converting my Windows script that uses FTP over to SFTP.
To trigger the mainframe job we had the commands below:
quote site filetype=jes
put C:\Test\test.dat
bye
Now I connect with:
sftp.exe uname#servername
But site filetype=jes does not work in SFTP. What would be the equivalent SFTP command to trigger the mainframe job by sending a trigger file?
There are several options:
You can use a different SFTP server (such as the Co:Z product mentioned in another answer).
You can wrap a conventional FTP session in a secure network session (VPN, SSH, etc) in a way that keeps the connection secure, but doesn't require SFTP. This gives you the security of SFTP while letting you continue to use your existing FTP scripting unchanged.
You can swap FTP for more of a shell approach (SSH): log in to the mainframe and submit your JCL from there. Once you have any sort of shell session, there are many ways to submit JCL - see http://www-01.ibm.com/support/knowledgecenter/SSLTBW_1.13.0/com.ibm.zos.r13.bpxa500/submit.htm#submit for an example.
A slight variant on #3 (above) is to have a "submit JCL" transaction in something like a web server, if you're running one on z/OS. This gives you a way to submit JCL using an HTTP request, say through curl or wget (if you go this way, be sure someone carefully reviews the security around this transaction... you probably don't want it open to the outside world!).
If this is something you do over and over, and if your site uses job scheduling software (CA-7, Control-M, OPC, Zeke, etc...most sites have one of these), almost all these products can monitor for file activity and launch batch jobs when a file is created. You'd simply create a file with SFTP "PUT", and the job scheduling software would do its thing.
Good luck!
If you're using the Co:Z SFTP server on z/OS you can submit mainframe batch jobs directly.
Strictly speaking this isn't a trigger file, but it does appear to be the equivalent of what you describe as your current FTP process.

Deny parallel SSH connections to a server from a specific host / IP

I have a bot machine (controlled via a mobile device) which connects to the server and fetches information from it by means of SSH, shell scripts, OS commands, SQL queries, etc., then feeds that information over the (private) internet.
I want to disallow multiple connections to the server from the bot machine ONLY; there are other machines which connect to the server that must not be affected.
Suppose
Client A, from his mobile, accesses the bot machine (via the webpage); the bot machine then connects to the server (1st session). If the work for this connection takes 5 minutes, during that period the bot machine will be creating, querying, deleting, appending, updating, etc.
In the middle of that 5-minute window (say 2 minutes after the 1st session started), Client B accesses the bot machine from his mobile (via the webpage), and the bot machine connects to the server (2nd session). It will conflict with the 1st session and create havoc...
Limitation
First of all, I do not want to edit any settings on the SERVER WHATSOEVER.
I do not want to edit the webpage/mobile app, etc.
I already know about the lock-file method for preventing parallel shell scripts, and it is implemented at the script level, but what about OS commands and other operations that are not in a bash script?
My Thought
What I thought was: whenever we create a connection to the server, it creates a process (SSH) which is viewable in ps -fu OSUSER, so by applying a unique id/tag/name to our connection we could identify whether a session is already active. This would be checked as soon as the bot connects to the server, but I do not know how to do that. Please also suggest any other approaches.
Also, is there a way to identify whether an existing process is hung, or when the process started or how long it has been running?
Maybe try using limits.conf to enforce a hard limit of 1 login for the bot's user/group.
You might need a periodic cron job to check for and remove any stale logins.
Locks/mutexes are hard to get right and add complexity. limits.conf is a standard feature of most Unix/Linux systems and should be more reliable, emphasis on should...
A similar question was raised here:
https://unix.stackexchange.com/questions/127077/number-of-ssh-connections-on-a-single-linux-machine
Details here:
http://linux.die.net/man/5/limits.conf
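For example (a sketch, not a drop-in fix): assuming the bot logs in as a dedicated account, called botuser here purely for illustration, and that sshd authenticates through PAM (UsePAM yes) so that pam_limits is applied, a single line in limits.conf caps its concurrent logins while leaving every other account alone:

# /etc/security/limits.conf
# "botuser" is a hypothetical account name for the bot's SSH login
botuser    hard    maxlogins    1

A second login attempt by botuser is then refused at authentication time; other users keep their normal limits.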
I assume you have a single login for the SSH account and that it runs a script on login.
Add something like this to that login script:
#!/bin/bash
LOCK_FILE="/tmp/sshlock"

# Clean up the lock if this script is interrupted, so it cannot go stale
trap 'rm -f "$LOCK_FILE"; exit' SIGHUP SIGINT SIGTERM

# Bail out if a lock exists and is younger than 30 minutes
# (i.e. another session is, or recently was, active)
if [ -e "$LOCK_FILE" ] && (( $(date +%s) - $(stat -L --format %Y "$LOCK_FILE") < 30*60 )); then
  exit 0
fi

touch "$LOCK_FILE"
When the processes that the SSH login calls have finished, delete the $LOCK_FILE.
The trap statement is an important part of this way of locking; please do use it.
The 30*60 is a 30-minute timeout, courtesy of the answer to the question "How can I tell if a file is older than 30 minutes from /bin/sh?"

Ruby - How to start an ssh session and dump user into it - no Net::SSH

So I've got dozens of servers I connect to and want a simple Ruby script that provides me with a list of those servers. Picking one will start up SSH with the proper connection details and let me start using it. That's it!
But I don't want/need Ruby to keep running. If I did, I could use Net::SSH, capture all output, and send it back to the user, but that is an extra layer I don't need. I simply want to use Ruby as a "script starter" that then closes itself.
Any ideas? I've thought about forking processes, but I don't know how I'd attach the terminal to the new SSH process.
I have a simple bash script that does this already, but I want to add more functionality, like being able to add to the list, remove servers, etc., all from the command line. I'm sure I could do this with bash as well, but I'm much more comfortable with Ruby.
Maybe exec will do the trick: Kernel#exec replaces the current Ruby process with the command you give it (e.g. exec "ssh", "user@host", with your real connection details), so the terminal is handed straight over to ssh and Ruby goes away, exactly as you describe.
http://en.wikipedia.org/wiki/Exec_(operating_system)
http://ruby-doc.org/core/classes/Kernel.html#M005968
