Deny parallel ssh connections to server for specific host / IP - bash

I have a bot machine (controlled via a mobile device) which
connects to the server and fetches information from it via
ssh, shell scripts, OS commands, SQL queries, etc., then feeds that
information out over a private internet connection.
I want to disallow these multiple connections to the server from the
bot machine ONLY; there are other machines connecting to the server which must not be affected.
Suppose
Client A, from his mobile, accesses the bot machine (via a webpage), and the bot
machine connects to the server (1st session). If this
connection takes 5 minutes, then during that period the bot machine will be
creating, querying, deleting, appending, updating, and so on.
In the middle of that 5-minute window (say 2 minutes after
the 1st session started), Client B, from his mobile, accesses the bot machine
(via the webpage), and the bot machine connects to the server (2nd session). It
will now conflict with the 1st session and create havoc...
Limitation
First of all, I do not want to edit any setting on the SERVER
WHATSOEVER.
I do not want to edit the webpage/mobile side either.
I already know about the lock-file method for preventing parallel runs of a shell script, and
it is implemented at the script level, but what about OS commands and
similar things that are not run from a bash script?
My Thought
What I thought was: whenever we create a connection with the server, it
creates a process (e.g. ssh) which is viewable in ps -fu
OSUSER. So by applying a unique id/tag/name to our connection we can
identify whether one session is already active. This would be checked as soon
as the bot connects to the server, but I do not know how to do
that... Please also suggest any further information about this.
Also, is there a way to identify whether an existing process is hung, or
when the process started and how long it has been running?
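A minimal sketch of the tagging idea above, assuming the check runs on the bot machine before it opens a new session; the tag value, OSUSER, and work.sh are all placeholders:

# Sketch only: embed a unique marker in the ssh command line so it is
# visible to pgrep/ps; the tag value, OSUSER, and work.sh are made up
TAG="BOT_SESSION_TAG"
if pgrep -u OSUSER -f "$TAG" > /dev/null; then
    echo "a tagged session is already active" >&2
    exit 1
fi
ssh server "$TAG=1 /path/to/work.sh"

# From another shell: start time and elapsed time of the tagged process,
# one way to judge whether an existing session is hung
ps -o pid,lstart,etime -p "$(pgrep -u OSUSER -f "$TAG" | head -n1)"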

Maybe try using limits.conf to enforce a hard limit of 1 login for the user/group.
You might need a periodic cron job to check for and remove any stale logins.
Locks/mutexes are hard to get right and add complexity. Limits.conf is a standard feature of most unix/linux systems and should be more reliable, emphasis on should...
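For example, a minimal entry, assuming the bot logs in as a dedicated account (botuser is a placeholder); note that maxlogins is enforced by pam_limits, so sshd needs UsePAM yes for it to apply:

# /etc/security/limits.conf - cap the bot account at one concurrent login
botuser    hard    maxlogins    1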
A similar question was raised here:
https://unix.stackexchange.com/questions/127077/number-of-ssh-connections-on-a-single-linux-machine
Details here:
http://linux.die.net/man/5/limits.conf

I assume you have a single login for the ssh account and that it runs a script on login.
Add something like this at the start of that script:
#!/bin/bash
LOCK_FILE="/tmp/sshlock"
# Remove the lock if this script is interrupted or killed
trap 'rm -f "$LOCK_FILE"; exit' SIGHUP SIGINT SIGTERM
# If the lock file exists and is younger than 30 minutes, another
# session is still active, so bail out
if [ -e "$LOCK_FILE" ] &&
   [ $(( $(date +%s) - $(stat -L --format %Y "$LOCK_FILE") )) -lt $((30*60)) ]; then
    exit 0
fi
touch "$LOCK_FILE"
When the processes that the ssh login calls have finished, delete $LOCK_FILE.
The trap statement is an important part of this way of locking; please do use it.
The 30*60 is a 30-minute timeout, thanks to the answer on this question: How can I tell if a file is older than 30 minutes from /bin/sh?

Related

slurm - action_unknown in pam_slurm_adopt

What does "source job" refer to in the description of action_unknown?
action_unknown
The action to perform when the user has multiple jobs on the node
and the RPC does not locate the **source job**. If the RPC mechanism works
properly in your environment, this option will likely be relevant only
when connecting from a login node. Configurable values are:
newest (default)
Pick the newest job on the node. The "newest" job is chosen based
on the mtime of the job's step_extern cgroup; asking Slurm would
require an RPC to the controller. Thus, the memory cgroup must be in
use so that the code can check mtimes of cgroup directories. The user
can ssh in but may be adopted into a job that exits earlier than the
job they intended to check on. The ssh connection will at least be
subject to appropriate limits and the user can be informed of better
ways to accomplish their objectives if this becomes a problem.
allow
Let the connection through without adoption.
deny
Deny the connection.
https://slurm.schedmd.com/pam_slurm_adopt.html
pam_slurm_adopt will try to capture an incoming SSH session into the cgroup corresponding to the job currently running on the host. This option is meant to decide what to do when there are several jobs running for the user who initiates the ssh command.
The 'source job' is the job id of the process that initiates the ssh call. Typically, if you use an interactive ssh session from the frontend, there is no 'source job', but if the ssh command is run from within a submission script, then the 'source job' is the one corresponding to that submission script.
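For reference, the option is set on the module's PAM line; a minimal sketch of an /etc/pam.d/sshd entry using the deny policy described above:

# /etc/pam.d/sshd (sketch) - adopt incoming ssh sessions into the
# user's job cgroup, denying the connection when no source job is found
account    required    pam_slurm_adopt.so action_unknown=deny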

Implement multiple clients reading a file and multiple servers writing to a file via client-server

Below is a question I was asked in an interview.
A datacenter has 10000 servers. We have a single syslog driver which collates the logs from all the servers in the datacenter and writes them to a single file called syslog.log.
Say the datacenter has 1000 admins. At any point in time, any admin can log in to the syslog server and invoke a command, say
getlog --serverid --severity
The command should continuously tail the logs matching the conditions provided by the user until he interrupts it.
Any number of users can concurrently log in to this server and run this command. Each request should be honoured, but with one condition: at any given point in time there can be only one open file descriptor for the syslog.log file.
Implement getlog such that it satisfies the above condition.
I described my approach as a critical-section problem: we can use a mutex/semaphore to lock the file until a user finishes. But the interviewer was expecting something like a client-server model.
How would you serve this functionality using a client-server architecture?
What is the best approach to solve this?
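One way to read the client-server hint: a single daemon owns the only file descriptor on syslog.log and fans lines out to per-client FIFOs, so getlog itself never opens the log. A rough bash sketch; the paths, FIFO layout, and filter format are all assumptions:

# --- daemon: the only process that opens syslog.log ---
mkdir -p /run/getlog/clients
tail -n0 -F /var/log/syslog.log | while read -r line; do
    for fifo in /run/getlog/clients/*; do
        [ -p "$fifo" ] && echo "$line" > "$fifo" &
    done
done

# --- getlog client: registers a FIFO and filters what arrives ---
fifo="/run/getlog/clients/$$"
mkfifo "$fifo"
trap 'rm -f "$fifo"' EXIT
exec 3<>"$fifo"      # <> keeps a writer open so reads never hit EOF
grep --line-buffered "serverid=$1.*severity=$2" <&3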

Trigger a mainframe job from a Windows machine

I am converting my Windows script that uses FTP to SFTP.
To trigger the mainframe job we had the commands below:
quote site filetype=jes
put C:\Test\test.dat
bye
sftp.exe uname#servername
But site filetype=jes does not work in SFTP. What would be the equivalent SFTP approach to trigger the mainframe job by sending a trigger file?
There are several options:
1. You can use a different FTP server (such as the Co:Z product mentioned in an earlier response).
2. You can wrap a conventional FTP session in a secure network session (VPN, SSH, etc.) in a way that keeps the connection secure but doesn't require SFTP. This gives you the security of SFTP while letting you continue to use your existing FTP scripting unchanged.
3. You can swap FTP for more of a shell approach (SSH) to log in to the mainframe and submit your JCL; a sketch follows below. Once you have any sort of shell session, there are many ways to submit JCL - see http://www-01.ibm.com/support/knowledgecenter/SSLTBW_1.13.0/com.ibm.zos.r13.bpxa500/submit.htm%23submit for an example.
4. A slight variant on #3 is to have a "submit JCL" transaction in something like a web server, if you're running one on z/OS. This gives you a way to submit JCL using an HTTP request, say through CURL or WGET (if you go this way, be sure someone carefully reviews the security around this transaction... you probably don't want it open to the outside world!).
5. If this is something you do over and over, and if your site uses job scheduling software (CA-7, Control-M, OPC, Zeke, etc. - most sites have one of these), almost all of these products can monitor for file activity and launch batch jobs when a file is created. You'd simply create a file with SFTP "PUT", and the job scheduling software would do its thing.
Good luck!
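A minimal sketch of option 3, assuming OpenSSH tools on the Windows side and the JCL landing in the mainframe's z/OS UNIX filesystem (host, user, and paths are placeholders):

# Upload the JCL, then submit it from a z/OS UNIX shell over ssh
sftp user@mainframe <<'EOF'
put C:/Test/test.jcl /u/user/test.jcl
EOF
ssh user@mainframe "submit /u/user/test.jcl"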
If you're using the Co:Z SFTP server on z/OS you can submit mainframe batch jobs directly.
Strictly speaking this isn't a trigger file, but it does appear to be the equivalent of what you describe as your current FTP process.

Stop rsync when failover occurs

I have two cPanel servers (A->B) with failover configured in DNS Made Easy. Right now I have rsync set up to sync the /home/account folder from A->B every 4 hours.
So when A fails, B takes over, with up to 4 hours of backlog relative to server A.
My problem is that when A comes back up after a failure, the rsync overwrites B's newer data with A's stale copy, since the rsync runs A->B.
I'd like to know the best method to prevent the rsync from running after the first failover, so that I can handle the sync manually. I am thinking of a shell script that tries to access a text file on server A; if that fails, it stops the cron job from running.
Is this a good way to handle it, or is there an easier way?
Well, I have done something similar on a group of servers at the office. An overview of what I have found to work well: simply run a cron script that keeps the status of each of the other servers in a temporary status file, where the status is updated with calls to ping.
Specifically, the routine works by maintaining a list of hosts to be included in the check. Each host (except for the name matching the machine running the cron job) has a status file maintained in the /tmp directory called hoststatus.$HOSTNAME. Each status file contains either up or down (if the status file does not exist, it is created during the check and assumed up). The status files themselves provide a local means for any script to check the status of each remote host before running.
The cron job that checks the status reads the status file for each remote host and feeds the status to a case statement. When the status is up, a call is made to the remote host with ping -c1 hostname. If the ping succeeds, the script exits (the remote host is up). If the ping fails, the script waits 20 seconds (to ensure the remote isn't simply rebooting, etc.) and checks again. If the second call succeeds, the status remains up and the script exits. If the second call to ping fails, the 20-second wait and retest repeat. If the third test fails, the status file is written as down and the remote host is considered down.
Continuing in the case statement: if the initial status was down, a single check is made with ping. If it succeeds, the status is changed to up; if it fails, it remains down.
A log file is also kept that reflects each change of status, providing a running history of server availability.
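A condensed sketch of that check for a single remote host (hostB, the retry count, and the 20-second wait follow the description above; everything else is an assumption):

STATUS_FILE="/tmp/hoststatus.hostB"
[ -f "$STATUS_FILE" ] || echo up > "$STATUS_FILE"   # assume up if absent
case "$(cat "$STATUS_FILE")" in
  up)
    # three tries, 20 seconds apart, before declaring the host down
    for try in 1 2 3; do
        ping -c1 hostB > /dev/null 2>&1 && exit 0
        [ "$try" -lt 3 ] && sleep 20
    done
    echo down > "$STATUS_FILE"
    ;;
  down)
    ping -c1 hostB > /dev/null 2>&1 && echo up > "$STATUS_FILE"
    ;;
esac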
Something similar would work for your case. If server A goes down, server B could write a simple flag file in similar fashion, something like rsynchold.hostA, that is checked before rsync runs in either direction (A->B or B->A); a sketch follows below. This would allow you manual intervention for the first rsync after a failure, at which time you could remove the rsynchold.hostA file.
This isn't elegant, but it has proven fairly foolproof over the past several years.
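A minimal sketch of that guard, wrapped around the existing cron rsync (paths and the flag-file name are illustrative):

# Skip the scheduled sync while the failover flag is present
if [ -e /tmp/rsynchold.hostA ]; then
    echo "failover detected earlier; manual rsync required" >&2
    exit 0
fi
rsync -a /home/account/ hostB:/home/account/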

On Terminal Server, how does a service start a process in a user's session?

From a Windows service running on a Terminal Server (in the global session), we would like to be able to start a process running a Windows application in a specific user's Terminal Server session.
How does one go about doing this?
The scenario: the Windows service starts at boot time. After a user has logged into a Terminal Server session, based on some criteria known only to the Windows service, the service wants to start a process in that user's session running a Windows application.
An example: we would like to display a 'Shutdown in 5 minutes' warning to the users. The Windows service would detect this condition and start a process in each user session that runs the Windows app displaying the warning. And yes, I know there are other ways of displaying a warning dialog; this is just an example - what we actually want to do is much more invasive.
You can use CreateProcessAsUser to do this - but it requires a bit of effort. I believe the following steps are the basic required procedure:
Get the user's session (WTSQuerySessionInformation).
Get a token for that user (WTSQueryUserToken).
Create a duplicate token for your use (DuplicateTokenEx).
Use the token to create an environment block (CreateEnvironmentBlock).
Launch the application with CreateProcessAsUser, using the block above.
You'll also want to make sure to clean up all of the appropriate handles, tokens, etc., after you've launched the process.
Really late reply but maybe somebody will find this helpful.
You can use PsExec to launch an application on a remote (or local) server inside a specified session by using the following command:
psexec \\COMPUTER_NAME -i SESSION_ID APPLICATION_NAME
Where SESSION_ID indicates the session id in which to launch the application.
You will need to know what sessions are active on the server and which session id maps to which user login. The following thread provides a nice code sample for this exact problem: How do you retrieve a list of logged-in/connected users in .NET?
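If you just need a quick manual look at which sessions exist and their ids before running psexec, the stock qwinsta tool can list them (COMPUTER_NAME is a placeholder):

qwinsta /server:COMPUTER_NAME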
A late reply, but in the answer above, DuplicateToken is not necessary since WTSQueryUserToken already returns a primary token.
