Okay, so I have two shell scripts, a server and a client, where the server is always run as root, and the clients can be run as standard users in order to communicate with the server.
The method I've chosen to do this is with a world-accessible directory containing named pipes (fifos), one of which is world-writable (to enable initial connection requests) and the others are created per-user and writable only by them (allowing my server script to know who sent a message).
This seems to work fine but it feels like it may be over-engineered or missing a more suitable alternative. It also lacks any means of determining whether the server is currently running, besides searching for its name in the output of ps. This is somewhat problematic as it means that writing to the connection fifo will hang if the server script isn't available to read from it.
Are there better ways to do something like this for a shell script? Of course I know I could use an actual program to get access to more capabilities, but this is really just intended to provide secure access to a root service for non-root users (i.e. it's a wrapper for something else).
You could use Unix domain sockets instead of fifos. Domain sockets can be created with nc -lU /path/to/wherever and connected to with nc -U /path/to/wherever. This creates a persistent object in the filesystem (like a fifo, but a socket file rather than a pipe). The server should be able to maintain multiple simultaneous connections over the same socket.
If you're willing to write in C (or some other "real" programming language), it's also possible to pass credentials over Unix domain sockets, unlike fifos. This makes it possible for the server to authenticate its clients without needing to rely on filesystem permissions or other indirect means. Unfortunately, I'm not aware of any widely-supported interface for doing this in a shell script.
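To make the C route concrete, here is a minimal sketch of a Unix-domain socket server that asks the kernel for each connecting client's credentials via the Linux-specific SO_PEERCRED option (the socket path is made up and most error handling is omitted; SCM_CREDENTIALS ancillary messages are the other way to do this):

    #define _GNU_SOURCE            /* for struct ucred / SO_PEERCRED on glibc */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(void)
    {
        int srv = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr;
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, "/run/myservice.sock", sizeof(addr.sun_path) - 1);

        unlink(addr.sun_path);     /* remove a stale socket file from a previous run */
        bind(srv, (struct sockaddr *)&addr, sizeof(addr));
        listen(srv, 5);

        for (;;) {
            int cli = accept(srv, NULL, NULL);
            if (cli < 0)
                continue;

            struct ucred cred;
            socklen_t len = sizeof(cred);
            if (getsockopt(cli, SOL_SOCKET, SO_PEERCRED, &cred, &len) == 0)
                printf("client pid=%ld uid=%ld gid=%ld\n",
                       (long)cred.pid, (long)cred.uid, (long)cred.gid);

            /* ... read the request, act on it as root, write a reply ... */
            close(cli);
        }
    }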
Related
I've set up communication between two applications using named pipes.
The first application creates a named pipe with CreateNamedPipe and uses ReadFile to read the messages sent by the second application. Both applications are able to communicate that way as intended.
Is it somehow possible to identify the sender of a received message?
Without some sort of identification (like getting the sender's exe path) or authorization, any other application could use that pipe to send messages to my application.
(Edit) Further details, because it seems it's important in this case:
The application that creates the pipe is running as a Windows service.
Both applications run locally on the same system.
The GetNamedPipeClientProcessId() will give you the process ID of the client process. You can then open a handle to the process with OpenProcess() and call GetModuleFileNameEx() to determine what application is running in that process. You can then vet the application in whatever way you think best, e.g., you might want to check the identity of the digital certificate or you might prefer to just check that the pathname is as you expect it to be.
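As a rough illustration of that sequence (assuming hPipe is the server's handle after ConnectNamedPipe() succeeds; error handling is trimmed):

    #define _WIN32_WINNT 0x0600   /* GetNamedPipeClientProcessId needs Vista or later */
    #include <windows.h>
    #include <psapi.h>            /* GetModuleFileNameEx; link with Psapi.lib */
    #include <stdio.h>

    /* Called on the server with the pipe handle after a client connects. */
    void IdentifyClient(HANDLE hPipe)
    {
        ULONG pid = 0;
        if (!GetNamedPipeClientProcessId(hPipe, &pid))
            return;

        HANDLE hProc = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, pid);
        if (hProc == NULL)
            return;

        wchar_t path[MAX_PATH];
        if (GetModuleFileNameExW(hProc, NULL, path, MAX_PATH))
            wprintf(L"client pid %lu is running %ls\n", pid, path);  /* vet the path or its signature here */

        CloseHandle(hProc);
    }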
Note that attempting to restrict access to a particular application rather than a particular user is never going to be robust; an attacker could always take control of the approved application and replace its code with their own. Basically it isn't going to be more than a speed bump, but if you feel it is worth doing, it can be done.
If what you really want to know is what user has connected, you should instead be using ImpersonateNamedPipeClient() as already suggested in the comments, followed by OpenThreadToken() and so on. Or better still, set the permissions when creating the named pipe so that only the authorized users are able to connect in the first place.
Now that you've clarified that the client runs with elevated privileges, I can make a more concrete recommendation: do both of the above. Configure the permissions on the named pipe so that only members of the Administrators group can access it; that will ensure that only applications running with elevated privilege can access it. Checking the executable as well won't hurt, but it isn't sufficient by itself, because an attacker could launch a copy of your application, suppress the requested elevation, and inject their own code into the process. (Or, as conio points out, modify their own process to make it look as if they are running your executable; GetModuleFileNameEx() is not intended to be used as a security measure, so it makes no effort to avoid spoofing.)
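For the permissions part, one way to do it is to build the DACL from an SDDL string and pass it to CreateNamedPipe(); this is only a sketch, and the pipe name is a placeholder:

    #include <windows.h>
    #include <sddl.h>    /* ConvertStringSecurityDescriptorToSecurityDescriptor; link with Advapi32.lib */

    HANDLE CreateRestrictedPipe(void)
    {
        SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, FALSE };

        /* DACL granting generic-all to SYSTEM (SY) and the Administrators group (BA) only. */
        if (!ConvertStringSecurityDescriptorToSecurityDescriptorW(
                L"D:(A;;GA;;;SY)(A;;GA;;;BA)", SDDL_REVISION_1,
                &sa.lpSecurityDescriptor, NULL))
            return INVALID_HANDLE_VALUE;

        HANDLE hPipe = CreateNamedPipeW(
            L"\\\\.\\pipe\\MyServicePipe",                /* placeholder name */
            PIPE_ACCESS_DUPLEX,
            PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
            PIPE_UNLIMITED_INSTANCES, 4096, 4096, 0, &sa);

        LocalFree(sa.lpSecurityDescriptor);               /* the pipe keeps its own copy */
        return hPipe;
    }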
I am trying to run a console application (say win_a.exe, which takes a few command-line parameters) from a Ruby script (say lin_r.rb) on Linux. win_a.exe interacts with Windows services on Windows Server 2008. I want to run win_a.exe at a particular point via lin_r.rb, because at that point I have a few parameters that need to be passed to win_a.exe, and I need to get a result back.
I searched online but I did not get any useful links.
One solution in my mind is:
Create an NFS share on Windows and mount it on Linux.
Linux: lin_r.rb writes the parameters/command to a new file created on the NFS share.
Windows: a watchdog program (which I need to write) looks for a new file. When one is found, it executes win_a.exe with the parameters and writes the result to a new output file.
Linux: Yay! Got the result.
Is this good approach? What do you think?
Thanks, Vipul
Your approach could be made to work; however, if I were implementing this, I would use HTTP instead of NFS. Both computers involved are likely already capable of making and receiving HTTP requests, so the setup should be simpler than with NFS.
The basic approach would be to have the Linux-based script make an HTTP request to the Windows machine, with the parameters to the .exe specified as query parameters (if you use a GET request). On the Windows side, your "watchdog" program would be a small web service that responds to the request from the Linux machine, executes the program with the specified options, and returns the result.
The web service on the windows machine can use whatever technology you prefer. I would likely use Sinatra+Thin, but the choice is up to you.
Whichever approach you take, NFS based, HTTP based, or something else, you should make sure you give thought to security. That means that you should not blindly pass the arguments you receive from lin_r.rb to the win_a.exe program. You should only accept specific arguments, and you should make some effort to verify that the person making the request (or writing the file if you use NFS) is someone who you have authorized to have access.
I guess maybe I have missed the obvious, but I am at a loss for a good answer.
I am developing a stand-alone program that will be running on a Linux (Ubuntu?) embedded PC inside a piece of hardware. I want it to be the "thing" SNMP talks to. Well, short of compiling in my own SNMP "daemon" code and persuading Linux to let a general user have access to port 161, I think I'll opt for Net-SNMP's snmpd. I am open to suggestions for better products to use. LGPL, BSD, MIT licenses, please.
I am working separately on the MIB and assigning OIDs, etc. I know what vars I want to set and get, etc.
I have read and reread the stuff on making an SNMP/snmpd agent and/or subagent. Near as I can tell, they are both compiled into snmpd or linked to it as a shared library. Right?
So, how do I get that agent to talk to my separate program running in a separate general user session? Is there a direct technique to use? D-Bus? popen()? Named pipes? Shared memory? Temp files? UDP port? Something better? Or do I really want to turn my program into a .so and let snmpd launch it? I assume at that point I'd be able to tell snmpd where to call in to me to get/set vars. Right?
Thanks!
The "AgentX" protocol is a way for arbitrary applications to supply SNMP services to a running system SNMP daemon. Your application listens on some port other than 161 (typically a library will take care of the details for you), and the system snmpd will forward requests for your OIDs to your subagent. This method doesn't involve linking any code into the system snmpd.
Often an easier way is to configure the system snmpd to run a script to get or set data. The script can, if you like, use some other kind of IPC to talk to your application (such as JSON to an HTTP server, for example).
I am developing an application on Mac OS X. It has two parts: a UI element and a daemon (which needs to run continuously and must restart on being killed). Currently I am using launchctl to restart the daemon.
But there is another issue. I need the two parts of my application to communicate with each other. For this I am using distributed objects (as given here). However, this does not work when I launch the daemon with launchctl. Can anyone suggest some alternative?
I use NSDistributedNotifications to handle this pretty well in one app, even on 10.7. You have to do your own handshaking since this can be lossy (i.e. include an ack notification and resend in case of timeouts). A side effect of this approach is that if there are multiple clients running (particularly under fast user switching), all of them receive the notifications. That's good in the particular case of this app. It's also extremely simple to implement.
For another app, I use two FIFOs. The server writes to one and reads from the other. The client does the opposite. You can of course also use a network socket to achieve the same thing. I tend to prefer FIFOs because you don't have to deal with locking down a network socket.
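A bare-bones sketch of the server side of that FIFO-pair approach, with made-up paths and no error handling, would look something like this:

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define C2S "/tmp/myapp.c2s"   /* client writes, server reads (placeholder paths) */
    #define S2C "/tmp/myapp.s2c"   /* server writes, client reads */

    int main(void)
    {
        char buf[256];
        ssize_t n;

        mkfifo(C2S, 0600);         /* ignore EEXIST if the fifos already exist */
        mkfifo(S2C, 0600);

        /* Opening a fifo for reading blocks until a writer shows up (and vice versa),
           so the server opens its read end first and the client mirrors that order. */
        int in  = open(C2S, O_RDONLY);
        int out = open(S2C, O_WRONLY);

        while ((n = read(in, buf, sizeof(buf))) > 0)
            write(out, buf, (size_t)n);   /* echo back, standing in for real replies */

        close(in);
        close(out);
        return 0;
    }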
That said, what problem are you seeing using distributed objects under launchd? Are you just seeing problems on 10.7 (which changed the rules around the launchd context)?
Are you using launchd to lazy-load the daemon when the port is accessed (this is the normal way to do it)? Have you considered using a LaunchAgent instead of a LaunchDaemon?
EDIT:
Ah... the bootstrap server. Yes. You need to execute things in the correct bootstrap context in order to talk to them. The bootstrap context for the login session is rooted to the windowserver process. LaunchDaemons run in a different context, so they can't directly communicate with the login sessions. Some background reading:
Starting/stopping a launchd agent for all users with GUI sessions
How can you start a LaunchAgent for the first time without rebooting, when your code runs as a LaunchDaemon?
launch agent from daemon in user context
I am not aware of any way to get processes into the correct context without using launchctl bsexec. launchd technically has an API (launchctl uses it), but it is not well documented. You can pull the source from opensource.apple.com.
Even if you stay with NSDistributedObject, I would try to use something other than the bootstrap service if you can. As I mentioned, I tend to use other tools and avoid NSDistributedObject. In my opinion, for the same reasons that REST is better than SOAP, simple protocols are usually better than remote objects. (YMMV)
If you are launching your daemon using sudo launchctl, you should not use CFMessagePort or Distributed Objects for IPC. CFMessagePort and Distributed Objects are implemented using the bootstrap service. (Many Mac OS X subsystems work by exchanging Mach messages with a central service. For such a subsystem to work, it must be able to find the service. This is typically done using the Mach bootstrap service, which allows a process to look up a service by name.)
If you use DO or CFMessagePort, you will run into the bootstrap namespace problem.
When you launch your daemon using sudo launchctl, your service is registered in the root bootstrap namespace, so your clients (running in user sessions) will not be able to use that service.
You can check the registered bootstrap services using:
$ launchctl bslist
$ sudo launchctl bslist   # if you launched the daemon with sudo launchctl
You should use UNIX domain sockets. UNIX domain sockets are somewhat like TCP/IP sockets, except that the communication is always local to the computer. You access UNIX domain sockets using the same BSD sockets API that you'd use for TCP/IP sockets. The primary difference is the address format. For TCP/IP sockets, the address structure (the one you pass to bind, connect, and so on) is (struct sockaddr_in), which contains an IP address and port number. For UNIX domain sockets, the address structure is (struct sockaddr_un), which contains a path. For an example of using UNIX domain sockets in a client/server environment, see the sample code 'CFLocalServer'.
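A minimal sketch of the client side, with a made-up socket path (the CFLocalServer sample referenced above shows the full, production-quality treatment):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(void)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr;
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, "/var/run/mydaemon.sock", sizeof(addr.sun_path) - 1);

        /* The only real difference from TCP/IP is the address: a path, not host:port. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }

        const char msg[] = "hello from the UI\n";
        write(fd, msg, sizeof(msg) - 1);

        char buf[256];
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        close(fd);
        return 0;
    }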
Take a look at this Technical Note TN2083 Daemons and Agents
Daemon IPC Recommendations
Mach Bootstrap Basics
Each user has a separate Mach namespace. You cannot communicate between namespaces. You'll need to use sockets (NSSocketPort) instead, which are not limited in such ways. [1]
I would like to improve the way an application checks that another instance is not already running. Right now we are using named mutexes combined with checking the list of running processes.
The goal is to prevent security attacks (as this is security software). My idea right now is that the only "bulletproof" solution is to write a driver that will serve this kind of information and authenticate clients via signed binaries.
Does anyone solved such problem?
What are your opinions and recommendations?
First, let me say that there is ultimately no way to protect your process from agents that have administrator or system access. Even if you write a rootkit driver that intercepts all system calls (a difficult and unsafe practice in and of itself), there are still ways to use admin access to get in. You have the wrong design if this is a requirement.
If you set up your secure process to run as a service, you can use the Service Control Manager to start it. The SCM will only start one instance, will monitor that it stays up, allow you to define actions to execute if it crashes, and allow you to query the current status. Since this is controlled by the SCM and the service database can only be modified by administrators, an attacking process would not be able to spoof it.
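For instance, the client or UI side can simply ask the SCM whether the service is up instead of scanning the process list; a rough sketch, with the service name as a placeholder (link with Advapi32.lib):

    #include <windows.h>
    #include <stdio.h>

    BOOL IsServiceRunning(const wchar_t *name)
    {
        BOOL running = FALSE;
        SC_HANDLE scm = OpenSCManagerW(NULL, NULL, SC_MANAGER_CONNECT);
        if (!scm)
            return FALSE;

        SC_HANDLE svc = OpenServiceW(scm, name, SERVICE_QUERY_STATUS);
        if (svc) {
            SERVICE_STATUS_PROCESS ssp;
            DWORD needed = 0;
            if (QueryServiceStatusEx(svc, SC_STATUS_PROCESS_INFO,
                                     (LPBYTE)&ssp, sizeof(ssp), &needed))
                running = (ssp.dwCurrentState == SERVICE_RUNNING);
            CloseServiceHandle(svc);
        }
        CloseServiceHandle(scm);
        return running;
    }

    int main(void)
    {
        wprintf(L"running: %d\n", IsServiceRunning(L"MySecureService"));
        return 0;
    }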
I don't think there's a secure way of doing this. No matter what kind of system-unique, or user-unique named object you use - malicious 3rd party software can still use the exact same name and that would prevent your application from starting at all.
If you use the method of checking the currently running processes to see whether an executable with the same name is already running, you'd run into problems if the malicious software has the same executable name. If you also check the path of that executable, then it becomes possible to run two copies of your app from different locations.
If you create/delete a file when starting/finishing - that might be tricked as well.
The only thing that comes to my mind is that you may be able to achieve the desired effect by putting all the logic of your app into a COM object and then having a GUI application interact with it through COM interfaces. This would only ensure that there is a single COM object; you would be able to run as many GUI clients as you want. Note that I'm not suggesting this as a bulletproof method - it may have its own holes (for example, someone could make your GUI client connect to a 3rd-party COM object simply by editing the registry).
So, the short answer - there is no truly secure way of doing this.
I use a named pipe¹, where the name is derived from the conditions that must be unique:
Name of the application (this is not the file name of the executable)
Username of the user who launched the application
If the named pipe creation fails because a pipe with that name already exists, then I know an instance is already running. I use a second lock around this check for thread (process) safety. The named pipe is automatically closed when the application terminates (even if the termination was due to an End Process command).
¹ This may not be the best general option, but in my case I end up sending data on it at a later point in the application lifetime.
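A stripped-down sketch of that check, with the derived pipe name replaced by a placeholder string; FILE_FLAG_FIRST_PIPE_INSTANCE makes CreateNamedPipe() fail with ERROR_ACCESS_DENIED if another process already owns a pipe with that name:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE hPipe = CreateNamedPipeW(
            L"\\\\.\\pipe\\MyApp_SomeUser",          /* app name + user name in practice */
            PIPE_ACCESS_DUPLEX | FILE_FLAG_FIRST_PIPE_INSTANCE,
            PIPE_TYPE_BYTE | PIPE_WAIT,
            1, 4096, 4096, 0, NULL);

        if (hPipe == INVALID_HANDLE_VALUE && GetLastError() == ERROR_ACCESS_DENIED) {
            printf("another instance is already running\n");
            return 1;
        }

        /* ... run the application; the handle (and the pipe name) is released on exit ... */
        CloseHandle(hPipe);
        return 0;
    }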
In pseudo code:
numberofapps = 0
for each process in processes
    if path to module file equals path to this module file
        increment numberofapps
if numberofapps > 1
    exit
See msdn.microsoft.com/en-us/library/ms682623(VS.85).aspx for details on how to enumerate processes.
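Here's roughly what that pseudocode looks like in C using EnumProcesses() and GetModuleFileNameEx() (error handling trimmed; link with Psapi.lib):

    #include <windows.h>
    #include <psapi.h>
    #include <wchar.h>
    #include <stdio.h>

    int main(void)
    {
        wchar_t me[MAX_PATH], other[MAX_PATH];
        DWORD pids[1024], bytes = 0, count = 0;

        GetModuleFileNameW(NULL, me, MAX_PATH);                 /* path of this module */
        EnumProcesses(pids, sizeof(pids), &bytes);

        for (DWORD i = 0; i < bytes / sizeof(DWORD); i++) {
            HANDLE h = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                                   FALSE, pids[i]);
            if (h) {
                if (GetModuleFileNameExW(h, NULL, other, MAX_PATH) &&
                    _wcsicmp(me, other) == 0)
                    count++;                                    /* same executable path */
                CloseHandle(h);
            }
        }

        if (count > 1) {
            printf("another instance is running, exiting\n");
            return 1;
        }
        /* ... continue as the only instance ... */
        return 0;
    }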