How to authenticate NSConnection requests? - macos

(Let's ignore the fact that NSConnection is now deprecated.)
I have a tool that accepts NSConnection connections on a service port. I have an application that launches the tool and then connects to it. That part works.
Now I'd like to make sure that only my particular app can talk to the tool, and that the tool rejects connections from any other tool or app.
How do I best accomplish this?
One idea I had:
Since the app starts the tool, it could pass a "secret" to the tool as an argument, and I would then pass the same secret to the tool whenever I invoke one of its functions over distributed objects. However, that would mean passing that extra argument with every call I make, and I think that's unnecessary overhead.
I would think that I could accept or reject the connection right when the app opens it to the tool, i.e. only once. There is the NSConnectionDelegate protocol, and I suspect I'd have to implement the authentication check in its authenticateComponents:withData: handler, but I cannot find any examples that explain how to do that. Is there any data in that connection attempt that would identify the app requesting the connection, such as its PID?

Do you establish a connection for every call? I wouldn't think so, but if not, why do you think you'd have to pass the secret with every call? It's pretty common for the server to have a check-in method that clients have to call. You could validate the secret in that check-in method.
A malicious client might try to simply skip the check-in method. You can use the -connection:handleRequest: method of NSConnectionDelegate to force clients to call it. Keep a flag for every connection indicating whether you've seen the check-in call. If you have, -connection:handleRequest: can just return NO. If you haven't, examine the selector of the NSDistantObjectRequest's invocation. If it's the check-in method, set your flag and return NO. If it's anything else, terminate the connection.
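A rough sketch of that delegate method is below. The class name, the checkInWithSecret: selector, and the set used to track vetted connections are all hypothetical names, not part of any API, and the code is untested:

// Rough sketch: force every new connection to call a (hypothetical) check-in method
// before any other distributed-object call is dispatched.
#import <Foundation/Foundation.h>

@protocol ToolProtocol
- (void)checkInWithSecret:(NSString *)secret;   // validate the secret in its implementation
@end

@interface ToolConnectionDelegate : NSObject <NSConnectionDelegate>
@property (nonatomic, strong) NSMutableSet *checkedInConnections;
@end

@implementation ToolConnectionDelegate

- (instancetype)init {
    if ((self = [super init])) {
        _checkedInConnections = [NSMutableSet set];
    }
    return self;
}

- (BOOL)connection:(NSConnection *)connection handleRequest:(NSDistantObjectRequest *)request {
    if ([self.checkedInConnections containsObject:connection]) {
        return NO;  // already vetted; let NSConnection dispatch the call normally
    }
    if ([[request invocation] selector] == @selector(checkInWithSecret:)) {
        [self.checkedInConnections addObject:connection];
        return NO;  // let the check-in call itself go through
    }
    // The first message was not the check-in: refuse it and drop the connection.
    [request replyWithException:[NSException exceptionWithName:NSInvalidArgumentException
                                                        reason:@"Client did not check in"
                                                      userInfo:nil]];
    [connection invalidate];
    return YES;     // report the request as handled so it is not dispatched
}

@end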
I know the underlying ports (Mach or socket) have mechanisms for authenticating peers, but I don't see a way to get at that with the abstractions of NSConnection laid over them.
Finally, you are apparently wedded to NSConnection, but this is exactly the sort of thing the NSXPCConnection API is for. Among other things, it will ensure that the service is only visible to the parent app.

Related

How to identify/authorize the sender of a message in a named pipe? (CreateNamedPipe)

I've set up communication between two applications using named pipes.
The first application creates a named pipe with CreateNamedPipe and uses ReadFile to read the messages sent by the second application. Both applications are able to communicate this way as intended.
Is it somehow possible to identify the sender of a received message?
Without some sort of identification (like getting the sender's exe path) or authorization, any other application could use that pipe to send messages to my application.
(Edit) Further details, since they seem to be important in this case:
The application that creates the pipe is running as a Windows service.
Both applications run locally on the same system.
GetNamedPipeClientProcessId() will give you the process ID of the client process. You can then open a handle to that process with OpenProcess() and call GetModuleFileNameEx() to determine what application is running in it. You can then vet the application in whatever way you think best; for example, you might check the identity of its digital certificate, or you might prefer to just check that the pathname is what you expect it to be.
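For illustration, here is a rough C sketch of that lookup, with minimal error handling; the expected pathname is just a placeholder, and the caveats below about robustness apply:

#include <windows.h>
#include <psapi.h>    /* GetModuleFileNameExW; link against Psapi.lib */
#include <wchar.h>

/* Rough sketch: hPipe is assumed to be the server-side handle of a connected
   pipe instance (i.e. after ConnectNamedPipe has succeeded). */
static BOOL VetPipeClient(HANDLE hPipe)
{
    ULONG clientPid = 0;
    if (!GetNamedPipeClientProcessId(hPipe, &clientPid))
        return FALSE;

    HANDLE hProcess = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                                  FALSE, clientPid);
    if (hProcess == NULL)
        return FALSE;

    WCHAR exePath[MAX_PATH];
    BOOL ok = FALSE;
    if (GetModuleFileNameExW(hProcess, NULL, exePath, MAX_PATH) > 0) {
        /* Vet the image however you see fit; a plain path comparison (placeholder
           path here) is the simplest, checking the signature is stronger. */
        ok = (_wcsicmp(exePath, L"C:\\Program Files\\MyApp\\Client.exe") == 0);
    }

    CloseHandle(hProcess);
    return ok;
}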
Note that attempting to restrict access to a particular application rather than a particular user is never going to be robust; an attacker could always take control of the approved application and replace its code with their own. Basically it isn't going to be more than a speed bump, but if you feel it is worth doing, it can be done.
If what you really want to know is what user has connected, you should instead be using ImpersonateNamedPipeClient() as already suggested in the comments, followed by OpenThreadToken() and so on. Or better still, set the permissions when creating the named pipe so that only the authorized users are able to connect in the first place.
Now that you've clarified that the client runs with elevated privileges, I can make a more concrete recommendation: do both of the above. Configure the permissions on the named pipe so that only members of the Administrators group can access it; that will ensure that only applications running with elevated privilege can access it. Checking the executable as well won't hurt, but it isn't sufficient by itself, because an attacker could launch a copy of your application, suppress the requested elevation, and inject their own code into the process. (Or, as conio points out, modify their own process to make it look as if they are running your executable; GetModuleFileNameEx() is not intended to be used as a security measure, so it makes no effort to avoid spoofing.)

How to use security in SD sync without GAM?

How to use security in SD synchronization without GAM?
I need to block unwanted connections. How can I validate the execution of Synchronization.Send() and Synchronization.Receive()?
I cannot use GAM because I have to adapt my application to a pre-existing security system.
There is currently no way to send additional parameters or HTTP headers in these requests, so you'll need other means to identify your user.
One thing you could do is call a procedure before synchronizing, passing the relevant information to identify the user (an authorization token or something like that). Then you should validate that the next call is to the synchronization process, checking for instance that the IP address and the device id are the same.
Where you validate the user's information depends on which synchronization operation we are talking about.
For the Receive operation, you may perform your validations in the Offline Database object's Start event.
For the Send operation, everything is saved to the database by using Business Components. So you may add your validations in all the BCs that are involved.
Note: having said all the above, it is highly recommended that you use GeneXus Access Manager (a.k.a. GAM), where all this is already solved.
Second note: you should use HTTPS in all your connections; otherwise, none of this will be secure.

Which NSXPCConnection related objects do I have to retain myself?

I cannot find any hint in the docs regarding object lifecycle management.
In the XPC service, do I have to keep a strong reference to the NSXPCListener, or does the resume call take care of this effectively?
I'm using Swift and a connection creation object to get most of the stuff out of the main.swift file:
// main.swift
import Foundation

if let dependencies = Dependencies().setUp() {
    // Actually run the service code (and never return)
    NSRunLoop.currentRunLoop().run()
}
I have the hunch that the dependencies object (which creates the NSXPCListener during set up) should keep a strong reference to the listener object. But the resume method is said to work like operation queues.
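To make that concrete, the setup I'm describing looks roughly like this (hypothetical names throughout, written in current Swift syntax; this just illustrates the hunch, it isn't meant to answer the question):

import Foundation

// Hypothetical service protocol and exported object, just so the sketch is complete.
@objc protocol ServiceProtocol {
    func ping(reply: @escaping (String) -> Void)
}

final class ExportedTool: NSObject, ServiceProtocol {
    func ping(reply: @escaping (String) -> Void) { reply("pong") }
}

// The "connection creation object" from main.swift above.
final class Dependencies: NSObject, NSXPCListenerDelegate {
    private var listener: NSXPCListener?    // the strong reference in question

    func setUp() -> Dependencies? {
        let listener = NSXPCListener(machServiceName: "com.example.mytool") // hypothetical label
        listener.delegate = self
        listener.resume()
        self.listener = listener
        return self
    }

    func listener(_ listener: NSXPCListener,
                  shouldAcceptNewConnection newConnection: NSXPCConnection) -> Bool {
        newConnection.exportedInterface = NSXPCInterface(with: ServiceProtocol.self)
        newConnection.exportedObject = ExportedTool()
        newConnection.resume()
        return true
    }
}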
Conversely, does the client need to keep the NSXPCConnection around?
In the XPC service, upon incoming connection, does setting exportedObject retain that object for the duration of the connection, or do I have to keep a strong ref to it myself?
Consequently: When multiple connections come in, should I maintain a list of exportedObjects?
In either the client or the service, should I obtain a remoteObjectProxy once and keep it around, or should I obtain a proxy object anew for every call?
My particular XPC service is a launchd process running all the time, not a one-off thing, and the client app itself might run for a few hours in the background, too. I worry about whether it's safe to keep a proxy object to the background service around for potentially long-running communication.
If background services crash, launchd is said to restart them. Now, if my service were a "launch on demand" service instead, would message calls to proxy objects trigger a relaunch if necessary, would obtaining a proxy object do so, or would only reconnecting achieve that?
Thanks for helping me sort this out!

Best way to initialize initial connection with a server for REST calls?

I've been building some apps that connect to a SQL backend. I use AJAX calls to hit WebMethods, a Web API, etc.
I notice that the very first call to the SQL backend retrieves data fairly slowly. I can only assume this is because it must negotiate credentials before retrieving the data. It probably caches these somewhere, and thus any calls made afterwards come back very fast.
I'm wondering if there's an ideal, or optimal way, to initialize this connection.
My thought was to make a simple GET call right when the page loads (grabbing something very small, like a single entry). I probably wouldn't be using the returned data in any useful way, other than to ensure that any calls afterwards come back faster.
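For instance, something as small as this right when the page loads (the endpoint name is made up):

// Fire one tiny request on page load so the backend warms up its connection
// pool before the user's first real query. "/api/ping" is a hypothetical endpoint.
window.addEventListener("load", () => {
  fetch("/api/ping", { cache: "no-store" })
    .catch(() => { /* warm-up is best effort; ignore failures */ });
});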
Is this an okay way to approach fixing the initial delay? I'd love to hear how others handle this.
Cheers!
There are a number of reasons why your first call could be slower than subsequent ones:
- Depending on your server platform, code may be compiled when first executed.
- You may not have an active DB connection in your connection pool.
- The database may not have cached indices or data for the first call.
- Some VM platforms may take a while to allocate sufficient resources to your server if it has been idle for a while.
One way I deal with those types of issues on the server side is to add startup code to my web service that fetches data likely to be used by many callers when the service first initializes (e.g., lookup tables or user credential tables).
If you only control the client, consider that you may well wish to monitor server health (I use the open source monitoring platform Zabbix. There are also many commercial web-based monitoring solutions). Exercising the server outside of end-user code is probably better than making an extra GET call from a page that an end user has loaded.

Cache an FTP connection via session variables for use via AJAX?

I'm working on a Ruby web application that uses the Net::FTP library. One part of it allows users to interact with an FTP site via AJAX. When the user does something, an AJAX call is made, and then Ruby reconnects to the FTP server, performs an action, and outputs information.
Every time the AJAX call is made, Ruby has to reconnect to the FTP server, and that's slow. Is there a way I could cache this FTP connection? I've tried caching it in the session hash, but "We're sorry, but something went wrong" is displayed and a TCP dump is output in my logs whenever I attempt to store it there. I haven't tried memcache yet.
Any suggestions?
What you are trying to do is possible, but far from trivial, and Rails doesn't offer any built-in support for it. In fact you will need to descend to the OS level to get this done, and if you have more than one physical server then it will get even more complicated.
First, you can't store a connection in the session. In fact you don't want to store any Ruby object in the session for many reasons, including but not limited to:
- Some types of objects have trouble being marshalled and unmarshalled.
- Deploying could break things if the model changes and people have outdated stuff serialized in their sessions.
- If you are using the cookie session store, you only have about 4 KB.
So in general, you only ever want to put primitives like strings, numbers and booleans into the session.
Now as far as an FTP connection is concerned, this falls into the category of things that can't be serialized and unserialized reliably. The reason is that it's not just a Ruby object: it also has an open socket, which is going to be closed as soon as the original object is garbage collected.
So, in order to keep an FTP connection persistent, it can't be stored in a controller instance variable, because the controller instance is per-request. You could try to instantiate it somewhere outside the controller instance, but that has the potential for memory leaks if you are not very careful to clean up the connections. Besides, if you have more than one app server instance, you would also need a way to guarantee that the user talks to the same app server instance on each request, or it wouldn't be able to find the open connection. So all in all, keeping the FTP session open inside the Ruby process is a non-starter.
What you need to do is open the connection in a separate process that any of the Ruby processes can talk to. There's really no established, standard way to do that; you'll have to roll your own. You could look into DRb to provide some of the primitives you will need.
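For example, here is a very rough DRb sketch of that idea; the broker API, the port, and all names are made up, and there is no error handling or authentication:

# ftp_broker.rb -- run as a separate, long-lived process.
require 'drb/drb'
require 'net/ftp'
require 'securerandom'

class FtpBroker
  def initialize
    @connections = {}
    @mutex = Mutex.new
  end

  # Opens an FTP connection and returns a plain-string token that the Rails app
  # can safely keep in its session.
  def open(host, user, password)
    ftp = Net::FTP.new(host)
    ftp.login(user, password)
    token = SecureRandom.hex(16)
    @mutex.synchronize { @connections[token] = ftp }
    token
  end

  def list(token, path)
    @mutex.synchronize { @connections.fetch(token).list(path) }
  end

  def close(token)
    @mutex.synchronize { @connections.delete(token)&.close }
  end
end

DRb.start_service('druby://localhost:8787', FtpBroker.new)
DRb.thread.join

# Each Rails process would then talk to the broker on every AJAX request, e.g.:
#   broker = DRbObject.new_with_uri('druby://localhost:8787')
#   session[:ftp_token] ||= broker.open('ftp.example.com', user, password)
#   entries = broker.list(session[:ftp_token], '/')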
AJAX can't talk to FTP directly; it's designed for HTTP. That doesn't stop you from writing something that caches the FTP content, though. You probably should profile it to find out what's really slow; my guess is that the FTP access itself is just slow. Caching may be a mixed blessing, though: how do you know when the content of the FTP site changes?
