Can UDS services available in the default session be secured by SecurityAccess (0x27) or Authentication (0x29)?

Am I correct in assuming that diagnostic sessions and SecurityAccess/Authentication are decoupled concepts in UDS? I.e., you can secure any service behind a seed/key or PKI challenge, even the ones in the default session, making them inaccessible to anyone not authorized?
I'm referring to ISO 14229-1:2020.
Why I came across this: the standard defines NRC 0x33 (securityAccessDenied) as a supported NRC for the ECUReset service (0x11). However, ECUReset is available in the default session. If my assumption above were not correct, this wouldn't make sense.
BUT
ReadDTCInformation (0x19) is also available in the default session, but for this service the standard does not define NRC 0x33. However, according to Annex A.1, the manufacturer may implement NRC 0x33 as an additional NRC.
If my assumption is correct, would that mean that any service originally available in the default session would only be available in a non-default session once it is secured? Or can I obtain security access, switch back to the default session, and access the service I want?
In my opinion, the standard is not very clear on this, or at least misleading (as it is in other places too).
Thanks for your help!
I've read the standard, but it isn't clear on this; I searched Google and did not find an answer.

As far as I interpret the standard, you're right. Since you can change sessions without authorization, an ECU might as well send you an NRC in the default session if you're attempting an operation you aren't authorized for.
Note that it's uncommon but, as far as I understand it, not forbidden.
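To make the interpretation concrete, here is a toy ECU model (Python, purely illustrative) in which ECUReset (0x11) answers with NRC 0x33 even in the default session until a SecurityAccess (0x27) seed/key exchange succeeds. The message layouts (0x7F negative-response format, 0x67 positive response to 0x27) follow ISO 14229-1, but the XOR "key algorithm" and the class itself are invented placeholders:

```python
# Toy ECU: ECUReset is gated behind SecurityAccess regardless of session.
class ToyEcu:
    def __init__(self):
        self.session = 0x01        # defaultSession
        self.unlocked = False
        self._seed = bytes([0x12, 0x34])

    def request(self, msg: bytes) -> bytes:
        sid = msg[0]
        if sid == 0x27:                               # SecurityAccess
            if msg[1] == 0x01:                        # requestSeed
                return bytes([0x67, 0x01]) + self._seed
            if msg[1] == 0x02:                        # sendKey
                expected = bytes(b ^ 0xFF for b in self._seed)  # placeholder algo
                if msg[2:] == expected:
                    self.unlocked = True
                    return bytes([0x67, 0x02])
                return bytes([0x7F, 0x27, 0x35])      # invalidKey
        if sid == 0x11:                               # ECUReset
            if not self.unlocked:
                return bytes([0x7F, 0x11, 0x33])      # securityAccessDenied
            return bytes([0x51, msg[1]])              # positive response
        return bytes([0x7F, sid, 0x11])               # serviceNotSupported

ecu = ToyEcu()
print(ecu.request(bytes([0x11, 0x01])).hex())   # 7f1133 — denied in default session
seed = ecu.request(bytes([0x27, 0x01]))[2:]
key = bytes(b ^ 0xFF for b in seed)
ecu.request(bytes([0x27, 0x02]) + key)          # unlock
print(ecu.request(bytes([0x11, 0x01])).hex())   # 5101 — allowed, session unchanged
```

Note that the session never changes here: the unlock alone flips the ECU's decision, which is exactly the "decoupled concepts" reading of the standard.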

Encrypting OkHttp's HttpResponseCache

Are there any examples of using encryption to encrypt the disk cache used by OkHttp's HttpResponseCache? Naively, I don't think this is a very hard thing to do, but I'd appreciate any advice or experience to avoid security pitfalls.
Without too many specifics, here's what I'm trying to achieve: a server that accepts users' API keys (typically a 40-character random string) for an established service X, and makes many API calls on the users' behalf. The server won't persist users' API keys, but a likely use case is that users will periodically call the server, supplying the API key each time. Established service X uses reasonable rate-limiting, but supports conditional (ETag, If-Modified-Since) requests, so server-side caching by my server makes sense. The information is private, though, and the server will be hosted on Heroku or the like, so I'd like to encrypt the files cached by HttpResponseCache so that if the machine is compromised, they don't yield any information.
My plan would be to create a wrapper around HttpResponseCache that accepts a secret key - which would actually be a hash of half of the api-key string. This would be used to AES-encrypt the cached contents and keys used by HttpResponseCache. Does that sound reasonable?
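The key-derivation part of that plan might look like the following sketch (Python for illustration; the real wrapper would be Java). The split point and the plain SHA-256 hash are assumptions from the description above, not a vetted KDF choice — for real use, a proper KDF such as PBKDF2 with a salt would be safer:

```python
import hashlib

def cache_key_from_api_key(api_key: str) -> bytes:
    """Derive a 256-bit key from the first half of the API key string.

    The second half never touches the cache layer, so the on-disk data
    cannot be decrypted from the stored material alone.
    """
    half = api_key[: len(api_key) // 2]
    return hashlib.sha256(half.encode("utf-8")).digest()

key = cache_key_from_api_key("0123456789abcdef0123456789abcdef01234567")
print(len(key))  # 32 — suitable as an AES-256 key
```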
Very difficult to do with the existing cache code. It's a journaled on-disk data structure that is not designed to support privacy, and privacy is not a feature you can add on top.
One option is to mount an encrypted disk image and put the cache in there. Similar to Mac OS X's FileVault for example. If you can figure out how to do that, you're golden.
Your other option is to implement your own cache, using the existing cache as a guide. Fair warning: the OkResponseCache is subject to change in the next release!

Why are sessions in the Snap Framework client side only?

By browsing through the code of the Auth and Session snaplets I observed that session information is only stored on the client (as an encrypted key/value store in a cookie). A common approach to sessions is to only store a session token with the client and then have the rest of the session information (expiry date, key/value pairs) in a data store on the server. What is the rationale for Snap's approach?
For me, the disadvantages of a client side only session are:
The key/value store might get large and use lots of bandwidth. This is not an issue if the session is only used to authenticate a user.
One relies on the client to expire/delete the cookie. Without having at least part of the session on the server, one is effectively handing out a token that's valid for eternity when setting the cookie.
A follow-up question is what the natural way of implementing server side sessions in Snap would be. Ideally, I'd only want to write/modify auth and/or session backends.
Simplicity and minimizing dependencies. We've always taken a strong stance that the core snap framework should be DB-agnostic. If you look closely at the organization, you'll see that we carefully designed the session system with a core API that is completely backend-agnostic. Then we included a cookie backend. This provides users with workable functionality out of the gate without forcing a particular persistence system on them. It also serves as an example of how to write your own backend based on any other mechanism you choose.
We also used the same pattern with the auth system. It's a core API that lets you use whatever backend you want. If you want to write your own backend for either of these, then look at the existing implementations and use them as a guide. The cookie backend is the only one I know of for sessions, but auth has several: the simple file-based one that is included, and the ones included in snaplet-postgresql-simple, snaplet-mysql-simple, and snaplet-persistent.
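The backend-agnostic pattern described above can be sketched as follows (Python for illustration — Snap itself is Haskell, and all names here are invented for the example). The point is that swapping the cookie backend for a server-side store changes nothing in application code:

```python
import secrets
from abc import ABC, abstractmethod

class SessionBackend(ABC):
    """Core session API: application code only ever sees this interface."""
    @abstractmethod
    def save(self, data: dict) -> str: ...   # returns the cookie payload
    @abstractmethod
    def load(self, cookie: str) -> dict: ...

class ServerSideBackend(SessionBackend):
    """Only an opaque token goes to the client; data stays on the server,
    so the server can expire or revoke sessions unilaterally."""
    def __init__(self):
        self._store: dict = {}

    def save(self, data: dict) -> str:
        token = secrets.token_hex(16)
        self._store[token] = data
        return token

    def load(self, cookie: str) -> dict:
        return self._store.get(cookie, {})

backend = ServerSideBackend()
cookie = backend.save({"user": "alice"})
print(backend.load(cookie))  # {'user': 'alice'}
```

A cookie backend would implement the same two methods by serializing and encrypting `data` into the cookie payload itself, which is the trade-off discussed in the question.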

MSMQ security and performance

I was wondering if anyone has done some performance testing with two different approaches for security. Mostly concerned with the server side of things.
1) Using active directory, the user account is validated each time a message is sent.
2) Using certificate, each message is encrypted with a certificate.
My guess would be that decrypting the message is more compute-intensive, hence the Active Directory approach is likely to perform better.
You have a few mixed bits of security there.
Which ones do you require?
Securing a queue against access from accounts you don't want
Ensuring a message is from the account it says it is (authentication)
Ensuring no one can see the message body (encryption)
Let me know and I can give you a better idea of what works performance-wise.
You write "Using active directory, the user account is validated each time a message is sent."
That doesn't sound right. All MSMQ does is put the SID of the sending user account in the message header. This is why you shouldn't rely on just setting account-level access on queues, as anyone can spoof the SID in an MSMQ message.
Cheers
John Breakwell
Being a beginner with MSMQ, I will do my best to answer the questions here.
[1.] Securing a queue against access from accounts you don't want
Answer: My understanding is that if I use a private queue, it will implicitly do that. In other words, if no one knows about it, how can "outsiders" access it?
[2.] Ensuring a message is from the account it says it is (authentication)
Answer: I can debate this. I am not sure it will make a difference in my particular environment, since everything is driven by a custom app sending structured data. If the data is not structured the way it should be, the message will simply be ignored.
[3.] Ensuring no one can see the message body (encryption)
Answer: More relevant here; I do think some level of encryption is needed to prevent any "peeking" at the data.
Finally, I was not aware that the SID was inside the message header.
Let me know how performance is affected by these various security settings. Also, what's your advice on security with regard to MSMQ?
Thx for all the info...
Christian Martin

Is it valid to re-send the "USER" command to an FTP server during a connection?

According to RFC959: FILE TRANSFER PROTOCOL (FTP)
(section 4.1.1):
Servers may allow a new USER command to be entered at any point in order to change the access control and/or accounting information. This has the effect of flushing any user, password, and account information already supplied and beginning the login sequence again. All transfer parameters are unchanged and any file transfer in progress is completed under the old access control parameters.
So we can certainly re-send "USER" to re-authenticate at any time. However, our IT team recently set up a new FTP server on Linux that does not allow a client to re-send the "USER" command before the current session is disconnected. In our IT team's words, this change provides a more robust environment for users.
I'm wondering whether this change is worthwhile and valid. Please give me an authoritative explanation if possible.
Is it worthwhile and valid? Well, your IT team is free to do whatever it wants to satisfy its needs.
But the whole point of standards (including the RFCs) is that you follow them so that there will be no nasty surprises for software that relies on that behaviour.
However, in this case, the magic word is "may". In standards parlance, "may" means exactly that. Servers may allow something. They are not required to do so.
If it was a requirement, they would have used stronger words - "shall" is a favourite of many standards bodies.
From RFC2119, point 5:
"MAY" - this word, or the adjective "OPTIONAL", mean that an item is truly optional.
So I think in this particular case, they are not actually violating the RFC.
Having said that, I'm not sure of the reasoning behind the assertion that it's more robust for the user (presumably at the client end). As the RFC states, it applies only to new transfers initiated after the change. All those currently in progress are unchanged.
Your best bet would be to ask your own people why this is the case.
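Since RFC 959 says a server "may" accept a new USER mid-session, both behaviours are compliant. A toy model of that server-side choice (the class and flag are invented for illustration; the 331/503 reply codes follow common FTP usage):

```python
class FtpSession:
    """Minimal model of an FTP server's USER handling mid-session."""
    def __init__(self, allow_reuser: bool):
        self.allow_reuser = allow_reuser
        self.user = None

    def handle_user(self, name: str) -> str:
        if self.user is not None and not self.allow_reuser:
            return "503 Already logged in."
        # RFC 959 4.1.1: a new USER flushes previously supplied credentials
        self.user = name
        return "331 Password required."

lenient = FtpSession(allow_reuser=True)    # exercises the RFC's "may"
strict = FtpSession(allow_reuser=False)    # your IT team's server
for s in (lenient, strict):
    s.handle_user("alice")
print(lenient.handle_user("bob"))  # 331 Password required.
print(strict.handle_user("bob"))   # 503 Already logged in.
```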

DotNetOpenAuth on web farm

I am implementing DotNetOpenAuth for both an OpenId provider and a relying party. In both cases, the servers are behind a load balancer, so for any HTTP request, we can't assume that we'll hit the same server.
It appears that DotNetOpenAuth depends on the Session to store a pending request key. Because the server may change between requests, we can't depend on the standard InProc Session. Unfortunately, we've been unable to successfully implement SQL as the store for Session.
My question is: is it safe to store a PendingAuthenticationRequest as a client cookie? Any worse than using Session?
The ProviderEndpoint.PendingAuthenticationRequest property is there for your convenience only, primarily for simpler scenarios. If it doesn't work for you, by all means store it another way and totally ignore this property. No harm done there.
Ultimately a session is tracked by an HTTP cookie, so you can certainly store the auth request state entirely in a cookie if you prefer, so that it works in a web farm environment. Another approach is to not require the client (or the server) to track state at all, by either making everything (including authentication) handled directly at the OP Endpoint URL, or redirecting the user from the OP Endpoint URL with a query string that includes all the state information you need to track. Be careful with the latter approach, though, since you'll be exposing your state data for the user to see and possibly tamper with.
In short, you may or may not choose to store user sessions in a SQL store. That should be fine. The issue I think you ran into (that we discussed by email) was that you needed to implement your own IProviderApplicationStore, which will store nonces and associations in a database that is shared across all your web servers. This is imperative to do, and is orthogonal to the user session state since this is stored at the application level.
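The tampering risk mentioned above is usually addressed by signing the round-tripped state before handing it to the client and verifying it on return. A hedged sketch of the general technique (Python for illustration — DotNetOpenAuth itself is .NET, and the names and wire format here are invented; note that an HMAC prevents tampering but not reading, so confidential state would additionally need encryption):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-side secret key"  # must be shared by all servers in the farm

def protect(state: dict) -> str:
    """Serialize state and append an HMAC-SHA256 tag, base64-encoded."""
    payload = json.dumps(state, sort_keys=True).encode("utf-8")
    tag = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload + tag).decode("ascii")

def unprotect(blob: str) -> dict:
    """Verify the tag in constant time, then deserialize the state."""
    raw = base64.urlsafe_b64decode(blob.encode("ascii"))
    payload, tag = raw[:-32], raw[-32:]
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("state was tampered with")
    return json.loads(payload)

cookie = protect({"pending_request": "abc123"})
print(unprotect(cookie))  # {'pending_request': 'abc123'}
```

Because verification only needs the shared secret, any server in the farm can validate state issued by any other, which is the property the load-balanced setup requires.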
