I am having a hard time finding out what factors go into determining whether BCryptGetFipsAlgorithmMode() returns TRUE or FALSE. Does it just return the status of:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy\Enabled
or is it something else?
As I understand it: basically yes.
It's the C/C++ way to ascertain whether the system you're running on has FIPS compliance specified in group policy. Using this function, rather than reading the registry key directly, allows Microsoft to move the registry key around as they see fit, as well as to add other ways in which this rule may be enforced. I suspect that's why they've provided a function and not just details of a key to check.
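For reference, a minimal sketch of calling it from C/C++ (link with bcrypt.lib; BCRYPT_SUCCESS is the status-check macro from bcrypt.h):

// Minimal sketch: query whether FIPS-compliant algorithm policy is enabled.
#include <windows.h>
#include <bcrypt.h>
#include <cstdio>

int main() {
    BOOLEAN fipsEnabled = FALSE;
    NTSTATUS status = BCryptGetFipsAlgorithmMode(&fipsEnabled);
    if (BCRYPT_SUCCESS(status)) {
        printf("FIPS mode: %s\n", fipsEnabled ? "enabled" : "disabled");
    } else {
        printf("BCryptGetFipsAlgorithmMode failed: 0x%08lx\n",
               (unsigned long)status);
    }
    return 0;
}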
Using the Android Management API, I would like to identify if a device has been rooted.
I found the attribute "devicePosture" and the possible values for this attribute are listed in this documentation here.
However, for me, it was not clear what these items mean.
For example:
Does the type "POTENTIALLY_COMPROMISED" mean that the device is rooted or just had its bootloader unlocked?
Does the "AT_RISK" type mean that you have a virus version of android (or something similar)?
Thank you for your help.
You can check this link. Also, to answer some of your questions with regard to device posture:
The value of the security posture reflects the current device state and the policies applied; in other terms, it reflects how secure the device is.
1.) The "POTENTIALLY_COMPROMISED" value means that either SafetyNet's ctsProfileMatch check or its basicIntegrity check fails, or that the device may be compromised and corporate data may be accessible to unauthorized actors. It covers both the bootloader-unlocked and rooted scenarios[1].
2.) "AT_RISK” value means that both SafetyNet's ctsProfileMatch check and basicIntegrity check pass but fails to meet requirements set by the policy (e.g. device's password state, etc.).
To determine what exactly fails, you can check the PostureDetail and its SecurityRisk value.
[1] To understand what SafetyNet's ctsProfileMatch and basicIntegrity fields mean, you can check this link, which also explains which scenarios correspond to each combination of the two checks' values.
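For illustration, here is roughly what the securityPosture field of a devices.get response looks like; the securityRisk value and the advice text below are only examples, not an exhaustive list:

{
  "securityPosture": {
    "devicePosture": "POTENTIALLY_COMPROMISED",
    "postureDetails": [
      {
        "securityRisk": "COMPROMISED_OS",
        "advice": [
          { "defaultMessage": "The device may be rooted." }
        ]
      }
    ]
  }
}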
I am developing an on-premise solution for a client, with no control over the machine and no internet connection on it.
The solution is to be monetized based on the number of allowed requests (REST API calls) for a bought license. So currently we store the request count in an encrypted file on the file system itself. But this solution is not perfect, as the file can be copied somewhere and then restored when the request quota is used up. Also, if the file is deleted, manual intervention from support is needed.
I'm looking for a solution that stores the state/data in the binary and updates it at runtime (consider a usage count that updates in the binary itself).
Looking for a better approach.
Also, the binary should start from the previously stored state.
Is there a way to do it?
P.S. I know writing to the binary won't solve the issue, but I think it'll increase the difficulty by increasing the number of permutations and combinations of places where the state can be stored. And since it's not common knowledge that you can change the executable, that would be the last place to look for the state if someone's trying to mess with the system (security by obscurity).
Is there a way to do it?
No.
(At least no official, portable way. Of course you can modify a binary and change e.g. the data or BSS segment, but this is hard, OS-dependent, and does not solve your problem, as it has the same weakness as an external file: you can just keep a copy of the original executable and start over with that one. Some things simply cannot be solved technically.)
If your REST API is within your control and is the part that you are monetizing, surely this is the point at which you would be filtering licensed requests, perhaps with some kind of certificate authentication or an API key. Then you can keep the count on the API side, which you control, and it won't matter whether it is in a flat file or a DB etc., because you control it.
Here is a solution to what you are trying to do (not to writing to the executable) that will defeat casual copying of files.
A possible approach is to regularly write the request count and the current system time to a file. This file does not even have to be encrypted: you just need to generate a hash of the data (e.g. using SHA-2), sign it with a private key, and append the signature to the file.
Then when you (re)start the service, read and verify the file using your public key, and check that it has not been too long since the time that was written to the file. Note that some initial file will have to be written on installation, and your service will need to be running continually, only allowing for brief restarts. You would probably also verify that the time is not in the future, as that would indicate an attempt to circumvent the system.
Of course this approach has problems, such as the client fiddling with the system time, or even debugging your code to find the private key, and probably others. Hopefully these are hard enough to act as a deterrent. Also, if the service or system is shut down for an extended period of time, some sort of manual intervention will be required.
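A minimal sketch of the sign-and-verify half of this scheme, assuming OpenSSL's EVP API (link with -lcrypto); the record format ("count|timestamp") and the function names are made up for illustration:

#include <openssl/evp.h>
#include <string>
#include <vector>

// Sign a record such as "count|timestamp" with the private key.
static std::vector<unsigned char> signRecord(const std::string& record,
                                             EVP_PKEY* privateKey) {
    EVP_MD_CTX* ctx = EVP_MD_CTX_new();
    EVP_DigestSignInit(ctx, nullptr, EVP_sha256(), nullptr, privateKey);
    EVP_DigestSignUpdate(ctx, record.data(), record.size());
    size_t sigLen = 0;
    EVP_DigestSignFinal(ctx, nullptr, &sigLen);   // query signature length
    std::vector<unsigned char> sig(sigLen);
    EVP_DigestSignFinal(ctx, sig.data(), &sigLen);
    sig.resize(sigLen);
    EVP_MD_CTX_free(ctx);
    return sig;
}

// On (re)start: verify the record against the stored signature.
static bool verifyRecord(const std::string& record,
                         const std::vector<unsigned char>& sig,
                         EVP_PKEY* publicKey) {
    EVP_MD_CTX* ctx = EVP_MD_CTX_new();
    EVP_DigestVerifyInit(ctx, nullptr, EVP_sha256(), nullptr, publicKey);
    EVP_DigestVerifyUpdate(ctx, record.data(), record.size());
    bool ok = (EVP_DigestVerifyFinal(ctx, sig.data(), sig.size()) == 1);
    EVP_MD_CTX_free(ctx);
    return ok;
}

The caller would still parse the timestamp out of the verified record and apply the "not too old, not in the future" checks described above.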
SecCodeCheckValidity:
Performs dynamic validation of signed code.
SecStaticCodeCheckValidity:
Validates a static code object.
This function obtains and verifies the signature on the code specified by the code object. It checks the validity of all sealed components, including resources (if any). It validates the code against a code requirement if one is specified. The call succeeds if all these conditions are satisfactory. This call is only secure if the code is not subject to concurrent modification, and the outcome is only valid as long as the code remains unmodified. If the underlying file system has dynamic characteristics, such as a network file system, union mount, or FUSE, you must consider how secure the code is from modification after validation.
So given this description from Apple's code signing documentation, it is not clear what they mean by "dynamic characteristics" here.
SecStaticCodeCheckValidity verifies the application on disk. In contrast, SecCodeCheckValidity verifies the application in memory, against the same requirements, while it is running.
This attempts to prevent modification via hijacking, injection or other traditional methods of mutating in-memory code by checking if it is still code-signed with a valid signature.
I remember hearing that distinction somewhere during WWDC '09, correct me if I am wrong.
If you want to check whether some running code is signed by Apple and not some designated requirement specified by the programmer, you want:
SecRequirementCreateWithString(CFSTR("anchor apple"), ...)
and then pass the resulting SecRequirementRef to SecCodeCheckValidity. There is no need to interact with the designated requirement in this case, since you've already decided what code is acceptable to you: anything signed by Apple.
In production code, you can use csreq(1) to compile a binary version of "anchor apple" and use SecRequirementCreateWithData instead of SecRequirementCreateWithString, which is faster.
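Putting that together, a minimal sketch assuming the macOS Security framework (compile with -framework Security); error handling is pared down to bare status checks:

// Check whether the current process is validly signed and satisfies
// the requirement "anchor apple" (i.e. signed by Apple).
#include <Security/Security.h>
#include <cstdio>

int main() {
    SecCodeRef selfCode = nullptr;
    SecRequirementRef requirement = nullptr;

    // Obtain a dynamic code object for the running process.
    if (SecCodeCopySelf(kSecCSDefaultFlags, &selfCode) != errSecSuccess)
        return 1;

    // Compile the requirement "anchor apple".
    if (SecRequirementCreateWithString(CFSTR("anchor apple"),
                                       kSecCSDefaultFlags,
                                       &requirement) != errSecSuccess)
        return 1;

    // Dynamic validation of the running code against the requirement.
    OSStatus status = SecCodeCheckValidity(selfCode, kSecCSDefaultFlags,
                                           requirement);
    printf("signed by Apple: %s\n",
           status == errSecSuccess ? "yes" : "no");

    CFRelease(requirement);
    CFRelease(selfCode);
    return status == errSecSuccess ? 0 : 1;
}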
Let us assume we have a valid HCERTSTORE handle of an opened certificate store. How can we determine whether the opened store is physical or system?
Restriction 1 - we should use CryptoAPI (C++) only.
Restriction 2 - we've successfully forgotten what kind of store was used in the CertOpenStore() call.
I don't see a way to solve this with CryptoAPI, and, as constraint 2 is artificial, I don't think the API was designed to address this problem. Closeable handles can't be passed around between processes, so one cannot "forget" what a handle was unless deliberately: the knowledge is right there, in the code that obtained the handle.
Looking through the function list in the left pane at CertOpenStore - MSDN, I see CertGetStoreProperty(), but there's only one predefined property, CERT_STORE_LOCALIZED_NAME_PROP_ID, and it isn't reliable.
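For completeness, a minimal sketch of querying that property (Windows CryptoAPI, link with crypt32.lib), assuming a valid handle obtained elsewhere; as said above, the localized name does not reliably distinguish physical from system stores:

#include <windows.h>
#include <wincrypt.h>
#include <cstdio>
#include <vector>

// hStore is assumed to be a valid HCERTSTORE obtained elsewhere.
void printLocalizedName(HCERTSTORE hStore) {
    DWORD cbData = 0;
    // First call: query the required buffer size.
    if (!CertGetStoreProperty(hStore, CERT_STORE_LOCALIZED_NAME_PROP_ID,
                              nullptr, &cbData)) {
        printf("store has no localized name property\n");
        return;
    }
    std::vector<BYTE> buffer(cbData);
    if (CertGetStoreProperty(hStore, CERT_STORE_LOCALIZED_NAME_PROP_ID,
                             buffer.data(), &cbData)) {
        // The property is a null-terminated Unicode string.
        wprintf(L"localized name: %s\n",
                reinterpret_cast<wchar_t*>(buffer.data()));
    }
}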
I'd like to write some simple code that helps to determine whether some instructions have been executed in the intended order client-side. This is to make things difficult for anyone wishing to alter behaviour by editing bytecode, for example using a JMP so that some instructions are never executed. I'm a bit short on ideas though.
To check whether the last two instructions have been run in the correct order, something simple like this could be used (pseudo code):
// Variables initialized by the server
int lastInt;

// Monitored at regular intervals;
// this saves using a callback, which could be tampered with
boolean bSomethingFishyHere;

int[] array = new int[20];

...
execute(array[5], doStuff1());
execute(array[6], doStuff2());
...

// This could be tested remotely with all possible combinations of values
void execute(int i, boolean b) {
    // Calls must arrive with strictly increasing sequence values.
    if (lastInt >= i) {
        bSomethingFishyHere = true;
    }
    lastInt = i;
}
I'm at a loss as to what approach could be used to verify that all instructions have been run in the correct order, though. Maybe I could add an array and have it populated by the server with some randomly ascending numbers, or use some sort of checksum. What are your suggestions?
The problem is that no matter what kind of book-keeping you do, a malicious user can always do the same book-keeping but skip over the actual doing of stuff. If you can do it, so can they. You can rely on external mechanisms, like code signing, to ensure that your executable hasn't been tampered with, and CPU protections to prevent on-the-fly modification of the code in memory. But in that case you're only as secure as the platform you're running on.
I'm assuming this is some sort of copy-protection scheme. (If not, feel free to correct me, and you might get some better, more applicable advice.) There isn't a fool-proof way to prevent someone from running your software, but you can license an existing scheme where the vendor has already put enough effort into it that it's not worth it for an attacker to bother, for the most part.
The one way that is pretty much fool-proof is if you control the code. Run the real meat of the code on your servers, and provide some sort of front-end remote client.
This is just to patch some holes in an FPS shooter. The designers of the game left some temporary variables that can be changed in the console. Some of them are harmless, but others, like Texture transparent=true, are abusive. What I'm aiming for is to redesign an existing modification so that most of the code is on the server, as you suggest. The variables in question are set in the "world" that is mimicked by the client. Ultimately, I'm planning to extend some classes so they ignore those variables, and I just need to monitor values where this isn't possible.
If you do want a short-term patch, a more practical approach (than the one you are looking at) would be to send encrypted bytecodes to the client and use a special classloader to decrypt them on the fly. Beware, however, that it wouldn't be that difficult for a hacker to reverse engineer the classloader, get hold of the client-side bytecodes, and modify them to install the cheats.
So my advice is that any client-side "patch" to stop users tampering with the bytecodes is never going to be hack-proof. Skip that idea, and go straight to your long term solution of rearchitecting the game so that it is not necessary to trust that the client-side code plays by the rules.