In order to prevent root from reading the content of my flow.xml.gz, I would like to protect it from unwanted access.
Is it possible? How?
Can NiFi compress this file with a password?
Thanks in advance.
N.
The root user can read/write all files on the system, so you should instead focus on protecting access to the root user. As other comments state, the sensitive properties will be encrypted with an encryption key which is stored in nifi.properties. You can, however, configure alternative encryption mechanisms which do not require the encryption key to exist on the file system, such as HashiCorp Vault:
https://bryanbende.com/development/2021/07/20/apache-nifi-1-14-0-hashicorp-vault
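For reference, the key used to encrypt those sensitive property values is the one configured in nifi.properties; a minimal illustration (the key value here is made up, and the default algorithm name varies between NiFi versions):

nifi.sensitive.props.key=someLongRandomPassphrase
nifi.sensitive.props.algorithm=NIFI_PBKDF2_AES_GCM_256

Anyone who can read both flow.xml.gz and this key can decrypt the sensitive values, which is why moving the key off the file system (e.g. into HashiCorp Vault) helps.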
Related
There is some confidential user data that I am encrypting using DPAPI. This encrypted data is stored in a file placed in the %LOCALAPPDATA% folder.
How can I prevent other processes from accessing that file, since any other process running in the user's session can decrypt it? Does Windows have some provision to facilitate my requirement, or is this a path that should not be pursued to protect the data?
I feel like I'm completely missing the point of the new JCEKS keystore format in Wildfly. Maybe you can set me straight.
The way we have Wildfly configured (and the way much of the internet instructs us to configure it, for example):
We put the standard keystore entries in a standard Java Key Store ("keystore.jks") file with a password ("jks_pw")
We then create a JCEKS keystore ("keystore.jceks") with a password, salt, and round-count ("jceks_s_n").
We then put "pks_pw" into "keystore.jceks"
We then add the JCEKS password/etc ("jceks_s_n") into our jboss config (standalone.xml) as plain text, defining a entry
We then add a reference to the vault-stored JKS password to our jboss https connector (standalone.xml), as "password="${VAULT::jks::jks::1}".
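For concreteness, the setup runs roughly along these lines (all passwords, aliases, paths, the vault block and the attribute name are placeholders; vault.sh ships in the Wildfly/JBoss bin directory):

keytool -genseckey -alias vault -storetype jceks -keyalg AES -keysize 128 -keystore keystore.jceks -storepass jceks_pw -keypass jceks_pw

$JBOSS_HOME/bin/vault.sh --keystore keystore.jceks --keystore-password jceks_pw --alias vault --vault-block jks --attribute jks --sec-attr jks_pw --enc-dir vault_data/ --iteration 50 --salt 12345678

vault.sh then prints the masked keystore password plus the vault configuration to paste into standalone.xml, and the stored secret is referenced as ${VAULT::jks::jks::1}.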
What the heck did all of that accomplish???
If we just used a JKS file and a password embedded in standalone.xml, the system is susceptible to:
An attacker getting a copy of standalone.xml and the JKS file, in which case all secrets are known.
An attacker getting a copy of the JKS file, in which case an attacker can use brute-force or lookup table attacks.
If we use a JCEKS container in the way described, the system is susceptible to:
(SAME) An attacker getting a copy of standalone.xml and the JKS/JCEKS files, in which case all secrets are known.
(SAME) An attacker getting a copy of the JKS file, in which case an attacker can use brute-force or lookup table attacks.
This would sort of make sense if we put the actual certs inside of the JCEKS file, in which case brute-force and lookup table attacks would be harder in the second case of attack, but so far I haven't found a way to use a JCEKS-formatted keystore directly with an https connector.
Really, the only reason I care too much about this is that we apparently have a security requirement to use the "vault", but it seems pointless.
UPDATE: It is worth noting that by using the vault you're using a "masked" password to the vault in your jboss config file, but I can't figure out what this means. Apparently your masked-password + salt + rounds can unlock the JCEKS keystore (source), so I'm not sure what exactly masking accomplishes. It just seems like a third level of redirection. I've got to be missing something...
JBoss states that the security mechanism behind "vault" is security by obscurity (https://developer.jboss.org/wiki/JBossAS7SecuringPasswords)
How secure is this?
The default implementation of the vault utilizes a Java KeyStore. Its configuration uses password-based encryption, which is security by obscurity. This is not 100% security. It only gets away from the problem of clear-text passwords in configuration files. There is always a weak link. (As mentallurg suggests in the comments, the keystore password is the weakest link.)
Ideally, robust third-party ISV implementations of vaults should provide the necessary security.
Vault uses an unknown password and algorithm to perform symmetric encryption of the keystore password. Without an HSM, you will always face the problem of where to store, e.g., the datasource password. So normally you'd define a property file with an access-control list and store the encoded password there.
The vault just increases the effort of getting the secured password, leaving the attacker to either read the password in memory or reverse-engineer the vault encryption algorithm + key.
It is important to know that the security mechanism behind "vault" is security by obscurity, which means you are just masking your sensitive data. It means that if an attacker has access to your standalone.xml and the keystore, they can easily read all your data.
The vault "increases the effort": the attacker cannot see the secrets directly, but can recover them with some (small) effort.
I wrote a helper script in Ruby to handle my file synchronization across some servers. It was used only in my intranet and authentication was done with SSH keys. But now I want to use it somewhere I can't use SSH keys, so I want to store the passwords in a config file.
I know there are encryption libraries like bcrypt or OpenSSL, but I have a problem with that:
I start my script and enter my passphrase and it is stored in a variable to decrypt my passwords.
My code is open source.
So everybody who has access to my computer as my user (which would be the first barrier, which I'd like to extend) and looks into memory (where my passphrase is stored) can decrypt my password file. How is that handled in security-relevant applications?
Edit, as a reply to DevDude (posted here because I want to keep my specifications in my question):
But then this configuration file would be plain text and not encrypted. And if I encrypt this file, there are two more issues in my opinion:
The super_secret_pwd would be stored in a variable, so if I searched the computer's memory, I would find it, wouldn't I?
The master password for encryption would be in the memory as plain text, too.
So the big question is: Is it possible to read plain-text variables from memory? As far as I know it is possible in C, and it is a big security issue.
What you are looking for is to use a YAML file with the passwords/API keys, and never check this file into your repo.
Then you can reference this file in your initializers and, for example, make the password a global variable, use configatron, etc.
This is basically how production applications work, they read their important settings from a YAML file stored on the server itself.
This is what I use:
require 'yaml'
require 'configatron'

# Per-environment settings, loaded from a file that never goes into version control
app_settings = YAML.load_file('config/secret_stuff.yml')

configatron.password = app_settings['super_secret_pwd']
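The YAML file itself is just keys and values, along these lines (the file name and the super_secret_pwd key come from the snippet above; any other keys are only placeholders):

# config/secret_stuff.yml -- never check this into your repo (add it to .gitignore)
super_secret_pwd: do-not-commit-me
api_key: abc123

At deploy time you copy this file onto the server outside of version control, so the secrets never live in the repository.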
Do not use environment variables, because they have all sorts of security issues. They are an antipattern.
I am using the cipher command to encrypt a file so that nobody can read it.
I tried this command : cipher /e /a exp\test.txt
I noticed the content of the file remained the same; only the file properties changed to encrypted. (Also the color of the filename changed to green. :P) I can still read, modify and delete the file.
Later I tried to decrypt: cipher /d /a exp\test.txt
The content is the same as before, and in the file properties "encrypted" is unchecked.
I can read, write and delete the file after encryption, so what is the point of the encryption then? How do I use it properly? Am I missing something? Can anybody help me with this cipher command?
The cipher command on Windows allows you to control the encryption of files/directories provided by the Encrypting File System (EFS).
The important thing to note about EFS is that it is transparent encryption from the point of view of those granted access to the files. In other words, while the file data is encrypted on disk, provided you have the keys to the file you don't need to explicitly decrypt it in order to view the contents; it can be read just like any other file, and the file system handles decrypting the data automatically. However, if you were to try to access the file as another user on the machine, or by reading data directly off the disk, the file would be inaccessible.
You're not doing anything "wrong" here, it's just that cipher and EFS don't do what you expect them to.
The file is indeed encrypted at the file system level. That is, it's encrypted on disk, but NTFS will automatically decrypt it on behalf of any application that attempts to read the file while running under your account.
Copy the encrypted file to a shared (NTFS) disk directory and validate that it's still green in Explorer. Then sign out and sign in with another account on this PC. I don't think you'll be able to read the file.
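A rough way to see what EFS is (and isn't) doing using only the cipher command (the path is just the example file from the question; /c is available on Vista and later):

REM Encrypt the file; your own account keeps transparent read/write access
cipher /e /a exp\test.txt

REM List the encryption state: files flagged "E" are encrypted, "U" are not
cipher exp\test.txt

REM Show which users/certificates are able to decrypt the file
cipher /c exp\test.txt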
I'd come to this conclusion through experience and various things I've read on the internet, but in stating it to a co-worker, it seems illogical. Can you verify that the following statement is true, or provide a counter to it?
On Vista/Win7, two standard (non-elevated users) cannot read/write the same location in the registry.
On Vista/Win7, two standard (non-elevated users) cannot read/write the same location in the registry.
This is a false statement.
On Vista/Win7, two standard (non-elevated users) cannot write the same location in the registry in the default configuration.
But this is true. By default, users only have write access to their own hive (HKEY_CURRENT_USER) and read access to the machine hive (HKEY_LOCAL_MACHINE).
If you want to configure a location where any user can read and write, you can certainly do so by configuring a key's ACL, as @Dark Falcon said. A good place for this is somewhere inside your application's key in HKEY_LOCAL_MACHINE, set at install time (when your installer has elevated privileges to do so).
That would be incorrect. A registry key can have an ACL specified which allows any user, elevated or not, to write to it. By default, I am not aware of any keys which have this configured, but it certainly is possible.
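As a sketch of what that install-time step could look like (the key path and the SDDL string are just examples, and creating a key under HKEY_LOCAL_MACHINE requires an elevated process), using the Win32 registry API from C:

#include <windows.h>
#include <sddl.h>
#include <stdio.h>

/* Link with advapi32.lib. Creates HKLM\SOFTWARE\MyApp (example name) with a DACL
   that grants BUILTIN\Users full access, so non-elevated users can read and write it.
   The security descriptor is only applied if the key does not already exist. */
int main(void)
{
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, FALSE };

    /* SDDL: Allow (A) BUILTIN\Users (BU) KEY_ALL_ACCESS (KA), inherited by subkeys (OICI). */
    if (!ConvertStringSecurityDescriptorToSecurityDescriptorA(
            "D:(A;OICI;KA;;;BU)", SDDL_REVISION_1, &sa.lpSecurityDescriptor, NULL)) {
        fprintf(stderr, "SDDL conversion failed: %lu\n", GetLastError());
        return 1;
    }

    HKEY hKey;
    LONG rc = RegCreateKeyExA(HKEY_LOCAL_MACHINE, "SOFTWARE\\MyApp", 0, NULL,
                              REG_OPTION_NON_VOLATILE, KEY_ALL_ACCESS, &sa, &hKey, NULL);
    if (rc == ERROR_SUCCESS) {
        printf("Key created; standard users can now read/write it.\n");
        RegCloseKey(hKey);
    } else {
        fprintf(stderr, "RegCreateKeyEx failed: %ld\n", rc);
    }

    LocalFree(sa.lpSecurityDescriptor);
    return rc == ERROR_SUCCESS ? 0 : 1;
}

Whether making a registry location writable by every user is a good idea is a separate question; it simply demonstrates that the restriction is a matter of the default ACLs, not of the registry itself.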