kinit errors - authentication to AD 2008 - Windows

I am trying to have my application authenticate using AD credentials. I am using kinit to test after creating the krb5.ini file. I believe all the realm information in the ini file is correct, but I continue to receive the error below when testing with kinit.
Exception: krb_error 0 Cannot find any provider supporting ARCFOUR No error
KrbException: Cannot find any provider supporting ARCFOUR
        at sun.security.krb5.internal.crypto.ArcFourHmacEType.encrypt(ArcFourHmacEType.java:68)
        at sun.security.krb5.internal.crypto.ArcFourHmacEType.encrypt(ArcFourHmacEType.java:60)
        at sun.security.krb5.EncryptedData.<init>(EncryptedData.java:122)
        at sun.security.krb5.KrbAsReq.init(KrbAsReq.java:355)
        at sun.security.krb5.KrbAsReq.<init>(KrbAsReq.java:180)
        at sun.security.krb5.internal.tools.Kinit.<init>(Kinit.java:253)
        at sun.security.krb5.internal.tools.Kinit.main(Kinit.java:107)
Caused by: java.security.NoSuchAlgorithmException: Cannot find any provider supporting ARCFOUR
        at javax.crypto.Cipher.getInstance(DashoA13*..)
        at sun.security.krb5.internal.crypto.dk.ArcFourCrypto.encrypt(ArcFourCrypto.java:279)

You need to enable RC4-HMAC in both your krb5.ini config and the JDK's conf/security/java.security.
I think RC4 was blacklisted in the Oracle JDK (>= 1.8.0_60) for known weaknesses, together with MD5, but it is strictly required for key exchange by the MS Active Directory Kerberos implementation.
You may have to re-enable it by removing RC4 from jdk.tls.disabledAlgorithms and jdk.certpath.disabledAlgorithms in your JDK's conf/security/java.security.
See https://www.java.com/en/configure_crypto.html for more information.
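For reference, a minimal sketch of both changes; the exact algorithm lists vary by JDK build and EXAMPLE.COM is a placeholder realm, so treat this as an illustration rather than a drop-in config:

# conf/security/java.security -- remove RC4 from these comma-separated lists
# (the remaining entries shown here are examples; your JDK's defaults differ):
jdk.tls.disabledAlgorithms=SSLv3, DES, MD5withRSA, DH keySize < 1024
jdk.certpath.disabledAlgorithms=MD2, MD5, RSA keySize < 1024

# krb5.ini -- explicitly permit the RC4-HMAC enctype that AD uses:
[libdefaults]
    default_realm = EXAMPLE.COM
    default_tkt_enctypes = rc4-hmac
    default_tgs_enctypes = rc4-hmac
    permitted_enctypes = rc4-hmac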


${VAULT_SCHEME} not working in bootstrap.properties

I have configured my Spring Boot application to take properties from my environment, but strangely I am facing an error while starting the application.
I added the properties to my ~/.bash_profile and also ran source ~/.bash_profile after adding them.
This is how my bootstrap.properties looks:
spring.application.name=gamification
spring.cloud.vault.enabled=${VAULT_ENABLE:true}
spring.cloud.vault.fail-fast=false
spring.cloud.vault.token=${VAULT_TOKEN}
spring.cloud.vault.scheme=${VAULT_SCHEME}
spring.cloud.vault.host=${VAULT_HOST}
spring.cloud.vault.port=${VAULT_PORT:8200}
I am getting this error:
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.cloud.vault.config.VaultReactiveBootstrapConfiguration]: Constructor threw exception; nested exception is java.lang.IllegalArgumentException: Scheme must be http or https
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:216) ~[spring-beans-5.2.4.RELEASE.jar:5.2.4.RELEASE]
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:117) ~[spring-beans-5.2.4.RELEASE.jar:5.2.4.RELEASE]
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:310) ~[spring-beans-5.2.4.RELEASE.jar:5.2.4.RELEASE]
... 30 common frames omitted
Caused by: java.lang.IllegalArgumentException: Scheme must be http or https
at org.springframework.util.Assert.isTrue(Assert.java:118) ~[spring-core-5.2.4.RELEASE.jar:5.2.4.RELEASE]
at org.springframework.vault.client.VaultEndpoint.setScheme(VaultEndpoint.java:167) ~[spring-vault-core-2.2.0.RELEASE.jar:2.2.0.RELEASE]
at org.springframework.cloud.vault.config.VaultConfigurationUtil.createVaultEndpoint(VaultConfigurationUtil.java:91) ~[spring-cloud-vault-config-2.2.2.RELEASE.jar:2.2.2.RELEASE]
at org.springframework.cloud.vault.config.VaultReactiveBootstrapConfiguration.<init>(VaultReactiveBootstrapConfiguration.java:110) ~[spring-cloud-vault-config-2.2.2.RELEASE.jar:2.2.2.RELEASE]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_231]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_231]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_231]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[na:1.8.0_231]
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:203) ~[spring-beans-5.2.4.RELEASE.jar:5.2.4.RELEASE]
... 32 common frames omitted
I added a debug point in VaultEndpoint and found this:
Here, as you can see, VAULT_HOST is being taken literally as the string "VAULT_HOST" instead of the value of that environment variable, and the same happens with VAULT_SCHEME.
[EDIT]
Adding bash_profile vault configuration:
export VAULT_ENABLE=true
export VAULT_SCHEME=http
export VAULT_HOST=vault-1.dev.lokal
export VAULT_PORT=8200
export VAULT_TOKEN=5F97X
[EDIT #2]
Tried out the solution suggested by @Gopinath.
I am getting environment as null when trying to autowire it.
The root cause of the problem can be found from this error message:
org.springframework.core.convert.ConverterNotFoundException:
No converter found capable of converting from type [java.lang.String]
to type [org.springframework.cloud.vault.config.VaultProperties$Config]
The above message indicates that the VaultProperties object could not be initialized using the string parameter supplied.
Here is the link to documentation and instructions on configuring VaultProperties:
https://spring.io/guides/gs/vault-config/
Some more information to help understand vault:
References:
Spring Cloud Vault: https://cloud.spring.io/spring-cloud-vault/
Hashicorp Vault: https://www.vaultproject.io
What is a Vault?
A vault is a secure storage space meant for storing secret information.
Hashicorp Vault is one tool that offers vault functionality for cloud applications.
What is Spring Cloud Vault?
Spring Boot applications commonly require secret information in order to work.
Some examples of secret information are:
Database password
Private key
API key
Usually, input parameters are passed to a Spring Boot application through the
"application.properties" or "bootstrap.properties" file.
Using such a properties file poses a security risk if secret data is mentioned directly in the file.
Spring Cloud Vault addresses this risk.
It pulls secret information from the vault and supplies it to the application at start-up time.
The .properties file only tells the application the names of the parameters it can expect from Vault; the actual values of the parameters are taken from the vault.
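As a minimal sketch of that split (the property name example.api-key and the class are illustrative, not from the original post), the placeholder lives in the .properties file while Spring Cloud Vault resolves the real value at startup:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class ApiClient {

    // Resolved by Spring Cloud Vault from the configured secret backend at
    // startup; the secret value itself never appears in the .properties file.
    @Value("${example.api-key}")
    private String apiKey;
}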
How to setup Vault?
Step 1: Install and launch HashiCorp Vault from https://www.vaultproject.io/downloads.html
Step 2: After installing Vault, test whether it works by launching it in a console window:
> vault server --dev --dev-root-token-id="spring-boot-vault-demo"
==> Vault server configuration:
Api Address: http://127.0.0.1:8200
Cgo: disabled
Cluster Address: https://127.0.0.1:8201
Listener 1: tcp (addr: "127.0.0.1:8200", cluster address: "127.0.0.1:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
Log Level: info
Mlock: supported: false, enabled: false
Recovery Mode: false
Storage: inmem
Version: Vault v1.4.1
WARNING! dev mode is enabled!
.....
You may need to set the following environment variable:
PowerShell:
$env:VAULT_ADDR="http://127.0.0.1:8200"
cmd.exe:
set VAULT_ADDR=http://127.0.0.1:8200
The unseal key and root token are displayed below in case you want to
seal/unseal the Vault or re-authenticate.
Unseal Key: +Dihvgj/oRN2zo6/97ZqpWt086/CFRZEPkuauDu4uQo=
Root Token: spring-boot-vault-demo
Step 3: Store some secret data in the vault by running these commands in a separate command window:
> set VAULT_ADDR=http://127.0.0.1:8200
> set VAULT_TOKEN=spring-boot-vault-demo
> vault kv put secret/spring-boot-vault-demo password=££££$$$$%%%%
Key              Value
---              -----
created_time     2020-05-02T09:59:41.2233332Z
deletion_time    n/a
destroyed        false
version          1
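To confirm the secret is readable, you can fetch it back with the same CLI (a usage sketch against the demo path stored above):
> set VAULT_ADDR=http://127.0.0.1:8200
> set VAULT_TOKEN=spring-boot-vault-demo
> vault kv get secret/spring-boot-vault-demo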
I did this:
I made a shell script called setenv.sh and put this in it:
#!/bin/bash
launchctl setenv VAULT_ENABLE true
launchctl setenv VAULT_SCHEME http
launchctl setenv VAULT_HOST vault-1.dev.lokal
launchctl setenv VAULT_PORT 8200
launchctl setenv VAULT_TOKEN 5F97X
And then, before starting the application, I ran the shell script with
sudo sh setenv.sh
and the application seems to work fine without any errors. Strangely, if I use my previous approach of adding the env variables to .bash_profile, it doesn't work.

Spinnaker & Okta integration failing

Scenario:
Upgraded Spinnaker to 1.12.0, with no other config changes that would impact this integration (we had to modify an S3 IAM because it quit working). The Okta integration stopped working. The public key was reissued for the ingress during the install process, which may be relevant.
SAML-TRACE shows payload getting to okta and back
Spinnaker throws two different errors depending on the browser and how I get there.
Direct link to the Deck URL: (500) No IDP was configured, please update included metadata with at least one IDP (seen in the browser and in Gate)
Okta "chicklet" in the Okta dashboard: (401) Authentication Failed: Incoming SAML message is invalid
Config details (again, none of this changed):
Metadata is downloaded directly
The JKS is being leveraged and is valid
The service URL is confirmed
The alias for the JKS is confirmed
I had this issue as well when upgrading from 1.10.13 to 1.12.2. I found lots of these error messages in Gate's logs:
2019-02-19 05:31:30.421 ERROR 1 --- [.0-8084-exec-10] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [org.opensaml.saml2.metadata.provider.MetadataProviderException: No IDP was configured, please update included metadata with at least one IDP] with root cause
org.opensaml.saml2.metadata.provider.MetadataProviderException: No IDP was configured, please update included metadata with at least one IDP
        at org.springframework.security.saml.metadata.MetadataManager.getDefaultIDP(MetadataManager.java:795) ~[spring-security-saml2-core-1.0.2.RELEASE.jar:1.0.2.RELEASE]
        at org.springframework.security.saml.context.SAMLContextProviderImpl.populatePeerEntityId(SAMLContextProviderImpl.java:157) ~[spring-security-saml2-core-1.0.2.RELEASE.jar:1.0.2.RELEASE]
        at org.springframework.security.saml.context.SAMLContextProviderImpl.getLocalAndPeerEntity(SAMLContextProviderImpl.java:127) ~[spring-security-saml2-core-1.0.2.RELEASE.jar:1.0.2.RELEASE]
        at org.springframework.security.saml.SAMLEntryPoint.commence(SAMLEntryPoint.java:146) ~[spring-security-saml2-core-1.0.2.RELEASE.jar:1.0.2.RELEASE]
        at org.springframework.security.web.access.ExceptionTranslationFilter.sendStartAuthentication(ExceptionTranslationFilter.java:203) ~[spring-security-web-4.2.9.RELEASE.jar:4.2.9.RELEASE]
        ...
After downgrading back to 1.10.13, I upgraded to the next version, 1.11.0, and found that's when the issue started. Eventually, I looked at Gate's logs from the launch of the container and found:
2019-02-20 22:31:40.132 ERROR 1 --- [0.0-8084-exec-3] o.o.s.m.provider.HTTPMetadataProvider : Error retrieving metadata from https://000000000000.okta.com/app/00000000000000000/sso/saml/metadata
javax.net.ssl.SSLException: Error in hostname verification
at org.opensaml.ws.soap.client.http.TLSProtocolSocketFactory.verifyHostname(TLSProtocolSocketFactory.java:241) ~[openws-1.5.4.jar:na]
at org.opensaml.ws.soap.client.http.TLSProtocolSocketFactory.createSocket(TLSProtocolSocketFactory.java:186) ~[openws-1.5.4.jar:na]
at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707) ~[commons-httpclient-3.1.jar:na]
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:387) ~[commons-httpclient-3.1.jar:na]
...
This led me to realize that the TLS certificate was being rejected by Gate. I'm not sure why it suddenly started failing the check. Up to this point, I had it configured as:
$ hal config security authn saml edit --metadata https://000000000000.okta.com/app/00000000000000000/sso/saml/metadata
I ended up downloading the metadata file and redeploying with halyard.
$ wget https://000000000000.okta.com/app/00000000000000000/sso/saml/metadata
$ hal config security authn saml edit --metadata "${PWD}/metadata"
$ hal config version edit --version 1.12.2
$ hal deploy apply
Opened up a private browser window as suggested by the Spinnaker documentation and Gate started redirecting to Okta correctly again.
Issue filed: https://github.com/spinnaker/spinnaker/issues/4017.
So I ended up finding the answer. The Tomcat config for Gate apparently changed in later versions of Spinnaker.
I created this snippet in ~/.hal/default/profiles/gate-local.yml:
server:
  tomcat:
    protocolHeader: X-Forwarded-Proto
    remoteIpHeader: X-Forwarded-For
    internalProxies: .*
Deployed Spinnaker and it was back to working.

Getting Regionserver throwing InvalidToken exception in logs

I have noticed the following error in my region server logs:
org.apache.hadoop.security.token.SecretManager$InvalidToken: access control error while attempting to set up short-circuit access to /apps/hbase/data/data/default/my-table/eb512b4b9f9fa9cb2a1a3930d9c9f18b/r/df1694a4542f419992f86b219541fb6f. Block token with block_token_identifier (expiryDate=1519482398334, keyId=1283446178, userId=hbase, blockPoolId=BP-1872413417-101.33.253.88-1458393583173, blockId=1133036852, access modes=[READ]) is expired.
at org.apache.hadoop.hdfs.BlockReaderFactory.requestFileDescriptors(BlockReaderFactory.java:591)
at org.apache.hadoop.hdfs.BlockReaderFactory.createShortCircuitReplicaInfo(BlockReaderFactory.java:490)
at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.create(ShortCircuitCache.java:782)
at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.fetchOrCreate(ShortCircuitCache.java:716)
at org.apache.hadoop.hdfs.BlockReaderFactory.getBlockReaderLocal(BlockReaderFactory.java:422)
at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:333)
at org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1118)
at org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:1056)
at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1411)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1374)
at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:89)
at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1380)
at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1591)
at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1470)
at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:437)
at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:259)
at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634)
at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584)
at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247)
at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156)
at org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363)
at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:217)
at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2003)
at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5338)
at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2494)
at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2480)
at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2462)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6527)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6506)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:579)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2031)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32213)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
2018-02-24 20:03:10,947 INFO [RW.default.readRpcServer.handler=151,queue=38,port=16020] hdfs.DFSClient: Access token was invalid when connecting to /101.32.65.239:50010 : org.apache.hadoop.security.token.SecretManager$InvalidToken: access control error while attempting to set up short-circuit access to /apps/hbase/data/data/default/my-table/eb512b4b9f9fa9cb2a1a3930d9c9f18b/r/df1694a4542f419992f86b219541fb6f. Block token with block_token_identifier (expiryDate=1519482398334, keyId=1283446178, userId=hbase, blockPoolId=BP-1872413417-101.33.253.88-1458393583173, blockId=1133036852, access modes=[READ]) is expired.
I see the configs related to these have already been tweaked by us:
dfs.client.read.shortcircuit.streams.cache.expiry.ms = 300000 (hdfs-default.xml)
dfs.client.read.shortcircuit.streams.cache.size = 4096 (hdfs-site.xml)
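In hdfs-site.xml <property> form, those two settings look like this (a sketch with the values quoted above; appropriate values depend on your cluster):
<property>
  <name>dfs.client.read.shortcircuit.streams.cache.expiry.ms</name>
  <value>300000</value>
</property>
<property>
  <name>dfs.client.read.shortcircuit.streams.cache.size</name>
  <value>4096</value>
</property>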
I also see a JIRA raised by Nick Dimiduk for this same problem, which is still open; its comments contain the config suggested by a user facing the same issue, which we are also using.
Has anyone faced this issue or know about it, and is there any resolution/suggestion for it?

UnrecoverableKeyException on trying to read key using alias from keystore

I am getting the following error on line 4 of the code below, in IBM WebSphere Liberty Profile 16.0.0:
InputStream keystoreStream = EncryptionUtility.class.getResourceAsStream(keyStoreLocation);
KeyStore keystore = KeyStore.getInstance("JCEKS");
keystore.load(keystoreStream, storePass.toCharArray());
Key key = keystore.getKey(alias, keyPass.toCharArray()); // line 4: throws UnrecoverableKeyException
Which results in the following exception:
Caused by: java.security.UnrecoverableKeyException: com.ibm.crypto.provider.AESSecretKey
at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:358)
at com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:133)
at java.security.KeyStore.getKey(KeyStore.java:804)
at com.comdata.base.helper.EncryptionUtility.initSymmetricKey(EncryptionUtility.java:134)
Any ideas why this is happening? Does anything need to be configured for cryptography?
I poked through the code of KeyProtector.java in JDK 7, and the UnrecoverableKeyException is triggered by a ClassNotFoundException for
com.ibm.crypto.provider.AESSecretKey
Do we need to install any feature via installUtility?
Any ideas why this is happening? Does anything need to be configured for cryptography?
The class not being found (com.ibm.crypto.provider.AESSecretKey) is from the IBM JDK.
It looks like your keystore was created using the IBM JDK and thus has a key packaged in it that uses the AESSecretKey from the IBM JDK.
At runtime, your Liberty server is probably using a non-IBM JDK, which would not have this IBM JDK specific class in it.
Do we need to install any feature via installUtility?
Nope. The missing class should be provided by the JDK, as opposed to a Liberty feature.
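One way to confirm the mismatch is to list the keystore with the keytool from each JDK (a sketch; the keystore path and password are placeholders). The listing typically succeeds under the IBM JDK that created the store but fails for that entry under a non-IBM JDK:
# Run once with the IBM JDK's keytool, then with the JDK Liberty runs on:
$ keytool -list -v -storetype JCEKS -keystore /path/to/keystore.jceks -storepass changeit
If that confirms it, recreating the keystore (or re-importing the key) with the same JDK that Liberty runs on is the usual fix.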

APNS certificate expiry date error with MobileFirst Platform 7.0

When deploying an APNS certificate in a .wlapp file in MFP 7.0, I'm seeing a null-pointer exception when it validates the end date, even though the certificate has one (openssl pkcs12 -in apns-certificate-sandbox.p12 | openssl x509 -noout -enddate returns a valid date in the future).
It seems others have made this work, so I'm guessing it must be something I am doing wrong. Has anyone else resolved similar issues with valid Apple Push Notification Service certs failing to deploy on MFP?
Relevant lines from the log:
947: "com.ibm.worklight.admin.services.ApplicationService E FWLSE3000E: A server error was detected.",
"948: com.ibm.worklight.admin.common.util.exceptions.ValidationException: FWLSE3119E: APNS certificate validation failed. See additional messages for details.",
"949: at com.ibm.worklight.admin.util.PushEnvironmentUtil.validateApnsConfiguration(PushEnvironmentUtil.java:232)",
"950: at com.ibm.worklight.admin.util.PushEnvironmentUtil.validatePushConfiguration(PushEnvironmentUtil.java:220)",
[ ... lots more trace here .. ]
"1030: Caused by: java.lang.NullPointerException",
"1031: at java.io.ByteArrayInputStream.(ByteArrayInputStream.java:117)",
"1032: at com.ibm.worklight.admin.util.PushEnvironmentUtil.getCertificateExpiryDate(PushEnvironmentUtil.java:362)",
"1033: at com.ibm.worklight.admin.util.PushEnvironmentUtil.validateApnsConfiguration(PushEnvironmentUtil.java:230)",
The initial hurdle was that the .wlapp file was not being built, so no APNS certificate was in the file (it is just in .zip format, with a meta directory that should hold the .p12 file). The underlying issue was that the <pushSender> tag's password field in application-descriptor.xml wasn't exactly right: it was following the example from "Push Notifications in iOS applications" at https://developer.ibm.com/mobilefirstplatform/documentation/getting-started-7-0/notifications/push-notifications-native-ios-applications/ :
<pushSender password="apns-certificate-p12 password"/>
when it really should just have the password:
<pushSender password="password"/> </code></pre>
with the file named either apns-certificate-sandbox.p12 or apns-certificate-production.p12 depending on which server is to be used.
Double dumbass on me for not checking the official docs at http://www-01.ibm.com/support/knowledgecenter/SSHS8R_7.0.0/com.ibm.worklight.dev.doc/devref/c_the_application_descriptor.html , which describe it correctly.
Moral: "When in doubt, RTFM"
