Sentry | java.lang.NullPointerException: Config key sentry.service.client.server.rpc-address is required

solrctl sentry --list-roles
I am running the above command, but it fails with the error below.
I am new to Sentry. I have double-checked the value of sentry.service.client.server.rpc-address in the Solr configuration, and it is set to the hostname where the Sentry service is running. What does this error mean?
16/09/26 15:19:42 ERROR tools.SentryShellSolr: Config key sentry.service.client.server.rpc-address is required
java.lang.NullPointerException: Config key sentry.service.client.server.rpc-address is required
at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:208)
at org.apache.sentry.provider.db.generic.service.thrift.SentryGenericServiceClientDefaultImpl.<init>(SentryGenericServiceClientDefaultImpl.java:123)
at org.apache.sentry.provider.db.generic.service.thrift.SentryGenericServiceClientFactory.create(SentryGenericServiceClientFactory.java:31)
at org.apache.sentry.provider.db.generic.tools.SentryShellSolr.run(SentryShellSolr.java:50)
at org.apache.sentry.provider.db.tools.SentryShellCommon.executeShell(SentryShellCommon.java:241)
at org.apache.sentry.provider.db.generic.tools.SentryShellSolr.main(SentryShellSolr.java:95)
The operation failed. Message: Config key sentry.service.client.server.rpc-address is required

With CDH, solrctl sentry commands look for a Sentry configuration file on the host (/etc/sentry/conf.cloudera.sentry/sentry-site.xml or /etc/sentry/conf/sentry-site.xml). This config file contains sentry.service.client.server.rpc-address among other options, and it is only deployed to the host automatically if the host has the Sentry Server or Sentry Gateway role in Cloudera Manager.
In most cases this means that you need to add the Sentry Gateway role to the host you want to run solrctl sentry commands from.
In Cloudera Manager go to Sentry -> Instances -> Add Role Instances -> (select the host(s)) -> OK. After a few minutes the configs should be deployed and you should be able to use solrctl sentry.
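For reference, the key the shell is complaining about lives in sentry-site.xml and looks roughly like this (the hostname and port below are assumptions, not values from the original post):

<property>
  <name>sentry.service.client.server.rpc-address</name>
  <value>sentry-host.example.com</value>
</property>
<property>
  <name>sentry.service.client.server.rpc-port</name>
  <value>8038</value>
</property>

Once the gateway role is deployed, a quick grep rpc-address /etc/sentry/conf/sentry-site.xml on the host confirms the file is in place.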
I tested this with CDH 5.11.1 and kerberized Solr.

Related

Kerberos HTTP service using GSS shows "No valid credentials" due to domain name or hostname mismatch

I have a microservice platform consisting of multiple microservices connected to each other; the platform uses Kerberos to authenticate the microservices. Hadoop is installed on one of the microservice nodes and uses a separate KDC for Hadoop cluster authentication.
Let's say the platform domain is "idm.com" and the Hadoop domain is "hadoop.com".
The ResourceManager runs on one node. I have configured the HTTP principal for SPNEGO in core-site.xml by setting the "hadoop.http.authentication.kerberos.principal" property to "HTTP/master.hadoop.com@HADOOP.COM", while the node's hostname is "hadoopmaster.idm.com".
I run kinit and acquire a root user ticket. When I then try "curl -k -v --negotiate -u : https://master.hadoop.com:8090/cluster", it shows "GSSException: No valid credentials provided".
klist now shows two tickets: the krbtgt I got from kinit, and a service ticket for "HTTP/hadoopmaster.idm.com@HADOOP.COM" (I have added this principal to the KDC database). The HTTP ticket was not there before the curl; the Kerberos client acquired it automatically in order to use the HTTP service.
After some debugging I noticed that I get a ticket for HTTP/hadoopmaster.idm.com@HADOOP.COM even though I configured Hadoop to use HTTP/master.hadoop.com@HADOOP.COM. If I instead configure Hadoop to use "HTTP/hadoopmaster.idm.com@HADOOP.COM", the UI is accessible.
I have added both FQDNs to the /etc/hosts file.
It seems that whichever FQDN I pass to curl, I get the HTTP ticket for the first matching entry in /etc/hosts.
For example, if /etc/hosts contains
...
10.7.0.5 hadoopmaster.idm.com
10.7.0.5 master.hadoop.com
...
then after a curl I get HTTP/hadoopmaster.idm.com@HADOOP.COM in klist.
And if /etc/hosts instead looks like this
...
10.7.0.5 master.hadoop.com
10.7.0.5 hadoopmaster.idm.com
...
then after a curl I get HTTP/master.hadoop.com@HADOOP.COM in klist.
In both cases, if I configure the Hadoop property to match the principal that curl obtained, the UI is accessible; otherwise it shows a 403 with a GSSException, which I guess means curl used SPNEGO but did not present credentials the server considers valid.
In other words, it only works when the ticket's principal matches Hadoop's configured principal.
It looks like hostname canonicalization is causing the problem. Is there a way to map this hostname, or a Kerberos setting or property, that makes curl request a ticket for exactly the hostname I specified on the command line, regardless of the /etc/hosts ordering or the Hadoop configuration?
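A hedged pointer, not from the original thread: this behaviour is usually the Kerberos client canonicalizing the target hostname through DNS or /etc/hosts before building the HTTP/<host> service principal. MIT Kerberos has [libdefaults] settings in krb5.conf that turn this off, so the ticket is requested for exactly the name given to curl:

[libdefaults]
    # don't rewrite the hostname from a reverse DNS lookup
    rdns = false
    # (MIT krb5 1.12+) don't canonicalize via forward DNS either
    dns_canonicalize_hostname = false

With these set, a curl against master.hadoop.com should request HTTP/master.hadoop.com@HADOOP.COM rather than a ticket for the first alias in /etc/hosts.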

Adding user to ActiveMQ Artemis fails on Windows

I'm trying to add a user to ActiveMQ Artemis on Windows. I have created an instance and started it. Then I run this command:
artemis user add --user admin --password admin --user-command-user another_admin --user-command-password another_admin --role admin --url tcp://localhost:61616
The command fails with message:
The system cannot find the path specified.
The syntax of the command is incorrect.
Connection brokerURL = tcp://localhost:61616
Failed to add user another_admin. Reason: AMQ229220: Failed to load user file: /C:/Program%20Files/apache-artemis-2.26.0-instance1/etc/artemis-users.properties
How can I fix this?
There is a bug in how the broker deals with spaces in the path to the user/role properties files. I've sent a PR to resolve the problem; the fix should be in 2.27.0.
In the meantime you can work around the issue by putting the broker instance on a path which has no spaces.
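For example (the directory names here are placeholders), create the instance under a space-free path, start it in one console, and add the user from another:

C:\apache-artemis-2.26.0\bin> artemis create C:\artemis\instance1
C:\artemis\instance1\bin> artemis run
C:\artemis\instance1\bin> artemis user add --user admin --password admin --user-command-user another_admin --user-command-password another_admin --role admin --url tcp://localhost:61616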

How to run Portworx backups to a MinIO server

I'm trying to configure Portworx volume backups (pxctl cloudsnap) to a localhost MinIO server (emulating S3).
The first step is to create cloud credentials using pxctl credentials create, e.g.:
./pxctl credentials create --provider s3 --s3-access-key mybadaccesskey --s3-secret-key mybadsecretkey --s3-region local --s3-endpoint 10.0.0.1:9000
This results in:
Error configuring cloud provider.Make sure the credentials are correct: RequestError: send request failed caused by: Get https://10.0.0.1:9000/: EOF
Disabling SSL (which is not configured, as this is just a localhost test) gives me:
./pxctl credentials create --provider s3 --s3-access-key mybadaccesskey --s3-secret-key mybadsecretkey --s3-region local --s3-endpoint 10.0.0.1:9000 --s3-disable-ssl
Which returns:
Not authenticated with the secrets endpoint
I've tried this with both MinIO gateway (NAS) and MinIO server, with the same result.
The Portworx container is running within Rancher.
Any thoughts appreciated.
Resolved via the instructions at https://docs.portworx.com/secrets/portworx-with-kvdb.html, i.e. set the secret type to kvdb in /etc/pwx/config.json:
"secret": {
"cluster_secret_key": "",
"secret_type": "kvdb"
},
Then log in using ./pxctl secrets kvdb login.
After this, credentials create succeeded, as did the subsequent cloudsnap backup. The test used the --s3-disable-ssl switch.
Note: kvdb stores secrets in plain text, so it is obviously not suitable for production.
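For completeness, a sketch of the working sequence (the volume name myvol is a placeholder):

./pxctl secrets kvdb login
./pxctl credentials create --provider s3 --s3-access-key mybadaccesskey --s3-secret-key mybadsecretkey --s3-region local --s3-endpoint 10.0.0.1:9000 --s3-disable-ssl
./pxctl cloudsnap backup myvol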

Error when I try to create a new account with "deis register" command

I have a fresh install of Deis on AWS, but I get this error when I try to register a user:
http://deis.XXXX.com does not appear to be a valid Deis controller.
Also, when I curl the ELB or any node it returns a timeout, but I think that is normal behaviour given the security group configuration.
Could it be a proxy configuration error? When I installed Deis I got this error:
Enabling proxy protocol failed, please enable proxy protocol manually after finishing your deis cluster installation.
And I enabled it manually with:
deisctl config router set proxyProtocol=1
Thanks!
Once you have enabled proxyProtocol on the router you should be able to run deisctl install platform without issues.
Is that not the case?
I had this issue when I hadn't registered my Deis cluster domain with global DNS - i.e., I had only added it to a Route 53 hosted zone that wasn't actually public.
I fixed it by adding an A ALIAS record in Route 53 pointing a wildcard subdomain under my existing domain to the deiswebelb host.
Name: *.apps.example.com
Type: A
Value: ALIAS dualstack.deis-deiswebelb-1abcdefghijkl-1234567890.us-east-1.elb.amazonaws.com
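A hedged sketch of creating the same record with the AWS CLI (both hosted zone IDs below are placeholders; the AliasTarget's HostedZoneId must be the ELB's own zone ID for the region, not your domain's zone ID):

aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "*.apps.example.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z3EXAMPLEELB",
        "DNSName": "dualstack.deis-deiswebelb-1abcdefghijkl-1234567890.us-east-1.elb.amazonaws.com",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'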

Unable to load SiteMinder host configuration object or host configuration file

The application log in the Event Viewer shows
Unable to load SiteMinder host configuration object or host configuration file
for SiteMinder 12.51 on IIS 7.5 (64-bit), OS Windows 2008 (64-bit).
When do you get the error? Is it when you're configuring the Web Agent?
Anyway, verify the following:
Verify your environment variables are set correctly; they must reference the Web Agent files. You may need to source the environment variables from the (nete_wa_env...) file in the Web Agent or Policy Server folder.
Verify that the host configuration object you're using exists; check this in the Admin UI.
Verify that the hostname is registered in the Policy Server as a trusted host.
Verify that the settings in the corresponding SmHost.conf file in the Web Agent folder are correct.
Verify and eliminate any duplicate or conflicting lines in your IIS config files which refer to SiteMinder.
Verify the host configuration object and the agent configuration object settings.
Make sure WebAgent.conf is pointing to the correct SmHost.conf, and that SmHost.conf has the correct HostConfigObject defined (with the exact case that is used in the Policy Store).
If the HCO in the Policy Store is named "DefaultHostSettings" and SmHost.conf contains HostConfigObject=defaulthostsettings, you will get this type of error.
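For illustration (the paths and values here are assumptions, not from the original post), the two settings that must line up look roughly like this:

# WebAgent.conf -- points at the SmHost.conf to use
HostConfigFile="C:\Program Files\CA\webagent\config\SmHost.conf"

# SmHost.conf -- case must match the HCO name in the Policy Store
hostconfigobject="DefaultHostSettings"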
Try re-registering the web agent with the policy server using the smreghost command.
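A hedged example of the re-registration (server, credentials, and names are placeholders):

smreghost -i policyserver.example.com -u siteminder -p MyAdminPassword -hn mywebserver01 -hc DefaultHostSettings

This regenerates SmHost.conf with a fresh shared secret and the HostConfigObject passed via -hc.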
