Jelastic MySQL cluster and db users

I have created a MySQL Cluster using the "Scalable MySQL Cluster with Master-Slave Replication, ProxySQL Load Balancing and Orchestrator" installation instructions. This works as expected.
My Java application can connect to the database using the node ID endpoint of the ProxySQL server and the default user/password emailed during setup.
I have since created another user/password on the MySQL master node with the same permissions as the default user created during setup. However, my Java application gets a "Permission denied" error when trying to use that new user.
If I change the Java application to point directly at the master node instead of the ProxySQL node, it works.
Is there another step I must take to enable other DB users to be accessed through ProxySQL?

Yes, each additional DB user must be enabled in ProxySQL before it can be used through it. To enable a new user, connect to the ProxySQL node via SSH and execute the following steps:
mysql -h 127.0.0.1 -P6032 -uadmin -padmin
INSERT INTO mysql_users (username, password, active, default_hostgroup, max_connections) VALUES ('new_user', 'new_user_pass', 1, 10, 1000); -- add the new user
LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK; -- load the user into the runtime configuration and persist it
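Once the user is loaded to runtime, the Java application should be able to authenticate through ProxySQL again. As a quick check, here is a minimal JDBC sketch; the endpoint host, database name, and port 6033 (ProxySQL's usual client-facing port) are assumptions, so substitute the values from your own environment:

import java.sql.Connection;
import java.sql.DriverManager;

public class ProxySqlCheck {
    public static void main(String[] args) throws Exception {
        // Assumed endpoint: 6033 is ProxySQL's default client port, but the
        // Jelastic endpoint host and port may differ in your environment.
        String url = "jdbc:mysql://proxysql-node:6033/mydb";
        try (Connection conn = DriverManager.getConnection(url, "new_user", "new_user_pass")) {
            System.out.println("Connected through ProxySQL as new_user");
        }
    }
}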

Related

Use gMSA for Hashicorp Vault mssql credential rotation

I want to start using Vault to rotate credentials for mssql databases, and I need to be able to use a gMSA in my mssql connection string. My organization currently only uses Windows servers and will only provide gMSAs for service accounts.
Specifying the gMSA as the user id in the connection string returns a 400 error: "error creating database object: error verifying connection: InitialBytes InitializeSecurityContext failed 8009030c".
I also tried transitioning my vault services to use the gMSA as their log on user, but this made nodes unable to become a leader node even though they were able to join the cluster and forward requests.
My setup:
I have a Vault cluster running across a few Windows servers. I use nssm to run them as a Windows service since there is no native Windows service support.
nssm is configured to run vault server -config="C:\vault\config.hcl" and uses the Local System account to run under.
When I change the user, the node is able to start up and join the raft cluster as a follower, but cannot obtain leader status, which causes my cluster to become unresponsive once the Local System nodes are off.
The servers are running on Windows Server 2022 and Vault is at v1.10.3, using integrated raft storage. I have 5 vault nodes in my cluster.
I tried running the following command to configure my database secret engine:
vault write database/config/testdb \
connection_url='server=myserver\testdb;user id=domain\gmsaUser;database=mydb;app name=vault;' \
allowed_roles="my-role"
which caused the error message I mentioned above.
I then tried to change the log on user for the service. I followed these steps to rotate the user:
Updated the directory permissions everywhere Vault touches (configs, certificates, storage) to include my gMSA user. I gave it read permissions for the config and certificate files and read/write for storage.
Stopped the service
Removed the node as a peer from the cluster using vault operator raft remove-peer instanceName.
Deleted the old storage files
Changed the service user by running sc.exe --% config "vault" obj="domain\gmsaUser" type= own.
Started the service back up and waited for replication
When I completed the last step, I could see the node reappear as a voter in the Vault UI. I was able to directly hit the node using the cli and ui and get a response. This is not an enterprise cluster, so this should have just forwarded the request to the leader, confirming that the clustering portion was working.
Before I got to the last node, I tried running vault operator step-down and was never able to get the leader to rotate. Turning off the last node made the cluster unresponsive.
I did not expect changing the log on user to cause any issue with the node's ability to operate. I reviewed the logs, but there was nothing out of the ordinary, even with the log level set to trace. They do show a successful unseal, standby mode, and joining the raft cluster.
Most of the documentation I have found for the mssql secret engine includes creating a user/pass at the sql server for Vault to use, which is not an option for me. Is there any way I can use the gMSA in my mssql config?
When you put a user id into the SQL connection string, the client attempts SQL authentication and no longer tries Windows authentication (and a gMSA is Windows-authentication based).
When setting up the gMSA account, did you specify the correct parameter for who is allowed to retrieve the password? (Correct: PrincipalsAllowedToRetrieveManagedPassword; incorrect, but the first suggestion when using tab completion: PrincipalsAllowedToDelegateToAccount.)
You may also need to run Install-ADServiceAccount ... on the machine you're running Vault on.
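Vault's mssql plugin talks to SQL Server through Go's go-mssqldb rather than JDBC, but the same principle can be illustrated with the Microsoft JDBC driver: supplying a user id selects SQL authentication, while Windows/Kerberos authentication omits it and requests an integrated scheme instead. A sketch with hypothetical server and database names:

import java.sql.Connection;
import java.sql.DriverManager;

public class MssqlAuthModes {
    public static void main(String[] args) throws Exception {
        // Supplying a user id selects SQL authentication; a Windows-only
        // account such as a gMSA will be rejected on this path.
        String sqlAuth = "jdbc:sqlserver://myserver;databaseName=mydb;"
                + "user=domain\\gmsaUser;password=unused";
        System.out.println("SQL auth URL (will fail for a gMSA): " + sqlAuth);
        // Windows (Kerberos) authentication omits the user id; the identity
        // comes from the calling process / ticket cache instead.
        String winAuth = "jdbc:sqlserver://myserver;databaseName=mydb;"
                + "integratedSecurity=true;authenticationScheme=JavaKerberos";
        try (Connection conn = DriverManager.getConnection(winAuth)) {
            System.out.println("Connected via integrated authentication");
        }
    }
}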

Connecting to Oracle data source via DataGrip (HR scheme)

I downloaded schemas from this website (https://github.com/oracle-samples/db-sample-schemas/releases/tag/v21.1),
but I don't know how to work with them in DataGrip, that is, how to connect and which user data to use for connecting to the Oracle data source (host, SID, port, user, password, etc.).
Thanks in advance!
Schemas are the content, and you need a place to put that content.
So create a database server first. You can do that via Docker or in the cloud.
Here is the Dockerfile you can use to create a docker container with the oracle database running:
https://github.com/DataGrip/docker-env-oracle/blob/master/12.2.0.1/Dockerfile
During installation, you will set the username, password, port, etc.
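Those same values then go into DataGrip's connection dialog. For reference, a minimal JDBC smoke test; the port, service name, and system/oracle credentials here are assumptions based on typical Oracle images, so use whatever you set during installation:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OracleSmokeTest {
    public static void main(String[] args) throws Exception {
        // Assumed defaults: host localhost, port 1521, service ORCLCDB,
        // user system. Replace with the values chosen at install time.
        String url = "jdbc:oracle:thin:@//localhost:1521/ORCLCDB";
        try (Connection conn = DriverManager.getConnection(url, "system", "oracle");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT banner FROM v$version")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}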

Why do we use the Hive service principal when using beeline to connect to Hive on a Kerberos enabled EMR cluster?

I am trying to connect to Hive using beeline on an EMR cluster (Kerberos enabled) and am wondering why I'd run a kinit (using my user account) and then the following:
beeline -u "jdbc:hive2://localhost:10000/default;principal=hive/_HOST@REALM"
The part that confuses me is the principal above. Why do we use "principal=hive/_HOST@REALM" (which, from what I've read, is the Hive service principal) when I've already authenticated with my user account using kinit in the previous command?
Will I be running queries against the Hive service principal or my user account? Do all users use the Hive service principal when using beeline? Is there any reason behind this?
Link for further context: Connecting to Hive via Beeline using Kerberos keytab
The principal= option in that JDBC URL actually refers to the service principal (SPN), i.e. the service you are connecting to. It's admittedly ambiguous and confusing.
kinit authenticates your user principal (UPN), creating a "ticket-granting ticket" (TGT) which is dumped in the ticket cache.
Later the JDBC client (or HTTP client, or Hive Metastore Java client, or HDFS Java client, whatever) will use the TGT to request a service ticket for the appropriate service type on the appropriate host; for some reason Java never puts that service ticket in the cache (unlike curl or Python, which use a C library, like kinit).
SPNs are normally defined in Hadoop configuration files named ***-site.xml which are consumed by the Hadoop client libraries.
But... a JDBC driver is supposed to be stand-alone, not have dependencies on external libs or config files, and get all its connection params from the URL. That's why you have to stuff the SPN explicitly on your URL. Duh.
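The same split shows up when connecting programmatically: the SPN rides in the URL, while the user identity comes from the TGT that kinit left in the ticket cache. A minimal sketch (REALM is a placeholder for your Kerberos realm; run kinit first, and have hive-jdbc on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;

public class HiveKerberosConnect {
    public static void main(String[] args) throws Exception {
        // principal= names the HiveServer2 service (SPN) to authenticate TO;
        // the identity you authenticate AS is taken from the TGT that kinit
        // left in the ticket cache.
        String url = "jdbc:hive2://localhost:10000/default;principal=hive/_HOST@REALM";
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("Connected; queries run as the kinit'ed user.");
        }
    }
}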

Configuring Impala with LDAP

I'm using CDH 4.5. I installed Impala manually (without Cloudera Manager). I've configured LDAP with Impala (using the instructions at http://www.cloudera.com/content/cloudera-content/cloudera-docs/Impala/latest/Installing-and-Using-Impala/ciiu_ldap.html).
I've added ldap_uri to the /etc/default/impala file. But how do I configure the LDAP bind username?
With the current configuration, if I start the Impala shell, I am able to log in using the LDAP bind username. But how do I log in as actual users from AD? I need to configure the LDAP bind username and/or password so that Impala automatically connects using the bind username, and so that when I start the Impala shell I can connect using actual user names.
Thanks.
Apparently we don't have to use the LDAP bind name. I'm able to log in with the user name "someone@abc.com", where someone is the user name in AD and abc.com is the AD search base.
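For JDBC clients the pattern should be the same: pass the AD user and password as ordinary credentials. A sketch, assuming a hypothetical impalad host, the default HiveServer2-protocol port 21050, and placeholder credentials:

import java.sql.Connection;
import java.sql.DriverManager;

public class ImpalaLdapConnect {
    public static void main(String[] args) throws Exception {
        // Assumed host and port; 21050 is impalad's default HiveServer2 port.
        // With LDAP enabled, the AD account and password are passed as
        // ordinary JDBC credentials.
        String url = "jdbc:hive2://impalad-host:21050/default";
        try (Connection conn = DriverManager.getConnection(url, "someone@abc.com", "secret")) {
            System.out.println("Connected to Impala as someone@abc.com");
        }
    }
}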

WebSphere to Oracle - doesn't accept correct password

In WebSphere 6.1 I have created a datasource to an Oracle 11g instance using the thin JDBC client.
In Oracle I have two users, one existing and another newly created.
My WebSphere datasource is OK if I use the component-managed authentication alias of the existing user, but it fails with an "invalid user/password" message if I use the alias of the new user. The error message is:
The test connection operation failed for data source MyDB (Non-XA) on
server nodeagent at node MY_node with the following exception:
java.sql.SQLException: ORA-01017: invalid username/password;
logon denied DSRA0010E: SQL State = 72000, Error Code = 1,017.
View JVM logs for further details.
There is nothing in the JVM logs. I have grepped all websphere logs and they do not mention my connection at all.
I can confirm that the username and password are correct by logging in via SQLPlus or (to prove the JDBC connection is OK) via SQuirreL.
I have checked in Oracle that the new user has all the system privs that the existing user has.
Any thoughts on what is going on or how I can debug this further?
Just FYI. I am guessing you are running WebSphere in Network Deployment mode.
This behavior you're experiencing is actually by design.
The reason for it is that the "Test Connection" button you see on the admin console, invokes the JDBC connection test from within the process of the Node Agent. There is no way for the J2C Alias information to propagate to the Node Agent without restarting it; some configuration objects take effect in WebSphere as soon as you save the configuration to the master repository, and some only take effect on a restart. J2C aliases take effect on restarts.
In a Network Deployment topology, you may have any number of server instances controlled by the same Node Agent. You may restart your server instances as you'd like, but unless you restart the Node Agent itself, the "test connection" button will never work.
It's a known WebSphere limitation... Which also exists on version 7.0, so don't be surprised when you test it during your next migration. :-)
If this happens to anyone else, I restarted WebSphere and all my problems went away. It's a true hallmark of quality software.
Oftentimes when people tell me they can't log into Oracle 11g with the correct password, I know they've been caught out by passwords becoming case-sensitive between 10g and 11g.
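One way to rule the password itself in or out is to test the exact credentials outside WebSphere with the same thin driver, preserving the case configured in the J2C alias. A minimal sketch with hypothetical host, SID, and credentials:

import java.sql.Connection;
import java.sql.DriverManager;

public class OracleCredentialCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical host/port/SID; use the same values as the datasource.
        // Pass the password with the exact case stored in the J2C alias --
        // 11g rejects it with ORA-01017 if only the case differs.
        String url = "jdbc:oracle:thin:@dbhost:1521:ORCL";
        try (Connection conn = DriverManager.getConnection(url, "NEWUSER", "ExactCasePassword")) {
            System.out.println("Credentials accepted; ORA-01017 is not a password problem.");
        }
    }
}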
Try this: in the data source definition, under Security, use the J2C alias for both authentication managed by component and authentication managed by container.
IBM WAS 8.5.5 Knowledge Center - Managing Java 2 Connector Architecture authentication data entries for JAAS
If you create or update a data source that points to a newly created J2C authentication data alias, the test connection fails to connect until you restart the deployment manager.
After you restart the deployment manager, the J2C authentication data is reflected in the runtime configuration. Any changes to the J2C authentication data fields require a deployment manager restart for the changes to take effect.
The node agent must also be restarted.
I pointed my data source at component-managed authentication as well as container-managed authentication. It's working fine now.
