How to validate an LDIF?
Similar to XML with XMLSchema and Schematron, are there any libraries to validate an LDIF against an LDAP schema?
A better way to solve this is to run the LDAP commands with flags that don't actually commit the results to the server, for example:
ldapadd -H ldap:/// -D "cn=admin,dc=nodomain" -w '<secretThatNobodyKnows>' -n -f <file.ldif>
With the -n flag you are telling it to only show you what would happen. The advantage this method holds over running the query against a fake server is that you are actually validating against the same rules of the server you eventually want to commit to.
LDAP servers like OpenLDAP or OpenDS usually check the LDIF against the current schema on insertion. So if you need to check your LDIF without using your production LDAP server, you could use a small Java-based LDAP server like OpenDS configured with the same LDAP schema.
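If the target happens to be OpenLDAP, there is also an offline variant of this idea: slapadd has a dry-run mode that checks an LDIF against the locally configured schema without writing anything to the database. A minimal sketch (the config directory, database number, and file name are assumptions to adjust for your setup):
# -u = dry run, -n 1 = first database, -F = server config directory
slapadd -u -n 1 -F /etc/ldap/slapd.d -l data.ldif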
I want to put my ansible results into a database.
I made a simple PHP webpage that listens for POST data (server name, number of updates to install, etc.) and writes it into a MySQL database for later use (with Grafana).
How can I make my ansible-playbook send an HTTP POST from the Ansible server to save the results?
I have seen the uri module, but it looks like it runs from the client. Maybe a callback plugin, but I did not see a simple HTTP plugin.
Any ideas?
Thanks
You should try ansible-cmdb for this. It has options to store Ansible's output in the form of HTML, CSV, SQL, etc.
The --template sql option is available, which is helpful for exporting to a MySQL database.
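As a rough sketch of that workflow (the inventory file, facts directory, and database names here are assumptions, not part of ansible-cmdb itself):
# Gather facts from every host into per-host JSON files
ansible all -i hosts -m setup --tree /tmp/facts
# Render the collected facts with the sql template and load them into MySQL
ansible-cmdb -t sql /tmp/facts > cmdb.sql
mysql -u grafana -p grafana < cmdb.sql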
I am trying to do an LDAP modify operation through JMeter. Expected behavior: JMeter would hit server A, which in turn would hit server B. The actual modification would happen at server B. Server B would complete the operation and respond to server A, which in turn would respond to JMeter.
Now the issue is that JMeter is always getting the "Referral" response message. However, manually we are able to change the password after hitting server A from a different remote server.
Could someone please suggest how to overcome this?
I am assuming this has been resolved. Just in case you are still wondering, @Rohan, my understanding is that you run jmeter on the command line:
$ jmeter -Jjava.naming.referral=true -n -t testplan.jmx -l log.jtl
JMeter won't have specific behaviour of its own here. You will need to tell it to follow referrals by setting the java.naming.referral property appropriately via the jndi.properties mechanism described in the documentation for the JNDI LDAP provider, which you should already have in place for your application if you expect it to behave that way.
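For example, a jndi.properties file on the classpath might contain nothing more than the line below. Note that the documented JNDI values for this property are follow, ignore, and throw (whether your sampler picks the file up depends on your setup):
java.naming.referral=follow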
So, I am attempting to create an install script for my application (targeting Ubuntu 16). It has to create a postgresql user, grant permission to that user to authenticate via password, and grant permission to that user to authenticate locally. I only want to grant permission to do that on one database, the application database. So I need to insert the line local databasename username md5 above the lines that reject unknown connections, e.g., in the "Put your actual configuration here" section of pg_hba.conf. (pg_hba.conf uses position in the file to determine priority: first rule encountered that matches the connection gives the final result.)
To add this line, my script runs:
sudo awk '
/# Put your actual configuration here/ {
print "local databasename username md5"
}
{ print }
' /etc/postgresql/9.5/main/pg_hba.conf > /tmp/pg_hba.conf.new
sudo mv /tmp/pg_hba.conf.new /etc/postgresql/9.5/main/pg_hba.conf
# other setup
service postgresql restart
But that's less than optimal. First, the version number will change in the future, so hardcoding the directory is poor. Second, that's making a comment in someone else's project an actual structural part of the config file, which is a horrible idea from all possible points of view in all possible universes.
So my question is two-part. First, is there a good, correct, and accepted method to edit pg_hba.conf that I can use in an installation script instead of kitbashing about with text editors?
Second, if there is no good answer to the first part: is there a programmatic way to ask postgresql where it's pulling pg_hba from?
Is there a programmatic way to ask postgresql where it's pulling pg_hba from?
show hba_file;
-- or
select current_setting('hba_file');
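Either form works from a shell script via psql; a sketch (the printed path is just an example):
psql -U postgres -At -c 'show hba_file;'
# /etc/postgresql/9.5/main/pg_hba.conf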
Debian tool chain
So my question is two-part. First, is there a good, correct, and accepted method to edit pg_hba.conf that I can use in an installation script instead of kitbashing about with text editors?
Yes, however, you'll probably find it unsatisfactory.
Upstream, PostgreSQL's build tools don't support installing multiple versions side by side. Debian's do. So Debian has invented the concept of a cluster, which is essentially a name plus a version number.
Building a tool on Ubuntu or Debian, you should also probably use a name and version number.
Second, if there is no good answer to the first part: is there a programmatic way to ask postgresql where it's pulling pg_hba from?
Yes, there is a tool called pg_conftool. The default cluster's name is main. If you want the 9.5/main cluster, you can do this:
pg_conftool -s 9.5 main show hba_file
/etc/postgresql/9.5/main/pg_hba.conf
As you can see, pg_conftool can make use of a version and a cluster name, but strictly it may not require them.
/usr/bin/pg_conftool [options] [<version> <cluster name>] [<configfile>] <command>
If you want to know more about clusters in this context, check out all the binaries starting with pg_*, but first and foremost pg_ctl and pg_ctlcluster (the Debian wrapper).
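Putting that together, an install script can discover the cluster instead of hardcoding 9.5/main. A sketch, assuming a single cluster and the postgresql-common tools (the variable names are hypothetical):
# Take the version and name of the first listed cluster
read -r version cluster <<< "$(pg_lsclusters --no-header | awk 'NR==1 {print $1, $2}')"
# Ask that cluster where its pg_hba.conf lives
hba=$(pg_conftool -s "$version" "$cluster" show hba_file)
echo "Will edit $hba"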
I need to change the password for a user on over a hundred systems. I want to do this with Ansible, which is easy. However, the user module in Ansible requires a hashed password. I am concerned because there are a few older hosts which may not support newer types of hashing. I want to be able to programmatically identify which password hashing algorithms are available and use the appropriate hash for each host. Or is there perhaps a better way to handle this wholesale?
I have considered the following:
echo username:password | chpasswd
and run that using the command module. That should use whatever the default algorithm is. Is there any cause for concern with this method?
In my mind, the ideal way would be to figure the supported hashes for each machine and then generate the proper hash for each machine.
The approach you list should work. Just make sure you add "no_log: yes" to your task to ensure the password doesn't end up in the log file.
With either approach you're going to need to have a way of getting the password(s) into Ansible to use with the user module. Not sure if the passwords will be in a CSV file, YAML file, or some other format. You could consider using vault to lock things down a bit more.
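As an ad-hoc sketch of the chpasswd approach with the secret kept in a vault file (the user name, variable, and file names are assumptions; in a playbook task you would also add no_log: yes as suggested above):
ansible all -b -m shell -a "echo 'appuser:{{ new_password }}' | chpasswd" -e @secrets.yml --ask-vault-pass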
Before creating an object in Kubernetes (Service, ReplicationController, etc.), I'd like to test that the JSON or YAML specification of the object is valid. But I don't want to actually create the object.
Is there some way to do a "dry run" that would be equivalent to running kubectl create --validate=true -f file.json, but would just let me know that it passes validation and not actually create it?
Ideally, it would be great if I could do this via API, and not require the use of kubectl. But I could make it work if it required me to use kubectl.
Thanks.
This works for me:
kubectl apply --validate=true --dry-run=client --filename=file.yaml
(On older kubectl, e.g. 1.7 and 1.9, --dry-run was a boolean flag, so use --dry-run=true there.)
Some kubectl commands support a --dry-run flag (like kubectl run, kubectl expose, and kubectl rolling-update).
There is an issue open to add the --dry-run flag to more commands.
There is a tool called kubeval which validates configs against the expected schema and does not require a connection to a cluster to operate, making it a good choice for applications such as CI.
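A sketch of using it, assuming kubeval is on the PATH and file.yaml is the manifest from the question:
kubeval file.yaml
# Optionally pin the schema version to match your cluster
kubeval --kubernetes-version 1.9.0 file.yaml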
The use of --dry-run and --validate only partially solves the issue.
client-side validation is not exhaustive. It primarily ensures the field names and types in the yaml file are valid. Full validation is always done by the server, and can always impose additional restrictions/constraints over client-side validation.
Source - kubectl --validate flag pass when yaml file is wrong #64830
Given this, you cannot do a full set of validations except by handing the manifest off to the server for vetting.
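On kubectl 1.18 and later, where --dry-run accepts the value server, that hand-off can itself be done without creating anything, by asking the API server to dry-run the request:
kubectl apply --validate=true --dry-run=server -f file.yaml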