Can we load a base schema into OpenDJ using the Novell LDAP API? - java-8

We are using the Novell LDAP API for all LDAP operations, and I want to load my base schema LDIF file into OpenDJ without restarting the OpenDJ server.
So far, post-setup, we have been manually copying the schema file to the /config/schema location, and we want to do this through Java code.
Since we are already using the Novell LDAP API for all LDAP operations (modify, delete, read, add entry), we have to use the same library.
When I tried, I got the exception below. Is there a solution? Please share.
SEVERE: Exception getting LDAP connection: LDAPLocalException:
com.novell.ldap.ldif_dsml.LDIFReader: Version line must be the first
meaningful line(on line 9 of the file) (82) Local Error
at com.novell.ldap.util.LDIFReader.&lt;init&gt;(LDIFReader.java:156)
at com.novell.ldap.util.LDIFReader.&lt;init&gt;(LDIFReader.java:80)

It looks like the Novell LDIF reader strictly enforces LDIF version 1 from RFC 2849:
the first meaningful line of the file must contain version: 1.
OpenDJ does support adding schema over LDAP, but it must be done as a modification of the cn=schema entry, adding values to its attributeTypes and objectClasses attributes.
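For reference, such a change can be expressed as an LDIF modification of cn=schema. This is only a sketch: the OIDs (1.3.6.1.4.1.99999.*) and the names myCustomAttr / myCustomClass are placeholders you would replace with your own definitions. Note the version: 1 line, which the Novell LDIFReader requires as the first meaningful line of the file:

```ldif
version: 1
dn: cn=schema
changetype: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.4.1.99999.1.1 NAME 'myCustomAttr'
  SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE )
-
add: objectClasses
objectClasses: ( 1.3.6.1.4.1.99999.1.2 NAME 'myCustomClass'
  SUP top AUXILIARY MAY myCustomAttr )
```

An LDIF in this form should pass the version check in LDIFReader, and applying the resulting change over an authenticated connection modifies the live schema without a server restart.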

Related

Unable to connect to SQL Server using the quarkus-hibernate-reactive due to incorrect reactive.url

I have worked with Quarkus connecting to Postgres, but this is the first time I am trying to connect to SQL Server, which is the default server in my current project. I am following this guide to create a database component.
The properties file contains the following:
quarkus.datasource.db-kind=mssql
quarkus.datasource.username=<user-id>
quarkus.datasource.password=<pwd>
quarkus.datasource.reactive.url=sqlserver://localhost:1433/<db-name>?currentSchema=<schema-name>
quarkus.datasource.reactive.max-size=20
hibernate.default_schema=<schema-name>
The application starts fine, but when I make a request to the Resource that internally uses the repository, I get the following error:
Internal Server Error
Error id f0a959d2-3201-4015-bfd7-6628ae9914d1-1, io.vertx.mssqlclient.MSSQLException: {number=208, state=1, severity=16, message='Invalid object name ''.', serverName='<sql-instance>', lineNumber=1, additional=[io.vertx.mssqlclient.MSSQLException: {number=8180, state=1, severity=16, message='Statement(s) could not be prepared.', serverName='<sql-instance>', lineNumber=1}]}
This means my application is able to connect to the database, but it is not able to find the table. The table exists in a schema, and I am unable to pass the schema, which may be the cause of the issue. If you check the properties file, I have tried two options:
Adding 'currentSchema' as a query param
Adding the property 'hibernate.default_schema'
But neither of the two options works. There is no documentation on SQL Server that can help me provide the right configuration to the Quarkus application. Please help.
The correct property is quarkus.hibernate-orm.database.default-schema. You can check all the available configuration properties at https://quarkus.io/guides/hibernate-orm#hibernate-configuration-properties
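Applied to the properties file from the question, the configuration would look something like this sketch (the angle-bracket placeholders are the question's own, left unfilled):

```properties
quarkus.datasource.db-kind=mssql
quarkus.datasource.username=<user-id>
quarkus.datasource.password=<pwd>
# No currentSchema query parameter on the reactive URL
quarkus.datasource.reactive.url=sqlserver://localhost:1433/<db-name>
quarkus.datasource.reactive.max-size=20
# Replaces the non-working hibernate.default_schema entry
quarkus.hibernate-orm.database.default-schema=<schema-name>
```

With the schema set through the quarkus.hibernate-orm.* namespace, Hibernate qualifies the generated SQL with the schema name, which is what the failing query was missing.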

Mule Connect to remote flat files

I am new to Mule and I have been struggling with a simple issue for a while now. I am trying to connect to flat files (.MDB, .DBF) located on a remote desktop through my Mule application using the generic database connector of Mule. I have tried different things here:
I am using StelsDBF and StelsMDB drivers for the JDBC connectivity. I tried connecting directly using jdbc URL - jdbc:jstels:mdb:host/path
I have also tried to access it through FTP by running a FileZilla server on the remote desktop and using this JDBC URL in my app - jdbc:jstels:dbf:ftp://user:password@host:21/path
None of these seem to be working, as I am always getting connection exceptions. If anyone has tried this before, what is the best way to go about connecting to a remote flat file with Mule? Your response will be greatly appreciated!
If you want to load the contents of the file inside a Mule flow you should use the File or FTP connector; I don't know for sure about your JDBC option.
With the File connector you can access local files (files on the server where Mule is running), so you could try to mount the remote folders as a share.
Or run an FTP server like you already tried; that should work.
There is probably an error in your syntax / connection.
Please paste the complete XML of your Mule flow so we can see what you are trying to do.
Your use case is still not really clear to me: are you really planning to use HTTP to trigger the DB every time? Anyway, did you try putting the file on a local path and using that path in your database URL? Here is someone who says he had it working; he created a separate bean.
http://forums.mulesoft.com/questions/6422/setting_property_dynamically_on_jdbcdatasource.html
I think a local path is maybe possible and it's better to test that first.
Also take note of how to refer to a file path, look at the examples for the file connector: https://docs.mulesoft.com/mule-user-guide/v/3.7/file-transport-reference#namespace-and-syntax
If you manage to get it working and you can use the path directly in the JDBC url, you should have a look at the poll scope.
https://docs.mulesoft.com/mule-user-guide/v/3.7/poll-reference
You can use your DB connector as an inbound endpoint when wrapped in a poll scope.
I experienced the same issue when connecting to a Microsoft Access database (*.mdb, *.accdb) using the Mule Database Connector. After further investigation, it was solved by installing the Microsoft Access Database Engine.
Another issue: I couldn't pass a parameter to construct a query the same way I do for other databases, e.g.: SELECT * FROM emplcopy WHERE id = #[payload.id]
To solve this issue:
I changed the Query type from Parameterized into Dynamic.
I generated the query inside a Set Payload transformer (generating the query in the form of a String, e.g.: SELECT * FROM emplcopy WHERE id = '1').
Finally, put it into the Dynamic query area: #[payload]
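The three steps above can be sketched as a Mule 3.x flow fragment like the following; the flow name, connector config name, and the query are illustrative, not taken from a tested project:

```xml
<flow name="accessQueryFlow">
    <!-- Step 2: build the SQL as a plain String, interpolating the id -->
    <set-payload value="SELECT * FROM emplcopy WHERE id = '#[payload.id]'"/>
    <!-- Steps 1 and 3: a Dynamic query that evaluates to the prepared String -->
    <db:select config-ref="Generic_Database_Configuration">
        <db:dynamic-query><![CDATA[#[payload]]]></db:dynamic-query>
    </db:select>
</flow>
```

Since the Dynamic query is built by string concatenation rather than bound parameters, be careful if any part of the id ever comes from untrusted input.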

pwdChangedTime which is not allowed by any of the objectclasses defined in that entry in OpenDj LDAP

I installed a new LDAP server (OpenDJ 2.4.6). I am trying to enable replication with reference to an existing server, but I am getting the issue below.
I ran the replication command on the existing server (1st server). Can you please help/suggest on the issue below?
Establishing connections ..... Done.
Checking registration information .....
Error updating registration information. Details: Registration information
error. Error type: 'ERROR_UNEXPECTED'. Details:
javax.naming.directory.SchemaViolationException: [LDAP: error code 65 - Entry
cn=admin,cn=Administrators,cn=admin data violates the Directory Server schema
configuration because it includes attribute pwdChangedTime which is not
allowed by any of the objectclasses defined in that entry]; remaining name
'cn=admin,cn=Administrators,cn=admin data'
See /tmp/opends-replication-6304872164983350730.log for a detailed log of this
operation.
A schema violation indicates that the entry is not compliant with the schema defined on the directory server.
Since pwdChangedTime is defined by default in the schema as an operational attribute, and the error occurs with the dsreplication command (which is known to produce valid data), this probably indicates that you have messed with the default schema and altered it in a non-standard, incompatible way.
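One way to confirm this is to compare the pwdChangedTime definition on both servers. A sketch (host, port, and credentials are placeholders):

```shell
# Read the attributeTypes from the schema entry and pick out pwdChangedTime;
# on an unmodified server the definition should include NO-USER-MODIFICATION
# and USAGE directoryOperation (per draft-behera-ldap-password-policy).
ldapsearch -h localhost -p 1389 -D "cn=Directory Manager" -w <password> \
  -b "cn=schema" -s base "(objectClass=*)" attributeTypes \
  | grep -i pwdChangedTime
```

If the definition on the new server is missing, or lacks the operational USAGE, restoring the stock schema files should let dsreplication write the pwdChangedTime value on the admin entry.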

Is there a mechanism within LDAP to query out the version of the directory server?

Is there a mechanism to describe/discover information about the directory server implementation using the LDAP protocol itself? For example if you want to find out the specific version of the product being used on the backend.
As an analogy - in Oracle DBMS you can issue the following SQL to get the version information of the database you are connected to:
Select * from v$version
The JDBC API has a similar mechanism to get database metadata. Does the LDAP protocol have any similar discovery features to find out information about the directory server being connected to? In my particular case I happen to know that the backend directory server is Oracle/OID, but is there a mechanism to describe the directory server using the LDAP protocol itself? Alternatively, is there an Oracle/OID-specific technique?
See "the root DSE": directory servers might publish this information there. See also RFC 3045.
update:
In Java (this example uses the UnboundID LDAP SDK):
LDAPConnection conn = new LDAPConnection(hostname, port);
// An empty base DN with base scope targets the root DSE;
// the "+" attribute requests all operational attributes.
SearchRequest req = new SearchRequest("", SearchScope.BASE, "(&)", "+");
SearchResult result = conn.search(req);
If the search succeeds, the result will comprise one entry, and that entry is the Root DSE:
dn:
subschemaSubentry: cn=schema
namingContexts: C=us
vendorName: UnboundID Corp.
vendorVersion: UnboundID Directory Server 4.1.0.6
Although Terry Gardner's solution works on many LDAP server implementations, support for publishing version information is, unfortunately, inconsistent among LDAP server implementations.
There are more extensive methods to determine the vendor and version when the root DSE does not advertise them.
-jim

Sonar-Runner talks to the local database

I am trying to understand the sonar-runner (http://docs.sonarqube.org/display/SONAR/Installing+and+Configuring+SonarQube+Runner). I have a central Sonar server with its database on the same host. As expected, I run the sonar-runner from clients on numerous boxes and expect them to upload data to the central SonarQube server.
My sonar-project.properties looks something like below
# Required metadata
sonar.projectKey=a:b
sonar.projectName=b-1.0
sonar.projectVersion=1.0
# Comma-separated paths to directories with sources (required)
sonar.sources=lib
# Language
sonar.language=py
# Encoding of the source files
sonar.sourceEncoding=UTF-8
# Host of the sonar url
sonar.host.url=http://myserver:9000/msde/sonar/webapp
I was expecting that my client would perform some analysis and upload the data directly to the server using some web services meant for upload. However, I see the following in my logs:
10:42:00.678 INFO - Apply project exclusions
10:42:00.682 WARN - H2 database should be used for evaluation purpose only
10:42:00.682 INFO - Create JDBC datasource for jdbc:h2:tcp://localhost/sonar
10:42:00.755 INFO - Initializing Hibernate
Question
Should I be configuring the details of the database in sonar-project.properties? I was expecting it to use some web service at the Sonar URL to upload metrics, but there are several problems with exposing the database details: I want the database to be internal to the server, not accessed by various clients.
It also means that I would have to place the database details in property files across several projects, so the cost of changing the central database details is huge.
You need to edit the $SONARQUBE_RUNNER_HOME/conf/sonar-runner.properties file to point to the correct database instance. This is the only file you need to edit, and it applies to all your projects.
If you are using MySQL, in the HOME_SONAR_RUNNER\conf\sonar-runner.properties file you have to uncomment the line that relates to MySQL, leaving the line as this:
sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8
In the same file, don't forget to comment out the following:
#sonar.host.url=http://localhost:9000
Save the file and run again.
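Putting the two answers together, the shared runner-side file would look something like this sketch; the host, credentials, and database name are illustrative. Note that this direct-database setup applies only to old SonarQube versions (from SonarQube 5.2 onward, scanners talk exclusively to the web server and never touch the database):

```properties
# $SONARQUBE_RUNNER_HOME/conf/sonar-runner.properties -- shared by all projects
# Central database (MySQL example from the answer above):
sonar.jdbc.url=jdbc:mysql://myserver:3306/sonar?useUnicode=true&characterEncoding=utf8
sonar.jdbc.username=sonar
sonar.jdbc.password=sonar
# Commented out, as suggested above:
#sonar.host.url=http://localhost:9000
```

Keeping the JDBC settings in this one runner-level file, rather than in each project's sonar-project.properties, is what keeps the cost of changing the central database low.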
