In SnapLogic, is there a way to parameterize the database connection info so you can create a reusable pipeline that passes the DB connection info to the account of a Select snap?
In the pipeline properties, add a parameter named "awsdb" and set its value to the account name. Then, in the database snaps, use this parameter to dynamically load the connection info based on the parameter value.
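For illustration, a minimal sketch of what this could look like; the parameter name and account path are assumptions, and in SnapLogic expressions a pipeline parameter is referenced with a leading underscore:

Pipeline parameter:                  awsdb = ../shared/MyDbAccount
Snap account field (expression enabled):  _awsdb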
The database connection information cannot be passed as parameters (and it should not be). You have to create separate accounts for each connection.
As per this answer here, you can pass the account name as a pipeline parameter and configure the database read snap to connect to whatever instance it is supposed to connect to.
A better way to do this is to have an expression library file that contains the account information, and to pass the schema name and table name as pipeline parameters. This way, you can use the same pipeline to extract data from different tables in various schemas over the same connection. The pipeline can be dragged and dropped wherever you need it, or called from another pipeline with a Pipeline Execute snap. You can maintain multiple such expression library files, each configured for a different account.
Creating a pipeline with the schema name and table name passed as pipeline parameters and an expression library file containing the account information:
Configuring the schema name and table name in the settings tab of the snap:
Getting the account information from the expression library file:
The expression library file config.expr used in this example:
{
"account": "../shared/TEST"
}
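The snap's account reference can then be read from this library with an expression. Assuming the file is registered in the pipeline properties under the alias config, it would look something like:

lib.config.account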
It works:
Note: As you can see, I created a Generic Database Account named TEST in the shared folder of the project space I was working in. This account contains all the connection information.
Hope this helps :)
I have worked with Quarkus connecting to Postgres, but this is the first time I am trying to connect to SQL Server, which is the default server in my current project. I am following this guide to create a database component.
The properties file contains the following:
quarkus.datasource.db-kind=mssql
quarkus.datasource.username=<user-id>
quarkus.datasource.password=<pwd>
quarkus.datasource.reactive.url=sqlserver://localhost:1433/<db-name>?currentSchema=<schema-name>
quarkus.datasource.reactive.max-size=20
hibernate.default_schema=<schema-name>
The application starts fine, but when I make a request to the Resource that internally uses the repository, I get the following error:
Internal Server Error
Error id f0a959d2-3201-4015-bfd7-6628ae9914d1-1, io.vertx.mssqlclient.MSSQLException: {number=208, state=1, severity=16, message='Invalid object name ''.', serverName='<sql-instance>', lineNumber=1, additional=[io.vertx.mssqlclient.MSSQLException: {number=8180, state=1, severity=16, message='Statement(s) could not be prepared.', serverName='<sql-instance>', lineNumber=1}]}
This means my application is able to connect to the database, but it cannot find the table. The table exists in a schema, and I am unable to pass the schema name, which may be the cause of the issue. As you can see in the properties file, I have tried two options:
Adding 'currentSchema' as a query param
Adding the property 'hibernate.default_schema'
Neither of the two options works. I could not find any SQL Server-specific documentation that shows the right configuration for the Quarkus application. Please help.
The correct property is quarkus.hibernate-orm.database.default-schema. You can check all the available configuration properties at https://quarkus.io/guides/hibernate-orm#hibernate-configuration-properties
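In the properties file from the question, that would mean replacing the last line with something like:

# instead of hibernate.default_schema=<schema-name>
quarkus.hibernate-orm.database.default-schema=<schema-name>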
I am trying to validate JSON using the ValidateRecord processor with an AvroSchemaRegistry. I need to store the validation error message in a SQL table, so I tried to capture the error message in an attribute, but I am unable to do so. Any idea how to do it?
After your ValidateRecord processor, you can route flow files that are 'invalid' to a separate log and on to your SQL table, and you can do the same if they 'fail'. I am assuming that by 'error message' you mean the bulletin that is produced when the processor can neither validate nor invalidate the flow file against your schema.
A potential solution to this is to use the SiteToSiteBulletinReportingTask
You can build a dataflow to receive these bulletin events, manipulate them as you want and store them in a location of your choice for your auditing needs.
From the sounds of it, the SiteToSiteBulletinReportingTask should be able to achieve what you want. To implement this, add a SiteToSiteBulletinReportingTask under 'Reporting Tasks' in the NiFi Settings.
You can name your input port and have it flow towards your SQL store and you should have what you're after.
You need to allow NiFi nodes to receive data via site-to-site on the input port and you also need to grant the correct permissions on the root process group so the nodes are able to see the component, view and modify the data.
Side note: I would usually log everything and route all failures and invalid records to log files, which I then put into a store, e.g. HBase or SQL. One suggestion I've seen is to configure the logging subsystem to additionally send specific error categories to your destination of choice (i.e. active notification versus passive parsing of logs). NiFi leverages the very flexible logback framework (an evolution of log4j). The best part: changes to the $NIFI_HOME/conf/logback.xml configuration file do not require an instance restart and are picked up within 30 seconds or less.
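As a rough sketch of that idea (the appender name, file name and pattern here are assumptions, not NiFi defaults), an extra appender in $NIFI_HOME/conf/logback.xml that copies ERROR-level events into a dedicated file for downstream parsing could look like this:

<appender name="ERRORS_FOR_AUDIT" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-errors-audit.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-errors-audit_%d.log</fileNamePattern>
        <maxHistory>10</maxHistory>
    </rollingPolicy>
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
    </encoder>
    <!-- only let ERROR-level events through to this file -->
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
        <level>ERROR</level>
    </filter>
</appender>
<!-- then add <appender-ref ref="ERRORS_FOR_AUDIT"/> inside the existing <root> (or a specific logger) element -->

A tailing/ingest flow can then pick that file up and push the entries to HBase/SQL.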
I am trying to create 30 databases (oci_database_database resource) under 5 existing db_homes. All of these resources are under a single DB System:
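Roughly along these lines (a simplified sketch; the variable names and values are placeholders, not the actual code):

resource "oci_database_database" "extra_dbs" {
  count      = 30
  db_home_id = var.db_home_ids[count.index % 5]  # spread across the 5 existing DB Homes
  source     = "NONE"

  database {
    admin_password = var.admin_password
    db_name        = "DB${count.index}"
  }
}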
When applying my code, the first database is successfully created; then, when Terraform attempts to create the second one, I get the following error message: "Error: Service error:IncorrectState. The existing Db System with ID has a conflicting state of UPDATING", which causes the execution to stop.
If I re-apply my code, the second database is created, and then I get the same error when Terraform attempts to create the third one.
I am assuming I get this message because Terraform starts creating the next database as soon as the previous one is created, while the DB System status is not yet up to date (still 'UPDATING' instead of 'AVAILABLE').
A good way for the OCI provider to avoid this issue would be to consider a database creation complete only when the creation itself has finished AND the associated DB Home's and DB System's statuses are back to 'AVAILABLE'.
Any suggestion on how to address the issue I am encountering?
Feel free to ask if you need any additional information.
Thank you.
As mentioned above, it looks like you have opened a ticket regarding this on GitHub. What you are experiencing should not happen, as Terraform should retry after seeing the error. As per your GitHub post, the person helping you needs your log with timestamps so they can troubleshoot further. At this stage I would recommend following up there and sharing the requested info.
I am trying to get the information about all the files related to an IBM BPM instance, but the following query does not work for me, and there is no error in the JavaScript console either. I am using the ECM Document List control, and in its configuration I am adding a variable which contains the query.
"SELECT cmis:name, IBM_BPM_Document_FileNameURL,IBM_BPM_Document_UserId FROM IBM_ WHERE IBM_BPM_Document_ProcessInstanceId = 75774"
Thanks
I would assume that you are using an external ECM server and not the embedded ECM that comes along with IBM BPM/BAW.
Looking at the problem, I would approach the debugging in the following order:
1. Use a relevant ECM browser (ACCE in the case of FileNet) to check whether the documents have a property that holds the value of the instance ID, because by default external ECM servers don't have such a document property.
2. If the documents have such a property, use it in the WHERE clause of the query (see the sketch after this list). If they don't, talk to whoever maintains the ECM environment to create such a property and make sure it is set properly (to the correct instanceId) on the documents.
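Once such a property exists, the query would then look something like the following; YourDocumentClass and YourInstanceIdProperty are placeholders for the actual class and property names in your object store, and the value is quoted on the assumption that the property is a string:

SELECT cmis:name
FROM YourDocumentClass
WHERE YourInstanceIdProperty = '75774'

Any further document properties the class actually has (file name, URL, user ID, etc.) can be added to the SELECT list.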
Another solution, if you have access to them, is to use the "BPM Document List" and "BPM File Uploader" controls, which have a configuration option to associate documents with the current process instance.
I have to move data from Oracle to PostgreSQL on the Amazon cloud. I want to know if there are any ways to take care of configuration-related issues. I want to pick up the connection string, user ID, password and other credentials dynamically. How am I supposed to do this?
You can use the tFileInputProperties component for that. There you can set the type of the configuration file (.properties or .ini) and its path. The output is a row of key-value pairs. Connect it to tContextLoad: if a key in the configuration file matches a context variable name, its value is assigned to that variable.
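For illustration, such a .properties file could look like the following; the key names are made up and must match your context variable names exactly:

# keys must match the Talend context variable names
oracle_url=jdbc:oracle:thin:@oraclehost:1521:ORCL
oracle_user=etl_user
oracle_password="Passw0rd,1!"
pg_url=jdbc:postgresql://mypg.xxxxx.eu-west-1.rds.amazonaws.com:5432/targetdb
pg_user=etl_user
pg_password=secret

(The Oracle password is quoted here because it contains special characters; see the tip below.)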
Here is the overview of the .properties file structure: https://en.wikipedia.org/wiki/.properties
And here is the .ini file structure: https://en.wikipedia.org/wiki/INI_file
If you have an enterprise version of Talend with Talend Administration Center, you can also enter the context variables in the task definition on the Job Conductor page. That is more convenient than putting a properties file on the disk of the VM where the job server is running.
Small tip: if you decide to use a properties file, note that the Oracle password should be in double quotes if it includes special characters. I do not know about PostgreSQL, but a MySQL password should NOT be in double quotes.