In my ETL process, I'm trying to load the final result sets into AWS Redshift. I'm aware that PutDatabaseRecord inserts/updates the data in bulk, but I've verified on Redshift that it is executing one statement at a time. For an average of 150 records, it takes 1.5 minutes. Attaching a screenshot of the multiple inserts.
Below is my configuration:
I have a primary key on the table, and the same field is the sort key as well.
Am I missing anything?
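For anyone who wants to reproduce the check on the Redshift side, something along these lines lists the individual INSERT statements as they arrive (a rough Python sketch, assuming psycopg2 and the stl_query system view; the connection details are placeholders):

# Rough sketch: list the INSERT statements Redshift actually received.
# Connection details are placeholders; assumes psycopg2 is installed.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.xxxxxx.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="dev", user="etl_user", password="...")
with conn.cursor() as cur:
    # stl_query keeps the text and timing of recently executed statements.
    cur.execute("""
        SELECT query, starttime, endtime, TRIM(querytxt)
        FROM stl_query
        WHERE querytxt ILIKE 'insert into orders%'
        ORDER BY starttime DESC
        LIMIT 50;""")
    for query_id, start, end, text in cur.fetchall():
        print(query_id, start, end, text[:80])
conn.close()

Each record showing up as its own statement (rather than one multi-row insert) is what the screenshot above shows.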
Update-1:
Added the debug logger below to logback.xml and restarted the server to see the logs:
<logger name="org.apache.nifi.processors.standard.PutDatabaseRecord" level="DEBUG" />
The logs are as below:
2021-12-17 12:39:05,780 INFO [NiFi Web Server-19] o.a.n.c.s.StandardProcessScheduler Running once PutDatabaseRecord[id=1180157e-118d-1002-61b2-83024a97a793]
2021-12-17 12:39:05,780 INFO [NiFi Web Server-19] o.a.n.controller.StandardProcessorNode Starting PutDatabaseRecord[id=1180157e-118d-1002-61b2-83024a97a793]
2021-12-17 12:39:05,804 WARN [NiFi Web Server-19] org.apache.nifi.audit.ProcessorAuditor Unable to record actions: null
2021-12-17 12:39:06,544 INFO [Flow Service Tasks Thread-2] o.a.nifi.controller.StandardFlowService Saved flow controller org.apache.nifi.controller.FlowController#25c348b5 // Another save pending = false
2021-12-17 12:39:06,603 DEBUG [Timer-Driven Process Thread-3] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=1180157e-118d-1002-61b2-83024a97a793] Fetched Table Schema TableSchema[columns=[ --- data for around 40 columns --- ] for table name orders
2021-12-17 12:50:30,411 INFO [Timer-Driven Process Thread-3] o.a.n.c.s.StandardProcessScheduler Stopping PutDatabaseRecord[id=1180157e-118d-1002-61b2-83024a97a793]
2021-12-17 12:50:30,411 INFO [Timer-Driven Process Thread-3] o.a.n.controller.StandardProcessorNode Stopping processor: PutDatabaseRecord[id=1180157e-118d-1002-61b2-83024a97a793]
It can be seen that it is taking around 10-15 minutes for a thousand records.
PS: Maximum Batch Size is set to 1000, yet the execution still happens row by row. Screenshot attached below:
Report Server (SSRS) 2019 is restarting very often, roughly every minute, because the value of Hosting-databaseValidationStatus changes. SSRS is configured to use a local SQL Server 2019 and connects using Windows authentication.
Why would SSRS restart that often?
Why is the validation status changing?
2023-02-16 17:34:18.3752|INFO|45|Configuration of process 'RS Service' has changed. Values modified: Hosting-databaseValidationStatus
2023-02-16 17:34:18.3752|INFO|45|Configuration of process 'Portal' has changed. Values modified: Hosting-databaseValidationStatus
2023-02-16 17:34:18.3752|INFO|45|Restarting process: RS Service
2023-02-16 17:34:18.3752|INFO|45|Restarting process: Portal
2023-02-16 17:35:18.5642|INFO|45|Configuration of process 'RS Service' has changed. Values modified: Hosting-databaseValidationStatus
2023-02-16 17:35:18.5642|INFO|45|Configuration of process 'Portal' has changed. Values modified: Hosting-databaseValidationStatus
2023-02-16 17:35:18.5642|INFO|45|Restarting process: RS Service
2023-02-16 17:35:18.5642|INFO|45|Restarting process: Portal
I found out that removing Hosting-databaseValidationStatus from restartOnChangesTo in SSRS/RSHostingService/config.json solves the problem.
But there might be a reason that the validation change forces a restart.
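In case it helps anyone scripting the same workaround, here is a minimal sketch of the edit. It assumes restartOnChangesTo is a plain JSON array of strings, which may differ between SSRS builds, and the install path below is a placeholder; back up config.json first.

# Rough sketch: drop Hosting-databaseValidationStatus from restartOnChangesTo.
# Assumes the key holds a plain JSON array of strings; back up the file first.
import json
from pathlib import Path

# Placeholder path; adjust to the actual SSRS install location.
cfg_path = Path(r"C:\Program Files\Microsoft SQL Server Reporting Services"
                r"\SSRS\RSHostingService\config.json")

config = json.loads(cfg_path.read_text(encoding="utf-8"))
watched = config.get("restartOnChangesTo", [])
config["restartOnChangesTo"] = [
    k for k in watched if k != "Hosting-databaseValidationStatus"]
cfg_path.write_text(json.dumps(config, indent=2), encoding="utf-8")
print("Remaining watched keys:", config["restartOnChangesTo"])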
We are running a Mule 4 application where we have introduced BTM (the Bitronix Transaction Manager) to manage transaction commits and rollbacks.
For the BTM XA data source we used a default configuration like this (the service details and user details are not provided here....)
Here are a few observations:
The thread count is spiking sharply
We are getting the warning (the second message below) constantly, every minute
What could be the issue?
We are using Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Thanks.
XADataSource in connector StandardXADataSource: connection count=<0> number of dead connection=<0> dead lock max wait=<300000> dead lock retry wait=<10000> driver name=<oracle.jdbc.driver.OracleDriver> number of free connections=<0> max con=<0> min con=<50> prepared stmt cache size=<16> transaction manager= xid connection size=<0> StandardConnectionPoolDataSource: master prepared stmt cache size=<0> prepared stmt cache size =<16> StandardDataSource: driver= url=<jdbc:oracle:thin:#//oracle service name> user= CoreDataSource : debug = description = login time out =<60> user = verbose = . A default pool will be created. To customize define a bti:xa-data-source-pool element in your config and assign it to the connector.
?error running recovery on resource '270015947-default-xa-session', resource marked as failed (background recoverer will retry recovery)
java.lang.IllegalStateException: No TransactionContext associated with current thread: 914397
at com.hazelcast.transaction.impl.xa.XAResourceImpl.getTransactionContext(XAResourceImpl.java:305) ~[hazelcast-3.12.jar:3.12]
at com.mulesoft.mule.runtime.module.cluster.internal.vm.ClusterQueueSession.getXaTransactionContext(ClusterQueueSession.java:175) ~[mule-ee-distribution-standalone-4.3.0-20210622-patch.jar:4.3.0-20210622]
at com.mulesoft.mule.runtime.module.cluster.internal.vm.ClusterQueueSession.recover(ClusterQueueSession.java:136) ~[mule-ee-distribution-standalone-4.3.0-20210622-patch.jar:4.3.0-20210622]
at bitronix.tm.recovery.RecoveryHelper.recover(RecoveryHelper.java:103) ~[mule-btm-2.1.14.jar:2.1.14]
at bitronix.tm.recovery.RecoveryHelper.recover(RecoveryHelper.java:61) ~[mule-btm-2.1.14.jar:2.1.14]
at bitronix.tm.recovery.Recoverer.recover(Recoverer.java:276) [mule-btm-2.1.14.jar:2.1.14]
at bitronix.tm.recovery.Recoverer.recoverAllResources(Recoverer.java:233) [mule-btm-2.1.14.jar:2.1.14]
at bitronix.tm.recovery.Recoverer.run(Recoverer.java:146) [mule-btm-2.1.14.jar:2.1.14]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_241]
We have automated the SonarQube server installation using Ansible. As part of this procedure, Ansible polls the URL sonar/api/server/index to check whether the server is up and running. As soon as an HTTP 200 is returned and the server status is equal to SETUP...
<server>
<id>20170131094026</id>
<version>5.6.2</version>
<status>SETUP</status>
</server>
... the script triggers a database upgrade by sending a POST to sonar/api/server/setup and waiting for MIGRATION_SUCCEEDED to be returned.
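In Python rather than the actual Ansible tasks, the logic roughly looks like this (the base URL and credentials are placeholders, and the status is read from the XML response shown above):

# Sketch of the existing upgrade logic (not the actual Ansible tasks).
# Base URL and credentials are placeholders.
import time
import xml.etree.ElementTree as ET
import requests

BASE = "http://localhost:9000/sonar"    # placeholder
AUTH = ("admin", "admin")                # placeholder

def server_status():
    try:
        r = requests.get(f"{BASE}/api/server/index", auth=AUTH, timeout=10)
        if r.status_code != 200:
            return None
        return ET.fromstring(r.text).findtext("status")  # e.g. SETUP or UP
    except requests.RequestException:
        return None

# Wait until the server answers with HTTP 200 and reports SETUP,
# then trigger the migration and print the response.
while server_status() != "SETUP":
    time.sleep(10)
resp = requests.post(f"{BASE}/api/server/setup", auth=AUTH)
print(resp.text)   # expected to contain MIGRATION_SUCCEEDED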
This had worked well until I tried to upgrade SonarQube from version 5.6.2 to 5.6.5. For some reason sonar/api/server/index now always returns the status UP (even though the GUI clearly indicates that the server is still under maintenance), and a POST to sonar/api/server/setup indicates that the database is up to date and no migration is needed (NO_MIGRATION).
However, the server is still in maintenance mode and the nexus.log keeps repeating the same line:
09:41:05 INFO ce[o.s.c.a.WebServerWatcherImpl] Still waiting for WebServer...
09:41:39 INFO ce[o.s.c.a.WebServerWatcherImpl] Still waiting for WebServer...
09:43:13 INFO ce[o.s.c.a.WebServerWatcherImpl] Still waiting for WebServer...
09:47:28 INFO ce[o.s.c.a.WebServerWatcherImpl] Still waiting for WebServer...
When I manually navigate to sonar/setup and click the Update button, a database migration starts... Have there been any changes in the API? Am I calling the wrong REST endpoints?
You're using internal web services (api/server/index and api/server/upgrade). Responses and behavior can change between versions without any notification.
You should instead use:
GET api/system/db_migration_status : to get the database migration status
POST api/system/migrate_db : to execute the migration
I encourage you to go to http:///web_api to see the documentation of the web services available for the version of SonarQube you're using.
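As a rough sketch, the same automation could call these documented endpoints like this (base URL and credentials are placeholders; check /web_api on your own server for the exact response fields in your version):

# Sketch of the recommended flow using the documented system web services.
# Base URL and credentials are placeholders.
import time
import requests

BASE = "http://localhost:9000/sonar"    # placeholder
AUTH = ("admin", "admin")                # placeholder

def migration_state():
    r = requests.get(f"{BASE}/api/system/db_migration_status", auth=AUTH)
    r.raise_for_status()
    return r.json().get("state")   # e.g. NO_MIGRATION, MIGRATION_REQUIRED

if migration_state() == "MIGRATION_REQUIRED":
    requests.post(f"{BASE}/api/system/migrate_db", auth=AUTH).raise_for_status()
    while migration_state() not in ("MIGRATION_SUCCEEDED", "MIGRATION_FAILED"):
        time.sleep(10)

print("Final state:", migration_state())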
I have a clustered WSO2 deployment. The CPU is often at 30% (on a c2.large), and despite the CPU usage the server isn't processing requests; it just seems to be busy doing nothing.
It seems that the SVN deepsync autocommit feature is the cause of the CPU consumption, since if I switch off deepsync or simply set autocommit to false, I don't see the same CPU spiking.
The logs seem to back up this theory as I see:
TID: [0] [AM] [2015-02-20 16:30:14,100] DEBUG {org.wso2.carbon.deployment.synchronizer.subversion.SVNBasedArtifactRepository} - SVN adding files in /zzish/wso2am/repository/deployment/server {org.wso2.carbon.deployment.synchronizer.subversion.SVNBasedArtifactRepository}
TID: [0] [AM] [2015-02-20 16:30:52,932] DEBUG {org.wso2.carbon.deployment.synchronizer.subversion.SVNBasedArtifactRepository} - No changes in the local working copy {org.wso2.carbon.deployment.synchronizer.subversion.SVNBasedArtifactRepository}
TID: [0] [AM] [2015-02-20 16:30:52,932] DEBUG {org.wso2.carbon.deployment.synchronizer.internal.DeploymentSynchronizer} - Commit completed at Fri Feb 20 16:30:52 UTC 2015. Status: false {org.wso2.carbon.deployment.synchronizer.internal.DeploymentSynchronizer}
and during this time the CPU spike occurs.
As per https://docs.wso2.com/display/CLUSTER420/SVN-based+Deployment+Synchronizer I am using svnkit-1.3.9.wso2v1.jar.
I am using an external SVN service (silksvn) in order to avoid having to run my own HA subversion service.
So I have three questions:
Is it possible to reduce the frequency of the deepsync service?
How can I further debug this performance issue? Running this hot smells like a bug.
Has anyone managed to get the Git deployment sync working (link to the project on GitHub) with AM 1.8.0?
There are errors in the New Relic logs:
...
2014-03-28 13:35:14,167 NewRelic INFO: Harvest starting
2014-03-28 13:35:15,136 NewRelic INFO: Harvest starting
2014-03-28 13:35:20,355 NewRelic INFO: Harvest starting
2014-03-28 13:35:23,543 NewRelic ERROR: Exception thrown from event handler. Event handlers should not let exceptions bubble out of them.
System.NullReferenceException: Object reference not set to an instance of an object.
at NewRelic.Agent.Core.Metric.StatsMap`1.Merge(T name, IStats newStats)
at NewRelic.Agent.Core.Metric.StatsMap`1.Merge(IStatsMap`1 otherMap)
at NewRelic.Agent.Core.Metric.StatsCollection.RecordTransactionStats(String scope, ITransactionStats txStats)
at NewRelic.Agent.Core.Utilities.EventBus`1.Publish(T message)
2014-03-28 13:35:33,090 NewRelic INFO: Harvest starting
2014-03-28 13:36:07,575 NewRelic INFO: Harvest starting
...
The Windows Event log does not contain any records about this.
OS: Windows 2012.
The status monitor shows "New Relic has not sent data" for this application; however, the logs do contain records of data being sent.
Does anybody know about this error?
The issue causing this error was fixed in version 2.22.79.0 of the .NET agent, so if you are running an older version, upgrading should fix the problem.