I am trying to implement the flow described on the site below (without the UpdateAttribute processor). I have a directory with a few files that need to be loaded into the MarkLogic database, and I have added the MarkLogic database configuration.
https://marklogic-community.github.io/marklogic-nifi-incubator/file-system-to-marklogic.html
I have attached a screenshot. Please help.
Error:
2021-08-17 10:57:36,900 ERROR [Timer-Driven Process Thread-6] o.a.n.marklogic.processor.PutMarkLogic PutMarkLogic[id=017b100b-ebe4-1ae5-b791-0cc5e89e6e2c] : java.lang.NullPointerException
java.lang.NullPointerException: null
2021-08-17 10:57:36,905 INFO [Timer-Driven Process Thread-6] o.a.n.marklogic.processor.PutMarkLogic PutMarkLogic[id=017b100b-ebe4-1ae5-b791-0cc5e89e6e2c] Rolling back session
2021-08-17 10:57:36,906 ERROR [Timer-Driven Process Thread-6] o.a.n.marklogic.processor.PutMarkLogic PutMarkLogic[id=017b100b-ebe4-1ae5-b791-0cc5e89e6e2c] Failed to process session due to org.apache.nifi.processor.exception.ProcessException: java.lang.NullPointerException: java.lang.NullPointerException
↳ causes: org.apache.nifi.processor.exception.ProcessException: java.lang.NullPointerException
org.apache.nifi.processor.exception.ProcessException: java.lang.NullPointerException
at org.apache.nifi.marklogic.processor.AbstractMarkLogicProcessor.logErrorAndRollbackSession(AbstractMarkLogicProcessor.java:215)
at org.apache.nifi.marklogic.processor.PutMarkLogic.onTrigger(PutMarkLogic.java:394)
at org.apache.nifi.marklogic.processor.PutMarkLogic.onTrigger(PutMarkLogic.java:329)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1202)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:103)
at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException: null
I figured out the error: I hadn't set JAVA_HOME properly. Once I set it, everything works fine.
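For anyone who hits the same NullPointerException, a minimal sketch of the fix; the JDK path is an example, adjust it to your installation:
# set JAVA_HOME before starting NiFi, e.g. in <nifi-home>/bin/nifi-env.sh
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64   # example path
export PATH="$JAVA_HOME/bin:$PATH"
bin/nifi.sh restart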
Related
I have a NiFi installation running on Linux which was working fine, and all of a sudden FetchSFTP started throwing an error.
My flow is ListSFTP - FetchSFTP - PutSFTP, and below is the error shown by the FetchSFTP processor.
FetchSFTP[id=908da67c-0181-1000-1830-fdbb76da7be8] Successfully fetched the content for FlowFile[filename=cfgcampaign_2022-06-25.csv] from etl12.kw.zain.com:22/data1/dw/ftpuser/Varicent_Files/ICM_CC/cfgcampaign_2022-06-25.csv but failed to rename the remote file due to net.schmizz.sshj.sftp.SFTPException: Failure: java.io.IOException: net.schmizz.sshj.sftp.SFTPException: Failure - Caused by: net.schmizz.sshj.sftp.SFTPException: Failure
And from the log:
2022-06-26 10:58:50,699 WARN [Timer-Driven Process Thread-4] o.a.nifi.processors.standard.FetchSFTP [FetchSFTP[id=908da67c-0181-1000-1830-fdbb76da7be8], StandardFlowFileRecord[uuid=d139d68c-f094-45f8-982d-ab4a1abaf264,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1656230330691-548, container=default, section=548], offset=0, length=46140],offset=0,name=cfgcampaign_2022-06-25.csv,size=46140], etl12.kw.zain.com, 22, /data1/dw/ftpuser/Varicent_Files/ICM_CC/cfgcampaign_2022-06-25.csv, java.io.IOException: net.schmizz.sshj.sftp.SFTPException: Failure] Successfully fetched the content for {} from {}:{}{} but failed to rename the remote file due to {}
java.io.IOException: net.schmizz.sshj.sftp.SFTPException: Failure
at org.apache.nifi.processors.standard.util.SFTPTransfer.rename(SFTPTransfer.java:785)
at org.apache.nifi.processors.standard.FetchFileTransfer.performCompletionStrategy(FetchFileTransfer.java:359)
at org.apache.nifi.processors.standard.FetchFileTransfer.lambda$onTrigger$1(FetchFileTransfer.java:313)
at org.apache.nifi.controller.repository.StandardProcessSession.commitAsync(StandardProcessSession.java:537)
at org.apache.nifi.processors.standard.FetchFileTransfer.onTrigger(FetchFileTransfer.java:312)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1283)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)
at org.apache.nifi.controller.scheduling.AbstractTimeBasedSchedulingAgent.lambda$doScheduleOnce$0(AbstractTimeBasedSchedulingAgent.java:63)
at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: net.schmizz.sshj.sftp.SFTPException: Failure
at net.schmizz.sshj.sftp.Response.error(Response.java:140)
at net.schmizz.sshj.sftp.Response.ensureStatusIs(Response.java:133)
at net.schmizz.sshj.sftp.Response.ensureStatusPacketIsOK(Response.java:125)
at net.schmizz.sshj.sftp.SFTPEngine.rename(SFTPEngine.java:250)
at net.schmizz.sshj.sftp.SFTPClient.rename(SFTPClient.java:124)
at net.schmizz.sshj.sftp.SFTPClient.rename(SFTPClient.java:119)
at org.apache.nifi.processors.standard.util.SFTPTransfer.rename(SFTPTransfer.java:777)
... 16 common frames omitted
Can anyone help me to fix this?
Regards,
Ben
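A note on where the rename comes from: it is performed by FetchSFTP's Completion Strategy (Move File) after the content is fetched, so an SFTP "Failure" status at that point usually means the move-target directory is missing or the SFTP user cannot write to it. One way to confirm is to repeat the rename by hand with the sftp client; the login name is guessed from the path and the target directory below is hypothetical, substitute the processor's Move Destination Directory:
# connect as the same user the processor uses (name guessed from the path)
sftp ftpuser@etl12.kw.zain.com
# attempt the same rename FetchSFTP performs; a "Failure" reply here
# points to a permissions or missing-directory problem on the server
sftp> rename /data1/dw/ftpuser/Varicent_Files/ICM_CC/cfgcampaign_2022-06-25.csv /data1/dw/ftpuser/archive/cfgcampaign_2022-06-25.csv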
I am using NiFi to connect to Netezza, pull data from it, and save it to HDFS. I'm configuring the DBCPConnectionPool as below and getting the data with the following statement:
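A representative DBCPConnectionPool setup and query for Netezza; the host, database, table, and driver path here are hypothetical placeholders, not the poster's actual values:
# DBCPConnectionPool controller service properties
Database Connection URL: jdbc:netezza://netezza-host:5480/MYDB
Database Driver Class Name: org.netezza.Driver
Database Driver Location(s): /opt/nifi/drivers/nzjdbc.jar

# ExecuteSQLRecord query
SELECT * FROM some_table   -- hypothetical table name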
But I get the following error:
2022-04-26 17:03:06,530 WARN [Timer-Driven Process Thread-6] o.a.n.controller.tasks.ConnectableTask Administratively Yielding ExecuteSQLRecord[id=504843e5-0180-1000-0000-00006597605a] due to uncaught Exception: java.lang.AbstractMethodError: org.netezza.sql.NzConnection.isValid(I)Z
java.lang.AbstractMethodError: org.netezza.sql.NzConnection.isValid(I)Z
at org.apache.commons.dbcp2.DelegatingConnection.isValid(DelegatingConnection.java:897)
at org.apache.commons.dbcp2.PoolableConnection.validate(PoolableConnection.java:270)
at org.apache.commons.dbcp2.PoolableConnectionFactory.validateConnection(PoolableConnectionFactory.java:630)
at org.apache.commons.dbcp2.BasicDataSource.validateConnectionFactory(BasicDataSource.java:118)
at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:665)
at org.apache.commons.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:544)
at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:753)
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:440)
at org.apache.nifi.dbcp.DBCPService.getConnection(DBCPService.java:55)
at sun.reflect.GeneratedMethodAccessor374.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:87)
at com.sun.proxy.$Proxy131.getConnection(Unknown Source)
at org.apache.nifi.processors.standard.AbstractExecuteSQL.onTrigger(AbstractExecuteSQL.java:236)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1176)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I don't know why this happens or how to fix it. Can anyone help me? Thank you very much.
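One commonly suggested workaround, offered here as an assumption rather than a confirmed fix: an AbstractMethodError on NzConnection.isValid(I)Z means the Netezza JDBC driver predates JDBC 4 and does not implement Connection.isValid(), which commons-dbcp2 calls by default when validating pooled connections. Setting a validation query on the DBCPConnectionPool makes DBCP validate with that query instead of calling isValid():
# DBCPConnectionPool controller service property
Validation query: SELECT 1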
I have a problem with GeoMesa failing when adding indexes; maybe someone knows where the problem is?
geomesa-accumulo add-attribute-index -u root -p xxx -c xxx_dev_test -a asset_id --coverage full -f telemetry_values
DEBUG Looking up Accumulo Instance Id in Zookeeper for 5000 milliseconds.
DEBUG You can specify the Instance Id via the command line or
change the Zookeeper timeout by setting the system property 'instance.zookeeper.timeout'.
INFO Running map reduce index job for attributes: [asset_id] with coverage: full...
ERROR Error encountered running attribute index command. Check hadoop's job history logs for more information.
I found that no jobs were created in Hadoop, so there are no job-history logs, but in the tserver logs I found:
2021-01-25 12:32:05,129 [rpc.CustomNonBlockingServer$CustomFrameBuffer] WARN : Got an IOException during write!
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
at org.apache.thrift.transport.TNonblockingSocket.write(TNonblockingSocket.java:165)
at org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.write(AbstractNonblockingServer.java:414)
at org.apache.thrift.server.AbstractNonblockingServer$AbstractSelectThread.handleWrite(AbstractNonblockingServer.java:221)
at org.apache.thrift.server.TNonblockingServer$SelectAcceptThread.select(TNonblockingServer.java:206)
at org.apache.thrift.server.TNonblockingServer$SelectAcceptThread.run(TNonblockingServer.java:154)
2021-01-25 12:32:05,202 [rpc.CustomNonBlockingServer$CustomFrameBuffer] WARN : Got an IOException during write!
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
at org.apache.thrift.transport.TNonblockingSocket.write(TNonblockingSocket.java:165)
at org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.write(AbstractNonblockingServer.java:414)
at org.apache.thrift.server.AbstractNonblockingServer$AbstractSelectThread.handleWrite(AbstractNonblockingServer.java:221)
at org.apache.thrift.server.TNonblockingServer$SelectAcceptThread.select(TNonblockingServer.java:206)
at org.apache.thrift.server.TNonblockingServer$SelectAcceptThread.run(TNonblockingServer.java:154)
hadoop 3.1
accumulo 1.9.3
geomesa-accumulo 2.4.0
any advice?
GeoMesa logs, which look like the same error as the ZooKeeper one:
2021-01-25 13:29:38,762 DEBUG [org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator] IOException thrown
java.io.IOException: org.apache.thrift.transport.TTransportException: java.nio.channels.ClosedByInterruptException
at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:760)
at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator$QueryTask.run(TabletServerBatchReaderIterator.java:367)
at org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.thrift.transport.TTransportException: java.nio.channels.ClosedByInterruptException
at org.apache.thrift.transport.TIOStreamTransport.flush(TIOStreamTransport.java:161)
at org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:158)
at org.apache.accumulo.core.client.impl.ThriftTransportPool$CachedTTransport.flush(ThriftTransportPool.java:346)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:73)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62)
at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.send_startMultiScan(TabletClientService.java:326)
at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startMultiScan(TabletClientService.java:308)
at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:684)
... 6 more
Caused by: java.nio.channels.ClosedByInterruptException
at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:475)
at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at org.apache.thrift.transport.TIOStreamTransport.flush(TIOStreamTransport.java:159)
... 13 more
Here are more logs from GeoMesa; it seems there is some problem with job creation:
2021-01-25 13:54:36,873 WARN [org.apache.hadoop.mapred.LocalJobRunner] job_local1471203421_0001
java.lang.Exception: java.lang.NullPointerException
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:552)
Caused by: java.lang.NullPointerException
at org.locationtech.geomesa.jobs.accumulo.index.AttributeIndexJob$AttributeMapper$$anonfun$setup$1.apply(AttributeIndexJob.scala:103)
at org.locationtech.geomesa.jobs.accumulo.index.AttributeIndexJob$AttributeMapper$$anonfun$setup$1.apply(AttributeIndexJob.scala:102)
at org.locationtech.geomesa.utils.io.WithStore.apply(WithStore.scala:37)
at org.locationtech.geomesa.jobs.accumulo.index.AttributeIndexJob$AttributeMapper.setup(AttributeIndexJob.scala:102)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Error from the MapReduce job:
Exception in thread "main" java.lang.NumberFormatException: For input string: "local1471203421"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:589)
at java.lang.Long.parseLong(Long.java:631)
at org.apache.hadoop.mapreduce.TypeConverter.toClusterTimeStamp(TypeConverter.java:111)
at org.apache.hadoop.mapreduce.TypeConverter.toYarn(TypeConverter.java:82)
at org.apache.hadoop.mapred.ClientServiceDelegate.<init>(ClientServiceDelegate.java:121)
at org.apache.hadoop.mapred.ClientCache.getClient(ClientCache.java:68)
at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:870)
at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:215)
at org.apache.hadoop.mapreduce.tools.CLI.getJob(CLI.java:660)
at org.apache.hadoop.mapreduce.tools.CLI.run(CLI.java:470)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.mapred.JobClient.main(JobClient.java:1277)
Hadoop 3.1 does not support this feature; you need to update to Hadoop 3.2.
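A quick way to confirm which Hadoop version the GeoMesa CLI is actually picking up (useful when more than one install is on the machine):
# print the Hadoop version on the box running the geomesa-accumulo tools;
# per the note above, the local-job-id parse failure is resolved on 3.2+
hadoop version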
I'm running a Jenkins build job which takes code from Git and builds an artifact, but I'm getting the following error:
Waiting for Jenkins to finish collecting data
ERROR: Asynchronous execution failure
java.util.concurrent.ExecutionException: java.lang.NullPointerException
at hudson.remoting.Channel$2.adapt(Channel.java:992)
at hudson.remoting.Channel$2.adapt(Channel.java:986)
at hudson.remoting.FutureAdapter.get(FutureAdapter.java:55)
at hudson.maven.AbstractMavenBuilder.waitForAsynchronousExecutions(AbstractMavenBuilder.java:186)
at hudson.maven.Maven3Builder.call(Maven3Builder.java:146)
at hudson.maven.Maven3Builder.call(Maven3Builder.java:70)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
I am trying out the Wait/Notify processors in NiFi 1.2 for the first time. In the property for Distributed Cache Service I chose "Create new service".
Under the properties of that service I just picked the hostname of the local server where NiFi is running as the Server Hostname, and all the indicators went green.
But when I started the processors I got this error message:
2017-07-12 14:28:09,563 ERROR [Timer-Driven Process Thread-6] org.apache.nifi.processors.standard.Wait Wait[id=115238a2-299b-1267-98b6-14d1a4eb45e8] Failed to process session due to org.apache.nifi.processor.exception.ProcessException: Failed to get signal for TOC_2017cw14_WGS84_umts due to java.net.ConnectException: Connection refused: {}
org.apache.nifi.processor.exception.ProcessException: Failed to get signal for TOC_2017cw14_WGS84_umts due to java.net.ConnectException: Connection refused
at org.apache.nifi.processors.standard.Wait.onTrigger(Wait.java:354)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:111)
Manually add a DistributedMapCacheServer controller service with default parameters (port 4557) and enable it.
You might already have a DistributedMapCacheClientService; however, you also need a DistributedMapCacheServer to resolve the issue.
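A minimal sketch of the two controller services involved, using the property names as they appear in NiFi (4557 is the default port):
# DistributedMapCacheServer (server side, must be added and enabled)
Port: 4557
# DistributedMapCacheClientService (client side, referenced by Wait/Notify)
Server Hostname: <hostname where the DistributedMapCacheServer runs>
Server Port: 4557
Enable both services, then start the Wait/Notify processors again.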