ORACLE SQL Errors 127 and 396

For a long time, as soon as I start the program, Oracle has been giving me these two errors, which I have not been able to solve or understand.
1st error
LEVEL: SEVERE
Sequence: 396
Elapsed: 60969
Source: oracle.dbtools.raptor.backgroundTask.RaptorTaskManager$1
Message: NORTHWIND expired 0.003s ago (see "Caused by:" stack trace below); reported from ExpiredTextBuffer on thread AWT-EventQueue-0 activate AWT event:
java.awt.event.FocusEvent[FOCUS_LOST,permanent,opposite=null,cause=CLEAR_GLOBAL_FOCUS_OWNER] on Editor2_MAIN at oracle.ide.model.ExpiredTextBuffer.newExpiredTextBufferException(ExpiredTextBuffer.java:55)
2nd Error
LEVEL: SEVERE
Sequence: 127
Elapsed: 0
Source: oracle.ide.extension.HashStructureHook
Message: Unexpected runtime exception while delivering HashStructureHookEvent
I have tried reinstalling everything, and it is not due to a lack of resources either, since the PC is quite powerful (Ryzen 9, 32 GB of RAM).

Related

ORA-29273: HTTP request failed ORA-29276: transfer timeout

We are running 12.1.0.2 OEE.
We are getting an intermittent ORA error while executing a REST call from a stored procedure.
[Error] Execution (124: 1): ORA-29273: HTTP request failed
ORA-29276: transfer timeout
ORA-06512: at "SYS.UTL_HTTP", line 1258
ORA-06512: at "EDB.GET_EXPECTED_VALUES_914", line 57
ORA-06512: at line 12
What we tried:
We changed the default timeout to:
UTL_HTTP.SET_TRANSFER_TIMEOUT(896000);
It worked for some time, but now we have started getting the timeout error again.
The timeout occurs after about 1.5 minutes, which means it does not respect the parameter in UTL_HTTP.SET_TRANSFER_TIMEOUT(896000).
The issue was fluctuating network performance.
UTL_HTTP.SET_TRANSFER_TIMEOUT(896000) modifies the default 60-second timeout and must be called before initiating the REST request; otherwise, use the per-request form:
UTL_HTTP.SET_TRANSFER_TIMEOUT(req, 896000).
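For reference, here is a minimal sketch of the order of calls (the URL and variable names are placeholders, not taken from the original post): the session-level UTL_HTTP.SET_TRANSFER_TIMEOUT must run before UTL_HTTP.BEGIN_REQUEST, or the per-request overload can be applied to the request handle instead.
DECLARE
  l_req  UTL_HTTP.REQ;
  l_resp UTL_HTTP.RESP;
  l_text VARCHAR2(32767);
BEGIN
  -- Session-level timeout (in seconds); must be set BEFORE begin_request.
  UTL_HTTP.SET_TRANSFER_TIMEOUT(896000);
  l_req := UTL_HTTP.BEGIN_REQUEST('http://example.com/api', 'GET');
  -- Alternatively, the per-request overload on the request handle:
  -- UTL_HTTP.SET_TRANSFER_TIMEOUT(l_req, 896000);
  l_resp := UTL_HTTP.GET_RESPONSE(l_req);
  BEGIN
    LOOP
      UTL_HTTP.READ_TEXT(l_resp, l_text, 32767);
      DBMS_OUTPUT.PUT_LINE(l_text);
    END LOOP;
  EXCEPTION
    WHEN UTL_HTTP.END_OF_BODY THEN
      UTL_HTTP.END_RESPONSE(l_resp);
  END;
END;
/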

PITEST mutationCoverage is returning SocketException

While running clean test verify org.pitest:pitest-maven:mutationCoverage, I am getting the exception below.
PIT >> INFO : MINION :
at org.pitest.testapi.execute.containers.UnContainer.execute(UnContainer.java:31)
at org.pitest.testapi.execute.Pitest.executeTests(Pitest.java:57)
at org.pitest.testapi.execute.Pitest.run(Pitest.java:48)
at org.pitest.coverage.execute.CoverageWorker.run(C
Caused by: java.net.SocketException: Software caused connection abort: socket write error
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java
The microservice has many scenarios to execute, and I would like to know how this can be fixed. I see some details in http://pitest.org/faq/ under the section "PIT is taking forever to run", but I am not sure whether there is a way to increase the thread count.
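For what it's worth, pitest-maven does expose a threads option (along with per-test timeout settings) in its plugin configuration; below is a minimal sketch where the version and values are only examples, not taken from the original build.
<plugin>
  <groupId>org.pitest</groupId>
  <artifactId>pitest-maven</artifactId>
  <version>1.4.3</version> <!-- example version only -->
  <configuration>
    <!-- Number of threads used to run the mutation analysis. -->
    <threads>4</threads>
    <!-- Extra per-test timeout in milliseconds, added on top of the observed test time. -->
    <timeoutConstant>10000</timeoutConstant>
    <!-- Multiplier applied to the normal test execution time before timing out. -->
    <timeoutFactor>2</timeoutFactor>
  </configuration>
</plugin>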

Opendaylight Data Tree poisoned after leader switch

We are using the Nitrogen release (SR1) of ODL. We are trying a 2-node cluster, and when the follower becomes leader (after running the server for 5-6 hours) we observe the exception below in karaf.log. When this exception happens we are unable to access MD-SAL for any read/write operations.
We call the switchAllLocalShardsState API to make the follower the new leader, based on the "Akka Member Removed" event.
1) What do "poisoned" and "NoProgressException" refer to here?
2) Does the "TERM" that we pass as an argument to switchAllLocalShardsState cause this issue? If yes, please explain the significance of TERM, and also let us know why we are facing this issue only after running the server for a long time.
2018-06-04 11:58:25,452 | ERROR | tAdminThread #20 | 130 - com.fujitsu.fnc.sdn.fw-scheduler-odl - 5.1.0.SNAPSHOT | SchedulerServiceImpl | Get Schedule List Transaction failed : ReadFailedException{message=read execution failed, errorList=[RpcError [message=read execution failed, severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=ReadFailedException{message=read execution failed, errorList=[RpcError [message=read execution failed, severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.controller.cluster.access.client.NoProgressException: No progress in 31198 seconds]]}]]}
2018-06-04 11:58:25,452 | WARN | tAdminThread #12 | 261 - com.fujitsu.fnc.sdnfw.security-odl - 5.1.0.SNAPSHOT | IDMLightServer | getContainer: {}
java.util.concurrent.ExecutionException: ReadFailedException{message=read execution failed, errorList=[RpcError [message=read execution failed, severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.controller.cluster.access.client.NoProgressException: No progress in 31198 seconds]]}
at org.opendaylight.yangtools.util.concurrent.MappingCheckedFuture.wrapInExecutionException(MappingCheckedFuture.java:65)[583:org.opendaylight.yangtools.util:1.2.1]
at org.opendaylight.yangtools.util.concurrent.MappingCheckedFuture.get(MappingCheckedFuture.java:78)[583:org.opendaylight.yangtools.util:1.2.1]
at com.fujitsu.fnc.sdnfw.aaa.idmlight.impl.IDMLightServer.getUsermgmt(IDMLightServer.java:3056)[261:com.fujitsu.fnc.sdnfw.security-odl:5.1.0.SNAPSHOT]
at com.fujitsu.fnc.sdnfw.aaa.idmlight.impl.IDMLightServer.init(IDMLightServer.java:1977)[261:com.fujitsu.fnc.sdnfw.security-odl:5.1.0.SNAPSHOT]
at com.fujitsu.fnc.sdnfw.aaa.idmlight.impl.IDMLightServer.handleEvent(IDMLightServer.java:3251)[261:com.fujitsu.fnc.sdnfw.security-odl:5.1.0.SNAPSHOT]
at Proxyadd8c855_db4c_4e72_b600_2ca57cba8d4d.handleEvent(Unknown Source)[:]
at org.apache.felix.eventadmin.impl.handler.EventHandlerProxy.sendEvent(EventHandlerProxy.java:415)[393:org.apache.karaf.services.eventadmin:4.0.10]
at org.apache.felix.eventadmin.impl.tasks.HandlerTask.run(HandlerTask.java:90)[393:org.apache.karaf.services.eventadmin:4.0.10]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)[:1.8.0_66]
at java.util.concurrent.FutureTask.run(FutureTask.java:266)[:1.8.0_66]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)[:1.8.0_66]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)[:1.8.0_66]
at java.lang.Thread.run(Thread.java:745)[:1.8.0_66]
Suppressed: java.lang.IllegalStateException: Uncaught exception occured during closing transaction
at org.opendaylight.controller.cluster.databroker.AbstractDOMBrokerTransaction.closeSubtransactions(AbstractDOMBrokerTransaction.java:92)[497:org.opendaylight.controller.sal-distributed-datastore:1.6.1]
at org.opendaylight.controller.cluster.databroker.DOMBrokerReadOnlyTransaction.close(DOMBrokerReadOnlyTransaction.java:50)[497:org.opendaylight.controller.sal-distributed-datastore:1.6.1]
at org.opendaylight.controller.md.sal.binding.impl.BindingDOMReadTransactionAdapter.close(BindingDOMReadTransactionAdapter.java:36)[484:org.opendaylight.controller.sal-binding-broker-impl:1.6.1]
at com.fujitsu.fnc.sdnfw.aaa.idmlight.impl.IDMLightServer.getUsermgmt(IDMLightServer.java:3062)[261:com.fujitsu.fnc.sdnfw.security-odl:5.1.0.SNAPSHOT]
... 10 more
Caused by: java.lang.IllegalStateException: Connection ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=0, poisoned=org.opendaylight.controller.cluster.access.client.NoProgressException: No progress in 31198 seconds, backend=ShardBackendInfo{actor=Actor[akka.tcp://opendaylight-cluster-data#u446.nms.fnc.fujitsu.com:2550/user/shardmanager-config/member-2-shard-default-config#-1532234096], sessionId=0, version=BORON, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} has been poisoned
at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.commonEnqueue(AbstractClientConnection.java:198)[467:org.opendaylight.controller.cds-access-client:1.2.1]
The NoProgressException indicates you've enabled the new tell-based protocol that was initially introduced in Nitrogen. It's designed to be more resilient to transient communication failures, but it is still experimental and there have been fixes since Nitrogen. I would suggest disabling it.
Also, there's a better way to implement a 2-node primary/secondary setup: make the secondary non-voting and then promote it to leader by switching it to voting when the primary fails. This is documented in the online clustering guide in the Geo-redundancy section, which describes it for 6 nodes, but you can use the same concept for 2 nodes.
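If you do disable it, the tell-based protocol is normally toggled via the distributed datastore configuration file; here is a minimal sketch, assuming the stock datastore .cfg is used and the key name matches the ODL clustering documentation (a controller restart is needed afterwards).
# etc/org.opendaylight.controller.cluster.datastore.cfg
# Fall back to the default ask-based protocol instead of the
# experimental tell-based protocol introduced in Nitrogen.
use-tell-based-protocol=false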

Error (10500): VHDL syntax error at lab_06.vhd(54) near text "shifta";

Error (10500): VHDL syntax error at lab_06.vhd(54) near text "shifta";
Info: *******************************************************************
Info: Running Quartus II 64-Bit Analysis & Synthesis
Info: Version 13.0.1 Build 232 06/12/2013 Service Pack 1 SJ Web Edition
Info: Processing started: Mon Nov 13 18:53:13 2017
Info: Command: quartus_map --read_settings_files=on --write_settings_files=off lab_06 -c lab_06
Warning (20028): Parallel compilation is not licensed and has been disabled
Error (10500): VHDL syntax error at lab_06.vhd(30) near text "ktj"; expecting "component"
Error (10500): VHDL syntax error at lab_06.vhd(54) near text "shifta"; expecting "entity", or "architecture", or "use", or "library", or "package", or "configuration"
Info (12021): Found 0 design units, including 0 entities, in source file lab_06.vhd
Error: Quartus II 64-Bit Analysis & Synthesis was unsuccessful. 2 errors, 1 warning
Error: Peak virtual memory: 485 megabytes
Error: Processing ended: Mon Nov 13 18:53:14 2017
Error: Elapsed time: 00:00:01
Error: Total CPU time (on all processors): 00:00:01
Error (293001): Quartus II Full Compilation was unsuccessful. 4 errors, 1 warning
It looks to me like you've tried to instantiate lab_06 outside the architecture Behavior. Had you written end architecture Behavior;, I think the problem would have stared you in the face.
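To illustrate, here is a minimal sketch with made-up names (shifta and ktj are placeholders echoing the error messages, not your actual design): the component declaration belongs in the architecture's declarative part, and the instantiation goes between begin and the closing end architecture.
-- Only the structure matters here: declarations before "begin",
-- instantiations inside the architecture body, and an explicit close.
library ieee;
use ieee.std_logic_1164.all;

entity lab_06 is
  port (clk : in  std_logic;
        q   : out std_logic_vector(3 downto 0));
end entity lab_06;

architecture Behavior of lab_06 is

  component shifta is                -- declared in the declarative part
    port (clk : in  std_logic;
          q   : out std_logic_vector(3 downto 0));
  end component;

  signal ktj : std_logic_vector(3 downto 0);

begin

  u1 : shifta                        -- instantiated inside the architecture body
    port map (clk => clk, q => ktj);

  q <= ktj;

end architecture Behavior;           -- closing the architecture explicitly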

DB connection issue with Playframework 2.1 and Bonecp 0.8.0 : This connection has been closed

I was facing an issue with BoneCP 0.7.1 on a Play Framework app using PostgreSQL 9.2.4 on Heroku. It seems this version had a DB connection leak that, after several DB accesses, caused the following error:
[error] c.j.b.h.AbstractConnectionHook - Failed to acquire connection Sleeping for 1000ms and trying again. Attempts left: 1. Exception: null.Message:FATAL: too many connections for role "eonqhnjenuislk" Database warning
[error] c.j.b.PoolWatchThread - Error in trying to obtain a connection. Retrying in 1000ms
org.postgresql.util.PSQLException: FATAL: too many connections for role "eonqhnjenuislk"
at org.postgresql.core.v3.ConnectionFactoryImpl.readStartupMessages(ConnectionFactoryImpl.java:469) ~[postgresql-9.1-901.jdbc4.jar:na]
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:112) ~[postgresql-9.1-901.jdbc4.jar:na]
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:66) ~[postgresql-9.1-901.jdbc4.jar:na]
at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:125) ~[postgresql-9.1-901.jdbc4.jar:na]
at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:30) ~[postgresql-9.1-901.jdbc4.jar:na]
at org.postgresql.jdbc3g.AbstractJdbc3gConnection.<init>(AbstractJdbc3gConnection.java:22) ~[postgresql-9.1-901.jdbc4.jar:na]
As every thread of the connection pool was acquired and retained, the application was not reachable anymore until I restarted it.
Then I heard that this issue was corrected in BoneCP 0.8.0, so I upgraded the library. But the issue does not seem to be completely fixed. Connections are no longer retained, which keeps the application reachable at all times, but sometimes a DB connection closes suddenly. The app then throws the following error, causing a 500 error for end users:
javax.persistence.PersistenceException: org.postgresql.util.PSQLException: This connection has been closed.
at com.avaje.ebeaninternal.server.transaction.TransactionManager.createTransaction(TransactionManager.java:331)
at com.avaje.ebeaninternal.server.core.DefaultServer.createServerTransaction(DefaultServer.java:2056)
at com.avaje.ebeaninternal.server.core.BeanRequest.createImplicitTransIfRequired(BeanRequest.java:58)
at com.avaje.ebeaninternal.server.core.PersistRequest.initTransIfRequired(PersistRequest.java:81)
at com.avaje.ebeaninternal.server.persist.DefaultPersister.executeSqlUpdate(DefaultPersister.java:146)
at com.avaje.ebeaninternal.server.core.DefaultServer.execute(DefaultServer.java:1928)
at com.avaje.ebeaninternal.server.core.DefaultServer.execute(DefaultServer.java:1935)
at com.avaje.ebeaninternal.server.core.DefaultSqlUpdate.execute(DefaultSqlUpdate.java:148)
at actor.PublicParkingPlacesActor$1.apply(PublicParkingPlacesActor.java:41)
at actor.PublicParkingPlacesActor$1.apply(PublicParkingPlacesActor.java:26)
at play.libs.F$Promise$PromiseActor.onReceive(F.java:425)
at akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:159)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:425)
at akka.actor.ActorCell.invoke(ActorCell.scala:386)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:230)
at akka.dispatch.Mailbox.run(Mailbox.scala:212)
at akka.dispatch.ForkJoinExecutorConfigurator$MailboxExecutionTask.exec(AbstractDispatcher.scala:502)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:262)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:975)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1478)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
Caused by: org.postgresql.util.PSQLException: This connection has been closed.
at org.postgresql.jdbc2.AbstractJdbc2Connection.checkClosed(AbstractJdbc2Connection.java:714)
at org.postgresql.jdbc2.AbstractJdbc2Connection.setAutoCommit(AbstractJdbc2Connection.java:661)
at com.jolbox.bonecp.ConnectionHandle.setAutoCommit(ConnectionHandle.java:1292)
at play.api.db.BoneCPApi$$anon$1.onCheckOut(DB.scala:328)
at com.jolbox.bonecp.AbstractConnectionStrategy.postConnection(AbstractConnectionStrategy.java:75)
at com.jolbox.bonecp.AbstractConnectionStrategy.getConnection(AbstractConnectionStrategy.java:92)
at com.jolbox.bonecp.BoneCP.getConnection(BoneCP.java:553)
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:131)
at play.db.ebean.EbeanPlugin$WrappingDatasource.getConnection(EbeanPlugin.java:146)
at com.avaje.ebeaninternal.server.transaction.TransactionManager.createTransaction(TransactionManager.java:297)
... 20 more
Thanks a lot for your help!
EDIT:
DB configuration:
db.default.isolation=READ_COMMITTED
db.default.partitionCount=2
db.default.maxConnectionsPerPartition=10
db.default.minConnectionsPerPartition=5
db.default.acquireIncrement=1
db.default.acquireRetryAttempts=2
db.default.acquireRetryDelay=5 seconds
db.default.connectionTimeout=10 second
db.default.idleMaxAge=10 minute
db.default.idleConnectionTestPeriod=5 minutes
db.default.initSQL="SELECT 1"
db.default.maxConnectionAge=1 hour
EDIT 2:
Here is the DB config I set according to this post: Heroku/Play/BoneCp connection issues (mainly shortening idleConnectionTestPeriod to 30 seconds and maxConnectionAge to 30 minutes).
These changes reduce the number of "This connection has been closed" errors, but I still get 1 or 2 of them per day, which makes some HTTP requests fail. So the issue is still not fixed:
db.default.isolation=READ_COMMITTED
db.default.partitionCount=2
db.default.maxConnectionsPerPartition=10
db.default.minConnectionsPerPartition=5
db.default.acquireIncrement=1
db.default.acquireRetryAttempts=2
db.default.acquireRetryDelay=5 seconds
db.default.connectionTimeout=10 seconds
db.default.idleMaxAge=10 minutes
db.default.idleConnectionTestPeriod=30 seconds
db.default.initSQL="SELECT 1"
db.default.maxConnectionAge=30 minutes
