I am getting the following errors while inserting data into Cassandra.
I am using gocql client for Cassandra.
{"error":"gocql: too many query timeouts on the connection","status":500}
{"error":"gocql: no response received from cassandra within timeout period","status":500}
{"error":"write tcp 172.23.15.226:36954-\u003e172.23.16.15:9042: use of closed network connection","status":500}
Can anybody help me with this?
Try increasing the timeouts in the Cassandra config file (write_request_timeout_in_ms for writes) and the number of concurrent writes (concurrent_writes).
Also, try lowering the NumConns parameter in your gocql driver.
If you are using goroutines, try lowering their number and verify that you are reusing the same session object across all goroutines.
If you are using a protocol version prior to 4, you can try setting the Timeout parameter of the cluster object in gocql to a higher value.
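For reference, a minimal gocql sketch of the settings mentioned above (the keyspace and values are placeholders, not taken from the question):

package main

import (
    "log"
    "time"

    "github.com/gocql/gocql"
)

func main() {
    cluster := gocql.NewCluster("172.23.16.15") // contact point from the error log; replace with your hosts
    cluster.Keyspace = "my_keyspace"            // placeholder keyspace
    cluster.Timeout = 3 * time.Second           // per-query timeout; gocql's default is 600ms
    cluster.NumConns = 2                        // keep the number of connections per host modest

    // Create the session once and share it between all goroutines.
    session, err := cluster.CreateSession()
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()
}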
I have a strange problem with a kafka -> elasticsearch connector. The first time I started it everything was great: I received new data in Elasticsearch and checked it through the Kibana dashboard. But when I produced new data into Kafka using the same producer application and tried to start the connector one more time, I didn't get any new data in Elasticsearch.
Now I'm getting errors like these:
[2018-02-04 21:38:04,987] ERROR WorkerSinkTask{id=log-platform-elastic-0} Commit of offsets threw an unexpected exception for sequence number 14: null (org.apache.kafka.connect.runtime.WorkerSinkTask:233)
org.apache.kafka.connect.errors.ConnectException: Flush timeout expired with unflushed records: 15805
I'm using the following command to run the connector:
/usr/bin/connect-standalone /etc/schema-registry/connect-avro-standalone.properties log-platform-elastic.properties
connect-avro-standalone.properties:
bootstrap.servers=kafka-0.kafka-hs:9093,kafka-1.kafka-hs:9093,kafka-2.kafka-hs:9093
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets
# producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
# consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
#rest.host.name=
rest.port=8084
#rest.advertised.host.name=
#rest.advertised.port=
plugin.path=/usr/share/java
and log-platform-elastic.properties:
name=log-platform-elastic
key.converter=org.apache.kafka.connect.storage.StringConverter
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=member_sync_log, order_history_sync_log # ... and many others
key.ignore=true
connection.url=http://elasticsearch:9200
type.name=log
I checked the connections to the Kafka brokers, Elasticsearch and the Schema Registry (the Schema Registry and the connector are on the same host at the moment) and everything is fine. The Kafka brokers are running on port 9093 and I'm able to read data from the topics using kafka-avro-console-consumer.
I'll be grateful for any help on this!
Just increase flush.timeout.ms to something bigger than 10000 (10 seconds, which is the default).
According to the documentation:
flush.timeout.ms
The timeout in milliseconds to use for periodic
flushing, and when waiting for buffer space to be made available by
completed requests as records are added. If this timeout is exceeded
the task will fail.
Type: long; Default: 10000; Importance: low
See documentation
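In practice that means adding the property to the sink connector's properties file, for example (the value below is only illustrative, not a recommendation):

flush.timeout.ms=60000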
We can optimize the Elasticsearch sink configuration to solve the issue. Please refer to the link below for the configuration parameters:
https://docs.confluent.io/current/connect/kafka-connect-elasticsearch/configuration_options.html
Below are the key parameters that control the message flow rate and can eventually help solve the issue:
flush.timeout.ms: increasing it gives the flush more breathing room
The timeout in milliseconds to use for periodic flushing, and when
waiting for buffer space to be made available by completed requests as
records are added. If this timeout is exceeded the task will fail.
max.buffered.records: try reducing the buffered record limit
The maximum number of records each task will buffer before blocking
acceptance of more records. This config can be used to limit the
memory usage for each task
batch.size: try reducing the batch size
The number of records to process as a batch when writing to
Elasticsearch
tasks.max: the number of parallel tasks (consumer instances). Reduce or increase this; if Elasticsearch's write bandwidth cannot keep up, reducing the number of tasks may help.
Tuning the parameters above fixed the issue for me.
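For illustration, all of these knobs go in the sink connector properties file (the values below are examples to tune from, not recommendations):

flush.timeout.ms=60000
max.buffered.records=10000
batch.size=500
tasks.max=1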
During migration from OAS10 to WebLogic 12.1.2, a call to a stored procedure is producing an ORA-03111 around 4 minutes after it is invoked:
java.sql.SQLTimeoutException: ORA-03111: break received on communication channel
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:462)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:405)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:931)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:481)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:205)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:548)
at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:213)
at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:1111)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1488)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3770)
at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3955)
at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:9353)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1539)
at weblogic.jdbc.wrapper.PreparedStatement.execute(PreparedStatement.java:101)
at mycode.app.impliq.dao.connection.oracle.OracleProcesoDao.callSP(OracleProcesoDao.java:811)
This code is NOT setting a statement timeout, and no timeout is configured at the data source level either.
Any pointer will be appreciated.
Two ways from here:
1. Check the connection pool settings for your data source in both WebLogic and the database.
2. Check for long-running SQL.
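For the second point, a possible starting query against the standard dynamic performance views (the threshold is just an example):

select s.sid, s.serial#, s.status, s.last_call_et, q.sql_text
from v$session s
join v$sql q on q.sql_id = s.sql_id
where s.status = 'ACTIVE'
and s.last_call_et > 60; -- sessions whose current call has been active for over 60 seconds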
I'm trying to read a lot of bulk data (probably around 1-2G) into neo4j with a simple ruby script and Neography. My code mostly just consists of a lot of create_node and create_relationship methods.
It seems to work fine, but after around 5,000 create calls I hit this error:
/home/earlz/.gem/ruby/2.1.0/gems/excon-0.44.3/lib/excon/socket.rb:127:in `connect_nonblock': Cannot assign requested address - connect(2) for 127.0.0.1:7474 (Errno::EADDRNOTAVAIL) (Excon::Errors::SocketError)
How do I fix this? I've tried increasing the HTTP timeouts and such, but that didn't help.
It seems like your script is opening so many connections that it ran out of ephemeral ports to pick from. Try this:
echo "32768 61000" >/proc/sys/net/ipv4/ip_local_port_range
Here is our environment:
.Net version: 4.5
Database: Oracle 12.1.0.2 (odp.net)
We are using LLBL "Adapter" but I don't think that has anything to do with the issue
LLBLGen Pro version: 4.1
Llbl Gen Pro Runtime: 4.1.13.1213
When we do an insert (always into different tables, which we use for a short period and then remove), we use the following code:
int numRecords = strings.Count();
var insertCmd = "insert into " + tableName + " (StringField) values (:StringField)";
var oracleCommand = new OracleCommand();
oracleCommand.CommandText = insertCmd;
oracleCommand.CommandType = CommandType.Text;
oracleCommand.BindByName = true;
oracleCommand.ArrayBindCount = numRecords;
oracleCommand.Parameters.Add(":StringField", OracleDbType.NVarchar2, strings.ToArray(), ParameterDirection.Input);
// this is an LLBL adapter. Like I said, I think the issue is below the LLBL layer.
this.adapter.ExecuteActionQuery(new ActionQuery(oracleCommand));
When the database is getting hit hard with multiple of these inserts in parallel, we get the following error and the insert call never returns from the database.
WG_6.Index_586.TVD: An exception was caught during the execution of an action query: ORA-24381: error(s) in array DML
ORA-12592: TNS:bad packet
ORA-12592: TNS:bad packet
ORA-12592: TNS:bad packet
ORA-12592: TNS:bad packet
ORA-03111: break received on communication channel
ORA-03111: break received on communication channel
ORA-03111: break received on communication channel
On the database, using Toad's session browser, I can see that the "Current Statement" is correct.
insert into schemaX.tableY(StringField) values(:Stringfield)
Under the Waits tab in Toad, there is the following message:
“Waiting for SQL*Net more data from client - waited X hundred seconds, so far” and the X keeps incrementing until we hit our database timeout.
We tried batches of 1 million and this gave us the best performance for our scenario. However, this hanging issue arose. I then decreased the ArrayBindCount to 500K, 100K, 50K, 10K and then 5K. Only at 5K did it stop happening.
A couple of notes:
This happens more frequently when the database is on a different physical machine than the client. When using a local VM, it rarely happens. The network that we are using is generally very reliable with no other noted issues.
From the error message (ORA-12592: TNS:bad packet), it seems that the issue might be on the client, perhaps related to code in the Oracle.DataAccess.Client (ODAC) DLL.
My next steps for troubleshooting are to use Reflector to debug the call from the ODAC code and also to get more reliable client side tracing while forcing this error to occur.
I had the same situation when trying to insert into an Oracle table using array binding.
Using a small number for oracleCommand.ArrayBindCount seemed to reduce the frequency of the errors (the same as yours), but did not eliminate them.
The solution was to use the managed data access driver. I suggest you get the latest ODP.NET, add a reference to ManagedDataAccess and change to:
using Oracle.ManagedDataAccess.Client;
using Oracle.ManagedDataAccess.Types;
This fixed the problem in my case, with no need to change anything else in the code.
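If you pull dependencies from NuGet, the managed driver ships as the Oracle.ManagedDataAccess package; for example, from the Package Manager Console:

Install-Package Oracle.ManagedDataAccess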
I get the following error in a certain scenario.
While a different thread is populating a lot of users via a bulk upload operation, I try to view the list of all users on a different web page. The list query throws the timeout error below. Is there a way to set this timeout so that I can avoid the error?
Env: h2 (latest), Hibernate 3.3.x
Caused by: org.h2.jdbc.JdbcSQLException: Timeout trying to lock table "USER"; SQL statement:
[50200-144]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:327)
at org.h2.message.DbException.get(DbException.java:167)
at org.h2.message.DbException.get(DbException.java:144)
at org.h2.table.RegularTable.doLock(RegularTable.java:482)
at org.h2.table.RegularTable.lock(RegularTable.java:416)
at org.h2.table.TableFilter.lock(TableFilter.java:139)
at org.h2.command.dml.Select.queryWithoutCache(Select.java:571)
at org.h2.command.dml.Query.query(Query.java:257)
at org.h2.command.dml.Query.query(Query.java:227)
at org.h2.command.CommandContainer.query(CommandContainer.java:78)
at org.h2.command.Command.executeQuery(Command.java:132)
at org.h2.server.TcpServerThread.process(TcpServerThread.java:278)
at org.h2.server.TcpServerThread.run(TcpServerThread.java:137)
at java.lang.Thread.run(Thread.java:619)
at org.h2.engine.SessionRemote.done(SessionRemote.java:543)
at org.h2.command.CommandRemote.executeQuery(CommandRemote.java:152)
at org.h2.jdbc.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.java:96)
at org.jboss.resource.adapter.jdbc.WrappedPreparedStatement.executeQuery(WrappedPreparedStatement.java:342)
at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:208)
at org.hibernate.loader.Loader.getResultSet(Loader.java:1808)
at org.hibernate.loader.Loader.doQuery(Loader.java:697)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:259)
at org.hibernate.loader.Loader.doList(Loader.java:2228)
... 125 more
Yes, you can change the lock timeout. The default is relatively low: 1 second (1000 ms).
In many cases the problem is that another connection has locked the table, and using multi-version concurrency also solves the problem (append ;MVCC=true to the database URL).
EDIT: The MVCC=true parameter is no longer supported, because since H2 1.4.200 it is always enabled for the MVStore engine, which is the default engine.
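For example, the lock timeout can be raised directly in the database URL (path and value are illustrative):

jdbc:h2:~/test;LOCK_TIMEOUT=10000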
I faced much the same problem, and using the parameter MVCC=true solved it. You can find more explanations about this parameter in the H2 documentation here: http://www.h2database.com/html/advanced.html#mvcc
I'd like to suggest that if you are getting this error, then perhaps you should not be using a transaction on your bulk database operation. Consider instead doing a transaction on each individual update: does it make sense to think of an entire bulk import as a transaction? Probably not. If it does, then yes, MVCC=true or a bigger lock timeout is a reasonable solution.
However, I think that in most cases you are seeing this error because you are trying to perform a very long transaction - in other words, you are not aware that you are performing a really long transaction. This was certainly the case for me, and I simply took more care in how I was writing records (either using no transactions or using smaller transactions), and the lock timeout issue was resolved.
For those having this issue with integration tests (i.e. the server is accessing the H2 DB and an integration test is accessing the DB before calling the server, to prepare the test), adding a 'commit' to the script executed before the test makes sure that the data is in the database before calling the server (without MVCC=true - which I find a bit 'weird' if it is not enabled by default).
I had MVCC=true in my connection string but was still getting the error above. I added ;DEFAULT_LOCK_TIMEOUT=10000;LOCK_MODE=0 and the problem was solved.
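Put together, the URL looked roughly like this (the path and values are placeholders, not copied from my setup):

jdbc:h2:~/mydb;MVCC=true;DEFAULT_LOCK_TIMEOUT=10000;LOCK_MODE=0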
I got this issue with the PlayFramework
JPAQueryException occured : Error while executing query from
models.Page where name = ?: Timeout trying to lock table "PAGE"
It ended up being an infinite loop of sorts, because I had a
@Before
without an unless, which caused the function to repeatedly call itself:
@Before(unless="getUser")
Working with DBUnit, H2 and Hibernate - same error. MVCC=true helped, but I would still get the error for any tests that ran after data had been deleted. What fixed these cases was wrapping the actual deletion code inside a transaction:
Transaction tx = session.beginTransaction();
...delete stuff
tx.commit();
From a 2020 user: see the reference.
Basically, the reference says:
Sets the lock timeout (in milliseconds) for the current session. The default value for this setting is 1000 (one second).
This command does not commit a transaction, and rollback does not affect it. This setting can be appended to the database URL: jdbc:h2:./test;LOCK_TIMEOUT=10000
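So besides the URL parameter, the timeout can also be raised per session with a plain SQL statement (the value is an example):

SET LOCK_TIMEOUT 10000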