How to increase the timeout for a ReQL query in RethinkDB - rethinkdb

I now have 1 million records in my table. When I try to add a new column/variable to the table, it shows a timeout error. I even tried to limit the data intake, but that didn't help. Can anyone tell me how to tackle this? Any help would be appreciated.
e: HTTP ReQL query timed out after 300 seconds in:
r.table("interestdata").update({"pick": 0});
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Thanks in advance!

From the RethinkDB documentation, we can increase the connection timeout using the JavaScript or Python driver, etc.:
r.connect({
    host: 'localhost',
    port: 28015,
    db: 'test',
    timeout: 600
})
This worked for me.
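
For completeness, here is a minimal sketch of running the update from a Node.js script with that longer timeout (assuming the official rethinkdb npm driver; the 600-second value mirrors the snippet above). Running the query from a driver script rather than the web UI's Data Explorer should also avoid the 300-second HTTP limit shown in the error.

var r = require('rethinkdb');

r.connect({ host: 'localhost', port: 28015, db: 'test', timeout: 600 }, function(err, conn) {
    if (err) throw err;
    // The same update as in the question, now run through the driver connection.
    r.table('interestdata').update({ pick: 0 }).run(conn, function(err, result) {
        if (err) throw err;
        console.log(result);
        conn.close();
    });
});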

Related

BigQuery Timeout Errors

I am trying to insert data into BigQuery tables and the request fails with this message:
Post URL: read tcp IP_ADD:22465->IP_ADD:443: read: connection timed out
Could someone explain what exactly is timing out? Retrying does not fix the problem.

DB2 Query Timeout issue - How to handle

This may have been asked numerous times, but none of the answers have helped me so far.
Here's some history:
QueryTimeOut: 120 secs
Database: DB2
App Server: JBoss
Framework: Struts 2
I have one query which fetches around a million records. Yes, we need to fetch it all at once for caching purposes; sadly, we can't change the design.
Now, we have two servers, Primary and DR. On the DR server, the query executes within 30 seconds, so there is no timeout issue there. But on the Primary server it times out for some unknown reason. Sometimes it times out in rs.next() and sometimes in pstmt.executeQuery().
All DB indexes, the connection pool, etc. are in place. The explain plan also shows there are no full table scans.
My Analysis:
Since the query is not the issue here, could the problem be a network delay?
How can I find the root cause of this timeout? How can I make sure there is no connection leakage? (All connections are closed properly.)
Is there any way to recover from the timeout and execute the query again with an increased timeout value, e.g. pstmt.setQueryTimeout(600)? Note that this has no effect whatsoever; I don't know why.
I'd appreciate any input.
Thank you!

MongoDB is getting slower and slower

I continuously update data in MongoDB in different collections or DBs, whose names are timestamps. I delete the oldest data and keep about 3 days of data (200 GB) in Mongo. The mapped and vsize figures keep increasing, but res stays under 10 GB. I have also found that the Mongo response time keeps getting larger and larger. Do you know the reason? I would appreciate any input.
Please make sure that you are using indexes correctly.
For example, if you look up users by the email field, you have to build an index on this field:
db.users.ensureIndex({ email: 1 })
To learn more about indexes, please follow the link: http://docs.mongodb.org/manual/indexes/
Also, the explain() output will be very useful for you. You can see detailed information about your queries with the following command:
db.users.find({ email: "user@example.com" }).explain()
explain() will tell you a lot about your query. To read more about it, please follow the official documentation: http://docs.mongodb.org/manual/reference/method/cursor.explain/
So, if you are sure that the indexes are built correctly, please post the output of explain(). It will help us find the problem.
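
For illustration, here is a hypothetical mongo shell session showing what to look for; the exact explain() fields depend on the server version (older versions report cursor and nscanned, newer ones a winningPlan with COLLSCAN or IXSCAN stages):

// Without an index, the query has to scan every document (slow):
db.users.find({ email: "user@example.com" }).explain()
// e.g. "cursor" : "BasicCursor", "nscanned" : <total number of documents>

// Build the index, then re-check:
db.users.ensureIndex({ email: 1 })
db.users.find({ email: "user@example.com" }).explain()
// e.g. "cursor" : "BtreeCursor email_1", "nscanned" : <number of matching documents>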

Server issue when searching Oracle database

I have a JEE application searching a large Oracle database for data. The application uses JDBC to query the database.
The issue I am having is that the results page is unable to be displayed. I get the following error:
The connection to the server was reset while the page was loading.
This happens after 60 seconds. When I run the SQL query manually using a SQL client, the results return in 3 seconds.
I have checked the logs and there are no exceptions that I can see.
Do any of you know the best way to find what is causing the connection to be reset? If I break my search date range in two and search both ranges individually, both ranges return results. So it seems the larger result set is causing the issue.
Any help is welcome.
You are probably right about the larger result set. Often when running a query from a SQL client, you'll get the first set of records right away. If you page down to force a pull of all the records, then it bogs down. Perhaps you're hitting the same issue with the JDBC client, where it takes more than 60 seconds to get all the rows. I've not done JDBC in a while, but can you get it to stream the result set?
Regards,
Roger

Preventing the Oracle connection from being lost

How can I prevent the connection to the Oracle server from being lost if it is kept idle for some time?
If you use the newest JDBC spec, 4.0, there is an isValid() method available on a connection that allows you to check whether the connection is usable; if not, get a new (reconnected) connection and execute your SQL.
One possible way that I know of to keep the database connection from being lost is to send a dummy query after the threshold time. By threshold I mean the time after which the connection to the database is expected to become idle or get lost.
Something like:
Ping_time_to_DB = 60
if (Current_time - Last_Ping_time > Ping_time_to_DB)
{
    -- send a dummy query, e.g.
    select 1 from dual;
}
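
As a rough illustration of that pseudocode, here is a Node.js sketch using the node-oracledb driver; the credentials, connect string, and 60-second threshold are assumptions, and the original answer targets JDBC, so treat this only as the same idea in a different stack:

var oracledb = require('oracledb');

var PING_TIME_TO_DB_MS = 60 * 1000; // threshold after which the connection may go idle

oracledb.getConnection(
    { user: 'scott', password: 'tiger', connectString: 'localhost/XE' }, // assumed credentials
    function(err, connection) {
        if (err) throw err;
        // Send the dummy query on every threshold interval to keep the session active.
        setInterval(function() {
            connection.execute('select 1 from dual', function(err) {
                if (err) console.error('Connection appears lost; reconnect here:', err);
            });
        }, PING_TIME_TO_DB_MS);
    }
);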
