How to specify a no-timeout option on the cursor? [duplicate] - ruby

This question already has an answer here:
tailable cursor in mongo db timing out
(1 answer)
Closed 9 years ago.
How to specify a no-timeout option on the cursor?
I can run the job manually from my laptop, but something is going wrong on the server and I keep getting this error:
MONGODB cursor.refresh() for cursor xxx
Query response returned CURSOR_NOT_FOUND. Either an invalid cursor was specified, or the cursor may have timed out on the server.
MONGODB cursor.refresh() for cursor yyy
The job is run from a Ruby scheduler file and is specified as a rake namespace.
rake calls another Ruby module partway through, and the job dies during the execution of this module.
I asked this question earlier and it got downvoted. Please, instead of downvoting explain what is so stupid about it, because I really need to solve this problem and can't figure out what is going on.
The server is kind of experimental and does not have any monitoring tools. But it seems to be reliable. And there are no other jobs running.

See the FAQ for the Ruby MongoDB driver for details on how to turn off the cursor timeout.
Example from there:
@collection.find({}, :timeout => false) do |cursor|
  cursor.each do |document|
    # Process documents here
  end
end


Aborting queries on neo4jrb

I am running something along the lines of the following:
results = queries.map do |query|
  begin
    Neo4j::Session.query(query)
  rescue Faraday::TimeoutError
    nil
  end
end
After a few iterations I get an unrescued Faraday::TimeoutError: too many connection resets (due to Net::ReadTimeout - Net::ReadTimeout) and Neo4j needs switching off and on again.
I believe this is because the queries themselves aren't aborted - i.e. the connection times out but Neo4j carries on trying to run my query. I actually want to time them out, so simply increasing the timeout window won't help me.
I've had a scout around and it looks like I can find my queries and abort them via the Neo4j API, which will be my next move.
Am I right in my diagnosis? If so, is there a recommended way of managing queries (and aborting them) from neo4jrb?
Rebecca is right about managing queries manually. Though if you want Neo4j to automatically stop queries within a certain time period, you can set this in your neo4j conf:
dbms.transaction.timeout=60s
You can find more info in the docs for that setting.
The Ruby gem uses Faraday to connect to Neo4j via HTTP, and Faraday has a built-in timeout which is separate from the one in Neo4j. I would suggest setting the Neo4j timeout a bit longer (5-10 seconds, perhaps) than the one in Ruby (here are the docs for configuring the Faraday timeout). If they both have the same timeout, Neo4j might raise a timeout before Ruby does, making for a less clear error.
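For illustration, the Faraday side of this can be set when the HTTP connection is built. This is only a sketch of plain Faraday usage, not neo4jrb's internal setup; the URL and numbers are placeholders:

```ruby
require 'faraday'

# Sketch: client-side timeouts set a bit shorter than Neo4j's
# dbms.transaction.timeout (60s above), so the server-side limit
# is the longer of the two.
conn = Faraday.new(url: 'http://localhost:7474') do |f|
  f.options.timeout      = 55 # seconds to wait for the full response
  f.options.open_timeout = 5  # seconds to wait for the TCP connect
  f.adapter Faraday.default_adapter
end
```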
Query management can be done through Cypher. You must be an admin user.
To list all queries, you can use CALL dbms.listQueries;.
To kill a query, you can use CALL dbms.killQuery('ID-OF-QUERY-TO-KILL');, where the ID is obtained from the list of queries.
The previous statements must be executed as raw queries; it does not matter whether you are using an OGM, as long as you can input queries manually. If your framework offers no way to run raw queries, you will have to access the database by some other means in order to execute them.
So thanks to Brian and Rebecca for useful tips about query management within Neo4j. Both of these point the way to viable solutions to my problem, and Brian's explicitly lays out steps for achieving one via Neo4jrb so I've marked it correct.
As both answers assume, the diagnosis I made IS correct - i.e. if you run a query from Neo4jrb and the HTTP connection times out, Neo4j will carry on executing the query and Neo4jrb will not issue any instruction for it to stop.
Neo4jrb does not provide a wrapper for any query management functionality, so simply setting a transaction timeout seems most sensible and is probably what I'll adopt. Actually intercepting and killing queries is also possible, but this means running the query on one thread so that you can look up its queryId from another. This is the somewhat hacky solution I'm working with at the moment:
class QueryRunner
  DEFAULT_TIMEOUT = 70

  def self.query(query, timeout_limit = DEFAULT_TIMEOUT)
    new(query, timeout_limit).run
  end

  def initialize(query, timeout_limit)
    @query = query
    @timeout_limit = timeout_limit
  end

  def run
    start_time = Time.now.to_i
    Thread.new { @result = Neo4j::Session.query(@query) }
    sleep 0.5
    return @result if @result
    id = if query_ref = Neo4j::Session.query("CALL dbms.listQueries;").to_a.find { |x| x.query == @query }
      query_ref.queryId
    end
    while @result.nil?
      if (Time.now.to_i - start_time) > @timeout_limit
        puts "killing query #{id} due to timeout"
        Neo4j::Session.query("CALL dbms.killQuery('#{id}');")
        @result = []
      else
        sleep 1
      end
    end
    @result
  end
end

Tibco SQLException [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 7 years ago.
I am encountering this error again and again in my TIBCO code. Could somebody please tell me how to solve it?
I am using TIBCO 5.7.3.
JDBC error reported: (SQLState = HY000) - java.sql.SQLException: [tibcosoftwareinc][SQLServer JDBC Driver]Object has been closed."
When a JDBC Query activity is configured to query in subset mode, the resultSet object is kept in the engine for subsequent iterations. Typically the resultSet object will only be closed and cleared from the engine if there is no more data left. However, keep in mind that the default connection idleTimeout is set to 5 minutes. This means that after 5 minutes of no activity the connection will get released. So if you wait longer than the idleTimeout value to retrieve subsequent subsets you will incur this exception since the connection has been closed and hence the resultset is no longer available.
Resolution:
Set Engine.DBConnection.idleTimeout to a higher value in the BusinessWorks engine TRA file, say 20 minutes, so the connection can remain idle without being released between iterations, for example: Engine.DBConnection.idleTimeout=20. For more details on this setting, see the list of Available Custom Engine Properties.

How do I absolutely ensure that a Phusion Passenger instance stays alive?

I'm having a problem where no matter what I try all Passenger instances are destroyed after an idle period (5 minutes, but sometimes longer). I've read the Passenger docs and related questions/answers on Stack Overflow.
My global config looks like this:
PassengerMaxPoolSize 6
PassengerMinInstances 1
PassengerPoolIdleTime 300
And my virtual config:
PassengerMinInstances 1
The above should ensure that at least one instance is kept alive after the idle timeout. I'd like to avoid setting PassengerPoolIdleTime to 0 as I'd like to clean up all but one idle instance.
I've also added the ruby binary to my CSF ignore list to prevent the long running process from being culled.
Is there somewhere else I should be looking?
Have you tried setting PassengerMinInstances to something other than 1, like 3, and seeing whether that works?
Ok, I found the answer for you at this link: http://groups.google.com/group/phusion-passenger/browse_thread/thread/7557f8ef0ff000df/62f5c42aa1fe5f7e . Look at the last comment, by the Phusion guy.
Is there a way to ensure that I always have 10 processes up and
running, and that each process only serves 500 requests before being
shut down?
"Not at this time. But the current behavior is such that the next time
it determines that more processes need to be spawned, it will make sure
at least PassengerMinInstances processes exist."
I have to say their documentation doesn't seem to match the current behavior.
This seems to be quite a common problem for people running Apache on WHM/cPanel:
http://techiezdesk.wordpress.com/2011/01/08/apache-graceful-restart-requested-every-two-hours/
Enabling piped logging sorted the problem out for me.

Why are my delayed_job jobs re-running even though I tell them not to?

I have this in my initializer:
Delayed::Job.const_set( "MAX_ATTEMPTS", 1 )
However, my jobs are still re-running after failure, seemingly completely ignoring this setting.
What might be going on?
more info
Here's what I'm observing: jobs with a populated "last error" field and an "attempts" number of more than 1 (10+).
I've discovered I was reading the old/wrong wiki. The correct way to set this is
Delayed::Worker.max_attempts = 1
Check your dbms table "delayed_jobs" for records (jobs) that still exist after the job "fails". The job will be re-run if the record is still there. -- If it shows that the "attempts" is non-zero then you know that your constant setting isn't working right.
Another guess is that the job's "failure," for some reason, is not being caught by DelayedJob. -- In that case, the "attempts" would still be at 0.
Debug by examining the delayed_job/lib/delayed/job.rb file, especially the self.workoff method, when one of your jobs "fails".
Added: @John, I don't use MAX_ATTEMPTS. To debug, look in the gem to see where it is used. It sounds like the problem is that the job is being handled in the normal way rather than limiting attempts to 1. Use the debugger or a logging statement to ensure that your MAX_ATTEMPTS setting is getting through.
Remember that the DelayedJobs jobs runner is not a full Rails program. So it could be that your initializer setting is not being run. Look into the script you're using to run the jobs runner.
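Pulling the above together, a minimal initializer sketch (assuming a Rails app with the delayed_job gem; the log line is only there to verify that the file is actually loaded by the jobs runner):

```ruby
# config/initializers/delayed_job.rb (the file name is just a convention)

# Cap attempts at 1 so a failed job is not retried.
Delayed::Worker.max_attempts = 1

# Sanity check: confirm the initializer actually runs under the
# jobs runner, which is not a full Rails program.
Rails.logger.info "delayed_job max_attempts = #{Delayed::Worker.max_attempts}"
```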

Ruby: connection timeout detection for a TCPServer

I've been trying to understand how to implement timeout detection in a Ruby TCP server of mine, mainly because clients with unstable internet sometimes lose their connection and I need my server to detect that.
The idea is to teach my server to detect when a connection has been silent for longer than 30 seconds and abort it. I've been trying to use timeout, but it terminates the program, so I need something like a simple timer that just returns the number of seconds elapsed since it was started.
Is there a ready-made solution for that? Sorry if it is a stupid question; googling led me nowhere.
PS: using Ruby 1.8 here.
A Time object can report the number of seconds elapsed by comparing it to a previously created instance. Consider:
require 'time'
t0 = Time.now
sleep(2)
t1 = Time.now
t1.to_f - t0.to_f # => 2.00059294700623
So by creating a "last transmission" time object then checking its difference from "now" you can determine the number of seconds passed and act accordingly.
This might help: http://en.wikibooks.org/wiki/Ruby_Programming/Reference/Objects/Socket#Keeping_a_connection_alive_over_time_when_there_is_no_traffic_being_sent
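Building on that Time comparison, a minimal sketch of a per-connection silence timer (pure stdlib, Ruby 1.8 compatible; the class name and 30-second default are my own):

```ruby
# Tracks how long a connection has been silent. Call `touch` whenever
# data arrives; `expired?` tells you when to drop the connection.
class IdleTimer
  def initialize
    @last_activity = Time.now
  end

  # Record activity on the connection.
  def touch
    @last_activity = Time.now
  end

  # Seconds since the last recorded activity.
  def idle_seconds
    Time.now - @last_activity
  end

  def expired?(limit = 30)
    idle_seconds > limit
  end
end
```

In the server's read loop you would call `timer.touch` after each successful read, and check `timer.expired?` periodically (for example from a watchdog thread, or between `IO.select` calls) to decide when to close the socket.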
