How to stop the UI navigating to the most recent result in the Services view? - datagrip

So I have maybe 10 different connections active at any one time, running a bunch of statements on different databases. Every time a single statement/query completes, my results view jumps to the latest completed statement in the console, on any one of the open running connections - which is annoying when it's something trivial like dropping a temp table while I'm reading results from another output.
Any idea if you can prevent this from happening?

Unfortunately, it is not possible right now. Please file a feature request here: https://youtrack.jetbrains.com/issues/DBE
A workaround is to use 'In-editor results' mode, where you'll see the result just under your query and no one will ever grab it from you!

Related

Trains: Can I reset the status of a task? (from 'Aborted' back to 'Running')

I had to stop training in the middle, which set the Trains status to Aborted.
Later I continued it from the last checkpoint, but the status remained Aborted.
Furthermore, automatic training metrics stopped appearing in the dashboard (though custom metrics still do).
Can I reset the status back to Running and make Trains log training stats again?
Edit: When continuing training, I retrieved the task using Task.get_task() and not Task.init(). Maybe that's why training stats are not updated anymore?
Edit2: I also tried Task.init(reuse_last_task_id=original_task_id_string), but it just creates a new task, and doesn't reuse the given task ID.
Disclaimer: I'm a member of the Allegro Trains team.
When continuing training, I retrieved the task using Task.get_task() and not Task.init(). Maybe that's why training stats are not updated anymore?
Yes, that's the only way to continue the exact same Task.
You can also mark it as started with task.mark_started(); that said, the automatic logging will not kick in, as Task.get_task() is usually used for accessing previously executed tasks, not for continuing them (if you think the continue use case is important, please feel free to open a GitHub issue - I can definitely see the value there).
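For illustration, a minimal sketch of that route - the task ID is a placeholder, and as noted the automatic framework logging will not resume, though you can still report metrics explicitly:

from trains import Task

task = Task.get_task(task_id='<previous_task_id>')  # placeholder ID of the aborted task
task.mark_started()  # flip the status back from 'Aborted' to 'Running'
task.get_logger().report_scalar('loss', 'train', value=0.1, iteration=1)  # explicit reporting still works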
You can also do something a bit different, and just create a new Task that continues from the last iteration where the previous run ended. Notice that if you load the weights file (PyTorch/TF/Keras/Joblib) it will automatically connect it with the model that was created in the previous run (assuming the model was stored in the same location, or if you have the model on https/S3/GS/Azure and you are using trains.StorageManager.get_local_copy()).
from trains import Task
import torch

previous_run = Task.get_task()  # look up the previous (aborted) run by its ID or project/task name
task = Task.init('examples', 'continue training')  # new task that holds the continued training
task.set_initial_iteration(previous_run.get_last_iteration())  # carry on the iteration counter
model_state = torch.load('/tmp/my_previous_weights')  # loading the weights links the previous run's model
BTW:
I also tried Task.init(reuse_last_task_id=original_task_id_string), but it just creates a new task, and doesn't reuse the given task ID.
This is a great idea for an interface to continue a previous run; feel free to add it as a GitHub issue.

Need to write to a single logfile from different SPs running in different sessions concurrently in PL/SQL

I want to write to a single log file (which gets created on a daily basis) from multiple SPs running in different sessions.
This is what I have done:
create or replace PKG_LOG:

procedure SP_LOGFILE_OPEN
  step 1) Open the logfile:
    LF_LOG := UTL_FILE.FOPEN(LV_FILE_LOC, O_LOGFILE, 'A', 32760);
end SP_LOGFILE_OPEN;

procedure SP_LOGFILE_write
  step 1) Write the logs as per the application's needs:
    UTL_FILE.PUT_LINE(LF_LOG, 'whatever I want to write');
  step 2) Flush the content, as I want the logs written in real time:
    UTL_FILE.FFLUSH(LF_LOG);
end SP_LOGFILE_write;
Now, whenever I want to write to the log from any stored procedure, I first call SP_LOGFILE_OPEN and then SP_LOGFILE_write (as many times as I want).
The problem is, if there are two stored procedures, say SP1 and SP2, and both of them try to open the file concurrently, it never throws an error or waits for the other to finish. Instead, the file gets opened in both sessions where SP1 and SP2 are executing.
The content of SP1 (if it started running first) will be completely written into the logfile, but the content from SP2 will only be partially written. SP2 starts writing only when SP1's execution stops, and the initial content SP2 was trying to write gets lost due to FFLUSH.
As per my requirement, I don't want to lose the content of SP2 while SP1 is running.
Any suggestions, please? I don't want to drop the idea of FFLUSH, as I need the logs in real time.
Thanks.
You could use DBMS_LOCK to get a custom lock or wait until a lock is available, then do your write, then release the lock. It has to be serialized.
But this will make your concurrency problem even worse. You're basically saying that all calls to this procedure must get in a line and be processed one by one. And remember that disk I/O is slow, so your database is now only as fast as your disk.
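A rough sketch of that DBMS_LOCK idea - the lock name 'LOGFILE_LOCK', the 5-second timeout, and the procedure name are made up for the example, and it assumes the procedure lives inside PKG_LOG so it can see LF_LOG:

procedure SP_LOGFILE_WRITE_LOCKED(p_text in varchar2) is
  l_handle varchar2(128);
  l_status integer;
begin
  -- map an arbitrary lock name to a lock handle (same name => same lock in every session)
  dbms_lock.allocate_unique(lockname => 'LOGFILE_LOCK', lockhandle => l_handle);
  -- wait up to 5 seconds for an exclusive lock before writing
  l_status := dbms_lock.request(lockhandle        => l_handle,
                                lockmode          => dbms_lock.x_mode,
                                timeout           => 5,
                                release_on_commit => false);
  if l_status = 0 then
    UTL_FILE.PUT_LINE(LF_LOG, p_text);
    UTL_FILE.FFLUSH(LF_LOG);
    l_status := dbms_lock.release(lockhandle => l_handle);
  end if;  -- 1 = timeout, 2 = deadlock, 4 = already owned; handle as needed
end SP_LOGFILE_WRITE_LOCKED;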
Writing directly to the file from every session is a bad idea. Instead of writing directly to a file, simply enqueue a log message to an Oracle Advanced Queue and create a job running very frequently (every few seconds) to dequeue from the AQ. It's the procedure invoked by the job that actually writes to the file. This way you can synchronize different SP executions trying to log concurrently to the same file. The actual logging is done by one single SP invoked by the job.
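For illustration only, a minimal sketch of the producer side of that design - the payload type, queue names, and procedure name are made up, and the scheduled job that dequeues and writes the file is left out:

-- one-time setup: payload type, queue table and queue (names are placeholders)
create type log_msg_t as object (text varchar2(4000));
/
begin
  dbms_aqadm.create_queue_table(queue_table        => 'log_qt',
                                queue_payload_type => 'LOG_MSG_T');
  dbms_aqadm.create_queue(queue_name => 'log_q', queue_table => 'log_qt');
  dbms_aqadm.start_queue(queue_name => 'log_q');
end;
/
-- the SPs call this instead of UTL_FILE; the frequent job dequeues and does the actual file I/O
create or replace procedure sp_log_enqueue(p_text in varchar2) is
  l_enq_opts dbms_aq.enqueue_options_t;
  l_props    dbms_aq.message_properties_t;
  l_msgid    raw(16);
begin
  dbms_aq.enqueue(queue_name         => 'log_q',
                  enqueue_options    => l_enq_opts,
                  message_properties => l_props,
                  payload            => log_msg_t(p_text),
                  msgid              => l_msgid);
  commit;
end sp_log_enqueue;
/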

How to end "While Controller" on error and show whole progress in "View Results Tree"

In my test plan I want to make a Java request and check how it went by making a JDBC request, but I don't know exactly when the Java request will change the DB. I want to make several JDBC requests with a wait after each one, and if the JDBC request doesn't show the expected changes after a certain time, stop the whole test. After that I want to see, in View Results Tree, how far the test plan got before it stopped.
BUT
If I use any stop-thread functions, there is no data in View Results Tree.
How can I do what I want with JMeter?
Thanks.
You can use a similar mechanism to the one described here:
http://www.sourcepole.ch/2011/1/4/waiting-for-a-page-change-in-jmeter
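One possible shape for this (not taken from that post - the variable names, the BeanShell check, and the 10-try limit are purely illustrative): bound the polling with the While condition instead of calling a stop-thread function, so View Results Tree keeps everything it has recorded.

Thread Group
  Java Request
  While Controller - Condition: ${__javaScript("${done}" != "true" && "${giveup}" != "true")}
    JDBC Request
    BeanShell PostProcessor:
      // mark success when the expected change shows up in the JDBC response
      if (prev.getResponseDataAsString().contains("EXPECTED_VALUE")) {
          vars.put("done", "true");
      }
      // count attempts and give up after 10 polls instead of stopping the thread
      int tries = (vars.get("tries") == null) ? 1 : Integer.parseInt(vars.get("tries")) + 1;
      vars.put("tries", String.valueOf(tries));
      if (tries >= 10) {
          vars.put("giveup", "true");
      }
    Constant Timer: 5000 ms between polls
  If Controller - Condition: "${giveup}" == "true"
    Debug Sampler (or any sampler that records that the expected change never appeared)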

no response from the host :snmpwalk

I have implemented an AgentX subagent using mib2c.create-dataset.conf (with cache enabled).
In my snmpd.conf: agentXTimeout 15
In the testtable.h file I have changed the cache value as below:
#define testTABLE_TIMEOUT 60
According to my understanding, it reloads the data every 60 seconds.
Now my issue is that if the table holds more than a certain amount of data, it takes a while to load.
If I fire an SNMPWALK while that load is in progress, it gives me “no response from the host”. Likewise, if I walk the whole table and testTABLE_TIMEOUT expires partway through, the walk stops in the middle with the same error (no response from the host).
Please tell me how to solve this. My table contains a large amount of data, and it changes frequently.
I read somewhere:
(when the agent receives a request for something in this table and the cache is older than the defined timeout (12s > 10s), then it does re-load the data. This is the expected behaviour.
However the agent does not automatically release the local cache (i.e. call the 'free' routine) as soon as the timeout has expired.
Instead this is handled by a regular "garbage collection" run (once a minute), which will free any stale caches.
In the meantime, a request that tries to use that cache will spot that it's expired, and reload the data.)
Is there any connection between these two? I can't quite make sense of it. How do I resolve my problem?
Unfortunately, if your data set is very large and it takes a long time to load then you simply need to suffer the slow load and slow response. You can try and load the data on a regular basis using snmp_alarm or something so it's immediately available when a request comes in, but that doesn't really solve the problem either since the request could still come right after the alarm is triggered and the agent will still take a long time to respond.
So... the best thing to do is optimize your load routine as much as possible, and possibly simply increase the timeout that the manager uses. For snmpwalk, for example, you might add -t 30 to the command line arguments and I bet everything will suddenly work just fine.
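For example (the host, community string, and table OID below are placeholders):

# allow up to 30 seconds per request and one retry before giving up
snmpwalk -v2c -c public -t 30 -r 1 192.0.2.10 TEST-MIB::testTable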

WP7 Max HTTPWebRequests

This is kind of a 2 part question
1) Is there a max number of HttpWebRequests that can be run at the same time in WP7?
I'm going to create a ScheduledTaskAgent to run a PeriodicTask. There will be 2 different REST service calls: the first one will get a list of IDs for records that need to be downloaded, and the second will be used to download those records one at a time. I don't know how many records there will be; my guesstimate would be ±50.
2) Would making all the individual record requests at once be a bad idea (assuming that it's possible), or should I wait for a request to finish before starting another?
Having just spent a week and a half working at getting a BackgroundAgent to stay within its memory limits, I would suggest doing them one at a time.
You lose about half your memory to system libraries and the like; your first web request will take nearly another 20%, but it seems to reuse that memory on subsequent requests.
If you need to store the results in a local database, that is going to take a good chunk more. I have found that a CompiledQuery uses less memory, which means holding a single instance of your context.
Between each call I would suggest doing a GC.Collect(); I even add a short Thread.Sleep() just to be sure the process has some time to tidy things up.
Another thing I do is track how much memory I am using and attempt to exit gracefully when I get to around 97 or 98%.
You can not use the debugger to test memory limits, as the debug memory is much higher and the limits are not enforced. However, for comparative testing between versions of your code, the debugger does produce very similar results on subsequent runs over the same code.
You can track your memory usage with Microsoft.Phone.Info.DeviceStatus.ApplicationCurrentMemoryUsage and Microsoft.Phone.Info.DeviceStatus.ApplicationMemoryUsageLimit
I write a status log into IsolatedStorage so I can see the result of runs on the phone, and I use ScheduledActionService.LaunchForTest() to kick them off. I then use ShellToast notifications to let me know when the task runs and also when it completes; that way I can launch my app to read the status log without interrupting it.
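A rough sketch of that memory-guard pattern, assuming a sequential download loop inside the agent - the DownloadAll/downloadRecord helpers and the 97% threshold are just illustrative, not an actual API:

using System;
using System.Threading;
using Microsoft.Phone.Info;

public static class DownloadHelper
{
    // Returns false once the agent gets close to its memory cap, so the caller
    // can exit gracefully instead of being killed by the OS. 97% is an example value.
    public static bool HasMemoryHeadroom()
    {
        long used  = DeviceStatus.ApplicationCurrentMemoryUsage;
        long limit = DeviceStatus.ApplicationMemoryUsageLimit;
        return used < (long)(limit * 0.97);
    }

    // One-at-a-time loop; downloadRecord stands in for whatever blocking
    // wrapper you put around HttpWebRequest.
    public static void DownloadAll(string[] recordIds, Action<string> downloadRecord)
    {
        foreach (string id in recordIds)
        {
            if (!HasMemoryHeadroom())
                break;              // stop cleanly; pick up the rest on the next run

            downloadRecord(id);

            GC.Collect();           // nudge the GC between requests
            Thread.Sleep(100);      // give the process a moment to tidy up
        }
    }
}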
Tyler,
My 2 cents here.
I don't believe there is any restriction on how many HttpWebRequests you can spin up. These, however, have to be async, of course, and may be served from the browser stack. Most modern browsers, including IE9, handle over 5 concurrent requests to the same domain, but you are not guaranteed a request handle immediately. However, it should not matter if you are willing to wait on a separate thread, dump your content onto the request pipe, and wait for the response on yet another thread. This post (here) has a nice walkthrough of why we need to do this.
Nothing wrong with this approach either, IMO. You're just going to have to wait until all the requests have their respective pipelines & then wait for the responses.
Thanks!
1) Your memory limit in a PeriodicTask or ResourceIntensiveTask is 5 MB, so you definitely should control your requests really carefully. I don't think there is a limit in the code.
2) You have only 5 MB. So if you start all your requests at the same time, it will terminate immediately.
3) I think you would be better off using a ResourceIntensiveTask, because a PeriodicTask should only run for 15 seconds.
Good guide for Multitasking features in Mango: http://blogs.infosupport.com/blogs/alexb/archive/2011/05/26/multi-tasking-in-windows-phone-7-1.aspx
I seem to remember (but can't find the reference right now) that the maximum number of requests that the OS can make at once is 7. You should avoid making this many at once though as it will stop other/system apps from being able to make requests.
