I'm thinking of killing all my Oracle sessions:
Kill all sessions of a user
to solve my issue on DBA about not being able to replace a package body.
Would you recommend doing this?
"Recommend"? I would (kind of), but you should pay attention to who is using your package, because you might kill that user in the middle of some processing. That would release the package and you'll be able to compile it, but not everyone will be happy about it.
If you're about to kill only your own session (pure suicide, eh?), go ahead; you'll easily reconnect afterwards.
That's why you should deploy changes during OFF hours.
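If you do go ahead, it helps to first check who is actually holding locks on the package. A minimal sketch using the python-oracledb driver (the credentials and package name are placeholders):

    # List sessions holding DDL locks on the package before killing anyone
    # mid-processing. Assumes DBA privileges; adjust connect details.
    import oracledb

    conn = oracledb.connect(user="system", password="...", dsn="dbhost/ORCLPDB1")
    cur = conn.cursor()
    cur.execute("""
        SELECT s.sid, s.serial#, s.username, s.status
          FROM v$session s
          JOIN dba_ddl_locks l ON l.session_id = s.sid
         WHERE l.name = :pkg""", pkg="MY_PACKAGE")  # hypothetical package name
    for sid, serial_no, username, status in cur:
        print(sid, serial_no, username, status)
        # only after checking: ALTER SYSTEM KILL SESSION '<sid>,<serial_no>'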
So, I just want to ask how to handle caching properly.
I'm having a problem when I deploy the application/site to our client's server.
Once I've successfully deployed it, the end users on the other side don't seem to
get the latest changes. We did some investigation and found that this happens because the old version was still cached in their browsers...
The quick solution is to have them (the client's end users) manually clear the cache on their respective machines...
Unfortunately, that is a little inconvenient, given that the cache has to be cleared manually on each machine...
So, is there any way to have the cache cleared automatically, or something like that?
How do you handle deployments at your own company or for your clients? Do you do the same thing?
Happy to hear your thoughts about this, thanks!
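One common approach, sketched here under the assumption that the static assets live in a static/ folder, is to fingerprint asset file names at deploy time so browsers treat every release as brand-new files:

    # Deploy-time cache-busting sketch: rename each asset to include a short
    # content hash; any change produces a new URL the browser has never cached.
    import hashlib
    import pathlib

    def fingerprint(asset: pathlib.Path) -> pathlib.Path:
        digest = hashlib.md5(asset.read_bytes()).hexdigest()[:8]
        renamed = asset.with_name(f"{asset.stem}.{digest}{asset.suffix}")
        asset.rename(renamed)
        return renamed

    for asset in pathlib.Path("static").glob("*.js"):
        new_name = fingerprint(asset)
        print(f"{asset.name} -> {new_name.name}")  # update HTML references to match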
I have a Scrapyd server on Heroku. It works fine, and the spider works and connects to the DBs without any issue.
I have set it to run every day via the scheduler in the ScrapydWeb UI.
However, every day the spider seems to disappear, and I have to run scrapyd-deploy from my local machine to the server again for it to be scheduled; it never runs anything past that single day, even though I have set it to run every day at a certain time.
Does anyone know what might be the problem?
I am not sure what kind of details people need to see to help me resolve this. Please do ask and I shall provide what I can.
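One thing worth checking: Heroku dynos restart about once a day and have an ephemeral filesystem, so the deployed egg may simply be getting wiped. A quick look via Scrapyd's listversions.json API (the server URL and project name are placeholders):

    # Ask the Scrapyd server which versions of the project it still has.
    import requests

    resp = requests.get(
        "https://your-scrapyd-app.herokuapp.com/listversions.json",
        params={"project": "myproject"},
    )
    # An empty "versions" list after the daily restart means the egg is gone.
    print(resp.json())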
I have inherited a database, and every night I get buzzed in the middle of the night for locking issues. This database has severe locking issues, and the usual drill is to bounce the application tiers one by one so the locks get released. I am tired of doing this and came across documentation saying I can go ahead and kill the blocking session.
I am just wondering: if I kill the blocking session once it has been blocking for longer than a predefined threshold,
do I risk corrupting the database?
If so, how?
Even if I assume that I am corrupting the database, restarting the application server is equally risky and more painful for me too.
So what option do I choose here: automatically kill the blocking session until the developers fix the code that is causing the blocking?
Regards,
Nick
Seems like the exact purpose the Resource Manager directive MAX_IDLE_BLOCKER_TIME was created for.
Example:
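A hedged sketch (the plan name and the 300-second threshold are assumptions; run as a DBA): create a Resource Manager plan whose directive kills sessions that sit idle while blocking others for too long.

    # Submit the Resource Manager plan via python-oracledb.
    import oracledb

    conn = oracledb.connect(user="system", password="...", dsn="dbhost/ORCLPDB1")
    conn.cursor().execute("""
    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;
      DBMS_RESOURCE_MANAGER.CREATE_PLAN('LIMIT_IDLE_BLOCKERS', 'auto-kill idle blockers');
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan                  => 'LIMIT_IDLE_BLOCKERS',
        group_or_subplan      => 'OTHER_GROUPS',
        comment               => 'kill sessions idle more than 300s while blocking',
        max_idle_blocker_time => 300);
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
    END;""")
    # Activate with: ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'LIMIT_IDLE_BLOCKERS';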
No, killing a session won't corrupt the database: the killed session's transaction is rolled back using the undo data. When you kill it, it gives the "marked for kill" message.
Do it the normal way, alter system kill session 'sid,serial#', not with kill -9 on the OS process.
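To pick the right victim, first take a quick look at who is blocking whom (again a python-oracledb sketch; credentials are placeholders):

    # Each row is a waiter; blocking_session identifies the session to examine.
    import oracledb

    conn = oracledb.connect(user="system", password="...", dsn="dbhost/ORCLPDB1")
    cur = conn.cursor()
    cur.execute("""
        SELECT sid, serial#, username, blocking_session, seconds_in_wait
          FROM v$session
         WHERE blocking_session IS NOT NULL""")
    for row in cur:
        print(row)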
I have quite a slow Application_Start due to a lot of IoC stuff happening at start-up.
The problem I'm trying to solve is, how do I avoid passing that start up time to the end user?
Assumptions
My apps are hosted on AppHarbor, so I have no access to IIS. However, even if I did, my understanding is that it's best practice to let the app pool recycle, so there's no way to avoid having Application_Start run regularly (I think it's every 20 minutes on AppHarbor).
My idea to solve it
Initially I thought I'd hit it every minute or something, but that seems too brute force and it may not even stop a user from experiencing the slow start up.
My current solution is to handle the Application_End event, and then immediately hit the App so that it starts up again, thus hopefully not impacting any users.
Is there a better way to solve this issue?
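For concreteness, the "immediately hit the app" step could be as small as this sketch (the URL is a placeholder; it would be triggered from Application_End or an external scheduler):

    # Any request forces the application to start back up, so the next real
    # visitor doesn't pay the Application_Start cost.
    import requests

    APP_URL = "https://yourapp.apphb.com/"  # placeholder URL

    try:
        requests.get(APP_URL, timeout=120)
    except requests.RequestException as exc:
        print(f"warm-up ping failed: {exc}")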
Unfortunately, a longer session timeout will not prevent an IIS app pool recycle when you're using InProcess session state.
Have you considered lazy loading (some of) your dependencies? SimpleInjector has documentation on how to do this, which should be adaptable to most other IoCs:
Simple Injector \ Documentation \ How To \ Register Factory Delegates \ Working With Lazy Factories
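The general shape of that lazy-factory idea, sketched in Python with made-up names: register a factory, and pay the construction cost on first use instead of inside Application_Start.

    from typing import Callable, Generic, Optional, TypeVar

    T = TypeVar("T")

    class Lazy(Generic[T]):
        """Build the wrapped dependency on first access, not at registration."""
        def __init__(self, factory: Callable[[], T]) -> None:
            self._factory = factory
            self._value: Optional[T] = None

        @property
        def value(self) -> T:
            if self._value is None:
                self._value = self._factory()  # the expensive work happens here
            return self._value

    def build_reporting_service() -> object:  # hypothetical slow dependency
        print("expensive construction happens on first use, not at start-up")
        return object()

    reporting = Lazy(build_reporting_service)  # registration is instant
    print(reporting.value)                     # first access pays the cost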
In my understanding, to keep the start-up time from hitting users, you should avoid recycling the app pool, which you can do with the IIS app pool timeout settings; these can be tuned through web.config, not just through the IIS console. Additionally, you can read more about it in this SO question. You might not need Application_End hacks to achieve this.
Update:
I found another interesting thing that may help you with this: the IIS Application Initialization extension, which can be used to preload the application as soon as the worker process starts. It may help you improve the customer experience. Check it out.
Here's my scenario: my Azure web role does a lot of work in OnStart() and produces a huge debug trace that is uploaded to Blob Storage.
Now OnStart() hangs for whatever reason, and I look into Blob Storage and see that the trace has not been updated for several minutes already. So I decide the role is beyond repair, and I want to shut it down immediately so that I can update the role with another package and start it again.
The problem is that when I hit "Stop" in the Management Portal, it takes up to ten minutes to stop the role; I guess it tries to convince the role to stop gracefully and waits for several minutes.
Can I somehow make the role stop immediately without letting it stop gracefully?
I wonder if deleting the deployment (that's presumably what you're going to do after stopping it?) is faster, but I'm not sure. As far as I know, there's only one kind of "stop," so no, I don't think there's a way to force a faster stop.
Have a look at the Windows Azure Platform PowerShell Cmdlets.
They should give you at least the same functionality and probably more control over the actions. You could also query the current status, as it is not always reflected immediately in the Silverlight portal.
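If you'd rather script it than click through the portal, the same Service Management operations are also exposed in the legacy Python SDK (azure-servicemanagement-legacy); the subscription ID, certificate path, and service name below are placeholders:

    # Check the deployment's real status, then delete it outright rather than
    # waiting for a graceful stop (you'd redeploy the fixed package afterwards).
    from azure.servicemanagement import ServiceManagementService

    sms = ServiceManagementService(
        subscription_id="<subscription-id>",        # placeholder
        cert_file="/path/to/management-cert.pem",   # placeholder
    )
    deployment = sms.get_deployment_by_slot("myservice", "production")
    print(deployment.status)  # the portal can lag behind this value
    sms.delete_deployment("myservice", deployment.name)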