We have an evening package that is scheduled to run in the evening and usually completes by midnight.
We recently migrated our Oracle ODAs to Exadata Cloud, along with containerization and Oracle patching. After the database migration, the package stopped working the next day. As a fix, I asked the DBA team to gather stale stats, and it ran the next day and worked fine for the rest of the days. Last week we upgraded Oracle to the latest available version, and it again stopped the next day. I asked the DBA team to gather stale stats again, but this time it had no effect, and now the package runs inconsistently. One day it errored out with a temp tablespace issue; our DBA team increased the space, and the next day it completed, but it took 12 hours instead of 5. For two days it ran but got stuck at one point without erroring out. In summary: sometimes it completes, and other times it gets stuck without erroring out, and we then have to kill the stuck session before the next run starts in the evening.
I asked the DBA team to look into it, and they don't see any issues on their end. Please let me know what else could be causing this slowness and inconsistent behavior.
Note: No change was made to the package definition.
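For reference, the stats-gathering step and the plan checks that typically accompany it can be sketched roughly as below; the schema name is hypothetical, and after a version upgrade the dictionary and fixed-object statistics are often worth refreshing as well:

```sql
-- Hypothetical schema name; adjust to whatever the package actually touches.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname => 'MYSCHEMA',
    options => 'GATHER STALE',
    cascade => TRUE);
END;
/

-- After an upgrade, dictionary and fixed-object stats may be stale too:
BEGIN
  DBMS_STATS.GATHER_DICTIONARY_STATS;
  DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
END;
/

-- Check whether the heaviest statements switched execution plans
-- after the upgrade (compare plan_hash_value to pre-upgrade values):
SELECT sql_id, plan_hash_value,
       ROUND(elapsed_time / 1e6) AS elapsed_s, executions
FROM   v$sql
ORDER  BY elapsed_time DESC
FETCH FIRST 20 ROWS ONLY;
```

A plan_hash_value that changed across the upgrade for the same sql_id would point at an optimizer plan regression rather than a resource problem.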
Any feedback or recommendations are greatly appreciated.
Thanks,
Khush11
Related
A few days ago, one of my customers, who has MariaDB 10.3.10 installed on Windows 10, had a power issue on his computer. After he turned the computer on again, Windows offered him the restore-point option and he took it. Before that, his database was working correctly as intended, but after he used the restore point, something happened, and some stored procedures went from 5-10 seconds to 3-4 minutes.
My first thought was that some tables were corrupted, but after some tests I decided to make a dump and load the data onto another computer with the same database engine version (MariaDB 10.3.10), and surprisingly everything worked perfectly. I took the same database to two other computers, and everything ran correctly: no slow queries, nothing. I decided to format the computer with the issue and installed everything from scratch, but nothing changed; same issues, same problems...
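Since the same data is fast on other machines, one hedged check is to compare the server configuration between the slow box and a fast one, and to rebuild index statistics in case the restore damaged them (the table name below is hypothetical):

```sql
-- Run on both the slow and a fast machine and compare the output.
-- These InnoDB settings commonly explain large stored-procedure slowdowns:
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
SHOW GLOBAL VARIABLES LIKE 'innodb_flush_method';

-- Rebuild index statistics in case they were lost in the restore:
ANALYZE TABLE orders;
```

A buffer pool that fell back to the tiny default (e.g. because the restore point reverted my.ini) would produce exactly this kind of machine-specific slowdown.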
Any thoughts on this?
I tried to sync the database in Visual Studio 2015 after creating a project, an EDT, an Enum, and a Table in order to create a new screen in Dynamics 365.
When I tried to synchronize it, it stopped in the middle of the schema-checking process. Although the DB synchronization doesn't seem to have a problem for the first few minutes, it always stops during this process, as I describe below.
Log Details:
"Schema has not changed between new table 'DPT_TableDT' and old table
'DPT_TableDT' with table id '3997'. Returning from
ManagedSyncTableWorker.ExecuteModifyTable() Syncing Table Finished:
DPT_TableDT. Time elapsed: 0:00:00:00.0010010"
Could you tell me how to solve this issue?
Thanks in advance.
Full database synchronization log
DB Sync Log
From what you've described and also shown in your screenshot, this does not look like an error but is simply describing X++ and Dynamics AX/365FO behaviour.
When you say that it "doesn't have a problem for the first few minutes", I'm guessing you're just not being patient enough. Full database syncs generally take between 10 and 30 minutes, but can take less or more time depending on a variety of factors, such as how much horsepower your development environment has, how many changes are being synced, etc. I would wait at least one hour before considering the possibility that the sync engine has errors (or even run it overnight and see what information it has for you in the morning).
The message you've posted from the log ("Schema has not changed") isn't an error message; it is just an informational log from the sync engine. It is simply letting you know that the table did not have any changes to propagate to SQL Server.
Solution: Run the sync overnight and post a screenshot of the results or the error list window in Visual Studio.
I've recently been stymied by a long-running application where Access 2003 replicas refused to synchronize. The message returned was "not enough memory". This was on machines running Windows 10. The only way I was able to force synchronization was to move the replicas onto an old machine still running Windows 98 with Office XP, which allowed synchronizing and conflict resolution. When I moved the synchronized files back to the Windows 10 machine, they still would not synchronize.
I finally had to create a blank database and link to a replica, then use make-table queries to select only data fields to create new tables. I was then able to create new replicas that would synchronize.
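A make-table query of the kind described might look like this (table and field names are hypothetical); SELECT ... INTO copies only the named data fields into a fresh table, leaving behind the hidden replication system fields (s_GUID, s_Lineage, s_Generation):

```sql
-- Access "make-table" query: copies only the data fields
-- into a new, non-replicated table.
SELECT CustomerID, CustomerName, Address
INTO Customers_Clean
FROM Customers;
```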
From this I've come to suspect the following:
Something in Windows 10 has changed and caused the problem with synchronizing/conflict resolution.
Something in the hidden/protected fields added to the replica sets is seen as a problem under Windows 10 that is not a problem under Windows 98.
One thing I noticed is that over the years the number of replicas in the synchronizing list had grown to over 900 sets, but the only way to clear the table was to create a new clean database.
For some reason my Postgres server always stops on shutdown, and on restart I always have to call:
pg_ctl -D /usr/local/var/usr/local/var/postgres -l /usr/local/var/postgres/server.log start
Can you help me understand what's stopping Postgres from starting up on its own, and what an optimal setup would be? Thanks.
Update: I'm on a Mac, Yosemite 10.10.5. I installed Postgres about a year and a half ago, and it used to load on startup just fine. It was a while ago, so I don't remember exactly how I installed it, but I'd imagine I did so via the command line. Something must have happened to my system about 6 months ago (what exactly, I'm unfortunately unaware of), as it was around then that it stopped auto-running on startup. That's when I last looked up how to get Postgres up and running manually; I found the start command listed above and have been using it ever since, every time I restart. A little late, I know, but I've now decided to try to work out what's going wrong. Apologies for the vagueness of detail as to what exactly led to this happening.
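On a Mac, Postgres is normally auto-started by a launchd agent, so a missing or unloaded plist would produce exactly this behavior. A minimal sketch of such a plist, assuming Homebrew-style paths (your binary and data-directory paths may differ; the data directory in the pg_ctl command above looks doubled, which is worth double-checking too):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>org.postgresql.postgres</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/postgres</string>
    <string>-D</string>
    <string>/usr/local/var/postgres</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>KeepAlive</key>
  <true/>
</dict>
</plist>
```

Saved as ~/Library/LaunchAgents/org.postgresql.postgres.plist and loaded with `launchctl load -w ~/Library/LaunchAgents/org.postgresql.postgres.plist`, this would start Postgres at login and restart it if it exits.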
I am working with a SuSE machine (cat /etc/issue: SUSE Linux Enterprise Server 11 SP1 (i586)) running PostgreSQL 8.1.3 and the Slony-I replication system (slon version 1.1.5). We have a working replication setup between two databases on this server, which generates log-shipping files to be sent to the remote machines we are tasked with maintaining. As of this morning, we ran into a problem with this.
For a while now, we've had strange memory problems on this machine: the oom-killer seems to strike even when there is plenty of free memory left. That set the stage for our current issue. We ran a massive update on our system last night while replication was turned off. Now, as things stand, we cannot replicate the changes out: Slony is attempting to compile all the changes into a single massive log file, and after about half an hour of running it trips over the oom-killer issue, which appears to restart the replication package. Since it is constantly trying to rebuild that same package, it never gets anywhere.
My first question is this: Is there a way to cap the size of Slony log-shipping files, so that it writes no more than X bytes (or KB, or MB, etc.) and, after exceeding that size, closes the current log-shipping file and starts a new one? We've regularly been able to reach about four megabytes before the oom-killer hits, so if I could cap it there, I could at least start generating smaller files and hopefully eventually get through this.
My second question, I guess, is this: Does anyone have a better solution for this issue than the one I'm asking about? It's quite possible I'm getting tunnel vision looking at the problem, and all I really need is -a- solution, not necessarily -my- solution.
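As far as I know, slon has no byte-size cap on its archive files, but one hedged workaround sketch, assuming the 1.1.x option set, is to shrink the maximum SYNC group size so that each generated log-shipping file covers fewer events (cluster name, archive path, and connection string below are hypothetical):

```shell
# -g caps the SYNC group size (how many sync events are merged per run);
# -a is the log-shipping archive directory.
slon -g 1 -a /var/lib/slony/archive mycluster 'dbname=mydb user=slony'
```

Processing one SYNC at a time should keep each file, and the memory needed to build it, far smaller, at the cost of slower catch-up.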
I've got a DBMS_SCHEDULER job running in Oracle 10.2.0.
When I change the system date back to yesterday, the job will wait for one day to continue its work. The reason for this is that next_run_date does not change.
This does not happen regularly, but sometimes someone decides to change the system date without thinking or even knowing about oracle jobs running.
Any suggestions on how to keep my job running at the configured interval (without having to change it manually)?
If you are changing the system date out from underneath Oracle, your hands might be tied. Is there a reason you are regularly changing the system date? If so, perhaps you should create a script for doing so, and have that script also update next_run_date.
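One hedged sketch of such a script: disabling and re-enabling the job makes the scheduler recompute next_run_date from the repeat interval (the job name is hypothetical):

```sql
-- Force DBMS_SCHEDULER to recompute next_run_date after a clock change.
BEGIN
  DBMS_SCHEDULER.DISABLE('MY_NIGHTLY_JOB');
  DBMS_SCHEDULER.ENABLE('MY_NIGHTLY_JOB');
END;
/

-- Verify the recomputed schedule:
SELECT job_name, next_run_date
FROM   user_scheduler_jobs
WHERE  job_name = 'MY_NIGHTLY_JOB';
```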