DB synchronization in Visual Studio 2015 hangs

I tried to synchronize the database in Visual Studio 2015 after creating a project, an EDT, an enum, and a table, in order to build a new screen in Dynamics 365.
The synchronization stops partway through the schema-checking process. It appears to run without problems for the first few minutes, but it always stalls at this point, as described below.
Log Details:
"Schema has not changed between new table 'DPT_TableDT' and old table
'DPT_TableDT' with table id '3997'. Returning from
ManagedSyncTableWorker.ExecuteModifyTable() Syncing Table Finished:
DPT_TableDT. Time elapsed: 0:00:00:00.0010010"
Could you tell me how to solve this issue?
Thanks in advance.
Full database synchronization log: [screenshot: DB Sync Log]

From what you've described and shown in your screenshot, this does not look like an error; it is simply normal X++ and Dynamics AX/365FO behaviour.
When you say that it "doesn't have a problem for the first few minutes", my guess is that you're simply not being patient enough. A full database sync generally takes 10-30 minutes, but it can run shorter or longer depending on a variety of factors: how much horsepower your development environment has, how many changes are being synced, and so on. I would wait at least an hour before considering the possibility that the sync engine has errors (or even run it overnight and see what information it has for you in the morning).
The message you've posted from the log ("Schema has not changed") isn't an error message; it is just an informational log from the sync engine. It is simply letting you know that the table did not have any changes to propagate to SQL Server.
Solution: Run the sync overnight and post a screenshot of the results or the error list window in Visual Studio.

I've recently been stymied by a long-running application where Access 2003 replicas refused to synchronize. The message returned was "not enough memory". This was on machines running Windows 10. The only way I was able to force synchronization was to move the replicas onto an old machine still running Windows 98 with Office XP, which allowed synchronizing and conflict resolution. When I moved the synchronized files back to the Windows 10 machine, they still would not synchronize.
I finally had to create a blank database, link it to a replica, and then use make-table queries to select only the data fields into new tables. I was then able to create new replicas that would synchronize.
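To illustrate, the make-table step looks roughly like this when scripted from .NET against the Jet engine - a sketch only, where the .mdb path, the linked table "Customers", and its field names are all placeholders:

    using System.Data.OleDb;

    class StripReplicationFields
    {
        static void Main()
        {
            // Jet 4.0 is the OLE DB provider for Access 2003 (.mdb) files;
            // this must run in a 32-bit process. The path is a placeholder.
            using (var conn = new OleDbConnection(
                "Provider=Microsoft.Jet.OLEDB.4.0;" +
                @"Data Source=C:\data\CleanCopy.mdb"))
            {
                conn.Open();

                // "Customers" is a linked table pointing at the old replica.
                // SELECT ... INTO copies only the named data fields, leaving
                // the hidden replication bookkeeping fields behind.
                using (var cmd = new OleDbCommand(
                    "SELECT CustomerID, CustomerName, Address " +
                    "INTO Customers_Clean FROM Customers", conn))
                {
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }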
From this I've come to suspect the following:
Something in Windows 10 has changed and caused the problem with synchronizing/conflict resolution.
Something in the hidden/protected fields added to the replica sets is seen as a problem under Windows 10 that is not a problem under Windows 98.
One thing I noticed is that over the years the number of replicas in the synchronizing list had grown to over 900 sets, but the only way to clear the table was to create a new clean database.

How do I improve performance of SSIS calling a WCF service?

Edit
I've discovered that the service is not actually performing any database calls when running my process, though it does for the normal application. I am going to troubleshoot my SSIS package / code under the assumption that something is silently failing or looping where it shouldn't.
Original Question
I have an SSIS package that processes a table of user accounts. I need to loop through them (386,000 rows) and call a WCF service to create the users in our system. The WCF call is sub-second, but even at half a second this process would take over 50 hours. To improve throughput, I modified the package to divide the result set into 5 batches and process them asynchronously.
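Roughly, the batching logic is along these lines - a sketch rather than the actual package code, where UserRow and UserServiceClient stand in for the real row type and the generated WCF proxy, and which assumes a script task targeting .NET 4.5 or later:

    using System.Collections.Generic;
    using System.Linq;
    using System.Threading.Tasks;

    // Stand-ins for the real row type and the generated WCF proxy:
    class UserRow { /* account fields */ }
    class UserServiceClient : System.IDisposable
    {
        public void CreateUser(UserRow row) { /* sub-second WCF call */ }
        public void Dispose() { }
    }

    static class BatchedUserCreation
    {
        // Split the rows into `batchCount` interleaved slices and run one
        // worker per slice, so several calls are in flight at any time.
        public static void Run(IList<UserRow> rows, int batchCount)
        {
            Task[] workers = Enumerable.Range(0, batchCount)
                .Select(i => Task.Run(() => ProcessSlice(rows, i, batchCount)))
                .ToArray();
            Task.WaitAll(workers);
        }

        static void ProcessSlice(IList<UserRow> rows, int offset, int stride)
        {
            // One proxy per batch, disposed when the batch ends, so channel
            // state can't accumulate across hundreds of thousands of calls.
            using (var client = new UserServiceClient())
            {
                for (int i = offset; i < rows.Count; i += stride)
                {
                    client.CreateUser(rows[i]);
                }
            }
        }
    }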
The process works, and I get the expected one-fifth run time when debugging in Visual Studio. The problem is that after roughly 40,000 users, Visual Studio crashes.
So I attempted an execution using the SQL Server Execute Package Utility, but the per-iteration run time seems to keep growing, and it is destroying any improvement I made.
[Screenshots: iteration timings at execution start vs. after 2 hours of processing]
Any solution to my problem would be much appreciated, thank you.

Is there a way to cap the file size of slony log shipping files?

I am working with a SuSE machine (cat /etc/issue: SUSE Linux Enterprise Server 11 SP1 (i586)) running PostgreSQL 8.1.3 and the Slony-I replication system (slon version 1.1.5). We have a working replication setup between two databases on this server, which generates log shipping files to be sent to the remote machines we are tasked to maintain. As of this morning, we ran into a problem with this.
For a while now, we've had strange memory problems on this machine - the oom-killer seems to strike even when there is plenty of free memory left. That set the stage for our current issue: last night we ran a massive update on our system while replication was turned off. Now we cannot replicate the changes out - Slony is attempting to compile all the changes into a single massive log shipping file, and after about half an hour of running it trips the oom-killer, which appears to restart the replication process. Since it keeps trying to rebuild that same file, it never gets anywhere.
My first question is this: Is there a way to cap the size of Slony log shipping files, so that it writes out no more than 'X' bytes (or K, or Meg, etc.) and after going over that size, closes the current log shipping file and starts a new one? We've been able to hit about four megs in size before the oom-killer hits with fair regularity, so if I could cap it there, I could at least start generating the smaller files and hopefully eventually get through this.
My second question, I guess, is this: Does anyone have a better solution for this issue than the one I'm asking about? It's quite possible I'm getting tunnel vision looking at the problem, and all I really need is -a- solution, not necessarily -my- solution.

Entity Framework takes 30 minutes to generate a model

No matter what I do or which DB I connect to, EF seems to take around 15-30 minutes to generate a model. While it's doing this, I get a "Visual Studio is busy" message in the system tray.
The first DB I connected to was complex, with a lot of data and lots of views, so I thought maybe that was why. But now I have a local DB file with one table that has 2 columns and 3 rows, and it still takes the same amount of time.
Eventually VS crashes and restarts. Has anyone had this problem before? Any ideas?
I've looked at Resource Monitor; devenv.exe does not seem to be consuming any resources that would indicate it's doing a lot of work.
What credentials do you use to access your DB? Let's try ruling out domain latency issues first. If this is at work, can you verify the same flow on 2 separate machines using the same domain credentials? This wouldn't apply if you were using local creds.
Turns out I had a Visual Studio DVD in my DVD drive. Each time I did something with EF, VS started reading from the disc. I have no idea what or why, but the little LED would blink. Once I ejected the disc, everything ran fine. Go figure!

SQL Server CE performance on device

My SQL Compact database is very simple, with just three tables and a single index on one of the tables (the table with 200k rows; the other two have less than a hundred each).
The first time the .sdf file is used by my Compact Framework application on the target Windows Mobile device, the system hangs for well over a minute while "something" is done to the database: when deployed, the DB is 17 megabytes, and after this first usage, it balloons to 24 megs.
All subsequent usage is pretty fast, so I'm assuming there's some sort of initialization / index building going on during this first usage. I'd rather not subject the user to this delay, so I'm wondering what this initialization process is and whether it can be performed before deployment.
For now, I've copied the "initialized" database back to my desktop for use in the setup project, but I'd really like to have a better answer / solution. I've tried "full compact / repair" in the VS Database Properties dialog, but this made no difference. Any ideas?
For the record, I should add that the database is only read from by the device application -- no modifications are made by that code.
Yes - it recreates your indexes, because the database was created (or last opened) on a desktop computer. Copy the device-initialized database from the device back into your setup.
more info here:
http://blogs.msdn.com/sqlservercompact/archive/2009/04/01/after-moving-the-database-from-one-platform-to-other-the-first-sqlceconnection-open-takes-more-time.aspx
Since the db is read-only, and provided the "initialized" db no longer inflates, I would go with simply putting it into the setup. Just confirming that your approach makes sense.
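If shipping the device-initialized file ever becomes impractical, you could instead trigger the rebuild deliberately in a one-time setup step on the device, so the delay happens behind a "please wait" message rather than on first real use. A sketch, assuming the standard System.Data.SqlServerCe API:

    using System.Data.SqlServerCe;

    static class DatabaseWarmUp
    {
        // Run once on the device, right after install. The first Open()
        // after the .sdf has been moved to a new platform is what triggers
        // the index rebuild; no query needs to be executed.
        public static void Run(string sdfPath)
        {
            using (var conn = new SqlCeConnection("Data Source=" + sdfPath))
            {
                conn.Open();
            }
        }
    }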

Retain Windows Error Reporting Dumps from Hung Application

An application is hanging occasionally, and I would like to see the dump at the time to figure it out. I had written an application that the user can run to automatically create a dump that I can look at. However I can't seem to get the users to remember to run it when it hangs, no matter what I try. They always end up closing the program, which invokes Windows Error Reporting.
WER will create dumps in the temp directory, but unfortunately they are deleted as soon as the dialog for sending the info to Microsoft or not is closed.
Becoming an ISV and getting this info from Microsoft's error reporting servers is one solution, but not one that is realistic at the moment.
I can't imagine that I am the only one faced with this issue. The software is used concurrently by dozens upon dozens of staff, so reaching them all and getting them to run an application, or to leave that dialog open until some other tool has run, has not been working out.
The app is running on Windows Server 2003. Too bad, since I know Server 2008 has some LocalDumps options that would let me retain them.
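(For anyone on Vista / Server 2008 or later who finds this: those LocalDumps settings are registry values under the Windows Error Reporting key. A sketch of enabling them - the dump folder, count, and type chosen here are arbitrary, and writing to HKLM requires admin rights:)

    using Microsoft.Win32;

    class EnableLocalDumps
    {
        static void Main()
        {
            // Vista / Server 2008 and later only; has no effect on 2003.
            using (var key = Registry.LocalMachine.CreateSubKey(
                @"SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps"))
            {
                key.SetValue("DumpFolder", @"C:\Dumps",
                    RegistryValueKind.ExpandString);
                key.SetValue("DumpCount", 10, RegistryValueKind.DWord);
                key.SetValue("DumpType", 2, RegistryValueKind.DWord); // full dump
            }
        }
    }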
Any ideas for somehow keeping these dumps around so I can analyze them? The obstacle is the users: I've resigned myself to their stubbornness and do not expect them to run any other application or do anything special.
Thanks for any advice!
You could opt for an automatic solution. I believe there are multiple options at your disposal for detecting whether you're hung.
One would be to use SendMessageTimeout (pay attention to SMTO_ABORTIFHUNG as one of the fuFlags values) from a separate thread in your app. Once you have determined that the main thread is not responding, you can save a dump file wherever you want.
There's also an IsHungAppWindow() (user32.dll), available since Windows 2000.
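A rough sketch of that watchdog idea - the five-second poll, the dump location, and the choice of full-memory dumps are arbitrary; MiniDumpWriteDump comes from dbghelp.dll, which ships with Windows:

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Runtime.InteropServices;
    using System.Threading;
    using Microsoft.Win32.SafeHandles;

    static class HangWatchdog
    {
        [DllImport("user32.dll")]
        static extern bool IsHungAppWindow(IntPtr hWnd);

        [DllImport("dbghelp.dll", SetLastError = true)]
        static extern bool MiniDumpWriteDump(
            IntPtr hProcess, uint processId, SafeFileHandle hFile,
            int dumpType, IntPtr exceptionParam, IntPtr userStreamParam,
            IntPtr callbackParam);

        const int MiniDumpWithFullMemory = 2;

        // Call from the UI thread at startup, passing the main window
        // handle and a writable dump path.
        public static void Start(IntPtr mainWindow, string dumpPath)
        {
            var t = new Thread(() => Watch(mainWindow, dumpPath));
            t.IsBackground = true;
            t.Start();
        }

        static void Watch(IntPtr mainWindow, string dumpPath)
        {
            while (true)
            {
                Thread.Sleep(5000);
                if (!IsHungAppWindow(mainWindow))
                    continue;

                // The UI thread is hung, but this watchdog thread still
                // runs, so it can dump its own process before the user
                // closes the app and WER throws the evidence away.
                using (var proc = Process.GetCurrentProcess())
                using (var fs = new FileStream(dumpPath, FileMode.Create))
                {
                    MiniDumpWriteDump(proc.Handle, (uint)proc.Id,
                        fs.SafeFileHandle, MiniDumpWithFullMemory,
                        IntPtr.Zero, IntPtr.Zero, IntPtr.Zero);
                }
                return; // one dump per run is enough
            }
        }
    }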
