SQL Management Studio is sending and receiving data for no reason - traffic

I was programming and had left SQL Server Management Studio open when I noticed an unusual amount of data transfer by SQL Management Studio (mostly outgoing). It was continuously sending at about 16 KB/s and receiving at about 6 KB/s.
If it were all downloads I would have assumed it was an update, but why is it sending data when the program is not even being used? The total amount sent was 47 MB by the time I noticed it.
Does this indicate a problem, like being hacked, or is it normal? If it is normal, what is the reason?
Thanks in advance.

Do you have any add-ins installed? Some add-ins check for new versions. Try deactivating them.
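If you want to see where the data is actually going before you blame an add-in, here is a rough Python sketch using the psutil package (the process name Ssms.exe and the need for elevation are my assumptions; this is just a diagnostic aid, not an official tool) that lists the open network connections of any running SSMS process:

    # List the remote endpoints of any running SSMS process.
    # Assumes psutil is installed and the process is named "Ssms.exe";
    # on Windows you may need an elevated prompt to inspect another user's process.
    import psutil

    for proc in psutil.process_iter(["pid", "name"]):
        if (proc.info["name"] or "").lower() == "ssms.exe":
            try:
                for conn in proc.connections(kind="inet"):
                    remote = f"{conn.raddr.ip}:{conn.raddr.port}" if conn.raddr else "(no remote)"
                    print(f"PID {proc.info['pid']}: {conn.status} -> {remote}")
            except psutil.AccessDenied:
                print(f"PID {proc.info['pid']}: access denied - try running elevated")

Looking up the remote addresses should tell you whether the traffic is going to a Microsoft update/telemetry endpoint or somewhere unexpected.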

Related

Power BI using 90+% CPU while doing... what?

I have a 9 MB PBIX containing small tables and one table with 250k rows. Data is imported from various xlsx & JSON sources. The machine is Windows 10 Pro, 2.6 GHz, 64-bit, 16 GB RAM.
On the Power BI service online the performance is OK, but on the desktop it's practically unworkable. In Task Manager I can see that it is using 7 MB of memory but almost 100% CPU, half an hour after opening, while on a blank tab with no visualisations.
I don't understand what it is doing in the background and how I can improve the situation.
There is the 'Allow data preview to download in the background' setting, but I think this is only relevant to the query editor? Would clearing the cache or changing cache settings help?
I am aware of Performance Analyzer and the query diagnostics tools, but neither seems relevant since the queries are not refreshing and there are no visualisations loading.
Am at a bit of a loss - any help greatly appreciated.
Thanks
Update: Having disabled parallel load and background refresh in the Data load settings, I noticed that the issue finally seemed to go away (though not immediately). Eventually, when reopening the pbix, mashup containers did not appear and CPU and memory were no longer being hammered. Then at some point Power BI got stuck and had to be closed, and the problem reappeared even though the data load settings were still disabled. Restarting the machine seemed to clear the problem once again.
It seems, then, that some zombie processes can persist through closing and reopening the application. Has anyone else noticed this, can confirm or refute it, or suggest what is going on or any steps to avoid/prevent it? It's very annoying!
Thanks
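In case it helps, here is a small Python sketch along the same lines (again using psutil; the process names PBIDesktop.exe and Microsoft.Mashup.Container* are assumptions based on what Task Manager typically shows) that looks for mashup containers left running after Power BI Desktop has closed and terminates them:

    # Terminate Microsoft.Mashup.Container* processes that outlived Power BI Desktop.
    # Process names are assumptions based on what Task Manager typically shows.
    import psutil

    procs = [(p, (p.info["name"] or "")) for p in psutil.process_iter(["name"])]
    pbi_running = any(name.lower() == "pbidesktop.exe" for _, name in procs)

    if not pbi_running:
        for proc, name in procs:
            if name.lower().startswith("microsoft.mashup.container"):
                try:
                    print(f"Terminating orphaned mashup container PID {proc.pid} ({name})")
                    proc.terminate()
                except psutil.NoSuchProcess:
                    pass  # it exited on its own

Running something like this (or simply ending the containers in Task Manager) after closing the pbix may save a full restart of the machine.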
I have also noticed the same issue: when opening a 5 MB pbix file, Power BI eats 12 GB of memory and 90%+ CPU. Power BI Desktop is a poorly managed product by Microsoft.

DB synchronization in Visual Studio 2015 hangs

I tried to synchronize the database in Visual Studio 2015 after creating a project, an EDT, an Enum, and a Table in order to create a new screen in Dynamics 365.
When I synchronize, it stops partway through the schema-checking process. Although the DB synchronization seems to run fine for the first few minutes, it always stops during this process, as described below.
Log Details:
"Schema has not changed between new table 'DPT_TableDT' and old table
'DPT_TableDT' with table id '3997'. Returning from
ManagedSyncTableWorker.ExecuteModifyTable() Syncing Table Finished:
DPT_TableDT. Time elapsed: 0:00:00:00.0010010"
Could you tell me how to solve this issue?
Thanks in advance.
Full database synchronization log
DB Sync Log
From what you've described and also shown in your screenshot, this does not look like an error; it is simply normal X++ and Dynamics AX/365FO behaviour.
When you say that it "doesn't have a problem for the first few minutes", I'm guessing you're just not being patient enough. Full database syncs generally take 10-30 minutes, but can take less or more time depending on a variety of factors, such as how much horsepower your development environment has, how many changes are being synced, etc. I would wait at least one hour before considering the possibility that the sync engine has errors (or even run it overnight and see what information it has for you in the morning).
The message you've posted from the log ("Schema has not changed") isn't an error message; it is just an informational log from the sync engine. It is simply letting you know that the table did not have any changes to propagate to SQL Server.
Solution: Run the sync overnight and post a screenshot of the results or the error list window in Visual Studio.
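If you want to scan the log yourself before posting it, here is a quick Python sketch (the log path is only a placeholder) that skips the informational "Schema has not changed" / "Syncing Table Finished" chatter and surfaces lines that actually look like errors:

    # Pull suspect lines out of a full DB sync log; informational entries are skipped.
    # The path is a placeholder - point it at your exported sync log.
    INFO_MARKERS = ("schema has not changed", "syncing table finished")
    ERROR_MARKERS = ("error", "exception", "failed", "severity")

    with open(r"C:\temp\dbsync.log", encoding="utf-8", errors="replace") as log:
        for lineno, line in enumerate(log, start=1):
            lowered = line.lower()
            if any(marker in lowered for marker in INFO_MARKERS):
                continue  # normal sync-engine progress messages, not errors
            if any(marker in lowered for marker in ERROR_MARKERS):
                print(f"{lineno}: {line.rstrip()}")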
I've recently been stymied by a long-running application where Access 2003 replicas refused to synchronize. The message returned was "not enough memory". This was on machines running Windows 10. The only way I was able to force synchronization was to move the replicas onto an old machine still running Windows 98 with Office XP, which allowed synchronization and conflict resolution. When I moved the synchronized files back to the Windows 10 machine, they still would not synchronize.
I finally had to create a blank database and link to a replica, then use make-table queries to select only data fields to create new tables. I was then able to create new replicas that would synchronize.
From this I've come to suspect the following:
Something in Windows 10 has changed and caused the problem with synchronizing/conflict resolution.
Something in the hidden/protected fields added to the replica sets is seen as a problem under Windows 10 that is not a problem under Windows 98.
One thing I noticed is that over the years the number of replicas in the synchronizing list had grown to over 900 sets, but the only way to clear the table was to create a new clean database.
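For anyone who wants to script that make-table workaround instead of doing it through the Access UI, here is a rough Python/pyodbc sketch (the driver name is the standard Access ODBC driver; the paths and table/field names are placeholders I made up) that copies only the data fields from a replica table into a fresh, non-replicated database:

    # Copy only the real data fields out of a replica table into a clean database,
    # leaving the hidden replication fields (s_GUID, s_Lineage, ...) behind.
    # Paths and table/field names below are placeholders.
    import pyodbc

    BLANK_DB = r"C:\temp\clean.accdb"      # new, empty database
    REPLICA = r"C:\temp\old_replica.mdb"   # one of the old replicas

    conn = pyodbc.connect(
        r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=" + BLANK_DB
    )
    cur = conn.cursor()

    # Make-table query: list the data fields explicitly so nothing replication-related comes along.
    cur.execute(
        "SELECT CustomerID, CustomerName, Balance "
        "INTO Customers_Clean "
        f"FROM Customers IN '{REPLICA}'"
    )
    conn.commit()
    conn.close()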

How can I repair a FoxPro .dbf database that gives an error message?

I have a client who received an error message when their FoxPro application started. FoxPro v9.0 error message: "Unrecognized Database Format".
My client reported that he had a power failure and the problem appeared after he rebooted the PC.
It appears that the FoxPro database corruption was caused by the power failure.
Any suggestions, friends...
Here is a link to VFP code that repairs corrupt table headers in VFP. Provided you have VFP, it basically does a low-level file open, checks the total file size, detects the record size, and resets the record count. I have a client who still has VFP apps running, and once in a great while they too hit whatever power outage, surge, or network connectivity issue corrupts a header. They run this routine and are back in business...
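In case that link ever goes stale, the same idea can be sketched in a few lines of Python against the standard DBF header layout (the path is a placeholder, and you should only ever run this against a copy of the damaged table):

    # Recompute the record count from the file size and write it back into the DBF header.
    # Work on a copy of the table, not the original.
    import os
    import struct

    path = r"C:\data\customers.dbf"  # placeholder

    with open(path, "r+b") as f:
        header = f.read(32)
        header_size, record_size = struct.unpack("<HH", header[8:12])  # offsets 8-9 and 10-11
        file_size = os.path.getsize(path)

        computed = (file_size - header_size) // record_size  # ignores the optional 0x1A EOF byte
        stored = struct.unpack("<I", header[4:8])[0]          # record count lives at offsets 4-7
        print(f"stored record count = {stored}, computed from file size = {computed}")

        if computed != stored:
            f.seek(4)
            f.write(struct.pack("<I", computed))

This only repairs a wrong record count; if the field descriptors are damaged or the file is truncated, you still need the full VFP routine or a restore from backup.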
In addition to fixing the data or restoring a backup, you should make sure there's an Uninterruptible Power Supply for every computer on which your application runs. Then, in the event of a power outage, your customers can shut down gracefully and avoid this kind of problem.

Intel Parallel Studio warning on VS2005

I installed Intel Parallel Studio and used it, but when I ran my application I got a message in the Output window of Visual Studio 2005 that said:
“Data collection has stopped after reaching the configured limit of 10
MB of raw data. The target will continue to run, but no further data
will be collected. The data collection stopped since the data size
limit of (10 Mb) is reached. The application is running but no data is
collected.”
Does anyone have any idea why this message appears, and does it mean that if I keep running my application no further data will be collected? I am not sure how to configure the settings, as this is the first time I am using such a tool to find performance hotspots.
It would be nice to know which version of Parallel Studio you are using and which collection you are running (since different collectors use different settings).
Assuming you are running Amplifier collection:
Click "Project Properties" button (on toolbar or under "Start" button on left side panel).
In "Target" tab expand "Advanced" section.
There is "Collection data limit" option. You could increase it as appropriate. In latest versions of Parallel Studio it is increased to 500 MB by default.
You could set data limit to zero for unlimited collection. I don't recommend this since you may run out of disc space quickly. This is also the reason why this option is here - unlimited collection produce huge amounts of raw data.

Entity Framework takes 30 minutes to generate a model

No matter what I do or which DB I connect to, EF seems to take around 15-30 minutes to generate a model. While it's doing this, I get a "Visual Studio is busy" message in the system tray.
The first DB I connected to was complex, with a lot of data and lots of views, so I thought maybe that was why. Now I have a local DB file with one table that has 2 columns and 3 rows. It still takes the same amount of time.
Eventually VS crashes and restarts. Has anyone had this problem before? Any ideas?
I've looked at Resource Monitor; devenv.exe does not seem to be consuming any resources that would indicate it's doing a lot of work.
What credentials do you use to access your DB? Let's try ruling out domain latency issues first. If this is at work, can you verify the same flow on 2 separate machines using the same domain credentials? This wouldn't apply if you were using local creds.
Turns out I had a Visual Studio DVD in my DVD drive. Each time I did something with EF, VS started to read from the disc. I have no idea what or why, but the little LED would blink. Once I ejected the disc, everything ran fine. Go figure!
