I have created a simple console app and run it from PerfView via Run Command -> PerfMonTest.exe.
I get the log file and can see the app's process. It is expensive as expected (99% CPU), but when I try to drill down into the expensive methods, they are not shown in the list of expensive methods.
Is there something I can do to make them visible?
Here is the view when I select the process. I would expect CallExpensive and CallCheap in the list:
Selecting the Main method doesn't give me the chance to drill further into the called methods.
Here is the app:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace PerfMonTest
{
    class Program
    {
        static void Main(string[] args)
        {
            for (int i = 0; i <= 2000; i++)
            {
                CallExpensive(1000);
                CallCheap(1000);
                CallCheap(400);
            }
        }

        public static void CallExpensive(int expense)
        {
            for (int i = 0; i <= expense; i++)
            {
                DateTime checkTime = DateTime.Now;
                string val = "10" + i.ToString();
            }
        }

        public static void CallCheap(int expense)
        {
            for (int i = 0; i <= expense; i++)
            {
                int j = 2;
            }
        }
    }
}
From the screenshots it looks like you didn't load symbols. If you do, you'll see that most of the time is spent in DateTime.Now.
If you click on Main in the By Name view, you'll go to the Callers view, which will tell you which methods called Main. If you want to drill into what methods Main is calling, you need to go to the Callees view. If you do that, you'll see the break down of what Main calls.
However, in this particular case the logic of CallExpensive and CallCheap is so simple that the methods will be inlined (in release mode). Because the methods are inlined, they don't appear as part of the calls made from Main; the code has been folded into Main itself.
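If your goal for this experiment is simply to see the two methods as separate frames, one option (a hedged sketch, not something PerfView itself requires) is to mark them with [MethodImpl(MethodImplOptions.NoInlining)] so the JIT keeps them as real calls, for example:

using System.Runtime.CompilerServices;

// Prevent the JIT from folding this method into Main so it shows up
// as its own frame in the profiler (for diagnostic purposes only).
[MethodImpl(MethodImplOptions.NoInlining)]
public static void CallExpensive(int expense)
{
    for (int i = 0; i <= expense; i++)
    {
        DateTime checkTime = DateTime.Now;
        string val = "10" + i.ToString();
    }
}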
You can verify that the methods are inlined by attaching a debugger after the methods have run and looking at the method descriptors for the type. Here's the output I got:
0:004> !dumpmt -md 004737c0
EEClass: 00471278
Module: 00472e94
Name: ConsoleApplication1.Program
mdToken: 02000002
File: C:\temp\ConsoleApplication1\ConsoleApplication1\bin\Release\ConsoleApplication1.exe
BaseSize: 0xc
ComponentSize: 0x0
Slots in VTable: 8
Number of IFaces in IFaceMap: 0
--------------------------------------
MethodDesc Table
Entry MethodDe JIT Name
72064a00 71d66728 PreJIT System.Object.ToString()
72058830 71d66730 PreJIT System.Object.Equals(System.Object)
72058400 71d66750 PreJIT System.Object.GetHashCode()
72051790 71d66764 PreJIT System.Object.Finalize()
0047c01d 004737b8 NONE ConsoleApplication1.Program..ctor()
004d0050 00473794 JIT ConsoleApplication1.Program.Main(System.String[])
0047c015 004737a0 NONE ConsoleApplication1.Program.CallExpensive(Int32)
0047c019 004737ac NONE ConsoleApplication1.Program.CallCheap(Int32)
The fact that CallExpensive and CallCheap have NONE listed in the JIT column indicates that they were inlined (or not called at all, but that's not the case here).
My sample app looks as follows.
using System;
using System.Collections.Generic;
using System.Xml;

class Program
{
    static List<XmlNode> memList = new List<XmlNode>();

    static void Main(string[] args)
    {
        Console.WriteLine("Press any key to start");
        Console.ReadKey();
        CauseHighCPU();
    }

    static public void CauseHighCPU()
    {
        string str = string.Empty;
        for (int i = 0; i < 100000; i++)
        {
            str += " Hello World";
        }
    }
}
I expect the string concatenation to cause high CPU. When I profile the application using PerfView, this is loud and clear.
I am trying to do similar analysis using Visual Studio 2017 Diagnostics Hub. Below is what its CPU usage tab shows.
Its Call Tree view is not showing any call to Concat, although there is some External Code here.
This makes me think that it may be related to something missing in my configuration. As you can see here, Enable Just My Code is unchecked.
Also, I'm not sure if it's related, but here are the symbol settings.
Any thoughts on what could be wrong that causes VS not to show the root cause of the high CPU usage?
You should not look in the Debugging options but in the Performance Tools options, and disable "Just my code" there:
I use the libpd4unity package to communicate with Pure Data. I receive a bang from Pure Data with LibPD.Bang. On a bang event I play a sound with FMOD.
The problem is that I receive bangs frequently, for example once every 500 ms, but the event doesn't fire after a consistent number of frames; it's usually off by a frame more or less.
Is there a solution for this problem, for example a framerate-independent event? I want to know whether an event (delegate) in Unity3D is framerate independent or not.
There is a tempo for playing each sound, and being off by just one frame ruins the rhythm.
I need to sync the sounds played by each separate bang.
Regarding your question on whether delegates are dependent on or independent of Unity's framerate, there's no straight answer: it depends on how your delegates are called. Are they called from a coroutine, or are they executed on a thread?
Coroutines are not framerate independent; they are executed in Unity's loop.
The following script should shine a light on the difference between handling delegates in coroutines and in threads.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System.Threading;

public class DelegatesAndFramerate : MonoBehaviour {

    delegate void MyDelegate();
    MyDelegate myDelegate1; // done with coroutines
    MyDelegate myDelegate2; // done with threads

    Thread thread;
    bool threadDone = false;

    private int frameCount = 0;
    private int delegate1CallCount = 0;
    private int delegate2CallCount = 0;
    private int callerLoopsCount_coroutine = 0;
    private int callerLoopsCount_thread = 0;

    void Start () {
        myDelegate1 += Elab1;
        myDelegate2 += Elab2;

        StartCoroutine(CallerCoroutine());

        thread = new Thread(new ThreadStart(CallerThread));
        thread.Start();
    }

    void Update()
    {
        frameCount++;
    }

    void Elab1()
    {
        delegate1CallCount++;
    }

    void Elab2()
    {
        delegate2CallCount++;
    }

    IEnumerator CallerCoroutine()
    {
        while(true)
        {
            callerLoopsCount_coroutine++;
            myDelegate1();
            yield return null;
        }
    }

    void CallerThread()
    {
        while(!threadDone)
        {
            callerLoopsCount_thread++;
            myDelegate2();
        }
    }

    void OnDestroy()
    {
        Debug.Log("Frame Count: " + frameCount);
        Debug.Log("Delegate Call Count (Coroutine): " + delegate1CallCount);
        Debug.Log("Delegate Call Count (Thread): " + delegate2CallCount);
        Debug.Log("Caller Loops Count (Coroutine): " + callerLoopsCount_coroutine);
        Debug.Log("Caller Loops Count (Thread): " + callerLoopsCount_thread);

        threadDone = true;
        thread.Join();
    }
}
If you attach it to a GameObject and let Unity play for a few seconds, you'll see that the number of times the delegate was called from the coroutine equals the number of executed frames, whilst the number of times it was called from the thread is far higher.
I have experience interfacing with software similar to Pure Data, and I think what you need is a (rather typical) setup: handle all your delegates on a dedicated thread, build a queue of commands for Unity, and digest that queue in Unity's Update.
Not knowing libpd specifically, this might not be the best practice for this case, but it is a widely used approach: basically the producer-consumer pattern.
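As a rough illustration only (the class name BangDispatcher, the commented-out LibPD.Bang subscription, and the receiver parameter are assumptions, since I can't confirm the exact libpd4unity delegate signature here), a minimal producer-consumer sketch could look like this:

using System.Collections.Concurrent;
using UnityEngine;

public class BangDispatcher : MonoBehaviour
{
    // Thread-safe queue filled by the library callback and drained on the main thread.
    private readonly ConcurrentQueue<string> bangs = new ConcurrentQueue<string>();

    void OnEnable()
    {
        // Hypothetical subscription; check the actual libpd4unity delegate name/signature.
        // LibPD.Bang += OnBang;
    }

    void OnDisable()
    {
        // LibPD.Bang -= OnBang;
    }

    // Producer: may be invoked off Unity's main thread by the library.
    void OnBang(string receiver)
    {
        bangs.Enqueue(receiver);
    }

    // Consumer: runs once per frame on Unity's main thread.
    void Update()
    {
        while (bangs.TryDequeue(out string receiver))
        {
            // Trigger the FMOD sound for this bang here.
            Debug.Log("Bang received from: " + receiver);
        }
    }
}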
Based on the example GUITextScript.cs, libpd only requires you to subscribe to the right delegates. You don't have control over when these are executed; the library does. So if you keep having this issue, it's probably worth submitting a bug report to the developers.
I'm currently debugging using the NetBeans debugger. I was just wondering if there is a feature where I can set an inactive breakpoint on a variable so that the breakpoint is only activated when the value of the variable changes.
I guess it depends on what language you are using. Invoke Debug from the main menu and then New Breakpoint. There you can select e.g. Java as the language and you will have plenty of ways/conditions to specify the breakpoint (including field modifications).
Here is a simple illustration:
Sample code:
public class JavaApplication4 {

    public static int something = 2;

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            System.out.println(i); // this is line #19
            something++;
        }
    }
}
And here are 2 screenshots with possible breakpoints:
I wanted to help out #mark in a question where he is asking for an API to dump many objects from a .NET crash dump file.
So I wrote the following code using mdbgeng, but unfortunately it fails with a NotImplementedException when trying to enumerate the objects in memory.
using System;
using System.Runtime.InteropServices;
using Microsoft.Samples.Debugging.CorDebug;
using Microsoft.Samples.Debugging.CorDebug.Utility;
using Microsoft.Samples.Debugging.MdbgEngine;
using Microsoft.Samples.Debugging.Native;

namespace DumpHeapFromDotNet
{
    class Program
    {
        static void Main(string[] args)
        {
            var libraryProvider = new LibraryProvider();
            var dumpReader = new DumpReader(args[0]);
            var dataTarget = new DumpDataTarget(dumpReader);
            foreach (var module in dumpReader.EnumerateModules())
            {
                var clrDebugging = new CLRDebugging();
                Version actualVersion;
                ClrDebuggingProcessFlags flags;
                CorProcess proc;
                var hr = (HResult) clrDebugging.TryOpenVirtualProcess(module.BaseAddress, dataTarget, libraryProvider,
                    new Version(4, 6, int.MaxValue, int.MaxValue), out actualVersion, out flags, out proc);
                if (hr < 0)
                {
                    switch (hr)
                    {
                        case HResult.CORDBG_E_NOT_CLR:
                            Console.WriteLine(module.FullName + " is not a .NET module");
                            break;
                        case HResult.CORDBG_E_LIBRARY_PROVIDER_ERROR:
                            Console.WriteLine(module.FullName + " could not provide library");
                            break;
                        case HResult.CORDBG_E_UNSUPPORTED_DEBUGGING_MODEL:
                        case HResult.CORDBG_E_UNSUPPORTED_FORWARD_COMPAT:
                            break;
                        default:
                            Marshal.ThrowExceptionForHR((int)hr);
                            break;
                    }
                }
                else
                {
                    var objects = proc.Objects; // NotImplementedException
                    foreach (CorObjectValue o in objects)
                    {
                        // TODO: Write details of object to file here
                    }
                }
            }
            Console.ReadLine();
        }
    }
}
The dump I was using is a .NET 4.6.1076.0 dump with full memory (you can pass a file name as an argument):
0:000> lm vm clr
[...]
ProductVersion: 4.6.1076.0
FileVersion: 4.6.1076.0 built by: NETFXREL3STAGE
0:000> .dumpdebug
----- User Mini Dump Analysis
MINIDUMP_HEADER:
Version A793 (61B1)
NumberOfStreams 11
Flags 1806
0002 MiniDumpWithFullMemory
0004 MiniDumpWithHandleData
0800 MiniDumpWithFullMemoryInfo
1000 MiniDumpWithThreadInfo
I doubt it has something to do with missing mscordacwks or similar, since I just created the dump on the same machine with the same .NET framework as I used for this sample.
Is it really not implemented yet, or am I doing something else wrong?
I'm currently messing with MDbg and I have tried to check the described behavior on a real application, not on a dump. I received exactly the same NotImplementedException. Looking at the documentation on MSDN, I found confirmation that this method is not implemented.
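As a side note (not part of the MDbg answer above): if all you need is to enumerate heap objects from a dump programmatically, the ClrMD library (the Microsoft.Diagnostics.Runtime NuGet package) does implement this. A minimal sketch, assuming ClrMD 2.x and a full-memory dump:

using System;
using Microsoft.Diagnostics.Runtime;

class DumpHeapWithClrMd
{
    static void Main(string[] args)
    {
        // Open the crash dump passed as the first argument.
        using DataTarget dataTarget = DataTarget.LoadDump(args[0]);

        // Create a runtime for the first CLR found in the dump.
        using ClrRuntime runtime = dataTarget.ClrVersions[0].CreateRuntime();

        // Walk every object on the managed heap and print address, size and type.
        foreach (ClrObject obj in runtime.Heap.EnumerateObjects())
        {
            Console.WriteLine($"{obj.Address:x16} {obj.Size,10} {obj.Type?.Name}");
        }
    }
}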
When debugging in an IDE, how does the IDE know how to calculate a watch value without changing the environment (writing to a file, writing a result to a DB)?
Your observation cannot be generalized. An IDE typically makes changes during debugging, especially if a property has a side effect.
Visual Studio
The following C# code:
using System;

namespace EvaluateChangesValue
{
    class Program
    {
        static void Main()
        {
            var program = new Program();
            Console.WriteLine(program.Value);
            Console.ReadLine();
            Console.WriteLine(program.Value);
            Console.ReadLine();
        }

        private int member;

        private int Value => member++;
    }
}
Set a breakpoint at the first ReadLine(), then add program.Value to the watch window and see how the value gets increased due to the member++ statement.
Eclipse
In Java and Eclipse, it's a bit harder to demonstrate the same thing, for these reasons:
In Java it's clearer whether you call a method or access a field.
You need the "Expressions" window, which is not available by default.
Re-evaluation needs user interaction.
The code is similar to C#:
public class Program {

    public static void main(String[] args)
    {
        Program p = new Program();
        System.out.println(p.member);
        System.console().readLine();
        System.out.println(p.member);
        System.console().readLine();
    }

    private int member;

    // Side effect: every evaluation of getMember() increments member.
    public int getMember()
    {
        return member++;
    }
}
And the screenshot: