Retrieving data from arbitrary memory addresses using VSIX

I am developing a debugger plugin for Visual Studio using VSIX. My problem is that I have an array of addresses but cannot point IDebugMemoryBytes2 at a particular one of them. Using DEBUG_PROPERTY_INFO I get the array of addresses, and I am also able to set the context to a particular address in the array using the Add function of IDebugMemoryContext2. However, I need to use the ReadAt function of IDebugMemoryBytes2 to retrieve n bytes from a specified address.
Does anyone have any idea how to retrieve data from arbitrary memory addresses?
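For reference, the call I am trying to make looks roughly like this (memoryBytes and memoryContext stand in for the IDebugMemoryBytes2 and IDebugMemoryContext2 instances I already have; byteCount is the number of bytes I want):
byte[] buffer = new byte[byteCount];
uint bytesRead = 0;
uint unreadable = 0;
// ReadAt starts reading at the given memory context and fills the buffer.
int hr = memoryBytes.ReadAt(memoryContext, (uint)buffer.Length, buffer, out bytesRead, ref unreadable);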
I am adding more information on the same:
I am using the Microsoft Visual Studio Extensibility package to build my debugger plugin. The application I am trying to debug contains a double pointer, and I need to read the values it points to so I can process them further in my plugin. There is no way to display all of the pointed-to variables in the watch window, so I cannot get the DEBUG_PROPERTY_INFO for all of the arrays the pointer refers to. That is the problem I am trying to address: I have no way to read the memory pointed to by this double pointer.
As for events in the debuggee process: since the plugin is for debugging variables, I set a breakpoint at a place where I know this pointer is populated and then return to the plugin for further evaluation.
As a start, I managed to get the starting address of each of the arrays. But I am still not able to read x bytes of memory from each of these starting addresses.
I.e., if I have:
int **ptr = // pointing to something
then the addresses are present in ptr[0], ptr[1], ptr[2], etc., but I need to go to each of these addresses and fetch the memory block it points to.
For this, after much searching, I found this link: https://macropolygon.wordpress.com/2012/12/16/evaluating-debugged-process-memory-in-a-visual-studio-extension/ which seems to address exactly my issue.
To use the expression evaluator functions, I need an IDebugStackFrame2 object to get the ExpressionContext. To get that object, I need to register for debug events (specifically the breakpoint event) in the debuggee process. As described in the post, I did:
public int Event(IDebugEngine2 engine, IDebugProcess2 process,
    IDebugProgram2 program, IDebugThread2 thread,
    IDebugEvent2 debugEvent, ref Guid riidEvent, uint attributes)
{
    if (debugEvent is IDebugBreakpointEvent2)
    {
        this.thread = thread;
    }
    return VSConstants.S_OK;
}
And my registration is like:
private void GetCurrentThread()
{
    uint cookie;
    DBGMODE[] modeArray = new DBGMODE[1];

    // Get the Debugger service.
    debugService = Package.GetGlobalService(typeof(SVsShellDebugger)) as IVsDebugger;
    if (debugService != null)
    {
        // Register for debug events.
        // Assumes the current class implements IDebugEventCallback2.
        debugService.AdviseDebuggerEvents(this, out cookie);
        debugService.AdviseDebugEventCallback(this);

        debugService.GetMode(modeArray);
        modeArray[0] = modeArray[0] & ~DBGMODE.DBGMODE_EncMask;
        if (modeArray[0] == DBGMODE.DBGMODE_Break)
        {
            GetCurrentStackFrame();
        }
    }
}
But this doesn't seem to invoke the Event function at all, so I am not sure how to get the IDebugThread2 object.
I also tried the other way suggested in the same post:
namespace Microsoft.VisualStudio.Debugger.Interop.Internal
{
    [InterfaceType(ComInterfaceType.InterfaceIsIUnknown), Guid("1DA40549-8CCC-48CF-B99B-FC22FE3AFEDF")]
    public interface IDebuggerInternal11
    {
        [DispId(0x6001001f)]
        IDebugThread2 CurrentThread
        {
            [return: MarshalAs(UnmanagedType.Interface)]
            [MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
            get;
            [param: In, MarshalAs(UnmanagedType.Interface)]
            [MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
            set;
        }
    }
}
private void GetCurrentThread()
{
    debugService = Package.GetGlobalService(typeof(SVsShellDebugger)) as IVsDebugger;
    if (debugService != null)
    {
        IDebuggerInternal11 debuggerServiceInternal = (IDebuggerInternal11)debugService;
        thread = debuggerServiceInternal.CurrentThread;
        GetCurrentStackFrame();
    }
}
But with this method I think I am missing something, though I am not sure what. After the line
IDebuggerInternal11 debuggerServiceInternal = (IDebuggerInternal11)debugService;
executes, inspecting the debuggerServiceInternal variable shows a System.Security.SecurityException for CurrentThread and CurrentStackFrame (so the next line obviously crashes). I googled the error and found I was missing the ComImport attribute on the interface. After adding it, I now get a System.AccessViolationException: "Attempted to read or write protected memory. This is often an indication that other memory is corrupt."
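That is, the declaration from above now begins like this (the rest of the interface is unchanged):
[ComImport]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown), Guid("1DA40549-8CCC-48CF-B99B-FC22FE3AFEDF")]
public interface IDebuggerInternal11 { ... }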
I am also new to C# programming, so it is a bit difficult to grasp many things in a short time. I am lost as to how to proceed now.
Any help in the same or suggestions to try another way to achieve my objective will be greatly appreciated.
Thanks a lot,
Esash

After much searching, and since I am short of time and need a quick solution, it seems for now that the quickest way to solve this problem is to hack the .natvis files to display all the elements of the pointer, and then fall back on the IDebug* interface methods to access and retrieve the memory context for each pointer element. But after posting the same question on the MSDN forums, I think the proper answer to this problem is the one given by Greggs:
"For reading memory, if you want a fast way to do this, you just want the raw memory, and the debug engine of the target is the normal Visual Studio native engine (in other words, you aren't creating your own debug engine), I would recommend referencing Microsoft.VisualStudio.Debugger.Engine. You can then use DkmStackFrame.ExtractFromDTEObject to get the DkmStackFrame object. This will give you the DkmProcess object and you can call DkmProcess.ReadMemory to read memory from the target."
After trying hard to understand how to implement this, I found you can accomplish it simply by calling DkmProcess.GetProcesses() and doing a ReadMemory on the process returned.
That raises the question: what if more than one process is returned? I tried attaching many processes to the current debugging process, and attaching many processes to the debuggee process as well, but found that DkmProcess.GetProcesses() returns only the one from which I regained control, not the other processes I am attached to. I am not sure this holds in all cases, but it worked this way for me, and it might work for anyone with similar requirements.
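For anyone wanting the shape of that call, here is a minimal sketch (it assumes a reference to Microsoft.VisualStudio.Debugger.Engine, as Greggs suggested, and that the debuggee is in break mode; address and byteCount are placeholders):
using Microsoft.VisualStudio.Debugger;

byte[] ReadTargetMemory(ulong address, int byteCount)
{
    // In my tests this returned only the debuggee I regained control from.
    DkmProcess[] processes = DkmProcess.GetProcesses();
    if (processes.Length == 0)
        return null; // no debug session in break mode

    byte[] buffer = new byte[byteCount];
    // ReadMemory fills the buffer from the target process's address space.
    processes[0].ReadMemory(address, DkmReadMemoryFlags.None, buffer);
    return buffer;
}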
Accomplishing this with .natvis files means using IndexListItems for VS2013 and earlier, and CustomListItems for VS2015 and later; to make the output look prettier, use the "no-derived" attribute. There is no way to make the Synthetic tag display only the base address of each variable, so the above attribute is the best way to go, but it is not available in VS2013 and earlier. (The base address might get displayed, but for anyone who wants to go beyond just displaying contents and also access the memory context of each pointer element, the Synthetic tag is not the right tool.)
I hope this helps other developers who have struggled with the IDebug* interfaces like I did. For reference, here is the link to the MSDN forum thread where my question was answered:
https://social.msdn.microsoft.com/Forums/en-US/030cef1c-ee79-46e9-8e40-bfc59f14cc34/how-can-i-send-a-custom-debug-event-to-my-idebugeventcallback2-handler?forum=vsdebug
Thanks.

Related

Get notified when TaggedValue of an element changes in Enterprise Architect

I am new to creating add-ins for Enterprise Architect and I have this problem:
I have a diagram with elements that have tagged values. I want to be notified when the value of a tagged value changes, and see the new value.
I saw that the event EA_OnElementTagEdit is available, but I can't seem to get it to trigger. I also saw that the tagged value has to be of type AddinBroadcast, but I can't seem to make that work. What am I missing?
I will put below a sample of my code:
// Creating the tagged value:
EA.TaggedValue ob3 = (EA.TaggedValue)NewElement.TaggedValues.AddNew("Responsible", "val");
ob3.Value = EEPROMBlocks.ElementAt(index).Responsible;
ob3.SetAttribute("Type", "AddinBroadcast");
ob3.Update();

// Event method (never fires):
public override void EA_OnElementTagEdit(EA.Repository Repository, long ObjectID, ref string TagName, ref string TagValue, ref string TagNotes)
You are not missing anything; this is simply not possible. The only way around it is the OnContext... event, where you temporarily store the state of one element and check whether a tag has changed when the context changes. I would not recommend that, since it involves a lot of superfluous DB accesses.
Send a feature request (if you are an optimistic guy). Alternatively, think of another way around this.
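For illustration only, a rough sketch of that workaround using EA_OnContextItemChanged as the context-change event (hypothetical code; as said above, not recommended because of the extra repository reads):
private Dictionary<string, string> lastTags = new Dictionary<string, string>();
private string lastElementGuid;

public override void EA_OnContextItemChanged(EA.Repository Repository, string GUID, EA.ObjectType ot)
{
    if (ot != EA.ObjectType.otElement) return;

    // Diff the previously selected element's tags against the cached copy.
    if (lastElementGuid != null)
    {
        EA.Element previous = Repository.GetElementByGuid(lastElementGuid);
        foreach (EA.TaggedValue tv in previous.TaggedValues)
        {
            string old;
            if (lastTags.TryGetValue(tv.Name, out old) && old != tv.Value)
            {
                // The tag changed since we last saw this element: react here.
            }
        }
    }

    // Cache the newly selected element's tags for the next comparison.
    lastTags.Clear();
    EA.Element current = Repository.GetElementByGuid(GUID);
    foreach (EA.TaggedValue tv in current.TaggedValues)
        lastTags[tv.Name] = tv.Value;
    lastElementGuid = GUID;
}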

Invalid Named Property

We call Microsoft Exchange to set an extended property, which in our case is a unique GUID:
microsoft.exchange.webservices.data.core.exception.service.remote.ServiceResponseException: An internal server error occurred. The operation failed., Invalid named property
It had been working great until now, when some of our users started facing the above issue:
val uId = getUniqueId()
val emailExtendedPropDef = new ExtendedPropertyDefinition(uId, "uniqueId", MapiPropertyType.String)
try {
  email.setExtendedProperty(emailExtendedPropDef, uId.toString)
  email.sendAndSaveCopy()
} catch {
  case e: Exception =>
    error(s"Exception in setting extended property for user $from", e)
    throw e
}
Trying to find the root cause of the issue, we are also wondering whether it might be related to throttling of extended properties on Microsoft Exchange (we are not sure how to prove that it is indeed throttling). Any help pointing us in the right direction would be greatly appreciated.
Our use case: when a customer wants to reply, we need to retrieve that particular email so it can be included in the user's reply. Currently we use the UID to achieve that.
we have been using the code as per the documentation here
https://learn.microsoft.com/en-us/previous-versions/office/developer/exchange-server-2010/dd633654(v%3Dexchg.80)
and also the documentation here
https://github.com/OfficeDev/ews-java-api/wiki/Getting-Started-Guide#extended-properties
Update: As per the comments, we understand that we should treat the extended property like a column definition and keep updating that same column, but we couldn't figure out how to achieve this. Any code samples pointing us in the right direction would be of great help.
Latest update: We have deleted some of the ExtendedPropertyDefinitions but are still facing the same "invalid named property" error. Could someone please point us in the right direction?
Is it safe to say that getUniqueId returns a different GUID on each call? If so, then that is the problem. Think of the GUID for an extended prop as a namespace. The Exchange store limits the number of custom extended props to something like 32k per mailbox, so you are likely hitting that limit. But aside from that, the main reason for creating an extended property is so that you can refer to it later; if you are basically discarding the namespace each time, you are leaving orphaned props on items. Without understanding your particular scenario, I can only say that the GUID should truly be thought of as a namespace: choose one for your app/company/scenario and hard-code it. Then create all the named props you want within that namespace. For instance, "MyProp/String" in GUID namespace 1 is a different property than "MyProp/String" in GUID namespace 2.
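A minimal sketch of that fix (shown in C# with the EWS Managed API; the ews-java-api call is analogous; the GUID below is a made-up example namespace, so generate your own once and hard-code it):
using System;
using Microsoft.Exchange.WebServices.Data;

// One fixed namespace for the whole app (example value; pick your own once).
static readonly Guid PropertySetId = new Guid("c11ff724-aa03-4555-9952-8fa248a11c3e");

// One property definition, reused for every message.
static readonly ExtendedPropertyDefinition UniqueIdProp =
    new ExtendedPropertyDefinition(PropertySetId, "uniqueId", MapiPropertyType.String);

// Each message still gets its own value; only the definition is shared:
// email.SetExtendedProperty(UniqueIdProp, Guid.NewGuid().ToString());
// email.SendAndSaveCopy();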

Just how global are ColdFusion variables not declared using "var"?

I'm using ColdFusion MX 8. I recently had a situation where variables seemed to be "swapping" between sessions. I found some information about entire sessions swapping, but that was not the case here: it was just one variable being swapped, not the entire session. My code snippet follows:
var idArray = ListToArray(arguments.event.getArg("itemIDs"));
var oItemDetail = 0;
var oItem = 0; // Inserting this line seems to have fixed the error.
var i = 0;

for (i = 1; i lte ArrayLen(idArray); i = i + 1) {
    // Log Spot #1 – cflog idArray[i] and arguments.event.getArg("statusNotes")
    oItem = getItemService().getItem(idArray[i]);
    oItemDetail = getItemService().getItemDetail();
    oItemDetail.setItemID(oItem.getItemID());
    oItemDetail.setStatusNotes(arguments.event.getArg("statusNotes"));
    getItemService().saveItem(oItem);
    getItemService().saveItemDetail(oItemDetail);
}
// getItem and getItemDetail just call getTransfer().get()
// saveItem and saveItemDetail just call getTransfer().save()
For example, at Log Spot #1, idArray[i] might have been "1" and the statusNotes event arg "abc".
But should another person, in another session, using another login, in another place, another browser, etc., use the function at exactly the same time with idArray[i] = "2" and statusNotes = "def", then Item Detail "abc" might instead attach to Item "2", and Item Detail "def" to Item "1".
The fact that the variables logged at Log Spot #1 are correct, but swapped in the database, points to these lines of code as the suspects.
The problem went away once I declared "var oItem" at the top.
So I guess I'm a little shocked by this revelation. I would have assumed that not declaring my local variables would mean another variable with the same name, in another function but in the same session, might get overwritten. But this seems to be some sort of internal memory issue. The variables are not even being overwritten, but rather swapped, between sessions!
I was wondering if anyone had any similar experiences and could shed some light on this?
Unvar'd variables become, in effect, private variables of the object that contains them, which causes two problems:
They are shared (accessed and written to) by all functions within that same component
They live beyond the life of the function call
When you var a variable, it becomes local to that function: only that function can use it, and it lives only as long as that function does.
In your case, the problem doesn't really have anything to do with sessions, other than that being the persistent scope where you happen to store the data from these functions.
You said
I would assume that not declaring my local variables would mean another variable, with the same name, in another function, but in the same session might get overwritten.
But it would be more accurate to say
I would assume that not declaring my local variables would mean another variable, with the same name, in another function, but in the same OBJECT might get overwritten.
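As a loose illustration in C# (a hypothetical Worker class; the ColdFusion component behaves analogously), an unvar'd variable acts like an instance field, while a var'd one acts like a method local:
public class Worker
{
    private string status; // like an unvar'd variable: one copy per object, shared by all callers

    public void UnsafeSave(string input)
    {
        status = input;       // two simultaneous callers can overwrite each other here
        Persist(status);      // ...so this may persist the *other* caller's value
    }

    public void SafeSave(string input)
    {
        string local = input; // like a var'd variable: one copy per call
        Persist(local);
    }

    private void Persist(string value) { /* write to the database */ }
}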

Reliable and efficient way to handle Azure Table Batch updates

I have an IEnumerable<T> that I'd like to add to an Azure Table in the most efficient way possible, given that every batch write has to be directed to the same PartitionKey, with a limit of 100 rows per write.
Does anyone want to take a crack at implementing this the "right" way, as referenced in the TODO section? I'm not sure why MSFT didn't finish the task here.
Also, I'm not sure if error handling will complicate this, or what the correct way to implement it is. Here is the code from the Microsoft patterns & practices team for the Windows Azure "Tailspin Toys" demo:
public void Add(IEnumerable<T> objs)
{
    // TODO: Optimize: The Add method that takes an IEnumerable parameter should
    // check the number of items in the batch and the size of the payload before
    // calling the SaveChanges method with the SaveChangesOptions.Batch option.
    // For more information about batches and Windows Azure table storage, see the
    // section, "Transactions in aExpense," in Chapter 5, "Phase 2: Automating
    // Deployment and Using Windows Azure Storage," of the book, Windows Azure
    // Architecture Guide, Part 1: Moving Applications to the Cloud, available at
    // http://msdn.microsoft.com/en-us/library/ff728592.aspx.
    TableServiceContext context = this.CreateContext();
    foreach (var obj in objs)
    {
        context.AddObject(this.tableName, obj);
    }

    var saveChangesOptions = SaveChangesOptions.None;
    if (objs.Distinct(new PartitionKeyComparer()).Count() == 1)
    {
        saveChangesOptions = SaveChangesOptions.Batch;
    }

    context.SaveChanges(saveChangesOptions);
}

private class PartitionKeyComparer : IEqualityComparer<TableServiceEntity>
{
    public bool Equals(TableServiceEntity x, TableServiceEntity y)
    {
        return string.Compare(x.PartitionKey, y.PartitionKey, true,
            System.Globalization.CultureInfo.InvariantCulture) == 0;
    }

    public int GetHashCode(TableServiceEntity obj)
    {
        return obj.PartitionKey.GetHashCode();
    }
}
Well, we (the patterns & practices team) just optimized for showing the things we considered useful. The code above is not really a general-purpose library, but rather a specific method for the sample that uses it.
At the time we thought that adding the extra error handling would not add much, and we decided to keep it simple, but... we might have been wrong.
Anyway, if you follow the link in the //TODO, you will find another section of a previous guide we wrote that talks a bit more about error handling in "complex" storage transactions (not in the ACID sense though, as transactions "à la DTC" are not supported in Windows Azure Storage).
Link is this: http://msdn.microsoft.com/en-us/library/ff803365.aspx
The limitations are listed in more detail there:
Only one instance of each entity should be present in the batch
Max 100 entities or a 4 MB payload
Same PartitionKey (which is handled in the code: notice that Batch is only specified if there's a single PartitionKey)
etc.
Adding some extra error handling should not overcomplicate things too much, but it depends on the type of app you are building on top of this and whether you prefer to handle this higher or lower in your app stack. In our example, the app would never expect more than 100 entities, so it would simply bubble the exception up if that situation happened (because it would be truly exceptional). Same with the total size. The use cases implemented in the app make it impossible to have the same entity twice in the same collection, so again, that should never happen (and if it did, it would simply throw).
All "entity group transactions" limitations are documented here: http://msdn.microsoft.com/en-us/library/dd894038.aspx
Let us know how it goes! I'm also interested to know if other pieces of the guide were useful for you.

How to quickly find a sharepoint document library by id?

Given the SPList.ID and a site collection (or an SPWeb with subwebs), how do I quickly find the document library with the given ID?
I can recursively enumerate all webs and perform a web.Lists[guid] lookup on each of them, but there might be thousands of subwebs in my case, and I'm looking for a realtime solution.
If there is no way to do this quickly, any other suggestions on how to uniquely identify a document library? I could store the full path (url), but the identification will be publicly visible and I don't feel very comfortable giving away our exact SharePoint document structure like that. Should I resort to maintaining a manual ID <-> library mapping in a separate list?
I vote for the manual ID -> URL pair matching in a top-level, well-known list that's visible only to the elevated privileges account.
Since you are storing the ListID somewhere, you may as well also store the WebID. Lists are always opened via their containing SPWeb, so if you go to:
http://toplevel/_layouts/ListGeneralSettings.aspx?ID={GUID1}      // OK
http://toplevel/sub1/_layouts/ListGeneralSettings.aspx?ID={GUID1} // won't work (same GUID)
Having the WebID and ListID you can simply:
// Dispose the SPSite as well, so it doesn't leak.
using (SPSite site = new SPSite("http://url"))
using (SPWeb subweb = site.OpenWeb(new Guid("{000...}")))
{
    SPList list = subweb.Lists.GetList(new Guid("{111...}"), true);
    // list logic
}
MS does not support this :)...
But take a look at this for giggles: http://weblogs.sqlteam.com/jhermiz/archive/2007/08/15/60288.aspx
If you have MOSS Search available, it might help, depending on the lag between these lists getting created and your needing to search for them. You could probably map the list ID as a managed property and do a quick search for list objects with the ID in question.
For lots of classes of problems, search seems to be the fastest way to rip through huge sets of data. In fact, if this approach worked for you, you wouldn't even need to know the site collection up front. I don't have access to any of my MOSS environments at the moment, so I can't verify that this works.
