Anyone using IXCLRDataProcess, GetRuntimeNameByAddress, and ICLRDataTarget to get managed symbols? - debugging

The purpose is to get a mixed callstack. For the managed symbols, I use IXCLRDataProcess / GetRuntimeNameByAddress to resolve the corresponding managed frames. I modified an existing project to produce a mixed callstack of our own process; it works on x86, but it fails on x64. After debugging, I found the issue is located in IXCLRDataProcess.
This is how I use IXCLRDataProcess:
HANDLE processHandle = OpenProcess(PROCESS_ALL_ACCESS, FALSE, ProcessId);
if (processHandle == NULL)
    return NULL;

BOOL isWow64 = FALSE;
BOOL result = IsWow64Process(processHandle, &isWow64);

DiagCLRDataTarget* dataTarget = new DiagCLRDataTarget(ProcessId, processHandle, isWow64, debugNative); // new DnCLRDataTarget;
ICLRDataTarget* target = static_cast<ICLRDataTarget*>(dataTarget);

HMODULE accessDll = LoadLibraryW(L"C:\\Windows\\Microsoft.NET\\Framework64\\v4.0.30319\\mscordacwks.dll");
PFN_CLRDataCreateInstance entry = (PFN_CLRDataCreateInstance)GetProcAddress(accessDll, "CLRDataCreateInstance");

void* iface = nullptr;
HRESULT hr = entry(__uuidof(IXCLRDataProcess), target, &iface); // fails here
if (FAILED(hr)) {
    std::cout << "error: " << GetLastError() << std::endl; // where we get error 230 (0xE6): The pipe state is invalid.
}
m_clrDataProcess = static_cast<IXCLRDataProcess*>(iface);
Then I use m_clrDataProcess to resolve the symbol:
hr = m_clrDataProcess->GetRuntimeNameByAddress(clrAddr, 0, maxSize - 1, &nameLen, buffer, &displacement);
Because m_clrDataProcess is null, we can't resolve a symbol from the frame address. Does anyone use these APIs, or does anyone have some advice?
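One thing that stands out (my assumption, not something confirmed in the post): the code checks IsWow64Process but then always loads the 64-bit DAC from Framework64. The DAC's bitness must match both the target runtime and the process loading it, so a minimal sketch of picking the path would be:

// Sketch only: choose the DAC matching the target CLR's bitness.
// Caveat: LoadLibrary can only load a DLL of the loader's own bitness,
// so a 32-bit inspector cannot load the Framework64 DAC and vice versa.
const wchar_t* dacPath = isWow64
    ? L"C:\\Windows\\Microsoft.NET\\Framework\\v4.0.30319\\mscordacwks.dll"    // 32-bit target
    : L"C:\\Windows\\Microsoft.NET\\Framework64\\v4.0.30319\\mscordacwks.dll"; // 64-bit target
HMODULE accessDll = LoadLibraryW(dacPath);
if (accessDll == NULL)
    std::cout << "LoadLibraryW failed: " << GetLastError() << std::endl;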

Related

Blue screen when rewriting packets at DATAGRAM_DATA layer in WFP

I've been trying to modify outgoing DNS packets via the DATAGRAM_DATA layer in WFP; however, I get blue-screen errors when rewriting the destination IP in the outgoing packet. What am I doing wrong?
I admit I found the parameters for FwpsInjectTransportSendAsync a bit confusing and was unsure exactly what to put in the sendParams arg, though I think what I have looks right.
RtlIpv4StringToAddressExW(
    L"1.1.1.1", // hard-coding the new (rewritten) dns server for now
    FALSE,
    &sin4.sin_addr,
    &sin4.sin_port);
RtlIpv4StringToAddressExW(
    L"8.8.8.8", // hard-coding the original dns server for now
    FALSE,
    &origSin4.sin_addr,
    &origSin4.sin_port);

if ((Direction == FWP_DIRECTION_OUTBOUND) && (PacketInjectionState == FWPS_PACKET_NOT_INJECTED) && (RemotePort == 53) && (RemoteAddress == origSin4.sin_addr.S_un.S_addr))
{
    UINT32 IpHeaderSize = inMetaValues->ipHeaderSize;
    UINT32 TransportHeaderSize = inMetaValues->transportHeaderSize;
    UINT64 endpointHandle = inMetaValues->transportEndpointHandle;

    PNET_BUFFER NetBuffer = NET_BUFFER_LIST_FIRST_NB((PNET_BUFFER_LIST)layerData);
    NdisRetreatNetBufferDataStart(NetBuffer, IpHeaderSize + TransportHeaderSize, 0, NULL);

    PNET_BUFFER_LIST NetBufferList = NULL;
    NTSTATUS Status = FwpsAllocateCloneNetBufferList(layerData, NULL, NULL, 0, &NetBufferList);
    if (!NT_SUCCESS(Status))
    {
        return;
    }
    NdisAdvanceNetBufferDataStart(NetBuffer, IpHeaderSize + TransportHeaderSize, FALSE, NULL);
    if (!NetBufferList)
    {
        return;
    }

    NetBuffer = NET_BUFFER_LIST_FIRST_NB(NetBufferList);
    PIPV4_HEADER IpHeader = NdisGetDataBuffer(NetBuffer, sizeof(IPV4_HEADER), NULL, 1, 0);

    // Rewriting the dest ip
    IpHeader->DestinationAddress = sin4.sin_addr.S_un.S_addr;
    // Updating the IP checksum
    UpdateIpv4HeaderChecksum(IpHeader, sizeof(IPV4_HEADER));

    // not 100% sure the sendParams argument is setup correctly, the docs are slightly unclear
    FWPS_TRANSPORT_SEND_PARAMS sendParams = {
        .remoteAddress = (UCHAR*)IpHeader->DestinationAddress,
        .remoteScopeId = inMetaValues->remoteScopeId,
        .controlData = inMetaValues->controlData,
        .controlDataLength = inMetaValues->controlDataLength,
        .headerIncludeHeader = inMetaValues->headerIncludeHeader,
        .headerIncludeHeaderLength = inMetaValues->headerIncludeHeaderLength
    };

    Status = FwpsInjectTransportSendAsync(g_InjectionHandle, NULL, endpointHandle, 0, &sendParams, AF_INET, inMetaValues->compartmentId, NetBufferList, DriverDatagramDataInjectComplete, NULL);
    if (!NT_SUCCESS(Status))
    {
        FwpsFreeCloneNetBufferList(NetBufferList, 0);
    }

    classifyOut->actionType = FWP_ACTION_BLOCK;
    classifyOut->rights &= ~FWPS_RIGHT_ACTION_WRITE;
    classifyOut->flags |= FWPS_CLASSIFY_OUT_FLAG_ABSORB;
}
Two things stand out to me, both in the sendParams.
First, remoteAddress is incorrect. It needs to be a pointer to the address, so it should be (UCHAR*)&IpHeader->DestinationAddress.
Second, FwpsInjectTransportSendAsync() is asynchronous, so any parameters you pass to it need to stay valid until it completes, which may be after your calling function returns. Typically you allocate a context structure that contains sendParams and deep copies of the relevant members (remoteAddress and controlData). You pass this as the context to the completion routine, where you free it.
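A minimal sketch of that pattern (the INJECT_CONTEXT name and pool tag are illustrative, not from the original code; error handling trimmed):

// Hypothetical context kept alive for the duration of the async injection.
typedef struct _INJECT_CONTEXT
{
    FWPS_TRANSPORT_SEND_PARAMS sendParams;
    UINT32 remoteAddress; // deep copy; sendParams.remoteAddress points here
} INJECT_CONTEXT;

INJECT_CONTEXT* ctx = (INJECT_CONTEXT*)ExAllocatePoolWithTag(NonPagedPoolNx, sizeof(*ctx), 'jnIC');
if (ctx == NULL)
{
    FwpsFreeCloneNetBufferList(NetBufferList, 0);
    return;
}
RtlZeroMemory(ctx, sizeof(*ctx));
ctx->remoteAddress = IpHeader->DestinationAddress;
ctx->sendParams.remoteAddress = (UCHAR*)&ctx->remoteAddress;
ctx->sendParams.remoteScopeId = inMetaValues->remoteScopeId;
// controlData would need a deep copy too if it is non-NULL.

Status = FwpsInjectTransportSendAsync(g_InjectionHandle, NULL, endpointHandle, 0,
    &ctx->sendParams, AF_INET, inMetaValues->compartmentId, NetBufferList,
    DriverDatagramDataInjectComplete, ctx);

// Completion routine: everything can be freed once injection finishes.
void NTAPI DriverDatagramDataInjectComplete(void* context, NET_BUFFER_LIST* nbl, BOOLEAN dispatchLevel)
{
    UNREFERENCED_PARAMETER(dispatchLevel);
    FwpsFreeCloneNetBufferList(nbl, 0);
    ExFreePoolWithTag(context, 'jnIC');
}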

Aging values in a queue: Best use of Windows timers?

I have an std::set that contains unique values. I have an std::queue that holds the same values, in order to age the values in the std::set.
I'd like to use a timer to determine when to pop a value from the queue and then erase the value from the set.
The timer is created/started every time data is added to an empty set/queue.
If data is added to a non-empty set/queue, no change is made to the timer.
The timer would fire every X milliseconds to execute a function.
The function would pop a value from the queue then erase that value from the set.
If the set/queue is now empty the timer would stop.
If the set/queue is not empty, no change is made to the timer.
This program runs in Windows 10.
Does this approach make sense? Is there a better, more efficient, or simpler way to age the data?
I've read the docs on Using Timer Queues so I see how the queue and the timers are created and destroyed. What I don't see is a recommendation for starting/stopping timers.
Should I be creating a new TimerQueueTimer to wait for X milliseconds once, run the func and then create a new TimerQueueTimer if the set/queue is not empty?
Should I instead create a single TimerQueueTimer to run periodically X milliseconds but delete it once the set/queue is empty?
Is there a 3rd technique I should use instead?
Here's my example code.
using unsignedIntSet = std::set<std::uint32_t>;
using unsignedIntQ = std::queue<std::uint32_t>;

unsignedIntQ agingQ;
unsignedIntSet agingSet;
HANDLE gDoneEvent = NULL;
HANDLE hTimer = NULL;
HANDLE hTimerQueue = NULL;

VOID CALLBACK ageTimer(PVOID lpParam, BOOLEAN TimerOrWaitFired)
{
    if (!agingQ.empty())
    {
        auto c = agingQ.front();
        agingSet.erase(c);
        agingQ.pop();
        if (!agingQ.empty())
        {
            // rerun CreateTimerQueueTimer() here?
        }
    }
    SetEvent(gDoneEvent);
}
int createTimerForAgingQ()
{
    // create timer if it doesn't already exist
    if (gDoneEvent == NULL)
    {
        gDoneEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
        if (gDoneEvent == NULL)
        {
            std::cerr << "CreateEvent() error: " << GetLastError() << std::endl;
            return -1;
        }
        hTimerQueue = CreateTimerQueue();
        if (hTimerQueue == NULL)
        {
            std::cerr << "CreateTimerQueue() error: " << GetLastError() << std::endl;
            return -1;
        }
        if (!CreateTimerQueueTimer(&hTimer, hTimerQueue, (WAITORTIMERCALLBACK)ageTimer, NULL, 500, 0, WT_EXECUTEONLYONCE))
        {
            std::cerr << "CreateTimerQueueTimer() error: " << GetLastError() << std::endl;
            return -1;
        }
    }
    return 0;
}
void addUnique(unsigned char* buffer, int bufferLen)
{
    // hash value
    auto h = hash(buffer, bufferLen);
    // test insert into set
    auto setResult = agingSet.emplace(h);
    if (setResult.second)
    {
        // enqueue into historyQ
        agingQ.emplace(h);
        if (!gDoneEvent) createTimerForAgingQ();
    }
}
Research suggests that CreateTimerQueue/CreateTimerQueueTimer may not be the way to go; the newer thread pool API (CreateThreadpoolTimer) is the recommended replacement.
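A minimal sketch of the threadpool-timer approach, reusing the question's agingQ/agingSet globals (a one-shot timer that re-arms itself while work remains; note the callback runs on a pool thread, so real code must synchronize access to the containers):

#include <windows.h>

PTP_TIMER gAgingTimer = NULL;

static void armTimer(PTP_TIMER timer, DWORD ms)
{
    // negative due time = relative, in 100-ns units
    LARGE_INTEGER due;
    due.QuadPart = -((LONGLONG)ms * 10000);
    FILETIME ft;
    ft.dwLowDateTime = due.LowPart;
    ft.dwHighDateTime = due.HighPart;
    SetThreadpoolTimer(timer, &ft, 0, 0); // period 0 = one-shot
}

VOID CALLBACK ageCallback(PTP_CALLBACK_INSTANCE, PVOID, PTP_TIMER timer)
{
    if (!agingQ.empty())
    {
        auto c = agingQ.front();
        agingQ.pop();
        agingSet.erase(c);
    }
    if (!agingQ.empty())
        armTimer(timer, 500); // still work to do: re-arm
    // otherwise fall out; addUnique() re-arms on the next insert
}

// In addUnique(), when the first element lands in an empty queue:
//   if (!gAgingTimer) gAgingTimer = CreateThreadpoolTimer(ageCallback, NULL, NULL);
//   armTimer(gAgingTimer, 500);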

Node.js native addon: how to modify the value of a C++ object's member contained in another native addon

First some context: I have two Node.js native addons. The first one contains a static C++ object "Conn" exposed using a v8 object internal field, as described in the embedder's guide:
NAN_METHOD(cobject) {
    auto isolate = Isolate::GetCurrent();
    Conn* p = &ConnHolder::connection;
    Local<ObjectTemplate> conn_templ = ObjectTemplate::New(isolate);
    conn_templ->SetInternalFieldCount(1);
    Local<Object> obj = conn_templ->NewInstance();
    obj->SetInternalField(0, External::New(isolate, p));
    info.GetReturnValue().Set(obj);
}
In my other native addon, I'm loading the first one from C++ code, and I expose a function called test containing two calls on the Conn object: callToDummyFunction() and callToFunctionWithMemberAccess().
// persistent handle for the main addon
static Persistent<Object> node_main;

void Init(v8::Local<v8::Object> exports, v8::Local<v8::Object> module) {
    Isolate* isolate = Isolate::GetCurrent();
    HandleScope scope(isolate);
    // get `require` function
    Local<Function> require = module->Get(String::NewFromUtf8(isolate, "require")).As<Function>();
    Local<Value> args[] = { String::NewFromUtf8(isolate, "path_to_my_addon\\addon.node") };
    Local<Object> main = require->Call(module, 1, args).As<Object>();
    node_main.Reset(isolate, main);
    NAN_EXPORT(exports, test);
}
NAN_METHOD(test) {
    Isolate* isolate = Isolate::GetCurrent();
    HandleScope scope(isolate);
    // get local handle from persistent
    Local<Object> main = Local<Object>::New(isolate, node_main);
    // get `cobject` function to get pointer from internal field
    Local<Function> createdMain = main->Get(String::NewFromUtf8(isolate, "cobject")).As<Function>();
    Local<Object> callResult = createdMain->Call(main, 0, nullptr).As<Object>();
    Local<Object> self = info.Holder();
    Local<External> wrap = Local<External>::Cast(self->GetInternalField(0));
    void* ptr = wrap->Value();
    // from there I get a pointer to my Conn object
    Conn* conn = static_cast<Conn*>(ptr);
    conn->callToDummyFunction();
    conn->callToFunctionWithMemberAccess();
    info.GetReturnValue().Set(10);
}
Then I launch a Node.js session using "node", load the first and second addon with two require calls, and finally call the method test on the second addon.
The method test is executed; the call to "callToDummyFunction" succeeds, but the call to "callToFunctionWithMemberAccess" crashes and also kills the node session.
OK, so what is the difference between "callToDummyFunction" and "callToFunctionWithMemberAccess"?
bool Conn::callToDummyFunction()
{
    cout << "callToDummyFunction" << endl;
    return true;
}

bool Conn::callToFunctionWithMemberAccess()
{
    cout << "callToFunctionWithMemberAccess " << someIntMember << endl;
    return true;
}
So it seems that accessing a member of the Conn object generates an error and crashes the node session. The node session does not output any message before crashing.
Can someone tell me why? And/or: how do I get an error message?
I'm answering my own question. In fact, I was being stupid, but at least my stupidity made me learn some strange C++ things.
So, first of all, the stupid answer: instead of using my returned object, I was using a totally unrelated object :(
Local<Object> callResult = createdMain->Call(main, 0, nullptr).As<Object>();
Local<Object> self = info.Holder();
Local<External> wrap = Local<External>::Cast(self->GetInternalField(0));
Why use
Local<Object> self = info.Holder();
instead of callResult? The correct code would be
Local<Object> callResult = createdMain->Call(main, 0, nullptr).As<Object>();
Local<External> wrap = Local<External>::Cast(callResult->GetInternalField(0));
What I learned from this stupid mistake:
read your code carefully (obvious)
calling a member function on a null pointer actually works if the function doesn't access any members (see the sketch below; maybe it's obvious for an experienced C++ dev)
native addons live in their own VM; static fields aren't shared between VMs.
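A standalone illustration of that second point (undefined behavior, so it only "works" by accident of how most compilers generate member calls):

#include <iostream>

struct Conn {
    int someIntMember = 42;
    bool dummy()  { std::cout << "dummy\n"; return true; } // never touches *this
    bool access() { return someIntMember > 0; }            // reads through this
};

int main() {
    Conn* p = nullptr;
    p->dummy();  // UB, but typically runs: `this` is passed yet never dereferenced
    p->access(); // UB, typically crashes: reads this->someIntMember near address 0
}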

Wrong PIDL got from CIDA when dragging a Control Panel item

I'm working on a drag-and-drop problem and trying to get the PIDLs of the items being dragged from the Windows shell to my application.
The code below gets correct PIDLs if the dragged item is 'My Computer' or 'Control Panel' itself, but it doesn't work when the dragged item is an item inside the 'Control Panel'.
What's wrong with my code?
#define GetPIDLFolder(pida) (LPCITEMIDLIST)(((LPBYTE)pida)+(pida)->aoffset[0])
#define GetPIDLItem(pida, i) (LPCITEMIDLIST)(((LPBYTE)pida)+(pida)->aoffset[i+1])
STGMEDIUM medium;
UINT fmt = RegisterClipboardFormat(CFSTR_SHELLIDLIST);
FORMATETC fe = {fmt, NULL, DVASPECT_CONTENT, -1, TYMED_HGLOBAL};
HRESULT GETDATA_RESULT = pDataObject->GetData(&fe, &medium);
if (SUCCEEDED(GETDATA_RESULT))
{
    LPIDA pida = (LPIDA)GlobalLock(medium.hGlobal);
    LPCITEMIDLIST pidlFolder = GetPIDLFolder(pida);
    int n = pida->cidl; // the n is always correct
    if (n > 0)
    {
        LPCITEMIDLIST pidlItem = GetPIDLItem(pida, 0);
        // the pidlItem is wrong when the dragged object is an item in 'Control Panel'
    }
    GlobalUnlock(medium.hGlobal);
    ReleaseStgMedium(&medium); // release only when GetData succeeded
}
Any idea? Thanks, Zach
If I drag and drop Mouse, Network Connections, and Fonts, I get the following output in my test app:
0 Mouse | sfgao=4 hr=0
1 ::{20D04FE0-3AEA-1069-A2D8-08002B30309D}\::{21EC2020-3AEA-1069-A2DD-08002B30309D}\::{7007ACC7-3202-11D1-AAD2-00805FC1270E} | sfgao=20000004 hr=0
2 C:\WINDOWS\Fonts | sfgao=60000004 hr=0
The Network Connections and Fonts pidls can be converted to fully qualified shell paths while Mouse only returns a relative path/displayname.
This makes sense if you check the documentation for IShellFolder::GetDisplayNameOf:
...They do not guarantee that IShellFolder will return the requested form of the name. If that form is not available, a different one might be returned. In particular, there is no guarantee that the name returned by the SHGDN_FORPARSING flag will be successfully parsed by IShellFolder::ParseDisplayName. There are also some combinations of flags that might cause the GetDisplayNameOf/ParseDisplayName round trip to not return the original identifier list. This occurrence is exceptional, but you should check to be sure.
It is clear that when dealing with Control Panel items, you need to keep the PIDL around and only use GetDisplayNameOf for display strings in your UI.
(IShellLink::SetIDList on the Mouse pidl will create a working shortcut so the pidl is valid)
void OnDrop(IDataObject* pDO)
{
    STGMEDIUM medium;
    UINT fmt = RegisterClipboardFormat(CFSTR_SHELLIDLIST);
    FORMATETC fe = {fmt, NULL, DVASPECT_CONTENT, -1, TYMED_HGLOBAL};
    HRESULT hr = pDO->GetData(&fe, &medium);
    if (SUCCEEDED(hr))
    {
        LPIDA pida = (LPIDA)GlobalLock(medium.hGlobal);
        if (pida)
        {
            LPCITEMIDLIST pidlFolder = GetPIDLFolder(pida);
            for (UINT i = 0; i < pida->cidl; ++i)
            {
                LPCITEMIDLIST pidlItem = GetPIDLItem(pida, i);
                LPITEMIDLIST pidlAbsolute = ILCombine(pidlFolder, pidlItem);
                if (pidlAbsolute)
                {
                    IShellFolder* pParentSF;
                    hr = SHBindToParent(pidlAbsolute, IID_IShellFolder, (void**)&pParentSF, &pidlItem);
                    if (SUCCEEDED(hr))
                    {
                        STRRET str;
                        hr = pParentSF->GetDisplayNameOf(pidlItem, SHGDN_FORPARSING, &str);
                        if (SUCCEEDED(hr))
                        {
                            TCHAR szName[MAX_PATH];
                            hr = StrRetToBuf(&str, pidlItem, szName, MAX_PATH);
                            if (SUCCEEDED(hr))
                            {
                                SFGAOF sfgao = SFGAO_FOLDER|SFGAO_FILESYSTEM|SFGAO_HASSUBFOLDER|SFGAO_CANLINK;
                                hr = pParentSF->GetAttributesOf(1, &pidlItem, &sfgao);
                                TRACE(_T("%u %s | sfgao=%X hr=%X\n"), i, szName, sfgao, hr);
                                CreateTestShortcut(pidlAbsolute);
                            }
                        }
                        pParentSF->Release();
                    }
                    ILFree(pidlAbsolute);
                }
            }
            GlobalUnlock(medium.hGlobal);
        }
        ReleaseStgMedium(&medium);
    }
}
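CreateTestShortcut isn't shown in the answer; given the IShellLink::SetIDList remark above, a minimal sketch of what it might do (the save path is illustrative, and COM is assumed to be initialized):

void CreateTestShortcut(LPCITEMIDLIST pidlAbsolute)
{
    IShellLink* psl = NULL;
    if (SUCCEEDED(CoCreateInstance(CLSID_ShellLink, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&psl))))
    {
        psl->SetIDList(pidlAbsolute); // point the shortcut at the dropped item
        IPersistFile* ppf = NULL;
        if (SUCCEEDED(psl->QueryInterface(IID_PPV_ARGS(&ppf))))
        {
            ppf->Save(L"C:\\test\\pidl.lnk", TRUE); // illustrative path
            ppf->Release();
        }
        psl->Release();
    }
}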

Determining the default message store under Windows Mobile

The following piece of test code runs under Windows Mobile.
Its objective is to seek out the default message store so I can get the proper account name for programmatically composing an email.
IMAPISession* mapiSession;
HRESULT hr = S_OK;
MAPIInitialize(NULL);
IMAPITable* msgTable;
SRowSet* pRows;
IMsgStore* msgStore;

if (MAPILogonEx(0, NULL, NULL, 0, &mapiSession) != S_OK)
{
    // MessageBox(g_hWnd,_T("Failed to logon"),_T("Error"),0);
}
else
{
    SizedSPropTagArray(3, PropTagArr) = {3, {PR_DISPLAY_NAME,
                                             PR_ENTRYID,
                                             PR_DEFAULT_STORE}};
    hr = mapiSession->GetMsgStoresTable(MAPI_UNICODE, &msgTable);
    hr = msgTable->SetColumns((LPSPropTagArray)&PropTagArr, 0);
    if (!hr)
    {
        do
        {
            hr = msgTable->QueryRows(1, 0, &pRows);
            LPSPropValue lpProp;
            lpProp = &pRows->aRow[0].lpProps[0];
            // if(_tcscmp( lpProp->Value.LPSZ, _T("SMS") ) == 0 )
            //     break;
            lpProp = &pRows->aRow[0].lpProps[0];
            if (lpProp->ulPropTag == PR_DEFAULT_STORE)
                break;
            lpProp = &pRows->aRow[0].lpProps[1];
            if (lpProp->ulPropTag == PR_DEFAULT_STORE)
                break;
            lpProp = &pRows->aRow[0].lpProps[2];
            if (lpProp->ulPropTag == PR_DEFAULT_STORE)
                break;
            FreeProws(pRows);
            pRows = NULL;
        } while (!hr);
        hr = mapiSession->OpenMsgStore(0,
            pRows->aRow[0].lpProps[1].Value.bin.cb,
            (ENTRYID*)pRows->aRow[0].lpProps[1].Value.bin.lpb,
            NULL,
            MDB_NO_DIALOG | MAPI_BEST_ACCESS,
            &msgStore);
... BUT, it fails to get the PR_DEFAULT_STORE property on a Windows Mobile device. I'm guessing Microsoft didn't implement it accurately, and so lpProp->ulPropTag will never equal PR_DEFAULT_STORE; it's always 0000.
Has anyone had success getting PR_DEFAULT_STORE using MAPI under Windows Mobile?
Is there another way of determining the default message store?
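One thing worth ruling out (an assumption based on desktop MAPI behavior, not something confirmed for Windows Mobile): when a store can't return a requested column, MAPI keeps the property ID but swaps the type to PT_ERROR, so comparing the full tag against PR_DEFAULT_STORE always fails. Checking the ID and type separately makes that case visible:

// Sketch: inspect the third requested column (PR_DEFAULT_STORE).
LPSPropValue lpProp = &pRows->aRow[0].lpProps[2];
if (PROP_ID(lpProp->ulPropTag) == PROP_ID(PR_DEFAULT_STORE))
{
    if (PROP_TYPE(lpProp->ulPropTag) == PT_ERROR)
    {
        // the store didn't supply the property; lpProp->Value.err says why
    }
    else if (lpProp->Value.b) // PR_DEFAULT_STORE is PT_BOOLEAN
    {
        // this row is the default store
    }
}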
