How to drive my credential provider with CredUIPromptForWindowsCredentials - winapi

I've been working on a credential provider and debugging it through logging. I recently learned about the CredUIPromptForWindowsCredentials() API, which can invoke a credential provider from somewhere other than the logon screen or a Remote Desktop connection. So far the only way I can get my credential provider to display is to set the last parameter to CREDUIWIN_SECURE_PROMPT; I've tried various combinations of the flags with no luck. My CP works, that's not the problem. The problem is easier debugging. (Only once have I had to go to rescue mode, when I made my laptop unbootable. ;)
The trouble with the CREDUIWIN_SECURE_PROMPT flag is that I then lose access to the debugger: the secure desktop takes over the screen and I can't get back to my debugger. I suppose one workaround would be to remote-debug from another machine with this API, but I'd prefer not to hassle with that.
My CP is registered at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\Credential Providers\{55157584-ff0f-48ce-9178-a4e290901663}, and the key's default value is "MyCredProvider" (for this example). (GUID and provider name changed to protect the guilty. Also ignore the LsaString helper, where bad things would happen on a copy; I'm not copying it.)
Any way to get my custom CP without using the secure prompt?
#include <windows.h>
#include <tchar.h>
#include <iostream>
#include <memory>
#include <string>
#include <atlbase.h>
#include <Lmwksta.h>
#include <StrSafe.h>
#include <LMAPIbuf.h>
#include <LMJoin.h>
#include <wincred.h>
#include <NTSecAPI.h>

#pragma comment(lib, "netapi32.lib")
#pragma comment(lib, "credui.lib")
#pragma comment(lib, "secur32")

using namespace std;

// Fixed-capacity LSA_STRING wrapper. The buffer is owned by a unique_ptr,
// so a copy would leave two objects pointing at one buffer -- don't copy it
// (see the note above).
template <size_t SIZE = 256>
struct LsaString : public LSA_STRING
{
    LsaString()
    {
        MaximumLength = SIZE;
        Length = 0;
        Buffer = pBuf.get();
    }
    explicit LsaString(LPCSTR pWhat) : LsaString()
    {
        Init(pWhat);
    }
    void Init(LPCSTR pWhat)
    {
        size_t len = strlen(pWhat);
        if (len >= SIZE)
            throw length_error("LsaString: name too long");
        strcpy_s(Buffer, SIZE, pWhat);
        Length = (USHORT)len;
    }
    unique_ptr<char[]> pBuf = make_unique<char[]>(SIZE);
};

int _tmain(int argc, wchar_t* argv[])
{
    wstring me(_T("MYLOGING"));
    wstring caption(_T("Header"));
    wstring message(_T("Enter credentials for ..."));

    CREDUI_INFOW credInfo = {};
    credInfo.cbSize = sizeof(CREDUI_INFOW);
    credInfo.hwndParent = nullptr;
    credInfo.pszMessageText = message.c_str();
    credInfo.pszCaptionText = caption.c_str();
    credInfo.hbmBanner = nullptr;

    HANDLE lsaHandle = nullptr;
    if (LsaConnectUntrusted(&lsaHandle) != 0)
        return 1;

    LsaString<> lsaString("MyCredProvider");
    //LsaString<> lsaString(MICROSOFT_KERBEROS_NAME_A); // works ... as far as finding in LsaLookupAuth...
    //LsaString<> lsaString(NEGOSSP_NAME_A);            // works ... as far as finding in LsaLookupAuth...

    ULONG ulPackage = 0;
    NTSTATUS status = LsaLookupAuthenticationPackage(lsaHandle, &lsaString, &ulPackage);
    if (status != 0)
        wcout << L"LsaLookupAuthenticationPackage failed: 0x" << hex << status << endl;

    void* pBlob = nullptr;
    ULONG blobSize = 0;
    BOOL fSave = FALSE;
    DWORD dwFlags = CREDUIWIN_GENERIC; // CREDUIWIN_SECURE_PROMPT is the only value that shows my CP
    CredUIPromptForWindowsCredentialsW(&credInfo, 0, &ulPackage, nullptr, 0,
                                       &pBlob, &blobSize, &fSave, dwFlags);

    if (pBlob)
        CoTaskMemFree(pBlob);
    LsaDeregisterLogonProcess(lsaHandle);
    return 0;
}
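For debugging, whichever flags end up working, the returned blob can be decoded with CredUnPackAuthenticationBufferW (also in credui.lib); a minimal sketch with error handling elided:

if (pBlob)
{
    WCHAR user[CREDUI_MAX_USERNAME_LENGTH + 1] = {};
    WCHAR domain[CREDUI_MAX_DOMAIN_TARGET_LENGTH + 1] = {};
    WCHAR pass[CREDUI_MAX_PASSWORD_LENGTH + 1] = {};
    DWORD cchUser = ARRAYSIZE(user);
    DWORD cchDomain = ARRAYSIZE(domain);
    DWORD cchPass = ARRAYSIZE(pass);
    if (CredUnPackAuthenticationBufferW(0, pBlob, blobSize,
                                        user, &cchUser,
                                        domain, &cchDomain,
                                        pass, &cchPass))
    {
        wcout << L"user: " << user << L" domain: " << domain << endl;
        SecureZeroMemory(pass, sizeof(pass)); // don't leave the password lying around
    }
}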
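As for the actual question, here is a guess grounded only in the documented usage scenarios, not verified against this provider: outside the secure desktop, CredUIPromptForWindowsCredentials enumerates credential providers for the CPUS_CREDUI usage scenario, so a CP that returns E_NOTIMPL for that scenario (as many samples do) will never appear in the non-secure dialog, and CREDUIWIN_GENERIC explicitly requests the generic password tile instead. A sketch of the provider-side change (MyCredProvider is a stand-in class name, declarations per credentialprovider.h):

HRESULT MyCredProvider::SetUsageScenario(
    CREDENTIAL_PROVIDER_USAGE_SCENARIO cpus, DWORD dwFlags)
{
    switch (cpus)
    {
    case CPUS_LOGON:
    case CPUS_UNLOCK_WORKSTATION:
    case CPUS_CREDUI:   // accept the CredUI scenario so the non-secure
                        // prompt can enumerate this provider (assumption)
        return S_OK;
    default:
        return E_NOTIMPL;
    }
}

With that in place, dropping CREDUIWIN_GENERIC (passing 0, or CREDUIWIN_AUTHPACKAGE_ONLY together with the looked-up package) would be the combination to try.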

Related

How do I pass a variable as wchar_t** from a std::wstring in C++

I am using VS 2019, and the C++ Language Standard is set to Default, which I assume is C++11.
I have the following constructor of a class in a header file:
input_parser(int& argc, wchar_t** argv)
{
    for (auto i = 0; i < argc; ++i)
    {
        this->tokens_.emplace_back(argv[i]);
    }
}
To fill the method's argv parameter I am creating an array of wchar_t in the following manner:
std::wstring command_line = L"-m \"F-14RHV\" -s \"BIT|Flir\" -d";
auto buffer = new wchar_t[command_line.length() + 1];
wcsncpy_s(buffer, command_line.length() + 1, command_line.c_str(), command_line.length() + 1);
const auto inputs = input_parser(argc, &buffer); // &buffer is one wchar_t*, not an array of argc pointers
delete[] buffer;
Inside the constructor the first iteration (i == 0) is fine, but I get an access violation when i == 1.
Okay, so Some programmer dude was correct, and here is how I had to do it, after figuring out how to split the string by spaces!
Here is the final answer:
#include <string>
#include <sstream>
#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>

std::wstring text = L"-m \"F-14RHV\" -s \"BIT|Flir\" -d";
std::wistringstream iss(text);
std::vector<std::wstring> results((std::istream_iterator<std::wstring, wchar_t>(iss)),
                                  std::istream_iterator<std::wstring, wchar_t>());
Using a vector is going to make this process MUCH easier. I will probably change the other side to use a vector now.
Thanks for the help.
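If the existing input_parser(int&, wchar_t**) still needs to be called, one way to bridge from the vector is to build an array of pointers into the tokens; a sketch (non-const data() on std::wstring requires C++17, so &s[0] is used here for older standards):

std::vector<wchar_t*> ptrs;
for (auto& s : results)
    ptrs.push_back(&s[0]); // pointers stay valid while `results` is alive

int count = static_cast<int>(ptrs.size());
const auto inputs = input_parser(count, ptrs.data()); // int& needs an lvalue

This avoids the manual new[]/delete[] from the original attempt entirely.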

Writing a custom nss hosts module

I'm seeking to implement a custom nss module for the getent hosts lookup. Based on glibc's resolv/nss-dns/dns-host.c and gnunet's src/gns/nss/nss_gns.c I wrote the following minimal implementation that I hoped at least should write something to syslog - which it sadly doesn't.
#include <netdb.h>
#include <nss.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <errno.h>
#include <syslog.h>

#define _nss_lash_gethostbyname2_r _nss_lash_gethostbyname_r
#define _nss_lash_gethostbyname3_r _nss_lash_gethostbyname_r
#define _nss_lash_gethostbyname4_r _nss_lash_gethostbyname_r
#define _nss_lash_getcanonname_r _nss_lash_gethostbyaddr_r
#define _nss_lash_gethostbyaddr2_r _nss_lash_gethostbyaddr_r
#define _nss_lash_getnetbyname_r _nss_lash_gethostbyaddr_r
#define _nss_lash_getnetbyaddr_r _nss_lash_gethostbyaddr_r

/* One fake IPv4 address; AF_INET addresses are 4 bytes, so h_length
   must be 4 for callers to read the address correctly. */
static char default_addr[4] = {127, 0, 0, 1};
/* h_aliases and h_addr_list must point at valid, NULL-terminated arrays;
   statics keep this example minimal (a production module would carve
   them out of the caller-provided buffer instead). */
static char *no_aliases[] = { NULL };
static char *addr_list[] = { default_addr, NULL };

static enum nss_status
fill_result (const char *name, struct hostent *result,
             char *buffer, size_t buflen, int *errnop, int *h_errnop)
{
    size_t namelen = strlen(name) + 1;
    if (buflen < namelen) {
        *errnop = ERANGE;
        *h_errnop = NETDB_INTERNAL;
        return NSS_STATUS_TRYAGAIN;
    }
    memcpy(buffer, name, namelen); /* h_name must live in the caller's buffer */
    result->h_name = buffer;
    result->h_aliases = no_aliases;
    result->h_addrtype = AF_INET;
    result->h_length = sizeof(default_addr);
    result->h_addr_list = addr_list;
    *errnop = 0;
    *h_errnop = NETDB_SUCCESS;
    return NSS_STATUS_SUCCESS;
}

enum nss_status
_nss_lash_gethostbyname_r (const char *name, struct hostent *result,
                           char *buffer, size_t buflen, int *errnop,
                           int *h_errnop)
{
    syslog(LOG_WARNING, "%s", name); /* never pass user data as the format string */
    if (!strcmp(name, "lash")) {
        return NSS_STATUS_UNAVAIL;
    }
    return fill_result(name, result, buffer, buflen, errnop, h_errnop);
}

enum nss_status
_nss_lash_gethostbyaddr_r (const char *name, struct hostent *result,
                           char *buffer, size_t buflen, int *errnop,
                           int *h_errnop)
{
    syslog(LOG_ERR, "%s", name);
    if (!strcmp(name, "lash")) {
        return NSS_STATUS_UNAVAIL;
    }
    return fill_result(name, result, buffer, buflen, errnop, h_errnop);
}
I've added lash to /etc/nsswitch.conf. strace shows that the /lib/libnss_lash.so.2 file is being successfully opened. However the return value from the nss lookup is NSS_UNAVAIL / ENOENT. If I add [unavail=return] to /etc/nsswitch.conf after the lash entry, I get the same result.
Anyone have any clues to what I'm missing?
(The #define lines attempt to cover all symbols found in objdump -T /lib/libnss_dns.so, which seems to be the simplest implementation; see the note after the version list below.)
Using:
glibc 2.30
gnunet 0.11.6-ish
nss 3.49.2
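One observation, offered as an assumption rather than a verified diagnosis: a #define only rewrites names at compile time, so the module above still exports exactly two symbols, _nss_lash_gethostbyname_r and _nss_lash_gethostbyaddr_r. No _nss_lash_gethostbyname2_r (or 3/4) symbol is ever created, and glibc's hosts lookup probes those variants. To export a real additional entry point, define a thin wrapper, after first removing the corresponding #define (which would otherwise rename this definition too); a sketch using the glibc gethostbyname2_r signature:

enum nss_status
_nss_lash_gethostbyname2_r (const char *name, int af, struct hostent *result,
                            char *buffer, size_t buflen, int *errnop,
                            int *h_errnop)
{
    (void)af; /* minimal sketch: ignore the requested address family */
    return _nss_lash_gethostbyname_r(name, result, buffer, buflen,
                                     errnop, h_errnop);
}

objdump -T /lib/libnss_lash.so.2 should then list the new symbol.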

How to avoid the "Entry point not found" error even though I'm checking the OS version before calling the GetTcpTable2 API on Windows XP?

I'm aware that the GetTcpTable2 API is supported only on Windows Vista and later, so the code checks the OS version and only then enters the block that calls the API. I compile the code on Windows 7 with Visual Studio 2008, and the executable runs fine on Windows 7 and other OS versions, but on Windows XP it throws the "Entry point not found" error named in the title.
The code snippet is:
#include <winsock2.h>
#include <ws2tcpip.h>
#include <iphlpapi.h>
#include <stdio.h>

// Need to link with Iphlpapi.lib and Ws2_32.lib
#pragma comment(lib, "iphlpapi.lib")
#pragma comment(lib, "ws2_32.lib")

#define MALLOC(x) HeapAlloc(GetProcessHeap(), 0, (x))
#define FREE(x) HeapFree(GetProcessHeap(), 0, (x))
/* Note: could also use malloc() and free() */

int main()
{
    OSVERSIONINFOEX osvi;
    ZeroMemory(&osvi, sizeof(OSVERSIONINFOEX));
    osvi.dwOSVersionInfoSize = sizeof(OSVERSIONINFOEX);
    GetVersionEx((OSVERSIONINFO *)&osvi);
    if (osvi.dwMajorVersion >= 6) // Vista or later
    {
        // Declare and initialize variables
        PMIB_TCPTABLE2 pTcpTable;
        ULONG ulSize = 0;
        DWORD dwRetVal = 0;

        pTcpTable = (MIB_TCPTABLE2 *) MALLOC(sizeof(MIB_TCPTABLE2));
        if (pTcpTable == NULL)
        {
            printf("Error allocating memory\n");
            return 1;
        }
        ulSize = sizeof(MIB_TCPTABLE2);
        // Make an initial call to GetTcpTable2 to
        // get the necessary size into the ulSize variable
        if ((dwRetVal = GetTcpTable2(pTcpTable, &ulSize, TRUE)) ==
            ERROR_INSUFFICIENT_BUFFER)
        {
            FREE(pTcpTable);
            pTcpTable = (MIB_TCPTABLE2 *) MALLOC(ulSize);
            if (pTcpTable == NULL)
            {
                printf("Error allocating memory\n");
                return 1;
            }
        }
        // Second call with the right-sized buffer
        if ((dwRetVal = GetTcpTable2(pTcpTable, &ulSize, TRUE)) == NO_ERROR)
            printf("%lu entries\n", pTcpTable->dwNumEntries);
        FREE(pTcpTable);
    }
    else
    {
        printf("Unsupported OS");
    }
    return 0;
}
How do I get the executable to work on Windows XP without crashing or throwing that error?
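The usual explanation, stated here as general Win32 knowledge rather than anything from the original thread: because the program calls GetTcpTable2 directly, the name lands in the executable's import table, and the XP loader fails to resolve it before main ever runs, so the version check never gets a chance to execute. A common workaround is to bind at runtime with LoadLibrary/GetProcAddress (or the linker's delay-load feature); a minimal sketch:

// Resolve GetTcpTable2 at runtime so the program still loads on XP,
// where iphlpapi.dll does not export it.
typedef ULONG (WINAPI *PFN_GETTCPTABLE2)(PMIB_TCPTABLE2, PULONG, BOOL);

HMODULE hIphlpapi = LoadLibraryW(L"iphlpapi.dll");
PFN_GETTCPTABLE2 pfnGetTcpTable2 = hIphlpapi
    ? (PFN_GETTCPTABLE2)GetProcAddress(hIphlpapi, "GetTcpTable2")
    : NULL;
if (pfnGetTcpTable2 != NULL)
{
    // Vista or later: safe to call through the pointer,
    // e.g. dwRetVal = pfnGetTcpTable2(pTcpTable, &ulSize, TRUE);
}
else
{
    // XP path: fall back to GetTcpTable, or report "Unsupported OS"
}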

Getting process base address in Mac OSX

I'm trying to read the memory of a process using task_for_pid / vm_read.
uint32_t sz;
pointer_t buf;
task_t task;
pid_t pid = 9484;

kern_return_t error = task_for_pid(current_task(), pid, &task);
vm_read(task, 0x10e448000, 2048, &buf, &sz); // assumes error == KERN_SUCCESS
In this case I read the first 2048 bytes.
This works when I know the base address of the process (which I can find with gdb's "info shared"; here it's 0x10e448000), but how do I find the base address at runtime, without gdb?
Answering my own question: I was able to get the base address using mach_vm_region_recurse, as below. The offset lands in vmoffset. If there is a more "right" way, don't hesitate to comment!
#include <stdio.h>
#include <mach/mach_init.h>
#include <sys/sysctl.h>
#include <mach/mach_vm.h>
...
mach_port_name_t task; /* obtained earlier, e.g. via task_for_pid() */
vm_map_offset_t vmoffset = 0; /* start the walk at address 0 */
vm_map_size_t vmsize;
uint32_t nesting_depth = 0;
struct vm_region_submap_info_64 vbr;
mach_msg_type_number_t vbrcount = 16;
kern_return_t kr;

if ((kr = mach_vm_region_recurse(task, &vmoffset, &vmsize,
                                 &nesting_depth,
                                 (vm_region_recurse_info_t)&vbr,
                                 &vbrcount)) != KERN_SUCCESS)
{
    printf("FAIL");
}
Since you're calling current_task(), I assume you're aiming at your own process at runtime. So the base address you mentioned should be the dynamic base address, i.e. static base address + image slide caused by ASLR, right? Based on this assumption, you can use "Section and Segment Accessors" to get the static base address of your process, and then use the dyld functions to get the image slide. Here's a snippet:
#include <mach-o/getsect.h>
#include <mach-o/dyld.h>
#include <stdio.h>
#include <string.h>
#include <stdint.h>

uint64_t StaticBaseAddress(void)
{
    const struct segment_command_64 *command = getsegbyname("__TEXT");
    return command ? command->vmaddr : 0;
}

intptr_t ImageSlide(void)
{
    char path[1024];
    uint32_t size = sizeof(path);
    if (_NSGetExecutablePath(path, &size) != 0)
        return -1;
    for (uint32_t i = 0; i < _dyld_image_count(); i++)
    {
        if (strcmp(_dyld_get_image_name(i), path) == 0)
            return _dyld_get_image_vmaddr_slide(i);
    }
    return 0;
}

uint64_t DynamicBaseAddress(void)
{
    return StaticBaseAddress() + ImageSlide();
}

int main (int argc, const char *argv[])
{
    printf("dynamic base address (%0llx) = static base address (%0llx) + image slide (%0lx)\n",
           DynamicBaseAddress(), StaticBaseAddress(), ImageSlide());
    while (1) {} // you can attach to this process via gdb/lldb to view the base address now :)
    return 0;
}
Hope it helps!
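For the original goal of reading another process at runtime, the two answers can be combined; a hedged sketch, assuming root or the needed taskport rights, and that the first mapped region of a typical main executable is its slid __TEXT segment:

#include <mach/mach.h>
#include <mach/mach_traps.h>
#include <mach/mach_vm.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = 9484; /* target pid from the question */
    task_t task;
    if (task_for_pid(mach_task_self(), pid, &task) != KERN_SUCCESS)
    {
        fprintf(stderr, "task_for_pid failed (need root/entitlements?)\n");
        return 1;
    }

    /* Walk from address 0 to the first mapped region of the target. */
    mach_vm_address_t address = 0;
    mach_vm_size_t size = 0;
    uint32_t depth = 0;
    struct vm_region_submap_info_64 info;
    mach_msg_type_number_t count = VM_REGION_SUBMAP_INFO_COUNT_64;
    if (mach_vm_region_recurse(task, &address, &size, &depth,
                               (vm_region_recurse_info_t)&info,
                               &count) == KERN_SUCCESS)
        printf("first region (likely image base): 0x%llx\n",
               (unsigned long long)address);
    return 0;
}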

boost.log auto_flush files are not stored when the app crashes

Recently I started to play with boost.log, and bumped into an issue: if an unhandled exception is thrown, no log messages are written to the log file. I am using rolling text files with the auto_flush option set on.
Here is the modified source from the samples:
#include <stdexcept>
#include <string>
#include <iostream>
#include <fstream>
#include <functional>

#include <boost/ref.hpp>
#include <boost/bind.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/throw_exception.hpp>
#include <boost/date_time/gregorian/gregorian.hpp>
#include <boost/date_time/posix_time/posix_time_types.hpp>
#include <boost/thread/thread.hpp>
#include <boost/thread/barrier.hpp>

#include <boost/log/common.hpp>
#include <boost/log/filters.hpp>
#include <boost/log/formatters.hpp>
#include <boost/log/attributes.hpp>
#include <boost/log/sinks.hpp>
#include <boost/log/utility/empty_deleter.hpp>
#include <boost/log/utility/record_ordering.hpp>

namespace logging = boost::log;
namespace attrs = boost::log::attributes;
namespace src = boost::log::sources;
namespace sinks = boost::log::sinks;
namespace fmt = boost::log::formatters;
namespace keywords = boost::log::keywords;

using boost::shared_ptr;
using namespace boost::gregorian;

enum
{
    LOG_RECORDS_TO_WRITE = 100,
    LOG_RECORDS_TO_WRITE_BEFORE_EXCEPTION = 10,
    THREAD_COUNT = 10
};

BOOST_LOG_DECLARE_GLOBAL_LOGGER(test_lg, src::logger_mt)

//! This function is executed in multiple threads
void thread_fun(boost::barrier& bar)
{
    // Wait until all threads are created
    bar.wait();

    // Here we go. First, identify the thread.
    BOOST_LOG_SCOPED_THREAD_TAG("ThreadID", boost::thread::id, boost::this_thread::get_id());

    // Now, do some logging
    for (unsigned int i = 0; i < LOG_RECORDS_TO_WRITE; ++i)
    {
        BOOST_LOG(get_test_lg()) << "Log record " << i;
        if (i > LOG_RECORDS_TO_WRITE_BEFORE_EXCEPTION)
        {
            // std::exception has no string constructor in standard C++
            // (that's an MSVC extension); use a concrete exception type
            BOOST_THROW_EXCEPTION(std::runtime_error("unhandled exception"));
        }
    }
}

int main(int argc, char* argv[])
{
    try
    {
        typedef sinks::synchronous_sink< sinks::text_file_backend > file_sink;
        shared_ptr< file_sink > sink(new file_sink(
            keywords::file_name = L"%Y%m%d_%H%M%S_%5N.log", // file name pattern
            keywords::rotation_size = 10 * 1024 * 1024,     // rotation size, in characters
            keywords::auto_flush = true                     // make each log record flushed to the file
        ));

        // Set up where the rotated files will be stored
        sink->locked_backend()->set_file_collector(sinks::file::make_collector(
            keywords::target = "log"                        // where to store rotated files
        ));

        // Upon restart, scan the target directory for files matching the file_name pattern
        sink->locked_backend()->scan_for_files();

        sink->locked_backend()->set_formatter(
            fmt::format("%1%: [%2%] [%3%] - %4%")
                % fmt::attr< unsigned int >("Line #")
                % fmt::date_time< boost::posix_time::ptime >("TimeStamp")
                % fmt::attr< boost::thread::id >("ThreadID")
                % fmt::message()
        );

        // Add it to the core
        logging::core::get()->add_sink(sink);

        // Add some attributes too
        shared_ptr< logging::attribute > attr(new attrs::local_clock);
        logging::core::get()->add_global_attribute("TimeStamp", attr);
        attr.reset(new attrs::counter< unsigned int >);
        logging::core::get()->add_global_attribute("Line #", attr);

        // Create logging threads
        boost::barrier bar(THREAD_COUNT);
        boost::thread_group threads;
        for (unsigned int i = 0; i < THREAD_COUNT; ++i)
            threads.create_thread(boost::bind(&thread_fun, boost::ref(bar)));

        // Wait until all action ends
        threads.join_all();
        return 0;
    }
    catch (std::exception& e)
    {
        std::cout << "FAILURE: " << e.what() << std::endl;
        return 1;
    }
}
Source is compiled under Visual Studio 2008. boost.log compiled for boost 1.40.
Any help is highly appreciated.
Check to see if the log file is in the current working directory of the process, rather than the specified file collector target directory ("log" in your sample code). Additionally, you will probably want to specify a directory for the sink "file_name" pattern.
As "JQ" notes, don't expect to see any logging post-exception.
