Attach to multiple processes with nvim-dap

I came across an interesting problem today that I've been having trouble figuring out, and I wanted to test the waters to see if anyone knows whether this is possible. I'm currently in the process of setting up nvim-dap, and for my situation I need to be able to run debuggers for multiple processes. Suppose I had a configuration that looked like this:
dap.adapters.node2 = {
  type = 'executable',
  command = 'node',
  args = { os.getenv('HOME') .. '/dev/microsoft/vscode-node-debug2/out/src/nodeDebug.js' },
}

dap.configurations.javascript = {
  {
    type = 'node2',
    request = 'attach',
    name = 'program 1',
    port = 9229,
  },
  {
    type = 'node2',
    request = 'attach',
    name = 'program 2',
    port = 7000,
  },
  {
    type = 'node2',
    request = 'attach',
    name = 'program 3',
    port = 5035,
  },
}
Then when I run :lua require'dap'.continue() I am given the option to attach to only one of those running processes. Does anyone know if it's possible to get a debugger going that is able to attach to all of these processes in the same nvim session? Bonus points if it can attach to a Chrome process as well for frontend debugging!
It looks like with launch configurations in VS Code you could use something like a compound launch configuration; however, I couldn't find any resource on getting similar functionality with attach requests in nvim-dap.
I'd love to see how you all are solving a similar problem!
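One possible direction (an untested sketch, not a confirmed solution): nvim-dap can run more than one session at a time in recent versions, so instead of picking a single entry through dap.continue(), you can start every attach configuration yourself with dap.run(). The MultiAttach command name below is my own invention, and the user-command API assumes Neovim 0.7+:

local dap = require('dap')

-- Start a session for every attach configuration instead of picking one.
local function attach_all()
  for _, config in ipairs(dap.configurations.javascript) do
    if config.request == 'attach' then
      dap.run(config)
    end
  end
end

-- :MultiAttach starts all three sessions from the configuration above.
vim.api.nvim_create_user_command('MultiAttach', attach_all, {})

Each dap.run() call creates its own session, so a Chrome attach configuration added to the same table would be started the same way.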

Related

How do I force Solana/Anchor methods to use the devnet?

In creating a simple program, I can't get Solana to use the devnet for its RPC connection. I keep getting the following error:
{
  blockhash: '7TTVjRKApwAqP1SA7vZ2tQHuh6QbnToSmVUA9kc7amEY',
  lastValidBlockHeight: 129662699
}
Error: failed to get recent blockhash: FetchError: request to http://localhost:8899/ failed, reason: connect ECONNREFUSED 127.0.0.1:8899
at Connection.getRecentBlockhash (/home/simeon/dev/freelance/niels_vacancies/node_modules/@solana/web3.js/lib/index.cjs.js:6584:13)
even though I have set all of my settable constants, like ANCHOR_PROVIDER_URL=https://api.devnet.solana.com, as well as the relevant entries in my Anchor.toml file. I also explicitly specify the following:
const connection = new anchor.web3.Connection("https://api.devnet.solana.com/", { commitment: "max" });
const wallet = anchor.Wallet.local();
const provider = new anchor.Provider(
  connection,
  wallet,
  {
    commitment: "max",
    preflightCommitment: "max",
    skipPreflight: false
  }
);
I even test console.log(await anchor.getProvider().connection.getLatestBlockhash()); to ensure that I can, in fact, get a blockhash from the devnet. What can I do to force the RPC calls to do so too?
You just have to set the cluster in Anchor.toml to devnet, add a [programs.devnet] section, and then deploy the program using a wallet funded with devnet SOL. Here is an example Anchor.toml for devnet:
[features]
seeds = false

[programs.devnet]
first_program = "FPT...bd3"

[registry]
url = "https://anchor.projectserum.com"

[provider]
cluster = "devnet"
wallet = "PATH/TO/WALLET/WHO/WILL/PAY/FOR/DEPLOY.json"

[scripts]
test = "yarn run ts-mocha -p ./tsconfig.json -t 1000000 tests/**/*.ts"
In this case, first_program is the program ID declared in the declare_id! macro.
Then you can use your test file as normal with anchor.setProvider(anchor.Provider.env());
If you have already updated Anchor.toml to use devnet and are still having this issue with program.provider.connection.whatever or program.account.whatever.fetch.whatever, make sure that you have set the Anchor provider BEFORE creating the program, e.g.:
const provider = AnchorProvider.env();
anchor.setProvider(provider);
must come before the line
const program: Program<Whatever> = workspace.Whatever;
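Putting the two answers together, the ordering in a devnet test file would look something like this. This is a sketch: Whatever stands in for your program's generated type, the import path assumes Anchor's usual target/types layout, and AnchorProvider is the newer name for Provider in recent anchor releases:

import * as anchor from "@project-serum/anchor";
import { Program } from "@project-serum/anchor";
import { Whatever } from "../target/types/whatever"; // hypothetical generated type

// Set the provider first; AnchorProvider.env() reads ANCHOR_PROVIDER_URL
// and ANCHOR_WALLET, which `anchor test` fills in from Anchor.toml.
const provider = anchor.AnchorProvider.env();
anchor.setProvider(provider);

// Only then create the program, so it binds to the devnet provider.
const program: Program<Whatever> = anchor.workspace.Whatever;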

SWUpdate multiple bootenv sections

I use SWUpdate to update different hardware revisions of the same device with a double-copy strategy.
The bootloader environment of all of them looks very similar. However, I have to set the MMC partition to boot from depending on the active copy, and the boot_file depending on the hardware revision.
To keep the sw-description file as comprehensible as possible and easy to maintain, I would like to set a "basic" boot environment for all devices in a first step, and in a second step overwrite some variables depending on hardware revision and active copy:
software =
{
    version = "1.1";
    hardware-compatibility = ["0.1", "1.0"];

    device1 =
    {
        copy-1:
        {
            images:
            (
                {
                    filename = "rootfs.ext3.gz";
                    device = "/dev/mmcblk0p3";
                    compressed = true;
                },
                {
                    filename = "u-boot-env-base"; # basic boot environment
                    type = "uboot";
                }
            );

            bootenv: # device-specific boot variables
            (
                {
                    name = "boot_file";
                    value = "uImage1";
                },
                {
                    name = "mmcpart";
                    value = "3";
                }
            );
        }
    }
}
While parsing, both bootloader environments are reported, but only one is applied (or both are, but in the wrong order): when checking via fw_printenv, the environment from "u-boot-env-base" is present unaltered, without the device-specific bootenv values.
I am using SWUpdate v2018.11.0 and U-Boot 2018.09. I feel that I had this working in an older setup (SWUpdate 2016).
I asked this question on the mailing list. Stefano Babic, SWUpdate developer and maintainer, answered it; I am just summarizing his answer here.
What I have described is the intended behaviour: it is not foreseen to set bootloader variables twice during an update. The U-Boot variables defined in a file take priority over the name-value pairs in the bootenv section, because the file is processed at the very end of the update. The solution in my case is to set the variables only in the bootenv section.
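In other words, the "u-boot-env-base" file goes away and its contents move into the bootenv section alongside the device-specific variables. A sketch of that shape (the bootcmd entry is only a placeholder for whatever the base environment actually contains):

bootenv: # all U-Boot variables, shared and device-specific, in one place
(
    {
        name = "bootcmd"; # placeholder for a shared "base" variable
        value = "run mmcboot";
    },
    {
        name = "boot_file";
        value = "uImage1";
    },
    {
        name = "mmcpart";
        value = "3";
    }
);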

InitiateShutdown fails with RPC_S_SERVER_UNAVAILABLE error for a remote computer

I'm trying to implement rebooting of a remote computer with the InitiateShutdown API using the following code, but it fails with RPC_S_SERVER_UNAVAILABLE (error code 1722):
// Process is running as administrator.
// Select a remote machine to reboot:
// INFO: Tried it with and w/o two opening slashes.
LPCTSTR pServerName = L"192.168.42.105";
// Or use 127.0.0.1 if you don't have access to another machine on your network.
// This will attempt to reboot your local machine.
// In that case make sure to call shutdown /a /m \\127.0.0.1 to cancel it.

if(AdjustPrivilege(NULL, L"SeShutdownPrivilege", TRUE) &&
   AdjustPrivilege(pServerName, L"SeRemoteShutdownPrivilege", TRUE))
{
    int nErrorCode = ::InitiateShutdown(pServerName, NULL, 30,
        SHUTDOWN_INSTALL_UPDATES | SHUTDOWN_RESTART, 0);
    // Receive nErrorCode == 1722, or RPC_S_SERVER_UNAVAILABLE
}
BOOL AdjustPrivilege(LPCTSTR pStrMachine, LPCTSTR pPrivilegeName, BOOL bEnable)
{
    HANDLE hToken;
    TOKEN_PRIVILEGES tkp;
    BOOL bRes = FALSE;

    if(!OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &hToken))
        return FALSE;

    if(LookupPrivilegeValue(pStrMachine, pPrivilegeName, &tkp.Privileges[0].Luid))
    {
        tkp.PrivilegeCount = 1;
        tkp.Privileges[0].Attributes = bEnable ? SE_PRIVILEGE_ENABLED : SE_PRIVILEGE_REMOVED;

        bRes = AdjustTokenPrivileges(hToken, FALSE, &tkp, 0, (PTOKEN_PRIVILEGES)NULL, 0);
        int nOSError = GetLastError();
        if(bRes)
        {
            if(nOSError != ERROR_SUCCESS)
                bRes = FALSE;
        }
    }

    CloseHandle(hToken);
    return bRes;
}
So to prepare for this code to run, I do the following on this computer, which is Windows 7 Pro (as I would for Microsoft's shutdown tool):
Run the following "as administrator" to allow SMB access for the logged-in user D1 on the 192.168.42.105 computer (per this answer):
NET USE \\192.168.42.105\IPC$ 1234 /USER:D1
Run the process with my code above "as administrator".
And then do the following on the remote computer, 192.168.42.105, which runs Windows 7 Pro (per the most-upvoted answer here):
Control Panel, Network and Sharing Center, Change Advanced Sharing settings
"Private" enable "Turn on File and Printer sharing"
Set the following key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System
LocalAccountTokenFilterPolicy=dword:1
RUN secpol.msc, then go to Local Security Policy, Security Settings, Local Policies, User Rights Assignment. Add "Everyone" to "Force shutdown from a remote system". (Just remember to remove it after you're done testing!)
Note that the following shutdown command seems to work just fine to reboot the remote computer:
shutdown /r /m \\192.168.42.105 /t 30
What am I missing with my code?
EDIT:
OK. I will admit that I was merely interested in why InitiateShutdown doesn't seem to "want" to work with a remote server connection, while InitiateSystemShutdownEx or InitiateSystemShutdown had no issues at all. (Unfortunately the latter two lack the dwShutdownFlags parameter, which I needed in order to pass the SHUTDOWN_INSTALL_UPDATES flag, hence my persistence...)
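For anyone who only needs the remote reboot itself, the older API mentioned above does work against the same setup. A minimal sketch (note it cannot express SHUTDOWN_INSTALL_UPDATES, and the reason code here is just an example):

#include <windows.h>

// Assumes SeShutdownPrivilege / SeRemoteShutdownPrivilege were enabled
// via AdjustPrivilege() as in the code above.
BOOL RebootRemote(LPWSTR pServerName)
{
    return ::InitiateSystemShutdownExW(
        pServerName,  // e.g. L"192.168.42.105"
        NULL,         // no message to display
        30,           // timeout in seconds
        FALSE,        // don't force apps closed
        TRUE,         // reboot after shutdown
        SHTDN_REASON_MAJOR_OTHER | SHTDN_REASON_FLAG_PLANNED);
}

As the analysis below shows, this API goes over \PIPE\InitShutdown rather than lsarpc, which appears to be why it avoids the failing credential checks.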
At this point I had no other way of finding out than dusting out a copy of WinDbg... I'm still trying to dig into it, but so far this is what I found...
(A) It turns out that InitiateSystemShutdownEx internally uses a totally different RPC call. Without going into too many details, it initiates RPC binding with RpcStringBindingComposeW using the following parameters:
ObjUuid = NULL
ProtSeq = ncacn_np
NetworkAddr = \\192.168.42.105
EndPoint = \\PIPE\\InitShutdown
Options = NULL
or the following binding string:
ncacn_np:\\\\192.168.42.105[\\PIPE\\InitShutdown]
(B) While InitiateShutdown on the other hand uses the following binding parameters:
ObjUuid = 765294ba-60bc-48b8-92e9-89fd77769d91
ProtSeq = ncacn_ip_tcp
NetworkAddr = 192.168.42.105
EndPoint = NULL
Options = NULL
which it later translates into the following binding string:
ncacn_np:\\\\192.168.42.105[\\PIPE\\lsarpc]
that it uses to obtain the RPC handle that it passes to WsdrInitiateShutdown (which seems to have its own specification).
So as you can see, the InitiateShutdown call is technically treated as an unknown RPC service (for the UUID {765294ba-60bc-48b8-92e9-89fd77769d91}), which later triggers a whole bunch of credential checks between the server and the client, which, honestly, I'm not sure I want to step into with a low-level debugger :)
At this stage I will admit that I am not very well versed in the "Local Security Authority" interface (or the \PIPE\lsarpc named pipe configuration). So if anyone knows what configuration is missing on the server side to allow this RPC call to go through, I would appreciate your take on it.

Attaching to process in Visual Studio Package

I'm trying to write a Visual Studio Package that will attach the debugger to a named process.
I am using the following code in my package.
var info = new VsDebugTargetInfo
{
    dlo = DEBUG_LAUNCH_OPERATION.DLO_AlreadyRunning,
    bstrExe = strProcessName,
    bstrCurDir = "c:\\",
    bstrArg = "",
    bstrEnv = "",
    bstrOptions = null,
    bstrPortName = null,
    bstrMdmRegisteredName = null,
    bstrRemoteMachine = "",
    cbSize = (uint)System.Runtime.InteropServices.Marshal.SizeOf<VsDebugTargetInfo>(),
    grfLaunch = (uint)(__VSDBGLAUNCHFLAGS.DBGLAUNCH_DetachOnStop
                     | __VSDBGLAUNCHFLAGS.DBGLAUNCH_StopDebuggingOnEnd
                     | __VSDBGLAUNCHFLAGS.DBGLAUNCH_WaitForAttachComplete),
    fSendStdoutToOutputWindow = 1,
    clsidCustom = VSConstants.CLSID_ComPlusOnlyDebugEngine
};

VsShellUtilities.LaunchDebugger(ServiceProvider, info);
However I get the following, unhelpful, error:
Exception : Unable to attach. Operation not supported. Unknown error: 0x80070057.
The code is obviously doing something, because if the process has not started I get this error:
Exception : Unable to attach. Process 'xxxxxxxx' is not running on 'xxxxxxxx'.
The process is a managed .net 4 process and I am able to attach to it through the VS UI.
For context, I am trying to replace a simple macro I was using in VS 2010 to do the same job, but that obviously can't be run in newer versions of Visual Studio.
I found that a totally different piece of code, inspired by https://github.com/whut/AttachTo, worked much better to achieve the same result:
foreach (Process process in ((DTE)GetService(typeof(DTE))).Debugger.LocalProcesses)
    if (process.Name.EndsWith(strProcessName, StringComparison.InvariantCultureIgnoreCase))
        process.Attach();
I had to use EndsWith because the process names include the full path to the running exe.
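Wrapped up as it might sit inside the package class, the whole thing is just a few lines. A sketch only: AttachToNamedProcess is my own helper name, the usings are shown for clarity, and stopping at the first match is an assumption:

using System;
using EnvDTE;

private void AttachToNamedProcess(string strProcessName)
{
    var dte = (DTE)GetService(typeof(DTE));
    foreach (Process process in dte.Debugger.LocalProcesses)
    {
        // Names include the full path to the exe, hence EndsWith.
        if (process.Name.EndsWith(strProcessName, StringComparison.InvariantCultureIgnoreCase))
        {
            process.Attach();
            break; // stop after the first match (assumption)
        }
    }
}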

Use Node.js command line debugger on child process?

I use the Node client debugger, like so:
node debug myscript.js
But this process spawns a child using:
var child = require("child_process").fork(cmd, args);
Is there a way for this child to ALSO be started in "debug" mode?
Yes. You have to spawn your process on a new port. There is a workaround for debugging with clusters; in the same way, you can do:
var debug = process.execArgv.indexOf('--debug') !== -1;
if (debug) {
    // Set an unused port number.
    process.execArgv.push('--debug=' + 5859);
}
var child = require("child_process").fork(cmd, args);
....
debugger listening on port 5859
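Once the child prints that line, you can connect the CLI debugger to it from a second terminal (legacy debug protocol, so this applies to Node versions before 8):

node debug localhost:5859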
Not a problem. All you need to do is send the debug signal to the running process. Look at the node-inspector docs and search for "Enable debug mode".
I can't tell you how to do a debug-brk; I'm not sure you can. But you can always do something in code like while(true){ debugger; } so you can catch the debugger statement and then step out of the loop manually. Dirty, I know =)
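The signal referred to above is SIGUSR1: on POSIX systems, Node starts listening for a debugger when it receives it. From the parent, you can flip an already-running child into debug mode like this (child being the fork() handle from the question):

// POSIX only: Node begins listening for a debugger on receipt of SIGUSR1.
process.kill(child.pid, 'SIGUSR1');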
Here's another way; this will make the child debuggable on a free port:
// Determine if in debug mode.
// If so, pass in a debug-brk option manually, without specifying port.
var startOpts = {};
var isInDebugMode = typeof v8debug === 'object';
if (isInDebugMode) {
    startOpts = { execArgv: ['--debug-brk'] };
}
child_process.fork('./some_module.js', startArgs, startOpts);
