I was looking through Go's source code, and it seems that standard output (os.Stdout) points to:
os.Stdout = os.NewFile(uintptr(syscall.Stdout), "/dev/stdout")
But from my understanding, this should only work for Unix-based systems. And yet, it's defined as a general variable.
Microsoft says Windows has a standard output device (stdout). From the GetStdHandle function documentation:
STD_OUTPUT_HANDLE - The standard output device.
The Windows-specific Go syscall.Stdout variable is:
go/src/syscall/syscall_windows.go:
var (
	Stdout = getStdHandle(STD_OUTPUT_HANDLE)
)
See Go Build constraints for OS specific files.
The argument that matters when calling NewFile is the first one, the file handle. The Windows syscall package already provides the correct value for the standard output handle (syscall.Stdout), so os simply reuses it.
The point of the second argument is just to give the resulting *os.File a name that can be reported later, for example by os.Stdout.Name(), since the file handle itself doesn't carry a name. You could argue that naming os.Stdout "/dev/stdout" on Windows is confusing, but it's only a name and has no effect on functionality.
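Here is a minimal sketch (nothing beyond the standard library; the name "my-stdout" is arbitrary) showing that the name is purely cosmetic: wrapping the same handle under a different name writes exactly the same way, and only Name() differs.
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	// Wrap the very same handle/descriptor that os.Stdout wraps,
	// but give it a different (arbitrary) name.
	f := os.NewFile(uintptr(syscall.Stdout), "my-stdout")
	fmt.Fprintln(f, "written through the second *os.File")

	fmt.Println(f.Name())         // "my-stdout"
	fmt.Println(os.Stdout.Name()) // "/dev/stdout", even on Windows
}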
I want to print the addresses of all the local and global variables used in a function, at different points during the program's execution, and store them in a file.
I am trying to use gdb for this.
The "info locals" command prints the values of all local variables. I need something that prints their addresses in a similar way. Is there a built-in command for this?
Edit 1
I am working on a gcc plugin which generates a points-to graph at compile time.
I want to verify that the generated graph is correct, i.e. that the pointers actually point to the variables the plugin says they should point to.
We want to validate this points-to information on large programs with thousands of lines of code. We will validate it using a program, not manually. There are several local and global variables in each function, so adding printf statements after every line of code is not feasible.
There is no built-in command to do this. There is an open feature request in gdb bugzilla to have a way to show the meaning of all the known slots in the current stack frame, but nobody has ever implemented this.
This can be done with a bit of gdb scripting. The simplest way is to use Python to iterate over the Blocks of the selected Frame. Then in each such Block, you can iterate over all the variables, and invoke info addr on the variable.
Note that printing the address with print &var will not always work. A variable does not always have an address -- but, if the variable exists, it will have a location, which is what info addr will show.
One simple way these ideas can differ is if the compiler decides to put the variable into a register. There are more complicated cases as well, though, for example the compiler can put the variable into different spots at different points in the function; or can split a local struct into its constituent parts and move them around.
By default info addr tries to print something vaguely human-readable. You can also ask it to just dump the DWARF location expressions if you need that level of detail.
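Here is a minimal sketch of that scripting approach, assuming a gdb built with Python support (the file name dump_addrs.py is just a placeholder):
# dump_addrs.py -- run inside gdb with:  (gdb) source dump_addrs.py
import gdb

def dump_frame_locations():
    """Run 'info address' for every argument and local of the selected frame."""
    frame = gdb.selected_frame()
    block = frame.block()
    # Walk outward through the lexical blocks of the current function,
    # stopping before the static and global blocks.
    while block is not None and not block.is_static and not block.is_global:
        for symbol in block:
            if symbol.is_variable or symbol.is_argument:
                # 'info address' reports the location (stack slot, register,
                # or address), which is what we want instead of 'print &var'.
                gdb.execute("info address %s" % symbol.name)
        block = block.superblock

dump_frame_locations()
You can drive this from breakpoints or a gdb command file to dump locations at the points of execution you care about, and redirect the output to a file with "set logging on". Globals are not covered by the block walk above; "info variables" or "info address <name>" on each one handles those.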
Programmatically (in C/C++) you use the & operator to get the address of a variable:
int a;                       // variable declaration
printf("%d\n", a);           // print the value of the variable (as an integer)
printf("%p\n", (void *)&a);  // print the address of the variable
The same goes for gdb: just use &, as in print &a.
Besides, this question has already been answered elsewhere (more than once).
I have two executables, both manually created by me, I shall call them 1.exe and 2.exe respectively. First of all, both the executables are compiled by MSVS 2010, using the Microsoft compiler. I want to type a message into 1.exe, and I want 1.exe to inject that message into 2.exe (possibly as some sort of parameter), so when I run 2.exe after 1.exe has injected the message, 2.exe will display that message.
NOTE - this is not for illicit use, both these executables were created by me.
The big thing for me is:
Where to place the message/instructions in 2.exe so they can be easily accessed by 2.exe
How will 2.exe actually FIND and use these parameters (the message)?
I fully understand that I can't simply use C++ code as injection, it must be naked assembly which can be generated/translated by the compiler at runtime (correct me if I am wrong)
Some solutions I have been thinking of:
Create a standard function in 2.exe requiring parameters (eg displaying the messagebox), and simply inject these parameters (the message) into the function?
Make some sort of structure in 2.exe to hold the values that 1.exe will inject? If so, how? Will I need to hardcode the offset at which to put these parameters?
Note: I don't expect to be spoonfed; I want to understand this aspect of programming properly. I have read up on the PE file format, have a solid understanding of assembly (MASM assembler syntax), and am keen to learn a lot more. Thank you for your time.
Very few programmers ever need to do this sort of thing. You could go your entire career without it. I last did it in about 1983.
If I remember correctly, I had 2.exe include an assembler module with something like this (I've forgotten the syntax):
.GLOBAL TARGET
TARGET DB 200h DUP (?) ; Reserve 512 bytes
1.exe would then open 2.exe, search the symbol table for the global symbol "TARGET", figure out where that was within the file, write the 512 bytes it wanted to, and save the file. This was for a licensing scheme.
Igor Skochinsky's comment (https://stackoverflow.com/users/422797/igor-skochinsky) reminded me that I did not use the symbol table on that occasion; that was on a different OS. In this case, I scanned for a string instead.
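As a rough illustration of the scan-for-a-string idea (not the original code; the marker text and the 512-byte slot size are made up for the example): 2.exe would contain a buffer along the lines of static char slot[512] = "MSGSLOT1"; that it actually reads at runtime (so the compiler and linker keep it), and 1.exe would patch it like this:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SLOT_SIZE 512
static const char marker[] = "MSGSLOT1";   /* must match the buffer in 2.exe */

int inject_message(const char *exe_path, const char *message)
{
    FILE *f = fopen(exe_path, "r+b");
    if (!f)
        return -1;

    /* Read the whole image into memory (fine for a small executable). */
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);
    char *buf = malloc(size);
    if (!buf || fread(buf, 1, size, f) != (size_t)size) {
        free(buf);
        fclose(f);
        return -1;
    }

    /* Locate the marker that 2.exe carries in its data section. */
    long off = -1;
    for (long i = 0; i + (long)sizeof(marker) <= size; i++) {
        if (memcmp(buf + i, marker, sizeof(marker) - 1) == 0) {
            off = i;
            break;
        }
    }

    /* Overwrite the slot in place; the file layout itself is unchanged. */
    if (off >= 0) {
        char slot[SLOT_SIZE] = {0};
        strncpy(slot, message, SLOT_SIZE - 1);
        fseek(f, off, SEEK_SET);
        fwrite(slot, 1, SLOT_SIZE, f);
    }

    free(buf);
    fclose(f);
    return off >= 0 ? 0 : -1;
}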
From your description it sounds like passing a value on the command line is all you need.
The Win32 GetCommandLine() function will give you the passed value, which you can then hand to MessageBox().
If the message needs to reach an instance of 2.exe that is already running, another form of IPC such as Windows messages (WM_COPYDATA) will work.
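If the command-line route is enough, 2.exe can be as small as the following sketch (the caption text is arbitrary); 1.exe then just launches it with the message as its argument, for example via CreateProcess() or even a plain system() call.
#include <windows.h>

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                   LPSTR lpCmdLine, int nCmdShow)
{
    /* lpCmdLine holds everything after the executable name;
       GetCommandLine() would return the full line including the exe path. */
    MessageBoxA(NULL, *lpCmdLine ? lpCmdLine : "(no message passed)",
                "Message from 1.exe", MB_OK);
    return 0;
}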
I am using Ruby 1.9.3 in Windows and trying to perform an action where I write filenames to a file one per line (we'll call it a filelist) and then later read this filelist, and call system() to run another program where I will pass it a filename from the filelist. That program I'm calling with system() will take the filename I pass it and convert it to a binary format to be used in a proprietary system.
Everything works up to the point of calling system(). I have a UTF-8 filelist, and reading the filename from the filelist is giving me the proper result. But when I run
system("c:\\foo.exe -arg #{bar}")
the argument bar being passed is not in UTF-8 format. If I run the program manually with a Japanese, Chinese, or other Unicode filename, it works fine and converts the file correctly, but if I run it through system() it doesn't. I know the variable bar holds the correct value because I use it elsewhere without issue.
I've also tried:
system("c:\\foo.exe -arg #{bar.encode("UTF-8")}")
system("c:\\foo.exe -arg #{bar.force_encoding("UTF-8")}")
and neither work. I can only assume the issue here is passing unicode to system.
Can someone else confirm if system does, in fact, support or not support this?
Here is the block of code:
$fname.each do |file|
  flist.write("#{file}\n")                  # This is written properly in UTF-8
  system("ia.exe -r \"#{file}\" -q xbfadd") # The file being passed here is not encoded right!
end
Ruby's system() function, like that in most scripting languages, is a veneer over the C standard library system() call. The MS C runtime uses Win32 ANSI APIs for all the byte-oriented C stdlib functions.
The ANSI APIs use the Windows system locale (aka 'ANSI codepage') to map between byte-oriented strings and Windows's native-UTF16LE strings which are used for filenames and shell commands. Unfortunately, it is impossible to set the system locale to UTF-8; you can set the codepage to 65001 (Windows's equivalent to UTF-8) on a particular console, but the MS CRT has long-standing bugs in its handling of code page 65001 which make a lot of applications fail.
So using the standard cross-platform byte-oriented C interfaces means you can't support Unicode filenames or shell commands, which is rather sad. Some scripting languages have added support for Unicode filenames by calling the Win32 'W' (Unicode) APIs explicitly instead of the C stdlib interfaces. Ruby 1.9.x is making progress in this area, but system() has not been looked at yet.
You can fix it by calling the Win32 API yourself, for example CreateProcessW, but it's not especially pretty.
I upvoted bobince's answer; I believe it is correct.
The only thing I'd add is an additional workaround: since this is a Windows problem, you can write the command line out to a batch file and then use system() to call the batch file.
I used this approach to successfully get around the problem while running Calibre's ebook-convert commandline tool for a book with UTF-8/non-English chars in its title.
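A rough sketch of that workaround (not the exact code used with ebook-convert; depending on the program you call, you may also need a chcp 65001 line in the batch file to switch the console to the UTF-8 code page):
require 'tempfile'

def run_via_batch(command_line)
  bat = Tempfile.new(['unicode_cmd', '.bat'])
  bat.binmode                                    # write the UTF-8 bytes as-is
  bat.write("@echo off\r\n#{command_line}\r\n")
  bat.close
  system(%Q{cmd /c "#{bat.path.tr('/', '\\')}"})
ensure
  bat.unlink
end

# e.g., inside the loop from the question:
#   run_via_batch(%Q{ia.exe -r "#{file}" -q xbfadd})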
I think that bobince's answer is correct, and the solution that worked for me was:
system("c:\\foo.exe -arg #{bar.encode("ISO-8859-1")}")
I have a DLL that I need to make use of, and a program that calls into it. I need to be able to use this DLL from another program, but the previous programmer did not leave any documentation or source code. Is there a way I can monitor what calls are made to this DLL and what is passed to them?
You can't, in general. This is from the Dependency Walker FAQ:
Q: How do I view the parameter and return types of a function?
A: For most functions, this information is simply not present in the module. The Windows module file format only provides a single text string to identify each function. There is no structured way to list the number of parameters, the parameter types, or the return type. However, some languages do something called function "decoration" or "mangling", which is the process of encoding information into the text string. For example, a function like int Foo(int, int) encoded with simple decoration might be exported as _Foo@8. The 8 refers to the number of bytes used by the parameters. If C++ decoration is used, the function would be exported as ?Foo@@YGHHH@Z, which can be directly decoded back to the function's original prototype: int Foo(int, int). Dependency Walker supports C++ undecoration by using the Undecorate C++ Functions Command.
Edit: One thing you could do, I think, is to get a disassembler and disassemble the DLL and/or the calling code, and work out from that the number and types of the arguments, and the return types. You wouldn't be able to find out the names of the arguments though.
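To see the decoration the FAQ is talking about, a toy DLL is enough. As a sketch (32-bit MSVC, no .def file renaming the exports), something like this produces roughly the names quoted above:
/* Compiled as C, this __stdcall export shows up as _Foo@8
   (8 = combined size in bytes of the two int parameters). */
__declspec(dllexport) int __stdcall Foo(int a, int b)
{
    return a + b;
}

/* The same function compiled as C++ is exported with full C++ decoration,
   something like ?Foo@@YGHHH@Z, which encodes the calling convention,
   return type and parameter types and can be fed to undname.exe. */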
You can hook the functions in the DLL you wish to monitor (if you know how many arguments they take).
You can use dumpbin (which ships with Visual Studio Professional and VC++ Express; alternatively, install the Platform SDK, or even use OpenWatcom C++) on the DLL and look at the 'exports' section. As an example:
dumpbin /all SimpleLib.dll | more
Output would be:
Section contains the following exports for SimpleLib.dll

    00000000 characteristics
    4A15B11F time date stamp Thu May 21 20:53:03 2009
        0.00 version
           1 ordinal base
           2 number of functions
           2 number of names

    ordinal hint RVA      name

          1    0 00001010 fnSimpleLib
          2    1 00001030 fnSimpleLib2
Look at the ordinals: those are the two exported functions. The only thing left is to work out what parameters they take...
You can also use PE Explorer to find this out for you. Working out the parameters is a bit tricky: you would need to disassemble the binary, find the function at its offset in the file, and infer the parameters from how the 'SP' and 'BP' registers are used.
Are file descriptors supported on Windows? Why do things "seem to work" in Perl with fds?
Things like fileno, dup and dup2 were working, but then, inside some other environment, they randomly stopped working. It's hard to give details; mostly I'm looking for answers from experienced Windows programmers about how file descriptors do or don't work on Windows.
I would guess that it's the PerlIO layer playing games and making it seem as though file descriptors work, but that's only a guess.
Example of what is happening:
open($saveout, ">&STDOUT") or die();
...
open(STDOUT, ">&=".fileno($saveout)) or die();
The second line die()s but only in certain situations (which I have yet to nail down).
Windows supports file descriptors natively through the C runtime; see Low-Level I/O on MSDN. Those functions all report errors through the C variable errno, which means failures show up in Perl's $!.
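As a quick illustration of that CRT layer (plain C, nothing Perl-specific), the same save-and-restore dance your Perl code does maps directly onto the low-level functions:
#include <io.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(void)
{
    /* Duplicate fd 1, much like Perl's open($saveout, ">&STDOUT"). */
    int saved = _dup(_fileno(stdout));
    if (saved == -1) {
        fprintf(stderr, "_dup failed: %s\n", strerror(errno));
        return 1;
    }

    /* ... redirect or close stdout here ... */

    /* Point fd 1 back at the saved copy, i.e. restore standard output. */
    if (_dup2(saved, _fileno(stdout)) == -1)
        fprintf(stderr, "_dup2 failed: %s\n", strerror(errno));
    _close(saved);
    return 0;
}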
Note that you can save yourself a bit of typing:
open(STDOUT, ">&=", $saveout) or ...;
This works because the documentation for open in perlfunc provides:
If you use the 3-arg form then you can pass either a number, the name of a filehandle or the normal “reference to a glob.”
Finally, always include meaningful diagnostics when you call die! The program below identifies itself ($0), tells what it was trying to do (open), and why it failed ($!). Also, because the message doesn't end with a newline, die adds the name of the file and line number where it was called.
my $fakefd = 12345;
open(STDOUT, ">&=", $fakefd) or die("$0: open: $!");
This produces
prog.pl: open: Bad file descriptor at prog.pl line 2.
According to the documentation for _fdopen (because you used >&= and not >&), it has two failure modes:
If execution is allowed to continue, errno is set either to EBADF, indicating a bad file descriptor, or EINVAL, indicating that mode was a null pointer.
The second would be a bug in perl and is highly unlikely: I don't see anything in perlio.c that involves a computed mode; they're all static strings.
Something appears to have gone wrong with $saveout. Could $saveout have been closed before you try to restore it? From your example, it's unclear whether you enabled the strict pragma. If it's not lexical (declared with my), are you calling a function that also monkeys with $saveout?
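For completeness, a minimal save-and-restore that works under strict with a lexical handle and reports failures (a sketch, independent of your surrounding code):
use strict;
use warnings;

# Save a duplicate of STDOUT in a lexical handle.
open(my $saveout, '>&', \*STDOUT) or die "$0: save STDOUT: $!";

# ... redirect STDOUT, run code that prints, etc. ...

# Re-attach STDOUT to the saved handle's descriptor.
open(STDOUT, '>&=', $saveout) or die "$0: restore STDOUT: $!";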