I added the flag "--print-opt-code" to the V8 engine, so V8 can print the compiled code to stdout. Now I want to print this information to specific files instead; which function should I modify?
Thank you very much~~
Try this: --redirect-code-traces-to=<filename> ;-)
Generally speaking: follow the FLAG_print_opt_code through the source and see what it does.
Depending on where you run V8 (in d8? in Chrome? in node.js?), it might be easiest to just use shell redirection: .../d8 test.js --print-opt-code > output.txt
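If you want the disassembly written to a file rather than relying on shell redirection, something along these lines should work (a sketch only; the d8 path is a placeholder and the exact flag behaviour may differ between V8 versions):
out/x64.release/d8 --print-opt-code --redirect-code-traces --redirect-code-traces-to=opt_code.asm test.js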
Related
My team has all our Golang code in a monorepo.
Various package subdirectories with library code.
Binaries/services/tools under cmd
We've had it for a while and are doing some cleanup. Are there any tools or techniques that can find functions not used by the binaries under cmd?
I know go vet can find private functions that are unused in a package. However I suspect we also have exported library functions that aren't used either.
UPD 2020: The unused tool has been incorporated into staticcheck.
Unfortunately, v0.0.1-2020.1.4 will probably be the last to support this
feature. Dominik explains that it is because the check consumes a lot of
resources and is hard to get right.
To get that version:
env GO111MODULE=on go get honnef.co/go/tools/cmd/staticcheck@v0.0.1-2020.1.4
To use it:
$ staticcheck --unused.whole-program=true -- ./...
./internal/pkg/a.go:5:6: type A is unused (U1001)
Original answer below.
Dominik Honnef's unused tool might be what you're looking for:
Optionally via the -exported flag, unused can analyse all arguments as a
single program and report unused exported identifiers. This can be useful for
checking "internal" packages, or large software projects that do not export
an API to the public, but use exported methods between components.
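For example, something like this (a sketch; the flag name is taken from the tool's description above and may have changed between versions):
go get honnef.co/go/tools/cmd/unused
unused -exported ./...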
Try running go build -gcflags -live. This passes the -live flag to the compiler (go tool compile), instructing it to output debugging messages about liveness analysis. Unfortunately, it only prints when it's found live code, not dead code, but you could in theory look to see what doesn't show up in the output.
Here's an example from compiling the following program stored in dead.go:
package main
import "fmt"
func main() {
if true {
fmt.Println(true)
} else {
fmt.Println(false)
}
}
Output of go build -gcflags -live:
# _/tmp/dead
./dead.go:7: live at call to convT2E: autotmp_5
./dead.go:7: live at call to Println: autotmp_5
If I'm reading this correctly, the second line states that the implicit call to convT2E (which converts non-interface types to interface types, since fmt.Println takes arguments of type interface{}) is live, and the third line states that the call to fmt.Println is live. Note that it doesn't say that the fmt.Println(false) call is live, so we can deduce that it must be dead.
I know that's not a perfect answer, but I hope it helps.
It is a bit dirty, but it works for me.
I had a lot of structs which I did not want to check manually, so I wrote a script that renames each struct, runs all the tests (ci/test.sh), and renames it back if any test fails:
#!/bin/sh
set -e
# Find every type declaration of the form "type Foo struct {".
git grep 'struct {' | grep type | while read line; do
    # Each match looks like "file.go:type Foo struct {": extract the file and struct names.
    file=$(echo $line | awk -F ':' '{print $1}')
    struct=$(echo $line | awk '{print $2}')
    # Rename the struct, run the test suite, and rename it back if anything fails.
    sed "s/$struct struct/_$struct struct/g" -i $file
    echo "testing for struct $struct changed in file $file"
    if ! ./ci/test.sh; then
        sed "s/_$struct struct/$struct struct/g" -i $file
    fi
done
It's not an open source solution, but it works.
If you are using GoLand, consider its code-inspections feature, which includes these useful checks:
Reports unused constants
Reports unused exported functions
Reports unused exported types in the main package and in tests
Reports unused unexported functions
Reports global variables that are defined but never used in code
Reports unused function parameters
Reports unused types
(It looks like the implementation of this feature is a black box; JetBrains has not open-sourced it.)
Go-specific detection tools seem to place more emphasis on accuracy and try hard to minimize false positives, so using GoLand's code inspections may require more judgment on your part. :)
Disclosure: I'm just a paying user, I don't work for JetBrains; I simply think this feature works well.
A reliable but inelegant method I've used is to rename or comment out functions you suspect might not be used and then recompile everything -- no errors means you didn't need them.
If they are needed, the compile errors show you where those functions are called, so it's also good for getting familiar with a code base and seeing how things connect.
I'm trying to use the READFITS() function on IDL 8.3 on Mac 10.9.3
My input at the IDL prompt:
readfits('image.fits',h, /EXTEN, /SILENT)
Result:
readfits('image.fits',h, /EXTEN, /SILENT)
^
% Syntax error.
*note: the '^' is below '/EXTEN'
Maybe it will help, so here is a link to the IDL help page on using READFITS() --> http://www.exelisvis.com/docs/readfits.html
I tried using the brackets like they show on that help page, but it still didn't work, so I'm stuck now. Didn't know if anyone here has experience reading .fits files in IDL.
OK, so it turns out the READFITS procedure isn't included in IDL's standard library, so I just had to download AstroLib (which contains lots of useful astronomy procedures, including READFITS). The original syntax then worked.
I'm using IDL 8.2.2 on OS X 10.9.4.
Try keeping it simple first. Do these work?
readfits('image.fits')
readfits('image.fits', header)
Next try this:
readfits('image.fits', header, EXTEN_NO=0)
I suspect you really want extension number 0, not 1. See (e.g.) http://www.stsci.edu/documents/dhb/web/c02_datafiles.fm2.html.
Tim Pope's rails.vim provides a command :A (and a set of related commands) which opens the "alternate" file. For most classes, that's the test, and for the test, the class.
It would sure be nice to have that functionality in non-Rails Ruby projects. Is there a plugin which provides that? Bonus points if it helps me create the test file when I create the implementation file. :)
Our hero tpope wrote rake.vim too. It does the very same things rails.vim does but in Ruby projects.
I created the following command that makes it possible to do
:E /pattern/replace
to jump to the file obtained by taking the current file's path and substituting pattern with replace.
For example, if your test files are in /test/code.js and your src files in /src/code.js, you could define the following commands:
command! -nargs=* Es :call EditSubstitute("/test/src")
command! -nargs=* Et :call EditSubstitute("/src/test")
so that the command :Es jumps from a test file to its source file and the command :Et jumps from a source file to its test file.
Here's the function that does that:
function! EditSubstitute(args)
    " Expect an argument like "/pattern/replace": the first character is the delimiter.
    if len(a:args) < 2
        return
    endif
    let s:delimiter = a:args[0]
    let s:split = split(a:args, s:delimiter, 1)[1:]
    " Apply the substitution to the current file's full path and open the result.
    let s:fullpath = expand('%:p')
    let s:bar = substitute(s:fullpath, s:split[0], s:split[1], "")
    echo s:bar
    silent execute 'edit ' . s:bar
endfunction
command! -nargs=* E :call EditSubstitute(<q-args>)
I know this doesn't really answer your question at all... but I use Vim buffers to provide easy access to a file and its tests.
I keep my test on top, and the file on the bottom. Then I can view both at the same time.
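For example (the paths here are just placeholders for your own layout):
:edit lib/foo.rb
:split spec/foo_spec.rb
:split opens the new window above the current one, so the spec ends up on top and the implementation below.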
I use NERDTree to make browsing easier too, but that is not a prerequisite.
You can get a full reference of what I use here:
https://github.com/coderjoe/dotfiles
If you like it I'd recommend NOT using my dotfiles from the above repo, but start with something like RyanB's dotfiles and build your own sets based on your own preferences. :)
Have a look at the vimrc of the guy from 'Destroy all software' https://github.com/garybernhardt/dotfiles/blob/master/.vimrc#L280
pressing <leader>. will switch you between your code and the spec code.
-frbl
I would like to implement a gem-style console app, and when I say gem (or apt-get, etc.) style, I mean that it will have invocation syntax like:
program.rb verb [argument] [--options ...]
For example
greeter.rb say "Hello world" --bold
I have used optparse, but I think it is not suitable for anything except --option style arguments. Am I wrong about that, or is there a more suitable library to achieve this?
I suggest not parsing from scratch; use GLI, which lets you provide (via its DSL) a git-like interface to your users. Get started here to see how it works.
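To give a rough idea of the DSL, here is a minimal, untested sketch of a greeter built on GLI; the command, switch, and file names are just for illustration, and the exact DSL details may vary between GLI versions:
#!/usr/bin/env ruby
# greeter.rb -- hypothetical example, not taken from the GLI docs
require 'gli'

include GLI::App

program_desc 'A tiny greeter'

desc 'Print a message'
arg_name 'message'
command :say do |c|
  c.desc 'Print the message in bold'
  c.switch [:b, :bold]

  c.action do |global_options, options, args|
    message = args.join(' ')
    message = "\e[1m#{message}\e[0m" if options[:bold]  # ANSI bold escape
    puts message
  end
end

exit run(ARGV)
Invoked as something like greeter.rb say --bold "Hello world"; note that GLI normally expects command options between the command name and its arguments.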
You might also be interested in looking at a real (humble) implementation in a project of mine. Check these files:
https://github.com/empo/RuGPost/blob/master/bin/rugpost
https://github.com/empo/RuGPost/blob/master/lib/rugpost/commands.rb
https://github.com/empo/RuGPost/blob/master/lib/rugpost/commands/post_commands.rb
https://github.com/empo/RuGPost/blob/master/lib/rugpost/commands/project_commands.rb
I have a problem that I cannot seem to find an answer to.
With Perl I need to use a script across Windows and Unix platforms. The problem is that on Windows we use Win32-specific modules like Win32::Process, and those modules do not exist on Unix.
I need a way to include those Win32 modules only on Windows.
if($^O =~ /win/i)
{
use win32::process qw(CREATE_NEW_CONSOLE);
}
else
{
#unix fork
}
The problem lies in that use statement for Windows. No matter what I try, this does not compile on Unix.
I have tried using dynamic evals, requires, BEGIN, etc.
Is there a good solution to this problem? Any help will be greatly appreciated.
Thanks in advance,
Dan
Update:
A coworker pointed out to me this is the correct way to do it.
require Win32;
require Win32::Process;
my $flag = Win32::Process::CREATE_NEW_CONSOLE();
Win32::Process::Create($process,
$program,
$cmd,
0,
$flag, ".") || die ErrorReport();
print "Child started, pid = " . getPID() . "\n";
Thank you all for your help!
Dan
use is executed at compile time.
Instead do:
BEGIN {
    if( $^O eq 'MSWin32' ) {
        require Win32::Process;
        # import Win32::Process qw(CREATE_NEW_CONSOLE);
        Win32::Process->import(qw/ CREATE_NEW_CONSOLE /);
    }
    else {
        # unix fork
    }
}
See the perldoc for use.
Also see perlvar on $^O.
Update:
As Sinan Unur points out, it is best to avoid indirect object syntax.
I use direct method calls in every case except calls to import, probably because import masquerades as a built-in. Since import is really a class method, it should be called as a class method.
Thanks, Sinan.
Also, on Win32 systems, you need to be very careful to get the capitalization of your module names correct. Incorrect capitalization means that symbols won't be imported properly, and it can get ugly: use win32::process may appear to work fine.
Are you sure win32::process can be loaded on OSX? "darwin" matches your /win/i.
You may want to use http://search.cpan.org/dist/Sys-Info-Base/ which tries to do the right thing.
That aside, can you post an example of the code that you actually are using, the failure message you're receiving, and on which unix platform (uname -a) ?
What about a parser that modifies the file on each OS?
You could run your Perl file through a configure script that works on both operating systems and outputs Perl with the proper use statements. You could even bury the parse action in the executable script that launches the code.
Originally I was thinking preprocessor directives like C's would do the trick, but I don't know Perl very well.
Here's an answer to your second set of questions:
Are you using strict and warnings?
Did you define an ErrorReport() subroutine? ErrorReport() is just an example in the synopsis for Win32::Process.
CREATE_NEW_CONSOLE is probably not numeric because it didn't import properly. Check the capitalization in your call to import.
Compare these one-liners:
C:\>perl -Mwin32::process -e "print 'CNC: '. CREATE_NEW_CONSOLE;"
CNC: CREATE_NEW_CONSOLE
C:\>perl -Mwin32::process -Mstrict -e "print 'CNC: '. CREATE_NEW_CONSOLE;"
Bareword "CREATE_NEW_CONSOLE" not allowed while "strict subs" in use at -e line 1.
Execution of -e aborted due to compilation errors.
C:\>perl -MWin32::Process -e "print 'CNC: '. CREATE_NEW_CONSOLE;"
CNC: 16
You could just place your platform-specific code inside an eval {} and check for an error:
BEGIN {
    eval {
        require Win32::Process;
        Win32::Process->import(qw'CREATE_NEW_CONSOLE');
    };
    if( $@ ){   # $@ is $EVAL_ERROR: true when the require/import above failed
        # Unix code here
    }
}