Right way to run the same code twice in v8 (array out-of-bounds fails on second run - deoptimizer)

The following program is based on the example in the v8 Getting Started page. I have made three changes to demonstrate a problem I am encountering:
I create an empty array and put it into the global context.
The script being run references the zeroth element in the array, which should return undefined.
I run the compiled script twice.
The first run works fine. The second fails: v8 calls V8_Fatal() in Deoptimizer::DoComputeCompiledStubFrame() because descriptor->register_param_count_ == -1.
Am I doing something wrong here? How can I fix it?
Isolate* isolate = Isolate::New();
Isolate::Scope isolate_scope(isolate);
HandleScope handle_scope(isolate);
Local<Context> context = Context::New(isolate);
Context::Scope context_scope(context);
// Empty array exposed to the script as the global "a".
Local<Array> a = Array::New(isolate);
context->Global()->Set(String::NewFromUtf8(isolate, "a"), a);
// Script that reads the zeroth element, which does not exist yet.
Local<String> source = String::NewFromUtf8(isolate, "a[0];");
Local<Script> script = Script::Compile(source);
Local<Value> result = script->Run();   // first run: works
Local<Value> result2 = script->Run();  // second run: V8_Fatal() in the deoptimizer
return 0;
NOTES:
This is the entire body of main().
Other fragments of JavaScript code run twice without a problem. Somehow this relates to the out-of-bounds array reference, which is perhaps triggering deoptimization.
I do not want to recompile the script from scratch each time because I am typically running these scripts thousands of times, and sometimes millions of times.
I have tried compiling the script as an UnboundScript and then binding it for each execution, but the result is the same (a sketch of that variant follows these notes).
I have reported this as a v8 issue, but nobody has responded, so I'm hoping that the Stack Overflow community can help.
I am seeing this on VS2012 Update 4, but I also see it on VS2008, and in both x64 and x86 and in both Debug and Release builds.
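For reference, here is roughly what the UnboundScript variant looked like. This is only a sketch against the V8 API of that era (ScriptCompiler::CompileUnbound and UnboundScript::BindToCurrentContext), not my exact code:
// Sketch: compile once, then bind to the current context and run per execution.
ScriptCompiler::Source script_source(String::NewFromUtf8(isolate, "a[0];"));
Local<UnboundScript> unbound = ScriptCompiler::CompileUnbound(isolate, &script_source);
Local<Script> run1 = unbound->BindToCurrentContext();
Local<Value> r1 = run1->Run();   // first run: fine
Local<Script> run2 = unbound->BindToCurrentContext();
Local<Value> r2 = run2->Run();   // second run: same V8_Fatal() in the deoptimizer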

OK, found it. The problem is an uninitialized code stub for dictionary loads; your use case triggers the failure because the stub isn't initialized through other means, e.g. compilation.
Below is a patch against v8 trunk revision 22629 that fixes the problem for me, tested on Windows with VS 2010 and Linux with g++ 4.9. Please let me know how you go with this:
Index: src/code-stubs.cc
===================================================================
--- src/code-stubs.cc (revision 22629)
+++ src/code-stubs.cc (working copy)
@@ -236,6 +236,8 @@
CODE_STUB_LIST(DEF_CASE)
#undef DEF_CASE
case UninitializedMajorKey: return "<UninitializedMajorKey>Stub";
+ case NoCache:
+ return "<NoCache>Stub";
default:
if (!allow_unknown_keys) {
UNREACHABLE();
@@ -939,6 +941,13 @@
// static
+void KeyedLoadDictionaryElementStub::InstallDescriptors(Isolate* isolate) {
+ KeyedLoadDictionaryElementStub stub(isolate);
+ InstallDescriptor(isolate, &stub);
+}
+
+
+// static
void KeyedLoadGenericElementStub::InstallDescriptors(Isolate* isolate) {
KeyedLoadGenericElementStub stub(isolate);
InstallDescriptor(isolate, &stub);
Index: src/code-stubs.h
===================================================================
--- src/code-stubs.h (revision 22629)
+++ src/code-stubs.h (working copy)
@@ -1862,6 +1862,8 @@
virtual void InitializeInterfaceDescriptor(
CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE;
+ static void InstallDescriptors(Isolate* isolate);
+
private:
Major MajorKey() const { return KeyedLoadElement; }
int NotMissMinorKey() const { return DICTIONARY_ELEMENTS; }
Index: src/isolate.cc
===================================================================
--- src/isolate.cc (revision 22629)
+++ src/isolate.cc (working copy)
@@ -2000,6 +2000,7 @@
NumberToStringStub::InstallDescriptors(this);
StringAddStub::InstallDescriptors(this);
RegExpConstructResultStub::InstallDescriptors(this);
+ KeyedLoadDictionaryElementStub::InstallDescriptors(this);
KeyedLoadGenericElementStub::InstallDescriptors(this);
}
As a workaround, if you don't want to compile your own V8 for now, you could execute some code on each Isolate that uses the KeyedLoadDictionaryElementStub directly, prior to running your code; this should initialize the stub. Something like the following works for me:
Isolate* isolate = Isolate::New();
Isolate::Scope isolate_scope(isolate);
HandleScope handle_scope(isolate);
Local<Context> context = Context::New(isolate);
Context::Scope context_scope(context);
Local<Array> a = Array::New(isolate);
context->Global()->Set(String::NewFromUtf8(isolate, "a"), a);
// Workaround code for initializing KeyedLoadDictionaryElementStub
Local<String> workaround_source = String::NewFromUtf8(isolate, "Math.random()");
Local<Script> workaround_script = Script::Compile(workaround_source);
Local<Value> workaround_value = workaround_script->Run();
// End workaround
Local<String> source = String::NewFromUtf8(isolate, "a[0]");
Local<Script> script = Script::Compile(source);
// ...and so on

Unable to see the plugin compiled in the custom wireshark run?

I am following the foo example given in the Wireshark documentation.
I am able to build the foo plugin code. I am using Wireshark version 3.0.1. In the work-root folder, I have updated the PLUGIN_SRC_DIRS target to include plugins/epan/foo just before gryphon.
I can see that my code builds, because I got a compilation error which I was able to fix.
My foo code lives inside the plugins/epan folder.
I am running the custom Wireshark with sudo ./run/wireshark.
The surprise is that I can't even see the gryphon protocol field in the running Wireshark. To test this, I type foo or gryphon into the display filter; it turns red and says that foo is neither a protocol nor a protocol field. I am building on Ubuntu 16.04 LTS and the build goes fine.
Here is packet-foo.c
#include "config.h"
#include <epan/packet.h>
#include "packet-foo.h"
static int proto_foo = -1;
static int dissect_foo(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree _U_, void *data _U_);
void
proto_register_foo(void)
{
proto_foo = proto_register_protocol (
"FOO Protocol", /* name */
"FOO", /* short name */
"foo" /* abbrev */
);
}
void
proto_reg_handoff_foo(void)
{
static dissector_handle_t foo_handle;
foo_handle = create_dissector_handle(dissect_foo, proto_foo);
dissector_add_uint("udp.port", FOO_PORT, foo_handle);
}
static int
dissect_foo(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree _U_, void *data _U_)
{
col_set_str(pinfo->cinfo, COL_PROTOCOL, "FOO");
/* Clear out stuff in the info column */
col_clear(pinfo->cinfo,COL_INFO);
return tvb_captured_length(tvb);
}
Here is the packet-foo.h
#define FOO_PORT 1234
Here is CMakeLists.txt, which is actually a copy of the gryphon one.
So I am wondering: if gryphon isn't recognised, does that mean foo won't be recognised either? This file might be the source of the problem.
include(WiresharkPlugin)
# Plugin name and version info (major minor micro extra)
set_module_info(foo 0 0 4 0)
set(DISSECTOR_SRC
packet-foo.c
)
set(PLUGIN_FILES
plugin.c
${DISSECTOR_SRC}
)
set_source_files_properties(
${PLUGIN_FILES}
PROPERTIES
COMPILE_FLAGS "${WERROR_COMMON_FLAGS}"
)
include_directories(${CMAKE_CURRENT_SOURCE_DIR})
register_plugin_files(plugin.c
plugin
${DISSECTOR_SRC}
)
add_plugin_library(foo epan)
target_link_libraries(foo epan)
install_plugin(foo epan)
file(GLOB DISSECTOR_HEADERS RELATIVE "${CMAKE_CURRENT_SOURCE_DIR}" "*.h")
CHECKAPI(
NAME
foo
SWITCHES
-g abort -g termoutput -build
SOURCES
${DISSECTOR_SRC}
${DISSECTOR_HEADERS}
)
Merely changing the plugin isn't sufficient.
You need to modify the top-level build files so that foo is actually built and installed.
vim CMakeListsCustom.txt.example
Firstly, uncomment line 16 (a sketch of the resulting file follows these steps):
plugins/epan/foo
Since your foo lives inside plugins/epan/foo
Now rename the example file:
mv CMakeListsCustom.txt.example CMakeListsCustom.txt
vim CMakeLists.txt
Then insert a line at around line 1408:
plugins/epan/foo
After that, do make
and then sudo make install
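For reference, after the uncomment step the custom file ends up looking roughly like this; the variable name below is taken from the 3.0.x example file and may differ in other Wireshark versions, so treat it as a sketch:
# CMakeListsCustom.txt (sketch)
# List your custom plugin source directories so the top-level build picks them up.
set(CUSTOM_PLUGIN_SRC_DIR
	plugins/epan/foo
)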
Here is the working copy -
https://github.com/joshis1/WiresharkDissectorFoo

How can I activate features in all my crates?

I want to conditionally enable run-time checks and logging, independently from each other and from debug and release mode. So I've started adding two features to my project, one called "invariant-checking" and one called "logging". Ultimately I want to use them through macros defined in a crate that is visible project-wide.
I had assumed that if I filled out the features section the same way in all of the crates, then activating the feature while compiling the bin crate would enable it in all the lib crates as well, but this is not the case! How can I enable and disable features across multiple crates? Hopefully this can be done by changing only one thing, such as the command-line arguments to cargo.
To clarify exactly what I want, here's an example, which I will also reproduce below:
There are three crates: the main bin crate, and two lib crates called "middle" and "common". Here are the relevant parts of the relevant files:
main.rs
extern crate common;
extern crate middle;
fn main() {
common::check!();
middle::run();
println!("done");
}
the main Cargo.toml
[dependencies]
[dependencies.common]
path = "libs/common"
[dependencies.middle]
path = "libs/middle"
[features]
default = []
invariant-checking = []
logging = []
middle's lib.rs
extern crate common;
pub fn run() {
common::check!();
common::run();
}
middle's Cargo.toml
[dependencies]
[dependencies.common]
path = "../common"
[features]
default = []
invariant-checking = []
logging = []
common's lib.rs
#[macro_export]
macro_rules! check {
() => {{
if cfg!(feature = "invariant-checking") {
println!("invariant-checking {}:{}", file!(), line!());
}
if cfg!(feature = "logging") {
println!("logging {}:{}", file!(), line!());
}
}};
}
pub fn run() {
check!()
}
and finally common's Cargo.toml
[dependencies]
[features]
default = []
invariant-checking = []
logging = []
When I run cargo run --features "invariant-checking,logging" I get the following output:
invariant-checking src\main.rs:5
logging src\main.rs:5
done
but I want it to log in middle and common as well. How can I transform this project so that it does that, while still allowing me to get only "done" as output by changing only one place?
How can I enable and disable features across multiple crates?
A Cargo.toml can declare features that transitively enable other features, including features belonging to its dependencies.
For example, in the Cargo.toml of a crate which depends on crates foo and bar:
[dependencies]
foo = "0.1"
bar = "0.1"
[features]
default = []
invariant-checking = [ "foo/invariant-checking", "bar/invariant-checking" ]
logging = [ "foo/logging", "bar/logging" ]
This crate adds the invariant-checking and logging features. Enabling them transitively enables the respective features of the crates foo and bar, so that
cargo build --features=logging,invariant-checking
will enable the logging and invariant-checking features in this crate as well as in its dependencies foo and bar.
In your particular case, you probably want main to transitively enable the features of middle and common, and for middle to transitively enable the features of common.
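As a minimal sketch, using the crate names and paths from the question (only the [features] sections change):
# main Cargo.toml (sketch)
[features]
default = []
invariant-checking = ["middle/invariant-checking", "common/invariant-checking"]
logging = ["middle/logging", "common/logging"]

# middle's Cargo.toml (sketch)
[features]
default = []
invariant-checking = ["common/invariant-checking"]
logging = ["common/logging"]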
The macro definitions in their current form have a problem: The code inside the macro gets inlined whenever the macro is used, and then compiled in the context where it got inlined. Since you use runtime feature checks like
if cfg!(feature = "invariant-checking")
this means that you need to define the features in every crate where you use the macro. In the common crate itself, on the other hand, the feature is never queried and is thus redundant.
This seems completely backwards to me. The feature flag should only be queried in the common crate, and using the macro should not require first defining a feature flag in the crate that uses it. For this reason, I suggest using compile-time checks to select which macro to define:
#[cfg(feature = "invariant-checking")]
macro_rules! check_invariant {
() => ( println!("invariant-checking {}:{}", file!(), line!()); )
}
#[cfg(not(feature = "invariant-checking"))]
macro_rules! check_invariant {
() => ()
}
#[cfg(feature = "logging")]
macro_rules! logging {
() => ( println!("logging {}:{}", file!(), line!()); )
}
#[cfg(not(feature = "logging"))]
macro_rules! logging {
() => ()
}
#[macro_export]
macro_rules! check {
() => ( check_invariant!(); logging!(); )
}
This way, you only need to define the features in the common crate, as it should be. As long as you only use a single version of that crate, switching the flags on and off has a global effect.
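With the features forwarded as sketched above, the commands from the question should then behave as desired:
cargo run --features "invariant-checking,logging"   # logs from main, middle and common
cargo run                                           # prints only "done"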

Can't add jvmArgs using add(), why?

Can anyone explain why the first sample works while the second does nothing?
test {
jvmArgs '-Xdebug',
'-Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=4000'
}
test {
jvmArgs.add('-Xdebug')
jvmArgs.add('-Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=4000')
}
Because in the second example the getJvmArgs() getter is invoked: you get the list and modify it, but the changes are not reflected back in the settings; it is effectively read-only access. In the first example the jvmArgs(...) method is invoked, and the arguments passed take effect.
Here's the explanation: a copy of the list is returned (for safety reasons; mutable types should always be returned as a copy):
public List<String> getJvmArgs() {
List<String> args = new ArrayList<String>();
for (Object extraJvmArg : extraJvmArgs) {
args.add(extraJvmArg.toString());
}
return args;
}
I found this problematic with the normal command-line arguments in Gradle as well, even to the extent that Example 1 works while Example 2 fails to add the extra argument.
runArgs is set in the main build.gradle as follows:
ext {
runArgs = [ '-server=localhost', '-port=8080' ];
}
Simply appending to the command line appears challenging (see below).
Example 1:
debug.doFirst(){
// ... <snip> ...
// command line arguments
//
println " debug args (a): ${args}."
runArgs.add( "-memo=${project.name}:debug" );
args = runArgs;
println " debug args (b): ${args}."
}
The output is correct: it shows that the "-memo" parameter is added, but it also shows that the passed-in args have been replaced by the script variable with this approach.
debug args (a): [-server, localhost, -port, 8080 ].
debug args (b): [-server=localhost, -port=8080, -memo=Client:debug].
Example 2:
debug.doFirst(){
// ... <snip> ...
// command line arguments
//
println " debug args (a): ${args}."
args.add( "-memo=${project.name}:debug" );
println " debug args (b): ${args}."
}
The output (correctly?) shows there was no add(), per the answer from Opal above.
debug args (a): [-server, localhost, -port, 8080 ].
debug args (b): [-server, localhost, -port, 8080 ].
I've posted this example to show that there may be alternatives to accepting the status quo; I expect jvmArgs to work in a similar pattern. I didn't find examples for adding extra debug-specific arguments, so here is one.
I also saw, in a couple of places (online and in books), examples such as:
jvmArgs.add( "-DAPPLICATION_LOCATION=City" );
jvmArgs.add( "-DSERVER_HOST=localhost" );
which, as we now understand, do not work.
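Forms that do take effect are calling the jvmArgs(...) method, which appends (as in the first sample of the question), or assigning a whole new list; the -D properties below are just the illustrative ones from above:
test {
    // Appends to the existing JVM arguments.
    jvmArgs "-DAPPLICATION_LOCATION=City", "-DSERVER_HOST=localhost"

    // Or replace the whole list via the setter.
    // jvmArgs = ["-DAPPLICATION_LOCATION=City", "-DSERVER_HOST=localhost"]
}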
The use case I set out to implement is for the sub-project build.gradle script to supply missing arguments and/or script-specific parameters (e.g. as in the debug run example). It is clear to me that if you want to do this, the script will need to either replace the command line or analyse the args passed in and then weed out the defaults by some mechanism.
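A sketch of that idea, assuming the debug task is a JavaExec-style task where args(...) appends and args is the current argument list:
debug.doFirst {
    // Only append the memo parameter if the caller did not already supply one;
    // args(...) appends, unlike mutating the list returned by getArgs().
    if (!args.any { it.startsWith('-memo=') }) {
        args("-memo=${project.name}:debug")
    }
}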
Hopefully the example will give others more insight.
Actually, this hack works:
def jvmArgsCopy = jvmArgs
jvmArgsCopy.add("-XX:MaxDirectMemorySize=2g")
jvmArgs = jvmArgsCopy
And to add to this, here it is in the Gradle Kotlin DSL:
val jvmArgsCopy: ArrayList<String> = jvmArgs as ArrayList<String>
jvmArgsCopy.add("-Xdebug")
jvmArgsCopy.add("-Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=4000")
jvmArgs = jvmArgsCopy

EclEmma, Java8 and Lambda - no coverage on lambda expression

I have a Java project under Eclipse Luna, with EclEmma 2.3.1.201405111647 (the latest), which uses JaCoCo 0.7.1, which has support for Java 8 as stated in the changelog:
"Version 2.3.1 (2014/05/11)
Fixed ASM 5.0.1 dependency conflicts with new ASM bundles in Eclipse 4.4 (GitHub #83).
Upgrade to JaCoCo 0.7.1 for full Java 8 support."
I now have the following toString:
@Override
public String toString() {
// [BLOCK0]
if (0 == value) {
return "0B";
}
// [BLOCK1]
final MutableLong val = new MutableLong(value);
final StringBuilder sb = new StringBuilder();
// [BLOCK2]
Arrays.asList(TERA_BYTES, GIGA_BYTES, MEGA_BYTES, KILO_BYTES, BYTES).forEach(unit -> {
// [BLOCK3]
long divider = unit.toBytes(1);
long n = val.longValue() / divider;
if (0 != n) {
sb.append(n).append(unit.getUnitCharacter());
val.subtract(n * divider);
}
});
// [BLOCK4]
return sb.toString();
}
I won't include the JUnit test, because I know it achieves 100% coverage. I can prove it by moving the lambda expression into an appendToString method and replacing the forEach with a for-each loop (for (V value : iterable)).
The result, when I do "Coverage as JUnit Test", is the following:
BLOCK0 is all green
BLOCK1 is all green
BLOCK2 is green, up to the forEach(unit -> {
BLOCK3 is white (as if it were ignored lines)
BLOCK4 is all green.
Can someone explain to me why JaCoCo can't detect coverage in the lambda?
Lambda expression bodies are compiled into synthetic methods, but as far as I have read, synthetic methods are unconditionally filtered out of the code coverage analysis.
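As an illustration of what that means (the names below are made up; javac really generates something like lambda$toString$0 with the synthetic flag and wires it up via invokedynamic), a lambda body is compiled roughly as if it had been moved into a separate method:
import java.util.Arrays;
import java.util.List;

public class LambdaDesugarSketch {
    // Source form: the body lives in a lambda expression.
    static String join(List<String> parts) {
        StringBuilder sb = new StringBuilder();
        parts.forEach(p -> sb.append(p).append(';'));
        return sb.toString();
    }

    // Roughly how it is compiled: the body moves into a compiler-generated
    // method (flagged as synthetic), which is what the coverage agent sees.
    static String joinDesugared(List<String> parts) {
        StringBuilder sb = new StringBuilder();
        parts.forEach(p -> lambdaBody(sb, p)); // stands in for the invokedynamic hook-up
        return sb.toString();
    }

    /* synthetic */ static void lambdaBody(StringBuilder sb, String p) {
        sb.append(p).append(';');
    }

    public static void main(String[] args) {
        System.out.println(join(Arrays.asList("a", "b")));          // a;b;
        System.out.println(joinDesugared(Arrays.asList("a", "b"))); // a;b;
    }
}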
By looking at the Change History of JaCoCo I see
Snapshot Build 0.7.2.201408210455 (2014/08/21)
Fixed Bugs
Do not ignore synthetic lambda methods to get code coverage for Java 8 lambda expressions (GitHub #232).
which seems to address your issue. Since you are using EclEmma 2.3.1, which uses JaCoCo 0.7.1, you just need an update.

Scala IDE Debugger "step into" behavior

Current behavior:
Put a breakpoint on the case Twice(n) ... line.
On "step into" the control goes to x match { line
On "step into" the control goes to def TwiceTest = { line
On further "step into" the control goes to if (z % 2 == 0)... line.
Expected behavior:
Put a breakpoint on the case Twice(n) ... line.
On "step into" the control goes to if (z % 2 == 0)... line.
Code Snippet
object testobj extends App {
def TwiceTest = {
val x = Twice(21)
x match {
case Twice(n) => Console.println(n)
} // prints 21
}
TwiceTest
}
object Twice {
def apply(x: Int): Int = x * 2
def unapply(z: Int): Option[Int] = {
if (z % 2 == 0) Some(z / 2) else None
}
}
The current behavior is irritating while debugging a Scala program with lots of nested extractors. I tried this with the new Scala debugger as well as the Java debugger, but with the same result.
Step Filtering also does not help in this case.
As a workaround, I am putting a breakpoint in the unapply method and resuming from the first breakpoint. Can someone please suggest a cleaner method?
Edit 1
I am using Scala IDE (latest nightly build: 2.1.0.nightly-2_09-201208250315-529cd70)
Eclipse version: Indigo Service Release 2, build id: 20120216-1857
OS: Windows 7 (64-bit)
The line number information in the bytecode is wrong. It is not an issue with the IDE, but with the Scala compiler. When pattern matching is compiled, synthetic code sometimes gets the wrong position information.
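Roughly speaking, the match in TwiceTest is compiled into a call to the extractor plus synthetic glue code, something like the sketch below, and it is that glue code that ends up with surprising position information:
// Conceptual desugaring only; the actual compiler output differs in detail.
val x = Twice(21)
Twice.unapply(x) match {
  case Some(n) => Console.println(n)
  case None    => throw new MatchError(x)
}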
I assume you are using Scala 2.9.2. In the next version of Scala (2.10.0), there are significant improvements in the pattern matcher, so it would be good to give it a try.
