There are some languages with a type system powerful enough to prove at compile time that code never addresses an array outside its bounds. My question: if we were to compile such a language to the JVM, is there some way to take advantage of that for performance and remove the array bounds checks that occur on every array access?
1) I know that recent JDKs support some array bounds check elimination, but since I know at compile time that certain accesses are safe, I could safely remove a lot more.
2) Some might think this doesn't affect performance much, but it most certainly does, especially in array/computation-heavy applications such as scientific computing.
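To make the cost concrete, here is a sketch of the kind of kernel I have in mind (all names hypothetical); every element access carries an implicit bounds check on the JVM unless the JIT can prove it away, even though a sufficiently strong type system could discharge all of those proofs at compile time:

    // Hypothetical matrix multiply: each a[i][k], b[k][j] and c[i][j] access
    // is bounds-checked by the JVM unless the JIT manages to eliminate it.
    static void multiply(double[][] a, double[][] b, double[][] c, int n) {
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                double sum = 0.0;
                for (int k = 0; k < n; k++) {
                    sum += a[i][k] * b[k][j]; // two checked loads per iteration
                }
                c[i][j] = sum;                // one checked store
            }
        }
    }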
The same question applies to casting. I know something is a certain type, but Java doesn't because of its limited type system. Is there some way to just tell the JVM to "trust me" and skip any checks?
I realize there is probably no way to do this with the JVM as generally distributed, but could it be reasonable to modify a JVM to add this feature? Is this something that has been done?
It's one of the frustrations of compiling a more powerfully typed language to the JVM: the result is still hampered by Java's limitations.
In principle this cannot be done safely without a proof-carrying code (PCC) infrastructure. PCC would allow you to embed your safety reasoning in the class file; the embedded proof is checked at class-loading time, and the class is not loaded if the proof is flawed.
If the JVM ever allowed you to drop runtime checks without requiring a formal proof, then, as SecurityMatt put it, it would defeat the original philosophy of Java as a safe platform.
The JVM already uses a special form of PCC for type-checking local variables in a method: all local-variable typing info is used by the class-loading mechanism to check the code's correctness, then discarded. But that's the only instance of PCC concepts in the JVM; as far as I know there is no general PCC infrastructure for it.
I once heard one existed for the JavaCard platform, which supports a small subset of Java. I'm not sure whether that helps with your problem, though.
One of the key features of Java is that it does not need to "trust" the developer to do bounds checking. This eliminates the "buffer overflow" class of security vulnerabilities, which can allow attackers to execute arbitrary code within your application.
By giving developers the ability to turn off bounds checking, Java would lose one of its key features: no matter how wrong a Java developer is, there will be no exploitable buffer overflows in his/her code.
If you would like to use a language where the programmer is trusted to manage their own bounds checking, might I suggest C++. It gives you the ability to allocate arrays with no automatic bounds checking (new int[n]) as well as containers with built-in bounds checking (std::vector via at(); note that its operator[] is unchecked).
Additionally, I strongly suggest that before blaming bounds checking for your application's speed loss, you do some benchmarking to determine whether something else in your code might be the real bottleneck.
You may find that, as a compiler target, a bytecode language such as MSIL is better suited to your needs than Java bytecode. MSIL is strongly typed and does not suffer from a number of the inefficiencies that you have found in Java.
Related
Maybe a newb's question, but if you never ask you'll never know.
Can using Stripe's Sorbet (https://sorbet.org/) on a RoR app potentially improve the app's performance?
(performance meaning response times, not robustness / runtime error rate)
I did some reading on dynamically typed languages (particularly JavaScript in this case) and found out that if we keep sending some function (foo, for example) the same type of objects, the engine does some optimising work on that function, so that when it is invoked again with the same types, the interpreting work is quicker.
I thought maybe the Ruby interpreter does similar work, which could mean that type checking might increase interpreting speed.
I thought maybe the Ruby interpreter does similar work, which could mean that type checking might increase interpreting speed.
It doesn't yet, but one could potentially build this one day.
The goal of Sorbet was to build a type system for people, as opposed to a type system for computers (the compiler). It can introduce some performance overhead, but as Stripe runs it in production, we keep that overhead in check: internally, we page ourselves if it exceeds 7% of CPU time.
I did some reading on dynamically typed languages (particularly JavaScript in this case) and found out that if we keep sending some function (foo, for example) the same type of objects, the engine does some optimising work on that function, so that when it is invoked again with the same types, the interpreting work is quicker.
Yes, this can be done. What you're describing is a common optimization in just-in-time (JIT) compilers. The technique you're referring to relies on runtime profiling and is in fact a common alternative that achieves the same result in the absence of a type system. It's also worth noting that well-built JITs can apply it more broadly than a type system allows, since a type system encodes what could happen, while profiling and JITs optimize for what actually happens in practice.
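As a hedged illustration in Java (the names Op, Inc and run are invented for this sketch): if HotSpot only ever observes one implementation of Op at the call site below, it can speculatively devirtualize and inline the call, guarding that assumption with a cheap check and deoptimizing if a second implementation shows up later - no type annotation required.

    interface Op { int apply(int x); }

    static final class Inc implements Op {    // suppose this is the only Op ever loaded
        public int apply(int x) { return x + 1; }
    }

    static int run(Op op, int n) {
        int acc = 0;
        for (int i = 0; i < n; i++) {
            acc = op.apply(acc);  // profiled call site: monomorphic in practice,
        }                         // so the JIT can devirtualize and inline it
        return acc;
    }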
That said, building a JIT is frequently much more work than building an online compiler, so, depending on the amount of investment one wants to put into speeding up Ruby, either building a JIT or using types can prove better under different real-world constraints.
I thought maybe the Ruby interpreter does similar work, which could mean that type checking might increase interpreting speed.
Summarizing the previous paragraph: the Sorbet type system does not currently speed up Ruby, but it doesn't slow it down much either.
Type systems can indeed be used to speed up languages, but they aren't your only tool; profiling and JIT compilation are the major competitor.
The optimizations you are talking about apply more to the JIT that is being worked on for the Ruby runtime.
In general, Sorbet aims at type safety by introducing type interfaces, i.e. method signatures. These enable static type checks that are applied before deploying the application in order to get rid of type errors.
Sorbet also comes with a runtime component that can enforce type checks at runtime in your running application, but those will decrease the application's performance, as they wrap method calls in order to check for correct types: https://sorbet.org/docs/runtime#runtime-checked-sig-s
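A rough Java analogy of what such a runtime-checked signature does (Handler and checked are invented names): every call pays a type test before the real body runs, which is exactly the wrapping overhead described above.

    interface Handler { Object handle(Object arg); }

    // Wraps a handler so that each call validates its argument type first,
    // loosely analogous to what a runtime-checked sig does to a Ruby method.
    static Handler checked(Class<?> expected, Handler inner) {
        return arg -> {
            if (!expected.isInstance(arg)) {   // the per-call cost
                throw new IllegalArgumentException("expected " + expected.getName());
            }
            return inner.handle(arg);
        };
    }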
I've heard many anecdotes that a large problem with dynamically typed languages is that type checking is very slow. Why is it slow, though? What is the computer-science rationale for why runtime-assigned types that may change cause large slowdowns in computational efficiency?
Dynamically typed languages must perform type checking while code is running. Although they can sometimes be compiled, they need to cut many corners for reasonable performance. One big drawback of checking at runtime is that if a type turns out to be invalid, the interpreter can only throw exceptions or stop execution.
So they often try to coerce types to prevent exceptions, even when that may be undesirable. In Python, it isn't uncommon to discover that a simple division of whole integers means my user output is suddenly full of '2.0' because I didn't explicitly cast back to int.
The computer-science rationale is that runtime type checking is expensive. For every function you call, all the types involved must be validated (or coerced, which may be another function call), and type information must be updated afterwards. At runtime you can only afford a simple type system and very little optimization. A compiler, by comparison, can exploit even a weak type system to optimize your inefficient algorithms away.
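A toy Java sketch of that bookkeeping (the Value class is invented for illustration): even a single addition must inspect both operands' tags before any arithmetic happens, whereas statically typed code compiles down to a single machine instruction.

    // A tagged value, roughly as a dynamic-language interpreter might store it.
    static final class Value {
        enum Tag { INT, FLOAT, STRING }
        final Tag tag;
        final Object payload;
        Value(Tag tag, Object payload) { this.tag = tag; this.payload = payload; }
    }

    // Every '+' re-checks the operand tags, coerces if necessary, and
    // allocates a fresh tagged result.
    static Value add(Value a, Value b) {
        if (a.tag == Value.Tag.INT && b.tag == Value.Tag.INT) {
            return new Value(Value.Tag.INT, (Integer) a.payload + (Integer) b.payload);
        }
        if (a.payload instanceof Number && b.payload instanceof Number) {
            double x = ((Number) a.payload).doubleValue();  // coerce up to float
            double y = ((Number) b.payload).doubleValue();
            return new Value(Value.Tag.FLOAT, x + y);
        }
        throw new IllegalStateException("runtime type error: " + a.tag + " + " + b.tag);
    }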
It's very common for statically-typed languages to be compiled, and dynamically-typed languages to be interpreted. This is because if a language is being designed for a compiler, it's a no-brainer to give the responsibility of type-checking to the compiler so that your code will be more optimal and won't need to manage typing at runtime. The less you need to carry at runtime, the faster code will execute.
Ultimately, this means languages designed for interpreters can't afford the level of typing a compiler can. In addition to having less freedom to exploit type information to optimize - strike 1 to performance - they must carry and modify type information at runtime - strike 2. The weaker typing discipline also introduces many type-safety bugs.
Naturally, there are also numerous cases where dynamic typing is desirable. Dynamic languages often take the role of scripting; they're quick to code, easy to interpret, and can be ported to new platforms faster than a compiler! This makes them invaluable for gluing very different systems together. One script can interact with the operating system and many programs on it to schedule a daily download of all the latest cat videos from your favourite website.
As always, I highly recommend having both a dynamic language and a static language in your repertoire. It's invaluable to have access to the guarantees of static typing and to the ease of dynamic typing. Be a code omnivore :)
I've been looking over a few tutorials for JIT and allocating heaps of executable memory at runtime. This is mainly a conceptual question, so please correct me if I got something wrong.
If I understand it correctly, a JIT takes advantage of a runtime interpreter/compiler that outputs native or executable code and, if native binary, places it in an executable code heap in memory, which is OS-specific (e.g. VirtualAlloc() for Windows, mmap() for Linux).
Additionally, some languages like Erlang can have a distributed system such that each part is separated from the others: if one fails, the others can compensate in a modular way, meaning that modules can also be switched in and out at will, if managed correctly, without disturbing overall execution.
With a runtime compiler or some sort of code delivery mechanism, wouldn't it be feasible to load code at runtime arbitrarily to replace modules of code that could be updated?
Example
Say I have a sort(T, T) function that operates on T or T. Now, suppose I have a merge_sort(T,T) function that I have loaded at runtime. If I implement a sort of ledger or register system such that users of the first sort(T,T) can reassign themselves to use the new merge_sort(T,T) function and detect when all users have adjusted themselves, I should then be able to deallocate and delete sort(T,T) from memory.
This basically sounds a lot like a JIT, but the attractive part, to me, is the aspect where you can swap out code arbitrarily at runtime for modules. That way, while a system is not under full load such that every module is in use, modules could be switched to new code automatically as needed. Theoretically, wouldn't this be a way to implement patches such that a user never has to "restart" the program, if the program can swap out code silently in its individual modules? I'd imagine much larger distributed systems can make use of this, but what about smaller ones?
Additionally, some languages like Erlang can have a distributed system such that each part is separated from the others: if one fails, the others can compensate in a modular way, meaning that modules can also be switched in and out at will, if managed correctly, without disturbing overall execution.
You're describing how to make a fault-tolerant system, which is entirely different from replacing code at runtime (known as Dynamic Software Update, or DSU). Indeed, in Erlang you can have one process monitoring other processes, and if one fails, it will migrate the work to another process to keep the system running as expected. Note that DSU is not used to implement fault tolerance; they are different features with different purposes.
Say I have a sort(T, T) function that operates on T or T. Now, suppose I have a merge_sort(T,T) function that I have loaded at runtime. If I implement a sort of ledger or register system such that users of the first sort(T,T) can reassign themselves to use the new merge_sort(T,T) function and detect when all users have adjusted themselves, I should then be able to deallocate and delete sort(T,T) from memory.
This is called DSU, and it is used to do any of the following tasks without taking the system down:
Fix one or more bugs in a piece of code.
Patch security holes.
Employ more efficient code.
Deploy new features.
Therefore, any app or system can use DSU to perform these tasks without requiring a restart.
Erlang enables you to perform DSU in addition to facilitating fault tolerance, as discussed above. For more information, refer to this Erlang white paper.
There are numerous ways to implement DSU. Since you're interested in JIT compilers and assuming that by "JIT compiler" you mean the component that not only compiles IL code but also allocates executable memory and patches function calls with binary code addresses, I'll discuss how to implement DSU in JIT environments. The JIT compiler has to support the following two features:
The ability to obtain or create new binary code at runtime. If you only have IL code, there's no need to allocate executable memory yet, since it still has to be compiled.
The ability to replace a piece of IL code (which might have already been JITted) or binary code with the new piece of code.
Clearly, with these two features, you can perform DSU on a single function. Swapping a whole module or library requires swapping all the functions and global variables exported by that module.
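A minimal Java sketch of the "register" idea from the question (names invented, sort bodies elided): callers reach the function through one level of indirection, so the update is a single atomic store, and the old code becomes collectable once nobody references it.

    import java.util.concurrent.atomic.AtomicReference;
    import java.util.function.UnaryOperator;

    final class Swappable {
        // All callers route through this reference rather than a direct call.
        static final AtomicReference<UnaryOperator<int[]>> sort =
                new AtomicReference<>(Swappable::oldSort);

        static int[] oldSort(int[] a)   { /* old implementation elided */ return a; }
        static int[] mergeSort(int[] a) { /* new implementation elided */ return a; }

        public static void main(String[] args) {
            int[] data = {3, 1, 2};
            sort.get().apply(data);           // runs the old code
            sort.set(Swappable::mergeSort);   // the "update": one atomic store
            sort.get().apply(data);           // subsequent calls run the new code;
        }                                     // the old function is GC'd once unreferenced
    }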
There are many libraries for runtime bytecode generation, such as ASM, Javassist, CGLib, and BCEL, to name a few. All of these tools can manipulate Java bytecode dynamically, which sets them apart from tools like the javac compiler.
I understand that there are some good reasons to generate bytecode and load it into a ClassLoader at runtime. My question is whether there are any performance issues or concerns with these tools when generating bytecode for Java methods or classes that could be very large.
One scenario might be an application that keeps running for a long time where the generated bytecode is trivial but continuous: it would keep generating bytecode and/or classes and loading/unloading them into a ClassLoader continuously.
There is a similar question here, but none of the answers address performance. Could I also have some links to academic articles on this issue?
In the real world it won't really matter which framework you use, unless you are planning to generate millions of new methods and load them at runtime, which would be a bad idea to begin with.
Generating a class at runtime is nothing fancier than filling a byte array with contents. At the point where the JVM is told to interpret those contents as a Java class, it doesn't differ from the way a precompiled class loaded from the hard drive is added to the runtime environment.
Since filling a byte array is trivial, the performance depends on the rules that determine the contents. Parsing source code and validating its correctness is an expensive task. On the other hand, generating code according to hardcoded rules, e.g. fulfilling an interface by calling a single specified method (which is how lambda instantiation works), is usually much faster than loading the equivalent code from the hard drive. Having such rules is the typical use case for runtime bytecode generation.
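For instance, a hedged sketch with ASM (the Greeter interface and the class names are invented) that generates a class fulfilling a one-method interface and loads it - which really is just filling a byte array and handing it to a ClassLoader:

    import org.objectweb.asm.ClassWriter;
    import org.objectweb.asm.MethodVisitor;
    import static org.objectweb.asm.Opcodes.*;

    interface Greeter { String greet(); }       // the interface to fulfil

    public class AsmDemo {
        // A loader whose only job is to turn a byte array into a Class.
        static final class ByteLoader extends ClassLoader {
            Class<?> define(String name, byte[] b) { return defineClass(name, b, 0, b.length); }
        }

        public static void main(String[] args) throws Exception {
            ClassWriter cw = new ClassWriter(ClassWriter.COMPUTE_FRAMES);
            cw.visit(V1_8, ACC_PUBLIC, "GeneratedGreeter", null,
                     "java/lang/Object", new String[] { "Greeter" });

            // Default constructor: just call super().
            MethodVisitor init = cw.visitMethod(ACC_PUBLIC, "<init>", "()V", null, null);
            init.visitCode();
            init.visitVarInsn(ALOAD, 0);
            init.visitMethodInsn(INVOKESPECIAL, "java/lang/Object", "<init>", "()V", false);
            init.visitInsn(RETURN);
            init.visitMaxs(0, 0);               // recomputed by ASM
            init.visitEnd();

            // greet(): return a constant string.
            MethodVisitor greet = cw.visitMethod(ACC_PUBLIC, "greet",
                                                 "()Ljava/lang/String;", null, null);
            greet.visitCode();
            greet.visitLdcInsn("hello from generated bytecode");
            greet.visitInsn(ARETURN);
            greet.visitMaxs(0, 0);
            greet.visitEnd();
            cw.visitEnd();

            byte[] bytes = cw.toByteArray();    // "filling a byte array"
            Class<?> cls = new ByteLoader().define("GeneratedGreeter", bytes);
            Greeter g = (Greeter) cls.getDeclaredConstructor().newInstance();
            System.out.println(g.greet());
        }
    }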
But before thinking about performance, you should ask yourself why you are considering dynamic bytecode generation at all. In most real-life scenarios, the answer to that question already contains the answer to whether performance is relevant at all, or why it is expected to improve by generating code.
I think ASM is the strongest choice for two reasons. First, it's up to date with all of the JVM features, and second, its Visitor Pattern API is very efficient. This second point addresses your performance concerns, I think.
From what I have read, the Java compiler (usually) seems to compile Java into not-very-optimised (if at all) bytecode, leaving optimisation to the JIT. Is this true? And if it is, has there been any exploration (possibly in alternative implementations) of making the compiler optimise the code so the JIT has less work to do (is this possible)?
Also, many people seem to dislike native code generation (sometimes referred to as ahead-of-time compilation) for Java (and many other high-level memory-managed languages), for many reasons such as loss of portability (etc.), but also partially because (at least for languages that have a just-in-time compiler) the thinking goes that ahead-of-time compilation to machine code will miss the optimisations a JIT compiler can perform, and may therefore be slower in the long run.
This leads me to wonder whether anyone has tried to implement profile-guided optimization (http://en.wikipedia.org/wiki/Profile-guided_optimization) - compiling to a binary plus some extras, then running the program and analysing the runtime information of the test run to generate a hopefully better-optimised binary for real-world usage - for Java or other memory-managed languages, and how this would compare to JIT-compiled code. Does anyone have a clue?
Personally, I think the big difference is not between JIT compiling and AOT compiling, but between class-compilation and whole-program optimization.
When you run javac, it only looks at a single .java file, compiling it into a single .class file. All the interface implementations and virtual methods and overrides are checked for validity but left unresolved (because it's impossible to know the true method invocation targets without analyzing the whole program).
The JVM uses "runtime loading and linking" to assemble all of your classes into a coherent program (and any class in your program can invoke specialized behavior to change the default loading/linking behavior).
But then, at runtime, the JVM can remove the vast majority of virtual methods. It can inline all of your getters and setters, turning them into raw field accesses. And once those fields are inlined, it can perform constant propagation to further optimize the code. (At runtime, there's no such thing as a private field.) And if only one thread is running, the JVM can eliminate all synchronization primitives.
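A hedged illustration of the getter case (names invented; this assumes HotSpot's usual inlining behaviour): once getX() is inlined, the loop body is a plain field load, which the optimizer can then hoist out of the loop.

    final class Point {
        private final int x;
        Point(int x) { this.x = x; }
        int getX() { return x; }     // an invokevirtual in the bytecode...
    }

    static long sum(Point p, int n) {
        long s = 0;
        for (int i = 0; i < n; i++) {
            s += p.getX();           // ...but inlined to a raw field load at runtime,
        }                            // after which the load is hoisted out of the loop
        return s;
    }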
To make a long story short, there are a lot of optimizations that aren't possible without analyzing the whole program, and the best time for doing whole program analysis is at runtime.
Profile-guided optimization has some caveats, one of which is mentioned even in the Wiki article you linked. Its results are valid only
for the given samples, representing how your code is actually used by the user or other code.
for the given platform (CPU, memory + other hardware, OS, whatever).
From the performance point of view, there are quite big differences even among platforms that are usually considered (more or less) the same (e.g. compare an old single-core Athlon with 512 MB of RAM to a 6-core Intel with 8 GB, both running Linux but with very different kernel versions).
for the given JVM and its config.
If any of these change, then your profiling results (and the optimizations based on them) are not necessarily valid any more. Most likely some of the optimizations will still have a beneficial effect, but some of them may turn out to be suboptimal (or even degrade performance).
As mentioned, JIT JVMs do something very similar to profiling, but they do it on the fly. It's also called "HotSpot", because it constantly monitors the executed code, looks for hot spots that are executed frequently, and tries to optimize only those parts. At this point it can exploit more knowledge about the code (knowing its context, how it is used by other classes, etc.), so - as mentioned by you and the other answers - it can do better optimizations than a static one. It continues monitoring, and if needed it will do another round of optimization later, this time trying even harder (looking for more, and more expensive, optimizations).
Working on real-life data (usage statistics + platform + config), it can avoid the caveats mentioned before.
The price is the additional time it needs to spend on "profiling" + JIT-ing. Most of the time it's spent quite well.
I guess a profile-guided optimizer could still compete with it (or even beat it), but only in some special cases, if you can avoid the caveats:
you are quite sure that your samples represent the real-life scenario well and that they won't change too much during execution.
you know your target platform quite precisely and can do the profiling on it.
and of course you know/control the JVM and its config.
This will happen rarely, and I guess in general JIT will give you better results, but I have no evidence for it.
Another possibility for getting value from profile-guided optimization is if you target a JVM that can't do JIT optimization (I think most small devices have such a JVM).
BTW, one disadvantage mentioned in other answers would be quite easy to avoid: if static/profile-guided optimization is slow (which is probably the case), then do it only for releases (or RCs going to testers) or during nightly builds (where time doesn't matter so much).
I think the much bigger problem would be having good sample test cases. Creating and maintaining them is usually not easy and takes a lot of time, especially if you want to be able to execute them automatically, which would be quite essential in this case.
The official Java HotSpot compiler does "adaptive optimisation" at runtime, which is essentially the same as the profile-guided optimisation you mentioned. This has been a feature of at least this particular Java implementation for a long time.
The trade-off of performing more static analysis or optimisation passes up front at compile time is essentially the (ever-diminishing) returns you get from this extra effort versus the time it takes for the compiler to run. A compiler like MLton (for Standard ML) is a whole-program optimising compiler with many static checks. It produces very good code, but becomes very, very slow on medium-to-large programs, even on a fast system.
So the Java approach seems to be to use JIT and adaptive optimisation as much as possible, with the initial compilation pass producing just an acceptable valid binary. The absolute opposite end is an approach like MLKit's, which does a lot of static inference of regions and memory behaviour.