I am currently deciding which Pax Runner executor to use when running OSGi applications from the IDE (see http://paxrunner.ops4j.org/space/Executors for a list of available ones). There are basically two choices for me when I want to run a process from the IDE with Pax Runner:
In-Process Executor (runs the OSGi process in the same Java process as Pax Runner itself):
PRO: Easy to attach a debugger to.
PRO: Easy to kill the OSGi process, as no second Java VM is started.
PRO: Faster to start.
CON: Heavily limits Pax Runner's capabilities to non-JVM settings (this one is from the Pax Runner documentation).
Default Executor (runs the OSGi process in a new Java process):
PRO: Enables Pax Runner to apply various JVM settings.
CON: Harder to attach a debugger to (needs some remote debugging setup).
CON: Almost impossible to kill the OSGi process if Pax Runner doesn't do it properly.
CON: Longer startup times as two JVMs are started.
So my question, basically, is whether anyone has experienced a scenario where Pax Runner's JVM-settings capabilities were crucial and the OSGi process would not work with the in-process executor. I have not yet found such an example; however, I need to decide whether to support the in-process executor, the default executor, or both, so a real-world use case that relies on the JVM-setting capabilities of the default executor would really help me make that decision.
If the reason you start Pax Runner from the IDE is testing, have a look at Pax Exam, which also (optionally) uses Pax Runner underneath. Then you don't need to worry about this too much.
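For illustration, a Pax Exam integration test can look roughly like the sketch below. This is a hedged example against the Pax Exam 2.x-era JUnit API; exact annotations and imports vary between Pax Exam versions, and the bundle coordinates are placeholders:

```java
import static org.ops4j.pax.exam.CoreOptions.junitBundles;
import static org.ops4j.pax.exam.CoreOptions.mavenBundle;
import static org.ops4j.pax.exam.CoreOptions.options;

import javax.inject.Inject;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.ops4j.pax.exam.Option;
import org.ops4j.pax.exam.junit.Configuration;
import org.ops4j.pax.exam.junit.JUnit4TestRunner;
import org.osgi.framework.BundleContext;

@RunWith(JUnit4TestRunner.class)
public class MyBundleTest {

    // Pax Exam provisions an OSGi framework and injects the context.
    @Inject
    private BundleContext context;

    @Configuration
    public Option[] config() {
        return options(
                mavenBundle("com.example", "my-bundle", "1.0.0"), // placeholder coordinates
                junitBundles());
    }

    @Test
    public void bundleContextIsAvailable() {
        Assert.assertNotNull(context);
    }
}
```

Pax Exam provisions and tears down the OSGi framework for you, so the executor choice becomes mostly an implementation detail.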
What are the best tools available to continuously evaluate web performance in a CI/CD pipeline like Drone, and to notify about any performance drops?
I have a microservice-based application for which I need to implement continuous performance monitoring.
There is no "best" tool, you need to answer the following questions:
What are the network protocols used by your application. The tool has to support all of them in order to be able to mimic real life usage of the application
Can the tool be executed in unattended command-line non-GUI mode
Can the tool produce the results which can be interpreted by the continuous integration server so it could fail the build automatically in case of performance degradation
Given that you have JMeter in your question tags, take a look at Jenkins and the Performance Plugin, which can mark the build as unstable or failed by comparing the metrics and KPIs of the current build against some "reference" baseline build or the previous build.
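To make the pass/fail idea concrete, here is a minimal hand-rolled sketch of the kind of gate the Performance Plugin automates: compare a current metric against a baseline and exit non-zero so the CI server fails the build. The file names and the 10% threshold are hypothetical:

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class PerfGate {
    public static void main(String[] args) throws Exception {
        // Hypothetical inputs: average response times (ms) extracted from
        // the baseline run and the current run.
        double baseline = Double.parseDouble(Files.readString(Path.of("baseline-avg-ms.txt")).trim());
        double current = Double.parseDouble(Files.readString(Path.of("current-avg-ms.txt")).trim());

        double allowedDegradation = 0.10; // fail if more than 10% slower than baseline
        if (current > baseline * (1 + allowedDegradation)) {
            System.err.printf("FAIL: avg %.1f ms exceeds baseline %.1f ms by more than 10%%%n",
                    current, baseline);
            System.exit(1); // a non-zero exit code makes the CI server fail the build
        }
        System.out.printf("OK: avg %.1f ms is within 10%% of baseline %.1f ms%n", current, baseline);
    }
}
```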
I have a Spring Batch application that runs in a VM; it takes 2 hours to process 10k records.
We are planning to migrate the application to Azure Kubernetes, and I see that the migrated application takes 6 hours.
I need to make the necessary changes to match the current performance.
We didn't make any major changes to the code.
How do I do profiling to analyze the performance issues in IntelliJ? Is there any other way to find the cause of the performance impact?
Note: I don't have AppDynamics.
The question is not really specific to Spring Batch per se, but if you want to profile a Java application with IntelliJ IDEA, you can run the app from within the IDE (or outside it) and attach a profiler to it; see Profiling tools. Note that this feature is only available in the IntelliJ IDEA Ultimate edition.
There are open-source profilers that you can use as well; see Open Source Java Profilers.
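If neither of those is an option, one more possibility worth mentioning: JDK Flight Recorder ships with recent JDKs (11+) and can be started programmatically, with no extra tooling installed. A minimal sketch, assuming Java 11+ and a stand-in method for the slow batch step:

```java
import java.nio.file.Path;

import jdk.jfr.Configuration;
import jdk.jfr.Recording;

public class ProfiledRun {
    public static void main(String[] args) throws Exception {
        // "profile" is a predefined JFR configuration with more detailed sampling.
        Configuration config = Configuration.getConfiguration("profile");
        try (Recording recording = new Recording(config)) {
            recording.start();
            runBatchStep(); // stand-in for the slow code path under investigation
            recording.dump(Path.of("batch-run.jfr")); // inspect in JDK Mission Control
        }
    }

    private static void runBatchStep() {
        // the workload you want to profile
    }
}
```

The resulting .jfr file can be opened in JDK Mission Control (or IntelliJ itself) to inspect hot methods, allocations, and I/O waits.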
Late Friday night thoughts after reading through material on how Cloudflare's V8-based "no cold start" Workers function - in short, thanks to the V8 engine's just-in-time compilation of JavaScript code - I'm wondering why this no-cold-start type of serverless function seems to exist only for JavaScript.
Is this just because, architecturally, when AWS Lambda / Azure Functions were launched, they were designed as a kind of even more simplified Kubernetes model, where each function exists in its own container? I would assume that was a simpler model for keeping different clients' code separate than whatever magic sauce V8 isolates provide under the hood.
So given that Java is compiled into bytecode for the JVM, which uses JIT compilation (and compiles certain high-usage methods down to optimized machine code), is it therefore also technically possible to have no-cold-start Java serverless functions, as long as there is some way to load each client's bytecode on the cloud provider's server as it is invoked?
What are the practical challenges for this to become a reality? I'm not a big expert on all this, but I can imagine perhaps:
The compiled bytecode isn't designed to be loaded this way - it expects to be the only code being executed in a JVM.
JVM optimisations aren't written to support loading multiple short-lived functions, and treat all loaded code as one massive program.
The JVM, once started, doesn't support loading additional bytecode.
In principle, you could probably develop a Java-centric serverless runtime in which individual functions are dynamically loaded on demand, and you might be able to achieve pretty good cold-start times this way. However, there are two big reasons why this might not work as well as it does for JavaScript:
While Java is designed for JIT compiling, it has not been optimized for startup time nearly as intensely as V8 has. Today, the JVM is most commonly used in large always-on servers, where startup speed is not that important. V8, on the other hand, has always focused on a browser environment where code is downloaded and executed while a user is waiting, so minimizing startup latency is critical. (It might actually be interesting to look at an alternative Java runtime like Android's Dalvik, which has had much more reason to prioritize startup speed. Maybe it could be the basis of a really fast Java serverless environment!)
Security. V8 and other JavaScript runtimes have been designed with hostile code in mind from the beginning, and have had a huge amount of security research done on them. Java tried to target this too, in the very early days, with "applets", but that usage of Java never caught on. These days, secure sandboxing is not a major concern of Java. Because of this, it is probably too risky to run multiple Java apps that don't trust each other within the same container. And so, you are back to starting a separate container for each application.
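On the "dynamically loaded on-demand" point: the JVM does in fact support loading additional bytecode after startup, via class loaders. A minimal sketch of the idea, where the Function interface, jar path, and class name are all hypothetical (and real multi-tenant isolation would need far more than a class loader):

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Path;
import java.util.function.Function;

public class FunctionLoader {

    // Load a "serverless function" from a jar on demand. The class is
    // expected to implement java.util.function.Function<String, String>.
    @SuppressWarnings("unchecked")
    static Function<String, String> load(Path jar, String className) throws Exception {
        URLClassLoader loader = new URLClassLoader(
                new URL[] { jar.toUri().toURL() },
                FunctionLoader.class.getClassLoader());
        Class<?> cls = Class.forName(className, true, loader);
        return (Function<String, String>) cls.getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical deployed function jar and entry point.
        Function<String, String> fn =
                load(Path.of("tenant-a-function.jar"), "com.example.HandlerImpl");
        System.out.println(fn.apply("hello"));
    }
}
```

Loading the code is the easy part; as noted above, the hard parts are startup latency and safely sandboxing mutually untrusting tenants.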
I have been studying big data platforms like Hadoop and Storm for a while, and I'm at the beginning of research work in the field of scheduling/resource management. But I wonder how they (the developers of these libraries) manage to debug the scheduling classes. Do they use a specific tool (IDE), or just logging/unit testing to check how it's running? I think using just logging/testing is too complicated, and I can't imagine how to test an entire scheduler and its subsystems together.
The question is: how can I debug my algorithm after implementing it and integrating it into the platform? Is there a place where I can find a sample to understand the logic of their work?
Using tools (I use IntelliJ IDEA) I can debug user-level programs with no problem, but at the system level (scheduling and resource management classes) this method doesn't work!
Any idea would be appreciated (on either the Hadoop library or Storm).
I'm looking for a comparison between Quartz.NET and Windows Scheduled Tasks.
How different are they? What are the pros and cons of each one? How do I choose which one to use?
TIA,
With Quartz.NET I could contrast some of the points from the other answer below:
Code to write - you can express your intent in a .NET language, write unit tests, and debug the logic (see the sketch at the end of this answer)
Integration with the event log - you have Common.Logging, which even allows writing to a database
Robust and reliable too
Even richer API
It's mostly a question of what you need. Windows Scheduled Tasks might give you all you need. But if you need clustering (distributed workers), fine-grained control over triggering, or misfire handling rules, you might like to check what Quartz.NET has to offer in these areas.
Take the simplest option that fulfills your requirements, but abstract it enough to allow change.
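As a sketch of the "code to write" point: Quartz.NET is a port of the Java Quartz scheduler and mirrors its API closely. A minimal job plus a cron trigger, shown here against the Java Quartz 2.x builder API (the C# version differs mostly in naming conventions, e.g. IJob):

```java
import org.quartz.CronScheduleBuilder;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class ReportJob implements Job {
    @Override
    public void execute(JobExecutionContext context) {
        // Plain code: easy to unit test and to step through in a debugger.
        System.out.println("Generating report...");
    }
}

class SchedulerBootstrap {
    public static void main(String[] args) throws Exception {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();

        JobDetail job = JobBuilder.newJob(ReportJob.class)
                .withIdentity("reportJob", "reports")
                .build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("every5min", "reports")
                .withSchedule(CronScheduleBuilder.cronSchedule("0 0/5 * * * ?")) // every 5 minutes
                .build();

        scheduler.scheduleJob(job, trigger);
    }
}
```

Because the job is just a class in your codebase, it can call straight into the rest of your project, which is exactly the contrast with Task Scheduler drawn in a later answer.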
My gut reaction would be to first try to get the built-in Windows Task Scheduler to work for your needs before installing yet another scheduler - reasoning:
no installation required - installed and enabled by default
no code to write - jobs expressed as metadata
integration with event log etc.
robust and reliable - good enough for MSFT, Google etc.
reasonably rich API - create jobs, check status etc.
integrated with remote management tools
security integration - run jobs in different credentials
monitoring tooling
Then reach for Quartz if it doesn't meet your needs. Quartz certainly has many of these features too, but resist adding yet another service to own and manage if you can.
One important distinction, for me, that is not included in the other answers is what gets executed by the scheduler.
Windows Task Scheduler can only run executable programs and scripts. The code written for use within Quartz can directly interact with your project's .NET components.
With Task Scheduler, you'll have to write a shell executable or script. Inside of that shell, you can interact with your project's components. While writing this shell code is not a difficult process, you do have to consider deploying the extra files.
If you anticipate adding more scheduled tasks over the lifetime of the project, you may end up needing to create additional executable shells or script files, which requires updates to the deployment process. With Quartz, you don't need these files, which reduces the total effort needed to create and deploy additional tasks.
Unfortunately, Quartz.NET job assemblies can't be updated without restarting the process/host/service. That's a pretty big one for some folks (including myself).
It's entirely possible to build a framework for jobs running under Task Scheduler. MEF-based assemblies can be called by a single console app, with everything managed via a configuration UI. Here's a popular managed wrapper:
https://github.com/dahall/taskscheduler
https://www.nuget.org/packages/TaskScheduler
I did enjoy my brief time working with Quartz.NET, but the restart requirement was too big a problem to overcome. Marko has done a great job with it over the years, and he's always been helpful and responsive. Perhaps someday the project will get multiple-AppDomain support, which would address this. (That said, it promises to be a lot of work. Kudos to him and his contributors if they decide to take it on.)
To paraphrase Marko, if you need:
Clustering (distributed workers)
Fine-grained control over triggering or misfire handling rules
...then Quartz.NET is what you'll need.