After I start a Spring Boot web project, I can't find the main thread using jcmd $pid Thread.print, and I can't find it with hsdb either. Where did the main thread go?
I don't know which tool you used to create your Spring Boot project, but if you created it via the Spring Initializr (https://start.spring.io/), it should be under the path YOUR_PROJECT_NAME/src/main/java/YOUR_PERSONALIZED_PATH/.
The file in which the main thread is created/executed should be inside that path and should be called YOUR_PROJECT_NAME + Application.java.
For most Spring Boot apps, SpringApplication::run involves starting a web server (Tomcat, Undertow, Jetty, Netty). Those servers create their own non-daemon threads. The call to SpringApplication::run then returns and the main thread exits. The VM is then kept alive by those other non-daemon threads – the exact thread names depend on the web server used.
@SpringBootApplication
public class App {
    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }
}
The main method delegates to SpringApplication.run, which performs the Spring initialization and the rest of the startup work. Once the Spring initialization is completed and run returns, the main method reaches the end of its life cycle.
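If you want to observe this yourself, here is a sketch based on the snippet above (the thread-listing lines are an illustrative addition, not part of the original code): it prints the live thread names right after SpringApplication.run returns, while the main thread is still alive. A jcmd Thread.print taken a moment later, after main has returned, will no longer show a "main" thread.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class App {
    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
        // run() has returned, but main has not exited yet, so we can still
        // list the live threads from the main thread itself.
        Thread.getAllStackTraces().keySet().stream()
                .map(t -> t.getName() + (t.isDaemon() ? " (daemon)" : ""))
                .sorted()
                .forEach(System.out::println);
        // When this method returns, the main thread exits; the non-daemon
        // server threads printed above keep the VM alive.
    }
}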
So why doesn't the Java process exit once the main method has returned?
java.c
    /* Build platform specific argument array */
    mainArgs = CreateApplicationArgs(env, argv, argc);
    CHECK_EXCEPTION_NULL_LEAVE(mainArgs);

    /* Invoke main method. */
    (*env)->CallStaticVoidMethod(env, mainClass, mainID, mainArgs);

    /*
     * The launcher's exit code (in the absence of calls to
     * System.exit) will be non-zero if main threw an exception.
     */
    ret = (*env)->ExceptionOccurred(env) == NULL ? 0 : 1;

    LEAVE();
#define LEAVE() \
    do { \
        if ((*vm)->DetachCurrentThread(vm) != JNI_OK) { \
            JLI_ReportErrorMessage(JVM_ERROR2); \
            ret = 1; \
        } \
        if (JNI_TRUE) { \
            (*vm)->DestroyJavaVM(vm); \
            return ret; \
        } \
    } while (JNI_FALSE)
The reason is stated in the comments on the LEAVE() macro definition:
Always detach the main thread so that it appears to have ended when the application's main method exits. This will invoke the uncaught exception handler machinery if main threw an exception. An uncaught exception handler cannot change the launcher's return code except by calling System.exit.
Wait for all non-daemon threads to end, then destroy the VM. This will actually create a trivial new Java waiter thread named "DestroyJavaVM", but this will be seen as a different thread from the one that executed main, even though they are the same C thread. This allows mainThread.join() and mainThread.isAlive() to work as expected.
In this case the process still has non-daemon threads (the web server's threads), so it does not exit.
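As a minimal, Spring-free illustration of that rule (the class and thread names here are made up for the example): main starts a non-daemon thread and returns, yet the process keeps running until that thread finishes, and a thread dump taken in the meantime shows a "DestroyJavaVM" thread instead of "main".
public class NonDaemonDemo {
    public static void main(String[] args) {
        // A non-daemon thread, like a web server's worker threads, keeps the VM alive.
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(60_000); // pretend to serve requests for a minute
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "my-worker");
        worker.setDaemon(false); // this is the default; shown here for clarity
        worker.start();
        // main returns here, but the process stays alive until "my-worker" ends.
    }
}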
Related
How can I print additional information to the command line console?
Output now is:
C:\Users\admin\Desktop\java>java -jar pdf.jar
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: Index 0 out of bounds for length 0
at readDataIn.main(readDataIn.java:31)
Code:
public static void main(String[] args) throws IOException {
    // TODO Auto-generated method stub
    try {
        String arg = args[0];
        fileNameSource = "import/" + arg + ".xml";
        fileNameTarget = "export/" + arg + ".pdf";
    } catch (Exception e) {
        // TODO: handle exception
        System.out.println("Personal-Number is missing");
        e.printStackTrace();
    }
How can I output the information that the personal number is missing?
First of all, as a general rule you should check for error conditions before they turn into exceptions whenever that is possible, and in your case it definitely is.
So instead of catching the ArrayIndexOutOfBoundsException, insert an if statement that checks the length of the args array before accessing it.
if (args.length == 0) {
    // no argument has been provided
    // handle error here
}
In terms of how to handle the error, there are many options available, and depending on what you want to do, any of them could be a good fit.
IllegalArgumentException
It is a common idiom in Java to throw an IllegalArgumentException whenever a method receives an invalid/illegal argument.
if (args.length == 0) {
    throw new IllegalArgumentException("Personal number is missing");
}
This will print the message that you have provided together with the stack trace. However, if your application is meant to be a command line interface (CLI) tool, you should not use this kind of error handling.
Print message & exit program
if (args.length == 0) {
    // notice: "err" instead of "out": print to stderr instead of stdout
    System.err.println("Personal number is missing");
    // exit the program with a non-zero exit code, as exit code 0 means everything is fine
    System.exit(1);
}
For more information on stdout and stderr see this StackOverflow question.
This is what many CLI applications do, including java itself. When you type java fdsdfsdfs or some similar nonsense as an argument, java gives you an error message and exits with a non-zero return code (1 in this case).
It is also common for CLI applications to print an error message followed by some usage information on how to use the application correctly, or to provide a help command so a user can get more information. This happens, for example, if you just enter java without any parameters.
So it is really up to you what you want to do.
If you are thinking of implementing a full-featured CLI application with more (complex) commands and multiple options, you should consider using a CLI library like JCommander or Apache Commons CLI, as parsing command line arguments can quickly get ugly. All of these common concerns are already handled there (see the sketch below).
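For example, here is a rough sketch using Apache Commons CLI. The option name -p/--personal-number and the class name are made up for illustration; the original program takes a bare positional argument instead.
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.HelpFormatter;
import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

public class ReadDataInCli {
    public static void main(String[] args) {
        Options options = new Options();
        options.addOption(Option.builder("p")
                .longOpt("personal-number")
                .hasArg()
                .required()
                .desc("Personal number used to build the import/export file names")
                .build());
        try {
            CommandLine cmd = new DefaultParser().parse(options, args);
            String personalNumber = cmd.getOptionValue("p");
            String fileNameSource = "import/" + personalNumber + ".xml";
            String fileNameTarget = "export/" + personalNumber + ".pdf";
            // ... remaining logic using fileNameSource and fileNameTarget
        } catch (ParseException e) {
            // A missing or malformed option ends up here with a readable message.
            System.err.println(e.getMessage());
            new HelpFormatter().printHelp("readDataIn -p <personalNumber>", options);
            System.exit(1);
        }
    }
}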
Logging
In case your application is a script that will be executed non-interactively, logging the error to a file and exiting with a non-zero exit code might also be an option.
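A minimal sketch of that approach using java.util.logging (the log file name and logger name are arbitrary choices for the example):
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class ReadDataInBatch {
    private static final Logger LOG = Logger.getLogger("readDataIn");

    public static void main(String[] args) throws IOException {
        FileHandler fileHandler = new FileHandler("readDataIn.log", true); // append to the log file
        fileHandler.setFormatter(new SimpleFormatter());
        LOG.addHandler(fileHandler);

        if (args.length == 0) {
            LOG.severe("Personal number is missing");
            System.exit(1);
        }
        // ... continue with the normal processing
    }
}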
PS
Your code looks like it should not compile at all, as you are not declaring a type for the variables fileNameSource and fileNameTarget.
Use String or var here (var is available if you are running Java 10 or later).
String fileNameSource = "import/" + arg + ".xml";
var fileNameTarget = "export/" + arg + ".pdf";
Note that, unlike in C, the program name is not part of the args array in Java, so args[0] really is the first argument you pass and the checks above work as written.
You may be interested in picocli, which is a modern CLI library for Java and other JVM languages.
Picocli does some basic validation automatically, and results in very compact code that produces user-friendly applications. For example:
import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;
import picocli.CommandLine.Parameters;

@Command(name = "myapp", mixinStandardHelpOptions = true, version = "1.0",
        description = "This command does something useful.")
class MyApp implements Runnable {

    @Parameters(description = "File name (without extension) of the file to import and export.")
    private String personalNumber;

    @Override
    public void run() {
        String fileNameSource = "import/" + personalNumber + ".xml";
        String fileNameTarget = "export/" + personalNumber + ".pdf";
        // remaining business logic
    }

    public static void main(String[] args) {
        System.exit(new CommandLine(new MyApp()).execute(args));
    }
}
If I run this class without any parameters, the following message is printed to the standard error stream, and the process finishes with exit code 2. (Exit codes are customizable.)
Missing required parameter: '<personalNumber>'
Usage: myapp [-hV] <personalNumber>
This command does something useful.
<personalNumber> File name (without extension) of the file to import
and export.
-h, --help Show this help message and exit.
-V, --version Print version information and exit.
The usage help message is created automatically from the descriptions of the command, and the descriptions of its options and positional parameters, but can be further customized.
Note how the mixinStandardHelpOptions = true attribute on the @Command annotation adds --help and --version options to the command. These options are handled by the library without requiring any further logic in the application.
Picocli comes with an annotation processor that makes it very easy to turn your application into a native image with GraalVM. Native images have faster startup time and lower runtime memory overhead compared to a Java VM.
Background
It is possible to perform a software-controlled disconnection of the power adapter of a Mac laptop by creating a DisableInflow power management assertion.
Code from this answer to an SO question can be used to create said assertion. The following is a working example that creates this assertion until the process is killed:
#include <IOKit/pwr_mgt/IOPMLib.h>
#include <unistd.h>

int main()
{
    IOPMAssertionID neverSleep = 0;
    IOPMAssertionCreateWithName(kIOPMAssertionTypeDisableInflow,
                                kIOPMAssertionLevelOn,
                                CFSTR("disable inflow"),
                                &neverSleep);
    while (1)
    {
        sleep(1);
    }
}
This runs successfully and the power adapter is disconnected by software while the process is running.
What's interesting, though, is that I was able to run this code as a regular user, without root privileges, which wasn't supposed to happen. For instance, note the comment in this file from Apple's open source repositories:
// Disables AC Power Inflow (requires root to initiate)
#define kIOPMAssertionTypeDisableInflow CFSTR("DisableInflow")
#define kIOPMInflowDisableAssertion kIOPMAssertionTypeDisableInflow
I found some code which apparently performs the actual communication with the charger; it can be found here. The following functions, from this file, appear to be of particular interest:
IOReturn
AppleSmartBatteryManagerUserClient::externalMethod(
    uint32_t selector,
    IOExternalMethodArguments * arguments,
    IOExternalMethodDispatch * dispatch __unused,
    OSObject * target __unused,
    void * reference __unused )
{
    if (selector >= kNumBattMethods) {
        // Invalid selector
        return kIOReturnBadArgument;
    }

    switch (selector)
    {
        case kSBInflowDisable:
            // 1 scalar in, 1 scalar out
            return this->secureInflowDisable((int)arguments->scalarInput[0],
                                             (int *)&arguments->scalarOutput[0]);
            break;

        // ...
    }

    // ...
}
IOReturn AppleSmartBatteryManagerUserClient::secureInflowDisable(
    int level,
    int *return_code)
{
    int admin_priv = 0;
    IOReturn ret = kIOReturnNotPrivileged;

    if( !(level == 0 || level == 1))
    {
        *return_code = kIOReturnBadArgument;
        return kIOReturnSuccess;
    }

    ret = clientHasPrivilege(fOwningTask, kIOClientPrivilegeAdministrator);
    admin_priv = (kIOReturnSuccess == ret);

    if(admin_priv && fOwner) {
        *return_code = fOwner->disableInflow( level );
        return kIOReturnSuccess;
    } else {
        *return_code = kIOReturnNotPrivileged;
        return kIOReturnSuccess;
    }
}
Note how, in secureInflowDisable(), root privileges are checked for prior to running the code. Note also this initialization code in the same file, again requiring root privileges, as explicitly pointed out in the comments:
bool AppleSmartBatteryManagerUserClient::initWithTask(task_t owningTask,
        void *security_id, UInt32 type, OSDictionary * properties)
{
    uint32_t _pid;

    /* 1. Only root processes may open a SmartBatteryManagerUserClient.
     * 2. Attempts to create exclusive UserClients will fail if an
     *    exclusive user client is attached.
     * 3. Non-exclusive clients will not be able to perform transactions
     *    while an exclusive client is attached.
     * 3a. Only battery firmware updaters should bother being exclusive.
     */
    if ( kIOReturnSuccess !=
         clientHasPrivilege(owningTask, kIOClientPrivilegeAdministrator))
    {
        return false;
    }

    // ...
}
Starting from the code in that same SO question (the question itself, not the answer), specifically the sendSmartBatteryCommand() function, I wrote some code that calls the function, passing kSBInflowDisable as the selector (the which variable in the code).
Unlike the code using assertions, this one only works as root. If running as a regular user, IOServiceOpen() returns, weirdly enough, kIOReturnBadArgument (not kIOReturnNotPrivileged, as I would have expected). Perhaps this has to do with the initWithTask() method above.
The question
I need to perform a call with a different selector to this same Smart Battery Manager kext. However, I can't even get to the IOConnectCallMethod() call, since IOServiceOpen() fails, presumably because the initWithTask() method above prevents any non-root user from opening the service.
The question, therefore, is this: how is IOPMAssertionCreateWithName() capable of creating a DisableInflow assertion without root privileges?
The only possibility I can think of is if there's a root-owned process to which requests are forwarded, and which performs the actual work of calling IOServiceOpen() and later IOConnectCallMethod() as root.
However, I'm hoping there's a different way of calling the Smart Battery Manager kext which doesn't require root (one that doesn't involve the IOServiceOpen() call.) Using IOPMAssertionCreateWithName() itself is not possible in my application, since I need to call a different selector within that kext, not the one that disables inflow.
It's also possible this is in fact a security vulnerability, which Apple will now fix in a future release as soon as it is alerted to this question. That would be too bad, but understandable.
Although running as root is a possibility in macOS, it's obviously desirable to avoid privilege elevation unless absolutely necessary. Also, in the future I'd like to run the same code under iOS, where it's impossible to run anything as root, in my understanding (note this is an app I'm developing for my own personal use; I understand linking to IOKit wipes out any chance of getting the app published in the App Store).
I created a method to prevent the system from sleeping as follows:
public static void KeepSystemAwake(bool bEnable)
{
    if (bEnable)
    {
        EXECUTION_STATE state = SetThreadExecutionState(EXECUTION_STATE.ES_DISPLAY_REQUIRED | EXECUTION_STATE.ES_CONTINUOUS);
    }
    else
    {
        EXECUTION_STATE state = SetThreadExecutionState(EXECUTION_STATE.ES_CONTINUOUS);
    }
}
The method prevents the system from sleeping, but when I call the ES_CONTINUOUS-only branch of the method, the system still does not sleep at all, even though I want it to behave normally again. What am I missing? I'm running this code in a different thread (a Timer).
I'm running this code in a different thread (Timer)
If you're using something like a System.Threading.Timer callback, it will be called on different (read: arbitrary) threads.
From MSDN:
The callback method executed by the timer should be reentrant, because it is called on ThreadPool threads
Make sure you're calling SetThreadExecutionState for the same thread. Ideally, you'll serialise calls onto one thread (like the main thread).
Here is my use case.
A legacy system updates a database queue table QUEUE.
I want a scheduled recurring job that
- checks the contents of QUEUE
- if there are rows in the table it locks the row and does some work
- deletes the row in QUEUE
If the previous job is still running, then a new thread will be created to do the work. I want to configure the maximum number of concurrent threads.
I am using Spring 3 and my current solution is to do the following (using a fixedRate of 1 millisecond to get the threads to run basically continuously)
@Scheduled(fixedRate = 1)
@Async
public void doSchedule() throws InterruptedException {
    log.debug("Start schedule");
    publishWorker.start();
    log.debug("End schedule");
}
<task:executor id="workerExecutor" pool-size="4" />
This created 4 threads straight off, and the threads correctly shared the workload from the queue. However, I seem to be getting a memory leak when the threads take a long time to complete.
java.util.concurrent.ThreadPoolExecutor # 0xe097b8f0 | 80 | 373,410,496 | 89.74%
|- java.util.concurrent.LinkedBlockingQueue # 0xe097b940 | 48 | 373,410,136 | 89.74%
| |- java.util.concurrent.LinkedBlockingQueue$Node # 0xe25c9d68
So
1: Should I be using @Async and @Scheduled together?
2: If not, then how else can I use Spring to achieve my requirements?
3: How can I create new threads only when the other threads are busy?
Thanks all!
EDIT: I think the queue of jobs was getting infinitely long... Now using
<task:executor id="workerExecutor"
pool-size="1-4"
queue-capacity="10" rejection-policy="DISCARD" />
Will report back with results
You can try the following:
- Run a scheduler with a one-second delay, which will lock & fetch all QUEUE records that weren't locked so far.
- For each record, call an Async method, which will process that record & delete it.
- Set the executor's rejection policy to ABORT, so that the scheduler can unlock the QUEUEs that weren't handed out for processing yet. That way the scheduler can try processing those QUEUEs again in the next run.
Of course, you'll have to handle the scenario where the scheduler has locked a QUEUE but the handler didn't finish processing it for whatever reason.
Pseudo code:
public class QueueScheduler {

    @Autowired
    private QueueHandler queueHandler;

    @Scheduled(fixedDelay = 1000)
    public void doSchedule() throws InterruptedException {
        log.debug("Start schedule");
        List<Long> queueIds = lockAndFetchAllUnlockedQueues();
        for (long id : queueIds)
            queueHandler.process(id);
        log.debug("End schedule");
    }
}

public class QueueHandler {

    @Async
    public void process(long queueId) {
        // process the QUEUE & delete it from DB
    }
}

<task:executor id="workerExecutor" pool-size="1-4" queue-capacity="10"
    rejection-policy="ABORT"/>
// using a fixedRate of 1 millisecond to get the threads to run basically continuously
@Scheduled(fixedRate = 1)
When you use @Scheduled, a new thread will be created and will invoke the doSchedule method at the specified fixedRate of 1 millisecond. When you run your app you can already see 4 threads competing for the QUEUE table and possibly a deadlock.
Investigate whether there is a deadlock by taking a thread dump.
http://helpx.adobe.com/cq/kb/TakeThreadDump.html
The @Async annotation will not be of any use here.
A better way to implement this is to make your class a task by implementing Runnable and submitting it to a TaskExecutor configured with the required number of threads (see the sketch below).
Using Spring threading and TaskExecutor, how do I know when a thread is finished?
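A minimal sketch of that approach (the class names, the dispatch method, and the injected executor are illustrative, not from the original question):
import java.util.List;
import org.springframework.core.task.TaskExecutor;

class QueueWorker implements Runnable {
    private final long queueId;

    QueueWorker(long queueId) {
        this.queueId = queueId;
    }

    @Override
    public void run() {
        // lock the QUEUE row identified by queueId, do the work, then delete the row
    }
}

class QueueDispatcher {
    private final TaskExecutor taskExecutor; // e.g. the "workerExecutor" with pool-size 4

    QueueDispatcher(TaskExecutor taskExecutor) {
        this.taskExecutor = taskExecutor;
    }

    void dispatch(List<Long> queueIds) {
        for (long id : queueIds) {
            taskExecutor.execute(new QueueWorker(id)); // runs on one of the executor's pool threads
        }
    }
}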
Also check your design; it doesn't seem to handle the synchronization properly. If a previous job is still running and holding a lock on a row, the next job you create will still see that row and will wait to acquire the lock on that particular row.
In a C++ Windows app, I launch several long-running child processes (currently I use CreateProcess(...) to do this).
I want the child processes to be automatically closed if my main processes crashes or is closed.
Because of the requirement that this needs to work even if the "parent" crashes, I believe this would need to be done using some API/feature of the operating system, so that all the "child" processes are cleaned up.
How do I do this?
The Windows API supports objects called "Job Objects". The following code will create a "job" that is configured to shut down all of its processes when the main application ends (when its handles are cleaned up). This code should only be run once:
HANDLE ghJob = CreateJobObject( NULL, NULL); // GLOBAL
if( ghJob == NULL)
{
    ::MessageBox( 0, "Could not create job object", "TEST", MB_OK);
}
else
{
    JOBOBJECT_EXTENDED_LIMIT_INFORMATION jeli = { 0 };

    // Configure all child processes associated with the job to terminate when the
    // last handle to the job is closed
    jeli.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
    if( 0 == SetInformationJobObject( ghJob, JobObjectExtendedLimitInformation, &jeli, sizeof(jeli)))
    {
        ::MessageBox( 0, "Could not SetInformationJobObject", "TEST", MB_OK);
    }
}
Then, when each child process is created, execute the following code to launch it and add it to the job object:
STARTUPINFO info={sizeof(info)};
PROCESS_INFORMATION processInfo;

// Launch child process - example is notepad.exe
if (::CreateProcess( NULL, "notepad.exe", NULL, NULL, TRUE, 0, NULL, NULL, &info, &processInfo))
{
    ::MessageBox( 0, "CreateProcess succeeded.", "TEST", MB_OK);
    if(ghJob)
    {
        if(0 == AssignProcessToJobObject( ghJob, processInfo.hProcess))
        {
            ::MessageBox( 0, "Could not AssignProcessToObject", "TEST", MB_OK);
        }
    }
    // Can we free handles now? Not sure about this.
    //CloseHandle(processInfo.hProcess);
    CloseHandle(processInfo.hThread);
}
VISTA NOTE: See "AssignProcessToJobObject always return "access denied" on Vista" if you encounter access-denied issues with AssignProcessToJobObject() on Vista.
One somewhat hackish solution would be for the parent process to attach to each child as a debugger (use DebugActiveProcess). When a debugger terminates, all its debuggee processes are terminated as well.
A better solution (assuming you wrote the child processes as well) would be to have the child processes monitor the parent and exit if it goes away.
Windows Job Objects sound like a good place to start. The name of the Job Object would have to be well-known, or passed to the children (or the handle inherited). The children would need to notice when the parent dies, either through a failed IPC "heartbeat" or just WFMO/WFSO on the parent's process handle. At that point any child process could call TerminateJobObject to bring down the whole group.
You can keep a separate watchdog process running. Its only task is watching the current process space to spot situations like you describe. It could even re-launch the original application after a crash or provide different options to the user, collect debug information, etc. Just try to keep it simple enough so that you don't need a second watchdog to watch the first one.
You can assign a job to the parent process before creating processes:
static HANDLE hjob_kill_on_job_close = INVALID_HANDLE_VALUE;

void init(){
    hjob_kill_on_job_close = CreateJobObject(NULL, NULL);
    if (hjob_kill_on_job_close){
        JOBOBJECT_EXTENDED_LIMIT_INFORMATION jobli = { 0 };
        jobli.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
        SetInformationJobObject(hjob_kill_on_job_close,
                                JobObjectExtendedLimitInformation,
                                &jobli, sizeof(jobli));
        AssignProcessToJobObject(hjob_kill_on_job_close, GetCurrentProcess());
    }
}

void deinit(){
    if (hjob_kill_on_job_close) {
        CloseHandle(hjob_kill_on_job_close);
    }
}
JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE causes all processes associated with the job to terminate when the last handle to the job is closed. By default, all child processes will be assigned to the job automatically, unless you passed CREATE_BREAKAWAY_FROM_JOB when calling CreateProcess. See https://learn.microsoft.com/en-us/windows/win32/procthread/process-creation-flags for more information about CREATE_BREAKAWAY_FROM_JOB.
You can use Process Explorer from Sysinternals to make sure all processes are assigned to the job.
You'd probably have to keep a list of the processes you start and kill them off one by one when you exit your program. I'm not sure of the specifics of doing this in C++, but it shouldn't be hard. The difficult part would probably be ensuring that child processes are shut down in the case of an application crash. .NET has the ability to register a function that gets called when an unhandled exception occurs; I'm not sure if C++ offers the same capability.
You could encapsulate each process in a C++ object and keep a list of them in global scope. The destructors can shut down each process. That will work fine if the program exits normally, but if it crashes, all bets are off.
Here is a rough example:
class myprocess
{
public:
    myprocess(HANDLE hProcess)
        : _hProcess(hProcess)
    { }

    ~myprocess()
    {
        TerminateProcess(_hProcess, 0);
    }

private:
    HANDLE _hProcess;
};
std::list<myprocess> allprocesses;
Then, whenever you launch one, call allprocesses.push_back(hProcess);