Viewing output contained in slurm.out file

I have a C++ program that, when run on my local machine, runs some simulations and saves the results to a .csv file.
I am now running the same program on a cluster. Jobs are scheduled with SLURM, queued, and then run to completion. Instead of a .csv file, the output is a slurmid.out file. How can I access this file to see the results of my simulation?

I typically use the cat command to view SLURM output files:
cat slurmid.out
You could also use vim, less, or any other text editor or pager. The script should still produce the .csv file as well; if it doesn't because the job is failing, the .out file will tell you about it.
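As a sketch, assuming SLURM's default naming of slurm-&lt;jobid&gt;.out (the file below is simulated, with 12345 as a placeholder job ID), the usual viewing commands look like this:

```shell
# Simulate a SLURM stdout file; on a real cluster the scheduler creates
# slurm-<jobid>.out in the submission directory.
echo "simulation finished: 100 runs" > slurm-12345.out

# Print the whole file
cat slurm-12345.out

# For long output, page through it instead: less slurm-12345.out
# For a job that is still running, follow it live: tail -f slurm-12345.out
```

If you need the .csv on your own machine, copy it off the cluster with scp or rsync once the job has written it.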

Related

Live-updating text editor? (macOS)

I am running a program that writes output to a .txt file. Is there a way to view the output without closing and reopening the .txt file after each run, so that the program runs and the view of the .txt file updates on its own?
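A common answer to this is tail -f, which keeps the file open and prints new lines as they are appended. A minimal sketch (results.txt is a made-up file name):

```shell
# Write a couple of result lines, as the program would.
echo "run 1 done" >> results.txt
echo "run 2 done" >> results.txt

# Show the latest lines; with -f instead of -n 2, tail would keep the file
# open and print each new line as the program appends it.
tail -n 2 results.txt
```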

modify the Source code of hadoop command to add text during command execution

I'd like to see the source code for certain Hadoop commands like -put and -ls. I want to be able to add additional information to the log output associated with running these commands. For example, I want to show the message "Hii user, your file is copying from local file system to hdfs" during the execution of the -get or copyFromLocal command.
I want to change the core files, not the API files like CopyCommands.java (http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java?view=markup).
This type of message should print on execution of the command.
Can anyone tell me which file I should change?
How can I find these files?
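One way to find the right file is to grep the source tree for the class or message you are interested in. The sketch below builds a tiny mock of the source layout just to make the technique runnable; on a real checkout you would grep the actual hadoop-common tree:

```shell
# Mock a fragment of the hadoop-common source layout (illustration only).
mkdir -p src/org/apache/hadoop/fs/shell
echo "class CopyCommands {}" > src/org/apache/hadoop/fs/shell/CopyCommands.java

# List every file mentioning the class; on a real checkout this would be
# something like: grep -rl "CopyCommands" hadoop-common-project/
grep -rl "CopyCommands" src/
```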

Bash script behaving differently for different files

I have a bash script that uses awk to process some files that I have downloaded. If I run the script on any of the downloaded files, it does not work properly. However, if I transfer the contents of a file into a newly created one, it seems to work as expected. Could it have anything to do with the settings of the files?
I have two files, hotel_12313.dat and hotel_99999.dat. The first one is downloaded and the second one is created by me. If I copy the data from the first file into the second one and execute the script on both of them, the output is different.
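A frequent cause of exactly this symptom is line endings: downloaded files often carry Windows CRLF terminators, so awk's last field silently ends in a carriage return while a hand-made file's does not. A small reproduction (the file contents are invented):

```shell
# Simulate a downloaded CRLF file and a hand-made LF copy of it.
printf 'a,b\r\n' > hotel_12313.dat
tr -d '\r' < hotel_12313.dat > hotel_99999.dat

# The stray \r is invisible but changes the field:
awk -F, '{print length($2)}' hotel_12313.dat   # 2: the field is "b" plus \r
awk -F, '{print length($2)}' hotel_99999.dat   # 1: the field is just "b"
```

Running file on the data, or piping a line through od -c, makes the \r visible; dos2unix (or the tr above) strips it.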

How do you run scripts and output results to file in sequence using a shell script

I have the below code in a .sh file:
/tmp/dev/cpvSSK2/LDN/Grade/003/_runTheBatch2.sh >
/tmp/dev/cpvSSK2/LDN/Grade/003/batchRun_003.txt;
/tmp/dev/cpvSSK2/LDN/Grade/004/_runTheBatch2.sh >
/tmp/dev/cpvSSK2/LDN/Grade/004/batchRun_004.txt;
When I run the .sh file from the command line, it just gets stuck at the first command.
It does generate the batchRun_003.txt file in the destination folder, but it stalls at that stage.
Thanks
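When a sequence like this stalls, one common cause is the first script waiting on stdin and never exiting; redirecting stdin from /dev/null rules that out. A minimal runnable sketch with stand-in scripts (the names and paths here are invented, not the asker's actual ones):

```shell
# Stand-ins for the two batch scripts.
printf '#!/bin/sh\necho "batch 3 done"\n' > runBatch3.sh
printf '#!/bin/sh\necho "batch 4 done"\n' > runBatch4.sh
chmod +x runBatch3.sh runBatch4.sh

# Run them in sequence; < /dev/null prevents either script from blocking
# while waiting for keyboard input.
./runBatch3.sh < /dev/null > batchRun_003.txt
./runBatch4.sh < /dev/null > batchRun_004.txt
cat batchRun_003.txt batchRun_004.txt
```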

How to swap out to a new file a running process output is redirecting to, without restarting the command?

I have a background process that I do not want to restart. Its output is actively being logged to a file.
nohup mycommand 1> myoutputfile.log 2>&1 &
I want to "archive" the file the process is currently writing its output to, and make it start writing to a blank file at the same file name. I must be able to do this without having to kill the process and start it again.
I tried simply renaming the existing file (to myoutputfile_.log), hoping that the process, on finding the file no longer there, would create a new file with the original file name (myoutputfile.log). But this does not work: the process holds an open file descriptor for the file itself, not for its path, and keeps appending to the renamed file.
I looked here. On executing ls, I see that the streams are now marked as (deleted), but I'm quite confused about what to do next. In the gdb command, do I have to specify the process executable in addition to the process ID? What happens if I don't specify it, or I get it wrong? Once in gdb, how do I force the stream to re-create a file at the deleted file's same location (same path and filename)?
How can I use the commands in shell to signal it to start a new file for an existing process's output redirection?
PS: I can't do a trial-and-error because it's rather important I get this right. If it is relevant to know, this is a java process.
I resolved this issue by doing the following:
cp myoutputfile.log myoutputfile_.log; echo > myoutputfile.log
This copied the original contents to an archive file and then reset the log file in place.
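One caveat with echo &gt; file: it leaves a single newline behind rather than a truly empty file. The copy-then-truncate pattern (what logrotate calls copytruncate) is usually written with : &gt; or truncate -s 0 instead; a sketch with the same invented file names:

```shell
# Archive the current contents, then truncate the live file in place.
echo "old log contents" > myoutputfile.log
cp myoutputfile.log myoutputfile_.log   # snapshot for the archive
: > myoutputfile.log                    # truncate to 0 bytes without
                                        # disturbing the writer's open fd
wc -c < myoutputfile.log                # confirm it is now empty
```

One thing to be aware of: if the process opened the log without O_APPEND (as plain &gt; redirection does), it keeps writing at its old file offset after truncation, leaving a NUL-padded gap at the start of the file; a writer in append mode resumes cleanly at offset 0.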
