How can I separate the logs of one pod from another?
Below is what I have worked on:
CrashLoopBackOff=`for i in $(kubectl get po -n namespace | grep CrashLoopBackOff | awk '{print $1}'); do echo $i; done`
for y in $CrashLoopBackOff; do
k8s_logs=`kubectl logs $y -n namespace | tail -10` arr2+="$k8s_logs\n"
done
But the output for two or more crashed pods runs together and I cannot tell which lines belong to which pod. Any idea how I can put an echo or some other separator between each pod's logs?
2021-03-12 07:30:11.622 [ERROR] [gstp_server_app.cc:4007] Failed to start subscription from SEL
2021-03-12 07:30:11.622 [ERROR] [gstp_server_app.cc:4010] Exception: 0 In order to do catch up, static entity: must have history enabled.
2021-03-12 07:30:11.622 [FATAL] [gstp_server_app.cc:1441] JMS Catchup initialization error.
2021-03-12 07:30:11.622 [FATAL] [gstp_server_app.cc:695] Failed to connect to JMS for data
2021-03-12 07:30:13.188 [INFO ] [jms_server.cc:495] End JMSServer.run
Ending JMSServer::Run
GSTP JMS Server Application Shutting down.
255
2021-03-12 07:31:51.828 [ERROR] [rcltocvll.cc:325] Zero Curve does not exist for DEPOT asof 02/06/12.
2021-03-12 07:31:51.831 [ERROR] [gbdopspec.cc:297] Error Creating Curve List Out Of Risk Class
2021-03-12 07:31:51.831 [ERROR] [sectheo.cc:705] Error retrieving security curves.
2021-03-12 07:31:51.833 [ERROR] [sectheo.cc:999] Error computing theoretical price for Security
2021-03-12 07:31:51.833 [ERROR] [rcltocvll.cc:325] Zero Curve does not exist for DEPOT asof 02/06/12.
2021-03-12 07:31:51.833 [ERROR] [gbdopspec.cc:297] Error Creating Curve List Out Of Risk Class List
2021-03-12 07:31:51.833 [ERROR] [sectheo.cc:705] Error retrieving security curves.
2021-03-12 07:31:51.833 [ERROR] [sectheo.cc:999] Error computing theoretical price for Security
bash: line 1: 6 Killed gstp_server_jms -N -LOGGER INFO -CFG
Please don't forget to include part of the output of kubectl get po -n namespace so people know what kind of data you are working with.
Also, the backquote (`) is used in the old-style command substitution, e.g. foo=`command`. The foo=$(command) syntax is recommended instead: backslash handling inside $() is less surprising, and $() is easier to nest. See http://mywiki.wooledge.org/BashFAQ/082
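For example, here is a trivial illustration of the nesting difference (the commands themselves are arbitrary):
# With $(...) the inner substitution nests without extra escaping:
echo $(dirname $(command -v bash))
# The backtick form needs backslashes to nest:
echo `dirname \`command -v bash\``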
See also https://mywiki.wooledge.org/BashFAQ/001 to learn how to read lines of input
The rest of your code is not working as written, so here is working code based on my assumptions about what you tried to achieve:
arr2=()
while read -r crashing_pod _; do
while IFS= read -r line; do
arr2+=("$crashing_pod: $line")
# arr2+=("$line")
done < <(kubectl logs "$crashing_pod" -n namespace | tail -10)
# arr2+=('' "end of $crashing_pod 's logs" '')
done < <(kubectl get po -n namespace | grep CrashLoopBackOff)
printf '%s\n' "${arr2[@]}"
You'll see I have prepended each line with the pod name, but you can use the two commented-out lines instead if you'd rather have a separator, as requested.
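If you don't actually need the lines in an array, a streaming variant with an explicit separator per pod is even simpler (a sketch; --no-headers makes kubectl omit the header row so it can't be picked up by mistake):
while read -r crashing_pod _; do
    printf '===== logs for %s =====\n' "$crashing_pod"
    kubectl logs "$crashing_pod" -n namespace | tail -10
    printf '===== end of %s =====\n\n' "$crashing_pod"
done < <(kubectl get po -n namespace --no-headers | grep CrashLoopBackOff)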
I am trying to build a raw Bitcoin transaction for the Bitcoin testnet in Golang, but when trying to send it I get an error:
mandatory-script-verify-flag-failed (Script evaluated without error but finished with a false/empty top stack element)
Here is the raw transaction:
01000000014071216d4d93d0e3a4d88ca4cae97891bc786e50863cd0efb1f15006e2b0b1d6010000008a4730440220658f619cde3c5c5dc58e42f9625ef71e8279f923af6179a90a0474a286a8b9c60220310b4744fa7830e796bf3c3ed9c8fea9acd6aa2ddd3bc54c4cb176f6c20ec1be0141045128ccd27482b3791228c6c438d0635ebb2fd6e78aa2d51ea70e8be32c9e54daf29c5ee7a3752b5896e5ed3693daf19b57e243cf2dcf27dfe5081cfcf534496affffffff012e1300000000000017a914de05d1320add0221111cf163a9764587c5a171ba8700000000
I tried to debug it with btcdeb:
./btcdeb --tx=01000000014071216d4d93d0e3a4d88ca4cae97891bc786e50863cd0efb1f15006e2b0b1d6010000008a4730440220658f619cde3c5c5dc58e42f9625ef71e8279f923af6179a90a0474a286a8b9c60220310b4744fa7830e796bf3c3ed9c8fea9acd6aa2ddd3bc54c4cb176f6c20ec1be0141045128ccd27482b3791228c6c438d0635ebb2fd6e78aa2d51ea70e8be32c9e54daf29c5ee7a3752b5896e5ed3693daf19b57e243cf2dcf27dfe5081cfcf534496affffffff012e1300000000000017a914de05d1320add0221111cf163a9764587c5a171ba8700000000 --txin=02000000000101394187cababd1c18dfc9d30d6325167aa654b1d35505ab77cd1b96562fda5d500000000017160014c0a4f9f451ea319f67c6d2535c1e41bd5d333214feffffff02f009aab80000000017a91455f5b5f3afa4751a54205941a45a14b27ad99be787ec8016000000000017a91435ac960b988964007c167c38ea724e034123e6b1870247304402205d6b22bcaf1a58bc41224eecc7437eef0db9b7e7fb709826314a8bd73adb330702204fbbbd49747d75331a89e2f7b486e0b7a786ecef3229b8e3fec0c4be491921c301210233eab1d60449c393c8f22d4b5d98ee103060d9644dc2af665e607a62e2151bbc30091e00
btcdeb 0.4.21 -- type `./btcdeb -h` for start up options
LOG: sign segwit taproot
notice: btcdeb has gotten quieter; use --verbose if necessary (this message is temporary)
input tx index = 0; tx input vout = 1; value = 1474796
got witness stack of size 0
14 op script loaded. type `help` for usage information
script | stack
-------------------------------------------------------------------+--------
30440220658f619cde3c5c5dc58e42f9625ef71e8279f923af6179a90a0474a... |
045128ccd27482b3791228c6c438d0635ebb2fd6e78aa2d51ea70e8be32c9e5... |
<<< scriptPubKey >>> |
OP_HASH160 |
35ac960b988964007c167c38ea724e034123e6b1 |
OP_EQUAL |
<<< P2SH script >>> |
5128ccd2 |
OP_DEPTH |
OP_SIZE |
OP_NOP4 |
OP_PICK |
28c6c438d0635ebb2fd6e78aa2d51ea70e8b |
OP_UNKNOWN |
#0000 30440220658f619cde3c5c5dc58e42f9625ef71e8279f923af6179a90a0474a286a8b9c60220310b4744fa7830e796bf3c3ed9c8fea9acd6aa2ddd3bc54c4cb176f6c20ec1be01
Can anybody give advice on where to look?
Judging from the examples in the btcdeb documentation, you should expect to see a valid script message when starting btcdeb, if the script validates correctly.
btcdeb will still allow you to step through the script with the step command, but because the script is invalid in the first place, this may not tell you much, except that it decides to halt after reaching <<< P2SH script >>>, thinking that is the end of the script.
The most obvious fix would be to remove OP_UNKNOWN, which represents an opcode that btcdeb did not understand, but there are probably other errors lurking that also prevent the script from validating. You could try removing the end of the script and building it back up incrementally, testing with the debugger, until it works.
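For example (a hypothetical invocation, assuming I am reading the btcdeb documentation correctly that it accepts a bracketed script plus initial stack items in place of full transactions), you could start from just the P2SH check and grow the script from there:
./btcdeb '[OP_HASH160 35ac960b988964007c167c38ea724e034123e6b1 OP_EQUAL]' 0x<candidate_redeem_script_hex>
Here 0x<candidate_redeem_script_hex> is a placeholder for whatever redeem script you intended to supply.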
I use cut in one rule of my pipeline and it always throws an error, but without any error description.
When I run the same command in a simple bash script, it works without any errors.
Here is the rule:
rule convert_bamheader:
input: bam/SERUM-ACT/exon_tagged_trimmed_mapped_cleaned.bam, stats/SERUM-ACT/good_barcodes_clean_filter.txt
output: bam/SERUM-ACT/exon_tagged_trimmed_mapped_cleaned_header.txt, bam/SERUM-ACT/exon_tagged_trimmed_mapped_cleaned_header_filtered.tsv
jobid: 15
wildcards: sample=SERUM-ACT
threads: 4
mkdir -p stats/SERUM-ACT
mkdir -p log/SERUM-ACT
samtools view bam/SERUM-ACT/exon_tagged_trimmed_mapped_cleaned.bam > bam/SERUM-ACT/exon_tagged_trimmed_mapped_cleaned_header.txt
cut -f 12,13,18,20-24 bam/SERUM-ACT/exon_tagged_trimmed_mapped_cleaned_header.txt | grep -f stats/SERUM-ACT/good_barcodes_clean_filter.txt > bam/SERUM-ACT/exon_tagged_trimmed_mapped_cleaned_header_filtered.tsv
Submitted DRMAA job 15 with external jobid 7027806.
Error in rule convert_bamheader:
jobid: 15
output: bam/SERUM-ACT/exon_tagged_trimmed_mapped_cleaned_header.txt, bam/SERUM-ACT/exon_tagged_trimmed_mapped_cleaned_header_filtered.tsv
ClusterJobException in line 256 of */pipeline.snake:
Error executing rule convert_bamheader on cluster (jobid: 15, external: 7027806, jobscript: */.snakemake/tmp.ewej7q4e/snakejob.convert_bamheader.15.sh). For detailed error see the cluster log.
Job failed, going on with independent jobs.
Exiting because a job execution failed. Look above for error message
Complete log: */.snakemake/log/2018-12-18T104741.092698.snakemake.log
I thought it might have something to do with the number of threads provided versus the number of threads needed for the cut step, but I am not sure.
Perhaps someone can help me?
Cheers!
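One way to narrow this down is to rerun the rule's shell commands outside the cluster under the same strict shell settings Snakemake applies. A minimal sketch, assuming the paths from the rule above (note that Snakemake prefixes shell commands with set -euo pipefail, and grep exits with status 1 when it matches nothing, which pipefail turns into a failure of the whole job):
#!/usr/bin/env bash
# Reproduce the rule's commands with Snakemake-style strict settings.
set -euo pipefail
mkdir -p stats/SERUM-ACT
mkdir -p log/SERUM-ACT
samtools view bam/SERUM-ACT/exon_tagged_trimmed_mapped_cleaned.bam > bam/SERUM-ACT/exon_tagged_trimmed_mapped_cleaned_header.txt
# grep -f with zero matches exits 1; under pipefail this fails the script,
# which could explain a cluster job dying with no error message.
cut -f 12,13,18,20-24 bam/SERUM-ACT/exon_tagged_trimmed_mapped_cleaned_header.txt | grep -f stats/SERUM-ACT/good_barcodes_clean_filter.txt > bam/SERUM-ACT/exon_tagged_trimmed_mapped_cleaned_header_filtered.tsv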
Hello bash programmers, I am using GATK and trying to loop through my BAM files and do local realignment using my target intervals and known indels. Below is the code I am trying. I am hoping someone can help with the error and correct my code.
# do the local realignment.
echo "local realignment..."
for file in `ls -d adp/map/*marked_duplicates.bam`
do
java -jar ~/software/GenomeAnalysisTK-3.3-0/GenomeAnalysisTK.jar \
-T IndelRealigner \
-R ~/flybase/fb-r5.57/dmel-all-chromosome-r5.57.fasta \
-I $file \
-known adp/map/*indel_intervals.vcf \
-targetIntervals adp/map/*target_intervals.list \
-o ${file}_realigned_reads.bam
done
wait
# Create a new index file.
echo "indexing the realigned bam file..."
for file in `ls -d adp/map/*realigned_reads.bam`
do
~/software/samtools-1.2/samtools index $file
done
Error: when I looked this up it appeared to be a coding issue, but I am not seeing it.
##### ERROR ------------------------------------------------------------------------------------------
##### ERROR A USER ERROR has occurred (version 3.3-0-g37228af):
##### ERROR
##### ERROR This means that one or more arguments or inputs in your command are incorrect.
##### ERROR The error message below tells you what is the problem.
##### ERROR
##### ERROR If the problem is an invalid argument, please check the online documentation guide
##### ERROR (or rerun your command with --help) to view allowable command-line arguments for this tool.
##### ERROR
##### ERROR Visit our website and forum for extensive documentation and answers to
##### ERROR commonly asked questions http://www.broadinstitute.org/gatk
##### ERROR
##### ERROR Please do NOT post this error to the GATK forum unless you have really tried to fix it yourself.
##### ERROR
##### ERROR MESSAGE: Invalid argument value 'adp/map/360M_F_L002.recal.bam.sorted.bam_marked_duplicates.bam_target_intervals.list' at position 10.
##### ERROR Invalid argument value 'adp/map/517_F_L002.recal.bam.sorted.bam_marked_duplicates.bam_target_intervals.list' at position 11.
##### ERROR Invalid argument value 'adp/map/517M_F_L002.recal.bam.sorted.bam_marked_duplicates.bam_target_intervals.list' at position 12.
##### ERROR Invalid argument value 'adp/map/900_F_L002.recal.bam.sorted.bam_marked_duplicates.bam_target_intervals.list' at position 13.
##### ERROR Invalid argument value 'adp/map/900M_F_L002.recal.bam.sorted.bam_marked_duplicates.bam_target_intervals.list' at position 14.
At least part of the problem is the * in your commands. GATK doesn't deal well with globs. To specify multiple values to an argument, specify the argument multiple times.
i.e. instead of
-known adp/map/*indel_intervals.vcf
you need to specify each file with a separate argument
-known adp/map/first_file.indel_intervals.vcf
-known adp/map/second_file.indel_intervals.vcf
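In a loop you could build those repeated arguments from the matching files, e.g. (a sketch based on the original script; it assumes, as the error output suggests, that each BAM has a matching ${file}_target_intervals.list next to it):
# Collect each matching VCF as a separate -known argument instead of
# passing the glob straight to GATK.
known_args=()
for vcf in adp/map/*indel_intervals.vcf; do
    known_args+=(-known "$vcf")
done
for file in adp/map/*marked_duplicates.bam; do
    java -jar ~/software/GenomeAnalysisTK-3.3-0/GenomeAnalysisTK.jar \
        -T IndelRealigner \
        -R ~/flybase/fb-r5.57/dmel-all-chromosome-r5.57.fasta \
        -I "$file" \
        "${known_args[@]}" \
        -targetIntervals "${file}_target_intervals.list" \
        -o "${file}_realigned_reads.bam"
done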
There may be other issues as well. For instance, I'm not certain that -targetIntervals can take multiple files as input. Also, that's a very old version of GATK; you might want to upgrade to 3.8.
Background: I'd like to assert that no exceptions are written to a log for 30 seconds. Basically it's a smoke test to see if my application has come up and we haven't introduced any serious bugs.
Requirements: I'd like to do this using a bash script, preferably using common shell utilities. An exception is basically a single line that starts with !. There are a lot of other log lines written that are not exceptions.
Questions: How can I do this?
Here's one possible solution:
timeout 30 tail -F -n 0 my.log | grep --line-buffered '^!' | head -n 1
I can then check whether the exit code is 124 or 143 (timed out; I don't know why it varies) or 0 (a line was found). This is my best bet so far. However, this solution doesn't seem to exit very quickly when an exception occurs. I'd love to hear other solutions!
Assuming the log file is only updated by the program when an exception occurs.
You can use the following command:
stat log_file_name
You'll get output similar to the following. Run stat again after 30 seconds or so and compare the current and previous results: if the timestamps have not changed, the file has not been modified in the meantime.
Access: 2015-03-27 15:22:17.000000000 +0530
Modify: 2015-03-27 15:22:16.000000000 +0530
Change: 2015-03-27 15:22:16.000000000 +0530
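For example, a minimal sketch of that check (assuming GNU stat, whose -c %Y format prints the modification time as seconds since the epoch):
before=$(stat -c %Y log_file_name)
sleep 30
after=$(stat -c %Y log_file_name)
# If the modification time is unchanged, nothing was written in the window.
if [ "$before" -eq "$after" ]; then
    echo "log not modified in the last 30 seconds"
else
    echo "log was modified"
fi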
I have an application that appends to the same log file. As this file is rather large (around 8 GB), I'd like to extract portions of it based on the timestamp at the beginning of each line.
-bash-3.2$ cat application.log | egrep --color "Starting Application|Exception"
08:46:01.328 [main] INFO Starting Application...
09:14:53.670 [Thread-1] ERROR Resolver - Caught exception -> com.jgoodie.AuthzException: Authorization failed
Caused by: com.jgoodie.AuthzException: Authorization failed
09:56:15.739 [main] INFO Starting Application...
10:17:08.932 [Thread-1] ERROR Resolver - Caught exception -> com.jgoodie.AuthzException: Authorization failed
Caused by: com.jgoodie.AuthzException: Authorization failed
In the above example, I'd like to extract the logs for the first run of the application (between 08:46:01.328 and 09:56:15.739). Is there any simple way (preferably a one-liner) to do this?
Thanks
sed -n '/08:46:01.328/,/09:56:15.739/p' application.log
perl -lne 'if(/^08:46:01.328/.../^09:56:15.739/){print}' your_file
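Both one-liners print everything from the first line matching the start timestamp through the first line matching the end timestamp, inclusive. For comparison, an awk equivalent (a sketch; the dots are escaped so they match literally):
awk '/^08:46:01\.328/{flag=1} flag; /^09:56:15\.739/{flag=0}' application.log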