BSD grep does not stop after executing - bash

I am using BSD grep! (Different from UNIX grep)
grep -e '22T1[2-4]' nagoya_all.csv -> nagoya12_14.csv
When I execute this command, nagoya12_14.csv is successfully created. However, the terminal doesn't prompt me for a new command.
Why does this happen?
How can I check if this command is still running?

Take out the - in front of the redirect sign > and you should be good. When you have - at the end of a grep command, grep treats it as a filename meaning stdin, so it waits for user input.
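For the command in the question, that means dropping the stray - so grep reads only from the file:
grep -e '22T1[2-4]' nagoya_all.csv > nagoya12_14.csv
To check whether the original command is still running, press Ctrl-D to send EOF (letting grep finish), or look for the grep process with ps from another terminal.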

Why does "(echo <Payload> && cat) | nc <link> <port>" creates a persistent connection?

I began playing CTF challenges, and I encountered a problem where I needed to send an exploit to a binary and then interact with the spawned shell.
I found a solution to this problem which looks something like this:
(echo -ne "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\xbe\xba\xfe\xca" && cat) | nc pwnable.kr 9000
Meaning:
without the "cat" sub-command I couldn't interact with the shell, but with it I am now able to send commands into the spawned shell and get the returned output on my console's stdout.
What exactly happens there? This command line confuses me.
If you just type in cat at the command line, you'll be able to see that this command simply copies stdin to stdout one line at a time. It will carry on doing this until you either quit with Ctrl-C or send an EOF with Ctrl-D.
In this example you're running cat immediately after successfully printing the payload (the && operator tells the shell to run the second command only if the first exits with status zero, i.e. no error). As a result, the remote end won't see an EOF until you terminate cat as described above. When this is piped to nc, everything you type is sent via cat to the remote server, and everything it sends back appears on your stdout.
So yes, in effect you end up with an interactive shell. You can get pretty much the same effect on your own machine by running cat | sh.
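You can demonstrate the pattern locally without a remote server; this sketch just swaps nc for sh, as in the cat | sh example above:
(echo 'ls' && cat) | sh
The echo runs ls immediately; cat then keeps the pipe open, so every line you type afterwards is executed by sh until you press Ctrl-D.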

Stopping a task in shell script if logs contain specific string

I am trying to run a command in a shell script and would like to exit it if the processing logs (not sure what you call the logs that are printed to the terminal while the task is running) contain the string "INFO | Next session will start at".
I tried using grep, but because the string "INFO | Next session will start at" is not on stdout, it is not detected while the command is running.
The specific command I'm running is below
pipenv run python3 run.py --config accounts/user/config.yml
By 'processing logs' I mean the log output shown in the terminal before the final stdout is displayed.
...
[D 211127 10:07:12 init:400] atx-agent version 0.10.0
[D 211127 10:07:12 init:403] device wlan ip: route ip+net: no such network interface
[11/27 10:07:12] INFO | Time delta has set to 00:11:51.
[11/27 10:07:13] INFO | Kill atx agent.
[11/27 09:59:32] INFO | Next session will start at: 10:28:30 (2021/11/27).
[11/27 09:59:32] INFO | Time left: 00:28:57.
I am trying to do this because the yml file I'm running has a limit on what times it can be executed, and I would like to exit the task if the time condition is not met.
I tried to give as much context but if there's something missing please let me know.
This may work:
pipenv run python3 run.py --config accounts/user/config.yml |
sed "/INFO | Next session will start at/q"
sed prints the piped input until it matches the expression and quits (q). The program will receive SIGPIPE (broken pipe) the next time it tries to write, and will (likely) exit. It's the same as what happens when you run something like find | head.
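A quick way to see the same broken-pipe behaviour on any system:
yes | head -n 3
head exits after printing three lines, and yes is then killed by SIGPIPE instead of running forever.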
You could also use kill in a shell wrapper:
sh -c 'pipenv run python3 run.py --config accounts/user/config.yml |
{ sed "/INFO | Next session will start at/q"; kill -- -$$; }'
Notes:
The program may print a different log if stdout is not a terminal.
If you want to match a literal string, you could use grep -Fm 1 PATTERN instead (see the sketch after these notes), but the other log output will be hidden, since grep only prints matching lines. grep also exits non-zero if there is no match, which can be useful.
This will work in any shell, including zsh. zsh or bash can also be used for the kill wrapper.
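For reference, the grep variant would look like this (a sketch using the same literal string; -F disables regex interpretation and -m 1 quits after the first match):
pipenv run python3 run.py --config accounts/user/config.yml |
grep -Fm 1 'INFO | Next session will start at'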
There are other approaches. This thread focuses on tail, but is a useful reference: https://superuser.com/questions/270529/monitoring-a-file-until-a-string-is-found

sending echo and error to both terminal and file log

I am trying to modify a script someone created for Unix in shell. This script is mostly used to run on back-end servers with no human interaction; however, I needed to make another script that allows users to input information, so it is just a modification of the old version for user input. The biggest issue I am running into is getting both error logs and echos saved to a log file. The script has a lot of them, and I wanted those shown on the terminal as well as sent to the specified log file, to be looked into later.
What I have is this:
exec 1> ${LOG} 2>&1
This line pretty much sends everything to the log file. That is all good, but I also have people entering information in the script, and it sends everything to the log file, including the echo needed for the prompt. This line is also at the beginning of the script. After reading more about stderr and stdout, I tried:
exec 2>&1 1>>${LOG}
exec 1 | tee ${LOG}
But I only get this error when running it: "./bash_pam.sh: line 39: exec: 1: not found"
I have gone over sites such as this one to solve the issue, but I am not understanding why it does not print to both. The way I insert it, it either only sends output to the log location and not to the terminal, or it sends it to the terminal but nothing is preserved in the log.
EDIT: Some of the solutions mentioned for this will work in bash, but not in /bin/sh.
If you would like all output to be printed to the console while also being written to logfile.txt, you would run this command on your script:
bash your_script.sh 2>&1 | tee -a logfile.txt
Or calling it within the file:
<bash_command> 2>&1 | tee -a logfile.txt
The -a option makes tee append to logfile.txt instead of overwriting it.
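If you need this inside the script itself rather than at the call site, a common bash pattern (not plain /bin/sh, which lacks process substitution) uses exec with tee. A minimal sketch, assuming a LOG variable holding your log path:
#!/bin/bash
LOG=/tmp/run.log                 # hypothetical log path
exec > >(tee -a "${LOG}") 2>&1   # stdout and stderr go to both terminal and log
echo "This prompt is visible on the terminal and recorded in ${LOG}"
Prompts still appear on screen because tee writes to the terminal as well as the file.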

shell script - redirection makes some information lost

I have a shell script that can enable ble device scan with the following command
timeout 10s hcitool lescan
By executing this script (say ble_scan), I can see the nearby devices shown on the terminal.
However, when I redirect it to the file and terminal
./ble_scan | tee test.log
I can't see the nearby devices on the screen anymore, nor in the log file.
./ble_scan 2>&1 | tee test.log
The above redirection doesn't help either; where did I go wrong?
If the command behaves differently when its output is not a terminal, you can run it within script, which records the session to a file while the command still sees a terminal.
script test.log
#=> Script started, output file is test.log
./ble_scan
# lots of output here
exit
#=> Script done, output file is test.log
Note that the file will include terminal-specific characters like carriage returns not normally captured in output redirects.
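On Linux with util-linux script, you can do the same non-interactively; a sketch, assuming the -c option is available on your system:
script -c ./ble_scan test.log
This runs ble_scan inside a pseudo-terminal and writes everything to test.log while still showing it on screen.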

Echoing 'at' command in terminal fails

The following should print "hello" (or some reminder) on my Linux command line at 9:00 AM today:
$ at 9:00AM
warning: commands will be executed using /bin/sh
at> echo "hello"
at> <EOT>
However, at the specified time, nothing happens.
I have an empty /etc/at.deny and no /etc/at.allow file, so there shouldn't be any problems with permissions to use the command. Also, writing to a file at 9:00 AM works:
$ at 9:00AM
at> echo "hello" > /home/mart/hello.txt
at> <EOT>
$ cat /home/mart/hello.txt
hello
All jobs are shown as scheduled; I just can't get any output to the terminal window (I'm on Crunchbang Linux with Terminator). Why? Do I need to somehow specify the window for that output?
Thanks for any help!
at runs commands from a daemon (atd), which doesn't have access to your terminal. Thus, output from the script isn't going to appear in your terminal (unless you pipe to the right tty in your command).
Instead, it does as man at says:
The user will be mailed standard error and standard output from his commands, if any.
You may be able to access these reports using mail if your machine is suitably configured.
If you want to have at write to your terminal, you can try piping the output to write, which writes a message to a user's TTY, or to wall if you want to write to every terminal connected to the system.
Okay, nneonneo's explanations led me to using wall, which sends a message to all users. So setting oneself reminders in a terminal window can be done like this:
$ at 9:00AM
warning: commands will be executed using /bin/sh
at> echo "hello" | wall
at> <EOT>
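Alternatively, as noted above about piping to the right tty, you can redirect straight to your terminal's device; a sketch, where /dev/pts/0 stands for whatever the tty command reports in your window:
$ tty
/dev/pts/0
$ at 9:00AM
at> echo "hello" > /dev/pts/0
at> <EOT>
Unlike wall, this writes only to that one terminal, but the job's output is lost if the window is closed before it runs.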
