powershell stdin pipes and redirection - bash

Hello, so I have been making a small cross-platform script that I can curl and pipe into bash and PowerShell. The basic idea is that the server sends some commands to the interpreter and then issues one final command that redirects everything after it to stdout. An example in bash is
#some commands
aplay rick.wav
cat -
random text
that will be redirected to stdout by cat...
bash will never see this
I would then pipe this to stdin of bash
But for PowerShell I can do cat test.ps1 | iex or cat test.ps1 | powershell -
But I can't continuously redirect stdin to stdout in one command the way cat - does, because PowerShell's cat (an alias for Get-Content) doesn't read from stdin.
Also, a side note: after trying a lot of random things, it seems there are several kinds of stdin on Windows, one being the keyboard and another being pipes.

You can pipe lines of text to powershell.exe, the Windows PowerShell CLI, via -Command - (-c -), and it will interpret them one by one.
Here's an interactive demonstration from inside PowerShell; it works the same with input piped (provided via stdin) from the outside:
# Repeatedly prompt for a line of input and execute it as a PowerShell command.
# Press Ctrl-C to exit.
& { while ($true) { Read-Host } } | powershell -noprofile -c -
Note:
-Command - has problematic aspects, notably with commands that span multiple lines (an additional Enter keystroke / newline is then needed for the command to be recognized), and so does -File -, whose behavior is even stranger - see this answer and GitHub issue #3223.
Another demonstration, simulating outside stdin input via 2 lines piped to powershell -c -:
'get-date', 'get-item /' | powershell -noprofile -c -
The two commands are executed and their output is printed; powershell.exe then exits, because no more stdin input is available. However, with indefinite stdin input (analogous to cat - on Unix-like platforms), the PowerShell process would be kept alive indefinitely too.
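To approximate cat - from within PowerShell itself, one option is a loop over the process's stdin reader (a sketch, not from the original answer; note that when the script itself arrives via -c -, the host is already consuming stdin, so this may interleave with command parsing):
# Read raw stdin line by line and echo it until EOF - a rough PowerShell analogue of `cat -`
while ($null -ne ($line = [Console]::In.ReadLine())) {
    $line
}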

Related

Why does "(echo <Payload> && cat) | nc <link> <port>" creates a persistent connection?

I began playing CTF challenges, and I encountered a problem where I needed to send an exploit to a binary and then interact with the spawned shell.
I found a solution to this problem which looks something like this:
(echo -ne "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\xbe\xba\xfe\xca" && cat) | nc pwnable.kr 9000
Meaning:
without the "cat" sub-command, I couldn't interact with the shell, but with it, i now able to send commands into the spawned shell and get the returned output to my console stdout.
What exactly happens there? this command line confuses me
If you just type in cat at the command line, you'll be able to see that this command simply copies stdin to stdout one line at a time. It will carry on doing this until you either quit with Ctrl-C or send an EOF with Ctrl-D.
In this example you're running cat immediately after successfully printing the payload (the && operator tells the shell to run the second command only if the first exits with code zero, i.e., no error). As a result, the remote end won't see an EOF until you terminate cat as described above. When this is piped to nc, everything you type is sent via cat to the remote server, and everything it sends back appears on your stdout.
So yes, in effect you end up with an interactive shell. You can get pretty much the same effect on your own machine by running cat | sh.
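For instance, a minimal local run of the same pattern (a sketch; the payload here is just a harmless echo):
(echo 'echo payload executed' && cat) | sh
# type further shell commands here; press Ctrl-D (EOF) to end the session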

How to output a constantly active CMD

I am running an application which opens CMD and connects via an API service. Throughout the day new stuff shows up in that CMD window, and I would like to export that information to a txt file somewhere; every time something new shows up, either append it to the same file or create a new one. It doesn't really matter.
App.exe > /file.txt doesn't really work
Redirection examples
command > filename # Redirect command output to a file (overwrite)
command >> filename # APPEND into a file
command 2> filename # Redirect errors from operation to a file (overwrite)
command 2>> filename # APPEND errors to a file
command 2>&1 # Add errors to results
command 1>&2 # Add results to errors
command | command # This is the basic form of a PowerShell Pipeline
# In PowerShell 3.0+
command 3> warning.txt # Write warning output to warning.txt
command 4>> verbose.txt # Append verbose.txt with the verbose output
command 5>&1 # Writes debug output to the output stream
command *> out.txt # Redirect all streams (output, error, warning, verbose, and debug) to out.txt
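For the append-to-one-file use case in the question, combining the append form with the all-streams redirection could look like this (a sketch; App.exe and the log path are placeholders):
.\App.exe *>> C:\logs\app.log  # keep appending everything the app prints to one log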
You are not showing any code for how you are starting/using cmd.exe in your use case, which just leaves the folks trying to help you to guess. So, a redirect of cmd.exe, for example:
$MyOutputFile = 'C:\MyOutputFile.txt'
Start-Process -FilePath c:\windows\system32\cmd.exe -ArgumentList '/c C:\YourCommand.bat' -Wait -NoNewWindow -RedirectStandardOutput $MyOutputFile
Lastly, since you've left us to guess: if you're launching Process A from PowerShell, but Process A is, in turn, launching Process B, then it is up to Process A to capture or redirect the output of Process B. There's no way for PowerShell to sub-capture that output if Process A isn't doing it.
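If errors need capturing as well, Start-Process also accepts -RedirectStandardError alongside -RedirectStandardOutput (a sketch; paths are placeholders):
$MyOutputFile = 'C:\MyOutputFile.txt'
$MyErrorFile  = 'C:\MyErrorFile.txt'
Start-Process -FilePath c:\windows\system32\cmd.exe -ArgumentList '/c C:\YourCommand.bat' `
    -Wait -NoNewWindow -RedirectStandardOutput $MyOutputFile -RedirectStandardError $MyErrorFile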
Resources
About Redirection
How-to: Redirection
PowerShell Redirection Operators
Understanding Streams, Redirection, and Write-Host in PowerShell
Use PowerShell Redirection Operators for Script Flexibility

Piping multiple commands to bash, pipe behavior question

I have this command sequence that I'm having trouble understanding:
[me#mine ~]$ (echo 'test'; cat) | bash
echo $?
1
echo 'this is the new shell'
this is the new shell
exit
[me#mine ~]$
As far as I can understand, here is what happens:
A pipe is created.
stdout of echo 'test' is sent to the pipe.
bash receives 'test' on stdin.
echo $? returns 1, which is what happens when you run test without args.
cat runs.
It is copying stdin to stdout.
stdout is sent to the pipe.
bash will execute whatever you type in, but stderr won't get printed to the screen (we used |, not |&).
I have three questions:
It looks like, even though we run two commands, we use the same pipe and bash process for both commands. Is that the case?
Where do the prompts go?
When something like cat uses stdin, does it take exclusive ownership of stdin as long as the shell runs, or can other things use it?
I suspect I'm missing some detail with ttys, but I'm not sure. Any help or details or man excerpt appreciated!
So...
Yes, there's a single pipe sending commands to a single instance of bash. Note:
$ echo 'date "+%T hello $$"; sleep 1; date "+%T world $$"' | bash
22:18:52 hello 72628
22:18:53 world 72628
There are no prompts. From the man page:
An interactive shell is one started without non-option arguments (unless -s is specified) and without the -c option whose standard input and error are both connected to terminals. PS1 is set and $- includes i if bash is interactive.
So a pipe is not an interactive shell, and therefore has no prompt.
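You can check this yourself by asking bash whether $- contains i (a quick demonstration, not from the quoted man page):
echo 'case $- in (*i*) echo interactive ;; (*) echo non-interactive ;; esac' | bash
# prints: non-interactive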
Stdin and stdout can only connect to one thing at a time. cat will take stdin from the process that ran it (for example, your interactive shell) and send its stdout through the pipe to bash. If you need multiple things to be able to submit to the stdin of that cat, consider using a named pipe.
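A minimal named-pipe sketch (the FIFO path is arbitrary):
mkfifo /tmp/cmdpipe      # create the FIFO
bash < /tmp/cmdpipe &    # bash reads commands from the FIFO
exec 3> /tmp/cmdpipe     # hold a writer open so bash doesn't see EOF
echo 'date' >&3          # any process that opens the FIFO can submit commands
exec 3>&-                # closing the last writer delivers EOF; bash exits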
Does that cover it?

Want to redirect the output of the nohup command [duplicate]

I have a problem with the nohup command.
When I run my job, I have a lot of data. The output nohup.out becomes too large and my process slows down. How can I run this command without getting nohup.out?
The nohup command only writes to nohup.out if the output would otherwise go to the terminal. If you have redirected the output of the command somewhere else - including /dev/null - that's where it goes instead.
nohup command >/dev/null 2>&1 # doesn't create nohup.out
Note that the >/dev/null 2>&1 sequence can be abbreviated to just >&/dev/null in most (but not all) shells.
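For instance, in bash or zsh (plain POSIX sh does not support this shorthand):
nohup command >&/dev/null   # same as: nohup command >/dev/null 2>&1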
If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:
nohup command >/dev/null 2>&1 & # runs in background, still doesn't create nohup.out
On Linux, running a job with nohup automatically closes its input as well. On other systems, notably BSD and macOS, that is not the case, so when running in the background, you might want to close input manually. While closing input has no effect on the creation or not of nohup.out, it avoids another problem: if a background process tries to read anything from standard input, it will pause, waiting for you to bring it back to the foreground and type something. So the extra-safe version looks like this:
nohup command </dev/null >/dev/null 2>&1 & # completely detached from terminal
Note, however, that this does not prevent the command from accessing the terminal directly, nor does it remove it from your shell's process group. If you want to do the latter, and you are running bash, ksh, or zsh, you can do so by running disown with no argument as the next command. That will mean the background process is no longer associated with a shell "job" and will not have any signals forwarded to it from the shell. (A disowned process gets no signals forwarded to it automatically by its parent shell - but without nohup, it will still receive a HUP signal sent via other means, such as a manual kill command. A nohup'ed process ignores any and all HUP signals, no matter how they are sent.)
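Putting that together in bash:
nohup command >/dev/null 2>&1 &   # fully silenced, running in the background
disown                            # drop the job from the shell's job table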
Explanation:
In Unixy systems, every source of input or target of output has a number associated with it called a "file descriptor", or "fd" for short. Every running program ("process") has its own set of these, and when a new process starts up it has three of them already open: "standard input", which is fd 0, is open for the process to read from, while "standard output" (fd 1) and "standard error" (fd 2) are open for it to write to. If you just run a command in a terminal window, then by default, anything you type goes to its standard input, while both its standard output and standard error get sent to that window.
But you can ask the shell to change where any or all of those file descriptors point before launching the command; that's what the redirection (<, <<, >, >>) and pipe (|) operators do.
The pipe is the simplest of these... command1 | command2 arranges for the standard output of command1 to feed directly into the standard input of command2. This is a very handy arrangement that has led to a particular design pattern in UNIX tools (and explains the existence of standard error, which allows a program to send messages to the user even though its output is going into the next program in the pipeline). But you can only pipe standard output to standard input; you can't send any other file descriptors to a pipe without some juggling.
The redirection operators are friendlier in that they let you specify which file descriptor to redirect. So 0<infile reads standard input from the file named infile, while 2>>logfile appends standard error to the end of the file named logfile. If you don't specify a number, then input redirection defaults to fd 0 (< is the same as 0<), while output redirection defaults to fd 1 (> is the same as 1>).
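For example (hypothetical file names):
sort < names.txt      # identical to: sort 0< names.txt
ls -l > listing.txt   # identical to: ls -l 1> listing.txt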
Also, you can combine file descriptors together: 2>&1 means "send standard error wherever standard output is going". That means that you get a single stream of output that includes both standard out and standard error intermixed with no way to separate them anymore, but it also means that you can include standard error in a pipe.
So the sequence >/dev/null 2>&1 means "send standard output to /dev/null" (which is a special device that just throws away whatever you write to it) "and then send standard error to wherever standard output is going" (which we just made sure was /dev/null). Basically, "throw away whatever this command writes to either file descriptor".
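One related gotcha worth illustrating: 2>&1 copies where fd 1 points at the moment the shell processes it, so the order of the redirections matters:
command >/dev/null 2>&1   # stdout discarded first, then stderr follows it: both silenced
command 2>&1 >/dev/null   # stderr copies the terminal first; only stdout is discarded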
When nohup detects that neither its standard error nor output is attached to a terminal, it doesn't bother to create nohup.out, but assumes that the output is already redirected where the user wants it to go.
The /dev/null device works for input, too; if you run a command with </dev/null, then any attempt by that command to read from standard input will instantly encounter end-of-file. Note that the merge syntax won't have the same effect here; it only works to point a file descriptor to another one that's open in the same direction (input or output). The shell will let you do >/dev/null <&1, but that winds up creating a process with an input file descriptor open on an output stream, so instead of just hitting end-of-file, any read attempt will trigger a fatal "invalid file descriptor" error.
nohup some_command > /dev/null 2>&1 &
That's all you need to do!
Have you tried redirecting all three I/O streams:
nohup ./yourprogram > foo.out 2> foo.err < /dev/null &
You might want to use the detach program. You use it like nohup but it doesn't produce an output log unless you tell it to. Here is the man page:
NAME
detach - run a command after detaching from the terminal
SYNOPSIS
detach [options] [--] command [args]
Forks a new process, detaches it from the terminal, and executes command with the specified arguments.
OPTIONS
detach recognizes a couple of options, which are discussed below. The
special option -- is used to signal that the rest of the arguments are
the command and args to be passed to it.
-e file
Connect file to the standard error of the command.
-f Run in the foreground (do not fork).
-i file
Connect file to the standard input of the command.
-o file
Connect file to the standard output of the command.
-p file
Write the pid of the detached process to file.
EXAMPLE
detach xterm
Start an xterm that will not be closed when the current shell exits.
AUTHOR
detach was written by Robbert Haarman. See http://inglorion.net/ for
contact information.
Note I have no affiliation with the author of the program. I'm only a satisfied user of the program.
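For instance, based on the options listed above, a nohup-like invocation with explicit log files might look like this (file names are placeholders):
detach -o job.log -e job.err -- ./long_job.sh   # output to job.log, errors to job.err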
The following command will let you run something in the background without creating nohup.out:
nohup command | tee &
In this way, you will be able to get the console output while running the script on the remote server:
sudo bash -c "nohup /opt/viptel/viptel_bin/log.sh $* &> /dev/null" &
Redirecting the output of sudo causes sudo to re-ask for the password; hence an awkward mechanism is needed for this variant.
If you have a bash shell on your Mac/Linux machine in front of you, you can try out the steps below to understand redirection practically:
Create a 2-line script called zz.sh:
#!/bin/bash
echo "Hello. This is a proper command"
junk_errorcommand
The echo command's output goes to the STDOUT file stream (file descriptor 1).
The error command's output goes to the STDERR file stream (file descriptor 2).
Currently, simply executing the script sends both STDOUT and STDERR to the screen.
./zz.sh
Now start with the standard redirection :
./zz.sh > zfile.txt
In the above, "echo" (STDOUT) goes into zfile.txt, whereas "error" (STDERR) is displayed on the screen.
The above is the same as :
./zz.sh 1> zfile.txt
Now you can try the opposite and redirect "error" (STDERR) into the file; the STDOUT from the "echo" command goes to the screen.
./zz.sh 2> zfile.txt
Combining the above two, you get:
./zz.sh 1> zfile.txt 2>&1
Explanation:
FIRST, send STDOUT 1 to zfile.txt
THEN, send STDERR 2 to STDOUT 1 itself (using the &1 reference).
Therefore, both 1 and 2 go into the same file (zfile.txt).
Eventually, you can pack the whole thing inside nohup ... & to run it in the background:
nohup ./zz.sh 1> zfile.txt 2>&1 &
You can run the command below.
nohup <your command> > <outputfile> 2>&1 &
e.g., I have a nohup command inside a script:
./Runjob.sh > sparkConcuurent.out 2>&1

shell script - redirection causes some information to be lost

I have a shell script that enables BLE device scanning with the following command:
timeout 10s hcitool lescan
By executing this script (say ble_scan), I can see the nearby devices shown on the terminal.
However, when I redirect it to a file and the terminal with
./ble_scan | tee test.log
I can't see the nearby devices on the screen anymore, nor in the log file.
./ble_scan 2>&1 | tee test.log
The above redirection doesn't help either. Did I go wrong anywhere here?
If the command behaves differently when its output is redirected, you can run it within script.
script test.log
#=> Script started, output file is test.log
./ble_scan
# lots of output here
exit
#=> Script done, output file is test.log
Note that the file will include terminal-specific characters like carriage returns not normally captured in output redirects.
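If those carriage returns get in the way later, one way to strip them afterwards (a post-processing sketch):
tr -d '\r' < test.log > test_clean.log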
