How to send an SNMP trap with a script - snmp

I have to send an SNMP trap to my monitoring system with a script (Perl, for instance, or something else) when some condition is met (e.g. when memory or disk usage rises above 80%).
I have never written a script, so I have no idea how to do this.
This little script will let me test my Java program, which catches traps on a given port.

If all you want is to send a trap to test your trap receiver, you don't have to write a script! You can just download and install the net-snmp command-line tools from
http://net-snmp.sourceforge.net/download.html
The command "snmptrap" is just what you're looking for.
If you want to do this from a shell script, you simply have the script call the snmptrap binary.
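For example, the following one-liner sends a generic SNMPv2c coldStart trap; this is a minimal sketch, and the host, port, and community string ("public") are assumptions you should adjust to match your receiver:
snmptrap -v 2c -c public localhost:162 '' 1.3.6.1.6.3.1.1.5.1
Here '' tells snmptrap to use the current sysUpTime, and 1.3.6.1.6.3.1.1.5.1 is the standard coldStart trap OID.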
If you are actually writing a monitoring script in Perl, I still think the easiest way is to execute the snmptrap program from the Perl script. You also have the option of using an SNMP library. I've used Net::SNMP (unrelated to net-snmp) to good effect:
https://metacpan.org/pod/Net::SNMP
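A minimal sketch of sending a trap with Net::SNMP follows; the host, port, and community are assumptions, and the two varbinds shown (sysUpTime.0 and snmpTrapOID.0) are the ones an SNMPv2 trap must carry:
use strict;
use warnings;
use Net::SNMP qw(:asn1);

# Open a session pointed at the trap receiver (values here are examples)
my ($session, $error) = Net::SNMP->session(
    -hostname  => 'localhost',
    -port      => 162,
    -community => 'public',
    -version   => 'snmpv2c',
);
die "Session error: $error" unless defined $session;

# Send a coldStart trap; sysUpTime.0 and snmpTrapOID.0 are mandatory varbinds
my $result = $session->snmpv2_trap(
    -varbindlist => [
        '1.3.6.1.2.1.1.3.0',     TIMETICKS,         600,
        '1.3.6.1.6.3.1.1.4.1.0', OBJECT_IDENTIFIER, '1.3.6.1.6.3.1.1.5.1',
    ],
);
die 'Trap error: ' . $session->error() unless defined $result;
$session->close();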

Related

How to read from terminal in Lazarus on Ubuntu

Lazarus can run bash scripts and commands. How can I get the output of an executed command as a string and use it later, for example to print it with ShowMessage? Thanks!
Summary:
Use the TProcess class from unit Process, which lets you capture console output using pipes.
For straightforward cases, use the RunCommand helper functions (also in unit Process; they wrap TProcess for the simple cases).
Be aware that while you see console output as one stream, there may in fact be two (stdout and stderr).
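A minimal sketch using RunCommand (the command and its argument here are just example values):
uses Process, Dialogs;

procedure ShowCommandOutput;
var
  Output: string;
begin
  // RunCommand starts the program, waits for it, and returns its stdout
  if RunCommand('/bin/ls', ['-l'], Output) then
    ShowMessage(Output)
  else
    ShowMessage('Command failed');
end;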

Executing a shell script within an expect script

I am attempting to write an expect script which executes/runs another shell script. That shell script configures an emulator, so the expect script is intended to configure the emulator automatically by sending back the appropriate data. However, when I put exec followed by the name of the shell script in my expect script, nothing happened. The console just sits and waits; entering strings and whatnot does not appease the script. Failure to launch. DOA... I read in other posts that exec is not a good fit when interaction with the subprogram is necessary.
Any advice for how I can execute the shell script within the expect script then?
Thanks!
If you want to interact with the shell script, you need to spawn it, then expect to see patterns and send responses.
If you're brand new to expect, check out the book "Exploring Expect" by Don Libes, the author of expect.
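A minimal sketch of that spawn/expect/send pattern is below; the script name, prompt text, and response are placeholders for whatever your emulator setup script actually asks:
#!/usr/bin/expect -f
# Spawn the shell script so expect controls its input and output
spawn ./configure_emulator.sh
# Wait for each prompt and send the appropriate answer
expect "Enter emulator port:"
send "5554\r"
# Wait for the script to finish
expect eof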

Are shell scripts read in their entirety when invoked?

I ask because I recently made a change to a KornShell (ksh) script that was executing. A short while after I saved my changes, the executing process failed. Judging from the error message, it looked as though the running process had seen some -- but not all -- of my changes. This strongly suggests that when a shell script is invoked, the entire script is not read into memory.
If this conclusion is correct, it suggests that one should avoid making changes to scripts that are running.
$ uname -a
SunOS blahblah 5.9 Generic_122300-61 sun4u sparc SUNW,Sun-Fire-15000
No. Shell scripts are read and executed incrementally, roughly command by command, with the exception of compound constructs such as if ... fi blocks, which are read in full before being executed:
A shell script is a text file containing shell commands. When such a file is used as the first non-option argument when invoking Bash, and neither the -c nor -s option is supplied (see Invoking Bash), Bash reads and executes commands from the file, then exits. This mode of operation creates a non-interactive shell.
You can demonstrate that the shell waits for the fi of an if block before executing the commands by typing them manually on the command line, as in the transcript below.
http://www.gnu.org/software/bash/manual/bashref.html#Executing-Commands
http://www.gnu.org/software/bash/manual/bashref.html#Shell-Scripts
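Here is what that looks like interactively; nothing runs until the fi is entered:
$ if true; then
>   echo first
>   echo second
> fi
first
second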
It's funny that most operating systems I know of do NOT read the entire content of a script into memory, but run it from disk. Reading it in full first would make it safe to change the script while it is running. I don't understand why it is done this way, given that:
scripts are usually very small (and wouldn't take much memory anyway)
at some point, as shown in this thread, people will make changes to a script that is already running anyway
But, acknowledging this, here's something to think about: if you decide that a script is not running correctly (because you are writing/changing/debugging it), do you care about the rest of that run? You can go ahead and make the changes, save them, and ignore all output and actions of the current run.
But... sometimes, depending on the script in question, a subsequent run of the same script (modified or not) can become a problem, because the current/previous run is behaving abnormally: it will typically skip some commands, or suddenly jump to parts of the script it shouldn't. And THAT may be a problem. It may leave "things" in a bad state, particularly if file manipulation/creation is involved.
So, as a general rule: whether or not the OS supports this behavior, it's best to let the current run finish and THEN save the updated script. You can make the changes already, just don't save them.
It's not like the old days of DOS, where you had only one screen in front of you (one DOS screen), so you can't claim you have to wait for the run to complete before you can open the file again.
No, they are not, and there are good reasons for that.
One thing you should keep in mind is that a shell is not an interpreter, even if there are some similarities. Shells are designed to work with a stream of commands, whether from a TTY, a pipe, a FIFO, or even a socket.
The shell reads from its source line by line until EOF is returned by the kernel.
Most shells have no special support for interpreting files; they work with a file as they would with a terminal.
In fact, this is considered a nice feature, because you can do interesting things like this: How do Linux binary installers (.bin, .sh) work?
You can take a binary file and prepend a shell script to it. You can't do this with an interpreter, because an interpreter parses the whole file, or at least it would try to and fail. A shell just interprets the file line by line and doesn't care about the "garbage" at the end; you just have to make sure the script's execution terminates before it reaches the binary part.
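A minimal sketch of such a self-extracting script; the marker name, archive name, and payload are placeholders:
#!/bin/sh
# Copy everything after the __PAYLOAD__ marker line into an archive, unpack it,
# and exit before execution reaches the binary part
sed '1,/^__PAYLOAD__$/d' "$0" > payload.tar.gz
tar xzf payload.tar.gz
exit 0
__PAYLOAD__
(binary tar.gz data is appended here)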

why tcl expect exit unexpectly?

On Windows, I tested a tcl expect script as follows:
package require Expect
spawn "cmd.exe"
expect ">"
send "echo hello world\r"
But the output printed "F:\Workspace\>" and then it exited.
Of course, I expected it to execute "echo hello world".
Due to the way Expect for Windows works (it uses a special debugging mode), there are certain programs that can't be captured; telnet.exe is one, and cmd.exe could well be another. (The executables concerned have the system bit set in their file flags, IIRC.)
Fortunately, the programs that this causes problems for are usually the ones that you don't actually need to automate with Expect. Tcl is quite capable of talking to other machines directly (by opening a socket) and cmd is both often unneeded and (in the other cases) easy to automate by just using the exec command. If this was just a test that was a proxy for your real automation, don't worry too much for now; try to automate the real program, though just do something simple (like exit cleanly) to start out with and build up from there.
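For instance, a minimal sketch of the exec route (no Expect involved; cmd's /c flag runs a single command and exits):
# Run the command directly and capture its output
set output [exec cmd.exe /c echo hello world]
puts $output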
It might be better if you tell me the problem you're really trying to solve. But anyway, you just need to type
echo hello world
instead of
send "echo hello world\r"
to get the result you require.
cheers
Brian

How to call bash commands from tcl script?

Bash commands are available from an interactive tclsh session. E.g. in a tclsh session you can have
% ls
instead of
$ exec ls
However, you can't have a tcl script which calls bash commands directly (i.e. without exec).
How can I make tclsh recognize bash commands while interpreting tcl script files, just as it does in an interactive session?
I guess there is some tcl package (or something like that) which is loaded automatically when launching an interactive session to support direct calls of bash commands. How can I load it manually in tcl script files?
If you want to have specific utilities available in your scripts, write bridging procedures:
proc ls args {
    # Resolve the external ls and run it, passing any arguments through
    exec {*}[auto_execok ls] {*}$args
}
That will even work (with obvious adaptation) for most shell builtins, and on Windows. (To be fair, you usually don't want an external ls at all; the built-in glob command usually suffices, sometimes with extra help from some file subcommands.) Some commands need a little more work, e.g. redirecting input so it comes from the terminal, with an extra <@stdin or </dev/tty; that's needed for stty on some platforms (see the sketch below), but it works reasonably well.
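For example, a bridge for stty might look like this minimal sketch; the </dev/tty redirection is the part that matters:
proc stty args {
    # stty must talk to the real terminal, so take stdin from /dev/tty
    exec {*}[auto_execok stty] {*}$args </dev/tty
}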
However, if what you're asking for is to have arbitrary execution of external programs without any extra code to mark that they are external, that's considered to be against the ethos of Tcl. The issue is that it makes the code quite a lot harder to maintain; it's not obvious that you're doing an expensive call-out instead of using something (relatively) cheap that's internal. Putting in the exec in that case isn't that onerous…
What's going on here is that the unknown proc gets invoked when you type a command like ls, because ls is not an existing tcl command. By default, unknown checks whether the command was invoked from an interactive session at the top level (not indirectly in a proc body), and whether the name exists as an executable somewhere on the path. You can get something like this in scripts by writing your own unknown proc.
For a good start on this, examine the output of
info body unknown
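If you do want scripts to fall back to external programs, a minimal sketch of a custom handler looks like this (the _original_unknown name is just a placeholder I chose; a real version should be more careful):
# Keep the stock handler around as a fallback
rename unknown _original_unknown
proc unknown args {
    set cmd [lindex $args 0]
    set path [auto_execok $cmd]
    if {$path ne ""} {
        # Found an external program; run it wired to our stdin/stdout
        return [uplevel 1 [list exec {*}$path {*}[lrange $args 1 end] <@stdin >@stdout]]
    }
    # Otherwise defer to the original unknown
    uplevel 1 [list _original_unknown {*}$args]
}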
One thing you should know is that ls is not a Bash command; it's a standalone utility. The clue to how tclsh runs such utilities is right there in its name: sh means "shell", so tclsh is the rough equivalent of Bash in that both are shells. But Tcl != tclsh, so in a plain Tcl script you have to use exec.
