I have a fairly long script that I want users to slightly adapt to their scenario before copy-pasting it into an IRB or pry shell.
Most people would, unless they are paying attention, blindly copy-paste the script into a console. I'd like to stop parsing/executing the rest of the code and, for example, require the user to explicitly confirm that they want to continue running the script (say, via a simple gets that forces them to press Enter).
I was thinking of something like this (imagine it is the code on a wiki page that one would copy-paste):
# Start script
[...] # Some stuff
# Copy-paste the authorization code you receive from the internet into the `token` variable
# NOTE: `.purple` below assumes a string-colorizing helper (e.g. from a coloring gem) is already loaded
string = "Do not blindly copy-paste! Did you do what the comment just above says? We need the token in the `token` variable. If you did, just press Enter. Otherwise abort with CTRL+C and restart the script.".purple
puts string
gets
Service.call(token)
The problem is that the gets/coloration seems to break further execution of the copy-pasted script.
My question is: assuming you have a chunk of code that you copy-paste into a REPL, is there a way to abort/pause the automatic Read-Eval-Print Loop via some code contained in the pasted chunk?
Basically, I'm writing a test procedure that testers can just copy-paste into their terminal to check that a feature is working. 99% of the time, it is enough to just set some variables and copy-paste the code provided in the examples. But sometimes there are some (optional) manual steps to be done in the middle of the execution of the copy-pasted code (like copying a token, retrieving an automatically generated ID, etc.).
Related
I've written my first web crawler script in AppleScript, and at this point I've gained a lot of knowledge and tricks from what I've written. I now want to pare the script down and disable some things that I thought would be helpful. For example, I coded it so the script completely quits Excel after entering the data into some workbooks from web pages, because I noticed that when Excel wasn't started fresh, running the code would return an error. But now I have the script running every 15 minutes, so I worry that I will be working in Excel on some forecasting or formatting and the script will run and kick me out of Excel while I'm working, interrupting me or, worse, quitting without the option to save.
I vaguely remember that in C++ there was the ability to mark some text with a certain character so it was disabled from running, while still letting you see the original code before editing out the stuff you decided wasn't necessary. Is there a way to mark a certain statement with a symbol so that AppleScript doesn't run those commands? I haven't experimented at all, and I don't know what to guess would do it. I may be mistaken that you can blank out or "white out" text while leaving it in its original position, still readable and able to be put back in when you want it, or left there so you have a collection of all the research you put into the process of building a script for a project. Well, I suppose I'll just wonder a while and find something else to burn hours on.
In AppleScript there are three ways to "comment out" text in your code:
-- A line beginning with two dashes is a comment.
# In AppleScript 2.0+, the number sign also works as a comment symbol.
(* Multi-line text
   can be commented out
   using these symbols. *)
When I run connmanctl I get a different prompt,
enrico:~$ connmanctl
connmanctl>
and different commands are available, like services, technologies, connect, ...
I'd like to know how this thing works.
I know that, in general, changing the prompt can be just a matter of changing the variable PS1. However, that alone (read: "the command connmanctl changes PS1 and returns") wouldn't have any effect at all on the functionality of the command line (I would still be in the same bash process).
Indeed, the fact that the available commands change looks to me like proof that connmanctl keeps running the whole time the prompt is connmanctl>, and that, upon running connmanctl, a while loop is entered with a read statement in it, followed by a bunch of commands which process the input.
In this scenario that I imagine, there's no need even to change PS1, as the connmanctl> line could simply be obtained with echo -n "connmanctl> ".
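If that is right, a minimal sketch of such a loop in bash (the commands handled here are made up, just to illustrate the idea) would be something like:
while IFS= read -e -r -p 'connmanctl> ' line; do
    case "$line" in
        quit|exit) break ;;                          # leave the inner "shell"
        services)  echo "... list services ..." ;;   # placeholder for a real command
        *)         echo "unknown command: $line" ;;
    esac
done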
The reason behind this curiosity is that I'm trying to write a wrapper around connmanctl. I've already written it, and it works as intended, except that I don't know how to properly set up the autocompletion feature, and I think that in order to do so I first need to understand the right way to write an interactive shell script.
At http://sourceforge.net/projects/auroraconkytheme/ I have this theme for conky.
One of the scripts gets a cover from the internet for the current playing song.
The script reads the new_trackid (current song) from Spotify and compares it to the first_trackid (previous song).
If they (new_trackid and first_trackid) don't match, the song has just changed, so it gets the cover of the new song.
I previously worked with writing the variables to text files.
I came across 'export' and I tried that. It seemed to work just fine, but I now discovered it was comparing a number (new_trackid) with an empty variable (first_trackid).
Basically, every time conky runs the script it forgets the variable first_trackid. Export does not work in this case.
So it keeps downloading the cover every 5 seconds (the interval at which conky runs the script).
What am I missing?
How can I make Linux remember the variable first_trackid, without using a text file, the next time it runs this script?
Or how can I pass a variable to the same script the next time it runs?
There is a lot to read about these issues. Env, source, .bashrc...
There must be a nicer way to do this.
Thanks
P.S. The code I tried to paste came out unreadable.
Using a file is really the right way to keep state.
The reason is that export won't affect the environment of the parent process (your shell or cron), only the script's own process and its children.
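For illustration, a minimal sketch of keeping the ID in a file between runs (the path /tmp/first_trackid is just a placeholder), assuming a bash script:
# load the track ID remembered from the previous run, if any
first_trackid=$(cat /tmp/first_trackid 2>/dev/null)

if [ "$new_trackid" != "$first_trackid" ]; then
    # the song changed: fetch the new cover here, then remember the ID for the next run
    printf '%s\n' "$new_trackid" > /tmp/first_trackid
fi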
If running from a shell, you could source the script directly:
source ./yourscript.sh
This will evaluate it directly in your current shell, so export will affect your environment.
How can I capture the input/output of a script in real time (such as with tee), but line-by-line instead of character-by-character? My goal is to capture the input typed into the interactive prompts of a script only after backspaces and auto-completion have finished processing (after the Return key is hit).
Specifically, I am trying to create a wrapper script for ssh that creates a timestamped log of commands used on remote servers. The script, which uses tee to redirect the output for filtering, works well, but the redirected output gets jumbled with unsubmitted characters whenever I use the backspace key or the up/down keys to scroll through my remote history. For example: service test stopexitservice test stopart or cd ..logs[1Pls -al.
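Roughly, the wrapper is of this shape (a simplified sketch; the log filename and the absence of any filtering are just for illustration):
#!/bin/sh
# simplified sketch: force a pty (-t), show the session live, and append it to a timestamped log
ssh -t "$@" | tee -a "ssh-$(date +%Y%m%d-%H%M%S).log"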
Perhaps there is a way to capture the terminal's scrollback and redirect that like with tee?
Update: I have found a character-based cleanup solution that does what I want most of the time. However, I am still hoping for an answer to this question (which may well be msw's answer that it is very difficult to do).
In the Unix world there are two primary modes of handling keyboard input. The first is known as 'raw', in which characters are passed from the terminal to the reading program one at a time. This is the mode that editors (and the like) use, because the editor needs to respond immediately when you press a key.
The other terminal discipline is called 'cooked', which is the line-by-line behavior you think of as bash's line-by-line input: you get to backspace, and the command is not executed until you press Return. ssh has to take your input in raw, character-by-character mode because it has no idea what is running on the other side. For example, if you are running an editor on the far side, it can't wait for a Return before sending the key-press. So, as some have suggested, grabbing shell history on the far side is the only reasonable way to get a command-by-command record of the bash commands you typed.
I oversimplified for clarity; actually, most installations of bash take input in raw mode because they allow editor-like command editing. For example, Ctrl-P scrolls up the command history and Ctrl-A goes to the beginning of the line, and bash needs to get those keys the moment they are typed, not after a Return.
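Roughly, you can see and toggle the two disciplines with stty:
stty -a | grep icanon    # 'icanon' means cooked (canonical) mode; '-icanon' means raw-ish
stty -icanon -echo       # character-at-a-time input, no local echo (what editors effectively get)
stty icanon echo         # restore normal cooked, line-buffered behaviour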
This is another reason that capturing on the local side is obnoxiously difficult: if you capture on the local side, the stream will be filled with backspaces and all of bash's editing commands. To get a true transcript of what the remote shell actually executed, you have to parse the character stream as if you were the remote shell. There is also a problem if you run something like
vi /some_file/which_is_on_the_remote/machine
the input stream to the local ssh will be filled with movement commands, snippets of text, backspaces and so on, and it would be bloody difficult to figure out what is part of a bash command and what is you talking to the editor.
Few things involving computers are impossible; getting clean input from the local side of an ssh invocation is really, really hard.
I question the actual utility of recording the commands that you execute on a local or remote machine. The reason is that there is so much state which is not visible from a command log. As a simple example, here's a log of two commands:
17:00$ cp important_file important_file.bak
17:15$ rm important_file
and two days later you are trying to figure out whether important_file.bak has the contents you intended or not. Given that log, you can't answer that simple question. Even if you had the sequence
16:58$ cat important_file
17:00$ cp important_file important_file.bak
17:15$ rm important_file
If you aren't capturing the output, the cat in the log will not tell you anything. Give me almost any command sequence and I can envision a scenario in which it will not give you the information you need to make sense of what was done.
For a very similar purpose I use GNU screen, which offers the option to record everything you do in a shell session (input/output). The log it creates also contains undesirable characters, but I clean them up with perl:
perl -ne 's/\x1b[[()=][;?0-9]*[0-9A-Za-z]?//g;s/\r//g;s/\007//g;print' < screenlog.0
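For reference, the logging itself can be switched on when starting screen, or toggled from inside a running session:
screen -L        # start a session that logs output to screenlog.0 in the current directory
# ...or press Ctrl-a H inside a running session to toggle logging on and off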
I hope this helps.
Some features of screen:
http://speaking-my-language.blogspot.com/2010/09/top-5-underused-gnu-screen-features.html
The site where I found the perl one-liner:
https://superuser.com/questions/99128/removing-the-escape-characters-from-gnu-screens-screenlog-n
I have a program that runs in the command line (i.e. $ run program starts up a prompt) and performs mathematical calculations. It has its own prompt that takes text input and responds through standard output/error (or creates a separate X window if needed, but this can be disabled). Sometimes I would like to send it a small input, and other times I send in a large text file with one input per line. This program takes a lot of resources and also has a long startup time, so it would be best to have only one instance of it running at a time. I could keep the program's prompt open and supply the input that way, or I can send the input to the process followed by an exit command (to leave the prompt), which just prints the output. The problem with sending the request with an exit command is that the program must start up each time (slow...). Furthermore, the output of this program is sometimes cryptic, and it would be helpful to filter it in some way (e.g. simplify the output, apply ANSI colors, etc.).
This all makes me want to put some two-way I/O filter (or is that a "pipe"? or "wrapper"?) around the program so that it can run in the background as a single process. I would then communicate with it without having to restart it, while also filtering the output to be more user friendly. I have been looking all over for ideas and I am stumped as to how to accomplish this in some simple, shell-accessible manner.
One thing I have tried was redirecting stdin and stdout to files, but the program hangs (doesn't quit) and only reads the file once, leaving me unable to continue the communication. I think this is because the prompt is waiting for more user input after the EOF. I thought this could be set up as a local server, but I am uncertain how to begin doing that.
I would love to find some simple way to accomplish this. Additionally, if you can think of a way to do it, do you think there is also a way to allow attaching to or detaching from the prompt on request? Any help and ideas would be greatly appreciated.
You could create two named pipes (man mkfifo) and redirect input and output:
myprog < fifoin > fifoout
Then you could open new terminal windows and do this in one:
cat > fifoin
And this in the other:
cat < fifoout
(Or use tee to save the input/output as well.)
To dump a large input file into the program, use:
cat myfile > fifoin
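Putting it all together, a rough sketch (assuming the program is called myprog; note that opening a FIFO blocks until the other end is opened too, and the exec line is just one way to keep the input pipe from closing between batches):
mkfifo fifoin fifoout            # create the two named pipes
myprog < fifoin > fifoout &      # start the program once, in the background, wired to the pipes
exec 3> fifoin                   # hold a writer open so the program doesn't see EOF after each batch
cat myfile > fifoin              # feed it a batch of input
cat fifoout                      # read its responses (here, or from another terminal)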