How to save a Python 2 IDLE/shell/command prompt interactive session on Windows?

I run calculations on Windows for hours and would like to have the calculation report/log inside the interactive IDLE/shell window be saved to a file at the end by a command.
It would be nice to exit() and close the window at the end too.
As much as I like Linux, this is an unattended Windows machine, so sadly some modules/commands are not available, and the ability to install other software is limited.
The fact that the developers did not think of a command to save the transcript from within IDLE/the shell is surprising.
I know that in some environments you can direct the output of a job, like a report, to a text file by using a flag such as -o/--output or > redirection. Surprisingly, Python does not seem to support anything like that!
Any help would be appreciated
Thanks

Windows Command Prompt supports stdout redirection and probably stderr redirection. I just tested python -c "print('test')" > F:/dev/tem/out.txt and the printed output went to out.txt. Replace -c "print('test')" with script.py and the same should happen. Piping stdout of one program to stdin of another might work. You might be able to chain programs with a .bat file. PowerShell is likely more powerful and flexible, but I have never used it.
I am not completely clear on what you are asking, but I hope the following answers your questions.
Python runs in 2 modes: batch and interactive. Interactive mode is intended for ephemeral interaction with a human. Batch mode is for unattended computation, with occasional screen messages, but with most results sent to a file other than the screen. Both modes are combined when you run python -i xyz.py. The file is first run in batch mode, and then Python shifts to interactive mode.
It sounds like you should be using batch mode rather than either Python's or IDLE's interactive mode. If your code runs from IDLE, you should be able to run it directly with the same python.exe that you used to run IDLE. There are exceptions, of course, if one makes one's code dependent on running within IDLE, but this is unlikely to be an accident or to be needed for unattended running.
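For example (the script name and log path are placeholders), you could run the calculation unattended from a Windows Command Prompt and capture everything it prints:
python calculation.py > report.txt 2>&1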
The IDLE Shell simulates interactive Python. When a file is run from an editor window, IDLE simulates python -i file-being-edited.py, with screen output going to Shell. In either case, an interactive user can save the contents of Shell with the File => Save As menu command. So there is such a command. There are also close window and exit commands and shortcuts.
In IDLE's intended use, as an interactive Python learning and program development environment, there is no need for a program to issue these commands. To save data in a file, a program can open a file and write the data directly.
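For instance, a minimal sketch (the file names and the calculation are placeholders, not the asker's actual code) of a script that writes its own report and then exits, so nothing has to be captured from the Shell window:
# calc.py -- hypothetical example
results = [n * n for n in range(10)]   # stand-in for the long calculation

with open("report.txt", "w") as log:   # write the report directly to a file
    for r in results:
        log.write("%d\n" % r)

raise SystemExit(0)                    # same effect as exit(): end the session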

Try to see if you can install Jupyter Notebook (not separate software, just a Python module):
pip install jupyter
Jupyter Notebook is very helpful for saving and sharing your code. It can be used both as a shell and as a script editor.
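Once installed, you can start it from a command prompt with:
jupyter notebook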

Related

How to correctly emulate a terminal in Linux/macOS using exec (Go)?

I need to emulate a terminal in go. I try to do it like this:
lsCmd := exec.Command("bash", "-c", "ls")
lsOut, err := lsCmd.Output()
if err != nil {
    panic(err)
}
fmt.Println(string(lsOut))
And it seems to work correctly (the native Ubuntu terminal displays a horizontal list, while the output of this function is printed vertically).
But if I deliberately call a wrong command, for example exec.Command("bash", "-c", "lss"), I get:
panic: exit status 127
And in the native Ubuntu terminal I get the following result:
Command 'lss' not found, did you mean:
and enumeration of commands.
I need to communicate with the native terminal and get the same output from a command as if I had typed it into the standard Ubuntu terminal.
What is the best way to do this? Maybe the exec library is not suitable for this? All this is needed for front-end communication with the OS terminal: on a simple HTML/CSS/JS page, the user enters a command, Go sends it to the native terminal of the operating system and returns the result to the front end.
How can I get the same result of executing commands as if I were working in a native terminal?
The problem
But if I specifically call the wrong command, for example exec.Command("bash", "-c", "lss"), I get:
panic: exit status 127
And in the native ubuntu terminal I get the following result:
Command 'lss' not found, did you mean:
and enumeration of commands.
This has nothing to do with Go, and the problem is actually two-fold:
Ubuntu ships with a special package, command-not-found, which is usually preinstalled and which tries to make the terminal friendlier for mere mortals by employing two techniques:
It tries to suggest corrections for misspellings (your case).
It tries to suggest packages to install when the user tries to execute a program which would have been available had the user installed a specific package.
When the command is not found, "plain" (see below) shell fails the attempt by returning a non-zero exit code.
This is absolutely expected and normal.
I mean, panicking on it is absolutely unwise.
There's a historical difference in how a shell is run on a Unix system.
When a user logs into the system (remember that back in the days the concept of the shell was invented you'd be logging in via a hardware computer terminal which was basically what your GNOME Terminal window is but in hardware, and connected over a wire),
the so-called login shell is started.
The primary idea of a login shell is to provide an interactive environment for the user.
But as you surely know, shells are also capable of executing scripts.
When a shell executes a script, it's running in a non-interactive mode.
The modes a Unix shell can work in
Now let's dig deeper into that thing about interactive vs non-interactive shells.
In the interactive mode:
The shell is usually connected to a real terminal (either hardware or a terminal emulator; your GNOME Terminal window is a terminal emulator).
"Connected" means that the shell's standard I/O streams are connected to the terminal, so that what the shell prints is displayed by the terminal.
It enables certain bells and whistles for the user, usually providing limited means for editing what is being input (bash, for instance, engages GNU Readline).
In the non-interactive mode:
The shell's standard I/O streams are connected to some files (or to "nowhere" — like /dev/null).
No bells and whistles are enabled — as there is nobody to make use of them.
GNU bash is able to run in both modes, and which mode it runs in depends
on how it was invoked.
When initializing in different modes, bash reads different initialization scripts, and this explains why the machinery provided by the command-not-found package gets engaged in the interactive mode and does not when bash is run otherwise — like in your invocation from Go.
What to do about the problem
The simplest thing to try is to run bash with the --login command-line option or otherwise make it think it runs as an interactive shell.
This might solve the problem for your case but not necessarily.
The next possible problem is that some programs do really check whether they're running at a terminal — usually these are programs which insist on real interaction with the user, usually for security purposes, and there are programs which simply cannot run when not connected to a real terminal — these are "full-screen" text UI programs such as GNU Midnight Commander, Vim, Emacs, GNU Nano and anything like this.
To solve this problem, the only solution is to run the shell in a pseudo-terminal environment, and that's what @eudore hinted at in their comment.
The github.com/creack/pty package might be a place to start looking at; golang.org/x/crypto/ssh also provides some means to wrangle PTYs.
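A minimal sketch (assuming the third-party github.com/creack/pty package; not the original poster's code) of running bash inside a pseudo-terminal so the interactive machinery, such as command-not-found suggestions, gets engaged:
package main

import (
	"io"
	"os"
	"os/exec"

	"github.com/creack/pty"
)

func main() {
	// force an interactive shell and run the (misspelled) command in it
	cmd := exec.Command("bash", "-i", "-c", "lss")
	ptmx, err := pty.Start(cmd) // attach the command to a new pseudo-terminal
	if err != nil {
		panic(err)
	}
	defer ptmx.Close()

	// everything the shell prints to its terminal is readable from ptmx;
	// the copy ends when the shell exits
	io.Copy(os.Stdout, ptmx)
}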

What is the Bash file extension?

I have written a bash script in a text editor, what extension do I save my script as so it can run as a bash script? I've created a script that should in theory start an ssh server. I am wondering how to make the script execute once I click on it. I am running OS X 10.9.5.
Disagreeing with the other answers, there's a common convention to use a .sh extension for shell scripts -- but it's not a useful convention. It's better not to use an extension at all. The advantage of being able to tell that foo.sh is a shell script because of its name is minimal, and you pay for it with a loss of flexibility.
To make a bash script executable, it needs to have a shebang line at the top:
#!/bin/bash
and it must be made executable with the chmod +x command so that the system recognizes it as an executable file. It then needs to be installed in one of the directories listed in your $PATH. If the script is called foo, you can then execute it from a shell prompt by typing foo. Or, if it's in the current directory (common for temporary scripts), you can type ./foo.
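For example, a minimal sketch (the name foo and its contents are placeholders). Put this in a file named foo:
#!/bin/bash
echo "hello from foo"
then mark it executable and run it:
chmod +x foo
./foo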
Neither the shell nor the operating system pays any attention to the extension part of the file name. It's just part of the name. And by not giving it a special extension, you ensure that anyone (either a user or another script) that uses it doesn't have to care how it was implemented, whether it's a shell script (sh, bash, csh, or whatever), a Perl, Python, or Awk script, or a binary executable. The system is specifically designed so that either an interpreted script or a binary executable can be invoked without knowing or caring how it's implemented.
UNIX-like systems started out with a purely textual command-line interface. GUIs like KDE and Gnome were added later. In a GUI desktop system, you can typically run a program (again, whether it's a script or a binary executable) by, for example, double-clicking on an icon that refers to it. Typically this discards any output the program might print and doesn't let you pass command-line arguments; it's much less flexible than running it from a shell prompt. But for some programs (mostly GUI clients) it can be more convenient.
Shell scripting is best learned from the command line, not from a GUI.
(Some tools do pay attention to file extensions. For example, compilers typically use the extension to determine the language the code is written in: .c for C, .cpp for C++, etc. This convention doesn't apply to executable files.)
Keep in mind that UNIX (and UNIX-like systems) are not Windows. MS Windows generally uses a file's extension to determine how to open/execute it. Binary executables need to have a .exe extension. If you have a UNIX-like shell installed under Windows, you can configure Windows to recognize a .sh extension as a shell script, and use the shell to open it; Windows doesn't have the #! convention.
You don't need any extension (or you could choose an arbitrary one, but .sh is a useful convention).
You should start your script with #!/bin/bash (that first line is understood by the execve(2) syscall), and you should make your file executable with chmod u+x. So if your script is in some file $HOME/somedir/somescriptname.sh, you need to type once
chmod u+x $HOME/somedir/somescriptname.sh
in a terminal. See chmod(1) for the command and chmod(2) for the syscall.
Unless you are typing the whole file path, you should put that file in some directory mentioned in your PATH (see environ(7) & execvp(3)), which you might set permanently in your ~/.bashrc if your login shell is bash.
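For example (using ~/bin, a common but not mandatory choice):
mkdir -p $HOME/bin
mv $HOME/somedir/somescriptname.sh $HOME/bin/
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc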
BTW, you could write your script in some other language, e.g. in Python by starting it with #!/usr/bin/python, or in OCaml by starting it with #!/usr/bin/ocaml...
Executing your script by double-clicking (on what? you did not say!) is a desktop environment issue and could be desktop specific (it might be different with KDE, MATE, GNOME, ... or IceWM or RatPoison). Perhaps reading the EWMH spec might help you get a better picture.
Perhaps making your script executable with chmod might make it clickable on your desktop (apparently, Quartz on MacOSX). But then you probably should make it give some visual feedback.
And several computers don't have any desktop, including your own when you access it remotely with ssh.
I don't believe it is a good idea to run your shell script by clicking. You probably want to be able to give arguments to your shell script (and how would you do that by clicking?), and you should care about its output. If you are able to write a shell script, you are able to use an interactive shell in a terminal. That is the best and most natural way to use a script. Good interactive shells (e.g. zsh or fish or perhaps a recent bash) have delicious and configurable autocompletion facilities and you won't have to type a lot (learn to use the tab key of your keyboard). Also, scripts and programs are often parts of composite commands (pipelines, etc...).
PS. I'm using Unix since 1986, and Linux since 1993. I never started my own programs or scripts by clicking. Why should I?
Just .sh.
Run the script like this:
./script.sh
EDIT: Like anubhava said, the extension doesn't really matter. But for organisational reasons, it is still recommended to use extensions.
I know this is quite old now but I feel like this adds to what the question was asking for.
If you're on a Mac and you want to be able to run a script by double-clicking it, you need to use the .command extension. Also, same as before, make the file executable with chmod +x.
As was noted before, this isn't really that useful tbh.
TL;DR -- If the user (not necessarily the developer) of the script is using a GUI interface, it depends on what file browser they are using. MacOS's Finder will require the .sh extension in order to execute the script. Gnome Nautilus, however, recognizes properly shebanged scripts with or without the .sh extension.
I know the reasons for and against using an extension on bash scripts have already been stated multiple times, but I have what I consider to be a good rule of thumb.
If you're the type who hops in and out of bash and using the terminal in general or are developing a tool for someone else who does not use the terminal, put a .sh extension on your bash scripts. That way, users of that script have the option of double-clicking on that file in a GUI file browser to run the script.
If you're the type who primarily does all or most of your work in the terminal, don't bother putting any extension on your bash scripts. They would serve no purpose in the terminal, assuming that you've already set up your ~/.bashrc file to visually differentiate scripts from directories.
Edit:
In the Gnome Nautilus file browser, with 4 test files (each with permission given for the file to be executed), each containing a stupidly simple bash command to open a terminal window (gnome-terminal):
A file with NO extension with #!/bin/bash on the first line.
It worked by double-clicking on the file.
A file with a .sh extension with #!/bin/bash on the first line.
It worked by double-clicking on the file.
A file with NO extension with NO #!/bin/bash on the first line.
It worked by double-clicking on the file...technically, but the GUI gave no indication that it was a shell script. It said it was just a plain text file.
A file with a .sh extension with NO #!/bin/bash on the first line.
It worked by double-clicking on the file.
However, as Keith Thompson wisely pointed out in the comments of this answer, relying on the .sh extension instead of the bash shebang on the first line of the file (#!/bin/bash) could cause problems.
Another however: I recall from when I was previously using MacOS that even properly shebanged (is that a word?) bash scripts without a .sh extension could not be run from the GUI on MacOS. I would love for someone to correct me on that in the comments though. If this is true, it would prove that there is at least one file browser out there where the .sh extension matters.

In Bash, how can I tell if I am currently in a terminal

I want to create my own personal logfile that logs not only when I log in and out, but also when I lock/unlock my screen. Kind of like /var/log/wtmp on steroids.
To do this, I decided to run a script when I log into Ubuntu that runs in the background until I quit. My plan to do this is to add the script to .bashrc, using ./startlogging.sh & and in the script I will use trap to catch signals. That's great, except .bashrc gets run every time I open a new terminal, which is not what I want for the logger.
Is there a way to tell in Bash that the current login is a gnome login? Alternatively, is there some sort of .gnomerc I can use to run my script?
Edit: Here is my script:
Edit 2: Removed the script, since it's not related to the question. I will repost my other question, rather than repurpose this one.
Are you looking for a way to detect what type of terminal it is?
Try:
echo $TERM
From Wikipedia:
TERM (Unix-like) - specifies the type of computer terminal or terminal
emulator being used (e.g., vt100 or dumb).
See also: List of Terminal Emulators
For bash, use ~/.bash_logout.
That will get executed when you log out, which sounds like what you are trying to do.
Well, for just bash, what you want are .bash_login/.bash_logout in your home directory (rather than .bashrc). These are run whenever a LOGIN shell starts/finishes, which happens any time you log in to a shell (on a tty or console, or via ssh or other network login). These are NOT run for bash processes created to run in terminal windows that you create (as those are not login shells), so they won't get run any time you open a new terminal.
The problem is that if you log in with some mechanism that does not involve a terminal (such as gdm running on the console to start a gnome or kde or unity session), then there's no login shell so .bash_login/logout never get run. For that case, the easiest is probably to put something in your .xsessionrc, which will get run every time you start an X session (which happens for any of those GUI environments, regardless of which one you run). Unfortunately, there's no standard script that runs when an X session finishes.
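A minimal sketch of the logging side for the login-shell case (the log file name and format are just illustrations):
# in ~/.bash_login
echo "login  $(date '+%F %T')" >> ~/.session.log

# in ~/.bash_logout
echo "logout $(date '+%F %T')" >> ~/.session.log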

Running a command as a background process/service

I have a shell command that I'd like to run in the background. I've read that this can be done by suffixing an & to the command, which causes it to run as a background process, but I need some more functionality and was wondering how to go about it:
I'd like the command to start and run in the background every time the system restarts.
I'd like to be able to start and stop it as and when needed, just like one can do service apache2 start.
How can I go about this? Is there a tool that allows me to run a command as a service?
I'm a little lost with this.
Thanks
UNIX systems can handle as many processes as you need simultaneously (just open new shell windows if you're in a GUI), so running a process in the background is only necessary if you need to carry on using the current shell window for other things once you've run an application or process that keeps running.
To run a command called command in background mode, you'd use:
command &
The & is a special character that returns you to the command prompt once the process is started. There are other special characters that do other things; more info is available here.
Take a look at the daemon command, which can turn arbitrary processes into daemons. This will allow your script to act as a daemon without requiring you to do a lot of extra work. The next step is to invoke it automatically at boot. To know the correct way to do that, you'll need to provide your OS (or, for Linux, your distribution).
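Roughly like this (using the daemon(1) program from the libslack package; the command path is a placeholder, and you should verify the option names against your system's man page):
daemon --name=myjob --respawn /usr/local/bin/mycommand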
Based on this article:
http://felixmilea.com/2014/12/running-bash-commands-background-properly/
...another good way is with screen eg:
screen -d -m -S "my session name" <command to run>
from the screen manual:
-d -m
Start screen in detached mode. This creates a new session but doesn't attach to it. This is useful for system startup scripts.
i.e. you can close your terminal, the process will continue running (unlike with &)
with screen you can also reattach to the session later
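For example:
screen -ls   # list running/detached screen sessions
screen -r    # reattach (name the session if more than one exists)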
Use nohup while directing the output to /dev/null
nohup command &>/dev/null &
For advanced job control with bash, you should look into the commands jobs, bg and fg.
However, it seems like you're not really interested in running the command in the background. What you want to do is launch the command at startup. The way to do this varies depending on the Unix system you use, but try to look into the rc family of files (/etc/rc.local for example on Ubuntu). They contain scripts that will be executed after the init script.
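A minimal sketch of what an /etc/rc.local entry might look like (the command path and log file are placeholders; keep the file executable and keep exit 0 as the last line):
#!/bin/sh -e
# start the long-running command at boot, detached from the boot sequence
/usr/local/bin/mycommand >> /var/log/mycommand.log 2>&1 &
exit 0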

Bash script on Cygwin seems to get stuck between consecutive commands

I am using a bash script to run a number of applications (some repeatedly) on a Windows machine through Cygwin. The script contains the commands to launch those applications, line by line. Most of these applications run for many minutes, and many times I have observed that the (i+1)-th application does not start even after the i-th application has completed. In such cases, if I press Enter in the Cygwin console where the bash script is running, the next application starts running. Is it because of an issue with bash on Cygwin? Or is it an issue with the Windows OS itself? Have any of you observed such an issue with bash + Cygwin + Windows?
Thanks.
I think I have seen this before.
Instead of
somecommand
try
somecommand </dev/null
If that doesn't work, try
cmd /c somecommand
Or experiment with other redirections, e.g.
somecommand >/dev/null
Sounds like you may have a problem with your shell script encoding; DOS (and Windows) uses CR+LF line endings, whereas Linux uses LF endings. Try saving the file with LF line endings.
What might also be going on:
When I was running Cygwin on a school laptop, I encountered a dramatic slowing of shell scripts vs. when they were running in a native Linux environment. This was especially apparent when running a configure script from GNU Autotools.
Check your path for slow drives. (From the Cygwin FAQ):
Why is Cygwin suddenly so slow?
If suddenly every command takes a very long time, then something is probably attempting to access a network share. You may have the obsolete //c notation in your PATH or startup files. Using //c means to contact the network server c, which will slow things down tremendously if it does not exist.
You might also want to check whether you have an antivirus program running. Antivirus programs tend to scan every single executable file as it is executed; this can cause problems for even simple shell scripts that run hundreds or even thousands of individual programs before they run their course.
This mailing list post outlines what is needed to pseudo-mount the main /usr/bin directory as cygexec. I'm not sure what that does, but I found it helped.
If you're running a configure script, try the -C option.
Hope this helps!
Occasionally, I'll get this behaviour because I have accidentally deleted the shebang at the top of the script, that is, deleted the #!/bin/bash on the first line of the script.
It's even more likely for this to happen when a parent shell script calls a child script that has the shebang missing!
Hope this helps.
A bit of a long shot, but I have seen some similar behaviour previously.
In Windows 2000, if any program running in a command prompt window had some of its text highlighted by the cursor, it would pause the running command, and you had to press Enter or clear the highlighting to get the command prompt to continue executing.
As I said, bit of a long shot, but accidental mouse clicks could be your issue...
Install Cygwin with Unix-style line breaks and forget weird problems like that.
Try saving your script with the line-ending style that matches your Cygwin installation. That is, use the style you specified during installation.
Here is some relevant information:
https://stackoverflow.com/a/7048200/657703
