We are working on a project in Linux, and we need some real-world multi-process applications to demonstrate its features.
I could not find any multi-process applications in Linux.
Please help me with the name of such applications.
Any shell pipeline:
grep whatever myfile | head -100
uses two processes (grep and head) to show you the first hundred lines of myfile that contain whatever. This sort of thing is all over the place in Linux, or any *nix-like system. More specifically, it is a key part of the Unix philosophy, and it is central to any shell, like Bash, the Korn shell, Z shell, Dash, the C shell, etc.
Linux is built on multiprocessing. If you still aren't convinced, look at sshd, which accepts incoming logins from other machines by spawning a new process for each one.
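If you want to see the separate processes for yourself, here is a minimal sketch (assuming a Linux system with the procps version of ps): keep a two-process pipeline alive for a minute and look at it from the same terminal.
# keep a two-process pipeline running long enough to look at it
sleep 60 | sleep 60 &
# show your shell and its two sleep children as a tree
ps --forest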
I am a Mac guy who is used to Mac's Terminal. Now I am using Windows.
What's the difference between these CLI options?
When should I use one over the other?
Are there more CLI options that I should consider?
What CLI would you use if you were a Mac person trying to adapt to Windows?
The reason I am trying to use Windows is that I want to ensure the CLI of my Docker projects works for Windows users, that I can write files from my container to Windows, and that my README files have instructions for Windows users. Basically, I want to test everything I do on Windows too, like Python.
Git Bash is bash, which is IIRC also the default shell on macOS. It is not the default shell on Windows, although several implementations exist (Cygwin, MinGW, ...).
Git is bundled with a number of POSIX (UNIX/Linux/etc.) utilities in addition to bash; in order to avoid "collisions" with similarly named Windows commands, the most common installation option is to install bash in such a way that the other POSIX commands are only available when running bash. The Git installer will create a shortcut to launch this "private" version of bash, hence "git bash".
The Windows command prompt runs the default Windows shell, CMD.EXE, which is a derivative of the old MS-DOS command shell, COMMAND.COM. It is much less capable than most POSIX shells; for example, it did not until relatively recently support an if/then/else construct, and it does not support shell functions or aliases (although there are some workarounds for these limitations).
PowerShell is more of a scripting environment. I'd compare it to Perl on UNIX/Linux systems -- much more powerful than the standard shell, but not necessarily something I'd want to use at the command line.
One thing to be aware of is that some of the nicer PowerShell features may require you to update your version of PowerShell -- the version bundled with Windows is typically a few years old. And updating PowerShell usually requires admin privilege; depending on the version, you may also need to update the .NET framework.
If I were a Mac person trying to adapt to Windows ... it depends. In the short term it would be easier to use something familiar like bash. But long term, you -- and more importantly, your potential users -- may not want to be dependent on a third party tool, especially since for Windows users that will typically present an additional learning curve.
As to which to use when ... it really depends on what you're trying to accomplish -- both in terms of technical functionality and the interface you want to present to your users. As noted above, I'd consider PowerShell more appropriate for scripting than the CLI, unless you just need to run a cmdlet (either a built-in or one you've created yourself).
This is a high-level overview of some of the differences between the shells, not a feature by feature comparison.
CMD (Command Prompt) and PowerShell are both shells for Windows. CMD.exe was born from COMMAND.COM, which was itself born from MS-DOS; it has some logical constructs and can run programs, process output, and do most basic tasks you would expect from a shell. It is generally considered very limited compared to what other shells can do, but it is not incapable if you know how to use it. However, it was never really "designed"; new features were simply tacked on without a clear roadmap.
PowerShell is a shell designed from the ground up with ties into .NET and has more modern language constructs built in. Microsoft designed PowerShell as a replacement for CMD.exe and batch scripting, though CMD is far from deprecated. PowerShell can call directly into .NET classes, work with WMI objects natively, and has built-in remoting capabilities. It is more akin to a programming or scripting language than batch scripting is. There is a much stronger community around PowerShell today than there is for batch scripting, and it is generally recommended to write new code in PowerShell rather than to continue using batch (CMD) scripting.
PowerShell does feel like CMD at first. You can run programs in it and process their output, and in most cases programs will behave exactly the same whether they are run from PowerShell or from CMD. However, you will quickly notice some differences: not all variables are environment variables, variables are prefixed with a $ rather than wrapped in %, and the PowerShell pipeline is far more powerful than the CMD pipeline. PowerShell is also entirely object-oriented, which sets it apart from most other shell languages, which are primarily text-based.
You can read more here about why PowerShell is recommended over batch scripting, and there is a good bit of history on CMD.exe and batch files as well.
Git Bash is the same bash shell you are used to on Linux and macOS, but compiled for Windows. It has the Git prefix in its name to indicate it was installed with Git for Windows, a packaging of git and various *nix utilities compiled for Windows for use with git. You can run sh and bash scripts in it, as well as call the Unix programs installed with it.
The Unix utilities can generally be run in CMD or PowerShell too, but by default the installer does not add them to the SYSTEM or USER PATH, so as not to override the similarly named utilities the user may use in other contexts. Basically, it isolates the utilities installed with Git Bash to Git Bash.
Outside of git automation, I wouldn't recommend using Git Bash itself for anything production-related; in that case you would probably rather manage an installation of Cygwin, MSYS2, or another Unix compatibility layer yourself. But it can be a handy shell to have during development, although these days I generally prefer PowerShell over bash for Windows scripting.
Ever seen a WiFi base station named "| rm -rf ~ | rm -rf /"?
When scripting some kind of simple analysis or logging of WiFi base station data, how would one ensure that an attacker won't be able to inject shell commands into your expressions?
For example, I want to log data from ifconfig run0 scan on OpenBSD and airport -s on OS X, and I already have some scripts in sh/tcsh that work great for my needs. But how could I ensure that I don't become a victim of shell injections?
This problem has been raised on the OpenBSD lists (misc@) many times, as well as in other places.
First of all, I would advise you NOT to execute things that you get from the network, especially in a script. Maybe you can edit your question to be more specific about what you want to do with this data, so our answers can be tailored to it.
If you want to use this info to connect to networks (as a network manager would), put yourself between the script and the input: after getting the output of the scan, copy only the nwid that you approve into your actual script. Check out this discussion on undeadly.
If you need to run automated scripts on the input for some other purpose (data gathering?), consider creating a chrooted directory with your script and a statically compiled shell in it, and run the script chrooted. Of course, you shouldn't rely on that approach against more sophisticated attacks than the one you mentioned in your question.
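For the logging case specifically, the main rule is to treat anything that came over the air purely as data: quote every expansion, read lines with read -r, and never pass an SSID through eval, backticks, or an unquoted command line. A minimal sh sketch (run0 is the interface from your example; the log path is just a placeholder):
#!/bin/sh
# Log raw scan output without ever letting it reach a shell parser.
ifconfig run0 scan | while IFS= read -r line; do
    # "$line" is only ever used as data, so a hostile nwid such as
    # "| rm -rf ~" is written to the log literally instead of executed.
    printf '%s %s\n' "$(date)" "$line" >> /var/log/wifi-scan.log
done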
Hope this helps.
Hey there, I'm building a remote shell server that interfaces between a text-only client and a virtual shell.
It works perfectly when using regular shell commands, but the first thing that people try after that is vim, which promptly drives my server crazy and can't even be closed remotely.
Is there any way to detect ncurses-based programs and prevent them from running in my special shell?
(the server is ruby, but any system command will do)
You can declare the capabilities your shell has by setting the TERM environment variable to the appropriate value. For instance, if your shell has the same capabilities as a vt100 terminal, set TERM to vt100, and programs like vim will respect that.
To run vim in vt100-mode, try:
TERM=vt100 vim
You could also try:
export TERM=dumb
The trick is to find a terminal that corresponds to the capabilities of what you are creating. There is a lot to choose from. On my system (Arch Linux) this gives me a long list of choices:
find /usr/share/terminfo
You might be able to find a terminal specification that corresponds to what your program can handle.
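As a rough sketch of how your server might apply this (the exact way your Ruby server spawns the shell will differ), set TERM before launching the login shell, and use infocmp to see what a given terminal type advertises:
# launch the interactive shell with a deliberately limited terminal type
env TERM=dumb /bin/sh -i
# compare what different terminal types claim to support
infocmp dumb
infocmp vt100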
Alternatively, you may want to consider implementing terminal emulation for ansi or vt100:
http://en.wikipedia.org/wiki/ANSI_escape_code
http://www.termsys.demon.co.uk/vtansi.htm
Best of luck!
I don't know if this is a dumb question or not, but as my professor says, if you have doubts then clear them. What is the difference between Linux text mode and the Windows command prompt (cmd)? I know Windows and Linux are different operating systems, but when you look at the commands, some of them are common, for example the cd command.
Although superficially similar in some ways, the two command line interfaces have different lineages:
The Windows command prompt is based heavily on that of MS-DOS / PC-DOS, which in turn was based on the CP/M Console Command Processor. The CP/M CCP interface was itself based on an earlier operating system called RSTS.
The Linux shells trace their roots back to the original UNIX Thompson shell; the Thompson shell borrowed from the Multics shell (where the term "shell" originated).
Traces of these are still evident today - the DIR command in the Windows command prompt can be traced all the way back to the DIR command in RSTS, and similarly the ls command in GNU coreutils can be traced back to the Multics "list segments" command.
They're both based on the same idea and are called command-line interfaces (see Wikipedia). They operate on the same principles, just using different keywords to perform similar commands. It should be noted, however, that the commands, although similarly named, may not perform exactly the same function. They are just abstractions of lower-level functions of the operating system. Just like people can explain similar ideas using different words and phrases, the same applies in this situation. For reference, here's a list of Bash commands: http://ss64.com/bash/ (the same website has Windows commands as well).
The difference is the operating system. The command prompt (cmd) and a terminal emulator (the Linux bash shell or similar) are text interfaces to the operating system. They allow you to manipulate the file system and run programs without the graphical interface.
You should read about Linux shells. The Bash shell for instance, is among the most used Linux shells... ever!
http://doc.dev.md/lsst/ch01sec07.html
http://www.tuxfiles.org/linuxhelp/shell.html
And if you're looking for a list of commands: http://www.physics.ubc.ca/mbelab/computer/linux-intro/html/
It is not that the commands are shared (well, maybe some are); it is that they have the same name and do almost the same thing, as with cd, which you mentioned.
The shells are an abstraction of the underlying operating system; Linux and Windows have different kernels, hence the difference.
You might want to start here with your reading.
I have been running drush scripts (for Drupal) with Cygwin on my relatively fast Windows machine, but I still have to wait about a minute for any drush command (specifically drush cache clear) to execute.
I'm quite sure it has something to do with the speed of Cygwin since my fellow developers (who are running Linux) can run these scripts in about 5 seconds.
Is there a way to make Cygwin use more memory and/or CPU per terminal?
The problem you're running into is not some arbitrary limit in Cygwin that you can make go away with a settings change. It's an inherent aspect of the way Cygwin has to work to get the POSIX semantics programs built under it expect.
The POSIX fork() system call has no native equivalent on Windows, so Cygwin is forced to emulate it in a very inefficient way. Shell scripts cause a call to fork() every time they execute an external process, which happens quite a lot since the shell script languages are so impoverished relative to what we'd normally call a programming language. External programs are how shell scripts get anything of consequence done.
There are other inefficiencies in Cygwin, though if you profiled it, you'd probably find that that's the number one speed hit. In most places, the Cygwin layer between a program built using it and the underlying OS is pretty thin. The developers of Cygwin take a lot of pains to keep the layer as thin as possible while still providing correct POSIX semantics. The current uncommon thickness in the fork() call emulation is unavoidable short of Microsoft adding a native fork() type facility to their OS. Their incentives to do that aren't very good.
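You can get a rough feel for this overhead yourself. In the sketch below (assuming bash and coreutils, both of which ship with Cygwin), the first loop forks an external program 200 times while the second uses only a shell builtin; the gap between the two is far larger under Cygwin than on Linux.
# 200 fork()/exec() round trips
time for i in $(seq 1 200); do /bin/true; done
# 200 iterations with no new processes at all
time for i in $(seq 1 200); do :; done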
The solutions posted above as comments aren't bad.
Another possibility is to go through the drush script and see if there are calls to external programs you can replace with shell intrinsics or more efficient constructs. I wouldn't expect a huge speed improvement by doing that, but it has the nice property that you'll speed things up on the Linux side as well. (fork() is efficient on Linux, but starting external programs is still a big speed hit that you may not have to pay as often as you currently do.) For instance:
numlines=`grep somepattern $somefile | wc -l`
if [ $numlines -gt 0 ] ; then ...
would run faster as:
if grep -q somepattern $somefile ; then ...
The first version is arguably clearer, but it requires at least three external program invocations, and with primitive shells, four. (Do you see all of them?) The replacement requires only one external program invocation.
Also look at things that slow down Cygwin startup:
Trim down your Windows PATH (to the bare bones like %SystemRoot%\system32;%SystemRoot%)
Remove things you don't need from bashrc and bash_profile
Move things you only need in your terminal window from bashrc to bash_profile
One surprisingly large time suck in Cygwin is Bash completion. If you are using it (and you should because it's great), only source completion for the commands you need (rather than all of them which used to be the default). And, as mentioned above, source them from bash_profile, not bashrc.
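For example, instead of sourcing the whole completion bundle, you might put something like this in ~/.bash_profile (the paths are typical for Cygwin's bash-completion package; adjust them to your install):
# source completion only for the handful of commands you actually use
for cmd in git ssh; do
    [ -r "/etc/bash_completion.d/$cmd" ] && . "/etc/bash_completion.d/$cmd"
done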
You can give Cygwin a higher priority.
Write a new batch file, for example, "cygstart.bat" with the following content:
start "Cygwin" /high C:\cygwin\Cygwin.bat
The /high switch gives the shell a higher process priority.