I am working on a project where I need to execute multiple Unix scripts from a PHP environment. Would it be possible to open a single Unix shell and execute all the scripts in it?
Currently I'm using shell_exec for each script's execution. This makes the application slow, since each call to shell_exec opens a new shell before the script is executed.
Thanks in advance,
No, the underlying shell is not accessible.
You could try a few things:
Optimise the scripts so you have to do fewer execs; pipe them together or something like that.
I am not sure if it will work, but you should be able to start a bash process and send commands to it (see proc_open). This way you could manually manage and reuse a single shell. But I imagine it will be a nightmare, especially parsing the responses from the scripts (if you need that).
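For what it's worth, here is a minimal sketch of the proc_open approach; the script names are placeholders, and reliably separating each script's output is left unsolved:

    <?php
    // Minimal sketch: keep one bash process alive via proc_open and feed it
    // several commands. Script names are placeholders.
    $descriptors = [
        0 => ['pipe', 'r'],  // bash stdin
        1 => ['pipe', 'w'],  // bash stdout
        2 => ['pipe', 'w'],  // bash stderr
    ];
    $process = proc_open('bash', $descriptors, $pipes);
    if (is_resource($process)) {
        fwrite($pipes[0], "./script1.sh\n");   // all of these run in the same shell
        fwrite($pipes[0], "./script2.sh\n");
        fwrite($pipes[0], "exit\n");           // let bash terminate
        fclose($pipes[0]);
        $output = stream_get_contents($pipes[1]);
        fclose($pipes[1]);
        fclose($pipes[2]);
        proc_close($process);
        echo $output;
    }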
I am working on an application where we are using xterm.js and node-pty inside an Electron application. We are adding a terminal to our application and would like to add some custom commands, usable in that terminal, that are related to our application.
What are some options for adding these commands?
We want them installed with the application.
They don't have to be usable inside an 'external' terminal, but it is OK if they are. By external, I mean your normal terminal, not our xterm & node-pty implementation.
And we want them to behave the same as other normal Unix commands, so you can pipe them to other commands, chain them with &&, and so on.
I have played around with intercepting commands between xterm and node-pty, and that was a disaster. I am now considering just writing bash scripts for the commands and having the installer put them where they need to be so they can be used.
Just wondering what my options are, thanks.
You can simply put all your executables in a directory that you add to your PATH when you invoke the shell in your terminal emulator.
The commands will be available to the user like any others in any construct that accepts commands, regardless of the user's shell or shell version (i.e. it'll work equally well in bash, zsh and fish).
If you need the commands to coordinate with your terminal emulator (e.g. if you want to process the command in JS in your Node.js process), you can arrange that via a second environment variable containing e.g. a host/port to connect to.
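A minimal sketch of the PATH approach when spawning the shell with node-pty; the bin directory location and the shell choice are assumptions:

    import * as os from 'os';
    import * as path from 'path';
    import * as pty from 'node-pty';

    // Hypothetical directory where the installer placed your command scripts.
    const binDir = path.join(__dirname, 'app-bin');

    // Pick a sensible default shell per platform.
    const shell = os.platform() === 'win32' ? 'powershell.exe' : (process.env.SHELL || 'bash');

    const term = pty.spawn(shell, [], {
      name: 'xterm-color',
      cols: 80,
      rows: 30,
      cwd: process.env.HOME,
      // Prepend the app's bin directory so its commands resolve like any other.
      env: { ...process.env, PATH: binDir + path.delimiter + process.env.PATH },
    });

    term.onData((data) => process.stdout.write(data));

Commands in binDir then work in pipes, with &&, inside scripts, and so on, because to the shell they are just ordinary executables on PATH.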
I want to run a Julia script from the Windows command line, but it seems every time I run > julia code.jl, a new instance of Julia is created, and the startup time (package loading, compilation?) is quite long.
Is there a way for me to skip this startup time by running the script in the current REPL/Julia instance? (This usually saves me 50% of the running time.)
I am using Julia 1.0.
Thank you,
You can use include:
julia> include("code.jl")
There are several possible solutions. All of them involve different ways of sending commands to a running Julia session. The first few that come to my mind are:
use sockets as explained in https://docs.julialang.org/en/v1/manual/networking-and-streams/#A-simple-TCP-example-1
set up a HTTP server e.g. using https://github.com/JuliaWeb/HTTP.jl
use named pipes, as explained in Named pipe does not wait until completion in bash
communicate through the file system (e.g. make Julia scan some folder for .jl files; if it finds any, they get executed and moved to another folder or deleted) - this is probably the simplest to implement correctly (a sketch follows below)
In all the solutions you can send the command to Julia by executing some shell command.
No matter which approach you prefer, the key challenge is sanitizing the input and handling errors properly (i.e. the situation where you send some command to the Julia session and it crashes, or where you send requests faster than Julia can handle them). This is especially important if you want the Julia server to be detached from the terminal.
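A minimal sketch of the folder-scanning variant (the directory names are placeholders; a real server loop needs more robust error handling):

    const INBOX = "inbox"   # drop .jl files here
    const DONE  = "done"    # processed files get moved here

    mkpath(INBOX); mkpath(DONE)

    while true
        for name in readdir(INBOX)
            endswith(name, ".jl") || continue
            file = joinpath(INBOX, name)
            try
                include(file)                  # runs in this warm session
            catch err
                @warn "script failed" file err # keep the loop alive on errors
            end
            mv(file, joinpath(DONE, basename(file)); force=true)
        end
        sleep(1)   # poll once per second
    end

You then "run" a script from any shell simply by copying it into the inbox folder, e.g. copy code.jl inbox on Windows.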
As a side note: when using the Distributed module from stdlib in Julia for multiprocessing you actually do a very similar thing (but the communication is Julia to Julia) so you can also have a look how this module is implemented to get the feeling how it can be done.
I'm new to Windows system programming and I'm trying to learn the CreateProcess() function.
I know that it's possible to run a new process, for instance notepad.exe or cmd.exe, by passing its name (notepad or cmd.exe) as a parameter to the CreateProcess() function in the calling program.
What is the use of doing that, and could you explain any real world application for that?
Can I use this create process function to clone itself and do something in parallel?
What is the use of doing that, and could you explain any real world application for that?
CreateProcess is the way to create new processes on Windows. Obvious examples of its use would be the shell starting new applications, or the command-line interpreter executing external commands.
Can I use this create process function to clone itself and do something in parallel?
No. Windows processes don't use the *nix fork idiom. There is no analogue in Windows to forking.
Can I use this create process function to clone itself and do something in parallel?
Not so much a clone, no. But the calling app can spawn a separate instance of itself by specifying its own file name, possibly with command-line parameter(s) to tell the spawned process what to do. So in that regard, yes, you can have multiple instances of your app running in parallel.
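A minimal sketch of that self-spawning pattern; the --worker flag is a made-up convention:

    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char *argv[])
    {
        // The spawned instance sees the flag and does the "parallel" work.
        if (argc > 1 && strcmp(argv[1], "--worker") == 0) {
            puts("worker instance running in parallel");
            return 0;
        }

        // Find our own executable and relaunch it with the flag.
        char exePath[MAX_PATH];
        GetModuleFileNameA(NULL, exePath, MAX_PATH);

        char cmdLine[MAX_PATH + 16];
        snprintf(cmdLine, sizeof(cmdLine), "\"%s\" --worker", exePath);

        STARTUPINFOA si = { sizeof(si) };
        PROCESS_INFORMATION pi;
        if (CreateProcessA(NULL, cmdLine, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
            WaitForSingleObject(pi.hProcess, INFINITE);  // or skip this to run concurrently
            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);
        }
        return 0;
    }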
I have a specific proprietary application, "account.exe", which is dual-use: running it in a CGI context (e.g. from inside a web server) makes it output an HTML page and such, while running it outside of a CGI context enables certain command-line functions.
Now to the question:
I want to run account.exe outside the CGI context in Perl. I have tried system(1, "command"); I have tried system("start command"); and I tried a BAT wrapper that clears (SET VARIABLE=) every environment variable that has anything to do with CGI. But account.exe still "detects" that it is run by a web server and outputs its HTML.
How can I run a Windows command from a CGI script in Perl (using Strawberry Perl) and make it impossible for the account.exe application to detect that the execution originally came from a web server?
There are many ways how account.exe could possibly detect how it was run.
Environment variables are one way; it seems you have already ruled that one out (a pure-Perl version of that approach is sketched below).
Normally a process can see who its parent is (and its parent's parent), so that could be another way.
So you can either do a lot of testing until you finally fool the specific technique the process is using, or you can try sandboxing to gain more control over what the process can or cannot see (or do).
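For completeness, clearing the environment can also be done in Perl itself rather than via a BAT wrapper; this is a minimal sketch (the variable list is a guess), and it only defeats environment-based detection, not parent-process inspection:

    use strict;
    use warnings;

    # Variables commonly set by web servers for CGI children (an assumption;
    # adjust the list for your server).
    my @cgi_vars = grep { /^(GATEWAY_INTERFACE|SERVER_|REQUEST_|HTTP_|QUERY_STRING|REMOTE_|SCRIPT_|CONTENT_)/ } keys %ENV;

    {
        local %ENV = %ENV;          # restore the original environment afterwards
        delete @ENV{@cgi_vars};     # hash-slice delete of the CGI variables
        system('account.exe');      # child inherits the cleaned environment
    }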
I would like to run a shell script from my Cocoa app when clicking a button. I can easily use the system() call to do that, but that's not all I need. I need the app to close as soon as it calls the script, or even before it calls the script. The script takes a few seconds to run, so the app should be closed by the time it finishes. The reason I need this is that I'm writing a simple application that puts the Mac to sleep, but before that it does lots of cleaning up via a shell script, and I basically don't want this app to be open when I bring the system back from sleep.
Would using a fork or something like that do the job or do i need some special magic to do this?
Thank you
If you're in Cocoa, you'll want to use NSTask. If your script needs admin privileges, there's always STPrivilegedTask.
You can use popen() instead of system(). The script you run will be reparented to the init process once your application exits. You could also fork/exec, but popen() is simpler since its semantics are much closer to those of system().
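A minimal sketch of the popen() route in C (the script path is a placeholder; output is redirected so the orphaned script doesn't die on a broken pipe when the app quits):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        // Launch the cleanup script in its own shell. We never call pclose(),
        // because we do not want to wait for the script to finish.
        FILE *p = popen("/path/to/cleanup-and-sleep.sh > /dev/null 2>&1", "r");
        if (p == NULL) {
            perror("popen");
            return EXIT_FAILURE;
        }

        // Exit immediately; the orphaned script keeps running and is
        // reparented to init/launchd.
        exit(EXIT_SUCCESS);
    }

In the actual Cocoa app, the same popen() call would go in the button's action method, followed by [NSApp terminate:nil].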