I've just knocked out a quick script for keeping a slave web server in sync with a master using rsync. (https://github.com/simonjgreen/liveFolderSync/blob/master/liveFolderSync.sh)
I'd like to make this run on boot and be controllable via the usual /etc/init.d/... or service commands; however, this is an area I've always fallen down in. I find both init.d and Upstart scripts horrendously confusing, and I can't find a guide anywhere that starts from scratch.
The only control I'd like to have over it is start/stop/restart. Obviously later I will move the config into a separate file in /etc but that's already on the cards so outside the scope of this question.
Any pointers, advice, and best practices would be helpful. I should add that I'm doing this on Ubuntu.
To get started with Sys V init scripts, I suggest the following links:
Linux: How to write a System V init script to start, stop, and restart my own application or service
Writing System V init scripts for Red Hat Linux
Ubuntu Bootup Howto
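To give a flavour of what those guides cover, here is a minimal sketch of a SysV-style init script for a case like yours. The install path of the sync script and the pidfile location are assumptions; adjust them to your setup.

#!/bin/sh
# /etc/init.d/livefoldersync -- minimal sketch, not a complete LSB script.
# Assumes the sync script is installed at /usr/local/bin/liveFolderSync.sh.
DAEMON=/usr/local/bin/liveFolderSync.sh
PIDFILE=/var/run/livefoldersync.pid

case "$1" in
  start)
    echo "Starting liveFolderSync"
    # --background and --make-pidfile let a plain foreground script act as a daemon
    start-stop-daemon --start --background --make-pidfile \
        --pidfile "$PIDFILE" --exec "$DAEMON"
    ;;
  stop)
    echo "Stopping liveFolderSync"
    start-stop-daemon --stop --pidfile "$PIDFILE" && rm -f "$PIDFILE"
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac
exit 0

Once the script is executable, update-rc.d livefoldersync defaults registers it to run at boot on Ubuntu.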
For instructions specific to Upstart, I would recommend starting with:
Getting Started
The Upstart Cookbook
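For comparison, the Upstart equivalent is much shorter. A hedged sketch, with the same assumed script path and an arbitrary job name, saved as /etc/init/livefoldersync.conf:

# /etc/init/livefoldersync.conf -- minimal Upstart job sketch
description "keep slave web server in sync with master via rsync"
start on runlevel [2345]
stop on runlevel [016]
respawn
exec /usr/local/bin/liveFolderSync.sh

You would then control it with start livefoldersync, stop livefoldersync, and restart livefoldersync.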
At present, there are also 129 questions on AskUbuntu, several of which will point you in the right direction:
What Events are available for Upstart
Want to make an Upstart script, need help and advice
My embedded board uses Linux Kernel version 3.18.
I would like to configure my Wifi (using wpa_supplicant and then dhcpcd commands) automatically, as soon as the board boots up.
I wrote a shell script for this (and verified it by executing it manually) and placed it in the "/etc/init.d" directory.
Then I made a symbolic link to the shell script in the "/etc/rc.d" directory.
However, this change does not have the desired effect. Can anyone please help me out?
PS: It is important to note that it takes around 3-4 seconds for my Wifi module to be inserted into the kernel once the board boots up.
TL;DR:
In the init script, call a separate script that manages wpa_supplicant and dhcpcd, so that the init script itself won't block.
It is good practice not to block in init scripts, so defer the processing: have the init script start a different script in the background that checks for module insertion and then brings up wpa_supplicant (you can also modify it to keep checking the status). Something similar happens on desktop Linux; the program that does it is called NetworkManager.
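A minimal sketch of such a background helper, assuming the interface is wlan0 and the configuration lives at /etc/wpa_supplicant.conf (both are assumptions, as is the timeout):

#!/bin/sh
# wifi-up.sh -- hypothetical helper; the init script launches it as:
#   /usr/local/bin/wifi-up.sh &
IFACE=wlan0
CONF=/etc/wpa_supplicant.conf

# The WiFi module takes a few seconds to load, so poll until the
# interface appears, giving up after roughly 30 seconds.
n=0
while [ ! -d "/sys/class/net/$IFACE" ]; do
    n=$((n + 1))
    [ "$n" -gt 30 ] && exit 1
    sleep 1
done

wpa_supplicant -B -i "$IFACE" -c "$CONF"   # -B: run wpa_supplicant in the background
dhcpcd "$IFACE"                            # then obtain an address via DHCP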
Spark-shell can be used to interact with data in distributed storage, so what is the essential difference between coding in spark-shell and packaging a standalone application with sbt and submitting it to the cluster? (One difference I found is that a job submitted via sbt can be seen in the cluster management interface, while the shell cannot.) After all, sbt is very troublesome, and the shell is very convenient.
Thanks a lot!
Spark-shell gives you a bare console-like interface in which you can run your code as individual commands. This can be very useful if you're still experimenting with packages or debugging your code.
As for "a job submitted via sbt can be seen in the cluster management interface, while the shell cannot":
Actually, the Spark shell also shows up in the job UI, as "Spark-Shell" itself, and you can monitor the jobs you run through it.
Building Spark applications with SBT gives your development process some organization and iterative compilation, which is helpful in day-to-day development, and a lot of manual work can be avoided this way. If there is a constant set of things you always run, you can simply run the same package again instead of going through the trouble of entering the whole thing as commands. SBT takes some getting used to if you are new to the Java style of development, but it can help you maintain applications in the long run.
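To make the contrast concrete, here is a rough sketch of the two workflows; the master URL, class name, and jar path are placeholders:

# Interactive: statements are evaluated one at a time, good for exploration.
spark-shell --master spark://master:7077

# Packaged: build once with sbt, then submit the jar as a self-contained app.
sbt package
spark-submit --class com.example.MyApp --master spark://master:7077 \
    target/scala-2.12/myapp_2.12-1.0.jar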
I'm trying to use Docker beta on OSX, mainly for Symfony development, but the mounted volumes are incredibly slow. Even for a vanilla Symfony project I get 6s page load time. That's unbearable! Has anyone found a solution to this issue? I'm trying to move away from Vagrant, but I just can't find any reasonable way to work with Docker instead.
Okay, the user Spiil gave a solution, but I wanted to elaborate on the exact steps to take, since I spent 12 hours trying to figure it out; once you know how, it's super easy and fixes all the slowdown issues!
The key here is to understand that this solution creates NFS (Network File System) mounts as the means of communication from the Docker containers to your Mac, instead of the standard OSX file sharing (osxfs), which is currently very slow, either due to bugs or the way it works.
Follow these steps exactly.
1.) Clone this repo (https://github.com/IFSight/d4m-nfs) into your home directory. To do this, open up a terminal and type cd ~
Then type git clone https://github.com/IFSight/d4m-nfs
Alternatively, you can do this as a one-liner: git clone https://github.com/IFSight/d4m-nfs ~/d4m-nfs
2.) Next, go into the d4m-nfs folder, and in its etc folder (i.e. ~/d4m-nfs/etc) create a new file titled d4m-nfs-mounts.txt
3.) Add the following line to it:
/Users/yourusername:/Users/yourusername:0:0
The line above lets you keep using relative folders with docker-compose; the trailing 0:0 sets the user and group IDs the NFS export is mapped to (0 is root).
EDIT
Do not put /Volumes here!!
4.) Go to Docker's preferences and, under File Sharing, make sure only /tmp is listed and NOTHING ELSE. I mean nothing else: it won't work if there is anything else, since it will create conflicts with the NFS exports the script will generate for you later. Restart Docker, and docker-compose down any running containers as well.
5.) Finally, navigate to the d4m-nfs directory we cloned in step 1 and type the following command: /bin/bash d4m-nfs.sh
Edit: as another user from the GitHub repo (if-kenn) pointed out, the correct way to run the command above is ./d4m-nfs.sh, which uses the shebang line to pick the shell that should run it.
If done correctly, there should be no errors and this should work. Please note: DO NOT run it as sh d4m-nfs.sh; that will create errors and you will have to delete your exports file to start over. In fact, any time you make any changes you will have to clear your exports file.
After a successful run, my /etc/exports contains just that one line.
EDIT: IMPORTANT -- remove any /private and /Volumes entries! The file should now contain only the /Users/username line!
If you see anything other than this, you were not running the script with bash. If you make any errors, you can quickly get to the exports file on a Mac and just clear it out to start over: in Finder, select Go > Go to Folder and type /etc/exports. This is a nice shortcut to quickly get to it and clear it out in your favorite text editor.
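If you prefer the terminal, emptying the file is a one-liner (it needs root, hence the sudo):

sudo sh -c '> /etc/exports'   # truncate /etc/exports to start over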
Also make sure no containers are running, or you will get the ........ loop of death. If this loop of death continues, make sure you upgrade Docker and then restart your computer. Yes, restart... it seemed to be the only way to get it to work on my friend's computer. Refer to this issue (https://github.com/IFSight/d4m-nfs/issues/3).
Note on the .... loop: I recently found another solution. Make sure you are NOT logged in as root, and make sure you pulled the git repo into your user's ~ folder, not root's ~ folder. In other words, it should be in /Users/username.
Also, make sure the /tmp folder has full write permissions, since the script needs to write there, or this won't work either: chmod -R 777 /tmp
6.) If you did everything right, the script will run through without errors.
Then simply run docker-compose up -d as usual in your Symfony project folder (or whatever project you are using with Docker) and everything should work... except with NO MORE slowdowns!
You will need to re-run the script any time you restart your computer or Docker.
Also note: if you get mounting errors, you probably don't have your project stored in your /Users/username directory. Remember, that is what we mounted. If your project lives somewhere else, you will need to modify the d4m-nfs-mounts.txt file accordingly.
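For reference, the whole procedure boils down to a few commands (replace yourusername with your actual user):

git clone https://github.com/IFSight/d4m-nfs ~/d4m-nfs
echo "/Users/yourusername:/Users/yourusername:0:0" > ~/d4m-nfs/etc/d4m-nfs-mounts.txt
cd ~/d4m-nfs && ./d4m-nfs.sh   # re-run after every reboot or Docker restart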
Other Info:
For people reading this now, maybe it's better to wait for Docker to fix this issue. A pull request to improve performance has already been accepted (https://github.com/docker/docker/pull/31047).
This will be released somewhere in April 2017 and should be a big improvement.
I've tried some workarounds for Docker for Mac, but all of them had some pretty big disadvantages, mostly in usability. A good source for alternatives to osxfs can be found at: https://github.com/EugenMayer/docker-sync/wiki/Alternatives-to-docker-sync. Credit to Eugen Mayer for setting this up.
EDIT:
The first improvement is implemented in the edge release; https://github.com/docker/for-mac/issues/77 has more info on this.
There's a long thread with explanations from the Docker team and various workarounds.
Currently, the issue is being tracked on GitHub.
While some workarounds may be better than others, I'm afraid the ideal option for now is to switch to Linux.
I spent a lot of my time searching for a viable solution, and I found one:
d4m-nfs
It allows you to use Docker volumes via NFS.
In my case it gave a 16x performance increase (1.8s vs ~30s).
d4m-nfs also has quite an intricate manual, so here is another link with a detailed example: https://github.com/laradock/laradock/issues/353#issuecomment-262897619
I'll just leave this here for other Googlers.
Normally, volumes should be fast.
But you cannot change anything to make them faster if you don't want to change the format of your disk.
Maybe the bottleneck is the CPU or RAM, though.
You can check that with the command docker stats. These are by default set to 2 cores and 2 GB of RAM. You can change this in the Docker for Mac GUI.
I had exactly the same problem. For me, using docker-bg-sync (see on GitHub) made a dramatic improvement in speed and CPU usage.
It's not as nice as just mounting the volume, since you have to start a new container for every sync, but it does the job.
In the latest Docker, 17.06.0-ce-mac18, volumes mounted with :cached seem to run quite decently.
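For example, the flag is appended to the volume spec (the paths here are placeholders):

# docker run form:
docker run -v /Users/yourusername/project:/var/www/project:cached some-image
# docker-compose form (volumes entry in the compose file):
#   - ./:/var/www/project:cached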
I've found that creating a CoreOS VM under Parallels, then using the Docker that is inside CoreOS is far faster than Docker for Mac (currently running Version 17.12.0-ce-mac49 (21995)).
I'm doing Linux code builds using CMake/Ninja/GCC, and it's almost twice as fast as the exact same build in Docker for Mac.
In my case, I have a ton of library sources that are part of the container (e.g. Boost, OpenSSL), and a decent amount of C++ code that I keep local to my Mac.
This seems to be a recent development. Docker/Mac has become much slower than I remember it being a month or two ago. Maybe it's just me...
We overcame this issue by synchronizing the local and the Docker for Mac filesystems using syncthing. We built an open source tool that follows this approach, in case it helps:
https://github.com/okteto/cnd
My goal: to create a suite of scripts that do some common system tasks, which include these:
copy/move/list/search/grep files
watch/start/stop processes
run queries against Oracle via sqlplus
I grew accustomed to using Cygwin/bash to ease my life at work, and frankly speaking, I don't want to move away from the bash language and start learning PowerShell, for example. So I started searching for a way to run bash scripts on Windows, preferably something alternative to Cygwin.
The truth is that I am still not pleased with the Cygwin installation: there is no simple way around it, it targets more or less expert users, and a number of things might pop up during the installation. What I am trying to do now is write a suite of scripts that will target someone less expert than me (and I am in no way a real expert), in most cases some kind of administrator who doesn't want to know the script details.
I am thinking that this user will also want to be able to run these scripts on another machine, and I want to be able to explain to him/her how to do it, without just saying "call support" (and, eventually, me) so that we can install Cygwin on the other machine, etc.
I tried MinGW (MSYS), but it also needs manual steps to set things up; these manual steps seem to have become something of a de facto standard in these Windows ports (sorry, maybe I'm in a mood for complaining). win-bash looked like it could be a solution, but I ended up trashing it too, because of its old bash version and its inability to do things I was able to do in Cygwin, specifically:
here documents
things like "cmd /C dir *" (don't know why); and yes, I do run cmd /C dir in cases where I am in some kind of shared network folder with thousands of files, and ls is significantly slower than dir
My questions, at last:
Am I doomed to use PowerShell? I guess I will, reluctantly, if I have to.
Is there a simple, pre-packaged "slim" Cygwin installation, or, even better, a portable Cygwin? There is a cygwin-portable project on SourceForge, but apparently it still needs those manual steps. Is there a way to automate those steps, perhaps? And if there is, I wonder why somebody hasn't done it already. And then, would it be possible to call bash scripts from the Windows command prompt using a simple command like "bash somescript.sh"?
Thanks for your attention.
As mentioned here, the Cygwin installation can totally be scripted and parametrized to run in a silent and automatic mode.
If you define the minimal list of Cygwin packages you need, just use a little .bat script that calls the Cygwin setup executable like this:
setup.exe --packages=list_of_packages_you_need --quiet-mode
If you wrap the cygwin install process, it should be tolerable for a less technical user.
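A hedged sketch of such a wrapper; the mirror URL, install root, and package list are assumptions, and the flag names come from the Cygwin setup command line:

@echo off
rem install-cygwin.bat -- unattended minimal Cygwin install (hypothetical values)
setup-x86_64.exe --quiet-mode --no-shortcuts --no-desktop ^
    --site http://mirrors.kernel.org/sourceware/cygwin/ ^
    --root C:\cygwin64 --local-package-dir C:\cygwin64\packages ^
    --packages bash,coreutils,grep,which

Afterwards, scripts can be launched from the Windows command prompt with something like C:\cygwin64\bin\bash.exe somescript.sh, which also answers the last part of the question.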
The Cygwin install can be streamlined using command-line arguments; see:
http://sources.redhat.com/ml/cygwin-apps/2003-03/msg00526.html
You can also automate the installation of most Cygwin packages through cyg-apt.
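Its usage is apt-style; for instance, to add a package after the initial setup (the package name is just an example):

cyg-apt update           # refresh the package database
cyg-apt install rsync    # install a single package from inside Cygwin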
I haven't verified this, but I suspect that MSYS implements a *nix lookalike by providing Windows executable versions of the system commands. All of the common commands have an executable in my install of MSYS. If that is true, then it should be possible to use them separately from a complete install.
Try copying "bash.exe", "cp.exe", etc. from the MSYS bin directory to a machine/VM that does not have an MSYS install and see if they work. You may need to copy some DLLs or shared libraries as well. A Windows dependency-checker program would show which DLLs an executable is using.
You could package up the stuff you use and make a simple installer, or just copy the files along with your scripts.
Take a look at MKS Toolkit. Unlike Cygwin, it can live within the Windows world. Files end in CR/LF like Windows files, and you don't have that /cygdrive/c stuff. Naked drive letters work fine in MKS Toolkit.
A few caveats:
I haven't used MKS Toolkit in a long time. See following reason.
MKS Toolkit is (sit down for this) $600 per license. Ouch! That's why I use Cygwin even though I don't think it's as good or works as well.
It's KornShell-based and not Bash (although this may be a bit different now). KornShell and Bash are 95% alike; however, that last 5% gets you. I actually like KornShell better than Bash in many respects. KornShell has the print statement, which is way superior to the echo statement. Variable names don't disappear in blocks. You can easily do double loops because almost all the commands can take unit numbers for input and output. However, KornShell doesn't have those neat escape characters in the prompt, and it's hard to find the exit status of a command in the middle of a pipeline.
I am porting an application which runs as a background service on Windows at startup. We are porting the application to Linux (SUSE Enterprise Server), and I'm completely new to Linux. Can somebody help me with how to proceed? Specifically:
Should I build a Linux executable?
After building the binary, what changes should I make to the Linux startup files to run this executable?
How can my service register a callback function to modify, change, or send commands to my service while it is running?
Yes, you should build a Linux binary. You may want to rephrase your question since I doubt this is the answer you want :-)
You should generally create what is known as an "init" file, which lives in /etc/init.d. Novell has a guide online which you can use to author the file. Note that while the init file is common, the exact method of letting the operating system use it varies depending on the distribution.
This is going to be a marked change for you. If you are doing simple actions such as reloading a configuration file, you can use the signals functionality, especially the SIGHUP/HUP signal, which is generally used for this purpose. If you require extended communication with your daemon, you can use a UNIX domain socket (think of it as a named pipe) or a network socket.
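For example, asking a running daemon to reload its configuration looks like this from the shell (the pidfile path is an assumption; many daemons write one under /var/run):

kill -HUP "$(cat /var/run/myservice.pid)"   # the daemon's SIGHUP handler reloads its config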
Another task you are going to need to accomplish is to daemonize your application. Generally this is done by first fork()ing your process, then redirecting the stdin/stdout streams in the child. There are more details, which can be answered by reading this document.
See how-to-migrate-a-net-windows-service-application-to-linux-using-mono.
Under Linux, daemons are simple background processes. No special control methods (e.g. start(), stop()) are used as in Windows. Build your service as a simple (console) application and run it in the background. You can use a tool like daemonize to run a program as a Unix daemon.
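A hedged example of using it; the binary path, pidfile, and log file are placeholders:

# run the ported service detached from the terminal, with a pidfile and a log
daemonize -p /var/run/myservice.pid -o /var/log/myservice.log /usr/local/bin/myservice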