I have a daemon that forks the process.
This daemon accesses a database using the MySQL connector library.
When I do not fork, I am able to open and read the database fine; however, when I fork, I get
MySQL server has gone away
errors consistently on the first query.
Does anyone know what could be causing this?
Edit: Oh, my apologies for misinterpreting.
Still, the problems caused by differences between daemonized and non-daemonized processes fall roughly into the following classes of options:
environment variables
LIBPATH
PATH
HOME, UID, EUID (HOME surprisingly enough gets (ab)used way too often)
MySQL-specific variables
permissions
what user is the daemon running as? elevated or privilege separation?
current working directory (traditionally / for daemons, where / might be a chroot jail instead of 'real' /)
Starting with kernel 2.4.19, Linux provides per-process mount namespaces. A
mount namespace is the set of file system mounts that are visible to a
process. Mount-point namespaces can be (and usually are) shared between
multiple processes, and changes to the namespace (i.e., mounts and unmounts)
by one process are visible to all other processes sharing the same namespace.
(The pre-2.4.19 Linux situation can be considered as one in which a single
namespace was shared by every process on the system.)
detached stdin/stdout causing trouble (IMO that would mean badly designed library, but who am I)
watch out that specific resources (file locks, socket connections, threads (!)) are NOT inherited across fork/execve. I recommend reading the linked article on daemonization (below), especially the section on 'Mutual Exclusion and Running a Single Copy [open,lockf,getpid]'
I'm sure I'm forgetting stuff
Ermm... what are you starting a MySQL server process for? MySQL has plenty of sound init scripts that do work.
On the subject of proper daemonization: http://www.enderunix.org/docs/eng/daemon.php
Pay attention to the effects of sharing resources with fork children (e.g. file descriptors).
Besides that, you could just be missing basic environment settings. Peruse the official init scripts for MySQL to find out which ones you need.
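For the original question specifically: the most common cause of "MySQL server has gone away" right after a fork is a connection opened before the fork, whose socket is then inherited (and later closed or timed out) by the other process. A minimal sketch of the safe ordering, assuming MySQL Connector/Python and placeholder credentials:

    # Minimal sketch, assuming MySQL Connector/Python and placeholder credentials.
    # Key point: do NOT open the connection before fork() -- the child would
    # inherit the parent's socket, and whichever process closes it first (or
    # lets it time out) leaves the other with "MySQL server has gone away".
    import os
    import mysql.connector

    def run_child():
        # Open the connection only after the fork, inside the child.
        conn = mysql.connector.connect(
            host="localhost", user="dbuser", password="secret", database="mydb"
        )
        cur = conn.cursor()
        cur.execute("SELECT 1")
        print(cur.fetchall())
        conn.close()

    pid = os.fork()
    if pid == 0:          # child (the daemonized worker)
        run_child()
        os._exit(0)
    else:                 # parent
        os.waitpid(pid, 0)

If the parent also needs database access, give it its own connection; never share one connection object across a fork.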
Related
I want to run an arbitrary application inside a Docker container safely, as if it were in a VM. To do so, I save the application (which I downloaded from the web and do not trust) in a directory of the host system, create a volume that maps this directory to the home directory of the container, and then run the application inside the container. Are there any security issues with this approach? Are there better solutions to accomplish the same task?
Moreover, to install all the necessary dependencies, I allow an arbitrary script to execute in a bash shell running inside the container: could this be dangerous?
To add to Dimitris' answer, there are other things you need to consider.
There are certain things containers do not contain. Docker uses namespaces to alter a process's view of the system (network, shared memory, and so on), but keep in mind it is not like KVM: containers talk to the host kernel directly (for example via /proc/sys), unlike KVM VMs.
So if the arbitrary application tries to access kernel subsystems like cgroups, /proc/sys, /proc/bus, etc., you could be in trouble. I would say it's fine unless it's a multi-tenant system.
As long as you do not give the application sudo access, you should be good to try it out.
Dependencies are better off defined in the Dockerfile in a clear way for others to see. Opting to run a script instead will also do the job, but it is less convenient.
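If you want to script the locked-down run itself, something along these lines is possible with the Docker SDK for Python; the image name, host directory, and resource limits below are placeholders, and this is a sketch of the hardening options rather than a complete sandbox:

    # Sketch of running an untrusted app in a constrained container using the
    # Docker SDK for Python. Image name, host directory and resource limits
    # are placeholders -- adjust to your setup.
    import docker

    client = docker.from_env()

    output = client.containers.run(
        "ubuntu:22.04",                        # hypothetical base image
        "/home/app/untrusted-binary",          # the downloaded application
        volumes={"/srv/untrusted": {"bind": "/home/app", "mode": "ro"}},
        network_disabled=True,                 # no network access
        cap_drop=["ALL"],                      # drop all Linux capabilities
        security_opt=["no-new-privileges"],    # block privilege escalation
        mem_limit="256m",
        pids_limit=100,
        user="1000:1000",                      # run as an unprivileged user
        remove=True,
    )
    print(output)

Even so, remember the shared-kernel caveat above: this is not a VM boundary.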
Being fairly new to the Linux environment, and not having local resources to ask, I would like to know the preferred method of starting a process at startup as a specific user on an Ubuntu 12.04 system. The reasoning for such a setup is that these machines will be hosting an Input/Output Controller (IOC) in an industrial setting. If a machine fails or restarts, this process must start automatically, every time.
My internet searches have turned up two areas for performing this task:
/etc/rc.local
/etc/init.d/
I ask for the specific advantages and disadvantages of each approach. I'll add that some of these machines are clients and some are servers, but all need to run an IOC, and preferably in the same manner.
Within whichever method above is deemed most appropriate, a bash shell script must be run as my specified user. It is my understanding that all startup processes are owned by root, so I question whether this is the best practice:
sudo -u <user> start_ioc.sh
If this is the case, then I believe it is required to create a file under:
/etc/sudoers.d/
Using:
sudo visudo -f <filename>
Within this file you assign the appropriate rights and paths to the user. Most of my searches have shown this as the proper format:
<user or group> <host or IP> = (<user or group to run as>) NOPASSWD: <comma-separated list of commands>
root ALL=(user) NOPASSWD: /usr/bin/start_ioc.sh
Finally, for additional context: the ultimate reason for this approach (which may also be flawed logic) is that the IOC process needs access to network-attached storage (NAS). Allowing root access to the NAS is, I believe, a no-no, whereas the user can have the appropriate permissions assigned.
This may not be the best answer, but it is how I decided to complete this task:
Exactly as in this post:
how to run script as another user without password
I did use rc.local to initiate the process at startup. It seems to be working quite well.
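If you would rather not route through sudo at all, another option is a small wrapper launched from rc.local as root that drops privileges itself before exec'ing the IOC script. A minimal sketch, assuming Python is available; the user name is a placeholder and the script path is the one from the question:

    # Minimal sketch: started as root (e.g. from /etc/rc.local), switches to an
    # unprivileged user, then exec's the IOC start script as that user.
    # "iocuser" is a placeholder user name.
    import os
    import pwd

    def drop_privileges(username):
        pw = pwd.getpwnam(username)
        os.setgroups([pw.pw_gid])     # drop supplementary groups while still root
        os.setgid(pw.pw_gid)          # group before user
        os.setuid(pw.pw_uid)
        os.environ["HOME"] = pw.pw_dir
        os.environ["USER"] = username

    if __name__ == "__main__":
        drop_privileges("iocuser")
        os.chdir("/")                 # conventional daemon working directory
        # Requires the script to be executable and to have a shebang line.
        os.execv("/usr/bin/start_ioc.sh", ["/usr/bin/start_ioc.sh"])

rc.local would then contain a single line starting this wrapper in the background.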
I am running PostgreSQL 9.4 on Windows and constantly get the error:
2015-06-15 09:35:36 EDT LOG: could not rename temporary statistics file "pg_stat_tmp/global.tmp" to "pg_stat_tmp/global.stat": Permission denied
I also see constant 200-800k writes to global.stat and global.tmp. I have seen other users with the same issue, but no solution.
It is a big database server, with 300 GB of data and 6,000 databases.
I tried setting,
track_activities=off
in the config file, but it did not seem to have any effect.
Any help with the error, or with reducing the writes?
After my initial answer, I decided to research the operation of the stats collector and in particular what it is doing with the files in pg_stat_tmp. I've substantially re-written the answer as a result.
What are the global.stat / global.tmp files used for?
PostgreSQL contains functionality to collect statistics and status information about its operation. This functionality is described in Section 27.2 of the manual.
This information is collated by the stats collector process. It is made available to the other PostgreSQL processes via the global.stat file. The first time you run a query that accesses this data within a transaction, the backend you are connected to reads the global.stat file and caches the result, using it until the end of the transaction.
To keep this file up to date, the stats collector process periodically re-writes it with updated information. It typically does this several times a second. The process is as follows:
Create a new file global.tmp
Write data to this file
Rename global.tmp to global.stat, overwriting the previous global.stat
The global.tmp and global.stat files are written into the directory configured by the stats_temp_directory configuration parameter. Normally this is set to $PGDATA/pg_stat_tmp.
On shutdown, the stats file is written into the file $PGDATA/global/pgstat.stat, and the files in the tmp dir above are removed. This file is then read and removed when the database is started up again.
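The same write-to-temp-then-rename pattern is easy to sketch (the paths below are placeholders, not PostgreSQL's real code); the atomic overwrite in the last step is exactly where Windows behaves differently, as discussed in the next section:

    # Sketch of the stats collector's update pattern: write a temp file, then
    # atomically rename it over the live file. Paths are placeholders.
    import os

    STATS_DIR = "pg_stat_tmp"
    TMP_FILE  = os.path.join(STATS_DIR, "global.tmp")
    STAT_FILE = os.path.join(STATS_DIR, "global.stat")

    def write_stats(data: bytes) -> None:
        with open(TMP_FILE, "wb") as f:
            f.write(data)              # steps 1 and 2: create global.tmp, fill it
            f.flush()
            os.fsync(f.fileno())
        # Step 3: atomic overwrite. On Linux this succeeds even if readers have
        # global.stat open; on Windows it raises PermissionError if a reader
        # holds the file open without delete sharing.
        os.replace(TMP_FILE, STAT_FILE)

    os.makedirs(STATS_DIR, exist_ok=True)
    write_stats(b"example stats payload")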
Why is the stats collector process creating so much I/O load?
Normally, the amount of data written to global.stat is relatively modest and writing it does not generate much I/O traffic. However, under some circumstances it does seem to get very bloated. When this happens, the amount of load generated can become excessive, as the entire file is rewritten more than once a second.
I have had one experience where it grew by a factor of 10 or more compared to other similar servers. That machine did have an unusually large number of databases (for our application at least: 30-40 databases, but nothing like the 6,000 you say you have). It is possible that having a large number of databases exacerbates this.
Some of the references below describe a pattern of creating and dropping lots of tables causing bloat in these files, and suggest that perhaps autovacuum is not running aggressively enough to remove the associated bloat. You may wish to review your autovacuum settings.
Why do I get 'Permission Denied' errors on Windows?
After examining the PostgreSQL source code, I think there may be a race condition in accessing the global.stat file, which could happen at any time but is exacerbated by the size of the file.
The default mode of operation on Windows is that it is not possible to rename or remove a file while another process has it open. This is different from Linux (and Unix), where a file can be renamed or removed while other processes are accessing it.
In the sequence above you can see that if one of the backend processes is reading the file at the same time as the stats collector is rewriting it, then the backend process may still have the file open at the time the rename is attempted. That leads to the 'Permission Denied' error you are seeing.
Naturally when the file becomes very large, then the amount of time taken to read it becomes more significant, therefore the probability of the stats collector process attempting a rename while a backend still has it open increases.
However, since the file is frequently rewritten, the impact of these errors is relatively mild. It just means that this particular update fails, leaving the backends with slightly out-of-date statistics. The next update will probably succeed.
Note that Windows does offer a file-opening mode which allows files to be deleted or renamed while they are open in another process; however, as far as I could tell, this mode is not used by PostgreSQL. I could not find any bug report on this; it seems like it should be reported.
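For illustration, this is roughly what that sharing mode looks like at the Win32 API level: a reader that opened the stats file with FILE_SHARE_DELETE would not block the collector's rename. The path is a placeholder and this is only a sketch of the Windows flag, not how PostgreSQL actually opens the file:

    # Illustration only: opening a file on Windows with FILE_SHARE_DELETE, the
    # sharing mode that lets another process rename or delete the file while
    # this handle stays open. The path is a placeholder.
    import ctypes
    from ctypes import wintypes

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    kernel32.CreateFileW.argtypes = [
        wintypes.LPCWSTR, wintypes.DWORD, wintypes.DWORD, wintypes.LPVOID,
        wintypes.DWORD, wintypes.DWORD, wintypes.HANDLE,
    ]
    kernel32.CreateFileW.restype = wintypes.HANDLE
    kernel32.CloseHandle.argtypes = [wintypes.HANDLE]

    GENERIC_READ      = 0x80000000
    FILE_SHARE_READ   = 0x00000001
    FILE_SHARE_WRITE  = 0x00000002
    FILE_SHARE_DELETE = 0x00000004   # the important flag for this discussion
    OPEN_EXISTING     = 3
    INVALID_HANDLE    = wintypes.HANDLE(-1).value

    handle = kernel32.CreateFileW(
        r"C:\pgdata\pg_stat_tmp\global.stat",   # placeholder path
        GENERIC_READ,
        FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
        None,
        OPEN_EXISTING,
        0,
        None,
    )
    if handle in (None, INVALID_HANDLE):
        raise ctypes.WinError(ctypes.get_last_error())

    # While this handle is open, a writer can still replace global.stat.
    kernel32.CloseHandle(handle)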
In summary, these errors are a side effect of the main problem, which is the excessive size of the global.stat file.
I've turned track_activities off but the file is still being written - Why?
From what I can see, track_activities affects only one of the sets of information that the stats collector is collecting.
In addition, it looks as though the stats collector process is started regardless of these settings, and will continue to re-write the file. The settings appear to control only the collection of fresh data.
My conclusion is that once the file has become bloated, it will remain so and continue to be re-written, even once all of the stats collection options are turned off.
What can I do to avoid this problem?
Once the file has become bloated, it seems that the easiest way to get the database back into a good working state is to remove the file, using the following steps:
Stop the database
When the DB is stopped, the pg_stat_tmp directory is empty and the file $PGDATA/global/pgstat.stat is written. Rename this file to pgstat.stat.old.
Start the database. It creates a fresh set of pgstat files. After confirming the server is operating correctly, you can remove the old file you renamed.
This is the process we used when one of our servers suffered from this problem.
Needless to say be very careful when manually manipulating any files under the Postgresql Data directory.
After this you may want to monitor the server to see whether the file becomes bloated again (a small monitoring sketch follows the list below). If it does, here are some additional ideas to consider:
As mentioned above, I have seen some references to this file becoming bloated if autovacuum is not running aggressively enough. You may wish to tune the autovacuum settings.
Disabling any of the track_xxx options described in Section 18.9.1 of the manual that are not required may help.
It is possible to place the pg_stat_tmp directory on a tmpfs filesystem (or whatever equivalent RAM-based filesystem is available on Windows). Doing so should eliminate I/O as a concern for these files.
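The monitoring mentioned above can be as simple as a small polling script that warns when the file starts growing again; the path, threshold, and interval here are arbitrary placeholders:

    # Tiny monitoring sketch: warn when global.stat grows beyond a threshold.
    # Path, threshold and polling interval are placeholders -- adjust as needed.
    import os
    import time

    STAT_FILE = r"C:\pgdata\pg_stat_tmp\global.stat"   # placeholder path
    THRESHOLD_BYTES = 5 * 1024 * 1024                  # e.g. warn above 5 MB
    CHECK_INTERVAL_SECONDS = 60

    while True:
        try:
            size = os.path.getsize(STAT_FILE)
            if size > THRESHOLD_BYTES:
                print(f"warning: {STAT_FILE} is {size / 1024:.0f} kB")
        except FileNotFoundError:
            pass   # the file is transiently absent while being replaced
        time.sleep(CHECK_INTERVAL_SECONDS)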
References:
Postgres stats collector showing high disk I/O
Too much I/O generated by postgres stats collector process
stats collector suddenly causing lots of IO
This might be a solution for your problem: https://wiki.postgresql.org/wiki/May_2015_Fsync_Permissions_Bug
Another possibility could be antivirus settings. Try turning it off temporarily.
It happened to me a few days ago. I rebooted the machine, but the error did not disappear.
I don't know why, but performing a vacuum analyze verbose did the trick, and the error has stopped showing up.
A PHP script of mine wants to write into a log folder; the resulting error is:
Unable to open the log file "E:\approot\framework\log/dev.log" for writing.
When I manually set the write permissions for the WebRole user RD001..., it works fine.
Now I want to set the folder permissions automatically. Is there an easy way to get it done?
Please note that I'm very new to IIS and the stuff around it; I would appreciate precise answers, thx.
Short/Technical Response:
You could probably set permissions on a particular folder using full trust and a startup task. However, you'd need to account for a stateless OS and changing drive letters (possible, if not likely) in this script, which would make it difficult. Also, local storage is not persisted, so you'd have no way to ensure this data survived a reboot.
Recommendation: Don't write local, read below ...
EDIT: Got to thinking about this, and while I still recommend against it, there is a third option: you can allocate local storage in the service config, then access it from PHP using a DLL reference, and you will have access to that folder. Please remember local storage is not persisted, so it's gone after a reboot.
Service Config for local:
http://blogs.mscommunity.net/blogs/dadamec/archive/2008/12/11/azure-reading-and-writing-with-localstorage.aspx
Accessing config from php:
http://phpazure.codeplex.com/discussions/64334?ProjectName=phpazure
Long / Detailed Response:
In Azure, you really are encouraged to approach things as a platform and not as "software on a server". What I mean is that ideas such as "write something to a local log file" are somewhat incompatible with the cloud "idea". Depending on your usage, you could (and should) convert this script to output this data to some cloud-based or external storage, rather than just placing it on the disk.
I would suggest modifying this script to leverage the PHP Azure SDK and write these log entries out to table or blob storage in Azure. If this sounds good, please provide the PHP and I can give an exact example.
The main reason for that (besides pushing the cloud idea) is that in Azure, you cannot assume the host machine ("role instance") will maintain an OS state, so while you can set some things such as folder permissions, you can't rely on them sticking that way. You have no real way to guarantee those permissions won't be reset when the fabric has to update your role and react to some lower level problem. For example, a hard-drive cage on the rack where your current instance lives could fail. If the failure were bad enough, the Fabric controller would need to rebuild your instance. When that happens, your code is moved to an entirely different server, so the need would arise to re-set those permissions. Also, depending on the changes, the E:\ could all of a sudden need to be the F:\ or X:\ drive and you wouldn't know.
It's much better to pretend (at some level) that your application is running "in Azure" and not "on a server in Azure", so you make no assumptions about the hosting environment. Anything you need outside of your code (data, logs, audits, etc.) should be stored somewhere you can control (Azure Storage, an external call-out, etc.).
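To illustrate the blob-storage suggestion, here is a rough sketch using the current Python Azure Storage SDK (the PHP SDK exposes equivalent blob and table operations); the connection string, container, and blob names are placeholders:

    # Rough sketch: append log lines to an Azure append blob instead of a local
    # file. Connection string, container and blob names are placeholders.
    from azure.storage.blob import BlobServiceClient

    CONNECTION_STRING = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..."

    service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
    blob = service.get_blob_client(container="logs", blob="dev.log")

    # Create the append blob once; subsequent appends just add blocks.
    if not blob.exists():
        blob.create_append_blob()

    blob.append_block(b"sample log line\n")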
Is it possible to access kernel objects on remote computers? I was reading that you could access remote kernel objects by using a symbolic link to \Device\Mup\server\object but I am not sure if that would work. Thanks for the help!
I know this is a little odd but I was trying to access a named pipe.
This is not possible in the general case: Mup is the broker that chooses which remote filesystem (WebDAV/SMB/NFS) to engage for a particular UNC path. What kernel objects are you trying to access specifically?
Edit: Named pipes are definitely doable - try the syntax:
\\machinename\pipe\nameofpipe
Keep in mind that the pipe has to be ACLed appropriately.
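As a quick check from the client side, a byte-mode remote pipe can be opened like an ordinary file (this is a sketch; "machinename" and "nameofpipe" are the placeholders from above, and the pipe's ACL must allow your account):

    # Minimal sketch: connect to a remote named pipe on Windows using the
    # built-in open(), which maps to CreateFile on the UNC pipe path.
    # Assumes a byte-mode pipe whose ACL allows your account.
    pipe_path = r"\\machinename\pipe\nameofpipe"

    with open(pipe_path, "r+b", buffering=0) as pipe:
        pipe.write(b"hello\n")      # send a request to the pipe server
        response = pipe.read(4096)  # read up to 4 kB of the server's reply
        print(response)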