While running the Database Configuration Assistant (Oracle 9i) I get the error "oversubscribed literal/length tree". Please help me out.
This is an instance of java.util.zip.ZipException. There are a number of things which might cause it. One cause is the extraction path being too long, for instance if you extracted the zip into a sub-directory (instead of, say, the root).
But there are a number of other things which might cause it, e.g. file corruption. So if the directory path length is not the problem, you will need to give us more details. For instance, what OS are you installing on? Also, is there any more info regarding the error in the log file: $ORACLE_HOME/assistants/opca/install.log?
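If you suspect corruption rather than path length, one quick check is to force every entry of the downloaded archive to be fully inflated; a damaged entry surfaces the same ZipException (for example the "oversubscribed literal/length tree" message you quoted). Here is a minimal Java sketch along those lines; the archive path is just a placeholder for wherever your installation media came from:

import java.io.IOException;
import java.io.InputStream;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipException;
import java.util.zip.ZipFile;

public class ZipCheck {
    public static void main(String[] args) throws IOException {
        // Placeholder path: point this at the archive the installer was extracted from.
        String path = args.length > 0 ? args[0] : "oracle_installer.zip";
        byte[] buf = new byte[8192];
        try (ZipFile zip = new ZipFile(path)) {
            Enumeration<? extends ZipEntry> entries = zip.entries();
            while (entries.hasMoreElements()) {
                ZipEntry entry = entries.nextElement();
                try (InputStream in = zip.getInputStream(entry)) {
                    // Reading every byte forces each entry to be inflated; a corrupt
                    // entry fails here with a ZipException describing the problem.
                    while (in.read(buf) != -1) { /* discard */ }
                } catch (ZipException e) {
                    System.out.println("Corrupt entry: " + entry.getName() + " -> " + e.getMessage());
                }
            }
        }
        System.out.println("Scan finished.");
    }
}

If the scan reports corrupt entries, re-download or re-copy the media before re-running the installer.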
I am using elasticsearch in a Liferay portal (sidecar). Unfortunately, elasticsearch can only be started as the root user. Otherwise, I get a Java native error in
org.elasticsearch.bootstrap.JNANatives.trySetMaxNumberOfThreads on bootstrapping:
V [libjvm.so+0x687142] InterpreterRuntime::resolve_invoke(JavaThread*, Bytecodes::Code)+0x1b2
j org.elasticsearch.bootstrap.JNANatives.trySetMaxNumberOfThreads()V+20
j org.elasticsearch.bootstrap.Natives.trySetMaxNumberOfThreads()V+17
j org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Ljava/nio/file/Path;ZZZ)V+75
I have checked/updated the ulimits for the current user, but it seems like JNANatives is not allowed to execute this method. The service starts fine as root, but I want to avoid any root processes on the server. Has anyone had a similar problem? How can I debug this? What could be the issue here?
Many thanks
I have now found the problem and want to share it here, so that anyone who has a similar issue finds a possible hint.
I figured out that a few folders up the hierarchy there was a mount involved, causing permission issues. This caused a HotSpot JVM native error in org.elasticsearch.bootstrap.JNANatives.trySetMaxNumberOfThreads().
Liferay reports java.io.StreamCorruptedException: invalid type code: 23 for elasticsearch.
I thought it was due to a ulimit, but it wasn't. Fixing the mount/folder permissions on a top-level folder resolved the bootstrapping issues.
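For anyone debugging something similar: a quick sanity check is to walk from the Elasticsearch (sidecar) home directory up to the filesystem root and verify that every ancestor is readable and traversable by the service user. This is only a rough sketch and the path below is a made-up example, but run as the non-root user it quickly shows which parent directory (e.g. a mount point) is causing trouble:

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CheckAncestorPermissions {
    public static void main(String[] args) {
        // Placeholder path: pass your actual Elasticsearch/sidecar directory as an argument.
        Path p = Paths.get(args.length > 0 ? args[0] : "/opt/liferay/elasticsearch-sidecar")
                      .toAbsolutePath().normalize();
        while (p != null) {
            boolean readable = Files.isReadable(p);
            boolean traversable = Files.isExecutable(p); // execute bit on a directory = may enter it
            System.out.printf("%-60s read=%b traverse=%b%n", p, readable, traversable);
            p = p.getParent();
        }
    }
}

Any ancestor reported as not traversable is a candidate for the kind of mount/permission problem described above.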
I have an application using the Mnesia database running in a memory-constrained environment. Is there any way to disable MnesiaCore file creation when something goes wrong with Mnesia (which is the default behaviour)? Or is it possible to limit the size of the file or to constrain the number of files?
Since the MnesiaCore file gets created with a different name each time, it does not overwrite the previous one. This floods the working directory of my application with MnesiaCore files if the application has crashed multiple times.
In the folder <WAS Liberty Profile root>\<profile>\usr\servers\defaultServer there are many files named core.*.dmp and heapdump.*.phd. The size of these files is between 130 MB and 1.3 GB when my deployed app uses 4 MB.
Can I delete these files *.dmp and *.phd?
What are these files for?
Short answer: yes, it's safe to delete them, but you should find out why they're appearing, as it could indicate that your application is not running correctly.
If your dump files were created a long time ago, or you know you were debugging an OutOfMemoryException, or you have been running server javadump --include=heap,system, then go ahead and delete the files. If, however, you keep getting new dump files and don't know why, then read on.
The core and heapdump files contain a snapshot of the memory of the application from a specific point in time. Usually you do this to capture the state of your application at the point where something goes wrong so that you can examine it with analysis tools and try to work out what went wrong.
For example, by default the IBM JVM will perform a dump when an OutOfMemoryException is thrown. This allows you to look at the dump file and see what's using up all the memory.
If you have a corresponding javacore file, the fourth line or so should say why the memory dump was made.
e.g. 1TISIGINFO Dump Requested By User (00100000) Through com.ibm.jvm.Dump.javaDumpToFile (caused by running server javadump)
or 1TISIGINFO Dump Event "user" (00004000) received (caused by running kill -3)
If it's a "user" event, then something's asking the JVM to create a dump. If not, and you're still not sure what's causing it, check your jvm.options file for any -Xdump options which can be used to cause the JVM to create a dump in response to certain events. More information on that in the Knowledge Center.
I am running PostgreSQL 9.4 on Windows, and constantly get the error,
2015-06-15 09:35:36 EDT LOG could not rename temporary statistics file "pg_stat_tmp/global.tmp" to "pg_stat_tmp/global.stat": Permission denied
I also see constant 200-800k writes to global.stat and global.tmp. I have seen other users with the same issue, but no solution.
It is a big database server, with 300 GB of data and 6,000 databases.
I tried setting,
track_activities=off
in the config file, but it did not seem to have any effect.
Any help with the error, or with reducing the writes?
After my initial answer, I decided to research the operation of the stats collector and in particular what it is doing with the files in pg_stat_tmp. I've substantially re-written the answer as a result.
What are the global.stat / global.tmp files used for?
Postgresql contains functionality to collect statistics and status information about its operation. This functionality is described in Section 27.2 of the manual.
This information is collated by the stats collector process. It is made available to the other postgresql processes via the global.stat file. The first time you run a query that accesses this data within a transaction, the backend which you are connected to will read the global.stat file and cache the result, using it until the end of the transaction.
To keep this file up to date, the stats collector process periodically re-writes it with updated information. It typically does this several times a second. The process is as follows:
Create a new file global.tmp
Write data to this file
Rename global.tmp as global.stat, overwriting the previous global.stat
The global.tmp and global.stat files are written into the directory configured by the stats_temp_directory configuration parameter. Normally this is set to $PGDATA/pg_stat_tmp.
On shutdown, the stats file is written into the file $PGDATA/global/pgstat.stat, and the files in the tmp dir above are removed. This file is then read and removed when the database is started up again.
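For illustration only, here is the same create/write/rename-over pattern as a small Java sketch (PostgreSQL's real implementation is C; the file and directory names simply mirror the ones described above):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class StatsRewrite {
    public static void main(String[] args) throws IOException {
        Path dir  = Paths.get("pg_stat_tmp");        // stats_temp_directory
        Path tmp  = dir.resolve("global.tmp");
        Path stat = dir.resolve("global.stat");

        Files.createDirectories(dir);
        // Steps 1 and 2: create global.tmp and write the fresh snapshot into it.
        Files.write(tmp, "updated statistics snapshot".getBytes(StandardCharsets.UTF_8));
        // Step 3: rename global.tmp over global.stat. On Linux this succeeds even if
        // a backend still has global.stat open; on Windows it can fail (see below).
        Files.move(tmp, stat, StandardCopyOption.REPLACE_EXISTING);
    }
}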
Why is the stats collector process creating so much I/O load?
Normally, the amount of data written to global.stat is relatively modest and writing it does not generate that much I/O traffic. However, under some circumstances it does seem to get very bloated. When this happens the amount of load generated can start to get excessive, as the entire file is rewritten more than once a second.
I have had one experience where it grew by a factor of 10 or more compared to other similar servers. This machine did have an unusually large number of databases (for our application at least - 30-40 databases - but nothing like the 6,000 you say you have). It is possible that having a large number of databases exacerbates this.
Some of the references below talk about a pattern of creating / dropping lots of tables causing bloat in these files, and that perhaps autovacuum is not running aggressively enough to remove the associated bloat. You may wish to consider your autovac settings.
Why do I get 'Permission Denied' errors on Windows?
After examining the postgresql source code I think there may be a race condition in accessing the global.stat file, which could happen at any time but is exacerbated by the size of the file.
The default mode of operation in Windows is that it is not possible to rename or remove a file while another process has it open. This is different to Linux (or Unix) where a file can be renamed or removed while other processes are accessing it.
In the sequence above you can see that if one of the backend processes is reading the file at the same time as the stats collector is rewriting it, then the backend process may still have the file open at the time the rename is attempted. That leads to the 'Permission Denied' error you are seeing.
Naturally when the file becomes very large, then the amount of time taken to read it becomes more significant, therefore the probability of the stats collector process attempting a rename while a backend still has it open increases.
However, since the file is frequently being rewritten, the impact of these errors is relatively mild. It just means that this particular update fails, leaving the backends with slightly out-of-date statistics. The next update will probably succeed.
Note that Windows does offer a file opening mode which does allow files to be deleted or renamed while they are opened by another process, however as far as I could tell, this mode is not used by Postgresql. I could not find any bug report on this - seems like it should be reported.
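The conflict is easy to reproduce outside PostgreSQL. The sketch below (again Java, purely for illustration) holds the target file open the way a backend would while attempting the same replace-by-rename; on Linux the move succeeds, while on Windows it will typically fail with the sharing/access error that PostgreSQL logs as "Permission denied":

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class RenameWhileOpen {
    public static void main(String[] args) throws IOException {
        Path stat = Paths.get("global.stat");
        Path tmp  = Paths.get("global.tmp");
        Files.write(stat, "old snapshot".getBytes(StandardCharsets.UTF_8));
        Files.write(tmp, "new snapshot".getBytes(StandardCharsets.UTF_8));

        // A "backend" still has the current stats file open for reading...
        try (InputStream backend = new FileInputStream(stat.toFile())) {
            backend.read();
            // ...while the "stats collector" tries to rename the new file over it.
            try {
                Files.move(tmp, stat, StandardCopyOption.REPLACE_EXISTING);
                System.out.println("Rename succeeded (Linux/Unix semantics).");
            } catch (IOException e) {
                System.out.println("Rename failed while the file was open: " + e);
            }
        }
    }
}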
In summary, these errors are a side effect of the main problem, which is the excessive size of the global.stat file.
I've turned track_activities off but the file is still being written - Why?
From what I can see, track_activities affects only one of the sets of information that the stats collector is collecting.
In addition, it looks as though the stats collector process is started regardless of these settings, and will continue to re-write the file. The settings appear to control only the collection of fresh data.
My conclusion is that once the file has become bloated, it will remain so and continue to be re-written, even once all of the stats collection options are turned off.
What can I do to avoid this problem?
Once the file has become bloated, it seems that the easiest way to get the database back into a good working state is to remove the file, using the following steps:
Stop the database
When the DB is stopped, the pg_stat_tmp directory is empty and the file $PGDATA/global/pgstat.stat is written. Rename this file to pgstat.stat.old.
Start the database. It creates a fresh set of pgstat files. After confirming the server was operating correctly you can remove the old file you have renamed.
This is the process we used when one of our servers suffered from this problem.
Needless to say be very careful when manually manipulating any files under the Postgresql Data directory.
After this you may want to monitor the server to see if the file becomes bloated again. If it does, then here are some additional ideas to consider:
As mentioned above I have seen some references to this file becoming bloated if autovacuum is not running aggressively enough. You may wish to tune the autovacuum settings
Disabling any of the track_xxx options described in Section 18.9.1 of the manual which are not required may help
It is possible to place the pg_stat_tmp directory in a tmpfs filesystem (or whatever equivalent RAM-based filesystem is available in Windows). Doing so should eliminate I/O as a concern for these files.
References:
Postgres stats collector showing high disk I/O
Too much I/O generated by postgres stats collector process
stats collector suddenly causing lots of IO
This might be a solution for your problem: https://wiki.postgresql.org/wiki/May_2015_Fsync_Permissions_Bug
Another possibility could be antivirus settings. Try to turn it off temporarily.
It happened to me a few days ago. I rebooted the machine, but the error did not disappear.
I don't know why, but performing a vacuum analyze verbose did the trick, and the error stopped showing up.
In my attempt to extract data (dumps and selective reading of columns) from a diverse collection of edb databases, I ran into a fundamental problem. I have an edb database that comes with a couple of log files. I know what information is in the database, but I only get half of it extracted. I fear that the remaining half is sitting somewhere in the log files. I assumed the EDB engine knows where the log files are and automagically loads them when attaching the database (JET_paramSystemPath, JET_paramLogFilePath and JET_paramBaseName are properly set). Is that a wrong assumption? If so, what should I do to have the logs loaded as well?
Alternatively, would it be possible to simply commit the transactions to the EDB file and get rid of the logs?
If there are uncommitted transactions then the database will be marked as 'inconsistent'. You can check this using ESENTUTL /MH against the database. Calling JetAttachDatabase against an inconsistent database will always fail.
So, if your program is able to attach and open the database then it is consistent. There are two ways a database can be made consistent:
A clean shutdown of ESENT.
Running recovery using the logfiles at JetInit time.
The first thing that JetInit does is to look for the logfiles specified by JET_paramLogFilePath and JET_paramBaseName. Logfiles contain the full paths of the database(s) they reference and the transactions in the logfiles are then committed to the database(s). So, if you set the system parameters properly then ESENT will load the logs when attaching the database.
On the other hand, if you don't set the parameters properly then your program will actually work on databases that don't require recovery. JetInit won't find any logfiles so it won't do anything and the attach will succeed because the database is consistent.
One further twist is that the logfiles contain the full path to the database. This means that if you have copied/moved the database then recovery will not work. Starting with Windows Server 2003 you can deal with this by setting JET_paramAlternateDatabaseRecoveryPath to the directory containing the database.
Important: to be safe you should always attach and open the database using the read-only flags. This will avoid any problems caused by bad logfile settings. A common problem is that read-only applications end up creating a set of logfiles in a different directory which prevent the database from being recovered properly.