OS kernel parameter missing SEMMNI - Oracle

I am trying to install Oracle 11gR2 on Oracle Linux 7.
I think I have configured everything properly as per the documentation at
https://oracle-base.com/articles/11g/oracle-db-11gr2-installation-on-oracle-linux-7,
but unfortunately the installer still reports the prerequisite failure from the title (missing SEMMNI).
However, when I check the parameters with sysctl -p I get:
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
or
sysctl -a | grep "kernel.sem"
gives
kernel.sem = 250 32000 100 128
kernel.sem_next_id = -1
sysctl: reading key "net.ipv6.conf.all.stable_secret"
sysctl: reading key "net.ipv6.conf.default.stable_secret"
sysctl: reading key "net.ipv6.conf.ens192.stable_secret"
sysctl: reading key "net.ipv6.conf.lo.stable_secret"
What am I missing?

There is nothing wrong with your installation approach.
First, log in as root:
$ su root
Then check whether the resource limits are within the recommended ranges shown below; if they are not, update the corresponding values in /etc/security/limits.conf (see the sample entries after these checks):
--# the soft and hard limits for the stack setting
$ ulimit -Ss
10240
$ ulimit -Hs
32768
--# the soft and the hard limits for the number of processes available to a user
$ ulimit -Su
2047
$ ulimit -Hu
16384
--# the soft and hard limits for the file descriptor setting
$ ulimit -Sn
4096
$ ulimit -Hn
65536
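If any of those values fall outside the ranges above, the usual /etc/security/limits.conf entries for the installation owner look like the following sketch (the oracle username is the common default; adjust it if your install owner differs):
oracle   soft   nproc    2047
oracle   hard   nproc    16384
oracle   soft   nofile   4096
oracle   hard   nofile   65536
oracle   soft   stack    10240
oracle   hard   stack    32768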
If the problem still persists and you are using a release earlier than 11.2.0.4, you may be hitting a known bug; in that case, reinstall with at least version 11.2.0.4.

According to this answer, there is a bug in RHEL 7 with 11.2.0.4. If you really need to install 11.2.0.4 on RHEL 7, think twice about it; otherwise, go ahead with 12c.
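As a side note on the SEMMNI value itself: the four fields of kernel.sem are, in order, semmsl, semmns, semopm and semmni, so the installer's SEMMNI prerequisite reads the last field of the output shown in the question (128 here, which matches the usual 11gR2 minimum). A quick way to confirm what the kernel actually has:
# fields: semmsl  semmns  semopm  semmni
cat /proc/sys/kernel/sem
250     32000   100     128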

Related

Out of memory error installing laravel app on server

Installing my Laravel 5.8 app on Ubuntu 18 (on DigitalOcean) I got this error:
Updating dependencies
: mmap() failed: [12] Cannot allocate memory
: mmap() failed: [12] Cannot allocate memory
: PHP Fatal error: Out of memory (allocated 533733376) (tried to allocate 4096 bytes) in /usr/share/php/Composer/DependencyResolver/RuleSetGenerator.php on line 126
: Out of memory (allocated 533733376) (tried to allocate 4096 bytes) in /usr/share/php/Composer/DependencyResolver/RuleSetGenerator.php on line 126
I checked the memory and saw:
# free
total used free shared buff/cache available
Mem: 1009156 387908 147884 15716 473364 462800
I tried to add a swap file; after some googling I found this approach:
# sudo swapon -a
# sudo fallocate -l 1G /`file
>
and last command hang forever.
Next I tried :
# sudo mkswap /swapfile
mkswap: cannot open /swapfile: No such file or directory
# sudo swapon /swapfile
swapon: cannot open /swapfile: No such file or directory
# cat /proc/partitions
major minor #blocks name
252 0 26214400 vda
252 1 26100719 vda1
252 14 4096 vda14
252 15 108544 vda15
# fdisk -l || mount | grep sd
Disk /dev/vda: 25 GiB, 26843545600 bytes, 52428800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: C1F9A1FE-534C-4DAC-9299-5CC180C29DCE
Device Start End Sectors Size Type
/dev/vda1 227328 52428766 52201439 24.9G Linux filesystem
/dev/vda14 2048 10239 8192 4M BIOS boot
/dev/vda15 10240 227327 217088 106M Microsoft basic data
Partition table entries are not in disk order.
Why does this error occur and how do I fix it?
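(For comparison with the attempt above: the usual swap-file sequence on Ubuntu looks roughly like the sketch below. The 1G size and /swapfile path mirror the commands above; fallocate has to be given the full /swapfile path, otherwise mkswap and swapon have nothing to open.)
sudo fallocate -l 1G /swapfile     # create the backing file
sudo chmod 600 /swapfile           # restrict permissions
sudo mkswap /swapfile              # format it as swap
sudo swapon /swapfile              # enable it
free -h                            # verify the new swap shows up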
Update: since composer runs in an Envoy script, I tried switching to the user it runs as and setting the memory limit like this:
# which composer
/usr/bin/composer
su -l lardeployer
php -d memory_limit=1024M /usr/bin/composer update
Composer could not find a composer.json file in /home/lardeployer
To initialize a project, please create a composer.json file as described in the https://getcomposer.org/ "Getting Started" section
But the last message, with its reference to the "Getting Started" section, confused me...
How to fix it?
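(For what it's worth, composer update has to run from the directory that contains the project's composer.json, so the usual pattern would be something like the sketch below; /var/www/yourapp is a placeholder for the real project path, and COMPOSER_MEMORY_LIMIT=-1 is Composer's own way to lift the memory cap.)
su -l lardeployer
cd /var/www/yourapp                               # the directory containing composer.json
php -d memory_limit=1024M /usr/bin/composer update
# or let Composer lift its memory limit entirely:
COMPOSER_MEMORY_LIMIT=-1 /usr/bin/composer update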

Max file descriptors [4096] for elasticsearch process is too low when starting Elasticsearch

When I start Elasticsearch, I am getting this warning:
[2018-08-05T15:04:27,370][WARN ][o.e.b.BootstrapChecks ] [bDyfvVI] max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
I have set the required value to 65536 by following this tutorial: https://www.elastic.co/guide/en/elasticsearch/reference/current/file-descriptors.html. I have also tried these steps:
Check ulimit -n; it was 4096.
Edit /etc/security/limits.conf and add following lines:
* soft nofile 65536
* hard nofile 65536
root soft nofile 65536
root hard nofile 65536
Edit /etc/pam.d/common-session and add this line: session required pam_limits.so
Edit /etc/pam.d/common-session-noninteractive and add this line: session required pam_limits.so
Reload the session and check ulimit -n again; it now shows 65536.
Unfortunately I am still getting this warning. Can someone help me understand why?
We raised MAX_OPEN_FILES to 1024000 by changing the value in
/etc/default/elasticsearch
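For reference, on SysV-init installs that variable is a plain shell assignment; on systemd installs the limit has to be raised in the service unit instead, because /etc/security/limits.conf and pam_limits do not apply to systemd services. A hedged sketch of both places (the value mirrors the 1024000 above):
# /etc/default/elasticsearch (Debian) or /etc/sysconfig/elasticsearch (RPM), SysV init
MAX_OPEN_FILES=1024000
# systemd installs: sudo systemctl edit elasticsearch, then add
[Service]
LimitNOFILE=1024000
# followed by: sudo systemctl daemon-reload && sudo systemctl restart elasticsearch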
More information here:
https://www.elastic.co/guide/en/elasticsearch/reference/current/file-descriptors.html
Elasticsearch uses a lot of file descriptors or file handles. Running out of file descriptors can be disastrous and will most probably lead to data loss. Make sure to increase the limit on the number of open files descriptors for the user running Elasticsearch to 65,536 or higher.
For the .zip and .tar.gz packages, set ulimit -n 65535 as root before starting Elasticsearch, or set nofile to 65535 in /etc/security/limits.conf.
On macOS, you must also pass the JVM option -XX:-MaxFDLimit to Elasticsearch in order for it to make use of the higher file descriptor limit.
RPM and Debian packages already default the maximum number of file descriptors to 65535 and do not require further configuration.
You can check the max_file_descriptors configured for each node using the Nodes Stats API, with:
GET _nodes/stats/process?filter_path=**.max_file_descriptors
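Via curl, assuming the default HTTP endpoint on localhost:9200, that check looks roughly like:
curl -s "http://localhost:9200/_nodes/stats/process?filter_path=**.max_file_descriptors&pretty"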

How to set limit for Nginx log files on Mac OS?

Nginx occupies all the available disk space with its logs. How do I set a size limit for the log files on Mac OS?
Rotate the log files. On OS X, newsyslog is the preferred utility to do that. Set up a file like this in /etc/newsyslog.d/nginx.conf:
# logfilename [owner:group] mode count size when flags [/pid_file] [sig_num]
/var/log/nginx.log deceze:wheel 644 2 1024 * J
Read https://www.newsyslog.org/manual.html for more information.
Building on @deceze's answer, here's an adapted version of Apple's Apache configuration for a Homebrew-installed nginx:
#logfilename [owner:group] mode count size when [flags] [/pid_file] [sig_num]
/opt/homebrew/var/log/nginx/access.log _nginx:_nginx 644 10 20480 * Z /opt/homebrew/var/run/nginx.pid 30
This assumes you're running nginx as user:group _nginx:_nginx. It sends the correct SIGUSR1 signal (30 on macOS) to nginx, and it uses gzip (the Z flag) instead of bzip2 (J).
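To confirm an entry like this parses and does what you expect before the next rotation window, newsyslog can be run in no-action verbose mode against just that file (flags per the newsyslog man page):
# -n: show what would be done without doing it, -v: verbose, -f: use this config file only
sudo newsyslog -n -v -f /etc/newsyslog.d/nginx.conf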

hadoop ulimit open files name

I have a Hadoop cluster that we assume is performing pretty "badly". The nodes are pretty beefy: 24 cores, 60+ GB RAM, etc. We are wondering if there are some basic Linux/Hadoop default configurations that prevent Hadoop from fully utilizing our hardware.
There is a post here that described a few possibilities that I think might be true.
I tried logging in to the namenode as root, hdfs and also myself, and looked at the output of lsof as well as the ulimit settings. Here is the output; can anyone help me understand why the settings don't match the number of open files?
For example, when I logged in as root, the lsof output looks like this:
[root@box ~]# lsof | awk '{print $3}' | sort | uniq -c | sort -nr
7256 cloudera-scm
3910 root
2173 oracle
1886 hbase
1575 hue
1180 hive
801 mapred
470 oozie
427 yarn
418 hdfs
244 oragrid
241 zookeeper
94 postfix
87 httpfs
...
But when I check out the ulimit output, it looks like this:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 806018
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 1024
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
I assume there should be no more than 1024 files opened by one user; however, the lsof output shows 7000+ files opened by a single user. Can anyone explain what is going on here?
Correct me if I have misunderstood the relationship between ulimit and lsof.
Many thanks!
You need to check the limits of the process itself; they may be different from those of your shell session. For example:
[root@ADWEB_HAPROXY3 ~]# cat /proc/$(pidof haproxy)/limits | grep open
Max open files 65536 65536 files
[root@ADWEB_HAPROXY3 ~]# ulimit -n
4096
In my case haproxy has a directive in its config file to change the maximum number of open files; there should be something similar for Hadoop as well.
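For a Hadoop daemon the same kind of check applies; for instance, for the NameNode (the pgrep pattern is an assumption here, adjust it to whichever daemon you are inspecting):
# read the daemon's effective limits rather than your shell's
pid=$(pgrep -f org.apache.hadoop.hdfs.server.namenode.NameNode | head -n1)
grep -i "open files" /proc/$pid/limits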
I had a very similar issue, which caused one of the cluster's YARN TimeLine servers to stop after hitting the magical 1024 open files limit and crashing with "too many open files" errors.
After some investigation it turned out that it had serious issues dealing with too many files in TimeLine's LevelDB. For some reason YARN ignored the yarn.timeline-service.entity-group-fs-store.retain-seconds setting (by default it is 604800 seconds, i.e. 7 days). We had LevelDB files dating back more than a month.
What seriously helped was applying a fix described in here: https://community.hortonworks.com/articles/48735/application-timeline-server-manage-the-size-of-the.html
Basically, there are a couple of options I tried:
Shrink the TTL (time to live) settings. First, enable TTL:
<property>
<description>Enable age off of timeline store data.</description>
<name>yarn.timeline-service.ttl-enable</name>
<value>true</value>
</property>
Then set yarn.timeline-service.ttl-ms (set it to some low value for a period of time):
<property>
<description>Time to live for timeline store data in milliseconds.</description>
<name>yarn.timeline-service.ttl-ms</name>
<value>604800000</value>
</property>
The second option, as described, is to stop the TimeLine server, delete the whole LevelDB store and restart the server. This will start the ATS database from scratch. It works fine if the other options failed for you.
To do it, find the database location from yarn.timeline-service.leveldb-timeline-store.path, back it up and remove all subfolders from it. This operation will require root access to the server where TimeLine is located.
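A rough outline of that reset, with /hadoop/yarn/timeline standing in for whatever yarn.timeline-service.leveldb-timeline-store.path points to on your cluster (the stop/start commands also vary by Hadoop version and cluster manager):
yarn --daemon stop timelineserver                              # Hadoop 3.x; 2.x uses yarn-daemon.sh stop timelineserver
tar czf /tmp/ats-leveldb-backup.tar.gz /hadoop/yarn/timeline   # back up the store first
rm -rf /hadoop/yarn/timeline/*                                 # clear the LevelDB store
yarn --daemon start timelineserver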
Hope it helps.

Couch has apparent limit of attachment sizes on Mac OS X

I have a plain vanilla CouchDB from Apache, running as an app on Mac OS X 10.9. If I try to attach an attachment larger than 1 MB to a document, it just hangs and does nothing.
I have tried CouchDB on Linux, and there the sky is the limit.
I first thought it had to do with low limits on the Mac, but that doesn't seem to be the case:
➜ ~ ulimit -a
-t: cpu time (seconds) unlimited
-f: file size (blocks) unlimited
-d: data seg size (kbytes) unlimited
-s: stack size (kbytes) 8192
-c: core file size (blocks) 0
-v: address space (kbytes) unlimited
-l: locked-in-memory size (kbytes) unlimited
-u: processes 709
-n: file descriptors 256
What is causing this? Why? And how do I fix it?
Check the config files given by couchdb -c. You probably have this somewhere in them (for some unknown reason):
[couchdb]
max_attachment_size = 1048576 ; bytes
Remove or comment out the line and you should be fine.
Or maybe it was compiled with this value hardcoded, in which case you could add this line to one of the config files and increase the value.
Update
max_attachment_size is undocumented, so it is probably not safe to rely on. I am leaving the original answer since it seems to have solved the OP's problem, but according to the docs the attachment size should be unlimited. Also, attachment_stream_buffer_size is the config key controlling the chunk size of attachments, which might be relevant.
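To see what the running server actually has configured, the HTTP configuration API can be queried (the 5984 port and the lack of admin credentials are assumptions for a default 1.x install; CouchDB 2.x exposes the same keys under /_node/_local/_config instead):
curl -s http://127.0.0.1:5984/_config/couchdb/max_attachment_size
curl -s http://127.0.0.1:5984/_config/couchdb/attachment_stream_buffer_size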
