CPU stuck for XXs

I found that the CPU gets stuck in a VM (CentOS 7.2):
[root@ha-node1 ~]#
Message from syslogd@ha-node1 at Jul 31 13:41:56 ...
kernel:BUG: soft lockup - CPU#6 stuck for 21s! [swapper/6:0]
So I used dmesg to check the soft lockups:
[root@ha-node1 ~]# dmesg | grep stuck
[141504.185667] BUG: soft lockup - CPU#1 stuck for 23s! [stonithd:57926]
[141532.172112] BUG: soft lockup - CPU#1 stuck for 23s! [stonithd:57926]
[141541.654004] BUG: soft lockup - CPU#3 stuck for 79s! [ksoftirqd/3:279]
[141541.654974] BUG: soft lockup - CPU#7 stuck for 66s! [corosync:57912]
[141549.948803] BUG: soft lockup - CPU#6 stuck for 87s! [systemd:1]
[141578.290675] BUG: soft lockup - CPU#2 stuck for 27s! [xfsaild/sda2:598]
[141578.290767] BUG: soft lockup - CPU#7 stuck for 27s! [corosync:57912]
[141578.290820] BUG: soft lockup - CPU#6 stuck for 26s! [ksoftirqd/6:294]
[141578.291153] BUG: soft lockup - CPU#3 stuck for 27s! [haproxy:1250]
[141578.303520] BUG: soft lockup - CPU#0 stuck for 27s! [ruby:92261]
[141584.185153] BUG: soft lockup - CPU#5 stuck for 22s! [nova-conductor:49156]
[141593.982198] BUG: soft lockup - CPU#4 stuck for 39s! [kworker/4:0:285]
[141593.982694] BUG: soft lockup - CPU#1 stuck for 39s! [kworker/1:2:4672]
[141604.368498] BUG: soft lockup - CPU#2 stuck for 23s! [2_scheduler:2989]
[141623.603542] BUG: soft lockup - CPU#3 stuck for 25s! [6_scheduler:2993]
[142237.825417] BUG: soft lockup - CPU#7 stuck for 23s! [corosync:57912]
[142392.705639] BUG: soft lockup - CPU#6 stuck for 23s! [swapper/6:0]
[436213.466318] BUG: soft lockup - CPU#6 stuck for 24s! [4_scheduler:2991]
[436214.828007] BUG: soft lockup - CPU#5 stuck for 22s! [aux:2996]
[473345.361656] BUG: soft lockup - CPU#6 stuck for 28s! [haproxy:1250]
[473346.785653] BUG: soft lockup - CPU#7 stuck for 29s! [corosync:57912]
[479503.672417] BUG: soft lockup - CPU#6 stuck for 21s! [swapper/6:0]

Later, I found that my host machine has 32 GB of memory, while each of my three VMs was given 16 GB, so the host memory was heavily overcommitted (48 GB of guest memory on a 32 GB host).
I also checked the host machine's memory usage.
So I changed each VM's memory to 8 GB, and now everything runs normally; there are no more CPU soft lockups.
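For reference, here is how the overcommit can be confirmed on the host and a guest resized. This is a minimal sketch assuming a KVM/libvirt host; the post does not state the virtualization stack, and the domain name ha-node1 is just borrowed from the prompt above:

# on the host: check physical memory, swap usage and overall memory pressure
free -h
vmstat 5

# with libvirt/KVM, shrink a guest (domain name assumed to match the VM's hostname)
virsh dominfo ha-node1                   # shows the guest's current and maximum memory
virsh setmaxmem ha-node1 8G --config     # recent virsh accepts size suffixes; older versions expect KiB
virsh setmem ha-node1 8G --config
# --config changes take effect on the next guest boot, so shut the VM down and start it again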

Related

ORACLE SQL Errors 127 and 396

For a long time now, Oracle has been giving me these two errors as soon as I start the program, and I have not been able to solve or understand them.
1st error
LEVEL: SEVERE
Sequence: 396
Elapsed: 60969
Source: oracle.dbtools.raptor.backgroundTask.RaptorTaskManager$1
Message: NORTHWIND expired 0.003s ago (see "Caused by:" stack trace below); reported from ExpiredTextBuffer on thread AWT-EventQueue-0 activate AWT event:
java.awt.event.FocusEvent[FOCUS_LOST,permanent,opposite=null,cause=CLEAR_GLOBAL_FOCUS_OWNER] on Editor2_MAIN at oracle.ide.model.ExpiredTextBuffer.newExpiredTextBufferException(ExpiredTextBuffer.java:55)
2nd error
LEVEL: SEVERE
Sequence: 127
Elapsed: 0
Source: oracle.ide.extension.HashStructureHook
Message: Unexpected runtime exception while delivering HashStructureHookEvent
I have tried reinstalling everything, and it is not due to a lack of resources either, since the PC is quite powerful (Ryzen 9, 32 GB of RAM).

Mac brew arangodb delaying start log file path

I have installed ArangoDB through brew. I am new to both Mac and ArangoDB. Right after installing ArangoDB I could start and stop it through brew services, but since yesterday that hasn't worked. However, arangod start worked. Today it is taking a really long time for the service to start up:
$ arangod start
2018-04-30T07:40:32Z [3593] INFO ArangoDB 3.3.7 [darwin] 64bit, using jemalloc, build , VPack 0.1.30, RocksDB 5.6.0, ICU 58.1, V8 5.7.492.77, OpenSSL 1.0.2o 27 Mar 2018
2018-04-30T07:40:32Z [3593] INFO {authentication} Jwt secret not specified, generating...
2018-04-30T07:40:32Z [3593] INFO using storage engine mmfiles
2018-04-30T07:40:32Z [3593] INFO {cluster} Starting up with role SINGLE
2018-04-30T07:40:32Z [3593] INFO {syscall} file-descriptors (nofiles) hard limit is unlimited, soft limit is 8192
2018-04-30T07:40:32Z [3593] INFO {authentication} Authentication is turned on (system only), authentication for unix sockets is turned on
2018-04-30T07:40:32Z [3593] INFO running WAL recovery (1 logfiles)
2018-04-30T07:40:32Z [3593] INFO replaying WAL logfile '/Users/neel/start/journals/logfile-17009.db' (1 of 1)
2018-04-30T07:40:32Z [3593] INFO WAL recovery finished successfully
2018-04-30T07:40:33Z [3593] INFO using endpoint 'http+tcp://127.0.0.1:8529' for non-encrypted requests
2018-04-30T07:41:33Z [3593] WARNING {v8} giving up waiting for unused V8 context after 60.000000 s
2018-04-30T07:41:43Z [3593] WARNING {v8} giving up waiting for unused V8 context after 60.000000 s
2018-04-30T07:42:34Z [3593] WARNING {v8} giving up waiting for unused V8 context after 60.000000 s
2018-04-30T07:43:05Z [3593] INFO ArangoDB (version 3.3.7 [darwin]) is ready for business. Have fun!
I don't know where the log files are. So when I try to start with brew services start arangodb, I can't check whether it has actually started, because it responds "Successfully started arangodb (label: homebrew.mxcl.arangodb)" immediately. So my questions are: why is the start delayed, and where are the log files?
The log files are located here: /usr/local/var/log/arangodb3
The delay above is caused by a lack of available V8 contexts. You can adjust them in /usr/local/etc/arangodb3/arangod.conf; the default value there is 0, which means ArangoDB chooses how many contexts to run, as in the sketch below.
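A minimal sketch of both answers, assuming the Homebrew layout above and ArangoDB 3.x; the log file name and the exact option name are my assumptions, so check them against your installation:

# follow the log written by the brew service
tail -f /usr/local/var/log/arangodb3/arangod.log

# /usr/local/etc/arangodb3/arangod.conf -- pin the number of V8 contexts
# instead of the automatic default (0)
[javascript]
v8-contexts = 16

Restart the service afterwards with brew services restart arangodb so the new setting is picked up.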

Sonarqube 6.7 upgrade failure "Unrecoverable indexation failures"

We are attempting to upgrade from SonarQube 5.6.7 to SonarQube 6.7.2. I followed the steps outlined here: https://docs.sonarqube.org/display/SONAR/Upgrading.
I have > 300 GB available on the partition that Elasticsearch is using, so the failure doesn't seem to be a disk-space problem.
The exception:
2018.03.21 11:13:10 ERROR web[][o.s.s.p.Platform] Background initialization failed. Stopping SonarQube
java.lang.IllegalStateException: Unrecoverable indexation failures
at org.sonar.server.es.IndexingListener$1.onFinish(IndexingListener.java:39)
at org.sonar.server.es.BulkIndexer.stop(BulkIndexer.java:117)
at org.sonar.server.issue.index.IssueIndexer.doIndex(IssueIndexer.java:247)
at org.sonar.server.issue.index.IssueIndexer.indexOnStartup(IssueIndexer.java:95)
at org.sonar.server.es.IndexerStartupTask.indexUninitializedTypes(IndexerStartupTask.java:68)
at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
at org.sonar.server.es.IndexerStartupTask.execute(IndexerStartupTask.java:55)
at java.util.Optional.ifPresent(Optional.java:159)
at org.sonar.server.platform.platformlevel.PlatformLevelStartup$1.doPrivileged(PlatformLevelStartup.java:84)
at org.sonar.server.user.DoPrivileged.execute(DoPrivileged.java:45)
at org.sonar.server.platform.platformlevel.PlatformLevelStartup.start(PlatformLevelStartup.java:80)
at org.sonar.server.platform.Platform.executeStartupTasks(Platform.java:196)
at org.sonar.server.platform.Platform.access$400(Platform.java:46)
at org.sonar.server.platform.Platform$1.lambda$doRun$1(Platform.java:121)
at org.sonar.server.platform.Platform$AutoStarterRunnable.runIfNotAborted(Platform.java:371)
at org.sonar.server.platform.Platform$1.doRun(Platform.java:121)
at org.sonar.server.platform.Platform$AutoStarterRunnable.run(Platform.java:355)
at java.lang.Thread.run(Thread.java:745)
Partition configuration:
[dssc100[DEV]@omhqp13890 bin]$ df
Filesystem 1K-blocks Used Available Use% Mounted on
... Other volumes omitted ...
/dev/mapper/Volume00-upapps
464422672 127511888 313323572 29% /upapps
At one point I did attempt to run the upgrade with the logging set to debug. This generated 6 GB of log files, and I was unable to find anything that seemed out of the ordinary.
We've got around 6k projects in this installation, some of which have several years of history. I would like to preserve that rich history. What can I do or look for as a possible solution?
You seem to have hit SONAR-10502, which is (will be) fixed in 6.7.3 and 7.1.

CentOS 6.5 - ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)

Before I start, I'd like to state that I have exhausted all the Stack Overflow posts asking this same question, but either none of their recommended answers apply (which is absurdly improbable), or I just haven't been able to find the right answer yet. Hence the following post:
First of all, I am not at all proficient at coding. However, in my job, which is managing the WordPress website of an Internet television network (meaning I schedule hourly updates of our online video content every day), I have to teach myself enough very basic coding to handle emergencies. This is such an emergency. While doing advanced scheduling for our online content, I opened too many tabs and triggered a WordPress error:
Error establishing a database connection
This has happened before, and I managed to fix the error then with the answer sudo service mysqld start; found here. However, when it recurred today, simply repeating what I did before no longer worked, so I tried following some other answers (and some other similar questions) as far as my limited understanding of code allows. No answer has helped me so far, or maybe I just don't understand how they work.
From what I gathered, my problem is the following.
Trying the previous solution failed:
[root@li725-222 ~]# sudo service mysqld start;
MySQL Daemon failed to start.
Starting mysqld: [FAILED]
This, from my mysqld.log:
171217 05:19:05 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
171217 05:19:16 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
171217 5:19:16 [Note] /usr/libexec/mysqld (mysqld 5.5.53) starting as process 27761 ...
171217 5:19:16 [Note] Plugin 'FEDERATED' is disabled.
171217 5:19:16 InnoDB: The InnoDB memory heap is disabled
171217 5:19:16 InnoDB: Mutexes and rw_locks use GCC atomic builtins
171217 5:19:16 InnoDB: Compressed tables use zlib 1.2.3
171217 5:19:16 InnoDB: Using Linux native AIO
171217 5:19:16 InnoDB: Initializing buffer pool, size = 128.0M
InnoDB: mmap(137363456 bytes) failed; errno 12
171217 5:19:16 InnoDB: Completed initialization of buffer pool
171217 5:19:16 InnoDB: Fatal error: cannot allocate memory for the buffer pool
171217 5:19:16 [ERROR] Plugin 'InnoDB' init function returned error.
171217 5:19:16 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
171217 5:19:16 [ERROR] Unknown/unsupported storage engine: InnoDB
171217 5:19:16 [ERROR] Aborting
171217 5:19:16 [Note] /usr/libexec/mysqld: Shutdown complete
171217 05:19:16 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
Searching Google for InnoDB: Fatal error: cannot allocate memory for the buffer pool, I found this guide. Following its suggestions didn't yield results (maybe I did it wrong, or maybe the solution is not applicable?):
(1) Increase the physical RAM. Adding 1 GB of additional RAM will solve the problem.
(2) Allocate swap space. A Digital Ocean VPS instance is not configured to use swap space by default. By allocating 512 MB of swap space, we were able to solve this problem. To add swap space to your server, follow these steps:
## As the root user, perform the following:
# dd if=/dev/zero of=/swap.dat bs=1M count=512
# mkswap /swap.dat
# swapon /swap.dat
## Edit /etc/fstab and add the following entry:
/swap.dat none swap sw 0 0
(3) Reduce the size of the MySQL buffer pool.
## Edit /etc/my.cnf and add the following line under the [mysqld] heading:
[mysqld]
innodb_buffer_pool_size=64M
Restart MySQL and you're good to go.
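For reference, a quick way to check whether the swap file and the smaller buffer pool actually took effect; this is a sketch using the paths from the guide above, not commands from the original post:

# confirm the swap file is active
swapon -s
free -m

# confirm the reduced buffer pool size is being used (the server must be running)
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"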
I tried some of the answers in this thread as well, but I don't think any of them worked.
Our website is hosted in Linode and is running on CentOS 6.5.
P.S. I have just noticed that my mysqld.log has ballooned in size from 754 KB to ~882 MB. I don't know the reason, but of course it must have something to do with my attempted fixes. I recall deleting the files /var/lib/mysql/ib_logfile0 and /var/lib/mysql/ib_logfile1 as per the advice of one of the solutions I tried (I backed them up first, of course). The difference between the mysqld.log then and now is that in the latest version, these text strings have been added:
171217 05:19:16 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
171217 05:39:42 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
171217 5:39:42 [Note] /usr/libexec/mysqld (mysqld 5.5.53) starting as process 28073 ...
171217 5:39:42 [Note] Plugin 'FEDERATED' is disabled.
171217 5:39:42 InnoDB: The InnoDB memory heap is disabled
171217 5:39:42 InnoDB: Mutexes and rw_locks use GCC atomic builtins
171217 5:39:42 InnoDB: Compressed tables use zlib 1.2.3
171217 5:39:42 InnoDB: Using Linux native AIO
171217 5:39:42 InnoDB: Initializing buffer pool, size = 64.0M
171217 5:39:42 InnoDB: Completed initialization of buffer pool
171217 5:39:42 InnoDB: Log file ./ib_logfile0 did not exist: new to be created
InnoDB: Setting log file ./ib_logfile0 size to 5 MB
InnoDB: Database physically writes the file full: wait...
171217 5:39:42 InnoDB: Log file ./ib_logfile1 did not exist: new to be created
InnoDB: Setting log file ./ib_logfile1 size to 5 MB
InnoDB: Database physically writes the file full: wait...
171217 5:39:42 InnoDB: highest supported file format is Barracuda.
InnoDB: The log sequence number in ibdata files does not match
InnoDB: the log sequence number in the ib_logfiles!
171217 5:39:42 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
171217 5:39:42 InnoDB: Error: page 1 log sequence number 1118101582
InnoDB: is in the future! Current system log sequence number 580967436.
InnoDB: Your database may be corrupt or you may have copied the InnoDB
InnoDB: tablespace but not the InnoDB log files. See
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
InnoDB: for more information.
171217 5:39:42 InnoDB: Error: page 4 log sequence number 1099567460
InnoDB: is in the future! Current system log sequence number 580967436.
InnoDB: Your database may be corrupt or you may have copied the InnoDB
InnoDB: tablespace but not the InnoDB log files. See
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
InnoDB: for more information...
...with that last paragraph repeating over and over, right up to the present moment (maybe it is still being written).
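Since the log keeps pointing at the forced-recovery documentation linked above, one cautious route (my suggestion, not something the poster reports having tried) is to start the server in forced-recovery mode just long enough to take a backup:

## /etc/my.cnf -- add under [mysqld]; start with 1 and raise only if the server still refuses to start
## (values above 4 risk further damage, so dump and rebuild rather than serving the site this way)
[mysqld]
innodb_force_recovery = 1

## then, as root, start the server and dump everything
service mysqld start
mysqldump -u root -p --all-databases > /root/all-databases.sql
## afterwards remove innodb_force_recovery from my.cnf and restart MySQL normally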

EOFException thrown by a Hadoop pipes program

First of all, I am a Hadoop newbie.
I have a small Hadoop Pipes program that throws java.io.EOFException. The program takes a small text file as input and uses hadoop.pipes.java.recordreader and hadoop.pipes.java.recordwriter.
The input is very simple, for example:
1 262144 42.8084 15.9157 4.1324 0.06 0.1
However, Hadoop throws an EOFException, and I can't see the reason. Below is the stack trace:
10/12/08 23:04:04 INFO mapred.JobClient: Running job: job_201012081252_0016
10/12/08 23:04:05 INFO mapred.JobClient: map 0% reduce 0%
10/12/08 23:04:16 INFO mapred.JobClient: Task Id : attempt_201012081252_0016_m_000000_0, Status : FAILED
java.io.IOException: pipe child exception
at org.apache.hadoop.mapred.pipes.Application.abort(Application.java:151)
at org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:101)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: java.io.EOFException
at java.io.DataInputStream.readByte(DataInputStream.java:267)
at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:298)
at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:319)
at org.apache.hadoop.mapred.pipes.BinaryProtocol$UplinkReaderThread.run(BinaryProtocol.java:114)
By the way, I ran this in fully-distributed mode (a cluster with 3 worker nodes).
Any help is appreciated! Thanks.
Lessons learned: by all means, try to make sure there is no bug in your own program.
This stack trace is usually indicative of running out of available file descriptors on your worker machines. This is exceedingly common, sparsely documented, and precisely why I have two related questions on the subject.
If you have root access on all of the machines, you should consider raising the file descriptor limit for your Hadoop user by editing /etc/sysctl.conf:
(Add) fs.file-max = 4096
Or issuing:
ulimit -Sn 4096
ulimit -Hn 4096
Ad infinitum. General information for raising this limit is available here.
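As an aside (my addition, not part of the answer above): the per-user open-file limit is usually made persistent in /etc/security/limits.conf rather than with ulimit alone, and the user name hadoop below is an assumption:

# check what is currently in effect
ulimit -Sn
ulimit -Hn
cat /proc/sys/fs/file-max

# /etc/security/limits.conf -- raise the open-file limit for the Hadoop user
hadoop  soft  nofile  4096
hadoop  hard  nofile  4096
# log the user out and back in (or restart the TaskTracker/DataNode daemons) so the new limit applies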
However, from the perspective of long-term planning, this strategy is somewhat spurious. If you happen to discover more information on the problem, perhaps you can help me help you help us all? [Thank you, GLaDOS. -Ed]
(Edit: See commentary that follows.)

Resources