Stack Overflow,
We have a server running Red Hat 6 with 2 sockets * 8 cores * 2 threads. The colleagues who set it up have left, and there is no documentation about this, so we are now investigating an "Off-line CPU(s) list" entry with two specific questions:
- Q1. What caused those CPUs to go offline?
- Q2. When were those CPUs taken offline?
Could someone enumerate all the mechanisms that can put CPUs into this state?
The "Off-line CPU(s) list" is shown below:
I am new to Greenplum. I have a single server running Greenplum (1 master instance, 6 segment instances), with a large amount of data imported (about 10 TB). After running it for about a month, memory utilization is low (15 GB of 128 GB), but CPU is almost at 100% when we run some calculations on it.
It sometimes reports an OOM error from a segment.
OS version: CentOS 7.2, Server Type: VM
Here are the OS settings:
kernel.shmmax = 107374182400
kernel.shmall = 26214400
kernel.shmmin = 4096
and the Greenplum setting:
gp_vmem_protect_limit=11900
Any help is appreciated.
kernel.shmall is measured in pages and should amount to less than 50% of RAM. Your value, 26214400 pages × 4 KB = 100 GB, is well over 50% of your 128 GB.
So you have a single VM (128 GB) with the gpdb master process and 6 primary segment processes. Is that right? Do you have mirror segment processes? How many CPU cores does your VM have?
gp_vmem_protect_limit = 11900 MB ≈ 12 GB per instance. That means up to 12 GB × 7 (1 master + 6 primary segments) = 84 GB.
A single-node VM handling 10 TB of data? Your CPU is probably waiting on I/O all the time. This is not the right setup.
My system is an Arm Cortex-A7 @ 1 GHz running the realtime patchset Linux 4.4.138-rt19 from the CIP community: v4.4.138-cip25-rt19.
I ran the LTP test
prio-preempt.c
to verify priority preemption on my system. However, I am running into an issue:
the system usually runs fewer threads than the 27 created threads.
In theory, the LTP prio-preempt test creates 27 worker_threads with different priorities, N busy_threads with high priority (N depends on the number of CPUs; in my case N = 2), and a master_thread with the highest priority.
When I deploy the test to the board, threads_running is always lower than 27, even though create_fifo_thread(worker_thread,i,...) successfully created all 27 worker_thread(s).
I ran the same program on a Cortex-A15 @ 1.5 GHz, and the issue did not happen.
My current guess is that the Linux RT scheduler fails to wake the sleeping threads after the mutex lock is released.
Has anyone seen the same problem? Please share your ideas.
Basically, in a fully preemptive Linux RT system, higher-priority threads always preempt lower-priority threads to take control of the CPUs. In my case, the issue actually also happened on faster processors; I tested on a dual Cortex-A15 @ 1.5 GHz and a quad Cortex-A15 @ 1.4 GHz. However, the failure rate there was much lower.
Because the issue happens randomly, in the failing cases all CPUs concurrently run the higher-priority threads and the lower-priority threads never get scheduled.
So I pinned the high-priority thread to one specific CPU:
glibc's pthread_setaffinity_np() takes a cpu_set_t rather than a raw bitmask, and it returns an error number instead of -1:

{
    cpu_set_t cpuset;
    CPU_ZERO(&cpuset);
    CPU_SET(0, &cpuset); /* Bind to CPU 0 */
    int err = pthread_setaffinity_np(pthread_self(), sizeof(cpuset), &cpuset);
    if (err != 0) {
        printf("pthread_setaffinity_np failed: %d\n", err);
    }
}
And left the other CPUs free to run the other jobs (the low-priority threads).
My system no longer hangs and reliably runs all 27 worker_thread(s) (the low-priority threads).
When querying my cluster, I noticed these stats for one of the nodes. I am new to Elasticsearch and would like the community's help in understanding what they mean and whether I need to take any corrective measures.
Does the heap used look high, and if so, how would I fix it? Any comments on the system memory used would also be helpful; it feels like it is on the really high side as well.
These are the JVM-level stats:
JVM
Version OpenJDK 64-Bit Server VM (1.8.0_171)
Process ID 13735
Heap Used % 64%
Heap Used/Max 22 GB / 34.2 GB
GC Collections (Old/Young) 1 / 46,372
Threads (Peak/Max) 163 / 147
These are the OS-level stats:
Operating System
System Memory Used % 90%
System Memory Used 59.4 GB / 65.8 GB
Allocated Processors 16
Available Processors 16
OS Name Linux
OS Architecture amd64
Since you state that you are new to Elasticsearch, I suggest you go through the cluster API as well as the cat API; you can find the documentation under "cluster API" and "cat API" in the Elasticsearch reference.
This will help you understand things in more depth.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 8 years ago.
I generally know that the more processors you have, the more processes (watching a movie, playing a game, running Firefox with YouTube playing a Simpsons episode, all simultaneously) you can run without your computer slowing down. But I want to know how to make sense of the Linux commands cpuinfo and lscpu.
lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 42
Stepping: 7
CPU MHz: 1600.000
BogoMIPS: 6800.18
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 8192K
NUMA node0 CPU(s): 0-7
and cpuinfo:
===== Processor composition =====
Processor name : Quad-Core AMD Opteron(tm) Processor 2354
Packages(sockets) : 2
Cores : 8
Processors(CPUs) : 8
Cores per package : 4
Threads per core : 1
===== Processor identification =====
Processor Thread Id. Core Id. Package Id.
0 0 0 0
1 0 1 0
2 0 2 0
3 0 3 0
4 0 0 1
5 0 1 1
6 0 2 1
7 0 3 1
===== Placement on packages =====
Package Id. Core Id. Processors
0 0,1,2,3 0,1,2,3
1 0,1,2,3 4,5,6,7
What exactly are they telling me? To me, "dual core" means two cores per processor. I can see 8 CPU(s) listed, but what is the difference between a thread and a core? I can see 2 thread(s) per core. And what is a socket? I could not google a place where these things are explained, but there are plenty of places that tell you to use cpuinfo/lscpu.
What you call a "processor" here is technically a "package", aka socket: the physical chip. Each package contains several physical cores, and each physical core can present one or more logical CPUs (hardware threads).
So the Opteron system has 2 packages, each with 4 physical cores and 1 thread per core, which adds up to 8 logical CPUs.
A similar question on Tom's Hardware:
http://www.tomshardware.co.uk/answers/id-1850932/difference-physical-core-logical-core.html
Hyperthreading:
http://en.m.wikipedia.org/wiki/Hyper-threading
A socket is on the motherboard: you plug the processor into it and mount a fan on top to cool it.
cpuinfo on your machine shows a motherboard with 2 sockets and 2 processors, each a Quad-Core AMD Opteron(tm) Processor 2354. Together that gives you 8 cores (2 × quad-core) and also 8 hardware threads, since this chip has 1 thread per core.
You ran lscpu on a different machine, which has only one processor on its motherboard: an Intel quad-core with Hyper-Threading, hence 4 cores × 2 threads = 8 logical CPUs.
A socket is a physical plug on your motherboard. A core is a physical part of a computer, while a thread is a specific path of execution on a core. This answer explains threads really well.
lscpu - http://manpages.courier-mta.org/htmlman1/lscpu.1.html
cpuinfo - http://www.richweb.com/cpu_info
I recently bought a server with 2 × X5550 CPUs; they are quad-core (4 cores each), 8 cores in total.
When I check Task Manager, the CPU usage history shows 16 graphs.
Shouldn't it be 8, since I have 2 quad-core processors?
Or do the graphs perhaps show the threads of the CPUs?
The CPUs support Hyper-Threading, so each core presents 2 logical CPUs: 2 sockets × 4 cores × 2 threads = 16.
You can always look up the chip specs on Intel's site.