OMP settings issue: thread affinity binding happens before main() - openmp

1. Checked out oneDNN v1.2.
2. Built it using clang:
export CC=clang
export CXX=clang++
mkdir -p build && cd build && cmake ..
make -j
sudo make install
3. Modified getting_started.cpp to add a print statement as the first line inside main():
(base) ratan@dattonaxq1648:~/my_work/oneDNN/examples$ git diff
diff --git a/examples/getting_started.cpp b/examples/getting_started.cpp
index 2376013eb..3c43dab91 100644
--- a/examples/getting_started.cpp
+++ b/examples/getting_started.cpp
@@ -469,6 +469,7 @@ void getting_started_tutorial(engine::kind engine_kind) {
 /// @snippet getting_started.cpp Main
 // [Main]
 int main(int argc, char **argv) {
+    std::cout << "Start of getting_started code\n";
     int exit_code = 0;
4. Exported these environment variables for verbose output:
export KMP_AFFINITY=verbose
export OMP_DISPLAY_ENV=TRUE
export KMP_SETTINGS=TRUE
5. Built examples/getting_started.cpp:
clang++ -std=c++11 getting_started.cpp -o bin/getting_started_cpp -L/usr/local/lib -ldnnl
6. Ran the test:
(base) ratan@dattonaxq1648:~/my_work/oneDNN/examples$ ./bin/getting_started_cpp
User settings:
KMP_AFFINITY=verbose
KMP_SETTINGS=TRUE
OMP_DISPLAY_ENV=TRUE
Effective settings:
KMP_ABORT_DELAY=0
KMP_ADAPTIVE_LOCK_PROPS='1,1024'
KMP_ALIGN_ALLOC=64
KMP_ALL_THREADPRIVATE=384
KMP_ATOMIC_MODE=2
KMP_BLOCKTIME=200
KMP_CPUINFO_FILE: value is not defined
KMP_DETERMINISTIC_REDUCTION=false
KMP_DEVICE_THREAD_LIMIT=2147483647
KMP_DISP_NUM_BUFFERS=7
KMP_DUPLICATE_LIB_OK=false
KMP_ENABLE_TASK_THROTTLING=true
KMP_FORCE_REDUCTION: value is not defined
KMP_FOREIGN_THREADS_THREADPRIVATE=true
KMP_FORKJOIN_BARRIER='2,2'
KMP_FORKJOIN_BARRIER_PATTERN='hyper,hyper'
KMP_FORKJOIN_FRAMES=true
KMP_FORKJOIN_FRAMES_MODE=3
KMP_GTID_MODE=3
KMP_HANDLE_SIGNALS=false
KMP_HOT_TEAMS_MAX_LEVEL=1
KMP_HOT_TEAMS_MODE=0
KMP_INIT_AT_FORK=true
KMP_ITT_PREPARE_DELAY=0
KMP_LIBRARY=throughput
KMP_LOCK_KIND=queuing
KMP_MALLOC_POOL_INCR=1M
KMP_NUM_LOCKS_IN_BLOCK=1
KMP_PLAIN_BARRIER='2,2'
KMP_PLAIN_BARRIER_PATTERN='hyper,hyper'
KMP_REDUCTION_BARRIER='1,1'
KMP_REDUCTION_BARRIER_PATTERN='hyper,hyper'
KMP_SCHEDULE='static,balanced;guided,iterative'
KMP_SETTINGS=true
KMP_SPIN_BACKOFF_PARAMS='4096,100'
KMP_STACKOFFSET=64
KMP_STACKPAD=0
KMP_STACKSIZE=8M
KMP_STORAGE_MAP=false
KMP_TASKING=2
KMP_TASKLOOP_MIN_TASKS=0
KMP_TASK_STEALING_CONSTRAINT=1
KMP_TEAMS_THREAD_LIMIT=96
KMP_TOPOLOGY_METHOD=all
KMP_USE_YIELD=1
KMP_VERSION=false
KMP_WARNINGS=true
OMP_AFFINITY_FORMAT='OMP: pid %P tid %i thread %n bound to OS proc set {%A}'
OMP_ALLOCATOR=omp_default_mem_alloc
OMP_CANCELLATION=false
OMP_DEFAULT_DEVICE=0
OMP_DISPLAY_AFFINITY=false
OMP_DISPLAY_ENV=true
OMP_DYNAMIC=false
OMP_MAX_ACTIVE_LEVELS=1
OMP_MAX_TASK_PRIORITY=0
OMP_NESTED: deprecated; max-active-levels-var=1
OMP_NUM_THREADS: value is not defined
OMP_PLACES: value is not defined
OMP_PROC_BIND='false'
OMP_SCHEDULE='static'
OMP_STACKSIZE=8M
OMP_TARGET_OFFLOAD=DEFAULT
OMP_THREAD_LIMIT=2147483647
OMP_TOOL=enabled
OMP_TOOL_LIBRARIES: value is not defined
OMP_WAIT_POLICY=PASSIVE
KMP_AFFINITY='verbose,warnings,respect,granularity=core,none'
OPENMP DISPLAY ENVIRONMENT BEGIN
_OPENMP='201611'
[host] OMP_AFFINITY_FORMAT='OMP: pid %P tid %i thread %n bound to OS proc set {%A}'
[host] OMP_ALLOCATOR='omp_default_mem_alloc'
[host] OMP_CANCELLATION='FALSE'
[host] OMP_DEFAULT_DEVICE='0'
[host] OMP_DISPLAY_AFFINITY='FALSE'
[host] OMP_DISPLAY_ENV='TRUE'
[host] OMP_DYNAMIC='FALSE'
[host] OMP_MAX_ACTIVE_LEVELS='1'
[host] OMP_MAX_TASK_PRIORITY='0'
[host] OMP_NESTED: deprecated; max-active-levels-var=1
[host] OMP_NUM_THREADS: value is not defined
[host] OMP_PLACES: value is not defined
[host] OMP_PROC_BIND='false'
[host] OMP_SCHEDULE='static'
[host] OMP_STACKSIZE='8M'
[host] OMP_TARGET_OFFLOAD=DEFAULT
[host] OMP_THREAD_LIMIT='2147483647'
[host] OMP_TOOL='enabled'
[host] OMP_TOOL_LIBRARIES: value is not defined
[host] OMP_WAIT_POLICY='PASSIVE'
OPENMP DISPLAY ENVIRONMENT END
OMP: Info #211: KMP_AFFINITY: decoding x2APIC ids.
OMP: Info #209: KMP_AFFINITY: Affinity capable, using global cpuid leaf 11 info
OMP: Info #154: KMP_AFFINITY: Initial OS proc set respected: 0-95
OMP: Info #156: KMP_AFFINITY: 96 available OS procs
OMP: Info #157: KMP_AFFINITY: Uniform topology
OMP: Info #179: KMP_AFFINITY: 2 packages x 24 cores/pkg x 2 threads/core (48 total cores)
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6193 thread 0 bound to OS proc set 0-95
Start of getting_started code
dnnl_verbose,info,DNNL v1.2.0 (commit 75d0b1a7f3586c212e37acebbb8acd221cee7216)
dnnl_verbose,info,cpu,runtime:OpenMP
dnnl_verbose,info,cpu,isa:Intel AVX2
dnnl_verbose,info,gpu,runtime:none
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6194 thread 1 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6195 thread 2 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6196 thread 3 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6197 thread 4 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6198 thread 5 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6199 thread 6 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6200 thread 7 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6201 thread 8 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6202 thread 9 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6203 thread 10 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6205 thread 12 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6204 thread 11 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6206 thread 13 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6207 thread 14 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6208 thread 15 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6209 thread 16 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6210 thread 17 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6211 thread 18 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6212 thread 19 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6213 thread 20 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6214 thread 21 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6215 thread 22 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6217 thread 24 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6216 thread 23 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6218 thread 25 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6219 thread 26 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6220 thread 27 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6221 thread 28 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6222 thread 29 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6223 thread 30 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6224 thread 31 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6225 thread 32 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6226 thread 33 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6227 thread 34 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6228 thread 35 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6229 thread 36 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6230 thread 37 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6231 thread 38 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6233 thread 40 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6232 thread 39 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6235 thread 42 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6234 thread 41 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6236 thread 43 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6237 thread 44 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6238 thread 45 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6239 thread 46 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6240 thread 47 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6241 thread 48 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6242 thread 49 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6243 thread 50 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6244 thread 51 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6246 thread 53 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6245 thread 52 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6248 thread 55 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6247 thread 54 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6249 thread 56 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6250 thread 57 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6251 thread 58 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6252 thread 59 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6254 thread 61 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6253 thread 60 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6255 thread 62 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6256 thread 63 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6257 thread 64 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6258 thread 65 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6259 thread 66 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6260 thread 67 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6261 thread 68 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6262 thread 69 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6263 thread 70 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6264 thread 71 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6265 thread 72 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6266 thread 73 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6267 thread 74 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6268 thread 75 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6269 thread 76 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6270 thread 77 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6271 thread 78 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6272 thread 79 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6273 thread 80 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6274 thread 81 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6275 thread 82 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6276 thread 83 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6277 thread 84 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6278 thread 85 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6279 thread 86 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6280 thread 87 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6281 thread 88 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6282 thread 89 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6283 thread 90 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6285 thread 92 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6284 thread 91 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6286 thread 93 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6288 thread 95 bound to OS proc set 0-95
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6287 thread 94 bound to OS proc set 0-95
dnnl_verbose,exec,cpu,eltwise,jit:avx2,forward_inference,data_f32::blocked:acdb:f0 diff_undef::undef::f0,,alg:eltwise_relu alpha:0 beta:0,1x3x13x13,32.114
Example passed on CPU.
7. Note that thread 0 is bound before the first line of main() is executed:
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6193 thread 0 bound to OS proc set 0-95
Start of getting_started code
dnnl_verbose,info,DNNL v1.2.0 (commit 75d0b1a7f3586c212e37acebbb8acd221cee7216)
dnnl_verbose,info,cpu,runtime:OpenMP
dnnl_verbose,info,cpu,isa:Intel AVX2
dnnl_verbose,info,gpu,runtime:none
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6194 thread 1 bound to OS proc set 0-95
8. I asked the same question on the mkl-dnn forum and was advised to ask here. I also tried other versions (v1.3, v1.2.1, v1.2.2), and they show the same behaviour.
9. Question:
Which build setting or runtime environment variable can I use so that thread binding happens only after entering main()?
Expected behavior:
Start of getting_started code
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6193 thread 0 bound to OS proc set 0-95
dnnl_verbose,info,DNNL v1.2.0 (commit 75d0b1a7f3586c212e37acebbb8acd221cee7216)
dnnl_verbose,info,cpu,runtime:OpenMP
dnnl_verbose,info,cpu,isa:Intel AVX2
dnnl_verbose,info,gpu,runtime:none
OMP: Info #249: KMP_AFFINITY: pid 6193 tid 6194 thread 1 bound to OS proc set 0-95

Related

hbase import module don't succeed

I have to move some hbase tables from one hadoop cluster to another. I have extracted the tables using
bin/hbase org.apache.hadoop.hbase.mapreduce.Export \ <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]
and I've put the return files into HDFS on my new cluster.
But when I try bin/hbase org.apache.hadoop.hbase.mapreduce.Import , I have the strange following logs:
hadoop#edgenode:~$ hbase/bin/hbase org.apache.hadoop.hbase.mapreduce.Import ADCP /hbase/backup_hbase/ADCP/2022-07-04_1546/ADCP/
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hbase/lib/client-facing-thirdparty/slf4j-reload4j-1.7.33.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory]
2022-10-03 11:19:09,689 INFO [main] mapreduce.Import: writing directly to table from Mapper.
2022-10-03 11:19:09,847 INFO [main] client.RMProxy: Connecting to ResourceManager at /172.16.42.42:8032
2022-10-03 11:19:09,983 INFO [main] Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
2022-10-03 11:19:10,043 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT
2022-10-03 11:19:10,043 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:host.name=edgenode
2022-10-03 11:19:10,043 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:java.version=1.8.0_342
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:java.vendor=Private Build
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: hadoop-yarn-client-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-services-core-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-common-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-server-router-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-registry-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.3.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-3.3.3.jar
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:java.library.path=/home/hadoop/hadoop/lib/native
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:os.name=Linux
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:os.version=5.15.0-1018-kvm
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:user.name=hadoop
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:os.memory.free=174MB
2022-10-03 11:19:10,044 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:os.memory.max=3860MB
2022-10-03 11:19:10,045 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Client environment:os.memory.total=237MB
2022-10-03 11:19:10,048 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Initiating client connection, connectString=namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$15/257950720#1124fc36
2022-10-03 11:19:10,054 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] common.X509Util: Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
2022-10-03 11:19:10,061 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ClientCnxnSocket: jute.maxbuffer value is 4194304 Bytes
2022-10-03 11:19:10,069 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ClientCnxn: zookeeper.request.timeout value is 0. feature enabled=
2022-10-03 11:19:10,077 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7-SendThread(namenode:2181)] zookeeper.ClientCnxn: Opening socket connection to server namenode/172.16.42.42:2181. Will not attempt to authenticate using SASL (unknown error)
2022-10-03 11:19:10,084 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7-SendThread(namenode:2181)] zookeeper.ClientCnxn: Socket connection established, initiating session, client: /172.16.42.187:48598, server: namenode/172.16.42.42:2181
2022-10-03 11:19:10,120 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7-SendThread(namenode:2181)] zookeeper.ClientCnxn: Session establishment complete on server namenode/172.16.42.42:2181, sessionid = 0x1b000002cb790005, negotiated timeout = 40000
2022-10-03 11:19:11,001 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7] zookeeper.ZooKeeper: Session: 0x1b000002cb790005 closed
2022-10-03 11:19:11,001 INFO [ReadOnlyZKClient-namenode:2181,datanode1:2181,datanode2:2181,datanode3:2181,datanode4:2181,datanode5:2181,datanode6:2181,datanode7:2181,datanode8:2181,datanode9:2181,datanode10:2181,datanode11:2181,datanode12:2181,datanode13:2181,datanode14:2181,datanode15:2181,datanode16:2181,datanode17:2181,datanode18:2181,datanode19:2181,datanode20:2181,datanode21:2181,datanode22:2181,datanode23:2181,datanode24:2181,datanode25:2181,datanode26:2181,edgenode:2181#0x05b970f7-EventThread] zookeeper.ClientCnxn: EventThread shut down for session: 0x1b000002cb790005
2022-10-03 11:19:15,366 INFO [main] input.FileInputFormat: Total input files to process : 32
2022-10-03 11:19:15,660 INFO [main] mapreduce.JobSubmitter: number of splits:32
2022-10-03 11:19:15,902 INFO [main] mapreduce.JobSubmitter: Submitting tokens for job: job_1664271607293_0002
2022-10-03 11:19:16,225 INFO [main] conf.Configuration: resource-types.xml not found
2022-10-03 11:19:16,225 INFO [main] resource.ResourceUtils: Unable to find 'resource-types.xml'.
2022-10-03 11:19:16,231 INFO [main] resource.ResourceUtils: Adding resource type - name = memory-mb, units = Mi, type = COUNTABLE
2022-10-03 11:19:16,231 INFO [main] resource.ResourceUtils: Adding resource type - name = vcores, units = , type = COUNTABLE
2022-10-03 11:19:16,293 INFO [main] impl.YarnClientImpl: Submitted application application_1664271607293_0002
2022-10-03 11:19:16,328 INFO [main] mapreduce.Job: The url to track the job: http://namenode:8088/proxy/application_1664271607293_0002/
2022-10-03 11:19:16,329 INFO [main] mapreduce.Job: Running job: job_1664271607293_0002
2022-10-03 11:19:31,513 INFO [main] mapreduce.Job: Job job_1664271607293_0002 running in uber mode : false
2022-10-03 11:19:31,514 INFO [main] mapreduce.Job: map 0% reduce 0%
2022-10-03 11:19:31,534 INFO [main] mapreduce.Job: [2022-10-03 11:19:31.345]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
[2022-10-03 11:19:31.346]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
For more detailed output, check the application tracking page: http://namenode:8088/cluster/app/application_1664271607293_0002 Then click on links to logs of each attempt.
. Failing the application.
2022-10-03 11:19:31,552 INFO [main] mapreduce.Job: Counters: 0
I don't understand what the problem could be. I went to http://namenode:8088/cluster/app/application_1664271607293_0002 but I didn't find anything interesting there. I've tried the command with different tables and get the same result. The two clusters are not on the same version, but I read that this shouldn't be a problem. Every service works fine on my clusters, and I can use HBase commands in the hbase shell without any problem. MapReduce programs also run fine on my new cluster. I've also tested the CopyTable and snapshot methods for the data migration, and they didn't work either.
Any idea what the problem could be? Thanks! :)
Update:
I found this in a datanode syslog in the Hadoop web interface; maybe it's useful:
2022-10-04 14:12:39,341 INFO [main] org.apache.hadoop.security.SecurityUtil: Updating Configuration
2022-10-04 14:12:39,354 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2022-10-04 14:12:39,493 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (appAttemptId { application_id { id: 7 cluster_timestamp: 1664271607293 } attemptId: 2 } keyId: -896624238)
2022-10-04 14:12:39,536 INFO [main] org.apache.hadoop.conf.Configuration: resource-types.xml not found
2022-10-04 14:12:39,536 INFO [main] org.apache.hadoop.yarn.util.resource.ResourceUtils: Unable to find 'resource-types.xml'.
2022-10-04 14:12:39,636 INFO [main] org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:73)
at org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
at org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils.newJobId(MRBuilderUtils.java:39)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:298)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1745)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1742)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1673)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:70)
... 10 more
Caused by: java.lang.VerifyError: Bad type on operand stack
Exception Details:
Location:
org/apache/hadoop/mapreduce/v2/proto/MRProtos$JobIdProto$Builder.setAppId(Lorg/apache/hadoop/yarn/proto/YarnProtos$ApplicationIdProto;)Lorg/apache/hadoop/mapreduce/v2/proto/MRProtos$JobIdProto$Builder; #36: invokevirtual
Reason:
Type 'org/apache/hadoop/yarn/proto/YarnProtos$ApplicationIdProto' (current frame, stack[1]) is not assignable to 'com/google/protobuf/GeneratedMessage'
Current Frame:
bci: #36
flags: { }
locals: { 'org/apache/hadoop/mapreduce/v2/proto/MRProtos$JobIdProto$Builder', 'org/apache/hadoop/yarn/proto/YarnProtos$ApplicationIdProto' }
stack: { 'com/google/protobuf/SingleFieldBuilder', 'org/apache/hadoop/yarn/proto/YarnProtos$ApplicationIdProto' }
Bytecode:
0x0000000: 2ab4 0011 c700 1b2b c700 0bbb 002f 59b7
0x0000010: 0030 bf2a 2bb5 000a 2ab6 0031 a700 0c2a
0x0000020: b400 112b b600 3257 2a59 b400 1304 80b5
0x0000030: 0013 2ab0
Stackmap Table:
same_frame(#19)
same_frame(#31)
same_frame(#40)
at org.apache.hadoop.mapreduce.v2.proto.MRProtos$JobIdProto.newBuilder(MRProtos.java:1017)
at org.apache.hadoop.mapreduce.v2.api.records.impl.pb.JobIdPBImpl.<init>(JobIdPBImpl.java:37)
... 15 more
2022-10-04 14:12:39,641 ERROR [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:73)
at org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
at org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils.newJobId(MRBuilderUtils.java:39)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:298)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1745)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1742)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1673)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:70)
... 10 more
Caused by: java.lang.VerifyError: Bad type on operand stack
Exception Details:
Location:
org/apache/hadoop/mapreduce/v2/proto/MRProtos$JobIdProto$Builder.setAppId(Lorg/apache/hadoop/yarn/proto/YarnProtos$ApplicationIdProto;)Lorg/apache/hadoop/mapreduce/v2/proto/MRProtos$JobIdProto$Builder; #36: invokevirtual
Reason:
Type 'org/apache/hadoop/yarn/proto/YarnProtos$ApplicationIdProto' (current frame, stack[1]) is not assignable to 'com/google/protobuf/GeneratedMessage'
Current Frame:
bci: #36
flags: { }
locals: { 'org/apache/hadoop/mapreduce/v2/proto/MRProtos$JobIdProto$Builder', 'org/apache/hadoop/yarn/proto/YarnProtos$ApplicationIdProto' }
stack: { 'com/google/protobuf/SingleFieldBuilder', 'org/apache/hadoop/yarn/proto/YarnProtos$ApplicationIdProto' }
Bytecode:
0x0000000: 2ab4 0011 c700 1b2b c700 0bbb 002f 59b7
0x0000010: 0030 bf2a 2bb5 000a 2ab6 0031 a700 0c2a
0x0000020: b400 112b b600 3257 2a59 b400 1304 80b5
0x0000030: 0013 2ab0
Stackmap Table:
same_frame(#19)
same_frame(#31)
same_frame(#40)
at org.apache.hadoop.mapreduce.v2.proto.MRProtos$JobIdProto.newBuilder(MRProtos.java:1017)
at org.apache.hadoop.mapreduce.v2.api.records.impl.pb.JobIdPBImpl.<init>(JobIdPBImpl.java:37)
... 15 more
2022-10-04 14:12:39,643 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting with status 1: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.reflect.InvocationTargetException
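The `VerifyError` in the log above (`YarnProtos$ApplicationIdProto` is not assignable to `com.google.protobuf.GeneratedMessage`) is the classic symptom of mixed protobuf versions on the classpath, which fits a migration between clusters on different Hadoop/HBase versions. One way to see which protobuf jars end up on the classpath is to split it and filter; the classpath string below is a made-up example for illustration, and on a real node you would feed in the output of `hadoop classpath` or `hbase classpath` instead:

```shell
# Split a Java classpath into one entry per line and keep only protobuf jars.
# CP is a hypothetical example; on a real node use: CP=$(hadoop classpath)
CP="/opt/hadoop/lib/protobuf-java-2.5.0.jar:/opt/hbase/lib/guava-11.0.2.jar:/opt/hbase/lib/protobuf-java-3.7.1.jar"
echo "$CP" | tr ':' '\n' | grep -i protobuf
```

If the source and destination clusters (or the client and the cluster) report different protobuf-java versions, aligning them, or running the job with the destination cluster's own client libraries, is the usual direction to investigate.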

PCI Express AER Driver issues on Linux

I'm debugging a PCIe hardware issue on Linux and want to enable the PCIe AER driver to catch any AER errors reported by my hardware device. I'm following this wiki:
https://www.kernel.org/doc/Documentation/PCI/pcieaer-howto.txt
My syslog shows AER is enabled:
[ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-4.15.0-108-generic root=UUID=a9f6d189-c13d-485c-a504-ba0aa0127e2e ro quiet splash aerdriver.forceload=y crashkernel=512M-:192M vt.handoff=1
[ 0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-4.15.0-108-generic root=UUID=a9f6d189-c13d-485c-a504-ba0aa0127e2e ro quiet splash aerdriver.forceload=y crashkernel=512M-:192M vt.handoff=1
[ 0.640130] acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
[ 0.661638] acpi PNP0A08:01: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
[ 0.678143] acpi PNP0A08:02: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
[ 0.694863] acpi PNP0A08:03: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
[ 4.747041] acpi PNP0A08:04: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
[ 4.751760] acpi PNP0A08:05: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
[ 4.758480] acpi PNP0A08:06: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
[ 4.763990] acpi PNP0A08:07: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
[ 5.463432] pcieport 0000:00:01.1: AER enabled with IRQ 34
[ 5.463450] pcieport 0000:00:07.1: AER enabled with IRQ 35
[ 5.463472] pcieport 0000:00:08.1: AER enabled with IRQ 37
[ 5.463517] pcieport 0000:10:01.1: AER enabled with IRQ 38
[ 5.463547] pcieport 0000:10:07.1: AER enabled with IRQ 39
[ 5.463575] pcieport 0000:10:08.1: AER enabled with IRQ 41
[ 5.463604] pcieport 0000:20:03.1: AER enabled with IRQ 42
[ 5.463635] pcieport 0000:20:07.1: AER enabled with IRQ 44
[ 5.463663] pcieport 0000:20:08.1: AER enabled with IRQ 46
[ 5.463782] pcieport 0000:30:03.1: AER enabled with IRQ 47
[ 5.463811] pcieport 0000:30:07.1: AER enabled with IRQ 49
[ 5.463843] pcieport 0000:30:08.1: AER enabled with IRQ 51
[ 5.463872] pcieport 0000:40:07.1: AER enabled with IRQ 62
[ 5.463895] pcieport 0000:40:08.1: AER enabled with IRQ 64
[ 5.463930] pcieport 0000:50:07.1: AER enabled with IRQ 66
[ 5.463965] pcieport 0000:50:08.1: AER enabled with IRQ 68
[ 5.464000] pcieport 0000:60:07.1: AER enabled with IRQ 70
[ 5.464044] pcieport 0000:60:08.1: AER enabled with IRQ 72
[ 5.464071] pcieport 0000:70:07.1: AER enabled with IRQ 74
[ 5.464099] pcieport 0000:70:08.1: AER enabled with IRQ 76
The hardware device is a Samsung SSD connected to the Root Complex through a PCIe switch.
PCIe topology: Root Complex - <PLDA PCIe switch + FPGA> - Samsung EVO SSD
Unfortunately, I'm seeing a lot of NVMe-related errors, but no AER errors are reported.
Jun 26 12:13:54 ndra-Diesel kernel: [ 1080.672606] nvme1n1: p1 p2
Jun 26 12:14:27 ndra-Diesel kernel: [ 1113.542592] nvme nvme1: I/O 832 QID 5 timeout, aborting
Jun 26 12:14:27 ndra-Diesel kernel: [ 1113.542617] nvme nvme1: I/O 833 QID 5 timeout, aborting
Jun 26 12:14:27 ndra-Diesel kernel: [ 1113.542627] nvme nvme1: I/O 834 QID 5 timeout, aborting
Jun 26 12:14:27 ndra-Diesel kernel: [ 1113.542636] nvme nvme1: I/O 835 QID 5 timeout, aborting
Jun 26 12:14:27 ndra-Diesel kernel: [ 1113.542645] nvme nvme1: I/O 872 QID 5 timeout, aborting
Jun 26 12:14:27 ndra-Diesel kernel: [ 1113.542654] nvme nvme1: I/O 873 QID 5 timeout, aborting
Jun 26 12:14:27 ndra-Diesel kernel: [ 1113.542662] nvme nvme1: I/O 874 QID 5 timeout, aborting
Jun 26 12:14:27 ndra-Diesel kernel: [ 1113.542670] nvme nvme1: I/O 875 QID 5 timeout, aborting
Jun 26 12:14:58 ndra-Diesel kernel: [ 1144.262425] nvme nvme1: I/O 832 QID 5 timeout, reset controller
Jun 26 12:15:29 ndra-Diesel kernel: [ 1174.982243] nvme nvme1: I/O 16 QID 0 timeout, reset controller
Jun 26 12:15:40 ndra-Diesel gnome-software[6474]: no app for changed ubuntu-dock#ubuntu.com
I have custom-compiled my kernel with the following options:
cat /boot/config-4.15.0-108-generic | grep -i PCIE
CONFIG_ACPI_APEI_PCIEAER=y
CONFIG_PCIEAER=y
CONFIG_PCIEAER_INJECT=y
CONFIG_PCIEPORTBUS=y
The lspci output for my Samsung NVMe shows that it has the AER capability:
37:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981 (prog-if 02 [NVM Express])
Subsystem: Samsung Electronics Co Ltd Device a801
Flags: bus master, fast devsel, latency 0, IRQ 54, NUMA node 3
Memory at b6500000 (64-bit, non-prefetchable) [size=16K]
Capabilities: [40] Power Management version 3
Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [70] Express Endpoint, MSI 00
Capabilities: [b0] MSI-X: Enable+ Count=33 Masked-
Capabilities: [100] Advanced Error Reporting <----------------------------- SEE THIS
Capabilities: [148] Device Serial Number 00-00-00-00-00-00-00-00
Capabilities: [158] Power Budgeting <?>
Capabilities: [168] #19
Capabilities: [188] Latency Tolerance Reporting
Capabilities: [190] L1 PM Substates
Kernel driver in use: nvme
Kernel modules: nvme
But the lspci output for the PLDA switch doesn't show an AER capability:
3:00.0 PCI bridge: PLDA XpressSwitch (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 53, NUMA node 3
Bus: primary=33, secondary=34, subordinate=3b, sec-latency=0
Memory behind bridge: b6400000-b65fffff
Capabilities: [80] Express Upstream Port, MSI 00
Capabilities: [e0] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [f8] Power Management version 3
Capabilities: [100] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
Capabilities: [300] #19
Kernel driver in use: pcieport
Kernel modules: shpchp
I have two questions:
1. The Samsung NVMe is behind the PLDA switch in the topology, and the switch doesn't expose the AER capability. Could this be the reason I'm not seeing AER errors from the NVMe?
2. Do I need to do anything else to enable AER on Linux?
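One quick sanity check for whether AER is armed on a given device is to read the per-device AER counters from sysfs. Note these files are an assumption here: they were added around kernel 4.19, so the 4.15 kernel shown above may not expose them at all, in which case the loop simply prints nothing:

```shell
# Print per-device AER correctable-error counters where the kernel exposes them.
for f in /sys/bus/pci/devices/*/aer_dev_correctable; do
  [ -e "$f" ] || continue          # unmatched glob or missing file: skip quietly
  echo "== $f"
  cat "$f" 2>/dev/null || true     # some entries may not be readable; ignore
done
```

If the files exist but all counters stay at zero while the NVMe timeouts occur, that points at the errors never reaching the Root Port as AER messages, rather than at a logging problem.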
Try adding this to the end of your kernel command line in your boot config:
pcie_ports=native
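To make `pcie_ports=native` persistent on an Ubuntu-style system (the paths below assume the stock GRUB layout), add it to the kernel command line and verify it after reboot. The snippet demonstrates the verification step on a simulated command line, since the real check depends on the kernel you actually booted:

```shell
# After adding pcie_ports=native to GRUB_CMDLINE_LINUX_DEFAULT in
# /etc/default/grub and running: sudo update-grub && sudo reboot
# the parameter should show up in /proc/cmdline. Simulated here:
CMDLINE="BOOT_IMAGE=/boot/vmlinuz-4.15.0-108-generic ro quiet splash pcie_ports=native"
echo "$CMDLINE" | grep -o 'pcie_ports=native'
```

With the real file you would run `grep -o 'pcie_ports=native' /proc/cmdline`; no output means the parameter didn't make it into the boot.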

hadoop streaming failed with error code 1 in rstudio-server

I use a single node.
I installed rmr2 and rhdfs in sudo R.
I wrote some code in rstudio-server, but it raised an error.
I don't know what's wrong.
Thanks for reading. If someone can help me, I would appreciate it.
> library("rmr2", lib.loc="/usr/lib64/R/library")
Please review your hadoop settings. See help(hadoop.settings)
> library("rhdfs", lib.loc="/usr/lib64/R/library")
Loading required package: rJava
HADOOP_CMD=/home/knu/hadoop/hadoop-2.7.3/bin/hadoop
Be sure to run hdfs.init()
> hdfs.init()
17/04/12 21:39:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> detach("package:rmr2", unload=TRUE)
> library("rmr2", lib.loc="/usr/lib64/R/library")
Please review your hadoop settings. See help(hadoop.settings)
> small.ints <- to.dfs(1:10)
17/04/12 21:41:07 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
17/04/12 21:41:07 INFO compress.CodecPool: Got brand-new compressor [.deflate]
> from.dfs(small.ints)
17/04/12 21:41:25 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
17/04/12 21:41:25 INFO compress.CodecPool: Got brand-new decompressor [.deflate]
$key
NULL
$val
[1] 1 2 3 4 5 6 7 8 9 10
> result <- mapreduce(input = small.ints,
map = function(k,v) cbind(v,v^2) )
Execution Log:
packageJobJar: [/tmp/hadoop-unjar5646463829062726981/] [] /tmp/streamjob3604004992263150530.jar tmpDir=null
17/04/12 21:41:38 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
17/04/12 21:41:38 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
17/04/12 21:41:39 INFO mapred.FileInputFormat: Total input paths to process : 1
17/04/12 21:41:39 INFO mapreduce.JobSubmitter: number of splits:2
17/04/12 21:41:39 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
17/04/12 21:41:39 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1491995822063_0001
17/04/12 21:41:40 INFO impl.YarnClientImpl: Submitted application application_1491995822063_0001
17/04/12 21:41:40 INFO mapreduce.Job: The url to track the job: http://0.0.0.0:8089/proxy/application_1491995822063_0001/
17/04/12 21:41:40 INFO mapreduce.Job: Running job: job_1491995822063_0001
17/04/12 21:41:54 INFO mapreduce.Job: Job job_1491995822063_0001 running in uber mode : false
17/04/12 21:41:54 INFO mapreduce.Job: map 0% reduce 0%
17/04/12 21:42:01 INFO mapreduce.Job: Task Id : attempt_1491995822063_0001_m_000001_0, Status : FAILED
Container [pid=12055,containerID=container_1491995822063_0001_01_000003] is running beyond virtual memory limits. Current usage: 109.6 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1491995822063_0001_01_000003 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 12073 12055 12055 12055 (java) 193 8 2153918464 27756 /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000001_0 3
|- 12055 12053 12055 12055 (bash) 0 0 115847168 303 /bin/bash -c /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000001_0 3 1>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000003/stdout 2>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000003/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
17/04/12 21:42:01 INFO mapreduce.Job: Task Id : attempt_1491995822063_0001_m_000000_0, Status : FAILED
Container [pid=12054,containerID=container_1491995822063_0001_01_000002] is running beyond virtual memory limits. Current usage: 107.3 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1491995822063_0001_01_000002 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 12072 12054 12054 12054 (java) 204 9 2154971136 27169 /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000002/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000002 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000000_0 2
|- 12054 12052 12054 12054 (bash) 0 0 115847168 302 /bin/bash -c /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000002/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000002 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000000_0 2 1>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000002/stdout 2>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000002/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
17/04/12 21:42:07 INFO mapreduce.Job: Task Id : attempt_1491995822063_0001_m_000001_1, Status : FAILED
Container [pid=12143,containerID=container_1491995822063_0001_01_000004] is running beyond virtual memory limits. Current usage: 105.0 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1491995822063_0001_01_000004 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 12151 12143 12143 12143 (java) 177 5 2153918464 26571 /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000001_1 4
|- 12143 12142 12143 12143 (bash) 0 0 115847168 303 /bin/bash -c /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000001_1 4 1>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000004/stdout 2>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000004/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
17/04/12 21:42:10 INFO mapreduce.Job: Task Id : attempt_1491995822063_0001_m_000000_1, Status : FAILED
Container [pid=12164,containerID=container_1491995822063_0001_01_000005] is running beyond virtual memory limits. Current usage: 130.7 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1491995822063_0001_01_000005 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 12172 12164 12164 12164 (java) 293 9 2175307776 32535 /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000000_1 5
|- 12164 12163 12164 12164 (bash) 0 0 115847168 302 /bin/bash -c /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000000_1 5 1>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000005/stdout 2>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000005/stderr
|- 12224 12172 12164 12164 (R) 0 0 116293632 469 /bin/sh /usr/lib64/R/bin/R --slave --no-restore --vanilla --file=./rmr-streaming-map2d5212712621
|- 12228 12224 12164 12164 (R) 0 0 116293632 158 /bin/sh /usr/lib64/R/bin/R --slave --no-restore --vanilla --file=./rmr-streaming-map2d5212712621
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
17/04/12 21:42:16 INFO mapreduce.Job: Task Id : attempt_1491995822063_0001_m_000001_2, Status : FAILED
Container [pid=12254,containerID=container_1491995822063_0001_01_000007] is running beyond virtual memory limits. Current usage: 160.6 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1491995822063_0001_01_000007 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 12319 12262 12254 12254 (R) 10 2 268447744 8256 /usr/lib64/R/bin/exec/R --slave --no-restore --vanilla --file=./rmr-streaming-map2d5212712621
|- 12262 12254 12254 12254 (java) 295 9 2175307776 32549 /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000007/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000007 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000001_2 7
|- 12254 12253 12254 12254 (bash) 0 0 115847168 302 /bin/bash -c /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000007/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000007 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000001_2 7 1>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000007/stdout 2>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000007/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
17/04/12 21:42:16 INFO mapreduce.Job: Task Id : attempt_1491995822063_0001_m_000000_2, Status : FAILED
Container [pid=12280,containerID=container_1491995822063_0001_01_000008] is running beyond virtual memory limits. Current usage: 92.1 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1491995822063_0001_01_000008 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 12280 12279 12280 12280 (bash) 0 0 115847168 303 /bin/bash -c /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000008/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000008 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000000_2 8 1>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000008/stdout 2>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000008/stderr
|- 12289 12280 12280 12280 (java) 137 5 2142965760 23280 /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000008/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000008 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000000_2 8
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
17/04/12 21:42:23 INFO mapreduce.Job: map 100% reduce 0%
17/04/12 21:42:23 INFO mapreduce.Job: Job job_1491995822063_0001 failed with state FAILED due to: Task failed task_1491995822063_0001_m_000001
Job failed as tasks failed. failedMaps:1 failedReduces:0
17/04/12 21:42:23 INFO mapreduce.Job: Counters: 13
Job Counters
Failed map tasks=7
Killed map tasks=1
Launched map tasks=8
Other local map tasks=6
Data-local map tasks=2
Total time spent by all maps in occupied slots (ms)=39157
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=39157
Total vcore-milliseconds taken by all map tasks=39157
Total megabyte-milliseconds taken by all map tasks=40096768
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
17/04/12 21:42:23 ERROR streaming.StreamJob: Job not successful!
Streaming Command Failed!
Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce, :
hadoop streaming failed with error code 1
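The container log above shows the cause: 2.1 GB of 2.1 GB virtual memory used against a 1 GB physical allocation. YARN kills any container whose virtual memory exceeds the physical allocation times `yarn.nodemanager.vmem-pmem-ratio` (default 2.1), and JVMs routinely reserve far more virtual memory than they touch. A common remedy — a sketch, not the only fix — is to relax or disable the virtual-memory check in `yarn-site.xml` on each NodeManager and restart YARN:

```xml
<!-- yarn-site.xml: either disable the virtual-memory check entirely... -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
<!-- ...or keep the check but allow more virtual memory per GB of physical memory
     (the default ratio is 2.1, which this job exceeded) -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>
```

Raising `mapreduce.map.memory.mb` above 1024 would also widen the limit, at the cost of larger container requests.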

Unable to start the MGT Development Environment

I'm trying to set up the MGT Development Environment as per the instructions on the site. I'm running Ubuntu 16.04 and native Docker.
I did a fresh pull before trying any of this. After running the container, the browser at 127.0.0.1:3333 just shows a generic HTTP 500 error. Running docker logs on the container shows the following entries:
docker logs 7b1f04c29bf2
/usr/lib/python2.7/dist-packages/supervisor/options.py:296: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2017-03-28 14:03:53,908 CRIT Supervisor running as root (no user in config file)
2017-03-28 14:03:53,908 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2017-03-28 14:03:53,916 INFO RPC interface 'supervisor' initialized
2017-03-28 14:03:53,917 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2017-03-28 14:03:53,917 INFO supervisord started with pid 1
2017-03-28 14:03:54,919 INFO spawned: 'sshd' with pid 9
2017-03-28 14:03:54,920 INFO spawned: 'postfix' with pid 10
2017-03-28 14:03:54,922 INFO spawned: 'php-fpm' with pid 11
2017-03-28 14:03:54,928 INFO spawned: 'redis' with pid 13
2017-03-28 14:03:54,930 INFO spawned: 'varnish' with pid 16
2017-03-28 14:03:54,932 INFO spawned: 'cron' with pid 18
2017-03-28 14:03:54,934 INFO spawned: 'nginx' with pid 19
2017-03-28 14:03:54,935 INFO spawned: 'clp-server' with pid 20
2017-03-28 14:03:54,937 INFO spawned: 'clp5-fpm' with pid 23
2017-03-28 14:03:54,938 INFO spawned: 'mysql' with pid 24
2017-03-28 14:03:54,940 INFO spawned: 'memcached' with pid 26
2017-03-28 14:03:54,940 INFO exited: redis (exit status 0; not expected)
2017-03-28 14:03:54,941 INFO success: postfix entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2017-03-28 14:03:55,011 INFO exited: mysql (exit status 0; not expected)
2017-03-28 14:03:55,102 INFO exited: postfix (exit status 0; expected)
2017-03-28 14:03:55,255 INFO exited: varnish (exit status 0; not expected)
2017-03-28 14:03:56,256 INFO success: sshd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-03-28 14:03:56,257 INFO success: php-fpm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-03-28 14:03:56,259 INFO spawned: 'redis' with pid 382
2017-03-28 14:03:56,262 INFO spawned: 'varnish' with pid 383
2017-03-28 14:03:56,263 INFO success: cron entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-03-28 14:03:56,263 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-03-28 14:03:56,263 INFO success: clp-server entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-03-28 14:03:56,263 INFO success: clp5-fpm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-03-28 14:03:56,266 INFO spawned: 'mysql' with pid 384
2017-03-28 14:03:56,266 INFO success: memcached entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-03-28 14:03:56,279 INFO exited: redis (exit status 0; not expected)
2017-03-28 14:03:56,279 CRIT reaped unknown pid 385)
2017-03-28 14:03:56,306 INFO exited: mysql (exit status 0; not expected)
2017-03-28 14:03:56,585 INFO exited: varnish (exit status 2; not expected)
2017-03-28 14:03:58,588 INFO spawned: 'redis' with pid 396
2017-03-28 14:03:58,589 INFO spawned: 'varnish' with pid 397
2017-03-28 14:03:58,590 INFO spawned: 'mysql' with pid 398
2017-03-28 14:03:58,599 INFO exited: redis (exit status 0; not expected)
2017-03-28 14:03:58,605 CRIT reaped unknown pid 399)
2017-03-28 14:03:58,632 INFO exited: mysql (exit status 0; not expected)
2017-03-28 14:03:58,913 INFO exited: varnish (exit status 2; not expected)
2017-03-28 14:04:01,919 INFO spawned: 'redis' with pid 410
2017-03-28 14:04:01,921 INFO spawned: 'varnish' with pid 411
2017-03-28 14:04:01,923 INFO spawned: 'mysql' with pid 412
2017-03-28 14:04:01,930 INFO exited: redis (exit status 0; not expected)
2017-03-28 14:04:01,930 INFO gave up: redis entered FATAL state, too many start retries too quickly
2017-03-28 14:04:01,930 CRIT reaped unknown pid 413)
2017-03-28 14:04:01,969 INFO exited: mysql (exit status 0; not expected)
2017-03-28 14:04:02,238 INFO gave up: mysql entered FATAL state, too many start retries too quickly
2017-03-28 14:04:02,238 INFO exited: varnish (exit status 2; not expected)
2017-03-28 14:04:03,240 INFO gave up: varnish entered FATAL state, too many start retries too quickly
If I log on to the container via docker exec -it <container> bash, it shows the following running processes:
root@mgt-dev-70:/# ps -aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.1 48144 16348 ? Ss+ 14:03 0:00 /usr/bin/python /usr/bin/supervisord
root 9 0.0 0.0 55600 5268 ? S 14:03 0:00 /usr/sbin/sshd -D
root 11 0.0 0.3 819816 49984 ? S 14:03 0:00 php-fpm: master process (/etc/php/7.0/fpm/php-fpm.conf)
root 18 0.0 0.0 25904 2236 ? S 14:03 0:00 /usr/sbin/cron -f
root 19 0.0 0.1 64660 23456 ? S 14:03 0:00 nginx: master process /usr/sbin/nginx -g daemon off;
root 20 0.0 0.0 93752 8432 ? S 14:03 0:00 nginx: master process /usr/sbin/clp-server -g daemon off;
root 23 0.0 0.2 854428 39528 ? S 14:03 0:00 php-fpm: master process (/etc/clp5/fpm/php-fpm.conf)
root 25 0.1 0.0 37256 8876 ? Ssl 14:03 0:00 /usr/bin/redis-server 127.0.0.1:6379
memcache 26 0.0 0.0 327452 2724 ? Sl 14:03 0:00 /usr/bin/memcached -p 11211 -u memcache -m 256 -c 1024
root 40 0.0 0.1 65564 21516 ? S 14:03 0:00 nginx: worker process
root 102 0.0 0.0 94588 4304 ? S 14:03 0:00 nginx: worker process
root 156 0.0 0.0 36620 3948 ? Ss 14:03 0:00 /usr/lib/postfix/master
postfix 157 0.0 0.0 38684 3780 ? S 14:03 0:00 pickup -l -t unix -u -c
postfix 158 0.0 0.0 38732 3892 ? S 14:03 0:00 qmgr -l -t unix -u
varnish 164 0.0 0.0 126924 7172 ? Ss 14:03 0:00 /usr/sbin/varnishd -a :6081 -T :6082 -f /etc/varnish/default.vcl -s malloc,256m
vcache 165 0.0 0.7 314848 123484 ? Sl 14:03 0:00 /usr/sbin/varnishd -a :6081 -T :6082 -f /etc/varnish/default.vcl -s malloc,256m
root 495 0.0 0.0 20244 2984 ? Ss 14:12 0:00 bash
root 501 0.0 0.0 17500 2036 ? R+ 14:12 0:00 ps -aux
That's really as much as I know. Any guidance on getting this resolved would be appreciated, as it looks like a great way to get up and running quickly on Magento 2. Thanks.
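Per the log, redis, mysql and varnish each exit immediately after being spawned and eventually enter the FATAL state, so those three are the places to start. As a small sketch, assuming the `docker logs` output above has been saved to a file named `supervisord.log` (a hypothetical filename), the failing services can be pulled out like this:

```shell
# List the services supervisord gave up on, from a saved copy of the
# `docker logs` output (supervisord.log is a hypothetical filename).
grep 'gave up:' supervisord.log \
  | sed 's/.*gave up: \([a-z0-9-]*\) entered FATAL state.*/\1/'
```

For this log that prints `redis`, `mysql` and `varnish`. Note that redis and mysql exit with status 0, which supervisord flags as "not expected": one common cause is a service that daemonizes itself instead of staying in the foreground, so supervisord sees the parent exit cleanly and treats it as a crash. From inside the container (`docker exec -it <container> bash`) you can run each service's start command by hand and check its own logs to confirm why it exits.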

Mac OS X build not running on older version

I am facing a strange issue on 10.8. My client reports that the build isn't working on 10.8. I have tested the build on two systems; both have 10.11 installed.
The 10.8 log says:
Library not loaded: @rpath/libfmod.dylib
reason: image not found
I have added "@executable_path" to the Runtime Search Paths.
In the Copy Files build phase I have added libfmod.dylib with its destination set to Executables.
In my opinion, if the problem were with the runpath, it shouldn't run on my machine either.
Another thing I have noticed is that both machines have different "Referenced from" paths. My client's crash log for OS X 10.8 shows
/Applications/MyApp.app/Contents/Resources/.MyApp.app/Contents/MacOS/My App
while my log shows
/Applications/MyApp.app/Contents/MacOS/MyApp
Could this be a problem?
Any help would be appreciated.
EDIT
Here is the result of running the otool -l command on the dylib:
Load command 3
cmd LC_ID_DYLIB
cmdsize 48
name @rpath/libfmod.dylib (offset 24)
time stamp 1 Thu Jan 1 05:00:01 1970
current version 1.0.0
compatibility version 1.0.0
Load command 4
cmd LC_SYMTAB
cmdsize 24
symoff 1320816
nsyms 1233
stroff 1341592
strsize 38776
Load command 5
cmd LC_DYSYMTAB
cmdsize 80
ilocalsym 0
nlocalsym 1
iextdefsym 1
nextdefsym 1107
iundefsym 1108
nundefsym 125
tocoff 0
ntoc 0
modtaboff 0
nmodtab 0
extrefsymoff 0
nextrefsyms 0
indirectsymoff 1340592
nindirectsyms 250
extreloff 1340544
nextrel 6
locreloff 1302528
nlocrel 1672
Load command 6
cmd LC_UUID
cmdsize 24
uuid 44E30C32-DE34-37CF-BD39-3F67491607C4
Load command 7
cmd LC_VERSION_MIN_MACOSX
cmdsize 16
version 10.5
sdk 10.11
Load command 8
cmd LC_LOAD_DYLIB
cmdsize 88
name /System/Library/Frameworks/Carbon.framework/Versions/A/Carbon (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 157.0.0
compatibility version 2.0.0
Load command 9
cmd LC_LOAD_DYLIB
cmdsize 96
name /System/Library/Frameworks/AudioUnit.framework/Versions/A/AudioUnit (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 1.0.0
compatibility version 1.0.0
Load command 10
cmd LC_LOAD_DYLIB
cmdsize 96
name /System/Library/Frameworks/CoreAudio.framework/Versions/A/CoreAudio (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 1.0.0
compatibility version 1.0.0
Load command 11
cmd LC_LOAD_DYLIB
cmdsize 56
name /usr/lib/libstdc++.6.dylib (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 104.1.0
compatibility version 7.0.0
Load command 12
cmd LC_LOAD_DYLIB
cmdsize 56
name /usr/lib/libSystem.B.dylib (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 1225.1.1
compatibility version 1.0.0
Load command 13
cmd LC_LOAD_DYLIB
cmdsize 56
name /usr/lib/libgcc_s.1.dylib (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 955.0.0
compatibility version 1.0.0
Load command 14
cmd LC_LOAD_DYLIB
cmdsize 104
name /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 1253.0.0
compatibility version 150.0.0
Load command 15
cmd LC_FUNCTION_STARTS
cmdsize 16
dataoff 1315904
datasize 4912
Load command 16
cmd LC_DATA_IN_CODE
cmdsize 16
dataoff 1320816
datasize 0
EDIT: otool -l results for the app:
Load command 0
cmd LC_SEGMENT_64
cmdsize 72
segname __PAGEZERO
vmaddr 0x0000000000000000
vmsize 0x0000000100000000
fileoff 0
filesize 0
maxprot 0x00000000
initprot 0x00000000
nsects 0
flags 0x0
Load command 1
cmd LC_SEGMENT_64
cmdsize 952
segname __TEXT
vmaddr 0x0000000100000000
vmsize 0x00000000001fe000
fileoff 0
filesize 2088960
maxprot 0x00000007
initprot 0x00000005
nsects 11
flags 0x0
Section
sectname __text
segname __TEXT
addr 0x0000000100001ba0
size 0x0000000000175017
offset 7072
align 2^4 (16)
reloff 0
nreloc 0
flags 0x80000400
reserved1 0
reserved2 0
Section
sectname __stubs
segname __TEXT
addr 0x0000000100176bb8
size 0x00000000000033ae
offset 1534904
align 2^1 (2)
reloff 0
nreloc 0
flags 0x80000408
reserved1 0 (index into indirect symbol table)
reserved2 6 (size of stubs)
Section
sectname __stub_helper
segname __TEXT
addr 0x0000000100179f68
size 0x0000000000000c90
offset 1548136
align 2^2 (4)
reloff 0
nreloc 0
flags 0x80000400
reserved1 0
reserved2 0
Section
sectname __const
segname __TEXT
addr 0x000000010017ac00
size 0x0000000000005048
offset 1551360
align 2^5 (32)
reloff 0
nreloc 0
flags 0x00000000
reserved1 0
reserved2 0
Section
sectname __gcc_except_tab
segname __TEXT
addr 0x000000010017fc48
size 0x00000000000163a0
offset 1571912
align 2^2 (4)
reloff 0
nreloc 0
flags 0x00000000
reserved1 0
reserved2 0
Section
sectname __cstring
segname __TEXT
addr 0x0000000100195fe8
size 0x000000000000c783
offset 1662952
align 2^3 (8)
reloff 0
nreloc 0
flags 0x00000002
reserved1 0
reserved2 0
Section
sectname __objc_classname
segname __TEXT
addr 0x00000001001a276b
size 0x0000000000000081
offset 1714027
align 2^0 (1)
reloff 0
nreloc 0
flags 0x00000002
reserved1 0
reserved2 0
Section
sectname __objc_methname
segname __TEXT
addr 0x00000001001a27ec
size 0x0000000000001505
offset 1714156
align 2^0 (1)
reloff 0
nreloc 0
flags 0x00000002
reserved1 0
reserved2 0
Section
sectname __objc_methtype
segname __TEXT
addr 0x00000001001a3cf1
size 0x0000000000001609
offset 1719537
align 2^0 (1)
reloff 0
nreloc 0
flags 0x00000002
reserved1 0
reserved2 0
Section
sectname __unwind_info
segname __TEXT
addr 0x00000001001a52fc
size 0x0000000000006e20
offset 1725180
align 2^2 (4)
reloff 0
nreloc 0
flags 0x00000000
reserved1 0
reserved2 0
Section
sectname __eh_frame
segname __TEXT
addr 0x00000001001ac120
size 0x0000000000051ec8
offset 1753376
align 2^3 (8)
reloff 0
nreloc 0
flags 0x00000000
reserved1 0
reserved2 0
Load command 2
cmd LC_SEGMENT_64
cmdsize 1512
segname __DATA
vmaddr 0x00000001001fe000
vmsize 0x0000000000028000
fileoff 2088960
filesize 126976
maxprot 0x00000007
initprot 0x00000003
nsects 18
flags 0x0
Section
sectname __nl_symbol_ptr
segname __DATA
addr 0x00000001001fe000
size 0x0000000000000010
offset 2088960
align 2^3 (8)
reloff 0
nreloc 0
flags 0x00000006
reserved1 2205 (index into indirect symbol table)
reserved2 0
Section
sectname __got
segname __DATA
addr 0x00000001001fe010
size 0x0000000000000240
offset 2088976
align 2^3 (8)
reloff 0
nreloc 0
flags 0x00000006
reserved1 2207 (index into indirect symbol table)
reserved2 0
Section
sectname __la_symbol_ptr
segname __DATA
addr 0x00000001001fe250
size 0x00000000000044e8
offset 2089552
align 2^3 (8)
reloff 0
nreloc 0
flags 0x00000007
reserved1 2279 (index into indirect symbol table)
reserved2 0
Section
sectname __mod_init_func
segname __DATA
addr 0x0000000100202738
size 0x0000000000000470
offset 2107192
align 2^3 (8)
reloff 0
nreloc 0
flags 0x00000009
reserved1 0
reserved2 0
Section
sectname __const
segname __DATA
addr 0x0000000100202bc0
size 0x0000000000013400
offset 2108352
align 2^5 (32)
reloff 0
nreloc 0
flags 0x00000000
reserved1 0
reserved2 0
Section
sectname __cfstring
segname __DATA
addr 0x0000000100215fc0
size 0x0000000000000480
offset 2187200
align 2^3 (8)
reloff 0
nreloc 0
flags 0x00000000
reserved1 0
reserved2 0
Section
sectname __objc_classlist
segname __DATA
addr 0x0000000100216440
size 0x0000000000000030
offset 2188352
align 2^3 (8)
reloff 0
nreloc 0
flags 0x10000000
reserved1 0
reserved2 0
Section
sectname __objc_protolist
segname __DATA
addr 0x0000000100216470
size 0x0000000000000018
offset 2188400
align 2^3 (8)
reloff 0
nreloc 0
flags 0x00000000
reserved1 0
reserved2 0
Section
sectname __objc_imageinfo
segname __DATA
addr 0x0000000100216488
size 0x0000000000000008
offset 2188424
align 2^2 (4)
reloff 0
nreloc 0
flags 0x00000000
reserved1 0
reserved2 0
Section
sectname __objc_const
segname __DATA
addr 0x0000000100216490
size 0x0000000000001da0
offset 2188432
align 2^3 (8)
reloff 0
nreloc 0
flags 0x00000000
reserved1 0
reserved2 0
Section
sectname __objc_selrefs
segname __DATA
addr 0x0000000100218230
size 0x00000000000005a0
offset 2196016
align 2^3 (8)
reloff 0
nreloc 0
flags 0x10000005
reserved1 0
reserved2 0
Section
sectname __objc_classrefs
segname __DATA
addr 0x00000001002187d0
size 0x0000000000000128
offset 2197456
align 2^3 (8)
reloff 0
nreloc 0
flags 0x10000000
reserved1 0
reserved2 0
Section
sectname __objc_superrefs
segname __DATA
addr 0x00000001002188f8
size 0x0000000000000028
offset 2197752
align 2^3 (8)
reloff 0
nreloc 0
flags 0x10000000
reserved1 0
reserved2 0
Section
sectname __objc_ivar
segname __DATA
addr 0x0000000100218920
size 0x0000000000000088
offset 2197792
align 2^3 (8)
reloff 0
nreloc 0
flags 0x00000000
reserved1 0
reserved2 0
Section
sectname __objc_data
segname __DATA
addr 0x00000001002189a8
size 0x00000000000001e0
offset 2197928
align 2^3 (8)
reloff 0
nreloc 0
flags 0x00000000
reserved1 0
reserved2 0
Section
sectname __data
segname __DATA
addr 0x0000000100218b90
size 0x0000000000003500
offset 2198416
align 2^4 (16)
reloff 0
nreloc 0
flags 0x00000000
reserved1 0
reserved2 0
Section
sectname __common
segname __DATA
addr 0x000000010021c0a0
size 0x0000000000002cd0
offset 0
align 2^5 (32)
reloff 0
nreloc 0
flags 0x00000001
reserved1 0
reserved2 0
Section
sectname __bss
segname __DATA
addr 0x000000010021ed80
size 0x0000000000006d7f
offset 0
align 2^5 (32)
reloff 0
nreloc 0
flags 0x00000001
reserved1 0
reserved2 0
Load command 3
cmd LC_SEGMENT_64
cmdsize 72
segname __LINKEDIT
vmaddr 0x0000000100226000
vmsize 0x000000000005bd20
fileoff 2215936
filesize 376096
maxprot 0x00000007
initprot 0x00000001
nsects 0
flags 0x0
Load command 4
cmd LC_DYLD_INFO_ONLY
cmdsize 48
rebase_off 2215936
rebase_size 1848
bind_off 2217784
bind_size 3592
weak_bind_off 2221376
weak_bind_size 3248
lazy_bind_off 2224624
lazy_bind_size 8704
export_off 2233328
export_size 173048
Load command 5
cmd LC_SYMTAB
cmdsize 24
symoff 2416712
nsyms 2350
stroff 2472248
strsize 119784
Load command 6
cmd LC_DYSYMTAB
cmdsize 80
ilocalsym 0
nlocalsym 1881
iextdefsym 1881
nextdefsym 71
iundefsym 1952
nundefsym 398
tocoff 0
ntoc 0
modtaboff 0
nmodtab 0
extrefsymoff 0
nextrefsyms 0
indirectsymoff 2454312
nindirectsyms 4484
extreloff 0
nextrel 0
locreloff 0
nlocrel 0
Load command 7
cmd LC_LOAD_DYLINKER
cmdsize 32
name /usr/lib/dyld (offset 12)
Load command 8
cmd LC_UUID
cmdsize 24
uuid BD90987E-8A90-3083-B629-E9F378270B47
Load command 9
cmd LC_VERSION_MIN_MACOSX
cmdsize 16
version 10.8
sdk 10.8
Load command 10
cmd LC_SOURCE_VERSION
cmdsize 16
version 0.0
Load command 11
cmd LC_MAIN
cmdsize 24
entryoff 288778
stacksize 0
Load command 12
cmd LC_LOAD_WEAK_DYLIB
cmdsize 112
name /System/Library/Frameworks/ApplicationServices.framework/Versions/A/ApplicationServices (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 45.0.0
compatibility version 1.0.0
Load command 13
cmd LC_LOAD_WEAK_DYLIB
cmdsize 152
name /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/CoreGraphics (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 600.0.0
compatibility version 64.0.0
Load command 14
cmd LC_LOAD_WEAK_DYLIB
cmdsize 88
name /System/Library/Frameworks/ImageIO.framework/Versions/A/ImageIO (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 1.0.0
compatibility version 1.0.0
Load command 15
cmd LC_LOAD_DYLIB
cmdsize 96
name /System/Library/Frameworks/CoreMedia.framework/Versions/A/CoreMedia (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 1.0.0
compatibility version 1.0.0
Load command 16
cmd LC_LOAD_DYLIB
cmdsize 104
name /System/Library/Frameworks/AVFoundation.framework/Versions/A/AVFoundation (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 2.0.0
compatibility version 1.0.0
Load command 17
cmd LC_LOAD_DYLIB
cmdsize 48
name @rpath/libfmod.dylib (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 1.0.0
compatibility version 1.0.0
Load command 18
cmd LC_LOAD_DYLIB
cmdsize 88
name /System/Library/Frameworks/AppKit.framework/Versions/C/AppKit (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 1187.37.0
compatibility version 45.0.0
Load command 19
cmd LC_LOAD_DYLIB
cmdsize 104
name /System/Library/Frameworks/AudioToolbox.framework/Versions/A/AudioToolbox (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 1.0.0
compatibility version 1.0.0
Load command 20
cmd LC_LOAD_DYLIB
cmdsize 88
name /System/Library/Frameworks/Cocoa.framework/Versions/A/Cocoa (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 19.0.0
compatibility version 1.0.0
Load command 21
cmd LC_LOAD_DYLIB
cmdsize 104
name /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 744.18.0
compatibility version 150.0.0
Load command 22
cmd LC_LOAD_DYLIB
cmdsize 96
name /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 945.16.0
compatibility version 300.0.0
Load command 23
cmd LC_LOAD_DYLIB
cmdsize 88
name /System/Library/Frameworks/OpenAL.framework/Versions/A/OpenAL (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 1.0.0
compatibility version 1.0.0
Load command 24
cmd LC_LOAD_DYLIB
cmdsize 88
name /System/Library/Frameworks/OpenGL.framework/Versions/A/OpenGL (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 1.0.0
compatibility version 1.0.0
Load command 25
cmd LC_LOAD_DYLIB
cmdsize 96
name /System/Library/Frameworks/QuartzCore.framework/Versions/A/QuartzCore (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 1.8.0
compatibility version 1.2.0
Load command 26
cmd LC_LOAD_DYLIB
cmdsize 56
name /usr/lib/libxml2.2.dylib (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 10.8.0
compatibility version 10.0.0
Load command 27
cmd LC_LOAD_DYLIB
cmdsize 48
name /usr/lib/libz.1.dylib (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 1.2.5
compatibility version 1.0.0
Load command 28
cmd LC_LOAD_DYLIB
cmdsize 56
name /usr/lib/libstdc++.6.dylib (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 56.0.0
compatibility version 7.0.0
Load command 29
cmd LC_LOAD_DYLIB
cmdsize 56
name /usr/lib/libSystem.B.dylib (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 169.3.0
compatibility version 1.0.0
Load command 30
cmd LC_LOAD_DYLIB
cmdsize 96
name /System/Library/Frameworks/CoreVideo.framework/Versions/A/CoreVideo (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 1.8.0
compatibility version 1.2.0
Load command 31
cmd LC_LOAD_DYLIB
cmdsize 56
name /usr/lib/libobjc.A.dylib (offset 24)
time stamp 2 Thu Jan 1 05:00:02 1970
current version 228.0.0
compatibility version 1.0.0
Load command 32
cmd LC_RPATH
cmdsize 32
path @executable_path (offset 12)
Load command 33
cmd LC_FUNCTION_STARTS
cmdsize 16
dataoff 2406376
datasize 10336
Load command 34
cmd LC_DATA_IN_CODE
cmdsize 16
dataoff 2416712
datasize 0
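The dumps above narrow the question down to how dyld expands @rpath: it tries each LC_RPATH entry of the loading image in turn. The app binary carries exactly one LC_RPATH, @executable_path, so @rpath/libfmod.dylib can only resolve to a libfmod.dylib sitting directly next to the executable in Contents/MacOS. As a sketch, assuming the otool -l output above has been saved to otool_app.txt (a hypothetical filename), the runpath entries can be listed like this:

```shell
# Print every LC_RPATH entry from a saved `otool -l` dump
# (otool_app.txt is a hypothetical file holding the output above).
grep -A 2 'cmd LC_RPATH' otool_app.txt | awk '$1 == "path" {print $2}'
```

If the Copy Files phase doesn't actually land the dylib in Contents/MacOS on the client's machine — and the client's "Referenced from" path under Contents/Resources suggests a nested or re-copied bundle — the load will fail even though it works locally. Verifying where libfmod.dylib ends up in the shipped .app, or adding a second runpath with `install_name_tool -add_rpath "@executable_path/../Frameworks" MyApp.app/Contents/MacOS/MyApp` (hypothetical path) and placing the dylib in Contents/Frameworks, would be the next things to try.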
