Unable to mount /dev/pmem0 with 'dax' option - linux-kernel

I am upgrading the kernel from 4.14 to 5.4.103.
We create /dev/pmem0 with the following kernel command-line parameter: memmap=0x1000000!0x10000000
As a result, /dev/pmem0 is created as expected, but the mount command fails with errors:
mount -o dax /dev/pmem0 /mnt/data1
[ 28.920384] squashfs: Unknown parameter 'dax'
[ 28.945516] EXT4-fs (pmem0): DAX unsupported by block device.
[ 28.951523] squashfs: Unknown parameter 'dax'
mount: mounting /dev/pmem0 on /mnt/data1 failed: Invalid argument
Can someone tell me whether something changed in the 5.4.x kernels?
PS: If I remove the 'dax' option, the mount succeeds.
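One common cause of "DAX unsupported by block device" on 5.x kernels is that the pmem namespace is not in fsdax mode. A sketch of how one might check this with ndctl, guarded so it only does anything on a machine that actually has the device (the create-namespace line is commented out because it is destructive):

```shell
# Check the pmem namespace mode; 5.x ext4 requires the block device itself
# to support DAX, which for pmem generally means fsdax mode.
if [ -b /dev/pmem0 ] && command -v ndctl >/dev/null 2>&1; then
  ndctl list -N                 # shows each namespace and its "mode" field
  # To (destructively) switch the namespace to fsdax mode:
  # ndctl create-namespace -f -e namespace0.0 --mode=fsdax
  status="pmem0 present; inspect the mode above"
else
  status="no pmem0 or ndctl on this machine"
fi
echo "$status"
```

This is a diagnostic sketch, not a confirmed fix for the memmap-reserved case; the namespace name `namespace0.0` is an assumption and should be taken from the `ndctl list` output.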

Related

Why does updateNodeList fail when installing Oracle Grid Infrastructure 12c?

While installing Oracle 12c Grid Infrastructure I received an error, and I must run the following command:
/u01/app/12.1.0/grid/oui/bin/runInstaller -updateNodeList -setCustomNodelist -noClusterEnabled ORACLE_HOME=/u01/app/12.1.0/grid CLUSTER_NODES=kaash-his-1,kaash-his-2 "NODES_TO_SET={kaash-his-1,kaash-his-2}" CRS=false "INVENTORY_LOCATION=/u01/app/oraInventory" LOCAL_NODE=kaash-his-2
When I run the command, it fails:
-bash-4.2$ /u01/app/12.1.0/grid/oui/bin/runInstaller -updateNodeList -setCustomNodelist -noClusterEnabled ORACLE_HOME=/u01/app/12.1.0/grid CLUSTER_NODES=kaash-his-1,kaash-his-2 "NODES_TO_SET={kaash-his-1,kaash-his-2}" CRS=false "INVENTORY_LOCATION=/u01/app/oraInventory" LOCAL_NODE=kaash-his-2
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 28607 MB Passed
Exception oracle.sysman.oii.oiil.OiilNativeException: S_OWNER_SYSTEM_EPERM occurred..
oracle.sysman.oii.oiil.OiilNativeException: S_OWNER_SYSTEM_EPERM
at oracle.sysman.oii.oiip.osd.unix.OiipuUnixOps.chgrp(Native Method)
at oracle.sysman.oii.oiip.oiipg.OiipgBootstrap.changeGroup(OiipgBootstrap.java:1468)
at oracle.sysman.oii.oiip.oiipg.OiipgBootstrap.writeInvLoc(OiipgBootstrap.java:1113)
at oracle.sysman.oii.oiip.oiipg.OiipgBootstrap.updateInventoryLoc(OiipgBootstrap.java:463)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.createInventory(OiiiInstallAreaControl.java:5394)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.initAreaControlWithAccessCheck(OiiiInstallAreaControl.java:1826)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initAreaControl(OiicStandardInventorySession.java:316)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:276)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:238)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:187)
at oracle.sysman.oii.oiic.OiicBaseInventoryApp.getInstallAreaControl(OiicBaseInventoryApp.java:993)
at oracle.sysman.oii.oiic.OiicBaseInventoryApp.main_helper(OiicBaseInventoryApp.java:756)
at oracle.sysman.oii.oiic.OiicUpdateNodeList.main(OiicUpdateNodeList.java:492)
'UpdateNodeList' failed.
This is the log file :
[grid@KAASH-HIS-2 logs]$ cat UpdateNodeList2023-02-12_12-11-10AM.log
The file oraparam.ini could not be found at /u01/app/12.1.0/grid/oui/bin/oraparam.ini
Using paramFile: /u01/app/12.1.0/grid/oui/oraparam.ini
Checking swap space: must be greater than 500 MB. Actual 28607 MB Passed
Execvp of the child jre : the cmdline is ../../jdk/jre/bin/java, and the argv is
../../jdk/jre/bin/java
-Doracle.installer.library_loc=../lib/linux64
-Doracle.installer.oui_loc=/u01/app/12.1.0/grid/oui/bin/..
-Doracle.installer.bootstrap=FALSE
-Doracle.installer.startup_location=/u01/app/12.1.0/grid/oui/bin
-Doracle.installer.jre_loc=../../jdk/jre
-Doracle.installer.custom_inventory=/u01/app/oraInventory
-Doracle.installer.nlsEnabled="TRUE"
-Doracle.installer.prereqConfigLoc=
-Doracle.installer.unixVersion=5.4.17-2136.315.5.el7uek.x86_64
-Xms150m
-Xmx256m
-XX:MaxPermSize=128M
-cp
/tmp/OraInstall2023-02-12_12-11-10AM::/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/wsclient_extended.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/emCoreConsole.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/jsch.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/remoteinterfaces.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/OraPrereq.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/orai18n-utility.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/OraPrereqChecks.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/adf-share-ca.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/jmxspi.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/instcommon.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/installcommons_1.0.0b.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/ojdbc6.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/instcrs.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/cvu.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/entityManager_proxy.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/javax.security.jacc_1.0.0.0_1-1.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/prov_fixup.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/orai18n-mapping.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/emca.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/ssh.jar:../jlib/OraInstaller.jar:../jlib/oneclick.jar:../jlib/xmlparserv2.jar:../jlib/share.jar:../jlib/OraInstallerNet.jar:../jlib/emCfg.jar:../jlib/emocmutl.jar:../jlib/OraPrereq.jar:../jlib/jsch.jar:../jlib/ssh.jar:../jlib/remoteinterfaces.jar:../jlib/http_client.jar:../jlib/OraSuiteInstaller.jar:../jlib/opatch.jar:../jlib/opatchactions.jar:../jlib/opatchprereq.jar:../
jlib/opatchutil.jar:../jlib/OraCheckPoint.jar:../jlib/InstImages.jar:../jlib/InstHelp.jar:../jlib/InstHelp_de.jar:../jlib/InstHelp_es.jar:../jlib/InstHelp_fr.jar:../jlib/InstHelp_it.jar:../jlib/InstHelp_ja.jar:../jlib/InstHelp_ko.jar:../jlib/InstHelp_pt_BR.jar:../jlib/InstHelp_zh_CN.jar:../jlib/InstHelp_zh_TW.jar:../jlib/oracle_ice.jar:../jlib/help-share.jar:../jlib/ohj.jar:../jlib/ewt3.jar:../jlib/ewt3-swingaccess.jar:../jlib/swingaccess.jar::../jlib/jewt4.jar:../jlib/orai18n-collation.jar:../jlib/orai18n-mapping.jar:../jlib/ojmisc.jar:../jlib/xml.jar:../jlib/srvm.jar:../jlib/srvmasm.jar
oracle.sysman.oii.oiic.OiicUpdateNodeList
-scratchPath
/tmp/OraInstall2023-02-12_12-11-10AM
-sourceType
network
-timestamp
2023-02-12_12-11-10AM
-updateNodeList
-setCustomNodelist
-noClusterEnabled
ORACLE_HOME=/u01/app/12.1.0/grid
CLUSTER_NODES=kaash-his-1,kaash-his-2
NODES_TO_SET={kaash-his-1,kaash-his-2}
CRS=false
INVENTORY_LOCATION=/u01/app/oraInventory
LOCAL_NODE=kaash-his-2
This is the id command:
[grid@KAASH-HIS-2 logs]$ id
uid=54323(grid) gid=1002(oinstall) groups=1002(oinstall),1004(dba),54323,54324 context=system_u:system_r:unconfined_service_t:s0
This is the command I need to run before continuing the setup:
/u01/app/12.1.0/grid/oui/bin/runInstaller -jreLoc /u01/app/12.1.0/grid/jdk/jre -paramFile /u01/app/12.1.0/grid/oui/clusterparam.ini -silent
-ignoreSysPrereqs -updateNodeList -bigCluster ORACLE_HOME=/u01/app/12.1.0/grid CLUSTER_NODES=kaash-his-2 "NODES_TO_SET={kaash-his-1,kaash-his-2}"
-invPtrLoc "/u01/app/12.1.0/grid/oraInst.loc" -local
When I execute the command, I get this error:
The operation failed as it was called without path of Oracle Home being attached
How can I attach the Oracle home and execute the command, please?
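One possible way to register ("attach") the grid home in the central inventory is OUI's -attachHome mode. A sketch, assuming the home is simply missing from the /u01/app/oraInventory inventory; ORACLE_HOME_NAME is an arbitrary label I chose, and the block is guarded so it does nothing on a machine without the installer:

```shell
OH=/u01/app/12.1.0/grid    # grid home from the question
if [ -x "$OH/oui/bin/runInstaller" ]; then
  # Register the home in the central inventory, then retry updateNodeList.
  "$OH/oui/bin/runInstaller" -silent -attachHome \
      ORACLE_HOME="$OH" ORACLE_HOME_NAME=OraGrid12c \
      "CLUSTER_NODES={kaash-his-1,kaash-his-2}" LOCAL_NODE=kaash-his-2
  result="attachHome attempted"
else
  result="runInstaller not found at $OH; nothing to do on this machine"
fi
echo "$result"
```

Note the S_OWNER_SYSTEM_EPERM stack trace above comes from a chgrp on the inventory, so it is also worth confirming that /u01/app/oraInventory is owned by the grid user's oinstall group before retrying.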

Go application built with Bazel can't link when running inside a container

I am trying to containerize my application build. When I run the build, which uses Bazel with bazel-gazelle, inside a container, I get this error:
$ bazel run --spawn_strategy=local //:gazelle --verbose_failures
INFO: Analyzed target //:gazelle (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
ERROR: /home/workstation/.cache/bazel/_bazel_workstation/fb227af4c7b6aa39cc5b15d7fd9b737a/external/go_sdk/BUILD.bazel:43:15: GoToolchainBinary external/go_sdk/builder [for host] failed: (Exit 1): go failed: error executing command
(cd /home/workstation/.cache/bazel/_bazel_workstation/fb227af4c7b6aa39cc5b15d7fd9b737a/execroot/__main__ && \
exec env - \
GOROOT_FINAL=GOROOT \
external/go_sdk/bin/go tool link -o bazel-out/host/bin/external/go_sdk/builder bazel-out/host/bin/external/go_sdk/builder.a)
# Configuration: e0f1106e28100863b4221c55fca6feb935acec078da5376e291cf644e275dae5
# Execution platform: @local_config_platform//:host
/opt/go/pkg/tool/linux_amd64/link: mapping output file failed: invalid argument
Target //:gazelle failed to build
INFO: Elapsed time: 2.302s, Critical Path: 0.35s
INFO: 2 processes: 2 internal.
FAILED: Build did NOT complete successfully
I tried to run it standalone:
$ /home/workstation/.cache/bazel/_bazel_workstation/fb227af4c7b6aa39cc5b15d7fd9b737a/external/go_sdk/bin/go tool link -o bazel-out/host/bin/external/go_sdk/builder bazel-out/host/bin/external/go_sdk/builder.a
/opt/go/pkg/tool/linux_amd64/link: mapping output file failed: invalid argument
and still had no success.
I have never had this kind of link problem before, and the linker doesn't provide much more information. I tried installing every package I could think of, with no luck.
For context:
Running Ubuntu 20.04 LTS
Docker 20.10.9
Bazel 4.2.2
Rules GO v0.31.0
Bazel Gazelle v0.25.0
I also tried running it under strace, though I don't think I am skilled enough to find meaningful information in the tool's output.
Edit: for more context:
$ /home/workstation/.cache/bazel/_bazel_workstation/fb227af4c7b6aa39cc5b15d7fd9b737a/external/go_sdk/bin/go tool link -v -o bazel-out/host/bin/external/go_sdk/builder bazel-out/host/bin/external/go_sdk/builder.a
HEADER = -H5 -T0x401000 -R0x1000
searching for runtime.a in /opt/go/pkg/linux_amd64/runtime.a
/opt/go/pkg/tool/linux_amd64/link: mapping output file failed: invalid argument
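"mapping output file failed: invalid argument" comes from the Go linker mmap'ing its output file, and some container filesystems (certain overlay or remote mounts) reject that mmap. As a first diagnostic step, one might check what filesystem the Bazel cache sits on; this is a sketch, not a confirmed diagnosis for this particular container:

```shell
# Print the filesystem type under the Bazel output cache; overlay/remote
# types are suspects for mmap failures. Falls back to $HOME if the cache
# directory doesn't exist on this machine.
CACHE="$HOME/.cache/bazel"
fstype=$(stat -f -c %T "$CACHE" 2>/dev/null || stat -f -c %T "$HOME")
echo "bazel cache filesystem: $fstype"
```

If the type turns out to be an overlay or network filesystem, moving the cache onto a regular volume (e.g. a Docker bind mount or named volume backed by ext4) with `--output_user_root` would be one thing to try.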

failed to install greenplum command center when running gpccinstall

I downloaded greenplum-cc-web-4.6.1-LINUX-x86_64.zip for my Greenplum DB 5.18 and followed this link (https://gpcc.docs.pivotal.io/460/topics/setup-collection-agents.html) to install Command Center. Everything was OK until gpccinstall failed with the following errors:
RunCommandOnEachHost fail on host: client-gp03.bj
Error when unzip remote binary on sdw3 bin/gpccws
bin/ccagent
bin/gpcc
conf/app.conf
gpcc_path.sh
bin/start_agent.sh
bin/queryinfocat.sh
bin/gpcc_md5
ccdata/
alert-email/alertTemplate.html
alert-email/send_alert.sh.sample
languages/
languages/zh.json
languages/en.json
Error when unzip remote binary on client-gp00.bj bin/gpccws
bin/ccagent
bin/gpcc
conf/app.conf
gpcc_path.sh
bin/start_agent.sh
bin/queryinfocat.sh
bin/gpcc_md5
ccdata/
alert-email/alertTemplate.html
alert-email/send_alert.sh.sample
languages/
languages/zh.json
languages/en.json
Error when unzip remote binary on client-gp01.bj bin/gpccws
bin/ccagent
bin/gpcc
conf/app.conf
gpcc_path.sh
bin/start_agent.sh
bin/queryinfocat.sh
bin/gpcc_md5
ccdata/
alert-email/alertTemplate.html
alert-email/send_alert.sh.sample
languages/
languages/zh.json
languages/en.json
Error when unzip remote binary on client-gp02.bj bin/gpccws
bin/ccagent
bin/gpcc
conf/app.conf
gpcc_path.sh
bin/start_agent.sh
bin/queryinfocat.sh
bin/gpcc_md5
ccdata/
alert-email/alertTemplate.html
alert-email/send_alert.sh.sample
languages/
languages/zh.json
languages/en.json
Error when unzip remote binary on client-gp03.bj Warning: the ECDSA host key for 'client-gp03.bj' differs from the key for the IP address '10.136.173.8'
Offending key for IP in /home/gpadmin/.ssh/known_hosts:10
Matching host key in /home/gpadmin/.ssh/known_hosts:17
tar: bin/gpccws: Cannot open: File exists
tar: Exiting with failure status due to previous errors
RunCommandOnEachHost failure happened
Has anyone encountered this issue before? I searched Google and the Pivotal community but failed to find a solution. Any help is appreciated.
BTW, when I ignored the above errors and continued, the gpcc web server started successfully. After I logged in, only the "Query Monitor" UI section showed a warning: "GPCC is no longer receiving updates. Check your network status or gpcc status and refresh this page." The other parts of the UI seem OK.
From here:
Error when unzip remote binary on client-gp03.bj Warning: the ECDSA host key for 'client-gp03.bj' differs from the key for the IP address '10.136.173.8'
Offending key for IP in /home/gpadmin/.ssh/known_hosts:10
Matching host key in /home/gpadmin/.ssh/known_hosts:17
tar: bin/gpccws: Cannot open: File exists
tar: Exiting with failure status due to previous errors
You have duplicate SSH fingerprint keys in your /home/gpadmin/.ssh/known_hosts file. I recommend removing both lines 10 and 17 from that file, then running ssh-keyscan client-gp03.bj >> /home/gpadmin/.ssh/known_hosts
After this is complete, try SSH-ing to the host to verify that the fingerprint error is cleared up; if so, try the gpcc installation again.
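The steps above can be sketched with ssh-keygen -R, which removes every entry for a given name and so covers both the offending and the matching lines at once. This version works on a temporary copy of known_hosts so nothing is lost by accident; the host and IP are the ones from this report:

```shell
HOST=client-gp03.bj
IP=10.136.173.8
KH=$(mktemp)
# Work on a copy of the real file (touch an empty one if it doesn't exist here).
cp "$HOME/.ssh/known_hosts" "$KH" 2>/dev/null || touch "$KH"
ssh-keygen -R "$HOST" -f "$KH" >/dev/null 2>&1 || true   # drop stale hostname entries
ssh-keygen -R "$IP"   -f "$KH" >/dev/null 2>&1 || true   # drop stale IP entries
# Re-learn the current key (only works where the host is reachable):
# ssh-keyscan "$HOST" >> "$KH"
echo "cleaned copy at $KH"
```

Once the output looks right, the same two ssh-keygen -R commands can be run against the real /home/gpadmin/.ssh/known_hosts followed by the ssh-keyscan from the answer.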

MPID_nem_tcp_init(384).............: gethostbyname failed, Mac (errno 1)

I installed MPICH 3.3 locally on my MacBook, but I get this runtime error:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(565)..............:
MPID_Init(224).....................: channel initialization failed
MPIDI_CH3_Init(105)................:
MPID_nem_init(324).................:
MPID_nem_tcp_init(178).............:
MPID_nem_tcp_get_business_card(425):
MPID_nem_tcp_init(384).............: gethostbyname failed, Mac (errno 1)
Could anyone please point out the issue and tell me how to solve it?
This problem was solved by adding a new line to the file /etc/hosts.
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1       localhost
255.255.255.255 broadcasthost
Simply add a new line 127.0.0.1 Mac, replacing Mac with your current machine name.
I believe this problem is caused by changing the machine name in System Preferences.
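A sketch of the fix above: build the /etc/hosts line from the machine's actual hostname (the "Mac" in the error message). It only prints the line, since writing to /etc/hosts requires root:

```shell
# MPICH resolves the local hostname via gethostbyname, so that name must
# appear in /etc/hosts. Construct the loopback mapping for it:
HOSTNAME_NOW=$(hostname)
LINE="127.0.0.1 ${HOSTNAME_NOW}"
echo "$LINE"
# To apply it (needs root):
#   echo "$LINE" | sudo tee -a /etc/hosts
```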

Getting error while starting presto as $ sudo bin/launcher run

When I start Presto like this, it starts fine:
sys2079@sys2079:~/Music/presto-server-0.149$ sudo bin/launcher start
Started as 10672
But when I then try to launch Presto in the foreground, I get this error:
sys2079@sys2079:~/presto-server-0.149$ sudo bin/launcher run
Unrecognized VM option 'G1HeapRegionSize = 32M'
Did you mean 'G1HeapRegionSize='?
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
What do I have to do?
Try changing the memory settings in your etc/config.properties file (under the Presto install directory):
coordinator=true
node-scheduler.include-coordinator=true
http-server.http.port=8080
query.max-memory=5GB
query.max-memory-per-node=1GB
discovery-server.enabled=true
discovery.uri=http://<your IP or *Localhost*>:8080
Hope this helps
The problem is in your jvm.config file inside Presto's etc directory. JVM options must not contain spaces: write G1HeapRegionSize=32M instead of G1HeapRegionSize = 32M.
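For reference, a minimal jvm.config sketch with the option written correctly; one complete JVM flag per line, no spaces around `=`. The heap sizes here are illustrative, not recommendations:

```
-server
-Xmx8G
-XX:+UseG1GC
-XX:G1HeapRegionSize=32M
-XX:+HeapDumpOnOutOfMemoryError
```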
