While installing Oracle 12c Grid Infrastructure I received an error and was told to run the following command:
/u01/app/12.1.0/grid/oui/bin/runInstaller -updateNodeList -setCustomNodelist -noClusterEnabled ORACLE_HOME=/u01/app/12.1.0/grid CLUSTER_NODES=kaash-his-1,kaash-his-2 "NODES_TO_SET={kaash-his-1,kaash-his-2}" CRS=false "INVENTORY_LOCATION=/u01/app/oraInventory" LOCAL_NODE=kaash-his-2
When I run the command, it fails.
-bash-4.2$ /u01/app/12.1.0/grid/oui/bin/runInstaller -updateNodeList -setCustomNodelist -noClusterEnabled ORACLE_HOME=/u01/app/12.1.0/grid CLUSTER_NODES=kaash-his-1,kaash-his-2 "NODES_TO_SET={kaash-his-1,kaash-his-2}" CRS=false "INVENTORY_LOCATION=/u01/app/oraInventory" LOCAL_NODE=kaash-his-2
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 28607 MB Passed
Exception oracle.sysman.oii.oiil.OiilNativeException: S_OWNER_SYSTEM_EPERM occurred..
oracle.sysman.oii.oiil.OiilNativeException: S_OWNER_SYSTEM_EPERM
at oracle.sysman.oii.oiip.osd.unix.OiipuUnixOps.chgrp(Native Method)
at oracle.sysman.oii.oiip.oiipg.OiipgBootstrap.changeGroup(OiipgBootstrap.java:1468)
at oracle.sysman.oii.oiip.oiipg.OiipgBootstrap.writeInvLoc(OiipgBootstrap.java:1113)
at oracle.sysman.oii.oiip.oiipg.OiipgBootstrap.updateInventoryLoc(OiipgBootstrap.java:463)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.createInventory(OiiiInstallAreaControl.java:5394)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.initAreaControlWithAccessCheck(OiiiInstallAreaControl.java:1826)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initAreaControl(OiicStandardInventorySession.java:316)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:276)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:238)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:187)
at oracle.sysman.oii.oiic.OiicBaseInventoryApp.getInstallAreaControl(OiicBaseInventoryApp.java:993)
at oracle.sysman.oii.oiic.OiicBaseInventoryApp.main_helper(OiicBaseInventoryApp.java:756)
at oracle.sysman.oii.oiic.OiicUpdateNodeList.main(OiicUpdateNodeList.java:492)
'UpdateNodeList' failed.
'UpdateNodeList' failed.
This is the log file:
[grid@KAASH-HIS-2 logs]$ cat UpdateNodeList2023-02-12_12-11-10AM.log
The file oraparam.ini could not be found at /u01/app/12.1.0/grid/oui/bin/oraparam.ini
Using paramFile: /u01/app/12.1.0/grid/oui/oraparam.ini
Checking swap space: must be greater than 500 MB. Actual 28607 MB Passed
Execvp of the child jre : the cmdline is ../../jdk/jre/bin/java, and the argv is
../../jdk/jre/bin/java
-Doracle.installer.library_loc=../lib/linux64
-Doracle.installer.oui_loc=/u01/app/12.1.0/grid/oui/bin/..
-Doracle.installer.bootstrap=FALSE
-Doracle.installer.startup_location=/u01/app/12.1.0/grid/oui/bin
-Doracle.installer.jre_loc=../../jdk/jre
-Doracle.installer.custom_inventory=/u01/app/oraInventory
-Doracle.installer.nlsEnabled="TRUE"
-Doracle.installer.prereqConfigLoc=
-Doracle.installer.unixVersion=5.4.17-2136.315.5.el7uek.x86_64
-Xms150m
-Xmx256m
-XX:MaxPermSize=128M
-cp
/tmp/OraInstall2023-02-12_12-11-10AM::/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/wsclient_extended.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/emCoreConsole.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/jsch.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/remoteinterfaces.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/OraPrereq.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/orai18n-utility.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/OraPrereqChecks.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/adf-share-ca.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/jmxspi.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/instcommon.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/installcommons_1.0.0b.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/ojdbc6.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/instcrs.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/cvu.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/entityManager_proxy.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/javax.security.jacc_1.0.0.0_1-1.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/prov_fixup.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/orai18n-mapping.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/emca.jar:/u01/app/12.1.0/grid/oui/bin//../../inventory/Scripts/ext/jlib/ssh.jar:../jlib/OraInstaller.jar:../jlib/oneclick.jar:../jlib/xmlparserv2.jar:../jlib/share.jar:../jlib/OraInstallerNet.jar:../jlib/emCfg.jar:../jlib/emocmutl.jar:../jlib/OraPrereq.jar:../jlib/jsch.jar:../jlib/ssh.jar:../jlib/remoteinterfaces.jar:../jlib/http_client.jar:../jlib/OraSuiteInstaller.jar:../jlib/opatch.jar:../jlib/opatchactions.jar:../jlib/opatchprereq.jar:../jlib/opatchutil.jar:../jlib/OraCheckPoint.jar:../jlib/InstImages.jar:../jlib/InstHelp.jar:../jlib/InstHelp_de.jar:../jlib/InstHelp_es.jar:../jlib/InstHelp_fr.jar:../jlib/InstHelp_it.jar:../jlib/InstHelp_ja.jar:../jlib/InstHelp_ko.jar:../jlib/InstHelp_pt_BR.jar:../jlib/InstHelp_zh_CN.jar:../jlib/InstHelp_zh_TW.jar:../jlib/oracle_ice.jar:../jlib/help-share.jar:../jlib/ohj.jar:../jlib/ewt3.jar:../jlib/ewt3-swingaccess.jar:../jlib/swingaccess.jar::../jlib/jewt4.jar:../jlib/orai18n-collation.jar:../jlib/orai18n-mapping.jar:../jlib/ojmisc.jar:../jlib/xml.jar:../jlib/srvm.jar:../jlib/srvmasm.jar
oracle.sysman.oii.oiic.OiicUpdateNodeList
-scratchPath
/tmp/OraInstall2023-02-12_12-11-10AM
-sourceType
network
-timestamp
2023-02-12_12-11-10AM
-updateNodeList
-setCustomNodelist
-noClusterEnabled
ORACLE_HOME=/u01/app/12.1.0/grid
CLUSTER_NODES=kaash-his-1,kaash-his-2
NODES_TO_SET={kaash-his-1,kaash-his-2}
CRS=false
INVENTORY_LOCATION=/u01/app/oraInventory
LOCAL_NODE=kaash-his-2
This is the id command:
[grid@KAASH-HIS-2 logs]$ id
uid=54323(grid) gid=1002(oinstall) groups=1002(oinstall),1004(dba),54323,54324 context=system_u:system_r:unconfined_service_t:s0
This is the command I need to run before continuing the setup:
/u01/app/12.1.0/grid/oui/bin/runInstaller -jreLoc /u01/app/12.1.0/grid/jdk/jre -paramFile /u01/app/12.1.0/grid/oui/clusterparam.ini -silent -ignoreSysPrereqs -updateNodeList -bigCluster ORACLE_HOME=/u01/app/12.1.0/grid CLUSTER_NODES=kaash-his-2 "NODES_TO_SET={kaash-his-1,kaash-his-2}" -invPtrLoc "/u01/app/12.1.0/grid/oraInst.loc" -local
When I execute the command, I get this error:
The operation failed as it was called without path of Oracle Home being attached
How can I attach the Oracle home and execute the command, please?
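An answer-style sketch, not a verified fix: the stack trace shows a native chgrp call failing while OUI writes the inventory location, so first check that the central inventory is writable by the installing user's group (grid:oinstall here). If the Grid home is simply not registered in the central inventory, OUI's documented -attachHome mode can attach it; ORACLE_HOME_NAME below is an assumed example name, not taken from the question:

ls -ld /u01/app/oraInventory   # should be owned by group oinstall and group-writable
/u01/app/12.1.0/grid/oui/bin/runInstaller -silent -ignoreSysPrereqs -attachHome ORACLE_HOME=/u01/app/12.1.0/grid ORACLE_HOME_NAME=OraGI12Home1 "CLUSTER_NODES={kaash-his-1,kaash-his-2}" LOCAL_NODE=kaash-his-2 INVENTORY_LOCATION=/u01/app/oraInventory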
Related
Getting the below error while starting the database:
startup
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/mis/PARAMETERFILE/spfile.276.967375255'
ORA-17503: ksfdopn:10 Failed to open file +DATA/mis/PARAMETERFILE/spfile.276.967375255
ORA-04031: unable to allocate 56 bytes of shared memory ("shared pool","unknown object","KKSSP^24","kglseshtSegs")
Your database cannot find the SPFILE (the newer init.ora) with the actual system parameters inside ASM, or has no permission to access it.
Either your Grid Infrastructure stack or the dbs/spfile.ora is pointing to the wrong file.
To find out what the Grid Infrastructure stack is using, run srvctl, which should display the parameter file name the database should be using:
srvctl config database -d <dbname>
...
Spfile: +DATA/<dbname>/PARAMETERFILE/spfile.269.1066152225
...
Then check (as the grid user), using asmcmd, whether the file is indeed visible:
asmcmd
ASMCMD> ls +DATA/<dbname>/PARAMETERFILE/
spfile.269.1066152225
If the names differ, then you have found the issue (and you have to point the database to the correct file).
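As a hedged sketch of repointing it (the spfile path below is the example name from the srvctl output above; verify the real path with asmcmd first, and note that older srvctl releases use -p instead of -spfile):

srvctl modify database -d <dbname> -spfile +DATA/<dbname>/PARAMETERFILE/spfile.269.1066152225
srvctl config database -d <dbname>   # confirm the new spfile setting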
If the name is correct, then it could be wrong permissions on the oracle executable(s) (check My Oracle Support):
RAC Database Can't Start: ORA-01565, ORA-17503: ksfdopn:10 Failed to open file +DATA/BPBL/spfileBPBL.ora (Doc ID 2316088.1)
I am updating IBM BPM 8.6.0 to IBM Business Automation Workflow Version 18.0.0.2. After updating the fix pack for IBAW, when I run the below command I get an error.
BPMGenerateUpgradeSchemaScripts.bat -profileName Node1Profile -de ProcessCenter
Below is the error that comes up when running the above command.
Unable to find the response file
C:\IBM\BPM\v8.6\profiles\Node1Profile\config\cells\PCCell1\ProcessCenter_CaseManagerConfig.properties
Unable to find the file C:\IBM\BPM\v8.6\profiles\Node1Profile\config\cells\PCCell1\ProcessCenter_CaseManagerConfig.properties, please run the command 'BPMConfig -update -profile deployment_manager_profile -de deployment_environment_name -caseConfigure' to collect the configuration information for the content data sources, please read the knowledge center for details.
java.io.FileNotFoundException: C:\IBM\BPM\v8.6\profiles\Node1Profile\config\cells\PCCell1\ProcessCenter_CaseManagerConfig.properties (The system cannot find the file specified.)
CWMCO6007E: The BPMGenerateUpgradeSchemaScripts command could not complete successfully. The following exception occurred :
Faild to initialize the CommonInfo. java.io.FileNotFoundException: C:\IBM\BPM\v8.6\profiles\Node1Profile\config\cells\PCCell1\ProcessCenter_CaseManagerConfig.properties (The system cannot find the file specified.)
The command the error asks to run first is at point 11 in the upgrade guide; can someone please suggest what's wrong with this?
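For reference, this is the command the error message asks to run, with the profile and deployment environment names substituted from the BPMGenerateUpgradeSchemaScripts call above (a sketch that assumes Node1Profile is the deployment manager profile; adjust the names to your topology):

BPMConfig -update -profile Node1Profile -de ProcessCenter -caseConfigure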
I am running Hive SQL on YARN.
It is throwing an error with a join condition. I am able to create external as well as internal tables, but it fails to create a table when I use the command
create table as AS SELECT name from student.
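For reference, a complete CTAS statement needs a target table name and a single AS. A minimal, hypothetical sketch run from the shell (the target name student_names is made up; the source table student is from the question):

hive -e "CREATE TABLE student_names AS SELECT name FROM student;"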
When running the same query through the Hive CLI it works fine, but with the Spring job it throws an error:
2016-03-28 04:26:50,692 [Thread-17] WARN org.apache.hadoop.hive.shims.HadoopShimsSecure - Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
Task with the most failures(4):
-----
Task ID:
task_1458863269455_90083_m_000638
-----
Diagnostic Messages for this Task:
AttemptID:attempt_1458863269455_90083_m_000638_3 Timed out after 1 secs
2016-03-28 04:26:50,842 [main] INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl - Killed application application_1458863269455_90083
2016-03-28 04:26:50,849 [main] ERROR com.mapr.fs.MapRFileSystem - Failed to delete path maprfs:/home/pro/amit/warehouse/scratdir/hive_2016-03-28_04-24-32_038_8553676376881087939-1/_task_tmp.-mr-10003, error: No such file or directory (2)
2016-03-28 04:26:50,852 [main] ERROR org.apache.hadoop.hive.ql.Driver - FAILED: Execution Error, return code 2 from
As per my findings, I think there is some issue with the scratch dir.
Kindly suggest if anyone has faced the same issue.
This issue occurs if the recursive directory does not exist. Hive does not automatically create directories recursively.
Please check the existence of the directories from the root down to the child/table level.
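A hedged sketch of the corresponding check and fix, using the scratch-dir path from the error above (hive.exec.scratchdir is the standard Hive property that controls this location):

hadoop fs -ls /home/pro/amit/warehouse/scratdir        # verify the directory exists
hadoop fs -mkdir -p /home/pro/amit/warehouse/scratdir  # -p creates missing parent directories recursively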
I faced a similar issue while running the below Hive query
select * from <db_name>.<internal_tbl_name> where <field_name_of_double_type> in (<list_of_double_values>) order by <list_of_order_fields> limit 10;
I performed an explain on the above statement and below was the result.
fs.FileUtil: Failed to delete file or dir [/hdfs/Hadoop_Misc_Logs/Edge01/local_scratch/<hive_username>/41289638-cd53-4d4b-88c9-3359e9ec99e2/hive_2017-05-08_04-26-36_658_6626096693992380903-1/.nfs0000000057b93e2d00001590]: it still exists.
2017-05-08 04:26:37,969 WARN [41289638-cd53-4d4b-88c9-3359e9ec99e2 main] fs.FileUtil: Failed to delete file or dir [/hdfs/Hadoop_Misc_Logs/Edge01/local_scratch/<hive_username>/41289638-cd53-4d4b-88c9-3359e9ec99e2/hive_2017-05-08_04-26-36_658_6626096693992380903-1/.nfs0000000057b93e2700001591]: it still exists.
Time taken: 0.886 seconds, Fetched: 24 row(s)
And I checked the logs through
yarn logs -applicationId application_1458863269455_90083
The error happened after a MapR upgrade by the admin team. It is probably due to some upgrade or installation issue with the Tez configuration (as suggested by line 873 in the log below). Or possibly the Hive query syntactically does not support the Tez optimization; I say so because another Hive query on an external table runs fine in my case. I have to check a bit deeper, though.
Though not sure, the error line in the logs that looks most relevant is as follows:
2017-05-08 00:01:47,873 [ERROR] [main] |web.WebUIService|: Tez UI History URL is not set
Solution:
It is probably happening due to some open files or applications that are holding resources. Please check https://unix.stackexchange.com/questions/11238/how-to-get-over-device-or-resource-busy
You can run explain <your_Hive_statement>.
In the resulting execution plan, you can come across the filenames/dirs that the Hive execution engine fails to delete, e.g.:
2017-05-08 04:26:37,969 WARN [41289638-cd53-4d4b-88c9-3359e9ec99e2 main] fs.FileUtil: Failed to delete file or dir [/hdfs/Hadoop_Misc_Logs/Edge01/local_scratch/<hive_username>/41289638-cd53-4d4b-88c9-3359e9ec99e2/hive_2017-05-08_04-26-36_658_6626096693992380903-1/.nfs0000000057b93e2d00001590]: it still exists.
Go to the path given in step 2, e.g. /hdfs/Hadoop_Misc_Logs/Edge01/local_scratch/<hive_username>/41289638-cd53-4d4b-88c9-3359e9ec99e2/hive_2017-05-08_04-26-36_658_6626096693992380903-1/
In that path, ls -a will list the leftover files, and lsof +D /path will show the process IDs keeping them open and blocking the delete.
If you run ps -ef | grep <pid>, you get:
hive_username <pid> 19463 1 05:19 pts/8 00:00:35 /opt/mapr/tools/jdk1.7.0_51/jre/bin/java -Xmx256m -Dhiveserver2.auth=PAM -Dhiveserver2.authentication.pam.services=login -Dmapr_sec_enabled=true -Dhadoop.login=maprsasl -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/opt/mapr/hadoop/hadoop-2.7.0/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/mapr/hadoop/hadoop-2.7.0 -Dhadoop.id.str=hive_username -Dhadoop.root.logger=INFO,console -Djava.library.path=/opt/mapr/hadoop/hadoop-2.7.0/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx512m -Dlog4j.configurationFile=hive-log4j2.properties -Dlog4j.configurationFile=hive-log4j2.properties -Djava.util.logging.config.file=/opt/mapr/hive/hive-2.1/bin/../conf/parquet-logging.properties -Dhadoop.security.logger=INFO,NullAppender -Djava.security.auth.login.config=/opt/mapr/conf/mapr.login.conf -Dzookeeper.saslprovider=com.mapr.security.maprsasl.MaprSaslProvider -Djavax.net.ssl.trustStore=/opt/mapr/conf/ssl_truststore org.apache.hadoop.util.RunJar /opt/mapr/hive/hive-2.1//lib/hive-cli-2.1.1-mapr-1703.jar org.apache.hadoop.hive.cli.CliDriver
CONCLUSION:
The Hive CliDriver process above clearly shows that running queries on "Hive on Spark" (i.e. managed) tables through the Hive CLI is no longer supported from Hive 2.0 onwards and is going to be deprecated going forward. You have to use HiveContext in Spark to run Hive queries. But you can still run queries on Hive external tables through the Hive CLI.
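To illustrate that route, a minimal sketch using the stock spark-sql CLI instead of the Hive CLI (this assumes Spark was built with Hive support; the table name comes from the earlier query):

spark-sql -e "SELECT name FROM student LIMIT 10;"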
I am trying to install RHadoop on top of my Hadoop cluster. While installing some of the required packages I am facing the following error:
> install.packages("Megh/rmr2_3.3.1.tar.gz")
Installing package into ‘/usr/lib64/R/library’
(as ‘lib’ is unspecified)
inferring 'repos = NULL' from 'pkgs'
Error in rawToChar(block[seq_len(ns)]) :
embedded nul in string: 'rmr2/man/fromdfstodfs.Rd\0\0\0\0erties\n i-_". '
Warning message:
In install.packages("Megh/rmr2_3.3.1.tar.gz") :
installation of package ‘Megh/rmr2_3.3.1.tar.gz’ had non-zero exit status
>
> install.packages("Megh/plyrmr_0.6.0.tar.gz")
Installing package into ‘/usr/lib64/R/library’
(as ‘lib’ is unspecified)
inferring 'repos = NULL' from 'pkgs'
Warning in untar2(tarfile, files, list, exdir, restore_times) :
checksum error for entry 'plyrmr/man/as.data.framed'
Warning in readBin(con, "raw", n = 512L) :
invalid or incomplete compressed data
Error in untar2(tarfile, files, list, exdir, restore_times) :
incomplete block on file
Warning message:
In install.packages("Megh/plyrmr_0.6.0.tar.gz") :
installation of package ‘Megh/plyrmr_0.6.0.tar.gz’ had non-zero exit status
I have also installed RHive on the cluster. I'm able to execute relatively small queries through RHive, but larger queries fail:
> rhive.query("SELECT COUNT(*) FROM tradehistory")
Error: java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
> rhive.query("SELECT tradeno FROM tradehistory LIMIT 10")
tradeno
1 34232193
2 34232198
3 34232199
4 34232200
5 34232201
6 34232202
7 34232203
8 34232204
9 34232205
10 34232206
If anybody has any idea please help me out with this! Thanks a lot in advance!
For the installation error that I was facing, I figured out that it was an issue with the tar file.
I had downloaded the tar file on a Windows system and was transferring it to my cluster using WinSCP.
For transferring zip/archive files, binary transfer mode should be used; otherwise some bytes of the tar file can be lost in transit.
That, in turn, results in the error.
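A quick way to confirm this kind of corruption, as a sketch (the file path is the one from the question; compare the checksum against that of the original download):

md5sum Megh/rmr2_3.3.1.tar.gz
tar -tzf Megh/rmr2_3.3.1.tar.gz > /dev/null && echo OK   # a damaged archive fails to list its contents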
In the case of Tez, if a query has to invoke multiple MapReduce tasks, it cannot execute without proper authorization.
So when I tried the same RHive query while supplying the username and password, I was able to achieve the desired results.
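The same idea, shown with the stock beeline client rather than RHive (a sketch only; host, port, and credentials are placeholders for your HiveServer2 setup):

beeline -u jdbc:hive2://<hiveserver2_host>:10000/default -n <username> -p <password> -e "SELECT COUNT(*) FROM tradehistory;"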
I am trying to run cellcli on one of my Exadata cell servers.
When I log in to the server, I am able to see all the files as expected
(like: all_group, all_nodelist_group, cell_group, all_ib_group, etc.)
When I issue the command to start cellcli, it gives me a 'command not found' error:
# cellcli
-bash: cellcli: command not found
# which cellcli
which: no cellcli in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin)
Any idea what the location of the cellcli executable is on Exadata?
Do I need to export any other path to get this command?
cellcli is in /opt/oracle/cell/cellsrv/bin. It should be put into the PATH by /etc/profile.d/cell_env.sh.
(from Marc Fielding)
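If that profile script has not been sourced in your current shell, a manual workaround is a sketch like this (list cell is a basic CellCLI smoke test):

export PATH=$PATH:/opt/oracle/cell/cellsrv/bin
cellcli -e list cell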