How to find the cluster name in Hadoop environment? - hadoop

I am using CDH 5.5 and want to know whether there is a command or another way to find the cluster name.
I am actually trying to execute the API call below and it throws an error.
curl -u admin:admin 'http://localhost:7180/api/v1/clusters/namenode241'
error:
{
"message" : "Cluster 'namenodee241' not found."
}

Your command is correct except for the last part: it includes the cluster name namenode241. Remove that and execute
curl -u admin:admin 'http://<hostname>:7180/api/v1/clusters'
(use localhost or the Cloudera Manager host name). This lists every cluster together with its name.
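If you can't reach Cloudera Manager from your terminal, the extraction step can still be tried in isolation. A minimal sketch, assuming a made-up /api/v1/clusters response (a real response carries more fields per cluster), that pulls the cluster names out with sed:

```shell
# Fake /api/v1/clusters response for illustration only.
cat > /tmp/clusters.json <<'EOF'
{ "items" : [ { "name" : "Cluster 1", "version" : "CDH5" } ] }
EOF

# Extract every "name" value from the saved response.
sed -n 's/.*"name" *: *"\([^"]*\)".*/\1/p' /tmp/clusters.json
```

Against a live Cloudera Manager you would pipe the curl output into the same sed filter instead of reading a saved file.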

Can't you just access Cloudera Manager? It usually runs on port 7180 and shows the cluster name right on the home page, where it appears as, e.g., "Cluster 1".

Related

Pentaho execute shell component not working for CURL

I'm trying to download a file to my local path from Azure Storage Account.
The curl command works fine at the command prompt, but it does not work from the Pentaho Execute Shell component:
curl -H "x-ms-version: 2017-11-09" -X GET "STORAGE_ACCOUNT_NAME/BLOBNAME?STORAGE_ACCOUNT_KEY" -o "C:/Users/FileName8.txt"
I'm getting below error from pentaho
{"error":{"code":"AuthenticationFailed","message":"Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.\nRequestId:65018b37-401f-0021-5f01-2fe938000000\nTime:2023-01-23T08:03:18.5896810Z"}}
I expect curl to behave the same way it does in CMD.
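A common cause is that the SAS query string contains ? and &, which the shell treats as special characters unless the URL is quoted; when Pentaho assembles the command line, that quoting can get lost. A sketch (storage account, blob, and token are placeholders) showing the quoting that keeps the URL intact:

```shell
# The '&' in the SAS token splits the command in cmd/sh unless the
# whole URL is kept inside quotes.
URL='https://STORAGE_ACCOUNT.blob.core.windows.net/container/BLOBNAME?sv=2017-11-09&sig=SIGNATURE'
echo "$URL"    # the part after '&' survives only because of the quotes

# With a real token, the download itself would be:
# curl -H "x-ms-version: 2017-11-09" -X GET "$URL" -o "C:/Users/FileName8.txt"
```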

migration from Cloudera Hadoop to HDINSIGHT

I have HQL scripts that I used to run on Cloudera with hive -f scriptname.hql. Now I want to run these scripts on HDInsight (Hadoop cluster), but the hive command line is not available there. Can someone guide me on how to do that?
beeline -u 'jdbc:hive2://headnodehost:10001/;transportMode=http' -i query.hql
Does anyone have experience using the above rather than
hive -f query.hql
I don't see any other way to execute the HQL files. You can refer to this document: https://learn.microsoft.com/en-us/azure/hdinsight/hadoop/apache-hadoop-use-hive-beeline#run-a-hiveql-file
You can also use the ZooKeeper quorum to avoid query failures during head-node failover:
beeline -u '<zookeeper quorum>' -i /path/query.hql
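As a sketch of what such a quorum URL looks like (the hostnames below are placeholders; serviceDiscoveryMode and zooKeeperNamespace are the parameters HiveServer2 service discovery commonly expects):

```shell
# Placeholder quorum; on HDInsight these are the zk*-... hosts.
ZK='zk0.example.com:2181,zk1.example.com:2181,zk2.example.com:2181'
URL="jdbc:hive2://$ZK/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"
echo "$URL"

# beeline -u "$URL" -i /path/query.hql
```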
Create an environment variable:
export hivef="beeline -u 'jdbc:hive2://hn0-hdi-uk.witechmill.co.uk:10001/default;principal=hive/_HOST@WITECHMILL.CO.UK;auth=kerberos;transportMode=http' -n umerrkhan "
(witechmill is my cluster name)
Then call the script by using the below
$hivef scriptname.hql

XDMoD : error "no aggregate table. have you ingested your data"

I am running XDMoD from http://open.xdmod.org/
on a CentOS virtual machine. The command
xdmod-shredder -v -r resource -f pbs -d /var/spool/torque/server_priv/accounting
runs correctly, and so does the ingester command
xdmod-ingestor -v
But when I open the web browser, an error message still shows up saying "have you ingested your data". The cause is still unknown. Kindly help.

How to test if hbase is correctly running

I just installed HBase on an EC2 server (I also have HDFS installed; it's working).
My problem is that I don't know how to check whether HBase is correctly installed.
To install HBase I followed a tutorial which says the HBase instance can be checked in the web UI at addressOfMyMachine:60010. I also checked port 16010, but that is not working either.
I have an error saying this :
Sorry, the page you are looking for is currently unavailable.
Please try again later.
If you are the system administrator of this resource then you should check the error log for details.
I managed to run the hbase shell but I don't know if my installation is working well.
To check from a shell script whether HBase is running, execute the command below:
if echo "list" | hbase shell 2>&1 | grep -q "ERROR:"; then echo "HBase is not running"; fi
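The decision can also be split out into a small function that reads the shell output from stdin; this is a sketch (it relies on the same ERROR marker assumption the one-liner makes), and the parsing half can be exercised without a live cluster:

```shell
# Decide from `hbase shell` output (read on stdin) whether HBase is up.
hbase_status_from_output() {
  if grep -q "ERROR"; then
    echo "HBase is not running"
  else
    echo "HBase is running"
  fi
}

# Usage against a real cluster:
# echo "list" | hbase shell 2>&1 | hbase_status_from_output
```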

how to use curl --ftp-create-dirs?

Background
I have been searching the Internet trying to find an example of --ftp-create-dirs.
Overall my goal is to use "--ftp-create-dirs" to automatically create the necessary folders if they are not present when I upload my file.
Problem
The problem is I don't know the exact syntax for properly using --ftp-create-dirs, can someone help me with this?
My current curl:
curl -k -T 000-0000-0000-000.png -u [username]:[pass] --ftp-create-dirs /test --ftp-ssl ftp:[ftp server]
In the example above, I am trying to upload the .png image and create /test on the ftp server if it does not exist.
To add a new directory via FTP:
curl ftp://username:password@10.10.10.10/homes/back/newdir/ --ftp-create-dirs
Just putting this in here for future reference (and because I keep making the same mistake that I just saw in your code): it is important to end your folder name with a / (slash). Otherwise, curl will create a file, not a folder. Here is the command I used:
curl -T path/to/local_file.txt ftp://1.2.3.4/my_new_folder/ --ftp-create-dirs -u username:password
This will move local_file.txt to my_new_folder on the FTP server. The folder will be created if it doesn't exist, otherwise, the command will simply be ignored.
If there are any issues with creating the folder, curl will return error number 9 (see https://curl.haxx.se/libcurl/c/libcurl-errors.html). This can happen if a file with the same name as the new folder already exists in the given directory:
curl: (9) Failed to MKD dir: 451
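The trailing-slash rule above can be wrapped in a tiny hypothetical helper that normalizes the target URL before handing it to curl:

```shell
# Append a trailing '/' if missing, so curl treats the last path
# segment as a directory to create rather than a file name.
ensure_dir_url() {
  case "$1" in
    */) printf '%s\n' "$1" ;;
    *)  printf '%s/\n' "$1" ;;
  esac
}

# curl -T local_file.txt -u username:password --ftp-create-dirs "$(ensure_dir_url ftp://1.2.3.4/my_new_folder)"
```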
You don't need to put /test after --ftp-create-dirs. It's just a flag, similar to your -k, and doesn't take a value.