Administer NFS gateway (Hortonworks stack)

I believe that in the latest version of Ambari (2.1.1) you can administer NFS gateways via the front-end. When I navigate to the HDFS service summary I see the NFSGateways hyperlink (NFSGateways 0/0 Started), but when clicking on this link there is no option to start a new NFS3 gateway, and I have to resort to the command line (hadoop nfs3) to start it up.
I've had a trawl around but can find no documentation on how to start NFS up via Ambari. Does anybody know how I can do this?

The "NFSGateways 0/0 Started" means you have no such components installed.
On hosts page, go to some host page, choose "Add component" and select NFS Gateway. After installing start it. You should be able to use it then
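If you prefer the command line, the same steps can be driven through the Ambari REST API. A minimal sketch, assuming an Ambari server at ambari-host:8080, admin/admin credentials, a cluster named MyCluster and a target host node1.example.com (all placeholders for your own values):

# register the NFS Gateway component on the chosen host
curl -u admin:admin -H "X-Requested-By: ambari" -X POST http://ambari-host:8080/api/v1/clusters/MyCluster/hosts/node1.example.com/host_components/NFS_GATEWAY
# install it
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"HostRoles": {"state": "INSTALLED"}}' http://ambari-host:8080/api/v1/clusters/MyCluster/hosts/node1.example.com/host_components/NFS_GATEWAY
# start it once the install request has finished
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"HostRoles": {"state": "STARTED"}}' http://ambari-host:8080/api/v1/clusters/MyCluster/hosts/node1.example.com/host_components/NFS_GATEWAY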

Related

Ambari metrics collector error

We have a 5-node Hortonworks cluster with Ambari monitors installed on all nodes and the Metrics Collector installed on the master node.
I am getting: Connection failed: [Errno 111] Connection refused to 0.0.0.0:6188
Please see the attached screenshot for the error:
https://drive.google.com/file/d/0B85rPUe3-QPXbXJSRzJmdUwwQU0/view?usp=sharing
I followed the document below and tried removing the service and adding it back:
https://cwiki.apache.org/confluence/display/AMBARI/Moving+Metrics+Collector+to+a+new+host
First of all, I am not able to find the origin of the error. Please share your experience if you have ever faced this problem.
It sometimes happens that the port is already in use by another process when we try to move the collector to a new host with the curl commands specified on the Apache wiki.
Instead of doing it that way, you can leverage the feature Ambari provides in its GUI to move components from one host to another:
the 'Move Master Wizard'.
Follow the steps stated at Move Master Wizard and Ambari will take care of the rest for you.
I fixed this issue by killing the process running on that port and restarting the service. A manual reboot of the machine will also fix it.
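If you would rather find and kill the offending process by hand than reboot, a quick sketch on the collector host (the PID 12345 below is a placeholder for whatever the first command reports):

# find the process holding the Metrics Collector port
netstat -tulpn | grep 6188
# (or: lsof -i :6188)
# kill it, then restart the Metrics Collector from Ambari
kill -9 12345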

Unable to remove a dead host from Ambari

We had some problems with a host and had to shut it down.
Now we are not able to remove that dead host from Ambari.
Whenever we go to Hosts -> click on the dead host -> Host Actions -> Delete Host, we get:
This host cannot be deleted since it has the following master components: DRPC Server, Falcon Server etc.
If I go to those services, all the actions against each service are greyed out, so there is no way I can move them to another host.
Please suggest a way forward. Is handling the sudden death of a host not possible in Ambari?
You can try the Ambari API as explained here. Some features of the Ambari API aren't implemented in the user interface right now.
I remember a case in my company where we couldn't remove a node with the Ambari UI. With an API call like the one explained at this link, it was possible.
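For illustration, a sketch of such API calls, assuming an Ambari server at ambari-host:8080 and a cluster named MyCluster, with deadnode.example.com as the dead host (all placeholders). Each master component has to be removed from the host before the host itself can be deleted:

# delete a component that is stuck on the dead host, e.g. the DRPC Server
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://ambari-host:8080/api/v1/clusters/MyCluster/hosts/deadnode.example.com/host_components/DRPC_SERVER
# repeat for the other components, then delete the host itself
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://ambari-host:8080/api/v1/clusters/MyCluster/hosts/deadnode.example.com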

Cannot install Storm on HDP 2.2 Sandbox

When I access the Ambari dashboard on the HDP 2.2 Sandbox VM from the URL http://127.0.0.1:8080/ it shows all the services with a "?" symbol, including Storm. When I try to install, I can't see the install wizard link anywhere, and if I go to the URL http://127.0.0.1:8080/#/installer/step0 it returns to the dashboard.
In the tutorial they just open the dashboard and everything is set up, but I'm unable to start my Storm cluster.
Thanks in advance.
You shouldn't have to install anything.
On the services (or hosts) page, you should find an Actions button that lets you stop and then start all services. Sometimes a particular service doesn't start well and you'll need to dig into why that's happening.
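If the UI actions stay greyed out, the same "start everything" request can be sent straight to the Ambari REST API. A sketch, assuming the sandbox's default cluster name Sandbox and admin/admin credentials:

# ask Ambari to bring every service to the STARTED state
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo": {"context": "Start All Services"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' http://127.0.0.1:8080/api/v1/clusters/Sandbox/services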

Use spark-submit to submit an application to an EC2 cluster

I am new to Spark and I am trying to run it on EC2. I followed the tutorial on the Spark webpage, using spark-ec2 to launch a Spark cluster. Then I tried to use spark-submit to submit an application to the cluster. The command looks like this:
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://ec2-54-88-9-74.compute-1.amazonaws.com:7077 --executor-memory 2G --total-executor-cores 1 ./examples/target/scala-2.10/spark-examples_2.10-1.0.0.jar 100
However, I got the following error:
ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
Please let me know how to fix it. Thanks.
You're seeing this issue because the master node of your Spark standalone cluster can't open a TCP connection back to the driver (on your machine). The default mode of spark-submit is client, which runs the driver on the machine that submitted it.
A new cluster mode was added to spark-deploy that submits the job to the master, where the driver is then run inside the cluster, removing the need for a direct connection. Unfortunately this mode is not supported for standalone clusters.
You can vote for the JIRA issue here: https://issues.apache.org/jira/browse/SPARK-2260
Tunneling your connection via SSH is possible but latency would be a big issue since the driver would be running locally on your machine.
I'm curious whether you are still having this issue... but in case anyone is asking, here is a brief answer. As clarified by jhappoldt, the master node of your Spark standalone cluster can't open a TCP connection back to the driver (on your local machine). Two workarounds are possible; both were tested and succeeded.
(1) From the EC2 Management Console, create a new security group and add rules to enable TCP back and forth from your PC (public IP); what I did was add TCP rules inbound and outbound. Then attach this security group to your master instance (right click --> Networking --> Change security groups). Note: add it alongside the already established security groups, don't remove them.
This solution works well, but in your specific scenario (deploying your application from a local machine to an EC2 cluster) you will face further, resource-related problems, so the next option is the better one:
(2) Copy your .jar file (or .egg) to the master node using scp. You can check this link http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html for information about how to do that, and deploy your application from the master node, as sketched below. Note: Spark is already pre-installed there, so you do nothing but write the exact same command you would write on your local machine, from ~/spark/bin. This should work perfectly.
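A sketch of option (2), reusing the master address from the question and assuming your key pair sits at ~/mykey.pem (the key path and file locations are placeholders; spark-ec2 clusters log in as root):

# copy the jar to the master node
scp -i ~/mykey.pem ./examples/target/scala-2.10/spark-examples_2.10-1.0.0.jar root@ec2-54-88-9-74.compute-1.amazonaws.com:~/
# log in and submit from there
ssh -i ~/mykey.pem root@ec2-54-88-9-74.compute-1.amazonaws.com
~/spark/bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://ec2-54-88-9-74.compute-1.amazonaws.com:7077 --executor-memory 2G --total-executor-cores 1 ~/spark-examples_2.10-1.0.0.jar 100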
Are you executing the command on your local machine or on the created EC2 node? If you're doing it locally, make sure port 7077 is open in the security settings, as it's closed to the outside by default.
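For reference, opening that port can also be done from the command line with the AWS CLI instead of the console (a sketch; the security group name and IP are placeholders, spark-ec2 typically names the master group <cluster-name>-master):

aws ec2 authorize-security-group-ingress --group-name my-cluster-master --protocol tcp --port 7077 --cidr 203.0.113.5/32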

Apache installation issue (in windows 7)

I want to install the Apache web server. I tried it in the following way.
Step 1 :
I downloaded the Apache web server from the following link.
Step 2:
I installed it in the following way:
Click on Run --> Next --> Accept Terms --> Next --> Next.
Click on Next --> Next --> Change...
Click OK --> Next --> Install. Here I faced a problem: I got two screens.
After 23 seconds it showed a Finish button on the main screen, and I clicked on that.
To confirm whether it is working or not, I opened the following URL:
http://localhost/
It's not working.
Again, to confirm, I opened the following window from the right-hand corner of my system.
How can I fix this?
Can you suggest something?
Port 80 is already in use by another application. You have two options:
Deactivate the application that runs on port 80 (IIS, Skype, ...), or
change the Apache port from 80 to another value.
There is "something" already listening on port 80 of your system, which will prevent Apache from starting. It could be Skype, or a web-server like IIS or Tomcat.
If you open the command-line (cmd.exe) and run netstat -ona, and look for the local lines that have port 80 in them (ex: 0.0.0.0:80, 127.0.0.1:80), you can then cross-reference the PID of that "something" with Task Manager's Process list (press Ctrl-Shift-Esc).
Then you can attempt to disable it (if it's a Service) or remove/uninstall it.
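A concrete sketch of that lookup (the PID 1234 is a placeholder for whatever netstat reports on your machine):

rem list the sockets bound to port 80 and note the PID in the last column
netstat -ona | findstr :80
rem look up which process that PID belongs to
tasklist /FI "PID eq 1234"

If it turns out to be IIS, for example, net stop W3SVC will stop the World Wide Web Publishing Service and free the port.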
A couple of other issues:
You are downloading Apache 2.0, which is completely outdated. You should be using at least Apache 2.2.
Your Apache download will not come with PHP; you'll only be able to serve HTML pages. Nor will it be configured for security, performance, multi-site, etc.
Unless you have a reason not to, try a WAMP (Windows, Apache, MySQL, PHP) distribution/package that has everything already set up for you. The popular free ones that come to mind are XAMPP and WampServer, and there are commercial ones like Wamp-Developer, and also a few others not mentioned that you can find recommended here on Stack Overflow.
Use netstat -bano in an elevated command prompt to see which apps are listening on which ports.
Usually the following applications use port 80 on Windows:
IIS
World Wide Web Publishing service
IIS Admin Service
SQL Server Reporting services
Web Deployment Agent Service
Stop the above applications if they are running and check again.
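If you would rather move Apache than stop those services, change the Listen directive in httpd.conf and restart Apache (the path below is a typical default and may differ on your install):

# in C:\Program Files\Apache Group\Apache2\conf\httpd.conf
Listen 8080

Apache will then answer at http://localhost:8080/ instead of http://localhost/.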
