I have installed the Knox server and done all the steps mentioned on the Hortonworks site.
When I run the command below on the sandbox, it gives me the proper output:
curl http://sandbox:50070/webhdfs/v1?op=GETHOMEDIRECTORY
Now I have another VM running Fedora. I am treating it as an external client and trying external access, but I get no output:
curl -k https://<sandbox-ip>:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
Can someone point out what's wrong with my settings?
Not sure about your topology, but if you are using the default one (sandbox) you probably need to add basic auth, e.g.
curl -k -u guest:guest-password -X GET https://<sandbox-ip>:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
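For reference, the guest/guest-password credentials come from the ShiroProvider in the default sandbox topology, backed by Knox's demo LDAP server. The relevant section of <knox_install>/conf/topologies/sandbox.xml looks roughly like this (trimmed; parameter values may differ between Knox versions):
<provider>
  <role>authentication</role>
  <name>ShiroProvider</name>
  <enabled>true</enabled>
  <param>
    <name>main.ldapRealm</name>
    <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
  </param>
  <param>
    <name>urls./**</name>
    <value>authcBasic</value>
  </param>
</provider>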
Also check the logs at
<knox_install>/logs/gateway.log
They should tell you more about what went wrong.
Good luck!
I have an app running on Heroku, and I need to download a file from an FTP server, but I need to do it from a fixed IP. I'm using www.quotaguard.com to get fixed IPs.
But I can't get it working.
Does anyone have a Ruby example of downloading a file from an FTP server via a proxy (QuotaGuard)?
Both the proxy server and the FTP server require a username and password.
I've tried everything using Ruby, and also calling wget from system to initiate the download, but wget apparently doesn't go through the proxy. I've also checked many posts, but no success so far.
I'm using Ruby 2.4.5.
Thanks for any comments.
Thank you QuotaGuard. Socksify is really old and not maintained any more; we gave it a try but didn't want to spend much time on it.
We actually managed to get this working with curl. You can call it within Heroku as well.
Here's the command in case anyone wonders:
curl -x socks5h://socksproxyurl 'ftp://theftp/some.pdf' --user "ftp_user:ftp_pass" -o some.pdf
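If you need to trigger that from Ruby rather than a shell, here is a minimal sketch; SOCKS_PROXY, FTP_USER and FTP_PASS are hypothetical config vars standing in for your QuotaGuard URL and FTP credentials:
require 'open3'

# Shell out to the same curl command; the three ENV vars below are
# hypothetical placeholders for your own Heroku config vars.
cmd = ['curl', '-x', "socks5h://#{ENV['SOCKS_PROXY']}",
       'ftp://theftp/some.pdf',
       '--user', "#{ENV['FTP_USER']}:#{ENV['FTP_PASS']}",
       '-o', 'some.pdf']
_out, err, status = Open3.capture3(*cmd)
raise "curl failed: #{err}" unless status.success?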
We've seen a few customers do this before with the socksify gem.
require 'socksify'
require 'uri'

# Parse the static proxy URL that the QuotaGuard add-on exposes
proxy = URI(ENV['QUOTAGUARDSTATIC_URL'])
TCPSocket::socks_username = proxy.user
TCPSocket::socks_password = proxy.password

# Every TCP connection opened inside this block goes through the SOCKS proxy
Socksify::proxy(proxy.hostname, 1080) do |soc|
  # do your FTP stuff in here
end
If that doesn't do it, post the errors you're seeing and we'll help get this running for you.
I've installed the SnowSQL CLI tool (v1.2.16) and tried connecting to Snowflake using a command similar to snowsql -a <account details> -user datamonk3y@domain.com --authenticator externalbrowser.
For myself, and a few other colleagues, a pop-up window appears which allows us to authenticate. Unfortunately this isn't the case for some of my other colleagues...
I've not found anything obvious, but the authentication browser window simply isn't popping up for some users (around half of us), so the connection aborts after timing out.
We're all using AWS workspaces with the same version of windows, same version of chrome and the same version of Snowsql. There's nothing I can see in the chrome settings that could be causing this. I'm also able to change the default browser to Firefox and I still authenticate fine.
Logging into the UI works for everyone too...
The logs don't really give much away; the failed attempts get a Failed to check OCSP response cache file message, but I think this is because the authentication isn't initiated with the server.
When I check my local machine (C:/Users/<datamonk3y>/AppData/Local/Snowflake/Caches/) I see a ocsp_response_cache.json file, but this isn't there for my colleagues who aren't able to log in.
As @SrinathMenon has mentioned in the comments below, adding -o insecure_mode=True to the login command will bypass this issue, but does anyone have any thoughts on what could be causing this?
Thanks
Try turning off OCSP:
snowsql -a ACCOUNT -u USER -o insecure_mode=True
The only root cause I have seen for this issue is when the request cannot reach the OCSP URL and therefore fails.
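A quick way to test that from an affected workspace is to fetch the OCSP cache directly (ocsp.snowflakecomputing.com is the default OCSP cache server; your deployment may use a different endpoint):
curl -v http://ocsp.snowflakecomputing.com/ocsp_response_cache.json -o ocsp_test.json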
Adding the debug flag in snowsql gives more detail. Use this to collect the debug logs:
snowsql -a <account details> -user datamonk3y@domain.com --authenticator externalbrowser -o log_level=debug -o log_file=<path>
In my case, what worked was including the region in the account name. So instead of -a abc1234, you would do something like -a abc1234.us-east-1.
https://docs.snowflake.com/en/user-guide/admin-account-identifier.html#format-2-legacy-account-locator-in-a-region explains this a little, but basically you use the first part of the web console URL, e.g. https://abc1234.us-east-1.snowflakecomputing.com/ (this only works with the classic console).
How to spin up a local version of Spinnaker? This has been addressed in detail here:
https://github.com/spinnaker/spinnaker/issues/1729
OK, so I got it to work, but not without your valuable help, @lwander!
So I'll leave the steps here for posterity.
Each line below is a separate command. I installed this on a virtual machine with a freshly installed Ubuntu 14.04 copy running nothing but SSH. Then SSH in as root; you will need to configure sshd to allow root access:
https://askubuntu.com/questions/469143/how-to-enable-ssh-root-access-on-ubuntu-14-04
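In short, on the VM that means something like this (Ubuntu 14.04; adjust if your sshd_config differs):
# allow root logins over SSH, then restart the daemon
sudo sed -i 's/^PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo service ssh restart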
Download the Halyard install script:
curl -O https://raw.githubusercontent.com/spinnaker/halyard/master/install/stable/InstallHalyard.sh
I created a user account that is a member of the adm and sudo groups (is this necessary???).
Then install Halyard:
bash InstallHalyard.sh
Verify that Hal is installed and check its version:
hal -v
Tell Hal that the deployment type will be a local instance (this will bind all services to localhost, which makes them tricky to access later, but I have a workaround, so keep reading):
hal config deploy edit --type localdebian
Hal will complain that a version has not been selected; just tell it which one:
hal config version edit --version 1.0.0
Then tell Hal which storage you are going to use; in my case, since it is local, I want to use Redis:
hal config storage edit --type redis
Now we need to add a cloud provider to Hal. We use AWS, so we add it like this:
hal config provider aws edit --access-key-id XXXXXXXXXXXXXXXXXXXX --secret-access-key
I created a user on AWS and added access keys to the user inside IAM on the user's Security Credentials tab. Obviously my access-key-id is not XXXXXXXXXXXXXXXXXXXX, I edited it. You do not need to enter the secret-access-key, because the command will prompt for it.
Then you need to create a username that only concerns your Spinnaker installation; however, it will be tied to your AWS account ID. In MY local Spinnaker installation I chose the username spinnakermaster; you should choose your own! And my AWS account ID is not YYYYYYYYYYYY, I've edited that too.
All the configurations and steps that you'll need to do inside AWS for this to work are really well documented here:
https://www.spinnaker.io/setup/providers/aws/
And to tell Hal about all of the above, here's the command:
hal config provider aws account add spinnakermaster --account-id YYYYYYYYYYYY --assume-role role/spinnakerManaged
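For context, the spinnakerManaged role on the AWS side needs a trust relationship that lets your Spinnaker user assume it; a sketch of that trust policy follows (the exact version is in the spinnaker.io docs linked above):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::YYYYYYYYYYYY:user/spinnakermaster" },
      "Action": "sts:AssumeRole"
    }
  ]
}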
After all that, if everything went according to plan, we can ask Hal to deploy our brand-new Spinnaker installation:
hal deploy apply
It will begin a long installation, downloading and configuring all the services.
Once it has finished you may do whatever you like, but in my case I created a monitoring script like the one described here:
https://github.com/spinnaker/spinnaker/issues/854
It can be run in a loop like this (until you Ctrl+C it!):
watch -n1 spinnaker-status.sh
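For reference, a minimal sketch of what such a spinnaker-status.sh could look like (an assumption based on the linked issue; it just polls the health endpoint of each service on the default local ports):
#!/bin/bash
# Poll each locally deployed Spinnaker service; the ports match the
# default local install (and the ssh port forwards further down).
for port in 9000 8084 8083 7002 8087 8080 8088 8089; do
  code=$(curl -s -o /dev/null -w "%{http_code}" "http://localhost:${port}/health")
  echo "port ${port}: HTTP ${code}"
done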
Then, to be able to access the Spinnaker copy on your local VM, you can either set up a reverse proxy with the proxy server of your choice to forward all requests to localhost, or you can simply SSH the SH** out of it by redirecting the ports:
ssh root@ZZZ.ZZZ.ZZZ.ZZZ -L 9000:127.0.0.1:9000 -L 8084:127.0.0.1:8084 -L 8083:127.0.0.1:8083 -L 7002:127.0.0.1:7002 -L 8087:127.0.0.1:8087 -L 8080:127.0.0.1:8080 -L 8088:127.0.0.1:8088 -L 8089:127.0.0.1:8089
Where obviously ZZZ.ZZZ.ZZZ.ZZZ is not an actual IP address.
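If you get tired of typing that, the same forwards can live in your local ~/.ssh/config, after which a plain ssh spinnaker-vm does the same thing:
Host spinnaker-vm
  HostName ZZZ.ZZZ.ZZZ.ZZZ
  User root
  LocalForward 9000 127.0.0.1:9000
  LocalForward 8084 127.0.0.1:8084
  LocalForward 8083 127.0.0.1:8083
  LocalForward 7002 127.0.0.1:7002
  LocalForward 8087 127.0.0.1:8087
  LocalForward 8080 127.0.0.1:8080
  LocalForward 8088 127.0.0.1:8088
  LocalForward 8089 127.0.0.1:8089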
And finally, to begin having fun with this cutie, go to your browser of choice and type into the address bar:
http://127.0.0.1:9000
Hope this helps and saves everybody some time!
Cheers.
I have a host configured in Ambari which no longer exists; Ambari still thinks it's there. When I try to delete it through the UI I get:
400 status code received on DELETE method for API:
/api/v1/clusters/handy091015/hosts/r-hadoopeco-celeryworker-07ac46a4.hbinternal.com/host_components/ZOOKEEPER_CLIENT
Error message: Bad Request
When I try to delete it via the API, with the command below, I get the same host information as with a GET:
curl -H "X-Requested-By: ambari" -DELETE http://admin:admin@ambari.handy-internal.com//api/v1/clusters/handy091015/hosts/r-hadoopeco-celeryworker-07ac46a4.hbinternal.com
I have tried the instructions here to no avail:
https://cwiki.apache.org/confluence/display/AMBARI/Using+APIs+to+delete+a+service+or+all+host+components+on+a+host
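For reference, that page essentially boils down to deleting each remaining host component and then the host itself, e.g. (Ambari's default port 8080 assumed):
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://ambari.handy-internal.com:8080/api/v1/clusters/handy091015/hosts/r-hadoopeco-celeryworker-07ac46a4.hbinternal.com/host_components/ZOOKEEPER_CLIENT
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://ambari.handy-internal.com:8080/api/v1/clusters/handy091015/hosts/r-hadoopeco-celeryworker-07ac46a4.hbinternal.com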
My question is: how do I get Ambari to no longer know about (or try to do things with) this host?
I am not able to reproduce your behaviour with Ambari 2.1.2 and HDP 2.3 stack.
Limitation:
Note that host removal is supported only for hosts with no master components; if any are present, the host cannot be deleted.
Options:
Try ambari-server restart; sometimes Ambari has intermittent issues.
If it is an option, I recommend doing ambari-server reset and installing from scratch. If you don't have much setup, it will probably save you time.
If not, you may want to post the ambari-server.log file as well; that may help in debugging the core issue.
Another option: just ignore that host; it will not do much harm. You can move it to maintenance mode, which will ease cluster operation.
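If you go the maintenance-mode route, it can be set through the same REST API; a sketch using the names from the question:
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo":{"context":"Turn on maintenance mode"},"Body":{"Hosts":{"maintenance_state":"ON"}}}' http://ambari.handy-internal.com:8080/api/v1/clusters/handy091015/hosts/r-hadoopeco-celeryworker-07ac46a4.hbinternal.com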
I am trying out Shield as a security measure for my Kibana and Elasticsearch. Running on Mac OS X 10.9.5
Followed the documentation from Elastic and managed to install Shield. Since my Elasticsearch is running automatically, I skipped step 2 (start Elasticsearch).
For step 3, I tried adding an admin, running the following command in my terminal: bin/shield/esusers useradd admin -p password -r admin.
Unfortunately, I'm getting this error:
Error: Could not find or load main class org.elasticsearch.shield.authc.esusers.tool.ESUsersTool
Below are the additional steps I took.
Double-checked that the bin/shield/esusers path existed and all.
Manually starting elasticsearch before adding users
Tried a variety of different commands based on the documentation.
bin/shield/esusers useradd admin -r admin and
bin/shield/esusers useradd es_admin -r admin
Ran those commands with sudo
The same error is generated. I can't seem to find the problem on Google either. Not really sure what I'm missing here, as the documentation seems pretty straightforward.
You must restart the node because new Java classes were added to it (from the Shield plugin) and the JVM behind Elasticsearch needs to reload those classes. It can only do that if you restart it.
Kill the process and start it up again, or use curl -XPOST "http://localhost:9200/_shutdown" to shut the cluster down.
Also, the Shield plugin needs to be installed on all the nodes in the cluster.
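For a single local node, the whole sequence is a sketch like this (assuming a tar.gz install run from the Elasticsearch home directory):
# stop the node (or kill the process)
curl -XPOST "http://localhost:9200/_shutdown"
# start it again so the JVM loads the Shield classes
bin/elasticsearch -d
# then retry creating the user
bin/shield/esusers useradd admin -p password -r admin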