I use ELK and write some additional (extensive) log information to a file called extensive.log.
How can I download this file to my local hard drive for further analysis?
I tried the cf-download plugin, but I get a "permission or corrupt" error:
$ cf download app-name /home/SOME-PATH/logs/extensive.log
You don't need any plugins; you can use scp or sftp.
The documentation is here, but this is a summary:
1. cf login and cf target
2. cf curl /v2/info | jq .app_ssh_endpoint (this is your SSH host)
3. cf ssh-code, then copy the value (this is your password)
4. cf app app-name --guid (this is part of your user name)
5. sftp -P 2222 cf:<app-guid>/<app-instance-num>@<ssh-host>; when prompted for a password, enter the passcode you copied in step #3.
Similarly, for scp you can run scp -P 2222 -oUser=cf:<app-guid>/<app-instance-num> my-local-file.json <ssh-host>:my-remote-file.json.
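For the original question (pulling extensive.log down to your local disk), the scp direction is simply reversed. A sketch, assuming instance 0 and that the file sits in the app container's working directory (adjust the remote path as needed):
$ scp -P 2222 -oUser=cf:<app-guid>/0 <ssh-host>:extensive.log ./extensive.log
When prompted, enter a fresh passcode from cf ssh-code (the codes are one-time use).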
Hope that helps!
Related
I have a Spring Boot application deployed on Pivotal Cloud Foundry.
I'm trying to tunnel (cf ssh) to that application in PCF from my Spring Boot application, but I'm not able to find any API or client libraries to achieve it.
The actual CLI command to tunnel into PCF is:
cf ssh -N -T -L 10001:localhost:10001 ms name
Any suggestions are welcome.
If you're trying to write Java code that does the same thing as the cf ssh command, that should be possible. It's standard SSH, but with short-lived credentials, so the trick will be generating credentials that you can use from your app.
Here's an example of using standard SSH/SCP/SFTP clients; note that ssh.bosh-lite.com would be your SSH domain, which you can see from cf curl /v2/info:
$ ssh -p 2222 cf:$(cf app app-name --guid)/0@ssh.bosh-lite.com
$ scp -P 2222 -oUser=cf:$(cf app app-name --guid)/0 my-local-file.json ssh.bosh-lite.com:my-remote-file.json
$ sftp -P 2222 cf:$(cf app app-name --guid)/0@ssh.bosh-lite.com
https://github.com/cloudfoundry/diego-ssh#cloud-foundry-via-cloud-controller-and-uaa
That said, you should be able to do something similar with any standard SSH Java library.
As mentioned above, the trick is in getting credentials. The username will be the format cf:application-guid/app-instance-number, which is easy, but the password needs to be generated with cf ssh-code, or the comparable call to the UAA API.
Ex: curl -vv -H 'Accept: application/json' -H "Authorization: $(cf oauth-token)" "https://uaa.run.pivotal.io/oauth/authorize?client_id=ssh-proxy&response_type=code"
This example uses curl to send the request and cf oauth-token to get a valid Oauth2 bearer token for the logged in user. You could get a valid bearer token in a number of ways, including making direct API calls or using the cf-java-client. It just needs to be a valid token for the user that should perform the SSH action (i.e. it would be the user that's running cf ssh).
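The one-time code comes back in the Location header of a 302 redirect from UAA, so a rough shell sketch for capturing it (same example UAA URL as above; adjust to your foundation) could be:
# Grab the ?code=... value from the redirect's Location header (assumes curl, grep, sed are available)
curl -s -o /dev/null -D - \
  -H 'Accept: application/json' \
  -H "Authorization: $(cf oauth-token)" \
  "https://uaa.run.pivotal.io/oauth/authorize?client_id=ssh-proxy&response_type=code" \
  | grep -i '^location:' | sed 's/.*code=//' | tr -d '\r'
The printed value is what you would feed to your Java SSH library as the password.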
Hope that helps!
My app (gs-spring-boot.zip) is unable to connect to the locally running MySQL 8.0 database after carrying out the following configurations as described in various tutorials online.
Please find the errors in the attached log file named cf_spring_boot_log.txt
Please find a screenshot of the MySQL account with same credentials as configured below.
Local MySQL account
PS C:\WINDOWS\system32> cf login -a https://api.dev.cfdev.sh --skip-ssl-validation -u admin -p admin
PS E:\> cf cups mysql80-db -p '{\"username\": \"pcf\", \"password\": \"pcfpassword\", \"name\": \"world\", \"hostname\": \"127.0.0.1\", \"port\": 3306, \"uri\": \"mysql://pcf:pcfpassword@127.0.0.1:3306/world\", \"jdbcUrl\": \"jdbc:mysql://127.0.0.1:3306/world?user=pcf&password=pcfpassword\"}'
PS E:\> cf push pcf-people-mgmt -p C:\Users\ranadeep-sharma\IdeaProjects\gs-spring-boot\complete\target\pcf-people-mgmt-1.0-SNAPSHOT.jar
PS E:\> cf bind-service pcf-people-mgmt mysql80-db
PS E:\> cf restage pcf-people-mgmt
I have spent weeks with no success. Please let me know what I am missing in my configuration.
Problem resolved after using the correct IP address: from inside the CF Dev container, 127.0.0.1 refers to the container itself, not the Windows host where MySQL is listening, so the service definition needs a host-reachable IP.
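For illustration, fixing the service definition would mean updating the hostname and URIs to a host-reachable address (192.168.1.50 below is a hypothetical LAN IP; substitute your own) and restaging:
PS E:\> cf uups mysql80-db -p '{\"username\": \"pcf\", \"password\": \"pcfpassword\", \"name\": \"world\", \"hostname\": \"192.168.1.50\", \"port\": 3306, \"uri\": \"mysql://pcf:pcfpassword@192.168.1.50:3306/world\", \"jdbcUrl\": \"jdbc:mysql://192.168.1.50:3306/world?user=pcf&password=pcfpassword\"}'
PS E:\> cf restage pcf-people-mgmt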
I have set up, or have been provided with, a secured URL (HTTPS) to a remote Docker registry. I need to perform docker login into the remote registry in order to be able to push my locally built Docker images.
The command would be something like:
docker login -u myUser https://registry.mydomain.example.com
However, docker login fails with an x509 certificate verification error like:
Error response from daemon: Get https://registry.mydomain.example.com/v2/: x509: certificate signed by unknown authority
I'm using macOS / OS X, how can I get my local Docker (Docker client) to accept remote repository's TLS certificate for HTTPS traffic?
Also, once the secure HTTPS connection works, how do I build and push my image to the remote repository, after I've written the Dockerfile and tested locally that my image works?
Despite what the Docker documentation's page on this matter specifically says, the Linux/Unix instructions work for macOS / OS X as well:
https://docs.docker.com/engine/security/certificates/
I got the instructions below working on a MacBook Pro running macOS High Sierra 10.13.5 (17F77).
Docker client (local Docker) version: 18.03.1-ce
Place the Certificate Authority (CA) file, provided by the remote registry admin, into the specific folder structure via terminal commands:
sudo mkdir -p /etc/docker/certs.d/registry.mydomain.example.com
sudo cp ca.crt /etc/docker/certs.d/registry.mydomain.example.com
Note: If you are using a URL with a port to connect to the registry, the port needs to be included in the folder name under the certs.d folder. The URL can also be an IP address:
sudo mkdir -p /etc/docker/certs.d/registry.mydomain.example.com:443
sudo mkdir -p /etc/docker/certs.d/172.123.123.1:443
EDIT TO ADD!
I tested this with a co-worker, and we discovered that adding the CA file to the macOS Keychain was required (I had also done this previously). It is currently unknown whether the /etc/docker steps above are even required on a Mac. We used this guide to import the ca.crt file into the Keychain (it shows up as "not trusted" in the Certificates menu):
https://www.sslsupportdesk.com/how-to-import-a-certificate-into-mac-os/
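If you prefer the terminal, the same import can be sketched with macOS's security command (note this trusts the CA system-wide, so only do it for a CA you actually trust):
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ca.crt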
Afterwards, restart your local Docker.
Docker login should work normally afterwards. If you still keep getting the x509 unknown authority error, it might be a good idea to verify the remote registry's server certificate (obtainable e.g. by navigating to the registry's URL with a browser) against the CA file, using openssl commands:
https://www.sslshopper.com/article-most-common-openssl-commands.html
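For example, a quick check against the example hostname from above:
# Fetch the server certificate presented by the registry, then verify it against the CA file
openssl s_client -connect registry.mydomain.example.com:443 -showcerts </dev/null 2>/dev/null | openssl x509 -outform PEM > server.crt
openssl verify -CAfile ca.crt server.crt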
Below is an example when working with the OpenShift integrated (Atomic) registry:
oc login https://registry.mydomain.example.com -u myUser --certificate-authority=ca.crt
docker login -u $(oc whoami) -p $(oc whoami -t) https://registry.mydomain.example.com
You should get a "Login Succeeded" message, then:
docker build -t registry.mydomain.example.com/openshiftProject/my-image:1.0 .
docker push registry.mydomain.example.com/openshiftProject/my-image:1.0
I'm quite new to Gerrit. I'm getting "Permission denied (publickey)" while running an ssh command. I want to add an RSA key to my Gerrit profile, but I'm not able to reach the Web UI. I tried localhost:29418, but nothing comes up.
Thanks in advance!!!
If you used the -b batch mode when initializing Gerrit, you can probably find the web UI at http://localhost:8080.
The first user to log in automatically gets administration rights, and there you should be able to enter a user ID and SSH key. You then need to use this user ID plus the SSH key to connect:
ssh -p 29418 userid@127.0.0.1 gerrit
This should then show you the gerrit help.
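If you don't have an SSH key pair yet, a typical way to generate one and print the public half to paste into the Gerrit profile (the comment string is just an example):
ssh-keygen -t rsa -b 4096 -C "you@example.com"
cat ~/.ssh/id_rsa.pub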
I was wondering how to set up FileZilla, or how to upload files to my EC2 server. Every time I try to set up FileZilla it says:
Error: Disconnected: No supported authentication methods available (server sent: publickey)
Error: Could not connect to server
and I have to go to my Downloads folder and log in with ssh -i key.pem user@ipaddress every time I want access, since my Mac won't automatically SSH from anywhere because I can't import the key into my Keychain.
According to the FileZilla Docs, it should be possible:
FileZilla supports the standard SSH agents. If your SSH agent is running, the SSH_AUTH_SOCK environment variable should be set.
Here is documentation on how to set up an SSH agent.
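A minimal sketch of getting the agent going and loading the EC2 key (the key path is just an example):
eval "$(ssh-agent -s)"
ssh-add ~/Downloads/key.pem
FileZilla then needs to be started from an environment where SSH_AUTH_SOCK is set, e.g. from that same terminal session.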
However, I personally use Cyberduck as an SFTP client. When creating a new connection there, you can simply check "Use Public Key Authentication" and give the path to your key file. That should be easier to set up.
You can use sshfs to fuse an EC2 instance directory to a local folder.
So you have to do the following steps:
1. Install sshfs on your Mac.
2. Put your Mac's id_rsa.pub key inside the authorized_keys file in the .ssh/ folder of the EC2 instance (see the one-liner after this list). This will allow you to mount the EC2 directory to a local folder, and also to SSH to the EC2 instance without using key.pem.
3. Mount the EC2 instance using the following command:
sshfs ubuntu@ec2-xx-xx-xx-xxx.compute-1.amazonaws.com: /<your new folder location>
4. Don't forget to give your folders write permissions, so that you can edit them remotely.
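For step 2, a one-liner that appends the public key over the existing key.pem login (host and paths are examples) could be:
cat ~/.ssh/id_rsa.pub | ssh -i key.pem ubuntu@ec2-xx-xx-xx-xxx.compute-1.amazonaws.com 'cat >> ~/.ssh/authorized_keys'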
Hope it helps.