Google Anthos - Cluster Backup

Has anyone here tried to back up a Google Anthos cluster via the bmctl command?
I'm trying to back up my current cluster, planning to restore it to another cluster in a new location, but I'm getting errors while backing up. Please see the output below.
ERROR: Error: failed to backup node config files: ssh: subsystem request failed
Any help is highly appreciated.
Thank you..
MD

[backup/confirm] Are you sure you want to proceed with the backup? [y/N]: y
[2022-07-13 10:40:58+0800] Take etcd snapshot on pod etcd-anthos01
[2022-07-13 10:40:58+0800] Take etcd snapshot on pod etcd-anthos02
[2022-07-13 10:40:59+0800] Take etcd snapshot on pod etcd-anthos03
[2022-07-13 10:41:00+0800] Backup files on machine 192.168.99.151
Error: failed to backup node config files: ssh: subsystem request failed
I'm sorry, I'm just new to Anthos; it turns out the error message above can be ignored. I found the backup files in the same directory as the kubeconfig and cluster config files.
My bad, and thank you.
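For reference, a rough sketch of the backup and restore invocations discussed above. The cluster name, workspace paths, and the backup file name are assumptions; check `bmctl backup cluster --help` on your admin workstation for the exact flags in your version:

```shell
# Back up the cluster (assumed names/paths; run from the admin workstation)
bmctl backup cluster \
  --cluster my-cluster \
  --kubeconfig bmctl-workspace/my-cluster/my-cluster-kubeconfig

# Later, restore from the generated tarball (file name is an assumption)
bmctl restore cluster \
  --cluster my-cluster \
  --backup-file bmctl-workspace/my-cluster/my-cluster_backup_<timestamp>.tar.gz \
  --kubeconfig bmctl-workspace/my-cluster/my-cluster-kubeconfig
```

As noted above, the backup tarball lands next to the kubeconfig and cluster config files in the workspace directory.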

Related

kubectl not working on my windows 10 machine

When I try to run any kubectl command, including kubectl version, I get a pop-up saying "This app can't run on your PC. To find a version for your PC, check with the software publisher." When this is closed, the terminal shows "access denied".
The weird thing is, when I run "kubectl version" in the directory where I downloaded kubectl.exe, it works fine.
I have even added this path to my PATH variables.
Thank you for the answer, @rally.
Apparently, on my machine, it was an issue of administrative rights during installation. My workplace's IT added the permission and it worked for me.
Adding this answer here so that if anyone else comes across this problem they can try this solution as well.
Not knowing what exactly you downloaded, I would suggest you delete everything in the folder and follow the instructions for installing kubectl on Windows from here:
https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/
Note: downloading the .exe is not enough. You also need a kubeconfig file named "config", which contains the configuration to access your cluster.
kubectl looks for this file in a hidden folder under your user profile directory: C:\Users\<me>\.kube.
Just so you can try it out, I would suggest activating Kubernetes in your Docker Desktop installation. I guess you have it installed; if not, install it from the Docker site: https://www.docker.com/products/docker-desktop/
Activating Kubernetes inside Docker Desktop will also install kubectl and save the config in the .kube folder.
After the installation finishes, run in a new terminal:
kubectl get node
You should see the single node of the docker-desktop cluster.
Now if you want to access another cluster, you need the kubeconfig file for that cluster. If you have it, just rename the config in the .kube folder (so as not to lose it) and put the other config in its place.
If the new config file is correct, you should be able to access that cluster.
The config file can also be structured to hold more than one cluster configuration, and you can switch between them using a so-called context.
Here you can find the information on how to do that, according to your needs:
https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
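For example, once the config file holds several clusters, switching between them looks like this (context names will differ per setup; `docker-desktop` is the name Docker Desktop creates for its own cluster):

```shell
kubectl config get-contexts               # list every context in the merged config
kubectl config current-context            # show which one is active
kubectl config use-context docker-desktop # switch to the Docker Desktop cluster
```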
Hope this helps you get started with Kubernetes.

Trouble getting started with AWS Lambda locally

I'm trying to get a simple dotnet lambda up and running using the Rider AWS toolkit, starting with the SAM HelloWorld sample project, but during creation I run into this error:
java.util.concurrent.CompletionException: java.lang.RuntimeException: Could not execute `sam init`!: [Cloning from https://github.com/aws/aws-sam-cli-app-templates, Error: Unstable state when updating repo. Check that you have permissions to create/delete files in C:\Users\user_name\AppData\Roaming\AWS SAM directory or file an issue at https://github.com/aws/aws-sam-cli/issues]
I checked the permissions on that directory, and I should have full read/write. I'm not seeing anyone else running into this particular problem online. Is this indicative of any other steps I missed along the way?
The root cause is apparently a long filename issue. There is a workaround here: https://github.com/aws/aws-sam-cli/issues/3781#issuecomment-1081263942
What I did was:
Using regedit, set HKLM\SYSTEM\CurrentControlSet\Control\FileSystem\LongPathsEnabled to 1
With admin permissions, run git config --system core.longpaths true
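Both steps can also be scripted; to my knowledge the `reg add` call below is equivalent to the regedit change described above (run with admin rights; reg.exe works from both PowerShell and cmd):

```shell
# Elevated prompt: enable Win32 long paths, then tell git to use them
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 1 /f
git config --system core.longpaths true
```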

Unable to pull hyperledger/cello-api-engine image

Setup of Hyperledger-cello:
Cloned Hyperledger-cello 0.9.0
sudo SERVER_PUBLIC_IP=xx.xx.xx.xx make start
I am facing the following issue:
ERROR: pull access denied for hyperledger/cello-api-engine, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Makefile:211: recipe for target 'start-docker-compose' failed
I tried changing the name of the image in the docker-compose.yml file, but was stuck with the same issue.
ERROR: The image for the service you're trying to recreate has been removed. If you continue, volume data could be lost. Consider backing up your data before continuing.
Continue with the new image? [yN]n
ERROR: pull access denied for hyperledger/cello-api-engine, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Makefile:211: recipe for target 'start-docker-compose' failed
make[1]: *** [start-docker-compose] Error 1
Hey, the problem is that master is not the most stable release, so in order for it to work they have changed things a little. After cloning you need to do the following:
make docker
This will build all the images; the documentation says that for now this is mandatory.
After that has finished, run:
make start
This will start the network.
If you are looking for a more stable build, you can check out the tag 0.9.0.
Take into account that this project is still in the incubator and they are preparing for the 1.0.0 release; on the master branch you may see that there is missing documentation and more.
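Put together, the steps from this answer look roughly like this (the repository URL is the standard Hyperledger one; the tag name `v0.9.0` is an assumption based on the 0.9.0 release mentioned in the question, so verify it with `git tag`):

```shell
git clone https://github.com/hyperledger/cello.git
cd cello
git checkout v0.9.0                            # optional: more stable than master
make docker                                    # build all the images locally
sudo SERVER_PUBLIC_IP=xx.xx.xx.xx make start   # start the network
```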
There is no cello-api-engine on Docker Hub.
You can check the list here:
cello docker hub
If you change cello-api-engine to cello-engine, that error will not appear. But there is also no cello-dashboard on Docker Hub for the next step; there are only cello-operator-dashboard and cello-user-dashboard.
No need to change any image names in the .yaml files; after running setup-master, it will create the images on your machine. That's the images part done.
If anyone faces an issue, let me know :)

How to ensure AWS S3 cli file sync fully works without missing files

I'm working on a project that takes database backups from MongoDB on S3 and puts them onto a staging box for use that day. I noticed during a manual run today I got this output. Normally it shows a good copy of each file, but today I got a connection reset error, and one of the files, *.15, was not copied over after the operation had completed.
Here is the AWS CLI command that I'm using:
aws s3 cp ${S3_PATH} ${BACKUP_PRODUCTION_PATH}/ --recursive
And here is an excerpt of the output I got back:
download: s3://myorg-mongo-backups-raw/production/daily/2018-09-10/080001/data/s-ds063192-a1/myorg-production/myorg-production.10 to ../../data/db/myorg-production/myorg-production.10
download: s3://myorg-mongo-backups-raw/production/daily/2018-09-10/080001/data/s-ds063192-a1/myorg-production/myorg-production.11 to ../../data/db/myorg-production/myorg-production.11
download: s3://myorg-mongo-backups-raw/production/daily/2018-09-10/080001/data/s-ds063192-a1/myorg-production/myorg-production.12 to ../../data/db/myorg-production/myorg-production.12
download: s3://myorg-mongo-backups-raw/production/daily/2018-09-10/080001/data/s-ds063192-a1/myorg-production/myorg-production.13 to ../../data/db/myorg-production/myorg-production.13
download: s3://myorg-mongo-backups-raw/production/daily/2018-09-10/080001/data/s-ds063192-a1/myorg-production/myorg-production.14 to ../../data/db/myorg-production/myorg-production.14
download failed: s3://myorg-mongo-backups-raw/production/daily/2018-09-10/080001/data/s-ds063192-a1/myorg-production/myorg-production.15 to ../../data/db/myorg-production/myorg-production.15 ("Connection broken: error(104, 'Connection reset by peer')", error(104, 'Connection reset by peer'))
download: s3://myorg-mongo-backups-raw/production/daily/2018-09-10/080001/data/s-ds063192-a1/myorg-production/myorg-production.16 to ../../data/db/myorg-production/myorg-production.16
How can I ensure that the data from the given S3 path was fully copied over to the target path without any connection issues, missing files, etc? Is the sync command for the AWS tool a better option? Or should I try something else?
Thanks!
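To sketch an answer to the last question above: `aws s3 sync` is a good fit here, because it only transfers files that are missing from or differ at the destination, so re-running it after a non-zero exit fills in whatever a dropped connection left behind. A minimal retry wrapper (the variable names come from the question; the 5-attempt limit is arbitrary):

```shell
# Retry a command until it succeeds, up to 5 attempts.
retry() {
  tries=0
  until "$@"; do
    tries=$((tries + 1))
    if [ "$tries" -ge 5 ]; then
      echo "giving up after $tries attempts" >&2
      return 1
    fi
    echo "attempt $tries failed; retrying..." >&2
    sleep 1
  done
}

# Usage against the real bucket (variables as in the question):
# retry aws s3 sync "${S3_PATH}" "${BACKUP_PRODUCTION_PATH}/"
```

Checking the final exit status still matters: if the loop gives up, the job should fail loudly rather than let the staging box look complete when it isn't.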

openshift DIY, 503 error after deleting and adding again testrubyserver.ruby file

I am trying the OpenShift DIY cartridge. I use a Windows system to manage the server from the command line. I managed to run a simple HTML5 website. I deleted the testrubyserver.ruby file from the webpage folder for test purposes and then added it again to my web folder. Now I have a 503 error. No restart, no stop, no start helps. I am stuck at 503. Does anyone know what to do? How can I make testrubyserver.ruby run again?
Solved my problem. I checked the log file in the app-root/logs folder. There I found out that
nohup: failed to run command `/..//testrubyserver.rb': Permission denied
In FileZilla I changed the permissions for the file from rw to rwx to make it executable, restarted the server, and then it worked.
I do not know if this is the right approach. At least it makes my app run again.
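For reference, that FileZilla permission change corresponds to a single chmod on the server. The file name is taken from the error log above; the touch line is only a stand-in so the snippet is self-contained, since on the server the file already exists:

```shell
touch testrubyserver.rb      # stand-in: the real file is already on the server
chmod u+x testrubyserver.rb  # add the execute bit for the owner (rw -> rwx)
ls -l testrubyserver.rb      # the mode should now start with -rwx
```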
