I'm trying to export a disk image I've built in GCP as a VMDK to a storage bucket.
The export throws an error message complaining that a service account was not found. I can't remember ever deleting such a service account; as far as I know, it has existed since the creation of the project.
How can I re-create the default service account without risking the loss of all my Compute Engine resources? Which roles should I give to this service account?
[image-export-ext.export-disk.setup-disks]: 2021-10-06T18:52:00Z CreateDisks: Creating disk "disk-export-disk-os-image-export-ext-export-disk-j8vpl".
[image-export-ext.export-disk.setup-disks]: 2021-10-06T18:52:00Z CreateDisks: Creating disk "disk-export-disk-buffer-j8vpl".
[image-export-ext.export-disk]: 2021-10-06T18:52:01Z Step "setup-disks" (CreateDisks) successfully finished.
[image-export-ext.export-disk]: 2021-10-06T18:52:01Z Running step "run-export-disk" (CreateInstances)
[image-export-ext.export-disk.run-export-disk]: 2021-10-06T18:52:01Z CreateInstances: Creating instance "inst-export-disk-image-export-ext-export-disk-j8vpl".
[image-export-ext]: 2021-10-06T18:52:07Z Error running workflow: step "export-disk" run error: step "run-export-disk" run error: operation failed &{ClientOperationId: CreationTimestamp: Description: EndTime:2021-10-06T11:52:07.153-07:00 Error:0xc000712230 HttpErrorMessage:BAD REQUEST HttpErrorStatusCode:400 Id:5314937137696624317 InsertTime:2021-10-06T11:52:02.707-07:00 Kind:compute#operation Name:operation-1633546321707-5cdb3a43ac385-839c7747-2ca655ee OperationGroupId: OperationType:insert Progress:100 Region: SelfLink:https://www.googleapis.com/compute/v1/projects/savvy-bonito-207708/zones/us-east1-b/operations/operation-1633546321707-5cdb3a43ac385-839c7747-2ca655ee StartTime:2021-10-06T11:52:02.708-07:00 Status:DONE StatusMessage: TargetId:840687976797195965 TargetLink:https://www.googleapis.com/compute/v1/projects/savvy-bonito-207708/zones/us-east1-b/instances/inst-export-disk-image-export-ext-export-disk-j8vpl User:494995903825#cloudbuild.gserviceaccount.com Warnings:[] Zone:https://www.googleapis.com/compute/v1/projects/savvy-bonito-207708/zones/us-east1-b ServerResponse:{HTTPStatusCode:200 Header:map[Cache-Control:[private] Content-Type:[application/json; charset=UTF-8] Date:[Wed, 06 Oct 2021 18:52:07 GMT] Server:[ESF] Vary:[Origin X-Origin Referer] X-Content-Type-Options:[nosniff] X-Frame-Options:[SAMEORIGIN] X-Xss-Protection:[0]]} ForceSendFields:[] NullFields:[]}:
Code: EXTERNAL_RESOURCE_NOT_FOUND
Message: The resource '494995903825-compute#developer.gserviceaccount.com' of type 'serviceAccount' was not found.
[image-export-ext]: 2021-10-06T18:52:07Z Workflow "image-export-ext" cleaning up (this may take up to 2 minutes).
[image-export-ext]: 2021-10-06T18:52:08Z Workflow "image-export-ext" finished cleanup.
[image-export] 2021/10/06 18:52:08 step "export-disk" run error: step "run-export-disk" run error: operation failed &{ClientOperationId: CreationTimestamp: Description: EndTime:2021-10-06T11:52:07.153-07:00 Error:0xc000712230 HttpErrorMessage:BAD REQUEST HttpErrorStatusCode:400 Id:5314937137696624317 InsertTime:2021-10-06T11:52:02.707-07:00 Kind:compute#operation Name:operation-1633546321707-5cdb3a43ac385-839c7747-2ca655ee OperationGroupId: OperationType:insert Progress:100 Region: SelfLink:https://www.googleapis.com/compute/v1/projects/savvy-bonito-207708/zones/us-east1-b/operations/operation-1633546321707-5cdb3a43ac385-839c7747-2ca655ee StartTime:2021-10-06T11:52:02.708-07:00 Status:DONE StatusMessage: TargetId:840687976797195965 TargetLink:https://www.googleapis.com/compute/v1/projects/savvy-bonito-207708/zones/us-east1-b/instances/inst-export-disk-image-export-ext-export-disk-j8vpl User:494995903825#cloudbuild.gserviceaccount.com Warnings:[] Zone:https://www.googleapis.com/compute/v1/projects/savvy-bonito-207708/zones/us-east1-b ServerResponse:{HTTPStatusCode:200 Header:map[Cache-Control:[private] Content-Type:[application/json; charset=UTF-8] Date:[Wed, 06 Oct 2021 18:52:07 GMT] Server:[ESF] Vary:[Origin X-Origin Referer] X-Content-Type-Options:[nosniff] X-Frame-Options:[SAMEORIGIN] X-Xss-Protection:[0]]} ForceSendFields:[] NullFields:[]}: Code: EXTERNAL_RESOURCE_NOT_FOUND; Message: The resource '494995903825-compute#developer.gserviceaccount.com' of type 'serviceAccount' was not found.
ERROR
ERROR: build step 0 "gcr.io/compute-image-tools/gce_vm_image_export:release" failed: step exited with non-zero status: 1
Go to IAM & Admin > IAM and check whether your default SA is there.
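You can also check from the command line; the project number below comes from the error message in the question, so substitute your own:

gcloud iam service-accounts describe 494995903825-compute@developer.gserviceaccount.com

If the account exists, this prints its metadata; if it has been deleted, the command returns a NOT_FOUND error.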
If it was deleted, you can recover it within 30 days.
How do I check whether it has been deleted?
To recover it, undelete the service account. Note that a default compute service account cannot be recovered more than 30 days after deletion.
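A minimal sketch of the recovery path, assuming the deletion happened within the last 30 days; ACCOUNT_ID is a placeholder for the deleted account's numeric unique ID, which you can find in the project's Admin Activity audit logs:

gcloud iam service-accounts undelete ACCOUNT_ID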
If all the above fails, then you might need to go the custom SA route, or share an image with a project that has a default service account.
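If you go the image-sharing route, granting a user in the other project access to the image might look like this; the image name and member are placeholders:

gcloud compute images add-iam-policy-binding my-exported-image --member='user:you@example.com' --role='roles/compute.imageUser'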
I am using the following command to create an EKS cluster:
eksctl create cluster --name cqpocsefkdemo --node-type t2.micro --nodes 3 --nodes-min 3 --nodes-max 5 --region us-east-1 --zones us-east-1a,us-east-1b,us-east-1c,us-east-1d,us-east-1f
But I am getting an error that I am unable to resolve. The error looks like this:
SDK 2022/04/15 19:20:50 DEBUG request failed with unretryable error
https response error StatusCode: 403, RequestID:
56fa150b-5c94-499f-be10-d9a318557f15, api error SignatureDoesNotMatch:
Signature expired: 20220415T135049Z is now earlier than
20220415T143550Z (20220415T145050Z - 15 min.)
Error: checking AWS STS access – cannot get role ARN for current
session: operation error STS: GetCallerIdentity, https response error
StatusCode: 403, RequestID: 56fa150b-5c94-499f-be10-d9a318557f15, api
error SignatureDoesNotMatch: Signature expired: 20220415T135049Z is
now earlier than 20220415T143550Z (20220415T145050Z - 15 min.)
The error occurred because the system time was not in sync. I resolved it by going to Windows Settings and syncing the date and time.
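If you prefer the command line over the Settings app, resyncing the clock with the built-in Windows Time service should also work; run this from an elevated prompt, assuming the W32Time service is running:

w32tm /resync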
I had uploaded some objects to Google Cloud Storage for which I now get a Forbidden error (Google::Cloud::PermissionDeniedError). Additionally, I do not have full rights to Cloud Storage, as I am working on a university class project.
Can you please tell me how to delete the objects? I was the one who uploaded them using the Google API. The interesting thing to note is that I can delete other files, but three files that I uploaded were write-protected, if I remember correctly, and cannot be deleted now.
Here is some additional context on the issue.
I checked the retention policy for the storage bucket. It has no retention policy enabled, as can be seen from the output below:
gsutil retention get gs://cs291project2
gs://cs291project2/ has no Retention Policy.
Yet, the remove command doesn't seem to work.
SISProject2$ gsutil rm gs://cs291project2/**
Removing gs://cs291project2/00/00/3Da608e50745f7fe13116e728cd0282fda42ce3f83d3f509d5a83f4cd580...
AccessDeniedException: 403 Object 'cs291project2/00/00/3Da608e50745f7fe13116e728cd0282fda42ce3f83d3f509d5a83f4cd580' is under active Temporary hold and cannot be deleted, overwritten or archived until hold is removed.
From the error message (Object ... is under active Temporary hold), the object has a hold set on it; you might also have uploaded it to a locked, retention-enabled bucket. You can check the bucket's retention policy and the object's hold status by running these commands:
Example:
$ gsutil retention get gs://bucket
Retention Policy (LOCKED):
Duration: 7 Day(s)
Effective Time: Thu, 11 Sep 2021 19:52:15 GMT
Example:
$ gsutil ls -L gs://bucket/object
gs://bucket/object:
Creation time: Thu, 27 Sep 2020 00:00:00 GMT
Update time: Thu, 27 Sep 2021 12:11:00 GMT
Event-Based Hold: Enabled
If that is the case, you cannot delete the object until its retention period is reached or the hold is released.
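Since this particular error mentions a temporary hold rather than a retention policy, and assuming you have the storage.objects.update permission on the bucket, releasing the hold directly may be enough to allow the delete:

$ gsutil retention temp release gs://cs291project2/00/00/3Da608e50745f7fe13116e728cd0282fda42ce3f83d3f509d5a83f4cd580

After the hold is released, the gsutil rm command should succeed.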
If you receive a 403 error whilst running these commands, you most likely do not have the correct permissions configured. You can run the command below to review the IAM policy for the project. Please note that reading the project's IAM policy itself requires permission.
gcloud projects get-iam-policy <project-id> | grep 'role\|user\|members'
You can then compare the result against the IAM permissions for gsutil. For example, the gsutil rm command requires these:
rm (Buckets): storage.buckets.delete, storage.objects.delete, storage.objects.list
rm (Objects): storage.objects.delete, storage.objects.get
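If one of these permissions turns out to be missing and someone with admin rights on the bucket can help, granting the Object Admin role on the bucket would cover the object-level permissions above; the member address here is a placeholder:

gsutil iam ch user:student@example.edu:objectAdmin gs://cs291project2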
As a last resort, to drill down further into what might be happening, you can add the -D switch to run the command in debug mode.
gsutil -D retention get gs://bucket
Please note, this comes with a warning:
***************************** WARNING *****************************
*** You are running gsutil with debug output enabled.
*** Be aware that debug output includes authentication credentials.
*** Make sure to remove the value of the Authorization header for
*** each HTTP request printed to the console prior to posting to
*** a public medium such as a forum post or Stack Overflow.
***************************** WARNING *****************************
My objective is simple: take a backup and restore it on another machine that has no relation to the running cluster.
My steps:
1. Run pg_basebackup remotely onto the new machine.
2. rm -fr ../../main/
3. mv backup/main/ ../../main/
4. Start the postgres service.
During the backup, no errors occurred.
But on startup I get this error:
2018-12-13 10:05:12.437 IST [834] LOG: database system was shut down in recovery at 2018-12-12 23:01:58 IST
2018-12-13 10:05:12.437 IST [834] LOG: invalid primary checkpoint record
2018-12-13 10:05:12.437 IST [834] LOG: invalid secondary checkpoint record
2018-12-13 10:05:12.437 IST [834] PANIC: could not locate a valid checkpoint record
2018-12-13 10:05:12.556 IST [833] LOG: startup process (PID 834) was terminated by signal 6: Aborted
2018-12-13 10:05:12.556 IST [833] LOG: aborting startup due to startup process failure
2018-12-13 10:05:12.557 IST [833] LOG: database system is shut down
Based on the answer to a very similar question (How to mount a pg_basebackup on a stand-alone server to retrieve accidentally deleted data), and on the fact that that answer helped me get this working glitch-free, the steps are as follows (a consolidated sketch follows the list):
do the basebackup, or copy/untar a previously made one, to the right location /var/lib/postgresql/9.5/main
remove the file backup_label
run /usr/lib/postgresql/9.5/bin/pg_resetxlog -f /var/lib/postgresql/9.5/main
start postgres service
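Put together, a minimal sketch of those steps on a Debian/Ubuntu-style layout, assuming the data directory is already in place; note that pg_resetxlog -f discards WAL, so only run it against a copy of the data you can afford to lose:

sudo service postgresql stop
sudo rm /var/lib/postgresql/9.5/main/backup_label
sudo -u postgres /usr/lib/postgresql/9.5/bin/pg_resetxlog -f /var/lib/postgresql/9.5/main
sudo service postgresql start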
(Replying to this old question because it is the first one I found when looking for the solution to the same problem.)
The same event appears many times across all the Hyper-V servers in my environment, and I haven't found any solution for it. If you know the solution for this error, can you help me?
Log Name: Microsoft-Windows-Kernel-EventTracing/Admin
Source: Microsoft-Windows-Kernel-EventTracing
Date: 4/28/2016 1:34:27 PM
Event ID: 2
Task Category: Session
Level: Error
Keywords: Session
User: NETWORK SERVICE
Computer: HYPERV01.prod.local
Description:
Session "" failed to start with the following error: 0xC0000022
You're getting an access-denied error: 0xC0000022 is STATUS_ACCESS_DENIED, returned when the trace session tries to start under the NETWORK SERVICE account. That account's permissions are probably restricted.
I am trying to deploy a basic App Engine web app with Maven.
As a part of the deployment process, I am required to authenticate via a web browser.
I am using two different Google accounts: one for home, one for work. When Maven opened the browser tab asking me to authenticate, it selected the wrong account. I didn't notice this and clicked the "Allow" button.
This account does not have the right credentials so I got an access denied error.
😈 >mvn appengine:update
...
Beginning interaction for module default...
Apr 01, 2016 4:47:32 PM com.google.appengine.tools.admin.AbstractServerConnection send1
WARNING: Error posting to URL: https://appengine.google.com/api/appversion/getresourcelimits?app_id=maven-1268&version=1&
403 Forbidden
You do not have permission to modify this app (app_id=u's~maven-1268').
This is try #0
Apr 01, 2016 4:47:32 PM com.google.appengine.tools.admin.AbstractServerConnection send1
WARNING: Error posting to URL: https://appengine.google.com/api/appversion/getresourcelimits?app_id=maven-1268&version=1&
403 Forbidden
You do not have permission to modify this app (app_id=u's~maven-1268').
This is try #1
Apr 01, 2016 4:47:32 PM com.google.appengine.tools.admin.AbstractServerConnection send1
WARNING: Error posting to URL: https://appengine.google.com/api/appversion/getresourcelimits?app_id=maven-1268&version=1&
403 Forbidden
You do not have permission to modify this app (app_id=u's~maven-1268').
This is try #2
Apr 01, 2016 4:47:33 PM com.google.appengine.tools.admin.AbstractServerConnection send1
WARNING: Error posting to URL: https://appengine.google.com/api/appversion/getresourcelimits?app_id=maven-1268&version=1&
403 Forbidden
You do not have permission to modify this app (app_id=u's~maven-1268').
This is try #3
So I think "no biggie": I'll just run it again. Somehow I'll get Maven to select the correct account (maybe I'll temporarily log out of the incorrect one) and that will solve the problem.
Unfortunately, I am no longer being prompted to authenticate. It just keeps giving me access denied errors.
I am presuming there is a file somewhere on the file system that I need to delete in order to be prompted for authorization again.
Does anyone know where this file is?
UPDATE
I tried completely recreating my project from scratch in a different directory, and I still get the access denied errors.
By running this command ...
mvn help:describe -Dplugin=appengine -Ddetail
I have discovered that there is an additional parameter I can pass to the update goal that will do exactly what I need it to do, but I don't know the correct syntax to actually pass this additional parameter.
appengine:update
Description: Create or update an app version.
Implementation: com.google.appengine.appcfg.Update Language: java
Before this mojo executes, it will call:
Phase: 'package'
Available parameters:
additionalParams
User property: appengine.additionalParams
Additional parameters to pass through to AppCfg.
noCookies
User property: appengine.noCookies
Do not save/load access credentials to/from disk.
I think this might be the correct syntax ...
😈 >mvn appengine:update -DadditionalParams="--noCookies"
However, this does NOT solve the problem as the update seems to ignore the parameter.
I fixed the error by deleting the cached OAuth tokens (which forces the authentication prompt to appear again) with this command before running mvn appengine:update:
rm ~/.appcfg_oauth2_tokens_java
I was able to solve this problem by using the appcfg.sh tool instead of Maven.
😈 >appcfg.sh --no_cookies update /path/to/maven/project/first_project_second_try/guestbook/target/guestbook-1.0-SNAPSHOT
I suspect that it is possible to do this with Maven as well, but I am uncertain how to pass the "--no_cookies" option to Maven.
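One avenue worth trying, based on the parameter listing from mvn help:describe above: noCookies is exposed as its own user property (appengine.noCookies), so passing it directly instead of through additionalParams may work, though I have not verified it:

mvn appengine:update -Dappengine.noCookies=true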