Unable to mount new volume on node - google-api

Hi, I'm trying to mount a new volume for my db pod. I ran kubectl describe pod rc-chacha-5064p to see why it's taking so long and I get the following:
FailedMount AttachVolume.Attach failed for volume "db-xxxx-disk-pv" : googleapi: Error 403: Exceeded limit 'maximum_persistent_disks' on resource 'gke-xxxx-cluster-1-db-pool-xxxxx-xxxx'. Limit: 16.0
Is there a way to raise that limit? I already went through the Google quotas page but there is nothing about this kind of restriction. Any help would be appreciated.

This is not a quota issue but a node-level limit. Using the beta APIs, you can create a machine type that can attach more disks. See https://cloud.google.com/compute/docs/disks/#increased_persistent_disk_limits
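Since the attach limit depends on the node's machine type, one practical workaround is to move the database pod onto a node pool with a larger machine type. A rough sketch (the cluster, pool, zone and machine type here are placeholders; adjust to your setup):

# Add a node pool with a larger machine type, which supports a higher
# persistent-disk attach limit than the current pool.
gcloud container node-pools create db-pool-large \
    --cluster=cluster-1 \
    --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --num-nodes=1

You can then cordon or drain the old nodes (kubectl cordon / kubectl drain) so the scheduler places the pod on the new pool.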

Related

Updating service [default] (this may take several minutes)...failed

This used to work perfectly until exactly 4 days ago. When I run gcloud app deploy now, it completes the build and then hangs on Updating Service straight afterwards.
Here is the output:
Updating service [default] (this may take several minutes)...failed.
ERROR: (gcloud.app.deploy) Error Response: [13] Flex operation projects/just-sleek/regions/us-central1/operations/8260bef8-b882-4313-bf97-efff8d603c5f error [INTERNAL]: An internal error occurred while processing task /appengine-flex-v1/insert_flex_deployment/flex_create_resources>2020-05-26T05:20:44.032Z4316.jc.11: Deployment Manager operation just-sleek/operation-1590470444486-5a68641de8da1-5dfcfe5c-b041c398 errors: [
code: "RESOURCE_ERROR"
location: "/deployments/aef-default-20200526t070946/resources/aef-default-20200526t070946"
message: {
\"ResourceType\":\"compute.beta.regionAutoscaler\",
\"ResourceErrorCode\":\"403\",
\"ResourceErrorMessage\":{
\"code\":403,
\"errors\":[{
\"domain\":\"usageLimits\",
\"message\":\"Exceeded limit \'QUOTA_FOR_INSTANCES\' on resource \'aef-default-20200526t070946\'. Limit: 8.0\",
\"reason\":\"limitExceeded\"
}],
\"message\":\"Exceeded limit \'QUOTA_FOR_INSTANCES\' on resource \'aef-default-20200526t070946\'. Limit: 8.0\",
\"statusMessage\":\"Forbidden\",
\"requestPath\":\"https://compute.googleapis.com/compute/beta/projects/just-sleek/regions/us-central1/autoscalers\",
\"httpMethod\":\"POST\"
}
}"]
I tried the following ways to resolve the error:
I deleted all my previous versions and left only the running version.
I ran gcloud components update; it still fails.
I created a new project, changed the region from [REGION1] to [REGION2], deployed, and I am still getting the same error.
I also ran gcloud app deploy --verbosity=debug, which does not give any different result.
I have no clue what is causing this issue or how to solve it. Please assist.
Google is already aware of this issue and it is currently being investigated.
There is a Public Issue Tracker that you may 'star' and follow to receive further updates. You may also see workarounds posted there that can be applied temporarily if they suit your needs.
There is currently no ETA for the resolution, but an update will be provided as soon as the team makes progress on the issue.
I resolved this by adding this to my app.yaml:
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 7
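For context, a minimal Flex app.yaml with that block might look like this (the runtime line is just an example; keep whatever your app already uses):

runtime: python
env: flex
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 7

Capping max_num_instances below the limit of 8 from the error message presumably leaves headroom for the extra instances Flex spins up while rolling out a new version.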
I found the solution here:
https://issuetracker.google.com/issues/157449521
And I was also redirected to:
gcloud app deploy - updating service default fails with code 13 Quota for instances limit exceeded, and 401 unathorizeed

How to overcome error 400 in Watson Discovery Upload Data

I am new to IBM cloud. I deleted my Watson Discovery service by mistake. Afterwards, I re-created a new service and there was no issue. But when I try to upload data to Watson Discovery, I'm given error 400 "Only one free environment is allowed per resource group". I'm on the Lite plan.
Any help?
Log in to your IBM Cloud account, go to https://cloud.ibm.com/shell and run the following commands:
ibmcloud resource reclamations
The above command lists all resource reclamations under your account. To know which resource to delete, check the Entity CRN and copy its ID, then use the command below to delete the resource:
ibmcloud resource reclamation-delete [ID] --force
Replace [ID] with the ID of the resource you want to delete.
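Put together, the whole session looks roughly like this (the reclamation ID below is a placeholder copied from the listing output):

# List pending reclamations; the deleted Lite instance should appear here.
ibmcloud resource reclamations
# Delete the reclamation so the old instance is fully removed.
ibmcloud resource reclamation-delete 1b8ed1c6-xxxx-xxxx-xxxx-xxxxxxxxxxxx --force

Once the reclamation is deleted, the old Lite instance is fully gone and creating a new environment should no longer return the 400 error.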
Maybe it is too late, but I found some information under this link: https://cloud.ibm.com/docs/discovery?topic=discovery-gs-api.
It mentions something like: "If you have recently deleted a Lite instance and then receive a 400 - Only one free environment is allowed per resource group error message when creating a new environment in a new Lite instance, you need to finish deleting the original Lite instance. See ibmcloud resource reclamations and follow the reclamation-delete instructions."
Also further information can be gathered from here: https://cloud.ibm.com/docs/cli?topic=cloud-cli-ibmcloud_commands_resource#ibmcloud_resource_reclamations

kubernetes rolling update for elasticsearch

I am performing a simple rolling update for elasticsearch image. The command I use is
kubectl set image deployment master-deployment elasticsearch={private registry}/elasticsearch:{tag}
However, Elasticsearch always gets an IOException after the rolling update:
Caused by: java.io.IOException: failed to read [id:60, legacy:false, file:/var/lib/elasticsearch/nodes/0/_state/global-60.st]
I have checked the directory /var/lib/elasticsearch/nodes/0/_state/. It has a global-10.st file present but not global-60.st.
How should I make sure the image itself synchronizes well with the files present?
I think you should go with a StatefulSet and external storage (i.e. a PVC; don't store the data inside the pod).
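A minimal sketch of that approach (the names, replica count and storage size are placeholders; the mount path matches the one from your error):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
spec:
  serviceName: elasticsearch
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: {private registry}/elasticsearch:{tag}
        volumeMounts:
        - name: data
          mountPath: /var/lib/elasticsearch  # node state lives here, outside the image
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi

With the data on a PVC, kubectl set image replaces the container but the nodes/0/_state directory is preserved across the rollout.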

Issue launching Aurora: Access denied to Performance Insights - InvalidParameterCombination

For some reason I just can't get an Amazon Aurora DB launched. I haven't launched one before but I have read many Amazon help/instruction pages. Launching other Amazon products worked well after some digging. This one just doesn't. Any suggestions?
Error:
Access denied to Performance Insights (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterCombination; Request ID: 8ef6c7b9-be54-4bd8-aa87-XXXXXXXX)
http://prntscr.com/iug951
Today it works. I selected the same settings as yesterday; all I did differently was omit dashes (-) from the database name and the other things you have to name. If that was the actual cause of yesterday's 3-hour headache, it really sucks that Amazon doesn't just tell you that instead of showing a cryptic error message.
I just had the same issue with the same error message. Restarting the setup process from the start (with a database name without a dash in it) fixed the issue.
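For anyone creating the cluster from the CLI instead of the console, a sketch (every name and the password are placeholders): the cluster identifier may contain dashes, but the initial database name passed via --database-name must be strictly alphanumeric.

# --db-cluster-identifier may contain hyphens;
# --database-name must be alphanumeric only (no dashes).
aws rds create-db-cluster \
    --db-cluster-identifier my-aurora-cluster \
    --engine aurora-mysql \
    --database-name sleekdb \
    --master-username admin \
    --master-user-password 'REPLACE_ME'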

openshift 3 memory quota

I am using OpenShift 3 Pro to run an Elasticsearch server (not the full ELK stack).
To do this I am using this image:
-- https://github.com/lbischof/openshift3-elk
only the Elasticsearch part.
After installing, I am using elasticdump to add data from another server.
The process is very long and crashes multiple times. During the dump, the pod is constantly using all of its 512Mi memory quota.
How can I allow 1024 or 2048 Mi for my Elasticsearch pod?
You can change the resource limits by going to the deployment config in the web console and selecting 'Edit Resource Limits' from the drop-down menu on the right side. You will first need to ensure your Pro account has enough memory associated with it.
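If you prefer the CLI, the same change can be made with oc (assuming the deployment config is named elasticsearch; adjust to yours):

# Raises the memory limit and triggers a redeployment with the new values.
oc set resources dc/elasticsearch --requests=memory=512Mi --limits=memory=1Gi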
