I created a private image of a Google Compute Engine persistent disk, called primecoin01.
Later on, I tried to create a new image. It fails, saying the regexp is invalid, both when listing images and during gcloud compute instances delete, the first step in using my persistent disk to create an image. It let me create the image name, and now I'm unable to use the commands gcloud compute images list or gcloud compute instances delete instance-0 --keep-disks boot. I don't know of a way to delete this image from my list.
primecoin01 certainly meets the regular-expression criteria, and I have no clue why the image apparently ended up named ``primecoin01. All help greatly appreciated.
Details below:
C:\Program Files\Google\Cloud SDK>gcloud compute images list
NAME                                 PROJECT            ALIAS               DEPRECATED  STATUS
centos-6-v20141021                   centos-cloud       centos-6                        READY
centos-7-v20141021                   centos-cloud       centos-7                        READY
coreos-alpha-494-0-0-v20141108       coreos-cloud                                       READY
coreos-beta-444-5-0-v20141016        coreos-cloud                                       READY
coreos-stable-444-5-0-v20141016      coreos-cloud       coreos                          READY
backports-debian-7-wheezy-v20141021  debian-cloud       debian-7-backports              READY
debian-7-wheezy-v20141021            debian-cloud       debian-7                        READY
container-vm-v20141016               google-containers  container-vm                    READY
opensuse-13-1-v20141102              opensuse-cloud     opensuse-13                     READY
rhel-6-v20141021                     rhel-cloud         rhel-6                          READY
rhel-7-v20141021                     rhel-cloud         rhel-7                          READY
sles-11-sp3-v20140930                suse-cloud         sles-11                         READY
sles-11-sp3-v20141105                suse-cloud         sles-11                         READY
sles-12-v20141023                    suse-cloud                                         READY
ERROR: (gcloud.compute.images.list) Some requests did not succeed:
- Invalid value '``primecoin01'. Values must match the following regular expression: '(?:(?:[-a-z0-9]{1,63}\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))'
and
C:\Program Files\Google\Cloud SDK>gcloud compute instances delete instance-0 --keep-disks boot
ERROR: (gcloud.compute.instances.delete) Unable to fetch a list of zones. Specifying [--zone] may fix this issue:
- Invalid value '``primecoin01'. Values must match the following regular expression: '(?:(?:[-a-z0-9]{1,63}\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))'
and
C:\Program Files\Google\Cloud SDK>gcloud compute instances delete instance-0 --keep-disks boot --zone us-central1-b
The following instances will be deleted. Attached disks configured to
be auto-deleted will be deleted unless they are attached to any other
instances. Deleting a disk is irreversible and any data on the disk
will be lost.
- [instance-0] in [us-central1-b]
Do you want to continue (Y/n)? y
ERROR: (gcloud.compute.instances.delete) Failed to fetch some instances:
- Invalid value '``primecoin01'. Values must match the following regular expression: '(?:(?:[-a-z0-9]{1,63}\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))'
This seems to be a validation error from when the disk was created, where the name ended up incorrect. Are you still having the same issue?
One way to create a 'snapshot' of your disk is to use the Linux dd command to dump the raw disk, then tar the file and create an image from it.
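A rough sketch of that flow, assuming the boot disk is /dev/sda, gs://my-bucket is a bucket you own, and primecoin02 is the new image name (all placeholders). Note that GCE expects the raw file inside the tarball to be named disk.raw, and dd-ing a live root disk can yield an inconsistent image, so attach the disk to a second instance if you can:
sudo dd if=/dev/sda of=/tmp/disk.raw bs=4M conv=sparse
tar -Szcf /tmp/primecoin-image.tar.gz -C /tmp disk.raw
gsutil cp /tmp/primecoin-image.tar.gz gs://my-bucket/
gcloud compute images create primecoin02 --source-uri gs://my-bucket/primecoin-image.tar.gz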
Block explorer: https://explorer.mainnet.near.org/blocks/2RPJGA17MQ9GAtwSVuVbasuosgkWqDgXHKWLuX4VyYv4
I am able to query starting from block 9820221.
Can anyone help me understand why this is the case, and whether there are other explorers where I can query the block details?
mainnet started from block height 9820210 (see the mainnet genesis config), so there are no blocks before that one. The first three blocks after genesis are missing due to validators being offline or something like that, so the first produced block is 9820214, and you can query it: https://explorer.mainnet.near.org/blocks/CFAAJTVsw5y4GmMKNmuTNybxFJtapKcrarsTh5TPUyQf
Blocks before 9820210 were produced in the mainnet that ran before July 22nd, 2020, but for some reason NEAR needed to restart the network from genesis, and thus we dumped the state as of block 9820210 and called it a new genesis, and that was the start. You have no access to the history before that moment; you can only inspect the state as of genesis, where certain accounts exist with certain balances, contract code, and states.
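If you want to sanity-check this outside the explorer, you can query a block directly from the public JSON-RPC endpoint with the documented block method (a sketch; rpc.mainnet.near.org is the public mainnet endpoint):
curl -s -X POST https://rpc.mainnet.near.org -H 'Content-Type: application/json' -d '{"jsonrpc": "2.0", "id": "dontcare", "method": "block", "params": {"block_id": 9820214}}'
Asking for any height below 9820211 should return an error rather than block data.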
I have about 4,000 input files (avg. ~7 MB each).
My pipeline always fails on the CoGroupByKey step when the data size reaches about 4 GB.
When I limit the input to only 300 files, it runs just fine.
When it fails, the logs on GCP Dataflow only show:
Workflow failed. Causes: S24:CoGroup Geo data/GroupByKey/Read+CoGroup Geo data/GroupByKey/GroupByWindow+CoGroup Geo data/Map(_merge_tagged_vals_under_key) failed., The job failed because a work item has failed 4 times. Look in previous log entries for the cause of each one of the 4 failures. For more information, see https://cloud.google.com/dataflow/docs/guides/common-errors. The work item was attempted on these workers:
store-migration-10212040-aoi4-harness-m7j7
Root cause: The worker lost contact with the service.,
store-migration-xxxxx
Root cause: The worker lost contact with the service.,
store-migration-xxxxx
Root cause: The worker lost contact with the service.,
store-migration-xxxxx
Root cause: The worker lost contact with the service.
I dug through all the logs in Logs Explorer. Nothing else indicates an error other than the above, not even my logging.info and try...except code.
I think this relates to the memory of the instances, but I didn't dig in that direction, because that's the kind of thing I don't want to have to worry about when I am using GCP services.
Thanks.
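If the workers really are running out of memory, one low-effort way to test that hypothesis is to rerun the job on workers with more RAM via the Beam worker options. A sketch, where my_pipeline.py, the project, bucket, and machine type are all placeholders:
python my_pipeline.py \
  --runner DataflowRunner \
  --project my-project \
  --region us-central1 \
  --temp_location gs://my-bucket/tmp \
  --machine_type n1-highmem-8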
I deployed a Prometheus Node Exporter pod on k8s. It worked fine.
But when I try to get system metrics by calling the Node Exporter metrics API from my custom Go application
curl -X GET "http://[my Host]:9100/metrics"
the result format was like this:
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 1.7636e-05
go_gc_duration_seconds{quantile="0.25"} 2.466e-05
go_gc_duration_seconds{quantile="0.5"} 5.7992e-05
go_gc_duration_seconds{quantile="0.75"} 9.1109e-05
go_gc_duration_seconds{quantile="1"} 0.004852894
go_gc_duration_seconds_sum 1.291217651
go_gc_duration_seconds_count 11338
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 8
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.12.5"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 2.577128e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 2.0073577064e+10
...
Those long text-format results are hard to parse, and I want to get them in JSON format to parse them easily.
I checked the Prometheus Node Exporter GitHub issues, and someone recommended prom2json: https://github.com/prometheus/node_exporter/issues/1062
But this is not what I'm looking for, because I would have to run an extra process to execute prom2json to get the results. I want to get Node Exporter's system metrics by simply making an HTTP request or using some kind of native Go package in my code.
How can I get those Node Exporter metrics in JSON format?
You already mentioned prom2json; you can pull the package into your Go file by importing github.com/prometheus/prom2json.
The sample executable in the repo has all the building blocks you need: first open the URL, then use the prom2json package to read the data and store the result.
However, you should also have a look at expfmt.TextParser, as that is the native way to ingest Prometheus-formatted metrics.
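For example, here is a minimal sketch of that native route, assuming the exporter is reachable at localhost:9100 (error handling kept deliberately crude):
package main

import (
	"encoding/json"
	"fmt"
	"net/http"

	"github.com/prometheus/common/expfmt"
)

func main() {
	// Fetch the text-format metrics from Node Exporter.
	resp, err := http.Get("http://localhost:9100/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Parse the Prometheus text exposition format into MetricFamily structs.
	var parser expfmt.TextParser
	families, err := parser.TextToMetricFamilies(resp.Body)
	if err != nil {
		panic(err)
	}

	// The MetricFamily values are generated protobuf structs with JSON tags,
	// so they can be marshaled directly for a quick JSON dump.
	out, err := json.Marshal(families)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
This keeps everything in-process: one HTTP request, one parse, no extra prom2json binary to run.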
I am using the Aerospike list operation (Go client) to prepend to an existing key using the following command:
client.Operate(c.WritePolicy, aeroKey, aero.ListInsertOp(c.bin, 0, messages...))
But I am getting "Server error" as the response error, with no other error details. I already checked that aeroKey exists and is not nil. Could it be that the Aerospike version does not support this operation? Is there a way to confirm this, or some setting to allow this operation?
Well, that would be because the list API was added in release 3.7.0.1. Before that, lists were a data type without any atomic operations (list-append, etc.). The same goes for maps; before 3.8.4 they were just a container for map data.
You're running against a version that is two years old. Time to upgrade.
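If you want to confirm the server version before upgrading, the asinfo tool that ships with Aerospike reports it (add -h/-p if the server is not local):
asinfo -v build
Anything older than 3.7.0.1 would explain the failing list operation.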
I am following the steps in the link below to use Hadoop 2.2 clusters with HDInsight: http://azure.microsoft.com/en-us/documentation/articles/hdinsight-get-started-30/
In the "Run a Word Count Map Reduce Job" section, I am having difficulty getting the command in step 4 to work. In PowerShell I type the following commands:
# Submit the job
Select-AzureSubscription $subscriptionName
$wordCountJob = Start-AzureHDInsightJob -Cluster $clusterName -JobDefinition $wordCountJobDefinition
I keep getting an error that states there is a ParameterArgumentValidationError. What command could I use to avoid getting these errors?
I am new to using Azure and could really use some help :)
Those are two separate cmdlets:
The first one is:
Select-AzureSubscription $subscriptionName
If you have only one subscription with your Azure account, you can skip this cmdlet.
The second cmdlet is:
$wordCountJob = Start-AzureHDInsightJob -Cluster $clusterName -JobDefinition $wordCountJobDefinition
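Once the job is submitted, the same tutorial waits for it to finish and then fetches the output. A sketch using the matching cmdlets from that generation of the Azure module (the timeout value is arbitrary, and -StandardError works the same way for logs):
Wait-AzureHDInsightJob -Job $wordCountJob -WaitTimeoutInSeconds 3600
Get-AzureHDInsightJobOutput -Cluster $clusterName -JobId $wordCountJob.JobId -StandardOutput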