We have a problem setting up aws-sigv4 and connecting to an AWS AMP workspace via Docker images.
TAG: grafana/grafana:7.4.5
The main problem is that the SigV4 configuration screen does not appear in the UI.
Installing grafana:7.4.5 locally via the standalone Linux binaries works.
Just by setting the environment variables
export AWS_SDK_LOAD_CONFIG=true
export GF_AUTH_SIGV4_AUTH_ENABLED=true
the configuration screen appears.
Connecting to AMP and querying data via the corresponding IAM instance role works flawlessly.
Setting the same values as environment variables in the Docker image does NOT work.
When using grafana/grafana:sigv4-web-identity it works, but that seems to be just a test image.
How do I configure the default Grafana image to enable SigV4 authentication?
It works for me:
$ docker run -d \
-p 3000:3000 \
--name=grafana \
-e "GF_AUTH_SIGV4_AUTH_ENABLED=true" \
-e "AWS_SDK_LOAD_CONFIG=true" \
grafana/grafana:7.4.5
You didn't provide a minimal reproducible example, so it's hard to say what the problem is in your case.
Use variable GF_AWS_SDK_LOAD_CONFIG instead of AWS_SDK_LOAD_CONFIG.
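If the configuration screen still does not appear, it can help to verify that the variables actually reach the container. A minimal check, assuming the container name grafana from the run command above:
# the SigV4/SDK variables should show up in the container's environment
docker exec grafana env | grep -E 'SIGV4|SDK_LOAD_CONFIG'
# Grafana may also log the config override for GF_ variables at startup
docker logs grafana 2>&1 | grep -i sigv4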
How can I export a GCE image to use it in a local VirtualBox?
I get the error:
error: No such device
gce-image-export.vmdk
gce-image-export.qcow2
gce-image-export.vdi
I use the command:
qemu-img convert -O vdi gce-image-export.qcow2 gce-image-export.vdi
I get the same error for *.vmdk, *.qcow2, and *.vdi.
Do you have any input for me?
Thanks
kivitendo
You can export the image using the gcloud command. The following documentation explains the command and all of its flags.
gcloud compute images export \
--destination-uri <destination-uri> \
--image <image-name> \
--export-format <format>
The --export-format flag exports the image to a format supported by QEMU using qemu-img. Valid formats include 'vmdk', 'vhdx', 'vpc', 'vdi', and 'qcow2'.
You can send the result to a bucket and download it later.
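For example, with placeholder names (my-export-bucket and my-gce-image are not taken from your setup), the export and download could look like this:
gcloud compute images export \
  --destination-uri gs://my-export-bucket/gce-image-export.vmdk \
  --image my-gce-image \
  --export-format vmdk
# download the exported file from the bucket
gsutil cp gs://my-export-bucket/gce-image-export.vmdk .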
Thanks, I can only export from Google Cloud Platform (GCE) to the
*.vmdk
*.vhdx
*.vpc
*.qcow2
formats.
I had to switch the VirtualBox 6.1 machine to EFI support.
If I then use rEFInd 0.12 as a boot helper, I can start my GCE *.vmdk machine.
I get many error messages and I cannot log in to my GCE *.vmdk machine to fix the errors and install grub-efi.
My installed Nextcloud server does start.
How can I log in to my machine?
root doesn't work.
I can't find any tutorial.
kivitendo
I have a setup that needs to be bootstrapped from the values of some files in another VM.
Here is the command I am using to invoke the run command:
BOOT_VM="${VM_NAME}1"
BOOT_ENODE=$(az vm run-command invoke --name ${BOOT_VM} \
--command-id RunShellScript \
--resource-group ${RSC_GRP_NAME} \
--query "value[].message" \
--output tsv \
--scripts "cat /etc/parity/enode.pub")
echo ${BOOT_ENODE}
The result I get is:
Enable succeeded: [stdout] [stderr]
As far as I know, this could mean one of two things:
There is no file there.
I am handling the response incorrectly.
I'm really hoping it isn't the first and would like advice on how to approach this.
For your issue, another possible reason is that the agent in the VM is not running or something has gone wrong with it. The Azure VM agent manages interactions between an Azure VM and the Azure fabric controller, so you should check that it is working properly.
Update
You can check the agent in the portal:
Also, you can check the agent inside the VM:
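For example, on a Linux VM you can usually check the agent service from inside the guest; the service name differs between distributions, so treat these as illustrative commands rather than exact ones for your image:
# Ubuntu/Debian images typically name the service walinuxagent
systemctl status walinuxagent
# Red Hat/CentOS images typically name it waagent
systemctl status waagent
# print the agent version
waagent --version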
For example, I want to get the vim config in a VM whose OS is Red Hat 7.2. The az vm run-command invoke call for that would look like below.
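A sketch of that call (the resource group, VM name, and config file path here are placeholders, not from the original answer):
az vm run-command invoke \
  --resource-group myResourceGroup \
  --name myRedHatVM \
  --command-id RunShellScript \
  --scripts "cat /etc/vimrc"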
Currently, my project's setup for testing is twofold: for day-to-day development, I run TestCafe through foreman on macOS (to take advantage of my personal .env file), and on the CI server (Bitbucket) I use TestCafe through the testcafe/testcafe Docker image.
However, not using the same environment during development and CI is not optimal, so I figured using docker(-compose) in both scenarios would be the best way to go. After reading TestCafe issue 1880 and PR 2574, I figured my command for development should be something like:
docker run -v /Users/bert/Development/m4e/ui_factory/test/tests:/test -p 1337:1337 -p 1338:1338 -it testcafe/testcafe -- remote /test --hostname localhost
but I seem unable to connect Safari to http://localhost:1337 in this case:
Safari can't open the page "172.17.0.2:1337/browser/connect/ryD70k" because Safari can't connect to the server "172.17.0.2"
Does anyone have an idea how to tackle this?
Please delete the unnecessary "--" from the following part of the command:
testcafe/testcafe -- remote
Here is a help topic, which describes how to use TestCafe Docker Image:
Using TestCafe Docker Image
As @Marion pointed out: the culprit is the -- in the command. I used it to make sure the arguments of the command were clearly separated from the docker arguments.
It is not simply 'unnecessary', it is simply wrong.
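With the -- removed, the development command from the question becomes:
docker run -v /Users/bert/Development/m4e/ui_factory/test/tests:/test \
  -p 1337:1337 -p 1338:1338 \
  -it testcafe/testcafe remote /test --hostname localhost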
I followed the guideline.
Install the composer-wallet-redis image and start the container.
export NODE_CONFIG='{"composer":{"wallet":{"type":"@ampretia/composer-wallet-redis","desc":"Uses a local redis instance","options":{}}}}'
composer card import admin@test-network.card
I found that the card is still stored on my local machine at the path ~/.composer/card/
How can I check whether the card exists in the Redis server?
How do I import the business network cards into the cloud custom wallet?
The primary issue (which I will correct in the README) is that the module name should be composer-wallet-redis. The @ampretia scope was a temporary repo.
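With that correction, the NODE_CONFIG from the question would reference the plain module name, roughly like this (untested sketch):
export NODE_CONFIG='{"composer":{"wallet":{"type":"composer-wallet-redis","desc":"Uses a local redis instance","options":{}}}}'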
Assuming that redis is started on the default port, you can run the redis CLI like this
docker run -it --link composer-wallet-redis:redis --rm redis redis-cli -h redis -p 6379
You can then issue Redis CLI commands to look at the data. It is not recommended to view or modify the data, but it is useful to confirm to yourself that it's working. The KEYS * command will display everything, but it should only be used in a development context; see the warnings on the Redis docs pages.
export NODE_ENVIRONMENT
start the docker container
composer card import
execute the docker run *** command followed by KEYS *: (empty list or set).
@Calanais
I am using Kubernetes to deploy an application and trace data from it using Zipkin. I am facing an issue replacing MySQL with Elasticsearch, since I cannot figure out how to do it. On the command line the replacement is done using STORAGE_TYPE="Elasticsearch", but how can that be done through Kubernetes? I am able to run the container from the Docker image, but is there any way to configure this through a Deployment?
You can define all the needed parameters via environment variables.
Here is a cmd for running zipkin in docker:
docker run -d -p 9411:9411 -e STORAGE_TYPE=elasticsearch -e ES_HOSTS=http://172.17.0.3:9200 -e ES_USERNAME=elastic -e ES_PASSWORD=changeme openzipkin/zipkin
All these parameters can be defined in a Deployment (see Expose Pod Information to Containers Through Environment Variables).
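As a quick sketch (the Deployment name zipkin and the Elasticsearch address are assumptions, not taken from your cluster), the same variables can also be set on an existing Deployment with kubectl:
kubectl set env deployment/zipkin \
  STORAGE_TYPE=elasticsearch \
  ES_HOSTS=http://elasticsearch:9200 \
  ES_USERNAME=elastic \
  ES_PASSWORD=changeme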