I was trying to deploy an nginx Docker container with Mesos Marathon, and I would like to set some environment variables in the container, so I added a parameters section to the JSON file. After I added the parameters section, the deployment failed. My JSON file is as follows:
{
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 0, "servicePort": 80, "protocol": "tcp" }
      ],
      "parameters": [
        { "key": "myhostname", "value": "a.corp.org" }
      ]
    }
  },
  "id": "nginx7",
  "instances": 1,
  "cpus": 0.25,
  "mem": 256,
  "uris": []
}
My launch script was: curl -X POST -H "Content-Type: application/json" 10.3.11.11:8080/v2/apps -d@"$@"
The command I ran was: ./launch.sh nginx.json
You used the wrong parameter key, myhostname. If you want to set the hostname for your container, it should be:
"parameters": [
{ "key": "hostname", "value": "a.corp.org" }
]
If you want to pass an environment variable, it should be:
"parameters": [
{ "key": "env", "value": "myhostname=a.corp.org" }
]
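For context, Marathon forwards each entry in parameters to the Docker CLI as a --key=value option, so the two snippets above roughly correspond to the following docker run flags (a sketch only, using the image and values from the question):

docker run --hostname=a.corp.org nginx            # from "key": "hostname"
docker run --env=myhostname=a.corp.org nginx      # from "key": "env"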
I have two paths in my KrakenD config: /city/toronto and /city/vancouver. I want to create another path, /city/other, that would catch every other city that is provided.
I know at first glance one would say: make the city a path parameter, or even make it a query parameter. I have considered these options and they are not viable.
Is there a way in KrakenD to define a catch-all or fallback endpoint? I thought a wildcard could allow me to do this, but I am not seeing how this would work.
In earlier versions of KrakenD this was not possible, but since 1.4 you can use routes that were previously considered conflicting. The following example does exactly what you are expecting. Run it with the -d flag:
krakend run -d -c krakend.json
where the content of the JSON file is:
{
  "version": 2,
  "endpoints": [
    {
      "endpoint": "/city/vancouver",
      "backend": [
        {
          "url_pattern": "/__debug/vancouver",
          "host": [ "http://localhost:8080" ]
        }
      ]
    },
    {
      "endpoint": "/city/toronto",
      "backend": [
        {
          "url_pattern": "/__debug/toronto",
          "host": [ "http://localhost:8080" ]
        }
      ]
    },
    {
      "endpoint": "/city/{other}",
      "backend": [
        {
          "url_pattern": "/__debug/catchall",
          "host": [ "http://localhost:8080" ]
        }
      ]
    }
  ]
}
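A quick way to verify the routing, assuming KrakenD is listening on its default port 8080 and was started with the -d flag as above (montreal is just an arbitrary example city):

curl -i http://localhost:8080/city/toronto     # handled by the explicit /city/toronto endpoint
curl -i http://localhost:8080/city/montreal    # any other city falls through to /city/{other}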
For testing purposes I'm trying to execute this simple pipeline (nothing sophisticated).
However, I'm getting this error:
{"code":"BadRequest","message":null,"target":"pipeline//runid/cb841f14-6fdd-43aa-a9c1-4619dab28cdd","details":null,"error":null}
The goal is to see if two variables are getting the right values (we have been facing some issues in our production environment).
This is the JSON definition of the pipeline:
{
  "name": "GeneralTest",
  "properties": {
    "activities": [
      {
        "name": "Set variable1",
        "type": "SetVariable",
        "dependsOn": [],
        "userProperties": [],
        "typeProperties": {
          "variableName": "start_time",
          "value": {
            "value": "@utcnow()",
            "type": "Expression"
          }
        }
      },
      {
        "name": "Wait1",
        "type": "Wait",
        "dependsOn": [
          {
            "activity": "Set variable1",
            "dependencyConditions": [ "Succeeded" ]
          }
        ],
        "userProperties": [],
        "typeProperties": {
          "waitTimeInSeconds": 5
        }
      },
      {
        "name": "Set variable2",
        "description": "",
        "type": "SetVariable",
        "dependsOn": [
          {
            "activity": "Wait1",
            "dependencyConditions": [ "Succeeded" ]
          }
        ],
        "userProperties": [],
        "typeProperties": {
          "variableName": "end_time ",
          "value": {
            "value": "@utcnow()",
            "type": "Expression"
          }
        }
      }
    ],
    "variables": {
      "start_time": {
        "type": "String"
      },
      "end_time ": {
        "type": "String"
      }
    },
    "folder": {
      "name": "Old Pipelines"
    },
    "annotations": []
  }
}
What am I missing, or what could be the problem with this process?
You have a blank space after the variable name end_time, i.e. "end_time ".
You can see the difference in my repro (screenshot: MyCode vs. YourCode).
Removing that space makes the execution run just fine.
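As a quick sanity check before triggering a run, something like the following will flag property names or string values that end in a space (a rough sketch; GeneralTest.json is an assumed name for the exported pipeline definition):

# Match a space immediately before a closing quote that is followed by : or ,
# e.g.  "end_time ": {   or   "variableName": "end_time ",
grep -n ' "[,:]' GeneralTest.json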
I faced a similar issue when doing a Debug run of one of my pipelines. The error messages for these types of errors are not helpful when running in Debug mode.
What I have found is that if you publish the pipeline and then Trigger a Pipeline Run (instead of a Debug run), you can then go to Monitor Pipeline Runs and it will show you a more useful error message.
Apart from possible blank spaces in variable or parameter names, Data Factory also doesn't like hyphens, but only in parameter names; variables are fine.
Validation passes, but then at debug time you get the same cryptic error.
I ran into this same error message in Data Factory today on the Copy activity. Everything passed validation, but this error would pop up on each debug run.
I have parameters configured on my dataset connections so that I can use dynamic queries against the data sources. In this case I was using explicit queries, so the parameters appeared irrelevant. I tried both blank values and null values; both failed the same way.
I then tried with stupid but real text values and it worked! The pipeline isn't leveraging those values for any work, so their content doesn't matter, but some part of the engine needs a non-null value in the parameters in order to execute.
I am looking to send a pull request to multiple reviewers in Bitbucket. Currently I have the JSON and curl request below:
{
  "title": "PR-Test",
  "source": {
    "branch": { "name": "master" }
  },
  "destination": {
    "branch": { "name": "prd" }
  },
  "reviewers": [
    { "uuid": "{d543251-6455-4113-b4e4-2fbb1tb260}" }
  ],
  "close_source_branch": true
}
curl -u "user:pass" -H "Content-Type: application/json" https://api.bitbucket.org/2.0/repositories/companyname/my-repo/pullrequests -X POST --data #my-pr.json
The above curl command works. I need the JSON syntax to pass either multiple usernames or multiple UUIDs in the reviewers list.
https://developer.atlassian.com/bitbucket/api/2/reference/resource/repositories/%7Busername%7D/%7Brepo_slug%7D/pullrequests
The documentation you linked to seems to indicate that something like this should work:
"reviewers": [
{
"uuid": "{504c3b62-8120-4f0c-a7bc-87800b9d6f70}"
},
{
"uuid": "{bafabef7-b740-4ee0-9767-658b3253ecc0}"
}
]
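Putting it together, a sketch of a complete request body with two reviewers plus the same curl call as in the question (the UUIDs below are placeholders; substitute the real UUIDs of your reviewers):

cat > my-pr.json <<'EOF'
{
  "title": "PR-Test",
  "source": { "branch": { "name": "master" } },
  "destination": { "branch": { "name": "prd" } },
  "reviewers": [
    { "uuid": "{504c3b62-8120-4f0c-a7bc-87800b9d6f70}" },
    { "uuid": "{bafabef7-b740-4ee0-9767-658b3253ecc0}" }
  ],
  "close_source_branch": true
}
EOF
curl -u "user:pass" -H "Content-Type: application/json" \
  -X POST --data @my-pr.json \
  https://api.bitbucket.org/2.0/repositories/companyname/my-repo/pullrequests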
I'm creating a bash script to provision multiple Azure resources via the Azure CLI. So far so good, however I'm having a problem tagging resources.
My goal is to store multiple tags in a variable and provide that variable to the --tags option of several az commands in the script. The problem however is that a space in the value will be interpreted as a new key.
If we take for example the command az group update (which will update a resource group) the docs state the following about the --tags option:
--tags
Space-separated tags in 'key[=value]' format. Use "" to clear existing tags.
When a value (or key) contains spaces it must be enclosed in quotes.
So when we provide the key-value pairs directly to the command including a value with spaces, like in the following example, the result will be as expected:
az group update --tags owner="FirstName LastName" application=coolapp --name resource-group-name
The result will be that two tags have been added to the resource group:
{
  "id": "/subscriptions/1e42c44c-bc55-4b8a-b35e-de1dfbcfe481/resourceGroups/resource-group-name",
  "location": "westeurope",
  "managedBy": null,
  "name": "resource-group-name",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": {
    "application": "coolapp",
    "owner": "FirstName LastName"
  }
}
However, the problem occurs when we store that same value in a variable:
tag='owner="FirstName LastName" application=coolapp'
I use echo $tag to validate that the variable contains exactly the same value as we provided in the previous example to the --tags option:
owner="FirstName LastName" application=coolapp
But when we provide this tag variable to the --tags option of the command, as shown on the next line:
az group update --tags $tag --name resource-group-name
The result will be three tags instead of the expected two:
{
  "id": "/subscriptions/1e42c44c-bc55-4b8a-b35e-de1dfbcfe481/resourceGroups/resource-group-name",
  "location": "westeurope",
  "managedBy": null,
  "name": "resource-group-name",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": {
    "LastName\"": "",
    "application": "coolapp",
    "owner": "\"FirstName"
  }
}
I've already tried defining the variable in the following ways, but no luck so far:
tag="owner=FirstName LastName application=coolapp"
tag=owner="Firstname Lastname" application=cool-name
tag='`owner="Firstname Lastname" application=cool-name`'
I even tried defining the variable as an array and providing it to the command as shown on the next line, but that also didn't produce the correct result:
tag=(owner="Firstname Lastname" application=cool-name)
az group update --tags ${tag[*]} --name resource-group-name
I also tried putting quotes around the variable in the command, as was suggested by @Socowi, but this leads to the following incorrect result of one tag instead of two:
az group update --tags "$tag" --name resource-group-name
{
  "id": "/subscriptions/1e42c44c-bc55-4b8a-b35e-de1dfbcfe481/resourceGroups/resource-group-name",
  "location": "westeurope",
  "managedBy": null,
  "name": "resource-group-name",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": {
    "owner": "Firstname Lastname application=cool-name"
  }
}
Does anybody know how to solve this?
Define your tags as
tags=("owner=Firstname Lastname" "application=cool-name")
then use
--tags "${tags[#]}"
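Put together, a minimal sketch of the whole call (resource-group-name is the group from the question): each key=value pair is its own array element, so "${tags[@]}" expands to exactly one argument per pair and the space inside the owner value survives.

# One array element per tag; quoting "${tags[@]}" keeps each pair intact
tags=("owner=Firstname Lastname" "application=cool-name")
az group update --tags "${tags[@]}" --name resource-group-name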
I've found the following works. It requires that a resource group already be created.
I used the following template:
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "resourceName": {
      "type": "string",
      "metadata": {
        "description": "Specifies the name of the resource"
      }
    },
    "location": {
      "type": "string",
      "defaultValue": "[resourceGroup().location]",
      "metadata": {
        "description": "Location for the resources."
      }
    },
    "resourceTags": {
      "type": "object",
      "defaultValue": {
        "Cost Center": "Admin"
      }
    }
  },
  "resources": [
    {
      "apiVersion": "2019-06-01",
      "kind": "StorageV2",
      "location": "[parameters('location')]",
      "name": "[parameters('resourceName')]",
      "properties": {
        "supportsHttpsTrafficOnly": true
      },
      "sku": {
        "name": "Standard_LRS"
      },
      "type": "Microsoft.Storage/storageAccounts",
      "tags": "[parameters('resourceTags')]"
    }
  ]
}
In the Azure CLI using Bash, you can pass the tags in as a JSON object. In the following example, the template file above (location has a default value) requires two parameters: resourceName and the tags, an ARM object parameter named resourceTags:
az deployment group create --name addstorage --resource-group myResourceGroup \
--template-file $templateFile \
--parameters resourceName=abcdef45216 resourceTags='{"owner":"bruce","Cost Cen":"2345-324"}'
If you want to pass it as an environment variable, use:
tags='{"owner":"bruce","Cost Center":"2345-324"}'
az deployment group create --name addstorage --resource-group myResourceGroup \
--template-file $templateFile \
--parameters resourceName=abcdef4556 resourceTags="$tags"
The $tags must be in double quotes (you are passing in a JSON object as a string).
The JSON string approach also works when you are passing the tags into an Azure DevOps pipeline. See https://github.com/MicrosoftDocs/azure-devops-docs/issues/9051
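If you want to confirm that the tags actually ended up on the storage account, something like the following should work (a sketch; the resource name matches the deployment example above):

# Show only the tags of the storage account deployed above
az resource show --resource-group myResourceGroup \
  --name abcdef4556 \
  --resource-type Microsoft.Storage/storageAccounts \
  --query tags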
First, build your string like so, double-quoting all keys and values in case either contains spaces (sorry, this is PowerShell, just as an example):
[string] $tags = [string]::Empty;
97..99 |% {
$tags += "&`"$([char]$_)`"=`"$($_)`"";
}
The result of this is the string &"a"="97"&"b"="98"&"c"="99".
Now pass it as a string array using the Split method of the string class, which results in a 4-element array whose first element is blank. The CLI command ignores the empty first element. Here I set the tags for a storage account:
$tag='application=coolapp&owner="FirstName LastName"&"business Unit"="Human Resources"'
az resource tag -g rg -n someResource --resource-type Microsoft.Storage/storageaccounts --tags $tag.split("&")
I also employed this approach when I wanted to override the parameters provided in a parameter file for a resource group deployment.
az group deployment create --resource-group $rgName --template-file $templatefile --parameters $parametersFile --parameters $($overrideParams.split("&"));
Is there any way to reset all slaves' reserved resources in Mesos, without configuring them one by one via the /unreserve HTTP endpoint?
From the Mesos documentation:
/unreserve (since 0.25.0)
Suppose we want to unreserve the resources that we dynamically reserved above. We can send an HTTP POST request to the master’s /unreserve endpoint like so:
$ curl -i \
    -u <operator_principal>:<password> \
    -d slaveId=<slave_id> \
    -d resources='[
      {
        "name": "cpus",
        "type": "SCALAR",
        "scalar": { "value": 8 },
        "role": "ads",
        "reservation": {
          "principal": <reserver_principal>
        }
      },
      {
        "name": "mem",
        "type": "SCALAR",
        "scalar": { "value": 4096 },
        "role": "ads",
        "reservation": {
          "principal": <reserver_principal>
        }
      }
    ]' \
    -X POST http://<ip>:<port>/master/unreserve
Mesos doesn't directly provide any support for unreserving resources at more than one slave in a single operation. However, you can write a script that uses the /unreserve endpoint to unreserve the resources at all the slaves in the cluster, e.g., by fetching the list of slaves and their reserved resources from the /slaves endpoint on the master (see the reserved_resources_full key).
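A rough sketch of such a script, assuming the /slaves response exposes an id and a reserved_resources_full map (role to list of resource objects) per slave and that jq is available; verify the field names against your Mesos version before running anything like this:

#!/usr/bin/env bash
# Unreserve every dynamically reserved resource on every agent (sketch).
MASTER="http://<ip>:<port>"
CRED="<operator_principal>:<password>"

# For each slave, collect its id and a flat list of all reserved resources.
curl -s "$MASTER/slaves" |
  jq -c '.slaves[] | {id: .id, resources: [.reserved_resources_full[]?[]]}' |
  while read -r entry; do
    slave_id=$(echo "$entry" | jq -r '.id')
    resources=$(echo "$entry" | jq -c '.resources')
    [ "$resources" = "[]" ] && continue   # nothing reserved on this slave
    curl -i -u "$CRED" \
      -d slaveId="$slave_id" \
      -d resources="$resources" \
      -X POST "$MASTER/master/unreserve"
  done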