The documentation on Review Apps in Heroku says:
The app.json file has a scripts section that lets you specify a postdeploy command. Use this to run any one-time setup tasks that make the app, and any databases, ready and useful for testing.
Does that mean it never runs on production? What about staging? Does it run on production if I create a new production app from scratch?
I'm trying to find out how to generate sample data for my review apps; a post-deploy hook that seems to apply to my whole pipeline, and not just review apps, feels like the wrong place.
It was as easy as specifying it under environments → review, like this. Scripts nested under an environment apply only to apps of that type, so this postdeploy runs for review apps only:
{
"env": {
"RAILS_MASTER_KEY": {
"description": "Rails master encryption key.",
"required": true,
"generator": "secret"
}
},
"formation": {
"web": {
"quantity": 1,
"size": "hobby"
},
"worker": {
"quantity": 1,
"size": "hobby"
}
},
"addons": [
{
"plan": "heroku-postgresql"
}
],
"environments": {
"review": {
"scripts": {
"postdeploy": "bundle exec rails db:seed"
}
}
}
}
So I've run into this issue with a web app I've made:
it gets a file path as input
if the file exists in a bucket, it uses a Python client API to create a Compute Engine instance
it passes the file path to the instance in the startup script
When I ran it locally, I created a Python virtual environment and then ran the app. When I submit the input in the web browser, the virtual machine is created by the API call. I assumed it was using my personal account, so I switched to the service account on the command line with 'gcloud config set account', and it ran fine once more.
When I simply go to the source code directory and deploy it as is to App Engine, the application can create the virtual machine instances as well.
When I use Google Cloud Build and deploy to Cloud Run, it doesn't create the VM instance.
The web app itself is not throwing any errors, but when I check Compute Engine's logs, there is an error:
{
"protoPayload": {
"@type": "type.googleapis.com/google.cloud.audit.AuditLog",
"status": {
"code": 3,
"message": "INVALID_PARAMETER"
},
"authenticationInfo": {
"principalEmail": "####"
},
"requestMetadata": {
"callerIp": "#####",
"callerSuppliedUserAgent": "(gzip),gzip(gfe)"
},
"serviceName": "compute.googleapis.com",
"methodName": "v1.compute.instances.insert",
"resourceName": "projects/someproject/zones/somezone/instances/nameofinstance",
"request": {
"#type": "type.googleapis.com/compute.instances.insert"
}
},
"insertId": "######",
"resource": {
"type": "gce_instance",
"labels": {
"instance_id": "#####",
"project_id": "someproject",
"zone": "somezone"
}
},
"timestamp": "2021-06-16T12:18:21.253551Z",
"severity": "ERROR",
"logName": "projects/someproject/logs/cloudaudit.googleapis.com%2Factivity",
"operation": {
"id": "operation-#####",
"producer": "compute.googleapis.com",
"last": true
},
"receiveTimestamp": "2021-06-16T12:18:21.253551Z"
}
In theory, it is the exact same code that worked from my laptop and on App Engine. I'm baffled that it only fails on Cloud Run.
App Engine's default service account was stripped of all its roles and given a custom role tailored to the web app's function.
The Cloud Run service is using a different service account, but it was given that exact same custom role.
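For context, the role binding was done along these lines (a sketch; the service account name and custom role ID are placeholders):
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:cloud-run-sa@my-project.iam.gserviceaccount.com" \
  --role="projects/my-project/roles/webAppCustomRole"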
Here is the method I use to call the API:
import googleapiclient.discovery  # google-api-python-client
from datetime import date

def create_instance(path):
    compute = googleapiclient.discovery.build('compute', 'v1')
    # Unique instance name based on the current timestamp
    vmname = "piinnuclei" + date.today().strftime("%Y%m%d%H%M%S")
    # Startup script: copies the input from the bucket, runs the analysis,
    # then deletes the instance itself when done
    startup_script = "#! /bin/bash\napt update\npip3 install pg8000\nexport BUCKET_PATH=my-bucket/{}\ngsutil -m cp -r gs://$BUCKET_PATH /home/connor\ncd /home/connor\n./cloud_sql_proxy -dir=cloudsql -instances=sql-connection-name=unix:sql-connection-name &\npython3 run_analysis_upload.py\nexport ZONE=$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/zone -H 'Metadata-Flavor: Google')\nexport NAME=$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/name -H 'Metadata-Flavor: Google')\ngcloud --quiet compute instances delete $NAME --zone=$ZONE".format(path)
    config = {
        "kind": "compute#instance",
        "name": vmname,
        "zone": "projects/my-project/zones/northamerica-northeast1-a",
        "machineType": "projects/my-project/zones/northamerica-northeast1-a/machineTypes/e2-standard-4",
        "displayDevice": {
            "enableDisplay": False
        },
        "metadata": {
            "kind": "compute#metadata",
            "items": [
                {
                    "key": "startup-script",
                    "value": startup_script
                }
            ]
        },
        "tags": {
            "items": []
        },
        "disks": [
            {
                "kind": "compute#attachedDisk",
                "type": "PERSISTENT",
                "boot": True,
                "mode": "READ_WRITE",
                "autoDelete": True,
                "deviceName": vmname,
                "initializeParams": {
                    "sourceImage": "projects/my-project/global/images/my-image",
                    "diskType": "projects/my-project/zones/northamerica-northeast1-a/diskTypes/pd-balanced",
                    "diskSizeGb": "100"
                },
                "diskEncryptionKey": {}
            }
        ],
        "canIpForward": False,
        "networkInterfaces": [
            {
                "kind": "compute#networkInterface",
                "subnetwork": "projects/my-project/regions/northamerica-northeast1/subnetworks/default",
                "accessConfigs": [
                    {
                        "kind": "compute#accessConfig",
                        "name": "External NAT",
                        "type": "ONE_TO_ONE_NAT",
                        "networkTier": "PREMIUM"
                    }
                ],
                "aliasIpRanges": []
            }
        ],
        "description": "",
        "labels": {},
        "scheduling": {
            "preemptible": False,
            "onHostMaintenance": "MIGRATE",
            "automaticRestart": True,
            "nodeAffinities": []
        },
        "deletionProtection": False,
        "reservationAffinity": {
            "consumeReservationType": "ANY_RESERVATION"
        },
        "serviceAccounts": [
            {
                "email": "batch-service-accountg@my-project.iam.gserviceaccount.com",
                "scopes": [
                    "https://www.googleapis.com/auth/cloud-platform"
                ]
            }
        ],
        "shieldedInstanceConfig": {
            "enableSecureBoot": False,
            "enableVtpm": True,
            "enableIntegrityMonitoring": True
        },
        "confidentialInstanceConfig": {
            "enableConfidentialCompute": False
        }
    }
    return compute.instances().insert(
        project="my-project",
        zone="northamerica-northeast1",
        body=config).execute()
The issue was with the zone. For some reason, when it ran on Cloud Run, the code below was the culprit.
return compute.instances().insert(
project="my-project",
zone="northamerica-northeast1",
body=config).execute()
"northamerica-northeast1" should have been "northamerica-northeast1-a"
EDIT:
I made a new virtual machine image and quickly ran into the same problem: it would work locally and break down in the Cloud Run environment. After letting it sit for some time, it began to work again. This leads me to the conclusion that there is also some sort of delay before a new image can be used from Cloud Run.
I'm on a Mac 💻. I'm trying to find a way to create 4 terminals as soon as I double-click my workspace file.
I've tried to get one working, but I seem to be stuck:
{
"folders": [
{
"path": "/Users/bheng/Sites/laravel/project"
}
],
"settings": {
"workbench.action.terminal.focus": true,
"terminal.integrated.shell.osx": "ls",
"terminal.integrated.shellArgs.osx": [
"ls -lrt"
]
},
"extensions": {}
}
My goal is to open 4 terminals:
Terminal 1: run 'npm run watch'
Terminal 2: run 'ls -lrt'
Terminal 3: run 'ssh_staging'
Terminal 4: run 'mysql'
I've been following this doc: https://code.visualstudio.com/docs/editor/integrated-terminal#_terminal-keybindings
Any hints for me?
I've been playing around with this, and it seems to work. Combining the ability to run a task on folder open with the ability to make that task depend on other tasks, I came up with the following. It looks cumbersome, but it is actually pretty simple and repetitive.
First, you will need a macro extension like multi-command. Put this into your settings:
"multiCommand.commands": [
{
"command": "multiCommand.runInFirstTerminal",
"sequence": [
"workbench.action.terminal.new",
{
"command": "workbench.action.terminal.renameWithArg",
"args": {
"name": "npm watch"
}
},
{
"command": "workbench.action.terminal.sendSequence",
"args": {
"text": "npm run watch\u000D" // \u000D is a return so it runs
}
}
]
},
{
"command": "multiCommand.runInSecondTerminal",
"sequence": [
"workbench.action.terminal.new",
{
"command": "workbench.action.terminal.renameWithArg",
"args": {
"name": "ls -lrt"
}
},
{
"command": "workbench.action.terminal.sendSequence",
"args": {
"text": "ls -lrt\u000D"
}
}
]
},
{
"command": "multiCommand.runInThirdTerminal",
"sequence": [
"workbench.action.terminal.new",
{
"command": "workbench.action.terminal.renameWithArg",
"args": {
"name": "ssh_staging"
}
},
{
"command": "workbench.action.terminal.sendSequence",
"args": {
"text": "ssh_staging\u000D" // however you run the ssh_staging command
}
}
]
},
{
"command": "multiCommand.runInFourthTerminal",
"sequence": [
"workbench.action.terminal.new",
{
"command": "workbench.action.terminal.renameWithArg",
"args": {
"name": "mysql"
}
},
{
"command": "workbench.action.terminal.sendSequence",
"args": {
"text": "mysql\u000D" // however you run the mysql command
}
},
// "workbench.action.focusActiveEditorGroup"
]
}
]
There is one command for each terminal, but within each of those you can do as much as you can get into a macro, which is a lot, especially thanks to the sendSequence command. You can change directories and send another sendSequence command to the same terminal instance, run all the non-terminal commands too, change focus to an editor at the end of the last terminal setup, and so on.
I added the nicety of naming each terminal after its command, using the workbench.action.terminal.renameWithArg command.
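For example, here is a sketch of one more entry for the multiCommand.commands array that changes directory before starting the watcher in the same terminal (the command name and the src folder are placeholders):
{
  "command": "multiCommand.cdAndWatch",
  "sequence": [
    "workbench.action.terminal.new",
    {
      "command": "workbench.action.terminal.sendSequence",
      "args": {
        "text": "cd src\u000D" // hypothetical folder
      }
    },
    {
      "command": "workbench.action.terminal.sendSequence",
      "args": {
        "text": "npm run watch\u000D"
      }
    }
  ]
}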
In tasks.json:
"tasks": [
{
"label": "Run 4 terminals on startup",
"runOptions": {"runOn": "folderOpen"},
"dependsOrder": "sequence", // or parallel
"dependsOn": [
"terminal1",
"terminal2",
"terminal3",
"terminal4"
]
},
{
"label": "terminal1",
"command": "${command:multiCommand.runInFirstTerminal}"
},
{
"label": "terminal2",
"command": "${command:multiCommand.runInSecondTerminal}",
},
{
"label": "terminal3",
"command": "${command:multiCommand.runInThirdTerminal}"
},
{
"label": "terminal4",
"command": "${command:multiCommand.runInFourthTerminal}"
}
]
Now whenever you open (or reload) the workspace folder this tasks.json is in, the four terminals should be opened, named, and run. In my experience, there is a short delay before VS Code runs any folderOpen task.
If you prefer to manually trigger the Run 4 terminals task, you can set up a keybinding like so:
{
"key": "alt+r", // whatever keybinding you want
"command": "workbench.action.tasks.runTask",
"args": "Run 4 terminals on startup"
},
Here is a demo running with the keybinding (easier to demonstrate than reloading VS Code, but there is no difference). I added a delay between the terminals just for demonstration purposes; otherwise it is extremely fast.
I noticed that VS Code freezes if I don't interact with one of the terminals, or open another, before deleting them all.
There is also a Terminal Manager extension which may be of interest. I haven't tried it.
An extension for setting-up multiple terminals at once, or just
running some commands.
It isn't obvious to me whether this extension can be configured to run on folderOpen, but it appears to contribute a command to run all the terminals, so you should be able to use that in a task, as sketched below.
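For instance, a task along these lines might trigger it on folder open (the command ID terminalManager.runAllTerminals is hypothetical; check the extension's contributed commands):
{
  "label": "run all terminals",
  "runOptions": {"runOn": "folderOpen"},
  "command": "${command:terminalManager.runAllTerminals}" // hypothetical command ID
}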
I like the accepted answer. However, I prefer not to use the multi-command extension as shown there; I think my approach is simpler.
Please note, in my case:
my project only needs three tasks, and all three should run in parallel (craft-server, craft-app, and craft-site); this approach should work for more than three tasks
I prefer to see the output of the three tasks in three separate terminals (vs. combined in one terminal)
my tasks never "finish" (all three "watch" for file changes, so I need the terminals to remain open)
See my tasks.json file below; you'll need to modify the "label" and "command" properties to suit your project. Notes about the important parts follow the file.
{
"version": "2.0.0",
"tasks": [
// ...other tasks...
{
"label": "runDevelopment",
"runOptions": {
"runOn": "folderOpen"
},
"dependsOrder": "parallel",
"dependsOn": [
"craft-server",
"craft-site",
"craft-app"
]
},
{
"label": "craft-server",
"type": "shell",
"command": "npx nodemon --watch . --ignore craft-angular/projects/craft-app/ --ignore craft-angular/projects/craft-site/ --ignore dist/ --ignore bin/ --ignore log/ --ignore cypress/ --ignore cypress.json ./bin/www",
"presentation": {
"panel": "dedicated"
}
},
{
"label": "craft-site",
"type": "shell",
"command": "cd ./craft-angular && node --max_old_space_size=8000 ./node_modules/#angular/cli/bin/ng build craft-site --verbose=false --progress=true --watch --output-path=\"./dist/development/craft-site\"",
"presentation": {
"panel": "dedicated"
}
},
{
"label": "craft-app",
"type": "shell",
"command": "cd ./craft-angular && node --max_old_space_size=8000 ./node_modules/#angular/cli/bin/ng build craft-app --verbose=false --progress=true --watch --output-path=\"./dist/development/craft-app\"",
"presentation": {
"panel": "dedicated"
}
}
]
}
Please note:
I only use the VS Code tasks.json / custom tasks feature (I don't use a VS Code extension)
I use the "dependsOn" approach as shown in the accepted answer, so that one task can invoke multiple other tasks in parallel (note "dependsOrder": "parallel")
I use the "runOptions": {"runOn": "folderOpen"} approach as shown in the accepted answer, so that VSCode will automatically run my "combined" task when I open my workspace/project
"runOn": "folderOpen" is convenient for me (I always want to run
my main task when I open the folder containing my project),
but it is optional; you could also use keybindings as shown in the accepted answer or here
and if you use "runOn": "folderOpen" you need give VS Code a one-time permission to do that, as described here
I don't use the "problemMatcher" property (i.e. a VS Code feature to scan output of each terminal); therefore when I run the task, I choose "Continue without scanning the task output"
I use the "presentation" property with {"panel":"dedicated"} so each of my tasks gets a separate terminal (aka separate panel)
The runDevelopment task should run automatically when I open the workspace/project/folder (i.e. the location containing the .vscode folder and the .vscode/tasks.json file) using File > Open Folder... (Ctrl+K Ctrl+O) or File > Open Recent...
While I want this task to run automatically, below is how to run the task manually (if needed, e.g. if the automatic task fails, or I kill the automatic task):
You could use Ctrl+Shift+B to run the build task, as commented by @Ruben and described in the VS Code keyboard bindings documentation here.
Or you could use a more step-by-step approach:
I use Ctrl+Shift+P (or F1) to open the command palette ("Show All Commands")
then type "Run Tasks" and hit Enter
then choose the single "combined" task (for me, it's named runDevelopment) and hit Enter
finally, choose "Continue without scanning the task output" and hit Enter (because none of my tasks have a "problemMatcher", I can interpret the task output for myself)
This is how the task looks after it is run; note there are 3 separate terminals for 3 separate subtasks.
I like the second answer that only uses VS Code tasks, but it does not work for my requirements because I cannot type other commands in the opened terminals; otherwise the terminal closes. I prefer to use the Restore Terminals extension in VS Code.
After the extension is installed, you can just create a restore-terminals.json file in the .vscode folder:
{
"artificialDelayMilliseconds": 300,
"keepExistingTerminalsOpen": false,
"runOnStartup": true,
"terminals": [
{
"splitTerminals": [
{
"name": "server",
"commands": ["npm i", "npm run dev"]
},
{
"name": "client",
"commands": ["npm run dev:client"]
},
{
"name": "test",
"commands": ["jest --watch"]
}
]
},
{
"splitTerminals": [
{
"name": "build & e2e",
"commands": ["npm run eslint", "npm run build", "npm run e2e"],
"shouldRunCommands": false
},
{
"name": "worker",
"commands": ["npm-run-all --parallel redis tsc-watch-start worker"]
}
]
}
]
}
I want to execute 2 commands with 1 hotkey, F4:
workbench.action.toggleSidebarVisibility
workbench.action.toggleActivityBarVisibility
I am trying to use this code, but it doesn't work.
{
"key": "F4",
"command": "workbench.action.toggleSidebarVisibility && workbench.action.toggleActivityBarVisibility"
}
Not possible, at least not as of today with a vanilla installation.
But you can try this extension; it creates macros from multiple commands, which can then be bound to a shortcut: https://marketplace.visualstudio.com/items?itemName=geddski.macros
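With that extension, the configuration would look something like this (a sketch based on the extension's README; the macro name is arbitrary). In settings.json:
"macros": {
  "toggleSidebarAndActivityBar": [
    "workbench.action.toggleSidebarVisibility",
    "workbench.action.toggleActivityBarVisibility"
  ]
}
Then bind the generated command in keybindings.json:
{
  "key": "f4",
  "command": "macros.toggleSidebarAndActivityBar"
}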
Another way to achieve this without an extension:
Run "Tasks: Open User Tasks" command to create or open a user level tasks file.
Define commands as separate tasks, like so:
{
"version": "2.0.0",
"tasks": [
{
"label": "my-f4-1",
"command": "${command:workbench.action.toggleSidebarVisibility}"
},
{
"label": "my-f4-2",
"command": "${command:workbench.action.toggleActivityBarVisibility}"
},
{
"label": "my-f4",
"dependsOrder": "sequence",
"dependsOn": [
"my-f4-1",
"my-f4-2"
],
"problemMatcher": []
}
]
}
In your keybindings.json:
{
  "key": "f4",
  "command": "workbench.action.tasks.runTask",
  "args": "my-f4"
}
I am trying to run a Consul container on each of my Mesos slave nodes.
With Marathon, I have the following JSON app definition:
{
"id": "consul-agent",
"instances": 10,
"constraints": [["hostname", "UNIQUE"]],
"container": {
"type": "DOCKER",
"docker": {
"image": "consul",
"privileged": true,
"network": "HOST"
}
},
"args": ["agent","-bind","$MESOS_SLAVE_IP","-retry-join","$MESOS_MASTER_IP"]
}
However, it seems that Marathon treats the args as plain text.
That's why I always get this error:
==> Starting Consul agent...
==> Error starting agent: Failed to start Consul client: Failed to start lan serf: Failed to create memberlist: Failed to parse advertise address!
So I just wonder if there is any workaround that would let me start a Consul container on each of my Mesos slave nodes.
Update:
Thanks @janisz for the link.
After taking a look at the following discussions:
#3416: args in marathon file does not resolve env variables
#2679: Ability to specify the value of the hostname an app task is running on
#1328: Specify environment variables in the config to be used on each host through REST API
#1828: Support for more variables and variable expansion in app definition
as well as the Marathon documentation on Task Environment Variables.
My understanding is that:
Currently, it is not possible to pass environment variables in args.
Some posts indicate that one can pass environment variables in "cmd", but those are Task Environment Variables provided by Marathon, not environment variables from your host machine.
Please correct me if I am wrong.
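For example, a sketch that relies only on variables Marathon itself injects into the task (HOST is the Marathon-provided agent hostname; master.mesos is a placeholder address):
{
  "id": "consul-agent",
  "instances": 10,
  "constraints": [["hostname", "UNIQUE"]],
  "cmd": "consul agent -bind $HOST -retry-join master.mesos"
}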
You can try this:
{
"id": "consul-agent",
"instances": 10,
"constraints": [["hostname", "UNIQUE"]],
"container": {
"type": "DOCKER",
"docker": {
"image": "consul",
"privileged": true,
"network": "HOST",
"parameters": [
"key": "env",
"value": "YOUR_ENV_VAR=VALUE"
]
}
}
}
Or
{
"id": "consul-agent",
"instances": 10,
"constraints": [["hostname", "UNIQUE"]],
"container": {
"type": "DOCKER",
"docker": {
"image": "consul",
"privileged": true,
"network": "HOST"
}
},
"env": {
"ENV_NAME" : "VALUE"
}
}