A trigger on our Synapse workspace needs different trigger times in different environments. During deployment, the times are overridden by the default value that is part of the TemplateForWorkspace.json file in the workspace_publish branch.
},
"Load from FTP_properties_typeProperties_recurrence_schedule": {
"type": "object",
"defaultValue": {
"minutes": [
0
],
"hours": [
3
]
}
},
To override parameters during deployment, we use the OverrideArmParameters option of the Synapse workspace deployment@2 task.
- task: Synapse workspace deployment@2
displayName: Synapse workspace deployment
inputs:
operation: 'deploy'
TemplateFile: $(Pipeline.Workspace)/synapse_template/TemplateForWorkspace.json
ParametersFile: $(Pipeline.Workspace)/synapse_template/TemplateParametersForWorkspace.json
azureSubscription: ${{ parameters.serviceConnection }}
ResourceGroupName: ${{ parameters.deploymentResourceGroupName }}
TargetWorkspaceName: '$(DeployWorkspaceName)'
DeleteArtifactsNotInTemplate: true
DeployManagedPrivateEndpoints: false
OverrideArmParameters: |
KeyVault_properties_typeProperties_baseUrl: xxxx${{ parameters.environment }}xxx
Datalake_properties_typeProperties_url: xxxx${{ parameters.environment }}xxxx.dfs.core.windows.net
How do I override the hours of the trigger? I've tried things like:
Load from FTP_properties_typeProperties_recurrence_schedule: 0,9
but this results in empty time boxes.
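One possibility worth trying (untested here, and assuming the task parses nested YAML under OverrideArmParameters into an object value mirroring the shape of the defaultValue above) is to supply the whole schedule object rather than a scalar:

OverrideArmParameters: |
  Load from FTP_properties_typeProperties_recurrence_schedule:
    # the parameter is of type object, so supply both keys together
    minutes: [0]
    hours: [9]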
I am working on creating a CI pipeline using GitHub Actions, Terraform, and Heroku. My example application is a Jmix application from Mario David (rent-your-stuff), which I am building by following his YouTube videos. Unfortunately, the regular GitHub integration he suggests has been turned off due to a security issue. If you attempt to use Heroku's "Connect to GitHub" button, you get an Internal Service Error.
So, as an alternative, I have changed my private repo to public and I'm trying to download it directly via the Terraform heroku_build source URL (see the "heroku_build" section):
terraform {
required_providers {
heroku = {
source = "heroku/heroku"
version = "~> 5.0"
}
herokux = {
source = "davidji99/herokux"
version = "0.33.0"
}
}
backend "remote" {
organization = "eraskin-rent-your-stuff"
workspaces {
name = "rent-your-stuff"
}
}
required_version = ">=1.1.3"
}
provider "heroku" {
email = var.HEROKU_EMAIL
api_key = var.HEROKU_API_KEY
}
provider "herokux" {
api_key = var.HEROKU_API_KEY
}
resource "heroku_app" "eraskin-rys-staging" {
name = "eraskin-rys-staging"
region = "us"
}
resource "heroku_addon" "eraskin-rys-staging-db" {
app_id = heroku_app.eraskin-rys-staging.id
plan = "heroku-postgresql:hobby-dev"
}
resource "heroku_build" "eraskin-rsys-staging" {
app_id = heroku_app.eraskin-rys-staging.id
buildpacks = ["heroku/gradle"]
source {
url = "https://github.com/ericraskin/rent-your-stuff/archive/refs/heads/master.zip"
}
}
resource "heroku_formation" "eraskin-rsys-staging" {
app_id = heroku_app.eraskin-rys-staging.id
type = "web"
quantity = 1
size = "Standard-1x"
depends_on = [heroku_build.eraskin-rsys-staging]
}
Whenever I try to execute this, I get the following build error:
-----> Building on the Heroku-20 stack
! Push rejected, Failed decompressing source code.
Source archive detected as: Zip archive data, at least v1.0 to extract
More information: https://devcenter.heroku.com/articles/platform-api-deploying-slugs#create-slug-archive
My assumption is that Heroku cannot download the archive, but I can successfully download it without any authentication using wget.
How do I debug this? Is there a way to ask Heroku to show the commands that the build stack is executing?
For that matter, is there a better approach given that the normal GitHub integration pipeline is broken?
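Before reworking the pipeline, one hedged thing worth trying: the error reports that the archive was detected as Zip, and Heroku's source endpoint generally expects a gzipped tarball, so pointing heroku_build at GitHub's .tar.gz archive URL instead of the .zip may help:

resource "heroku_build" "eraskin-rsys-staging" {
  app_id     = heroku_app.eraskin-rys-staging.id
  buildpacks = ["heroku/gradle"]
  source {
    # GitHub serves the same branch as a gzipped tarball at this path
    url = "https://github.com/ericraskin/rent-your-stuff/archive/refs/heads/master.tar.gz"
  }
}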
I have found a workaround for this issue, based on the notes from Heroku. They suggest using the third-party Deploy to Heroku GitHub Action instead of Terraform. To use it, I removed my heroku_build and heroku_formation from my main.tf file, so it just contains this:
resource "heroku_app" "eraskin-rys-staging" {
name = "eraskin-rys-staging"
region = "us"
}
resource "heroku_addon" "eraskin-rys-staging-db" {
app_id = heroku_app.eraskin-rys-staging.id
plan = "heroku-postgresql:hobby-dev"
}
My GitHub workflow now contains:
on:
push:
branches:
- master
pull_request:
jobs:
terraform:
name: 'Terraform'
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Setup Terraform
uses: hashicorp/setup-terraform@v1
with:
cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}
- name: Terraform Format
id: fmt
working-directory: ./infrastructure
run: terraform fmt
- name: Terraform Init
id: init
working-directory: ./infrastructure
run: terraform init
- name: Terraform Validate
id: validate
working-directory: ./infrastructure
run: terraform validate -no-color
- name: Terraform Plan
id: plan
if: github.event_name == 'pull_request'
working-directory: ./infrastructure
run: terraform plan -no-color -input=false
continue-on-error: true
- name: Update Pull Request
uses: actions/github-script@v6
if: github.event_name == 'pull_request'
env:
PLAN: "terraform\n${{ steps.plan.outputs.stdout }}"
with:
script: |
const output = `#### Terraform Format and Style 🖌\`${{ steps.fmt.outcome }}\`
#### Terraform Initialization ️⚙️\`${{ steps.init.outcome }}\`
#### Terraform Plan 📖\`${{ steps.plan.outcome }}\`
#### Terraform Validation 🤖\`${{ steps.validate.outcome }}\`
<details><summary>Show Plan</summary>
\`\`\`\n
${process.env.PLAN}
\`\`\`
</details>
*Pusher: @${{ github.actor }}, Action: \`${{ github.event_name }}\`*`;
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: output
})
- name: Terraform Plan Status
if: steps.plan.outcome == 'failure'
run: exit 1
- name: Terraform Apply
if: github.ref == 'refs/heads/master' && github.event_name == 'push'
working-directory: ./infrastructure
run: terraform apply -auto-approve -input=false
heroku-deploy:
name: 'Heroku-Deploy'
if: github.ref == 'refs/heads/master' && github.event_name == 'push'
runs-on: ubuntu-latest
needs: terraform
steps:
- name: Checkout App
uses: actions/checkout@v3
- name: Deploy to Heroku
uses: akhileshns/heroku-deploy@v3.12.12
with:
heroku_api_key: ${{secrets.HEROKU_API_KEY}}
heroku_app_name: ${{secrets.HEROKU_APP_NAME}}
heroku_email: ${{secrets.HEROKU_EMAIL}}
buildpack: https://github.com/heroku/heroku-buildpack-gradle.git
branch: master
dontautocreate: true
The workflow has two "phases". On a pull request, it runs the tests in my application, followed by terraform fmt, terraform init, and terraform plan. On a merge to my master branch, it runs terraform apply. When that completes, the second job runs the akhileshns/heroku-deploy@v3.12.12 GitHub action.
As far as I can tell, it works. YMMV, of course. ;-)
I'm deploying Docker Swarm with Ansible and I would like to ensure the ingress network has been created. To that end, I configured the following task:
- name: Ensure ingress network exists
docker_network:
state: present
name: ingress
driver: overlay
driver_options:
ingress: true
And I'm getting the following error:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: docker.errors.NotFound: 404 Client Error for http+docker://localhost/v1.41/networks/ingress/disconnect: Not Found ("No such container: ingress-endpoint")
fatal: [swarm-srv-1]: FAILED! => {"changed": false, "msg": "An unexpected docker error occurred: 404 Client Error for http+docker://localhost/v1.41/networks/ingress/disconnect: Not Found (\"No such container: ingress-endpoint\")"}
I've tried adding some arguments like:
scope: swarm
force: yes
But nothing changes. I've also tried deleting the ingress network with Ansible (state: absent), but I always get the same error.
Note that I don't face any issue when deleting and recreating the ingress network manually on the swarm: docker network rm ingress
I don't know how to resolve this issue. Any help would be appreciated. Thanks!
Here is some information that may help:
# docker version
Version: 20.10.6
API version: 1.41
Go version: go1.13.15
Git commit: 370c289
Built: Fri Apr 9 22:47:35 2021
OS/Arch: linux/amd64
# docker inspect ingress
[
{
"Name": "ingress",
"Id": "yb2tkhep8vtaj9q7w3mssc9lx",
"Created": "2021-05-19T05:53:27.524446929-04:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": true,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"ingress-sbox": {
"Name": "ingress-endpoint",
"EndpointID": "dfdc0f123d21a196c7a815c7e0a886924d0799ae5f3be2d38b64d527ed4620b1",
"MacAddress": "02:42:0a:00:00:02",
"IPv4Address": "10.0.0.2/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4096"
},
"Labels": {},
"Peers": [
{
"Name": "8f8932d6f99f",
"IP": "(ip address here)"
},
{
"Name": "28b9ca95dcf0",
"IP": "(ip address here)"
},
{
"Name": "f7c48c8af2f5",
"IP": "(ip address here)"
}
]
}
]
I had the exact same issue when trying to customize the IP range of the ingress network. It looks like the docker_network module does not support modification of swarm-specific networks; there is an open GitHub issue for this.
I went for the ugly workaround of removing the network through a shell command (docker network rm ingress) and adding it again. When adding it back with the docker_network module, I found that creation does not seem to work either (it fails to set the ingress property of the network), so I ended up doing both the remove and the create operations through shell commands.
Since the removal will trigger a confirmation dialogue:
WARNING! Before removing the routing-mesh network, make sure all the nodes in your swarm run the same docker engine version. Otherwise, removal may not be effective and functionality of newly created ingress networks will be impaired.
Are you sure you want to continue? [y/N]
I used the expect module to confirm the dialogue:
- name: remove default ingress network
ansible.builtin.expect:
command: docker network rm ingress
responses:
"[y/N]": "y"
- name: create customized ingress network
shell: "docker network create --ingress --subnet {{ docker_ingress_network }} --driver overlay ingress"
It is not perfect but it works.
There was one last problem I experienced: when running this on an existing swarm, I ended up with network issues on the node where I ran it (somehow the docker_gwbridge network on that node could not handle the change). The fix was to fully remove the node and re-join the swarm, which regenerates the docker_gwbridge.
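For reference, the node replacement went roughly like this (a sketch; the node name, token, and manager address are placeholders for your own values):

# on the affected node: leave the swarm
docker swarm leave
# on a manager: remove the stale node entry and print the join command
docker node rm <node-name>
docker swarm join-token worker
# back on the affected node: re-join, which recreates docker_gwbridge
docker swarm join --token <token> <manager-ip>:2377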
I want to fetch properties from two git repos. One is https://username@bitbucket.my.domain.com/share.git, which has a property file containing some common key-value pairs, and the other is https://username@bitbucket.my.domain.com/service.git, which has the property files of all the microservices.
While deploying the service, only one yml file (the one in the https://username@bitbucket.my.domain.com/share.git repo) is read by the config server. What am I missing? How can I read the property file from the other repo, i.e. https://username@bitbucket.my.domain.com/service.git, too?
I want to deploy the service to PCF, so I configured the config server in PCF with the following JSON:
{
"count": 1,
"git": {
"label": "feature",
"uri": "https://username#bitbucket.my.domain.com/share.git",
"username": "username",
"password": "password",
"repos": {
"configserver": {
"password": "password",
"label": "feature",
"uri": "https://username#bitbucket.my.domain.com/service.git"
"username": "username"
}
}
}
}
The name of my service is LogDemo and the active Spring profile is named active. I have created two yml files and placed one in each repo (I gave both files the same name, LogDemo-active.yml). While deploying the service, only one yml file (the one in the https://username@bitbucket.my.domain.com/share.git repo) is read by the config server. /env gives me the following:
{
"profiles": [
"active",
"cloud"
],
"server.ports": {
"local.server.port": 8080
},
"configService:configClient": {
"config.client.version": "234e59d4a9f80f035f00fdf07e6f9f16e5560a55"
},
"configService:https://username#bitbucket.my.domain.com/share.git/LogDemo-active.yml": {
"key1": "value1",
"key2": "value2"
},
...................
...................
What am I missing? How can I read the property file from the other repo, i.e. https://username@bitbucket.my.domain.com/service.git, too?
Below is my bootstrap.yml
spring:
application:
name: LogDemo
mvc:
view:
prefix: /
suffix: .jsp
Here is my manifest file
---
inherit: baseManifest.yml
applications:
- name: LogDemo
host: LogDemo
env:
LOG_LEVEL: INFO
spring.profiles.active: active
TZ: America/New_York
memory: 1024M
domain: my.domain.com
services:
- config-server-comp
When using multiple repos, the repos that are applied depend on the patterns defined for those repos. The default pattern is <repo-name>/*, so changing the repo name to LogDemo will activate the repo for your app, because the app name, spring.application.name, is LogDemo.
If one or more patterns match, the repos for the matched patterns are used. If no pattern matches, the default is used.
Full details are described in the docs here: https://cloud.spring.io/spring-cloud-config/single/spring-cloud-config.html#_pattern_matching_and_multiple_repositories
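For example (a hedged sketch mirroring your JSON above; the explicit pattern entry is an assumption based on the pattern-matching docs), renaming the nested repo to LogDemo, or giving it an explicit pattern, should make it match the app:

{
  "count": 1,
  "git": {
    "label": "feature",
    "uri": "https://username@bitbucket.my.domain.com/share.git",
    "username": "username",
    "password": "password",
    "repos": {
      "LogDemo": {
        "pattern": "LogDemo*",
        "label": "feature",
        "uri": "https://username@bitbucket.my.domain.com/service.git",
        "username": "username",
        "password": "password"
      }
    }
  }
}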
If you don't need or want the pattern-matching feature, you can use the composite backend: https://docs.pivotal.io/spring-cloud-services/2-0/common/config-server/composite-backends.html
The composite backend allows you to define multiple Git repositories. See the first config example here: https://docs.pivotal.io/spring-cloud-services/2-0/common/config-server/composite-backends.html#general-configuration
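Based on that first example (hedged; adapt labels and credentials to your setup), a composite configuration for your two repos could look like:

{
  "composite": [
    {
      "type": "git",
      "label": "feature",
      "uri": "https://username@bitbucket.my.domain.com/share.git",
      "username": "username",
      "password": "password"
    },
    {
      "type": "git",
      "label": "feature",
      "uri": "https://username@bitbucket.my.domain.com/service.git",
      "username": "username",
      "password": "password"
    }
  ]
}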
I need to configure the following params: environment, trend, history, executors, retries, etc. I need these params for Cucumber to work with Ruby. I have searched in a lot of places and did not find much. I would appreciate it if you could explain how to set these params.
History
Allure stores history information in the allure-report/history folder during report generation, so you need to copy that folder from the previous launch into your allure-results folder before you generate the report.
History features are handled out of the box by Allure CI plugins
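If you generate the report manually instead, the copy step looks roughly like this (a sketch assuming the default allure-results and allure-report folder names):

# preserve history from the previous report before regenerating
cp -r allure-report/history allure-results/history
allure generate allure-results --clean -o allure-report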
Executor info
To add information about your test executor, create a file executor.json in your allure-results folder:
{
"name": "Jenkins",
"type": "jenkins",
"url": "http://example.org",
"buildOrder": 13,
"buildName": "allure-report_deploy#13",
"buildUrl": "http://example.org/build#13",
"reportUrl": "http://example.org/build#13/AllureReport",
"reportName": "Demo allure report"
}
In the report, this information is displayed in the executors widget. Allure CI plugins will add executor information for you.
Categories
You can also add custom problem categories to the report. Create a file categories.json and place it in your results folder:
[
{
"name": "Ignored tests",
"messageRegex": ".*ignored.*",
"matchedStatuses": [
"skipped"
]
},
{
"name": "Infrastructure problems",
"messageRegex": ".*RuntimeException.*",
"matchedStatuses": [
"broken"
]
},
{
"name": "Outdated tests",
"messageRegex": ".*FileNotFound.*",
"matchedStatuses": [
"broken"
]
},
{
"name": "Regression",
"messageRegex": ".*\\sException:.*",
"matchedStatuses": [
"broken"
]
}
]
Retries
Retries are handled by default. For each test, Allure generates a historyId (usually md5(fullName + parameterValues)). If the report contains several results with the same historyId, Allure displays only the latest; the others are marked as retries and are available from the Retries section on the test result page.
For example, if you run your test 3 times, where the first 2 runs are broken and the latest passes, Allure will display the test as passed.
I am deploying a package that contains a deploy.ps1 file. As you already know, Octopus runs this script on deployment by default. I want to prevent that from happening and run a custom script instead.
If you have a requirement like this, it's better to move the PowerShell that starts the services to a separate step and then tag the tentacles you want that script to run on.
In your deployment step for the service, set the start mode to "Manual"
Then have a step that starts the service, and scope that script to the environments / servers that you want to auto start
The code for the step template I use is:
{
"Id": "ActionTemplates-1",
"Name": "Enable and start service",
"Description": null,
"ActionType": "Octopus.Script",
"Version": 8,
"Properties": {
"Octopus.Action.Package.NuGetFeedId": "feeds-builtin",
"Octopus.Action.Script.Syntax": "PowerShell",
"Octopus.Action.Script.ScriptSource": "Inline",
"Octopus.Action.RunOnServer": "false",
"Octopus.Action.Script.ScriptBody": "$serviceName = $OctopusParameters[\"ServiceName\"]\n\nwrite-host \"the service is: \" $serviceName\n\n& \"sc.exe\" config $serviceName start= delayed-auto\n& \"sc.exe\" start $serviceName\n\n"
},
"Parameters": [
{
"Name": "ServiceName",
"Label": "Service Name",
"HelpText": null,
"DefaultValue": null,
"DisplaySettings": {
"Octopus.ControlType": "SingleLineText"
}
}
],
"$Meta": {
"ExportedAt": "2016-10-10T10:21:21.980Z",
"OctopusVersion": "3.3.2",
"Type": "ActionTemplate"
}
}
You may want to modify the step template, as it will set the service to "Automatic - Delayed" and then start the service.
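For reference, the escaped Octopus.Action.Script.ScriptBody above unfolds to this PowerShell:

$serviceName = $OctopusParameters["ServiceName"]

write-host "the service is: " $serviceName

& "sc.exe" config $serviceName start= delayed-auto
& "sc.exe" start $serviceName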
Are you able to move the script to a subfolder?
These scripts must be located in the root of your package
http://docs.octopusdeploy.com/display/OD/Custom+scripts
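If moving it is an option, the relocation itself is trivial (a sketch; the scripts folder name is arbitrary), since Octopus only auto-executes scripts in the package root:

# move deploy.ps1 out of the package root so Octopus no longer runs it automatically
New-Item -ItemType Directory -Force -Path .\scripts | Out-Null
Move-Item .\deploy.ps1 .\scripts\deploy.ps1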
Alternatively, don't include your deploy.ps1 script in the deployment package at all if it should never be deployed.