How do I specify my custom agent in the build pipeline? - vb6

I am following Donovan Brown's blog post to try to set up a build agent for VB6.
I can see my agent in the agent pools,
but I don't know what to put as the image.
I tried Default and I tried vb6vm3, but I was unable to save the pipeline with either value.

Target the desired Queue, not the Pool. Try replacing the pool code with the code below.
queue:
  name: Default
I also find it easier to use the graphical user interface to create my build and use the Show YAML button to get the YAML written for me.
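For reference, a minimal azure-pipelines.yml along these lines might look like the sketch below (the script step is only a placeholder for the actual VB6 build steps):

queue:
  name: Default
steps:
- script: echo replace this with your VB6 build step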

I have encountered the same issue as you. To resolve it, I tried creating a new build pipeline using the visual designer, selected my custom private agent, then selected the View YAML option.
I got the following code:
pool:
  name: VS2017PrivateAgent
And it works fine.
But I was still curious why I could not use pool: vmImage and how to add my private agent to the drop-down menu.
After much searching, I found the reason in an inconspicuous place in the Pool documentation:
pool:
  name: string # name of the pool to run this job in
  demands: string | [ string ] # see below
  vmImage: string # name of the vm image you want to use, only valid in the Microsoft-hosted pool
The comment "name of the vm image you want to use, only valid in the Microsoft-hosted pool" is the real reason I could not use pool: vmImage.
And:
If you're using a private pool and don't need to specify demands, this can be shortened to:
pool: string # name of the private pool to run this job in
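So for a private agent, either form below should work (the msbuild demand is only an illustrative assumption):

pool: VS2017PrivateAgent

or, if you need a demand:

pool:
  name: VS2017PrivateAgent
  demands: msbuild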
I hope this gives more info about this issue.

Related

Elastic APM different index name

A few weeks ago we added Filebeat, Metricbeat, and APM to our .NET Core application running on our Kubernetes cluster.
It all works nicely, and recently we discovered that Filebeat and Metricbeat are able to write to different indices based on several rules.
We wanted to do the same for APM; however, searching the documentation, we can't find any option to set the name of the index to write to.
Is this even possible, and if so, how is it configured?
I also tried finding the current name apm-* within the codebase but couldn't find any matches for configuring it.
The problem we'd like to fix is that every space in Kibana gets to see the APM metrics of every application. Certain applications shouldn't be visible within a given space, so I thought a new apm-application-* index would do the trick...
Edit
Since it shouldn't be configured on the agent but instead in the cloud service console, I'm having trouble 'user-overriding' the settings to my liking.
The rules I want to have:
When an application does not live inside the Kubernetes namespace default or kube-system, write to an index called apm-7.8.0-application-type-2020-07.
All other applications in other namespaces should remain in the default indices.
I see you can add output.elasticsearch.indices to make this happen: "Array of index selector rules supporting conditionals and formatted string."
I tried this by copying what I had for Metricbeat and updating it to use the APM syntax, which gave the following 'user-override':
output.elasticsearch.indices:
  - index: 'apm-%{[observer.version]}-%{[kubernetes.labels.app]}-%{[processor.event]}-%{+yyyy.MM}'
    when:
      not:
        or:
          - equals:
              kubernetes.namespace: default
          - equals:
              kubernetes.namespace: kube-system
But when I use this setup, it tells me:
Your changes cannot be applied
'output.elasticsearch.indices.when': is not allowed
Set output.elasticsearch.indices.0.index to apm-%{[observer.version]}-%{[kubernetes.labels.app]}-%{[processor.event]}-%{+yyyy.MM}
Set output.elasticsearch.indices.0.when.not.or.0.equals.kubernetes.namespace to default
Set output.elasticsearch.indices.0.when.not.or.1.equals.kubernetes.namespace to kube-system
Then I updated the example as suggested, but came to the same conclusion: it was not valid either.
In your ES Cloud console, you need to Edit the cluster configuration, scroll to the APM section and then click "User override settings". In there you can override the target index by adding the following property:
output.elasticsearch.index: "apm-application-%{[observer.version]}-{type}-%{+yyyy.MM.dd}"
Note that if you change this setting, you also need to modify the corresponding index template to match the new index name.
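For example, a matching template could be created from Kibana Dev Tools with the 7.x legacy template API. This is only a minimal sketch; in practice you'd copy the settings and mappings from the existing apm-* template rather than use the placeholder body below:

PUT _template/apm-application
{
  "index_patterns": ["apm-application-*"],
  "order": 2,
  "settings": {
    "index.number_of_shards": 1
  }
}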

Create multiple MarkLogic scheduled tasks for the same module through ml-gradle

I am trying to create multiple instances of an application on the same MarkLogic environment. I am able to create all the configuration (users, roles, databases, forests, app servers...) but am not able to schedule individual tasks for separate databases with the same module path.
When I try to run ml-gradle mldeployApps, it fails at task creation.
My whole application configuration depends on a property file; for any APP-NAME, a separate instance needs to be created.
I tried deploying through ml-gradle.
mlDeployTasks fails because a task already exists for the module path. When I try to run a second one with a new task database, it fails because it does not recognize the task database.
JSON:
{
  "task-enabled": true,
  "task-path": "/ext/schedules/monitor.xqy",
  "task-root": "/",
  "task-type": "daily",
  "task-period": 1,
  "task-start-time": "10:00:00",
  "task-database": "%%DATABASE%%",
  "task-modules": "%%MODULES_DATABASE%%",
  "task-user": "admin",
  "task-priority": "normal"
}
ERROR:
Logging HTTP response body to assist with debugging: {"errorResponse":{"statusCode":"500", "status":"Internal Server Error", "messageCode":"MANAGE-INVALID", "message":"MANAGE-INVALID (err:FOER0000): task-database"}}
Error occurred while sending PUT request to /manage/v2/tasks/5389046897270663947/properties?group-id=Default; logging request body to assist with debugging: {
Expectation:
I want to deploy and undeploy the whole application, including scheduled tasks, based on APPLICATION-NAME as a separate instance.
Actual:
Because mlDeployTasks identifies each task by its module path, each task is matched to the old existing database, and creating a new task fails.
Please suggest the right way to achieve this.
MarkLogic's Management API is seeing your request as an attempt to change the task-database, but the only property of a scheduled task that it allows you to change is task-enabled. I think what you'll need to do here is use different task-path values for your different databases. That's not ideal, but if the implementation logic is all in a library that's imported by the task, the different modules themselves will be very lightweight, as sketched below.
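For illustration, each per-database task module could be a thin wrapper over a shared library; the module paths, namespace URI, and function name here are hypothetical:

(: /ext/schedules/monitor-db1.xqy :)
xquery version "1.0-ml";
(: the real logic lives in the shared library; this wrapper exists only to give each database its own task-path :)
import module namespace mon = "http://example.com/monitor" at "/ext/lib/monitor-lib.xqy";
mon:run()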
Try ml-gradle 3.10.0 - support for this now exists - see the release notes for ml-app-deployer 3.10.0 (which provides most of the functionality in ml-gradle) - https://github.com/marklogic-community/ml-app-deployer/releases/tag/3.10.0
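On the multi-instance side, ml-gradle also supports environment-specific property files, which fits the one-instance-per-APP-NAME setup. A hedged sketch (all property values are placeholders):

# gradle-app1.properties, one file per application instance
mlAppName=my-app-app1
mlHost=localhost
mlRestPort=8011

Deploy that instance with: gradle -PenvironmentName=app1 mlDeploy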

Octopus - SQL Deploy DACPAC Community Contributed Step

I am using the SQL Deploy DACPAC community contributed step to deploy my dacpac to the server within Octopus.
It has been set up correctly and had been working fine until the situation below occurred.
I have a situation where I am dropping columns, but the deploy keeps failing because rows are detected in the affected tables. I am attempting to pass /p:BlockOnPossibleDataLoss=false in the "Additional deployment contributor arguments" field, but it seems to be ignored.
Can anyone guide me to what is wrong?
The publish properties include DropObjectsNotInSource; try setting it to True.
You might want to fine-tune it to avoid dropping users, permissions, etc.
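Note that BlockOnPossibleDataLoss and DropObjectsNotInSource are publish properties rather than deployment contributor arguments, which is likely why the /p: value was ignored in that field. In a DACPAC publish profile they would look roughly like this (a sketch; keep only the properties you actually want):

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <BlockOnPossibleDataLoss>False</BlockOnPossibleDataLoss>
    <DropObjectsNotInSource>True</DropObjectsNotInSource>
    <!-- guard rails so dropping objects not in source spares security objects -->
    <DoNotDropObjectTypes>Users;Permissions;RoleMembership</DoNotDropObjectTypes>
  </PropertyGroup>
</Project>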
After multiple updates by the original author, this issue was still not resolved. The parameter has actually been removed completely as of version 11.
Initially, I added a pre-deployment script that copied all the data from the tables that were expected to fail and then deleted that data, allowing the table schema to update as normal; a post-deployment script then re-inserted all the data into the new structure. The problem with this was that a pre-deployment and post-deployment script were required for every table where data could be lost, even when the loss was acceptable and the scripts weren't really needed.
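As a sketch of that abandoned approach (table and column names are invented for illustration):

-- pre-deployment script: park the rows before the column is dropped
IF OBJECT_ID('dbo.MyTable_Backup', 'U') IS NOT NULL
    DROP TABLE dbo.MyTable_Backup;
SELECT * INTO dbo.MyTable_Backup FROM dbo.MyTable;
DELETE FROM dbo.MyTable;

-- post-deployment script: re-insert the parked rows into the new structure
INSERT INTO dbo.MyTable (Id, KeptColumn)
SELECT Id, KeptColumn FROM dbo.MyTable_Backup;
DROP TABLE dbo.MyTable_Backup;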
Finally, I got around this by duplicating the community step "SQL - Deploy DACPAC" (https://library.octopus.com/step-templates/58399364-4367-41d5-ad35-c2c6a8258536/actiontemplate-sql-deploy-dacpac) by saving it as a copy from within Octopus. I then went into the code, into the function Invoke-DacPacUtility, and added the following code:
[bool]$BlockOnPossibleDataLoss into the parameter list
Write-Debug (" Block on possible data loss: {0}" -f $BlockOnPossibleDataLoss) into the debug logging
if (!$BlockOnPossibleDataLoss) { $dacProfile.DeployOptions.BlockOnPossibleDataLoss = $BlockOnPossibleDataLoss; } into the list of deployment options
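Assembled, the relevant parts of the modified function look roughly like this (heavily abbreviated; only the added lines are shown in context):

function Invoke-DacPacUtility {
    param(
        # ...the step's existing parameters...
        [bool]$BlockOnPossibleDataLoss
    )

    # ...existing debug output...
    Write-Debug (" Block on possible data loss: {0}" -f $BlockOnPossibleDataLoss)

    # ...existing code that loads the publish profile into $dacProfile...
    if (!$BlockOnPossibleDataLoss) {
        $dacProfile.DeployOptions.BlockOnPossibleDataLoss = $BlockOnPossibleDataLoss
    }

    # ...existing code that performs the deployment...
}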
Then, I went into the list of parameters and added as follows:
Variable name: BlockOnPossibleDataLoss
Label: Block on possible data loss
Help text: True to stop deployment if possible data loss is detected; otherwise, false. Default is true.
Control type: Checkbox
Default value: true
With this, I am able to change the value of this parameter with the checkbox when using the step in the project's deployment process.

Disable WebSphere app autostart via command line or admin script

I am looking for an admin command or script to disable the automatic start of applications hosted by a WAS.
I found it via the web interface in the following menus:
Applications -> Application Types -> WebSphere Enterprise Applications -> click on the app -> Detail Properties: "Target specific application status" -> select the cluster and click on "Disable Auto Start".
But I could not find a command line corresponding to this action.
Can you help me?
Thank you in advance.
You can use the "wsadminlib.py" scripting library to do this easily; it contains a setDeploymentAutoStart function. Here are the signature and doc:
def setDeploymentAutoStart(deploymentname, enabled, deploymenttargetname=None):
"""Sets an application to start automatically, when the server starts.
Specify enabled as a lowercase string, 'true' or 'false'.
For example, setDeploymentAutoStart('commsvc', 'false')
Returns the number of deployments which were found and set successfully.
Raises exception if application is not found.
You may optionally specify an explicit deployment target name, such as a server or cluster name.
For example, setDeploymentAutoStart('commsvc', 'true', deploymenttargetname='cluster1')
setDeploymentAutoStart('commsvc', 'false', deploymenttargetname='server1')
If the deployment target name is not specified, autostart is set on all instances of the deployment.
Ultimately, this method changes the 'enable' value in a deployment.xml file. For example,
<targetMappings xmi:id="DeploymentTargetMapping_1262640302437" enable="true" target="ClusteredTarget_1262640302439"/>
"""
Using wsadminlib.py is as easy as downloading it from GitHub, launching wsadmin, then running execfile('/path/to/wsadminlib.py').
Then you just need to sort out the parameters you want and call the function above.
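A minimal wsadmin (Jython) session might look like this (the library path and cluster name are placeholders; 'commsvc' is the application name from the doc above):

# launched via: wsadmin -lang jython
execfile('/path/to/wsadminlib.py')
# disable autostart for the app on one cluster; returns the number of deployments updated
setDeploymentAutoStart('commsvc', 'false', deploymenttargetname='cluster1')
# depending on the library version you may also need to save explicitly:
# AdminConfig.save()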

Queued build is not connecting to the DB as it uses domainName\computerName instead of domainName\username

I am trying to queue a build in my own build definition, but the SQL connection in my code throws an exception: Login failed for user 'domainName\computerName$'. That is natural, since it should have used domainName\userAlias.
My question is: why is it using domainName\computerName$, and how do I make it authenticate with Windows auth as my user account instead? Can someone please help me with this?
You need to set the service account that the build service uses on the server(s) running your Build Agent(s). It sounds like it's currently set to run as Network Service.
You can change it by firing up the TFS Admin Console, going to Build Configuration, and changing the properties on the service.
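If you prefer scripting it, the underlying Windows service account can also be changed with sc.exe, though the Admin Console remains the supported route because TFS must register the new account. The service name below is a placeholder; check services.msc for the actual name on your build machine:

sc.exe config "TfsBuildServiceHost" obj= "domainName\userAlias" password= "yourPassword"
sc.exe stop "TfsBuildServiceHost"
sc.exe start "TfsBuildServiceHost"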
