DC/OS: remove history for a specific service ID - mesos

How do I remove the history for a specific service ID in DC/OS 1.11.0?
We have service IDs that failed when we tested Docker containers, and we want to remove the history for such a service ID before we start a new container on it, so we can easily see whether the new container fails again or whether it was only the old container that failed.
But when we delete the service and start a new one with the same ID, it remembers the old task history.
[Screenshot: history of a service ID, just an example]
Or are we forced to change the service ID incrementally in order to get a clean service history?

As far as I know this is not possible. The setting is the same for all services.

Related

Start services correctly and safely on Windows startup

I would like to start a Windows service (as a daemon or scheduled task) on one of the clients in a small Active Directory domain, for testing purposes only. The Windows service is Docker Desktop. Ideally, the service should start before a user logs in.
I am wondering what the recommended approach is for starting services properly and securely.
My approach would be the following: I would create a local user and then assign the service to that user using Task Scheduler. Would that be broadly the recommended approach or is there some kind of system user to entrust the task to?
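The local-user-plus-Task-Scheduler approach described above can be sketched with schtasks from an elevated command prompt. All names, the password, and the install path below are placeholders, not values from the question; the ONSTART trigger makes the task run at boot, before any user logs in:

```shell
REM Placeholder user name, password and path; run from an elevated prompt.
REM 1. Create a dedicated local user to own the task:
net user svc-docker SomeStrongPassw0rd! /add
REM 2. Register a task that starts Docker Desktop at boot, before logon:
schtasks /Create /TN "StartDockerDesktop" ^
  /TR "\"C:\Program Files\Docker\Docker\Docker Desktop.exe\"" ^
  /SC ONSTART /RU svc-docker /RP SomeStrongPassw0rd!
```

Alternatively, /RU SYSTEM runs the task as the built-in LocalSystem account, which avoids creating an extra user but grants the process broader privileges.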

GCP - creating a VM instance and extracting logs

I have a Java application in which I am using GCP to create VM instances from images.
In this application, I would like to let the user view the VM creation logs, so they stay updated on the status of the creation and can see failure points in detail.
I am sure such logs exist in GCP, but I have been unable to find the specific APIs that let me see a specific action, for example the creation of instance "X".
Thanks for the help
When you create a VM, what you get back is a JobID (because the creation takes time, the Compute Engine API answers immediately). To know the status of the VM creation (and start), you have to poll this JobID regularly.
In the logs, you can also filter on this JobID to select and view only the logs that you want on the Compute API side (create/start errors).
If you want to see the logs of the VM, filter the logs not with the JobID but with the name of the VM, and its zone.
In Java, there are client libraries that help you achieve this.
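The same polling flow can be sketched with the gcloud CLI (the instance name my-vm, the zone, and OPERATION_ID are placeholders, not values from the question):

```shell
# Placeholder names throughout; requires an authenticated gcloud session.
# 1. Creating the instance asynchronously returns an operation (the "JobID" above):
gcloud compute instances create my-vm --zone=us-central1-a --async
# 2. Poll that operation until its status is DONE to learn whether creation succeeded:
gcloud compute operations describe OPERATION_ID --zone=us-central1-a
# 3. Read the logs for that instance on the Compute API side:
gcloud logging read 'resource.type="gce_instance" AND protoPayload.resourceName:"my-vm"' --limit=10
```

The Java client libraries expose the same operations: the insert call returns an Operation object whose status you poll, and the Cloud Logging client accepts the same filter strings.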

Concurrent az login executions

I am using the Azure CLI to perform a health check on some Azure VMs. The health checks are deployed through a Jenkins stage, using bash. The stage itself may take several hours to complete, during which several 'az vm run-command' executions take place that all require the proper credentials.
I also have several Jenkins pipelines that deploy different products and that are supposed to be able to run in parallel. All of them have the same health checks stage.
When I execute 'az login' to generate an auth token and 'az account set' to set the subscription, as far as I understand, this data is written to a profile file (~/.azure/azureProfile.json). This is all well and good, but whenever I trigger a parallel pipeline on this Jenkins container with a different Azure subscription, the profile file naturally gets overwritten with the other credentials. That causes the other health check to fail at its next vm run-command execution, since it looks for a resource group that exists in a different subscription.
I was thinking of creating a new unique Linux user as part of each stage run and then removing it once it's done, so all pipelines would have separate profile files. This is a bit tricky though, since this is a Jenkins Docker container using an Alpine image, and I would need to create the users with each pipeline rather than in the Dockerfile, which brings me to a whole other drama: giving the Jenkins user sufficient privileges to create and delete users, and so on...
Also, since the session credentials are stored in the ~/.azure/accessTokens.json and azureProfile.json files by default, I could theoretically generate a different directory for each execution, but I couldn't find a way to alter those default files/location in the Azure docs.
What do you think is the best/easiest approach to work around this?
Setting the AZURE_CONFIG_DIR environment variable does the trick as described here.
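A minimal sketch of per-pipeline isolation with AZURE_CONFIG_DIR (the service-principal variables are placeholders for whatever credential mechanism the pipeline uses): each run gets its own throwaway config directory, so parallel 'az login' calls no longer overwrite each other's azureProfile.json.

```shell
#!/bin/sh
# Each pipeline run gets its own Azure config dir instead of ~/.azure:
AZURE_CONFIG_DIR="$(mktemp -d /tmp/azure-cfg.XXXXXX)"
export AZURE_CONFIG_DIR

# Placeholder credentials; tokens and profile land in $AZURE_CONFIG_DIR:
az login --service-principal -u "$SP_APP_ID" -p "$SP_SECRET" --tenant "$TENANT_ID"
az account set --subscription "$SUBSCRIPTION_ID"
# ... run the 'az vm run-command' health checks here ...

rm -rf "$AZURE_CONFIG_DIR"   # clean up the throwaway profile when the stage ends
```

In Jenkins this fits naturally inside the stage's bash step, with the temp directory scoped to the build.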
I would try to keep az login as it is, remove az account set and use --subscription argument for each command instead.
You can see that ~/.azure/azureProfile.json contains tenantId and user information for each subscription and ~/.azure/accessTokens.json contains all tokens.
So, if you specify your subscription explicitly each time, you will not depend on the common user context.
I have my Account 1 for subscription xxxx-xxxx-xxxxx-xxxx, and Account 2 for subscription yyyy-yyyy-yyyy-yyyy and I do:
az login # Account 1
az login # Account 2
az group list --subscription "xxxx-xxxx-xxxxx-xxxx"
az group list --subscription "yyyy-yyyy-yyyy-yyyy"
and it works well under the same Unix user.

PCF Scheduling jobs

I have been trying to schedule a Spring Cloud Task via the PCF Scheduler, but I can't create a job from the app/task (I am following this documentation on the site: http://docs.pivotal.io/pcf-scheduler/1-1/using-jobs.html).
$ cf apps
name         requested state   instances   memory   disk   urls
cloud-task   stopped           0/1         750M     1G

$ cf services
name           service             plan       bound apps   last operation
my-scheduler   scheduler-for-pcf   standard   cloud-task   create succeeded
$ cf create-job cloud-task my-task-job ".java-buildpack/open_jdk_jre/bin/java org.springframework.boot.loader.JarLauncher"
Creating job my-task-job for cloud-task with command .java-buildpack/open_jdk_jre/bin/java org.springframework.boot.loader.JarLauncher in org global-sales-marketing-customer-experience / space 141349-dev as zzh1bb
FAILED
The requested resource was not found.
Not Found
You must create an instance of the scheduler service in this space to use the scheduler service.
Not sure why the job creation command is not able to find the instance of the scheduler service. Am I missing something here?
Also, I am wondering if there is anything in spring-cloud-data-flow that can schedule tasks.
From the output you should be able to create a job in that org/space.
Does the user (zzh1bb?) have SpaceDeveloper privileges? SpaceAdmin should also be sufficient.
Does a cf task execute successfully using:
cf run-task cloud-task ".java-buildpack/open_jdk_jre/bin/java org.springframework.boot.loader.JarLauncher"
And seeing results with:
cf tasks cloud-task
Another diagnostic step might be to check the output of the api calls described here:
http://docs.pivotal.io/pcf-scheduler/1-1/api/#create-job
What version of PCF are you using and what version of the Scheduler for PCF are you using? There were significant changes in the cloud controller api between 1.10, 1.11 and 1.12 that prevent the scheduler service from working across all of those versions.
As far as scheduling SCDF goes, the Scheduler for PCF service can be used in conjunction with SCDF, allowing you to call the task execution endpoint from a Scheduler for PCF call (https://docs.pivotal.io/pcf-scheduler/1-1/using-calls.html).
Call SCDF using the execution endpoint
http://...scdf server.../tasks/executions?name=taskA
doc'ed here:
https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#_launching_a_task_2
This is very useful and convenient especially when creating the SCDF service and the Scheduler for PCF service in the same space.
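Under that setup, the Scheduler for PCF call is just an HTTP POST against the SCDF task-execution endpoint. A hedged sketch with curl (the SCDF server host and the task name taskA are placeholders):

```shell
# Placeholder host; taskA must already be defined in SCDF.
# POST to the execution endpoint launches one run of the task:
curl -X POST "http://scdf-server.example.com/tasks/executions?name=taskA"
```

This is the same endpoint the SCDF shell's 'task launch' command calls under the hood.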

How to start/stop/delete user-defined services from MSCONFIG

I am running a Spring Boot application as a Windows service. I can see my service listed in MSCONFIG.
I wanted to know how I can
Stop
Start
Delete
the service.
You can't perform those operations from MSCONFIG. Use the services control panel application (services.msc) to start, stop and generally manage your service. Use the SC command line utility to delete your service ("SC DELETE service-name").
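As a sketch, from an elevated command prompt (the service name MySpringBootService is a placeholder; use the name shown in services.msc):

```shell
REM Placeholder service name; run from an elevated prompt.
sc stop MySpringBootService
sc start MySpringBootService
REM Deleting requires the service to be stopped first:
sc delete MySpringBootService
```

Note that 'sc' takes the short service name, which can differ from the display name shown in the services list.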
