Disable remote Gradle cache for one task

I have a Gradle build with both a local and a remote cache configured. Among other things, I use the Spotless Gradle plugin, which has marked its tasks (spotlessCheck and spotlessApply) as cacheable. The problem is that in my case the task itself is quite fast, so checking the remote cache for the task's output takes more time than actually running the task.
So my question: is it possible to disable the cache for a single task introduced by a third-party plugin? Even better, is it possible to disable just the remote cache for just one task?

I don't think those two particular tasks you mention have the build cache enabled. But other ones like spotlessJava do.
In any case, when you have figured out which tasks use the build cache (e.g. by running with -i), you can configure them with outputs.cacheIf { false }.
Note that this disables both the local and remote build cache. I am not aware of a way to selectively disable just the remote cache for a given task but keep the local one enabled.
For instance:
tasks.named("spotlessJava") {
    outputs.cacheIf { false }
}

I don't think that disabling only the remote cache is possible, but if your problem is that the cache artifact is too big and Gradle wastes a lot of time trying to upload it (which always fails anyway), you can work around this with the incubating useExpectContinue property.
It makes Gradle check whether the upload is possible before performing it, which is good enough for me.
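A minimal sketch of what that looks like in settings.gradle, assuming an HTTP build cache (the URL is a placeholder):
// settings.gradle -- sketch only; point the URL at your own cache node
buildCache {
    remote(HttpBuildCache) {
        url = 'https://example.com:8123/cache/'
        push = true
        // Incubating (Gradle 7.2+): probe the server with "Expect: 100-continue"
        // before transmitting the artifact, so hopeless uploads fail fast.
        useExpectContinue = true
    }
}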

Copy all Gradle dependencies without pre-registered custom task

Use Case
The use case for grabbing all dependencies (without the definition of a custom task in build.gradle) is to perform policy violation and vulnerability analysis on each of them via a templated pipeline. We are using Nexus IQ to do the evaluation.
Example
This can be done simply with Maven by specifying the local repository into which all dependencies are downloaded, and then supplying a pattern to Nexus IQ to scan. In the example below we would supply maven-dependencies/* as the scan target to Nexus IQ after rounding up all the dependencies.
mvn -B clean verify -Dmaven.repo.local=${WORKSPACE}/maven-dependencies
In order to do something similar in Gradle, it seems the most popular method is to introduce a custom task into build.gradle. I'd prefer to do this in a way that doesn't require developers to implement custom tasks, keeping those files as clean as possible. Here's one way I thought of making this happen:
Set GRADLE_USER_HOME to ${WORKSPACE}/gradle-user-home.
Run find ${WORKSPACE}/gradle-user-home -type f -wholename '*/caches/modules*/files*/**/*.*' to grab the locations of all dependency resources (I'm fine with picking up non-archive files).
Copy all files found in step #2 to a gradle-dependencies folder (a concrete sketch of steps #2 and #3 follows this list).
Supply gradle-dependencies/* as the scan target to Nexus IQ.
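For illustration, steps #2 and #3 could collapse into a single pass; a sketch assuming the layout above (note the flat copy means identically named files may collide):
mkdir -p "${WORKSPACE}/gradle-dependencies"
find "${WORKSPACE}/gradle-user-home" -type f \
    -wholename '*/caches/modules*/files*/**/*.*' \
    -exec cp {} "${WORKSPACE}/gradle-dependencies/" \;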
Results
I'm super leery about doing it this way, as it seems very hacky and doesn't seem like the most sustainable solution. Is there another way that I should consider?
UPDATE #1: I've adjusted my question to allow answers that include custom tasks, just not pre-registered ones. "Pre-registered" means the custom task is already in the build.gradle file. I'll also provide my own answer shortly after this update.
I'm uncertain whether Gradle has the ability to register external, custom tasks, but this is how I'm making ends meet. I've created a custom task in a file called copyAllDependencies.gradle, appended the contents of that file (after replacing all newlines and instances of two or more spaces with a single space) to build.gradle when the pipeline runs, and then run gradlew copyAllDependencies. I then pass gradle-dependencies/* as the scan target to Nexus IQ.
task copyAllDependencies(type: Copy) {
    // Gather every resolvable configuration; the semicolons matter when the
    // script is collapsed onto a single line (see the update below).
    def allConfigurations = [];
    configurations.each {
        if (it.canBeResolved) {
            allConfigurations += configurations."${it.name}"
        }
    };
    from allConfigurations
    into "gradle-dependencies"
}
I can't help but feel that this isn't the most elegant solution, but it suits my needs for now.
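As an aside on the original constraint: Gradle init scripts can register tasks without touching build.gradle, so a variant of the task above could live entirely in the pipeline. A minimal sketch assuming the same copy logic (file name and invocation are illustrative):
// copyAllDependencies.init.gradle -- run with:
//   gradlew --init-script copyAllDependencies.init.gradle copyAllDependencies
allprojects {
    tasks.register("copyAllDependencies", Copy) {
        // Same idea as the build.gradle variant: copy every resolvable configuration.
        from configurations.findAll { it.canBeResolved }
        into "gradle-dependencies"
    }
}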
UPDATE #1: Ultimately, I decided to go with requiring development teams to specify this custom task in their build.gradle file. There were too many nuances with echoing script contents into another file (hence the need for the ; when defining allConfigurations and when iterating over all configurations). However, I am still open to answers that address the original question.

How to disable gradle's local build cache, but keep remote cache enabled?

I'm trying to do some benchmarking and performance testing for my Android app and want to see how Gradle is using the build cache.
I have a remote build cache configured and working. I want to test it, but I can't find a flag or option to disable the local cache specifically while keeping the remote enabled.
Both the local and remote cache can be enabled and disabled individually, but it has to be done in the settings file. Your use case happens to be identical to what you usually want a CI server to do (use only a remote cache, not a local one), for which there is an example in the user guide. Something like this:
buildCache {
    local {
        enabled = false
    }
    remote(HttpBuildCache) {
        enabled = true
        url = 'https://example.com:8123/cache/'
    }
}
If you need to troubleshoot the use of the build cache (e.g. unexpected cache misses), run with -Dorg.gradle.caching.debug=true.
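For example, a cache-focused benchmark run could look like this (the task name is illustrative):
./gradlew --build-cache -Dorg.gradle.caching.debug=true assembleDebug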

How to tell Octopus Deploy to wait until another deployment finishes on the same machine?

Sometimes it is preferred and/or required to host dozens of applications on a single server. Not saying this is "right" or "wrong," I'm only saying that it happens.
A downside to this configuration is that the error message Waiting for the script in task [TASK ID] to finish as this script requires that no other Octopus scripts are executing on this target at the same time appears whenever more than one deployment to the same machine is running. It seems like Octopus Deploy is fighting itself.
How can I configure Octopus Deploy to wait for one deployment to completely finish before the next one is started?
Before diving into the answer, it is important to understand why that message appears in the first place. Each time a step is run on a deployment target, the Tentacle will create a "mutex" to prevent other projects from interfering with it. An early use case for this was updating the IIS metabase during a deployment: in certain cases, concurrent updates would cause random errors.
Option 1: Disable the Mutex
We've seen cases where the mutex is the cause of the delay. The mutex is applied per step, not per deployment, so it is common to see a situation where it looks like Octopus is "jumping" between deployments. Depending on the number of concurrent deployments, that can slow the deployments down considerably. The natural thought is to disable the mutex altogether.
It is possible to disable the mutex by adding the variable OctopusBypassDeploymentMutex and setting it to True. That variable can exist in either a specific project or in a variable set.
More details on what that variable does can be found in this document. If you do disable the mutex, please test it and monitor for any failures. For the most part we don't see issues from disabling the mutex, but it has happened from time to time; it depends on a host of other factors, such as application type and Windows version.
Option 2: Leverage Deploy a Release Step
Another option is to coordinate the projects using the deploy a release step. Typically this works best when the projects being deployed are part of the same application suite. In the example screenshot below I have five "deployment" projects:
Azure Worker IaC
Database Worker IaC
Kubernetes Worker IaC
Script Worker IaC
OctoStudy
The project Unleash the Kraken coordinates deployments for those projects.
It does this by using the Deploy a Release step. First it spins up all the infrastructure, then it deploys the application.
This won't work as well if the server is hosting 50 disparate applications.
Option 3: Leverage the API to check for running deployments
The final option is to include a step at the start of each project which hits the API to check for active deployments to the same deployment target. If an active deployment is found, wait until it is done.
You can do this by hitting the endpoint https://[YOUR URL]/api/[SPACE ID]/machines/[Machine Id]/tasks?skip=0&name=Deploy&states=Executing%2CCancelling&spaces=[SPACE ID]&includeSystem=false. That will tell you all the active tasks being run for a specific machine.
You can get Machine Id by pulling the value from Octopus.Deployment.Machines. You can get Space Id by pulling the value from Octopus.Space.Id.
The pseudo code for this approach could look like this (I'm not including the actual code as your requirements could be very different).
activeDeployments = true
while (activeDeployments)
{
    activeDeployments = false
    foreach (machineId in Octopus.Deployment.Machines)
    {
        activeTasks = GET https://[YOUR URL]/api/[Octopus.Space.Id]/machines/[machineId]/tasks?skip=0&name=Deploy&states=Executing%2CCancelling&spaces=[Octopus.Space.Id]&includeSystem=false
        if (activeTasks.Count > 0)
        {
            activeDeployments = true
        }
    }
    if (activeDeployments == true)
    {
        Sleep for 5 seconds
    }
}
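In a real script step the request needs an API key; a sketch using curl (X-Octopus-ApiKey is Octopus's standard API-key header; the bracketed placeholders are the same as above):
curl -s -H "X-Octopus-ApiKey: $OctopusApiKey" \
    "https://[YOUR URL]/api/[SPACE ID]/machines/[Machine Id]/tasks?skip=0&name=Deploy&states=Executing%2CCancelling&spaces=[SPACE ID]&includeSystem=false"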
I had this message hit me because I hit the task cap on the Octopus Server.
In Octopus\Configuration\Nodes, change the task cap to 1 to allow only one deployment at a time, even with agents on different servers; the message will then display constantly while deployments queue.
Or simply increase the value to prevent the message from occurring at all.

Could not create service of type FileHasher using GradleUserHomeScopeServices.createCachingFileHasher()

Here is the context of my problem:
a GitLab CI YAML pipeline
several jobs in the same stage
all jobs run a Gradle task that requires the use of its cache
all jobs share the same Gradle cache
My problem:
sometimes, when several pipelines run at the same time, I get:
What went wrong:
Could not create service of type FileHasher using GradleUserHomeScopeServices.createCachingFileHasher().
Timeout waiting to lock file hash cache (/cache/.gradle/caches/5.1/fileHashes). It is currently in use by another Gradle instance.
Owner PID: 149
Our PID: 137
Owner Operation:
Our operation:
Lock file: /cache/myshop/reunion/.gradle/caches/5.1/fileHashes/fileHashes.lock
I can't find any documentation about the locking mechanism Gradle uses. I don't understand why locks are taken when the Gradle task doesn't write to the cache directory.
Does anyone know how these locks work? Or can I simply increase the timeout so that concurrent tasks wait their turn long enough before failing?
I tried to run Gradle without the daemon; it did not work.
I fixed this by killing all Java processes in Activity Monitor (macOS). Hope it helps.
You typically get this error when trying to share the Gradle cache amongst multiple Gradle processes that run on different hosts. I assume your CI pipelines run on different hosts or they at least run isolated from each other (e.g., as part of different Docker containers).
Unfortunately, such a scenario is currently not supported by Gradle. Gradle developer Stefan Oehme wrote this comment wrt. sharing the Gradle user home:
Gradle processes will hold locks if they are uncontended (to gain performance). Contention is announced through inter-process communication, which does not work when the processes are isolated in Docker containers.
And more clearly he states in a follow-up comment (highlighting by me):
There might be other issues that we haven't yet discovered though, since sharing a user home between machines is not a use case we have designed for.
In other words: sharing a Gradle user home or even just the cache part of it across different machines or otherwise isolated processes is currently not officially supported by Gradle. (See also my related question.)
I guess the only way to solve this for your scenario is to either:
make sure that the Gradle processes in your CI pipelines can communicate with each other (e.g., by running them on the same host), or
don’t directly share the Gradle user home, e.g., by creating copies for all CI pipelines (see the sketch after this list), or
don’t run the CI pipelines in parallel.
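For the second option in GitLab CI terms, one sketch (the path is an assumption about your layout) is to point each pipeline at its own Gradle user home instead of the shared /cache mount:
./gradlew build --gradle-user-home "$CI_PROJECT_DIR/.gradle-user-home"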
Another scenario where this can happen is if some of the Gradle-related files are on a cloud file system like OneDrive that needs re-authentication. In that case:
Re-authenticate to the cloud file system
"Invalidate caches and restart" in Android Studio
1. First edit your config file /etc/sysconfig/jenkins and change the user to root: JENKINS_USER="root"
2. Modify the /var/lib/jenkins file permissions to root: chown -R root:root jenkins
3. Restart the service: service jenkins restart
Your exception:
What went wrong:
Could not create service of type FileHasher using GradleUserHomeScopeServices.createCachingFileHasher().
Timeout waiting to lock file hash cache (/cache/.gradle/caches/5.1/fileHashes). It is currently in use by another Gradle instance.
Owner PID: 149
Our PID: 137
Owner Operation:
Our operation:
Lock file: /cache/myshop/reunion/.gradle/caches/5.1/fileHashes/fileHashes.lock
This worked for me:
rm /cache/myshop/reunion/.gradle/caches/5.1/fileHashes/fileHashes.lock
(Remove the lock file)

SCM management of AppFabric Cache Cluster

I'm working on building out a standard set of configurations for our cache clusters within AppFabric. My goal is to have a repeatable cache settings configuration when we load up a new environment (where server names, the number of hosts, and other environmental factors differ).
My initial pass was to utilize the XML available from Export-CacheClusterConfig and simply change server names and size attributes in the <hosts> section, but I'm not sure what else is automatically registered with those values (the hostId parameter, for example).
My next approach that I've considered is a PowerShell script to simply build up the various caches with the correct parameters passed in that would simply run as a post-deploy step.
Anyone else have experience with repeatable AppFabric cache cluster deployments?
After trying both, the more successful option seems to be a combination of two factors. Management of the cache cluster (host information) is primarily an operations concern and is best handled by the operations team (i.e. those folks who read Server Fault). Since this information is also stored in the configuration (and would require an XML file obtained from Export-CacheClusterConfig for each environment), it's best left to the operations team to decide how they want to manage it. Importing the wrong file (with incorrect host information) has led to a number of issues.
So, we're left with PowerShell scripts. Here's a sample that I have. It could be cleaned up (check for cache existence first; see the sketch after the sample), but you get the general idea. It's also much easier to store in source control (as it's just one file).
New-Cache -CacheName CRMTickets -Eviction None -Expirable false -NotificationsEnabled true
New-Cache -CacheName ConsultantCache -Eviction Lru -Expirable true -TimeToLive 60
New-Cache -CacheName WorkitemCache -Eviction None -Expirable true -TimeToLive 60
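That existence check could look like the following sketch, assuming the AppFabric administration module is imported and Use-CacheCluster has selected the cluster:
# Create the cache only if it doesn't already exist (name taken from the sample above)
$existing = Get-Cache | ForEach-Object { $_.CacheName }
if ($existing -notcontains 'CRMTickets') {
    New-Cache -CacheName CRMTickets -Eviction None -Expirable false -NotificationsEnabled true
}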
