How to clear the Modules database in MarkLogic - gradle

What is the gradle task to clear all modules in a MarkLogic database?
I have tried mlClearDatabase, but it didn't work.

mlClearDatabase will clear the content database.
The task that you are looking for to clear the modules database is:
mlClearModulesDatabase - if the application exists, clear its modules database; otherwise do nothing
If you are clearing the modules in order to ensure that you are deploying to a fresh modules database, then you might want to use mlReloadModules, which will invoke mlClearModulesDatabase and then mlLoadModules.
https://github.com/marklogic-community/ml-gradle/wiki/Task-reference#modules
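For example, assuming a standard ml-gradle project (plugin applied, connection properties configured), both tasks can be run straight from the command line:
gradle mlClearModulesDatabase
gradle mlReloadModules
The second command is effectively mlClearModulesDatabase followed by mlLoadModules, so it is usually what you want for a clean redeploy.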

mlClearModulesDatabase
Gradle doesn't guarantee a complete cleanup of the Modules database if another app server has a dependency on that Modules database.
mlClearDatabase -Pdatabase={db-name} -Pconfirm=true
This clears the named database forcibly, which is why -Pconfirm=true is required. If other app servers depend on the cleared Modules database, your application will fail.
It is very true that mlReloadModules is the right way to deploy/redeploy modules.

Related

How can I make one single `.gradle` cache for multiple projects?

We are trying to use one single .gradle cache across our multiple build workers (in Jenkins) by creating .gradle on an NFS mount that is shared with all the workers.
Now when we run multiple projects with Gradle builds, they fail with the following errors:
Timeout waiting to lock artifact cache (/common/user/.gradle/caches/modules-2). It is currently in use by another Gradle instance.
Owner PID: 1XXXX
Our PID: 1XXXX
Owner Operation: resolve configuration ':classpath'
Our operation: resolve configuration ':classpath'
Lock file: /common/user/.gradle/caches/modules-2/modules-2.lock
What is the suggested way to share a .gradle cache among multiple users? This model works fine for the Maven .m2 cache.
We cannot have a separate .gradle for each worker, as it takes a lot of space to store the jars in each cache.
Because of the locking mechanism Gradle uses for its dependency cache, you can't have multiple instances write to the same cache directory.
However, you can create a shared, read-only dependency cache that can be used by multiple Gradle instances. You can find instructions in the docs. The basic mechanism is to create a folder that's pre-populated with the dependencies you think your builds will need, then set the GRADLE_RO_DEP_CACHE environment variable to point to that folder.
This cache, unlike the classical dependency cache, is accessed without locking, making it possible for multiple builds to read from the cache concurrently.
Because this cache is read-only, you would need to add dependencies to it beforehand. The builds themselves can't write their dependencies back to the read-only shared cache. The cache needs to follow the folder structure that Gradle expects, though, which isn't something that can really be set up by hand. In practice the way to get a working shared cache is to copy the dependency cache that was created by an existing Gradle instance.
The read-only cache should be sourced from a Gradle dependency cache that already contains some of the required dependencies. [...] In a CI environment, it’s a good idea to have one build which "seeds" a Gradle dependency cache, which is then copied to a different directory. This directory can then be used as the read-only cache for other builds.
The shared cache doesn't need to contain all of the dependencies, though. Any that are missing will be fetched by each individual build as normal, as if the shared cache wasn't there.
https://docs.gradle.org/current/userguide/dependency_resolution.html#sub:shared-readonly-cache
Using "ascii" graphics in the gradle manual isn't very instructive, but there they say:
run a regular gradle build.
now go into, on windows, %USERPROFILE%.gradle\caches, where you find a folder named 'modules-2'
grab the modules-2 folder, as is, move it into a directory accessible to all your builds, so that you have <mygradle_ro_cache>\modules-2...
delete any .lock or gc.* files from <mygradle_ro_cache>\modules-2\
set the env variable GRADLE_RO_DEP_CACHE to <mygradle_ro_cache>
Done.
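On Linux build workers the same procedure might look roughly like the commands below; the /common/gradle-ro-cache path and the Jenkins context are just placeholders, the only fixed names being the modules-2 folder and the GRADLE_RO_DEP_CACHE variable:
# seed the cache with one regular build, then copy it to the shared (e.g. NFS) location
mkdir -p /common/gradle-ro-cache
cp -r ~/.gradle/caches/modules-2 /common/gradle-ro-cache/modules-2
# remove lock and garbage-collection bookkeeping files so the copy can be read concurrently
rm -f /common/gradle-ro-cache/modules-2/*.lock /common/gradle-ro-cache/modules-2/gc.*
# point every worker at the shared copy, e.g. in the Jenkins agent environment
export GRADLE_RO_DEP_CACHE=/common/gradle-ro-cache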

How to handle Infinispan cache creation and deployment

We have an Infinispan cluster serving as a cache server for our applications. Every time we need a new cache, we have to edit the config files and redeploy the cluster, which is problematic. For obvious reasons, we don't want to redeploy the cache cluster.
We can add the new cache definition through the web interface or the CLI, but that has the downside of not recording the configuration in a repo. Ideally I want to be able to add cache definitions in a way that is persisted in my code repo, so that in case of a disaster I can simply redeploy the cache cluster.
We looked into creating cache definitions from source code at application startup, but that doesn't seem to be possible.
Does anyone have an idea about best practices for this issue?
After some R&D, this is what we found:
Programmatic creation of caches is possible through the JCache implementation in Infinispan, but we could not find a way to properly configure it; the end result is just an empty cache definition with no properties (a minimal sketch follows below).
What we ended up doing is creating the caches with the JBoss CLI: write a script that creates the cache definitions and commit that script to the version control system. This way you can recreate your cache server at any time by re-running the script. The downside of this approach is that you need jboss-cli installed on your deployment machine (probably your CI server), which is inconvenient; we decided to do this step manually for the time being.
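For reference, the JCache route mentioned above is just the standard javax.cache (JSR-107) API with Infinispan's JCache provider on the classpath. A minimal sketch follows; as noted, Infinispan-specific settings (clustering mode, eviction, persistence) cannot really be expressed through MutableConfiguration, so what you get is essentially a default cache definition:

import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;

public class CacheBootstrap {
    public static void main(String[] args) {
        // Resolves Infinispan's JCache provider when it is the provider on the classpath
        CacheManager cacheManager = Caching.getCachingProvider().getCacheManager();

        // Plain JSR-107 configuration; this is where the limitation described above shows up,
        // since Infinispan-specific properties are not expressible here
        MutableConfiguration<String, String> config = new MutableConfiguration<String, String>()
                .setTypes(String.class, String.class)
                .setStoreByValue(false);

        Cache<String, String> cache = cacheManager.createCache("my-cache", config);
        cache.put("key", "value");
    }
}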

How do I tell Sonar not to store the source code in the database?

I am trying two options.
One is to not store the source code at all.
If that is not possible, how do I delete a project from the Sonar database?
I tried "sonar.importSources=false", but that does not work on SonarQube 6.1 (the property was deprecated after version 4.5).
If I delete the project, will the source code remain in the database?
Storage of source code in the database can't be disabled, because it's used to display data in the web app.
Source code is indeed dropped from the database when a project is deleted.
This is late, but might be helpful for someone:
Sonar usually caches the project for performance via its squid mechanism, then stores the project data through a queue into its internal H2 database (which can be switched to one of a few supported databases). From there you have advanced options to manipulate the data in the database (things like fail-over cases can be handled), but I am not aware of any way to avoid storing project data in the database.
Unless you have configured another user, the default Sonar dashboard user is admin with password admin. Log in to the console, navigate to Administration -> Projects -> Management, and delete any unnecessary projects. Once you do this, the Sonar dashboard will not show the project again until you re-analyze it. To verify this worked, re-analyze the project, click on it in the dashboard, and check the version under Activity.
Additional info: if you modify the Maven project code, build the project first and then run sonar:sonar so the latest modifications are reflected.
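If you would rather script the cleanup than click through the dashboard, SonarQube also exposes project deletion through its web API (api/projects/delete); the exact parameter name has varied between versions, so check the api/webservices documentation on your own instance. With placeholder URL, credentials, and project key, the call looks something like:
curl -u admin:admin -X POST "http://localhost:9000/api/projects/delete?key=my:project"
Deleting a project this way drops its source code from the database along with the rest of its data, as described in the first answer.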
I agree with the other answer; just elaborating in a few lines.

Add jars to IBM WebSphere Portal Server without restarting the server

I have multiple applications that share common jars, so I decided to put them in a shared library and add a reference to it from all the applications.
Now the problem is that when I make a change in one of the jars and put it back, I have to restart the server.
The weird thing is that I have to do that on my local system but not on the shared server. I was trying to find the setting that will allow me to upload the jar and see the effect without restarting the server.
One of the blogs says it is not possible, but it works on the shared server, so I am sure it is definitely possible.
Please advise what can be done here.
Thanks
It sounds like you've configured the shared library to be a part of the server's classpath. Any JARs on the server classpath are only loaded once on server startup. Changes to these JARs require a full server restart.
Libraries that are added to the application's classpath can be reloaded dynamically. The application will still need to restart when the JAR gets changed but that's a much lighter operation and WAS will often automatically detect a file system change and restart the affected applications.
Check how you've configured your shared library to make sure it's being loaded on the application classpath.

ATG 10 on WebLogic Lock Manager Configuration

I am trying to configure the Lock Manager instances for Oracle ATG 10.2 running on WebLogic 10.3.6 and was wondering if these instances absolutely need database connectivity? We plan to only run the ServerLockManager components on these servers and nothing else.
Thoughts?
In theory you may be able to create your own ATG module that defines only your server lock manager components and start up only that module on your 'lock manager' WebLogic instances. This would require quite a bit of configuration customization, and I have never seen it done across the handful of customers I have worked with, so it is not common; you will likely have some difficulty implementing it and getting proper support from Oracle if you do run into issues with repositories or lock managers in the future.
In any case, depending on your repository and lock server configurations, there is a chance that the lock manager components you start up have required dependencies on other components that ultimately depend on DB-backed repositories. So in the end you will likely require DB connectivity regardless.
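For illustration only, the layered configuration for such a "lock manager only" setup is normally expressed through Nucleus .properties files. The component paths and property names below are the conventional ones, but treat them as assumptions and verify them against your own ATG 10.2 documentation:
# on the lock-manager instances: /atg/dynamo/service/ServerLockManager.properties
$class=atg.service.lockmanager.ServerLockManager
port=9010
# on every other instance: /atg/dynamo/service/ClientLockManager.properties
useLockServer=true
lockServerAddress=lockserver1.example.com
lockServerPort=9010
Whether those client-side components pull in repository (and therefore DB) dependencies is exactly the concern raised above.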
