Maven or Jenkins to remove Couchbase bucket documents

I'm using Couchbase, and I need to clean my bucket every time my Jenkins job runs. I've been thinking of creating a Java app to clean it before the job starts, but I was wondering whether there is any Jenkins or Maven plugin I could use to achieve this. So far I could not find anything like that on Google.
Any suggestion?
Regards

There's a notion of a "flush" in Couchbase. It must be enabled on the bucket, and what it does is empty the bucket and remove its data on disk.
You can use the REST API to trigger a flush (see the REST flush doc; note that in the examples there, the second "default" is the target bucket).
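A minimal sketch of triggering the flush with curl; the host, the credentials and the bucket name (the second "default" in the path) are placeholders for your own cluster:

# Flush the "default" bucket (flush must be enabled on the bucket first).
# Host, credentials and bucket name are placeholders - adjust to your cluster.
curl -X POST -u Administrator:password \
  http://localhost:8091/pools/default/buckets/default/controller/doFlush

In a Jenkins job this can run as a pre-build shell step before the tests start.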

Did you have a look at the couchbase-maven-plugin?

Related

How do I clean up the botstate db in Cosmos DB

I'd like to periodically clean up old conversations/user states, e.g. older than 30 days.
Is there a script that can do that?
There is no direct way of achieving this currently through the framework.
I have been able to achieve this in one simple way.
Assuming you are using Cosmos DB (SQL) for your state management, you could set the TTL on that container. This will delete documents 30 days after the last update made to them.
You could also set the TTL when you first create the container through your Bot SDK; that way you wouldn't need to change it manually in the portal.
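If you prefer to set it outside the SDK, a minimal sketch with the Azure CLI (the account, resource group, database and container names are placeholders; 2592000 seconds is 30 days):

# Set a default TTL of 30 days (2592000 seconds) on the bot state container.
# Account, resource group, database and container names are placeholders.
az cosmosdb sql container update \
  --account-name my-cosmos-account \
  --resource-group my-resource-group \
  --database-name botstate-db \
  --name botstate-container \
  --ttl 2592000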

Is it possible to use a Nexus Repository to store a Gradle Remote Build Cache?

I have access to a private Nexus Repository and would like to speed up my CI builds and thought that I could use the private repository to store and access my build cache. Is this a possibility or a dead end?
It works like a breeze.
Just create a "Raw" repository and give a user write permission for it.
This user then is used to fill the cache and you can use another user or anonymous access to read from the cache.
I just tried it minutes ago.
Any web server that supports PUT for storing files and GET for retrieving the same files should be fine with the default HttpBuildCache implementation.
You can even provide your own client-side implementation to use any remote service you want as a build cache.
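Before wiring Gradle's HttpBuildCache to it, you can sanity-check the raw repository with a plain PUT and GET; the Nexus URL, repository name and user names below are placeholders:

# Write a test entry with the user that has write permission on the raw repo.
curl -u cache-writer:secret -X PUT --data-binary @test-entry.bin \
  https://nexus.example.com/repository/gradle-build-cache/test-entry.bin

# Read it back (anonymous or a read-only user is enough for cache consumers).
curl -u cache-reader:secret -O \
  https://nexus.example.com/repository/gradle-build-cache/test-entry.bin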
No.
Gradle's remote build cache is one of the selling points of Gradle Enterprise, so it's not something you can just "plug in" to another piece of software like Nexus.
There is however a Docker image that is designed to work with Gradle Enterprise. Maybe you could make use of that somehow.
But again, the remote build cache is a selling point of Gradle Enterprise and as a result is designed to work with Gradle Enterprise.
https://gradle.com/build-cache/

AWS Lambdas: SAM deployment - identifying and removing old S3 package versions?

I'm relatively new to AWS Lambdas and SAM, and now that I've got things working I have a seemingly simple question I can't find an answer to.
I've spent the last week getting a lambda app up and running using SAM (build, package, deploy numerous times until it works).
Problem
So now the S3 bucket I'm uploading to has numerous (100 or so) previously uploaded (by sam package) versions of my zipped-up code.
Question
How can you identify which zipped up packages are the current ones (ie used by a current function and/or layer), and remove all the old obsolete ones?
Is there a way in SAM (command-line options or in the template files) to have it automatically delete old versions of your package when 'sam package' uploads a new version?
Is there somewhere in the AWS console to find the key of the zip file in your bucket that a current function or layer is using? (I tried everywhere to find that but couldn't manage to; it's easy to get the ARNs, but not the actual URI in your bucket that they map to.)
Slight Complication
In the bucket I'm using to store the lambda packages, I've also got a custom layer.
So if it were just the app packages, I could easily (right now) just go in and delete everything in the bucket, then do a re-build/package/deploy to clean it up. But that would also delete my layer (and, same problem, I'm not sure which zip file in the bucket the layer is using).
But that approach wouldn't work long term anyway, as I'm planning to put together approx 10-15 different packages/functions, so deleting everything in the bucket when just one of them is updated is not going to work.
thanks for any thoughts, ideas and help!
1. In your packaged.yaml file (generated after invoking sam package) you can see, under each Lambda function, a CodeUri with a unique path s3://your-bucket/id. That id is the object used by the current function and/or layer and resides in your bucket. For a layer it's ContentUri. (A quick way to list them is sketched below.)
2. Automatically deleting old versions of your package when 'sam package' uploads a new version: I'm not aware of anything like that.
3. Through the AWS console you can see your layer version, but I don't think there is any indication of your function/layer CodeUri/ContentUri.
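For instance, a quick way to list the S3 keys the current template references (this assumes the default sam package output file name, packaged.yaml):

# List the S3 objects the packaged template actually references.
grep -E 'CodeUri|ContentUri' packaged.yaml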
You can try to compare the currently deployed stack with what you've stored in S3. Let's assume you have a stack called test-stack, then you can retrieve the processed stack from CloudFormation using the AWS CLI like this:
AWS_PAGER="" aws cloudformation get-template --stack-name test-stack \
--output json --template-stage Processed
To only get the processed template body, you may want to pipe the output again through
jq -r ".TemplateBody"
Now you have the processed CFN template that tells you which S3 buckets and keys it is using. Here is an example for a lambda function:
MyLambda:
  Type: 'AWS::Lambda::Function'
  Properties:
    Code:
      S3Bucket: my-bucket
      S3Key: 0c53a7ccb1c1762eaeebd96555d13a20
You can then try to delete the S3 objects that are not referenced by the current stack.
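A rough sketch of that comparison, assuming sam package uploaded the zips to the bucket root under their default 32-character hash names, and that the bucket and stack names (my-deployment-bucket, test-stack) are placeholders:

# Keys referenced by the currently deployed stack.
aws cloudformation get-template --stack-name test-stack \
  --output json --template-stage Processed \
  | jq -r '.TemplateBody' | grep -oE '[0-9a-f]{32}' | sort -u > referenced-keys.txt

# All keys currently stored in the deployment bucket.
aws s3api list-objects-v2 --bucket my-deployment-bucket \
  --query 'Contents[].Key' --output text | tr '\t' '\n' | sort -u > stored-keys.txt

# Keys present in the bucket but no longer referenced (candidates for deletion).
comm -13 referenced-keys.txt stored-keys.txt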
There used to be a GitHub issue requesting some sort of automatic cleanup mechanism, but it was closed as out of scope: https://github.com/aws/serverless-application-model/issues/557#issuecomment-417867028
It may be worth noting that you could also try to set up an S3 lifecycle rule to automatically clean up old S3 objects, as suggested here: https://github.com/aws/aws-sam-cli/issues/648. However, I don't think that this will always be a suitable solution.
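For reference, a rule along these lines (the bucket name and the retention period are placeholders) expires every object a fixed number of days after upload, which is also why it may conflict with the rollback scenario quoted below:

# Expire deployment packages 30 days after upload (bucket name is a placeholder).
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-deployment-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-old-sam-packages",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Expiration": {"Days": 30}
    }]
  }'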
Last but not least, there has been an attempt to include some automatic cleaning approach in the sam documentation, but it was dismissed as:
[...] there are certain use cases that require these packaged S3 objects to persist, and deleting them would cause significant problems. One such example is the "CloudFormation stack deployment rollback" scenario: 1) Deploy version N of a stack, 2) Delete the packaged S3 object that version N uses, 3) Deploy version N+1 with a "bad" template file that triggers a CloudFormation rollback.
https://github.com/awsdocs/aws-sam-developer-guide/pull/3#issuecomment-462993286
So while it is possible to identify obsolete S3 packaged versions, it might not always be a good idea to delete them after all...
Actually, CloudFormation (which SAM is based on) uses S3 as temporary storage only. When you create or update the Lambda function, a copy of the code is made, so you could delete all objects from the bucket and the Lambda function would still work correctly.
Caveat: there are cases where the S3 object may be required, for example to roll back a CloudFormation stack, as in the "CloudFormation stack deployment rollback" scenario (reference):
Deploy version N of a stack
Delete the packaged S3 object that version N uses
Deploy version N+1 with a "bad" template file that triggers a CloudFormation rollback

Jfrog Artifactory: How to delete old snapshot artifacts

I had a task to delete old SNAPSHOT artifacts which sit under many folders/directories.
We can't go and delete each and every artifact manually, so I would like to go with the REST API.
For clear info:
https://artifactory.com/artifactory/maven-local/com/aa/bbb/cccc/dddd/XYZ-SNAPSHOT/abc.jar
https://artifactory.com/artifactory/maven-local/com/aa/bbb/cccc/dddd/XYZ-SNAPSHOT/xyz.jar
https://artifactory.com/artifactory/maven-local/com/aa/bbb/cccc/eeee/XYZ-SNAPSHOT/pqr.jar
https://artifactory.com/artifactory/maven-local/com/aa/bbb/dddd/eeee/XYZ-SNAPSHOT/lmn.jar
Above 4 examples have different directories.
My script needs to go into each and every directory and check for XYZ-SNAPSHOT; if it is found, we can build the URL and delete it through cURL.
How can we achieve this? Or is there any other way to do it?
You probably want to use Artifactory Query Language (AQL), which is the easiest way to find artifacts and modules according to patterns; you can find a bunch of examples on that page. Moreover, to perform the deletion easily, and even automate the process in the future, I advise using the JFrog CLI. You can also read this interesting blog about a similar use case.
Also, there is a 'Max Unique Snapshots' field in your local Maven repository settings; you can use it to have Artifactory keep only a specified number of unique snapshots per artifact.
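A hedged sketch of the AQL route with curl and jq; the base URL and repository name are taken from the question, the credentials are placeholders, and you should review the query results before deleting anything:

# Find all artifacts under any *-SNAPSHOT path in maven-local via AQL.
curl -s -u user:password \
  -H "Content-Type: text/plain" \
  -X POST "https://artifactory.com/artifactory/api/search/aql" \
  -d 'items.find({"repo":"maven-local","path":{"$match":"*-SNAPSHOT"}})' \
  > results.json

# Build the full URLs and delete them (review results.json before running this!).
jq -r '.results[] | "https://artifactory.com/artifactory/maven-local/\(.path)/\(.name)"' results.json \
  | while read -r url; do curl -u user:password -X DELETE "$url"; done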

How does the indexing of Maven artifact repositories work

I would like to understand how the indexing for the artifact repositories like Nexus and Artifactory works. What benefit does it provide? I mean -- how does it help and what is the logic that's used when resolving artifacts?
My understanding is that the Lucene indexes contain information concerning which artifacts are present in a given proxied repository or group, and that once these indexes have been downloaded, you can easily check if a remote repository contains the artifact you're looking for and try to resolve it from the repositories that have it. Is this the only use? Is the index also queried for local resolutions (because each repository does have an index)? How does this actually work?
Artifactory doesn't use indexes for searching. We believe that indexes are a thing of the past, from when machines were slow and couldn't handle large searches on the server side. Here is just a partial list of why search indexes are bad:
Clients need to download huge files before searching
The indexes are updated too rarely to reflect frequent changes
A system with search indexes requires a special client to perform the search
The client is tightly coupled with the index format.
Nowadays, when servers like Artifactory can provide real-time searching, exposed via a UI for humans and an API for tools like IDEs, indexes are obsolete and are supported in Artifactory only for compatibility with tools like m2eclipse.
Repository indexing is all about searching. The Maven Eclipse plugin documentation describes the functionality:
http://books.sonatype.com/m2eclipse-book/reference/repository-sect-repo-view.html#d5e1169
Maintaining a server-side index makes Maven client operation more efficient. Server-side repository managers can use indexes to enable search interfaces and REST APIs for retrieving artifacts (Sonatype Nexus doesn't need a database).
As Mark already said, the Maven Index is all about searching, either server side (where search is exposed over the UI or REST) or client side, as M2E does for example (a typical case is code completion in the POM editor, where context hints use the index to offer Gs, As and Vs while you add dependencies).
Nexus does NOT use the index to fulfil its main functionality of serving up artifacts and/or proxying them, although it DOES maintain the index on the fly. Again, indexes are not used in "resolution" or in any other way, except for the Search UI and for downstream publishing (for clients like M2E).
For an example of "client side" usage of Maven Indexer, you can look at the examples here.
HTH,
~t~
