Spring Boot, Caffeine and Bucket4j integration with individual expiry for each cache - spring-boot

I have created a Spring Boot app and have implemented in-memory caching and throttling using Caffeine and Bucket4j.
application.yml
spring:
  cache:
    jcache:
      provider: com.github.benmanes.caffeine.jcache.spi.CaffeineCachingProvider
    cache-names:
      - allEndpoints
      - hello
      - world
      - myCache1
      - myCache2
    caffeine:
      spec: maximumSize=1000000,expireAfterAccess=3600s
bucket4j:
  enabled: true
  filters:
    - cache-name: allEndpoints
      url: /api/v1/.*
      rate-limits:
        - bandwidths:
            - capacity: 10
              time: 1
              unit: minutes
              fixed-refill-interval: 1
              fixed-refill-interval-unit: minutes
    - cache-name: hello
      url: /api/v1/hello.*
      rate-limits:
        - bandwidths:
            - capacity: 2
              time: 1
              unit: minutes
              fixed-refill-interval: 1
              fixed-refill-interval-unit: minutes
    - cache-name: world
      url: /api/v1/world.*
      rate-limits:
        - bandwidths:
            - capacity: 2
              time: 1
              unit: minutes
              fixed-refill-interval: 1
              fixed-refill-interval-unit: minutes
The caches allEndpoints, hello and world are used by Bucket4j, whereas the caches myCache1 and myCache2 are used by the application to cache the results returned by the service methods. Below is the service class.
@Service
public class TestService {

    @Cacheable("myCache1")
    public String myService1(int key) {
        System.out.println("Service1 Invoked");
        return "hello from Service1:: " + key;
    }

    @Cacheable("myCache2")
    public String myService2(int key) {
        System.out.println("Service2 Invoked");
        return "hello from Service2:: " + key;
    }
}
I would like to know if it is possible to have a different expireAfterAccess or expireAfterWrite for each of the in-memory caches the application uses, i.e. something like what is mentioned in this article. TIA.
The entire code can be found here.
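For what it's worth, Caffeine's JCache provider reads its settings through Typesafe Config (HOCON) rather than through the spring.cache.caffeine.spec property, and that file supports per-cache sections. A sketch of such an application.conf is below; the key names are an assumption from memory of Caffeine's reference.conf and should be double-checked there.

```hocon
# application.conf -- sketch only; verify key names against Caffeine's reference.conf
caffeine.jcache {
  # defaults inherited by every cache unless overridden per cache
  default {
    policy.maximum.size = 1000000
  }
  myCache1 {
    policy.eager-expiration.after-access = 1h
  }
  myCache2 {
    policy.eager-expiration.after-write = 30m
  }
}
```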

Related

Google Workflows only returning the first 20 documents in a collection

I have a simple workflow that starts by retrieving the documents from a Firebase collection, but for some reason the call only returns the first 20 documents in the collection.
The workflow is written in YAML -- is there a limitation I am unaware of?
main:
  params: [input]
  steps:
    - initialize:
        assign:
          - project: "dev"
          - collection: "shops"
    - shops:
        steps:
          - get_documents:
              call: googleapis.firestore.v1.projects.databases.documents.get
              args:
                name: ${"projects/" + project + "/databases/(default)/documents/" + collection}
              result: documents
    - endit:
        return: ${documents.documents}
Resolved: I added a pageSize to the query params.
https://cloud.google.com/firestore/docs/reference/rest/v1beta1/projects.databases.documents/list
- shops:
    steps:
      - get_documents:
          call: googleapis.firestore.v1.projects.databases.documents.get
          args:
            name: ${"projects/" + project + "/databases/(default)/documents/" + collection}
            query:
              pageSize: 10000
          result: documents

How to create a random string in Artillery for every connection

I am doing a load test for socket.io using Artillery.
# my-scenario.yml
config:
  target: "http://localhost:4000"
  phases:
    - duration: 60
      arrivalRate: 10
  engines:
    socketio-v3: {transports: ['websocket']}
  socketio:
    query:
      address: "{{ $randomString() }}"
scenarios:
  - name: My sample scenario
    engine: socketio-v3
I need the address field to be a different random string of fixed length for every connection.
Currently, $randomString() only generates one random string, which is used for all connections, and I also can't control its length.
Thanks in advance!

Serverless Framework - unrecognized property 'params'

I am trying to create a scheduled Lambda function using the Serverless Framework and to send it different parameters from different events.
Here is my serverless configuration:
functions:
  profile:
    timeout: 10
    handler: profile.profile
    events:
      - schedule:
          rate: rate(1 minute)
          params:
            hello: world
The issue is that when I run sls deploy, I get the following error:
Serverless: at 'functions.profile.events[0]': unrecognized property 'params'
This is basically copied from the documentation here, so it should work...
Am I missing something?
The documentation you're referencing is for Apache OpenWhisk.
If you're using AWS, you'll need to use input, as shown in the AWS documentation:
functions:
  aggregate:
    handler: statistics.handler
    events:
      - schedule:
          rate: rate(10 minutes)
          enabled: false
          input:
            key1: value1
            key2: value2
            stageParams:
              stage: dev
The documentation that you referred to is for OpenWhisk: https://www.serverless.com/framework/docs/providers/openwhisk/events/schedule/#schedule/.
CloudWatch Events (now rebranded as EventBridge) is documented at https://www.serverless.com/framework/docs/providers/aws/events/schedule/#enabling--disabling. Sample code for reference:
functions:
  aggregate:
    handler: statistics.handler
    events:
      - schedule:
          rate: rate(10 minutes)
          enabled: false
          input:
            key1: value1
            key2: value2
            stageParams:
              stage: dev
      - schedule:
          rate: cron(0 12 * * ? *)
          enabled: false
          inputPath: '$.stageVariables'
      - schedule:
          rate: rate(2 hours)
          enabled: true
          inputTransformer:
            inputPathsMap:
              eventTime: '$.time'
            inputTemplate: '{"time": <eventTime>, "key1": "value1"}'
Official docs at https://docs.aws.amazon.com/eventbridge/latest/userguide/scheduled-events.html
One of my configurations looks something like the one below; there we use parameters instead of params.
functions:
  test_function:
    handler: handler.test_function
    memorySize: 512
    timeout: 60
    events:
      - http:
          path: get-hello
          method: get
          request:
            parameters:
              queryStrings:
                name: true

serverless warm up plugin concurrent execution of warmup functions

I got serverless-plugin-warmup 4.2.0-rc.1 working fine with Serverless version 1.36.2, but it only executes one single warmup call instead of the configured five.
Is there any problem in my serverless.yml config?
It is also strange that I have to add warmup: true to the function section to get the function warmed up. According to the docs at https://github.com/FidelLimited/serverless-plugin-warmup, the config in the custom section should be enough.
plugins:
  - serverless-prune-plugin
  - serverless-plugin-warmup

custom:
  warmup:
    enabled: true
    concurrency: 5
    prewarm: true
    schedule: rate(2 minutes)
    source: { "type": "keepLambdaWarm" }
    timeout: 60

functions:
  myFunction:
    name: ${self:service}-${opt:stage}-${opt:version}
    handler: myHandler
    environment:
      FUNCTION_NAME: myFunction
    warmup: true
In AWS CloudWatch I only see one execution every 2 minutes. I would expect to see 5 executions every 2 minutes, or do I misunderstand something here?
EDIT:
Now using the master branch, concurrency works, but the context that is delivered to the function that should be warmed is broken. Using Spring Cloud Function I get "Error parsing Client Context as JSON".
Looking at the JS of the generated warmup function, the delivered source does not look OK:
const functions = [{"name":"myFunction","config":{"enabled":true,"source":"\"\\\"{\\\\\\\"source\\\\\\\":\\\\\\\"serverless-plugin-warmup\\\\\\\"}\\\"\"","concurrency":3}}];
Config is:
custom:
  warmup:
    enabled: true
    concurrency: 3
    prewarm: true
    schedule: rate(5 minutes)
    timeout: 60
Adding the property sourceRaw: true to the warmup config generates a clean source in the function JS:
const functions = [{"name":"myFunctionName","config":{"enabled":true,"source":"{\"type\":\"keepLambdaWarm\"}","concurrency":3}}];
Config:
custom:
  warmup:
    enabled: true
    concurrency: 3
    prewarm: true
    schedule: rate(5 minutes)
    source: { "type": "keepLambdaWarm" }
    sourceRaw: true
    timeout: 60

Groovy-based Spring Boot task choking on configured cron

Not sure if this is purely a Spring Boot issue, purely a Groovy issue, or a problem arising from using Groovy to build a Spring Boot app. I have a Spring Boot background task that -- in production -- I want to run once an hour:
@Component
class MyTask {
    @Scheduled(cron = "${tasks.mytask.cron}")
    void doSomething() {
        // blah whatever
    }
}
In my application.yml file I have:
logging:
  config: 'logback.groovy'
server:
  port: 9200
  error:
    whitelabel:
      enabled: false
spring:
  cache:
    type: none
myapp:
  detailsMode: ${detailsMode:Terse}
  verification: 5
  tasks:
    mytask:
      cron: '0 0/1 * 1/1 * ? *'
However for local development I want to be able to change the cron expression (for testing, etc.). When I go to compile this I get:
Expected '$tasks.mytask.cron' to be an inline constant of type java.lang.String in #org.springframework.scheduling.annotation.Scheduled
# line 31, column 23.
#Scheduled(cron = "${tasks.mytask.cron}")
Any ideas what I need to do to fix this? I need an externally-configurable value like tasks.mytask.cron that I can define in my app properties/YAML.
myapp:
  detailsMode: ${detailsMode:Terse}
  verification: 5
  tasks:
    mytask:
      cron: '0 0/1 * 1/1 * ?'
or
@Scheduled(cron = '${myapp.tasks.mytask.cron}')
The single quotes matter in Groovy: a double-quoted "${...}" is a GString that Groovy tries to interpolate at compile time, while a single-quoted literal is a plain String that Spring resolves at runtime. Also notice that your cron format is incorrect: Spring's @Scheduled expects a six-field expression (second, minute, hour, day of month, month, day of week) with no trailing year field.
