Can Jedis get/set a Java POJO? - jedis

I'm using Jedis as the java client to connect to Redis Servers.
Question 1: It seems there is no method to get/set an Object<? extends Serializable>? All values must be String or byte[]?
Other clients like JRedis and spymemcached (for memcached) can do this.
Question 2: If I use ShardedJedis, there seems to be no way to set auth/password, while the Jedis class can (using auth(String password)).

Regarding Question 1: Jedis won't handle POJOs. You should serialize to a String or byte[] yourself and store that with Jedis, although I wouldn't recommend storing your Java objects serialized, as you won't be able to use all of Redis's cool features. A different approach would be to use an object-hash mapper such as JOhm.
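For example, a minimal sketch (not a Jedis API, just plain JDK serialization; the helper names are mine) that round-trips a Serializable value through byte[]:

// A minimal sketch: store/load a Serializable POJO as byte[] via JDK serialization.
// "jedis" is an existing, connected Jedis instance.
static void setObject(Jedis jedis, String key, Serializable value) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
        oos.writeObject(value);
    }
    jedis.set(key.getBytes(StandardCharsets.UTF_8), bos.toByteArray()); // byte[] overload of set()
}

static Object getObject(Jedis jedis, String key) throws IOException, ClassNotFoundException {
    byte[] raw = jedis.get(key.getBytes(StandardCharsets.UTF_8));       // byte[] overload of get()
    if (raw == null) {
        return null;
    }
    try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(raw))) {
        return ois.readObject();
    }
}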
Regarding Question 2: ShardedJedis only supports commands that run on a single key; this is to guarantee atomicity. If you want to run a specific command on a specific Redis instance, you can use shardedJedis.getShard("somekey"), which returns a Jedis instance you can use.
Another way to handle this, the recommended one, is to specify your password on the JedisShardInfo instances.
You can see an example of this in the tests.
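For reference, a rough sketch of the per-shard password approach (host names, port and password are placeholders):

// Set the password on each JedisShardInfo before building the ShardedJedis
List<JedisShardInfo> shards = new ArrayList<>();

JedisShardInfo shard1 = new JedisShardInfo("redis-host-1", 6379);
shard1.setPassword("secret");
shards.add(shard1);

JedisShardInfo shard2 = new JedisShardInfo("redis-host-2", 6379);
shard2.setPassword("secret");
shards.add(shard2);

ShardedJedis shardedJedis = new ShardedJedis(shards);
shardedJedis.set("somekey", "somevalue"); // routed to the shard that owns "somekey"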

Answer to question 1:
Redisson (a Redis Java client) can work with POJO objects, and you don't need to serialize/deserialize objects yourself each time or manage connections (acquire/release). It's all done by Redisson.
Here is an example:
RBucket<AnyObject> bucket = redisson.getBucket("anyObject");
// set an object
bucket.set(new AnyObject());
// get an object
AnyObject myObject = bucket.get();
or you can use the LiveObjectService, which stores the POJO's fields in a Redis hash object.
@REntity
public class MyObject {

    @RId
    private String id;

    @RIndex
    private String value;

    private MyObject parent;

    public MyObject(String id) {
        this.id = id;
    }

    public MyObject() {
    }

    // getters and setters
}
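A short usage sketch for the class above (the getter/setter names are assumed from its fields, and redisson is a configured RedissonClient):

RLiveObjectService service = redisson.getLiveObjectService();

MyObject obj = new MyObject("some-id");
obj.setValue("some value");
obj = service.persist(obj);                       // returns an attached proxy backed by a Redis hash

MyObject found = service.get(MyObject.class, "some-id");
System.out.println(found.getValue());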
Redisson supports many popular codecs like Jackson JSON, Avro, Smile, CBOR, MsgPack, Kryo, FST, LZ4, Snappy and JDK Serialization.
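The codec is configured once on the client, for example (a small sketch using the Jackson JSON codec and a local single-server address):

Config config = new Config();
config.useSingleServer().setAddress("redis://127.0.0.1:6379");
config.setCodec(new org.redisson.codec.JsonJacksonCodec()); // or Kryo, FST, MsgPack, ...

RedissonClient redisson = Redisson.create(config);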

Related

Given an assignment to return specific data using Springboot reactive but the JSON is really complicated

I am new to Spring Boot reactive.
I was asked to call the following endpoint and return today's weather data only:
https://api.weather.gov/gridpoints/MLB/33,70/forecast
I believe I need to use something like this...
WebClient.create().get()
.uri("https://api.weather.gov/gridpoints/MLB/33,70/forecast")
.retrieve()
.bodyToMono(WeatherClass.class)
.block();
Do I need to map out an entire Java object to match the JSON at the endpoint? Is there an easy way to perhaps just grab a certain piece of the JSON?
How would I handle something like the @context key in the JSON?
The WebClient in Spring Boot automatically uses Jackson's ObjectMapper to unmarshal JSON to a Java object when the content type of the response is application/json, so there is no need to pull in any additional libraries or write any specific unmarshalling code, unless you want to use an alternate JSON-to-Java library.
When using Jackson, you don't need to map every field in the JSON to your Java object. You can annotate your Java class with @JsonIgnoreProperties(ignoreUnknown = true) to tell Jackson to ignore any properties that appear in the JSON but do not have a matching field in your Java object.
An example WeatherClass in which you want only the @context and forecastGenerator unmarshalled would look something like this:
@JsonIgnoreProperties(ignoreUnknown = true)
public class WeatherClass {

    private final List<Object> context;
    private final WeatherProperties weatherProperties;

    public WeatherClass(@JsonProperty("@context") List<Object> context,
                        @JsonProperty("properties") WeatherProperties weatherProperties) {
        this.context = context;
        this.weatherProperties = weatherProperties;
    }

    // static so Jackson can instantiate it without an enclosing instance
    private static class WeatherProperties {

        private final String forecastGenerator;

        private WeatherProperties(@JsonProperty("forecastGenerator") String forecastGenerator) {
            this.forecastGenerator = forecastGenerator;
        }
    }
}
Note:
@context seems to be an array that can contain multiple types (both objects and strings in your example). I've used Object to work around this, which obviously isn't the most graceful solution, but it should be adequate to demonstrate how Jackson works.
Alternatively, you can unmarshal the response to a JsonNode, which you can then use to traverse the structure of the JSON without converting it to a Java object. For example:
String forecastGenerator = WebClient.create().get()
        .uri("https://api.weather.gov/gridpoints/MLB/33,70/forecast")
        .retrieve()
        .bodyToMono(JsonNode.class)
        .block()
        .get("properties").get("forecastGenerator").asText();
There are many other annotations provided by Jackson that can be used to define how the unmarshaller behaves, too many to cover here. See Jackson Deserialisation Annotations.

how to use EnumCodec in r2dbc-postgresql

I am trying to use EnumCodec from the latest version of r2dbc-postgresql (0.8.4) unsuccessfully, and I wondered if you could help me.
I am also using spring-data-r2dbc version 1.1.1.
I took the exact example from GitHub and created an enum type “my_enum” in my Postgres,
and a table “sample_table” which contains ‘name’ (text) and ‘value’ (my_enum).
Then I did as in the example:
SQL:
CREATE TYPE my_enum AS ENUM ('FIRST', 'SECOND');
Java Model:
enum MyEnumType {
FIRST, SECOND;
}
Codec Registration:
PostgresqlConnectionConfiguration.builder()
.codecRegistrar(EnumCodec.builder().withEnum("my_enum", MyEnumType.class).build());
I use DatabaseClient in order to communicate with the DB.
I tried to insert using 2 methods:
databaseClient.insert().into(SampleTable.class)
.using(sampleTable).fetch().rowsUpdated();
or:
databaseClient.insert().into("sample_table")
.value("name", sampleTable.getName())
.value("value", sampleTable.getValue())
.then();
where SampleTable is:
@Data
@AllArgsConstructor
@NoArgsConstructor
@Builder
@Table("sample_table")
@JsonIgnoreProperties(ignoreUnknown = true)
@JsonInclude(JsonInclude.Include.NON_NULL)
public class SampleTable implements Serializable {

    private String name;

    @Column("value")
    @JsonProperty("value")
    private MyEnumType value;
}
But I get the same error using both:
column "value" is of type my_enum but expression is of type character varying
Can you please help me understand what I did wrong, or refer me to some working example?
I appreciate your help!
Spring Data considers enum values as values to be converted to String by default. You need to register a Converter that retains the type by writing the enum-type as-is.
@WritingConverter
class MyEnumTypeConverter implements Converter<MyEnumType, MyEnumType> {

    @Override
    public MyEnumType convert(MyEnumType source) {
        return source;
    }
}
Next, you need to register the converter. If you're using Spring Data R2DBC's AbstractR2dbcConfiguration, then override getCustomConverters():
class MyConfiguration extends AbstractR2dbcConfiguration {

    @Override
    protected List<Object> getCustomConverters() {
        return Collections.singletonList(new MyEnumTypeConverter());
    }

    // …
}
Alternatively, if you configure DatabaseClient standalone, then you need a bit more code:
PostgresqlConnectionConfiguration configuration = PostgresqlConnectionConfiguration.builder()
.codecRegistrar(EnumCodec.builder().withEnum("my_enum", MyEnumType.class).build())
.host(…)
.username(…)
.password(…)
.database(…).build();
R2dbcDialect dialect = PostgresDialect.INSTANCE;
DefaultReactiveDataAccessStrategy strategy = new DefaultReactiveDataAccessStrategy(dialect, Collections.singletonList(new MyEnumTypeConverter()));
DatabaseClient databaseClient = DatabaseClient.builder()
.connectionFactory(new PostgresqlConnectionFactory(configuration))
.dataAccessStrategy(strategy)
.build();
However, there are two bugs in the R2DBC driver that prevent Spring Data from working as expected:
Row.decode(…) fails for enum type with IllegalArgumentException: 72093 is not a valid object id #301
EnumCodec decoding fails if the requested value type is Object #302
As a temporary workaround, you can duplicate EnumCodec in your codebase and apply the fix from #302 until a new release of R2DBC Postgres is available.
I have tried to use a pg enum type with a Java enum class in my sample projects.
If you are using the DatabaseClient API (from Spring Framework 5.3 core, without Spring Data R2DBC), registering an EnumCodec on the PostgresqlConnectionFactory is enough.
Check my example.
If you create a pg enum type as the column type in the table schema and register an EnumCodec via the PostgresqlConnectionConfiguration builder, you also need to write a custom @ReadingConverter to read the custom enum.
Check my example here.
If you use a text-based type (varchar) in the table schema with a Java enum, no extra conversion effort is needed; check my example here.
The Spring Data R2DBC docs say that if you use the driver's built-in mechanism to handle enums, you have to register an EnumWriteSupport. But in my experience with Spring Data R2DBC, writes are handled automatically, while a reading converter is required to read the enum from Postgres.

Serve PostgreSQL large objects via HTTP

I'm building an app to serve data from a PostgreSQL database via a REST API (with Spring MVC) and a PWA (with Vaadin).
The PostgreSQL database stores files up to 2GB using Large Objects (I'm not in control of that); the JDBC driver provides streamed access to their binary content via Blob#getBinaryStream, so data does not need to be read entirely into memory.
The only requirement is that the stream from the blob must be consumed in the same transaction, otherwise the JDBC driver will throw.
The problem is that even if I retrieve the stream in a transactional repository method, both Spring MVC and Vaadin's StreamResource will consume it outside the transaction, so the JDBC driver throws.
For example, given
public interface SomeRepository extends JpaRepository<SomeEntity, Long> {

    @Transactional(readOnly = true)
    default InputStream getStream() {
        try {
            return findById(1L).orElseThrow().getBlob().getBinaryStream();
        } catch (SQLException e) {
            throw new IllegalStateException(e);
        }
    }
}
this Spring MVC method will fail
@RestController
public class SomeController {

    private final SomeRepository repository;

    public SomeController(SomeRepository repository) {
        this.repository = repository;
    }

    @GetMapping
    public ResponseEntity<InputStreamResource> getStream() {
        var stream = repository.getStream();
        var resource = new InputStreamResource(stream);
        return new ResponseEntity<>(resource, HttpStatus.OK);
    }
}
and the same for this Vaadin StreamResource
public class SomeView extends VerticalLayout {
public SomeView(SomeRepository repository) {
var resource = new StreamResource("x", repository::getStream);
var anchor = new Anchor(resource, "Download");
add(anchor);
}
}
with the same exception:
org.postgresql.util.PSQLException: ERROR: invalid large-object descriptor: 0
which means the transaction is already closed when the stream is read.
I see two possible solutions to this:
1. keep the transaction open during the download;
2. write the stream to disk during the transaction and then serve the file from disk during the download.
Solution 1 is an anti-pattern and a security risk: the transaction duration is left in the hands of the client, and either a slow reader or an attacker could block data access.
Solution 2 creates a huge delay between the client request and the server response, since the stream is first read from the database and written to disk in full.
One idea might be to start reading from the disk while the file is still being written with data from the database, so that the transfer starts immediately but the transaction duration is decoupled from the client download; I don't know what side effects this might have, though.
How can I achieve the goal of serving PostgreSQL large objects in a secure and performant way?
We solved this problem in Spring Content by using threads + piped streams and a special InputStream wrapper, ClosingInputStream, that delays closing the connection/transaction until the consumer closes the input stream. Maybe something like this would help you too?
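The rough idea looks something like this (a simplified sketch, not Spring Content's actual code; executor, transactionTemplate and copyBlobTo are assumed or hypothetical):

// A worker thread copies the blob into the pipe inside a transaction,
// while the HTTP thread streams from the connected PipedInputStream.
PipedInputStream in = new PipedInputStream();
PipedOutputStream out = new PipedOutputStream(in);

executor.submit(() -> transactionTemplate.executeWithoutResult(status -> {
    try (OutputStream os = out) {
        copyBlobTo(os); // hypothetical: reads Blob#getBinaryStream and writes it to the pipe
    } catch (IOException e) {
        throw new UncheckedIOException(e);
    }
}));

return new ResponseEntity<>(new InputStreamResource(in), HttpStatus.OK);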
Just as an FYI, we have found Postgres's OIDs and the Large Object API to be extremely slow when compared with similar databases.
Perhaps you might also be able to just retrofit Spring Content JPA to your solution and use its HTTP endpoints (and the solution I just outlined) instead of creating your own? Something like this:
pom.xml
<!-- Java API -->
<dependency>
<groupId>com.github.paulcwarren</groupId>
<artifactId>spring-content-jpa-boot-starter</artifactId>
<version>0.4.0</version>
</dependency>
<!-- REST API -->
<dependency>
<groupId>com.github.paulcwarren</groupId>
<artifactId>spring-content-rest-boot-starter</artifactId>
<version>0.4.0</version>
</dependency>
SomeEntity.java
@Entity
public class SomeEntity {

    @Id
    @GeneratedValue
    private long id;

    @ContentId
    private String contentId;

    @ContentLength
    private long contentLength = 0L;

    @MimeType
    private String mimeType = "text/plain";

    ...
}
SomeEntityContentStore.java
@StoreRestResource(path = "someEntityContent")
public interface SomeEntityContentStore extends ContentStore<SomeEntity, String> {
}
That is all you need to get REST endpoints that will allow you to associate content with your entity SomeEntity. There is a working example in our examples repo here.
One option is to decouple reading from the database and writing the response to the client, as you mentioned. The downside is the complexity of the solution: you would need to synchronize between the reader and the writer.
Another option is to first get the large object id in the main transaction and then read the data in chunks, each chunk in a separate transaction.
byte[] getBlobChunk(Connection connection, long lobId, long start, long chunkSize) throws SQLException, IOException {
    // PgBlob is the PostgreSQL driver's Blob implementation (org.postgresql.jdbc.PgBlob);
    // it needs the driver-level connection (org.postgresql.core.BaseConnection), so unwrap the pooled connection first
    Blob blob = new PgBlob(connection.unwrap(BaseConnection.class), lobId);
    InputStream is = blob.getBinaryStream(start, chunkSize); // positions are 1-based
    return IOUtils.toByteArray(is);
}
This solution is much simpler, but it has the overhead of establishing a new connection for each chunk, which shouldn't be a big deal if you use connection pooling.
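For completeness, a possible way to drive that helper (a sketch; it assumes the large object id and total length were looked up in the main transaction, and that a pooled DataSource is available):

// Stream the large object chunk by chunk, each chunk read in its own short transaction
void writeLargeObject(DataSource dataSource, long lobId, long totalLength,
                      OutputStream out, long chunkSize) throws SQLException, IOException {
    for (long pos = 1; pos <= totalLength; pos += chunkSize) {   // Blob positions are 1-based
        long len = Math.min(chunkSize, totalLength - pos + 1);
        try (Connection connection = dataSource.getConnection()) {
            connection.setAutoCommit(false);                     // large objects require a transaction
            out.write(getBlobChunk(connection, lobId, pos, len));
            connection.commit();
        }
    }
}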

filter dynamodb from list in springboot

I have a Spring Boot application using an AWS DynamoDB table whose items look like this:
@DynamoDBTable(tableName = MemberDbo.TABLENAME)
public class MemberDbo {

    public static final String TABLENAME = "Member";

    @NonNull
    @DynamoDBHashKey
    @DynamoDBAutoGeneratedKey
    protected String id;

    // some more parameters

    @DynamoDBAttribute
    private List<String> membergroupIds;
}
I would like to find all members belonging to one specific groupId. In the best case I would like to use a CrudRepository like this:
@EnableScan
public interface MemberRepository extends CrudRepository<MemberDbo, String> {
    List<MemberDbo> findByMembergroupIdsContaining(String membergroupIds); // actually I want to filter by ONE groupId
}
Unfortunately the query above is not working (java.lang.String cannot be cast to java.util.List)
Any suggestions on how to build a correct query with CrudRepository?
Any suggestions on how to create a query with the Amazon SDK or some other Spring Boot-compliant method?
Alternatively, can I create a DynamoDB index somehow and filter by that index?
Or do I need to create and maintain a new table programmatically containing the mapping between membergroupIds and members (which results in a lot of overhead in code and costs)?
A solution based on CrudRepository is preferred, since I may use paging in future versions and CrudRepository easily supports paging.
If I have understood correctly, this looks very easy. You are using DynamoDBMapper for model persistence.
You have a member object which contains a list of membergroupIds, and all you want to do is retrieve this from the database. If so, using DynamoDBMapper you would do something like this:
AmazonDynamoDB dynamoDBClient = new AmazonDynamoDBClient();
DynamoDBMapper mapper = new DynamoDBMapper(dynamoDBClient);
MemberDbo member = mapper.load(MemberDbo.class, hashKey, rangeKey);
member.getMembergroupIds();
Where you need to replace hashKey and rangeKey. You can omit rangeKey if you don't have one.
DynamoDBMapper also supports paging out of the box.
DynamoDBMapper is an excellent model persistence tool: it has strong features, it's simple to use and, because it's written by AWS, it has seamless integration with DynamoDB. Its creators have also clearly been influenced by Spring. In short, I would use DynamoDBMapper for model persistence and Spring Boot for model-controller stuff.
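If the goal is instead to find every member whose membergroupIds list contains a given group id, one option with DynamoDBMapper is a scan with a contains() filter expression; a rough sketch (note that a scan reads the whole table, so watch the cost):

DynamoDBScanExpression scanExpression = new DynamoDBScanExpression()
        .withFilterExpression("contains(membergroupIds, :gid)")
        .withExpressionAttributeValues(
                Collections.singletonMap(":gid", new AttributeValue().withS(groupId)));

List<MemberDbo> members = mapper.scan(MemberDbo.class, scanExpression); // lazily paginated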

Spring Data + MongoDB GridFS access via Repository possible?

I recently discovered GridFS, which I'd like to use for file storage with metadata. I just wondered if it's possible to use a MongoRepository to query GridFS? If yes, can someone give me an example?
I'd also take a solution using Hibernate, if there is one.
The reason is: my metadata contains a lot of different fields, and it would be much easier to query a repository than to write some new Query(Criteria.where(...)) for each scenario. And hopefully I could also simply take a Java object and provide it via a REST API without the file itself.
EDIT: I'm using
Spring 4 Beta
Spring Data Mongo 1.3.1
Hibernate 4.3 Beta
There is a way to solve this:
@Document(collection = "fs.files")
public class MyGridFsFile {

    @Id
    private ObjectId id;
    public ObjectId getId() { return id; }

    private String filename;
    public String getFilename() { return filename; }

    private long length;
    public long getLength() { return length; }

    ...
}
You can write a normal Spring Data MongoDB repository for that. Now you can at least query the fs.files collection using a Spring Data repo. But: you cannot access the file contents this way.
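For example, something like this (a sketch; the repository name and derived query method are just illustrations):

public interface MyGridFsFileRepository extends MongoRepository<MyGridFsFile, ObjectId> {
    List<MyGridFsFile> findByFilename(String filename);
}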
For getting the file contents itself, you've got (at least) two options:
1. Use file = gridOperations.findOne(Query.query(Criteria.where("_id").is(id))); InputStream is = file.getInputStream();
2. Have a look at the source code of GridFSDBFile. There you can see how it internally queries the fs.chunks collection and fills the InputStream.
(Option 2 is really low level; Option 1 is a lot easier and that code is maintained by the MongoDB Java driver devs, so Option 1 would be my choice.)
Updating GridFS entries:
GridFS is not designed to update file content!
Though updating only the metadata field can be useful; the rest of the fields are kinda static.
You should be able to simply use your custom MyGridFsFileRepo's update method. I suggest creating a setter only for the metadata field.
Different metadata for different files:
I solved this using an abstract MyGridFsFile class with generic metadata, i.e.:
@Document(collection = "fs.files")
public abstract class AbstractMyGridFsFile<M extends AbstractMetadata> {

    ...

    private M metadata;

    public M getMetadata() { return metadata; }
    void setMetadata(M metadata) { this.metadata = metadata; }
}
And of course each impl has its own AbstractMetadata impl associated. What have I done? AbstractMetadata always has a field called type. This way I can find the right AbstractMyGridFsFile impl. I also have a generic abstract repository, though.
Btw: in the meantime I switched from using a Spring repo to plain access via MongoTemplate, like:
protected List<A> findAll(Collection<ObjectId> ids) {
List<A> files = mongoTemplate.find(Query.query(Criteria
.where("_id").in(ids)
.and("metadata.type").is(type) // this is hardcoded for each repo impl
), typeClass); // this is the corresponding impl of AbstractMyGridFsFile
return files;
}
Hope this helps. I can write more, if you need more information about this. Just tell me.
You can create a GridFS object with the database from your MongoTemplate, and then interact with that:
MongoTemplate mongoTemplate = new MongoTemplate(new Mongo(), "GetTheTemplateFromSomewhere");
GridFS gridFS = new GridFS(mongoTemplate.getDb());
The GridFS object lets you create, delete and find etc.
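For example, a small sketch with that legacy API (the file name and content type are placeholders):

// Store a file and read it back via the legacy GridFS API
GridFSInputFile stored = gridFS.createFile(new FileInputStream("/tmp/report.pdf"));
stored.setFilename("report.pdf");
stored.setContentType("application/pdf");
stored.save();

GridFSDBFile found = gridFS.findOne("report.pdf");
InputStream content = found.getInputStream();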
