Read Full Query Parameter String in Spring Boot

Is there a way for me to read the entire query string in a GET API? Since there can be a variable number of parameters, I am already looking at using this:
public void createUser(@RequestParam(required = false) Map<String, String> qparams) {
}
But I want to read the entire query string as well.
The reason is that one of the parameters here is an HMAC which is calculated over the entire string, and we are using that HMAC for cross-verification.
We have deep integration with third-party software. The issue here is that the third-party software can make a change to their API at any point in time.

Here's how you can do it:
@GetMapping("/test1")
void endpoint1(HttpServletRequest req) {
    String qs = req.getQueryString(); // returns the entire raw query string
    String[] params = qs.split("&"); // split to get the individual parameters
}
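Since the stated goal is HMAC verification over the raw query string, here is a minimal sketch of how that could look, assuming the signature arrives as a trailing hmac parameter and is computed with HMAC-SHA256 over everything before it (the parameter name, the algorithm, and the shared secret are assumptions, not part of the question):
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

@GetMapping("/verify")
void verify(HttpServletRequest req) throws Exception {
    String qs = req.getQueryString();            // e.g. "a=1&b=2&hmac=abc123"
    int idx = qs.lastIndexOf("&hmac=");          // hypothetical parameter name
    if (idx < 0) {
        throw new IllegalArgumentException("missing hmac parameter");
    }
    String signedPart = qs.substring(0, idx);    // the portion the HMAC covers
    String received = qs.substring(idx + "&hmac=".length());

    Mac mac = Mac.getInstance("HmacSHA256");     // assumed algorithm
    mac.init(new SecretKeySpec("shared-secret".getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
    byte[] digest = mac.doFinal(signedPart.getBytes(StandardCharsets.UTF_8));

    // hex-encode the digest for comparison with the received value
    StringBuilder hex = new StringBuilder();
    for (byte b : digest) {
        hex.append(String.format("%02x", b));
    }
    if (!hex.toString().equals(received)) {
        throw new IllegalArgumentException("HMAC mismatch");
    }
}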

How can I enable automatic slicing on Elasticsearch operations like UpdateByQuery or Reindex using the Nest client?

I'm using the Nest client to programmatically execute requests against an Elasticsearch index. I need to use the UpdateByQuery API to update existing data in my index. To improve performance on large data sets, the recommended approach is to use slicing; in my case, I'd like to use the automatic slicing feature documented here.
I've tested this out in the Kibana dev console and it works beautifully, but I'm struggling with how to set this property in code through the Nest client interface. Here's a code snippet:
var request = new Nest.UpdateByQueryRequest(indexModel.Name);
request.Conflicts = Elasticsearch.Net.Conflicts.Proceed;
request.Query = filterQuery;
// TODO Need to set slices to auto but the current client doesn't allow it and the server
// rejects a value of 0
request.Slices = 0;
var elasticResult = await _elasticClient.UpdateByQueryAsync(request, cancellationToken);
The comments on that property indicate that it can be set to "auto", but it expects a long, so that's not possible.
// Summary:
// The number of slices this task should be divided into. Defaults to 1, meaning
// the task isn't sliced into subtasks. Can be set to `auto`.
public long? Slices { get; set; }
Setting it to 0 just throws an error on the server. Has anyone else tried doing this? Is there some other way to configure this behavior? Other APIs, such as ReindexOnServerAsync, seem to have the same problem.
This was a bug in the spec and an unfortunate consequence of generating this part of the client from the spec.
The spec has been fixed, and the change will be reflected in a future version of the client. For now, though, it can be set with the following:
var request = new Nest.UpdateByQueryRequest(indexModel.Name);
request.Conflicts = Elasticsearch.Net.Conflicts.Proceed;
request.Query = filterQuery;
((IRequest)request).RequestParameters.SetQueryString("slices", "auto");
var elasticResult = await _elasticClient.UpdateByQueryAsync(request, cancellationToken);

Tomcat Performance with Spring Boot API for File Upload

I have a Spring Boot API, and one of the endpoints allows users to upload videos. My controller takes the file as a MultipartFile and stores it in a temp folder accessible to Tomcat. Once it is stored on disk, I push the video to an S3 bucket.
This seems less than optimal to me: if I wanted to have 100 or 1,000 users uploading at once, it seems really non-performant to write the files to disk first.
As a little background, I'm storing it on disk with the intention that if there is an issue pushing to S3 I can retry.
The code below might show what I'm doing better than the above:
public Video addVideo(#RequestParam("title") String title,
#RequestParam("Description") String Description,
#RequestParam(value = "file", required = true) MultipartFile file) {
this.amazonS3ClientService.uploadFileToS3Bucket(file, title, description));
}
Method for storing the video file:
String fileNameWithExtension = awsS3FileName + "." + FilenameUtils.getExtension(multipartFile.getOriginalFilename());

// creating the file on the server (temporarily)
File file = new File(tomcatTempDir + fileNameWithExtension);
FileOutputStream fos = new FileOutputStream(file);
fos.write(multipartFile.getBytes());
fos.close();

PutObjectRequest putObjectRequest = new PutObjectRequest(this.awsS3Bucket,
        awsS3BucketFolder + uniqueId + "/" + fileNameWithExtension, file);
if (enablePublicReadAccess) {
    putObjectRequest.withCannedAcl(CannedAccessControlList.PublicRead);
}

// Upload the file as a new object with ContentType and title specified
amazonS3.putObject(putObjectRequest);

// removing the file created on the server
file.delete();
So my question is: is there a better way in Tomcat to:
A) Take in a file via a controller
B) Push it to S3
There is no other way to do it with multipart. The problem with multipart is that, to properly segment parts from the request, parts sometimes need to be skipped or be re-readable. That is impossible in memory without risking memory exhaustion, which is why Commons FileUpload caches them on disk after a certain threshold is reached.
Multipart requests are the worst way to do this. I highly recommend using either PUT or POST with content type application/octet-stream. You can take the bare request input stream and pass it to HttpClient to stream to your backend server. I did this five years ago and it works for gigabytes. I posted the solution on the Apache HttpClient mailing list.
There is one possibility for how this could work, under specific conditions:
All parts are in the correct physical order you want to read them in
Your write to the backend is fast enough to sustain the read from the front
Consume the root part, then go over to the next physical one, processing the request body lazily. JAX-WS RI (Metro) has very nice handling of multipart requests for XOP/MTOM; learn from that, because you won't be able to make it any better.
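As a rough sketch of the octet-stream approach recommended above (the endpoint path, the controller shape, and the direct S3 call are illustrative assumptions, not code from the answer):
import java.io.IOException;
import java.io.InputStream;
import java.util.UUID;

@PostMapping(value = "/videos", consumes = "application/octet-stream")
public ResponseEntity<Void> uploadVideo(HttpServletRequest request) throws IOException {
    ObjectMetadata metadata = new ObjectMetadata();
    // A known Content-Length lets the AWS SDK stream the body without buffering it
    metadata.setContentLength(request.getContentLengthLong());
    try (InputStream body = request.getInputStream()) {
        amazonS3.putObject(new PutObjectRequest(
                "my-bucket", "videos/" + UUID.randomUUID(), body, metadata));
    }
    return ResponseEntity.ok().build();
}
This avoids both the multipart parsing overhead and the temporary file on disk; the retry-on-failure concern from the question would then need to be handled upstream, e.g. by having the client re-send the request.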
Perhaps you can try to stream the input directly from your MultipartFile to S3.
Consider the following uploadFileToS3Bucket method:
public PutObjectResult uploadFileToS3Bucket(InputStream input, long size, String title, String description) {
    // Indicate the length of the content so the AWS SDK does not need to compute it
    // See: https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/model/PutObjectRequest.html#PutObjectRequest-java.lang.String-java.lang.String-java.io.InputStream-com.amazonaws.services.s3.model.ObjectMetadata-
    ObjectMetadata objectMetadata = new ObjectMetadata();
    objectMetadata.setContentLength(size); // relies on the size reported by Spring; input.available() may also work
    // compute the object name as appropriate
    String key = "...";
    PutObjectRequest putObjectRequest = new PutObjectRequest(
            this.awsS3Bucket, key, input, objectMetadata
    );
    // The rest of your code
    if (enablePublicReadAccess) {
        putObjectRequest.withCannedAcl(CannedAccessControlList.PublicRead);
    }
    // Upload the file as a new object with ContentType and title specified
    return amazonS3.putObject(putObjectRequest);
}
Of course, you need to provide the service with the input stream obtained from the client request associated with the MultipartFile object:
public Video addVideo(
        @RequestParam("title") String title,
        @RequestParam("description") String description,
        @RequestParam(value = "file", required = true) MultipartFile file) throws IOException {
    try (InputStream input = file.getInputStream()) {
        this.amazonS3ClientService.uploadFileToS3Bucket(input, file.getSize(), title, description);
    }
}
You can probably also play with the getBytes method of MultipartFile and create a ByteArrayInputStream to perform the operation.
In addVideo:
byte[] bytes = file.getBytes();
In uploadFileToS3Bucket:
ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setContentLength(bytes.length);
PutObjectRequest putObjectRequest = new PutObjectRequest(
        this.awsS3Bucket, key, new ByteArrayInputStream(bytes), objectMetadata
);
I would prefer the first solution, but try to determine which option offers you the best performance.

OpenDaylight: Storing a string in MDSAL

I have a YANG model (known to MDSAL) which I am using in an OpenDaylight application. In my application, I am presented with a JSON-formatted String which I want to store in the MDSAL database. I could use the builder of the object that I wish to store and set its fields from the JSON-formatted String one by one, but this is laborious and error prone.
Alternatively, I could post from within the application to the Northbound API, which will eventually write to the MDSAL datastore.
Is there a simpler way to do this?
Thanks,
Assuming that your incoming JSON matches the structure of your YANG model exactly (does it?), I believe what you are really looking for is to transform that JSON into a "binding-independent" (not setters of the generated Java class) internal model - NormalizedNode & Co. Somewhere in the controller or mdsal project there is a "codec" class that can do this.
You can either search for such code and its usages (I find looking at tests is always useful) in the ODL controller and mdsal projects' source code, or in other ODL projects which do similar things - I'm thinking specifically of browsing around the jsonrpc and daexim projects' sources; this in particular may inspire you: https://github.com/opendaylight/daexim/blob/stable/nitrogen/impl/src/main/java/org/opendaylight/daexim/impl/ImportTask.java
Best of luck.
Based on the information above, I constructed the following (which I am posting here to help others). I still do not know how to get rid of the deprecated reference to SchemaService (perhaps somebody can help).
private void importFromNormalizedNode(final DOMDataReadWriteTransaction rwTrx, final LogicalDatastoreType type,
        final NormalizedNode<?, ?> data) throws TransactionCommitFailedException, ReadFailedException {
    if (data instanceof NormalizedNodeContainer) {
        @SuppressWarnings("unchecked")
        YangInstanceIdentifier yid = YangInstanceIdentifier.create(data.getIdentifier());
        rwTrx.put(type, yid, data);
    } else {
        throw new IllegalStateException("Root node is not an instance of NormalizedNodeContainer");
    }
}

private void importDatastore(String jsonData, QName qname) throws TransactionCommitFailedException, IOException,
        ReadFailedException, SchemaSourceException, YangSyntaxErrorException {
    LOG.info("jsonData = " + jsonData);
    byte[] bytes = jsonData.getBytes();
    InputStream is = new ByteArrayInputStream(bytes);
    final NormalizedNodeContainerBuilder<?, ?, ?, ?> builder = ImmutableContainerNodeBuilder.create()
            .withNodeIdentifier(new YangInstanceIdentifier.NodeIdentifier(qname));
    try (NormalizedNodeStreamWriter writer = ImmutableNormalizedNodeStreamWriter.from(builder)) {
        SchemaPath schemaPath = SchemaPath.create(true, qname);
        LOG.info("SchemaPath " + schemaPath);
        SchemaNode parentNode = SchemaContextUtil.findNodeInSchemaContext(schemaService.getGlobalContext(),
                schemaPath.getPathFromRoot());
        LOG.info("parentNode " + parentNode);
        try (JsonParserStream jsonParser = JsonParserStream.create(writer, schemaService.getGlobalContext(),
                parentNode)) {
            try (JsonReader reader = new JsonReader(new InputStreamReader(is))) {
                reader.setLenient(true);
                jsonParser.parse(reader);
                DOMDataReadWriteTransaction rwTrx = domDataBroker.newReadWriteTransaction();
                importFromNormalizedNode(rwTrx, LogicalDatastoreType.CONFIGURATION, builder.build());
            }
        }
    }
}
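For completeness, a hypothetical invocation might look like this (the namespace, revision, and container name are made-up placeholders, not from the question):
// QName identifying the root container of the YANG model the JSON conforms to
QName rootQName = QName.create("urn:example:my-model", "2017-01-01", "my-container");
importDatastore(jsonString, rootQName);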

NFC External record is returning in wrong format?

I've successfully written an external record to an NFC tag. When I use a third-party tag reader to evaluate the external record that was written, I see the appropriate value, which is a single, positive integer.
However, when I run my code (below) to see what the value of the payload (external record) on the tag is (using a Toast), in order to incorporate that value into an "if" statement, I get different values. So far, I've seen the following:
[B@41fb4278 or [B@41fb1190
At this point, the value of the external record is just "2". How can I simply return/write "2"?
protected void onNewIntent(Intent intent) {
    super.onNewIntent(intent);
    if (intent.hasExtra(NfcAdapter.EXTRA_TAG)) {
        Tag tag = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG);
        byte[] payload = "2".getBytes(); // this is where the ID (payload) for the tag is assigned
        NdefRecord[] ndefRecords = new NdefRecord[2];
        ndefRecords[0] = NdefRecord.createExternal("com.example.bmt_admin", "externaltype", payload);
        ndefRecords[1] = NdefRecord.createApplicationRecord("com.example.bmt_01");
        NdefMessage ndefMessage = new NdefMessage(ndefRecords);
        writeNdefMessage(tag, ndefMessage);
        Toast.makeText(this, "NFC Scan: " + payload, Toast.LENGTH_SHORT).show();
    }
}
Thanks for any help!!
payload is declared as a byte[]. When you use payload in your Toast statement, you are using a reference to that array, so what you see is the array's default toString() output (its type and hash code), not its contents. When you want a string representation of a byte[], you can use, for example:
String s = new String(payload);
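Applied to the Toast in the question, the fix could look like this (a minimal sketch; since the payload was written from a String literal, decoding with an explicit charset such as UTF-8 is a reasonable assumption):
import java.nio.charset.StandardCharsets;

// Decode the payload bytes back into text before displaying them
String payloadText = new String(payload, StandardCharsets.UTF_8);
Toast.makeText(this, "NFC Scan: " + payloadText, Toast.LENGTH_SHORT).show();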

How to enable document routing in Transport Client or Node Client

I want to use the routing field in Elasticsearch, but I am not able to find any Java API to enable it.
I have gone through link 1 and link 2, but neither seems to address this.
My code:
My code:
public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
    this.collector = collector;
    Settings settings = ImmutableSettings.settingsBuilder()
            .put("cluster.name", elasticSearchCluster).build();
    this.client = new TransportClient(settings)
            .addTransportAddress(new InetSocketTransportAddress(esHost, esPort));
}
public void execute(Tuple tuple) {
    try {
        String document = tuple.toString();
        byte[] byteBuffer = document.getBytes();
        IndexResponse response = this.client.prepareIndex(indexName, type, id)
                .setSource(byteBuffer).execute().actionGet();
    } catch (Exception e) {
        e.printStackTrace();
    }
    collector.ack(tuple);
}
Note that I am using TransportClient here, as there does not seem to be a good way of using NodeClient with Storm, but the question stands irrespective of that. If there is a way of using NodeClient with routing, please do suggest it; otherwise, TransportClient routing would also be of great help.
I believe you are confusing two different "routing" concepts in ES. One is document routing and the other is index allocation routing (or "filtering").
The _routing field allows you to specify the value to be used when indexing each document to determine which shard the document will be indexed on. The two links you provided refer to an index-level (as opposed to document-level) setting that determines how the shards of an index are allocated to the various nodes in your cluster.
It sounds like you are trying to do document routing. This can be accomplished in the Java API using the IndexRequestBuilder class and its setRouting(String) method. Have a look at the source code on GitHub.
There are also some good code examples here which specify the routing field during indexing.
Almost! You can just replace one line of code.
From:
IndexResponse response = this.client.prepareIndex(indexName, type, id)
        .setSource(byteBuffer).execute().actionGet();
To:
String routingValue = "ANY_ROUTING_VALUE_YOU_WANT";
IndexResponse response = this.client.prepareIndex(indexName, type, id)
        .setSource(byteBuffer).setRouting(routingValue).execute().actionGet();
Then your documents will be stored in the specific shard corresponding to the routing value you provide. At search time, you can provide the same routing value so that your search request hits only that one shard.
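On the search side, a minimal sketch with the same legacy TransportClient API could look like this (the match-all query is just a placeholder):
// Query with the same routing value so only the matching shard is searched
SearchResponse searchResponse = this.client.prepareSearch(indexName)
        .setTypes(type)
        .setRouting(routingValue)
        .setQuery(QueryBuilders.matchAllQuery())
        .execute().actionGet();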
