I am using this line of code to read a previously saved object from the cache:
Task<string> responselist = cache["responselist"] as Task<string>;
And this is the line used to save the object:
cache.Set("responselist", response.Content.ReadAsStringAsync(), policy);
The reason I declare the responselist variable as Task<string> is that my method returns a Task object.
I am fairly new to Web API. I just want to know whether it makes sense to do this, or whether there's a better alternative.
PS: it works 100% fine.
You should use the async/await keywords and store primitive values in the cache.
public async Task Action(...)
{
    string content = await response.Content.ReadAsStringAsync();
    cache.Set("responselist", content, policy);
    /* ... */
    // read it back under a different name to avoid redeclaring content
    string cachedContent = cache["responselist"] as string;
}
Storing basic types or POCO classes is more natural than storing a class like Task, and basic types also take up less space in the cache.
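Putting that together, here is a minimal read-through sketch (the method name is made up, and it assumes the same cache and policy objects as in the question, plus an HttpResponseMessage named response):

public async Task<string> GetResponseListAsync(HttpResponseMessage response)
{
    // Serve from cache when possible; otherwise read the content once,
    // cache the plain string, and return it.
    if (cache["responselist"] is string cached)
        return cached;

    string content = await response.Content.ReadAsStringAsync();
    cache.Set("responselist", content, policy);
    return content;
}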
What's the standard means of processing input to an AWS Lambda handler function, when the format of the incoming JSON varies depending on the type of trigger?
e.g. I have a Lambda function that gets called when an object is created in an S3 bucket, or when an hourly scheduled event fires. Obviously, the JSON passed to the handler is formatted differently.
Is it acceptable to overload Lambda handler functions, with the input type defined as S3Event for one signature and ScheduledEvent for the other? If not, are developers simply calling JsonConvert.DeserializeObject in try blocks? Or is the standard practice to establish multiple Lambda functions, one for each input type (yuck!)?
You should use one function per event.
Having multiple triggers for one Lambda will just make things way harder, as you'll end up with a bunch of if/else, switch statements or even Factory methods if you want to apply design patterns.
Think of Lambda functions as small, maintainable pieces of code that do one thing and do it well. The moment you start having multiple triggers, you end up with a "Lambda Monolith" that has far too many responsibilities. On top of that, you strongly couple your Lambda functions to your events, meaning that every time a new trigger is added, your Lambda code has to change. That just doesn't scale past two or three triggers.
Another drawback is that this design binds you to one language. For some use cases Java may be the best option, but for others it may be Node.js, Python, Go...
Essentially, your functions should be small enough to be easily maintainable and even rewritten if necessary. There's absolutely nothing wrong with creating one function per event, although you apparently disapprove of it. Think of every Lambda as a separate microservice that scales out independently and has its own CI/CD pipeline and its own suite of tests.
Another thing to consider is whether you want to limit your Lambda's concurrent executions depending on the trigger type. That would be unachievable with the "One-Lambda-Does-It-All" model.
Stick with one Lambda per trigger and you'll sleep better at night.
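As a sketch of what that separation looks like (class and handler names are made up; each class would be deployed as its own Lambda with a single trigger):

// Deployed as its own function, triggered only by S3 object-created events.
public class S3ObjectCreatedFunction
{
    public async Task HandleAsync(S3Event s3Event, ILambdaContext context)
    {
        context.Logger.LogLine($"S3 records received: {s3Event.Records.Count}");
        await Task.CompletedTask; // real processing goes here
    }
}

// Deployed separately, triggered only by the hourly schedule.
public class HourlyScheduleFunction
{
    public async Task HandleAsync(ScheduledEvent scheduledEvent, ILambdaContext context)
    {
        context.Logger.LogLine($"Scheduled run at {scheduledEvent.Time}");
        await Task.CompletedTask; // real processing goes here
    }
}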
This is actually possible by doing the following:
1. Have the Lambda signature take a Stream rather than the Amazon event type, so we can get the raw JSON message.
2. Read the JSON contents of the stream as a string.
3. Deserialize the string to a custom type in order to identify the event source.
4. Use the event source information to deserialize the JSON a second time, to the appropriate type for that event source.
For example:
public async Task FunctionHandler(Stream stream, ILambdaContext context)
{
    using var streamReader = new StreamReader(stream);
    var json = await streamReader.ReadToEndAsync();
    var serializationOptions = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
    // First pass: deserialize just far enough to identify the event source.
    var awsEvent = JsonSerializer.Deserialize<AwsEvent>(json, serializationOptions);
    var eventSource = awsEvent?.Records.Select(e => e.EventSource).SingleOrDefault();
    // Second pass: deserialize to the concrete event type and dispatch.
    await (eventSource switch
    {
        "aws:s3" => HandleAsync(JsonSerializer.Deserialize<S3Event>(json, serializationOptions), context),
        "aws:sqs" => HandleAsync(JsonSerializer.Deserialize<SQSEvent>(json, serializationOptions), context),
        _ => throw new ArgumentOutOfRangeException(nameof(stream), $"Unsupported event source '{eventSource}'."),
    });
}
public async Task HandleAsync(S3Event @event, ILambdaContext context) => ...
public async Task HandleAsync(SQSEvent @event, ILambdaContext context) => ...
public sealed class AwsEvent
{
public List<Record> Records { get; set; }
public sealed class Record
{
public string EventSource { get; set; }
}
}
I have a MediaTypeFormatter that converts an internal representation of an image to png/jpeg/etc. if someone asks for it. However, my WriteToStreamAsync never gets called unless I add image/png or similar to the Accept headers.
First, here is my Web API method, with some key bits removed for brevity:
public ImageFormatter.BinaryImage GetImage(int cId, int iId)
{
....
using (var input = iFIO.OpenRead())
{
input.Read(b.data, 0, (int)iFIO.Length);
}
// With this next line my mediatypeformatter is correctly called.
Request.Headers.Accept.Add(new System.Net.Http.Headers.MediaTypeWithQualityHeaderValue("image/png"));
return b;
}
And here is the write portion of my MediaTypeFormatter (there is also a read portion, and that works great, actually).
namespace PivotWebsite.MediaFormatters
{
public class ImageFormatter : MediaTypeFormatter
{
public class BinaryImage
{
public byte[] data;
public string metaData;
}
public ImageFormatter()
{
SupportedMediaTypes.Add(new MediaTypeHeaderValue("image/jpg"));
SupportedMediaTypes.Add(new MediaTypeHeaderValue("image/jpeg"));
SupportedMediaTypes.Add(new MediaTypeHeaderValue("image/png"));
}
public override bool CanWriteType(Type type)
{
return true;
}
public override async Task WriteToStreamAsync(Type type, object value, Stream writeStream, HttpContent content, TransportContext transportContext)
{
var b = value as BinaryImage;
if (b == null)
throw new InvalidOperationException("Can only work with BinaryImage types!");
await writeStream.WriteAsync(b.data, 0, b.data.Length);
}
}
}
What I expected to be able to do was, in WriteToStreamAsync, to alter the outgoing headers to include Content-Type as "image/png" (or whatever, depending on the data type).
However, when I call this from a web browser with a URL like "http://my.testdomain.net:57441/api/Images?cID=1&iID=1", the WriteToStreamAsync never gets called (accepted headers are listed as {text/html, application/xhtml+xml, */*}). If I add the line above that adds the proper image type, then everything is called as I would expect.
What am I missing here? The Accept header of "*/*" should have triggered my media formatter, right? Or am I missing something basic about the plumbing in Web API?
Do you want the image formatter to always get used if the Accept header is "*/*"? If that's the case, then you should insert your formatter first in the Formatters collection, like this:
config.Formatters.Insert(0, new ImageFormatter());
What happens when there isn't an exact Accept header match like in your case is that the first formatter that can write the type gets selected to serialize the object. So if you register your formatter first, it would get used.
This could have unintended side-effects, though, since it would affect all your controllers. I would suggest changing the CanWriteType implementation to return true only for BinaryImage. That way the formatter only gets used when that's your return type.
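For example, the narrowed CanWriteType could look like this:

public override bool CanWriteType(Type type)
{
    // Only claim BinaryImage so other controllers keep their default formatters.
    return typeof(BinaryImage).IsAssignableFrom(type);
}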
Another thing you could do is select the formatter directly in your action by returning an HttpResponseMessage:
return Request.CreateResponse(HttpStatusCode.OK, image, new ImageFormatter());
That's basically saying "this action should always use the image formatter, regardless of content-type, accept headers etc". That might be reasonable in your case if you're always just returning an image and you need it serialized with your formatter.
I'm writing a CsvFormatter and I want to be able to call the API from the browser to trigger a file download. Since I didn't have control over the Accept header, I wanted to use an extension to trigger my CSV formatter, but the XML formatter kept getting the request. I found that by adding a "text/html" media type, I could handle the CSV extension. Hopefully this doesn't cause other problems down the line :).
public CsvFormatter()
{
var header = new MediaTypeHeaderValue("text/csv");
SupportedMediaTypes.Add(header);
MediaTypeMappings.Add(new UriPathExtensionMapping("csv", header));
// From Chrome: Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
// Allow the formatter to run from a standard browser request.
header = new MediaTypeHeaderValue("text/html");
SupportedMediaTypes.Add(header);
MediaTypeMappings.Add(new UriPathExtensionMapping("csv", header));
}
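Note that UriPathExtensionMapping matches on an {ext} route value, so the route needs an extension segment for the mapping to kick in; a sketch (the route name and template here are assumptions, not from the original post):

// Allows requests like /api/reports.csv to reach the CSV formatter.
config.Routes.MapHttpRoute(
    name: "ExtensionApi",
    routeTemplate: "api/{controller}.{ext}");

config.Formatters.Add(new CsvFormatter());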
I am trying to analyze what problems I might be having with unsafe threading in my code.
In my MVC3 web application I do the following:
// Caching code
public static class CacheExtensions
{
    // shared lock object used by GetOrStore below
    private static readonly object sync = new object();

    public static T GetOrStore<T>(this Cache cache, string key, Func<T> generator)
    {
        var result = cache[key];
        if (result == null)
        {
            result = generator();
            lock (sync)
            {
                cache[key] = result;
            }
        }
        return (T)result;
    }
}
Using the caching like this:
// Using the cached stuff
public class SectionViewData
{
public IEnumerable<Product> Products {get;set;}
public IEnumerable<SomethingElse> SomethingElse {get;set;}
}
private void Testing()
{
var cachedSection = HttpContext.Current.Cache.GetOrStore("Some Key", () => GetSectionViewData());
// Threading problem?
foreach(var product in cachedSection.Products)
{
DosomestuffwithProduct...
}
}
private SectionViewData GetSectionViewData()
{
SectionViewData viewData = new SectionViewData();
viewData.Products = CreateProductList();
viewData.SomethingElse = CreateSomethingElse();
return viewData;
}
Could I run into problems with the IEnumerable? I don't have much experience with threading problems. The cachedSection would not get touched if some other thread adds a new value to the cache, right? To me this looks like it would work!
Should I cache Products and SomethingElse individually? Would that be better than caching the whole SectionViewData?
Threading is hard.
In your GetOrStore method, the get/generator sequence is entirely unsynchronized, so any number of threads can get null from the cache and run the generator function at the same time. This may or may not be a problem.
Your lock statement only locks the setter of cache[string], which is already thread-safe and doesn't need to be "extra locked".
The variation of double-checked locking in the cache is suspect; I'd try to get rid of it. Since a thread that never enters the lock() section can read result without a memory barrier, result may not be fully constructed by the time that thread gets it.
Enumerating the cached IEnumerables is safe as long as nothing modifies them at the same time. If GetSectionViewData() returns an object with immutable (as in non-changing) collections, you're safe.
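If you do want to get rid of it, here is a minimal sketch that synchronizes the whole get/generate/store sequence; it trades concurrency for simplicity (sync is the static lock object from the question's class):

public static T GetOrStore<T>(this Cache cache, string key, Func<T> generator)
{
    lock (sync)
    {
        // Check and populate under one lock, so only one thread runs the generator.
        var result = cache[key];
        if (result == null)
        {
            result = generator();
            cache[key] = result;
        }
        return (T)result;
    }
}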
Your code is missing some parts; for example, how does Products get populated? Only in GetSectionViewData?
If so, then I don't see a major problem with your code.
There is, however, a chance that two threads generate the same data (cachedSection) for the same key. That shouldn't create a threading problem, except that you are doing the work twice, so if this were an expensive operation I would change the code so it only generates once per key. If it is not expensive, it works fine as is.
The IEnumerable for Products is not touched (assuming you create it separately per thread), but the enumerator over the cache itself is modified by each insert operation, hence it is not thread-safe. So if you are enumerating the cache anywhere, I would be careful about that.
I'd like to change the representation of C# Doubles to rounded Int64 with a four decimal place shift in the MongoDB C# driver's serialization stack. In other words, store (Double)29.99 as (Int64)299900.
I'd like this to be transparent to my app. I've had a look at custom serializers but I don't want to override everything and then switch on the Type with fallback to the default, as that's a bit messy.
I can see that RegisterSerializer() won't let me add one for an existing type, and that BsonDefaultSerializationProvider has a static list of primitive serializers and is marked internal with private members, so I can't easily subclass it.
I can also see that it's possible to use RepresentAs Int64 for Doubles, but this is a cast, not a conversion. I essentially need a cast AND a conversion in both serialization directions.
I wish I could just hand the default serializer a custom serializer to override one of its own, but that would be a dirty hack.
Am I missing a really easy way?
You can definitely do this, you just have to get the timing right. When the driver starts up, there are no serializers registered. When it needs a serializer, it looks it up in the dictionary where it keeps track of the serializers it knows about (i.e. the ones that have been registered). Only if it can't find one in the dictionary does it start figuring out where to get one (including calling the serialization providers), and if it finds one it registers it.
The limitation in RegisterSerializer is there so that you can't replace an existing serializer that has already been used. But that doesn't mean you can't register your own if you do it early enough.
However, keep in mind that registering a serializer is a global operation, so if you register a custom serializer for double it will be used for all doubles, which could lead to unexpected results!
Anyway, you could write the custom serializer something like this:
public class CustomDoubleSerializer : BsonBaseSerializer
{
    public override object Deserialize(BsonReader bsonReader, Type nominalType, Type actualType, IBsonSerializationOptions options)
    {
        // Shift four decimal places back: 299900 -> 29.99
        var rep = bsonReader.ReadInt64();
        return rep / 10000.0;
    }

    public override void Serialize(BsonWriter bsonWriter, Type nominalType, object value, IBsonSerializationOptions options)
    {
        // Round and shift four decimal places: 29.99 -> 299900
        var rep = (long)Math.Round((double)value * 10000);
        bsonWriter.WriteInt64(rep);
    }
}
And register it like this:
BsonSerializer.RegisterSerializer(typeof(double), new CustomDoubleSerializer());
You could test it using the following class:
public class C
{
public int Id;
public double X;
}
and this code:
BsonSerializer.RegisterSerializer(typeof(double), new CustomDoubleSerializer());
var c = new C { Id = 1, X = 29.99 };
var json = c.ToJson();
Console.WriteLine(json);
var r = BsonSerializer.Deserialize<C>(json);
Console.WriteLine(r.X);
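If the global registration worries you, the driver also lets you opt in per member with the BsonSerializer attribute instead of registering the serializer for all doubles (a sketch; the Order class is made up):

public class Order
{
    public int Id;

    // Only this field uses the shifted Int64 representation;
    // every other double keeps the driver's default serializer.
    [BsonSerializer(typeof(CustomDoubleSerializer))]
    public double Price;
}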
You can also use your own serialization provider to tell Mongo which serializer to use for certain types, which I ended up doing to mitigate some of the timing issues mentioned when trying to override existing serializers. Here's an example of a serialisation provider that overrides how to serialize decimals:
public class CustomSerializationProvider : IBsonSerializationProvider
{
public IBsonSerializer GetSerializer(Type type)
{
if (type == typeof(decimal)) return new DecimalSerializer(BsonType.Decimal128);
return null; // falls back to Mongo defaults
}
}
If you return null from your custom serialization provider, it will fall back to using Mongo's default serialization provider.
Once you've written your provider, you just need to register it:
BsonSerializer.RegisterSerializationProvider(new CustomSerializationProvider());
I looked through the latest iteration of the driver's code and checked if there's some sort of backdoor to set custom serializers. I am afraid there's none; you should open an issue in the project's bug tracker if you think this needs to be looked at for future iterations of the driver (https://jira.mongodb.org/).
Personally, I'd open a ticket -- and if a quick workaround is necessary or required, I'd subclass DoubleSerializer, implement the new behavior, and then use Reflection to inject it into either MongoDB.Bson.Serialization.Serializers.DoubleSerializer.__instance or MongoDB.Bson.Serialization.BsonDefaultSerializationProvider.__serializers.
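A rough sketch of that reflection workaround (heavily version-dependent: the __instance field name comes from the driver internals quoted above, and PatchedDoubleSerializer is a hypothetical DoubleSerializer subclass):

// Swap the driver's private DoubleSerializer singleton before any serialization runs.
var instanceField = typeof(MongoDB.Bson.Serialization.Serializers.DoubleSerializer)
    .GetField("__instance", BindingFlags.Static | BindingFlags.NonPublic);
instanceField.SetValue(null, new PatchedDoubleSerializer());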
I need to store and retrieve lists in PhoneApplicationService.Current.State[] but this is not a list of strings or integers:
public class searchResults
{
public string title { get; set; }
public string description { get; set; }
}
public List<searchResults> resultData = new List<searchResults>()
{
//
};
The values of the results are fetched from the internet, and when the application is switched away this data needs to be saved in isolated storage for multitasking. How do I save this list and retrieve it again?
If the question really is about how to save the data then you just do
PhoneApplicationService.Current.State["SearchResultList"] = resultData;
and to retrieve it again you do
List<searchResults> loadedResultData = (List<searchResults>)PhoneApplicationService.Current.State["SearchResultList"];
Here is a complete working sample:
// your list for results
List<searchResults> resultData = new List<searchResults>();
// add some example data to save
resultData.Add(new searchResults() { description = "A description", title = "A title" });
resultData.Add(new searchResults() { description = "Another description", title = "Another title" });
// save list of search results to app state
PhoneApplicationService.Current.State["SearchResultList"] = resultData;
// --------------------->
// your app could now be tombstoned
// <---------------------
// load from app state
List<searchResults> loadedResultData = (List<searchResults>)PhoneApplicationService.Current.State["SearchResultList"];
// check if loading from app state succeeded
foreach (searchResults result in loadedResultData)
{
System.Diagnostics.Debug.WriteLine(result.title);
}
(This might stop working when your data structure gets more complex or contains certain types.)
Sounds like you just want to employ standard serialisation for your list object; see the MSDN docs here:
http://msdn.microsoft.com/en-us/library/ms973893.aspx
Or XML serialisation if you want something that can be edited outside of the application (you can also use the Isolated Storage explorer to grab the file and edit it later):
http://msdn.microsoft.com/en-us/library/182eeyhh(v=vs.71).aspx
Alternatively, I would also suggest trying out the Tombstone Helper project by Matt Lacey, which can simplify this for you greatly:
http://tombstonehelper.codeplex.com/
The answer by Heinrich already summarizes the main idea here: you can use PhoneApplicationService.State with lists just as with any other objects. Check out the MSDN docs on preserving application state: How to: Preserve and Restore Application State for Windows Phone. There's one important point to note there:
Any data that you store in the State dictionary must be serializable,
either directly or by using data contracts.
Directly here means that the classes are marked as [Serializable]. Regarding your List<searchResults>, it is serializable if searchResults is serializable. For that, either searchResults and all types it references must be marked with the [Serializable] attribute, or it must be a suitable data contract; see Using Data Contracts and Serializable Types. In short, make sure the class is declared public and has a public, parameterless constructor.
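For example, a data contract version of the class from the question might look like this (a sketch; the attributes follow the standard System.Runtime.Serialization pattern):

using System.Runtime.Serialization;

[DataContract]
public class searchResults
{
    [DataMember]
    public string title { get; set; }

    [DataMember]
    public string description { get; set; }
}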
Directly here means that the classes are marked as [Serializable]. Regarding your List<searchResults>, it is serializable if searchResults is serializable. To do this, either searchResults and all types referenced by it must be marked with the [Serializable] OR it must be a suitable Data Contract, see Using Data Contracts and Serializable Types. In short, make sure the class is declared as public and that it has a public, parameterless constructor.