What is the difference between AddTransientHttpErrorPolicy and AddPolicyHandler?

I want to apply resiliency strategy using Polly.
I am using HttpClientFactory from ASP.NET Core 2.1. I found a guide on the Polly GitHub wiki. It shows two ways to configure a policy - AddTransientHttpErrorPolicy and AddPolicyHandler - but offers little explanation.
What are the differences between them?

.AddTransientHttpErrorPolicy(...) pre-specifies for you what to handle (network failures, 5xx and 408 responses, as described in the wiki). You only have to specify how to handle it (e.g. retry, circuit-breaker).
With .AddPolicyHandler(...), you specify the whole policy yourself: both what to handle (.Handle<>(), .Or<>(), .OrResult<HttpResponseMessage>(), etc.) and how to handle it (e.g. retry, circuit-breaker), as shown in the Polly wiki.
Beyond that, there are no differences in how IHttpClientFactory works with the configured policies.
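For illustration, here is a minimal sketch of both registrations as they would appear in Startup.ConfigureServices; the client names, retry count, and back-off are placeholders, not anything prescribed by Polly:

```csharp
using System;
using System.Net;
using System.Net.Http;
using Microsoft.Extensions.DependencyInjection;
using Polly;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Option 1: AddTransientHttpErrorPolicy pre-selects the faults
        // (HttpRequestException, 5xx, 408); we only describe how to handle them.
        services.AddHttpClient("catalog")
            .AddTransientHttpErrorPolicy(p =>
                p.WaitAndRetryAsync(3, attempt =>
                    TimeSpan.FromSeconds(Math.Pow(2, attempt))));

        // Option 2: AddPolicyHandler takes a complete policy, so the same
        // fault conditions must be spelled out explicitly.
        services.AddHttpClient("catalog-explicit")
            .AddPolicyHandler(
                Policy<HttpResponseMessage>
                    .Handle<HttpRequestException>()
                    .OrResult(r => (int)r.StatusCode >= 500 ||
                                   r.StatusCode == HttpStatusCode.RequestTimeout)
                    .WaitAndRetryAsync(3, attempt =>
                        TimeSpan.FromSeconds(Math.Pow(2, attempt))));
    }
}
```

Both clients end up with equivalent retry behavior; the first form is just shorthand for the transient-fault conditions the second spells out.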

Related

Using an alternative connection channel/transport for gRPC

I currently have a primitive RPC setup relying on JSON transferred over secured sockets, but I would like to switch to gRPC. Unfortunately, I also need access to AF_UNIX on Windows (which Microsoft recently started supporting, but which gRPC has not implemented).
Since I have an existing working connection (managed with a different library), my preference would be to use that in conjunction with gRPC to send/receive commands in place of my JSON parsing, but I am struggling to identify the best way to do that.
I have seen Plugging custom transport into gRPC, but my question differs in the following ways (and I am also hoping for a more recent answer):
I want to avoid making changes to the core of gRPC. I'd prefer to extend it from within my library if possible, but the answer there implies adding a new transport to the gRPC core itself. If I did need to work at the transport level, is there a mechanism to register a transport with gRPC after the core has been built?
I am unsure whether I need to define a full custom transport at all, since I already have an existing connection established and ready. I have seen some things implying I could simply extend Channel, but I may be wrong.
I need to support Windows, or at least modern versions of it (which means the from_fd options gRPC provides are not available, since they are currently implemented only for POSIX).
Has anyone solved similar problems with gRPC?
I may have figured out my own answer. I seem to have been overly focused on gRPC, when the service-definition component of Protocol Buffers does not depend on it.
How can I write my own RPC implementation for Protocol Buffers utilizing ZeroMQ is very similar to my use case, and https://developers.google.com/protocol-buffers/docs/proto#services seems to resolve my issue (it also explains why I was mixing up the different kinds of "channels" involved).
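As a minimal sketch of the idea, in C# for illustration: once the messages are defined in a .proto file, the existing connection's stream can carry length-prefixed protobuf messages directly, using Google.Protobuf's delimited helpers. The CommandRequest type here is hypothetical, standing in for whatever your .proto actually generates:

```csharp
using System.IO;
using Google.Protobuf;

// Hypothetical message type generated from a .proto file, e.g.:
//   message CommandRequest { string name = 1; bytes payload = 2; }
// CommandRequest below stands in for that generated class.
public static class CommandTransport
{
    // Writes a length-prefixed protobuf message to the existing connection's
    // stream, replacing the hand-rolled JSON framing.
    public static void Send(Stream connection, CommandRequest request)
    {
        request.WriteDelimitedTo(connection);
        connection.Flush();
    }

    // Reads the next length-prefixed message off the same stream.
    public static CommandRequest Receive(Stream connection)
    {
        return CommandRequest.Parser.ParseDelimitedFrom(connection);
    }
}
```

The length prefix is what lets both ends find message boundaries on a raw stream, which is the part the gRPC transport would otherwise have done for you.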
I welcome any improvements/suggestions, and hope that this can be found in future searches by people who had the same confusion.

Batched requests with modern Google APIs Node.js client

I've recently been trying to refactor some code that takes advantage of the global batch requests feature for the Google APIs, which was recently deprecated. Currently we use the npm package google-batch, but since it dangerously edits the filesystem and uses the deprecated global endpoint, I'd like to move away from it before the endpoint is fully removed.
How can I create a batch request using (ideally) only the Node.js client? I want to use methods already present in the client as much as possible since it natively provides Promise and TypeScript support, which I intend to use.
I've looked into the Batchelor package suggested in this answer, but it requires you to manually write the HTTP request object instead of using the Node.js client.
This GitHub issue discusses the use of batch requests in the new Node.js client.
According to that thread, the new intended method of "batching" (outside of a poorly documented set of endpoints that still support it) is to make use of the HTTP/2 feature being shipped with the client, and then simply make all your requests at once.
The reason "batching" is in quotes is that this does not match my definition of batching: the client is not queuing the requests for execution, it is just managing network traffic better when you execute them yourself.
Unless I am misunderstanding it, the HTTP/2 feature does not actually batch requests; it merely tidies up some TCP overhead, and you still have to issue the requests yourself. In short, I do not believe that batching is possible with the API client alone.
(FWIW, I would have preferred to comment with a link as I'm uncertain I explained this well, but reputation didn't let me)

Pentest: verifying the checklist after checks are done

After pentesting and working through the checklist, how can I assure my client that these checks were done and the vulnerabilities patched? (Of course, for something like SQLi, demonstrating it is straightforward.)
Is there some place or mechanism to verify this?
Thanks
For checks that have been done, you can provide reports, generated by tools or from manual testing (depending on the vulnerability type), covering those specific checks.
For patched vulnerabilities, you will need to re-test the platform and provide updated reports, again from tools or manual testing, whose changed output indicates that the vulnerability is no longer present.
For further reassurance, you can also include the reproduction steps for each exploited vulnerability in the report, so that clients who want to test for themselves can do so (and confirm that it was fixed).
You should describe all methodologies used, such as OSSTMM, OWASP, and NIST. It is also very important to describe the perimeter tested (web surfaces such as forms, APIs, frameworks, network protocols, etc.).
You can also create a section for each step tested, following the OWASP Top 10:
Injection
Broken Authentication
Sensitive Data Exposure
XML External Entities (XXE)
Broken Access Control
Security Misconfiguration
Cross-Site Scripting (XSS)
Insecure Deserialization
Using Components with Known Vulnerabilities
Insufficient Logging and Monitoring
This way you ensure that your test was compliant.

Sending Email in .Net Core without using a class library

.NET Core (as of now) lacks the System.Net.Mail namespace, and the preview version of this namespace available through NuGet lacks SmtpClient, its most important type. However, if one does not want to use class libraries for some reason (such as being an independent hero), is there any way to send an email using the tools available in .NET Core (HttpWebRequest, HttpClient, sockets, etc.)? If so, how could this be done (example, not theory)?
You absolutely could write your own library to send emails. The question is "Should you?" The time required would not be a worthwhile investment unless you fully intend to make the library available to a broad audience.
The original SMTP specification (RFC 821) (https://www.rfc-editor.org/rfc/rfc821) was about 68 pages. Implementing it was doable but required significant effort. The newer SMTP specification (RFC 2821) (https://www.rfc-editor.org/rfc/rfc2821) is 79 pages.
Beyond the implementation of the functionality specified in RFC 2821, you also need to be familiar with a number of other RFCs and documents. (A brief look through the references listed from pages 68 to 70 will certainly give you a bit of pause.)
The best option for anyone sending mail with .NET Core is to use either a service such as SendGrid (http://sendgrid.com/) or a free and open-source library like MailKit (https://github.com/jstedfast/MailKit). I believe most deployed .NET Core sites use one of these two methods for sending email.
Steve Gordon did a really nice blog post (https://www.stevejgordon.co.uk/how-to-send-emails-in-asp-net-core-1-0) on using the MailKit library with ASP.NET Core.
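To show roughly what that looks like, here is a minimal sketch using MailKit; the host, port, credentials, and addresses are placeholders you would replace with your own:

```csharp
using MailKit.Net.Smtp;
using MailKit.Security;
using MimeKit;

// Build the message.
var message = new MimeMessage();
message.From.Add(new MailboxAddress("Sender Name", "sender@example.com"));
message.To.Add(new MailboxAddress("Recipient Name", "recipient@example.com"));
message.Subject = "Hello from .NET Core";
message.Body = new TextPart("plain")
{
    Text = "Sent with MailKit, no System.Net.Mail required."
};

// Send it over a STARTTLS-upgraded connection.
using (var client = new SmtpClient())
{
    client.Connect("smtp.example.com", 587, SecureSocketOptions.StartTls);
    client.Authenticate("username", "password");
    client.Send(message);
    client.Disconnect(true);
}
```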
Even if you wanted to build your own SMTP library (and I am definitely not advocating that), the best approach might be to fork something like MailKit (https://github.com/jstedfast/MailKit) via GitHub and then modify it. That would at least allow you to play with and iteratively modify a working implementation.
Most of your work would be done using components from the System.Net.Sockets namespace. Here is the reference for the .NET Framework 4.7 version (https://learn.microsoft.com/en-us/dotnet/api/system.net.sockets?view=netframework-4.7); it should be fairly similar to the .NET Core implementation, although I have not tested that.
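To give a sense of the work involved, here is a bare-bones sketch of the RFC 821 exchange over System.Net.Sockets, assuming a hypothetical plain-text server on port 25 with no TLS or authentication; a real client would also need response-code parsing, multi-line response handling, extension negotiation, and error handling:

```csharp
using System;
using System.IO;
using System.Net.Sockets;

class RawSmtpDemo
{
    static void Main()
    {
        // Bare-bones RFC 821 exchange. No TLS, no AUTH, no multi-line
        // response handling; real servers will reject a client this naive.
        using (var tcp = new TcpClient("mail.example.com", 25))
        using (var stream = tcp.GetStream())
        using (var reader = new StreamReader(stream))
        using (var writer = new StreamWriter(stream) { AutoFlush = true, NewLine = "\r\n" })
        {
            Console.WriteLine(reader.ReadLine());              // 220 greeting
            writer.WriteLine("HELO client.example.com");
            Console.WriteLine(reader.ReadLine());              // 250 ok
            writer.WriteLine("MAIL FROM:<sender@example.com>");
            Console.WriteLine(reader.ReadLine());              // 250 ok
            writer.WriteLine("RCPT TO:<recipient@example.com>");
            Console.WriteLine(reader.ReadLine());              // 250 ok
            writer.WriteLine("DATA");
            Console.WriteLine(reader.ReadLine());              // 354 start mail input
            writer.WriteLine("Subject: Hello");
            writer.WriteLine("");                              // blank line ends headers
            writer.WriteLine("Sent by hand over a socket.");
            writer.WriteLine(".");                             // lone dot ends the message
            Console.WriteLine(reader.ReadLine());              // 250 ok
            writer.WriteLine("QUIT");
        }
    }
}
```

Everything beyond this happy path (the other 70-odd pages of the RFCs) is exactly what MailKit already handles for you.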
Sometimes the best advice is "Just because you can doesn't mean you should." Please don't be an "independent hero." Go with what works.

Patterns / Solutions to complicated Feature Management

My company develops a CDN / web-hosting solution. We have a middleware that serves as a business logic layer and exposes a web service for the front-end.
I am looking for a clean solution to feature management - the software currently contains uncertainties and ugly workarounds about which the devs say "when it happens or breaks, we will fix it".
For example, here are some features a web publisher can have:
Sites limit
Bandwidth limit
SSL feature + SSL configuration per site
If we downgrade a web publisher who has 10 sites to a 5-site limit, we can choose not to suspend the extra 5 sites, or we can prompt for suspension before the downgrade.
For the bandwidth limit, the downgrade is easy: when the bandwidth check runs and the publisher has exceeded the limit, we suspend his account.
For the SSL feature, every SSL configuration is tied to a site. What should happen to these configuration objects when the SSL feature is downgraded from enabled to disabled?
As you can see, there are many different situations, and there are different ways of handling each.
I can build a system that examines the impacts and prompts the user to make changes before the downgrade/upgrade (see the sketch after this list).
Or a system that ignores the impacts and just upgrades/downgrades. Bad.
Or a system designed so that the client code must be aware of the complex feature matrix (or I can expose a helper that lets the client code check whether a feature is DEFUNCT).
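To make the first option concrete, here is a hypothetical sketch; every type and member name below is illustrative, not taken from any existing framework. Each feature contributes its own impact check that runs before a plan change is committed:

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative domain stubs.
public class Site { public void Suspend() { /* mark the site suspended */ } }
public class Publisher { public List<Site> Sites { get; } = new List<Site>(); }
public class Plan { public int SitesLimit { get; set; } }

// Each feature implements its own downgrade rules.
public interface IFeatureDowngradeHandler
{
    // Describes what the downgrade would break; empty means "safe to apply".
    IEnumerable<string> ExamineImpacts(Publisher publisher, Plan newPlan);

    // Applies the feature-specific consequences once the user confirms.
    void Apply(Publisher publisher, Plan newPlan);
}

public class SitesLimitHandler : IFeatureDowngradeHandler
{
    public IEnumerable<string> ExamineImpacts(Publisher publisher, Plan newPlan)
    {
        int excess = publisher.Sites.Count - newPlan.SitesLimit;
        if (excess > 0)
            yield return $"{excess} site(s) exceed the new limit and must be suspended.";
    }

    public void Apply(Publisher publisher, Plan newPlan)
    {
        foreach (var site in publisher.Sites.Skip(newPlan.SitesLimit))
            site.Suspend();
    }
}

// The workflow aggregates impacts from all handlers, so the client code
// sees one preview/confirm step instead of the full feature matrix.
public class PlanChangeService
{
    private readonly IReadOnlyList<IFeatureDowngradeHandler> _handlers;

    public PlanChangeService(IReadOnlyList<IFeatureDowngradeHandler> handlers)
    {
        _handlers = handlers;
    }

    public IReadOnlyList<string> PreviewDowngrade(Publisher publisher, Plan newPlan)
        => _handlers.SelectMany(h => h.ExamineImpacts(publisher, newPlan)).ToList();
}
```

An SSL handler would do the same for orphaned SSL configurations; the point is that the impact analysis lives with each feature rather than being spread across the client code.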
There may be other approaches I am still puzzling over. How would you tackle this issue, and are there recommended patterns, books, or software that I could refer to?
Appreciate your help.
