WebApi ignores defaults from route definition template - asp.net-web-api

I'm struggling with a Web API routing setup where tokens that are not provided in the URL should be auto-populated from the defaults defined for the route template.
My route declaration:
config.Routes.MapHttpRoute(
    name: "LandingPage Route",
    routeTemplate: "api/{version}/{controller}/{id}",
    defaults: new
    {
        controller = "LandingPageCommon",
        id = System.Web.Http.RouteParameter.Optional,
        version = ApiVersions.None,
        logging = true
    },
    constraints: new
    {
        version = new ApiVersionRouteConstraint()
    }
    // handler: apiVersionHandler
);
Now I'm making 2 calls via Postman:
1) Working as expected (the proper entity is returned) for GET /api/v3/LandingPageCommon/886858
{version} -> v3
{controller} -> LandingPageCommon
{id} -> 886858
2) Not working for GET /api/LandingPageCommon/886858
{version} -> not provided, so it should be taken from 'defaults' in the declaration -> 'none'
{controller} -> LandingPageCommon
{id} -> 886858
I've created a custom RouteConstraint to guard the {version} parameter. While debugging the 2nd call I can see that the route tokens are not set up properly ({version} -> {controller} and {controller} -> {id}).
At the same time I can see the defaults in the route declaration.
Why is {version} not set to its default value for the 2nd call?
Thanks

Related

How do I create an HttpOrigin for CloudFront to use a Lambda function URL?

I'm trying to set up a CloudFront behaviour to use a Lambda function URL with code like this:
this.distribution = new Distribution(this, id + "Distro", {
  comment: id + "Distro",
  defaultBehavior: {
    origin: new S3Origin(s3Site),
    viewerProtocolPolicy: ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
  },
  additionalBehaviors: {
    [`api-prd-v2/*`]: {
      compress: true,
      originRequestPolicy: originRequestPolicy,
      origin: new HttpOrigin(functionUrl.url, {
        protocolPolicy: OriginProtocolPolicy.HTTPS_ONLY,
        originSslProtocols: [OriginSslPolicy.TLS_V1_2],
      }),
      allowedMethods: AllowedMethods.ALLOW_ALL,
      viewerProtocolPolicy: ViewerProtocolPolicy.HTTPS_ONLY,
      cachePolicy: apiCachePolicy,
    },
  },
});
The functionUrl object is created in a different stack and passed into the CloudFormation stack; the definition looks like:
this.functionUrl = new FunctionUrl(this, 'LambdaApiUrl', {
  function: this.lambdaFunction,
  authType: FunctionUrlAuthType.NONE,
  cors: {
    allowedOrigins: ["*"],
    allowedMethods: [HttpMethod.GET, HttpMethod.POST],
    allowCredentials: true,
    maxAge: Duration.minutes(1)
  }
});
The above code fails because "The parameter origin name cannot contain a colon".
Presumably, this is because functionUrl.url evaluates to something like https://xxx.lambda-url.ap-southeast-2.on.aws/ (note the https://) whereas the HttpOrigin parameter should just be the domain name like xxx.lambda-url.ap-southeast-2.on.aws.
I can't just write code to hack the URL up though (i.e. functionUrl.url.replace("https://", "")), because when my code executes, the value of the url property is a token like ${Token[TOKEN.350]}.
The function url is working properly: if I hard-code the HttpOrigin to the function url's value (i.e. like xxx.lambda-url.ap-southeast-2.on.aws) - it works fine.
How do I use CDK code to set up the reference from CloudFront to the function URL?
I'm using aws-cdk version 2.21.1.
There is an open issue to add support: https://github.com/aws/aws-cdk/issues/20090
Use CloudFormation Intrinsic Functions to parse the url string:
cdk.Fn.select(2, cdk.Fn.split('/', functionUrl.url));
// -> 7w3ryzihloepxxxxxxxapzpagi0ojzwo.lambda-url.us-east-1.on.aws
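For example, the parsed domain can be fed straight into the HttpOrigin from the question. A minimal sketch under the question's own setup (functionUrl, originRequestPolicy and apiCachePolicy are the question's identifiers; the imports assume aws-cdk-lib v2):

import * as cdk from 'aws-cdk-lib';
import { AllowedMethods, OriginProtocolPolicy, OriginSslPolicy, ViewerProtocolPolicy } from 'aws-cdk-lib/aws-cloudfront';
import { HttpOrigin } from 'aws-cdk-lib/aws-cloudfront-origins';

// Fn.split('/', 'https://xxx.lambda-url.ap-southeast-2.on.aws/') yields
// ['https:', '', 'xxx.lambda-url.ap-southeast-2.on.aws', ''], so index 2 is the bare domain.
const functionUrlDomain = cdk.Fn.select(2, cdk.Fn.split('/', functionUrl.url));

const apiBehavior = {
  compress: true,
  originRequestPolicy: originRequestPolicy,
  origin: new HttpOrigin(functionUrlDomain, {
    protocolPolicy: OriginProtocolPolicy.HTTPS_ONLY,
    originSslProtocols: [OriginSslPolicy.TLS_V1_2],
  }),
  allowedMethods: AllowedMethods.ALLOW_ALL,
  viewerProtocolPolicy: ViewerProtocolPolicy.HTTPS_ONLY,
  cachePolicy: apiCachePolicy,
};

The distribution is then declared exactly as before, with this object used for the api-prd-v2/* behaviour.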

Passing a path parameter to Google's Endpoint for Cloud Function

I am following Google's tutorial on setting up a Google Cloud Endpoint (not AWS API Gateway) in front of my Cloud Function. My Cloud Function triggers an AWS Lambda function, and I am trying to pass a path parameter from my Endpoint as defined by the OpenAPI spec.
Path parameters are variable parts of a URL path. They are typically used to point to a specific resource within a collection, such as a user identified by ID. A URL can have several path parameters, each denoted with curly braces { }.
paths:
  /users/{id}:
    get:
      parameters:
        - in: path
          name: id    # Note the name is the same as in the path
          required: true
          schema:
            type: integer
GET /users/{id}
My openapi.yaml
swagger: '2.0'
info:
  title: Cloud Endpoints + GCF
  description: Sample API on Cloud Endpoints with a Google Cloud Functions backend
  version: 1.0.0
host: HOST
x-google-endpoints:
  - name: "HOST"
    allowCors: "true"
schemes:
  - https
produces:
  - application/json
paths:
  /function1/{pathParameters}:
    get:
      operationId: function1
      parameters:
        - in: path
          name: pathParameters
          required: true
          type: string
      x-google-backend:
        address: https://REGION-FUNCTIONS_PROJECT_ID.cloudfunctions.net/function1
      responses:
        '200':
          description: A successful response
          schema:
            type: string
The error I get when I use Endpoint URL https://REGION-FUNCTIONS_PROJECT_ID.cloudfunctions.net/function1/conversations is a TypeError from my AWS lambda function
StatusCode:200, FunctionError: "Unhandled", ExecutedVersion: "$LATEST". Payload: "errorType":"TypeError", errorMessage:"Cannot read property 'startsWith' of undefined..."
It is saying that on the line
var path = event.pathParameters;
...
if (path.startsWith('conversations/')) {...}
my path variable is undefined.
I initially thought my Google Function was not correctly passing pathParameters, but when I tested my Google Function with the triggering event {"pathParameters":"conversations"}, my Lambda returned the payload successfully.
My Google Cloud Function:
let AWS = require('aws-sdk');
AWS.config.update({
  accessKeyId: 'key',
  secretAccessKey: 'secret',
  region: 'region'
});
let lambda = new AWS.Lambda();

exports.helloWorld = async (req, res) => {
  let params = {
    FunctionName: 'lambdafunction',
    InvocationType: "RequestResponse",
    Payload: JSON.stringify(req.body)
  };
  res.status(200).send(await lambda.invoke(params, function (err, data) {
    if (err) { throw err; }
    else {
      return data.Payload;
    }
  }).promise());
};
EDIT 1:
After seeing this Google Group post, I tried adding path_translation: APPEND_PATH_TO_ADDRESS to my openapi.yaml file, yet I'm still having no luck.
...
paths:
  /{pathParameters}:
    get:
      ...
      x-google-backend:
        address: https://tomy.cloudfunctions.net/function-Name
        path_translation: APPEND_PATH_TO_ADDRESS
@Arunmainthan Kamalanathan mentioned in the comments that testing in AWS and Google Cloud directly with the trigger event {"pathParameters":"conversations"} is not equivalent to passing req.body from my Google function to AWS Lambda. I think this is where my error is occurring -- I'm not correctly passing my path parameter in the payload.
EDIT 2:
There is this Stack Overflow post concerning passing route parameters to Cloud Functions using req.path. When I console.log(req.path) I get / and from console.log(req.params) I get {'0': ''}, so for some reason my path parameter is not getting passed correctly from the Cloud Endpoint URL to my Google function.
I was running into the same issue when specifying multiple paths/routes within my openapi.yaml. It all depends on where you place the x-google-backend (top-level versus operation-level). This has implications on the behaviour of the path_translation. You could also overwrite path_translation yourself, as the documentation clearly describes:
path_translation: [ APPEND_PATH_TO_ADDRESS | CONSTANT_ADDRESS ]
Optional. Sets the path translation strategy used by ESP when making target backend requests.
Note: When x-google-backend is used at the top level of the OpenAPI specification, path_translation defaults to APPEND_PATH_TO_ADDRESS, and when x-google-backend is used at the operation level of the OpenAPI specification, path_translation defaults to CONSTANT_ADDRESS.
For more details on path translation, please see the Understanding path translation section.
This means that if you want the path to be appended as a path parameter instead of a query parameter in your backend, you should adhere to the following scenarios:
Scenario 1: Do you have one cloud function in the x-google-backend.address that handles all of your paths in the openapi specification? Put x-google-backend at the top-level.
Scenario 2: Do you have multiple cloud functions corresponding to different paths? Put x-google-backend at the operation-level and set x-google-backend.path_translation to APPEND_PATH_TO_ADDRESS.
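Tying this back to the function in the question: once the path is appended by ESP, the Cloud Function can read it from req.path and forward it to Lambda in the shape the asker already tested ({"pathParameters":"conversations"}). A minimal sketch, assuming path_translation resolves to APPEND_PATH_TO_ADDRESS and reusing the question's names (helloWorld, lambdafunction):

let AWS = require('aws-sdk');
AWS.config.update({ accessKeyId: 'key', secretAccessKey: 'secret', region: 'region' });
let lambda = new AWS.Lambda();

exports.helloWorld = async (req, res) => {
  // With APPEND_PATH_TO_ADDRESS the extra segment arrives on req.path,
  // e.g. GET .../function1/conversations -> req.path === '/conversations'.
  const pathParameters = req.path.replace(/^\/+/, '');

  const params = {
    FunctionName: 'lambdafunction',
    InvocationType: 'RequestResponse',
    // Same payload shape as the asker's successful test event.
    Payload: JSON.stringify({ pathParameters: pathParameters, body: req.body })
  };

  const result = await lambda.invoke(params).promise();
  res.status(200).send(result.Payload);
};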
When your invocation type is RequestResponse, you can access the payload directly from the event parameter of the Lambda handler.
Look at the Payload parameter of the Google function:
let params = {
  FunctionName: 'lambdafunction',
  InvocationType: "RequestResponse",
  Payload: JSON.stringify({ name: 'Arun' })
};
res.status(200).send(await lambda.invoke(params)...)
Also the Lambda part:
exports.handler = function(event, context) {
  context.succeed('Hello ' + event.name);
};
I hope this helps.

DTO not working for microservice, but working for APIs directly

I am developing APIs & microservices in NestJS.
This is my controller function:
@Post()
@MessagePattern({ service: TRANSACTION_SERVICE, msg: 'create' })
create(@Body() createTransactionDto: TransactionDto_create): Promise<Transaction> {
  return this.transactionsService.create(createTransactionDto);
}
When I call the POST API, DTO validation works fine, but when I call this via the microservice, validation does not work and the request passes through to the service without being rejected with an error.
Here is my DTO:
import { IsEmail, IsNotEmpty, IsString } from 'class-validator';

export class TransactionDto_create {
  @IsNotEmpty()
  action: string;

  // @IsString()
  readonly rec_id: string;

  @IsNotEmpty()
  readonly data: Object;

  extras: Object;
  // readonly extras2 : Object;
}
When I call the API without the action parameter it shows the error that action is required, but when I call this from the microservice using
const pattern = { service: TRANSACTION_SERVICE, msg: 'create' };
const data = {id: '5d1de5d787db5151903c80b9', extras:{'asdf':'dsf'}};
return this.client.send<number>(pattern, data)
it does not throw an error and goes straight to the service.
I have also added global pipe validation:
app.useGlobalPipes(new ValidationPipe({
  disableErrorMessages: false, // set true to hide detailed error messages
  whitelist: false,            // set true to strip params which are not in the DTO
  transform: false             // set true if you want params converted to the DTO class; by default it's false
}));
How can I make it work for both the API and the microservice? I need everything in one place with the same functionality so that it can be called by either kind of client.
ValidationPipe throws an HTTP BadRequestException, whereas the proxy client expects an RpcException.
@Catch(HttpException)
export class RpcValidationFilter implements ExceptionFilter {
  catch(exception: HttpException, host: ArgumentsHost) {
    return new RpcException(exception.getResponse());
  }
}

@UseFilters(new RpcValidationFilter())
@MessagePattern('validate')
async validate(
  @Payload(new ValidationPipe({ whitelist: true })) payload: SomeDTO,
) {
  // payload validates to SomeDto
  . . .
}
I'm going out on a limb and assuming that in your main.ts you have the line app.useGlobalPipes(new ValidationPipe());. From the documentation:
In the case of hybrid apps the useGlobalPipes() method doesn't set up pipes for gateways and micro services. For "standard" (non-hybrid) microservice apps, useGlobalPipes() does mount pipes globally.
You could instead bind the pipe globally from the AppModule, or you could use the @UsePipes() decorator on each route that needs validation via the ValidationPipe.
More info on binding pipes here
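For reference, binding the pipe from the module level is usually done with the APP_PIPE provider described in the Nest docs. A minimal sketch (the module shown is a placeholder, not the asker's actual AppModule):

import { Module, ValidationPipe } from '@nestjs/common';
import { APP_PIPE } from '@nestjs/core';

@Module({
  providers: [
    {
      // Registering the pipe as a provider applies it globally,
      // including to message-pattern (microservice) handlers.
      provide: APP_PIPE,
      useClass: ValidationPipe,
    },
  ],
})
export class AppModule {}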
As I understood it, useGlobalPipes works fine for the API but not for the microservice.
The reason behind this is that a Nest app with a connected microservice is a hybrid application, and hybrid applications have some restrictions. Please refer to the paragraph below:
By default a hybrid application will not inherit global pipes, interceptors, guards and filters configured for the main (HTTP-based) application. To inherit these configuration properties from the main application, set the inheritAppConfig property in the second argument (an optional options object) of the connectMicroservice() call.
Please refer to the official Nest documentation.
So you need to add the inheritAppConfig option to the connectMicroservice() call:
const microservice = app.connectMicroservice(
  {
    transport: Transport.TCP,
  },
  { inheritAppConfig: true },
);
It worked for me!

TYPO3 9.x AJAX call

I need to configure a single route for an AJAX call: getamministrazioni.json.
I tried to change the site configuration as follows:
...
routeEnhancers:
  News:
    type: Extbase
    extension: News
    plugin: Pi1
    routes:
      -
        routePath: '/{news-title}'
        _controller: 'News::detail'
        _arguments:
          news-title: news
    aspects:
      news-title:
        type: PersistedAliasMapper
        tableName: tx_news_domain_model_news
        routeFieldName: path_segment
  PageTypeSuffix:
    type: PageType
    default: .html
    map:
      .html: 0
      getamministrazioni.json: 1035343
errorHandling: { }
routes: { }
...
And in setup.typoscript I have:
GetAmministrazioni = PAGE
GetAmministrazioni {
  typeNum = 1035343
  config {
    disableAllHeaderCode = 1
    debug = 0
    no_cache = 1
    additionalHeaders {
      10 {
        header = Content-Type: application/json
        replace = 1
      }
    }
  }
  10 < tt_content.list.20.my_controller_getamministrazioni
}
It works, but for all pages:
/home/getamministrazioni.json
/page1/getamministrazioni.json
etc.
I want a single route from the root: /getamministrazioni.json
How can I do that?
There is a possibility to limit the routing to specific page ids:
limitToPages: 1
But this will limit your whole mapping configuration to page id 1, including the .html suffix (which you probably don't want).
Unfortunately, it is currently not possible to create multiple Route Enhancers with the same name, like in the following, non-working example:
PageTypeSuffix:
  type: PageType
  map:
    .html: 0
PageTypeSuffix:
  type: PageType
  limitToPages: 1
  map:
    sitemap.xml: 1533906435
Two possible workarounds:
Create your own route enhancer which just extends TYPO3\CMS\Core\Routing\Enhancer\PageTypeDecorator to allow a different name (see custom enhancers).
Redirect your JSON page type to an error page if the page id is not 0 (no routing necessary, as TypoScript then avoids delivering this page type):
[getTSFE().id != 1]
  seo_sitemap.config {
    additionalHeaders.10 {
      header = Location: /error.html
    }
  }
[END]
I found another solution. I created a plugin and used the controller to print the JSON, returning false. In the template setup I strip all HTML and change the header content type. So on every page I insert a plugin as content that prints the JSON.

Web API add openid scope to auth url for swagger/swashbuckle UI

We have an ASP.NET Web API application which uses Swagger/Swashbuckle for its API documentation. The API is secured by Azure AD using OAuth/OpenID Connect. The configuration for Swagger is done in code:
var oauthParams = new Dictionary<string, string>
{
    { "resource", "https://blahblahblah/someId" }
};

GlobalConfiguration.Configuration
    .EnableSwagger(c =>
    {
        c.SingleApiVersion(Version, Name);
        c.UseFullTypeNameInSchemaIds();
        c.OAuth2("oauth2")
            .Description("OAuth2 Implicit Grant")
            .Flow("implicit")
            .AuthorizationUrl(
                "https://login.microsoftonline.com/te/ourtenant/ourcustompolicy/oauth2/authorize")
            .TokenUrl(
                "https://login.microsoftonline.com/te/ourtenant/ourcustompolicy/oauth2/token");
        c.OperationFilter<AssignOAuth2SecurityRequirements>();
    })
    .EnableSwaggerUi(c =>
    {
        c.EnableOAuth2Support(_applicationId, null, "http://localhost:49919/swagger/ui/o2c-html", "Swagger", " ", oauthParams);
        c.BooleanValues(new[] { "0", "1" });
        c.DisableValidator();
        c.DocExpansion(DocExpansion.List);
    });
When Swashbuckle constructs the auth URL for login, it automatically adds:
&scope=
However I need this to be:
&scope=openid
I have tried adding this:
var oauthParams = new Dictionary<string, string>
{
    { "resource", "https://blahblahblah/someId" },
    { "scope", "openid" }
};
But this then adds:
&scope=&someotherparam=someothervalue&scope=openid
Any ideas how to add
&scope=openid
to the auth URL that Swashbuckle constructs?
Many thanks
So, I found out what the issue was; the offending code can be found here:
https://github.com/swagger-api/swagger-ui/blob/2.x/dist/lib/swagger-oauth.js
These js files are from a git submodule that references the old version of the UI.
I can see on lines 154-158 we have this code:
url += '&redirect_uri=' + encodeURIComponent(redirectUrl);
url += '&realm=' + encodeURIComponent(realm);
url += '&client_id=' + encodeURIComponent(clientId);
url += '&scope=' + encodeURIComponent(scopes.join(scopeSeparator));
url += '&state=' + encodeURIComponent(state);
It basically adds the scope parameter regardless of whether there are any scopes or not. This means you cannot add scopes via the additionalQueryParams dictionary that gets sent into EnableOAuth2Support, as you will get a URL that contains two scope query params, i.e.
&scope=&otherparam=otherparamvalue&scope=openid
A simple length check around the scopes would fix it.
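Something along these lines would do it (a hypothetical patch against the snippet quoted above, not a fix that actually shipped in that swagger-ui version):

url += '&redirect_uri=' + encodeURIComponent(redirectUrl);
url += '&realm=' + encodeURIComponent(realm);
url += '&client_id=' + encodeURIComponent(clientId);
// Only append the scope parameter when there is at least one scope, so a
// scope passed via additionalQueryParams is not duplicated in the URL.
if (scopes.length > 0) {
  url += '&scope=' + encodeURIComponent(scopes.join(scopeSeparator));
}
url += '&state=' + encodeURIComponent(state);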
I ended up removing Swashbuckle from the Web API project and adding a different NuGet package called Swagger-Net, found here:
https://www.nuget.org/packages/Swagger-Net/
This is actively maintained; it resolved the issue and uses a newer version of the Swagger UI. The configuration remained exactly the same; the only thing you need to change is your reply URL, which is now:
http://your-url/swagger/ui/oauth2-redirect-html
