Is it a must to set up Cloudflare firewall rules to retrieve firewall events? - graphql

I am new to the Cloudflare GraphQL API. I am trying to use it to retrieve the firewall events of a domain with the code below:
query ListFirewallEvents($zoneTag: string, $datetimeStart: Time, $datetimeEnd: Time) {
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      firewallEventsAdaptive(
        # events after the start of the time window and before its end
        filter: { datetime_gt: $datetimeStart, datetime_lt: $datetimeEnd }
        limit: 10
        orderBy: [datetime_DESC]
      ) {
        action
        clientAsn
        clientCountryName
        clientIP
        clientRequestPath
        clientRequestQuery
        datetime
        source
        userAgent
      }
    }
  }
}
However, there are no firewall rules set up for this domain. Could I still get results if I don't set up firewall rules?
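For reference, here is a minimal sketch of how a query like the one above can be POSTed to Cloudflare's GraphQL endpoint. It assumes Node 18+ (global fetch, run as an ES module) and an API token with analytics read access to the zone in the CF_API_TOKEN environment variable; the zone tag and time range are placeholders.

// Minimal sketch: send the firewall-events query to Cloudflare's GraphQL API.
// CF_API_TOKEN, the zone tag, and the time range below are placeholders.
const query = `
  query ListFirewallEvents($zoneTag: string, $datetimeStart: Time, $datetimeEnd: Time) {
    viewer {
      zones(filter: { zoneTag: $zoneTag }) {
        firewallEventsAdaptive(
          filter: { datetime_gt: $datetimeStart, datetime_lt: $datetimeEnd }
          limit: 10
          orderBy: [datetime_DESC]
        ) {
          action
          clientIP
          datetime
          source
          userAgent
        }
      }
    }
  }`;

const response = await fetch("https://api.cloudflare.com/client/v4/graphql", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.CF_API_TOKEN}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    query,
    variables: {
      zoneTag: "your-zone-tag",
      datetimeStart: "2024-01-01T00:00:00Z",
      datetimeEnd: "2024-01-02T00:00:00Z",
    },
  }),
});

const result = await response.json();
console.log(JSON.stringify(result, null, 2));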

Related

How to run both Svelte and Go

I'm trying to make a website using Svelte (frontend) and Go (backend).
My problem is that when I run them in separate terminals to test my app ('npm run dev' for Svelte, 'go run .' for Go), they run on different ports: Go on port 8080 and Svelte on port 50838. How can I solve this?
Using vite to proxy requests to your Go backend is probably the simplest method (I'm assuming you are using vite!).
To do this, add something like the following to your vite.config.js:
const config = {
  ...,
  server: {
    proxy: {
      '/api': {
        target: 'http://127.0.0.1:8080/',
        proxyTimeout: 10000
      },
      '/': { // Complex example that filters based on headers
        target: 'http://127.0.0.1:8080/',
        proxyTimeout: 600000,
        bypass: (req, _res, _options) => {
          let ct = req.headers['content-type'];
          if (ct == null || !ct.includes('grpc')) {
            return req.url; // bypass this proxy
          }
        }
      }
    },
  }
};
This contains a few examples; you will need to tweak these to meet your needs.
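With that proxy in place, the Svelte code can talk to the Go backend through the dev server's own port by using relative URLs; a small sketch (the /api/hello route is just a hypothetical example):

// Call the backend with a relative URL: the request hits the vite dev server
// (e.g. :50838) and is proxied to the Go process listening on :8080.
async function loadHello() {
  const res = await fetch('/api/hello'); // hypothetical Go route behind the '/api' proxy rule
  if (!res.ok) throw new Error(`backend returned ${res.status}`);
  return res.json();
}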

Image optimization does not work for an external website with @nuxt/image

Can't optimize images on the fly from a remote website.
Example:
<nuxt-img
  src="https://upload.wikimedia.org/wikipedia/ru/b/b7/Enterthematrix.jpg"
  format="webp"
/>

// nuxt.config.js
export default {
  image: {
    domains: ['https://upload.wikimedia.org']
  }
}
What am I doing wrong?
I had the same question; after reviewing the documentation I finally found the answer.
Your domains entry is not valid: you should not include the protocol, so write it like this (without the protocol):
export default {
  image: {
    domains: ['upload.wikimedia.org']
  }
}

Traefik with dynamic routing to ECS backends, running as one-off tasks

I'm trying to implement a solution for a reverse-proxy service using Traefik v1 (1.7) and ECS one-off tasks as backends, as described in this SO question. Routing should be dynamic: requests to the /user/1234/* path should go to the ECS task running with the appropriate docker labels:
docker_labels = {
  "traefik.frontend.rule" = "Path:/user/1234"
  "traefik.backend"       = "trax1"
  "traefik.enable"        = "true"
}
So far this setup works fine, but I need to create one ECS task definition per running task, because the docker labels are a property of the ECS TaskDefinition, not of the ECS task itself. Is it possible to create only one TaskDefinition and pass the Traefik rules in ECS task tags, within the task's key/value properties?
That would require some modification of the Traefik source code. Are there any other options or ways this could be implemented that I've missed, like API Gateway or Lambda@Edge? I have no experience with those technologies, so real-world examples are more than welcome.
Solved by using the Traefik REST API provider. The external component that runs the one-off tasks can discover the task's internal IP and update the Traefik configuration on the fly, pairing a traefik.frontend.rule = "Path:/user/1234" rule with the task's internal IP:port in the backends section.
It should first GET the current Traefik configuration from the /api/providers/rest endpoint, add or remove the corresponding parts (depending on whether a task was started or stopped), and PUT the updated configuration back to the same endpoint, for example:
{
  "backends": {
    "backend-serv1": {
      "servers": {
        "server-service-serv-test1-serv-test-4ca02d28c79b": {
          "url": "http://172.16.0.5:32793"
        }
      }
    },
    "backend-serv2": {
      "servers": {
        "server-service-serv-test2-serv-test-279c0ba1959b": {
          "url": "http://172.16.0.5:32792"
        }
      }
    }
  },
  "frontends": {
    "frontend-serv1": {
      "entryPoints": [
        "http"
      ],
      "backend": "backend-serv1",
      "routes": {
        "route-frontend-serv1": {
          "rule": "Path:/user/1234"
        }
      }
    },
    "frontend-serv2": {
      "entryPoints": [
        "http"
      ],
      "backend": "backend-serv2",
      "routes": {
        "route-frontend-serv2": {
          "rule": "Path:/user/5678"
        }
      }
    }
  }
}
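As a rough sketch of that update flow (assuming the Traefik API is reachable at http://traefik:8080 and the /api/providers/rest endpoint behaves as described above; names, IPs and paths are placeholders):

// Read the current dynamic configuration from the REST provider.
const base = 'http://traefik:8080'; // assumed Traefik API address
const config = await (await fetch(`${base}/api/providers/rest`)).json();

// Register a newly started one-off task (placeholder values).
const name = 'serv3';
config.backends = config.backends || {};
config.frontends = config.frontends || {};
config.backends[`backend-${name}`] = {
  servers: { [`server-${name}`]: { url: 'http://172.16.0.7:32801' } }, // task internal IP:port
};
config.frontends[`frontend-${name}`] = {
  entryPoints: ['http'],
  backend: `backend-${name}`,
  routes: { [`route-frontend-${name}`]: { rule: 'Path:/user/9012' } },
};

// Push the updated configuration back; Traefik applies it on the fly.
await fetch(`${base}/api/providers/rest`, {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(config),
});

Removing the entries again when a task stops works the same way: delete the backend/frontend pair from the object and PUT the result back.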

Service Fabric HTTPS endpoint with Kestrel and reverse proxy

I've been trying to set up HTTPS on a stateless API endpoint, following the instructions in the Microsoft documentation and various posts/blogs I could find. It works fine locally, but after deploying it to my dev cluster I'm getting:
Browser : HTTP ERROR 504
Vm event viewer : HandlerAsyncOperation EndProcessReverseProxyRequest failed with FABRIC_E_TIMEOUT
SF event table : Error while processing request: request url = https://mydomain:19081/appname/servicename/api/healthcheck/ping, verb = GET, remote (client) address = xxx, request processing start time = 2018-03-13T14:50:17.1396031Z, forward url = https://0.0.0.0:44338/api/healthcheck/ping, number of successful resolve attempts = 48, error = 2147949567, message = , phase = ResolveServicePartition
In code, I have this in the instance listener:
.UseKestrel(options =>
{
    options.Listen(IPAddress.Any, 44338, listenOptions =>
    {
        listenOptions.UseHttps(GetCertificate());
    });
})
ServiceManifest:
<Endpoint Protocol="https" Name="SslServiceEndpoint" Type="Input" Port="44338" />
Startup:
services.AddMvc(options =>
{
    options.SslPort = 44338;
    options.Filters.Add(new RequireHttpsAttribute());
});
+
var options = new RewriteOptions().AddRedirectToHttps(StatusCodes.Status301MovedPermanently, 44338);
app.UseRewriter(options);
Here is what I have in Azure (deployed through an ARM template):
Health probes
NAME PROTOCOL PORT USED BY
AppPortProbe TCP 44338 AppPortLBRule
FabricGatewayProbe TCP 19000 LBRule
FabricHttpGatewayProbe TCP 19080 LBHttpRule
SFReverseProxyProbe TCP 19081 LBSFReverseProxyRule
Load balancing rules
NAME LOAD BALANCING RULE BACKEND POOL HEALTH PROBE
AppPortLBRule AppPortLBRule (TCP/44338) LoadBalancerBEAddressPool AppPortProbe
LBHttpRule LBHttpRule (TCP/19080) LoadBalancerBEAddressPool FabricHttpGatewayProbe
LBRule LBRule (TCP/19000) LoadBalancerBEAddressPool FabricGatewayProbe
LBSFReverseProxyRule LBSFReverseProxyRule (TCP/19081) LoadBalancerBEAddressPool SFReverseProxyProbe
I have a cluster certificate, a reverse proxy certificate, and auth to the API through Azure AD. In the ARM template:
"fabricSettings": [
{
"parameters": [
{
"name": "ClusterProtectionLevel",
"value": "[parameters('clusterProtectionLevel')]"
}
],
"name": "Security"
},
{
"name": "ApplicationGateway/Http",
"parameters": [
{
"name": "ApplicationCertificateValidationPolicy",
"value": "None"
}
]
}
],
Not sure what else could be relevant; if you have any ideas/suggestions, they are very welcome.
Edit: code for GetCertificate()
private X509Certificate2 GetCertificate()
{
    var certificateBundle = Task.Run(async () => await GetKeyVaultClient()
        .GetCertificateAsync(Environment.GetEnvironmentVariable("KeyVaultCertifIdentifier")));
    var certificate = new X509Certificate2();
    certificate.Import(certificateBundle.Result.Cer);
    return certificate;
}

private KeyVaultClient GetKeyVaultClient() => new KeyVaultClient(async (authority, resource, scope) =>
{
    var context = new AuthenticationContext(authority, TokenCache.DefaultShared);
    var clientCred = new ClientCredential(Environment.GetEnvironmentVariable("KeyVaultClientId"),
        Environment.GetEnvironmentVariable("KeyVaultSecret"));
    var authResult = await context.AcquireTokenAsync(resource, clientCred);
    return authResult.AccessToken;
});
Digging into your code, I've realized that there is nothing wrong with it except one thing. As you use Kestrel, you don't need to set up anything extra in the AppManifest, as those settings are for the Http.Sys implementation. You don't even need an endpoint in the ServiceManifest (although it's recommended), as all of that is about URL reservation for the service account and SSL binding configuration, neither of which is required with Kestrel.
What you do need to do is use IPAddress.IPv6Any when you configure SSL. Aside from being the recommended way (it accepts both IPv4 and IPv6 connections), it also performs a 'correct' endpoint registration in SF. When you use IPAddress.Any, SF registers an endpoint like https://0.0.0.0:44338, and that's how the reverse proxy will try to reach the service, which obviously wouldn't work: 0.0.0.0 doesn't correspond to any particular IP, it's just a way of saying 'any IPv4 address at all'. When you use IPAddress.IPv6Any instead, you get a correct endpoint mapped to the VM's IP address, which can be resolved from within the VNet. You can see this for yourself in SF Explorer if you go down to the endpoint section in the service instance blade.

Using webpack dev server, how to proxy everything except "/app", but including "/app/api"

Using webpack dev server, I'd like to have a proxy that proxies everything to the backend server, except my app itself; the API, however, which lives under my app's path, should still be proxied:
/myapp/api/** should be proxied
/myapp/** should not be proxied (anything else under /myapp)
/** should be proxied
The following setup does this using a bypass function, but can it be done declaratively, using a single context specification?
proxy: [
  {
    context: '/',
    bypass: function(req, res, options) {
      if (
        req.url.startsWith('/app') &&
        !req.url.startsWith('/app/api')
      ) {
        // console.log("no proxy for local stuff");
        return false;
      }
      // console.log("Proxy!")
    },
    // ...
  },
],
According to https://webpack.js.org/configuration/dev-server/#devserver-proxy, webpack dev server uses http-proxy-middleware, and its documentation (https://github.com/chimurai/http-proxy-middleware#context-matching) shows that you can use exclusion patterns in the context.
This should work in your case:
proxy: [
  {
    context: ['**', '/myapp/api/**', '!/myapp/**'],
    // ...
  },
],
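If you want to double-check how those globs behave before wiring them into the dev server, you can run them through micromatch directly (the matcher http-proxy-middleware uses for context matching); a small sketch, with micromatch installed locally:

// npm install micromatch (only needed for this quick check)
const micromatch = require('micromatch');

const patterns = ['**', '/myapp/api/**', '!/myapp/**'];
const urls = ['/myapp/some/page', '/myapp/api/users', '/anything/else'];

// The URLs printed here are the ones the proxy context would forward.
// You want '/myapp/api/users' and '/anything/else' listed, but not '/myapp/some/page'.
// If '/myapp/api/users' is missing, try listing the '!/myapp/**' exclusion before
// '/myapp/api/**'; how negated globs combine can depend on the micromatch version.
console.log(micromatch(urls, patterns));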
