API gateway in production - microservices

I have some APIs and an API gateway in front of them; it happens to be Ocelot.
This is my configuration in my local environment:
"ReRoutes": [
{
"DownstreamPathTemplate": "/{everything}",
"DownstreamScheme": "http",
"DownstreamHostAndPorts": [
{
"Host": "localhost",
"Port": 5000
}
],
"UpstreamPathTemplate": "/api1/{everything}",
"UpstreamHttpMethod": [ "Get", "Post" ]
},
The APIs are running on Kestrel; no Docker.
On the production system, however, it should be installed in IIS.
The problem is this: in my local environment, all the APIs and the API gateway run on localhost, each on its own port. But on the server they are installed as websites and as applications inside a website, so if the website's address is xyz.com, api1's address is xyz.com/api1, and it is reachable there. I don't want it to be reachable directly, only via the API gateway. So the first question is: how do I make it available only via the API gateway?
And the second issue is that it doesn't work. This is my configuration on the server:
"ReRoutes": [
{
"DownstreamPathTemplate": "/Api1Api/{everything}",
"DownstreamScheme": "http",
"DownstreamHostAndPorts": [
{
"Host": "localhost", <-- here I tried xyz.com too
"Port": 80
}
],
"UpstreamPathTemplate": "/Api1/{everything}",
"UpstreamHttpMethod": [ "Get", "Post" ]
},
If I go to xyz.com/Api1Api, it works, but I don't want it to; and if I go to xyz.com/ApiGateway/Api1, it does not work, but that is exactly where I want it to work.

Related

Laravel Echo Server not listening to channels, but works with events

I'm using Redis with socket.io, laravel-echo-server and Vue.
On my local machine it works just fine. The only problem, after I uploaded everything and configured it on the server, is that laravel-echo-server is not even trying to authenticate, nor sending an error, yet it works fine when I send or broadcast an event.
When I fire an event, it shows up in the laravel-echo-server console (production server screenshot).
But it is not even trying to listen or authenticate. This is my listening code in Vue (production server screenshot):
mounted() {
    // LARAVEL ECHO, SOCKET.IO, REDIS, etc.
    Echo.private(`messages.${this.user.id}`)
        .listen('NewMessage', (e) => {
            console.log("NEW MESSAGE ARRIVED");
            console.log(e);
            this.handleIncoming(e.message);
        });
}
But as I mentioned, nothing shows up in the laravel-echo-server console.
With the same configuration, this is how it looks on my local machine (localhost screenshot):
As I said, it does not even try to authenticate or connect to the channel.
This is my laravel-echo-server.json (production server screenshot):
Am I missing something?
Why does it work with the events, but not listen, or even try to listen, to them?
Thank you so much!
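For context, here is my rough understanding of the two halves involved (just a sketch of what I think happens, not something I have verified, and the channel names are only adapted from my code above): broadcasting needs Laravel → Redis → laravel-echo-server, while listening needs the browser to open a socket.io connection to the echo host and, for private channels, an extra auth round-trip.
// Public channel: subscribe only, no auth round-trip
Echo.channel('messages')
    .listen('NewMessage', (e) => console.log('public', e));

// Private channel: laravel-echo-server calls authHost + authEndpoint
// (/broadcasting/auth) before the subscription is accepted
Echo.private(`messages.${this.user.id}`)
    .listen('NewMessage', (e) => console.log('private', e));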
EDIT 1:
This is part of my .env file:
BROADCAST_DRIVER=redis
CACHE_DRIVER=file
QUEUE_CONNECTION=redis
SESSION_DRIVER=file
SESSION_LIFETIME=120
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
This is part of my bootstrap.js:
import Echo from "laravel-echo"
window.io = require('socket.io-client');

window.Echo = new Echo({
    broadcaster: 'socket.io',
    host: '127.0.0.1:6001' // I changed it to statically point to 127.0.0.1
});
And this is part of my config/database.php (screenshot):
On localhost it works just fine, using http://127.0.0.1:8000 for the app and 127.0.0.1:6001 for Laravel Echo.
The problem appears when I upload and configure it on my AWS Lightsail instance; the domain is http://test1.erpnegocios.com and the path on the server is /var/www/html/test1.
EDIT 2:
I've been reading posts saying that the host in bootstrap.js (socket.io) must be the same as the host in laravel-echo-server.json; that's why I'm using host: '127.0.0.1:6001' for both (as shown in the images), but the problem persists.
If I run sudo service redis-server stop, the laravel-echo-server console stops working, so I assume the Redis connection is fine, but I can't get clients to connect to channels, neither public nor private.
EDIT 3:
I changed my bootstrap.js and laravel-echo-server.json as follows:
bootstrap.js
window.Echo = new Echo({
    broadcaster: 'socket.io',
    host: window.location.hostname + ':6001' // the laravel-echo-server host
});
laravel-echo-server.json
{
    "authHost": "http://test1.erpnegocios.com:8000",
    "authEndpoint": "/broadcasting/auth",
    "clients": [],
    "database": "redis",
    "databaseConfig": {
        "redis": {},
        "sqlite": {
            "databasePath": "/database/laravel-echo-server.sqlite"
        }
    },
    "devMode": true,
    "host": null,
    "port": "6001",
    "protocol": "http",
    "socketio": {},
    "secureOptions": 67108864,
    "sslCertPath": "",
    "sslKeyPath": "",
    "sslCertChainPath": "",
    "sslPassphrase": "",
    "subscribers": {
        "http": true,
        "redis": true
    },
    "apiOriginAllow": {
        "allowCors": false,
        "allowOrigin": "",
        "allowMethods": "",
        "allowHeaders": ""
    }
}
I also opened port 8000 on my server. Now, running the laravel-echo-server console, I see (screenshot):
Basically it's working now, but only on public channels. I think the error is related to another piece of configuration; let me explain:
The server's main IP is 3.83.87.137.
On that server I have multiple projects:
/var/www/html/test1
/var/www/html/erp
/var/www/html/test, etc.
The IP (3.83.87.137) points to /var/www/html/erp, but I'm configuring and using test1 first. I believe I can't authenticate because pointing authHost in my laravel-echo-server.json to 3.83.87.137:8000 resolves to /var/www/html/erp, am I right? Should I change my configuration so that 3.83.87.137 points to /var/www/html/, or can I set authHost in laravel-echo-server.json to 3.83.87.137:8000/test1?
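In case it is relevant, this is the client-side Echo config I am experimenting with while debugging the private-channel auth; the auth.headers block and the CSRF meta tag lookup are my assumptions about how the request to authHost + /broadcasting/auth gets its credentials, not something I have confirmed on this setup:
window.Echo = new Echo({
    broadcaster: 'socket.io',
    host: window.location.hostname + ':6001',
    // assumption: these headers are forwarded by laravel-echo-server
    // to authHost + authEndpoint when subscribing to a private channel
    auth: {
        headers: {
            'X-CSRF-TOKEN': document.head.querySelector('meta[name="csrf-token"]').content
        }
    }
});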
Thank you so much for your help!

Issue with Consul Connect

I have a service that I want to proxy with Connect, and I followed the instructions on the HashiCorp Learn portal.
This is my "hello" service definition:
{
    "service": {
        "name": "node",
        "port": 3000,
        "connect": {
            "sidecar_service": {}
        }
    }
}
I then do a consul reload and create the proxy with:
consul connect proxy -sidecar-for node &
When I create another service like this:
consul connect proxy -service web -upstream node:9191
I can verify that I can reach my node service by calling the web service on port 9191 (curl localhost:9191). But when I define my web service in a JSON file as shown below, register it (with consul reload) and try to connect to it, I get the following error:
curl: (7) Failed to connect to localhost port 9191: Connection refused
web.json
{
    "service": {
        "name": "web",
        "connect": {
            "sidecar_service": {
                "proxy": {
                    "upstreams": [
                        {
                            "destination_name": "node",
                            "local_bind_port": 9191
                        }
                    ]
                }
            }
        }
    }
}
Is there anything I missed?

Getting "connection refused" error when trying to Consul Connect using a sidecar proxy to web

I am following this tutorial: https://learn.hashicorp.com/consul/getting-started/connect
At the point when I ran consul connect proxy -sidecar-for web, it started throwing this error:
2020-07-26T14:30:18.243+0100 [ERROR] proxy.inbound: failed to dial: error="dial tcp 127.0.0.1:0: connect: can't assign requested address"
Why does the service definition not have a port assigned in the demonstration?
{
    "service": {
        "name": "web",
        "connect": {
            "sidecar_service": {
                "proxy": {
                    "upstreams": [
                        {
                            "destination_name": "socat",
                            "local_bind_port": 9191
                        }
                    ]
                }
            }
        }
    }
}
The video in the tutorial shows the fourth line as:
"port": 8080,
The documentation is missing that line. Not that it matters much, because nothing is listening on the web service, so the error will persist; you can safely ignore it. I suspect your real issue is that nc 127.0.0.1 9191 is failing, which I address below.
The full config should look like this:
{
    "service": {
        "name": "web",
        "port": 8080,
        "connect": {
            "sidecar_service": {
                "proxy": {
                    "upstreams": [
                        {
                            "destination_name": "socat",
                            "local_bind_port": 9191
                        }
                    ]
                }
            }
        }
    }
}
But this isn't important for getting through this section of the lab. The instructions aren't clear, but don't forget to restart the web proxy (consul connect proxy -sidecar-for web) and start the socat proxy (consul connect proxy -sidecar-for socat).
That last part is sorely missing from both the instructions and the video.

How to proxy API requests? (Angular CLI)

I'm working on a Java project with Spring 4 and Angular 5. The session is generated on the Spring side.
I'm not able to generate this session from the Angular service. It works in Postman and I'm able to get a response there, but it doesn't work with the Angular post method call.
So I thought it might be a proxy issue (correct me if I'm wrong).
My local URL is: http://localhost:8080/MacromWeb/ws/login
How can I make a proxy.conf.json file?
For that I have added this line to my package.json file:
"start": "ng serve --proxy-config proxy.conf.json",
I have created a new file called proxy.conf.json and put this code in it:
{
    "/": {
        "target": "http://localhost:8080/MacromWeb/ws",
        "secure": false
    }
}
Then I tried it with both ng serve and npm start.
(Postman screenshot)
You can achieve this through a proxy; you need to provide the proper values in the proxy config.
/* should work too, but if MacromWeb is common to your API URLs, then instead of / use /MacromWeb/*.
proxy.conf.json would look something like this:
{
    "/MacromWeb/*": {
        "target": {
            "host": "localhost",
            "protocol": "http:",
            "port": 8080
        },
        "secure": false,
        "changeOrigin": true,
        "logLevel": "debug"
    }
}
Hope it helps.
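For completeness, a rough sketch of how the call could look from the Angular side once the proxy is in place, assuming a plain HttpClient service (the LoginService name and the payload shape are just placeholders, not from the question):
import { HttpClient } from '@angular/common/http';
import { Injectable } from '@angular/core';

@Injectable({ providedIn: 'root' })
export class LoginService {
  constructor(private http: HttpClient) {}

  login(username: string, password: string) {
    // Relative URL: the dev-server proxy forwards /MacromWeb/* requests
    // to http://localhost:8080, so this reaches /MacromWeb/ws/login
    return this.http.post('/MacromWeb/ws/login', { username, password });
  }
}
The point of using a relative URL is that the request goes to the Angular dev server (port 4200) and gets proxied, instead of hitting http://localhost:8080 directly from the browser.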
Say we have a server running on http://localhost:3000 and we want all calls to http://localhost:4200/api to go to that server.
In our proxy.conf.json file, we add the following content:
{
    "/api": {
        "target": "http://localhost:3000",
        "secure": false,
        "pathRewrite": {
            "^/api": ""
        }
    }
}
More on this: here
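A quick sketch of what that pathRewrite means in practice (the /api/users endpoint is an invented example): a request against the dev server's own origin is forwarded to port 3000 with the /api prefix stripped.
// Served from http://localhost:4200
fetch('/api/users')   // proxied to http://localhost:3000/users
    .then((res) => res.json())
    .then((users) => console.log(users));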

Querying remote registry service on machine <IP Address> resulted in exception: Unable to change open service manager

My cluster config file is as follows:
{
    "name": "SampleCluster",
    "clusterConfigurationVersion": "1.0.0",
    "apiVersion": "01-2017",
    "nodes": [
        {
            "nodeName": "vm0",
            "iPAddress": "here is my VPS ip",
            "nodeTypeRef": "NodeType0",
            "faultDomain": "fd:/dc1/r0",
            "upgradeDomain": "UD0"
        },
        {
            "nodeName": "vm1",
            "iPAddress": "here is my another VPS ip",
            "nodeTypeRef": "NodeType0",
            "faultDomain": "fd:/dc1/r1",
            "upgradeDomain": "UD1"
        },
        {
            "nodeName": "vm2",
            "iPAddress": "here is my another VPS ip",
            "nodeTypeRef": "NodeType0",
            "faultDomain": "fd:/dc1/r2",
            "upgradeDomain": "UD2"
        }
    ],
    "properties": {
        "reliabilityLevel": "Bronze",
        "diagnosticsStore": {
            "metadata": "Please replace the diagnostics file share with an actual file share accessible from all cluster machines.",
            "dataDeletionAgeInDays": "7",
            "storeType": "FileShare",
            "IsEncrypted": "false",
            "connectionstring": "c:\\ProgramData\\SF\\DiagnosticsStore"
        },
        "nodeTypes": [
            {
                "name": "NodeType0",
                "clientConnectionEndpointPort": "19000",
                "clusterConnectionEndpointPort": "19001",
                "leaseDriverEndpointPort": "19002",
                "serviceConnectionEndpointPort": "19003",
                "httpGatewayEndpointPort": "19080",
                "reverseProxyEndpointPort": "19081",
                "applicationPorts": {
                    "startPort": "20001",
                    "endPort": "20031"
                },
                "isPrimary": true
            }
        ],
        "fabricSettings": [
            {
                "name": "Setup",
                "parameters": [
                    {
                        "name": "FabricDataRoot",
                        "value": "C:\\ProgramData\\SF"
                    },
                    {
                        "name": "FabricLogRoot",
                        "value": "C:\\ProgramData\\SF\\Log"
                    }
                ]
            }
        ]
    }
}
It is almost identical to the demo file for an unsecure cluster that ships with the standalone Service Fabric download, except for my VPS IPs. I enabled the Remote Registry service. I ran
.\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.Unsecure.MultiMachine.json
but I got the following error:
Unable to change open service manager handle because 5
Unable to query service configuration because System.InvalidOperationException: Unable to change open service manager handle because 5
   at System.Fabric.FabricDeployer.FabricDeployerServiceController.GetServiceStartupType(String machineName, String serviceName)
Querying remote registry service on machine <IP Address> resulted in exception: Unable to change open service manager handle because 5.
Unable to change open service manager handle because 5
Unable to query service configuration because System.InvalidOperationException: Unable to change open service manager handle because 5
   at System.Fabric.FabricDeployer.FabricDeployerServiceController.GetServiceStartupType(String machineName, String serviceName)
Querying remote registry service on machine <Another IP Address> resulted in exception: Unable to change open service manager handle because 5.
Best Practices Analyzer determined environment has an issue. Please see additional BPA log output in DeploymentTraces
LocalAdminPrivilege : True
IsJsonValid : True
IsCabValid :
RequiredPortsOpen : True
RemoteRegistryAvailable : False
FirewallAvailable :
RpcCheckPassed :
NoConflictingInstallations :
FabricInstallable :
DataDrivesAvailable :
Passed : False
Test Config failed with exception: System.InvalidOperationException: Best Practices Analyzer determined environment has an issue. Please see additional BPA log output in DeploymentTraces folder.
   at System.Management.Automation.MshCommandRuntime.ThrowTerminatingError(ErrorRecord errorRecord)
I don't understand the problem. The VPSs are not connected on a local network; they all have public IPs. I don't know whether that is the issue. How do I make a virtual LAN among these VPSs? Can anyone give me some direction on this error? Any help is greatly appreciated.
Edit: I used the term VM instead of VPS.
Finally I got this working. Actually, all the nodes are on the same network; I had thought they weren't. I enabled file sharing and tried to access the shared folder from the node where I ran the configuration test to all the other nodes. I had to provide the login credentials, and then it worked like a charm.
