Octopus Deploy build server seems to think it needs a different thumbprint than originally indicated - octopus-deploy

I am getting the following error when attempting to connect to a listening agent on a deployment target (the thumbprints and server names were obfuscated; the two thumbprints involved are aliased as AAAAA and BBBBB):
An error occurred when sending a request to 'https://TARGETSERVER:10933/', after the request began: The server at https://TARGETSERVER:10933/ presented an unexpected security certificate. We expected the server to present a certificate with the thumbprint 'AAAAA'. Instead, it presented a certificate with a thumbprint of 'BBBBB' and subject 'CN=Octopus Tentacle'. This usually happens when the client has been configured to expect the server to have the wrong certificate, or when the certificate on the server has been regenerated and the client has not been updated. It may also happen if someone is performing a man-in-the-middle attack on the remote machine, or if a proxy server is intercepting requests. Please check the certificate used on the server, and verify that the client has been configured correctly.
I checked the Tentacle configuration and it showed the following:
{"Octopus": {
"Home": "/etc/octopus/Tentacle",
"Watchdog": {
"Enabled": false,
"Instances": "*",
"Interval": 0
}
},
"Tentacle": {
"CertificateThumbprint": "BBBBB",
"Communication": {
"TrustedOctopusServers": [
{
"Thumbprint": "AAAAA",
"CommunicationStyle": 1,
"Address": null,
"Squid": null,
"SubscriptionId": null
}
]
},
"Deployment": {
"ApplicationDirectory": "/home/Octopus/Applications"
},
"Services": {
"ListenIP": null,
"NoListen": false,
"PortNumber": 10933
}
}
}
So the thumbprint for the Tentacle is shown to be BBBBB, yet the error seems to indicate that the build server expected AAAAA to come from the target. What should I do so that the build server (which I do not have easy access to) and the target server agree on the correct thumbprints to exchange?

We expected the server to present a certificate with the thumbprint 'AAAAA'. Instead, it presented a certificate with a thumbprint of 'BBBBB' and subject 'CN=Octopus Tentacle'.
If the thumbprints have been aliased consistently, this message makes me think the wrong value was added to the Thumbprint field on the Target page in Octopus.
The value for Thumbprint in the communication section on the Target page should match the BBBBB value in your Tentacle config file. The AAAAA value is the Octopus Server certificate thumbprint, which the Tentacle needs to know so that it accepts communications from known servers. The BBBBB value is the Octopus Tentacle certificate thumbprint, which the Server needs to know so that it is communicating with known targets.
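If you can get a shell on the target, the Tentacle CLI is a quick way to confirm which value belongs where. A minimal sketch, assuming the Linux Tentacle is installed at /opt/octopus/tentacle and uses the default instance name "Tentacle" (adjust both to match your install):

# Print the Tentacle's own certificate thumbprint -- this is the BBBBB value
# that must be entered in the Thumbprint field on the deployment target page.
/opt/octopus/tentacle/Tentacle show-thumbprint --instance Tentacle

# The AAAAA value (the Octopus Server's thumbprint) is what the Tentacle trusts;
# it already looks correct in your config, but it can be re-registered with:
/opt/octopus/tentacle/Tentacle configure --instance Tentacle --trust "AAAAA"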

Related

ASP.NET Core 6 getting Microsoft.AspNetCore.Server.Kestrel.Https.Internal.HttpsConnectionMiddleware[1] Failed to authenticate HTTPS connection

I am building a five-method Web API application that needs to be secured with TLS in production. I have a gRPC server that works, and when I use the same configuration in the Web API app I get the following error:
dbug: Microsoft.AspNetCore.Server.Kestrel.Https.Internal.HttpsConnectionMiddleware[1]
Failed to authenticate HTTPS connection.
System.IO.IOException: Received an unexpected EOF or 0 bytes from the transport stream.
at System.Net.Security.SslStream.ReceiveBlobAsync[TIOAdapter](TIOAdapter adapter)
at System.Net.Security.SslStream.ForceAuthenticationAsync[TIOAdapter](TIOAdapter adapter, Boolean receiveFirst, Byte[] reAuthenticationData, Boolean isApm)
at Microsoft.AspNetCore.Server.Kestrel.Https.Internal.HttpsConnectionMiddleware.OnConnectionAsync(ConnectionContext context)
dbug: Microsoft.AspNetCore.Server.Kestrel.Https.Internal.HttpsConnectionMiddleware[1]
Failed to authenticate HTTPS connection.
System.IO.IOException: Received an unexpected EOF or 0 bytes from the transport stream.
at System.Net.Security.SslStream.ReceiveBlobAsync[TIOAdapter](TIOAdapter adapter)
at System.Net.Security.SslStream.ForceAuthenticationAsync[TIOAdapter](TIOAdapter adapter, Boolean receiveFirst, Byte[] reAuthenticationData, Boolean isApm)
at Microsoft.AspNetCore.Server.Kestrel.Https.Internal.HttpsConnectionMiddleware.OnConnectionAsync(ConnectionContext context)
I ripped out all of the gRPC code in the Web API Program.cs file and instead tried to configure Kestrel in the launchSettings.json file.
The snippet below shows my current effort:
"Kestrel": {
"Endpoints": {
"HttpsInlineCertFile": {
"Url": "https://localhost:30050",
"Certificate": {
"Path": "D:\\Development\\Certs\\MDevelopment.pfx",
"Password": "xxxxxxxx"
}
}
},
"Certificates": {
"Default": {
"Path": "D:\\Development\\Certs\\MDevelopment.pfx",
"Password": "xxxxxxxx"
}
}
}
The development and production certs work in the gRPC case, and also for securing a mail server I have, so I am confident the certs are good. I have tried so many things that I cannot tell you anything else of value.
@Buffoonism, @Alvaro: I tried many things to overcome this error. I did find that the cert being used in production was not correct (the node name changed, messing everything up...).
What I did was create a new cert to ensure I was using the correct keys to generate the certificate. Then I changed the way the cert was being applied to the listener, as follows:
kestrelServerOptions.ConfigureHttpsDefaults(httpsConnectionAdapterOptions =>
{
    // Apply the production certificate explicitly and pin the TLS version.
    httpsConnectionAdapterOptions.ServerCertificate = new X509Certificate2(certPath, certPassword);
    httpsConnectionAdapterOptions.SslProtocols = SslProtocols.Tls12;
});
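For context, a minimal .NET 6 Program.cs sketch of where that ConfigureHttpsDefaults call can live; the certificate path, password, and port are taken from the question and are placeholders, so treat this as an illustration rather than the poster's exact code:

// Minimal .NET 6 hosting sketch wiring up the ConfigureHttpsDefaults call above.
using System.Security.Authentication;
using System.Security.Cryptography.X509Certificates;

var certPath = @"D:\Development\Certs\MDevelopment.pfx"; // placeholder, from the question
var certPassword = "xxxxxxxx";                           // placeholder

var builder = WebApplication.CreateBuilder(args);

builder.WebHost.ConfigureKestrel(kestrelServerOptions =>
{
    kestrelServerOptions.ConfigureHttpsDefaults(httpsOptions =>
    {
        httpsOptions.ServerCertificate = new X509Certificate2(certPath, certPassword);
        httpsOptions.SslProtocols = SslProtocols.Tls12;
    });

    // Mirrors the https://localhost:30050 endpoint from the question's config;
    // UseHttps() with no arguments picks up the defaults configured above.
    kestrelServerOptions.ListenLocalhost(30050, listenOptions => listenOptions.UseHttps());
});

var app = builder.Build();
app.MapGet("/", () => "TLS is working");
app.Run();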
Once I made that change, everything started to work. @Microsoft, you really need to step up your documentation game! All of your examples revolve around using a dev cert, which is easy. Using a production-grade cert requires more handling (such as trusting the cert) and could use waaaaaaay better examples!
I hope that helps!

PATCH API doesn't work on Google Cloud Run instance

I have Cloud Run services hosting a Go OSB application that implements gRPC but exposes HTTP REST APIs via grpc-gateway, using Cloud SQL (MySQL) as the database. All of the CRUD APIs respond fine except PATCH.
PATCH throws the error below with HTTP response code 503:
{
  "textPayload": "The request failed because either the HTTP response was malformed or connection to the instance had an error.",
  "insertId": "6141e984000c63529e7b7afd",
  "httpRequest": {
    "requestMethod": "PATCH",
    "requestUrl": "https://********-********-mr336-qv7hk7cx3a-uc.a.run.app/v2/service_instances/237e80fd-b22e-4df0-b9ed-23c91a4d7f51",
    "requestSize": "1102",
    "status": 503,
    "responseSize": "976",
    "userAgent": "PostmanRuntime/7.28.4",
    "remoteIp": "********",
    "serverIp": "********",
    "latency": "0.410343680s",
    "protocol": "HTTP/1.1"
  },
  "resource": {
    "type": "cloud_run_revision",
    "labels": {
      "location": "us-central1",
      "revision_name": "********-********-mr336-00001-hop",
      "project_id": "********-********-l-app-us-01",
      "configuration_name": "********-********-mr336",
      "service_name": "********-********-mr336"
    }
  },
  "timestamp": "2021-09-15T12:39:32.811858Z",
  "severity": "ERROR",
  "labels": {
    "instanceId": "00bf4bf02dff6d5f53cff1f1828cafbca265606a996eddff5cb44e3fff674efb77ca51eca7087fb8b8e7acba227b2a3e3e913bdfcc0a487640a2e028"
  },
  "logName": "projects/********/logs/run.googleapis.com%2Frequests",
  "trace": "projects/********/traces/e29e5add9452d171e9eebd26817bb667",
  "receiveTimestamp": "2021-09-15T12:39:32.817171397Z"
}
Points to note:
After every PATCH request I can see the instance start-up logs, i.e. after the error above I see the container entrypoint (server) startup logs each time, as if it were a cold start.
As soon as server startup completes, the same error appears in the logs again.
Importantly, I also cannot see any logs from my application, which suggests the PATCH request is not reaching the container instances running behind the Cloud Run service.
Also, after a cold start my active instances go idle and then scale down to 0 within a minute of the last request. That is how it is supposed to work, and it doesn't cause issues for the other APIs, but I can't find any lead on what the issue is with PATCH.
This is fixed now!
The issue was due to handling multiple protocols on the same port: one of the protocol matchers was causing the PATCH API to fail with "Empty reply from server". Changing the matchers fixed it.
RCA:
The cmux HTTP1Fast matcher only matches on the method name in the HTTP request, and its default method list does not include PATCH.
This matcher is very optimistic: if it returns true, it does not mean that the request is valid HTTP.
I used the correct but slower HTTP1 matcher instead, which scans the whole request up to 4096 bytes.
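For reference, a minimal sketch of the matcher change described above, assuming the usual soheilhy/cmux setup for serving gRPC and grpc-gateway on one port; the listener address, servers, and handlers are placeholders, not the poster's actual code:

// cmux sketch: HTTP1Fast's default method list omits PATCH, so either add PATCH
// to it or switch to the slower but correct HTTP1 matcher, as the answer did.
package main

import (
	"net"
	"net/http"

	"github.com/soheilhy/cmux"
	"google.golang.org/grpc"
)

func main() {
	// Single listener shared by gRPC and the REST (grpc-gateway) traffic.
	lis, err := net.Listen("tcp", ":8080")
	if err != nil {
		panic(err)
	}

	m := cmux.New(lis)

	// gRPC traffic is recognised by its content-type header.
	grpcL := m.Match(cmux.HTTP2HeaderField("content-type", "application/grpc"))

	// Option 1: keep the fast matcher but teach it about PATCH:
	//   httpL := m.Match(cmux.HTTP1Fast(http.MethodPatch))
	// Option 2 (what the answer describes): the slower but correct HTTP1 matcher.
	httpL := m.Match(cmux.HTTP1())

	grpcServer := grpc.NewServer()
	httpServer := &http.Server{Handler: http.DefaultServeMux} // grpc-gateway mux in real code

	go grpcServer.Serve(grpcL)
	go httpServer.Serve(httpL)

	if err := m.Serve(); err != nil {
		panic(err)
	}
}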

"unable to get local issuer certificate" on Windows with Python and Postman after adding the client certificate

I'm a Data Engineer working on Windows 10.
I'm trying to make a simple POST request in Python to retrieve an authentication token from a custom database service.
My code is as straightforward as possible:
import requests
import ssl
import certifi

url = ""
headers = {'Content_Type': 'application/x-www-form-urlencoded'}
payload = {
    'password': '',
    'scope': '',
    'client_id': '',
    'client_secret': '',
    'username': '',
    'grant_type': ''
}

if __name__ == '__main__':
    token_response = requests.request("POST", url, headers=headers, data=payload)
    print(token_response.content)
I have removed the payload values as well as the url values for privacy. When I run the code, I get back "Max retries exceeded with url" and "[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)".
I am able to sidestep this error by setting verify=False in my POST request, but given that I am handling sensitive data, I cannot do that. I mention it to demonstrate that my credentials do work and that there is clearly something wrong with either the Cinchy certificate, my Python setup, or the business network I am working on (VPN? Something else?).
I see from this answer (https://stackoverflow.com/a/67998066/9403186) that it might be that "If you are running under business network, contact the network administrator to apply required configurations at the network level." I am certainly running under a business network; what would the required configurations look like?
Most of the answers out there on this error say that you need to download the CA certificate of the URL (e.g. Windows: Python SSL certificate verify failed), but I have done that and it is not working. I went to the site, logged in, clicked on the Chrome lock icon, downloaded the certificate as a Base-64 encoded X.509, and added it to my certifi cacert.pem file at 'C:\Users\gblinick\AppData\Local\Programs\Python\Python39\lib\site-packages\certifi\cacert.pem' as per https://medium.com/@menakajain/export-download-ssl-certificate-from-server-site-url-bcfc41ea46a2.
Any help here would be greatly appreciated. My hunch is that it has to do with the business network, but I don't know how to check this. Ideas would be fantastic.
Please let me know if I can provide further info.
Thanks so much!
EDIT: The request with Postman also doesn't work, unless I turn off SSL authentication, so clearly it's the same problem and this is not simply an issue with Python.
If I make the same request with Powershell, however, it does work:
# [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$Result = Invoke-WebRequest -Uri $Endpoint -UseBasicParsing -Method 'Post' -Body $Body -Headers $Header -ErrorAction Stop # -SslProtocol Tls
This works whether or not the first line is commented out. It seems like PowerShell might have some built-in SSL or TLS handling that Python and Postman do not, or the server somehow expects PowerShell to connect and not Postman or Python. Note that I have # -SslProtocol Tls commented out because I was using PowerShell 5.1, for which the -SslProtocol parameter does not exist.
Ultimately, the answer was super simple: I was downloading the wrong corporate certificate. When downloading the SSL certificate from Chrome, there is a "Certification Path" tab. Mine had three certificates in it; I'll call them A, B, and C. B descended from A, and C from B. I made the mistake of downloading C, the lowest-level one. When I downloaded A, the root, and put it in my cacert.pem file, it worked.
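For anyone hitting the same thing, a minimal sketch of the non-verify=False approach, assuming the exported root certificate (A above) has been saved as a PEM file; the path, URL, and payload below are placeholders:

# Point requests at the corporate root CA (or a bundle containing it) instead of
# disabling verification. Path, URL, and payload are placeholders.
import requests

CORPORATE_ROOT_CA = r"C:\certs\corporate-root-ca.pem"  # the exported root cert (A)

token_response = requests.post(
    "https://example.internal/oauth/token",              # placeholder URL
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    data={"grant_type": "client_credentials"},           # placeholder payload
    verify=CORPORATE_ROOT_CA,                            # or a full CA bundle path
)
print(token_response.status_code)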

Ocelot API gateway fails to establish SSL connection to downstream

I have an Ocelot API gateway in front of all our microservices providing some form of an API. We also have some external services hosted in Azure, one of which I want to route through our API gateway. However, as the title says, when attempting to connect using https as the downstream scheme I get the following error message:
Error Code: ConnectionToDownstreamServiceError Message: Error connecting to downstream service, exception: System.Net.Http.HttpRequestException: The SSL connection could not be established, see inner exception.
---> System.IO.IOException: Cannot determine the frame size or a corrupted frame was received.
at System.Net.Security.SslStream.ReceiveBlobAsync[TIOAdapter](TIOAdapter adapter)
at System.Net.Security.SslStream.ForceAuthenticationAsync[TIOAdapter](TIOAdapter adapter, Boolean receiveFirst, Byte[] reAuthenticationData, Boolean isApm)
at System.Net.Http.ConnectHelper.EstablishSslConnectionAsyncCore(Boolean async, Stream stream, SslClientAuthenticationOptions sslOptions, CancellationToken cancellationToken)
Googling the inner exception leads to some posts suggesting that since .NET 5.0 preview 6 the default TLS version is determined by the operating system default, which by now should probably be TLSv1.3. I believe Azure Functions does not support TLSv1.3 yet, and it appears that the downstream request from Ocelot does not fall back to TLSv1.2 when that is the case.
Attempting to run the request a number of other ways yields only success, which leads me to believe that this is an issue specific to Ocelot. Running requests directly to the Azure resource via:
Postman: succeeds. TLSv1.2 is used.
curl: succeeds. TLS1.2 is used here too.
GetAsync using HttpClient in C# (.net 5.0): succeeds.
Running curl with -v I can see that it attempts a handshake with TLSv1.3 first but then falls back to TLS1.2.
Can anyone confirm that the TLS version is indeed the problem (see the probe sketch after the additional information below)? And in that case, is there any way to allow TLSv1.2 in an ASP.NET 5 Ocelot API gateway project?
Additional information:
Setting "DangerousAcceptAnyServerCertificateValidator": true does not resolve the issue
The gateway has no issues when using http as downstream scheme
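To narrow down whether the negotiated TLS version is the culprit, independently of Ocelot, one option is to pin the protocol on a plain .NET 5 HttpClient call to the same downstream host. A minimal sketch; the URL is a placeholder for the Azure Function endpoint:

// Standalone .NET 5 probe: force a specific TLS version on a direct request to
// the downstream service, then compare Tls12 vs Tls13 behaviour.
using System;
using System.Net.Http;
using System.Net.Security;
using System.Security.Authentication;
using System.Threading.Tasks;

class TlsProbe
{
    static async Task Main()
    {
        var handler = new SocketsHttpHandler
        {
            SslOptions = new SslClientAuthenticationOptions
            {
                EnabledSslProtocols = SslProtocols.Tls12 // swap for SslProtocols.Tls13 to compare
            }
        };

        using var client = new HttpClient(handler);
        var response = await client.GetAsync("https://your-function.azurewebsites.net/api/ping"); // placeholder
        Console.WriteLine((int)response.StatusCode);
    }
}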
I have a similar environment where most of my Ocelot downstream requests currently point to localhost microservices (in development). However, I have rigged up an Azure Function (HTTP trigger) that I deployed to Azure and therefore need Ocelot to route a downstream request to the external-facing URL.
What I can confirm is that my Ocelot environment is running on .NET Core 5.0 and is able to successfully call the Azure Function and return the data to my front-end web app (upstream route).
I haven't had to do anything different with Ocelot for it to work. Below is the snippet from the ocelot.json file:
"Routes": [
// ---------------------------------------------------
// ---------- Azure Functions HTTP Triggers ----------
// ---------------------------------------------------
// ----------- Logs -----------
{
"DownstreamPathTemplate": "/api/logproperties",
"DownstreamScheme": "https",
"DownstreamHostAndPorts": [
{
"Host": "azurefunction-httpget-logproperties.azurewebsites.net",
"Port": 443
}
],
"UpstreamPathTemplate": "/logs/logproperties",
"UpstreamHttpMethod": [ "Get" ],
"RouteIsCaseSensitive": false,
"AuthenticationOptions": {
"AuthenticationProviderKey": "Bearer",
"AllowedScopes": []
}
}
]
The actual URL path for my function is shown below (obviously with the API key removed):
https://azurefunction-httpget-logproperties.azurewebsites.net/api/logproperties?code=<MyApiKeyHere>&containerName=ohiAppSource&logPropertyName=OhiAppSource
I was having issues initially, but after removing 'https://' from the string set for the "Host" in the JSON config it went through OK.
My Ocelot environment authenticates upstream requests (client app --> Ocelot) using the Microsoft Identity Web library, so calling a function with a simple API key is not a security concern for me. But regardless of how you're securing the inbound side of Ocelot, I don't think that would affect the issue you encountered. My solution is not promising a fix to your particular issue, but it always helps to see a working configuration.

MPNS sending pushing forbidden response 403

I want to use MPNS in my Windows Phone app, and we are going to authenticate the web service that sends the push to clients.
I have done all the steps needed for MPNS authentication:
Uploaded the certificates to my Windows Phone dev dashboard.
Created the channel name with the common name of my certificates.
Got back a return URI with https://, which means my push channel is authenticated.
Added the certificates to my WebRequest.
But when I send the push message via a web request, I get a "The remote server returned an error: (403) Forbidden." response. I have read that this means I am doing something wrong with my request and not adding the certificate properly.
Here is my code for attaching the certificate to the request:
X509Certificate2 Cert = new X509Certificate2(Server.MapPath("Certs/abc.crt"), "password");
request.ClientCertificates.Add(Cert);
We have a VeriSign SSL certificate, and I am testing this from Visual Studio's built-in IIS. It is not hosted on any server right now, it is not configured in IIS, and no SSL is configured for IIS.
Is that the issue, or something else?
There's no unique answer to your problem.
However, when you add a client certificate to your request you only attach the public part. During client certificate authentication the server issues a challenge that can only be answered with the matching private key. If this authentication process fails, you will get a 403 Forbidden.
Therefore, you must ensure that the .pfx/.p12 (containing your private key, public certificate, intermediate CA and root CA certificates) is imported into your local machine certificate store and that your IIS server has access to it.
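For example, importing the .pfx into the local machine store can be done from an elevated PowerShell prompt (the file path is a placeholder); granting the IIS application pool identity read access to the private key is then done via the certificates MMC snap-in:

# Import the client .pfx (public cert + private key chain) into LocalMachine\My
# so that IIS / the web service can use it. The file path is a placeholder.
Import-PfxCertificate -FilePath "C:\certs\abc.pfx" `
    -CertStoreLocation Cert:\LocalMachine\My `
    -Password (Read-Host -AsSecureString -Prompt "PFX password")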
Because there are so many variables related to Windows, you can use curl instead for testing purposes. Note that you must convert your .pfx/.p12 to .pem first (use openssl).
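A typical conversion looks like this (file names are placeholders); note that -nodes leaves the private key unencrypted in the resulting .pem, so keep it somewhere safe:

# Convert the client .pfx/.p12 into a PEM file that curl's --cert option can use.
openssl pkcs12 -in client-cert.p12 -out cert.pem -nodes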
curl --cert P:\cert.pem:PASSWORD -v -H "Content-Type:text/xml" -H "X-WindowsPhone-Target:Toast" -H "X-NotificationClass:2" -X POST -d "<?xml version='1.0' encoding='utf-8'?><wp:Notification xmlns:wp='WPNotification'><wp:Toast><wp:Text1>My title</wp:Text1><wp:Text2>My subtitle</wp:Text2></wp:Toast></wp:Notification>" https://am3.notify.live.net/unthrottledthirdparty/01.00/push_uri_here
Once you get that working, you may face the same problem as me: some notifications are sent correctly and others are rejected with a 403 Forbidden for no apparent reason. See this thread:
http://social.msdn.microsoft.com/Forums/sharepoint/en-US/383617ab-eafe-45fb-92cc-5e4b25a50e7f/authenticated-push-notifications-failing-randomly-403-forbidden?forum=wpnotifications
and the same here:
https://stackoverflow.com/questions/23805883/windows-phone-authenticated-push-notifications-failing-randomly-403-forbidden
