How do I create a wallet in Oracle Autonomous Database? On-premises we have a database server, but in the cloud we don't see the server files. Maybe create it on another server and upload it to storage?
You cannot upload custom wallets to Oracle Autonomous Database; you must use its pre-configured wallet, which contains the most commonly used SSL root certificates. You cannot connect to sites with self-signed certificates using UTL_HTTP.
From the documentation (https://docs.oracle.com/en/cloud/paas/autonomous-database/adbsa/appendix-database-pl-sql-packages-restrictions.html#GUID-829A7D07-1EA4-4F59-AA60-F780FABAFDEC):
PL/SQL Packages Restrictions
UTL_HTTP Restrictions
Connections through IP addresses are not allowed.
Only HTTPS connections are allowed (HTTP and HTTP_PROXY are disallowed).
All web services must be secured. The only allowed ports are 443 and 8443.
Your instance is preconfigured with an Oracle Wallet that contains more than 90 of the most commonly trusted root and intermediate SSL certificates. This Oracle Wallet is centrally managed and therefore you cannot consume 3rd party web services that are protected using self-signed SSL certificates.
The SET_AUTHENTICATION_FROM_WALLET procedure is disallowed.
The WALLET_PATH and WALLET_PASSWORD arguments for the CREATE_REQUEST_CONTEXT, REQUEST, and REQUEST_PIECES APIs are ignored.
Oracle Wallet configuration cannot be altered. All arguments for SET_WALLET API are ignored.
UTL_HTTP usage is audited by default. You cannot disable auditing for UTL_HTTP.
The UTL_HTTP URL must be accessible over the internet. The UTL_HTTP call is unable to reach URLs that are on private subnets or behind on-premises firewalls.
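To illustrate the practical upshot: on Autonomous Database you simply call UTL_HTTP against an HTTPS URL and the pre-configured wallet is used automatically; any SET_WALLET or WALLET_PATH arguments are ignored. A minimal sketch (the URL is a placeholder):

```sql
-- Hypothetical HTTPS GET from Autonomous Database; no wallet configuration
-- is needed (or possible) -- the managed wallet is applied automatically.
DECLARE
  l_req  UTL_HTTP.REQ;
  l_resp UTL_HTTP.RESP;
  l_line VARCHAR2(32767);
BEGIN
  l_req  := UTL_HTTP.BEGIN_REQUEST('https://example.com/', 'GET');
  l_resp := UTL_HTTP.GET_RESPONSE(l_req);
  BEGIN
    LOOP
      UTL_HTTP.READ_LINE(l_resp, l_line, TRUE);  -- TRUE strips the CRLF
      DBMS_OUTPUT.PUT_LINE(l_line);
    END LOOP;
  EXCEPTION
    WHEN UTL_HTTP.END_OF_BODY THEN
      UTL_HTTP.END_RESPONSE(l_resp);
  END;
END;
/
```

If the target site uses a self-signed certificate, this call fails with a certificate validation error and there is no supported way to add that certificate to the managed wallet.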
And see here:
https://enesi.no/2021/03/utl_http-from-oracle-autonomous-database/
Related
We have a server (IBM i) which hosts a queue manager. We have a third party who connects to it using MQ client software (through a B2B interconnect). This is currently working with TLS, but it was set up years ago and I'm trying to fully understand the moving parts because we want to change the certificate on the server.
As I understand things so far:
The server has a server certificate, issued by our company CA, along with our relevant CA certificates loaded in the server key store.
The client has a client certificate, issued by their company CA, along with their relevant CA certificates loaded in their key store.
The client and the server have each other's CA certificates loaded.
The server has a Server Connection Channel with a TLS CipherSpec defined, and a client certificate is required to connect.
What I do not understand is the link between what the client attempts to connect to (i.e. a connection string of some kind including a network address for the queue manager) and the server's certificate.
On the web, a server certificate's Common Name must match the name at which the web site was accessed. E.g. internally we can access a web server at https://server/somepage.html but if the certificate has server.company.com as the Common Name, then the browser will report it as insecure. Only using https://server.company.com/somepage.html is considered secure by the browser.
In the MQ Client-Server connection, does this relationship also need to be present? We currently have a server certificate with common name myserver. I want to change the server to use a different certificate with the common name myserver.company.com. Will this require the client to change their connection string or other configuration value?
The equivalent of the check a web client does (ensuring the server certificate's Common Name matches the name at which the web site was accessed) is for an MQ client application to set the value it expects in the SSLPEER attribute of its CLNTCONN definition (or equivalent, e.g. MQCD.SSLPeerNamePtr/SSLPeerNameLength in the MQCONNX programmable interface).
Unlike web connections, there is no specific standard here, so the MQ client and the back-end queue manager would agree on some convention that allows the MQ client application to know it has connected to the correct queue manager.
There is also the reverse check, where the queue manager can set the SSLPEER value at its end of the connection and only allow MQ client applications whose certificate carries certain values in its Distinguished Name to connect to the queue manager.
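As a sketch of both directions (channel names, hostnames, and DN values here are made up, not your actual configuration), the MQSC definitions might look like this:

```
* Client side: only trust a queue manager presenting this Distinguished Name
DEFINE CHANNEL(TO.QM1) CHLTYPE(CLNTCONN) +
       CONNAME('myserver.company.com(1414)') QMNAME(QM1) +
       SSLCIPH(ANY_TLS12_OR_HIGHER) +
       SSLPEER('CN=myserver.company.com')

* Queue manager side: require a client certificate and only accept
* clients whose certificate DN matches this filter
DEFINE CHANNEL(TO.QM1) CHLTYPE(SVRCONN) +
       SSLCIPH(ANY_TLS12_OR_HIGHER) SSLCAUTH(REQUIRED) +
       SSLPEER('OU=B2B,O=Third Party Ltd')
```

The practical consequence for your question: if the third party has set SSLPEER (or MQCD.SSLPeerNamePtr) to match CN=myserver, then changing the server certificate's Common Name to myserver.company.com will break their check until they update that value; if they have not set SSLPEER at all, no client-side change is needed.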
I am getting a 404 when trying to access Database Actions of an Autonomous Database with a private endpoint from my internal environment that is connected through VPN. Anyone know how to fix this?
All Autonomous Database tools are supported in databases configured with private endpoints, but additional configuration is required. To connect from your data center, the client must be able to resolve the Autonomous Database's Fully Qualified Domain Name (FQDN) to the private endpoint IP. For that you can either add an entry to your client's hosts file (e.g. /etc/hosts on Linux) or use hybrid DNS in Oracle Cloud Infrastructure.
In addition to the name resolution, your dynamic routing gateway must allow the traffic to and from the Autonomous Database.
For what it's worth, if you want to learn more about the private endpoint setup, check the official documentation, and specifically the connection example.
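For example, assuming the private endpoint IP is 10.0.2.15 and the Database Actions FQDN shown on your database's console is myadb.adb.eu-frankfurt-1.oraclecloudapps.com (both values are placeholders, substitute your own), the hosts-file approach is just:

```
# /etc/hosts on the client in your data center (IP and FQDN are placeholders)
10.0.2.15   myadb.adb.eu-frankfurt-1.oraclecloudapps.com
```

After that, the browser reaches the tools endpoint by name instead of IP, which is what the private endpoint's TLS setup and routing expect.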
I have a Node.js application using Kerberos to authenticate to two separate services. One is an Oracle database and the second is an HTTP API. When connecting to each of these services independently, Kerberos authentication is successful.
The problem comes when attempting to connect to both in a single process. The web API's service name is a load-balanced CNAME. Once the database connection is set up, all subsequent Kerberos ticket requests use DNS canonicalization on the client side prior to negotiating a ticket for any service, despite rdns=false being set in the Kerberos configuration.
[libdefaults]
rdns=false
This causes all attempts to access the web api to fail.
Error: Unspecified GSS failure. Minor code may provide more information: Server not found in Kerberos database
I am explicitly pointing Oracle (via sqlnet.ora) to use the Kerberos configuration containing the above libdefaults section. I'm a little lost on what to consider next; does anyone know if it's possible to tell Oracle not to do DNS canonicalization for Kerberos auth?
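One thing worth checking (I can't verify it against your environment): in MIT Kerberos, rdns only disables the reverse-DNS step, while forward canonicalization of a CNAME is controlled by the separate dns_canonicalize_hostname setting (available since krb5 1.12). The combined libdefaults section would be:

```
[libdefaults]
    rdns = false
    dns_canonicalize_hostname = false
```

With dns_canonicalize_hostname=false, the library requests a ticket for the service principal built from the name the application used (the load-balanced CNAME) rather than the canonicalized A-record name, which is typically what fixes "Server not found in Kerberos database" errors behind load balancers.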
For my new task I have to use SCOM to monitor non-domain servers/computers. My company told me to do it with only one management server that contains the other SCOM features. So I have a Windows 2016 server running SCOM in a local domain, and I have to connect the other devices. It seems easy, but I have a problem with certificates: when I issue certificates for my server and computers and import them with MOMCertImport, in Event Viewer I see event ID 21007, which tells me "The OpsMgr Connector cannot create a mutually authenticated connection to 'PC-NAME' because it is not in a trusted domain." So I have the certificates installed but still can't connect the agent to SCOM. What should I do? I have searched everywhere for this problem, but no solution has worked for me!
There are a few things you need to look at.
1. The certificate must have both client authentication and server authentication purposes.
2. Authentication is MUTUAL, i.e. your agent confirms its identity to a gateway or management server, AND the gateway or management server confirms its identity to the agent.
3. Certificates must be issued to the EXACT computer FQDN. If you rename the computer, join a domain, or change the DNS suffix, this will invalidate the certificate, because the FQDN changes.
4. Install and bind certificates at both participating servers (i.e. the agent and the MS or GW). This is because of #2.
5. Obviously, you need individual certificates for each server, because of #3.
6. Ensure that both servers can build the trust chain for their own certificate and for the other party's. Ideally you have a single root/issuing CA that issued both certificates; in that case, just install the root/issuing CA certificates in the appropriate stores of the local computer account. If using self-signed certificates, you need to install each one as trusted at the other party.
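For reference, a certificate request covering points 1 and 3 might look like the following hypothetical certreq .inf (the two OIDs are the standard Server Authentication and Client Authentication enhanced key usages; the Subject must be the agent machine's exact FQDN, shown here with a made-up name):

```
; request.inf -- hypothetical example; submit with: certreq -new request.inf request.req
[NewRequest]
Subject = "CN=agent01.workgroup.local"   ; must match the computer's exact FQDN
Exportable = TRUE
KeyLength = 2048
KeySpec = 1
KeyUsage = 0xA0                          ; Digital Signature, Key Encipherment
MachineKeySet = TRUE

[EnhancedKeyUsageExtension]
OID = 1.3.6.1.5.5.7.3.1                  ; Server Authentication
OID = 1.3.6.1.5.5.7.3.2                  ; Client Authentication
```

Once the issued certificate is installed in the local computer's Personal store (with its private key), MOMCertImport binds it to the OpsMgr Connector; event 21007 usually means one of the six points above is still unmet, most often the FQDN mismatch or a missing trust chain.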
I'm currently working on a project that requires connecting to Oracle Event Hub, which is Oracle's version of Kafka. The systems contacting the REST proxy won't accept the self-signed certificate, so I'm trying to do one of the following:
1. Turn off HTTPS and allow HTTP connections to Kafka.
2. Import a signed certificate I generated.
Unfortunately, I can't locate the certificate store, nor do I know how (or whether it is even possible) to have the REST proxy run on HTTP.
The solution was simpler than I thought: Oracle Event Hub's REST proxy uses nginx for service exposure. A few modifications to the configuration file and I was able to both remove HTTPS and allow a certificate that I had issued.
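For anyone hitting the same issue, the kind of nginx change involved looks roughly like this (the server_name, paths, and port are illustrative, not the actual Event Hub defaults, so adapt them to the config file you find on the host):

```
# Illustrative nginx server block for the REST proxy front end
server {
    listen 443 ssl;
    server_name restproxy.example.com;

    # Point these at your own CA-issued certificate and key instead of the
    # bundled self-signed pair:
    ssl_certificate     /etc/nginx/certs/issued.crt;
    ssl_certificate_key /etc/nginx/certs/issued.key;

    # ...existing proxy_pass to the REST proxy backend...
}
```

Allowing plain HTTP would instead mean adding a server block with a non-SSL listen directive, but keep in mind that exposes the Kafka REST traffic unencrypted, so the certificate swap is usually the better of the two options.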