Datasource name for a local Oracle Docker instance

I am trying to connect to an Oracle Docker instance through a framework. The framework requires the table name and a logical host name to be passed in.
I am able to connect to the Docker instance using a JDBC connection in Java.
My question is: how do I set a logical host name for this Docker instance that I can then use?
Things I have tried:
- Adding a logical host entry to the /etc/hosts file in the image using the docker run command
- Passing the Docker container name as the logical host
- Using the host name mentioned in the tnsnames.ora file in the image
I am using Docker version 18.09.1, build 4c52b90 and Oracle v12.2.0.1. Any pointers would be helpful.
Thanks in advance!!

Have you considered containerizing the application that uses that framework? If you put all the containers in the same stack, using docker-compose or docker stack, you'll be able to reach each of the services through its service name.

Check out the Creating an Oracle Database Docker image blog for more details.
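For instance, a minimal docker-compose.yml sketch (the app image, environment variable names, and service names here are placeholders, not from the question); the framework could then use oracle-db as the logical host name:

```yaml
version: "3"
services:
  app:
    image: my-framework-app            # placeholder for the application image
    environment:
      DB_HOST: oracle-db               # the service name doubles as the host name
      DB_PORT: "1521"
    depends_on:
      - oracle-db
  oracle-db:
    image: oracle/database:12.2.0.1-ee # image built from Oracle's docker-images repo
    ports:
      - "1521:1521"
```

Within this stack, app resolves oracle-db via Docker's embedded DNS, so no /etc/hosts edits are needed.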

Related

Local Docker database to AWS database over VPN

I'm a beginner with Docker; I hope everyone can help. Much appreciated.
I downloaded a Docker image from my company repository and managed to create a container on my local machine from the image; let's name it mydb. It was created with the command below:
docker run --name mydb -p 1521:1521 -d mycompany.com:5000/docker-db:20.0.04
I am able to access the database through my SQL Developer with the following connection string: system/abc123@127.0.0.1:1521/ORCL
Our company has a database server in AWS; let's name it awsdb. I can access it after VPN login.
I am able to access that database with the following connection string in SQL Developer:
system/abc123@awsdb.amazonaws.com:1521/awsdb
Question:
How can I create a database link in mydb to awsdb with database link name "my_dblink"? E.g. select sysdate from dual@my_dblink.
I tried the following command:
CREATE PUBLIC DATABASE LINK my_dblink
CONNECT TO system
IDENTIFIED BY abc123
USING 'awsdb.amazonaws.com:1521/awsdb';
but it returns the error ORA-12543: TNS:destination host unreachable.
I tried removing the container and recreating it with --net=host set:
docker run --name mydb -p 1521:1521 -d --net=host mycompany.com:5000/docker-db:20.0.04
but now I can't even connect with system/abc123@127.0.0.1:1521/ORCL;
error ORA-12541 is returned: TNS:no listener.
How can I open a connection from inside Docker to the AWS database server? Thank you.
First of all, I believe you need to understand what you are trying to accomplish.
When you create a database link between two databases, the main requirement you must fulfil is network connectivity between both of them on the ports you are using. As one of them is hosted in a public cloud, at a minimum you would need:
A network connection between the network where Docker is installed and the public cloud in AWS.
But as your Docker container runs on your local laptop, the AWS database would have to be open to the Internet, which is a security issue and probably not enabled.
Moreover, you would need firewall rules for all the ports you might use for this connectivity.
You are using a VPN login that allows you to access AWS Cloud resources because you connect through it (probably using Active Directory and/or a certificate, perhaps even SSO federation between your company's AD and the resources in AWS); the database itself cannot connect using that VPN session.
Summarizing: that is not possible, and if I were in Security I would never allow it. The only option for you would be to create a Docker database in AWS and then create the database link there.
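Before creating the link, you can sanity-check the raw TCP reachability this answer describes. A minimal Python sketch (host and port are the ones from the question; run it from the machine where mydb runs):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failures, timeouts and refused connections
        return False

# ORA-12543 ("destination host unreachable") implies this check fails too:
if __name__ == "__main__":
    print(tcp_reachable("awsdb.amazonaws.com", 1521))
```

If this returns False, no CREATE DATABASE LINK statement can succeed, regardless of the credentials used.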

What is the connection string for SQL Server running on Mac Docker in ASP.NET Core?

I'm trying to finish the ASP.NET Core tutorial on Pluralsight on a Mac. I'm running MS SQL Server using Docker and it seems to work (I have the SQL database up and running).
The second step was to have my ASP.NET Core application connect to this MSSQL database. Here is what I have for the connection string inside appsettings.json:
"ConnectionStrings": {
  "OdeToFood2Db": "Data Source=(localdb)\\MSSQLLocalDB;Initial Catalog=OdeToFood2;Integrated Security=True"
}
This is what I have for ConfigureServices() inside Startup.cs:
services.AddDbContextPool<OdeToFood2DbContext>(
    options =>
    {
        options.UseSqlServer(Configuration.GetConnectionString("OdeToFood2Db"));
    }
);
I then tried to run
dotnet ef dbcontext info -s ../OdeToFood2/odeToFood2.csproj
but I'm getting the
Build started...
Build failed. Use dotnet build to see the errors.
error.
I think the issue is that my connection string is wrong, since I'm running MSSQL on Docker and not locally as in the tutorial I'm following.
If anyone could point me in the right direction that would definitely help a ton; I've been stuck on this issue for 5 days now and it is excruciating. Thanks in advance!
"ConnectionStrings": {
"MyWindowsConnection": "Server=(localdb)\\mssqllocaldb;Database=TestWinMac;Trusted_Connection=True;",
"MyMacConnection": "Server=localhost,1433;Initial Catalog=TestWinMac;User ID=SA;Password=MyP#ssword"
}
See my connection titled "MyMacConnection"; 1433 is the port.
I have a project that exists on both Mac and Windows machines, and I switch out the connection string based on the platform I'm using.
Docker runs its own internal virtual network on your machine; finding the IP/hostname of your SQL container should help, since technically your SQL instance is running inside a container.
On your docker machine run
docker ps
This should list all running containers, one of which should be your SQL container. The output should look similar to:
CONTAINER ID   IMAGE       COMMAND          CREATED      STATUS      PORTS   NAMES
a8b6facb199c   SQL-Image   "some command"   X days ago   Up X days   8080    sqlserver
Of interest are the PORTS and NAMES fields. Your PORTS field should list at least one port; this is likely the default port of the SQL provider you're using, but it may not be.
We now have the port to try to connect to; next we need the IP address. Run the following, where <image-name> is the value from the NAMES field in the previous command (or use the container ID):
docker inspect <image-name>
This will dump the attributes of your container, including the IP address, which will likely be a 172.x address.
Now that we have the port and the IP address, you should be able to modify your connection string to point to those values.
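Putting the port and IP together, here is a small Python helper sketching how the resulting connection string looks (the IP, database name, and default password below are hypothetical placeholders, not values from the question):

```python
def sql_server_conn_string(host: str, port: int, database: str,
                           user: str = "SA", password: str = "<password>") -> str:
    """Build a SQL Server connection string; note the comma between host and port."""
    return (f"Server={host},{port};Initial Catalog={database};"
            f"User ID={user};Password={password}")

# Using values found via `docker ps` / `docker inspect` (hypothetical here):
print(sql_server_conn_string("172.17.0.2", 1433, "OdeToFood2"))
```

The Server=host,port comma syntax is the part that differs from the (localdb) form used on Windows.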

Make k8s cluster services available to local docker containers

I'm used to connecting to my cluster using telepresence and accessing cluster services locally.
Now I need to make services in the cluster available to a group of applications that are running in Docker containers locally. We can say that it's the inverse use case.
I have an app running in a Docker container. It accesses services that are deployed using docker-compose. This has been done using a network:
docker network create myNetwork
# Make app1 use it
docker network connect myNetwork app1
# App2 uses docker-compose, so myNetwork is defined in it and here I just:
docker-compose up
My app1 correctly accesses the containers/services running in app2. However, I still need it to access a service from my cluster!
I've tried making a tunnel from my host to the cluster with telepresence and then accessing the service as if it were on my host. However, it doesn't seem to work. If I go into my app1 container and curl to see if the service name resolves:
curl: (6) Could not resolve host: my_cluster_service_name
Is my approach wrong? Am I missing an operation or consideration? How could I accomplish this?
Docker version: 19.03.8 for Mac
I've found a way to solve the problem.
Instead of trying to use telepresence as in the inverse use case, the solution comes from using a port-forward with k9s. When creating it, it's important not to leave the default interface, which is set to localhost, but to put 0.0.0.0 instead to ensure that it listens for traffic on all interfaces.
Then I changed my containers from the inside, making the services point to my host's IP when resolving the service names. Use whichever method best fits your case for this: since it's not a production environment, I just hardcoded my host IP manually to check whether connectivity was achieved.
To point to a specific service of your cluster you need to use different ports, since they will all be mapped to your host with different port-forwards. Name resolution is no longer needed.
With this configuration, your container's requests reach your host, where the port-forward routes them to the cluster. Connectivity is OK with this setup and the problem is solved.
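On Docker for Mac specifically, hardcoding the host IP can be avoided: Docker Desktop resolves the special name host.docker.internal to the host. A docker-compose fragment sketching this (the environment variable name and port 8080 are hypothetical, assuming the port-forward listens on host port 8080):

```yaml
services:
  app1:
    networks:
      - myNetwork
    environment:
      # Reaches the host, where the k9s/kubectl port-forward listens on 0.0.0.0:8080
      CLUSTER_SERVICE_URL: "http://host.docker.internal:8080"
```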

Docker DNS for Service Discovery to resolve Windows container addresses by name does not work consistently

Working with Docker Windows containers, I want to go beyond just one Docker container running an app. As described in the Microsoft docs under the headline "Docker Compose and Service Discovery":
Built in to Docker is Service Discovery, which handles service
registration and name to IP (DNS) mapping for containers and services;
with service discovery, it is possible for all container endpoints to
discover each other by name (either container name, or service name).
And because docker-compose lets you define services in its YAML files, these should be discoverable (e.g. pingable) by their names (be sure to keep in mind the difference between services and containers in docker-compose). This blog post by Microsoft provides a complete example with the services web and db, including full source with the needed docker-compose.yml in the GitHub repo.
My problem is: the Docker Windows containers "find" each other only sometimes, and sometimes not at all. I checked them with docker inspect <container-id> and the aliases db and web are present there. But when I PowerShell into one container (e.g. into one web container via docker exec -it myapps_web_1 powershell) and try to do a ping db, this works only occasionally.
And let me be clear here (because IMHO the docs are not): this problem is the same for non-docker-compose scenarios. Building an example app without Compose, the problem also appears without docker-compose services, with plain old container names!
Any ideas on that strange behavior? For me this scenario gets worse as more apps come into play. For more details, have a look at https://github.com/jonashackt/spring-cloud-netflix-docker, where I have an example project with Spring Boot & Spring Cloud Eureka/Zuul and 4 docker-compose services, where the weatherbackend and weatherbackend-second are easily scalable, e.g. via docker-compose scale weatherbackend=3.
My Windows Vagrant box is built via packer.io and is based on the latest Windows Server 2016 Evaluation ISO. The necessary Windows features and the Docker/docker-compose installation are done with Ansible.
With no fix for this problem, Docker Windows containers become mostly unusable for us at the customer.
After a week or two trying to solve this problem, I finally found the solution. Beginning with a read of the docker/for-win/issues/500 thread, I found a link to this multi-container example application source, where one of the authors documented the solution as a sideline, naming it:
Temporary workaround for Windows DNS client weirdness
Putting the following into your Dockerfile(s) will fix the DNS problems:
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]
RUN set-itemproperty -path 'HKLM:\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters' -Name ServerPriorityTimeLimit -Value 0 -Type DWord
(to learn how the execution of PowerShell commands inside Dockerfiles works, have a look at the Dockerfile reference)
The problem is also discussed here, and the solution will hopefully find its way into an official Docker image (or at least into the docs).
I found out that I needed to open TCP port 1888 to make the DNS work immediately. Without this port open, I had to connect to the (in my case Windows) container and execute Clear-DnsClientCache in PowerShell each time the DNS changed (also during the first swarm setup).

Unable to access MongoDB within a container within a Docker Machine instance from Windows

I am running Windows 7 on my desktop at work and I am signed in to a regular user account on the VPN. To develop software, we would normally open a dev VM and work from in there; however, recently I've been assigned a task to research Docker and MongoDB. I have very limited access to what I can install on the main machine.
Here lies my problem:
Is it possible for me to connect to a MongoDB instance inside a container inside the docker machine from Windows and make changes? I would ideally like to use a GUI tool such as Mongo Management Studio to make changes to a Mongo database within a container.
By inspecting the Mongo container, it has the ports listed as: 0.0.0.0:32768 -> 27017/tcp
and docker-machine ip (vm name) returns 192.168.99.111.
I have also commented out the 127.0.0.1 bind IP within the mongod.conf file.
From what I have researched so far, most users resolve their problem by connecting to their docker-machine IP with the port they've set with -p or been given with -P. Unfortunately for me, trying to connect to 192.168.99.111:32768 does not work.
I am pretty stumped and quite new to this environment. I am able to get inside the container with bash and manipulate the database there; however, I'm wondering if I can do this from Windows.
Thank you if anyone can help.
After reading Smutje's advice to ping the VM IP and testing it out to no avail, I attempted to find a pingable IP that would hopefully move me closer to my goal.
By running ifconfig within the Boot2Docker VM (but not inside the container), I was able to locate another IP listed under eth0. This IP looks something like 134.36.xxx.xxx and is pingable. With the Mongo container running, I can now access the database from within Mongo Management Studio by connecting to 134.36.xxx.xxx:32768 and manipulate the data from there.
If you have the option of choosing the operating system for your dev VM, go with Ubuntu and set up Docker with all of the containers you want to test on that. Either way, you will need a VM for testing Docker on Windows, since it uses VirtualBox if I'm not mistaken. Instead, set up an Ubuntu VM and do all of your testing on that.
