I have created a simple grails 3 application. I am trying to connect it to an Oracle database in the datasource configuration.
When I run
SELECT * FROM V$VERSION
in SQL Developer, the following information is returned about my database:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
My application.yml file looks like this:
dataSources:
    dataSource:
        pooled: true
        dialect: org.hibernate.dialect.Oracle10gDialect
        driverClassName: 'oracle.jdbc.OracleDriver'
        username: 'superCool'
        password: 'password'
        url: 'jdbc:oracle:thin:@127.0.0.1:1521:coolio'
        dbCreate: ''
My build.gradle file contains these lines for the Hibernate and Oracle dependencies:
dependencies {
    (...)
    compile "org.grails.plugins:hibernate:4.3.10.5"
    (...)
    compile "org.hibernate:hibernate-ehcache"
    compile("com.oracle:ojdbc7:12.1.0.2")
}
My service file looks as follows:
import groovy.sql.Sql
import org.slf4j.Logger
import org.slf4j.LoggerFactory

import javax.sql.DataSource
import java.sql.SQLException

class DatabaseService {

    private static final Logger LOGGER = LoggerFactory.getLogger(DatabaseService)

    DataSource dataSource

    void testMyDb(User user) {
        try {
            registerUser(new Sql(dataSource), user)
        } catch (SQLException e) {
            LOGGER.error("unable to register the user", e)
            throw e
        }
    }

    void registerUser(Sql sql, User user) {
        sql.call("{call isertUser(?)}", [user.name])
    }
}
If I remove the
compile "org.grails.plugins:hibernate:4.3.10.5"
from the build.gradle, I can run my integration tests and the database is successfully reached. If I keep it there, I get the following error:
ERROR DatabaseService - unable to register the user
java.sql.SQLRecoverableException: Closed Connection
at oracle.jdbc.driver.PhysicalConnection.getAutoCommit(PhysicalConnection.java:2254) ~[ojdbc7-12.1.0.2.jar:12.1.0.2.0]
UPDATE 1:
I updated my build.gradle file to reference
compile("com.oracle:ojdbc6:11.2.0.2")
as opposed to
compile("com.oracle:ojdbc7:12.1.0.2")
and the generated error now refers to the setter:
ERROR DatabaseService - unable to register the user
java.sql.SQLRecoverableException: Closed Connection
at oracle.jdbc.driver.PhysicalConnection.setAutoCommit(PhysicalConnection.java:2254) ~[ojdbc7-12.1.0.2.jar:12.1.0.2.0]
UPDATE 2:
I caught the SQLException and read the SQL state from it. The state returned was 08003. According to https://docs.oracle.com/cd/E15817_01/appdev.111/b31228/appd.htm,
08003 - connection does not exist
At this point I set the pooled flag to false in the dataSource, and everything worked just fine. So the problem is narrowed down to that: the plugin is not reacting well to the pooled property.
I have issued the following SQL command to check how many sessions the database allows:
SELECT name, value FROM v$parameter WHERE name = 'sessions';
That returns 1524.
I have also issued this SQL command to see how many sessions are currently allocated:
SELECT COUNT(*) FROM v$session;
which returns 58.
I suppose the question now is: what is causing the pooled property to misbehave?
The solution to this was to disable pooling. I cannot tell if it's a bug, or why it fails, but it does. Thankfully for me, I was already using a JNDI lookup for my dataSources, so swapping that in did the trick.
dataSources:
    dataSource:
        pooled: false
        dialect: org.hibernate.dialect.Oracle10gDialect
        driverClassName: 'oracle.jdbc.OracleDriver'
        username: 'superCool'
        password: 'password'
        url: 'jdbc:oracle:thin:@127.0.0.1:1521:coolio'
        dbCreate: ''
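If you would rather keep pooling enabled, it may be worth trying connection validation on the pool before falling back to pooled: false. This is only a sketch, assuming Grails 3's default Tomcat JDBC pool; the values are illustrative:
dataSources:
    dataSource:
        pooled: true
        dialect: org.hibernate.dialect.Oracle10gDialect
        driverClassName: 'oracle.jdbc.OracleDriver'
        username: 'superCool'
        password: 'password'
        url: 'jdbc:oracle:thin:@127.0.0.1:1521:coolio'
        properties:
            # validate connections before handing them out, so stale ones are dropped
            testOnBorrow: true
            validationQuery: SELECT 1 FROM DUAL
            validationInterval: 30000
            minEvictableIdleTimeMillis: 60000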
Related
I am trying to build a REST API using an MS SQL database as the backend.
I installed django-mssql-backend and defined the config below in settings.py.
It is a simple SELECT statement running against the MS SQL database; I am not using any update or insert.
When I try to migrate, it throws the exception below saying the user ID needs CREATE TABLE permission. Why does it expect full access when I don't plan to do any write or update operations? How can I fix this issue?
raise MigrationSchemaMissing("Unable to create the django_migrations table (%s)" % exc)
django.db.migrations.exceptions.MigrationSchemaMissing: Unable to create the django_migrations table (('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]CREATE TABLE permission denied
in database 'db-name'. (262) (SQLExecDirectW)"))
DATABASES = {
    'default': {
        'NAME': 'db-name',
        'ENGINE': 'sql_server.pyodbc',
        'HOST': 'hostname',
        'USER': 'username',
        'PASSWORD': 'pwd',
        'OPTIONS': {
            'driver': 'ODBC Driver 17 for SQL Server',
            'unicode_results': True,
        },
    }
}
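If this database is only ever read from, one common pattern (not from the original question) is to map the existing tables with unmanaged models and keep migrate away from that connection entirely. A minimal sketch, where the app, model, table, and alias names are all illustrative:
# myapp/models.py -- map an existing table without letting Django manage its schema
from django.db import models

class Customer(models.Model):
    name = models.CharField(max_length=100)

    class Meta:
        managed = False        # makemigrations/migrate will not create or alter this table
        db_table = 'customer'  # name of the existing table in the MS SQL database


# myapp/routers.py -- keep "migrate" away from the read-only alias in a multi-database setup
class ReadOnlyRouter:
    def allow_migrate(self, db, app_label, model_name=None, **hints):
        return db != 'mssql_readonly'  # hypothetical alias for the read-only connection


# settings.py
DATABASE_ROUTERS = ['myapp.routers.ReadOnlyRouter']
With unmanaged models you can simply avoid running migrate against the read-only connection, since its schema is owned elsewhere. Note that on older Django versions the django_migrations bookkeeping table may still be attempted if migrate is pointed at that alias, which is exactly the CREATE TABLE statement being rejected here.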
I am trying to access the read replica database from Spring R2DBC. My connection string looks like this:
spring:
  r2dbc:
    url: r2dbc:mysql://db-master-dev-pvt.xyz***.com:3306,db-replica-dev-pvt.xyz**.com:3306/employee?autoReconnect=true&useUnicode=yes&characterEncoding=UTF-8
    username:
    password:
but I am getting an unknown host error. I am following the document below:
https://r2dbc.io/spec/0.8.2.RELEASE/spec/html/#overview.connection.discovery
As per the documentation, we can supply multiple host configurations separated by a comma (,), but when I try to run a query or a health check it throws an unknown host exception. The same configuration works fine with Spring Data JPA.
{
  "database": "MySQL",
  "validationQuery": "validate(REMOTE)",
  "error": "java.net.UnknownHostException: failed to resolve 'db-master-dev-pvt.xyz**.com:3306,db-replica-dev-pvt.xyz**.com:3306'"
}
Stack Trace
{"#timestamp":"2021-02-12T11:34:18.438Z","#version":"1","message":"Operator called default onErrorDropped","logger_name":"reactor.core.publisher.Operators","thread_name":"reactor-tcp-epoll-1","level":"ERROR","level_value":40000,"stack_trace":"java.net.UnknownHostException: failed to resolve 'myDB-master-dev-pvt.xyz**.com:3306,myDB-replica-dev-pvt.myAPI.com:3306' after 2 queries \n\tat io.netty.resolver.dns.DnsResolveContext.finishResolve(DnsResolveContext.java:1013)\n\t... 35 common frames omitted\nWrapped by: org.springframework.transaction.CannotCreateTransactionException: Could not open R2DBC Connection for transaction; nested exception is java.net.UnknownHostException: failed to resolve 'myDB-master-dev-pvt.myAPI.com:3306,MyDB-replica-dev-pvt.myAPI.com:3306' after 2 queries \n\tat org.springframework.r2dbc.connection.R2dbcTransactionManager.lambda$null$5(R2dbcTransactionManager.java:226)\n\tat reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:94)\n\tat org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:95)\n\tat reactor.core.publisher.MonoFlatMap$FlatMapMain.onError(MonoFlatMap.java:172)\n\tat org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:95)\n\tat reactor.core.publisher.FluxPeek$PeekSubscriber.onError(FluxPeek.java:221)\n\tat org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:95)\n\tat reactor.core.publisher.FluxRetry$RetrySubscriber.onError(FluxRetry.java:94)\n\tat org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:95)\n\tat org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:95)\n\tat reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onError(FluxPeekFuseable.java:234)\n\tat org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:95)\n\tat reactor.core.publisher.MonoFlatMap$FlatMapMain.onError(MonoFlatMap.java:172)\n\tat org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:95)\n\tat reactor.core.publisher.FluxPeek$PeekSubscriber.onError(FluxPeek.java:221)\n\tat org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:95)\n\tat reactor.core.publisher.FluxPeek$PeekSubscriber.onError(FluxPeek.java:221)\n\tat reactor.pool.AbstractPool$Borrower.fail(AbstractPool.java:427)\n\tat reactor.pool.SimpleDequePool.lambda$drainLoop$5(SimpleDequePool.java:309)\n\tat reactor.core.publisher.FluxDoOnEach$DoOnEachSubscriber.onError(FluxDoOnEach.java:186)\n\tat org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:95)\n\tat org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:95)\n\tat reactor.core.publisher.MonoFlatMap$FlatMapMain.onError(MonoFlatMap.java:172)\n\tat org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:95)\n\tat reactor.core.publisher.MonoFlatMap$FlatMapMain.onError(MonoFlatMap.java:172)\n\tat 
org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:95)\n\tat reactor.core.publisher.FluxMap$MapSubscriber.onError(FluxMap.java:132)\n\tat org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:95)\n\tat reactor.core.publisher.MonoCreate$DefaultMonoSink.error(MonoCreate.java:189)\n\tat reactor.netty.resources.NewConnectionProvider$DisposableConnect.onError(NewConnectionProvider.java:139)\n\tat org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:95)\n\tat reactor.core.publisher.MonoFlatMap$FlatMapMain.secondError(MonoFlatMap.java:192)\n\tat reactor.core.publisher.MonoFlatMap$FlatMapInner.onError(MonoFlatMap.java:259)\n\tat reactor.netty.transport.TransportConnector$MonoChannelPromise.tryFailure(TransportConnector.java:464)\n\tat reactor.netty.transport.TransportConnector.lambda$doResolveAndConnect$6(TransportConnector.java:271)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:552)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)\n\tat io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)\n\tat io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:609)\n\tat io.netty.util.concurrent.DefaultPromise.setFailure(DefaultPromise.java:109)\n\tat io.netty.resolver.InetSocketAddressResolver$1.operationComplete(InetSocketAddressResolver.java:62)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)\n\tat io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)\n\tat io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:609)\n\tat io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)\n\tat io.netty.resolver.dns.DnsNameResolver.tryFailure(DnsNameResolver.java:936)\n\tat io.netty.resolver.dns.DnsNameResolver.access$500(DnsNameResolver.java:90)\n\tat io.netty.resolver.dns.DnsNameResolver$5.operationComplete(DnsNameResolver.java:956)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:552)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)\n\tat io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)\n\tat io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:609)\n\tat io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)\n\tat io.netty.resolver.dns.DnsResolveContext.finishResolve(DnsResolveContext.java:1021)\n\tat io.netty.resolver.dns.DnsResolveContext.tryToFinishResolve(DnsResolveContext.java:966)\n\tat io.netty.resolver.dns.DnsResolveContext.query(DnsResolveContext.java:414)\n\tat io.netty.resolver.dns.DnsResolveContext.tryToFinishResolve(DnsResolveContext.java:938)\n\tat io.netty.resolver.dns.DnsResolveContext.access$700(DnsResolveContext.java:63)\n\tat io.netty.resolver.dns.DnsResolveContext$2.operationComplete(DnsResolveContext.java:467)\n\tat 
io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)\n\tat io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)\n\tat io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:605)\n\tat io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)\n\tat io.netty.resolver.dns.DnsQueryContext.trySuccess(DnsQueryContext.java:201)\n\tat io.netty.resolver.dns.DnsQueryContext.finish(DnsQueryContext.java:193)\n\tat io.netty.resolver.dns.DnsNameResolver$DnsResponseHandler.channelRead(DnsNameResolver.java:1230)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)\n\tat io.netty.channel.epoll.EpollDatagramChannel.read(EpollDatagramChannel.java:681)\n\tat io.netty.channel.epoll.EpollDatagramChannel.access$100(EpollDatagramChannel.java:58)\n\tat io.netty.channel.epoll.EpollDatagramChannel$EpollDatagramChannelUnsafe.epollInReady(EpollDatagramChannel.java:499)\n\tat io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)\n\tat io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)\n\tat io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n\tat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n\tat java.lang.Thread.run(Thread.java:748)\nWrapped by: reactor.core.Exceptions$ErrorCallbackNotImplemented: org.springframework.transaction.CannotCreateTransactionException: Could not open R2DBC Connection for transaction; nested exception is java.net.UnknownHostException: failed to resolve 'myDB-master-dev-pvt.xyz**.com:3306,myDB-replica-dev-pvt.xyz**.com:3306' after 2 queries \n","caller_class_name":"reactor.util.Loggers$Slf4JLogger","caller_method_name":"error","caller_file_name":"Loggers.java","caller_line_number":314,"traceId":"","instance_activeProfiles":"dev","instance_port":"8080","instance_ip":"instance_ip_IS_UNDEFINED","instance_application_name":"employee-adjustment-service"}
Thanks
I have done the analysis and I am providing it here, so that in the future anyone who wants to use MySQL sees this before planning to use R2DBC. I went to the R2DBC team, and after that I found that the driver I am using does not have multi-host support; officially, multi-host is supported for Postgres, MSSQL, and H2.
I am using the driver below:
<!-- https://mvnrepository.com/artifact/dev.miku/r2dbc-mysql -->
<dependency>
    <groupId>dev.miku</groupId>
    <artifactId>r2dbc-mysql</artifactId>
    <version>0.8.2.RELEASE</version>
</dependency>
So this needs to be fixed in the driver; I have opened an issue there,
https://github.com/mirromutth/r2dbc-mysql/issues/169, in order to get it fixed.
Thanks to Mark (@mp911de) from Spring R2DBC for the clarification and support.
I am able to connect with multiple comma-separated hosts:
r2dbc.oracle.config.host1=a.b.com
r2dbc.oracle.config.host2=x.y.com
And my config class goes as below:
@Value("${r2dbc.oracle.config.host1}")
private String host1;

@Value("${r2dbc.oracle.config.host2}")
private String host2;

@Bean
@Qualifier("remoteabcConnectionFactory")
public ConnectionFactory remoteRemedyConnectionFactory() {
    // driver, port, user, password and database are injected the same way as the hosts
    String hosts = host1 + "," + host2;
    return ConnectionFactories.get(ConnectionFactoryOptions.builder()
            .option(DRIVER, driver)
            .option(HOST, hosts) // comma-separated host list
            .option(PORT, Integer.valueOf(port))
            .option(USER, user)
            .option(PASSWORD, password)
            .option(DATABASE, database)
            .build());
}
You can also get the comma-separated hosts from a single property param in app.properties.
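For example, a single property could carry the whole list; this is just a sketch, and the property name is illustrative:
r2dbc.oracle.config.hosts=a.b.com,x.y.com
@Value("${r2dbc.oracle.config.hosts}")
private String hosts; // already comma-separated, pass straight to .option(HOST, hosts)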
Without having changed anything in my settings, I can no longer connect to my PostgreSQL database hosted on Heroku. I can't access it from my application, and am given the error:
OperationalError: (psycopg2.OperationalError) FATAL: password authentication failed for user "<heroku user>" FATAL: no pg_hba.conf entry for host "<address>", user "<user>", database "<database>", SSL off
It says SSL off, but SSL is enabled, as I have confirmed in pgAdmin. When attempting to access the database through pgAdmin 4 I get the same problem, saying that there is a fatal password authentication failed for user '' error.
I have checked the credentials for the database on Heroku, but nothing has changed. Am I doing something wrong? Do I have to change something in pg_hba.conf?
Edit: I can see in the notifications on Heroku that the database was updated right around the time the database stopped working for me. I am not sure if I triggered the update, however.
Here's the notification center:
In general, it isn't a good idea to hard-code credentials when connecting to Heroku Postgres:
Do not copy and paste database credentials to a separate environment or into your application’s code. The database URL is managed by Heroku and will change under some circumstances such as:
User-initiated database credential rotations using heroku pg:credentials:rotate.
Catastrophic hardware failures that require Heroku Postgres staff to recover your database on new hardware.
Security issues or threats that require Heroku Postgres staff to rotate database credentials.
Automated failover events on HA-enabled plans.
It is best practice to always fetch the database URL config var from the corresponding Heroku app when your application starts. For example, you may follow 12Factor application configuration principles by using the Heroku CLI and invoke your process like so:
DATABASE_URL=$(heroku config:get DATABASE_URL -a your-app) your_process
This way, you ensure your process or application always has correct database credentials.
Based on the messages in your screenshot, I suspect you were affected by the second bullet. Whatever the cause, one of those messages explicitly says
Once it has completed, your database URL will have changed
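In application code, the same advice amounts to reading DATABASE_URL from the environment on every start instead of storing credentials. A minimal sketch for a psycopg2/SQLAlchemy setup (the scheme rewrite is only needed on SQLAlchemy versions that reject Heroku's postgres:// prefix):
import os

from sqlalchemy import create_engine

# Heroku injects DATABASE_URL into the dyno environment and updates it on rotation.
url = os.environ["DATABASE_URL"]

# Newer SQLAlchemy releases only accept the "postgresql://" scheme.
if url.startswith("postgres://"):
    url = url.replace("postgres://", "postgresql://", 1)

# Heroku Postgres requires SSL for connections from outside Heroku's network.
engine = create_engine(url, connect_args={"sslmode": "require"})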
I had the same issue. Thanks to @Chris I solved it this way.
This file is config/database.js (Strapi 3.1.3):
var parseDbUrl = require("parse-database-url");

if (process.env.NODE_ENV === 'production') {
  module.exports = ({ env }) => {
    var dbConfig = parseDbUrl(env('DATABASE_URL', ''));
    return {
      defaultConnection: 'default',
      connections: {
        default: {
          connector: 'bookshelf',
          settings: {
            client: dbConfig.driver,
            host: dbConfig.host,
            port: dbConfig.port,
            database: dbConfig.database,
            username: dbConfig.user,
            password: dbConfig.password,
          },
          options: {
            ssl: false,
          },
        },
      },
    };
  };
} else {
  // to use the default local provider you can return an empty configuration
  module.exports = ({ env }) => ({
    defaultConnection: 'default',
    connections: {
      default: {
        connector: 'bookshelf',
        settings: {
          client: 'sqlite',
          filename: env('DATABASE_FILENAME', '.tmp/data.db'),
        },
        options: {
          useNullAsDefault: true,
        },
      },
    },
  });
}
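As a quick check, you can confirm what env('DATABASE_URL') will receive on the dyno by reading the config var with the Heroku CLI (the app name is a placeholder):
heroku config:get DATABASE_URL -a your-app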
I am using Doctrine ORM 2.6.1 in a Symfony 3.4.4 project.
Some of my instances work on a MySQL database, some on PostgreSQL, and a few installations even access a Microsoft SQL server. This works fine without any special changes to my project or entities; I only have to configure the corresponding connection parameters.
But: if I create migrations, only statements compatible with the current database connection are created in the migration file.
I develop against a Postgres connection, so I only produce PostgreSQL statements, like:
class Version20180430083616 extends AbstractMigration
{
    public function up(Schema $schema)
    {
        // this up() migration is auto-generated, please modify it to your needs
        $this->abortIf($this->connection->getDatabasePlatform()->getName() !== 'postgresql', 'Migration can only be executed safely on \'postgresql\'.');
        $this->addSql('DELETE FROM document_category');
        $this->addSql('DROP SEQUENCE document_category_id_seq CASCADE');
        $this->addSql('DROP TABLE document_category');
    }

    public function down(Schema $schema)
    {
        // this down() migration is auto-generated, please modify it to your needs
        $this->abortIf($this->connection->getDatabasePlatform()->getName() !== 'postgresql', 'Migration can only be executed safely on \'postgresql\'.');
        //...
    }
}
My Question: How can I tell the migrations bundle to create statements for each platform, like:
class Version20180430083616 extends AbstractMigration
{
    public function up(Schema $schema)
    {
        // this up() migration is auto-generated, please modify it to your needs
        if ($this->connection->getDatabasePlatform()->getName() == 'postgresql') {
            $this->addSql('DELETE FROM document');
            $this->addSql('DELETE FROM document_category');
            $this->addSql('DROP SEQUENCE document_category_id_seq CASCADE');
            $this->addSql('DROP TABLE document_category');
        } else if ($this->connection->getDatabasePlatform()->getName() == 'mysql') {
            ...
        } else if ($this->connection->getDatabasePlatform()->getName() == 'mssql') { // MicrosoftSQL ?
            ...
        }
    }
}
Edit:
So, I think the only solution to my problem is to define multiple database connections and entity managers, and to always create a distinct migration for each connection type. According to this article, I can define several connections.
I found a doable solution:
In config.yml I define one connection and one EntityManager per database type:
doctrine:
    dbal:
        default_connection: pgdb
        connections:
            pgdb:
                driver: pdo_pgsql
                host: db
                port: 5432
                name: pgdb
                user: postgres
                password: example
                charset: utf8
                mapping_types:
                    enum: string
            mysql:
                driver: pdo_mysql
                host: mysqlhost
                port: 3306
                name: mydb
                dbname: mydb
                user: root
                password: xxx
                charset: utf8mb4
                default_table_options:
                    collate: utf8mb4_unicode_ci
                mapping_types:
                    enum: string
            mssql:
                driver: pdo_sqlsrv
                host: mssqlhost
                port: 1433
                name: msdb
                dbname: testdb
                user: sa
                password: xxx
                charset: utf8
                mapping_types:
                    enum: string
    orm:
        auto_generate_proxy_classes: false
        proxy_dir: '%kernel.cache_dir%/doctrine/orm/Proxies'
        proxy_namespace: Proxies
        entity_managers:
            default:
                connection: pgdb
                naming_strategy: doctrine.orm.naming_strategy.underscore
                mappings:
                    AppBundle: ~
            my:
                connection: mysql
                naming_strategy: doctrine.orm.naming_strategy.underscore
                mappings:
                    AppBundle: ~
            ms:
                connection: mssql
                naming_strategy: doctrine.orm.naming_strategy.underscore
                mappings:
                    AppBundle: ~
Then I can issue the diff command three times instead of only once:
$ bin/console doctrine:migrations:diff --em=default
$ bin/console doctrine:migrations:diff --em=my
$ bin/console doctrine:migrations:diff --em=ms
This creates three migrations, each starting with a guard line:
$this->abortIf($this->connection->getDatabasePlatform()->getName() !== 'mssql', 'Migration can only be executed safely on \'mssql\'.');
in which I exchange abortIf for skipIf, so that the migration process is not aborted if the current migration is for a different database type, but simply skipped:
$this->skipIf($this->connection->getDatabasePlatform()->getName() !== 'mssql', 'Migration can only be executed safely on \'mssql\'.');
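For illustration, one of the generated-and-edited migrations then looks roughly like this; the class name and table are examples only, not taken from the original project:
class Version20180430090000 extends AbstractMigration
{
    public function up(Schema $schema)
    {
        // Skipped (rather than aborted) when the current connection is not MySQL.
        $this->skipIf($this->connection->getDatabasePlatform()->getName() !== 'mysql', 'Migration can only be executed safely on \'mysql\'.');
        $this->addSql('DROP TABLE document_category');
    }

    public function down(Schema $schema)
    {
        $this->skipIf($this->connection->getDatabasePlatform()->getName() !== 'mysql', 'Migration can only be executed safely on \'mysql\'.');
        // ...
    }
}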
I hope this helps somebody.
I start an H2 database in a servlet context listener:
public void contextInitialized(ServletContextEvent sce) {
    org.h2.Driver.load();
    String apprealPath = sce.getServletContext().getRealPath("\\");
    String h2Url = "jdbc:h2:file:" + apprealPath + "DB\\cdb;AUTO_SERVER=true";
    LoggerContext lc = (LoggerContext) LoggerFactory.getILoggerFactory();
    StatusPrinter.print(lc);
    logger.debug("h2 url : " + h2Url);
    try {
        conn = DriverManager.getConnection(h2Url, "sa", "sa");
    } catch (SQLException e) {
        e.printStackTrace();
    }
    logger.debug("h2 database started in embedded mode");
    sce.getServletContext().setAttribute("connection", conn);
}
Then I try to use DbVisualizer to connect to H2 using the following URL:
jdbc:h2:tcp://localhost/~/cdb
but I get these error messages:
An error occurred while establishing the connection:
Type: org.h2.jdbc.JdbcSQLException Error Code: 90067 SQL State: 90067
Message:
Connection is broken: "Connection refused: connect" [90067-148]
I tried replacing localhost with "172.17.33.181:58524" (I found it in cdb.lock.db) and reconnecting with user "sa" and password "sa"; the server response then changed to:
wrong username or password!
In the Automatic Mixed Mode, you don't need to (and you can't) use jdbc:h2:tcp://localhost. Just use the same URL everywhere, that is, jdbc:h2:file:...DB\\cdb;AUTO_SERVER=true.
You can use the same database URL independent of whether the database is already open or not. Explicit client/server connections (using jdbc:h2:tcp:// or ssl://) are not supported.
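As a quick illustration, a second process (or DbVisualizer, by pasting the same URL into its connection dialog) would connect like this; the file path here is illustrative and should match the one logged by the listener:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class H2AutoServerClient {
    public static void main(String[] args) throws Exception {
        // Same file URL as the web app, including AUTO_SERVER=true; H2 then routes
        // this connection to whichever process already has the database open.
        String url = "jdbc:h2:file:/path/to/webapp/DB/cdb;AUTO_SERVER=true";
        try (Connection conn = DriverManager.getConnection(url, "sa", "sa");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) {
            rs.next();
            System.out.println("Connected via automatic mixed mode: " + rs.getInt(1));
        }
    }
}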