When writing simple records to a table in Postgres (it could be any database) at the end of a pipeline, some of the incoming records violate uniqueness constraints and trigger an exception. As far as I can tell, there's no straightforward way to handle these gracefully: the pipeline either errors out completely or, depending on the runner, enters an interminable death spiral.
There doesn't appear to be any mention of error handling for this case in the Beam docs. The Medium posts on error handling don't seem to apply to this particular type of PTransform, which returns PDone.
This answer isn't comprehensible and is devoid of examples.
In my example, I'm reading from a file with 2 duplicate lines and trying to write them into a table.
CREATE TABLE foo (
field CHARACTER VARYING(100) UNIQUE
);
foo.txt contains:
a
a
The pipeline looks like this:
Pipeline p = Pipeline.create();
p.apply(TextIO.read().from("/path/to/foo.txt"))
 .apply(JdbcIO.<String>write()
     .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration.create(
         "org.postgresql.Driver", "jdbc:postgresql://localhost:5432/somedb"))
     .withStatement("INSERT INTO foo (field) VALUES (?)")
     .withPreparedStatementSetter(new JdbcIO.PreparedStatementSetter<String>() {
       private static final long serialVersionUID = 1L;
       public void setParameters(String element, PreparedStatement query) throws SQLException {
         query.setString(1, element);
       }
     }));
p.run();
Here is the output from the simple example above:
[WARNING]
org.apache.beam.sdk.Pipeline$PipelineExecutionException: java.sql.BatchUpdateException: Batch entry 0 INSERT INTO foo (field) VALUES ('a') was aborted: ERROR: duplicate key value violates unique constraint "foo_field_key"
Detail: Key (field)=(a) already exists. Call getNextException to see other errors in the batch.
at org.apache.beam.runners.direct.DirectRunner$DirectPipelineResult.waitUntilFinish (DirectRunner.java:332)
at org.apache.beam.runners.direct.DirectRunner$DirectPipelineResult.waitUntilFinish (DirectRunner.java:302)
at org.apache.beam.runners.direct.DirectRunner.run (DirectRunner.java:197)
at org.apache.beam.runners.direct.DirectRunner.run (DirectRunner.java:64)
at org.apache.beam.sdk.Pipeline.run (Pipeline.java:313)
at org.apache.beam.sdk.Pipeline.run (Pipeline.java:299)
at com.thing.Main.main (Main.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.mojo.exec.ExecJavaMojo$1.run (ExecJavaMojo.java:282)
at java.lang.Thread.run (Thread.java:748)
Caused by: java.sql.BatchUpdateException: Batch entry 0 INSERT INTO foo (field) VALUES ('a') was aborted: ERROR: duplicate key value violates unique constraint "foo_field_key"
Detail: Key (field)=(a) already exists. Call getNextException to see other errors in the batch.
at org.postgresql.jdbc.BatchResultHandler.handleError (BatchResultHandler.java:148)
at org.postgresql.core.ResultHandlerDelegate.handleError (ResultHandlerDelegate.java:50)
at org.postgresql.core.v3.QueryExecutorImpl.processResults (QueryExecutorImpl.java:2184)
at org.postgresql.core.v3.QueryExecutorImpl.execute (QueryExecutorImpl.java:481)
at org.postgresql.jdbc.PgStatement.executeBatch (PgStatement.java:840)
at org.postgresql.jdbc.PgPreparedStatement.executeBatch (PgPreparedStatement.java:1538)
at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
at org.apache.beam.sdk.io.jdbc.JdbcIO$Write$WriteFn.executeBatch (JdbcIO.java:846)
at org.apache.beam.sdk.io.jdbc.JdbcIO$Write$WriteFn.finishBundle (JdbcIO.java:819)
Caused by: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "foo_field_key"
Detail: Key (field)=(a) already exists.
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse (QueryExecutorImpl.java:2440)
at org.postgresql.core.v3.QueryExecutorImpl.processResults (QueryExecutorImpl.java:2183)
at org.postgresql.core.v3.QueryExecutorImpl.execute (QueryExecutorImpl.java:481)
at org.postgresql.jdbc.PgStatement.executeBatch (PgStatement.java:840)
at org.postgresql.jdbc.PgPreparedStatement.executeBatch (PgPreparedStatement.java:1538)
at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
at org.apache.beam.sdk.io.jdbc.JdbcIO$Write$WriteFn.executeBatch (JdbcIO.java:846)
at org.apache.beam.sdk.io.jdbc.JdbcIO$Write$WriteFn.finishBundle (JdbcIO.java:819)
at org.apache.beam.sdk.io.jdbc.JdbcIO$Write$WriteFn$DoFnInvoker.invokeFinishBundle (Unknown Source)
at org.apache.beam.repackaged.beam_runners_direct_java.runners.core.SimpleDoFnRunner.finishBundle (SimpleDoFnRunner.java:285)
at org.apache.beam.repackaged.beam_runners_direct_java.runners.core.SimplePushbackSideInputDoFnRunner.finishBundle (SimplePushbackSideInputDoFnRunner.java:118)
at org.apache.beam.runners.direct.ParDoEvaluator.finishBundle (ParDoEvaluator.java:223)
at org.apache.beam.runners.direct.DoFnLifecycleManagerRemovingTransformEvaluator.finishBundle (DoFnLifecycleManagerRemovingTransformEvaluator.java:73)
at org.apache.beam.runners.direct.DirectTransformExecutor.finishBundle (DirectTransformExecutor.java:188)
at org.apache.beam.runners.direct.DirectTransformExecutor.run (DirectTransformExecutor.java:126)
at java.util.concurrent.Executors$RunnableAdapter.call (Executors.java:511)
at java.util.concurrent.FutureTask.run (FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:624)
at java.lang.Thread.run (Thread.java:748)
I'd like to be able to arrest that exception and divert the offending records to some dead-letter construct.
There is no general way of doing this in Beam yet. There are discussions from time to time about modifying the IOs to not return PDone, but to my knowledge there is nothing readily available.
At the moment I can think of a couple of workarounds, all of them far from ideal:
in the driver program, handle the restart of the pipeline when it fails;
copy-paste JdbcIO or parts of it, or implement your own JDBC ParDo with custom exception handling (see the sketch after this list);
add an exception-handling feature to JdbcIO and contribute it to Beam; it will be appreciated.
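A minimal sketch of the second option, assuming a Postgres target like the one in the question. The class name InsertWithDeadLetterFn, the tag names, and the per-element connection handling are all illustrative, not part of JdbcIO; a production version would reuse connections and batch statements the way JdbcIO's WriteFn does.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.TupleTag;

// Hypothetical DoFn that inserts one element per call and routes failing
// records to a dead-letter output instead of failing the bundle.
public class InsertWithDeadLetterFn extends DoFn<String, String> {

  public static final TupleTag<String> INSERTED = new TupleTag<String>() {};
  public static final TupleTag<String> DEAD_LETTER = new TupleTag<String>() {};

  private transient Connection connection;

  @Setup
  public void setup() throws SQLException {
    // Credentials omitted; the Postgres driver is assumed to be on the classpath.
    connection = DriverManager.getConnection("jdbc:postgresql://localhost:5432/somedb");
  }

  @ProcessElement
  public void processElement(ProcessContext c) {
    try (PreparedStatement ps =
        connection.prepareStatement("INSERT INTO foo (field) VALUES (?)")) {
      ps.setString(1, c.element());
      ps.executeUpdate();
      c.output(c.element());              // successfully inserted record
    } catch (SQLException e) {
      // A real implementation might inspect e.getSQLState() ("23505" is a
      // unique violation in Postgres) and rethrow anything unexpected.
      c.output(DEAD_LETTER, c.element()); // divert the offending record
    }
  }

  @Teardown
  public void teardown() throws SQLException {
    if (connection != null) {
      connection.close();
    }
  }
}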
I was also facing the same issue, so I created a custom JdbcIO write that returns a PCollectionTuple instead of PDone, classifying successfully inserted records separately from the records that threw an SQLException while executing the batch in WriteFn.
Below is the link with more details:
https://sachin4java.blogspot.com/2021/11/extract-error-records-while-inserting.html
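To show how a multi-output write like that plugs into the original pipeline, here is a hedged sketch that reuses the hypothetical InsertWithDeadLetterFn and tags from the earlier sketch; the linked post's actual class and tag names will differ.

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionTuple;
import org.apache.beam.sdk.values.TupleTagList;

Pipeline p = Pipeline.create();

// The write now returns a PCollectionTuple instead of PDone.
PCollectionTuple results =
    p.apply(TextIO.read().from("/path/to/foo.txt"))
     .apply(ParDo.of(new InsertWithDeadLetterFn())
         .withOutputTags(InsertWithDeadLetterFn.INSERTED,
             TupleTagList.of(InsertWithDeadLetterFn.DEAD_LETTER)));

// Records that violated the unique constraint end up here instead of
// killing the pipeline; write them to whatever dead-letter sink you use.
PCollection<String> failures = results.get(InsertWithDeadLetterFn.DEAD_LETTER);
failures.apply(TextIO.write().to("/path/to/dead-letter"));

p.run();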
Related
I'm running Grails 4.0.1 and attempting to use org.grails.plugins:elasticsearch:3.0.0 with Elasticsearch 7.9.0. I'm not sure if I've misconfigured a bean along the way, but any help in the right direction would be well received! I'm nearly ready to drop the plugin if I can't get it wired up in my environment, but I'd like to keep up with it.
The more complete trace contains:
Caused by: org.springframework.beans.ConversionNotSupportedException: Failed to convert property value of type 'grails.core.support.proxy.DefaultProxyHandler' to required type 'grails.core.support.proxy.EntityProxyHandler' for property 'proxyHandler'; nested exception is java.lang.IllegalStateException: Cannot convert value of type 'grails.core.support.proxy.DefaultProxyHandler' to required type 'grails.core.support.proxy.EntityProxyHandler' for property 'proxyHandler': no matching editors or conversion strategy found
at org.springframework.beans.AbstractNestablePropertyAccessor.convertIfNecessary(AbstractNestablePropertyAccessor.java:590)
at org.springframework.beans.AbstractNestablePropertyAccessor.convertForProperty(AbstractNestablePropertyAccessor.java:604)
at org.springframework.beans.BeanWrapperImpl.convertForProperty(BeanWrapperImpl.java:219)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.convertForProperty(AbstractAutowireCapableBeanFactory.java:1723)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1679)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1426)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:592)
... 49 common frames omitted
Caused by: java.lang.IllegalStateException: Cannot convert value of type 'grails.core.support.proxy.DefaultProxyHandler' to required type 'grails.core.support.proxy.EntityProxyHandler' for property 'proxyHandler': no matching editors or conversion strategy found
at org.springframework.beans.TypeConverterDelegate.convertIfNecessary(TypeConverterDelegate.java:262)
at org.springframework.beans.AbstractNestablePropertyAccessor.convertIfNecessary(AbstractNestablePropertyAccessor.java:585)
I am trying to decode a JSON message that arrives as part of an Avro message in my Spark 2.2 streaming job. I have a schema defined for this JSON, and whenever a message comes in without honoring the schema, my JsonDecoder fails with the error below:
Caused by: org.apache.avro.AvroTypeException: Expected field name not found: "some_field"
at org.apache.avro.io.JsonDecoder.doAction(JsonDecoder.java:477)
at org.apache.avro.io.parsing.Parser.advance(Parser.java:88)
at org.apache.avro.io.JsonDecoder.advance(JsonDecoder.java:139)
at org.apache.avro.io.JsonDecoder.readString(JsonDecoder.java:219)
at org.apache.avro.io.JsonDecoder.readString(JsonDecoder.java:214)
at org.apache.avro.io.ResolvingDecoder.readString(ResolvingDecoder.java:201)
at org.apache.avro.generic.GenericDatumReader.readString(GenericDatumReader.java:422)
at org.apache.avro.generic.GenericDatumReader.readString(GenericDatumReader.java:414)
at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:181)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153)
at org.apache.avro.generic.GenericDatumReader.readField(GenericDatumReader.java:232)
at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:222)
at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:175)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:145)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:395)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:315)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:258)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:256)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1375)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:261)
I know Jackson has a way to ignore extra as well as absent fields when decoding. Is there a way to get the same behaviour in org.apache.avro.io.JsonDecoder?
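For reference, the Jackson behaviour mentioned above is a mapper-level switch. A minimal sketch, where MyPayload and json are placeholders rather than anything from the question:

import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

// With FAIL_ON_UNKNOWN_PROPERTIES disabled, Jackson silently skips JSON
// fields the target class does not declare; absent fields are left at
// their defaults.
ObjectMapper mapper = new ObjectMapper()
    .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
MyPayload payload = mapper.readValue(json, MyPayload.class); // throws IOException on malformed input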
I am trying to obtain a list of all tables in my H2 in-memory DB using jOOQ's DSLContext.meta():
DSL_CONTEXT_PROVIDER.db().meta().getTables();
results in:
java.lang.RuntimeException: org.jooq.exception.DataAccessException: Error while accessing DatabaseMetaData
at MyTest.deleteEntities(MyTest.java:222)
at MyTest.cleanupDatabase(MyTest.java:201)
at MyTest.afterTestCase(MyTest.java:117)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
[... omitted for brevity ...]
Caused by: org.jooq.exception.DataAccessException: Error while accessing DatabaseMetaData
at org.jooq.impl.MetaImpl.getCatalogs(MetaImpl.java:160)
at org.jooq.impl.MetaImpl.getSchemas(MetaImpl.java:168)
at org.jooq.impl.MetaImpl.getTables(MetaImpl.java:179)
at MyTest.deleteEntities(MyTest.java:210)
... 29 more
Caused by: org.h2.jdbc.JdbcSQLException: The object is already closed [90007-174]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:332)
at org.h2.message.DbException.get(DbException.java:172)
at org.h2.message.DbException.get(DbException.java:149)
at org.h2.message.DbException.get(DbException.java:138)
at org.h2.jdbc.JdbcConnection.checkClosed(JdbcConnection.java:1410)
at org.h2.jdbc.JdbcConnection.checkClosed(JdbcConnection.java:1388)
at org.h2.jdbc.JdbcDatabaseMetaData.checkClosed(JdbcDatabaseMetaData.java:2963)
at org.h2.jdbc.JdbcDatabaseMetaData.getCatalogs(JdbcDatabaseMetaData.java:756)
at org.jooq.impl.MetaImpl.getCatalogs(MetaImpl.java:143)
... 32 more
DSL_CONTEXT_PROVIDER.db() looks like this:
JdbcDataSource h2ds = new JdbcDataSource();
h2ds.setURL("jdbc:h2:mem:testDB;create=true");
h2ds.setUser("");
h2ds.setPassword("");
return DSL.using(new DefaultConfiguration().set(new DataSourceConnectionProvider(h2ds)));
Ordinary queries work fine with the above configuration, but not the meta().getTables(). If I replace DataSourceConnectionProvider with an anonymous implementation that doesn't close the connection, no exception is thrown anymore.
It seems H2 does not approve of calling methods like getCatalogs() on the object returned by connection.getMetaData() after the underlying connection has been closed. Is this a bug in jooq-meta (I use 3.7.0) or is my configuration flawed?
jOOQ 3.7.0 / 3.7.1 and earlier cache the DatabaseMetaData in org.jooq.Meta. This is a bug (#4762) and will be fixed soon.
The reason you're running into this issue is that you're using the DataSourceConnectionProvider, which isn't really intended to work with standalone connections or "simple" DataSources. It closes the connection after every query (which normally translates to returning it to the pool). After the connection is closed, the cached DatabaseMetaData reference is stale.
You've already documented the workaround: Don't use a "simple" DataSource with jOOQ's DSLContext.meta() API.
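A minimal sketch of that workaround, assuming a standalone H2 connection (URL simplified from the question's): as long as the connection backing the DSLContext stays open, the cached DatabaseMetaData remains usable.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;

// Hand jOOQ a single connection that you control instead of a "simple"
// DataSource whose connections are closed after every statement.
try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:testDB", "", "")) {
    DSLContext ctx = DSL.using(conn, SQLDialect.H2);
    ctx.meta().getTables(); // DatabaseMetaData stays valid while conn is open
} catch (SQLException e) {
    throw new RuntimeException(e);
}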
I have a simple web application based on the Play Framework 2.3 (scala), which currently uses sqlite3 for the database. I'm sometimes, but not always, getting exceptions caused by inserting rows into the DB:
java.sql.SQLException: statement is not executing
at org.sqlite.Stmt.checkOpen(Stmt.java:49) ~[sqlite-jdbc-3.7.2.jar:na]
at org.sqlite.PrepStmt.executeQuery(PrepStmt.java:70) ~[sqlite-jdbc-3.7.2.jar:na]
...
The problem occurs in a few different contexts, all originating from SQL(statement).executeInsert()
For example:
val statementStr = "insert into session_record (condition_id, participant_id, client_timestamp, server_timestamp) values (%d,'%s',%d,%d)".format(conditionId, participantId, clientTime, serverTime)
DB.withConnection( implicit c => {
  val ps = SQL(statementStr)
  val pKey = ps.executeInsert()
  // ...
})
When an exception is not thrown, pKey contains an option with the table's auto-incremented primary key. When an exception is thrown, the database's state indicates that the basic statement was executed, and if I take the logged SQL statement and try it by hand, it also executes without a problem.
Insert statements that aren't executed with "executeInsert" also work. At this point, I could just use ".execute()" and get the max primary key separately, but I'm concerned there might be some deeper problem I'm missing.
Some configuration details:
In application.conf:
db.default.driver=org.sqlite.JDBC
db.default.url="jdbc:sqlite:database/mySqliteDb.db"
My sqlite version is 3.7.13 2012-07-17
The JDBC driver I'm using is "org.xerial" % "sqlite-jdbc" % "3.7.2" (via build.sbt).
I ran into this same issue today with the latest driver, and using execute() was the closest thing to a solution I found.
For the sake of completeness, here is the comment on Stmt.java for getGeneratedKeys():
/**
* As SQLite's last_insert_rowid() function is DB-specific not statement
* specific, this function introduces a race condition if the same
* connection is used by two threads and both insert.
* @see java.sql.Statement#getGeneratedKeys()
*/
This most certainly confirms that there is a hard-to-fix bug in the driver, due to SQLite's design, that makes executeInsert() not thread-safe.
First, it would be better not to use format for passing parameters to the statement, but to use either SQL("INSERT ... {aParam}").on('aParam -> value) or SQL"INSERT ... $value" (with Anorm interpolation). Then, if the exception is still there, I would suggest testing the connection/statement in a plain-vanilla standalone Java test app.
I'm seeing a very strange behavior in my application.
My application setup: Spring + Hibernate + C3p0
The application keeps running fine, when all of a sudden I start seeing these errors in the logs and the system totally stops processing any database-specific requests.
WARN c3p0.C3P0Registry - Could not create for find ConnectionCustomizer with class name ''.
java.lang.ClassNotFoundException:
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:169)
at com.mchange.v2.c3p0.C3P0Registry.getConnectionCustomizer(C3P0Registry.java:181)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPoolManager.getConnectionCustomizer(C3P0PooledConnectionPoolManager.java:636)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPoolManager.createPooledConnectionPool(C3P0PooledConnectionPoolManager.java:738)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPoolManager.getPool(C3P0PooledConnectionPoolManager.java:257)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPoolManager.getPool(C3P0PooledConnectionPoolManager.java:271)
at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:128)
at org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider.getConnection(LocalDataSourceConnectionProvider.java:80)
at org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:423)
at org.hibernate.jdbc.ConnectionManager.getConnection(ConnectionManager.java:144)
at org.hibernate.jdbc.AbstractBatcher.prepareSelectStatement(AbstractBatcher.java:123)
at org.hibernate.id.SequenceGenerator.generate(SequenceGenerator.java:73)
at org.hibernate.event.def.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:99)
at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.saveWithGeneratedOrRequestedId(DefaultSaveOrUpdateEventListener.java:187)
at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.entityIsTransient(DefaultSaveOrUpdateEventListener.java:172)
at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.performSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:94)
at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.onSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:70)
at org.hibernate.impl.SessionImpl.fireSaveOrUpdate(SessionImpl.java:507)
at org.hibernate.impl.SessionImpl.saveOrUpdate(SessionImpl.java:499)
at org.hibernate.impl.SessionImpl.saveOrUpdate(SessionImpl.java:495)
at org.springframework.orm.hibernate3.HibernateTemplate$18.doInHibernate(HibernateTemplate.java:690)
at org.springframework.orm.hibernate3.HibernateTemplate.execute(HibernateTemplate.java:365)
at org.springframework.orm.hibernate3.HibernateTemplate.saveOrUpdate(HibernateTemplate.java:687)
Why would c3p0 need to create a new connection pool at this particular time? Before these exceptions the application is 100% working fine and responding perfectly.
Also, I've not provided any connectionCustomizerClassName property in my c3p0 configuration, so why would it try to load one? In this stack trace I see it is a non-null empty string ''.
Any clues?
==============================================================================
The following Hibernate jars are in the application's classpath:
hibernate-3.2.6.ga.jar
spring-hibernate-1.2.6.jar
The following c3p0 jars are in the application's classpath:
c3p0-0.9.1.jar
c3p0-0.9.2-pre5.jar
c3p0-oracle-thin-extras-0.9.2-pre5.jar
Code that manually reads these properties and sets them on the data source (I do not read/set any connectionCustomizerClassName property here at all):
ComboPooledDataSource dataSource = new ComboPooledDataSource();
dataSource.setMinPoolSize(Integer.parseInt(props.getProperty("jdbc.hibernate.c3p0.minPoolSize")));
.....
Here are the c3p0 properties being used:
jdbc.hibernate.c3p0.minPoolSize=100
jdbc.hibernate.c3p0.initialPoolSize=100
jdbc.hibernate.c3p0.maxPoolSize=1000
jdbc.hibernate.c3p0.maxIdleTime=21600
jdbc.hibernate.c3p0.maxStatementsPerConnection=0
jdbc.hibernate.c3p0.maxStatements=0
jdbc.hibernate.c3p0.numHelperThreads=30
jdbc.hibernate.c3p0.checkoutTimeout=30000
jdbc.hibernate.c3p0.idleConnectionTestPeriod=900
jdbc.hibernate.c3p0.preferredTestQuery=SELECT 1 FROM dual
jdbc.hibernate.c3p0.maxConnectionAge=0
jdbc.hibernate.c3p0.maxIdleTimeExcessConnections=3600
jdbc.hibernate.c3p0.acquireIncrement=10
jdbc.hibernate.c3p0.acquireRetryDelay=5000
jdbc.hibernate.c3p0.acquireRetryAttempts=6
jdbc.hibernate.c3p0.propertyCycle=180
Following up a conversation in the comments on the posted question, it looks like the issue here is that VisualVM updates the null-valued property connectionCustomizerClassName to an empty String value, which c3p0 currently treats as non-null and interprets as a class name.
Going forward (c3p0-0.9.5-pre7 and above), c3p0 will guard against this and interpret an all-whitespace connectionCustomizerClassName as equivalent to null. But in the meantime, or for older versions, take care.
One easy workaround would be to define a NullConnectionCustomizer:
package mypkg;
import com.mchange.v2.c3p0.*;
public class NullConnectionCustomizer extends AbstractConnectionCustomizer
{}
And then use mypkg.NullConnectionCustomizer for connectionCustomizerClassName, so that the corresponding field in VisualVM is not empty and ambiguously interpretable as empty String or null.
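For completeness, a sketch of wiring it in programmatically alongside the other properties already set in the question's setup code, assuming c3p0's standard connectionCustomizerClassName config setter:

import com.mchange.v2.c3p0.ComboPooledDataSource;

ComboPooledDataSource dataSource = new ComboPooledDataSource();
// Explicitly naming a no-op customizer avoids the ambiguous empty-string value.
dataSource.setConnectionCustomizerClassName("mypkg.NullConnectionCustomizer");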