Jedis, Cannot get jedis connection: cannot get resource from pool - spring

I have seen answers in a couple of threads, but they didn't work out for me, and since my problem occurs only occasionally, I'm asking this question in case anyone has any idea.
I am using Jedis version 2.8.0, Spring Data Redis version 1.7.5, and Redis server version 2.8.4 for our caching application.
I have multiple caches that get saved in Redis, and get requests are served from Redis. I am using the Spring Data Redis API to save and get data.
All saves and gets work fine, but I occasionally get the exception below:
org.springframework.data.redis.RedisConnectionFailureException: Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
org.springframework.data.redis.connection.jedis.JedisConnectionFactory.fetchJedisConnector(JedisConnectionFactory.java:198)
org.springframework.data.redis.connection.jedis.JedisConnectionFactory.getConnection(JedisConnectionFactory.java:345)
org.springframework.data.redis.core.RedisConnectionUtils.doGetConnection(RedisConnectionUtils.java:129)
org.springframework.data.redis.core.RedisConnectionUtils.getConnection(RedisConnectionUtils.java:92)
org.springframework.data.redis.core.RedisConnectionUtils.getConnection(RedisConnectionUtils.java:79)
org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:191)
org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:166)
org.springframework.data.redis.core.AbstractOperations.execute(AbstractOperations.java:88)
org.springframework.data.redis.core.DefaultHashOperations.get(DefaultHashOperations.java:49)
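For context, the failing read is an ordinary hash get through the template, along the lines of this sketch (the cache and key names are made up):

HashOperations<String, String, Object> hashOps = redisTemplate.opsForHash();
Object cached = hashOps.get("someCache", "someKey"); // intermittently fails with the exception above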
My Redis configuration class:
@Configuration
public class RedisConfiguration {

    @Value("${redisCentralCachingURL}")
    private String redisHost;

    @Value("${redisCentralCachingPort}")
    private int redisPort;

    @Bean
    public StringRedisSerializer stringRedisSerializer() {
        return new StringRedisSerializer();
    }

    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        JedisConnectionFactory factory = new JedisConnectionFactory();
        factory.setHostName(redisHost);
        factory.setPort(redisPort);
        factory.setUsePool(true);
        return factory;
    }

    @Bean
    public RedisTemplate<String, Object> redisTemplate() {
        RedisTemplate<String, Object> redisTemplate = new RedisTemplate<>();
        redisTemplate.setConnectionFactory(jedisConnectionFactory());
        redisTemplate.setExposeConnection(true);
        // No value serializer required; all serialization done during impl
        redisTemplate.setKeySerializer(stringRedisSerializer());
        // redisTemplate.setHashKeySerializer(stringRedisSerializer());
        redisTemplate.setHashValueSerializer(new GenericSnappyRedisSerializer());
        redisTemplate.afterPropertiesSet();
        return redisTemplate;
    }

    @Bean
    public RedisCacheManager cacheManager() {
        RedisCacheManager redisCacheManager = new RedisCacheManager(redisTemplate());
        redisCacheManager.setTransactionAware(true);
        redisCacheManager.setLoadRemoteCachesOnStartup(true);
        redisCacheManager.setUsePrefix(true);
        return redisCacheManager;
    }
}
Has anyone faced this issue, or does anyone have an idea why this might happen?

We were facing the same problem with RxJava: the application ran fine, but after some time no connections could be acquired from the pool anymore. After days of debugging we finally figured out what caused the problem:
redisTemplate.setEnableTransactionSupport(true)
somehow caused spring-data-redis to not release connections. We needed transaction support for MULTI / EXEC, but in the end we changed the implementation to get rid of this problem.
We still don't know whether this is a bug or wrong usage on our side.
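For reference, one way to keep MULTI / EXEC semantics without template-level transaction support is a SessionCallback, which runs all commands on a single connection and checks it back into the pool when it returns (a sketch; the hash name and values are made up):

List<Object> txResults = redisTemplate.execute(new SessionCallback<List<Object>>() {
    public List<Object> execute(RedisOperations operations) throws DataAccessException {
        operations.multi();
        operations.opsForHash().put("someCache", "someKey", "someValue");
        return operations.exec(); // connection is released after the callback returns
    }
});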

I moved from RedisTemplate to plain Jedis.
I added the pool configuration below and don't see the exception anymore:
jedisPoolConfig.setMaxIdle(30);
jedisPoolConfig.setMinIdle(10);
The same configuration can be applied when using RedisTemplate:
jedisConnectionFactory.getPoolConfig().setMaxIdle(30);
jedisConnectionFactory.getPoolConfig().setMinIdle(10);
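A fuller pool configuration might look like this (a sketch assuming Jedis 2.x; the exact values are workload-dependent):

JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
jedisPoolConfig.setMaxTotal(50);        // upper bound on connections in the pool
jedisPoolConfig.setMaxIdle(30);
jedisPoolConfig.setMinIdle(10);
jedisPoolConfig.setTestOnBorrow(true);  // validate connections before handing them out
jedisPoolConfig.setTestWhileIdle(true); // evict broken idle connections

JedisConnectionFactory factory = new JedisConnectionFactory(jedisPoolConfig);
factory.setHostName(redisHost);
factory.setPort(redisPort);
factory.setUsePool(true);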

The problem is with the Redis configuration.
In my case I was using this property locally, and when I commented it out, the issue was resolved:
#spring.redis.database=12
The correct properties (for a Sentinel setup) are:
spring.redis.sentinel.master=mymaster
spring.redis.password=
spring.redis.sentinel.nodes=localhost:5000

I fixed mine by changing this in my application.yml file:
redis:
  password: ${REDIS_SECRET_KEY: null}
to this:
redis:
  password: ${REDIS_SECRET_KEY:}
With ${REDIS_SECRET_KEY: null}, the missing variable resolves to the literal string "null", which is then sent as the password; ${REDIS_SECRET_KEY:} defaults to an empty password instead.

Related

Transactional Kafka listener retry

I'm trying to create a Spring Kafka @KafkaListener which is both transactional (Kafka and database) and uses retry. I am using Spring Boot. The documentation for error handlers says that
When transactions are being used, no error handlers are configured, by default, so that the exception will roll back the transaction. Error handling for transactional containers is handled by the AfterRollbackProcessor. If you provide a custom error handler when using transactions, it must throw an exception if you want the transaction rolled back (source).
However, when I configure my listener with a @Transactional("kafkaTransactionManager") annotation, even though I can clearly see that the template rolls back produced messages when an exception is raised, the container actually uses a non-null commonErrorHandler rather than an AfterRollbackProcessor. This is the case even when I explicitly configure the commonErrorHandler to null in the container factory. I do not see any evidence that my configured AfterRollbackProcessor is ever invoked, even after the commonErrorHandler exhausts its retry policy.
I'm uncertain how Spring Kafka's error handling works in general at this point, and am looking for clarification. The questions I want to answer are:
What is the recommended way to configure transactional Kafka listeners with Spring Kafka 2.8.0? Have I done it correctly?
Should the common error handler indeed be used rather than the after-rollback processor? Does it roll back the current transaction before trying to process the message again according to the retry policy?
In general, when I have a transactional Kafka listener, is there ever more than one layer of error handling I should be aware of? E.g. if my common error handler re-throws exceptions of kind T, will another handler catch that and potentially start a retry of its own?
Thanks!
My code:
@Configuration
@EnableScheduling
@EnableKafka
public class KafkaConfiguration {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaConfiguration.class);

    @Bean
    public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
            ConsumerFactory<Object, Object> consumerFactory) {
        var factory = new ConcurrentKafkaListenerContainerFactory<Integer, Object>();
        factory.setConsumerFactory(consumerFactory);
        var afterRollbackProcessor =
                new DefaultAfterRollbackProcessor<Object, Object>(
                        (record, e) -> LOGGER.info("After rollback processor triggered! {}", e.getMessage()),
                        new FixedBackOff(1_000, 1));
        // Configures different error handling for different listeners.
        factory.setContainerCustomizer(
                container -> {
                    var groupId = container.getContainerProperties().getGroupId();
                    if (groupId.equals("InputProcessorHigh") || groupId.equals("InputProcessorLow")) {
                        container.setAfterRollbackProcessor(afterRollbackProcessor);
                        // If I set commonErrorHandler to null, it is defaulted instead.
                    }
                });
        return factory;
    }
}
@Component
public class InputProcessor {

    private static final Logger LOGGER = LoggerFactory.getLogger(InputProcessor.class);

    private final KafkaTemplate<Integer, Object> template;
    private final AuditLogRepository repository;

    @Autowired
    public InputProcessor(KafkaTemplate<Integer, Object> template, AuditLogRepository repository) {
        this.template = template;
        this.repository = repository;
    }

    @KafkaListener(id = "InputProcessorHigh", topics = "input-high", concurrency = "3")
    @Transactional("kafkaTransactionManager")
    public void inputHighProcessor(ConsumerRecord<Integer, Input> input) {
        processInputs(input);
    }

    @KafkaListener(id = "InputProcessorLow", topics = "input-low", concurrency = "1")
    @Transactional("kafkaTransactionManager")
    public void inputLowProcessor(ConsumerRecord<Integer, Input> input) {
        processInputs(input);
    }

    public void processInputs(ConsumerRecord<Integer, Input> input) {
        var key = input.key();
        var message = input.value().getMessage();
        var output = new Output().setMessage(message);
        LOGGER.info("Processing {}", message);
        template.send("output-left", key, output);
        repository.createIfNotExists(message); // idempotent insert
        template.send("output-right", key, output);
        if (message.contains("ERROR")) {
            throw new RuntimeException("Simulated processing error!");
        }
    }
}
My application.yaml (minus my bootstrap-servers and security config):
spring:
  kafka:
    consumer:
      auto-offset-reset: 'earliest'
      key-deserializer: 'org.apache.kafka.common.serialization.IntegerDeserializer'
      value-deserializer: 'org.springframework.kafka.support.serializer.JsonDeserializer'
      isolation-level: 'read_committed'
      properties:
        spring.json.trusted.packages: 'java.util,java.lang,com.github.tomboyo.silverbroccoli.*'
    producer:
      transaction-id-prefix: 'tx-'
      key-serializer: 'org.apache.kafka.common.serialization.IntegerSerializer'
      value-serializer: 'org.springframework.kafka.support.serializer.JsonSerializer'
[EDIT] (solution code)
I was able to figure it out with Gary's help. As they say, we need to set the Kafka transaction manager on the container so that the container can start transactions. The transactions documentation doesn't cover how to do this, and there are a few ways. First, we can get the mutable container properties object from the factory and set the transaction manager on that:
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory() {
    var factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.getContainerProperties().setTransactionManager(...);
    return factory;
}
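For completeness, here is a hedged sketch of that first approach with the transaction manager injected (assuming Spring Boot auto-configures a KafkaTransactionManager bean because transaction-id-prefix is set):

@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
        ConsumerFactory<Object, Object> consumerFactory,
        KafkaTransactionManager<?, ?> kafkaTransactionManager) {
    var factory = new ConcurrentKafkaListenerContainerFactory<Object, Object>();
    factory.setConsumerFactory(consumerFactory);
    // The container now starts the Kafka transaction and sends offsets to it.
    factory.getContainerProperties().setTransactionManager(kafkaTransactionManager);
    return factory;
}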
If we are in Spring Boot, we can re-use some of the auto configuration to set sensible defaults on our factory before we customize it. We can see that the KafkaAutoConfiguration module imports KafkaAnnotationDrivenConfiguration, which produces a ConcurrentKafkaListenerContainerFactoryConfigurer bean. This appears to be responsible for all the default configuration in a Spring-Boot application. So, we can inject that bean and use it to initialize our factory before adding customizations:
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer bootConfigurer,
        ConsumerFactory<Object, Object> consumerFactory) {
    var factory = new ConcurrentKafkaListenerContainerFactory<Object, Object>();
    // Apply default Spring Boot configuration.
    bootConfigurer.configure(factory, consumerFactory);
    factory.setContainerCustomizer(
            container -> {
                ... // do whatever
            });
    return factory;
}
Once that's done, the container uses the AfterRollbackProcessor for error handling, as expected. As long as I don't explicitly configure a common error handler, this appears to be the only layer of exception handling.
The AfterRollbackProcessor is only used when the container knows about the transaction; you must provide a KafkaTransactionManager to the container so that the Kafka transaction is started by the container and the offsets are sent to the transaction. Using @Transactional is not the correct way to start a Kafka transaction.
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#transactions

Include newly added data sources into route Data Source object without restarting the application server

I implemented Spring's AbstractRoutingDataSource, dynamically determining the actual DataSource based on the current context.
I referred to this article: https://www.baeldung.com/spring-abstract-routing-data-source.
On Spring Boot application startup, I create a map of contexts to DataSource objects to configure our AbstractRoutingDataSource. All these client context details are fetched from a database table.
@Bean
@DependsOn("dataSource")
@Primary
public DataSource routeDataSource() {
    RoutingDataSource routeDataSource = new RoutingDataSource();
    DataSource defaultDataSource = (DataSource) applicationContext.getBean("dataSource");
    List<EstCredentials> credentials = LocalDataSourcesDetailsLoader.getAllCredentails(defaultDataSource); // fetched from a database table
    localDataSourceRegistrationBean.registerDataSourceBeans(credentials);
    routeDataSource.setDefaultTargetDataSource(defaultDataSource);
    Map<Object, Object> targetDataSources = new HashMap<>();
    for (EstCredentials credential : credentials) {
        targetDataSources.put(credential.getEstCode().toString(),
                (DataSource) applicationContext.getBean(credential.getEstCode().toString()));
    }
    routeDataSource.setTargetDataSources(targetDataSources);
    return routeDataSource;
}
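For context, the RoutingDataSource above is the usual AbstractRoutingDataSource subclass from the linked article; a sketch (ClientContextHolder is a hypothetical thread-local context holder):

public class RoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        return ClientContextHolder.getEstCode();
    }
}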
The problem is that if I add new client details, they do not show up in routeDataSource. The obvious reason is that these values are set on startup.
How can I add a new client context and reinitialize the routeDataSource object?
I am planning to write a service that fetches all the newly added client contexts and resets the routeDataSource object, so there is no need to restart the server each time the client details change.
A simple solution to this situation is adding @RefreshScope to the bean definition:
@Bean
@Primary
@RefreshScope
public DataSource routeDataSource() {
    RoutingDataSource routeDataSource = new RoutingDataSource();
    DataSource defaultDataSource = (DataSource) applicationContext.getBean("dataSource");
    List<EstCredentials> credentials = LocalDataSourcesDetailsLoader.getAllCredentails(defaultDataSource); // fetched from a database table
    localDataSourceRegistrationBean.registerDataSourceBeans(credentials);
    routeDataSource.setDefaultTargetDataSource(defaultDataSource);
    Map<Object, Object> targetDataSources = new HashMap<>();
    for (EstCredentials credential : credentials) {
        targetDataSources.put(credential.getEstCode().toString(),
                (DataSource) applicationContext.getBean(credential.getEstCode().toString()));
    }
    routeDataSource.setTargetDataSources(targetDataSources);
    return routeDataSource;
}
Add Spring Boot Actuator as a dependency:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
Then trigger a POST to the /actuator/refresh endpoint to update the DataSource (actually, every refresh-scoped bean).
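Note that @RefreshScope and the refresh endpoint come from Spring Cloud (spring-cloud-context), so that dependency is needed alongside Actuator, and on Spring Boot 2.x the endpoint also has to be exposed, e.g. in application.properties:

management.endpoints.web.exposure.include=refresh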
So this will depend on how much you know about the datasources to be added, but you could set this up as a multi-tenant project. Another example of creating new datasources:
@Autowired
private Map<String, DataSource> mars2DataSources;

public void addDataSourceAtRuntime() {
    DataSourceBuilder dataSourceBuilder = DataSourceBuilder.create(
            MultiTenantJPAConfiguration.class.getClassLoader())
            .driverClassName("org.postgresql.Driver")
            .username("postgres")
            .password("postgres")
            .url("jdbc:postgresql://localhost:5412/somedb");
    mars2DataSources.put("tenantX", dataSourceBuilder.build());
}
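One caveat (an assumption based on how AbstractRoutingDataSource resolves its targets once at initialization): after registering a new tenant DataSource, the target map has to be re-applied so the resolved map is rebuilt, roughly:

public void refreshRoutingDataSource(AbstractRoutingDataSource routeDataSource,
                                     Map<Object, Object> targetDataSources) {
    routeDataSource.setTargetDataSources(targetDataSources);
    routeDataSource.afterPropertiesSet(); // re-resolves the target DataSources
}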
Given that you are using Oracle, you could also use its database change notification features.
Think of it as a listener in the JDBC driver that gets notified whenever something changes in your database table. So upon receiving a change, you could reinitialize/add datasources.
You can find a tutorial of how to do this here: https://docs.oracle.com/cd/E11882_01/java.112/e16548/dbchgnf.htm#JJDBC28820
Though, depending on your organization, database notifications may need some extra firewall settings for the communication to work.
Advantage: you do not need to manually call the REST endpoint when something changes (though Marcos Barberios' answer is perfectly valid!).
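A rough sketch following the Oracle tutorial linked above (the table name, query, and reload callback are hypothetical; error handling omitted):

Properties options = new Properties();
options.setProperty(OracleConnection.DCN_NOTIFY_ROWIDS, "true");
DatabaseChangeRegistration dcr = oracleConnection.registerDatabaseChangeNotification(options);
dcr.addListener(event -> reloadDataSources()); // rebuild the routing DataSource here

// Associate a query with the registration so changes to this table trigger notifications.
try (Statement stmt = oracleConnection.createStatement()) {
    ((OracleStatement) stmt).setDatabaseChangeRegistration(dcr);
    try (ResultSet rs = stmt.executeQuery("SELECT est_code FROM est_credentials")) {
        while (rs.next()) { /* drain to complete the registration */ }
    }
}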

Error accessing Spring session data stored in Redis

In my Spring project with REST controllers, I want to store session information in Redis.
In my application.properties I have defined the following:
spring.session.store-type=redis
spring.session.redis.namespace=rdrestcore
com.xyz.redis.host=192.168.201.46
com.xyz.redis.db=0
com.xyz.redis.port=6379
com.xyz.redis.pool.min-idle=5
I also have enabled Http Redis Session with:
@Configuration
@EnableRedisHttpSession
public class SessionConfig extends AbstractHttpSessionApplicationInitializer {
}
I finally have a redis connection factory like this:
@Configuration
@EnableRedisRepositories
public class RdRedisConnectionFactory {

    @Autowired
    private Environment env;

    @Value("${com.xyz.redis.host}")
    private String redisHost;

    @Value("${com.xyz.redis.db}")
    private Integer redisDb;

    @Value("${com.xyz.redis.port}")
    private Integer redisPort;

    @Value("${com.xyz.redis.pool.min-idle}")
    private Integer redisPoolMinIdle;

    @Bean
    JedisPoolConfig jedisPoolConfig() {
        JedisPoolConfig poolConfig = new JedisPoolConfig();
        if (redisPoolMinIdle != null) poolConfig.setMinIdle(redisPoolMinIdle);
        return poolConfig;
    }

    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        JedisConnectionFactory jedisConFactory = new JedisConnectionFactory();
        if (redisHost != null) jedisConFactory.setHostName(redisHost);
        if (redisPort != null) jedisConFactory.setPort(redisPort);
        if (redisDb != null) jedisConFactory.setDatabase(redisDb);
        jedisConFactory.setPoolConfig(jedisPoolConfig());
        return jedisConFactory;
    }

    @Bean
    public RedisTemplate<String, Object> redisTemplate() {
        final RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(jedisConnectionFactory());
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new GenericJackson2JsonRedisSerializer());
        template.setHashKeySerializer(new StringRedisSerializer());
        template.setHashValueSerializer(new GenericJackson2JsonRedisSerializer());
        return template;
    }
}
With this configuration the session information gets stored in Redis, but it is serialized very strangely. I mean, the keys are readable, but the values stored are not (I query the information from a program called "Redis Desktop Manager"). For example, for a new session I get a hash with the key:
spring:session:sessions:c1110241-0aed-4d40-9861-43553b3526cb
The keys this hash contains are: maxInactiveInterval, lastAccessedTime, creationTime, sessionAttr:SPRING_SECURITY_CONTEXT,
but their values are all encoded like something similar to this (for the creationTime key):
\xAC\xED\x00\x05sr\x00\x0Ejava.lang.Long;\x8B\xE4\x90\xCC\x8F#\xDF\x02\x00\x01J\x00\x05valuexr\x00\x10java.lang.Number\x86\xAC\x95\x1D\x0B\x94\xE0\x8B\x02\x00\x00xp\x00\x00\x01b$G\x88*
If I try to access this information from code, with the redisTemplate, it raises an exception like this one:
org.springframework.data.redis.serializer.SerializationException: Cannot deserialize; nested exception is
org.springframework.core.serializer.support.SerializationFailedException:
Failed to deserialize payload. Is the byte array a result of
corresponding serialization for DefaultDeserializer?; nested exception
is java.io.StreamCorruptedException: invalid stream header: 73657373
at org.springframework.data.redis.serializer.JdkSerializationRedisSerializer.deserialize(JdkSerializationRedisSerializer.java:82)
I think it is some kind of problem with the serialization/deserialization of the Spring session information, but I don't know what else to do to be able to control that.
Does anyone know what I'm doing wrong?
Thank you
You're on the right track: your problem is indeed serialization. Spring Session writes its session entries using JDK serialization by default, so a template that reads them back must use the JDK serializer rather than the Jackson one. Try this configuration (configure your template with these serializers only):
template.setHashValueSerializer(new JdkSerializationRedisSerializer());
template.setHashKeySerializer(new StringRedisSerializer());
template.setKeySerializer(new StringRedisSerializer());
template.setDefaultSerializer(new JdkSerializationRedisSerializer());
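Putting those into the full template bean from the question would look roughly like this (other settings unchanged):

@Bean
public RedisTemplate<String, Object> redisTemplate() {
    final RedisTemplate<String, Object> template = new RedisTemplate<>();
    template.setConnectionFactory(jedisConnectionFactory());
    template.setKeySerializer(new StringRedisSerializer());
    template.setHashKeySerializer(new StringRedisSerializer());
    template.setHashValueSerializer(new JdkSerializationRedisSerializer());
    template.setDefaultSerializer(new JdkSerializationRedisSerializer());
    return template;
}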

ActiveMQ fails to start (EOFException: null)

I'm having issues starting my Spring web app with Camel and ActiveMQ.
The particular error I'm getting is not very descriptive:
16:18:53.552 [localhost-startStop-1] ERROR o.a.a.b.BrokerService - Failed to start Apache ActiveMQ ([activemq.myworkingdomain.com, ID:Ricardos-MacBook-Air.local-65257-1453738732697-0:2], {})
java.io.EOFException: null
at java.io.DataInputStream.readBoolean(DataInputStream.java:244) ~[na:1.8.0_60]
at org.apache.activemq.openwire.v11.SubscriptionInfoMarshaller.looseUnmarshal(SubscriptionInfoMarshaller.java:133) ~[activemq-client-5.13.0.jar:5.13.0]
at org.apache.activemq.openwire.OpenWireFormat.doUnmarshal(OpenWireFormat.java:366) ~[activemq-client-5.13.0.jar:5.13.0]
at org.apache.activemq.openwire.OpenWireFormat.unmarshal(OpenWireFormat.java:277) ~[activemq-client-5.13.0.jar:5.13.0]
at org.apache.activemq.store.kahadb.KahaDBStore$KahaDBTopicMessageStore$1.execute(KahaDBStore.java:755) ~[activemq-core-5.7.0.jar:5.7.0]
at org.apache.kahadb.page.Transaction.execute(Transaction.java:769) ~[kahadb-5.7.0.jar:5.7.0]
at org.apache.activemq.store.kahadb.KahaDBStore$KahaDBTopicMessageStore.getAllSubscriptions(KahaDBStore.java:749) ~[activemq-core-5.7.0.jar:5.7.0]
at org.apache.activemq.store.kahadb.KahaDBStore$KahaDBTopicMessageStore.<init>(KahaDBStore.java:663) ~[activemq-core-5.7.0.jar:5.7.0]
at org.apache.activemq.store.kahadb.KahaDBStore.createTopicMessageStore(KahaDBStore.java:920) ~[activemq-core-5.7.0.jar:5.7.0]
at org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter.createTopicMessageStore(KahaDBPersistenceAdapter.java:100) ~[activemq-core-5.7.0.jar:5.7.0]
I'm sticking with Java and simple Spring configuration, no XML:
@Bean
public CamelContext camelContext() {
    final CamelContext camelContext = new DefaultCamelContext();
    camelContext.addComponent("activemq", activeMQComponent());
    try {
        CamelConfigurator.addRoutesToCamel(camelContext);
        camelContext.start();
    } catch (final Exception e) {
        LOGGER.error("Failed to start the camel context", e);
    }
    LOGGER.info("Started the Camel context and components");
    return camelContext;
}

@Bean
public ActiveMQComponent activeMQComponent() {
    final ActiveMQComponent activeMQComponent = new ActiveMQComponent();
    activeMQComponent.setConfiguration(jmsConfiguration());
    activeMQComponent.setTransacted(true);
    activeMQComponent.setCacheLevelName("CACHE_CONSUMER");
    return activeMQComponent;
}

@Bean
public JmsConfiguration jmsConfiguration() {
    final JmsConfiguration jmsConfiguration = new JmsConfiguration(pooledConnectionFactory());
    jmsConfiguration.setConcurrentConsumers(CONCURRENT_CONSUMERS);
    return jmsConfiguration;
}

@Bean
public PooledConnectionFactory pooledConnectionFactory() {
    final PooledConnectionFactory pooledConnectionFactory = new PooledConnectionFactory(
            activeMQConnectionFactory());
    pooledConnectionFactory.setMaxConnections(MAX_CONNECTIONS_TO_POOL_FACTORY);
    // pooledConnectionFactory.start();
    return pooledConnectionFactory;
}

@Bean
public ActiveMQConnectionFactory activeMQConnectionFactory() {
    return new ActiveMQConnectionFactory(username, password, activeMQBrokerURL);
}
I've tried changing the order in which things load, changing the routes and what's included in them, and deleting the local KahaDB folder, but nothing seems to work or even point me in the right direction.
Your stack trace shows that your application is using:
activemq-client 5.13.0
activemq-core 5.7.0
I am pretty sure that this version mismatch is responsible for the error.
Can you just import activemq-all 5.13.0 and try again?
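For reference, that would mean replacing the mixed activemq artifacts with a single dependency along these lines (verify the version against your dependency tree):

<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-all</artifactId>
    <version>5.13.0</version>
</dependency>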
According to your stack trace, it seems like you are experiencing issues with KahaDB (the default persistence engine for Apache ActiveMQ).
I believe KahaDB has a bug; you can find information about it on the Apache issue tracker.
I had this issue once, and it is strange since it seems to happen at random: sometimes it works, sometimes it fails.
I managed to fix it by choosing a different persistence engine for ActiveMQ. Maybe you can give that a try and tell me if it works for you. Hope this information helps.
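If your broker is embedded, switching the persistence engine might look like this sketch (in-memory persistence, so messages do not survive a restart; the rest of the broker setup is assumed):

BrokerService broker = new BrokerService();
broker.setPersistenceAdapter(new MemoryPersistenceAdapter()); // instead of the default KahaDB store
broker.start();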

Glassfish - Resource Injection not working

I'm using the NetBeans IDE and the Glassfish server. For some reason I can't get a DataSource injected (I have tried a million variations).
private DataSource iserver;

@Resource(name = "jdbc/iserver", type = DataSource.class)
public void setIServer(DataSource dataSource) {
    this.iserver = dataSource;
}
(I already tried adding the @Resource annotation directly to the field.) The connection pool and JDBC resource are configured on Glassfish, and for the time being I have added the workaround code (in the same class):
ctx = new InitialContext();
iserver = (DataSource) ctx.lookup("jdbc/iserver");
The workaround code works perfectly. I don't see any obvious relevant errors in the Glassfish log. I do see this, but I'm not sure it's related:
name cannot be null
at javax.management.ObjectName.construct(ObjectName.java:405)
at javax.management.ObjectName.<init>(ObjectName.java:1403)
at org.glassfish.admingui.common.handlers.ProxyHandlers.getRuntimeProxyAttrs(ProxyHandlers.java:289)
at org.glassfish.admingui.common.handlers.ProxyHandlers.getProxyAttrs(ProxyHandlers.java:273)
at ...
Any suggestions?
Use "lookup" instead of "name":
@Resource(lookup = "java:global/env/jdbc/__default")
DataSource dataSource;
Make sure you're in a session bean, or injection won't work.
Here's an example of how I inject a DataSource:
@Resource(name = "jdbc/my_db")
private DataSource dataSource;
