We created a SQL table with KEY_TYPE and VALUE_TYPE classes, and made those classes available to the server as well by placing the jar in the libs folder.
We then insert rows with SQL INSERT statements.
The rows are visible through both SQL and the cache API.
But when we do cache.get(key) from an Ignite thin client, it returns null.
The same works fine for an Ignite (thick) client node. It is strange that the same key is not available to thin clients.
We have tried the latest client and server versions as well; the result remains the same.
Is there any advice the Ignite experts can share on the above behaviour?
This seems related to: Ignite cache size returns the correct value, but trying to access the cache returns null.
It looks like you have different binary configurations for thin and thick clients/server nodes.
Try to adjust your thin client configuration with compactFooter=true and check if it resolves the issue.
clientConfig.setBinaryConfiguration(new BinaryConfiguration().setCompactFooter(true));
Defaults are different for backward compatibility and some historical reasons, but I hope this mismatch will be fixed in some future versions.
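For reference, a minimal sketch of a thin client configured this way (the address, cache name and key below are placeholders; adjust them to your setup):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.ClientConfiguration;

public class ThinClientLookup {
    public static void main(String[] args) throws Exception {
        // Align the thin client's binary format with the server/thick-client default.
        ClientConfiguration cfg = new ClientConfiguration()
                .setAddresses("127.0.0.1:10800")   // placeholder server address
                .setBinaryConfiguration(new BinaryConfiguration().setCompactFooter(true));

        try (IgniteClient client = Ignition.startClient(cfg)) {
            // "MY_CACHE" and the key are placeholders for the cache behind your SQL table.
            ClientCache<Object, Object> cache = client.cache("MY_CACHE");
            Object value = cache.get(buildKey());
            System.out.println("value = " + value);
        }
    }

    private static Object buildKey() {
        // Construct your KEY_TYPE instance here exactly as the thick client does.
        return 1;
    }
}
```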
I am trying to insert data from a SQL table into an Oracle table using the Copy Data activity in Data Factory. On the first try it runs fine, but on the second try it throws an error saying that an index on the target (Oracle) table has been corrupted.
Searching in different forums I found that apparently the Copy Data activity sends the insert statement in the following way: INSERT /*+ SYS_DL_CURSOR */ INTO
Any idea how to fix this?
Thank you very much for the help.
As per the error, the index is not actually corrupted; it was used twice. Maybe the operation did not run according to the planned schedule and executed in parallel.
The Copy activity is executed on an integration runtime. You can use different types of integration runtimes for different data copy scenarios:
When you're copying data between two data stores that are publicly accessible through the internet from any IP, you can use the Azure integration runtime for the copy activity. This integration runtime is secure, reliable, scalable, and globally available.
When you're copying data to and from data stores that are located on-premises or in a network with access control (for example, an Azure virtual network), you need to set up a self-hosted integration runtime.
Use whichever of the two options above applies to your scenario, and the error should be resolved.
See the supporting documentation: https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-overview
Years ago I wrote an app to capture data into H2 datafiles for easy transport and archival purposes. The application was written with H2 1.4.192.
Recently, I have been revisiting some load code related to that application, and I have found that there are substantial gains to be had for some of the things I am doing in H2 1.4.200.
I would like to be able to load the data that I had previously saved to the other databases. But I had some tables that used a now-invalid precision/scale specification. Here is an example:
CREATE CACHED TABLE PUBLIC.MY_TABLE(MY_COLUMN DATETIME(23,3) SELECTIVITY 5)
H2 databases created with 1.4.192 that contain tables like this will not load in 1.4.200; they fail with the following error:
Scale("23") must not be bigger than precision({1}); SQL statement:
CREATE CACHED TABLE PUBLIC.MY_TABLE(MY_COLUMN DATETIME(23,3) SELECTIVITY 5) [90051-200] 90051/90051
My question is how can I go about correcting the invalid table schema? My application utilizes a connection to an H2 database and then loads the data it contains into another database. Ideally I'd like to have my application be able to detect this situation and repair it automatically so the app can simply utilize the older data files. But in H2 1.4.200 I get the error right up front upon connection.
Is there a secret/special mode that will allow me to connect 1.4.200 to the database to repair its schema? I hope so.
Outside of that, it seems my only option is to use separate classloaders for the different H2 versions, with the remedial operations happening in one classloader and the load operations in another. Either that, or start another JVM instance to do the remedial operations.
I wanted to check for options before I did a bunch of work.
This problem is similar to this reported issue, but there were no specifics on how he performed his resolution.
This data type is not valid and was never supported by H2, but old versions of H2, due to a bug, somehow accepted it.
You need to export your database to a script with 1.4.192 Beta using
SCRIPT TO 'source.sql'
You need to use the original database file, because if you have already opened a file from 1.4.192 Beta with 1.4.200, it may have been corrupted by it; such an automatic upgrade is not supported.
You need to replace DATETIME(23,3) with TIMESTAMP(3), or whatever you need, using a text editor. If the exported SQL is too large for regular text editors, you can use a stream editor such as sed:
sed 's/DATETIME(23,3)/TIMESTAMP(3)/g' source.sql > fixed.sql
Now you can create a new database with 1.4.200 and import the edited script into it:
RUNSCRIPT FROM 'fixed.sql'
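If you want the application to perform these steps itself, a rough JDBC sketch is below. The JDBC URLs and file names are placeholders, and note that the export must run with 1.4.192 Beta on the classpath while the import must run with 1.4.200, so the separate-classloader or separate-JVM arrangement mentioned in the question is still required:

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class H2SchemaRepair {

    // Step 1: run in a process/classloader that has H2 1.4.192 Beta on the classpath.
    static void exportOldDatabase() throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:h2:./old-data", "sa", "");
             Statement st = c.createStatement()) {
            st.execute("SCRIPT TO 'source.sql'");
        }
    }

    // Step 2: plain text replacement, equivalent to the sed command above.
    static void fixScript() throws Exception {
        Path in = Paths.get("source.sql");
        Path out = Paths.get("fixed.sql");
        String sql = new String(Files.readAllBytes(in), StandardCharsets.UTF_8);
        Files.write(out, sql.replace("DATETIME(23,3)", "TIMESTAMP(3)").getBytes(StandardCharsets.UTF_8));
    }

    // Step 3: run in a process/classloader that has H2 1.4.200 on the classpath.
    static void importIntoNewDatabase() throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:h2:./new-data", "sa", "");
             Statement st = c.createStatement()) {
            st.execute("RUNSCRIPT FROM 'fixed.sql'");
        }
    }
}
```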
If the GlassFish server loses connectivity with the DB, the connections all die. I want to detect this and recover the connections.
When I set connection validation to "table", this works, but when I set it to "meta-data", it does not seem to work. Does anybody know why, or is this a known GlassFish bug?
This is likely not a bug in GlassFish, but a JDBC driver that caches metadata. This is also addressed in the GlassFish documentation:
table: Performing the query on a specified table. If this option is selected, Table Name must also be set. Choosing this option may be necessary if the JDBC driver caches calls to setAutoCommit() and getMetaData().
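For illustration only (this is not GlassFish's actual implementation), the difference between the two validation methods is roughly this:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

final class ValidationSketch {

    // "table" validation: a real query, so it always makes a round trip to the database.
    static boolean validateByTable(Connection c, String tableName) {
        try (Statement st = c.createStatement()) {
            st.executeQuery("SELECT 1 FROM " + tableName).close();
            return true;
        } catch (SQLException e) {
            return false; // the connection is genuinely broken
        }
    }

    // "meta-data" validation: only calls getMetaData(); a driver that caches the
    // DatabaseMetaData object can return it without touching the database at all,
    // so a dead connection may still look valid.
    static boolean validateByMetaData(Connection c) {
        try {
            return c.getMetaData() != null;
        } catch (SQLException e) {
            return false;
        }
    }
}
```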
I am using Tomcat DBCP along with Spring JDBC. When I start the server for the first time and load the web page, it fetches the data from the database and returns the correct result set. But when I make some changes to the DB using an editor and reload the page, it shows the old result set. I enabled database logging and can see that the query reaches the database. I think the result set is being cached somewhere in the container. Can someone tell me what parameter I need to look at?
Help will be appreciated.
Thanks.
AngerClown, thanks for your replies. You got me to the real pain point of the problem.
The real problem was on the database side. Because of some primary-key indexing issues, some process had acquired a lock on the table, and at the same time autocommit in my query browser was set to false. Because of this, when I fetched the data within the same transaction the changes were reflected, but they were not visible in other transactions.
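For anyone hitting the same symptom, a minimal plain-JDBC sketch of the situation (the table and column names are made up):

```java
import java.sql.Connection;
import java.sql.Statement;

public class AutoCommitVisibility {
    // "editor" is the query-browser connection, "webApp" is one of the DBCP pool's connections.
    static void demo(Connection editor, Connection webApp) throws Exception {
        editor.setAutoCommit(false);
        try (Statement st = editor.createStatement()) {
            st.executeUpdate("UPDATE my_table SET col = 'new' WHERE id = 1"); // hypothetical table
        }
        // Within the editor's own transaction the change is already visible,
        // but the web app's connection still sees the old, committed data...
        // ...until the editor actually commits:
        editor.commit();
        // Only now will queries on the webApp connection return the new value.
    }
}
```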
Without bothering much about it, I just recreated the table. And now it is working fine.
Thanks a lot.
-Santosh.
I'll try to be as clear as I can; I am out of ideas on this problem, even though it sounds like a classic one.
My application is running on the WebLogic 10.3.3 application server, and for the database I am using Oracle Database 11g. My problem is that there is a table in the DB, let's say "user", with a column, let's say "columnA". This table is updated by some module of the application.
What I want is: when the value of the column is "abc", I have to show an alert on a console (identified by IP). The IP can be retrieved from the DB, as it is configured there; it belongs to a Linux system other than the machine where the Oracle database is installed. Updates to the table are made continuously by the application module. Please tell me where I should start and what I should read; I cannot work out what the approach should be. Any help is much appreciated.
A trigger on the table can call UTL_HTTP to communicate with another machine (e.g. call a RESTful API).
The architectural questions are:
This will happen prior to the commit, so you may get false alerts if a change is rolled back.
If you wait for a response, it will slow the system down.
What do you do if you get a non-standard response (e.g. the other server isn't available)?
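If you go the UTL_HTTP route, the receiving side on the console machine can be quite small. Purely a sketch; the port, path and payload format are assumptions:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class AlertListener {
    public static void main(String[] args) throws Exception {
        // Tiny HTTP endpoint the database trigger can POST to via UTL_HTTP.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/alert", exchange -> {
            try (InputStream in = exchange.getRequestBody()) {
                String body = new String(in.readAllBytes(), StandardCharsets.UTF_8);
                // "Show alert to console": here we simply print it.
                System.out.println("ALERT from database: " + body);
            }
            exchange.sendResponseHeaders(204, -1); // respond quickly so the trigger isn't slowed down
            exchange.close();
        });
        server.start();
        System.out.println("Listening for alerts on http://0.0.0.0:8080/alert");
    }
}
```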