Relay ConnectionHandler.getConnectionID() connection does not exist error - graphql

I'm trying to clean up the Relay GraphQL flow in my app. At the moment I'm still using fetchKey and record invalidation in some places, which I'd like to replace with the @appendEdge/@appendNode directives. Unfortunately, every time I try to get the proper connection to use them on, it ends up with warnings stating:
Warning: [Relay][Mutation] The connection with id 'client:ParentType14nif3uirdfut431431hg2rr:__ParentTypeChildrenList_children_connection' doesn't exist.
I'm using ConnectionHandler.getConnectionID(<id of the parent object of the fragment>, <connection key I've specified>).
Does anyone know what the issue might be here? Also, if I understand correctly, once this works my edge is supposed to be added to the store directly and Relay will only determine whether everything is up to date without fetching that new edge from the server, right?

Relay matches connection identity not only by the parent ID and connection key but also by the latest arguments ("filters") the connection was called with.
By passing filters: [] to the connection you're telling Relay that no arguments should be used to determine connection identity. This can work sometimes, but if your arguments actually affect the output of your connection it can introduce unexpected behavior, such as your connection results not updating when only some argument has changed.
Alternatively, you can pass the latest arguments you called your connection with to your getConnectionID call. This can be very verbose and difficult to do at times, but there's no other workaround.
From the docs: https://relay.dev/docs/guided-tour/list-data/updating-connections/#connection-identity-with-filters

Related

Oracle ORDS: GET request returns old data, then after a period of time the changed data

I am having a problem with Oracle REST Data Services (ORDS) and I can't find a solution.
The Problem is as follows:
We are using ORDS via a Tomcat web server and I have two endpoints defined, one to update a dataset and one to get all datasets from this table.
If I update the value via my endpoint the change is written to the table, but if I then try to get the table with this change, ORDS only responds with the old, unchanged table. After a certain period of time while constantly trying to get the change, it responds with the expected values (happens after at most 1 minute, can be earlier).
Because of this behaviour I suspected some kind of caching, but I can't find any such configuration in the Oracle database or in Tomcat.
Another point for this theory is that I logged what happens in my GET procedure and found that only the one request with the correct values gets logged, as if the others didn't even happen.
The requests giving me the old values come back in the 4-8 ms range, while the request with the correct data takes 100-200 ms.
Thanks for your help :)
I tried logging what happens, but found that only the request with the fresh values was logged.
I tried restarting the Tomcat web server to make sure the cache was cleared, but this didn't fix the problem.
I searched for a configuration in ORDS or Oracle where a cache would be defined, but none was ever set.
I tried setting the value via a SQL update instead of an endpoint, but even then I see the change only after a delay.
Do you have a full overview of the communication path? Maybe there is a proxy in between?
If Tomcat has no caching configuration and you restarted the web server during your tests and still see the same issue, then there is maybe more to it...
Kind regards
M-Achilles

Is it possible to make a runtime db connection and use it in Schema, DB and models without affecting configs?

I want to use dynamic databases at runtime without affecting config/database.php, because of concurrent users.
I have a main db with a table that contains references to several other dbs. At runtime I need to not only connect to those dbs but may also want to run migrations on them.
I am aware that this is possible by having a second connection entry in config/database.php under connections, but I have a feeling that if two users hit the server at the same time, physical changes to the config file may create a conflict.
I also read (and experimented) that you can edit the second connection at runtime using the code below:
\Config::set('database.connections.mysql2.database', 'somedynamicdb');
DB::purge('mysql2');
But I fear that if it persists changes across different users, it may conflict for concurrent users. And if it does not persist changes, then it won't work for migrations.
I want to understand/know two things specifically:
What is the scope of the code above (i.e. the Config::set() call)? Does it persist across different user requests to the server?
If I call migrations using Artisan::call('migrate') with a --database=connectionname option right after I change the db name for connectionname, will that use the dynamically set database or the physical config value?
UPDATE
Also worth noting: a call to Artisan::call('migrate') with --database=connectionname will make that connection persist for the rest of the current request.
See here for details:
https://github.com/laravel/framework/issues/28253
Config::set will only apply for the request for which it was set, won't apply to any other requests, and will not persist beyond the request. If you're not processing a request (e.g. a CLI command) then it won't affect anything beyond the current PHP process.
As for Item #2, if you're invoking from the command line, you can just do DB_CONNECTION=connectionname php artisan migrate. If you need to invoke the artisan command from code, using Config::set is still the right way to go.
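For illustration, here is a minimal sketch of that flow, reusing the mysql2 connection from the question; the tenant migrations path and database name are assumptions, not something from the original post:
use Illuminate\Support\Facades\Artisan;
use Illuminate\Support\Facades\Config;
use Illuminate\Support\Facades\DB;

// Point the existing 'mysql2' connection at the tenant database.
// This only affects the current request / PHP process.
Config::set('database.connections.mysql2.database', 'somedynamicdb');

// Drop the cached PDO instance so the next query reconnects
// with the updated configuration.
DB::purge('mysql2');
DB::reconnect('mysql2');

// Run migrations against the dynamically configured connection.
Artisan::call('migrate', [
    '--database' => 'mysql2',
    '--path'     => 'database/migrations/tenant', // hypothetical path
    '--force'    => true,                         // skip the production confirmation prompt
]);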
We use connections created on the fly here all the time and it works very well. We set this up in a middleware that we include after authentication, so it is only valid for the current user's request, based on their login information.
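As a rough sketch of that middleware approach (the 'tenant' connection name, the SetTenantDatabase class and the database_name column on the user are assumptions, not the poster's actual code):
namespace App\Http\Middleware;

use Closure;
use Illuminate\Support\Facades\Config;
use Illuminate\Support\Facades\DB;

class SetTenantDatabase
{
    public function handle($request, Closure $next)
    {
        // The authenticated user record carries a reference to its own database.
        $database = $request->user()->database_name;

        // Rewire the 'tenant' connection for this request only and drop any
        // cached PDO instance so the next query uses the new database.
        Config::set('database.connections.tenant.database', $database);
        DB::purge('tenant');

        return $next($request);
    }
}
Registered after the authentication middleware, this only affects the request in which it runs, so concurrent users do not interfere with each other.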

AWS RDS database can't read record that was just written to database

I'm seeing an error with some Laravel code that uses an AWS RDS database. The code writes a record to the database and then immediately does a search to load that record using the primary key and gets no results.
If I try it manually afterwards I find the record. If I insert a 1-second sleep in the code it works correctly.
I've tried this using Laravel's separate settings for read and write hosts. I've also tried setting them to the same host and only using one host. The result is always the same. However other environments with the same configuration do not have the error.
Is there an option in RDS that needs to be changed to have the record available immediately after it's written?
The error is due to MySQL master-slave replication lag.
A common mistake is to use a MySQL cluster and then perform a read immediately after a write.
Since the read occurs on one of the slave/read hosts and the write occurs on the master, the data may not yet have been replicated at the time of the read.
There are a couple of ways to rectify the error:
The read immediately after the write must be performed on the master (not the slave). Even though you've mentioned that you changed it to a single host, people often make mistakes while switching the connection. Refer to this SO post on how to properly switch connections in Laravel.
An easier way may be to use the sticky database option in Laravel (see the config sketch after this list). Beware: this may cause performance issues if not used carefully and only for the use case you need. From the docs:
The sticky option is an optional value that can be used to allow the immediate reading of records that have been written to the database during the current request cycle.
If the sticky option is enabled and a "write" operation has been performed against the database during the current request cycle, any further "read" operations will use the "write" connection.
The most "non-obvious" way is to not perform a read immediately after a write. Think about whether this can be avoided, depending on your use case.
Other methods: refer to this SO post.
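For reference, a minimal sketch of what the sticky read/write setup looks like in config/database.php; host names and credentials are placeholders, not the poster's actual configuration:
// config/database.php
'mysql' => [
    'driver' => 'mysql',
    'read' => [
        'host' => ['read-replica.example.com'],   // placeholder replica host
    ],
    'write' => [
        'host' => ['primary.example.com'],        // placeholder master host
    ],
    'sticky'    => true,  // reads after a write in the same request use the write connection
    'database'  => 'mydb',
    'username'  => env('DB_USERNAME'),
    'password'  => env('DB_PASSWORD'),
    'charset'   => 'utf8mb4',
    'collation' => 'utf8mb4_unicode_ci',
],
If only a single query needs to hit the master, the query builder's useWritePdo() method can also force that one read onto the write connection, if I remember the API correctly.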

Spring SAML sample app fails with Ping Federate

Hi guys,
I have been trying to integrate the Spring sample app, downloaded from https://github.com/spring-projects/spring-security-saml, with Ping Federate. I have used this sample app to integrate with many other IdPs and it worked fine without any hassle, but Ping Federate seems to be a bit more complicated. This is what I have done so far:
Create a connection in Ping using my SP metadata.
Export the Ping metadata.
Configure it in my SP (securityContext.xml).
Start the server.
I get various errors in various scenarios. The one I am currently testing fails with the following error on server restart:
org.opensaml.saml2.metadata.provider.MetadataProviderException: No IDP was configured, please update included metadata with at least one IDP
On investigating the logs, I see the root cause to be
Caused by: java.lang.NullPointerException
at org.opensaml.saml2.common.SAML2Helper.getEarliestExpiration(SAML2Helper.java:112)
at org.opensaml.saml2.metadata.provider.AbstractReloadingMetadataProvider.processCachedMetadata(AbstractReloadingMetadataProvider.java:328)
at org.opensaml.saml2.metadata.provider.AbstractReloadingMetadataProvider.refresh(AbstractReloadingMetadataProvider.java:258)
However, everything works fine if I disable metadataTrustCheck in securityContext.xml using the property
<property name="metadataTrustCheck" value="false"/>
Can someone please help? I have been trying to solve this for the past week. Unfortunately there is no good enough documentation from Ping for the (latest) version I am using.
Update:
The application works fine if:
Metadata trust check is disabled at SP and PF metadata is signed
Metadata trust check is enabled at SP and PF metadata is unsigned
However, I am getting the above NullPointerException if:
Metadata trust check is enabled at SP and PF metadata is signed
A while ago, we had exactly the same NullPointerException with IdP metadata (using OpenSAML 2.6.4). As written above, setting metadataTrustCheck="false" on the ExtendedMetadataDelegate did solve the problem, but was not the desired solution.
Alternatively, one could have removed the <Signature> block from the metadata, which is equally bad.
Solution:
Besides adding the (self-signed) certificate, it was necessary to add the next certificate in the chain to the keystore as well.
For the interested reader:
Despite this error, the application continued to start and claimed "Reloading metadata was finished".
However, there's a TimerTask which regularly checks whether metadata providers were changed, i.e., whether a new one was registered. Supposedly, this happens only at startup time.
Regardless, every 10 seconds (by default) a refresh is triggered internally, which leads to calculation of the expiration time. If the metadata is not loaded for any reason, e.g. because of a validation error, this leads to the mentioned NullPointerException in getEarliestExpiration().
If you're using a file-based MetadataProvider you might want to customize the CachingMetadataManager and set refreshCheckInterval="-1" to disable this TimerTask.
PS: Maybe there are other reasons, like a typo in the entityID, an overdue validUntil, expired certificates... you name it. Anything that causes the metadata not to be loaded will likely result in this issue. Another indicator is the following exception:
Caused by: org.opensaml.saml2.metadata.provider.MetadataProviderException: Metadata for issuer <ENTITY_ID> wasn't found

quickfix session config issues

I've compiled and trawled through the QuickFIX (http://www.quickfixengine.org) source and the examples. I figured a good starting point would be to compile (C++) and run the 'executor' example, then use the 'tradeclient' example to connect to 'executor' and send it order requests.
I created two separate session files, one for the 'executor' as the acceptor and one for the 'tradeclient' as the initiator. They're both running on the same Windows 7 PC.
'executor' runs, but tradeclient can't connect to it, and I can't figure out why. I downloaded Mini-FIX and was able to send messages to executor, so I know that executor is working. I figure the problem is with the tradeclient session settings. I've included both of them below; I was hoping someone could point out what's preventing them from communicating. They're both running on the same computer using port 56156.
---- acceptor session.txt ----
[DEFAULT]
ConnectionType=acceptor
ReconnectInterval=5
SenderCompID=EXEC
DefaultApplVerID=FIX.5.0
[SESSION]
BeginString=FIXT.1.1
TargetCompID=SENDER
HeartBtInt=5
#SocketConnectPort=
SocketAcceptPort=56156
SocketConnectHost=127.0.0.1
TransportDataDictionary=pathToXml/spec/FIX50.xml
StartTime=07:00:00
EndTime=23:00:00
FileStorePath=store
---- initiator session.txt ----
[DEFAULT]
ConnectionType=initiator
ReconnectInterval=5
SenderCompID=SENDER
DefaultApplVerID=FIX.5.0
[SESSION]
BeginString=FIXT.1.1
TargetCompID=EXEC
HeartBtInt=5
SocketConnectPort=56156
#SocketAcceptPort=56156
SocketConnectHost=127.0.0.1
TransportDataDictionary=pathToXml/spec/FIX50.xml
StartTime=07:00:00
EndTime=23:00:00
FileLogPath=log
FileStorePath=store
--------end------
Update: Thanks for the responses... It turns out that my log file directories didn't exist. Once I created them, they both started communicating. It must have been some logging error that didn't throw an exception but disabled proper behavior.
Is there an error condition that I should be checking? I was relying on exceptions, but that's obviously not enough.
It doesn't seem to be the config. Check that your message sequence numbers are in sync, especially since you've been connecting to a different server using the same settings.
Try setting the TargetCompID and SenderCompID on the acceptor to *
