We used JDataConnect many years ago to access Microsoft Access databases from Java. For a migration project we are tentatively considering using it again. In the meantime, remote access has become possible in principle; I had one successful test a year ago.
Today I am retrying on a different machine with the free license from:
http://www.jnetdirect.com/free-software/jdataconnect-single.html
I have downloaded and installed JDCSetup_4_0.exe (3.679.232 bytes).
After opening the firewall on port 1150, I can in principle connect using JData2_0.sql.$Driver as the driver and a connection string like:
jdbc:JDataConnect:1150//leto/c:\\y_wf\\data\\smartCRM\\smartCRM.mdb
Then I get the error message:
Attempt 1, Connect to JDataServer on server leto port:1150 Result was: java.sql.SQLException: ServerException:You should upgrade the license for this version. The current License is valid only for version 3
SQLState: 01000
VendorError: 0
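For completeness, the failing attempt can be sketched in Java. Host, port, driver class, and database path are taken from above; the URL-building helper is just for illustration, and the JDataConnect driver jar must be on the classpath for the connection itself to be attempted:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class JdcConnect {
    // Build a JDataConnect URL of the form jdbc:JDataConnect:<port>//<host>/<path-to-mdb>
    static String jdcUrl(String host, int port, String mdbPath) {
        return "jdbc:JDataConnect:" + port + "//" + host + "/" + mdbPath;
    }

    public static void main(String[] args) {
        String url = jdcUrl("leto", 1150, "c:\\y_wf\\data\\smartCRM\\smartCRM.mdb");
        try {
            Class.forName("JData2_0.sql.$Driver");      // JDataConnect driver jar must be on the classpath
            try (Connection con = DriverManager.getConnection(url)) {
                System.out.println("connected: " + url);
            }
        } catch (Exception e) {
            // Here this surfaces the ServerException about the version-3 license
            System.out.println("connect failed: " + e);
        }
    }
}
```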
I am confused, since I used the license key from the JNetDirect webpage. There seems to be no version 3 download available. What might be going on here?
JNetDirect support answered the following:
After further verification, I noticed (and corrected) that while the key provided on the download page before the file is downloaded and on the main JDataConnect page was the correct one (the one in red below), the one provided on another page (which Wolfgang used) had one digit different. It has been corrected.
When I tested my smart contract on Kovan, the oracle (0xc57B33452b4F7BB189bB5AfaE9cc4aBa1f7a4FD8) listed in Contract Addresses failed to fulfill my request.
When I checked the failed transaction, I found that the signature in the InputData was fulfillOracleRequest2, whereas the InputData of other, successful transactions contained fulfillOracleRequest. So the root cause seems to be a discrepancy between the ChainlinkClient version (v0.8) and the oracle's version.
Does anybody know of an oracle on Kovan compatible with ChainlinkClient v0.8?
I basically followed the official Use Any API tutorial, except for the ChainlinkClient version (the tutorial uses v0.6, but I used v0.8).
By updating @chainlink/contracts from 0.1.7 to 0.2.1 (and adjusting the import path to ChainlinkClient.sol), my requests started to get fulfilled.
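Concretely, the fix amounts to upgrading the package (npm install @chainlink/contracts@0.2.1) and pointing the import at the v0.8 directory of the package. A sketch of the adjusted import, assuming the standard package layout:

```solidity
// v0.8 client lives under src/v0.8 in @chainlink/contracts 0.2.x
import "@chainlink/contracts/src/v0.8/ChainlinkClient.sol";
```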
I'm trying to install WebSphere Application Server v7. I followed the installation steps, and when I click Next after entering the security administration password, I get this error message:
System Prerequisite Check
The system prerequisite check failed. The error messages are as follows:
Unable to retrieve information from the minimal service level (MSL) file of the installation.
Thanks for helping.
You can use the instructions in this IBM Technote to disable MSL checking (you'll need to substitute v70 for v80 in the property name). I must also point out that you're trying to install a version of WAS that goes out of service in five months; you may instead want to invest in installing a more modern version of WAS, such as v9.
After upgrading our database system from 11g to 12c, we cannot make HTTPS requests to one of our web servers.
After a lot of googling and trial-and-error, we are pretty sure the error is due to our remote certificate. The wallet doesn't contain the server certificate; only the CAs are present (I read somewhere that 12c no longer accepts the regular server certificates in the wallet).
The only special thing I found about our certificate:
It has no CN; it has only a couple of SANs specified.
With 11g the requests work like a charm, but 12c no longer accepts the certificate. We found that UTL_HTTP.REQUEST() got a new parameter, https_host, which is matched against the common name of the server certificate (1); the documentation says nothing about the subject alternative name. No matter which value we choose for this parameter, the call fails with ORA-24263.
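For reference, the 12c call pattern involved looks roughly like this; URL and wallet path are hypothetical, and the same https_host parameter also exists on UTL_HTTP.BEGIN_REQUEST:

```sql
DECLARE
  req  UTL_HTTP.req;
  resp UTL_HTTP.resp;
BEGIN
  -- Wallet contains only the CA chain, as described above
  UTL_HTTP.set_wallet('file:/path/to/wallet', NULL);
  -- 12c matches https_host against the server certificate's CN;
  -- with a SAN-only certificate every value we tried raised ORA-24263
  req  := UTL_HTTP.begin_request(
            url        => 'https://example.com/service',
            method     => 'GET',
            https_host => 'example.com');
  resp := UTL_HTTP.get_response(req);
  UTL_HTTP.end_response(resp);
END;
/
```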
I cannot understand why Oracle would ignore the SAN, as it is pretty much mandatory per RFC 6125 (2) from 2011:
However, it is perfectly acceptable for the subject field to be empty, as long as the certificate contains a subject alternative name ("subjectAltName") extension that includes at least one subjectAltName entry, because [...]
Is anyone having similar problems? How can I work around this error?
Thanks
Contact Oracle Support, as this is a known bug. I suspect Bug 25734963: SNI SUPPORT IN UTL_HTTP.
Note: There are a couple of others, such as Bugs 26040483 and 26190856, but at least one will conflict, so you may want to request a merge patch.
In Odoo v10 Enterprise, while trying to sync bank accounts for automatic feed entries, I get the following error.
Once your bank accounts are registered, you will be able to access your statements from the Accounting Dashboard. The available methods for synchronization are as follows:
Direct connection to your bank
Importing your statements via a supported file format (QIF, OFX, CODA or CSV format)
Manually enter your transactions using our fast recording interface
I would like to go with the first one, "Direct connection to your bank", but I am facing an issue with it that I want to fix.
Issue:
Problem Updating Account (507): We're sorry, Yodlee has just started providing data updates for this site, and it may take a few days to be successful as we get started. Please try again later.
I went to deploy over an existing Cloud Service (in staging) and received the following message:
"Error: No deployments were found. Http Status Code: NotFound"
Does anyone know what this means?
I am looking at the Cloud Service, and it surely exists.
UPDATE:
I've been using the same deploy method as in prior (successful) efforts. I simply right-click the cloud service in Visual Studio 2013; in the Windows Azure Publish Summary I set the correct cloud service name, staging, release... and press Publish. Nothing special really, which is why I am perplexed.
You may have exceeded the maximum number of cores allowed on your Azure subscription. Either remove unneeded deployments or ask Microsoft to increase the maximum allowed cores on your Azure subscription.
Since I had this problem and none of the answers above covered my cause, I had to dig a little deeper. The RoleName specified in the Role tag must of course match the one in the EndpointAcl tag:
<Role name="TheRoleName">
<Instances count="1" />
</Role>
<NetworkConfiguration>
<AccessControls>
<AccessControl name="ac-name-1">
<Rule action="deny" description="TheWorld" order="100" remoteSubnet="0.0.0.0/32" />
</AccessControl>
</AccessControls>
<EndpointAcls>
<EndpointAcl role="TheRoleName" endPoint="HTTP" accessControl="ac-name-1" />
<EndpointAcl role="TheRoleName" endPoint="HTTPS" accessControl="ac-name-1" />
</EndpointAcls>
</NetworkConfiguration>
UPDATE
It seems that the situation above is not the only one causing this error. I ran into it again, this time due to a related but different mismatch.
In the file ServiceDefinition.csdef, the <WebRole name="TheRoleName" vmsize="Standard_D1"> tag must have a vmsize that exists (of course!), but according to Microsoft (https://azure.microsoft.com/en-us/documentation/articles/cloud-services-sizes-specs/) the value Standard_D1_v2 should also be accepted.
In my case it was causing this same error; once I removed the _v2, it worked fine.
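For illustration, this is the kind of tag involved (role name hypothetical); with vmsize="Standard_D1_v2" the deploy failed with the NotFound error, and with vmsize="Standard_D1" it succeeded:

```xml
<!-- ServiceDefinition.csdef -->
<WebRole name="TheRoleName" vmsize="Standard_D1">
  <!-- sites, endpoints, etc. -->
</WebRole>
```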
Conclusion: every time something is wrong in the Azure configs, this error message may come along; it is then necessary to find out where it came from.
Just to add some info.
The same occurred to me: my VM size was set to a size that was "wrong". I have multiple subscriptions; I was pointing at one of them and using a "D2" machine. I don't know what happened, but the information was refreshed and this machine disappeared as an option. I then selected "Large" (old), and it worked well.
Lost 6 hours trying to upload this #$%#$% package.
I think this error can be related to any VM-size problem.
I hit this problem after resizing my role from Small to Extra-Small. I still had Local Storage set to the default of 20 GB, which an Extra-Small instance can't hold. I ended up reducing it to 100 MB and the deployment worked (the role I'm deploying will be in maintenance mode for only a couple of months, so I don't care much about getting diagnostics from it).
A quick tip: I was getting nowhere debugging this with Visual Studio's error message. On a whim, I switched to the Azure website and uploaded the package manually. That finally gave me a useful error: the VM size was too small for the resources I had requested.
I encountered this error during the initial deployment of a Cloud Service that required a specific SSL certificate, which was missing from Azure.
Correcting the certificate made the deploy succeed.
(After the first deployment Visual Studio provides a meaningful error in this case.)