In Odoo v10 Enterprise, while trying to sync bank accounts for automatic feed entries, I am getting the error below.
Once your bank accounts are registered, you will be able to access your statements from the Accounting Dashboard. The available methods for synchronization are as follows.
Direct connection to your bank
Importing your statements via a supported file format (QIF, OFX, CODA or CSV)
Manually enter your transactions using our fast recording interface
I would like to go with the first one, "Direct connection to your bank", but I am facing an issue with it that I want to fix.
Issue:
Problem Updating Account(507): We're sorry, Yodlee has just started providing data updates for this site, and it may take a few days to be successful as we get started. Please try again later.
I am trying to create 30 databases (oci_database_database resources) under 5 existing db_homes. All of these resources are under a single DB System.
When applying my code, the first database is created successfully; then, when Terraform attempts to create the second one, I get the following error, which stops the run: "Error: Service error:IncorrectState. The existing Db System with ID has a conflicting state of UPDATING".
If I re-apply my code, the second database is created, and then I get the same error when Terraform attempts to create the third one.
I am assuming I get this message because Terraform starts creating the next database as soon as the previous one is created, while the DB System status is not yet up to date (still 'UPDATING' instead of 'AVAILABLE').
A good way for the OCI provider to avoid this issue would be to consider a database creation complete only when the database itself has been created AND the associated db_home and DB System are back to 'AVAILABLE'.
Any suggestions on how to address the issue I am encountering?
Feel free to ask if you need any additional information.
Thank you.
As mentioned above, it looks like you have opened a ticket about this on GitHub. What you are experiencing should not happen, as Terraform should retry after seeing the error. Per your GitHub post, the person helping you needs your logs with timestamps so they can troubleshoot further. At this stage I would recommend following up there and sharing the requested info.
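In the meantime, one stopgap is to wait for the DB System to report AVAILABLE between applies. Here is a minimal sketch using the OCI Python SDK; this is my own suggestion rather than anything from the GitHub thread, and the OCID and timeout are placeholders:

    # Poll the DB System until it reports AVAILABLE again, then run the
    # next `terraform apply`. Assumes the OCI Python SDK (pip install oci)
    # and a configured ~/.oci/config profile; the OCID is a placeholder.
    import oci

    config = oci.config.from_file()
    client = oci.database.DatabaseClient(config)

    DB_SYSTEM_ID = "ocid1.dbsystem.oc1..example"  # placeholder OCID

    response = client.get_db_system(DB_SYSTEM_ID)
    oci.wait_until(client, response, "lifecycle_state", "AVAILABLE",
                   max_wait_seconds=1800)
    print("DB System is AVAILABLE; safe to create the next database.")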
I'm not too familiar with Heroku Connect, so excuse me if these are newbie questions.
Is there a way to view Salesforce error logs more than a day old? I get the email notification for errors writing to Salesforce, but if I don't check them within a day, I can't access them anymore.
If the problem is "UNABLE_TO_LOCK_ROW", should we configure the retry described here: https://devcenter.heroku.com/articles/heroku-connect-faq#can-i-retry-records-that-failed-to-write-to-salesforce ?
Thanks
For writes toward Salesforce, all changes are kept in Postgres: in the _trigger_log table when they occur, then moved to _trigger_log_archive once processed. On the demo plan these records are preserved for up to 7 days, on paid plans for up to 30 days. You can then look at the archive table for more insight into the failure and manually resubmit the change, as explained in the link you mentioned.
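For example, here is a quick sketch for pulling recent failures out of the archive table from Python. It assumes the default salesforce schema and the standard trigger-log column names (created_at, table_name, record_id, action, state, sf_message), so verify those against your own database:

    import os
    import psycopg2

    # List the 50 most recent failed writes from the trigger-log archive.
    # Schema and column names are assumptions; check them in your database.
    conn = psycopg2.connect(os.environ["DATABASE_URL"])
    with conn, conn.cursor() as cur:
        cur.execute("""
            SELECT created_at, table_name, record_id, action, sf_message
            FROM salesforce._trigger_log_archive
            WHERE state = 'FAILED'
            ORDER BY created_at DESC
            LIMIT 50;
        """)
        for row in cur.fetchall():
            print(row)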
I've taken over support of a CRM 2016 On-Premise system. I don't know the history of the particular instance, but I suspect it's been copied and/or imported many times.
The BulkDeleteFailureBase table has just short of 2 million rows, almost all of which contain an error description like:
Not enough privilege to access the Microsoft Dynamics CRM object or perform the requested operation. The current Organizationid '<GUID1>' does not match with userOrTeam's organization id '<GUID2>'.
OrganizationBase has only one record, and it contains <GUID2>.
Has this happened because the instance has been copied/moved around incorrectly? If so, is this likely an indication that more problems are heading my way in the future?
How can I recover from this?
BulkDeleteFailureBase is one of the system async-job logging tables, where the platform captures run/success/failure logs.
Most likely someone tried to clean up data (like plugin trace logs) that had been copied over from a DB backup/restore or a CRM org restoration; they used bulk delete, and everything that failed ended up here.
MS Support can provide a script to clean those tables safely. Leaving them alone only gives you performance headaches.
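If you cannot get hold of the official script, the usual approach is to delete in small batches so locking and transaction-log growth stay manageable. The sketch below is for an on-premise org only (never modify the database directly in CRM Online) and assumes you have taken a full backup first; the server and database names are placeholders:

    import pyodbc

    # Batched cleanup of BulkDeleteFailureBase. On-premise only; back up
    # the org database first. Server/database names are placeholders, and
    # the official MS Support script is preferable if you can obtain it.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=CrmSqlServer;DATABASE=MyOrg_MSCRM;Trusted_Connection=yes",
        autocommit=True,
    )
    cur = conn.cursor()
    while True:
        # Small batches keep locking and log growth in check.
        cur.execute("DELETE TOP (5000) FROM dbo.BulkDeleteFailureBase")
        if cur.rowcount == 0:
            break
        print(f"Deleted {cur.rowcount} rows...")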
I'm kind of new to the integration runtime.
I had a pipeline running with no issues, but recently we had an AD upgrade and the local on-premises SQL DB changed my user from 'bluecompany\joe' to 'redcompany\joe'.
This has caused my Data Factory to stop working, as it can no longer connect to the on-premises SQL Server.
I can't seem to find where to update this.
Error:
Copy activity encountered a user error at Source side: Integration Runtime (Self-hosted) Node Name=ORG200016,ErrorCode=UserErrorFailedToConnectToSqlServer,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Cannot connect to SQL Server: 'org200016.bluecompany.com.au', Database: 'GroupRisk', User: 'bluecompany\joe'.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.ComponentModel.Win32Exception,Message=This user can't sign in because this account is currently disabled,Source=Microsoft.DataTransfer.ClientLibrary,'.
Any ideas would be very welcome. Thank you.
As your login account has changed, you will need to update the account in the corresponding linked service, where you previously entered your credentials for this database.
Make sure the test connection succeeds after you edit the linked service; then the pipeline should be able to connect to your database again.
Depending on which version of ADF you're using, there are different ways to update your linked service:
Log in to https://portal.azure.com/ and find your data factory (if you don't have an account that can log in to the portal, you will need to ask the admin who created the linked service to update it for you).
If you're using a v1 data factory, open "Author and Deploy", where you should find the linked service corresponding to your on-premises SQL Server.
If you're using a v2 data factory, open "Author and Monitor" and click the pencil icon; you should find your linked service under the "Connections" tab, where you can edit it.
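If you would rather script the change for a v2 factory, the Azure SDK for Python can rewrite the linked service definition. This is only a sketch under assumptions (every name below is a placeholder), and the portal route above is usually simpler:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient
    from azure.mgmt.datafactory.models import (
        IntegrationRuntimeReference, LinkedServiceResource,
        SecureString, SqlServerLinkedService,
    )

    # All names are placeholders; ADF stores the SecureString password
    # encrypted with the linked service definition.
    client = DataFactoryManagementClient(DefaultAzureCredential(),
                                         "<subscription-id>")
    props = SqlServerLinkedService(
        connection_string="Data Source=org200016.bluecompany.com.au;"
                          "Initial Catalog=GroupRisk;Integrated Security=False",
        user_name="redcompany\\joe",  # the new AD account
        password=SecureString(value="<password>"),
        connect_via=IntegrationRuntimeReference(
            reference_name="<self-hosted-ir-name>"),
    )
    client.linked_services.create_or_update(
        "<resource-group>", "<factory-name>", "<linked-service-name>",
        LinkedServiceResource(properties=props),
    )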
Thanks,
Eva
Background:
I am using the deployment tools in Visual Studio 2010.
I right-clicked my project, selected Package/Publish settings, and put all my settings in there ...
I am then using Web Deploy to transfer the files to my remote server, which runs the remote agent service, and this is working fine. The transforms I have in my Web.Release.config do their thing, and the server can access the database I created manually.
Problem:
My next step was to get the Database Deployment working too.
I went into the Package/Publish SQL tab and entered my connection string for the destination database.
(Data Source=MyDBServer;Initial Catalog=Database2;User ID=User;Password=pass)
This database is empty, ready to accept the import.
I also entered the connection string for the source database, which lives on the same server.
(Data Source=MyDBServer;Initial Catalog=Database;User ID=User;Password=pass)
Database scripting options are set to Schema and Data (changing this makes no difference), and the database scripts are set to [Auto Generated Schema and Data].
When I deploy now, I get this error:
Error 4: Web deployment task failed. ((09/06/2010 16:41:51) An error occurred when the request was processed on the remote computer.)
(09/06/2010 16:41:51) An error occurred when the request was processed on the remote computer. The entry type 'Unknown' was not expected at this time. The serialization stream may be corrupted.
Additional Info:
I can successfully create a package with no problems. I looked at the contents of the zip and can see the SQL is generated fine (so no problems connecting to the database). I can then copy this SQL and run it as a new query on the new database, and the tables and data are created fine.
I cannot work out where this is going wrong; I googled the error and there are no entries on the whole internet. Anyone have any ideas?
Addendum:
To get some further idea of what might be going on, I sent the package across to the server and imported it using IIS. It told me I needed SQL Server Management Objects, so I installed that.
On the next attempt it told me my user did not have permission to create the database. I thought, excellent, this must be the problem. Granted access, re-ran: passed!
So I deleted all the tables, went back to VS2010, clicked Publish, and got the same error. :(
Sorted it!
Thank goodness. I was totally out of ideas when I went back to a video by Hanselman. He mentioned that the Web Deployment Agent service has its own permissions. I had a look, and there was a tab in its properties called Log On.
I entered the details of an account with a decent level of access and clicked OK.
I then restarted the service, as requested, to enable the changes.
I then went back to VS 2010 and clicked Publish Web.
Music to my eyes, I see the words "Publish succeeded". I check the database and the tables are there. Excellent!
I think I scared the office by getting a little over-excited. If you get this problem and this solution fixes it for you, try to hold in the temptation to shout "YES! Yes, get in!" while laughing maniacally, or people will think you're weird like me.