I was having problems getting a recordset to import into Dynamics CRM online. I have now reduced the problem to one field.
I have a text field named 'Description' that is junking up the works. When I try the import to CRM online WITHOUT the field, the import goes just fine. But WITH the field included, I get this message:
I originally thought that there was a character somewhere in this field's output that was throwing off the import. So I went through a day-long process of applying lots of REPLACE, RTRIM, and CAST calls. Nothing worked.
Then I figured, "what if it isn't an odd character?" and combined LEFT with RTRIM to return only a single character to test the import with. Here is what my final statement looked like when I imported:
RTRIM(LEFT(CAST(lntmu11.matter.memo AS varchar(1)), 1)) AS Description
So now, I am only returning one character for this column. I have double checked the output in Excel, and verified that there is no punctuation or odd looking data. And STILL I am getting the error.
I am exporting from SQL, and the original field is a TEXT datatype field.
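To rule out hidden characters more systematically than eyeballing the output in Excel, a quick scan of the exported file can help. This is only a sketch in Python, assuming the recordset is saved out as CSV; the file path and column name are placeholders, not anything from the original setup:

```python
import csv

def find_suspect_chars(path, column):
    """Scan one column of a CSV export and report any character
    outside printable ASCII (control characters, smart quotes, etc.)
    along with the line it appears on."""
    suspects = []
    with open(path, newline="", encoding="utf-8-sig") as f:
        # Line 1 is the header, so data rows start at line 2.
        for lineno, row in enumerate(csv.DictReader(f), start=2):
            for ch in (row.get(column) or ""):
                if not (32 <= ord(ch) < 127):
                    suspects.append((lineno, repr(ch)))
    return suspects
```

Running this over the exported Description column would confirm, or rule out, the odd-character theory before spending another day on REPLACE and RTRIM.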
Has anybody run into a similar problem when importing? My other thought was that I was exceeding the maximum individual record size in CRM when I included the Description field in the export, because some of the records contain 500 characters or more. But now that I am only exporting one character, that can't be the issue.
Any thoughts?
I'm having a strange issue with exporting/updating/importing data in our on-premises Dynamics 365 (8.2). I was doing a bulk update of over 3000 records by exporting the records to an Excel workbook, updating the data in a specific column, then importing the workbook back into CRM. It worked for all of the records except 14 of them, which, according to the import log, failed because "You cannot import data to this record because the record was updated in Microsoft Dynamics 365 after it was exported." I looked at the Audit History of those 14 records and found that they have not been modified in any way for a good two months. Strangely, the modified date of the most recent Audit History entry for ALL 14 records is the exact same date/time.
We have a custom workflow that runs once every 24 hours on a schedule that automatically updates the Age field of our Contact records based on the value in the respective Birthday field. For these 14 records, ALL of them have a birthday of November 3rd, but in different years. What that means though is that the last modification that was done to them was on 11/3/2019 via the workflow. However, I cannot understand why the system "thinks" that this should prevent a data update/import.
I am happy to provide any additional information that I may have forgotten to mention here. Can anyone help me, please?
While I was not able to discover why the records would not update, I was able to resolve the issue. Before I share what I did to update the records, I will try to list as many of the things I tried that did not work as I can remember:
I reworked the Advanced Find query I was using to export the records that needed updating so that it returned ONLY those records that had actual updates. Previously, I used a more forgiving query that returned about 30 or so records, even though I knew that only 14 of them had new data to import. I did so because the query was easier to construct, and it was no big deal to remove the "extra" records from the workbook before uploading it for import. I would write a VLOOKUP for the 30-something records and remove the rows for which the VLOOKUP didn't find a value in my dataset, leaving me with the 14 that did have new data. After getting the error a few times, I started to ensure that I only exported the 14 records that needed to be updated. However, I still got the error when trying to import.
I tried formatting the (Do Not Modify) Modified On column in the exported workbook to match the date format in the import window. On export of the records, Excel was formatting this column as m/d/yyyy h:mm while the import window with the details on each successful and failed import showed this column in mm/dd/yyyy hh:mm:ss format. I thought maybe if I matched the format in Excel to the import window format it might allow the records to import. It did not.
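For reference, normalizing that column programmatically is straightforward, though as noted above it did not fix the import in this case. A sketch in Python, assuming Excel's m/d/yyyy h:mm text uses 24-hour times (the function name is mine):

```python
from datetime import datetime

def reformat_modified_on(value):
    """Convert Excel's m/d/yyyy h:mm display text into the
    mm/dd/yyyy hh:mm:ss format shown in the CRM import window."""
    # strptime tolerates missing leading zeros, so "1/3/2020 9:05" parses.
    parsed = datetime.strptime(value, "%m/%d/%Y %H:%M")
    return parsed.strftime("%m/%d/%Y %H:%M:%S")
```

Note that this only changes the displayed text; the underlying timestamp CRM compares against is unaffected, which is consistent with the reformatting not resolving the error.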
I tried using a checksum verification tool to ensure that the value in the (Do Not Modify) Row Checksum column in the workbook wasn't being written incorrectly or in an invalid format. While the tool I used didn't actually give me much useful information, it did recognize that the values were checksum hashes, so I suppose that was helpful enough for my purposes.
I tried switching my browser from the new Edge browser (the one that uses Chromium) to just IE as suggested on the thread provided by Arun. However, it did not resolve the issue.
What ended up working in the end was Arun's suggestion to just make some arbitrary edit to all the records and export them afterward. This was okay to do for just 14 records, but I'm still slightly vexed, as this wouldn't really be a feasible solution if it were, say, a thousand records that were not importing. There was no field that ALL 14 Contact records had in common that I could just bulk edit and then bulk edit back again. What I ended up doing was finding a text field on the Contact form that did not have any value in it for any of the records, putting something in that field, then going to each record in turn and removing the value (since I don't know of a way to "blank out" or clear a text field while bulk editing). Again, this was okay for such a small number of records, but if it were to happen on a larger number, I would have to come up with an easier way to bulk edit and then bulk "restore" the records. Thanks to Arun for the helpful insights, and for taking the time to answer. It is highly appreciated!
When you first export an entity for import (contacts, for example), you will see that the exported Excel file contains three hidden columns: (Do Not Modify) Contact, (Do Not Modify) Row Checksum, and (Do Not Modify) Modified On.
When you want to create new instances of the entity, just edit the records and clear the content of the three hidden columns.
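If there are many rows, clearing those columns can be scripted. A minimal sketch in Python, assuming the sheet has been saved as CSV first (the standard csv module cannot read .xlsx directly):

```python
import csv

# The three hidden columns CRM adds to an exported worksheet.
HIDDEN_COLUMNS = [
    "(Do Not Modify) Contact",
    "(Do Not Modify) Row Checksum",
    "(Do Not Modify) Modified On",
]

def clear_hidden_columns(src, dst):
    """Copy a CSV export, blanking the (Do Not Modify) columns so
    every row is treated as a new record instead of an update."""
    with open(src, newline="", encoding="utf-8-sig") as fin, \
         open(dst, "w", newline="", encoding="utf-8") as fout:
        reader = csv.DictReader(fin)
        writer = csv.DictWriter(fout, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            for col in HIDDEN_COLUMNS:
                if col in row:
                    row[col] = ""
            writer.writerow(row)
```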
This error happens when the checksum or row version of the exported record differs from the record in the database.
Try making some dummy edit to the affected records, then export and reimport again.
I can think of two reasons: either the datetime format is confusing the system :( or the community thread explains a weird scenario.
Apparently when importing the file, amending it and then saving it as a different file type alters the spreadsheet's parameters.
I hence used Internet Explorer, since when importing the file the system asks the user to save it in a different format. I added .xlsx at the end to save it as the required format. I amended the file and imported it back into CRM. It worked.
For me it turned out to be a different CRM time zone setting between the exporting user and the importing user. Unfortunately, this setting doesn't seem to be changeable by an administrator via the user interface.
The setting is available for each user under File->Options->Time Zone.
I would like to import a text file to an already-existing table with SQL Developer 18.3 on Windows.
I have a column with data type float(126) and I want to store very small numbers in it. The data is in scientific notation (e.g. 1.5e-82) in the text file, but the importer doesn't accept this data. The status is "Data is not compatible with column definition or is not available for a not nullable column.".
When I try to add one row with INSERT, it works, so the problem seems to be the import. Should I use a different type?
(The language of the computer is English and it accepts basic decimal numbers so the decimal point (instead of comma) shouldn't be the problem.)
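For what it's worth, 1.5e-82 is comfortably within the range of a binary double, so the value itself is fine; the wizard's parsing of the notation appears to be the issue. One workaround to try is rewriting the numbers in plain decimal form before import. A sketch in Python; whether the wizard then accepts the plain form is an assumption on my part, not something confirmed:

```python
from decimal import Decimal

def to_plain(text):
    """Rewrite a number in scientific notation (e.g. '1.5e-82') as a
    plain decimal string, which some import parsers handle better."""
    # Decimal preserves the exact value; format code 'f' suppresses
    # the exponent and writes out all the zeros.
    return format(Decimal(text), "f")
```

For 1.5e-82 this produces a long "0.000...15" string that still round-trips to exactly the same double.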
The behaviour is slightly odder than I thought when I commented. When I recreated it I went back and tried the 'Insert Script' option, and that didn't complain. But I also tried picking SQL*Loader, which also errored; and external table, which did not (but needs the file to be on the DB server, of course).
But #thatjeffsmith implied that SQL*Loader worked, so I tried that again, and... it does, but not on the first attempt.
If you launch the data import wizard and click through the options, whichever import type you select you get that error (still using 'Insert'):
But click the Back button to go back to the previous stage:
and then click the Next button to go forward again:
and now it doesn't error, and you can carry on and successfully complete the import.
We use VBA to retrieve data from an Oracle database using the Microsoft.ACE.OLEDB.12.0 provider.
We have used this method without issue for a long time, but we have encountered a problem with a specific query of data from a specific table.
When running it under VBA, we get "Run-Time error '1004': Application-defined or object-defined error." However, investigating further, we find the following:
The queries we run are dynamically generated, and we handle them by reading the results into a variant array, then outputting that array into Excel. When we step through our particular query, we find that one specific database field is "blank": the Locals window shows the value to be completely blank: it is not an empty string, it is not Null, it is not zero. VarType() shows it to be a Decimal data type, and yet it is empty.
I can't seem to prevent this error from locking things up:
On Error Resume Next
...still breaks.
If IsEmpty(theValue) Then
...doesn't catch it, because it is not empty.
If theValue Is Nothing Then
...doesn't catch it, because it is not an object.
etc.
We ran the SQL in a .NET application, and got a more informative error:
Decimal's scale value must be between 0 and 28, inclusive. Parameter name: Scale
So: I see this as two issues:
1.) In VBA, how do I trap the variant datatype value-that-is-not-empty-or-null, and;
2.) How do I resolve the Oracle Decimal problem?
For #2, I see that the Oracle decimal data type can support precision of up to 31 places, and I assume that the provider I am using can only support 28. I guess I need to CAST it to a smaller type in the query.
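The .NET error message matches the System.Decimal limit, whose scale must be between 0 and 28. A quick way to see which literal values would exceed that limit, sketched in Python (the helper name is mine):

```python
from decimal import Decimal

def scale_of(text):
    """Return the scale of a numeric literal: the number of digits
    to the right of the decimal point. Values with a scale above 28
    cannot be represented as a .NET System.Decimal."""
    exponent = Decimal(text).as_tuple().exponent
    return max(0, -exponent)
```

On the Oracle side, something along the lines of CAST(the_column AS NUMBER(28, 8)) or ROUND(the_column, 8) in the SELECT (column name and scale here are hypothetical) would cap the scale before the provider ever sees the value.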
Any other thoughts?
I am trying to import a dataset into a custom entity in Dynamics CRM 2015 on-prem, using the import template for the entity, saved as CSV.
My dataset is quite small - only 10 rows. (Yes, I know it would probably take less time to just enter them manually).
When I import the data, CRM reads it as 3,001 records. The extra records show up totally blank. I am sure I don't have any extraneous data in other rows and columns.
Has anyone seen this, or have any idea what could be going on?
I have faced the same issue once. Sometimes the CSV file contains some empty rows.
I copied only the rows containing data to a new CSV file and imported that instead. This approach solved my issue.
You can give it a try.
Another option is to open your CSV in Notepad or some other text editor. If there are additional rows, you will see them (rows that look like ",,,,,,,"). You can delete them in the text editor and save, thereby being sure that Excel won't add the blank values back.
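If the file is large, deleting the ",,,,,,," rows by hand gets tedious; the same cleanup can be scripted. A sketch in Python using only the standard library (file paths are placeholders):

```python
import csv

def drop_blank_rows(src, dst):
    """Copy a CSV file, skipping any row in which every cell is
    empty: the ',,,,,,,' rows that inflate the import count."""
    with open(src, newline="", encoding="utf-8-sig") as fin, \
         open(dst, "w", newline="", encoding="utf-8") as fout:
        writer = csv.writer(fout)
        for row in csv.reader(fin):
            # Keep the row only if at least one cell has content.
            if any(cell.strip() for cell in row):
                writer.writerow(row)
```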
I am importing data from a CSV file into CRM 2011, and I was wondering if there is a way to ignore a complete row, e.g., if type = P then add it, and if type = S then ignore it?
Cheers
Using the Imports section of the Data Management area, I think the only way you might have a chance of getting this to work is if you can control some other row value and make it invalid, which would cause the entire row to fail for your type 'S' records.
Another alternative would be to use the SDK and create your own custom data-mapping routine, where you can have a bit more control over which records get processed.
SDK documentation
http://msdn.microsoft.com/en-us/library/hh547396.aspx
You could encourage CRM to consider the rows you're importing to be duplicates, or to be invalid lookups. Or you could accomplish this with workflow.
For example, if you mapped your 'type' field to an attribute, then made sure you had a record where that value is set to 'S', then set up a duplicate detection rule to not allow records with a non-unique 'type'... that might work.
Or, you could try mapping 'type' to an Option Set which doesn't have a value for 'S' in it. This might work, or it might import blank; I'm not sure.
Or, you could make a workflow to retrospectively delete records where 'type' field is 'S'.
My disclaimer would be that none of these sound like particularly good ideas to me.
EDIT: another option is to edit your CSV in Excel and remove the rows you don't want. That does sound like a good idea, because then you're not asking the import wizard to do anything clever.
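Along the lines of that last suggestion, pre-filtering the CSV can also be scripted rather than done by hand in Excel. A sketch in Python; the 'type' column name is taken from the question and may need adjusting to the real header:

```python
import csv

def drop_type_s(src, dst, type_column="type"):
    """Copy a CSV file, keeping only the rows CRM should import
    (type = 'P') and dropping the type = 'S' rows before the file
    ever reaches the import wizard."""
    with open(src, newline="", encoding="utf-8-sig") as fin, \
         open(dst, "w", newline="", encoding="utf-8") as fout:
        reader = csv.DictReader(fin)
        writer = csv.DictWriter(fout, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if (row.get(type_column) or "").strip().upper() != "S":
                writer.writerow(row)
```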