Extra blank records in CRM import - dynamics-crm

I am trying to import a dataset into a custom entity in Dynamics CRM 2015 on-prem, using the import template for the entity, saved as CSV.
My dataset is quite small - only 10 rows. (Yes, I know it would probably take less time to just enter them manually).
When I import the data, CRM reads it as 3,001 records. The extra records show up totally blank. I am sure I don't have any extraneous data in other rows and columns.
Has anyone seen this, or have any idea what could be going on?

I have faced the same issue before. Sometimes the CSV file contains empty rows.
I copied only the rows containing data into a new CSV file and imported that instead, which solved the issue for me.
You can give it a try.

Another option is to open your CSV in Notepad or some other text editor. If there are additional rows, you will see them (rows that look like ",,,,,,,"). You can delete them in the text editor and save, thereby being sure that Excel won't add the blank values back.
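If the file is too large to clean up by hand, a minimal sketch of the same idea in C# could look like this (the file names are placeholders, and it assumes the blank rows contain nothing but commas and whitespace):

using System;
using System.IO;
using System.Linq;

class RemoveBlankCsvRows
{
    static void Main()
    {
        // "contacts.csv" is a placeholder for the exported import template.
        var lines = File.ReadAllLines("contacts.csv");

        // Keep only rows that contain something other than commas and whitespace,
        // i.e. drop the ",,,,,,," rows that Excel sometimes leaves behind.
        var nonBlank = lines.Where(line => line.Replace(",", "").Trim().Length > 0);

        File.WriteAllLines("contacts-clean.csv", nonBlank);
    }
}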


UiPath - How to extract a table from a PDF

Hi, I have found some videos and text on how to do this, but they don't help with this task.
I know how to get single values, but not how to extract a table.
I want to export the result into a database if possible, or into Excel, but I can't figure it out.
I have even tried changing the "Change reading option".
I tried "data scraping", but the program just says
"This control does not support data extraction".
And it can't be more of a table than this.
I have heard that it might not work because the structure of the PDF is bad.
Still, aren't there more ways of doing this?
Unfortunately, there is no activity in UiPath to read tables directly from PDFs. (As of today.) That was the bad news. The good news is that you can get to the contents of the PDF. Either you get the data (as flat text) directly with UiPath.PDF.Activities.ReadPDFText or you have to use OCR.
#kwoxer provided a wonderful link for explanations on this topic.
I have already been able to extract data from tables contained in a PDF document. At that time, I was lucky: ReadPDFText extracted everything. The table elements were separated by tabs ("\t"). And the table header contained a word that did not appear elsewhere in the document.
Just as an idea, I proceeded like this (a rough code sketch follows the steps):
Extract text from the PDF document with UiPath.PDF.Activities.ReadPDFText.
Create an array, where the elements are the lines in the document. (Split using Environment.NewLine and option StringSplitOptions.RemoveEmptyEntries)
Go through lines in a loop (ForEach) until the table header is found. (StartsWith or Contains etc.)
The next row belongs to the table as long as it contains a tab. (Otherwise the table is over.)
Split current row by tab and store it in an array: The elements of the array are the individual cells of the row.
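To make those steps concrete, here is a minimal C# sketch of the parsing loop (it assumes, as in my case, that the cells are tab-separated and that the header contains a keyword that appears nowhere else; adapt it to your document):

using System;
using System.Collections.Generic;

class PdfTableSketch
{
    // pdfText would be the output of UiPath.PDF.Activities.ReadPDFText;
    // headerKeyword is a word that only occurs in the table header.
    static List<string[]> ExtractTable(string pdfText, string headerKeyword)
    {
        var lines = pdfText.Split(
            new[] { Environment.NewLine },
            StringSplitOptions.RemoveEmptyEntries);

        var rows = new List<string[]>();
        bool inTable = false;

        foreach (var line in lines)
        {
            if (!inTable)
            {
                // Skip everything until the table header is found.
                inTable = line.Contains(headerKeyword);
                continue;
            }

            // The table is over at the first line without a tab.
            if (!line.Contains("\t"))
                break;

            // Each tab-separated value is one cell of the row.
            rows.Add(line.Split('\t'));
        }

        return rows;
    }
}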
I hope this idea helps.

You cannot import data to this record because the record was updated in Microsoft Dynamics 365 after it was exported

I'm having a strange issue with exporting/updating/importing data in our on-premises Dynamics 365 (8.2). I was doing a bulk update of over 3000 records by exporting the records to an Excel workbook, updating the data in a specific column, then importing the workbook back into CRM. It worked for all of the records except 14 of them, which according to the import log failed with the error "You cannot import data to this record because the record was updated in Microsoft Dynamics 365 after it was exported." I looked at the Audit History of those 14 records and found that they had not been modified in any way for a good two months. Strangely, the modified date of the most recent Audit History entry for ALL 14 records is the exact same date/time.
We have a custom workflow that runs once every 24 hours on a schedule that automatically updates the Age field of our Contact records based on the value in the respective Birthday field. For these 14 records, ALL of them have a birthday of November 3rd, but in different years. What that means though is that the last modification that was done to them was on 11/3/2019 via the workflow. However, I cannot understand why the system "thinks" that this should prevent a data update/import.
I am happy to provide any additional information that I may have forgotten to mention here. Can anyone help me, please?
While I was not able to discover why the records would not update, I was able to resolve the issue. Before I share what I did to update the records, I will try and list as many things as I can remember that I tried that did not work:
I reworked my Advanced Find query that I was using to export the records that needed to be updated so that it returned ONLY those records that had actual updates. Previously, I used a more forgiving query that returned about 30 or so records, even though I knew that only 14 of them had new data to import. I did so because the query was easier to construct, and it was no big deal to remove the "extra" records from the workbook before uploading it for import. I would write a VLOOKUP for the 30-something records, and remove the rows for which the VLOOKUP didn't find a value in my dataset, leaving me with the 14 that did have new data. After getting the error a few times, I started to ensure that I only exported the 14 records that needed to be updated. However, I still got the error when trying to import.
I tried formatting the (Do Not Modify) Modified On column in the exported workbook to match the date format in the import window. On export of the records, Excel was formatting this column as m/d/yyyy h:mm while the import window with the details on each successful and failed import showed this column in mm/dd/yyyy hh:mm:ss format. I thought maybe if I matched the format in Excel to the import window format it might allow the records to import. It did not.
I tried using some Checksum verification tool to ensure that the value in the (Do Not Modify) Checksum column in the workbook wasn't being written incorrectly or in an invalid format. While the tool I used didn't actually give me much useful information, it did recognize that the values were checksum hashes, so I supposed that was helpful enough for my purposes.
I tried switching my browser from the new Edge browser (the one that uses Chromium) to just IE as suggested on the thread provided by Arun. However, it did not resolve the issue.
What ended up working in the end was Arun's suggestion to just make some arbitrary edit to all the records and export them afterward. This was okay to do for just 14 records, but I'm still slightly vexed, as this wouldn't really be a feasible solution if it were, say, a thousand records that were not importing. There was no field that ALL 14 Contact records had in common that I could just bulk edit, and then bulk edit back again. What I ended up doing was finding a text field on the Contact form that did not have any value in it for any of the records, putting something in that field, then going to each record in turn and removing the value (since I don't know of a way to "blank out" or clear a text field while bulk editing). Again, this was okay for such a small number of records, but if it were to happen on a larger number, I would have to come up with an easier way to bulk edit and then bulk "restore" the records. Thanks to Arun for the helpful insights, and for taking the time to answer. It is highly appreciated!
When you first export an entity for import (Contacts, for example), you will see that the exported Excel file contains 3 hidden columns: (Do Not Modify) Contact, (Do Not Modify) Row Checksum, and (Do Not Modify) Modified On.
When you want to create new instances of the entity, just edit the records and clear the content of the 3 hidden columns.
This error happens when the checksum or row version of the exported record differs from the record in the database.
Try doing a dummy edit on the affected records and then export/re-import again.
I could think of two reasons - either the datetime format is confusing the system :( or the community thread explains a weird scenario.
Apparently, exporting the file, amending it, and then saving it as a different file type alters the spreadsheet's parameters.
I therefore used Internet Explorer, since when exporting the file the system asks the user to save it in a different format. I added .xlsx at the end to save it as the required format. I amended the file and imported it back into CRM. It worked.
For me it turned out to be a different CRM time zone setting between the exporting and importing users. Unfortunately, this setting doesn't seem to be changeable by an administrator via the user interface.
The setting is available for each user under File->Options->Time Zone.

Replacing a character in Magento product descriptions after import

I just imported over 20k items into Magento.
The original data was from an access DB.
In the descriptions, all the " are showing as �
for example, the original description reads:
This arrangement is approx. 32" - 34" tall.
on the Magento front end it now reads:
This arrangement is approx. 32�-34� tall.
Re-importing the data is not an option. I need to either have this shown correctly on the Magento front end using a hack, somehow replace all these characters with the proper characters in the MySQL database, or somehow change the encoding.
Any suggestions would be appreciated.
Hi, you need to save the CSV in UTF-8 format before importing.
I ended up exporting all the descriptions, doing a find/replace in Notepad, saving the file as UTF-8, and re-importing them after other methods failed.
This wasn't what I wanted to go through again, but I had no choice.
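If you have to script that clean-up rather than do it by hand, a minimal sketch of the same idea in C# could look like this (the file names and the replacement character are assumptions; check which character your export actually contains before running anything like it):

using System.IO;
using System.Text;

class FixDescriptions
{
    static void Main()
    {
        // "descriptions-export.csv" is a placeholder for the exported descriptions file.
        string text = File.ReadAllText("descriptions-export.csv");

        // Replace the corrupted character with a plain double quote, mirroring
        // the manual find/replace done in Notepad. '\uFFFD' is the Unicode
        // replacement character (the question mark in a diamond); swap in
        // whatever character actually shows up in your export.
        text = text.Replace('\uFFFD', '"');

        // Save as UTF-8 so the re-import reads it cleanly.
        File.WriteAllText("descriptions-fixed.csv", text, Encoding.UTF8);
    }
}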

Magmi Not Importing When CSV Files Contains Commas

I have installed and made some successful product imports into Magento using Magmi, but as soon as I try to import any data where the spreadsheet columns contain commas [,], Magmi will not perform the import.
For example, when I save the data in this spreadsheet as a CSV file, Magmi successfully imports the data:
http://i.imgur.com/PpDt0PS.png
However, Magmi refuses to import the data in the table below, where you can see that in column F I have added data that includes commas:
http://i.imgur.com/MtGJPCw.png
Can anyone advise? I am using an Apple Mac with OpenOffice to prepare and save my data.
Is the data not importing entirely, or is just the visibility column not being set?
Visibility is a Magento core attribute which Magmi can set by using the exact numerical option id value.
Generally, the option values you want to use for the visibility field are as follows:
Not Visible Individually = 1
Catalog = 2
Search = 3
Catalog, Search = 4
So in your case, if you want to set these products to Catalog, Search, you can set the visibility column value to 4.
To double-check that the above mapping is correct for your instance of Magento, the easiest way is as follows:
Go edit any product
Look for the Visibility drop down field, and right click > inspect element
In the developer tools, take note the values associated to each label.
Below is an example of the process and what to look for.
Axel is correct, you should set the data to the numerical value 4.
But I also recommend you explore a better way to export CSV content from OpenOffice. You may have to start a new document, because I find I only see the dialogue below once and then never again. Create a new document and paste your data into it. Choose Save As, select CSV, and save it. Eventually you should see the dialogue below. Change the encoding to UTF-8, set the text delimiter to " and tick the 'Quote all text cells' box.
Then you should be able to have cells with commas or other things in them. Always ensure your CSV files are quoted: "like","this","so you, can","have commas, in them". It is worth inspecting your CSV file in a text editor to check the format is as expected before uploading it to Magmi.
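If you end up generating the CSV programmatically instead of via OpenOffice, a minimal sketch of the quoting rule could look like this (the column names and values are made up; the point is simply that every field is wrapped in double quotes, with any embedded quotes doubled):

using System;
using System.IO;
using System.Linq;

class QuotedCsvSketch
{
    // Wrap a value in double quotes and double any embedded quotes,
    // so commas inside the value no longer break the column layout.
    static string Quote(string value) => "\"" + value.Replace("\"", "\"\"") + "\"";

    static string ToCsvLine(params string[] fields) =>
        string.Join(",", fields.Select(Quote));

    static void Main()
    {
        // Hypothetical columns: sku, name, short_description, visibility.
        var lines = new[]
        {
            ToCsvLine("sku", "name", "short_description", "visibility"),
            ToCsvLine("ABC-001", "Widget", "Small, red, and cheap", "4") // 4 = Catalog, Search
        };

        File.WriteAllLines("magmi-import.csv", lines, System.Text.Encoding.UTF8);
    }
}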

Open XML SDK v2.0 Performance issue when deleting a first row in 20,000+ rows Excel file

Has anyone come across a performance issue when deleting the first row in a 20,000+ row Excel file using the Open XML SDK v2.0?
I am using the row-deletion code suggested in the Open XML SDK documentation. It takes me several minutes just to delete the first row using the Open XML SDK, but it takes only a second in the Excel application.
I eventually found out that the bottleneck is actually the bubble-up approach used for row deletion. There are many rows to update after the deleted row, so in my case there are around 20,000 rows to be updated, shifting the data up row by row.
I wonder if there is any faster way to do the row deletion.
Does anybody have an idea?
Well, the bad news here is: yep, that's the way it is.
You may get slightly better performance by moving outside of the SDK itself into System.IO.Packaging: create an IEnumerable/List of all the rows (with LINQ to XML, for example), copy it to a new IEnumerable/List without the first row, rewrite the r attribute of each <row r="?"/> to be its place in the index, and then write that back inside <sheetData/> over the existing children.
You'd need to do roughly the same for any strings in the sharedStrings.xml file - i.e. removing the <si> elements (under <sst>) that were used by the deleted row - but in this case they are only implicitly indexed, so you'd be able to get away with just outright removing them.
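As a rough illustration of that approach, here is a sketch using System.IO.Packaging and LINQ to XML (a sketch only: the worksheet part path is an assumption, and it ignores formulas, merged cells, defined names and the shared-strings clean-up mentioned above):

using System;
using System.IO;
using System.IO.Packaging;
using System.Linq;
using System.Xml.Linq;

class FastRowDelete
{
    static readonly XNamespace ns =
        "http://schemas.openxmlformats.org/spreadsheetml/2006/main";

    // Deletes the first row of a worksheet by rewriting the sheet XML directly,
    // instead of shifting 20,000 rows one by one through the SDK.
    // The part path "/xl/worksheets/sheet1.xml" is an assumption - point it at
    // the worksheet part you actually need.
    static void DeleteFirstRow(string path)
    {
        using (Package package = Package.Open(path))
        {
            PackagePart part = package.GetPart(
                new Uri("/xl/worksheets/sheet1.xml", UriKind.Relative));

            XDocument doc;
            using (Stream stream = part.GetStream())
                doc = XDocument.Load(stream);

            XElement sheetData = doc.Root.Element(ns + "sheetData");
            var rows = sheetData.Elements(ns + "row").Skip(1).ToList();

            // Renumber the surviving rows (and their cell references) so that
            // r="2" becomes r="1", r="3" becomes r="2", and so on.
            int newIndex = 1;
            foreach (XElement row in rows)
            {
                row.SetAttributeValue("r", newIndex);
                foreach (XElement cell in row.Elements(ns + "c"))
                {
                    string cellRef = (string)cell.Attribute("r"); // e.g. "B3"
                    string column = new string(cellRef.TakeWhile(char.IsLetter).ToArray());
                    cell.SetAttributeValue("r", column + newIndex);
                }
                newIndex++;
            }

            sheetData.ReplaceNodes(rows);

            using (Stream stream = part.GetStream(FileMode.Create))
                doc.Save(stream);
        }
    }
}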
The approach of unzipping the file, manipulating it and repacking it is very error-prone.
How about this: if you say it works fine in Excel, have you tried using Interop? This starts a new instance of Excel (either visible or invisible); then you can open the file, delete the row, save, and close the application again.
using System;
using Excel = Microsoft.Office.Interop.Excel;

public void OpenAndCloseExcel()
{
    Excel.Application excelApp = new Excel.Application();

    // Open the workbook and grab the first worksheet (the path is just an example).
    Excel.Workbook workbook = excelApp.Workbooks.Open(@"C:\data\MyFile.xlsx");
    Excel.Worksheet sheet = (Excel.Worksheet)workbook.Worksheets[1];

    // Delete the first row - remember that Interop indexes are 1-based.
    ((Excel.Range)sheet.Rows[1]).Delete();

    workbook.Save();
    workbook.Close();
    excelApp.Quit();
}
The Range object is suitable for many purposes, including deleting elements. Have a look at the MSDN Range description. One more hint: Interop uses Excel, so all objects have to be addressed with a 1-based index!
For more resources, take a look at this StackOverflow thread.
