I am importing data from a CSV file in CRM 2011, and I was wondering if there is a way to ignore a complete row, e.g. add the record if type = P, but ignore it if type = S?
Cheers
Using the Imports section of the Data Management area, I think the only way you might get this to work is if you can control some other row value and make it invalid, which would cause the entire row to fail for your type 'S' records.
Another alternative would be to use the SDK and create your own custom data mapping routine, where you can have a bit more control over which records get processed.
SDK documentation
http://msdn.microsoft.com/en-us/library/hh547396.aspx
You could encourage CRM to consider the rows you're importing to be duplicates, or to be invalid lookups. Or you could accomplish this with a workflow.
For example, if you mapped your 'type' field to an attribute, made sure you already had a record where that value is set to 'S', and then set up a duplicate detection rule that disallows records with a non-unique 'type', that might work.
Or you could try mapping 'type' to an Option Set which doesn't have a value for 'S' in it. This might work, or it might import blank; I'm not sure.
Or you could make a workflow to retrospectively delete records where the 'type' field is 'S'.
My disclaimer would be that none of these sound like particularly good ideas to me.
EDIT: another option is to edit your CSV in Excel and remove the rows you don't want. That does sound like a good idea, because then you're not asking the import wizard to do anything clever.
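If you'd rather script that clean-up than do it by hand in Excel, here is a minimal sketch in Python; it assumes the CSV has a header column literally named 'type' and a file called contacts.csv, so adjust both to match your data.

```python
import csv

# Keep only the rows you actually want to import (type = 'P') and drop the
# type = 'S' rows before the file ever reaches the CRM import wizard.
# "contacts.csv" and the "type" column name are assumptions - adjust to your file.
with open("contacts.csv", newline="", encoding="utf-8") as src, \
        open("contacts_filtered.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row["type"].strip().upper() == "P":
            writer.writerow(row)
```

You would then point the import wizard at contacts_filtered.csv instead of the original file.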
I have noticed two things I want to avoid when updating data in my FileMaker solution through JDBC:
field calculations are run while inserting/updating data, some of which considerably slow down the update process
last-modified fields are changed (but created fields are not, even on INSERT)
So I am looking for a way to update/insert through JDBC without triggering script triggers or the "changed on/by" auto-calculation (because I am merging data from another DB and want the change fields to represent the actual last change, not the copy operation).
For case #2 I have tried both the built-in checkbox of changing the field when edited, and the calculated field solution with Let ( trigger = GetField ( "" ) ; If ( $$SilentSync > 0 ; Self ; Get ( CurrentDate ) ) ) as was answered, e.g. in my related question about avoiding auto-calculations when working in the solution itself. Sadly, both get triggered (and the global variable solution doesn't avoid it) when using JDBC.
Is there a way to say "don't update this field when I change it via JDBC"? Either globally or with an improved field calculation? I've searched the official JDBC guide and Google and found nothing.
For example, it would help if, in a calculated field, I could somehow determine that the data was changed through JDBC rather than through the FM solution itself.
For anyone finding this question in the future:
The solution is actually fairly simple: modify the calculation snippet I posted to check the current account name (via Get ( AccountName )) instead of a global variable. As I have made a special user for JDBC access, that works well.
Is there a way to create a customized validator to validate the data that is imported from a CSV file in REDCap?
I'm not entirely sure what you mean but I will try to answer your question as best as possible.
To begin with, you can customize what validation is used on a per-field level in REDCap, simply by editing a field and selecting a field type. E.g. if it's a 'text' field with the 'integer' constraint activated, data for that field in uploaded .csv files will be checked to ensure that only integer values are imported. You can add some extra constraints as well, using e.g. min/max values or action tags (link to PDF document about action tags). Note that if you use 'multiple choice'/'checkboxes' type fields, where you manually specify a limited number of possible field choices, uploaded data will be checked to make sure that all values for the field are in the list of allowed field choices.
If you mean that you want to bypass REDCap's validation process so that it allows some normally invalid values to be uploaded (say a value of 2.5 for a 'text' field with the 'integer' constraint), I don't think that's possible, and I don't think there's any good reason for wanting to do that. If you want a field to be less strict about validation, then you can change its type to a more general one. So if you have a 'text' field with the 'integer' constraint, you can simply remove the 'integer' constraint and have it be a plain 'text' field. But be careful if you do this. Think about whether you are removing constraints because actually valid data aren't allowed to be uploaded, or whether you are trying to upload invalid data that ought to be removed or fixed. If it's the latter, you should definitely leave the validation as it is, because then it's doing its job and letting you know what needs to be corrected.
If you want to add additional validation, making it stricter than what REDCap does at the moment, then you can of course change the field types to be more narrow, e.g. adding an 'integer' constraint to a 'text' field, or changing a 'text' field to be a 'multiple choice' field. That's the solution in REDCap. If you want to add constraints/validation that REDCap doesn't support, I think you'll have to pre-process the data before upload. If you want to automate the validation process, you could write a script in R or Python (a rough Python sketch follows the list below) that:
Imports the .csv data (e.g. with read.csv/read_csv)
Filters the data the way you want (e.g. if you want to exclude certain strings that have a pattern to them, you could use R's 'stringr' package or Python's 're' package)
Uploads the data to REDCap using the REDCap API (for R there's the 'redcapAPI' package, and for Python there's the 'PyCap' package)
Note that if you want your script to upload the data directly, you will need an API key, which you can request from your institution's REDCap administrators. You can of course also create a script that, instead of uploading the data directly, produces .csv files for upload.
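To make that concrete, here is a rough Python sketch of such a script using pandas and PyCap; the API URL, token, file name, 'comment' field and the regex pattern are all placeholders to replace with your own, and the filtering step is just one example of the kind of check you might add.

```python
import re
import pandas as pd
from redcap import Project  # PyCap: pip install PyCap

API_URL = "https://redcap.example.org/api/"  # your institution's REDCap API endpoint
API_KEY = "YOUR_API_TOKEN"                   # request a token from your REDCap administrators

# 1. Import the .csv data
df = pd.read_csv("upload.csv")

# 2. Filter the data the way you want; as an example, drop rows where a
#    free-text field matches an unwanted pattern ('comment' is a placeholder field name)
bad_pattern = re.compile(r"test|dummy", re.IGNORECASE)
df = df[~df["comment"].astype(str).str.contains(bad_pattern)]

# 3. Upload the remaining records through the REDCap API
project = Project(API_URL, API_KEY)
result = project.import_records(df.to_dict(orient="records"))
print(result)
```

If you'd rather not upload directly, replace the last step with df.to_csv("checked.csv", index=False) and import that file through the web interface instead.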
If you're not very familiar with R, Python or other programming languages/tools that enable you to craft validation scripts, it's probably best to work with what REDCap itself offers you.
I'm having a strange issue with exporting/updating/importing data in our on-premises Dynamics 365 (8.2). I was doing a bulk update of over 3000 records by exporting the records to an Excel workbook, updating the data in a specific column, then importing the workbook back into CRM. It worked for all of the records except 14 of them, which, according to the import log, failed with the message "You cannot import data to this record because the record was updated in Microsoft Dynamics 365 after it was exported." I looked at the Audit History of those 14 records and found that they had not been modified in any way for a good two months. Strangely, the modified date of the most recent Audit History entry for ALL 14 records is the exact same date/time.
We have a custom workflow that runs once every 24 hours on a schedule that automatically updates the Age field of our Contact records based on the value in the respective Birthday field. For these 14 records, ALL of them have a birthday of November 3rd, but in different years. What that means though is that the last modification that was done to them was on 11/3/2019 via the workflow. However, I cannot understand why the system "thinks" that this should prevent a data update/import.
I am happy to provide any additional information that I may have forgotten to mention here. Can anyone help me, please?
While I was not able to discover why the records would not update, I was able to resolve the issue. Before I share what I did to update the records, I will try and list as many things as I can remember that I tried that did not work:
I reworked my Advanced Find query that I was using to export the records that needed updating so that it returned ONLY those records that had actual updates. Previously, I used a more forgiving query that returned about 30 or so records, even though I knew that only 14 of them had new data to import. I did so because the query was easier to construct, and it was no big deal to remove the "extra" records from the workbook before uploading it for import. I would write a VLOOKUP for the 30-something records and remove the rows for which the VLOOKUP didn't find a value in my dataset, leaving me with the 14 that did have new data. After getting the error a few times, I started to ensure that I only exported the 14 records that needed to be updated. However, I still got the error when trying to import.
I tried formatting the (Do Not Modify) Modified On column in the exported workbook to match the date format in the import window. On export of the records, Excel was formatting this column as m/d/yyyy h:mm while the import window with the details on each successful and failed import showed this column in mm/dd/yyyy hh:mm:ss format. I thought maybe if I matched the format in Excel to the import window format it might allow the records to import. It did not.
I tried using some Checksum verification tool to ensure that the value in the (Do Not Modify) Checksum column in the workbook wasn't being written incorrectly or in an invalid format. While the tool I used didn't actually give me much useful information, it did recognize that the values were checksum hashes, so I supposed that was helpful enough for my purposes.
I tried switching my browser from the new Edge browser (the one that uses Chromium) to just IE as suggested on the thread provided by Arun. However, it did not resolve the issue.
What ended up working in the end was Arun's suggestion to just make some arbitrary edit to all the records and export them afterward. This was okay to do for just 14 records, but I'm still slightly vexed, as this wouldn't really be a feasible solution if it were, say, a thousand records that were not importing. There was no field that ALL 14 Contact records had in common that I could just bulk edit and then bulk edit back again. What I ended up doing was finding a text field on the Contact form that did not have any value in it for any of the records, putting something in that field, then going to each record in turn and removing the value (since I don't know of a way to "blank out" or clear a text field while bulk editing). Again, this was okay for such a small number of records, but if it were to happen on a larger number, I would have to come up with an easier way to bulk edit and then bulk "restore" the records. Thanks to Arun for the helpful insights, and for taking the time to answer. It is highly appreciated!
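If this ever does happen on a larger number of records, one option would be to script the "touch and restore" against the Web API rather than doing it by hand. This is only a rough Python sketch: it assumes the on-premises v8.2 Web API is reachable and that NTLM authentication works for your deployment, and the org URL, credentials, field name and record GUID are all placeholders.

```python
import requests
from requests_ntlm import HttpNtlmAuth  # pip install requests_ntlm

BASE = "https://crm.example.local/MyOrg/api/data/v8.2"  # placeholder org URL
AUTH = HttpNtlmAuth("DOMAIN\\user", "password")         # placeholder credentials
HEADERS = {
    "OData-Version": "4.0",
    "OData-MaxVersion": "4.0",
    "Content-Type": "application/json",
}

# GUIDs of the contacts that refused to import (placeholders)
contact_ids = ["00000000-0000-0000-0000-000000000001"]

for cid in contact_ids:
    url = f"{BASE}/contacts({cid})"
    # 'description' is just an example attribute: write a throwaway value...
    requests.patch(url, json={"description": "touch"}, headers=HEADERS, auth=AUTH).raise_for_status()
    # ...then clear it again, which the bulk edit form cannot do for text fields
    requests.patch(url, json={"description": None}, headers=HEADERS, auth=AUTH).raise_for_status()
```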
When you first export an entity (contacts, for example) for re-import, you will see that the exported Excel file contains 3 hidden columns: (Do Not Modify) Contact, (Do Not Modify) Row Checksum and (Do Not Modify) Modified On.
When you want to create new instances of the entity, just edit the records and clear the content of those 3 hidden columns.
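If the exported file is large, clearing those three columns by hand is tedious; a small Python sketch with openpyxl could do it. It assumes the column headers sit in the first row of the active sheet, and the file names are placeholders.

```python
from openpyxl import load_workbook  # pip install openpyxl

wb = load_workbook("exported_contacts.xlsx")  # placeholder file name
ws = wb.active

# The three system columns to blank out, matched by their header text in row 1
hidden_headers = {
    "(Do Not Modify) Contact",
    "(Do Not Modify) Row Checksum",
    "(Do Not Modify) Modified On",
}

for idx, col in enumerate(ws.iter_cols(min_row=1, max_row=1), start=1):
    if col[0].value in hidden_headers:
        # Clear every data cell in this column so CRM treats the rows as new records
        for row in range(2, ws.max_row + 1):
            ws.cell(row=row, column=idx).value = None

# Save under a new name so the original export stays untouched
wb.save("contacts_as_new.xlsx")
```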
This error happens when there is a checksum difference, or the row version of the exported record differs from the record in the database.
Try doing a dummy edit on the affected records, then export and re-import again.
I can think of two reasons: either the datetime format is confusing the system, or the community thread explains a weird scenario.
Apparently when importing the file, amending and then saving as a different file type alters the spreadsheet's parameters.
I hence used Internet Explorer, since when importing the file the system asks the user to save it as a different format. I added .xlsx at the end to save it as the required format. I amended the file and imported it back into CRM. It worked.
For me it turned out to be a different CRM time zone setting for the exporter and the importer. Unfortunately, this setting doesn't seem to be changeable by an administrator via the user interface.
The setting is available for each user under File->Options->Time Zone.
I want to read from a table, change a couple column values for a few lines in a query, then update those lines on the same table.
I'm using SAP BODS, and that's what I tried:
I was about to insert images but just found out I can't insert images until 10 rep.
Anyway, I created a DataFlow where I have the same table as source and target.
Then a query to filter (using a WHERE clause) and change values (using the mapping), followed by a Table Comparison (where I expected those lines to be flagged for update in this particular case): I set the table name in the first entry, then the PK in 'input primary key', and then the two columns I want to change in 'Compare columns'. No other changes from the defaults that I can recall.
I got no warnings on 'validate all', but on execution I receive an ORA-00001 (unique constraint violated) on the PK.
So... I thought the Table Comparison would try to update, but it seems like it's trying to insert instead. I want to know what I'm doing wrong and how I could get the job to do those updates. Thanks in advance.
Ps. I did search SO before asking and didn't find anything relevant.
OK, so it turns out I found out what was going on just a few minutes after posting the question.
I wasn't sure if I should answer my own question, so I took a look at Etiquette for answering your own question and decided to come back here and answer it myself.
For some reason I got stuck thinking that it had something to do with the Table Comparison trying to insert a line with a PK that's already there, instead of doing the update I wanted.
But after going back to the job to take another look at the issue, it occurred to me that the problem might be a duplicate in the incoming data set. I made a few adjustments to filter those out, and voilà.
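For anyone else who hits this: if you can dump the incoming query result to a CSV, a quick pandas check will confirm whether the data set feeding the Table Comparison contains duplicate keys (the file and column names below are placeholders).

```python
import pandas as pd

# Placeholder names: dump the rows feeding the Table Comparison to a CSV first
df = pd.read_csv("incoming_rows.csv")
dupes = df[df.duplicated(subset=["PK_COLUMN"], keep=False)]
print(dupes.sort_values("PK_COLUMN"))
# Any rows printed here share a primary key, which is exactly what tripped up my job.
```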
I've made the mistake of using the 'Calculate and Replace Column' feature to replace the wrong column, and realized after the fact. The column I replaced corresponds to last names and is important. I would like to retrieve this column but maintain my other 15 or so data transformations. Ideally, I would like to remove this transformation, but I've come up empty so far. Here's what I've tried:
I tried adding the 'last name' column again from the same external source, using >Insert >Columns... I also tried renaming this column to avoid the data transformation. Unfortunately, this resulted in an entirely empty column, so it either did not successfully match to the table or it was affected by the transformation.
I checked the source information, and found exactly the 3-4 lines that I wish were not there. I thought it might be possible to edit this but haven't found a way. This seems like it would be the easiest.
Another idea I had was I could replace the data table with the same source, and repeat all of the transformations from the replace data table dialogue (excluding the bad one). This is my next plan of attack, but I figured I would come on here to see if there's an easier way first.
Thanks in advance!
Good news for you, #jeremyVollen!
It is possible to 'edit' your transformation per Tibco article 44098.
Resolution: If there is more than one transformation on a data table and you need to edit any of those transformations, follow the steps below:
Go To Edit >> Data Table Properties.
Select the desired data table inside which the transformation has been added and click on Refresh Data > With Prompt.
A new window will pop up which will allow you to make the desired changes in each of the transformations.
Unfortunately, it is NOT possible to reverse data table transformations.
It IS possible to undo the transformations with Edit >> Undo or CTRL+Z, but that's as far as it goes.
My strategy for dealing with this (in accordance with your #3) is to visit Edit >> Data Table Properties, select the table I'm interested in, select Source Information, then copy the contents of the text area and paste it into Notepad. Then I'll use File >> Replace Data Table and start over from the beginning while keeping Notepad open so I don't miss any steps.
I realize it's not ideal, but unfortunately there is no other way.