Issue with choice action when running transform map - ServiceNow

I'm trying to insert records into a table using a transform map. There is a field in the target table which is a choice type, and I have set the choice action on that field's mapping to reject if no matching value is found. But when I tried inserting a record through the transform map with a correct value, one that exists in the choice list of the target field, it still got rejected, and hence the record was not inserted.
I have tried searching for possible reasons why it is rejected even though the source field holds a correct value. Here's a KB article that I found: https://hi.service-now.com/kb_view.do?sysparm_article=KB0677334
It says that if a choice list value is longer than 40 characters it will be truncated and might not match any of the choices. But the choices in the target field have only 20 characters or less.
I first tried running the transform map in the lower environments before proceeding to production. In the lower environment it works fine and the records are inserted. But when I tried it in production, the records got rejected.

There is a difference between a choice and a choice list. Within a choice list, the values are comma-separated sys_ids. I could imagine that you have multiple values in the import and then the maximum character length is reached, or the values do not match, etc.
You could use this approach:
Instead of a direct source-to-target field assignment, use a script on the field mapping. Then you gain the full power of scripting ;)
Here you could add some logic, such as a switch-case or whatever fits; I guess you get the point. A minimal sketch follows.
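For example, a field map source script along these lines (a minimal sketch: the source field u_status and the choice values are placeholders, not from the original import):

// Runs when "Use source script" is checked on the transform map field mapping.
// Normalize the incoming value and only return values the target choice list knows.
answer = (function transformEntry(source) {
    var allowed = ['open', 'pending', 'closed']; // hypothetical target choice values
    var val = ('' + source.u_status).trim().toLowerCase(); // hypothetical source field
    for (var i = 0; i < allowed.length; i++) {
        if (allowed[i] == val) {
            return allowed[i];
        }
    }
    return 'open'; // fall back to a known-good choice instead of having the row rejected
})(source);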

Related

Error in azureml "Non numeric value(s) were encountered in the target column."

I am using Automated ML to run a time series forecasting pipeline.
When the AutoMLStep gets triggered, I get this error: Non numeric value(s) were encountered in the target column.
The data for this step is passed through an OutputTabularDatasetConfig, after applying read_delimited_files() on an OutputFileDatasetConfig. I've inspected the prior step, and the data consists of a 'Date' column and a numeric column called 'Place' with 80+ observations at monthly frequency.
Nothing seems to be wrong with the column type or the data. I've also applied a number of techniques on the data prep side, e.g. pd.to_numeric() and astype(float), to ensure it is numeric.
I've also tried forcing this through FeaturizationConfig's add_column_purpose('Place','Numeric'), but in that case I get another error: Expected column(s) Place in featurization config's column purpose not found in X.
Any thoughts on how to solve this?
So, a few learnings on this from interacting with the stellar Azure Machine Learning engineering team:
When calling the read_delimited_files() method, ensure that the output folder does not contain many inputs or files. For example, if all intermediate outputs are saved to a common folder, it may read all the prior inputs in that folder and, depending on the shape of the data, borrow the schema from the first file or mix them all together. This can lead to inconsistencies and errors. In my case, I was dumping many files to the same location, which was confusing this method. The fix is either to mark the output folder distinctly (e.g. with a UUID) or to use different paths.
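For example, a sketch of the unique-folder fix (the workspace, datastore, and path names here are illustrative, not from the original pipeline):

from uuid import uuid4
from azureml.core import Workspace
from azureml.data import OutputFileDatasetConfig

ws = Workspace.from_config()
datastore = ws.get_default_datastore()

# Write each run's intermediate output to its own folder so that
# read_delimited_files() cannot pick up leftovers from earlier runs.
prepped_path = "intermediate/prepped_{}".format(uuid4())
prepped_data = OutputFileDatasetConfig(name="prepped_data",
                                       destination=(datastore, prepped_path))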
The dataframe from read_delimited_files() may treat all columns as object type, which can lead to a data type check failure (i.e. the label_column needs to be numeric). To mitigate, explicitly state the type. For example:
from azureml.data import DataType
prepped_data = prepped_data.read_delimited_files(set_column_types={"Place":DataType.to_float()})

SSRS Report Parameters Interactive

I have a report that requires 3 parameters. All 3 have a query to pre-populate them, using a dataset for each, so under their properties the Available Values option is set from the query. Defaults were also set to use the same query. This works fine.
My problem is when users of the report want to enter the values themselves rather than going into the list populated by the query. Users know the value they want to enter, so it's faster for them to type it than to select it. SSRS doesn't seem to let you type a value once you have set the Available Values and Default Values for some reason. Is there a way around this, please?
Many thanks.
There is one straightforward way: use a comma-separated multi-value parameter, rather than a list, where the user enters the input.
The link below explains it in detail, but I am quite sure you do not want to stick to that solution:
https://www.mssqltips.com/sqlservertip/3479/how-to-use-a-multi-valued-comma-delimited-input-parameter-for-an-ssrs-report/
Another thing you could do is keep your multi-value list parameter as it is and create a second, text-input parameter.
If the user simply chooses from the list, fair enough; you handle the second parameter as null, because the user chose from the list.
Then, in your dataset, check and apply the filter on the second parameter only when its value is not null.
The same goes the other way: if the user enters text, apply the multi-value parameter only when it is not null. A sketch of that dataset filter follows.
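As a sketch, assuming a hypothetical Orders dataset with @CustomerList as the query-populated multi-value parameter and @CustomerText as the free-text parameter (SSRS expands the multi-value parameter into the IN list):

SELECT o.OrderID, o.CustomerID, o.OrderDate
FROM dbo.Orders AS o
WHERE (@CustomerText IS NOT NULL AND o.CustomerID = @CustomerText)
   OR (@CustomerText IS NULL AND o.CustomerID IN (@CustomerList));

The text parameter would need "Allow null value" checked so users can skip it when they pick from the list.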

How to conditionally require 18+ fields based on selection of two dropdowns

I'm new to SharePoint 2010 with what I would call a high-school-freshman level of coding experience, though I can generally stumble and tinker my way through. I don't currently have access to SharePoint Designer, but from the searching I've done so far, it may be required. Still, I'm hoping to find an OOTB solution to the problem below.
I have been tasked with building an incident resolution tracking list on SharePoint. My boss is very concerned about being audited by legal and has some very specific requirements about required information. Column A contains a drop-down list of 5 choices that indicate the Final Solution. Column B contains a drop-down list of 4 choices that indicate the Initial Problem. Based on the selections in A and B, different columns in C-X are required to be blank, not blank, or contain specific entries. The only way I can find to do this is to create a list validation containing a nested IF for each combination of A and B, resulting in 20 nested IFs. However, SharePoint is limited to 7 nested IFs, so I'm looking for any possible solutions.
*This List will primarily be accessed in Datasheet view, so "HTML in calculated column" type solutions are not viable.
You can use calculated columns to break up the validation formula into more manageable chunks.
Let's start with a simple example.
Condition 1: If the initial problem was that the user's computer was too slow and the final solution was restarting the computer, you need to fill in the [C] column.
Condition 2: If the initial problem was that the user was on fire and the final solution was dousing them with water, you need to fill in the [D] column.
You could perform that list validation all in one formula, as below:
=IF(
AND([A]="Restarted Computer",[B]="Computer is slow"),
NOT(ISBLANK([C])),
IF(
AND([A]="Doused with water",[B]="User is on fire"),
NOT(ISBLANK([D])),
TRUE
)
)
But that's long and ugly (especially when you condense it to one line).
Instead, you could add two calculated columns, one for each condition you want to check. For the sake of this example, let's say you add a column called C_is_valid and a column called D_is_valid:
C_is_valid calculated column formula:
=IF(AND([A]="Restarted Computer",[B]="Computer is slow"),NOT(ISBLANK([C])),TRUE)
D_is_valid calculated column formula:
=IF(AND([A]="Doused with water",[B]="User is on fire"),NOT(ISBLANK([D])),TRUE)
Updated validation formula:
=AND([C_is_valid],[D_is_valid])
It's easy to see how this can simplify even a very complex set of validation conditions...
=AND(C_is_valid,AND(D_is_valid,AND(E_is_valid,AND(F_is_valid,AND(G_is_valid,AND(H_is_valid,I_is_valid))))))
But even that could be simplified by consolidating some of those AND()s into multiple calculated columns, so that your final validation formula could be as simple as:
=AND([First set of conditions is valid],[Second set of conditions is valid])
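For instance (column names here are illustrative), a consolidating calculated column could roll several of the per-column checks into one:
First set of conditions is valid calculated column formula:
=AND([C_is_valid],AND([D_is_valid],[E_is_valid]))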

How do I validate data in a file in SSIS before inserting into a database?

What I want to do is take data from a dbf file and insert it into a table, which I've already done. Since there are many files, a Foreach Loop Container is being used. However, before inserting into the table, I want to look at the date fields and compare them to a date variable. If the dates match the variable, then move on to the next step of the flow. But if any of the dates don't match the variable, then that file and its contents are discarded and the next file is looked at.
How do I accomplish this in SSIS?
You're looking for the Conditional Split Component within your Data Flow Task.
Assuming your source column is MyDate and you have an SSIS variable called @[User::ReferenceDate], then you'd apply an expression like
[MyDate] == @[User::ReferenceDate]
That will evaluate to True when the dates match, False otherwise.
In your Conditional Split, add a row into the component.
OutputName: DatesMatched
Condition: [MyDate] == @[User::ReferenceDate]
Default output name: DatesUnmatched
Now when you connect the output from this to your destination, it'll ask whether you want to route the data using the DatesMatched or DatesUnmatched path. Use the DatesMatched path.
As I re-read this, "if any of the dates don't match the variable, then that file and its contents are discarded", you're looking at double-processing the file. The first pass reads it all in and validates it. The second, optional pass actually loads it to the database.
From your Conditional Split, add a Row Count component to the DatesUnmatched path. Point it at a variable of type Int32 named CountDatesUnmatched. In a perfect world, that will be zero when the validation of the file completes.
In the Precedence Constraint between the validation Data Flow and the actual import Data Flow, double-click the connector line and change the evaluation criteria from Constraint to Expression and Constraint. Leave the value as Success, and in the Expression use @[User::CountDatesUnmatched] == 0. That data flow will only light up if both conditions are true: parsing was successful and no rows were sent to the Row Count component.
Finally, you can cheat, and sometimes this approach makes sense. If you're using an OLE DB Destination, you can keep the default MaximumInsertCommitSize of 2147483647 ("2B") with a data access mode of fast load. This translates to "everything is going to commit, or none of it is". That can lock up your target table and cause your transaction log to grow heavily, depending on how much data you're loading. Use the Conditional Split as described above, but for the DatesUnmatched path, induce a failure: a Derived Column with a divide-by-zero, or a Script Component with an explicit FireError event, will cause that transaction to go belly up. You'd need to do some magic in the OnError event handler to not abort the overall file processing, but it's a lazy hack (or one that is useful when double-reading the file is prohibitive but impacting the database is less so).
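If you go the script route, the body can be as small as this (a sketch of a Script Component transformation attached to the DatesUnmatched path; the subcomponent name and message are illustrative):

// Fires an error for any row that reaches the DatesUnmatched path,
// which fails the data flow and rolls back the single fast-load commit.
public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    bool cancel;
    this.ComponentMetaData.FireError(0, "DateValidation",
        "A row's date did not match the reference date; aborting this file.",
        string.Empty, 0, out cancel);
}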

Split a Value in a Column with Right Function in SSIS

I need urgent help from you guys. The thing is, I have a column which represents the full name of a user, and I want to split it into first and last name.
The format of the full name is "World, hello"; the first name here is "hello" and the last name is "World".
I am using a Derived Column (SSIS) with the RIGHT function for the first name and the SUBSTRING function for the last name, but the results of these seem to be blank, which is where even I am blank. :)
It's working for me. In general, you should provide more detail in your questions on sites such as this to help others recreate and troubleshoot your issue. You did not specify whether we need to address NULLs in this field, nor do I know how you'd want to interpret them, so there is room for improvement on this answer.
I started with a simple OLE DB Source and hard coded a query of "SELECT 'World, Hello' AS Name".
I created 2 Derived Column tasks. The first one adds a column to the Data Flow called FirstCommaPosition. The formula I used is FINDSTRING(Name,",",1). If Name is NULLable, then we will need to test for nullability prior to calling the FINDSTRING function. You'll then need to determine how you want to store the split data in the case of NULLs. I would assume both first and last name should be NULL, but I don't know that.
There are two reasons for doing this in separate steps. The first is performance. As counter-intuitive as it sounds, doing less in a derived column results in better performance because the SSIS engine can better parallelize the operations. The other is simpler: I will need this value to make the first/last name split, so it is easier and less maintenance to reference a column than to copy-paste a formula.
The second Derived Column is going to actually perform the split.
My FirstNameUnicode column uses this formula: (FirstCommaPosition > 0) ? RTRIM(LTRIM(RIGHT(Name,LEN(Name) - FirstCommaPosition))) : "" That says "If we found a comma in the preceding step, then slice out everything after the comma to the end of the string and apply trim operations. If we didn't find a comma, then just return a blank string." Note that RIGHT takes the number of characters to keep, so it is the total length minus the comma's position; passing FirstCommaPosition by itself only works when the comma happens to fall exactly in the middle of the string. The default string type for expressions is Unicode (DT_WSTR), so if that is not what you need, you will have to cast the result to the correct string codepage (DT_STR).
My LastNameUnicode column uses this formula: (FirstCommaPosition > 0) ? SUBSTRING(Name,1,FirstCommaPosition - 1) : "" Similar logic as above, except now I use the SUBSTRING operation instead of RIGHT. Users of the 2012 release of SSIS and beyond, rejoice, for you can use the LEFT function instead of SUBSTRING. Also note that you need to back off 1 position to keep the comma out of the last name.
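To make that concrete, tracing the sample value through both formulas:

Name = "World, Hello" has 12 characters and FINDSTRING returns 6, so FirstCommaPosition = 6.
FirstNameUnicode: RIGHT(Name, 12 - 6) = " Hello", which the trims reduce to "Hello".
LastNameUnicode: SUBSTRING(Name, 1, 6 - 1) = "World".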
