Scribe: Not getting proper results - dynamics-crm

I am working on Sage ERP MAS 200 and Microsoft Dynamics CRM integration using Scribe.
I have a chain of 5 Scribe jobs with which I am trying to compute various values and update/insert in CRM (Target):
(1) Job 1: This job simply transfers all the data from the AR_Customer table of MAS (source) to the same table in CRM (target). It also inserts the value 0 for a few new fields (yeartilldate sales, monthtilldate sales, prioryear sales, monthlytrend).
(2) Job 2: Month till date / Period till date: This job computes the Month-till-date (or Period-till-date) sales values and updates them in CRM. For accounts that do not get updated, the value stays at the 0 inserted in Job 1.
(3) Job 3: Prior year: This job computes the Prior Year sales values and updates them in CRM. For accounts that do not get updated, the value stays at the 0 inserted in Job 1.
(4) Job 4: Year till date: This job computes the Year-till-date sales values and updates them in CRM. For accounts that do not get updated, the value stays at the 0 inserted in Job 1.
(5) Job 5: MonthlyTrend: This job computes the MonthlyTrend values and updates them in CRM. For accounts that do not get updated, the value stays at the 0 inserted in Job 1.
Issue:
Jobs 1 through 4 run without any issue at all. The problem is only in Job 5.
I have 7 steps in this job. The 7th step (CRM admin) is not called by any other step (i.e., no step in the workflow passes data to it), but I have kept it for other reasons.
Step 6 (Account) is supposed to perform the account update. I use the same formula for calculating MonthlyTrend in both steps 6 and 7.
Following are the observations:
1> For records where the flow never reaches steps 6 and 7: the MonthlyTrend value is calculated properly for both steps 6 and 7 (I could see the values when I clicked 'Test Job').
2> For records where the flow reaches step 6 but never reaches step 7: the MonthlyTrend value is calculated properly for step 7, but not for step 6 (the value remains #NULL).
Also, when I tried giving step 6 a constant value (like 0 or 8), it is displayed correctly even in case 2 above.
Please let me know why this might be happening.

Related

How can I ensure correct filtering of a detail table from a summary table?

In this production data, the total quantity processed through the machines is in table WeekProd, and the number of parts scrapped is stored in table WeekScrap. There can be more than one scrap reason and quantity for each production line, and they are summed using a measure.
The two tables are filtered in common by the calendar week number, the machine lookup, and the shift lookup. Using two separate visuals shows that this filtering works as expected.
However, when I place the Sum Scrap Qty measure onto the WeekProd visual, it shows the scrap qty on every row, although the total figure of 34 ends up correct.
How can I stop this from happening?

Spotfire Calculation using previous rows' calculated data

I have been struggling with the following calculation. I have tried a few Previous, Next, and OVER expressions, but I can't seem to get the syntax correct.
Basically, I need to subtract demand from stock on hand (SOH) to get a new column. The next row then uses the newly created value as its stock on hand and subtracts the demand for that row; that result becomes the new stock on hand, and so on. I can't get it to loop. I have ranked the demand in order of date required per plant, since the data set will have multiple plants, SOH values, and demands.
The attached pic shows that A020 only has one QTY short, so that is straightforward. But for A030 the opening SOH is 152 and the first-dated QTY short is 12, so I need 152 - 12 = 140. The second-dated QTY, which is ranked 2, then needs to be 140 - 12 = 128, rank 3 uses 128 - 12, and so on; i.e., the SOH needs to update dynamically.
This might not be natively possible in point-and-click Spotfire (happy to be corrected if that is wrong).
You should consider writing a data function using R to do this group-by loop operation.
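For reference, here is a minimal sketch of that logic in pandas (recent Spotfire versions also accept Python data functions, and the same few lines port directly to TERR/R). The column names Plant, Rank, SOH, and QtyShort are assumptions based on the description above, and the rows are made up.

```python
import pandas as pd

# Dummy data in the shape described: the opening SOH is repeated on every
# demand row for a plant, and demand rows are ranked by required date.
df = pd.DataFrame({
    "Plant":    ["A020", "A030", "A030", "A030"],
    "Rank":     [1, 1, 2, 3],
    "SOH":      [50, 152, 152, 152],   # opening stock on hand per plant
    "QtyShort": [10, 12, 12, 12],      # demand per dated requirement
})

def running_soh(group: pd.DataFrame) -> pd.DataFrame:
    group = group.sort_values("Rank").copy()
    # Opening SOH minus cumulative demand gives the rolling balance:
    # 152 - 12 = 140, 140 - 12 = 128, and so on.
    group["NewSOH"] = group["SOH"].iloc[0] - group["QtyShort"].cumsum()
    return group

result = df.groupby("Plant", group_keys=False).apply(running_soh)
print(result)
```

The trick is that the rolling balance is just the opening SOH minus a cumulative sum of the demand, so no explicit row-by-row loop is needed.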

Running Sum of Filtered Rows in Tableau

I have a table of challenge submissions (recording the time each player submitted a challenge in a competition, and whether the submission was correct or not), and another table that has the points associated with each challenge.
How do I plot a graph of the running sum of points earned by the top 3 players in the competition over time (for the last 24 hours only)? The catch is that I only need to consider the first successful submission when there is more than one successful submission for a challenge (e.g., Challenge #17 for Player A).
EDIT:
Dummy Data
Desired Output:
I am proposing a solution assuming a few things:
Challenge acceptance time ends at 17:00 every day.
Different lines represent different challenges.
Step 1: Create a calculated field (CF) to adjust the date/time by calendar date, adjusted date, as
DATEADD('hour', 7, [Date])
Note that I have added 7 hours so the last calendar date/time for submission becomes 12:00 AM the next day.
Step 2: Create another CF, win_loss, as
IF [Success] = 'W' THEN 1 ELSE 0 END
Step 3: Create another CF, game points, as
[win_loss] * [Points (Points)]
Step 4: Create another CF, first win or loss, as (don't worry about the "loss" part here)
{FIXED [Player], [Challenge], [Success] : MIN([Date])} = [Date]
Step 5: Create a set on the [Player] field, selecting the Top 3 by this formula:
SUM(IF [first win or loss] = TRUE THEN [game points] END)
Step 6: Build your view by dragging
the set, MDY(adjusted date), and first win or loss onto the Filters shelf/card
add the MDY filter to context
[Date] as exact date, discrete, to Columns
SUM(game points) to Rows
add a Running Total table calculation on the measure
right-click SUM(game points), click Edit in Shelf, and replace the existing calculation with this one:
RUNNING_SUM(ZN(SUM([game points])))
(Note: this ensures your lines always start at f(x) = 0)
challenge on Color in the Marks card
SUM(game points) on Text in the Marks card.
Note the effect of each filter: (i) the set ensures only the top 3 players are in the view;
(ii) adjusted date restricts the view to the 24-hour challenge submission window;
(iii) first win or loss eliminates second and subsequent win(s) by the same player for the same challenge.
I hope this also makes things clearer.
You should get your desired view, or change the date field to seconds to get a more granular one.
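Outside Tableau, the same logic is easy to sanity-check in pandas. This is just a sketch with made-up rows; the column names (Player, Challenge, Date, Success, Points) mirror the fields above but are assumptions:

```python
import pandas as pd

# Dummy submissions: Player A submits Challenge 17 successfully twice,
# so only the earlier one should count.
subs = pd.DataFrame({
    "Player":    ["A", "A", "B", "A", "C"],
    "Challenge": [17, 17, 17, 3, 3],
    "Date": pd.to_datetime([
        "2021-06-01 10:00", "2021-06-01 11:30",
        "2021-06-01 10:15", "2021-06-01 12:00", "2021-06-01 13:00",
    ]),
    "Success": ["W", "W", "L", "W", "W"],
})
points = pd.DataFrame({"Challenge": [17, 3], "Points": [100, 50]})

# Keep only the first successful submission per player/challenge
# (the role of the {FIXED ...} = [Date] filter above).
wins = subs[subs["Success"] == "W"]
first_wins = wins.loc[wins.groupby(["Player", "Challenge"])["Date"].idxmin()]

# Join points and keep the top 3 players by total score (the Tableau set).
scored = first_wins.merge(points, on="Challenge")
top3 = scored.groupby("Player")["Points"].sum().nlargest(3).index
scored = scored[scored["Player"].isin(top3)].sort_values("Date")

# Running sum of points over time, per player.
scored["RunningPoints"] = scored.groupby("Player")["Points"].cumsum()
print(scored[["Date", "Player", "Challenge", "RunningPoints"]])
```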

Need help creating rollup field for aggregating child records' Gross Profit values on Parent record

I am trying to create a rollup field on a Parent Opportunity record that will show the sum of all Estimated Gross Profit values of the child Opportunities associated with that parent record. One parent Opportunity record can be associated with many child Opportunity records.
However, I am running into some issues:
The Parent Opportunity includes "Est. Gross Profit" as well, and if the user fills out this field on the parent record, it shows up in my "Parent Est. Gross Profit" rollup field. I only want child Opportunity records to be included in the sum for "Parent Est. Gross Profit".
I've noticed the rollup field takes a very long time to update, maybe even hours. Is there a way to avoid this?
Am I going about this the right way? Is there a better way to sum the child Opportunities' Est. Gross Profit values on the parent record?
Thanks!
Make sure you are using the right relationship in your rollup query definition: new_opportunity_childopportunities. The rollup will then aggregate only the related child records, not the parent record's own "Est. Gross Profit" value.
The rollup calculation job is an asynchronous job with a default schedule.
You can modify the recurrence of the system job (minimum 1-hour schedule).
Rollup calculations
The rollups are calculated by scheduled system jobs that run asynchronously in the background. You have to be an administrator to view and manage the rollup jobs. To view the rollup jobs, go to Settings > System Jobs > View > Recurring System Jobs. To quickly find a relevant job, you can filter by the System Job type: Mass Calculate Rollup Field or Calculate Rollup Field.
Mass Calculate Rollup Field is a recurring job, created per rollup field. It runs once, after you create or update a rollup field. The job recalculates the specified rollup field value in all existing records that contain this field. By default, the job runs 12 hours after you created or updated the field. After the job completes, it is automatically rescheduled to run in the distant future, approximately in 10 years. If the field is modified, the job resets to run again 12 hours after the update. The 12-hour delay is intended to ensure that the Mass Calculate Rollup Field job runs during the organization's non-operational hours. It is recommended that an administrator adjust the start time of a Mass Calculate Rollup Field job after the rollup field is created or modified, so that it runs during non-operational hours. For example, midnight would be a good time to run the job to assure efficient processing of the rollup fields.
Calculate Rollup Field is a recurring job that does incremental calculations of all rollup fields in the existing records for a specified entity. There is only one Calculate Rollup Field job per entity. Incremental calculation means that the Calculate Rollup Field job processes the records that were created, updated, or deleted after the last Mass Calculate Rollup Field job finished execution. The default maximum recurrence setting is one hour. The job is automatically created when the first rollup field on an entity is created, and deleted when the last rollup field is deleted.
Online recalculation option: if you hover over the rollup field on the form, you can see the time of the last rollup, and you can refresh the rollup value by choosing the Refresh icon next to the field.
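If waiting for the hourly Calculate Rollup Field job is not acceptable, a rollup field can also be recalculated on demand through the Web API's CalculateRollupField function. Here is a minimal Python sketch, assuming a Dynamics 365 org URL, a valid OAuth bearer token, and placeholder record id and field name (new_parentestgrossprofit is hypothetical):

```python
import requests

# Assumptions: org URL, token, record id, and the rollup field's schema
# name are all placeholders to replace with your own values.
base = "https://yourorg.crm.dynamics.com/api/data/v9.2"
token = "<bearer token>"
parent_id = "00000000-0000-0000-0000-000000000000"

params = {
    # Target is an entity reference passed as an @odata.id JSON value.
    "@p1": '{"@odata.id":"opportunities(' + parent_id + ')"}',
    # FieldName is the rollup field's schema name (hypothetical here).
    "@p2": "'new_parentestgrossprofit'",
}
resp = requests.get(
    f"{base}/CalculateRollupField(Target=@p1,FieldName=@p2)",
    params=params,
    headers={"Authorization": f"Bearer {token}",
             "Accept": "application/json"},
)
resp.raise_for_status()
print(resp.json())  # the record with the freshly recalculated rollup value
```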

Complex Queries in ELK?

I've successfully set up the ELK stack, and it gives me great insight into my data. However, I'm not sure how to fetch the following result.
Let's say I have columns user_id and action. The values in action can be installed, activated, engagement, and click. I want that if a particular user performed the activity installed on 21 May and again on 21 June, then when fetching results for the month of June, ELK should not return users who already performed that activity in a previous month. For example, take the following table:
Date      UserID   Activity
1 May     1        Activated
3 May     2        Activated
6 May     1        Click
8 May     2        Activated
11 June   1        Activated
12 June   1        Activated
13 June   1        Click
User 1 and User 2 activated on 1 May and 3 May respectively. User 2 also activated on 8 May. So when I filter the users for the month of May with activity Activated, it should return a count of 2, i.e.
1 May     1        Activated
3 May     2        Activated
The 8 May row for User 2 is removed because that user already performed the same activity before.
Now, if I write the same query for the month of June, it should return nothing, because the same users performed the same activity earlier as well.
How can I write this query in ELK?
This type of relational query is not possible in Elasticsearch.
You would need to add another field (FirstUserAction) and either populate it when the data is loaded, or schedule a task (in whatever scripting/programming language you're comfortable with) to periodically calculate and update the values for this field.
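As a sketch of that scheduled-task approach, using the official elasticsearch Python client; the index name user_actions and the field names user_id, action, and date are assumptions to adapt to your mapping:

```python
from elasticsearch import Elasticsearch, helpers

# Flag the earliest occurrence of each (user_id, action) pair so later
# queries can simply filter on FirstUserAction. Names are placeholders.
es = Elasticsearch("http://localhost:9200")

query = {"query": {"match_all": {}}, "sort": [{"date": "asc"}]}
seen = set()      # (user_id, action) pairs already encountered
updates = []

# preserve_order=True keeps the chronological sort while scrolling.
for hit in helpers.scan(es, index="user_actions", query=query,
                        preserve_order=True):
    src = hit["_source"]
    key = (src["user_id"], src["action"])
    updates.append({
        "_op_type": "update",
        "_index": "user_actions",
        "_id": hit["_id"],
        "doc": {"FirstUserAction": key not in seen},
    })
    seen.add(key)

helpers.bulk(es, updates)
```

Once the flag is in place, "users who activated in June for the first time" becomes an ordinary filtered query: action = activated AND FirstUserAction = true AND date within June.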
