I am trying to get some data parsed out of a subject line in Office 365 Flows. I have an email that has a consistent format:
Help Desk [Ticket #12345]
I want to get the number '12345' for use in later steps in the flow. So far, I've attempted to use the substring expression in a compose connector:
substring(triggerBody()?['Subject'], 20, 5)
But I get an error about the string being null.
Besides the index being off by one (to retrieve '12345' from Help Desk [Ticket #12345] you need substring(value, 19, 5), since the index is 0-based), the expression looks correct. But you can take a step-by-step approach to see what is wrong.
To start, take a look at the flow run to see exactly what the trigger outputs are:
If you see the Subject field (as I do in my case), create a variable containing just that value, to make sure that you don't have any typos:
If it works correctly, then you should see in the flow run the subject:
If everything is still good at that point, create a new variable with the substring that you want:
And again, check the value.
If you got to this point, then you should be able to retrieve the ticket id.
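If you want to double-check the parsing outside the flow, here is a rough Python sketch of the same logic (Python is only for illustration; the flow itself keeps using expressions):

# Rough sketch of the parsing, just to validate the index arithmetic.
subject = "Help Desk [Ticket #12345]"

# Fixed-position approach: the '1' sits at 0-based index 19, length 5.
ticket_by_index = subject[19:19 + 5]

# Split-based approach: take what follows '#' and drop the trailing ']',
# so the ticket number can be any length and the prefix can vary.
ticket_by_split = subject.split("#")[1].rstrip("]")

print(ticket_by_index, ticket_by_split)  # 12345 12345

The split-based version maps to a flow expression along the lines of first(split(last(split(triggerBody()?['Subject'], '#')), ']')), which doesn't depend on the prefix length.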
Team,
Occasionally my flow fails, and rerunning it manually is enough to make it work again. However, I want to keep this error from occurring again so I can have peace of mind.
The error that appears is this:
Unable to process template language expressions in action 'Periodo' inputs at line '0' and column '0': 'The template language function 'split' expects its first parameter to be of type string. The provided value is of type 'Null'. Please see https://aka.ms/logicexpressions#split for usage details.'.
And it appears in 2 of the 4 variables that I create:
Client and Periodo
The variable Client looks like this:
The same goes for "Periodo".
The variables are built in the same way.
The formula for Client:
trim(first(split(first(skip(split(outputs('Compos'),'client = '),1)),'indicator')))
The formula for Periodo:
trim(first(split(first(skip(split(outputs('Compos'),'period = '),1)),'DATA_REPORT_DELIVERY')))
The same pattern applies to all 4 variables; all 4 of them are strings (numbers).
I also attached the email example I extract the info from:
CO NIV ICE REFRESCOS DE SOYA has finished successfully. CO NIV ICE REFRESCOS DE SOYA
User
binary.struggle#mail.com
Parameters
output = 7
country = 170
period = 202204012
DATA_REPORT_DELIVERY = NO
read_persistance = YES
write_persistance = YES
client = 18277
indicator_group = SALES
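To show what I expect the formulas to produce from that text, here is the same logic as a quick Python sketch (only for illustration; the flow itself uses the expressions above):

# What the Client formula is meant to do with the email text above.
text = "write_persistance = YES\nclient = 18277\nindicator_group = SALES"  # stand-in for outputs('Compos')

after_marker = text.split("client = ")[1]             # split on 'client = ', skip the first part
client = after_marker.split("indicator")[0].strip()   # split on 'indicator', take the first part, trim
print(client)  # 18277

# If text were None (Null), the very first split would fail,
# which is the same error the flow reports.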
Could you give me some help? Some of my attempts succeed, but then it fails for no apparent reason:
Thank you.
I'm not sure if you're interested but I'd do it a slightly different way. It's a little more verbose but it will work and it makes your expressions a lot simpler.
I've just taken two of your desired outputs and provided a solution for those, one being client and the other being country. You can apply the other two as need be given it's the same pattern.
If I take client for example, this is the concept.
Initialize Data
This is your string that you provided in your question.
Initialize Split Lines
This will split up your string for each new line. The expression for this step is ...
split(variables('Data'), '\n')
However, you can't just enter that expression into the editor; you need to add it and then edit it in code view, changing it from \\n to \n.
Filter For 'client'
This will filter the array created from the split line step and find the item that contains the word client.
`contains(item(), 'client')`
On the other parallel branches, you'd change out the word to whatever you're searching for, e.g. country.
This should give us a single item array with a string.
Initialize 'client'
Finally, we want to extract the value on the right hand side of the equals sign. The expression for this is ...
trim(split(body('Filter_For_''client''')[0], '=')[1])
Again, just change out the body name for the other action in each case.
I need to use body('Filter_For_''client''')[0] to specify the first item in the array because the filter step returns an array. We're going to assume the length is always 1.
Result
You can see from all of that, you have the value as need be. Like I said, it's a little more verbose but (I think) easier to follow and troubleshoot if something goes wrong.
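If it helps to see the whole pattern end to end, here is a rough Python equivalent of those steps (purely illustrative; in the flow each one stays its own action, and the sample text is the one from the question):

# The four steps from above, condensed into plain Python.
data = ("output = 7\ncountry = 170\nperiod = 202204012\n"
        "DATA_REPORT_DELIVERY = NO\nread_persistance = YES\n"
        "write_persistance = YES\nclient = 18277\nindicator_group = SALES")

lines = data.split("\n")                               # Initialize Split Lines
client_lines = [l for l in lines if "client" in l]     # Filter For 'client'
client = client_lines[0].split("=")[1].strip()         # take item [0], split on '=', trim
print(client)  # 18277

For country you would only change the word being filtered for, exactly as described above.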
I have this Analysis that I wish to invoke from URL (which I can).
This analysis in particular is mounted on a dashboard and 6 of the 17 columns can be filtered from a prompt mounted on the same dashboard.
I only need the analysis, while still being able to filter the results. I read about parameters that can be given to the URL, but I can't seem to make the filters work. I am using the &P0=n parameter and the subsequent &P1=xx parameter.
Without the &P parameters I am able to see the table; when I add them I get this error: UH6MBRBC:E6MUPJPH.
Thanks for your time, have a nice day.
Got it to work by myself after all (always best).
Basically instead of &p0 and such I used &coln and &valn (where n is the column number):
&coln - is the name of the field
&valn - is the value for which you filter
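For example (the base URL and column names below are made up, just to show the shape of it), appending two filters would look like this:

# Hypothetical sketch of appending coln/valn filters to the analysis URL.
# The base URL and column names are placeholders, not from the question.
from urllib.parse import urlencode

base = "http://myserver/analytics/saw.dll?Go&Path=%2Fshared%2FMy%20Analysis"  # your existing analysis URL
filters = {
    "col1": '"Customers"."Region"',  # name of the field for the first filter
    "val1": "EMEA",                  # value to filter on
    "col2": '"Time"."Year"',
    "val2": "2020",
}
print(base + "&" + urlencode(filters))

urlencode takes care of escaping spaces and quotes in the column names.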
More information and better explanation can be found here.
I'm new to Report Builder and having issues with some expressions that I'm trying to implement in a report. I got the standard ones to work; however, as soon as I try anything with distinct counts, I get error messages. Over the last couple of weeks, I've tried many combinations, read the expression help, googled, and looked at other questions on various sites. To reduce my frustration, I would even jump to other expressions and walk away, hoping I would come back with different insight.
It's probably something simple, or something I don't know about writing expressions.
I'm hoping that someone can help with these expressions; they are the versions I get the fewest errors with (usually just 'expression expected') and they show what I'm trying to accomplish.
=IIF((Fields!RECORDFLAG.Value)='D',COUNTDISTINCT(Fields!TICKETNUM.Value),0)
=IIF((Fields!TRANSTYPE.Value)='1' and (Fields!RECORDFLAG.VALUE)='A' or
'B',SUM(Fields!DOLLARS.Value),0)
=IIF((Fields!TRANSTYPE.Value)='1' and
(Fields!RECORDFLAG.VALUE)='P',SUM(Fields!DOLLARS.Value),0)
=Sum([DOLLARS] case when [RECORDFLAG]='P' then -1*[DOLLARS])
Thank You.
=IIF((Fields!RECORDFLAG.Value)="D",COUNTDISTINCT(Fields!TICKETNUM.Value))
The error message gives you the answer here - no false part of the iif() has been specified. Use =IIF((Fields!RECORDFLAG.Value)="D",COUNTDISTINCT(Fields!TICKETNUM.Value), 0)
=IIF((Fields!TRANSTYPE.Value)="1" and (Fields!RECORDFLAG.VALUE)="A" or "B",SUM(Fields!DOLLARS.Value),0)
This is not how an OR works in SSRS. Use:
=IIF((Fields!TRANSTYPE.Value)="1" and (Fields!RECORDFLAG.VALUE="A" or Fields!RECORDFLAG.Value = "B"),SUM(Fields!DOLLARS.Value),0)
The 0s are returned due to your report design. countdistinct() is an aggregate function - it's meant to be used on a set of data. However, your iif() is only testing on a per-row basis - you're basically saying "if the current row matches, count all the distinct values", which doesn't make sense. There are a couple of ways forward:
You can count the number of times a certain value occurs in a given condition using a sum(). This is not the same as the countdistinct(), but if you use =sum(iif(Fields!RECORDFLAG.Value = "D", 1, 0)) then you will get the number of times RECORDFLAG is D in that set. Note: this requires the data to be aggregated (so in SSRS, grouped in a tablix).
You can use custom code to count distinct values in a set. See https://itsalocke.com/aggregate-on-a-lookup-in-ssrs/. You can apply this even if you have only one dataset - just reference the same one twice.
You can change the way your report works. You can group on Fields!RECORDFLAG.Value and filter the group to where Fields!RECORDFLAG.Value = "D". Then in your textbox, use =countdistinct(Fields!TICKETNUM.Value) to get the distinct values for TICKETNUM when RECORDFLAG is D.
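To make the difference concrete, here is the same logic sketched in Python over a few made-up rows (the data is invented purely for illustration):

# Invented sample rows, just to contrast the two aggregations.
rows = [
    {"RECORDFLAG": "D", "TICKETNUM": 101},
    {"RECORDFLAG": "D", "TICKETNUM": 101},  # same ticket repeated
    {"RECORDFLAG": "D", "TICKETNUM": 102},
    {"RECORDFLAG": "A", "TICKETNUM": 103},
]

# Equivalent of =SUM(IIF(Fields!RECORDFLAG.Value = "D", 1, 0)):
# counts every "D" row, duplicates included.
count_d_rows = sum(1 for r in rows if r["RECORDFLAG"] == "D")  # 3

# Equivalent of grouping/filtering on RECORDFLAG = "D" and then
# =COUNTDISTINCT(Fields!TICKETNUM.Value): duplicate tickets collapse.
distinct_d_tickets = len({r["TICKETNUM"] for r in rows if r["RECORDFLAG"] == "D"})  # 2

print(count_d_rows, distinct_d_tickets)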
Been looking all over the place for a solution to this issue. I have a Yahoo Pipe (http://pipes.yahoo.com/pipes/pipe.info?_id=e5420863cfa494ee40e4c9be43f0e812) that I've created to pull back image content from the Bing Search API. The URL builder includes a $skip attribute that takes an integer and uses it to select the starting (index) point for the result set that the query returns.
My initial plan had been to use the math engine in the Wolfram Alpha API to generate a random number (randomInteger[1000]) that I could use to seed the $skip value each time the pipe is run. I have an earlier version of the pipe where I was able to get the query / result steps working using either "XPath Fetch" or "Fetch Data". However, regardless of how I fetch the result, the response comes back as an attribute / value pair in a list item. Even when I use "Emit items as string" in XPath Fetch, I still get a list with a single item, when what I really want is the integer that I can plug into my $skip attribute.
I've tried everything in Pipes I can think of, and spent a lot of time online looking for an answer. Is there any way to extract text (in this case, a number) from a single list item and then use the output as input to "wire" a text parameter in another Pipes block? Any suggestions / ideas welcome. In the meantime, I'm generating a sorta-random number by manipulating a timecode hash, but it just feels tacky :-)
Thanks!
All the sources are for repeated items. You can't have a source that just makes a single number.
I'm not really clear what you're trying to do. You want to put a random number into part of the URL string that gets an RSS feed?
I'm trying to use the NDFD (National Digital Forecast Database) to get current temperature and relative humidity given a Lat and Long using their REST based service.
The issue at hand:
I can't match the 'current observation data' WITH the 'results' I get back from the REST-service.
The setup:
Location:
* Apple (1-infinite loop, Cupertino, California)
* Lat = 37.33; Lon = -122.03
If I issue the following REST-call:
http://www.weather.gov/forecasts/xml/sample_products/browser_interface/ndfdXMLclient.php?lat=37.33&lon=-122.03&product=time-series&begin=2009-06-21T17:12:35&end=2009-06-21T17:12:35&appt=appt&rh=rh&temp_r=temp_r&temp=temp
Note 1: I'm passing the begin and end times in UTC. They're the same because I'm looking for just a single point in time: the latest observed temp and relative humidity.
AND then compare it to the closest reporting station (San Jose International Airport, CA - KSJC - 37.37N 121.93W) at http://www.weather.gov/xml/current_obs/KSJC.xml
** I can never get them to MATCH. **
Note 2: The nearest reporting station is returned by the REST call as well, so I know I'm comparing Location apples to Location apples.
I've had two ideas:
1: I'm doing something wrong with how I'm passing in the begin/end times into the REST call...
2: You can't get 'current observed data' the way I'm trying to...
Lastly:
I've found a solution using outoftime's NOAA ruby lib [it parses a YAML file of observation stations to find the nearest one given Lat/Lng, then goes directly to that station via its identifier, i.e. http://www.weather.gov/xml/current_obs/KSJC.xml]... but it just feels like I may be missing something obvious here, and I would like to use the REST-based interface ;)
Any help or pointers would be appreciated!
Thanks!
It looks like the service you are calling isn't for current data. Judging by the URL and the XML results it seems to be for forecasts. You can also put in future dates to get future forecast data. It expects the dates to be in the -0700 time zone according to the response. I'm not sure which service you should be calling to get the data you want though.
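If it helps to see the times spelled out, here is a rough sketch of building the same time-series request with begin/end expressed in that -0700 offset instead of UTC (the parameters are the ones already used in the question; whether the service also accepts an explicit offset suffix is something to verify against its docs):

# Convert the UTC instant to -0700 local time before formatting begin/end.
from datetime import datetime, timedelta, timezone
from urllib.parse import urlencode

utc_instant = datetime(2009, 6, 21, 17, 12, 35, tzinfo=timezone.utc)
local = utc_instant.astimezone(timezone(timedelta(hours=-7)))
stamp = local.strftime("%Y-%m-%dT%H:%M:%S")

params = {
    "lat": 37.33, "lon": -122.03,
    "product": "time-series",
    "begin": stamp, "end": stamp,
    "temp": "temp", "rh": "rh",
}
print("http://www.weather.gov/forecasts/xml/sample_products/"
      "browser_interface/ndfdXMLclient.php?" + urlencode(params))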
I know that this is an old question, but this is what I'm using to get current weather conditions: http://forecast.weather.gov/MapClick.php?lat=43.09110&lon=-79.0162&unit=0&lg=english&FcstType=dwml
Found this api/link yesterday. It's still developmental (operation-mode="developmental"):
http://forecast.weather.gov/MapClick.php?lat=37.33&lon=-122.03&FcstType=dwml
If you want the "current" observation, you use the XML here:
http://w1.weather.gov/xml/current_obs/seek.php?state=or&Find=Find
e.g.,:
http://w1.weather.gov/xml/current_obs/KAST.xml
If you click on the link you'll get a rendered page. However, if you pull from it using normal REST methods or just wget, it delivers an XML file.
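For what it's worth, a minimal sketch of pulling that current observation programmatically (assuming the feed exposes temp_f and relative_humidity elements in the station XML):

# Fetch the current-observation XML for KSJC and read out temperature
# and relative humidity. Element names assume the standard current_obs schema.
import urllib.request
import xml.etree.ElementTree as ET

url = "http://w1.weather.gov/xml/current_obs/KSJC.xml"
with urllib.request.urlopen(url) as resp:
    root = ET.fromstring(resp.read())

temp_f = root.findtext("temp_f")
rh = root.findtext("relative_humidity")
print(f"KSJC: {temp_f} F, {rh}% RH")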