Can an <Ad> tag appear more than once inside the <VAST> tag? - vast

a. Can an <Ad> tag appear more than once inside the <VAST> tag?
b. Can a <Creatives> tag contain more than one <Creative> element?
Reference: http://ad3.liverail.com/?LR_PUBLISHER_ID=1331&LR_CAMPAIGN_ID=229&LR_SCHEMA=vast2

Within the <VAST> element there can be one or more root <Ad> elements (at least one is required).
While a single <Ad> element represents the most common VAST response, multiple ads may be included either as stand-alone ads, as a pod of ads (aka an Ad Pod, introduced in VAST 3.0), or as a mix of both. Ads in a pod are distinguished by the sequence attribute on the <Ad> element, denoting which ad plays first, second, and so on. If the player supports ad pods, sequenced ads are played in numerical order and all ads in the pod should be played to the best of the player's ability. All sequence values in a VAST response must be unique. Non-sequenced ads are stand-alone ads and are considered part of an "ad buffet" from which the player may select one or more ads to play in any order.
The <Creatives> container (required; only one may appear) can hold one or more <Creative> elements, each containing a Linear, NonLinearAds, or CompanionAds element.
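A trimmed sketch of such a response (most required children are omitted for brevity; IDs and the contents of the second and third ads are placeholders):

    <VAST version="3.0">
      <!-- Two ads forming a pod: the player plays them in sequence order -->
      <Ad id="ad-1" sequence="1">
        <InLine>
          <Creatives>
            <Creative>
              <Linear>...</Linear>
            </Creative>
          </Creatives>
        </InLine>
      </Ad>
      <Ad id="ad-2" sequence="2">
        <InLine>...</InLine>
      </Ad>
      <!-- A stand-alone ("ad buffet") ad carries no sequence attribute -->
      <Ad id="ad-3">
        <InLine>...</InLine>
      </Ad>
    </VAST>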
See the VAST 3.0 spec for full details.

Related

Autocomplete v Nearby Search?

I am building an app that will require extensive use of the autocomplete function and have currently implemented it using Nearby Search. I recently learned, however, that this is the priciest option given its high cost plus the associated Contact and Atmosphere data costs.
I am therefore looking for a good option to get relevant autocomplete search results based on the user's location without the need for Nearby Search. I care about the UX and thus want to avoid people scrolling too much to find a place near them. The only fields I need are name and potentially address.
I tried Nearby Search because, if I understand correctly, it is the only way to get predictions based on where you are physically located, but I have now learned that it is too expensive.
Autocomplete and Nearby Search are entirely different operations and APIs; you can combine both to build a user-friendly experience, but they each play a very different role.
Place Autocomplete provides predictions of places based on the user's input, i.e. the characters they enter into an input field. These predictions can be biased, or even restricted, to a small area around the user's location to increase the chances that they represent places near the user. Depending on whether places far away from the user are acceptable or useful, you can use one or the other:
locationbias if predictions far away are acceptable and useful, e.g. a user searching for a place that is not necessarily where they are, or in situations where the user location is either not available or not very precise, e.g.
user wants to find a place to go to
user location is obtained from geolocating their IP address
user location is obtained from geolocating their cell towers
locationrestriction if only very nearby predictions are acceptable and user location is known to be very precise (e.g. GPS or other high-precision sources). This would make sense in mobile applications when the user location is provided (by the phone's OS) with a small radius (e.g. under 100 m.) and the user really just wants to find places that describe where they are now. Even then, beware that some places can be bigger than you'd expect, e.g. airports include runways.
Note on billing: Place Autocomplete can be free under specific conditions: when your application implements session tokens and there is a Place Details request at the end of the session, in which case Place Details is billed and Autocomplete is not. However, even if your application implements session tokens, each time a user doesn't pick a prediction, Autocomplete is billed as a session without Details. And in the simpler case, if your application does not implement session tokens, all Autocomplete requests are billed per request (and Place Details is billed separately, on top of that).
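A minimal sketch of that flow against the Places web service, assuming the legacy Place Autocomplete and Place Details endpoints; the API key, coordinates, and field list are placeholders, so check the current documentation before relying on exact parameter names:

    # Sketch: autocomplete biased to the user's location, tied to a session token,
    # followed by one Place Details call that closes (and bills) the session.
    import uuid
    import requests

    API_KEY = "YOUR_API_KEY"           # placeholder
    USER_LOCATION = "52.5200,13.4050"  # placeholder lat,lng

    session_token = str(uuid.uuid4())

    predictions = requests.get(
        "https://maps.googleapis.com/maps/api/place/autocomplete/json",
        params={
            "input": "coffee",
            # Bias (not restrict) predictions to roughly 2 km around the user.
            "locationbias": f"circle:2000@{USER_LOCATION}",
            "sessiontoken": session_token,
            "key": API_KEY,
        },
    ).json()["predictions"]

    if predictions:
        details = requests.get(
            "https://maps.googleapis.com/maps/api/place/details/json",
            params={
                "place_id": predictions[0]["place_id"],
                "fields": "name,formatted_address",  # request only the fields you need
                "sessiontoken": session_token,
                "key": API_KEY,
            },
        ).json()["result"]
        print(details["name"], details["formatted_address"])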
Nearby Search can provide nearby places (and can rank them by distance with rankby=distance) based only on the user's location, without any user input. This can be used to show an initial list of places (e.g. the nearest 5 places) even before the user starts typing. There are a few caveats:
results depend heavily on the user location being very precise
results will only include establishment places, i.e. businesses, parks, transit stations, etc.
If you want addresses instead of businesses, you could use reverse geocoding instead of Nearby Search, with the caveat that this can return results that are near(ish) and don't necessarily represent the exact place where the user is. This is more useful when you want to find addresses around a location; they may include the actual address of that location, but that is not guaranteed.
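For completeness, a sketch of the Nearby Search call described above (legacy web service; the key and coordinates are placeholders, and with rankby=distance you must supply a keyword, name, or type instead of a radius):

    import requests

    API_KEY = "YOUR_API_KEY"           # placeholder
    USER_LOCATION = "52.5200,13.4050"  # placeholder lat,lng

    # Nearest establishments to the user, closest first, before any typing happens.
    nearby = requests.get(
        "https://maps.googleapis.com/maps/api/place/nearbysearch/json",
        params={
            "location": USER_LOCATION,
            "rankby": "distance",   # incompatible with "radius"
            "type": "restaurant",   # required companion to rankby=distance
            "key": API_KEY,
        },
    ).json()["results"][:5]

    for place in nearby:
        print(place["name"], place.get("vicinity"))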

How is a Segment different from Marketing List? What is their purpose?

How is a Segment (Dynamics for Marketing) different from a Marketing List (Dynamics for Sales)?
Is a Marketing List for a Campaign the same as a Segment for a Customer Journey?
Are their purposes any different: a Marketing List for a Campaign vs. a Segment for a Customer Journey?
The important differences:
You can always add members to or remove members from a marketing list, while you can't modify a static segment once it is live
Marketing lists "live" inside CDS (SQL), i.e. they are subject to SDK limitations and performance implications like any other entity
Segments "live" in the cloud, so they're not subject to SDK limitations (i.e. much better performance and no issues when running a big campaign)
Marketing lists can be used for contact subscription/unsubscription, while segments can't
Segments can be used to process both interactions (i.e. things like clicking an email link, submitting a form, or registering for an event) and profiles, while marketing lists can process only profiles
Overall, I understand your confusion; it's just a matter of giving static segments capabilities similar to those marketing lists have (i.e. subscription/unsubscription).
You can post it as an idea here: https://experience.dynamics.com/ideas/list/?forum=dfa5b83d-9e4c-e811-a956-000d3a1bef07
Disclaimer: I work on Microsoft Dynamics Marketing as a developer, and this is "how I feel about this", not an official statement of any kind.

How to build a price comparison program that scrapes the prices of a product across several websites

I am trying to build a price comparison program for personal use (and for practice) that allows me to compare prices of the same item across different websites. I have just started using the Scrapy library and played around by scraping websites. These are my steps whenever I scrape a new website:
1) Find the website's search URL, understand its pattern, and store it. For instance, Target's search URL is composed of a fixed part, url="https://www.target.com/s?searchTerm=", plus the URL-encoded search terms.
2) Once I know the website's search URL, I send a SplashRequest using the Splash library. I do this because many pages are heavily loaded with JS.
3) Look up the HTML structure of the results page and determine the correct XPath expression to parse the prices. However, many websites render the results page in different formats depending on the search terms or product category, which changes the page's HTML. Therefore, I have to examine all the possible results page formats and come up with an XPath that can account for all of them.
I find this process to be very inefficient, slow, and inaccurate. For instance, at step 3, even though I have the correct XPath, I am still unable to scrape all the prices on the page (sometimes I also get prices of items that are not present in the rendered HTML page), which I don't understand. Also, I don't know whether the websites detect that my requests come from a bot and send me faulty or incorrect HTML. Moreover, this process cannot be automated; for example, I have to repeat steps 1 and 2 for every new website. Therefore, I was wondering if there is a more efficient process, library, or approach that I could use to help me finish this program. I have also heard something about using a website's API, although I don't quite understand how it works. This is my first time doing scraping and I don't know much about web technologies, so any help/advice is highly appreciated!
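For reference, steps 1 and 2 roughly correspond to a Scrapy spider like the following sketch, assuming scrapy-splash is installed and configured against a local Splash instance; the site list and the XPath in parse are placeholders:

    import urllib.parse

    import scrapy
    from scrapy_splash import SplashRequest

    class PriceSpider(scrapy.Spider):
        name = "prices"

        # Hypothetical per-site search URL patterns (step 1): a fixed prefix
        # plus the URL-encoded search term.
        search_urls = {
            "target": "https://www.target.com/s?searchTerm={query}",
        }

        def start_requests(self):
            query = urllib.parse.quote_plus("coffee maker")
            for site, pattern in self.search_urls.items():
                # Step 2: render the page through Splash so JS-built prices
                # appear in the HTML the spider sees.
                yield SplashRequest(
                    pattern.format(query=query),
                    callback=self.parse,
                    args={"wait": 2},
                    meta={"site": site},
                )

        def parse(self, response):
            # Step 3: placeholder XPath; each site (and each layout) needs its own.
            for price in response.xpath('//span[@data-test="current-price"]//text()').getall():
                yield {"site": response.meta["site"], "price": price.strip()}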
The most common problem with crawling is that, in general, everything to be scraped is determined syntactically, whereas conceptualizing the entities you are working with helps a lot; I am speaking from my own experience.
In a research project about scraping that I was involved in, we reached the conclusion that we needed to use a semantic tree. This tree contains nodes which represent data that is important for your purpose, and a parent-child relation means that the parent encapsulates the child in the HTML, XML, or other hierarchical structure.
You will therefore need some kind of concept of how you want to represent the semantic tree and how it will be mapped to site structures. If your search method allows you to use a logical OR, then you will be able to define the same semantic tree for multiple online sources.
On the other hand, if the owners of some sites are willing to allow you to scrape their data, then you might ask them to define the semantic tree.
If a given website's structure is changed, then with a semantic tree you will more often than not be able to adapt to the change by just changing the selectors of a few elements, as long as the semantic tree's node structure remains the same (see the sketch below). If some owners are partners in allowing scraping, then you will be able to just download their semantic trees.
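A minimal sketch of the idea in Scrapy terms (all node names and selectors are hypothetical placeholders): the tree fixes the concepts, and each site only supplies its own selectors for those concepts.

    # The semantic tree: "product" encapsulates "name" and "price".
    SEMANTIC_TREE = {"product": ["name", "price"]}

    # Per-site mapping from semantic nodes to XPath selectors (placeholders).
    SITE_SELECTORS = {
        "target.com": {
            "product": '//li[@data-test="list-entry"]',
            "name": './/a[@data-test="product-title"]/text()',
            "price": './/span[@data-test="current-price"]/text()',
        },
        "other-shop.example": {
            "product": '//div[contains(@class, "search-result")]',
            "name": ".//h2/text()",
            "price": './/span[@class="price"]/text()',
        },
    }

    def extract_products(response, site):
        """Find each 'product' node, then read its child nodes via the site's selectors."""
        selectors = SITE_SELECTORS[site]
        for node in response.xpath(selectors["product"]):
            yield {
                child: node.xpath(selectors[child]).get()
                for child in SEMANTIC_TREE["product"]
            }

If a site changes its layout, only its entry in SITE_SELECTORS needs updating; the extraction code and the tree itself stay the same.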
If a website provides an API, then you can use that; read about REST APIs to do so. However, these APIs will probably not be uniform across sites.

Difference between VAST, VPAID and VMAP

For some reason I need to know the difference between VAST, VPAID and VMAP.
I know they are all video ad delivery tags following IAB standards, but I need to know the clear difference between these three.
Any help is appreciated.
VAST, VMAP and VPAID solve different challenges when it comes to showing advertisements in a video player.
Short answer
VAST describes ads and how a video player should handle them. (more or less)
VPAID (deprecated, see update below) describes what "public" communication (methods, properties and events) an executable ad unit should at least implement/expose, so the video player can communicate with the ad unit in a uniform way and control it.
VMAP describes when an ad should be played.
In more detail
VAST (Video Ad Serving Template) is used to describe ads and how a video player should handle these. Note that the concrete implementation is up to the video player itself. There are three types of ads:
A linear ad is an advertisement video rendered inside the video player.
A non-linear ad is an advertisement overlaying the video player. It is mostly a banner image, but it could also be HTML or an iFrame.
A companion ad is an advertisement rendered outside the video player. It is mostly rendered alongside a linear or a non-linear ad, as they can complement each other (hence the name).
More examples of cool stuff VAST describes:
When an ad is allowed to be skipped (for linear ads)
What URIs should be pinged for tracking
Sequence of ads (ad pods) that should be played together
Different resolutions / codecs for same advertisement
VMAP (Video Multiple Ad Playlist) is an optional addition allowing you to specify when an ad must be played. Via VMAP you can indicate whether an ad is a pre-roll (ad before the content), a mid-roll (ad somewhere in the content) or a post-roll (ad after the content). VMAP can also refer to multiple VAST files to be played at different times.
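A trimmed sketch of a VMAP document with a pre-roll and a mid-roll (the VAST URLs are placeholders):

    <vmap:VMAP xmlns:vmap="http://www.iab.net/videosuite/vmap" version="1.0">
      <vmap:AdBreak timeOffset="start" breakType="linear" breakId="preroll">
        <vmap:AdSource id="preroll-ad" allowMultipleAds="false" followRedirects="true">
          <vmap:AdTagURI templateType="vast3"><![CDATA[https://example.com/preroll-vast.xml]]></vmap:AdTagURI>
        </vmap:AdSource>
      </vmap:AdBreak>
      <vmap:AdBreak timeOffset="00:10:00.000" breakType="linear" breakId="midroll-1">
        <vmap:AdSource id="midroll-ad" allowMultipleAds="true" followRedirects="true">
          <vmap:AdTagURI templateType="vast3"><![CDATA[https://example.com/midroll-vast.xml]]></vmap:AdTagURI>
        </vmap:AdSource>
      </vmap:AdBreak>
    </vmap:VMAP>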
VPAID (Video Player Ad Interface Definition) is a specification describing what an executable ad unit (= interactive ad) should at least implement and expose for public communication/control. This allows the player to delegate instructions to the ad and yet keep control over it (e.g. starting, pausing, finishing it...). That way, a player can give instructions (methods) and request information (properties). The ad itself can also dispatch events indicating that a certain action has happened (e.g. volume has changed, ad has been skipped, ad has been clicked...).
It is interesting to note that VPAID has two versions: version 1 is only Flash, while version 2 is only JavaScript.
How these three connect with each other
VMAP refers to a VAST, but never to another VMAP.
VAST can contain its ad data internally (Inline) or refer to another VAST (Wrapper), but never to a VMAP. VAST describes ads. Some ads can be executable (interactive).
If an ad is executable then it must implement VPAID so the player can cooperate with it.
Update June 2019
Quite a few things have changed since this answer was submitted. In VAST 4.1, the IAB deprecated the VPAID specification in favor of an upcoming specification. VAST 4.2 (currently in the public comment phase) formalized the successors of VPAID:
for ad verification, the Open Measurement SDK should be used
for interactivity, the SIMID (Secure Interactive Media Interface Definition) specification should be implemented.
IAB Digital Video Suite
VAST (Digital Video Ad Serving Template) is an XML document with a <VAST> root, where the main part is the <MediaFile> tag containing a URL to a video file. (IAB)
VPAID (Digital Video Player-Ad Interface Definition) is an extension of VAST in which the <MediaFile> tag carries the type="application/javascript" and apiFramework="VPAID" attributes, allowing a JS ad unit to be loaded and controlled; see the snippet after this list. (SpotXChange, Innovid)
VMAP (Digital Video Multiple Ad Playlist) is an XML document with a <vmap:VMAP> root, used to describe a schedule for VAST files (pre/mid/post-roll).
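For illustration, a trimmed MediaFile entry as described for VPAID above (the URL is a placeholder):

    <MediaFiles>
      <MediaFile delivery="progressive" type="application/javascript"
                 apiFramework="VPAID" width="640" height="360">
        <![CDATA[https://example.com/vpaid-ad-unit.js]]>
      </MediaFile>
    </MediaFiles>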
Google IMA Examples

Breaking a project's first User Story into tasks [closed]

I'm starting a new project from scratch and have written User Stories to describe how a given user will interact with the system. But I'm having trouble understanding how to break the first user story into tasks without it becoming an epic.
For example, if I were building a car and the first user story said something like "As a driver, I would like to be able to change the direction of motion so that I don't hit things.", that would imply a user interface (steering wheel), but also motion (wheels) and everything necessary to link those together (axle, frame, linkage, etc...). In the end, that first user story always seems to represent about 40% of the project because it implies so much about the underlying architecture.
How do you break user stories down for a new project such that the first one doesn't become an epic representing your entire underlying architecture?
You might want to think of your story as a vertical slice of the system. A story may (and often will) touch components in all of the architectural layers of the system. You might therefore want to think of your tasks as the work needed to be done on each of the components that your story touches.
For example, let's say you have a story like: In order to easily be able to follow my friends' tweets, as a registered user, I want to automatically follow all of my gmail contacts that have twitter accounts.
In order to accomplish this, you will have to pass through the UI layer and the service layer, persist some data in the data layer, and make API calls to twitter and gmail.
Your tasks might be:
Add an option to the menu
Add a new gmail authentication screen
Add a twitter authentication screen
Add a contact selection screen
Add a controller that calls into your service layer
Write a new service that does the work
Save contacts to the database
Modify your existing gmail API calling service to get contacts
Add a twitter API calling service to follow selected contacts
There: that's 9 possible tasks right there. Now, as a rule, you want your tasks to take roughly half a day to 2 days, with a bias towards one day (a best practice for sizing). Depending on the difficulty, you might break these tasks down further, or combine some if they are too easy (perhaps the two API-calling services are so simple that you'd just have a single "modify external API services" task).
At any rate, this is a raw sketch of how to break the stories down.
EDIT:
In response to more questions that I got on the subject of breaking stories into tasks, I wrote a blog post about it and would like to share it here. I've elaborated on the steps needed to break down the story. The link is here.
When we started projects under a Scrum management style, the first set of tasks was always broad, or as you describe it: epic. That's inevitable; the framework of any project is usually the most important, largest, and most time-consuming portion, but it supports the rest of the project. In order to pare down the overwhelming scale of how much there is to do, see if you can list the MOST essential parts, then work on defining those tasks as the starting points. That way you have a few tasks as starting points for a broad beginning. Hope that makes sense!
A user story describes the what, while a task is more about the how.
There is no perfect formula; just add any task that describes how the user story is going to be implemented, documented, or tested.
Keep in mind that a task should be estimated in hours, so try to scale and detail the tasks accordingly.
If you feel that you have too many tasks for a story (even if each task is 1-8 hours long), then maybe you should consider rewriting the user story in the first place, because it's probably too complex.
Good luck
The story that you implement at the beginning can be refined over time. You don't need to think that every story has to be the final version that the user is going to use.
For example, in a recent project we had to develop an application which involved indexing various websites, matching them against filters created by users, and finally alerting the user of matches (think of it as Google Alerts on steroids).
If you look at it from one perspective, there is only one story - "As a user I want to get alerts from matching pages". But look at it from another perspective of "what are the risks we want to mitigate". The first risk was that users wouldn't get relevant or better hits compared to google alerts. The second risk was in learning the technology to build this.
So our first user story was simply "As a user I want relevant hits", then we built just the hit matching algorithm on a hardcoded set of pages and hardcoded filters for some early users and got their feedback.
There might actually be a bit of back and forth here, with multiple smaller stories to capture learning, like "As a user I want more priority to be given to matches in the URL", etc. These stories come from the feedback as we iterate over what the early users consider "relevant hits".
Next, we broadened it to "As a user I want hits from specific websites" and we built the indexing architecture to crawl user specified sites and do hit matching on that.
The third story was "As a user I want to define my own filters", and we built this part of the system.
In this way we were able to build up the architecture piece by piece. Through most of the initial part, only early users could use the system, and many pieces of data were hardcoded etc.
After a point, early users could use the system completely. Then we added stories for allowing new users to register and opened it up to the public.
To cut a long story short, the story you implement first could implement only a small part of the final story, hardcoding and scaffolding everything else. And then you can iterate on it over time till you get the story that you might actually release to the public.
I've come to a crossroads with this issue in the past. User stories are supposed to be isolated so you can do them without any other stories, in whatever order, etc. But I found making that happen just made everything more complicated. To me this fell under the "Individuals and interactions over processes and tools" part of the agile manifesto - or at least my interpretation of it.
The ultimate goal is ship. And to ship you have to build, and to build you have to stop futzing with scrum and just get stuff done and make sure you track it.
So what we did was break a cardinal rule of stories and we made some tech stories like "create a preliminary schema". We also declared that some stories were dependent on others, and noted that on the back of the story card.
In the end I felt stories like this were few and far between, and the difficulty of the alternative justified the exception.

Resources