I'm using HTML Agility Pack in my app to scrape websites. When a website changes, I need to adapt the app to the new HTML markup. Certifying app updates takes some time, so I'm looking for a way to speed up delivery and push my own updates.
My question: is there a way for a Windows Phone app to read and execute code from a file in isolated storage?
No, there isn't. You should do the parsing on a web server and build an API for your app.
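If you move the parsing to your own server, the phone app only downloads the already-extracted data, so HTML changes are handled server-side without recertifying the app. Here is a minimal sketch of the client side, assuming a hypothetical endpoint on example.com that returns the extracted items (WebClient is the WP7-era API for this):

    using System;
    using System.Net;

    public class ScrapeApiClient
    {
        // Hypothetical endpoint on your own server that returns pre-parsed data.
        private static readonly Uri ApiUri =
            new Uri("http://example.com/api/items?format=json");

        public void LoadItems(Action<string> onLoaded)
        {
            var client = new WebClient();
            client.DownloadStringCompleted += (s, e) =>
            {
                if (e.Error == null)
                    onLoaded(e.Result);   // e.Result holds the response body from your API
            };
            client.DownloadStringAsync(ApiUri);
        }
    }

When the markup of the scraped site changes, you only update the server-side parser; the app keeps calling the same API.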
What am I trying to accomplish?
I am trying to validate all AMP pages on a site (like the Google AMP Validator does), automatically, and store the results. Is there an NPM bulk validator or something similar out there? I am trying to avoid manually going through my sitemaps and testing each of thousands of URLs.
There is an NPM library and command line tool (https://npmjs.com/package/amphtml-validator), but you will still need to somehow generate the list of documents.
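One way to get that list is to read your sitemap and feed each URL to the command-line tool. Below is a rough sketch in C#, assuming the sitemap lives at the standard location, the amphtml-validator CLI is installed globally via npm and is on your PATH, and that pointing the CLI at remote URLs works in your environment (otherwise download the pages first and validate the local files):

    using System;
    using System.Diagnostics;
    using System.Linq;
    using System.Xml.Linq;

    class BulkAmpValidate
    {
        static void Main()
        {
            // Hypothetical sitemap location; adjust for your site.
            var sitemap = XDocument.Load("https://www.example.com/sitemap.xml");
            XNamespace ns = "http://www.sitemaps.org/schemas/sitemap/0.9";
            var urls = sitemap.Descendants(ns + "loc").Select(x => x.Value.Trim());

            foreach (var url in urls)
            {
                // Shell out to the amphtml-validator CLI (npm install -g amphtml-validator).
                var psi = new ProcessStartInfo("amphtml-validator", url)
                {
                    RedirectStandardOutput = true,
                    UseShellExecute = false
                };
                using (var proc = Process.Start(psi))
                {
                    string result = proc.StandardOutput.ReadToEnd();
                    proc.WaitForExit();
                    Console.WriteLine(result);   // store or parse the PASS/FAIL output as needed
                }
            }
        }
    }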
Here is a solution for validating thousands of URLs in a short time: the website https://www.ampvalidator.com/
It is secure, accurate, fast, and easy to use, and it will send you a nicely formatted email report.
I have to develop a minimal, simple Windows Phone 7/7.1 app for my college website that displays new notices and any new content in the students' section. The website has a separate page for notices and a separate page for study material downloads. The site has no RSS feeds. Please help me figure out how I can read the data into my app and display it.
The website is www.niecdelhi.ac.in
Thank You
When a site doesn't publicly expose data in a machine-readable format, you can scrape the HTML and extract the data yourself. This is a tedious and error-prone process, not to mention that any change to the site's design will immediately break your code.
To scrape a site, use a library that can access the HTML. For example, you can use HTML Agility Pack to extract data from HTML and then use it for whatever purpose you want.
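As a rough illustration, this is how an HTML Agility Pack extraction might look. The URL is the site from the question, but the XPath is a placeholder; you would need to inspect the actual markup of the notices page to pick the right selectors:

    using System;
    using HtmlAgilityPack;

    class NoticeScraper
    {
        static void Main()
        {
            var web = new HtmlWeb();
            // Load the page; the exact URL of the notices page may differ.
            HtmlDocument doc = web.Load("http://www.niecdelhi.ac.in/");

            // Hypothetical XPath; inspect the real page to find the nodes that hold notices.
            var nodes = doc.DocumentNode.SelectNodes("//div[@class='notice']//a");
            if (nodes == null) return;   // SelectNodes returns null when nothing matches

            foreach (HtmlNode node in nodes)
            {
                string title = node.InnerText.Trim();
                string link = node.GetAttributeValue("href", string.Empty);
                Console.WriteLine("{0} -> {1}", title, link);
            }
        }
    }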
You can also create a web service that periodically performs the data extraction, and then publish an RSS feed from your own service. That way your mobile application doesn't depend on the parsing code, which you can tweak and update at any time.
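For the feed half of that service, the built-in syndication classes (System.ServiceModel.Syndication, in System.ServiceModel.dll) can turn the scraped items into RSS. A minimal sketch, assuming the title/link pairs come from the extraction step above:

    using System;
    using System.Collections.Generic;
    using System.ServiceModel.Syndication;
    using System.Xml;

    class FeedBuilder
    {
        // Turn scraped (title, url) pairs into an RSS 2.0 document.
        static void WriteFeed(IEnumerable<Tuple<string, string>> notices, string path)
        {
            var items = new List<SyndicationItem>();
            foreach (var notice in notices)
            {
                items.Add(new SyndicationItem(
                    notice.Item1, string.Empty, new Uri(notice.Item2)));
            }

            var feed = new SyndicationFeed("College Notices", "Scraped notices",
                new Uri("http://www.niecdelhi.ac.in/"), items);

            using (var writer = XmlWriter.Create(path))
            {
                new Rss20FeedFormatter(feed).WriteTo(writer);
            }
        }
    }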
I am making a website in CodeIgniter and not using any client-side framework like AngularJS. However, I need some features of AngularJS, such as downloading the JS and CSS once on the client rather than downloading them for each page. As my website's content depends heavily on the server, should I use AngularJS? I read that it makes the application slower.
Your question is not about Angular at all!
I recommend reading about module loaders and build tools like RequireJS, Grunt, and Yeoman.
What you want to do is Ajaxify your website; as Stever said, it's not about Angular at all.
You can use RequireJS to load a script only when a page needs it.
For the best performance, use Grunt to run tasks like minifying your scripts and compressing your stylesheets.
I'm developing a small data-crunching / visualization app in Sinatra, and am split between two options.
The functionality is that you:
Upload a file to the app.
See a nice visualization of its contents.
Maybe start over with a new file.
So my choices are:
Letting both views (upload and results) be managed by the same template, thus creating a single-page app.
Splitting uploads and the visualization between two pages. You upload a file to '/', then are redirected to that file's URL which displays the results.
Which one is better? The advantage of the first is that I can manage it all within the same page, by passing some local vars between the two views.
On the other hand, the second seems like the more RESTful option - because each uploaded file gets its own URL and can be treated as a resource (more fine-grained control).
So, if you want to provide a RESTful API along with the web application, it is a good idea to have two different routes.
If you are planning to have just a web UI, it depends on how much control you want to give to the end-user.
Nothing is wrong with either approach; it depends on how much convenience you want to provide.
I have a website from which I download 2-3 MB of raw data that then feeds into an ETL process to load it into my data mart. Unfortunately, the data provider is the US Dept. of Agriculture (USDA), and they do not allow downloading via FTP. They require that I use a web form to select the elements I want, click through 2-3 screens, and eventually click to download the file. I'd like to automate this download process. I am not a web developer, but it seems there should be a tool that can tell me exactly what put/get/magic goes from the final request to the server. If I had a tool that said, "pass these parameters to this URL and wait for a response," I could then hack something together in Perl to automate the process.
I realize that if I deconstructed all 5 of their pages, read through the JavaScript includes, and tapped my heels together 3 times, I could get this info from what I already have access to. But I want a faster, more direct path that does not require me to manually parse all their JS.
Restatement of the final question: Is there a tool or method that will clearly show what the final request sent from a web form was and how it was structured?
A tamperer's best friends (these are Firefox extensions; you could also use something like Wireshark):
HTTPFox
Tamper Data
Best of luck
Use Fiddler2 as a proxy to see what is being passed back and forth. I've done this successfully in other, similar circumstances.
Home page is here: http://www.fiddler2.com/fiddler2/
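Once Fiddler shows you the final request, replaying it is usually just a matter of sending the same parameters yourself. You mentioned Perl, but here is a rough sketch of the idea in C#; the URL and form fields are placeholders for whatever the Fiddler capture reveals:

    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;

    class UsdaDownload
    {
        static async Task Main()
        {
            using (var client = new HttpClient())
            {
                // Placeholder URL and form fields; copy the real ones from the Fiddler capture.
                var form = new FormUrlEncodedContent(new Dictionary<string, string>
                {
                    { "commodity", "CORN" },
                    { "year", "2023" },
                    { "format", "csv" }
                });

                HttpResponseMessage response =
                    await client.PostAsync("https://example.com/usda-report", form);
                response.EnsureSuccessStatusCode();

                byte[] data = await response.Content.ReadAsByteArrayAsync();
                System.IO.File.WriteAllBytes("usda_raw_data.csv", data);
            }
        }
    }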
As with the other responses, except that my tool of choice is Charles.
What about using a web testing toolkit like Watir with Ruby?
It's easy to fill in the forms; just use the output.
Use WatiN and combine it with the WatiN Test Recorder (Google for it).
It can "simulate" a user sitting in front of the browser punching in values, which you can supply from your own C# code...