Using LabVIEW to get the title from an HTTPS site?

I would like to retrieve an HTTPS website's title using LabVIEW. There don't appear to be any examples using LabVIEWHTTPClient.lvlib, and I am struggling to work out how to access data from an HTTPS site using this library. No login is required to access the website.

The LabVIEW HTTP client is built on top of cURL and has HTTPS support built in.
Just pass in a URL starting with https:// and it will work just fine. I guess there could be an example, but it wouldn't be any different from a non-HTTPS example.
You can take the body returned by the GET and run it through a regex node to pull out the title. For Google's home page, that produces the output:
Google
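The regex is the only piece that really needs spelling out. The following is not LabVIEW code, just a rough sketch in PHP of the same two steps (an HTTPS GET followed by a regular-expression match) so the pattern the regex node would use is visible; the URL is assumed to be Google's home page to match the output above.

<?php
// Sketch only: fetch the page body over HTTPS, then pull out the <title>
// element with the same kind of pattern a LabVIEW regex node would use.
$body = file_get_contents('https://www.google.com');
if ($body !== false && preg_match('/<title>(.*?)<\/title>/is', $body, $m)) {
    echo trim($m[1]);   // prints: Google
}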

Related

What is the proper way to benchmark a simple PHP-MySQL website with automated test cases?

I am a novice in the area of benchmarking, so I would like to request your guidance.
Problem: I have a test website developed in PHP and MySQL, hosted on localhost.
I need to perform the following set of activities:
Login as a registered user
Download a PDF file
I wish to know how to load test the above activities in order. I need to check, if at a particular instant 'n' users are logged in and each downloads a PDF file, what the worst response time and related stats would be.
Steps I have already taken (please correct me if I did something wrong here):
Used the Apache benchmarking tool (ab) to load test the login authentication script, passing the username and password as parameters
(i.e., ab -n 1000 -c 100 -A username:password url_of_script.php)
I tested against both Apache and nginx web servers (and got comparatively better results with nginx).
But I want to test the case where, after login, the user performs some other activities; how can I use ab (or some other tool) to assess that load?
Waiting for your responses. Thanks.
Create a PHP script using curl.
Use your browser to log in and download the PDF.
Before you do that, right-click and select Inspect or Inspect Element, and go to the Network tab. Then start the login and PDF download process.
In the Network tab, look only at the request headers of each HTML page request; filter out all the other requests (e.g. JS, CSS, images, media, etc.). You can use these headers as a guideline when setting up curl to do each request.
In Firefox you can edit the headers and resend; go into edit mode and copy the request headers.
In your curl requests, use exactly the same requests the browser used.
Curl reports all the stats on the request and response.
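A minimal sketch of what such a script could look like, assuming PHP with the curl extension. All URLs, form field names, credentials and header values below are made-up placeholders; substitute the ones you captured from the browser's Network tab.

<?php
// Sketch only: replay the browser's login request, keep the session cookie,
// then download the PDF with the same session.
$cookieJar = tempnam(sys_get_temp_dir(), 'cookies');

// 1) Log in (POST the same form fields the browser sent).
$ch = curl_init('http://localhost/login.php');                       // placeholder URL
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query([
        'username' => 'testuser',                                    // placeholder credentials
        'password' => 'testpass',
    ]),
    CURLOPT_HTTPHEADER     => [
        'User-Agent: Mozilla/5.0',                                   // copy from the captured request headers
        'Referer: http://localhost/login.php',
    ],
    CURLOPT_COOKIEJAR      => $cookieJar,                            // store the session cookie
    CURLOPT_COOKIEFILE     => $cookieJar,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_FOLLOWLOCATION => true,
]);
curl_exec($ch);
$loginStats = curl_getinfo($ch);                                     // timing stats for the login request
curl_close($ch);

// 2) Download the PDF, reusing the cookie jar.
$ch = curl_init('http://localhost/download.php?file=report.pdf');    // placeholder URL
curl_setopt_array($ch, [
    CURLOPT_COOKIEFILE     => $cookieJar,
    CURLOPT_RETURNTRANSFER => true,
]);
$pdf = curl_exec($ch);
$pdfStats = curl_getinfo($ch);                                       // total_time, size_download, etc.
curl_close($ch);

printf("login: %.3fs  download: %.3fs (%d bytes)\n",
       $loginStats['total_time'], $pdfStats['total_time'], strlen($pdf));

Running several copies of such a script in parallel (or in a loop) approximates 'n' simultaneous users, and the curl_getinfo() figures give you the worst response times per step.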

How do I make my hosting detect _escaped_fragment_ and fetch the corresponding HTML?

I have an AJAX site and I'm using hashbangs (#!) in my URLs, with the intention of providing the correct HTML versions when Google's bots replace the #! with ?_escaped_fragment_.
How do I go about routing/proxying/redirecting the url with _escaped_fragment_ to the corresponding HTML pages? I can't find documentation on this part of the process specifically, and my first thought was that I should be using a 301 or 302 redirect, but I was told that wasn't the case, albeit not given any more info.
You can't use .htaccess or redirects at all. Everything after the # in the URL isn't even sent to the server; the URL fragment is entirely client side. You'll need to use some kind of JavaScript solution to look at the fragment, make the appropriate AJAX call to the server, and load the content you get back.

How to open a test page at a URL with a certain domain without deploying it to a server?

This seems like a simple question, but I just can't seem to wrap my head around it...
I have a simple HTML page. All that page does is look to see whether a browser cookie is present, and if it is, it writes a message that says "Found the cookie".
In order for this HTML page to work, it needs to be opened in a browser at a URL that uses the specific domain "mytestsite.org". So I want to be able to open that page in a browser using a URL like "www.mytestsite.org/mytestpage.html". Easy enough...
When I use this test page locally, I just deploy it to a local JBoss server, then add a mapping in my "hosts" file (I'm on Windows XP) that maps my local IP to "local.mytestsite.org". This tricks the browser into thinking that it is actually getting the page from "mytestsite.org", when it is actually being served by my local JBoss server.
I want to give this HTML file to another person who is going to use it on their PC. However, they don't have any sort of HTTP server installed, so the little hosts-mapping trick won't work. I don't want to make them go through the trouble of installing a server just to get this test page to work. Additionally, I can't physically put this file on "mytestsite.org".
Any thoughts on how I could open this page through a "mytestsite.org" url through a browser, without actually having it deployed to a server?
Is your test machine with JBoss installed accessible from the Internet? If so, you may ask the other person to add a mapping to their hosts file that maps local.mytestsite.org to the public IP of your test machine.
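For example, assuming the JBoss machine's public IP were 203.0.113.10 (a made-up placeholder address), the other person would add a line like this to their hosts file (C:\Windows\System32\drivers\etc\hosts on Windows, /etc/hosts on Linux/macOS):

203.0.113.10    local.mytestsite.org

Their browser would then resolve local.mytestsite.org to your machine, and they could open the page as http://local.mytestsite.org/mytestpage.html, so the cookie check sees the mytestsite.org domain just as in your local setup.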

Get FeedBurner feed over HTTPS

We are grabbing our FeedBurner feed using the jQuery jGFeed plugin.
This works great until the moment our users are on an https:// page.
When we try to load the feed on that page, the user gets a message that there is mixed content, protected and unprotected, on the page.
A solution would be to load the feed over https, but Google doesn't allow that; the certificate isn't working:
$.jGFeed('httpS://feeds.feedburner.com/xxx')
Does anyone know a workaround for this? The way it functions now, we simply cannot serve the feed in our pages when on https.
At this time Feedburner does not offer feeds over SSL (https scheme). The message that you're getting regarding mixed content is by design; in fact, any and all content that is not being loaded from a secured connection will trigger that message, so making sure that all content is loaded over SSL is really your only alternative to avoid that popup.
As I mentioned, Feedburner doesn't offer feeds over SSL, so realistically you'll need to look into porting your feed to another service that DOES offer feeds over SSL. Keep in mind what I said above, however, with respect to your feed's content as well. If you have any embedded content that is not delivered via SSL then that content will also trigger the popup that you're trying to avoid.
This comes up from time to time with other services that don't have an SSL cert (Twitter's API is a bit of a mess that way too.) Brian's comment is correct about the nature of the message, so you've got a few options:
If this is on your server, and the core data is on your server too, then you've got end to end SSL capabilities; just point jGFeed to the local RSS feed that FeedBurner's already importing.
Code up a proxy on your server to marshal the call to Feedburner and return the response over SSL (a minimal sketch follows this list).
Find another feed service that supports SSL, and either pass it the original feed or the Feedburner one.
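As a sketch of the second option, assuming a PHP host with the curl extension; the file name feedproxy.php and the feed URL are placeholders, and there is no caching or error handling:

<?php
// feedproxy.php -- sketch of a server-side proxy: fetch the FeedBurner feed
// over plain HTTP on the server, then return it to the browser over this
// site's own HTTPS connection.
$ch = curl_init('http://feeds.feedburner.com/xxx');   // your FeedBurner feed
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_TIMEOUT        => 10,
]);
$feed = curl_exec($ch);
curl_close($ch);

header('Content-Type: application/rss+xml; charset=utf-8');
echo $feed;

The client-side loader can then be pointed at /feedproxy.php on your own domain instead of the FeedBurner URL, so everything the browser loads comes from a secure origin.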
I have started using the paid WordPress theme Schema for several of my blogs. In general, it is a nice theme, fast and SEO friendly. However, since my blogs are all on HTTPS, I noticed that if I had a Google Feedburner widget in the sidebar, Chrome would show a security error for any secure page with an insecure form call on it.
The fix is really simple:
edit the file widget-subscribe.php located at /wp-content/themes/schema/functions/ and replace every occurrence of "http://feedburner.google.com" with "https://feedburner.google.com".
Save the file and clear the cache, and your browser will show a green padlock.
I fixed this on my own blog, www.androidloud.com.

How to use Google Custom Search on https to avoid non secure content prompt?

Is there a way to get the script for search results from an HTTPS site instead of http://www.google.com/afsonline/show_afs_search.js?
I am using the custom Google business search on an HTTPS site.
When a search is submitted, the web browser shows the warning:
"This page contains both secure and nonsecure items"
I tried to modify the source to be https://www.google.com/afsonline/show_afs_search.js, but that doesn't work; the JavaScript returned from that link still connects to http links instead of https links.
Does anybody know how to fix this?
A hack that works for me is to provide a modified version of the script
https://www.google.com/afsonline/show_afs_search.js
on another server, say:
https://www.myserver.com/show_afs_search.js
Just copy the original script's source code and replace 'http' with 'https' in the script (one occurrence). Of course, that might stop working at any time if Google changes something.
Even if you modify the JavaScript link to HTTPS, show_afs_search.js is still hardcoded to get the search results via HTTP, not HTTPS. To avoid this error, make your own copy of show_afs_search.js that grabs results via HTTPS.
Steps:
download show_afs_search.js
open the file and replace 'http' with 'https'
put this file on your https server and use it
Just change the one instance of http to https.
