static file url unexpected behavior - go

Can someone explain to me why the first line of code delivers the desired result and the second piece of code returns 404? In the browser I searched localhost/ and localhost/css respectively.
1. http.Handle("/", http.FileServer(http.Dir("css"))) // works: returns the .css file at localhost/
2. http.Handle("/css", http.FileServer(http.Dir("css"))) // fails: returns 404 at localhost/css
I am not trying to serve both URLs at the same time. I comment out one or the other while I am trying to figure this out.

I solved the problem. The code below returned my css at the desired URL.
http.Handle("/css/", http.StripPrefix("/css/", http.FileServer(http.Dir("css"))))
What's odd is I could have sworn I already tried this method. Chrome must have been serving a stale cached response.


Getting an error trying to pull out text using Google Sheets and importxml()

I have a column of links in Google Sheets. I want to tell whether a page is producing an error message, using importxml.
As an example, this works fine
=importxml("https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_T", "//td/b")
i.e. it looks for td, and pulls out b (which are postcodes in Canada)
But this code that looks for the error message does not work:
=importxml("https://www.awwwards.com/error1/", "//div/h1" )
I want it to pull out the "THE PAGE YOU WERE LOOKING FOR DOESN'T EXIST."
...on this page https://www.awwwards.com/error1/
I'm getting a Resource at URL not found error. What could I be doing wrong? Thanks
After quick trial and error with the default formulae:
=IMPORTXML("https://www.awwwards.com/error1/", "//*")
=IMPORTHTML("https://www.awwwards.com/error1/", "table", 1)
=IMPORTHTML("https://www.awwwards.com/error1/", "list", 1)
=IMPORTDATA("https://www.awwwards.com/error1/")
it seems that the website cannot be scraped in Google Sheets by any of the regular formulae.
You want to retrieve the value of THE PAGE YOU WERE LOOKING FOR DOESN'T EXIST. from the URL of https://www.awwwards.com/error1/.
If my understanding is correct, how about this answer? Please think of this as just one of several possible answers.
Issue and workaround:
I think that the page at your URL returns Error 404 (Not Found). So in this case, the status code of 404 is returned, and because of this, built-in functions like IMPORTXML cannot retrieve the HTML data.
So as one workaround, how about using a custom function with UrlFetchApp? When UrlFetchApp is used, the HTML data can be retrieved even when the status code is 404.
Sample script for custom function:
Please copy and paste the following script into the Spreadsheet's script editor, then put =SAMPLE("https://www.awwwards.com/error1") into a cell on the Spreadsheet. This runs the script.
function SAMPLE(url) {
  // muteHttpExceptions lets fetch() return the response body even
  // when the server answers with a failure status like 404.
  return UrlFetchApp
    .fetch(url, {muteHttpExceptions: true})
    .getContentText()
    .match(/<h1>([\w\s\S]+)<\/h1>/)[1] // capture the first <h1>...</h1> contents
    .toUpperCase();
}
Note:
This custom function is tailored to the URL https://www.awwwards.com/error1. When you use it for other URLs, the expected results might not be retrieved. Please be careful about this.
References:
Custom Functions in Google Sheets
fetch(url, params)
muteHttpExceptions: If true the fetch doesn't throw an exception if the response code indicates failure, and instead returns the HTTPResponse. The default is false.
match()
toUpperCase()
If this was not the direction you want, I apologize.

CodeIgniter routes are not working

I've defined the following route in the config file:
$route['apartments/(:any)'] = 'apartments/view/$1';
If I give http://localhost/apartment_advertisement/apartments/shobha_complex like this in url it works perfectly fine.
If I give http://localhost/apartment_advertisement/apartments/shobha_complex/abcd/abcd in the URL, it goes to the same page as above. I need an error page for this URL instead. How can I control these URLs? Any help would be much appreciated.
Do you mean displaying a 404 Not Found error when the request URL has an unwanted "tail"? You can replace (:any) with a stricter pattern that restricts the accepted string. It's simple:
$route['apartments/(\w+)'] = 'apartments/view/$1';

Ckeditor 4 - How to callback selected file/image in custom browser?

With Ckeditor3, when implementing your own file/image browser, to send back the file URL back to CKeditor, you would call :
window.opener.CKEDITOR.tools.callFunction(2,filename);
But it seems with Ckeditor4 this does not work anymore, even though the docs still say it works...
Any help?
In this line
window.opener.CKEDITOR.tools.callFunction(CKEditorFuncNum,filename);
CKEditorFuncNum should be the value sent to the file browser via the URL parameter of the same name, i.e. &CKEditorFuncNum=4. In this case, 4 will be the first argument to callFunction().
Ok it seems like the problem is the "2" hard coded... as mentioned here
I solved it by using the example 2 function from the docs mentioned in my question

MVC Routing not working as expected / how to debug

I'm trying to debug an error with routing which isn't picking up my two integers
The two questions I have are:
Why is the requested link not being matched correctly?
How can I debug such issues? I've tried Phil Haack's RouteDebugger and Glimpse; however, these only work when the requested output doesn't error. Ideally I would like to see which route is being matched for this erroring request.
My Controller looks like:
My Global.asax Route
The Error I'm receiving
The first int is -1, but it still errors when it's a positive number.
Because I'm totally rep-hungry, I'm going to put my correct comment here.. :D
UrlParameter.Optional will fix it, however, this is probably due to a conflicting route somewhere, as I did not require that on my tests locally.
Also, for anyone else wondering, for negative numbers in your constraint, you'll need to have a regex like this: -{0,1}\d+

Can't get updated text file from another server. What is the cause of this?

I am trying to get a frequently updated text file from another server like http://site2.com/state.txt with cURL or PHP's file_get_contents() function.
With both approaches, after a few requests I start getting the previous file instead of the updated one.
If I change the file path like http://www.site2.com/state.txt it gets the updated file for a while and again starts to get the old content.
What can I do to get the updated file continuously?
Thanks for help
I don't know PHP, nor do I know cURL, but I believe I know a caching issue when I see one.
When something along the request path sees the same GET request multiple times, it can serve up a cached version instead of actually performing the request. Since your requests are made server-side rather than from a browser, the cache is more likely a proxy or the remote server itself, but the fix is the same.
Two ways to fix:
Clear your browser cache every time you want to get the updated file. (That's a joke)
When you make your get request you should append some sort of time stamp to the end of your url.
I would do this in javascript.
var url = "http://www.site2.com/state.txt?_=" + Date.now();
So basically I am appending a parameter named '_' to the end of the request that has the value of the current timestamp. This will cause the browser to perform a new get request and not give you the cached version.
