Is there a better way than PHP include()?

My sites include large segments of data via PHP's include(). This has worked well, but I'm building a CMS and have run into an issue...
When I load the page to be edited, the included file doesn't print. I'm fairly sure this is a relative path issue, but the question is: what's the best way to deal with it?
I'm using TinyMCE, and it wraps all of the PHP in <!-- --> comments anyway, so I may be in big trouble here (I need to keep the PHP header()).
I'm thinking I will have to place the included data directly into the page so it doesn't get lost when loaded into the editor. I can probably move the header() into a higher-level document and have it change with variables, but before I do...
Is there a better way than include() to attach large segments of data?
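For what it's worth, include paths can be made independent of the calling script's working directory by anchoring them to the including file itself. A minimal sketch, with hypothetical file names:

<?php
// The current working directory depends on which script was requested
// (the public page vs. the CMS editor), so build the path from this
// file's own location instead:
include dirname(__FILE__) . '/content/large-segment.php';  // or __DIR__ on PHP 5.3+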

Related

Use Grunt to make DOM changes

So I want to use Lazysizes (lazy loading responsive images). Included in my Grunt stack is Responsive Images Extender, which outputs responsive image code (srcset) from simply including an "img" tag with a "src" attribute. Lazysizes, however, requires a "data-srcset" attribute in place of the "srcset" attribute. I added a script to my page that changes the "srcset" attributes to "data-srcset" attributes, but this isn't ideal because the images are already downloaded at runtime. It would be ideal if I could change the tags with Grunt, as there is no advantage to changing them live.
This seems like a very common thing, but I cannot find a good way to do it. String replace doesn't seem like an ideal solution, since it can cause problems if I ever use "srcset=" in my code.
I gave the grunt-responsive-images-extender a major makeover and added the possibility to change the attribute name of srcset to anything you want (data-srcset in your case) via the srcsetAttributeName option.
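A minimal Gruntfile sketch of how that option might be wired up, assuming the task registers as responsive_images_extender (the target name and file paths below are assumptions, not taken from the question):

// Gruntfile.js
module.exports = function (grunt) {
  grunt.loadNpmTasks('grunt-responsive-images-extender');

  grunt.initConfig({
    responsive_images_extender: {
      lazysizes: {
        options: {
          // write the generated candidate list to data-srcset instead of srcset
          srcsetAttributeName: 'data-srcset'
        },
        files: [{
          expand: true,
          cwd: 'src/',
          src: ['**/*.html'],
          dest: 'dist/'
        }]
      }
    }
  });
};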
There is a Grunt task called dom_munger. With dom_munger you can change HTML attributes and do a lot of interesting stuff; however, I cannot find a way to rename an attribute. Perhaps you'll have better luck checking it out.

Laravel blade debug view name on error

When there is an error in a view, Laravel 4 shows a nice trace, but with the cached filename:
open: /var/www/webpage/app/storage/views/1154ef6ad153694fd0dbc90f28999013
How can I save the view's path/name (in a comment or something) while the view is rendered to the cache?
Or better yet, how can I show it on the debug error page (it's called Whoops or something)?
Thanks ;)
I don't know how to decode the view names, but one method I use is to add
{{ dd('will you reach here') }}
and move this line from view to view to see how far the PHP rendering gets.
I know it's not the right way, nor the professional one, but it may help in some cases.
This is not exactly a problem; it is the compiled version of your view.
Laravel's Blade system compiles your views and subviews into cached PHP files and, if you haven't changed anything in them, it will always use the compiled versions to speed up your system.
Sometimes it's hard to know which of our views is related to the error. Using Sublime Text, what I do is hit CTRL-P (Windows), paste the hash of the compiled view (1154ef6ad153694fd0dbc90f28999013), and it brings the file up right away.
Of course, don't make any changes to it. This is just a way to find the view you have problems in, so you can then locate the real file and fix it. If you already know which file is the problematic one, skip this and go directly to your file.
One way to tackle this problem is to add HTML comments (not Blade ones, as those are not rendered in the compiled view) in the sections that get echoed.
@section('content')
<!-- FILE: app/views/main/index.blade.php -->
<Your Content Goes Here>
@stop
This HTML comment will be rendered in the compiled source of the view. Of course, you will have to inspect the compiled view first to identify which view is the problematic one, but in my experience this method works almost every time.
I created a helper that checks whether you are working locally or in development mode; it then outputs an HTML comment.
{{ printViewComment('mockup/reports#content') }}
<!-- Template: mockup/reports#content -->
I chose to name the comments like this: path.file_name#yield_name. I only wish this were an automated feature.
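A minimal sketch of what such a helper could look like in Laravel 4 (this is an assumed implementation, not the answerer's actual code; only the function name and comment format come from the answer):

<?php
// e.g. app/helpers.php, loaded via Composer's "files" autoload (assumed location)
if (!function_exists('printViewComment')) {
    function printViewComment($label)
    {
        // Only emit the marker outside production, e.g. locally or with debug enabled
        if (App::environment() == 'local' || Config::get('app.debug')) {
            return '<!-- Template: ' . $label . ' -->';
        }
        return '';
    }
}

In Laravel 4's Blade, {{ }} echoes unescaped, so the returned comment ends up in the page source as shown above.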
I found my answer after looking into the source:
when on the Whoops! page, just look for render in the sidebar; the name of the view file will be there...

Magento Javascript merge - why does it break in some cases?

Okay, I've been doing JS merges for some time now and still can't figure out the logic behind making a successful merge. It comes down to repositioning libraries upwards and downwards in the merge list. Sometimes jQuery must be at the top, sometimes it doesn't need to be. Sometimes fancybox needs to be added via addJs, sometimes via addItem.
So, in your experience, what causes JS libraries to break when you use Magento's JS merging? Are there any rules for a successful merge?
UPDATE: Just now in my local.xml I moved from
<action method="addItem"><type>skin_js</type><name>js/magiczoomplus.js</name></action>
to
<action method="addJs"><script>jquery/magiczoomplus.js</script></action>
and that solved the magiczoomplus error I was getting on the page. How so?
I'm trying to understand this problem so I can tackle it better in the future.
You need to understand the core principle here: what conflicts between JavaScript libraries and what doesn't.
In the case of the jQuery/Prototype conflict in Magento:
Always include jQuery before any other script on your page; putting it before Prototype is a must.
Add a jQuery.noConflict(); call directly at the end of your jQuery library file.
Make sure that none of your jQuery-based scripts use $ as the method name (that is the essence of the conflict); see the sketch below.
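A minimal sketch of that setup (the file layout and the fancybox call are illustrative assumptions):

// At the very end of the jQuery library file, which is loaded before prototype.js:
var $j = jQuery.noConflict();   // releases $ back to Prototype

// In your own jQuery-based scripts, never rely on the global $:
(function ($) {
    $(document).ready(function () {
        $('.fancybox').fancybox();  // hypothetical plugin call
    });
})(jQuery);                         // or use $j explicitly: $j('.fancybox')...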
If there are any problems after JavaScript merging is enabled, I always try replacing the minified JavaScript files with non-minified versions of those files, and that has always solved the problems. (I don't know why the minified files cause trouble.)
There aren't a lot of options for actually fixing the merge, but:
1.) Use grouping in your local.xml files to ensure a better merge.
http://fishpig.co.uk/blog/why-you-shouldnt-merge-javascript-in-magento.html
2.) Abandon Magento's built-in merging altogether and use Fooman_Speedster instead.
http://www.magentocommerce.com/magento-connect/fooman-speedster.html
The second option has worked perfectly for me so far. I'm using jQuery libraries and more (Handlebars, etc.) and I'm having no problems whatsoever.
What worked for me was moving the jQuery include after prototype and adding jQuery.noConflict(); after the jQuery include.
What works for me, after all this time, is:
Always putting jQuery at the top, followed by noConflict
Toggling between the compressed and uncompressed versions of the included JS (if you enable gzip compression you shouldn't worry about the final size; it will be compressed one way or another)
Toggling between addItem and addJs inclusion methods
Randomly repositioning erroring libraries

How to retrieve plain text from a formatted website to use in UIWebView

Not sure if what I want to do is possible, but what I am hoping to do is somehow gather certain pieces of text from a website, remove the header, footer, background, all formatting, and place it into my application in a scrollview or something similar...
I'll give you an example... Imagine I was making Wikipedia's iPhone app: I want to download the information from the article on dogs, without the header, sidebars, etc., just the text. How would I go about doing this?
I understand that for this I have not provided any example code or what I've tried or started, but that's just because in this case I'm lost! That doesn't mean I want full chunks of code either. Any help will do. If this doesn't work, I will just have to make a 'mobile optimised' version of the webpages I want to include in my app.
Thanks
(Edit: the term I was trying to use was 'strip the web page of its HTML coding')
You may be going about this the wrong way, or perhaps even asking the wrong question.
Does the target website have an API or datafeed of some kind?
Can you get the information you need in JSON or XML format directly from the site?
I think you've misunderstood the technology. HTML is merely the framework on which the formatting and data are hung.
Parsing the HTML page seems like an awfully big headache; I doubt you'll ever get it to work reliably, because almost all sites these days are partially or wholly generated on the server side, and the page you receive is only the result.
Some sites keep the information in memory and others fetch it dynamically through Ajax, for example, which means that simply trying to get the data by parsing the HTML may get you zero data.
Another issue you should be aware of, though, is that simply copying the data from generated websites may open you up to copyright issues.
You have to parse the HTML code, search for the part that you want, and "throw away" the part that you don't need. This is more or less brute force, and the markup of the website must not change, otherwise you're screwed; with this method you have to write the parser by hand. But maybe there is an Atom or RSS feed you can parse instead. That is much easier, and you don't depend on the website layout, because the RSS/Atom feed contains just the data. For parsing RSS you could try NSXMLParser.
Then you build a valid HTML page out of the data and present it in the UIWebView.
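A minimal Objective-C sketch of that approach (the feed URL and the choice of the <title> element are assumptions; a real feed would need handling for whichever elements you actually care about):

// RSSTextLoader: parse an RSS/Atom feed and show the collected text in a UIWebView.
#import <UIKit/UIKit.h>

@interface RSSTextLoader : NSObject <NSXMLParserDelegate>
@property (nonatomic, strong) NSMutableString *collectedText;
@property (nonatomic, assign) BOOL insideTitle;
@property (nonatomic, strong) UIWebView *webView;   // assumed to be set by the caller
- (void)loadFeed:(NSURL *)feedURL;
@end

@implementation RSSTextLoader

- (void)loadFeed:(NSURL *)feedURL {
    self.collectedText = [NSMutableString string];
    NSXMLParser *parser = [[NSXMLParser alloc] initWithContentsOfURL:feedURL];
    parser.delegate = self;
    [parser parse];
}

// Track when we are inside a <title> element.
- (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName
  namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName
    attributes:(NSDictionary *)attributeDict {
    if ([elementName isEqualToString:@"title"]) self.insideTitle = YES;
}

- (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName
  namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName {
    if ([elementName isEqualToString:@"title"]) self.insideTitle = NO;
}

// Collect the character data of the elements we care about.
- (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string {
    if (self.insideTitle) [self.collectedText appendFormat:@"<p>%@</p>", string];
}

// Wrap the collected text in minimal HTML and hand it to the web view.
- (void)parserDidEndDocument:(NSXMLParser *)parser {
    NSString *html = [NSString stringWithFormat:@"<html><body>%@</body></html>",
                      self.collectedText];
    [self.webView loadHTMLString:html baseURL:nil];
}

@end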

Why does my website need so much time to render?

When cached, my starting page only needs to load one element (the "root document") - but then it needs some time until it's rendered completely:
(Firebug Net panel screenshot: http://www.walkner.biz/_temp/firebug_net.png)
The elements that follow are loaded asynchronously via JavaScript.
Two questions:
Why does it take so "long" from loading the root document until the DOMContentLoaded event?
Does it make sense to load some not-so-important things asynchronously? Is it important to have the DOMContentLoaded event fire as early as possible? Unfortunately there's not much documentation about that event, but I don't think it's the moment when the page is displayed, is it?
I'm not sure YSlow is going to help him, as that will download all the elements of a page and run performance tests on them, whereas swalkner's problem is how long it takes to render the HTML page itself when all the other elements (images, CSS, etc.) are cached.
At least that's what I think he's saying.
In the original question you said, "The elements that follow are loaded asynchronously via JavaScript," but then listed nothing. What is loaded?
I would suggest checking for JavaScript errors in the first instance. Then try removing your asynchronous loading calls one by one until you hit the bottleneck. In fact, remove them all: how long does the downloaded HTML take to render? Take that time and work from there.
Is your HTML document very big? Does it use lots of inline styles that could be in the CSS file?
Perhaps if you posted a link to the site then people would have a look at it.
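To put a number on "how long does the downloaded HTML take to render", you can log the DOMContentLoaded timing directly. A minimal sketch, assuming the browser exposes the Navigation Timing API (older browsers would need a manual timestamp at the top of the page instead):

// Log how long it took from the start of navigation until DOMContentLoaded fired.
document.addEventListener('DOMContentLoaded', function () {
    if (window.performance && performance.timing) {
        var t = performance.timing;
        console.log('DOMContentLoaded after',
                    t.domContentLoadedEventStart - t.navigationStart, 'ms');
    }
});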
