Is there a code beautifier for LESS, such as http://www.lonniebest.com/formatcss/ for CSS? I need to sort the properties in my LESS code alphabetically.
I use CSSComb (http://csscomb.com/). It is an npm module, but there are also editor plugins for it; I use it with Sublime Text in particular.
It works with LESS too, although there may be some edge cases that are not (yet) properly handled. It's good enough for me.
You can order the properties however you want. Just read the docs ;)
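For example, a minimal .csscomb.json in the project root (the sort-order option name comes from the CSSComb docs; the property list here is only an illustration, you would list the properties in whatever order you want them to appear):

{
    "sort-order": [
        "background",
        "border",
        "color",
        "display",
        "margin",
        "padding",
        "width"
    ]
}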
You can also use cssbrush. It uses csscomb under the hood, but includes a fix for this bug and can also remember which files were previously beautified, so it only beautifies changed files on each run.
Full disclosure, I wrote it.
Is there a way to have comments in a _CoqProject file?
I'd like to compile only part of a library, without completely removing all the other files from the _CoqProject file.
The # character is treated as a comment-to-the-end-of-line symbol.
I don't think this is documented as of now (see issue #7647), but the source code and some experiments show that it works.
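For example (the file and logical names here are made up), you can simply comment out the parts of the library you don't want to build:

-R theories MyLib
theories/Core.v
theories/Util.v
# Skip the experimental part for now:
# theories/Experimental.v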
I am using AsciiDoc for an article. At the end of my document I want to have a list of figures. How do I create a list of figures? I did not find anything useful in the documentation.
Nope, there isn't one at the time of this answer. I checked the docs (which you indicated you did as well) and I also grepped the codebase. There is good news, though! You should be able to do this with an extension.
Extensions can be written in any JVM language if you're using asciidoctorj, or in Ruby if you're using the core asciidoctor (I'm not sure about JavaScript for asciidoctorjs). You'll probably need to create two extensions: a TreeProcessor extension that walks the whole AST looking for images and pulls them out into a storage structure, and then an inline or block macro to actually place the list within the page.
I strongly recommend examining the API for the nodes and functions you'll want to make use of. There are some other examples of processors that may also be helpful to examine.
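As a rough, untested sketch in Ruby (the class name and the generated list format are my own; the API calls used are find_by, parse_content and the extension registry), a single tree processor that collects the block images and appends a "List of Figures" at the end of the document might look something like this:

require 'asciidoctor'
require 'asciidoctor/extensions'

class ListOfFiguresProcessor < Asciidoctor::Extensions::TreeProcessor
  def process document
    # find every block image in the parsed document tree
    images = document.find_by context: :image
    return document if images.empty?
    lines = ['.List of Figures']
    images.each_with_index do |img, i|
      caption = img.title? ? img.title : img.attr('alt')
      lines << ". Figure #{i + 1}: #{caption}"
    end
    # parse the generated AsciiDoc source and append the resulting blocks
    parse_content document, lines
    document
  end
end

Asciidoctor::Extensions.register do
  tree_processor ListOfFiguresProcessor
end

If you prefer the split into a TreeProcessor plus a block macro as described above, one common pattern is for the macro to emit a placeholder block that the tree processor later finds and replaces with the generated list.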
I am getting data from a broken RSS feed that gives me a wrong link. I wanted to fix this link, so I wrote this regex:
<link.*>(.*)&.*tid(.*)</link>
and the link in the feed looks like this:
www.somedomain.com/?value=50&burrrdurrrr;tid=120
But the real working link is in this form:
www.somedomain.com/?value=50&tid=120
What I'm asking is: if my measure looks like this:
[FeedURL]
Measure=Plugin
Plugin=Plugins\WebParser.dll
Url=[Feed]
StringIndex=2 ;now I only get www.somedomain.com/?value=50
Substitute=#SubstituteFeed#
how am I supposed to concatenate the strings together to complete the URL?
I'm guessing that rather than &burrrdurrrr;, the link actually has &amp;, which is how you have to write & in an HTML or XML file.
If that's the case, you just need to set the DecodeCharacterReference option, as described in this handy-looking tutorial. Another option mentioned there is Substitute, which would be able to strip it out even if it really was &burrrdurrrr;.
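For instance, a rough sketch of the question's measure (DecodeCharacterReference and Substitute are real WebParser options, but the exact values here are just illustrative):

[FeedURL]
Measure=Plugin
Plugin=Plugins\WebParser.dll
Url=[Feed]
StringIndex=2
DecodeCharacterReference=1
; or, if the feed really does contain the bogus entity, strip it instead:
; Substitute="&burrrdurrrr;":"&"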
None of this is a particularly sensible way of dealing with HTML or XML - a much better approach would be a plugin which actually parsed the document structure and let you reference nodes using XPath or CSS rules - but you work with what you've got, I guess. (I've never heard of this "Rainmeter" before, despite its claim to be "the best known and most popular desktop customization program for Windows"; maybe because nobody else calls their program that, instead almost universally using the word "widget"?)
I've created a PHP file called pagebase.php that I'm quite proud of. It contains a class that creates the whole HTML file for me from input such as CSS links and JS links.
In any case, this file is several hundred lines long, as it includes several helper functions such as cleanHTML(), which removes all whitespace from the HTML code and then, in layman's terms, makes the source look pretty.
I have decided to use this pagebase in all my projects, particularly in all my internal projects. I also plan to add to and expand the pagebase file quite a lot. So what I'm wondering is whether it's possible to set the allow_url_include option to on, but just for this one single file.
If I got my theory right, that would allow me to include() that file from any server and get the pagebase class.
So what I'm wondering is if it's possible to set the allow_url_include option to on, but just on this one single file.
No, as far as I'm aware this is not possible.
What you are planning to do sounds like a bad idea anyway, though. An include that gets loaded over the web on every request is awful for performance.
You should keep local copies of your library, and use an update script (or a version control system) to keep the versions up to date.
That is bad practice.
You should keep this file alongside the project that needs it and include() it locally.
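A minimal sketch of what that looks like (the directory layout and class name are assumptions; the question never names the class):

<?php
// each project carries its own local copy of the shared library
require_once __DIR__ . '/lib/pagebase.php';

$page = new PageBase();
?>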
Ten years and still no good solutions? You can try toggling the directive at runtime from within your PHP file. The example below uses allow_url_fopen; replace allow_url_fopen with allow_url_include to address the question as asked.
Note that this will likely be restricted further in future PHP versions, and it only works if the directive is not locked down by the global php.ini rules.
In the code, 1 means on and 0 means off.
<?php
echo ini_get('allow_url_fopen');
if (!ini_get('allow_url_fopen')) {
    ini_set('allow_url_fopen', '1');
}
echo ini_get('allow_url_fopen');

// ... your code that uses fopen(), copy() or even curl_init() goes here ...

echo ini_get('allow_url_fopen');
if (ini_get('allow_url_fopen')) {
    ini_set('allow_url_fopen', '0');
}
echo ini_get('allow_url_fopen');
?>
The same approach works for several other php.ini directives. Keep in mind that enabling these options at runtime can be a security issue, so it is something worth monitoring in plugins, and you should use sensible global settings in any case.
Sadly, a project that I have been working on lately has a large amount of copy-and-paste code, even within single files. Are there any tools or techniques that can detect duplication or near-duplication within a single file? I have Beyond Compare 3 and it works well for comparing separate files, but I am at a loss for comparing single files.
Thanks in advance.
Edit:
Thanks for all the great tools! I'll definitely check them out.
This project is an ASP.NET/C# project, but I work with a variety of languages including Java; I'm interested in what tools are best (for any language) to remove duplication.
Check out Atomiq. It finds duplicate code that is prime for extracting to one location.
http://www.getatomiq.com/
If you're using Eclipse, you can use the copy paste detector (CPD) https://olex.openlogic.com/packages/cpd.
You don't say what language you are using, which is going to affect what tools you can use.
For Python there is CloneDigger. It also supports Java, but I have not tried that. It can find code duplication both within a single file and between files, and gives you the result as a diff-like report in HTML.
See SD CloneDR, a tool for detecting copy-paste-edit code within and across multiple files. It detects exact copies, copies that have been reformatted, and near-miss copies with different identifiers, literals, and even different sequences of statements.
CloneDR handles many languages, including Java (1.4, 1.5, 1.6) and C#, especially up to C# 4.0. You can see sample clone detection reports at the website, including one for C#.
ReSharper does this automagically: it suggests when it thinks code should be extracted into a method, and will do the extraction for you.
Check out PMD; once you have configured it (which is fairly simple), you can run its copy-paste detector (CPD) to find duplicate code.
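For example, with PMD 6.x the CPD runner can be invoked from the command line roughly like this (the script name and flags may differ between versions, and the path is just a placeholder):

./run.sh cpd --minimum-tokens 75 --files path/to/your/src --language cs --format text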
Someone with basic Office skills can do the following sequence in about a minute:
- use an ordinary formatter to unify the code style, preferably without line wrapping
- feed the code text into Microsoft Excel as a single column
- search and replace all double spaces with single ones, and do any other normalizing replacements
- sort the column
At this point the duplicated lines will already stand out. To go further:
- add a comparator formula to the 2nd column and a counter to the 3rd
- copy and paste the values again, sort, and see the most repetitive lines
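As a concrete illustration of that last step (hypothetical formulas, assuming the sorted code lines sit in column A starting at row 2):

B2: =IF(A2=A1, 1, 0)        marks the line as a repeat of the one above it
C2: =IF(B2=1, C1+1, 1)      running count of the current run of identical lines

Fill both formulas down, copy and paste the two helper columns as values, then sort by column C in descending order to bring the longest runs of duplicated lines to the top.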
There is an analysis tool, called Simian, which I haven't yet tried. Supposedly it can be run on any kind of text and point out duplicated items. It can be used via a command line interface.
Another option similar to those above, but with a different tool chain: https://www.npmjs.com/package/jscpd
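If you have Node.js available, it can be run without a permanent install, for example (the path is just a placeholder):

npx jscpd path/to/your/source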