Anytime I open the code editor in Visual Studio, there is always an empty new line at the end of the generated code. I usually delete it since it seems irrelevant to me. However, I recently read some code on GitHub which said:
\ No newline at end of file
This was the last line. Now I'm thinking those empty lines at the end of source files do have some relevance. But what do they mean? Do they provide any performance boost?
Two things make me prefer having a newline at the end of files:
Code reviews are slightly easier when looking at diffs that touch the end of the file (if the file lacks a final newline and a line is added at the end, the diff shows the previous line as changed, when it only gained a newline)
Going to the end of the file (Ctrl+End in Windows) always puts me at the same column and not in some unexpected position out to the right
Pretty much the only difference it makes is that if you have a file with no newline - like this:
blah\n
bleh (no newline)
When you modify it to be:
blah\n
bleh\n
foo (no newline)
Then according to the diff, you modified two lines: one with new content, and one that only gained a newline... which is probably not what you wanted. Then again, in reality it doesn't matter much which way you choose. If you include newlines, your diffs will be a little cleaner.
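For illustration (this is a sketch of the diff, not output from any real repository), the unified diff for that change would look roughly like this, with the same "\ No newline at end of file" marker you saw on GitHub:
@@ -1,2 +1,3 @@
 blah
-bleh
\ No newline at end of file
+bleh
+foo
\ No newline at end of file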
It also makes a difference for some preprocessors, as mentioned in another answer, but that depends on what language you use.
Of course it makes no performance difference at all.
No, it makes no difference whatsoever.
Some coding conventions say it's good to have a final newline, some say it's good not to.
Read more about the newline at the end of file in C++ here: "No newline at end of file" compiler warning
I suppose both Visual Studio and Git do it mostly to stay consistent with that convention.
Related
I am having a problem with text files on my work computer. They are regularly being corrupted, and I can't tell if it is an OS problem (Windows 7), a problem with my text editor (Sublime Text 2), or both. Here's what's happening:
I am working on a coding project. I'll write some code, test it out, then save it. It may look something like this (I'm writing in Ruby):
class MyClass
  def my_method
  end
end
Then later, maybe the same day or the following day, I will attempt to run my code and will get a syntax error or invalid character error or unterminated string error, etc. I trace the error and find that one of my files will look like this:
class MyClass
  def my_method
    §`½š—ÍÚ
So I'll fix it and close the code blocks, and then run again. Often, I will get a second or third similar error and will have to trace out all of the files where this happens and fix the code.
The 'corruption' always occurs at the end of the file, so usually it is destroying some end statement for a class/module definition, but it also occurs at the end of scripts in the middle of print or puts statements, etc., wreaking a more general brand of havoc. (That is to say, it doesn't seem to only be happening to the keyword end.) The invalid characters that appear are not always the same characters, and so far I have not noticed any repeats. The problem is destroying about 5-10 characters at the end of the file, including new lines (like my example above).
Also, it doesn't seem to be occurring in specific files. But I'm not sure if it is limited to the files in the folder(s) I have open in Sublime Text or if it is happening to any files that I open for editing with Sublime Text. 99% of the time I'm working in one directory which I have open in Sublime Text for my project. I don't know if the problem would occur with another text editor (e.g., Notepad) because I haven't taken significant time to test it out (i.e., coding for a couple of days strictly in Notepad).
The problem occurs both in files that are currently open in Sublime Text and have remained open since I last worked on them and saved, and in files that have been closed since their last use.
This is very frustrating as I feel like I can't trust that my work will be ready to go when I need it, and I have to spend extra time a couple of times a day fixing the files. I have searched for a solution and so far haven't hit on anything that seems to solve or even describe my problem - though I may not be using the best keywords.
Any help is appreciated! Thanks!
I have a large source code base where most of the documentation and source code comments are in English. But one of the minor contributors wrote comments in a different language, spread across various places.
Is there a simple trick that will let me find them? I imagine first extracting all comments from the code into a single text file (with source file / line number info if possible), then piping that through some language-detection tool.
If that matters, I'm on Linux and the current compiler on this project is Clang.
The only thing that comes to mind is to go through all of the code manually and check it yourself. If it's a similar language that doesn't contain foreign letters, consider using an editor with a spellchecker. That way, the text that isn't recognized will be underlined and easy to spot.
Other than that, I don't see an easy way to go through with this.
You could write a program that reads the files and prints only the comments to another output file, and then spell-check that file, but this may be a waste of time, as you would easily be able to spot the comments yourself.
If you do make a program for that, however, keep in mind that there are three things to check for (a rough sketch follows this list):
If comment starts with /*, make sure it stops reading when encountering */
If comment starts with //, only read one line - unless:
If line starting with // ends with \, read next line as well
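A rough Ruby sketch of that idea might look like this. It is not a full C parser (it will also catch comment markers that appear inside string literals), and the default glob is just an example, but it handles the three cases above and prints file/line info for each comment:
# Rough sketch: extract C/C++-style comments and print them with file:line info.
# Caveat: it does not skip comment markers that appear inside string literals.
# Matches /* ... */ blocks (may span lines), or // lines including \-continued ones.
pattern = %r{/\*.*?\*/|//(?:[^\n]*\\\n)*[^\n]*}m

Dir.glob(ARGV.fetch(0, "**/*.{c,h,cc,cpp,hpp}")).each do |path|   # example glob
  text = File.read(path)
  pos  = 0
  while (m = pattern.match(text, pos))
    line = text[0...m.begin(0)].count("\n") + 1
    puts "#{path}:#{line}: #{m[0].gsub(/\s+/, ' ').strip}"
    pos = m.end(0)
  end
end
The resulting output can then be fed to a spellchecker or a language-detection tool, as suggested above.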
While it is possible to detect a language from a string automatically, you need way more words than fit in a usual comment to do so.
Solution: Use your own eyes and your own brain...
I have a script that is throwing this error.
This usually means there is a block (like an if or a do) that is not correctly closed, or there are too many end keywords. I can't find the issue. Any good tips on how to identify this kind of syntax error?
It could also be a double-quote issue. I'm wondering if there is a way (in UltraEdit or another text editor) to detect lines of the script that have an uneven number of double quotes.
In answer to: "It could also be a double-quote issue, possibly. Wondering if there is a way to detect any lines of script (in ultra-edit or text editor) where there are an un-even number of double quotes."
Sublime is a great editor that is available for most platforms.
For the first question, comment out blocks of code using =begin ... =end and/or # ... and narrow down the error.
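A tiny illustration of that bisection approach (the method name here is made up): =begin and =end must start in the first column, and everything between them is ignored, so you can disable a whole suspect block and re-run to see whether the error goes away.
=begin
def suspect_method(value)
  # temporarily disabled while hunting for the stray "end"
  value.upcase
end
=end

puts "the rest of the script still runs"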
For the second question, use syntax highlighting in your text editor. You can easily tell how far a single string literal continues, and find unbalanced quotes.
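If you do want an editor-independent route for the double-quote question, a few lines of Ruby can flag candidates. This is only a rough heuristic (it ignores quotes inside single-quoted strings and comments):
# Print lines containing an odd number of unescaped double quotes.
File.foreach(ARGV[0]).with_index(1) do |line, lineno|
  quotes = line.scan(/(?<!\\)"/).size
  puts "#{lineno}: #{line}" if quotes.odd?
end
Save it as, say, odd_quotes.rb and run ruby odd_quotes.rb yourscript.rb (file names are examples).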
Never mind, I found the issue. I commented out the newest definition that I had added and it ran. That let me know it was that definition. I then took that out and went through it with a darn comb. Found that I was checking a value, but hadn't allowed for it to be nil or empty. Added that in and now I'm good.
Possible Duplicate:
How to read a file from bottom to top in Ruby?
In the course of working on my Ruby program, I had the Eureka Moment that it would be much simpler to write if I were able to parse the text files backwards, rather than forward.
It seems like it would be simple to read the text file, line by line, into an array, write the lines backwards into a temp file, parse that temp file forwards (which would now effectively be going backwards), make any necessary changes, re-catalog the resulting lines into an array, and write them backwards a second time, restoring the original direction, before saving the modifications as a new file.
While feasible in theory, I see several problems with it in practice, the biggest of which is that if the size of the text file is very large, a single array will not be able to hold the entirety of the document at once.
Is there a more elegant way to accomplish reading a text file backwards?
If you are not using lots of UTF-8 characters, you can use the Elif library, which works just like File.open. Just load Elif and replace File.open with Elif.open:
Elif.open('read.txt', "r").each_line { |s|
  puts s
}
This is a great library, but the only problem I am experiencing right now is that it has several problems with line endings in UTF-8. I now have to rethink a way to iterate over my files.
Additional Details
While googling for a way to solve this problem for UTF-8 reverse file reading, I found an approach that is already implemented by the File library.
To read a file backwards you can try the following code:
File.readlines('manga_search.test.txt').reverse_each { |s|
  puts s
}
This can do a good job as well
There's no software limit on Ruby arrays. There are some memory limitations, though: Array size too big - ruby
Your approach would work much faster if you can read everything into memory, operate there and write it back to disk. Assuming the file fits in memory of course.
Let's say your lines are 80 chars wide on average, and you want to read 100 lines. If you want it efficient (as opposed to implemented with the least amount of code), then go back 80*100 bytes from the end (using seek with the "relative to end" option), then read ONE line (this is likely a partial one, so throw it away). Remember your current position via tell, then read everything up until the end.
You now have either more or fewer than 100 lines in memory. If fewer, go back (100 + 1.5 * no_of_missing_lines) * 80 bytes, and repeat the above steps, but only read lines until you reach the position you remembered before. Rinse and repeat.
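A sketch of that approach in Ruby might look like this. The assumed average line width and the file name are examples, and for simplicity it re-reads to the end of the file instead of stopping at the remembered position:
# Read roughly the last `wanted` lines by seeking back from the end.
def last_lines(path, wanted = 100, avg_len = 80)
  File.open(path, "rb") do |f|
    offset = wanted * avg_len
    loop do
      if offset >= f.stat.size
        f.rewind                        # our guess exceeds the file size: just read it all
        return f.readlines.last(wanted)
      end
      f.seek(-offset, IO::SEEK_END)     # jump back roughly `wanted` lines from the end
      f.readline                        # discard the (probably partial) first line
      lines = f.readlines               # read everything up to the end of the file
      return lines.last(wanted) if lines.size >= wanted
      # Not enough lines yet: seek further back, proportional to what is missing.
      offset += ((wanted - lines.size) * 1.5 * avg_len).to_i
    end
  end
end

puts last_lines("some_big_log.txt")     # example file name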
How about just going to the end of the file and iterating backwards over each char until you reach a newline, read the line, and so on? Not elegant, but certainly effective.
Example: https://gist.github.com/1117141
I can't think of an elegant way to do something so unusual as this, but you could probably do it using the file-tail library. It uses random access files in Ruby to read it backwards (and you could even do it yourself, look for random access at this link).
You could go through the file once forward, storing only the byte offset of each \n instead of storing the full string for each line. Then you traverse your offset array backwards and can use ios.sysseek and ios.sysread to pull lines out of the file. Unless your file is truly enormous, that should alleviate the memory issue.
Admittedly, this absolutely fails the elegance test.
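Here is a sketch of that idea, using plain seek/read rather than sysseek/sysread (mixing the sys* calls with buffered reads is risky); the method and file names are made up:
# One forward pass records where each line starts, then lines are yielded
# back to front by seeking to each recorded offset.
def each_line_reversed(path)
  File.open(path, "rb") do |f|
    starts = [0]
    f.each_line { starts << f.pos }   # position after each line = start of the next one
    ends = starts[1..-1]              # byte offset just past the end of each line
    starts.pop                        # the last entry is EOF, not the start of a line
    starts.zip(ends).reverse_each do |from, to|
      f.seek(from, IO::SEEK_SET)
      yield f.read(to - from)
    end
  end
end

each_line_reversed("huge_file.txt") { |line| print line }   # example file name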
I've occasionally encountered software - including compilers - that refuse to accept or properly handle text files that aren't properly terminated with a newline. I've even encountered explicit errors of the form,
no newline at the end of the file
...which would seem to indicate that they're explicitly checking for this case and then rejecting it just to be stubborn.
Am I missing something here? Why would - or should - anything care whether or not a file ends with a seemingly-superfluous bit of whitespace?
Historically, at least in the Unix world, "newline" or rather U+000A Line Feed was a line terminator. This stands in stark contrast to the practice in Windows for example, where CR+LF is a line separator.
A naïve way of reading every line in a file would be to append characters to a buffer until an LF is encountered. Done really stupidly, this would ignore the last line in the file if it wasn't terminated by an LF.
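As an illustration of such a reader (an assumption about how that kind of code might look, not a quote from any particular tool):
require "stringio"

# Naive line reader: a line is only emitted when an LF is seen, so a final
# line without a trailing LF is silently dropped.
def naive_lines(io)
  lines  = []
  buffer = ""
  io.each_char do |ch|
    if ch == "\n"
      lines << buffer
      buffer = ""
    else
      buffer += ch
    end
  end
  lines   # whatever is left in buffer never makes it out
end

p naive_lines(StringIO.new("blah\nbleh"))   # => ["blah"] -- "bleh" is lost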
Another thing to consider are macro systems that allow including files. A line such as
%include "foo.inc"
might be replaced by the contents of the mentioned file where, if the last line wasn't ended with an LF, it would get merged with the next line. And yes, I've seen this behavior with a particular macro assembler for an embedded platform.
Nowadays I firmly believe that (a) it's a relic of ancient times and (b) I haven't seen modern software that can't handle it, and yet we still carry around numerous editors on Unix-like systems that helpfully put one more byte than needed at the end of a file.
Generally I would say that a lack of a newline at the end of a source file would mean that something went wrong in the editor or source code control client and not all of the code in the buffer got flushed. While it's likely that this would result in other errors, knowing that something likely went wrong in the editor/SCM and code may be missing is a pretty useful bit of knowledge. Certainly something that I would want to check.