This is the most minor of annoyances, but it's still an annoyance. One day the line numbers in my Visual Studio 2013 editor suddenly stopped being evenly spaced...
The lines of code themselves are fine; it's just the line numbers that are oddly spaced. I'm guessing it's something to do with Productivity Power Tools, but for the life of me I can't find any setting that fixes it. I don't really have any other extensions installed that would affect something like this. Has anyone else run into this?
Productivity Power Tools 2013 has an option to compress white space so that more code fits on the screen.
Description
The description on the Productivity Power Tools 2013 page says:
Syntactic Line Compression
Syntactic line compression enables you to make better use of your screen's vertical real-estate. It shrinks lines that contain neither letters nor numbers by 25% vertically, allowing more lines to be displayed in the editor. Other lines are not affected.
Disabling the setting
You can turn this off if it bothers you by going to
Tools/Options/Productivity Power Tools/Turn Extensions On/Off
but I find it does help readability with larger files.
I think this actually comes from C# formatting. There is an option to always show braces { and } on their own lines and have those lines be smaller.
I'm working on a large C++ project in Visual Studio, but it very regularly either produces a duff build (the executable it generates doesn't match the code, resulting in random crashes or the inability to set breakpoints) or refuses to give any debug info for many of the types. For example, a vector of very simple structs stored by value will be displayed as "size: attempt to divide by zero". You can't drill down into the entries of the vector to see the values, and you get a similar thing for lists, only you see a bunch of question marks instead of the divide-by-zero error.
This doesn't just affect standard library containers, but they are some of the worst culprits because they so often behave in this way. Doing a complete rebuild of the code will maybe rectify the problem 10% of the time, but it's completely unpredictable. I have found that writing shorter C++ files (I literally mean the file size, nothing to do with the objects themselves) can sometimes help, but I suspect that's just down to luck. It really doesn't make much sense that it could be relevant, anyway.
I work as part of a team on the same project, and only two of us seem to run into these kinds of gnarly problems on a daily basis.
If anyone has any suggestions as to how I might be able to get the VS debugger to behave, I would be incredibly grateful.
The program 1+:o in the esoteric programming language ><> (Fish) slows down over time, and I don't know why. It slows down most on the :, which duplicates the item at the top of the stack, and slows down somewhat on the o, which prints out the corresponding character for the top item in the stack. You can try it out here; just make sure to initialize the stack with a 0. It slows down faster on mobile devices (source: my phone), in case you want to check in less time.
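To pin down what the program itself does, here is a minimal C++ sketch of the three instructions (my own illustration, assuming the stack starts with a single 0 as described; capped at 1000 steps so it terminates):

    #include <clocale>
    #include <cwchar>
    #include <vector>

    int main() {
        std::setlocale(LC_ALL, "");        // let putwchar emit multi-byte output
        std::vector<long> stack{0};        // the ><> stack, initialized with 0

        for (int i = 0; i < 1000; ++i) {   // the real program loops forever
            stack.back() += 1;             // 1+  increment the top of the stack
            stack.push_back(stack.back()); // :   duplicate the top
            long top = stack.back();       // o   pop the top and print it
            stack.pop_back();              //     as a character
            std::putwchar(static_cast<wchar_t>(top));
        }
        // The stack never grows past two items; only the printed output
        // gets longer, by one character per iteration.
    }

The per-iteration work here is constant, which suggests the slowdown is on the output/rendering side rather than in the interpreter's stack handling.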
I have never heard of this language before, but here are some possibilities:
Your output is increasing, so on each iteration you have one more character to show. If your browser does not do some kind of smart refreshing, this will slow things down a lot for higher n.
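(If the page re-lays-out the whole output on every step, then after n iterations it has drawn 1 + 2 + ... + n = n(n+1)/2 characters in total, so the cumulative rendering work grows quadratically even though each instruction is constant-time.)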
I have no clue what your code is doing, as the language is foreign to me, but it looks like you are also growing some array/stack/heap by one item each iteration, which takes memory, and on phones there is not much of it available, not to mention the reallocations...
The output looks like it is Unicode, and once you hit the special characters, speed depends on the fonts installed; some characters are pretty slow to draw. The usual policy is to search the installed fonts for the code page in use, which can take a while, and the rendering itself is also not very good for some characters.
If you have a single full Unicode font (not just the chunks), it should speed things up considerably (especially a raster font). But there are not many of those out there, and I have never seen one that is complete (it would be huge), though it has been a while since I last searched for one...
Here is an example of one of the more "complete" Unicode fonts: GNU_Unifont.
I've programmed in ><> before, so I'll give my view on it.
Personally, I didn't experience much slowdown of the program when running it on my computer. I ran it with the animation, so I could see what was going on with the stack and execution speed.
The stack operations seem to occur at the same rate the entire time; I ran it past 3300. The output appears to slow down, but that is because the character sets for foreign languages use characters that combine or interact with each other in some way (combining marks start at U+0300, i.e. codepoint 768, so the program reaches them after only 768 iterations). This is the primary cause of the visual slowdown: the output has to be rewritten as certain characters are printed adjacent to each other. So really, it's the fact that the memory usage increases the longer you run it, plus the fact that the output is atypical.
Also, an RTL (right-to-left) control character gets printed (the right-to-left override is U+202E, i.e. codepoint 8238), so all output following it is displayed from right to left, which the browser is less used to and may not be optimized for.
Another way I can tell that it's the output slowing down the browser is that the page is still slow when I pause execution of the program. I tried to zoom in/out, and in some cases it took multiple seconds to re-render the output.
I have two machines with the same screen resolution but different amounts of space between the lines: I can see only 30 lines on one screen, while 48 are visible on the other. I'm looking for a setting that would let me decrease the spacing.
Maybe you have syntactic line compression from Productivity Power Tools enabled on your second machine.
The second computer had a different Windows magnification percentage. I forgot that setting existed lol.
I'm trying to come up with a platform-independent way to render Unicode text to some platform-specific surface, but assuming most platforms support something at least roughly similar, let's talk in terms of the Win32 API. I'm most interested in rendering LARGE buffers and supporting rich text, so while I definitely don't want to ever look inside a Unicode buffer myself, I'd like to be told what to draw and be hinted where to draw it, so that if the buffer is modified, I can properly ask for updates on partial regions of the buffer.
So, the actual questions. GetTextExtentExPointW clearly lets me get the widths of each character, but how do I get the width of a non-breaking extent of text? If I have some long word, it should probably be put on a new line rather than split. How can I tell where to break the text? Will I need to actually look inside the Unicode buffer? That seems very dangerous. Also, how do I figure out how far apart each baseline should be while rendering?
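To make that concrete, here is the kind of naive measure-and-break code I can write today; the back-up-to-a-space policy is my own placeholder, and it is exactly the sort of "looking inside the buffer" that worries me:

    #include <windows.h>
    #include <vector>

    // Given a DC, a UTF-16 run, and a target width in pixels, return how
    // many WCHARs fit, backing up to the last space if we land mid-word.
    // Scanning for L' ' is wrong for scripts without spaces, combining
    // marks, surrogate pairs, etc., which is the problem.
    int NaiveFitCount(HDC hdc, const wchar_t* text, int len, int maxWidth)
    {
        int fit = 0;
        SIZE size;
        std::vector<int> dx(len); // partial extents up to each character,
                                  // i.e. the per-character widths
        GetTextExtentExPointW(hdc, text, len, maxWidth, &fit, dx.data(), &size);

        if (fit == len)
            return fit;               // the whole run fits; no break needed
        for (int i = fit; i > 0; --i) // back up to a space, if there is one
            if (text[i - 1] == L' ')
                return i;
        return fit;                   // one long word: forced break
    }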
This is already looking like it's going to be extremely complicated. Are there alternative strategies for doing what I'm trying to do? I really would not like to re-render a HUGE buffer every time I change a tiny chunk of it. I want something between handling individual glyphs myself and just handing over a box to spit text into.
Finally, I'm not interested in using anything like Knuth's line-breaking algorithm, and no hyphenation. Ideally I'd like to render justified text, but that's on me once the word positions are given; a ragged right-hand margin is fine by me.
What you're trying to do is called shaping in Unicode jargon. Don't bother writing your own shaping engine; it's a full-time job that requires continuous updates to keep up with changes in the Unicode and OpenType standards. If you want to spend any time on the rest of your app, you'll need to delegate shaping to a third-party engine (harfbuzz-ng, Uniscribe, ICU, etc.).
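For a taste of what delegating looks like, a minimal harfbuzz-ng sketch (the font path is a placeholder, and scale setup and error checks are omitted):

    #include <hb.h>

    int main() {
        // Load a font file (the path is a placeholder for illustration).
        hb_blob_t* blob = hb_blob_create_from_file("/path/to/font.ttf");
        hb_face_t* face = hb_face_create(blob, 0);
        hb_font_t* font = hb_font_create(face);

        // Put a UTF-8 run in a buffer and let HarfBuzz guess its
        // script, direction, and language from the content.
        hb_buffer_t* buf = hb_buffer_create();
        hb_buffer_add_utf8(buf, "Hello, world", -1, 0, -1);
        hb_buffer_guess_segment_properties(buf);

        hb_shape(font, buf, nullptr, 0);

        // The result is glyph ids plus positions; summing x_advance
        // gives the width of the shaped run in font units.
        unsigned int count = 0;
        hb_glyph_position_t* pos = hb_buffer_get_glyph_positions(buf, &count);
        hb_position_t width = 0;
        for (unsigned int i = 0; i < count; ++i)
            width += pos[i].x_advance;

        hb_buffer_destroy(buf);
        hb_font_destroy(font);
        hb_face_destroy(face);
        hb_blob_destroy(blob);
    }

Note that the line-breaking question from above still isn't answered here: shaping gives you widths for a run, and a separate pass (e.g. the Unicode line-breaking algorithm, which ICU implements) decides where runs may be broken.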
As others wrote:
– Unicode font rendering is hellishly complex, much more so than you expect
– the Win32 API is not cross-platform at all
The three common strategies for rendering unicode text are:
1. write one backend per system (plugging into each system's native text stack), or
2. select one set of cross-platform libs (for example fribidi + harfbuzz-ng + freetype + fontconfig, or a framework like Qt) and recompile them for each target system, or
3. take compliance shortcuts
The only strategy I strongly advise against is the last one. You cannot control unicode.org normalization (such as the addition of the capital ß to German), you do not understand worldwide script usage (African languages and Vietnamese are both written in Latin-script variants, but they exercise unexpected Unicode properties), and you will underestimate font creators' ingenuity (oh, Indic users requested this OpenType property, but it will be really handy for this English use-case…).
The first two strategies have their own drawbacks. It's simpler to maintain a single text backend, but deploying a complete text stack on a foreign system is far from hassle-free. Almost every project that tries to go cross-platform has to get rid of MSVC first, since it targets Windows and its language dialect won't build on other platforms, while cross-platform libs will typically only compile easily with gcc or llvm.
I think harfbuzz-ng has reached parity with Uniscribe even on Windows, so that's the lib I'd target if I wanted cross-platform today (Chrome, Firefox, and LibreOffice use it, at least on some platforms). However, LibreOffice at least uses the multi-backend strategy; I have no idea whether that reflects the current state of the libraries or some past assessment. There are not that many cross-platform apps with heavy text usage to look at, and most of them carry the burden of legacy choices.
Unicode rendering is surprisingly complicated. Line breaking is just the beginning; there are many other subtleties that I don't think you've appreciated (vertical text, right-to-left text, glyph combining, and many more). Microsoft has several teams dedicated to doing nothing but implementing text rendering, for example.
It sounds like you're interested in DirectWrite. Note that this is NOT platform-independent, obviously. It's also possible to do a less accurate job if you don't care about being language-independent; many of the more unusual features only occur in rarer languages. (Chinese being a notable exception.)
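A minimal sketch of the DirectWrite route (font name and sizes are arbitrary; error handling omitted). IDWriteTextLayout does the line breaking for you, and its per-line metrics include the baseline, which answers the baseline question directly:

    #include <windows.h>
    #include <dwrite.h>
    #include <cwchar>
    #include <vector>
    #pragma comment(lib, "dwrite.lib")

    int main() {
        IDWriteFactory* factory = nullptr;
        DWriteCreateFactory(DWRITE_FACTORY_TYPE_SHARED, __uuidof(IDWriteFactory),
                            reinterpret_cast<IUnknown**>(&factory));

        IDWriteTextFormat* format = nullptr;
        factory->CreateTextFormat(L"Segoe UI", nullptr, DWRITE_FONT_WEIGHT_NORMAL,
                                  DWRITE_FONT_STYLE_NORMAL, DWRITE_FONT_STRETCH_NORMAL,
                                  16.0f, L"en-us", &format);

        // The layout object breaks the text into lines for the given box.
        const wchar_t* text = L"Some long paragraph of text to be wrapped.";
        IDWriteTextLayout* layout = nullptr;
        factory->CreateTextLayout(text, (UINT32)std::wcslen(text), format,
                                  300.0f, 10000.0f, &layout);

        // Each line's metrics carry its height and baseline offset.
        UINT32 lineCount = 0;
        layout->GetLineMetrics(nullptr, 0, &lineCount);   // query the count
        std::vector<DWRITE_LINE_METRICS> lines(lineCount);
        layout->GetLineMetrics(lines.data(), lineCount, &lineCount);

        layout->Release();
        format->Release();
        factory->Release();
    }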
If you want something perfectly multi-platform, there will be problems. If you draw one sentence with GDI, one with GDI+, one with Direct2D, one on Linux, and one on a Mac, all with the same font size into the same buffer, you'll see differences: some round positions to integers while others use floats, for example.
There is not one problem here but at least two: drawing text, and computing text positions, line breaks, etc., which are very different tasks. Some libraries do both; some do only the computing or only the rendering part. A very simplified explanation: drawing renders just one single character at the position you ask for, with transformations (zoom, rotation) and anti-aliasing. Computing does everything else: choosing each character's position within a word, line breaks within sentences, paragraphs, and so on.
If you want to be platform-independent, you could use FreeType to read font files and get all the information about each character. That library gives exactly the same result on every platform, and that predictability is good to have with fonts. The main problem with fonts is the amount of bad, missing, or even wrong information in character descriptions. Nobody does text perfectly, because it's very hard (tip of the hat to Word, Acrobat, and every team that deals directly with fonts).
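A minimal FreeType sketch of reading one character's metrics (the font path is a placeholder):

    #include <ft2build.h>
    #include FT_FREETYPE_H
    #include <cstdio>

    int main() {
        FT_Library lib;
        FT_Init_FreeType(&lib);

        FT_Face face;
        FT_New_Face(lib, "/path/to/font.ttf", 0, &face); // placeholder path
        FT_Set_Pixel_Sizes(face, 0, 16);                 // 16 px tall

        // Load one character and read its metrics; the numbers come back
        // identical on every platform, which is the predictability above.
        FT_Load_Char(face, 'A', FT_LOAD_DEFAULT);
        std::printf("advance: %ld (26.6 fixed point)\n", face->glyph->advance.x);

        FT_Done_Face(face);
        FT_Done_FreeType(lib);
    }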
Once your text computation is good (and there is a lot of work to do everything you see in a good word processor: spacing between characters, spacing between words, line breaks, alignment, rotation, weight, anti-aliasing...), you can do the rendering. That should be the easier part. With the same computation you can drive a GDI, Direct2D, PDF, or print rendering path.
I am working on a Cocoa-based text editor. Should I base it on NSTextView or is there a more efficient option? Keep in mind that I plan to support tabs so there can be many editors open at the same time.
I am working on a Cocoa-based text editor. Should I base it on NSTextView
Yes.
or is there a more efficient option?
No, assuming “efficiency” includes your own time and effort weighed against the feature set you want to support—Cocoa's text system does a lot for you, which you'd be throwing away if you rolled your own.
Some examples:
Undo support
Advanced editing (emacs keys)
Support for input managers/input methods
Support for all of Unicode
Mouse selection
Keyboard selection
Multiple selection
Fonts
Colors
Images
Sounds
Find
Find and Replace
Spelling-checking
Grammar-checking
Text replacement
Accessibility
If you roll your own, you get to spend months reinventing and debugging some if not most if not all of those wheels. I call that inefficient.
The text system you already have, meanwhile, is fast nearly all of the time. You need huge texts with long lines (or maybe lots of embedded images/sounds) to bog it down.
Keep in mind that I plan to support tabs so there can be many editors open at the same time.
Unless the user is going to be typing into all of them at once, I don't see how that will cause a performance problem. 0% CPU × N or N-1 views = 0% CPU.
The one place where you might have a problem is memory usage, if the documents are both many and large. They'd have to be both in the extreme, as even a modest Mac nowadays has 1 GiB of RAM, and text doesn't weigh much.
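For scale: a million characters stored as UTF-16 (roughly two bytes per character) is only about 2 MB, i.e. about 0.2% of that 1 GiB.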
If that's the case, then you could keep only the N most recently used unmodified texts in memory, and otherwise remember only the arrays of selection ranges. But 99% of the time, swapping texts in and out will be far more expensive than just leaving them all in memory.
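If you did go that route, the bookkeeping is just a most-recently-used list. A rough sketch of the idea (in C++ for neutrality, since the idea isn't Cocoa-specific; loadFromDisk is a stub):

    #include <list>
    #include <string>
    #include <unordered_map>

    // Keep only the N most recently used texts in memory; evicted entries
    // would keep just their selection ranges and be reloaded on demand.
    class TextCache {
        size_t capacity;
        std::list<std::pair<std::string, std::string>> mru;  // (path, text)
        std::unordered_map<std::string, decltype(mru)::iterator> index;

    public:
        explicit TextCache(size_t n) : capacity(n) {}

        const std::string& get(const std::string& path) {
            auto it = index.find(path);
            if (it != index.end()) {
                mru.splice(mru.begin(), mru, it->second); // mark most recent
            } else {
                mru.emplace_front(path, loadFromDisk(path));
                index[path] = mru.begin();
                if (mru.size() > capacity) {              // evict least recent
                    index.erase(mru.back().first);
                    mru.pop_back();
                }
            }
            return mru.front().second;
        }

    private:
        static std::string loadFromDisk(const std::string&) { return {}; }
    };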
NSTextView is probably the simplest way to go if you want to get a ton of nice features for free. It can't do everything, but it's an awesome start.