Can the character set in TSynEdit in Lazarus be set?

I use the TSynEdit component in a program. I noticed that when I call SynEdit1.Lines.SaveToFile(loadedfile);, the file is saved in UTF-8 encoding.
Is there a way to change that? I changed the Font.Charset property of the SynEdit1 object, but it made no difference.
Any ideas?

Maybe there are additional helper routines, but in general FPC/Lazarus has not implemented the extra encoding parameter that the Unicode versions of Delphi added to the various Save* methods (e.g. TStrings.SaveToFile).

Related

PyFMI parameter estimation and handling of fixed model parameters different from default

I have started to use parameter estimation in PyFMI with the procedure model.estimate(), and it works well.
From the documentation (Andersson et al., 2016), as well as from practical use, I understand that model parameters are taken from the compiled FMU model if they are not estimated. It would be very practical to have an option to provide a dictionary with a set of fixed parameter values that differ from the model defaults. Is there any way to do that?
The current workflow for a larger model built up of parts from libraries is that you need to make a copy of those library models, set the parameters to the proper values in the code, and then compile it. It is a somewhat tedious procedure. Perhaps I have misunderstood something?
Andersson et al. (2016): "PyFMI: A Python package for…"
https://portal.research.lu.se/portal/files/7201641/pyfmi_tech.pdf
From my contact Christian Winther at Modelon I have learned that I understand the workflow correctly. He also sees the advantage of being able to supply a list (or dictionary) of parameters that are changed from the defaults and remain constant during parameter estimation. It may come in a future update.

Is it acceptable to use global variables for debug code?

I have a very well-thought-out object-oriented structure for a large project that I am working on. However, in areas of my code I would like to toggle debug sections on and off through a set of variables located in one easy-to-access area. My question is whether this is good practice, or whether I should implement an even more convoluted passing scheme for the debug parameters.
You should probably take a good look at the System.Diagnostics.Debug class and at how it is implemented using the Conditional attribute.
Build something like that. The ease of use of a handful of globals is nothing compared with the difficulty of being certain you have turned them all off.
And of course, C# doesn't have global variables anyway.
You should use the Debug class, which has numerous methods for handling debugging and whose calls are removed when built in release mode. Conditional methods would probably help you as well; see the sketch below.
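A minimal sketch of that pattern, using the real System.Diagnostics pieces (the DebugLog and Trace names are just placeholders):

using System.Diagnostics;

static class DebugLog
{
    // Call sites of this method are stripped by the C# compiler
    // whenever the DEBUG symbol is not defined (i.e. in Release builds).
    [Conditional("DEBUG")]
    public static void Trace(string message)
    {
        Debug.WriteLine(message);
    }
}

class Program
{
    static void Main()
    {
        // In a Release build this call compiles to nothing -- no global
        // flag to check, and nothing to remember to turn off.
        DebugLog.Trace("entering Main");
    }
}

Unlike a run-time flag, the calls (including evaluation of their arguments) are removed at compile time, which is exactly the "certain you turned it all off" guarantee mentioned above.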

Windows GUI Control: Difference between LVCOLUMN and HDITEM?

I am using the Windows list-view control and am a little confused by LVCOLUMN and HDITEM: the former structure is used to define a column's properties, while the latter defines the actual header item of the column. Do I understand that correctly?
If so, do I need to create/define both?
You usually just deal with LVCOLUMN and let the listview itself update the header control for you.
You generally only need to use HDITEM (or talk to the header control at all) when accessing the header directly, which is rare (but does happen in some situations).
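For what it's worth, the managed WinForms wrapper around this control reflects the same division of labor: you only ever add columns, and the header control is maintained for you. A small C# sketch of that surface (the wrapper ultimately fills in an LVCOLUMN and sends LVM_INSERTCOLUMN under the hood):

using System;
using System.Windows.Forms;

class ColumnsDemo
{
    [STAThread]
    static void Main()
    {
        var list = new ListView { View = View.Details, Dock = DockStyle.Fill };

        // Column definitions only; the listview creates and updates its
        // own header control -- no HDITEM in sight.
        list.Columns.Add("Name", 120);
        list.Columns.Add("Size", 80);

        var form = new Form();
        form.Controls.Add(list);
        Application.Run(form);
    }
}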

How to see multi-byte strings in Xcode

Is it possible to see strings that use 16-bit characters in the Xcode debugger? I use a custom string class, not NSString. The strings are NULL-terminated. The only way I can see the strings is by viewing them as memory, but that is hard to read.
Thank you!
You'll need to write a data formatter bundle -- just writing data formatter expressions inside the debugger isn't enough. Viewing strings in the Xcode debugger is a black art. Even once you've written the data formatter bundle, be prepared for it not to work at least 50% of the time. We've been fighting this issue for about 5 years now. Most of the time the debugger will tell you the variable is no longer in scope when it really is, and you'll still need to drill down into the members to view the raw memory anyway.
Something that may make it easier (I haven't tried this) is to write a method in the class that returns an NSString; then you may be able to get the data formatter expressions to display something useful.
Use Data Formatting in the Xcode debugger.
For custom classes I have always found it helpful to implement the description and debugDescription methods. Maybe this approach will be sufficient in your case too.

Is there any downside to redundant qualifiers? Any benefit?

For example, referencing something as System.Data.DataGrid as opposed to just DataGrid. Please provide examples and an explanation. Thanks.
The benefit is that you don't need to add an import for everything you use, especially when it's the only thing you use from a particular namespace; it also prevents collisions.
The downside, of course, is that the code balloons out in size and gets harder to read the more you use specific qualifiers.
Personally I tend to use imports for most things unless I know for sure I will only be using something from a particular namespace once or twice, so it won't impact the readability of my code.
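As a trivial sketch of that trade-off, a one-off use can stay fully qualified so the file needs no extra using directive (Report and Build are placeholder names):

class Report
{
    static string Build()
    {
        // Single use of StringBuilder: fully qualified instead of
        // adding 'using System.Text;' at the top of the file.
        var sb = new System.Text.StringBuilder();
        sb.Append("total: ").Append(42);
        return sb.ToString();
    }
}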
You're being very explicit about the type you're referencing, and that is a benefit. At the same time, you're giving up code clarity, which is clearly a downside in my case, as I want code to be readable and understandable. I go for the short version unless I have a conflict between namespaces that can only be solved with explicit references to classes, or unless I make an alias for it with the using keyword:
using DataGrid = System.Data.DataGrid;
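As a concrete case where an alias earns its keep (Timer is a classic collision: both System.Windows.Forms and System.Threading define one):

using System.Threading;
using System.Windows.Forms;

// Both namespaces above define a Timer class, so the bare name is
// ambiguous; the alias picks the variant we actually want:
using UiTimer = System.Windows.Forms.Timer;

class Clock
{
    // Unambiguous at every use site, and still short.
    public UiTimer Ticker = new UiTimer();
}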
Actually, the full path is global::System.Data.DataGrid. The point of using a more qualified path is to avoid additional using statements, especially when introducing another using would cause problems with type resolution. Fully qualified identifiers exist so that you can be explicit when you need to be, but when the class's namespace is clear, the short DataGrid version is clearer to many.
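To make the global:: point concrete, a contrived sketch (the Acme names are hypothetical):

namespace Acme.System { class Console { } } // shadows the root 'System' inside Acme

namespace Acme
{
    class Demo
    {
        static void Main()
        {
            // Plain 'System.Console' here binds to Acme.System.Console,
            // which has no WriteLine; global:: resolves from the root.
            global::System.Console.WriteLine("hello");
        }
    }
}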
I generally use the shortest form available in order to keep the code as clean and readable as possible. That's what using directives are for, after all, and tooltips in the VS editor give you instant detail on the provenance of a type.
I also tend to use a namespace tag for RCWs in a COM interop layer, to call out those variables explicitly in the code (they may need special attention to lifecycle and collection), e.g.:
using _Interop = Some.Interop.Namespace;
In terms of performance there is no upside/downside. Everything is resolved at compile time and the generated MSIL is identical whether you use fully-qualified names or not.
The reason this style is prevalent in the .NET world is auto-generated code, such as designer markup. In that context it is better to fully qualify names like class names, because of possible conflicts with other classes you may have in your own code.
If you have a tool like ReSharper, it will actually tell you which of your fully qualified references are unnecessary (e.g. by graying them out) so you can lop them off. If you frequently cut and paste code across your various code bases, fully qualifying names would be a must. (Then again, why would you want to cut and paste all the time? It's a bad form of code reuse!)
I don't think there is really a downside; it's just readability versus actual time spent coding. In general, if you don't have namespaces with ambiguous objects, I don't think it's really needed. Another thing to consider is the level of use: if you have one method that uses reflection and you are all right with typing System.Reflection ten times, then it's not a big deal, but if you plan on using a namespace a lot, I would recommend a using directive.
Depending on your situation, extra qualifiers can generate a warning (if this is what you mean by redundant). If you then treat warnings as errors, that's a pretty serious downside.
I've run into this with GCC, for example:
struct A {
    int A::b; // warning: extra qualification 'A::' on member 'b'
};
