VERTICA: Which system parameter controls how often containers are condensed?

There are several parameters that control the Tuple Mover in Vertica, and they can sometimes influence one another; the one most directly related to how often containers are condensed is MergeOutInterval, which sets how often the Tuple Mover checks whether mergeout work is needed.
Here's the documentation on them:
https://www.vertica.com/docs/10.1.x/HTML/Content/Authoring/AdministratorsGuide/ConfiguringTheDB/TupleMoverParameters.htm

How to set a value, attribute, or property of a block instance independently of the block

I am trying to define different systems (software and hardware) made of common building blocks in SysML. In the following I try to describe my approach with a sample:
For this purpose I have defined a common set of blocks in one BDD and described the common relationships of all blocks using ports and connectors in one IBD - nothing special so far:
blocks A, B, and C
each block has two ports
each block's ports are connected to the other blocks' ports
Now when using the blocks defined as given above, I want to add static characteristics of blocks and ports for each system I define based on the above building blocks. The system is defined in one additional BDD and IBD using the same blocks from above:
System(s) AX and AY have:
additional connections between two blocks A and B, described in IBD (OK)
additional characteristics of the ports (NOK)
additional characteristics of the blocks (NOK)
Problem:
The last two "NOK" points are a problem as follows:
Whenever I add additional properties/attributes/tags to a block in one system/IBD, they also apply to the other systems/blocks
Whenever I add additional properties/attributes/tags to a port in one system, they also apply to the other systems/ports
My question can be generalized:
How would I define characteristics of instances of blocks in a way that does not affect the original blocks they are instantiated from? The issue has come up in multiple attempts to design systems; maybe SysML is not intended to be used in such a way at all?
I also tried to design my system in UML using component diagrams and components/component instances, and the same problem appears there: instance-specific attributes/values/ports do not seem to be supported.
Side Note:
I am using MagicDraw as a tool for SysML and UML.
I understand you want to define context-specific connectors and properties.
First I want to clarify that all of these are already context-specific. The context of properties is their owning block or InterfaceBlock (the type of the port). The context of connectors is their owning block (InterfaceBlocks cannot have parts, and therefore no connectors either).
So, a connector needs a context. Let's call it system A0. It has parts of type A, B and C and owns the connectors between the ports of its parts.
Now you can define systems AX and AY as special kinds of system A0. As such, they have the same parts, but you can add more parts and connectors.
If you define additional properties of the parts of your special systems, you are in fact creating new special types of A, B and C and the port types. SysML 1 forces you to define these new types. And I think rightly so. If block A' shall have more features than block A, then it is a new type. It is irrelevant that A' is only used in the context of system AX. If you later decide to use it in system AZ, it would still have the same features.
Not all of these changes mean that it is a new type. For example, if you only want to change the voltage of an adjustable power supply, this is not a new type of power supply. In the context of system AX it might be set to 12 V, and in system AY it might be set to 24 V. In order to allow this, SysML 1 has context-specific initial values. Cameo has great support for these. This works around the somewhat clumsy definition in SysML 1. This will be much better in SysML 2.
If the value is not adjustable, a 12 V power supply would technically be a new type of power supply. However, I can see that it might be useful to define this only as a context-specific value, even though it is strictly speaking not context-specific. I don't want to be more papal than the pope.
Now, a lot of systems engineers don't like to define their blocks. I really don't understand why. But in order to accommodate this modeling habit, SysML 1 has property-specific types. In the background these types are still regular blocks; only on the surface does it appear as if they are defined only in the context. Up to now, no one has been able to explain to me what the advantage of this would be. However, SysML 2 has made it the core of the language: there you can define properties of properties, even without first defining blocks.
Sometimes you have subassemblies with some flexibility regarding cardinalities and types. If you use such a subassembly in a certain context, it is often necessary to define context-specific limitations. For example, you could say the generic landing gear can have 4..8 wheels, which could be high-load wheels or medium-load wheels, but when used in a Boeing 747, it will have 6 high-load wheels. For this, SysML 1 has bound references. Is that your use case?

Inside the Linux kernel, does the `nice` priority level of a task get modified anywhere else besides these cases?

From within the Linux kernel, I want to overwrite the nice priority (with a hardcoded value) of all regular processes (e.g. of the scheduling class SCHED_NORMAL) that have an uneven process ID.
I discovered that the default settings of a process are initialized in /source/init/init_task.c, including the priority, which is set to the default value MAX_PRIO - 20 (i.e., 120), as can be seen in the struct definition of init_task.
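For reference, the relevant fields of that hardcoded struct in init/init_task.c look roughly like this in recent kernels (abridged excerpt):

    struct task_struct init_task = {
        /* ... */
        .prio        = MAX_PRIO - 20,
        .static_prio = MAX_PRIO - 20,
        .normal_prio = MAX_PRIO - 20,
        /* ... */
    };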
My reasoning is that if I were to modify these default settings in init_task.c, it should cover all cases with the exception of any users calling the system call sys_setpriority (by using the nice command, for example). Is this correct, or are there any other cases where the task's prio may be modified?
I discovered that the default settings of a process are initialized in /source/init/init_task.c
No. That's the definition of the init task itself; it's not used to init[ialize] new tasks. The init task is the first userspace task that is run by the kernel upon startup, and its task_struct is hardcoded for performance and ease of use. Any other task is created through fork(2)/clone(2).
My reasoning is that if I were to modify these default settings in init_task.c, it should cover all cases
It doesn't cover anything, really, just the initial priority of init, which can very well override it itself since it runs as root.
The priority of a task can be changed in different ways, but the kernel should not "automatically" change the priority of tasks, at least to my knowledge.
Therefore, if you want to "control" the priority in such a way, your best bet would be to change the code of fork (specifically copy_process() in kernel/fork.c), since forking is the only way to create new processes in Linux.
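To illustrate, a hypothetical (and untested) change inside copy_process() might look like the sketch below; the exact placement after alloc_pid() and the hardcoded nice value of 5 are assumptions, not a tested patch:

    /* Hypothetical sketch inside copy_process() in kernel/fork.c, at a
     * point after alloc_pid() has assigned the child's PID ('p' is the
     * new task_struct). Forces a hardcoded nice level onto SCHED_NORMAL
     * tasks with odd PIDs. */
    if (p->policy == SCHED_NORMAL && (pid_nr(pid) & 1))
        set_user_nice(p, 5);    /* 5 is an arbitrary hardcoded nice value */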
Other than that, there are other syscalls that can end up modifying the priority of a process; taking a quick glance at the kernel code, at least sched_setscheduler (source), sched_setparam (source), and setpriority (source).

Is there a way for me to make parameters adjustable in CANoe?

I'm trying to make parameters adjustable (read: calibratable) in CANoe, like what exists in CANape. In CANape, the menu item makes a parameter adjustable by copying it from the main memory to the pool memory. I'm trying to find a similar function in CANoe.
It would be nice if I were able to automate this and calibrate parameters from within a test script.
After communicating with Vector Support about this issue, they told me that CANoe does not have the capability to tune/adjust XCP parameters.
This is a CANoe limitation; it can only be done in CANape.

Can an OS X application examine the launchd plist which invoked it?

I'm writing an application which will potentially be invoked by more than one launchd job – that is, there might be two distinct LaunchAgent .plist files which invoke the application with different program arguments or in different conditions. I'd like the application to be able to examine the job, or the job's .plist, so it can adjust its behaviour based on what it finds there.
In particular, supposing a program foo could be started up from both A.plist and B.plist, I'd like the program to be able to preserve different state depending on which job/plist invoked it. If all I can do is detect the (presumed distinct) Label of the job, that will be enough (though more would be better).
The obvious ways to do this are using different flags in the ProgramArguments array in the job, or to set different values in EnvironmentVariables, but both of those feel fragile, both imply duplication of bits of configuration, and both require extra documentation (“copy the value of the Label into EnvironmentVariables field FOO...; don't ask why”).
I can see the function SMJobCopyDictionary. With that, it appears that I can get access to the job's dictionary – i.e., this information is available in principle – but I need to know the job's label first. The function SMCopyAllJobDictionaries allows me to iterate through all of the jobs, but it's not obvious how I'd find the one which invoked a particular instance of the application.
Googling launchd read job label or launchd self dict (or similar) doesn't come up with anything useful.
Take a look at the SampleD code on Apple's site. This code shows how daemons can access calling launchd information.
Looking at the launch.h header, I suspect LAUNCH_JOBKEY_LABEL is what you are after.
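For what it's worth, the check-in dance in that sample looks roughly like the sketch below; this uses the legacy launch.h API (since deprecated), so treat it as illustrative rather than definitive:

    #include <launch.h>
    #include <stdio.h>

    int main(void)
    {
        /* Check in with launchd and ask for this job's dictionary. */
        launch_data_t req  = launch_data_new_string(LAUNCH_KEY_CHECKIN);
        launch_data_t resp = launch_msg(req);
        launch_data_free(req);

        if (resp && launch_data_get_type(resp) == LAUNCH_DATA_DICTIONARY) {
            launch_data_t label = launch_data_dict_lookup(resp, LAUNCH_JOBKEY_LABEL);
            if (label)
                printf("invoked by job: %s\n", launch_data_get_string(label));
        }
        return 0;
    }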
Using the ProgramArguments is, in fact, probably the best way to handle this. It will make your code more portable (making it possible to launch in Linux, for example, or under a different launch daemon) to use program arguments that configure the behavior rather than relying on the program invocation chain.
With regard to program arguments, though, I would suggest not copying the label, but rather using the specific flags that differ between the jobs. For example, you might set --checkpoint-file and --log-file in both A.plist and B.plist to files that happen to include "A" or "B" in their names. Doing this will make the application code that depends on --checkpoint-file and --log-file much clearer than if it made arbitrary choices based on some label, and it will also allow you to run the command directly from the command line and have it still work despite there being no invoking plist file.
Summary: I think the answer here is a qualified no.
Graham Miln's answer (which I've accepted) shows that this is in fact possible, but the interface he mentions is apparently documented nowhere other than in this sample code, and possibly also in Mac OS X Internals: A Systems Approach (Amit Singh, http://osxbook.com; thanks to Mitchell J Laurren-Ring for pointing this out).
I also asked about this on the launchd-dev list (see the thread starting with my question), and Damien Sorresso suggested that this interface was “really only so that jobs can check in with launchd to obtain listening sockets. They were way too general at the outset.”
Put together, I'm left with the feeling that, while the behaviour I want seems possible, and while I think it's neater in this case than passing more state in environment variables, the API to support such behaviour is, if not deprecated, then at least somewhat discouraged, and at any rate not idiomatic.

Are there any good reference implementations available for command line implementations for embedded systems?

I am aware that this is nothing new and has been done several times. But I am looking for some reference implementation (or even just a reference design) as a "best practices" guide. We have a real-time embedded environment, and the idea is to be able to use a "debug shell" in order to invoke some commands. Example: "SomeDevice print reg xyz" will request the SomeDevice sub-system to print the value of the register named xyz.
I have a small set of routines that is essentially made up of 3 functions and a lookup table:
a function that gathers a command line - it's simple; there's no command line history or anything, just the ability to backspace or press escape to discard the whole thing. But if I thought fancier editing capabilities were needed, it wouldn't be too hard to add them here.
a function that parses a line of text argc/argv style (see Parse string into argv/argc for some ideas on this)
a function that takes the first arg on the parsed command line and looks it up in a table of commands & function pointers to determine which function to call for the command, so the command handlers just need to match the prototype:
int command_handler( int argc, char* argv[]);
Then that function is called with the appropriate argc/argv parameters.
Actually, the lookup table also has pointers to basic help text for each command, and if the command is followed by '-?' or '/?', that bit of help text is displayed. Also, if 'help' is used as a command, the command table is dumped (possibly only a subset, if a parameter is passed to the 'help' command).
Sorry, I can't post the actual source - but it's pretty simple and straightforward to implement, and functional enough for pretty much all the command-line handling needs I've had for embedded systems development.
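Since the actual source can't be posted, here is a minimal sketch of the same lookup-table-and-dispatcher idea; the table layout and the cmd_reg example handler are made up for illustration:

    #include <stdio.h>
    #include <string.h>

    typedef int (*cmd_fn)(int argc, char *argv[]);

    struct command {
        const char *name;     /* first token on the command line */
        cmd_fn      handler;  /* called with the parsed argc/argv */
        const char *help;     /* shown for '-?', '/?' and 'help'  */
    };

    /* Example handler; a real one would touch actual hardware. */
    static int cmd_reg(int argc, char *argv[])
    {
        if (argc < 2) {
            printf("usage: reg <name>\n");
            return -1;
        }
        printf("reg %s = 0x1234\n", argv[1]);   /* stand-in value */
        return 0;
    }

    static const struct command commands[] = {
        { "reg", cmd_reg, "print the value of a named register" },
        /* more commands go here */
    };

    /* Caller guarantees argc >= 1 (the parser produced at least one token). */
    static int dispatch(int argc, char *argv[])
    {
        for (size_t i = 0; i < sizeof commands / sizeof commands[0]; i++) {
            if (strcmp(argv[0], commands[i].name) != 0)
                continue;
            if (argc > 1 && (!strcmp(argv[1], "-?") || !strcmp(argv[1], "/?"))) {
                printf("%s: %s\n", commands[i].name, commands[i].help);
                return 0;
            }
            return commands[i].handler(argc, argv);
        }
        printf("unknown command: %s\n", argv[0]);
        return -1;
    }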
You might bristle at this response, but many years ago we did something like this for a large-scale embedded telecom system using lex/yacc (nowadays I guess it would be flex/bison; this was literally 20 years ago).
Define your grammar, define ranges for parameters, etc... and then let lex/yacc generate the code.
There is a bit of a learning curve, as opposed to rolling a one-off custom implementation, but then you can extend the grammar, add new commands & parameters, change ranges, etc... extremely quickly.
You could check out libcli. It emulates Cisco's CLI and apparently also includes a telnet server. That might be more than you are looking for, but it might still be useful as a reference.
If your needs are quite basic, a debug menu which accepts simple keystrokes, rather than a command shell, is one way of doing this.
For registers and RAM, you could have a sub-menu which just does a memory dump on demand.
Likewise, to enable or disable individual features, you can control them via keystrokes from the main menu or sub-menus.
One way of implementing this is via a simple state machine. Each screen has a corresponding state which waits for a keystroke, and then changes state and/or updates the screen as required.
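A sketch of that state-machine approach (getkey() and dump_memory() are hypothetical stand-ins for your platform's input and output routines):

    /* Hypothetical platform hooks: blocking single-keystroke input and
     * a hex dump of a memory region. */
    extern int  getkey(void);
    extern void dump_memory(unsigned long addr, unsigned int len);

    enum menu_state { MAIN_MENU, MEMORY_MENU };

    void menu_task(void)
    {
        enum menu_state state = MAIN_MENU;

        for (;;) {
            int key = getkey();
            switch (state) {
            case MAIN_MENU:
                if (key == 'm')                   /* enter memory sub-menu */
                    state = MEMORY_MENU;
                break;
            case MEMORY_MENU:
                if (key == 'd')                   /* dump a fixed region   */
                    dump_memory(0x20000000UL, 64);
                else if (key == 'q')              /* back to the main menu */
                    state = MAIN_MENU;
                break;
            }
        }
    }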
vxWorks includes a command shell that embeds the symbol table and implements a C expression evaluator, so that you can call functions, evaluate expressions, and access global symbols at runtime. The expression evaluator supports integer and string constants.
When I worked on a project that migrated from vxWorks to embOS, I implemented the same functionality. Embedding the symbol table required a bit of gymnastics, since it does not exist until after linking. I used a post-build step to parse the output of the GNU nm tool and create a symbol table as a separate load module. In an earlier version I did not embed the symbol table at all, but rather created a host-shell program that ran on the development host, where the symbol table resided, and communicated with a debug stub on the target that could perform function calls to arbitrary addresses and read/write arbitrary memory. This approach is better suited to memory-constrained devices, but you have to be careful that the symbol table you are using and the code on the target are from the same build. Again, that was an idea I borrowed from vxWorks, which supports both the target- and host-based shell with the same functionality. For the host shell, vxWorks checksums the code to ensure the symbol table matches; in my case it was a manual (and error-prone) process, which is why I implemented the embedded symbol table.
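As a rough illustration of that post-build step, a helper along these lines can turn nm output into a compilable symbol table (simplified; it assumes nm was run with --defined-only so that every line has an address field):

    /* Hypothetical post-build helper: read "nm --defined-only" output
     * ("<addr> <type> <name>") on stdin and emit a C symbol table. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long addr;
        char type, name[256];

        puts("struct sym { unsigned long addr; const char *name; };");
        puts("const struct sym symtab[] = {");
        while (scanf("%lx %c %255s", &addr, &type, name) == 3) {
            if (type == 'T' || type == 't')   /* text (function) symbols */
                printf("    { 0x%lxUL, \"%s\" },\n", addr, name);
        }
        puts("    { 0, 0 }");
        puts("};");
        return 0;
    }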
Although initially I only implemented memory read/write and function-call capability, I later added an expression evaluator based on the algorithm (but not the code) described here. After that I added simple scripting capabilities in the form of if-else, while, and procedure-call constructs (using a very simple non-C syntax). So if you wanted new functionality or a new test, you could either write a new function or create a script (if performance was not an issue); the functions were rather like 'built-ins' to the scripting language.
To perform the arbitrary function calls, I used a function pointer typedef that took an arbitrarily large number (24) of arguments. Using the symbol table, you find the function address, cast it to the function pointer type, and pass it the real arguments, plus enough dummy arguments to make up the expected number and thus create a suitable (if wasteful) stack frame.
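That trick looks something like the sketch below (eight argument slots shown instead of 24, and the names are hypothetical; it relies on the platform's calling convention tolerating extra arguments):

    /* A deliberately over-wide function pointer type; unused slots are
     * padded with zeros so any function with fewer arguments still gets
     * a plausible (if wasteful) stack frame. */
    typedef long (*generic_fn)(long, long, long, long,
                               long, long, long, long);

    long call_by_address(unsigned long addr, const long *args, int nargs)
    {
        long a[8] = { 0 };                 /* dummy-pad unused slots */
        for (int i = 0; i < nargs && i < 8; i++)
            a[i] = args[i];
        generic_fn fn = (generic_fn)addr;  /* address from the symbol table */
        return fn(a[0], a[1], a[2], a[3], a[4], a[5], a[6], a[7]);
    }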
On other systems I have implemented a Forth threaded interpreter, which is a very simple language to implement but perhaps has a less than user-friendly syntax. You could equally embed an existing solution such as Lua or Ch.
For a small, lightweight thing you could use Forth. It's easy to get going (Forth kernels are SMALL).
Look at figForth, LINa and GnuForth.
Disclaimer: I don't Forth, but OpenBoot and the PCI bus do, and I've used them and they work really well.
Alternative UIs
Deploy a web server on your embedded device instead. Even serial will work with SLIP, and the UI can be reasonably complex (or even serve up a JAR and get really, really complex).
If you really need a CLI, then you can point at a link and get a telnet session.
One alternative is to use a very simple binary protocol to transfer the data you need, and then make a user interface on the PC, using e.g. Python or whatever is your favourite development tool.
The advantage is that it minimises the code in the embedded device, and shifts as much of it as possible to the PC side. That's good because:
It uses up less embedded code space—much of the code is on the PC instead.
In many cases it's easier to develop a given functionality on the PC, with the PC's greater tools and resources.
It gives you more interface options. You can use just a command line interface if you want. Or, you could go for a GUI, with graphs, data logging, whatever fancy stuff you might want.
It gives you flexibility. Embedded code is harder to upgrade than PC code. You can change and improve your PC-based tool whenever you want, without having to make any changes to the embedded device.
If you want to look at variables: if your PC tool is able to read the ELF file generated by the linker, then it can find a variable's location from the symbol table. Even better, read the DWARF debug data and know the variable's type as well. Then all you need on the embedded device is a "read memory" protocol message, and the PC does the decoding and displaying.
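On the target side, such a protocol can be as small as a single handler; here is a sketch (the frame layout and uart_write() are made-up assumptions, and it assumes a little-endian target):

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical transport hook: send 'len' raw bytes back to the PC. */
    extern void uart_write(const uint8_t *data, uint16_t len);

    #define CMD_READ_MEM 0x01   /* made-up command byte */

    /* Handle one received frame: [cmd:1][addr:4][len:2], little-endian. */
    void handle_frame(const uint8_t *frame)
    {
        if (frame[0] == CMD_READ_MEM) {
            uint32_t addr;
            uint16_t len;
            memcpy(&addr, &frame[1], sizeof addr);  /* avoid unaligned reads */
            memcpy(&len,  &frame[5], sizeof len);
            uart_write((const uint8_t *)(uintptr_t)addr, len);
        }
    }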
