Systemd: Requires AND Wants

I've got a systemd service which has both a Requires= and a Wants= directive, i.e.:
[Unit]
Description="Some service"
Requires= some-unit.target
Wants= some-unit.target
Is this incorrect or is it valid to have both?
What is the behaviour? i.e. does it fall back on Wants behaviour if unable to satisfy Requires?

To answer your question about validity: that can be checked with systemd-analyze verify. It reports no errors when using the combination. However, perhaps it should, because the combination expresses a confused intent.
What is the behaviour? i.e. does it fall back on Wants behaviour if unable to satisfy Requires?
To be certain of the behavior, mock up some simple dummy units and check.
My expectation is that Wants= is overridden by Requires= and has no effect. That's based on the docs in man systemd.unit, which state that Wants= is simply a weaker version of Requires=.
There is no indication in the documentation that the behavior of Requires= would be modified just because a Wants= directive is also present.
For maximum clarity, pick which behavior you really want and remove the other directive.
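As a minimal sketch of such a check (unit names and paths are invented; both files would go under /etc/systemd/system/):

dep.service:
[Unit]
Description=Dummy dependency

[Service]
Type=oneshot
ExecStart=/bin/true

main.service:
[Unit]
Description=Unit with both directives
Requires=dep.service
Wants=dep.service

[Service]
Type=oneshot
ExecStart=/bin/true

Running systemd-analyze verify main.service on this pair should again report nothing. With the dependency masked (systemctl mask dep.service), systemctl start main.service should fail rather than start without it - that is, the stronger Requires= semantics appear to win, which matches the expectation above.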


Enable/Disable Mode of a Resource in OneM2M

Let's assume I have a container that keeps temperature data. For some reason I don't want to delete the container, but I want to disable it so that it does not accept any contentInstance until I enable it again. A similar example applies to an AE: if an AE is disabled, I want it to not accept any requests. The same goes for subscriptions: if a subscription is disabled, then no notifications should be sent anywhere.
Is there a suitable way to implement this in oneM2M, or should we handle it outside the oneM2M scope?
I assume that there is a management AE that decides whether access to the <container> shall be limited. One possibility that works without changing the <container> and its structure would be for this AE to update or add the <ACP> for this resource. That <ACP> would then forbid the creation of new <contentInstance> resources. The same can be done with other resources, e.g. an <AE>.
You might also have a look at the disableRetrieval attribute of <container>. Perhaps this is helpful as well.
This method, however, does not work with <subscription> resources. There you can change the eventNotificationCriteria to criteria that can never be met, or you can change the allowed schedule to an empty list. But please be aware that these solutions are hacks.
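As a rough sketch of the <ACP> idea, using the oneM2M JSON short-name serialization (resource name and originator IDs are invented), the privileges could grant RETRIEVE and DISCOVERY but omit CREATE, so that new <contentInstance> resources can no longer be created under the <container>:

{
  "m2m:acp": {
    "rn": "tempContainerAcp",
    "pv": {
      "acr": [
        { "acor": [ "CtemperatureSensor" ], "acop": 34 }
      ]
    },
    "pvs": {
      "acr": [
        { "acor": [ "CmanagementAE" ], "acop": 63 }
      ]
    }
  }
}

Here acop is the usual operations bit mask (CREATE=1, RETRIEVE=2, UPDATE=4, DELETE=8, NOTIFY=16, DISCOVERY=32), so 34 allows only RETRIEVE and DISCOVERY, while the management AE keeps full rights (63) through the self-privileges. Re-enabling the container is then just another update of this <ACP>.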

Is there a way to avoid managing selected Puppet resource properties?

The Problem:
I'm trying to have Puppet manage some of the details of several scheduled tasks without managing whether those tasks are enabled. To that end, I declare scheduled_task resources without any explicit enabled attributes, with the intention of communicating that whether the tasks are enabled is not to be altered by Puppet. Puppet, however, is persistent in re-enabling the tasks on every run, just as if I had specified enabled => true for each of them. How can I make it stop doing that?
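For illustration, the declarations look roughly like this (task name, command and trigger are hypothetical); note that no enabled attribute is given:

scheduled_task { 'nightly_cleanup':
  ensure    => present,
  command   => 'C:\Windows\System32\cmd.exe',
  arguments => '/c C:\scripts\cleanup.bat',
  trigger   => {
    schedule   => 'daily',
    start_time => '02:00',
  },
}

Even though enabled is never mentioned, every agent run flips the task back to enabled, exactly as if enabled => true had been declared.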
Already Tried:
I've considered setting the attributes for each datacenter via Hiera, but the reality is that this makes failovers and switches more complicated than necessary; I don't want to change my Puppet code every time that needs to happen. I also can't shut off the puppet-agent runs, because I need to maintain the integrity of our rolling deployment system.
Providers?
I've read a little about providers; it seems like I could handle the behavior there. However, I'm having a hard time finding any documentation that explains how to use them to override specific resource properties.
Notify/Subscribe:
I've thought about using notify/subscribe to only set the triggers to enabled on creation. I don't think this is the right solution, because it's not one resource subscribing to / notifying another; it's properties being set on a single resource. If there's some magical way of doing this or something similar, I'd love to know.
Bottom line: I just need Puppet to create the tasks disabled and then let me turn them on and off without it changing that state on subsequent runs.
Is there a way to ignore resource attribute defaults in Puppet entirely?
Only by explicitly declaring a value for every attribute of every resource.
But that doesn't seem to be the question you really wanted to ask. You seem to be exploring ways to assign attribute values without specifying them as literals in your resource declarations, and I guess you're looking for some kind of layered approach, with the bottom layer replacing or overriding types' and providers' built-in defaults.
As you thought, you could conceivably do this at least in part via providers. You would need to write a custom provider for your resource type and specify that it be used by each resource instance. But don't. This way is complicated to implement and confusing to anyone who later has to read your manifests (maybe including future you).
Notify / subscribe, on the other hand, are simply the wrong tools for the job. They do not do what you seem to think they do, or perhaps you just haven't thought that idea through.
I think you're selling Hiera short and / or inappropriately minimizing the complexity of the task. Probably a mixture of both -- I'm inclined to guess that Hiera can do more for you than you appreciate, but also that the complications you envision will manifest to some extent, in some form, no matter what you do.
Nevertheless, there is an approach that seems to match pretty closely what I think you want: resource default declarations (I link to the latest docs, but this feature is present, in substantially the same form, in every Puppet version released in at least the last nine years). Thus, you might cause all scheduled_task resources to be disabled unless you explicitly say otherwise by putting this resource default declaration at an appropriate place:
Scheduled_task {
  enabled => false,
}
When choosing where to put such a statement, do note that, unlike anything else in modern Puppet, resource default statements have dynamic scope. The manual discusses that in somewhat more detail.
Revisited:
In light of the clarification of the question, I'll clarify that resource types' providers are where resource management behavior lives. Therefore, if Puppet's behavior on target machines is not what you want in the event of an altogether missing property declaration, then modifying the provider or writing your own are pretty much your only alternatives. Of course, if you have a support contract then perhaps you don't have to do that in-house.
If you don't want to hack on providers -- an altogether reasonable position -- then you're left with managing the properties to their desired values. Supposing that you employ Hiera effectively, however, you should not need to modify any manifests to control which servers have their tasks enabled.
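As a hedged sketch of that Hiera-driven approach (the lookup key, hierarchy level and task details are invented):

# e.g. in data/datacenter/dc1.yaml:
#   profile::tasks::cleanup_enabled: false

$cleanup_enabled = lookup('profile::tasks::cleanup_enabled', Boolean, 'first', true)

scheduled_task { 'nightly_cleanup':
  ensure  => present,
  command => 'C:\Windows\System32\cmd.exe',
  trigger => { schedule => 'daily', start_time => '02:00' },
  enabled => $cleanup_enabled,
}

A failover then becomes a one-line change in the datacenter-level Hiera data rather than an edit to any manifest.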

How to document undefined behaviour in the Scrum/agile/TDD process [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 5 years ago.
We're using a semi-agile process at the moment where we still write a design/specification document and update it during the development process.
When we're refining our requirements we often hit edge cases where we decide it's not important to handle them, so we don't write any code for those use cases and we don't test them. In the design spec we explicitly state that the scenario is out of scope because the system isn't designed to be used in that way.
In a more fully-fledged agile process, the tests are supposed to act as a specification for the expected behaviour of the system, but how would you record the fact that a certain scenario is explicitly out-of-scope rather than just getting accidentally missed out?
As a bit of clarification, here's the situation I'm trying to avoid: We have discussed a scenario and decided we won't handle it because it doesn't make sense. Then later on, when someone is trying to write the user guide, or give a training session, or a customer calls the help desk, exactly the same scenario comes up, so they ask me how the system handles it, and I think "I remember talking about this a year ago, but there are no tests for it. Maybe it got missed off the plan, or maybe we decided it wasn't a sensible use case, or maybe there's a subtle reason why you can't actually ever get into that situation", so I have to try and search old Skype chats or emails to find out the answer. What I want to achieve is to make sure we have a record of why we decided not to support that scenario so that I can refer back to it in the future. At the moment I put this in the spec where everyone can see it.
I would document deliberately unsupported use cases/stories/requirements/features in your test files, which are much more likely to be regularly consulted, updated, etc. than specifications would be. I'd document each unsupported feature in the highest-level test file in which it was appropriate to discuss that feature. If it was an entire use case, I'd document it in an acceptance test (e.g. a Cucumber feature file or RSpec feature spec); if it was a detail I might document it in a unit test.
By "document" I mean that I'd write a test if I could. If not, I'd just comment. Which one would depend on the feature:
For features that a user might expect to be there, but for which there is no way for the user to access (e.g. a link or menu item that simply isn't present), I'd write a comment in the appropriate acceptance test file, next to the tests of the related features that do exist.
Side note: Some testing tools (e.g. Cucumber and RSpec) also allow you to have scenarios or examples in feature or spec files which aren't actually run, so you can use them like comments. I'd only do that if those disabled scenarios/examples didn't result in messages when you ran the tests that might make someone think that something was broken or unfinished. For example, RSpec's pending/skip loudly announces that there is work left to be done, so it would probably be annoying to use it for cases that were never meant to be implemented.
For situations that you decided not to handle, but which an inquisitive user might get themselves into anyway (e.g. entering an invalid value into a field or editing a URL to access a page for which they don't have permission), don't just ignore them, handle them in a clean if minimal way: quietly clear the invalid value, redirect the user to the home page, etc. Document this behavior in tests, perhaps with a comment explaining why you aren't doing anything even more helpful. It's not a lot of extra work, and it's a lot better than showing the user an error page or other alarming behavior.
For situations like the previous, but which you decided not to handle at all (or couldn't find a way to), you can still write a test that documents the current situation - for example, that entering some invalid value into a form results in an HTTP 500. (A sketch of this kind of documenting test follows this list.)
If you would like to write a test, but for some reason you just can't, there are always comments -- again, in the appropriate test file near tests of related things that are implemented.
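To make the "handle it minimally and document the behaviour in a test" advice above concrete, here is a rough RSpec sketch (the spec file, class and attribute are all invented, and Rails/RSpec are assumed only because RSpec is mentioned above):

# spec/models/profile_spec.rb (hypothetical)
require 'rails_helper'

RSpec.describe Profile, type: :model do
  # Out of scope by decision: negative ages are never rejected with a
  # validation error; they are quietly cleared, because the UI offers no
  # way to enter one. See the design spec for the original discussion.
  it 'quietly clears a negative age instead of raising an error' do
    profile = Profile.new(age: -3)
    expect(profile.age).to be_nil
  end
end

The test pins down the minimal handling that was chosen, and the comment records that the omission is deliberate.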
You should never test undefined behavior, by ...definition. The moment you test a behavior, you are defining it.
In practice, either it's valuable to handle an edge case or it isn't. If it is, then there should be a user story for it, which acts as documentation for that edge case. What you don't want is an old user story documenting a future behavior, so it's probably not advisable to document undefined behavior in stories that don't handle it.
More generally, agile development always works iteratively. Edge-case discovery is part of iterative work: with work comes increased knowledge, and with increased knowledge comes more work. It is important to capture these discoveries in new stories, instead of trying to handle everything in one go.
For example, suppose we're developing Stack Overflow and we're doing this story:
As a user I want to search questions so that I can find them
The team develops a simple question search and discovers that we need to handle closed questions... we hadn't thought of that! So we simply don't handle them (whatever the simplest-to-implement behavior is). Notice that the story doesn't document anything about closed questions in the results. We then add a new story:
As a user I want to specifically search closed questions so that I can find more results
We develop this story, and find more edge cases, which are then more stories, etc.
In the design spec we explicitly state that this scenario is out of scope because the system isn't designed to be used in that way
Having undocumented functionality in your product really is a bad practice.
If your development team followed BDD/TDD techniques, they should (note: should, not will) reduce the likelihood of this happening. If you found this edge case, then what makes you think your customer won't? Having an untested and unexpected feature in your product could compromise its stability.
I'd suggest that if an undocumented feature is found:
Find out how it was introduced (common reason: a developer thought it might be a good feature to have as it might be useful in the future and they didn't want to throw away work they produced!)
Discuss the feature with your Business Analysts and Product Owner. Find out if they want such a feature in your product. If they do, great: document and test it. If they don't, remove it, as it could be a liability.
You also had a question regarding the tracking of the outcome of these edge-case scenarios:
What I want to achieve is to make sure we have a record of why we decided not to support that scenario so that I can refer back to it in the future.
As you are writing a design/specification document, one approach you could take is to version that document. Then, when a feature/scenario is taken out you can note within a version change section in your document why the change was made. You can then refer to this change history at a later date.
However, I'd recommend using a planning board to keep track of your user stories. On such a board you could write a note on the (virtual or physical) card explaining why the feature was dropped, which could also be referred to at a later date.

Does the order of attributes in SNMP Traps matter

I am using some SNMP traps for monitoring of applications. Now I was told that some monitoring systems might have problems if the order of the attributes within the traps is not the same as defined in the MIB. Given that the OIDs could easily be used to re-order the attributes, this surprised me, so I tried to find the relevant section of the RFC, but I could find neither a statement that any ordering is allowed nor one that says the order is significant. In other secondary documentation about SNMP I was not able to find anything useful either.
So this is more of a curiosity question, though it could also help in future projects using SNMP. Could anyone point me to the correct documentation on this issue? Or is this something that one piece of software might handle while another might not, so that I should check the documentation for each specific product?
I found the relevant document.
Section 3.1.2 specifies:
The VARIABLES clause, which need not be present, defines the ordered sequence of MIB objects which are contained within every instance of the trap type. Each variable is placed, in order, inside the variable-bindings field of the SNMP Trap-PDU. Note that at the option of the agent, additional variables may follow in the variable-bindings field.
Thanks to the person who pointed this out to me.
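For illustration, a rough SMIv1 trap definition using that clause might look like this (enterprise and variable names are invented):

myAppAlarm TRAP-TYPE
    ENTERPRISE  myApp
    VARIABLES   { myAppAlarmSeverity, myAppAlarmText }
    DESCRIPTION
        "Signals an application alarm; the severity varbind is emitted
        first, the alarm text second."
    ::= 1

So a conformant agent emits the listed varbinds in exactly that order, and a monitoring system may rely on it; per the quoted text, however, the agent may append additional varbinds after them, so receivers should tolerate extras.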

If an assert fails, is there a bug?

I've always followed the logic: if an assert fails, then there is a bug. The root cause could be either:
Assert itself is invalid (bug)
There is a programming error (bug)
(no other options)
I.e., are there any other conclusions one could come to? Are there cases where an assert would fail and there is no bug?
If an assert fails, there is a bug in either the caller or the callee. Why else would there be an assertion?
Yes, there is a bug in the code.
From Code Complete:
Assertions check for conditions that should never occur. [...] If an assertion is fired for an anomalous condition, the corrective action is not merely to handle an error gracefully - the corrective action is to change the program's source code, recompile, and release a new version of the software.
A good way to think of assertions is as executable documentation - you can't rely on them to make the code work, but they can document assumptions more actively than program-language comments can.
That's a good question.
My feeling is that if the assert fails due to your code, then it is a bug. The assertion encodes the expected behaviour/result of your code, so an assertion failure is a failure of your code.
The only exception would be an assert that was meant to flag a warning condition - in which case a special class of assert should have been used.
So yes, any failing assert indicates a bug, as you suggest.
If you are using assertions you're following Bertrand Meyer's Design by Contract philosophy. It's a programming error - the contract (assertion) you have specified is not being followed by the client (caller).
If you are trying to be logically inclusive about all the possibilities, remember that electronic circuitry is known to be affected by radiation from space. If the right photon/particle hits in just the right place at just the right time, it can cause an otherwise logically impossible state transition.
The probability is vanishingly small but still non-zero.
I can think of one case that wouldn't really class as a bug:
An assert placed to check for something external that normally should be there. You're hunting something nutty that occurs on one machine and you want to know if a certain factor is responsible.
A real world example (although from before the era of asserts): If a certain directory was hidden on a certain machine the program would barf. I never found any piece of code that should have cared if the directory was hidden. I had only very limited access to the offending machine (it had a bunch of accounting stuff on it) so I couldn't hunt it properly on the machine and I couldn't reproduce it elsewhere. Something that was done with that machine (the culprit was never identified) occasionally turned that directory hidden.
I finally resorted to putting a test in the startup to see if the directory was hidden and stopping with an error if it was.
No. An assertion failure means something happened that the original programmer did not intend or expect to occur.
This can indicate:
A bug in your code (you are simply calling the method incorrectly)
A bug in the Assertion (the original programmer has been too zealous and is complaining about you doing something that is quite reasonable and that the method will actually handle perfectly well).
A bug in the called code (a design flaw). That is, the called code provides a contract that does not allow you to do what you need to do. The assertion warns you that you can't do things that way, but the solution is to extend the called method to handle your input.
A known but unimplemented feature. Imagine I implement a method that could process positive and negative integers, but I only need it (for now) to handle positive ones. I know that the "perfect" implementation would handle both, but until I actually need it to handle negatives, it is a waste of effort to implement support (and it would add code bloat and possibly slow down my application). So I have considered the case but decided not to implement it until the need is proven. I therefore add an assert to mark this unimplemented code. When I later trigger the assert by passing a negative value in, I know that the additional functionality is now needed, so I must augment the implementation. Deferring writing the code until it is actually required saves me a lot of time (in most cases I never implement the additional feature), but the assert makes sure that I don't get any bugs when I try to use the unimplemented feature.
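A small sketch of that last case (the function and its contract are invented for the example):

#include <assert.h>

/* Hypothetical example: only non-negative quantities are supported for now.
 * The assert marks the known-but-unimplemented negative case; if it ever
 * fires, that is the signal that refund handling now needs to be written. */
int order_total_cents(int quantity, int unit_price_cents)
{
    assert(quantity >= 0 && "negative quantities (refunds) not implemented yet");
    return quantity * unit_price_cents;
}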
