performing logical NOT based on configurable data - avr

I would like to configure my switch input type as either active high or active low. I wrote the code below and I know it will not work, because when I multiply by 0 the condition is always false. One way is to check for ACTIVE_LOW with an if statement and handle it separately, but I am sure there must be a simpler way to perform a logical NOT based on a configuration bit. How can I make the readSwitch() function configurable for both switch input types?
#define ACTIVE_LOW 0
#define ACTIVE_HIGH 1
#define SWITCH_TYPE ACTIVE_LOW
uint8_t readSwitch(uint8_t pinNum)
{
    uint8_t state = 0;
    if (SWITCH_TYPE * (PINC & (1 << pinNum)))
    {
        _delay_ms(20);
        if (SWITCH_TYPE * (PINC & (1 << pinNum)))
        {
            state = 1;
        }
    }
    return state;
}

Based on AterLux's comment and further analysis, the following check takes care of it:
if(!(PINC&(1<<pinNum))^SWITCH_TYPE)
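For completeness, here is a sketch of how the debounced readSwitch() could look with that check (same headers, macros and assumptions as the code above; a sketch, not the canonical fix):

uint8_t readSwitch(uint8_t pinNum)
{
    uint8_t state = 0;
    /* The XOR flips the test when SWITCH_TYPE is ACTIVE_HIGH, so the branch
       is taken only when the switch is in its configured active state. */
    if (!(PINC & (1 << pinNum)) ^ SWITCH_TYPE)
    {
        _delay_ms(20);                               /* debounce */
        if (!(PINC & (1 << pinNum)) ^ SWITCH_TYPE)
        {
            state = 1;
        }
    }
    return state;
}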


How to get the timestamp of when a disk is made offline from diskmgmt or other ways in windows?

I want to know the time when a disk is made offline by user. Is there a way to know this through WMI classes or other ways?
If you cannot find a way to do it through the Win32 API/WMI or similar, I do know of an alternative which you could look into as a last resort.
What about using NtQueryVolumeInformationFile with the FileFsVolumeInformation class? You can do this to retrieve the data about the volume and then access the data through the FILE_FS_VOLUME_INFORMATION structure. This includes the creation time.
At the end of the post, I've left some resource links so you can read more and finish the implementation the way you'd like. One important caveat first: the documentation will lead you to an enum definition for _FSINFOCLASS, but copy-pasting it from MSDN as-is probably won't work. You need to set the first entry of the enum to 1 manually; otherwise the first entry defaults to 0, every following entry is off by one, and NtQueryVolumeInformationFile returns STATUS_INVALID_INFO_CLASS.
Here is the edited version which should work.
typedef enum _FSINFOCLASS {
    FileFsVolumeInformation = 1,
    FileFsLabelInformation,
    FileFsSizeInformation,
    FileFsDeviceInformation,
    FileFsAttributeInformation,
    FileFsControlInformation,
    FileFsFullSizeInformation,
    FileFsObjectIdInformation,
    FileFsDriverPathInformation,
    FileFsVolumeFlagsInformation,
    FileFsSectorSizeInformation,
    FileFsDataCopyInformation,
    FileFsMetadataSizeInformation,
    FileFsMaximumInformation
} FS_INFORMATION_CLASS, *PFS_INFORMATION_CLASS;
Once you've opened a handle to the disk, you can call NtQueryVolumeInformationFile like this:
NTSTATUS NtStatus = 0;
HANDLE FileHandle = NULL;
IO_STATUS_BLOCK IoStatusBlock = { 0 };
FILE_FS_VOLUME_INFORMATION FsVolumeInformation = { 0 };
...
Open the handle to the disk here, and then check that you have a valid handle.
...
NtStatus = NtQueryVolumeInformationFile(FileHandle,
                                        &IoStatusBlock,
                                        &FsVolumeInformation,
                                        sizeof(FILE_FS_VOLUME_INFORMATION),
                                        FileFsVolumeInformation);
...
If NtStatus is an NTSTATUS success code (e.g. STATUS_SUCCESS), you can access the VolumeCreationTime (LARGE_INTEGER) field of the FILE_FS_VOLUME_INFORMATION structure through the FsVolumeInformation variable.
Your final task at this point will be using the LARGE_INTEGER field named VolumeCreationTime to gather proper time/date information. There are two links included at the end of the post which are focused on that topic; they should help you sort it out.
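As a rough sketch of that last step (my own addition, not from the original post): VolumeCreationTime uses the same 100-nanosecond system time format as FILETIME, so you can copy its parts into a FILETIME and pass it to FileTimeToSystemTime.

// Sketch: convert VolumeCreationTime (a LARGE_INTEGER) into a human-readable SYSTEMTIME (UTC).
FILETIME FileTime = { 0 };
SYSTEMTIME SystemTime = { 0 };
FileTime.dwLowDateTime = FsVolumeInformation.VolumeCreationTime.LowPart;
FileTime.dwHighDateTime = (DWORD) FsVolumeInformation.VolumeCreationTime.HighPart;
if (FileTimeToSystemTime(&FileTime, &SystemTime))
{
    // SystemTime.wYear, SystemTime.wMonth, SystemTime.wDay etc. now hold the creation time.
}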
See the following for more information.
https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/content/ntifs/nf-ntifs-ntqueryvolumeinformationfile
https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/content/wdm/ne-wdm-_fsinfoclass
https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/content/ntddk/ns-ntddk-_file_fs_volume_information
https://msdn.microsoft.com/en-us/library/windows/desktop/ms724280.aspx
https://blogs.msdn.microsoft.com/joshpoley/2007/12/19/datetime-formats-and-conversions/

How can flex return multiple terminals at one time

In order to make my question easy to understand, I want to use the following example. The following code is called a non-block DO loop in the Fortran language:
DO 20 I=1, N ! line 1
DO 20 J=1, N ! line 2
! more codes
20 CONTINUE ! line 4
Note that the label 20 at line 4 marks the end of both the inner DO loop and the outer DO loop.
I want my flex program to handle this feature correctly: when flex reads the label 20, it should return the ENDDO terminal twice.
Firstly, because I also use bison, every time bison calls yylex() it gets one terminal. If I could ask bison to get terminals from yylex() in some cases and from another function in other cases, maybe I could solve this problem; however, I have no idea how to do that.
Of course there are some workarounds; for example, I could use flex's start conditions, but I don't think that is a good solution. So I ask: is there any way to solve this without a workaround?
It is easy enough to modify the lexical scanner produced by (f)lex to implement a token queue, but that is not necessarily the optimal solution. (See below for a better solution.) (Also, it is really not clear to me that for your particular problem, fabricating the extra token in the lexer is truly appropriate.)
The general approach is to insert code at the top of the yylex function, which you can do by placing the code immediately after the %% line and before the first rule. (The code must be indented so that it is not interpreted as a rule.) For non-reentrant scanners, this will typically involve the use of a local static variable to hold the queue. For a simple but dumb example, using the C API but compiling with C++ so as to have access to the C++ standard library:
%%
    /* This code will be executed each time `yylex` is called, before
     * any generated code. It may include declarations, even if compiled
     * with C89.
     */
    static std::deque<int> tokenq;
    if (!tokenq.empty()) {
        int token = tokenq.front();
        tokenq.pop_front();
        return token;
    }

[[:digit:]]+   { /* match a number and return that many HELLO tokens */
                 int n = atoi(yytext);
                 for (int i = 0; i < n; ++i)
                     tokenq.push_back(HELLO);
               }
The above code makes no attempt to provide a semantic value for the queued tokens; you could achieve that using something like a std::queue<std::pair<int, YYSTYPE>> for the token queue, but the fact that YYSTYPE is typically a union will make for some complications. Also, if that were the only reason to use the token queue, it is obvious that it could be replaced with a simple counter, which would be much more efficient. See, for example, this answer which does something vaguely similar to your question (and take note of the suggestions in Note 1 of that answer).
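As a hedged illustration of that last remark (my example, not from the original answer): when every queued token is identical and carries no semantic value, a plain counter does the same job without any allocation.

%%
    /* Sketch: a pending-token counter instead of a queue. Assumes HELLO carries no value. */
    static int pending_hellos = 0;
    if (pending_hellos > 0) {
        --pending_hellos;
        return HELLO;
    }

[[:digit:]]+   { /* emit the first HELLO now, remember how many are still owed */
                 int n = atoi(yytext);
                 if (n > 0) {
                     pending_hellos = n - 1;
                     return HELLO;
                 }
               }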
Better alternative: Use a push parser
Although the token queue solution is attractive and simple, it is rarely the best solution. In most cases, code will be clearer and easier to write if you request bison to produce a "push parser". With a push parser, the parser is called by the lexer every time a token is available. This makes it trivial to return multiple tokens from a lexer action; you just call the parser for each token. Similarly, if a rule doesn't produce any tokens, it simply fails to call the parser. In this model, the only lexer action which actually returns is the <<EOF>> rule, and it only does so after calling the parser with the END token to indicate that parsing is complete.
Unfortunately, the interface for push parsers is not only subject to change, as that manual link indicates; it is also very badly documented. So here is a simple but complete example which shows how it is done.
The push parser keeps its state in a yypstate structure, which needs to be passed to the parser on each call. Since the lexer is called only once for each input file, it is reasonable for the lexer to own that structure, which can be done as above with a local static variable [Note 1]: the parser state is initialized when yylex is called, and the EOF rule deletes the parser state in order to reclaim whatever memory it is using.
It is usually most convenient to build a reentrant push parser, which means that the parser does not rely on the global yylval variable [Note 2]. Instead, a pointer to the semantic value must be provided as an additional argument to yypush_parse. If your parser doesn't refer to the semantic value for the particular token type, you can provide NULL for this argument. Or, as in the code below, you can use a local semantic value variable in the lexer. It is not necessary that every call to the push parser provide the same pointer. In all, the changes to the scanner definition are minimal:
%%
    /* Initialize a parser state object */
    yypstate* pstate = yypstate_new();
    /* A semantic value which can be sent to the parser on each call */
    YYSTYPE yylval;

    /* Some example scanner actions */
"keyword"      { /* Simple keyword which just sends a value-less token */
                 yypush_parse(pstate, TK_KEYWORD, NULL); /* See Note 3 */
               }
[[:digit:]]+   { /* Token with a semantic value */
                 yylval.num = atoi(yytext);
                 yypush_parse(pstate, TK_NUMBER, &yylval);
               }
"dice-roll"    { /* sends three random numbers */
                 for (int i = 0; i < 3; ++i) {
                     yylval.num = rand() % 6;
                     yypush_parse(pstate, TK_NUMBER, &yylval);
                 }
               }
<<EOF>>        { /* Obligatory EOF rule */
                 /* Send the parser the end token (0) */
                 int status = yypush_parse(pstate, 0, NULL);
                 /* Free the pstate */
                 yypstate_delete(pstate);
                 /* return the parser status; 0 is success */
                 return status;
               }
In the parser, not much needs to be changed at all, other than adding the necessary declarations: [Note 4]
%define api.pure full
%define api.push-pull push
Notes
If you were building a reentrant lexer as well, you would use the extra data section of the lexer state object instead of static variables.
If you are using location objects in your parser to track source code locations, this also applies to yylloc.
The example code does not do a good job of detecting errors, since it doesn't check return codes from the calls to yypush_parse. One solution I commonly use is some variant on the macro SEND:
#define SEND(token) do {                                    \
        int status = yypush_parse(pstate, token, &yylval);  \
        if (status != YYPUSH_MORE) {                        \
            yypstate_delete(pstate);                        \
            return status;                                  \
        }                                                   \
    } while (0)
It's also possible to use a goto to avoid the multiple instances of the yypstate_delete and return. YMMV.
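For illustration, a rule using that SEND macro might look like this (a sketch reusing the same hypothetical TK_NUMBER token as above):

[[:digit:]]+   { /* send a number token; SEND returns from yylex if the parser is finished */
                 yylval.num = atoi(yytext);
                 SEND(TK_NUMBER);
               }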
You may have to modify the prototype of yyerror. If you are using locations and/or providing extra parameters to the push parser, the location object and/or the extra parameters will also be present in the yyerror call. (The error string is always the last parameter.) For whatever reason, the parser state object is not provided to yyerror, which means that yyerror no longer has access to variables such as yychar; these are now members of the yypstate structure rather than global variables, so if you use them in your error reporting (which is not really recommended practice), you will have to find an alternative solution.
Thanks to one of my friends, who provided a way to achieve this:
If I can ask bison to get terminals from yylex() in some cases, and from another function in other cases
In the flex-generated flex.cpp code, there is a macro:
/* Default declaration of generated scanner - a define so the user can
* easily add parameters.
*/
#ifndef YY_DECL
#define YY_DECL_IS_OURS 1
extern int yylex (void);
#define YY_DECL int yylex (void)
#endif /* !YY_DECL */
so I can "rename" flex's yylex() function to another function like pure_yylex().
So my problem is solved by:
push all the terminals I want to give to bison into a global vector<int>
implement a yylex() function myself; when bison calls yylex(), this function first tries to get terminals from that global vector<int>
if the vector<int> is empty, yylex() calls pure_yylex(), and flex starts to work
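A minimal sketch of that arrangement (pure_yylex and token_queue are illustrative names; the flex actions are assumed to push terminals such as ENDDO into the queue):

/* Definitions section of the .l file, so it is seen before the generated scanner: */
%{
#include <vector>
#define YY_DECL int pure_yylex(void)   /* the generated scanner becomes pure_yylex() */
std::vector<int> token_queue;          /* terminals waiting to be handed to bison */
%}

/* User code section of the .l file: the yylex() that bison actually calls. */
int yylex(void)
{
    /* Hand out queued terminals first, in FIFO order. */
    if (!token_queue.empty()) {
        int token = token_queue.front();
        token_queue.erase(token_queue.begin());
        return token;
    }
    /* Otherwise let flex scan the input as usual; its actions may push into token_queue. */
    return pure_yylex();
}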

lock-free synchronization, fences and memory order (store operation with acquire semantics)

I am migrating a project that used to run on bare metal to Linux, and need to eliminate some {disable,enable}_scheduler calls. :)
So I need a lock-free sync solution for a single-writer, multiple-reader scenario, where the writer thread cannot be blocked. I came up with the following solution, which does not fit the usual acquire-release ordering:
class RWSync {
    std::atomic<int> version;   // incremented after every modification
    std::atomic_bool invalid;   // true during write
public:
    RWSync() : version(0), invalid(false) {}

    template<typename F> void sync(F lambda) {
        int currentVersion;
        do {
            do { // wait until the object is valid
                currentVersion = version.load(std::memory_order_acquire);
            } while (invalid.load(std::memory_order_acquire));
            lambda();
            std::atomic_thread_fence(std::memory_order_seq_cst);
            // check if something changed
        } while (version.load(std::memory_order_acquire) != currentVersion
                 || invalid.load(std::memory_order_acquire));
    }

    void beginWrite() {
        invalid.store(true, std::memory_order_relaxed);
        std::atomic_thread_fence(std::memory_order_seq_cst);
    }

    void endWrite() {
        std::atomic_thread_fence(std::memory_order_seq_cst);
        version.fetch_add(1, std::memory_order_release);
        invalid.store(false, std::memory_order_release);
    }
};
I hope the intent is clear: I wrap the modification of a (non-atomic) payload between beginWrite/endWrite, and read the payload only inside the lambda function passed to sync().
As you can see, here I have an atomic store in beginWrite() where no writes after the store operation can be reordered before the store. I did not find suitable examples, and I am not experienced in this field at all, so I'd like some confirmation that it is OK (verification through testing is not easy either).
Is this code race-free, and does it work as I expect?
If I use std::memory_order_seq_cst in every atomic operation, can I omit the fences? (Even if yes, I guess the performance would be worse)
Can I drop the fence in endWrite()?
Can I use memory_order_acq_rel in the fences? I don't really get the difference -- the single total order concept is not clear to me.
Is there any simplification / optimization opportunity?
+1. I happily accept any better idea as the name of this class :)
The code is basically correct.
Instead of having two atomic variables (version and invalid), you may use a single version variable with the semantics "odd values are invalid". This is known as the "sequential lock" (seqlock) mechanism.
Reducing the number of atomic variables simplifies things a lot:
class RWSync {
    // Incremented before and after every modification.
    // Odd values mean that the object is in an invalid state.
    std::atomic<int> version;
public:
    RWSync() : version(0) {}

    template<typename F> void sync(F lambda) {
        int currentVersion;
        do {
            currentVersion = version.load(std::memory_order_seq_cst);
            // This may reduce calls to lambda(), nothing more
            if (currentVersion | 1) continue;
            lambda();
            // Repeat until something changed or object is in an invalid state.
        } while ((currentVersion | 1) ||
                 version.load(std::memory_order_seq_cst) != currentVersion);
    }

    void beginWrite() {
        // Writer may read version with relaxed memory order
        int currentVersion = version.load(std::memory_order_relaxed);
        // Invalidation requires sequential order
        version.store(currentVersion + 1, std::memory_order_seq_cst);
    }

    void endWrite() {
        // Writer may read version with relaxed memory order
        int currentVersion = version.load(std::memory_order_relaxed);
        // Release order is sufficient to mark the object as valid
        version.store(currentVersion + 1, std::memory_order_release);
    }
};
Note the difference in memory orders in beginWrite() and endWrite():
endWrite() makes sure that all previous object's modifications have been completed. It is sufficient to use release memory order for that.
beginWrite() makes sure that a reader will detect that the object is in an invalid state before any further modification of the object is started. Such a guarantee requires seq_cst memory order. Because of that, the reader uses seq_cst memory order too.
As for fences, it is better to incorporate them into the preceding/following atomic operation: the compiler knows how to make the result fast.
Explanations of some modifications of the original code:
1) An atomic read-modify-write like fetch_add() is intended for cases when concurrent modifications (like another fetch_add()) are possible. For correctness, such modifications use memory locking or other very time-costly architecture-specific things.
An atomic assignment (store()) does not use memory locking, so it is cheaper than fetch_add(). You may use such an assignment because concurrent modifications are not possible in your case (the reader does not modify version).
2) Unlike release-acquire semantics, which distinguish load and store operations, sequential consistency (memory_order_seq_cst) is applicable to every atomic access and provides a total order between these accesses.
The accepted answer is not correct. I guess the code should be something like "currentVersion & 1" instead of "currentVersion | 1". And a subtler mistake is that the reader thread can go into lambda(), and after that the writer thread could run beginWrite() and write a value to the non-atomic payload. In this situation the write to the payload and the read of the payload have no happens-before relationship. Concurrent access (without a happens-before relationship) to a non-atomic variable is a data race. Note that the single total order of memory_order_seq_cst does not imply a happens-before relationship; they are consistent with each other, but they are two different things.
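For what it's worth, a minimal sketch of the parity test that comment suggests (my illustration, not from either post; it addresses only the | 1 versus & 1 point and not the data-race concern):

// Inside the reader's retry loop: skip the payload read while a write is in flight.
int currentVersion = version.load(std::memory_order_seq_cst);
if (currentVersion & 1) {   // odd -> writer is between beginWrite() and endWrite()
    continue;               // retry instead of calling lambda() on a half-written payload
}
lambda();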

How to disable autotrading globally from MQL4/5 program (EA) without DLLs?

How do I disable autotrading globally in MetaTrader 4/5 from within MQL4/5 code without using DLLs?
Here you go (the author is Tiago Praxedes):
#define MT_WMCMD_EXPERTS 32851
#define WM_COMMAND 0x0111
#define GA_ROOT 2

#include <WinAPI\winapi.mqh>

void SetAlgoTradingStatus(bool Enable)
{
    bool Status = (bool) TerminalInfoInteger(TERMINAL_TRADE_ALLOWED);
    if(Enable != Status)
    {
        HANDLE hChart = (HANDLE) ChartGetInteger(ChartID(), CHART_WINDOW_HANDLE);
        PostMessageW(GetAncestor(hChart, GA_ROOT), WM_COMMAND, MT_WMCMD_EXPERTS, 0);
    }
}

void OnTick()
{
    SetAlgoTradingStatus(false);
}
Lifted it from here: Source
Yes, an MQL4/5 Expert Advisor may locally forbid itself to trade this way:
if ( IsTradeAllowed() )
{ Comment( __FILE__, " [EA] Trading is allowed, will disable self." );
...
}
else
{ Comment( __FILE__, " [EA] Trading is not allowed, will disable self." );
...
}
// ---------------------------------------// GRACEFULLY RELEASE ALL RESOURCES BEFORE FIN'd
// ********
// FINALLY: EXPERT-ADVISOR SIG_TERM -> self ( MT4 )
ExpertRemove(); /* MT4 The Expert Advisor
is not stopped immediately
as you call ExpertRemove();
just a flag to stop the EA operation is set.
That is:
- any next event won't be processed,
- OnDeinit() will be called
and
- the Expert Advisor will be unloaded and removed from the chart.
*/
If the idea is that you have several EAs on different pairs and you want to disable them all at the same time, you could place a specific trade, which I would call an information trade, not meant to be used for actual trading.
Choose a price very far away from the current price.
For this example, we can use 9,999.000 as the price and place the trade.
Loop through the open trades and look for one with a price of 9999.
If you find it, use that as the signal to disable trading.
If you want trading to start again, you can use a button or manually delete the trade at 9999.
Now the block is not there.
All the EAs you may have running can see this.
An offsite computer will be able to see it as well.
This was used as a way to hand off trading control to a team of traders.
The new owner would send the trade that bumps everyone off.
That trade can later be removed, but the other EAs stay asleep until a human turns them back on or a "Wake Up" trade is sent.
You can also pass values in the trade's Comment, or encode values as prices in the TP and SL fields.
There are many ways to use this method.
For this to work though you would have to have your own code and make the changes. If you buy an EA and want to turn it on and off without the source code then this method won't work.

Testing A Function That Always Returns True

How would one write a test for the following function?
bool IsAnInteger(int ignore)
{
    return true;
}
I don't have enough time to iterate over every integer (for the actual code the parameter isn't even an integer).
This is used as part of the Specification Pattern, so that I can implement a Null Object.
... testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence.
-- Edsger W. Dijkstra
I'd say that it's pointless to try to exhaustively black box test this function. It is better to test it in a context similar to where it will be used.
In TDD, you write the test first and that test should specify a specific behavior. So the question should always be: What do I expect to happen? - and then write the test to verify that behavior - finally write the solution to make the test pass.
edit: Understanding the question
Do you mean that this function is the behavior for a non-existent specification, e.g. a Null specification? You can of course test this null specification that it behaves in a certain way. At a guess though, this will pretty much be hard-coded one-line return values (if anything). The tests for this null object would then basically only document what the null specification should do. It won't add any other extra business value to the system.
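A minimal sketch of what such a test might look like (NullSpecification and IsSatisfiedBy are hypothetical names, not from the question):

#include <cassert>

// Hypothetical null specification: it is satisfied by every candidate.
struct NullSpecification
{
    bool IsSatisfiedBy(int /*candidate*/) const { return true; }
};

int main()
{
    NullSpecification spec;
    // The tests only document the null object's contract: accept everything.
    assert(spec.IsSatisfiedBy(0));
    assert(spec.IsSatisfiedBy(-42));
}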
Since you cannot reasonably test every case, you can use Mathematical Induction as a basis for your tests. You can't do this algebraically, but you can pick an arbitrary value for n.
#include <limits>
#include <cassert>

bool IsAnInteger(int)
{
    return true;
}

int main()
{
    assert(IsAnInteger(std::numeric_limits<int>::min())); // First
    assert(IsAnInteger(0));                               // n
    assert(IsAnInteger(1));                               // n+1
}
Edit
Hold on!
for the actual code the parameter isn't even an integer
What is it then?
#include <cassert>

template <class T>
bool IsAnInteger(const T&)
{
    return true;
}

int main()
{
    assert(IsAnInteger(0));                    // First
    assert(IsAnInteger("I am not a number!")); // n
    assert(IsAnInteger(42.0f));                // n+x
}
Your function has 100% test coverage and your unit tests accurately document how it behaves. In TDD you only need to write just enough code so that your unit tests pass. You're done with this and can move on.
