What does "emit" mean in general computer science terms? - emit

I just stumbled on what appears to be a generally-known compsci keyword, "emit". But I can't find any clear definition of it in general computer science terms, nor a specific definition of an "emit()" function or keyword in any specific programming language.
I found it here, reading up on MapReduce:
https://en.wikipedia.org/wiki/MapReduce
The context of my additional searches shows it has something to do with signaling and/or events. But it seems like it is just assumed that the reader will know what "emit" is and does. For example, this article on MapReduce patterns:
https://highlyscalable.wordpress.com/2012/02/01/mapreduce-patterns/
There's no mention of what "emit" is actually doing, there are only calls to it. It must be different from other forms of returning data, though, such as "return" or simply "printf" or the equivalent, else the calls to "emit" would be calls to "return".
Further searching, I found a bunch of times that some pseudocode form of "emit" appears in the context of MapReduce. And in Node.js. And in Qt. But that's about it.
Context: I'm a (mostly) self-taught web programmer and system administrator. I'm sure this question is covered in compsci 101 (or 201?) but I didn't take that course.

In the context of web and network programming:
When we call a function, the function may return a value.
When we call a function that is supposed to send its results to another function, we no longer use return. Instead we use emit: we expect the function to emit its results to another function (or listener) as a result of our call.
A function can return results and emit events.
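As a rough illustration (a minimal sketch in Python, not tied to any particular framework or library), an emitter pushes results to whoever registered a listener, while return only hands a value back to the caller:

    # Minimal event-emitter sketch: "return" hands a value back to the caller,
    # while "emit" pushes results to registered listeners.
    class Emitter:
        def __init__(self):
            self._listeners = {}

        def on(self, event, callback):
            self._listeners.setdefault(event, []).append(callback)

        def emit(self, event, *args):
            for callback in self._listeners.get(event, []):
                callback(*args)

    counter = Emitter()
    counter.on("incremented", lambda value: print("new value:", value))

    def increment(emitter, value):
        new_value = value + 1                    # compute the result
        emitter.emit("incremented", new_value)   # push it to listeners
        return new_value                         # and also return it to the caller

    increment(counter, 41)   # prints "new value: 42" and returns 42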

I've only ever seen emit() used when building a simple compiler in academia.
Upon analyzing the grammar of a program, you tokenize the contents of it and emit (push out) assembly instructions. (The compiler program that was written actually even contained an internal function called emit to mirror that theoretical/logical aspect of it.)
Once the grammar analysis is complete, the assembler will take the assembly instructions and generate the binary code (aka machine code).
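A minimal sketch of that idea (in Python, with a made-up emit helper; real compilers are obviously more involved) might look like this:

    # Toy code generator: "emit" appends one assembly instruction to the output
    # stream instead of returning it to the caller.
    emitted = []

    def emit(instruction):
        emitted.append(instruction)

    def gen_add(left_const, right_const):
        # hypothetical grammar rule: expr -> NUMBER '+' NUMBER
        emit(f"mov eax, {left_const}")
        emit(f"add eax, {right_const}")

    gen_add(2, 3)
    print("\n".join(emitted))
    # mov eax, 2
    # add eax, 3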
So, I don't think there is a general CS definition for emit; however, I do know it is used in the pseudocode (and sometimes, actual code) for writing compiler programs. And that is undergraduate level computer science education in the US.

I can think of three contexts in which it's used:
Map/Reduce functions, where some input value causes 0 or more output values to go into the Reduce function
Tokenizers, where a stream of text is processed, and at various intervals, tokens are emitted
Messaging systems
I think the common thread is the "zero or more". A return provides exactly one value back from a function, whereas an "emit" is a function call that could take place zero times or several times.
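A minimal sketch of the zero-or-more behaviour (Python, with emit modelled as a callback, which is roughly how MapReduce pseudocode treats it):

    # Word-count mapper: one input record can cause zero or more emit() calls,
    # whereas "return" would hand back exactly one value.
    def map_words(line, emit):
        for word in line.split():
            emit(word, 1)          # zero or more emissions per input line

    pairs = []
    map_words("the quick brown fox the", lambda k, v: pairs.append((k, v)))
    # pairs == [('the', 1), ('quick', 1), ('brown', 1), ('fox', 1), ('the', 1)]
    map_words("", lambda k, v: pairs.append((k, v)))   # emits nothing at all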

In the context of the MapReduce programming model, a map operation is said to take an input value and emit a result, which is nothing more than a transformation of the input.

Related

Expressing a "random" behaviour (external API) in TLA+

In my module, I'm trying to represent the behaviour of an external API (which may respond with a 200 HTTP status or a 4XX/5XX HTTP status; which means it has 2 possible states vis-à-vis my system => Success OR Failure).
Briefly put, how should I describe an external API that my system consumes and to which it should react predictably according to the API's response (success or failure)? (How often do I get a successful/failed response? I don't know, it's "random".)
This question is common among newcomers to the language. TLA+ doesn't have any built-in way of expressing that a system execution is more likely to take one path over another - in your case, whether the API returns an error. That's okay, because usually you write TLA+ specs to ensure your system functions well under all possible execution traces. So the possibility of API error would be expressed as a simple disjunct:
APIResult ==
    \/ retVal' = "SomeHappyPathValue"
    \/ retVal' = "Error"
Newcomers find this strange, because in their mind the error path should be "less likely" to happen than the happy path, while this spec seems to say they are equally likely. But this is based on a misunderstanding of how the model checker (usually) works: it doesn't simulate random traces trying to find an error, but instead does a breadth-first search of all possible execution traces, trying to find a state where an invariant is violated. So the idea that one execution trace is "more likely" than another doesn't mean anything to the model checker, because it is irrelevant to the things it checks.
Note it is actually possible to run the model checker in simulation mode as described above, but that's usually only done when your state space is so enormous that full exploration is untenable.
A thread on the TLA+ mailing list answers this question: http://discuss.tlapl.us/msg04033.html

Questions within questions for Tin Can API?

Does Tin Can API support questions within questions?
If so, what would be the specification for passing data to an LRS?
I was thinking of adding IDs to each sub-question.
This would be much easier to answer if you could provide an example, but the flexibility of the Tin Can API is such that you can literally capture anything (which is also part of the complexity) with more or less grace.
Some immediate options come to mind:
Use a single interaction activity statement (likely with type choice) and use the formatting allowed to have multi-value responses (i.e. golf[,]tetris).
Use multiple statements, with a combined statement where necessary (i.e. if there is an overall result): there is a single main activity, and each sub-question gets its own statement with its own activity, while the main activity is stored in the context.contextActivities.parent list. When there is a combined statement, I would also include a reference to it in the sub-question statements' context.statement property so that you can tie them all together.
Use result, context, and activity definition extensions to capture anything. This should be a last resort option, it usually makes setting things up simple but adds significant complexity on the reporting side. Though tempting because of the simplicity, unless you are trying to capture a specific type of data point (like geo-location data, math equations, etc.) usually you should try to avoid the use of extensions.
Which of the above makes the most sense is probably determined by what sort of response is being given, and whether or not the questions are nested such that there is an overall result plus sub-results, or whether there are just overall results.
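For illustration, here is a rough sketch (as a Python dict; all IRIs and UUIDs are placeholders, not values defined by the spec) of what a sub-question statement under the second option above might look like:

    # Sketch of option 2: a sub-question statement whose context points at the
    # main activity (contextActivities.parent) and at the combined statement
    # (context.statement, a StatementRef). All IRIs and UUIDs are placeholders.
    sub_question_statement = {
        "actor": {"objectType": "Agent", "mbox": "mailto:learner@example.com"},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/answered"},
        "object": {
            "objectType": "Activity",
            "id": "http://example.com/quiz/question-3/sub-question-a",
            "definition": {
                "type": "http://adlnet.gov/expapi/activities/cmi.interaction",
                "interactionType": "choice",
            },
        },
        "result": {"response": "golf"},
        "context": {
            "contextActivities": {
                "parent": [{"id": "http://example.com/quiz/question-3"}]
            },
            # reference to the combined (overall-result) statement
            "statement": {
                "objectType": "StatementRef",
                "id": "9e13cefd-53d3-4eac-b5ed-2cf6693903bb",
            },
        },
    }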

PE - Distinguish data from function export

I'm trying to find a way to figure out in IDA which exports are data exports and which are real function exports.
For example, let's have a look at Microsoft's msftedit.dll's export entries:
While CreateTextServices is a real exported function:
IID_IRichEditOle is a data export and IDA fails to realize that, interpreting data as code:
Does someone know a reliable way to distinguish the two? Help would be much appreciated.
Thanks in advance.
There is no perfectly reliable way to do this for every export.
Each export only specifies an offset within the executable file -- logically, it could be treated as code or as data by any other code that references it.
As you mentioned, you could come up with heuristics to detect the type of the export in almost all of the cases, but it would be easy to come up with counterexamples that do not work for any given heuristic. Take, for instance, the rule you proposed:
The exported entry will be considered a valid exported function if there is a ret instruction in the function, and there are more than <min> valid instructions, and IDA recognizes the function's calling convention.
False negatives: You might have a function that uses tail call optimization and ends with jmp instructions rather than ret instructions. Any short function would also fail. And there are several ways that IDA can be confused into not treating the code as a function.
False positives: There could be a string in memory followed closely by a C3 or C2 like db 'BACKGAMMON0',0,0C3h -- this could logically disassemble as a valid 11-instruction function with a ret and no arguments.
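To see why, you can disassemble exactly that byte sequence with Capstone (a tool not mentioned above, used here purely for illustration):

    # Illustration of the false-positive case using Capstone: the ASCII string
    # "BACKGAMMON0",0,0C3h decodes as a run of plausible x86 instructions
    # ending in ret.
    from capstone import Cs, CS_ARCH_X86, CS_MODE_32

    data = b"BACKGAMMON0\x00\xc3"           # db 'BACKGAMMON0',0,0C3h
    md = Cs(CS_ARCH_X86, CS_MODE_32)
    for insn in md.disasm(data, 0x10001000):
        print(f"{insn.address:#x}: {insn.mnemonic} {insn.op_str}")
    # Output is a series of inc/dec instructions, an xor, and a final ret --
    # exactly what a naive "looks like a function" heuristic would accept.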
The lines are blurred even further when you consider that an export could be logically treated as both code and data: Imagine that a byte sequence at an export is copied into dynamically allocated memory -- potentially even in another process -- where it is later executed as code.
Perhaps a reasonable suggestion would be to just trust IDA and treat the export as code if IDA thinks it's code. A large part of IDA's functionality is automatically guessing the logical types of data, and it's normally pretty good at it. As you've shown, sometimes it's wrong. But you can't get 100% accuracy anyway. The best you can do is balance between false negatives and false positives.
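If you do decide to just trust IDA, a small IDAPython sketch along these lines (API names from recent IDA versions; treat it as a starting point rather than a guarantee) could label each export:

    # IDAPython sketch (run inside IDA): classify each export by whether IDA
    # itself placed it at recognized code inside a function -- i.e. "trust IDA".
    import idautils
    import ida_funcs
    import ida_bytes

    for index, ordinal, ea, name in idautils.Entries():
        func = ida_funcs.get_func(ea)
        flags = ida_bytes.get_full_flags(ea)
        if func is not None and ida_bytes.is_code(flags):
            kind = "code"
        else:
            kind = "data (or unrecognized)"
        print(f"{name} @ {ea:#x}: {kind}")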
Proof of this problem's undecidability:
Whether or not an export will be executed as code is undecidable. Whether or not an export will be read as data is also undecidable. Since we cannot guarantee that either is true, distinguishing between seemingly ambiguous cases is impossible.
Proof: Assume that we have an oracle A(P,I,E) which returns 1 if program P (including all of its dependencies) executes (or reads from) export E (from any DLL loaded in the course of P's execution) with "input" (external state) I. Otherwise, it returns 0.
Let us construct a minimal program Z(P,I,E) which executes (or reads from) export E (the DLL for which is loaded into the address space) if and only if A(P,I,E) returns 0.
Now consider the result of Z(Z,I,E):
If Z(Z,I,E) executes (or reads from) export E, then A(Z,I,E) would return 1. But Z(Z,I,E) is defined to not access export E unless A(Z,I,E) returns 0. This is a contradiction.
If Z(Z,I,E) does not execute (or read from) export E, then A(Z,I,E) would return 0. But Z(Z,I,E) is defined such that it will access export E when A(Z,I,E) returns 0. This is a contradiction.
Therefore, our initial assumption that oracle A(P,I,E) exists is proven false.
But you can do better through instrumentation...
Depending on the exact problem you're trying to solve, you may be able to determine which exports are valid functions at runtime.
For example, you could write an application which debugs the program you wish to analyze and places guard pages on each of the pages that contain exports you wish to hook. This means that whenever a page is accessed (executed/read/written to), an exception is raised, and the debugger program gains control.
The debugger could check the program context to see what type of access was made and whether it has anything to do with the export. If the access is an attempt to execute an export, it could perform some hooking functionality before returning control to the program. Otherwise, it could just return control to the program.
In either case, the PAGE_GUARD modifier is lifted after each exception, so you'd need to put it back each time.
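As a very rough sketch of the mechanics (Python with ctypes; pid and export_address are assumed to be known already, the export's page is assumed to be execute/read, and the debug-event loop is only outlined in comments):

    # Minimal sketch of arming a PAGE_GUARD trap on the page containing an
    # export in another process. Re-arm it after every guard-page exception,
    # because the PAGE_GUARD modifier is cleared each time it triggers.
    import ctypes
    from ctypes import wintypes

    PROCESS_ALL_ACCESS = 0x1F0FFF
    PAGE_GUARD = 0x100
    PAGE_EXECUTE_READ = 0x20
    PAGE_SIZE = 0x1000   # assumption: standard 4 KiB pages

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

    def arm_guard_page(process_handle, export_address):
        old_protect = wintypes.DWORD(0)
        page_base = export_address & ~(PAGE_SIZE - 1)
        ok = kernel32.VirtualProtectEx(
            process_handle,
            ctypes.c_void_p(page_base),
            PAGE_SIZE,
            PAGE_EXECUTE_READ | PAGE_GUARD,
            ctypes.byref(old_protect),
        )
        if not ok:
            raise ctypes.WinError(ctypes.get_last_error())

    # handle = kernel32.OpenProcess(PROCESS_ALL_ACCESS, False, pid)
    # arm_guard_page(handle, export_address)
    # ...then run a WaitForDebugEvent/ContinueDebugEvent loop: on each
    # guard-page exception, inspect the faulting address, do your hook work
    # if it matches the export, and call arm_guard_page() again.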
Unsurprisingly, this would make execution of your program very slow, as any R/W/X access to any of the pages containing an export causes an expensive context switch -- this would likely include the execution of most instructions that are a part of your exported functions, along with several others that have nothing to do with them.
You could take a similar approach with other instrumentation tools, such as Pin.
Note that you may not gain information about the usage of every export through instrumentation. This is because you may need to determine what input/external state is required to cause the program to access each export in order to learn if it is used as code or as data (if at all).
Also note that both execute and read (or even write) accesses could potentially occur on the same exports.

How do Erlang actors differ from OOP objects?

Suppose I have an Erlang actor defined like this:
counter(Num) ->
    receive
        {From, increment} ->
            From ! {self(), new_value, Num + 1},
            counter(Num + 1)
    end.
And similarly, I have a Ruby class defined like this:
class Counter
  def initialize(num)
    @num = num
  end
  def increment
    @num += 1
  end
end
The Erlang code is written in a functional style, using tail recursion to maintain state. However, what is the meaningful impact of this difference? To my naive eyes, the interfaces to these two things seem much the same: You send a message, the state gets updated, and you get back a representation of the new state.
Functional programming is so often described as being a totally different paradigm than OOP. But the Erlang actor seems to do exactly what objects are supposed to do: Maintain state, encapsulate, and provide a message-based interface.
In other words, when I am passing messages between Erlang actors, how is it different than when I'm passing messages between Ruby objects?
I suspect there are bigger consequences to the functional/OOP dichotomy than I'm seeing. Can anyone point them out?
Let's put aside the fact that the Erlang actor will be scheduled by the VM and thus may run concurrently with other code. I realize that this is a major difference between the Erlang and Ruby versions, but that's not what I'm getting at. Concurrency is possible in other languages, including Ruby. And while Erlang's concurrency may perform very differently (sometimes better), I'm not really asking about the performance differences.
Rather, I'm more interested in the functional-vs-OOP side of the question.
In other words, when I am passing messages between Erlang actors, how is it different than when I'm passing messages between Ruby objects?
The difference is that in traditional languages like Ruby there is no message passing, but a method call that executes in the same thread, and this can lead to synchronization problems in a multithreaded application. All threads have access to each other's memory.
In Erlang all actors are independent, and the only way to change the state of another actor is to send it a message. No process has access to the internal state of any other process.
IMHO this is not the best example for FP vs OOP. Differences usually manifest in how you access/iterate over data and chain methods/functions on objects. Also, understanding what the "current state" is probably works better in FP.
Here you are putting two very different technologies up against each other. One happens to be functional, the other object-oriented.
The first difference I can spot right away is memory isolation. Messages are serialized in Erlang, so it is easier to avoid race conditions.
The second difference is in the memory management details. In Erlang, message handling is divided underneath between the sender and the receiver. The Erlang VM holds two sets of locks on the process structure, so while the sender delivers a message it acquires a lock that does not block the receiving process's main operations (which are guarded by the MAIN lock). To sum up, this gives Erlang a more soft real-time nature, versus essentially unpredictable behaviour on the Ruby side.
Looking from the outside, actors resemble objects. They encapsulate state and communicate with the rest of the world via messages to manipulate that state.
To see how FP works, you must look inside an actor and see how it mutates state. Your example, where the state is an integer, is too simple. I don't have the time to provide a full example, but I'll sketch the code. Normally, an actor loop looks like the following:
loop(State) ->
    Message = receive
                  ...
              end,
    NewState = f(State, Message),
    loop(NewState).
The most important difference from OOP is that there are no variable mutations: NewState is obtained from State and may share most of its data with it, but the State variable itself always remains the same.
This is a nice property, since we never corrupt the current state. The function f will usually perform a series of transformations to turn State into NewState, and only if/when it completely succeeds do we replace the old state with the new one by calling loop(NewState).
So the important benefit is consistency of our state.
The second benefit I found is cleaner code, but it takes some time to get used to. Generally, since you cannot modify a variable, you will have to divide your code into many very small functions. This is actually nice, because your code will be well factored.
Finally, since you cannot modify a variable, it is easier to reason about the code. With mutable objects you can never be sure whether some part of your object will be modified, and it gets progressively worse if you are using global variables. You should not encounter such problems when doing FP.
To try it out, you should try to manipulate some more complex data in a functional way using pure Erlang structures (not actors, ETS, Mnesia or the process dictionary). Alternatively, you might try it in Ruby with this
Erlang includes the message passing approach of Alan Kay's OOP (Smalltalk) and the functional programming from Lisp.
What you describe in your example is the message-passing approach of OOP. Erlang processes sending messages are a concept similar to Alan Kay's objects sending messages. Incidentally, you can find this concept implemented in Scratch as well, where objects running in parallel send messages to one another.
The functional programming is how you code the processes themselves. For instance, variables in Erlang cannot be modified: once they have been set, you can only read them. You also have a list data structure that works much like Lisp lists, and funs, which are inspired by Lisp's lambda.
The message passing on one side, and the functional programming on the other, are two quite separate things in Erlang. When coding real-life Erlang applications, you spend 98% of your time doing functional programming and 2% thinking about message passing, which is mainly used for scalability and concurrency. To put it another way, when you come to tackle a complex programming problem, you will probably use the FP side of Erlang to implement the details of the algorithm, and use the message passing for scalability, reliability, etc...
What do you think of this:
thing(0) ->
    exit(this_is_the_end);
thing(Val) when is_integer(Val) ->
    NewVal = receive
                 {From, F, Arg} ->
                     NV = F(Val, Arg),
                     From ! {self(), new_value, NV},
                     NV;
                 _ ->
                     Val div 2
             after 10000 ->
                 max(Val - 1, 0)
             end,
    thing(NewVal).
When you spawn the process, it will live on its own, decreasing its value until it reaches 0, at which point it sends the message {'EXIT', Pid, this_is_the_end} to any process linked to it, unless you take care of executing something like:
ThingPid ! {self(),fun(X,_) -> X+1 end,[]}.
% which will increment the counter
or
ThingPid ! {self(),fun(X,X) -> 0; (X,_) -> X end,10}.
% which will do nothing, unless the internal value is 10, in which case it will go directly to 0 and exit
In this case you can see that the "object" lives its own life in parallel with the rest of the application, that it can interact with the outside world with almost no extra code, and that the outside can ask it to do things you did not anticipate when you wrote and compiled the code.
This is silly code, but the same principles are used to implement things like Mnesia transactions and the OTP behaviours... IMHO the concept really is different, and you have to try to think differently if you want to use it correctly. I am pretty sure it is possible to write "OOP-like" code in Erlang, but it would be extremely difficult to avoid concurrency :o), and in the end it would bring no advantage. Have a look at the OTP principles, which give some direction on application architecture in Erlang (supervision trees, pools of single-client servers, linked processes, monitored processes, and of course pattern matching, single assignment, messages, node clusters ...).

Extending functionality of existing program I don't have source for

I'm working on a third-party program that aggregates data from a bunch of different, existing Windows programs. Each program has a mechanism for exporting the data via the GUI. The most brain-dead approach would have me generate extracts by using AutoIt or some other GUI manipulation program to generate the extractions via the GUI. The problem with this is that people might be interacting with the computer when, suddenly, some automated program takes over. That's no good. What I really want to do is somehow have a program run once a day and silently (i.e. without popping up any GUIs) export the data from each program.
My research is telling me that I need to hook each application (assume these applications are always running) and inject a custom DLL to trigger each export. Am I remotely close to being on the right track? I'm a fairly experienced software dev, but I don't know a whole lot about reverse engineering or hooking. Any advice or direction would be greatly appreciated.
Edit: I'm trying to manage the availability of a certain type of professional. Their schedules are stored in proprietary systems. With their permission, I want to install an app on their system that extracts their schedule from whichever system they are using and uploads the information to a central server so that I can present that information to potential clients.
I am aware of four ways of extracting the information you want, each with its own advantages and disadvantages. Before you do anything, you need to be aware that any solution you create is not guaranteed, and in fact is very unlikely, to continue working should the target application ever be updated. The reason is that in each case you are relying on an implementation detail instead of a pre-defined interface through which to export your data.
Hooking the GUI
The first way is to hook the GUI as you have suggested. What you are doing in this case is simply reading off from what an actual user would see. This is in general easier, since you are hooking the WinAPI which is clearly defined. One danger is that what the program displays is inconsistent or incomplete in comparison to the internal data it is supposed to be representing.
Typically, there are two common ways to perform WinAPI hooking:
DLL Injection. You create a DLL which you load into the other program's virtual address space. This means that you have read/write access (writable access can be gained with VirtualProtect) to the target's entire memory. From here you can trampoline the functions which are called to set UI information. For example, to check if a window has changed its text, you might trampoline the SetWindowText function. Note every control has different interfaces used to set what they are displaying. In this case, you are hooking the functions called by the code to set the display.
SetWindowsHookEx. Under the covers, this works similarly to DLL injection and in this case is really just another method for you to extend/subvert the control flow of messages received by controls. What you want to do in this case is hook the window procedures of each child control. For example, when an item is added to a ComboBox, it would receive a CB_ADDSTRING message. In this case, you are hooking the messages that are received when the display changes.
One caveat with this approach is that it will only work if the target is using or extending WinAPI controls.
Reading from the GUI
Instead of hooking the GUI, you can alternatively use WinAPI to read directly from the target windows. However, in some cases this may not be allowed. There is not much to do in this case but to try and see if it works. This may in fact be the easiest approach. Typically, you will send messages such as WM_GETTEXT to query the target window for what it is currently displaying. To do this, you will need to obtain the exact window hierarchy containing the control you are interested in. For example, say you want to read an edit control, you will need to see what parent window/s are above it in the window hierarchy in order to obtain its window handle.
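A minimal sketch of this approach (Python with ctypes; the window class names are placeholders you would obtain with Spy++ or a similar tool for the real target):

    # Walk the window hierarchy with FindWindow/FindWindowEx and pull the text
    # of an edit control with WM_GETTEXT.
    import ctypes
    from ctypes import wintypes

    user32 = ctypes.WinDLL("user32", use_last_error=True)
    user32.FindWindowW.restype = wintypes.HWND
    user32.FindWindowExW.restype = wintypes.HWND
    user32.FindWindowExW.argtypes = [wintypes.HWND, wintypes.HWND,
                                     wintypes.LPCWSTR, wintypes.LPCWSTR]
    user32.SendMessageW.restype = ctypes.c_ssize_t
    user32.SendMessageW.argtypes = [wintypes.HWND, ctypes.c_uint,
                                    wintypes.WPARAM, wintypes.LPARAM]

    WM_GETTEXT = 0x000D
    WM_GETTEXTLENGTH = 0x000E

    # Placeholder class names -- replace with the target's real hierarchy.
    top = user32.FindWindowW("TargetMainWindowClass", None)
    edit = user32.FindWindowExW(top, None, "Edit", None)

    length = user32.SendMessageW(edit, WM_GETTEXTLENGTH, 0, 0)
    buffer = ctypes.create_unicode_buffer(length + 1)
    user32.SendMessageW(edit, WM_GETTEXT, length + 1, ctypes.addressof(buffer))
    print(buffer.value)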
Reading from memory (Advanced)
This approach is by far the most complicated but if you are able to fully reverse engineer the target program, it is the most likely to get you consistent data. This approach works by you reading the memory from the target process. This technique is very commonly used in game hacking to add 'functionality' and to observe the internal state of the game.
Consider that as well as storing information in the GUI, programs often hold their own internal model of all the data. This is especially true when the controls used are virtual and simply query subsets of the data to be displayed. This is an example of a situation where the first two approaches would not be of much use. This data is often held in some sort of abstract data type such as a list or perhaps even an array. The trick is to find this list in memory and read the values off directly. This can be done externally with ReadProcessMemory or internally through DLL injection again. The difficulty lies mainly in two prerequisites:
Firstly, you must be able to reliably locate these data structures. The problem with this is that code is not guaranteed to be in the same place, especially with features such as ASLR. Colloquially, this is sometimes referred to as code-shifting. ASLR can be defeated by using the offset from a module base and dynamically getting the module base address with functions such as GetModuleHandle. As well as ASLR, a reason that this occurs is due to dynamic memory allocation (e.g. through malloc). In such cases, you will need to find a heap address storing the pointer (which would for example be the return of malloc), dereference that and find your list. That pointer would be prone to ASLR and instead of a pointer, it might be a double-pointer, triple-pointer, etc.
The second problem you face is that it would be rare for each list item to be a primitive type. For example, instead of a list of character arrays (strings), it is likely that you will be faced with a list of objects. You would need to further reverse engineer each object type and understand its internal layout (at least be able to determine the offsets, from the object base, of the primitive values you are interested in). More advanced methods revolve around actually reverse engineering the vtable of objects and calling their 'API'.
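To make the first prerequisite concrete, here is a rough sketch (Python with ctypes; pid, static_offset and field_offset are assumptions you would obtain by reverse engineering the target) of following such a pointer chain with ReadProcessMemory:

    # module_base + static_offset -> heap pointer -> the structure you reversed.
    # module_base is found dynamically at runtime to cope with ASLR.
    import ctypes

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    PROCESS_VM_READ = 0x0010
    PROCESS_QUERY_INFORMATION = 0x0400

    def read_pointer(handle, address):
        value = ctypes.c_void_p(0)
        read = ctypes.c_size_t(0)
        ok = kernel32.ReadProcessMemory(
            handle, ctypes.c_void_p(address),
            ctypes.byref(value), ctypes.sizeof(value), ctypes.byref(read))
        if not ok:
            raise ctypes.WinError(ctypes.get_last_error())
        return value.value or 0

    # handle = kernel32.OpenProcess(PROCESS_VM_READ | PROCESS_QUERY_INFORMATION,
    #                               False, pid)
    # module_base = ...   # e.g. via a Toolhelp snapshot or EnumProcessModulesEx
    # list_ptr = read_pointer(handle, module_base + static_offset)
    # first_item = read_pointer(handle, list_ptr + field_offset)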
You might notice that I am not able to give information here which is specific. The reason is that by its nature, using this method requires an intimate understanding of the target's internals and as such, the specifics are defined only by how the target has been programmed. Unless you have knowledge and experience of reverse engineering, it is unlikely you would want to go down this route.
Hooking the target's internal API (Advanced)
As with the above solution, instead of digging for data structures, you dig for the internal API. I briefly covered this when discussing vtables earlier. Instead of locating data, you would be attempting to find internal APIs that are called when the GUI is modified. Typically, when a view/UI is modified, instead of directly calling the WinAPI to update it, a program will have its own wrapper function which it calls, which in turn calls the WinAPI. You simply need to find this function and hook it. Again this is possible, but requires reverse engineering skills. You may find that you discover functions which you want to call yourself. In this case, as well as being able to locate the function, you have to reverse engineer the parameters it takes and its calling convention, and you will need to ensure that calling the function has no unwanted side effects.
I would consider this approach to be advanced. It can certainly be done and is another common technique used in game hacking to observe internal states and to manipulate a target's behaviour, but is difficult!
The first two methods are well suited for reading data from WinAPI programs and are by far the easier ones. The latter two methods allow greater flexibility: with enough work, you are able to read anything and everything encapsulated by the target, but this requires a lot of skill.
Another point of concern, which may or may not relate to your case, is how easy it will be to update your solution should the target ever be updated. With the first two methods, it is more likely that no changes or only small changes will have to be made. With the latter two methods, even a small change in the source code can cause a relocation of the offsets you are relying upon. One method of dealing with this is to use byte signatures to dynamically generate the offsets. I wrote another answer some time ago which addresses how this is done.
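A toy sketch of the byte-signature idea (pure Python; the pattern and code bytes are made up purely for illustration):

    # Instead of hard-coding an offset, scan for a short, distinctive byte
    # pattern (with wildcards for the bytes that change between builds) and
    # derive the offset from wherever it is found.
    def find_signature(haystack: bytes, pattern: bytes, mask: str) -> int:
        """mask uses 'x' for bytes that must match and '?' for wildcards."""
        span = len(pattern)
        for base in range(len(haystack) - span + 1):
            if all(mask[i] == "?" or haystack[base + i] == pattern[i]
                   for i in range(span)):
                return base
        return -1

    # Hypothetical signature for "mov eax, [imm32]; ret" where the imm32 varies:
    code = bytes.fromhex("9090a144332211c390")
    offset = find_signature(code, bytes.fromhex("a100000000c3"), "x????x")
    # offset == 2; the interesting address is in the 4 wildcard bytes after it.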
What I have written is only a brief summary of the various techniques that can be used for what you want to achieve. I may have missed approaches, but these are the most common ones I know of and have experience with. Since these are large topics in themselves, I would advise you ask a new question if you want to obtain more detail about any particular one. Note that in all of the approaches I have discussed, none of them suffer from any interaction which is visible to the outside world so you would have no problem with anything popping up. It would be, as you describe, 'silent'.
This is relevant information about detouring/trampolining which I have lifted from a previous answer I wrote:
If you are looking for ways that programs detour execution of other processes, it is usually through one of two means:
Dynamic (Runtime) Detouring - This is the more common method and is what is used by libraries such as Microsoft Detours. Here is a relevant paper where the first few bytes of a function are overwritten to unconditionally branch to the instrumentation.
(Static) Binary Rewriting - This is a much less common method for rootkits, but is used by research projects. It allows detouring to be performed by statically analysing and overwriting a binary. An old (not publicly available) package for Windows that performs this is Etch. This paper gives a high-level view of how it works conceptually.
Although Detours demonstrates one method of dynamic detouring, there are countless methods used in the industry, especially in the reverse engineering and hacking arenas. These include the IAT and breakpoint methods I mentioned above. To 'point you in the right direction' for these, you should look at 'research' performed in the fields of research projects and reverse engineering.
