Multiple limit classes for one Job Object - Windows

On Windows, a Job Object can apply a number of different limits to its processes. These limits are exposed through different job object information classes.
MSDN says "You can use the SetInformationJobObject function to set several limits in a single call. If you want to establish the limits one at a time or change a subset of the limits, call the QueryInformationJobObject function to obtain the current limits, modify these limits, and then call SetInformationJobObject."
But it's unclear: is it possible to set limits from more than one job object info class on a single job object?
Of course the "rich" (extended) limit classes wrap the basic one, so limits from two classes effectively apply at once; but I'm asking about the case of two non-basic classes.

Related

Best practice to deploy multiple models that will run concurrently at scale (something like map-reduce)

I have a model that consists of 150 sub-models (run in a for loop).
To be performance oriented, I would like to split it into 150 models so that, for every request my server gets, it sends 150 API requests (one to each model) and then combines the results, so the invocations run in parallel. A so-called map-reduce.
I thought about AWS SageMaker multi-model endpoints, but the documentation suggests that use case is better suited to serial invocation than to parallel or concurrent runs.
In addition, I thought about creating a Lambda function that loads the model and scales accordingly (serverless), but that sounds odd to me, as if I'm missing SageMaker's intended use cases.
Thanks!
Are your models similarly sized? Concurrent requests should not be an issue as long as you choose an instance type to back the endpoint with enough workers to handle them. Check out the SageMaker Real-Time Inference pricing page to see the different instance types you can use; I would suggest tuning the instance type along with the instance count to handle your request volume.
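Regardless of where the models end up hosted, the client-side fan-out/fan-in ("map-reduce") part is straightforward. Here is a minimal sketch in Go; invokeModel and the endpoint names are hypothetical placeholders for whatever inference call you end up using (for example a SageMaker InvokeEndpoint request).

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// invokeModel is a hypothetical placeholder for calling one hosted model,
// e.g. a SageMaker InvokeEndpoint call or an HTTP request to your own service.
func invokeModel(ctx context.Context, endpoint string, payload []byte) ([]byte, error) {
	// ... real inference call goes here ...
	return []byte(fmt.Sprintf("result from %s", endpoint)), nil
}

// fanOut sends the same payload to every endpoint concurrently ("map") and
// collects the responses, which the caller can then merge ("reduce").
func fanOut(ctx context.Context, endpoints []string, payload []byte) ([][]byte, error) {
	results := make([][]byte, len(endpoints))
	errs := make([]error, len(endpoints))
	var wg sync.WaitGroup
	for i, ep := range endpoints {
		wg.Add(1)
		go func(i int, ep string) {
			defer wg.Done()
			results[i], errs[i] = invokeModel(ctx, ep, payload)
		}(i, ep)
	}
	wg.Wait()
	for _, err := range errs {
		if err != nil {
			return nil, err
		}
	}
	return results, nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Hypothetical endpoint names, one per model.
	endpoints := make([]string, 150)
	for i := range endpoints {
		endpoints[i] = fmt.Sprintf("model-endpoint-%d", i)
	}

	results, err := fanOut(ctx, endpoints, []byte(`{"input": "..."}`))
	if err != nil {
		panic(err)
	}
	fmt.Println("combined", len(results), "model results")
}
```

In practice you would also bound the concurrency (a worker pool) so 150 simultaneous calls do not overwhelm the backing instances.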

(Golang) Clean Architecture - Who should do the orchestration?

I am trying to understand which of the following two options is the right approach and why.
Say we have a GetHotelInfo(hotel_id) API that is invoked from the Web through to the Controller.
The logic of the GetHotelInfo is:
Invoke GetHotelPropertyData() (Location, facilities…)
Invoke GetHotelPrice(hotel_id, dates…)
Invoke GetHotelReviews(hotel_id)
Once all results come back, process and merge the data and return 1 object that contains all relevant data of the hotel.
Option 1:
Create 3 different repositories (HotelPropertyRepo, HotelPriceRepo, HotelReviewsRepo)
Create a GetHotelInfo use case that will use these 3 repositories and return the final result.
Option 2:
Create 3 different repositories (HotelPropertyRepo, HotelPriceRepo, HotelReviewsRepo)
Create 3 different use cases (GetHotelPropertyDataUseCase, GetHotelPriceUseCase, GetHotelReviewsUseCase)
Create a GetHotelInfoUseCase that will orchestrate the previous 3 use cases. (It can also be a controller, but that's a different topic.)
Let’s say that right now only GetHotelInfo is being exposed to the Web but maybe in the future, I will expose some of the inner requests as well.
And would the answer be different if the actual logic of GetHotelInfo is not a combination of 3 endpoints but rather 10?
You can see a similar method (called Get()) in "Clean Architecture with GO" from Manato Kuroda.
Manato points out that:
following the Acyclic Dependencies Principle (ADP), dependencies only point inward in the circle, never outward, and there are no cycles;
the Controller and Presenter depend on the Use Case Input Port and Output Port, which are defined as interfaces, not as specific logic (the details). This is possible (without the outer layer knowing the details) thanks to the Dependency Inversion Principle (DIP).
That is why, in the example repository manakuro/golang-clean-architecture, Manato creates three directories for the Use Cases layer:
repository,
presenter: in charge of the Output Port,
interactor: in charge of the Input Port, with a set of methods implementing specific application business rules, depending on the repository and presenter interfaces.
You can adapt that example to your case, with GetHotelInfo declared first in a hotel_interactor.go file, depending on specific business methods declared in hotel_repository, and responses defined in hotel_presenter.
It is expected that Interactors (Use Case classes) call other interactors, so both approaches follow Clean Architecture principles.
But the "maybe in the future" phrase goes against good design and architecture practice.
We can and should think in the most abstract way so that we favor reuse, while always keeping things simple and avoiding unnecessary complexity.
And would the answer be different if the actual logic of GetHotelInfo is not a combination of 3 endpoints but rather 10?
No, it would be the same. However, as you are designing APIs, if you ever need to combine dozens of endpoints, you should start considering a GraphQL layer instead of adding complexity to the project.
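To make "interactors calling other interactors" concrete, here is a minimal sketch of Option 2 in Go. The type and method names (HotelInfo, GetHotelPropertyDataUseCase, and so on) are hypothetical placeholders taken from the question, not from Manato's repository, and the concurrent errgroup fan-out is just one reasonable way to merge the three results.

```go
package usecase

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// Hypothetical result types, named after the question.
type HotelProperty struct{ Location, Facilities string }
type HotelPrice struct{ Amount float64 }
type HotelReviews struct{ Items []string }

type HotelInfo struct {
	Property HotelProperty
	Price    HotelPrice
	Reviews  HotelReviews
}

// The inner use cases are interfaces, so the orchestrator depends on
// abstractions only (DIP).
type GetHotelPropertyDataUseCase interface {
	Execute(ctx context.Context, hotelID string) (HotelProperty, error)
}
type GetHotelPriceUseCase interface {
	Execute(ctx context.Context, hotelID string, dates ...string) (HotelPrice, error)
}
type GetHotelReviewsUseCase interface {
	Execute(ctx context.Context, hotelID string) (HotelReviews, error)
}

// GetHotelInfoUseCase orchestrates the three inner use cases (Option 2).
type GetHotelInfoUseCase struct {
	Property GetHotelPropertyDataUseCase
	Price    GetHotelPriceUseCase
	Reviews  GetHotelReviewsUseCase
}

// Execute runs the three inner use cases concurrently and merges the results.
func (uc GetHotelInfoUseCase) Execute(ctx context.Context, hotelID string, dates ...string) (HotelInfo, error) {
	var info HotelInfo
	g, ctx := errgroup.WithContext(ctx)
	g.Go(func() error {
		p, err := uc.Property.Execute(ctx, hotelID)
		info.Property = p
		return err
	})
	g.Go(func() error {
		p, err := uc.Price.Execute(ctx, hotelID, dates...)
		info.Price = p
		return err
	})
	g.Go(func() error {
		r, err := uc.Reviews.Execute(ctx, hotelID)
		info.Reviews = r
		return err
	})
	if err := g.Wait(); err != nil {
		return HotelInfo{}, err
	}
	return info, nil
}
```

Wiring the same orchestrator directly to the three repository interfaces instead of to inner use cases gives you Option 1; the shape of the code barely changes.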
Clean is not a well-defined term. Rather, you should be aiming to minimise the impact of change (adding or removing a service). And by "impact" I mean not only the cost and time factors but also the risk of introducing a regression (breaking a different part of the system that you're not meant to be touching).
To minimise the "impact of change" you would split these into separate services/bounded contexts and allow interaction only through events. The 'controller' would raise an event (on a shared bus) like 'hotel info request', and each separate service (property, price, and reviews) would respond independently and asynchronously (maybe on the same bus), leaving the controller to aggregate the results and return them to the client, which could be done after some period of time. If you code the result aggregator appropriately it would be possible to add new 'features' or remove existing ones completely independently of the others.
To improve on this you would then separate the read and write functionality of each context into its own context, each responding to appropriate events. This will allow you to optimise and scale the write function independently of the read function. We call this CQRS.
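To make the event-driven variant a bit more concrete, here is a minimal in-process sketch in Go, using channels as a stand-in for the shared bus. The event names, payload shapes, and the fixed aggregation timeout are all hypothetical; a real system would use an actual message bus, correlation IDs, and per-service subscriptions.

```go
package main

import (
	"fmt"
	"time"
)

// Event is a minimal stand-in for a message on a shared bus.
type Event struct {
	Name    string
	HotelID string
	Payload string
}

// service simulates one bounded context subscribed to the request topic:
// it handles each request asynchronously and publishes a response event.
func service(name string, requests <-chan Event, responses chan<- Event) {
	for req := range requests {
		responses <- Event{
			Name:    name + ".response",
			HotelID: req.HotelID,
			Payload: fmt.Sprintf("%s data for hotel %s", name, req.HotelID),
		}
	}
}

func main() {
	names := []string{"property", "price", "reviews"}
	responses := make(chan Event, len(names))

	// Each service gets its own subscription to the "hotel info request" topic.
	subscriptions := make([]chan Event, len(names))
	for i, name := range names {
		subscriptions[i] = make(chan Event, 1)
		go service(name, subscriptions[i], responses)
	}

	// The controller raises a single logical event, delivered to every subscriber.
	req := Event{Name: "hotel.info.request", HotelID: "42"}
	for _, sub := range subscriptions {
		sub <- req
	}

	// Aggregate whatever arrives within a time budget, then return to the client.
	aggregated := map[string]string{}
	timeout := time.After(2 * time.Second)
	for len(aggregated) < len(names) {
		select {
		case resp := <-responses:
			aggregated[resp.Name] = resp.Payload
		case <-timeout:
			fmt.Println("returning partial result after timeout")
			return
		}
	}
	fmt.Println("aggregated:", aggregated)
}
```

Adding or removing a "feature" here only means adding or removing a subscriber; the aggregator does not need to know which services exist, which is the point the answer makes about minimising the impact of change.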

Why do we use the TIBCO Mapper activity?

The TIBCO documentation says
The Mapper activity adds a new process variable to the process definition. This variable can be a simple datatype, a TIBCO ActiveEnterprise schema, an XML schema, or a complex structure.
So my question is: does the TIBCO Mapper only do this simple thing? We can also create process variables in the process definition (by right-clicking on the process definition). I searched Google, but nobody clearly explains why to use this activity; I also tried YouTube, where there is only one video and it does not explain it clearly. I am looking for an example of how it is used in large organizations and a real-world example. Thanks in advance.
The term "process variable" is a bit overloaded I guess:
The process variables that you define in the Process properties are stateful. You can use (read) their values anywhere in the process and you can change their values during the process using the Assign task (yellow diamond with a black equals sign).
The Mapper activity produces a new output variable for that activity, which you can only use (read) in activities downstream from it. You cannot change its value after the Mapper activity, just as with any other activity's output.
The Mapper activity is mainly useful for performing complex and reusable data mappings in one place rather than in the mappers of other activities. For example, say you have a process that has to map its input data into a different data structure and then both send it via a JMS message and log it to a file. The Mapper allows you to perform the mapping only once rather than twice (in both the Send JMS and Write to File activities).
You'll find that in real world projects, the mapper activity is quite often used to perform data mapping independently of other activities, it just gives a nicer structure to the processes. In contrast the Process Variables defined in the Process properties together with the Assign task are used much less frequently.
Here's a very simple example, where you use the Mapper activity once to compute a value (here the filename) and then use it in two different downstream activities (Create CSV File and Write File). Obviously, the Mapper activity becomes more interesting when the mapping is not as trivial as here (though even in this simple example, you only have one place to change how the filename is generated rather than two):
(Screenshots: the Mapper activity, the first use of the filename variable in Create File, and the second use of the filename variable in Write File.)
Process Variable/Assign Activity vs. Mapper Activity
The primary purpose of an Assign task is to store a variable at the process level. Any variable in an Assign task can be modified N times in a process. A Mapper, by contrast, is specifically used for introducing a new variable; you cannot change the same Mapper variable multiple times in a process.
Memory is allocated to a Process Variable when the process instance is created, but in the case of the TIBCO Mapper the memory is allocated only when the Mapper activity is executed in that process instance.
A Process Variable is allocated a single slot of memory which is used to update/modify the schema throughout the process instance execution, i.e. N Assign activities will access the same memory allocated to the variable, whereas using N Mappers for the same schema will allocate memory N times.
An Assign activity can also be used to accumulate the output of a TIBCO activity inside a group.

Multiple RemoteObjects - Best Practices

I have an application with about 20 models and controllers and am not using any particular framework. What is the best practice for using multiple remote objects in Flex performance-wise?
1) Method 1 - One per Component - Each component instantiates a RemoteObject for itself
2) Method 2 - Multiple in Application Root - Each controller is handled by a RemoteObject in the root
3) Method 3 - One in Application Root - Combine all controllers into one class and handle them with one RemoteObject
I'm guessing 3 will have the best performance but will be too messy to maintain and 1 would be the cleanest but would take a performance hit. What do you think?
Best practice would be "none of the above." Your Views should dispatch events that a controller or Command component would use to call your service(s) and then update your model on return of the data. Your Views would be bound to the data, and then the Views would automatically be updated with the new data.
My preference is to have one service Class per different piece or type of data I am retrieving--this makes it easier to build mock services that can be swapped for real services as needed depending on what you're doing (for instance if you have a complicated server setup, a developer who is working on skinning would use the mocks). But really, how you do that is a matter of personal preference.
So, where do your services live, so that a controller or command can reach them? If you use a Dependency Injection framework such as Robotlegs or Swiz, it will have a separate object that handles instantiating, storing, and returning instances of model and service objects (in the case of Robotlegs, it will also create your Command objects for you and can create view management objects called Mediators). If you don't use one of these frameworks, you'll need to "roll your own," which can be a bit difficult if you're not architecturally minded.
One thing people who don't know how to roll their own (such as the people who wrote the older versions of Cairngorm) tend to fall back on is Singletons. These are not considered good practice in this day and age, especially if you are at all interested in unit testing your work. http://misko.hevery.com/code-reviewers-guide/flaw-brittle-global-state-singletons/
A lot depends on how much data you have, how many times it gets refreshed from the server, and whether you have to support updates as well as queries.
Numbers 3 (and 2) are basically singletons, which tend to work best for large applications and large datasets. Yes, it would be complex to maintain yourself, but that's why people tend to use frameworks (PureMVC, Cairngorm, etc.); much of the complexity is handled for you. Caching data within the framework also enhances performance and response time.
The problem with 1 is that if you have to coordinate data updates per component, you basically need to write a stateless UI, always retrieving the data from the server whenever a component becomes visible.
Edit: I'm using Cairngorm - I have ~30 domain models (200 or so remote calls) and also use view models. Some of my models (remote objects) have tens of thousands of object instances (records); I keep a cache with write-back. All of the complexity is encapsulated in the controllers/commands. Performance is acceptable.
In terms of pure performance, all three of those should perform roughly the same. You'll of course use slightly more memory by having more instances of RemoteObject and there are a couple of extra bytes that get sent along with the first request that you've made with a given RemoteObject instance to your server (part of the AMF protocol). However, the effect of these things is negligible. As such, Amy is right that you should make a choice based on ease of maintainability and not performance.

Why does COM+ ignore the apartment threading model?

I have an STA COM component which is put into a COM+ application. The client creates several instances of the class in that component and calls their methods in parallel. The class is registered properly - the "ThreadingModel" for the corresponding class id is "Apartment".
I see several calls of the same method of the same class being executed in parallel inside the component - in actual component code. They are executed in the same process but in different threads.
What's happening? Is COM+ ignoring the threading model? Shouldn't STA model only allow one call at a time to be executed?
To avoid confusion, I won't use the term "object" in this answer. Instead let's use "class" and "instance". I'm confident we all understand the difference between them.
Marking your COM class with a ThreadingModel of "Apartment" means that instances of it will be loaded into an STA. The process creating those instances will determine whether they all go into the same STA, or into separate STAs.
As you've discovered, COM+ has loaded several instances into separate STAs.
The guarantee you get with an STA is that a single instance will never be accessed by multiple threads at the same time. Separate instances of the same class, if they are loaded into separate STAs, could certainly be accessed by different threads at the same time.
So the STA is really a way of protecting your instance data. Not your class data. Any "shared" or "static" data in your COM code will have to be protected by you.
The STA guarantees that your object is only accessed from a single, specific thread -- no protection of shared variables is required.
I remember that for VB6 there was a special mode (I do not recall its name): you could allow COM+ to spawn multiple STAs, each using a dedicated object. The variables of these objects, however, were treated as thread-local storage -- so although there are multiple instances of your COM class being accessed from multiple threads, no sharing of variables takes place. Is it possible that you are using this feature?
No, not really. STA literally means 'Single-Threaded Apartment', which means that only a single thread can run inside that apartment. The question then is: what is an apartment? An apartment is a logical space within a process, and its implementation can vary from framework to framework. Microsoft implements STA apartments as threads, so an STA (in Microsoft's COM context) effectively translates into a single-threaded thread, i.e. there can be multiple apartments/threads, but each apartment/thread is single-threaded in the STA case.
You can generalize this to the MTA yourself: from what I said above, an MTA is a multi-threaded "thread" in the COM context.
Have you passed the object to objects that live in another apartment? If so, did you need to marshal the interface before you did it? Did you happen to aggregate the free threaded marshaller?
Roughly speaking, if you pass an interface to your object to an object in another apartment (thread), then you must make sure to marshal the interface. If you do not, then you may find that your object can be called freely from the objects in the other apartment, since they are not calling through a proxy which handles the call correctly.
All calls to an object must be made on its thread (within its apartment). It is forbidden to call an object directly from another thread; using objects in this free-threaded manner could cause problems for applications. The implication of this rule is that all pointers to objects must be marshaled when passed between apartments. COM provides the following two functions for this purpose:
* CoMarshalInterThreadInterfaceInStream marshals an interface into a stream object that is returned to the caller.
* CoGetInterfaceAndReleaseStream unmarshals an interface pointer from a stream object and releases it.
These functions wrap calls to the CoMarshalInterface and CoUnmarshalInterface functions, which require the use of the MSHCTX_INPROC flag.
