AWS Lambda functions: multiple JARs vs single JAR - aws-lambda

Let's say you have a feature to reserve a dining table broken down into multiple Lambda functions, e.g.:
1. RestaurantsLambda - gets the list of restaurants, etc.
2. BookingLambda - takes payment, confirms the booking, etc.
3. EmailLambda - sends confirmation emails.
Now, would you place all the above Lambdas in one JAR, or have a JAR per Lambda?

Probably all the functions are very small. If that is the case, at the end of the day it will not make much difference. But if they are big, or the functions have different dependencies, it will make a difference, but...
The best practice is to keep in the Lambda only the code needed by the function. This includes dependencies and libraries, and of course the main code. This best practice is based on the fact that for Lambda execution the code must be downloaded at the first invocation (after a while, the code is discarded and the next invocation will be treated as a first one again). In Java there is an extra pain, which is the class loading that happens before code execution.
So the best way to understand this answer is: the bigger the code base, the more time needed to load the function, and the more it will cost you.
Long story short: one JAR per function!
Extra point: in Java you must be very cautious about which libraries you import. With the aws-sdk, for example, you should only pull in the modules that you'll actually need instead of importing the entire aws-sdk. This will keep your functions slim and ... read the last paragraph again... ;-)
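As an illustration, here is a minimal sketch of what a slim EmailLambda handler could look like, assuming SES is used to send the confirmation mail; the class and field names are made up for the example. The point is that the deployment JAR only needs the SES module of the SDK (e.g. the aws-java-sdk-ses artifact) plus the Lambda core library, not the entire aws-sdk:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.simpleemail.AmazonSimpleEmailService;
import com.amazonaws.services.simpleemail.AmazonSimpleEmailServiceClientBuilder;
import com.amazonaws.services.simpleemail.model.Body;
import com.amazonaws.services.simpleemail.model.Content;
import com.amazonaws.services.simpleemail.model.Destination;
import com.amazonaws.services.simpleemail.model.Message;
import com.amazonaws.services.simpleemail.model.SendEmailRequest;

public class EmailLambda implements RequestHandler<BookingConfirmation, String> {

    // Created once per container and reused across warm invocations.
    private final AmazonSimpleEmailService ses =
            AmazonSimpleEmailServiceClientBuilder.defaultClient();

    @Override
    public String handleRequest(BookingConfirmation booking, Context context) {
        SendEmailRequest request = new SendEmailRequest()
                .withSource("bookings@example.com") // illustrative sender address
                .withDestination(new Destination().withToAddresses(booking.getEmail()))
                .withMessage(new Message()
                        .withSubject(new Content("Your table is booked"))
                        .withBody(new Body(new Content("See you soon!"))));
        return ses.sendEmail(request).getMessageId();
    }
}

// Simple input POJO for the example.
class BookingConfirmation {
    private String email;
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}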

Related

How to apply machine learning to streaming data in Apache NiFi

I have a processor that generates time series data in JSON format. Based on the received data I need to make a forecast using machine learning algorithms in Python, then write the new forecast values to another flow file.
The problem is: when such a Python script runs, it must perform many heavy preprocessing operations: queries to a database, creating a complex data structure, initializing forecasting models, etc.
If you use ExecuteStreamCommand, then the script will be run again for every flow file. Is this true?
Can I make a Python script in NiFi that starts once and receives flow files many times, storing the history of previously received data? Or do I need to make an HTTP service that will receive data from NiFi?
You have a few options:
Build a custom processor. This is my suggested approach. The code would need to be in Java (or Groovy, which provides a more Python-like experience) but would not have Python dependencies, etc. However, I have seen examples of this approach for ML model application (see Tim Spann's examples) and this is generally very effective. The initialization and individual flowfile trigger logic is cleanly separated, and performance is good.
Use InvokeScriptedProcessor. This will allow you to write the code in Python and separate the initialization (pre-processing, DB connections, etc. -- onScheduled in NiFi processor parlance) from the execution phase (onTrigger). Some examples exist, but I have not personally pursued this with Python specifically. You can use Python dependencies but not "native modules" (i.e. compiled C code), as the execution engine is still Jython.
Use ExecuteStreamCommand. Not strongly recommended. As you mention, every invocation would require the preprocessing steps to occur, unless you designed your external application in such a way that it ran a long-lived "server" component and each ESC command sent data to it and returned an individual response. I don't know what your existing Python application looks like, but this would likely involve complicated changes. Tim has another example using CDSW to host and deploy the model and NiFi to send it data via HTTP to evaluate.
Make a custom processor that can do that. Java is more appropriate here. I believe you can do pretty much everything with Java; you just need to find the libraries. Yes, there might be some initialization and preprocessing concerns, but those can be handled in the init function of the NiFi processor, which will allow you to preserve the state of certain components.
In my use case I had to build a custom processor that could take in images and count the number of people in each image. For that, I loaded a deep learning model once in the init method, and afterwards the onTrigger method could reuse the reference to that model every time it processed an image.
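To make the initialization/per-flow-file split concrete, here is a minimal sketch of such a custom processor, assuming a hypothetical PeopleCounter model class (stubbed out below); the NiFi AbstractProcessor API and lifecycle annotation are real, the model details are placeholders:

import java.io.InputStream;
import java.util.Collections;
import java.util.Set;

import org.apache.nifi.annotation.lifecycle.OnScheduled;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;
import org.apache.nifi.processor.exception.ProcessException;

public class CountPeopleProcessor extends AbstractProcessor {

    public static final Relationship REL_SUCCESS = new Relationship.Builder()
            .name("success")
            .description("Flow files annotated with a people.count attribute")
            .build();

    // Expensive-to-build state, initialized once per processor start.
    private volatile PeopleCounter model;

    @OnScheduled
    public void setup(final ProcessContext context) {
        // Runs once when the processor is started: load the model, open DB
        // connections, build lookup structures, etc.
        model = PeopleCounter.load("/path/to/model");
    }

    @Override
    public void onTrigger(final ProcessContext context, final ProcessSession session)
            throws ProcessException {
        FlowFile flowFile = session.get();
        if (flowFile == null) {
            return;
        }
        // Runs per flow file: reuse the already-initialized model.
        final long[] people = new long[1];
        session.read(flowFile, in -> people[0] = model.count(in));
        flowFile = session.putAttribute(flowFile, "people.count", String.valueOf(people[0]));
        session.transfer(flowFile, REL_SUCCESS);
    }

    @Override
    public Set<Relationship> getRelationships() {
        return Collections.singleton(REL_SUCCESS);
    }

    /** Hypothetical stand-in for an expensive-to-load deep learning model. */
    static class PeopleCounter {
        static PeopleCounter load(String path) { return new PeopleCounter(); }
        long count(InputStream image) { return 0; } // real inference would go here
    }
}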

How to avoid lambda trigger recursive call

I've written a Lambda function that is triggered via an S3 bucket's putObject event. I am modifying the headers of an object post-upload: downloading the object and re-uploading it with the appropriate headers. But because the function itself uses putObject to re-upload the object, the Lambda triggers itself.
Three options:
Use a different API to upload your changes than the one that you have an event on. I.e., if your Lambda is triggered by PUT, then use a POST to modify the content afterwards (tough to do since POST isn't supported well by SDKs AFAIK, so this may not be an option).
Track usage and have a small guard at the beginning of your handler to short-circuit if the only changes made to a file are ones you made (see the sketch after this list). If you can't programmatically detect the headers you've set, you'll probably need a small DynamoDB table or similar to keep track of which files you've already touched. This will let you abort immediately and only be charged the minimum 100ms fee.
Reorganize your project to have an 'ingest' bucket and an output bucket. Unprocessed files are put into the former, modified, and then placed into the latter. This has a number of advantages. The first is that you don't end up with the current situation, so that's a plus. The second is that you don't have whatever process consumes these modified files potentially pulling an unmodified version. The third is that you get better insight into the process - if something goes wrong, it's easy to see which batches of files have undergone which process.
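A minimal sketch of the guard in option 2, assuming the first pass marks the object with a user-defined metadata entry and that a copy-in-place is used to rewrite the headers; the class name, metadata key and header change are illustrative, not taken from the question:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;

public class HeaderRewriter implements RequestHandler<S3Event, Void> {

    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    @Override
    public Void handleRequest(S3Event event, Context context) {
        event.getRecords().forEach(record -> {
            String bucket = record.getS3().getBucket().getName();
            String key = record.getS3().getObject().getKey();

            // Guard: if we already touched this object, stop here so the
            // function does not keep triggering itself.
            ObjectMetadata current = s3.getObjectMetadata(bucket, key);
            if ("true".equals(current.getUserMetaDataOf("processed"))) {
                return;
            }

            ObjectMetadata updated = new ObjectMetadata();
            updated.setContentType("application/json"); // illustrative header change
            updated.addUserMetadata("processed", "true"); // marker checked by the guard

            // Copying the object onto itself replaces the headers without a
            // separate download/upload round trip.
            s3.copyObject(new CopyObjectRequest(bucket, key, bucket, key)
                    .withNewObjectMetadata(updated));
        });
        return null;
    }
}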
Overall, I'd recommend option 3 for you, though I know that in my lazier moments I might try to opt for 1 or 2.
Either way, good luck.

First call to Julia is slow

This question deals with the first-load performance of Julia.
I am running a Julia program from the command line. The program is intended to be an API where the user of the API doesn't have to initialize internal objects.
module Internal

type X
    c::T1
    d::T2
    # ... more fields
end

function do_something(a::X, arg::Any)
    # some function
end

export do_something

end
API.jl
using Internal

const i_of_X = X()

function wrapper_do_something(args::Any)
    do_something(i_of_X, args)
end
Now, this API.jl is exposed to third-party users so that they don't have to bother with instantiating the internal objects. However, API.jl is not a module and hence cannot be precompiled. As there are many functions in API.jl, the first load takes a "very" long time.
Is there a way to improve the performance? I tried wrapping API.jl in a module too, but I don't know if wrapping const-initialized variables in a module is the way to go. I also get a segmentation fault when doing so (some of the consts are database connections and database collections, along with other complex objects).
I am using v0.5 on OS X.
[EDIT]
I did wrap API.jl in a module but there is no performance improvement.
I dug deeper, and a big performance hit comes from the first call to the linear regression function (GLM-module-based OLS, lm(y ~ X, df)). The df has only 2 columns and 3 rows, so it's not a run-time issue but compilation slowness.
The other big hit comes from calling a highly overloaded function. The overloaded function fetches data from the database and can accept a variety of input formats.
Is there a way to speed these up? Is there a way to fully precompile the Julia program?
For a little more background, the API-based program is called once via the command line, and any persistent first-compilation advantages are lost as soon as the command line closes the Julia process.
$ julia run_api_based_main_func.jl
One hacky way to use the compilation benefits is to somehow copy/paste the code into an already active Julia process. Is this doable/recommended? (I am desperate to make it fast; waiting 15-20 s for a 2 s analysis doesn't seem right.)
It is OK to wrap const values in a module. They can be exported, as need be.
As Fengyang said, wrapping independent components of a larger design in modules is good practice and will help in this situation. When there is a great deal going on inside a module, the precompilation time that accompanies each initial function call can add up. There is a way to avoid that -- precompile the contents before using the module:
__precompile__(true)
module ModuleName
# ...
end # module ModuleName
Please Note (from the online help):
__precompile__() should not be used in a module unless all of its dependencies are also using __precompile__(). Failure to do so can result in a runtime error when loading the module.

How would you implement a Workflow system?

I need to implement a Workflow system.
For example, to export some data, I need to:
Use an XSLT processor to transform an XML file
Use the resulting transformation to convert it into an arbitrary data structure
Use the result (file or data) to generate an archive
Move the archive into a given folder.
I started by creating two types of classes: Workflow, which is responsible for adding new Step objects and running them.
Each Step implements a StepInterface.
My main concern is that every step depends on the previous one (except the first), and I'm wondering what would be the best way to handle such a problem.
I thought of looping over the steps and providing each step the result of the previous one (if any), but I'm not really happy with it.
Another idea would have been to allow a "previous" Step to be set on a Step, like:
$s = new Step();
$s->setPreviousStep($step);
But I lose the utility of a Workflow class.
Any ideas or advice?
By the way, I'm also concerned about the success or failure of the whole workflow, meaning that if any step fails I need to roll back or clean up the previous data.
I implemented a similar workflow engine last year (closed source though, so no code that I can share). Here are a few ideas based on that experience:
StepInterface - can do what you're doing right now - abstract a single step.
Additionally, provide a rollback capability, but I think a step should know when it fails and clean up before proceeding further. An abstract step can handle this for you (template method).
You might want to consider branching based on the StepResult - so you could do a StepMatcher that takes a stepResult object and a conditional - its sub-steps are executed only if the conditional returns true.
You could also do a StepException to handle exceptional flows if a step errors out. Ideally, this is something that you can define either at a workflow level (do this if any step fails) and/or at a step level.
I took the approach that a step returns a well-defined structure (StepResult) that's available to the next step. If there's bulky data (say a large file, etc.), then a URI/locator for the resource is passed in the StepResult.
Your workflow is going to need a context to work with - in the example you quote, this would be the name of the file, the location of the archive and so on - so think of a WorkflowContext
Additional thoughts
You might want to consider the following too - if this is something that you're planning to implement as a large scale service/server:
Steps could be in libraries that were dynamically loaded
Workflow definition in an XML/JSON file - again, dynamically reloaded when edited.
Remote invocation and callback - submit a job to a remote service with a callback API. When the remote service calls back, the workflow execution is picked up at the subsequent step in the flow.
Parallel execution where possible, etc.
Stateless design
Rolling back can be fit into this structure easily, as each Step will implement its own rollback() method, which the workflow can call (in reverse order preferably) if any of the steps fail.
As for the main question, it really depends on how sophisticated you want to get. On a basic level, you can define a StepResult interface, which is returned by each step and passed on to the next one. The obvious problem with this approach is that each step should "know" which implementation of StepResult to expect. For small systems this may be acceptable; for larger systems you'd probably need some kind of configurable mapping framework that can be told how to convert the result of the previous step into the input of the next one. So Workflow calls Step, Step returns StepResult, Workflow then calls StepResultConverter (which is your configurable mapping thingy), StepResultConverter returns a StepInput, Workflow then calls the next Step with StepInput, and so on.
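To make that chain concrete, here is a minimal sketch (in Java for illustration; the same shape maps directly onto PHP interfaces) of a Workflow that feeds each Step the previous StepResult and rolls back in reverse order on failure. All names here are assumptions, not an established API:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

interface StepResult { } // marker for whatever a step produces

interface Step {
    StepResult run(StepResult previous) throws Exception; // previous is null for the first step
    void rollback();                                       // undo this step's side effects
}

class Workflow {
    private final List<Step> steps = new ArrayList<>();

    Workflow add(Step step) {
        steps.add(step);
        return this;
    }

    StepResult run() {
        Deque<Step> completed = new ArrayDeque<>();
        StepResult previous = null;
        try {
            for (Step step : steps) {
                previous = step.run(previous); // feed each step the prior result
                completed.push(step);
            }
            return previous;
        } catch (Exception e) {
            // Roll back already-completed steps in reverse order, as suggested above.
            while (!completed.isEmpty()) {
                completed.pop().rollback();
            }
            throw new RuntimeException("Workflow failed and was rolled back", e);
        }
    }
}

A StepResultConverter (or a shared WorkflowContext) can be slotted between the step.run() calls once the simple hand-off stops being enough.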
I've had great success implementing workflows using a finite state machine (FSM). It can be as simple or as complicated as you like, with multiple workflows linking to each other. Generally an FSM can be implemented as a simple table, where the current state of a given object is tracked in a history table by keeping a journal of the transitions on the object and simply retrieving the last entry. So a transition would be of the form:
nextState = TransLookup(currState, Event, [Condition])
If you are implementing a front end you can use this transition information to construct a list of the events available to a given object in its current state.
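A minimal sketch of that transition lookup, again in Java for illustration and with made-up state and event names; a real system would persist the journal of transitions rather than keep the state in memory:

import java.util.HashMap;
import java.util.Map;

class TransitionTable {
    // (currentState, event) -> nextState
    private final Map<String, String> table = new HashMap<>();

    void define(String currState, String event, String nextState) {
        table.put(currState + "|" + event, nextState);
    }

    String next(String currState, String event) {
        String nextState = table.get(currState + "|" + event);
        if (nextState == null) {
            throw new IllegalStateException("No transition from " + currState + " on " + event);
        }
        return nextState;
    }

    public static void main(String[] args) {
        TransitionTable t = new TransitionTable();
        t.define("transformed", "archive", "archived");
        t.define("archived", "move", "done");
        System.out.println(t.next("transformed", "archive")); // prints "archived"
    }
}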

Is modular approach in Drupal good for performance?

Suppose I have to create functionalities A, B and C through custom coding in Drupal using hooks.
Either I can club three of them in custom1.module or I can create three separate modules for them, say custom1.module, custom2.module and custom3.module.
Benefits of creating three modules:
Clean code
Easily searchable
Mutually independent
Easy to commit in multi-developer projects
Cons:
Every module entry gets stored in the database and requires a query.
To what extent does it mar the performance of the site?
Is it better to create a single large custom module file for the sake of reducing database queries or break it into different smaller ones?
This issue might be negligible for small-scale websites; consider instead the case of large-scale, performance-oriented sites.
I would code it based on how often I need functions A, B and C.
Actual example:
I made a module which had three needs:
1) Send periodic emails based on user preference. Let's call this function A.
2) Custom content made in a module. Let's call this function B.
3) Social integration. Let's call this function C.
So what I did: as function A is only called once a week, I made a separate module for it.
As for functions B and C, I put them together as they would always be called together.
If you have problems with performance, then check out this link. It's a good resource for performance improvement.
http://www.vmirgorod.name/10/11/5/tuning-drupal-performance
It lists a nice module called boost. I have not used it but I have heard good things about it.
cheers,
Vishal
Drupal .module files are all loaded with every page load. There is very little performance related that can be gained or lost simply by separating functions into different .module files. If you are not using an op code cache, then you can get improved performance by creating .inc files and referencing those files in the menu items in hook_menu, such that those files only get loaded when the menu items are accessed. Then infrequently called functions do not take up memory space.
File separation in general is a very small performance issue compared to how the module is designed with respect to caching, memory use, and/or database access and structure. Let the code maintenance and dependency issues drive when you do or do not create separate modules.
I'm actually interested in:
Is it better to create a single large custom module file for the sake of reducing database queries, or break it into different smaller ones?
I was poking around and found a few things regarding benchmarking the database. Maybe the suggestion here is to fire up the dev version and test. Check out DB benchmarking.
Now, I understand that doesn't answer the question specifically, but I'd have to say it's unique to each environment. I hate to give that type of answer, but I truly believe it is. It depends on the modules installed, versions used, hardware, and OS tunables, among many other things.
