Uploading an image into Apache OpenWhisk

Currently, I'm running OpenWhisk in a standalone Docker container. I've added simple JavaScript functions as actions and invoked them. Input arguments are passed as --param, or --param-file for JSON; the output is always JSON.
Is it possible to upload a simple picture (JPEG, PNG), make a simple change to it, and return the image?
My current approach is to run a Python script from within the JavaScript action, where the Python script uses the Pillow library to, for example, rotate an image. I did it this way because I wanted performance metrics via https://github.com/jthomas/openwhisk-metrics (which only works with JavaScript actions).
Serializing the image between the two processes has been tedious and highly inefficient.
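One workable pattern is to avoid raw binary entirely and pass the image base64-encoded as an action parameter, returning it the same way. Here is a minimal sketch as a pure-Python OpenWhisk action (the parameter name image and the 90-degree rotation are my assumptions, not anything from the question):

import base64
import io

from PIL import Image

def main(params):
    # OpenWhisk delivers all --param values in a single dict; binary
    # data has to travel as base64 text, since params and results are JSON.
    raw = base64.b64decode(params["image"])
    img = Image.open(io.BytesIO(raw))

    # Example transformation: rotate 90 degrees.
    rotated = img.rotate(90, expand=True)

    # Serialize back to PNG bytes and re-encode as base64 for the JSON result.
    buf = io.BytesIO()
    rotated.save(buf, format="PNG")
    return {"image": base64.b64encode(buf.getvalue()).decode("ascii")}

You would invoke it with something like wsk action invoke rotate --param image "$(base64 -w0 photo.png)" --result. Note that Pillow is not in the default Python runtime, so it would need to be packaged (virtualenv zip or Docker action). The same base64 trick works from a Node.js action too, which would keep the openwhisk-metrics instrumentation without shelling out to Python.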

Related

How to correctly dockerize and continuously integrate 20GB raw data?

I have an application that uses about 20GB of raw data. The raw data consists of binaries.
The files rarely - if ever - change. Changes only happen if there are errors within the files that need to be resolved.
The simplest way to handle this would be to put the files in their own git repository and create a base image from it, then build the application on top of the raw-data image.
Having a 20GB base image for a CI pipeline is not something I have tried and does not seem to be the optimal way to handle this situation.
The main reason for my approach here is to avoid extra deployment complexity.
Is there a best practice, "correct" or more sensible way to do this?
Huge mostly-static data blocks like this are, to me, probably the one big exception to the “Docker images should be self-contained” rule. I’d suggest keeping this data somewhere else and downloading it separately from the core docker run workflow.
I have had trouble in the past with multi-gigabyte images. Operations like docker push and docker pull in particular are prone to hanging up on the second gigabyte of individual layers. If, as you say, this static content changes rarely, there’s also a question of where to put it in the linear sequence of layers. It’s tempting to write something like
FROM ubuntu:18.04
ADD really-big-content.tar.gz /data
...
But even the ubuntu:18.04 image changes regularly (it gets security updates fairly frequently; your CI pipeline should explicitly docker pull it), and when it does, a new build will have to transfer this entire unchanged 20 GB block again.
Instead I would put the data somewhere like an AWS S3 bucket or similar object storage. (This is a poor match for source control systems, which (a) want to keep old content forever and (b) tend to be optimized for text rather than binary files.) Then I’d have a script that runs on the host to download that content, and then mount the corresponding host directory into the containers that need it.
curl -LO http://downloads.example.com/really-big-content.tar.gz
tar xzf really-big-content.tar.gz
docker run -v "$PWD/really-big-content:/data" ...
(In Kubernetes or another distributed world, I’d probably need to write a dedicated Job to download the content into a Persistent Volume and run that as part of my cluster bring-up. You could do the same thing in plain Docker to download the content into a named volume.)
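To sketch that last plain-Docker variant concretely, here is roughly what it could look like with the Docker SDK for Python (docker-py); the volume name, download URL, and application image are placeholders:

import docker

client = docker.from_env()

# Create (or reuse) a named volume to hold the static data.
client.volumes.create(name="really-big-content")

# One-off helper container that downloads and unpacks the archive
# into the volume; the URL is the same placeholder as above.
client.containers.run(
    "alpine",
    "sh -c 'wget -O- http://downloads.example.com/really-big-content.tar.gz"
    " | tar xzf - -C /data'",
    volumes={"really-big-content": {"bind": "/data", "mode": "rw"}},
    remove=True,
)

# Application containers then mount the same volume read-only.
client.containers.run(
    "my-app:latest",  # placeholder application image
    volumes={"really-big-content": {"bind": "/data", "mode": "ro"}},
    detach=True,
)

The download container runs once and is removed; every later docker run just mounts the already-populated volume, so the 20GB never enters an image layer.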

Backend Java application testing using JMeter

I have a Java program that runs on the backend. It's a kind of batch processing job: a text file contains messages, and the Java program fetches messages from the text file and loads them into a DB or mainframe. Instead of sequential fetching we need to try parallel fetching. How can I do this through JMeter?
I tried converting the program to a jar file and calling it through the class name.
I also tried pasting the code in and supplying the CSV (the text file converted to .csv) as an argument.
Both of these give a Sampler client exception.
Can you please advise how to proceed? Is there something we are missing, or another way to do it?
The easiest way to kick off multiple instances of your program is to run it via JMeter's OS Process Sampler, which can run arbitrary commands, print their output, and measure their execution time.
If you have your program as an executable jar you can kick it off like:
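For example (the jar path and its argument are placeholders):

java -jar /path/to/your-program.jar input.csv

In the sampler, put java in the Command field and add -jar, the jar path, and any program arguments as separate command parameter rows.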
See the How to Run External Commands and Programs Locally and Remotely from JMeter article for more information on the approach.

Is there any documentation about using Python preprocessing with H2O Steam?

The H2O Steam website says a Python preprocessing step can be shipped with the POJO as an optional .war, but I cannot find any step-by-step examples of doing this.
Where can I find more details about this? Or am I better off doing it in Java only?
The situation: I have a Python preprocessing program that mainly uses pandas to do some data munging before calling H2O to train/score the model. I want to use H2O Steam as the scoring engine. The website mentions that I can wrap the Python code and the H2O POJO/MOJO file together as a .war file and call it through a REST API, but I cannot find examples or details on how to proceed. Also, do I need to include Python libraries like pandas in the war file, and if so, how?

AWS Lambda: How To Upload & Test Code Using Python And Command Line

I am no longer able to edit my AWS lambda function using the inline editor because of the error, "Your inline editor code size is too large. Maximum size is 51200." Yet, I can't find a walk-through that explains how to do these things from localhost:
Upload a python script to Lambda
Supply "event" data to the script
View Lambda output
You'll need to create a deployment package for your code, which is just a zip archive but with a particular format. Instructions are in the AWS Python deployment documentation.
Then your script receives event data through its handler's event argument (the context object carries runtime information); starter information is in the AWS Python programming model documentation.
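To make that concrete, here is a rough boto3 sketch covering all three steps from localhost; the function name, zip filename, and event payload are placeholders, not anything AWS-specific:

import base64
import json

import boto3

lambda_client = boto3.client("lambda")

# 1. Upload: push the deployment package (a zip of your code).
with open("deployment-package.zip", "rb") as f:
    lambda_client.update_function_code(
        FunctionName="my-function",  # placeholder name
        ZipFile=f.read(),
    )

# 2. Supply event data: invoke synchronously with a JSON payload.
response = lambda_client.invoke(
    FunctionName="my-function",
    Payload=json.dumps({"key": "value"}),  # example event
    LogType="Tail",  # also return the last 4 KB of execution log
)

# 3. View output: the handler's return value and the captured log.
print(response["Payload"].read().decode())
print(base64.b64decode(response["LogResult"]).decode())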
Side note: once your Lambda code starts to get larger, it's often handy to move to some sort of management framework. Several have been written for Lambda. I use Apex, which is written in Go but works for any Lambda code; you might be more comfortable with Gordon (which has a great list of examples and is more active) or Kappa, both of which are written in Python.

Jenkins plugin auto import

I'm fairly new to Jenkins, but I am looking for a way to collect test results and feed them back into Jenkins.
What this means is: I have a bash script that collects metrics on a number of applications and checks whether certain files exist. I collect this data in a plain text file, basically as counters: 1/5, 2/5, 5/10, etc.
The output can be in whatever format I want, but I was wondering whether there is a good, clean process that can take this data file and display it nicely inside the Jenkins web interface.
We also use Trac, so if there is a Trac plugin that can do something similar, that would be good too.
Best practice would be to escape the values and pass them as parameters to a parameterized Jenkins build, or to save and capture them as a file. Parameters are finicky and subject to URL encoding, so I would personally pass files via shared storage such as S3. Your best bet is https://wiki.jenkins-ci.org/display/JENKINS/Remote+access+API
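As a sketch of that last suggestion, kicking off a parameterized build through the remote access API from your bash script's side can be as small as this (the Jenkins URL, job name, parameter, and API token are all placeholders):

import requests

# Trigger a parameterized build via the Jenkins remote access API.
resp = requests.post(
    "https://jenkins.example.com/job/metrics-report/buildWithParameters",
    auth=("myuser", "my-api-token"),  # user + API token, both placeholders
    params={"COUNTERS": "1/5 2/5 5/10"},  # example parameter
)
resp.raise_for_status()

Depending on your security settings, Jenkins may also require a CSRF crumb in the request.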
