Run test script in continuous integration using CircleCI?

We're using CircleCI for CI/CD. In the continuous integration process, I'd like to run a test script I wrote that generates some data for my app to use. Then another script will run and make sure the data was processed in a certain amount of time (performance test). Is there a way I can do this through CI/CD using CircleCI?
I have a Spark Streaming App that reads data from Kafka, processes and then puts it into a database.
I generate some data and send it to Kafka. Meanwhile, the Spark program is running (my script doesn't start it; I start it manually beforehand). The Spark app processes this data. My script just waits about 1 minute after it initially generates the data, then makes an API call to get some metrics on the data that Spark processed, basically performance metrics.
I'd like CircleCI to take the code from GitHub, build it, and deploy it to a "test" server that then runs my Spark app in the test environment, which has Spark/Kafka installed. Then I'd like CircleCI to run my custom script. My script will return a performance value in some variable, say X. I'd essentially like CircleCI to evaluate something like assert(X < 60) and mark the build as failed if the assertion does not hold. Is this possible?
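For reference, a pass/fail gate like the one described can be a plain shell step in a CircleCI job, since any non-zero exit code fails the build. This is a minimal sketch; check_perf and the 60-second threshold are stand-ins for the asker's custom script and limit:

```shell
#!/bin/sh
# check_perf exits non-zero when the measured duration breaches the threshold;
# in a real CircleCI job, X would come from the metrics API call.
check_perf() {
  X="$1"  # processing time in seconds, as reported by the metrics API
  if [ "$X" -ge 60 ]; then
    echo "FAIL: processing took ${X}s (limit 60s)"
    return 1
  fi
  echo "PASS: processing took ${X}s (limit 60s)"
}
```

In a `.circleci/config.yml` this would sit behind an ordinary `run:` step; CircleCI marks the job failed as soon as the script exits non-zero.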

I realise this question was asked a while ago, but if anyone has a similar requirement, this CircleCI pipeline runs dockerised integration and performance tests against a locally deployed API server.
Specific problems that needed solving:
wait for servers to start within the CircleCI script on the CLI using pwt
run a performance test against the API using wrk with a Lua script
run locally using Docker (OSX)
git clone https://github.com/simonmittag/jabba.git
circleci local execute --job integration
circleci local execute --job performance
*Disclaimer: I'm involved with some of the linked GitHub projects.

Related

Best way to test code with Airflow variables and connections on different environments

For our Airflow projects (running Airflow 2.0.1) I have implemented some general tests that verify DAG validity, check whether each DAG has an owner/email, check parsing time, and so on.
Now I am trying to set up a CI/CD pipeline that runs these tests and then pushes the DAGs to the Cloud Composer bucket. These tests, however, will obviously fail if I use any Airflow Connection or Variable, as these have not yet been created on the runners.
What I do not want to do is use mocking, as I would have to specify each connection/variable, which is a bit too much work for general tests. How do you deal with connections/variables for testing on different environments (development/testing/production)?
Since you're using Airflow 2 you can use the stable API to create or update variables in the desired environment. You just need to call that API from the CI/CD pipeline.
Check the official documentation for creating a variable and updating a variable.
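As a sketch, such a pipeline step could look like the following. The URL, the admin:admin credentials, and the variable name are placeholders; the stable REST API accepts POST /api/v1/variables to create a variable and PATCH /api/v1/variables/{key} to update one:

```shell
#!/bin/sh
# Create an Airflow Variable from a CI/CD step via the stable REST API.
# AIRFLOW_URL, the credentials, and my_var are placeholders for this sketch.
AIRFLOW_URL="${AIRFLOW_URL:-http://localhost:8080}"
VAR_KEY="my_var"
VAR_VALUE="some-environment-specific-value"

payload=$(printf '{"key": "%s", "value": "%s"}' "$VAR_KEY" "$VAR_VALUE")

# POST creates the variable; use PATCH "$AIRFLOW_URL/api/v1/variables/$VAR_KEY"
# with the same payload to update an existing one.
curl -s -X POST "$AIRFLOW_URL/api/v1/variables" \
  -H "Content-Type: application/json" \
  --user "admin:admin" \
  -d "$payload" || true  # tolerate failure when no Airflow server is reachable
```

Running one such call per connection/variable from the pipeline keeps each environment's values out of the test code entirely.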

Cypress running in parallel on AWS recording to old run

I am running Cypress in a pipeline that has two build actions (that use the same CodeBuild project) for running in parallel. After I manually initiate the pipeline, it passes with a success. The problem is that the test suite isn't run. It lists the link to the Cypress dashboard, but it links to an older run, not a new one. I am using --ci-build-id $CODEBUILD_INITIATOR per the Cypress documentation. Does anyone have an idea why I am not getting unique runs each time the pipeline is run?
This problem was due to the buildspec not being set up correctly to run machines in parallel: the batch build configuration was not set in CodeBuild. Quite simple really.
Originally the pipeline just ran two instances of the same CodeBuild project. The first few times it was run by initiating the pipeline manually, it worked and reported fine through the Cypress dashboard. Changing the buildspec to run a build-list solved the problem. I was too hung up on following the build-matrix example given in the Cypress documentation.

How can I run a task that requires multiple containers?

This is my first question so please presume ignorance and positive intent.
As part of our build pipeline I need to run what our dev team calls "unit tests." These unit tests are run via an ant target. Before running that ant target we must spin up, configure and partially populate (the ant target does some of the population) several containers including:
application server
ldap instance
postgres instance
It looks as if each task only supports a single container. Is there a straightforward way to get all of these running together? Ideally I could create a task that would allow me to specify a pod template with the commands running in one of the containers of that pod.
I realize that I could hack this together by using the openshift client or kubernetes actions, but I was hoping for something a bit more elegant. I feel like using either of those tasks would require that I build out status awareness, error checking, retry logic, etc. that is likely already part of the pipeline logic, and then parse the output of the ant run to determine whether all of the tests were successful. This is my first foray into Tekton, so accepted patterns or idioms would be greatly appreciated.
For greater context I see these tasks making up my pipeline:
clone git repo
build application into intermediate image
launch pod with all necessary containers
wait for all containers to become "ready"
execute ant target to run unit tests
if all tests pass build runtime image
copy artifacts from runtime image to external storage (for deployment outside of openshift)
Thank you for your time
Have a look at sidecars. The database must be up and running for the tests to execute, so start the database in a sidecar. The steps in the task will be started as soon as all sidecars are running.
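A minimal sketch of such a Task, written to a file so it can be applied with kubectl or tkn when a cluster is available; the image names, the ant target, and the postgres settings are assumptions standing in for the real setup:

```shell
#!/bin/sh
# Write a Tekton Task manifest whose sidecars (postgres here; the ldap and
# application-server containers would be added the same way) run alongside
# the step that executes the ant target.
cat > ant-unit-tests-task.yaml <<'EOF'
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: ant-unit-tests
spec:
  sidecars:
    - name: postgres
      image: postgres:13          # hypothetical version
      env:
        - name: POSTGRES_PASSWORD
          value: test
  steps:
    - name: run-unit-tests
      image: my-ant-image         # hypothetical image with ant installed
      script: |
        ant unit-tests            # hypothetical ant target
EOF
# kubectl apply -f ant-unit-tests-task.yaml   # when a cluster is available
```

Because Tekton only starts the steps once all sidecars are running, the "wait for all containers to become ready" stage of the pipeline above largely comes for free.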

CI based on docker-compose?

I am currently building a little application that requires some massively annoying software to be installed and running in the background. To ease the pain of developing, I wrote a set of docker-compose files that run the necessary daemons, create some jobs, and throw in some test data.
Now, I'd like to run this in a CI-like manner. I currently have Jenkins check all the different repositories and execute a shell script that calls docker-compose up --abort-on-container-exit. That gets the job done, but it feels like a hack, and I'm not such a huge fan of Jenkins.
What I want to ask is: is there a more beautiful way of doing this? Specifically, is there a CI that will
watch a set of git repositories,
re-execute docker-compose (possibly multiple times with different sets of parameters), and
nicely collect and split the logs and tell me which container exactly failed how?
(Optionally) is not some cloud service but installable on my local server?
If the answer to this is "write a Jenkins module", then fine, so be it.
I'm aware that there are options like gitlab-ci, but I'd like to keep the CI script in a fashion that can also be easily executed during development, before pushing to a repo.
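Whichever CI ends up running it, the "which container exactly failed how" part can be scripted around docker-compose itself. A sketch, assuming the stack was started with --abort-on-container-exit; the helper only parses "name exit_code" pairs, so the docker-dependent part stays in one place:

```shell
#!/bin/sh
# report_failures reads "name exit_code" lines (one per container) and prints
# every non-zero one; it returns non-zero if any container failed, so the CI
# job fails accordingly.
report_failures() {
  failed=0
  while read -r name code; do
    if [ "$code" -ne 0 ]; then
      echo "container ${name} failed with exit code ${code}"
      failed=1
    fi
  done
  return $failed
}

# Real usage (requires docker / docker-compose):
#   for c in $(docker-compose ps -aq); do
#     docker inspect -f '{{.Name}} {{.State.ExitCode}}' "$c"
#   done | report_failures
```

Combined with `docker-compose logs --no-color <service> > <service>.log` per service, this gives the split, per-container failure report without any CI-specific plugin.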

any library to check marathon deployment status

I have a bunch of marathon docker tasks that are run on our test deployment machines.
There is a Jenkins CI job that triggers the deployment of a whole bunch of Docker containers run on a Marathon-Mesos cluster (3 Mesos slaves, 1 master, and 1 Marathon instance).
There is another downstream Jenkins job (an automated test suite) that is triggered after the above job. Presently, we wait a sufficient amount of time for the deployment to complete, and only then proceed with the automation test suite. I want to change this behavior. I know Marathon exposes REST APIs with which I can determine whether I am good to go - after all the containers are deployed and all health checks are passing - for running the automation test suite.
The question is: is there any library already out there for Marathon that I can reuse to accomplish the above task? I do not want to reinvent the wheel.
When I posted this question, I actually had a Java library in mind, but forgot to mention that. I find @michael's libraries are also very good, but this is what I settled upon: marathon-client. I think I saw it while browsing through the mesosphere repositories but somehow missed it.
This is the library: marathon-client
I've successfully been using the two following libs:
Go: gambol99/go-marathon
Python: thefactory/marathon-python
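If a full client library turns out to be overkill, the check itself is small enough to script directly against Marathon's REST API: GET /v2/deployments returns an empty JSON array once nothing is in flight, and Marathon keeps a deployment listed until its health checks (when configured) pass. A sketch, with the URL and timeout as placeholders:

```shell
#!/bin/sh
# Poll Marathon until no deployments are in flight (GET /v2/deployments
# returns "[]"), then let the downstream test-suite job proceed.
wait_for_deployments() {
  url="$1"
  timeout="${2:-300}"   # give up after this many seconds
  elapsed=0
  while [ "$(curl -s "$url/v2/deployments" | tr -d '[:space:]')" != "[]" ]; do
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "timed out waiting for Marathon deployments"
      return 1
    fi
    sleep 5
    elapsed=$((elapsed + 5))
  done
  echo "all Marathon deployments finished"
}

# Usage (URL is a placeholder):
#   wait_for_deployments http://marathon.example:8080 600
```

Run as the first step of the downstream Jenkins job, this replaces the fixed sleep with an actual readiness check.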
