What is the best way to set up the API application using Protractor? - continuous-integration

I'm setting up my front-end application to use continuous integration in CircleCI. Unit tests work fine, but end-to-end tests are not.
The problem is that the e2e tests require the backend (API) server to be running, and ours lives in a completely separate application. So, what is the best way to set up this backend server with CI in mind?
I thought about uploading it to Heroku, but then I'd have to keep manually updating the code via git. Another option was to download the code to the CI VM and run the server directly there, but that is a lot of work (installing Ruby, Postgres, gems...), and it doesn't seem like the best option either.
Has anyone been through the same situation? How do you usually deal with this kind of thing?

I ended up doing everything inside the CI. I made some custom scripts that configure the backend project every time the test suite is run. I also cached the folder with the backend code and the gems (which were taking ~2 min to install).
The configuring part now adds ~20 seconds to the total time, so it wasn't a big deal. Although I still think this is probably not the best way to do it, it has some advantages, such as not having to worry about updating the backend code (it pulls from master automatically) or its database (it runs rake db:reset after updating the code).
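For reference, a minimal sketch of what such a script can look like (the repository URL and directory name here are hypothetical, and a real CI script would also need to boot the server before the e2e suite starts):

```python
import os

BACKEND_DIR = "backend"                       # kept in the CI cache between runs
REPO = "git@github.com:acme/backend-api.git"  # hypothetical repository URL

def setup_commands(already_cloned):
    """The shell steps the CI script performs, in order."""
    cmds = [] if already_cloned else [["git", "clone", REPO, BACKEND_DIR]]
    return cmds + [
        ["git", "-C", BACKEND_DIR, "pull", "origin", "master"],  # backend tracks master
        ["bundle", "install", "--path", "vendor/bundle"],        # gems land in the cached dir
        ["bundle", "exec", "rake", "db:reset"],                  # rebuild the test database
    ]

# In CI, run each step (the bundle/rake steps from inside BACKEND_DIR), e.g.:
# for cmd in setup_commands(os.path.isdir(BACKEND_DIR)):
#     subprocess.check_call(cmd)
```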

Assuming the API server is running somewhere, configure the front-end application to point there while in the test/CI environment, at least to start out. If there are multiple API environments, choose the one that most closely matches the front-end environment (e.g. dev, staging, etc.).
It gets more complicated if/when you need to run the e2e tests each time the API is built or match up specific build versions of the front-end and the API. In that case you will have to run the API server as part of the test.

Related

Using VS Code debugger for serverless lambda flask applications

I have been creating some Lambda functions on AWS using the Serverless Framework, Flask and SLS WSGI. There are some DynamoDB tables involved, but they should not matter in this case.
The problem I am facing is that I cannot debug the whole thing end to end. I am able to run sls wsgi serve and get a local instance of my Lambda functions running, happy days. However, I am a little bit spoiled by other dev tools, languages and IDEs (even just Flask itself) that allow me to set breakpoints, see the scope, step through, etc. So I would really like to be able to get that working here as well.
I tried launching the sls command mentioned above in a launch configuration inside VS Code, with no luck. The next thing I tried was to run the default Flask launch config, but that obviously didn't include all the configuration stored in the sls.yml file, which is essential for accessing the local DynamoDB instance.
The last thing I tried was to attach to ptvsd at the end of my app.py file: I would hit a wait action from ptvsd, then attach the debugger in VS Code to the specified port, which seems to succeed and resume code execution. However, it seems that sls wsgi runs the file twice, so the attaching happens for the first instance and not the second, which then does not trigger a breakpoint when I try to execute an API call through Postman.
I guess I could include the wait step everywhere manually and attach for each method I am trying to debug inside the code instead of in the IDE, but that seems like overkill and not very convenient.
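One way to keep a single wait_for_attach() call without sprinkling it through the code is to gate the attach behind environment variables, so only the intended process attaches. This is a sketch under assumptions: ENABLE_PTVSD is a made-up flag, and which variable distinguishes the second process from the first needs to be verified (e.g. by printing os.environ in both):

```python
import os

def should_attach_debugger():
    # Only attach when explicitly requested, and only in the process that
    # actually serves requests. The second variable checked here is an
    # assumption -- inspect os.environ in both processes to find a reliable one.
    return (os.environ.get("ENABLE_PTVSD") == "1"
            and os.environ.get("WERKZEUG_RUN_MAIN") == "true")

if should_attach_debugger():
    import ptvsd
    ptvsd.enable_attach(address=("0.0.0.0", 5678))  # port must match the launch config
    ptvsd.wait_for_attach()                         # blocks until VS Code attaches
```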
I have been looking for answers online and reading through the docs, but could find nothing further.
I figured out that I can use "Attach using Process Id". It is, however, a little tricky because there are always two processes running in the list (with different PIDs). It's not great, but it does the trick.
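The corresponding launch configuration can look like this (a sketch of a VS Code launch.json entry; ${command:pickProcess} opens the process picker, which is where the two similar PIDs show up):

```json
{
    "name": "Attach by Process ID",
    "type": "python",
    "request": "attach",
    "processId": "${command:pickProcess}"
}
```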
One technique I have found useful, albeit in a Node environment but it should apply here too, is to make use of unit testing as a way to execute code locally, with the ability to tie in a debugger, as well as to make use of mocking to stub away external dependencies such as AWS services (S3, DynamoDB, etc.). I wrote a blog post about setting this up for Node, but you may find it useful as a way to consider setting things up with Python as well: https://serverless.com/blog/serverless-local-development/
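The same idea in Python, as a minimal sketch: the handler takes its DynamoDB client as a parameter, so a test can substitute a stub and run it locally with a debugger attached. The handler, table name and item shape here are all hypothetical:

```python
from unittest.mock import MagicMock

def get_user(dynamodb, table, user_id):
    """Handler logic under test, isolated from real AWS."""
    resp = dynamodb.get_item(TableName=table, Key={"id": {"S": user_id}})
    return resp.get("Item")

# Stub the client instead of talking to DynamoDB (local or remote)
dynamodb = MagicMock()
dynamodb.get_item.return_value = {"Item": {"id": {"S": "42"}, "name": {"S": "Ada"}}}

item = get_user(dynamodb, "users", "42")
assert item["name"]["S"] == "Ada"
dynamodb.get_item.assert_called_once_with(TableName="users", Key={"id": {"S": "42"}})
```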
However, in the world of serverless development, it ultimately doesn't matter how sophisticated your local development environment gets; you will have to test in the cloud environment as well. The unit testing technique I described is good for catching basic syntactical and logical errors, but you will still need to deploy into the cloud and test in that environment. It's one of the reasons we at Serverless are working very hard on improving the time it takes to deploy to the cloud, so that testing in AWS can replace local testing.

MicroService with DCOS

I decided to move to a microservice architecture, dividing a project into multiple services and running those services on DCOS. It really gives a good story for project deployment and maintenance, but it makes the development process complex.
For developers it was easy to run the application locally while implementation was in progress. Now the project is divided into multiple services that run on DCOS, which requires a good deal of configuration, so testing the application mid-implementation has become a nightmare for developers.
Is anyone using DCOS with microservices? Can you please suggest what process you are following for internal development?
DCOS is just a tool for your deployment, so in order to give a better answer you'd have to share more about your technology stack in your question. But here some general thoughts: There are different types/levels of testing and there are different considerations for each.
On the unit level - This depends on what technology you use for implementing your services, and is the reason why languages like Go are becoming more and more popular for server development. If you use Go, for example, you can easily run any service you are currently developing locally (not containerized) on the dev machine, and you can easily run the attached unit tests. You would either run dependent services locally or mock them up (probably depending on effort). You may also prefer asking each service team to provide mock services as part of their regular deliveries.
You will also require special environment settings and service configuration for the local environment. So, summarized, this approach requires you to have the means to run services locally, and depending on the implementation technologies you use it will be easier or harder.
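The answer is framed around Go, but mocking a dependent service is language-agnostic. A sketch in Python using only the standard library: stand up a throwaway HTTP server that answers like the dependency, and point the service under development at its URL (the endpoint and payload are invented for illustration):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class MockInventoryService(BaseHTTPRequestHandler):
    """Answers like the real inventory service, with canned data."""
    def do_GET(self):
        body = json.dumps({"sku": "abc", "stock": 3}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), MockInventoryService)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The service under development would be configured with this base URL instead
url = f"http://127.0.0.1:{server.server_address[1]}/inventory/abc"
data = json.loads(urlopen(url).read())
server.shutdown()
```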
On the deployment/integration level - Setting up a minimal cluster on the dev's local machine and/or using dedicated testing and staging clusters. This allows you to test the services including containerization, and in a more final deployment environment with dependencies. You would probably write special test clients for this type of test. This approach will also require you to have separate environment settings, configuration files, etc. for the different environments. Tools like Jaeger are becoming popular for helping with debugging errors across multiple services here.
Canary testing. Like the name suggests - You deploy your latest service version to a small portion of your production cluster in order to test it on a limited number of users first before rolling it out to the masses. In this stage you can run user level tests and in fact your users become the testers, so it is to be used carefully. Some organizations prefer to have special beta-type-users that will only get access to those environments.

How to minimise differences between a development and production server?

A lot of the time, we build a web application on our own personal machine, with a local server that comes in packages like WAMP or XAMPP. But the configuration we have on our system hardly ever matches 100% with the server where we host that application. This adds a lot to the debugging complexity, especially when the server you host on is configured to hide errors. Relying on error logs to debug everything is an uncomfortable option, and one that is not even guaranteed to be available.
What are the measures we can take to minimise such differences?
You might say it depends on the web server, as the configuration can differ from server to server. Even in that case, or on shared hosting, I would like to know the pointers to consider before hosting an application, or even before starting to build it.
We use a staging server that is a clone of our deployment server. Running code through the staging server (using version control) is both a fast and a reliable way to ensure that new code and code changes work as expected in the live environment.
That said, we use Ubuntu servers and I've opted to use Ubuntu as my development environment as well, makes stuff so much easier.

Load-testing a web application

How does one go about testing the server-side performance of a web application?
I'm working on a small web application (specifically, it will solely be responding to AJAX requests with database rows). I want to see how it performs under load. However, I cannot upload it to an internet host right away. The development environment is, however, part of the local intranet, so I can use as many machines as I want to hammer the development server, presumably using Python in conjunction with urllib2.
My question is, is such an approach really accurate for determining the high-load performance of a server-side script? Is there a better way to do this? Am I missing something here?
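As a starting point, here is a sketch of the urllib-based approach (urllib.request in modern Python; the endpoint URL and request counts are placeholders). The fetch parameter only exists so the function can be exercised without a live server:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def hammer(url, total_requests=200, concurrency=20, fetch=urlopen):
    """Fire GET requests at `url` from a thread pool; return per-request latencies (seconds)."""
    def timed_get(_):
        start = time.perf_counter()
        fetch(url).read()               # read the full body, like a real client would
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(timed_get, range(total_requests)))

# latencies = sorted(hammer("http://dev-server.local/ajax/rows"))  # hypothetical endpoint
# print("p95:", latencies[int(0.95 * len(latencies))])
```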
Have you tried a tool like Apache JMeter? http://jmeter.apache.org/

Running test on Production Code/Server

I'm relatively inexperienced when it comes to Unit Testing/Automated Testing, so pardon the question if it doesn't make any sense.
The current code base I'm working on is so tightly coupled that I'll need to refactor most of the code before ever being able to run unit tests on it, so I read some posts and discovered Selenium, which I think is a really cool program.
My client would like specific automated tests to run every ten minutes on our production server to ensure that our site is operational, and that certain features/aspects are running normally.
I've never really thought to run tests against a production server because you're adding additional stress to the site. I always thought you would run all tests against a staging server, and if those pass, you can assume the production site is operational as long as the hosting provider isn't experiencing an issue.
Any thoughts on your end on testing production code on the actual production server?
Thanks a lot guys!
Maybe it would help if you thought of the selenium scripts as "monitoring" instead of "testing"? I would hope that every major web site out there has some kind of monitoring going on, even if it's just a periodic PING, or loading the homepage every so often. While it is possible to take this way too far, don't be afraid of the concept in general. So what might some of the benefits of this monitoring/testing to you and your client?
Not even the best testing in the world can predict the odd things users will do, either intentionally or by sheer force of numbers (if a million monkeys on typewriters can write Hamlet, imagine what a few hundred click-happy users can do). Pinging a site can tell you if it's up, but not if a table is corrupted and a report is now failing, all because a user typed in a value with an umlaut in it.
While your site might perform great on the staging servers, maybe it will start to degrade over time. If you are monitoring the performance of those Selenium tests, you can stay ahead of slowness complaints. Of course, as you mentioned, be sure your monitoring isn't causing problems either! You may have to convince your client that certain tests are appropriate to run every X minutes, and others should only be run once a day, at 3am.
If you end up making an emergency change to the live site, you'll be more confident knowing that tests are running to make sure everything is ok.
I have worked on similar production servers for a long time. From my experience, I can say that it is always better to test your changes/patches in the staging environment and then just deploy them to the production servers. This is because the staging and production environments are alike, except for the volume of data.
If really required, it is OK to run a few tests on the production servers once the code/patch is installed. But it is not recommended/good practice to always run the tests on the production server.
My suggestion would be to shadow the production database down to a staging/test environment on a nightly basis and run your unit tests there nightly. The approach suggested by the client would be good for making sure that new data introduced into the system doesn't cause exceptions, but I do not agree with doing this in production.
Running it in a staging environment would give you the ability to evaluate features as new data flows into the system without using the production environment as a test bed.
[edit] To make sure the site is up, you could write a simple program that pings it every 10 minutes rather than running your whole test suite against it.
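A sketch of that kind of ping program in Python (standard library only; the URL and interval are placeholders, and alert_team is a hypothetical notifier):

```python
import time
from urllib.error import URLError
from urllib.request import urlopen

def check_site(url, timeout=10):
    """One lightweight GET instead of the full test suite; returns (ok, detail)."""
    try:
        start = time.perf_counter()
        status = urlopen(url, timeout=timeout).status
        return status == 200, f"HTTP {status} in {time.perf_counter() - start:.2f}s"
    except (URLError, OSError) as exc:
        return False, str(exc)

# while True:                                   # or run from cron every 10 minutes
#     ok, detail = check_site("https://example.com/")
#     if not ok:
#         alert_team(detail)                    # hypothetical notifier
#     time.sleep(600)
```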
What will change in the production environment that you would need automated tests for? I understand that you may need monitoring and alerts to make sure the servers are up and running.
Whatever the choice, whether it be a monitoring or testing type solution, the thing that you should be doing first and foremost for your client is warning them. As you have alluded to, testing in production is almost never a good idea. Once they are aware of the dangers and if there are no other logical choices, carefully construct very minimal tests. Apply them in layers and monitor them religiously to make sure that they aren't causing any problems to the app.
I agree with Peter that this sounds more like monitoring than testing. A small distinction but an important one I think. If the client's requirements relate to Service Level Agreements then their requests do not sound too outlandish.
Also, it may not be safe to assume that the site is functioning properly just because the service provider is not experiencing any issues. What if the site becomes swamped with requests? Or perhaps SQL that runs fine in test starts causing issues (timeouts, blocking, etc.) with a larger production database?
