Problem Statement: I'm building a .NET Core-based REST API. It has unit tests and Postman tests. When I check code in to GitHub, I want a GitHub Actions workflow to run my unit tests and also run my Postman tests, which call the actual REST API. The REST API in turn calls the database to do its work.
As part of my GitHub Actions workflow definition, I think I will need to launch a container for my REST API and another container to host my database. The Postman tests then just hit the API container to exercise the various endpoints. Once testing completes, the containers go away.
Question: Is this approach viable/practical? How can I go about building this in GitHub Actions?
Tech stack: PostgreSQL, .NET 6, Postman.
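Roughly, I'm imagining a workflow along these lines (a sketch only; the image tag, project path, port, connection-string key, and collection file name are placeholders I've assumed, and newman is Postman's CLI runner):

    # .github/workflows/ci.yml -- sketch; names and paths are placeholders
    name: CI
    on: [push, pull_request]

    jobs:
      test:
        runs-on: ubuntu-latest

        # Throwaway PostgreSQL container, reachable from the job at localhost:5432
        services:
          postgres:
            image: postgres:14
            env:
              POSTGRES_USER: testuser
              POSTGRES_PASSWORD: testpass
              POSTGRES_DB: testdb
            ports:
              - 5432:5432
            options: >-
              --health-cmd "pg_isready -U testuser"
              --health-interval 10s
              --health-timeout 5s
              --health-retries 5

        steps:
          - uses: actions/checkout@v4

          - uses: actions/setup-dotnet@v4
            with:
              dotnet-version: '6.0.x'

          # 1. Unit tests
          - name: Run unit tests
            run: dotnet test

          # 2. Start the API in the background, pointed at the service container
          - name: Start the API
            env:
              ASPNETCORE_URLS: http://localhost:5000
              ConnectionStrings__Default: "Host=localhost;Port=5432;Database=testdb;Username=testuser;Password=testpass"
            run: |
              dotnet run --project src/MyApi > api.log 2>&1 &   # assumed project path
              sleep 10                                           # crude wait; polling a health endpoint is nicer

          # 3. Postman tests against the running API, via newman
          - name: Run Postman tests
            run: |
              npm install -g newman
              # assumes the collection reads the URL from a {{baseUrl}} variable
              newman run postman/MyApi.postman_collection.json --env-var baseUrl=http://localhost:5000

I realize the API itself may not strictly need its own container: running it directly on the runner next to a Postgres service container might be simpler, and everything is discarded when the job ends. If the two-container setup is preferred, the same workflow could instead docker build the API image and docker run it on the same Docker network as the database.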
Related
Our cloud-deployed app is composed of (simplified):
A web front-end
Some back-end services
A database
When developing the front-end, I can easily debug by running the front-end locally, and redirecting its back-end calls to the actual services, since their endpoint routes are public.
However, I also want to debug back-end service code, together with the front-end. I can run back-end services locally, but they can't access the database, since the database doesn't have any publicly-accessible endpoint.
Question: How can I conveniently develop the service code? I can think of these options:
Expose the database publicly, maybe just the dev env's database. This doesn't sound like a good practice security-wise, and I haven't found a way to do it in my cloud platform (CloudFoundry).
Test everything using local unit and component tests. We do this, but we can't cover everything, and certainly not the integration with the front-end.
Deploy my code changes to a dev environment, and test that way. This is what we do now, but:
It's a much slower development turn-around than running locally
I can't connect a debugger to the deployed app, so I must debug using logs, which again is slow
We have a limited number of dev environments, and this creates contention for them.
Somehow deploy a replica of the database locally as well, using some kind of test data (a sketch of what I have in mind follows below).
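For that last option, I imagine something like a throwaway Compose file; the Postgres image, credentials, and seed-data folder below are just placeholders for whatever engine and test data our real dev environment uses:

    # docker-compose.yml -- illustrative only
    services:
      devdb:
        image: postgres:14            # placeholder; match the engine the app actually expects
        environment:
          POSTGRES_USER: dev
          POSTGRES_PASSWORD: dev
          POSTGRES_DB: appdb
        ports:
          - "5432:5432"
        volumes:
          # any *.sql dropped in this folder is executed on first start, to load test data
          - ./testdata:/docker-entrypoint-initdb.d

The locally running back-end services would then point their connection string at localhost:5432 while debugging.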
Tech details: For cloud we use CloudFoundry over AWS. My back-end services are written in C# on .NET 5. Locally, we develop them using Visual Studio 2019 on Windows.
For now, I managed to expose my database locally using an SSH tunnel, specifically by running cf ssh [AppName] -L [local_port]:[db_hostname]:[port], where [db_hostname]:[port] comes from the app's configuration, which can be read by running cf env [AppName].
Assumptions
API tests are written in the same API code repository using GoLang
API tests are run as part of the Pull Request CI build.
An API test basically starts the gRPC server, makes a request from a Go-based gRPC client, and verifies the response
Flow
Dev starts building a new endpoint for an API
Dev does not write any API tests (note that I specifically mean API tests, not unit tests)
Dev creates a Pull Request for the changes
The Pull Request check should detect that no API tests were written for the new changes and block the PR
My question
How could we build a CI flow like this to achieve point 4?
I'm aware that code coverage is one way to go, but I'm not fully convinced it is the correct approach for API tests.
Is it possible to measure code coverage of the gRPC service based on such API tests? If yes, how?
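For context, since our API tests are ordinary go test tests that start the gRPC server in-process, I assume CI steps along these lines could work; the apitests/ directory, the main base branch, and the 70% threshold are assumptions on my part:

    # sketch; adjust paths, branch name, and threshold
    - name: Require API tests for the change
      run: |
        # Fail the PR if the diff against the base branch touches no API test files.
        # Assumes enough git history is fetched to compute a merge base (e.g. fetch-depth: 0).
        git fetch origin main
        if ! git diff --name-only origin/main...HEAD | grep -q '^apitests/.*_test\.go$'; then
          echo "No API tests added or changed in this PR"
          exit 1
        fi

    - name: Run API tests with coverage of the gRPC service
      run: |
        # -coverpkg attributes coverage to the service packages even though
        # the tests live in a separate apitests package
        go test ./apitests/... -coverpkg=./... -coverprofile=coverage.out
        go tool cover -func=coverage.out
        # Optional gate: fail if total statement coverage is below 70%
        total=$(go tool cover -func=coverage.out | awk '/^total:/ {sub("%", "", $3); print $3}')
        awk -v t="$total" 'BEGIN { if (t < 70.0) exit 1 }'

This would only prove that API tests exist and exercise the code, not that their assertions are meaningful, which is part of my doubt about coverage. (If the server were started as a separate binary rather than in-process, I understand Go 1.20+ binary coverage, built with -cover and collected via GOCOVERDIR, would be needed instead.)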
I am trying to create middleware between SAP and Salesforce. It is a .NET RESTful service.
When SAP needs data, it passes parameters to my .NET API, which in turn queries Salesforce for it, fetches the data, and returns it to SAP as XML.
My first step is to build a Web API and fetch sample data (about 10-20 rows) from a Salesforce developer account.
I referred to the Salesforce documentation on how to set up Remote Access for external applications, which uses OAuth. The Remote Access documentation page looks a bit obsolete.
I want to know whether I need to enable Remote Access in Salesforce first and expose endpoints for my .NET service to consume. Where can I find up-to-date Remote Access OAuth documentation for interacting with a .NET Web API?
The link I referred to is: Consuming Force.com SOAP and REST Web Services from .NET Applications
We have been writing our mobile app tests with Calabash/Xamarin.UITest for a while, using backdoor methods to redirect our app's base API URL to a mock HTTP server so that tests are repeatable without incurring unnecessary server costs.
Since Xamarin has announced it is phasing out Calabash (and Xamarin.UITest with it, as it relies on the Calabash server component), we have been migrating our test suite to Appium to comply with Xamarin's recommendations. Our simpler tests migrated easily, but a lot of our testing relies on that mock HTTP server with backdoor configuration, which Appium does not seem to support. That makes those tests impossible to port unless we recompile our app with the mock HTTP server address built in, which, quite frankly, sucks.
Does anyone know an alternative to this backdoor method? Or an alternative to the whole mock-HTTP-server approach altogether?
We want our tests to run in parallel in the Xamarin Test Cloud service, so standing up a QA/test environment for this is completely undesirable.
My company wants to apply TDD in our projects, and we began studying TDD 5 months ago. We started from writing unit tests up through acceptance tests (see http://uet.vnu.edu.vn/~chauttm/TDD/). Then we followed the book "Growing Object-Oriented Software, Guided by Tests" to do a pilot project.
But we have a problem with our test rig (the architecture for testing the end-to-end system):
https://docs.google.com/file/d/0B23s8xkJtB5ZNHBJbEZ3YTdMTWc/edit.
We have 3 teams: one develops the service side, one develops the Android client, and one develops the iOS client. Following the test rig above, the client teams write acceptance tests and insert data directly into the database. The service team creates an SQL file, and the client teams use this file to populate the database. The client teams do not know the whole database (our system has more than 200 tables), and they sometimes spend a lot of time debugging because they do not understand the service errors.
Can you suggest another test rig, or ways to make our TDD projects more effective?
The client teams should have a mock service layer that they write automated tests against. These tests have the advantage of running quickly and not requiring coordination with the service team. Most of the acceptance tests for the client application should be written this way. If you were writing an app that uses the Google Calendar API, you wouldn't try to recreate the entire Calendar API; you'd just mock it out the way you expect it to work.
For integration tests between the teams, you can run a copy of the production service on a separate server, backed by a copy of the production database loaded with some test data. For testing, configure the clients to use that test endpoint instead of production.