Difference between certbot's dry run and staging options

Whenever I'm testing with certbot, I'm afraid of exceeding rate limits and thus getting my account throttled. So I use both the --dry-run and --staging options simultaneously.
This is shown in many other SO questions and tutorials - and since it works, I never worried about it.
But I'm sure there's a difference between them... what is it?

From the CLI docs, the --staging option:
--test-cert, --staging
Use the staging server to obtain or revoke test (invalid) certificates; equivalent to --server https://acme-staging-v02.api.letsencrypt.org/directory (default: False)
And the --dry-run option:
--dry-run
Perform a test run of the client, obtaining test (invalid) certificates but not saving them to disk. This can currently only be used with the 'certonly' and 'renew' subcommands. Note: Although --dry-run tries to avoid making any persistent changes on a system, it is not completely side-effect free: if used with webserver authenticator plugins like apache and nginx, it makes and then reverts temporary config changes in order to obtain test certificates, and reloads webservers to deploy and then roll back those changes. It also calls --pre-hook and --post-hook commands if they are defined because they may be necessary to accurately simulate renewal. --deploy-hook commands are not called. (default: False)
So, according to the docs: --staging only switches certbot to the staging server, whose rate limits are separate from (and far more generous than) production's, which is why it avoids throttling your real account. --dry-run is for verifying that your config works, without saving the result of the issue/renew request to disk.
The quoted docs don't spell it out, but --dry-run also defaults to the staging server, so combining the two flags is redundant (though harmless). And since staging has rate limits of its own, just much higher ones, I'd assume a dry run can in principle still exceed them.
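So for testing, something like the following should be enough on its own; since --dry-run already targets the staging endpoint, adding --staging/--test-cert on top of it changes nothing (the quoted docs restrict --dry-run to the certonly and renew subcommands):

certbot certonly --dry-run -d example.com
certbot renew --dry-run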

Related

Alter 'status' request interval of CloudBuild submit

I'm trying to set up CI/CD for a mono repository using Google Cloud Build. We have a single Cloud Build trigger that starts a build on a new commit; it does some general steps and then starts a build for every (micro)service in the mono repository using gcloud builds submit.
This however means that if 4 or 5 people push code to the repository at roughly the same time, we can have around 50-70 concurrent builds running in Cloud Build. That in itself isn't an issue for us. The only issue is that when this happens, the following error pops up:
{
  "code": 429,
  "message": "Quota exceeded for quota metric 'Build and Operation Get requests' and limit 'Build and Operation Get requests per minute' of service 'cloudbuild.googleapis.com' for consumer 'project_number:<PROJECT_NUMBER>'.",
  "status": "RESOURCE_EXHAUSTED",
  "details": [{
    "@type": "type.googleapis.com/google.rpc.ErrorInfo",
    "reason": "RATE_LIMIT_EXCEEDED",
    "domain": "googleapis.com",
    "metadata": {
      "service": "cloudbuild.googleapis.com",
      "consumer": "projects/<PROJECT_NUMBER>",
      "quota_limit": "GetRequestsPerMinutePerProject",
      "quota_metric": "cloudbuild.googleapis.com/get_requests"
    }
  }]
}
In other words: we are running into quota limits. The quota allows us to make only 900 operational requests per minute.
We already tried switching to private pools in the hope that the above quota limit was only there for when you don't use private pools, but this unfortunately still makes us hit the quota.
Now I am trying to find out whether I can decrease the number of these operational requests.
A possible solution might be related to how I am using gcloud builds submit. When you run gcloud builds submit, it starts a new build, waits for the build to finish, and shows the output of the build. To achieve this, I presume that gcloud makes a request every few seconds to find out what the status of the build is. I suspect that these 'status' requests are why my Cloud Build quota limit is reached, which is why I'm trying to see how I can lower the number of these requests per minute.
One option is to simply decrease the number of builds running in parallel, but that is unfortunately not possible in my situation: executing them sequentially takes more time than is acceptable.
Another option would be to increase the time between such 'status' requests. However, on this page I unfortunately did not find a CLI flag to alter this.
Note: I did find the --async flag, but that does NOT help me, since I still want the process to wait until the build has succeeded. I also found --suppress-logs, which does NOT help me either, since those requests presumably don't go to Cloud Build but to the GCS bucket where the logs are stored.
The only option left that I can think of is to start my builds with the --async flag and then manually poll for whether the build has succeeded, using a longer interval; a rough sketch of what that could look like is shown below. However, that feels like a lot of manual work, for which I would need to write and maintain some bash scripts. This preferably isn't a path I would like to take unless really necessary.
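For illustration, the kind of poller I have in mind would look roughly like this (a hypothetical Python sketch; it assumes an authenticated gcloud on PATH and that the --format=json output of gcloud builds submit includes the build id):

import json
import subprocess
import time

def submit_async(source_dir):
    # Start the build without waiting for it; return its build id.
    out = subprocess.run(
        ["gcloud", "builds", "submit", source_dir, "--async", "--format=json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)["id"]

def wait_for_build(build_id, poll_seconds=30):
    # Poll on a long interval so N parallel builds stay under the
    # 'Get requests per minute' quota (roughly N * 60 / poll_seconds requests).
    while True:
        out = subprocess.run(
            ["gcloud", "builds", "describe", build_id, "--format=json"],
            check=True, capture_output=True, text=True,
        ).stdout
        status = json.loads(out)["status"]
        if status not in ("QUEUED", "WORKING"):
            return status  # e.g. SUCCESS, FAILURE, TIMEOUT, CANCELLED
        time.sleep(poll_seconds)

status = wait_for_build(submit_async("./service-a"))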
Does anyone know of another way of achieving this?
If 4 or 5 people push code to the repository
This shouldn't happen, because you should use the "push" trigger on the main branch, not on a development branch.
What do I mean by this?
I mean that building should occur on the main branch, which corresponds to the joint effort of those five users plus a responsible party in charge of unifying their changes.
So, really, your users should be pushing to the development branch, and pushes to main should be reserved for things that need to be built.
How can we work around this if we're only allowed one branch or are required to have updates visible on one branch?
My recommendation would be to use the tag filter, i.e. filter the pushes by tag, as mentioned in the documentation. That way, only pushes by the person in charge of merging the changes will be built (assuming that this person pushes the tag you've set).
TL;DR
Don't create push triggers for Cloud Build on a branch multiple people are working on. Either create the trigger with a tag filter or have separate development and main branches (people work on dev; builds are only made from pushes to main).
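For instance, creating a GitHub trigger that only fires on release tags could look roughly like this (hypothetical invocation; verify the exact flags against gcloud builds triggers create github --help):

gcloud builds triggers create github \
  --repo-owner=my-org --repo-name=my-repo \
  --tag-pattern="^release-.*$" \
  --build-config=cloudbuild.yaml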

How to implement synchronization of browser-based online games when users refresh their browser

In implementing a browser-based simple game involving multiple users, I have the server save the game state at certain sync points (not time-based but event-specific). I identify each state by an integer.
When a user refreshes his browser, the server provides the latest state and restores the content in the browser. However, in the few seconds while the browser is loading that content after the refresh, the state could change again. I do not know how to handle this situation, because sending the next state will just raise the same issue again.
I want a seamless refresh so none of the other players are impacted when one user refreshes his browser (or for that matter leaves and comes back).
The implementation language is not relevant. I use websockets to communicate between the browser and the server. The server is the intermediary for all communication between users (I am not using WebRTC data channels). What is the best way to sync the application content in multiple browsers?
This is indeed a programming-based question though no code is provided.
Forget the fact that your client exists in a browser. Let's just talk about replication.
The usual approach in databases is to separate snapshots from write-ahead logs (WAL). When you bring a new client up, you select a snapshot and transfer that. Then, when the client is ready, it asks for the WAL entries from that snapshot forward. The same mechanism is used after crashes: the last available snapshot is loaded, then the WAL is replayed, then the database comes up.
I would suggest the same strategy. This does require efficient storage of snapshots, some kind of log, and some kind of replay mechanism, which is a lot of easy-to-mess-up code. If you can use something existing, that would be good.
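To make that concrete, here is a minimal sketch of the snapshot-plus-log idea (hypothetical Python; the transport and the game-specific state shape are assumptions). Every change gets a sequence number; a joining or refreshing client receives the latest snapshot together with its sequence number, then applies only the logged events after that point, so nothing that happens while the browser is loading gets lost:

import threading

class GameState:
    """Authoritative server-side state with a sequence-numbered event log."""

    def __init__(self):
        self._lock = threading.Lock()
        self._seq = 0      # sequence number of the last applied event
        self._state = {}   # the game state itself (shape is game-specific)
        self._log = []     # (seq, event) pairs kept since the last snapshot

    def apply(self, event):
        # Apply one change, stamp it, and remember it for late joiners.
        with self._lock:
            self._seq += 1
            self._state[event["key"]] = event["value"]  # game-specific update
            self._log.append((self._seq, event))
            return self._seq

    def snapshot(self):
        # A consistent (seq, state) pair to send to a (re)joining client.
        with self._lock:
            return self._seq, dict(self._state)

    def events_since(self, seq):
        # Everything a client missed between its snapshot and "now".
        with self._lock:
            return [(s, e) for (s, e) in self._log if s > seq]

On refresh, the client fetches snapshot(), starts listening for live events over the websocket, asks for events_since(seq), and simply discards any live event whose sequence number it has already applied. That closes exactly the window the question describes.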
The first thing that I looked into was using Emscripten to compile Redis to JS, and then try to use Redis' built-in asynchronous replication to replicate to your browser. That may be possible, but the fact that Redis is single-threaded and wants to be a client-server is probably a showstopper.
The next best option that I found is https://isomorphic-git.org/. Here is how it could build what you need. You simply maintain your current state in a git repository, and keep a WAL-style log of everything you've done to it. When a client connects, it clones the repository. Once done, it connects to the websocket, reports which commit it is at, and you send it the log from that point forward. Locally, in the browser, it runs those git commands. If the client simply loses its connection and then rejoins, it can do a git pull and then follow the same strategy.
This will be a bunch of work for you. But a lot less work than implementing everything from scratch.

Cleanup database state in a beforeEach?

In Using after or afterEach hooks, it is recommended to clean up server/db state in beforeEach or before. I understand the rationale, but I believe the text lacks a real use case. Here is a use case that I don't know how to solve while following the best practice.
Imagine I'm testing my own clone of GitHub. To have a clean environment for my tests, I want Cypress to use a clean temporary user and a clean temporary repository. To avoid conflicts between multiple Cypress instances targeting the same server (e.g., multiple front-end developers testing their changes in parallel), there should be one user and one repository dedicated to each Cypress instance. This can be implemented by generating users and repositories with well-known random ids (e.g., temp-user-13432481 and temp-repo-134234). Cleaning up the mess in the database is then just a matter of removing everything prefixed temp-*.
The problem is when to clean up. If the clean up is done in a beforeEach() as is recommended, running a test in a Cypress instance will delete the data of other Cypress instances running in parallel.
Is there an obvious solution that I'm missing? How do people usually cleanup temporary testing data in a database?
The obvious answer would be to not run tests in a distributed manner against a single remote server (and instead run the DB server locally on each client), but since this is not an answer to your question, here are a few ideas:
1. Set up a cron job that will clean up old test repos/users at the end of each day.
If you only clean up users/repos that are older than e.g. several hours, you will avoid cleaning up resources that may still be used by running tests.
You must ensure that the ids are random and large enough (i.e. have enough entropy) that you won't run into collisions even if you don't clean them up for a while.
2. Make each client (i.e. the PC running the tests) use a fingerprint that you'll use to namespace the repo/user in the DB, and clean them up before each test run.
This way, each client will only clean up its own resources.
I'm leaning towards solution (1); a sketch is below.
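As an illustration of (1), a hypothetical cleanup job in Python (the table and column names are assumptions, as is storing created_at as a unix timestamp; sqlite3 is a stand-in for whatever DB client you use):

import sqlite3
import time

MAX_AGE_SECONDS = 6 * 3600  # only touch resources older than a few hours

def cleanup(conn):
    cutoff = time.time() - MAX_AGE_SECONDS
    # Old enough to be safe: running tests won't still be using these.
    conn.execute("DELETE FROM users WHERE id LIKE 'temp-user-%' AND created_at < ?", (cutoff,))
    conn.execute("DELETE FROM repos WHERE id LIKE 'temp-repo-%' AND created_at < ?", (cutoff,))
    conn.commit()

if __name__ == "__main__":
    cleanup(sqlite3.connect("app.db"))  # run this from a daily cron job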

Is this Braintree testing multi-purchase error something I should worry about?

I'm trying to figure out how to test with braintree, and I'm running into what feels like a bandwidth error.
response = ::Braintree::Customer.create(payment_method_nonce: Braintree::Test::Nonce::Transactable)
token = response.customer.credit_card.first.token
# so far so good
response = ::Braintree::Transaction.sale(payment_method_token: token, amount: "1.00")
# still good
response = ::Braintree::Transaction.sale(payment_method_token: token, amount: "1.00")
# this response is a failure
# => Braintree::ErrorResult ... status: "gateway_rejected"
All of that takes place without a pause.
If I wait a bit and run the sale line again, it works again.
This of course sets up a problem with test scripts. I can mock out the actual connection to Braintree, but I'm slightly worried about this. Should I be?
I work at Braintree. If you have more questions, you can always get in touch with our support team.
You can see what gateway_rejected means on the transaction statuses page of the API docs:
Gateway rejected
The gateway rejected the transaction because AVS, CVV, duplicate or fraud checks failed.
Transactions also have a gateway rejection reason, which in this case will be duplicate.
You can find more information about duplicate checking settings in the control panel docs:
Configure duplicate transaction checking
Duplicate transaction checking is enabled by default with a 30-second window in both the sandbox and production environments. These settings can be updated or disabled by users with Account Admin privileges.
1. Log into the Control Panel
2. Navigate to Settings > Processing > Duplicate Transaction Checking
3. Click Edit to adjust the time window, or Enable/Disable to turn the feature on/off
Looks like it may be a rate-limit error. Search their help/docs/site for information related to rate limiting so you know what the limits are and can work around them.
However, if you're talking about testing as in automated tests, I would recommend not using external services in your test suite, and mocking everything out. Ideally you want your test suite to be able to run even when the network connection is down, and you don't want it slowing down when 3rd-party services or your network are slow.
If you really want to do a full integration test with all your 3rd-party services, you can create a special set of tests annotated with something like "#external", and then schedule them to run once a week or so, just to flag weird changes or errors; a sketch of that approach follows.
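For instance, with pytest markers (a hypothetical Python sketch; the question's code is Ruby, where RSpec tags work the same way):

import pytest

@pytest.mark.external  # hits the real third-party sandbox; needs network
def test_full_purchase_flow():
    ...  # the actual integration steps go here

# Register the marker once in pytest.ini:
#   [pytest]
#   markers = external: tests that call real third-party services
#
# Day-to-day CI run skips them:  pytest -m "not external"
# Weekly scheduled job runs:     pytest -m external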

Is it possible for RoleEntryPoint.OnStart() to be run twice before the host machine is cleaned up?

I plan to insert some initialization code into the OnStart() method of my class derived from RoleEntryPoint. This code will make some permanent changes to the host machine, so if it runs a second time on the same machine, it will have to detect that those changes are already there and react appropriately, and this will require some extra code on my part.
Is it possible that OnStart() runs a second time before the host machine is cleaned up? Do I need this code to be able to run a second time on the same machine?
Is it possible that OnStart() runs a second time before the host machine is cleaned up?
Not sure how to interpret that.
As far as permanent changes go: any installed software, registry changes, and other modifications should be repeated with every boot. If you're writing files to local (non-durable) storage, you have a good chance of seeing those files the next time you boot, but there's no guarantee. If you are storing something in Windows Azure Storage (blobs, tables, queues) or SQL Azure, then your storage changes will persist through a reboot.
Even if you were guaranteed that local changes would persist through a reboot, these changes wouldn't be seen on additional instances if you scaled out to more VMs.
I think the official answer is that the role instance will not run its job more than once in each boot cycle.
However, I've seen a few MSDN articles that recommend you make startup tasks idempotent - e.g. http://msdn.microsoft.com/en-us/library/hh127476.aspx - so it's probably best to add some simple checks to your code that anticipate multiple executions.
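To illustrate the idempotency pattern (a hypothetical sketch in Python for brevity; a real RoleEntryPoint.OnStart() would be .NET, but the shape is the same: check for a marker before making the permanent changes):

import os

MARKER = r"C:\myapp\.machine-initialized"  # hypothetical marker-file path

def on_start():
    if os.path.exists(MARKER):
        return  # second run on the same machine: setup already applied
    apply_permanent_changes()  # installs, registry edits, etc. (one time)
    os.makedirs(os.path.dirname(MARKER), exist_ok=True)
    with open(MARKER, "w") as f:
        f.write("done")  # record completion only after setup succeeds

def apply_permanent_changes():
    ...  # the actual machine modifications go here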
