So I'm essentially using a column in my Parse Server DB as configuration for my app. I now realize Parse has a Config feature specifically for this.
My question is: which is faster/better to use, and why? Hitting a query every single time I want to check, or checking the Parse Config? Isn't checking the config variable just another query?
If anyone can provide proper/better/any documentation...
ParseConfig is faster because it stores the config in the cache, while a ParseQuery will retrieve it from the DB. But you cannot use ParseConfig for every query operation, only for things that are relevant to all of your users, like a "Message of the day" and so on.
Another very cool thing about ParseConfig is that if you don't have network connectivity, it will automatically fall back to the client cache.
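For a purely illustrative comparison (the "Settings" class and the config key are made up), here's what the two approaches look like with the Parse PHP SDK; the mobile/JS client SDKs are the ones that add the caching and offline fallback described above:

```php
<?php
use Parse\ParseConfig;
use Parse\ParseQuery;

// Option 1: read an app-wide value from Parse Config.
$config = new ParseConfig();                    // fetches the current config
$motd   = $config->get('message_of_the_day');   // hypothetical key

// Option 2: query a class/column you maintain yourself ("Settings" is a made-up class).
$query    = new ParseQuery('Settings');
$settings = $query->first();                    // hits the database every time
$motd     = $settings->get('message_of_the_day');
```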
Related: when I googled this question, it seemed to have been asked, and partially (and poorly) answered, a number of times, mostly for older versions.
Question: How can I download a CSV to a local file, with the below constraints? I'm designing in Spoon.
URL: Will always be the same: https://example.com/data/my.csv. The website prepares the CSV and provides it back to the web client as a file download after about 4-5 seconds. In a browser this means it is downloaded as a .csv, not displayed.
Authentication: The website does not require authentication for access. The data isn't sensitive.
Local file path: The downloaded CSV will overwrite the existing CSV, e.g. d:\data\my.csv. That way I can set this on a timer and have it download the newest CSV every hour or so.
Proxy: It is quite likely I will need to traverse a network proxy, e.g. badproxy.mynetwork.internal:8080, and that proxy requires a username and password. It's far better if I can set this password in a single location so anything created in the future can reference it. I'm not really sure how to approach this either.
The rest of my process focuses on addressing the content of the csv, and already works fine.
The processes I've found on Google show using the HTTP Client step, though it's not particularly straightforward how this translates into a file being saved locally to a known location.
Thanks for any pointers.
PDI v9.0.0.0-423
The HTTP client step needs to be triggered. Use a Row generator step generating e.g. 1 empty row and link that with a hop to the HTTP client step.
For your solution, try this:
Data Grid --> HTTP Client --> CSV File Input --> Text File Output (with a .csv extension)
I'm very new to Datadog and need some help. I have crafted 2 SQL queries (one for an on-prem database and one for a cloud database) and I would like to run those queries through Datadog, display the query results, and validate that the daily results fall within an expected variance between the two systems.
I have already set up Datadog on the cloud environment and believe I should use DogStatsD to create a custom metric, but I am pretty lost as to how I can incorporate my SQL queries in the code to create the metric for eventual display on a dashboard. Any help will be greatly appreciated!
You probably want to use the MySQL integration and configure the 'custom queries' option: https://docs.datadoghq.com/integrations/faq/how-to-collect-metrics-from-custom-mysql-queries
You can follow those instructions after you configure the base integration: https://docs.datadoghq.com/integrations/mysql/#pagetitle (this will give you a lot of useful metrics in addition to the custom queries you want to run).
As you mentioned, DogStatsD is a library you can import into whatever script or application in order to submit metrics. But it really isn't common practice to modify the underlying code of your database. So instead it makes more sense to externally run a query on the database, take those results, and send them to Datadog. You could totally write a Python script or something to do this. However, the Datadog Agent already has this capability built in, so it's probably easier to just use that.
I am also just assuming SQL refers to MySQL; there are other integrations for things like SQL Server, PostgreSQL, and pretty much every implementation of SQL. The same pattern applies: you configure the integration, then add an extra section to the config file where you have the check run your queries.
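For a rough idea of what that looks like in the MySQL integration's conf.yaml (the query, metric name, and credentials here are just placeholders, not taken from your setup):

```yaml
instances:
  - host: localhost
    port: 3306
    username: datadog
    password: "<PASSWORD>"
    custom_queries:
      # Each result column is mapped, in order, to a metric or a tag.
      - query: SELECT COUNT(*) FROM my_table WHERE created_at >= CURDATE()
        columns:
          - name: myapp.daily_rows   # submitted as a custom metric
            type: gauge
        tags:
          - source:onprem
```

You would then graph the on-prem and cloud metrics side by side on a dashboard and alert when they diverge beyond your expected variance.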
I am working on a microservice (Spring Boot) that needs to store some static information that changes infrequently (once per quarter). The data (below) is about company reports and looks like this:
reportId#1: "frequency"="daily","to":"some email ids"
reportId#2: "frequency"="weekly", "to":"some emailids"
As you can see, an entry in the data is basically a report ID, and the associated attributes are the report frequency and the receivers' email IDs.
My question is: what is the best place to store this information? I have some thoughts, and here are my views.
a) A NoSQL DB like MongoDB seems to be a good option. I can create a collection, store the data there, and retrieve it once during app startup. But then I wondered whether creating a collection just to store this static info is a good choice.
b) Redis seems to be another good option. I can create a template for the above dataset and store it there. I can query Redis by reportId to retrieve the frequency and the recipient list.
c) Store it in a file on the classpath and load it at app startup. The downside is that I will have to redeploy the app with the updated file whenever the report listing changes. I believe externalizing this information to either Mongo or Redis is a better option.
d) The app is running in AWS, so I could even store this in a file in an S3 bucket.
I would like to know your views.
Since the config will only change once a quarter, the overhead of a database is not required. You should consider Apache Commons Configuration. It will allow you to load config changes from files without the need for an application restart.
http://commons.apache.org/proper/commons-configuration///userguide/howto_reloading.html
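For illustration, the report data from the question could be kept in a properties file like the one below (keys and addresses are made up); Commons Configuration's reloading support, per the link above, can then pick up edits to this file without an application restart:

```properties
# reports.properties - hypothetical layout for the report configuration
report.1.frequency=daily
report.1.to=team-alpha@example.com,oncall@example.com
report.2.frequency=weekly
report.2.to=managers@example.com
```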
I want to use dynamic databases at runtime without affecting config/database.php, because of concurrent users.
I have a main DB with a table that contains references to several other DBs. Now at runtime I not only need to connect to those DBs but may also want to run migrations on them.
I am aware that this is possible by having a second connection entry in config/database.php's connections array, but I have a feeling that if two users hit the server at the same time, physical config file changes may create a conflict.
I also read (and experimented) that you can edit the second connection at runtime using the code below:
\Config::set('database.connections.mysql2.database', 'somedynamicdb');
DB::purge('mysql2');
But I fear that if it persists the changes for different users, it may conflict for concurrent users. And if it does not persist the changes, then it won't work for migrations.
I want to understand/know two things specifically:
What is the scope of the above code (i.e. the Config::set() call)? Does it persist across different user calls to the server?
If I run migrations using Artisan::call('migrate') with a --database=connectionname option right after I change the DB name in connectionname, will that use the dynamically set database or the physical config value?
UPDATE
It's also worth noting that a call to Artisan::call('migrate') with --database=connectionname will make the new connection persist for the rest of your app call.
See here for details:
https://github.com/laravel/framework/issues/28253
Config::set will only apply for the request for which it was set, won't apply to any other requests, and will not persist beyond the request. If you're not processing a request (e.g. a CLI command) then it won't affect anything beyond the current PHP process.
As for Item #2, if you're invoking from the command line, you can just do DB_CONNECTION=connectionname php artisan migrate. If you need to invoke the artisan command from code, using Config::set is still the right way to go.
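Putting the pieces together, a minimal sketch of the in-code approach might look like this (the connection name mysql2 and database name come from the question; --force is only needed when running in production):

```php
<?php
use Illuminate\Support\Facades\Artisan;
use Illuminate\Support\Facades\Config;
use Illuminate\Support\Facades\DB;

// Point the secondary connection at the dynamic database for this request only.
Config::set('database.connections.mysql2.database', 'somedynamicdb');

// Drop any cached PDO instance so the next query reconnects with the new database name.
DB::purge('mysql2');

// Run migrations against the dynamically configured connection.
Artisan::call('migrate', [
    '--database' => 'mysql2',
    '--force'    => true, // skip the interactive confirmation outside local environments
]);
```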
We use connections created on the fly here all the time and it works very well. We set this up in middleware that runs after authentication, so it is only valid for the current user's request, based on their login information.
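A rough sketch of that middleware approach, assuming the authenticated user record carries the name of its database (the database_name property, class name, and mysql2 connection are assumptions for illustration):

```php
<?php
namespace App\Http\Middleware;

use Closure;
use Illuminate\Support\Facades\Config;
use Illuminate\Support\Facades\DB;

class SetTenantDatabase
{
    public function handle($request, Closure $next)
    {
        // Point the secondary connection at the database tied to the logged-in user.
        Config::set('database.connections.mysql2.database', $request->user()->database_name);

        // Clear any cached connection so subsequent queries use the new database.
        DB::purge('mysql2');

        return $next($request);
    }
}
```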
I'm seeing an error with some Laravel code that uses an AWS RDS database. The code writes a record to the database and then immediately does a search to load that record using the primary key and gets no results.
If I try it manually afterwards I find the record. If I insert a 1-second sleep in the code it works correctly.
I've tried this using Laravel's separate settings for read and write hosts. I've also tried setting them to the same host and only using one host. The result is always the same. However, other environments with the same configuration do not have the error.
Is there an option in RDS that needs to be changed to have the record available immediately after it's written?
The error is due to MySQL master-slave replication lag.
A common mistake is to use a MySQL cluster and then perform a read immediately after a write.
Since the read occurs on one of the slave/read hosts and the write occurs on the master, the data will not yet have been replicated at the time of the read.
There are a couple of ways to rectify the error:
The read immediately after the write must be performed on the master (not the slave). Even though you've mentioned that you changed it to a single host, people often make a mistake while switching the connection. Refer to this SO post on how to properly switch connections in Laravel.
An easier way may be to use the sticky database option in Laravel. Beware: this may cause performance issues if not used carefully, so limit it to the use case you need. From the docs:
The sticky option is an optional value that can be used to allow the immediate reading of records that have been written to the database during the current request cycle.
If the sticky option is enabled and a "write" operation has been performed against the database during the current request cycle, any further "read" operations will use the "write" connection.
The most "non-obvious" way is to NOT perform a read immediately after a write. Think about whether this can be avoided depending on your use case.
Other methods: refer to this SO post.