I'm writing an API with Laravel 9 and I use AWS Secrets Manager to fetch secrets like the database password. I can't put it in the .env file, because the password rotates roughly every 4 hours, so I need to pull the new one from AWS each time. That means config:cache is not an option: rerunning config:cache in production deletes the bootstrap/cache/config.php file, which makes the database unavailable until the new file is created. That may only take a few seconds, but it is unacceptable for an API.
I tried to override the config:cache command so that it does not delete the old file, but instead writes a temp file, loads all the keys, and then replaces the old file by moving the new one into place. My plan was to run config:cache every hour via a cron job. But as long as the old file is not deleted, Laravel refuses to read the files in the config folder and keeps getting the config from the cache instead.
Is there a recommended way to hold such passwords/keys? It can be a totally different approach; I just need something that works and is not super hacky. Since we only use a remote database, storing the secrets in the database is not an option either, because that would add a round trip to the database on every call. Storing them locally would be ideal. It feels like everyone on the internet puts their keys in the .env, caches them at deployment, and is happy with it :D
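For reference, one runtime approach that sidesteps config:cache for this one value is to resolve the secret in a service provider and keep it in the local application cache until shortly before the next rotation. A rough sketch, assuming the aws/aws-sdk-php package; the secret id, region, cache TTL and JSON key are placeholders:

// app/Providers/AppServiceProvider.php (sketch, not production-ready)
namespace App\Providers;

use Aws\SecretsManager\SecretsManagerClient;
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    public function boot(): void
    {
        // Cache the secret locally so we don't call AWS on every request.
        $password = Cache::remember('db-password', now()->addMinutes(55), function () {
            $client = new SecretsManagerClient([
                'version' => 'latest',
                'region'  => 'eu-central-1',                 // placeholder region
            ]);
            $result = $client->getSecretValue(['SecretId' => 'my-app/db-password']); // placeholder id

            return json_decode($result['SecretString'], true)['password'];           // assumes a JSON secret
        });

        // Override the value from config/database.php for this request only.
        config(['database.connections.mysql.password' => $password]);
    }
}

Because the override happens at runtime, the rest of the config can still be cached at deployment; only the password is resolved per request, and it is served from the local cache most of the time.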
I am storing them in the DB.
Table name: dbo.setting
Example:
key | value | updated_at
OpenExchangeRatesApiKey | o2a..sg8 | 2023-01-01 00:00:00.000
SendgridApiKey | sas..7dr | 2020-01-01 00:00:00.000
Check also this:
Are there best Practices for saving API Keys & Secret into the Database?
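For reading such a table from Laravel without hitting the database on every call, a minimal cached helper could look like this (the Setting model, column names and TTL are illustrative):

use Illuminate\Support\Facades\Cache;

// Illustrative helper: cache each setting for a few minutes so repeated calls stay cheap.
function setting(string $key): ?string
{
    return Cache::remember("setting.$key", 300, function () use ($key) {
        return \App\Models\Setting::where('key', $key)->value('value');
    });
}

// Usage:
$apiKey = setting('SendgridApiKey');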
I want to use dynamic databases at runtime without affecting config/database.php, because of concurrent users.
I have a main db with a table that contains references to several other dbs. At runtime I not only need to connect to those dbs, but may also want to run migrations on them.
I am aware that this is possible by having a second connection entry in config/database.php, but I have a feeling that if two users hit the server at the same time, changing the physical config file may create a conflict.
I also read (and also experimented with the idea) that you can edit the second connection at runtime using the code below:
\Config::set('database.connections.mysql2.database', 'somedynamicdb');
DB::purge('mysql2');
But I fear that if it persists changes for different users, then it may conflict for concurrent users. And if it does not persist changes, then it won't work for migrations.
I want to understand/know two things specifically:
1. What is the scope of the code above (i.e. the Config::set() call)? Does it persist across different user requests to the server?
2. If I run migrations using Artisan::call('migrate') with a --database=connectionname option, right after I change the db name in connectionname, will that use the dynamically set database or the physical config value?
UPDATE
Also worth noting that a call to Artisan::call('migrate') with --database=connectionname will make the new connection persist for the rest of your app call.
See here for details:
https://github.com/laravel/framework/issues/28253
Config::set will only apply for the request for which it was set, won't apply to any other requests, and will not persist beyond the request. If you're not processing a request (e.g. a CLI command) then it won't affect anything beyond the current PHP process.
As for Item #2, if you're invoking from the command line, you can just do DB_CONNECTION=connectionname php artisan migrate. If you need to invoke the artisan command from code, using Config::set is still the right way to go.
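Put together, invoking it from code would look roughly like this (the mysql2 connection and database name are just the examples from the question):

use Illuminate\Support\Facades\Artisan;
use Illuminate\Support\Facades\Config;
use Illuminate\Support\Facades\DB;

Config::set('database.connections.mysql2.database', 'somedynamicdb');
DB::purge('mysql2');          // drop any cached connection so the new database name is picked up

Artisan::call('migrate', [
    '--database' => 'mysql2', // uses the config set at runtime, not the value on disk
    '--force'    => true,     // needed to run migrations in production without a prompt
]);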
We use connections created on the fly here all the time and it works very well. We set this up in a middleware that runs after authentication, so it is only valid for the current user's request, based on their login information.
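A stripped-down sketch of that kind of middleware might look like the following; the tenant_db attribute and the mysql2 connection name are assumptions made for the example:

// app/Http/Middleware/SetTenantDatabase.php (sketch)
namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Config;
use Illuminate\Support\Facades\DB;

class SetTenantDatabase
{
    public function handle(Request $request, Closure $next)
    {
        // Assumes the authenticated user record knows which database it belongs to.
        $database = $request->user()->tenant_db;

        Config::set('database.connections.mysql2.database', $database);
        DB::purge('mysql2'); // make sure the next query opens a connection to the new database

        return $next($request);
    }
}

Since Config::set only lives for the current request (see the answer above), concurrent users never see each other's connection settings.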
I have looked everywhere but cannot seem to figure out how to set up Cloud Code on the open-source Parse Server using Heroku.
I see this link which tells me what to put in the Index.js and Main.js files: Implementing Cloud Code on Open Source Parse Server. However, I cannot seem to find those files. Nor can I find the "cloud" folder.
How do I find the cloud folder?
I created the Parse Server on MongoDB using the "Deploy to Heroku" link on this page: https://github.com/ParsePlatform/parse-server-example. After creating my application by filling out all the information, I ran the command heroku git:clone -a yourAppName to clone the application files. However, when I use the command I get an empty repository and the following message in my terminal:
Cloning into 'hyv3-moja'...
warning: You appear to have cloned an empty repository.
So, how/where do I find the cloud folder with main.js? Did I miss any step in creating the Parse Server?
I also tried using the Parse Command Line. However, when I try to use the parse new command, it requires me to log in to a Parse account. Since Parse is shutting down, they are not accepting new accounts, and I did not have an account before. Regardless, this seems like a dead end.
So can someone please explain to me how to set up Cloud Code? I want to write code that decrements a column in the database every second so it operates like a timer. Basically, I want my application to create objects in the database that last a certain amount of time chosen by the user. For this example, I'll say 24 hours. So from the moment an object is created, I want to count down those 24 hours in the database. That way, when a user of my application clicks to view the object, I translate the time remaining from the database and output that value to show how much time is left in the life of the object.
Consider this scenario. In a load-balanced environment, I have 3 separate instances of a CMS running on 3 different physical servers. These 3 separate running instances of the application are sharing the same database.
On each server, the CMS has a /media folder where all media subfolders and files reside. My question is how I'd implement/code a file replication service/functionality in Golang, so when a subfolder or file is added/changed/deleted on one of the servers, it'll get copied/replicated/deleted on all other servers?
What packages would I need to look in to, or perhaps you have a small code snippet to help me get started? That would be awesome.
Edit:
This question has been marked as "duplicate", but it is not. It is however an alternative to setting up a shared network file system. I'm thinking that keeping a copy of the same file on all servers, synchronizing and keeping them updated might be better than sharing them.
You probably shouldn't do this. Use a distributed file system, object storage (a la S3 or GCS) or a syncing program like btsync or syncthing.
If you still want to do this yourself, it will be challenging. You are basically building a distributed database and they are difficult to get right.
At first blush you could check out something like etcd or raft, but unfortunately etcd doesn't work well with large files.
You could, on upload, also copy the file to every other server using ssh. But then what happens when a server goes down? Or what happens when two people update the same file at the same time?
Maybe you could design it such that every file gets a unique id (perhaps based on the hash of its contents so you can safely dedupe) and those files can never be updated or deleted, only added. That would solve the simultaneous update problem, but you'd still have the downtime problem.
One approach would be for each server to maintain an append-only version log when a file is added:
VERSION | FILE HASH
1 | abcd123
2 | efgh456
3 | ijkl789
With that you can pull every file from a server and a single number would be sufficient to know when a file is added. (For example if you think Server A is on version 5, and you get informed it is now on version 7, you know you need to sync 2 files)
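In Go, the core of that append-only log is small. Here is a rough sketch; the package, type and method names are made up for illustration:

// versionlog.go — minimal append-only version log (illustrative only)
package filesync

import (
    "crypto/sha256"
    "encoding/hex"
    "sync"
)

type Entry struct {
    Version  uint64
    FileHash string
}

type VersionLog struct {
    mu      sync.Mutex
    entries []Entry
}

// Add appends a new file's content hash and returns the version it was assigned.
func (l *VersionLog) Add(contents []byte) uint64 {
    l.mu.Lock()
    defer l.mu.Unlock()
    sum := sha256.Sum256(contents)
    version := uint64(len(l.entries)) + 1
    l.entries = append(l.entries, Entry{Version: version, FileHash: hex.EncodeToString(sum[:])})
    return version
}

// Since returns every entry newer than the version a remote server last reported,
// i.e. the files that still need to be synced to it.
func (l *VersionLog) Since(version uint64) []Entry {
    l.mu.Lock()
    defer l.mu.Unlock()
    if version >= uint64(len(l.entries)) {
        return nil
    }
    return append([]Entry(nil), l.entries[version:]...)
}

A real implementation would also need to persist the log and record the file's path or id next to its hash.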
You could do this with a database table:
ID | LOCAL_SERVER_ID | REMOTE_SERVER_ID | VERSION | FILE HASH
Which you could periodically poll and do your syncing via ssh or http between machines. If a server was down you could just retry until it works.
Or if you didn't want to have a centralized database for this you could use a library like memberlist. The local metadata for each node could be its version.
Either way there will be some amount of delay between when a file is uploaded to a single server and when it's available on all of them. Handling that well is hard, which is why you probably shouldn't do this.
I know this question has been asked quite a few times, but nothing seems to be working for me...
I really need help. I have built my entire website on my localhost, but how do I get it up and running live? I've tried everything :( I've copied all of my files onto the live server and looked at endless tutorials, but nothing's working. Can you maybe do a video about this or tell me what to do? I really don't want to start ALL OVER on creating all the pages and static blocks and so on.
You just have to change the URL in the database table. Run the query
SELECT *
FROM `core_config_data`
WHERE `value` LIKE 'http://%';
and change the URL from localhost to your live server URL (see the example below). Hopefully that'll work. Thanks
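Concretely, on a standard Magento 1 install the rows to change are usually web/unsecure/base_url and web/secure/base_url, so the update would look something like this (with your own domain, of course):

UPDATE `core_config_data`
SET `value` = 'http://my-magento-domain.com/'
WHERE `path` IN ('web/unsecure/base_url', 'web/secure/base_url');

Afterwards, clear the var/cache folder so Magento picks up the new URLs.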
I guess that you have already created many blocks and CMS pages and done a lot of customization in the backend on your local system. You have to do the following:
1. Copy your Magento code completely to the server.
2. On the server, delete app/etc/local.xml.
3. Create an empty MySQL database on the server.
4. Back up your local database.
5. Import that local database into the database on the server (the one you created in step 3).
6. Run the Magento site via the browser.
Because you deleted the local.xml file, Magento will start the installation process and ask you for the database parameters on the server (here, enter the data you used when creating the db in step 3: db name, username, password, ...). And that's it. Magento will make the connection with that db and you will have everything you had locally.
One more thing which I forgot:
you have to change one field in the database (you will change this in the live database after importing the local database, that means after step 5). There should be a table core_config_data. Search in that table, and wherever you find your local URL, like:
http://localhost/magento/
or something like that, you should change it to your real domain, for example to:
http://my-magento-domain.com/
What is the best way to save data in session variables in a classic ASP web site?
I am maintaining a classic ASP web site and want to be able to allow my users to demo all functionality of the site, which means allowing them to delete records.
The closest example I have seen so far is the demos of the Telerik controls, where they save the dataset in session on first load and allow the user to manipulate the data.
How can I achieve the same in ASP with an MS Access backend?
If you want to persist the state over multiple pages (e.g. to demo your complete application) then it's a bit tricky.
I would suggest copying the MDB file for each session and using the copied version. This would ensure that every session uses its own data.
create a version of your access db which will be used as a fresh template for each user
on session start, copy the template and name it after the user's session ID
use the individual MDB
Note: The only drawback I can see here is that you need to remove the unused MDB files, as they can pile up after some time. You could do it with a scheduled task or even on session start, before you create a new one.
I am not sure what you can use to check whether a copy is still in use, but the file's creation date or maybe the .ldb lock file can help you as well (if it does not exist = unused).
You can store a connection or even an object in a session variable, as long as you remember what kind of variable you are storing when you retrieve it. I have never stored a dataset in a session variable, but I have stored a lot of arrays in session variables, so you can use the ADO GetRows method to load a complete recordset into a session variable.
How big is the Access database? If your database is small enough (relative to the server capacity, expected number of users, and so forth) then I like the idea of using a fresh copy of the database for each user that runs the demo.
With this approach, you simplify your possible code paths. Otherwise this "are we in demo mode or not?" logic will permeate a heck of a lot of your code.
I'd do it like this...
When the user begins the demo, make a copy of the Access DB for that user to use. If your db is foo.mdb, copy it to /tempdb/foo_1234567890.mdb where 1234567890 is the user's session ID.
Alter the user's connection string to point to the fresh database copy (a rough sketch of steps 1 and 2 follows after this list). From this point on, your app can operate like "normal" with no further modifications.
Have a scheduled task that deletes all files in /tempdb with last-modified times more than __ hours in the past. If you don't have the ability to schedule tasks on the server (perhaps you're in a shared hosting environment, etc) then you could do this at the same time you do step #1.
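Putting steps 1 and 2 together in classic ASP, a global.asa sketch could look like the one below; the template path, the /tempdb folder and the ConnString session key are just examples:

<script language="VBScript" runat="Server">
' Give every new session its own copy of the template MDB.
Sub Session_OnStart
    Dim fso, src, dst
    Set fso = Server.CreateObject("Scripting.FileSystemObject")
    src = Server.MapPath("/data/foo.mdb")                             ' template database (example path)
    dst = Server.MapPath("/tempdb/foo_" & Session.SessionID & ".mdb") ' per-session copy
    If Not fso.FileExists(dst) Then fso.CopyFile src, dst
    ' Store the connection string so every page in this session uses its own copy.
    Session("ConnString") = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & dst
End Sub
</script>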