Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed last year.
The community reviewed whether to reopen this question last year and left it closed:
Original close reason(s) were not resolved
Please share a bash script to run S3 copy commands. I have already tried a separate script that calls the rclone command for each agency to back up EC2 Windows Server data to S3.
The command below syncs the current directory to an S3 bucket using a named profile.
aws s3 sync . $S3_BUCKET_URL --profile $YOUR_PROFILE
The next command syncs the S3 bucket to the current directory using a named profile.
aws s3 sync $S3_BUCKET_URL . --profile $YOUR_PROFILE
The next command copies a file (file.txt) from your machine to S3 using the default profile of your machine.
aws s3 cp file.txt s3://my-bucket/
Of course, you need the AWS CLI and an AWS credential pair (access key ID & secret access key) to make this work from your machine or on-premises network. If you want to copy from EC2 to S3, you can assign an IAM role that possesses permissions to read/write objects in the bucket to that EC2 instance, and you should be good to go.
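Putting the pieces together, a minimal per-agency backup script could look like the sketch below. The bucket name, local directory layout, and agency list are all assumptions you would replace with your own; the leading `echo` keeps every call a dry run until you remove it.

```shell
#!/usr/bin/env bash
set -eu

S3_BUCKET="s3://my-backup-bucket"            # assumed bucket name
LOCAL_ROOT="${LOCAL_ROOT:-/data/agencies}"   # assumed layout: one folder per agency
AGENCIES="agency-a agency-b agency-c"        # assumed agency list

for agency in $AGENCIES; do
  # Dry run: remove the leading 'echo' once paths and profile are correct.
  echo aws s3 sync "$LOCAL_ROOT/$agency" "$S3_BUCKET/$agency" --profile "${AWS_PROFILE:-default}"
done
```

Run it from cron or Windows Task Scheduler (via WSL or Git Bash) to get periodic backups per agency.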
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
I have a Discord bot that saves JSON files in the directory it runs from, so it can work on more than one server without collisions.
I finished my code and uploaded it to Heroku for hosting. The thing is, when I ran the code locally I could see the files being created for each server, but now I don't know how to reach them.
Is there a way to check all the files I have on Heroku without downloading everything?
You can install Heroku CLI and then access your files using:
heroku login
heroku run bash -a APPNAME
But be aware that Heroku uses an ephemeral filesystem. This means that your local filesystem is only accessible to a single dyno, and once the dyno is stopped, restarted, or moved, all files on the local filesystem are destroyed.
You could use a service like Amazon S3 to store your files in a more permanent way.
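Putting the two together for a bot whose state lives in JSON files, one minimal sketch is to push the state directory to S3 periodically and restore it at boot. The bucket name and data directory below are assumptions, and the leading `echo` keeps both commands a dry run until you remove it.

```shell
#!/usr/bin/env bash
set -eu

BUCKET="s3://my-bot-state-bucket"  # assumed bucket name
DATA_DIR="/app/data"               # assumed location of the bot's JSON files

# Dry run: remove the leading 'echo' on each line to execute.
echo aws s3 sync "$DATA_DIR" "$BUCKET/data"   # push state (e.g. on a schedule)
echo aws s3 sync "$BUCKET/data" "$DATA_DIR"   # restore state when the dyno boots
```

The restore command would go in your startup sequence before the bot connects, and the push command could run from the Heroku Scheduler add-on.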
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
I have a list of zip files stored in an S3 bucket that I would like to access from my web app which runs on an IIS server, outside AWS (on-prem). What would be the best way to handle the AWS credentials (I have a total of 4 environments - 4 IIS servers I need to maintain).
I know that I can manually set up an AWS credential profile (using an IAM user) with the AWS CLI on each environment, but I was wondering if there is a better way to handle the AWS credentials stored on the servers. For example, our organization has an IAM user policy that expires the credentials every 90 days. That means some maintenance overhead: I would have to update the IAM user credentials on all the machines every 90 days.
Is there a way to automate the above process so that the IAM user credentials are updated automatically?
You would need to create such a process yourself. There is nothing within AWS that can update credentials stored on your own computers.
IAM Users can have two active credentials, so the process would be:
Generate second set of credentials on the IAM User
Update credentials stored in your applications
Disable/delete the first set of credentials from the IAM User
This gives some time for applications to update to the new credentials while the old credentials are still valid. These steps can be automated via API calls to AWS, but your own application would need to initiate the steps and update the credentials on your servers.
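The three rotation steps above might be scripted like this. The user name and the old access key ID are placeholders, and the leading `echo` keeps each call a dry run until you remove it.

```shell
#!/usr/bin/env bash
set -eu

IAM_USER="app-user"        # placeholder IAM user name
OLD_KEY_ID="AKIAOLDKEYID"  # placeholder: the access key being retired

# Dry run: remove the leading 'echo' on each line to execute.
# 1. Create a second access key while the first is still valid.
echo aws iam create-access-key --user-name "$IAM_USER"
# 2. Distribute the new key to your servers (your own deployment step goes here).
# 3. Deactivate, then delete, the old key once everything has switched over.
echo aws iam update-access-key --user-name "$IAM_USER" --access-key-id "$OLD_KEY_ID" --status Inactive
echo aws iam delete-access-key --user-name "$IAM_USER" --access-key-id "$OLD_KEY_ID"
```

Deactivating before deleting gives you a cheap rollback: if an application was missed, reactivating the old key restores service while you finish the rollout.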
Closed 6 years ago.
I am attempting to migrate my server from Parse.com over to Heroku with their one click migration. Their documentation says that Parse Server supports "file" type, but I can't find any documentation on transferring these files so Heroku can access them.
This isn't an answer, but I've been having the same issue/dilemma and have partial information that might be helpful in eventually finding an answer. I did a migration and took a look at some of the stuff going on.
For example, photoObj.get('file').url();
On Parse Hosting: files point to the following:
http://files.parsetfss.com/parseFileKey/fileName.ext
This is stored on Amazon S3. Basically this points to:
https://s3.amazonaws.com/files.parsetfss.com/parseFileKey/fileName.ext
After migrating to Heroku/MongoLab, photoObj.get('file').url() points to the following:
http://files.parsetfss.com/newHostFileKey/fileName.ext
newHostFileKey is something we designate in the parse-server setup and seems to be automatically generated via this setting.
I don't see any evidence so far that the migration tool moves files from Parse Hosting to the new host/db.
File uploading to the new host works fine. On the new host, if one generates a new file it ends up pointing to something like this:
http://newHostURL/parse/files/appID/fileName.ext
parse is whatever you designate at the startup of your parse-server like app.use('/parse', api);
appID is whatever you designate at the startup of your parse-server like
var api = new ParseServer({
appId: 'appID',
fileKey: 'newHostFileKey'
});
Changing the URL of a Parse-hosted file to fit the new host pattern doesn't yield anything (file not found, etc.).
I have no idea how the new files are being stored or where the URL routes to.
With new files that are uploaded via the new host, I notice that some new collections are created in the MongoLab DB: fs.chunks and fs.files (MongoDB's GridFS).
fs.chunks is where the data of the file is being stored (I think), so under the new Heroku/MongoLab setup, files seem to reside in the DB itself.
As for the best way to migrate images from Parse hosting to the new hosting, I have no idea, and I'm not sure a straightforward answer is publicly available at this point.
Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I cannot retrieve the password for my EC2 instance running Windows Server 2008. I want to keep this particular instance, and Get Windows Password does not work.
Is there any workaround for retrieving the password?
Your server was most probably launched from a custom AMI, and that is the reason you cannot retrieve your password; it has happened to me in the past. In short, what you have to do is create an image of the existing instance and then launch a new instance using that image:
create an image of the instance with ec2-create-image (or the console's "Create Image" action)
launch a new instance from that image, specifying a key pair you control
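With the modern AWS CLI, the image-and-relaunch steps might be sketched as below. The instance ID, AMI ID, and key name are all placeholders, and the leading `echo` keeps each call a dry run. Note that on Windows, `get-password-data` only returns a password if the launch agent (EC2Config/EC2Launch) was configured to generate one for the new instance.

```shell
#!/usr/bin/env bash
set -eu

INSTANCE_ID="i-0123456789abcdef0"  # placeholder: the instance you cannot log into
KEY_NAME="my-key"                  # placeholder: a key pair whose .pem you still have

# Dry run: remove the leading 'echo' on each line to execute.
# 1. Image the stuck instance (this reboots it unless you pass --no-reboot).
echo aws ec2 create-image --instance-id "$INSTANCE_ID" --name "recovery-ami"
# 2. Launch a new instance from the resulting AMI with your own key pair.
echo aws ec2 run-instances --image-id ami-PLACEHOLDER --key-name "$KEY_NAME"
# 3. Retrieve and decrypt the new instance's Windows password.
echo aws ec2 get-password-data --instance-id i-NEW-PLACEHOLDER --priv-launch-key "$KEY_NAME.pem"
```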
Closed 7 years ago.
I just found the wonderful ElasticFox, a Firefox plugin that makes working with Amazon EC2 much more enjoyable. Is there a similar tool for Amazon RDS?
Or, rather, what is the best/easiest tool to work with RDS?
I have been using MySQL Workbench (http://www.mysql.com/products/workbench/) with RDS and it works great. It is very easy to create and save a new database service instance: click "New Server Instance" under "Server Administration" and follow the prompts. You will need to enter the information provided on the AWS RDS page for that instance (for example, its endpoint).
NOTE: In order for you to actually connect, you MUST add your IP address in the "DB Security Groups." The link is in the left-hand column, which is titled "Navigation." I use the "CIDR/IP" option (the other is EC2 Security Group). Make sure to include a /## after the IP, such as the /32 they use in the example. In a few seconds, the IP address should be authorized.
After the new security group has been authorized, the "DB Security Groups" of the DB instance running MySQL needs to be updated to include this newly created security group. After this update, the "DB Security Groups" should show at least two 'active' security groups: one that was already present, and the one newly created in the previous step.
After that, go back to MySQL Workbench and complete the New Server Instance creation process.
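The CIDR/IP authorization step above can also be scripted. The sketch below uses the legacy (pre-VPC) DB security group command; the group name and CIDR are placeholders, and the leading `echo` keeps it a dry run.

```shell
#!/usr/bin/env bash
set -eu

DB_SG="my-db-security-group"  # placeholder: the DB security group name
CIDR="203.0.113.5/32"         # placeholder: your workstation's IP with a /32 mask

# Dry run: remove the leading 'echo' to execute.
echo aws rds authorize-db-security-group-ingress \
  --db-security-group-name "$DB_SG" \
  --cidrip "$CIDR"
```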
I'd say the AWS Console and RDS CLI along with MySQL client itself are totally sufficient.
Anything particular you are looking for?
The AWS Console is good enough to monitor and configure RDS. However, we can't change some parameters from the AWS Console (such as MySQL configuration parameters); in that case you have to use the RDS Command Line Tools.
Still, if you don't want to mess with the command-line APIs, you can use a cloud management system (there are free editions) as a GUI tool, such as RightScale.
Here is a post showing how third-party GUI tools can be used to work with Amazon RDS.
Try DBHawk from Datasparc. It can connect to cloud databases such as Amazon RDS and MS Azure.