I have a vault and need to restore one of its folders. I have initiated the job using the AWS CLI and got the inventory as a JSON file, but I am unable to get the complete folder from the inventory. Can anyone help me restore the folder?
I am also able to get the inventory in CSV format and see the archive IDs of the files, but is it possible to retrieve the complete folder as it is, since it shows a separate archive ID for every file in the folder?
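This is the direction I have been trying so far: loop over the archive IDs in the inventory whose description matches the folder path and request retrieval for each one. It is only a sketch, assuming the vault is an S3 Glacier vault; the vault name, folder prefix, and inventory file name are just examples:

#!/bin/bash
# Sketch: request retrieval of every archive in the inventory whose description
# (the original file path) starts with the folder I want.
# VAULT, FOLDER, and inventory.json are placeholder examples.
VAULT="my-vault"
FOLDER="backups/photos/"

jq -r --arg p "$FOLDER" \
   '.ArchiveList[] | select((.ArchiveDescription // "") | startswith($p)) | .ArchiveId' \
   inventory.json |
while read -r archive_id; do
    aws glacier initiate-job \
        --account-id - \
        --vault-name "$VAULT" \
        --job-parameters "{\"Type\":\"archive-retrieval\",\"ArchiveId\":\"$archive_id\"}"
done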
I am trying to put a startup script for a VM in a Cloud Storage file. This Cloud Storage file will contain the pull-related commands.
So the first step is to get an SSH key. I generated one from Bitbucket, but when I went to add the SSH key to the VM metadata, I saw there was already an SSH key there in the metadata.
How can I use this metadata SSH key to pull the repo from Bitbucket? I want to write a shell script that pulls the code, put it in the Cloud Storage file, and then give this file to the VM as its startup script.
I am stuck on how to access the SSH key. I saw somewhere:
cat ~/.ssh/id_rsa.pub
I was guessing this file would show the keys, since I am able to see the SSH keys in the VM metadata, but it says the file is not found.
Am I looking in the wrong file?
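For reference, this is roughly the kind of startup script I am trying to end up with. It is only a sketch: it assumes a private deploy key (whose public half is registered with Bitbucket) has already been placed on the VM at /root/.ssh/id_rsa, and the repo address and target directory are made-up examples:

#!/bin/bash
# Sketch of a startup script that clones a Bitbucket repo over SSH.
# Assumes /root/.ssh/id_rsa already holds a private deploy key; the repo
# address and the /opt/app target directory are placeholder examples.
mkdir -p /root/.ssh
chmod 700 /root/.ssh
chmod 600 /root/.ssh/id_rsa
# Trust Bitbucket's host key so the clone does not prompt interactively.
ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts
# Pull the code.
git clone git@bitbucket.org:myteam/myrepo.git /opt/app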
Thanks,
Looking for recommendations for the following scenario:
On an Ubuntu 18.04 server, every minute check for new files in an AWS S3 bucket and fetch only the newest files to a temp folder; at the end of the day, remove them.
It should be automated in bash.
I proposed using S3 event notifications, queues, and Lambda, but it was decided that it is best to keep it simple.
I am looking for recommendations for the steps described below (a rough sketch follows below):
For step 1 I was doing aws s3 ls | awk (with a function to filter files updated within the last minute),
then I realized it was better to do it with grep.
0-A cron job should run every minute from 7:00 to 23:00.
1-List the files uploaded to the S3 bucket during the past minute.
2-List the files in a temp-encrypted folder on the Ubuntu 18.04 server.
3-Check whether the files listed in step 1 are already downloaded to the temp-encrypted folder from step 2.
4-If the files are not already downloaded, download the newest files from the S3 bucket into temp-encrypted.
5-At the end of the day (23:00), take a record of the last files fetched from S3.
6-Run a cleanup script at the end of the day to remove everything in temp-encrypted.
I attach a diagram with the intended process and infrastructure design.
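This is a rough sketch of what the per-minute job could look like. The bucket name, local path, and the timestamp comparison are assumptions, not a final implementation:

#!/bin/bash
# Sketch of the per-minute fetch (steps 1-4); bucket and paths are examples.
# Intended crontab entry (every minute, 07:00-22:59; adjust the hour range as needed):
#   * 7-22 * * * /home/ubuntu/fetch.sh
BUCKET="s3://my-example-bucket"
TMP_DIR="/home/ubuntu/temp-encrypted"
mkdir -p "$TMP_DIR"

# aws s3 ls prints "date time size key" (timestamps in local time).
cutoff=$(date -d '1 minute ago' '+%Y-%m-%d %H:%M')
aws s3 ls "$BUCKET/" --recursive | while read -r day time size key; do
    # Keep only objects modified after the cutoff that are not here yet.
    if [[ "$day $time" > "$cutoff" && ! -e "$TMP_DIR/$(basename "$key")" ]]; then
        aws s3 cp "$BUCKET/$key" "$TMP_DIR/"
    fi
done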
The solution was like this:
1. Change FTPS to SFTP running on Ubuntu 18.04.
2. Change the main ports: randomport1 for SSH and randomport2 for SFTP.
3. Configure SFTP in the sshd_config file.
4. Once everything is working, create the local directory structure.
5. Do the fetching by using a bash script (sketched below):
5.1 List what is in S3 and save it in a variable.
5.2 For each of the files listed in S3, check whether there is a new file not present as a mirrored file in the local directory s3-mirror.
5.3 If there is a new file, fetch it, touch a file with empty contents and the same name in the s3-mirror directory, move the encrypted file to SFTP, and remove the fetched S3 file from the mirrored local directory.
5.4 Record successful actions in a log.
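A simplified sketch of that script follows; the bucket name, directory paths, and log location are examples rather than the exact production values:

#!/bin/bash
# Sketch of step 5; names and paths are placeholders.
BUCKET="s3://my-example-bucket"
MIRROR_DIR="/srv/s3-mirror"      # empty marker files, one per fetched object
SFTP_DIR="/srv/sftp/incoming"    # where fetched (encrypted) files are handed over
LOG="/var/log/s3-fetch.log"
mkdir -p "$MIRROR_DIR" "$SFTP_DIR"

# 5.1 list what is in S3 and save it in a variable (assumes keys without spaces)
keys=$(aws s3 ls "$BUCKET/" --recursive | awk '{print $4}')

for key in $keys; do
    name=$(basename "$key")
    # 5.2 skip anything already mirrored
    if [[ -e "$MIRROR_DIR/$name" ]]; then
        continue
    fi
    # 5.3 fetch, leave an empty marker with the same name, hand the file to SFTP
    if aws s3 cp "$BUCKET/$key" "$SFTP_DIR/$name"; then
        touch "$MIRROR_DIR/$name"
        # 5.4 record the successful action
        echo "$(date -Is) fetched $key" >> "$LOG"
    fi
done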
So far it works well.
I'm using AWS EB to host my Laravel 5.8 application. I have a view where I upload files: two CSV files and a ZIP archive. Long story short, these files are stored locally so their content can be processed. Once they are stored, their stored location, which is returned as temp/..., is concatenated with the helper method storage_path() to determine the full path of the file to open for processing:
$file = file(storage_path() . '\app/' . $filePath);
Once this line is reached, I'm getting this error:
fopen(/var/app/current/storage\app/temp/routes/20190906143753/ztySdkFwY5bAvQhIK4Mlsftz8fzVJfPK3S3d5CSV.txt): failed to open stream: No such file or directory
*************** UPDATE ***************
So it turns out that the storage folder in my Laravel project had no write permission. I've managed to fix that, and now the files are actually being stored. However, another problem showed up: the files aren't stored fully until the execution of the page ends, which is not what I want, since those files are also used during the same process for other purposes.
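For reference, the permission fix was roughly the following, run on the EB instance. The webapp user/group is an assumption about the Elastic Beanstalk PHP platform, so adjust it if your environment differs:

# Sketch of the permission fix on the Elastic Beanstalk instance.
# "webapp" as the user/group is an assumption about the EB PHP platform.
cd /var/app/current
sudo chown -R webapp:webapp storage bootstrap/cache
sudo chmod -R 775 storage bootstrap/cache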
I received the following error in CloudWatch Logs after using AWS CodePipeline (AWS CodeBuild) to deploy my C# Lambda function code:
Could not find the required 'MyAssembly.deps.json'.
This file should be present at the root of the deployment package.: LambdaException
The problem in my case was that the Linux file permissions on the files inside the zip were set to 000, so when the zip was extracted by AWS Lambda, Lambda did not have permission to access the file MyAssembly.deps.json.
I was using C#'s System.IO.Compression.ZipFile.CreateFromDirectory to author the zip file. I had to shell out to the native zip program to produce a zip file that worked.
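Roughly, the workaround looked like the sketch below (the publish path and archive name are examples); the point is to repair the permission bits and let the native zip tool preserve them:

# Sketch of the workaround; the publish path and zip name are examples.
cd bin/Release/netcoreapp3.1/publish
chmod -R u+rwX,go+rX .      # make sure no file is left with 000 permissions
zip -r ../MyAssembly.zip .  # the native zip preserves the Unix permission bits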
Big thanks to https://forums.aws.amazon.com/message.jspa?messageID=856247
I know this is a bit of an old question, but I'm writing this answer for any user who is still facing the problem on a Windows system.
This is with .NET Core 3.1.
The first command, run in the Package Manager Console, ensures the .deps.json is included in the publish files:
dotnet publish /p:GenerateRuntimeConfigurationFiles=true
Then zip all the files in the publish folder into an archive named after the namespace folder, and upload the zip file to AWS Lambda using the console.
That worked.
If not, then zip all the project files (not the published ones) and upload that to AWS Lambda.
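Putting the main steps together, the sequence is roughly the one below. The framework folder netcoreapp3.1 and the archive name are examples, and the zip command assumes a Unix-style shell (e.g. Git Bash or WSL); any zip tool will do on Windows:

# Sketch of the publish-and-zip sequence; adjust paths and names for your project.
dotnet publish -c Release /p:GenerateRuntimeConfigurationFiles=true
cd bin/Release/netcoreapp3.1/publish
zip -r ../MyNamespace.zip .   # zip the *contents* of the publish folder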
I want to back up my local DynamoDB server. I have installed the DynamoDB server on a Linux machine. Some sites suggest creating a bash script on the Linux OS and connecting to an S3 bucket, but on a local machine we don't have an S3 bucket.
So I am stuck with my work. Please help me. Thanks.
You need to find the database file created by DynamoDB Local. From the docs:
-dbPath value — The directory where DynamoDB will write its database file. If you do not specify this option, the file will be written to the current directory. Note that you cannot specify both -dbPath and -inMemory at once.
The file name will be of the form youraccesskeyid_region.db. If you used the -sharedDb option, the file name will be shared-local-instance.db.
By default, the file is created in the directory from which you ran DynamoDB Local. To restore, you'll have to copy that same file back and, when running DynamoDB Local, specify the same -dbPath.
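As an illustration, a backup and restore could look roughly like this, assuming DynamoDB Local was started with -sharedDb and keeps its data in /opt/dynamodb (the paths and the backup naming are examples):

# Back up: stop DynamoDB Local first so the file is not being written, then copy it.
cp /opt/dynamodb/shared-local-instance.db /backups/shared-local-instance.db.$(date +%F)

# Restore: copy a backup into place and start DynamoDB Local pointing at that directory.
cp /backups/shared-local-instance.db.2019-09-06 /opt/dynamodb/shared-local-instance.db
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb -dbPath /opt/dynamodb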