How to create a new candy machine to test again within the same project? - solana

So I've been playing around and testing Candy Machine on devnet. I have the mint site set up and everything seems to be working OK. I want to test again, but I have no more assets to mint, so I need to set up a new candy machine in the project. I've searched around for a solution and saw that deleting the .cache folder will let you create a new candy machine, or that deleting the mainnet-beta-temp.json or devnet-temp.json file in your .cache folder will let you keep using the keypair you were using before. Can someone confirm whether this is true, or tell me the best way to go about this? Thanks!

Erasing the cache is not the best way to create a new Candy Machine, because you will need that cache later to close the candy machine account and get the rent back into your account using the withdraw command.
The best way to create and use a new Candy Machine (devnet or mainnet) is to use the -c parameter on any Candy Machine V2 command.
For example, the following command will create a new cache file called devnet-example.json, because you set -e devnet and -c example:
ts-node ~/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts upload -e devnet -k ~/.config/solana/devnet.json -cp config.json -c example ./assets
Then, to use this Candy Machine in future commands, you just have to add -e devnet and -c example to whichever command you want to run.
Keep in mind that -c can be any name you want, and it will create (or reuse) the corresponding cache file inside .cache. Just make sure, whenever you create a new candy machine for devnet or mainnet-beta, to pass the same -c name that you used to create the Candy Machine.
By the way, if you don't set -c on a command it uses temp as the default, and that's why you won't be able to use or create a new Candy Machine unless you erase the cache file/folder. My recommendation is to use the -c param instead, so it is always explicit which Candy Machine you are creating or using.
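For instance, assuming the same CLI path and keypair as above, verifying the upload and later closing that same candy machine would look something like this:
ts-node ~/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts verify_upload -e devnet -k ~/.config/solana/devnet.json -c example
ts-node ~/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts withdraw -e devnet -k ~/.config/solana/devnet.json -c example
Both commands resolve the candy machine through .cache/devnet-example.json, so you never have to copy the candy machine address around.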

Related

Ubuntu terminal: ssh to the same IP after the device changed

I ssh to a device that gets attached to a test bench with the following:
ssh root@1.2.3.4
Because the actual device has been changed since the last time I connected to that IP, I get WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!, all as expected. The warning tells me that I can easily fix this with:
ssh-keygen -f "/home/myuser/.ssh/known_hosts" -R "1.2.3.4"
I find myself repeating the same thing over and over; there must be some way to improve this. I know this IP and it is internal to my company.
I started working on something like !!:s/find/replace, but the spaces in the replacement are making my life difficult.
What is the easiest way to automate this, maybe by creating an alias?
Thank you
Jack
I asked a senior dev at my company and he suggested that I just update my ~/.ssh/config file. I added:
Host 1.2.3.*
    User root
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no
No more having to copy and paste the line to update my known_hosts file.
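If you would rather keep strict host key checking for that address and just shorten the cleanup step, an alias around the ssh-keygen command from the warning works too (the alias name here is only an example):
alias forget-bench='ssh-keygen -f "$HOME/.ssh/known_hosts" -R "1.2.3.4"'
Drop it in ~/.bashrc, run forget-bench after the device is swapped, then ssh as usual and accept the new key.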

AWS Launch Configuration not picking up user data

We are trying to build an auto scaling group (let's say AS) configured with an elastic load balancer (let's say ELB) in AWS. The auto scaling group itself is configured with a launch configuration (let's say LC). As far as I can tell from the AWS documentation, pasting a script as-is in the user data section of the launch configuration should run that script on every instance launched into an auto scaling group associated with that launch configuration.
For example, pasting this in user data should leave a file named configure in the home folder of a t2.micro Ubuntu instance:
#!/bin/bash
cd
touch configure
Our end goal is:
Increase the number of instances in the auto scaling group, have each new instance launch with our startup script, and have it added behind the load balancer tagged with the auto scaling group. However, the script was not executed at instance launch. My questions are:
1. Am I missing something here?
2. What should I do to run our startup script when launching any new instance in an auto scaling group?
3. Is there any way to verify if user data was really picked up by the launch?
The direction you are following is right. What is wrong is your user data script.
Problem 1:
What you have to remember is that user data is executed as the root user, not ubuntu. So if your script had worked fine, you would find your file in /root/configure, not in /home/ubuntu/configure.
Problem 2:
Your script is actually executing, but it is incorrect and fails at the cd command, so the file is never created.
The cd builtin, without a directory argument, tries to do cd $HOME; however, $HOME is not set during the cloud-init run, so you have to be explicit here.
Change your script to below and it will work:
#!/bin/bash
cd /root
touch configure
You can also debug issues with your user-data script by inspecting the /var/log/cloud-init.log log file, in particular checking it for errors:
grep -i error /var/log/cloud-init.log
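To answer question 3 more directly (this is a standard cloud-init/EC2 check rather than part of the original answer), you can confirm from inside the instance exactly what user data was delivered and what cloud-init executed:
# Print the user data the instance received at launch (run on the instance itself)
curl -s http://169.254.169.254/latest/user-data
# cloud-init keeps a copy of the user-data script it ran
ls /var/lib/cloud/instance/scripts/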
Hope it helps!

Is it secure to store EC2 User-Data shell scripts in a private S3 bucket?

I have an EC2 ASG on AWS and I'm interested in storing the shell script that's used to instantiate any given instance in an S3 bucket, and having it downloaded and run upon instantiation. It all feels a little rickety, though, even though I'm using an IAM Instance Role, transferring via HTTPS, and encrypting the script itself while at rest in the S3 bucket with S3 Server-Side Encryption (I fell back to SSE because the KMS method was throwing an 'Unknown' error).
The Setup
Created an IAM Instance Role that gets assigned to any instance in my ASG upon instantiation, which makes temporary AWS creds available on the instance
Uploaded my encrypted Instance-Init.sh script to S3, resulting in a private endpoint like so: https://s3.amazonaws.com/super-secret-bucket/Instance-Init.sh
In The User-Data Field
I input the following into the User Data field when creating the Launch Configuration I want my ASG to use:
#!/bin/bash
apt-get update
apt-get -y install python-pip
apt-get -y install awscli
cd /home/ubuntu
aws s3 cp s3://super-secret-bucket/Instance-Init.sh . --region us-east-1
chmod +x Instance-Init.sh
. Instance-Init.sh
shred -u -z -n 27 Instance-Init.sh
The above does the following:
Updates package lists
Installs python-pip (Python is required to run aws-cli)
Installs aws-cli
Changes to the /home/ubuntu user directory
Uses the aws-cli to download the Instance-Init.sh file from S3. Due to the IAM Role assigned to my instance, my AWS creds are automagically discovered by aws-cli. The IAM Role also grants my instance the permissions necessary to decrypt the file.
Makes it executable
Runs the script
Deletes the script after it's completed.
The Instance-Init.sh Script
The script itself will do stuff like setting env vars and docker run the containers that I need deployed on my instance. Kinda like so:
#!/bin/bash
export MONGO_USER='MyMongoUserName'
export MONGO_PASS='Top-Secret-Dont-Tell-Anyone'
docker login -u <username> -p <password> -e <email>
docker run -e MONGO_USER=${MONGO_USER} -e MONGO_PASS=${MONGO_PASS} --name MyContainerName quay.io/myQuayNameSpace/MyAppName:latest
Very Handy
This creates a very handy way to update User-Data scripts without the need to create a new Launch Config every time you need to make a minor change. And it does a great job of getting env vars out of your codebase and into a narrow, controllable space (the Instance-Init.sh script itself).
But it all feels a little insecure. The idea of putting my master DB creds into a file on S3 is unsettling to say the least.
The Questions
Is this a common practice or am I dreaming up a bad idea here?
Does the fact that the file is downloaded and stored (albeit briefly) on the fresh instance constitute a vulnerability at all?
Is there a better method for deleting the file in a more secure way?
Does it even matter whether the file is deleted after it's run? Considering the secrets are being transferred to env vars it almost seems redundant to delete the Instance-Init.sh file.
Is there something that I'm missing in my nascent days of ops?
Thanks for any help in advance.
What you are describing is almost exactly what we are using to instantiate Docker containers from our registry (we now use a v2 self-hosted/private, S3-backed docker-registry instead of Quay) into production. FWIW, I had the same "this feels rickety" feeling you describe when first treading this path, but after almost a year of doing it, and compared to the alternative of storing this sensitive configuration data in a repo or baking it into the image, I'm confident it's one of the better ways of handling this data. That said, we are currently looking at Hashicorp's new Vault software for deploying configuration secrets, to replace this "shared" encrypted secret shell script container (say that five times fast). We think Vault will be the equivalent of outsourcing crypto to the open source community (where it belongs), but for configuration storage.
In fewer words: we haven't run into many problems with a very similar setup that we've been using for about a year, but we are now looking at an external open source project (Hashicorp's Vault) to replace our homegrown method. Good luck!
An alternative to Vault is to use credstash, which leverages AWS KMS and DynamoDB to achieve a similar goal.
I actually use credstash to dynamically import sensitive configuration data at container startup via a simple entrypoint script - this way the sensitive data is not exposed via docker inspect or in docker logs etc.
Here's a sample entrypoint script (for a Python application) - the beauty here is you can still pass in credentials via environment variables for non-AWS/dev environments.
#!/bin/bash
set -e
# Activate virtual environment
. /app/venv/bin/activate
# Pull sensitive credentials from AWS credstash if CREDENTIAL_STORE is set with a little help from jq
# AWS_DEFAULT_REGION must also be set
# Note values are Base64 encoded in this example
if [[ -n $CREDENTIAL_STORE ]]; then
  items=$(credstash -t $CREDENTIAL_STORE getall -f json | jq 'to_entries | .[]' -r)
  keys=$(echo $items | jq .key -r)
  for key in $keys; do
    export $key=$(echo $items | jq 'select(.key=="'$key'") | .value' -r | base64 --decode)
  done
fi
exec "$@"
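For reference, the corresponding secrets would be stored ahead of time with credstash put; the table name below is only an example, and the Base64 step mirrors the decoding done in the entrypoint above:
# Store a Base64-encoded secret in the credstash table the entrypoint reads from
credstash -t my-credential-store put MONGO_PASS "$(echo -n 'Top-Secret-Dont-Tell-Anyone' | base64)"
The container would then be started with -e CREDENTIAL_STORE=my-credential-store -e AWS_DEFAULT_REGION=us-east-1 so the entrypoint knows where to look.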

Bash change password on boot

* QUICK SOLUTION *
For those of you visiting this page based solely on the title and not wanting to read through everything below, or thinking everything below doesn't apply to your situation, maybe this will help... If all you are looking to do is change a user's password on boot and are using Ubuntu 12.04 or similar, here is all you have to do. Add a script that runs on boot containing the following:
printf "New Password\nRepeat Password\n" | passwd user
Keep in mind, this must be run as root, otherwise you will need to provide the original password like so:
printf "Original Password\nNew Password\nRepeat Password\n" | passwd user
* START ORIGINAL QUESTION *
I have a first boot script that sets up a VM by doing some configuration and file copies from a mounted iso. Basically the following happens:
VM boots for the first time.
/etc/rc.local is used to mount a CD ISO to /media/cdrom and execute /media/cdrom/boot.sh
The boot.sh file does some basic configuration, copies some files from the CD to the VM, and should update the user's password, using the current password.
This part of the script fails. The password is not updating. I have tried the following:
VAR="1234test6789"
echo -e "DEFAULT\n$VAR\n$VAR" | passwd user
Basically, the default VM is set up with a user (for example, jack) with a default password (DEFAULT). The script above, using the default password, updates it to the new password stored in VAR. The script works by itself when I am logged in, but I can't get it to do the same on boot. I'm sure there is some sort of system policy or something that prevents this. If so, I need some sort of workaround. This VM is being mass deployed; it is packaged automatically and configured with a custom user password that is passed in from the CD ISO.
Please help. Thank you!
* UPDATE *
Oh, and I'm using Ubuntu 12.04
* UPDATE *
I tried your suggestion. The following fails when run directly in rc.local, i.e. the password does not update. The script is running, however; I tested this by adding the touch line.
touch /home/jack/test
VAR="1234test5678"
printf "P#ssw0rd\n$VAR\n$VAR" | passwd jack
P#ssw0rd is the example default VM password.
Jack is the example username.
* UPDATE *
OK, we think the issue may be tied to rc.local. rc.local is called very early on, before the run levels, and may be causing the issue.
* UPDATE *
Well, potentially good news. The password seems to be updating now, but it's updating to something other than what I set in $VAR. I think it might be adding something to it. This is of course just a guess. Every time I run the test, immediately after the script runs at boot I can no longer log in with the username it was trying to update. I know that's not a lot of information to go on, but it's all I've got at the moment. Any ideas what, or why, it's appending something else to the password?
* SOLUTION *
So there were several small problems with why I could not get the suggestion below working. I won't outline them here as they are irrelevant. The ultimate solution came from Graeme, tied in with some other features of my script, which I will share below.
The default VM boots
rc.local does the following:
if [ -f /etc/program/tmp ]; then
    mount -t iso9660 -o ro /dev/cdrom /media/cdrom
    cd /media/cdrom
    ./boot.sh
fi
(The tmp file is there just to prevent the first boot script from running more than once. After boot.sh runs once, it removes that tmp file.)
boot.sh on the CDROM runs (with root privileges)
boot.sh copies files from the CDROM to /etc/program
boot.sh also updates the users password with the following:
VAR="DEFAULT"
cp config "/etc/program/config"
printf "$VAR\n$VAR\n" | passwd user
rm -rf /etc/program/tmp
(VAR is changed by another part of the server that is connected to our OVA deployment solution. Basically, each user gets a customized, random password for their VM so that similar users cannot access each other's VMs.)
There is still some testing to be done, but I am reasonably satisfied that this issue is resolved. 95%
Edit - updated for not entering the original password
The sh version of echo does not have the -e option, unlike bash. Switch echo for printf. Also, the rc.local script runs with root privileges, so it won't prompt for the original password. Including the original will cause the command to fail, since 'DEFAULT' will be taken as the new password and the confirmation will fail. This should work:
VAR="1234test6789"
printf "$VAR\n$VAR\n" | passwd user
Ubuntu uses dash at boot time, which is a drop-in replacement for sh and is much more lightweight than bash. echo -e is a common bashism which doesn't work elsewhere.
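As an aside, and not part of Graeme's fix, chpasswd is a commonly used alternative for this kind of first-boot script: it reads username:password pairs from stdin and, when run as root, avoids the interactive prompts of passwd entirely:
VAR="1234test6789"
echo "jack:$VAR" | chpasswd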

I have to move my logs from one server to another server on a weekly basis using a shell script

I have a new server and I want to move all my log files from the old server to the new server on a weekly basis.
If the directory does not exist, then create a directory for that week and transfer all of that week's files from the old server to the new one.
I am not able to find out how to do that.
Write a cron job that triggers once every week. See this tutorial.
In your cron command, write a copy (and optionally a delete) command, for example:
scp -i private_key /path/to/logfile/on/current/server/* remote_server_address:/path/to/paste/log/dir && rm -rf /path/to/logfile/on/current/server
done.
One thing to note: I have used private_key to authenticate the connection. See here how to achieve password-less authentication.
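Here is a minimal sketch of what that could look like; the paths, host name, key location, and schedule (every Monday at 02:00) are all hypothetical and should be adjusted to your environment. It also creates the per-week directory on the new server if it does not exist yet, as asked:
# crontab -e on the old server: run the transfer script every Monday at 02:00
0 2 * * 1 /usr/local/bin/ship_logs.sh

# /usr/local/bin/ship_logs.sh
#!/bin/bash
set -e
WEEK=$(date +%Y-week%V)                     # e.g. 2024-week07
SRC=/path/to/logfile/on/current/server
# create this week's directory on the new server if it is missing
ssh -i private_key remote_server_address "mkdir -p /path/to/paste/log/dir/$WEEK"
# copy this week's logs, and delete them locally only if the copy succeeded
scp -i private_key "$SRC"/* remote_server_address:/path/to/paste/log/dir/"$WEEK"/ && rm -f "$SRC"/*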
