Update Authority of Candy Machine Project - solana

My question is regarding taking over a partially minted CM project. I'm not too familiar with Solana development.
A buddy of mine is taking over ownership of a project. I just wanted to confirm: if the authority of the candy machine is updated to his wallet and he has the CM id, is that all that's required to relinquish previous ownership? He'd be free to make changes to the CM configuration at that point.
Also, would the update authority of each minted NFT have to be updated as well? Would the only downside of not updating the existing NFTs be not being able to change the royalties?
Is there anything else to consider in transferring ownership?
Thanks in advance!

An already partially minted project has a few things that need to be updated to fully take it over.
You'll need to have on hand the update authority keypair for the Candy Machine, given to you by the old owner, or have them perform some of the actions here if they won't hand it over, as some actions need to be signed by the original keypair.
Update the Candy Machine to a new authority. This grants the new wallet full control of the CM only; all NFTs minted from that point forward will have their update authority set to the new Candy Machine authority at mint time.
Get the treasury balance (optional). If you are taking over the original treasury, get the old owner either to hand over the private key to the treasury account (extremely unlikely) or to transfer the funds to a wallet of your choosing. If for whatever reason they do give you the keys to the treasury wallet, transfer the balance out to a wallet of your choosing.
Update the CM treasury. When a purchase is made from a Candy Machine, the SOL is sent to the treasury address, so this obviously needs to be a wallet the new owner controls.
Update already minted NFTs to the new authority. If minting has already happened, the update authority for these NFTs will belong to the old Candy Machine authority (the old owner). To secure these NFTs you'll have to change their update authority to the new owner. This means you will need the original Candy Machine authority keypair, or you will have to get the original owner to transfer the authority over for you.
A tool for updating the authority of NFTs is Metaboss. If you look at this page here https://metaboss.rs/set.html there are a few commands you can use for updating the authority (a sketch follows below). You'll also obviously need a snapshot of all the NFTs that have been minted so far, so you can run the set-authority commands: https://metaboss.rs/snapshot.html
Once you have update authority over the NFTs you can then change the royalty % and everything else.
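As a rough sketch, the Metaboss invocations could look like the following; the keypair path, pubkeys, and output directory are placeholders, and exact flags can differ between Metaboss versions, so check the pages linked above:

# Snapshot every mint whose update authority is the old CM authority
metaboss snapshot mints --update-authority <OLD_AUTHORITY_PUBKEY> --output ./snapshots

# Change the update authority of one NFT; this must be signed by the
# current (old) update authority keypair
metaboss set update-authority --keypair /path/to/old-authority.json --account <MINT_ADDRESS> --new-update-authority <NEW_AUTHORITY_PUBKEY>

# Batch variant that walks the snapshot's mint list
metaboss set update-authority-all --keypair /path/to/old-authority.json --mint-list ./snapshots/<OLD_AUTHORITY_PUBKEY>_mint_accounts.json --new-update-authority <NEW_AUTHORITY_PUBKEY>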

1.- Yes, if the authority is changed he just needs to provide the new wallet address, and that new wallet will be the only one that can modify the Candy Machine configs.
2.- All the already minted NFTs will stay attached to the old wallet; newly minted NFTs will be attached to the new wallet. In order to change the authority of the already minted NFTs to the new authority, the old wallet has to do it, using Metaboss for example.
3.- The old wallet can update all the data of an NFT (if the NFT is marked as mutable, which is normally the case): it can change the images, the attributes, the royalty fees, the creator array, names, etc. It's really important that the new wallet holds the authority over all NFTs, so that you don't get malicious attacks from the old owners of the project that you won't be able to fix afterwards.


Google Cloud Storage - Handle rotating keys from outside the environment

I need help with handling rotating keys on Google Cloud Storage that's managed by one Google account but accessed by an app running on another Google Cloud account. I tried searching for solutions but couldn't find an answer.
With IAM you can grant permissions at the project level and, for some resources, at the resource level.
That's the case for your KMS keys, where you can grant permission on the key rings
or directly at the key level.
Choose what works best for your use case, and grant the service account of the external project the correct permission (decrypter to read the files in GCS, encrypter to write files).
Note: A key rotation is a new version of a key.
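As a sketch, granting that access with the gcloud CLI could look like this; the key, ring, location, and service-account names below are placeholders for your own resources:

# Grant decrypt rights on a single key to the external project's service account
gcloud kms keys add-iam-policy-binding my-key --keyring my-keyring --location us-central1 --member serviceAccount:app@other-project.iam.gserviceaccount.com --role roles/cloudkms.cryptoKeyDecrypter

# Or grant encrypt and decrypt on the whole key ring
gcloud kms keyrings add-iam-policy-binding my-keyring --location us-central1 --member serviceAccount:app@other-project.iam.gserviceaccount.com --role roles/cloudkms.cryptoKeyEncrypterDecrypter

Because a rotation only adds a new version under the same key, these bindings keep working after the key rotates.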

Using AD with TortoiseGit

In Windows, is it possible for TortoiseGit to use the credentials of the currently logged-in user, or to authenticate via LDAP, rather than the user having to enter a user name and password that then gets stored?
For us, using the git-credential-wincred option for the credential.helper setting isn't feasible. The user environment is very restricted and locked down, and administered by a third party. A change to an AD account password can result in a wait of days (they have a 4-day response time in the SLA) before the stored credentials are removed and the user can get back to using TortoiseGit.
So storing the passwords is not a realistic option. For now credential.helper is set to none, forcing the users to enter the username & password on every push. Not the ideal situation, but better than a 4-day wait.
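For reference, the plain-Git equivalent of that workaround is simply having no helper configured; TortoiseGit reads the same config:

# Remove any configured credential helper so nothing gets stored
git config --global --unset credential.helper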
[edit]
A few more details about our environment. It's an HVD, based on a master image. The master image is maintained by another company, and each time the users log on, their session is created from the master image. The users cannot install anything on it. They cannot make changes to the setup or settings. They cannot access the internet with it. There's no command line (OS, Git bash or anything else). There is only restricted access to a handful of internal servers.
The master image has TortoiseGit and Git, and we have GitLab on one of the servers that the HVD can access.

Cannot delete ITIM accounts

I am trying to delete the ITIM account created for a user, however it doesn't let me delete it; an error is displayed: "following accounts cannot be deleted since they are governed by automatic provisioning policy".
Please let me know what the reason for this is and how to correct it.
The reason is that there is a Provisioning Policy defined in your environment with the following parameters:
One of the entitlements of the Provisioning Policy is an ITIM Account (possibly with some entitlement parameters)
The provisioning option for this entitlement is set to Automatic
The role membership of the provisioning policy is either a specific role that your user has, or it applies to all users in the organization.
What the above means is that there is a provisioning policy that says "All the users that have this role MUST have an ITIM account". This is why you cannot manually delete the ITIM Account for that person.
It's not about correcting it, but rather about figuring out what you want to achieve. You have several options, but first you need to take a step back and understand the reason instead of just attempting to fix the symptom. Why should this user not have an ITIM Account?
If there is a role that gives him this account, you need to figure out which role that is and remove the role from the person. Then the Provisioning Policy enforcement will remove the ITIM Account (oversimplifying here, assuming there are no other PPs that apply to the person and have an ITIM Account as an entitlement).
If, on the other hand, the provisioning policy applies to everyone and you have now found out that some of them should not have an account, or that you should be able to remove accounts from them, you either need to make the provisioning option manual (meaning everyone CAN have an account, but they will need to request it or get it provisioned by someone/some process) or change the membership of the policy to a more exclusive role that contains only the persons who should have an ITIM Account.
EDIT
You would need to go back a little and try to understand the notions of Provisioning Policies in the context of ITIM and RBAC in general. This is not the place to analyze the topic :) However, briefly, and for the question at hand:
The ITIM Account is not necessarily mapped 1:1 to every ITIM Person. ITIM Persons are the entities that are managed by your Identity Management System (ITIM), and they might have ITIM accounts, that is, accounts on the ITIM Service that is predefined in ITIM.
The ITIM Account is the account that gives access to the ITIM Administrative Console and the Self Service UI; not all persons need or should have this.
The reason why, as you say, the user got an ITIM Account when you created the user is that there is a Provisioning Policy that has the ITIM Service as an entitlement and is set to automatic. This says that all ITIM users MUST have an ITIM account. That is why you can't remove the ITIM account by itself: doing so contradicts the Provisioning Policy that is in place.
The reason the account cannot be deleted is the automatic provisioning policy, which does not allow the ITIM account to be deleted. Change the provisioning policy from automatic to manual; only then will it allow deletion of accounts.

How can I store keychain credentials for multiple Github accounts?

I am running Git on OSX Mavericks and have not had issues until now. What has changed is that I'm trying to use two Github accounts on different repos on the same computer.
The problem is that the osx-keychain is storing the login information from my first account. That was terrific before, but whenever I try to commit or push from my new Github account, it is defaulting to use the keychain's username and password values, and ignoring the locally-defined git config (or even global git config!) files.
I can delete my osx-keychain, and then push to the new account, but in doing so it will create a new keychain for that account, which puts me back at square one: able to push to my secondary account with the new keychain values but locked out of my primary account.
So I'm stuck in an "either-or" situation, and I'm really hoping there's a "both" solution. Any help?
P.S. I have tried this solution, and it did not work, as the osx-keychain appeared to override the SSH identity functionality.
If you are using an https url, then the solution you mention wouldn't have any effect: it is for multiple ssh keys.
Regarding https, this question mentions a few solutions, including:
By default gitcredentials only considers the domain name.
If you want git to consider the full path (e.g. if you have multiple GitHub accounts), set the useHttpPath variable to true, as described at gitcredentials.
Note that changing this setting will make Git ask for your credentials again for each URL.
By default, Git does not consider the "path" component of an http URL to be worth matching via external helpers.
This means that a credential stored for https://example.com/foo.git will also be used for https://example.com/bar.git.
If you do want to distinguish these cases, set this option to true.
Also, make sure your https url includes your account name:
git clone https://user1@github.com/auser/aprojectX
git clone https://user2@github.com/auser/aprojectY
That will help a credential helper know which account/password it should be looking for.
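Put together, a minimal sketch (the account names and repo URLs are placeholders):

# Make credential helpers match on the full URL path, not just the host
git config --global credential.useHttpPath true

# Embed the account in each remote so osxkeychain stores one entry per account
git remote set-url origin https://user1@github.com/auser/aprojectX.git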
Finally, the authentication you are using for accessing a git repo hosting service has nothing to do with:
git config (--global) user.name
That last config is only for setting the author associated with your local commits.
It is not for selecting the account used to access a remote hosting website.
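For example, these settings only stamp the author metadata on commits (the name and email are placeholders):

git config user.name "Jane Dev"
git config user.email "jane@example.com"

They have no effect on which keychain entry is used when pushing over https.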

How paranoid should I be about my Azure application binary files being stolen?

I need to migrate a huge application to Windows Azure. The application depends on a third-party library that requires an activation key stored in a special binary file on the role instance filesystem.
Obviously that key has to be either included into the role package or stored somewhere where role can fetch it. The activation key will not be bound to the machine (since I have no idea where exactly a role will be run in the cloud) so anyone can use it to make a copy of that library work.
Since Azure roles run somewhere not under our control I feel somewhat paranoid about the key being stolen and made widely available.
How do I evaluate how likely it is to steal a binary file that is included into Azure role? How do I mitigate such risks?
When asking questions like this you need to differentiate your attackers:
A2) Rogue internet hacker
A3) A developer in your organization
A4) A person in your organization who does deployments
Your role binaries are not accessible to A2; however, they are very accessible to A3 and A4.
As mentioned in another answer, you can store your secrets in the Role Configuration File.
However, these secrets are still accessible to A4 (anyone with access to the portal).
To defend against A4 you want to encrypt the secrets in the role configuration file with a key that isn't present in the role binaries or the role configuration. Then in your role binaries you decrypt the encrypted role setting, with the decryption key.
The key to use for encrypting the secrets is a certificate, which the Azure SDK will install for you. You can find a post about setting up certificates for azure here.
In summary, for secrets where you need to defend against people in your organization who do deployments or have access to your configuration files, you do the following:
Have a trusted party do the following:
Generate a Certificate/Private Key
Encrypt secrets with the Certificate, and store the encrypted settings in your configuration files
Upload the Certificate/Private Key to your service.
Then modify your service to:
Have the service model install the Certificate/Private Key
Have your application load the private key for decrypting secrets
Have your application load the encrypted settings from role configuration and decrypt them with the private key (a sketch of the trusted party's side follows below).
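As an illustrative sketch of the trusted party's steps using openssl (the file names and subject are placeholders; the role would decrypt using the private key of the installed certificate):

# Generate a self-signed certificate and private key
openssl req -x509 -newkey rsa:2048 -keyout role-secrets.key -out role-secrets.crt -days 365 -nodes -subj "/CN=role-secrets"

# Encrypt a secret with the certificate's public key; the PEM output can be
# pasted into the role configuration file
openssl smime -encrypt -aes256 -in activation.key -outform PEM -out activation.key.enc role-secrets.crt

# Bundle certificate and private key as a PFX for upload to the cloud service
openssl pkcs12 -export -inkey role-secrets.key -in role-secrets.crt -out role-secrets.pfx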
As far as security is concerned, unless your team is extremely capable in this area, the default, always-upgraded Azure OS is probably much more secure than any self-configured host. At least, this is how we (aka my company, Lokad) assess the situation two years after migrating in full to Windows Azure from our previous self-hosted situation.
Then, if you have credentials, such as licence keys, the most logical place to put them is the Role configuration file, not the Role binaries. You can naturally regenerate the special file within the VM as needed, but this way you don't internally spread your credentials all over the place as you archive your binary versions.
