In Ethereum ERC-721 or ERC-1155, off-chain metadata can be saved on private storage like S3. In Solana, I want to use SPL to mint an NFT with metadata on my own storage, but it returns a type mismatch error. Is it possible to use my own/private storage for off-chain metadata with SPL instead of Metaplex?
Metaplex's Token Metadata standard uses a "uri" field that points to a JSON file. This JSON can be stored on any storage: IPFS, AWS, Arweave, your own server, etc.
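As a minimal sketch of that idea, here is how you might build the JSON and push it to your own bucket with boto3; the bucket name, key, and image URL are placeholders, and the field names follow the usual Token Metadata JSON layout. The only hard requirement is that the resulting URL is reachable by the wallets and marketplaces that will render the NFT:

```python
import json

import boto3  # assumes your own storage is S3; any HTTPS-reachable host works

# Hypothetical bucket/key on your own storage -- the on-chain account only
# stores the URI string, so it does not matter where the JSON actually lives.
BUCKET = "my-nft-metadata"
KEY = "tokens/1.json"

# Off-chain metadata following the usual Token Metadata JSON layout.
metadata = {
    "name": "My NFT #1",
    "symbol": "MYNFT",
    "description": "Example token with self-hosted metadata.",
    "image": f"https://{BUCKET}.s3.amazonaws.com/tokens/1.png",
    "attributes": [{"trait_type": "Background", "value": "Blue"}],
}

s3 = boto3.client("s3")
s3.put_object(
    Bucket=BUCKET,
    Key=KEY,
    Body=json.dumps(metadata).encode("utf-8"),
    ContentType="application/json",
)

# Pass this as the `uri` field when you mint (with Metaboss, Candy Machine, or
# the Metaplex SDK). Wallets and marketplaces must be able to fetch it, so
# "private" storage still needs to serve a publicly readable URL.
uri = f"https://{BUCKET}.s3.amazonaws.com/{KEY}"
print(uri)
```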
If you want to mint a few individual NFTs (instead of a collection), a fast and simple way is Metaboss's mint command, which creates an on-chain NFT with the off-chain metadata (JSON) URI pointing to your storage. Metaboss is a Rust CLI with a lot of utilities for Solana NFTs.
If you want to create an NFT collection, I recommend Metaplex's Candy Machine, which lets you use Arweave, IPFS, Pinata, NFT.Storage, or AWS as storage, and you will be able to create a mint page so everyone can mint your project.
I hope this answers your question; I'll keep an eye out for any follow-up questions!
I am trying to decrypt an rpmsg file received from inside my organization that has been encrypted with RMS. I have installed AD RMS and the MDE. I am using the MIP SDK for C# version 1.11.72.
Decryption fails with a generic message - "One or more errors occurred." However, in the MIP SDK logs, I see this:
Failed API call: file_create_file_handler_async Failed with: [NoPermissionsError: 'Received message: Can't find SLC public key in global lookup tenant when targeting https://api.aadrm.com/my/v2/enduserlicenses, NoPermissionsError.Category=UnknownTenant, CorrelationId=6f5fb43e-4fe8-452c-ad30-3d3e5e479a5c, CorrelationId.Description=ProtectionEngine'
I am not sure what this issue might be related to. Any advice as to how to diagnose would be very helpful.
Using AD RMS requires that you have also registered the _rmsdisco SRV record. Without it, the SDK defaults to Azure RMS, which is why your log shows it targeting api.aadrm.com.
https://learn.microsoft.com/en-us/information-protection/develop/quick-app-adrms#service-discovery
I'll look at adding a note to the Service Discovery section that links to the AD RMS details.
Once the record is published, you need to use the Identity property on the FileEngineSettings object. The SDK will use the domain suffix from the identity to chase the SRV record.
If your organization has multiple email domains, you'll need an SRV record for each that points to the RMS cluster.
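If you want a quick way to confirm a record is actually resolvable from the client machine, a small dnspython check works; contoso.com is a placeholder for your email domain, and the _rmsdisco._http._tcp form is the usual convention, so confirm the exact record name against the service-discovery doc linked above:

```python
import dns.resolver  # pip install dnspython

# Placeholder: replace contoso.com with the email domain of the identity you
# pass to FileEngineSettings. Confirm the exact record name against the
# AD RMS service-discovery documentation linked above.
record = "_rmsdisco._http._tcp.contoso.com"

try:
    for rdata in dns.resolver.resolve(record, "SRV"):
        print(f"{record} -> {rdata.target}:{rdata.port} (priority {rdata.priority})")
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
    print(f"No SRV record found for {record}; the SDK will fall back to Azure RMS.")
```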
I am trying to rotate the access keys and secret keys for all IAM users. Last time this was required I did it manually, but now I want to do it with a rule or automation.
I went through some links and found this one:
https://github.com/miztiik/serverless-iam-key-sentry
I tried to use it, but I was not able to get it working; it always gave me an error. Can anyone please help, or suggest a better way to do it?
I am also new to AWS Lambda, so I am not sure how my code can be tested.
There are different ways to implement a solution. One common approach is to store the IAM user access keys in AWS Secrets Manager for safekeeping. Next, you could configure a monthly or 90-day schedule that rotates the keys and stores the new keys back in Secrets Manager, using the AWS CLI or an SDK of your choice.
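As a rough sketch of that flow with boto3 (Python), something like the function below could run inside a scheduled Lambda; the user names and secret IDs are placeholders, and the Secrets Manager secrets are assumed to already exist:

```python
import json

import boto3

iam = boto3.client("iam")
secrets = boto3.client("secretsmanager")


def rotate_user_keys(user_name: str, secret_id: str) -> None:
    """Create a fresh access key, store it in Secrets Manager, retire old keys."""
    existing = iam.list_access_keys(UserName=user_name)["AccessKeyMetadata"]

    # IAM allows at most two access keys per user, so drop the oldest one
    # if the user is already at the limit.
    if len(existing) >= 2:
        oldest = min(existing, key=lambda k: k["CreateDate"])
        iam.delete_access_key(UserName=user_name, AccessKeyId=oldest["AccessKeyId"])
        existing = [k for k in existing if k["AccessKeyId"] != oldest["AccessKeyId"]]

    # Create the replacement key and store it where applications can read it.
    # put_secret_value assumes the secret already exists.
    new_key = iam.create_access_key(UserName=user_name)["AccessKey"]
    secrets.put_secret_value(
        SecretId=secret_id,
        SecretString=json.dumps(
            {
                "AccessKeyId": new_key["AccessKeyId"],
                "SecretAccessKey": new_key["SecretAccessKey"],
            }
        ),
    )

    # Deactivate (rather than immediately delete) the previous keys so you can
    # roll back if something still depends on them.
    for key in existing:
        iam.update_access_key(
            UserName=user_name, AccessKeyId=key["AccessKeyId"], Status="Inactive"
        )


def lambda_handler(event, context):
    # Placeholder wiring: rotate every user returned by list_users and map each
    # one to a secret named after the user. Trigger this on a 90-day schedule
    # (for example, an EventBridge scheduled rule).
    for user in iam.list_users()["Users"]:
        rotate_user_keys(user["UserName"], f"iam-keys/{user['UserName']}")
```

For testing, you can call rotate_user_keys by hand against a single throwaway IAM user before attaching the function to a schedule.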
I am attempting to use the Microsoft Azure Storage Explorer, attaching with a SAS URI. But I always get the error:
Inadequate resource type access. At least service-level ('s') access is required.
Here is my SAS URI with portions obfuscated:
https://ti<...>hare.blob.core.windows.net/?sv=2018-03-28&ss=b&srt=co&sp=rwdl&se=2027-07-01T00:00:00Z&st=2019-07-01T00:00:00Z&sip=52.<...>.235&spr=https&sig=yD%2FRUD<...>U0%3D
And here is my connection string with portions obfuscated:
BlobEndpoint=https://tidi<...>are.blob.core.windows.net/;QueueEndpoint=https://tidi<...>hare.queue.core.windows.net/;FileEndpoint=https://ti<...>are.file.core.windows.net/;TableEndpoint=https://tid<...>hare.table.core.windows.net/;SharedAccessSignature=sv=2018-03-28&ss=b&srt=co&sp=rwdl&se=2027-07-01T00:00:00Z&st=2019-07-01T00:00:00Z&sip=52.<...>.235&spr=https&sig=yD%2FRU<...>YU0%3D
It seems like the problem is with the construction of my URI/endpoints/connection string/etc., rather than with the permissions granted to me on the server, because when I click Next the error displays instantaneously; I do not believe it even tried to reach out to the server.
What am I doing wrong? (As soon as I get this working, I'll be using the URI/etc to embed in my C# app for programmatic access.)
What you are missing is the service level: the "srt" (signed resource types) part of the URI.
Your URI has srt=co (container and object) but also needs the "s" (service) part. You need to create a new SAS; it can be generated in the portal, the Azure CLI, or PowerShell.
In the portal, go to the storage account, open the Shared access signature blade, and select what you need:
Allowed services (if you are looking for blob): Blob
Allowed resource types: Service (make sure this one is activated), Container, Object
Allowed permissions (check all of these to do everything): Read, Write, Delete, List, Add, Create
If you need more info look here:
https://learn.microsoft.com/en-us/rest/api/storageservices/create-account-sas?redirectedfrom=MSDN
If you would like to create the SAS in the Azure CLI, use this:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-user-delegation-sas-create-cli
If you would like to create the SAS in PowerShell, use this:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-user-delegation-sas-create-powershell
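If you would rather generate the same kind of account SAS in code, here is a minimal sketch with the azure-storage-blob Python package; the account name, key, and expiry are placeholders:

```python
from datetime import datetime, timedelta

from azure.storage.blob import (  # pip install azure-storage-blob
    AccountSasPermissions,
    ResourceTypes,
    generate_account_sas,
)

# Placeholders: use your storage account name and one of its access keys.
account_name = "mystorageaccount"
account_key = "<storage-account-key>"

sas_token = generate_account_sas(
    account_name=account_name,
    account_key=account_key,
    # srt=sco -- the service-level access that Storage Explorer complains about.
    resource_types=ResourceTypes(service=True, container=True, object=True),
    # Read/write/delete/list/add/create, matching the portal checklist above.
    permission=AccountSasPermissions(
        read=True, write=True, delete=True, list=True, add=True, create=True
    ),
    expiry=datetime.utcnow() + timedelta(days=365),
)

print(f"https://{account_name}.blob.core.windows.net/?{sas_token}")
```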
I had a similar issue trying to connect to a blob container using a Shared Access Signature (SAS) URL, and this worked for me:
Instead of generating the SAS URL in Azure Portal, I used Azure Storage Explorer.
Right click the container that you want to share -> "Get Shared Access Signature"
Select the Expiry time and permissions and click create
This URL should work when your client/user tries to connect to the container.
Cheers
I had the same problem and managed to get this to work by hacking the URL and changing "srt=co" to "srt=sco". It seems to need the "s".
I am trying to load public data from S3 using Pig, with this URL:
s3://datasets.elasticmapreduce/ngrams/books/20090715/eng-us-all/4gram/data
LOAD 's3n://datasets.elasticmapreduce/ngrams/books/20090715/eng-us-all/4gram/data'
but it is asking for an access key and secret key. Should I move this data to one of my buckets, or am I missing something?
Public data sets are accessible only when you have an AWS account; they are visible to everyone on AWS, but you still need to pass credentials - an access key and secret key in this case.
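To sanity-check that your own account's keys can read the data set directly (before wiring them into the Pig LOAD), a quick boto3 listing like this can help; the bucket and prefix are taken from the URL in the question and may have moved since:

```python
import boto3

# Uses whatever AWS credentials are configured in your environment; any valid
# account credentials are enough to read a public data set -- you do not have
# to copy it into your own bucket first.
s3 = boto3.client("s3")

resp = s3.list_objects_v2(
    Bucket="datasets.elasticmapreduce",
    Prefix="ngrams/books/20090715/eng-us-all/4gram/data",
    MaxKeys=5,
)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```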
I would like to access external data from my AWS EC2 instance.
In more detail: I would like to specify inside my user data the name of a folder containing about 2M of binary data. When my AWS instance starts up, I would like it to download the files in that folder and copy them to a specific location on the local disk. I only need to access the data once, at startup.
I don't want to store the data in S3 because, as I understand it, this would require storing my AWS credentials on the instance itself, or passing them as user data, which is also a security risk. Please correct me if I am wrong here.
I am looking for a solution that is both secure and highly reliable.
Which operating system do you run?
You can use Elastic Block Store (EBS): it's like a device you can mount at boot (without credentials), and you get permanent storage there.
You can also sync up instances using something like Gluster filesystem. See this thread on it.