What are the valid image formats for Solana NFTs?

Every tutorial I've seen and every single example in the doc shows PNG files. But I can't find any documentation that explicitly states that this is the only supported format. Is it possible to mint an NFT with a JPG file or something else?

Short answer: any image format with an image/* MIME type.
Reasoning behind the statement:
Solana is a platform allowing for smart contracts (called Programs) written in Rust, C, and C++ (source). There are also some unofficial tools that allow you to compile contracts written in other languages, such as Solang for Solidity.
Smart contract standards, usually defined as an interface plus a set of rules (such as when to emit specific events), originated in the Ethereum ecosystem as EIPs (Ethereum Improvement Proposals). One of them is EIP-721, the first approved, and widely used, NFT standard.
You, as a developer, are able to create an NFT smart contract that doesn't follow any of the EIP standards. But a common practice is to follow the original standards even when developing on another network (such as Solana), while respecting network-specific differences.
The EIP-721 standard explicitly describes the image field of the metadata JSON:
A URI pointing to a resource with mime type image/* representing the asset to which this NFT represents.
Note: The standard is also referred to as ERC-721. ERC (Ethereum Request for Comments) is a category of EIP, and both names refer to the same standard.
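As a concrete illustration, here is what off-chain metadata pointing at a JPEG might look like, written as a small TypeScript sketch (the name, symbol, and URI are made up; Metaplex-style metadata carries extra fields, but the image rule is the same):

// Sketch of NFT off-chain metadata; all values are illustrative.
// Any URI whose target has an image/* MIME type (image/jpeg, image/gif,
// image/svg+xml, ...) is as valid for the "image" field as a PNG.
const metadata = {
  name: "My NFT",
  symbol: "MYNFT",
  description: "An NFT whose artwork is a JPEG rather than a PNG",
  image: "https://example.com/artwork.jpg", // served as image/jpeg
};

console.log(JSON.stringify(metadata, null, 2));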

Related

Off-Chain Worker Framework

I haven’t entirely given up on the idea of validators moonlighting as oracles for off-chain computation…based on this extensive discussion: https://gov.near.org/t/off-chain-computation-framework/1400/6
So far, from studying Sputnik's code, I have figured out the mechanics of how to upload a blob to a smart contract. Let's say the blob represents a storage-less contract: it has only stateless functions that act on their inputs and return those inputs modified.
Now I’m missing the piece of how Validators can download and execute the blob. As mentioned by Ilya in the link above, the NearSDK would be able to interpret the blob (if the blob is essentially a compiled contract), but it needs to be a modified version of the SDK...
Think of this like sandbox mode…blob cannot modify state of any other contract, but can read state (forget about the internet access part for now). Results of the blob execution are then fed back to a smart contract, where they have to match the results of every other validator who executed the blob. This can be done by hash comparison (rather than looping through the results individually), so it’s not an expensive comparison, especially because it’s all or nothing.
Question: how can a Validator download the blob and execute it via a sandboxed SDK, then post the result via the regular SDK to the blockchain? I am missing a lot of architectural context…and this is bringing me to the edge of giving up on the idea. Please help prevent that from happening!
If you are implementing this as a separate binary, your binary will do the following (a sketch of the RPC-facing parts follows this list):
Use RPC to load the WASM file from the blockchain. See the RPC reference
Use runtime-standalone to run this WASM with specific inputs. An example of using runtime-standalone is here, but you will need to customize it in a few ways.
The result should be sent as a transaction signed by this binary, again via RPC.
If you want these WASM files to have access to state, you will need to load state inside this binary. There are two options:
Modify a nearcore node to also do the above items
Run nearcore in parallel, and open the database read-only when you are initializing the Trie (e.g. here, load from disk instead).
If you want to add more host functions (like accessing internet), you will need to fork runtime-standalone to expose those functions.
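To make items 1 and 3 a bit more concrete, here is a rough sketch from outside nearcore using near-api-js; the account IDs, the submit_result method, and the shape of the result are all assumptions, and step 2 (running the WASM inside runtime-standalone) is Rust-side and only marked as a placeholder:

// Sketch only: fetch a contract's WASM over RPC, then post a result back
// via a regular function call. All account IDs and method names are made up.
import { connect, keyStores, providers } from "near-api-js";

async function main() {
  // 1. Load the deployed WASM blob via the JSON-RPC "view_code" query.
  const provider = new providers.JsonRpcProvider({ url: "https://rpc.testnet.near.org" });
  const code = (await provider.query({
    request_type: "view_code",
    finality: "final",
    account_id: "blob-host.testnet", // hypothetical contract holding the blob
  })) as any;
  const wasm = Buffer.from(code.code_base64, "base64");

  // 2. Execute `wasm` with specific inputs inside a (modified) runtime-standalone.
  //    That part lives in Rust and is omitted from this sketch.
  const result = "..."; // output of the sandboxed execution

  // 3. Post the result back as a signed transaction via RPC.
  const near = await connect({
    networkId: "testnet",
    nodeUrl: "https://rpc.testnet.near.org",
    keyStore: new keyStores.InMemoryKeyStore(), // load the validator's key here
  });
  const account = await near.account("validator.testnet"); // hypothetical signer
  await account.functionCall({
    contractId: "blob-host.testnet",
    methodName: "submit_result", // hypothetical method accepting results
    args: { result },
  });
}

main().catch(console.error);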

FHIR interoperability platform choice

I want to create a FHIR-compliant interoperability platform with complex business logic.
Our clients can send FHIR resources to the platform.
The best architecture according to best-practice documentation is a hybrid FHIR + SOA system, as this link says.
Here are two example scenarios I must manage:
The first:
I want to create a ServiceRequest resource with a subject for which I know only the fiscal code as an identifier. If I need other information about the subject, I can query an external database, for example, to get the name, surname, and so on.
Can I do this by sending my interoperability platform only a ServiceRequest, as follows?
"resourceType" : "ServiceRequest",
"subject" : {
"reference" : "Patient?identifier=FISCALCODE"
}
and so on
The second:
I want to create a ServiceRequest resource with a RelatedPerson linked in the requester tag.
The RelatedPerson is not a full registry entry; I know only the name, the surname, and a link to the patient.
Must I create a SOA method createServiceRequest where I pass two parameters, the ServiceRequest and the RelatedPerson? Or can I use a CRUD method for the Bundle resource, where I put my ServiceRequest and my RelatedPerson as entries?
So, trying to summarize, the possible ways are:
Create a method createMyMethodName(ServiceRequest serviceRequest, RelatedPerson relatedPerson)
Is creating and exposing this method FHIR-standard?
If the answer to the first point is YES, my platform will have a lot of custom methods, but I'll have very strict control over the input information
Use a CRUD Bundle method where I pass a Bundle resource with the following entries: ServiceRequest, RelatedPerson
This way I expose only one write method on my platform, but I must implement a lot of code to manage all the input Bundles with their different entries (I suppose a mega switch, with each branch applying the business-logic controls needed to enforce my rules)
This response is not intended as a complete answer to your question, and it comes from a US perspective; however, you may find the perspective useful.
Gotcha with identifier queries
"reference" : "Patient?identifier=FISCALCODE"
As written, your ?identifier=FISCALCODE will query the FISCALCODE key against all code systems. I think what you want is to specify a code system, e.g. ?identifier=<CodeSystem>|<FiscalCode>
This is a common gotcha that's buried in the FHIR search documentation.
You'll either have to reference an existing code system, e.g. an Italy specific implementation guide analogous to US Core that contains the list of FiscalCodes, or author your own.
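For example (the code-system URI and the fiscal code below are placeholders; substitute the canonical URI that your implementation guide, or your own CodeSystem, defines for the Italian fiscal code):

// Placeholder system URI and value; the pattern is system|value.
const subject = {
  reference: "Patient?identifier=https://example.org/sid/fiscal-code|RSSMRA80A01H501U",
};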
Which FHIR integration paradigm are you using?
Before diving into the createMethod vs Bundle question, I think it'd be useful to step back and pick an overall FHIR integration approach.
In my opinion, there are three major approaches:
Load data into an existing stand-alone FHIR server
Challenge: Drift between data loaded in FHIR server and other data warehouses
FHIR server queries non-FHIR API
Challenge: Duplication between FHIR API and non-FHIR API
NB: In the limiting case, there is no data stored in the FHIR server proper. Adding to the confusion, some will call this implementation a "FHIR gateway" instead of a "FHIR server."
FHIR server queries staging database for FHIR data
Challenge: Must write data access for each FHIR resource and each data element.
In the future, there may be a fourth approach where one uses the FHIR mapping language in real-time from an intermediate source model to multiple targets.
Your "CRUD Bundle method" is more in-line with POSTing data to a stand-alone FHIR server, whereas your "createMyMethodName" is more in-line with writing DAOs (Data Access Objects) to an external database.
In the limit where you don't need to maintain synchrony between the FHIR server and source data systems, importing data into a stand-alone FHIR server is much less work.
In the limit where you already have mappings to an intermediate data model (in the US, many large service providers will have mappings to either the USCDI or the Common Clinical Dataset), you'll have an easier lift writing CRUD in the FHIR server against an existing database.
For a more in-depth discussion, there was a FHIR integration patterns talk at FHIR Dev Days 2018, starting at Slide 21. Note that the author assumes a familiarity with architectural patterns such as the facade pattern.
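To make the "CRUD Bundle method" option concrete: in FHIR that usually means a single POST of a transaction Bundle to the server root, with entries cross-referencing each other via temporary urn:uuid fullUrls that the server rewrites when it commits the transaction. A minimal sketch, with all resource contents, identifiers, and the server URL invented for illustration:

// Sketch of a FHIR transaction Bundle creating a RelatedPerson and a
// ServiceRequest that references it; all identifiers and URLs are illustrative.
const transaction = {
  resourceType: "Bundle",
  type: "transaction",
  entry: [
    {
      fullUrl: "urn:uuid:61ebe359-bfdc-4613-8bf2-c5e300945f0a",
      resource: {
        resourceType: "RelatedPerson",
        name: [{ family: "Rossi", given: ["Maria"] }],
        patient: { reference: "Patient?identifier=https://example.org/sid/fiscal-code|RSSMRA80A01H501U" },
      },
      request: { method: "POST", url: "RelatedPerson" },
    },
    {
      resource: {
        resourceType: "ServiceRequest",
        status: "active",
        intent: "order",
        subject: { reference: "Patient?identifier=https://example.org/sid/fiscal-code|RSSMRA80A01H501U" },
        // The server rewrites this temporary reference to the created RelatedPerson's id.
        requester: { reference: "urn:uuid:61ebe359-bfdc-4613-8bf2-c5e300945f0a" },
      },
      request: { method: "POST", url: "ServiceRequest" },
    },
  ],
};

// One write endpoint for everything: POST the Bundle to the server root.
fetch("https://fhir.example.org/", {
  method: "POST",
  headers: { "Content-Type": "application/fhir+json" },
  body: JSON.stringify(transaction),
}).then((res) => console.log("transaction status:", res.status));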
Select a stand-alone server or library
Unless you have a compelling requirement or are a large company, it's advisable to use an existing open-source stand-alone server or library implementation. The three most popular are:
HAPI-FHIR (Java)
Microsoft (.NET)
IBM (Java)
If taking the stand-alone option, popular commercial FHIR servers include:
Microsoft (hosted in Azure)
Smile CDR (commercial version of HAPI-FHIR)
Firely Vonk

FHIR: do I need to have StructureDefinition for all the resources?

The FHIR Conformance layer includes the StructureDefinition resource, and I'm trying to understand whether it is mandatory to provide anything there when my server does not have any custom resources.
We are going to support multiple Implementation Guides (e.g. US Core and CARIN BB), which have their own profiles and extensions. But all of their StructureDefinitions are already defined on hl7.org, and I can link to those profiles from my CapabilityStatement and instances. So do I need to expose those StructureDefinitions on my server?
Or should it just be empty, since I don't have anything custom?
Your CapabilityStatement should declare a StructureDefinition for each resource you support that indicates what your actual system capabilities are, i.e. what data you can actually consume or produce. Typically this will involve a combination of the expectations of a variety of profiles as well as some additional stuff. You may have limits on repetitions, you might not support certain optional elements from some profiles, and you might support some additional elements or extensions that none of the profiles expect support for. Very few implementations will have internal support that exactly matches an official published profile. However, if you do, you could technically point to that official profile rather than creating one of your own.
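For illustration, the relevant fragment of a CapabilityStatement rest.resource entry could look like the sketch below; the supportedProfile URL is the published US Core Patient canonical, while the server-specific profile URL is made up:

// Fragment of a CapabilityStatement rest.resource entry (R4), shown as a
// TypeScript object for readability.
const patientResourceEntry = {
  type: "Patient",
  // Your own profile describing what the server actually supports (only needed
  // if no official published profile matches your capabilities exactly).
  profile: "https://fhir.example.org/StructureDefinition/my-server-patient",
  // Published profiles you claim support for; referencing the canonical URLs
  // means you don't have to re-host those StructureDefinitions yourself.
  supportedProfile: [
    "http://hl7.org/fhir/us/core/StructureDefinition/us-core-patient",
  ],
  interaction: [{ code: "read" }, { code: "search-type" }],
};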

Inventory Item MFN M15 message in fhir

Recently I implemented an MFN M15 message sender using HL7 (v2.x).
I see that HAPI FHIR is emerging as a de-facto standard. Can someone point out what the MFN M15 message counterpart would be in https://hapifhir.io/?
NOTE:
I've already downloaded ca.uhn.hapi.fhir:org.hl7.fhir.dstu3:4.2.10-SNAPSHOT and started investigating org.hl7.fhir.dstu3.model, but some direction would be more than welcome.
Work on managing inventory is still an area of exploration in FHIR (definitely a long way from a 'de-facto standard' in this space, though FHIR is certainly becoming that for some areas of healthcare). In DSTU3, you may be able to cobble together part of a solution using Device and Medication. You might also look at this stream on chat.fhir.org: https://chat.fhir.org/#narrow/stream/179165-committers/topic/inventory.20tracking

gRPC and schema evolution guarantees?

I am evaluating gRPC. On the subject of compatibility with 'schema evolution', I did find the information that protocol buffers, which gRPC exchanges for data serialization, have a format such that different evolutions of the data can stay compatible, as long as the schema evolution allows it.
But that does not tell me whether two iterations of a gRPC client/server will be able to exchange a command that did not change in the schema, regardless of the changes in the rest of the schema.
Does gRPC guarantee that an older generated version of client or server code will always be able to issue/answer a command that has not changed in the schema file, against code generated from any more recent schema on the other side, regardless of the rest of the schema? (Assuming no other breaking changes, like a non-backward-compatible gRPC version change.)
gRPC has compatibility for a method as long as:
the proto package, service name, and method name are unchanged
the proto request and response messages are still compatible
the cardinality (unary vs streaming) of the request and response messages remains unchanged
New services and methods can be added at will and not impact compatibility of preexisting methods. This doesn't get discussed much because those restrictions are mostly what people would expect.
There is actually some wiggle room in changing cardinality as unary is encoded the same as streaming on-the-wire, but it's generally just better to assume you can't change cardinality and add a new method instead.
This topic is discussed in the Modifying gRPC Services over Time talk (PDF slides and YouTube links available). I'll note that the slides are not intended to stand alone.
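If you want to see the message-compatibility half of this in isolation, a small protobufjs sketch shows an old client's bytes being decoded against a newer schema (the package, message, and field names are invented for the example):

// Demonstrates that a message encoded from an older schema still decodes
// against a newer schema that only added a field with a new tag number.
import * as protobuf from "protobufjs";

const oldSchema = `
  syntax = "proto3";
  package demo;
  message GetThingRequest { string id = 1; }
`;

const newSchema = `
  syntax = "proto3";
  package demo;
  message GetThingRequest {
    string id = 1;
    int32 priority = 2; // new field, new tag number
  }
`;

const OldReq = protobuf.parse(oldSchema).root.lookupType("demo.GetThingRequest");
const NewReq = protobuf.parse(newSchema).root.lookupType("demo.GetThingRequest");

// Bytes produced by an "old" client...
const bytes = OldReq.encode(OldReq.create({ id: "thing-42" })).finish();

// ...decode fine on a "new" server; the added field just takes its default.
const decoded = NewReq.decode(bytes);
console.log(decoded.toJSON()); // { id: "thing-42" } (priority defaults to 0)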
