JSON-LD multiple types - markup

I am currently doing some JSON-LD, and I am quite new to it (and to coding in general). I am trying to figure out how I can use different types in one script, as you can see below. I cannot work out what I am doing wrong or what I should change to make it work. Thanks
<script type="application/ld+json">
{
"#context": "https://schema.org",
"#type": "Course",
"name": "MSc in IT- Web Communication Design",
"coursePrerequisites": "The following bachelor degree programmes from the University of Southern Denmark and from other universities provide access to the Master’s degree in Web Communication Design: A relevant professional bachelor's degree, e.g. web developer, software developer, business language and IT-based marketing communication, school teacher, nurse, educator, social worker.",
"occupationalCredentialAwarded": "As a student of the MSc in IT – Web Communication Design you will gain specialised skills in web-based communication and knowledge management. Your choice of elective courses, your projects, your thesis as well as your bachelor background qualify you to work with: Web development, digitalisation, web design, digital skills development, social media, etc.",
"description":"Master of Science in IT Web Communication Design. A multi-disciplinary graduate programme that combines IT, communication and organisation. We emphasise the interaction between humans and information technology and combine research-based knowledge with challenges from practice."
},
"provider": {
"#type": "Organization",
"name": "University of Southern Denmark",
"department": "Institute for Design and Communication",
"address": "Universitetsparken 1, 6000 Kolding, Denmark",
"telephone": "+45 65 50 10 00"
},
{
"#context": "http://schema.org",
"#type": "EducationalOccupationalCredential",
"programPrerequisites": "You are expected to have basic knowledge of HTML and CSS before you commence the programme. This may be from courses in your Bachelor's, but it is also possible to obtain this knowledge through online tutorials, e.g. w3schools.com."
}
</script>

Here's a version that validates:
<script type="application/ld+json">{
"#context": "https://schema.org",
"#type": "Course",
"name": "MSc in IT- Web Communication Design",
"coursePrerequisites": "You are expected to have basic knowledge of HTML and CSS before you commence the programme. This may be from courses in your Bachelor's, but it is also possible to obtain this knowledge through online tutorials, e.g. w3schools.com.",
"occupationalCredentialAwarded": "As a student of the MSc in IT – Web Communication Design you will gain specialised skills in web-based communication and knowledge management. Your choice of elective courses, your projects, your thesis as well as your bachelor background qualify you to work with: Web development, digitalisation, web design, digital skills development, social media, etc.",
"description": "Master of Science in IT Web Communication Design. A multi-disciplinary graduate programme that combines IT, communication and organisation. We emphasise the interaction between humans and information technology and combine research-based knowledge with challenges from practice.",
"provider": {
"#type": "Organization",
"name": "University of Southern Denmark",
"department": "Institute for Design and Communication",
"address": "Universitetsparken 1, 6000 Kolding, Denmark",
"telephone": "+45 65 50 10 00"
}
}</script>
The script expects one top-level object, not a list of objects. To get around that you can use @graph, but with my changes there is only one top-level object anyway.
This is because you want to connect your information. The organization is the provider of the course, so that information should be in the course object.
I wasn't sure about your EducationalOccupationalCredential. I'm guessing coursePrerequisites is closer to what you want.
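If you really did need two unconnected top-level items in one script, a rough sketch of the @graph form would be a single top-level object whose @graph array holds both items (the property values below are only placeholders):
<script type="application/ld+json">
{
"@context": "https://schema.org",
"@graph": [
{
"@type": "Course",
"name": "MSc in IT- Web Communication Design"
},
{
"@type": "EducationalOccupationalCredential",
"name": "MSc degree"
}
]
}
</script>
But connecting the information under one Course object, as above, is still the better option here.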

Related

Anomaly detection on Azure Databricks Diagnostic audit logs

I have a lot of audit logs coming from the Azure Databricks clusters I am managing. The logs are simple application audit logs in JSON format. They contain information about jobs, clusters, notebooks, etc., and you can see a sample of one record here:
{
"TenantId": "<your tenant id",
"SourceSystem": "|Databricks|",
"TimeGenerated": "2019-05-01T00:18:58Z",
"ResourceId": "/SUBSCRIPTIONS/SUBSCRIPTION_ID/RESOURCEGROUPS/RESOURCE_GROUP/PROVIDERS/MICROSOFT.DATABRICKS/WORKSPACES/PAID-VNET-ADB-PORTAL",
"OperationName": "Microsoft.Databricks/jobs/create",
"OperationVersion": "1.0.0",
"Category": "jobs",
"Identity": {
"email": "mail#contoso.com",
"subjectName": null
},
"SourceIPAddress": "131.0.0.0",
"LogId": "201b6d83-396a-4f3c-9dee-65c971ddeb2b",
"ServiceName": "jobs",
"UserAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.108 Safari/537.36",
"SessionId": "webapp-cons-webapp-01exaj6u94682b1an89u7g166c",
"ActionName": "create",
"RequestId": "ServiceMain-206b2474f0620002",
"Response": {
"statusCode": 200,
"result": "{\"job_id\":1}"
},
"RequestParams": {
"name": "Untitled",
"new_cluster": "{\"node_type_id\":\"Standard_DS3_v2\",\"spark_version\":\"5.2.x-scala2.11\",\"num_workers\":8,\"spark_conf\":{\"spark.databricks.delta.preview.enabled\":\"true\"},\"cluster_creator\":\"JOB_LAUNCHER\",\"spark_env_vars\":{\"PYSPARK_PYTHON\":\"/databricks/python3/bin/python3\"},\"enable_elastic_disk\":true}"
},
"Type": "DatabricksJobs"
}
At the moment I am storing the logs in Elasticsearch, and I was planning to use its Anomaly Detection tool on this type of log. Therefore, I do not need to implement any algorithm, but rather choose the right attribute, perform the right aggregation, or maybe combine several attributes in a multivariate analysis. However, I am not familiar with this topic, nor do I have a background in it. I have read Anomaly Detection: A Survey by Chandola et al., which was quite useful in pointing me to the right sub-field.
So I have understood that I am dealing with time series, and depending on the kind of aggregation I perform I might face collective anomalies on sequence data (e.g. the ActionName field of these logs) or contextual anomalies on sequence data.
I was wondering whether you could point me in the right direction, since I haven't managed to find any related work on anomaly detection for audit logs. More specifically, what kind of anomalies should I investigate, and which kind of aggregation would be beneficial?
Please keep in mind that I have quite a large amount of data. Moreover, I would appreciate any kind of feedback, even if it doesn't involve Elasticsearch; feel free to propose a full unsupervised machine learning method for this kind of anomaly detection scenario rather than a simpler Elasticsearch use case.
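To make the kind of aggregation I have in mind concrete, here is a rough sketch of an Elasticsearch anomaly detection job that would model the rate of each ActionName per user. This is only a sketch: the exact endpoint and the field paths (e.g. Identity.email) depend on the Elasticsearch version and on how these JSON records are mapped in my index.
PUT _ml/anomaly_detectors/databricks-audit-actions
{
"description": "Unusual rate of audit actions per user",
"analysis_config": {
"bucket_span": "15m",
"detectors": [
{
"function": "count",
"by_field_name": "ActionName",
"partition_field_name": "Identity.email"
}
],
"influencers": ["Identity.email", "SourceIPAddress", "ActionName"]
},
"data_description": {
"time_field": "TimeGenerated"
}
}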

Is there a google API for "People also ask"? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 2 years ago.
Is there any API to access the "People also ask" questions in Google's search result list?
Take a look at this example; my search query was: what is google search
Google presents these other questions:
(screenshot of Google's "People also ask" box)
SerpApi seems to do it.
Source URL: https://serpapi.com/search.json?q=How+do+you+earn+bitcoins%3F&location=Dallas&hl=en&gl=us&source=test
Look at the related_questions key:
"related_questions": [
{
"question": "How can I get free Bitcoin?",
"snippet": "Earn Free Bitcoins Daily with No Investment from InternetOnline Home Income recommend to Earn Free Bitcoins as the number FIVE option because earning bitcoins is really easy and free to join.Check your BTC balance at any time through this address https://blockchain.info/address/<your own BTC address>.KIND ATTN:- You cannot get cash directly from Bitcoins… ... Earning example:More items...•Oct 9, 2018",
"title": "Earn Free Bitcoins Daily with No Investment from Internet",
"link": "https://www.onlinehomeincome.in/earn-free-bitcoins-daily.php",
"displayed_link": "https://www.onlinehomeincome.in/earn-free-bitcoins-daily.php"
},
{
"question": "How do you get bitcoins?",
"snippet": "Seventh, you can get bitcoins by accepting them as a payment for goods and services or by buying them from a friend or someone near you. You can also buy them directly from an exchange with your bank account. Eighth, there is a growing number of services and merchants accepting Bitcoin all over the world.",
"title": "5 Easy Steps To Get Bitcoins and Learning How To Use Them",
"link": "https://www.weusecoins.com/en/getting-started/",
"displayed_link": "https://www.weusecoins.com/en/getting-started/"
},
{
"question": "Can you make money on Bitcoin?",
"snippet": "Bitcoin is just like real money. For some strange reason, people tend to think that because Bitcoin is a new form of currency, there is some magical way you can earn Bitcoins or make money from it easily.Oct 10, 2018",
"title": "How to Get Bitcoins? 12 Ways for Making Money with Bitcoin in 2018",
"link": "https://99bitcoins.com/earn-bitcoins-fast-free/",
"displayed_link": "https://99bitcoins.com/earn-bitcoins-fast-free/"
},
{
"question": "What is Bitcoin and how it works?",
"snippet": "A transaction is a transfer of value between Bitcoin wallets that gets included in the block chain. Bitcoin wallets keep a secret piece of data called a private key or seed, which is used to sign transactions, providing a mathematical proof that they have come from the owner of the wallet.",
"title": "How does Bitcoin work? - Bitcoin - Bitcoin.org",
"link": "https://bitcoin.org/en/how-it-works",
"displayed_link": "https://bitcoin.org/en/how-it-works"
}
]

Resource Identifiers between two FHIR servers

Our scenario is that an EHR system is integrating with a device sensor partner using FHIR. Both companies will have independent FHIR servers. Each of them has its own Patient and Organization records with its own identifiers. The preference is that the sensor FHIR server keeps the mapping of EHR identifiers to its own internal identifiers for these resources.
The EHR wants to assign a Patient to a Device with the sensor FHIR server.
Step 1: First, the EHR would GET from the sensor FHIR server the list of Device resources for a given Organization where a Patient is not currently assigned, e.g.
/api/Device?organization.identifier=xyz&patient:missing=true
Here I would assume the Organization identifier is that of the EHR system since the EHR system doesn't have knowledge of the sensor system Organization identifier at this point.
The reply to this call would be a bundle of devices:
... snip ...
"owner": {
"reference": "http://sensor-server.com/api/Organization/3"
},
... snip ...
Question 2: Would the owner Organization reference have the identifier from the search or the internal/logical ID as it's known by the sensor FHIR server as in the snippet above?
Step 2: The clinician of the EHR system chooses a Device from the list to assign it to a Patient in the EHR system
Step 3: The EHR system will now issue a PUT /api/Device/{id} request back to the sensor FHIR server to assign a Patient resource to a Device resource, e.g.
{
"resourceType": "Device",
"owner": {
"reference": "http://sensor-server.com/api/Organization/3"
},
"id": "b4994c31f906",
"patient": {
"reference": "https://ehr-server.com/api/Patient/4754475"
},
"identifier": [
{
"use": "official",
"system": "bluetooth",
"value": "b4:99:4c:31:f9:06",
"label": "Bluetooth address"
}
]
}
Question 3: What resource URI/identifier should be used for the Patient resource? I would assume it is that of the EHR system, since the EHR system doesn't have knowledge of the sensor system's Patient identifier. Notice, however, that the Organization reference is to a URI on the sensor FHIR server while the Patient reference is a URI on the EHR system; this smells funny.
Step 4: The EHR can issue a GET /api/Device/{id} on the sensor FHIR server and get back the Device resource, e.g.
{
"resourceType": "Device",
"owner": {
"reference": "http://sensor-server.com/api/Organization/3"
},
"id": "b4994c31f906",
"patient": {
"reference": "https://sensor-server.com/api/Patient/abcdefg"
},
"identifier": [
{
"use": "official",
"system": "bluetooth",
"value": "b4:99:4c:31:f9:06",
"label": "Bluetooth address"
}
]
}
Question 4: Would we expect to see a reference to the Patient containing the absolute URI to the EHR FHIR server (as it was on the PUT in Step 3), or would/could the sensor FHIR server have modified that to return a reference to a resource on its own FHIR server using its internal logical ID?
I didn't see a Question 1, so I'll presume it's the "assume" sentence in front of your first example. If the EHR is querying the device sensor server and the organizations on the device sensor server include the business identifier known by the EHR, then that's reasonable. You would need some sort of business process to ensure that occurs though.
Question 2: The device owner element would be using a resource reference, which means it's pointing to the "id" element of the target organization. Think of resource ids as primary keys. They're typically assigned by the server that's storing the data, though in some architectures they can be set by the client (who creates the record using PUT instead of POST). In any event, you can't count on them being meaningful business identifiers - and according to most data storage best practices, they generally shouldn't be. And if, as I expect, your scenario involves multiple EHR clients potentially talking to the "device" server, the resource id couldn't possibly align with the business ids of all of the EHRs. (That's a long way of saying: no, 'xyz' probably won't be '3'.)
Question 3: If the EHR has its own server, the EHR client could update the device on the "sensor" server to point to a URL on the EHR server. Whether that's appropriate or not depends on your architecture. If you want other EHRs to recognize the patient, then you'd probably want the "sensor" server to host patients too and for the EHR to look up the patient by business id and then reference the "sensor" server's URL. If not, then pointing to the EHR server's URL is fine.
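As a rough sketch of that second option (the identifier system below is made up purely for illustration): the EHR would first search the sensor server by the business identifier it already knows, e.g.
GET http://sensor-server.com/api/Patient?identifier=urn:oid:1.2.3.4|<ehr-patient-identifier>
and then reference the matching Patient by the sensor server's own URL in the Device update:
"patient": {
"reference": "http://sensor-server.com/api/Patient/abcdefg"
}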
Question 4: When you do a GET, it's normal to receive back the same data you specified on the PUT or POST. It's legal for the server to change the data, including possibly updating references. But that's likely to confuse a lot of client systems, so it's not generally recommended or typical.

Why isn't the Google QPX Express API returning results for all airlines?

I enabled access to the Google QPX Express API to do some analytics on the prices of Delta's tickets and Fare Classes. But the response seems to only include flights from a limited set of airlines.
For example, the following request
{
"request": {
"passengers": {
"adultCount": 1
},
"slice": [
{
"origin": "JFK",
"destination": "SFO",
"date": "2015-02-15",
"maxStops": 0
}
],
"solutions": 500
}
}
only returns flights for AS (Alaska Airlines), US (US Air), VX (Virgin America), B6 (JetBlue), and UA (United Airlines).
If I add "permittedCarriers": ["DL"], then I get an empty response. Likewise, I get an empty response if I leave out permittedCarriers and look for flights between Delta hubs (e.g., "origin": "ATL", "destination": "MSP").
The documentation suggests that QPX Express is supposed to have most airline tickets available. Is there something wrong with my request? Why am I not seeing any results for Delta?
I received a response from Google's QPX Express help team about the missing data for Delta. The response was that
Delta's data, as well as American Airlines' data, is not included in QPX Express search results by default. Access to their data requires approval by those carriers.
After informing him that my plans to use the data were for research purposes, he responded:
American and Delta restrict access to their pricing and availability to companies which they approve, which are primarily organizations driving the sale of airline tickets. Unfortunately, requests for access are only being reviewed for companies that plan to use the API for commercial purposes.

Steam trading cards API or achievements api

Is there an API to get a user's Steam trading cards?
I'm not very familiar with Steam, but it doesn't seem to be on this page:
https://developer.valvesoftware.com/wiki/Steam_Web_API#GetPlayerSummaries_.28v0001.29
There's an achievements API; would that get me the trading cards info as well?
There is not an API for the trading cards (yet). You can, however, still find them. It does depend on the user's privacy settings though. I went into more detail on this question and believe it will help you out.
Achievements can be pulled via the GetPlayerAchievements API call using the following format:
http://api.steampowered.com/ISteamUserStats/GetPlayerAchievements/v0001/?appid=<APPID>&key=<APIKEY>&steamid=<PROFILEID>&l=<LANG>
APPID is the application ID the achievements are associated with (e.g. Team Fortress 2 is 440)
APIKEY is the API key Valve assigned you
PROFILEID is the 64-bit player ID given to you when you sign up on Steam
LANG is the language you wish to return the descriptions in (this parameter is optional and not including it removes the name and description fields from the results). en is for English.
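For example, a filled-in request for Team Fortress 2 with English descriptions would look like this (the API key and profile ID are placeholders):
http://api.steampowered.com/ISteamUserStats/GetPlayerAchievements/v0001/?appid=440&key=<APIKEY>&steamid=<PROFILEID>&l=en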
In the response is a listing of all of the achievements in the game.
{
"apiname": "TF_MVM_PYRO_BOMB_RESET",
"achieved": 0,
"name": "Hard Reset",
"description": "As a Pyro, reset the bomb 3 times in a single wave."
},
{
"apiname": "TF_MVM_ENGINEER_ESCAPE_SENTRY_BUSTER",
"achieved": 1,
"name": "Real Steal",
"description": "As an Engineer, escape with your sentry as a sentry buster is about to detonate."
},
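Those achievement objects sit inside a wrapper that, as far as I recall, is shaped roughly like this (values are illustrative):
{
"playerstats": {
"steamID": "<PROFILEID>",
"gameName": "Team Fortress 2",
"achievements": [
{
"apiname": "TF_MVM_PYRO_BOMB_RESET",
"achieved": 0,
"name": "Hard Reset",
"description": "As a Pyro, reset the bomb 3 times in a single wave."
}
],
"success": true
}
}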
