How can I add a path alias to a Drupal 9 view?

In a Drupal 9 installation there is a view where nodes are listed. Now I would like to do two things:
1. Add a field with the node's path alias to the view.
2. Add an exposed filter to filter by path alias.
Unfortunately I am not able to do this. The only way I found to add a field with the path alias is the field "Link to Content", which can be output as plain text.
The field is displayed, but I cannot create an exposed filter for it. The error is "Column not found node.view_node".
Any ideas?
Thank you

First you need to install the Pathauto module: https://www.drupal.org/project/pathauto
Then create a pattern for the alias; this is where you define how the alias for a node is generated.
Now go to the view, and in the configuration of the field you created the pattern for, open the rewrite results section; among the available replacement tokens there will be one for the alias. Copy that token and place it in the rewrite text above.
Then, for the filter part, go to the contextual filters, add a filter on ID, and apply the following configuration:
(screenshot: contextual filter configuration)
That's how I did it; hope it helps.

Related

Google Drive API Listing Files with Field Parameter Not Working

I am currently listing some files with the Google Drive API. However, the default list only lists id, name, and mimeType. I know that the fields parameter can list more than just the default, but when I put parents as a field in the Google API Playground, I get the error of Invalid field selection parents. However, when I use * in the fields parameter, it returns all the fields. Am I doing anything wrong by putting parents in the fields parameter? If so, does anyone have any idea how to include the parents field as a field in the list results?
Here is my current endpoint, which causes the error:
https://www.googleapis.com/drive/v3/files?fields=parents&key=[YOUR_API_KEY]
Thanks!
In your situation, when parents is put directly into fields, as in
https://www.googleapis.com/drive/v3/files?fields=parents&key=[YOUR_API_KEY],
the error Invalid field selection parents occurs. This is because the "Files: list" method returns a file list, which is an array containing the metadata of each file, so the field selection has to go through the files array. I think that this is the reason for your issue.
Solution:
So in this case, please set fields to files(parents) instead of parents.
Note:
In the case of files(parents), only parents is returned. When you want to retrieve id, name, mimeType and parents, please use files(id,name,mimeType,parents) for fields.
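For reference, here is a minimal Python sketch of the corrected call. The requests library and the API-key placeholder are my own additions, not part of the answer; listing private files would normally require an OAuth access token rather than a bare key.
import requests

API_KEY = "[YOUR_API_KEY]"  # placeholder, kept as in the question

resp = requests.get(
    "https://www.googleapis.com/drive/v3/files",
    params={
        # "parents" alone is rejected; file metadata sits inside the "files" array,
        # so the selector must be written as files(...)
        "fields": "files(id,name,mimeType,parents)",
        "key": API_KEY,
    },
)
resp.raise_for_status()

for f in resp.json().get("files", []):
    print(f["id"], f["name"], f["mimeType"], f.get("parents"))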
References:
Try this API of "Files: list"
Files

Problems defining a new Elastic data source in Grafana with dots in the time field name

I'm trying to define a new data source in Grafana.
The data source is an Elastic index (which I'm not responsible for).
When trying to Save & Test the new data source, I get the following error:
No date field named Date.Epoch found
This field is the same field that is set in the Kibana index pattern as the time filter field, so I'm sure there is no typo or other confusion.
After a lot of searching online, I suspect that the problem is caused by the dot (.) in the field name.
Is there any way to escape the dot, or another solution that doesn't require changing the index?
Update: I opened an issue in Grafana's github project https://github.com/grafana/grafana/issues/27702
Try using advanced variable formatting and use raw value if you have escaping problems:
$variable
or
${variable:raw}

Using a Content ID Contextual Filter in a Views Block in Drupal 8

I am attempting to create a View that, instead of showing a list of all nodes of a content type, will show only a single node of a content type based on node ID. In Drupal 7, I worked almost exclusively in Views Content Panes and was able to achieve this based on NID and then setting the Argument Input to From Context: Content ID. How do I get similar results using Blocks in Drupal 8?
I have a view that is correctly configured to show all nodes of a content type. I've tried to add a Contextual Filter: ID, but I cannot figure out how to configure it to get a result that isn't All Results.
Thank you in advance!
When you edit the contextual filter Content ID, you have:
WHEN THE FILTER VALUE IS NOT AVAILABLE (the base view is built without the filter, which is the case here)
Check Provide default value to set how filter values should be retrieved, then you can choose a type, for example Content ID from URL, or Query parameter, etc.
For example with Query parameter you can set the parameter name and the Fallback value. In your case you would set something like nid as the query param, and all or a fixed node ID as fallback value ('all' is the exception value by default that is to disable the filter).
Given this example, you'd just add the query ?nid=5 to the request path. It seems you need the block filtered by default, though; in that case, just set a fixed node ID (e.g. 5 instead of all) as the fallback value in the Views admin, and the block will be filtered by default in the same way.

How to find fields with mapping conflicts

My index settings in Kibana tell me that I have fields with mapping conflicts in my logstash-* index patterns.
What is the easiest way to find out which fields have a conflicting mapping and/or in which indices the conflict occurs?
As of at least Kibana 5.2, you can type "conflict" into the Filter field, which will filter all fields down to only those which have a conflict. At the far right there is a column named "controls", and for each field it has a button with a pencil icon. Clicking that will tell you which indices have which mapping.
(Screenshot: fields filtered to only those with conflicts.)
(Screenshot: indices in which the field mapping conflicts.)
You can easily find how fields are mapped using the mapping API in Kibana.
If you know you have a mapping conflict, I will assume you know the field name that has the conflict. Conflicted fields are listed under Management / Index Patterns / <index pattern>.
If you have indices that are created daily, such as production-2020.06.16, you can search across all the indices with production*.
Go to Dev Tools and enter this query, changing the index pattern (production*) and conflictedFieldname to suit your needs.
GET production*/_mapping/field/conflictedFieldname
This will pull all indices that match the production* pattern and will list the mapping for conflictedFieldname for each index. Scroll through and see which one is not like the other one.
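If there are many daily indices, scrolling through the response by hand gets tedious. Here is a rough Python sketch of my own (not from the answer) that calls the same field-mapping endpoint and groups indices by the mapped type, so the odd one out stands out. The host, index pattern, and field name are placeholders, and the parsing assumes a 7.x-style (typeless) mapping response.
# Group indices by the mapped type of one field so a conflicting index is easy to spot.
# Host, pattern and field name are placeholders; adjust for your cluster.
import requests

HOST = "http://localhost:9200"
PATTERN = "production*"
FIELD = "conflictedFieldname"

resp = requests.get(f"{HOST}/{PATTERN}/_mapping/field/{FIELD}")
resp.raise_for_status()

types_seen = {}
for index, body in resp.json().items():
    # 7.x typeless shape: {"mappings": {"<field>": {"full_name": ..., "mapping": {"<leaf>": {"type": ...}}}}}
    mapping = body.get("mappings", {}).get(FIELD, {}).get("mapping", {})
    for leaf in mapping.values():
        types_seen.setdefault(leaf.get("type"), []).append(index)

for field_type, indices in types_seen.items():
    print(field_type, "->", indices)
Running it prints one line per mapped type with the list of indices using it; the conflicting indices are whichever group you did not expect.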
You can also check out the Elasticsearch documentation here: Elasticsearch documentation: Get Field Mapping API
The reason you're getting a conflict is that the first value that goes into the index is used by Elasticsearch to make its best guess at what data type the field should be. You can ensure it is always the same type by putting an index template in place for the index pattern you are concerned with.
Elasticsearch documentation: Put Index Template
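As a rough illustration of that idea, something like the following pins the field's type for all future indices matching the pattern. This is my own sketch against the legacy _template endpoint; the host, template name, index pattern, field name, and chosen type are placeholders, and newer clusters may prefer the composable _index_template API instead.
# Create a legacy index template that fixes the type of one field for every
# future index matching the pattern. All names and the chosen type are placeholders.
import requests

HOST = "http://localhost:9200"

template = {
    "index_patterns": ["production-*"],
    "mappings": {
        "properties": {
            "conflictedFieldname": {"type": "keyword"}  # pick the type you actually want
        }
    },
}

resp = requests.put(f"{HOST}/_template/pin-conflicted-field", json=template)
resp.raise_for_status()
print(resp.json())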
In Elasticsearch 5.5.2, you can click the dropdown on the right of the Filter search box and select "conflict"; this is on the Index Patterns page.
It should be easy to spot those in the list of fields when defining the pattern.
Since I couldn't locate the mapping conflict in the GUI, I went down the hard path: analysed my config for the missing/conflicting field type, found the offender, and reindexed my data.
If you click the type column on the index patterns page where the warning is displayed, it should sort the indexes by type. Conflicted fields will have type 'conflict'.

Elasticsearch not searching some fields

I have just updated a website; the update adds new fields to Elasticsearch.
In my dev environment it all works fine, but on the live site the new fields are not being found.
E.g. I have added a new field with the value 1.
However, when adding a filtered query of
{"field":1}
it does not find any matching results.
When I look at the documents, I can see docs with the field set to 1.
Would the reason for this be that the new field was added after the mapping was set? I am not all that familiar with Elasticsearch, so I am not really sure where to start looking to fix it.
Any help would be appreciated.
Update:
Querying from the URL shows nothing either:
_search/?pretty=true&size=50&q=field1:*
However, there is another field that was added at the same time which I can search on.
I can see field1 in the result set, but it just won't allow me to search on it.
The only difference I see in the mapping is that the field that is working is set to type: long, whereas the one that is not working is set to type: string.
Is it a length issue with the ngram? What were your "min_gram" settings?
When you check on your index settings like this:
GET <host>/<index_name>/_settings
Does it work when you filter for a two-digit value?
Are all the field values one digit?
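To check the min_gram setting mentioned above, you can pull the index settings and look at the analysis section. A small Python sketch of my own (host and index name are placeholders):
# Fetch the index settings and print the analysis section, which is where any
# ngram tokenizer/filter (and its min_gram) would be configured.
# Host and index name are placeholders.
import json
import requests

HOST = "http://localhost:9200"
INDEX = "my_index"

resp = requests.get(f"{HOST}/{INDEX}/_settings")
resp.raise_for_status()

for index, body in resp.json().items():
    analysis = body.get("settings", {}).get("index", {}).get("analysis", {})
    print(index)
    print(json.dumps(analysis, indent=2))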
It's OK to add a field after the mapping was set; Elasticsearch will guess the mapping for you (in fact, it's one of their selling features: no need to define the mapping, just throw the data at it).
There are a few things that can go wrong:
Verify that data is actually in the index. To do that, just navigate to the _search URL with no parameters; you should see the field if it is indexed.
Look at your mapping. Could it be that the field is explicitly set not to be indexed? (Both of these checks are sketched below.)
Another possibility is that your query is wrong, but that is unlikely, since you're saying it works in the development environment.
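To make the first two checks concrete, here is a small Python sketch of my own; the host, index, and field names are placeholders. It asks the field-mapping API how the field is mapped (and whether indexing is disabled for it) and then runs a plain search to eyeball a few stored documents.
# Two quick checks: (1) how is the field mapped, and is "index" disabled for it?
# (2) does an unfiltered search return documents containing the field?
# Host, index and field names are placeholders.
import requests

HOST = "http://localhost:9200"
INDEX = "my_index"
FIELD = "field1"

# Check 1: the field's mapping
mapping = requests.get(f"{HOST}/{INDEX}/_mapping/field/{FIELD}").json()
print("mapping:", mapping)

# Check 2: a plain search with no query, just to look at the stored documents
hits = requests.get(f"{HOST}/{INDEX}/_search", params={"size": 5}).json()
for hit in hits.get("hits", {}).get("hits", []):
    print(hit["_source"].get(FIELD))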
