InfluxDB: How to convert a field to a tag in InfluxDB v2.0 - Flux

We need to convert a field to a tag in InfluxDB v2.0 but are not able to find a proper solution.
Can someone help me achieve this?
The solution we found was to create a new measurement by altering the fields and tags of the existing measurement, but we are not able to achieve that using the Flux language.
Using the Flux query below we can copy data from one measurement to another, but we cannot change a field to a tag while writing the data into the new measurement.
from(bucket: "bucket_name")
    |> range(start: -10y)
    |> filter(fn: (r) => r._measurement == "cu_om")
    |> aggregateWindow(every: 5s, fn: last, createEmpty: false)
    |> yield(name: "last")
    |> set(key: "_measurement", value: "cu_om_new1")
    |> to(org: "org_name", bucket: "bucket_name")
Any help appreciated.

You're almost there with your original code; the to() function has additional parameters that allow this.
If you already have data where a column holds the value you want as a tag, you can specify that column as a tagColumn in to().
Also, the new tag(s) must be string(s).
|> to(
    bucket: "NewBucketName",
    tagColumns: ["NewTagName"],
    fieldFn: (r) => ({ "SomeValue": r._value })
)

Have a look at writing pivoted data to InfluxDB; maybe that's what you need. Using this method, you have control over which columns are written as fields and which as tags:
Use experimental.to() to write pivoted data to InfluxDB. Input data must have the following columns:
_time
_measurement
All columns in the group key other than _time and _measurement are written to InfluxDB as tags. Columns not in the group key are written to InfluxDB as fields.
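A minimal sketch of that approach, again assuming a hypothetical location field that should become a tag: the data is pivoted, and location is added to the group key so experimental.to() writes it as a tag.

import "experimental"

from(bucket: "bucket_name")
    |> range(start: -10y)
    |> filter(fn: (r) => r._measurement == "cu_om")
    |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
    // columns in the group key (other than _time) are written as tags,
    // so put "location" there; pivoted columns outside it become fields
    |> group(columns: ["_measurement", "location"])
    |> set(key: "_measurement", value: "cu_om_new1")
    |> experimental.to(bucket: "bucket_name", org: "org_name")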

Related

How to return a query from Cosmos DB ordered by a date string?

I have a Cosmos DB collection. I need to query all documents and return them in order of creation date. Creation date is a defined field, but for historical reasons it is in string format as MM/dd/yyyy, for example: 02/09/2019. If I just order by this string, the result is chaos.
I am using a LINQ lambda to write my query in my Web API. I have tried to parse the string and to convert the string; both attempts returned "method not supported".
Here is my query:
var query = Client.CreateDocumentQuery<MyModel>(CollectionLink)
    .Where(f => f.ModelType == typeof(MyModel).Name.ToLower() && f.Language == getMyModelsRequestModel.Language)
    .OrderByDescending(f => f.CreationDate)
    .AsDocumentQuery();
I'd appreciate any advice. Thanks. It would be a huge effort to go back and modify the format of the field (which affects many other things); I wish to avoid that if possible.
Chen Wang, since ORDER BY does not support derived values or subqueries (link), you need to sort the derived values yourself, I think.
You could convert the MM/dd/yyyy strings to yyyyMMdd with a UDF in Cosmos DB.
udf:
function getValue(datetime) {
    return datetime.substring(6, 10) + datetime.substring(0, 2) + datetime.substring(3, 5);
}
sql:
SELECT udf.getValue(c.time) AS time FROM c
Then you can sort the array by the property value in C# code. Please follow this case: How to sort an array containing class objects by a property value of a class instance?
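A minimal sketch of that client-side sort, assuming the query above is materialized into a results list whose items expose the reformatted string as a Time property (both names are hypothetical):

using System.Collections.Generic;
using System.Linq;

// Hypothetical result type for the SELECT above.
class TimedDocument
{
    public string Time { get; set; }  // "yyyyMMdd", e.g. "20190209"
}

// Assuming the documents returned by the query were collected here:
List<TimedDocument> results = /* filled from the query results */ new List<TimedDocument>();

// "yyyyMMdd" sorts lexicographically in chronological order,
// so an ordinary string sort is enough.
var ordered = results.OrderByDescending(d => d.Time).ToList();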

readFragment to return all objects of a type

I'm using Apollo Client to request a very structured dataset from my server. Something like:
- Show
    id
    title
    ...
    - Seasons
        number
        - Episodes
            id
            number
            airdate
Thanks to normalization my episodes are stored individually, but I cannot query them. For example, I would like to query all the episodes and then sort them by date to display what is coming next.
The only way I see is to either 'reduce' my show list to an array of episodes and then do the filtering, or to make a new query to the server.
But it would be so much faster if I could get a list of all Episodes from the cache.
Unfortunately, with readFragment you can only query one object by its id.
Question:
Is there a way to query the cache for all object of a defined type?
This answer is late but could help someone else: currently Apollo does not support it. Here is the issue from GitHub, along with a workaround.
https://github.com/apollographql/apollo-client/issues/4724#issuecomment-487373566
Here is the workaround, copied from #superandrew213:
// Extract the whole normalized cache, then read every item of the
// desired type back out as a fragment.
const serializedState = client.cache.extract()

const typeNameItems = Object.values(serializedState)
    .filter(item => item.__typename === 'TypeName')
    .map(item => client.readFragment({
        fragmentName: 'FragmentName',
        fragment: Fragment,
        id: item.id,
    }))
Please note that this method is slow, especially if you have a lot of normalized data.

Disappearing column filter value

In the Power Query Editor, I have a table I want to filter on a specific column. When I click on the arrow on the column header, it first gives me the following items:
When I click "Load More", the first entry "100R1" is not available anymore. I also know there should be other values (like "500"), but those are not shown either...
This behaviour starts only after I do a NestedJoin like so:
= Table.NestedJoin(Source, {"Number"}, Parts, {"Parts"}, "Parts", JoinKind.Inner)
So, the column that I join on is Number, and the column I want to filter on is Type ...
When I try to filter Type on the Source table, it behaves correctly...
How is this possible?
PS: If I adjust the filter manually from:
Table.SelectRows(JoinedTable, each ([Type] = "100R2" or [Type] = "400R1" or [Type] = "400R2"))
to
Table.SelectRows(JoinedTable, each ([Type] = "100R2" or [Type] = "400R1" or [Type] = "400R2" or [Type] = "100R1"))
it effectively keeps instances of "100R1" ...
I once faced a situation where the filters in Power Query lied to me. The problem was solved by clearing the cache.

Kafka Streams DSL: add an optional parameter to disable repartitioning when using `map` `selectKey` `groupBy`

According to the documentation, a stream will be marked for repartitioning after map, selectKey, or groupBy is applied, even if the new key is already partitioned appropriately. Is it possible to add an optional parameter to disable repartitioning?
Here is my use case:
There is a topic that has been partitioned by user_id.
# topic 'user', format '%key,%value'
partition-1:
user1,{'user_id':'user1', 'device_id':'device1'}
user1,{'user_id':'user1', 'device_id':'device1'}
user1,{'user_id':'user1', 'device_id':'device2'}
partition-2:
user2,{'user_id':'user2', 'device_id':'device3'}
user2,{'user_id':'user2', 'device_id':'device4'}
I want to count user_id-device_id pairs using the DSL as follows:
stream
    .groupBy((user_id, value) -> {
        JSONObject event = new JSONObject(value);
        String userId = event.getString("user_id");
        String deviceId = event.getString("device_id");
        return String.format("%s&%s", userId, deviceId);
    })
    .count();
Actually, the new key is already partitioned indirectly, so there is no need to do it again.
If you use .groupBy(), it always causes data repartitioning. If possible, use groupByKey instead, which will repartition data only if required.
In your case, you are changing the keys anyway, so that will create a repartition topic.
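For contrast, a minimal sketch of the groupByKey path, counting per existing key (user_id) without a repartition topic; the stream variable and its types are assumed:

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

// Assumed: KStream<String, String> stream, already keyed by user_id.
// Because the key is not changed, groupByKey() reuses the existing
// partitioning and does not create a repartition topic.
KTable<String, Long> countsPerUser = stream
        .groupByKey()
        .count();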

Lambda Expression for LINQ Select Items

I have this code
var list = _db.Projects
    .Where(item => item.Loc == "IN")
    .Select(p => new { id = p.Id, title = p.Title, pc = p.PostalCode });
The Project table has a lot of columns; I need to query only the required columns dynamically and load those from the database, not all columns with their data.
Questions:
How do I write lambda expressions for a LINQ Select?
How do I reduce database reads by selecting specific columns in Entity Framework?
Look at the expression the C# compiler generated and try to replicate what it does:
Expression<Func<Project, object>> lambda =
    (Project p) => (object)new { id = p.Id, title = p.Title, pc = p.PostalCode };
I hope this code compiles; if not, you'll surely be able to fix it. Afterwards, look at the contents of the lambda variable.
Note that the cast to object is only there to make this compile. You don't need/want it in production.
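A minimal sketch of inspecting that expression tree at runtime, assuming the Project class from the question (with Id, Title, and PostalCode properties):

using System;
using System.Linq.Expressions;

Expression<Func<Project, object>> lambda =
    (Project p) => (object)new { id = p.Id, title = p.Title, pc = p.PostalCode };

// The body is a Convert node (the cast to object) wrapping the
// NewExpression that constructs the anonymous type.
var newExpr = (NewExpression)((UnaryExpression)lambda.Body).Operand;

// Prints the member accesses: p.Id, p.Title, p.PostalCode
foreach (var arg in newExpr.Arguments)
    Console.WriteLine(arg);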
