Phantom DSL Conditional Update - phantom-dsl

I have the following conditional update returning false. But when I check in the database, the columns I was trying to update are in fact updated.
def deliver(d: Delivery, placedDate: java.time.LocalDate, locationKey: String, vendorId: String,
            orderId: String, code: String, courierId: String, courierName: String) = {
  update
    .where(_.placedDate eqs placedDate)
    .and(_.locationKey eqs locationKey)
    .and(_.vendorId eqs vendorId)
    .and(_.orderId eqs orderId)
    .modify(_.status setTo "DELIVERED")
    .and(_.deliveredTime setTo LocalDateTime.now())
    .onlyIf(_.status is "COLLECTED")
    .and(_.deliveryCode is code)
    .future()
    .map(_.wasApplied)
}
Thank you

This is a pass-through value for the phantom driver, which means that the Datastax Java Driver underneath is the one generating it. If you want to follow this up, could you please post a full bug report on GitHub?
Meanwhile, I would suggest not relying on wasApplied if you are simply trying to test, and instead doing a direct read.
Generate some test data and the updated values, perform the update, and compare the final result by reading it back from Cassandra. There are known problems with wasApplied and conditional batch updates, but aside from that I'm expecting this to work.
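A rough read-back sketch, assuming it lives in the same phantom table class as deliver and reuses the column names from the question (the method name statusOf is hypothetical):

def statusOf(placedDate: java.time.LocalDate, locationKey: String, vendorId: String, orderId: String) = {
  // Read back only the status column for the same primary key that deliver targets.
  select(_.status)
    .where(_.placedDate eqs placedDate)
    .and(_.locationKey eqs locationKey)
    .and(_.vendorId eqs vendorId)
    .and(_.orderId eqs orderId)
    .one()
}

In a test you could call deliver, await statusOf, and assert that the stored value is "DELIVERED", regardless of what wasApplied reported.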

Related

Reading CloudWatch log query status in go SDK v2

I'm running a CloudWatch Logs query through the v2 SDK for Go. I've successfully submitted the query using the StartQuery method; however, I can't seem to process the results.
I've got my query ID in a variable (queryID) and am using the GetQueryResults method as follows:
results, err := svc.GetQueryResults(context.TODO(), &cloudwatchlogs.GetQueryResultsInput{QueryId: queryID})
How do I actually read the contents? Specifically, I'm looking at the Status field. If I run the query at the command line, this comes back as a string description. According to the SDK docs, this is a bespoke type "QueryStatus", which is defined as a string with enumerated constants.
I've tried comparing to the constant names, e.g.
if results.Status == cloudwatchlogs.GetQueryResultsOutput.QueryStatus.QueryStatusComplete
but the compiler doesn't accept this. How do I either reference the constants or get to the string value itself?
The QueryStatus type is defined in the separate types package. The Go SDK services are all organised this way.
import "github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs/types"
if results.Status == types.QueryStatusComplete {
fmt.Println("complete!")
}
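If you also want to wait for the query to finish and then walk the rows, here is a hedged sketch; it assumes svc is a *cloudwatchlogs.Client, queryID is the *string returned by StartQuery, and the helper name pollQueryResults is made up for illustration:

package logquery

import (
	"context"
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs"
	"github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs/types"
)

// pollQueryResults blocks until the query reaches a terminal status and then
// prints every field of every result row.
func pollQueryResults(ctx context.Context, svc *cloudwatchlogs.Client, queryID *string) error {
	for {
		out, err := svc.GetQueryResults(ctx, &cloudwatchlogs.GetQueryResultsInput{QueryId: queryID})
		if err != nil {
			return err
		}
		switch out.Status {
		case types.QueryStatusComplete:
			for _, row := range out.Results {
				for _, field := range row {
					fmt.Printf("%s=%s ", aws.ToString(field.Field), aws.ToString(field.Value))
				}
				fmt.Println()
			}
			return nil
		case types.QueryStatusFailed, types.QueryStatusCancelled:
			return fmt.Errorf("query ended with status %s", out.Status)
		default:
			time.Sleep(time.Second) // Scheduled or Running: wait and poll again.
		}
	}
}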

How to Fix Document Not Found errors with find

I have a collection of Person documents, stored in a legacy MongoDB server (2.4) and accessed with the mongoid gem via the Ruby MongoDB driver.
If I perform a
Person.where(email: 'some.existing.email#server.tld').first
I get a result (let's assume I store the id in a variable called "the_very_same_id_obtained_above")
If I perform a
Person.find(the_very_same_id_obtained_above)
I get a
Mongoid::Errors::DocumentNotFound
exception
If I use the JavaScript syntax to perform the query, the result is found:
Person.where("this._id == #{the_very_same_id_obtained_above}").first # this works!
I'm currently trying to migrate the data to a newer version. At the moment I'm mongorestore-ing onto Amazon DocumentDB (MongoDB 3.6 compatible) to run tests, and the issue remains.
One thing I noticed is that those object ids are peculiar:
5ce24b1169902e72c9739ff6 this works anyway
59de48f53137ec054b000004 this requires the trick
The run of zeroes toward the end of the id seems to be highly correlated with the problem (I have no idea why).
That behaviour is Mongoid's default:
# Raise an error when performing a #find and the document is not found.
# (default: true)
raise_not_found_error: true
Source: https://docs.mongodb.com/mongoid/current/tutorials/mongoid-configuration/#anatomy-of-a-mongoid-config
If this doesn't answer your question, it's very likely the find method is overridden somewhere in your code!
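A small sketch of ways to avoid the exception while you investigate, assuming the Person model and the id captured in the question (whether the option is available this way depends on your Mongoid version):

# A criteria-based lookup returns nil instead of raising when nothing matches.
person = Person.where(_id: the_very_same_id_obtained_above).first

# Or flip the documented option off (for example in an initializer) so that
# Person.find returns nil for missing documents instead of raising.
Mongoid.configure { |config| config.raise_not_found_error = false }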

How should I merge hash while using 'first_or_create'

I have this hash, which is built dynamically:
additional_values = {"grouping_id"=>1}
I want to merge it with this record object after creation via first_or_create:
result = model.where(name: 'test').first_or_create do |record|
# I'm trying to merge any record attributes that exist in my hash:
record.attributes.merge(additional_values)
# This works, but it sucks:
# record.grouping_id = data['grouping_id'] if model.name == 'Grouping'
end
# Not working:
# result.attributes >> {"id"=>1, "name"=>"Test", "grouping_id"=>nil}
I understand that if the record already exists (returned via 'first'), it won't be updated, although that would be a nice option and any recommendations on that are welcome. But the table was just dropped and recreated, so that's not the issue.
What am I missing?
I also tried using to_sym, resulting in:
additional_values = {:grouping_id=>1}
...just in case there was some weirdness I didn't know about. It didn't make a difference.
The problem is that Hash#merge returns a new hash, and you aren't doing anything with that hash; you're just throwing it away. I would also suggest sticking to the ActiveRecord methods for updating attributes instead of trying to manipulate the underlying hash: use assign_attributes, or update if you want to save the record as well. That said, you may find create_with, which can be combined with find_or_create_by, useful here:
model.create_with(additional_values).find_or_create_by(name: 'test')
I can't find any documentation I like (if at all) for first_or_create in recent Rails versions, but if you prefer it to find_or_create_by, then, judging by the Rails 3 documentation for first_or_create, you should be able to do without the create_with:
model.where(name: 'test').first_or_create(additional_values)
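If you do want to keep the block form, here is a sketch using assign_attributes (same model and additional_values as in the question):

result = model.where(name: 'test').first_or_create do |record|
  # The block runs on the new record before it is saved, so the assigned
  # attributes are persisted by first_or_create.
  record.assign_attributes(additional_values)
end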

How do I use the azure_table_service.query_entities result and token in Ruby?

I'm trying to use the Ruby Azure SDK to query an Azure Table. I can get the call to work, and if I look at the traffic in Wireshark it's returning tons of results, but I can't figure out how to iterate through them.
query = {:filter => "Timestamp ge datetime'2015-01-01T00:00:00Z'", :select => ["FileName"]}
result, token = azure_table_service.query_entities("ActivityTable", query)
p result
p token
Shows this as the output.
#<Azure::Table::Entity:0xb8f74fdc #properties={"FileName"=>"LOCKINFO.DAT"}, #table="ActivityTable", #updated=2015-01-06 20:22:14 UTC, #etag=nil>
#<Azure::Table::Entity:0xb8f74f3c #properties={"FileName"=>"Scan000.pdf"}, #table="ActivityTable", #updated=2015-01-06 20:22:14 UTC, #etag=nil>
I tried result.count, result.pop, and others. The documentation really sucks too: https://github.com/Azure/azure-sdk-for-ruby/blob/master/lib/azure/table/table_service.rb. It looks like I'm getting an array of EnumerationResults back, but none of the array calls work.
I also can't figure out how to use the token to get the next set of results but that's after I can figure out how to use the ones I have.
-Update-
p result.class
p token.class
Shows that both are Azure::Table::Entity
Okay! I found an issue with the documentation, I guess. I shouldn't use their example, since
result = azure_table_service.query_entities("XASActivityTable", query)
returns an array of the expected values. Adding that token variable triggers Ruby's multiple assignment, so the first and second elements of the array end up in the two variables and the rest are discarded.
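That "magic" is just ordinary Ruby multiple assignment; a minimal illustration:

first, second = ["a", "b", "c", "d"]
# first  => "a"
# second => "b"
# "c" and "d" are silently dropped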
You can get the status by using the statement below.
status = result.properties['status']
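For iterating the collection and paging through it, here is a hedged sketch; it assumes an SDK version where query_entities returns an Array-like Azure::Service::EnumerationResults carrying a continuation_token, and the :continuation_token option name is an assumption, so check your gem version:

query = { :filter => "Timestamp ge datetime'2015-01-01T00:00:00Z'", :select => ["FileName"] }

entities = azure_table_service.query_entities("ActivityTable", query)
entities.each do |entity|
  puts entity.properties["FileName"]  # each element is an Azure::Table::Entity
end

# Page through the remaining results, if any.
token = entities.continuation_token
unless token.nil? || token.empty?
  next_page = azure_table_service.query_entities(
    "ActivityTable",
    query.merge(:continuation_token => token)
  )
end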

No signature of method: groovy.lang.MissingMethodException.makeKey()

I've installed titan-0.5.0-hadoop2 with HBase and Elasticsearch support.
I've loaded the graph with
g = TitanFactory.open('conf/titan-hbase-es.properties')
==>titangraph[hbase:[127.0.0.1]]
and then I loaded the test graph:
GraphOfTheGodsFactory.load(g)
Now, when I try to create a new index key with:
g.makeKey('userId').dataType(String.class).indexed(Vertex.class).unique().make()
I get this error:
No signature of method: groovy.lang.MissingMethodException.makeKey() is applicable for argument types: () values: []
Possible solutions: every(), any()
Display stack trace? [yN]
Can someone help me with this?
When I look at the indexed keys, I see this:
g.getIndexedKeys(Vertex.class)
==>reason
==>age
==>name
==>place
I'm not completely following what you are trying to do. It appears that you loaded the Graph of the Gods into g and then you want to add userId as a new property to the schema. If that's right, then I think your syntax is wrong, given the Titan 0.5 API. The method for managing the schema is very different from previous versions. Changes to the schema are performed through the ManagementSystem interface, which you can get an instance of through:
mgmt = g.getManagementSystem()
The syntax for adding a property then looks something like:
birthDate = mgmt.makePropertyKey('birthDate').dataType(Long.class).cardinality(Cardinality.SINGLE).make()
mgmt.commit()
Note that g.getIndexedKeys(Class) is not the appropriate way to get schema information either. You should use the ManagementSystem for that too.
Please see the Titan 0.5 documentation for more information.
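For the original goal of an indexed, unique userId, a sketch along these lines should be closer to the Titan 0.5 API (the index name byUserId is made up here):

mgmt = g.getManagementSystem()
userId = mgmt.makePropertyKey('userId').dataType(String.class).cardinality(Cardinality.SINGLE).make()
// A composite index backs direct lookups on the key; unique() enforces one vertex per userId value.
mgmt.buildIndex('byUserId', Vertex.class).addKey(userId).unique().buildCompositeIndex()
mgmt.commit()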
