I tried the following YAML code:
columns:
  created_time:
    type: timestamp
    notnull: true
    default: default CURRENT_TIMESTAMP
In the generated SQL statement the field is treated as datetime instead of timestamp, so I cannot set the current timestamp as its default...
If I insist on using timestamp to store the current time, how can I do that in YAML?
If you are willing to sacrifice some portability (see the description of the columnDefinition attribute) for the ability to use MySQL's automatic TIMESTAMP initialization (see MySQL timestamp initialization), then you can use the following:
YAML:
created_time:
  type: datetime
  columnDefinition: TIMESTAMP DEFAULT CURRENT_TIMESTAMP
Annotation:
@ORM\Column(type="datetime", columnDefinition="TIMESTAMP DEFAULT CURRENT_TIMESTAMP")
Notice that DEFAULT CURRENT_TIMESTAMP does not work the same as Timestampable, and thus you cannot blindly exchange one for the other.
First and foremost, the former uses the date/time of the DB server, while the latter uses Doctrine magic that calls PHP's date() function on your web server. In other words, they are two distinct ways of getting the date/time, from two entirely different clock sources. You may be in big trouble if you use Timestampable, your web server runs on a different machine than your DB server, and you don't keep your clocks in sync using e.g. NTP.
Also, having DEFAULT CURRENT_TIMESTAMP in the table definition makes for a much more consistent database model IMHO: no matter how you insert the data (for instance, running INSERTs on the DB engine command line), you'll always get the current date/time in the column.
BTW, I'm also looking for an answer to the CURRENT_TIMESTAMP problem mentioned in the initial question, as this is (due to the reasons outlined above) my preferred way of keeping "timestamp" columns.
You could use the 'Timestampable' functionality in Doctrine, e.g.:
actAs:
  Timestampable:
    created:
      name: created_time
    updated:
      disabled: true
columns:
  created_time:
    type: timestamp
    notnull: true
/**
 * @var int
 * @ORM\Column(type="datetime", columnDefinition="TIMESTAMP DEFAULT CURRENT_TIMESTAMP")
 */
protected $created;
After running ./vendor/bin/doctrine-module orm:schema-tool:update --force:
Updating database schema... Database schema updated successfully! "1" queries were executed
and then ./vendor/bin/doctrine-module orm:validate-schema:
[Mapping] OK - The mapping files are correct.
[Database] FAIL - The database schema is not in sync with the current mapping file.
But the sync check still reports FAIL.
Sorry for necroposting, but I have encountered the same problem. There is a solution for Doctrine 2 and PostgreSQL. I used the Gedmo extension and added the following lines:
$evm = new \Doctrine\Common\EventManager();
$timestampableListener = new \Gedmo\Timestampable\TimestampableListener;
$timestampableListener->setAnnotationReader($cachedAnnotationReader);
$evm->addEventSubscriber($timestampableListener);
YAML:
created:
  type: date
  options:
    default: 0
    nullable: true
  gedmo:
    timestampable:
      on: create
updated:
  type: datetime
  options:
    default: 0
    nullable: true
  gedmo:
    timestampable:
      on: update
dump-sql:
ALTER TABLE users ADD created DATE DEFAULT CURRENT_DATE NOT NULL;
ALTER TABLE users ADD updated TIMESTAMP(0) WITHOUT TIME ZONE DEFAULT CURRENT_TIMESTAMP NOT NULL;
I suggest not using "default" for timestamps at all.
It brings unpredictable state from your YAML into your application.
This video (PHP UK Conference 2016 - Marco Pivetta - Doctrine ORM Good Practices and Tricks) provides some more information about this topic.
I suggest you go through it and create a named constructor, e.g.:
public static function createTimestamp(string $priority, int $priorityNormalized): self
{
    $instance = new self();
    $instance->priority = $priority;
    $instance->priorityNormalized = $priorityNormalized;
    // set the creation timestamp in code rather than relying on a DB default
    $instance->created = new \DateTimeImmutable();

    return $instance;
}
I suggest staying stateless. Good luck!
You can use:
default: "<?php echo date('Y-m-d H:i:s') ?>"
This is my current repo structure; I'm looking for a solution that works with both Postgres and OracleDB and preferably does not involve changing my DB schema to accommodate the ORM. Whether Postgres or Oracle is used is defined in the spring.datasource.url in the application.properties file.
data class NewsCover(
    @Id val tenantId: TenantId,
    val openOnStart: Boolean,
    val cycleDelay: Int,
    @MappedCollection(idColumn = "tenant_id", keyColumn = "tenant_id")
    val sections: Set<NewsCoverSection>,
)
data class NewsCoverSection(
    @Id val id: NewsCoverSectionId,
    val title: String,
    val pinnedOnly: Boolean,
    val position: Int,
    val tenantId: TenantId,
    ... some other fields ...
)
interface NewsCoverRepo : CrudRepository<NewsCover, TenantId> { ... }
This works just fine with PostgreSQL, but creates errors when used with Oracle:
SELECT "NEWS_COVER_SECTION"."ID" AS "ID", "NEWS_COVER_SECTION"."TITLE" AS "TITLE", "NEWS_COVER_SECTION"."POSITION" AS "POSITION", "NEWS_COVER_SECTION"."TENANT_ID" AS "TENANT_ID", "NEWS_COVER_SECTION"."PINNED_ONLY" AS "PINNED_ONLY"
FROM "NEWS_COVER_SECTION"
WHERE "NEWS_COVER_SECTION"."tenant_id" = ?
See the quoted idColumn/keyColumn names in the @MappedCollection. They are lowercase. That is fine for Postgres, but does not work with Oracle. Changing tenant_id to TENANT_ID fixes the problem for Oracle, but breaks Postgres.
What I tried:
A NamingStrategy override for Oracle, but I can't seem to override those quoted identifiers.
Conditional column names in @MappedCollection, but @MappedCollection only accepts compile-time constants and does not support SpEL, so I can't differentiate based on the spring.datasource.url property.
Any ideas how I can get it to query for "news_cover_section"."tenant_id" when the DB is Postgres and "NEWS_COVER_SECTION"."TENANT_ID" when the DB is Oracle?
As you found out, you can disable the behaviour of quoting all names by setting the forceQuote property of the JdbcMappingContext to false.
Alternatively, you can create the schema in a consistent way on both databases by quoting the names in your schema creation script.
The first option means you don't have to fiddle with the database schema, but it makes the application depend on avoiding database keywords such as ORDER or USER.
The second option is arguably the conceptually cleaner one, because it actually uses the same schema (as far as names are concerned) for both databases, which in itself is certainly valuable. But it comes at the cost of quoting names, because Postgres doesn't adhere to the behaviour prescribed by the SQL standard of treating unquoted names as uppercase.
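For the first option, a minimal configuration sketch (not from the original answer) might look like the following. It assumes Spring Data JDBC 2.x, where AbstractJdbcConfiguration#jdbcMappingContext takes an Optional<NamingStrategy> and JdbcCustomConversions, so adjust the override to your version's signature:
import java.util.Optional;

import org.springframework.context.annotation.Configuration;
import org.springframework.data.jdbc.core.convert.JdbcCustomConversions;
import org.springframework.data.jdbc.core.mapping.JdbcMappingContext;
import org.springframework.data.jdbc.repository.config.AbstractJdbcConfiguration;
import org.springframework.data.relational.core.mapping.NamingStrategy;

@Configuration
public class NoQuotingJdbcConfig extends AbstractJdbcConfiguration {

    @Override
    public JdbcMappingContext jdbcMappingContext(Optional<NamingStrategy> namingStrategy,
                                                 JdbcCustomConversions customConversions) {
        JdbcMappingContext context = super.jdbcMappingContext(namingStrategy, customConversions);
        // Stop quoting identifiers so each database resolves them its own way:
        // NEWS_COVER_SECTION on Oracle, news_cover_section on Postgres.
        context.setForceQuote(false);
        return context;
    }
}
With quoting disabled you also have to make sure none of your table or column names collide with reserved words on either database.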
Note: There is now an issue for supporting SpEL expressions for table and column names.
I'm trying to store the date of a restaurant booking in the database, but even though the date I submit is correct, Hibernate stores a date one day before the one I submitted. I don't know why... it's probably a timezone problem, but I can't understand why: a plain date should not be affected by timezones.
Here is my spring boot properties file:
spring:
  thymeleaf:
    mode: HTML5
    encoding: UTF-8
    cache: false
  jpa:
    database: MYSQL
    hibernate:
      ddl-auto: update
    properties:
      hibernate:
        locationId:
          new_generator_mappings: false
        dialect: org.hibernate.dialect.MySQL5InnoDBDialect
        jdbc:
          time_zone: UTC
  datasource:
    driver:
      class: com.mysql.jdbc.Driver
    url: jdbc:mysql://localhost:3306/databaseName?useSSL=false&useUnicode=true&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=UTC
    username: username
    password: **********
I'm from Italy, so my timezone is this:
GMT/UTC + 1h during Standard Time
GMT/UTC + 2h during Daylight Saving Time
Currently we are UTC + 2h.
The object I'm storing is this one:
@Entity
public class Dinner {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private long dinnerId;

    private LocalDate date;
    ...
The controller I'm using to intercept the POST request is this:
@PreAuthorize("hasRole('USER')")
@PostMapping
public String createDinner(@RequestParam(value = "dinnerDate") String dinnerDate, Principal principal, Model model) {
    DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd");
    LocalDate date = LocalDate.parse(dinnerDate, formatter);
    dinnerService.createDinner(date);
    return "redirect:/dinners?dinnerDate=" + dinnerDate;
}
This calls the service method createDinner, which calls the JPA save method to store the object.
I'm using Thymeleaf to handle the HTML templates.
If I submit the date 30/6/2019, the database ends up with 29/6/2019. When I retrieve the Dinner object by date, if I pass 30/6/2019, I get the Dinner with the date 29/6/2019. So it seems that Spring handles the date by itself in a weird way, applying some sort of timezone conversion, but I don't know how to disable or handle it. Any idea?
You do not need to define a format for the pattern yyyy-MM-dd. LocalDate#parse uses DateTimeFormatter.ISO_LOCAL_DATE by default which means LocalDate.parse("2020-06-29") works without applying a format explicitly.
Since you already know that the date-time in your time zone is different from that in UTC, you should never consider just a date; rather, you should consider both date and time, e.g. 11:30 PM UTC on 2020-06-29 falls on 2020-06-30 in your time zone. Therefore, the first thing you should do is change the type of the field to TIMESTAMP in the database.
Once you have changed the type of the field to TIMESTAMP, change the method createDinner as follows:
LocalDateTime dinnerDateTime = LocalDateTime.of(LocalDate.parse(dinnerDate), LocalTime.of(0, 0, 0, 0));
OffsetDateTime odt = dinnerDateTime.atOffset(ZoneOffset.UTC);
dinnerService.createDinner(odt);
Then inside DinnerService (or DinnerServiceDAO wherever you have written the logic to insert/update record in the database):
pst.setObject(index, odt);
where pst represents the object of PreparedStatement and index represents the index (starting with 1) of this field in your insert/update query.
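Pulling those fragments together, a self-contained sketch (not from the original answer) could look like this; the table and column names, JDBC URL, and credentials are placeholders for illustration only:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;

public class DinnerInsertSketch {
    public static void main(String[] args) throws Exception {
        // The submitted date, e.g. "2019-06-30", pinned to midnight UTC
        LocalDateTime dinnerDateTime = LocalDateTime.of(LocalDate.parse("2019-06-30"), LocalTime.MIDNIGHT);
        OffsetDateTime odt = dinnerDateTime.atOffset(ZoneOffset.UTC);

        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/databaseName?serverTimezone=UTC", "username", "password");
             PreparedStatement pst = con.prepareStatement(
                "INSERT INTO dinner (dinner_date) VALUES (?)")) {
            // JDBC 4.2 drivers map OffsetDateTime onto TIMESTAMP columns
            pst.setObject(1, odt);
            pst.executeUpdate();
        }
    }
}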
Same problem (and same country! :-) ).
I suspect that if Hibernate or JPA is configured with the UTC timezone while the machine's default timezone is Europe/Rome, then a date is converted automatically from the machine timezone to the database timezone when it is persisted, which is not a bad feature if you store all dates in UTC on the DB.
The problem happens when you convert the date yourself before persisting: it gets converted twice. At least, this is my case.
Still looking for the best solution! If I find one, I'll add it to this answer later.
Assuming your time zone is Europe/Rome (note that Europe/Italy is not a valid zone ID), you have to set the serverTimezone parameter like this:
url: jdbc:mysql://localhost:3306/databaseName?useSSL=false&useUnicode=true&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=Europe/Rome
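As a quick sanity check (not part of the original answer), you can ask the JVM which zone IDs it accepts:
import java.time.ZoneId;

public class ZoneIdCheck {
    public static void main(String[] args) {
        System.out.println(ZoneId.getAvailableZoneIds().contains("Europe/Rome"));  // true
        System.out.println(ZoneId.getAvailableZoneIds().contains("Europe/Italy")); // false, not a valid zone ID
    }
}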
In my Oracle database, there is an Agreement table with a column effectivityDate with a data type of DATE. When I try to query a certain row
select * from agreement where id = 'GB'
it returns a row with this value:
id: GB
name: MUITF - Double bypass
...
effectivityDate: 7/2/2015
I created a Grails Domain class for this:
class Agreement implements Serializable {
    String id
    Date effectivityDate

    static mapping = {
        table "agreement"
        version false
        id column: "id"
        name column: "name"
        ...
        effectivityDate column: "effectivityDate"
    }
}
But when I tried to query it in Groovy using:
Agreement a = Agreement.findById("GB")
println a
it returns this object:
[id:GB, name:MUITF - Double bypass, ..., effectivityDate: 2015-07-01T16:00:00Z]
^^^^^^^^^^^^^^^^^^^^
My question is, why would the date fetched directly from the database be different from the one retrieved by GORM? Does this have something to do with time zones?
I just saw in your profile that you are from the Philippines (PHT, GMT+8).
Since 2015-07-01T16:00:00Z === 2015-07-02T00:00:00+08:00, the most likely cause is that the PHT time zone is used to display the date when you query the database directly, and the GMT/Zulu time zone when querying/displaying with Groovy/Grails.
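A small java.time illustration of that equivalence (not from the original answer, but it runs on any JVM, including a Grails app):
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class TimestampZoneDemo {
    public static void main(String[] args) {
        // The value GORM prints (UTC/Zulu time)
        Instant stored = Instant.parse("2015-07-01T16:00:00Z");

        // The same instant rendered in Philippine time (GMT+8)
        ZonedDateTime inManila = stored.atZone(ZoneId.of("Asia/Manila"));
        System.out.println(inManila); // 2015-07-02T00:00+08:00[Asia/Manila]
    }
}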
I have a site for a video game I play, and I am working on improving its performance by implementing some additional caching. I've already been able to implement query result caching on custom repository functions, but haven't found anywhere that explains how I can enable query result caching on the built-in functions (findOneById, etc.). I'm interested in doing this because many of my database queries are executed from these 'native' repository functions.
So as an example I have a character entity object with the following properties: id, name, race, class, etc.
Race and class in this object are references to other entity objects for race and class.
When I load a character for display I get the character by name (findOneByName) and then in my template I display the character's race/class by $characterObject->getRace()->getName(). These method calls in the template result in a query being run on my Race/Class entity tables fetching the entity by id (findOneById I assume).
I've attempted to create my own findOneById function in the repository, but it is not called under these circumstances.
How can I setup doctrine/symfony such that these query results are cache-able?
I am running Symfony 2.1.3 and Doctrine 2.3.x.
I've found out that it isn't possible to enable the query cache on Doctrine's built-in functions. I will post a link which explains why later, after I find it again.
Your entities probably look something like this:
MyBundle\Entity\Character:
  type: entity
  table: Character
  fields:
    id:
      id: true
      type: bigint
    name:
      type: string
      length: 255
  manyToOne:
    race:
      targetEntity: Race
      joinColumns:
        raceId:
          referencedColumnName: id

MyBundle\Entity\Race:
  type: entity
  table: Race
  fields:
    id:
      id: true
      type: bigint
    name:
      type: string
      length: 255
  oneToMany:
    characters:
      targetEntity: Character
      mappedBy: race
If that's the case, then modify your Character entity mapping so that it eagerly loads the Race entity as well:
MyBundle\Entity\Character:
  ...
  manyToOne:
    race:
      targetEntity: Race
      joinColumns:
        raceId:
          referencedColumnName: id
      fetch: EAGER
Doctrine documentation on the fetch option: @ManyToOne
I'm using the Mongo shell to query my Mongo db. I want to use the timestamp contained in the ObjectID as part of my query and also as a column to extract into output. I have setup Mongo to create ObjectIDs on its own.
My problem is I can not find out how to work with the ObjectID to extract its timestamp.
Here are the queries I am trying to get working. The 'createdDate' field is a placeholder; not sure what the correct field is:
//Find everything created since 1/1/2011 (note: JavaScript Date months are 0-indexed)
db.myCollection.find({date: {$gt: new Date(2011, 0, 1)}});
//Find everything and return their createdDates
db.myCollection.find({}, {createdDate: 1});
getTimestamp()
The function you need is this one; it's already included for you in the shell:
ObjectId.prototype.getTimestamp = function() {
return new Date(parseInt(this.toString().slice(0,8), 16)*1000);
}
References
Check out this section from the docs:
Extract insertion times from _id rather than having a separate timestamp field
This unit test also demonstrates the same:
mongo / jstests / objid6.js
Example using the Mongo shell:
> db.col.insert( { name: "Foo" } );
> var doc = db.col.findOne( { name: "Foo" } );
> var timestamp = doc._id.getTimestamp();
> print(timestamp);
Wed Sep 07 2011 18:37:37 GMT+1000 (AUS Eastern Standard Time)
> printjson(timestamp);
ISODate("2011-09-07T08:37:37Z")
This question is helpful for understanding how to use the _id's embedded timestamp in query situations (it refers to the Mongo Extended JSON documentation). This is how it's done:
col.find({...,
'_id' : {'$lt' : {'$oid' : '50314b8e9bcf000000000000'}}
})
This finds documents created earlier than the one given by the oid. Used together with natural sorting and limiting, you can use BSON _ids to build Twitter-like API queries (give me the last OID you have and I'll provide twenty more).
In python you can do this:
>>> import datetime
>>> from bson.objectid import ObjectId
>>> gen_time = datetime.datetime(2010, 1, 1)
>>> dummy_id = ObjectId.from_datetime(gen_time)
>>> result = collection.find({"_id": {"$lt": dummy_id}})
I think ObjectId.from_datetime() is a useful method of the standard bson lib.
Other language bindings may have an equivalent built-in function.
Source: http://api.mongodb.org/python/current/api/bson/objectid.html
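For example, a rough Java-driver equivalent (not from the original answers; it assumes the MongoDB Java sync driver and a hypothetical test.col collection) would use org.bson.types.ObjectId's Date-based constructor:
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import org.bson.Document;
import org.bson.types.ObjectId;

import java.util.Date;

public class ObjectIdTimestampQuery {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> col = client.getDatabase("test").getCollection("col");

            // A "dummy" ObjectId whose embedded timestamp is 2010-01-01T00:00:00Z;
            // the remaining bytes don't matter for range comparisons.
            ObjectId dummyId = new ObjectId(new Date(1262304000000L));

            // Documents whose _id was generated before that date
            for (Document doc : col.find(Filters.lt("_id", dummyId))) {
                System.out.println(doc.toJson());
            }

            // Reading the timestamp back out of an _id
            System.out.println(dummyId.getDate());
        }
    }
}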
To use the timestamp contained in the ObjectId and return documents created after a certain date, you can use $where with a function.
e.g.
db.yourcollection.find( {
$where: function() {
return this._id.getTimestamp() > new Date("2020-10-01")
}
});
The function needs to return a truthy value for that document to be included in the results. Reference: $where
Mongo date objects can seem a bit peculiar though. See the mongo Date() documentation for constructor details.
excerpt:
You can specify a particular date by passing an ISO-8601 date string with a year within the inclusive range 0 through 9999 to the new Date() constructor or the ISODate() function. These functions accept the following formats:
new Date("<YYYY-mm-dd>") returns the ISODate with the specified date.
new Date("<YYYY-mm-ddTHH:MM:ss>") specifies the datetime in the client’s local timezone and returns the ISODate with the specified datetime in UTC.
new Date("<YYYY-mm-ddTHH:MM:ssZ>") specifies the datetime in UTC and returns the ISODate with the specified datetime in UTC.
new Date(<integer>) specifies the datetime as milliseconds since the Unix epoch (Jan 1, 1970), and returns the resulting ISODate instance.