| Authors | Eric Faulhaber, Constantin Asofiei |
|---|---|
| Date | June 4, 2009 |
| Access Control | CONFIDENTIAL |
A table foo containing a data field bar will convert roughly to a DMO class Foo with a private instance variable bar, and public getBar and setBar methods (note: the accessor will be isBar in the case where bar represents a logical data type).
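As an illustration of this convention, a converted DMO might look roughly like the following sketch. The names Foo and bar come from the text above; barDone is an invented logical field added to show the isXxx accessor form, and plain Java types stand in for the P2J wrapper types for brevity:

```java
// Hypothetical sketch of a converted DMO class; real generated DMOs also
// carry a primary key property and Hibernate mapping metadata (.hbm.xml).
public class Foo
{
   // each 4GL field becomes a private instance variable
   private String bar = null;

   // invented logical (boolean) field, to show the isXxx accessor naming
   private boolean barDone = false;

   /** Public accessor for the bar field. */
   public String getBar()
   {
      return bar;
   }

   /** Public mutator for the bar field. */
   public void setBar(String bar)
   {
      this.bar = bar;
   }

   /** Accessor uses the isXxx form because barDone represents a logical type. */
   public boolean isBarDone()
   {
      return barDone;
   }

   /** Public mutator for the barDone field. */
   public void setBarDone(boolean barDone)
   {
      this.barDone = barDone;
   }
}
```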
In actuality, the naming conversions may be more complex than this, in
that arbitrarily granular substitutions and expansions are applied
during the conversion process, which are specific to the application
being converted. This allows for a more verbose DMO class name
(often a compound word), which is typical in the target environment, as
opposed to the often terse, and sometimes cryptic, table and field
names used in the source environment, imposed by historical limitations.

The persist
package may also use
this identifier directly to retrieve individual records in certain
cases. This unique identifier maps directly to the primary key
value of the backing record in the database. Temp table DMO
classes are slightly more polluted: each additionally contains a
multiplex ID field and implements the Temporary
marker
interface.

Hibernate mapping documents are named with the extension .hbm.xml instead of .class (.java in the source directories). These documents live alongside their Java counterparts in the same directory inside the deployment archive (generally a JAR file). See the schema package overview for further details.

Several standard Java types appear directly in DMO classes:

- java.lang.Long (primary key field/methods);
- java.lang.Integer (_multiplex field for temp table DMOs).

Although Hibernate maps these as java.lang.Integer (Hibernate type integer) and java.lang.Long (Hibernate type long), the backend database in the target environment and Hibernate of course are unaware of the P2J wrapper data types. This is another area where the mismatch between Progress 4GL semantics and the target environment must be managed.

Each P2J wrapper type is handled by a custom user type which implements the org.hibernate.usertype.UserType
interface. This provides transparent transitions between the P2J
wrappers needed by the converted application, and the SQL data types
needed by the database. For the custom user type implementations,
we provide a mapping between the P2J wrapper types and the closest
corresponding JDBC data type (defined in the java.sql.Types
class), as follows:

| Progress Type | JDBC/SQL Type | P2J Type | User Type Implementation | Notes |
|---|---|---|---|---|
| character | VARCHAR | character | CharacterUserType | User type uses java.lang.String internally. |
| date | DATE | date | DateUserType | User type uses java.sql.Date internally. |
| decimal | NUMERIC | decimal | DecimalUserType | User type uses java.math.BigDecimal internally. |
| integer | INTEGER | integer | IntegerUserType | User type uses int internally. |
| recid | INTEGER | recid | IntegerUserType | User type uses int internally. |
| rowid | BIGINT | rowid | RowidUserType | User type uses java.lang.Long internally. |
| logical | BIT | logical | LogicalUserType | User type uses boolean internally. |
| raw | VARBINARY | raw | RawUserType | User type uses byte[] internally. |
| unknown | NULL | n/a | n/a | All user type implementations transparently implement this mapping. |
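The JDBC type codes in the table correspond to constants defined in java.sql.Types. As an illustrative restatement of the mapping (the class name here is invented for this sketch):

```java
import java.sql.Types;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative restatement of the wrapper-to-JDBC mapping table above:
// each P2J wrapper type name mapped to the closest JDBC type code.
public class P2jJdbcTypeMap
{
   public static final Map<String, Integer> SQL_TYPES = new LinkedHashMap<>();

   static
   {
      SQL_TYPES.put("character", Types.VARCHAR);
      SQL_TYPES.put("date",      Types.DATE);
      SQL_TYPES.put("decimal",   Types.NUMERIC);
      SQL_TYPES.put("integer",   Types.INTEGER);
      SQL_TYPES.put("recid",     Types.INTEGER);
      SQL_TYPES.put("rowid",     Types.BIGINT);
      SQL_TYPES.put("logical",   Types.BIT);
      SQL_TYPES.put("raw",       Types.VARBINARY);
   }
}
```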
P2J relies upon the TransactionManager to map Progress transaction semantics to SQL transactions. As much transaction processing as possible has been made transparent to application code. The persistence layer uses TransactionManager's various callback services to determine when a transaction must be committed or rolled back, and when database resources and record locks are released.

A BufferManager
class is created immediately upon the creation of each new user
context. It manages all buffers open in that context. As an
implementor of Scopeable
, BufferManager
is
notified by the TransactionManager each time a new block scope is
opened and again when it is closed. The object is also notified
by each RecordBuffer
instance whenever a new scope is
opened on that buffer. BufferManager
tracks all
open buffer scopes within the local user context. When it
receives notice that a transaction level block has opened, it starts a
new database level transaction for each database currently referenced
by an open buffer scope. When any block opens that is within a
transaction, it registers all open buffers for a commit/rollback
callback.

BufferManager
also implements BatchListener
,
so that it can receive notifications about assign batch mode
processing. This mode is used when multiple changes must be made
to one or more DMO objects without triggering validation until the
batch ends. BatchListener.batchNotify
methods
trigger calls to RecordBuffer
's static startBatch
and endBatch
methods.

A RecordBuffer
class is created each time a buffer is defined in application
code. This object manages all resources associated with a buffer
throughout its lifetime, including any record locks acquired through
that buffer. RecordBuffer
implements Finalizable
and registers with the TransactionManager
each time openScope
is invoked. The context-local ConnectionManager
is
also notified of the buffer scope open event. Upon closing of the
scope, the buffer attempts to transition any record locks acquired by
the buffer during the lifetime of the current scope. This may
mean releasing each lock entirely, or downgrading it to a share lock
from an exclusive lock.

RecordBuffer
manages the interaction between DMOs and
Hibernate's Session
object (indirectly, via the Persistence
class), in order to maintain sub-transaction level undo (i.e.,
rollback) capability across nested scopes. For each create,
delete, and load operation which occurs in a RecordBuffer
instance, a RecordBuffer.Reversible
instance is stored
for the current scope. RecordBuffer
implements Commitable
and is registered to receive commit and rollback callbacks by the BufferManager
at each block scope inside a transaction. When a subtransaction
or transaction commit occurs, all Reversible
objects in
the current scope are copied up to the parent scope, to preserve undo
capability at that scope. When a rollback occurs, all Reversible
objects in the current scope have their rollback methods invoked, which
resets the buffer's state to enable a retry.

A ConnectionManager
class is created the first time it is needed in the current user
context. This is typically during instantiation of the first RecordBuffer
used in that context. This object tracks all connections made to
a physical database, in the sense of Progress
connections (i.e., uses of the CONNECT and DISCONNECT statements in
pre-converted code). This does
not trigger a JDBC connection, since this occurs implicitly when
a transaction is started or when a query is executed. Rather,
this object is used to track references to physical, transient (i.e.,
not configured by default) databases which are dynamically registered
and deregistered by application logic. Connections trigger
transient databases to be registered with the DatabaseManager
,
and disconnections cause them (eventually) to be deregistered.

ConnectionManager
implements Finalizable
, and registers itself for master
transaction finish notifications upon construction. Whenever a
transaction scope closes, all pending disconnect requests are honored
and those databases are deregistered with the DatabaseManager if no other buffers in the current context still reference them.

| No record in scope | NO-LOCK | SHARE-LOCK | EXCLUSIVE-LOCK |
|---|---|---|---|
| Example | Notes |
|---|---|
| 1 | At line 002, inside the transaction, a record is found with EXCLUSIVE-LOCK, then re-found with NO-LOCK at line 004. However, because we are inside a transaction, the lock is downgraded to SHARE-LOCK here, rather than released. After the transaction ends, the lock is released. |
| 2 | In this case, the buffer scope of the person buffer is larger than the transaction scope, because it begins at line 001, when the record is found with NO-LOCK. As in the previous example, it is downgraded rather than released at line 005, but the SHARE-LOCK remains even after the transaction ends at line 006. It is not until line 007 that the lock is finally released by the explicit find NO-LOCK statement. |
| 3 | Even though the buffer scope is larger than the transaction scope in this case, the undo at line 005 causes the lock to revert at the end of the transaction to its state at the beginning of the transaction, NO-LOCK. |
| 4 | In this case, the EXCLUSIVE-LOCK is acquired in a repeat block nested within the transaction scope at line 004. Even though the repeat block is undone at line 006, the exclusive lock is not released when the repeat block ends at line 007. It is not until the transaction exits at line 008 that the lock is released. Note that even though the buffer scope is larger than the transaction scope, the lock is not downgraded to SHARE-LOCK when the transaction ends, unlike Example 2. The existence of the UNDO preempts this behavior and causes the lock instead to revert to its state at the beginning of the transaction, even though it remains at EXCLUSIVE-LOCK throughout the remainder of the transaction. Thus, had we initially found the record with SHARE-LOCK at line 001, it would still be locked with SHARE-LOCK at line 008. |
| 5 | This example is similar to Example 4; however, without the UNDO, the EXCLUSIVE-LOCK is downgraded to SHARE-LOCK at the end of the transaction. It remains SHARE-LOCKED until it is explicitly released at line 009. |
| 6 | In this case, the buffer scope is smaller than the transaction scope. Even though the record is no longer referenced after the end of the inner repeat loop, it remains locked through the end of the transaction, so that no other session can edit it before the transaction has been committed (or undone). However, the EXCLUSIVE-LOCK is downgraded to a SHARE-LOCK at the end of the buffer's scope. It is released entirely at the end of the transaction. |
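The end-of-transaction behavior illustrated by these examples can be modeled by a small decision function. This is an illustrative summary of the rules described above, not the actual P2J implementation:

```java
// Illustrative model of the end-of-transaction lock transitions shown in
// the examples above; not the actual P2J RecordBuffer/LockManager code.
public class LockTransition
{
   public enum Lock { NONE, NO_LOCK, SHARE, EXCLUSIVE }

   /**
    * Compute the lock held after a transaction ends.
    *
    * @param atTxStart             lock state when the transaction began
    * @param current               lock state at the end of the transaction
    * @param bufferScopeOutlivesTx true if the buffer scope is larger than the
    *                              transaction scope
    * @param undone                true if an UNDO occurred in the transaction
    */
   public static Lock afterTransaction(Lock atTxStart,
                                       Lock current,
                                       boolean bufferScopeOutlivesTx,
                                       boolean undone)
   {
      if (undone)
      {
         // an UNDO reverts the lock to its state at the start of the
         // transaction (Examples 3 and 4)
         return atTxStart;
      }
      if (!bufferScopeOutlivesTx)
      {
         // if the buffer scope ends with (or inside) the transaction, the
         // lock is released entirely (Examples 1 and 6)
         return Lock.NONE;
      }
      // otherwise an EXCLUSIVE-LOCK is downgraded to SHARE-LOCK, which
      // persists until explicitly released, e.g., by a subsequent
      // FIND ... NO-LOCK (Examples 2 and 5)
      return current == Lock.EXCLUSIVE ? Lock.SHARE : current;
   }
}
```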
The LockManager Java interface was developed for this purpose. A particular, concrete implementation is created at P2J server initialization; the implementation class to use is specified in the directory. The Persistence class uses the lock manager transparently to do its work; generally, the lock manager does not need to be accessed directly by application level code. There is one Persistence instance per physical database managed by a P2J application server. External applications can access a Persistence instance via remote method calls, or they may implement a remotable implementation of the LockManager interface (see below). To date, no non-Java mechanism exists for external access.

The default implementation of the LockManager interface is InMemoryLockManager.
An instance of this class resides in the server JVM. As the name
suggests, all locking behavior is managed in the internal memory of the
server
process. Thus, coordinated locking is available only to
objects which exist in the JVM server process. To maintain the
integrity of concurrent record access, all lock requests must
be routed
through this object. If locking needs to be shared among external
JVM
instances, one of the mechanisms described above must be employed.
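A minimal sketch of the kind of in-process lock table such a class implies follows. The class and method names here are invented, and the real InMemoryLockManager's API, lock queuing, and timeout handling are not shown:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Minimal sketch of an in-process record lock table, in the spirit of an
// in-memory lock manager; names and semantics are illustrative only.
public class SimpleLockTable
{
   // holders of share locks, keyed by record ID
   private final Map<Long, Set<String>> shares = new HashMap<>();

   // holder of the exclusive lock, keyed by record ID
   private final Map<Long, String> exclusives = new HashMap<>();

   /** Try to acquire a share lock; fails if another context holds exclusive. */
   public synchronized boolean acquireShare(String context, long recordId)
   {
      String owner = exclusives.get(recordId);
      if (owner != null && !owner.equals(context))
      {
         return false;
      }
      shares.computeIfAbsent(recordId, k -> new HashSet<>()).add(context);
      return true;
   }

   /** Try to acquire an exclusive lock; fails if any other context holds any lock. */
   public synchronized boolean acquireExclusive(String context, long recordId)
   {
      String owner = exclusives.get(recordId);
      if (owner != null && !owner.equals(context))
      {
         return false;
      }
      for (String holder : shares.getOrDefault(recordId, new HashSet<>()))
      {
         if (!holder.equals(context))
         {
            return false;
         }
      }
      exclusives.put(recordId, context);
      return true;
   }

   /** Release any locks held by the given context on the given record. */
   public synchronized void release(String context, long recordId)
   {
      Set<String> holders = shares.get(recordId);
      if (holders != null)
      {
         holders.remove(context);
      }
      if (context.equals(exclusives.get(recordId)))
      {
         exclusives.remove(recordId);
      }
   }
}
```

Because all state lives in ordinary collections inside one JVM, this style of lock table can coordinate only contexts running in that same process, which is the limitation the surrounding text describes.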
The default mechanism for such coordination among separate P2J server instances is described below.

A second implementation exists for the LockManager interface: RemoteLockManager. An instance of this class is automatically created for and used by a specialized Persistence
instance, which is responsible for accessing a remote database. RemoteLockManager
acts as a proxy for the remote server's corresponding, primary LockManager
for the target database: as a method on the RemoteLockManager
instance is invoked, the request is forwarded to the remote server and is handled there by a LockManagerMultiplexer
implementation, which dispatches the request to the appropriate, local LockManager
for the target database. A response is returned to the RemoteLockManager
which initiated the request on the requesting server. Other than
the increased latency to communicate between P2J servers, the use of
the remote LockManager
proxy mechanism is transparent,
and is managed automatically by the P2J runtime environment. This
mechanism presumes a direct, server-to-server connection can be
established between the two P2J servers.

A higher level locking abstraction is provided by the RecordLockContext Java interface. It is intended for use only within the P2J runtime environment, to manage communal locks among RecordBuffer
instances. A no-op implementation is used for temporary tables,
and a package private, inner class implementation is used for permanent
tables. When a RecordBuffer
or query object needs to acquire or release a record lock, it does so through this interface. A RecordLockContext
implementation uses the low level interface to perform the actual lock
operations. This implementation, in collaboration with the
RecordBuffer
class, enforces the lock transition rules specified above.
It is also responsible for ensuring the asymmetric rules of lock
acquisition and release when multiple buffers within a user context
access locks for the same record.

When a buffer is defined (via RecordBuffer.define or TemporaryBuffer.define), the application is
given a proxy to the underlying DMO, rather than a direct
reference.
The proxy's invocation handler (implemented within RecordBuffer
as an
inner class) intercepts mutator method invocations on the DMO and
upgrades the lock from SHARE to EXCLUSIVE if necessary. It is
assumed a transaction is active when such an upgrade is
attempted.

Temp table DMOs are backed by a dedicated database named _temp, which is reserved for this purpose. This
circumvents the problem of having to associate temp tables at runtime
with different, existing databases. Whereas regular table DMOs
are
backed by RecordBuffer
instances and are defined with a
call to RecordBuffer.define()
, temp table DMOs are backed
by TemporaryBuffer
instances and are defined with a call
to TemporaryBuffer.define()
. Since TemporaryBuffer
extends RecordBuffer
, they are otherwise used the same
way.

Temp table DMOs additionally implement the Temporary marker interface and contain a _multiplex field; see the TemporaryBuffer class for additional details on multiplexing.

An index update can occur at the end of an assignment statement (=
),
or at the end of a buffer
copy operation. If multiple fields are being updated in a
BUFFER-COPY or in an ASSIGN statement, updates to affected indices are
deferred at least
until all fields in the respective statement have been assigned.
As soon as enough assignments have been made so as to have
"touched" all
the fields participating in any index (or group of indices) of the
target table, a snapshot of the new record is made in its current
state, and this "unfinished", placeholder record is inserted into that
index or into those indices. All other indices remain unaffected.
Note that for the placeholder record to be inserted into an
index, it is not necessary that the new values of the assigned fields
be different than their initial, default values; all that is
necessary is the act of assignment itself.

Consider a table sample with the following schema:

| Field | Type | Initial Value |
|---|---|---|
| f1 | integer | 0 |
| f2 | integer | 2 |

index idxF1 field f1
index idxF2 field f2

Assume the table already contains these records:

| Field f1 | Field f2 |
|---|---|
| 1 | 1 |
| 3 | 3 |
Session A:

1 CREATE sample.
2 f1 = 2.
3 pause.
4 f2 = 2.
5 pause.
6 f2 = 7.

Session B:

1 FIND FIRST sample WHERE f1 = 2 USE-INDEX idxF1 NO-LOCK NO-ERROR.
2 FIND FIRST sample WHERE f2 = 2 USE-INDEX idxF2 NO-LOCK NO-ERROR.
3 pause.
4 FIND FIRST sample WHERE f2 = 2 USE-INDEX idxF2 NO-LOCK NO-ERROR.
5 pause.
6 FIND FIRST sample WHERE f2 = 7 USE-INDEX idxF2 NO-LOCK NO-ERROR.
7 IF AVAILABLE sample
8 THEN MESSAGE "Found sample record where f2 = 7; f1:" f1 ", f2:" f2.

Assume session A runs through line 2 and pauses at line 3. At this point, a new sample record has been created and only field
f1 has been assigned. The new sample record has a value of 2 in field f1 (explicitly assigned), and a value of 2 in field f2 (initial, default value). If the code in session B is now run, the FIND statement in line 1 will find a snapshot of the newly created record in session A. The FIND statement in line 2 will not find a record, even though the default value of field f2 matches the search criteria. This is because the assignment in line 2 of session A only has updated index idxF1 so far. Index idxF2 has not yet been updated to reflect the presence of the new record.

When session A continues through line 4 and assigns f2's value as 2 (the same value it already had as its initial value), index idxF2 is now updated. If the FIND statement in line 4 of session B is executed, it will find the new record's snapshot.

When session A executes line 6, assigning f2 = 7, it updates index idxF2
again. Now, session B is allowed to continue through line 8.
Line 6 in session B will find the new record's snapshot.
However, the message which is printed at line 8 will look like:

Found sample record where f2 = 7; f1: 2 , f2: 2

Note that -- even though the new record was found in session B using the criterion
f2 = 7
-- session B will not be aware of the new value (7) of f2
,
assigned in line 6 of session A. Only the data which were in the
new record at the moment the snapshot was made (that is, at the moment
the first index was modified in session A, line 2), are visible from
other sessions. Subsequent updates to the new record are not
visible in other sessions until the new record is flushed to the
database and the enclosing transaction is committed; however,
insofar as updates made in session A change the location of the new record in any index on the sample table, those index updates will change where, and whether, the new record can be found in other sessions.

| | INSERT | UPDATE | DELETE |
|---|---|---|---|
| | | | Deleted record can no longer be retrieved. |
| | Inserted record not available to second session. | Record update affects neither record retrieval nor data visibility in second session. | Deleted record can no longer be retrieved. |
These scenarios were tested by running in session 1 the program dmo_oper.p and in session 2 the driver program dmo_test.p.

lock: share-lock; exclusive-lock;
statements: repeat preselect; query preselect; query preselect scrolling; do preselect
statements type: without where; use-index; where indexed; where not indexed;
lock: share-lock; exclusive-lock;
statements: query for; query for scrolling; for each;
statements type: without where; use-index; where indexed; where not indexed;
lock: no-lock
statements: repeat preselect; query for; query for scrolling; query preselect; query preselect scrolling;
statements type: without where;
lock: no-lock
statements: do preselect each;
statements type: without where; use-index; where indexed; where not indexed;
lock: no-lock
statements: repeat preselect; query for; query for scrolling; for each;
statements type: use-index; where indexed;
lock: no-lock
statements: repeat preselect; query for; query for scrolling; for each;
statements type: where not indexed;
lock: share-lock; exclusive-lock;
statements: query for; query for scrolling; for each;
statements type: use-index; where indexed;
lock: share-lock; exclusive-lock;
statements: query preselect; repeat preselect; do preselect;
statements type: without where; use-index; where indexed; where not indexed;
lock: no-lock
statements: query for; query for scrolling; for each;
statements type: without where; use-index; where indexed; where not indexed;
lock: no-lock
statements: query preselect; repeat preselect; do preselect;
statements type: without where; use-index; where indexed; where not indexed;
statements: repeat preselect each; query preselect; query preselect scrolling; do preselect each;
statements type: without where, use-index, where indexed, where not indexed
locks: share-lock, exclusive-lock
statements: repeat preselect last; for last; do preselect last;
statements type: without where
locks: share-lock, exclusive-lock
statements: query for; query for scrolling; for each;
statements type: without where, use-index, where indexed, where not indexed
locks: share-lock, exclusive-lock
statements: repeat preselect first; for first; do preselect first;
statements type: without where
locks: share-lock, exclusive-lock
statements: repeat preselect each; repeat preselect first; repeat preselect last; query for; query for scrolling; query preselect; query preselect scrolling; for each; for first; for last; do preselect first; do preselect last;
statements type: without where
locks: no-lock
statements: do preselect each;
statements type: without where
locks: no-lock
statements: query for; query for scrolling; for each; do preselect each;
statements type: without where, use-index, where indexed, where not indexed
locks: share-lock, exclusive-lock
statements: repeat preselect each; query preselect; query preselect scrolling;
statements type: without where, use-index, where indexed, where not indexed
locks: share-lock, exclusive-lock
statements: do preselect each;
statements type: without where, use-index, where indexed, where not indexed
locks: no-lock
statements: query for; query for scrolling; for each;
statements type: without where, use-index, where indexed, where not indexed
locks: no-lock
statements: repeat preselect each; query preselect; query preselect scrolling; do preselect each;
statements type: without where, use-index, where not indexed
locks: share-lock, exclusive-lock
statements: repeat preselect first; repeat preselect last; query for; query for scrolling; for each; for first; for last; do preselect first; do preselect last;
statements type: without where, use-index, where indexed, where not indexed
locks: share-lock, exclusive-lock
statements: repeat preselect each; query preselect; query preselect scrolling; do preselect each;
statements type: where indexed
locks: share-lock, exclusive-lock
statements: repeat preselect first; repeat preselect last; query for; query for scrolling; query preselect; query preselect scrolling; for each; for first; for last; do preselect first; do preselect last;
statements type: without where, use-index, where indexed, where not indexed
locks: no-lock
statements: query for; query for scrolling; for each;
statements type: without where, use-index, where indexed, where not indexed
locks: no-lock
statements: repeat preselect each; do preselect each;
statements type: without where, use-index, where indexed, where not indexed
locks: no-lock
statements: repeat preselect each; query preselect; query preselect scrolling;
statements type: without where, use-index, where indexed, where not indexed
locks: share-lock, exclusive-lock
statements: query for; query for scrolling; for each; do preselect each;
statements type: without where, use-index, where indexed, where not indexed
locks: share-lock, exclusive-lock
statements: repeat preselect each; query for; query for scrolling; query preselect; query preselect scrolling; for each;
statements type: without where, use-index, where indexed, where not indexed
locks: no-lock
statements: do preselect each;
statements type: without where, use-index, where indexed, where not indexed
locks: no-lock
A DirtyShareManager
manages access to each dirty database instance and its supporting data
structures. There is one such object per primary, permanent
database instance. It is this object which executes queries,
inserts, and updates against the dirty database when information about
uncommitted transaction state in the corresponding, primary database
must be shared. This object lazily creates tables in the dirty
database as needed. Each such table mirrors the structure of its
corresponding relation in the primary database, including indexes,
except that no index (save the surrogate primary key index) is unique,
even if its analog is unique in the primary database. This
enables the P2J runtime to detect potential unique constraint
violations safely, without triggering an actual error in the dirty
database. Only information about uncommitted inserts of new
records and updates of new or existing records is maintained inside the
dirty database. Information about uncommitted record deletes is
maintained in separate data structures, within the dirty share manager
implementation.

Each user session holds a DirtyShareContext, one per primary database instance. This DirtyShareContext
instance is that session's access point to the associated DirtyShareManager
.
When a session needs information about uncommitted state in other
sessions, or needs to share information about its own uncommitted
state, it does so using the DirtyShareContext
API. A DefaultDirtyShareContext
implementation coordinates with the appropriate DirtyShareManager
implementation object for permanent tables. A NopDirtyShareContext
implementation, whose methods generally do nothing and return quickly,
is used for temporary tables. This choice was made to avoid
having to write and maintain specialized runtime code specifically for
temporary tables in many places, since so much of the runtime is common
for permanent and temporary tables.

When a database is hosted by a remote P2J server, the DirtyShareContext
instance for that database will coordinate with a special implementation of the DirtyShareManager
interface: RemoteDirtyShareManager
. RemoteDirtyShareManager
acts as a local proxy to the remote database's DirtyShareManager
instance. All requests are executed over the network to the
remote server, where they are serviced by the appropriate dirty share
manager. The remote nature of a database and any dirty sharing
done with respect to that database is thus logically transparent to the
requesting session.

The HQLPreprocessor
analyzes each where clause and determines which types of DMOs must have
their uncommitted changes leaked when the associated query is
executed. RandomAccessQuery
and PreselectQuery
use this information to trigger the dirty share manager to update its
snapshots of the associated, modified records in the current session
via DirtyShareContext.updateSnapshots(List<String>, Persistence), effectively publishing these uncommitted updates to all sessions.

Validation responsibilities are divided as follows:

- RecordBuffer and certain query classes that contain it manage when validation occurs.
- RecordBuffer$ValidationHelper manages what needs to be validated.
- DMOValidator manages how validation occurs.

In batch assign mode, validation is deferred until the batch ends (i.e., until RecordBuffer.endBatch()).
Finally, a converted BUFFER-COPY statement will update multiple
(potentially all) of a DMO's properties. Again, validation of the
changes to the destination DMO is deferred until all targeted
properties are copied.

Application code can also invoke the RecordBuffer.validate(DataModelObject) method explicitly. This triggers full validation of the target DMO.

The ValidationHelper
inner class of RecordBuffer
manages the state which tracks which properties and unique constraints have been validated and which still require validation.

When RecordBuffer
determines a particular level of validation is necessary, it uses the ValidationHelper
to prepare a ValidationData
instance. This object contains metadata about the validation to be performed. ValidationHelper
passes this object to an instance of DMOValidator
. One instance of DMOValidator
is created for each DMO type. These instances are immutable and
stateless (with respect to any discrete validation operation, at
least). As such, they can be shared across user contexts and are
thus created lazily and cached in a static map by DMO type. The DMOValidator
interrogates the ValidationData
object to determine which nullability and unique constraints must be checked.

A failed check causes a ValidationException to be thrown, which is caught by the calling RecordBuffer and reported to the user as an error.

If a unique constraint would be violated, a ValidationException is thrown. If all checks pass, the change is immediately flushed
to the database. This flushing and checking is synchronized
at an index level, so that only one user context can be updating an
index at a time. It is also at this stage (for permanent tables)
that index changes are shared with other sessions via the dirty share manager.

| Relation | DMO Instance Member |
|---|---|
| many-to-one; "many" end represents the "foreign" end of the relation | DMO on "many" end stores direct reference to DMO at the "one" end of the relation |
| one-to-many; "many" end represents the "foreign" end of the relation | DMO on "one" end stores a Set of DMOs at the "many" end of the relation |
| one-to-one; conversion hint determines "foreign" end of the relation | DMO on either end stores direct reference to DMO at the opposite end of the relation |
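The relation mappings in the table above can be illustrated with a pair of hypothetical DMOs (Customer and Order are invented names for this sketch; real DMOs are generated during conversion and carry Hibernate mapping metadata):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical DMOs illustrating the relation mappings: the "many" end
// stores a direct reference, the "one" end stores a Set of child DMOs.
public class RelationSketch
{
   /** The "one" end of a one-to-many relation stores a Set of child DMOs. */
   public static class Customer
   {
      private final Set<Order> orders = new HashSet<>();

      public Set<Order> getOrders()
      {
         return orders;
      }
   }

   /** The "many" (foreign) end of a many-to-one relation stores a direct reference. */
   public static class Order
   {
      private Customer customer = null;

      public Customer getCustomer()
      {
         return customer;
      }

      public void setCustomer(Customer customer)
      {
         this.customer = customer;
         customer.getOrders().add(this);   // keep both ends of the relation in sync
      }
   }
}
```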
The <record> of <table> join construct must be supported by the use of foreign keys. P2JQuery
implementations support the use of an inverse
DMO, either at query construction, or when adding a component to a
multi-table query. The inverse
parameter represents
the DMO instance on the other end of the foreign relation which defines
the natural join. When such a parameter is provided to a query,
the persistence runtime does the necessary work behind the scenes to
execute an ANSI-style join between the necessary tables using foreign
keys. Calling code needs to provide the inverse
parameter to a query constructor or to an addComponent()
method variant, but the natural join query support is otherwise
transparent to application code. The natural join support relies in part on the RecordBuffer class.

Foreign key information is recorded in the dmo_index.xml file for the application, specifically in the path dmo-index/schema/class/foreign/property/.
This document is created during application conversion and is read by
the persistence framework at P2J server initialization.AssociationSyncher
class does the
grunt work of foreign key synchronization. At defined points
(after a batch assignment of property values, or upon the assignment of
a single property if not in batch assign mode), this class will query
the database for the foreign record which matches the new property
value (or set of new values), and update the appropriate instance
member in the current buffer record with the DMO, if any, found.
When this change is flushed to the database, Hibernate automatically
updates the associated foreign key. For further details, see the com.goldencode.p2j.persist.pl package description.

| Wrapper Type | Feature | Notes |
|---|---|---|
| com.goldencode.p2j.util.character | Case sensitivity | Case sensitivity is an attribute of the P2J character class. However, server-side database functions currently use java.lang.String as the data type for parameters and return types where a text value is needed. Thus, all text comparison operations which occur internally within the database function will be case sensitive. For case-insensitive comparisons, this issue usually can be circumvented by uppercasing text parameters in the HQL expression as necessary, before they are passed to the database function. However, functions whose work relies upon querying or setting this attribute internal to the function implementation will not necessarily behave correctly. Functions must therefore be reviewed on a case by case basis to determine whether such a dependency exists. |
| com.goldencode.p2j.util.decimal | Precision | The P2J implementation of decimal uses a default precision value of 10 places after the decimal point. However, server-side database functions currently use java.lang.Double as the data type for parameter and return types where a non-integral, numeric value is needed. Thus, database function parameters which use a non-default precision setting currently disqualify a particular function or operator instance from converting to a server-side database function. Likewise, database functions which set the precision value to a non-default value cannot be converted to server-side functions. |
| com.goldencode.p2j.util.date | Timezone | The P2J implementation of date is timezone-neutral by default. If a where clause expression requires a specific timezone setting for a function parameter or for an operand of a logical or arithmetic operation, the subexpression containing that function call or operation currently cannot be converted to a server-side database function. |
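The uppercasing workaround mentioned for the character type can be illustrated as follows. The class and method names are invented for this sketch; in practice the uppercasing is applied to text parameters in the HQL expression before they reach the database function:

```java
// Sketch of the case-insensitivity workaround described above: since a
// server-side function compares java.lang.String values case sensitively,
// both operands are uppercased before the comparison is performed.
public class CaseWorkaround
{
   /** Case-sensitive comparison, as a server-side function would perform it. */
   public static boolean matchesExact(String a, String b)
   {
      return a.equals(b);
   }

   /** Case-insensitive variant: uppercase both operands before comparing. */
   public static boolean matchesIgnoreCase(String a, String b)
   {
      return a.toUpperCase().equals(b.toUpperCase());
   }
}
```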
The persist
package includes the WhereExpression
class. Each query type in the persist
package
allows one (or multiple, for some types) WhereExpression
instance to be registered with the query. Where clauses in the
pre-conversion application are analyzed during conversion. All
components of the where clause which can be executed at the database
server are distilled into an HQL where clause. Those which
require client-side where clause processing result in an anonymous WhereExpression
subclass being defined as an instance variable within the converted
program's Java class. This conversion implementation is designed
to minimize the number of records brought back to the P2J server for
client-side testing, by maximizing the restriction which occurs at the
server. However, this is not always possible and in the worst
case scenario will result in every record in a table being returned to
the P2J server for client-side testing, in the case where no HQL
expression can be distilled from the original where clause.

Another problem area is dates. java.sql.Date
is actually a time and date value rolled into one, including a
particularly confusing implementation of timezones, which may have
implications on the use of dates in the persistence runtime. The
most obvious problem from a runtime perspective is the mismatch of
valid date ranges supported by different database vendors, which
creates the possibility that we will encounter dates within a query's
where clause which are supported by Progress (and Java), but may be out
of bounds (i.e., too early or too late) for a particular backend. Sending such a date to the backend would result in an SQLException.
Therefore, we cannot allow out of bounds dates to be sent to the
backend. This problem may occur both with dates which are used as
query substitution parameters, as well as date literals which are
embedded directly in the where clause text. The former issue has
been addressed by clipping query substitution parameter dates
which are invalid (i.e., too early or too late) with regard to the
range of dates supported by the backend database's date type. The
latter issue remains unaddressed; in this case, a different
approach will be necessary, probably
involving some advanced analysis by the HQLPreprocessor
.

When a query substitution parameter is an instance of com.goldencode.p2j.util.date, we check whether its value
is outside the
range supported by the backend implementation. If so, we reset
the
value to the closest possible date supported by the backend. A
debug message (level FINE) is
logged whenever such a substitution is made.

The supported date range for each backend is determined by the dialect, com.goldencode.p2j.persist.dialect.P2JDialect, to match the implementation of com.goldencode.p2j.util.date.

This clipping approach generally is safe for range matches (e.g., ...where my_date >= ? and
my_date <=
?
). However, it may cause a change in behavior for
equality and
inequality matches (e.g., ...where my_date = ?
). We
chose to log the message at level FINE, rather than
WARNING. In cases where the substitution is necessary, it may
occur
quite often, particularly for dynamic queries executed within a loop.
Thus, the cost of composing and logging the message
repeatedly would become significant in a production environment.
Therefore, it should be noted that this higher level of log filtering
may mask an unintended change in query behavior, which may only be
detectable by its subtle side effects (missing records, too many
records found, etc.). We
deemed this risk to be minimal, since it would be unusual for a
business application to rely upon date processing outside the date
ranges supported by most modern database implementations.
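The clipping strategy described above can be sketched as follows. The class name and the range bounds used in the comments are illustrative only; the real bounds are dialect-specific:

```java
import java.sql.Date;
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch of the date clipping described above: a query substitution
// parameter outside the backend's supported range is reset to the closest
// supported date, and the substitution is logged at level FINE. The real
// bounds come from the dialect (com.goldencode.p2j.persist.dialect.P2JDialect).
public class DateClipper
{
   private static final Logger LOG = Logger.getLogger(DateClipper.class.getName());

   /** Clip a date to [min, max], logging at FINE when a substitution occurs. */
   public static Date clip(Date value, Date min, Date max)
   {
      if (value.before(min))
      {
         LOG.log(Level.FINE, "Clipped out-of-range date {0} to {1}",
                 new Object[] { value, min });
         return min;
      }
      if (value.after(max))
      {
         LOG.log(Level.FINE, "Clipped out-of-range date {0} to {1}",
                 new Object[] { value, max });
         return max;
      }
      return value;
   }
}
```

As noted above, clipping preserves the results of range comparisons but may silently alter the results of equality and inequality matches, which is why the substitution is logged, albeit at a low level.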