Feature #4011
database/persistence layer performance improvements
#5 Updated by Greg Shah about 5 years ago
This issue is for tracking ideas about improving the performance of the database and the persistence framework in FWD. Separate issues should be opened as these are implemented.
- #4056 instrument FWD and supporting libraries to measure database performance
- #3194 optimize FINDs enclosed in loops iterating over rows in a related buffer
- #4030 improve CompoundQuery runtime optimizer
- #6582 implement multi-table AdaptiveQuery
- #3896 improve performance of the _lock metadata implementation
- #3958 BUFFER-COPY optimization
- #4016 less Hibernate
- #4015 newer Hibernate
- #4018 possibly move temp-table support into persistent database
- #4019 move dirty database implementation into persistent database
- #4054 minimize use of dirty database to only tables which need it
- #4055 optimize temp-table output parameter copying
- #4020 reduce amount of flushing
- #4021 keep Hibernate sessions alive longer to amortize cost of prepared statements and context-level caches better
- #4033 avoid query for FIND unique when the current record in the buffer would match the query result
- PostgreSQL optimization
- H2
- #4057 / #6679 h2 performance degradation
- #4701 h2 transaction commit overhead
- #4702 tt performance testcases
- #4703 investigate whether performance of TempTableDataSourceProvider can be improved
- #4996 Integrate hash indexes in H2 temp-tables
- #5003 Enhance performance of H2 sequences
- #6829 H2 forces re-parse of all prepared statements when metadata is changed
- #6928 h2 update performance improvement
- #6995 possibly support NO-UNDO temporary tables directly in H2
- #7108 Simulate upper/rtrim directly in H2 using case specific columns
- #7252 Use direct-access to support for-each queries without WHERE and BY clauses
- #7323 Implement soft unique index in FWD-H2
- #7321 Improve FWD-H2 wildcard projection
- #7138 Short circuit fake-update (update with a new value equal to the old value) in H2
- #7185 H2 in-memory lazy hydration
- #7382 Check performance of delete from vs drop table in H2
- #7447 Compare ValueString and ValueStringIgnoreCase faster in FWD-H2
- #7448 Optimize FWD-H2 ValueTimestampTimeZone and maybe avoid caching
- #7454 Make ValueStringIgnoreCase the default generated value for setString in FWD-H2
- #7459 Implement BucketIndex in FWD-H2 to support multiplex
- #7488 Slow fast-copy with before tables in H2
- #8496 investigate if a 'batch insert with deactivated index' in H2 is possible
- #8451 improve performance of H2 String column comparison
- #8459 investigate changing H2 to allow NULL to equal NULL (and the impact on the other compare operators)
- #4058 consider denormalizing tables as the default approach
- #4060 investigate converting extent fields to array columns
- #4128 defer transaction start until the time when real edits begin
- #4307 DMO conversion changes
- #4825 OutputTableHandleCopier dynamically defines destination temp-table
- #4917 eliminate redundant ORDER BY elements in multi-table queries
- #4928 rewrite FieldReference to use lambdas instead of reflection
- #4931 possible ProgressiveResults performance improvement
- #4949 implement PreparedStatement cache for temp-table connections
- #5685 limit iteration of RecordBuffers in Commitable hooks by scope
- #6707 evaluate the effectiveness of the current approach to ProgressiveResults
- #6708 track/report at runtime when AdaptiveQuery shifts into dynamic mode
- #6709 track nested FIND inside a related FOR loop
- #6710 persist dynamic temp-tables across server restarts
- #6711 skip schema AST creation and go directly to .p2o in dynamic temp-table generation
- #6712 check generated dynamic temp-table caching to ensure that all cache keys work when generating hits
- #6713 check if it is faster to use separate temp-table instances instead of multiplex id
- #6714 check if there is an optimization opportunity when copying temp-tables based on multiplex id
- hydration improvements
- #6813 improvement of dynamic query and temp-table conversion
- #6815 configure all cache sizes in the directory, and create documentation for them
- #6816 improve PreselectQuery.assembleHQL
- #6823 improve TemporaryBuffer/RecordBuffer define and buffer delete
- #6825 improve table meta runtime (TableMapper and other)
- #6830 find and fix all SQL SELECT statements with inlined literal arguments, and rewrite them using arguments and prepared statements
- #6887 improve performance of dataset/table[-handle] parameters
- #6895 improve performance of BufferReference methods buffer() and definition()
- #6939 improve proxy performance by hard-coding method invocation (not strictly database-related, but heavily used by persistence)
- #2176 exceptions to rereadnolock
- #7165 Close multiplex scope using prepared statement
- #7167 Associating records from opened buffers to new sessions is slow
- #7174 resolve simple CAN-FIND statements faster
- #7258 Optimize INDEX-INFORMATION with closed dynamic queries
- #7194 Avoid generating an ORDER-BY clause if not required
- #7329 Improve FieldReference resolution of getter/setter/extent accessors
- #7330 Increase psCache size
- #7334 Reclaim used sessions to improve performance
- #7351 Reduce SQL query string size of an INSERT INTO SELECT FROM
- #7352 Mismatching results when comparing case-insensitive/case-sensitive values between 4GL and FWD
- #7358 Optimize Session.invalidateRecords
- #7366 improve performance of buffer.fill
- #7367 disable TriggerTracker for TemporaryBuffer
- #7379 cache the def and bound DMO property info in TemporaryBuffer$ReferenceProxy
- #7403 Copy-temp-table with replace-mode should replace already inserted records
- #7404 Transform replace-mode into append-mode when target table is empty
- #7420 Optimize Cursor for scrolling AdaptiveQuery
- #7421 Check only the indexes that were changed when using Validation.checkMaxIndexSize
- #7489 Lazily initialize DMO signature
- #7963 disable dirty share database and test existing customer applications
- #7964 investigate the benefits of disabling cross-session features of the dirty share database (without completely eliminating all dirty share functionality)
- #7999 FWD does not honor FIELDS/EXCEPT at dynamic queries
- #8593 H2 performance improvement for table 'truncate by multiplex'
#6 Updated by Eric Faulhaber almost 5 years ago
Running the instrumentation added in trunk rev 11323 (#4056) has indicated that common persistence scenarios spend a large proportion of time in the FWD persistence runtime, Hibernate, and H2. Less time than previously thought is being spent in the PostgreSQL JDBC driver and on down through the database stack. As a result, we are focusing the current performance effort on:
- replacing as much of Hibernate as possible with a leaner, bespoke ORM implementation that is tailored to better support FWD's use patterns;
- bypassing H2 where possible (moving what processing we can out of H2 into FWD runtime code) and possibly improving H2 itself to address the performance-challenged use patterns that remain;
- making any improvements possible in the FWD persistence runtime (specific ideas documented in #4011-5).
#7 Updated by Eric Faulhaber over 4 years ago
Ovidiu wrote (in email):
At this moment (at least for the _temp database, but I believe the permanent databases behave in a similar manner), changes to a DMO do not follow the same course as with Hibernate. In detail: if a freshly created record gets its first field assigned, the record is Persister.insert()-ed into the database (because it is transient at this moment). When a second field is assigned, the change does not trigger a flush operation. In fact, the record is never persisted again, so when it is retrieved, only the field that was assigned the first time is visible.
Compared to Hibernate, we are too eager to do this first insert. Hibernate tracks all entities and auto-flushes them when required, in batch. This is helpful from a performance POV because it uses a single insert statement, issued as late as possible, instead of an insert and multiple updates for subsequent changes.
I believe we should do the same: keep all entities in a separate cache and, instead of inserting them in place, delay this too. I noticed the RecordState class, which is a component of all BaseRecords, but I have not seen any actual usage. I will try to integrate it in this caching implementation.
The time is overdue for me to share more of my design choices with you. In our early work, we removed Hibernate references and replaced them with stubs to get the compile going. But the intention was not to implement all those stubs. We are doing things differently now.
For one, I don't intend to re-implement Hibernate's transparent write-behind mechanism (their name for this last-possible-moment flushing) in FWD. I've spent 15 years fighting the impedance mismatch between this feature in Hibernate and the way Progress flushes records to the database. I've decided that we can get other performance advantages with a different approach than what we've done until now.
We already track all changes to a DMO and determine when it needs to be "saved to the database" according to 4GL rules. "Saved to the database" is in quotes because today that actually means "registered as persistent" with Hibernate, which as you note above may actually save the record to the database later. As part of this process, we validate the DMO's content against unique constraints by doing two queries for each unique index:
- one against the primary database;
- one against the dirty database.
We are eliminating both of these. Instead, to replace the queries against the primary database, we will directly attempt to insert or update the record in the primary database and handle any exception naturally. In the past, we have not done this because we could not be sure what the state of synchronization was at that moment between the database and Hibernate. So, if the insert/update failed, we were not sure how safe it was to continue using the connection/session. In fact, Hibernate documentation states that the session is no longer safe after most exceptions are thrown, so this was not an option.
However, now we can set a savepoint immediately before the insert/update and commit it if that small unit of work succeeds, or roll it back if it fails. I have tested this with standalone code, but it needs to be implemented in the runtime. This approach will avoid at least two queries per validation, more if there are multiple unique indices. In cases where a table has no unique constraints, it may be slower than the transparent write-behind approach, but in cases of one or more unique indices, I think it will be faster.
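The savepoint pattern described above can be sketched with the standard JDBC API. This is only a sketch of the idea under discussion, not FWD's actual Persister code; SavepointGuard and Work are hypothetical names:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Savepoint;

public class SavepointGuard
{
   /** A small unit of work (e.g. a single insert/update) that may fail. */
   @FunctionalInterface
   public interface Work
   {
      void run() throws SQLException;
   }

   /**
    * Run the work under a savepoint set immediately before it: release the
    * savepoint if the work succeeds, roll back to it if the work fails.
    * Either way, the enclosing transaction and session remain safe to use,
    * which is what the Hibernate session could not guarantee.
    *
    * @return true if the work succeeded, false if it failed and was rolled back
    */
   public static boolean attempt(Connection conn, Work work) throws SQLException
   {
      Savepoint sp = conn.setSavepoint();
      try
      {
         work.run();
         conn.releaseSavepoint(sp);
         return true;
      }
      catch (SQLException exc)
      {
         // e.g. a unique constraint violation; undo only this small unit
         conn.rollback(sp);
         return false;
      }
   }
}
```

The caller replaces the pre-validation unique-index queries with a single attempted write and a boolean outcome.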
However, inserting/updating to the primary database is not enough to fully ensure we won't get into unique constraint trouble when committing the full transaction later. We still need to check against changes in other uncommitted transactions. This is what the second set of queries against the dirty database does today. But that approach is too expensive.
The dirty database itself is being scaled back considerably in its use. It will only be used for tables where we know there is a specific dependency on dirty reads. The actual number of such tables we are aware of is very small. The queries to check for unique constraint collisions against the dirty database are being replaced with an in-memory set of hash maps to track in-play (i.e., uncommitted) unique index updates across all open transactions. This should be much faster than executing SQL against the dirty database.
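The in-memory replacement for the dirty-database uniqueness queries could look roughly like this. A minimal sketch only; the class and method names are hypothetical, not FWD's actual implementation:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/** Tracks in-play (uncommitted) unique index values across open transactions. */
public class InPlayUniqueTracker
{
   // unique index name -> key tuples currently claimed by open transactions
   private final Map<String, Set<List<Object>>> inPlay = new HashMap<>();

   /**
    * Claim a unique key tuple for an uncommitted change.
    *
    * @return false if another open transaction already holds the same tuple,
    *         meaning a unique constraint collision would occur at commit time
    */
   public synchronized boolean claim(String index, List<Object> key)
   {
      return inPlay.computeIfAbsent(index, k -> new HashSet<>()).add(key);
   }

   /** Release a key tuple when its transaction commits or rolls back. */
   public synchronized void release(String index, List<Object> key)
   {
      Set<List<Object>> keys = inPlay.get(index);
      if (keys != null)
      {
         keys.remove(key);
      }
   }
}
```

A hash lookup per unique index replaces a SQL round trip to the dirty database per unique index, which is where the expected speedup comes from.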
The state management for DMOs is not implemented yet, which is why there is only the initial insert now (and this needs to be integrated with the validation and 4GL-compatible flushing as noted above). Updates need to be implemented in this same mechanism. I haven't decided yet whether it would be better to have a reusable update prepared statement which updates ALL columns, even if some of them are updated to their current values, or if it would be better to build up individual update statements for each subset of updated columns. I think Hibernate does the former, as the update statements I see in the database log are always huge. But I'm not 100% certain of this.
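For the second alternative (an individual statement per subset of updated columns), generating the SQL from per-property dirty flags is straightforward; the trade-off is prepared statement reuse versus smaller statements. A sketch with hypothetical names, not the actual Persister code, assuming a surrogate primary key column (here called "recid"):

```java
import java.util.StringJoiner;

public class DirtyUpdateBuilder
{
   /**
    * Build an UPDATE statement covering only the dirty columns.  The
    * alternative discussed is one reusable prepared statement that always
    * updates every column, even those set to their current values.
    */
   public static String build(String table, String[] columns, boolean[] dirty)
   {
      StringJoiner set = new StringJoiner(", ");
      for (int i = 0; i < columns.length; i++)
      {
         if (dirty[i])
         {
            // one parameter placeholder per dirty column
            set.add(columns[i] + " = ?");
         }
      }
      return "update " + table + " set " + set + " where recid = ?";
   }
}
```

The per-subset form produces many distinct SQL strings (one per dirty-column combination), so its prepared statements cache less well than a single all-columns statement; that is the heart of the undecided trade-off.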
The DMO implementation class generation is going more slowly than I would like, but the methods called on those objects will update the state directly within the DMO, rather than all the external data structures we and Hibernate use today. This will give us all the information we need to make decisions about database synchronization directly within the DMO, and I intend to leverage the fact that this information resides there to simplify and improve the performance of sub-transactions and undo handling as well.
#8 Updated by Eric Faulhaber over 4 years ago
Ovidiu, I saw you were working on the update statement part of the process. Please discuss your thoughts here. My intention was that this would be the purview of the Persister class, in a similar fashion to the way the insert method works. An update method would use the RecordState object stored in BaseRecord.state to determine which properties are dirty. I am working on the infrastructure now to update these states when a property's value is changed. I don't want us to end up with diverging implementations, so please document your intentions here.
#9 Updated by Eric Faulhaber over 4 years ago
In the previous post, I meant to refer to the PropertyState and the BaseRecord.propState array w.r.t. individual properties, while the RecordState reflects the state of the entire record.
#10 Updated by Ovidiu Maxiniuc over 4 years ago
I have a not-yet-stable UPDATE implementation in Persister. It uses a combination of both RecordState and PropertyState to identify the current state of a record and its fields. I will commit the changes tomorrow, after further testing. I am waiting for your complete implementation of dirty-detection; for the moment I simply use PropertyState.DIRTY to flag touched properties (as documented in its javadoc).
#11 Updated by Eric Faulhaber over 4 years ago
I have a very rough cut of a DMO implementation class assembler in 4011a/11363. It is not feature complete and it has had zero testing. In fact, I just realized I'm not extending the correct class for temp-tables. But it is a start. I began to integrate it into the persistence framework, but on trying to stand up the FWD server, I got this:
com.goldencode.p2j.cfg.ConfigurationException: Initialization failure
   at com.goldencode.p2j.main.StandardServer.hookInitialize(StandardServer.java:2003)
   at com.goldencode.p2j.main.StandardServer.bootstrap(StandardServer.java:964)
   at com.goldencode.p2j.main.ServerDriver.start(ServerDriver.java:483)
   at com.goldencode.p2j.main.CommonDriver.process(CommonDriver.java:444)
   at com.goldencode.p2j.main.ServerDriver.process(ServerDriver.java:207)
   at com.goldencode.p2j.main.ServerDriver.main(ServerDriver.java:860)
Caused by: java.lang.IllegalStateException: Error loading DMO class index
   at com.goldencode.p2j.persist.DMOIndex.getMap(DMOIndex.java:887)
   at com.goldencode.p2j.persist.DMOIndex.classes(DMOIndex.java:431)
   at com.goldencode.p2j.persist.meta.MetadataManager.initialize(MetadataManager.java:189)
   at com.goldencode.p2j.persist.DatabaseManager.initialize(DatabaseManager.java:1424)
   at com.goldencode.p2j.persist.Persistence.initialize(Persistence.java:873)
   at com.goldencode.p2j.main.StandardServer$11.initialize(StandardServer.java:1209)
   at com.goldencode.p2j.main.StandardServer.hookInitialize(StandardServer.java:1999)
   ... 5 more
Caused by: com.goldencode.p2j.MissingDataException: DMO index cannot be loaded from location 'com/goldencode/testcases/dmo/dmo_index.xml'
   at com.goldencode.p2j.persist.DMOIndex.parseXML(DMOIndex.java:1369)
   at com.goldencode.p2j.persist.DMOIndex.getMap(DMOIndex.java:883)
   ... 11 more
I had just converted the simplest test case along with ask.p, compiled and jar'd them both, and started a simple server. The test case is not relevant, since the server won't initialize, and we have to get that going as a first step.
Ovidiu, could you please work on this while I test the DMO assembler in parallel? I know you've been working to replace DMOIndex with DmoMetadataManager. Are we ready to pull out the dmo_index.xml initialization from server startup?
#12 Updated by Ovidiu Maxiniuc over 4 years ago
- first: at start-up, DMOIndex was reading the dmo_index.xml, so it had more information on the DMOs, more specifically the relations between the DMOs, stored using foreign interface tags in the dmo_index.xml file. The problem is that the jast of the DMOs does not contain this info;
- second: the same dmo_index.xml is used to fill the _file meta table.
#13 Updated by Eric Faulhaber over 4 years ago
Getting rid of dmo_index.xml is a long-term goal, not a short-term requirement. I thought you had begun to undertake this effort with some of the changes you were making, but the main goal right now is to get the server up and running again. If this means living with dmo_index.xml a while longer, that's fine.
#14 Updated by Ovidiu Maxiniuc over 4 years ago
Current state of the task (from email):
I have just committed the revision 11365 of 4011a. I tested it, and the build process is successful. The conversion should also work for basic testcases and for me the server starts and runs some queries.
The FWD project now runs without the Hibernate library. The FqlToSqlConverter is used each time a query conversion is necessary. The log level is configured a little too noisy, so all the converted queries are displayed. Adding the creation of temp-table DDLs and insert/update messages, the log can be a bit difficult to read, but it will give the reader an overview of what is happening in the background.
DMOIndex is almost disconnected. I identified only two use cases which require the information from the dmo_index.xml which it reads and provides to FWD:
- the full set of tables needed by the _file meta table;
- the relations between the tables marked in foreign nodes. This information is not yet accessible in DMOs. I believe this is the right location for it (similar to indices). Much of this information was copied/moved to the new DmoMetadataManager.
At this moment, the DMOIndex is disabled, so the dmo_index.xml is not required/parsed.
- is the internal Buf interface still needed? It looks to me like the DMO proxy creation can take the Buffer, BufferReference, and DMO interfaces directly to build around the BufferImpl class. The methods used by extent fields can be moved directly into the 'main' DMO interface. Please add support for them in the new DmoClass along with the basic setters and getters. There should be 3 getters (T getField(int), T getField(NumberType), and the bulk T[] getField()) and 4 setters for each extent field (void setField(int, T), void setField(NumberType, T), void setField(T[]), void setField(T));
- also, it looks to me like the CompositeN internal static classes will go away naturally, since the in-memory back-end for fields is the new data array. I have already eliminated their analysis in TableMapper/LegacyTableInfo.
In the absence of the DMO implementation, I use classes generated by the old conversion, with minimal manual adjustments. To level out the in-memory storage difference, I temporarily created two mirror utility methods: populateData() and populateFields(). They convert data to/from the old fields of the DMO implementation to the new data array. Note that the values stored in the latter are BDT, instead of java plain types. I found it to be simpler temporarily. However, the access from the other side (when setting the values in some PreparedStatement) is done via ParameterSetter implementors (see TypeManager). Using the getTypeHandler API you will get a specific instance that handles setting parameters of both plain Java and FWD types, so there is no runtime issue here. The two utility methods are called quite aggressively at this moment, but they will be removed as the DmoClass becomes operational. So they are marked as @Deprecated.
My testcases (which were running about a week ago, although incorrectly because of the lack of UPDATE statement support) now disconnect the client with errors like this:
java.lang.AbstractMethodError: Method com/goldencode/p2j/persist/$__Proxy1.setT100F4(ILcom/goldencode/p2j/util/logical;)V is abstract
   at com.goldencode.p2j.persist.$__Proxy1.setT100F4(Unknown Source)
   at com.goldencode.testcases.p4011.T43071.lambda$null$4(T43071.java:101)
At line 101 there is something like t100Abc.setT100F4(0, new logical(false));, in a batch (as the 3rd assignment).
#15 Updated by Ovidiu Maxiniuc over 4 years ago
And the answer from Eric (also by email):
I think you are correct that the Buf interface can be removed, but I am not 100% sure yet. Please leave it in for now. I have to look into that more and catch up with you there.
You are definitely correct that all the CompositeN classes go away with the new implementation.
As the priority is to get real code running, please continue to work through the AbstractMethodError issue and any other issues preventing this goal. To the extent the issues are in the DMO assembler, let me know, though I guess this will not be the case, as you are using manually adjusted DMOs at the moment.
I am not overly concerned with the foreign node info in dmo_index.xml just now, though we will need to retain support for that ultimately, so it will have to find a new home. Perhaps there is a clean way to include this information in the DMO interface annotations?
I am working on verifier errors for the assembled DMO implementation classes, which just represent errors in my bytecode generation. I will try to get a cleaner version checked in tomorrow.
#16 Updated by Eric Faulhaber over 4 years ago
FYI, I've gotten through the verifier errors and I'm trying a simpler test case than yours (to begin with):
def temp-table tt field f1 as int.
create tt.
tt.f1 = 1.
find first tt where tt.f1 = 1.
So, now it's time for me to replace the temporary scaffolding you put in place (i.e., populateFields and populateData), because I'm getting this:
Failed to set ? (com.goldencode.p2j.util.integer) to com.goldencode.testcases.dmo._temp.Tt_1_1__Impl__
java.lang.reflect.InvocationTargetException
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:498)
   at com.goldencode.p2j.persist.orm.BaseRecord.populateFields(BaseRecord.java:218)
   at com.goldencode.p2j.persist.orm.BaseRecord.copy(BaseRecord.java:70)
   at com.goldencode.p2j.persist.TempRecord.copy(TempRecord.java:181)
   at com.goldencode.p2j.persist.Record.snapshot(Record.java:25)
   at com.goldencode.p2j.persist.TempRecord.snapshot(TempRecord.java:175)
   at com.goldencode.p2j.persist.TempRecord.snapshot(TempRecord.java:9)
   at com.goldencode.p2j.persist.RecordBuffer.create(RecordBuffer.java:9382)
   at com.goldencode.p2j.persist.BufferImpl.create(BufferImpl.java:1089)
   ...
#17 Updated by Eric Faulhaber over 4 years ago
Ovidiu, I think there is a fundamental misunderstanding in your implementation of populateData and, as I dig further into your implementation of other functionality, in other places as well.
We do not store BDTs in the BaseRecord data array. This array is for lower-level types that have less overhead than the BDTs. For example, we store java.lang.Integer there, not the BDT integer. This is intentional, to avoid the overhead built into the BDTs when doing low-level operations.
The translation point is the _{get|set}<BDT subclass> methods in Record. DMO implementation classes are built on top of Record and invoke these. Their getter and setter methods are meant to be called (via proxy) by business logic, but we don't ever (if possible) want to call those getters and setters below the Record class. Record translates between the BDTs and lower-level types. The persistence framework (other than Record) should only be dealing with the lower-level types.
Could you please review your implementation so far and see where there are dependencies on the DMO getters and setters and BDT types in the framework? Then let's discuss...
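The layering described here can be illustrated with a toy sketch. RecordSketch and BdtInteger are simplified stand-ins invented for this illustration; the real Record class and BDT types are far richer:

```java
/**
 * Toy illustration of the layering: the data array holds low-level Java
 * types (java.lang.Integer), and only the Record layer translates to and
 * from the BDT wrappers used by business logic.
 */
public class RecordSketch
{
   /** Hypothetical stand-in for a BDT wrapper such as the 4GL integer type. */
   public static final class BdtInteger
   {
      private final Integer value;

      public BdtInteger(Integer value) { this.value = value; }

      public Integer toJavaType() { return value; }
   }

   // low-level storage; never holds BDT instances
   private final Object[] data;

   public RecordSketch(int fields)
   {
      data = new Object[fields];
   }

   /** Translation point: wrap the low-level value on the way out. */
   public BdtInteger _getInteger(int offset)
   {
      return new BdtInteger((Integer) data[offset]);
   }

   /** Translation point: unwrap the BDT to its low-level value on the way in. */
   public void _setInteger(int offset, BdtInteger value)
   {
      data[offset] = value.toJavaType();
   }
}
```

Everything below the translation methods touches only the low-level values, so bulk operations like copy and snapshot never pay the BDT overhead.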
#18 Updated by Ovidiu Maxiniuc over 4 years ago
Eric, I am aware of this. In fact, I wrote in note 14 about BDT used temporarily in data (search for "Note that the values stored in the latter are BDT, instead of java plain types. I found it to be simpler temporarily").
#19 Updated by Eric Faulhaber over 4 years ago
Whew, ok good :)
I haven't gotten my head fully around your FQL stuff yet, and I was worried the dependency might be more pervasive.
#20 Updated by Eric Faulhaber over 4 years ago
Ovidiu Maxiniuc wrote:
Eric, I am aware of this. In fact, I wrote in note 14 about BDT used temporarily in data (search for "Note that the values stored in the latter are BDT, instead of java plain types. I found it to be simpler temporarily").
Sorry, must have read that too quickly.
#21 Updated by Eric Faulhaber over 4 years ago
Ovidiu Maxiniuc wrote:
As the priority is to get real code running, please continue to work through the AbstractMethodError issue and any other issues preventing this goal. To the extent the issues are in the DMO assembler, let me know, though I guess this will not be the case, as you are using manually adjusted DMOs at the moment.
I've fixed an AbstractMethodError as part of my DmoClass fixes, which I think is the same as this. I'll commit shortly.
#22 Updated by Eric Faulhaber over 4 years ago
I'm getting a Failed to locate ALIAS for Tt_1_1__Impl__ error on a select. Is there an implicit assumption somewhere on the old DMO implementation class naming convention, which I need to fix up?
#23 Updated by Ovidiu Maxiniuc over 4 years ago
Eric Faulhaber wrote:
I'm getting a Failed to locate ALIAS for Tt_1_1__Impl__ error on a select. Is there an implicit assumption somewhere on the old DMO implementation class naming convention, which I need to fix up?
This is something new. The problem is that DmoMetadataManager.registerDmo() takes the DMO interface when updating its internal registry. The Tt_1_1__Impl__ is the DMO implementation class. Until now, the implementing class existed physically from the beginning. Please insert the following code in DmoMetadataManager.registerDmo() at line 144:
registry.put(getImplementingClass(dmoSharedIface).getSimpleName(), dmo);
This will cause the implementation proxy to be built and added to the manager's registry. Probably line 143:
registry.put(dmoSharedIface.getSimpleName(), dmo);
can be dropped.
#24 Updated by Eric Faulhaber over 4 years ago
Thanks, that helped. Now I get:
INFO: DmoMetadataManager: com.goldencode.testcases.dmo._temp.Tt_1 queried but was not registered beforehand.
So, maybe that interface registry line is needed after all, or maybe we need to flow the same change to more places. I will check in what I have and let you take it from here, if that's ok.
BTW, there is no reason (that I know of) we have to use the implementation class names in the FQL queries. That was something that came from Hibernate. We can use the shorter interface names instead. Not sure if it's worth the effort to change at this point, though, or if there is some technical limitation I'm not thinking of ATM. What do you think?
#25 Updated by Ovidiu Maxiniuc over 4 years ago
Please move the line that registers the DMO class to just before returning the dmoIface (i.e. after the last if). This should fix it (by adding the Tt_1 interface into the registry before trying to access it).
#26 Updated by Eric Faulhaber over 4 years ago
Ovidiu Maxiniuc wrote:
Please move the line that registers the DMO class to just before returning the dmoIface (i.e. after the last if). This should fix it (by adding the Tt_1 interface into the registry before trying to access it).
Actually, I was already committing rev 11366. Can you look at it, please? I don't think my change is quite what you wanted.
#27 Updated by Eric Faulhaber over 4 years ago
There is still work I need to do on the DMO class assembler, but I wanted to try to get a real, converted, test code going before you left for the holidays. Getting close...
Now I have this to work out:
[12/19/2019 15:09:46 EST] (TransactionManager.registerFinalizable:WARNING) {00000001:00000009:bogus}
<depth = 4; trans_level = -1; trans_label = null; rollback_scope = -1; rollback_label = null; rollback_pending = false; in_quit = false; retry_scope = -1; retry_label = null; ignore_err = false>
[label = methodScope; type = EXTERNAL_PROC; full = false; trans_level = SUB_TRANSACTION; external = true; top_level = true; loop = false; loop_protection = true; had_pause = false; endkey_retry = false; next_or_leave = leave; is_retry = false; needs_retry = false; FOR (aggressive) flushing = false; ilp_count = -1; pending_break = false; properties = 'ERROR, ENDKEY']
Illegal attempt to register Finalizable
java.lang.Throwable
   at com.goldencode.p2j.util.TransactionManager.registerFinalizableAt(TransactionManager.java:3515)
   at com.goldencode.p2j.util.TransactionManager.access$5600(TransactionManager.java:588)
   at com.goldencode.p2j.util.TransactionManager$TransactionHelper.registerFinalizable(TransactionManager.java:7316)
   at com.goldencode.p2j.persist.RecordBuffer.flush(RecordBuffer.java:5957)
   at com.goldencode.p2j.persist.RecordBuffer.setCurrentRecord(RecordBuffer.java:11053)
   at com.goldencode.p2j.persist.RecordBuffer.finishedImpl(RecordBuffer.java:10249)
   at com.goldencode.p2j.persist.RecordBuffer.deleted(RecordBuffer.java:5612)
   at com.goldencode.p2j.util.TransactionManager.processFinalizables(TransactionManager.java:6364)
   at com.goldencode.p2j.util.TransactionManager.popScope(TransactionManager.java:4431)
   at com.goldencode.p2j.util.TransactionManager.access$6700(TransactionManager.java:588)
   at com.goldencode.p2j.util.TransactionManager$TransactionHelper.popScope(TransactionManager.java:7977)
   at com.goldencode.p2j.util.BlockManager.topLevelBlock(BlockManager.java:7890)
   at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:467)
   at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:438)
   at com.goldencode.testcases.Junk.execute(Junk.java:24)
   ...
#28 Updated by Ovidiu Maxiniuc over 4 years ago
I have the following compile issue in src/com/goldencode/p2j/persist/orm/DmoClass.java:
Error:(320, 53) java: cannot find symbol
   symbol:   variable METH_CLINIT
   location: class com.goldencode.p2j.persist.orm.DmoClass
#29 Updated by Ovidiu Maxiniuc over 4 years ago
The DmoClass does not have full support for EXTENT fields. For completeness, there are two more getters and three more setters to be implemented. For example, for a logical extent field, the following set of methods is needed:
logical com.goldencode.testcases.dmo._temp.T100Abc_1.isT100F4(int)
logical com.goldencode.testcases.dmo._temp.T100Abc_1.isT100F4(NumberType)
logical[] com.goldencode.testcases.dmo._temp.T100Abc_1.isT100F4()
void com.goldencode.testcases.dmo._temp.T100Abc_1.setT100F4(int,logical)
void com.goldencode.testcases.dmo._temp.T100Abc_1.setT100F4(NumberType,logical)
void com.goldencode.testcases.dmo._temp.T100Abc_1.setT100F4(logical[])
void com.goldencode.testcases.dmo._temp.T100Abc_1.setT100F4(logical)
At this moment only the first setter and first getter are supported.
#30 Updated by Ovidiu Maxiniuc over 4 years ago
Sorry to bother. The generated classes are invalid when they contain extent fields. The for at line 252 will create multiple setters and getters for the same property, because they are listed with different entries in propMeta.
LE: The exception is:
java.lang.ClassFormatError: Duplicate method name&signature in class file com/goldencode/testcases/dmo/_temp/T100Abc_1_1__Impl__
   at java.lang.ClassLoader.defineClass1(Native Method)
   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
   at java.lang.ClassLoader.defineClass(ClassLoader.java:642)
   at com.goldencode.asm.AsmClassLoader.findClass(AsmClassLoader.java:186)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
   at com.goldencode.asm.AsmClassLoader.loadClass(AsmClassLoader.java:220)
   at com.goldencode.p2j.persist.orm.DmoClass.load(DmoClass.java:553)
   at com.goldencode.p2j.persist.orm.DmoClass.forInterface(DmoClass.java:302)
   ...
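One way to avoid the duplicate methods would be to collect the requested accessor signatures into an order-preserving set before emitting bytecode, so a property listed under several propMeta entries is generated once. A sketch with a hypothetical helper name, not the actual DmoClass fix:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class MethodDedup
{
   /**
    * Deduplicate method signatures (name + descriptor) while preserving the
    * order in which they were first requested, so the generator emits each
    * method exactly once regardless of how many propMeta entries name it.
    */
   public static List<String> uniqueSignatures(List<String> requested)
   {
      return new ArrayList<>(new LinkedHashSet<>(requested));
   }
}
```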
#31 Updated by Eric Faulhaber over 4 years ago
Ovidiu Maxiniuc wrote:
I have the following compile issue in src/com/goldencode/p2j/persist/orm/DmoClass.java:
[...]
The METH_CLINIT constant was added to DmoAsmTypes, which was committed as part of my last checkin. It compiles ok for me. Perhaps there was a merge problem with DmoClass? Do you have changes in that class?
#32 Updated by Ovidiu Maxiniuc over 4 years ago
Yes, sorry, I missed the file.
#33 Updated by Eric Faulhaber over 4 years ago
I am working my way up to extent fields. I put in the basic getter/setter methods -- incorrectly, apparently ;-) -- but I have not tested with them yet. I know the specialized extent field methods are missing.
I'm just trying to get the very simple test case from #4011-16 to run all the way through right now, before you leave. I have changes to make in the invocation handler as well, and I need to add runtime support for state updates to the DMO. Don't expect any of that to work yet.
So, basically: just a very simple temp-table with one field, one record created, and a simple query against it.
#34 Updated by Ovidiu Maxiniuc over 4 years ago
I see. I started from that point and gradually gained complexity. I'm going back to the basic testcase to have the same view.
#35 Updated by Eric Faulhaber over 4 years ago
Once that is working, I will implement the state changes to get an update test case going. Then expand into a more complex set of data types, extent fields, etc.
One concern I have is how to map the datetimetz type into this model. Although we have 2 columns to represent this in the database, I want to take up only 1 slot in the DMO's data array for one of these. It seems that will require a custom class on the FWD side, but that won't be one of the types supported natively by JDBC. Have to think more about this... If you have any ideas, please let me know.
#36 Updated by Eric Faulhaber over 4 years ago
Ovidiu Maxiniuc wrote:
The for at line 252 will create multiple setters and getters for the same property, because extent properties are listed as multiple entries in propMeta.
Hm, I had intended the block at line 262 to avoid this. I'll look at it in a bit...
#37 Updated by Ovidiu Maxiniuc over 4 years ago
Your basic testcase works for me. Clean convert, launched server, client and got the following output: 0x1 1
after changing the code as follows (to be sure the data is fresh):
def temp-table tt field f1 as int.
create tt.
tt.f1 = 1.
release tt.
find first tt where tt.f1 = 1.
message rowid(tt) tt.f1.
BTW, I have just committed revision 11367 with fixed registration in DmoMetadataManager.
#38 Updated by Ovidiu Maxiniuc over 4 years ago
Eric Faulhaber wrote:
Hm, I had intended the block at line 262 to avoid this. I'll look at it in a bit...
I changed it temporarily to i += next.extent;, so it works for tables/classes with one extent field. But I am not sure this trick will work if there are more.
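The workaround can be illustrated with a hypothetical reconstruction of the accessor-generation loop (the real code lives in DmoClass; the PropertyMeta shape and the exact skip arithmetic here are assumptions for illustration): since an extent field occupies one propMeta entry per element, the loop has to skip past the extra entries to avoid emitting duplicate method signatures.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only -- not the actual DmoClass code. It illustrates why the
// generation loop must skip the extra propMeta entries of an extent field.
public class ExtentSkipSketch
{
   static class PropertyMeta
   {
      final String name;
      final int    extent;   // 0 for scalar fields, element count otherwise

      PropertyMeta(String name, int extent)
      {
         this.name   = name;
         this.extent = extent;
      }
   }

   static List<String> accessorNames(PropertyMeta[] propMeta)
   {
      List<String> names = new ArrayList<>();
      for (int i = 0; i < propMeta.length; i++)
      {
         PropertyMeta next = propMeta[i];
         names.add("get" + next.name);
         names.add("set" + next.name);
         if (next.extent > 0)
         {
            // an extent field is listed once per element; skip the duplicates
            i += next.extent - 1;
         }
      }
      return names;
   }
}
```

With a propMeta array in which an extent-3 field occupies three consecutive entries, the loop emits each accessor name only once instead of three times, avoiding the ClassFormatError.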
#39 Updated by Eric Faulhaber over 4 years ago
Ovidiu Maxiniuc wrote:
Your basic testcase works for me. Clean convert, launched server, client and got the following output:
0x1 1
after changing the code as follows (to be sure the data is fresh):
Cool! But that's a regression. The FIND should act as an implicit release of the record from the CREATE.
#40 Updated by Ovidiu Maxiniuc over 4 years ago
No, it is not. It works without the release, too, but I wanted to be sure it works this way as well.
#41 Updated by Eric Faulhaber over 4 years ago
Hm, I still get the Illegal attempt to register Finalizable crash. You're not seeing this? Do you have any changes not checked in? Did you have conflicts on any of my changes that you resolved manually?
#42 Updated by Ovidiu Maxiniuc over 4 years ago
Eric Faulhaber wrote:
Hm, I still get the Illegal attempt to register Finalizable crash. You're not seeing this? Do you have any changes not checked in? Did you have conflicts on any of my changes that you resolved manually?
Oops, I have this too, but when closing the client window. I am looking into it. I had not reached this point for some days now; the runtime usually terminated earlier because of other errors, caused by the more complex code.
#43 Updated by Eric Faulhaber over 4 years ago
Ovidiu Maxiniuc wrote:
Oops, I have this too, but when closing the client window.
I am using the ChUI client. It happens without manually closing the window. But I do see the message output for a split second beforehand. So...success? ;-)
#44 Updated by Ovidiu Maxiniuc over 4 years ago
The message is just a WARNING at the moment it is first displayed, but something goes wrong after the active (converted) code finishes executing. However, I believe it can be counted as a success.
#45 Updated by Ovidiu Maxiniuc over 4 years ago
Well, a bit of investigation and I remembered that in RecordBuffer:5917 I replaced if (!isTransient()) with if (currentRecord == null) to allow the records to be flushed more aggressively, as opposed to Hibernate's "write behind" strategy.
However, after rolling back that code, it seems that there is an imbalance in Persistence transaction management: pushImplicitTransaction()/beginTransaction(). I need to investigate this.
#46 Updated by Eric Faulhaber over 4 years ago
RB's flushing and Hibernate's transparent write-behind are not directly related. We implemented the flushing in RB to match the 4GL's timing of updating/inserting records, based primarily on index updates (but some other factors as well). We adjusted for the write-behind by managing Hibernate's auto-flushing behavior at the Persistence class level. Basically, we track whether a query or some other access uses a table which has had some update made to it since the last commit. If so, we force a flush on that table before doing the query/access. Please don't change the RB-level flushing to adjust for the removal of Hibernate. We'll address this at the Persistence class level (or in supporting classes).
#47 Updated by Ovidiu Maxiniuc over 4 years ago
Ovidiu Maxiniuc wrote:
However, after rolling back that code, it seems that there is an imbalance in Persistence transaction management: pushImplicitTransaction()/beginTransaction(). I need to investigate this.
Actually the problem was with Session.inTx. After a successful commit/rollback in endTransaction(), the Persistence was setting transaction = null, but the Session was not informed that the transaction was gone. As a result, the inTx flag remained set, so no other transactions were ever created, as beginTransaction() was eager to return null.
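The lifecycle bug can be modeled with a small sketch (the class shapes and method names are illustrative stand-ins, not the actual Persistence/Session code): the fix is simply to notify the Session when the transaction ends, so inTx is cleared.

```java
// Sketch of the inTx bookkeeping bug described above; the real FWD classes
// differ, this only models the missing notification.
public class TxLifecycleSketch
{
   static class Session
   {
      private boolean inTx;

      /** Returns false (the "null" of the real API) while inTx is stuck set. */
      boolean beginTransaction()
      {
         if (inTx)
         {
            return false;
         }
         inTx = true;
         return true;
      }

      void transactionEnded()
      {
         inTx = false;
      }
   }

   static class Persistence
   {
      final Session session = new Session();
      private Object transaction = new Object();

      void endTransaction()
      {
         // commit/rollback happens here...
         transaction = null;
         session.transactionEnded();   // the previously missing notification
      }
   }
}
```

Without the transactionEnded() call, the second beginTransaction() after a commit would keep failing, which matches the symptom described above.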
#48 Updated by Eric Faulhaber over 4 years ago
Ah, makes sense. That's a remnant of how the old Hibernate Transaction object worked. I did not really intend to carry that class forward, just stubbed it out when we were taking Hibernate out. My intention was to remove Transaction and to manage the transaction instead with the Session API directly.
Please check in your changes.
#49 Updated by Ovidiu Maxiniuc over 4 years ago
Done, see r11368.
Related to the content of data[]: I was using BDTs, with unknown values. Your approach uses null for them. This makes TypeManager.getTypeHandler() unable to guess which data type handler to return. I am trying to fix this using the metadata.
#50 Updated by Eric Faulhaber over 4 years ago
Yes, the idea behind PropertyMeta was to store any information needed to convert between these lower level types and BDT subclasses. It can be used for this. I see you've already added a type property previously. Why not store the handler here directly and avoid the additional lookup? Is it stateless?
#51 Updated by Ovidiu Maxiniuc over 4 years ago
Eric Faulhaber wrote:
Yes, the idea behind PropertyMeta was to store any information needed to convert between these lower level types and BDT subclasses. It can be used for this. I see you've already added a type property previously. Why not store the handler here directly and avoid the additional lookup? Is it stateless?
Great idea. Thank you. It is stateless. I will add it to the meta info.
#52 Updated by Eric Faulhaber over 4 years ago
I had a question w.r.t. the FQL->SQL generation: I know you've been focused primarily on getting the functionality working correctly, but I wanted to understand what you've done (or planned for, if not yet implemented) in terms of caching or other performance techniques, in that implementation?
#53 Updated by Ovidiu Maxiniuc over 4 years ago
- removed completely the temporary populateFields() and populateData() methods and did a clean up;
- added support for persisting NULL values (tracked their original types). Added more supported types in TypeManager;
- added support for setting/getting new types in Record.
The extent support is not yet full, but otherwise more complex queries should be supported.
Regarding performance: there is nothing expressly implemented yet. As you wrote, I aimed for correctness first. But the FQL->SQL conversion should be quite fast: there are only two tree iterations with minimal computations.
However, we should avoid unnecessary duplication of work, so the idea is to cache the results. Separate cache structures will be used for each dialect, as the same query could result in different SQL. The only issue is choosing the right key. I am not sure at this moment what it should contain besides the original FQL string.
Besides the queries, there are two other related areas: DDL generation (I guess there is no problem here) and UPDATE statements. The latter might be a subject for optimization, but it is a bit difficult because the statements are custom-made to fit the exact changes in the record to be updated.
These optimizations will be done after we have the server running stably enough. At that point we will be able to see the exact spots that can be optimized simply by looking at the input values, not necessarily using timers. Then a second level would be profiling and fine-tuning.
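The dialect-separated caching idea could be sketched roughly as follows (the Dialect enum, the converter stub, and keying purely on the FQL string are assumptions for illustration, not the eventual FWD design):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch only: one cache structure per dialect, keyed by the original FQL
// string, so the FQL->SQL conversion runs once per (dialect, FQL) pair.
public class FqlCacheSketch
{
   enum Dialect { POSTGRESQL, H2 }

   // counts how many times the (stubbed) conversion actually ran
   static final AtomicInteger conversions = new AtomicInteger();

   private static final Map<Dialect, Map<String, String>> CACHE =
      new ConcurrentHashMap<>();

   static String toSql(Dialect dialect, String fql)
   {
      return CACHE.computeIfAbsent(dialect, d -> new ConcurrentHashMap<>())
                  .computeIfAbsent(fql, f -> convert(dialect, f));
   }

   // stand-in for the two-pass tree conversion described above
   private static String convert(Dialect dialect, String fql)
   {
      conversions.incrementAndGet();
      return "/* " + dialect + " */ " + fql;
   }
}
```

A repeated lookup with the same dialect and FQL string returns the cached SQL without re-running the conversion; the same FQL under a different dialect is converted separately, since it could produce different SQL.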
#54 Updated by Eric Faulhaber over 4 years ago
Ovidiu, welcome back! I am reworking the RecordBuffer$Handler inner class pretty drastically, pushing much of the processing into BaseRecord and related classes. It is not yet in a state which I can check in. Please do not make changes in the RecordBuffer$Handler class ATM, nor in DMOValidator and RecordBuffer$ValidationHelper. Thanks.
As part of reworking this, I am trying to eliminate as much use of the DMO property name and related map lookups in internal processing as possible. There are so many maps and sets in transient use within RecordBuffer, which are keyed by String. For the validation/flushing logic, I have been reimplementing the tracking of whether indices are dirty using BitSet, where the set bits correspond to the position of a PropertyMeta object in their array (i.e., in RecordMeta.props). As properties are touched, the corresponding bits are set. When this bitset matches that of one of the indices, the index is considered dirty. However, mapping this new processing into (or rather replacing) the spaghetti code I created in RecordBuffer$ValidationHelper has proved challenging so far.
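The touched-bits mechanism can be sketched with a minimal model (not the RecordBuffer code; it assumes an index counts as dirty once all of its component properties have been touched):

```java
import java.util.BitSet;

// Sketch: index definitions and touched properties are both BitSets over
// positions in the PropertyMeta array (RecordMeta.props).
public class DirtyIndexSketch
{
   // true when every component bit of the index is among the touched bits
   static boolean isDirty(BitSet index, BitSet touched)
   {
      BitSet overlap = (BitSet) index.clone();
      overlap.and(touched);
      return overlap.equals(index);
   }
}
```

For an index over properties 0 and 2, touching property 2 alone leaves the index clean; once property 0 is touched as well, the index is considered dirty.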
#55 Updated by Ovidiu Maxiniuc over 4 years ago
Eric Faulhaber wrote:
Ovidiu, welcome back! I am reworking the RecordBuffer$Handler inner class pretty drastically, pushing much of the processing into BaseRecord and related classes. It is not yet in a state which I can check in. Please do not make changes in the RecordBuffer$Handler class ATM, nor in DMOValidator and RecordBuffer$ValidationHelper. Thanks.
Understood. In fact, my changes for this task did not affect these parts of the code very much, so they should still be very similar to the trunk code.
As part of reworking this, I am trying to eliminate as much use of the DMO property name and related map lookups in internal processing as possible. There are so many maps and sets in transient use within RecordBuffer, which are keyed by String. For the validation/flushing logic, I have been reimplementing the tracking of whether indices are dirty using BitSet, where the set bits correspond to the position of a PropertyMeta object in their array (i.e., in RecordMeta.props). As properties are touched, the corresponding bits are set. When this bitset matches that of one of the indices, the index is considered dirty. However, mapping this new processing into (or rather replacing) the spaghetti code I created in RecordBuffer$ValidationHelper has proved challenging so far.
The usage of BitSet should be blazing fast. I wonder if we can replace the String-keyed structures with field-offset integers (or better, arrays) in many other places. However, in a lot of places the field name is implicit for some table (ex: copy/compare operations), so we will have to keep using them.
#56 Updated by Eric Faulhaber over 4 years ago
Ovidiu Maxiniuc wrote:
The usage of BitSet should be blazing fast.
I think it should be much better, though not as fast as using long as a bit field directly. However, we need the variable length of the BitSet implementation in this case, since the number of DMO properties could exceed 64.
I wonder if we can replace the String-keyed structures with field-offset integers (or better, arrays) in many other places. However, in a lot of places the field name is implicit for some table (ex: copy/compare operations), so we will have to keep using them.
That is the idea over time... I have been trying to get rid of a lot of the maps and sets, at least at a low level, and replace these mechanisms by using the implicit order of the PropertyMeta array instead. It may be a bit less readable/maintainable, but we need the performance improvements. However, each time I've pulled on a thread, it has led to a lot of dependencies. The key is to not add new dependencies on String keys and maps to the degree possible, which is why I mentioned this.
#57 Updated by Eric Faulhaber over 4 years ago
Another issue I want to discuss, for when I get back to the bulk getters and setters for extent fields in DmoClass...
It seems we could rationalize the DMO interface and implementation class hierarchy a bit. Currently, we have all direct getters/setters (including for extents) defined in the "main" DMO interface. All bulk/special getters/setters are defined in the inner *.Buf interface, which also extends the Buffer interface.
It seems now would be the time to split these interfaces along slightly different lines, though it will have implications for PropertyHelper and DmoProxyPlugin. Wouldn't it make more sense to have the "main" DMO interface define all methods used by converted business logic? This would include all direct and bulk getters/setters. The *.Buf interface would be used as it is now to add in the Buffer interface, but no longer the bulk methods. Does this make sense?
I am also trying to remember why we have the separate set of interfaces for temp-tables. I recall this was about shared temp-tables, but I don't recall the exact need that drove this conversion. Can you please refresh my memory? I want to make sure the ideas above don't break anything at this level.
#58 Updated by Ovidiu Maxiniuc over 4 years ago
Eric Faulhaber wrote:
It seems we could rationalize the DMO interface and implementation class hierarchy a bit. Currently, we have all direct getters/setters (including for extents) defined in the "main" DMO interface. All bulk/special getters/setters are defined in the inner *.Buf interface, which also extends the Buffer interface.
It seems now would be the time to split these interfaces along slightly different lines, though it will have implications for PropertyHelper and DmoProxyPlugin. Wouldn't it make more sense to have the "main" DMO interface define all methods used by converted business logic? This would include all direct and bulk getters/setters. The *.Buf interface would be used as it is now to add in the Buffer interface, but no longer the bulk methods. Does this make sense?
Yes, it does. We did discuss this a bit in notes 14/15 (by email and then pasted here). In fact, I attempted to go a bit further there, by getting rid of the *.Buf interface completely (and passing the Buffer along with BufferReference and DMO in RecordBuffer.createProxy to ProxyFactory.getProxy()). But we need the *.Buf to group the first and last one. However, the bulk/special getter/setter methods can be moved to the DMO (without any annotation). I think this simplifies the code and makes it less messy by grouping the field accessors together.
I am also trying to remember why we have the separate set of interfaces for temp-tables. I recall this was about shared temp-tables, but I don't recall the exact need that drove this conversion. Can you please refresh my memory? I want to make sure the ideas above don't break anything at this level.
Same for me. I thought about it but it remained an open issue. I searched the archives and identified task #2595. We discussed this starting at entry 27 or so, based on an actual testcase found in the customer code. To summarize: a temp-table is defined as new in a procedure and then used as shared in various other called procedures. In this scenario, the caller will define the tt variable using the Tt_2_1.Buf interface:
Tt_2_1.Buf tt = SharedVariableManager.addTempTable("tt", Tt_2_1.Buf.class, "tt", "tt");
and the called procedure will use the Tt_2.Buf:
Tt_2.Buf tt = (Tt_2.Buf) TemporaryBuffer.useShared("tt", Tt_2.Buf.class);
thus allowing the narrower Tt_2_1.Buf to be assigned to the wider Tt_2.Buf (note that Tt_2_1.Buf extends Tt_2_1, Buffer, Tt_2.Buf). If another caller does the same, its *.Buf will be Tt_2_2.Buf and will also extend Tt_2.Buf, so the assignment between temp-tables will also work.
#59 Updated by Eric Faulhaber over 4 years ago
Please go ahead and implement the conversion changes for the DMO interface simplification (i.e., moving the bulk methods into the "main" interface). Leave the Buf inner interface for now. I still haven't determined whether we still need it. There was a reason I separated it this way originally, but I don't recall why ATM. As we implement the replacement code further, perhaps this reason will come back to me.
Please also implement a table-level schema hint (dirty-read XML attribute) which indicates that a particular table/DMO needs to be tracked by the dirty share manager. This should translate through conversion to a boolean dirtyRead attribute in the @Table annotation, which is absent by default. Accordingly, the isDirtyRead() method on the @Table annotation should return false by default.
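A minimal sketch of the annotation change (the real @Table annotation in FWD carries more attributes; the DMO classes below are invented examples):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class TableAnnotationSketch
{
   // sketch of the extended annotation: isDirtyRead() defaults to false, so
   // tables without the dirty-read schema hint are not dirty-share tracked
   @Retention(RetentionPolicy.RUNTIME)
   @Target(ElementType.TYPE)
   @interface Table
   {
      String name();
      boolean isDirtyRead() default false;
   }

   @Table(name = "book")                      // no hint -> false by default
   static class BookDmo { }

   @Table(name = "audit", isDirtyRead = true) // table with a dirty-read hint
   static class AuditDmo { }
}
```

Conversion would emit isDirtyRead = true only for tables carrying the dirty-read hint, so the absent-by-default behavior comes for free from the annotation default.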
#60 Updated by Ovidiu Maxiniuc over 4 years ago
Tasks from note 59 are implemented and were pushed to branch 4011 as r11370.
#61 Updated by Eric Faulhaber over 4 years ago
Ovidiu, please implement a feature which uses BitSet and HashSet (or HashMap) to implement a tracker for records in unique indices in uncommitted transactions. This is meant to be a lightweight replacement for the dirty share manager, for most tables. The key for the hash map/set is the data array (probably a copy) from a BaseRecord instance. I'm not sure if it's necessary to track a value associated with this key. If so, it would be the unique id of the record.
The idea is that the BitSet represents an index definition (in this case a unique index), where the left-shifted position of each set bit corresponds with the zero-based index of a PropertyMeta object in a DMO implementation class. The origin of the BitSet object and its association to a DMO is not relevant to this API, though the BitSet instance which defines an index will be provided by the code I am writing. The point is, you don't need to implement the code which creates the BitSet index definition. In the creation of your API, assume that is already written, even though I have not checked it in yet.
The BitSet data, combined with the corresponding array of PropertyMeta objects, determine the hashCode and equals implementation for the map/set which tracks each unique index, and each such map/set needs to implement the same, deterministic hash code and equality implementation for a given index and data array. In other words, there will be one map/set object per unique index, with the BitSet data driving how the hashCode and equals methods work. The actual order of the index components is not relevant; the algorithm will use the order (from low to high) of the set bits in the BitSet index definition, even if this does not correspond with the actual order of the index fields. We just need it to be deterministic/consistent, such that we can recognize a unique constraint violation.
The hash code and equality algorithms should be data type-agnostic to the degree possible, to keep that code as simple as possible.
There needs to be an API to add, remove, and delete elements from the map/set, as various, uncommitted transactions make changes to their DMOs. This API will be used from the DMOValidator class instead of the dirty share code we use today. This is meant to be informational, to help you to understand the context of the API; however, don't change the DMOValidator code itself, as I already am making substantial changes here.
We also will need a way to remove all elements related to a particular user context/session, which will be needed when that context's current transaction is committed (or rolled back). This would also be invoked to clean up a context if that session ends abnormally/abruptly.
This feature only tracks the uncommitted changes; we will rely on the primary database to reject an insert/update in cases where we pass the preliminary unique constraint validation check I'm describing here.
I'm not yet sure to what level we need to synchronize access to this set of data structures. We may already have enough synchronization external to this feature, because for persistent tables, an exclusive, pessimistic record lock already is needed to make updates (TBD: does this adequately cover us for new inserts?). Temp-tables already are local to a session, so synchronization is not needed there.
So, in summary:
- hash map/set implementation replaces dirty database for unique constraint checking, for all tables which do not specifically require dirty database;
- one map/set instance per unique index, per table/DMO type;
- one element added to each map/set instance per uncommitted insert/update (removed when DMO is deleted);
- all elements associated with a transaction removed when the transaction is committed (or rolled back), or if a session ends badly.
Hopefully, the above makes sense as to my intent. The idea is to have a much more performant unique constraint check than we have with the dirty share code we are using today. Please propose an API and we can discuss further.
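The proposal might be sketched as follows (class and method names are illustrative, not the eventual FWD API; per-transaction/context cleanup and synchronization are omitted): each unique index keeps a hash set of keys built from only the index-component slots of the data array, with the BitSet driving which slots participate in hashCode and equals.

```java
import java.util.Arrays;
import java.util.BitSet;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch only: tracks uncommitted records per unique index.
public class UniqueTracker
{
   // one key set per unique index definition
   private final Map<BitSet, Set<Key>> indices = new HashMap<>();

   /** Holds only the index-component slots of a record's data array. */
   static final class Key
   {
      private final Object[] components;
      private final int hash;

      Key(Object[] data, BitSet index)
      {
         components = new Object[index.cardinality()];
         int j = 0;

         // iterate set bits from low to high; the actual index field order
         // is irrelevant, only a deterministic ordering matters
         for (int i = index.nextSetBit(0); i >= 0; i = index.nextSetBit(i + 1))
         {
            components[j++] = data[i];
         }
         hash = Arrays.hashCode(components);
      }

      @Override
      public int hashCode()
      {
         return hash;
      }

      @Override
      public boolean equals(Object o)
      {
         return o instanceof Key && Arrays.equals(components, ((Key) o).components);
      }
   }

   /** Register an uncommitted insert/update; false means a unique conflict. */
   public boolean add(BitSet index, Object[] data)
   {
      return indices.computeIfAbsent(index, k -> new HashSet<>())
                    .add(new Key(data, index));
   }

   /** Drop a record's key when the DMO is deleted or its transaction ends. */
   public void remove(BitSet index, Object[] data)
   {
      Set<Key> keys = indices.get(index);
      if (keys != null)
      {
         keys.remove(new Key(data, index));
      }
   }
}
```

For a unique index over properties 0 and 2, two records with the same values in those slots collide regardless of the values in the other slots, which is exactly the preliminary unique constraint check described above.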
#62 Updated by Ovidiu Maxiniuc over 4 years ago
For the moment, I have one question, from the 1st phrase, related to the key for the hash map/set: wouldn't it be a bit of a performance hit to hash/compare the full data array? At first I thought you meant the data object itself, but that does not make much sense, does it?
#63 Updated by Eric Faulhaber over 4 years ago
Ovidiu Maxiniuc wrote:
For the moment, I have one question, from the 1st phrase, related to the key for the hash map/set: wouldn't it be a bit of a performance hit to hash/compare the full data array? At first I thought you meant the data object itself, but that does not make much sense, does it?
The idea is to only hash/compare the elements of the data array that are involved in the index. That is, the set bits in the BitSet, combined with the PropertyMeta array, define which properties are components of the index. So, if bits 0, 2, and 3 are set, that means that elements 0, 2, and 3 of the PropertyMeta array define the components of the index. Only those data elements are involved in the hash code and equals computations. In fact, only those elements need to be stored in the map/set (not the whole data array as I wrote earlier). That should be pretty efficient, I think. Does that make sense?
#64 Updated by Eric Faulhaber over 4 years ago
The trick is to write the hashCode and equals methods generically, to operate only on those properties that are components of the index, as defined by the BitSet. And again, we use the order of the set bits, rather than the actual order of the index components, because all we care about is that the combined properties define uniqueness. We don't care about the actual sort order of the index for this use case. This will simplify the code and ensure consistency in generating the hash codes.
#65 Updated by Eric Faulhaber over 4 years ago
Ovidiu, currently, we determine which implementation of DirtyShareContext we use for a particular buffer by the type of database: persistent or temporary. We use the DefaultDirtyShareContext for the former and the NopDirtyShareContext for the latter.
Please rework this code to honor the new dirtyRead attribute you added to the @Table annotation. If a DMO should share dirty data, use the DefaultDirtyShareContext, else use the NopDirtyShareContext. Temp-table DMOs should always use the no-op implementation, as today. If you honor the dirtyRead setting, we should get that behavior for free, since the default for this attribute is false.
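The selection rule might look roughly like this (the interfaces below are stubs mirroring the class names mentioned above; the actual wiring in FWD differs):

```java
// Sketch of the DirtyShareContext selection described above.
public class DirtyContextSelectionSketch
{
   interface DirtyShareContext { }

   static class DefaultDirtyShareContext implements DirtyShareContext { }

   static class NopDirtyShareContext implements DirtyShareContext
   {
      static final NopDirtyShareContext INSTANCE = new NopDirtyShareContext();
   }

   // temp-table DMOs always get the no-op implementation; a permanent-table
   // DMO gets the default implementation only when its @Table annotation
   // reports dirty-read tracking
   static DirtyShareContext select(boolean temporary, boolean dirtyRead)
   {
      if (!temporary && dirtyRead)
      {
         return new DefaultDirtyShareContext();
      }
      return NopDirtyShareContext.INSTANCE;
   }
}
```

Because the annotation attribute defaults to false, untagged permanent tables and all temp-tables fall through to the no-op implementation without any special casing.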
#66 Updated by Eric Faulhaber over 4 years ago
After addressing the items in #4011-61 through #4011-65, please consider and propose a design for a caching/optimization strategy for your FQL interpreter/converter implementation. Now that we have a working first pass, we should determine how to make it as efficient as possible. If you need changes within the low-level ORM classes to support this, let's discuss.
#67 Updated by Ovidiu Maxiniuc over 4 years ago
Eric,
after fixing some issues that surfaced while working with repetitive queries and client (re-)connections, my eyes got stuck on the Persistence.Context.getQuery() and Persistence.getQuery() methods. The code is documented as OK to do this with cached queries, because each use is guaranteed to reset the parameters, but I wonder whether this is correct. I think there is a very slight chance that two queries are used at the same time, and the parameters from the latest access will prevail.
The nice part is that if this works correctly (apparently it did until now), it will also cover the caching of the SQL queries converted from FQL. In my tests, the repeated queries for the same FQL were cached, so the conversion was run only once. However, Session.createQuery() and execution calls are accessed from a few other places (DefaultDirtyShareManager), so I wonder whether I should move the caching to cover that too.
One more thing: I see the cache is context-local. Wouldn't it cover all connections if we make it truly static? At least for permanent tables and static temp-tables. I am not sure about the dynamic temp-tables, as the property names might be different.
#68 Updated by Eric Faulhaber over 4 years ago
Ovidiu Maxiniuc wrote:
... The code is documented as OK to do this with cached queries, because each use is guaranteed to reset the parameters, but I wonder whether this is correct. I think there is a very slight chance that two queries are used at the same time, and the parameters from the latest access will prevail.
What case are you considering where this could happen?
Since each context is single-threaded, as far as running business logic, some sort of nesting is the only thing I can think of. Even in the nesting case, does it matter? Because the bound parameters are only important at the moment the query is executed, and there shouldn't be any possibility to interleave another cache lookup between the time parameters are bound and a query is executed (again, because of the single thread).
I can't come up with any other scenario where this parameter binding is a problem. But maybe I have a blind spot here. Are you thinking of some other potential case?
#69 Updated by Eric Faulhaber over 4 years ago
Ovidiu Maxiniuc wrote:
One more thing: I see the cache is context-local. Wouldn't it cover all connections if we make it truly static? At least for permanent tables and static temp-tables. I am not sure about the dynamic temp-tables, as the property names might be different.
This cache was made context-local originally because the query instance was a creature of a specific Hibernate session, which in turn was bound to a specific JDBC connection. A static cache would not have worked for the Query instances themselves originally. However, if you are caching a different level of intermediate result, such as the result of parsing an FQL statement, and there is not yet any JDBC connection/resource associated with it, a static cache absolutely makes more sense.
What exactly we cache of course depends on where the time is being spent, and whether an intermediate result can be saved and re-used by different contexts. Intuition suggests that the FQL parsing and the associated identification and organization of all the parts of the expression into a usable form is where the heavy lifting is. But you would know best where the time is being spent, and what re-usable intermediate form might be optimal for caching.
#70 Updated by Eric Faulhaber over 4 years ago
Ovidiu Maxiniuc wrote:
The nice part is that if this works correctly (apparently it did until now), it will also cover the caching of the SQL queries converted from FQL. In my tests, the repeated queries for the same FQL were cached, so the conversion was run only once. However, Session.createQuery() and execution calls are accessed from a few other places (DefaultDirtyShareManager), so I wonder whether I should move the caching to cover that too.
Going forward, the dirty share manager will play a much smaller role than it has in the past. I know of only one table in all the projects we have done which actually needs this support to solve a real problem. The plan is to disable it in all other cases. So, it should not be a performance bottleneck. If it is easy to incorporate into your caching strategy, then go ahead. But do not make compromises in the design or spend a lot of extra time to accommodate this, since it will play such a small role in the future.
#71 Updated by Ovidiu Maxiniuc over 4 years ago
Eric Faulhaber wrote:
Ovidiu Maxiniuc wrote:
... The code is documented as OK to do this with cached queries, because each use is guaranteed to reset the parameters, but I wonder whether this is correct. I think there is a very slight chance that two queries are used at the same time, and the parameters from the latest access will prevail.
What case are you considering where this could happen?
I was thinking of two cases. The truth is that they are a bit ahead of the current state:
- the Feature #3254, which was a bit active recently;
- the cache is now context-local, but if changed to be truly static, then this can happen. As a catalyst: multiple clients running the same program/query multiple times in batches.
#72 Updated by Eric Faulhaber over 4 years ago
In email, Ovidiu Maxiniuc wrote:
Have you had the time to add the support for extent properties? I am a bit blocked by the lack of some methods, like:
Caused by: java.lang.AbstractMethodError: Method com/goldencode/p2j/persist/$__Proxy4.setT100F4(ILcom/goldencode/p2j/util/logical;)V is abstract
Not a big issue, I can work around it, but I would prefer fixing it instead. I can add the missing signatures tomorrow.
Yes, please go ahead and make the necessary changes to DmoClass to add the missing signatures for extent fields. I haven't figured out exactly what I want the implementations to look like yet, which is why I've put this off so far. All I know is that I want the implementations to be as efficient as possible.
I've been leaning toward something very similar to what we do for these methods today in DmoAsmWorker for dynamic temp-table DMOs, but since the internals of our DMO implementation classes have changed a bit, I didn't yet take on the changes needed to migrate this code into DmoClass. If you want to do that, it's fine. Or if you just want to stub out the methods to avoid the AbstractMethodError for now, that's fine, too.
I am working on other runtime classes right now, so if your changes are just in DmoClass, we shouldn't have any conflict. Rev. 11374 contains some very minor changes I had in my working copy. Please work on top of that version.
#73 Updated by Eric Faulhaber over 4 years ago
Speaking of dynamic temp-tables, one of us is going to need to address the changes to the DMO interface design. Right now, DmoAsmWorker still builds the old-style DMO interfaces and implementation classes. I planned to drop the generation of the DMO implementation classes from DmoAsmWorker, once we had DmoClass finished. We would just let DmoClass generate the implementation class for a dynamic temp-table DMO from the interface, like for any other DMO. Let me know if you're comfortable taking on that sub-task. It is slightly lower priority than some other things, but ATM I am having trouble carving out something else, because I have so many changes in flight. I plan to bring things back together shortly into a compilable and testable state, and then we can regroup on other items...
#74 Updated by Eric Faulhaber over 4 years ago
Ovidiu, thanks for the UniqueTracker implementation. I am working to integrate this code now, but I have some questions and some things I would like you to change.
isTracked actually removes a record's key. That doesn't seem right. Maybe I'm not understanding the purpose of this method, but based on the javadoc, it seems like a more appropriate name would be isUnique. This is the method the validator should call to test uniqueness of a record which we plan to sync with the database, right?
It wasn't my intention that UniqueKey should store the entire snapshot of the record. We only need the index component data, nothing else. All we need to know from UniqueTracker is whether we have a unique constraint conflict. The non-indexed data is irrelevant and in fact is likely to be much larger than the index data alone. Now, we may already have that data in memory elsewhere for other purposes (e.g., in the session cache), but I don't see a reason to add strong references to it here.
Also, storing a reference to BaseRecord.data as snapshot seems dangerous. The elements of that array are immutable, but the array itself is mutable, and it backs the BaseRecord from which it was taken. So, the key will break the first time an update is made to one of the indexed properties of a record. You pre-compute the hash for efficiency, which makes sense. However, when an element in the snapshot array is updated by business logic modifying the record, the hash will get out of sync with the implementation of equals.
If it was your intent for the snapshot to remain a "live", mutable reflection of the record, so that the UniqueTracker automatically is updated as business logic changes property values in a tracked record, I don't think this can work without additional changes. There is no guarantee a particular BaseRecord instance representing a tracked database record will remain in memory for the life cycle of this tracking (i.e., the current transaction). While we intend to cache as many records as we can, the session cache may expire or evict BaseRecord instances. That record's unique index state must nevertheless remain tracked by UniqueTracker until the current transaction is committed or rolled back. If the BaseRecord instance for which the UniqueKey was created is replaced with a new instance, UniqueKey.snapshot will no longer represent the data array of the live record. There will be a new BaseRecord.data array and they will be out of sync.
I am trying to figure out the best way we can make UniqueKey safe, such that it reflects the actual record state as efficiently as possible...
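One way to keep such a key stable is to copy only the index component values at construction time and precompute the hash from that private copy, so later mutations of the live record cannot desynchronize hashCode and equals. The sketch below is illustrative only (IndexKey and its constructor arguments are hypothetical names, not the actual FWD UniqueKey):

```java
import java.util.Arrays;

// Illustrative sketch (not the actual FWD UniqueKey): an immutable unique
// index key. It snapshots only the index component values, never the whole
// record, so the key stays stable even if the record's backing array is
// modified later, and the precomputed hash cannot drift out of sync with
// equals.
public final class IndexKey
{
   private final Object[] components;   // defensive copy of index fields only
   private final int      hash;         // safe to precompute: copy never mutates

   public IndexKey(Object[] recordData, int[] componentOffsets)
   {
      components = new Object[componentOffsets.length];
      for (int i = 0; i < componentOffsets.length; i++)
      {
         components[i] = recordData[componentOffsets[i]];
      }
      hash = Arrays.hashCode(components);
   }

   @Override
   public int hashCode() { return hash; }

   @Override
   public boolean equals(Object o)
   {
      return o instanceof IndexKey && Arrays.equals(components, ((IndexKey) o).components);
   }

   public static void main(String[] args)
   {
      Object[] data = { "a", 1, "b", 2 };
      int[] offsets = { 0, 2 };                    // index uses fields 0 and 2
      IndexKey before = new IndexKey(data, offsets);
      data[2] = "changed";                         // business logic updates the record
      IndexKey after = new IndexKey(data, offsets);
      System.out.println(before.equals(after));    // false: key reflects old state only
   }
}
```

The cost is one small array copy per tracked key, which buys a key that stays valid for the life of the transaction regardless of what happens to the BaseRecord instance.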
#75 Updated by Eric Faulhaber over 4 years ago
Another feature we need is an atomic, context-safe check-and-update operation, which updates an index with a new record if the record is found not to violate a unique constraint. Currently, we check the primary database for conflict first, then the dirty database. I am planning on doing it the same way here. So, the flow is:
- start a savepoint
- attempt insert/update into the primary database
- attempt insert/update into unique tracker
- commit savepoint
If either insert/update operation fails, we report the error and roll back to the savepoint.
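In JDBC terms, the flow just listed might look roughly like the following sketch. Here persistRecord and trackUnique are hypothetical placeholders for the primary database write and the unique tracker update; only the Savepoint handling reflects the real java.sql API:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Savepoint;

// Sketch of the validate-by-attempting-the-write flow described above.
// persistRecord and trackUnique are hypothetical placeholders; they stand in
// for the primary database insert/update and the unique tracker update.
public class SavepointFlow
{
   public static boolean tryInsertOrUpdate(Connection conn)
   {
      Savepoint sp = null;
      try
      {
         sp = conn.setSavepoint();     // start a savepoint
         persistRecord(conn);          // attempt insert/update in primary database
         trackUnique();                // attempt insert/update in unique tracker
         conn.releaseSavepoint(sp);    // "commit" the savepoint
         return true;
      }
      catch (SQLException err)
      {
         try
         {
            if (sp != null)
            {
               conn.rollback(sp);      // undo both attempts together
            }
         }
         catch (SQLException ignored)
         {
         }
         return false;                 // caller reports the error
      }
   }

   private static void persistRecord(Connection conn) throws SQLException { /* elided */ }
   private static void trackUnique() throws SQLException { /* elided */ }
}
```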
Today, we synchronize on the index(es) being updated before making the dirty share database updates. I think we continue to do this, but it's a little messy, because that synchronization is embedded within the dirty share classes, and uses an InMemoryLockManager instance. So, I'm not sure if that sync code belongs within or external to the UniqueTracker implementation...
Finally, how to handle deleted records? It's easy enough to remove them from the tracker, but if the delete is rolled back, the removal from the UniqueTracker needs to be rolled back accordingly. The logic in the dirty share code which handles this today is a bit complicated. I was hoping to simplify this. For now, I probably will continue to use the DirtyShareContext.lockAllIndexes(String, LockType) method, but we will want to replace this eventually with something more efficient. It also means I cannot simply use NopDirtyShareContext for non-dirty-read persistent tables, like we do for temp-tables, as I had planned. That method (as the name suggests) is a no-op.
#76 Updated by Eric Faulhaber over 4 years ago
Ovidiu, please hold off making changes to UniqueTracker related to the above post. I have begun refactoring the dirty share context and manager classes to incorporate the UniqueTracker into a new hierarchy. I'm a bit too tired to go into the details right now, but I don't want to duplicate effort. Please continue working on a caching strategy for the FQL code instead.
#77 Updated by Ovidiu Maxiniuc over 4 years ago
Eric Faulhaber wrote:
isTracked actually removes a record's key. That doesn't seem right. Maybe I'm not understanding the purpose of this method, but based on the javadoc, it seems like a more appropriate name would be isUnique. This is the method the validator should call to test uniqueness of a record which we plan to sync with the database, right?
You are right on both counts. The remove is BAD. It must be replaced with get to test whether the record is in the cache. isUnique is probably a better name, too.
It wasn't my intention that UniqueKey should store the entire snapshot of the record. We only need the index component data, nothing else. All we need to know from UniqueTracker is whether we have a unique constraint conflict. The non-indexed data is irrelevant and in fact is likely to be much larger than the index data alone. Now, we may already have that data in memory elsewhere for other purposes (e.g., in the session cache), but I don't see a reason to add strong references to it here.
I will do the filtering so that only the fields of the unique indexes remain in the snapshot. Note that the same snapshot is used by all unique indexes.
Also, storing a reference to BaseRecord.data as snapshot seems dangerous. The elements of that array are immutable, but the array itself is mutable, and it backs the BaseRecord from which it was taken. So, the key will break the first time an update is made to one of the indexed properties of a record. You pre-compute the hash for efficiency, which makes sense. However, when an element in the snapshot array is updated by business logic modifying the record, the hash will get out of sync with the implementation of equals.
The reference to BaseRecord.data is used only when testing and removing the record from the tracker, so the key object is discarded without having a chance to alter the array. This is by design, in order to avoid the extra array copy. When the record is added, a copy of the data is used: recData = Arrays.copyOf(r.data, r.data.length);.
If the record is updated, we need to notify the tracker so the record is re-validated. (This is not implemented yet.) In this event, the re-computed key might collide with another unique record, in which case the update must be rejected. If the updated record is still unique, the key/snapshot is updated (only the affected fields), so it is kept in sync.
If it was your intent for the snapshot to remain a "live", mutable reflection of the record, so that the UniqueTracker automatically is updated as business logic changes property values in a tracked record, I don't think this can work without additional changes. There is no guarantee a particular BaseRecord instance representing a tracked database record will remain in memory for the life cycle of this tracking (i.e., the current transaction). While we intend to cache as many records as we can, the session cache may expire or evict BaseRecord instances. That record's unique index state must nevertheless remain tracked by UniqueTracker until the current transaction is committed or rolled back. If the BaseRecord instance for which the UniqueKey was created is replaced with a new instance, UniqueKey.snapshot will no longer represent the data array of the live record. There will be a new BaseRecord.data array and they will be out of sync.
No. Although I gave it a thought, the "live" snapshot was not what I intended with the implementation.
I am trying to figure out the best way we can make UniqueKey safe, such that it reflects the actual record state as efficiently as possible...
Maybe my responses above are of some help.
These modifications will be made after you finish the integration.
#78 Updated by Ovidiu Maxiniuc over 4 years ago
Eric Faulhaber wrote:
Yes, please go ahead and make the necessary changes to DmoClass to add the missing signatures for extent fields. I haven't figured out exactly what I want the implementations to look like yet, which is why I've put this off so far. All I know is that I want the implementations to be as efficient as possible.
I've been leaning toward something very similar to what we do for these methods today in DmoAsmWorker for dynamic temp-table DMOs, but since the internals of our DMO implementation classes have changed a bit, I didn't yet take on the changes needed to migrate this code into DmoClass. If you want to do that, it's fine. Or if you just want to stub out the methods to avoid the AbstractMethodError for now, that's fine, too.
I finished implementing the missing methods. For example, the bulk setter for the logical field T100F4 looks like the compiled code of this:
// extent = 10
// offset = 7
public void setT100F4(logical[] val)
{
for (int k = 0; k < 10; k++)
{
_setLogical(7 + k, val[k]);
}
}
I am not sure we can implement something faster. The disassembled code is:
 0: iconst_0
 1: istore_2
 2: iload_2
 3: bipush        10
 5: if_icmpge     25
 8: aload_0
 9: bipush        7
11: iload_2
12: iadd
13: aload_1
14: iload_2
15: aaload
16: invokevirtual #4    // Method _setLogical:(ILcom/goldencode/p2j/util/logical;)V
19: iinc          2, 1
22: goto          2
25: return
However, I still have one issue: when loaded, the following error is reported:
java.lang.VerifyError: Expecting a stackmap frame at branch target 25
Exception Details:
  Location:
    com/goldencode/testcases/dmo/_temp/T100Abc_1_1__Impl__.setT100F4([Lcom/goldencode/p2j/util/logical;)V @5: if_icmpge
  Reason:
    Expected stackmap frame at this location.
  Bytecode:
    0x0000000: 033d 1c10 0aa2 0012 2a10 071c 602b b600
    0x0000010: 6e84 0201 a7ff eeb1
I am unable at this moment to identify the reason. It seems to be some additional verifier check (stack map frames) introduced in Java 7, but I don't know how to make it pass.
#79 Updated by Eric Faulhaber over 4 years ago
I'm not sure why you get the VerifyError.
I haven't considered the most efficient ways these methods might be written; I think what you've done, at least for the loop you note above, is a good way to go. It is very similar to what I've done in DmoAsmWorker for dynamic temp-tables in the past. We might be able to create some custom bulk/mass get/set methods to call from these methods that might be more efficient, but I don't think this is where we will have a bottleneck. If we find we do through profiling, we can address it then.
Separately, I should note that I think the comparable source to what we are trying to achieve for the use case you noted above needs to look more like this:
// extent = 10
// offset = 7
public void setT100F4(logical[] val)
{
int len = Math.min(val.length, 10);
for (int k = 0; k < len; k++)
{
_setLogical(7 + k, val[k]);
}
}
My understanding of how this works is that the size of the array does not need to match the size of the extent field. The above implementation will work in the case that the sizes are the same or the method parameter array is larger than the field. However, we will get ArrayIndexOutOfBoundsException in the case val is smaller than the extent size.
Something very similar already is implemented in DmoAsmWorker$Library.indexedSetterFromArray, though the invoked method signature is different in that case.
In the final version of your implemented methods, please follow the example of the methods in that DmoAsmWorker library w.r.t. comments. That is, you have commented the code extensively already, but for the individual bytecode instructions, instead of using the decompiled bytecode as the comment, please describe in layman's terms what that instruction is doing. I think it is important for people following after to have this extreme level of documentation, since this is a pretty esoteric area of the project.
#80 Updated by Greg Shah over 4 years ago
We might be able to create some custom bulk/mass get/set methods to call from these methods that might be more efficient
There are the following advantages to using common helpers for things like this (instead of emitting extra code in the generated classes):
- (most important) The JIT will "warm up" on those helper methods much faster than in all the places in which the same code pattern is "exploded out".
- Less "hidden" code, which means it is easier to understand and debug.
- Less code in the customer's project at runtime.
#81 Updated by Eric Faulhaber over 4 years ago
Ovidiu, I want to run my ideas for planned changes to the undo architecture by you and get your input...
To the greatest degree possible, I want to get rid of the cumbersome reversible data structures and processing we have today and replace all this with what is essentially a lazy/JIT strategy of rollback/undo, with respect to the in-memory state of DMOs. Today, we track every change made to a DMO in some implementation of Reversible. On subtransaction commit, we roll all these data structures up and merge them into the enclosing scope's version of the same. On full transaction commit, we stop tracking these, since there's no more chance of rolling back (caveat: no-undo temp-tables have special processing). On rollback/undo, we walk the reversible data structures and reverse-apply all the create/update/delete operations using reflection for all the property changes.
Now that Hibernate is no longer in the way, I want to replace this with savepoints, and let the database be the authoritative keeper of state. Instead of mirroring every data change and eagerly undoing all the in-memory DMO state at a rollback, I want to just mark in-memory DMOs which have been updated by that unit of work (i.e., transaction or subtransaction) as stale. The next time they are accessed by business logic, they would be refreshed from the database first. This will require keeping minimal state about which records and possibly properties are dirty at each subtransaction level, so we know what is stale (and thus needs refresh) and what isn't (and thus doesn't).
I'm hoping this is more efficient than the current approach, which involves a complex data structure of nested collection instances and reflection, which is constantly being modified each time we enter and exit a subtransaction. The idea is that a rollback is the exception rather than the common case, so let's try to invert the amount of work done, so that we minimize the work done for the more common case of committing a (sub-)transaction, while maybe requiring a bit more effort for the rollback case.
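Reduced to its essentials, the lazy strategy amounts to very little bookkeeping on the common path. The sketch below uses illustrative names (StaleTracker is not an FWD class): a rollback only records which ids are stale, and the refresh decision is deferred to access time:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the lazy/JIT undo idea described above: a rollback only marks the
// records touched by the rolled-back unit of work as stale; the actual
// re-read from the database is deferred until business logic next accesses
// the record. Class and method names are illustrative, not the FWD API.
public class StaleTracker
{
   private final Set<Long> stale = new HashSet<>();

   /** Called on rollback with the primary keys updated in that unit of work. */
   public void onRollback(Set<Long> updatedIds)
   {
      stale.addAll(updatedIds);   // no eager reversal of in-memory DMO state
   }

   /** Called before business logic reads a DMO; true means refresh from DB first. */
   public boolean needsRefresh(long id)
   {
      return stale.remove(id);    // refresh at most once per rollback
   }
}
```

The per-subtransaction bookkeeping (knowing which ids belong to which savepoint level) is elided here; that is exactly the open design question discussed below.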
Where I need some feedback:
- If you agree it makes sense, how exactly do we manage the state at each sub- and full transaction?
- Since we have complete control over the DMO structure and we are managing much of the state there already, it seems to make sense to keep the state close to the record object, though it can be something that is referenced by more than one object (i.e., by the BaseRecord for validation/flushing and session management purposes, as well as by some undo-specific object(s)).
- How to deal with no-undo (i.e., auto-commit) temp-tables?
- We need them to be on the same JDBC connection as regular temp-tables, if we are to allow server-side joins and avoid very slow compound queries.
- Is there a way to leverage some form of this approach as for regular tables, or do we have to keep the entire history of changes made during the transaction, as we do today, to enable re-roll forward after the database transaction automatically undoes changes on a transaction rollback?
- How to handle undoable record creates/deletes?
- Rolling back a create seems easy enough, but the delete seems trickier. I guess the database will handle the un-delete of the record for us, and we just recreate the DMO from that lazily if it is needed by business logic? Could it be that simple?
- There is some tracking of inserts/deletes currently in the default dirty share manager, but I'm not sure it's sufficient for full undo support.
- We could potentially leave this portion of the reversible architecture in place, though I'd prefer to get rid of all the aforementioned rolling-up effort at each subtransaction boundary.
- Any new performance concerns with this approach? Areas to consider:
- How is savepoint performance in the databases we use?
- How much of a hit do we take going to the database to refresh stale (rolled back) records, compared to the current approach? Can we batch these requests or combine the query to retrieve multiple records at once (something like: SELECT ... FROM TABLE_A a WHERE a.id in (...<tracked primary keys>...))? The trick here is combining the lazy/JIT approach of refreshing a needed DMO with potentially predicting which ones we will need next...
- Will the data structures and processing of this new approach be more efficient than what we use today, for the most common cases?
- What do you think of the approach generally? Any other thoughts are welcome, even if not covered above...
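The combined refresh query floated above could be assembled with one placeholder per tracked key, along these lines (a sketch; the class name and the SELECT shape are illustrative, not FWD code):

```java
import java.util.List;
import java.util.StringJoiner;

// Sketch: build a parameterized bulk-refresh statement for a set of stale
// primary keys, so several rolled-back records can be re-read from the
// database in a single round trip.
public class BulkRefresh
{
   public static String buildSelect(String table, List<Long> staleKeys)
   {
      StringJoiner in = new StringJoiner(", ", "(", ")");
      for (int i = 0; i < staleKeys.size(); i++)
      {
         in.add("?");   // placeholders keep the statement preparable/re-usable
      }
      return "SELECT * FROM " + table + " WHERE id IN " + in;
   }
}
```

For a few common IN-list sizes (say 1, 8, 32), the resulting strings could themselves be cached as prepared statements, which ties in with the statement-cache discussion later in this thread.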
#82 Updated by Ovidiu Maxiniuc over 4 years ago
Yes, I totally agree with the change to the way records are handled on rollback events. And I firmly believe that, even if the computational effort at the rollback moment is greater, the overall solution will get us a performance gain. The only thing that is not fully clear to me is the exact method of declaring a record STALE. I will re-read your post to get it correctly "digested".
To give you feedback on each issue:
1: the state at each sub- and full transaction: isn't this the role of RecordState of BaseRecord?;
2/3: as you mentioned, the big issue is handling the data that is not affected by transactions (no-undo). In other words, how can we create SQL tables that are not affected by a rollback to a specific savepoint? Indeed, this beats the transaction idea itself.
However, there is the notion of a "table variable" in SQL Server which seems to be what we need. Unfortunately, I think no other SQL implementation supports it, including H2;
4: as noted above, this will happen only in the (hopefully rare) event of rollback. Bulk retrieval is probably the best way to get multiple records at once.
#83 Updated by Eric Faulhaber about 4 years ago
Ovidiu, I have been reworking the DMO validation processing to take advantage of the fact that we have removed hibernate and we are accessing the database directly, in order to make validation more efficient. The idea is to:
- use the unique tracker or traditional dirty share management to check for conflicts in uncommitted transactions;
- start a savepoint;
- attempt the database update (for an existing record) or insert (for a newly created record);
- commit the savepoint if the operation succeeds or rollback the savepoint if it fails.
In the failing case, there is a little more work to do to figure out the error message to display if the failure was at the database. However, in the succeeding case (presumably the common case), we will have avoided potentially multiple queries to test the unique constraint(s), if any. Also, a later update to the database is not required for that change. The flip side of this is that we lose Hibernate's transparent write-behind feature (i.e., deferring persistence until the last possible moment), but based on our backward compatibility with the 4GL behavior, we weren't taking full advantage of this anyway.
This change in approach necessitates a modification of our current architecture, in which validation always occurs before flushing to the database. It has been challenging to refactor the code to make this happen. I wanted to get your thoughts on some things...
Can you think of any case where combining validation and flushing into a single step will break compatibility with the 4GL? Our separation of these steps was not driven by the 4GL per se, it was necessary because of Hibernate. I suspect the 4GL does not actually separate these operations, but I am not certain.
The cases which trigger validation I am thinking of are:
- all component fields of an index are updated on a newly created record;
- a single component field in an index is updated on an existing record;
- an ASSIGN statement completes and any partial validation begun earlier is completed;
- a BUFFER-COPY statement completes.
Am I leaving any out? Can you think of anything about the above cases (or any I've missed) which would prevent combining validation and flushing from working?
#84 Updated by Constantin Asofiei about 4 years ago
Eric, does the above include what VALIDATE does in 4GL?
#85 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
Yes, I totally agree with the change to the way records are handled on rollback events. And I firmly believe that, even if the computational effort at the rollback moment is greater, the overall solution will get us a performance gain. The only thing that is not fully clear to me is the exact method of declaring a record STALE. I will re-read your post to get it correctly "digested".
To give you feedback on each issue:
1: the state at each sub- and full transaction: isn't this the role of RecordState of BaseRecord?;
Yes, that's the idea. However, currently, there is a one-to-one relationship between a RecordState instance and a BaseRecord instance. This may need to change to deal with nested subtransactions. Having many stacks of these across records seems like a bad idea, as does having a complex data structure managed outside of the record objects. I have not got my head fully around this bit yet. Any suggestions are welcome. In any case, we won't be holding the changed data itself in memory.
2/3: as you mentioned, the big issue is handling the data that is not affected by transactions (no-undo). In other words, how can we create SQL tables that are not affected by a rollback to a specific savepoint? Indeed, this beats the transaction idea itself.
I'm not aware of any way of doing this at the database level. Maybe adding such support to H2 is the way to do this, but it locks us into H2 for temp-table use. Today, we are (theoretically) able to use other databases, though this would require significant code changes.
However, there is the notion of a "table variable" in SQL Server which seems to be what we need. Unfortunately, I think no other SQL implementation supports it, including H2;
I am not familiar with this. But since NO-UNDO only applies to temp-tables and the current target for that support is H2, it is not an option anyway. However, this may be good inspiration for an H2 modification noted above. And if it exists already in a major database product like SQL Server, the H2 team might even be willing to accept it back.
But, if we can come up with an efficient way to deal with it in FWD, that's good, too.
4: as noted above, this will happen only in the (hopefully rare) event of rollback. Bulk retrieval is probably the best way to get multiple records at once.
Possibly. But if some cached records are never used after a rollback, this will be partially wasted effort. Note that we are removing the aggressive DMO eviction policy, which was only implemented to deal with inefficiencies in Hibernate's dirty checking implementation. So the cache is more likely to have some records in it that won't be accessed again. Thus, whatever bulk fetch we do probably would need to be smarter than just getting all the records marked stale by a rollback.
#86 Updated by Eric Faulhaber about 4 years ago
Constantin Asofiei wrote:
Eric, does the above include what VALIDATE does in 4GL?
That's a good point. No, I wasn't thinking of that one. That seems like a case where the combined approach won't work very well.
#87 Updated by Ovidiu Maxiniuc about 4 years ago
I was thinking of RELEASE, too. However, the release will automatically issue a FLUSH routine, indirectly validating the data.
The problem with VALIDATE is that the data is not actually flushed, so only the unique tracker and dirty share tests are available.
#88 Updated by Greg Shah about 4 years ago
Perhaps in the VALIDATE case we can have a routine that implements the multiple queries approach. It is not optimal, but most likely this is not in performance-sensitive code paths. It certainly is quite separate from the rest of the approach, right?
#89 Updated by Eric Faulhaber about 4 years ago
Well, wait a minute. The savepoint is tightly bracketed to just that validating update, so I suppose I could just rollback the savepoint whether it succeeds or fails, for the VALIDATE statement case. Anyone see a problem with that approach?
#90 Updated by Eric Faulhaber about 4 years ago
I think we're ok on both the explicit RELEASE statement validate/flush as well as the implicit validate/flush when a record is unloaded or replaced in a buffer. Is there a specific concern you have for this case, Ovidiu?
#91 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
Well, wait a minute. The savepoint is tightly bracketed to just that validating update, so I suppose I could just rollback the savepoint whether it succeeds or fails, for the VALIDATE statement case. Anyone see a problem with that approach?
Yes, inserting an explicit savepoint for the VALIDATE statement and rolling it back after a successful update/insert should do the trick. Of course, on failure, the record has failed validation.
I think we're ok on both the explicit RELEASE statement validate/flush as well as the implicit validate/flush when a record is unloaded or replaced in a buffer. Is there a specific concern you have for this case, Ovidiu?
Nope. At this moment, all issues seem to have solutions.
#92 Updated by Eric Faulhaber about 4 years ago
Separate idea, Ovidiu, for the caching of converted FQL statements...
A lot of 4GL code I've seen has a pattern like:
- fetch an existing record;
- update some number of fields, but not all;
- flush the data, either implicitly by fetching another record, or explicitly with RELEASE.
I am tracking the dirty properties of a record in a bit set, where each set bit marks a field which has been touched by business logic. It seems like this would make a good key for a cache of update statements. Alternatively, we could use a bitset of fields which were actually changed, not just touched.
Say this happens in a loop. This would result in a consistent bitset pattern of dirty/changed properties, which we could associate with a SQL partial update statement. Thus, we would only need to generate the SQL update statement on the first pass of the loop, and we could use the dirty props bitset as a key to cache that prepared statement and fetch it the next time through the loop. Thus, we have the benefit of only updating the columns we need to update, without having to compose this "custom" statement every time.
The cache would last as long as the connection, then would be cleared.
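The cache described above can be sketched with a BitSet-keyed map per connection. The names here are illustrative (not the FWD API), and the cached value is the SQL string rather than a PreparedStatement, to keep the sketch self-contained; BitSet implements equals/hashCode by content, so it works directly as a map key, provided it is cloned when stored:

```java
import java.util.BitSet;
import java.util.HashMap;
import java.util.Map;

// Sketch: cache partial-update SQL keyed by the pattern of dirty properties.
// BitSet's value-based equals/hashCode make it usable as a map key, as long
// as we defensively clone it when storing (the live bitset keeps changing).
public class UpdateStatementCache
{
   private final Map<BitSet, String> cache = new HashMap<>();

   public String getUpdateSql(String table, String[] columns, BitSet dirty)
   {
      String sql = cache.get(dirty);
      if (sql == null)
      {
         StringBuilder buf = new StringBuilder("UPDATE ").append(table).append(" SET ");
         boolean first = true;
         for (int i = dirty.nextSetBit(0); i >= 0; i = dirty.nextSetBit(i + 1))
         {
            if (!first)
            {
               buf.append(", ");
            }
            buf.append(columns[i]).append(" = ?");   // one parameter per dirty column
            first = false;
         }
         sql = buf.append(" WHERE id = ?").toString();
         cache.put((BitSet) dirty.clone(), sql);     // clone: the live bitset mutates
      }
      return sql;
   }

   public int size() { return cache.size(); }
}
```

In the real design the cached value would be the PreparedStatement itself, scoped to the JDBC connection, so a repeated dirty pattern skips both string composition and statement preparation.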
#93 Updated by Ovidiu Maxiniuc about 4 years ago
Eric, I understand that the "altered" bits allow updating only some of the fields of a record. The idea is good, but I have two things in my head:
- I do not think this is related to FQL conversion. The SQL code is directly generated, in Persister, from the record at hand (and the field state bits). I believe this is quite fast. The cache is probably faster, but I am inclined to think not by much;
- if there are many fields (n) and a program really alters them randomly, we get a very large number (up to 2^n) of SQL statements cached. Of course, this is just theoretical; in real life, only a fraction of fields are altered, but I expect some 5 to 10 update variants per DMO. So, it might work.
#94 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
Eric, I understand that the "altered" bits allow updating only some of the fields of a record. The idea is good, but I have two things in my head:
- I do not think this is related to FQL conversion. The SQL code is directly generated, in Persister, from the record at hand (and the field state bits). I believe this is quite fast. The cache is probably faster, but I am inclined to think not by much;
Yes, I misstated FQL. I meant SQL.
In terms of the performance impact, I expect you are correct, in that in the scheme of everything else going on in the system, the composition of the SQL update statement string is a relatively small part. However, generally speaking, string operations are not the fastest and I'd like to avoid repeating the same work to get the same result, if possible.
More importantly, though, consider that if we are caching PreparedStatement instances rather than the strings themselves, we can skip not only the composition of the statement text, but also gain whatever benefit we get from the JDBC driver (depends what the implementation does) and/or the database server (statement parsing and prep) by re-using PreparedStatement instances, rather than creating new ones. I was not clear on that earlier, but that's why the cache's life cycle would match that of the JDBC connection.
- if there are many fields (n) and a program really alters them randomly, we get a very large number (up to 2^n) of SQL statements cached. Of course, this is just theoretical; in real life, only a fraction of fields are altered, but I expect some 5 to 10 update variants per DMO. So, it might work.
Yes, we can use a fixed-size cache (maybe LRU; LFU would be better in theory, but my implementation is slow) to control the growth of the cache. But in practical terms, I think we will most often hit a small number of update patterns per DMO, as most business logic won't be updating random combinations of fields. It will be groups of fields that have some business meaning.
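For reference, a bounded LRU cache is straightforward to build on LinkedHashMap in access order (a generic sketch, not FWD code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: fixed-size LRU cache. With access-order iteration, the eldest
// entry is the least recently accessed one, and it is evicted automatically
// once the capacity is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V>
{
   private final int capacity;

   public LruCache(int capacity)
   {
      super(16, 0.75f, true);   // true = access order (LRU), not insertion order
      this.capacity = capacity;
   }

   @Override
   protected boolean removeEldestEntry(Map.Entry<K, V> eldest)
   {
      return size() > capacity;
   }
}
```

Since removeEldestEntry is consulted on every put, eviction costs nothing extra on the common path, which fits the goal of keeping the frequent case cheap.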
#95 Updated by Eric Faulhaber about 4 years ago
Ovidiu, I do not understand this code at line 370 of Session, from one of your recent updates:
for (PropertyMeta prop : props)
{
   if (dmo.checkState(prop.getName(), PropertyState.DIRTY))
   {
      persister.update(dmo);
      wasDirty = true;
      break;
   }
}
It replaced:
persister.update(dmo);
The new code will call persister.update for every dirty property, instead of just once if the record is dirty. That wasn't the intention, was it?
BTW, I have gotten rid of the PropertyState.DIRTY flag altogether, as it was redundant with the bit set of dirty properties I am keeping now, but that is not central to this question.
#96 Updated by Eric Faulhaber about 4 years ago
Never mind. I think I have worked it out.
#97 Updated by Ovidiu Maxiniuc about 4 years ago
- Status changed from New to WIP
I have a pending update with some fixes. This exact piece of code is affected. In fact it looks something like:
if (dmo.getModified().cardinality() > 0)
{
persister.update(dmo);
wasDirty = true;
}
I am not sure whether the wasDirty
flag is still needed, but if it remains false
, the dmo/property flags are not attempted to be updated. The getModified()
is a BaseRecord
method I added recently to get the modified property bit set we talked about.
#98 Updated by Eric Faulhaber about 4 years ago
Please hold off committing that. I have very similar changes which I will be committing shortly (sorry in advance for the mess). Please review those and see if they are compatible with yours or can be made so.
#99 Updated by Eric Faulhaber about 4 years ago
After I updated to your latest revision, I seem to be missing a RecordHelper
class. Please commit just that class so I can compile.
#100 Updated by Ovidiu Maxiniuc about 4 years ago
Done, please see r11379. Sorry, I forgot to bzr add
:(.
I understand that you still have a bit of work before committing. I do not want to rush you so take your time. I will update and merge changes over the weekend (on Sunday most likely) or on Monday morning.
I expect there will be code conflicts, even if minor ones, so I will have to resolve them or even adjust my new code.
#101 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
Done, please see r11379. Sorry, I forgot to "bzr add" :(.
I understand that you still have a bit of work before committing. I do not want to rush you so take your time. I will update and merge changes over the weekend (on Sunday most likely) or on Monday morning.
I expect there will be code conflicts, even if minors, so I will have to resolve or even adjust my new code.
Yay! It compiled. Ok, so now my working copy is merged with all your changes through your last commit.
Yes, there is a bit of work before getting things working, but I want to commit to get this set of changes backed up at least.
Can you please point me to the test cases you are using? Thanks.
#102 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
Can you please point me to the test cases you are using? Thanks.
Well, I do not have a set of testcases; instead, I have a piece of code that I converted initially and then manually adjusted so that only the important areas remain and I can time them. The initial P4GL code contains a table with interwoven simple data, extents, datetime-tz, and extent datetime-tz fields. r11379 does not handle it too well, but my pending changes should fix that and add optimizations (including for the cached UPDATE). Here is the base P4GL code:
#103 Updated by Eric Faulhaber about 4 years ago
Thank you!
I've committed rev 11380. Please take a look when you have time (doesn't need to be now), starting with BaseRecord
and RecordBuffer$Handler.invoke
. My intentions of refactoring the invocation handler, RecordBuffer
, and other classes were more ambitious than what I was able to do in this pass. I had to abort and roll back several approaches to stop scope creep.
I left a lot of the old code there, but I commented or disconnected it (e.g., DmoValidator
, RecordBuffer.ValidationHelper
, ChangeBroker.SessionInterceptor
, etc.).
Note the new TODOs, which are either missing features needing implementation, or me questioning my decision on some approach. I will need your help cleaning some of those up.
I'll describe the intent behind the changes in a separate note. We've covered some of the ideas already. I have to put my thoughts together on that...
I will try to get some basic testing going. I'm sure there is much to debug. Thanks for your patience on this revision.
#104 Updated by Eric Faulhaber about 4 years ago
Oops. Please use rev 11381 instead. I missed rolling back one aborted change in rev 11380. The class it referenced still existed in my working copy, so it compiled here, but the committed revision was broken.
#105 Updated by Eric Faulhaber about 4 years ago
Ovidiu, this is a brain dump of things on my mind about the latest effort. I'm sure more will come to me, and I will add it.
You may have concerns of your own that are closer to the code you have been working on. If you can edit this note, go ahead and add those, and we will maintain this as something of a work tracker. We can add initials of the person working on an item at the front of that point, and then mark it with strike out formatting when it is handled.
Be sure to coordinate with me (via email - we can use the same check in/out protocol as for other shared resources) and always do a hard reload of the page before going into edit mode, so we don't lose each other's updates. If more detail is needed on a particular point, we can discuss those as usual in separate posts.
TODO:
Short term cleanup...
General
- remove deprecated classes/code
- javadoc/comments
- remove remaining Hibernate vestiges; Hibernate is no longer linked to the build, but there is a lot of code still in the project related to it
- [OM] review placeholder classes which were created to allow compile
- which can be eliminated permanently?
- which require additional implementation of replacement functionality?
- rename code constructs which reference HQL to instead use FQL
- a lot of javadoc and comments refer to it (this cleanup may be a longer term item)
persist.orm package
- [OM] refactor and finish BDTType implementations:
  - fix ParameterSetter: we are not converting from BDT types to Java types; that is done in Record setters
  - only business logic and the top-level API it touches should be dealing in BaseDataType subclasses; everything below should deal with database-friendly Java types (and null for unknown)
  - we need type-appropriate JDBC setParameter methods, or we skip this altogether and use PreparedStatement.setObject and let the driver figure it out (but this may be slower)
  - rename BDTType (to DataHandler?), since BaseDataType is not involved at all and I've since added an initialValue method to deal with converting initial value strings to actual types when record metadata is loaded
- [OM] remaining initialValue methods need to be implemented
  - need to handle non-static initial values, such as now() for date types; seems we could use a lambda for this
- [ECF] BaseRecord:
  - activeBuffer maybe not needed
    - was using it for more initially, but now it's only used to upgrade the lock if the data changed
    - need testing to determine if the lock is upgraded even if data is the same (i.e., a field is set to its same value)
      - if so, the lock upgrade can move back up to the RB invocation handler, before the setter is invoked
    - current implementation upgrades the lock only if the data value is changing
  - confirm whether all record and property states currently in use are actually needed, or if others are needed
    - undoActive (called upon validation failure for a change on an undoable table) most likely needs record and property state to roll back, in addition to the data value
  - remove inline validation code; I initially implemented some validation there, but now all validation is handled by the Validation class
- Validation:
  - [ECF] after some testing, remove deprecated methods; these are vestiges of early, aborted implementations
  - [OM] implement checkMaxIndex
    - can this be refactored into the SQL Server dialect, or is it always needed (to apply 4GL limitations)?
    - can 4GL limitations be ignored in our implementation, or are they needed to support business logic?
- mandatory/non-null validation:
  - checkNotNull (without parameter) reports the first violating property it finds in order of properties (which matches legacy field order, IIRC); confirm that this is the same behavior as 4GL
  - failNotNull reports the converted DMO property name in the error message instead of the legacy field name; should we add the legacy name to PropertyMeta?
- [ECF] unique constraint (uncommitted data) validation (some of this may be implemented in UniqueTracker instead of Validation); see validateUniqueUncommitted:
  - need to synchronize access to UniqueTracker data before validating against uncommitted and committed data
  - I think this must include the database update, to ensure UniqueTracker state remains in sync with database indices
  - locking must be as fast and as granular as possible to prevent contention; this could become a bottleneck, since we are making a trip to the database while holding the lock
  - initial implementation may lock on the UniqueTracker instance, but this may cause contention
  - a more granular implementation would lock only the affected unique indices (in a consistent order), based on changed data (do we need a bitset for changed data in BaseRecord, in addition to dirty data?), but the selection of these indices must be very fast (probably a bitset scan)
  - ensure the UniqueTracker check excludes a match on the object being checked
  - ensure UniqueTracker removes entries when an already tracked record's data changes
  - when are records released from unique tracking?
  - need a proper error message with fields and data when UniqueTracker determines a record violates a unique constraint
- unique constraint (committed data) validation; see validateUniqueCommitted:
  - differentiate between database errors and unique constraint violations (the former is fatal; the latter is what we are trying to detect as a validation error)
  - once we've determined it is not a fatal database error and is in fact a unique constraint violation causing a session.save failure, how do we compose a legacy error message?
    - option 1: based on the old implementation, issue queries testing every unique index until the failing one is found
      - PROS:
        - dialect neutral
        - we already have the code to do this in some form, though it will need heavy refactoring
      - CONS:
        - slow; likely to cause contention, because it greatly expands time spent with UniqueTracker resource(s) locked
        - adds a lot of cumbersome, awkward code just to handle an error case
    - option 2: dialect-specific error analysis
      - PROS:
        - don't need to clutter code with all the query setup; analysis code resides in dialects
        - don't make potentially multiple additional round trips to the database
        - analysis can be done outside of UniqueTracker resource locks, so less likely to cause contention
      - CONS:
        - different code for each dialect
        - fragile if database error handling changes (though not likely, as many applications will have been built around these architectures)
        - probably requires string parsing; messy
        - adds a lot of cumbersome, awkward code just to handle an error case
    - given the pros and cons, especially the prospect of increasing contention under option 1, I prefer option 2
      - I've changed my mind on this; see #4011-127
Session
- review record and property state changes
- review caching code, determine best type of cache
- [ECF] review refresh

PropertyMeta
- [OM] can we get rid of reflection (getter/setter) use?
persist package
- currently, there is an impedance mismatch between old code which survived and new code, where we deal with DMO data in this package
- old code uses BDT types
- new code uses lower level Java/SQL types
- goal is to have all "behind-the-scenes" work done using the new types
- have more direct access to manipulate
BaseRecord
data and state via an internal API - only business logic and the API it uses (i.e., DMO interfaces via proxy) should deal in BDTs; these are translated to/from low-level types in
Record
setter methods (this part exists, but a lot of low level work is still done using maps of methods, reflection, and BDT getters/setters)- review all uses of reflection and maps by property to see what can be refactored/improved to use low-level types and what must remain in place because business logic needs it
- [ECF] remove the following classes/code and related, commented references, once validation/flushing is better tested and debugged:
  - RecordBuffer$ValidationHelper
  - DmoValidator (once dirty share features are re-integrated elsewhere)
  - ChangeBroker$SessionInterceptor
  - RecordBuffer$Handler methods invoke1 (created as a reference for invoke) and detectChange
- re-enable dirty share features
- only for tables marked for dirty-read
- with DmoValidator being removed, from where should dirty sharing now be driven?
- impact of dirty share support for everything else should be minimal
- [ECF] remove code that was part of aggressive DMO eviction policy (DMO use counting and related code); we want to keep more records in the session cache than we were, not only those currently in active buffers
- [OM] review/test trigger, dataset, and OO code in the RB invocation handler; did I carry this over correctly?
- [ECF] ChangeBroker, RecordChangeEvent, RB.unreportedChanges and data change processing:
  - old code uses BDT before and after data, while new code uses low-level data types; I expect there are bugs to be found because of this
  - refactor RecordChangeEvent and listeners to be more BaseRecord-aware
    - use a flatter before/after representation of the changed data and perhaps offsets and direct calls to BaseRecord, instead of the current logic to deal with scalar vs. extent data
    - review what is really still necessary here
- [ECF] Persistence and Persistence$Context
  - review/remove any code meant to optimize/work around Hibernate's transparent write-behind (i.e., deferred flushing) feature
  - these classes are least changed from the Hibernate-dependent implementation; I did not do a lot of work here after removing Hibernate; I suspect there are:
    - unimplemented code paths
    - workarounds for the way Hibernate did things which must now be removed/changed
    - [OM] caches of objects it may no longer make sense to cache and handle the same way (e.g., Hibernate's Query and Session classes)
- [ECF] can we get rid of the implicit transaction logic?
  - I suspect this is a performance bottleneck, with all the work that must be done for all the extra transactions being introduced
  - this was introduced because Hibernate opened a transaction on any read operation and left it open (since it cannot know when the user is finished with a result set); is a transaction needed for read-only work?
  - I have seen the implicit transaction reference counting get out of balance on several occasions, so there is at least one bug in that code anyway; I would rather remove it than fix it, if possible
- [OM] review TempTableResultSet and related code
- [OM] review the various "live" metadata "updater" implementations, which use a Minimal<XYZ> proxy interface to update a backing metadata table; do these need to change in light of the new DMO interface design? Probably not, but let's review to be sure...
  - LockTableUpdater
  - ConnectTableUpdater
  - TransactionTableUpdater
  - is there a more efficient way to implement these now? (deferred)
schema package
- [OM] rework dynamically generated temp-tables to use the new DMO design
  - generate the DMO interface only; use DmoClass to generate implementation classes at runtime, like we do for all other tables
  - all annotations moved to the DMO interface
Next steps...
- [ECF] replace the current UNDO architecture with a savepoint-based, just-in-time DMO refresh implementation (described separately in this task)
- review temp-table copying to be more efficient (e.g., in dataset use, for temp-table parameters, etc.) (deferred)
- can we leverage deferred persistence and/or batching? (deferred)
- [OM] re-enable database import
  - the previous version required Hibernate; dependencies were either removed or replaced with stubbed classes, but it currently is non-functional
  - to simplify the first pass, we should keep the current architecture and just implement the missing dependencies; a subsequent pass would be optimized to remove the largest tables as the critical path
  - resolve RecordMeta.indexOf use
#106 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote (in email):
working on update optimization based on "changed" bitsets as keys. It proves to be a bit more difficult than I expected, as the cached object must include more than a simple UPDATE string; the positions in data of the altered fields and the SQL setters must also be saved.
I'm not sure what you mean here. The bitsets implicitly record the offsets of the changed properties in the data array; we shouldn't store that information redundantly. That is, the offsets of the set bits correspond with the offsets in the data array. These same offsets correspond with the offsets of the PropertyMeta
objects in their array, which hold the SQL parameter setters. So, it seems to me you have all the needed information in the bitsets, which would be the keys to the map (with the PreparedStatement
objects for the updates being the values).
So I was thinking we would have something like this:
BitSet changed = dmo.getChangedProps();
PreparedStatement stmt = cache.get(changed);
if (stmt == null)
{
   sql = generateUpdateSql(dmo);
   stmt = conn.prepareStatement(sql);
   cache.put(changed, stmt);
}
prepareUpdate(stmt, dmo);

...

void prepareUpdate(PreparedStatement stmt, BaseRecord dmo)
{
   PropertyMeta[] propMeta = dmo._recordMeta().getPropertyMeta();
   BitSet changed = dmo.getChangedProps();
   
   // walk changed bitset and set updated values into the prepared statement;
   // i indexes the data/PropertyMeta arrays, j is the 1-based JDBC parameter index
   for (int i = changed.nextSetBit(0), j = 1; i >= 0; i = changed.nextSetBit(i + 1), j++)
   {
      propMeta[i].typeHandler.setParameter(stmt, j, dmo.data[i]);
   }
}
The offsets of the query substitution parameters in the update statement SQL should never change, and the offsets of the data and PropertyMeta
objects in their arrays are fixed, so isn't that all you need?
TIMESTAMP WITH TIME ZONE may complicate this; I haven't yet carefully reviewed how you've implemented that. Hopefully it can be adjusted as needed, if this complexity is not already hidden by that type's ParameterSetter implementation.
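One caveat with keying the cache on the bitset: java.util.BitSet is mutable, so a clone of the live bitset should be stored as the map key. The composition of the statement text itself is straightforward; here is a hypothetical sketch (the class name, table/column names, and the recid key column are illustrative, not the actual FWD SQL):

```java
import java.util.BitSet;

// Hypothetical sketch: compose the UPDATE text from the changed-property bitset.
// The set bits select column names in fixed property order, so equal bitsets
// always yield identical statement text (and can share one PreparedStatement).
public class UpdateSqlComposer
{
   public static String compose(String table, String[] columns, BitSet changed)
   {
      StringBuilder sql = new StringBuilder("update ").append(table).append(" set ");
      boolean first = true;
      
      // walk only the set bits; offsets match the data and PropertyMeta arrays
      for (int i = changed.nextSetBit(0); i >= 0; i = changed.nextSetBit(i + 1))
      {
         if (!first)
         {
            sql.append(", ");
         }
         sql.append(columns[i]).append(" = ?");
         first = false;
      }
      
      return sql.append(" where recid = ?").toString();
   }
}
```

A caller would cache the result under (BitSet) changed.clone() to guard against later mutation of the live bitset invalidating the key.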
#107 Updated by Eric Faulhaber about 4 years ago
Ovidiu, please see #4011-105 and note that I have assigned the "refactor and finish BDTType
implementations" item to you. This is high priority, because right now, nothing works without it. This is due to my recent commits, but the change to use lower level types in the persist.orm
package instead of BDTs is intentional.
To the degree possible, I want the persist.orm
package to be entirely unaware of the FWD wrapper types, though this might not be entirely feasible. Ideally, even the class names in the persist.orm.types
package should refer to the J2SE type names we are using at the ORM level, instead of to the FWD wrapper type names. The translation between FWD and J2SE types should occur in the Record
class only.
Admittedly, I have muddied this a bit myself with the inclusion of the initialValue
method in the BDTType
interface. This was a bit of a hack on my part, as it seems you wrote BDTType
only to group two functional interfaces. I did not want to have yet another handler object as part of PropertyMeta
, but maybe it makes sense to do so.
In 4011a/11384, I have changed IntegerType
as an example of what I'm looking for, and in order to get a very basic test case working, but I haven't touched the other ParameterSetter
implementations.
#108 Updated by Ovidiu Maxiniuc about 4 years ago
Eric,
I committed my latest changes as revision 11385. The caching of UPDATES is included, along with fixes for BDTType.
The caching is a bit more complex, as the data (their offsets) and the associated parameter setters need to be prepared in separate lists for each SQL query used. Of course, if there are no EXTENT fields, a single SQL statement is used and the list of parameters matches the bitset. However, if there are EXTENTs, multiple secondary SQL statements are needed, and they need to be linked to the main record using parent and index as special fields, resolved for each secondary statement after the UPDATE is computed/obtained from the cache. Note that PropertyMeta, beside the columnIndex, now has a columnCount (usually 1, but for DTZ it is 2, evidently) and an offset. The columnIndex gives the column number in a SQL insert; the offset gives us the position of a field in the in-memory buffer data. For extent properties, the first element is referenced; to locate the actual position of another element, its index (0-based) must be added. Note that the lists of parameters will count the number of updated fields, but in the case of DTZ, the index of the next positional parameter is automatically incremented by the number of columns used.
#109 Updated by Ovidiu Maxiniuc about 4 years ago
Regarding the BDTType and the TypeManager:
The problem is that I wrote TypeManager to provide a ParameterSetter for all kinds of data that could be passed as a query parameter, including native Java, FWD, and other supported types (see the static initialization in TypeManager).
Initially (when I used FWD types in BaseRecord.data), I needed conversion to/from FWD, so the Converter interface did the trick. With the latest commit, the interface is no longer present, but I kept the method definition in BDTType, along with initialValue(). Since the classes which implement BDTType are FWD-type related, I decided to keep this name for the moment.
#110 Updated by Eric Faulhaber about 4 years ago
Ovidiu, I've made some initial assignments in #4011-105. Please look at those items which I've marked with your initials and let me know if you have any questions/thoughts. I think priority should be:
- Finish your current work with type handlers, update statement caching (probably already done).
- Review RB invocation handler changes; all of it, but especially as it relates to code you've implemented in the past which I've tried to pull over to the new version (e.g., triggers, dataset, OO). I'm not sure it will ultimately stay in that form, but I want to at least avoid having introduced regressions in this pass. If you have any feedback on any of the TODOs in that code, please let me know.
- Review placeholder classes/code we put in place to compile w/o Hibernate. We need to come up with an action plan for that code.
- Removing reflection dependencies (getter/setter methods) from
PropertyMeta
if possible. - Migrate
checkMaxIndex
code. - Rework UNDO architecture.
#111 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
4. Removing reflection dependencies (getter/setter methods) from
PropertyMeta
if possible.
As of r11385, PropertyMeta does not use reflection; it merely uses the getter/setter names for assembling the field setter and getter implementations.
#112 Updated by Eric Faulhaber about 4 years ago
Ovidiu, a few questions...
- I am testing/debugging
UniqueTracker
and I will have some changes. Do you have any pending updates to this class? - Can I mark the
BDTType
work as complete, or is there any work pending on this item?
#113 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
- I am testing/debugging
UniqueTracker
and I will have some changes. Do you have any pending updates to this class?
No, to avoid any conflict, I have not touched this class (except for a really small fix already committed) since you mentioned you started to integrate it. Please go ahead with any change you need here.
- Can I mark the
BDTType
work as complete, or is there any work pending on this item?
Not yet, I guess. The remaining initialization methods need to be implemented; I am writing testcases for them right now. Also, I think I have spotted some javadocs that need minor adjustments. I am not sure whether we should keep this name or change it to DataHandler / TypeHandler as you suggested.
#114 Updated by Eric Faulhaber about 4 years ago
Just a note on what I'm up to, since I realize I've become a bit quiet again. I've been reworking UniqueTracker
as a result of the requirements becoming more clear to me while I was implementing the unique constraint validation logic. Specifically, I've been:
- adding reverse mapping of unique key by record id to enable easier updates to records already being tracked;
- adding locking; I am making this granular to the
UniqueIndex
level (really, to the group of such instances requiring update), to try to minimize contention in this class as we scale the number of sessions using it; - implementing atomicity of locking/rollback of a unit of work, i.e.,
- an update locks each affected
UniqueIndex
instance and changes its unique key; - if any fails, the ones already updated are rolled back and those locks released;
- this has to be opened up (rather than just being self-contained), so that the database-level unique constraint check can happen between lock and unlock, and changes not possible at the database can be rolled back in the
UniqueTracker
.
- an update locks each affected
- implementing an error message in the expected format if a
UniqueIndex
update is found to violate the unique constraint; - integrating these changes into and testing [sub-]transaction undo (within
UniqueTracker
, I mean, not the larger undo architecture).
The changes have become a bit more involved than I was hoping, but hopefully I can finish them today and get further into testing...
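The granular locking idea could be sketched roughly as follows (class, field, and method names are invented for illustration; the real UniqueTracker/UniqueIndex code carries much more state): the affected indices are selected with a fast bitset scan and always locked in ascending order, which prevents deadlock between sessions updating overlapping index sets.

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch only (not FWD code): granular, deadlock-free locking of
// the unique indices affected by a set of changed properties.
public class UniqueIndexLocker
{
   private final ReentrantLock[] indexLocks;   // one lock per unique index
   private final BitSet[] indexProps;          // properties participating in each index

   public UniqueIndexLocker(BitSet[] indexProps)
   {
      this.indexProps = indexProps;
      this.indexLocks = new ReentrantLock[indexProps.length];
      for (int i = 0; i < indexLocks.length; i++)
      {
         indexLocks[i] = new ReentrantLock();
      }
   }

   /** Select the affected indices by a fast bitset intersection scan. */
   public List<Integer> affectedIndices(BitSet changed)
   {
      List<Integer> affected = new ArrayList<>();
      for (int i = 0; i < indexProps.length; i++)
      {
         if (indexProps[i].intersects(changed))
         {
            affected.add(i);   // ascending order gives the consistent lock order
         }
      }
      return affected;
   }

   public void lock(List<Integer> indices)
   {
      for (int i : indices)
      {
         indexLocks[i].lock();
      }
   }

   public void unlock(List<Integer> indices)
   {
      // release in reverse order of acquisition
      for (int i = indices.size() - 1; i >= 0; i--)
      {
         indexLocks[indices.get(i)].unlock();
      }
   }
}
```

The database-level constraint check would run between lock and unlock, with any tracker-state changes rolled back before release on failure.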
#115 Updated by Eric Faulhaber about 4 years ago
Eric Faulhaber wrote:
[OM] replace current UNDO architecture with savepoint-based, just-in-time DMO refresh implementation (described separately in this task)
Ovidiu, I would like you to start thinking about this in more detail sooner rather than later. It is probably the biggest piece left in the current performance effort, so I don't want to leave it to the end. Please review the information already posted in this task and present any ideas and questions you may have, so we can get the discussion going.
#116 Updated by Eric Faulhaber about 4 years ago
I committed intermediate changes for unique constraint validation in r11389, but it is not yet working properly. The reverse lookup is there. The locking and unlocking are implemented. The rollback is implemented (of changes to the tracker, not for the DMOs in general). The error message is partially implemented, though it needs to use legacy names instead of converted names.
I haven't yet decided what to do with the track
/untrack
methods, though I think track
will go away in its current form, as that functionality was integrated into beginUpdate
. I haven't yet found the best place to hook in the untracking of records, so right now, the entries just accumulate and are never removed.
Also, I haven't figured out how to best deal with transactions and nested subtransactions. There are {begin|end}Transaction
methods, but it's not clear that their implementations are complete, nor exactly what would invoke them.
#117 Updated by Ovidiu Maxiniuc about 4 years ago
As seen from a distance, a new class handling the savepoints should implement the Committable interface, with the addition of a new method/notification, startTransaction. Most likely, there should be multiple instances of this, one for each database. At this moment we do not have objects at this level of granularity.
At this point, it is not very clear to me who should handle the savepoint management. Clearly, the savepoints must be created when the transaction begins (beginTx()), but the notification chain links TransactionManager, its inner TransactionHelper, and BufferManager. What is evident is that each transaction block needs to save the Savepoint/id so that, in a cascaded rollback, the proper savepoint can be restored.
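The per-block savepoint bookkeeping described above might be sketched like this (a pure simulation with invented names; real code would issue SAVEPOINT / ROLLBACK TO SAVEPOINT through the JDBC connection at each step):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch: each transaction block records a savepoint id on entry;
// a rollback pops back to the id saved for that block, so cascaded rollbacks
// always restore the proper savepoint.
public class SavepointStack
{
   private final Deque<Integer> savepoints = new ArrayDeque<>();
   private int nextId;

   /** Called on block entry; real code would issue SAVEPOINT sp_<id> here. */
   public int beginBlock()
   {
      int id = nextId++;
      savepoints.push(id);
      return id;
   }

   /** Called on block rollback; real code would issue ROLLBACK TO SAVEPOINT sp_<id>. */
   public int rollbackBlock()
   {
      return savepoints.pop();
   }

   public int depth()
   {
      return savepoints.size();
   }
}
```

One such stack per connected database would match the one-instance-per-database structure discussed above.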
Implementing (or intercepting) the commit() and rollback() of the Committable interface will be sufficient for the manager to issue the SQL primitives against all affected databases. As noted, the rollback/restore will be more resource-consuming, as the tracked entities need to be refreshed. Normally, this must be done for each table of all databases, but I wonder if we can track the entities and refresh only those which were touched in the block being rolled back.
I am not sure if the Reversible hierarchy has to go. I think we should keep it, or modify it so we can use it as a last resort for handling NO-UNDO temp-tables (fast-forward REDO SQL operations that are reverted by a jump to a savepoint). I could not find a better solution here.
An optimization for NO-UNDO tables: if all temp-tables from the respective connection are NO-UNDO, we can simply declare it as auto-commit and completely skip savepoint management. However, this might not be possible in real life, as the set of tables is really not known beforehand.
#118 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
As seen from a distance, a new class that should handle the savepoints should implement the
Committable
interface, with the addition of a new method/notification ofstartTransaction
. Most likely, there should be multiple instances of this, one for each database. At this moment we do not have object at this granulation level.
There is one instance of BufferManager$DBTxWrapper
for each database with an active, explicit transaction in the current context. Is this not what you mean?
At this point, it is not very clear to me who should handle the savepoint management. Clearly, they must be saved when the transaction begins (
beginTx()
), but the notification chain linksTransactionManager
, its innerTransactionHelper
,BufferManager
. What is evident is that each transaction block need to save theSavepoint
/id
so that, in a cascaded rollback the proper savepoint to be restored.Implementing (or intercepting) the
commit()
androllback()
ofCommitable
interface will be sufficient for the manager to issue the SQL primitives against all affected databases. As noted, the rollback/restore will be more resource-consuming as the tracked entities need to be refreshed. Normally, this must be done for each table of all databases, but I wonder if we can track the entities and refresh only those which were touched in the block being rolled back.
I would like to do this lazily, if possible, such that it is avoided for DMOs which are never accessed again after a rollback. That is, mark a DMO with the STALE
record state, and only refresh that DMO's in-memory data from the database with the rolled back data when it is actually accessed/used. Some questions which require analysis, however:
- How do we know which records to mark STALE?
  - Do we run through the entire Session.cache and mark them all? Probably not.
  - Do we instead remember which records have been modified (even after the changes are persisted) and only mark those? I'm guessing yes. This would require the efficient maintenance of a list of these. Maybe a Set maintained from BaseRecord during processing of setter calls?
  - Is it possible to have a DMO in memory which is not currently in the cache? I didn't intend this to be possible with the re-design, but have I missed something?
- What are all the touch points where we check whether a DMO is STALE and needs to be refreshed?
  - It is not just getter/setter processing, but also lower-level operations like making snapshots of a DMO, etc. These all have to be identified.
I am not sure if
Reversible
hierarchy has to go. I think we should keep it or modify it so we can use it as a last resort for handling the NO-UNDO temp-tables (fast-forward REDO SQL operations that are reverted by a jump to a savepoint). I could not find a better solution here.
Am I oversimplifying to think that we could handle NO-UNDO using a twist on the new approach? Meaning...
- we never mark NO-UNDO DMOs as STALE upon a rollback; they have another state (NO_UNDO)
  - this state is set when the record is created and never changed (possible alternative: use a specialized subclass of TempRecord?)
- any DMOs which are NO_UNDO and which have been modified at any point during the transaction cannot be expired from the session cache; they must remain in memory in their modified state
  - this is key; otherwise, any changes in DMOs which were allowed to expire are lost at full transaction database rollback
- we do not refresh the in-memory DMO state from the rolled back database state the way we would with the STALE state
  - after database-level full transaction rollback, we re-persist the DMO to the database in a separate transaction (similar to today)
  - there are no reversible changes to apply like we do today, because the data already in the DMO in memory is what we want in the database
- a NO-UNDO DMO does nothing special after database full transaction commit (same as regular DMOs)
  - this is the fast and hopefully common case
- NO-UNDO DMOs don't do anything on sub-transaction rollback or commit.
In this model, there would be no reversible state to roll up or manage at the sub-transaction level. The in-memory DMO state at the time of the full transaction rollback represents what we want in the database.
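The split in behavior at full-transaction rollback could be sketched like this (the Dmo type, flags, and SQL strings are illustrative assumptions, not the FWD API): UNDO records are only marked STALE, while NO-UNDO records keep their in-memory data and produce re-persist statements to be replayed in a separate transaction.

```java
import java.util.ArrayList;
import java.util.List;

public class NoUndoSketch
{
   static class Dmo
   {
      final boolean noUndo;
      boolean stale = false;
      String data;

      Dmo(boolean noUndo, String data)
      {
         this.noUndo = noUndo;
         this.data = data;
      }
   }

   /**
    * Full-transaction rollback: UNDO records become STALE (refresh lazily);
    * NO-UNDO records keep their in-memory state, which is re-persisted in a
    * separate transaction. Returns the SQL that would be replayed.
    */
   static List<String> fullRollback(List<Dmo> modified)
   {
      List<String> redoSql = new ArrayList<>();
      for (Dmo d : modified)
      {
         if (d.noUndo)
         {
            // the in-memory data is what we want in the database
            redoSql.add("UPDATE tt SET data = '" + d.data + "'");
         }
         else
         {
            d.stale = true;   // memory no longer trusted; reload on next access
         }
      }
      return redoSql;
   }

   public static void main(String[] args)
   {
      List<Dmo> mods = new ArrayList<>();
      mods.add(new Dmo(false, "undoable"));
      mods.add(new Dmo(true, "kept"));
      System.out.println(fullRollback(mods));
   }
}
```

Note this only works if modified NO-UNDO DMOs cannot be expired from the cache, as stated above; an expired DMO would have nothing to re-persist.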
An optimization for NO-UNDO tables: if all temp-tables of the respective connection are NO-UNDO, we could simply declare the connection auto-commit and skip savepoint management entirely. However, this might not be possible in practice, as the set of tables is not known beforehand.
Yes, that would be ideal if we could know this beforehand, but I don't know how that would be possible.
#119 Updated by Eric Faulhaber about 4 years ago
Another aspect of the NO-UNDO support is that we will have to handle REDO of creates and deletes which are rolled back with the database transaction.
- a deleted record will have had its in-memory DMO deleted as well;
  - it will need an SQL DELETE statement to be executed after rollback;
  - we don't necessarily need to hold the original DMO instance in the cache for this, since the deleted data is not relevant;
- a created record will still have a corresponding, in-memory DMO representation after database rollback;
  - it will need an SQL INSERT statement to be executed after rollback, as opposed to an UPDATE statement;
  - I assume we can hold this DMO in the session cache, as in the original concept described previously.
These operations likely will need additional data structures and/or DMO state(s) to implement.
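One possible data structure for this bookkeeping (purely illustrative; the class, table names, and SQL are assumptions): record each NO-UNDO create/delete by record id during the transaction, cancel out a create-then-delete pair, and emit the INSERT/DELETE statements to replay after the database rollback.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class NoUndoRedoLog
{
   enum Op { INSERT, DELETE }

   private final Map<Long, Op> ops = new LinkedHashMap<>();

   void recordCreate(long id)
   {
      ops.put(id, Op.INSERT);
   }

   void recordDelete(long id)
   {
      if (ops.remove(id) == Op.INSERT)
      {
         return;   // created and deleted in the same tx: nothing to replay
      }
      ops.put(id, Op.DELETE);
   }

   /** SQL statements to execute after the database transaction is rolled back. */
   List<String> redoSql(String table)
   {
      List<String> sql = new ArrayList<>();
      for (Map.Entry<Long, Op> e : ops.entrySet())
      {
         sql.add(e.getValue() == Op.INSERT
                 ? "INSERT INTO " + table + " /* from in-memory DMO " + e.getKey() + " */"
                 : "DELETE FROM " + table + " WHERE id = " + e.getKey());
      }
      return sql;
   }

   public static void main(String[] args)
   {
      NoUndoRedoLog log = new NoUndoRedoLog();
      log.recordCreate(1L);
      log.recordDelete(2L);
      System.out.println(log.redoSql("tt1"));
   }
}
```

The INSERT case assumes the created DMO is still in the session cache, as described above, since its data is the source for the replayed row.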
Note that there are no implications for NO-UNDO tables with the UniqueTracker, since temp-tables are not tracked by this mechanism.
#120 Updated by Eric Faulhaber about 4 years ago
FYI (especially Constantin and Greg): the implementation of lightweight unique constraint tracking across uncommitted transactions uses ReentrantLock
to synchronize changes. If the context ends abnormally, we need to release all the locks held by the current context. However, the same thread which acquired a ReentrantLock
must be the one to release it. I have a feeling #4071 will cause a problem for us in this regard.
#121 Updated by Constantin Asofiei about 4 years ago
Eric Faulhaber wrote:
FYI (especially Constantin and Greg): the implementation of lightweight unique constraint tracking across uncommitted transactions uses ReentrantLock to synchronize changes. If the context ends abnormally, we need to release all the locks held by the current context. However, the same thread which acquired a ReentrantLock must be the one to release it. I have a feeling #4071 will cause a problem for us in this regard.
From #4071, tx rollback must always be done from the Conversation thread. CriticalSectionManager
is (currently) used in TM.processRollback
to protect against thread interruption (via CTRL-C or other means) - so that if rollback has started, it will finish properly.
I don't see the change from #4071-14 in trunk. But regardless, I think we need to ensure any cleanup is done on the Conversation
thread, not just rollback processing.
Also, you probably are already considering this, but keep in mind the locks must be released even if there is an abend during the rollback processing.
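The two constraints discussed above can be demonstrated with plain java.util.concurrent: ReentrantLock.unlock throws IllegalMonitorStateException from a non-owning thread (so cleanup must run on the acquiring thread, e.g. the Conversation thread), and release must sit in a finally block so it happens even when the protected work abends. This is a minimal illustration, not the FWD cleanup code.

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockOwnership
{
   static final ReentrantLock LOCK = new ReentrantLock();

   /** Shows that a thread which does not hold the lock cannot release it. */
   static boolean foreignUnlockFails() throws InterruptedException
   {
      LOCK.lock();
      final boolean[] failed = { false };
      Thread other = new Thread(() -> {
         try
         {
            LOCK.unlock();   // wrong thread: must throw
         }
         catch (IllegalMonitorStateException e)
         {
            failed[0] = true;
         }
      });
      other.start();
      other.join();
      LOCK.unlock();          // owning thread releases successfully
      return failed[0];
   }

   /** Release in finally, so the lock is freed even if the work abends. */
   static void guardedWork(Runnable work)
   {
      LOCK.lock();
      try
      {
         work.run();
      }
      finally
      {
         LOCK.unlock();
      }
   }

   public static void main(String[] args) throws InterruptedException
   {
      System.out.println(foreignUnlockFails());
      try
      {
         guardedWork(() -> { throw new RuntimeException("abend"); });
      }
      catch (RuntimeException ignored) { }
      System.out.println(LOCK.isLocked());
   }
}
```

If #4071 moves rollback/cleanup to a different thread than the one that acquired the locks, this ownership rule is exactly what breaks.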
#122 Updated by Eric Faulhaber about 4 years ago
I committed rev 11391, which has changes related to DMO validation and flushing, especially w.r.t. unique constraint validation. I'm working on getting this basic test case working:
create book. create book. find first book. message rowid(book) book.isbn.
It should fail with a unique constraint validation error at the second create book
, since there are some unique indices, and the default values cannot be used twice for those. Once I get through that, I will flesh out this test case a bit more to avoid that error and to test the FIND with a WHERE clause. Once those fundamentals are working again, I want to do some heavier creates and reads of records in this table, to stress the new unique index validation logic.
But first, I've been working through various infrastructure errors to get to that point. Currently, we are trying to load the old DefaultDirtyShareManager
and we are hitting an error there. This is not too surprising, since this code is not properly re-integrated yet. However, I thought we had done enough to bypass the use of that code with the introduction of the dirty-read
hint. There is no such hint in the schema for the book
table; nevertheless, it is still trying to use the DefaultDirtyShareContext
in this case, instead of the NopDirtyShareContext
.
I had to comment out the use of ReversibleCreate
in RB.create
to avoid getting into a problem with the deprecated UNDO implementation. Ovidiu, please update with your latest thoughts on the replacement UNDO architecture, particularly on the additional NO-UNDO ideas I posted earlier. I'd like to get your opinion on those.
#123 Updated by Eric Faulhaber about 4 years ago
Ovidiu, is Validation.checkMaxIndex
fully implemented? Can I mark this item as complete?
#124 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
Ovidiu, is
Validation.checkMaxIndex
fully implemented? Can I mark this item as complete?
Yes, the method should be ready, but I have not tested it thoroughly. I based the new implementation on the old code, so not all data types are handled well. For example, DecimalType.getFieldSizeInIndex() will return an average figure of 8 for non-null values. This is on par with the old code; I could not figure out how these fields are laid out in the binary representation of the index.
#125 Updated by Eric Faulhaber about 4 years ago
Ovidiu, I just committed rev 11392 with some fixes.
Please review my change to FqlToSqlConverter for correctness. I was getting an NPE for a FIND FIRST with no WHERE clause.
Note that I temporarily disabled dirty share management altogether in RB, as we were still getting the default processing for a non-dirty-read table.
I am trying to get this simple test case to work:
create book. find first book. message rowid(book) book.isbn.
There should be no validation error (unless there is already a conflicting record in the database). It blows up in the FIND processing. Maybe this is a side effect of my FQL converter fix? Could you have a look, please?
The error:
[02/19/2020 15:04:12 EST] (com.goldencode.p2j.persist.orm.FqlToSqlConverter:INFO) FQL: select book.id from Book__Impl__ as book order by book.bookId asc, LIMIT: 1, START AT: 0
[02/19/2020 15:04:12 EST] (com.goldencode.p2j.persist.orm.FqlToSqlConverter:INFO) SQL: select book__impl0_.id as col_0_0_ from book book__impl0_ order by book__impl0_.book_id asc limit ?
[02/19/2020 15:04:12 EST] (com.goldencode.p2j.persist.orm.SQLQuery:INFO) FWD ORM: com.mchange.v2.c3p0.impl.NewProxyPreparedStatement@1eb3eecb [wrapping: select book__impl0_.id as col_0_0_ from book book__impl0_ order by book__impl0_.book_id asc limit 1]
[02/19/2020 15:04:12 EST] (com.goldencode.p2j.persist.Persistence:WARNING) [00000001:00000009:bogus-->local/p2j_test/primary] error loading record select book.id from Book__Impl__ as book order by book.bookId asc
[02/19/2020 15:04:12 EST] (com.goldencode.p2j.persist.Persistence:SEVERE) [00000001:00000009:bogus-->local/p2j_test/primary] error loading record
com.goldencode.p2j.persist.PersistenceException: org.postgresql.util.PSQLException: The column index is out of range: 0, number of columns: 10.
at com.goldencode.p2j.persist.orm.BaseRecord.readProperty(BaseRecord.java:457) at com.goldencode.p2j.persist.orm.Loader.readScalarData(Loader.java:427) at com.goldencode.p2j.persist.orm.Loader.load(Loader.java:357) at com.goldencode.p2j.persist.orm.Session.get(Session.java:255) at com.goldencode.p2j.persist.Persistence.load(Persistence.java:2238) at com.goldencode.p2j.persist.RandomAccessQuery.executeImpl(RandomAccessQuery.java:4196) at com.goldencode.p2j.persist.RandomAccessQuery.execute(RandomAccessQuery.java:3414) at com.goldencode.p2j.persist.RandomAccessQuery.first(RandomAccessQuery.java:1437) at com.goldencode.p2j.persist.RandomAccessQuery.first(RandomAccessQuery.java:1329) at com.goldencode.testcases.SimpleDb.lambda$execute$0(SimpleDb.java:32) at com.goldencode.p2j.util.Block.body(Block.java:604) at com.goldencode.p2j.util.BlockManager.processBody(BlockManager.java:8087) at com.goldencode.p2j.util.BlockManager.topLevelBlock(BlockManager.java:7808) at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:467) at com.goldencode.testcases.SimpleDb.execute(SimpleDb.java:28) at com.goldencode.testcases.SimpleDbMethodAccess.invoke(Unknown Source) at com.goldencode.p2j.util.ControlFlowOps$InternalEntryCaller.invokeImpl(ControlFlowOps.java:7518) at com.goldencode.p2j.util.ControlFlowOps$InternalEntryCaller.invoke(ControlFlowOps.java:7489) at com.goldencode.p2j.util.ControlFlowOps.invokeImpl(ControlFlowOps.java:6198) at com.goldencode.p2j.util.ControlFlowOps.invoke(ControlFlowOps.java:3888) at com.goldencode.p2j.util.ControlFlowOps.invokeImpl(ControlFlowOps.java:5828) at com.goldencode.p2j.util.ControlFlowOps.invokeImpl(ControlFlowOps.java:5727) at com.goldencode.p2j.util.ControlFlowOps.invoke(ControlFlowOps.java:904) at com.goldencode.p2j.util.ControlFlowOps.invoke(ControlFlowOps.java:890) at com.goldencode.testcases.Ask.lambda$null$1(Ask.java:52) at com.goldencode.p2j.util.Block.body(Block.java:604) at 
com.goldencode.p2j.util.BlockManager.processBody(BlockManager.java:8087) at com.goldencode.p2j.util.BlockManager.coreLoop(BlockManager.java:9439) at com.goldencode.p2j.util.BlockManager.doLoopWorker(BlockManager.java:9260) at com.goldencode.p2j.util.BlockManager.doWhile(BlockManager.java:1178) at com.goldencode.testcases.Ask.lambda$execute$2(Ask.java:35) at com.goldencode.p2j.util.Block.body(Block.java:604) at com.goldencode.p2j.util.BlockManager.processBody(BlockManager.java:8087) at com.goldencode.p2j.util.BlockManager.topLevelBlock(BlockManager.java:7808) at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:467) at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:438) at com.goldencode.testcases.Ask.execute(Ask.java:31) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.goldencode.p2j.util.Utils.invoke(Utils.java:1380) at com.goldencode.p2j.main.StandardServer$MainInvoker.execute(StandardServer.java:2125) at com.goldencode.p2j.main.StandardServer.invoke(StandardServer.java:1561) at com.goldencode.p2j.main.StandardServer.standardEntry(StandardServer.java:544) at com.goldencode.p2j.main.StandardServerMethodAccess.invoke(Unknown Source) at com.goldencode.p2j.util.MethodInvoker.invoke(MethodInvoker.java:156) at com.goldencode.p2j.net.Dispatcher.processInbound(Dispatcher.java:757) at com.goldencode.p2j.net.Conversation.block(Conversation.java:412) at com.goldencode.p2j.net.Conversation.run(Conversation.java:232) at java.lang.Thread.run(Thread.java:748) Caused by: org.postgresql.util.PSQLException: The column index is out of range: 0, number of columns: 10. 
at org.postgresql.jdbc.PgResultSet.checkColumnIndex(PgResultSet.java:2755) at org.postgresql.jdbc.PgResultSet.checkResultSet(PgResultSet.java:2775) at org.postgresql.jdbc.PgResultSet.getString(PgResultSet.java:1894) at com.mchange.v2.c3p0.impl.NewProxyResultSet.getString(NewProxyResultSet.java:4865) at com.goldencode.p2j.persist.orm.types.CharacterType.readProperty(CharacterType.java:134) at com.goldencode.p2j.persist.orm.BaseRecord.readProperty(BaseRecord.java:453) at com.goldencode.p2j.persist.orm.Loader.readScalarData(Loader.java:427) at com.goldencode.p2j.persist.orm.Loader.load(Loader.java:357) at com.goldencode.p2j.persist.orm.Session.get(Session.java:255) at com.goldencode.p2j.persist.Persistence.load(Persistence.java:2238) at com.goldencode.p2j.persist.RandomAccessQuery.executeImpl(RandomAccessQuery.java:4196) at com.goldencode.p2j.persist.RandomAccessQuery.execute(RandomAccessQuery.java:3414) at com.goldencode.p2j.persist.RandomAccessQuery.first(RandomAccessQuery.java:1437) at com.goldencode.p2j.persist.RandomAccessQuery.first(RandomAccessQuery.java:1329) at com.goldencode.testcases.SimpleDb.lambda$execute$0(SimpleDb.java:32) at com.goldencode.p2j.util.Block.body(Block.java:604) at com.goldencode.p2j.util.BlockManager.processBody(BlockManager.java:8087) at com.goldencode.p2j.util.BlockManager.topLevelBlock(BlockManager.java:7808) at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:467) at com.goldencode.testcases.SimpleDb.execute(SimpleDb.java:28) at com.goldencode.testcases.SimpleDbMethodAccess.invoke(Unknown Source) at com.goldencode.p2j.util.ControlFlowOps$InternalEntryCaller.invokeImpl(ControlFlowOps.java:7518) at com.goldencode.p2j.util.ControlFlowOps$InternalEntryCaller.invoke(ControlFlowOps.java:7489) at com.goldencode.p2j.util.ControlFlowOps.invokeImpl(ControlFlowOps.java:6198) at com.goldencode.p2j.util.ControlFlowOps.invoke(ControlFlowOps.java:3888) at 
com.goldencode.p2j.util.ControlFlowOps.invokeImpl(ControlFlowOps.java:5828) at com.goldencode.p2j.util.ControlFlowOps.invokeImpl(ControlFlowOps.java:5727) at com.goldencode.p2j.util.ControlFlowOps.invoke(ControlFlowOps.java:904) at com.goldencode.p2j.util.ControlFlowOps.invoke(ControlFlowOps.java:890) at com.goldencode.testcases.Ask.lambda$null$1(Ask.java:52) at com.goldencode.p2j.util.Block.body(Block.java:604) at com.goldencode.p2j.util.BlockManager.processBody(BlockManager.java:8087) at com.goldencode.p2j.util.BlockManager.coreLoop(BlockManager.java:9439) at com.goldencode.p2j.util.BlockManager.doLoopWorker(BlockManager.java:9260) at com.goldencode.p2j.util.BlockManager.doWhile(BlockManager.java:1178) at com.goldencode.testcases.Ask.lambda$execute$2(Ask.java:35) at com.goldencode.p2j.util.Block.body(Block.java:604) at com.goldencode.p2j.util.BlockManager.processBody(BlockManager.java:8087) at com.goldencode.p2j.util.BlockManager.topLevelBlock(BlockManager.java:7808) at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:467) at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:438) at com.goldencode.testcases.Ask.execute(Ask.java:31) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.goldencode.p2j.util.Utils.invoke(Utils.java:1380) at com.goldencode.p2j.main.StandardServer$MainInvoker.execute(StandardServer.java:2125) at com.goldencode.p2j.main.StandardServer.invoke(StandardServer.java:1561) at com.goldencode.p2j.main.StandardServer.standardEntry(StandardServer.java:544) at com.goldencode.p2j.main.StandardServerMethodAccess.invoke(Unknown Source) at com.goldencode.p2j.util.MethodInvoker.invoke(MethodInvoker.java:156) at 
com.goldencode.p2j.net.Dispatcher.processInbound(Dispatcher.java:757) at com.goldencode.p2j.net.Conversation.block(Conversation.java:412) at com.goldencode.p2j.net.Conversation.run(Conversation.java:232) at java.lang.Thread.run(Thread.java:748)
#126 Updated by Ovidiu Maxiniuc about 4 years ago
Eric,
It seems that you have the cache disabled (Session.get(Session.java:255))? I disabled it, too, and I was able to duplicate your issue. Luckily, it was a simple fix. Please update to r11393.
#127 Updated by Eric Faulhaber about 4 years ago
Eric Faulhaber wrote:
- once we've determined it is not a fatal database error and is in fact a unique constraint violation causing a session.save failure, how do we compose a legacy error message?
  - option 1: based on the old implementation, issue queries testing every unique index until the failing one is found
    - PROS:
      - dialect neutral
      - already have the code to do this in some form, though it will need heavy refactoring
    - CONS:
      - slow, likely to cause contention because it greatly expands time spent with UniqueTracker resource(s) locked
      - adds a lot of cumbersome, awkward code just to handle an error case
  - option 2: dialect-specific error analysis
    - PROS:
      - don't need to clutter code with all the query setup; analysis code resides in dialects
      - don't make potentially multiple additional round trips to the database
      - analysis can be done outside of UniqueTracker resource locks, so less likely to cause contention
    - CONS:
      - different code for each dialect
      - fragile if database error handling changes (though not likely, as many applications will have been built around these architectures)
      - probably requires string parsing; messy
      - adds a lot of cumbersome, awkward code just to handle an error case
- given the pros and cons, especially the prospect of increasing contention under option 1, I prefer option 2
I'm reconsidering this. In the event there is more than one unique constraint violated, I've found that you cannot rely on the database to report the violation for the same index as the legacy environment would. For instance, using the p2j_test.book
table, I created a record which violates the unique constraint on both book-id
and isbn
. The legacy environment reports the book-id
violation, while PostgreSQL reports the isbn
violation.
I think the only way we can maintain true legacy compatibility is to implement option 1. However, it makes sense to do the implementation at a lower level (using SQL in the persist.orm
package) than we are today (using HQL in the persist
package).
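The core of option 1 can be sketched independently of the SQL layer (the index/record types and the existence predicate below are stand-ins, not the persist.orm API): probe the unique indexes in legacy declaration order and report the first one for which a conflicting row exists, so the reported index matches the legacy environment regardless of which constraint the database happened to trip first.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

public class UniqueProbe
{
   // a unique index, identified by its legacy name and component fields
   record UniqueIndex(String name, List<String> fields) { }

   /**
    * @param indexes  unique indexes in legacy declaration order
    * @param exists   runs a "does a conflicting row exist for this index?" query
    * @return the first violated index, mimicking legacy error ordering, or null
    */
   static UniqueIndex findViolated(List<UniqueIndex> indexes,
                                   Predicate<UniqueIndex> exists)
   {
      for (UniqueIndex idx : indexes)
      {
         if (exists.test(idx))
         {
            return idx;   // legacy reports the first violated index, not the DB's pick
         }
      }
      return null;
   }

   public static void main(String[] args)
   {
      List<UniqueIndex> idxs = Arrays.asList(
         new UniqueIndex("book-id", List.of("book_id")),
         new UniqueIndex("isbn", List.of("isbn")));

      // pretend both constraints are violated; legacy ordering picks book-id
      UniqueIndex hit = findViolated(idxs, idx -> true);
      System.out.println(hit.name());
   }
}
```

The cost concern from the earlier analysis still applies: each probe is a round trip, so keeping these queries outside any UniqueTracker lock (or only running them on the error path) matters.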
#128 Updated by Eric Faulhaber about 4 years ago
I've committed rev 11394, which fixes a few more issues with the validation/flushing. I managed to run an early/rough performance comparison test case between this revision of 4011a and trunk 11339. Here is the test:
def var i as int no-undo.
def var count as int no-undo init 10000.
def var start as int64 no-undo.
start = etime.
do i = 1 to count:
   create book.
   assign book.book-id = i + count
          book.isbn    = string(i + count).
end.
message "elapsed:" string(etime - start).
find first book.
message rowid(book) book.isbn.
The test creates 10,000 book
records, assigning unique values only to the two fields which appear in the book
table's two unique indexes, and allowing all other fields to remain at their default values. The idea is to stress the validation of the records, especially the unique constraints.
I converted and ran in both environments, 4 times each. The same database and table were used as the back end. It was running on an SSD, so this test emphasizes work done in the FWD server.
The timing results in milliseconds are as follows:
| trunk/11339 | 4011a/11394 |
|---|---|
| 65,499 | 2,085 |
| 56,908 | 2,100 |
| 67,493 | 2,998 |
| 66,209 | 2,136 |
I stopped and restarted the server between each run, because we currently have a JDBC connection or connection pool bug in 4011a which prevents us from running the program twice. Also, I manually deleted the created records after each run.
I initially tried 100,000 iterations, which 4011a completed in times ranging from 28-52 seconds. However, trunk took so long (many minutes) for that version of the test that I eventually decided something might be wrong and I killed it. As the number of iterations is decreased, the times draw closer between the two versions.
Note that this is one, early data point, using a test case which only stresses record creation and validation. There are still bugs to be discovered and fixed, as well as features to be reworked and completed (see #4011-105), so I expect we will give some of this gain back as we finish that work. Nevertheless, I am very encouraged by this result that we are on the right track.
#129 Updated by Ovidiu Maxiniuc about 4 years ago
Notes/status on UNDO part (related to notes 118/119):
It is a bit difficult for me to discuss this in the abstract. My experience has taught me that, even with in-depth planning, starting the actual implementation will reveal unforeseen issues. So, I started a new manager class that should drive the undo of the database based on the events that need to be wired in from business logic. Important events are:
- when a possible undo block starts. A block (ex: external procedure) does not know whether it will be undone or not. At this moment the savepoint is created.
- which buffers and records (by .id) are affected. Besides restoring the record buffers, the altered records also need to be stored so that, on an UNDO event, they can be re-fetched into the cache, or simply invalidated so they can be fetched later, when needed. This event is created when the records are updated. These will be stored in the above scope-related containers, grouped by DMO class. The buffers will be stored with their record .id at the moment the block is entered; because they can load other records, we need this info to restore their content. Although this prepare step has some similarities with the old implementation, in this case we just add the .id to a set; no other Reversible objects are created.
- when the UNDO executes:
  - the savepoint for the block is rolled back;
  - then the stored records are invalidated (i.e., marked STALE) in the session cache;
  - we use the SQLQuery/Loader to execute the dedicated SQL query so it can be cached and the result properly hydrated;
  - finally, the record buffers are updated.
The difficult part seems to be the NO-UNDO temp-tables. I agree with the notes from 119. Alternatively, we can think of a structure similar to the BEFORE-TABLES and their MERGE operation. This is because, even if we have the changes in the buffers, some altered records might not be available in memory without additional coding. And again, these structures must be maintained at the undo-block level.
- when the possible undo block ends normally. If it is a (full-) transaction, it will be committed as usual, but otherwise, the affected record buffers and records need to be "moved" to upper undoable block. The savepoint is eventually released. The NO-UNDO temp-tables are not affected.
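The block-event wiring above could look like the skeleton below. The JDBC calls are replaced with a simple stack so the sketch is self-contained; in the real manager, enter/commit/undo would call Connection.setSavepoint(), Connection.releaseSavepoint(Savepoint), and Connection.rollback(Savepoint) respectively. All names are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class BlockSavepoints
{
   private final Deque<String> savepoints = new ArrayDeque<>();
   private int counter = 0;

   /** Block entry: create a savepoint (Connection.setSavepoint in real code). */
   String enterBlock()
   {
      String sp = "sp" + (++counter);
      savepoints.push(sp);
      return sp;
   }

   /** Normal block exit: release the savepoint; changes roll up to the parent. */
   void commitBlock()
   {
      savepoints.pop();   // Connection.releaseSavepoint(sp)
   }

   /** UNDO: roll back to the block's savepoint, then discard it. */
   String undoBlock()
   {
      return savepoints.pop();   // Connection.rollback(sp)
   }

   int depth()
   {
      return savepoints.size();
   }
}
```

The stack shape matches the nesting of undoable blocks: a full transaction commit happens only when the outermost block exits, and NO-UNDO temp-tables are simply not represented here.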
#130 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
Notes/status on UNDO part (related to notes 118/119):
It is a bit difficult for me to discuss this in the abstract. My experience has taught me that, even with in-depth planning, starting the actual implementation will reveal unforeseen issues.
Yes, I understand. I will be the first to admit that I have not been the best at keeping you informed about where some of my implementations are going, for this same reason. But since our work is so interdependent and we now at least have a functional framework, I'd like to have a little more communication, so we can bounce ideas back and forth, and make sure we are on the same page.
So, I started a new manager class that should drive the undo database based on the events that need to be wired from business logic. Important events are:
when a possible undo block starts. A block (ex: external procedure) does not know whether it will be undone or not. At this moment the savepoint is created.
Are you using something existing in TransactionManager
or BlockManager
as a hook for this?
what are the buffers and the records affected (the .id). Beside restoring the record buffers, the altered records also need to be stored so in a UNDO event, they can be re-fetched to cache or simply invalidated so they can be fetched later, when needed.
I intentionally made RecordState
a distinct class, so we could store an instance of this class separately from the record object (and its data), if needed. I was thinking of UNDO at the time, though I was not sure exactly whether or how that separation could be used.
This event is created when the records are updated. These will be stored in the above scope-related containers, grouped by DMO class. The buffers will be stored with their record .id at the moment the block is entered. Because they can load other records, we need this info to restore their content. Although this prepare step has some similarities with old implementation, in this case we just add the .id to a set, no other Reversible objects are created.
Great! As previously noted, I'd like to get rid of the Reversible
hierarchy altogether and replace it with a simpler model.
- when the UNDO executes.
- the savepoint for the block is rolled-back;
- then the stored records are invalidated (ie. mark them as STALE) in session cache;
- we use the
SQLQuery
/Loader
to execute the dedicated SQL query so it can be cached and result properly hydrated;
For the hydration, I guess you mean later, only if/when we determine a particular record is needed, correct?
- finally, the record buffers are updated.
Which record buffers? What state is updated?
The difficult part seems to be the NO-UNDO temp-tables. I agree with the notes from 119. Alternately, we can think of a structure similar to the BEFORE-TABLES and their MERGE operation. This is because even if we have the changes in the buffers, some altered records might not be available in memory without additional coding. And again, these structures must be maintained at undo-block level.
I am not as familiar with the BEFORE-TABLES and MERGE implementation. What does this entail?
- when the possible undo block ends normally. If it is a (full-) transaction, it will be committed as usual, but otherwise, the affected record buffers and records need to be "moved" to upper undoable block. The savepoint is eventually released. The NO-UNDO temp-tables are not affected.
Here, I'm not sure what you mean. In particular, I'm not sure what rolling up a record buffer to its upper undoable block means. The buffer is hard coded to a particular scope in business logic. Which state needs to be rolled up? The complex roll-up of Reversible
objects is the performance bottleneck I was hoping to avoid or at least simplify with this new approach. What information, besides "touched" record IDs, needs to be tracked and rolled up on subtransaction commit?
#131 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
Ovidiu Maxiniuc wrote:
So, I started a new manager class that should drive the undo database based on the events that need to be wired from business logic. Important events are:
when a possible undo block starts. A block (ex: external procedure) does not know whether it will be undone or not. At this moment the savepoint is created.

Are you using something existing in TransactionManager or BlockManager as a hook for this?
I have not decided yet. The TransactionManager
seems like the first candidate (rollbackWorker()
method).
what are the buffers and the records affected (the .id). Beside restoring the record buffers, the altered records also need to be stored so in a UNDO event, they can be re-fetched to cache or simply invalidated so they can be fetched later, when needed.
I intentionally made
RecordState
a distinct class, so we could store an instance of this class separately from the record object (and its data), if needed. I was thinking of UNDO at the time, though I was not sure exactly whether or how that separation could be used.
I will keep this in mind. In this case, for each buffer we need to keep a different RecordState for each block: an inner block might not need to change a record on an UNDO event, while the outer might, if the record was only changed in the larger scope.
This event is created when the records are updated. These will be stored in the above scope-related containers, grouped by DMO class. The buffers will be stored with their record .id at the moment the block is entered. Because they can load other records, we need this info to restore their content. Although this prepare step has some similarities with old implementation, in this case we just add the .id to a set, no other Reversible objects are created.
Great! As previously noted, I'd like to get rid of the
Reversible
hierarchy altogether and replace it with a simpler model.
At first I expected we could get away without anything like this (with only some additional support for NO-UNDO tables), but we still need to know which items are to be reverted from the database.
- when the UNDO executes:
  - the savepoint for the block is rolled back;
  - then the stored records are invalidated (i.e., marked STALE) in the session cache;
  - we use the SQLQuery/Loader to execute the dedicated SQL query so it can be cached and the result properly hydrated;

For the hydration, I guess you mean later, only if/when we determine a particular record is needed, correct?
Yes, only the records in the affected buffers. There is no need to fetch the other affected records at all at this moment. Since they are marked as STALE (or removed from cache) they will be fetched at request (with the proper value from database).
- finally, the record buffers are updated.
Which record buffers? What state is updated?
The ones that were altered (CREATED/DELETED/LOADED or UPDATED) within the block being rolled back.
The difficult part seems to be the NO-UNDO temp-tables. I agree with the notes from 119. Alternately, we can think of a structure similar to the BEFORE-TABLES and their MERGE operation. This is because even if we have the changes in the buffers, some altered records might not be available in memory without additional coding. And again, these structures must be maintained at undo-block level.
I am not as familiar with the BEFORE-TABLES and MERGE implementation. What does this entail?
The BEFORE tables store only the changes from the main (AFTER) tables. Each event will create a new record in the BEFORE table if one for the respective record id does not already exist. However, we cannot use them as-is: they are also normal tables, so their content will be affected by the rollback operation.
- when the possible undo block ends normally. If it is a (full-) transaction, it will be committed as usual, but otherwise, the affected record buffers and records need to be "moved" to upper undoable block. The savepoint is eventually released. The NO-UNDO temp-tables are not affected.
Here, I'm not sure what you mean. In particular, I'm not sure what rolling up a record buffer to its upper undoable block means. The buffer is hard coded to a particular scope in business logic. Which state needs to be rolled up? The complex roll-up of
Reversible
objects is the performance bottleneck I was hoping to avoid or at least simplify with this new approach. What information, besides "touched" record IDs, needs to be tracked and rolled up on subtransaction commit?
I think we will need to keep the roll-up process, but only as references to altered objects, i.e. the ID set of touched records. Still, I expect this to be fast, since it is a simple set union.
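The lightweight roll-up could be as simple as the following sketch (names are illustrative): each undoable block keeps only the set of touched record IDs; a subtransaction commit merges that set into the enclosing block with a plain set union, and an UNDO pops the set so those records can be invalidated in the session cache.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

public class TouchedIds
{
   // one ID set per live undoable block, innermost on top
   private final Deque<Set<Long>> scopes = new ArrayDeque<>();

   void enterBlock()
   {
      scopes.push(new HashSet<>());
   }

   void touch(long id)
   {
      scopes.peek().add(id);
   }

   /** Normal exit: union the block's IDs into the enclosing block. */
   void commitBlock()
   {
      Set<Long> inner = scopes.pop();
      if (!scopes.isEmpty())
      {
         scopes.peek().addAll(inner);   // the cheap set union described above
      }
   }

   /** UNDO: these IDs must be invalidated (marked STALE) in the session cache. */
   Set<Long> rollbackBlock()
   {
      return scopes.pop();
   }
}
```

Compared to rolling up Reversible objects, the per-commit cost here is proportional only to the number of distinct touched IDs in the inner block.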
#132 Updated by Eric Faulhaber about 4 years ago
Ovidiu, I have this stack trace:
Caused by: java.lang.NumberFormatException: For input string: "NULL"
	at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
	at java.lang.Integer.parseInt(Integer.java:580)
	at java.lang.Integer.parseInt(Integer.java:615)
	at org.h2.value.Value.convertTo(Value.java:1061)
	at org.h2.value.Value.convertTo(Value.java:617)
	at org.h2.value.Value.convertTo(Value.java:592)
	at org.h2.table.Table.compareTypeSafe(Table.java:1187)
	at org.h2.index.BaseIndex.compareValues(BaseIndex.java:364)
	at org.h2.index.BaseIndex.compareRows(BaseIndex.java:299)
	at org.h2.index.TreeIndex.findFirstNode(TreeIndex.java:278)
	at org.h2.index.TreeIndex.find(TreeIndex.java:318)
	at org.h2.index.TreeIndex.find(TreeIndex.java:298)
	at org.h2.index.IndexCursor.find(IndexCursor.java:176)
	at org.h2.table.TableFilter.next(TableFilter.java:471)
	at org.h2.command.dml.Select$LazyResultQueryFlat.fetchNextRow(Select.java:1453)
	at org.h2.result.LazyResult.hasNext(LazyResult.java:79)
	at org.h2.result.LazyResult.next(LazyResult.java:59)
	at org.h2.command.dml.Select.queryFlat(Select.java:527)
	at org.h2.command.dml.Select.queryWithoutCache(Select.java:633)
	at org.h2.command.dml.Query.queryWithoutCacheLazyCheck(Query.java:114)
	at org.h2.command.dml.Query.query(Query.java:371)
	at org.h2.command.dml.Query.query(Query.java:333)
	at org.h2.command.CommandContainer.query(CommandContainer.java:114)
	at org.h2.command.Command.executeQuery(Command.java:202)
	at org.h2.jdbc.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.java:114)
	at com.goldencode.p2j.persist.orm.SQLQuery.uniqueResult(SQLQuery.java:225)
	at com.goldencode.p2j.persist.orm.Query.uniqueResult(Query.java:231)
	at com.goldencode.p2j.persist.Persistence.load(Persistence.java:2146)
	at com.goldencode.p2j.persist.RandomAccessQuery.executeImpl(RandomAccessQuery.java:4196)
	at com.goldencode.p2j.persist.RandomAccessQuery.execute(RandomAccessQuery.java:3414)
	at com.goldencode.p2j.persist.RandomAccessQuery.findNext(RandomAccessQuery.java:3802)
	at com.goldencode.p2j.persist.RandomAccessQuery.next(RandomAccessQuery.java:1819)
	at com.goldencode.p2j.persist.RandomAccessQuery.next(RandomAccessQuery.java:1698)
	at com.goldencode.p2j.persist.AdaptiveQuery$DynamicResults.next(AdaptiveQuery.java:4077)
	at com.goldencode.p2j.persist.ResultsAdapter.next(ResultsAdapter.java:159)
	at com.goldencode.p2j.persist.AdaptiveQuery.next(AdaptiveQuery.java:2041)
	at com.goldencode.p2j.persist.AdaptiveFind.next(AdaptiveFind.java:371)
	at com.goldencode.p2j.persist.PreselectQuery.next(PreselectQuery.java:2441)
	...
The root cause is in SQLQuery.setParameters. It appears deliberate, but as a temporary placeholder:

// validate first: no [null] parameters
for (int i = 0; i <= maxParamIndex; i++)
{
   Object p = parameters[i];
   if (p == null || (p instanceof BaseDataType) && ((BaseDataType) p).isUnknown())
   {
      System.err.println("SQLQuery: null/unknown value for " + i + "th parameter.");
      // NOTE: this is not correct, some exception MUST be thrown
      p = "NULL";
   }
   ParameterSetter setter = TypeManager.getTypeHandler(p.getClass());
   if (setter != null)
   {
      setter.setParameter(stmt, i + 1, p); // 1-based indexes of parameters
   }
   else
   {
      System.err.println("SQLQuery: do not know how to handle " + i + "th parameter: " +
                         p.toString() + " of type " + p.getClass());
   }
}
I haven't debugged too deeply yet... shouldn't HQLPreprocessor have taken care of those null parameters by rewriting the FQL to use IS NULL for the null comparisons?
#134 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
I haven't debugged too deeply yet... shouldn't HQLPreprocessor have taken care of those null parameters by rewriting the FQL to use IS NULL for the null comparisons?
Indeed, that's why I added the special code. However, it might be the case that the parameter is used for logic other than comparing fields. Could you send me the testcase?
Meanwhile, I altered that piece of code a bit so that the parameters will be set even if they are null, to allow you to go on. The case is still detected and logged with INFO priority. Please see r11395.
#135 Updated by Eric Faulhaber about 4 years ago
Thanks. See raq_nulls_sort.p
in testcases.
#136 Updated by Ovidiu Maxiniuc about 4 years ago
Eric, please grab the 11396 revision. In my hurry I did not fully test the change and I made it worse.
#137 Updated by Eric Faulhaber about 4 years ago
So, are you saying you also would expect HQLPreprocessor to still handle the null parameters before getting into the SQLQuery code (i.e., you did not in some way disconnect the HQLPreprocessor handling of null parameters)? If so, then it seems we somehow have introduced a regression in that preprocessing code.
#138 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
So, are you saying you also would expect HQLPreprocessor to still handle the null parameters before getting into the SQLQuery code (i.e., you did not in some way disconnect the HQLPreprocessor handling of null parameters)? If so, then it seems we somehow have introduced a regression in that preprocessing code.
HQLPreprocessor behaviour is unaffected. The reason BDT parameters reach the SQL query is the augmentation used in RAQ.executeImpl(). At the moment persistence.load() is invoked, the parms array contains both the original parameters, which were taken into consideration by the preprocessor, and the augmentation parameters, which were obtained using reflection (line 4127). For the moment, ParameterSetters are defined for all BDTs, so the downstream processing will work in r11396. I think we should leave it as it is, since it has proved to be working, unless the augmented part needs to be re-augmented with P4GL semantics for unknown operands in relational operators (I am not aware whether this is necessary, but certainly the final HQL is not augmented for null values in the RAQ-augmented part).
#139 Updated by Ovidiu Maxiniuc about 4 years ago
UNDO update.
The old implementation was using commits and rollbacks a bit aggressively, together with the creation of new Hibernate Sessions. With the current solution, we in fact need a single stable connection to the database (for each user, per database, possibly pooled). The transactions need to be replaced with savepoints, but the problem is keeping the association between the undoable blocks and the proper savepoint. When a block is repeated, the savepoint is recreated at each iteration. At the end of the top-level transaction, a commit will be issued.
At this moment I am working on separating the savepoint locations and their rollback from the big commit before the final session close operation. Of course, the undo statement will revert to the current block's savepoint. The Sessions are too eagerly closed and re-opened and, since they manage the connections, the latter are also too aggressively closed.
The undo seems to work for very simple cases; when a block is iterated, the begin/end go out of balance, so exceptions are thrown.
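The savepoint-per-block association described above can be sketched with a small in-memory simulation (all names here are illustrative, not the actual FWD classes): each undoable block snapshots state on entry, UNDO restores the innermost snapshot, a loop iteration recreates the snapshot, and only the top-level exit commits.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

/**
 * Hypothetical, in-memory simulation of the savepoint-per-block scheme.
 * In the real system the snapshots would be JDBC savepoints on a single
 * long-lived connection (Connection.setSavepoint / rollback / releaseSavepoint).
 */
public class SavepointSim
{
   private final Map<String, String> data = new HashMap<>();        // the "database"
   private final Deque<Map<String, String>> savepoints = new ArrayDeque<>();

   public void put(String k, String v) { data.put(k, v); }
   public String get(String k)         { return data.get(k); }

   /** Block entry: set a savepoint (snapshot the current state). */
   public void enterBlock()            { savepoints.push(new HashMap<>(data)); }

   /** Loop iteration: recreate the savepoint for the next pass. */
   public void iterate()               { savepoints.pop(); enterBlock(); }

   /** UNDO: roll back to the innermost block's savepoint. */
   public void undo()
   {
      data.clear();
      data.putAll(savepoints.peek());
   }

   /** Normal exit: release the savepoint; the commit happens at top level. */
   public void exitBlock()             { savepoints.pop(); }
}
```

The point of the simulation is only the bookkeeping: the stack depth mirrors the nesting of undoable blocks, which is exactly the association that gets out of balance when begin/end calls are mismatched.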
#140 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
UNDO update.
The old implementation was using commits and rollbacks a bit aggressively, together with the creation of new Hibernate Sessions. With the current solution, we in fact need a single stable connection to the database (for each user, per database, possibly pooled). The transactions need to be replaced with savepoints, but the problem is keeping the association between the undoable blocks and the proper savepoint. When a block is repeated, the savepoint is recreated at each iteration. At the end of the top-level transaction, a commit will be issued.
The idea was to use the Commitable.iterate hook to release the savepoint for the last pass of a loop (we would need to add a method to Session for this), and to set a new savepoint for the next pass. Subtransaction rollback would roll the last savepoint back; subtransaction commit would release the last savepoint.
At this moment I am working on separating the savepoint locations and their rollback from the big commit before the final session close operation. Of course, the undo statement will revert to the current block's savepoint. The Sessions are too eagerly closed and re-opened and, since they manage the connections, the latter are also too aggressively closed.
I was hoping this would allow us to simplify the session management code. The need to close the session was driven by the lack of control we had over the connection, which Hibernate managed. Prototype testing showed me that the connection remained stable after a unique constraint or non-null constraint violation, with the use of savepoints. However, I understand there is some work to do to untangle the current implementation.
The undo seems to work for very simple cases; when a block is iterated, the begin/end go out of balance, so exceptions are thrown.
See the Commitable.iterate thought above, though I guess you already are doing something along these lines.
#142 Updated by Eric Faulhaber about 4 years ago
We have had customer requests to rename the id field/column, which is a commonly used field name in applications. Currently, we rename id to identifier by default, but this causes problems if there are any external dependencies on the original name id. Until now, Hibernate has imposed a dependency on the id name, but since we have pulled Hibernate out of the FWD implementation, we can choose a different name, which does not cause problems for our users.
In the new persistence architecture, we no longer emit this field into each DMO implementation class, but in BaseRecord, we do have getId and setId methods, and a private instance variable id. We should change these to Long primaryKey(), void primaryKey(Long), and primaryKey, respectively, to make their purpose clear, and to avoid collision with any generated methods of the same name.
There is already a recid recordID() method in RecordBuffer, which returns the data type recid, so I don't want to add fields and methods using the same name in BaseRecord. The implementation happens to use the same data internally as the recid and the primary key, but I prefer to preserve the logical concept of a surrogate primary key in the ORM layer by using the primaryKey name there.
#143 Updated by Ovidiu Maxiniuc about 4 years ago
There is this constant, defined in DatabaseManager:

/** Reserved name of primary key identifier column/property */
public static final String PRIMARY_KEY = "id";

I have tried to use it lately in the code. It allows some flexibility on the database side as well. I did a quick search in the project and there are still some usages of the hardcoded name as a string, but these can be easily replaced.
However, if we upgrade an old installation, special care is needed when renaming this field.
#144 Updated by Eric Faulhaber about 4 years ago
Rev 11397 changes the BaseRecord methods getId() and setId(Long) to primaryKey() and primaryKey(Long), respectively. I unintentionally included an unfinished update to FqlToSqlConverter in the commit. However, it should not hurt anything, unless used with SQL Server. I'll fix this in a future commit.
#145 Updated by Ovidiu Maxiniuc about 4 years ago
- SavepointReversible - dead code;
- Commitable - I added an iterate method, but it did not help me much. I left it in place just in case;
- I disabled some methods using if (true) return; (like commit of reversibles and session flushes);
- in UniqueTracker I encountered some imbalance between changes and deletes. I worked around it by simply detecting the case and printing a call stack;
- the Session will not be closed in the event of any error. It provides a more intelligent Transaction, but the begin and end have slightly different semantics;
- the Transaction is meant to keep track of the nested undoable blocks in a stack of scopes. Each scope holds the list of buffers that were altered and need to be reverted in the event of an UNDO statement, together with the rowid of the record they were storing at the beginning of the block. Once the execution starts for a block, the Transaction creates a Savepoint and collects modified buffers. The execution of the current block can:
  - end with success, when the savepoint is simply discarded. If we reached the last level, the transaction is finally committed and made inactive;
  - or end with error/undo, so the registered buffers need to be restored. fetchRecordsBack() is called after the savepoint was rolled back and it will create a set of queries and re-populate the affected buffers at this level.
  Please see the javadocs of decrementTxLevel and incrementTxLevel of BufferManager.DBTxWrapper for possibly more information;
- in RecordMeta I noticed that readPropertyMeta is called multiple times. I tried to fix this by caching the result, but I did not find the proper solution, so I left a TODO instead.

Please see revision 11399.
#146 Updated by Eric Faulhaber about 4 years ago
I am checking out 4011a2 now...
Thinking over some of the issues you've been having, I expect the implicit transaction code in Persistence$Context is problematic with this feature set, though you may have taken this into account already.
#147 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
Thinking over some of the issues you've been having, I expect the implicit transaction code in Persistence$Context is problematic with this feature set, though you may have taken this into account already.
Indeed, and the (somewhat new) begin/end transaction bracket is not yet finished. I tried to maintain the old semantics, but in the end I disabled some pieces of code that were not necessary for the smaller testcases I used.
#148 Updated by Eric Faulhaber about 4 years ago
Ovidiu, I need a few more days to work through the UNDO code. Some of the features I expected to see (e.g., the tracking of record state and refresh of stale records from the database) don't seem to be there, unless I've missed them, and I'd like to refactor things. Please don't make any more changes to this feature set.
For now, please:
- move any other changes (some documentation and some fixes) you've put into 4011a2 which are not directly related to the UNDO work, into 4011a;
- move on to any unfinished items in #4011-110;
- if you have any other fixes currently in your working copy of 4011a (such as for the temp-table DMO interface issue you noted in email), please commit those to 4011a as well;
- try to stand up Hotel GUI:
- figure out which revision of Hotel GUI works with the version of trunk on which we based 4011a;
- convert and build that version with 4011a and see how functional (or not) it is;
- you will need to use a database imported with the old FWD code, since import will not work with 4011a yet;
- look into reworking the dynamic temp-table runtime conversion to use the new DMO architecture.
#149 Updated by Eric Faulhaber about 4 years ago
Greg, how hard would it be to add a hook for the savepoint processing which is called at the same point in the cycle as iterate, but only on the initialization of the block, instead of on the second and subsequent iterations? It would have to be called regardless of whether the block is looping or not.
We need a place to set a savepoint as a subtransaction scope begins, so that it can be committed (i.e., released) with Commitable.commit or rolled back with Commitable.rollback.
#150 Updated by Greg Shah about 4 years ago
Eric Faulhaber wrote:
Greg, how hard would it be to add a hook for the savepoint processing which is called at the same point in the cycle as iterate, but only on the initialization of the block, instead of on the second and subsequent iterations? It would have to be called regardless of whether the block is looping or not.
We need a place to set a savepoint as a subtransaction scope begins, so that it can be committed (i.e., released) with Commitable.commit or rolled back with Commitable.rollback.
It should not be hard. I need to analyze the code to make sure I know the right place, but we can do it. I'll put this into 4011a.
#151 Updated by Greg Shah about 4 years ago
In 4011a revision 11402, I have added Finalizable.entry(), which is called during TransactionManager.blockSetup() on the first entry to the block or loop (it is executed regardless of the looping nature). The notification will occur after the pushScope() and after the Block.init() lambda is executed, and before Block.body() executes the first time. This is the equivalent point to when Finalizable.iterate() is called.
The method has a default implementation so it only needs to be overridden wherever needed.
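A savepoint manager could hook this lifecycle roughly as follows. This is a sketch only: the interface shape is simplified from the description above (the real Finalizable in FWD has more methods), and SavepointManager is a hypothetical name.

```java
/**
 * Simplified sketch of the block lifecycle hooks described above.
 * Only the points relevant to savepoint handling are shown.
 */
interface Finalizable
{
   /** Called once, on first entry to the block (looping or not). */
   default void entry() { }

   /** Called before the second and subsequent iterations of a loop. */
   default void iterate() { }

   /** Called when the block finishes. */
   default void finished() { }
}

/** Hypothetical subscriber that maps the hooks to savepoint operations. */
class SavepointManager implements Finalizable
{
   final StringBuilder log = new StringBuilder();

   @Override
   public void entry()    { log.append("set;"); }     // set the block's savepoint
   @Override
   public void iterate()  { log.append("reset;"); }   // release + recreate per pass
   @Override
   public void finished() { log.append("release;"); } // release on normal exit
}
```

Because entry() has a default no-op implementation, only subscribers that care about block initialization (like this one) need to override it.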
#152 Updated by Eric Faulhaber about 4 years ago
Ovidiu, those places we are seeing the NPE on BaseRecord.activeBuffer seem to be old code where we are calling a setter through reflection, but not going through the proxy invocation handler. For instance: reversibles rolling back updates.
I see several issues here:
- Wherever we are using reflection in the persistence layer to invoke getter/setter methods on DMOs, we really should be using direct access to the BaseRecord data instead of the Method and reflection stuff. We may need to work up some convenience methods to make this easier from internal persistence code.
- The old reversibles code shouldn't be invoked anymore. Once the new UNDO implementation is there, we will be rid of this.
- Moving the locking semantics into BaseRecord may not have been the best idea. Perhaps BaseRecord should remain unaware of record locking, and this be handled at a higher layer. The activeBuffer variable is only used for locking. However, activeOffset also is used during validation. Can you think of a cleaner way to handle the requirements for which I added these two variables?
If you are seeing NPEs invoked from code not related to the reversibles architecture, the same thing applies. That is, we shouldn't be using the reflection-based code internally anymore.
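To make the contrast concrete, here is a minimal illustration of the two access styles (all names hypothetical; the real BaseRecord keeps its data in typed internal structures, not a bare Object array):

```java
import java.lang.reflect.Method;

/**
 * Hypothetical record with an array of backing data slots, illustrating
 * setter invocation via reflection versus direct slot access.
 */
class SimpleRecord
{
   private final Object[] data = new Object[4];    // backing slots

   public void setName(Object v) { data[0] = v; }  // "generated" setter

   /** Direct slot access, as proposed for internal persistence code. */
   public void setDatum(int offset, Object v) { data[offset] = v; }
   public Object getDatum(int offset)         { return data[offset]; }

   public static void main(String[] args) throws Exception
   {
      SimpleRecord rec = new SimpleRecord();

      // Reflective route: works, but skips any proxy-side bookkeeping
      // when the call is not dispatched through the invocation handler.
      Method setter = SimpleRecord.class.getMethod("setName", Object.class);
      setter.invoke(rec, "abc");

      // Direct route: no reflection, no proxy involved.
      rec.setDatum(0, "xyz");
   }
}
```

The bug class discussed above arises because the reflective route looks equivalent to the proxied one but silently bypasses whatever state the invocation handler would have maintained.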
#153 Updated by Ovidiu Maxiniuc about 4 years ago
The actual cause of the NPE was the fact that the activeOffset was accessed for the snapshot of the dmo of the RecordBuffer's currentRecord in reportChange() (setActiveBuffer was called on currentRecord, but the activeOffset of its snapshot is accessed and is null).
I have not dug in yet, but I understand that the same issue might happen with activeOffset (trying to access it from the snapshot of a currentRecord)?
#154 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
The actual cause of the NPE was the fact that the activeOffset was accessed for the snapshot of the dmo of the RecordBuffer's currentRecord in reportChange() (setActiveBuffer was called on currentRecord, but the activeOffset of its snapshot is accessed and is null).
activeOffset is a primitive int. Did you mean activeBuffer?
I have not dug in yet, but I understand that the same issue might happen with activeOffset (trying to access it from the snapshot of a currentRecord)?
Yes. Not sure about this particular code path, but generally speaking, any logic which results in a DMO interface's setter method being called outside the RB invocation handler can cause a similar problem. But since this variable can't be null, we would see ArrayIndexOutOfBoundsException instead.
Please consider how we might get rid of the dependencies on activeBuffer and activeOffset. The latter may be harder to get rid of.
#155 Updated by Eric Faulhaber about 4 years ago
Greg, could you please double-check the entry hook logic for this test case?

define temp-table undott
   field f-Char as character init "o"
   field f-Int as integer init 123
   field f-Decimal as decimal init 4.876
   index idx1 as unique f-Int f-Char
   index idx2 f-Decimal.
define temp-table onerectt
   field omefiled as character init "X".

create book.
book.book-title = "Gone with the Wind".
release book.

undo, leave.
We are getting calls to finished and deleted (also rollback, since we register a Commitable for the same block), but not to entry.
#156 Updated by Eric Faulhaber about 4 years ago
Greg, never mind.
This seems to have something to do with my registration mechanism. It works with other test cases I've tried, but it must be registering the Finalizable too late in this case. I hit a breakpoint in processEntry, so it is being called. However, the savepoint manager object is not among the blk.finalizables at this point.
#157 Updated by Ovidiu Maxiniuc about 4 years ago
While working on dynamic temp-tables I encountered the following issue: the AsmClassLoader loads the dynamically generated class bytes, but the annotations are not available. In consequence, the DmoMetadataManager is unable to process the class because apparently there are no annotations, neither the Table annotation for the class, nor the Property annotation for the primary getters.
It is better to illustrate this with an example:
DEF VAR th1 AS HANDLE NO-UNDO.
DEF VAR bh1 AS HANDLE NO-UNDO.
CREATE TEMP-TABLE th1.
th1:ADD-NEW-FIELD("f-1", "integer").
th1:ADD-NEW-FIELD("f-2", "character", 3).
th1:ADD-NEW-FIELD("f-3", "datetime-tz", 4).
th1:TEMP-TABLE-PREPARE("dynamic-temp-table-1").
bh1 = th1:DEFAULT-BUFFER-HANDLE.
bh1:BUFFER-CREATE.
This sample code attempts to create a dynamic temp-table with 3 fields, two extents (f-2 and f-3) and one scalar (f-1). After ConversionProfile.BREW_DMO_ASM was executed, I got the class as a byte array and it was successfully loaded by the AsmClassLoader. At the same time, I dumped the generated byte array to disk. You can find it attached.
I tried to disassemble it (using javap -v com.goldencode.p2j.persist.dynamic._temp.DynamicRecord1.class) and I get:
disassembled code of classfile com.goldencode.p2j.persist.dynamic._temp.DynamicRecord1.class
Notice the lines below RuntimeVisibleAnnotations. Apparently the annotations are there (at least as cross-references from the constant pool), but for some unknown reason they are not loaded/initialized/set or otherwise processed to become available to the Class/Method.getAnnotation() methods.
I have written some debug lines to inspect this, and the result was the following:
clazz.getAnnotation(Table.class) = null
Arrays.toString(clazz.getDeclaredMethods()[15].getAnnotationBytes()) = [0, 1, 0, 17, 0, 6, 0, 18, 73, 0, 19, 0, 11, 115, 0, 20, 0, 21, 115, 0, 20, 0, 13, 115, 0, 22, 0, 23, 115, 0, 24, 0, 25, 115, 0, 22]
clazz.getDeclaredMethods()[15].getAnnotation(Property.class) = null
(where method #15 is public abstract com.goldencode.p2j.util.integer com.goldencode.p2j.persist.dynamic._temp.DynamicRecord1.getField1() - one of the getter methods that are (or, better said, should be) annotated with the Property annotation).
Let's say that we will be able to reconstruct (how?) the Property annotations starting from this byte array. But first we need the Table annotation from the main interface to be processed.
#158 Updated by Greg Shah about 4 years ago
I was discussing the UNDO/transaction and sub-transaction rework that is happening in this task with a customer. I explained that we have to go through special efforts to implement the "roll-forward" of NO-UNDO temp-tables which have been rolled back.
He mentioned that he believes his team never knowingly uses UNDOable temp-tables. In other words, he thinks all of their temp-tables are NO-UNDO and if some are not, then it is a bug. I've asked him to confirm this. If he is correct, then we could implement a directory flag that customers can optionally set to disable the undoability of the temp-table database (treating all temp-tables as NO-UNDO). This would mean that the rollback for temp-tables would actually commit when this flag is set and all the extra processing would be eliminated. Please keep this in mind in your design efforts. I will post here when we have the confirmation from the customer.
#159 Updated by Ovidiu Maxiniuc about 4 years ago
I have fixed the issue in entry 157; please disregard it.
The problem was in the annotations' paths: the paths were incorrect (see constant entries 10 and 17). I find it strange that the classloader wasn't returning some kind of error/exception.
#160 Updated by Eric Faulhaber about 4 years ago
Greg Shah wrote:
[...]
This would mean that the rollback for temp-tables would actually commit when this flag is set and all the extra processing would be eliminated.
Very interesting idea. Should not be too hard to integrate this option into the implementation.
#161 Updated by Eric Faulhaber about 4 years ago
Greg Shah wrote:
He mentioned that he believes his team never knowingly uses UNDOable temp-tables. In other words, he thinks all of their temp-tables are NO-UNDO and if some are not, then it is a bug.
We could help to enforce this with a related option during conversion (including honoring the directory option when converting dynamic temp-tables at run-time).
#162 Updated by Eric Faulhaber about 4 years ago
Ovidiu, I am implementing the forced NO-UNDO override idea Greg mentioned above, and I have come across UndoStateProvider. This interface is used in several parts of the runtime in ways that I don't fully understand. I don't want to break what it does, and I don't want to overcomplicate my implementation of the override. Essentially, I want to perform the override as simply as possible, so that the runtime behaves as if the temp-table had been defined with the NO-UNDO option.
I am reading the directory and storing the override flag in a DatabaseManager static variable. This is enough for my savepoint implementation purposes, but I figured I'd make sure TemporaryBuffer.isUndoable returns the correct response when the override is enabled. This is where I hit the use of UndoStateProvider. What do you suggest? Take into account that dynamically defined temp-tables must honor the override as well.
Is it enough to just return false from TB.isUndoable when the override is enabled, or will that miss some more subtle use of UndoStateProvider elsewhere?
#163 Updated by Ovidiu Maxiniuc about 4 years ago
IIRC, the UndoStateProvider is used for two reasons:
- to allow the undo property to be lazily set (in some conditions: see the javadoc of AbstractTempTable.setCanUndo()); and
- to keep named buffers in sync with the master/default buffer, in which case the evaluation of the isUndoable() method is delegated to the latter.
So, as long as the directory flag is set, I think it is safe to return false for temp-tables.
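The override could then reduce to a one-line guard in front of the existing logic. A minimal sketch, assuming a static flag standing in for the directory-driven DatabaseManager setting and a supplier standing in for the UndoStateProvider/master-buffer delegation (both are hypothetical simplifications, not the FWD API):

```java
import java.util.function.BooleanSupplier;

/**
 * Hypothetical sketch of the forced NO-UNDO override: when the
 * directory flag is set, every temp-table reports as NO-UNDO
 * regardless of what the normal (possibly delegated) path says.
 */
class UndoCheck
{
   /** Directory-driven override: treat all temp-tables as NO-UNDO. */
   static boolean forceNoUndo = false;

   static boolean isUndoable(BooleanSupplier provider)
   {
      if (forceNoUndo)
      {
         return false;                   // override wins unconditionally
      }
      return provider.getAsBoolean();    // lazily-set / delegated evaluation
   }
}
```

Since the flag is consulted at every call rather than baked in at conversion, flipping it in the directory changes runtime behavior without reconversion, which matches the later discussion in this thread.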
#164 Updated by Eric Faulhaber about 4 years ago
Eric Faulhaber wrote:
Greg Shah wrote:
He mentioned that he believes his team never knowingly uses UNDOable temp-tables. In other words, he thinks all of their temp-tables are NO-UNDO and if some are not, then it is a bug.
We could help to enforce this with a related option during conversion (including honoring the directory option when converting dynamic temp-tables at run-time).
Greg, if we enforce the override at runtime, such that all temp-tables behave as NO-UNDO when the directory is so configured, can you see a use for the conversion option? The only value I see is that of pointing out which temp-tables are NOT marked NO-UNDO in the 4GL code, which might point the 4GL developers to what they consider a bug in their code. However, that assumes we log it at conversion and someone reviews the conversion log. We would need to do similar logging at runtime for dynamic temp-tables.
In the absence of this feedback and potential clean-up of the original code, we may have differences in behavior reported back to us as bugs, requiring additional effort to reference back to the original code to ascertain a missing NO-UNDO option.
#165 Updated by Greg Shah about 4 years ago
The only value I see is that of pointing out which temp-tables are NOT marked NO-UNDO in the 4GL code, which might point the 4GL developers to what they consider a bug in their code.
Yes. Writing a WARNING message during conversion is sufficient (for now).
In the absence of this feedback and potential clean-up of the original code, we may have differences in behavior reported back to us as bugs, requiring additional effort to reference back to the original code to ascertain a missing NO-UNDO option.
Understood.
#166 Updated by Eric Faulhaber about 4 years ago
Greg Shah wrote:
The only value I see is that of pointing out which temp-tables are NOT marked NO-UNDO in the 4GL code, which might point the 4GL developers to what they consider a bug in their code.
Yes. Writing a WARNING message during conversion is sufficient (for now).
To be clear, is it a warning only or a change to conversion? In other words, is the warning that the NO-UNDO option was missing and we added it, or just that the option was missing?
#167 Updated by Greg Shah about 4 years ago
If the directory has a flag to force everything to be treated as NO-UNDO, then I don't see the need for a conversion change. We just need a warning. If we make it a conversion change, then you have to reconvert to switch back. That is not as good as a runtime flag.
#168 Updated by Eric Faulhaber about 4 years ago
Ovidiu, if two temp-tables are defined identically, except that one is declared NO-UNDO and the other is not, do we generate separate DMO interfaces for them? In other words, would it ever be possible to have the same BaseRecord instance be loaded in two temp-table buffers, one that is NO-UNDO and one that is not?
#169 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
Ovidiu, if two temp-tables are defined identically, except that one is declared NO-UNDO and the other is not, do we generate separate DMO interfaces for them? In other words, would it ever be possible to have the same BaseRecord instance be loaded in two temp-table buffers, one that is NO-UNDO and one that is not?
No, only one set of interfaces is created. However, the definitions in the Java source are a bit different; the buffer backed by the NO-UNDO table will have an extra false for the undoable parameter.
This is decided in schema/p2o.xml and schema/p2o_pre.xml by the AstKey built on the local tmpTabCrit. The no-undo nodes are not added to tmpTabCrit.
#170 Updated by Eric Faulhaber about 4 years ago
I guess the better question is whether it is possible for the same data record to be used by such temp-tables (i.e., those that differ only in their NO-UNDO designation) in the 4GL?
The reason I ask: it will simplify our implementation if I can track NO-UNDO state within an instance of BaseRecord. However, if the same record can be loaded into both an undoable buffer and a NO-UNDO buffer at the same time, this would complicate things.
#171 Updated by Ovidiu Maxiniuc about 4 years ago
How can we test that? The only way is to use buffer-copy, but this will copy field by field, not the whole record.
I don't know your idea, but I don't think duplicating NO-UNDO to all records is correct. This is a kind of 'denormalization' of the relation between the record and the table it belongs to.
#172 Updated by Eric Faulhaber about 4 years ago
So, what I understand from your comment is that the NO-UNDO state is a property of the table the record belongs to (in the 4GL), and we are just defining a single DMO interface to represent both tables as a way of preventing redundancy in the Java code.
Ideally (for my current purpose at least), NO-UNDO would be a part of the RecordMeta object, because I need to know (without access to a TemporaryBuffer or RecordBuffer instance) whether a BaseRecord instance is to be tracked as undoable or NO-UNDO (I store different state within the savepoint manager based on this criterion: the full BaseRecord for NO-UNDO, and just the RecordState for undoable).
However, RecordMeta instances are one-to-one with the interface definitions. So, as an alternative, I was trying to find a way to have this information in the BaseRecord instance.
I suppose an alternative is to include NO-UNDO among the criteria which differentiates DMO interfaces during conversion, so that these actually would result in the generation of separate DMO interfaces.
#173 Updated by Greg Shah about 4 years ago
I suppose an alternative is to include NO-UNDO among the criteria which differentiates DMO interfaces during conversion, so that these actually would result in the generation of separate DMO interfaces.
I think more runtime state is a fair cost in exchange for a better conversion.
#174 Updated by Eric Faulhaber about 4 years ago
Greg Shah wrote:
I suppose an alternative is to include NO-UNDO among the criteria which differentiates DMO interfaces during conversion, so that these actually would result in the generation of separate DMO interfaces.
I think more runtime state is a fair cost in exchange for a better conversion.
I'm not sure what you are advocating for with this comment. In your view, does "a better conversion" mean fewer DMO interfaces for similar temp-tables, or separate DMO interfaces which are mostly redundant, but which more accurately reflect the situation in the original code?
#175 Updated by Greg Shah about 4 years ago
Less is more. I see no reason a customer would want the same structure duplicated just to get the effect of setting a flag that is just about runtime behavior.
#176 Updated by Constantin Asofiei about 4 years ago
Eric/Ovidiu: maybe you've already considered this, but please keep in mind also the 'resource' behavior for temp-tables (which includes all the LegacyTable, LegacyIndex and LegacyField annotations we now emit at the DMO implementation classes).
#177 Updated by Eric Faulhaber about 4 years ago
Constantin Asofiei wrote:
Eric/Ovidiu: maybe you've already considered this, but please keep in mind also the 'resource' behavior for temp-tables (which includes all the LegacyTable, LegacyIndex and LegacyField annotations we now emit at the DMO implementation classes).
These already have moved to be annotations of the DMO interface in 4011a. The implementation class is now generated at runtime with ASM, and has no annotations. This discussion was in part about determining whether NO-UNDO best fits as one of those annotations, or should remain as runtime state only.
#178 Updated by Constantin Asofiei about 4 years ago
Eric Faulhaber wrote:
Constantin Asofiei wrote:
Eric/Ovidiu: maybe you've already considered, but please keep in mind also the 'resource' behavior for temp-tables (which includes all the LegacyTable, LegacyIndex and LegacyField annotations we now emit at the DMO implementation classes).
These already have moved to be annotations of the DMO interface in 4011a. The implementation class is now generated at runtime with ASM, and has no annotations. This discussion was in part about determining whether NO-UNDO best fits as one of those annotations, or should remain as runtime state only.
My worry is mostly about cases where tt1 is defined in one file like:
def temp-table tt1 xml-node-name "foo" field f1 as int.
and in another file like:
def temp-table tt1 xml-node-name "bar" field f1 as int.
Are you generating 2 DMO interfaces, one for each case? We need to preserve any temp-table or field options (to be clear, this is not about just xml-node-name, but any option).
#179 Updated by Ovidiu Maxiniuc about 4 years ago
Constantin, you are right, one of the xml-node-names will be lost in this case.
However, this is a bug (I can say) already in trunk. If you look at p2o.xml / p2o_pre.xml, you can see that nodes like KW_XML_NNAM are not taken into consideration when comparing temp-table trees, so the DMO interface for both tt1 will be the same.
With 4011a we merely moved the annotations from the impl class to its parent DMO interface, so we kept the bug.
#180 Updated by Constantin Asofiei about 4 years ago
Ovidiu Maxiniuc wrote:
Constantin, you are right, one of the xml-node-names will be lost in this case.
This was just an example, even if it is currently a bug (which we will need to fix at some point - and from my experience, these kinds of bugs, of differences only in temp-table or field options, are pretty hard to track down). Replace xml-node-name with SERIALIZE-NAME - in trunk, we generate 2 DMO implementation classes (so we can track the proper serialize-name in the LegacyTable annotation). Are you still doing the same in 4011a?
#181 Updated by Eric Faulhaber about 4 years ago
I'm pretty sure we were never generating more than one DMO implementation class per DMO interface. These were designed to be pairs.
If we were generating different DMO types for differences in certain [temp-]table attributes, we should be doing the same in 4011a. At least, that was my intention.
#182 Updated by Constantin Asofiei about 4 years ago
Eric Faulhaber wrote:
If we were generating different DMO types for differences in certain [temp-]table attributes, we should be doing the same in 4011a. At least, that was my intention.
I've tested the serialize-name case in 4011a and it is broken; only one DMO interface is generated.
We need this feature preserved, somehow, as these attributes/options are part of the resource/handle 'flavor' of a temp-table and can be different, even if the underlying structure is the same.
I'm pretty sure we were never generating more than one DMO implementation class per DMO interface. These were designed to be pairs.
I think this still stands for trunk - but in trunk there is still a single 'root' interface with the table fields' getters/setters, from which 2 other interfaces are inherited, and each DMO impl class has its specific DMO interface.
The idea above was that in p2o.xml (look for tmpTabProps), in trunk we consider 2 temp-tables which differ in their defined options to actually be distinct, and we generate separate DMO implementation classes (to properly track these options).
#183 Updated by Ovidiu Maxiniuc about 4 years ago
I see two categories of table attributes:
- structural (the actual data structure, meaning the fields, their types and indexes, probably triggers). This is information strictly related to database persistence;
- decorative: data not used when interacting with the persistence layer, but for display or other serialization (SERIALIZE-NAME, NAMESPACE-URI, NAMESPACE-PREFIX, XML-NODE-NAME).
When comparing two tables, the first category is essential, meaning the structure must be the same to use the same DMO & backing database table. So these attributes must belong to the generated DMO interface.
The second category is... secondary; all these attributes are in fact optional. I think they can be set at a later time, maybe at the beginning of the scope, when the default buffer is opened. The question is: where do we store these? We have classes that abstract TEMP-TABLEs. We can save them there (if not, we are already doing this for some), but the permanent tables don't have any objects equivalent to the TempTable implementations.
This is just an idea, please let me know what you think.
I wonder if we should replace
Customer.Buf customer = RecordBuffer.define(Customer.Buf.class, "fwd", "customer", "customer");
Tt1_1_1.Buf tt1 = TemporaryBuffer.define(Tt1_1_1.Buf.class, "tt1", "tt-1", false /* non-global */, false /* not undoable */);
Tt1_1_1.Buf bTt1 = TemporaryBuffer.define(btt1, "bTt1", "b-tt-1"); // non-default buffer
with a bit more complex:
Table customer_table = Table.define(Customer.Buf.class);
Customer.Buf customer = customer_table.getBuffer("customer", "customer");
TempTable tt1_table = TempTable.define(Tt1_1_1.Buf.class, false /*global flag*/)
.setUndoable(false)
.setSerializeName("my-private-data");
Tt1_1_1.Buf tt1 = tt1_table.getBuffer("tt1", "tt-1"); // last parameter may be skipped for default buffers
Tt1_1_1.Buf bTt1 = tt1_table.getBuffer("bTt1", "b-tt-1"); // non-default buffer
The new code allows different Table/TempTable objects to use the same DMO interface/implementation with a common structure, while allowing fine-tuning of decorative attributes in the generated Java source by chaining them in the table definition. Also, the table declarations are separated from the buffers, but the latter keep references to their tables.
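A minimal sketch of how the chained definition style above might be structured; all class and method names here are hypothetical stand-ins, not the actual FWD API:

```java
// Sketch of the chained table-definition style proposed above. All names
// are illustrative only; the real FWD classes and signatures differ.
public class TempTableSketch
{
   private final Class<?> dmoIface;
   private final boolean  global;
   private boolean undoable = true;   // 4GL temp-tables are undoable by default
   private String  serializeName;     // a "decorative" attribute

   private TempTableSketch(Class<?> dmoIface, boolean global)
   {
      this.dmoIface = dmoIface;
      this.global   = global;
   }

   public static TempTableSketch define(Class<?> dmoIface, boolean global)
   {
      return new TempTableSketch(dmoIface, global);
   }

   // each setter returns 'this' so decorative attributes can be chained
   public TempTableSketch setUndoable(boolean undoable)
   {
      this.undoable = undoable;
      return this;
   }

   public TempTableSketch setSerializeName(String name)
   {
      this.serializeName = name;
      return this;
   }

   public boolean isUndoable()       { return undoable; }
   public String  getSerializeName() { return serializeName; }
}
```

The point of the chaining is that two tables sharing one DMO interface can still carry different decorative attributes, because those live on the per-definition table object rather than on the generated DMO.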
#184 Updated by Constantin Asofiei about 4 years ago
Ovidiu, your idea of setting these non-schema settings for a temp-table at the program where it is defined is interesting, and I think it can work. There is another issue to keep in mind: from some simple testing, a child shared temp-table inherits the options (from table or field) from the master definition. But something is bugging me: there were cases in MAJIC where the field's label can be different in a child shared temp-table (I might be wrong here, I can't put my finger on it at this time). Also, it looks like (for shared temp-tables) the legacy field name can be different at the child definition, yet the names are actually inherited from the master definition.
I think your solution can limit the legacy annotations to only schema-related information (if there's anything left). But we need some tests to see what actual information can be kept there; for example, do you generate different DMO interfaces if the temp-table structure is the same regarding field data types and their order, but not their names? I ask this because the legacy field names can be different, even if otherwise the schema matches.
Also, this approach I think can solve all the other 'existing bugs' in trunk (see the xml-node-name issue).
#185 Updated by Eric Faulhaber about 4 years ago
Ovidiu, I like the chaining approach. This is an approach I have adopted with other new APIs which have lots of possible options, rather than exploding the API. It is much more flexible. As long as the concerns Constantin has expressed can be dealt with, we should adopt this here.
#186 Updated by Ovidiu Maxiniuc about 4 years ago
Constantin Asofiei wrote:
I think your solution can limit the legacy annotations to only schema-related information (if there's anything left). But we need some tests to see what actual information can be kept there; for example, do you generate different DMO interfaces if the temp-table structure is the same regarding field data types and their order, but not their names? I ask this because the legacy field names can be different, even if otherwise the schema matches.
It seems that we collect different "features" of the temp tables, see #2942, #2595. We need to fine-tune the AST keys in the tmpTabProps of p2o*.xml.
Also, this approach I think can solve all the other 'existing bugs' in trunk (see the xml-node-name issue).
The 4011 branch generates the exact same set of interfaces as trunk; the only difference is that there are new annotations for the DMO interfaces and the impl classes are not generated until runtime. So these related bugs were kept.
#187 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
Ovidiu, I like the chaining approach. This is an approach I have adopted with other new APIs which have lots of possible options, rather than exploding the API. It is much more flexible. As long as the concerns Constantin has expressed can be dealt with, we should adopt this here.
I also like the chaining. I used this solution for system dialogs and more recently for datasets, too.
However, we need to pay attention to the generated DMO. As Constantin noticed in #2942, some attributes/properties may be different at the database level. In this case, we have a problem, as the extent is hardcoded at conversion time in the Field annotation in the generated DMO interface. I do not have a solution for this yet.
#188 Updated by Greg Shah about 4 years ago
I really like your proposal as well. It seems "more correct" and it reads better.
In regard to the question "which attributes of a table are structural", you may want to review #3751-492. At least for method overloading, two otherwise identical tables are differentiated when a field has different EXTENT settings, so I think EXTENT is something that should cause different DMOs to be generated.
#189 Updated by Ovidiu Maxiniuc about 4 years ago
My concerns on different extent fields were related to shared tables. However, re-reading the #2942 notes, I see that the issue is solved, logically, by using the new definition (i.e. from the master table). So we need to create the DMO from the same source (and I think we do just that).
At the same time, in the case of independent (non-shared) tables, it's clear that a table structure with a field with different extents will create different DMO interfaces.
OTOH, in the case of TEMP-TABLEs, there is the case when two independent tables define the same structure but use different names. In this case we also create different, but identical DMOs (except for the table names). I think we could force using the same DMO here, provided the impl tables use different multiplexers, but it will be a bit strange at runtime when the legacy table a is backed by SQL/ORM table b.
#190 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
OTOH, in case of the TEMP-TABLEs, there is the case when two independent tables are defining the same structure but use different names. In this case we also create different, but identical DMOs (except the table names). I think we could force using the same DMO here provided the impl tables will use different multiplexers, but it will be a bit strange at runtime when the legacy table a will be backed by SQL/ORM table b.
I agree that we should use the same DMO in this case. I think the two most common cases of this phenomenon are:
- Shared temp-tables, where the name change between definitions across files was most likely accidental. It seems we would cause a bug by using different DMOs (and thus different backing tables) in this case. Your suggested approach would fix this.
- The same structural temp-table in use across multiple files, either intentionally or by coincidence.
In both cases, the name(s) defined in the temp-table definition first encountered by conversion will be preferred and will be flowed consistently through the converted business logic, so the resulting code should be easy to understand, even with the name adjustment.
In any case where the name change was intentional and it is important to preserve separate tables, we can override using hints. Our hint processing may have to be tweaked, however, to make sure it will override the default behavior and actually result in separate DMOs being generated. I suggest we defer supporting this through changes in the hint implementation for now, however. It seems it would be a rare case where this would need to be used.
#191 Updated by Eric Faulhaber about 4 years ago
I've been struggling with how to fit a bulk delete (i.e., delete from &lt;table&gt; [ where &lt;condition&gt; ]) which operates on a NO-UNDO temp-table into the new UNDO architecture.
Right now, I have a SavepointManager class which is created when we start a new application-driven database transaction. It registers for Commitable and Finalizable hooks at every full or sub-transaction block, creating a database savepoint at every sub-transaction block, and tracking changed/deleted/inserted records at every full or sub-transaction block.
Upon a rollback at any level, we mark all tracked records as stale, which means the in-memory DMOs are potentially out of sync with the database. This means the following:
- For undoable records, the database is authoritative. Any use of a stale record requires a lazy refresh from the database before the record is used. For inserted or changed records, we fetch the latest data values and store them in the DMO. For deleted records, we remove the DMO from the buffer using it and from the session cache.
- For NO-UNDO records, the DMOs in memory are authoritative. This means that after a savepoint or transaction rollback, we immediately insert, update, or delete the records to match the current state of the DMO.
However, the NO-UNDO approach breaks down in the case of a true bulk delete (as opposed to the looping, emulated version), where we haven't tracked specific records that were deleted.
I'm thinking that maybe, instead of trying to track NO-UNDO DMOs as POJOs, I need to track the SQL statements that were rolled back and "replay" them in sequence after a sub or full transaction rollback. This may be best for all the NO-UNDO cases, not just bulk delete. I'd rather not do it both ways. I would have to track the substitution parameters with the statements, which is a bit messy.
If anyone has any thoughts on this change in approach, please post them.
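One way to sketch the statement-replay idea above is to record each NO-UNDO statement together with its substitution parameters, so the sequence can be re-applied in order after a rollback. This is purely illustrative; the class names and structure are hypothetical, not the actual FWD implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiConsumer;

// Hypothetical sketch: record each statement touching a NO-UNDO temp-table
// with its substitution parameters, then re-execute the sequence after a
// savepoint or transaction rollback to restore the NO-UNDO state.
public class NoUndoReplayLog
{
   private static class Entry
   {
      final String   sql;
      final Object[] params;
      Entry(String sql, Object[] params) { this.sql = sql; this.params = params; }
   }

   private final List<Entry> entries = new ArrayList<>();

   // called whenever a statement against a NO-UNDO table is executed
   public void record(String sql, Object... params)
   {
      entries.add(new Entry(sql, params));
   }

   // called after a rollback; re-applies the statements in original order
   public void replay(BiConsumer<String, Object[]> executor)
   {
      for (Entry e : entries)
      {
         executor.accept(e.sql, e.params);
      }
   }

   public int size() { return entries.size(); }
}
```

This covers the true bulk-delete case as well, since the delete statement itself (rather than the individual records it removed) is what gets replayed.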
#192 Updated by Ovidiu Maxiniuc about 4 years ago
Eric, this FFD replay is the same thought I had in mind. I don't think we can keep the database in good condition and the buffers synchronized with it otherwise.
However, after we do this, it might be a good idea (at least for debugging/testing purposes) to re-fetch the records from the database and compare them against the DMOs currently in memory, just to be sure the replay batch was successful.
There is another issue which I have in mind: when creating a savepoint, shouldn't we also save the individual field and record states? I think we should restore these, too, after a rollback. For example, if a field was modified but not committed, we need to restore this dirty state to make sure it will be ready to (re-)commit at the next opportunity.
#193 Updated by Greg Shah about 4 years ago
maybe, instead of trying to track NO-UNDO DMOs as POJOs, I need to track the SQL statements that were rolled back and "replay" them in sequence after a sub or full transaction rollback
What happens if in the meantime between the roll-forward and the original execution of a statement, another session has committed changes to an index that matters to the statement being rolled forward? This approach seems more fragile than tracking the specific changes made.
There is another issue which I have in mind: when creating a savepoint, shouldn't we also save the individual field and record states? I think we should restore these, too, after a rollback. For example, if a field was modified but not committed, we need to restore this dirty state to make sure it will be ready to (re-)commit at the next opportunity.
I had assumed that all pending changes were already flushed to the database before the savepoint. If not, then you would be right.
#194 Updated by Eric Faulhaber about 4 years ago
Greg Shah wrote:
What happens if in the meantime between the roll-forward and the original execution of a statement, another session has committed changes to an index that matters to the statement being rolled forward? This approach seems more fragile than tracking the specific changes made.
NO-UNDO is only relevant for temp-tables, which are local to a session.
#195 Updated by Eric Faulhaber about 4 years ago
Greg Shah wrote:
There is another issue which I have in mind: when creating a savepoint, shouldn't we also save the individual field and record states? I think we should restore these, too, after a rollback. For example, if a field was modified but not not committed, we need to restore this dirty state to make sure it will be ready to (re-)commit it with next opportunity.
I had assumed that all pending changes were already flushed to the database before the savepoint. If not, then you would be right.
The intent was that the changes would be flushed before the savepoint, since we are no longer deferring the writes. However, I wonder if this might be possible in the case of a newly created record. I'll write some test cases to try to figure this out.
For undoable records, the RecordState objects are being tracked as part of the information associated with each savepoint. In its current form, the new implementation tracks the BaseRecord objects for NO-UNDO records, which includes the RecordState. I'll keep your point in mind as I consider the move to tracking the SQL instead (if I find cases where there can be unflushed changes at the time the savepoint is set).
#196 Updated by Ovidiu Maxiniuc about 4 years ago
I encountered the following issue and I see no easy trick to work around it. It is an antlr issue, I think. Suppose we have the following table and query:
define temp-table tt1 field order as int. find first tt1 where order ne ?.
The conversion will create a (temp-)table with a column order_, as it knows that order is a reserved keyword in SQL, but the where predicate of the query will use the property name, which is still order, like this: tt1.order is not null. After preprocessing, the property remains unchanged, so this is the exact hql passed to Hibernate. Hibernate parses the hql and somehow draws the conclusion that the token at col 5 is the property name, not the order SQL keyword. So it will convert the property name to the order_ column name, and the resulting sql is valid and can be executed against the SQL server.
The problem with the new implementation is that order (among other new ones) is already a keyword in FQL. After HQLPreprocessor has done its job, the FqlToSqlConverter will receive the following string parameter to convert:
select tt1 from Tt1_1_1__Impl__ as tt1 where tt1.order is not null order by tt1.recid
As you see, the order token already occurs twice. On the other side, such a query would be valid in SQL. However, antlr is too eager to parse the token as ORDER and will fail to match it as a SYMBOL. As a result, FqlToSqlConverter will be unable to get a correct AST tree to process.
There are many other SQL keywords that suffer from the same issue (by, cross, limit, etc). I could not yet find a generic solution.
#197 Updated by Ovidiu Maxiniuc about 4 years ago
In fact, by does not suffer from this inconvenience, as it is already a Progress reserved keyword and cannot be the name of a field. The same stands for from and select. So identifiers matching SQL keywords (when this collision is allowed) that are also P4GL keywords are protected from this point of view. Luckily, the HQL language seems to have all keywords protected, but FQL has additional ones which don't have such protection: order, cross, left, right (types of join), limit - those I could identify at this moment. Since Hibernate works on the HQL, which does not contain order as a keyword, the testcase from note 196 will execute successfully with trunk revisions.
#198 Updated by Greg Shah about 4 years ago
Is the column name version of order always "qualified" (e.g. tt1.order)? If so, then you can easily differentiate this as a column reference by lexer modifications to NUM_LITERAL, which is where the lexer generates the DOT. We have to do a similar thing in the progress.g lexer. You can look at the NUM_LITERAL implementation there; see how we match on SYMBOL and mark the result with the type FILENAME. The idea here is that we cannot match . in SYMBOL without lexical non-determinism. So we move all the . processing into NUM_LITERAL for the "compound" cases. This is equivalent to your "qualified" case. In progress.g we match on FILENAME in the parser and convert these into field references (FIELD_*) when we have enough context to make the decision. In your case, it may not be as complicated, since you may already know that these qualified cases are safe to match at lexer time.
Let me know what questions you have. I'm happy to help you solve this.
#199 Updated by Ovidiu Maxiniuc about 4 years ago
Greg, thanks for the suggestion. This is a non-orthodox workaround. Here is what I've done.
I altered NUM_LITERAL in the lexer so that it will eat any non-digit SYMBOL after DOT; in this case, the new token (for example .order) is assigned the PROPERTY type. On the other hand, when describing the property in the parser, I added an optional rule that checks for a SYMBOL and a PROPERTY token. In this case, the former becomes an ALIAS while the second remains a PROPERTY, but its text gets the first character (that is, the DOT) removed. Apparently, this works for my testcases.
I will try using this in hotel_gui and let you know of the result.
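The parser-side text fix-up described in note 199 amounts to something like this toy sketch (the token names and the helper are illustrative only; the real grammar action differs):

```java
// Toy sketch of the fix-up from note 199: a lexer PROPERTY token such as
// ".order" has its leading DOT removed when the parser pairs it with the
// preceding SYMBOL (which becomes the ALIAS). Names are illustrative only.
public class PropertyFixup
{
   public static String stripLeadingDot(String propertyText)
   {
      return propertyText.startsWith(".") ? propertyText.substring(1) : propertyText;
   }
}
```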
#200 Updated by Greg Shah about 4 years ago
This is a non-orthodox workaround.
You're being kind. Let us call it what it is: an ugly solution. Unfortunately, it is the only way that ANTLR lexical non-determinism can be solved.
#201 Updated by Ovidiu Maxiniuc about 4 years ago
I have committed my latest changes to 4011a as r11404.
The standalone hotel_gui is starting up and tabs can be navigated. However, it is not stable enough. The embedded client is also starting, but still behind the standalone version.
- FQL parsing. As noted in previous notes (196-200), the FQL grammar was adjusted so that the properties that match SQL keywords are now correctly interpreted;
- the overloading of UDFs for H2. Functions like lookup were not recognized by the SQL engine. I do not remember when they were declared in the previous implementation; I had to add them when connecting to the database, very similar to how we do it for the temp-tables. Then, even though they registered themselves to HQLPreprocessor, sometimes they were not replaced by the overloaded form (_1, _2, etc). This was because the types of the SUBST nodes were incorrectly calculated. In fact, this was a consequence of a recent change in types. The old (marked as @Deprecated) types were removed and replaced by the simpler HQLTypes (renamed as FqlType);
- table relations. Or better said, the lack of knowledge of table relations in DMO annotations needed by complex join queries. These pieces of information were stored in dmo_index.xml and processed by DMOIndex, but these are now obsolete and probably will be completely dropped with today's commit. I created a new set of annotations to store the table relations and adapted the conversion to add them in the generated DMO interface. At runtime, they are processed and provided to queries which need them. DMOIndex is no longer present in the project, as all its roles were assumed by the newly added classes;
- primary key name. I encountered some generated pieces of code that still were not aware of the configurable PK name. I added/improved support for reading the setting at conversion time and for using the configuration setting as the default when not expressly set in the directory. If not present in p2j.cfg.xml, the hardcoded default value of recid is used;
- Session.save() got a new parameter. The validation before the actual saving was invalidating the dirty flags of a record. As a result, a record was not really saved when the actual save operation was invoked. The extra parameter allows the save() method to identify the validation operations and skip resetting the dirty state of the properties to be saved;
- intentionally deprecated classes. These were dropped or replaced with new or existing alternatives.
I hope rebasing of 4011a2 will not be too difficult.
#202 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
I have committed my latest changes to 4011a as r11404.
The standalone hotel_gui is starting up and tabs can be navigated. However, it is not stable enough. The embedded client is also starting, but still behind the standalone version.
Which revision of hotel_gui are you using with 4011a/11404?
I hope rebasing of 4011a2 will not be too difficult.
As long as you already have moved all the fixes/changes not specific to UNDO from 4011a2 into 4011a (and my understanding is that you have), we will not be rebasing 4011a2. I am making the UNDO changes directly in 4011a and I will coordinate the commit of this work with you.
#203 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
Which revision of hotel_gui are you using with 4011a/11404?
I found r202 of hotel_gui is what we need. The more recent revisions are based on FWD trunk changes not present in our branch.
As long as you already have moved all the fixes/changes not specific to UNDO from 4011a2 into 4011a (and my understanding is that you have), we will not be rebasing 4011a2. I am making the UNDO changes directly in 4011a and I will coordinate the commit of this work with you.
I understand. The 4011a should contain all my non-undo related changes.
#204 Updated by Ovidiu Maxiniuc about 4 years ago
While working with hotel_gui and in particular with the embedded client, I feel that FWD is non-deterministically unstable, meaning that even if I execute the same scenarios, the errors I get are different. Among these I can mention the following:
- attempts to access a null activeBuffer in BaseRecord. The activeOffset and activePropPreviousValue are also not fully integrated;
- imbalanced Scopes in UniqueTracker. I believe I reported these already, but I haven't had time to fully understand where/when this happens;
- null transaction in Persistence$Context.commit(boolean). This one I spotted recently with hotel_gui and 4011a. I believe I tried to fix something in 4011a2. I think the problem resides in Persistence$Context.beginTransaction(boolean), where we have transaction = session.beginTransaction(). I think something is wrong there, because if one transaction is in progress, session.beginTransaction() will return null. This will overwrite the valid value already stored in the transaction field. Maybe a local variable should be used and returned instead?
- database connection closing aggressiveness. At least for permanent databases, closing the connection will lead to a halt of the entire client connection. This happens for all issues, including expected ones, like a query off end. The temp-tables are more stable because the connection is proxied/delegated and the actual close is not executed each time;
- rollback issues. I know that these are deprecated by the new UNDO approach. Yet, I find it strange that they are somehow affected; for example, ReversibleUpdate.rollbackWorker() attempts to restore some properties on a null currentRecord;
- accessing record properties using the POJO interface. I got these exceptions thrown in PropertyHelper.getRootPojoInterface() last night, so I have very little knowledge of their causes.
The Rooms tab will not be visible at first switch (most likely there is an HTML/javascript issue). Advancing with Check-In in the 1st tab and adding a guest will usually make the client disconnect, in most cases because of:
- IllegalStateException: no current transaction available to rollback, from Persistence$Context.rollback(Persistence.java:5157)
- IllegalStateException: Imbalanced Scopes in UniqueTracker, at UniqueTracker$Context.endTxScope(UniqueTracker.java:534)
- NullPointerException at RecordBuffer$ReversibleUpdate.rollbackWorker(RecordBuffer.java:13839)
- SQLException: You can't operate on a closed Connection!!! at SQLQuery.uniqueResult(SQLQuery.java:221)
In the latter case, it is not an error per se, but a consequence of the connection having been previously closed and the SQLQuery not being aware of it.
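The null-transaction overwrite described above for Persistence$Context.beginTransaction could be guarded with the suggested local variable, roughly as in this sketch (the types and names are hypothetical stand-ins, not the actual FWD code):

```java
// Sketch of the local-variable fix suggested above: do not let a nested
// beginTransaction (which returns null when a transaction is already in
// progress) clobber the valid value stored in the transaction field.
// "Session" and "Transaction" stand in for the real FWD/ORM types.
public class TxContextSketch
{
   public interface Transaction {}
   public interface Session { Transaction beginTransaction(); }

   private Transaction transaction;

   public Transaction beginTransaction(Session session)
   {
      // may return null when a transaction is already in progress
      Transaction tx = session.beginTransaction();
      if (tx != null)
      {
         transaction = tx;   // only replace the field when a new tx actually started
      }
      return transaction;
   }

   public Transaction current() { return transaction; }
}
```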
#205 Updated by Ovidiu Maxiniuc about 4 years ago
The first revision of the import process was committed to 4011a. I have successfully imported the hotel_gui database into both H2 and PostgreSQL. After the import, the server started and I was able to navigate tabs and see browses being populated.
Committed revision 11406.
- I improved the PSC footer lookup so that if the footer ends with 00000000, the PSC footer is reverse-searched and the import configured according to its values. The problem is: the numformat and dateformat might be different in each imported .d file. We store these values as context-local. If there is only one import thread, then everything goes smoothly, as the thread sets these values before reading data. In the case of parallel processing, if the settings are different, the last importer thread to start will overwrite the setting for those already executing, which can cause parsing errors for date-related and numerical values;
- I observed your notice:
// we no longer manufacture foreign keys from 4GL schemata; it is not safe to try to add
// referential integrity which was not part of the legacy database
I think you are right. I kept the commented code, but I am thinking of dropping it;
- the current implementation heavily uses reflection to create the DMO instances and populate them. With the new persistence and access to a record's metadata, we can avoid using reflection to set/get values on fields and instead directly access the data from the BaseRecord. The DataHandler classes will need an additional method to do it;
- at the moment, the above processing manages the record status as it would in a real FWD environment. I think this is useless (as the records are just created and flushed to the database) and the status management should be avoided. The only needed information is that the record is NEW, so that it can be inserted into the database by the Persister.
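The direct-access idea mentioned above (writing imported values straight into the record's backing array instead of going through reflective setters) might be sketched as follows; the class names, field names and offsets are illustrative only, not the real BaseRecord/DataHandler code:

```java
// Hypothetical sketch of populating a record's backing data array directly
// by a precomputed offset, bypassing reflective bean setters and any
// dirty-state bookkeeping during bulk import. Names are illustrative only.
public class DirectImportSketch
{
   static class RecordSketch
   {
      final Object[] data;
      RecordSketch(int propertyCount) { data = new Object[propertyCount]; }
   }

   static class DirectMapper
   {
      private final int offset;   // computed once per property, not per row

      DirectMapper(int offset) { this.offset = offset; }

      void set(RecordSketch rec, Object value)
      {
         rec.data[offset] = value;   // no reflection involved
      }
   }
}
```

The per-property offset is computed once up front, so the per-row cost is a plain array store rather than a reflective method invocation.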
#206 Updated by Eric Faulhaber about 4 years ago
Thanks for fixing up the import.
I had two errors compiling with rev 11406. One was due to a change I have made in my local working copy; the other was a missing import that your IDE may have resolved for you.
The missing import is for the java.beans package, for the use of PropertyVetoException in ImportWorker. This was an easy fix: I just had to add the import statement.
The other issue is that I had removed the deprecated BaseRecord.indexOf method in my working copy, because there were no dependencies on it before rev 11406. However, a new dependency was added to schema.PropertyMapper in 11406. I have temporarily commented that use out in my working copy; however, now import won't work, of course. Do we need to leave the deprecated method in BaseRecord just for this, or can you think of a way to do it without leaving the name/index lookup in BaseRecord?
#207 Updated by Eric Faulhaber about 4 years ago
Eric Faulhaber wrote:
The other issue is that I had removed the deprecated BaseRecord.indexOf method in my working copy...
Sorry, I meant RecordMeta.
#208 Updated by Ovidiu Maxiniuc about 4 years ago
I started rewriting some parts of the
import.xml
, ImportWorker
and related classes to take advantage of the new persistence framework and also I would like to do a bit of cleanup in import flow. I have some issues:
- I found out that PropertyMapper tries to delegate the setting of a field if delegate is on. However, the chain seems broken: firstly, no action is taken to set the 'delegated' field of the record; secondly, I found no active TRPL code to generate p2o objects with the delegated annotation. I found some remnant code (commented out since 2005 in p2o_post.xml, moved from p2o.xml) documented as: "to simplify first implementation of P2J". Practically, the PropertyMapper constructor is always non-delegated. Is there a chance these pieces of code will ever be activated?
- the import.xml does a lot of work to collect the properties of a DMO, then to find the bean set/get methods to access them using reflection in order to populate the fields of records in each table. This is obsolete now that we have moved to the new persistence. I have dropped the reflection access and now the properties are accessed directly, by assigning the values read from the input (.d file) stream and converted to BDT directly to the record's data array, knowing its offset. This means that the Method setter, Method getter, Object[] setterArgs and Object[] getterArgs member fields are all gone from PropertyMapper. It works faster as the record state is not uselessly maintained, but there are some issues here:
  - the offset of the property is computed using the property's name (and returned by RecordMeta.indexOf, which you removed). This is done only once per field/property. However, I think that this should be done in a more 'modern' fashion: as the DMO interface is already processed, we can construct the set of PropertyMappers directly, once the DMO interface is available;
  - I feel that it would be better to read the dump-data directly as Java native values (as stored in the data array) instead of 'transiting' through BDT. However, since the conversion routines are already written in each BDT class, I expect doing so will only duplicate the code, won't it?
  - the direct access to the data array of a record requires that this field be public, since ImportWorker resides in a different package. There is also some access required to this member from orm/types. This is really not what we want. I am thinking of a solution;
- what about the foreign keys and their queries from SqlRecordLoader? They seem like dead and disabled code. getDependencies() returns an empty set and usages of hqlParms and queries are commented out. Should I do the cleanup and drop the related code?
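The once-per-table offset lookup described above can be sketched roughly like this. This is a minimal illustration under assumed names, not the actual FWD code: FieldMapper and the String[] property list stand in for PropertyMapper and the DMO metadata.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: resolve each property's offset once per table,
// then write parsed values straight into the record's backing array,
// with no reflection per record.
public class FieldMapper
{
   private final Map<String, Integer> offsets = new HashMap<>();

   public FieldMapper(String[] properties)
   {
      // computed once per table, not once per record
      for (int i = 0; i < properties.length; i++)
      {
         offsets.put(properties[i], i);
      }
   }

   /** Direct write into the record's data array at the cached offset. */
   public void set(Object[] data, String property, Object value)
   {
      Integer offset = offsets.get(property);
      if (offset == null || offset >= data.length)
      {
         throw new IllegalArgumentException(property);
      }
      data[offset] = value;
   }
}
```

The point of the sketch is the design choice: the name-to-offset map is built once when the table's mapping is prepared, so each imported row costs only an array store.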
#209 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
I found out that PropertyMapper tries to delegate the setting of a field if delegate is on. However, the chain seems broken. Firstly because there is no action taken to set the 'delegated' field of the record and secondly because I found no active TRPL code to generate p2o objects with the delegated annotation. I found some remnant code (commented out since 2005 in p2o_post.xml, moved from p2o.xml) documented as: "to simplify first implementation of P2J". Practically, the PropertyMapper constructor is always non-delegated. Is there a chance these pieces of code will ever be activated?
IIRC, this was about the foreign keys we tried to introduce into the schema through code analysis. If that's accurate, then no, it won't be re-activated. We found that this idea was unsafe. This code is obsolete and should be removed.
the import.xml does a lot of work to collect the properties of a DMO, then to find the bean set/get methods to access them using reflection in order to populate the fields of records in each table. This is obsolete as we moved to the new persistence. I have dropped the reflection access and now the properties are accessed directly, by assigning the values read from input (.d file) stream and converted to BDT directly to the record's data array, knowing its offset. This means that the Method setter, Method getter, Object[] setterArgs and Object[] getterArgs member fields are all gone from PropertyMapper. It works faster as the record state is not uselessly maintained, but there are some issues here:
- the offset of the property is computed using the property's name (and returned by RecordMeta.indexOf you removed). This is done only once per field/property. However, I think that this should be done in a more 'modern' fashion as the DMO interface is already processed and we can construct the set of PropertyMappers directly, after the DMO interface is available;
Agreed.
- I feel that it would be better to read the dump-data directly as Java native values (as stored in the data array) instead of 'transiting' through BDT. However, since the conversion routines are already written in each BDT class, I expect doing so will only duplicate the code, won't it?
We originally had hand-written Java code which read the dump data directly, but it was incomplete and error-prone. We found this to be a non-trivial effort to get right, so we switched to using the import capabilities of the BDT classes, as this code was more complete and already well tested. There is surely some performance cost to this approach, as the BDT classes have other baggage not needed for import. And with the new ORM implementation, it is the Java types we actually want for the DMOs, not the BDTs. Nevertheless, I would rather not take on this level of effort at this time. I think our energies are probably better spent in other performance areas.
Is there a functional reason you are interested in doing it this way, or are you concerned primarily with the performance of the BDT hierarchy?
- the direct access to the data array from record requires that this field be public, since ImportWorker resides in a different package. There is also some access required to this member from orm/types. This is really not what we want. I am thinking of a solution;
OK, keep me posted.
what about the foreign keys and their queries from SqlRecordLoader? They seem like dead and disabled code. getDependencies() returns an empty set and usages of hqlParms and queries are commented out. Should I do the cleanup and drop related code?
Yes, please do. As noted, this approach is unsafe and this code will not be re-activated.
#210 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
Surely.
The other issue is that I had removed the deprecated BaseRecord.indexOf method in my working copy, because there were no dependencies on it before rev 11406. However, a new dependency was added to schema.PropertyMapper in 11406. I have temporarily commented that use out in my working copy; however, now import won't work, of course. Do we need to leave the deprecated method in BaseRecord just for this, or can you think of a way to do it without leaving the name/index lookup in BaseRecord?
@Deprecated is not the correct annotation; I needed something to mark it somehow and I (ab)used it because both the IDE and the compiler will issue at least a warning. But, at the moment, I am not aware of a @UsageNotRecommended annotation, which would be closer to what I meant. Practically, such an annotation would contradict the public specifier. I think we can either:
- drop the method; I think the property-to-index association can be obtained by iterating the list of properties for the import process, as all fields will eventually be processed. That is for the import process; other use-cases might not be as lucky;
- keep the method, as we can use it to access the properties like in ABL dereference mode. The following two methods can be part of the Record class:

public void setProperty(String property, Integer extOffset, Object value)
{
   int offset = _recordMeta().indexOf(property, extOffset);
   if (offset < 0 || offset >= data.length)
   {
      throw new IllegalArgumentException(property);
   }
   setDatum(offset, value);
}

public Object getProperty(String property, Integer extOffset)
{
   int offset = _recordMeta().indexOf(property, extOffset);
   if (offset < 0 || offset >= data.length)
   {
      throw new IllegalArgumentException(property);
   }
   return data[offset];
}

Note that the value parameter of the former and the return value of the latter are plain Java datatypes, not BDTs. However, the javadoc of all three methods should document that their usage is not recommended.
#211 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
[...] I think we can either:
- drop the method; I think the property-to-index association can be obtained by iterating the list of properties for the import process, as all fields will eventually be processed. That is for the import process; other use-cases might not be as lucky;
Do you still have other use cases for this? I removed it in my working copy because it was not referenced by any other part of FWD (before pulling the revision with import re-worked, that is).
If there is truly no other use case needing it, AND if the iteration can be done efficiently (i.e., once per table as opposed to once per record), then I think this is the way to go.
However, I have another question on this topic: aren't the values in each row of the .d
files already in the same order as the data array in BaseRecord
? Or if not, isn't there a deterministic relationship there?
- keep the method as we can use it to access the properties like in ABL dereference mode. The following two method can be part of the
Record
class:
[...] Note that the value parameter of the former and the return value of the latter are plain Java datatypes, not BDTs.
However, the javadoc of all three methods should document that their usage is not recommended.
I'd prefer not to keep it, but if we must, I agree on the javadoc comment.
#212 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
However, I have another question on this topic: aren't the values in each row of the
.d
files already in the same order as the data array inBaseRecord
? Or if not, isn't there a deterministic relationship there?
Yes, I guess they are. Not sure about the LOBs. The problem is that we need to process the metadata to know how each read field is converted using the correct BDT type.
#213 Updated by Eric Faulhaber about 4 years ago
I've checked in revisions 11407 and 11408, which replace the UNDO architecture and the transaction management.
Please note the following about working with rev 11408:
- Database import is temporarily disabled, per our previous discussions about the field lookup by name. See
PropertyMapper
:238. This is a priority to resolve.
- I have removed the Transaction class. It was bypassing code in Persistence$Context on commit and rollback. Persistence$Context.beginTransaction now returns a boolean instead. Commits and rollbacks from the persist package should be done through Persistence or Persistence$Context.
This revision is known to have bugs, especially around database transactions. I am working through them as quickly as possible.
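The boolean-returning beginTransaction implies a call-site pattern where only the frame that actually opened the transaction commits it. The following is a hedged sketch of that pattern; Tx is an invented stand-in, not Persistence$Context:

```java
// Toy model of the "commit only if you opened it" contract:
// beginTransaction() returns true only to the frame that started the tx,
// so nested frames leave the commit to the outermost one.
public class Tx
{
   private boolean active = false;

   /** Returns true only if this call opened a new transaction. */
   public boolean beginTransaction()
   {
      if (active)
      {
         return false;   // already in a transaction; caller must not commit
      }
      active = true;
      return true;
   }

   public void commit()
   {
      if (!active)
      {
         throw new IllegalStateException("no current transaction available to commit");
      }
      active = false;
   }

   /** Typical call site: commit in finally only when this frame opened the tx. */
   public void doWork(Runnable work)
   {
      boolean inTx = beginTransaction();
      try
      {
         work.run();
      }
      finally
      {
         if (inTx)
         {
            commit();
         }
      }
   }
}
```

The guard in the finally block is what prevents a nested frame from committing a transaction it did not open.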
#214 Updated by Eric Faulhaber about 4 years ago
I committed rev 11409. With this drop, I can log into the virtual desktop Hotel GUI. I can navigate the tabs, though the system is really busy and it is slower than I am used to. If I try to do a check-in, it fails with the imbalanced unique tracker issue.
#215 Updated by Ovidiu Maxiniuc about 4 years ago
I merged in my changes related to import and a few other fixes. Latest revision of 4011a is 11411. Notes:
- the property lookup was removed from RecordMeta. The PropertyMappers use local code to compute the property offset directly;
- I noticed your TODO in
DatabaseManager
. I am going to address it immediately, as this is used in several places, including at conversion time;
- there were some collisions, most of them in comments/javadocs. To resolve them I favoured your code;
- there are two solutions I would like to be reviewed:
- the fix of double evaluation of RecordMeta (i.e. double processing of the DMO interface annotations). The first solution was to have the _metadata static field not final and initialize it using reflection after loading the new class in the forInterface() method. It worked well and was rather simple, but I did not like the absence of the final specifier. The 11411 revision is a bit more complicated: it relies on the fact that only one DMO class is processed at a time, in the synchronized block of the forInterface() method. As a result, the new RecordMeta is stored after evaluation in a static field and requested (and reset) when the new class is loaded and the static final field is initialized. I left some additional documentation in RecordMeta: fields crtRecordMeta, crtRecordIface and static method get(Class<? extends DataModelObject>);
- I mentioned earlier that data is now protected (it cannot be package-private as Record needs access to it). To implement direct access from PropertyMapper I implemented an accessor that allows only objects of this kind to obtain the data member. As the PropertyMappers are created at import time only and they are final, their data field is safe in normal FWD mode;
- I spotted an inheritance issue with the Record class and the dynamically generated implementations. Before the latest commits, the new classes were lacking the implementations of Undoable Undoable.deepCopy() and void Undoable.assign(Undoable), which were inherited via the Persistable interface. I added code for generating these methods (by delegating the calls to Record Record.snapshot() and void Record.copy(BaseRecord) respectively). However, I noticed that you also added in BaseRecord a new method, BaseRecord deepCopy(), to be used with the UNDO framework. There is a problem here: the return types do not match (Undoable and BaseRecord are unrelated) so the classes are invalid.
#216 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
I merged in my changes related to import and a few other fixes. Latest revision of 4011a is 11411. Notes:
- the property lookup was removed from RecordMeta. The PropertyMappers use local code to compute the property offset directly;
Great!
- I noticed your TODO in DatabaseManager. I am going to address it immediately, as this is used in several places, including at conversion time;
Thank you.
- there were some collisions, most of them comments/javadocs. To resolve them I favoured your code;
- there are two solutions I would like to be reviewed:
- the fix of double evaluation of RecordMeta (i.e. double processing of the DMO interface annotations). The first solution was to have the _metadata static field not final and initialize it using reflection after loading the new class in the forInterface() method. It worked well and was rather simple, but I did not like the absence of the final specifier. The 11411 revision is a bit more complicated: it relies on the fact that only one DMO class is processed at a time, in the synchronized block of the forInterface() method. As a result, the new RecordMeta is stored after evaluation in a static field and requested (and reset) when the new class is loaded and the static final field is initialized. I left some additional documentation in RecordMeta: fields crtRecordMeta, crtRecordIface and static method get(Class<? extends DataModelObject>);
I also want to keep the final
modifier on the _metadata
field. I prefer your second approach. However, I don't like the synchronization on the DmoClass
class and the use of static variables for the crt*
information.
It seems the two things you are protecting with synchronization are the crt*
static variables in RecordMeta
and the static DmoClass.classes
cache. I would propose that instead:
- in the
RecordMeta
c'tor, we store in a thread-local variable (using a struct-like, static inner class) the data currently in thecrt*
variables; and - in
RecordMeta.get
, we retrieve the same data from the thread-local variable and return theRecordMeta
instance for the DMO class currently being loaded, then null out the thread-local variable contents; and - use a
ConcurrentHashMap
forDmoClass.classes
, usingputIfAbsent
; and - remove the
sychronized(DmoClass.class)
block.
I use the thread-local technique in AsmClassLoader
to avoid a synchronization bottleneck in a similar situation. See the local
variable and the loadClass
and defineClass
methods, as well as the Data
inner class.
The class loader locking mechanism will ensure that we can't load separate classes for the same class definition. Even if we try to load the same class twice nearly simultaneously, the first one defined wins. So, at most, we waste the assembly time if the DMO implementation class for the same DMO interface is requested by multiple sessions simultaneously. I prefer this possible overhead to the cost of synchronizing every assembly across all DMO interfaces on DmoClass
. Using Map.putIfAbsent
may be redundant, but it seems more consistent with the design.
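The combination of a thread-local handoff and putIfAbsent might look roughly like this. This is a hedged sketch with invented names (MetaHolder is not a FWD class); in the real design the producer would be the RecordMeta constructor and the consumer the generated DMO class's static initializer.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of the thread-local handoff: the creator stashes per-load state,
// the class being loaded retrieves it exactly once on the same thread,
// and the shared cache relies on putIfAbsent instead of a synchronized block.
public class MetaHolder
{
   /** Per-thread slot holding the metadata for the class being loaded. */
   private static final ThreadLocal<Object> current = new ThreadLocal<>();

   /** Shared cache; putIfAbsent lets the first definition win, lock-free. */
   private static final ConcurrentMap<String, Object> cache =
      new ConcurrentHashMap<>();

   /** Called by the creator just before triggering the class load. */
   public static void stash(Object meta)
   {
      current.set(meta);
   }

   /** Called from the new class's static initializer; clears the slot. */
   public static Object take()
   {
      Object meta = current.get();
      current.remove();
      return meta;
   }

   /** Registers an instance, keeping whichever was cached first. */
   public static Object register(String key, Object meta)
   {
      Object prior = cache.putIfAbsent(key, meta);
      return prior != null ? prior : meta;
   }
}
```

Because the stash and take happen on the same thread that triggers the class load, no cross-thread synchronization is needed for the handoff itself; only the cache is shared, and ConcurrentHashMap handles that.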
- I mentioned earlier that data is now protected (it cannot be package-private as Record needs access to it). To implement direct access from PropertyMapper I implemented an accessor that allows only objects of this kind to obtain the data member. As the PropertyMappers are created at import time only and they are final, their data field is safe in normal FWD mode;
I thought BaseRecord.data
has always been protected
. So, yes, no problem with that.
- I spotted an inheritance issue with the Record class and the dynamically generated implementations. Before the latest commits, the new classes were lacking the implementations of Undoable Undoable.deepCopy() and void Undoable.assign(Undoable), which were inherited via the Persistable interface. I added code for generating these methods (by delegating the calls to Record Record.snapshot() and void Record.copy(BaseRecord) respectively). However, I noticed that you also added in BaseRecord a new method, BaseRecord deepCopy(), to be used with the UNDO framework. There is a problem here: the return types do not match (Undoable and BaseRecord are unrelated) so the classes are invalid.
I completely forgot that Persistable
extends Undoable
. There is actually no need to integrate the BaseRecord
hierarchy with the standard Undoable
mechanism in FWD. We don't treat DMOs as Undoable
.
The fact that I recently added deepCopy
and named it that is coincidental. I needed the capability to make a deep copy (i.e., not only the data, but the state as well) of a DMO for the NO-UNDO support (for use with SQLRedo
). I saw your existing snapshot
and copy
methods and I didn't want to change them to add the copying of state, in case that went beyond what you needed for your purposes when you implemented those methods. If they should all be rationalized into a simpler set of common methods, let's do that. But if not, we can leave them as is.
I would suggest that instead of implementing the Undoable
interface, we remove it from the hierarchy. IIRC, there are places in the runtime we currently use the Undoable
methods. This was done as a convenience, because the methods were what we needed for some other purpose (like copying a record), though not as part of a framework that worked with Undoable
objects per se.
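The difference between the existing snapshot/copy pair and the new state-aware deepCopy can be illustrated with a toy record. The fields below are invented for illustration; the real BaseRecord state is far richer than a single flag.

```java
import java.util.Arrays;

// Toy record: "data" mirrors the DMO values, while "isNew" stands in for
// the record state that snapshot() does not carry over but deepCopy() does.
public class ToyRecord
{
   Object[] data;
   boolean  isNew;

   ToyRecord(Object[] data, boolean isNew)
   {
      this.data  = data;
      this.isNew = isNew;
   }

   /** Copies the data only; the state resets to defaults. */
   public ToyRecord snapshot()
   {
      return new ToyRecord(Arrays.copyOf(data, data.length), false);
   }

   /** Copies data AND state, as needed by the NO-UNDO (SQLRedo) support. */
   public ToyRecord deepCopy()
   {
      return new ToyRecord(Arrays.copyOf(data, data.length), isNew);
   }
}
```

In this model, rationalizing the methods would mean deciding whether callers of snapshot() can tolerate state being copied too; if not, both flavors must stay.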
#217 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
I noticed your TODO in DatabaseManager. I am going to address it immediately, as this is used in several places, including at conversion time;
Thank you.
There is an idea in my old implementation which perhaps was not as visible: the fact that when configuring the value for PRIMARY_KEY
, the p2j.cfg.xml
is also checked for a value for this setting before defaulting to DEFAULT_PRIMARY_KEY = "recid"
. I think we should keep this because it keeps the environment more solid. Otherwise, forgetting to update the value in the directory will lead to failed queries.
There was a 3rd idea which I dropped earlier when analysing the solution, so I did not present it to you. In fact it was closer to what you proposed, but keeping the temporary data in
I also want to keep the
final
modifier on the_metadata
field. I prefer your second approach. However, I don't like the synchronization on theDmoClass
class and the use of static variables for thecrt*
information.
It seems the two things you are protecting with synchronization are thecrt*
static variables inRecordMeta
and the staticDmoClass.classes
cache. I would propose that instead:
[...]
I use the thread-local technique inAsmClassLoader
to avoid a synchronization bottleneck in a similar situation. See thelocal
variable and theloadClass
anddefineClass
methods, as well as theData
inner class.The class loader locking mechanism will ensure that we can't load separate classes for the same class definition. Even if we try to load the same class twice nearly simultaneously, the first one defined wins. So, at most, we waste the assembly time if the DMO implementation class for the same DMO interface is requested by multiple sessions simultaneously. I prefer this possible overhead to the cost of synchronizing every assembly across all DMO interfaces on
DmoClass
. UsingMap.putIfAbsent
may be redundant, but it seems more consistent with the design.
DmoClass itself. The problem is, DmoClass is not visible outside the package, so the new class cannot access it. If it were, the algorithm would go like this:
- acquire the lock on the DmoClass class;
- create the RecordMeta and save it to a static member in DmoClass;
- generate the new class binary and attempt loading it;
- the final _metadata field of the new class is initialized using the DmoClass.getCurrentMetadata() static method;
- the static member is reset to null, eventually in a finally construct;
- the lock on the DmoClass class is released.
The fact that I recently added
deepCopy
and named it that is coincidental. I needed the capability to make a deep copy (i.e., not only the data, but the state as well) of a DMO for the NO-UNDO support (for use withSQLRedo
). I saw your existingsnapshot
andcopy
methods and I didn't want to change them to add the copying of state, in case that went beyond what you needed for your purposes when you implemented those methods. If they should all be rationalized into a simpler set of common methods, let's do that. But if not, we can leave them as is.
I would suggest that instead of implementing theUndoable
interface, we remove it from the hierarchy. IIRC, there are places in the runtime we currently use theUndoable
methods. This was done as a convenience, because the methods were what we needed for some other purpose (like copying a record), though not as part of a framework that worked withUndoable
objects per se.
The snapshot
and copy
methods were created because I tried to think ahead. Indeed, I tried to use them in my attempted UNDO solution, but I still kept them as they were used in several other places. The new deepCopy() you created is an improved, status-aware method.
I dropped the Undoable
from the Persistable
parents completely and it seems OK.
#218 Updated by Eric Faulhaber about 4 years ago
Ovidiu, when I try to log into Hotel GUI embedded mode (or run any test case) with rev 14111, I get this:
[04/22/2020 05:08:49 EDT] (com.goldencode.p2j.util.ControlFlowOps$ExternalProgramResolver:SEVERE) Unable to resolve external program.
java.lang.RuntimeException: Failed to set metadata on new Record
   at com.goldencode.p2j.persist.orm.DmoClass.forInterface(DmoClass.java:328)
   at com.goldencode.p2j.persist.orm.DmoMetadataManager.getImplementingClass(DmoMetadataManager.java:224)
   at com.goldencode.p2j.persist.RecordBuffer.<init>(RecordBuffer.java:1700)
   at com.goldencode.p2j.persist.TemporaryBuffer.<init>(TemporaryBuffer.java:651)
   at com.goldencode.p2j.persist.TemporaryBuffer.makeBuffer(TemporaryBuffer.java:2194)
   at com.goldencode.p2j.persist.TemporaryBuffer.makeBuffer(TemporaryBuffer.java:2157)
   at com.goldencode.p2j.persist.TemporaryBuffer.define(TemporaryBuffer.java:732)
   at com.goldencode.hotel.FwdEmbeddedDriver.<init>(FwdEmbeddedDriver.java:109)
   at com.goldencode.hotel.FwdEmbeddedDriverConstructorAccess.newInstance(Unknown Source)
   at com.goldencode.p2j.util.ControlFlowOps$ExternalProcedureCaller.<init>(ControlFlowOps.java:8724)
   at com.goldencode.p2j.util.ControlFlowOps$ExternalProgramResolver.resolve(ControlFlowOps.java:9082)
   at com.goldencode.p2j.util.ControlFlowOps.invokeExternalProcedure(ControlFlowOps.java:5256)
   at com.goldencode.p2j.util.ControlFlowOps.lambda$invoke$0(ControlFlowOps.java:3746)
   at com.goldencode.p2j.util.BlockRunner.process(BlockRunner.java:153)
   at com.goldencode.p2j.util.BlockRunner.lambda$execute$0(BlockRunner.java:138)
   at com.goldencode.p2j.util.ErrorManager.silent(ErrorManager.java:606)
   at com.goldencode.p2j.util.BlockRunner.execute(BlockRunner.java:138)
   at com.goldencode.p2j.util.ControlFlowOps.invoke(ControlFlowOps.java:3777)
   at com.goldencode.p2j.ui.LogicalTerminal.lambda$invoke$9(LogicalTerminal.java:15480)
   at com.goldencode.p2j.ui.LogicalTerminal.invokeOnServer(LogicalTerminal.java:16378)
   at com.goldencode.p2j.ui.LogicalTerminal.invoke(LogicalTerminal.java:15480)
   at com.goldencode.p2j.ui.LogicalTerminalMethodAccess.invoke(Unknown Source)
   at com.goldencode.p2j.util.MethodInvoker.invoke(MethodInvoker.java:156)
   at com.goldencode.p2j.net.Dispatcher.processInbound(Dispatcher.java:757)
   at com.goldencode.p2j.net.Conversation.block(Conversation.java:412)
   at com.goldencode.p2j.net.Conversation.waitMessage(Conversation.java:348)
   at com.goldencode.p2j.net.Queue.transactImpl(Queue.java:1201)
   at com.goldencode.p2j.net.Queue.transact(Queue.java:672)
   at com.goldencode.p2j.net.BaseSession.transact(BaseSession.java:271)
   at com.goldencode.p2j.net.HighLevelObject.transact(HighLevelObject.java:211)
   at com.goldencode.p2j.net.RemoteObject$RemoteAccess.invokeCore(RemoteObject.java:1473)
   at com.goldencode.p2j.net.InvocationStub.invoke(InvocationStub.java:145)
   at com.sun.proxy.$Proxy6.waitFor(Unknown Source)
   at com.goldencode.p2j.ui.LogicalTerminal.waitFor(LogicalTerminal.java:6511)
   at com.goldencode.p2j.ui.LogicalTerminal.waitFor(LogicalTerminal.java:6281)
   at com.goldencode.hotel.Emain.lambda$null$0(Emain.java:64)
   at com.goldencode.p2j.util.Block.body(Block.java:604)
   at com.goldencode.p2j.util.BlockManager.processBody(BlockManager.java:8087)
   at com.goldencode.p2j.util.BlockManager.doBlockWorker(BlockManager.java:9112)
   at com.goldencode.p2j.util.BlockManager.doBlock(BlockManager.java:1132)
   at com.goldencode.hotel.Emain.lambda$execute$1(Emain.java:62)
   at com.goldencode.p2j.util.Block.body(Block.java:604)
   at com.goldencode.p2j.util.BlockManager.processBody(BlockManager.java:8087)
   at com.goldencode.p2j.util.BlockManager.topLevelBlock(BlockManager.java:7808)
   at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:467)
   at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:438)
   at com.goldencode.hotel.Emain.execute(Emain.java:50)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:498)
   at com.goldencode.p2j.util.Utils.invoke(Utils.java:1380)
   at com.goldencode.p2j.main.StandardServer$MainInvoker.execute(StandardServer.java:2125)
   at com.goldencode.p2j.main.StandardServer.invoke(StandardServer.java:1561)
   at com.goldencode.p2j.main.StandardServer.standardEntry(StandardServer.java:544)
   at com.goldencode.p2j.main.StandardServerMethodAccess.invoke(Unknown Source)
   at com.goldencode.p2j.util.MethodInvoker.invoke(MethodInvoker.java:156)
   at com.goldencode.p2j.net.Dispatcher.processInbound(Dispatcher.java:757)
   at com.goldencode.p2j.net.Conversation.block(Conversation.java:412)
   at com.goldencode.p2j.net.Conversation.run(Conversation.java:232)
   at java.lang.Thread.run(Thread.java:748)
Please resolve this with high priority.
#219 Updated by Ovidiu Maxiniuc about 4 years ago
Eric,
I committed r11412. The changes include:
- I fixed the reported issue. It was caused by the key being different from the crtIface when RecordMeta.get() is called, so the double check I was so proud of fails. However, this happens for temp-tables only, because there are two interfaces involved: TT_1 and TT_1_1. When the DMO class is created it uses TT_1_1, but the metadata annotations are found in TT_1. The permanent DMOs are OK.
- the DatabaseManager.PRIMARY_KEY is used when initializing the DmoMeta for each registered DMO (before the impl class is generated). This normally happens at runtime, but it also happens when the import is run. I made it so that it is not accessed at conversion time, but for import I had to add some workarounds; for example, I created a PK static field in Session, which is initialized depending on the current configuration. From there it is used in multiple places for both runtime and import. The Dialects also need access to this value. There is also the reverse issue: once DMO classes are generated they are registered with DatabaseManager. As a consequence, its static block is executed. When this happens at import, it crashes because the binding to the registry fails.
- apparently we advanced a bit with the hotel_gui. During my latest tests, the execution path reached other places where the .id key was hardcoded. I fixed them, too;
- there is a strange issue with _multiplex, which I made private. It does not seem logical, so I will come back to it later;
- the _metadata issue has not been addressed yet.
#220 Updated by Ovidiu Maxiniuc about 4 years ago
BTW, for import and for the server to start, the
-Djava.system.class.loader=com.goldencode.p2j.classloader.MultiClassLoader
flag must be added to the launchers: in server.sh and build.db.xml, for the H2 and PSQL dialects respectively. Otherwise AsmClassLoader.getInstance() will return null and newly generated classes won't load in DmoClass.forInterface(Class<? extends DataModelObject>).
#221 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
BTW, for import and for the server to start, the
-Djava.system.class.loader=com.goldencode.p2j.classloader.MultiClassLoader
flag must be added to the launchers: in server.sh and build.db.xml, for the H2 and PSQL dialects respectively. Otherwise AsmClassLoader.getInstance() will return null and newly generated classes won't load in DmoClass.forInterface(Class<? extends DataModelObject>).
Please post the changes as patches relative to Hotel GUI rev 202 here. We will integrate them to the main Hotel GUI line once 4011a is in trunk. Thanks.
#222 Updated by Eric Faulhaber about 4 years ago
Is MultiClassLoader
really needed for the import, or just AsmClassLoader
?
#223 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
Is
MultiClassLoader
really needed for the import, or justAsmClassLoader
?
Good point. The com.goldencode.asm.AsmClassLoader
will do.
#224 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
- the DatabaseManager.PRIMARY_KEY is used when initializing the DmoMeta for each registered DMO (before the impl class is generated). This normally happens at runtime, but it also happens when the import is run. I made it so that it is not accessed at conversion time, but for import I had to add some workarounds; for example, I created a PK static field in Session, which is initialized depending on the current configuration. From there it is used in multiple places for both runtime and import. The Dialects also need access to this value. There is also the reverse issue: once DMO classes are generated they are registered with DatabaseManager. As a consequence, its static block is executed. When this happens at import, it crashes because the binding to the registry fails.
I was not thinking of database import when I put PRIMARY_KEY_NAME
into DatabaseManager
. In retrospect, that was a bad choice, considering all the legacy and other runtime baggage in that class. It probably belongs in a standalone class, or we can use Session.PK
and put all the initialization code in there instead, since that class is naturally used by both import and runtime. For now, let's have DatabaseManager.PRIMARY_KEY
reference Session.PK
, instead of the other way around. We should fix up all the references to DatabaseManager.PRIMARY_KEY
to use Session.PK
instead.
#225 Updated by Eric Faulhaber about 4 years ago
- File build_xml.patch added
I checked in rev 11413, which fixes a few bugs (hard-coded references to id
as primary key) and makes Session.PK
public.
The current status of Hotel GUI:
- embedded mode: during login, it successfully executes a number of queries and then gets hung up sending a JSON message from Java.
- virtual desktop mode: same as last update. I am debugging the unique constraint checking imbalanced scope bug.
BTW, server.sh already sets MultiClassLoader as the system class loader. I added AsmClassLoader to build_db.xml. Patch is attached.
LE: the name of the patch is misleading; the file changed was build_db.xml, not build.xml.
#226 Updated by Eric Faulhaber about 4 years ago
- File directory_xml.patch added
The attached changes to directory.xml
are needed to adjust for the removal of Hibernate. Note that these are just the persistence-related changes. The directory still needs to be prepared as always.
#227 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
I was not thinking of database import when I put PRIMARY_KEY_NAME into DatabaseManager. In retrospect, that was a bad choice, considering all the legacy and other runtime baggage in that class. It probably belongs in a standalone class, or we can use Session.PK and put all the initialization code there instead, since that class is naturally used by both import and runtime. For now, let's have DatabaseManager.PRIMARY_KEY reference Session.PK, instead of the other way around. We should fix up all the references to DatabaseManager.PRIMARY_KEY to use Session.PK instead.
OK, I will do this.
#228 Updated by Ovidiu Maxiniuc about 4 years ago
I used r11413 and hotel_gui deployed smoothly with deploy.all. I found some new hardcoded id references, which were replaced with Session.PK. I will commit this soon, along with the new Session.PK initialization.
During the tests, the client crashed due to the creation of an index that already existed in the database. I am investigating this, but there is another chained issue: the event (a PersistenceException) was caught and handled by the following block:
DBUtils.handleException(database, exc);
local.closeSession(true);
This caused the session which opened the transaction (boolean inTx = local.beginTransaction(null);) to be closed. In the finally block we have:
if (inTx)
{
   local.commit();
}
Because the transaction and session are closed the following exception is thrown:
Caused by: java.lang.IllegalStateException: [00000004:00000013:bogus-->local/_temp/primary] no current transaction available to commit
   at com.goldencode.p2j.persist.Persistence$Context.commit(Persistence.java:4909)
   at com.goldencode.p2j.persist.Persistence.executeSQLBatch(Persistence.java:3895)
   at com.goldencode.p2j.persist.Persistence.executeSQLBatch(Persistence.java:2953)
   at com.goldencode.p2j.persist.TemporaryBuffer$Context.doCreateTable(TemporaryBuffer.java:6511)
   at com.goldencode.p2j.persist.TemporaryBuffer$Context.createTable(TemporaryBuffer.java:6307)
   at com.goldencode.p2j.persist.TemporaryBuffer.openScope(TemporaryBuffer.java:4159)
   at com.goldencode.p2j.persist.RecordBuffer.openScope(RecordBuffer.java:2865)
   at com.goldencode.hotel.common.adm2.Smart.lambda$execute$1(Smart.java:144)
   at com.goldencode.p2j.util.Block.body(Block.java:604)
   [...]
Do we need to be so eager to close the session in the attempt to salvage the normal execution path?
OTOH, I am thinking of a dirty workaround: make closeSession(true) acquire a new connection to the database immediately after closing the faulty one, and make commit()/rollback() no-ops in this case. It might hide other issues (like unbalanced begin/end transactions), so I don't think it's the best solution.
#229 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
Do we need to be so eager to close the session in the attempt to salvage the normal execution path?
I'm not sure. A SQL exception is either going to mean a programming error (in which case we should be able to fix it and thus avoid it) or a connection or other unforeseeable database problem. Things like unique and non-null constraint violations should be caught by our validation (and if not, they represent another fixable programming error). I guess it is only the case of connection/other database problems where we would need to close the session. Perhaps we can do a connection test in the error handling to decide.
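The connection test suggested above could be sketched with the standard JDBC Connection.isValid check. This is only a hypothetical helper (the ErrorPolicy name and the decision logic are illustrative assumptions, not the actual Persistence error handling): keep the session alive for fixable programming errors, and close it only when the underlying connection is genuinely broken.

```java
import java.sql.Connection;
import java.sql.SQLException;

// Hypothetical sketch: decide whether a session must be closed after a SQL
// exception by testing whether the underlying connection is still usable.
final class ErrorPolicy
{
   /** Returns true when the session should be closed (connection is broken). */
   static boolean shouldCloseSession(Connection conn)
   {
      try
      {
         // isValid() pings the database with a short timeout (in seconds);
         // a null or closed connection is treated as broken as well.
         return conn == null || conn.isClosed() || !conn.isValid(2);
      }
      catch (SQLException exc)
      {
         return true;  // cannot even test the connection: treat it as broken
      }
   }
}
```

On a programming error (unique/non-null violation that slipped past validation) the connection typically remains valid, so this test would leave the session and its open transaction intact.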
I think there probably is a flaw in the way I refactored the old push/pop implicit transaction paradigm to the new design. In many places where we attempt to get a "quick" transaction for some work (to handle the case where we are not already within a larger one), we are doing the commit in a finally block. We probably need to do this inside the try block instead.
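The flaw can be illustrated with a small self-contained sketch (the QuickTx class and method names are hypothetical, not the real Persistence API): with the commit in finally, the error path closes the session first, and the commit then fires anyway and fails with exactly the kind of IllegalStateException shown earlier; moving the commit inside the try limits it to the success path.

```java
// Hypothetical sketch of the "quick transaction" pattern under discussion.
final class QuickTx
{
   private boolean open = false;

   boolean beginTransaction() { open = true; return true; }

   void commit()
   {
      if (!open)
      {
         throw new IllegalStateException("no current transaction available to commit");
      }
      open = false;
   }

   void closeSession() { open = false; }  // error handling closes the tx too

   /** Broken variant: the commit in finally fires even after closeSession(). */
   static String commitInFinally(boolean fail)
   {
      QuickTx local = new QuickTx();
      boolean inTx = local.beginTransaction();
      try
      {
         if (fail) throw new RuntimeException("simulated SQL failure");
         return "ok";
      }
      catch (RuntimeException exc)
      {
         local.closeSession();      // salvage attempt closes the session
         return "handled";
      }
      finally
      {
         if (inTx) local.commit();  // throws IllegalStateException on the error path
      }
   }

   /** Fixed variant: commit only on the success path, inside the try block. */
   static String commitInTry(boolean fail)
   {
      QuickTx local = new QuickTx();
      boolean inTx = local.beginTransaction();
      try
      {
         if (fail) throw new RuntimeException("simulated SQL failure");
         if (inTx) local.commit();
         return "ok";
      }
      catch (RuntimeException exc)
      {
         local.closeSession();
         return "handled";
      }
   }
}
```

With fail=true, commitInFinally ends in IllegalStateException (the finally clause's exception replaces the "handled" return value), while commitInTry returns "handled" cleanly; both return "ok" on the success path.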
OTOH, I am thinking of a dirty workaround: make closeSession(true) acquire a new connection to the database immediately after closing the faulty one, and make commit()/rollback() no-ops in this case. It might hide other issues (like unbalanced begin/end transactions), so I don't think it's the best solution.
I just unwound a lot of processing like this for the implicit transactions. I'd rather not add back more. But maybe I'm missing the idea behind your workaround?
#230 Updated by Ovidiu Maxiniuc about 4 years ago
- changed the initialization of _metadata of the generated DmoClass classes;
- moved the initialization of the PK name in Session from DatabaseManager;
- replaced hardcoded id with Session.PK;
- fixed an IllegalArgumentException when property types are detected (by chance, also related to PK);
- lowered the verbosity of the FQL converter and of the listing of executed SQL statements.
#231 Updated by Eric Faulhaber about 4 years ago
Why do we create temp tables with the suffix "on commit drop transactional"? What does this do?
#232 Updated by Eric Faulhaber about 4 years ago
Update on the unbalanced scopes in UniqueTracker...
I've reworked the scope mechanism in UniqueTracker to rely on Finalizable callbacks. In BufferManager.beginTx, we register for the callbacks if we are in any kind of transaction block. The logic in UniqueTracker.Context then does the following:
- on entry events: push a scope;
- on iterate events: pop a scope, then push a scope;
- on retry events: pop a scope, then push a scope;
- on finished events: pop a scope;
- when the context ends normally or abnormally, all remaining scopes are popped (presumably there should only be scopes left on abnormal exit).
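As a minimal sketch of the rules above (the ScopeSketch class is hypothetical and far simpler than the real UniqueTracker.Context), the callbacks amount to keeping a stack balanced across the block events:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the scope-balancing rules driven by the block events.
final class ScopeSketch
{
   private final Deque<Object> scopes = new ArrayDeque<>();

   void entry()    { scopes.push(new Object()); }               // push a scope
   void iterate()  { scopes.pop(); scopes.push(new Object()); } // pop, then push
   void retry()    { scopes.pop(); scopes.push(new Object()); } // pop, then push
   void finished() { scopes.pop(); }                            // pop a scope

   /** Context end: pop whatever remains; non-zero only on abnormal exit. */
   int contextEnd()
   {
      int leaked = scopes.size();
      scopes.clear();
      return leaked;
   }

   int depth() { return scopes.size(); }
}
```

A balanced run (entry, any number of iterate/retry, finished) leaves the stack empty, so any non-empty stack at context end signals exactly the kind of imbalance the instrumented log below is hunting for.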
This more structured approach improves things. Now we can click on the "Check-In..." button without the error in the log. However, clicking "Check-In..." again on the next dialog blows things up. It turns out, the scopes are already getting unbalanced as the first dialog is displayed. I've instrumented the code with some logging and analysis of the push/pop logic, and already as the first dialog is brought up, we see the following:
---------------------------- BEGIN ---------------------------- #0 --> >>> entry com.goldencode.hotel.AvailRoomsFrame$1.body(AvailRoomsFrame.java:579) #1 --> <<< entry com.goldencode.hotel.AvailRoomsFrame$1.body(AvailRoomsFrame.java:579) #1 --> >>> entry com.goldencode.hotel.UpdateStayDialog.execute(UpdateStayDialog.java:199) #2 --> <<< entry com.goldencode.hotel.UpdateStayDialog.execute(UpdateStayDialog.java:199) #2 --> >>> entry com.goldencode.hotel.UpdateStayDialog.startSuperProc(UpdateStayDialog.java:1245) #3 --> <<< entry com.goldencode.hotel.UpdateStayDialog.startSuperProc(UpdateStayDialog.java:1245) #3 --> commit com.goldencode.hotel.UpdateStayDialog.startSuperProc(UpdateStayDialog.java:1245) #3 --> >>> finished com.goldencode.hotel.UpdateStayDialog.startSuperProc(UpdateStayDialog.java:1245) #2 --> <<< finished com.goldencode.hotel.UpdateStayDialog.startSuperProc(UpdateStayDialog.java:1245) [04/24/2020 03:37:25 EDT] (com.goldencode.p2j.util.TransactionManager:WARNING) Database _temp was not found in the list of active databases. Adding it now. 
#2 --> >>> entry com.goldencode.hotel.UpdateStayDialog.startSuperProc(UpdateStayDialog.java:1245) #3 --> <<< entry com.goldencode.hotel.UpdateStayDialog.startSuperProc(UpdateStayDialog.java:1245) #3 --> commit com.goldencode.hotel.UpdateStayDialog.startSuperProc(UpdateStayDialog.java:1245) #3 --> >>> finished com.goldencode.hotel.UpdateStayDialog.startSuperProc(UpdateStayDialog.java:1245) #2 --> <<< finished com.goldencode.hotel.UpdateStayDialog.startSuperProc(UpdateStayDialog.java:1245) #2 --> >>> entry com.goldencode.hotel.common.adm2.Appserver.getAppService(Appserver.java:1313) #3 --> <<< entry com.goldencode.hotel.common.adm2.Appserver.getAppService(Appserver.java:1313) #3 --> commit com.goldencode.hotel.common.adm2.Appserver.getAppService(Appserver.java:1313) #3 --> >>> finished com.goldencode.hotel.common.adm2.Appserver.getAppService(Appserver.java:1313) #2 --> <<< finished com.goldencode.hotel.common.adm2.Appserver.getAppService(Appserver.java:1313) #2 --> >>> entry com.goldencode.hotel.common.adm2.Appserver.setAppService(Appserver.java:1836) #3 --> <<< entry com.goldencode.hotel.common.adm2.Appserver.setAppService(Appserver.java:1836) #3 --> commit com.goldencode.hotel.common.adm2.Appserver.setAppService(Appserver.java:1836) #3 --> >>> finished com.goldencode.hotel.common.adm2.Appserver.setAppService(Appserver.java:1836) #2 --> <<< finished com.goldencode.hotel.common.adm2.Appserver.setAppService(Appserver.java:1836) #2 --> >>> entry com.goldencode.hotel.UpdateStayDialog.startSuperProc(UpdateStayDialog.java:1245) #3 --> <<< entry com.goldencode.hotel.UpdateStayDialog.startSuperProc(UpdateStayDialog.java:1245) #3 --> commit com.goldencode.hotel.UpdateStayDialog.startSuperProc(UpdateStayDialog.java:1245) #3 --> >>> finished com.goldencode.hotel.UpdateStayDialog.startSuperProc(UpdateStayDialog.java:1245) #2 --> <<< finished com.goldencode.hotel.UpdateStayDialog.startSuperProc(UpdateStayDialog.java:1245) #2 --> >>> entry 
com.goldencode.hotel.UpdateStayDialog.startSuperProc(UpdateStayDialog.java:1245) #3 --> <<< entry com.goldencode.hotel.UpdateStayDialog.startSuperProc(UpdateStayDialog.java:1245) #3 --> commit com.goldencode.hotel.UpdateStayDialog.startSuperProc(UpdateStayDialog.java:1245) #3 --> >>> finished com.goldencode.hotel.UpdateStayDialog.startSuperProc(UpdateStayDialog.java:1245) #2 --> <<< finished com.goldencode.hotel.UpdateStayDialog.startSuperProc(UpdateStayDialog.java:1245) #2 --> >>> entry com.goldencode.hotel.common.adm2.Smart.modifyListProperty(Smart.java:1310) #3 --> <<< entry com.goldencode.hotel.common.adm2.Smart.modifyListProperty(Smart.java:1310) #3 --> >>> entry com.goldencode.hotel.common.adm2.Smart.getContainerSourceEvents(Smart.java:2329) #4 --> <<< entry com.goldencode.hotel.common.adm2.Smart.getContainerSourceEvents(Smart.java:2329) #4 --> commit com.goldencode.hotel.common.adm2.Smart.getContainerSourceEvents(Smart.java:2329) #4 --> >>> finished com.goldencode.hotel.common.adm2.Smart.getContainerSourceEvents(Smart.java:2329) #3 --> <<< finished com.goldencode.hotel.common.adm2.Smart.getContainerSourceEvents(Smart.java:2329) #3 --> >>> entry com.goldencode.hotel.common.adm2.Smart.setContainerSourceEvents(Smart.java:3785) #4 --> <<< entry com.goldencode.hotel.common.adm2.Smart.setContainerSourceEvents(Smart.java:3785) #4 --> commit com.goldencode.hotel.common.adm2.Smart.setContainerSourceEvents(Smart.java:3785) #4 --> >>> finished com.goldencode.hotel.common.adm2.Smart.setContainerSourceEvents(Smart.java:3785) #3 --> <<< finished com.goldencode.hotel.common.adm2.Smart.setContainerSourceEvents(Smart.java:3785) #3 --> commit com.goldencode.hotel.common.adm2.Smart.modifyListProperty(Smart.java:1310) #3 --> >>> finished com.goldencode.hotel.common.adm2.Smart.modifyListProperty(Smart.java:1310) #2 --> <<< finished com.goldencode.hotel.common.adm2.Smart.modifyListProperty(Smart.java:1310) #2 --> >>> entry 
com.goldencode.hotel.common.adm2.Smart.getSupportedLinks(Smart.java:2851) #3 --> <<< entry com.goldencode.hotel.common.adm2.Smart.getSupportedLinks(Smart.java:2851) #3 --> commit com.goldencode.hotel.common.adm2.Smart.getSupportedLinks(Smart.java:2851) #3 --> >>> finished com.goldencode.hotel.common.adm2.Smart.getSupportedLinks(Smart.java:2851) #2 --> <<< finished com.goldencode.hotel.common.adm2.Smart.getSupportedLinks(Smart.java:2851) #2 --> >>> entry com.goldencode.hotel.common.adm2.Smart.modifyListProperty(Smart.java:1310) #3 --> <<< entry com.goldencode.hotel.common.adm2.Smart.modifyListProperty(Smart.java:1310) #3 --> >>> entry com.goldencode.hotel.common.adm2.Smart.getSupportedLinks(Smart.java:2851) #4 --> <<< entry com.goldencode.hotel.common.adm2.Smart.getSupportedLinks(Smart.java:2851) #4 --> commit com.goldencode.hotel.common.adm2.Smart.getSupportedLinks(Smart.java:2851) #4 --> >>> finished com.goldencode.hotel.common.adm2.Smart.getSupportedLinks(Smart.java:2851) #3 --> <<< finished com.goldencode.hotel.common.adm2.Smart.getSupportedLinks(Smart.java:2851) #3 --> >>> entry com.goldencode.hotel.common.adm2.Smart.setSupportedLinks(Smart.java:4209) #4 --> <<< entry com.goldencode.hotel.common.adm2.Smart.setSupportedLinks(Smart.java:4209) #4 --> commit com.goldencode.hotel.common.adm2.Smart.setSupportedLinks(Smart.java:4209) #4 --> >>> finished com.goldencode.hotel.common.adm2.Smart.setSupportedLinks(Smart.java:4209) #3 --> <<< finished com.goldencode.hotel.common.adm2.Smart.setSupportedLinks(Smart.java:4209) #3 --> commit com.goldencode.hotel.common.adm2.Smart.modifyListProperty(Smart.java:1310) #3 --> >>> finished com.goldencode.hotel.common.adm2.Smart.modifyListProperty(Smart.java:1310) #2 --> <<< finished com.goldencode.hotel.common.adm2.Smart.modifyListProperty(Smart.java:1310) [04/24/2020 03:37:25 EDT] (com.goldencode.p2j.util.TransactionManager:WARNING) Database hotel was not found in the list of active databases. Adding it now. 
#2 --> >>> entry com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< entry com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate 
com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit 
com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate 
com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate 
com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> <<< iterate com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> rollback com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #3 --> >>> finished com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #2 --> <<< finished com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1014) #2 --> >>> entry 
com.goldencode.hotel.common.adm2.Containr.createObjects(Containr.java:1656) #3 --> <<< entry com.goldencode.hotel.common.adm2.Containr.createObjects(Containr.java:1656) #3 --> >>> entry com.goldencode.hotel.UpdateStayDialog.admCreateObjects(UpdateStayDialog.java:1459) #4 --> <<< entry com.goldencode.hotel.UpdateStayDialog.admCreateObjects(UpdateStayDialog.java:1459) #4 --> commit com.goldencode.hotel.UpdateStayDialog.admCreateObjects(UpdateStayDialog.java:1459) #4 --> >>> finished com.goldencode.hotel.UpdateStayDialog.admCreateObjects(UpdateStayDialog.java:1459) #3 --> <<< finished com.goldencode.hotel.UpdateStayDialog.admCreateObjects(UpdateStayDialog.java:1459) #3 --> commit com.goldencode.hotel.common.adm2.Containr.createObjects(Containr.java:1656) #3 --> >>> finished com.goldencode.hotel.common.adm2.Containr.createObjects(Containr.java:1656) #2 --> <<< finished com.goldencode.hotel.common.adm2.Containr.createObjects(Containr.java:1656) #2 --> >>> entry com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1143) #3 --> <<< entry com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1143) #3 --> >>> entry com.goldencode.hotel.UpdateStayDialog.initializeObject(UpdateStayDialog.java:1520) #4 --> <<< entry com.goldencode.hotel.UpdateStayDialog.initializeObject(UpdateStayDialog.java:1520) #4 --> >>> entry com.goldencode.hotel.common.adm2.Containr.initializeObject(Containr.java:2958) #5 --> <<< entry com.goldencode.hotel.common.adm2.Containr.initializeObject(Containr.java:2958) #5 --> >>> entry com.goldencode.hotel.common.adm2.Containr.initializeVisualContainer(Containr.java:3181) #6 --> <<< entry com.goldencode.hotel.common.adm2.Containr.initializeVisualContainer(Containr.java:3181) #6 --> commit com.goldencode.hotel.common.adm2.Containr.initializeVisualContainer(Containr.java:3181) #6 --> >>> finished com.goldencode.hotel.common.adm2.Containr.initializeVisualContainer(Containr.java:3181) #5 --> <<< finished 
com.goldencode.hotel.common.adm2.Containr.initializeVisualContainer(Containr.java:3181) #5 --> >>> entry com.goldencode.hotel.common.adm2.Containr.initializeDataObjects(Containr.java:2855) #6 --> <<< entry com.goldencode.hotel.common.adm2.Containr.initializeDataObjects(Containr.java:2855) #6 --> >>> finished com.goldencode.hotel.common.adm2.Containr.lambda$initializeDataObjects$100(Containr.java:2899) #5 --> <<< finished com.goldencode.hotel.common.adm2.Containr.lambda$initializeDataObjects$100(Containr.java:2899) #5 --> >>> entry com.goldencode.hotel.common.adm2.Containr.fetchContainedData(Containr.java:1830) #6 --> <<< entry com.goldencode.hotel.common.adm2.Containr.fetchContainedData(Containr.java:1830) #6 --> >>> entry com.goldencode.hotel.common.adm2.Smart.changeCursor(Smart.java:870) #7 --> <<< entry com.goldencode.hotel.common.adm2.Smart.changeCursor(Smart.java:870) #7 --> commit com.goldencode.hotel.common.adm2.Smart.changeCursor(Smart.java:870) #7 --> >>> finished com.goldencode.hotel.common.adm2.Smart.changeCursor(Smart.java:870) #6 --> <<< finished com.goldencode.hotel.common.adm2.Smart.changeCursor(Smart.java:870) #6 --> >>> finished com.goldencode.hotel.common.adm2.Containr.lambda$fetchContainedData$82(Containr.java:1850) #5 --> <<< finished com.goldencode.hotel.common.adm2.Containr.lambda$fetchContainedData$82(Containr.java:1850) #5 --> >>> entry com.goldencode.hotel.common.adm2.Appserver.setAppService(Appserver.java:1836) #6 --> <<< entry com.goldencode.hotel.common.adm2.Appserver.setAppService(Appserver.java:1836) #6 --> commit com.goldencode.hotel.common.adm2.Appserver.setAppService(Appserver.java:1836) #6 --> >>> finished com.goldencode.hotel.common.adm2.Appserver.setAppService(Appserver.java:1836) #5 --> <<< finished com.goldencode.hotel.common.adm2.Appserver.setAppService(Appserver.java:1836) #5 --> >>> entry com.goldencode.hotel.common.adm2.Smart.changeCursor(Smart.java:870) #6 --> <<< entry 
com.goldencode.hotel.common.adm2.Smart.changeCursor(Smart.java:870) #6 --> commit com.goldencode.hotel.common.adm2.Smart.changeCursor(Smart.java:870) #6 --> >>> finished com.goldencode.hotel.common.adm2.Smart.changeCursor(Smart.java:870) #5 --> <<< finished com.goldencode.hotel.common.adm2.Smart.changeCursor(Smart.java:870) #5 --> commit com.goldencode.hotel.common.adm2.Containr.fetchContainedData(Containr.java:1830) #5 --> >>> finished com.goldencode.hotel.common.adm2.Containr.fetchContainedData(Containr.java:1830) #4 --> <<< finished com.goldencode.hotel.common.adm2.Containr.fetchContainedData(Containr.java:1830) #4 --> commit com.goldencode.hotel.common.adm2.Containr.initializeDataObjects(Containr.java:2855) #4 --> >>> finished com.goldencode.hotel.common.adm2.Containr.initializeDataObjects(Containr.java:2855) #3 --> <<< finished com.goldencode.hotel.common.adm2.Containr.initializeDataObjects(Containr.java:2855) #3 --> >>> entry com.goldencode.hotel.common.adm2.Visual.initializeObject(Visual.java:843) #4 --> <<< entry com.goldencode.hotel.common.adm2.Visual.initializeObject(Visual.java:843) #4 --> >>> entry com.goldencode.hotel.common.adm2.Smart.initializeObject(Smart.java:1112) #5 --> <<< entry com.goldencode.hotel.common.adm2.Smart.initializeObject(Smart.java:1112) #5 --> >>> entry com.goldencode.hotel.common.adm2.Smart.createControls(Smart.java:890) #6 --> <<< entry com.goldencode.hotel.common.adm2.Smart.createControls(Smart.java:890) #6 --> commit com.goldencode.hotel.common.adm2.Smart.createControls(Smart.java:890) #6 --> >>> finished com.goldencode.hotel.common.adm2.Smart.createControls(Smart.java:890) #5 --> <<< finished com.goldencode.hotel.common.adm2.Smart.createControls(Smart.java:890) #5 --> commit com.goldencode.hotel.common.adm2.Smart.initializeObject(Smart.java:1112) #5 --> >>> finished com.goldencode.hotel.common.adm2.Smart.initializeObject(Smart.java:1112) #4 --> <<< finished 
com.goldencode.hotel.common.adm2.Smart.initializeObject(Smart.java:1112) #4 --> >>> entry com.goldencode.hotel.common.adm2.Smart.linkHandles(Smart.java:3316) #5 --> <<< entry com.goldencode.hotel.common.adm2.Smart.linkHandles(Smart.java:3316) #5 --> >>> entry com.goldencode.hotel.common.adm2.Containr.getContainerTarget(Containr.java:5107) #6 --> <<< entry com.goldencode.hotel.common.adm2.Containr.getContainerTarget(Containr.java:5107) #6 --> commit com.goldencode.hotel.common.adm2.Containr.getContainerTarget(Containr.java:5107) #6 --> >>> finished com.goldencode.hotel.common.adm2.Containr.getContainerTarget(Containr.java:5107) #5 --> <<< finished com.goldencode.hotel.common.adm2.Containr.getContainerTarget(Containr.java:5107) #5 --> commit com.goldencode.hotel.common.adm2.Smart.linkHandles(Smart.java:3316) #5 --> >>> finished com.goldencode.hotel.common.adm2.Smart.linkHandles(Smart.java:3316) #4 --> <<< finished com.goldencode.hotel.common.adm2.Smart.linkHandles(Smart.java:3316) #4 --> >>> entry com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< entry com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate 
com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> iterate 
com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> entry com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> <<< entry com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> commit com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> >>> finished com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #5 --> <<< finished com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #5 --> >>> entry com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> <<< entry com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> commit com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> >>> finished com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #5 --> <<< 
finished com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> entry com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> <<< entry com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> commit com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> >>> finished com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #5 --> <<< finished com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #5 --> >>> entry com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> <<< entry com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> commit com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> >>> finished com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #5 --> <<< finished com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> entry com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> <<< entry com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> commit 
com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> >>> finished com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #5 --> <<< finished com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #5 --> >>> entry com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> <<< entry com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> commit com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> >>> finished com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #5 --> <<< finished com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> entry com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> <<< entry com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> commit com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> >>> finished com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #5 --> <<< finished com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #5 --> >>> entry com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> <<< entry com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> commit com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> >>> finished com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #5 --> <<< finished com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #5 --> >>> iterate 
com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #5 --> >>> entry com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> <<< entry com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> commit com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> >>> finished com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #5 --> <<< finished com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #5 --> >>> entry com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> <<< entry com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> commit com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #6 --> >>> finished com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #5 --> <<< finished com.goldencode.hotel.common.adm2.Smart.deleteEntry(Smart.java:2149) #5 --> >>> finished com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #4 --> <<< finished com.goldencode.hotel.common.adm2.Visual.lambda$initializeObject$49(Visual.java:954) #4 --> >>> entry com.goldencode.hotel.common.adm2.Visual.enableObject(Visual.java:719) #5 --> <<< entry com.goldencode.hotel.common.adm2.Visual.enableObject(Visual.java:719) #5 --> >>> entry com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> <<< entry com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> <<< iterate 
com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> >>> entry com.goldencode.hotel.common.adm2.Visual.popupHandle(Visual.java:3492) #7 --> <<< entry com.goldencode.hotel.common.adm2.Visual.popupHandle(Visual.java:3492) #7 --> commit com.goldencode.hotel.common.adm2.Visual.popupHandle(Visual.java:3492) #7 --> >>> finished com.goldencode.hotel.common.adm2.Visual.popupHandle(Visual.java:3492) #6 --> <<< finished com.goldencode.hotel.common.adm2.Visual.popupHandle(Visual.java:3492) #6 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> >>> entry com.goldencode.hotel.common.adm2.Visual.popupHandle(Visual.java:3492) #7 --> <<< entry com.goldencode.hotel.common.adm2.Visual.popupHandle(Visual.java:3492) #7 --> commit com.goldencode.hotel.common.adm2.Visual.popupHandle(Visual.java:3492) #7 --> >>> finished com.goldencode.hotel.common.adm2.Visual.popupHandle(Visual.java:3492) #6 --> <<< finished com.goldencode.hotel.common.adm2.Visual.popupHandle(Visual.java:3492) #6 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> >>> entry 
com.goldencode.hotel.common.adm2.Visual.popupHandle(Visual.java:3492) #7 --> <<< entry com.goldencode.hotel.common.adm2.Visual.popupHandle(Visual.java:3492) #7 --> commit com.goldencode.hotel.common.adm2.Visual.popupHandle(Visual.java:3492) #7 --> >>> finished com.goldencode.hotel.common.adm2.Visual.popupHandle(Visual.java:3492) #6 --> <<< finished com.goldencode.hotel.common.adm2.Visual.popupHandle(Visual.java:3492) #6 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> >>> entry com.goldencode.hotel.common.adm2.Visual.popupHandle(Visual.java:3492) #7 --> <<< entry com.goldencode.hotel.common.adm2.Visual.popupHandle(Visual.java:3492) #7 --> commit com.goldencode.hotel.common.adm2.Visual.popupHandle(Visual.java:3492) #7 --> >>> finished com.goldencode.hotel.common.adm2.Visual.popupHandle(Visual.java:3492) #6 --> <<< finished com.goldencode.hotel.common.adm2.Visual.popupHandle(Visual.java:3492) #6 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> <<< iterate 
com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> >>> iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> <<< iterate com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #6 --> >>> finished com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #5 --> <<< finished com.goldencode.hotel.common.adm2.Visual.lambda$enableObject$32(Visual.java:742) #5 --> >>> entry com.goldencode.hotel.UpdateStayDialog.enableUi(UpdateStayDialog.java:1472) #6 --> <<< entry com.goldencode.hotel.UpdateStayDialog.enableUi(UpdateStayDialog.java:1472) #6 --> commit com.goldencode.hotel.UpdateStayDialog.enableUi(UpdateStayDialog.java:1472) #6 --> >>> finished com.goldencode.hotel.UpdateStayDialog.enableUi(UpdateStayDialog.java:1472) #5 --> <<< finished com.goldencode.hotel.UpdateStayDialog.enableUi(UpdateStayDialog.java:1472) #5 --> commit com.goldencode.hotel.common.adm2.Visual.enableObject(Visual.java:719) #5 --> >>> finished com.goldencode.hotel.common.adm2.Visual.enableObject(Visual.java:719) #4 --> <<< finished com.goldencode.hotel.common.adm2.Visual.enableObject(Visual.java:719) #4 --> >>> entry com.goldencode.hotel.common.adm2.Containr.viewObject(Containr.java:4160) #5 --> <<< entry com.goldencode.hotel.common.adm2.Containr.viewObject(Containr.java:4160) #5 --> >>> entry com.goldencode.hotel.common.adm2.Smart.viewObject(Smart.java:2019) #6 --> <<< entry com.goldencode.hotel.common.adm2.Smart.viewObject(Smart.java:2019) #6 --> >>> entry com.goldencode.hotel.common.adm2.Smart.setObjectHidden(Smart.java:4022) #7 --> <<< entry com.goldencode.hotel.common.adm2.Smart.setObjectHidden(Smart.java:4022) #7 --> commit 
com.goldencode.hotel.common.adm2.Smart.setObjectHidden(Smart.java:4022) #7 --> >>> finished com.goldencode.hotel.common.adm2.Smart.setObjectHidden(Smart.java:4022) #6 --> <<< finished com.goldencode.hotel.common.adm2.Smart.setObjectHidden(Smart.java:4022) #6 --> >>> entry com.goldencode.hotel.common.adm2.Smart.assignLinkProperty(Smart.java:2067) #7 --> <<< entry com.goldencode.hotel.common.adm2.Smart.assignLinkProperty(Smart.java:2067) #7 --> >>> entry com.goldencode.hotel.common.adm2.Smart.linkHandles(Smart.java:3316) #8 --> <<< entry com.goldencode.hotel.common.adm2.Smart.linkHandles(Smart.java:3316) #8 --> >>> entry com.goldencode.hotel.common.adm2.Containr.getContainerTarget(Containr.java:5107) #9 --> <<< entry com.goldencode.hotel.common.adm2.Containr.getContainerTarget(Containr.java:5107) #9 --> commit com.goldencode.hotel.common.adm2.Containr.getContainerTarget(Containr.java:5107) #9 --> >>> finished com.goldencode.hotel.common.adm2.Containr.getContainerTarget(Containr.java:5107) #8 --> <<< finished com.goldencode.hotel.common.adm2.Containr.getContainerTarget(Containr.java:5107) #8 --> commit com.goldencode.hotel.common.adm2.Smart.linkHandles(Smart.java:3316) #8 --> >>> finished com.goldencode.hotel.common.adm2.Smart.linkHandles(Smart.java:3316) #7 --> <<< finished com.goldencode.hotel.common.adm2.Smart.linkHandles(Smart.java:3316) #7 --> >>> finished com.goldencode.hotel.common.adm2.Smart.lambda$assignLinkProperty$115(Smart.java:2082) #6 --> <<< finished com.goldencode.hotel.common.adm2.Smart.lambda$assignLinkProperty$115(Smart.java:2082) #6 --> commit com.goldencode.hotel.common.adm2.Smart.assignLinkProperty(Smart.java:2067) #6 --> >>> finished com.goldencode.hotel.common.adm2.Smart.assignLinkProperty(Smart.java:2067) #5 --> <<< finished com.goldencode.hotel.common.adm2.Smart.assignLinkProperty(Smart.java:2067) #5 --> commit com.goldencode.hotel.common.adm2.Smart.viewObject(Smart.java:2019) #5 --> >>> finished 
com.goldencode.hotel.common.adm2.Smart.viewObject(Smart.java:2019) #4 --> <<< finished com.goldencode.hotel.common.adm2.Smart.viewObject(Smart.java:2019) #4 --> commit com.goldencode.hotel.common.adm2.Containr.viewObject(Containr.java:4160) #4 --> >>> finished com.goldencode.hotel.common.adm2.Containr.viewObject(Containr.java:4160) #3 --> <<< finished com.goldencode.hotel.common.adm2.Containr.viewObject(Containr.java:4160) #3 --> commit com.goldencode.hotel.common.adm2.Visual.initializeObject(Visual.java:843) #3 --> >>> finished com.goldencode.hotel.common.adm2.Visual.initializeObject(Visual.java:843) #2 --> <<< finished com.goldencode.hotel.common.adm2.Visual.initializeObject(Visual.java:843) #2 --> commit com.goldencode.hotel.common.adm2.Containr.initializeObject(Containr.java:2958) #2 --> >>> finished com.goldencode.hotel.common.adm2.Containr.initializeObject(Containr.java:2958) #1 --> <<< finished com.goldencode.hotel.common.adm2.Containr.initializeObject(Containr.java:2958) #1 --> >>> entry com.goldencode.hotel.UpdateStayDialog.refreshGuests(UpdateStayDialog.java:1558) #2 --> <<< entry com.goldencode.hotel.UpdateStayDialog.refreshGuests(UpdateStayDialog.java:1558) #2 --> commit com.goldencode.hotel.UpdateStayDialog.refreshGuests(UpdateStayDialog.java:1558) #2 --> >>> finished com.goldencode.hotel.UpdateStayDialog.refreshGuests(UpdateStayDialog.java:1558) #1 --> <<< finished com.goldencode.hotel.UpdateStayDialog.refreshGuests(UpdateStayDialog.java:1558) #1 --> >>> entry com.goldencode.hotel.UpdateStayDialog.refreshServices(UpdateStayDialog.java:1572) #2 --> <<< entry com.goldencode.hotel.UpdateStayDialog.refreshServices(UpdateStayDialog.java:1572) #2 --> >>> entry com.goldencode.hotel.UpdateStayDialog.lambda$refreshServices$57(UpdateStayDialog.java:1578) #3 --> <<< entry com.goldencode.hotel.UpdateStayDialog.lambda$refreshServices$57(UpdateStayDialog.java:1578) #3 --> rollback 
com.goldencode.hotel.UpdateStayDialog.lambda$refreshServices$57(UpdateStayDialog.java:1578) #3 --> >>> finished com.goldencode.hotel.UpdateStayDialog.lambda$refreshServices$57(UpdateStayDialog.java:1578) #2 --> <<< finished com.goldencode.hotel.UpdateStayDialog.lambda$refreshServices$57(UpdateStayDialog.java:1578) #2 --> >>> entry com.goldencode.hotel.UpdateStayDialog.refreshTotalPrice(UpdateStayDialog.java:1616) #3 --> <<< entry com.goldencode.hotel.UpdateStayDialog.refreshTotalPrice(UpdateStayDialog.java:1616) #3 --> commit com.goldencode.hotel.UpdateStayDialog.refreshTotalPrice(UpdateStayDialog.java:1616) #3 --> >>> finished com.goldencode.hotel.UpdateStayDialog.refreshTotalPrice(UpdateStayDialog.java:1616) #2 --> <<< finished com.goldencode.hotel.UpdateStayDialog.refreshTotalPrice(UpdateStayDialog.java:1616) #2 --> commit com.goldencode.hotel.UpdateStayDialog.refreshServices(UpdateStayDialog.java:1572) #2 --> >>> finished com.goldencode.hotel.UpdateStayDialog.refreshServices(UpdateStayDialog.java:1572) #1 --> <<< finished com.goldencode.hotel.UpdateStayDialog.refreshServices(UpdateStayDialog.java:1572) #1 --> >>> entry com.goldencode.hotel.UpdateStayDialog.updateRoomType(UpdateStayDialog.java:1642) #2 --> <<< entry com.goldencode.hotel.UpdateStayDialog.updateRoomType(UpdateStayDialog.java:1642) #2 --> commit com.goldencode.hotel.UpdateStayDialog.updateRoomType(UpdateStayDialog.java:1642) #2 --> >>> finished com.goldencode.hotel.UpdateStayDialog.updateRoomType(UpdateStayDialog.java:1642) #1 --> <<< finished com.goldencode.hotel.UpdateStayDialog.updateRoomType(UpdateStayDialog.java:1642) #1 --> >>> entry com.goldencode.hotel.UpdateStayDialog.refreshDaysPrice(UpdateStayDialog.java:1431) #2 --> <<< entry com.goldencode.hotel.UpdateStayDialog.refreshDaysPrice(UpdateStayDialog.java:1431) #2 --> >>> entry com.goldencode.hotel.UpdateStayDialog.refreshDays(UpdateStayDialog.java:1271) #3 --> <<< entry 
com.goldencode.hotel.UpdateStayDialog.refreshDays(UpdateStayDialog.java:1271) #3 --> commit com.goldencode.hotel.UpdateStayDialog.refreshDays(UpdateStayDialog.java:1271) #3 --> >>> finished com.goldencode.hotel.UpdateStayDialog.refreshDays(UpdateStayDialog.java:1271) #2 --> <<< finished com.goldencode.hotel.UpdateStayDialog.refreshDays(UpdateStayDialog.java:1271) #2 --> >>> entry com.goldencode.hotel.UpdateStayDialog.calcPrice(UpdateStayDialog.java:1374) #3 --> <<< entry com.goldencode.hotel.UpdateStayDialog.calcPrice(UpdateStayDialog.java:1374) #3 --> >>> entry com.goldencode.hotel.UpdateStayDialog.calcPriceProc(UpdateStayDialog.java:1304) #4 --> <<< entry com.goldencode.hotel.UpdateStayDialog.calcPriceProc(UpdateStayDialog.java:1304) #4 --> >>> entry com.goldencode.hotel.UpdateStayDialog.lambda$calcPriceProc$41(UpdateStayDialog.java:1315) #5 --> <<< entry com.goldencode.hotel.UpdateStayDialog.lambda$calcPriceProc$41(UpdateStayDialog.java:1315) #5 --> commit com.goldencode.hotel.UpdateStayDialog.lambda$calcPriceProc$41(UpdateStayDialog.java:1315) #5 --> >>> finished com.goldencode.hotel.UpdateStayDialog.lambda$calcPriceProc$41(UpdateStayDialog.java:1315) #4 --> <<< finished com.goldencode.hotel.UpdateStayDialog.lambda$calcPriceProc$41(UpdateStayDialog.java:1315) #4 --> commit com.goldencode.hotel.UpdateStayDialog.calcPriceProc(UpdateStayDialog.java:1304) #4 --> >>> finished com.goldencode.hotel.UpdateStayDialog.calcPriceProc(UpdateStayDialog.java:1304) #3 --> <<< finished com.goldencode.hotel.UpdateStayDialog.calcPriceProc(UpdateStayDialog.java:1304) #3 --> commit com.goldencode.hotel.UpdateStayDialog.calcPrice(UpdateStayDialog.java:1374) #3 --> >>> finished com.goldencode.hotel.UpdateStayDialog.calcPrice(UpdateStayDialog.java:1374) #2 --> <<< finished com.goldencode.hotel.UpdateStayDialog.calcPrice(UpdateStayDialog.java:1374) #2 --> commit com.goldencode.hotel.UpdateStayDialog.refreshDaysPrice(UpdateStayDialog.java:1431) #2 --> >>> finished 
com.goldencode.hotel.UpdateStayDialog.refreshDaysPrice(UpdateStayDialog.java:1431) #1 --> <<< finished com.goldencode.hotel.UpdateStayDialog.refreshDaysPrice(UpdateStayDialog.java:1431) #1 --> >>> entry com.goldencode.hotel.UpdateStayDialog.refreshTotalPrice(UpdateStayDialog.java:1616) #2 --> <<< entry com.goldencode.hotel.UpdateStayDialog.refreshTotalPrice(UpdateStayDialog.java:1616) #2 --> commit com.goldencode.hotel.UpdateStayDialog.refreshTotalPrice(UpdateStayDialog.java:1616) #2 --> >>> finished com.goldencode.hotel.UpdateStayDialog.refreshTotalPrice(UpdateStayDialog.java:1616) #1 --> <<< finished com.goldencode.hotel.UpdateStayDialog.refreshTotalPrice(UpdateStayDialog.java:1616) #1 --> commit com.goldencode.hotel.UpdateStayDialog.initializeObject(UpdateStayDialog.java:1520) #1 --> >>> finished com.goldencode.hotel.UpdateStayDialog.initializeObject(UpdateStayDialog.java:1520) #0 --> <<< finished com.goldencode.hotel.UpdateStayDialog.initializeObject(UpdateStayDialog.java:1520) ----------------------------- END ----------------------------- finished() calls unmatched by entry() calls: com.goldencode.hotel.common.adm2.Containr.lambda$initializeDataObjects$100(Containr.java:2899) com.goldencode.hotel.common.adm2.Containr.lambda$fetchContainedData$82(Containr.java:1850) com.goldencode.hotel.common.adm2.Smart.lambda$assignLinkProperty$115(Smart.java:2082)
>>> indicates we are entering a callback. <<< indicates we are leaving a callback. The number after the hashtag is the number of scopes at the time of logging. The last part is the closest application method to the callback. The BEGIN and END markers are driven by when the number of scopes is 0.
The analysis at the bottom shows mismatches where the finished callback popped a scope, without entry having been called for the same application method to first push a scope. The number of times these methods were called is lost, as the analysis is done using sets, but the point was to find which methods are causing a scope to be popped without first having pushed one. This analysis is based on the assumption that entry and finished should be invoked (indirectly) from the same line of application code. Is that assumption correct?
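The set-based analysis described above can be sketched as follows. This is a simplified, hypothetical reconstruction (class and method names are invented for illustration; the actual FWD logging code is not reproduced here):

```java
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;

// Simplified sketch of the entry/finished imbalance analysis described
// above. Event locations are recorded in sets, so the number of calls
// per location is lost, but locations whose finished() fired without a
// matching entry() are exposed.
public class ScopeImbalance
{
   private final Set<String> entered  = new HashSet<>();
   private final Set<String> finished = new LinkedHashSet<>();

   public void onEntry(String location)    { entered.add(location); }
   public void onFinished(String location) { finished.add(location); }

   // locations where finished() was seen with no prior entry()
   public Set<String> unmatchedFinished()
   {
      Set<String> result = new LinkedHashSet<>(finished);
      result.removeAll(entered);
      return result;
   }
}
```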
The similarity between these three methods is that they all represent doTo blocks. I suspect what is happening is that there is something particular to doTo blocks in the timing of the entry callback, such that we are possibly registering for Finalizable callbacks for a block after the point where entry would be invoked for that block. However, the rest of the callbacks are invoked normally, resulting in the imbalance.
We register for the callbacks inside BlockManager.beginTx, which was refactored sometime after the original BufferManager implementation, which would register for these types of services directly from a Scopeable.scopeStart call. I understand that this method was inserted to accommodate something in the interaction between persistent procedures and transaction/buffer management, but I don't fully understand the details, or the timing with which this method is now invoked. BufferManager.beginTx is invoked both from BufferManager.scopeStart and from TransactionManager.beginTx.
Greg, Constantin, do you think there is merit to this theory? If so, any thoughts on a more reliable place to register for Finalizable and Commitable callbacks for every full or sub-transaction block, such that calls to entry are not lost?
FYI, Ovidiu, I did not check in my changes, because I don't intend to leave the logging and imbalance analysis code in place, at least not in its current form.
#233 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
Why do we create temp tables with the postfix on commit drop transactional? What does this do?
TBH, when I added support for creating tables, I tried to mimic what Hibernate was doing. For that I activated the show_sql flag and investigated the output in the console. I searched for the semantics of the suffix and I found the following:
- the first part, on commit drop, means to drop the table on transaction commit. Now, I don't think that this makes much sense.
- transactional: the H2 manual says that the CREATE TABLE command commits an open transaction, except when using TRANSACTIONAL (only supported for temporary tables).
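For reference, the two clause variants under discussion look roughly like this in H2 DDL. This is an illustrative sketch only; the table and column names are made up and do not reflect the actual generated schema:

```sql
-- ON COMMIT DROP discards the table when the enclosing transaction
-- commits; per the discussion above, FWD temp-tables must instead
-- survive multiple transactions.
CREATE LOCAL TEMPORARY TABLE tt1 (f1 INTEGER) ON COMMIT DROP TRANSACTIONAL;

-- TRANSACTIONAL alone keeps CREATE TABLE from committing an open
-- transaction (supported only for temporary tables), which is the
-- behavior being retained.
CREATE LOCAL TEMPORARY TABLE tt2 (f1 INTEGER) TRANSACTIONAL;
```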
#234 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
the first part, on commit drop, means to drop the table on transaction commit. Now, I don't think that this makes much sense.
Agreed, we don't want such behavior. We have to be able to keep the tables around through multiple transactions. I will remove this.
transactional: the H2 manual says that the CREATE TABLE command commits an open transaction, except when using TRANSACTIONAL (only supported for temporary tables).
I will leave the transactional clause in place. We don't want auto-commit behavior in the database or driver, since we have state to maintain in FWD for open transactions.
Thanks for the explanations.
#235 Updated by Eric Faulhaber about 4 years ago
I've committed rev 11415, which I believe fixes the UniqueTracker scope imbalance problem. I think the logic is sound now, but I've left some of the messy debug code behind (commented out), in case we discover any new cases with deeper testing.
The problem with my last revision is that I was assuming Finalizable entry and finished calls were always paired, when in fact finished always gets called for a block, whereas entry appears only to be called if you enter the block body (Greg, apologies, I'm sure you explained this to me at one point). So, now we push the first scope upon registering for the block callbacks and we do not rely on Finalizable.entry for that first scope push. We still pop the innermost scope on finished.
Removing on commit drop from the create table statement resolved problems which occurred when logging into Hotel GUI a second time.
I've now come across a dynamic database problem in Hotel GUI (virtual desktop). On the "Rooms" tab, click "Add Rooms..." and you will get a series of dynamic query errors. Ovidiu, please work on this next. It is higher priority than the metadata "updater" reviews.
I still cannot log into embedded mode, due to a JSON error. It seems a JSON message is being sent and is never returning. So, we sit on the "Loading..." screen forever.
Finally, switching between the tabs in virtual desktop mode still seems slower than I remember. Previously, I was using PostgreSQL and now I'm using H2, so it's not apples to apples. However, if anything, I would expect the embedded Java database to be faster. I will need to do some profiling, though first I want to try (a) with PostgreSQL; and (b) with the older FWD trunk from which we branched, to get a baseline.
#236 Updated by Greg Shah about 4 years ago
The problem with my last revision is that I was assuming Finalizable entry and finished calls were always paired, when in fact finished always gets called for a block, whereas entry appears only to be called if you enter the block body (Greg, apologies, I'm sure you explained this to me at one point). So, now we push the first scope upon registering for the block callbacks and we do not rely on Finalizable.entry for that first scope push. We still pop the innermost scope on finished.
Correct. It is init() which is paired with finished(). Each only ever happens once and each is sure to fire.
entry() occurs at the top of the block, just before the block is executed. This is the same location as iterate(), just on the first block execution rather than a subsequent non-retry execution.
The reason why doTo (or doWhile or doToWhile or repeatTo...) can possibly bypass the block execution is related to the expression being processed. If the runtime condition for the TO or WHILE clause evaluates to false, then the block won't execute. For example, DO i = 1 TO x where x < 1 would cause this.
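The skipped-body case described here can be illustrated with a tiny model of the callback ordering. This is a hypothetical sketch, not the real BlockManager code, and the event names are only illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal model of the callback ordering described above: init() and
// finished() always fire exactly once, while entry() fires only if the
// loop body is actually entered (the TO condition holds at least once);
// subsequent executions produce iterate() instead.
public class DoToModel
{
   public static List<String> run(int from, int to)
   {
      List<String> events = new ArrayList<>();
      events.add("init");
      boolean first = true;
      for (int i = from; i <= to; i++)
      {
         events.add(first ? "entry" : "iterate");
         first = false;
         // block body would execute here
      }
      events.add("finished");
      return events;
   }
}
```

Running `run(1, 0)` models `DO i = 1 TO x` with `x < 1`: the body is never entered, so entry() never fires, yet finished() still does.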
I'm glad you figured this out. Sorry I didn't catch this on first reading of your question.
#237 Updated by Ovidiu Maxiniuc about 4 years ago
Eric,
The issue I was after yesterday changes a bit. Now I am getting the following NPE when a record is deleted:
java.lang.NullPointerException
   at com.goldencode.p2j.persist.orm.UniqueTracker$Context.endTxScope(UniqueTracker.java:725)
   at com.goldencode.p2j.persist.orm.UniqueTracker$Context.finished(UniqueTracker.java:573)
   at com.goldencode.p2j.util.TransactionManager.processFinalizables(TransactionManager.java:6364)
   at com.goldencode.p2j.util.TransactionManager.popScope(TransactionManager.java:4437)
   at com.goldencode.p2j.util.TransactionManager.access$6700(TransactionManager.java:591)
   at com.goldencode.p2j.util.TransactionManager$TransactionHelper.popScope(TransactionManager.java:8030)
   at com.goldencode.p2j.util.BlockManager.doBlockWorker(BlockManager.java:9180)
   at com.goldencode.p2j.util.BlockManager.doBlock(BlockManager.java:1048)
   at com.goldencode.hotel.UpdateStayDialog$4.body(UpdateStayDialog.java:816)
   [...]
The key is null because in lockAndDelete (when the actual delete happened) the old value was not found, so a Pair.of the id and null was added to context.deletes. I don't know yet how to fix this.
Anyway, I will switch to the "Add Rooms..." issue meanwhile.
#238 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
The issue I was after yesterday changes a bit. [...]
What is the recreate for this? I can have a look.
#239 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
What is the recreate for this? I can have a look.
Available Rooms > Check-In > (select a guest) > Delete > Yes (confirm) > (client disconnects with the above exception)
#240 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
I've now come across a dynamic database problem in Hotel GUI (virtual desktop). On the "Rooms" tab, click "Add Rooms..." and you will get a series of dynamic query errors. Ovidiu, please work on this next. It is higher priority than the metadata "updater" reviews.
This was caused by some incorrect handling of indexes (the legacy field names and properties were a bit mixed up). This is fixed in revision 11416.
#241 Updated by Eric Faulhaber about 4 years ago
Ovidiu, thanks for the fixes in rev 11416. Please look at these next, which occur when you try to manipulate the tree view in that same "Add Rooms..." work flow. They could be temp-table problems.
[04/27/2020 02:09:30 EDT] (ErrorManager:SEVERE) {00000003:00000010:bogus} Invalid handle. Not initialized or points to a deleted object
[04/27/2020 02:09:33 EDT] (ErrorManager:SEVERE) {00000003:00000010:bogus} Cannot access the Y attribute because the widget does not exist
[04/27/2020 02:09:36 EDT] (ErrorManager:SEVERE) {00000003:00000010:bogus} Lead attributes in a chained-attribute expression (a:b:c) must be type HANDLE or a user-defined type and valid (not UNKNOWN). (10068)
[04/27/2020 02:09:39 EDT] (ErrorManager:SEVERE) {00000003:00000010:bogus} **Unable to assign UNKNOWN value to attribute FRAME on FRAME widget. (4083)
[04/27/2020 02:09:40 EDT] (ErrorManager:SEVERE) {00000003:00000010:bogus} Invalid handle. Not initialized or points to a deleted object
[04/27/2020 02:09:42 EDT] (ErrorManager:SEVERE) {00000003:00000010:bogus} Cannot access the FRAME attribute because the widget does not exist
[04/27/2020 02:09:43 EDT] (ErrorManager:SEVERE) {00000003:00000010:bogus} Lead attributes in a chained-attribute expression (a:b:c) must be type HANDLE or a user-defined type and valid (not UNKNOWN). (10068)
[04/27/2020 02:09:45 EDT] (ErrorManager:SEVERE) {00000003:00000010:bogus} Lead attributes in a chained-attribute expression (a:b:c) must be type HANDLE or a user-defined type and valid (not UNKNOWN). (10068)
However, before you spend time on this, please make sure it is not happening with the version of trunk we branched for 4011a (11328). I recall something was broken in this area in a recent version of trunk. Not sure if it was this, but if so, I don't want to waste time tracking down an old problem which has since been fixed.
#242 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
Eric Faulhaber wrote:
What is the recreate for this? I can have a look.
Available Rooms > Check-In > (select a guest) > Delete > Yes (confirm) > (client disconnects with the above exception)
This is fixed in rev 11417.
#243 Updated by Eric Faulhaber about 4 years ago
Ovidiu, after the issues in #4011-241, could you also please take a look at what is going wrong with embedded mode Hotel GUI? As I noted previously, we seem to be hitting a point during login/initialization where we are getting stuck trying to process a JSON message (I think from the server to the client, but I haven't looked at it closely enough to be sure). Assuming the baseline running with trunk 11348 works, it must be some regression related to our changes in this branch.
#244 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
This is fixed in rev 11417.
Super!
Ovidiu, after the issues in #4011-241, could you also please take a look at what is going wrong with embedded mode Hotel GUI? As I noted previously, we seem to be hitting a point during login/initialization where we are getting stuck trying to process a JSON message (I think from the server to the client, but I haven't looked at it closely enough to be sure). Assuming the baseline running with trunk 11348 works, it must be some regression related to our changes in this branch.
OK. I will use r11417. With a previous revision, the embedded hotel was working (although the Rooms tab was not displayed on the first click). I suspect the issue is not JSON-related but something we changed in 4011.
#245 Updated by Ovidiu Maxiniuc about 4 years ago
Eric,
We need to discuss the if (!isTransient()) test in RecordBuffer.flush() (line ~5911). Why would we abort the flush? I think we should only test for currentRecord == null instead.
The problem is this: when a query is iterated and the buffer holds a record that was modified, that currentRecord needs to be flushed before the next record is loaded. It is marked CHANGED and has been validated (see the block at line 10998), but since the table is being iterated, the record is not NEW and therefore not transient. In this case the flush is aborted, which means all changes made in the previous iteration are lost.
This made sense when we let Hibernate flush as late as possible, but we no longer use that write-behind strategy, so, as shown above, the check is no longer correct.
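Ovidiu's argument can be illustrated with a minimal sketch (all names here are hypothetical stand-ins, not the actual RecordBuffer code): a record loaded by an iterating query is CHANGED but not transient, so a flush gated on isTransient() silently drops its modifications, while a check based only on the record being present and modified does not.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the two flush conditions discussed above. FlushSketch,
// flushOld and flushNew are illustrative names only.
public class FlushSketch {
   enum State { NEW, CHANGED, UNCHANGED }

   static final List<String> written = new ArrayList<>();

   // Old behavior: only transient (NEW) records are flushed; a queried
   // record that was modified (CHANGED) is skipped and its changes lost.
   static boolean flushOld(State state, String record) {
      if (state != State.NEW) {    // the !isTransient() abort
         return false;
      }
      written.add(record);
      return true;
   }

   // Proposed behavior: flush whenever a record is loaded and modified.
   static boolean flushNew(State state, String record) {
      if (record == null || state == State.UNCHANGED) {
         return false;
      }
      written.add(record);
      return true;
   }

   public static void main(String[] args) {
      // A record modified during query iteration: old logic drops the update.
      System.out.println(flushOld(State.CHANGED, "row1")); // false (lost)
      System.out.println(flushNew(State.CHANGED, "row1")); // true (flushed)
   }
}
```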
#246 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
[...]
This made sense when we let Hibernate flush as late as possible, but we no longer use that write-behind strategy, so, as shown above, the check is no longer correct.
Given the other changes already made to this method in 4011a, I think you are correct. This method originally was written to ensure a new, transient record was attached to the Hibernate session. With the way we do the persist in the validation now, your change makes sense.
#247 Updated by Ovidiu Maxiniuc about 4 years ago
Eric,
I committed a small update that will make the embedded client work. See r11418.
But I have the following issue: in the Check-in dialog, if the Add (guest) button is pressed twice (with the first dialog simply cancelled), the following exception is thrown:
org.h2.jdbc.JdbcSQLException: Timeout trying to lock table ; SQL statement:
insert into guest (stay_id, order_, first_name, last_name, guest_id_type, guest_id_num, phone, birth_date, country, city, address, recid) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) [50200-197]
   at org.h2.message.DbException.getJdbcSQLException(DbException.java:357)
   at org.h2.message.DbException.get(DbException.java:168)
   at org.h2.command.Command.filterConcurrentUpdate(Command.java:316)
   at org.h2.command.Command.executeUpdate(Command.java:268)
   at org.h2.jdbc.JdbcPreparedStatement.execute(JdbcPreparedStatement.java:249)
   at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.execute(NewProxyPreparedStatement.java:67)
   at com.goldencode.p2j.persist.orm.Persister.insert(Persister.java:398)
   at com.goldencode.p2j.persist.orm.Session.save(Session.java:539)
   at com.goldencode.p2j.persist.orm.Validation.validateUniqueCommitted(Validation.java:532)
   at com.goldencode.p2j.persist.orm.Validation.checkUniqueConstraints(Validation.java:461)
   at com.goldencode.p2j.persist.orm.Validation.validateMaybeFlush(Validation.java:203)
   at com.goldencode.p2j.persist.RecordBuffer$Handler.invoke(RecordBuffer.java:12151)
   at com.goldencode.p2j.persist.$__Proxy17.setOrder(Unknown Source)
   at com.goldencode.hotel.UpdateGuestDialog.lambda$execute$18(UpdateGuestDialog.java:629)
   at com.goldencode.p2j.util.Block.body(Block.java:604)
   at com.goldencode.p2j.util.BlockManager.processBody(BlockManager.java:8087)
   at com.goldencode.p2j.util.BlockManager.topLevelBlock(BlockManager.java:7808)
   at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:467)
   at com.goldencode.hotel.UpdateGuestDialog.execute(UpdateGuestDialog.java:149)
   [...]
Caused by: org.h2.jdbc.JdbcSQLException: Concurrent update in table "IDX__GUEST_STAY_ORDER": another transaction has updated or deleted the same row [90131-197]
   at org.h2.message.DbException.getJdbcSQLException(DbException.java:357)
   at org.h2.message.DbException.get(DbException.java:179)
   at org.h2.message.DbException.get(DbException.java:155)
   at org.h2.table.RegularTable.addRow(RegularTable.java:146)
   at org.h2.command.dml.Insert.insertRows(Insert.java:182)
   at org.h2.command.dml.Insert.update(Insert.java:134)
   at org.h2.command.CommandContainer.update(CommandContainer.java:102)
   at org.h2.command.Command.executeUpdate(Command.java:261)
   ... 123 more
This is a bit strange. I don't see a concurrent update, except for the insert/update from the first dialog (which was cancelled):
FWD ORM: com.mchange.v2.c3p0.impl.NewProxyPreparedStatement@1c1420fa [wrapping: prep122: insert into guest (stay_id, order_, first_name, last_name, guest_id_type, guest_id_num, phone, birth_date, country, city, address, recid) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) {1: 149, 2: 2, 3: '', 4: '', 5: 0, 6: '', 7: '', 8: NULL, 9: '', 10: '', 11: '', 12: 50076}]
FWD ORM: com.mchange.v2.c3p0.impl.NewProxyPreparedStatement@6233ae34 [wrapping: prep123: update guest set guest_id_type=? where recid=? {1: 1, 2: 50076}]
I think there is an issue with nested transactions. beginTransaction() was called when the procedure of the Check-in dialog started (update-stay-dialog.w) in FULL_TRANSACTION mode. When the Add button is clicked, update-guest-dialog.w is launched, also in FULL_TRANSACTION. A savepoint should have been created for the sub-transaction at this point, before the SQL statements above were executed. Since the dialog is closed (by END-ERROR), the last procedure should be reverted, meaning the savepoint is restored and the effects of those SQL statements cease to exist. Because there is no sub-transaction, the unique key (149, 2) remains in the guest table and, when the second Add guest dialog is opened, the procedure tries to create a new guest with the same key (149, 2). It is not really the same row, since it will have a different PK, but because it has the same unique key, the unique index treats it as such. This is still only a supposition, but it seems logical at the moment.
Please take a look at this; I will keep looking as well, in case it's something I missed.
Thank you!
#248 Updated by Eric Faulhaber about 4 years ago
I don't intentionally create nested transactions; in fact, the inTx flag in Session is intended specifically to avoid that. It must be something we (most likely I) overlooked. We must be sure to avoid creating transactions directly on the connection and always go through Session to begin a new transaction, though I'm pretty sure we do that already...
#249 Updated by Eric Faulhaber about 4 years ago
Ovidiu, the change in 11418 to Session.save seems dangerous to me. The reason the check tests whether another DMO with the same ID is already cached, and throws an exception if so, was to root out programming errors. There should only ever be one instance of a DMO that is used in record buffers within a user context. There may be snapshots or dirty copies for other uses, but they should never be saved back to the session.
It seems we found just such a programming error in this case. However, the way it works after your change, if I'm reading it correctly, is to accept the last instance saved as the one and only correct instance. Any other instances, upon next use, will be refreshed from the database with the data last saved. This seems like it could lead to very unpredictable behavior. Isn't this change hiding the problem of why there are two instances of the DMO with the same ID in the first place?
Can you please share the reasoning behind your change?
#250 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
Can you please share the reasoning behind your change?
The use case is when the same DMO is loaded in two different buffers. It makes sense to keep in the cache the version we just wrote to the database and invalidate the other.
#251 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
I don't intentionally create nested transactions; in fact, the inTx flag in Session is intended specifically to avoid that. It must be something we (most likely I) overlooked. We must be sure to avoid creating transactions directly on the connection and always go through Session to begin a new transaction, though I'm pretty sure we do that already...
I see inTx as a flag for closing the transaction if it was started in the current context, much like a Java try-with-resources that remembers to always close it. That is logical. However, I do not understand how we roll back to a savepoint when a second-level procedure or a block needs to be undone, if there is a single, 'flat' transaction?
#252 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
Eric Faulhaber wrote:
Can you please share the reasoning behind your change?
The use case is when the same DMO is loaded in two different buffers.
In the current (Hibernate) version of FWD, these would be the same instance. Only one DMO instance with a given ID can be associated with the session at a time, and any DMO in a buffer must be associated with the session until removed from all buffers, via delete or otherwise.
The only DMO that can be loaded in a buffer and not associated with a session is one that is newly created and not yet saved (i.e., transient). And the ID of each such newly created DMO is guaranteed to be unique, because we get the next primary key for each DMO creation.
We need to maintain this rule. We should always load the DMO instance from the session cache, if available, into the buffer, even if we found the same record for two buffers via separate queries.
As an aside, I think we have to get rid of the fixed-size nature of the session cache and come up with a formal eviction policy, so that this rule can be enforced.
It makes sense to keep in cache the version that we just wrote to database and invalidate the other.
If we follow the above rule (which was my intention, even if I didn't necessarily implement it correctly), the DMO we are writing to the database will already be in the session cache and there will be no separate instance to invalidate. Having two DMO instances associated with the same primary key in the same session is an illegal state. This is why I was throwing the PersistenceException in this condition.
I realize now that the current implementation of Session.refresh violates this rule. I wonder if that is the real cause behind the original error. I need to rework Session.refresh to always return an instance of the DMO from the session cache, if found there, before attempting to reload from the database, unless that cached DMO is marked stale (i.e., from a rollback). In the stale case, it would be evicted, reloaded from the database, and re-cached. We may need to rename the method to something other than refresh, too, so that it doesn't suggest that the DMO is updated in place with fresh data from the database.
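The single-instance rule Eric describes could be sketched roughly like this (hypothetical names throughout; this is not the actual Session implementation): a lookup returns the cached instance unless it is marked stale, in which case the stale copy is evicted and a fresh load (simulated here with a constructor call) is re-cached.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the "one DMO instance per ID per session" rule discussed
// above. SessionCacheSketch and Dmo are illustrative names only.
public class SessionCacheSketch {
   static class Dmo {
      final long id;
      boolean stale;          // e.g. set after a rollback
      Dmo(long id) { this.id = id; }
   }

   private final Map<Long, Dmo> cache = new HashMap<>();

   Dmo get(long id) {
      Dmo cached = cache.get(id);
      if (cached != null && !cached.stale) {
         return cached;       // every buffer sees the same instance
      }
      Dmo fresh = new Dmo(id); // stand-in for a database load
      cache.put(id, fresh);    // evict any stale copy, re-cache
      return fresh;
   }

   public static void main(String[] args) {
      SessionCacheSketch s = new SessionCacheSketch();
      Dmo a = s.get(42L);
      System.out.println(a == s.get(42L)); // true: single instance per ID
      a.stale = true;                      // simulate a rollback
      System.out.println(s.get(42L) == a); // false: stale copy replaced
   }
}
```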
#253 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
Eric Faulhaber wrote:
I don't intentionally create nested transactions; in fact, the inTx flag in Session is intended specifically to avoid that. It must be something we (most likely I) overlooked. We must be sure to avoid creating transactions directly on the connection and always go through Session to begin a new transaction, though I'm pretty sure we do that already...
I see inTx as a flag for closing the transaction if it was started in the current context, much like a Java try-with-resources that remembers to always close it. That is logical. However, I do not understand how we roll back to a savepoint when a second-level procedure or a block needs to be undone, if there is a single, 'flat' transaction?
This is how rollback works (or is intended to work) now...
There can only be a single database transaction in a session at a time. I'm not sure what you mean by "flat" in this sense. The transaction can contain an arbitrary number of savepoints. For application-level transactions, a SavepointManager instance is created when an application-level transaction is begun. Once that application-level transaction has begun, we wait for the first record buffer to open a scope and then we open a matching database transaction, passing that SavepointManager instance to the Session.beginTransaction call.
At each nested block within that application transaction, the SavepointManager sets a database savepoint. If that block is committed, the corresponding savepoint is released. If that block is rolled back, the corresponding savepoint is rolled back:
- each undoable DMO which was touched during that block is marked STALE; and
- each NO-UNDO operation which was performed during that block has a corresponding SQLRedo object created for it.
For undoable DMOs marked stale, the next access forces a re-read of that DMO's data from the database (this part needs to be changed as noted in the previous entry).
For NO-UNDO DMOs, if the sub-transaction or full transaction is rolled back, the corresponding SQLRedo objects are executed in sequence to redo the SQL operations that were rolled back in that scope.
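As a toy model of the savepoint scheme described above (all names hypothetical; no real JDBC or SavepointManager code is shown): each nested block records a savepoint, commit releases it, and rollback discards the operations recorded since it, which is exactly what should make a cancelled dialog's insert disappear.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative model of nested block savepoints within one flat transaction.
// SavepointSketch, beginBlock, etc. are made-up names for this sketch.
public class SavepointSketch {
   final List<String> ops = new ArrayList<>();           // applied operations
   final Deque<Integer> savepoints = new ArrayDeque<>(); // op count per block

   void beginBlock()  { savepoints.push(ops.size()); }   // set a savepoint
   void commitBlock() { savepoints.pop(); }              // release it
   void rollbackBlock() {
      int mark = savepoints.pop();
      ops.subList(mark, ops.size()).clear();  // undo ops since the savepoint
   }

   public static void main(String[] args) {
      SavepointSketch tx = new SavepointSketch();
      tx.beginBlock();                 // outer dialog (full transaction)
      tx.ops.add("insert stay");
      tx.beginBlock();                 // inner Add-guest dialog
      tx.ops.add("insert guest (149, 2)");
      tx.rollbackBlock();              // dialog cancelled: insert must vanish
      System.out.println(tx.ops);      // [insert stay]
   }
}
```

In the real system the rollback step would additionally mark touched undoable DMOs stale and, for NO-UNDO operations, queue redo objects, as described in the entry above.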
#254 Updated by Ovidiu Maxiniuc about 4 years ago
Thanks for the clarifications. I will use this information (from both of your last posts) to continue investigating the hotel_gui issues.
#255 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
I think there is an issue with the nesting transactions. [...]
I debugged through this some tonight, but I did not track down the root cause. The savepoint manager seems to be working as designed. However, debugging through this showed me that I set up way too many savepoints that are never used, which I imagine causes a lot of unnecessary database churn. I think I can optimize this, but that is a separate matter.
I did make a fix to TempTableDataSourceProvider in rev 11420, but that also was just something I found along the way.
#256 Updated by Eric Faulhaber about 4 years ago
Eric Faulhaber wrote:
For undoable DMOs marked stale, the next access forces a re-read of that DMO's data from the database (this part needs to be changed as noted in the previous entry).
I did change this to use Session.get instead of Session.refresh (checked into rev 11419). The former does exactly what I want in this case. However, the use of refresh was not the root cause of multiple DMOs with the same ID being associated with the session.
#257 Updated by Eric Faulhaber about 4 years ago
One more issue to fix is that it seems we are not properly dropping indexes on temporary tables upon exiting Hotel GUI. The recreate is to log into virtual desktop mode, then exit (so it drops you back to the OS-level login page). Log back into the application and you will get:
[04/28/2020 05:40:47 EDT] (com.goldencode.p2j.persist.Persistence$Context:WARNING) [00000005:00000016:bogus-->local/_temp/primary] rolling back orphaned transaction Closing connection for database local/_temp/primary: com.goldencode.p2j.persist.orm.TempTableDataSourceProvider$DataSourceImpl$1@52963839 [04/28/2020 05:40:47 EDT] (com.goldencode.p2j.persist.Persistence:SEVERE) Error executing SQL statement: [create local temporary table tt5 ( recid bigint not null, _multiplex integer not null, linksource varchar, linktarget varchar, linktype varchar, __ilinktype varchar as upper(rtrim(linktype)), primary key (recid) ) transactional;, create index idx_mpid__tt5__1 on tt5 (_multiplex);] com.goldencode.p2j.persist.PersistenceException: Failed to execute SQL statement. at com.goldencode.p2j.persist.Persistence.executeSQLBatch(Persistence.java:3887) at com.goldencode.p2j.persist.Persistence.executeSQLBatch(Persistence.java:2953) at com.goldencode.p2j.persist.TemporaryBuffer$Context.doCreateTable(TemporaryBuffer.java:6511) at com.goldencode.p2j.persist.TemporaryBuffer$Context.createTable(TemporaryBuffer.java:6307) at com.goldencode.p2j.persist.TemporaryBuffer.openScope(TemporaryBuffer.java:4159) at com.goldencode.p2j.persist.RecordBuffer.openScope(RecordBuffer.java:2865) at com.goldencode.hotel.common.adm2.Smart.lambda$execute$1(Smart.java:144) at com.goldencode.p2j.util.Block.body(Block.java:604) at com.goldencode.p2j.util.BlockManager.processBody(BlockManager.java:8087) at com.goldencode.p2j.util.BlockManager.topLevelBlock(BlockManager.java:7808) at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:467) at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:438) at com.goldencode.hotel.common.adm2.Smart.execute(Smart.java:135) at com.goldencode.hotel.common.adm2.SmartMethodAccess.invoke(Unknown Source) at com.goldencode.p2j.util.ControlFlowOps$InternalEntryCaller.invokeImpl(ControlFlowOps.java:7518) at 
com.goldencode.p2j.util.ControlFlowOps$InternalEntryCaller.invoke(ControlFlowOps.java:7489) at com.goldencode.p2j.util.ControlFlowOps.invokeExternalProcedure(ControlFlowOps.java:5297) at com.goldencode.p2j.util.ControlFlowOps.invokeExternalProcedure(ControlFlowOps.java:5156) at com.goldencode.p2j.util.ControlFlowOps.invokePersistentImpl(ControlFlowOps.java:6523) at com.goldencode.p2j.util.ControlFlowOps.invokePersistentImpl(ControlFlowOps.java:6329) at com.goldencode.p2j.util.ControlFlowOps.invokePersistentSet(ControlFlowOps.java:1902) at com.goldencode.p2j.util.ControlFlowOps.invokePersistentSet(ControlFlowOps.java:1881) at com.goldencode.hotel.MainWindow.lambda$startSuperProc$25(MainWindow.java:620) at com.goldencode.p2j.util.Block.body(Block.java:604) at com.goldencode.p2j.util.BlockManager.processBody(BlockManager.java:8087) at com.goldencode.p2j.util.BlockManager.topLevelBlock(BlockManager.java:7808) at com.goldencode.p2j.util.BlockManager.internalProcedure(BlockManager.java:493) at com.goldencode.p2j.util.BlockManager.internalProcedure(BlockManager.java:479) at com.goldencode.hotel.MainWindow.startSuperProc(MainWindow.java:608) at com.goldencode.hotel.MainWindowMethodAccess.invoke(Unknown Source) at com.goldencode.p2j.util.ControlFlowOps$InternalEntryCaller.invokeImpl(ControlFlowOps.java:7518) at com.goldencode.p2j.util.ControlFlowOps$InternalEntryCaller.invoke(ControlFlowOps.java:7489) at com.goldencode.p2j.util.ControlFlowOps.invokeImpl(ControlFlowOps.java:6198) at com.goldencode.p2j.util.ControlFlowOps.invoke(ControlFlowOps.java:3888) at com.goldencode.p2j.util.ControlFlowOps.invokeImpl(ControlFlowOps.java:5828) at com.goldencode.p2j.util.ControlFlowOps.invokeImpl(ControlFlowOps.java:5727) at com.goldencode.p2j.util.ControlFlowOps.invokeWithMode(ControlFlowOps.java:876) at com.goldencode.p2j.util.ControlFlowOps.invokeWithMode(ControlFlowOps.java:858) at com.goldencode.hotel.MainWindow.lambda$execute$18(MainWindow.java:392) at 
com.goldencode.p2j.util.Block.body(Block.java:604) at com.goldencode.p2j.util.BlockManager.processBody(BlockManager.java:8087) at com.goldencode.p2j.util.BlockManager.topLevelBlock(BlockManager.java:7808) at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:467) at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:438) at com.goldencode.hotel.MainWindow.execute(MainWindow.java:136) at com.goldencode.hotel.MainWindowMethodAccess.invoke(Unknown Source) at com.goldencode.p2j.util.ControlFlowOps$InternalEntryCaller.invokeImpl(ControlFlowOps.java:7518) at com.goldencode.p2j.util.ControlFlowOps$InternalEntryCaller.invoke(ControlFlowOps.java:7489) at com.goldencode.p2j.util.ControlFlowOps.invokeExternalProcedure(ControlFlowOps.java:5297) at com.goldencode.p2j.util.ControlFlowOps.invokeExternalProcedure(ControlFlowOps.java:5156) at com.goldencode.p2j.util.ControlFlowOps.invokePersistentImpl(ControlFlowOps.java:6523) at com.goldencode.p2j.util.ControlFlowOps.invokePersistentImpl(ControlFlowOps.java:6329) at com.goldencode.p2j.util.ControlFlowOps.invokePersistentSet(ControlFlowOps.java:1902) at com.goldencode.p2j.util.ControlFlowOps.invokePersistentSet(ControlFlowOps.java:1881) at com.goldencode.hotel.Start.lambda$null$4(Start.java:134) at com.goldencode.p2j.util.Block.body(Block.java:604) at com.goldencode.p2j.util.BlockManager.processBody(BlockManager.java:8087) at com.goldencode.p2j.util.BlockManager.coreLoop(BlockManager.java:9439) at com.goldencode.p2j.util.BlockManager.repeatWorker(BlockManager.java:9336) at com.goldencode.p2j.util.BlockManager.repeat(BlockManager.java:2052) at com.goldencode.hotel.Start.lambda$execute$5(Start.java:95) at com.goldencode.p2j.util.Block.body(Block.java:604) at com.goldencode.p2j.util.BlockManager.processBody(BlockManager.java:8087) at com.goldencode.p2j.util.BlockManager.topLevelBlock(BlockManager.java:7808) at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:467) at 
com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:438) at com.goldencode.hotel.Start.execute(Start.java:55) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.goldencode.p2j.util.Utils.invoke(Utils.java:1380) at com.goldencode.p2j.main.StandardServer$MainInvoker.execute(StandardServer.java:2125) at com.goldencode.p2j.main.StandardServer.invoke(StandardServer.java:1561) at com.goldencode.p2j.main.StandardServer.standardEntry(StandardServer.java:544) at com.goldencode.p2j.main.StandardServerMethodAccess.invoke(Unknown Source) at com.goldencode.p2j.util.MethodInvoker.invoke(MethodInvoker.java:156) at com.goldencode.p2j.net.Dispatcher.processInbound(Dispatcher.java:757) at com.goldencode.p2j.net.Conversation.block(Conversation.java:412) at com.goldencode.p2j.net.Conversation.run(Conversation.java:232) at java.lang.Thread.run(Thread.java:748) Caused by: org.h2.jdbc.JdbcBatchUpdateException: Index "IDX_MPID__TT5__1" already exists; SQL statement: create index idx_mpid__tt5__1 on tt5 (_multiplex); [42111-197] at org.h2.jdbc.JdbcStatement.executeBatch(JdbcStatement.java:792) at com.goldencode.p2j.persist.Persistence.executeSQLBatch(Persistence.java:3873) at com.goldencode.p2j.persist.Persistence.executeSQLBatch(Persistence.java:2953) at com.goldencode.p2j.persist.TemporaryBuffer$Context.doCreateTable(TemporaryBuffer.java:6511) at com.goldencode.p2j.persist.TemporaryBuffer$Context.createTable(TemporaryBuffer.java:6307) at com.goldencode.p2j.persist.TemporaryBuffer.openScope(TemporaryBuffer.java:4159) at com.goldencode.p2j.persist.RecordBuffer.openScope(RecordBuffer.java:2865) at com.goldencode.hotel.common.adm2.Smart.lambda$execute$1(Smart.java:144) at com.goldencode.p2j.util.Block.body(Block.java:604) at 
com.goldencode.p2j.util.BlockManager.processBody(BlockManager.java:8087) at com.goldencode.p2j.util.BlockManager.topLevelBlock(BlockManager.java:7808) at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:467) at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:438) at com.goldencode.hotel.common.adm2.Smart.execute(Smart.java:135) at com.goldencode.hotel.common.adm2.SmartMethodAccess.invoke(Unknown Source) at com.goldencode.p2j.util.ControlFlowOps$InternalEntryCaller.invokeImpl(ControlFlowOps.java:7518) at com.goldencode.p2j.util.ControlFlowOps$InternalEntryCaller.invoke(ControlFlowOps.java:7489) at com.goldencode.p2j.util.ControlFlowOps.invokeExternalProcedure(ControlFlowOps.java:5297) at com.goldencode.p2j.util.ControlFlowOps.invokeExternalProcedure(ControlFlowOps.java:5156) at com.goldencode.p2j.util.ControlFlowOps.invokePersistentImpl(ControlFlowOps.java:6523) at com.goldencode.p2j.util.ControlFlowOps.invokePersistentImpl(ControlFlowOps.java:6329) at com.goldencode.p2j.util.ControlFlowOps.invokePersistentSet(ControlFlowOps.java:1902) at com.goldencode.p2j.util.ControlFlowOps.invokePersistentSet(ControlFlowOps.java:1881) at com.goldencode.hotel.MainWindow.lambda$startSuperProc$25(MainWindow.java:620) at com.goldencode.p2j.util.Block.body(Block.java:604) at com.goldencode.p2j.util.BlockManager.processBody(BlockManager.java:8087) at com.goldencode.p2j.util.BlockManager.topLevelBlock(BlockManager.java:7808) at com.goldencode.p2j.util.BlockManager.internalProcedure(BlockManager.java:493) at com.goldencode.p2j.util.BlockManager.internalProcedure(BlockManager.java:479) at com.goldencode.hotel.MainWindow.startSuperProc(MainWindow.java:608) at com.goldencode.hotel.MainWindowMethodAccess.invoke(Unknown Source) at com.goldencode.p2j.util.ControlFlowOps$InternalEntryCaller.invokeImpl(ControlFlowOps.java:7518) at com.goldencode.p2j.util.ControlFlowOps$InternalEntryCaller.invoke(ControlFlowOps.java:7489) at 
com.goldencode.p2j.util.ControlFlowOps.invokeImpl(ControlFlowOps.java:6198) at com.goldencode.p2j.util.ControlFlowOps.invoke(ControlFlowOps.java:3888) at com.goldencode.p2j.util.ControlFlowOps.invokeImpl(ControlFlowOps.java:5828) at com.goldencode.p2j.util.ControlFlowOps.invokeImpl(ControlFlowOps.java:5727) at com.goldencode.p2j.util.ControlFlowOps.invokeWithMode(ControlFlowOps.java:876) at com.goldencode.p2j.util.ControlFlowOps.invokeWithMode(ControlFlowOps.java:858) at com.goldencode.hotel.MainWindow.lambda$execute$18(MainWindow.java:392) at com.goldencode.p2j.util.Block.body(Block.java:604) at com.goldencode.p2j.util.BlockManager.processBody(BlockManager.java:8087) at com.goldencode.p2j.util.BlockManager.topLevelBlock(BlockManager.java:7808) at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:467) at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:438) at com.goldencode.hotel.MainWindow.execute(MainWindow.java:136) at com.goldencode.hotel.MainWindowMethodAccess.invoke(Unknown Source) at com.goldencode.p2j.util.ControlFlowOps$InternalEntryCaller.invokeImpl(ControlFlowOps.java:7518) at com.goldencode.p2j.util.ControlFlowOps$InternalEntryCaller.invoke(ControlFlowOps.java:7489) at com.goldencode.p2j.util.ControlFlowOps.invokeExternalProcedure(ControlFlowOps.java:5297) at com.goldencode.p2j.util.ControlFlowOps.invokeExternalProcedure(ControlFlowOps.java:5156) at com.goldencode.p2j.util.ControlFlowOps.invokePersistentImpl(ControlFlowOps.java:6523) at com.goldencode.p2j.util.ControlFlowOps.invokePersistentImpl(ControlFlowOps.java:6329) at com.goldencode.p2j.util.ControlFlowOps.invokePersistentSet(ControlFlowOps.java:1902) at com.goldencode.p2j.util.ControlFlowOps.invokePersistentSet(ControlFlowOps.java:1881) at com.goldencode.hotel.Start.lambda$null$4(Start.java:134) at com.goldencode.p2j.util.Block.body(Block.java:604) at com.goldencode.p2j.util.BlockManager.processBody(BlockManager.java:8087) at 
com.goldencode.p2j.util.BlockManager.coreLoop(BlockManager.java:9439) at com.goldencode.p2j.util.BlockManager.repeatWorker(BlockManager.java:9336) at com.goldencode.p2j.util.BlockManager.repeat(BlockManager.java:2052) at com.goldencode.hotel.Start.lambda$execute$5(Start.java:95) at com.goldencode.p2j.util.Block.body(Block.java:604) at com.goldencode.p2j.util.BlockManager.processBody(BlockManager.java:8087) at com.goldencode.p2j.util.BlockManager.topLevelBlock(BlockManager.java:7808) at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:467) at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:438) at com.goldencode.hotel.Start.execute(Start.java:55) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.goldencode.p2j.util.Utils.invoke(Utils.java:1380) at com.goldencode.p2j.main.StandardServer$MainInvoker.execute(StandardServer.java:2125) at com.goldencode.p2j.main.StandardServer.invoke(StandardServer.java:1561) at com.goldencode.p2j.main.StandardServer.standardEntry(StandardServer.java:544) at com.goldencode.p2j.main.StandardServerMethodAccess.invoke(Unknown Source) at com.goldencode.p2j.util.MethodInvoker.invoke(MethodInvoker.java:156) at com.goldencode.p2j.net.Dispatcher.processInbound(Dispatcher.java:757) at com.goldencode.p2j.net.Conversation.block(Conversation.java:412) at com.goldencode.p2j.net.Conversation.run(Conversation.java:232) at java.lang.Thread.run(Thread.java:748)
Note I recently changed the CREATE TEMPORARY TABLE statement to use the LOCAL specifier, meaning the table is private to the creating connection. There is no similar option for CREATE INDEX. I seem to recall we had a similar problem when I first wrote the CREATE INDEX code for older FWD, which I think is why we have that __1 suffix at the end of the name. IIRC, each session gets a unique index name. So, this is probably a different problem now. Maybe the DROP INDEX statement is not being executed correctly.
#258 Updated by Greg Shah about 4 years ago
At each nested block within that application transaction, the SavepointManager sets a database savepoint. If that block is committed, the corresponding savepoint is released. If that block is rolled back, the corresponding savepoint is rolled back:
I assume you are creating a savepoint if the block is a SUB_TRANSACTION or TRANSACTION, but not if it is NO_TRANSACTION.
#259 Updated by Ovidiu Maxiniuc about 4 years ago
Eric Faulhaber wrote:
One more issue to fix is that it seems we are not properly dropping indexes on temporary tables upon exiting Hotel GUI. The recreate is to log into virtual desktop mode, then exit (so it drops you back to the OS-level login page). Log back into the application and you will get:
[...]
Note I recently changed the CREATE TEMPORARY TABLE statement to use the LOCAL specifier, meaning the table is private to the creating connection. There is no similar option for CREATE INDEX. I seem to recall we had a similar problem when I first wrote the CREATE INDEX code for older FWD, which I think is why we have that __1 suffix at the end of the name. IIRC, each session gets a unique index name. So, this is probably a different problem now. Maybe the DROP INDEX statement is not being executed correctly.
Briefly speaking, the problem is caused because the static temp-tables are 'permanent', and never dropped, even if all users using them have disconnected. I don't know whether the local
pays any role in this, I rather think the problem existed since before in 4011. The connection to _temp
database (it is "in-mem") is a always-on so all the clients will share it. So, the LOCAL might not have any affect.
The exception is caused by the attempt to create the same table a second time. FWD tests this in TemporaryBuffer.openMultiplexScope(). The local field stores the set of multiplex IDs currently in use. Since this is context-local, it is unaware of the multiplexes of other parallel users or of users that have already disconnected. As a result, there will always be an attempt to create these tables. Logically, the solution would be to make inUseMPIDs fully static, sharing the multiplex data across all users. This way its data will be as 'persistent' as the tables themselves.
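The proposed fix can be sketched with a fully static registry (names here are hypothetical, modeled loosely on inUseMPIDs and TemporaryBuffer.openMultiplexScope): table creation is attempted only by the first user to ever touch the table, because the registry survives user disconnects just like the 'permanent' tables themselves:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: a server-wide (fully static) registry of created
// temp-tables, replacing the context-local set that could not see
// other users' state.
public class MultiplexRegistry
{
   // survives user disconnects, like the 'permanent' tables themselves
   private static final Set<String> createdTables = ConcurrentHashMap.newKeySet();

   /**
    * @return true if the backing table must be created (first use
    *         ever); false if some user has already created it.
    */
   public static boolean needsCreate(String table)
   {
      return createdTables.add(table);
   }
}
```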
The dynamic temp-tables are correctly destroyed when the session which created them ends.
#260 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
Briefly speaking, the problem is caused because the static temp-tables are 'permanent' and never dropped, even if all users using them have disconnected. I don't know whether LOCAL plays any role in this; I rather think the problem existed before 4011. The connection to the _temp database (it is in-memory) is always on, so all the clients share it. So, the LOCAL might not have any effect.
The temporary tables which represent legacy temp-table use (i.e., not dirty share or other housekeeping temporary tables) are not shared across users. Two different sessions can create the same LOCAL temporary table and only have access to their own version. They are created on one connection for a particular user's context and are (or at least used to be) dropped when that user's current database session (and thus JDBC connection) is closed. This happens in response to the SESSION_CLEANUP event or one of the SESSION_CLOSING_* events, fired from Persistence$Context.closeSession
. However, now we are not closing sessions as aggressively as we previously did, so it may be that these events are not being fired with the same timing as they were before.
We are not closing the session as aggressively on purpose, to reduce the constant churn of tearing down and bringing up sessions, and re-attaching all DMOs from active buffers every time. So, we probably need to be smarter about the create-table side of the equation, to match the less aggressive table/index dropping we are now doing.
#261 Updated by Ovidiu Maxiniuc about 4 years ago
There are two changes:
- in Persistence.closeSession(boolean), closeSessionImpl(error) is called for temporary databases, too. This means the session is destroyed and local temp-tables are dropped. This fixes the issue described in note 257. Yet, I wonder whether it is not too aggressive. There is a 3-line comment before the removed return, but I think it is no longer accurate: if there are some tables left in the current connection/session, we do want to drop them.
- after the previous point I noticed that queries were no longer working. This is because they were constructed using the current session/connection. I disconnected them, so now Query and SQLQuery are not linked to a single connection; they can be used across sessions.
#262 Updated by Ovidiu Maxiniuc about 4 years ago
- some records are created fresh in a loop and get saved. They are successfully inserted in the appropriate table in the database and saved to the Session's cache;
- then, a more complex query loads and modifies them:
  - SQLQuery.list() is used to fetch the set of records. Here, the SQL query is already 'expanded', meaning that the whole records are requested. They are incrementally read and hydrated, using optional supplementary SQL statements for any extent fields;
  - the result (full Records with PKs) is stored in the fwdQuery and when iterated will be loaded into the RecordBuffer. Note that we already have a copy of the same records that were added to the Session's cache, but the Loader fetched them directly, short-cutting the cache;
- when the iteration ends, the updated record is saved using Session.save(). At this time, FWD realizes that there are in fact multiple copies of the same record.
What should we do? I guess the correct solution would be to generate the SQL from the fwdQuery's FQL so that only the PKs are fetched. Then the list would be checked against the Session's cache and a new query issued only for those missing from the cache. The final result set returned by the Loader would be a merged list of cached and newly fetched records. It's difficult to compute the performance gain (if any), because there are at least two DB accesses now, plus the extra computation for merging the list, with extra complications when dealing with a joined query.
#263 Updated by Eric Faulhaber almost 4 years ago
For now, don't change what the queries fetch. If a query fetches all columns, leave it that way, but ensure the primary key is the first column (I think this already is the case). Before creating and hydrating a new DMO, check the cache for a non-stale cached instance. If it exists, use that one. If not, create, hydrate, and cache a new DMO from the query results.
I have found that it is much faster to do it this way than to only fetch primary keys and do separate queries to fetch the full column data for each found PK. Also, it enforces the one-DMO-instance-per-primary-key rule in the cache. It is also faster to fetch all columns and throw them away in favor of a cached DMO instance than it is to hydrate a new DMO from the query result set. A lot of time is spent by the JDBC driver extracting data from result sets, for some reason.
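The cache-first hydration described above can be sketched like this (hypothetical names; the real Loader and session cache APIs differ). Full rows are fetched with the PK in column 0, but a non-stale cached DMO is reused and the fetched column data for that row is simply discarded:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: check the session cache by PK before hydrating a new DMO
// from an already-fetched full row.
public class CacheFirstLoader
{
   static class Dmo
   {
      final long pk;
      boolean stale;
      Dmo(long pk) { this.pk = pk; }
   }

   private final Map<Long, Dmo> cache = new HashMap<>();
   int hydrations = 0;   // counts DMOs actually built from row data

   public List<Dmo> load(List<Object[]> rows)
   {
      List<Dmo> result = new ArrayList<>();
      for (Object[] row : rows)
      {
         long pk = (Long) row[0];      // PK is the first column
         Dmo dmo = cache.get(pk);
         if (dmo == null || dmo.stale)
         {
            // hydrate a new instance from the fetched columns
            dmo = new Dmo(pk);
            hydrations++;
            cache.put(pk, dmo);        // one instance per PK
         }
         result.add(dmo);              // cached copy wins otherwise
      }
      return result;
   }
}
```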
#264 Updated by Eric Faulhaber almost 4 years ago
Ovidiu Maxiniuc wrote:
Just committed r11421.
There are two changes:
- in Persistence.closeSession(boolean), closeSessionImpl(error) is called for temporary databases, too. This means the session is destroyed and local temp-tables are dropped. This fixes the issue described in note 257. Yet, I wonder whether it is not too aggressive. There is a 3-line comment before the removed return, but I think it is no longer accurate: if there are some tables left in the current connection/session, we do want to drop them.
- after the previous point I noticed that queries were no longer working. This is because they were constructed using the current session/connection. I disconnected them, so now Query and SQLQuery are not linked to a single connection; they can be used across sessions.
I think closing the session while tables are still possibly in use is too aggressive; or rather, I think we should not ignore it when TemporaryBuffer.getTableCount reports tables still in use. I believe we need to clean up and drop unused tables and indexes more proactively than we currently are (so that getTableCount is actually reporting an accurate number), but I haven't figured out the most appropriate trigger for this yet.
#265 Updated by Ovidiu Maxiniuc almost 4 years ago
Persistence.closeSession(false) is called only from Persistence$Context.cleanup(). This is the end of session, when the user disconnects. The static TEMP-TABLES are never dropped, except in a SessionListener.Event.SESSION_CLOSING_NORMALLY event. But the event is only fired from closeSessionImpl(*). If we condition it with TemporaryBuffer.getTableCount(), once at least one static table was created, none of them can be dropped, because of the circular conditions.
#266 Updated by Ovidiu Maxiniuc almost 4 years ago
Last night I committed r11422.
This revision re-enables LockTableUpdater, ConnectTableUpdater and TransactionTableUpdater, as they were not working at all. Note that, in the absence of the index-dmo.xml file, I used the current configuration to get the active metadata tables.
Another change here: since the Impl classes DmoClass generates are final, the ProxyFactory is unable to do its work. However, they don't need to extend the DMO impls, so I used a plain local object as the parent class.
I do not think there is a better approach for these tables, taking into consideration that we do not have the class at the moment FWD is compiled. In fact we don't have the interface, either. The alternative would be loose coupling via a mapped property set. I estimate the mapping would be faster than reflection. At any rate, a mapping is already used by the proxy handler when deciding to which method to delegate the setters.
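The mapped-property-set alternative mentioned above might look roughly like this (entirely hypothetical names; the real proxy handler differs): setters are resolved once into a dispatch table, so each call is a map lookup rather than per-call reflection:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

// Sketch: a metadata "record" backed by a property map, with setters
// resolved once into a static dispatch table.
public class MetaRecord
{
   private final Map<String, Object> props = new HashMap<>();

   private static final Map<String, BiConsumer<MetaRecord, Object>> SETTERS = new HashMap<>();
   static
   {
      SETTERS.put("setUserId", (r, v) -> r.props.put("userId", v));
      SETTERS.put("setTable",  (r, v) -> r.props.put("table", v));
   }

   /** Invoked by the proxy handler for any legacy setter call. */
   public void invokeSetter(String method, Object value)
   {
      BiConsumer<MetaRecord, Object> setter = SETTERS.get(method);
      if (setter == null)
      {
         throw new IllegalArgumentException("unknown setter: " + method);
      }
      setter.accept(this, value);
   }

   public Object get(String prop) { return props.get(prop); }
}
```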
#267 Updated by Greg Shah almost 4 years ago
taking in consideration that we do not have the class at the moment FWD is compiled
In my opinion, it makes no sense for these classes (metadata) to come from the conversion. They should be built-into FWD. Requiring every customer to have the metadata .df
and creating slightly different versions every time is extra work and more potential for errors.
#268 Updated by Ovidiu Maxiniuc almost 4 years ago
Greg Shah wrote:
taking in consideration that we do not have the class at the moment FWD is compiled
In my opinion, it makes no sense for these classes (metadata) to come from the conversion. They should be built-into FWD. Requiring every customer to have the metadata
.df
and creating slightly different versions every time is extra work and more potential for errors.
I agree. That would simplify these classes a lot. There are a lot of versions of standard.df, depending on the OE revision they were extracted from. I think we should bundle standard.df and convert it (.sql, DMO interface) as part of the FWD build process.
#269 Updated by Greg Shah almost 4 years ago
In #4155 we are working to eliminate use of the .df
. And converting as part of the build process leads to a bootstrapping problem (How do you get an egg without a chicken? How do you get the chicken without an egg?). So I think we should avoid the conversion in the build even if we were trying to eliminate the .df
.
I was thinking more along the lines of having hand-written DMOs for these metadata tables.
#270 Updated by Igor Skornyakov almost 4 years ago
Greg Shah wrote:
In #4155 we are working to eliminate use of the
.df
. And converting as part of the build process leads to a bootstrapping problem (How do you get an egg without a chicken? How do you get the chicken without an egg?). So I think we should avoid the conversion in the build even if we were trying to eliminate the.df
.I was thinking more along the lines of having hand-written DMOs for these metadata tables.
This is exactly what I'm thinking about in the scope of #3814. I also think that metadata
section of the p2j.cfg.xml doesn't make sense as the metadata tables are what exists in any 4GL database. We can not fully support some of them at the moment but can at least provide stubs.
#271 Updated by Ovidiu Maxiniuc almost 4 years ago
Greg Shah wrote:
In #4155 we are working to eliminate use of the .df. And converting as part of the build process leads to a bootstrapping problem (How do you get an egg without a chicken? How do you get the chicken without an egg?). So I think we should avoid the conversion in the build even if we were trying to eliminate the .df.
When we upgrade to Java 9 we could make use of the new modules and compile them independently, eventually in the following order:
- the conversion module: it does not depend on anything, so it will be compiled first;
- the client runtime module: once the client was really thin, but it has grown. Yet, extracting only the classes that are needed to run the client and connect to a remote server seems logical. The only issue here is to make sure the protocol is not broken, so they must be built from the same sources;
- the conversion of metadata: the first module can be used to produce the artefacts needed by the runtime (DMO classes). Maybe optional, as configured in p2j.cfg.xml;
- the server runtime: the big boy. It eventually depends on and uses the classes from the client. If we do not want this dependency, we can extract the core classes in a core module. It will also be dependent on the conversion module, which is needed for dynamic queries;
- the tools module: the database import, admin, and other tools which can be run with or without the running server.
This way we can get rid of the messy monolith that is FWD at this moment. And nothing is hand-written. The metadata support is provided with FWD, and once a new meta schema is released we extract standard.df
and update it in FWD.
#272 Updated by Igor Skornyakov almost 4 years ago
Ovidiu Maxiniuc wrote:
When we upgrade to Java 9 we could make use of the new modules and compile them independently, eventually in the following order:
- the conversion module: it does not depend on anything, so it will be compiled first;
- the client runtime module: once the client was really thin, but it has grown. Yet, extracting only the classes that are needed to run the client and connect to a remote server seems logical. The only issue here is to make sure the protocol is not broken, so they must be built from the same sources;
- the conversion of metadata: the first module can be used to produce the artefacts needed by the runtime (DMO classes). Maybe optional, as configured in p2j.cfg.xml;
- the server runtime: the big boy. It eventually depends on and uses the classes from the client. If we do not want this dependency, we can extract the core classes in a core module. It will also be dependent on the conversion module, which is needed for dynamic queries;
- the tools module: the database import, admin, and other tools which can be run with or without the running server.
This way we can get rid of the messy monolith that is FWD at this moment. And nothing is hand-written. The metadata support is provided with FWD, and once a new meta schema is released we extract standard.df and update it in FWD.
I think there is no need to wait for the upgrade to Java 9. The notion of subprojects and multi-project builds doesn't depend on the Java 9 module system, although with modules the result of a multi-project build will look more elegant, of course. In any case, splitting FWD into multiple subprojects would be the first step.
#273 Updated by Greg Shah almost 4 years ago
I agree that refactoring and using modules are both very good things. And we are definitely wanting to do some of this, maybe all of it.
On the other hand, I am unsure of when this will happen. We won't move to Java 11 until the fall and have no plans to stop at Java 9 along the way. If we are going to refactor things, I prefer to use modules but we will keep these ideas in mind.
#274 Updated by Eric Faulhaber almost 4 years ago
Ovidiu Maxiniuc wrote:
Persistence.closeSession(false) is called only from Persistence$Context.cleanup(). This is the end of session, when the user disconnects. The static TEMP-TABLES are never dropped, except in a SessionListener.Event.SESSION_CLOSING_NORMALLY event. But the event is only fired from closeSessionImpl(*). If we condition it with TemporaryBuffer.getTableCount(), once at least one static table was created, none of them can be dropped, because of the circular conditions.
The session used to be closed at the end of every transaction. I disconnected this, because:
- it was causing too much unnecessary database and runtime activity;
- it forced us to cache all query result set data for result sets obtained before the transaction started, but which had not yet been iterated;
- I wanted to take more advantage of re-using connection-scoped resources, such as prepared statements;
- I wanted to take more advantage of record caching, and killing the session would kill the cache as well.
However now, as you point out, we are not closing the session at all, ever, except when the user context ends. This is not just a problem for temporary tables, but also for persistent tables. This is not good, as it holds open database connections indefinitely and can cause memory leaks on the FWD runtime side (and possibly on the database side?).
I've been trying to figure out the best point at which to close the session. The scope has to be larger than application level transactions, because we can open prepared statements and run queries outside that scope, and this is what led to the unnatural results caching we were doing previously. We need to keep the JDBC connection open to allow the natural iteration of those results, even after one or more transactions have been opened and closed. For temp tables, we need to keep the connection around for the longer of (a) any table is in use (i.e., has data in it); or (b) any connection-bound resource (e.g., result set, prepared statement) is needed.
Right now, I am trying to identify the events/resources I need to hook to determine the proper scope at which to close the session. For temp-tables, we have the multiplex scope, which already is registered with the TM to allow us to figure out when tables are no longer needed. This is a good start. I'm working on determining the right scope w.r.t. the result set, prepared statement, etc. resources. It may be that the first buffer scope opened for a database is a good indicator of this. Another option is to detect the first result set oriented (i.e., not FIND) query opened for a particular database. Tracking by buffer scope may result in a wider scope than tracking by result set oriented query, because it would include buffers for FIND and even CAN-FIND.
If you have any other ideas, please let me know.
#275 Updated by Eric Faulhaber almost 4 years ago
Constantin, I need some guidance on my previous entry.
Specifically, I provisionally have decided on the approach where we tie the database session closing/cleanup to the scope of the first buffer opened for a database. I can see how this would work for regular procedures, as I can detect the first instance of an open buffer per database in BufferManager.scopeStart
, and drive the session closing after all other processing in BufferManager.scopeFinished
.
However, I don't have as good an understanding of the interaction of buffers with persistent procedures. You've added a lot of logic to BufferManager
for persistent procedures. Could you please help me understand whether and how my plan could be expanded to account for these as well? Thanks.
#276 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Specifically, I provisionally have decided on the approach where we tie the database session closing/cleanup to the scope of the first buffer opened for a database.
This doesn't look right to me. For persistent procedures, a buffer will live as long as that program lives. Moreover, think of dynamic buffers, which will live until they are deleted.
So, why not think in terms of 'how many buffers are still attached to this database connection'? I mean, keep a counter somewhere which tracks how many live buffers exist. And this counter follows the buffer's lifecycle. This should map to the 'tie to the first buffer opened for a database', which is normal for a non-persistent, non-dynamic application.
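The counter idea can be sketched as a per-database reference count (hypothetical names): the session becomes a candidate for closing only when the last live buffer for that database detaches:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: track live buffers per database; close the database session
// only when the count drops to zero.
public class BufferTracker
{
   private final Map<String, AtomicInteger> liveBuffers = new ConcurrentHashMap<>();

   /** Called when a buffer's outermost scope opens (or on creation for dynamic buffers). */
   public void bufferOpened(String database)
   {
      liveBuffers.computeIfAbsent(database, d -> new AtomicInteger()).incrementAndGet();
   }

   /**
    * Called when a buffer's outermost scope closes (or on deletion for
    * dynamic buffers).
    *
    * @return true if this was the last live buffer for the database,
    *         i.e., the session may now be closed.
    */
   public boolean bufferClosed(String database)
   {
      AtomicInteger count = liveBuffers.get(database);
      return count != null && count.decrementAndGet() == 0;
   }
}
```

For a non-persistent, non-dynamic application this degenerates to the "first buffer opened for a database" scope, as noted above.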
#277 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
For persistent procedures, a buffer will live as long as that program lives.
Hm, right, this seems like it will be a problem. I am concerned that a buffer associated with a persistent procedure may pin the session open too long, even though the session may not be needed anymore.
Moreover, think of dynamic buffers, which will live until they are deleted.
I hadn't considered dynamic buffers, but your idea seems like it would work well in this case. We would increment the reference count (of any buffer) on the outermost scope opening and decrement on it closing.
However, the persistent procedure issue concerns me. We can't keep a database session open for the life of the program.
Ovidiu, AFAICT, we do not do any explicit closing of resources associated with converted, legacy queries (e.g., result sets, prepared statements), correct?
#278 Updated by Eric Faulhaber almost 4 years ago
Ovidiu Maxiniuc wrote:
Last night I committed r11422.
This revision re-enables LockTableUpdater, ConnectTableUpdater and TransactionTableUpdater, as they were not working at all. Note that, in the absence of the index-dmo.xml file, I used the current configuration to get the active metadata tables.
I'm not sure what you mean by this. What current configuration, specifically? Hotel GUI server startup now fails with:
com.goldencode.p2j.cfg.ConfigurationException: Initialization failure
   at com.goldencode.p2j.main.StandardServer.hookInitialize(StandardServer.java:2003)
   at com.goldencode.p2j.main.StandardServer.bootstrap(StandardServer.java:964)
   at com.goldencode.p2j.main.ServerDriver.start(ServerDriver.java:483)
   at com.goldencode.p2j.main.CommonDriver.process(CommonDriver.java:444)
   at com.goldencode.p2j.main.ServerDriver.process(ServerDriver.java:207)
   at com.goldencode.p2j.main.ServerDriver.main(ServerDriver.java:860)
Caused by: java.lang.ExceptionInInitializerError
   at com.goldencode.p2j.util.TransactionManager.initializeMeta(TransactionManager.java:689)
   at com.goldencode.p2j.persist.DatabaseManager.initialize(DatabaseManager.java:1547)
   at com.goldencode.p2j.persist.Persistence.initialize(Persistence.java:872)
   at com.goldencode.p2j.main.StandardServer$11.initialize(StandardServer.java:1209)
   at com.goldencode.p2j.main.StandardServer.hookInitialize(StandardServer.java:1999)
   ... 5 more
Caused by: java.lang.RuntimeException: Error preparing MetaTrans implementation class
   at com.goldencode.p2j.persist.meta.TransactionTableUpdater.<clinit>(TransactionTableUpdater.java:116)
   ... 10 more
Caused by: java.lang.ClassNotFoundException: com.goldencode.hotel.dmo._meta.MetaTrans
   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
   at java.lang.Class.forName0(Native Method)
   at java.lang.Class.forName(Class.java:264)
   at com.goldencode.p2j.persist.meta.TransactionTableUpdater.<clinit>(TransactionTableUpdater.java:101)
   ... 10 more
The discussion of modifying the approach on the metadata DMOs notwithstanding, the current approach calls for the absence of MetaTrans to not be fatal. It just means the _Trans table is not used by that application, and so the TransactionTableUpdater will be disabled (i.e., TransactionTableUpdater.isEnabled should return false because TransactionTableUpdater.dmoClass is null). However, now the missing MetaTrans DMO is a fatal error that prevents the server from starting. Hotel GUI does not use _Trans, and this should not fail, but it does, because we no longer eat the ClassNotFoundException in the class initializer in 11422.
#279 Updated by Ovidiu Maxiniuc almost 4 years ago
Eric Faulhaber wrote:
The discussion of modifying the approach on the metadata DMOs notwithstanding, the current approach calls for the absence of MetaTrans to not be fatal. It just means the _Trans table is not used by that application, and so the TransactionTableUpdater will be disabled (i.e., TransactionTableUpdater.isEnabled should return false because TransactionTableUpdater.dmoClass is null). However, now the missing MetaTrans DMO is a fatal error that prevents the server from starting. Hotel GUI does not use _Trans, and this should not fail, but it does, because we no longer eat the ClassNotFoundException in the class initializer in 11422.
Sorry, because of some linking issue (missing jar) I was not able to fully test the commit (I noted that in commit message).
The initialization of _meta VST support had to be rewritten, as it was completely disabled. However, the dmoClass should have been set to null in this case, as you noted.
#280 Updated by Eric Faulhaber almost 4 years ago
- File TransactionTableUpdater.patch added
No worries. The patch I used to get past it is attached. If you're ok with it (I didn't test the case where the MetaTrans
table exists), I'll add it to my next commit, or you can.
#281 Updated by Igor Skornyakov almost 4 years ago
Gentlemen,
I'm working on metatables support right now. Maybe it makes sense for me to use your branch?
Thank you.
#282 Updated by Eric Faulhaber almost 4 years ago
We have not changed the approach to "static" metadata tables in this branch, and we're making only minimal changes to the "live" metadata support (i.e., that which reflects changing runtime state), in order to make sure it still works. I am not planning on making changes to the metadata design in this branch; there is more change than I would like here already.
My understanding is that you are working on figuring out the meaning of a number of fields and making updates to the static tables at the moment. For that, a branch that is closer to current trunk is probably best.
#283 Updated by Ovidiu Maxiniuc almost 4 years ago
Eric Faulhaber wrote:
Ovidiu, AFAICT, we do not do any explicit closing of resources associated with converted, legacy queries (e.g., result sets, prepared statements), correct?
You mean, the static resources? I guess they (not sure if this is general) register themselves as finalizable and get the opportunity to release resources when the scope they registered for ends.
If you have any other ideas, please let me know.
At this moment we have some Session
-related events: SESSION_CLEANUP, SESSION_CLOSING_NORMALLY and SESSION_CLOSING_WITH_ERROR.
I think we are talking about SESSION_CLOSING_NORMALLY now. This means the client peacefully disconnected and, normally, all its resources are no longer needed. The dynamic temp-tables are dropped, but not the static ones, except if the closing session is a wrapper of a reusable session. In that case, we may allow the static _temp tables to survive. The next client which gets it re-wrapped has the tables already there, but needs to use a different _multiplex. At any rate, the records of the first client are stale and must be dropped anyway. And, before the second client attempts to create the tables, it should first test whether they already exist. To simplify the management and avoid possible memory leaks, I think we should drop the static _temp tables at SESSION_CLOSING_NORMALLY.
#284 Updated by Ovidiu Maxiniuc almost 4 years ago
Eric Faulhaber wrote:
We have not changed the approach to "static" metadata tables in this branch, and we're making only minimal changes to the "live" metadata support (i.e., that which reflects changing runtime state), in order to make sure it still works. I am not planning on making changes to the metadata design in this branch; there is more change than I would like here already.
My understanding is that you are working on figuring out the meaning of a number of fields and making updates to the static tables at the moment. For that, a branch that is closer to current trunk is probably best.
Yes, I have some changes for metatables in a pending update, but they are merely fixes to get them running with the new persistence. Nothing is changed related to 'minimal' structure and the way they are constructed and populated. Only the initialization and registration with the database subsystem.
#285 Updated by Igor Skornyakov almost 4 years ago
Eric Faulhaber wrote:
We have not changed the approach to "static" metadata tables in this branch, and we're making only minimal changes to the "live" metadata support (i.e., that which reflects changing runtime state), in order to make sure it still works. I am not planning on making changes to the metadata design in this branch; there is more change than I would like here already.
My understanding is that you are working on figuring out the meaning of a number of fields and making updates to the static tables at the moment. For that, a branch that is closer to current trunk is probably best.
OK, thank you.
#286 Updated by Eric Faulhaber almost 4 years ago
Ovidiu Maxiniuc wrote:
At this moment we have some
Session
-related events: SESSION_CLEANUP, SESSION_CLOSING_NORMALLY and SESSION_CLOSING_WITH_ERROR.
I think we are talking about SESSION_CLOSING_NORMALLY now. This means the client peacefully disconnected and, normally, all its resources are no longer needed. The dynamic temp-tables are dropped, but not the static ones, except if the closing session is a wrapper of a reusable session. In that case, we may allow the static _temp tables to survive. The next client which gets it re-wrapped has the tables already there, but needs to use a different _multiplex. At any rate, the records of the first client are stale and must be dropped anyway. And, before the second client attempts to create the tables, it should first test whether they already exist. To simplify the management and avoid possible memory leaks, I think we should drop the static _temp tables at SESSION_CLOSING_NORMALLY.
Yes, all temp-tables, static or dynamic, and all indexes on them must go away when the session is closed. These are local (i.e., private to the connection) tables and they do not survive the connection closing, nor do we want them to. The indexes cannot be declared local, which complicates things a bit, but we explicitly drop all tables and indexes with DDL (or we used to until I regressed this). New tables and indexes are created when a new session is created. We start every session with a clean slate in terms of temporary tables and indexes on them.
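For illustration only (the actual DDL FWD generates differs, and the names below are hypothetical), the asymmetry described above, where the table can be LOCAL but its indexes cannot and therefore need a per-session __&lt;n&gt; suffix and an explicit DROP, looks roughly like:

```java
// Sketch: H2 supports LOCAL only on CREATE TEMPORARY TABLE, so index
// names carry a per-session suffix and must be dropped explicitly
// before the connection closes.
public class TempTableDdl
{
   public static String createTable(String table)
   {
      return "CREATE LOCAL TEMPORARY TABLE " + table + " (recid BIGINT PRIMARY KEY)";
   }

   public static String createIndex(String table, String index, int session)
   {
      // per-session suffix avoids name clashes, since indexes are not LOCAL
      return "CREATE INDEX " + index + "__" + session + " ON " + table + " (recid)";
   }

   public static String dropIndex(String index, int session)
   {
      return "DROP INDEX IF EXISTS " + index + "__" + session;
   }
}
```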
All this already works this way in the current version of FWD and I don't want to change it, I just want to fix that I regressed it. What I want to do is change the timing and circumstances of when the session is closed. We have been closing it too aggressively until now (at every transaction end).
BTW, the multiplexing is about using the same physical table within the same connection as multiple logical tables, when using the same temp-table (i.e., DMO interface/class) in different procedures. We don't re-use the physical temp-table with a different multiplex ID in a new session/connection. It is destroyed before the connection is closed.
My question was really about the determination of when to close the session and clean up, now that we are not firing SESSION_CLOSING_NORMALLY at the end of every transaction. Actually, it was even broader than that, because SESSION_CLOSING_NORMALLY will now really only be about temp-tables, as I plan to remove all the result set caching that the higher level queries were doing in response to this event.
I want to make sure instead that the scope of the code which uses these results is taken into account when deciding whether the session can be closed. This will obviate the need for the result set caching altogether, and will allow the iteration of results to continue naturally.
The session life cycle needs to be broad enough to not kill the connection while connection-bound resources are still in use, but narrow enough that it is not held open indefinitely, leaking JDBC and database resources.
I am concerned that the persistent procedures are going to be a problem in this regard.
Anyway, was just looking for ideas on whether there were any other connection-bound resources to take into account, besides prepared statements and open/in-use result sets. Don't work on this, though, other than posting ideas. Please continue trying to track down the reason we have different DMO instances with the same IDs we are trying to associate with the session, so we can revert the recent change to Session.save
and enforce the one-DMO-instance-per-ID rule again.
#287 Updated by Ovidiu Maxiniuc almost 4 years ago
I do not know the exact H2 implementation, but it seems logical to me that once the table is LOCAL, its indexes would inherit the attribute. Otherwise, when the session ends it would take the table with it but the indexes would remain? That's nonsense.
I understand that closing the session at the end of each transaction was useful with Hibernate, to help free resources. If we are talking about temp-tables, it really does not make much sense at the moment. Since the tables are LOCAL, closing the session means losing all tables and the data in them. This is not what we want. So I think the only reasons for closing the session are:
- there is a critical error - although I am not sure about this: we should try isolating the cause and, if not possible, drop the session/client;
- the session ends normally.
For the permanent database, closing the session at the end of each transaction is not a problem except for the caching of results, so we should also avoid that extra management.
The issue from note 257 was caused by the database session/connection surviving the embedded FWD client session. This happened because the client is actually the agent, so the connection is bound to the FWD client context. The _temp database was not properly cleaned up and the next virtual client tried to recreate the tables.
#288 Updated by Ovidiu Maxiniuc almost 4 years ago
- fixed regression in _meta tables;
- skip re-hydration of records if a non STALE copy is found in session cache.
#289 Updated by Eric Faulhaber almost 4 years ago
Constantin, can you please point me at some test cases which use buffers in persistent procedures?
#290 Updated by Eric Faulhaber almost 4 years ago
Tried an ETF run. Server startup reported these errors:
[05/05/2020 09:07:19 GMT] (com.goldencode.p2j.persist.meta.MetadataManager:WARNING) Failed to load _meta DMO: <app>.dmo._meta.MetaFile
[05/05/2020 09:07:19 GMT] (com.goldencode.p2j.persist.meta.MetadataManager:WARNING) Failed to load _meta DMO: <app>.dmo._meta.MetaUser
[05/05/2020 09:07:19 GMT] (com.goldencode.p2j.persist.meta.MetadataManager:WARNING) Failed to load _meta DMO: <app>.dmo._meta.MetaDb
[05/05/2020 09:07:19 GMT] (com.goldencode.p2j.persist.meta.MetadataManager:WARNING) Failed to load _meta DMO: <app>.dmo._meta.MetaIndex-field
The last one looks especially strange; I don't know how we get that class name in conversion.
Then it got stuck for a very long time processing the index metadata:
"main" - Thread t@1 java.lang.Thread.State: WAITING at sun.misc.Unsafe.park(Native Method) - parking to wait for <ed5140e> (a java.util.concurrent.FutureTask) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:429) at java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.goldencode.p2j.persist.Persistence.queryAllIndexData(Persistence.java:1152) at com.goldencode.p2j.persist.DatabaseManager.registerDatabase(DatabaseManager.java:2495) - locked <1a6403e> (a java.lang.Object) at com.goldencode.p2j.persist.DatabaseManager.initialize(DatabaseManager.java:1538) - locked <1a6403e> (a java.lang.Object) at com.goldencode.p2j.persist.Persistence.initialize(Persistence.java:872) at com.goldencode.p2j.main.StandardServer$11.initialize(StandardServer.java:1209) at com.goldencode.p2j.main.StandardServer.hookInitialize(StandardServer.java:1999) at com.goldencode.p2j.main.StandardServer.bootstrap(StandardServer.java:964) at com.goldencode.p2j.main.ServerDriver.start(ServerDriver.java:483) at com.goldencode.p2j.main.CommonDriver.process(CommonDriver.java:444) at com.goldencode.p2j.main.ServerDriver.process(ServerDriver.java:207) at com.goldencode.p2j.main.ServerDriver.main(ServerDriver.java:860) Locked ownable synchronizers: - None
It eventually pushed its way through the index metadata queries and the rest of server startup, but failed because I had not set up the spawner correctly (was still set for hotel_gui testing). I will correct this and try again later.
#291 Updated by Eric Faulhaber almost 4 years ago
I committed rev 11426, which attempts to fix the session closing issue. However, there is still a problem, which I am debugging.
#292 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Constantin, can you please point me at some test cases which use buffers in persistent procedures?
Please see dyn_tt_with_persistent.p, persistent_with_dyn_tt.p, persistent_with_tt.p and static_tt_with_persistent.p in testcases/uast/buffer rev 2188.
Execute dyn_tt_with_persistent.p and static_tt_with_persistent.p - they run another persistent program which creates a static or dynamic temp-table, which gets exposed to the caller. In this case, the buffer's initial scope is not closed until either the program gets deleted (for static tt) or the temp-table gets deleted (for dynamic tt).
If you need something for permanent tables, too, please let me know.
#293 Updated by Eric Faulhaber almost 4 years ago
Constantin, I am hitting a resource cleanup problem with a persistent procedure in Hotel GUI. The procedure is the converted adm2/smart.p program. It defines a non-global buffer:
Admlink_1_1.Buf admlink = TemporaryBuffer.define(Admlink_1_1.Buf.class, "admlink", "ADMLink", false, false);
During TM.processFinalizables for the Smart.execute external procedure popScope processing, it is determined that this resource requires delayed cleanup, and it is added to the ProcedureManager's ProcedureData.finalizables set for the Smart procedure referent, via ProcedureManager$ProcedureHelper.addFinalizable.
However, upon exiting the Hotel GUI session, these finalizables are never iterated, so the TemporaryBuffer for this temp-table never has a chance to clean up after this resource. As a result, we do not drop the temporary table nor its indexes from the database. The existence of the index is problematic, because it is not local to the connection, and thus we are leaking indexes in the database.
Branch 4011a currently is based off trunk 11328. I looked at ProcedureManager in the latest trunk to see if something had changed in this area. Although there have been quite a few updates to the file, I did not see anything that looked like it would change this behavior.
I will try to work around this by dropping all "left over" database resources at the end of a context's life, but it would be better to get a deleted call at the right moment, to do proper cleanup.
Please let me know if you have any ideas in this regard. Thanks.
#294 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Constantin, I am hitting a resource cleanup problem with a persistent procedure in Hotel GUI. The procedure is the converted adm2/smart.p program. It defines a non-global buffer: [...]
During TM.processFinalizables for the Smart.execute external procedure popScope processing, it is determined that this resource requires delayed cleanup, and it is added to the ProcedureManager's ProcedureData.finalizables set for the Smart procedure referent, via ProcedureManager$ProcedureHelper.addFinalizable.
However, upon exiting the Hotel GUI session, these finalizables are never iterated, so the TemporaryBuffer for this temp-table never has a chance to clean up after this resource. As a result, we do not drop the temporary table nor its indexes from the database. The existence of the index is problematic, because it is not local to the connection, and thus we are leaking indexes in the database.
This may be just that the program handle never gets deleted explicitly. And this is safe in 4GL, as the client process just ends. It is safe in FWD too, in terms of freeing the context-local resources, as these get destroyed. What you have here is an external dependency on the context-local state.
I will try to work around this by dropping all "left over" database resources at the end of a context's life, but it would be better to get a deleted call at the right moment, to do proper cleanup.
If there is no explicit DELETE statement for the handle in the 4GL code, there is no other 'right time' to clean it than during context cleanup.
#295 Updated by Eric Faulhaber almost 4 years ago
Ovidiu, I've committed rev 11427, which partially fixes persistence context cleanup. However, I found a problem with the temp-table DDL and I need your help to fix it, as you understand the new code in that area better.
For the legacy temp-table support, when we generate DDL statements to create/drop temporary tables and the indexes on temporary tables, the names of the tables and indexes need to have a unique suffix. We lost that suffix in 4011a. I noticed the following code in DmoMetadataManager.prepareDDLStatements:
String sqlIdxName = "idx__" + index.name() + "__1"; // TODO: why [idx__ ...__1] ?
To answer the TODO question, the prefix is just a constant to make index names somewhat uniform; the suffix should not be hard-coded to "__1" as the question suggests, but rather should be generated as follows:
"__${" + TempTableHelper.UNIQUE_SUFFIX_KEY + "}"
The constant is defined as "1", so when condensed, the suffix looks like this:
"__${1}"
When we execute these statements, we use StringHelper.sweep to replace the ${1} placeholder with the user's SecurityManager session ID, to ensure the resource names are unique. So, for example, tt5 would become tt5__7, if the user's session ID is 7.
Right now, without this change, there cannot be more than one user logged into Hotel GUI at a time, because both will create the same database resources and the second will fail. The key thing to understand is that each user session has its own temp-table (and index) instances in the database; these are not shared resources.
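The placeholder mechanism described above can be sketched as follows. The sweep helper here is a simplified stand-in for StringHelper.sweep, and the DDL string is illustrative, not the actual generated statement:

```java
// Simplified stand-in for the StringHelper.sweep placeholder substitution
// described above (illustrative only; the real FWD API may differ).
public class SuffixSweep
{
   /** Replace every ${key} placeholder in ddl with the given value. */
   public static String sweep(String ddl, String key, String value)
   {
      return ddl.replace("${" + key + "}", value);
   }

   public static void main(String[] args)
   {
      // Hypothetical generated index DDL carrying the unique-suffix placeholder:
      String ddl = "create index idx__pk__${1} on tt5 (recid)";

      // At execution time the placeholder is replaced with the user's
      // session ID (7 in this example), making the name session-unique.
      System.out.println(sweep(ddl, "1", "7"));
      // -> create index idx__pk__7 on tt5 (recid)
   }
}
```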
I didn't just add the suffix code myself, because I'm not quite clear on some of the conditional logic in prepareDDLStatements, so I prefer to explain the requirement, and let you figure out the correct place(s) it needs to be applied. Thanks.
#296 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Constantin, I am hitting a resource cleanup problem with a persistent procedure in Hotel GUI. The procedure is the converted adm2/smart.p program. It defines a non-global buffer:
Eric, thanks for this question, it made something tick for #4187-58. For some reason permanent buffers affect the DELETE on an invalid handle which previously held a persistent program with permanent buffers.
#297 Updated by Ovidiu Maxiniuc almost 4 years ago
Eric,
Your statements are correct. I added the code exactly as you described it and continued some investigations. I noticed that in the trunk version the DDLs for creating temp-tables are different: normal temp-tables are declared as local, as we discussed above, but the dirty temp-tables are not. In 4011a both of them are local. This is not fine, as the latter should expose its content. This means the DmoMetadataManager has another possible flaw, since the DDLs it generates are cached on a per-DMO basis. Yet, they should not overlap, since the dirty one uses the permanent schema interfaces, right?
Anyway, I should probably drop the DmoMetadataManager local cache, since the TempTableHelper also keeps a static cache in the form of a HashMap which is never cleaned up. Shouldn't we drop at least the dynamic tables from TempTableHelper.cache?
#298 Updated by Eric Faulhaber almost 4 years ago
It doesn't matter much to me where we keep the cache... wherever you think it makes the most sense is fine.
As to the different statements for legacy temp-tables and dirty share tables, you are correct. The latter must not be local and their names do not need the unique suffix. Since they are inherently local, temporary DMOs will never have dirty share tables associated with them. Dirty share tables are only created to "mirror" persistent tables.
#299 Updated by Eric Faulhaber almost 4 years ago
Ovidiu Maxiniuc wrote:
... the TempTableHelper also keeps a static cache in the form of a HashMap which is never cleaned up. Shouldn't we drop at least the dynamic tables from TempTableHelper.cache?
Forgot to address this part. Well, I didn't remove these previously, because I think it is likely that we will see the same dynamic temp-tables defined repeatedly for many applications. Consider, for example, an appserver API which delivers results with a dynamic temp-table. The same definitions will be created each time a specific API is hit. Also, we have no way to unload the dynamically assembled Java classes at this time, and I didn't see much to be saved by discarding only the SQL. If you can think of a compelling reason we should, I am open to changing this.
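The rationale above - identical dynamic temp-table definitions recurring across appserver calls - is the classic case for keyed memoization. A minimal sketch, where the class and its key/value types are assumptions rather than the actual TempTableHelper implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

/**
 * Illustrative memoizing cache for generated artifacts (e.g. temp-table SQL),
 * keyed by a canonical definition string. This is a hypothetical sketch,
 * not the actual TempTableHelper code.
 */
public class DefinitionCache
{
   private final Map<String, String> cache = new ConcurrentHashMap<>();
   private final Function<String, String> generator;

   // counts how many times generation actually ran (for illustration)
   final AtomicInteger generations = new AtomicInteger();

   public DefinitionCache(Function<String, String> generator)
   {
      this.generator = generator;
   }

   /** Generate on first request; every repeat hit is served from the cache. */
   public String get(String definition)
   {
      return cache.computeIfAbsent(definition, def -> {
         generations.incrementAndGet();
         return generator.apply(def);
      });
   }
}
```

The trade-off discussed above is exactly this: the cache saves repeated generation when the same API is hit again, at the cost of entries that are never evicted.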
#300 Updated by Ovidiu Maxiniuc almost 4 years ago
The following is a short discussion from email, with my last replies:
Eric Faulhaber wrote:
On 5/6/20 4:33 PM, Ovidiu Maxiniuc wrote:
This week I also converted the <app> server and imported the sample data into a new database whose schema was generated by 4011a. The conversion was smooth, but for the import I had to do some adjustments; my last update had broken it. I have the fix as a small patch.
I might have been a bit eager to get the import done, so I raised the threads to 5. Also, I could not leave the PC working overnight because of the unstable weather, so I suspended it. The timed import is this:
Elapsed job time: 16:56:13.274
real 1016m14.189s
user 424m14.206s
sys 96m52.129s
I suppose you also ran the import before attempting to start the server. Was it OK for you?
I did not. I restored from a master template in PostgreSQL. Actually, I can't remember the last time I ran import with this data.
This must be done, as some things changed: id/recid as PK, identifier/id, the datetimetz sql type, some fields with reserved sql names: from, order, limit. From my quick comparison, only the PKs have changed for this application.
I investigated the "Failed to load _meta DMO" errors. There are two different causes. The fixes are on the way.
Thanks.
Indeed, it takes a lot of time (20+ minutes) to read the index metadata from the database. I am not sure what goes wrong here. Yet, I wonder whether we still need to do this. We have all index-related data as annotations; do we need a confirmation?
Maybe not. I originally wrote this when the database was the only place index information was available at runtime. But if we remove it, we'll need something to replace it, to move the information from annotations into the form we need it at runtime.
I will try to optimize this or completely remove it. I think we have all we need in DMO annotations.
In the long run, we probably need a background process to ensure the database indexes agree with the annotations. If these get out of sync, we will have runtime problems. But not for now. The existing process doesn't do this anyway; it just treats the database schema as authoritative.
I find it strange, because the code itself is not new, but now it seems to do a lot of communication with the database server. All my cores were maxed during this period.
Unfortunately, the server stopped because of missing aspects.ltw class. I will add the jar to classpath and try again.
Hm, IIRC, this should not be in play unless you are actively tracing. Check that this is not the case in the server.sh script.
My server.sh has been unchanged for a few years now; the jar was added relatively recently. How is it configured to trace what it is tracing? (Sorry, I have no clue about the internals here.)
#301 Updated by Eric Faulhaber almost 4 years ago
Please prioritize the temp-table name suffix fix. I'd like to see this go in as soon as possible, even if nothing else (e.g., temp-table SQL statement caching or the local vs. non-local issue) is resolved along with it.
#302 Updated by Eric Faulhaber almost 4 years ago
Eric Faulhaber wrote:
Please prioritize the temp-table name suffix fix. I'd like to see this go in as soon as possible, even if nothing else (e.g., temp-table SQL statement caching or the local vs. non-local issue) is resolved along with it.
I will take a crack at this myself, at least for a short-term fix. Looks like the way we were doing it before was just making the index names unique, not the table names. The tables already are local and won't cause a name conflict across connections/sessions. The indexes are not local. This keeps the DMO-to-table mapping a lot simpler; we don't have to change the DML SQL.
#303 Updated by Eric Faulhaber almost 4 years ago
Ovidiu Maxiniuc wrote:
This must be done, as some things changed: id/recid as PK, identifier/id, the datetimetz sql type, some fields with reserved sql names: from, order, limit. From my quick comparison, only the PKs have changed for this application.
Instead of re-importing the data, I am testing our new feature of being able to specify the primary key name, in this case to be backward compatible with id. I am doing this by setting:
<schema primaryKeyName="id"> ...
in p2j.cfg.xml and:
<node class="container" name="persistence"> <node class="string" name="primary-key-name"> <node-attribute name="value" value="id"/> </node> </node>
in directory.xml.
#304 Updated by Eric Faulhaber almost 4 years ago
I committed rev 11427, which adds the placeholder suffix at the end of temp-table index names. Note that this short term fix does not address the related issues you pointed out (cache changes, separate DDL for dirty database). I would like to hear your further thoughts on those.
Even with this update, there are still issues with using multiple user sessions simultaneously. I think we are still not dropping the temp-tables on context cleanup in some cases.
Also, the tree control on the "Add Room..." dialog will not populate and now hangs the client at a WAIT-FOR. I suspect one of my recent changes regressed something here.
I will look into the context cleanup and the tree control issues further, later today.
#305 Updated by Ovidiu Maxiniuc almost 4 years ago
- moved DDL generation from DmoMetadataManager to TempTableHelper and fixed the index suffix issue;
- fixed the loading of meta tables at startup, if the project was configured with them. There were some issues with the _file metatable, which prepares the template records, so the DMOs need to be registered at this moment. This might cause a bit of delay when the server starts up, but these tasks will be skipped when the DMOs are accessed again;
- fixed a regression in the import process;
- fixed initialization and setters for raw type fields.
The standalone clients can now sequentially connect to the server and the table/index cleanup seems correct. The Add rooms.. dialog does not hang for me, but a delay of about 5 seconds is perceived before it loads. Another issue here is that the tree is broken; only a single type of room can be seen, multiple times. The web client hangs because of an "Unexpected token ? in JSON". It seems familiar.
#306 Updated by Eric Faulhaber almost 4 years ago
Ovidiu Maxiniuc wrote:
The standalone clients now can sequentially connect to server and the table/index cleanup seems correct.
This still fails badly for me. I am using virtual desktop mode in the Chrome browser. My recreate is:
- log into Hotel GUI, both the OS login and the application login
- open a new browser tab and repeat the previous step
Step 2 results in an error in the log that table tt5__<N> already exists. The UI in tab 2 hangs. Going back to tab 1 and trying to exit results in errors about various dynamic temp-tables not existing.
The Add rooms.. dialog does not hang for me, but a delay of about 5 seconds is perceived before it loads. Another issue here is that the tree is broken; only a single type of room can be seen, multiple times. The web client hangs because of an "Unexpected token ? in JSON". It seems familiar.
The dialog is still not displaying for me at all and the UI hangs.
Before you ask, yes, I ran ant deploy.prepare after rebuilding ;-)
I am going to reconvert the whole application from scratch and try again. There must be something wrong with my environment.
#307 Updated by Eric Faulhaber almost 4 years ago
Ovidiu, I'm getting this during the import step:
[java] ERROR:
[java] com.goldencode.p2j.pattern.TreeWalkException: ERROR! Active Rule:
[java] -----------------------
[java] RULE REPORT
[java] -----------------------
[java] Rule Type : WALK
[java] Source AST: [ GuestIdType ] DATA_MODEL/CLASS/ @0:0 {313532612732}
[java] Copy AST : [ GuestIdType ] DATA_MODEL/CLASS/ @0:0 {313532612732}
[java] Condition : countResult = #(long) session.createSQLQuery(queryText).uniqueResult(null)
[java] Loop : false
[java] --- END RULE REPORT ---
[java]
[java] at com.goldencode.p2j.pattern.PatternEngine.run(PatternEngine.java:1070)
[java] at com.goldencode.p2j.pattern.PatternEngine.main(PatternEngine.java:2110)
[java] Caused by: com.goldencode.expr.ExpressionException: Expression error [countResult = #(long) session.createSQLQuery(queryText).uniqueResult(null)] [CLASS id=313532612732]
[java] at com.goldencode.p2j.pattern.AstWalker.walk(AstWalker.java:275)
[java] at com.goldencode.p2j.pattern.AstWalker.walk(AstWalker.java:210)
[java] at com.goldencode.p2j.pattern.PatternEngine.apply(PatternEngine.java:1633)
[java] at com.goldencode.p2j.pattern.PatternEngine.processAst(PatternEngine.java:1531)
[java] at com.goldencode.p2j.pattern.PatternEngine.processAst(PatternEngine.java:1479)
[java] at com.goldencode.p2j.pattern.PatternEngine.run(PatternEngine.java:1034)
[java] ... 1 more
[java] Caused by: com.goldencode.expr.ExpressionException: Expression error [countResult = #(long) session.createSQLQuery(queryText).uniqueResult(null)]
[java] at com.goldencode.expr.Expression.getCompiledInstance(Expression.java:681)
[java] at com.goldencode.expr.Expression.execute(Expression.java:380)
[java] at com.goldencode.p2j.pattern.Rule.apply(Rule.java:497)
[java] at com.goldencode.p2j.pattern.Rule.executeActions(Rule.java:745)
[java] at com.goldencode.p2j.pattern.Rule.coreProcessing(Rule.java:712)
[java] at com.goldencode.p2j.pattern.Rule.apply(Rule.java:534)
[java] at com.goldencode.p2j.pattern.Rule.executeActions(Rule.java:745)
[java] at com.goldencode.p2j.pattern.Rule.coreProcessing(Rule.java:712)
[java] at com.goldencode.p2j.pattern.Rule.apply(Rule.java:534)
[java] at com.goldencode.p2j.pattern.RuleContainer.apply(RuleContainer.java:585)
[java] at com.goldencode.p2j.pattern.RuleSet.apply(RuleSet.java:98)
[java] at com.goldencode.p2j.pattern.AstWalker.walk(AstWalker.java:262)
[java] ... 6 more
[java] Caused by: com.goldencode.expr.CompilerException: Error parsing expression
[java] at com.goldencode.expr.Compiler.process(Compiler.java:375)
[java] at com.goldencode.expr.Compiler.compile(Compiler.java:287)
[java] at com.goldencode.expr.Expression.getCompiledInstance(Expression.java:673)
[java] ... 17 more
[java] Caused by: com.goldencode.expr.UnresolvedSymbolException: No function resolved for instance (com.goldencode.p2j.persist.orm.SQLQuery createSQLQuery(java.lang.String) @class com.goldencode.p2j.persist.orm.Session) and method (name == uniqueResult, signature == ([null]))
[java] at com.goldencode.expr.ExpressionParser.method(ExpressionParser.java:2029)
[java] at com.goldencode.expr.ExpressionParser.primary_expr(ExpressionParser.java:845)
[java] at com.goldencode.expr.ExpressionParser.un_expr(ExpressionParser.java:1586)
[java] at com.goldencode.expr.ExpressionParser.prod_expr(ExpressionParser.java:1441)
[java] at com.goldencode.expr.ExpressionParser.sum_expr(ExpressionParser.java:1372)
[java] at com.goldencode.expr.ExpressionParser.shift_expr(ExpressionParser.java:1296)
[java] at com.goldencode.expr.ExpressionParser.compare_expr(ExpressionParser.java:1211)
[java] at com.goldencode.expr.ExpressionParser.equality_expr(ExpressionParser.java:1141)
[java] at com.goldencode.expr.ExpressionParser.bit_and_expr(ExpressionParser.java:1093)
[java] at com.goldencode.expr.ExpressionParser.bit_xor_expr(ExpressionParser.java:1051)
[java] at com.goldencode.expr.ExpressionParser.bit_or_expr(ExpressionParser.java:1009)
[java] at com.goldencode.expr.ExpressionParser.log_and_expr(ExpressionParser.java:967)
[java] at com.goldencode.expr.ExpressionParser.expr(ExpressionParser.java:918)
[java] at com.goldencode.expr.ExpressionParser.expression(ExpressionParser.java:673)
[java] at com.goldencode.expr.Compiler.parse(Compiler.java:481)
[java] at com.goldencode.expr.Compiler.process(Compiler.java:341)
[java] ... 19 more
Do you already have this fixed locally, and maybe not checked in?
#308 Updated by Ovidiu Maxiniuc almost 4 years ago
Eric,
Please grab r11429 LE: r11430. I have a separate directory for commits and I forgot to compare and merge in the changes from the rules folder.
#309 Updated by Ovidiu Maxiniuc almost 4 years ago
New update, r11431.
There was an issue with _temp records fetched by the loader. The hydration wasn't setting up _multiplex, since this field was not part of the query. As a result, the fetched records could not be written back (updated) when needed.
#310 Updated by Eric Faulhaber almost 4 years ago
Thanks for the fixes.
I rebuilt and converted Hotel GUI from scratch. The import worked, but unfortunately, I still have the runtime problems (both the "table already exists on second login" error and the failing "Add Room..." dialog).
However, the good news (sort of) is that I checked out Hotel GUI r202 and converted/ran with trunk 11328 (the current base for 4011a) as a baseline, and we have an NPE/crash when trying to open the "Add Room..." dialog, so this already is not working (albeit in a different way) in that version. The NPE seems to be a FWD runtime bug rather than an application bug, though, so, we still have something to fix here in 4011a. But I don't want to chase it until it can be shown to work normally in the baseline. I may temporarily patch the NPE on top of trunk 11328 by looking at a later version where it's fixed, then see what we've got as a baseline behavior for this dialog. In any event, it is disturbing that you and I are seeing substantially different behavior in this area.
The baseline works fine when logging in multiple times, so I'm still chasing this problem down in 4011a.
Some other good news (sort of) is that the performance of the baseline in Chrome is similarly bad, compared to what I am seeing with 4011a. I think this is a browser issue. I recall someone (Hynek or Eugenie?) mentioning a new version of Chrome has a severe Javascript performance regression, so that may be what I'm seeing here. I'm running Chrome for Linux Version 81.0.4044.138 (Official Build) (64-bit).
Running Hotel GUI r202 in Firefox and in the Swing client shows much better performance for both 4011a and the baseline, more in line with what I remember it being. So, it turns out I can't detect any deterioration in performance of 4011a compared to the baseline on any of these 3 client platforms. I think we're ok in this regard.
#311 Updated by Eric Faulhaber almost 4 years ago
Ovidiu, w.r.t. the ETF import taking almost 17 hours, is that normal for that machine? It seems really long to me. Without having done any profiling, I assume this is because we are not using SQL batching for the inserts. The original import code was written for Hibernate's delayed Session.save implementation, not the new one, which is optimized for legacy runtime behavior. We probably need a Session.insert(List<T> dmos) method which uses SQL batches behind the scenes, to support this use case. It is not the highest priority requirement (it falls behind functional issues), but if we determine through some profiling that this is a big bottleneck in import, we will need to fix it soon.
We also don't need the session cache for this use case, because we never need a record again once it is persisted. So any time spent caching and clearing the cache is wasted. I've been coming to the conclusion that for the runtime, we probably need to (a) separate the cache from Session, so we can pass a cache from an old session instance to a new one without moving records individually; and (b) allow a cache-less Session, for the import (and possibly other) use cases. But this cache thing is probably negligible in terms of import performance. I bet the lack of batching is the issue.
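The batching idea above can be sketched generically. The Batcher class and its parameters are illustrative assumptions, not the proposed Session.insert API; a real Session.insert(List<T>) would hand each batch to JDBC PreparedStatement.addBatch/executeBatch, but the accumulate-and-flush pattern is the same:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

/**
 * Illustrative accumulate-and-flush batcher (hypothetical, not FWD code).
 * A real implementation would feed each batch to PreparedStatement.addBatch
 * and PreparedStatement.executeBatch instead of a Consumer.
 */
public class Batcher<T>
{
   private final int batchSize;
   private final Consumer<List<T>> flusher;
   private final List<T> pending = new ArrayList<>();

   public Batcher(int batchSize, Consumer<List<T>> flusher)
   {
      this.batchSize = batchSize;
      this.flusher   = flusher;
   }

   /** Queue one record; flush automatically when the batch is full. */
   public void add(T record)
   {
      pending.add(record);
      if (pending.size() >= batchSize)
      {
         flush();
      }
   }

   /** Push any queued records to the flusher (one round trip per batch). */
   public void flush()
   {
      if (!pending.isEmpty())
      {
         flusher.accept(new ArrayList<>(pending));
         pending.clear();
      }
   }
}
```

The win for import is that N inserts cost roughly N / batchSize round trips to the database instead of N.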
#312 Updated by Ovidiu Maxiniuc almost 4 years ago
I tried connecting with two clients at once: the Swing client first (only connecting, no other clicks) and then the embedded/web client. The former looks OK, but the latter only displayed the Hotel capacity graph and the lower-right table, with correct values. The main frame failed to show. Looking at the server's log I can see this:
[05/08/2020 12:55:34 EEST] (com.goldencode.p2j.persist.Persistence:SEVERE) Error executing SQL statement: [create local temporary table tt5 ( recid bigint not null, _multiplex integer not null, linksource varchar, linktarget varchar, linktype varchar, __ilinktype varchar as upper(rtrim(linktype)), primary key (recid) ) transactional;, create index idx_mpid__tt5__5 on tt5 (_multiplex);]
com.goldencode.p2j.persist.PersistenceException: Failed to execute SQL statement.
at com.goldencode.p2j.persist.Persistence.executeSQLBatch(Persistence.java:3884)
at com.goldencode.p2j.persist.Persistence.executeSQLBatch(Persistence.java:2950)
at com.goldencode.p2j.persist.TemporaryBuffer$Context.doCreateTable(TemporaryBuffer.java:6510)
at com.goldencode.p2j.persist.TemporaryBuffer$Context.createTable(TemporaryBuffer.java:6307)
at com.goldencode.p2j.persist.TemporaryBuffer.openScope(TemporaryBuffer.java:4159)
at com.goldencode.p2j.persist.RecordBuffer.openScope(RecordBuffer.java:2865)
at com.goldencode.hotel.common.adm2.Smart.lambda$execute$1(Smart.java:144)
at com.goldencode.p2j.util.Block.body(Block.java:604)
at com.goldencode.p2j.util.BlockManager.processBody(BlockManager.java:8087)
at com.goldencode.p2j.util.BlockManager.topLevelBlock(BlockManager.java:7808)
at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:467)
at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:438)
at com.goldencode.hotel.common.adm2.Smart.execute(Smart.java:135)
at com.goldencode.hotel.common.adm2.SmartMethodAccess.invoke(Unknown Source)
at com.goldencode.p2j.util.ControlFlowOps$InternalEntryCaller.invokeImpl(ControlFlowOps.java:7518)
at com.goldencode.p2j.util.ControlFlowOps$InternalEntryCaller.invoke(ControlFlowOps.java:7489)
at com.goldencode.p2j.util.ControlFlowOps.invokeExternalProcedure(ControlFlowOps.java:5297)
at com.goldencode.p2j.util.ControlFlowOps.invokeExternalProcedure(ControlFlowOps.java:5156)
at com.goldencode.p2j.util.ControlFlowOps.invokePersistentImpl(ControlFlowOps.java:6523)
at com.goldencode.p2j.util.ControlFlowOps.invokePersistentImpl(ControlFlowOps.java:6329)
at com.goldencode.p2j.util.ControlFlowOps.invokePersistentSet(ControlFlowOps.java:1902)
at com.goldencode.p2j.util.ControlFlowOps.invokePersistentSet(ControlFlowOps.java:1881)
at com.goldencode.hotel.MainWindow.lambda$startSuperProc$25(MainWindow.java:620)
[...]
Caused by: org.h2.jdbc.JdbcBatchUpdateException: Table "TT5" already exists; SQL statement:
create local temporary table tt5 ( recid bigint not null, _multiplex integer not null, linksource varchar, linktarget varchar, linktype varchar, __ilinktype varchar as upper(rtrim(linktype)), primary key (recid) ) transactional; [42101-197]
at org.h2.jdbc.JdbcStatement.executeBatch(JdbcStatement.java:792)
at com.goldencode.p2j.persist.Persistence.executeSQLBatch(Persistence.java:3870)
at com.goldencode.p2j.persist.Persistence.executeSQLBatch(Persistence.java:2950)
[...]
There are other exceptions listed, but they are no longer relevant after these.
It seems to me that we have a table leak. The local tables are not that local. The index is correctly suffixed (idx_mpid__tt5__5), as the web client is connection #5 (the Swing client being #2).
I investigated the connections and the two clients seem to share the same connection. I will investigate the data source to see what goes wrong.
#313 Updated by Eric Faulhaber almost 4 years ago
Ovidiu Maxiniuc wrote:
I investigated the connections and they seem to share the same connection. I will investigate the data source to see what goes wrong.
Nice find. I'm pretty sure the local keyword works; we have been using it for years to create the same temporary table on different connections without this kind of conflict. But if the data source is serving the same connection to multiple user sessions, this would explain a lot of what I have been seeing...
#314 Updated by Eric Faulhaber almost 4 years ago
Eric Faulhaber wrote:
Ovidiu Maxiniuc wrote:
I investigated the connections and they seem to share the same connection. I will investigate the data source to see what goes wrong.
Nice find. I'm pretty sure the local keyword works; we have been using it for years to create the same temporary table on different connections without this kind of conflict. But if the data source is serving the same connection to multiple user sessions, this would explain a lot of what I have been seeing...
At a glance, I don't see anything wrong with TempTableDataSourceProvider
which would result in the same connection being handed out to different user contexts. Did you find anything? DriverManager.getConnection(user, pass)
should always hand out a fresh connection AFAIK (especially when it already is in use), unless H2 is doing something highly unexpected under the covers.
#315 Updated by Ovidiu Maxiniuc almost 4 years ago
The problem is in JdbcDataSource
. For temp-tables, it is built from the DataSourceImpl
returned by the TempTableDataSourceProvider
at first access, then mapped in the sources
static field. This is wrong because this DataSourceImpl
will provide the same connection, unaware of the fact that it will be called from different contexts. I intend to change this so that the JdbcDataSource
will have the provider as a member instead.
However, I believe that the way the connections are obtained is a bit too complicated. It seems to me that there are too many classes involved.
#316 Updated by Eric Faulhaber almost 4 years ago
Ovidiu Maxiniuc wrote:
The problem is in
JdbcDataSource
. For temp-tables, it is built from the DataSourceImpl
returned by the TempTableDataSourceProvider
at first access, then mapped in the sources
static field. This is wrong because this DataSourceImpl
will provide the same connection, unaware of the fact that it will be called from different contexts. I intend to change this so that the JdbcDataSource
will have the provider as a member instead.
Sure. My bad.
I think we should move the scope of the context-locality to DataSource.getConnection
, and leave the single data source per database. I don't know how much work there is on the c3p0 side to provide a new data source provider, and DataSource
is supposed to be a connection factory. I just implemented the first pass wrong.
However, I believe that the way the connections are obtained are a bit too complicated. It seems to me that there are too many classes involved.
I'm not saying it's a great design, but the complexity was to some degree necessitated by the way c3p0 needs to be used. The temp table stuff then got complicated because I wanted the API to get a _temp
database connection to be the same, so that we didn't need conditional logic at the caller.
To reduce complexity now, we could at minimum get rid of all the proxy stuff (InvocationHandler
, ConnectionCloser
, etc.), since the delegate model replaced this stuff.
#317 Updated by Ovidiu Maxiniuc almost 4 years ago
I think the framework might work with this idea: we just move the context from the TempTableDataSourceProvider
to DataSourceImpl
. The TempTableDataSourceProvider
will provide the same DataSourceImpl
each time, but its getConnection()
will use the context to provide the contextual 'singleton'. Of course, the proxy stuff gets removed; the current delegate approach is faster.
This way, the DataSource
stored in JdbcDataSource
will keep the semantic of providing connections to a specified database for any request in any context. Probably C3P0 remains unaffected (I do not know it very well).
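A minimal sketch of this idea, with illustrative names rather than FWD's actual classes: one shared data source whose getConnection() lazily resolves a per-context connection. The context is approximated here with a ThreadLocal and the connection with a generic type; the real code would key off the FWD session context and return a JDBC Connection.

```java
import java.util.function.Supplier;

// Sketch: a single shared "DataSourceImpl" whose getConnection() hands out a
// context-local singleton. ThreadLocal stands in for the real context lookup,
// and T stands in for java.sql.Connection (hypothetical simplification).
public class ContextLocalConnections<T>
{
   private final Supplier<T> opener;                 // e.g. () -> DriverManager.getConnection(url)
   private final ThreadLocal<T> current = new ThreadLocal<>();

   public ContextLocalConnections(Supplier<T> opener)
   {
      this.opener = opener;
   }

   /** Return this context's connection, opening it on first access only. */
   public T getConnection()
   {
      T conn = current.get();
      if (conn == null)
      {
         conn = opener.get();
         current.set(conn);
      }
      return conn;
   }
}
```

The point is that repeated calls within one context reuse the same connection, while distinct contexts each get their own, so local temporary tables no longer collide.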
#318 Updated by Eric Faulhaber almost 4 years ago
Agreed. That's what I was trying to suggest, but you summarized it better.
Also, looking back at how it ended up, my introduction of the DataSourceProvider
interface seems like over-engineering now. I expected it to be a little different when I started. But we can remove this later if we want; let's just get the minimum change in right now to fix the connection issue. Thanks for tracking this down and fixing it.
#319 Updated by Eric Faulhaber almost 4 years ago
Eric Faulhaber wrote:
Ovidiu, w.r.t. the ETF import taking almost 17 hours, is that normal for that machine? It seems really long to me. Without having done any profiling, I assume this is because we are not using SQL batching for the inserts. The original import code was written for Hibernate's delayed
Session.save
implementation, not the new one, which is optimized for legacy runtime behavior. We probably need a Session.insert(List<T> dmos)
method which uses SQL batches behind the scenes, to support this use case. It is not the highest priority requirement (it falls behind functional issues), but if we determine through some profiling that this is a big bottleneck in import, we will need to fix it soon.
We also don't need the session cache for this use case, because we never need a record again, once it is persisted. So any time spent caching and clearing the cache is wasted. I've been coming to the conclusion that for the runtime, we probably need to (a) separate the cache from
Session
, so we can pass a cache from an old session instance to a new one without moving records individually; and (b) allow a cache-less Session
, for the import (and possibly other) use cases. But this cache thing is probably negligible in terms of import performance. I bet the lack of batching is the issue.
I have to run the ETF import to pick up the new primary key name. If it is slow and if the lack of batching seems like the cause, I probably will attack this problem. It makes sense to do this while I am making changes to the cache implementation in Session
.
#320 Updated by Ovidiu Maxiniuc almost 4 years ago
I was testing the new datasource/connection when I encountered the following issue: once a dynamic table (probably not only this case) is done and its multiplex scope is closing, removeRecords(null, null, null, true);
is called (TemporaryBuffer
:5243). The method constructs a delete statement delStmt
which looks like this: delete from dtt1 where _multiplex = 2
. It seems fine, but in this case, the table had extent normalized fields, which means they reside in a separate table dtt1__10
. And when this was created, a constraint was added, so when the stmt is executed, Referential integrity constraint violation: "FK765BAA66002F36CD: PUBLIC.DTT1__3 FOREIGN KEY(PARENT__ID) REFERENCES PUBLIC.DTT1(RECID) (1)"
is raised.
We need to issue delete statements for the secondary tables before deleting the referenced rows from the main table.
I think this region of the persistence was not altered. I wonder how this was working before. Were the records from the secondary tables deleted in this case?
#321 Updated by Ovidiu Maxiniuc almost 4 years ago
Please find the TempTable DataSource management patch as r11432. I tested the web client and multiple clients connected simultaneously and the issue is gone.
It's strange that two days ago the access to the database seemed to be working, when the same connection to the temp-table database was used :-/?
#322 Updated by Eric Faulhaber almost 4 years ago
Ovidiu Maxiniuc wrote:
[...]
I think this region of the persistence was not altered. I wonder how this was working before. Were the records from the secondary tables deleted in this case?
This would be a DDL problem. The foreign keys from the child tables need to be created with ON DELETE CASCADE
.
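Assuming child-table DDL along the lines of the tables named in this thread (the extent column names are guesses), the generated constraint might look like the following, with the cascade clause added so that the bulk delete on the parent removes dependent rows automatically:

```sql
-- Hypothetical DDL for a normalized-extent child table; with ON DELETE
-- CASCADE, "delete from dtt1 where _multiplex = 2" also removes the
-- dependent rows in dtt1__3 instead of raising the FK violation above.
create local temporary table dtt1__3 (
   parent__id bigint not null,
   list__index integer not null,
   -- normalized extent columns ...
   constraint fk_dtt1__3 foreign key (parent__id)
      references dtt1 (recid) on delete cascade
) transactional;
```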
#323 Updated by Ovidiu Maxiniuc almost 4 years ago
Eric Faulhaber wrote:
Ovidiu Maxiniuc wrote:
[...]
I think this region of the persistence was not altered. I wonder how this was working before. Were the records from the secondary tables deleted in this case?

This would be a DDL problem. The foreign keys from the child tables need to be created with
ON DELETE CASCADE
.
I looked at the old permanent table DDLs. They also do not have this clause for their FK_ constraints. Is there a reason? I would like to make both DDL generators consistent, if possible.
#324 Updated by Eric Faulhaber almost 4 years ago
Ovidiu Maxiniuc wrote:
Eric Faulhaber wrote:
Ovidiu Maxiniuc wrote:
[...]
I think this region of the persistence was not altered. I wonder how this was working before. Were the records from the secondary tables deleted in this case?

This would be a DDL problem. The foreign keys from the child tables need to be created with
ON DELETE CASCADE
.

I looked at the old permanent table DDLs. They also do not have this clause for their FK_ constraints. Is there a reason? I would like to make both DDL generators consistent, if possible.
What about the old temp-table DDL? Does it have the ON DELETE CASCADE
clause for the FK constraints, or am I misremembering? I thought it was needed.
As to why it wouldn't be there for persistent tables' foreign keys, it would not be needed because we don't ever perform a true bulk delete on those tables. True bulk delete is only performed for temp-tables.
In order to do this for persistent tables, we would need to lock the records to be deleted in the lock manager, and we can't know which records to lock when using a DELETE FROM ...
statement. Instead, we basically do what the legacy code would do, which is to use a looping delete (with a PreselectQuery
), where we follow the legacy locking semantics on each record being deleted.
IIRC, Hibernate would create the delete DML for DMOs with extent fields, so this was transparent to us. In 4011a, Persister
handles this for us, using RecordMeta.deleteSql
.
#325 Updated by Eric Faulhaber almost 4 years ago
To clarify, that last line refers to deleting DMOs one at a time using Persistence.delete
, as is done in the persistent table looping delete (see RecordBuffer.delete(String where, Object[] args)
).
#326 Updated by Eric Faulhaber almost 4 years ago
I've been running database import and I noticed something surprising in the import worker code:
...
if (hasNotNull)
{
   // for all not-null properties which don't have data, set it to default
   for (Map.Entry<String, BaseDataType> entry : notNullProperties.entrySet())
   {
      String prop = entry.getKey();
      BaseDataType def = entry.getValue();
      if (def != null)
      {
         loader.overwriteNullProperty(prop, dmo, def);
      }
   }
}
...
Also, the failing record handling logic was disabled. There may have been a bug in this code and I remember working on a fix relatively recently, possibly after trunk 11328. Regardless of any changes I may have made, we still need to identify failing/inconsistent records, without aborting the import.
Why are we overwriting anything that comes from the export data files? If a mandatory field is null/unknown in the exported data, that represents a serious data inconsistency issue and an error condition. We should not be silently overwriting those values with default values. The default values are not what was in the data at export, so we essentially are changing what is in an application's data (even if it was inconsistent with the schema). I don't think we should be making this decision on behalf of an application expert/DBA. They need to put eyes on any inconsistencies and decide what to do.
On the import performance side, it does look like we would benefit from re-enabling batching. CPU sampling shows that we are spending a large percentage of the time in the JDBC driver code, either sending sync requests or processing results coming from the database server. This suggests to me that if we reduced the number of trips to the server with SQL batching, we could reduce the time significantly. We also are closing the connection (really checking it into the pool) probably a lot more than we need to, and clearing the session cache when we don't even need to use it. However, these are much smaller time consumers.
I also noticed there are some weird pauses where the JVM is completely quiet (not even GC running) and the database seems quiet also. I'm not sure what is going on during these pauses, which are quite long (a few minutes). My machine was not swapping during this time, though there were other programs running, so maybe it's specific to my environment. I'm not going to worry about this right now, but I wanted to note it in case it happens for you as well, or I see it again.
I am working on both the batching and session cache changes that will allow the import to go cache-less. The latter is something more related to the runtime (separating the cache from the session), but it's not a big change to enable a mode with no cache at all while I'm in that code. I am not doing anything with the null value override and failing record logic, as I wanted to discuss with you first.
#327 Updated by Igor Skornyakov almost 4 years ago
I'm trying to run my standalone test with H2 database. It runs fine with 4231b, but with 4011a the server fails at startup:
[05/11/2020 12:37:52 GMT+03:00] (com.goldencode.p2j.persist.DatabaseManager:INFO) Database local/fwd1/primary initialized in 5881538 ms. com.goldencode.p2j.cfg.ConfigurationException: Initialization failure at com.goldencode.p2j.main.StandardServer.hookInitialize(StandardServer.java:2003) at com.goldencode.p2j.main.StandardServer.bootstrap(StandardServer.java:964) at com.goldencode.p2j.main.ServerDriver.start(ServerDriver.java:483) at com.goldencode.p2j.main.CommonDriver.process(CommonDriver.java:444) at com.goldencode.p2j.main.ServerDriver.process(ServerDriver.java:1) at com.goldencode.p2j.main.ServerDriver.main(ServerDriver.java:860) Caused by: java.lang.NullPointerException at java.util.concurrent.ConcurrentHashMap.putVal(ConcurrentHashMap.java:1011) at java.util.concurrent.ConcurrentHashMap.put(ConcurrentHashMap.java:1006) at com.goldencode.p2j.persist.DatabaseManager.registerDatabase(DatabaseManager.java:2488) at com.goldencode.p2j.persist.DatabaseManager.initialize(DatabaseManager.java:1538) at com.goldencode.p2j.persist.Persistence.initialize(Persistence.java:872) at com.goldencode.p2j.main.StandardServer$11.initialize(StandardServer.java:1209) at com.goldencode.p2j.main.StandardServer.hookInitialize(StandardServer.java:1999) ... 5 more
The reason is that the code fragment in MetadataManager.registerDatabase:

Settings settings = dbConfig.getOrmSettings();
Dialect dialect = Dialect.getDialect(settings);

results in a null value for dialect.
What configuration changes are required to fix this?
Thank you.
#328 Updated by Ovidiu Maxiniuc almost 4 years ago
Eric Faulhaber wrote:
I've been running database import and I noticed something surprising in the import worker code:
[...]Also, the failing record handling logic was disabled. There may have been a bug in this code and I remember working on a fix relatively recently, possibly after trunk 11328. Regardless of any changes I may have made, we still need to identify failing/inconsistent records, without aborting the import.
As documented, that piece of code was added to handle 'incomplete' records. In trunk we do not initialize the fields with the default value specified in the .df
, but with the default value of the specific field type. That piece of code was added to set the default value of fields that remained uninitialized, to make sure the mandatory fields are filled. However, in 4011 the record is initialized with the table-specific default values, taken from the Property
annotations, so it should not do anything unless the unknown
value is expressly found in the dump file for a mandatory field.
Why are we overwriting anything that comes from the export data files? If a mandatory field is null/unknown in the exported data, that represents a serious data inconsistency issue and an error condition. We should not be silently overwriting those values with default values. The default values are not what was in the data at export, so we essentially are changing what is in an application's data (even if it was inconsistent with the schema). I don't think we should be making this decision on behalf of an application expert/DBA. They need to put eyes on any inconsistencies and decide what to do.
You are right. Yet, the code was intentionally added (a long time ago, in r9939), so it must have some background cause.
On the import performance side, it does look like we would benefit from re-enabling batching. CPU sampling shows that we are spending a large percentage of the time in the JDBC driver code, either sending sync requests or processing results coming from the database server. This suggests to me that if we reduced the number of trips to the server with SQL batching, we could reduce the time significantly. We also are closing the connection (really checking it into the pool) probably a lot more than we need to, and clearing the session cache when we don't even need to use it. However, these are much smaller time consumers.
I think the insert does work in batches. Indeed, there is a flaw (getProperty
will return null
if the property is Integer
instead of String
) which prevents the 200-record batch from being used, but the DEFAULT_BATCH_SIZE of 50 records will be used.
I also noticed there are some weird pauses where the JVM is completely quiet (not even GC running) and the database seems quiet also. I'm not sure what is going on during these pauses, which are quite long (a few minutes). My machine was not swapping during this time, though there were other programs running, so maybe its specific to my environment. I'm not going to worry about this right now, but I wanted to note it in case it happens for you as well, or I see it again.
I also noticed similar behaviour. I assumed that my 5-thread configuration is pushing too much data for PSQL to handle (or at least for my HDD). Note: I observed a usual 60-180% CPU load on the Java side and a similar load on the PSQL workers. There were some spikes to 100% on one PSQL worker (and very low on the other PSQL workers and Java) when a table was just finished, but this is normal, as the respective worker was busy indexing the table.
I am working on both the batching and session cache changes that will allow the import to go cache-less. The latter is something more related to the runtime (separating the cache from the session), but it's not a big change to enable a mode with no cache at all while I'm in that code. I am not doing anything with the null value override and failing record logic, as I wanted to discuss with you first.
Just a single change, at line 626, where batchSize
is initialized (see above note); please replace it with:
batchSize = props.containsKey(Settings.BATCH_SIZE)
          ? (int) props.get(Settings.BATCH_SIZE)
          : 0;
#329 Updated by Eric Faulhaber almost 4 years ago
Thanks for the batch size fix, I hadn't noticed that. Yes, the import organizes the records into batches, but we don't support SQL batching on the Session
side. So, each insert is sent through to the database individually. This is what I am addressing.
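As a rough illustration of the idea (not the actual Session API), records can be grouped into fixed-size batches, with each batch then mapping to a single PreparedStatement.addBatch()/executeBatch() round trip instead of one trip per insert:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of SQL batching for bulk insert. Only the partitioning is concrete;
// the JDBC half is shown in comments because it needs a live connection.
public class BatchInsertSketch
{
   /** Split records into fixed-size batches; each batch would be sent to the
       server in a single executeBatch() round trip. */
   static <T> List<List<T>> partition(List<T> records, int batchSize)
   {
      List<List<T>> batches = new ArrayList<>();
      for (int i = 0; i < records.size(); i += batchSize)
      {
         batches.add(records.subList(i, Math.min(i + batchSize, records.size())));
      }
      return batches;
   }

   /* For each batch (Dmo and bindParameters are hypothetical names):

      try (PreparedStatement ps = conn.prepareStatement(insertSql))
      {
         for (Dmo dmo : batch)
         {
            bindParameters(ps, dmo);  // bind one record's column values
            ps.addBatch();
         }
         ps.executeBatch();           // one sync request for the whole batch
      }
   */
}
```

This matches the CPU-sampling observation above: most of the time was spent in driver sync requests, so cutting the number of round trips is where the win comes from.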
#330 Updated by Eric Faulhaber almost 4 years ago
Revision 11434 implements bulk insert (i.e., broader SQL batching) and a few other optimizations for the database import. I tested it against rev 11433 with an ETF import on an SSD and it cut the time to less than half (01:28 vs 03:06). On a non-SSD, I would expect the improvement to be less dramatic, since much more of the overall time would be spent on I/O. I don't have a pre-4011a baseline to compare it with, but I would expect the new implementation to be somewhat faster than before, because even in the import, we should be doing less work than we did with Hibernate.
At the moment, I cannot start the FWD server for ETF, because I get a metadata initialization error which seems to be related to the PropertyHelper.getRootPojoInterface()
issue you reported previously. I'm working on that now...
#331 Updated by Ovidiu Maxiniuc almost 4 years ago
Eric Faulhaber wrote:
Revision 11434 implements bulk insert (i.e., broader SQL batching) and a few other optimizations for the database import. I tested it against rev 11433 with an ETF import on an SSD and it cut the time to less than half (01:28 vs 03:06). On a non-SSD, I would expect the improvement to be less dramatic, since much more of the overall time would be spent on I/O. I don't have a pre-4011a baseline to compare it with, but I would expect the new implementation to be somewhat faster than before, because even in the import, we should be doing less work than we did with Hibernate.
The import speed-up is really impressive. I will do a test on my HDD to serve as a baseline for comparing future work. How many threads did you use for the import process?
At the moment, I cannot start the FWD server for ETF, because I get a metadata initialization error which seems to be related to the
PropertyHelper.getRootPojoInterface()
issue you reported previously. I'm working on that now...
I remember fixing something like that. It was caused by the PK "property" not being found when accessing the object through the DMO interface. This is correct, because the generated DMO contains only the true properties. The primaryKey()
method is in the Record
class. The fix was to test for this particular "property" before trying to access it via the POJO interface. But this might be another similar issue, or the call comes from another location.
#332 Updated by Eric Faulhaber almost 4 years ago
Ovidiu Maxiniuc wrote:
I will do a test on my HDD to give as a base for comparing future work. How many threads did you use for the import process?
I used 8, but that may be too high for a system where I/O is more of a bottleneck. If you have a baseline already for 5, just be consistent.
#333 Updated by Ovidiu Maxiniuc almost 4 years ago
Related to not-null/mandatory validation: there are some issues with the current implementation.
- 1st: there is a two-stage check: in the assign (both simple and batch) and at validation/flush time.
- in the former case, error 275 ("<legacy-field-name> is mandatory, but has a value of ?") and 142 ("Unable to update <legacy-table-name> Field") should be thrown;
- in the latter case, error 110 ("<legacy-table-name>.<legacy-field-name> is mandatory, but has unknown (?) value") should be issued;
As a result, an unknown/null mandatory field will not trigger any condition when other fields of its buffer are touched in a batch-assign, but will raise condition 110 when the buffer is validated, explicitly or implicitly, when the buffer is flushed/released.
- in the case of batch assign, 4GL validates only the touched fields, while we do a 'global' validation on all mandatory fields. This means that, even if a field is not touched, it will be validated, and the assign statement may fail even though it should not;
- another issue I discovered is that, if an assign from a batch fails under a no-error
clause, the whole operation is rolled back. This roll-back involves not only the buffer where the validation failed, but all the buffers taking part in the batch assign and any variables. I am not aware whether the side effects of function calls are also reverted, but I guess (and somehow hope) so;
At first I thought we would need to keep a scoped list of touched properties for each buffer because of possible nestings. Since RecordBuffer$Handle.invoke()
has only a map from the invoked Method
to the property name, I was afraid we would have to reintroduce the property-to-index map in order to construct a BitSet
of altered fields to be tested, instead of testing all properties in batch-assign validation. However, for not-null validation this is not necessary. It's clear to me now that we need to create some kind of savepoint (adding to it all touched fields and variables) each time the batch assign starts. If it ends with success, the savepoint is released, but in case of a validation failure, the savepoint must be rolled back. We do not need to process all assignments: the check must be done serially, as encountered, and the validation condition raised when the first mandatory constraint is broken.
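A minimal sketch of this savepoint idea, with illustrative names only: a plain map stands in for the buffer fields and variables, the first prior value of each touched field is recorded at write time, and a validation failure restores everything touched.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a batch-assign savepoint. The target map stands in for buffer
// fields and variables (assumed to already exist as keys); the real code
// would integrate with FWD's buffers and UNDO handling.
public class AssignSavepoint
{
   private final Map<String, Object> target;
   private final Map<String, Object> saved = new HashMap<>();

   public AssignSavepoint(Map<String, Object> target)
   {
      this.target = target;
   }

   /** Record the field's value at first touch, then apply the new value. */
   public void set(String field, Object value)
   {
      if (!saved.containsKey(field))
      {
         saved.put(field, target.get(field));
      }
      target.put(field, value);
   }

   /** On validation failure, restore every touched field to its prior value. */
   public void rollback()
   {
      saved.forEach(target::put);
   }
}
```

Releasing the savepoint on success is just discarding the saved map; only the failure path does work, which matches the "all or nothing" behavior observed in 4GL.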
#334 Updated by Eric Faulhaber almost 4 years ago
Ovidiu Maxiniuc wrote:
Related to not-null/mandatory validation. There are some issues with current implementation.
Nice catches.
- 1st: there is a two-stage check: in the assign (both simple and batch) and at validation/flush time.
- in former case, error 275 ("<legacy-field-name> is mandatory, but has a value of ?") and 142 ("Unable to update <legacy-table-name> Field") should be thrown;
- in latter case, error 110 ("<legacy-table-name>.<legacy-field-name> is mandatory, but has unknown (?) value") should be issued;
As a result, an unknown/null mandatory field will not trigger any condition when other fields of its buffer are touched in a batch-assign, but will raise condition 110 when the buffer is validated explicitly or implicitly when the buffer is flushed/released.
- in case of batch assign, only the touched fields are validated. We do a 'global' validation on all mandatory fields. This means that, even if a field is not touched, it will be validated, and possibly the assign statement will fail even though it should not;
Were you able to produce a case where this can occur? If a mandatory field is checked during the assignment, how could it later enter a batch assign in an invalid state, such that we "over-check" during batch validation and report it improperly? Or is this only the case when you have a mandatory field which defaults to unknown and is never assigned to something else?
In any event, the detection of this condition seems fairly simple. In the batch assign case, to scope the validation correctly, it seems we would AND the bitsets nullProps
, nonNullableProps
, and dirtyProps
to find only dirty properties which are mandatory and set to null
.
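The suggested intersection maps directly onto java.util.BitSet; the property indexes below are illustrative, not the real DMO layout.

```java
import java.util.BitSet;

// Sketch of the bitset intersection suggested above: a property needs
// validation in a batch assign only if it is null AND mandatory AND dirty.
public class MandatoryCheck
{
   static BitSet violations(BitSet nullProps, BitSet nonNullableProps, BitSet dirtyProps)
   {
      BitSet result = (BitSet) nullProps.clone();  // clone so inputs are untouched
      result.and(nonNullableProps);                // null AND mandatory ...
      result.and(dirtyProps);                      // ... AND touched in this batch
      return result;
   }
}
```

An empty result means the batch assign passes; any set bit identifies a property which should raise the mandatory-field condition.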
- another issue I discovered is that, if an assign from a batch fails under a
no-error
clause, the whole operation is rolled back. This roll-back involves not only the buffer where the validation failed, but all the buffers taking part in the batch assign and the eventual variables. I am not aware whether the side-effects of a function calls is also reverted, but I guess (and somehow hope) so;
Did we handle this before? It seems rolling back any side effects could be problematic.
At first I thought we will need to keep a scoped list of touched properties for each buffer because possible nestings. Since the
RecordBuffer$Handle.invoke()
has only a map from the invoked Method
to the property name, I was afraid we will have to introduce back the property-to-index map in order to construct a BitSet
of altered fields to be tested instead of testing all properties in batch-assign validation. However, for not-null validation this is not necessary. It's clear to me now that we need to create some kind of savepoint (and add to it all touched fields and variables) each time the batch assign starts. If it ends with success the savepoint is released, but in case of a validation failure, the savepoint must be rolled back. We do not need to process all assignments. The check must be done serially, as encountered, and validation condition raised when the first mandatory constraint is broken.
It sounds like we need more than only the savepoint (to handle any variable UNDOs, for instance). Is this something that needs to fit into the broader UNDO framework?
If it is not a simple fix AND we did not handle this before, we should open an issue for this and address it separately. If it is a regression, we need to fix it now.
#335 Updated by Ovidiu Maxiniuc almost 4 years ago
Indeed, to observe the null-mandatory-nonValidated field case, the field's initial value must be declared ?
, like:
ADD FIELD "f9" OF "test-mandatory" AS integer FORMAT "->,>>>9" INITIAL ? POSITION 4 MAX-WIDTH 4 ORDER 30 MANDATORY
Indeed, it doesn't make much sense at first, but it could be a trick to force the ABL programmer to intentionally set a value on a field. From the POV of our problem, it shows how the assign validation works: the fields that were not touched are not validated, so the code:
create test-mandatory.
assign k   = 100
       f11 = 1
    // f9  = ?
    // no-error
       .
message k, f11.
will execute correctly on 4GL, even if f9 remains uninitialized, and print 100 1. The validation will happen only when the record is manually validated or flushed.
Also, my test showed that setting a ?
on a mandatory field will be null-validated at the moment it happens
, not at the end of the assign block.
Next, if we enable the two commented lines the program will print 0 ?
because the values of k
and f11
will be rolled-back (the initial value of f11
is also ?
). It makes sense: the assign
is like a micro-transaction: it's all or nothing.
I created a testcase where a field is initialized to a value returned by a function in which there is another assign. The variable and fields already set in the top-level assign
are visible during the call of the function. If the assign from the function is successful the effects are kept. Otherwise they are locally reverted. So yes, the assign
behaves like a sub-transaction. The code with uncommented lines above would be equivalent to:
do transaction on error undo, leave:
   k = 100.
   f11 = 1.
   f9 = ?.
end.

In trunk:
- the validation is done correctly, only on touched fields, but the incorrect error message seems to be printed (275 instead of 110). In fact, the same 275 is raised on both occasions. Besides, the field name is wrong: the legacy field name should be used and, in the case of 110, the legacy table name should also be printed. LE: it seems like error 110 is never raised in trunk; at least I was unable to find the code that does that.
- wrt the UNDO 'feature', the fields of the record failing the validation are correctly restored to their previous values, but not the other entities: if other variables and/or records were touched in the same assign
, they are not reverted to their values from the start of the assign
.
#336 Updated by Greg Shah almost 4 years ago
Next, if we enable the two commented lines the program will print 0 ? because the values of k and f11 will be rolled-back (the initial value of f11 is also ?). It makes sense: the assign is like a micro-transaction: it's all or nothing.
We've never seen this behavior before. I think it can only be seen in a NO-ERROR
case because when normal error processing is active, the UNDO
will occur at a containing block. Please check this.
If this only exists in NO-ERROR
mode, adding a sub-transaction block could break things in normal error mode. In other words, without NO-ERROR
a hidden sub-transaction block would undo and then allow processing to continue. The containing block would never have the chance to undo, which I think is incorrect.
Also, we would not want to add DO TRANSACTION
since that would open a new transaction if one was not already open. It would potentially change other behavior as well, so I think we would want to use a sub-transaction, at most.
#337 Updated by Ovidiu Maxiniuc almost 4 years ago
Greg Shah wrote:
We've never seen this behavior before. I think it can only be seen in a
NO-ERROR
case because when normal error processing is active, theUNDO
will occur at a containing block. Please check this.
I gave this some thought, but I am not sure how to test the non-NO-ERROR
case. The error must be caught somehow, otherwise the procedure itself stops. The best I could come up with is:
do on error undo, leave:
   k = 100.
   tm2.f9 = 10.
   tm2.f1 = ?.

   catch err1 as Progress.Lang.SysError:
      message err1:getmessage(1).
      message "cgt:" " f1:" tm2.f1 " f9:" tm2.f9 " k:" k.
   end catch.
end.
The problem is, when the cgt message is printed inside the catch statement, the f9 and k values are already rolled back to their values from before entering the do block. It doesn't matter whether the three assignments are independent or in a single assign (without no-error) statement.
If this only exists in
NO-ERROR
mode, adding a sub-transaction block could break things in normal error mode. In other words, withoutNO-ERROR
a hidden sub-transaction block would undo and then allow processing to continue. The containing block would never have the chance to undo, which I think is incorrect.
Also, we would not want to addDO TRANSACTION
since that would open a new transaction if one was not already open. It would potentially change other behavior as well, so I think we would want to use a sub-transaction, at most.
Indeed, a true transaction might be a bit too much. Based on this last piece of code (which seems to have a similar outcome), it's probably better to say that the assign
is equivalent to an undo block.
#338 Updated by Ovidiu Maxiniuc almost 4 years ago
I noticed something else related to assigns, which will probably not be handled in this task: the way the ASSIGN triggers are fired for the touched fields.
I created ASSIGN triggers that just print the field name and the value, and ran the following code:
assign f1 = 10
       f2 = 20
       f3 = 30
       f1 = f1 * 10
       f2 = f2 * 20
       // f-mandatory = ?
       .
I had quite a surprise when the output was:
f1 <- 100
f2 <- 400
f3 <- 30
f1 <- 100
f2 <- 400
If the commented assignment is enabled, then NONE of the above is printed.
I read the documentation and, indeed, it states that the ASSIGN triggers are fired "after all the assignments have taken place". I would rather say after they have been validated, since if one fails, none is actually executed. Interesting is the way the triggers are called and their parameter values. It is clear that there is one invocation for each "touch" of a field, but the value passed is always the final value assigned to the field. There seems to be some mapping, and any new assignment overwrites the value if it exists.
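The observed semantics can be modeled with a small sketch (hypothetical code, not FWD's implementation): each assignment records a "touch", and after all assignments validate, one trigger fires per touch, but always with the final value of the field.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical model of the observed 4GL behavior: later assignments
// overwrite the recorded value, so every trigger invocation sees the
// FINAL value; if any field ends up unknown (null here), no trigger
// fires at all.
public class AssignTriggerModel
{
   private final List<String> touches = new ArrayList<>();
   private final Map<String, Integer> finalValues = new HashMap<>();

   public void assign(String field, Integer value)
   {
      touches.add(field);            // one trigger per "touch"
      finalValues.put(field, value); // overwrites earlier values
   }

   // returns the trigger messages in firing order
   public List<String> fireTriggers()
   {
      List<String> fired = new ArrayList<>();
      if (finalValues.containsValue(null))
      {
         return fired;   // validation failed: nothing is printed
      }
      for (String field : touches)
      {
         fired.add(field + " <- " + finalValues.get(field));
      }
      return fired;
   }
}
```

Running this model with the assignments from the example above reproduces the surprising output exactly (f1 and f2 each fire twice, both times with the final value).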
#339 Updated by Eric Faulhaber almost 4 years ago
Hotel GUI r220 issues with 4011a:
- Adding a guest to the "Check-In" dialog replaces the existing guest instead of adding. Upon deleting that guest, you get a confirmation prompt that asks if you want to delete the original guest (the one that was replaced), not the guest that now is listed in the browse. Cancelling the check-in at that point results in "No ADMProps record is available. (91)", instead of the validation error you get with the trunk version (the validation error is incorrect, though -- we shouldn't reproduce that).
- Using the mouse wheel to scroll down in the "Available Rooms" tab browse results in an endless number of rows. The backing query does not seem to run off end properly.
- In embedded mode, clicking the "Joseph Nighton" hyperlink in the "Guest" column of the browse in the "Guests" screen seems to get into an infinite loop of opening tabs, each with the same invoice. This is probably the same root cause as the previous bullet.
#340 Updated by Ovidiu Maxiniuc almost 4 years ago
Eric Faulhaber wrote:
- Using the mouse wheel to scroll down in the "Available Rooms" tab browse results in an endless number of rows. The backing query does not seem to run off end properly.
This seems to be caused by the premature closing of the session after the call to refresh-avail-rooms ends.
When the session is closed, the queries are reset and their open ResultSets are also closed. In this case, the Browse is already populated with the first 15 results. The query result set has all 25 of them, but it is closed and ProgressiveResults gets reset (delegate closed and position set to -1). When the browse is scrolled (via the mouse wheel or keyboard), the BrowseWidget will invoke getRows for the correct cursor position, but it will assume query.next() will work. Since the query was reset, it will fetch and return the first 15 results again. As a result, after room 203 we start again with room 101, and so on.
#341 Updated by Eric Faulhaber almost 4 years ago
Ovidiu Maxiniuc wrote:
Eric Faulhaber wrote:
- Using the mouse wheel to scroll down in the "Available Rooms" tab browse results in an endless number of rows. The backing query does not seem to run off end properly.
This seems to be caused by the premature closing of the session after the call to refresh-avail-rooms ends.
I recently implemented a reference count on the active session in Persistence$Context, whereby we close the session the next time we exit a scope after the count has reached 0. One of the events which increments the reference count is QueryOffEndListener.initialize(), with a corresponding decrement upon QueryOffEndListener.finish().
However, when a query is associated with a browse, apparently we need some special treatment, because it seems the lifespan of the query can be longer. I initially wanted to use the open and close of a query for the reference counting, but I'm pretty sure there are cases where the query is not explicitly closed, and this seemed like it would keep the session open for as long as the user context was alive.
What is the session reference count doing in this use case?
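To illustrate the mechanism described above, here is a minimal sketch of scope-exit session closing driven by a use count. The method names (useSession/releaseSession/scopeExit) are illustrative, not the actual Persistence$Context API.

```java
// Hypothetical sketch of the session reference-counting scheme: the
// session is only closed on scope exit when nobody holds a reference.
// A browse whose query outlives the scope but is NOT counted would hit
// exactly the premature-close problem reported above.
public class SessionRefCount
{
   private int useCount = 0;
   private boolean open = true;

   // e.g. QueryOffEndListener.initialize() increments the count
   public void useSession() { useCount++; }

   // e.g. QueryOffEndListener.finish() decrements it
   public void releaseSession() { useCount = Math.max(0, useCount - 1); }

   // on scope exit, close only when the count has reached 0
   public void scopeExit()
   {
      if (useCount == 0)
      {
         open = false;
      }
   }

   public boolean isOpen() { return open; }
}
```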
#342 Updated by Eric Faulhaber almost 4 years ago
The long delay during ETF server startup while processing index metadata appears to be due to the fact that we swapped out our very primitive internal JDBC connection "pool" for (over-)use of the new Session object. Whereas before we were sharing a handful of connections for many individual metadata queries via a very simple and fast data structure, we are now instantiating a new Session object each time, which does a more heavyweight checkout from the c3p0 pool for each of these requests. Evidently, there is some overhead here which stands out in this heavy use case.
I reduced the number of checkouts by passing the connection as a parameter into methods which use it, rather than letting these methods create their own new sessions, which would do a checkout and checkin with c3p0 under the covers. The startup is an order of magnitude faster, but is still noticeably slower than trunk. I probably will just create a handful of sessions and share them in a similar way to what we were doing before.
This also makes me wonder whether we can do a better job configuring c3p0 for faster access to pooled connections. This would help us everywhere we are creating a lot of sessions, though it is particularly pronounced in this startup use case. I will see what I can figure out in this regard.
#343 Updated by Eric Faulhaber almost 4 years ago
Running a single ETF test results in some errors...
java.lang.IllegalStateException: Empty scopes in UniqueTracker
   at com.goldencode.p2j.persist.orm.UniqueTracker.lockAndChange(UniqueTracker.java:240)
   at com.goldencode.p2j.persist.orm.UniqueTracker$Context.lockAndChange(UniqueTracker.java:581)
   at com.goldencode.p2j.persist.orm.Validation.checkUniqueConstraints(Validation.java:458)
   at com.goldencode.p2j.persist.orm.Validation.validateMaybeFlush(Validation.java:203)
   at com.goldencode.p2j.persist.RecordBuffer.flush(RecordBuffer.java:5930)
   at com.goldencode.p2j.persist.RecordBuffer.setCurrentRecord(RecordBuffer.java:11004)
   at com.goldencode.p2j.persist.RecordBuffer.finishedImpl(RecordBuffer.java:10234)
   at com.goldencode.p2j.persist.RecordBuffer.deleted(RecordBuffer.java:5607)
   at com.goldencode.p2j.util.TransactionManager.processFinalizables(TransactionManager.java:6372)
   at com.goldencode.p2j.util.TransactionManager.popScope(TransactionManager.java:4437)
   at com.goldencode.p2j.util.TransactionManager.access$6700(TransactionManager.java:591)
   at com.goldencode.p2j.util.TransactionManager$TransactionHelper.popScope(TransactionManager.java:8030)
   at com.goldencode.p2j.util.BlockManager.topLevelBlock(BlockManager.java:7890)
   at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:467)
   at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:438)
   ...
Caused by: java.lang.AbstractMethodError: app.Foo__Impl__.isBar(I)Lcom/goldencode/p2j/util/logical;
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:498)
   at com.goldencode.p2j.persist.RecordBuffer$Handler.invoke(RecordBuffer.java:12123)
   at com.goldencode.p2j.persist.$__Proxy3.isBar(Unknown Source)
   at com.goldencode.p2j.persist.$__Proxy3.isBar(Unknown Source)
   ...
Looking into these. I suspect the second one is related to denormalized extent fields.
#344 Updated by Eric Faulhaber almost 4 years ago
The UniqueTracker empty scopes issue seems to be caused by the fact that we aren't registering the tracker for Finalizable callbacks as expected, upon invoking an external procedure with the sub-transaction property. Then, during RecordBuffer Finalizable processing, we set the current record to null, causing a flush and validation, and we find we have no current scope. I'm trying to figure out why the registration for these hooks is not happening.
Separately, I committed rev 11436, which completes the fix for the server startup performance regression. However, the code which ultimately uses the collected index information seems to be disconnected/deprecated (TempTableHelper.getComputedCols). Please help me understand what replaces this functionality.
Even if the JDBC index metadata collection is no longer directly used for this former purpose, I think we need to keep it around in some form, to be used as a sanity check, at minimum. What I mean by this is that in trunk, the database is the authoritative source of index information and in 4011a, the authoritative source is the DMO annotations (I assume). We will want to report if these are ever out of sync. But this is lower priority, compared to fixing stability issues.
#345 Updated by Ovidiu Maxiniuc almost 4 years ago
Eric Faulhaber wrote:
Separately, I committed rev 11436, which completes the fix to the server startup performance regression. However, the code which ultimately uses the collected index information seems to be disconnected/deprecated (TempTableHelper.getComputedCols). Please help me understand what replaces this functionality.
At this moment it appears to me to be no longer needed. The trunk code uses this for creating the computed columns for temp-tables, but in 4011 this is done automatically by DmoMetadataManager. 4011 uses the new syntax supported by H2 (and PSQL), so the computed columns are constructed inline with the table, not in a separate ALTER statement.
OTOH, the list of these character fields/properties was used by the old ORMHandler when injecting a special property node into the ORM XML document for the respective table, to ignore them when fetching the columns from the database.
Even if the JDBC index metadata collection is no longer directly used for this former purpose, I think we need to keep it around in some form, to be used as a sanity check, at minimum. What I mean by this is that in trunk, the database is the authoritative source of index information and in 4011a, the authoritative source is the DMO annotations (I assume). We will want to report if these are ever out of sync. But this is lower priority, compared to fixing stability issues.
This is correct; we need to be sure the set of indexes in the database matches the set we see from the annotations. If they differ, we might need to log an alert that performance might be degraded, or even halt the execution of the application server.
#346 Updated by Ovidiu Maxiniuc almost 4 years ago
Eric Faulhaber wrote:
[..] when a query is associated with a browse, apparently we need some special treatment, because it seems the lifespan of the query can be longer. I initially wanted to use the open and close of a query for the reference counting, but I'm pretty sure there are cases where the query is not explicitly closed, and this seemed like it would keep the session open for as long as the user context was alive.
What is the session reference count doing in this use case?
The sessionUseCount is 0, so canCloseSession() will return true. The ResultSets in use do not affect the counter. I tried to bracket their usage with useSession() / releaseSession(), but this is not always possible. Evidently, the increment poses no problems, but the decrement is difficult. This is because of the nature of Java, where objects are just created and GC handles their destruction. There are other objects built with ResultSets as members, and tracking them seems difficult.
OTOH, ProgressiveResults has a sessionClosing() method which will duplicate the unprocessed set of rows from its current delegate before closing it. The problem is, the SESSION_CLOSING_NORMALLY event is dispatched by AdaptiveQuery (line 2510) with resetResults(), which will not only call closeDelegate() on ProgressiveResults but will also set its delegate to null and reset its position and the internal Query data to default values.
Consequently, I will have to go back and implement the tracking of Results with sessionUseCount, so that the Session will not be closable while these objects are in use.
#347 Updated by Greg Shah almost 4 years ago
but in 4011 this is automatically done by DmoMetadataManager. The 4011 uses the new syntax supported by H2 (and PSQL)
Does this mean that FWD will require a minimum level of PostgreSQL once 4011a is merged to trunk? It's OK if so, I just want to remember this so we are not surprised later.
#348 Updated by Eric Faulhaber almost 4 years ago
Greg Shah wrote:
but in 4011 this is automatically done by DmoMetadataManager. The 4011 uses the new syntax supported by H2 (and PSQL)
Does this mean that FWD will require a minimum level of PostgreSQL once 4011a is merged to trunk? It's OK if so, I just want to remember this so we are not surprised later.
I'm not sure to what new syntax Ovidiu is referring. We also need to consider SQL Server w.r.t. the use of any newer syntax.
#349 Updated by Eric Faulhaber almost 4 years ago
Ovidiu Maxiniuc wrote:
OTOH, the ProgressiveResults have a sessionClosing() method which will duplicate the unprocessed set of rows from its current delegate before closing it. The problem is, the SESSION_CLOSING_NORMALLY is dispatched by AdaptiveQuery (line 2510) with resetResults(), which not only will call closeDelegate() on ProgressiveResults but will set its delegate to null and also will reset its position and the internal Query data to default values.
To the degree it helps in how you think about a solution, all the results caching should be going away. The idea behind this was that we had to cache results that would be otherwise lost when the session was closed normally, due to the more aggressive session closing we used to do with Hibernate. However, now the idea is to hold the session open long enough that the results do not need to be cached, because the connection will be there to continue iterating the results.
I did not explicitly remove the results caching code because there was so much change happening already; I didn't want to take the risk of breaking something else. My thinking is that if we got the reference counting right, we wouldn't actually ever be sending the SESSION_CLOSING_NORMALLY event while any query still needed the session, so the caching would never happen.
My only point in mentioning this is that you do not have to preserve any of the results caching functionality, if we get the session closing right.
#350 Updated by Ovidiu Maxiniuc almost 4 years ago
Greg Shah wrote:
but in 4011 this is automatically done by DmoMetadataManager. The 4011 uses the new syntax supported by H2 (and PSQL)
Does this mean that FWD will require a minimum level of PostgreSQL once 4011a is merged to trunk? It's OK if so, I just want to remember this so we are not surprised later.
Sorry for misleading you. It is not new syntax for H2/PSQL. PSQL is actually not affected because the indexes are created using expressions. I tested with the fwd-h2-1.4.197 that comes with FWD. It's new from FWD's point of view. In trunk, an H2 table for meta_db is created using:
create table meta_db (
[...]
db_name varchar not null,
__idb_name varchar not null,
[...]
alter table meta_db alter column __idb_name varchar as upper(rtrim(db_name));
with 4011 the DDL changes to the simpler:
create table meta_db (
[...]
db_name varchar not null,
__idb_name varchar as upper(rtrim(db_name)),
[...]
#351 Updated by Ovidiu Maxiniuc almost 4 years ago
I've just committed my latest changes to 4011 as r11437.
The most important thing here is the fix for the endless-query issue caused by the session being closed. This might help advance ETF testing.
Secondly, there is a pending change related to mandatory (not null) field validation. This is not complete, just some code I cherry-picked which I considered useful and which is supported by my tests.
I also marked as deprecated the methods related to obsolete computed columns management (notes 344/345).
#352 Updated by Eric Faulhaber almost 4 years ago
We are not generating implementations for some methods declared in DMO interfaces, such as indexed getters and setters, when denormalization of extent fields is enabled. I think we would, except that the generation of the methods in DmoClass is driven by the existence of PropertyMeta objects, rather than the existence of method declarations in the DMO interface. PropertyMeta objects only exist for the annotated, expanded getter methods, not for the original, indexed versions of these methods. So we get abstract method errors at runtime when such a missing indexed getter/setter is invoked on the implementation class.
I'm a little surprised we don't get an error earlier, when instantiating a concrete DMO class which doesn't have all interface methods defined, or that the verifier doesn't report the error when we try to load the class in the first place. But apparently, the JVM allows the class to be loaded and instantiated, and then fails when it can't find an implementation for an invoked method. I guess this error is normally caught by the compiler, not the JVM. Anyway...
Ovidiu, I began addressing this, then I remembered that you recently made some fixes for denormalized extent fields. AFAICT, however, the changes only were to conversion of the DMO interface. Do you have any outstanding changes related to the DMO implementation class generation which are not checked in yet?
#353 Updated by Ovidiu Maxiniuc almost 4 years ago
My recent changes only involved the conversion. I do not have changes related to DmoClass. You can safely work in this area.
It is strange that the JVM will load "incomplete" classes. I also noticed the same issue, but the missing methods for denormalized fields seem to have slipped by unobserved. Do you have enough information in the DMO annotations to generate them?
#354 Updated by Eric Faulhaber almost 4 years ago
Ovidiu Maxiniuc wrote:
Do you have enough information in DMO annotations to generate them?
Yes, but I've had to add some additional information from the annotations to PropertyMeta to implement this, and getting the data offsets handled correctly has been a bit tricky. I expect to have it working later today, though.
#355 Updated by Ovidiu Maxiniuc almost 4 years ago
For a couple of days I have been thinking about the following issue related to validation.
As I informed you, the assign statement behaves like a mini-transaction, forcing all the assignments it contains to be performed as an "atomic" statement. There are three cases:
- if everything is fine, at the end of the block all values are set; this is the expected behaviour and will happen regardless of the presence of the no-undo option;
- in case of an error when no-undo is specified, all assignments are reverted back to their values before the assign block;
- in case of an error when no-undo is not specified, the rollback is done at the next outer undoable block level. If there is no such block, the whole procedure is reverted.
This makes sense when talking about the integrity and consistency of data. For example, if one wants to be sure a double link is set up atomically:
assign link1.next = link2
       link2.prev = link1.
After the assign is executed, either both values are correctly assigned, or both remain unknown. If the error occurred at the last assignment and the assignments before it remained, the doubly-linked chain would be left inconsistent.
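The "mini-transaction" behaviour can be sketched as follows (hypothetical code: plain Java maps stand in for record fields, and the mandatory-field check stands in for 4GL validation):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: an ASSIGN modeled as a mini undo block.  The
// touched fields are snapshotted first; if any assignment fails
// validation, the snapshot is restored, so either all assignments
// stick or none do.
public class AtomicAssignSketch
{
   public static boolean atomicAssign(Map<String, Integer> record,
                                      Map<String, Integer> updates,
                                      String mandatoryField)
   {
      Map<String, Integer> snapshot = new HashMap<>(record);
      try
      {
         for (Map.Entry<String, Integer> e : updates.entrySet())
         {
            // stand-in for 4GL validation: the mandatory field may not
            // be set to the unknown value (null here)
            if (e.getKey().equals(mandatoryField) && e.getValue() == null)
            {
               throw new IllegalStateException("mandatory field is unknown");
            }
            record.put(e.getKey(), e.getValue());
         }
         return true;
      }
      catch (RuntimeException err)
      {
         record.clear();
         record.putAll(snapshot);   // undo the whole assign block
         return false;
      }
   }
}
```

This mirrors the catch example earlier in the thread: when the last assignment fails, the values assigned before it are rolled back too.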
The problem to solve now is not this (reverting the values on no-error), but what happens when everything ends with success. In this case, the null/unknown values are checked at the time they are touched. But it is possible that some of them remain if they were not touched; 4GL will not check them.
Next, the UNIQUE indexes are tested. However, at this stage only the modified fields are taken into consideration. In the case of a transient record, if at least one field that is part of a unique index was only touched (the end of the assign finds it assigned to the default value), the record is not validated and not leaked.
Another big problem comes when the mandatory fields are not touched but all unique indexes are fully updated. In this case, the 4GL record is somehow validated at the end of the assign statement and leaked, so other clients can see it with mandatory fields set to unknown values! It is really interesting to see how 4GL contradicts itself. These are not fully validated and completely flushed records, but the other clients are not aware of that.
Of course, this is not possible in FWD at this moment. Once the fields are defined as not null, the database engine will not let such a record be persisted; as a result, the validation will fail with a "not-null constraint violation".
#356 Updated by Eric Faulhaber almost 4 years ago
I've committed rev. 11438, which fixes both the UniqueTracker empty scopes issue and the missing extent field method implementations. It also fixes a latent conversion issue that only surfaced in denormalized extent field mode: we were emitting the wider return type (e.g., NumberType[]) for bulk getter methods, instead of the narrower one (e.g., integer[]).
Now hitting a ClassCastException in RecordBuffer.create. Will look at that next.
#357 Updated by Greg Shah almost 4 years ago
In #4011-355, your references to no-undo are probably meant to be no-error?
This is not the problem to solve now (reverting the values on no-error) but when everything ends with success.
I don't understand this comment. It was my understanding that we don't handle this part correctly. There is no mini-sub-transaction for a no-error assign statement.
Actually, I think this part is easy to solve. We can handle it in the runtime processing of RecordBuffer.batch(), but ONLY if we are in silent mode. I think we can use a DO ON ERROR UNDO, LEAVE for the no-error case.
I guess the rest of your issues are beyond my ability to assist.
#358 Updated by Eric Faulhaber almost 4 years ago
I've committed rev. 11439, which fixes several issues:
- the previously reported ClassCastException, which was caused by some API parameter data type changes;
- the conversion of FQL queries which do not use DMO alias qualifiers (e.g., TemporaryBuffer.copyAllRows);
- an issue consuming scrolling query results (also occurs in TemporaryBuffer.copyAllRows).
Ovidiu, please carefully review my changes in this revision. These are in areas I'm not as familiar with (especially the FQL converter).
Ovidiu, please carefully review my changes in this revision. These are in areas I'm not as familiar with (especially the FQL converter).
I'm seeing the following failure in Hotel GUI, when trying to export to PDF from the "Guests" tab:
Caused by: org.h2.jdbc.JdbcSQLException: Syntax error in SQL statement " SELECT STAY__IMPL0_.RECID AS COL_0_0_ROOM__IMPL1_.[*]RECID AS COL_1_0_ROOMTYPE__2_.RECID AS COL_2_0_ FROM STAY STAY__IMPL0_ CROSS JOIN ROOM ROOM__IMPL1_ CROSS JOIN ROOM_TYPE ROOMTYPE__2_ WHERE ROOM__IMPL1_.ROOM_NUM = STAY__IMPL0_.ROOM_NUM AND ROOMTYPE__2_.ROOM_TYPE = ROOM__IMPL1_.ROOM_TYPE ORDER BY STAY__IMPL0_.STAY_ID ASC NULLS LAST, ROOM__IMPL1_.ROOM_NUM ASC NULLS LAST, ROOMTYPE__2_.ROOM_TYPE ASC NULLS LAST LIMIT ? "; SQL statement:
select stay__impl0_.recid as col_0_0_room__impl1_.recid as col_1_0_roomtype__2_.recid as col_2_0_ from stay stay__impl0_ cross join room room__impl1_ cross join room_type roomtype__2_ where room__impl1_.room_num = stay__impl0_.room_num and roomtype__2_.room_type = room__impl1_.room_type order by stay__impl0_.stay_id asc nulls last, room__impl1_.room_num asc nulls last, roomtype__2_.room_type asc nulls last limit ? [42000-197]
   at org.h2.message.DbException.getJdbcSQLException(DbException.java:357)
   at org.h2.message.DbException.get(DbException.java:179)
   at org.h2.message.DbException.get(DbException.java:155)
   at org.h2.message.DbException.getSyntaxError(DbException.java:203)
   at org.h2.command.Parser.getSyntaxError(Parser.java:548)
   at org.h2.command.Parser.prepareCommand(Parser.java:281)
   at org.h2.engine.Session.prepareLocal(Session.java:611)
   at org.h2.engine.Session.prepareCommand(Session.java:549)
   at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1247)
   at org.h2.jdbc.JdbcPreparedStatement.<init>(JdbcPreparedStatement.java:76)
   at org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:306)
   at com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement(NewProxyConnection.java:567)
   at com.goldencode.p2j.persist.orm.SQLQuery.list(SQLQuery.java:333)
   at com.goldencode.p2j.persist.orm.Query.list(Query.java:251)
   at com.goldencode.p2j.persist.Persistence.list(Persistence.java:1532)
   at com.goldencode.p2j.persist.ProgressiveResults.getResults(ProgressiveResults.java:1107)
   at com.goldencode.p2j.persist.ProgressiveResults.getResults(ProgressiveResults.java:1063)
   at com.goldencode.p2j.persist.ProgressiveResults.moveTo(ProgressiveResults.java:914)
   at com.goldencode.p2j.persist.ProgressiveResults.moveTo(ProgressiveResults.java:788)
   at com.goldencode.p2j.persist.ProgressiveResults.next(ProgressiveResults.java:399)
   at com.goldencode.p2j.persist.ResultsAdapter.next(ResultsAdapter.java:159)
   at com.goldencode.p2j.persist.AdaptiveQuery.next(AdaptiveQuery.java:2039)
   at com.goldencode.p2j.persist.CompoundQuery.processComponent(CompoundQuery.java:2916)
   at com.goldencode.p2j.persist.CompoundQuery.retrieveImpl(CompoundQuery.java:2517)
   at com.goldencode.p2j.persist.CompoundQuery.retrieve(CompoundQuery.java:1936)
   at com.goldencode.p2j.persist.CompoundQuery.retrieve(CompoundQuery.java:1823)
   at com.goldencode.p2j.persist.CompoundQuery.first(CompoundQuery.java:848)
   at com.goldencode.p2j.persist.CompoundQuery.first(CompoundQuery.java:750)
   at com.goldencode.p2j.persist.QueryWrapper.first(QueryWrapper.java:1751)
   at com.goldencode.hotel.GuestsFrame.lambda$exportAsPdf$36(GuestsFrame.java:1049)
   ...
The SELECT phrase seems to be malformed. To make sure this was not a regression from my FQL converter changes, I temporarily reverted them, but I get the same error.
The next issue in ETF may be a red herring. I need to check whether we have the same thing happening with trunk:
[05/25/2020 09:45:15 GMT] (com.goldencode.p2j.util.handle:FINE) You are creating a new ID for an invalid resource com.goldencode.p2j.util.ExternalProgramWrapper!
java.lang.Exception
   at com.goldencode.p2j.util.handle.resourceId(handle.java:709)
   at com.goldencode.p2j.util.handle.toString(handle.java:3673)
   at com.goldencode.p2j.util.handle.toString(handle.java:3650)
   at com.goldencode.p2j.util.handle.toStringExport(handle.java:3700)
   at com.goldencode.p2j.persist.Record._setHandle(Record.java:1103)
   at <app>.<dmo>.setHandle(Unknown Source)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:498)
   at com.goldencode.p2j.persist.RecordBuffer$Handler.invoke(RecordBuffer.java:12152)
   at com.goldencode.p2j.persist.$__Proxy39.setCrudhandle(Unknown Source)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:498)
   at com.goldencode.p2j.persist.FieldReference.set(FieldReference.java:860)
   at com.goldencode.p2j.persist.FieldReference.set(FieldReference.java:796)
   at com.goldencode.p2j.util.HandleFieldRef.set(HandleFieldRef.java:149)
   at com.goldencode.p2j.util.ControlFlowOps.invokeExternalProcedure(ControlFlowOps.java:5294)
   at com.goldencode.p2j.util.ControlFlowOps.invokeExternalProcedure(ControlFlowOps.java:5156)
   at com.goldencode.p2j.util.ControlFlowOps.invokePersistentImpl(ControlFlowOps.java:6523)
   at com.goldencode.p2j.util.ControlFlowOps.invokePersistentImpl(ControlFlowOps.java:6329)
   at com.goldencode.p2j.util.ControlFlowOps.invokePersistentSet(ControlFlowOps.java:1902)
   at com.goldencode.p2j.util.ControlFlowOps.invokePersistentSet(ControlFlowOps.java:1881)
   ...
It is a warning only; the stack trace is generated explicitly just for logging purposes. However, if the handle is invalid and is pulled out of the temp-table later, this could lead to problems downstream.
#359 Updated by Eric Faulhaber almost 4 years ago
I just realized I forgot to add a NO_ALIAS constant to the FQL converter, instead of using the empty string literal ("") inline. I meant to do this before committing 11439. If you want to add this in your review (if the changes are OK in the first place, that is), please do. Otherwise, I will clean it up in a future check-in.
#360 Updated by Ovidiu Maxiniuc almost 4 years ago
Eric,
I finished reviewing your latest changes. They are sound. I added the constant and replaced the related empty strings, but these changes will go in repository with my other pending changes.
#361 Updated by Eric Faulhaber almost 4 years ago
Constantin or Greg: is that invalid handle error in #4011-358 something to be concerned with? I want to avoid chasing ghosts if possible.
#362 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Constantin or Greg: is that invalid handle error in #4011-358 something to be concerned with? I want to avoid chasing ghosts if possible.
I recall starting to see this in ETF at some point (a year back or so), but it was harmless.
#363 Updated by Ovidiu Maxiniuc almost 4 years ago
Please find r11440 of 4011. Some additional notes:
- the broken SELECT from note 358 was easy to fix; there was another, bigger issue hidden by this defect. Now it works correctly;
- the PDF is generated, but my PDF application is not fast enough to display it. The file is deleted too quickly, so a missing-file dialog is displayed;
- the broken SELECT from note 358 is converted from
OPEN QUERY staysBrowseRpt FOR EACH stay, EACH room of stay, EACH room-type OF room, EACH guest OF stay NO-LOCK.
For some unknown reason, the guest table is missing. It is handled on the app-server side, as the 2nd query in a compound query (the 1st one being the one I fixed). I do not understand why it is not part of a single JOINed query ready to be executed on the SQL side; the Guest DMO has the MANY_TO_ONE Relation annotation, like the other DMO interfaces (Room and Stay) have;
- it seems to me that PropertyMeta and Property, like DmoMeta and RecordMeta, tend to be used together or maybe overlap. It is possible that we'll unify them sometime in the future. BTW, accessing annotations like Property directly seems to be rather slow (by a factor of 20-50), because they are implemented as some kind of proxy. They are not accessed too much, but I'm thinking of caching the values in some structures.
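The caching idea can be sketched as follows. This is hypothetical illustration code, not FWD's: the @Column annotation and ColumnMeta holder are stand-ins. The point is that annotation attribute values are served through a reflective proxy, so reading them once into a plain object makes all subsequent reads ordinary field access.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

// Hypothetical sketch: copy annotation attributes into a plain value
// holder once, so hot paths never go through the reflective proxy again.
public class AnnotationCacheSketch
{
   @Retention(RetentionPolicy.RUNTIME)
   @interface Column
   {
      String name();
      boolean nullable() default true;
   }

   // plain value holder; no proxy involved after construction
   static final class ColumnMeta
   {
      final String  name;
      final boolean nullable;

      ColumnMeta(Column c)
      {
         this.name     = c.name();       // one-time proxy call
         this.nullable = c.nullable();   // one-time proxy call
      }
   }

   // stand-in for an annotated DMO getter
   static class Dmo
   {
      @Column(name = "db_name", nullable = false)
      public String getDbName() { return null; }
   }

   public static ColumnMeta cache(Method getter)
   {
      return new ColumnMeta(getter.getAnnotation(Column.class));
   }
}
```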
#364 Updated by Eric Faulhaber almost 4 years ago
Ovidiu Maxiniuc wrote:
it seems to me that the PropertyMeta and Property, like the DmoMeta and RecordMeta, tend to be used together or maybe overlap. It is possible that we'll unify them sometime in the future. BTW, accessing annotations like Property directly seems to be rather slow (by a factor of 20-50), because they are implemented like some kind of proxy. They are not accessed too much, but I'm thinking of caching the values in some structures.
The intent behind PropertyMeta and RecordMeta was for them to act as those structures; i.e., as faster repositories for runtime use of the annotation data. They also were meant to naturally match in layout the physical structure of the data in a BaseRecord object, so that the transition of data from a result set to the object (read) and from the object to a prepared statement (insert/update) could be done as quickly as possible, with no maps/lookups required.
It seems you had similar goals in mind with your information objects when coming up with the FQL/SQL support.
Yes, I agree these should be unified, but let's not restructure things at the moment; I don't want to introduce more delay.
#365 Updated by Eric Faulhaber almost 4 years ago
Ovidiu, can you help me understand the idea behind the Query.uniqueResult implementation? The object ignores rowStructure for this method and passes null instead. This results in FQL queries like:
from Foo__Impl__ [...]
querying all columns from the table, but then not hydrating the DMO. Instead, only column 1 of the query (the primary key) is always returned. Query.uniqueResult is most often invoked from Persistence.load, which handles the case of either the primary key or the full object being returned, but we never go down the full object code path as a result. However, callers of Persistence.load have logic to decide whether or not to perform an id-only query or a full object query, so this design ignores those decisions.
The effect of this is that unless we already have cached the target record in the session (and it is not stale), we do a query for all the columns in Query.uniqueResult, throw away the column data, then do another full query for the same data from Session.get, even in cases where we don't need the second query (i.e., as we might for locking purposes, which is one of the reasons higher level queries might do an id-only query first).
I tried passing the rowStructure that already exists in the Query object, but I get JDBC errors indicating that we are invoking methods on the result set which expect the wrong type, such as, 'Cannot cast to boolean: "XYZ"'.
I came upon this issue while debugging queries for incorrect results. It doesn't seem to be getting the wrong data (I mean before my attempt to use rowStructure), but it seemed to be doing more work than necessary. So, this is not urgent (unless you think it could also be a functional problem), but I was hoping you could shed some light on the implementation. Thanks.
#366 Updated by Ovidiu Maxiniuc almost 4 years ago
Eric,
I just committed r11441 with some adjustments in the Sql/Query.uniqueResult() implementation. I do not know the exact use-case you encountered, so please test it.
#367 Updated by Eric Faulhaber almost 4 years ago
Thanks, but this breaks things the same way as I described. Here's an example stack trace. There are many like this, to the point where I had to kill the server to stop them.
Caused by: org.postgresql.util.PSQLException: Cannot cast to boolean: "CQ"
   at org.postgresql.jdbc.BooleanTypeUtil.cannotCoerceException(BooleanTypeUtil.java:99)
   at org.postgresql.jdbc.BooleanTypeUtil.fromString(BooleanTypeUtil.java:67)
   at org.postgresql.jdbc.BooleanTypeUtil.castToBoolean(BooleanTypeUtil.java:43)
   at org.postgresql.jdbc.PgResultSet.getBoolean(PgResultSet.java:1975)
   at com.mchange.v2.c3p0.impl.NewProxyResultSet.getBoolean(NewProxyResultSet.java:269)
   at com.goldencode.p2j.persist.orm.types.LogicalType.readProperty(LogicalType.java:134)
   at com.goldencode.p2j.persist.orm.BaseRecord.readProperty(BaseRecord.java:575)
   at com.goldencode.p2j.persist.orm.SQLQuery.hydrateRecord(SQLQuery.java:533)
   at com.goldencode.p2j.persist.orm.SQLQuery.uniqueResult(SQLQuery.java:258)
   at com.goldencode.p2j.persist.orm.Query.uniqueResult(Query.java:244)
   at com.goldencode.p2j.persist.Persistence.load(Persistence.java:1689)
   at com.goldencode.p2j.persist.RandomAccessQuery.executeImpl(RandomAccessQuery.java:4101)
   at com.goldencode.p2j.persist.RandomAccessQuery.execute(RandomAccessQuery.java:3331)
   at com.goldencode.p2j.persist.RandomAccessQuery.first(RandomAccessQuery.java:1431)
   at com.goldencode.p2j.persist.RandomAccessQuery.first(RandomAccessQuery.java:1328)
   ...
Possibly an offset problem reading the data from the result set?
Reverting to 11440 for now.
#368 Updated by Greg Shah almost 4 years ago
the pdf is generated but my pdf application is not fast enough to display it. The file is deleted too fast so a missing file dialog is displayed;
This should not be possible. I assume it is independent of 4011a since the processing of the output has no database/persistence dependencies. Have you seen this outside of 4011a?
#369 Updated by Eric Faulhaber almost 4 years ago
Ovidiu, please revert the changes in rev 11441 and commit a new revision to bring the branch back to where it was. Don't uncommit please, as I already have made changes in my working tree, which is based on the back-level 11440 revision. I'd like to check those in after updating to the latest revision.
#370 Updated by Ovidiu Maxiniuc almost 4 years ago
Greg Shah wrote:
the pdf is generated but my pdf application is not fast enough to display it. The file is deleted too fast so a missing file dialog is displayed;
This should not be possible. I assume it is independent of 4011a since the processing of the output has no database/persistence dependencies. Have you seen this outside of 4011a?
The web/embedded client works fine. The pdf is open in a separate tab. The standalone (GUI client) has the problem. The ABL code is:
rptPdf:export-report-pdf(rptName).
OPEN-MIME-RESOURCE "application/pdf" STRING("file:///" + rptName) false.
OS-DELETE VALUE(rptName).
Maybe it's something related to my OS environment or the Acrobat Reader 9 I have installed is somehow asynchronous when invoked in this way. The exact message is:
Failed to open "/home/om/workspaces/hotel_gui.202-fwd.4011a/deploy/client/c84c4b7e-bc43-ebc4-0000-000000000000.pdf".
Error when getting information for file “/home/om/workspaces/hotel_gui.202-fwd.4011a/deploy/client/c84c4b7e-bc43-ebc4-0000-000000000000.pdf”: No such file or directory.
If I delay or prevent the os-delete statement from executing, the report is opened correctly. Anyway, this is not related to this task so I will ignore it for now.
#371 Updated by Ovidiu Maxiniuc almost 4 years ago
Eric Faulhaber wrote:
Ovidiu, please revert the changes in rev 11441 and commit a new revision to bring the branch back to where it was. Don't uncommit please, as I already have made changes in my working tree, which is based on the back-level 11440 revision. I'd like to check those in after updating to the latest revision.
Done, please see r11442.
#372 Updated by Eric Faulhaber almost 4 years ago
I've committed rev 11443.
- temp-table copying fixes;
- ScrollableResults fix to not skip the first record;
- SQLQuery.hydrateRecord caches the newly read record;
- BaseRecord copy changes to update state, so the record is saved properly;
- minor format/doc changes.
Ovidiu, please review, as these are largely in areas of the code you are more familiar with.
#373 Updated by Eric Faulhaber almost 4 years ago
Something in r11440 caused the following regression in Hotel GUI (virtual desktop). Recreate is to check in a guest from the first tab. You don't need to change any information.
Caused by: org.h2.jdbc.JdbcSQLException: Invalid value "2" for parameter "columnIndex" [90008-197]
   at org.h2.message.DbException.getJdbcSQLException(DbException.java:357)
   at org.h2.message.DbException.get(DbException.java:179)
   at org.h2.message.DbException.getInvalidValueException(DbException.java:240)
   at org.h2.jdbc.JdbcResultSet.checkColumnIndex(JdbcResultSet.java:3191)
   at org.h2.jdbc.JdbcResultSet.get(JdbcResultSet.java:3230)
   at org.h2.jdbc.JdbcResultSet.getInt(JdbcResultSet.java:328)
   at com.mchange.v2.c3p0.impl.NewProxyResultSet.getInt(NewProxyResultSet.java:425)
   at com.goldencode.p2j.persist.orm.types.IntegerType.readProperty(IntegerType.java:148)
   at com.goldencode.p2j.persist.orm.BaseRecord.readProperty(BaseRecord.java:575)
   ...
Latest revision without the error is 11439.
#374 Updated by Ovidiu Maxiniuc almost 4 years ago
Eric Faulhaber wrote:
I've committed rev 11443.
Ovidiu, please review, as these are largely in areas of the code you are more familiar with.
I reviewed this yesterday and found nothing wrong with the update.
At the same time, I discovered the regression in note 373. I committed the fix as 11444. Now the ScrollableResults.get() and SQLQuery query methods which handle ResultSets are smarter: they will return lists/arrays of PKs and will not hydrate the result unless the full set of columns was requested by the caller.
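The decision described above (return bare PKs unless the caller requested the full column set) can be sketched as follows; Row, materialize, and the column-count test are hypothetical simplifications, not the FWD API:

```java
import java.util.List;

public class HydrationSketch
{
   // Minimal stand-in for a hydrated record: primary key plus column values.
   public record Row(long pk, List<?> values) {}

   /**
    * Decide what to return for one result-set row: the primary key alone
    * when the caller asked for only a subset of columns, or a hydrated
    * record when the full column set was requested.
    */
   public static Object materialize(List<?> resultRow, int totalColumns)
   {
      long pk = ((Number) resultRow.get(0)).longValue();
      if (resultRow.size() < totalColumns)
      {
         return pk;                     // partial projection: PK only, no hydration
      }
      return new Row(pk, resultRow);    // full projection: hydrate the record
   }
}
```

Hydrating only full projections avoids the double-fetch described in note 365, where the column data was read and then thrown away.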
I am continuing to debug the hotel_gui application. The disappearing guest issue seems to be caused by a reset of the cursor, in combination with calls to the API from the browse.
#375 Updated by Ovidiu Maxiniuc almost 4 years ago
The revision 11445 contains the fix for conversion of FQL statements when denormalized extent fields are present.
#376 Updated by Ovidiu Maxiniuc almost 4 years ago
Note regarding the check for null mandatory fields (checkNotNull/failNotNull) already committed in 4011.
There is one testcase where FWD cannot mimic the 4GL behaviour. The problem exists in trunk, but there it is handled differently, and equally incorrectly.
Consider the case where, to enforce that a field is initialized, a database designer made it mandatory and set the initial/default value to ? (unknown).
- In P4GL, if the field is not touched, it is not checked for null/unknown until implicit or explicit validation or release is called. More than that, if all unique indexes are filled, the record will leak and other clients will be able to see the record with the mandatory field still set to the unknown value.
- In trunk, we do some aggressive not-null testing. All mandatory fields are checked, regardless of whether they are touched or not. In the case of a bulk assign, the test is done after all assignments are done. As a result, before the record has the chance to be leaked, error 275 is raised.
- In 4011, I tried to get the behaviour closer to P4GL. All assignments to mandatory fields are tested at the moment they are processed, as P4GL does. The problem occurs when the unique indexes are full and the record must be flushed to the database to test the uniqueness and mimic the leak behaviour. At this moment, our mandatory field is still null/unknown and the DDL states that the field must be not null. So the save attempt will fail with some kind of "not null" exception, before the unique indexes are validated.
Example:
The definition (a bit cleaned for brevity) of the table with mandatory field:
ADD TABLE "test-mandatory"
ADD FIELD "f1" OF "test-mandatory" AS integer
INITIAL ?
ORDER 10
MANDATORY
ADD FIELD "f2" OF "test-mandatory" AS integer
ORDER 20
ADD FIELD "f3" OF "test-mandatory" AS integer
ORDER 30
ADD INDEX "my-index" ON "test-mandatory"
UNIQUE
PRIMARY
INDEX-FIELD "f2" ASCENDING
INDEX-FIELD "f3" ASCENDING
The test procedure 1:
CREATE test-mandatory.
ASSIGN
f2 = 1
f3 = 2. // now the index is complete and the record is leaking
PAUSE MESSAGE "Is it visible?".
UNDO,LEAVE. // to keep the table clean
While the above procedure waits in PAUSE statement, the second procedure:
FIND FIRST test-mandatory.
MESSAGE f1 f2 f3.
is executed and prints:
? 1 2
Since fields with a normal initial value do not suffer from this issue, and we do not have a record of such a table definition, I decided to ignore it for the moment.
#377 Updated by Constantin Asofiei almost 4 years ago
Do you have any unofficial/uncommitted updates to Hotel GUI, in respect to #4011? I can't import the data in H2, I get this:
[java] EXPRESSION EXECUTION ERROR:
[java] ---------------------------
[java] dmoClass = imp.getDmoClass(dmoIface)
[java]            ^ { Failed to load DMO implementation }
[java] ---------------------------
[java] ERROR:
[java] com.goldencode.p2j.pattern.TreeWalkException: ERROR! Active Rule:
[java] -----------------------
[java] RULE REPORT
[java] -----------------------
[java] Rule Type : WALK
[java] Source AST: [ Guest ] DATA_MODEL/CLASS/ @0:0 {326417514594}
[java] Copy AST  : [ Guest ] DATA_MODEL/CLASS/ @0:0 {326417514594}
[java] Condition : dmoClass = imp.getDmoClass(dmoIface)
[java] Loop      : false
[java] --- END RULE REPORT ---
[java]
[java] at com.goldencode.p2j.pattern.PatternEngine.run(PatternEngine.java:1070)
[java] at com.goldencode.p2j.pattern.PatternEngine.main(PatternEngine.java:2110)
[java] Caused by: com.goldencode.expr.ExpressionException: Expression execution error @1:16 [CLASS id=326417514594]
[java] at com.goldencode.p2j.pattern.AstWalker.walk(AstWalker.java:275)
[java] at com.goldencode.p2j.pattern.AstWalker.walk(AstWalker.java:210)
[java] at com.goldencode.p2j.pattern.PatternEngine.apply(PatternEngine.java:1633)
[java] at com.goldencode.p2j.pattern.PatternEngine.processAst(PatternEngine.java:1531)
[java] at com.goldencode.p2j.pattern.PatternEngine.processAst(PatternEngine.java:1479)
[java] at com.goldencode.p2j.pattern.PatternEngine.run(PatternEngine.java:1034)
[java] ... 1 more
[java] Caused by: com.goldencode.expr.ExpressionException: Expression execution error @1:16
[java] at com.goldencode.expr.Expression.execute(Expression.java:484)
[java] at com.goldencode.p2j.pattern.Rule.apply(Rule.java:497)
[java] at com.goldencode.p2j.pattern.Rule.executeActions(Rule.java:745)
[java] at com.goldencode.p2j.pattern.Rule.coreProcessing(Rule.java:712)
[java] at com.goldencode.p2j.pattern.Rule.apply(Rule.java:534)
[java] at com.goldencode.p2j.pattern.Rule.executeActions(Rule.java:745)
[java] at com.goldencode.p2j.pattern.Rule.coreProcessing(Rule.java:712)
[java] at com.goldencode.p2j.pattern.Rule.apply(Rule.java:534)
[java] at com.goldencode.p2j.pattern.RuleContainer.apply(RuleContainer.java:585)
[java] at com.goldencode.p2j.pattern.RuleSet.apply(RuleSet.java:98)
[java] at com.goldencode.p2j.pattern.AstWalker.walk(AstWalker.java:262)
[java] ... 6 more
[java] Caused by: java.lang.IllegalArgumentException: Failed to load DMO implementation
[java] at com.goldencode.p2j.persist.orm.DmoMetadataManager.getImplementingClass(DmoMetadataManager.java:266)
[java] at com.goldencode.p2j.persist.orm.DmoMetadataManager.registerDmo(DmoMetadataManager.java:177)
[java] at com.goldencode.p2j.schema.ImportWorker$Library.getDmoClass(ImportWorker.java:1465)
[java] at com.goldencode.expr.CE86.execute(Unknown Source)
[java] at com.goldencode.expr.Expression.execute(Expression.java:391)
[java] ... 16 more
[java] Caused by: java.lang.NullPointerException
[java] at com.goldencode.p2j.persist.orm.DmoClass.load(DmoClass.java:1517)
[java] at com.goldencode.p2j.persist.orm.DmoClass.forInterface(DmoClass.java:385)
[java] at com.goldencode.p2j.persist.orm.DmoMetadataManager.getImplementingClass(DmoMetadataManager.java:262)
[java] ... 20 more
Also, another part which we must have working in 4011a (at least the points which work in trunk) are the issues mentioned in the #4011-183, #4011-182, #4011-181 discussion, related to table and field options which may be different across temp-tables with otherwise the same schema. Ovidiu, did you manage to fix these?
#378 Updated by Eric Faulhaber almost 4 years ago
#379 Updated by Eric Faulhaber almost 4 years ago
Eric Faulhaber wrote:
Constantin Asofiei wrote:
Do you have any unofficial/uncommitted updates to Hotel GUI, in respect to #4011? I can't import the data in H2, I get this:
No, but are you using rev 202 of Hotel GUI (see #4011-203)?
I take that back. See the (misnamed) build_xml.patch attached to this task (also directory_xml.patch when you get to runtime).
#380 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin,
I think the AsmClassLoader is missing from the command line (see the -Djava.system.class.loader).
LE: see the attached build_xml.patch
#381 Updated by Eric Faulhaber almost 4 years ago
Ovidiu Maxiniuc wrote:
Note regarding the check for null mandatory fields (checkNotNull/failNotNull) already committed in 4011. There is one testcase where FWD cannot mimic the 4GL behaviour. The problem exists in trunk, but there it is handled differently, and equally incorrectly.
[...]
This behavior rings a bell. We probably were handling this (albeit incorrectly/differently) because every update to a persistent table was running through the dirty database pre-4011a. Now that we are removing that overhead, except for very specific cases/tables, we are losing that cross-session communication mechanism. To the degree it is even necessary to communicate the existence of this partial record across sessions, can you think of a lighter-weight way to accomplish this (similar to what we do with UniqueTracker)? I really want to avoid the full overhead of the dirty share database infrastructure here, if at all possible. We also have a cross-session communication mechanism in GlobalEventManager, though I'm not sure it could easily be applied to this problem.
#382 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Eric Faulhaber wrote:
Constantin Asofiei wrote:
Do you have any unofficial/uncommitted updates to Hotel GUI, in respect to #4011? I can't import the data in H2, I get this:
No, but are you using rev 202 of Hotel GUI (see #4011-203)?
I take that back. See the (misnamed) build_xml.patch attached to this task (also directory_xml.patch when you get to runtime).
Yes, the patch works. I still need r202 though (latest Hotel rev doesn't convert properly).
#383 Updated by Greg Shah almost 4 years ago
The leaking of these malformed records seems less likely to be useful than the one ChUI application case where the code was written to use the dirty database. The error handling is important, but I think the visibility of these may be less important.
#384 Updated by Eric Faulhaber almost 4 years ago
Greg Shah wrote:
The leaking of these malformed records seems less likely to be useful than the one ChUI application case where the code was written to use the dirty database. The error handling is important, but I think the visibility of these may be less important.
Let's assume so, and focus our efforts on the error handling and reporting, rather than on the visibility/sharing across sessions.
If we find specific cases where the sharing is actually important, we can evaluate on a case-by-case basis whether marking the involved table(s) as dirty-read (to enable the dirty database) is helpful, and if so, whether the dirty database needs to be modified to handle the sharing in a better way.
#385 Updated by Eric Faulhaber almost 4 years ago
Committed rev 11446, which fixes a flushing issue found in testing, adds stricter caching logic to the ORM session to prevent duplicate DMO instances, and adds some debug code.
I have found that while my change to RecordBuffer.flush fixes a serious flush logic problem revealed in automated testing, it does surface a problem in Hotel GUI. We can see this in embedded mode: when clicking on dates in the heat map, a unique constraint violation in a temp-table occurs. I am pretty sure the change to RecordBuffer.flush makes the code more correct than it was. It is unclear why it causes this regression.
#386 Updated by Constantin Asofiei almost 4 years ago
From Ovidiu:
I understand that you managed to get the hotel GUI application up and running (including the database import).
Please take a look at:
refresh Guest list in Available Rooms/Check-in. I tested the issue with branched trunk and it is a 4011 issue. Somehow, when doing the repositioning, the existing rows are dropped from the query's Cursor. The reason just slips through my fingers. I tried tracing applications using both FWD but the cause has not surfaced yet;
To recreate this do:
- authenticate into client (works both swing/web-embedded);
- click the Check-in... button in Available Rooms tab. Notice the upper browse has a single guest (Bedelia Smallson, in my case);
- click the upper right Add button to add a new guest;
- enter some First Name and Last Name (the other fields are optional);
- confirm by pressing the Add button.

At this moment the Add Guest dialog will close and the former guest is replaced by the one you just entered in step 4. In trunk, this operation is additive. I investigated this issue but I have not determined the cause yet :(. Something goes wrong after BrowseWidget.getRows(BrowseWidget.java:4183) is invoked and the Cursor of the AdaptiveQuery is reset when the two records are added.
Next, if you continue to press Update, View or Delete buttons on the Check-In... dialog they will attempt to execute their actions on the old record (Bedelia Smallson) instead. The new record is saved and you can see it in database, but the browse is off by one record.
To get you started, put a breakpoint in UpdateStayDialog.refreshGuests(UpdateStayDialog.java:1564). The line is:
query0.open();
The query is quite simple, but it detects the snapshot of the record that was in the RecordBuffer before the creation of the new record and will perform an incremental search, assuming Bedelia Smallson is already displayed (the statements from the findNext bundle).

If you want to see the content of a table (this is more needed in H2, because for PSQL you can connect with a SQL client) you can use a piece of code like the following:
persistence.getSession().getConnection().prepareStatement("select * from guest where stay_id=150").executeQuery()
You just need access to the persistence, session or the low-level connection to the respective database. Don't hesitate to ask any questions.
#387 Updated by Constantin Asofiei almost 4 years ago
Ovidiu, I can't duplicate this anymore - yesterday I saw this issue, but it seems to be fixed in 4011a r11446.
#388 Updated by Constantin Asofiei almost 4 years ago
There is another issue, and please let me know if either of you is tracking it. If you go to Reservations and click Check-in, you get a Unique index or primary key violation: "IDX__GUEST_STAY_ORDER ON PUBLIC.GUEST(STAY_ID, ORDER_) VALUES ( /* key:34298 */ null, 150, 1, null, null, null, null, null, null, null, null, null)"; SQL statement:
I can look at this, if you are not working on it.
#389 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin Asofiei wrote:
There is another issue, and please let me know if either of you is tracking it. If you go to Reservations and click Check-in, you get a Unique index or primary key violation: "IDX__GUEST_STAY_ORDER ON PUBLIC.GUEST(STAY_ID, ORDER_) VALUES ( /* key:34298 */ null, 150, 1, null, null, null, null, null, null, null, null, null)"; SQL statement:
I can look at this, if you are not working on it.
Yes, this is a new one. Please go ahead.
#390 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:
Constantin Asofiei wrote:
There is another issue, and please let me know if either of you is tracking it. If you go to Reservations and click Check-in, you get a Unique index or primary key violation: "IDX__GUEST_STAY_ORDER ON PUBLIC.GUEST(STAY_ID, ORDER_) VALUES ( /* key:34298 */ null, 150, 1, null, null, null, null, null, null, null, null, null)"; SQL statement:
I can look at this, if you are not working on it.
Yes, this is a new one. Please go ahead.
This is because r202 of Hotel GUI has 'bad data'. After importing the data from Hotel GUI trunk, I get this:
But 4011a still has a problem with the 'bad data'. 4231b gets us the guest already exists with stayid ... and order ... message (as 4GL raises an ERROR condition), and it does not abend.
#391 Updated by Constantin Asofiei almost 4 years ago
Eric/Ovidiu, about the 'bad data' case; there is this class orm.UniqueTracker:
/**
 * The data related to a specified unique index. This holds all the unique records added for
 * an index of a specific {@code DMO}.
 */
private class UniqueIndex
which has this field:
/**
 * The set of unique records, keyed by {@code Key}, mapping to the original
 * {@code id} of the record.
 */
private final Map<Key, Long> records = new HashMap<>();
Why is records empty? Shouldn't this be loaded with all the unique records from the database?
This code in lockAndChange
:
// attempt to store new Key, based on record's data
UniqueIndex.Key newVal = ui.createKey(dmo);
if (ui.records.containsKey(newVal))
{
   reportUniqueConstraintViolation(dmo, newVal);
}
misses the unique index collision, as ui.records is empty.
Or do we rely on a later call to validateUniqueCommitted, which has these TODOs?
catch (PersistenceException exc)
{
   // TODO: process the exception to differentiate between unique constraint violation and
   //       other database errors; rethrow in the event of other errors

   // validation failed, or we had a different error; roll back to savepoint
   // TODO: is this always safe (or even possible) in the event of a non-unique constraint
   //       related database error?
#392 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Why is records empty? Shouldn't this be loaded with all the unique records from the database?
No, here we are just tracking unique index changes happening within uncommitted transactions. Once a transaction is committed, any unique constraint information related to the records affected by that transaction is removed. A separate check against all the unique records in the database is done after this check, when we actually try to insert/update the DMO inside a very small-scoped savepoint.
The UniqueTracker is playing the role the dirty database used to play w.r.t. checking unique constraint changes for uncommitted transactions.
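A minimal sketch of that role, with hypothetical names (register, endTransaction) and none of the locking the real UniqueTracker does: unique keys created by uncommitted transactions are held in a map and dropped when the transaction ends, at which point the database's own constraint takes over:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class UncommittedUniqueTracker
{
   // Unique key values pending in uncommitted transactions, mapped to record id.
   private final Map<List<Object>, Long> pending = new HashMap<>();

   // Keys touched per transaction, so commit/rollback can clean up.
   private final Map<String, Set<List<Object>>> byTxn = new HashMap<>();

   /** Register a key; returns false if another uncommitted record already holds it. */
   public boolean register(String txn, List<Object> key, long recId)
   {
      Long owner = pending.get(key);
      if (owner != null && owner != recId)
      {
         return false;  // would violate the unique index against uncommitted data
      }
      pending.put(key, recId);
      byTxn.computeIfAbsent(txn, t -> new HashSet<>()).add(key);
      return true;
   }

   /** On commit or rollback, drop the transaction's entries; the database now owns them. */
   public void endTransaction(String txn)
   {
      Set<List<Object>> keys = byTxn.remove(txn);
      if (keys != null)
      {
         keys.forEach(pending::remove);
      }
   }
}
```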
#393 Updated by Eric Faulhaber almost 4 years ago
This is interesting:
Caused by: org.h2.jdbc.JdbcSQLException: Savepoint is invalid: "SYSTEM_SAVEPOINT_0"; SQL statement: ROLLBACK TO SAVEPOINT SYSTEM_SAVEPOINT_0 [90063-197]
We are trying to roll back to a savepoint which does not exist. This suggests we are trying to use the same savepoint (commit and/or rollback) more than once, or that we are otherwise unbalanced in setting and releasing/rolling back savepoints. This could potentially happen if we get more than one commit/rollback call from TM, which Greg noted was possible in certain conditions. I may have missed safeguarding against such a scenario.
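One way to guard against consuming the same savepoint twice is to track live savepoints in a stack. This is a hypothetical sketch (SavepointGuard is not an FWD class, and the real fix would sit wherever the JDBC ROLLBACK TO SAVEPOINT is issued), shown without any JDBC calls:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class SavepointGuard
{
   // Names of savepoints that are currently live, newest on top.
   private final Deque<String> stack = new ArrayDeque<>();

   public void set(String name)
   {
      stack.push(name);
   }

   /**
    * Roll back to a named savepoint only if it is still live; returns false
    * instead of issuing a second ROLLBACK TO SAVEPOINT for a name that was
    * already consumed (the double-release scenario suspected above).
    */
   public boolean rollbackTo(String name)
   {
      if (!stack.contains(name))
      {
         return false;  // savepoint already released or rolled back
      }
      while (!stack.isEmpty())
      {
         if (stack.pop().equals(name))
         {
            return true;  // here the real code would issue ROLLBACK TO SAVEPOINT
         }
      }
      return false;  // unreachable given the contains() check above
   }
}
```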
#394 Updated by Constantin Asofiei almost 4 years ago
OK, that makes sense. My problem is this: in validateUniqueCommitted, we don't know which unique index failed validation. I can use the first two chars of SQLException.getSQLState() (class 23 = integrity constraint violation) to determine that we are in a unique constraint violation case, but to determine the exact index we'll need a dialect-specific way to parse the error message (Unique index or primary key violation: "IDX__GUEST_STAY_ORDER ON PUBLIC.GUEST(STAY_ID, ORDER_) VALUES ( /* key:31087 */ null, 149, 1, null, null, null, null, null, null, null, null, null)" for H2), extract the index name, and from that the fields.
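A sketch of that dialect-specific parsing for H2, assuming the 1.4.x message format quoted above; the class and method names here are hypothetical, not the FWD implementation:

```java
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class UniqueViolationParser
{
   // H2 1.4.x phrases unique violations as:
   //    Unique index or primary key violation: "<INDEX> ON <SCHEMA>.<TABLE>(...) ..."
   private static final Pattern H2_UNIQUE =
      Pattern.compile("Unique index or primary key violation: \"(\\w+) ON ");

   /** True when the SQLState class (first two chars) is 23 = integrity constraint violation. */
   public static boolean isIntegrityViolation(String sqlState)
   {
      return sqlState != null && sqlState.startsWith("23");
   }

   /** Extract the violated index name from an H2 error message, if it matches. */
   public static Optional<String> indexName(String message)
   {
      Matcher m = H2_UNIQUE.matcher(message);
      return m.find() ? Optional.of(m.group(1)) : Optional.empty();
   }
}
```

Each dialect would contribute its own pattern; the index name would then be mapped back to the DMO's index metadata to recover the fields.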
#395 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin,
I am working on getting these messages parsed and the proper 4GL error issued. I estimate they will be committed later today.
#396 Updated by Eric Faulhaber almost 4 years ago
This is one of the open items. See #4011-127.
#397 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:
Constantin,
I am working on getting these messages parsed and the proper 4GL error issued. I estimate they will be committed later today.
OK, I'll switch to the savepoint problem when using the 'good data'.
#398 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
OK, I'll switch to the savepoint problem when using the 'good data'.
Since you have the 'good data' loaded, would you please check whether the problem I reported in #4011-385 has the same root cause?
#399 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Constantin Asofiei wrote:
OK, I'll switch to the savepoint problem when using the 'good data'.
Since you have the 'good data' loaded, would you please check whether the problem I reported in #4011-385 has the same root cause?
This doesn't have the same cause; the error is in a temp-table.
#400 Updated by Eric Faulhaber almost 4 years ago
OK, so it probably is an issue with the timing of the record flush.
#401 Updated by Constantin Asofiei almost 4 years ago
The root cause for the Reservations/Check-in/Delete guest/Cancel reservation abend related to the invalid savepoint is an intermediate commit which clears all savepoints. The commit is done when creating a dynamic table:
Session.unlockAll() line: 996 [local variables unavailable]
Session.endTransaction() line: 760
Session.commit(boolean) line: 708
CreateIndex.update() line: 61
CommandContainer.update() line: 102
CommandContainer(Command).executeUpdate(Object) line: 261
JdbcStatement.executeUpdateInternal(String, Object) line: 169
JdbcStatement.executeBatch() line: 777 [local variables unavailable]
Persistence.executeSQLBatch(Context, List<String>, boolean) line: 3382
Persistence.executeSQLBatch(List<String>, boolean) line: 2469
TemporaryBuffer$Context.doCreateTable(Class<DataModelObject>) line: 6502
TemporaryBuffer$Context.createTable(Class<DataModelObject>, String) line: 6294
TemporaryBuffer.openScopeAt(int) line: 4208
TemporaryBuffer.openScopeAt(int, DataModelObject...) line: 1971
TemporaryBuffer.createDynamicBufferForTempTable(String, RecordBuffer, TempTable, int) line: 2128
TemporaryBuffer.createDefaultDynamicBufferForTempTable(TempTableBuilder) line: 2081
TempTableBuilder.createDefaultBuffer() line: 3014
TempTableBuilder.tempTablePrepareImpl(String, String) line: 2243
TempTableBuilder.tempTablePrepare(character, logical) line: 2131
TempTableBuilder.tempTablePrepare(String) line: 2294
UpdateStayDialog.lambda$execute$25(handle, logical, handle, handle, character, integer, character, character, character, character, rowid, logical, logical, character, integer, rowid, integer, date, date, character, integer) line: 468
H2's CreateIndex class has this code in update():
public int update() {
/* 60 */   if (!this.transactional) {
/* 61 */      this.session.commit(true);
So the commit is part of a batch SQL statement, to create the temp-table. Does this mean that the create index idx_mpid__dtt4__2 on dtt4 (_multiplex); is missing the TRANSACTIONAL keyword?
#402 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:
transactional - the H2 manual says that the CREATE TABLE command commits an open transaction, except when using TRANSACTIONAL (only supported for temporary tables).
What the manual doesn't say is that CREATE TABLE ... TRANSACTIONAL is the only schema command which is allowed to not commit the transaction. All other schema commands will commit the transaction!
I think this is a problem for us... starting with create index, and then the drop commands and whatever else we use.
#403 Updated by Eric Faulhaber almost 4 years ago
Nice find. I wonder why we weren't hitting this before. I was not doing anything special to do the temp-table DDL outside of an existing transaction.
I can see how we could defer the drops until a safe time, after a transaction. The creates are a bit problematic to handle separately, because they happen lazily and thus easily could be within an existing transaction.
#404 Updated by Ovidiu Maxiniuc almost 4 years ago
- unique index validation: primarily implementation no. 1 (see note 127), but the 2nd is also slightly covered, as a backup solution (not yet tested with an MSSQL database);
- fixed an FQL issue. It was actually a case-sensitivity issue; FQL/SQL are case-insensitive;
- fixed parsing of datetime/datetime-tz literals;
- other minor changes.
#405 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
I can see how we could defer the drops until a safe time, after a transaction. The creates are a bit problematic to handle separately, because they happen lazily and thus easily could be within an existing transaction.
Are you talking about static or dynamic temp-tables? Because a dynamic temp-table can be created or dropped at any time, during a transaction.
#406 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Eric Faulhaber wrote:
I can see how we could defer the drops until a safe time, after a transaction. The creates are a bit problematic to handle separately, because they happen lazily and thus easily could be within an existing transaction.
Are you talking about static or dynamic temp-tables? Because a dynamic temp-table can be created or dropped at any time, during a transaction.
Either. The moment when the 4GL deletes the table resource and the moment when we actually issue the DROP DDL do not need to be the same. We already are careful to drop temporary tables only when we are sure they are no longer in use.
But I think the best way to address this is probably with changes to H2 itself, to support the TRANSACTIONAL keyword at minimum with CREATE INDEX, but probably also with DROP TABLE and DROP INDEX. I posted the following to the H2 user list last night:
Subject: TRANSACTIONAL option with creating/dropping tables/indexes
Date: Fri, 29 May 2020 21:21:46 -0400
From: Eric Faulhaber <ecf@goldencode.com>
To: h2-database@googlegroups.com
Hello,
I work on a framework which uses H2 1.4.197 as an in-memory database, primarily for temporary tables.
The framework needs to create and drop temporary tables and their indexes on demand, as needed by an application running atop the framework. The timing of these actions is driven by application logic, and as such is not fully under the control of the framework. As a result, create/drop DDL often needs to be executed within the scope of an existing transaction established by an application. These DDL actions must not commit the current transaction, as this causes application errors.
The CREATE TEMPORARY TABLE command offers the TRANSACTIONAL option, which prevents such a commit. However, in accordance with the online documentation, the CREATE INDEX, DROP TABLE, and DROP INDEX commands do not support TRANSACTIONAL.
However, we noticed that at the top of the CreateIndex.update() method, a transactional instance variable (inherited from DefineCommand) is checked, and only if it is false is the current transaction committed. The field is not initialized by default, and it is not obvious under which conditions (if at all) it would be set to true for the CREATE INDEX command.
I also checked DropTable.update() and DropIndex.update(). Both methods commit the current transaction unconditionally, again in accordance with the documentation.
My questions:
- As noted, CreateIndex.update() checks the transactional flag before committing. Is this variable ever set to true, such that the commit would be bypassed?
- Is there a functional reason why CREATE INDEX, DROP INDEX, and DROP TABLE should not support the TRANSACTIONAL option for temporary tables and the indexes on them? Or is it simply that no one has implemented it yet?
- If we were to implement support for TRANSACTIONAL syntax on CREATE INDEX, DROP INDEX, and DROP TABLE, would such support likely be welcomed back into the project?
Thank you in advance for your help.
Best regards,
Eric Faulhaber
I got the following reply from one of the contributors:
Subject: Re: [h2] TRANSACTIONAL option with creating/dropping tables/indexes
Date: Sat, 30 May 2020 11:02:42 +0200
From: Noel Grandin <noelgrandin@gmail.com>
Reply-To: h2-database@googlegroups.com
To: H2 Database <h2-database@googlegroups.com>
On Sat, 30 May 2020 at 03:22, Eric Faulhaber <ecf@goldencode.com> wrote:
As noted, CreateIndex.update() checks the transactional flag before committing. Is this variable ever set to true, such that the commit would be bypassed?
It is set when creating constraints inside a table and when the TRANSACTIONAL keyword is encountered.
Is there a functional reason why CREATE INDEX, DROP INDEX, and DROP TABLE should not support the TRANSACTIONAL option for temporary tables and the indexes on them? Or is it simply that no one has implemented it yet?
Most likely.
If we were to implement support for TRANSACTIONAL syntax on CREATE INDEX, DROP INDEX, and DROP TABLE, would such support likely be welcomed back into the project?
We're quite happy to accept changes, as long they come with at least one unit test and don't break any of the other unit tests.
See build instructions here:
#407 Updated by Constantin Asofiei almost 4 years ago
Eric, we still require the synchronization patch we have for ver 192 (from here: https://proj.goldencode.com/downloads/h2/h2_synchronization_fix_20160816a.patch), right?
#408 Updated by Constantin Asofiei almost 4 years ago
Also, the current version is 1.4.200 - should we build the changes on top of that? And also switch FWD to 1.4.200?
#409 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin Asofiei wrote:
Also, the current version is 1.4.200 - should we build the changes on top of that? And also switch FWD to 1.4.200?
For this task I worked with fwd-h2-1.4.197.jar.
#410 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Eric, we still require the synchronization patch we have for ver 192 (from here: https://proj.goldencode.com/downloads/h2/h2_synchronization_fix_20160816a.patch), right?
Yes.
And also switch FWD to 1.4.200?
Let's stay with the version we're on for the moment. There is enough change in 4011a already. When we're ready to submit the changes back to the H2 project, we can base them on the latest version. But that will be a separate effort.
#411 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Constantin Asofiei wrote:
Eric, we still require the synchronization patch we have for ver 192 (from here: https://proj.goldencode.com/downloads/h2/h2_synchronization_fix_20160816a.patch), right?
Yes.
Do you have the 197 version of this patch? I can't apply it to ver 197.
#412 Updated by Constantin Asofiei almost 4 years ago
Eric, I have the H2 fixes for CREATE/DROP INDEX, DROP TABLE, ALTER TABLE ADD CONSTRAINT FOREIGN cases. I'll upload the H2 .jar and .patch here, correct?
Now I get an NPE related to rollback:
Caused by: java.lang.NullPointerException
   at com.goldencode.p2j.persist.RecordBuffer.load(RecordBuffer.java:10809)
   at com.goldencode.p2j.persist.RecordBuffer.access$32(RecordBuffer.java:10714)
   at com.goldencode.p2j.persist.RecordBuffer$LightweightUndoable.assign(RecordBuffer.java:13164)
   at com.goldencode.p2j.util.LazyUndoable.rollback(LazyUndoable.java:167)
   at com.goldencode.p2j.util.TransactionManager.processRollback(TransactionManager.java:6231)
   at com.goldencode.p2j.util.TransactionManager.rollbackWorker(TransactionManager.java:4070)
   at com.goldencode.p2j.util.TransactionManager.rollback(TransactionManager.java:3929)
   at com.goldencode.p2j.util.TransactionManager.access$10(TransactionManager.java:3920)
   at com.goldencode.p2j.util.TransactionManager$TransactionHelper.rollback(TransactionManager.java:7716)
   at com.goldencode.p2j.util.TransactionManager$TransactionHelper.rollback(TransactionManager.java:7677)
   at com.goldencode.p2j.util.BlockManager.processCondition(BlockManager.java:10262)
   at com.goldencode.p2j.util.BlockManager.doBlockWorker(BlockManager.java:9119)
   at com.goldencode.p2j.util.BlockManager.doBlock(BlockManager.java:1132)
   at com.goldencode.hotel.UpdateStayDialog.lambda$execute$25(UpdateStayDialog.java:1143)
The scenario is this:
- reservations - click 'Check in'
- delete the guest
- press cancel in the 'Check in' dialog
- you get an NPE
The guest is created in the UpdateStayDialog.execute method, which is a FULL tx. Later on, we have a DO block:
doBlock(TransactionType.SUB, "mainBlock", onPhrase0, new Block((Body) () ->
{
   ControlFlowOps.invoke("initializeObject");
   LogicalTerminal.waitFor(gdialogFrame, new EventList("GO", gdialogFrame.asWidget()));
}));
which shows the 'Check in' dialog.
The guest delete happens in a UI trigger, with a block like:
doBlock(TransactionType.FULL, "blockLabel2", new Block((Body) () ->
{
   new FindQuery(guest, (String) null, null, "guest.stayId asc, guest.order asc", LockType.EXCLUSIVE).current();
   guest.deleteRecord();
   i.assign(1);
Now, when the guest delete commits, it will copy the rollback data to the wait-for DO block (as this is the nearest sub-tx block). When Cancel is pressed in the 'Check in' dialog, this wait-for DO block (a sub-tx) needs to undo. But, for some reason, it doesn't find the record in the table; it tries to find it via dmo = persistence.load(this.dmoClass, id, lockType, 0L, true, forceRefresh); on line 10799 in RecordBuffer.load.
For some reason, FWD does the rollback for the savepoint which has the guest.create(), too - thus it does the 'undo delete' and after that the 'undo create', and no record remains in the table. Shouldn't the sub-tx have its own savepoint?
#413 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
For some reason, FWD does the rollback for the savepoint which has the guest.create(), too - thus it does the 'undo delete' and after that the 'undo create', and no record remains in the table. Shouldn't the sub-tx have its own savepoint?
OK, FWD does have a savepoint for the sub-tx wait-for DO block. The problem is with the flush, in this scenario:
- a record created in an outer, full-tx block
- the flush happens in a sub-tx block
- the sub-tx block is rolled back, and in H2 this has the record created in the full-tx block, too (because of the flush)
I don't know if I have enough of a full picture to understand what should be changed. The flush in this case is because of a browse query (for the guest table) being executed in an inner sub-tx block, and this flushes a record created by the outer full-tx block. And H2 will register the created record in its UndoLog for the savepoint associated with the inner sub-tx block (as this is the current active one) - but this is incorrect: when rolling back the inner sub-tx block the record needs to remain, as it was created by the outer block. The scenario may be valid for nested sub-tx blocks, too.
So, maybe we let the flush happen at the sub-tx (and H2 register the action in the UndoLog for this inner sub-tx savepoint), and just register a 'redo operation' if the sub-tx block gets rolled back? Something else?
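One way the 'redo operation' idea above could look is sketched below. This is a hypothetical illustration, not FWD code: `RedoSketch`, `flushForOuterBlock`, and `rollbackSubTx` are invented names, and a plain map stands in for the database table.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the proposed redo mechanism: the sub-tx savepoint
// rollback is allowed to wipe out the flushed record, then queued redo
// actions restore changes that logically belong to an outer block.
class RedoSketch {
   final Map<Long, String> table = new HashMap<>();   // stand-in for the H2 table
   final Deque<Runnable> redoQueue = new ArrayDeque<>();

   // Flush inside a sub-tx of a record owned by an OUTER block: register a
   // redo so the record survives a rollback of this sub-tx.
   void flushForOuterBlock(long id, String data) {
      table.put(id, data);
      redoQueue.push(() -> table.put(id, data));
   }

   // The savepoint rollback removes the flushed rows, then the queued redo
   // operations re-apply the outer-block changes.
   void rollbackSubTx() {
      table.clear();                    // simulate the savepoint rollback
      while (!redoQueue.isEmpty()) {
         redoQueue.pop().run();
      }
   }
}
```

The design cost is visible even in the toy version: the server has to remember the flushed data long enough to replay it, which is exactly the extra state the savepoint-based approach was meant to avoid.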
#414 Updated by Eric Faulhaber almost 4 years ago
- File h2_sync_1-4-197.patch added
Constantin Asofiei wrote:
Do you have the 197 version of this patch? I can't apply it to ver 197.
Hm, I have the patch on my system and an H2 project with files of the same timestamp, but it looks like I have a slightly different set of changes. I must have made some edits when moving to 197 and I did not update the official patch.
I am attaching the output from git diff.
#415 Updated by Constantin Asofiei almost 4 years ago
- File h2_synchronization_transactional_fix_20200529a.patch added
- File fwd-h2-1.4.197.jar added
Attached is the changed H2 jar and the patch.
#416 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
Eric, I have the H2 fixes for CREATE/DROP INDEX, DROP TABLE, ALTER TABLE ADD CONSTRAINT FOREIGN cases.
The TRANSACTIONAL support for these is in 4011a rev 11448. Once you move to this rev, it is mandatory to use the h2 jar from #4011-415 (otherwise the DDL will not parse in H2).
#417 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
For some reason, FWD does the rollback for the savepoint which has the guest.create(), too - thus it does the 'undo delete' and after that the 'undo create', and no record remains in the table. Shouldn't the sub-tx have its own savepoint?
Yes, it should. This should be handled by SavepointManager, which is registered for callbacks at every block with full or sub-transaction properties (if a buffer has been opened for the associated database). A savepoint should be created at sub-transaction blocks:
- when the SavepointManager callbacks are registered;
- when iterate is invoked; and
- when retry is invoked.
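The intended savepoint discipline can be sketched in miniature. This is a hypothetical model (the class and methods below are invented, not the actual SavepointManager API): a savepoint records the undo-log position at sub-transaction entry, and iterate/retry move it forward so a rollback only undoes changes made since the latest block entry or iteration.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch of per-block savepoint management. Changes are
// recorded in an undo log; a savepoint remembers the log position at
// sub-transaction entry, iterate, or retry.
class SavepointSketch {
   private final List<String> undoLog = new ArrayList<>();
   private final Deque<Integer> savepoints = new ArrayDeque<>();

   void enterSubTx()          { savepoints.push(undoLog.size()); } // block entry
   void iterate()             { resetTop(); }                      // next iteration
   void retry()               { resetTop(); }                      // RETRY
   void change(String action) { undoLog.add(action); }

   // Roll back everything recorded since the current savepoint.
   void rollbackSubTx() {
      int mark = savepoints.pop();
      while (undoLog.size() > mark) {
         undoLog.remove(undoLog.size() - 1);
      }
   }

   private void resetTop() { savepoints.pop(); savepoints.push(undoLog.size()); }

   int logSize() { return undoLog.size(); }

   public static void main(String[] args) {
      SavepointSketch sp = new SavepointSketch();
      sp.change("create record A");   // belongs to the outer scope
      sp.enterSubTx();
      sp.change("update record A");   // made inside the sub-tx
      sp.rollbackSubTx();
      // Only the inner change is undone; the outer create survives.
      System.out.println(sp.logSize()); // prints 1
   }
}
```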
#418 Updated by Eric Faulhaber almost 4 years ago
Constantin, we made a significant functional change in 4011a in terms of the timing of flushes to the database.
As you know, pre-4011a, a "flush" in FWD meant associating a DMO with the Hibernate session, not necessarily persisting it to the database at that moment. Hibernate would manage the actual timing of the flush to the database: it used "transparent write-behind" to defer the flush until the last possible moment, to try to take advantage of SQL batching, among other optimizations. In 4011a, a "flush" in FWD means actually persisting a record (or changes to it) at that moment. This is done via either the Validation class or the Persistence.save API.
The new approach more closely matches the 4GL's behavior and allows us to optimize DMO validation significantly. Previously, we had to validate every change before persisting it, using database queries against the primary and dirty databases, to be sure we would only associate "clean" DMOs with the Hibernate session. Now, we check unique constraints against uncommitted changes across all user sessions with the UniqueTracker class, and we check against committed data by actually trying to issue an insert or update to the primary database, wrapped in a very small-scoped savepoint.
Another major change is in the UNDO architecture. Pre-4011a, the FWD server was the authoritative keeper of UNDO information (in the form of Reversible objects). In 4011a, the database is the authoritative keeper of UNDO information. Instead of tracking every change in Reversible objects, we now use database savepoints to roll back changes. We now keep track of which DMOs have been changed in a sub-transaction's scope, but not what data was changed. When a savepoint is rolled back, we mark those DMOs as STALE. When business logic accesses a stale DMO, its data is refreshed from the database (or perhaps it is removed, if the record was deleted by a rollback).
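The stale-DMO idea can be illustrated with a minimal sketch. This is hypothetical code, not the FWD implementation: plain maps stand in for the database and the DMO cache, and `markStale`/`access` are invented names.

```java
// Illustrative sketch only: the "database" map stands in for committed state,
// the "cache" holds DMO snapshots used by business logic. A savepoint
// rollback does not rewrite cached DMOs; it only marks them STALE. The next
// access refreshes the DMO from the database, or evicts it if the record no
// longer exists after the rollback.
class StaleDmoSketch {
   final java.util.Map<Long, String> database = new java.util.HashMap<>();
   final java.util.Map<Long, String> cache = new java.util.HashMap<>();
   final java.util.Set<Long> stale = new java.util.HashSet<>();

   // Rollback bookkeeping: we know WHICH DMOs changed, not what changed.
   void markStale(long id) { stale.add(id); }

   // Business-logic access: lazily refresh stale DMOs from the database.
   String access(long id) {
      if (stale.remove(id)) {
         String refreshed = database.get(id);
         if (refreshed == null) {
            cache.remove(id);            // record deleted by the rollback
         } else {
            cache.put(id, refreshed);    // refresh data from the database
         }
      }
      return cache.get(id);
   }
}
```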
This is all to say that I expect that a lot of the bugs we find will be due to this change in flushing semantic and UNDO architecture.
I have not done much testing of the LightweightUndoable areas of the code. It looks like you have found some problems here.
#419 Updated by Constantin Asofiei almost 4 years ago
Eric, the main issue with the savepoints, the FWD (sub-)tx and the FWD flush is this: in H2, a flush will associate the record (for undo) with the current savepoint; but the create or change may belong to an outer block, and not be part of this block's savepoint. Ideally, we need all parts to be in sync - an H2 flush should record the change with the savepoint associated with the FWD block which made that change.
But this is not happening, and something needs to change. But which part?
- the flush - so that it is done before the sub-tx starts (and this may not be how the 4GL behaves);
- the rollback - if the change in the flushed record was not made by the current block, create a Reversible, to redo this change after the database level flush was made.
BTW, this was done with H2 for the permanent Hotel database, but I assume postgres behaves the same (anyway, I will check tomorrow).
#420 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Eric, the main issue with the savepoints, the FWD (sub-)tx and the FWD flush is this: in H2, a flush will associate the record (for undo) with the current savepoint; but the create or change may belong to an outer block, and not be part of this block's savepoint. Ideally, we need all parts to be in sync - an H2 flush should record the change with the savepoint associated with the FWD block which made that change.
But this is not happening, and something needs to change. But which part?
- the flush - so that it is done before the sub-tx starts (and this may not be how the 4GL behaves);
- the rollback - if the change in the flushed record was not made by the current block, create a Reversible, to redo this change after the database level flush was made.
BTW, this was done with H2 for the permanent Hotel database, but I assume postgres behaves the same (anyway, I will check tomorrow).
I'm not sure I understand the problem you are describing without a test case, but I suspect that if some part is going wrong, it is most likely the timing of the flush (i.e., of an insert or update) to the database of a new or modified record. The pre-4011a code was written to work with Hibernate's deferred insert/flush mechanism. We were not sensitive to the fact that Hibernate was deferring this to some later point, because we were not using savepoints.
In 4011a, we must match the timing of the 4GL's flush exactly (at least within the same sub-transaction scope), so that any savepoint rollback affects the correct record states. I think perhaps we've missed that timing in some cases, as we've tried to adjust the old flush logic to the new requirement. This is where the legacy behavior code has changed most drastically.
Previously, RecordBuffer$ValidationHelper and DMOValidator were responsible for deciding when the flush of a newly created or updated DMO would occur. This decision-making has moved to Validation, using the state stored inside BaseRecord. Also, the logic which drives these old and new classes in RecordBuffer.flush and RecordBuffer.validate has changed.
Reversible is now deprecated and will be removed entirely as soon as possible. I think I have disconnected all uses of that hierarchy of classes and I do not want to add new uses of it. The idea is that the state of the database is authoritative for all UNDO-able tables. The key to making this work is (a) to get the timing of the flushes correct in relation to the savepoints; and (b) to make sure stale DMOs are replaced/refreshed/removed after a rollback, to ensure they reflect the state of the database before they are again touched by business logic.
#421 Updated by Constantin Asofiei almost 4 years ago
Eric, please see this test:
def temp-table tt1 field f1 as int.
repeat transaction:
   create tt1.
   tt1.f1 = 10.
   do transaction:
      find first tt1.
      undo, leave.
   end.
   for each tt1:
      message tt1.f1.
   end.
   leave.
end.
The flush is executed at the find first statement inside the sub-tx DO block. This associates the created record with the H2 savepoint for the DO block.
When the DO block is rolled back, H2 will roll back the create, too - even if this belongs to the outer full-tx REPEAT block.
This shows that the savepoint used at the time of the flush is not the same as the savepoint where the change was actually made by FWD. Do we flush all buffers before the DO sub-tx starts?
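The mismatch can be simulated with a toy undo log. This is a sketch of the behavior described above, not H2's actual implementation; all names here are invented. The point is that undo entries are attributed to whichever savepoint is active when the physical INSERT happens, not when the logical CREATE happened.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy model of the flush-timing problem: the INSERT is deferred until the
// FIND inside the inner block, so the undo entry lands under the inner
// savepoint, and rolling that savepoint back removes a record that
// logically belongs to the outer transaction.
class FlushTimingSketch {
   final Set<Long> table = new HashSet<>();            // rows visible in the DB
   final List<List<Long>> undoLog = new ArrayList<>(); // one undo list per savepoint

   Long pendingCreate;                                 // created, not yet flushed

   void beginSavepoint() { undoLog.add(new ArrayList<>()); }

   void create(long id) { pendingCreate = id; }        // flush is deferred

   // A FIND forces the flush; the insert is recorded under the CURRENT savepoint.
   void find() {
      if (pendingCreate != null) {
         table.add(pendingCreate);
         undoLog.get(undoLog.size() - 1).add(pendingCreate);
         pendingCreate = null;
      }
   }

   void rollbackSavepoint() {
      List<Long> entries = undoLog.remove(undoLog.size() - 1);
      entries.forEach(table::remove);                  // undoes the deferred insert
   }
}
```

Running the 4GL scenario against this model (outer savepoint, create, inner savepoint, find, inner rollback) leaves the table empty, which is exactly the incorrect result observed in the test above.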
#422 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Eric, please see this test:
[...]
The flush is executed at the find first statement inside the sub-tx DO block. This associates the created record with the H2 savepoint for the DO block.
When the DO block is rolled back, H2 will roll back the create, too - even if this belongs to the outer full-tx REPEAT block.
OK, now I see your point.
We defer the insert to the database until the point where the 4GL would flush the updated record to the database. The rules vary, but generally have to do with when an index is updated. So, from H2's point of view the insert is part of the inner savepoint, but this is not the same as the create. The create gives us an invalid record that can't necessarily be stored in the database, because its fields/indexes aren't necessarily in a valid state until some updates are made. We need to roll back to this invalid record state, rather than not having a record at all, which is what the inner savepoint gives us.
This shows that the savepoint used at the time of the flush is not the same as the savepoint where the change was actually made by FWD. Do we flush all buffers before the DO sub-tx starts?
No, the flush depends on the state of the record and its indexes.
I'm not sure what the best answer is. Generally, we have moved away from maintaining separate state in things like reversible objects. We probably need to maintain some state as part of RecordState, which allows us to roll back to this "newly created, potentially invalid" record, instead of what we do in 4011a today, which is to remove the record altogether when it is marked stale, because we don't find it in the database after the inner savepoint rollback.
#423 Updated by Eric Faulhaber almost 4 years ago
The key is to maintain enough information to differentiate between a savepoint rollback which legitimately undoes a create in the same scope vs. a case like you've shown where the create occurred in a different scope. Not sure yet of the best way to store this information...
#424 Updated by Constantin Asofiei almost 4 years ago
Eric, it is not just about inserts. See this test:
def temp-table tt1 field f1 as int.
create tt1.
tt1.f1 = 10.
release tt1.
repeat transaction:
   find first tt1.
   tt1.f1 = 20.
   do transaction:
      find first tt1.
      undo, leave.
   end.
   for each tt1:
      message tt1.f1.
   end.
   leave.
end.
The change is done at the REPEAT but the flush is done at the FIND FIRST in the inner DO sub-tx block. We have the same incorrect savepoint here.
#425 Updated by Eric Faulhaber almost 4 years ago
Constantin, I am working on a solution, but it may take another day or so. The good news is that I think you may have hit on the root cause of many bugs. The bad news is it will require storing more state than I wanted to in the FWD server, so the design is less elegant than I had hoped. The database will no longer be the authoritative source of record state; this responsibility will need to be shared with the FWD server more than I initially intended. However, I have learned some lessons from the Reversible mess in the past and I expect this implementation to be more efficient.
Please continue to test Hotel GUI and work on fixes for any apparent regressions which do not appear to be related to this root cause.
#426 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Please continue to test Hotel GUI and work on fixes for any apparent regressions which do not appear to be related to this root cause.
I'm looking at #4011-385 issue.
#427 Updated by Constantin Asofiei almost 4 years ago
A note: we shouldn't use select count(*) - this should be replaced by something like select count(recid), or whatever the name of the table's ID field is. Otherwise, AFAIK, count(*) will retrieve all the fields.
#428 Updated by Constantin Asofiei almost 4 years ago
About the #4011-385 issue - the problem is that the ttRoomTypeAgg.deleteAll() is seeing the table as 'empty' (the isTableDefinitelyEmpty() check), and the records are not actually removed. This is because the emptyTableFlags is never used when inserting records into the temp-table.
I think RecordBuffer.flush still needs to call reportChange - now this call is not reached, because of the 'record was neither in need of validation nor invalid' return:
else
{
   // record was neither in need of validation nor invalid, so there is nothing more to do
   return;
}

if (insert)
{
   markPersisted();
}

// Share uncommitted change to the record, if not previously done.
// TODO: fix this...validationHelper no longer used
// if (validationHelper.wasShared())
// {
//    dirtyContext.insert(this, currentRecord, null, null, null, true);
// }

reportChange(currentRecord, insert, false);
Or should the ChangeBroker be notified somewhere else?
#429 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin Asofiei wrote:
A note: we shouldn't use select count(*) - this should be replaced by something like select count(recid), or whatever the name of the table's ID field is. Otherwise, AFAIK, count(*) will retrieve all the fields.
I suppose you are talking about the unique index validation. These are queries using standard SQL cross-platform statements.
I used the explain command in psql with both select count(*) and select count(recid). The output is almost identical; both return a single row. The only difference is that the returned row width for the former is 0, while for the latter it is 8.
The same command for H2 returns exactly the same output. What I found strange is that only the H2 plan reported the index used; psql reported a Seq Scan instead.
#430 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:
I suppose you are talking about the unique index validation. These are queries using standard SQL cross-platform statements.
Yes.
I used the explain command in psql with both select count(*) and select count(recid). The output is almost identical; both return a single row. The only difference is that the returned row width for the former is 0, while for the latter it is 8.
The same command for H2 returns exactly the same output. What I found strange is that only the H2 plan reported the index used; psql reported a Seq Scan instead.
You are right, I think I remembered something wrong from long ago...
Anyway, looking at some benchmark articles, like https://www.citusdata.com/blog/2016/10/12/count-performance/, it seems count(*) will still scan the entire table. Instead, maybe we can use a select with a 'limit 1' for checking if there is a match?
#431 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Anyway, looking at some benchmark articles, like https://www.citusdata.com/blog/2016/10/12/count-performance/, it seems count(*) will still scan the entire table. Instead, maybe we can use a select with a 'limit 1' for checking if there is a match?
This is what the old code did, IIRC. Although this should not be a very performance sensitive area in most cases, since it only should be invoked when reporting an error, it shouldn't be a hard change to make. For a very large table, a sequential scan could introduce a long delay.
#432 Updated by Ovidiu Maxiniuc almost 4 years ago
Indeed, the full scan will be required for both count(...) versions. However, limit 1 will not help. The result is always 1 row, be it 0 or some other positive number.
The idea is to abort the scan at the first occurrence. The solution for this would be something like:
select exists(select * from <table> where recid != ? and <record-matching-conditions>)
If the database engine is smart enough, it should indeed stop at the first occurrence and return true. There is no point in scanning the other records in the table once it finds one match, as was necessary with the count aggregate. However, the psql explain command returns the same performance values.
#433 Updated by Ovidiu Maxiniuc almost 4 years ago
Some thoughts: that article uses a pseudo-randomized, unindexed data set. They need the count, so the scan is mandatory.
In our case, the data is indexed, and the form of the WHERE clause should force the plan to use that specific index (as all its components are specified). In this case, locating the record(s) should be really fast, as the field combination either is or is not a key in the index.
#434 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:
Some thoughts: that article uses a pseudo randomized unindexed data set. They need the count so the scan is mandatory.
In our case, the data is indexed, and the form of the WHERE clause should force the plan to use that specific index (as all its components are specified). In this case locating the record(-s) should be done really fast as the field combination is or is not a key in the index table.
You are right, I was about to post the same thing. Our case is '1 or 0 records' and the lookup will be done via an index. Sorry for the derail.
#435 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
About the #4011-385 issue - the problem is that the ttRoomTypeAgg.deleteAll() is seeing the table as 'empty' (the isTableDefinitelyEmpty() check), and the records are not actually removed. This is because the emptyTableFlags is never used when inserting records into the temp-table.
I think RecordBuffer.flush still needs to call reportChange - now this call is not reached, because of the 'record was neither in need of validation nor invalid' return:
[...]
Or should the ChangeBroker be notified somewhere else?
The problem I was fixing when I added this return in RecordBuffer.flush was to avoid firing the ChangeBroker notification in a case where the record had been read from the database and never modified, yet we were sending an insert notification. This caused the AdaptiveQuery doing the reading to invalidate its results and caused a bug.
However, the change I made was not enough, since it caused the regression you are seeing now.
The problem is that Validation is pass/fail: if it throws an exception, it failed; if the logic is allowed to continue, it passed. However, passing entails two possible conditions:
- the record was validated and was found to be valid; OR
- the record was inspected and deemed to not be ready for validation yet, based on legacy flushing logic.
Implicit in the first condition is that the record was flushed as an insert or update to the database. This is the case about which we want CB to notify listeners. If the second condition occurred, then no flush was performed, and we do NOT want a CB notification to be fired.
The problem is that currently Validation does not offer feedback on whether the flush occurred or was bypassed. I think we need to add this, so that we know whether to fire the CB notifications.
Also, I think there currently are other places in the code where we perform the validation (and possible flush), after which we do not fire a CB notification. It is probably one of these places that is performing the flush in this case. Then, we get to RecordBuffer.flush, determine validation is not needed (because it already was done elsewhere), and return, thus missing the emptyTableFlags update.
We probably need to centralize the validation/flushing/notification into one method to make this more maintainable.
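A sketch of what such centralization could look like follows. All names here are hypothetical stand-ins (not the real Validation or ChangeBroker signatures): the validation step reports whether a flush actually happened, and the notification and empty-table bookkeeping run only in that case.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: validation reports its outcome so the single entry
// point knows whether to fire a change notification and update the
// empty-table flag.
class ValidateFlushSketch {
   enum Outcome { FLUSHED, DEFERRED }   // flushed to the DB vs. not ready yet

   final List<String> notifications = new ArrayList<>();
   boolean tableDefinitelyEmpty = true;

   Outcome validateAndFlush(boolean readyToFlush) {
      if (!readyToFlush) {
         return Outcome.DEFERRED;       // legacy flushing rules say: not yet
      }
      // ... issue the INSERT/UPDATE bracketed by a small savepoint here ...
      return Outcome.FLUSHED;
   }

   // Single entry point: validate, flush, then notify only if a flush occurred.
   void flushRecord(boolean readyToFlush, boolean insert) {
      if (validateAndFlush(readyToFlush) == Outcome.FLUSHED) {
         if (insert) {
            tableDefinitelyEmpty = false;                    // emptyTableFlags update
         }
         notifications.add(insert ? "insert" : "update");    // ChangeBroker-style event
      }
   }
}
```

With one entry point, the "validated elsewhere, so skip everything" path can no longer silently skip the notification and the empty-table update.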
#436 Updated by Eric Faulhaber almost 4 years ago
Ovidiu Maxiniuc wrote:
Indeed, the full scan will be required for both count(...) versions. However, limit 1 will not help. The result is always 1 row, be it 0 or some other positive number.
The idea is to abort the scan at the first occurrence. The solution for this would be something like:
[...]
If the database engine is smart enough, it should indeed stop at the first occurrence and return true. There is no point in scanning the other records in the table once it finds one match, as was necessary with the count aggregate.
However, the psql explain command returns the same performance values.
Hm, I seem to recall from implementing CAN-FIND as a subquery that I needed the limit with exists. However, there are 2 implementations for CAN-FIND, one with FIRST and one without. Perhaps I am conflating the two, as I am not looking at that code ATM.
#437 Updated by Eric Faulhaber almost 4 years ago
Eric Faulhaber wrote:
We probably need to centralize the validation/flushing/notification into one method to make this more maintainable.
Constantin, are you working on this?
#438 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Eric Faulhaber wrote:
We probably need to centralize the validation/flushing/notification into one method to make this more maintainable.
Constantin, are you working on this?
Yes, I'm trying to fix it, but to me it looks more like a batch assign issue; this test:
def temp-table tt1 field f1 as int field f2 as int.
def var i as int.
do i = 1 to 10:
   create tt1.
   assign tt1.f1 = i
          tt1.f2 = i * 10.
end.
for each tt1:
   delete tt1.
end.
for each tt1:
   message tt1.f1.
end.
works if you replace the batch assign statement with plain field assignments.
#439 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Eric Faulhaber wrote:
Eric Faulhaber wrote:
We probably need to centralize the validation/flushing/notification into one method to make this more maintainable.
Constantin, are you working on this?
Yes, I'm trying to fix it, but to me it looks more like a batch assign issue; this test:
[...]
works if you replace the batch assign statement with plain field assignments.
I've left the old invocation handler method in place (but it is not called) as invoke1, for reference as to how the update logic used to work.
Even if this particular issue is related to a regression in batch assign mode, I think the other issue I described is still a problem, or at least needs review. A lot of the flushing to the database moved into the Validation class, and it is not necessarily followed with a ChangeBroker notification.
#440 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Even if this particular issue is related to a regression in batch assign mode, I think the other issue I described is still a problem, or at least needs review. A lot of the flushing to the database moved into the Validation class, and it is not necessarily followed with a ChangeBroker notification.
Yes, I'm working on creating some tests for that, too.
#441 Updated by Eric Faulhaber almost 4 years ago
Ovidiu Maxiniuc wrote:
unique index validation: primarily an implementation of approach no. 1 (see entry 127), but it also partially covers the 2nd as a backing solution (not yet tested with an MSSQL database);
I reviewed this and unfortunately it needs to be changed. The point of reworking the old unique constraint checking from DmoValidator into the new Validation class was to be able to get rid of the "pre-validation" of all the unique indexes using SQL queries. Instead, we bracket an insert/update in a savepoint and let the database tell us whether there are any unique constraint violations. We get a huge performance improvement from this change. However, rev 11447 puts the expensive queries back in front of the insert/update for every check. We cannot give this performance back.
I only want to go down the route of the queries in the case where the insert/update has failed and we need to work out the details of exactly which index caused the unique constraint violation, in order to report the error correctly in the legacy format. This requires some interpretation of the dialect-specific error/exception, to verify that the error was in fact due to a unique constraint violation, and not some other database error. Once we have made this determination, we use the queries to work out the details for the error message. But we ONLY do the queries if the insert/update has failed, and the savepoint has been rolled back.
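The savepoint-plus-error-analysis flow can be sketched as follows. This is a minimal illustration, not FWD code: the helper name isUniqueViolation is hypothetical, and the only fact relied on is that PostgreSQL and H2 both report unique constraint violations with SQLSTATE 23505 (noted later in this thread).

```java
import java.sql.SQLException;

public class UniqueViolationCheck
{
   // SQLSTATE shared by PostgreSQL and H2 for unique constraint violations
   static final String UNIQUE_VIOLATION = "23505";

   // Hypothetical helper: after an INSERT/UPDATE bracketed in a savepoint
   // fails and the savepoint is rolled back, decide whether the failure was
   // a unique constraint violation (only then run the follow-up queries
   // that work out which index to report in the legacy error format).
   static boolean isUniqueViolation(SQLException e)
   {
      return UNIQUE_VIOLATION.equals(e.getSQLState());
   }

   public static void main(String[] args)
   {
      SQLException unique = new SQLException("duplicate key value violates unique constraint", "23505");
      SQLException other  = new SQLException("syntax error", "42601");

      System.out.println(isUniqueViolation(unique)); // true
      System.out.println(isUniqueViolation(other));  // false
   }
}
```

The key point of the design is that this check (and any follow-up queries) runs only on the failure path, so the common successful insert/update pays no pre-validation cost.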
Please disable the new query-based validation check for now, but then get back to integrating the dirty share manager as your top priority. After that, come around and finish the validation work.
#442 Updated by Ovidiu Maxiniuc almost 4 years ago
Eric Faulhaber wrote:
Please disable the new query-based validation check for now, but then get back to integrating the dirty share manager as your top priority. After that, come around and finish the validation work.
Disabled unique index pre-validation. In the event of an error when a record is saved, the SQLException is still analysed and the faulty dialect-specific index is used to describe the P4GL condition.
LE: the change is committed as r11449.
#443 Updated by Ovidiu Maxiniuc almost 4 years ago
More details:
At this moment the unique indexes are validated on the SQL tier, the result in case of failure being a dialect-specific SQLException. The exception object itself is part of the dialect API, but the good part is that they all share the same db error state, in this case 23505 - UNIQUE VIOLATION. There are two downsides: the first is that the faulty index is part of the exception message, but the syntax of the message is really dialect-specific. The second is the case when multiple indexes are breached and the SQL server may report a different index than P4GL does, as Eric observed.
To properly handle the unique-index errors, the SQL statements in RecordMeta.uniqueSql need to be used to identify the right index to report, after the database has reported the failure. This way both of the above issues are eliminated, as there will be no need to parse the dialect-specific message and the indexes are processed in the P4GL order.
Issues with the uniqueSql statements as they are built at this moment:
- they are not dialect-oriented. This is mainly the case-sensitivity issue. The SQL statements used by dialects which use computed columns need to be created using the decorated name. Then there is another possible problem: if the same queries are used for the dirty database, the dialect may be different. Yet, this latter might not be an issue as the dirty database is not really a P4GL object, so there is nothing to report. What do we do if we have a validation error on this database?
- I noticed an issue with decimal number validation. The values are not rounded when the parameters of these queries are set, but they were when the conflicting record was persisted. As a result, we will not get the right faulty index. I tested this for both H2 and PSQL. This will require the PropertyMeta to store the scale/precision and do the rounding.
- the datetime-tz type: from my tests, I found no issues up until now, but I am a bit circumspect about how PSQL converts the time/zone-offset.
#444 Updated by Eric Faulhaber almost 4 years ago
I've committed rev 11450, which reworks the UNDO of DMOs to be primarily in memory on the FWD server, with less reliance on synchronizing to the database's state after rollback. This is meant to address the issues Constantin reported in #4011-421 and #4011-424.
The implementation should be much faster than the previous Reversibles implementation, both for the common (commit) case and for the exceptional (rollback) case. The representation of the DMO's data changes is flat (a multi-dimensional array vs. the multi-level map-based structures we had before). There is no use of reflection. The change data does not need to be aggregated at every sub-transaction commit, as before.
However, while it fixes the test cases posted in history entries 421 and 424, it has not been heavily tested yet, so I expect some things to shake out still.
#445 Updated by Constantin Asofiei almost 4 years ago
Eric, the issue in #4011-412 is not solved by your changes.
#446 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Eric, the issue in #4011-412 is not solved by your changes.
OK, thanks for testing. The change is mostly about the raw undo mechanics for individual records, to match the legacy semantics and timings, but I didn't yet look closely at the interaction with LightweightUndoable. I'll look at this case specifically.
If you know of any other issues in Hotel GUI which you suspect are related to this root cause, please let me know.
#447 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
If you know of any other issues in Hotel GUI which you suspect are related to this root cause, please let me know.
No, I haven't found (at least not yet) any other case related to this.
But what I found is:
- an abend when deleting a room
- after this abend, start another client (without restarting the FWD server) and try to delete another room - you will see a deadlock related to unique index tracking:
UniqueTracker$UniqueIndex.lock() line: 998
UniqueTracker.lockAndDelete(BaseRecord, UniqueTracker$Context) line: 352
UniqueTracker$Context.lockAndDelete(UniqueTracker, BaseRecord) line: 604
RecordBuffer.delete() line: 7334
$__Proxy13(BufferImpl).deleteRecord(boolean) line: 7795
$__Proxy13(BufferImpl).deleteRecord() line: 1121
RoomsFrame$3.lambda$body$3(AdaptiveQuery) line: 573
In case of abends, I think FWD should release all locks, right?
#448 Updated by Constantin Asofiei almost 4 years ago
My changes to fix #4011-385 (the deleteAll problem, caused by a missing notification for a newly created record) are in 4011a rev 11451.
I'm looking at the abend for room delete now.
#449 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
In case of abends, I think FWD should release all locks, right?
Yes, it should. The locking and unlocking code throughout should be reviewed. At minimum, we probably need to put calls to lock() inside the try block when used in try-finally constructs, where unlock() is already called in the finally block. The current context cleanup code does not try to detect/release locks acquired but not released at the time of an abend. Not sure if this is necessary/possible.
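The pattern described above can be sketched with a plain ReentrantLock standing in for the UniqueTracker lock (the class and method names here are illustrative stand-ins, not FWD code). Acquiring inside the try block and tracking success in a local flag guarantees the finally block releases exactly what was acquired, even when the guarded work abends:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockPattern
{
   static final ReentrantLock LOCK = new ReentrantLock();

   // Acquire inside the try block: if work.run() abends, finally still
   // releases the lock; if lock() itself were to fail, the flag prevents
   // unlocking something we never held.
   static void withLock(Runnable work)
   {
      boolean locked = false;
      try
      {
         LOCK.lock();
         locked = true;
         work.run();
      }
      finally
      {
         if (locked)
         {
            LOCK.unlock();
         }
      }
   }

   public static void main(String[] args)
   {
      try
      {
         withLock(() -> { throw new RuntimeException("abend during delete"); });
      }
      catch (RuntimeException e)
      {
         // the abend propagates, but the lock was released
      }
      System.out.println(LOCK.isLocked()); // false
   }
}
```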
#450 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Eric, the issue in #4011-412 is not solved by your changes.
Constantin, I don't get an NPE with this recreate using rev 11451. Upon hitting "Cancel" in the "Check-In" dialog, it just dismisses the dialog as it should. Is there something different about the recreate to get the NPE?
#451 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Constantin Asofiei wrote:
Eric, the issue in #4011-412 is not solved by your changes.
Constantin, I don't get an NPE with this recreate using rev 11451. Upon hitting "Cancel" in the "Check-In" dialog, it just dismisses the dialog as it should. Is there something different about the recreate to get the NPE?
If I delete the existing guest, and then press Cancel in the Check-in dialog, I get the NPE.
#452 Updated by Eric Faulhaber almost 4 years ago
Ok, that's what I'm doing. I wonder why I get different behavior. I'll run a full conversion from scratch and try it again. You don't have any uncommitted changes, right?
#453 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Ok, that's what I'm doing. I wonder why I get different behavior. I'll run a full conversion from scratch and try it again. You don't have any uncommitted changes, right?
I've run deploy.all, my FWD is on 4011a rev 11451, no other changes. And the NPE is still there.
#454 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
My changes to fix #4011-385, the deleteAll problem because there was no notification of a newly created record - 4011a rev 11451.
I'm looking at the abend for room delete now.
This is fixed in 4011a rev 11452 - there was no 'autocommit' for the record delete in a temp-table.
#455 Updated by Constantin Asofiei almost 4 years ago
- delete a room
- close and start another client
- delete another room - deadlock
The UniqueIndex.unlock is never called.
#456 Updated by Constantin Asofiei almost 4 years ago
This code in RecordBuffer.delete doesn't set the token:
// TODO: this probably belongs in the ORM layer (in Session?)
UniqueTracker.Token token = null;
try
{
   uniqueTrackerCtx.lockAndDelete(uniqueTracker, currentRecord);
   persistence.delete(currentRecord);
}
finally
{
   if (token != null)
   {
      uniqueTrackerCtx.unlock(uniqueTracker, token);
   }
}
Fixed in 4011a rev 11453.
#457 Updated by Constantin Asofiei almost 4 years ago
The only issue I'm aware of in Hotel GUI is the NPE related to cancelling the Reservation, after deleting the guest.
#458 Updated by Eric Faulhaber almost 4 years ago
- File unique_violation.png added
Constantin Asofiei wrote:
The only issue I'm aware of in Hotel GUI is the NPE related to cancelling the Reservation, after deleting the guest.
Ugh, the problem was that I wasn't switching to the "Reservations" tab first. But now I don't even get into the "Check In" dialog. Whether or not I select a guest in the main reservations browse, when I click the "Check in..." button, I get this:
#459 Updated by Eric Faulhaber almost 4 years ago
Another issue I hit was in Swing only:
- log in
- from "Available Rooms", click "Check-In..."
- delete guest
- UI hangs
Virtual desktop and embedded mode get through this flow fine, so it may be a client problem that has nothing to do with 4011a.
#460 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Virtual desktop and embedded mode get through this flow fine, so it may be a client problem that has nothing to do with 4011a.
Please update to rev 11453 - it may be related to not releasing the lock on a delete.
#461 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Constantin Asofiei wrote:
The only issue I'm aware of in Hotel GUI is the NPE related to cancelling the Reservation, after deleting the guest.
Ugh, the problem was that I wasn't switching to the "Reservations" tab first. But now I don't even get into the "Check In" dialog. Whether or not I select a guest in the main reservations browse, when I click the "Check in..." button, I get this:
Maybe the DB is corrupted? I don't see this. You should have the 'at least one guest' message.
#462 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Eric Faulhaber wrote:
Virtual desktop and embedded mode get through this flow fine, so it may be a client problem that has nothing to do with 4011a.
Please update to rev 11453 - it may be related to not releasing the lock on a delete.
Sorry, I should have noted that this was with 11453. I'll try deploy.all again and see if it improves.
#463 Updated by Ovidiu Maxiniuc almost 4 years ago
Unfortunately, I don't have the dirty share fully done yet.
To let my brain cool down I returned and fixed the unique index validation messages. Committed in revision 11454 together with some other fixes. I cherry-picked them to keep/improve the application stability.
#464 Updated by Eric Faulhaber almost 4 years ago
On a completely fresh run of Hotel GUI, first thing after running ant deploy.all and server startup, I get some errors as follows. This is with 4011a rev 11454 in virtual desktop mode.
- log into the application
- click "Reservations" tab
- select the first guest in the browse
- click "Check-In..."
This results in a message box with a unique constraint violation message:
** guest already exists with stay-id 149 order 1.(132)
I dismiss this and I get a follow-up message:
** Invalid record..(0)
The application cannot recover after that, and I get many more Invalid record messages as I try to exit.
However, I just recalled Constantin said the data with Hotel GUI rev 202 was bad, so I will try again with imported data from Hotel GUI trunk.
#465 Updated by Eric Faulhaber almost 4 years ago
- File duplicate_dmo_same_id.png added
After another full ant deploy.all, this time with the data from Hotel GUI rev 212, I now get the following error with the same recreate described in my previous post:
When dismissed, this is followed by another error message:
** Unable to update stay Field. (142)
Once this is dismissed, the application is still usable.
Constantin, please look into this error. Bear in mind that it is an intentional policy that we have only one DMO instance with a given primary key in the session cache. So, the error is flagging a valid problem. The issue is to determine the reason we have a second instance and to fix that root cause.
#466 Updated by Constantin Asofiei almost 4 years ago
11454 does not compile for me:
[exec] :ant-compile
[exec] [ant:javac] 4011a/src/com/goldencode/p2j/persist/RecordBuffer.java:12607: error: cannot find symbol
[exec] [ant:javac]       session = persistence.bind(persistenceContext);
[exec] [ant:javac]                             ^
I've bypassed this and I don't get your error - if I press 'Check in' I get the 'Add at least one guest' message, even if the guest was not touched. This is with fresh data (in PostgreSQL).
I'll reconvert again, but I don't think I'll see something different.
Eric: do you have any uncommitted changes?
#467 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
11454 does not compile for me:
[...] I've bypassed this and I don't get your error - if I press 'Check in' I get the 'Add at least one guest' message, even if the guest was not touched. This is with fresh data (in PostgreSQL).
I'll reconvert again, but I don't think I'll see something different.
Eric: do you have any uncommitted changes?
I forgot about that regression in 11454. The error is in dead code, as that invoke1 method is not called by anything. It is just there for reference. The only uncommitted change I have is the workaround for it:
=== modified file 'src/com/goldencode/p2j/persist/RecordBuffer.java'
--- src/com/goldencode/p2j/persist/RecordBuffer.java    2020-06-05 22:55:37 +0000
+++ src/com/goldencode/p2j/persist/RecordBuffer.java    2020-06-06 03:02:44 +0000
@@ -12604,7 +12604,7 @@
       initialize();

       // Bind Hibernate session to Persistence object if necessary.
-      session = persistence.bind(persistenceContext);
+      //session = persistence.bind(persistenceContext);

       // is property an extent field?
       boolean extent = false;
I am using the default H2 database in my testing. I'll try with PostgreSQL.
#468 Updated by Eric Faulhaber almost 4 years ago
Eric Faulhaber wrote:
I am using the default H2 database in my testing. I'll try with PostgreSQL.
Indeed, the error does not occur for me with PostgreSQL. I do get the "Add at least one guest" error message, as well, even though there is a guest name in the browse (the same guest as was selected in the "Reservations" screen). Please check whether the behavior here is different than that with FWD trunk rev 11328 and if so, work on a fix.
#469 Updated by Eric Faulhaber almost 4 years ago
BTW, also found these in the server log from the same work flow:
[06/08/2020 04:54:59 EDT] (com.goldencode.p2j.persist.orm.Persister:WARNING) Failed to UPDATE #30064 of com.goldencode.hotel.dmo.hotel.Stay__Impl__.
[06/08/2020 04:54:59 EDT] (com.goldencode.p2j.persist.orm.Persister:WARNING) Failed to UPDATE #30065 of com.goldencode.hotel.dmo.hotel.Guest__Impl__.
#470 Updated by Constantin Asofiei almost 4 years ago
OK, I'll check with H2, too.
BTW, I still see the NPE for the 'delete guest' and 'Cancel', in the Reservations, Check-in scenario. I'll look at it, too.
#471 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
BTW, also found these in the server log from the same work flow:
[...]
The root cause is the dirty share database (the messages are from it) - it doesn't see the record as 'new' and it doesn't insert it. I'm inclined to copy (some of) the state from the DMO to its copy.
#472 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Eric Faulhaber wrote:
BTW, also found these in the server log from the same work flow:
[...]
The root cause is the dirty share database (the messages are from it) - it doesn't see the record as 'new' and it doesn't insert it. I'm inclined to copy (some of) the state from the DMO to its copy.
Please coordinate with Ovidiu on this; he is actively making changes in this area. The dirty database should be completely disabled for all tables in Hotel GUI (and in most applications). Maybe his updates are in some interim state (see #4011-463) where he has not yet disabled it (i.e., used the NopDirtyShareContext) by default.
#473 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Please coordinate with Ovidiu on this; he is actively making changes in this area. The dirty database should be completely disabled for all tables in Hotel GUI (and in most applications). Maybe his updates are in some interim state (see #4011-463) where he has not yet disabled it (i.e., used the NopDirtyShareContext) by default.
So, the dirty share database should be used only for explicitly-configured tables/databases? Currently, the NopDirtyShareContext is used only for the temp DB. I'm inclined to change that line of code and use it for any DB (for now), to see how Hotel GUI behaves.
And another issue with the dirty database - previously there were Reversible instances to rollback an operation. But now the dirty share database is not 'rolling back' in sync with the main database.
#474 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
And another issue with the dirty database - previously there were Reversible instances to rollback an operation. But now the dirty share database is not 'rolling back' in sync with the main database.
This can be seen in Hotel GUI like this; execute at least two times without restarting the server:
- Reservations tab, click 'Check in' for first guest
- click 'Cancel' in the window
You will get a unique index violation, as the guest insert in the dirty share database was not rolled back.
#475 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
After another full ant deploy.all, this time with the data from Hotel GUI rev 212, I now get the following error with the same recreate described in my previous post:
I'm not sure of the root cause, but I can duplicate this only with the dirty share database; when disabling it, I no longer see it.
#476 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
So, the dirty share database should be used only for explicitly-configured tables/databases?
Yes, tables (not databases). It should default to "off" otherwise.
However, there is one component that I think needs to be refactored/extracted, so that it is not dependent upon the dirty share code: the GlobalEventManager. That is used to communicate changes across sessions, which might impact index walks (e.g., FOR EACH).
Currently, the NopDirtyShareContext is used only for the temp DB. I'm inclined to change that line of code and use it for any DB (for now), to see how Hotel GUI behaves.
In a single session, it should work ok.
And another issue with the dirty database - previously there were Reversible instances to rollback an operation. But now the dirty share database is not 'rolling back' in sync with the main database.
Yes, good point.
#477 Updated by Eric Faulhaber almost 4 years ago
Constantin, please commit your change to always use NopDirtyShareContext, with a TODO to re-enable it when the re-integration of the dirty share manager is complete. We need to avoid any code paths into there while we are testing 4011a for general use.
Ovidiu, please note.
#478 Updated by Eric Faulhaber almost 4 years ago
Rev 11455 fixes an NPE and some validation logic.
#479 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Constantin, please commit your change to always use NopDirtyShareContext, with a TODO to re-enable it when the re-integration of the dirty share manager is complete. We need to avoid any code paths into there while we are testing 4011a for general use.
Ovidiu, please note.
The change is in 4011a rev 11456.
#480 Updated by Constantin Asofiei almost 4 years ago
The abend in Reservations/Check-in/Delete Guest/Cancel Check-in is related to this test:
def temp-table tt1 field f1 as int.
create tt1.
tt1.f1 = 1.
release tt1.
repeat transaction:
   create tt1.
   tt1.f1 = 10.
   do transaction:
      find last tt1.
      delete tt1.
      undo, leave.
   end.
   for each tt1:
      message tt1.f1.
   end.
   leave.
end.
On delete, the record gets evicted from the session's cache. Do we need to restore the session's cache, on rollback?
#481 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
The abend in Reservations/Check-in/Delete Guest/Cancel Check-in is related to this test:
[...]On delete, the record gets evicted from the session's cache. Do we need to restore the session's cache, on rollback?
I didn't think so. We should be fetching it from the database on next access. See ChangeSet.rollback for the DELETE case, and the code in RecordBuffer$Handler.invoke which handles the STALE check and calls Session.get to fetch it from the database and back into the cache. Perhaps if you debug through this case, you will find a flaw in that logic. Of course, if we're accessing it somewhere outside of the invocation handler, that could be a problem. That's currently the only place that does the STALE check.
#482 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
I didn't think so. We should be fetching it from the database on next access.
The problem is that the record is no longer in the database. It was flushed and deleted in the same savepoint, for the DO TRANSACTION block - and when the savepoint was rolled back, the record was gone from the database. Somehow it needs to be restored, and actually I don't think just placing it in the cache is enough.
#483 Updated by Constantin Asofiei almost 4 years ago
There is a deadlock when updating a Rate or a Guest by multiple clients, but this is an application logic problem.
For the Reservation update (from the Reservations tab), the deadlock exists because when comparing two BigDecimal values, although they are equal, their scale is different (2 vs 10). I don't know if this is a real bug or an application-level issue, as I can't check in 4GL.
The issue is for 4GL code like reservation.price = price, where price is an integer value. This calls Record._setDecimal, which creates a new BDT decimal instance, for which the precision will be 10 instead of 2. I think for numbers we need to use compareTo instead of equals to determine if two values are equal.
#484 Updated by Constantin Asofiei almost 4 years ago
This fix solves all deadlocks related to locking (even those which exist in trunk), in BaseRecord.setDatum:
### Eclipse Workspace Patch 1.0
#P p2j4011a
Index: src/com/goldencode/p2j/persist/orm/BaseRecord.java
===================================================================
--- src/com/goldencode/p2j/persist/orm/BaseRecord.java  (revision 2329)
+++ src/com/goldencode/p2j/persist/orm/BaseRecord.java  (working copy)
@@ -497,7 +497,10 @@
       // update the bitset of dirty properties
       dirtyProps.set(offset);

-      if (!Objects.equals(data[offset], datum))
+      if (!(Objects.equals(data[offset], datum) ||
+            (datum instanceof Comparable &&
+             data[offset] instanceof Comparable &&
+             ((Comparable) datum).compareTo(data[offset]) == 0)))
       {
          // ensure we have an exclusive lock on the record
          if (!lockForUpdate())
For e.g. Rate, the compare in trunk is done decimal vs decimal, and the scale differs - so the lock attempt is done; same behavior is in 4011a.
For Reservations, in trunk the compare is done integer (the actual argument value) vs decimal, and these values are equal in FWD. In 4011a, we convert the integer to decimal first, and only after that compare/try to assign it - and, as the scale differs, there will be a lock attempt.
So this may not be just a 4011a problem, but a trunk problem when checking if two values are equal - in some cases, 'equals' is not enough, and if the values are comparable, I think we should use those, too.
Eric, I'll commit this if you don't see any issues. My only concern is whether we should compare the actual argument value, e.g. an integer, even if the field is decimal. Now in 4011a we convert the argument to the field's type, and use this value to determine if it is the same as the field's value.
OTOH, the root cause may be decimal.equals, which delegates to BigDecimal.equals, which checks the scale and not just the numeric value. Instead it should use BigDecimal.compareTo.
#485 Updated by Eric Faulhaber almost 4 years ago
Constantin, at this level in the code, we are not dealing with BDT instances. These are the lower level Java types which are SQL "friendly". So, the fix looks ok, though we probably don't need to check whether both datum and data[offset] are instances of Comparable. If one is, the other must be as well.
#486 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Constantin, at this level in the code, we are not dealing with BDT instances.
You are correct, the comparison in trunk is with BDT, not in 4011a.
I'll fix decimal.equals too, to use compareTo, as that is wrong now.
#487 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
Eric Faulhaber wrote:
Constantin, at this level in the code, we are not dealing with BDT instances.
You are correct, the comparison in trunk is with BDT, not in 4011a.
I'll fix decimal.equals too, to use compareTo, as that is wrong now.
Fixed in 11457.
#488 Updated by Constantin Asofiei almost 4 years ago
Eric, in #4011-303, you note that in p2j.cfg.xml you used this:
<schema primaryKeyName="id"> ...
but the FWD code reads it from this:
<parameter name="primary-key-name" value="id" />
What's the right approach?
#489 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin,
in the FWD runtime the primary key name is read from persistence/primary-key-name and stored globally in Session.PK. This is a single, unified value for all databases/schemas. As a result, there is a single location for defining the same value at conversion time (import and default for runtime), as a project parameter. So the answer is the latter: primary-key-name in the parameter tag.
#490 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
Eric Faulhaber wrote:
I didn't think so. We should be fetching it from the database on next access.
The problem is that the record is no longer in the database. It was flushed and deleted in the same savepoint, for the DO TRANSACTION block - and when the savepoint was rolled back, the record was gone from the database. Somehow it needs to be restored, and actually I don't think just placing it in the cache is enough.
Going back to this: I've tried saving the record in the database, in ChangeSet.rollback. But this is done too early, before the savepoint is rolled back, so any changes to the database done in ChangeSet.rollback will be lost. I'm making some changes to gather the list of 'rollback actions' to be applied after the savepoint is rolled back.
#491 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
Constantin Asofiei wrote:
Eric Faulhaber wrote:
Constantin, at this level in the code, we are not dealing with BDT instances.
You are correct, the comparison in trunk is with BDT, not in 4011a.
I'll fix decimal.equals too, to use compareTo, as that is wrong now.
Fixed in 11457.
The datum[] != null check is needed (the previous 'instanceof' was working because for null values this was false). Fixed in 11458.
#492 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Going back to this: I've tried saving the record in the database, in ChangeSet.rollback. But: this is done too early, before the savepoint is rolled back. So any changes to the database done in ChangeSet.rollback will be lost. I'm making some changes to gather the list of 'rollback actions' to be applied after the savepoint is rolled back.
I want to avoid adding new lists of actions to be re-applied; I want instead to stick with the architecture already in ChangeSet, perhaps with some changes to RecordBuffer$LightweightUndoable, which I don't think is applicable to the new UNDO architecture in its current form. Let me please take a crack at a fix for this.
#493 Updated by Constantin Asofiei almost 4 years ago
- File rollback_patch.txt added
Eric, the attached patch fixes this test case:
def temp-table tt1 field f1 as int.
create tt1.
tt1.f1 = 1.
release tt1.
repeat transaction:
   create tt1.
   tt1.f1 = 10.
   do transaction:
      find last tt1.
      delete tt1.
      undo, leave.
   end.
   for each tt1:
      message tt1.f1.
   end.
   leave.
end.
But it doesn't fix the Hotel GUI scenario. I'm trying to duplicate that in a standalone test, but no luck yet.
#494 Updated by Eric Faulhaber almost 4 years ago
I don't understand this patch. I mean, I get the logic, but it doesn't look right to me conceptually. The point of marking the events in the ChangeSet is to allow rollback to put the record back into the state it was in before these events took place. However, the patch re-runs the actions we supposedly are rolling back, which doesn't make sense to me.
The approach I'm trying is to reset the record's state to what it was before the UNDO, based on the events recorded by the ChangeSet, and to modify LightweightUndoable so it does not try to re-load the record from the database. Instead, it resets the rolled back record as is (i.e., after ChangeSet.rollback has modified its data and state) into the buffer as the current record. The state machine in RecordBuffer.setCurrentRecord is quite complicated, so this may be tricky.
#495 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
I don't understand this patch. I mean, I get the logic, but it doesn't look right to me conceptually. The point of marking the events in the ChangeSet is to allow rollback to put the record back into the state it was in before these events took place. However, the patch re-runs the actions we supposedly are rolling back, which doesn't make sense to me.
Yes, you are right, the patch is wrong. I haven't realized that it can affect even a 'good rollback', where the record is really gone from the database.
The approach I'm trying is to reset the record's state to what it was before the UNDO, based on the events recorded by the ChangeSet, and to modify LightweightUndoable so it does not try to re-load the record from the database. Instead, it resets the rolled back record as is (i.e., after ChangeSet.rollback has modified its data and state) into the buffer as the current record. The state machine in RecordBuffer.setCurrentRecord is quite complicated, so this may be tricky.
OK, so the record doesn't need to remain in the database, as the flush is rolled back, right? And it just needs to end up back in the buffer. My focus was on the record operations (i.e. insert); I wasn't thinking in 'flush' terms.
Now 'the problem is at LightweightUndoable' makes sense. Thanks for the explanation.
#496 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
OK, so the record doesn't need to remain in the database, as the flush is rolled back, right?
That is what I'm thinking, yes.
Now 'the problem is at LightweightUndoable' makes sense.
Now my problem is that LightweightUndoable is nothing like I remember it. I guess all the changes for LazyUndoable are just an optimization, but the core idea is still the same, right? That is: capture state we want to roll back to in deepCopy, and reset that state in assign. Correct?
#497 Updated by Eric Faulhaber almost 4 years ago
Eric Faulhaber wrote:
Now my problem is that LightweightUndoable is nothing like I remember it. I guess all the changes for LazyUndoable are just an optimization, but the core idea is still the same, right? That is: capture state we want to roll back to in deepCopy, and reset that state in assign. Correct?
The reason I ask is that I noticed in the debugger that these methods of LightweightUndoable seem to be invoked with different timing than before (i.e., within the scope of the block, rather than just before it). And possibly more often?
I need to know conceptually that these methods are working the way I think they are, so that I am doing the right things within deepCopy and assign.
#498 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
I guess all the changes for LazyUndoable are just an optimization, but the core idea is still the same, right? That is: capture state we want to roll back to in deepCopy, and reset that state in assign. Correct?
Yes, the core logic is the same. LazyUndoable just delays the registration for undo support in the outer blocks, until an actual change was performed (and there's something to undo).
Eric Faulhaber wrote:
The reason I ask is that I noticed in the debugger that these methods of LightweightUndoable seem to be invoked with different timing than before (i.e., within the scope of the block, rather than just before it). And possibly more often?
What do you mean here? The deepCopy and assign work as before. It's just a matter of registration; this is delayed until the buffer is first touched - see the undoable.checkUndoable(true) calls in RecordBuffer.create and others.
#499 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Eric Faulhaber wrote:
The reason I ask is that I noticed in the debugger that these methods of LightweightUndoable seem to be invoked with different timing than before (i.e., within the scope of the block, rather than just before it). And possibly more often?
What do you mean here? The deepCopy and assign work as before. It's just a matter of registration; this is delayed until the buffer is first touched - see the undoable.checkUndoable(true) calls in RecordBuffer.create and others.
OK, I guess it was checkUndoable that I noticed was being called a lot, not deepCopy and assign. As long as these last two work the same way logically, I have what I need. Thanks!
#500 Updated by Eric Faulhaber almost 4 years ago
- File rollback_wip_ecf_20200609.patch added
Attached are my changes in progress for the rollback fix. It is not working yet, but I am posting because I need to take a break, in case you wanted to see where it is headed...
#501 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Attached are my changes in progress for the rollback fix. It is not working yet, but I am posting because I need to take a break, in case you wanted to see where it is headed...
Eric, if I use the NPE check in LightweightUndoable.assign:
if (copy.activeRecord != null)
{
   Long id = copy.activeRecord.primaryKey();
   LockType pinned = getPinnedLockType(id);
   if (lockType == null)
   {
      lockType = pinned;
   }
   if (!lockType.equals(pinned))
   {
      setPinnedLockType(id, lockType);
   }
}
then there is no more abend. I can't tell what else could be missing from your implementation.
#502 Updated by Constantin Asofiei almost 4 years ago
If you enable enhanced browse, and click the 'checkin' column in the Guests grid, you get a ClassCastException
:
java.lang.ClassCastException: java.lang.Long cannot be cast to com.goldencode.p2j.persist.Record
   at com.goldencode.p2j.persist.Presorter$SortedResults.<init>(Presorter.java:1643)
   at com.goldencode.p2j.persist.Presorter$SortedResults.<init>(Presorter.java:1619)
   at com.goldencode.p2j.persist.Presorter.createSortedResults(Presorter.java:483)
   at com.goldencode.p2j.persist.PresortCompoundQuery.presort(PresortCompoundQuery.java:530)
   at com.goldencode.p2j.persist.CompoundQuery.preselectResults(CompoundQuery.java:2054)
   at com.goldencode.p2j.persist.CompoundQuery.open(CompoundQuery.java:715)
   at com.goldencode.p2j.persist.QueryWrapper.open(QueryWrapper.java:654)
   at com.goldencode.p2j.ui.BrowseWidget.setDynamicSorting(BrowseWidget.java:2171)
   at com.goldencode.p2j.ui.BrowseWidget.setDynamicSorting(BrowseWidget.java:6751)
   at com.goldencode.p2j.ui.LogicalTerminal.setDynamicSorting(LogicalTerminal.java:12186)
   at com.goldencode.p2j.ui.LogicalTerminalMethodAccess.invoke(Unknown Source)
This is because PreselectQuery.getRow() was changed to return what the javadoc states: "Array of primary key IDs, one per table involved in the query." But in some cases this is not valid - the row can be fully hydrated records.
Eric, what part is wrong here? The Presorter$SortedResults, which expects full records, or PreselectQuery.getRow(), which was changed to return record IDs only?
#503 Updated by Ovidiu Maxiniuc almost 4 years ago
Found a broken feature I forgot about in 4011: sequences, see #3814-109.
We need to add back initialization support or ETF might not pass (IIRC, its framework uses sequences).
#504 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
Eric, what part is wrong here? The Presorter$SortedResults, which expects full records, or PreselectQuery.getRow(), which was changed to return record IDs only?
In looking at how Results.get(int) is used, there are many places where the returned value is expected to be a Record. I think PreselectQuery.getRow() is at fault. Eric, unless you made related changes which expect primary keys instead of Record instances, I'm inclined to roll back that change.
#505 Updated by Constantin Asofiei almost 4 years ago
This part in metaschema.xml
is wrong:
<!-- add ID field as first child element of record element; here, we hard-code the
     primary key name rather than using the project-specific override, because there is
     no need to introduce a possible conflict when we know recid is safe for the
     metadata database -->
<rule>
   tpl.graft("meta_field_elem", null, elem,
             "name", "recid",
             "value", nextID.toString(),
             "type", "j")
</rule>
We need to use the primary key as specified in p2j.cfg.xml. Otherwise, MetadataManager will use the primary key name from directory.xml, which can be 'id' or something else.
I'll fix it.
#506 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin,
I think the change you refer to in #4011-504 was mine. The idea was to fetch the exact amount of information from the query. If only the keys were requested, the full record will not be fetched; it's up to the invoker of a SQL query to decide whether it wants the full records, or only the keys for some comparisons, for example. The caller should know (or analyse) the nature of the received ResultSet. To get the record, use Session.get(Class<T> dmoImplClass, Long id). This will also take advantage of the Session's cache of Records.
Eric, do you agree?
#507 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin Asofiei wrote:
This part in metaschema.xml is wrong:
[...]
We need to use the primary key as specified in p2j.cfg.xml. Otherwise, MetadataManager will use the primary key name from directory.xml, which can be 'id' or something else. I'll fix it.
Nice catch!
#508 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:
Constantin,
I think the change you refer to was mine. The idea was to fetch the exact amount of information from the query. If only the keys were requested, the full record will not be fetched; it's up to the invoker of a SQL query to decide whether it wants the full records, or only the keys for some comparisons, for example. The caller should know (or analyse) the nature of the received ResultSet. To get the record, use Session.get(Class<T> dmoImplClass, Long id). This will also take advantage of the Session's cache of Records.
I don't mind it working like that. But all clients of this API should know not to expect a Record but a Long primary key value, and be reworked to fetch the Record from the database. IMHO, this can be considered an optimization which we can add later on...
#509 Updated by Eric Faulhaber almost 4 years ago
If the query already has fetched the full records, we already would have used the session cache to gain this optimization, or else we would have hydrated the record while processing the query's results. Either way, if fullRecords is true, we will have full records, else we will have only the IDs. This notion is baked into the query before any call to getRows.
I am inclined to think the javadoc is wrong and the change should be reverted, based on the fact that there is dependent code which relies on the old behavior, and the nature of what the "rows" of the query are is determined by the fullRecords flag. Ovidiu, please let me know if I'm missing something about the new implementation that makes this inadvisable.
IMHO, this can be considered an optimization which we can add later on...
If we already had the full records and we are only returning the IDs, but have to add a call to fetch the full records from the cache, that doesn't seem like an optimization.
#510 Updated by Constantin Asofiei almost 4 years ago
Ovidiu, one more - I don't understand where the "__i?" is coming from.
Caused by: org.h2.jdbc.JdbcSQLException: Syntax error in SQL statement "SELECT COUNT(*) FROM TT1 WHERE ID != ? AND __IFIELDNAME=__I?[*] "; SQL statement:
select count(*) from tt1 where id != ? and __ifieldname=__i? [42000-197]
   at org.h2.message.DbException.getJdbcSQLException(DbException.java:357)
   at org.h2.message.DbException.get(DbException.java:179)
   at org.h2.message.DbException.get(DbException.java:155)
   at org.h2.message.DbException.getSyntaxError(DbException.java:203)
   at org.h2.command.Parser.getSyntaxError(Parser.java:548)
   at org.h2.command.Parser.prepareCommand(Parser.java:281)
   at org.h2.engine.Session.prepareLocal(Session.java:611)
   at org.h2.engine.Session.prepareCommand(Session.java:549)
   at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1247)
   at org.h2.jdbc.JdbcPreparedStatement.<init>(JdbcPreparedStatement.java:76)
   at org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:306)
   at com.goldencode.p2j.persist.orm.TempTableDataSourceProvider$DataSourceImpl$1.prepareStatement(TempTableDataSourceProvider.java:177)
   at com.goldencode.p2j.persist.orm.Validation.generateUniqueIndexesCondition(Validation.java:611)
   at com.goldencode.p2j.persist.orm.Validation.validateUniqueCommitted(Validation.java:716)
#511 Updated by Ovidiu Maxiniuc almost 4 years ago
That is interesting. It looks like I added a regression when constructing the validation message in my recent commit. Please email me the full stack. Thanks.
#512 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:
That is interesting. It looks like I added a regression when constructing the validation message in my recent commit. Please email me the full stack. Thanks.
After making this change in P2H2Dialect, the error is gone:
public String getProcessedCharacterColumnName(String name, boolean ignoreCase)
{
   if ("?".equals(name))
   {
      return name;
   }
   
   return DBUtils.getPrefixedParameter(
         name, ignoreCase ? INSENSITIVE_CHAR_FIELD : SENSITIVE_CHAR_FIELD);
}
The problem is that the name is the placeholder, and not a real field name. So nothing should be added to it. This needs to be fixed for MSSQL, too.
#513 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin Asofiei wrote:
Ovidiu Maxiniuc wrote:
That is interesting. It looks like I added a regression when constructing the validation message in my recent commit. Please email me the full stack. Thanks.
After making this change in P2H2Dialect, the error is gone:
[...]
The problem is that the name is the placeholder, and not a real field name. So nothing should be added to it. This needs to be fixed for MSSQL, too.
No, the problem is in RecordMeta.composeValidationSQLs(), lines 494-499. The sql should be constructed as:
boolean computed = crtProp.isCharacter();
sql.append(" and ")
.append(computed ? d.getProcessedCharacterColumnName(column, !csField) : column)
.append("=")
.append(computed ? d.getComputedColumnFormula("?", !csField) : "?");
#514 Updated by Ovidiu Maxiniuc almost 4 years ago
Actually, the code above will not work for PostgreSQL, as getComputedColumnFormula will return null. It looks like there are no methods in the Dialect class to generate the correct code expected at this place. I will add the inlined code and commit soon.
#515 Updated by Constantin Asofiei almost 4 years ago
There is this code in TemporaryBuffer.readAllRows:
ScrollableResults<TempRecord> results =
   persistence.scroll(entities, hql, parms, 0, 0, ResultSet.TYPE_FORWARD_ONLY);

BufferManager bufMgr = getBufferManager();

// process results
while (results.next())
{
   TempRecord dmo = results.get(0, TempRecord.class);
   rowHandler.accept(dmo);
   bufMgr.evictDMOIfUnused(persistence, getPersistenceContext(), dmo);
}
I would expect results.get to give me a TempRecord instance, but underneath it still uses the JDBC ResultSet, with the flattened record. I don't know how to fix it.
#516 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
There is this code in TemporaryBuffer.readAllRows:
[...]
I would expect results.get to give me a TempRecord instance, but underneath it still uses the JDBC ResultSet, with the flattened record. I don't know how to fix it.
All usages of ScrollableResults.get must be reviewed. There is more than one case where the second arg (the expected type) is a Record.
4011a rev 11461 contains misc fixes:
1. meta tables must use the configurable primary key name
2. rolled back the change in PreselectQuery.getRow
3. fixed TemporaryBuffer.copyAllRows
4. fixed a ClassCastException in orm.Session, when loading a meta table
ETF goes further with 11461, but abends in readAllRows when the Agent finishes the request and wants to transfer the table data to the remote side.
#517 Updated by Eric Faulhaber almost 4 years ago
Rev 11462 fixes TB.readAllRows. It also has some validation, flush, and rollback fixes that are work in progress. They do not fully work yet, but I think they are more correct than before.
I get past readAllRows, but I get the following in TableResultSet now:
java.lang.ClassCastException: com.goldencode.p2j.persist.FqlType cannot be cast to java.lang.Integer
   at com.sun.proxy.$Proxy242.getColumnType(Unknown Source)
   at com.goldencode.p2j.persist.TableResultSet$DataHandler.invoke(TableResultSet.java:619)
   ...
#518 Updated by Constantin Asofiei almost 4 years ago
Ovidiu, I need a mapping between FQLType enum values and java.sql.Types. Can you provide a complete list?
#519 Updated by Constantin Asofiei almost 4 years ago
11463 fixes the FqlType issue in #4011-517. But I think we need complete support for FqlType enum.
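For anyone filling in such a mapping, the general shape is a static table from the enum to the java.sql.Types constants. The sketch below is a standalone illustration only: the enum is redeclared locally with hypothetical constants (only DECIMAL and DOUBLE are confirmed in this thread), so it is not FWD's actual FqlType class.

```java
import java.sql.Types;
import java.util.EnumMap;

public class FqlTypeMapping
{
   // hypothetical subset of FqlType constants, redeclared locally for illustration
   enum FqlType { BOOLEAN, INTEGER, LONG, DOUBLE, DECIMAL, CHARACTER, DATE }

   static final EnumMap<FqlType, Integer> SQL_TYPES = new EnumMap<>(FqlType.class);
   static
   {
      SQL_TYPES.put(FqlType.BOOLEAN,   Types.BOOLEAN);
      SQL_TYPES.put(FqlType.INTEGER,   Types.INTEGER);
      SQL_TYPES.put(FqlType.LONG,      Types.BIGINT);
      SQL_TYPES.put(FqlType.DOUBLE,    Types.DOUBLE);   // inexact floating point
      SQL_TYPES.put(FqlType.DECIMAL,   Types.NUMERIC);  // exact fixed point
      SQL_TYPES.put(FqlType.CHARACTER, Types.VARCHAR);
      SQL_TYPES.put(FqlType.DATE,      Types.DATE);
   }

   /** Returns the java.sql.Types constant for a FqlType, or Types.OTHER if unmapped. */
   static int toSqlType(FqlType t)
   {
      return SQL_TYPES.getOrDefault(t, Types.OTHER);
   }

   public static void main(String[] args)
   {
      System.out.println(toSqlType(FqlType.DECIMAL) == Types.NUMERIC);  // true
      System.out.println(toSqlType(FqlType.DOUBLE) == Types.DOUBLE);    // true
   }
}
```

An EnumMap keeps the lookup O(1) with no hashing of strings, which matters if the mapping is consulted per column.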
#520 Updated by Constantin Asofiei almost 4 years ago
Ovidiu, I have a problem with the relation generated between the Field and File meta tables. In the DMO index, I have this for MetaField in 11328:
<foreign interface="MetaFile" schema="_meta">
   <property local="fileRecid" name="id"/>
</foreign>
In 4011a, I have this annotation:
@Relations(
{
   @Relation(name = "metaFileRecord", type = "MANY_TO_ONE", database = "_meta",
             target = MetaFile.class, components =
   {
      @RelationComponent(name = "fileRecid", legacy = "_file-recid")
   })
})
In RelationComponent, which one is the local and which one is the foreign field? Shouldn't this be something like local="fileRecid", name="id"?
#521 Updated by Constantin Asofiei almost 4 years ago
I've fixed that manually (BTW, name="id" should be name="recid", depending on what you configured as the PK in p2j.cfg.xml) and now I get this:
Caused by: java.lang.IllegalArgumentException: Expected DMO POJO interface; found: class com.goldencode.p2j.persist.Record
   at com.goldencode.p2j.persist.PropertyHelper.getRootPojoInterface(PropertyHelper.java:964)
   at com.goldencode.p2j.persist.PropertyHelper.allMethodsByProperty(PropertyHelper.java:902)
   at com.goldencode.p2j.persist.PropertyHelper.allGettersByProperty(PropertyHelper.java:264)
   at com.goldencode.p2j.persist.FieldReference.getGetter(FieldReference.java:612)
   at com.goldencode.p2j.persist.FieldReference.<init>(FieldReference.java:472)
   at com.goldencode.p2j.persist.FieldReference.<init>(FieldReference.java:523)
   at com.goldencode.p2j.persist.DynamicLegacyKeyJoin.setup(DynamicLegacyKeyJoin.java:297)
   at com.goldencode.p2j.persist.AbstractJoin.<init>(AbstractJoin.java:234)
   at com.goldencode.p2j.persist.DynamicLegacyKeyJoin.<init>(DynamicLegacyKeyJoin.java:142)
   at com.goldencode.p2j.persist.RandomAccessQuery.initialize(RandomAccessQuery.java:1261)
   at com.goldencode.p2j.persist.RandomAccessQuery.initialize(RandomAccessQuery.java:1165)
   at com.goldencode.p2j.persist.FindQuery.<init>(FindQuery.java:413)
Any hints how this should be solved are appreciated.
#522 Updated by Constantin Asofiei almost 4 years ago
Replacing Record with Persistable fixes the above.
#523 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin Asofiei wrote:
Replacing Record with Persistable fixes the above.
I am not sure that is correct. Which are the buffer and property being processed?
#524 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin Asofiei wrote:
Ovidiu, I have a problem with the relation generated between the Field and File meta tables. In the DMO index, I have this for MetaField in 11328:
[...]
In 4011a, I have this annotation:
[...]
In RelationComponent, which one is the local and which one is the foreign field? Shouldn't this be something like local="fileRecid", name="id"?
Interesting finding. I do not have this with the standard.df provided in the repository. I will use a newer one and reconvert the project.
#525 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin,
After running conversion with a newer metadata definition I have this:
ALL NATURAL JOINS:
------------------
_meta._file <---- _meta._field (many-to-one on [_file-recid])
In
@Relations(
{
@Relation(name = "metaFileRecord", type = "MANY_TO_ONE", database = "_meta", target = MetaFile.class, components =
{
@RelationComponent(name = "fileRecid", legacy = "_file-recid")
})
})
the name = "fileRecid" represents the local (relative to the MetaField class) property. The legacy attribute can probably be dropped, as it does not help very much.
The foreign table is mapped by MetaFile.class and by default the 'foreign' field is also _file-recid. What you have found is the case when this foreign field is different, being the recid of the foreign table - in this case, the primary key (which is, of course, id or recid). The problem is, this information is not available at this moment in standard.p2o (look for MANY_TO_ONE). I am a bit puzzled how the foreign component (name="id") was detected in dmoindex.xml.
#526 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:
Constantin Asofiei wrote:
Replacing Record with Persistable fixes the above.
I am not sure that is correct. Which are the buffer and property being processed?
The MetaFile and id (primary key) property.
#527 Updated by Ovidiu Maxiniuc almost 4 years ago
Ovidiu Maxiniuc wrote:
Found a broken feature I forgot about in 4011: sequences, see #3814-109.
We need to add back initialization support or ETF might not pass (IIRC, its framework uses sequences).
I have just committed r11464, which will allow the SequenceManager to initialize properly. To do this, the conversion creates a special enum class for each database, having as elements the identifiers of the sequences. Each of these is annotated with the new Sequence annotation, which contains the metadata information that was previously stored in dmo_index.xml.
An issue: the class name is _Sequences. I know that this might not be the best idea, but I needed a special hard-coded name to avoid collisions with normal tables. I was also biased a bit by the default sequence dump filename: _seqvals.d. Please let me know if you have other ideas.
#528 Updated by Constantin Asofiei almost 4 years ago
There is a problem in the fql.g parser:
select_expr
   :
   SELECT^
   (
      ( alias (COMMA! alias)* )
      |
      ( function | property (COMMA! (function | property))* )
   )
   ;
For a select t1.id, t2 from table1 as t1, table2 as t2, this matches t2 as a PROPERTY and not an ALIAS. Any explicit reason it was coded this way? I'd like to change it to be more generic, to allow alias/function/property combinations, like this:
select_expr
   :
   SELECT^
   ( alias | function | property (COMMA! (alias | function | property))* )
   ;
#529 Updated by Constantin Asofiei almost 4 years ago
Lots of regressions after changing it... so no good.
#530 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:
The foreign table is mapped by MetaFile.class and by default the 'foreign' field is also _file-recid. What you have found is the case when this foreign field is different, being the recid of the foreign table - in this case, the primary key (which is, of course, id or recid). The problem is, this information is not available at this moment in standard.p2o (look for MANY_TO_ONE). I am a bit puzzled how the foreign component (name="id") was detected in dmoindex.xml.
Please note that the FWD runtime expects the annotation to be like local="fileRecid", name="id". Are you working on this?
#531 Updated by Constantin Asofiei almost 4 years ago
4011a rev 11465 contains my FQL fixes plus misc. Please review.
#532 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin Asofiei wrote:
4011a rev 11465 contains my FQL fixes plus misc. Please review.
I have only a question, about the construction (and usage) of the new aliasStart in SQLQuery.list. It seems to me that it does exactly the same thing the old resOffset was doing dynamically. The difference is that aliasStart uses the ResultSet metadata, while resOffset was incremented based on rowStructure. BTW, when hydrating a record, the expColumnCount is still used from rowStructure in the new code.
Is there something I missed?
#533 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:
Constantin Asofiei wrote:
4011a rev 11465 contains my FQL fixes plus misc. Please review.
I have only a question, about the construction (and usage) of the new aliasStart in SQLQuery.list. It seems to me that it does exactly the same thing the old resOffset was doing dynamically. The difference is that aliasStart uses the ResultSet metadata, while resOffset was incremented based on rowStructure. BTW, when hydrating a record, the expColumnCount is still used from rowStructure in the new code.
Is there something I missed?
The problem is that the previous code assumed the SELECT was always retrieving full records. But you can have SQLs like select tt1.id, tt2.id, tt2.f1, tt2.f2, where tt1 has only the id field and tt2 is the full record.
aliasStart looks in the result set and determines the real location where each table starts. rowStructure knows only the length of a full record.
#534 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin Asofiei wrote:
The problem is the previous code assumed the SELECT was always retrieving full records. But you can havel SQLs like
select tt1.id, tt2.id, tt2.f1, tt2.f2
, wherett1
has only theid
field andtt2
is the full record.
aliasStart
looks in the result set and determines the real location where each table starts.rowStructure
knows only the length of a full record.
In this case rowStructure will look like: tt1:1, tt2:3. resOffset will start at 1 when hydrating tt1, then become 2 (incremented by the expColumnCount of tt1) for tt2.
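The incremental scheme Ovidiu describes can be sketched as follows. The names rowStructure, resOffset, and expColumnCount mirror the discussion, but this is a standalone illustration, not FWD's actual code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RowOffsets
{
   /**
    * Computes the 1-based JDBC column offset at which each table's columns begin,
    * given a row structure mapping each alias to its expected column count
    * (insertion order must match the order of tables in the SELECT list).
    */
   static Map<String, Integer> startOffsets(LinkedHashMap<String, Integer> rowStructure)
   {
      Map<String, Integer> offsets = new LinkedHashMap<>();
      int resOffset = 1;  // JDBC result set columns are 1-based
      for (Map.Entry<String, Integer> e : rowStructure.entrySet())
      {
         offsets.put(e.getKey(), resOffset);
         resOffset += e.getValue();  // advance by this table's expColumnCount
      }
      return offsets;
   }

   public static void main(String[] args)
   {
      // select tt1.id, tt2.id, tt2.f1, tt2.f2  ->  rowStructure is tt1:1, tt2:3
      LinkedHashMap<String, Integer> rowStructure = new LinkedHashMap<>();
      rowStructure.put("tt1", 1);
      rowStructure.put("tt2", 3);
      System.out.println(startOffsets(rowStructure));  // {tt1=1, tt2=2}
   }
}
```

This is the point of the argument above: the offsets are fully determined by rowStructure, with no need to consult ResultSetMetaData.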
#535 Updated by Eric Faulhaber almost 4 years ago
Constantin, we have biased as much as we can in this implementation toward speed, so we are avoiding things like ResultSetMetaData where possible, and instead relying on a well-known layout of the data array in the DMO, and of the column layout in a query. If we need to enhance the row structure concept, then I prefer we do that over querying the connection for metadata. OTOH, from what Ovidiu is saying, it sounds like we may already have the data we need in rowStructure.
#536 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Constantin, we have biased as much as we can in this implementation toward speed, so we are avoiding things like ResultSetMetaData where possible, and instead relying on a well-known layout of the data array in the DMO, and of the column layout in a query. If we need to enhance the row structure concept, then I prefer we do that over querying the connection for metadata. OTOH, from what Ovidiu is saying, it sounds like we may already have the data we need in rowStructure.
That code already has a call for RSMD, int columnCount = resultSet.getMetaData().getColumnCount();. Is rsmd.getTableName that expensive?
Looks like the problem may be that FqlToSqlConverter doesn't set the proper value. I'm fixing it.
#537 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
Ovidiu Maxiniuc wrote:
The foreign table is mapped by MetaFile.class and by default the 'foreign' field is also _file-recid. What you have found is the case when this foreign field is different, being the recid of the foreign table - in this case, the primary key (which is, of course, id or recid). The problem is, this information is not available at this moment in standard.p2o (look for MANY_TO_ONE). I am a bit puzzled how the foreign component (name="id") was detected in dmoindex.xml.
Please note that the FWD runtime expects the annotation to be like local="fileRecid", name="id". Are you working on this?
In 11328, there is this code in hbm_dmo_index.rules:
<rule>"_meta" == crtSchema
   <action>tpl.graft("attr_set_stub", null, propElem, "name", "id")</action>
   <action>execLib("putAttribute", propElem, "local", propRef.text)</action>
   <action on="false">
      attrSet = tpl.graft("attr_set_stub", null, propElem, "name", propRef.text)
   </action>
</rule>
This maps name to id and local to the field.
#538 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
That code already has a call for RSMD, int columnCount = resultSet.getMetaData().getColumnCount();. Is rsmd.getTableName that expensive?
I didn't realize we already were using RSMD. Honestly, I haven't profiled it, so I'm not sure about the performance of RSMD generally, I just want to be consistent in how we read the results. The main performance gain I found compared to the way we (i.e., Hibernate) used to read result sets comes from reading (and hydrating) by position instead of name. The fewer map lookups and string operations we can have at this low level that executes millions of times, the better.
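To illustrate why positional reading wins at this level, here is a standalone toy comparison (not FWD's actual hydration code): positional hydration is a straight array copy by index, while name-based hydration pays a hash plus string comparison for every column of every row.

```java
import java.util.HashMap;
import java.util.Map;

public class Hydration
{
   // Simulated result-set row: values laid out in a well-known column order.
   static final Object[] ROW = { 100L, "aaa", 42 };

   // Positional hydration: the DMO data array is filled by index, no lookups.
   static Object[] hydrateByPosition(Object[] row, int start, int count)
   {
      Object[] data = new Object[count];
      for (int i = 0; i < count; i++)
      {
         data[i] = row[start + i];  // direct array copy, no string comparisons
      }
      return data;
   }

   // Name-based hydration: each column requires a map lookup keyed by a string.
   static Object[] hydrateByName(Map<String, Object> row, String... columns)
   {
      Object[] data = new Object[columns.length];
      for (int i = 0; i < columns.length; i++)
      {
         data[i] = row.get(columns[i]);  // hash + string equality per column
      }
      return data;
   }

   public static void main(String[] args)
   {
      Object[] byPos = hydrateByPosition(ROW, 0, 3);
      Map<String, Object> named = new HashMap<>();
      named.put("id", 100L);
      named.put("f1", "aaa");
      named.put("f2", 42);
      Object[] byName = hydrateByName(named, "id", "f1", "f2");
      System.out.println(java.util.Arrays.equals(byPos, byName));  // true
   }
}
```

Both produce the same data array; the positional path just avoids the per-column name resolution that dominates when the loop runs millions of times.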
#539 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Constantin Asofiei wrote:
That code already has a call for RSMD, int columnCount = resultSet.getMetaData().getColumnCount();. Is rsmd.getTableName that expensive?
I didn't realize we already were using RSMD. Honestly, I haven't profiled it, so I'm not sure about the performance of RSMD generally; I just want to be consistent in how we read the results. The main performance gain I found compared to the way we (i.e., Hibernate) used to read result sets comes from reading (and hydrating) by position instead of name. The fewer map lookups and string operations we can have at this low level that executes millions of times, the better.
I agree. I'll back out that change and test again. The root cause may have been in hydrateRecord when count = 1, which is only possible when we have only the primary key.
#540 Updated by Constantin Asofiei almost 4 years ago
I've got an infinite loop because this code:
PreselectQuery.setResults(Results) line: 3493
PreselectQuery.resetResults() line: 4048
PreselectQuery.sessionEvent(SessionListener$Event) line: 2105
Persistence$Context.notifySessionEvent(SessionListener$Event) line: 4919
Persistence$Context.closeSessionImpl(boolean, boolean) line: 4854
Persistence$Context.closeSession(boolean) line: 4780
BufferManager.maybeCloseSessions() line: 2249
BufferManager.scopeFinished() line: 1112
TransactionManager.processScopeNotifications(TransactionManager$WorkArea, BlockDefinition, boolean) line: 7031
invalidates the results of an active query.
Should I call Persistence$Context.useSession for ScrollingResults, too? I see that we call this for ProgressiveResults. In any case, I'll try.
#541 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
Should I call Persistence$Context.useSession for ScrollingResults, too? I see that we call this for ProgressiveResults. In any case, I'll try.
This solved it.
Ovidiu, what's the difference between FqlType.DECIMAL and FqlType.DOUBLE? How are they mapped to java.sql.Type? NUMERIC or DECIMAL?
#542 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Constantin Asofiei wrote:
Should I call Persistence$Context.useSession for ScrollingResults, too? I see that we call this for ProgressiveResults. In any case, I'll try.
This solved it.
Sounds right. Every resource which needs a session to stay open to finish its work should bracket that work with {use|release}Session.
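The {use|release}Session bracketing amounts to a balanced use count protected by try/finally. The sketch below is a minimal standalone mock of that pattern; the Context class here is invented for illustration and is not FWD's Persistence$Context:

```java
public class SessionBracket
{
   // Minimal mock: the session may only close while the use count is zero.
   static class Context
   {
      int useCount = 0;
      boolean sessionOpen = true;

      void useSession()     { useCount++; }
      void releaseSession() { if (--useCount == 0) maybeClose(); }
      void maybeClose()     { sessionOpen = useCount > 0; }
   }

   /** A resource brackets its work so the session cannot be closed underneath it. */
   static void scrollResults(Context ctx, Runnable work)
   {
      ctx.useSession();
      try
      {
         work.run();
      }
      finally
      {
         ctx.releaseSession();  // always balanced, even if work throws
      }
   }

   public static void main(String[] args)
   {
      Context ctx = new Context();
      scrollResults(ctx, () -> System.out.println("during work, open: " + ctx.sessionOpen));
      System.out.println("after work, open: " + ctx.sessionOpen);
   }
}
```

An unbalanced bracket (a use without a matching release, or vice versa) is exactly the kind of bug that produces premature session closes like the one in the stack trace above, so the try/finally shape is the important part.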
#543 Updated by Constantin Asofiei almost 4 years ago
11467 contains the ScrollingResults fix, a rollback of the SqlQuery RSMD usage, and another fix in FqlType.
#544 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin Asofiei wrote:
Ovidiu, what's the difference between FqlType.DECIMAL and FqlType.DOUBLE? How are they mapped to java.sql.Type? NUMERIC or DECIMAL?
Actually, the FqlType values were not created to be related to java.sql.Type, but to identify the right signature for UDFs when where predicates are preprocessed. This is the old HQLTypes, which was only renamed. I am not sure where this issue arises.
#545 Updated by Constantin Asofiei almost 4 years ago
A test related to NO-UNDO temp-tables (not related to STOP, replace it with UNDO, LEAVE and there is the same behavior):
def temp-table tt1 no-undo field f1 as int.

do on stop undo, leave:
   do transaction:
      create tt1.
      tt1.f1 = 10.
      stop.
   end.
end.

for each tt1:
   message tt1.f1.
end.
#546 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
A test related to NO-UNDO temp-tables (not related to STOP, replace it with UNDO, LEAVE and there is the same behavior):
[...]
Nevermind, this test is a FWD conversion problem, not related to 4011a. There's a chance it may be fixed in 4231b.
#547 Updated by Constantin Asofiei almost 4 years ago
Eric, the NO-UNDO issue is related to passing the temp-table as a parameter, where you end up with two records like these in the redoable list, in order:
tt1:100 RecordState{NEW NOUNDO NEEDS_VALIDATION } dirty: {0, 1, 2, 3, 4, 5, 6, 7} data: {aaa, bbb, ccc, , , ddd, , } multipex: 920
tt1:100 RecordState{NEW NOUNDO } dirty: {0, 1, 2, 3, 4, 5, 6, 7} data: {aaa, bbb, ccc, , , ddd, , } multipex: 920
The code is in the TemporaryBuffer.copyAllRows method. We call dstBuf.validate(dstRec); on line 2616, which ends up calling Session.save(T, updateDmoState) with false for updateDmoState; this is its javadoc:
* @param updateDmoState
*        If {@code true} its internal state information is updated. This is the standard
*        call. When the method is called for validation, use {@code false} as the database
*        operation will be rolled back anyway, so the state must not be altered.
So, why do we execute this code for NO-UNDO tables, if the insert will be rolled back?
if (wasDirty)
{
   if (savepointManager != null)
   {
      if (dmo.checkState(NOUNDO)) // this part
      {
         if (wasNew)
         {
            savepointManager.noUndoInsert(dmo);
         }
         else
         {
            savepointManager.noUndoUpdate(dmo);
         }
      }
      else if (wasNew)
      {
         dmo.logInsert(this);
      }
      else
      {
         // TODO: needed?
         dmo.logUpdate(this);
      }
   }
My problem is that copyAllRows has a secondary persistence.save(dstRec, id, true); on line 2667, which in turn will log the same action again, for an insert into a NO-UNDO temp-table. I know we need both the validate call and the persistence.save, to batch the inserts... but I'm looking for a way to not log NO-UNDO redo ops for this case.
#548 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Eric, the NO-UNDO issue is related to passing the temp-table as a parameter, where you end up with two records like these in the redoable list, in order:
[...]
The code is in the TemporaryBuffer.copyAllRows method. We call dstBuf.validate(dstRec); on line 2616, which ends up calling Session.save(T, updateDmoState) with false for updateDmoState; this is its javadoc:
[...]
So, why do we execute this code for NO-UNDO tables, if the insert will be rolled back?
[...]
My problem is that copyAllRows has a secondary persistence.save(dstRec, id, true); on line 2667, which in turn will log the same action again, for an insert into a NO-UNDO temp-table. I know we need both the validate call and the persistence.save, to batch the inserts... but I'm looking for a way to not log NO-UNDO redo ops for this case.
It was an oversight on my part not to differentiate the rollback that Validation does when it is set to validate only (vs. validate and flush) for NO-UNDO temp-tables. The validate-only save, which must be rolled back, should not log the SQLRedo operations. I'm not sure updateDmoState is the right cue to use to make that decision, though. It is set to false for more cases than this validate-only save in Validation. We may need something more specific to the validate-only case.
#549 Updated by Constantin Asofiei almost 4 years ago
Eric, do you have this case in your current flush/validation work?
create person.        // NEW NEEDS_VALIDATION
message "a".
person.emp-num = 1.   // NEW NEEDS_VALIDATION CHANGED
message "b".
person.site-id = 2.   // FWD validates here
message "c".          // OE validates here (on 'pop scope')
The person.ssn is mandatory, and FWD abends on person.site-id = 2.
#550 Updated by Eric Faulhaber almost 4 years ago
Not exactly. I will add it, thanks.
#551 Updated by Constantin Asofiei almost 4 years ago
I've moved to another test, as the current one is blocked by the flush/mandatory issue. Now I've reached an extent problem, an NPE at RecordBuffer$Handler.invoke:12299:
DmoMeta dmoInfo = DmoMetadataManager.getDmoInfo(dmoClass);
Property fi = dmoInfo.getFieldInfo(property);
For an extent property like 'extProp', dmoInfo has them inlined like extProp1, extProp2, etc., one for each index. So dmoInfo.getFieldInfo returns null. How should these be handled?
#552 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
I've moved to another test, as the current one is blocked by the flush/mandatory issue. Now I've reached an extent problem, an NPE at RecordBuffer$Handler.invoke:12299:
[...]
For an extent property like 'extProp', dmoInfo has them inlined like extProp1, extProp2, etc., one for each index. So dmoInfo.getFieldInfo returns null. How should these be handled?
I'm a little confused... I found a case where DmoMeta has the extent property 'unexpanded', also. For now I've added a getFieldInfo(String, Integer) which, if it can't find it, will look for the extProp<index> version.
#553 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
Constantin Asofiei wrote:
I've moved to another test, as the current one is blocked by the flush/mandatory issue. Now I've reached an extent problem, an NPE at RecordBuffer$Handler.invoke:12299:
[...]
For an extent property like 'extProp', dmoInfo has them inlined like extProp1, extProp2, etc., one for each index. So dmoInfo.getFieldInfo returns null. How should these be handled?
I'm a little confused... I found a case where DmoMeta has the extent property 'unexpanded', also. For now I've added a getFieldInfo(String, Integer) which, if it can't find it, will look for the extProp<index> version.
For this, the fix is in 11470 - I've added the getFieldInfo(String, Integer) which looks for the denormalized property. My problem: there may be other parts which look up the property by name while the property is denormalized.
I've also fixed some other issues in FQL2SQL, related to nested SUBSELECT. I think FQL2SQL is OK now.
#554 Updated by Constantin Asofiei almost 4 years ago
Another issue which I don't understand: the SQLQuery's JDBC statement and ResultSet are never closed. Why? Who should close them?
At least for SQLQuery.scroll, these should remain open until the results (on the FWD side) are closed. I think this is the reason for the memory leak I'm seeing.
#555 Updated by Eric Faulhaber almost 4 years ago
AbstractQuery has a cleanup method; however, the use of SQLQuery and even Query is somewhat internal/hidden within the Persistence API. Perhaps we need a way to connect the AbstractQuery cleanup to these underlying implementation objects. How were we handling cleanup of the Hibernate query resources?
#556 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
How were we handling cleanup of the Hibernate query resources?
I think Hibernate was taking care of closing the JDBC resources (for Query.list Hibernate calls). For FWD's ScrollingResults, we were using Hibernate's ScrollableResults, and FWD closed them when it closed ScrollingResults.
#557 Updated by Ovidiu Maxiniuc almost 4 years ago
The unclosed objects are most likely the reason for the memory issues. I tried to close some of them, but it was really too early.
The SQLQuery has a close method, but it is invoked only on errors. However, it will not close the ResultSets, and I think this is the big issue. This needs to be done ASAP, in the methods that use them, when possible. When this is not possible, the wrapper results or the new owner should do it.
#558 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:
The unclosed objects are most likely the reason for the memory issues. I tried to close some of them, but it was really too early.
The SQLQuery has a close method, but it is invoked only on errors. However, it will not close the ResultSets, and I think this is the big issue. This needs to be done ASAP, in the methods that use them, when possible. When this is not possible, the wrapper results or the new owner should do it.
Besides scroll, all the APIs need to close the stmt - the ResultSet will be closed automatically by the JDBC Statement. For scroll, yes, the delegate will close the JDBC Statement. This solves the problem.
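Per the JDBC specification, closing a Statement also closes its current ResultSet, which is why closing the stmt is sufficient for the non-scroll APIs. A minimal standalone sketch (stand-in classes, not FWD or JDBC types) showing the try-with-resources idiom and its innermost-first close order:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: try-with-resources guarantees close() in reverse declaration
// order, mirroring how a JDBC ResultSet is released before (or along with)
// its Statement. Stand-in classes record the close order to illustrate it.
public class CloseOrderDemo
{
   static final List<String> closed = new ArrayList<>();

   static class Stmt implements AutoCloseable
   {
      public void close() { closed.add("statement"); }
   }

   static class Rs implements AutoCloseable
   {
      public void close() { closed.add("resultset"); }
   }

   public static void main(String[] args)
   {
      try (Stmt stmt = new Stmt(); Rs rs = new Rs())
      {
         // iterate results, hydrate DMOs ...
      }
      // resources closed innermost-first: resultset, then statement
      System.out.println(closed);
   }
}
```

The same shape applies directly to java.sql.Statement/ResultSet for list-style APIs; scroll-style APIs instead hand ownership of the statement to the wrapper that outlives the call.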
#559 Updated by Eric Faulhaber almost 4 years ago
I think we know when we are done with the statement in all cases, but do we cache PreparedStatement
instances to prevent having to prepare the same statements on the database server side multiple times? Or is the caching done at a different level? There is a balance to be struck between caching and memory consumption.
As for ResultSet
, I think we can close these for any case where we know we are done iterating the ResultSet
and hydrating DMOs (e.g., list
, uniqueResult
). Scrollable results need some external input to close the result set, but we should use the same mechanism we were using before with Hibernate.
#560 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
I think we know when we are done with the statement in all cases, but do we cache PreparedStatement instances to prevent having to prepare the same statements on the database server side multiple times?
I don't see any cache of PreparedStatement in SQLQuery. It calls stmt = conn.prepareStatement for each API.
#561 Updated by Ovidiu Maxiniuc almost 4 years ago
Eric Faulhaber wrote:
I think we know when we are done with the statement in all cases, but do we cache PreparedStatement instances to prevent having to prepare the same statements on the database server side multiple times? Or is the caching done at a different level? There is a balance to be struck between caching and memory consumption.
No, the PreparedStatement instances are not cached, only their statement strings. Caching them would complicate resource management further. OTOH, closing their session will make them invalid, I think.
#562 Updated by Eric Faulhaber almost 4 years ago
OK, I think this is what Hibernate did as well. We may come back to PreparedStatement caching as a potential optimization later.
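One possible shape for that future optimization (hypothetical, not FWD code): a small per-connection LRU cache keyed by SQL text. The values would be java.sql.PreparedStatement instances; a generic value type stands in here so the sketch runs without a database. A real implementation would close a statement on eviction and discard the whole cache when its connection/session closes, which is exactly the invalidation concern noted above.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical per-connection statement cache sketch: LinkedHashMap in
// access order gives LRU eviction; keys are SQL strings, values would be
// PreparedStatement instances in a real implementation.
public class StatementCache<V>
{
   private final int capacity;
   private final Map<String, V> lru;

   public StatementCache(int capacity)
   {
      this.capacity = capacity;
      this.lru = new LinkedHashMap<String, V>(16, 0.75f, true)
      {
         protected boolean removeEldestEntry(Map.Entry<String, V> eldest)
         {
            boolean evict = size() > StatementCache.this.capacity;
            // real impl: if (evict) ((PreparedStatement) eldest.getValue()).close();
            return evict;
         }
      };
   }

   /** Returns the cached statement for this SQL text, or null on a miss. */
   public V get(String sql) { return lru.get(sql); }

   public void put(String sql, V stmt) { lru.put(sql, stmt); }

   public int size() { return lru.size(); }
}
```

A bounded capacity is what keeps the caching/memory trade-off explicit: frequently reused statements stay prepared, rarely used ones are released.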
#563 Updated by Constantin Asofiei almost 4 years ago
There's something wrong in TempTableHelper when the constraint name is computed:
long longFk = compositeTableName.hashCode();
longFk <<= 32;
longFk |= dmo.getSqlTableName().hashCode();
There's a chance of overflow here, leading to collisions: (-498895165l << 32) | -865345852l results in -865345852l (the dmo.getSqlTableName().hashCode() value), because the negative int hash is sign-extended to all ones in the high 32 bits before the OR. Considering that compositeTableName is something like tt1__10, and dmo.getSqlTableName() is tt1, why not just use the 32-bit hashcode from compositeTableName?
#564 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
There's something wrong in TempTableHelper when the constraint name is computed:
[...]
There's a chance of overflow here, leading to collisions ((-498895165l << 32) | -865345852l results in -865345852l, the dmo.getSqlTableName().hashCode() value). Considering that compositeTableName is something like tt1__10, and dmo.getSqlTableName() is tt1, why not just use the 32-bit hashcode from compositeTableName?
Or better yet, why use a hashcode for the FK? Just as a convention? This name needs to be unique, and we have only one FK relation per composite table; so why not name it FK_tt1__10
? This would make it even easier to identify errors... instead of looking at a random string.
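For reference, the collision can be reproduced and repaired in isolation. The sketch below uses the hash values quoted above; fixed() shows the usual repair of masking the sign-extended low half with 0xFFFFFFFFL before the OR, independent of whether the FK_tt1__10 naming is adopted instead:

```java
// Demonstrates the collision noted above: OR-ing a sign-extended negative
// int hashCode() wipes the high 32 bits of the combined key. Masking the
// low half keeps both hashes intact.
public class FkHashDemo
{
   // the original combination, as in TempTableHelper
   public static long broken(int hi, int lo)
   {
      long fk = hi;
      fk <<= 32;
      fk |= lo;   // lo is sign-extended: high 32 bits become all ones
      return fk;
   }

   // bitwise repair: treat the low half as unsigned before combining
   public static long fixed(int hi, int lo)
   {
      return ((long) hi << 32) | (lo & 0xFFFFFFFFL);
   }

   public static void main(String[] args)
   {
      System.out.println(broken(-498895165, -865345852)); // -865345852: hi lost
      System.out.println(fixed(-498895165, -865345852));  // both halves kept
   }
}
```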
#565 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Or better yet, why use a hashcode for the FK? Just as a convention? This name needs to be unique, and we have only one FK relation per composite table; so why not name it FK_tt1__10? This would make it even easier to identify errors... instead of looking at a random string.
I agree with this approach.
#566 Updated by Ovidiu Maxiniuc almost 4 years ago
This was inspired by what I saw Hibernate generating. The most significant 32 bits came naturally and will match those generated by Hibernate. I was not able to identify the LSB, so I added the sqlTableName's hash. This approach gives an exact size for the FK name.
We could inline the fk string as:
Integer.toHexString(compositeTableName.hashCode()).toUpperCase() + Integer.toHexString(dmo.getSqlTableName().hashCode()).toUpperCase()
to avoid bitwise issues, but if you think compositeTableName is fine, I am OK with it.
#567 Updated by Eric Faulhaber almost 4 years ago
Constantin, I am blocked by this as well. Please commit the fix as soon as it is ready.
#568 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Constantin, I am blocked by this as well. Please commit the fix as soon as it is ready.
See 4011a rev 11472.
#569 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
Ovidiu Maxiniuc wrote:
The unclosed objects are most likely the reason for the memory issues. I tried to close some of them, but it was really too early.
The SQLQuery has a close method, but it is invoked only on errors. However, it will not close the ResultSets, and I think this is the big issue. This needs to be done ASAP, in the methods that use them, when possible. When this is not possible, the wrapper results or the new owner should do it.
Besides scroll, all the APIs need to close the stmt - the ResultSet will be closed automatically by the JDBC Statement. For scroll, yes, the delegate will close the JDBC Statement. This solves the problem.
Fixed in 4011a rev 11473 - please review.
#570 Updated by Constantin Asofiei almost 4 years ago
There's a bug in AnnotatedAst.iterator when trying to iterate starting from the BOOL_TRUE child (of the WHERE clause), in this AST:
select statement [SELECT_STMT]:null @0:0
   select [SELECT]:null @1:1
      tt1 [ALIAS]:null @1:8
   from [FROM]:null @1:14
      tt1__Impl__ [DMO]:null @1:19
      tt1 [ALIAS]:null @1:36
   where [WHERE]:null @1:42
      true [BOOL_TRUE]:null @1:49
   order [ORDER]:null @1:55
      upper [FUNCTION]:null @1:64
         rtrim [FUNCTION]:null @1:70
            tt1 [ALIAS]:null @1:76
               f1 [PROPERTY]:null @1:81
      asc [ASC]:null @1:93
There's nowhere to descend, nor another sibling. When iter.hasNext() is called after the BOOL_TRUE was processed, it abends with EmptyStackException in notifyListenerLevelChanged, as stack is empty. Greg, I'm inclined to protect with if (!stack.isEmpty()) stack.pop(), as in this case there was no sibling.
#571 Updated by Eric Faulhaber almost 4 years ago
AFAIK, we didn't change this in 4011a. I wonder why we are hitting this now.
#572 Updated by Ovidiu Maxiniuc almost 4 years ago
Is it possible that the WHERE node was dropped in this case?
#573 Updated by Eric Faulhaber almost 4 years ago
Yes, if we can evaluate static sub-expressions during conversion, we roll them up to their result and replace the sub-expression with that.
#574 Updated by Greg Shah almost 4 years ago
There's nowhere to descend, nor another sibling. When iter.hasNext() is called after the BOOL_TRUE was processed, it abends with EmptyStackException in notifyListenerLevelChanged, as stack is empty. Greg, I'm inclined to protect if (!stack.isEmpty()) stack.pop(), as in this case there was no sibling.
I've been reviewing the AnnotatedAst.iterator() code over and over. I can't see how your situation can occur unless the AST is being edited during the iteration. If that is happening, then the traversal of node parents may get out of sync with the state of the parents list/stack. If there is no editing happening, then this error should not occur.
Your suggested change is OK, but I don't think it really is a solution. The real problem is something else. Normal traversal should not be able to cause this issue.
#575 Updated by Greg Shah almost 4 years ago
BTW, in FWD (both 4011 and trunk) only the RAQ will consult the dirty database for possibly leaked records. The 4GL will find the leaked records in FOR statements, too, not only in FIND. I did not do anything in this regard, as the current task is to bring 4011 on par functionally with the trunk.
The FOR queries not honoring the leaked records is a known issue. I think Eric has deliberately avoided that case and so far we don't have a customer app that requires it.
Since we are reducing the usage of this (really, just for the one known ChUI application n___s
table), I think there is even less reason to "upgrade" this capability.
#576 Updated by Constantin Asofiei almost 4 years ago
4011a rev 11475 fixes ChangeSet.ensureCapacity.
#577 Updated by Constantin Asofiei almost 4 years ago
A variation of #4011-549 with the validation:
do transaction:
   create person.       // NEW NEEDS_VALIDATION
   message "a".
   person.emp-num = 1.  // NEW NEEDS_VALIDATION CHANGED
   message "b".
   person.site-id = 2.  // FWD validates here
   message "c".
   message "1".
   if can-find(person where person.emp-num = 1) then message "2".
   message "3".
   // undo, leave. // OE validates here (on 'pop scope')
end.
The CAN-FIND
sees this record, which is in an 'invalid' state. Do we need the dirty database for this case?
#578 Updated by Constantin Asofiei almost 4 years ago
4011a rev 11479 adds the AnnotatedAst.iterator()
fix (#4011-574) and refactors rowStructure to allow for self-join.
#579 Updated by Constantin Asofiei almost 4 years ago
4011a rev 11480 refactors SortCriterion to allow indexed properties in the sort clause. FQL2SQL will take care of creating the join with the composite table, if needed.
HQLPreprocessor doesn't need to be aware of indexed extent fields, as FQL2SQL is the authority to solve these.
Please review.
#580 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
4011a rev 11480 refactors SortCriterion to allow indexed properties in the sort clause. FQL2SQL will take care of creating the join with the composite table, if needed.
HQLPreprocessor doesn't need to be aware of indexed extent fields, as FQL2SQL is the authority to solve these.
Please review.
My understanding is that the changes are not about changing the behavior of SortCriterion
, but rather they are about making it work with the new, underlying ORM implementation. Read in that context, they make sense.
#581 Updated by Greg Shah almost 4 years ago
issue in AnnotatedAst.iterator(), when walking from an AST with a single child. Need Greg to check if I'm missing something.
I've reviewed AnnotatedAst in rev 11479. As mentioned in #4011-574, the change is OK but I don't think it is the real problem. The descent to the single child would set jump == -1 and add the parent into the parent list. Then the notify should have pushed onto the stack on line 2646. So when we iterate through the parent list in the ascent, we should have 1 parent in the list and 1 item on the stack.
#582 Updated by Eric Faulhaber almost 4 years ago
Greg Shah wrote:
issue in AnnotatedAst.iterator(), when walking from an AST with a single child. Need Greg to check if I'm missing something.
I've reviewed AnnotatedAst in rev 11479. As mentioned in #4011-574, the change is OK but I don't think it is the real problem. The descent to the single child would set jump == -1 and add the parent into the parent list. Then the notify should have pushed onto the stack on line 2646. So when we iterate through the parent list in the ascent, we should have 1 parent in the list and 1 item on the stack.
This really feels like a situation where the AST is being modified during its iteration.
#583 Updated by Eric Faulhaber almost 4 years ago
I've committed revision 11481, which minimizes the number of database savepoints being set for application sub-transactions. We now lazily set savepoints only at the point that an undoable change is about to be made, to prevent unnecessary round trips to the database to set and release "empty" savepoints. So far, this only has been tested with read-heavy tests and needs more testing with change-heavy tests.
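The lazy pattern can be sketched roughly as follows (hypothetical names, not the actual FWD savepoint manager API). The key point is that Connection.setSavepoint() is deferred until the first undoable change in the scope, so a read-only sub-transaction costs no round trips; a counter stands in for the JDBC connection so the sketch runs standalone:

```java
// Sketch of lazy savepoint setting: opening a sub-transaction scope only
// records that a savepoint MAY be needed; the actual database round trip
// (e.g. conn.setSavepoint()) happens at most once, on the first undoable
// change. Read-only scopes never touch the database.
public class LazySavepointScope
{
   private final Runnable setSavepoint;   // e.g. () -> conn.setSavepoint()
   private boolean savepointSet = false;

   public LazySavepointScope(Runnable setSavepoint)
   {
      this.setSavepoint = setSavepoint;
   }

   /** Called before any undoable change; sets the savepoint at most once. */
   public void beforeUndoableChange()
   {
      if (!savepointSet)
      {
         setSavepoint.run();   // single round trip, only when needed
         savepointSet = true;
      }
   }

   /** Scope exit can skip releaseSavepoint() entirely when this is false. */
   public boolean wasSavepointNeeded() { return savepointSet; }
}
```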
#584 Updated by Constantin Asofiei almost 4 years ago
I've seen cases where we have a select like select tt1.*, tt2.recid, where tt2 is a standalone table or a composite table for tt1. But in all cases, tt2 requires a new SELECT to hydrate the record. Is there any case where we would not require hydrating the record? I would expect a CAN-FIND, but besides that? Validation SQLs are real SQLs sent directly through JDBC, and do not involve orm.SqlQuery, right?
I'm inclined to change FQL2SQL to always expand tt2.recid, so we can hydrate the record in one go.
#585 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
I've seen cases where we have a select like select tt1.*, tt2.recid, where tt2 is a standalone table or a composite table for tt1. But in all cases, tt2 requires a new SELECT to hydrate the record. Is there any case where we would not require hydrating the record? I would expect a CAN-FIND, but besides that? Validation SQLs are real SQLs sent directly through JDBC, and do not involve orm.SqlQuery, right?
I'm inclined to change FQL2SQL to always expand tt2.recid, so we can hydrate the record in one go.
I think this was supposed to let the session-level cache do its job, but the orm.Session instances are short lived... so this is (almost) useless, and we end up hitting the database to load the record.
#586 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Constantin Asofiei wrote:
I've seen cases where we have a select like select tt1.*, tt2.recid, where tt2 is a standalone table or a composite table for tt1. But in all cases, tt2 requires a new SELECT to hydrate the record. Is there any case where we would not require hydrating the record? I would expect a CAN-FIND, but besides that? Validation SQLs are real SQLs sent directly through JDBC, and do not involve orm.SqlQuery, right?
Right
I'm inclined to change FQL2SQL to always expand tt2.recid, so we can hydrate the record in one go. I think this was supposed to let the session-level cache do its job, but the orm.Session instances are short lived... so this is (almost) useless, and we end up hitting the database to load the record.
This is not necessarily the case. Consider temp-tables loaded in a persistent procedure. These can have a very long lifespan. There can be other cases as well.
There is a significant advantage to avoiding record hydration by getting an existing record from the cache instead, even when the result set has all the data. This was the only way I could get my early tests using SQL to beat Hibernate's performance.
Beyond this, we must never have the same record be represented by different DMO instances in different buffers in the same user session. This is a hard and fast rule around which the persistence code was written for years. We must not change this.
I've been considering divorcing the cache from the session object to get more advantage from the cache, for cases where we have high session turnover. We should do that, rather than bypassing the cache and creating new DMO instances during hydration.
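One possible shape for such a session-independent cache (purely illustrative; names and types are assumptions, not the FWD API): a context-scoped map keyed by table and primary key, where computeIfAbsent preserves the one-DMO-instance-per-record invariant even under concurrent hydration. Eviction on delete and invalidation on rollback are deliberately omitted from the sketch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical context-scoped record cache that outlives individual
// orm.Session instances. computeIfAbsent is atomic per key, so two
// concurrent lookups of the same record always yield the same DMO
// instance, and hydration runs only on a true cache miss.
public class ContextRecordCache<D>
{
   private final Map<String, D> cache = new ConcurrentHashMap<>();

   /** Returns the cached DMO, hydrating (via the supplier) only on a miss. */
   public D load(String table, long recid, Supplier<D> hydrate)
   {
      return cache.computeIfAbsent(table + ':' + recid, k -> hydrate.get());
   }

   public int size() { return cache.size(); }
}
```

The single-instance guarantee is the design driver here: even if the result set already carries all the columns, the cache lookup must win over re-hydration.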
#587 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
I've been considering divorcing the cache from the session object to get more advantage from the cache, for cases where we have high session turnover. We should do that, rather than bypassing the cache and creating new DMO instances during hydration.
Eric, OK, thanks for the explanation. My problem is with short-lived Session objects whose cache is almost useless - I was thinking to change FQL2SQL so that it receives a set of 'tables with at least a record in the cache', so that if we are retrieving a record from a table with nothing cached, we fetch the full record.
But your solution overrides mine. So, if we move the cache, where do we move it? To the Persistence instance?
#588 Updated by Constantin Asofiei almost 4 years ago
- File ASTIteratorTest.java added
Greg Shah wrote:
issue in AnnotatedAst.iterator(), when walking from an AST with a single child. Need Greg to check if I'm missing something.
I've reviewed AnnotatedAst in rev 11479. As mentioned in #4011-574, the change is OK but I don't think it is the real problem. The descent to the single child would set jump == -1 and add the parent into the parent list. Then the notify should have pushed onto the stack on line 2646. So when we iterate through the parent list in the ascent, we should have 1 parent in the list and 1 item on the stack.
I have a standalone recreate in a .java. See attached. The AST is not changed, but what matters is that we have a listener.
#589 Updated by Eric Faulhaber almost 4 years ago
- Related to Feature #4681: prepared statement caching/pooling added
#590 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
A variation of #4011-549 with the validation:
[...]
The CAN-FIND sees this record, which is in an 'invalid' state. Do we need the dirty database for this case?
Constantin, is this the entire test case? I ask because the presence of the DO TRANSACTION block suggests to me there may be code outside this block.
#591 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Constantin Asofiei wrote:
A variation of #4011-549 with the validation:
[...]
The CAN-FIND sees this record, which is in an 'invalid' state. Do we need the dirty database for this case?
Constantin, is this the entire test case? I ask because the presence of the DO TRANSACTION block suggests to me there may be code outside this block.
Yes, that's the entire testcase. The issue here is that OE doesn't flush the record to the database before the CAN-FIND looks for it.
#592 Updated by Constantin Asofiei almost 4 years ago
For a WHERE clause which has subscripted fields like tt1.f1[?], I need to inject the tt1__10.list__index = ? into the same expression where the tt1.f1[?] resides, so that the argument order is preserved. But this is proving difficult in FQL2SQL's generateExpression, as the tt1.f1[?] may be part of any expression. If I add the list__index condition at the beginning of the WHERE clause, then I need to rewrite the argument order to match this location. I think this would be easier than rewriting the tree and ensuring all cases are covered there, so I'm going with this approach.
#593 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
For a WHERE clause which has subscripted fields like tt1.f1[?], I need to inject the tt1__10.list__index = ? into the same expression where the tt1.f1[?] resides, so that the argument order is preserved. But this is proving difficult in FQL2SQL's generateExpression, as the tt1.f1[?] may be part of any expression. If I add the list__index condition at the beginning of the WHERE clause, then I need to rewrite the argument order to match this location. I think this would be easier than rewriting the tree and ensuring all cases are covered there, so I'm going with this approach.
Actually rewriting the AST works, but not while iterating - just save the nodes you want to rewrite and do the rewrite after the iteration is finished. I'm testing these changes.
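The collect-then-rewrite approach can be illustrated with a minimal sketch (a plain list stands in for the FQL AST; all names are illustrative, not FWD code): matches are only recorded during the walk, and mutation happens once the iteration has finished.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch: structural mutation while an iterator is live is unsafe in
// general, so matching nodes are only recorded during the walk and the
// rewrite (here, appending a list__index condition) runs afterwards.
public class DeferredRewrite
{
   public static List<String> rewrite(List<String> nodes)
   {
      List<Integer> toRewrite = new ArrayList<>();

      Iterator<String> iter = nodes.iterator();
      for (int i = 0; iter.hasNext(); i++)
      {
         if (iter.next().contains("[?]"))   // e.g. a subscripted field
         {
            toRewrite.add(i);               // record only; do not mutate yet
         }
      }

      for (int idx : toRewrite)             // safe: iteration is finished
      {
         nodes.set(idx, nodes.get(idx) + " and list__index = ?");
      }
      return nodes;
   }
}
```

Because the condition is appended at the position of the matching node rather than prepended to the whole clause, the positional argument order is preserved, which is the point of the approach described above.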
#594 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Constantin Asofiei wrote:
For a WHERE clause which has subscripted fields like tt1.f1[?], I need to inject the tt1__10.list__index = ? into the same expression where the tt1.f1[?] resides, so that the argument order is preserved. But this is proving difficult in FQL2SQL's generateExpression, as the tt1.f1[?] may be part of any expression. If I add the list__index condition at the beginning of the WHERE clause, then I need to rewrite the argument order to match this location. I think this would be easier than rewriting the tree and ensuring all cases are covered there, so I'm going with this approach.
Actually rewriting the AST works, but not while iterating - just save the nodes you want to rewrite and do the rewrite after the iteration is finished. I'm testing these changes.
Is this a different issue than the one addressed by the use of the ParameterIndices class in HQLPreprocessor? I seem to recall having dealt with something very similar back when I wrote these classes, but I don't recall if your join case was a part of the problem set I was solving.
#595 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Is this a different issue than the one addressed by the use of the ParameterIndices class in HQLPreprocessor? I seem to recall having dealt with something very similar back when I wrote these classes, but I don't recall if your join case was a part of the problem set I was solving.
Yes, it is a different one. FQL2SQL is responsible for extracting the joins and the list__index condition. For a subscript given as a substitution parameter, FQL2SQL needs to preserve the location of the list__index to match it with the argument. I've fixed this (and inserts into a table with only extent fields) in 4011a rev 11483.
#596 Updated by Eric Faulhaber almost 4 years ago
- Related to Feature #4690: re-integrate dirty share logic into new persistence implementation added
#597 Updated by Eric Faulhaber almost 4 years ago
As part of cleaning up the branch, we should get rid of these ANTLR warnings:
[ant:antlr] fql.g:864:7: warning:nondeterminism between alts 1 and 3 of block upon
[ant:antlr] fql.g:864:7: k==1:SYMBOL
[ant:antlr] fql.g:864:7: k==2:FROM
[ant:antlr] fql.g:864:7: k==3:FROM,SYMBOL,LBRACKET
[ant:antlr] fql.g:866:18: warning:nondeterminism between alts 1 and 3 of block upon
[ant:antlr] fql.g:866:18: k==1:SYMBOL
[ant:antlr] fql.g:866:18: k==2:FROM,COMMA
[ant:antlr] fql.g:866:18: k==3:FROM,COMMA,SYMBOL,LBRACKET
[ant:antlr] fql.g:877: warning:nondeterminism between alts 1 and 2 of block upon
[ant:antlr] fql.g:877: k==1:JOIN
[ant:antlr] fql.g:877: k==2:SYMBOL
[ant:antlr] fql.g:877: k==3:AS
[ant:antlr] fql.g:877: warning:nondeterminism upon
[ant:antlr] fql.g:877: k==1:CROSS,FULL,INNER,JOIN,LEFT,OUTER,RIGHT
[ant:antlr] fql.g:877: k==2:JOIN,OUTER,SYMBOL
[ant:antlr] fql.g:877: k==3:AS,JOIN,ON,SYMBOL
[ant:antlr] fql.g:877: between alt 1 and exit branch of block
[ant:antlr] fql.g:877: warning:nondeterminism upon
[ant:antlr] fql.g:877: k==1:JOIN
[ant:antlr] fql.g:877: k==2:SYMBOL
[ant:antlr] fql.g:877: k==3:AS,DOT,PROPERTY,LBRACKET
[ant:antlr] fql.g:877: between alt 2 and exit branch of block
[ant:antlr] fql.g:878: warning:nondeterminism upon
[ant:antlr] fql.g:878: k==1:COMMA
[ant:antlr] fql.g:878: k==2:SYMBOL
[ant:antlr] fql.g:878: k==3:EOF,AND,AS,CROSS,ELSE,END,ESCAPE,FULL,IN,INNER,IS,JOIN,LEFT,OR,ORDER,OUTER,RIGHT,SINGLE,THEN,WHERE,EQUALS,NOT_EQ,COMMA,RPARENS,GT,LT,GTE,LTE,LIKE,CONCAT,PLUS,MINUS,MULTIPLY,DIVIDE
[ant:antlr] fql.g:878: between alt 1 and exit branch of block
[ant:antlr] fql.g:878: warning:nondeterminism between alts 1 and 2 of block upon
[ant:antlr] fql.g:878: k==1:JOIN
[ant:antlr] fql.g:878: k==2:SYMBOL
[ant:antlr] fql.g:878: k==3:AS
[ant:antlr] fql.g:878: warning:nondeterminism upon
[ant:antlr] fql.g:878: k==1:CROSS,FULL,INNER,JOIN,LEFT,OUTER,RIGHT
[ant:antlr] fql.g:878: k==2:JOIN,OUTER,SYMBOL
[ant:antlr] fql.g:878: k==3:AS,JOIN,ON,SYMBOL
[ant:antlr] fql.g:878: between alt 1 and exit branch of block
[ant:antlr] fql.g:878: warning:nondeterminism upon
[ant:antlr] fql.g:878: k==1:JOIN
[ant:antlr] fql.g:878: k==2:SYMBOL
[ant:antlr] fql.g:878: k==3:AS,DOT,PROPERTY,LBRACKET
[ant:antlr] fql.g:878: between alt 2 and exit branch of block
[ant:antlr] fql.g:934: warning:nondeterminism upon
[ant:antlr] fql.g:934: k==1:COMMA
[ant:antlr] fql.g:934: k==2:SYMBOL
[ant:antlr] fql.g:934: k==3:EOF,AND,ASC,CROSS,DESC,DOT,ELSE,END,ESCAPE,FULL,IN,INNER,IS,JOIN,LEFT,OR,ORDER,OUTER,PROPERTY,RIGHT,SINGLE,THEN,WHERE,EQUALS,NOT_EQ,LPARENS,COMMA,RPARENS,GT,LT,GTE,LTE,LIKE,CONCAT,PLUS,MINUS,MULTIPLY,DIVIDE,LBRACKET
[ant:antlr] fql.g:934: between alt 1 and exit branch of block
[ant:antlr] fql.g:854:33: warning:nondeterminism between alts 1 and 2 of block upon
[ant:antlr] fql.g:854:33: k==1:WHERE
[ant:antlr] fql.g:854:33: k==2:BOOL_FALSE,BOOL_TRUE,CASE,DEC_LITERAL,FROM,NOT,NULL,SELECT,LPARENS,STRING,MINUS,SYMBOL,NUM_LITERAL,LONG_LITERAL,SUBST,POSITIONAL
[ant:antlr] fql.g:854:33: k==3:EOF,AND,BOOL_FALSE,BOOL_TRUE,CASE,CROSS,DEC_LITERAL,DOT,ELSE,END,ESCAPE,FROM,FULL,IN,INNER,IS,JOIN,LEFT,NOT,NULL,OR,ORDER,OUTER,PROPERTY,RIGHT,SELECT,SINGLE,THEN,WHEN,WHERE,EQUALS,NOT_EQ,LPARENS,COMMA,RPARENS,GT,LT,GTE,LTE,LIKE,STRING,CONCAT,PLUS,MINUS,MULTIPLY,DIVIDE,SYMBOL,LBRACKET,NUM_LITERAL,LONG_LITERAL,SUBST,POSITIONAL
[ant:antlr] fql.g:854:47: warning:nondeterminism between alts 1 and 2 of block upon
[ant:antlr] fql.g:854:47: k==1:ORDER
[ant:antlr] fql.g:854:47: k==2:BY
[ant:antlr] fql.g:854:47: k==3:SYMBOL
[ant:antlr] fql.g:854:63: warning:nondeterminism between alts 1 and 2 of block upon
[ant:antlr] fql.g:854:63: k==1:SINGLE
[ant:antlr] fql.g:854:63: k==2:EOF,AND,CROSS,ELSE,END,ESCAPE,FULL,IN,INNER,IS,JOIN,LEFT,OR,ORDER,OUTER,RIGHT,SINGLE,THEN,WHERE,EQUALS,NOT_EQ,COMMA,RPARENS,GT,LT,GTE,LTE,LIKE,CONCAT,PLUS,MINUS,MULTIPLY,DIVIDE
[ant:antlr] fql.g:854:63: k==3:EOF,AND,ASC,BOOL_FALSE,BOOL_TRUE,BY,CASE,CROSS,DEC_LITERAL,DESC,ELSE,END,ESCAPE,FROM,FULL,IN,INNER,IS,JOIN,LEFT,NOT,NULL,OR,ORDER,OUTER,RIGHT,SELECT,SINGLE,THEN,WHERE,EQUALS,NOT_EQ,LPARENS,COMMA,RPARENS,GT,LT,GTE,LTE,LIKE,STRING,CONCAT,PLUS,MINUS,MULTIPLY,DIVIDE,SYMBOL,NUM_LITER
#598 Updated by Ovidiu Maxiniuc almost 4 years ago
As I informed by email, I noticed some differences in the generated indexes for our test framework. There are two kinds:
- some indexes keep their unique attribute. I remember that we were intentionally doing some analysis and if another index with the same or fewer components was already unique, the attribute was dropped. This was done because of the way the unique validation was done, to have fewer indexes to test. However, with 4011a, we no longer have this issue and, moreover, the annotations in the DMOs keep the original metadata for the indexes.
- more indexes have the PK added as the last component. Again, I remember this was added to make all indexes actually unique, even if not declared in OE that way. I cannot remember exactly why, but I guess this is needed by incremental search? Also I do not recall why we were not adding it for all non-unique indexes. As for the first issue, we are altering the index definition found in the annotation metadata. Could this pose any problems?
#599 Updated by Eric Faulhaber almost 4 years ago
Ovidiu Maxiniuc wrote:
some indexes keep their unique attribute. I remember that we intentionally did some analysis: if another index with the same or fewer components was already unique, the attribute was dropped. This was done because of the way unique validation worked, to have fewer indexes to test. However, with 4011a we no longer have this issue, and moreover the annotations in the DMOs keep the original index metadata.
This was not just about having fewer indexes to test during unique constraint validation. It was also about eliminating redundant indexes from the database schema. Having a redundant index means slower update performance and more storage required by the database. I don't want to change the database schema from what we were generating pre-4011a.
If there is some legacy need to keep a redundant index around (which fields need to be updated to trigger validation/flushing, maybe?), then perhaps we should not drop the index outright, but somehow mark it redundant, so it is not involved in the DDL generation. OTOH, I am concerned that we have different indexes from what we had pre-4011a.
more indexes have the PK added as the last component. Again, I remember this was added to make all indexes effectively unique, even if they were not declared that way in OE. I cannot remember exactly why, but I guess this is needed by incremental search? I also do not recall why we were not adding it for all non-unique indexes. As for the first issue, we are altering the index definition found in the annotation metadata. Could this pose any problems?
Yes, adding the PK was done to ensure that sorting is always deterministic. This allows us to add the primary key to an ORDER BY to ensure we do not get into infinite loops when emulating a 4GL "index walk" (e.g., looping FIND NEXT). However, I don't know why we would have more such indexes than before 4011a. The only reason I can think of that we would not be adding the PK for all non-unique indexes is if we determine that the components in that index implicitly represent unique combinations already (i.e., those fields are already in a unique index, possibly in a different order). But again, this should not have changed compared to pre-4011a code.
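To illustrate the determinism point with a standalone sketch (the Rec type, field names, and data below are invented for illustration, not FWD code): sorting on a non-unique column alone leaves ties unresolved, while appending the primary key as the final sort term yields a total order, so an emulated index walk (e.g. looping FIND NEXT) cannot revisit rows or loop.

```java
import java.util.*;

public class PkTieBreak
{
   /** Minimal stand-in for a record: one sortable field plus a primary key. */
   record Rec(String f1, long id) { }

   /** Sort by f1 only: ties between equal f1 values stay in encounter order. */
   static List<Rec> sortNonUnique(List<Rec> rows)
   {
      List<Rec> copy = new ArrayList<>(rows);
      copy.sort(Comparator.comparing(Rec::f1));
      return copy;
   }

   /** Sort by f1, then by primary key: a total order, stable across executions. */
   static List<Rec> sortWithPk(List<Rec> rows)
   {
      List<Rec> copy = new ArrayList<>(rows);
      copy.sort(Comparator.comparing(Rec::f1).thenComparing(Rec::id));
      return copy;
   }

   public static void main(String[] args)
   {
      List<Rec> rows = List.of(new Rec("a", 7), new Rec("a", 3), new Rec("b", 1));

      // with the PK tie-breaker, the two "a" rows are ordered deterministically
      List<Rec> sorted = sortWithPk(rows);
      assert sorted.get(0).id() == 3;
      assert sorted.get(1).id() == 7;
      assert sorted.get(2).id() == 1;
   }
}
```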
#600 Updated by Eric Faulhaber almost 4 years ago
Committed rev 11487, which:
- bypasses database-level unique constraint validation when the backing table has no unique indices AND the operation is validation-only (not persist);
- avoids setting a savepoint if the backing table has no unique indices and we already are in a transaction.
Constantin, would you please carefully review the change? It did go through some regression testing already, but I'd appreciate a second set of eyes. It's not big and you did find my last regression in this area.
#601 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Constantin, would you please carefully review the change? It did go through some regression testing already, but I'd appreciate a second set of eyes. It's not big and you did find my last regression in this area.
The only issue I have is that

boolean doFlush = !rollback || hasUniqueConstraint;

will always be true (as this is the negation of rollback && !hasUniqueConstraint).
#602 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
The only issue I have is that

boolean doFlush = !rollback || hasUniqueConstraint;

will always be true (as this is the negation of rollback && !hasUniqueConstraint).
Ok, thanks. I am trying to avoid setting a savepoint if we're already in a transaction and have no unique constraints to test. I'll review again after working on the flushing fix.
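For reference, the equivalence can be checked mechanically with a standalone sketch (not FWD code): by De Morgan, !rollback || hasUniqueConstraint is exactly !(rollback && !hasUniqueConstraint), and the expression is false only in the single case where rollback is true and there is no unique constraint.

```java
public class FlushCondition
{
   /** The committed form of the condition. */
   static boolean doFlush(boolean rollback, boolean hasUniqueConstraint)
   {
      return !rollback || hasUniqueConstraint;
   }

   public static void main(String[] args)
   {
      // De Morgan: !rollback || hasUnique is exactly !(rollback && !hasUnique)
      for (boolean rb : new boolean[] { false, true })
      {
         for (boolean uc : new boolean[] { false, true })
         {
            assert doFlush(rb, uc) == !(rb && !uc);
         }
      }

      // the expression is not a tautology: it is false in exactly one case
      assert !doFlush(true, false);
   }
}
```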
#603 Updated by Constantin Asofiei almost 4 years ago
There are issues in 4011a with the batch deletes:
- the composite tables need to be deleted manually (or is the foreign key 'on cascade delete'?)
- fql.g and FQL2SQL are missing support for DELETE and UPDATE statements
- theoretically it is possible for the WHERE clause (in DELETE or UPDATE) to contain an extent field. I think the solution will be a subselect, not a join.
#604 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
the composite tables need to be deleted manually (or is the foreign key 'on cascade delete')?
I remember discussing this with Ovidiu. I believe the result was that we decided to use on cascade delete, but I can't find this in the code now. Ovidiu?
#605 Updated by Ovidiu Maxiniuc almost 4 years ago
Eric Faulhaber wrote:
Constantin Asofiei wrote:
the composite tables need to be deleted manually (or is the foreign key 'on cascade delete')?
I remember discussing this with Ovidiu. I believe the result was that we decided to use on cascade delete, but I can't find this in the code now. Ovidiu?
We did. I added the code in TempTableHelper:484, but now I am not sure that is enough.
#606 Updated by Eric Faulhaber almost 4 years ago
Ovidiu, regarding 4011a/11486: RecordBuffer.cumulativeDirtyProps is never modified, only instantiated and used as a parameter to Bitset.or() and Bitset.and(). It is also cleared, which is a no-op, since it is never updated. Looks like something is missing. I'm going back over the old implementation now, but if this is fresh in your mind, perhaps you can provide some guidance.
#607 Updated by Ovidiu Maxiniuc almost 4 years ago
The old cumulativeDirtyProps was stored in ValidationHelper and contained the entire set of changes at the last (pre-)validation. Now it should contain only the bits of the altered properties (from currentRecord.dirtyProps). This is because validation occurs only after the properties are changed, so the new values are already set. Pre-4011, the new values were kept here and applied when the record was updated in the dirty database. Now it only serves to detect the set of touched properties since the last save to the dirty database.
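The intended accumulation can be sketched with java.util.BitSet (FWD's own Bitset class is assumed to behave similarly; the property indexes below are invented): each validation should OR the record's current dirty bits into the cumulative set, which is cleared only when the record is saved to the dirty database.

```java
import java.util.BitSet;

public class DirtyPropsSketch
{
   /** OR the bits of the record's current dirty properties into the cumulative set. */
   static void accumulate(BitSet cumulative, BitSet dirty)
   {
      cumulative.or(dirty);
   }

   public static void main(String[] args)
   {
      BitSet cumulativeDirtyProps = new BitSet();

      BitSet dirtyProps = new BitSet();
      dirtyProps.set(2);                            // property 2 touched
      accumulate(cumulativeDirtyProps, dirtyProps);

      dirtyProps.clear();
      dirtyProps.set(5);                            // property 5 touched later
      accumulate(cumulativeDirtyProps, dirtyProps);

      // both touched properties are remembered since the last save
      assert cumulativeDirtyProps.get(2) && cumulativeDirtyProps.get(5);

      // after saving to the dirty database, the cumulative set is reset
      cumulativeDirtyProps.clear();
      assert cumulativeDirtyProps.isEmpty();
   }
}
```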
#608 Updated by Eric Faulhaber almost 4 years ago
Ovidiu Maxiniuc wrote:
The old cumulativeDirtyProps was stored in ValidationHelper and contained the entire set of changes at the last (pre-)validation. Now it should contain only the bits of the altered properties (from currentRecord.dirtyProps). This is because validation occurs only after the properties are changed, so the new values are already set. Pre-4011, the new values were kept here and applied when the record was updated in the dirty database. Now it only serves to detect the set of touched properties since the last save to the dirty database.
But it is never modified by any code. AFAICT, all bits are always unset.
#609 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
A variation of #4011-549 with the validation:
[...]The
CAN-FIND
sees this record, which is in an 'invalid' state. Do we need the dirty database for this case?
There are two issues here. One is that we do in fact need the dirty database feature for this CAN-FIND to work properly. The other is that in the process of validating the single property change of site-id, we persist the record inside a savepoint (in Validation.validateUniqueCommitted), which causes the database to complain about a separate, not-null column still being null. This should have been outside the scope of this single property validation, but we cannot separate the database's not-null checks from its unique constraint checks.
I have enabled the dirty database for the person table for this test case. I've also made some changes to Validation to detect this problem and to fall back to the "unique index query" method of validation instead of the "persist within a savepoint" method in this case. I need to refine these changes, as they are not yet exactly right. However, I feel I am on the right track with this approach. I expect to have something I can commit tomorrow.
#610 Updated by Eric Faulhaber almost 4 years ago
- Related to Feature #4692: add sanity check at server startup to compare DMO indices with database indices added
#611 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
There are issues in 4011a with the batch deletes:
- the composite tables need to be deleted manually (or is the foreign key 'on cascade delete')?
- fql.g and FQL2SQL are missing support for DELETE and UPDATE statements
- theoretically it is possible for the WHERE clause (in DELETE or UPDATE) to contain an extent field. I think the solution will be a subselect, not a join.
These are solved in 4011a rev 11493.
The subselect for subscripted fields is 'inlined' as a STRING. Also, for UPDATE, the SET clause currently doesn't support subscripted fields (FWD doesn't need this at this point, as TemporaryBuffer.removeRecords will only emit a _multiplex = 1 condition).
#612 Updated by Greg Shah almost 4 years ago
Some notes about fql.g from my quick review:
- The overall structure looks ... familiar. :) This is not a complaint.
- I guess it is OK to have a missing implementation in insertStatement if it can never be generated directly from 4GL code.
- It seems like subselect is unnecessary, since it is the same rule as selectStatement. You could easily call selectStatement from both primary_expr and statement, but with a boolean argument that would control whether the root node was SELECT_STMT or SUBSELECT.
- Is it true that you can't have complex expressions in ORDER BY? It seems like this is a case we have to support.
In regard to #4011-597, each of the warnings should be addressed. There are only 2 possible cases:
- There is a real problem in that valid input can be parsed incorrectly due to the ambiguity of the grammar. In this case we must resolve the issue, often with additional lookahead/semantic predicates. Adding a semantic predicate will usually be detected as resolving the ambiguity. Another approach is to factor the rules such that the left side common terms are in common rules where any ambiguity is resolved.
- The ambiguity is adequately handled by the default matching as coded in the grammar. In this case it is safe (and preferred) to disable the warning.
I understand the instinct (and fear) to leave the warnings there. But in truth, I think they are not helpful. In any bug, I wouldn't spend time looking at the warnings, I would just look at the input text, any stack trace and the grammar itself. Usually it is quite clear.
I would request that each location be reviewed and one of the above strategies taken such that all warnings are gone.
Let's consider the first warning:
[ant:antlr] fql.g:864:7: warning:nondeterminism between alts 1 and 3 of block upon
[ant:antlr] fql.g:864:7: k==1:SYMBOL
[ant:antlr] fql.g:864:7: k==2:FROM
[ant:antlr] fql.g:864:7: k==3:FROM,SYMBOL,LBRACKET
In fql.g:

select_expr
   :
   SELECT^
   (
      alias | function | property          <--- line 864
      (COMMA! (alias | function | property))*
   )
   ;
My lack of understanding of the possible variants of properties/aliases makes it hard to see whether this code is correct. In other words, I don't know the possible valid input that can be seen here. The function rule is unambiguous because it only matches SYMBOL LPARENS. Neither alias nor property allows that, so function cannot be confused with those.
Both alias and property do seem quite possibly ambiguous with each other.
alias can match:

SYMBOL
SYMBOL SYMBOL
SYMBOL SYMBOL LBRACKET ...
SYMBOL SYMBOL DOT SYMBOL
SYMBOL SYMBOL DOT SYMBOL LBRACKET ...
SYMBOL SYMBOL PROPERTY
SYMBOL SYMBOL PROPERTY LBRACKET ...
property can match:

SYMBOL
SYMBOL LBRACKET ...
SYMBOL DOT SYMBOL
SYMBOL DOT SYMBOL LBRACKET ...
SYMBOL PROPERTY
SYMBOL PROPERTY LBRACKET ...
You can see here that both rules can match a lone SYMBOL, so that would cause the warning. Since alias is the first alternative, a lone SYMBOL will always be matched there instead of in property. If that is OK, then it is probably OK to disable the warnings. If I've missed something, then there may be other possible ambiguous inputs, and those would need to be considered. For example, the lexer creates PROPERTY from DOT SYMBOL, which means there are some hidden cases: it seems like SYMBOL DOT SYMBOL might be lexed as SYMBOL PROPERTY, which could then conflict, but again it could also be fine.
The above rules are a good example where left-factoring the common terms might be very helpful. It would make it easier to determine whether the code is correct, in addition to quite possibly eliminating the warning.
I'm happy to look at specific cases where you need help or have questions.
#613 Updated by Eric Faulhaber almost 4 years ago
I've committed revisions 11494 and 11495. 11494 is Ovidiu's import fix. 11495 partially fixes the test case in #4011-577, in that it fixes the premature validation and flushing of the record. I am still working on the dirty database aspect of the fix, which is the cause behind the CAN-FIND not working properly.
#614 Updated by Ovidiu Maxiniuc almost 4 years ago
Related to the parsing of FQL: at this moment FQL2SQL is unable to parse queries like:

from com.goldencode.appname.dmo.SomeTable as st where st.f1 = 10

This is needed because at the time hand-written code is processed, the DMO implementation class does not exist yet. When trying to parse this, an error like:

line 1:10: unexpected token: .goldencode

will be printed to the output console (just that vague). We need to allow the full DMO interface name in the FROM option. I will expand alias to allow that, but I expect more nondeterminism/collisions with the alias.property syntax.
#615 Updated by Greg Shah almost 4 years ago
It is not clear to me why we match PROPERTY in the lexer (as DOT SYMBOL). Since there are many possible qualified symbol name combinations that cannot be matched in the lexer, it seems that these could be unified in the parser if there were no PROPERTY node. Instead, let the DOT and SYMBOL tokens flow through to the parser and create a common rule that matches aliases, unqualified properties, and qualified properties. Much of the ambiguity could be eliminated, and it would be easy to match the cases where there is an arbitrary number of "qualifier" levels (like a package name).
If it is needed to know if there was intervening whitespace between these tokens, there are tricks for handling that as well.
#616 Updated by Ovidiu Maxiniuc almost 4 years ago
- added support for DMOs in the FROM clause of the SELECT statement. For this I removed the DOT as separator of ALIAS and PROPERTY, and the DOT-starts-PROPERTY workaround altogether. The code is a bit simpler, but the FQL2SQL converter had to be updated to recreate the tree structure it expects. Because of this change it is possible that some testcases fail; I have not covered all testcases to fix side effects;
- for completeness, I added an implementation for INSERT INTO..., although there is no support for it in FQL2SQL;
- also added parser support for GROUP BY and HAVING clauses in the SELECT statement;
- (LE): removed alias from the SELECT children. It was never picked up, and FQL2SQL was already handling it by searching the property in the aliases list;
- (LE): replaced SUBSELECT with SELECT_STATEMENT (as they were virtually identical).

For the moment the number of warnings output by antlr has not decreased. Working on that.
#617 Updated by Constantin Asofiei almost 4 years ago
There is a problem in 4011a, where we emit NULLS LAST in the ORDER BY clause. This causes a performance regression for 4GL queries like:

FOR EACH table1 NO-LOCK, FIRST table2 WHERE table2.f1 = table1.f1 AND table2.f2 = "A"

which get converted to SQL like:

select table1.*, table2.recid
from table1
cross join table2
where table2.recid =
(
   select table2.recid
   from table2
   where upper(rtrim(table2.f1)) = upper(rtrim(table1.f1)) and
         upper(rtrim(table2.f2)) = 'A'
   order by upper(rtrim(table2.f3)) asc, upper(rtrim(table2.f4)) asc
   limit 1
)
order by table1.f4 asc nulls last, table1.f5 asc nulls last, table1.f6 desc nulls last
limit 1
Without the NULLS LAST, postgresql manages to use the indexes on table1. With NULLS LAST, postgresql falls back to scanning instead of using the index.
Another issue is that the SUBSELECT doesn't have the NULLS LAST. At least we should be consistent and add it there, too, for a dialect which needs it.
I'm working on making the NULLS LAST emission dependent on the dialect.
#618 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
Another issue is that the SUBSELECT doesn't have the NULLS LAST. At least we should be consistent and add it there, too, for a dialect which needs it.
Scratch this, I wasn't looking at the right SQL. We emit it for the SUBSELECT, too.
#619 Updated by Constantin Asofiei almost 4 years ago
Postgresql states that:
By default, null values sort as if larger than any non-null value; that is, NULLS FIRST is the default for DESC order, and NULLS LAST otherwise. Note that the ordering options are considered independently for each sort column.
If OE always sorts the nulls last, then we may need to emit NULLS LAST for any component with DESC. And later on, in FQL2SQL, emit the NULLS LAST if the direction is DESC - this should allow postgresql to match the index.
#620 Updated by Constantin Asofiei almost 4 years ago
I have run some tests, and OE always sorts the nulls last, regardless of direction.
H2 has a bug in that it doesn't consider the NULLS LAST option. Trying to understand why this is happening.
#621 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
I have run some tests, and OE always sorts the nulls last, regardless of direction.
Sorry, my brain had a short-circuit... I was looking at the FWD output for H2, and the problem is with H2 - I think there's a problem in the actual row sort in the index.
OE follows the same principles as postgresql - nulls last for ASC, first for DESC, even if there is no index on that column.
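This "null compares higher than any value" rule maps directly onto the JDK's Comparator.nullsLast: reversing an ascending nulls-last comparator automatically produces descending with nulls first. The standalone sketch below (not FWD code) models the expected OE/PostgreSQL ordering:

```java
import java.util.*;

public class NullSortDemo
{
   /** Sort treating null as higher than any value: nulls last for ASC, first for DESC. */
   static List<Integer> sortNullsHigh(List<Integer> values, boolean desc)
   {
      Comparator<Integer> cmp = Comparator.nullsLast(Comparator.<Integer>naturalOrder());
      List<Integer> copy = new ArrayList<>(values);
      copy.sort(desc ? cmp.reversed() : cmp);
      return copy;
   }

   public static void main(String[] args)
   {
      // ASC: nulls sort last
      assert sortNullsHigh(Arrays.asList(1, null, 0), false)
                .equals(Arrays.asList(0, 1, null));

      // DESC: nulls sort first
      assert sortNullsHigh(Arrays.asList(1, null, 0), true)
                .equals(Arrays.asList(null, 1, 0));
   }
}
```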
#622 Updated by Ovidiu Maxiniuc almost 4 years ago
I did a quick test with H2:
create table t1 (
f1 int4,
f2 int4,
f3 int4
)
delete from t1;
insert into t1 values (0, 1, 2);
insert into t1 values (0, null, 2);
insert into t1 values (null, null, null);
insert into t1 values (1, 2, 3);
select * from t1;
select * from t1 order by f1;
select * from t1 order by f1 nulls last;
select * from t1 order by f1 desc;
select * from t1 order by f1 desc nulls last;
select * from t1 order by f1 nulls last, f2 asc;
select * from t1 order by f1 nulls last, f2 asc nulls last;
It seems to me that it works fine. Here are the results for the last two selects:

F1   F2   F3
---- ---- ----
0    null 2
0    1    2
1    2    3
null null null

F1   F2   F3
---- ---- ----
0    1    2
0    null 2
1    2    3
null null null
#623 Updated by Constantin Asofiei almost 4 years ago
What's the result for select * from t1 order by f1 desc; ?
#624 Updated by Ovidiu Maxiniuc almost 4 years ago
select * from t1 order by f1 desc;

F1   F2   F3
---- ---- ----
1    2    3
0    null 2
0    1    2
null null null
(4 rows, 6 ms)
LE: It is the exact reverse order of select * from t1 order by f1;
#625 Updated by Constantin Asofiei almost 4 years ago
Ovidiu, add an ASC index for F1 and you will see that H2 is wrong - it will place the NULL first.
Also, H2 is wrong with or without a DESC index for F2 - the NULL must be first.
#626 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
Also, H2 is wrong with or without a DESC index for F2 - the NULL must be first.
To be more specific - if you have F2 DESC (or F1 DESC), H2 will be wrong with or without an index on that field in the same direction.
#627 Updated by Constantin Asofiei almost 4 years ago
I don't understand what is happening. The same SQL in the standalone H2 server correctly moves the nulls to the last position when NULLS LAST is present. But in FWD, the nulls come first...
I've even connected using the same URL as the _temp table (in-memory DB, 'local temporary' table, same indexes, even the same insert/update statements to set the fields...).
#628 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
I don't understand what is happening. The same SQL in the standalone H2 server correctly moves the nulls to the last position when NULLS LAST is present. But in FWD, the nulls come first... I've even connected using the same URL as the _temp table (in-memory DB, 'local temporary' table, same indexes, even insert/update statements to set the fields...).
So, the reason was that I forgot that I had -Dh2.sortNullsHigh=true set in Eclipse for the FWD server. And this overrides any NULLS LAST setting.
With my patch to add NULLS FIRST/LAST (depending on direction) for H2, FWD works OK with regard to sorting nulls. For PostgreSQL these are not emitted, and it also works OK (regarding nulls).
The only difference is in how FWD sorts records which compare equal because of null fields. OE seems to use the record's ID value, and if there is no index, the order is somewhat random. I'll get back to this once 4011a is rebased from trunk.
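The fix described above could be sketched as follows (the class and method below are hypothetical, invented for illustration, not the FWD API): on H2, spell out NULLS LAST for ASC and NULLS FIRST for DESC so the output matches the 4GL behavior; on PostgreSQL, emit nothing, since its default already sorts nulls high.

```java
public class OrderByEmitter
{
   /** Hypothetical sketch of per-dialect ORDER BY term emission (not FWD API). */
   static String orderByTerm(String expr, boolean desc, String dialect)
   {
      StringBuilder sb = new StringBuilder(expr).append(desc ? " desc" : " asc");
      if ("h2".equals(dialect))
      {
         // H2 (without sortNullsHigh) needs null placement spelled out to match
         // the 4GL/PostgreSQL behavior: nulls last for ASC, first for DESC
         sb.append(desc ? " nulls first" : " nulls last");
      }
      // PostgreSQL already defaults to nulls-sort-high, so nothing is emitted
      return sb.toString();
   }

   public static void main(String[] args)
   {
      assert orderByTerm("upper(rtrim(f1))", false, "h2")
                .equals("upper(rtrim(f1)) asc nulls last");
      assert orderByTerm("f2", true, "h2").equals("f2 desc nulls first");
      assert orderByTerm("f1", false, "postgresql").equals("f1 asc");
   }
}
```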
#629 Updated by Constantin Asofiei almost 4 years ago
The NULLS LAST/FIRST fix is in 4011a rev 11499.
#630 Updated by Constantin Asofiei almost 4 years ago
In TemporaryBuffer.copyAllRows, we can skip record validation if:

// skip validation when:
// 1. same table and not in append mode (as dst will be empty) or
// 2. there are no unique constraints in dst table
// TODO: enhance this if we are not in append mode, the dst and src tables are different, and they
//       have the same set of unique indexes (by the field position, not name)
boolean skipValidate =
   (!append && dstBuf.getDMOImplementationClass() == srcBuf.getDMOImplementationClass()) ||
   dstMeta.getUniqueConstraints().isEmpty();

This avoids unneeded overhead if we know the data can be copied to the destination table safely. But does anyone know whether we have a way to check if the unique indexes in two DMO impls match? If they match and we are not in append mode, then we can skip the validation as well.
#631 Updated by Constantin Asofiei almost 4 years ago
Eric, an idea about the temp-table flushing. We currently save on every touched field. But I think we can defer this until:
- a field that is part of a unique index is touched (so we need to validate the unique index)
- the buffer is explicitly released or created (we do this anyway)
- there is a query on it, which will flush the buffer (we do this anyway)

The only problem I see is CAN-FIND, but I think we can safely flush there, too, as we know the current record is valid (because we flushed if a field that is part of a unique index was changed).
#632 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
Eric, an idea about the temp-table flushing. We currently save on every touched field. But I think we can defer this until:
- a field part of an unique index is touched (so we need to validate the unique index)
Looks like we already do this... I thought something was wrong here, as I see many validateMaybeFlush calls incoming from a changed temp-table field, but I think this is because the field is actually part of a unique index. I'll look some more.
#633 Updated by Constantin Asofiei almost 4 years ago
For temp-tables, FWD has problems with this test:
def temp-table tt1
   field f1 as int
   field f2 as int
   field f3 as int
   field f4 as int
   field f5 as int.
def temp-table tt2
   field f1 as int
   field f2 as int
   field f3 as int
   field f4 as int
   field f5 as int
   index ix1 f4.
def temp-table tt3
   field f1 as int
   field f2 as int
   field f3 as int
   field f4 as int
   field f5 as int
   index ix1 is unique f4.

create tt1.
if not can-find(tt1 where tt1.f1 = 0) then message "1a bad".
tt1.f1 = 1.
if not can-find(tt1 where tt1.f1 = 1) then message "1b bad".
tt1.f2 = 1.
if not can-find(tt1 where tt1.f2 = 1) then message "1c bad".
tt1.f3 = 1.
if not can-find(tt1 where tt1.f3 = 1) then message "1d bad".
tt1.f4 = 1.
if not can-find(tt1 where tt1.f4 = 1) then message "1e bad".
tt1.f5 = 1.
if not can-find(tt1 where tt1.f5 = 1) then message "1f bad".

create tt2.
if can-find(tt2 where tt2.f1 = 0) then message "2a bad".
tt2.f1 = 1.
if can-find(tt2 where tt2.f1 = 1) then message "2b bad".
tt2.f2 = 1.
if can-find(tt2 where tt2.f2 = 1) then message "2c bad".
tt2.f3 = 1.
if can-find(tt2 where tt2.f3 = 1) then message "2d bad".
tt2.f4 = 1.
if not can-find(tt2 where tt2.f4 = 1) then message "2e bad".
tt2.f5 = 1.
if not can-find(tt2 where tt2.f5 = 1) then message "2f bad".

create tt3.
if can-find(tt3 where tt3.f1 = 0) then message "3a bad".
tt3.f1 = 1.
if can-find(tt3 where tt3.f1 = 1) then message "3b bad".
tt3.f2 = 1.
if can-find(tt3 where tt3.f2 = 1) then message "3c bad".
tt3.f3 = 1.
if can-find(tt3 where tt3.f3 = 1) then message "3d bad".
tt3.f4 = 1.
if not can-find(tt3 where tt3.f4 = 1) then message "3e bad".
tt3.f5 = 1.
if not can-find(tt3 where tt3.f5 = 1) then message "3f bad".

It fails in cases 1a, 1b, 1c, 1d, 1e, 1f, 2f, 3f:
- for tt1, without indexes, OE seems to flush on create and after every field change.
- for tt2 and tt3, what is interesting is that they don't flush until the index field is set. If this line is moved before f4 is set, then OE doesn't find the record:

tt2.f5 = 1. if not can-find(tt2 where tt2.f5 = 1) then message "2e bad".
- OE has a default scan index, plus any custom indexes
- OE attaches the record to the scan index when the record is created
- once the record has been attached to all indexes (i.e., their fields were set), OE starts flushing after every changed field.
I don't think it is OK for FWD to flush after every changed field. We can keep some state at the record and let CAN-FIND flush as needed. The point is that (for the tt1 case) we should reduce the number of flushes we do now.
#634 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
But, does anyone know if we have a way to check if the unique indexes in two DMO impl match?
I'm not sure what you are asking. Across different DMO implementation classes, two unique indices can never match, as they operate on different tables. Unique indices for different instances of the same DMO implementation class will always be the same, since they represent records in the same table. Sorry, I think I missed your point.
#635 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
I'm not sure what you are asking. Across different DMO implementation classes, two unique indices can never match, as they operate on different tables. Unique indices for different instances of the same DMO implementation class will always be the same, since they represent records in the same table. Sorry, I think I missed your point.
When passing a temp-table as a parameter, the schema of the temp-table argument at the caller and the schema of the temp-table in the parameter definition must match. And this match is done only using the field types and their order, without checking the field names. I'm asking about some API which, given two temp-tables, checks whether their indexes match in terms of the field types, not their names.
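No such API is confirmed to exist in FWD; as a hedged sketch of what the check might look like, each unique index can be reduced to the ordered list of its component field types (matching by position/type, never by name), and two tables are compatible when their sets of such signatures are equal. The representation below is invented for illustration:

```java
import java.util.*;

public class UniqueIndexMatch
{
   /**
    * Each unique index is represented by its component field types, in index
    * order; fields are matched by position and type, never by name.
    */
   static boolean sameUniqueIndexes(Set<List<String>> a, Set<List<String>> b)
   {
      return a.equals(b);
   }

   public static void main(String[] args)
   {
      Set<List<String>> src       = Set.of(List.of("integer", "character"));
      Set<List<String>> matching  = Set.of(List.of("integer", "character"));
      Set<List<String>> different = Set.of(List.of("character"));

      // same component types in the same order: validation could be skipped
      assert sameUniqueIndexes(src, matching);

      // different signatures: full validation is still required
      assert !sameUniqueIndexes(src, different);
   }
}
```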
#636 Updated by Constantin Asofiei almost 4 years ago
Regarding the test at #4011-633 - I've created a physical database with the same table structure and indexes, and OE behaves the same in terms of flushing, for this test. So this behavior doesn't seem to be specific to temp-tables.
But the temp-table specific optimizations should still be made, as currently Validation.checkUniqueConstraints will always go ahead and flush even if the temp-table has only non-unique indexes. I've checked the customer scenarios, and it looks like most of the cases with this aggressive flush involve just a non-unique index.
#637 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
In TemporaryBuffer.copyAllRows, we can skip record validation if:
[...]
This avoids unneeded overhead if we know the data can be copied to the destination table safely. But does anyone know whether we have a way to check if the unique indexes in two DMO impls match? If they match and we are not in append mode, then we can skip the validation as well.
Constantin, are you implementing any of these optimizations?
#638 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Constantin Asofiei wrote:
In TemporaryBuffer.copyAllRows, we can skip record validation if:
[...]
This avoids unneeded overhead if we know the data can be copied to the destination table safely. But does anyone know whether we have a way to check if the unique indexes in two DMO impls match? If they match and we are not in append mode, then we can skip the validation as well.
Constantin, are you implementing any of these optimizations?
I have this uncommitted patch at this time. I haven't worked on the index match when the temp-table DMOs are not the same.
### Eclipse Workspace Patch 1.0
#P p2j4011a
Index: src/com/goldencode/p2j/persist/TemporaryBuffer.java
===================================================================
--- src/com/goldencode/p2j/persist/TemporaryBuffer.java (revision 2370)
+++ src/com/goldencode/p2j/persist/TemporaryBuffer.java (working copy)
@@ -2572,6 +2572,16 @@
       persistence.scroll(entities, fql, parms, 0, 0, ResultSet.TYPE_FORWARD_ONLY);
       BufferManager bufMgr = srcBuf.getBufferManager();
+      DmoMeta dstMeta = DmoMetadataManager.getDmoInfo(dstBuf.getDMOInterface());
+
+      // skip validation when:
+      // 1. same table and not in append mode (as dst will be empty) or
+      // 2. there are no unique constraints in dst table
+      // TODO: enhance this if we are not in append mode, the dst and src tables are different, and they
+      //       have the same set of unique indexes (by the field position, not name)
+      boolean skipValidate =
+         (!append && dstBuf.getDMOImplementationClass() == srcBuf.getDMOImplementationClass()) ||
+         dstMeta.getUniqueConstraints().isEmpty();
       try
       {
@@ -2614,22 +2624,25 @@
                // validate a record against the target table
                // TODO also validate against already copied records
-               try
+               if (!skipValidate)
                {
-                  dstBuf.validate(dstRec);
-               }
-               catch (ValidationException e)
-               {
-                  if (skipUniqueConflicts)
+                  try
                   {
-                     continue;
+                     dstBuf.validate(dstRec);
                   }
-                  else
+                  catch (ValidationException e)
                   {
-                     ErrorManager.recordOrShowError(e.getNumber(), e.getMessage(), false);
-                     commit *= -1;
-                     validationError = true;
-                     break;
+                     if (skipUniqueConflicts)
+                     {
+                        continue;
+                     }
+                     else
+                     {
+                        ErrorManager.recordOrShowError(e.getNumber(), e.getMessage(), false);
+                        commit *= -1;
+                        validationError = true;
+                        break;
+                     }
                   }
                }
#639 Updated by Eric Faulhaber almost 4 years ago
Note that in the cases where the DMO implementation classes are different, it is not just unique constraints that are validated. You could still have other validation failures, like index size or mandatory fields.
Also, I think your comment is correct that this should only be optimized when we are not in append mode (or we are in append mode, starting with an empty destination table), and the other conditions are met (i.e., same tables or same indices).
#640 Updated by Greg Shah almost 4 years ago
Is index size a problem for temp-tables? I thought that was a SQLServer thing.
#641 Updated by Ovidiu Maxiniuc almost 4 years ago
Greg Shah wrote:
Is index size a problem for temp-tables? I thought that was a SQLServer thing.
4GL has its own limit which we try to emulate. Before 9.x it was a smaller number, but for modern OE it is about 1971 bytes.
#642 Updated by Eric Faulhaber almost 4 years ago
Ovidiu Maxiniuc wrote:
Greg Shah wrote:
Is index size a problem for temp-tables? I thought that was a SQLServer thing.
4GL has its own limit which we try to emulate. Before 9.x it was a smaller number, but for modern OE it is about 1971 bytes.
This is what I was thinking of when I mentioned the index size check, but did we ever actually determine that there is value in enforcing this legacy limitation, even when the FWD implementation doesn't have a similar limitation? Can business logic vary in a useful way based on it? If not, this particular check seems like overhead we should drop.
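For context, the legacy check amounts to comparing a computed key size against the roughly 1971-byte limit mentioned above. The sketch below is purely illustrative (the real 4GL accounting includes per-component overhead and type-specific encodings, so summing UTF-8 byte lengths is an approximation, not the actual rule):

```java
import java.nio.charset.StandardCharsets;

public class IndexSizeCheck
{
   // approximate modern OE limit cited in this thread; pre-9.x was smaller
   static final int MAX_INDEX_KEY_BYTES = 1971;

   /** Rough approximation: sum the UTF-8 byte lengths of the key components. */
   static boolean fitsLegacyLimit(String... components)
   {
      int total = 0;
      for (String c : components)
      {
         total += c.getBytes(StandardCharsets.UTF_8).length;
      }
      return total <= MAX_INDEX_KEY_BYTES;
   }

   public static void main(String[] args)
   {
      assert fitsLegacyLimit("abc", "def");          // tiny key: well under the limit
      assert !fitsLegacyLimit("x".repeat(2000));     // oversized key: rejected
   }
}
```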
#643 Updated by Eric Faulhaber almost 4 years ago
Regarding the rebase TODO in DatabaseManager.getDatabaseDMOs...
The javadoc for DatabaseManager.getDatabaseDMOs is a bit confusing (the "table DMO names" part is unclear), but the method seems to be about returning all the DMO implementation class names for the given database, so the caller can do a Class.forName with them to initialize _File metadata. Maybe we just hand back the DMO implementation classes themselves and skip the Class.forName in the caller? Nothing else calls this DatabaseManager method.
Do whatever makes sense to you here, but certainly we have that information. IIRC, we are generating all the implementation classes for the interfaces during server startup already, or maybe that's only for the auto-registered databases. Either way, we have the information, but the timing of when the classes are generated may need to be refactored.
There may be better ways of extracting the needed information for the _File metadata from our new structures, but let's limit the refactoring for now and just get this working.
#644 Updated by Constantin Asofiei almost 4 years ago
FWDDataObject - this class is used outside of the FWD context, by customer code via LegacyJavaAppserver. I'm worried that bringing in all the BaseRecord overhead may break its usage. Only primaryKey() is required from BaseRecord.
In any case, all of the Record and BaseRecord state is not needed at all. I think it is best to remove the extends TempRecord and leave it as it was before.
#645 Updated by Constantin Asofiei almost 4 years ago
SortCriterion - WHERE_WITH_NULLS I think can be removed. I'll need to double-check the customer project, but I suspect the NULLS LAST/FIRST fix solved the original problem. getAnsiJoin and getCompositeRestrictor can be removed.

TempTableResultSet - the code in init should be like this:

// TODO: REBASE 4011a: return type has changed from [] to [][]
java.util.List<Object[][]> rowData = TemporaryBuffer.readAllRows(dmo, props);
java.util.List<Object[]> rows = new ArrayList<>(rowData.size());
java.util.List<Object[]> rowsMeta = new ArrayList<>(rowData.size());
rowData.forEach(rd -> { rows.add(rd[0]); rowsMeta.add(rd[1]); });
rowIter = rows.iterator();
rowMetaIter = rowsMeta.iterator();
propIter = props.iterator();

instead of this:

// TODO: REBASE 4011a: return type has changed from [] to [][]
// rowIter = TemporaryBuffer.readAllRows(dmo, props).iterator();
propIter = props.iterator();

Why weren't these changes picked up during the rebase?

TemporaryBuffer makeMutable - yes, we need an interface there, otherwise the proxy can't be created. If you add back the TempTableRecord interface (and a separate interface with a primaryKey getter/setter, or do you add it to TempTableRecord?), then FWDDataObject can use that interface, too.

ReferenceProxy - FWDDataObject has nothing to do with the FWD persist layer. defaultProps needs to contain the mapping of the meta properties and the PK property. I think we can hard-code these without using TempTableRecord (or add a static method to TempRecord to return the list of default properties in a temp-table record).

RecordBuffer makeArgumentBuffer and createProxy - yes, we need the TempTableRecord equivalent, otherwise proxy creation will not work.

FieldReference.getGetter and getSetter, and DBUtils.getDMOPropertyType - these should already be able to work with TempRecord, but after you add the TempTableRecord equivalent, change them to that interface.
#649 Updated by Greg Shah almost 4 years ago
Eric Faulhaber wrote:
Ovidiu Maxiniuc wrote:
Greg Shah wrote:
Is index size a problem for temp-tables? I thought that was a SQLServer thing.
4GL has its own limit which we try to emulate. Before 9.x there was a smaller number, but for modern OE it is about 1971 bytes.
This is what I was thinking of when I mentioned the index size check, but did we ever actually determine that there is value in enforcing this legacy limitation, even when the FWD implementation doesn't have a similar limitation? Can business logic vary in a useful way based on it? If not, this particular check seems like overhead we should drop.
Generally, I'm OK with eliminating it if it is not a compatibility issue for real customer code.
We originally discussed this in #2273. I have not read through the details, but someone should do that to confirm there was no reason it was needed.
#650 Updated by Igor Skornyakov almost 4 years ago
What is the counterpart of DMOIndex.getBasePackage() in 4011a?
Thank you.
#651 Updated by Eric Faulhaber almost 4 years ago
DmoMetadataManager.getBasePackage
LE: Sorry, it's actually DmoMetadataManager.getDmoBasePackage, and it will include the trailing .dmo package.
#652 Updated by Eric Faulhaber almost 4 years ago
Ovidiu, there is a TODO in TableMapper:

// TODO: REBASE 4011a: this structure IS NOT populated. More, we have P2JField class to hold the same information
/** The sorted list of fields. */
private final List<LegacyFieldInfo> fieldList = new ArrayList<>();
However, if I look at trunk rev 11347, there is code to initialize fieldList
. It seems to have dropped out in the rebase. I don't think I saw anyone respond to this TODO previously. Do you have a resolution in mind here?
#653 Updated by Constantin Asofiei almost 4 years ago
There are some issues in IndexHelper.getPrimaryIndex:
- it was using dmoClass instead of dmoIface to get the annotation
- it was prefixing the index with idx__, while P2JIndex has the real name
This resulted in the primary index not being used when transferring the data to the requester (by the Agent), thus the records were in the wrong order.
Please review this patch:

### Eclipse Workspace Patch 1.0
#P p2j4011a
Index: src/com/goldencode/p2j/persist/IndexHelper.java
===================================================================
--- src/com/goldencode/p2j/persist/IndexHelper.java  (revision 2370)
+++ src/com/goldencode/p2j/persist/IndexHelper.java  (working copy)
@@ -332,7 +332,7 @@
    String indexName = null;
    Class<? extends Record> dmoClass = DBUtils.dmoClassForEntity(entity);
    Class<? extends DataModelObject> dmoIface = DmoMetadataManager.getDMOInterface(dmoClass);
-   Indices lIndexes = dmoClass.getAnnotation(Indices.class);
+   Indices lIndexes = dmoIface.getAnnotation(Indices.class);
    if (lIndexes != null)
    {
       for (Index lIndex : lIndexes.value())
@@ -359,8 +359,6 @@
    }
    else
    {
-      indexName = DBUtils.getPrefixedParameter(indexName, "idx__");
-
       String schema = DatabaseManager.getSchemaByClass(dmoClass);
       Iterator<P2JIndex> iter = DmoMetadataManager.databaseIndexes(dmoIface);
       while (iter.hasNext())
       {
#654 Updated by Eric Faulhaber almost 4 years ago
The patch looks ok to me.
#655 Updated by Eric Faulhaber almost 4 years ago
- Related to Bug #4703: investigate whether performance of TempTableDataSourceProvider can be improved added
#656 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
TemporaryBuffer makeMutable - yes, we need an interface there, otherwise the proxy can't be created. If you add back the TempTableRecord interface (and a separate interface with a primaryKey getter/setter, or do you add it to TempTableRecord?), then FWDDataObject can use that interface, too.
RecordBuffer makeArgumentBuffer and createProxy - yes, we need the TempTableRecord equivalent, otherwise proxy creation will not work.
About the TemporaryBuffer and RecordBuffer 'TempRecord' issues - so, do we add back the TempTableRecord and Persistable interfaces? I don't see another way of allowing the proxy to be created.
#657 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
Constantin Asofiei wrote:
TemporaryBuffer makeMutable - yes, we need an interface there, otherwise the proxy can't be created. If you add back the TempTableRecord interface (and a separate interface with a primaryKey getter/setter, or do you add it to TempTableRecord?), then FWDDataObject can use that interface, too.
RecordBuffer makeArgumentBuffer and createProxy - yes, we need the TempTableRecord equivalent, otherwise proxy creation will not work.
About the TemporaryBuffer and RecordBuffer 'TempRecord' issues - so, do we add back the TempTableRecord and Persistable interfaces? I don't see another way of allowing the proxy to be created.
This is fixed in 4011a_rebase rev 11525. Please review.
#658 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
There are some issues in IndexHelper.getPrimaryIndex:
- it was using dmoClass instead of dmoIface to get the annotation
- it was prefixing the index with idx__, while P2JIndex has the real name
This resulted in the primary index not being used when transferring the data to the requester (by the Agent), thus the records were in the wrong order.
Please review this patch:
[...]
Committed to 4011a_rebase rev 11526.
#659 Updated by Constantin Asofiei almost 4 years ago
blob and clob import is broken now. I'm working on fixing PropertyMapper.
#660 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
blob and clob import is broken now. I'm working on fixing PropertyMapper.
The same problem is in TypeManager.setBlobParameter. How do we create the blob, and be dialect specific? At TypeManager, we only have a PreparedStatement. And stmt.getConnection().createBlob() abends in the postgresql JDBC driver, as 'is not implemented'.
Some details about postgresql are here: https://jdbc.postgresql.org/documentation/80/binary-data.html, but this is very dialect specific.
Were there any tests related to blob/clob, done with 4011a?
#661 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin Asofiei wrote:
Some details about postgresql are here: https://jdbc.postgresql.org/documentation/80/binary-data.html, but this is very dialect specific.
Were there any tests related to blob/clob, done with 4011a?
No, I did not reach this point up until now. I did not expect this level of complexity.
#662 Updated by Constantin Asofiei almost 4 years ago
There's a change in 4011a_rebase which breaks RecordBuffer.initialize. This was previously done when getDMOImplementationClass was being called, like by makeMutable - the side effect was that the tempTableRef was being constructed during initialize, and this is exactly when the buffer is defined (before the external program's execute is run).
Now, in makeMutable this was replaced with getDmoInterface, and the initialize is being done on openScope - which is incorrect, as the static temp-table resource gets leaked to another scope... I've fixed this by calling buffer.initialize in makeMutable.
Also, Hotel GUI has this issue when trying to Checkout a Guest (from the Guests tab):
ArrayStoreException.<init>() line: 48 [local variables unavailable]
System.arraycopy(Object, int, Object, int, int) line: not available [native method]
AdaptiveQuery.load(Object[], LockType, boolean) line: 2447
CompoundQuery.retrieveImpl(int, LockType, boolean, boolean) line: 2624
CompoundQuery.retrieve(int, LockType, boolean, boolean) line: 2033
CompoundQuery.peekNext() line: 1334
Cursor.reposition(Long[], boolean) line: 365
Cursor.repositionByID(rowid, rowid...) line: 295
CompoundQuery(DynamicQuery).repositionByID(rowid, rowid...) line: 226
CompoundQuery.repositionByID(rowid, rowid...) line: 1
QueryWrapper.repositionByID(rowid, rowid...) line: 2871
GuestsFrame$2.lambda$null$0() line: 722
The problem here is that data for AdaptiveQuery.load is a full record, and this code expects it to be only a record ID (Long value). I've fixed this by building the record ID array from the Record instance.
These are in 4011a_rebase rev 11529.
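The fix described above (deriving the ID array from full records) can be sketched generically. Record and primaryKey() here are simplified stand-ins for the FWD types, not the actual implementation:

```java
import java.util.List;

public class RecordIdArray
{
   // Simplified stand-in for FWD's Record type; only the primary key matters here.
   interface Record
   {
      Long primaryKey();
   }

   // AdaptiveQuery.load expects record IDs (Long values), not full records,
   // so map each record to its primary key before handing the array over.
   static Long[] toIdArray(List<? extends Record> records)
   {
      Long[] ids = new Long[records.size()];
      for (int i = 0; i < records.size(); i++)
      {
         Record r = records.get(i);
         ids[i] = (r == null) ? null : r.primaryKey();
      }
      return ids;
   }
}
```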
#663 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:
Constantin Asofiei wrote:
Some details about postgresql are here: https://jdbc.postgresql.org/documentation/80/binary-data.html, but this is very dialect specific.
Were there any tests related to blob/clob, done with 4011a?
No, I did not reach this point up until now. I did not expect this level of complexity.
OK, I'll look tomorrow; maybe TypeManager should not receive a byte[] but an actual java.sql.Blob instance.
#664 Updated by Eric Faulhaber almost 4 years ago
Ovidiu, we are still converting LOBs for DMOs (interfaces and implementation classes), it seems, same as before. What is being generated for the DDL for these types? Previously, it looks like Hibernate generated oid for BLOB and text for CLOB (for PostgreSQL), and blob and clob, respectively, for H2.
#665 Updated by Constantin Asofiei almost 4 years ago
About the 'reWriteBatchedInserts=true' optimization for data import: I think this just reports '0' after the insert, but we fill the log with warnings that 'no record was inserted', in Persister.executeBatch.
Some details are here: https://vladmihalcea.com/postgresql-multi-row-insert-rewritebatchedinserts-property/ and we may just find a way to block the warning in Persister.executeBatch, for the data import case. Please advise.
#666 Updated by Ovidiu Maxiniuc almost 4 years ago
Eric Faulhaber wrote:
Ovidiu, we are still converting LOBs for DMOs (interfaces and implementation classes), it seems, same as before. What is being generated for the DDL for these types? Previously, it looks like Hibernate generated oid for BLOB and text for CLOB (for PostgreSQL), and blob and clob, respectively, for H2.
We use the same:
H2: "blob" -> "blob" and "clob" -> "clob"
PSQL: "blob" -> "oid" and "clob" -> "text"
See the end of construction of the fwd2sql map in each dialect.
#667 Updated by Constantin Asofiei almost 4 years ago
4011a_rebase r11531 has modified build.gradle to use the fwd-h2 patched in this task, as ver 1.4.197-20200530.
#668 Updated by Eric Faulhaber almost 4 years ago
This is the first time I'm trying to fire up the FWD server with 4011a_rebase (rev 11531), in order to run a test case. I get this on startup:
com.goldencode.p2j.cfg.ConfigurationException: Initialization failure
   at com.goldencode.p2j.main.StandardServer.hookInitialize(StandardServer.java:2083)
   at com.goldencode.p2j.main.StandardServer.bootstrap(StandardServer.java:999)
   at com.goldencode.p2j.main.ServerDriver.start(ServerDriver.java:483)
   at com.goldencode.p2j.main.CommonDriver.process(CommonDriver.java:444)
   at com.goldencode.p2j.main.ServerDriver.process(ServerDriver.java:207)
   at com.goldencode.p2j.main.ServerDriver.main(ServerDriver.java:860)
Caused by: java.lang.IncompatibleClassChangeError: Implementing class
   at java.lang.ClassLoader.defineClass1(Native Method)
   at java.lang.ClassLoader.defineClass(ClassLoader.java:756)
   at java.lang.ClassLoader.defineClass(ClassLoader.java:635)
   at com.goldencode.asm.AsmClassLoader.findClass(AsmClassLoader.java:186)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
   at com.goldencode.asm.AsmClassLoader.loadClass(AsmClassLoader.java:220)
   at com.goldencode.p2j.persist.orm.DmoClass.load(DmoClass.java:1518)
   at com.goldencode.p2j.persist.orm.DmoClass.forInterface(DmoClass.java:386)
   at com.goldencode.p2j.persist.orm.DmoMetadataManager.getImplementingClass(DmoMetadataManager.java:265)
   at com.goldencode.p2j.persist.orm.DmoMetadataManager.registerDmo(DmoMetadataManager.java:180)
   at com.goldencode.p2j.persist.meta.MetadataManager.addTableToFile(MetadataManager.java:646)
   at com.goldencode.p2j.persist.meta.MetadataManager.lambda$populateFileTable$1(MetadataManager.java:612)
   at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
   at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:647)
   at com.goldencode.p2j.persist.meta.MetadataManager.populateFileTable(MetadataManager.java:612)
   at com.goldencode.p2j.persist.meta.MetadataManager.access$600(MetadataManager.java:108)
   at com.goldencode.p2j.persist.meta.MetadataManager$SystemTable.lambda$static$2(MetadataManager.java:1798)
   at com.goldencode.p2j.persist.meta.MetadataManager$SystemTable.lambda$new$3(MetadataManager.java:1841)
   at com.goldencode.p2j.persist.meta.MetadataManager$SystemTable.lambda$populateAll$6(MetadataManager.java:1906)
   at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
   at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
   at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
   at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
   at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
   at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
   at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
   at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
   at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
   at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
   at com.goldencode.p2j.persist.meta.MetadataManager$SystemTable.populateAll(MetadataManager.java:1906)
   at com.goldencode.p2j.persist.meta.MetadataManager.populateDatabase(MetadataManager.java:572)
   at com.goldencode.p2j.persist.DatabaseManager.initialize(DatabaseManager.java:1611)
   at com.goldencode.p2j.persist.Persistence.initialize(Persistence.java:864)
   at com.goldencode.p2j.main.StandardServer$11.initialize(StandardServer.java:1244)
   at com.goldencode.p2j.main.StandardServer.hookInitialize(StandardServer.java:2079)
   ... 5 more
I've updated to the latest testcases revision to pick up the p2j.cfg.xml changes needed for the metadata tables. If I start the server with 4011a rev 11500, it works without error.
Any ideas?
#669 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:
Eric Faulhaber wrote:
Ovidiu, we are still converting LOBs for DMOs (interfaces and implementation classes), it seems, same as before. What is being generated for the DDL for these types? Previously, it looks like Hibernate generated oid for BLOB and text for CLOB (for PostgreSQL), and blob and clob, respectively, for H2.
We use the same:
H2: "blob" -> "blob" and "clob" -> "clob"
PSQL: "blob" -> "oid" and "clob" -> "text"
See the end of construction of the fwd2sql map in each dialect.
I don't understand how this impacts TypeManager.setBlobParameter and Record._setBlob. I've added _setBlob as:

byte[] datum = null;
if (w != null && !w.isUnknown())
{
   datum = w.getByteArray();
}
setDatum(offset, datum);

and _getBlob as:

byte[] datum = (byte[]) data[offset];
return datum != null ? new blob(datum) : new blob();
Also, I've made changes for clob, object and comhandle, and I've reached a point where TypeManager knows of a BiFunction<Connection, byte[], Blob> 'blobCreator' function like this:

static int setBlobParameter(PreparedStatement stmt, int index, Object val)
throws SQLException
{
   if (val == null)
   {
      stmt.setNull(index, Types.BLOB);
   }
   else
   {
      Connection conn = stmt.getConnection();
      BiFunction<Connection, byte[], Blob> blobCreator = blobCreators.get(conn.getMetaData().getURL());
      Blob blob = blobCreator.apply(conn, (byte[]) val);
      stmt.setBlob(index, blob);
   }
   return 1;
}
The blobCreators map is populated by PooledDataSourceProvider, like this:

TypeManager.addBlobCreator(url, dialect::blobCreator);

Now I'm stuck at the dialect-specific implementation for public abstract Blob blobCreator(Connection conn, byte[] bytes);. (Note that there is an equivalent clobCreator, too.)
#670 Updated by Constantin Asofiei almost 4 years ago
BlobType.readProperty is like this:

public int readProperty(ResultSet rs, int rsOffset, Object[] data, int propIndex)
throws SQLException
{
   Blob blob = rs.getBlob(rsOffset);
   if (rs.wasNull())
   {
      data[propIndex] = null;
   }
   else
   {
      blob b = new blob(blob);
      data[propIndex] = b.getByteArray();
   }
   return 1;
}
But I don't know how correct this is, yet.
Also, something else to consider. If we create a LargeObject (a new OID) each time we update the BLOB field, then this may result in a leak of 'unreferenced OIDs'.
#671 Updated by Constantin Asofiei almost 4 years ago
And now I'm stuck because we are using c3p0 (which has com.mchange.v2.c3p0.impl.NewProxyConnection instances for java.sql.Connection, with its inner field wrapping the real PGConnection instance), and I need the real PGConnection to access the getLargeObjectAPI API.
com.mchange.v2.c3p0.impl.NewProxyConnection has public Object rawConnectionOperation(Method m, Object target, Object[] args) and this might work, but this is ugly.
#672 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
And now I'm stuck because we are using c3p0 (which has com.mchange.v2.c3p0.impl.NewProxyConnection instances for java.sql.Connection, with its inner field wrapping the real PGConnection instance), and I need the real PGConnection to access the getLargeObjectAPI API.
com.mchange.v2.c3p0.impl.NewProxyConnection has public Object rawConnectionOperation(Method m, Object target, Object[] args) and this might work, but this is ugly.
That is not enough; the code I have looks like this:
@Override
public Blob blobCreator(Connection conn, byte[] bytes)
{
   try
   {
      LargeObjectManager lobj = null;
      if (conn instanceof C3P0ProxyConnection)
      {
         Method mthd = PGConnection.class.getMethod("getLargeObjectAPI");
         C3P0ProxyConnection c3p0conn = (C3P0ProxyConnection) conn;
         lobj = (LargeObjectManager) c3p0conn.rawConnectionOperation(mthd, C3P0ProxyConnection.RAW_CONNECTION, null);
      }
      else
      {
         PGConnection pgconn = (PGConnection) conn;
         lobj = pgconn.getLargeObjectAPI();
      }
      long oid = lobj.createLO(LargeObjectManager.READ | LargeObjectManager.WRITE);
      org.postgresql.largeobject.LargeObject obj = lobj.open(oid, LargeObjectManager.WRITE);
      obj.write(bytes);
      obj.close();
      return new PgBlob((BaseConnection) pgconn, oid);
   }
   catch (SQLException e)
   {
      throw new RuntimeException(e);
   }
}
I need to find another way of creating the PgBlob instance...
#673 Updated by Ovidiu Maxiniuc almost 4 years ago
I found a possible issue in DmoMetadataManager, when tables from different databases have the same name.
I am not aware whether this case occurs in our customers' code, but the issue should be fixed, unless there is something more important I can help with. Please let me know.
#674 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:
Unless there is something more important I can help with. Please let me know.
Ovidiu, please check Hotel GUI for any other issues with 4011a_rebase.
Also, go ahead with the ChUI app regression testing.
#675 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin Asofiei wrote:
Ovidiu, please check Hotel GUI for any other issues with 4011a_rebase.
OK, I will focus on this.
Also, go ahead with the ChUI app regression testing.
... while running the ChUI tests in the background (on devsrv01).
#676 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
I need to find another way of creating the PgBlob instance...
In the end, PostgreSQL worked just by passing a java.sql.Blob (or Clob) implementation. For H2 and SqlServer2008 I'm still using the JDBC Connection.createBlob/createClob to create these.
The import worked in the customer project.
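The comment above doesn't say which java.sql.Blob implementation was used; one ready-made option for wrapping a byte[] is the JDK's own javax.sql.rowset.serial.SerialBlob, sketched here as an assumption, not necessarily what FWD does:

```java
import java.sql.Blob;
import java.sql.SQLException;
import javax.sql.rowset.serial.SerialBlob;

public class BlobFromBytes
{
   // Build a driver-independent java.sql.Blob around a byte[], avoiding
   // Connection.createBlob(), which the PostgreSQL JDBC driver does not implement.
   static Blob toBlob(byte[] bytes) throws SQLException
   {
      return new SerialBlob(bytes);
   }
}
```

Such a Blob can then be handed to PreparedStatement.setBlob without touching the underlying (possibly pooled) connection.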
#677 Updated by Constantin Asofiei almost 4 years ago
I have a new case of name mismatch in TempTableBuilder.getExistingIndexes.
The field name is order, P2JIndexComponent has order for both name and originalName, and columnToPropertyMap (from DatabaseManager.getColumnToPropertyMap(database, dmoClass, null, false)) has order_=order.
The lookup is done as columnToPropertyMap.get("order"), where "order" is component.getName().
I need some advice on this one.
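To make the mismatch concrete, here is a minimal sketch; the map contents mirror the values quoted above, but the class and method are hypothetical, not FWD code:

```java
import java.util.Map;

public class LookupMismatch
{
   // columnToPropertyMap as described above: SQL column name -> property name.
   static final Map<String, String> COLUMN_TO_PROPERTY = Map.of("order_", "order");

   // The index component reports the property name, so looking it up as if it
   // were a column name misses the entry and returns null (leading to the NPE).
   static String lookup(String key)
   {
      return COLUMN_TO_PROPERTY.get(key);
   }
}
```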
#678 Updated by Constantin Asofiei almost 4 years ago
The recreate is this:
def temp-table tt1 field order as int index ix1 order.
create tt1.
tt1.order = 1.

def var h as handle.
create temp-table h.
h:create-like(buffer tt1:handle).
h:temp-table-prepare("tt2").
h:copy-temp-table(buffer tt1:handle).

def var hq as handle.
create query hq.
hq:add-buffer(h:default-buffer-handle).
hq:query-prepare("for each tt2").
hq:query-open().
hq:get-first().
message h:default-buffer-handle::order.
with this NPE:
Caused by: java.lang.NullPointerException
   at com.goldencode.p2j.persist.TableMapper$LegacyFieldInfo.access$19(TableMapper.java:3463)
   at com.goldencode.p2j.persist.TableMapper.getLegacyFieldName(TableMapper.java:1240)
   at com.goldencode.p2j.persist.TempTableBuilder.getExistingIndexes(TempTableBuilder.java:688)
   at com.goldencode.p2j.persist.TempTableBuilder.getExistingIndexes(TempTableBuilder.java:720)
   at com.goldencode.p2j.persist.TempTableBuilder.createTableLikeImpl(TempTableBuilder.java:3106)
   at com.goldencode.p2j.persist.TempTableBuilder.createLike(TempTableBuilder.java:1175)
#679 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
I have a new case of name mismatch in TempTableBuilder.getExistingIndexes.
The field name is order, P2JIndexComponent has order for both name and originalName, and columnToPropertyMap (from DatabaseManager.getColumnToPropertyMap(database, dmoClass, null, false)) has order_=order.
The lookup is done as columnToPropertyMap.get("order"), where "order" is component.getName().
I need some advice on this one.
Ovidiu, is the lookup key wrong here?
#680 Updated by Eric Faulhaber almost 4 years ago
Either the column name key (i.e., the post-SQL-keyname-conflict-resolution version with the trailing underscore) is wrong in DatabaseManager.columnToProperyMap, or the unadjusted component name in the P2JIndex is wrong. I think we should figure out which one is different using pre-4011a code and match that behavior. However, I would like Ovidiu's input, in case a change was intentional for some reason.
#681 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Either the column name key (i.e., the post-SQL-keyname-conflict-resolution version with the trailing underscore) is wrong in DatabaseManager.columnToProperyMap, or the unadjusted component name in the P2JIndex is wrong.
The P2JIndexComponent name and originalName are both order_ with trunk.
#682 Updated by Eric Faulhaber almost 4 years ago
That suggests to me that the column attribute of the @Property annotation on the DMO interface is wrong, and should be order_ instead of order. That would be a schema conversion issue. We resolve conflicts with SQL keynames in NameConverter, but maybe we are setting the column name incorrectly into that annotation.
What is the column attribute on the DMO implementation class for the trunk-converted case? Or is there one? I think we might have pulled it from the HBM file pre-4011a.
But again, I'd like Ovidiu's input, in case there are other dependencies on that column attribute...
#683 Updated by Constantin Asofiei almost 4 years ago
In 4011a_rebase the index component is built from the DMO interface:

@Index(name = "tt1_ix1", legacy = "ix1", components = { @IndexComponent(name = "order", legacy = "order") })

while in trunk it is built from dmo_index.xml:

<class interface="Tt1_1_1">
   <index name="idx__tt1_ix1">
      <column name="order_"/>
   </index>
</class>

In 4011a_rebase, the field getter has this at the DMO iface:

@Property(id = 1, name = "order", column = "order_", legacy = "order", initial = "0", order = 0)

It seems to me that P2JIndexComponent can be used with FQL- and SQL-style index components... and it is very hard (at least for me) to easily determine which usage is which.
#684 Updated by Ovidiu Maxiniuc almost 4 years ago
I think the problem is the ambiguity of P2JIndex. It is a descriptor of an index, which may be either a Progress index or an SQL database index, and a P2JIndexComponent keeps whichever meaning its parent index has.
In DmoMeta.getDatabaseIndexes(), p2jIndex.addComponent() uses compName, which is the property name (in this case "order"). Having the property name allows access to Property and so to the SQL column. I think this is the correct way, but in the current circumstances it is wrong. Why? Because in TempTableBuilder.getExistingIndexes, that value is mapped through columnToPropertyMap. This map is correctly built, so the lookup will fail. We already have the fieldName (the property name, to be precise).
In my opinion, we should set in TempTableBuilder:686:

String fieldName = columnName;

and get rid of columnToPropertyMap completely. I do not like this double mapping.
More than that, P2JIndexComponent and P2JIndex should have a property to indicate what kind of data they carry. Or, better, use a convention and always use java/property names. This way we can obtain both the legacy name and the SQL counterpart for both objects.
#685 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin Asofiei wrote:
It seems to me that P2JIndexComponent can be used with FQL- and SQL-style index components... and it is very hard (at least for me) to easily determine which usage is which.
Exactly! This is the problem.
#686 Updated by Eric Faulhaber almost 4 years ago
I am all for getting rid of multi-layer mappings; they slow things down and complicate code maintenance. But there are other callers to that DatabaseManager API. How much effort do you estimate to clean those up, so that everything is using the cleaner mapping?
I also agree with your suggestion about adding a flag to P2JIndex to make it more obvious which type of use it is. When I wrote this class, I intentionally left it flexible to be able to handle both, but I see this has become a cause for confusion, so let's correct that.
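A minimal sketch of the flag being discussed; the enum and field names here are hypothetical, not the eventual P2JIndex API:

```java
public class IndexKindSketch
{
   // Which namespace the component names of an index descriptor live in.
   enum NameKind { LEGACY, PROPERTY, SQL_COLUMN }

   // Descriptor that carries its name kind explicitly, so callers no longer
   // have to guess whether components hold property names or SQL column names.
   static class IndexDescriptor
   {
      final String name;
      final NameKind kind;

      IndexDescriptor(String name, NameKind kind)
      {
         this.name = name;
         this.kind = kind;
      }
   }
}
```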
#687 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:
In my opinion, we should set in TempTableBuilder:686:
[...]
and get rid of columnToPropertyMap completely. I do not like this double mapping.
This solves this NPE, but now it has moved to DmoMetadataManager.getExistingFields. For the recid (PK) field, ParmType.fromClass returns null:

ret.add(new P2JField(p.name(), ParmType.fromClass(bdtType),

Here, bdtType is java.lang.Long. The same will be true for _multiplex and, I suspect, all the other TempTableRecord fields.
How do I map these? Or do I just ignore the non-BDT fields (i.e. PK and TempTableRecord fields) and assume that these are implicitly added?
#688 Updated by Ovidiu Maxiniuc almost 4 years ago
Use this in DmoMetadataManager:375 (method getExistingFields()):
if (p.id() < 0)
{
// skip ReservedProperties
continue;
}
#689 Updated by Constantin Asofiei almost 4 years ago
Thanks, that solved this one. Now moving on to more TempTableRecord properties. The failure is this:
[07/02/2020 01:33:52 EEST] (com.goldencode.p2j.persist.Persistence:SEVERE) [00000001:0000000C:ReportAppserverProcess-->local/_temp/primary] error executing query [from Tt1_1_1__Impl__ as tt1 where ((tt1._multiplex = ?0) and (tt1._rowState = 1)) order by tt1.order asc, tt1._multiplex asc, tt1.recid asc]
com.goldencode.p2j.persist.PersistenceException: Error while processing the SQL list
   at com.goldencode.p2j.persist.orm.SQLQuery.list(SQLQuery.java:416)
   at com.goldencode.p2j.persist.orm.Query.list(Query.java:257)
   at com.goldencode.p2j.persist.Persistence.list(Persistence.java:1532)
Caused by: org.h2.jdbc.JdbcSQLException: Column "TT1_1_1__I0_._ROWSTATE" not found; SQL statement:
select tt1_1_1__i0_.recid as id0_, tt1_1_1__i0_._multiplex as column1_0_, tt1_1_1__i0_.order_ as order2_0_ from tt1 tt1_1_1__i0_ where tt1_1_1__i0_._multiplex = ? and tt1_1_1__i0_._rowState = 1 order by tt1_1_1__i0_.order_ asc nulls last, tt1_1_1__i0_._multiplex asc nulls last, tt1_1_1__i0_.recid asc nulls last limit ? [42122-197]
   at org.h2.message.DbException.getJdbcSQLException(DbException.java:357)
because of this:
for each tt1 where row-state(tt1) = 1:
   message tt1.order.
end.
I don't see the _rowMeta and the others in DMOMeta.fields as ReservedProperty.
#690 Updated by Ovidiu Maxiniuc almost 4 years ago
The dataset-related fields are added to DmoMeta only if the table has before/after tables declared (see src/com/goldencode/p2j/persist/orm/DmoMeta.java:189). Is this too aggressive? Probably, as you could write queries on those properties even if they are not "enabled" by having the pair table. Please remove the "if" line.
#691 Updated by Constantin Asofiei almost 4 years ago
Thanks, that solved this. The next problem is TableMapper.legacyGettersByProperty - this doesn't include the TempTableRecord properties, but srcBuf.getLegacyFieldNameMap().getLegacyField2Name() includes them, and BUFFER-COPY fails with an NPE because of this.
So, do you recall if BUFFER-COPY touches these properties?
#692 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin Asofiei wrote:
Thanks, that solved this. The next problem is TableMapper.legacyGettersByProperty - this doesn't include the TempTableRecord properties, but srcBuf.getLegacyFieldNameMap().getLegacyField2Name() includes them, and BUFFER-COPY fails with an NPE because of this.
So, do you recall if BUFFER-COPY touches these properties?
I do not remember; I need to test, but at the moment I do not have a working OE environment. However, it seems logical to update (or at least reset) them.
#693 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:
Constantin Asofiei wrote:
Thanks, that solved this. And the following issue is TableMapper.legacyGettersByProperty - this doesn't include the TempTableRecord properties, but srcBuf.getLegacyFieldNameMap().getLegacyField2Name() includes them, and BUFFER-COPY fails with an NPE because of this. So, do you recall if BUFFER-COPY touches these properties?
I do not remember; I need to test, but at the moment I do not have a working OE environment. However, it seems logical to update (or at least reset) them.
I've checked with trunk; srcBuf.getLegacyFieldNameMap().getLegacyField2Name() doesn't include the TempTableRecord properties.
Is it safe to change getLegacyField2Name() to omit these properties?
#694 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
Is it safe to change getLegacyField2Name() to omit these properties?
It seems that omitting these in TableMapper$LegacyTableInfo.loadFields (the same way as in DmoMeta) solves the BUFFER-COPY issue.
#695 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin Asofiei wrote:
Is it safe to change getLegacyField2Name() to omit these properties?
I do not know exactly. Were they omitted in trunk?
If so, I guess we have to filter them out in TableMapper:1993.
#696 Updated by Eric Faulhaber almost 4 years ago
I have committed 4011a_rebase rev 11533, which fixes the CAN-FIND in the test case from #4011-577 and enables dirty share for all persistent tables by default. While safer functionally, this default behavior will have a negative impact on performance. It can be overridden with a directory entry in the persistence node, as follows:
<node class="container" name="persistence">
   ...
   <node class="boolean" name="disable-global-dirty-share">
      <node-attribute name="value" value="true"/>
   </node>
   ...
</node>
The idea is to figure out which tables actually need the dirty share support, to mark them with a dirty-read annotation via a schema hint (or preferably to fix the 4GL code to eliminate the need), and then to disable the default dirty share behavior with the above directory setting. I am working on some logging/instrumentation to help identify which tables and areas of the code need dirty share support.
#697 Updated by Constantin Asofiei almost 4 years ago
Ovidiu/Eric: when does a deleted record get evicted from the orm.Session.cache? I can't find any place for this... and when reclaiming IDs for a temp-table, at some point we end up with a cache collision. My assumption is that the cache is not being updated when a record is deleted.
#698 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
Ovidiu/Eric: when does a deleted record get evicted from the orm.Session.cache? I can't find any place for this... and when reclaiming IDs for a temp-table, at some point we end up with a cache collision. My assumption is that the cache is not being updated when a record is deleted.
No, the problem was that TemporaryBuffer.nextPrimaryKey was resetting the DMO's ID sequence to a reclaimed key. This can't be done - reclaimed keys are used 'as is' until exhausted, while the ID generator sequence must always remain increasing.
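The allocation rule described above can be sketched in isolation (KeyAllocator and its methods are illustrative stand-ins, not the actual FWD code): reclaimed keys are handed out "as is" until exhausted, while the generator value is never moved backwards.

```java
import java.util.TreeSet;

// Hypothetical sketch of the key allocation rule from this discussion.
public class KeyAllocator
{
   private final TreeSet<Long> reclaimed = new TreeSet<>();
   private long nextId = 0;   // the ID generator sequence; must never go backwards

   public long nextPrimaryKey()
   {
      Long key = reclaimed.pollFirst();
      if (key != null)
      {
         return key;          // reuse a reclaimed key, but do NOT touch nextId
      }
      return nextId++;
   }

   public void reclaim(long key)
   {
      reclaimed.add(key);
   }
}
```

Allocating 0, 1, 2, reclaiming 1, then allocating twice more yields 1 and then 3. Resetting the generator to the reclaimed key instead (the regression described above) would hand out 2 a second time and collide in the session cache.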
#699 Updated by Constantin Asofiei almost 4 years ago
Ovidiu, in SQL, do we still need the '_offset' column when comparing a datetime-tz field with another (string representation of a) datetime-tz?
I ask because in both H2 and PostgreSQL we have a timestamp with time zone field type. So, do we still need the offset? And if not, what is the java.sql equivalent of this field, to set it at the pl.Operators method definition?
#700 Updated by Ovidiu Maxiniuc almost 4 years ago
All our supported dialects have a field type which includes the time zone. However, they decided to support them differently. The big problem is PostgreSQL. If you write such a field, PostgreSQL will transform it to the local timezone. From the POV of internal SQL operations this is fine; the values compare well because the instant in time is kept. The problem is when the value is fetched and rehydrated: we lose the original time offset, which will always be the same for all fields, as it was adjusted when saved. For that reason we decided to keep the '_offset' column, to set the DTZ to the correct time zone/offset.
Why do you need to compare string representations? As we have the timestamp with time zone field type (or equivalent) for all dialects, the SQL should compare those instances.
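The rehydration problem can be illustrated with plain java.time, independent of any database or of FWD's own types (DtzOffsetDemo and rehydrate are made up for the example):

```java
import java.time.Instant;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;

// A timestamp normalized to one zone keeps the instant but loses the
// original offset; persisting the offset separately (the '_offset' column)
// lets the DATETIME-TZ value be rehydrated exactly.
public class DtzOffsetDemo
{
   // rebuild the original DATETIME-TZ from the stored instant plus the
   // separately persisted offset (in seconds)
   public static OffsetDateTime rehydrate(Instant stored, int offsetSeconds)
   {
      return stored.atOffset(ZoneOffset.ofTotalSeconds(offsetSeconds));
   }

   public static void main(String[] args)
   {
      OffsetDateTime original = OffsetDateTime.parse("2020-07-02T10:00:00+09:00");

      // what the database effectively keeps: the normalized instant
      Instant stored = original.toInstant();
      int offsetSeconds = original.getOffset().getTotalSeconds();

      // without the offset, rehydration yields the same instant but a
      // different (normalized) offset, so the legacy value is not identical
      OffsetDateTime normalized = stored.atOffset(ZoneOffset.UTC);
      System.out.println(normalized.isEqual(original));   // true: same instant
      System.out.println(normalized.equals(original));    // false: offset lost

      // with the offset, the exact original value is recovered
      System.out.println(rehydrate(stored, offsetSeconds).equals(original)); // true
   }
}
```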
#701 Updated by Eric Faulhaber almost 4 years ago
Committed the DirtyShareSupport class missing from the last commit, as 4011b rev 11534.
#702 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Constantin Asofiei wrote:
Ovidiu/Eric: when does a deleted record get evicted from the orm.Session.cache? I can't find any place for this... and when reclaiming IDs for a temp-table, at some point we end up with a cache collision. My assumption is that the cache is not being updated when a record is deleted.
No, the problem was that TemporaryBuffer.nextPrimaryKey was resetting the DMO's ID sequence to a reclaimed key. This can't be done - reclaimed keys are used 'as is' until exhausted, while the ID generator sequence must always remain increasing.
I'm not sure what you mean by "was resetting the DMO's ID sequence". Where do you see the sequence being reset?
Please note that the use of the reclaimed keys (for temp-tables only) was critical in fixing a bug (a very long time ago -- I've long since forgotten the exact details) to prevent us from either getting into an infinite loop or skipping/missing records in a PreselectQuery set to dynamic mode.
The use of reclaimed keys for persistent tables OTOH was too unpredictable and was abandoned.
#703 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Constantin Asofiei wrote:
Constantin Asofiei wrote:
Ovidiu/Eric: when does a deleted record get evicted from the orm.Session.cache? I can't find any place for this... and when reclaiming IDs for a temp-table, at some point we end up with a cache collision. My assumption is that the cache is not being updated when a record is deleted.
No, the problem was that TemporaryBuffer.nextPrimaryKey was resetting the DMO's ID sequence to a reclaimed key. This can't be done - reclaimed keys are used 'as is' until exhausted, while the ID generator sequence must always remain increasing.
I'm not sure what you mean by "was resetting the DMO's ID sequence". Where do you see the sequence being reset?
Please note that the use of the reclaimed keys (for temp-tables only) was critical in fixing a bug (a very long time ago -- I've long since forgotten the exact details) to prevent us from either getting into an infinite loop or skipping/missing records in a PreselectQuery set to dynamic mode.
The use of reclaimed keys for persistent tables OTOH was too unpredictable and was abandoned.
This code in TemporaryBuffer$Context.nextPrimaryKey:
if (keys != null)
{
   ret = keys.first();
   id.set(ret);   // <--- the DMO's ID sequence is reset to the reclaimed key, which is incorrect
   keys.remove(ret);
   if (keys.isEmpty())
   {
      reclaimedKeys.remove(dmoIface);
   }
}
#704 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:
All our supported dialects have a field type which includes the time zone. However, they decided to support them differently. The big problem is PostgreSQL. If you write such a field, PostgreSQL will transform it to the local timezone. From the POV of internal SQL operations this is fine; the values compare well because the instant in time is kept. The problem is when the value is fetched and rehydrated: we lose the original time offset, which will always be the same for all fields, as it was adjusted when saved. For that reason we decided to keep the '_offset' column, to set the DTZ to the correct time zone/offset.
So, what you are saying is that all operators which have a datetime-tz operand (or its equivalent in SQL) no longer need PL/Java? Previously, HQLPreprocessor was emitting something like lte(tt1.f1, "timezone-with-offset-string"), where timezone-with-offset-string is the string representation of that dtz substitution parameter.
So, instead of lte, we should emit tt1.f1 < ?, and use PreparedStatement.setTimestamp to set the statement's substitution parameter?
#705 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin Asofiei wrote:
So, instead of lte, we should emit tt1.f1 < ?, and use PreparedStatement.setTimestamp to set the statement's substitution parameter?
Yes, this is the idea. The databases will handle this correctly as per my tests with DTZ fields.
#706 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
This code in TemporaryBuffer$Context.nextPrimaryKey:
[...]
OK, yes, I see now. Agreed, that is a regression.
#707 Updated by Constantin Asofiei almost 4 years ago
4011b rev 11535 completed implementation for blob, clob, comhandle, handle, object and raw field types. Please review.
#708 Updated by Constantin Asofiei almost 4 years ago
- Ignore DMO properties like _error-flag, _row-state, etc, where only table-defined legacy props are required.
- Fixed field's initial value when that is null.
- TempTableBuilder.getExistingIndexes - use the ORM field name when building the index.
- Fixed ID reclaim in TemporaryBuffer (do not change the DMO's ID generator value).
- Fixed TemporaryBuffer.removeRecords when there are OO fields.
- BufferImpl - 2 cases of hard-coded 'id' field changed to Session.PK.
#709 Updated by Ovidiu Maxiniuc almost 4 years ago
4011b rev 11535 seems fine to me. It's really strange that the PSQL dialect has such special needs.
4011b rev 11536: one question for the DmoMeta ctor: are the dataset-related properties accessible (even if the values returned are nulls) for all tables, including permanent ones? At least this is what I understand from your comment. In this case they should be moved outside the if (tempTable) block.
#710 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:
4011b rev 11535 seems fine to me. It's really strange that the PSQL dialect has such special needs.
4011b rev 11536: one question for the DmoMeta ctor: are the dataset-related properties accessible (even if the values returned are nulls) for all tables, including permanent ones? At least this is what I understand from your comment. In this case they should be moved outside the if (tempTable) block.
The comment was meant to replace the 'only before or after table' one. It's confusing; I'll change it.
#711 Updated by Eric Faulhaber almost 4 years ago
I'm getting an NPE when checking in a guest in Hotel GUI:
Caused by: java.lang.NullPointerException
   at com.goldencode.p2j.persist.RandomAccessQuery.processDirtyResults(RandomAccessQuery.java:4490)
   at com.goldencode.p2j.persist.RandomAccessQuery.executeImpl(RandomAccessQuery.java:4383)
   at com.goldencode.p2j.persist.RandomAccessQuery.execute(RandomAccessQuery.java:3368)
   at com.goldencode.p2j.persist.RandomAccessQuery.first(RandomAccessQuery.java:1466)
   at com.goldencode.p2j.persist.RandomAccessQuery.first(RandomAccessQuery.java:1363)
   at com.goldencode.hotel.UpdateStayDialog$3.lambda$body$1(UpdateStayDialog.java:750)
   ...
The map of index names by sort phrase has been regressed. Working on it now.
#712 Updated by Constantin Asofiei almost 4 years ago
Ovidiu, is it normal for a before-table to explicitly generate the _error-flag, _row-state and other related properties at the DMO interface? I ask because although these are 'private', they are emitted with a non-negative ID.
#713 Updated by Ovidiu Maxiniuc almost 4 years ago
They have an id reserved in ReservedProperty, indeed.
However, if I remember correctly, they are true fields in the database, so they can be accessed using any of these forms:
ROW-STATE(tt0)
tt0.__row-state__
hBuffTt0:ROW-STATE
buffer tt0::__row-state__
buffer tt0:buffer-field("__row-state__"):value
IIRC, the first two forms can also appear in WHERE predicates, where they are converted to the same construction.
#714 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:
They have an id reserved in ReservedProperty, indeed.
In the DMO interface, the annotation is this:
@Property(id = 6, name = "_peerRowid", column = "_peerRowid", legacy = "__after-rowid__", initialNull = true, order = 50)
There's no ReservedProperty here.
#715 Updated by Constantin Asofiei almost 4 years ago
More blob, clob and datetimetz fixes are in 4011b rev 11537.
#716 Updated by Ovidiu Maxiniuc almost 4 years ago
Review for 4011b rev 11537:
- in which cases in FqlToSqlConverter.getSqlColumnAlias is the sqlName null, so that the "column" fallback needs to be used? It makes sense, but my imagination does not help now. Only digits?
- TypeManager.setFwdDatetimetzParameter(): you decided not to set the second parameter reserved for DTZ. This will generate some imbalance between the SQL parameters. Do you have a testcase for this?
Regarding the before-table specific reserved fields: I wonder if they need to be added in the DmoMeta ctor at all. They might have been needed in a previous iteration, but now they are generated directly in the DMO, as you noticed.
#717 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:
Review for 4011b rev 11537:
- in which cases in FqlToSqlConverter.getSqlColumnAlias is the sqlName null, so that the "column" fallback needs to be used? It makes sense, but my imagination does not help now. Only digits?
I don't have a standalone test. This was because FQL2SQL is treating the _errorString and the other TempTableRecord properties as 'legacy fields'. And for these, there is no letter-only prefix.
- TypeManager.setFwdDatetimetzParameter(): you decided not to set the second parameter reserved for DTZ. This will generate some imbalance between the SQL parameters. Do you have a testcase for this?
Well, didn't you state that in the WHERE clause we need to work only with the actual field, and not the _offset one? That's why I'm not emitting the _offset one.
Regarding the before-table specific reserved fields: I wonder if they need to be added in the DmoMeta ctor at all. They might have been needed in a previous iteration, but now they are generated directly in the DMO, as you noticed.
If they are not treated as ReservedProperties anymore, lots of code will break. Unless the annotation at the DMO interface is changed from Property to ReservedProperty.
#718 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin Asofiei wrote:
Ovidiu Maxiniuc wrote:
TypeManager.setFwdDatetimetzParameter(): you decided not to set the second parameter reserved for DTZ. This will generate some imbalance between the SQL parameters. Do you have a testcase for this?
Well, didn't you state that in the WHERE clause we need to work only with the actual field, and not the _offset one? That's why I'm not emitting the _offset one.
I see, then we have a problem. The same API is used for INSERT/UPDATE statements. When a DTZ is saved, both components need to be stored.
Probably we can solve this by adding a new parameter to the ParameterSetter.setParameter() hierarchy, to let the method know how the parameter needs to be set.
#719 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:
I see, then we have a problem. The same API is used for INSERT/UPDATE statements. When a DTZ is saved, both components need to be stored.
Probably we can solve this by adding a new parameter to the ParameterSetter.setParameter() hierarchy, to let the method know how the parameter needs to be set.
As we discussed last night, for inserting/updating a record we use the Persister APIs, which rely on a DataHandler implementation. For datetimetz, this calls TypeManager.setOffsetDateTimeParameter, and not setFwdDatetimetzParameter.
My understanding is that the TypeManager.setFwd* methods are used only in query SQLs, where the arguments are real BDT instances. For insert/update, the arguments are the datums at the record, which are not BDT.
The only case where an UPDATE FQL is used by FWD is TemporaryBuffer.removeRecords - and in this case, the statement will always be like UPDATE ... SET _multiplex - no other property can appear in the SET clause.
So I think we are good with the changes.
#720 Updated by Eric Faulhaber almost 4 years ago
In email, Constantin wrote:
BufferManager.beginTx and endTx both stand out in the visualvm sampler. This is because we are iterating ~3000 openBuffers on each and every scope start/finish.
This openBuffers iteration is a long-standing issue that we need to solve. We need to explore ways to make the list of open buffers to iterate on every BufferManager beginTx and endTx call much smaller. I haven't looked into this for a while, but IIRC most of these buffers can't be accessed from the current block and this work is just about maintaining state for when we pop back up to a scope where they can be used. There must be a way to safely short-cut/short-circuit this work for these buffers.
Rather than copying these around between collections, I'd prefer to enhance ScopedList (if possible) to make this more efficient. But that implicitly assumes that the buffers to be iterated are not interleaved with those that are not to be iterated, because the underlying data structure in ScopedList is a doubly-linked list.
#721 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
In email, Constantin wrote:
BufferManager.beginTx and endTx both stand out in the visualvm sampler. This is because we are iterating ~3000 openBuffers on each and every scope start/finish. ... but IIRC most of these buffers can't be accessed from the current block and this work is just about maintaining state for when we pop back up to a scope where they can be used. There must be a way to safely short-cut/short-circuit this work for these buffers.
The problem here is that no matter how/where you define a buffer, it can be accessed via its handle from anywhere else (in the case of persistent programs or an OE class). So no, we can't use 'scope behavior' for them.
There are two issues that I can think of:
- for REFERENCE-ONLY temp-tables, there is no real buffer in 4GL for them, but I create a TemporaryBuffer instance anyway. And this gets registered with BufferManager, even if it can never be accessed.
- a lazy approach of opening a new scope for a buffer. Only a very small percentage of BufferManager.openBuffers is used in a certain top-level block (especially from the global block, where all the buffers for the persistent programs/OE classes reside). I'm not sure how to do this yet, but the idea is for the buffer to know that it is used in a block where its scope wasn't opened, and to open it. This may be only for the ones in the global scope in BufferManager.openScopes (I think we call those in the 'world scope').
#722 Updated by Greg Shah almost 4 years ago
Is the primary problem this loop in beginTx() and the similar one in endTx():
for (RecordBuffer buffer : bm.openBuffers)
{
   if (buffer.getBlockDepth() == blockDepth)
   {
      // Already processed this one (buffers can be open in multiple, nested scopes).
      continue;
   }
   try
   {
      if (fullTx && buffer.isActive())
      {
         bm.bufferActiveInTransaction(buffer);
      }
   }
   finally
   {
      // Notify each open buffer that a new block scope has opened.
      buffer.enterBlock(blockDepth, inTx);
   }
}
#723 Updated by Constantin Asofiei almost 4 years ago
Greg Shah wrote:
Is the primary problem this loop in beginTx() and the similar one in endTx():
Exactly. The global scope, more precisely, as this is where the buffers in persistent programs/classes end up. I've made some changes to not register reference-only buffers, but this reduced them by only ~10-20%.
#724 Updated by Constantin Asofiei almost 4 years ago
I don't understand how Loader.readExtentData worked before. Shouldn't it increase the i var by seriesSize?
### Eclipse Workspace Patch 1.0
#P p2j4011arebase
Index: src/com/goldencode/p2j/persist/orm/Loader.java
===================================================================
--- src/com/goldencode/p2j/persist/orm/Loader.java (revision 2399)
+++ src/com/goldencode/p2j/persist/orm/Loader.java (working copy)
@@ -552,7 +552,7 @@
          }
          lastExtent = extent;
-         i++;
+         i = i + seriesSize;
       }
       // return the index of the next slot to be filled in the data array (if any statements
Otherwise, if there are other extent tables to be read, the start value will be incorrect... if there is more than one field with the same extent in the previous run.
#725 Updated by Greg Shah almost 4 years ago
Constantin Asofiei wrote:
Greg Shah wrote:
Is the primary problem this loop in beginTx() and the similar one in endTx():
Exactly. The global scope, more precisely, as this is where the buffers in persistent programs/classes end up. I've made some changes to not register reference-only buffers, but this reduced them by only ~10-20%.
I'm analyzing the usage of the data structures referenced and the state management handled during enterBlock() and exitBlock(). My intention is to find a way to eliminate these calls completely if possible. I will be proposing a radically different approach, if possible. So far I have analyzed 2 parts. I've discussed these with Eric to confirm my understanding, so I believe these are safe to implement.
1. Eliminate the call to TriggerTracker.resetState() from RecordBuffer.enterBlock().
Inside TriggerTracker, the state variable is a simple int. It is only accessed from 2 locations:
- resetState(), which unconditionally sets the value to UNCHECKED
- isTriggerEnabled(), which heavily reads and modifies this variable as a bitfield
To eliminate resetState(), in isTriggerEnabled() we just need to be able to detect that at least 1 (but possibly more) beginTx()/endTx() "event" has occurred since the last call to isTriggerEnabled(). This is easily done like so:
- There is one and only one instance of TriggerTracker per RecordBuffer instance. It is created during the RecordBuffer constructor. At that time we also have the "context-local singleton" BufferManager instance. We must pass this BufferManager instance into the TriggerTracker constructor. This will be saved as a member variable in TriggerTracker.
- The BufferManager instance will have a new long member variable (scopeTransitions) that will be "global" to the context. This member will be incremented in both beginTx() and endTx(). It is never decremented. There must be an accessor for it.
- TriggerTracker will have a new long member variable lastScopeTransition.
- At the top of isTriggerEnabled(), we will compare this.lastScopeTransition to bufferManager.scopeTransitions. If it is different, then we know that one or more beginTx() and/or endTx() events have occurred since the last call to isTriggerEnabled(). This means that state can be reset right then to UNCHECKED.
- If state was reset, we also must assign this.lastScopeTransition from bufferManager.scopeTransitions so that we can remember the new comparison point for the next call to isTriggerEnabled().
That is it. With that change, the resetState() call can be removed from both beginTx() and endTx() and replaced with incrementing (and never decrementing) scopeTransitions.
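A minimal, self-contained sketch of this generation-counter idea (the two classes below are simplified stand-ins for the real BufferManager and TriggerTracker, and checkAndMaybeReset stands in for the reset logic at the top of isTriggerEnabled()):

```java
// Sketch of the lazy-reset scheme described above.
public class ScopeCounterDemo
{
   static class BufferManager
   {
      private long scopeTransitions = 0;   // incremented, never decremented

      void beginTx() { scopeTransitions++; }   // instead of calling resetState()
      void endTx()   { scopeTransitions++; }   // on every open buffer

      long getScopeTransitions() { return scopeTransitions; }
   }

   static class TriggerTracker
   {
      private static final int UNCHECKED = 0;

      private final BufferManager bufferManager;
      private long lastScopeTransition = -1;
      private int state = UNCHECKED;

      TriggerTracker(BufferManager bm) { this.bufferManager = bm; }

      // returns true when the lazy reset fired; the real method would then
      // go on to use 'state' as a bitfield
      boolean checkAndMaybeReset()
      {
         long current = bufferManager.getScopeTransitions();
         if (current != lastScopeTransition)
         {
            state = UNCHECKED;             // lazy reset, amortized O(1)
            lastScopeTransition = current;
            return true;
         }
         return false;
      }
   }
}
```

With this, the per-buffer resetState() loop in beginTx()/endTx() collapses to a single counter increment; each tracker pays for its reset lazily, only when it is actually consulted.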
2. Make undone an instance member of RecordBuffer instead of using the extra object of the ScopedSymbolDictionary.
The same instance of undone is stored as the extra object in every scope of pinnedLockTypes. We create the instance during the RecordBuffer constructor and we copy it into every addScope(). We only ever read it from the dictionary. There is no reason to use the dictionary for this case. By making undone a member variable, we reduce processing, we simplify the code and we reduce dependencies on pinnedLockTypes.
Eric is planning to make these changes. I'm continuing to analyze the data structures to take this idea further (if possible).
#726 Updated by Eric Faulhaber almost 4 years ago
Greg Shah wrote:
1. Eliminate the call to TriggerTracker.resetState() from RecordBuffer.enterBlock().
[...]
Implemented in 4011b/11546.
#727 Updated by Eric Faulhaber almost 4 years ago
Greg Shah wrote:
2. Make undone an instance member of RecordBuffer instead of using the extra object of the ScopedSymbolDictionary.
[...]
Implemented in 4011b/11547.
#728 Updated by Greg Shah almost 4 years ago
- File ScopedDictionary.java added
I'm still analyzing the RecordBuffer.pinnedLockTypes usage.
As part of this work, I've written changes to ScopedDictionary to eliminate the use of the ListIterator in all "at scope" operations. I believe this should be a faster approach. It also reduces the code and makes it easier to understand, IMO.
BufferManager uses getDictionaryAtScope() very heavily. Any code that passes SD.size() - 1 as the first parameter can be changed to -1, and it will be faster.
I have tested this with Hotel GUI using 11544 + this change. Both conversion and runtime work fine. I see an NPE (in dirty share stuff) with guest checkin, but if I recall correctly Eric already saw this.
Constantin: Would you please review this and see if you can find any issues?
#729 Updated by Constantin Asofiei almost 4 years ago
Greg Shah wrote:
As part of this work, I've written changes to ScopedDictionary to eliminate the use of the ListIterator in all "at scope" operations. I believe this should be a faster approach. It also reduces the code and makes it easier to understand, IMO.
...
Constantin: Would you please review this and see if you can find any issues?
The changes look OK. But I wonder if we still need the LinkedList, as any 'positional access' will iterate the list. Maybe an ArrayDeque could be faster.
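For illustration, the access-cost difference behind this review comment (scope names are made up; this is not ScopedDictionary code):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedList;
import java.util.List;

// LinkedList positional access walks nodes from the nearest end, while
// ArrayDeque gives O(1) access at both ends (but no random access in the middle).
public class ScopeStackDemo
{
   // O(1): read both ends of the scope stack without traversal
   static String endpoints(Deque<String> scopes)
   {
      return scopes.peekFirst() + " / " + scopes.peekLast();
   }

   public static void main(String[] args)
   {
      List<String> linked = new LinkedList<>();
      linked.add("global scope");
      linked.add("procedure scope");
      linked.add("block scope");
      // O(n): get(1) traverses list nodes to reach the index
      System.out.println(linked.get(1));

      Deque<String> scopes = new ArrayDeque<>();
      scopes.addLast("global scope");
      scopes.addLast("procedure scope");
      scopes.addLast("block scope");
      System.out.println(endpoints(scopes)); // global scope / block scope
   }
}
```

An ArrayDeque only helps if the "at scope" operations really are end-biased; arbitrary positional access in the middle would still favor an array-backed structure instead.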
#730 Updated by Constantin Asofiei almost 4 years ago
There is an issue with the ProxyAssembler.assembleClass code. Here, we want to override all passed methods and also make them public - but there is no way to override a package-private or private method. The compiler will still link to the super-definition.
The question above comes from a regression in RecordBuffer.getDialect, which was being called on the handler instance (an ArgumentBuffer instance) because its modifier was changed from protected to package-private. The proxy is created via ArgumentBuffer.createProxy:
RecordBuffer dmoProxy = (RecordBuffer) ProxyFactory.getProxy(RecordBuffer.class, true, new Class[0], false, null, null, this);
I'd like to protect against these kinds of failures, but not by making the proxy creation fail - instead, the runtime should tell me "I'm invoking a method from the handler instance, and not from the proxied instance". But I don't see how this can be possible. If I add an Override annotation, I assume the bytecode verification will fail. Otherwise, if the parent method is not public or protected, it can't be overridden, and I can't throw an exception in the sub-class.
#731 Updated by Constantin Asofiei almost 4 years ago
4011b contains the fix for #4011-730 and other misc - please review.
#732 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
4011b contains the fix for #4011-730 and other misc - please review.
The changes seem good to me (4011b/11552).
#733 Updated by Constantin Asofiei almost 4 years ago
Eric/Ovidiu, someone please document and explain how Loader.readExtentData should work. My change which increments i with seriesSize is not OK.
The issue is when loading a _Field record (from the MetaField table). I understand that:
- ResultSet rs contains all rows for all the properties with the same extent: the first row holds element 1 for each prop, the second row element 2, etc.
- props contains all properties 'denormalized'. But these don't follow the ResultSet; they are placed one after another.
- while (rs.next()) should populate on the first iteration the first element of each extent property, on the second one the second element, etc.
Something is wrong here, and I can't put my finger on it.
#734 Updated by Constantin Asofiei almost 4 years ago
Looking at the code, the protection against 'bad data' is not OK. The code should attempt to read exactly 'extent' rows, for exactly 'seriesSize' (which is the number of properties with the same extent) columns. No more, no less. Any deviation means the data is corrupted in the database.
And also return the index where the next series (of properties with extent larger than the current series) starts.
The i < len protection and PropertyMeta pm = props[i]; are very confusing. This would suggest the properties are advanced based on the number of results? But rs is not advancing in the properties list, but over the indices in the series.
I think this code:
PropertyMeta pm = props[i];
int extent = pm.getExtent();
should be moved before the while loop, i replaced with start, and i left to increment by 1 (which is the current element in the series, aka the currently iterated rs record).
The returned value should be return start + pm.extent * pm.seriesSize;, to reference the first property in the next "extent series".
#735 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Eric/Ovidiu, someone please document and explain how Loader.readExtentData should work. My change which increments i with seriesSize is not OK.
The issue is when loading a _Field record (from the MetaField table). I understand that:
- ResultSet rs contains all rows for all the properties with the same extent: the first row holds element 1 for each prop, the second row element 2, etc.
- props contains all properties 'denormalized'. But these don't follow the ResultSet; they are placed one after another.
- while (rs.next()) should populate on the first iteration the first element of each extent property, on the second one the second element, etc.
Something is wrong here, and I can't put my finger on it.
The _Field table should not be getting denormalized, unless there is an explicit schema hint to do so, and I don't know of any application where we do that.
Generally speaking, the data should always be laid out as described in the Loader class javadoc. However, if an extent field has been denormalized, it should be treated as separate, scalar fields. Once denormalized, everything in the ORM framework should treat these as scalar fields; we should not have conditional code paths for normalized and denormalized extent fields.
We know we need to rationalize several redundant data structures in which we store DMO meta-information. Is the code which generates the SQL query statement using meta-information that perhaps has not yet been adjusted for denormalization? This was implemented later in 4011a; maybe we missed something.
#736 Updated by Eric Faulhaber almost 4 years ago
Before you make any changes at this low level, please explain the error you are seeing, which you are trying to solve.
#737 Updated by Eric Faulhaber almost 4 years ago
Eric Faulhaber wrote:
Generally speaking, the data should always be laid out as described in the Loader class javadoc.
To clarify, I mean the array of PropertyMeta objects created by RecordMeta, describing the structure of the BaseRecord.data array, should reflect the layout in the Loader class javadoc. The PropertyMeta objects are the canonical description of the meaning of that data array.
#738 Updated by Constantin Asofiei almost 4 years ago
Eric, my problem is that props for the MetaField DMO has its structure like this:
- 52 scalar props
- 48 entries for 6 extent-8 properties, like prop1 (8 times), prop2 (8 times), etc.
- 64 entries for 1 extent-64 property
Are you saying that the props should have the structure like this?
- 52 scalar props
- 6 entries for 6 extent-8 properties, like prop1 (1 time), prop2, ... prop8
- 1 entry for 1 extent-64 property
Because props contains the denormalized list of properties, the return value (the 'start' for the next series) is wrong with the original i = i + 1 increment. And if I switch it to i = i + seriesSize, then i will be wrong, but the return value will be correct.
#739 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
And if I switch it to i = i + seriesSize, then i will be wrong, but the return value will be correct.
Sorry, the return value is still incorrect. i is (theoretically) correct, as we are still inside a property with the same extent for this 'extent 8' series.
#740 Updated by Constantin Asofiei almost 4 years ago
A side note: for a FindQuery.canFindImpl call, do we really need to load the full record from the database? Because the buffer will not be updated, and we only care whether the record exists (as 'any' or 'one').
#741 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:

A side note: for a FindQuery.canFindImpl call, do we really need to load the full record from the database? Because the buffer will not be updated, and we only care whether the record exists (as 'any' or 'one').
No, good point, we don't. We didn't have an option with our Hibernate implementation, but this would be something to optimize now.
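One shape this optimization could take (a hedged sketch; the real FWD SQL generation is elsewhere, and the table/column names here are illustrative): for an existence-only check, select a constant with LIMIT 1 instead of every column of the record.

```java
// Hypothetical sketch: CAN-FIND only needs existence, so instead of selecting
// every column we could probe with a constant projection and LIMIT 1.
public class CanFindSql
{
   /** Full record load: transfers all columns. */
   static String fullLoad(String table, String where)
   {
      return "select * from " + table + " where " + where;
   }

   /** Existence probe: returns at most one row and transfers no column data. */
   static String existsProbe(String table, String where)
   {
      return "select 1 from " + table + " where " + where + " limit 1";
   }

   public static void main(String[] args)
   {
      System.out.println(fullLoad("tt1", "f1 = ?"));
      System.out.println(existsProbe("tt1", "f1 = ?"));
   }
}
```

The probe form also lets the database stop scanning at the first match, which matters for CAN-FIND inside tight loops.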
#742 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:

Eric, my problem is that props for the MetaField DMO has its structure like this:
- 52 scalar props
- 48 entries for 6 extent-8 properties, like prop1 (8 times), prop2 (8 times), etc.
- 64 entries for 1 extent-64 property.

This is correct. Every element of an extent field gets the same PropertyMeta instance. This is intended to enable an iteration of the data array to easily look up the corresponding PropertyMeta object at the same index, and vice-versa. However, this means we have to take care to skip ahead properly for extent fields when performing operations which scan the PropertyMeta array to get meta-information about the logical representation of properties at the DMO level.

Are you saying that props should have the structure like this?
- 52 scalar props
- 6 entries for 6 extent-8 properties, like prop1 (1 time), prop2, ... prop8
- 1 entry for 1 extent-64 property

No.

Because props contains the denormalized list of properties, the return value (the 'start' for the next series) is wrong, with the original i = i + 1 increment. And if I switch it to i = i + seriesSize, then the i will be wrong, but the return value will be correct.

You were confusing me with your use of the term "denormalized". I understand why you used it to describe the data layout, but it has a specific meaning w.r.t. how we change the schema when a schema hint instructs us to denormalize an extent field during schema conversion. I was thinking of the latter meaning.
What is the layout of the result set you are working with?
#743 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:

What is the layout of the result set you are working with?

There are 3 SELECTs being executed, each with one ResultSet:
- 1 for the scalar fields (52 properties, 1 row)
- 1 for the extent-8 fields (6 properties, 8 rows)
- 1 for the extent-64 field (1 property, 64 rows)

props must be used with:
- start=0 for the scalar fields, at the readScalarData call
- start=52 for the extent-8 fields, at the first readExtentData call
- start=100 for the extent-64 field, at the second readExtentData call
#744 Updated by Eric Faulhaber almost 4 years ago
Does this patch work?
=== modified file 'src/com/goldencode/p2j/persist/orm/Loader.java'
--- src/com/goldencode/p2j/persist/orm/Loader.java 2020-07-07 18:45:35 +0000
+++ src/com/goldencode/p2j/persist/orm/Loader.java 2020-07-07 22:06:37 +0000
@@ -522,6 +522,7 @@
 {
    int len = props.length;
    int i = start;
+   int newStart = start;
    int lastExtent = -1;

    // each row stores an element at index N for one or more extent fields of the same extent
@@ -552,11 +553,12 @@
       }

       lastExtent = extent;
-      i = i + seriesSize;
+      i++;
+      newStart += seriesSize;
    }

    // return the index of the next slot to be filled in the data array (if any statements
    // follow), or -1 if we did not advance the index (indicates record was not found)
-   return i > start ? i : -1;
+   return newStart > start ? newStart : -1;
 }
}
#745 Updated by Constantin Asofiei almost 4 years ago
There is a bug related to validation/flushing. I have a record like this:

tt2111:1 RecordState{NEW NOUNDO NEEDS_VALIDATION CHANGED} dirty: {0} data: {} multiplex: 64

which is loaded in the TemporaryBuffer, and a CAN-FIND is done on this table. But, as the record is not flushed, FWD can't find it.

The record is changed like:

create tt1. tt1.f1 = 0.

and there is an index on tt1.f1. FWD will go down this validation code:

if (queryCheckMethod)
{
   // use the query method of checking unique constraints instead of the persist & rollback
   // method; this will not get hung up on a null mandatory field in a transient DMO
   generateUniqueIndicesCondition(checkUniqIndices);
}

as queryCheckMethod is true, but it will not flush the record to the database.

#746 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:

Does this patch work?
[...]

Well, my patch just returned the correct 'nextStart' by multiplying the seriesSize with the extent. Your patch (although correct) exposed another latent bug, in Loader.load. There is an if (!rs.next()) call on line 396. This advances the cursor in the ResultSet - readScalarData is aware of this, but readExtentData will do a while (rs.next()) and skip the first record. I'm trying a rs.beforeFirst() call before the start = readExtentData(dmo, rs, props, start); call, to reposition the cursor properly, but the rs is not scrollable.

I think some more refactoring is needed here.
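The cursor pitfall above can be reproduced without a database. Below is a hedged sketch with a minimal forward-only cursor standing in for java.sql.ResultSet (the Cursor class and both reader methods are illustrative, not FWD code): probing with next() to test for emptiness and then looping with while (next()) silently drops the first row, while a do/while accounts for the row the probe already consumed.

```java
import java.util.Iterator;
import java.util.List;

// Minimal stand-in for a forward-only ResultSet, to illustrate the bug:
// a caller that already advanced once to test for emptiness must not
// follow up with a plain "while (next())" loop.
public class CursorDemo
{
   static final class Cursor
   {
      private final Iterator<Integer> it;
      private Integer current;

      Cursor(List<Integer> rows) { this.it = rows.iterator(); }

      boolean next()
      {
         if (!it.hasNext()) return false;
         current = it.next();
         return true;
      }

      int get() { return current; }
   }

   /** Buggy reader: assumes the cursor is still before the first row. */
   static int countRowsAfterProbe(Cursor c)
   {
      if (!c.next())       // probe for emptiness; advances past row 1
      {
         return 0;
      }
      int n = 0;
      while (c.next())     // starts at row 2, so one row is lost
      {
         n++;
      }
      return n;
   }

   /** Fixed reader: account for the row the probe already consumed. */
   static int countRowsFixed(Cursor c)
   {
      if (!c.next()) return 0;
      int n = 0;
      do { n++; } while (c.next());   // counts the probed row too
      return n;
   }

   public static void main(String[] args)
   {
      System.out.println(countRowsAfterProbe(new Cursor(List.of(1, 2, 3))));  // 2: first row skipped
      System.out.println(countRowsFixed(new Cursor(List.of(1, 2, 3))));       // 3
   }
}
```

The fix adopted in 4011b (see below) takes the other route: only the scalar-property SELECT performs the probing rs.next() at all.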
#747 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:

Eric Faulhaber wrote:
Does this patch work?
[...]

Well, my patch just returned the correct 'nextStart' by multiplying the seriesSize with the extent. Your patch (although correct) exposed another latent bug, in Loader.load. There is an if (!rs.next()) call on line 396. This advances the cursor in the ResultSet - readScalarData is aware of this, but readExtentData will do a while (rs.next()) and skip the first record. I'm trying a rs.beforeFirst() call before the start = readExtentData(dmo, rs, props, start); call, to reposition the cursor properly, but the rs is not scrollable.

I think some more refactoring is needed here.
Agreed. Are you working on this, or do you want someone else to pick it up?
#748 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Agreed. Are you working on this, or do you want someone else to pick it up?
The fix was simple, see 4011b rev 11553. The idea was to call rs.next() only for the SELECT for the scalar properties. Otherwise, when reading the extent properties, the code terminates the same way if no record was found.
#749 Updated by Greg Shah almost 4 years ago
Constantin: I have the following questions about RecordBuffer.worldScope.

- Is the idea that worldScope == true means that the buffer is associated with a persistent procedure?
- The value is determined in exitBlock() using worldScope = worldScope || activeScopeDepth > this.blockDepth;. This means that any buffers that are only accessed from below the open scope depth will not be determined to be persistent, even if they are in fact persistent. This seems like a problem, though I'm not entirely sure how it might be seen.
- We are getting rid of exitBlock() completely. (How we do this will be discussed later.) But we need a different way to determine this value. I was hoping that during construction of the buffer, we could determine whether or not it is being created in a persistent procedure. Is this feasible?
#750 Updated by Eugenie Lyzenko almost 4 years ago
After patching directory.xml, this is what I get for a freshly imported DB and server start-up:
Picked up _JAVA_OPTIONS: -Duser.timezone=GMT+3
Jul 09, 2020 8:20:19 AM com.goldencode.p2j.main.StandardServer bootstrap
INFO: FWD v4.0.0_p2j_4011b_11551 Server starting initialization.
[07/09/2020 08:20:19 GMT+03:00] (SecurityManager:INFO) {main} Loaded 1 CA certificates, 0 objects ignored
[07/09/2020 08:20:19 GMT+03:00] (SecurityManager:INFO) {main} Loaded 7 peer certificates, 0 objects ignored
[07/09/2020 08:20:19 GMT+03:00] (SecurityManager:WARNING) {main} No private keys defined in the P2J directory
[07/09/2020 08:20:20 GMT+03:00] (SecurityManager:INFO) {main} Loaded auth-mode object
[07/09/2020 08:20:20 GMT+03:00] (SecurityManager:INFO) {main} Loaded 5 groups, 0 objects ignored
[07/09/2020 08:20:20 GMT+03:00] (SecurityManager:INFO) {main} Loaded 7 processes, 0 objects ignored
[07/09/2020 08:20:20 GMT+03:00] (SecurityManager:INFO) {main} Loaded 11 users, 0 objects ignored
[07/09/2020 08:20:20 GMT+03:00] (SecurityManager:INFO) {main} Loaded 7 resource plugins, 0 failed
[07/09/2020 08:20:20 GMT+03:00] (SecurityManager:WARNING) {main} No custom server extension defined in P2J directory
[07/09/2020 08:20:20 GMT+03:00] (SecurityManager:WARNING) {main} No custom client extension defined in P2J directory
[07/09/2020 08:20:20 GMT+03:00] (SecurityManager:INFO) {main} Loaded 4 ACLs for resource type <remoteentrypoint>
[07/09/2020 08:20:20 GMT+03:00] (SecurityManager:INFO) {main} Loaded 6 ACLs for resource type <system>
[07/09/2020 08:20:20 GMT+03:00] (SecurityManager:INFO) {main} Loaded 1 ACLs for resource type <entrypoint>
[07/09/2020 08:20:20 GMT+03:00] (SecurityManager:INFO) {main} Loaded 14 ACLs for resource type <admin>
[07/09/2020 08:20:20 GMT+03:00] (SecurityManager:INFO) {main} Loaded 1 ACLs for resource type <trustedspawner>
[07/09/2020 08:20:20 GMT+03:00] (SecurityManager:WARNING) {main} ACLs branch <net> is ignored
[07/09/2020 08:20:20 GMT+03:00] (SecurityManager:INFO) {main} Loaded 6 ACLs for resource type <directory>
[07/09/2020 08:20:20 GMT+03:00] (SecurityManager:INFO) {main} Loaded 3 ACLs for resource type <remotelaunchoption>
[07/09/2020 08:20:20 GMT+03:00] (SecurityManager:WARNING) {main} Security audit log is disabled
[07/09/2020 08:20:20 GMT+03:00] (SecurityManager:INFO) {00000000:00000001:standard} No exported entry points defined in the P2J directory
[07/09/2020 08:20:22 GMT+03:00] (com.goldencode.p2j.persist.ConversionPool:INFO) Runtime conversion pool initialized in 2448 ms.
[07/09/2020 08:20:22 GMT+03:00] (com.goldencode.p2j.persist.DatabaseManager:INFO) Using H2 database version 1.4.197 (2018-03-18)
[07/09/2020 08:20:22 GMT+03:00] (com.goldencode.p2j.persist.meta.MetadataManager:INFO) Metadata server modules: [Myconnection, Connect, Index, DatabaseFeature, Sequence, FieldTrig, Lock, Field, Area, Usertablestat, Filelist, FileTrig, Tablestat, File, IndexField, Startup, Db]
[07/09/2020 08:20:22 GMT+03:00] (com.goldencode.p2j.persist.DatabaseManager:INFO) Database local/_temp/primary initialized in 0 ms.
SLF4J: Class path contains multiple SLF4J bindings.
...
[07/09/2020 08:20:26 GMT+03:00] (com.goldencode.p2j.persist.meta.MetadataManager:INFO) Persisting record for the metatable MetaDb__Impl__
[07/09/2020 08:20:26 GMT+03:00] (com.goldencode.p2j.persist.meta.MetadataManager:INFO) Persisting record for the metatable MetaFilelist__Impl__
[07/09/2020 08:20:26 GMT+03:00] (com.goldencode.p2j.persist.meta.MetadataManager:INFO) Persisting record for the metatable MetaFileTrig__Impl__
[07/09/2020 08:20:26 GMT+03:00] (com.goldencode.p2j.persist.meta.MetadataManager:INFO) Persisting record for the metatable MetaStartup__Impl__
[07/09/2020 08:20:26 GMT+03:00] (com.goldencode.p2j.persist.meta.MetadataManager$SystemTable:INFO) Populating [_Database-Feature]
[07/09/2020 08:20:26 GMT+03:00] (com.goldencode.p2j.persist.meta.MetadataManager$SystemTable:INFO) Populated [_Database-Feature] in 11 ms
[07/09/2020 08:20:26 GMT+03:00] (com.goldencode.p2j.persist.meta.MetadataManager$SystemTable:INFO) Populating [_Sequence]
[07/09/2020 08:20:26 GMT+03:00] (com.goldencode.p2j.persist.meta.MetadataManager$SystemTable:INFO) Populated [_Sequence] in 0 ms
[07/09/2020 08:20:26 GMT+03:00] (com.goldencode.p2j.persist.meta.MetadataManager$SystemTable:INFO) Populating [_File]
com.goldencode.p2j.cfg.ConfigurationException: Initialization failure
   at com.goldencode.p2j.main.StandardServer.hookInitialize(StandardServer.java:2083)
   at com.goldencode.p2j.main.StandardServer.bootstrap(StandardServer.java:999)
   at com.goldencode.p2j.main.ServerDriver.start(ServerDriver.java:483)
   at com.goldencode.p2j.main.CommonDriver.process(CommonDriver.java:444)
   at com.goldencode.p2j.main.ServerDriver.process(ServerDriver.java:207)
   at com.goldencode.p2j.main.ServerDriver.main(ServerDriver.java:860)
Caused by: java.lang.IncompatibleClassChangeError: Implementing class
   at java.lang.ClassLoader.defineClass1(Native Method)
   at java.lang.ClassLoader.defineClass(ClassLoader.java:756)
   at java.lang.ClassLoader.defineClass(ClassLoader.java:635)
   at com.goldencode.asm.AsmClassLoader.findClass(AsmClassLoader.java:186)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
   at com.goldencode.asm.AsmClassLoader.loadClass(AsmClassLoader.java:220)
   at com.goldencode.p2j.persist.orm.DmoClass.load(DmoClass.java:1518)
   at com.goldencode.p2j.persist.orm.DmoClass.forInterface(DmoClass.java:386)
   at com.goldencode.p2j.persist.orm.DmoMetadataManager.registerDmo(DmoMetadataManager.java:193)
   at com.goldencode.p2j.persist.meta.MetadataManager.addTableToFile(MetadataManager.java:646)
   at com.goldencode.p2j.persist.meta.MetadataManager.lambda$populateFileTable$1(MetadataManager.java:612)
   at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
   at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:647)
   at com.goldencode.p2j.persist.meta.MetadataManager.populateFileTable(MetadataManager.java:612)
   at com.goldencode.p2j.persist.meta.MetadataManager.access$600(MetadataManager.java:108)
   at com.goldencode.p2j.persist.meta.MetadataManager$SystemTable.lambda$static$2(MetadataManager.java:1798)
   at com.goldencode.p2j.persist.meta.MetadataManager$SystemTable.lambda$new$3(MetadataManager.java:1841)
   at com.goldencode.p2j.persist.meta.MetadataManager$SystemTable.lambda$populateAll$6(MetadataManager.java:1906)
   at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
   at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
   at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
   at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
   at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
   at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
   at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
   at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
   at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
   at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
   at com.goldencode.p2j.persist.meta.MetadataManager$SystemTable.populateAll(MetadataManager.java:1906)
   at com.goldencode.p2j.persist.meta.MetadataManager.populateDatabase(MetadataManager.java:572)
   at com.goldencode.p2j.persist.DatabaseManager.initialize(DatabaseManager.java:1610)
   at com.goldencode.p2j.persist.Persistence.initialize(Persistence.java:864)
   at com.goldencode.p2j.main.StandardServer$11.initialize(StandardServer.java:1244)
   at com.goldencode.p2j.main.StandardServer.hookInitialize(StandardServer.java:2079)
   ... 5 more
#751 Updated by Igor Skornyakov almost 4 years ago
The profiling of the Hotel GUI app (switching tabs) did not reveal any hot spots. The only thing worth mentioning is:

com.goldencode.p2j.util.HandleOps.getLegacyAnnotation(handle, String, boolean, Class) HandleOps.java:773   445   228644

The first number here is the time in ms, the second is the number of calls. Maybe it makes sense to use some kind of cache here.

There were very few database operations. The ones which took the most time are:

select
   stay__impl0_.recid as id0_, stay__impl0_.stay_id as stay1_0_, stay__impl0_.room_num as room2_0_, stay__impl0_.start_date as start3_0_, stay__impl0_.end_date as end4_0_, stay__impl0_.checked_out as checked5_0_, stay__impl0_.num_guests as num6_0_, stay__impl0_.price as price7_0_, room__impl1_.recid as id8_, room__impl1_.room_num as room9_0_, room__impl1_.floor as floor10_0_, room__impl1_.room_type as room11_0_, room__impl1_.active as active12_0_, roomtype__2_.recid as id13_, roomtype__2_.room_type as room14_0_, roomtype__2_.parent_type as parent15_0_, roomtype__2_.description as description16_0_, roomtype__2_.image as image17_0_, roomtype__2_.max_persons as max18_0_
from
   stay stay__impl0_
cross join
   room room__impl1_
cross join
   room_type roomtype__2_
where
   stay__impl0_.checked_out = false and room__impl1_.room_num = stay__impl0_.room_num and roomtype__2_.room_type = room__impl1_.room_type
order by
   stay__impl0_.stay_id asc, room__impl1_.room_num asc, roomtype__2_.room_type asc
limit ?
   13   6   2

update dtt3 set field54=? where recid=? and _multiplex=?
   16   5   3

create local temporary table tt162 (
   recid bigint not null,
   _multiplex integer not null,
   _errorFlag integer,
   _originRowid bigint,
   _errorString varchar,
   _peerRowid bigint,
   _rowState integer,
   roomnum integer,
   fullname varchar,
   nguests integer,
   chkin date,
   chkout date,
   duration integer,
   roomtype varchar,
   roomprice integer,
   primary key (recid)
) transactional;
create index idx_mpid__tt162__3 on tt162 (_multiplex) transactional;
   4   4   1

The numbers are: total time, avg. time and count.
#752 Updated by Igor Skornyakov almost 4 years ago
I've generated several reports and exports at the Guests tab. The only meaningful hotspot is:

org.postgresql.jdbc.PgPreparedStatement.executeQuery() PgPreparedStatement.java   2085   2112

It was called from 3 places:

com.goldencode.p2j.persist.orm.Session.refresh(BaseRecord) Session.java   1168   1218
com.goldencode.p2j.persist.orm.SQLQuery.list(Session, List) SQLQuery.java:362   483   444
com.goldencode.p2j.persist.orm.SQLQuery.scroll(Session, int, boolean, List) SQLQuery.java:162   414   438

Again, the numbers are time (ms) and count.
#753 Updated by Constantin Asofiei almost 4 years ago
Eric, something else to consider. In Persister.insert, we use 1 + sum(distinct field extents) INSERT statements to persist a full record. We can reduce this to 1 + count(distinct field extents) by using INSERT ... VALUES (record-for-index-1), ..., (record-for-index-n) to batch all of them into a single INSERT statement. If my understanding is right and each INSERT is a trip to the database, then this will reduce these trips considerably.
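The batching idea can be sketched as follows (a hedged illustration only; the table/column names and the helper are hypothetical, not the real generated schema or Persister code): all rows of one extent size collapse into a single multi-row INSERT.

```java
// Hedged sketch of the batching idea: emit one multi-row INSERT per extent
// size, instead of one single-row INSERT per extent element. Table and
// column names below are illustrative.
public class BatchInsertSql
{
   /** Build "insert into T (c1, ..., ck) values (?, ..., ?), ... (rows tuples)". */
   static String multiRowInsert(String table, String[] cols, int rows)
   {
      StringBuilder sb = new StringBuilder("insert into ").append(table).append(" (");
      sb.append(String.join(", ", cols)).append(") values ");

      String tuple =
         "(" + String.join(", ", java.util.Collections.nCopies(cols.length, "?")) + ")";
      for (int i = 0; i < rows; i++)
      {
         if (i > 0) sb.append(", ");
         sb.append(tuple);
      }
      return sb.toString();
   }

   public static void main(String[] args)
   {
      // one round trip for all 8 elements of a hypothetical extent-8 field
      System.out.println(multiRowInsert("tt1__8", new String[] { "recid", "list__index", "f2" }, 8));
   }
}
```

JDBC batching (PreparedStatement.addBatch/executeBatch) is the other standard option; the multi-row VALUES form additionally collapses the statements themselves, not just the round trips.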
#754 Updated by Constantin Asofiei almost 4 years ago
Greg Shah wrote:

Constantin: I have the following questions about RecordBuffer.worldScope.

- Is the idea that worldScope == true means that the buffer is associated with a persistent procedure?

Yes, but not just that - it also means the persistent procedure is no longer on the stack.

- The value is determined in exitBlock() using worldScope = worldScope || activeScopeDepth > this.blockDepth;. This means that any buffers that are only accessed from below the open scope depth will not be determined to be persistent, even if they are in fact persistent. This seems like a problem, though I'm not entirely sure how it might be seen.

Here I considered that activeScopeDepth is the scope depth where the buffer was opened and blockDepth is the current block depth. If activeScopeDepth > this.blockDepth, then this means we are using a buffer for which its external program is no longer on the stack (and this is possible only if the program is run persistent). As soon as the external program (run persistent) finishes its 'execute' method, the buffers will be considered in 'worldScope'. Until then, the behavior was meant to be the same as for the buffers of non-persistent programs.

- We are getting rid of exitBlock() completely. (How we do this will be discussed later.) But we need a different way to determine this value. I was hoping that during construction of the buffer, we could determine whether or not it is being created in a persistent procedure. Is this feasible?

In ControlFlowOps$InternalEntryCaller.invokeImpl(String, Object...), there is a pushWorker.run(); call which ends up calling ProcedureManager$ProcedureHelper.pushCalleeInfo - this includes the 'persistent' flavor of a RUN call.

So, if you want to interrogate this somewhere during the buffer initialization code, use ProcedureManager$ProcedureHelper.peekCalleeInfo to get the CalleeInfo instance and change the interface to add a method isPersistent, which in ProcedureManager$CalleeInfoImpl will report the persistentProc flag.

BTW, there should be no explicit requirement to handle instance or static buffers defined in legacy OE classes - FWD already considers these instances as being 'run persistent'.
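The interface change Constantin describes might look roughly like this (a sketch only; CalleeInfo, CalleeInfoImpl, and the persistentProc flag are FWD internals named in the discussion, and the classes below are simplified stand-ins, not the real ProcedureManager types):

```java
// Sketch only: the shape of the proposed isPersistent addition. These are
// illustrative stand-ins for FWD's CalleeInfo / CalleeInfoImpl.
public class PersistentFlagSketch
{
   interface CalleeInfo
   {
      /** Would report whether the callee was invoked via RUN ... PERSISTENT. */
      boolean isPersistent();
   }

   static final class CalleeInfoImpl implements CalleeInfo
   {
      private final boolean persistentProc;

      CalleeInfoImpl(boolean persistentProc) { this.persistentProc = persistentProc; }

      @Override
      public boolean isPersistent() { return persistentProc; }
   }

   public static void main(String[] args)
   {
      // buffer initialization code would peek the current CalleeInfo and latch the flag
      CalleeInfo info = new CalleeInfoImpl(true);
      System.out.println(info.isPersistent());
   }
}
```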
#755 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:

Here I considered that activeScopeDepth is the scope depth where the buffer was opened and blockDepth is the current block depth. If activeScopeDepth > this.blockDepth, then this means we are using a buffer for which its external program is no longer on the stack (and this is possible only if the program is run persistent).

Are there really no other cases where the buffer scope can be smaller than the external procedure scope? In other words, is it possible there are some cases where we are assuming worldScope == true means persistent procedure when it actually does not?

As soon as the external program (run persistent) finishes its 'execute' method, the buffers will be considered in 'worldScope'. Until then, the behavior was meant to be the same as for the buffers of non-persistent programs.

OK, so you are attributing more meaning to the value of worldScope than we assumed (which was that it was just a flag to indicate the buffer is being used by a persistent procedure).

In this case, can't we just set worldScope to true in RecordBuffer.finishedImpl when the outermost buffer scope is closing, instead of checking at each exitBlock call? If the buffer survives beyond this point, it will mean it is a persistent procedure buffer in world scope; otherwise it is no longer in use and will be garbage collected, and the value of worldScope won't matter.

This presumes the answer to my question above is that worldScope can only be true for persistent procedure buffers. It also presumes that the outermost scope of a "normal" buffer, once closed, cannot be re-opened. If this last assumption is incorrect, we can reset worldScope to false in RecordBuffer.openScope when in the outermost scope.
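Eric's latch-and-reset proposal can be sketched in isolation (hedged: method and field names mirror the discussion, but the real RecordBuffer scope management is far more involved than this stand-in):

```java
// Hedged sketch of the proposal above: latch worldScope when the outermost
// buffer scope closes, and clear it again if that scope is re-opened.
public class WorldScopeSketch
{
   private int     scopeDepth = 0;
   private boolean worldScope = false;

   void openScope()
   {
      if (scopeDepth == 0)
      {
         worldScope = false;   // outermost scope re-opened: buffer is in active use again
      }
      scopeDepth++;
   }

   void finishedImpl()
   {
      scopeDepth--;
      if (scopeDepth == 0)
      {
         // if the buffer survives past this point, it belongs to a
         // persistent procedure and is now "world scoped"
         worldScope = true;
      }
   }

   boolean isWorldScope() { return worldScope; }

   public static void main(String[] args)
   {
      WorldScopeSketch b = new WorldScopeSketch();
      b.openScope();
      b.finishedImpl();
      System.out.println(b.isWorldScope());   // true after outermost scope closes

      b.openScope();
      System.out.println(b.isWorldScope());   // false again once re-opened
   }
}
```

The reset in openScope is exactly the guard needed for the re-opening cases Constantin cites below (Example 2 in the wiki chapter), where the outermost scope can open more than once.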
#756 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:

Are there really no other cases where the buffer scope can be smaller than the external procedure scope? In other words, is it possible there are some cases where we are assuming worldScope == true means persistent procedure when it actually does not?

Hmm... the part I missed was a buffer which is scoped to some other block (part of the external program), and not to the external program itself. But in this case, I think the RecordBuffer instance will be removed from BufferManager.openBuffers, and it will not end up in the 'global' scope in openBuffers. Also, when exitBlock is called for the buffer's block (where the scope was opened), activeScopeDepth will be the same as blockDepth, and it will not be marked as worldScope. But I can't tell how correct this is: if it was removed from BufferManager.openBuffers, and it doesn't end up in the BM.openBuffers global scope, it will never be marked as worldScope.
In this case, can't we just set worldScope to true in RecordBuffer.finishedImpl when the outermost buffer scope is closing, instead of checking at each exitBlock call? If the buffer survives beyond this point, it will mean it is a persistent procedure buffer in world scope; otherwise it is no longer in use and will be garbage collected, and the value of worldScope won't matter.

This presumes the answer to my question above is that worldScope can only be true for persistent procedure buffers.

Yes, that was the intention when I added worldScope. I think the solution to set this flag in finishedImpl (and back to false if the scope is reopened) should work.

It also presumes that the outermost scope of a "normal" buffer, once closed, cannot be re-opened. If this last assumption is incorrect, we can reset worldScope to false in RecordBuffer.openScope when in the outermost scope.

Example 2 here https://proj.goldencode.com/projects/p2j/wiki/Chapter_21_Record_Buffer_Definition_and_Scoping#Implicit-Scope-Expansion-Examples shows that the buffer's outermost scope can be re-opened more than once. And there are other examples where the scopes are isolated in different (non-nested) blocks, so again the 'outermost' scope can be opened more than once.
#757 Updated by Constantin Asofiei almost 4 years ago
Ovidiu, a call like ScrollableResults.get(0, TempRecord.class) is not valid in FWD, right? We should use ScrollableResults.get()[0] instead, correct?
#758 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin Asofiei wrote:

Ovidiu, a call like ScrollableResults.get(0, TempRecord.class) is not valid in FWD, right? We should use ScrollableResults.get()[0] instead, correct?

Actually, this depends on the context, but in your case ScrollableResults.get()[0] must be used. ScrollableResults.get(int, Class) will return the column from the original SQL result. This is useful when ScrollableResults is the result of "non-typed" queries; that is never a TempRecord. You need to invoke ScrollableResults.get() to have the result row processed, obtain the array of Record-s on the current row, and extract the first element ([0]).
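To make the distinction concrete, here is a hedged stand-in (simplified toy classes, not FWD's real ScrollableResults/Record API) contrasting the raw-column accessor with the hydrated-record accessor:

```java
// Illustrative stand-in for the distinction described above: get(int, Class)
// reads a raw column of the current SQL row, while get() returns the
// hydrated Record instances for that row. The real FWD classes are richer.
public class ScrollableResultsSketch
{
   static final class Record
   {
      final String table;
      Record(String table) { this.table = table; }
   }

   static final class ScrollableResults
   {
      private final Object[] sqlRow;      // raw SQL columns of the current row
      private final Record[] hydrated;    // processed records of the current row

      ScrollableResults(Object[] sqlRow, Record[] hydrated)
      {
         this.sqlRow   = sqlRow;
         this.hydrated = hydrated;
      }

      /** Raw column from the underlying SQL row; never a hydrated record. */
      <T> T get(int col, Class<T> cls) { return cls.cast(sqlRow[col]); }

      /** Hydrated Record instances for the current row. */
      Record[] get() { return hydrated; }
   }

   public static void main(String[] args)
   {
      ScrollableResults rs =
         new ScrollableResults(new Object[] { 42L }, new Record[] { new Record("tt1") });

      Long   recid  = rs.get(0, Long.class);   // raw column value
      Record record = rs.get()[0];             // processed record, as recommended above

      System.out.println(recid + " " + record.table);
   }
}
```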
#759 Updated by Constantin Asofiei almost 4 years ago
Ovidiu, one more thing please: TemporaryBuffer.copyDMO no longer works in 4011b, as there are no declared fields in the DMO impl class (obviously). What's the best way to copy the source DMO to the destination DMO? Is dst.copy(src) enough?
#760 Updated by Ovidiu Maxiniuc almost 4 years ago
Actually, no.
IIRC, that method will work with different DMO definitions: the method worked when passing two temp-tables with the same fields but in a different order. Doing dst.copy(src) will blindly copy the fields assuming the order is the same. I think the most performant way to write this is to iterate the source and use RecordMeta.getIndexOfProperty() to obtain the target index in the destination, but the usage of this method is not encouraged, as documented.
#761 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:

Actually, no. IIRC, that method will work with different DMO definitions: the method worked when passing two temp-tables with the same fields but in a different order. Doing dst.copy(src) will blindly copy the fields assuming the order is the same. I think the most performant way to write this is to iterate the source and use RecordMeta.getIndexOfProperty() to obtain the target index in the destination, but the usage of this method is not encouraged, as documented.

How will extent fields work with RecordMeta.getIndexOfProperty()? The previous implementation of copyDMO was not processing extent fields at all.

I want to leave its behavior the same, and process the fields in their 'order' (as TableMapper.getLegacyOrderedList returns them). Because I see that this is used not only for copying between the after-table and before-table, but also, I think, between the data-source table and the destination temp-table.
#762 Updated by Constantin Asofiei almost 4 years ago
Eric, I've fixed RecordBuffer.copyDMO (including extent fields), but I need some input on how you would like things to look.

- I rely on srcMeta.getIndexOfProperty(srcFld.getJavaName()) to get an index into the RecordMeta.getPropertyMeta properties. I understand this is expensive... but I'm not sure of another approach.
- For getting the value from the source DMO, I have srcProp.getDataHandler().getField(src, srcPropIdx);, where src is the source Record instance and srcPropIdx is the index into the Record.data field (obtained via getIndexOfProperty).
- For setting the value into the dest DMO, I currently use dstProp.getDataHandler().setField(dst.getData(), srcVal, dstPropIdx, dstProp);, and I've changed Record.getData(PropertyMapper) to remove its argument. As I understand, you don't want to allow this kind of low-level access. Instead, I'm thinking to add a DataHandler.setField(Record, Object val, int offset) and implement this in all sub-classes. But this will also require making all Record _set-prefixed setter methods public.
#763 Updated by Greg Shah almost 4 years ago
I wonder if we are missing out on an opportunity caused by the 4GL behavior here. It is my understanding that the copying from table to table (through BUFFER-COPY
or via table parameters) is allowed in the 4GL as long as the two tables have compatible structure. That means that the number, order and type of the fields is the same, even if the names are different.
I suspect that the reason this works in the 4GL is because they are doing a low level copy based on that structure itself, rather than a slower field-by-field access.
Why can't we do the same thing? We have access to the low-level record structure of each buffer. Can't we just iterate through the internal arrays of data and attempt to copy the contents at every index position into the same index position in the target array? I presume we can easily check that the array lengths match and, as each field is processed, check that the types are the same (or compatible if that is an option). Presumably, the 4GL fails if this is not the case and we would as well. But the common case is that it is not going to fail and we can just copy the data very quickly.
Sorry for the stupid idea, if I'm missing something here.
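Greg's structural-copy idea can be sketched as follows (hedged: the helper below operates on plain Object[] arrays standing in for the BaseRecord.data arrays, with a slot-by-slot type compatibility check and a boolean fallback signal; it is an illustration, not the FWD implementation):

```java
import java.util.Arrays;

// Hedged sketch of the structural copy idea: when two records have
// compatible data arrays (same length, compatible slot types), copy
// index-by-index instead of matching fields by name through reflection.
public class FastRecordCopy
{
   /**
    * Copy src into dst if the layouts are compatible; returns false so the
    * caller can fall back to the slower field-by-field path.
    */
   static boolean tryFastCopy(Object[] src, Object[] dst)
   {
      if (src.length != dst.length)
      {
         return false;
      }
      for (int i = 0; i < src.length; i++)
      {
         // per-slot type check; null slots are always compatible
         if (src[i] != null && dst[i] != null &&
             !dst[i].getClass().isAssignableFrom(src[i].getClass()))
         {
            return false;
         }
      }
      System.arraycopy(src, 0, dst, 0, src.length);
      return true;
   }

   public static void main(String[] args)
   {
      Object[] src = { 1, "abc", 2.5 };
      Object[] dst = new Object[3];

      System.out.println(tryFastCopy(src, dst));   // true: layouts compatible
      System.out.println(Arrays.toString(dst));
   }
}
```

As the surrounding discussion notes, the hard part is not the copy itself but what it bypasses: assign triggers and validation, which is why the fast path might be safe only for temp-tables without inherited triggers.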
#764 Updated by Ovidiu Maxiniuc almost 4 years ago
Constantin,
Please verify whether

define temp-table tt1 field f1 as char field f2 as int.

and

define temp-table tt2 field f2 as int field f1 as char.

are compatible in 4GL, i.e., whether buffer-copy can be applied to them. I remember they are, but I am not very sure.
#765 Updated by Eugenie Lyzenko almost 4 years ago
The 4011b rev 11555 runtime does not work with the previously converted application/DB. Do I have to reconvert/re-import?

com.goldencode.p2j.cfg.ConfigurationException: Initialization failure
   at com.goldencode.p2j.main.StandardServer.hookInitialize(StandardServer.java:2083)
   at com.goldencode.p2j.main.StandardServer.bootstrap(StandardServer.java:999)
   at com.goldencode.p2j.main.ServerDriver.start(ServerDriver.java:483)
   at com.goldencode.p2j.main.CommonDriver.process(CommonDriver.java:444)
   at com.goldencode.p2j.main.ServerDriver.process(ServerDriver.java:207)
   at com.goldencode.p2j.main.ServerDriver.main(ServerDriver.java:860)
Caused by: java.lang.NoSuchFieldError: FULL_VERSION
   at com.goldencode.p2j.persist.dialect.H2Helper.getH2Version(H2Helper.java:138)
   at com.goldencode.p2j.persist.DatabaseManager.initialize(DatabaseManager.java:1490)
   at com.goldencode.p2j.persist.Persistence.initialize(Persistence.java:864)
   at com.goldencode.p2j.main.StandardServer$11.initialize(StandardServer.java:1244)
   at com.goldencode.p2j.main.StandardServer.hookInitialize(StandardServer.java:2079)
   ... 5 more
#766 Updated by Eric Faulhaber almost 4 years ago
Eugenie Lyzenko wrote:

The 4011b rev 11555 runtime does not work with the previously converted application/DB. Do I have to reconvert/re-import?

No, this is not about conversion. I understand Stanislav made this change due to an API change in H2 v1.4.200.

Do you have fwd-h2-1.4.200-20200710.jar in your build/lib/ directory?
#767 Updated by Eugenie Lyzenko almost 4 years ago
Eric Faulhaber wrote:

Eugenie Lyzenko wrote:

The 4011b rev 11555 runtime does not work with the previously converted application/DB. Do I have to reconvert/re-import?

No, this is not about conversion. I understand Stanislav made this change due to an API change in H2 v1.4.200.

Do you have fwd-h2-1.4.200-20200710.jar in your build/lib/ directory?

Yes. The problem was in old 1.4.197* jar files left in the application deploy/lib. They should be manually removed, in addition to running deploy.prepare, to avoid the init failure above.
#768 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Eric, I've fixed RecordBuffer.copyDMO (including extent fields), but I need some input on how you would like things to look.

- I rely on srcMeta.getIndexOfProperty(srcFld.getJavaName()) to get an index into the RecordMeta.getPropertyMeta properties. I understand this is expensive... but I'm not sure of another approach.
- For getting the value from the source DMO, I have srcProp.getDataHandler().getField(src, srcPropIdx); where src is the source Record instance and srcPropIdx is the index into the Record.data field (obtained via getIndexOfProperty).
- For setting the value into the dest DMO, I currently use dstProp.getDataHandler().setField(dst.getData(), srcVal, dstPropIdx, dstProp); and I've changed Record.getData(PropertyMapper) to remove its argument. As I understand, you don't want to allow this kind of low-level access. Instead, I'm thinking to add a DataHandler.setField(Record, Object val, int offset) and implement this in all sub-classes. But this will also require making all Record _set-prefixed setter methods public.
RecordBuffer.copyDMO is something I want to replace with something more efficient at a lower level. I'm not sure we can for persistent tables, without a significant refactoring of the code which manages assign triggers. However, I think we can use a more efficient, low-level copy for temp-tables, as long as a temp-table defined LIKE a persistent table does not pick up the persistent table's assign triggers. If it does, maybe we detect that and fall back to using copyDMO.
copyDMO is too slow for the simple temp-table copying that we need to do here. It uses reflection; I thought it was using setter methods to invoke the invocation handler to set the DMO's properties, but it appears it was changed at some point to call the proxies' field set methods directly using reflection. I can't see how this is working at all, now that we're not using POJOs with fields anymore. This seems problematic even before 4011a, since it bypasses the invocation handler and all of its assign trigger logic. Also, I'm not 100% sure the matching algorithm leading up to it is correct (IIRC, we are still using names as part of the match), though I suppose you are calling it directly from a different call path.
So, copyDMO needs review anyway, as it may be broken, but that is outside the scope of your optimization effort. You mention you fixed it. Is this what you meant?

Since this temp-table copy is a common operation, I would like to support it at a low level (i.e., below TemporaryBuffer, in BaseRecord or TempRecord). I need to review the existing logic to better understand the requirements. Where is the copy logic you are replacing?
#769 Updated by Eric Faulhaber almost 4 years ago
Greg Shah wrote:
I wonder if we are missing out on an opportunity caused by the 4GL behavior here. It is my understanding that the copying from table to table (through BUFFER-COPY or via table parameters) is allowed in the 4GL as long as the two tables have compatible structure. That means that the number, order and type of the fields is the same, even if the names are different. I suspect that the reason this works in the 4GL is that they are doing a low-level copy based on that structure itself, rather than a slower field-by-field access.

Why can't we do the same thing? We have access to the low-level record structure of each buffer. Can't we just iterate through the internal arrays of data and attempt to copy the contents at every index position into the same index position in the target array? I presume we can easily check that the array lengths match and, as each field is processed, check that the types are the same (or compatible, if that is an option). Presumably, the 4GL fails if this is not the case and we would as well. But the common case is that it is not going to fail and we can just copy the data very quickly.

Sorry for the stupid idea, if I'm missing something here.
Greg, I would like to get to something very close to what you are suggesting. Our array is simply defined as Object[] data, in BaseRecord. The casting to/from the correct data types is done in the type-aware getter and setter methods in Record, a subclass of BaseRecord. The application-specific DMO is aware of the offsets assigned to each field, and its getters/setters invoke the type-specific getters/setters in Record, passing in the correct offsets (which are baked into the DMO definition).
At runtime, we read metadata from the DMO interfaces which allows the RTE to understand the structure of the data Object array. I think we can use this information to determine whether two temp-tables "match", in order to perform a fast, low-level copy. I'm not sure of all the rules which must be applied to determine a match, but we must be doing this at the higher level already, albeit with reflection and more overhead.
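The structure-based copy discussed above can be sketched roughly as follows. This is a minimal illustration only, assuming a record type backed by an Object[] data array with parallel type metadata; the SimpleRecord and fastCopy names are hypothetical stand-ins, not the actual FWD API:

```java
// Hypothetical sketch: if the field counts and types of two records line up,
// the copy is a single bulk pass over the backing arrays, with no reflection
// and no per-field name lookups.
public class FastCopySketch
{
   /** Minimal stand-in for a record backed by an Object[] data array. */
   public static class SimpleRecord
   {
      public final Object[] data;
      public final Class<?>[] types;

      public SimpleRecord(Class<?>[] types)
      {
         this.types = types;
         this.data  = new Object[types.length];
      }
   }

   /**
    * Copy src into dst if their structures match (same length, same types).
    * @return true if the fast copy was performed; false means the caller must
    *         fall back to the slower, field-by-field path.
    */
   public static boolean fastCopy(SimpleRecord src, SimpleRecord dst)
   {
      if (src.types.length != dst.types.length)
      {
         return false;
      }
      for (int i = 0; i < src.types.length; i++)
      {
         if (src.types[i] != dst.types[i])
         {
            // incompatible structure; use the slow path instead
            return false;
         }
      }
      // structures match: one bulk copy of the data array
      System.arraycopy(src.data, 0, dst.data, 0, src.data.length);
      return true;
   }
}
```

The structure check mirrors the 4GL compatibility rule described above (same number, order and type of fields), while the copy itself is a single System.arraycopy.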
#770 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:
Constantin,
Please verify whether
[...]
and
[...]
are compatible in 4GL, i.e., buffer-copy can be applied to them. I remember they are, but am not very sure.
Yes, BUFFER-COPY works:
define temp-table tt1 field f1 as char field f2 as int.
define temp-table tt2 field f2 as int field f1 as char.

create tt1.
tt1.f1 = "a1".
tt1.f2 = 1.

create tt2.
buffer-copy tt1 to tt2.
message tt2.f1 tt2.f2.
#771 Updated by Ovidiu Maxiniuc almost 4 years ago
Unfortunately, we cannot use dst.copy(src) as the order of the fields is not the same, even if the sets of fields are equal. The field-by-name lookup must be implemented, even if the assignment will certainly not use reflection.
#772 Updated by Constantin Asofiei almost 4 years ago
Ovidiu Maxiniuc wrote:
Unfortunately, we cannot use dst.copy(src) as the order of the fields is not the same, even if the sets of fields are equal. The field-by-name lookup must be implemented, even if the assignment will certainly not use reflection.
Keep in mind that RecordBuffer.copyDMO is used only from these cases:

BufferImpl.mergeChangesImpl(BufferImpl, boolean)
BufferImpl.rejectRowChanges() (2 matches)
TemporaryBuffer.copyChanges(TemporaryBuffer, TemporaryBuffer, TemporaryBuffer, TemporaryBuffer) (2 matches)
TemporaryBuffer.copyParentUnchangedRecords(BufferImpl, BufferImpl, BufferImpl, String)
TemporaryBuffer.delete(Supplier&lt;logical&gt;, Supplier&lt;character&gt;) (2 matches)
TemporaryBuffer.rejectChanges(BufferImpl, BufferImpl) (2 matches)
In at least some of these cases, the source is the after-table and the destination is the before-table. In this case, the tables match 100% and we can avoid the field-by-name lookup.
#773 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
So, copyDMO needs review anyway, as it may be broken, but that is outside the scope of your optimization effort. You mention you fixed it. Is this what you meant?
This is used when copying the after-table record to the before-table, for example when a record is deleted. So this is part of #4055, in the sense that I had to fix issues with before-table support in 4011b.
Since this temp-table copy is a common operation, I would like to support it at a low level (i.e., below TemporaryBuffer, in BaseRecord or TempRecord). I need to review the existing logic to better understand the requirements. Where is the copy logic you are replacing?
For copying the after-table record to the before-table, I can use something like this, as both tables match exactly in terms of schema:
if (fast)
{
   // copy the source 'data' directly into the destination
   Long pk = dst.primaryKey();
   Integer multiplex = null;
   if (dst instanceof TempRecord)
   {
      multiplex = ((TempRecord) dst)._multiplex();
   }
   dst.copy(src);
   dst.primaryKey(pk);
   if (dst instanceof TempRecord)
   {
      ((TempRecord) dst)._multiplex(multiplex);
   }
   return true;
}
For the other cases, I changed the code like this:
RecordMeta srcMeta = src._getRecordMeta();
RecordMeta dstMeta = dst._getRecordMeta();
PropertyMeta[] srcPropMeta = srcMeta.getPropertyMeta(false);
PropertyMeta[] dstPropMeta = dstMeta.getPropertyMeta(false);

List&lt;TableMapper.LegacyFieldInfo&gt; srcFlds = TableMapper.getLegacyOrderedList(srcBuffer, true);
List&lt;TableMapper.LegacyFieldInfo&gt; dstFlds = TableMapper.getLegacyOrderedList(dstBuffer, true);

int srcNumFields = srcBuffer._numFields();
if (srcNumFields != dstBuffer._numFields())
{
   return false;
}

for (int i = 0; i &lt; srcNumFields; i++)
{
   TableMapper.LegacyFieldInfo srcFld = srcFlds.get(i);
   TableMapper.LegacyFieldInfo dstFld = dstFlds.get(i);
   if (srcFld.getDataType() != dstFld.getDataType())
   {
      return false;
   }

   // TODO: improve this
   int srcPropIdx = srcMeta.getIndexOfProperty(srcFld.getJavaName());
   int dstPropIdx = dstMeta.getIndexOfProperty(dstFld.getJavaName());
   PropertyMeta srcProp = srcPropMeta[srcPropIdx];
   PropertyMeta dstProp = dstPropMeta[dstPropIdx];
   if (srcProp.getExtent() != dstProp.getExtent())
   {
      return false;
   }

   if (srcProp.getExtent() == 0)
   {
      BaseDataType srcVal = srcProp.getDataHandler().getField(src, srcPropIdx);
      dstProp.getDataHandler().setField(dst.getData(), srcVal, dstPropIdx, dstProp);
   }
   else
   {
      for (int k = 0; k &lt; srcProp.getExtent(); k++)
      {
         BaseDataType srcVal = srcProp.getDataHandler().getField(src, srcPropIdx + k);
         dstProp.getDataHandler().setField(dst.getData(), srcVal, dstPropIdx + k, dstProp);
      }
   }
}
As I mentioned above, I had to change Record.getData() to expose the data[] array always. Plus, getIndexOfProperty is not supposed to be used, AFAICT.
What other way can this be rewritten? IMO I'd add a DataHandler.setField(Record, Object, offset), and let all sub-classes implement this.
#774 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Eric Faulhaber wrote:
Since this temp-table copy is a common operation, I would like to support it at a low level (i.e., below TemporaryBuffer, in BaseRecord or TempRecord). I need to review the existing logic to better understand the requirements. Where is the copy logic you are replacing?

For copying the after-table record to the before-table, I can use something like this, as both tables match exactly in terms of schema:
[...]
We would like to make this fast, low-level copy the common case (more common than just having the same exact DMO class as the source and destination). Greg thought we could use a tactic similar to the one we currently use to specify a signature shorthand for procedure call parameters (e.g., "IIIO").
If the source and destination DMO signatures match and there are no assign triggers, we would do the fast copy. Otherwise we fall back to the slower mode. For BUFFER-COPY cases, we would also need to consider include/exclude field lists.
The signature could be something like a letter for the data type, followed optionally by a number (the extent value, if the field is an extent field). The letter would be the same as is used by the JVM to represent the corresponding primitive (e.g., I for int (integer), J for long (int64), etc.). For types with no corresponding primitive, we would just choose a unique letter which makes sense. LOBs I suppose would rule out the fast copy.
We could encode the presence of assign triggers into the signature as well, although in the long run, we will want to refactor the assign trigger firing logic to work with the fast copy.
The signatures would be composed at startup, when we are registering the DMOs and analyzing their annotations, and would be part of the DMO runtime metadata (DmoMeta or RecordMeta).
Based on the types of copies allowed between DMOs which do not match exactly, can you think of ways we could apply this technique more broadly than only for an exact signature match?
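The signature scheme described above might look roughly like this. This is only a sketch under the stated assumptions; the FieldDesc type and the specific letter assignments are illustrative, not the actual FWD metadata API:

```java
import java.util.List;

// Hypothetical sketch of composing a per-DMO signature string: one JVM-style
// letter per field, with the extent appended for extent fields. Fast-copy
// eligibility then reduces to a cheap string comparison per DMO pair.
public class SignatureSketch
{
   /** Minimal stand-in for per-field DMO metadata. */
   public static class FieldDesc
   {
      public final char typeLetter;   // e.g. 'I' = integer, 'J' = int64, 'C' = character
      public final int  extent;       // 0 for scalar fields

      public FieldDesc(char typeLetter, int extent)
      {
         this.typeLetter = typeLetter;
         this.extent     = extent;
      }
   }

   /** Compose the signature once, at DMO registration time. */
   public static String signature(List<FieldDesc> fields)
   {
      StringBuilder sb = new StringBuilder();
      for (FieldDesc f : fields)
      {
         sb.append(f.typeLetter);
         if (f.extent > 0)
         {
            sb.append(f.extent);
         }
      }
      return sb.toString();
   }

   /** Signatures match exactly => the fast, structural copy may be used. */
   public static boolean fastCopyEligible(String srcSig, String dstSig)
   {
      return srcSig.equals(dstSig);
   }
}
```

For example, a table with a character field followed by an integer extent-3 field would yield "CI3" under this illustrative encoding; assign triggers or include/exclude lists would still have to be checked separately, as noted above.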
#775 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
For copying the after-table record to before-table, I can use something like this, as both table match exactly in terms of schema:
[...]
Are the instanceof checks needed? Isn't the destination DMO always an instance of TempRecord?

Does the destination record need to have a copy of the source DMO's current meta-state (see BaseRecord.deepCopy), or is this meaningless for the use of this copy downstream?
#776 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Constantin Asofiei wrote:
For copying the after-table record to the before-table, I can use something like this, as both tables match exactly in terms of schema:
[...]

Are the instanceof checks needed? Isn't the destination DMO always an instance of TempRecord?
For current cases, yes.
Does the destination record need to have a copy of the source DMO's current meta-state (see BaseRecord.deepCopy), or is this meaningless for the use of this copy downstream?
No, the records are from different tables and they work separately.
#777 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
As I mentioned above, I had to change Record.getData() to expose the data[] array always. Plus, getIndexOfProperty is not supposed to be used, AFAICT.
Well, I was trying to get rid of all cases where we have to look up values in maps with string keys during low-level operations, since doing this is slow. The idea was to have a well-defined order of the properties we could rely on to conduct low-level operations. However, I don't see a way around this in this case (at least in the short term), since we only have the names of the fields to work with, and they are potentially out of order.
However, this copy algorithm is called potentially for multiple records between the same pair of source and destination DMO types, right? Can't we figure out the mapping of fields (i.e., their offset in each DMO) once, outside of the result set loop (or lazily, in the first pass), and apply this mapping for every record that is copied? Accessing a pair of offsets in an array will be much faster than doing the name lookup for every field, duplicated for every record being copied.
What other way can this be rewritten? IMO I'd add a DataHandler.setField(Record, Object, offset), and let all sub-classes implement this.
I agree with your idea to add the DataHandler.setField variant. Would this avoid the need to make the data array publicly accessible?
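Eric's earlier suggestion in this comment, resolving the name-based field mapping once per (source, destination) DMO pair and reusing it for every record, can be sketched like this. The class and method names are hypothetical illustrations, not FWD API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: the string lookups happen exactly once per DMO pair;
// every record copied afterwards uses only array indexing.
public class OffsetMappingSketch
{
   /**
    * Compute mapping[i] = offset in dst of the field named srcNames[i],
    * or -1 if the destination has no field with that name. This is the
    * only place name-based lookups occur.
    */
   public static int[] buildMapping(String[] srcNames, String[] dstNames)
   {
      Map<String, Integer> dstIndex = new HashMap<>();
      for (int i = 0; i < dstNames.length; i++)
      {
         dstIndex.put(dstNames[i], i);
      }
      int[] mapping = new int[srcNames.length];
      for (int i = 0; i < srcNames.length; i++)
      {
         Integer dst = dstIndex.get(srcNames[i]);
         mapping[i] = (dst == null) ? -1 : dst;
      }
      return mapping;
   }

   /** Per-record copy: pure array indexing, no name lookups. */
   public static void copyRecord(Object[] srcData, Object[] dstData, int[] mapping)
   {
      for (int i = 0; i < mapping.length; i++)
      {
         if (mapping[i] >= 0)
         {
            dstData[mapping[i]] = srcData[i];
         }
      }
   }
}
```

The mapping would be built lazily on the first record (or outside the result set loop) and reused, so the per-field name lookup cost is amortized across all copied records.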
#778 Updated by Eric Faulhaber almost 4 years ago
Wait, I just looked at this more closely. Is there any functional reason we need to convert to/from BDT instances? If we are just trying to produce a copy BaseRecord with equivalent data, this is unnecessary. So, I think the DataHandler may not be necessary at all.
Why not create an array which maps the PropertyMeta objects needed for the copy, in source DMO property order? Pass this to a new public static void copy(BaseRecord from, BaseRecord to, PropertyMeta[] copyMap) API, and let BaseRecord do the low-level copying of elements in the data arrays.
LE: sorry, two-dimensional array is not needed. I was writing down a different idea and forgot to remove this.
#779 Updated by Eric Faulhaber almost 4 years ago
A question which remains in my mind w.r.t. the copy operations is about LOBs. Will this work for LOBs in their current implementation? Is it just the reference to the LOB which we need to copy, or do we need to reference a new instance of a copied LOB in the copy DMO? This is not an issue for all the other data types, since the data elements are references to immutable objects.
#780 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
A question which remains in my mind w.r.t. the copy operations is about LOBs. Will this work for LOBs in their current implementation? Is it just the reference to the LOB which we need to copy, or do we need to reference a new instance of a copied LOB in the copy DMO? This is not an issue for all the other data types, since the data elements are references to immutable objects.
BLOBs are kept in the Record.data as a byte[] - which, as you mention, is mutable. CLOBs are kept as String. So these need special processing when copying the BaseRecord.data via System.arraycopy.
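The LOB handling described above can be sketched as follows. This is an illustration under the assumptions just stated (BLOBs stored as byte[], CLOBs as immutable String); the class name is hypothetical:

```java
// Hypothetical sketch: a bulk System.arraycopy of the data array would alias
// mutable byte[] BLOB values between the two records, so BLOB entries are
// deep-copied after the bulk copy. String-backed CLOBs are immutable and can
// safely share the reference, as can the other immutable data elements.
public class LobCopySketch
{
   public static Object[] copyData(Object[] srcData)
   {
      Object[] dstData = new Object[srcData.length];
      System.arraycopy(srcData, 0, dstData, 0, srcData.length);
      for (int i = 0; i < dstData.length; i++)
      {
         if (dstData[i] instanceof byte[])
         {
            // deep-copy the BLOB so the records do not share mutable state
            dstData[i] = ((byte[]) dstData[i]).clone();
         }
      }
      return dstData;
   }
}
```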
#781 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Wait, I just looked at this more closely. Is there any functional reason we need to convert to/from BDT instances? If we are just trying to produce a copy BaseRecord with equivalent data, this is unnecessary. So, I think the DataHandler may not be necessary at all.
Yes, there is no need to go through BDT instances.
Why not create an array which maps the PropertyMeta objects needed for the copy, in source DMO property order? Pass this to a new public static void copy(BaseRecord from, BaseRecord to, PropertyMeta[] copyMap) API, and let BaseRecord do the low-level copying of elements in the data arrays.
Thanks, this should work.
#782 Updated by Constantin Asofiei almost 4 years ago
Constantin Asofiei wrote:
Does the destination record need to have a copy of the source DMO's current meta-state (see BaseRecord.deepCopy), or is this meaningless for the use of this copy downstream?

No, the records are from different tables and they work separately.
But I think we should mark all properties in the destination record as changed.
#783 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Constantin Asofiei wrote:
Does the destination record need to have a copy of the source DMO's current meta-state (see BaseRecord.deepCopy), or is this meaningless for the use of this copy downstream?

No, the records are from different tables and they work separately.

But I think we should mark all properties in the destination record as changed.
We actually don't use the CHANGED state for anything at the property level at this time. Instead, the dirtyProps bit set is used to track touched properties (not exactly the same as changed).
If the copied records need to go through the normal validation process, they will need more state to be copied. If they bypass validation and are saved directly, then they will not.
How are they used downstream from the copy? My current understanding is that they are persisted immediately, which will wipe out the NEW and NEEDS_VALIDATION record state, and clear the dirty state of the properties. As such, copying state would not be needed.
#784 Updated by Eric Faulhaber almost 4 years ago
Constantin Asofiei wrote:
Eric Faulhaber wrote:
Why not create an array which maps the PropertyMeta objects needed for the copy, in source DMO property order? Pass this to a new public static void copy(BaseRecord from, BaseRecord to, PropertyMeta[] copyMap) API, and let BaseRecord do the low-level copying of elements in the data arrays.

Thanks, this should work.
BTW, in case it was not clear, I meant for this API to be a utility method in BaseRecord, so we do not have to work directly with the data array in TemporaryBuffer.
#785 Updated by Constantin Asofiei almost 4 years ago
Eric Faulhaber wrote:
Constantin Asofiei wrote:
Eric Faulhaber wrote:
Why not create an array which maps the PropertyMeta objects needed for the copy, in source DMO property order? Pass this to a new public static void copy(BaseRecord from, BaseRecord to, PropertyMeta[] copyMap) API, and let BaseRecord do the low-level copying of elements in the data arrays.

Thanks, this should work.
BTW, in case it was not clear, I meant for this API to be a utility method in BaseRecord, so we do not have to work directly with the data array in TemporaryBuffer.
copyDMO uses the properties as returned by TableMapper.getLegacyOrderedList, which sorts them like this:

tinfo.likeSequential ? LegacyFieldInfo::getOrder : LegacyFieldInfo::getPosition

I assume this was added with a specific case in mind. So, I need to sort the PropertyMeta in the same way. And BaseRecord.copy will:
- check if the number of properties from source and destination is the same
- walk the properties and check their data type + extent - if they don't match, fail
- otherwise, perform the full copy

BTW, the previous code was leaving a 'partial copy' if a match was not found during field iteration. I don't know how correct that is.
#786 Updated by Eric Faulhaber almost 4 years ago
Ovidiu, based on this comment in Persister.update:

if (updateCount == 0 &amp;&amp; LOG.isLoggable(Level.WARNING))
{
   // TODO: lower error level maybe? This is naturally issued when a record fails validation of a
   // unique index.
   LOG.log(Level.WARNING, "Failed to UPDATE #" + dmo.primaryKey() + " of " + dmo.getClass().getName() + ".");
}
...would you expect this warning always to be followed up by the legacy unique constraint violation message, if it is due to the record failing validation of a unique index? If that legacy message is not immediately after this in the log, does the above warning indicate an unexpected error?
#787 Updated by Ovidiu Maxiniuc almost 4 years ago
It seems to me that the message is misleading. If the record fails a unique index validation, that line is in fact unreachable, because this condition is reported and caught as an SQLException above.
I think the only reason the counter remains zero is when none of the records match the where predicate of the update statement. And this is certainly an issue, as FWD knows that a record exists with a specific PK (or parent__id/list__index combination for extent fields) and/or _multiplex (for temp-tables), but the database actually cannot match it.
#788 Updated by Eric Faulhaber almost 4 years ago
I have committed 4011b/11564, which skips the index lookup in RAQ.initialize if the table being queried has no legacy index defined. This does not fix a functional bug, but it cuts down on log noise by preventing misleading "Error locating index for sort phrase [%s]" error messages from being logged.
#789 Updated by Eric Faulhaber almost 4 years ago
I committed rev 11575, which defers storing buffers whose scopes are opened in the BufferManager.openBuffers list, until they are initialized. In some applications, this will prevent buffers which are never actually used from being iterated in Commitable callbacks. Constantin, please review (with 11574, which I mistakenly committed before I had cleaned up some code).
#790 Updated by Eric Faulhaber almost 4 years ago
Rev 11584 resolves all but the 2f and 3f parts of the test case in #4011-633.
#791 Updated by Constantin Asofiei over 3 years ago
This code in BufferImpl.fill:

if (isMerge || isReplace)
{
   if (!buffer().prevalidate(isReplace))
   {
      BlockManager.undoNext(); // TODO: how to SILENTLY abort this batch?
   }
}
was replaced with this code:
if (isMerge || isReplace)
{
   boolean validated = false;
   try
   {
      validated = buffer().validate(false);
   }
   catch (ValidationException e)
   {
      // ignore, no 'expected' exception will be thrown
   }
   if (!validated &amp;&amp; isReplace)
   {
      BlockManager.undoNext(); // TODO: how to SILENTLY abort this batch?
   }
}
This is not equivalent to the old prevalidate logic. MERGE and REPLACE fill modes are no longer working properly when there are collisions on the index.

Ovidiu: how easy is it to add back something equivalent?
#792 Updated by Ovidiu Maxiniuc over 3 years ago
The validation has changed a bit. I need more input on this. What exactly is wrong with MERGE and REPLACE fill modes?
#793 Updated by Constantin Asofiei over 3 years ago
Ovidiu, see this test:
def temp-table tt1 field f1 as int field f2 as int index ix1 is unique f1.
def temp-table tt2 field f1 as int field f2 as int.
def var i as int.

def dataset ds1 for tt1.
def data-source dsrc1 for tt2.

procedure proc2.
   def input parameter dataset for ds1.
   message i ":" tt1.f1 tt2.f1.
   i = i + 1.
end.

buffer tt1:attach-data-source(data-source dsrc1:handle, ?, ?).
buffer tt1:SET-CALLBACK-PROCEDURE("after-row-fill", "proc2", this-procedure).

create tt2. tt2.f1 = 1. tt2.f2 = -1.
create tt2. tt2.f1 = 1. tt2.f2 = -2.
create tt2. tt2.f1 = 2. tt2.f2 = -3.

buffer tt1:fill-mode = "merge". // test with replace, too
dataset ds1:fill().

for each tt1:
   message tt1.f1 tt1.f2. // 4GL shows (1,-1) and (2,-3) for MERGE and (1,-2) and (2,-3) for REPLACE
end.
FWD will abend on the index collision for record #2 in tt1, with (1,-2) values.
#794 Updated by Ovidiu Maxiniuc over 3 years ago
The problem in BufferImpl.fill is not only in the code from note 791, but in previous lines, too:

- the use of dereference for copying the data from one field to another. Beside the big computational load (we must use a direct data copy here, I think), the call will leave the internal data of the DMO in a wrong state. After invoking the setter with the new value, the current record of the buffer state is reset (resetActiveState(), called in RB:11937). Although this seems normal when viewed locally, it is not fine when called from fill(). In this case the changes on the DMO should behave like being grouped in a batch (shouldn't they?). In the end, when dereference() returns, the activeOffset will not mark the last affected property;
- using _originRowid to update the link to the source record. This method is implemented to do the full management of DMO internals, including the update of the record state and individual field status. This means the DMO state is set to CHANGED (even if the DMO legacy data has not actually changed), marking property #1 as dirty and setting activeOffset to 1 (because _ORIGIN_ROWID_DATA_INDEX = 1). The problem here is a bit more sensitive: in this case the management is probably unneeded. However, in other cases, this and the other reserved hidden properties probably need the management. For example, if they have changed but the legacy data has not, the changes will be lost if the DMO is not saved. A second case is the row-state attribute, since it is used as an index component;
- the validation itself. FWD bases all DMO management (including validation) on the internal state: dirty fields, activeOffset, record state. As noted above, all these have erroneous values. Even if all fields/properties were changed in BufferImpl.fill, validateMaybeFlush() will fail to detect the proper changes, leading to incorrect DMO management.
#795 Updated by Eric Faulhaber over 3 years ago
#796 Updated by Ovidiu Maxiniuc over 3 years ago
Constantin,
I replaced the last conditional from the code you posted in note-791 with
// was there an unique conflict with this record?
if (!validated)
{
if (isMerge)
{
// merge mode: ignore it, just skip to next record
BlockManager.undoNext(lbl);
}
else
{
// replace mode: eliminate the competition, then try again on clean ground
try
{
OrmUtils.dropUniqueIndexConflicts(buffer,
buffer.getPersistence(),
buffer.getMultiplexID());
}
catch (PersistenceException e)
{
throw new ErrorConditionException(
"FILL: Failed to make room for new record in REPLACE mode", e);
}
// now there should not be any collisions and the new record should be flushed
}
}
I am running in MERGE mode, and the current record failed validation. In this case, BlockManager.undoNext(lbl); is executed. My problem is that, instead of the next branch, the leave branch is selected and executed, and I do not understand why. Wouldn't it be normal to undo the creation of the current record (from line 7088) and the field-copy operations, and loop with the next record from the source?
#797 Updated by Constantin Asofiei over 3 years ago
Ovidiu Maxiniuc wrote:
We are talking about two different types of 'attributes' of the temp-tables:
- structural (actual data structure, meaning the fields, their types and indexes, probably triggers). These are information strictly related to database persistence;
- decorative: data not used when interacting with persistence layer, but for display or other serialization (SERIALIZE-NAME, NAMESPACE-URI, NAMESPACE-PREFIX, XML-NODE-NAME).
This is from #4011-183. I assume the support for the non-schema attributes of the temp-table and its fields never got completed? In FWD these are part of the 4GL resource, and even if two temp-tables have the same schema, these attributes can be different and must be emitted accordingly.
#798 Updated by Ovidiu Maxiniuc over 3 years ago
Constantin,
Have you encountered this necessity in customer code?
The problem might be more complicated than adding the right API and emitting the chained builders. For example, see the P2JField and LegacyFieldInfo structures. They overlap and might need to be merged. But the idea is that the current APIs and their calls expect the 'decorative' (as I named them) attributes to be consistent for a field. To switch to the new approach, SERIALIZE-NAME, NAMESPACE-URI, NAMESPACE-PREFIX, and XML-NODE-NAME should be removed from these struct-like container classes AND from generated DMOs, and just added to generated buffer definitions. When these attributes are needed, the buffer must be queried, not the DMO, because, as you said, the same DMO will have different attributes when loaded in different buffers.
#799 Updated by Constantin Asofiei over 3 years ago
Ovidiu Maxiniuc wrote:
Have you encountered this necessity in customer code?
Yes. The previous support needs to be added back, in some way or another.
The problem might be more complicated than adding the right API and emitting the chained builders.
Besides that, we need to consider dynamic temp-tables - for these, we no longer emit the attributes at the annotation, and in cases like CREATE-LIKE, we need to copy them from the source temp-table.
For fields, the 'descriptive' attributes are these, at the DEFINE TEMP-TABLE level:

{ [ BGCOLOR expression ]
  [ COLUMN-LABEL label ]
  [ DCOLOR expression ]
  [ DECIMALS n ]
  [ EXTENT n ]
  [ FONT expression ]
  [ FGCOLOR expression ]
  [ FORMAT string ]
  [ HELP help-text ]
  [ INITIAL { constant | { [ constant [ , constant ]... ] } } ]
  [ LABEL label [ , label ]... ]
  [ MOUSE-POINTER expression ]
  [ [ NOT ] CASE-SENSITIVE ]
  [ PFCOLOR expression ]
  [ SERIALIZE-HIDDEN ]
  [ SERIALIZE-NAME serialize-name ]
  [ TTCODEPAGE | COLUMN-CODEPAGE codepage ]
  [ XML-DATA-TYPE string ]
  [ XML-NODE-TYPE string ]
  [ XML-NODE-NAME node-name ]
  { [ view-as-phrase ] }
}

I haven't checked how many are supported via the buffer-field handle resource, but:
- whatever FWD supported previously needs to be added back
- the approach will allow any other remaining attributes to be added easily.
#800 Updated by Ovidiu Maxiniuc over 3 years ago
I tried to keep as much compatibility as I was aware of when I added the new annotations, and even added new attributes. I do not think any of the old attributes stored in the now-dropped LegacyProperty and LegacyTable were lost, since the same TRPL code is used to populate the new ones. The problem must be at another level, more precisely when they are queried. That's why it would be best if you could mail me a code sample from the customer application that is not working any more.
#801 Updated by Constantin Asofiei over 3 years ago
Ovidiu, OK, I see that these are emitted at the Table annotation for the DMO interface. Before that, we had code like this in p2o.xml:

&lt;!-- temp and work tables --&gt;
&lt;rule&gt;
   tmpTabProps = createList(help.textKeyIgnoreCase, 'model', 'temp-name', 'serialize-name', 'before-table')
&lt;/rule&gt;
Now, in p2o_pre.xml, the key for the table and fields is missing serialize-name and such. I'll add these back and reconvert the application.
#802 Updated by Eric Faulhaber over 3 years ago
4011e was merged to trunk and committed as revision 11348.
#803 Updated by Eric Faulhaber over 3 years ago
Trunk revisions 11347.1.316 and 11347.1.318 implemented a mechanism to keep track of the versions of persistent DMOs in the FWD server's memory, to prevent a stale record from being loaded from the session cache after that record has been updated and committed by a separate user session.
This implementation added a requirement to update this version information in a shared clearinghouse object (DmoVersioning) whenever a record is removed from the cache and whenever the cache itself is cleared or the session closes. The initial implementation involves a call into a ConcurrentHashMap, which does at minimum a lookup, a decrement of a reference count in an AtomicIntegerArray, and possibly an entry removal. There was some concern that doing this work (especially the map access) at every cache clear/close would add significant overhead to the FWD ORM.
A reference to the AtomicIntegerArray is held in every cached BaseRecord, so an alternative implementation could be to decrement the reference count only as the cache is cleared/closed, and to leave the removal of entries to a separate reaper thread. This would avoid the potential overhead of performing the ConcurrentHashMap lookup(s) and possible entry removal inline with business logic. I did some testing to see whether there was a meaningful performance advantage to be gained by implementing this more complex, alternative design.
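The trade-off between the two designs can be sketched roughly as follows. This is a minimal illustration only: the class and method names are hypothetical, not the actual DmoVersioning API, and the real implementation uses an AtomicIntegerArray rather than one AtomicInteger per entry.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the two deregistration strategies discussed above.
public class VersionRegistry {
   private final Map<Long, AtomicInteger> refCounts = new ConcurrentHashMap<>();

   // Register a cached record; returns the shared counter a cached record
   // (BaseRecord in FWD) would hold a reference to.
   public AtomicInteger register(long recordId) {
      AtomicInteger count =
         refCounts.computeIfAbsent(recordId, k -> new AtomicInteger());
      count.incrementAndGet();
      return count;
   }

   // Initial design: full inline deregistration on every cache clear/close;
   // this pays for a map lookup, a decrement, and a possible entry removal.
   public void deregisterFull(long recordId) {
      AtomicInteger count = refCounts.get(recordId);
      if (count != null && count.decrementAndGet() <= 0) {
         refCounts.remove(recordId);
      }
   }

   // Alternative design: only decrement through the reference the record
   // already holds; removal of zero-count entries would be deferred to a
   // separate reaper thread (not shown).
   public static void decrementOnly(AtomicInteger held) {
      held.decrementAndGet();
   }

   // Number of tracked entries (for illustration).
   public int size() {
      return refCounts.size();
   }
}
```

The decrement-only path touches no shared map, which is what motivated the comparison; as the measurements below show, the inline map access turned out to be cheap enough that the simpler design was kept.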
The test consisted of running the following code, converted, against a database table containing 100,000 records:
etime(yes).
for each vehicle no-lock:
end.
display etime.
I instrumented the code in Session which loops through the cached records to deregister each one from the DmoVersioning object, to measure the time spent doing so. The initial implementation performs the full deregistration, including the reference count decrement and ConcurrentHashMap access. The alternative implementation performs only the reference count decrement through the BaseRecord reference to the AtomicIntegerArray, leaving the ConcurrentHashMap access presumably to a separate reaper thread, which I did not implement.
I found that regardless of the implementation, the overall time to process the FOR EACH loop averaged around 5.9 sec, and the time spent on the versioning deregistration averaged around 18 ms.
The conclusion is that there is no measurable advantage to implementing the more complex design, so we will stay with the initial implementation.
#804 Updated by Roger Borrello over 3 years ago
Eric Faulhaber wrote:
4011e was merged to trunk and committed as revision 11348.
Was there a FWD revision number change with this merge, like FWD 4.1.0?
#805 Updated by Greg Shah over 3 years ago
We don't want to move to 4.1 before we release 4.0. On the other hand, this is a very major change. Until 4.0 is released, we only have internal users or internal customer testing, both of which will be based on a very specific branch + revno. That is more important than the external version number. Overall, I think we just leave this as 4.0.0 for now.
#806 Updated by Eric Faulhaber over 3 years ago
The performance enhancement to force NO-UNDO mode for all temp-tables (regardless of how they were defined) is implemented in 3821c/11547.
It is disabled by default (meaning normal 4GL behavior: temp-tables are undoable unless defined as NO-UNDO in 4GL code). To enable the override to force all temp-tables to be NO-UNDO, add the following to the persistence section of the directory:
<node class="boolean" name="force-no-undo-temp-tables">
   <node-attribute name="value" value="true"/>
</node>
There are no conversion changes. When this mode is active, a warning is logged by the FWD server the first time a temp-table is encountered which was not defined as NO-UNDO.
#807 Updated by Roger Borrello over 3 years ago
Eric Faulhaber wrote:
It is disabled by default (meaning normal 4GL behavior: temp-tables are undoable unless defined as NO-UNDO in 4GL code). To enable the override to force all temp-tables to be NO-UNDO, add the following to the persistence section of the directory: [...]
Is this a safe setting to add into existing projects? In other words, is there a downside to setting up the directory in this manner?
#808 Updated by Eric Faulhaber over 3 years ago
Roger Borrello wrote:
Is this a safe setting to add into existing projects? In other words, is there a downside to setting up the directory in this manner?
This feature was requested by a development team who have made it a design point to use only NO-UNDO temp-tables in an application, as a performance enhancement. Most applications will not have been written this way, and may have known and unknown dependencies on the default behavior of "regular" temp-tables. I would only enable this with the understanding and express consent of the developers of those applications, who best understand those dependencies. Otherwise, unexpected behavior could result.
#809 Updated by Greg Shah about 3 years ago
Is #4011-774 complete?
#810 Updated by Adrian Lungu about 3 years ago
We have the idea in #4011-774 implemented as DmoSignature.
#811 Updated by Eric Faulhaber about 3 years ago
- Status changed from WIP to Closed
- % Done changed from 0 to 100
#812 Updated by Ovidiu Maxiniuc almost 3 years ago
- Related to Bug #5489: Cleanup / reimplement merge DMO definitions added
#813 Updated by Constantin Asofiei over 1 year ago
I'm editing note #4011-5 to add other tasks. I'll comment when I'm done.
#814 Updated by Constantin Asofiei over 1 year ago
Constantin Asofiei wrote:
I'm editing note #4011-5 to add other tasks. I'll comment when I'm done.
I've finished editing.
#815 Updated by Constantin Asofiei over 1 year ago
I'm editing #4011-5 again, I'll comment when I'm done.
#816 Updated by Constantin Asofiei over 1 year ago
Constantin Asofiei wrote:
I'm editing #4011-5 again, I'll comment when I'm done.
I've finished editing.