Feature #3813
misc DB features part deux
100%
Related issues
History
#1 Updated by Greg Shah over 5 years ago
- Related to Feature #3758: misc database features added
#2 Updated by Greg Shah over 5 years ago
Language Statements
- BUFFER-COMPARE - implement NO-LOBS (already done for the POC?)
- BUFFER-COPY - implement NO-LOBS (already done for the POC?)
- COPY-LOB (will be worked on in #2135 phase 3)
Built-In Function
- DBPARAM - there is a limited implementation in ConnectionManager; the limitation is that we only report the connection options we support, currently: -db (physical database), -dt (database type - always "PROGRESS"), -ld (logical database), -H (host), -S (socket/service), -U (userid), -P (password; actual value not reported), -1 (single client, reported but not enforced currently).
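The reporting side of this can be sketched as follows. This is a minimal illustration only: the class and method names are hypothetical, not FWD's actual ConnectionManager API; the comma-separated "-key value" output format and the masked -P value follow the description above.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch of assembling a DBPARAM-style report from the
// connection options a server retained. Names are illustrative only.
public class DbParamSketch {
   // Options are kept in insertion order so the report is stable.
   private final Map<String, String> options = new LinkedHashMap<>();

   public void put(String key, String value) {
      options.put(key, value);
   }

   // Emit a comma-separated list of "-key value" entries; the -P value
   // is masked, mirroring "actual value not reported" above.
   public String report() {
      return options.entrySet()
                    .stream()
                    .map(e -> "-P".equals(e.getKey())
                              ? "-P *******"
                              : e.getKey() + " " + e.getValue())
                    .collect(Collectors.joining(","));
   }

   public static void main(String[] args) {
      DbParamSketch p = new DbParamSketch();
      p.put("-db", "fwd1");
      p.put("-ld", "fwd1");
      p.put("-P", "secret");
      System.out.println(p.report());
   }
}
```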
Attributes
- BUFFER:AFTER-ROWID (part of ProDataSet support #3809)
- BUFFER:BEFORE-ROWID (part of ProDataSet support #3809)
- BUFFER:ORIGIN-ROWID (part of ProDataSet support #3809)
- BUFFER:AFTER-BUFFER (part of ProDataSet support #3809)
- BUFFER:BEFORE-BUFFER (part of ProDataSet support #3809)
- BUFFER:RECORD-LENGTH (probably just uses the RECORD-LENGTH() function implementation, see #3858)
- BUFFER-FIELD:DEFAULT-STRING (needs implementation - looks like this is just an unformatted string representation of a field's INITIAL value)
- BUFFER-FIELD:DECIMALS (needs implementation)
- QUERY:BASIC-LOGGING (will be worked on with LOG-MANAGER support, see #3853)
Methods
- BUFFER:BUFFER-COMPARE - implement NO-LOBS (already done for the POC?)
- BUFFER:BUFFER-COPY - implement NO-LOBS (already done for the POC?)
- BUFFER:BUFFER-VALIDATE
- BUFFER:RAW-TRANSFER - finish implementation
- QUERY:CREATE-RESULT-LIST-ENTRY (this is already being worked in #3762)
- QUERY:GET-BUFFER-HANDLE
- TEMP-TABLE:CREATE-LIKE
- TEMP-TABLE:READ-XML (this should already be complete from #3809 and #3574)
- TEMP-TABLE:WRITE-XML (this should already be complete from #3809 and #3574)
Built-In Function Usage in WHERE Clauses
New use cases such as ROW-STATE()
and VALID-OBJECT()
need to be handled. Reports need to be reviewed to determine the use cases that need support in PL/Java and which are safe for substitution parameter usage.
#3 Updated by Constantin Asofiei almost 5 years ago
DEFAULT-STRING conversion and runtime stub were added in 3751a rev 11322.
#4 Updated by Constantin Asofiei almost 5 years ago
The BUFFER-VALIDATE method is required for the current large app work.
#5 Updated by Greg Shah over 4 years ago
- Status changed from New to WIP
- Assignee set to Igor Skornyakov
From Eric:
Pending Ovidiu's review, I believe the rest of #3813-2 is now pretty accurate, though I don't know how heavily some of the recently implemented features have been tested. Looks like the main work left is the 2 BUFFER-FIELD attributes (doesn't look like a lot of work); BUFFER:RAW-TRANSFER (don't think that has been touched since you started the implementation); and catching the UDF support up with some more recently implemented built-ins. As your comment in Redmine regarding the latter suggests, it may not be possible/practical to implement some built-ins as UDFs, depending on whether they need access to state from the persistence runtime.
Ovidiu: Could you please review #3813-2 and provide your feedback on the status of these features? I just went through and updated the status as I understand it, but there are some features you've worked on/around recently and I'm not sure where they stand ATM. Specifically:
- TEMP-TABLE:CREATE-LIKE method: it looks like you did some refactoring recently for your dataset work. What is the current status of this method's implementation, both for temp-table and dataset?
- Please comment on the "Built-In Function Usage in Where Clauses" section, to the extent you are aware of new features implemented in the course of the dataset work which may not yet be reflected in our UDF support (assuming such support is sensible/feasible).
From Ovidiu:
The TEMP-TABLE:CREATE-LIKE was renamed (the former name was "createTableLike") because the same method name is used by PDS so they share the same interface. The Dataset variant should be fully functional. I do not recall changing the TempTable actual implementation.
Regarding the 'Built-In Function Usage in WHERE Clauses': support for ROW-STATE was added in 4124a, and in 3809e (based on 4124a) the BEFORE-ROWID, AFTER-ROWID, ERROR, and ERROR-STRING functions/attributes (where applicable) were also added. The current implementation converts these functions/attributes into the appropriate fields/properties, so no support is needed in UDF.
4124a was merged to trunk, but 3809e is not yet in trunk. I expect it in trunk this week.
#6 Updated by Igor Skornyakov over 4 years ago
Which branch should I use for this task?
Thank you.
#7 Updated by Greg Shah over 4 years ago
4335a
#8 Updated by Igor Skornyakov over 4 years ago
The following statement from the Progress documentation regarding the DBPARAM function (see https://docs.progress.com/bundle/abl-reference/page/DBPARAM-function.html) looks incorrect, or at least misleading:
If the CONNECT statement contains the -pf parameter, which refers to a parameter file, the string DBPARAM returns includes the parameters in the file without "-pf" or any reference to the file.
For example, I have a .pf file with multiple parameters, but only two of them are included in the DBPARAM result in addition to the parameters already supported by FWD: -Bt (TEMP-TABLE-BUFFERS) and -fc (SCHEMA-FLD-CACHE-SZ).
I understand that these parameters have meaningful counterparts in FWD.
Of course, I can try to figure out the exact list of parameter file parameters which are considered by DBPARAM. However, I suspect that most of them will have no FWD counterparts either.
My question is: how is the result of the DBPARAM function supposed to be used, or how is it used in our customers' applications?
Thank you.
#9 Updated by Greg Shah over 4 years ago
For example, I have a .pf file with multiple parameters, but only two of them are included in the DBPARAM result in addition to the parameters already supported by FWD: -Bt (TEMP-TABLE-BUFFERS) and -fc (SCHEMA-FLD-CACHE-SZ).
Are you saying that the 4GL includes some but not all of the parameters listed in the .pf file in its DBPARAM result?
I understand that these parameters have meaningful counterparts in FWD.
Not always. Internally, the FWD implementation is very different from the 4GL. For example, the -Bt
(TEMP-TABLE-BUFFERS) and -fc
(SCHEMA-FLD-CACHE-SZ) seem unlikely to map to anything useful in FWD.
I can try to figure out the exact list of the parameter file parameters which are considered by DBPARAM.
Yes, I think this is needed.
However, I suspect that most of them will have no FWD counterparts as well.
Correct. But even a meaningless parameter probably should be included if specified by the user.
How is the result of the DBPARAM function supposed to be used, or how is it used in our customers' applications?
I've seen it used in 2 ways:
- Output in logfiles or debug information for support purposes.
- Parsed to answer specific questions (e.g. to see whether a particular value is being passed).
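The second usage pattern (parsing the result to answer a specific question) might look like this. The helper and the comma-separated "-key value" input format are assumptions for illustration, not FWD code.

```java
// Illustrative only: scanning a DBPARAM-style string for a specific option.
// The input format (comma-separated "-key value" entries, flag options with
// no value) is an assumption for this sketch.
public class DbParamScan {
   public static String optionValue(String dbparam, String key) {
      for (String entry : dbparam.split(",")) {
         String trimmed = entry.trim();
         if (trimmed.equals(key)) {
            return "";               // flag option with no value, e.g. -1
         }
         if (trimmed.startsWith(key + " ")) {
            return trimmed.substring(key.length() + 1);
         }
      }
      return null;                    // option not present
   }

   public static void main(String[] args) {
      String s = "-db fwd1,-ld fwd1,-1,-U fwdb@domain";
      System.out.println(optionValue(s, "-U"));   // value of -U
      System.out.println(optionValue(s, "-RO"));  // null: not present
   }
}
```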
#10 Updated by Igor Skornyakov over 4 years ago
Greg Shah wrote:
For example, I have a .pf file with multiple parameters, but only two of them are included in the DBPARAM result in addition to the parameters already supported by FWD: -Bt (TEMP-TABLE-BUFFERS) and -fc (SCHEMA-FLD-CACHE-SZ).
Are you saying that the 4GL includes some but not all of the parameters listed in the .pf file in its DBPARAM result?
Exactly.
Yes, I think this is needed.
OK. Thank you.
How is the result of the DBPARAM function supposed to be used, or how is it used in our customers' applications?
I've seen it used in 2 ways:
- Output in logfiles or debug information for support purposes.
- Parsed to answer specific questions (e.g. to see whether a particular value is being passed).
I see. Thank you.
#11 Updated by Igor Skornyakov over 4 years ago
Parameters recognized by the DBPARAM function:
| Parameter | Description |
|---|---|
| -1 | Single-user mode |
| -adminport n | AdminServer port |
| -aibufs n | Specifies the number of after-image buffers when running AIW |
| -aistall | Suspends database activity when an empty after-image (AI) file is unavailable |
| -B n | Specifies the number of blocks in the database buffers |
| -bibufs n | Specifies the number of before-image buffers when running BIW |
| -bistall | Sends a message to the log file when the recovery log threshold is reached. Use with -bithold |
| -bithold n | Specifies the maximum size of the recovery log files |
| -Bp n | Requests a number of private read-only buffers |
| -Bpmax n | Requests a maximum number of private buffers |
| -c n | Specifies the number of index place holders or cursors |
| -cache filename | Reads the database schema from a local file instead of the database |
| -classpath pathname | Identifies the Java classpath to use when starting an SQL server |
| -cs n | Changes the maximum number of index levels |
| -ct n | Maximum number of connection retries |
| -DataService ds-name | The name of the DataService to connect through a NameServer to an ODBC, ORACLE, or SQL Server |
| -db name | Physical database name |
| -directio | Use direct I/O to open all files in unbuffered mode |
| -Dsrv params | DataServer parameters |
| -dt db-type | Identifies the database type |
| -fc n | Changes the number (soft limit) of entries in the schema field cache |
| -G n | Specifies the number of seconds before OpenEdge reuses a before-image cluster |
| -groupdelay n | Specifies the number of milliseconds a transaction waits before committing |
| -H host-name | Identifies a remote host |
| -hash n | Specifies the number of hash table entries for the buffer pool |
| -i | No crash protection |
| -L n | Lock table entries |
| -ld logical-dbname | Assigns the logical database name |
| -lkwtmo n | Lock timeout in seconds |
| -Mm n | Specifies the message buffer size for network client/server protocols |
| -N network-type | Network type |
| -n n | Number of users |
| -P password | Password (value not shown) |
| -pinshm | Pin shared memory |
| -properties filename | Identifies the properties file an AdminServer uses when starting a database server or server group |
| -RO | Open a database for read-only access |
| -S service-name or port-number | Service name or port number |
| -servergroup name | Identifies a logical collection of server processes to start |
| -spin n | Specifies the number of spin lock retries before a process pauses |
| -trig dir-name or lib-name | Identifies the directory or library containing the ABL triggers for a database |
| -U userid | Specifies the user ID |
#12 Updated by Igor Skornyakov over 4 years ago
Can anybody help me figure out which of the database connection parameters described in the previous note have direct meaningful counterparts in FWD?
Thank you.
#13 Updated by Eric Faulhaber over 4 years ago
Igor Skornyakov wrote:
Can anybody help me figure out which of the database connection parameters described in the previous note have direct meaningful counterparts in FWD?
Thank you.
Currently, ConnectionManager.connect supports:
-1 (reported, but not yet enforced), -db, -H, -ld, -P, -S, -U
We've considered -RO, but it is not yet implemented.
#14 Updated by Igor Skornyakov over 4 years ago
Eric Faulhaber wrote:
Igor Skornyakov wrote:
Can anybody help me figure out which of the database connection parameters described in the previous note have direct meaningful counterparts in FWD? Thank you.
Currently, ConnectionManager.connect supports: [...]
We've considered -RO, but it is not yet implemented.
I see. Thank you.
I understand that I should add support for the remaining options in the CONNECT statement by just storing their values and adding them to the DBPARAM result.
Is my understanding correct?
Thank you.
#15 Updated by Greg Shah over 4 years ago
- Related to Feature #4175: add support for -1 (single user) option for CONNECT statement added
#16 Updated by Greg Shah over 4 years ago
- Related to Feature #4392: add -pf support for appserver CONNECT() method added
#17 Updated by Greg Shah over 4 years ago
I understand that I should add support for the remaining options in the CONNECT statement by just storing values and adding them to the DBPARAM.
Is my understanding correct?
Yes.
For example, I have a .pf file with multiple parameters, but only two of them are included in the DBPARAM result in addition to the parameters already supported by FWD: -Bt (TEMP-TABLE-BUFFERS) and -fc (SCHEMA-FLD-CACHE-SZ).
One thing that was not clear from #3813-11: which of these items can be specified in the CONNECT but do not appear in DBPARAM?
We also have 2 other highly related tasks (#4175 and #4392). I think it makes sense to go ahead and implement them at this same time. This is the runtime support for -1 and the -pf support.
#18 Updated by Igor Skornyakov over 4 years ago
Greg Shah wrote:
One thing that was not clear from #3813-11: which of these items can be specified in the CONNECT but do not appear in DBPARAM?
I've tested these items only with the configuration (.pf) file. According to the Progress documentation, all of them can be specified in the CONNECT statement. I expect that they will appear in DBPARAM in this case as well. However, I will double-check.
We also have 2 other highly related tasks (#4175 and #4392). I think it makes sense to go ahead and implement them at this same time. This is the runtime support for -1 and the -pf support.
Yes, I saw it. Thank you.
#19 Updated by Igor Skornyakov over 4 years ago
All connection parameters from #3813-11 are accepted in the CONNECT statement and are shown by DBPARAM in this case.
#20 Updated by Greg Shah over 4 years ago
Igor Skornyakov wrote:
All connection parameters from #3813-11 are accepted in the CONNECT statement and are shown by DBPARAM in this case.
Did the original pf file you tested have parameters that are not related to CONNECT? Perhaps that is why they don't show up (unrecognized parms are probably ignored by CONNECT).
#21 Updated by Igor Skornyakov over 4 years ago
Greg Shah wrote:
Igor Skornyakov wrote:
Did the original pf file you tested have parameters that are not related to CONNECT? Perhaps that is why they don't show up (unrecognized parms are probably ignored by CONNECT).
The test .pf files contained parameters which are not db-related. 4GL complains about them (I'm going to reproduce this in FWD), so I commented them out. I understand that the very first .pf file I used for testing also contained ignored parameters.
#22 Updated by Igor Skornyakov over 4 years ago
The conversion of the CONNECT statement looks a little bit strange.
For example, the statement
CONNECT -1 -adminport 8000 -aibufs 1 -aistall -B 10 -bibufs 20 -bistall -bithold 10 -Bp 1 -Bpmax 1 -c 199 -cache E:\testcases\fwd1.bin -classpath c:\ -cs 11 -DataService aaaaaaaaaaa -db E:\testcases\uast\security\fwd1.db -directio -Dsrv aaaa -dt PROGRESS -fc 1000 -G 1 -groupdelay 10 -hash 10 -i -L 1000 -ld fwd1 -lkwtmo 1000 -Mm 400 -n 1 -P secret -pinshm -properties ddd -RO -servergroup grp -spin 100 -trig c:\ -U fwdb@domain.
is converted as
ConnectionManager.connect("-1", "-ld", "fwd1", "-P", "-P secret", "-RO", "-U", "-U fwdb@domain");
Apart from the missing parameters, we see that -P and -U appear twice in the converted statement. Is this done on purpose?
Thank you.
#23 Updated by Eric Faulhaber over 4 years ago
Hm, you're correct, that conversion looks completely wrong. I think the conversion may have been broken along the way, compared to the original implementation. Please see the javadoc for ConnectionManager.connect(Object...)
and the implementation of that method for the expected input.
Essentially, there are two alternatives:
- a single string with the physical database name OR a space delimited set of options which are parsed by downstream code; OR
- multiple parameters in an expected order (without the option qualifiers), each representing a supported option.
In either case, the duplication of the -P
and -U
options is incorrect. However, the dropping of unsupported parameters is expected.
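The two alternatives can be sketched as a dispatch on the argument shape. This is an illustrative stand-in, not the actual ConnectionManager.connect logic; the classification names are invented here.

```java
// Illustrative stand-in for the two input shapes described above;
// not the actual ConnectionManager.connect implementation.
public class ConnectDispatch {
   // Alternative 1: a single string, either a bare physical database name or
   // a space-delimited option list parsed by downstream code.
   // Alternative 2: multiple positional values in a fixed, expected order.
   public static String classify(Object... args) {
      if (args.length == 1 && args[0] instanceof String) {
         String s = ((String) args[0]).trim();
         return s.contains(" ") ? "option-list" : "db-name";
      }
      return "ordered-params";
   }

   public static void main(String[] args) {
      System.out.println(classify("fwd1"));               // single db name
      System.out.println(classify("-db fwd1 -ld fwd1"));  // option list
      System.out.println(classify("fwd1", "localhost"));  // ordered params
   }
}
```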
#24 Updated by Igor Skornyakov over 4 years ago
Eric Faulhaber wrote:
Hm, you're correct, that conversion looks completely wrong. I think the conversion may have been broken along the way, compared to the original implementation. Please see the javadoc for ConnectionManager.connect(Object...) and the implementation of that method for the expected input.
Essentially, there are two alternatives:
- a single string with the physical database name OR a space delimited set of options which are parsed by downstream code; OR
- multiple parameters in an expected order (without the option qualifiers), each representing a supported option.
I see. Thank you. Working on the conversion support of additional parameters.
In either case, the duplication of the -P and -U options is incorrect. However, the dropping of unsupported parameters is expected.
I understand this.
#25 Updated by Eric Faulhaber over 4 years ago
The original, ordered list approach was based on a very limited set of connection options, which were basically the minimum required to connect. As we add support for new options, that approach may no longer be feasible, or may continue to be limited to that original set of options.
Also, there are cases where a connect option string is built up at runtime, or comes from a *.pf
file. In these cases, we have to deal with legacy options which are inappropriate for FWD's design, or which at least need some additional attention to work with FWD's design.
Of course, legacy, internal implementation detail options should just be ignored.
Options having to do with file system or network resources, if relevant in the new environment, are mapped to the new environment. For instance, when we encounter the file path for a physical database, we attempt to map this to a database configuration in the directory. When we encounter a port service name, this is likewise mapped to a resource in the directory.
If any of the newly supported options have similar characteristics, they should be handled in a similar fashion.
#26 Updated by Igor Skornyakov over 4 years ago
Eric Faulhaber wrote:
The original, ordered list approach was based on a very limited set of connection options, which were basically the minimum required to connect. As we add support for new options, that approach may no longer be feasible, or may continue to be limited to that original set of options.
Also, there are cases where a connect option string is built up at runtime, or comes from a
*.pf
file. In these cases, we have to deal with legacy options which are inappropriate for FWD's design, or which at least need some additional attention to work with FWD's design.Of course, legacy, internal implementation detail options should just be ignored.
Options having to do with file system or network resources, if relevant in the new environment, are mapped to the new environment. For instance, when we encounter the file path for a physical database, we attempt to map this to a database configuration in the directory. When we encounter a port service name, this is likewise mapped to a resource in the directory.
If any of the newly supported options have similar characteristics, they should be handled in a similar fashion.
I see my current task as follows: we extend the CONNECT statement conversion and runtime support so that DBPARAM will return a result that is compatible with 4GL. This does not include support for the semantics of the additional options (apart from the -pf option, which should be supported in the sense that parameters from the configuration file will be shown by DBPARAM).
Is my understanding correct?
Thank you.
#27 Updated by Eric Faulhaber over 4 years ago
For the implementation of the DBPARAM builtin function, correct. For those options for which we intend to add support, the semantics will be implemented later, in separate tasks.
#28 Updated by Igor Skornyakov over 4 years ago
Eric Faulhaber wrote:
For the implementation of the DBPARAM builtin function, correct. For those options for which we intend to add support, the semantics will be implemented later, in separate tasks.
Thank you.
#29 Updated by Igor Skornyakov about 4 years ago
I've essentially reworked the CONNECT statement conversion. The only remaining problem I see now involves parameters with values containing a backslash (e.g. "-cache E:\testcases\fwd1.bin"). The backslash and (in some cases) the next character are "eaten" somewhere. I am trying to figure out where this happens and how to fix it.
#30 Updated by Greg Shah about 4 years ago
Isn't this just the fact that the \ is an escape char and will always be eaten? And when used with some characters (e.g. t, n, ...), the next character is eaten as well, because the combination has special meaning (e.g. \t is a tab character).
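The eating behavior can be demonstrated with a minimal escape-processing pass. This is a simplified sketch of generic backslash handling, not FWD's actual lexer: the backslash is always consumed, and recognized pairs collapse to a control character.

```java
// Simplified escape processing: '\' is always consumed; '\t' and '\n'
// collapse to control characters; any other '\x' yields just 'x'.
// Sketch only; FWD's real handling lives in its conversion rules.
public class EscapeSketch {
   public static String unescape(String in) {
      StringBuilder out = new StringBuilder();
      for (int i = 0; i < in.length(); i++) {
         char c = in.charAt(i);
         if (c == '\\' && i + 1 < in.length()) {
            char next = in.charAt(++i);
            switch (next) {
               case 't': out.append('\t'); break;
               case 'n': out.append('\n'); break;
               default:  out.append(next);
            }
         } else {
            out.append(c);
         }
      }
      return out.toString();
   }

   public static void main(String[] args) {
      // In a Windows path, "\t" becomes a tab and "\f" loses its backslash,
      // which is exactly the corruption reported above.
      System.out.println(unescape("E:\\testcases\\fwd1.bin"));
   }
}
```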
#31 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
Isn't this just the fact that the \ is an escape char and will always be eaten? And when used with some characters (e.g. t, n, ...), the next character is eaten as well, because the combination has special meaning (e.g. \t is a tab character).
I think so. However, this means that the conversion will be incorrect in many situations (and, in fact, it is incorrect now for the "CONNECT C:\some\path\somedb." statement). My question is about the processing step in which the escape characters are handled.
#32 Updated by Igor Skornyakov about 4 years ago
It seems that backslashes are processed at a very early stage; I see the eaten chars in a .cache file. Not sure how to deal with this.
#33 Updated by Greg Shah about 4 years ago
This means your project is not configured as a Windows project. See p2j.cfg.xml; <parameter name="opsys" value="WIN32" /> should be set there if it is a Windows project.
String literals are normally processed in convert/literals.rules and they should be emitted using progressToJavaString(), which in TRPL is backed by ExpressionConversionWorker; the actual code is in character.progressToJavaString().
#34 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
This means your project is not configured as a Windows project. See p2j.cfg.xml; <parameter name="opsys" value="WIN32" /> should be set there if it is a Windows project.
Setting "opsys" doesn't help, but <parameter name="unix-escapes" value="false" /> resolves the issue.
String literals are normally processed in convert/literals.rules and they should be emitted using progressToJavaString(), which in TRPL is backed by ExpressionConversionWorker; the actual code is in character.progressToJavaString().
I see. Thank you.
I see. Thank you.
#35 Updated by Greg Shah about 4 years ago
Setting opsys should have implicitly set unix-escapes, so that sounds like a bug. Please figure that out now, because it will hit someone else at some point. The code that implements this is in Configuration.java.
#36 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
Setting opsys should have implicitly set unix-escapes, so that sounds like a bug. Please figure that out now, because it will hit someone else at some point. The code that implements this is in Configuration.java.
The problem is in line 538 of Configuration.java:
parmMap.computeIfAbsent("unix-escapes", x -> String.valueOf(!isWin));
This means that even if an incorrect value of "unix-escapes" was specified before "opsys" (as in my case), it will not be overridden. This seems to be done intentionally, but it can be confusing indeed.
On the other hand, I do not see a reason for using the computeIfAbsent method with a lambda expression which depends only on a local variable of the computeDerivedValues function. I suggest using
parmMap.put("unix-escapes", String.valueOf(!isWin));
instead.
Is it OK?
#37 Updated by Greg Shah about 4 years ago
This means that even if an incorrect value of "unix-escapes" was specified before "opsys" (as in my case), it will not be overridden. This seems to be done intentionally, but it can be confusing indeed.
Correct. This is intentional. The idea is that it is not appropriate to override an explicit value specified by the user. So, if it is present we will accept whatever value you explicitly specified.
If not present, then we set it to a sensible default.
I suggest using
...
instead.
Is it OK?
No, I think it is working as designed.
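The contract described here is exactly what java.util.Map.computeIfAbsent provides: the mapping function runs only when the key is absent, so an explicit earlier setting survives. A minimal demonstration:

```java
import java.util.HashMap;
import java.util.Map;

public class ComputeIfAbsentDemo {
   public static void main(String[] args) {
      Map<String, String> parmMap = new HashMap<>();

      // An explicit user setting...
      parmMap.put("unix-escapes", "true");
      // ...is NOT replaced by the derived default: computeIfAbsent only
      // invokes the mapping function when the key is missing.
      parmMap.computeIfAbsent("unix-escapes", x -> "false");
      System.out.println(parmMap.get("unix-escapes")); // still "true"

      // Without a prior explicit value, the default is applied.
      parmMap.computeIfAbsent("other-flag", x -> "false");
      System.out.println(parmMap.get("other-flag"));   // "false"
   }
}
```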
In your case, were you explicitly setting unix-escapes to true even though opsys was set to win32?
#38 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
In your case, were you explicitly setting unix-escapes to true even though opsys was set to win32?
Yes, unix-escapes was set to true, and it was set before opsys. It is a general problem: how multiple settings of the same parameter are processed.
#39 Updated by Greg Shah about 4 years ago
I agree that we need to improve our documentation to explain this. For now, I think you can ignore it.
#40 Updated by Igor Skornyakov about 4 years ago
Using VALUE() in the CONNECT statement looks a little tricky. Consider the following code:
DEF VAR c1 AS CHAR NO-UNDO.
c1 = " -db E:\testcases\uast\security\fwd1.db".
DEF VAR c2 AS CHAR NO-UNDO.
c2 = " -P secret".
CONNECT VALUE("-1" + c1) -ld VALUE("fwd1" + c2) -U fwdb@domain.
The first VALUE expression is OK, while the second one results in the error: ** The character is not permitted in name fwd1 -P secret. (274)
#41 Updated by Greg Shah about 4 years ago
Is this the 4GL or FWD?
#42 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
Is this the 4GL or FWD?
It is 4GL. The current implementation of FWD doesn't support any of these use cases. Working on it.
#43 Updated by Igor Skornyakov about 4 years ago
The VALUE usage in the CONNECT statement looks really tricky. For example,
CONNECT VALUE("E:\testcases\uast\security\fwd1.db") -P secret -U fwdb@domain -bibufs 12 -1 -ld fwd1.
works fine.
CONNECT VALUE("-db E:\testcases\uast\security\fwd1.db -P secret") -P secret -U fwdb@domain -bibufs 12 -1 -ld fwd1.
works too. However,
CONNECT VALUE("E:\testcases\uast\security\fwd1.db -P secret") -P secret -U fwdb@domain -bibufs 13 -1 -ld fwd1.
fails with the Warning: -ld is not a database parameter and was ignored. (1402) error.
#44 Updated by Constantin Asofiei about 4 years ago
Does CONNECT E:\testcases\uast\security\fwd1.db -P secret -P secret -U fwdb@domain -bibufs 13 -1 -ld fwd1.
work in 4GL? (I mean, without VALUE).
#45 Updated by Igor Skornyakov about 4 years ago
Constantin Asofiei wrote:
Does
CONNECT E:\testcases\uast\security\fwd1.db -P secret -P secret -U fwdb@domain -bibufs 13 -1 -ld fwd1.
work in 4GL? (I mean, without VALUE).
Yes, it works.
#46 Updated by Greg Shah about 4 years ago
For now, I don't know of a case where customer code uses this approach. Please create a new task (under the Database project) for this strange behavior. Link it here as a related task and make me a watcher.
We will defer that work.
#47 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
For now, I don't know of a case where customer code uses this approach. Please create a new task (under the Database project) for this strange behavior. Link it here as a related task and make me a watcher.
We will defer that work.
OK, I will do it tomorrow. Regarding the scope of this task: should I skip support for a VALUE expression containing more than one option?
Thank you.
#48 Updated by Greg Shah about 4 years ago
#3813 does need to handle the case where there is more than one option inside a single VALUE() expression. That is a common thing.
#49 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
#3813 does need to handle the case where there is more than one option inside a single VALUE() expression. That is a common thing.
I see. Thank you.
#50 Updated by Igor Skornyakov about 4 years ago
- Related to Bug #4535: CONNECT statement with VALUE expression quirks added
#51 Updated by Igor Skornyakov about 4 years ago
- The CONNECT statement can create more than one connection. This is mentioned in the Progress doc, but I've overlooked it before.
- The CONNECT statement can have multiple instances of the same key for the same database. Only the latest one found affects the actual connection, but all of them are included in the DBPARAM string.
- There can be only one parameter not preceded by a key (a name starting with '-'). It is treated as the first database name (as if it were preceded by the '-db' key), and it is not necessarily the first argument. See however #4535, as the situation becomes more complicated if the VALUE clause is used.
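The second observation (the last duplicate wins for the connection, while every occurrence is reported by DBPARAM) can be sketched with two structures. All names here are hypothetical, not FWD's actual implementation.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: keep every option occurrence for DBPARAM reporting,
// while a map holds the last-wins value used for the actual connection.
public class ConnectOptions {
   private final List<String[]> all = new ArrayList<>();          // every occurrence, in order
   private final Map<String, String> effective = new HashMap<>(); // last one wins

   public void add(String key, String value) {
      all.add(new String[] { key, value });
      effective.put(key, value);   // later duplicates overwrite earlier ones
   }

   public String effective(String key) {
      return effective.get(key);
   }

   public String dbparam() {
      StringBuilder sb = new StringBuilder();
      for (String[] kv : all) {
         if (sb.length() > 0) sb.append(',');
         sb.append(kv[0]).append(' ').append(kv[1]);
      }
      return sb.toString();
   }

   public static void main(String[] args) {
      ConnectOptions o = new ConnectOptions();
      o.add("-ld", "alias1");
      o.add("-ld", "alias2");
      System.out.println(o.effective("-ld")); // last value wins
      System.out.println(o.dbparam());        // but both are reported
   }
}
```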
#52 Updated by Greg Shah about 4 years ago
#53 Updated by Igor Skornyakov about 4 years ago
#54 Updated by Igor Skornyakov about 4 years ago
4GL uses some weird logic when parsing CONNECT statement arguments, which I cannot understand.
For example, if one uses the incorrect key '-xdb' instead of '-db', the error message is:
Warning: -x is not a database parameter and was ignored. (1402)
In some situations two errors are generated:
Warning: -xd is not a database parameter and was ignored. (1402)
Warning: -b is not a database parameter and was ignored. (1402)
Is it OK if FWD complains about the complete key instead, e.g. Warning: -xdb is not a database parameter and was ignored. (1402)?
Thank you.
#55 Updated by Greg Shah about 4 years ago
Is it OK if FWD complains about the complete key instead, e.g.
Yes, for now it is OK. Please create a new task for this quirk and make it related. We will defer the work on that.
#56 Updated by Igor Skornyakov about 4 years ago
- Related to Bug #4537: CONNECT statement parameters parsing quirks added
#57 Updated by Igor Skornyakov about 4 years ago
Finished the rework of CONNECT/DBPARAM.
Committed to 4335a revision 11416.
Working on the parameter file support (#4395). The support should not be complicated, but a number of use cases should be checked.
#58 Updated by Igor Skornyakov about 4 years ago
4GL allows the -pf option inside a .pf file. However, this may result in the application hanging due to infinite recursion. I do not think that we have to reproduce this; FWD will report an error if such recursion is discovered.
Is that correct?
Thank you.
#59 Updated by Greg Shah about 4 years ago
Yes, failing on recursion is a good idea. Hangs don't have to be duplicated.
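Detecting the recursion with a "seen" set might look like this. readPf and every name here are hypothetical stand-ins for the real parameter-file reader, illustrating only the fail-on-recursion idea agreed above.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Function;

// Hedged sketch: expanding -pf references while failing on recursion.
// readPf is a stand-in that maps a file name to its tokenized contents.
public class PfRecursionGuard {
   public static void expand(String pfName,
                             Function<String, List<String>> readPf,
                             Set<String> seen,
                             List<String> out) {
      if (!seen.add(pfName)) {
         // The file is already on the inclusion path: report instead of hanging.
         throw new IllegalStateException("recursive -pf reference: " + pfName);
      }
      List<String> tokens = readPf.apply(pfName);
      for (int i = 0; i < tokens.size(); i++) {
         if ("-pf".equals(tokens.get(i)) && i + 1 < tokens.size()) {
            expand(tokens.get(++i), readPf, seen, out);  // nested file
         } else {
            out.add(tokens.get(i));
         }
      }
   }
}
```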
#60 Updated by Igor Skornyakov about 4 years ago
Added configuration file support (the -pf option of the CONNECT statement).
Committed to 4335a revision 11419.
Please note that some additional testing is required.
#61 Updated by Igor Skornyakov about 4 years ago
Additional tests of the CONNECT -pf option support passed. A minor typo was fixed.
4335a revision 11421 is ready for code review (files ConnectionManager.java and database_general.rules).
#62 Updated by Igor Skornyakov about 4 years ago
The conversion of uast/raw_transfer/raw_transfer_basic_multi_column_mixed_data_type_encoding.p fails:
[java] EXPRESSION EXECUTION ERROR:
[java] ---------------------------
[java] throwException( sprintf("## unknown variable type %s/%s/%s", ref.parent.parent, ref.parent, ref))
[java] ^ { ## unknown variable type FUNC_CHAR/COLON/KW_BUFFER [ATTR_INT id <347892352089> 125:39] }
[java] ---------------------------
[java] EXPRESSION EXECUTION ERROR:
[java] ---------------------------
[java] evalLib("gen_accessors", ref1, tmp)
[java] ^ { Expression execution error @1:1 }
[java] ---------------------------
[java] ERROR:
[java] com.goldencode.p2j.pattern.TreeWalkException: ERROR! Active Rule:
[java] -----------------------
[java] RULE REPORT
[java] -----------------------
[java] Rule Type : WALK
[java] Source AST: [ CRC-VALUE ] BLOCK/STATEMENT/KW_DISP/EXPRESSION/FUNC_CHAR/COLON/ATTR_INT/ @125:39 {347892352089}
[java] Copy AST : [ CRC-VALUE ] BLOCK/STATEMENT/KW_DISP/EXPRESSION/FUNC_CHAR/COLON/ATTR_INT/ @125:39 {347892352089}
[java] Condition : throwException( sprintf("## unknown variable type %s/%s/%s", ref.parent.parent, ref.parent, ref))
[java] Loop : false
[java] --- END RULE REPORT ---
[java]
[java] at com.goldencode.p2j.pattern.PatternEngine.run(PatternEngine.java:1070)
[java] at com.goldencode.p2j.convert.TransformDriver.processTrees(TransformDriver.java:563)
[java] at com.goldencode.p2j.convert.ConversionDriver.back(ConversionDriver.java:572)
[java] at com.goldencode.p2j.convert.TransformDriver.executeJob(TransformDriver.java:906)
[java] at com.goldencode.p2j.convert.ConversionDriver.main(ConversionDriver.java:1022)
[java] Caused by: com.goldencode.expr.ExpressionException: Expression execution error @1:1 [ATTR_INT id=347892352089]
[java] at com.goldencode.p2j.pattern.AstWalker.walk(AstWalker.java:275)
[java] at com.goldencode.p2j.pattern.AstWalker.walk(AstWalker.java:210)
[java] at com.goldencode.p2j.pattern.PatternEngine.apply(PatternEngine.java:1633)
[java] at com.goldencode.p2j.pattern.PatternEngine.processAst(PatternEngine.java:1531)
[java] at com.goldencode.p2j.pattern.PatternEngine.processAst(PatternEngine.java:1479)
[java] at com.goldencode.p2j.pattern.PatternEngine.run(PatternEngine.java:1034)
[java] ... 4 more
[java] Caused by: com.goldencode.expr.ExpressionException: Expression execution error @1:1
[java] at com.goldencode.expr.Expression.execute(Expression.java:484)
[java] at com.goldencode.p2j.pattern.Rule.apply(Rule.java:497)
[java] at com.goldencode.p2j.pattern.Rule.executeActions(Rule.java:745)
[java] at com.goldencode.p2j.pattern.Rule.coreProcessing(Rule.java:712)
[java] at com.goldencode.p2j.pattern.Rule.apply(Rule.java:534)
[java] at com.goldencode.p2j.pattern.Rule.executeActions(Rule.java:745)
[java] at com.goldencode.p2j.pattern.Rule.coreProcessing(Rule.java:712)
[java] at com.goldencode.p2j.pattern.Rule.apply(Rule.java:534)
[java] at com.goldencode.p2j.pattern.RuleContainer.apply(RuleContainer.java:585)
[java] at com.goldencode.p2j.pattern.RuleSet.apply(RuleSet.java:1)
[java] at com.goldencode.p2j.pattern.AstWalker.walk(AstWalker.java:262)
[java] ... 9 more
[java] Caused by: com.goldencode.expr.ExpressionException: Expression execution error @1:1
[java] at com.goldencode.expr.Expression.execute(Expression.java:484)
[java] at com.goldencode.p2j.pattern.Rule.apply(Rule.java:497)
[java] at com.goldencode.p2j.pattern.Rule.executeActions(Rule.java:745)
[java] at com.goldencode.p2j.pattern.Rule.coreProcessing(Rule.java:712)
[java] at com.goldencode.p2j.pattern.Rule.apply(Rule.java:534)
[java] at com.goldencode.p2j.pattern.Rule.executeActions(Rule.java:745)
[java] at com.goldencode.p2j.pattern.Rule.coreProcessing(Rule.java:712)
[java] at com.goldencode.p2j.pattern.Rule.apply(Rule.java:534)
[java] at com.goldencode.p2j.pattern.NamedFunction.execute(NamedFunction.java:450)
[java] at com.goldencode.p2j.pattern.AstSymbolResolver.execute(AstSymbolResolver.java:712)
[java] at com.goldencode.p2j.pattern.CommonAstSupport$Library.execLib(CommonAstSupport.java:1534)
[java] at com.goldencode.p2j.pattern.CommonAstSupport$Library.evalLib(CommonAstSupport.java:1510)
[java] at com.goldencode.expr.CE11579.execute(Unknown Source)
[java] at com.goldencode.expr.Expression.execute(Expression.java:391)
[java] ... 19 more
[java] Caused by: com.goldencode.p2j.pattern.CommonAstSupport$UserGeneratedException: ## unknown variable type FUNC_CHAR/COLON/KW_BUFFER [ATTR_INT id <347892352089> 125:39]
[java] at com.goldencode.p2j.pattern.CommonAstSupport$Library.throwException(CommonAstSupport.java:2817)
[java] at com.goldencode.p2j.pattern.CommonAstSupport$Library.throwException(CommonAstSupport.java:2802)
[java] at com.goldencode.expr.CE11585.execute(Unknown Source)
[java] at com.goldencode.expr.Expression.execute(Expression.java:391)
[java] ... 32 more
Is it a regression?
Thank you.
#63 Updated by Greg Shah about 4 years ago
No, we don't have any support for the CRC-VALUE
attribute.
#64 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
No, we don't have any support for the
CRC-VALUE
attribute.
I see. Thank you.
#65 Updated by Greg Shah about 4 years ago
I don't think those testcases were used in FWD. We used them to explore RAW-TRANSFER
in the 4GL.
For the purposes of this task, the focus should be:
- Confirm that the method version has the same behavior as the language statement. If not, understand the differences.
- Implement method support for this in buffer, using the existing language statement implementation where it makes sense. Hopefully common code can be used.
#66 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
I don't think those testcases were used in FWD. We used them to explore
RAW-TRANSFER
in the 4GL.For the purposes of this task, the focus should be:
- Confirm that the method version has the same behavior as the language statement. If not, understand the differences.
- Implement method support for this in buffer, using the existing language statement implementation where it makes sense. Hopefully common code can be used.
I see. Thank you.
#67 Updated by Ovidiu Maxiniuc about 4 years ago
Review for 4335a revisions 11416, 11419, and 11421.
In ConnectionManager.java:
- javadocs: I think some methods should describe their parameters and return value in more detail, especially when they are complex structures like List<List<Map.Entry<...>>>;
- List<Map.Entry<>> is a very strange structure. Why not use Map directly? Also, if I understood it right, instead of List<List<... I believe the data structure should be a map having the database names (as detected in the parser) as keys and, as values, other maps with the detected CONNECT_OPTION as keys and ConnectOptionValue as values;
- line 4696: an extra new ArrayList<>();
- line 4726: variable not used;
- there are multiple minor deviations from the GCD coding standard. Most of them are related to whitespace (space after keywords or javadoc section delimiters). And there is this enum class named CONNECT_OPTION; its instances in uppercase are fine.
In database_general.rules:
- lines 433 and 439: the backslash in '"%s\"' doesn't seem right;
- the type of supConnOpts is LinkedHashMap. I believe HashMap would be sufficient here, as no iteration is performed over it;
- when optionType == "i" and the text cannot be parsed as a long, you decide to make it a string instead. Is this correct?
- some options are mapped to the "l" type in supConnOpts. They don't seem to be processed.
#68 Updated by Igor Skornyakov about 4 years ago
Review for 4335a revisions 11416, 11419, and 11421.
In ConnectionManager.java:
- javadocs: I think some methods should describe their parameters and return value in more detail, especially when they are complex structures like List<List<Map.Entry<...>>>
OK. I will think about how to make it clearer.
- List<Map.Entry<>> is a very strange structure. Why not use Map directly? Also, if I understood it right, instead of List<List<... I believe the data structure should be a map having the database names (as detected in the parser) as keys and, as values, other maps with the detected CONNECT_OPTION as keys and ConnectOptionValue as values;
The problem is that DBPARAM shows all CONNECT options even if some of them are specified more than once. See #3813-51. Please note also that a single CONNECT statement can include options for multiple databases. This is the reason for List<List<>>.
- line 4696: an extra new ArrayList<>();
Thank you. Fixed.
- line 4726: variable not used;
Thank you. Fixed.
- there are multiple minor deviations from the GCD coding standard. Most of them are related to whitespace (space after keywords or javadoc section delimiters). And there is this enum class named CONNECT_OPTION; its instances in uppercase are fine.
CONNECT_OPTION was renamed. I will try to fix the other formatting issues. My problem is that in most cases I do not see them.
In database_general.rules:
- lines 433 and 439: the backslash in '"%s\"' doesn't seem right;
I've just retained the code from the existing implementation.
- the type of supConnOpts is LinkedHashMap. I believe HashMap would be sufficient here, as no iteration is performed over it;
This is correct, thank you. Fixed.
- when optionType == "i" and the text cannot be parsed as a long, you decide to make it a string instead. Is this correct?
I've just retained the code from the existing implementation.
- some options are mapped to the "l" type in supConnOpts. They don't seem to be processed.
The logical options do not have values.
#69 Updated by Igor Skornyakov about 4 years ago
I've found a number of incompatibilities between 4GL and FWD in the behavior of the RAW-TRANSFER statement and the BUFFER:RAW-TRANSFER method. The most important (so far) is that FWD doesn't check that the source and target buffers' structures are the same.
I understand that it is related to the RAW-TRANSFER of a BUFFER to RAW. I have not yet analyzed the implementation in detail, but the TODO comment in the BufferImpl.rawCopyTo(raw r) method states that FWD is not binary compatible with 4GL on this. Maybe it makes sense to fix it based on my experience with CLIENT-PRINCIPAL (de)serialization?
#70 Updated by Greg Shah about 4 years ago
TODO comment in the BufferImpl.rawCopyTo(raw r) method states that FWD is not binary compatible with 4GL on this. Maybe it makes sense to fix it based on my experience with CLIENT-PRINCIPAL (de)serialization?
Yes, please take a look at it. If you feel that you understand the solution, do fix it. Please see my previous work in #3549. If it is going to take an extended amount of time, then we will defer it.
#71 Updated by Igor Skornyakov about 4 years ago
The BUFFER-FIELD:AVAILABLE attribute is not supported in FWD (it raises the ** AVAILABLE is not a queryable attribute for BUFFER-FIELD widget. (4052) error), and this is a problem for BUFFER:RAW-TRANSFER support.
Is there a simple workaround?
Thank you.
#72 Updated by Igor Skornyakov about 4 years ago
The FWD behavior of BUFFER-FIELD:BUFFER-VALUE() when no data record is available differs from 4GL:
FWD reports a single message ** No target-data record is available. (91) with error-status:error = true, while 4GL reports this message with error-status:error = false and the additional message Unable to extract BUFFER-VALUE for field a-raw. (7366).
I've also noticed another incompatibility: LENGTH(raw) when raw is UNDEFINED returns 0 in FWD but UNKNOWN in 4GL.
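The unknown-propagation difference can be illustrated with a minimal sketch (a hypothetical helper, not the FWD implementation, with the unknown value modeled as Java null):

```java
public class RawLengthSketch
{
   /**
    * Returns the length of a RAW value, propagating unknown.
    *
    * @param raw the raw bytes, or null when the value is unknown/undefined
    * @return the length, or null (unknown) when the input is unknown
    */
   public static Integer length(byte[] raw)
   {
      // 4GL semantics per the note above: LENGTH of an undefined RAW
      // is unknown, not 0
      return raw == null ? null : raw.length;
   }

   public static void main(String[] args)
   {
      System.out.println(length(null));         // null (unknown)
      System.out.println(length(new byte[8]));  // 8
   }
}
```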
#73 Updated by Igor Skornyakov about 4 years ago
I've added support for the BUFFER-FIELD:AVAILABLE attribute, which is sufficient for BUFFER:RAW-TRANSFER runtime support (see #3813-71).
However, handle:available()
is still converted to handle:unwrapBuffer().available()
which results in ** AVAILABLE is not a queryable attribute for BUFFER-FIELD widget. (4052)
error.
What is the right way to fix this?
Thank you.
#74 Updated by Igor Skornyakov about 4 years ago
I've noticed a strange effect.
Let hFieldHandle be a handle for a RAW field of a TEMP-TABLE. After a RAW-TRANSFER to this table, the converted expression LENGTH(hFieldHandle:BUFFER-VALUE()) returns the incorrect value 18. However, if we assign hFieldHandle:BUFFER-VALUE() to a RAW variable, the length of that variable has the correct value 8.
#75 Updated by Ovidiu Maxiniuc about 4 years ago
Igor Skornyakov wrote:
I've added support for the BUFFER-FIELD:AVAILABLE attribute which is sufficient for BUFFER:RAW-TRANSFER runtime support (see #3813-71).
However, handle:available() is still converted to handle:unwrapBuffer().available() which results in the ** AVAILABLE is not a queryable attribute for BUFFER-FIELD widget. (4052) error.
What is the right way to fix this?
Thank you.
AVAILABLE is an attribute of both the Buffer object handle and the Buffer-field object handle. In FWD conversion, the kw_avail is handled in methods_attributes.rules:2588. This is why you always get an unwrapBuffer. What you need to do:
- create a new interface Available (as an example, see Rejectable). Don't forget to put the LegacyAttribute annotation on it;
- add a method (unwrapAvailable()) in handle;
- remove the block of code at methods_attributes.rules:2588 and add a line in load_descriptors with a reference to the new interface and accessor (only a getter, since this is a read-only attribute);
- make Buffer and BufferField implement the interface.
That should do it.
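The steps above can be sketched roughly as follows. The annotation and all the types below are simplified stand-ins (the real FWD LegacyAttribute, handle, Buffer and BufferField have different signatures); this only illustrates the shape of a read-only attribute interface shared by the two handle kinds:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Simplified stand-in for FWD's LegacyAttribute annotation
@Retention(RetentionPolicy.RUNTIME)
@interface LegacyAttribute
{
   String name();
}

// Read-only attribute interface, modeled on Rejectable; both Buffer
// and BufferField would implement it so that handle:AVAILABLE resolves
// without forcing an unwrapBuffer() conversion
interface Available
{
   @LegacyAttribute(name = "AVAILABLE")
   boolean available();   // getter only: AVAILABLE is read-only
}

public class AvailableSketch
{
   // Hypothetical buffer-field stand-in
   static class BufferField implements Available
   {
      private final boolean hasRecord;

      BufferField(boolean hasRecord)
      {
         this.hasRecord = hasRecord;
      }

      @Override
      public boolean available()
      {
         return hasRecord;
      }
   }

   public static void main(String[] args)
   {
      Available f = new BufferField(true);
      System.out.println(f.available());   // true
   }
}
```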
#76 Updated by Igor Skornyakov about 4 years ago
Ovidiu Maxiniuc wrote:
Igor Skornyakov wrote:
I've added support for the BUFFER-FIELD:AVAILABLE attribute which is sufficient for BUFFER:RAW-TRANSFER runtime support (see #3813-71).
However, handle:available() is still converted to handle:unwrapBuffer().available() which results in the ** AVAILABLE is not a queryable attribute for BUFFER-FIELD widget. (4052) error.
What is the right way to fix this?
Thank you.
AVAILABLE is an attribute of both the Buffer object handle and the Buffer-field object handle. In FWD conversion, the kw_avail is handled in methods_attributes.rules:2588. This is why you always get an unwrapBuffer. What you need to do:
- create a new interface Available (as an example, see Rejectable). Don't forget to put the LegacyAttribute annotation on it;
- add a method (unwrapAvailable()) in handle;
- remove the block of code at methods_attributes.rules:2588 and add a line in load_descriptors with a reference to the new interface and accessor (only a getter, since this is a read-only attribute);
- make Buffer and BufferField implement the interface.
That should do it.
Got it. Thanks a lot!
#77 Updated by Greg Shah about 4 years ago
Also, the new interface must be added to HandleCommon
.
#78 Updated by Igor Skornyakov about 4 years ago
Fixed the issue described in #3813-72. The problem with LENGTH(raw) is also resolved, as it was caused by BUFFER-FIELD:BUFFER-VALUE() throwing an exception in the absence of a record instead of returning UNKNOWN.
#79 Updated by Igor Skornyakov about 4 years ago
The reason for the quirk described in #3813-74 is the following:
The 4GL expression LENGTH(hFieldHandle:BUFFER-VALUE()) is converted to DynamicOps.length(new character(hFieldHandle.unwrapBufferField().value())). This means that we calculate not the length of the RAW value but the length of its conversion to CHARACTER.
At this moment I do not understand how this can be fixed.
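The 18-versus-8 discrepancy is consistent with the RAW value being rendered in its character form. Assuming that form is a 0x-prefixed hex string (an assumption, not confirmed in this thread), 8 bytes become 2 + 16 = 18 characters, matching both observed lengths:

```java
public class RawAsCharSketch
{
   // Render raw bytes the way a RAW value might be shown as character:
   // "0x" followed by two hex digits per byte (assumed rendering)
   static String rawToChar(byte[] raw)
   {
      StringBuilder sb = new StringBuilder("0x");
      for (byte b : raw)
      {
         sb.append(String.format("%02x", b));
      }
      return sb.toString();
   }

   public static void main(String[] args)
   {
      byte[] raw = new byte[8];              // an 8-byte RAW value
      String asChar = rawToChar(raw);
      System.out.println(raw.length);        // 8  (LENGTH of the RAW itself)
      System.out.println(asChar.length());   // 18 (LENGTH of its character form)
   }
}
```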
#80 Updated by Igor Skornyakov about 4 years ago
Added correct error messages and fixed incompatibilities of the RAW-TRANSFER
support for use cases I could imagine. See uas/raw-transfer/buffer_raw_transfer_test.p
.
Committed to 4335a revision 11432.
#81 Updated by Igor Skornyakov about 4 years ago
RAW-TRANSFER: still to be done
- Rework BUFFER <-> RAW (de)serialization to be binary compatible with 4GL.
- Test the exact behavior of 4GL when the RAW to BUFFER deserialization is only partially successful and adjust FWD support accordingly (should be easy).
#82 Updated by Greg Shah about 4 years ago
Task branch 4335a was merged to trunk as revision 11345.
The ConnectionManager
change was removed before it was merged since it caused a regression in the ChUI application.
#83 Updated by Igor Skornyakov about 4 years ago
It seems that the CONNECT statement which caused the gso_241 failure is incorrect.
I've added logging, and the arguments of the ConnectionManager.connect call are:
"<customer_logical_db_name>","from-db","127.0.0.1","<customer_db_service_name>"
This is an incorrect call, as only one argument (the db name) can be provided without being preceded by a -<name> key.
#84 Updated by Igor Skornyakov about 4 years ago
Igor Skornyakov wrote:
It seems that the CONNECT statement which caused the
gso_241
fail is incorrect.
I've added logging and the arguments of theConnectionManager.connect
call are:
[...]
This is an incorrect call as only one argument (db name) can be provided without being preceded by a -<name> key.
This is a conversion error, as the 4GL code contains the preceding keys:
connect <customer db name> -ld value(ipLDB) -H value(t-ip) -S value(t-socket + "<customer suffix>")
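The parse rule in question can be illustrated with a small sketch (not the FWD parser): the first bare token is taken as the database name, logical options carry no value, and any other bare token not preceded by a -<name> key is rejected — which is exactly why the argument list above is malformed:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ConnectOptionsSketch
{
   /**
    * Parses a CONNECT argument list into option/value pairs.
    * Only the first bare token may appear without a preceding key;
    * it is treated as the database name ("-db").
    *
    * @throws IllegalArgumentException on any other bare token
    */
   static Map<String, String> parse(String... args)
   {
      Map<String, String> opts = new LinkedHashMap<>();
      String pendingKey = null;
      for (String arg : args)
      {
         if (arg.startsWith("-"))
         {
            pendingKey = arg;
            opts.put(pendingKey, "");   // logical options carry no value
         }
         else if (pendingKey != null)
         {
            opts.put(pendingKey, arg);  // value for the preceding -option
            pendingKey = null;
         }
         else if (opts.isEmpty())
         {
            opts.put("-db", arg);       // first bare token: database name
         }
         else
         {
            throw new IllegalArgumentException("bare value without option key: " + arg);
         }
      }
      return opts;
   }

   public static void main(String[] args)
   {
      // mirrors the shape of the corrected 4GL statement above
      System.out.println(parse("mydb", "-ld", "logicalDb", "-H", "127.0.0.1", "-S", "myservice"));
   }
}
```

With this rule, the broken call from note 83 ("mydb", "from-db", "127.0.0.1", …) fails at the second bare token.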
#85 Updated by Igor Skornyakov about 4 years ago
I've found the fix for the CONNECT conversion (see #3813-84).
To which branch should I commit it?
Thank you.
#86 Updated by Greg Shah about 4 years ago
If it does not depend on 4335a, you can put it in 4231b. If 4335a is needed first, then wait for the 4231b rebase from trunk 11345 and then check it in.
#87 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
If it does not depend on 4335a, you can put it in 4231b. If 4335a is needed first, then wait for the 4231b rebase from trunk 11345 and then check it in.
I see. Thank you.
#88 Updated by Igor Skornyakov about 4 years ago
The fixed CONNECT statement rework was committed to branch 4231b revision 11368.
#89 Updated by Igor Skornyakov about 4 years ago
Working on support for the BUFFER-FIELD:DEFAULT-STRING and BUFFER-FIELD:DECIMALS attributes.
#90 Updated by Igor Skornyakov about 4 years ago
Contrary to what is said in the Progress documentation, the DEFAULT-STRING attribute is read-only. This is logical, as the related INITIAL attribute is read-only as well.
#91 Updated by Greg Shah about 4 years ago
By the end of the weekend, please report the list of open items remaining in this task. We can push the #3813-81 items into another task unless you see a reason they are needed now.
#92 Updated by Igor Skornyakov about 4 years ago
We have no support for BUFFER-FIELD:DEFAULT-VALUE (even conversion is not supported). This can be a little bit tricky, at least for DATE/DATETIME/DATETIME-TZ fields.
For example, for the field FIELD s-datetime-tz AS DATETIME-TZ INITIAL NOW, the value of the DEFAULT-STRING attribute is the string "now", while for INITIAL and DEFAULT-VALUE it is the value of the NOW function at the moment of attribute retrieval.
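The distinction can be modeled like this (a hypothetical stand-in handling only the "now" specification, not the FWD implementation): DEFAULT-STRING echoes the stored INITIAL specification string, while DEFAULT-VALUE evaluates it at the moment of retrieval:

```java
import java.time.OffsetDateTime;

public class InitialAttributeSketch
{
   private final String initialSpec;   // as written in the schema, e.g. "now"

   InitialAttributeSketch(String initialSpec)
   {
      this.initialSpec = initialSpec;
   }

   // DEFAULT-STRING: the unformatted initial specification itself
   String defaultString()
   {
      return initialSpec;
   }

   // DEFAULT-VALUE / INITIAL: evaluated at the moment of retrieval,
   // so "now" yields a different value each time it is read
   Object defaultValue()
   {
      if ("now".equalsIgnoreCase(initialSpec))
      {
         return OffsetDateTime.now();
      }
      return initialSpec;   // literal values pass through unchanged
   }

   public static void main(String[] args)
   {
      InitialAttributeSketch f = new InitialAttributeSketch("now");
      System.out.println(f.defaultString());   // now
      System.out.println(f.defaultValue());    // the current timestamp
   }
}
```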
#93 Updated by Greg Shah about 4 years ago
Doesn't the DEFAULT-VALUE
of the DMO map more closely to BUFFER-FIELD:DEFAULT-VALUE
instead of using the DEFAULT-STRING
in the DMO?
#94 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
Doesn't the
DEFAULT-VALUE
of the DMO map more closely toBUFFER-FIELD:DEFAULT-VALUE
instead of using theDEFAULT-STRING
in the DMO?
Sorry, I do not understand the question. In my test, BUFFER-FIELD:DEFAULT-VALUE may have different values at different times. I forgot the details about the DMO map, but I understand that the values of BUFFER-FIELD:DEFAULT-VALUE and BUFFER-FIELD:INITIAL make sense only at runtime and for a specific moment.
#95 Updated by Igor Skornyakov about 4 years ago
At the moment, FWD runtime support of the INITIAL attribute for fields like FIELD s-datetime AS DATETIME INITIAL NOW is not correct. A runtime exception is thrown by instantiateFromStringWorker.
We need to correctly distinguish between a datetime literal and a valid expression such as NOW. The first question is what such expressions can be.
#96 Updated by Ovidiu Maxiniuc about 4 years ago
I am working on the 4011a branch, where the field's attributes (including the default value) are defined as DMO interface annotations. At runtime, the following code from instantiateFromStringWorker() will process the initial attribute of Property, just after the unknown checks, but I do not recall if I added it in this branch or it should already be in trunk:
if ("now".equalsIgnoreCase(spec))
{
assign(datetimetz.now());
return;
}
#97 Updated by Igor Skornyakov about 4 years ago
The following field definition
FIELD s-datetime AS DATETIME EXTENT 2 INITIAL ["04-01-2020 13:25:30.000", NOW]
is converted to code which does not compile:
SourceData_1_1Impl.java:20: error: SourceData_1_1Impl is not abstract and does not override abstract method setSDatetime(date) in SourceData_1
#98 Updated by Igor Skornyakov about 4 years ago
Ovidiu Maxiniuc wrote:
I am working on 4011a branch, where the field's attributes (including the default value) are defined as DMO interface annotation. At runtime, the following code from
instantiateFromStringWorker()
will porcess theinitial
attribute ofProperty
, just after unknown checks, but I do not recall if I added in this branch or it should already be in trunk:
[...]
Thank you, Ovidiu. I'm working with 4321b. As of rev. 11417 it doesn't contain this change.
BTW: do I understand correctly that more complicated expressions cannot be used in the INITIAL
clause?
Thank you.
#99 Updated by Ovidiu Maxiniuc about 4 years ago
BTW: do I understand correctly that more complicated expressions cannot be used in the
INITIAL
clause?
TBH, before seeing your message, I was not aware that INITIAL clauses can contain multiple values for extent fields. I had the impression that you can set only one value for the initialization of all items in an extent field. I tried your example and noticed that it's actually worse: the declaration for your s-datetime (from note 97) converted (with 4011a) as:
@Property(id = 4, name = "SDatetime", column = "s_datetime", legacy = "s-datetime", initial = "04-01-2020", order = 30, extent = 2)
public datetime getSDatetime(int index);
I will schedule working on this issue. 4011a is not yet stable enough, but we'll see whether it will be ported to your branch. Do you need this fixed soon? What is the generated code for this field in your branch?
#100 Updated by Igor Skornyakov about 4 years ago
Ovidiu Maxiniuc wrote:
I will schedule working on this issue. 4011a is not yet stable enough, but we'll see whether we will ported to your branch. Do you need this fixed soon? What is the generated code for this field with your branch?
I do not think that it is very urgent. In my branch, the interface is generated as if there is no EXTENT
clause (getSDatetime()
/setSDatetime(date SDatetime)
) while the implementation contains methods with index
parameter (getSDatetime(int index)
/setSDatetime(int index, date element)
).
#101 Updated by Igor Skornyakov about 4 years ago
I cannot find where the value of the DECIMAL option of the field definition is located in the conversion artifacts.
Can anybody help with this?
Thank you.
#102 Updated by Igor Skornyakov about 4 years ago
Igor Skornyakov wrote:
I cannot find where the value of the DECIMAL option of the field definition is located in the conversion artifacts.
Can anybody help with this?
Thank you.
Well, it seems to be the scale attribute in the .hbm.xml file. How can it be accessed programmatically?
Thank you.
#103 Updated by Eric Faulhaber about 4 years ago
Where foo
represents a DMO and bar
is a decimal
property, the following will return the value of DECIMAL from the field definition:
foo.getBar().getPrecision();
#104 Updated by Igor Skornyakov about 4 years ago
Eric Faulhaber wrote:
Where
foo
represents a DMO andbar
is adecimal
property, the following will return the value of DECIMAL from the field definition:[...]
Got it. Thanks a lot!
#105 Updated by Igor Skornyakov about 4 years ago
The issue described in #3813-97 was a result of the following: I compiled two tests. The TEMP-TABLE source-data was defined in both, but in one test the field s-datetime was defined as a scalar, while in the other as EXTENT 2.
#106 Updated by Igor Skornyakov about 4 years ago
Eric Faulhaber wrote:
Where
foo
represents a DMO andbar
is adecimal
property, the following will return the value of DECIMAL from the field definition:[...]
Eric,
I understand that the getPrecision() you're talking about is a method of decimal, so it can be used only if the field has a value. It is strange that the LegacyField annotation has no attribute which corresponds to the DECIMAL option of the field definition while other options do have such a counterpart. Was it done on purpose? Please clarify.
Thank you.
#107 Updated by Igor Skornyakov about 4 years ago
Do we have methods for rendering date/numeric data using SESSION:DATE-FORMAT
and SESSION:NUMERIC-FORMAT
attributes' value?
Thank you.
#108 Updated by Greg Shah about 4 years ago
Igor Skornyakov wrote:
Do we have methods for rendering date/numeric data using
SESSION:DATE-FORMAT
andSESSION:NUMERIC-FORMAT
attributes' value?
Thank you.
From methods_attributes.rules
:
<rule>ftype == prog.kw_date_fmt
   <action>methodText = "SessionUtils.getDateFormat"</action>
   <rule>isAssign
      <action>methodText = "SessionUtils.setDateFormat"</action>
   </rule>
</rule>
...
<rule>ftype == prog.kw_num_fmt
   <action>methodText = "NumberType.getNumericFormat"</action>
   <rule>isAssign
      <action>methodText = "NumberType.setNumericFormat"</action>
   </rule>
</rule>
From gaps/expressions.rules
:
<rule>attrs.put(prog.kw_date_fmt, rw.cvt_lvl_full | rw.rt_lvl_full)</rule>
...
<rule>attrs.put(prog.kw_num_fmt , rw.cvt_lvl_full | rw.rt_lvl_full)</rule>
#109 Updated by Greg Shah about 4 years ago
We need to correctly distinguish between datetime literal and valid expression as NOW. The first question is what can be such expressions.
In my experience, TODAY and NOW are considered "literals" even though they are really a kind of function. Thus, with this classification, they can be included in initializers that normally must only ever contain a constant.
Please do confirm this and post the result here so that we are sure.
#110 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
We need to correctly distinguish between datetime literal and valid expression as NOW. The first question is what can be such expressions.
In my experience,
TODAY
andNOW
are considered "literals" even though they are really a kind a function. Thus with this classification they can be included in initializers that normally must only ever have a constant.Please do confirm this and post the result here so that we are sure.
As far as I understand, this is correct. In 4GL, if we specify INITIAL NOW for, e.g., a datetime field, the value of DEFAULT-STRING will be "now" (see #3813-92).
#111 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
Igor Skornyakov wrote:
Do we have methods for rendering date/numeric data using
SESSION:DATE-FORMAT
andSESSION:NUMERIC-FORMAT
attributes' value?
Thank you.From
methods_attributes.rules
:[...]
From
gaps/expressions.rules
:[...]
Thank you, Greg, but my question was about a standard method (function) for applying the values of these attributes when formatting numeric/date values. Do we have any, or do I have to implement it myself? This is required for DEFAULT-STRING support.
Thank you.
#112 Updated by Greg Shah about 4 years ago
The date
, datetime
, datetime-tz
, integer
, int64
and decimal
classes all are designed to format their state honoring those configuration values. Just use those classes directly. Normally, formatted strings are rendered using toString()
for the default format string and toString(String fmt)
for a custom format string. Is this what you are asking?
#113 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
The
date
,datetime
,datetime-tz
,integer
,int64
anddecimal
classes all are designed to format their state honoring those configuration values. Just use those classes directly. Normally, formatted strings are rendered usingtoString()
for the default format string andtoString(String fmt)
for a custom format string. Is this what you are asking?
Greg. According to the Progress documentation
DEFAULT-STRING renders its dates and numeric values in LOCAL format, subject to the DATE-FORMAT and NUMERIC-FORMAT attributes of the SESSION system handle.
Do you mean that, e.g., for a datetime value I can use toString(SessionUtils.getDateFormat())?
Thank you.
#114 Updated by Greg Shah about 4 years ago
DEFAULT-STRING renders its dates and numeric values in LOCAL format, subject to the DATE-FORMAT and NUMERIC-FORMAT attributes of the SESSION system handle.
I don't know what they mean by LOCAL
. And it is not clear if there is some problem here or not.
Do you mean that e.g. for datetime value I can use toString(SessionUtils.getDateFormat())?
No. A format string is not the same thing as the DATE-FORMAT. A format string for date might be 99/99/9999, while the date-format might be dmy. As I explained above, the BDT classes internally already honor the date-format and numeric-format. You just need to call the right version of toString() to render it. Please look at the toString() implementations to see what I mean.
#115 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
No. A format string is not the same thing as the
DATE-FORMAT
. A format string fordate
might by99/99/9999
while thedate-format
might bedmy
. As I explained above, the BDT classes internally already honor thedate-format
andnumeric-format
. You just need to call the right version oftoString()
to render it. Please look at thetoString()
implementations to see why I mean.
As far as I can see, the best match is toStringExport() for DATE/DATETIME/DATETIME-TZ. However, I do not see a corresponding method for numerics (based only on group/decimal separators). It is easy to implement though.
#116 Updated by Igor Skornyakov about 4 years ago
DEFAULT-STRING uses only the decimal separator when rendering DECIMAL values. The group separator is not used (groups are not separated).
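That behavior can be reproduced with a formatter that honors a configurable decimal separator but disables grouping entirely. This is an illustrative sketch using java.text directly, not the FWD decimal class:

```java
import java.math.BigDecimal;
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;

public class DefaultStringDecimalSketch
{
   /**
    * Renders a decimal the way DEFAULT-STRING appears to: the session's
    * decimal separator is honored, but no group separators are emitted.
    *
    * @param value            the decimal value
    * @param decimalSeparator the session decimal separator, e.g. '.' or ','
    */
   static String render(BigDecimal value, char decimalSeparator)
   {
      DecimalFormatSymbols symbols = new DecimalFormatSymbols();
      symbols.setDecimalSeparator(decimalSeparator);
      DecimalFormat fmt = new DecimalFormat("0.##########", symbols);
      fmt.setGroupingUsed(false);   // DEFAULT-STRING never separates groups
      return fmt.format(value);
   }

   public static void main(String[] args)
   {
      BigDecimal v = new BigDecimal("1234567.89");
      System.out.println(render(v, '.'));   // 1234567.89
      System.out.println(render(v, ','));   // 1234567,89
   }
}
```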
#117 Updated by Eric Faulhaber about 4 years ago
Igor Skornyakov wrote:
Eric Faulhaber wrote:
Where
foo
represents a DMO andbar
is adecimal
property, the following will return the value of DECIMAL from the field definition:[...]
Eric,
I understand that thegetPrecision()
you're talking about is a method ofdecimal
, so it can be used only if the field has value.
The DMO field will always return a BDT value, never null, even if it is the unknown value. The precision does not change, because the getter method returns a copy of the BDT stored in the DMO. So, getPrecision() should always return the correct value.
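The point can be modeled as precision being metadata that travels with every copy of the value, independent of whether the value itself is unknown. This is a hypothetical stand-in, not FWD's decimal BDT:

```java
import java.math.BigDecimal;

public class DecimalBdtSketch
{
   private final BigDecimal value;   // null models the unknown value
   private final int precision;      // schema DECIMALS, fixed per field

   DecimalBdtSketch(BigDecimal value, int precision)
   {
      this.value = value;
      this.precision = precision;
   }

   // The DMO getter returns a copy; precision travels with it even
   // when the value is unknown, so getPrecision() is always valid
   DecimalBdtSketch copy()
   {
      return new DecimalBdtSketch(value, precision);
   }

   int getPrecision()
   {
      return precision;
   }

   boolean isUnknown()
   {
      return value == null;
   }

   public static void main(String[] args)
   {
      DecimalBdtSketch unknown = new DecimalBdtSketch(null, 4);
      System.out.println(unknown.copy().getPrecision());   // 4
      System.out.println(unknown.isUnknown());             // true
   }
}
```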
It is strange that the
LegacyField
annotation has no attribute which corresponds to theDECIMAL
option of the field definition while other options do have such a counterpart. Was it done on purpose? Please clarify.
I don't think that was intentional. Please add it if you need it. My advice about getPrecision
presumes you have a DMO instance to work with. If this is not the case, you'll need to use the TableMapper
API.
#118 Updated by Igor Skornyakov about 4 years ago
On the conversion of the field definition
FIELD s-datetime AS DATETIME INITIAL "04-01-2020 13:25:30.000"
the time part is lost.
#119 Updated by Igor Skornyakov about 4 years ago
In addition to #3813-95, I've found that the current INITIAL attribute support generates exceptions for other data types (such as raw or even character) when trying to instantiate the value reflectively.
I understand that fixing this is not in the scope of this task. Is this correct?
Thank you.
#120 Updated by Igor Skornyakov about 4 years ago
Eric Faulhaber wrote:
I don't think that was intentional. Please add it if you need it. My advice about
getPrecision
presumes you have a DMO instance to work with. If this is not the case, you'll need to use theTableMapper
API.
Thank you Eric. I will try to follow your advice.
#121 Updated by Greg Shah about 4 years ago
Igor Skornyakov wrote:
On the conversion of the field definition
[...]
the time part is lost.
Please show details about this.
We already expected that the TZ offset was lost at the database for DATETIME-TZ
types (see #2156), but we did not expect the time to be lost.
#122 Updated by Greg Shah about 4 years ago
I've found that current INITIAL attribute support generates exceptions for other data types (such as raw or even character) when trying to instantiate the value reflectively.
This is surprising. What types are supported correctly? I'm especially surprised that character is not supported. That is the most trivial of the types for an initializer.
In regard to types like HANDLE
, RAW
or CLASS
, it was my understanding that there is no valid way to define an initial value for those.
I understand that fixing this is not in the scope of this task. Is this correct?
It depends on the scope of the problem.
#123 Updated by Igor Skornyakov about 4 years ago
Another issue:
The converted statement SessionUtils.setDateFormat(new character("ydm"))
causes runtime exception:
Caused by: java.lang.RuntimeException: Unresolvable remote export public abstract void com.goldencode.p2j.util.SessionExports.setDateFormat(com.goldencode.p2j.util.character).
   at com.goldencode.p2j.net.RemoteObject$RemoteAccess.obtainRoutingKey(RemoteObject.java:1580)
   at com.goldencode.p2j.net.RemoteObject$RemoteAccess.invokeCore(RemoteObject.java:1464)
   at com.goldencode.p2j.net.InvocationStub.invoke(InvocationStub.java:145)
   at com.sun.proxy.$Proxy11.setDateFormat(Unknown Source)
   at com.goldencode.p2j.util.SessionUtils.setDateFormat(SessionUtils.java:557)
   at com.goldencode.testcases.db.DefaultStringTest.lambda$execute$0(DefaultStringTest.java:40)
   at com.goldencode.p2j.util.Block.body(Block.java:604)
   at com.goldencode.p2j.util.BlockManager.processBody(BlockManager.java:8527)
   at com.goldencode.p2j.util.BlockManager.topLevelBlock(BlockManager.java:8200)
   at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:480)
   at com.goldencode.p2j.util.BlockManager.externalProcedure(BlockManager.java:451)
   at com.goldencode.testcases.db.DefaultStringTest.execute(DefaultStringTest.java:29)
   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:498)
   at com.goldencode.p2j.util.Utils.invoke(Utils.java:1496)
   at com.goldencode.p2j.main.StandardServer$MainInvoker.execute(StandardServer.java:2152)
   at com.goldencode.p2j.main.StandardServer.invoke(StandardServer.java:1588)
   at com.goldencode.p2j.main.StandardServer.standardEntry(StandardServer.java:549)
   at com.goldencode.p2j.main.StandardServerMethodAccess.invoke(Unknown Source)
   at com.goldencode.p2j.util.MethodInvoker.invoke(MethodInvoker.java:156)
   at com.goldencode.p2j.net.Dispatcher.processInbound(Dispatcher.java:755)
   at com.goldencode.p2j.net.Conversation.block(Conversation.java:412)
   at com.goldencode.p2j.net.Conversation.run(Conversation.java:232)
   at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: No export or no access to com.goldencode.p2j.util.SessionExports:public abstract void com.goldencode.p2j.util.SessionExports.setDateFormat(com.goldencode.p2j.util.character)
Formally this issue is not related to #3813 and does not prevent testing.
SessionUtils.setNumericFormat
and the getters work OK.
#124 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
Igor Skornyakov wrote:
On the conversion of the field definition
[...]
the time part is lost.
Please show details about this.
We already expected that the TZ offset was lost at the database for
DATETIME-TZ
types (see #2156), but we did not expect the time to be lost.
I've defined a field like this:
FIELD s-datetime AS DATETIME INITIAL "04-01-2020 13:25:30.000"
But in this code:
String property = fieldRef.getProperty();
String init = TableMapper.getLegacyFieldInitial(getDMOProxy(), property);
the value of
init
doesn't contain the time part.
#125 Updated by Greg Shah about 4 years ago
Formally this issue is not related to #3813 and does not prevent testing.
We still should probably fix this now. The core issue is not obvious to me. I don't know why we are using SessionUtils
on the client for this purpose. It is a bad practice because the majority of the code in this class is only ever valid for use on the server. We really should have a separate SessionDaemon
for the client which implements SessionExports
and only does that job.
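The split Greg describes could be sketched roughly as follows. This is a hypothetical illustration, not FWD code: SessionExports and setDateFormat appear in the stack trace above (the real method takes FWD's character wrapper, simplified here to String), and the stored state and getter are invented for the example.

```java
// Hypothetical sketch: a client-side SessionDaemon that implements only
// SessionExports, keeping the server-oriented SessionUtils off the client.
interface SessionExports
{
   void setDateFormat(String format);   // real signature uses FWD's character type
}

public class SessionDaemon implements SessionExports
{
   // client-side session state, updated when the server pushes a change
   private String dateFormat = "mdy";

   @Override
   public void setDateFormat(String format)
   {
      this.dateFormat = format;
   }

   public String getDateFormat()
   {
      return dateFormat;
   }

   public static void main(String[] args)
   {
      SessionDaemon daemon = new SessionDaemon();
      daemon.setDateFormat("dmy");
      System.out.println(daemon.getDateFormat());
   }
}
```

The point of the shape is that the client exposes nothing but the exported interface, so a remote invocation can never route to a method that is only valid on the server.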
#126 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
I've found that current INITIAL attribute support generates exceptions for other data types (such as raw or even character) when trying to instantiate the value reflectively.
This is surprising. What types are supported correctly? I'm especially surprised that character is not supported. That is the most trivial of the types for an initializer.
In regard to types like
HANDLE
, RAW
, or CLASS
, it was my understanding that there is no valid way to define an initial value for those.
Well, regarding character
, it was probably a result of my mistake - it works. For BinaryData
, handle
, rowid
, and recid
, either the constructor does not accept an empty character (which is returned by TableMapper.getLegacyFieldInitial(getDMOProxy(), property)
, as for such types the INITIAL clause in the field definition is not allowed), or such a constructor does not exist.
I understand that fixing this is not in the scope of this task. Is this correct?
It depends on the scope of the problem.
I've implemented a preventive pre-check in getDefaultString
and do not try to vet the values for the problematic datatypes.
#127 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
Formally this issue is not related to #3813 and does not prevent tesing.
We still should probably fix this now. The core issue is not obvious to me. I don't know why we are using
SessionUtils
on the client for this purpose. It is a bad practice because the majority of the code in this class is only ever valid for use on the server. We really should have a separate SessionDaemon
for the client which implements SessionExports
and only does that job.
OK. I will work on this.
Thank you.
#128 Updated by Greg Shah about 4 years ago
I've implemented preventive pre-check in the getDefaultString and do not try to vet the values for the problematic datatypes.
Most likely, the initial value for these problem cases should be the unknown value. Please check the 4GL to confirm.
#129 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
Most likely, the initial value for these problem cases should be the unknown value. Please check the 4GL to confirm.
This is correct, and FWD now returns UNKNOWN
as DEFAULT-STRING
without trying to get the value of the INITIAL
attribute.
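The pre-check described above might look roughly like this. A hypothetical sketch only: getDefaultString is named in the thread, the set of problematic types follows #3813-126, and plain strings stand in for FWD's type system.

```java
import java.util.Set;

public class DefaultStringSketch
{
   // Types for which the 4GL disallows an INITIAL clause; per the note
   // above, DEFAULT-STRING is simply the unknown value for these, so the
   // INITIAL value is never vetted.
   private static final Set<String> NO_INITIAL =
      Set.of("raw", "handle", "rowid", "recid");

   public static String getDefaultString(String legacyType, String initial)
   {
      if (NO_INITIAL.contains(legacyType))
      {
         return "?";   // the 4GL unknown value
      }
      return initial;
   }

   public static void main(String[] args)
   {
      System.out.println(getDefaultString("rowid", ""));
      System.out.println(getDefaultString("character", "abc"));
   }
}
```

This avoids the reflective constructor failures entirely, since the problematic types are rejected before any value instantiation is attempted.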
#130 Updated by Igor Skornyakov about 4 years ago
#131 Updated by Igor Skornyakov about 4 years ago
A SESSION:TIMEZONE = 3.
(SessionUtils.setTimezone
) causes the same exception as in #3813-123 (No export or no access to com.goldencode.p2j.util.SessionExports
).
It is strange as the SessionUtils.setTimezone
is executed at the client-side.
#132 Updated by Greg Shah about 4 years ago
Igor Skornyakov wrote:
A
SESSION:TIMEZONE = 3.
(SessionUtils.setTimezone
) causes the same exception as in #3813-123 (No export or no access to com.goldencode.p2j.util.SessionExports
).
It is strange as the SessionUtils.setTimezone
is executed at the client-side.
Yes. This is what I meant by my comment:
I don't know why we are using SessionUtils on the client for this purpose. It is a bad practice because the majority of the code in this class is only ever valid for use on the server. We really should have a separate SessionDaemon for the client which implements SessionExports and only does that job.
#133 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
Yes. This is what I meant by my comment:
I don't know why we are using SessionUtils on the client for this purpose. It is a bad practice because the majority of the code in this class is only ever valid for use on the server. We really should have a separate SessionDaemon for the client which implements SessionExports and only does that job.
Maybe it will be faster to implement this rather than trying to understand what exactly is wrong with the current approach? There are only five simple methods in SessionExports. What do you think?
Thank you.
#134 Updated by Igor Skornyakov about 4 years ago
BTW: maybe the problem is not with the SessionUtils
per se, but in the way SessionExports
is instantiated in the ThinClient
- as a separate StaticNetworkServer
?
#135 Updated by Greg Shah about 4 years ago
Yes, it will certainly make it better, even if it is not the full solution.
You can pattern it using the EnvironmentDaemon
(the client side of EnvironmentOps
).
#136 Updated by Greg Shah about 4 years ago
Igor Skornyakov wrote:
BTW: maybe the problem is not with the
SessionUtils
per se, but in the way SessionExports
is instantiated in the ThinClient
- as a separate StaticNetworkServer
?
Perhaps. Either way, implementing the proper split client daemon part will be better.
#137 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
Yes, it will certainly make it better, even if it is not the full solution.
You can pattern it using the
EnvironmentDaemon
(the client side of EnvironmentOps
).
OK, thank you.
#138 Updated by Igor Skornyakov about 4 years ago
I understand that EnvironmentDaemon
is not a suitable pattern for the situation with SESSION:DATE-FORMAT
and the other methods from SessionExports
as its state is not updated programmatically from the server-side.
I've found another solution - extracted setters/getters for the attributes in question to a separate class that is used both on server and client sides and delegated all the remote operations to the ThinClient
/LogicalTerminal
. This resolves the problem.
However, another issue was discovered after that: FWD uses SESSION:DATE-FORMAT
when parsing a date literal in the INITIAL
clause of the field definition and complains if they are not compatible. This does not match the 4GL behavior, which accepts the "MM-DD-YYYY" format regardless of the value of SESSION:DATE-FORMAT.
Should I fix this new issue now?
Thank you.
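The fix can be illustrated with a minimal java.time sketch (not FWD's actual implementation; the class and method names are invented for the example): the INITIAL literal is parsed with a fixed MM-DD-YYYY pattern rather than the session's date format, which also keeps the time part intact.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class InitialLiteralParse
{
   // 4GL accepts "MM-DD-YYYY" in the INITIAL clause regardless of
   // SESSION:DATE-FORMAT, so the literal is parsed with a fixed pattern
   // instead of the session's current date format.
   private static final DateTimeFormatter INITIAL_FMT =
      DateTimeFormatter.ofPattern("MM-dd-uuuu HH:mm:ss.SSS");

   public static LocalDateTime parseInitial(String literal)
   {
      return LocalDateTime.parse(literal, INITIAL_FMT);
   }

   public static void main(String[] args)
   {
      // the sample field definition from this thread; the time part must survive
      System.out.println(parseInitial("04-01-2020 13:25:30.000"));
   }
}
```

With a fixed pattern there is nothing to be "not compatible" with, so the spurious complaint against SESSION:DATE-FORMAT disappears as well.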
#139 Updated by Greg Shah about 4 years ago
I understand that EnvironmentDaemon is not a suitable pattern for the situation with SESSION:DATE-FORMAT and the other methods from SessionExports as its state is not updated programmatically from the server-side.
The core structure is a good model. If you want to see storing state, you can look at FileSystemOps
/FileSystemDaemon
.
I've found another solution - extracted setters/getters for the attributes in question to a separate class that is used both on server and client sides and delegated all the remote operations to the ThinClient/LogicalTerminal. This resolves the problem.
LogicalTerminal
is meant to be dedicated to UI. Non-UI stuff should not be there.
ThinClient
does (regrettably) have some non-UI stuff related to the client process startup/restart. But we still try to avoid non-UI stuff there as well.
Please use the separate daemon model for this case and move the stuff out of LogicalTerminal
/ThinClient
.
FWD uses SESSION:DATE-FORMAT when parsing a date literal in the INITIAL clause of the field definition and complains if they are not compatible. This is not compatible with 4GL behavior which accepts "MM-DD-YYYY" format regardless of the value of the SESSION:DATE-FORMAT.
Yes, fix it.
#140 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
I understand that EnvironmentDaemon is not a suitable pattern for the situation with SESSION:DATE-FORMAT and the other methods from SessionExports as its state is not updated programmatically from the server-side.
The core structure is a good model. If you want to see storing state, you can look at
FileSystemOps
/FileSystemDaemon
.

I've found another solution - extracted setters/getters for the attributes in question to a separate class that is used both on server and client sides and delegated all the remote operations to the ThinClient/LogicalTerminal. This resolves the problem.

LogicalTerminal
is meant to be dedicated to UI. Non-UI stuff should not be there.
ThinClient
does (regrettably) have some non-UI stuff related to the client process startup/restart. But we still try to avoid non-UI stuff there as well.

Please use the separate daemon model for this case and move the stuff out of
LogicalTerminal
/ThinClient
.

FWD uses SESSION:DATE-FORMAT when parsing a date literal in the INITIAL clause of the field definition and complains if they are not compatible. This does not match the 4GL behavior, which accepts the "MM-DD-YYYY" format regardless of the value of SESSION:DATE-FORMAT.

Yes, fix it.
OK, thank you. Please note, however, that as far as I understand, at this moment the only remote operations which are not LogicalTerminal
/ThinClient
interactions are these 5 methods from SessionExports
(which do not work now). And the SESSION:NUMBER-FORMAT
setter propagated the change to the client side via LogicalTerminal
.
#141 Updated by Igor Skornyakov about 4 years ago
I've removed my changes regarding LogicalTerminal
. Apart from extracting SessionExports
operations to a separate class, I've changed the way it is registered - instead of RemoteObject.registerStaticNetworkServer
I'm using RemoteObject.registerServer
like with EnvironmentDaemon
. This resolves the issue with setters' propagation to the client. Working on the formatting issue from #3813-138.
#142 Updated by Igor Skornyakov about 4 years ago
Fixed the issue with INITIAL
value parsing for date fields.
Committed with SessionExports
support to 4231b revision 11437.
#143 Updated by Igor Skornyakov about 4 years ago
Eric Faulhaber wrote:
The DMO field will always return a BDT value, never
null
, even if it is the unknown value. The precision does not change, because the getter method returns a copy of the BDT stored in the DMO. So, getPrecision()
should always return the correct value.

It is strange that the
LegacyField
annotation has no attribute which corresponds to the DECIMALS
option of the field definition while other options do have such a counterpart. Was it done on purpose? Please clarify.

I don't think that was intentional. Please add it if you need it. My advice about
getPrecision
presumes you have a DMO instance to work with. If this is not the case, you'll need to use the TableMapper
API.
Eric,
I need to get the DECIMALS
value from a BufferFieldImpl
method, and I do not see how to get the DMO instance. TableMapper.getFieldValue()
requires the DMO interface name, and it doesn't work for temporary tables.
As I wrote before, the only place in the generated artifacts where I see the value of the DECIMALS
clause of the field definition is the scale
attribute of the "<property column=" element in the *Impl.hbm.xml
files. How can I access it programmatically from a BufferFieldImpl
method?
Please advise, this will save me a lot of time.
Thank you
#144 Updated by Eric Faulhaber about 4 years ago
Igor Skornyakov wrote:
I need to get the
DECIMALS
value from a BufferFieldImpl
method, and I do not see how to get the DMO instance. TableMapper.getFieldValue()
requires the DMO interface name, and it doesn't work for temporary tables.
As I wrote before, the only place in the generated artifacts where I see the value of the DECIMALS
clause of the field definition is the scale
attribute of the "<property column=" element in the *Impl.hbm.xml files. How can I access it programmatically from a BufferFieldImpl
method?
Hibernate is deprecated and has been removed in branch 4011a. Do not add new dependencies on it. This will just create more work when we merge that branch to trunk.
It is possible to get a LegacyFieldInfo
object for a temp-table. It is just not being done in that getFieldValue
API. You need a TempTable
instance, so that you can invoke the private TableMapper.locateTemp
method. You get a TempTable
instance from RecordBuffer.getParentTable
(when invoked on a TemporaryBuffer
instance). Do you have access to a TemporaryBuffer
instance in your use case?
You will need to create a new TableMapper
API to get what you want. See how the other APIs in that class use locateTemp
with the TempTable
instance that is passed to them. Model your solution on that. Once you access a LegacyFieldInfo
for your temp-table, you should be able to extract the information you need from that.
#145 Updated by Igor Skornyakov about 4 years ago
Eric Faulhaber wrote:
Igor Skornyakov wrote:
I need to get the
DECIMALS
value from a BufferFieldImpl
method, and I do not see how to get the DMO instance. TableMapper.getFieldValue()
requires the DMO interface name, and it doesn't work for temporary tables.
As I wrote before, the only place in the generated artifacts where I see the value of the DECIMALS
clause of the field definition is the scale
attribute of the "<property column=" element in the *Impl.hbm.xml files. How can I access it programmatically from a BufferFieldImpl
method?

Hibernate is deprecated and has been removed in branch 4011a. Do not add new dependencies on it. This will just create more work when we merge that branch to trunk.

It is possible to get a
LegacyFieldInfo
object for a temp-table. It is just not being done in that getFieldValue
API. You need a TempTable
instance, so that you can invoke the private TableMapper.locateTemp
method. You get a TempTable
instance from RecordBuffer.getParentTable
(when invoked on a TemporaryBuffer
instance). Do you have access to a TemporaryBuffer
instance in your use case?

You will need to create a new
TableMapper
API to get what you want. See how the other APIs in that class use locateTemp
with the TempTable
instance that is passed to them. Model your solution on that. Once you access a LegacyFieldInfo
for your temp-table, you should be able to extract the information you need from that.
Thank you, Eric.
I will try this approach at the weekend.
#146 Updated by Igor Skornyakov about 4 years ago
It looks like the way the LegacyField
annotations are generated needs fixing. We need at least two fixes for this:
- Fix the INITIAL
value for datetime
/datetime-tz
fields (at this moment only the date part is generated)
- Add the value of the DECIMALS
attribute (without it, the only place in the generated artifacts where the value of this attribute is present is the .hbm.xml
files, but we need support for the BUFFER-FIELD:DECIMALS
attribute even in the absence of the underlying record and without using Hibernate).
Thank you.
#147 Updated by Ovidiu Maxiniuc about 4 years ago
Igor, I believe both of the above attributes are member fields in the new Property
DMO annotation of the new persistence framework developed in 4011:
String initial() default "";
for the INITIAL
(I noticed recently that there is a special syntax for extent fields that we do not properly support);
int scale() default 10;
for the DECIMALS
.
Both attributes can be directly accessed via the new DmoMetadataManager
/ DmoMeta
classes. In this case I think working on these issues now will be made obsolete in a matter of weeks.
Am I right Eric?
#148 Updated by Igor Skornyakov about 4 years ago
Ovidiu Maxiniuc wrote:
Igor, I believe both of the above attributes are member fields in the new Property
DMO annotation of the new persistence framework developed in 4011:
String initial() default "";
for the INITIAL
(I noticed recently that there is a special syntax for extent fields that we do not properly support);
int scale() default 10;
for the DECIMALS
.
Both attributes can be directly accessed via the new
DmoMetadataManager
/DmoMeta
classes. In this case I think working on these issues now will be made obsolete in a matter of weeks.
Am I right Eric?
Ovidiu,
I'm supposed to work at least on DECIMALS
support right now.
#149 Updated by Eric Faulhaber about 4 years ago
Ovidiu Maxiniuc wrote:
[...]
Both attributes can be directly accessed via the new DmoMetadataManager
/DmoMeta
classes. In this case I think working on these issues now will be made obsolete in a matter of weeks.
Am I right Eric?
Yes.
#150 Updated by Eric Faulhaber about 4 years ago
Igor Skornyakov wrote:
I'm supposed to work at least on
DECIMALS
support right now.
If there is not a critical bug that is dependent upon this, then I think it makes sense to skip working on this since, as Ovidiu notes, it is already supported (albeit in a different way) in 4011a. We are trying to merge that branch to trunk ASAP, though it looks to be a number of weeks out yet.
Greg, thoughts?
#151 Updated by Eric Faulhaber about 4 years ago
Sorry, Igor, I would not have been giving you implementation advice had I realized Ovidiu already implemented this feature in 4011a.
#152 Updated by Igor Skornyakov about 4 years ago
Eric Faulhaber wrote:
Sorry, Igor, I would not have been giving you implementation advice had I realized Ovidiu already implemented this feature in 4011a.
I see. Thank you, Eric.
#153 Updated by Greg Shah about 4 years ago
Eric Faulhaber wrote:
Igor Skornyakov wrote:
I'm supposed to work at least on
DECIMALS
support right now.

If there is not a critical bug that is dependent upon this, then I think it makes sense to skip working on this since, as Ovidiu notes, it is already supported (albeit in a different way) in 4011a. We are trying to merge that branch to trunk ASAP, though it looks to be a number of weeks out yet.
Greg, thoughts?
Eric: I assume you mean that just the backing value is there in the DMO in 4011a. The rest of the support is not there yet?
Igor: Can you implement the parts of the support except for the lookup of the value? Then it can be finished when 4011a is merged to trunk.
#154 Updated by Eric Faulhaber about 4 years ago
Greg Shah wrote:
Eric: I assume you mean that just the backing value is there in the DMO in 4011a. The rest of the support is not there yet?
Igor: Can you implement the parts of the support except for the lookup of the value? Then it can be finished when 4011a is merged to trunk.
OK, I see what you are saying. Correct, BUFFER-FIELD:DECIMALS is not exposed as an attribute at this point. In 4011a, the BufferFieldImpl
implementation still looks like this:
/**
* Obtain the number of decimal places, after the decimal point, that are stored in this
* buffer-field object that corresponds to a DECIMAL field.
*
* @return the number of decimals after the decimal point for this buffer-field.
*/
@Override
public integer getDecimals()
{
UnimplementedFeature.missing("DECIMALS attribute not implemented");
return new integer();
}
#155 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
Igor: Can you implement the parts of the support except for the lookup of the value? Then it can be finished when 4011a is merged to trunk.
Greg, at this moment the only way to move forward that I see is to fix LegacyField
annotation generation (see #3813-143). I will resume working on it, but any advice regarding the generation of this annotation would help a lot.
Thank you.
#156 Updated by Constantin Asofiei about 4 years ago
Igor, for LegacyField
look in dmo_common.rules
, around line 1780, for code like this:
<!-- graft "fieldId" annotation -->
<rule>
   tpl.graft('annotation_assign_num', null, fieldAst, 'key', 'fieldId', 'value', fieldId.toString())
</rule>
#157 Updated by Eric Faulhaber about 4 years ago
Igor Skornyakov wrote:
Greg Shah wrote:
Igor: Can you implement the parts of the support except for the lookup of the value? Then it can be finished when 4011a is merged to trunk.
Greg, At this moment the only way to move forward I see is to fix
LegacyField
annotation generation (see #3813-143). I will resume working on it but any advice regarding generation of this annotation will help a lot.
Thank you.
No, please don't work on the conversion part. The LegacyField
annotation is precisely the part of the puzzle which is made obsolete in 4011a. Please focus on the runtime portion of the solution instead.
That is, add an instance variable to TableMapper$LegacyFieldInfo
for the DECIMALS attribute and add a corresponding parameter for it to that inner class' constructor.
In the call to the constructor, you can hard-code that parameter (e.g., to 10) for now, but leave a TODO in the code to lookup the actual value once branch 4011a has been merged to trunk.
Add a TableMapper
API to expose this information to the persistence runtime layer, then use that API from BufferFieldImpl.getDecimals
to implement the attribute, so it can be accessed by converted business logic.
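Eric's plan might be sketched as below. This is a hypothetical, self-contained mock: the real TableMapper$LegacyFieldInfo has many more members, and the hard-coded value and TODO follow the comment above.

```java
public class DecimalsSketch
{
   // Mock of TableMapper$LegacyFieldInfo extended with a decimals member,
   // set through a new constructor parameter.
   static class LegacyFieldInfo
   {
      final String property;
      final int decimals;

      LegacyFieldInfo(String property, int decimals)
      {
         this.property = property;
         this.decimals = decimals;
      }
   }

   // New TableMapper-style API exposing the value to the persistence
   // runtime layer, e.g. for use by BufferFieldImpl.getDecimals.
   public static int getFieldDecimals(LegacyFieldInfo info)
   {
      return info.decimals;
   }

   public static void main(String[] args)
   {
      // hard-coded for now; TODO: look up the actual value once branch
      // 4011a has been merged to trunk
      LegacyFieldInfo info = new LegacyFieldInfo("sDecimal", 10);
      System.out.println(getFieldDecimals(info));
   }
}
```

Keeping the lookup behind a single API means only the constructor call site needs to change once the 4011a annotation data becomes available.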
#158 Updated by Igor Skornyakov about 4 years ago
Eric Faulhaber wrote:
No, please don't work on the conversion part. The
LegacyField
annotation is precisely the part of the puzzle which is made obsolete in 4011a. Please focus on the runtime portion of the solution instead.

That is, add an instance variable to
TableMapper$LegacyFieldInfo
for the DECIMALS attribute and add a corresponding parameter for it to that inner class' constructor.

In the call to the constructor, you can hard-code that parameter (e.g., to 10) for now, but leave a TODO in the code to lookup the actual value once branch 4011a has been merged to trunk.

Add a
TableMapper
API to expose this information to the persistence runtime layer, then use that API from BufferFieldImpl.getDecimals
to implement the attribute, so it can be accessed by converted business logic.
I see. Thank you, Eric.
#159 Updated by Igor Skornyakov about 4 years ago
#160 Updated by Igor Skornyakov about 4 years ago
Implemented runtime support for the BUFFER:BUFFER-VALIDATE
.
4231b revision 11447.
#161 Updated by Igor Skornyakov about 4 years ago
The only remaining thing in this task is Built-In Function Usage in WHERE Clauses
.
Where can I take a look at the sample support for such functions?
Thank you.
#162 Updated by Eric Faulhaber about 4 years ago
Igor Skornyakov wrote:
The only remaining thing in this task is
Built-In Function Usage in WHERE Clauses
.
Where can I take a look at the sample support for such functions?
Thank you.
Igor, I will open a new task for this. We will need to put together a process for identifying which built-in functions need to be supported in WHERE clauses. I'll link from here when it's ready, but it may be a day or two...
#163 Updated by Igor Skornyakov about 4 years ago
Eric Faulhaber wrote:
Igor, I will open a new task for this. We will need to put together a process for identifying which built-in functions need to be supported in WHERE clauses. I'll link from here when it's ready, but it may be a day or two...
I see. Thank you, Eric.
#164 Updated by Constantin Asofiei about 4 years ago
Igor, there is an issue in BufferFieldImpl
- this import "import org.aspectj.org.eclipse.jdt.core.dom.*;" is not OK, the project won't compile in Eclipse. Please don't use compile-classpath.jar, and add in Eclipse project's classpath all jars from build/lib/
only. Is your Eclipse project a gradle project?
Also, you've added lots of imports to this file, which don't make sense...
#165 Updated by Igor Skornyakov about 4 years ago
Constantin Asofiei wrote:
Igor, there is an issue in
BufferFieldImpl
- this import "import org.aspectj.org.eclipse.jdt.core.dom.*;" is not OK, the project won't compile in Eclipse. Please don't use compile-classpath.jar, and add in Eclipse project's classpath all jars from build/lib/
only. Is your Eclipse project a gradle project?

Also, you've added lots of imports to this file, which don't make sense...
Sorry. I understand these imports were added automatically by Eclipse code assistant. I will check the imports more carefully in the future.
In my environment, the project compiles both with Eclipse and Gradle but the Eclipse project doesn't have a Gradle nature. It uses jars from build/lib/
as you recommended some time ago.
#166 Updated by Greg Shah about 4 years ago
Igor: Can you please commit your import fixes so that the build is working again? I need to push this to our customers.
#167 Updated by Igor Skornyakov about 4 years ago
Greg Shah wrote:
Igor: Can you please commit your import fixes so that the build is working again? I need to push this to our customers.
Done in 4231b revision 11461.
#168 Updated by Greg Shah about 4 years ago
- Related to Feature #4375: add support for -pf <profile_filename> option for CONNECT statement added
#169 Updated by Greg Shah about 4 years ago
- Related to deleted (Feature #4392: add -pf support for appserver CONNECT() method)
#170 Updated by Greg Shah almost 4 years ago
- Related to Feature #4613: add missing support for database user defined functions (UDF) if they are supported in runtime added
#171 Updated by Greg Shah almost 4 years ago
Considering that the WHERE clause work has been moved to #4613, is there any reason to keep this task open?
#172 Updated by Greg Shah almost 4 years ago
Task branch 4231b has been merged to trunk as revision 11347.
Considering that the WHERE clause work has been moved to #4613, is there any reason to keep this task open?
This question is still open.
#173 Updated by Greg Shah almost 4 years ago
Branch 3821c revision 11374 has the implementation of BUFFER-FIELD:DEFAULT-VALUE
as well as the fixes to date
/datetime
to properly parse TODAY
(only works for date
) and NOW
(only works for datetime
and datetime-tz
). These are the items noted in #3813-92 (and following notes).
#174 Updated by Greg Shah over 3 years ago
What is left open in this task?
#175 Updated by Igor Skornyakov over 3 years ago
Greg Shah wrote:
What is left open in this task?
According to #3813-(162,163), the only remaining issue is Built-In Function Usage in WHERE Clauses
but there is a separate task for this. I also remember that I had problems with BUFFER-FIELD:DECIMALS
but now it is easy to implement with new properties of the LegacyField
annotation. At least for permanent databases.
#176 Updated by Greg Shah over 3 years ago
According to #3813-162, #3813-163, the only remaining issue is Built-In Function Usage in WHERE Clauses but there is a separate task for this.
We will work these in #4613.
I also remember that I had problems with BUFFER-FIELD:DECIMALS but now it is easy to implement with new properties of the LegacyField annotation. At least for permanent databases.
We have other such property changes in 3821c, right? If you finish BUFFER-FIELD:DECIMALS
in 3821c, it can just be brought over to the new ORM approach during rebase. Or will this be a bigger issue to handle now rather than waiting?
#177 Updated by Igor Skornyakov over 3 years ago
Greg Shah wrote:
We have other such property changes in 3821c, right? If you finish
BUFFER-FIELD:DECIMALS
in 3821c, it can just be brought over to the new ORM approach during rebase. Or will this be a bigger issue to handle now rather than waiting?
Just rebasing will not work because the annotation is different with the new ORM. However, the required manual change is very simple.
#178 Updated by Greg Shah over 3 years ago
I understand there will be some edits. Am I correct in my understanding that there are already other such edits that will be needed?
#179 Updated by Igor Skornyakov over 3 years ago
Greg Shah wrote:
I understand there will be some edits. Am I correct in my understanding that there are already other such edits that will be needed?
Ovidiu made these edits when 4011 was rebased to a trunk which contained my changes regarding metadata.
#180 Updated by Greg Shah over 3 years ago
So there are no pending properties changes in 3821c?
#181 Updated by Igor Skornyakov over 3 years ago
Greg Shah wrote:
So there are no pending properties changes in 3821c?
It looks like that. Please note, however, that it was 4 months ago and I may have forgotten some minor things.
#182 Updated by Greg Shah over 3 years ago
OK, then let's wait until 4011b is in trunk (and 3821c rebased) before we finish this last item.
#183 Updated by Igor Skornyakov about 3 years ago
The BUFFER-FIELD:DECIMALS
support looks correct with 3821c/12264.
#184 Updated by Greg Shah about 3 years ago
- % Done changed from 0 to 100
- Status changed from WIP to Closed
#185 Updated by Greg Shah 9 months ago
- Related to Feature #4392: add -pf support for appserver CONNECT() method added