Feature #4350
method overload when they differ by a temp-table, dataset, buffer or object or extent or parameter modes
90%
#1 Updated by Constantin Asofiei over 4 years ago
- Related to Feature #3751: implement support for OO 4GL and structured error handling added
#2 Updated by Constantin Asofiei over 4 years ago
See #3751-492 for a comprehensive list of overload rules. We need support for method overload when they differ by a temp-table, dataset, buffer, object, extent or parameter modes - this is both conversion and runtime (i.e. to find the correct API in case of a dynamic call).
Although the constructor may be an edge case, we should fix those, too. But these need a more complicated approach, as currently these names are hard-coded for dynamic invocation. We may need to emit LegacySignature annotations (if we don't already) and disambiguate from that, and not from the Java converted method name.
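For illustration only, a minimal sketch of annotation-driven dispatch (the LegacySig annotation, class names and signature strings here are hypothetical stand-ins, not the actual FWD API): the dynamic call is resolved by comparing the recorded legacy 4GL signature instead of the converted Java method name.

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

// hypothetical sketch: the LegacySig annotation, classes and signature strings are
// illustrative stand-ins, not the actual FWD API
public class LegacyDispatchSketch
{
   @Retention(RetentionPolicy.RUNTIME)
   public @interface LegacySig
   {
      String value();
   }

   public static class ByteBucketLike
   {
      // renamed at conversion time to avoid the Java-level collision
      @LegacySig("putBytes(input object<OpenEdge.Core.Memptr>)")
      public void putBytes_1(Object memptr) { }

      @LegacySig("putBytes(input object<OpenEdge.Core.ByteBucket>)")
      public void putBytes(Object bucket) { }
   }

   // resolve a dynamic call from the recorded legacy signature, not the Java name
   public static String resolve(Class<?> cls, String legacySig)
   {
      for (Method m : cls.getMethods())
      {
         LegacySig ann = m.getAnnotation(LegacySig.class);
         if (ann != null && ann.value().equals(legacySig))
         {
            return m.getName();
         }
      }
      return null;
   }
}
```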
#3 Updated by Greg Shah over 4 years ago
- Related to Feature #4373: finish core OO 4GL support added
#4 Updated by Marian Edu about 4 years ago
Constantin Asofiei wrote:
See #3751-492 for a comprehensive list of overload rules. We need support for method overload when they differ by a temp-table, dataset, buffer, object, extent or parameter modes - this is both conversion and runtime (i.e. to find the correct API in case of a dynamic call).
Although the constructor may be an edge case, we should fix those, too. But these need a more complicated approach, as currently these names are hard-coded for dynamic invocation. We may need to emit LegacySignature annotations (if we don't already) and disambiguate from that, and not from the Java converted method name.
It looks like we do get some of those method erasure issues with generated code for our test cases; in ByteBucket there are methods having the same name that take different OO types as input. It looks like the erasure issue was solved by renaming one of the methods (putBytes_1 when the input is Memptr, as opposed to simply putBytes when a ByteBucket is passed as input). The problem is in converted code, where the same putBytes method is being called with a Memptr instance instead of calling putBytes_1. The legacy parameter annotation does have a qualified value set to "OpenEdge.Core.Memptr"; I did try all lower-case values as it seemed this is used most of the time, to no avail... Anything we're missing here? Does this still need to be implemented or should it work already (we're on the 4335a branch)?
#5 Updated by Constantin Asofiei about 4 years ago
Do you have a simple example for this? Conversion rules should choose the putBytes_1 if the argument is Memptr.
#6 Updated by Greg Shah almost 4 years ago
- Assignee set to Greg Shah
- Status changed from New to WIP
Do you have a simple example for this? Conversion rules should choose the putBytes_1 if the argument is Memptr.
This is an object reference to the built-in class OpenEdge.Core.Memptr. Our ClassDefinition.fuzzyMethodMatch() doesn't handle that in the current trunk.
I'm working on this task.
#7 Updated by Greg Shah almost 4 years ago
We have a step that comes at the end for matching parameter modes.
Constantin: Why does it need to come at the end? In what case would it change the outcome if processed earlier?
#8 Updated by Greg Shah almost 4 years ago
Constantin wrote:
you mean the narrowing/widening match, based on the parameter modes?
#9 Updated by Greg Shah almost 4 years ago
No, I mean this code:
   // multiple fuzzy matches, get by parameter mode
   iter = matches.iterator();

   outer:
   while (iter.hasNext())
   {
      MemberData dat = iter.next();
      ParameterKey[] candidate = dat.signature;

      inner:
      for (int i = 0; i < candidate.length; i++)
      {
         Integer candidateMode = candidate[i].mode;
         Integer callerMode = caller[i].mode;

         if (Objects.equals(candidateMode, callerMode) || callerMode == null)
         {
            // either the same or the caller hasn't specified the argument mode
            // TODO: the mode is not mandatory at the definition... assume INPUT?
            continue inner;
         }
         else
         {
            // mismatch
            continue outer;
         }
      }
   }
   ...
It happens only after we have a list of possible matches from the fuzzy matching above. I plan to implement the missing rules above which should handle the precedence of matching properly, yielding only a single match each time (if the code compiles in the 4GL) OR yielding a case where we must defer to runtime dynamic invocation. My changes will completely reorder the processing because there are multiple phases of checking for OO class matches.
As far as I know the parameter modes are not actually part of the "fuzziness". In fact, we probably can cut down the possible match candidates at the beginning to remove all that don't match the parameter modes.
However, I want to ensure I don't miss something important about the existing parameter processing.
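A minimal sketch of the early pruning idea (all names are illustrative; FWD's ParameterKey carries more than just the mode): candidates are dropped up front when the caller explicitly specifies a mode that differs, with an unspecified caller mode acting as a wildcard.

```java
import java.util.*;

// illustrative sketch: candidate signatures reduced to their parameter modes only
// (FWD's ParameterKey carries more); null at the caller means "mode not specified"
public class ModeFilterSketch
{
   public static List<Integer[]> filterByMode(Integer[][] candidates, Integer[] caller)
   {
      List<Integer[]> kept = new ArrayList<>();
      for (Integer[] candidate : candidates)
      {
         boolean match = true;
         for (int i = 0; i < candidate.length; i++)
         {
            // drop the candidate only when the caller explicitly specified a
            // different mode; an unspecified caller mode acts as a wildcard
            if (caller[i] != null && !Objects.equals(candidate[i], caller[i]))
            {
               match = false;
               break;
            }
         }
         if (match)
         {
            kept.add(candidate);
         }
      }
      return kept;
   }
}
```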
#10 Updated by Constantin Asofiei almost 4 years ago
Greg Shah wrote:
As far as I know the parameter modes are not actually part of the "fuzziness". In fact, we probably can cut down the possible match candidates at the beginning to remove all that don't match the parameter modes.
At the caller, the parameter modes are not mandatory. So you can't rely on them to cut down the possible list of matches.
The reason for the code you posted is explained by this comment:
   // the fuzzy lookup is done in phases
   // 1. collect all matches, regardless of parameter modes
   // 2. if only one match found, use that
   // 3. if multiple matches found:
   //    - if one exact match by exact parameter type, use that
   //    - if multiple matches, get by parameter modes
   //      > if caller hasn't specified a mode for an argument, use wildcard
   //      > only one match should be found
   //      > if multiple matches, show warning
   List<MemberData> matches = new ArrayList<>();
#11 Updated by Greg Shah almost 4 years ago
OK, an important point is "if caller hasn't specified a mode for an argument, use wildcard".
Still, I don't understand why it must be at the end.
#12 Updated by Constantin Asofiei almost 4 years ago
Greg Shah wrote:
OK, an important point is "if caller hasn't specified a mode for an argument, use wildcard".
Still, I don't understand why it must be at the end.
I don't recall exactly; it might have been before all the widening/narrowing rules were put in place. Although it might have something to do with the 'object' and 'TableParameter' types - for these we don't match them exactly (yet).
#13 Updated by Greg Shah almost 4 years ago
I'm looking at the table, dataset and buffer matching right now. In SymbolResolver.translateType() we have this code:

   // TABLE and BUFFER parameters - these will have the TABLE or BUFFER prefix,
   // followed by the schemaname.
   case KW_TABLE:
      ref = parm.getType() == KW_TABLE ? parm : parm.getImmediateChild(KW_TABLE, null);
      ref = ref.getImmediateChild(TEMP_TABLE, null);

      // for temp-tables, the compile time mapping is done via the table's field signature
      cls = "TEMP-TABLE";
      break;
   case KW_BUFFER:
      ref = parm.getType() == KW_BUFFER ? parm : parm.getImmediateChild(KW_BUFFER, null);

      if (ref.downPath("KW_FOR"))
      {
         ref = ref.getImmediateChild(KW_FOR, null);
      }

      ref = (Aast) ref.getFirstChild();

      if (ref.getType() == TEMP_TABLE ||
          (ref.getType() == BUFFER && "".equals(ref.getAnnotation("dbname"))))
      {
         cls = "BUFFER <temp-table>";
      }
      else
      {
         cls = String.format("BUFFER %s", ref.getAnnotation("schemaname"));
      }
      break;
   case KW_DATASET:
      cls = "DATASET";
      break;
- For TEMP-TABLE I plan to add a signature string (TEMP-TABLE <signature>) that describes the structure of the table (number, order, type and extent). I will have to add this into the SchemaDictionary, I think.
- Why do temp-tables have BUFFER <temp-table> instead of the schema name? Testing showed that the "compatible structure" processing of table parms is not used for buffers. The buffer for table really must reference the same table, down to the names, if I understand correctly. Am I missing something here? Do we need to process both the schema name and the structure to be safe?
- For datasets, I don't know what is needed. Do we need to match the name portion or is there some kind of structure to match?
Any ideas are welcome.
In SymbolResolver.translateType(), we also have this code:

   case KW_DATA_SRC:
      // TODO: example with this as parameter and argument?
      cls = "DATA-SOURCE";
      break;
As far as I know this is not possible in the 4GL. We don't match on it in the parser. Is this needed?
#15 Updated by Greg Shah over 3 years ago
- Subject changed from add conversion/runtime support for special cases of method overload in OpenEdge to method overload when they differ by a temp-table, dataset, buffer or object or extent or parameter modes
#16 Updated by Greg Shah over 3 years ago
Started rework of fuzzy matching to implement missing features. This first pass refactors the fuzzyMethodLookup() processing to make a list of all possible matches at the top of the method (by using the new candidates() method). This approach only returns candidates which match the access rights and number of parameters checks. It also returns all candidates from the entire inheritance hierarchy. This is how the 4GL does it, so that better matches in the inheritance hierarchy take precedence over local matches. The parameter mode processing was also switched. INPUT parameter types widen from caller to callee and OUTPUT parameter types narrow from caller to callee.
These first changes have been added to 3821c revision 11956.
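A toy sketch of the direction of the checks described above (the widening table here is a partial stand-in for the rules in #3751-492, not the actual implementation): INPUT compares caller-to-callee against the widening table, while OUTPUT runs the same check in the opposite direction.

```java
import java.util.*;

// toy widening table and mode-direction checks; a partial stand-in for the
// #3751-492 rules, not the actual FWD implementation
public class ModeDirectionSketch
{
   // caller type -> set of wider callee types
   static final Map<String, Set<String>> WIDEN = new HashMap<>();
   static
   {
      WIDEN.put("integer", new HashSet<>(Arrays.asList("int64", "decimal")));
      WIDEN.put("int64", new HashSet<>(Arrays.asList("decimal")));
      WIDEN.put("character", new HashSet<>(Arrays.asList("longchar")));
      WIDEN.put("date", new HashSet<>(Arrays.asList("datetime", "datetimetz")));
      WIDEN.put("datetime", new HashSet<>(Arrays.asList("datetimetz")));
   }

   // INPUT: data flows caller -> callee, so the callee type may be wider
   public static boolean inputMatches(String caller, String callee)
   {
      return caller.equals(callee) ||
             WIDEN.getOrDefault(caller, Collections.emptySet()).contains(callee);
   }

   // OUTPUT: data flows callee -> caller, so the check runs in the opposite direction
   public static boolean outputMatches(String caller, String callee)
   {
      return caller.equals(callee) ||
             WIDEN.getOrDefault(callee, Collections.emptySet()).contains(caller);
   }
}
```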
#17 Updated by Marian Edu over 3 years ago
Greg Shah wrote:
Started rework of fuzzy matching to implement missing features. This first pass refactors the fuzzyMethodLookup() processing to make a list of all possible matches at the top of the method (by using the new candidates() method). This approach only returns candidates which match the access rights and number of parameters checks. It also returns all candidates from the entire inheritance hierarchy. This is how the 4GL does it, so that better matches in the inheritance hierarchy take precedence over local matches. The parameter mode processing was also switched. INPUT parameter types widen from caller to callee and OUTPUT parameter types narrow from caller to callee.
These first changes have been added to 3821c revision 11956.
Just a word on access rights: those don't always have to match... While one can't make a method's access more restrictive, it can certainly be made more accessible (private->protected->public works; the other way around does not).
Probably not related, although maybe: how do you plan to handle the erasure issue with ctors? Normally a JsonArray should have a ctor with both an array of JsonObject and an array of JsonArray, but there can only be one method with object[]?
#18 Updated by Greg Shah over 3 years ago
Just a word on access rights: those don't always have to match... While one can't make a method's access more restrictive, it can certainly be made more accessible (private->protected->public works; the other way around does not).
As far as I know, we already implement this properly.
Probably not related,
Correct. This task is for handling the fuzzy method matching which occurs at conversion time. Picking the correct constructor is not part of that work.
how do you plan to handle the erasure issue with ctors? Normally a JsonArray should have a ctor with both an array of JsonObject and an array of JsonArray, but there can only be one method with object[]?
I think the idea is:
- Implement a form of naming modification that provides for this overloading, similar to how we handle overloading of extent cases or parameter modes.
- Modify the runtime support to call the proper method name.
But I think this is not implemented yet.
Constantin: Do we need a task for this? I was not expecting this to be part of #4350.
#19 Updated by Constantin Asofiei over 3 years ago
Greg Shah wrote:
But I think this is not implemented yet.
Yes, it is not implemented yet.
Constantin: Do we need a task for this? I was not expecting this to be part of #4350.
Yes, we need a task.
#20 Updated by Greg Shah about 3 years ago
- Related to Feature #5113: implement c'tor overloading for cases where type erasure is an issue added
#21 Updated by Greg Shah about 3 years ago
- Related to Feature #5160: About cast() function support added
#22 Updated by Greg Shah about 3 years ago
In regard to dataset and dataset-handle parameters, I need some help.
- Based on the 4GL docs, these are described as mirroring the temp-tables/table-handles (the rules for which are documented in #3751-492) instead of mirroring buffers.
- temp-tables vs buffers vs datasets
  - With temp-tables we match the structure of the table (field type, number, order and extent) but without table/field names or options.
  - With buffers for temp-tables we cannot match just the structure; the 4GL actually cares about the names of the table/fields, so we have to add that into the structure of the table to find a match. I have a few more cases to check here before I can finish the implementation.
  - For datasets:
    - Do we need to match the dataset name or is this just a match on structure?
    - I'm guessing there is some level of structural matching. I assume we need to handle each buffer/temp-table AND each data relation. What are the rules that define these?
      - For each buffer/temp-table:
        - The 4GL syntax operates on buffers, but the documentation suggests that datasets act like temp-tables, so each buffer might be treated as its backing temp-table.
        - Is it just pure structure like with table parameters?
        - Or do we need the table and field names similar to how buffer parameters work?
      - For data relations:
        - Are names (the data relation name, each buffer/temp-table name, names in the optional field-mapping clause) important?
        - If we must match on names, then I assume we have to handle the fact that there are default names (e.g. RelationX where X is an ordinal starting with 1).
        - Do we need to encode the structure?
          - parent buffer/temp-table
          - child buffer/temp-table
          - optional field mapping
    - Is the overall ordering of the structure normalized in some way? In other words, can you define the same dataset in 2 or more ways in source code?
- table-handles vs dataset-handles
  - For table-handles, we have no schema information until runtime, so everything we do there we can easily do for dataset-handles as well.
  - This means some limited cases for exact matches, some limited cases for fuzzy matches and everything else deferred until runtime.
Marian/Ovidiu: Can you give me your best understanding of the answers to the above questions? Also: do we have existing testcases that can help here?
#23 Updated by Marian Edu about 3 years ago
Greg Shah wrote:
Marian/Ovidiu: Can you give me your best understanding of the answers to the above questions? Also: do we have existing testcases that can help here?
Afraid not; for temp-tables we have tests for all the different combinations in the table/parameter folder: table/table-handle; in/out/in-out; value/reference/bind... For datasets I created at some point test stubs to do the same, but nothing was done there since we started with the OO stuff :(
Validation is strict if passed by reference, otherwise it's looser for temp-tables, and most probably this holds true for datasets. But to tell the truth, for cases like this we usually use include files for the temp-table/dataset definition, so the structure is always the same; no idea how Progress handles this. If you want I can go ahead and fill in the tests, unless this is already covered someplace else.
#24 Updated by Greg Shah about 3 years ago
If you want I can go ahead and fill in the tests unless this is already covered someplace else.
How much work is this?
It would be important to have some overloaded cases too:
- more than 1 method which only differ by the dataset definition in 1 parm
- cases where there are both a dataset (or datasets) and a dataset-handle
#25 Updated by Marian Edu about 3 years ago
Greg Shah wrote:
If you want I can go ahead and fill in the tests unless this is already covered someplace else.
How much work is this?
2-3 days probably, to cover everything.
It would be important to have some overloaded cases too:
- more than 1 method which only differ by the dataset definition in 1 parm
- cases where there are both a dataset (or datasets) and a dataset-handle
I don't think we have any tests for method overload even for temp-tables; we can craft something, though.
#26 Updated by Ovidiu Maxiniuc about 3 years ago
Greg Shah wrote:
In other words, can you define the same dataset in 2 or more ways in source code?
Yes, you can create tables with the same "architecture" using both static (define dataset ...) and dynamic (create dataset) modes. If the correct construction primitives are used, the resulting datasets are functionally indistinguishable (well, at least with the exception of the dynamic attribute). Internally, FWD converts the static chained primitives to a vector of lambdas which are executed by the builder object as if they were sequentially invoked for a dynamic dataset.
I am not really an expert in this area. The datasets were implemented at the same time as OO support, and the topic of this tracker is at the border of both. I was not aware that two methods can be declared in the same OE class with a dataset parameter but differing by its structure. If the parameter is a dataset-handle, it is clear, by design, that the structure is not known at compile time.
#27 Updated by Greg Shah about 3 years ago
Marian Edu wrote:
Greg Shah wrote:
If you want I can go ahead and fill in the tests unless this is already covered someplace else.
How much work is this?
2-3 days probably, to cover everything.
Yes, please work this. It should be worked over the next week since we need to clear this task.
It would be important to have some overloaded cases too:
- more than 1 method which only differ by the dataset definition in 1 parm
- cases where there are both a dataset (or datasets) and a dataset-handle
I don't think we have any tests for method overload even for temp-tables, we can craft something though.
I have a set of testcases in our old/crappy testcase repo. I will check them in to the new/hotness testcases repo. You can extend them for datasets, hopefully it will save time. I'll post here when they are ready. I need to do some cleanup and add a readme.
#28 Updated by Greg Shah about 3 years ago
I have a set of testcases in our old/crappy testcase repo. I will check them in to the new/hotness testcases repo.
See testcases repo revision 1012. The oo/params/readme.txt explains the current state. I did have to rework the fully qualified class/interface names to match a cleaner "package" structure. An example inheritance hierarchy of classes/interfaces is in oo/basic/, which is used for many of the object parameter types. All the method parameter tests are in oo/params/. I apologize in advance that I did not retest everything after I changed the fully qualified package names. Any issues should just be typos.
The table-parm-*.[pi] and the TableParm*.cls show how TABLE, TABLE-HANDLE and some BUFFER processing works. I think you can duplicate them and modify for dataset/dataset-handle.
#29 Updated by Marian Edu about 3 years ago
Greg Shah wrote:
I have a set of testcases in our old/crappy testcase repo. I will check them in to the new/hotness testcases repo.
See testcases repo revision 1012. The oo/params/readme.txt explains the current state. I did have to rework the fully qualified class/interface names to match a cleaner "package" structure. An example inheritance hierarchy of classes/interfaces is in oo/basic/, which is used for many of the object parameter types. All the method parameter tests are in oo/params/. I apologize in advance that I did not retest everything after I changed the fully qualified package names. Any issues should just be typos. The table-parm-*.[pi] and the TableParm*.cls show how TABLE, TABLE-HANDLE and some BUFFER processing works. I think you can duplicate them and modify for dataset/dataset-handle.
Greg, just for method matching at conversion time you can use the tests I've uploaded in the oo/params folder - dataset-parm-overload-test.p. I ran a conversion on those and, as it is now, the conversion doesn't find the right method in the base class to override; the generated code for the extension class does not compile.
I was thinking more about different behavior when in/out/in-out, append, by-reference are used. Transaction processing and the effect of unique indexes are probably the same as when temp-tables are used, so I expect this to be the same, only 'merged' together for all tables in the dataset. Data relations are not like foreign-key constraints, so I don't expect those to have any meaning other than when using fill and query repositioning. However, this is different from what we've just done for the OO dataset/dataset-handle method override, so if not urgent I would rather close the JSON Array/Object and whatever was left in the OO package first.
#30 Updated by Greg Shah about 3 years ago
The testcases are very good, thank you. The behavior is indeed just like tables/table-handles except for the interesting findings about the structural matching:
- only the following cases are detected as differences in structure: table number, order, table structure (field number, type, extent and order)
- things that are ignored: dataset names, table names, dataset options, table/field options, table indexes, data relations
It is interesting that the relations are ignored. I'm surprised that it is purely the number and order of the tables (and their structure). It should be pretty straightforward to implement this part since I can build on the table structure that I already calculate.
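A sketch of how such a structural signature might be encoded (the encoding is hypothetical; FWD's actual signature format may differ): only field type/extent, field order and table order feed into the key, while every name and option is deliberately ignored.

```java
// hypothetical encoding of the structural signature; FWD's actual format may differ
public class DatasetSigSketch
{
   // a field reduced to what matters for matching: type and extent, no name
   public static String fieldSig(String type, int extent)
   {
      return extent > 0 ? type + "[" + extent + "]" : type;
   }

   // a table's signature is its ordered field signatures
   public static String tableSig(String... fieldSigs)
   {
      return String.join(",", fieldSigs);
   }

   // a dataset's signature is its ordered table signatures; dataset/table names,
   // options, indexes and data relations are deliberately left out
   public static String datasetSig(String... tableSigs)
   {
      return String.join(";", tableSigs);
   }
}
```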
I ran a conversion on those and, as it is now, the conversion doesn't find the right method in the base class to override; the generated code for the extension class does not compile.
This is expected. I'm trying to fix it now.
However, this is different from what we've just done for the OO dataset/dataset-handle method override, so if not urgent I would rather close the JSON Array/Object and whatever was left in the OO package first.
Good plan. I have what I need for the method override cases so the other items can be deferred.
#31 Updated by Greg Shah about 3 years ago
I've put a complete implementation of the fuzzy matching together, including fixes for many latent problems. I'm now deep into the debugging and cleanup of my changes based on testing with oo/params/. I'm hitting this code in ClassDefinition.annotateCallSignature():

   // TODO: conversion needs to be enhanced to support wrapping array parms
   System.out.printf("WARNING: FUZZY TYPE EXTENT detected at parm %d " +
                     "(def = %s, call = %s), node = %s\n",
                     idx, defSig[idx], callSig[idx], call.dumpTree());
I already have to detect all of these cases in the fuzzy matching process. So I can store off the results of that matching and reuse it later. In this case it is not clear what we need. Today we just ignore it.
There is only 1 case where we have fuzzy extent match: a fixed extent passed to an indeterminate extent. This is only matched if there is no fixed extent of the same size.
Constantin: What do we need here to handle this case downstream? What changes are needed downstream to convert properly?
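The fixed-to-indeterminate rule above can be sketched like this (the encoding of an indeterminate extent as 0 is an assumption for illustration, not FWD's representation): an exact fixed-extent candidate always wins, and the indeterminate candidate only matches when no fixed candidate of the same size exists.

```java
// encoding assumption for illustration: a positive value is a fixed extent and
// 0 stands for an indeterminate extent (not FWD's actual representation)
public class ExtentMatchSketch
{
   public static Integer pick(int[] candidateExtents, int callerExtent)
   {
      boolean hasIndeterminate = false;
      for (int extent : candidateExtents)
      {
         if (extent == callerExtent)
         {
            // an exact fixed-extent match always wins
            return callerExtent;
         }
         if (extent == 0)
         {
            hasIndeterminate = true;
         }
      }
      // a fixed extent fuzzily matches an indeterminate extent only when there
      // is no fixed candidate of the same size
      return (callerExtent > 0 && hasIndeterminate) ? Integer.valueOf(0) : null;
   }
}
```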
#32 Updated by Greg Shah about 3 years ago
I've been analyzing ClassDefinition.annotateCallSignature() to determine what I need to record during the fuzzy matching process so that the annotations can be done without duplicating the fuzzy matching logic. I have an easy way to return an override classname for each parameter that is fuzzy matched (and any that are exact matches will have no override).
The problem is that there are many strange sections of code in ClassDefinition.annotateCallSignature() which don't have an obvious purpose (at least to me).
1. Use of parm.getType() == PARAMETER.
This is only matched when the parser uses param_passing_list, which is used by the RUN statement and other language statement cases for parameter passing. In the context of method calls, it can be seen from a new_phrase.
- What is the purpose of these references?
- Why not detect this case at the top of the method and just reset parm to the first child?
2. For this code:

   else if (parm.getType() == UNKNOWN_VAL)
   {
      if (classname.startsWith("object"))
      {
         classname = "object";
      }
   }

- Is the idea that this will emit as new object()?
- Why don't we just handle this in literals.rules?
3. With this code:

   else if (parm.getType() == FUNC_CLASS  ||
            parm.getType() == VAR_CLASS   ||
            parm.getType() == FIELD_CLASS ||
            parm.getType() == FUNC_CLASS  ||
            parm.getType() == ATTR_CLASS  ||
            parm.getType() == METH_CLASS  ||
            parm.getType() == OO_METH_CLASS)
   {
      force = false;
   }

Does this code need to change to handle the widening/narrowing?
3. force is disabled above for object cases, but then there is this code below:

   if (classname.startsWith("object<"))
   {
      classname = "object";
      force = false;
   }
   else if (classname.startsWith("jobject<"))
   {
      force = false;
      classname = "jobject";
   }

I don't understand the purpose of this. Plus classname is reassigned, but will be ignored below because force is false.
4. I need some feedback on what we should be coding for buffers, tables, datasets, table-handles and dataset-handles. I'm assuming exact matches need no override, but for the fuzzy matches I'm not sure what is needed downstream. Do we need changes downstream?
#33 Updated by Constantin Asofiei about 3 years ago
Greg Shah wrote:
1. Use of parm.getType() == PARAMETER.
This is only matched when the parser uses param_passing_list, which is used by the RUN statement and other language statement cases for parameter passing. In the context of method calls, it can be seen from a new_phrase.
- What is the purpose of these references?
One case was added to fix extent arguments at OO constructor calls (new_phrase). There is some inconsistency in how we parse arguments: in some cases there is a PARAMETER node, in some cases there isn't. For the second case, I don't recall why it was needed, but I assume the reason was the same inconsistency.
- Why not detect this case at the top of the method and just reset parm to the first child?
Some TRPL rules expect some annotations to exist at the 'direct child' of the call. In some cases this is the PARAMETER node; the rest is the actual argument. Moving parm to the first child of PARAMETER I suspect will break things.
OTOH, checks like parm.getType() == FUNC_CLASS on line 2857 will never be true for new_phrase.
2. For this code:
[...]
- Is the idea that this will emit as new object()?
Yes, I think that was the reason.
- Why don't we just handle this in literals.rules?
The qualifier via the generic makes no sense if the unknown literal (?) is used (as it can match anything), so the decision was to strip it early.
3. With this code:
[...]
Does this code need to change to handle the widening/narrowing?
The force flag is meant to change the Java type for the parameter so as to be compatible with the target in a direct Java call (thus the wrap_parameter annotation). For OO arguments, the actual Java type is object. Even if you wrap this to something else, the 'concrete' 4GL type is passed via a generic, and Java doesn't use this when resolving overloads.
Also, I don't understand how widening/narrowing is involved with OO arguments - if you have two colliding (in terms of Java signature) methods, like m1(p1 as oo.foo) and m1(p1 as oo.bar), then the conversion should decide to use whatever the Java method name is for this call (determined from the argument's type), as these two methods will convert to something like m1(object<oo.foo> p1) and m1_1(object<oo.bar> p1).
3. force is disabled above for object cases, but then there is this code below:
[...]
I don't understand the purpose of this. Plus classname is reassigned, but will be ignored below because force is false.
classname was reassigned because there is some commented-out code - I agree it does nothing, but if you refactor this, remove the commented-out code, too. Some changes in annotateCallSignature tried to 'keep in sync' with commented-out code, although at this time I assume that is obsolete and can be removed completely.
4. I need some feedback on what we should be coding for buffers, tables, datasets, table-handles and dataset-handles. I'm assuming exact matches need no override, but for the fuzzy matches I'm not sure what is needed downstream. Do we need changes downstream?
I can't visualize what the fuzzy match case is here. The TableParameter Java type is used for table arguments, and for buffers something like Tt1_1_1.Buf is used.
If by fuzzy you mean you can pass a different DMO which has the same signature (but with different field names), then how will the call look? The main problem is the buffer case, where we pass it directly.
And if these are for different DMOs, we need some kind of cast function to build a buffer using the target method's temp-table, from the caller's argument.
For table-handle and dataset-handle, I don't see what fuzzy matching the conversion can do, as the concrete type is hidden from the conversion. Only the runtime can match them.
For dataset, the signature will have DataSetParameter; I don't see anything special to do with the argument, except to use it to determine the correct target and link to that via the annotations.
#34 Updated by Greg Shah about 3 years ago
- File 4350_fuzzy_matching_changes_20210415.patch added
Also, I don't understand how widening/narrowing is involved with OO arguments - if you have two colliding (in terms of Java signature) methods, like m1(p1 as oo.foo) and m1(p1 as oo.bar), then the conversion should decide to use whatever the Java method name is for this call (determined from the argument's type), as these two methods will convert to something like m1(object<oo.foo> p1) and m1_1(object<oo.bar> p1).
Yes, the new version of fuzzyMethodMatch() handles this matching calculation now. We will return the MemberData for the correct match. annotateMethodCall() stores the tempidx (and tempidx-file) annotations from that MemberData, so the downstream code will pick up the correct converted method name automatically.
I'm also returning a String[] called overrides which holds the replacement type for any of the parameters that were fuzzy matched. In an exact match, all of the overrides elements will be null. In a fuzzy match, at least one element (but it can be more than one or even all) will be non-null. I will store the correct values during the fuzzy matching so that we don't have to duplicate the logic. For the non-object BDTs, it is easy to see the right classname. But for the object cases, it is not obvious.
I'm attaching my current changes.
Here is how objects widen (INPUT parameters can be more narrow at the caller than the called code):
   private boolean isWideningTypeMatch(String caller, String callee, Runnable incrementer)
   {
      if (("int64".equals(caller) && "decimal".equals(callee))                               ||
          ("integer".equals(caller) && ("int64".equals(callee) || "decimal".equals(callee))) ||
          ("character".equals(caller) && "longchar".equals(callee))                          ||
          ("datetime".equals(caller) && "datetimetz".equals(callee))                         ||
          ("date".equals(caller) && ("datetime".equals(callee) || "datetimetz".equals(callee))))
      {
         return true;
      }

      if (caller.startsWith("object") && callee.startsWith("object"))
      {
         // parse out the classes
         ClassDefinition src = parseObjectSpec(caller);
         ClassDefinition tgt = parseObjectSpec(callee);

         // check if the target is in the interface list of the caller type's specific class
         if (src.isImplemented(tgt))
         {
            return true;
         }

         // check if the target matches the caller type's parent class hierarchy (including
         // implemented interfaces)
         if (src.isInheritedFrom(tgt, true, incrementer))
         {
            return true;
         }

         // lowest priority/last ditch check
         return "progress.lang.object".equals(tgt.getName());
      }

      return false;
   }
There are 3 levels that match the rules posted in #3751-492 (current implemented interfaces, then closest parent classes/parent implemented interfaces and finally object itself).
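The primitive (non-object) widening pairs above can also be captured in a standalone lookup table. This is an illustrative sketch only, with hypothetical class/method names and the object-type rules omitted:

```java
import java.util.Map;
import java.util.Set;

public class WideningTable
{
   // caller type -> set of callee types it may widen to (INPUT mode), mirroring
   // the pair checks in isWideningTypeMatch() above
   private static final Map<String, Set<String>> WIDENS = Map.of(
      "int64",     Set.of("decimal"),
      "integer",   Set.of("int64", "decimal"),
      "character", Set.of("longchar"),
      "datetime",  Set.of("datetimetz"),
      "date",      Set.of("datetime", "datetimetz"));
   
   public static boolean widens(String caller, String callee)
   {
      return WIDENS.getOrDefault(caller, Set.of()).contains(callee);
   }
   
   public static void main(String[] args)
   {
      System.out.println(widens("integer", "decimal")); // integer argument fits a decimal parameter
      System.out.println(widens("decimal", "integer")); // the reverse is a narrowing, not a widening
   }
}
```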
Here is how objects narrow (OUTPUT parameters can be wider at the caller than the called code):
private boolean isNarrowingTypeMatch(String caller, String callee, Runnable incrementer)
{
   if (("int64".equals(caller) && "integer".equals(callee)) ||
       ("decimal".equals(caller) && ("integer".equals(callee) || "int64".equals(callee))) ||
       ("longchar".equals(caller) && "character".equals(callee)) ||
       ("datetime".equals(caller) && "date".equals(callee)) ||
       ("datetimetz".equals(caller) && ("date".equals(callee) || "datetime".equals(callee))))
   {
      return true;
   }
   
   if (caller.startsWith("object") && callee.startsWith("object"))
   {
      // parse out the classes
      ClassDefinition src = parseObjectSpec(caller);
      ClassDefinition tgt = parseObjectSpec(callee);
      
      // check if the target parameter is a child class of the caller (this explicitly ignores
      // implemented interfaces which is how the 4GL does it)
      if (tgt.isInheritedFrom(src, false, incrementer))
      {
         return true;
      }
      
      return false;
   }
   
   return false;
}
There is only one kind of narrowing match for objects, as documented in #3751-492 (closest child class match).
In all cases we know the exact type of the object which is being assigned. For INPUT mode, this is the callee's parameter definition type and for OUTPUT mode this is the caller's instance type. Do I just use that assignment type in the override?
#35 Updated by Greg Shah about 3 years ago
4. I need some feedback on what we should be coding for buffers, tables, datasets, table-handles and dataset-handles. I'm assuming exact matches need no override, but for the fuzzy matches I'm not sure what is needed downstream. Do we need changes downstream?
I can't visualise what the fuzzy match case is here.
The TableParameter Java type is used for table arguments, and for buffers something like Tt1_1_1.Buf. If by fuzzy you mean you can pass a different DMO which is the same as the signature (but with different field names), then how will the call look? The main problem is the buffer case, where we pass it directly.
Buffers cannot match anything other than the same DMO. There is no fuzzy matching for them. I shouldn't have included that in the list.
And if these are for different DMOs, we need some kind of cast function to build a buffer using the target method's temp-table, from the caller's argument.
Both tables and datasets can match structurally. But those are EXACT matches! I was not planning to do anything special for the normal structural matches. Do we already deal with those properly?
The only fuzzy match that is possible for tables and datasets is:
- A table will match a table-handle parameter if there are no exact matches to table parameters AND there is one and only one method with a table-handle parameter in that position.
- A dataset will match a dataset-handle parameter if there are no exact matches to dataset parameters AND there is one and only one method with a dataset-handle parameter in that position.
For the table-handle and dataset-handle, I don't see what fuzzy matching can the conversion do, as the concrete type is hidden from the conversion. Only the runtime can match them.
Although we do indeed now detect the dynamic case and notify the annotation processing to mark them, there is a case of compile time fuzzy matching for both table-handle and dataset-handle:
- If there is one and only one table parameter (and no table-handle parameter) in that position, then the table will be matched.
- If there is one and only one dataset parameter (and no dataset-handle parameter) in that position, then the dataset will be matched.
It is not clear what we need to specify for this case.
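The table/table-handle fuzzy rules above can be sketched as a small selection function. This is an illustrative sketch only; the class and method names are hypothetical, not FWD APIs:

```java
import java.util.List;

public class HandleFuzzyMatch
{
   /**
    * Pick the parameter type a TABLE argument matches at a given position, per the
    * rules above: an exact "table" match wins; otherwise a single "table-handle"
    * candidate is accepted; anything else is ambiguous/unmatched (null).
    */
   public static String matchTableArgument(List<String> candidateParmTypes)
   {
      if (candidateParmTypes.contains("table"))
      {
         return "table";                                  // exact match, no fuzziness
      }
      
      long handles = candidateParmTypes.stream()
                                       .filter("table-handle"::equals)
                                       .count();
      
      return handles == 1 ? "table-handle" : null;        // fuzzy match only if unambiguous
   }
   
   public static void main(String[] args)
   {
      System.out.println(matchTableArgument(List.of("table", "table-handle")));
      System.out.println(matchTableArgument(List.of("table-handle")));
      System.out.println(matchTableArgument(List.of("table-handle", "table-handle")));
   }
}
```

The dataset/dataset-handle case follows the same shape with the type names swapped.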
#36 Updated by Constantin Asofiei about 3 years ago
Greg Shah wrote:
Buffers cannot match anything other than the same DMO. There is no fuzzy matching for them. I shouldn't have included that in the list.
That's not correct - see this test.
def temp-table tt1 field f2 as int.
def var o as oo.foo.

o = new oo.foo().
o:m1(buffer tt1).
o:m1(table tt1).
where oo.foo is this:

class oo.Foo:
   def temp-table tt1 field f1 as int.
   
   method public void m1(buffer b for tt1).
   end.
   
   method public void m1(input table for tt1).
   end.
end.
In this example, tt1 in oo.foo and tt1 in the program are not the same DMO (as the field names differ).
And if these are for different DMOs, we need some kind of cast function to build a buffer using the target method's temp-table, from the caller's argument.
Both tables and datasets can match structurally. But those are EXACT matches! I was not planning to do anything special for the normal structural matches. Do we already deal with those properly?
See the above example. Currently there is no check for the structure of the argument table/buffer, for buffer/table parameters.
But fuzzy matching is only for resolving the target method - and 4GL doesn't allow method overload via a BUFFER/TABLE parameter (it is a compile time error to have two overloads m1(buffer for tt1) and m1(buffer for tt2)).
So I don't see an issue regarding fuzzy matching, just a potential conversion problem to 'cast' the buffer to the target type. The parameter type will always be BUFFER for fuzzy matching.
Although we do indeed now detect the dynamic case and notify the annotation processing to mark them, there is a case of compile time fuzzy matching for both table-handle and dataset-handle:
- If there is one and only one table parameter (and no table-handle parameter) in that position, then the table will be matched.
- If there is one and only one dataset parameter (and no dataset-handle parameter) in that position, then the dataset will be matched.
It is not clear what we need to specify for this case.
What's the goal here? If the fuzzy match finds the correct method, I don't see what is needed for the argument. FWD will emit TableParameter and DataSetParameter.
#37 Updated by Greg Shah about 3 years ago
Buffers cannot match anything other than the same DMO. There is no fuzzy matching for them. I shouldn't have included that in the list.
That's not correct - see this test.
...
This can only be done using temp-tables. I thought we already converted temp-tables with the same structure (but optionally different names) to the same DMO.
I can say that we already handle this case in our exact matching rules. The signature I calculate for buffers is the table name (which has to be the same case-insensitively) and a structural match to the table (no field names). So we already match properly.
If we are using 2 different DMOs, then we will probably need an intermediate layer like we do with table and dataset parameters.
What's the goal here? If the fuzzy match finds the correct method, I don't see what is needed for the argument. FWD will emit TableParameter and DataSetParameter.
So there is no need for an override for tables, table-handles, datasets or dataset-handles?
#38 Updated by Constantin Asofiei about 3 years ago
Greg Shah wrote:
What's the goal here? If the fuzzy match finds the correct method, I don't see what is needed for the argument. FWD will emit TableParameter and DataSetParameter.
So there is no need for an override for tables, table-handles, datasets or dataset-handles?
Sorry, I don't really see what you mean by 'override'.
But if you are referring to do something special with the arguments - no, I don't see any need.
BTW, all the fuzzy logic which you do at parsing time to find the proper matching method so it can be directly called will need to be duplicated for dynamic OO calls (hopefully the rules are the same...).
#39 Updated by Greg Shah about 3 years ago
Sorry, I don't really see what you mean by 'override'.
But if you are referring to do something special with the arguments - no, I don't see any need.
Yes, this is what I mean. For example, for a widening match (happens in INPUT mode) from integer to decimal on parameter 2, I would expect that overrides[1] would be decimal and this would be written into the classname annotation at the caller's 2nd parameter.
The narrowing case (OUTPUT mode) is not as obvious. In a caller with a narrowing from decimal to integer, what do we need in order to emit the proper output parameter wrapper? Should we annotate the caller's type (e.g. integer here) as the override?
#40 Updated by Greg Shah about 3 years ago
BTW, all the fuzzy logic which you do at parsing time to find the proper matching method so it can be directly called, will need to be duplicate for dynamic OO calls (hopefully the rules are the same...).
Yes, that is true. I believe the behavior is the same except that the actual parameters (including table-handle and dataset-handle) have real runtime data that is matched.
I also hope the matching logic is easier to understand than before. It also may have some amount of code that can be re-used, though it may need additional refactoring to fully use it.
#41 Updated by Greg Shah about 3 years ago
I still need some ideas on the object widening/narrowing AND on the FUZZY TYPE EXTENT detected case from #4350-31.
#42 Updated by Constantin Asofiei about 3 years ago
Greg Shah wrote:
Yes, this is what I mean. For example, for a widening match (happens in INPUT mode) from integer to decimal on parameter 2, I would expect that overrides[1] would be decimal and this would be written into the classname annotation at the caller's 2nd parameter.
Thanks, now I understand. By 'override' you mean to override the actual argument type (by wrapping it in a different type to match the signature).
The narrowing case (OUTPUT mode) is not as obvious. In a caller with a narrowing from decimal to integer, what do we need in order to emit the proper output parameter wrapper? Should we annotate the caller's type (e.g. integer here) as the override?
I think there is a bug in FWD currently. A func0(output d) call with a func0(output i as int) definition will emit the argument as wrap(new integer(d)). This is incorrect, as the reference will be lost, and the wrong instance will be updated by OutputParameter.
I would expect the wrap(Class&lt;BDT&gt;, BDT var) form to be emitted instead, as in wrap(integer.class, d) for the argument. So in your case, the override needs to be emitted as the first argument for wrap.
In any case, what needs to be done for OUTPUT mode is to save the override at the classname annotation - this was originally used to emit the correct wrap first argument, but now I see the override is emitted at chp_wrapper for some reason.
#43 Updated by Constantin Asofiei about 3 years ago
Greg Shah wrote:
I've put a complete implementation of the fuzzy matching together, including fixes for many latent problems. I'm now deep into the debugging and cleanup of my changes based on testing with oo/params/. I'm hitting this code in ClassDefinition.annotateCallSignature():
The code mentioned here is for when the array type is not an exact match. You can pass an int extent argument to a dec extent parameter in 4GL. This case is not supported in FWD, even for functions. So this is something new to add, maybe an InputExtentParameter both at the caller argument and the parameter definition.
There is only one case where we have a fuzzy extent match: a fixed extent passed to an indeterminate extent. This is only matched if there is no fixed extent of the same size.
If here you are considering only the extent match (and not the element type of the array, which seems to follow the same narrow/widen rules), then the only cases I can see are OUTPUT/INPUT-OUTPUT, which FWD emits as OutputExtentParameter and InputOutputExtentParameter (although the conversion is broken now for these, when used at function calls).
So, I don't think there is a need for an override for OUTPUT/INPUT-OUTPUT - as these will emit as the same FWD type regardless of the array type in 4GL. The FWD runtime can find the 'concrete' type for the argument by checking the array's element type, and do the conversion accordingly (although I think we already do this implicitly).
#44 Updated by Constantin Asofiei about 3 years ago
Greg Shah wrote:
I still need some ideas on the object widening/narrowing.
What do you mean here? If it is the override for the argument's type - I don't see a need, as the Java type is always object. We just need to ensure the converted method overloads (which collide via the 'object' parameter) have unique names, and that the proper one is 'fuzzy matched'.
At runtime, the caller's argument will just pass this reference to the other side (i.e. for INPUT), and the method's definition still knows to treat that reference as the parameter's type, as that is how the code was converted. I don't see a reason to do more than that for direct Java calls.
Even for dynamic calls - the concrete legacy type is held at the instance, in the object.type
field, set when the variable/parameter was defined. And that can be used by the runtime to do the 'fuzzy match'.
#45 Updated by Greg Shah about 3 years ago
We just need to ensure the converted method overloads (which collide via the 'object' parameter) have unique names
I thought this was already handled. Otherwise the overloaded methods would not have compiled already.
and the proper one is 'fuzzy matched'.
I've already done this part in my changes.
At runtime, the caller's argument will just pass this reference to the other side (i.e. for INPUT), and the method's definition still knows to treat that reference as the parameter's type, as that is how the code was converted. I don't see a reason to do more than that for direct Java calls.
What about the OUTPUT mode narrowing case? This is a scenario where the callee (parameter def) is a subclass of the caller's type and it is OK because in the 4GL, this is assigned back.
#46 Updated by Greg Shah about 3 years ago
What is this code for:
if ("integer".equals(classname) || "int64".equals(classname) || "character".equals(classname))
{
   // for OUTPUT and INPUT-OUTPUT variables, we need to force the classname in
   // certain cases
   if (!defType.equals(callType))
   {
      parm.putAnnotation("classname", classname);
   }
}
#47 Updated by Constantin Asofiei about 3 years ago
Greg Shah wrote:
What is this code for:
[...]
For OUTPUT, if the argument is int64 and the parameter is integer, then this is not a natural narrowing in FWD, as integer is not a sub-class of int64 (it is the other way around: int64 is a subclass of integer). So an explicit 'cast' via wrap(BDT, ...) is needed.
Same for character and longchar - they are both sub-classes of Text, so we need an explicit 'cast'.
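The hierarchy constraint can be demonstrated with stand-in classes. The names here are illustrative only, not the actual FWD wrapper types:

```java
public class NarrowingCast
{
   static class IntegerVal { long v; }          // stand-in for integer
   static class Int64Val extends IntegerVal {}  // stand-in for int64 (subclass of integer)
   
   public static void main(String[] args)
   {
      // natural direction: an int64 instance *is an* integer, so no conversion is needed
      IntegerVal widened = new Int64Val();
      System.out.println(widened instanceof Int64Val);
      
      // unnatural direction: an integer instance is NOT an int64, so a plain Java
      // cast would fail at runtime; an explicit conversion (the wrap(BDT, ...)
      // style call) must be emitted instead
      IntegerVal plain = new IntegerVal();
      System.out.println(plain instanceof Int64Val);
   }
}
```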
#48 Updated by Constantin Asofiei about 3 years ago
Greg Shah wrote:
We just need to ensure the converted method overloads (which collide via the 'object' parameter) have unique names
I thought this was already handled. Otherwise the overloaded methods would not have compiled already.
Yes, it is already handled AFAIK.
At runtime, the caller's argument will just pass this reference to the other side (i.e. for INPUT), and the method's definition still knows to treat that reference as the parameter's type, as that is how the code was converted. I don't see a reason to do more than that for direct Java calls.
What about the OUTPUT mode narrowing case? This is a scenario where the callee (parameter def) is a subclass of the caller's type and it is OK because in the 4GL, this is assigned back.
Same as for INPUT - the argument and parameter are both 'object'. There is nothing to narrow in Java terms. The FWD runtime is responsible for any needed checks (if any). Even object.type needs to remain 'untouched', as this is the legacy OO type used to define the object variable/parameter, and not the type of the instance it is currently referring to.
#49 Updated by Greg Shah about 3 years ago
It looks like we don't handle table parameter or dataset parameter overloading of the method definitions themselves. I've solved the fuzzy matching issues but we just emit all table/table-handle cases as TableParameter and all dataset/dataset-handle cases as DataSetParameter. So even a simple case where there is one table and one table-handle signature will conflict (same for the dataset and dataset-handle).
Why not at least implement a TableHandleParameter and DataSetHandleParameter? These should not need extra disambiguation in Java.
Then we need a plan for disambiguating the tables and datasets where they are different in the 4GL but not in Java. I guess we have the name modification approach there. I thought this was already done but just found out otherwise. Where do I look for that code?
#50 Updated by Constantin Asofiei about 3 years ago
The dataset-handle vs dataset and table-handle vs table already works in 3821c - I've tested these methods and they convert with unique names:
method public void ds1(input table for tt1).
end.

method public void ds1(input dataset ds1).
end.

method public void ds1(Table-handle ht1).
end.

method public void ds1(dataset-handle ds1).
end.
Note that for table-handle and dataset-handle, the 4GL argument type is handle. During conversion, the conflict is determined by ConvertedClassName.isMethodNameConflict, which relies on SymbolResolver.calculateDefinitionSignature to calculate the signature for the definition.
#51 Updated by Greg Shah about 3 years ago
My changes have certainly broken this part.
Note that for table-handle and dataset-handle, the 4GL argument type is handle.
Can you elaborate on what you mean here? The signature is calculated as table-handle or dataset-handle, not handle. And in the 4GL, these are not interchangeable with handle nor are they considered the same type.
#52 Updated by Constantin Asofiei about 3 years ago
Greg Shah wrote:
My changes have certainly broken this part.
Note that for table-handle and dataset-handle, the 4GL argument type is handle.
Can you elaborate on what you mean here? The signature is calculated as table-handle or dataset-handle, not handle. And in the 4GL, these are not interchangeable with handle nor are they considered the same type.
Sorry, I meant the runtime type - at runtime these parameters are treated as actual handles.
For parsing/conversion purposes, yes, the type at the signature is set as table-handle or dataset-handle.
What test is failing for you? Can you check with 3821c, maybe what you see is a regression from your changes?
#53 Updated by Greg Shah about 3 years ago
All of the following cases are converted to TableParameter:
INPUT TABLE
INPUT-OUTPUT TABLE
OUTPUT TABLE
INPUT TABLE-HANDLE
INPUT-OUTPUT TABLE-HANDLE
OUTPUT TABLE-HANDLE
We already have an issue where 2 signatures that only differ by the table type will conflict within the mode (INPUT TABLE tt1 will conflict with INPUT TABLE tt2). By our approach, all of these cases conflict with each other and have to be disambiguated. In other words, now INPUT TABLE tt1 conflicts with OUTPUT TABLE whatever and conflicts with INPUT-OUTPUT TABLE-HANDLE and so forth.
It seems to me we can make the generated code more distinct and also cleaner by specializing the TableParameter into 6 subclasses:
- InputTableParameter for INPUT TABLE
- InputOutputTableParameter for INPUT-OUTPUT TABLE
- OutputTableParameter for OUTPUT TABLE
- InputTableHandle for INPUT TABLE-HANDLE
- InputOutputTableHandle for INPUT-OUTPUT TABLE-HANDLE
- OutputTableHandle for OUTPUT TABLE-HANDLE
The converted code will read better. Also, we can remove the 2 boolean parameters from the end of TemporaryBuffer.associate(TableParameter, Temporary, boolean, boolean) and TemporaryBuffer.createDynamicTable(TableParameter, handle, boolean, boolean).
Is there a reason not to do this? I would plan the same idea for DataSetParameter.
#54 Updated by Greg Shah about 3 years ago
SymbolResolver.WorkArea.javaMethodCollisions is a JVM-wide map that stores the javaname mapped to a given Java signature + legacy signature combination. The javadoc states that it is meant to resolve two cases: method overrides (where the Java name of a child method must be the same as the overridden parent method's Java name) and method overloads.
Both the override and overload cases are things that should be calculated within a given class or a given class inheritance hierarchy. But this map of maps (javaMethodCollisions) is global. This means that it will be mapping across unrelated class hierarchies. At a minimum, it will cause more conflicts to be detected than it should, depending on how the map is used later.
I plan to rework this code to work within the class hierarchy.
Constantin: Is there something this is meant to do that can't be done as I plan?
#55 Updated by Constantin Asofiei almost 3 years ago
The main reason this is JVM-wide is that I couldn't find a way to solve this case.
You have a Foo.m1 and a Foo.m-1 method, and this m-1 method exists in the base class Bar, and also in another sub-class of Bar, Foo2.
Foo2 and Bar were already processed and decided that m-1 must be named m1. Now, when processing Foo, m-1 can't be converted to m1, and if it is renamed, then it affects Bar and Foo2. You can decide to rename Foo.m1 instead, but this same issue can happen with other hierarchy branches/super-classes.
Above is a simple example, but this can be extrapolated with more branches and conflicts. In the end, I couldn't find a way to go back and (re)solve the collisions for classes which were already processed.
#56 Updated by Greg Shah almost 3 years ago
My idea was this:
- We already convert the entire parent hierarchy before any child classes.
- In the parent, we will already have determined non-conflicting method names for the methods in that class (and all above it).
- In the child, any methods that are overridden are required to have the OVERRIDE keyword. Even if that was not the case, we already can detect when we override a parent class/interface method. These methods must be processed first and must have whatever name is defined in their parent hierarchy.
- All non-override methods would be processed in a second pass and would be disambiguated from the override cases.
I think this handles all cases and it can do so within the bounds of the class hierarchy.
Do you see any flaws in my reasoning?
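A sketch of the two-pass idea, assuming the parent hierarchy's names are already resolved. The data structures and the underscore-suffix disambiguation below are simplified placeholders, not the actual FWD naming code:

```java
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class TwoPassNaming
{
   record LegacyMethod(String legacyName, boolean isOverride) {}
   
   /**
    * Pass 1: overrides take the name already fixed in the parent hierarchy.
    * Pass 2: remaining methods get a base name, disambiguated with _N suffixes
    * against everything taken so far (including all inherited names).
    */
   static Map<LegacyMethod, String> assignNames(List<LegacyMethod> methods,
                                                Map<String, String> parentNames)
   {
      Map<LegacyMethod, String> out = new LinkedHashMap<>();
      Set<String> taken = new HashSet<>(parentNames.values());
      
      for (LegacyMethod m : methods)                       // pass 1: overrides
      {
         if (m.isOverride())
         {
            out.put(m, parentNames.get(m.legacyName()));
         }
      }
      
      for (LegacyMethod m : methods)                       // pass 2: the rest
      {
         if (!m.isOverride())
         {
            String base = m.legacyName().replace("-", "");
            String name = base;
            for (int i = 1; taken.contains(name); i++)
            {
               name = base + "_" + i;
            }
            taken.add(name);
            out.put(m, name);
         }
      }
      
      return out;
   }
   
   public static void main(String[] args)
   {
      // the Foo.m1 / Foo.m-1 conflict from the earlier comment: the parent already
      // fixed m-1 as "m1", so Foo's own m1 is the one that gets renamed
      Map<String, String> parent = Map.of("m-1", "m1");
      List<LegacyMethod> methods = List.of(new LegacyMethod("m-1", true),
                                           new LegacyMethod("m1", false));
      System.out.println(assignNames(methods, parent));
   }
}
```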
#57 Updated by Constantin Asofiei almost 3 years ago
Hmm... I think a two-pass approach might work.
But non-override methods will need to not collide with any Java method name from the hierarchy (this includes property methods, event methods, etc. - all the Java methods we generate additionally in some cases).
But what about interfaces? Maybe process them via a BFS walk of the interface graph?
#58 Updated by Greg Shah almost 3 years ago
But non-override methods will need to not collide with any Java method name from the hierarchy (this includes property methods, event methods, etc, all these Java methods we generate additionally in some cases).
Yes, this is a good point. We will need to check up the tree, for all parents.
But what about interfaces? Maybe process them via a BFS walk of the interface graph?
Don't we already process parent interfaces before child interfaces and implemented interfaces before the class that implements them?
#59 Updated by Constantin Asofiei almost 3 years ago
Greg Shah wrote:
Don't we already process parent interfaces before child interfaces and implemented interfaces before the class that implements them?
No, see TransformDriver.sortLegacyClasses - this sorts only the classes, and convertSourceNamesToAstNames actually places the interfaces after the classes.
#60 Updated by Greg Shah almost 3 years ago
As part of the shift to a subclass approach for TableParameter and DataSetParameter, I'm also making the following changes:
- The instances of TableParameter and DataSetParameter will be able to report if they are INPUT, INPUT-OUTPUT or OUTPUT.
- For converted code, we will emit the correct subclass which will intrinsically have this state encoded.
- In the locations in FWD where one of these classes is created directly:
  - In places where the IO flags are fixed, we will use the correct subclass.
  - In places where the IO flags vary, these flags will be passed in to the constructor.
  - There are only 5 places that have to be modified for TableParameter and in 4 of those locations we already have the IO flags.
  - For DataSetParameter, all 4 locations have the IO flags known.
- I am going to add APPEND as a member of ParameterMode. In the 4GL, I believe APPEND is mutually exclusive with BIND, BY-REFERENCE and BY-VALUE, so it is more properly handled as that same mode enum instead of with an extra boolean in the constructor.
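Under that plan, the mode enum might look something like the sketch below. This is a guess at the shape only; the actual ParameterMode members in FWD may differ, and the comments paraphrase the 4GL semantics:

```java
// sketch of the proposed ParameterMode with APPEND folded in as a mutually
// exclusive mode rather than an extra boolean constructor argument
public enum ParameterMode
{
   BY_VALUE,       // default: copy semantics
   BY_REFERENCE,   // the caller's table/dataset instance is used directly
   BIND,           // the definition is bound to the caller's object
   APPEND          // rows are appended rather than replacing the target's contents
}
```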
Marian: In regard to AbstractTtcollection, there is this code: p1.ref().toTable(new TableParameter(tableHandle, true));. It is not clear what the IO flags should be for this case.
#61 Updated by Greg Shah almost 3 years ago
Constantin: I think the "table-handle for" case has been regressed. Or perhaps I misunderstand the idea.
I found table-handle_parameters/tablehandle_parameter_with_for.p in testcases/uast/:
procedure input_parameter_without_for:
   // Use the below line to demonstrate typical behavior (without the "for")
   define input parameter table-handle gh.
   message gh:name.
end procedure.

procedure input_parameter_with_for:
   // Use these 2 lines to demonstrate the behavior of the "for" being handled.
   // Note that order matters.  If you reverse the "define variable" and "define input parameter"
   // lines, it will result in the below error:
   //    ** Unknown Field or Variable name - gh. (201)
   //    Only unsubscripted INPUT HANDLE program variables allowed in TABLE-HANDLE FOR phrase. (9080)
   define variable gh as handle no-undo.
   define input parameter table-handle for gh.
   message gh:name.
end procedure.

define variable tt as handle no-undo.
create temp-table tt.

define variable tth as handle no-undo.

tt:add-new-field("num","integer").
tt:temp-table-prepare("ordx").

tth = tt:handle.

run input_parameter_with_for (table-handle tth).
run input_parameter_without_for (table-handle tth).
And here is the key portion of output from 3821c rev 12377 (my changes are not present):
...
         ControlFlowOps.invokeWithMode("input_parameter_with_for", "I", new TableParameter(tth));
         ControlFlowOps.invokeWithMode("input_parameter_without_for", "I", new TableParameter(tth));
      }));
   }
   
   @LegacySignature(type = Type.PROCEDURE, name = "input_parameter_without_for", parameters =
   {
      @LegacyParameter(name = "gh", type = "HANDLE", mode = "INPUT")
   })
   /**
    * Use the below line to demonstrate typical behavior (without the "for")
    */
   public void inputParameterWithoutFor(final TableParameter _gh)
   {
      handle gh = new handle();
      
      internalProcedure(new Block((Body) () ->
      {
         TemporaryBuffer.createDynamicTable(_gh, gh, true, false);
         message(gh.unwrap().name());
      }));
   }
   
   @LegacySignature(type = Type.PROCEDURE, name = "input_parameter_with_for", parameters =
   {
      @LegacyParameter(name = "gh", type = "HANDLE", mode = "INPUT")
   })
   /**
    * Use these 2 lines to demonstrate the behavior of the "for" being handled.
    */
   public void inputParameterWithFor(final TableParameter _gh)
   {
      handle gh = TypeFactory.handle();
      
      internalProcedure(new Block((Body) () ->
      {
         TemporaryBuffer.createDynamicTable(_gh, gh, true, false);
         message(gh.unwrap().name());
      }));
   }
}
There is no difference that I can see.
#62 Updated by Greg Shah almost 3 years ago
Wait, I see one difference: the initialization of the gh handle. Is that the only difference that is supposed to be there?
#63 Updated by Constantin Asofiei almost 3 years ago
I don't recall the table-handle for version, but from what I can see, this case just specifies the real handle var which will reference the parameter.
In 4GL, it is possible even to have the def var gh as handle. in the external program:
define variable gh as handle no-undo.

procedure input_parameter_with_for:
   define input parameter table-handle for gh.
   message gh:name.
end procedure.
For the simple table-handle, I think 4GL creates a default handle variable to hold that reference.
I don't see any issues in the converted code.
#64 Updated by Greg Shah almost 3 years ago
I've noticed that convert_member_name in annotations_prep.xml is only ever called with mtype names.method. It seems that the mtype names.variable is dead code. Is there any reason to keep that code?
#65 Updated by Constantin Asofiei almost 3 years ago
Greg Shah wrote:
I've noticed that convert_member_name in annotations_prep.xml is only ever called with mtype names.method. It seems that the mtype names.variable is dead code. Is there any reason to keep that code?
The names.variable code can be removed; but convert_member_name should be renamed to convert_method_name.
#66 Updated by Greg Shah over 2 years ago
The current implementation splits the conversion of method names between parse time, annotations prep (naming.rules) and annotations (oo_references.rules). In other words, we calculate the javaname (converted Java method name) for methods (and property getter/setter/extent/length-of) as late as the annotations phase. Each of these phases has a slightly different disambiguation approach. It is quite complicated and thus is quite confusing.
Constantin: Some questions.
- Am I right in remembering that we did it this way (deferring parts to later phases instead of handling it during parsing) because we were calculating things on a project-wide basis instead of within a given class hierarchy?
- Since I am getting rid of the project-wide approach, do you see any reason that I can't push the calculation of the converted javanames all the way back to parse time?
- What was the reasoning behind the ConvertedClassName approach? Normally, we would have wanted to store any conversion naming inside the ClassDefinition so that there is one simple approach to storing/retrieving answers about a given class. Is the problem that we didn't want to store the entire project's set of ClassDefinition instances in memory all the way through the annotations phase? Then again, I don't know that we actually unload the class definition instances. Don't they stay around (for the entire conversion run) in the SymbolResolver.WorkArea instance which is mapped into thread-local storage?
#67 Updated by Greg Shah over 2 years ago
I think we always have a ClassDefinition instance whenever we also have a ConvertedClassName instance. Let me know if there is some case where this is not true. Considering this, I don't see why we would ever want to have a separate ConvertedClassName helper class when we can just store everything (and make a simpler implementation) in the ClassDefinition.
#68 Updated by Constantin Asofiei over 2 years ago
Greg Shah wrote:
The current implementation splits the conversion of method names between parse time, annotations prep (naming.rules) and annotations (oo_references.rules). In other words, we calculate the javaname (converted Java method name) for methods (and property getter/setter/extent/length-of) as late as the annotations phase. Each of these phases has a slightly different disambiguation approach. It is quite complicated and thus is quite confusing.
Constantin: Some questions.
- Am I right in remembering that we did it this way (deferring parts to later phases instead of handling it during parsing) because we were calculating things on a project-wide basis instead of within a given class hierarchy?
Yes - the entire class/interface tree had to be available before we can calculate the names.
- Since I am getting rid of the project-wide approach, do you see any reason that I can't push the calculation of the converted javanames all the way back to parse time?
As long as the javanames are calculated after the parsing has finished for the entire inheritance tree, then that's OK. Keep in mind that a javaname (for a method, variable, property, etc) set in a super-class/super-interface affects all the classes/interfaces which inherit that.
- What was the reasoning behind the ConvertedClassName approach? Normally, we would have wanted to store any conversion naming inside the ClassDefinition so that there is one simple approach to storing/retrieving answers about a given class. Is the problem that we didn't want to store the entire project's set of ClassDefinition instances in memory all the way through the annotations phase? Then again, I don't know that we actually unload the class definition instances. Don't they stay around (for the entire conversion run) in the SymbolResolver.WorkArea instance which is mapped into thread-local storage?
I think ConvertedClassName was added as the first attempt at resolving the javanames for the OO members. IIRC it was kept at least because we add 'implicit methods' which don't exist at parse time, like for class events, property getters/setters, etc. - see SymbolResolver.trackMethodSignature.
#69 Updated by Constantin Asofiei over 2 years ago
Greg Shah wrote:
I think we always have a ClassDefinition instance whenever we also have a ConvertedClassName instance. Let me know if there is some case where this is not true. Considering this, I don't see why we would ever want to have a separate ConvertedClassName helper class when we can just store everything (and make a simpler implementation) in the ClassDefinition.
I don't recall exactly how skeleton classes and references for Java-style code from 4GL are solved. Otherwise, this should be a one-to-one mapping.
#70 Updated by Greg Shah over 2 years ago
I'm confused by this code in SymbolResolver.loadConvertedClass():
ClassDefinition clsDef = getClassDefinition(qname);
if (clsDef != null)
{
   Object[] clsDetails = ConversionData.getConvertedClassName(filename);
   cclass = new ConvertedClassName(clsDef, (String) clsDetails[0], (String) clsDetails[1]);
   wa.cvtCache.put(qname, cclass);
   return cclass;
}
Why do we need to call ConversionData.getConvertedClassName() to get the package and base class name? We already have it in the clsDef (the name member, which we can easily parse into package and base class name).
#71 Updated by Constantin Asofiei over 2 years ago
Greg Shah wrote:
I'm confused by this code in SymbolResolver.loadConvertedClass():
[...]
Why do we need to call ConversionData.getConvertedClassName() to get the package and base class name? We already have it in the clsDef (the name member, which we can easily parse into package and base class name).
That part gets the converted Java name and package, not the legacy name. I think that should already be in ClassDefinition.javaClassName, for incremental conversion.
#72 Updated by Greg Shah over 2 years ago
Consider:
<function name="java_override_oo_names">
   <return name="ovrdNames" type="java.util.Map" />

   <!-- override names of certain classes - use lowercase keys -->
   <rule>ovrdNames = create("java.util.HashMap")</rule>
   <rule>ovrdNames.put("core.string", "LegacyString")</rule>
   <rule>ovrdNames.put("lang.object", "_BaseObject_")</rule>
   <rule>ovrdNames.put("lang.class", "LegacyClass")</rule>
   <rule>ovrdNames.put("lang.error", "LegacyError")</rule>
   <rule>ovrdNames.put("getclass", "getLegacyClass")</rule>
   <rule>ovrdNames.put("tostring", "toLegacyString")</rule>
   <rule>ovrdNames.put("equals", "legacyEquals")</rule>
   <rule>ovrdNames.put("String", "LegacyString")</rule>
   <!-- do not map "Object" to "BaseObject", as this has different flavor depending on where is used (i.e. _BaseObject_ vs BaseObject) -->
   <rule>ovrdNames.put("Class", "LegacyClass")</rule>
   <rule>ovrdNames.put("reflect.Constructor", "LegacyConstructor")</rule>
   <rule>ovrdNames.put("reflect.Method", "Legacy4GLMethod")</rule>
   <rule>ovrdNames.put("Error", "LegacyError")</rule>
   ...
Is there any reason why we need to keep String instead of string, Class instead of class and Error instead of error?
#73 Updated by Constantin Asofiei over 2 years ago
The java_override_oo_names map is used from:
- common-progress.rules:resolve_class_name - uses pkg.classname, lowercased
- annotations_prep.rules:convert_member_name - uses the member name, lowercased
- naming.rules:checkClassName - uses the exact class name. I think this is the reason why the non-lowercased versions were added, but I don't recall if I saw this in a customer project or when I was trying to convert the skeletons. This uses the converted Java name for a 4GL class/program (added explicitly to the conversion) and checks if it collides in the override map.
So, I'd keep this unchanged, but group these and add a comment for them.
#74 Updated by Greg Shah about 2 years ago
I've spent the last few days trying to stabilize and finish my previous work. I'm deep into the changes to eliminate the ConvertedClassName and push all name conversion back into the ClassDefinition.
We are using ClassDefinition.initialize(ConvertedClassName) to calculate Java method names for builtin methods and for builtin property method names. Shouldn't these be looked up via reflection at the time the ClassDefinition is loaded? I don't understand why we use the name converter for these cases. If it works, it is just luck, since these names could have been manually mapped to anything.
#75 Updated by Constantin Asofiei about 2 years ago
Greg Shah wrote:
I've spent the last few days trying to stabilize and finish my previous work. I'm deep into the changes to eliminate the ConvertedClassName and push all name conversion back into the ClassDefinition.
We are using ClassDefinition.initialize(ConvertedClassName) to calculate Java method names for builtin methods and for builtin property method names. Shouldn't these be looked up via reflection at the time the ClassDefinition is loaded? I don't understand why we use the name converter for these cases. If it works, it is just luck, since these names could have been manually mapped to anything.
The main reason the converter is used is the case when we don't have a counterpart skeleton in FWD, just the .cls for it (so reflection can't be used). Current conversion is lenient about this and converts the skeleton info 'on the fly'.
Otherwise, if there is no fallback, will it abend? IMO these kinds of direct abends are not good, as for large projects they are very time consuming.
#76 Updated by Greg Shah about 2 years ago
The main reason the converter is used is the case when we don't have a counterpart skeleton in FWD, just the .cls for it (so reflection can't be used). Current conversion is lenient about this and converts the skeleton info 'on the fly'.
When you say "counterpart skeleton in FWD", I guess you mean that we don't have a Java class for a built-in OO class?
In order to parse any reference to built-in OO 4GL, we must have a skeleton .cls. So we must have a skeleton, but we don't necessarily have the Java class that backs it. Are you saying that this code is meant to allow conversion of something that won't be able to compile, since we wouldn't have a matching class in com.goldencode.p2j.oo?
#77 Updated by Constantin Asofiei about 2 years ago
Greg Shah wrote:
The main reason the converter is used is the case when we don't have a counterpart skeleton in FWD, just the .cls for it (so reflection can't be used). Current conversion is lenient about this and converts the skeleton info 'on the fly'.
When you say "counterpart skeleton in FWD", I guess you mean that we don't have a Java class for a built-in OO class?
Exactly.
Are you saying that this code is meant to allow conversion of something that won't be able to compile since we wouldn't have a matching class in com.goldencode.p2j.oo?
Yes. It's from before we started adding the p2j.oo skeleton implementations, when we needed to be able to convert without abending.
If that is removed, the prerequisite (for conversion to complete) will be to have stubbed skeletons in p2j.oo for any project which uses something not yet in FWD. But that wouldn't be such a big task, as we have reports which would tell us what's missing.
#78 Updated by Greg Shah almost 2 years ago
If that is removed, the prerequisite (for conversion to complete) will be to have stubbed skeletons in p2j.oo for any project which uses something not yet in FWD. But that wouldn't be such a big task, as we have reports which would tell us what's missing.
I don't think we can parse without the skeletons there, so we already seem to have this dependency.
#79 Updated by Constantin Asofiei almost 2 years ago
Greg Shah wrote:
If that is removed, the prerequisite (for conversion to complete) will be to have stubbed skeletons in p2j.oo for any project which uses something not yet in FWD. But that wouldn't be such a big task, as we have reports which would tell us what's missing.
I don't think we can parse without the skeletons there, so we already seem to have this dependency.
We can still parse skeletons without a Java counterpart.
I don't object to removing this 'legacy' support, where there are no Java skeletons, only the .cls ones. As parsing works without the Java skeletons, it is just a matter of determining from reports which skeletons need to be stubbed out before attempting conversion.
#80 Updated by Greg Shah almost 2 years ago
In annotations/naming.rules we use ConvertedClassName.addMethod() to add entries for the c'tors. It seems like this is not needed.
#82 Updated by Greg Shah almost 2 years ago
- Related to Feature #6379: buffer parameters can work as structural matches (without field name matching) added
#83 Updated by Greg Shah almost 2 years ago
We have a compile-time issue with fuzzy matching and the OutputExtentParameter.
In oo/params/overloads-test2.p we have a call like this:
def var vxdecimal as DECIMAL extent 3.
...
method-num = obj:test-method-xint64-OUTPUT(OUTPUT vxdecimal).
obj is of type Overloads2, which has this:

method public integer test-method-xint64-OUTPUT(OUTPUT v as INT64 extent 3):
   return 77.
end.
In my current code, this converts without error but it cannot compile. The OverloadsTest2.java has this:
decimal[] vxdecimal = UndoableFactory.decimalExtent(3);
...
methodNum.assign(obj.ref().testMethodXint64Output(new OutputExtentParameter<decimal>()
{
   public decimal[] getVariable()
   {
      return vxdecimal;
   }

   public void setVariable(final decimal[] newRef)
   {
      vxdecimal = newRef;
   }
}));
And the Overloads2.java has this:
public integer testMethodXint64Output(final OutputExtentParameter<int64> extv)
{
   int64[] v = TypeFactory.initOutput(extv, 3);
   ...
This predictably leads to this compile problem:
testcases/src/com/goldencode/testcases/oo/params/OverloadsTest2.java:663: error: incompatible types: <anonymous OutputExtentParameter<decimal>> cannot be converted to OutputExtentParameter<int64>
    [javac] methodNum.assign(obj.ref().testMethodXint64Output(new OutputExtentParameter<decimal>()
I think we could emit OutputExtentParameter differently in this "morphing" case:
methodNum.assign(obj.ref().testMethodXint64Output(new OutputExtentParameter<int64>()
{
   public int64[] getVariable()
   {
      return toInt64(vxdecimal);
   }

   public void setVariable(final int64[] newRef)
   {
      vxdecimal = toDecimal(newRef);
   }
}));
I would add static conversion functions to implement the morphing.
My concern is the setter. Can we safely copy the data to a new instance or do we actually need to assign the exact reference to the array? For example, can one use this mechanism to share array instances across classes?
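As a rough illustration of this morphing idea (and of the copy-vs-share question), here is a minimal, self-contained sketch. OutputExtentParameter is reduced to a stand-in interface, double[]/long[] stand in for the FWD decimal/int64 extents, and toInt64/toDecimal are the hypothetical static converters; none of these are the actual FWD classes.

```java
import java.util.Arrays;

public class MorphDemo
{
   // stand-in for the FWD OUTPUT extent callback (A is the array type)
   interface OutputExtentParameter<A>
   {
      A getVariable();
      void setVariable(A newRef);
   }

   // caller-side variable declared as "decimal extent 3" (double[] stands in for decimal[])
   static double[] vxdecimal = { 1.0, 2.0, 3.0 };

   // hypothetical static morphing helpers
   static long[] toInt64(double[] src)
   {
      long[] dst = new long[src.length];
      for (int i = 0; i < src.length; i++) dst[i] = (long) src[i];
      return dst;
   }

   static double[] toDecimal(long[] src)
   {
      double[] dst = new double[src.length];
      for (int i = 0; i < src.length; i++) dst[i] = src[i];
      return dst;
   }

   // callee defined with an int64 extent OUTPUT parameter; it replaces the array
   static void testMethodXint64Output(OutputExtentParameter<long[]> extv)
   {
      long[] v = extv.getVariable();
      v = new long[] { 10, 20, 30 };   // the method assigns a new extent value
      extv.setVariable(v);             // pushed back to the caller on output
   }

   public static void main(String[] args)
   {
      // the proposed "morphing" emission: wrap the decimal variable as int64
      testMethodXint64Output(new OutputExtentParameter<long[]>()
      {
         public long[] getVariable()        { return toInt64(vxdecimal); }
         public void setVariable(long[] n)  { vxdecimal = toDecimal(n); }
      });

      System.out.println(Arrays.toString(vxdecimal));   // [10.0, 20.0, 30.0]
   }
}
```

Note that this setter copies data into a fresh array rather than sharing the callee's array reference with the caller, which is exactly the open question above: if the 4GL semantics require reference sharing across classes, a copy-based morphing adapter would not be enough.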
I've resolved the same problem for the OutputExtentField case by emitting new OutputExtentField<definition_type>(buffer, "property_name"), where I use definition_type instead of caller_type. I think that the invocation handling for the setters will handle this at runtime. Am I wrong about that?
This issue does not present for INPUT-OUTPUT cases since there is no fuzzy matching there. I think a similar issue does affect INPUT cases, but it will need a different solution.
#84 Updated by Greg Shah almost 2 years ago
This is a status update.
Every time I move the code to follow recent updates the code breaks badly and takes days to rework. I've had to do this 4 times recently:
- from 3821c rev 12442 to 13813
- from 3821c rev 13813 to 13839
- from 3821c rev 13839 to 6129a 13835
- from 6129a rev 13835 to 6129a 13865
Each one of these is essentially a rebase. And unfortunately, the recent changes have been especially painful to integrate. Even though most recently you did not change the ConvertedClassName stuff, there were other changes that are hard to integrate. Anything related to conversion of properties, enums, events or methods is deadly right now. Likewise, most changes to ClassDefinition and SymbolResolver are pretty painful too.
I'm trying to clear the following:
- get override and abstract properties working
- fix enum processing (your recent change to add javaEnumNames is broken because it is not compatible with our new approach)
- ensure that the conversion is at least as good as it was last week
- fix at least one more of the 3 remaining compile issues
At that point it may make sense for you to look at it and try it out. The change is very large and it is intensely integrated with all the resource types which have method names. If it is safe for use in the current two high priority projects, then we should get this into 6129a and then incrementally address the remaining items. I'll post the list of remaining items separately.
#85 Updated by Greg Shah almost 2 years ago
- File cvt_20220523_16414110.log added
- File file-cvt-list.txt.oo_params added
- File 4350_fuzzy_matching_changes_20220523a.patch added
- File ges_4350_changes_20220523a.zip added
- File build_20220523_164221.log added
The attached update can be applied over 6129a revision 13867. This is the list of remaining work:
- compile issues (46 errors in oo/basic/ and oo/params/)
  - #6379 buffer issue with structural match instead of DMO match (2 errors in BufferExternalCaller.java, this should be deferred)
  - output mode object parameters can "narrow" by passing a parent class at the caller and assigning back an instance of the child class (16 "incompatible types" errors in ClassParmOverload.java and ClassParmOverloadTest.java)
  - abstract/override issues (3 errors in ExtTest.java and 1 error in TrivialInterfaceUser.java)
  - input extent BDT parameters need to replace normal wrapping with a static helper method that can widen (24 "no suitable constructor" errors)
- constructor overload
  - ClassDefinition.registerMethod() needs to be checked to confirm that constructors are being properly disambiguated and annotated
  - the conversion rules need to be checked to confirm they are doing the lookup of the right javaname
  - the runtime support for constructor selection is not handled at all yet
- runtime method overloading support
  - DYNAMIC-INVOKE() and reflection usage is not implemented
  - check if we force this properly during conversion (we mark it but I don't know if we emit properly) for the cases where this must be deferred to runtime ("implicit DYNAMIC-INVOKE()")
- enum conversion
  - check if oo/enums/ is broken
  - the recent change to add javaEnumNames to ClassDefinition may not be fully implemented in the new approach; ClassDefinition.initialize() no longer exists and it is not clear if some processing needs to be added to the ctor ClassDefinition(SymbolResolver sym, String qname) (there is a TODO comment there)
- event conversion
  - needs to be checked, but I think it is broken
  - event processing methods are added in SymbolResolver.trackPropertyOrEventMethodSignatures() by calling ClassDefinition.registerPropertyMethod(); this code calculates and disambiguates the javaname at that time (saving the result as annotations) - there are two unsubscribe methods and only one annotation, so this is broken
  - the TRPL rules that look up/calculate the javaname probably need to be remapped to look up the annotations
- runtime testing (the oo/enums/ and oo/params/ cases need to be tested to see if everything is OK)
- reduce the unnecessary logging for WARNING: missing 'jtype' annotation (I think this is harmless but it is just messy)
#86 Updated by Constantin Asofiei almost 2 years ago
Greg Shah wrote:
- the runtime support for constructor selection is not handled at all yet
The runtime for constructor overload is fixed in 6129a.
#87 Updated by Greg Shah almost 2 years ago
- File 4350_fuzzy_matching_changes_20220524a.patch added
- File ges_4350_changes_20220524a.zip added
Here is the version that can be applied over 6129a rev 13874.
If you use the zip, make sure to delete src/com/goldencode/p2j/uast/ConvertedClassName.java
. If you use the patch file, I guess that step won't be needed. There is no functional difference in the code, it is just merged up to the latest 6129a level.
#88 Updated by Greg Shah almost 2 years ago
- File ges_4350_incremental_changes_20220524a_to_20220525a.patch added
- File 4350_fuzzy_matching_changes_20220525a.patch added
- File ges_4350_changes_20220525a.zip added
Here are my current changes. It fixes the ConcurrentModificationException
and puts some TODOs in the parser. I've also included an incremental patch that can be applied over my previous changes.
In addition to the customer application code that should be tested, make sure to retest the testcases for oo/basic/ and oo/params/ (use the file cvt list from #4350-85). Those convert fully, but there are known compile issues as noted in #4350-85. We probably should also test oo/enums/ and some basic tests for class events.
I think the testcases/uast/dsh_th_args tests should be moved into the new testcases project, in oo/params/.
I am available for any questions or assistance.
#89 Updated by Greg Shah almost 2 years ago
The changes I just posted are based on 6129a rev 13878.
#90 Updated by Greg Shah almost 2 years ago
- % Done changed from 0 to 90
- Assignee changed from Greg Shah to Constantin Asofiei
#91 Updated by Constantin Asofiei almost 2 years ago
Greg, there are two overloaded methods, single-parameter, of type decimal and int64. The call is done with integer. Shouldn't fuzzyMethodLookup rely on caller to narrow down the list? The way it works now, it returns the 'decimal' overload, and not the 'int64'.
#92 Updated by Greg Shah almost 2 years ago
Greg, there are two overloaded methods, single-parameter, of type decimal and int64. The call is done with integer. Shouldn't fuzzyMethodLookup rely on caller to narrow down the list? The way it works now, it returns the 'decimal' overload, and not the 'int64'.
fuzzyMethodLookup uses the parameters from caller to do the phase 1 processing, which builds the list of possible matches. During that processing, it keeps track of some metrics like the number of exact matches, the number of widening/narrowing matches, etc. (see MatchMetrics). The idea is to try to find the best fit. This particular case may suggest that there is another criterion: the amount of widening or narrowing. In this case, it is a smaller change to widen from integer to int64 than it is to widen from integer to decimal. I suspect that is why the 4GL picks the int64 version.
#93 Updated by Constantin Asofiei almost 2 years ago
Another fuzzy lookup part which is broken is related to:

// allow a match to a dataset-handle if there is only 1 dataset-handle option

DATASET-HANDLE and DATASET (plus TABLE-HANDLE and TABLE) are interchangeable, depending on mode. But this match is limited to the case where there is only one candidate with DATASET-HANDLE. This is not correct: the overload may differ on a different parameter, not on the DATASET-HANDLE one. So you can have overloads like dataset-handle, char and dataset-handle, int, while the caller passes dataset, int - this must match on dataset-handle, int.
What was the intent of limiting it to only one?
#94 Updated by Greg Shah almost 2 years ago
Constantin Asofiei wrote:
Another fuzzy lookup part which is broken is related to:
[...]
DATASET-HANDLE and DATASET (plus TABLE-HANDLE and TABLE) are interchangeable, depending on mode. But this match is limited to the case where there is only one candidate with DATASET-HANDLE. This is not correct: the overload may differ on a different parameter, not on the DATASET-HANDLE one. So you can have overloads like dataset-handle, char and dataset-handle, int, while the caller passes dataset, int - this must match on dataset-handle, int.
What was the intent of limiting it to only one?
In your example, a caller passing dataset, int should result in dataset-handle, char being filtered out in phase 1, because the 2nd parameter int can't match a char. So the list being processed in phase 1 should only have dataset-handle, int in it, and this will work with the rule as noted.
If this is broken, the first thing to check is whether the code is properly filtering in phase 1.
#95 Updated by Greg Shah almost 2 years ago
The problem may be that we need to allow multiple possible matches in allowIfOnlyOne() instead of a hard fail if num > 1, so that subsequent parameters can be used for filtering.
#96 Updated by Greg Shah almost 2 years ago
Or said another way, I think the allowIfOnlyOne() processing probably needs to happen in phase 2, not in phase 1.
#97 Updated by Constantin Asofiei almost 2 years ago
Greg Shah wrote:
Or said another way, I think the allowIfOnlyOne() processing probably needs to happen in phase 2, not in phase 1.
Yes, that's my thinking too. Testing now.
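A minimal sketch of the phase split being agreed on here, using made-up types (the real fuzzyMethodLookup/allowIfOnlyOne code is much richer): phase 1 filters candidates parameter by parameter, treating DATASET as compatible with DATASET-HANDLE, and only phase 2 enforces that a single candidate remains.

```java
import java.util.ArrayList;
import java.util.List;

public class TwoPhaseLookup
{
   // a candidate overload, reduced to its parameter type names
   static class Candidate
   {
      final List<String> paramTypes;

      Candidate(List<String> paramTypes)
      {
         this.paramTypes = paramTypes;
      }
   }

   // phase 1: keep candidates where every parameter is compatible with the caller
   static List<Candidate> phase1(List<String> callerTypes, List<Candidate> all)
   {
      List<Candidate> out = new ArrayList<>();
      for (Candidate c : all)
      {
         if (c.paramTypes.size() != callerTypes.size()) continue;
         boolean ok = true;
         for (int i = 0; i < callerTypes.size(); i++)
         {
            String caller = callerTypes.get(i);
            String param  = c.paramTypes.get(i);
            // dataset may match dataset-handle; everything else must match exactly
            boolean handleMatch = caller.equals("dataset") && param.equals("dataset-handle");
            if (!param.equals(caller) && !handleMatch) { ok = false; break; }
         }
         if (ok) out.add(c);
      }
      return out;
   }

   // phase 2: the "allow if only one" check now runs on the already-filtered list
   static Candidate phase2(List<Candidate> filtered)
   {
      return (filtered.size() == 1) ? filtered.get(0) : null;   // ambiguous otherwise
   }

   public static void main(String[] args)
   {
      List<Candidate> overloads = List.of(
         new Candidate(List.of("dataset-handle", "char")),
         new Candidate(List.of("dataset-handle", "int")));

      // caller passes (dataset, int): the char overload drops out in phase 1,
      // so phase 2 sees exactly one dataset-handle candidate
      List<Candidate> filtered = phase1(List.of("dataset", "int"), overloads);
      System.out.println(filtered.size());                 // 1
      System.out.println(phase2(filtered).paramTypes);     // [dataset-handle, int]
   }
}
```

Running the only-one check before the per-parameter filtering (the original phase 1 placement) would reject this call even though it is unambiguous, which is the bug described above.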
#98 Updated by Constantin Asofiei almost 2 years ago
Greg, I want to create a 4350a branch from 6129a (I'll take care of rebasing and making sure we don't go over the rails). Otherwise, it may become pretty difficult to review any changes I make.
#99 Updated by Constantin Asofiei almost 2 years ago
dataset, dataset-handle and table, table-handle are interchangeable regardless of input/output/input-output mode.
#100 Updated by Greg Shah almost 2 years ago
Constantin Asofiei wrote:
Greg, I want to create a 4350a branch from 6129a (I'll take care of rebasing and making sure we don't go over the rails). Otherwise, it may become pretty difficult to review any changes I make.
Yes, this makes sense.
#101 Updated by Constantin Asofiei almost 2 years ago
Greg Shah wrote:
Constantin Asofiei wrote:
Greg, I want to create a 4350a branch from 6129a (I'll take care of rebasing and making sure we don't go over the rails). Otherwise, it may become pretty difficult to review any changes I make.
Yes, this makes sense.
Created task branch 4350a from 6129a rev 13891
ges_4350_changes_20220525a.zip (after it was upgraded to rev 13891) is committed to rev 13892
#102 Updated by Constantin Asofiei almost 2 years ago
The 'provisional dataset' reported in #6129-40 is still a problem.... on one hand, DATASETs can never be 'provisional' (as this flag was added to distinguish between local resources - like vars - and class members), as a DATASET can never be defined in a method.
#103 Updated by Constantin Asofiei almost 2 years ago
Greg, please point me to some tests where this is shown, from isNarrowingTypeMatch:

// check if the target parameter is a child class of the caller (this explicitly ignores
// implemented interfaces which is how the 4GL does it)
if (tgt.isInheritedFrom(src, false, incrementer))

I think the condition is backwards; I have a case for OUTPUT where the argument is progress.lang.error and the parameter definition is some custom class extending progress.lang.AppError.
#104 Updated by Greg Shah almost 2 years ago
My findings are documented in #3751-492; search on "Child Class Hierarchy Processing" for this part. This kind of "narrowing" match is specifically for OUTPUT parameters that are objects. For tests, see oo/params/class-parm-tests.i and search for calc23b, calc5, calc26, calc27, calc8, calc28 and calc30.
#105 Updated by Constantin Asofiei almost 2 years ago
Greg, another question: do you have tests with overloaded methods from skeleton classes (including toString and equals from progress.lang.Object)?
The Java enum fixes from 6129a were required to force using the Java names from the skeletons. The same applies to the skeleton methods, whether called directly or overloaded: they must use the Java name as it exists in FWD.
#106 Updated by Greg Shah almost 2 years ago
do you have tests with overloaded methods from skeleton classes (include toString and equals from progress.lang.Object)?
I don't think there are any cases of this in my tests.
#107 Updated by Constantin Asofiei almost 2 years ago
Greg, are DatasetParameter and TableParameter switched to their mode'ed versions only for OO, or is the plan to switch for everything (including functions and procedures in external programs)?
#108 Updated by Greg Shah almost 2 years ago
Greg, are DatasetParameter and TableParameter switched to their mode'ed versions only for OO, or is the plan to switch for everything (including functions and procedures in external programs)?
Although I think we should (eventually) switch everything to the new mode'ed versions, right now only OO method calls get annotated with the "jtype" which causes the emit of the specialized version. This happens in ClassDefinition.addMethod() for the method defs and ClassDefinition.annotateCallSignature() for method calls. The downstream conversion processing uses the read_jtype function and passes the "fallback" type like TableParameter.
I don't see a reason to push this further for the first pass, but it would only take an additional annotation to "activate" it.
#109 Updated by Constantin Asofiei almost 2 years ago
I've rebased 4350a from 6129a, new rev is 13901.
I'll commit the changes soon, and after that merge into 6129a. Greg, is this OK or do you want to review them in 4350a?
#110 Updated by Constantin Asofiei almost 2 years ago
And forgot to mention: I didn't get a chance to check your testcases at all, I focused only on the current POC app.
#111 Updated by Greg Shah almost 2 years ago
Greg, is this OK or do you want to review them in 4350a?
It is OK. Go ahead.
#112 Updated by Constantin Asofiei almost 2 years ago
4350a was rebased from 6129a, new rev 13903.
4350a rev 13903 was merged to 6129a rev 13902 and archived.
#113 Updated by Greg Shah almost 2 years ago
Code Review Task Branch 4350a Revision 13903
1. In annotations_prep.xml, the convert_method_name function is now just a call to ClassDefinition.getConvertedMethodName(). We might as well drop the convert_method_name function, as it no longer has any purpose.
2. It is not clear to me that the changes in override processing for ClassDefinition.registerMethod() are correct. You've deliberately changed some things that were needed to resolve issues related to the oo/params/ testcases. In particular, the override keyword can only be used with things inherited from a class; in OE you get a compile error if you try to use it to "override" something from an interface. I tried to distinguish these cases, which I think are now no longer distinguished. In particular, case 2 was supposed to avoid interface lookups (lookupOverrideName(legacySig, true, false)) while case 4 was supposed to avoid class lookups (lookupOverrideName(legacySig, false, true)). We will see more when we try to clear the oo/params/ testcases again.
3. I have doubts about the change to isInheritedFrom() in isNarrowingTypeMatch(). isInheritedFrom() now always searches interfaces. I think this is going to break cases in oo/params/.
In regard to items 2 and 3, please just put TODO: comments there to note that there is some additional review needed in those spots.
#114 Updated by Constantin Asofiei almost 2 years ago
Greg, OE's problem with the OVERRIDE keyword is that it is not consistent: for conversion, anything which is an override (explicit or implicit) must have the same name as the super-class or super-interface method.
For example, you can have a method defined in an interface, then defined abstract in some class implementing this interface, and after that defined with OVERRIDE in a sub-class of the abstract class (for this case, the OVERRIDE option is mandatory, as it overrides a class method) - everything must convert to the same name, for the Java override to work.
To further enforce this on the converted code (and check the consistency of the overridden method names and signatures), I think we need to emit the Override annotation regardless of the OVERRIDE option, if the parser determined that this is a method override.
Regarding both points 2 and 3, I have some standalone testcases which explain my changes; I need to clean them up and place them in oo/params/. But I agree that what I changed is a little messy.
#115 Updated by Constantin Asofiei almost 2 years ago
6129a/13907 fixes the runtime for dataset-handle/dataset/table/table-handle in OO method/ctor calls.
#116 Updated by Constantin Asofiei almost 2 years ago
- Related to Bug #6404: conversion for dataset-handle and table-handle parameters with table fields or class properties added
#117 Updated by Constantin Asofiei almost 2 years ago
Another issue which needs to be fixed is incremental conversion, it fails with a null javaname annotation when processing an OVERRIDE method.
#118 Updated by Constantin Asofiei almost 2 years ago
DataSetParameter and TableParameter now assume that only one option can be set on the parameter - this is incorrect; APPEND can be combined with BY-REFERENCE, BIND or BY-VALUE.
#119 Updated by Greg Shah almost 2 years ago
Can we use EnumSet
to handle this requirement? I'm hoping the result will be cleaner than splitting out the flag as a boolean.
#120 Updated by Constantin Asofiei almost 2 years ago
Greg Shah wrote:
Can we use EnumSet to handle this requirement? I'm hoping the result will be cleaner than splitting out the flag as a boolean.
Yes, that will work.
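A sketch of what the EnumSet-based options could look like. The Option names mirror the 4GL keywords, but the trimmed-down TableParameter here is illustrative, not the actual FWD class:

```java
import java.util.EnumSet;

public class ParamOptions
{
   // the combinable 4GL parameter options
   enum Option { APPEND, BY_REFERENCE, BIND, BY_VALUE }

   // a parameter holding a set of options instead of a single one
   static class TableParameter
   {
      final EnumSet<Option> options;

      TableParameter(EnumSet<Option> options)
      {
         this.options = options;
      }

      boolean has(Option o)
      {
         return options.contains(o);
      }
   }

   public static void main(String[] args)
   {
      // APPEND combined with BY-REFERENCE, as allowed by the 4GL
      TableParameter p = new TableParameter(EnumSet.of(Option.APPEND, Option.BY_REFERENCE));
      System.out.println(p.has(Option.APPEND) && p.has(Option.BY_REFERENCE));   // true
      System.out.println(p.has(Option.BIND));                                   // false
   }
}
```

EnumSet keeps the combination cheap (it is backed by a bit vector) and avoids adding a separate boolean per combinable option.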
#121 Updated by Constantin Asofiei almost 2 years ago
Fixed an issue with OUTPUT arguments, when the argument is an interface and the parameter is a class implementing the interface: ClassDefinition.isInheritedFrom must consider the directly implemented interfaces, too, not just the interfaces from the inherited parent classes.
This was committed to branch 6129a.3821c.13813 revision 13956.
It will soon be committed to the main/unstable 6129a (which is following 3821c via rebasing), too.
#122 Updated by Constantin Asofiei almost 2 years ago
The fix in 6129a.3821c.13813 revision 13956 was committed to main 6129a branch revision 14248.
Other fixes in 6129a related to #4350 are these:
revno: 14247
committer: Constantin Asofiei <ca@goldencode.com>
branch nick: 6129a
timestamp: Wed 2022-07-27 11:56:51 +0300
message:
  Fixed OO dataset/table parameters (at the method definition and method call) when there is an APPEND option. Refs #4350
------------------------------------------------------------
revno: 14245
committer: Constantin Asofiei <ca@goldencode.com>
branch nick: 6129a
timestamp: Wed 2022-07-27 11:47:15 +0300
message:
  OO variable references must get the refid javaname, before checking the javaname annotation. Refs #4350
------------------------------------------------------------
revno: 14242
committer: Constantin Asofiei <ca@goldencode.com>
branch nick: 6129a
timestamp: Wed 2022-07-27 11:32:04 +0300
message:
  Fixed implicit constructor name and AST, to be compatible with incremental conversion. Refs #4350
#123 Updated by Constantin Asofiei over 1 year ago
Runtime for constructor overload with object arguments is not yet complete. Having two constructors like:

constructor public CtorTest(input v4 as progress.lang.object):
   message "v4".
end.

constructor public CtorTest(input v5 as progress.lang.class):
   message "v5".
end.
this code will not resolve the constructors properly:
def var l as progress.lang.object.
def var l1 as progress.lang.object.
def var l2 as progress.lang.class.

l = new oo.ctortest(l1).
l = new oo.ctortest(l2).
This should be tested with DYNAMIC-NEW and DYNAMIC-INVOKE, too.