Other Customization

FWD offers a great deal of flexibility, enabling you to produce converted output which is tailored to your needs. Various tools and configuration options allow you to customize the conversion of names (programs, variables, database schema constructs), to alter the structure of a database schema, and to integrate custom Java source code as a natural part of the conversion process.

Naming

The names of programming and database constructs require at least some minimal processing to ensure they are appropriate for use after conversion. Program, method, and variable names must not conflict with Java reserved words. Likewise, database/schema names must not conflict with SQL (or HQL) keywords. Naming rules for Progress 4GL constructs differ from those of Java and SQL (the former are generally more lenient), which means certain characters (or the order in which they appear) allowed in Progress are not allowed in Java or SQL.

In addition to dealing with these show stopper issues, FWD attempts to promote consistency across a project by applying a uniform convention to converted names. This may be especially important for a 4GL code base which has been maintained over many years by diverse developers, with no established (or perhaps under-enforced) naming conventions.

Supported Naming Conventions

Early in the conversion process, as source code is parsed and then analyzed, certain symbols are identified as requiring name conversion services. These might be references to external procedures, user-defined functions, variables, database tables, etc. Depending upon the nature of the source symbol, conversion logic determines the functional category of the name in terms of the target environment: a Java class name, method name, variable name, or an SQL table or column name, etc.

Each category is converted using a particular convention, summarized by the following table:

Convention Applied | Source (4GL) Construct | Target Construct | Source Example | Target Example
Camel-cased, capitalized | External Procedure (i.e., program) | Java class (business logic) [1] | my-program.p, Some_Utility.w | MyProgram.java, SomeUtility.java
Camel-cased, capitalized | OO 4GL Class/Interface/Enum | Java class (business logic) [1] | My-Class.cls, a_program.cls | MyClass.java, AProgram.java
Camel-cased, capitalized | Database table | Java class (DMO) | acct-receivable, work_order, lineitem | AcctReceivable, WorkOrder, LineItem [2]
Camel-cased, de-capitalized | Internal procedure, user function | Java method | do-some-work, my_func | doSomeWork, myFunc
Camel-cased, de-capitalized | Variable | Java variable | my-var, sub_total, LastNum | myVar, subTotal, lastNum
Camel-cased, de-capitalized | Database field | Java variable (DMO “property”) | account#, invoiceDate, reason-code | accountNumber, invoiceDate, reasonCode
Upper-cased, underscore-separated | Literal values determined to be constants [3] | Java “constant” (final static variable) | Default-prefix | DEFAULT_PREFIX [3]
Lower-cased, underscore-separated | Program directory structure | Java package structure [1] | soMe/Path/to/, anOther/patH-in/THE/project/ | some/path/to/, another/path_in/the/project/
Lower-cased, underscore-separated | Database | SQL database | Top-Secret-Data, db1 | top_secret_data, db1
Lower-cased, underscore-separated | Database table | SQL table | acct-receivable, work_order, lineitem | acct_receivable, work_order, line_item [2]
Lower-cased, underscore-separated | Database field | SQL column | account#, invoiceDate, reason-code | account_number, invoice_date, reason_code
Lower-cased, underscore-separated | Database index | SQL index | pi-sort-by-code, si_work_date | pi_sort_by_code, si_work_date [4]

Table Notes:

  1. The directory path segments in which a 4GL external procedure or class/interface/enum resides will be converted into package names, in which the resulting Java code will be placed. Starting at the basepath directory in the project (see Configuration Reference), each successive subdirectory will be converted into the equivalent package directory structure which resides inside the pkgroot (see Configuration Reference) for the application. For example, an external procedure ./4gl/src/myapp/some/path/to/the/program.p with a basepath of ./4gl/src/myapp/ and a pkgroot of com.acme.myapp will have a fully qualified Java class name of com.acme.myapp.some.path.to.the.Program and will be stored in com/acme/myapp/some/path/to/the/Program.java.
  2. These examples presume some non-default configuration has been defined to allow the monolithic name lineitem to be broken into its component syllables, line and item, during name conversion. Please see the discussion of the mechanics of name conversion below for details.
  3. Currently, conversion logic does not identify “constants” (i.e., variables which are defined once and never changed) in 4GL source code. So, while this type of name conversion is implemented internally, it is not used by conversion programs at the time of this writing. In the future, conversion logic may use context to identify what it considers to be constants in 4GL source code.
  4. The examples of converted index names shown here are representative of an intermediate form generated during the name conversion subprocess. However, before an index actually is created in the target database (during data import), a prefix is prepended to this intermediate form to guarantee uniqueness in the target schema. This prefix is of the form idx__{table name}_. For instance, if the example pi_sort_by_code given above were associated with a table named work_order, the final name of the index as applied to the database schema during data import would be idx__work_order_pi_sort_by_code.

Planned future development will provide a simple mechanism to override these conventions to produce a different style of output for each type of target construct.

The Mechanics of Name Conversion

Name conversion is syllable-driven. The name converter uses dictionaries of match phrases and replacement phrases to convert each syllable of a name independently. By default, the content of the dictionaries is quite limited, consisting of only the minimum information necessary to prevent the generation of names which would obviously be illegal or impractical in the target environment. However, the dictionaries are configurable, enabling very complex expansions and substitutions. Different dictionaries can be defined for database constructs (termed “data” names) and for non-database constructs (termed “code” names).

The process begins with the name converter scanning a name for known match phrases. When such a phrase is detected, the converter determines that it has found a syllable which may require replacement. It then looks up the appropriate replacement phrase and substitutes it into the target name at the relative location of the match phrase. If no replacement phrase is found, the original syllable is used unchanged in the target name. The appropriate convention from the table above is applied as these syllables are written into the target name.

For example, consider the following match phrase dictionary entries:

Match Phrase | Replacement Phrase
ot | over time
acct | account

During the conversion of a 4GL schema, a table named otacct would be broken down into the syllables ot and acct. The phrase over time would be substituted for ot and the phrase account would be substituted for acct. When generating the name of the corresponding DMO class, the Java class naming convention would be applied, for a final target name of OverTimeAccount. When generating the name of the corresponding database table, the SQL table naming convention would be applied, for a final target name of over_time_account.

Note that the following rules apply to the detection of match phrases in source names:

  • Camel-casing trumps match phrases. Camel-casing within a source name defines a broad division of that name into segments. Each segment may still contain independent syllables, but a syllable cannot span segments. The philosophy behind this rule is that a camel-cased name suggests the intent of the original developer to divide the name into the segments demarcated by each embedded capital letter. Note that at the time of this writing, multiple capital letters grouped together each represent individual segments, which is not necessarily ideal. This approach may be changed in the future. Camel-casing is detected in a preprocessing phase. Once complete, the resulting segments are scanned for match phrases.
  • First match wins. Given two overlapping match phrases within a source name, the phrase which begins earlier is preferred to the one beginning later. For example, if preformat and formatted were both defined as match phrases and the database field preformatted were being scanned, the match phrase preformat would be detected, and the match phrase formatted would be ignored. Note that this can lead to subtle errors; in this case, the result would be a DMO property name of preformatTed and an SQL column name of preformat_ted, which almost certainly is not what was intended.
  • Longer match wins. Given two overlapping match phrases within a source name which begin at the same character, the longer match phrase is preferred to the shorter one. For example, if preformat and preformatted were both defined as match phrases and the database field preformatted were being scanned, the match phrase preformatted would be detected, and the match phrase preformat would be ignored. Both of these rules are illustrated in the sketch following this list.
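
The following sketch shows how the “first match wins” and “longer match wins” rules interact when scanning a single (already camel-case-segmented) name. It is an illustration only, not FWD's actual implementation; the class and method names are hypothetical.

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

/**
 * Illustration only (not FWD's implementation) of the match phrase
 * detection rules described above.
 */
public class SyllableScanner
{
   /** Split one camel-case segment into syllables using a set of match phrases. */
   public static List<String> split(String segment, Collection<String> matchPhrases)
   {
      List<String> syllables = new ArrayList<>();
      int pos = 0;

      while (pos < segment.length())
      {
         int    bestStart  = -1;
         String bestPhrase = null;

         for (String phrase : matchPhrases)
         {
            int idx = segment.indexOf(phrase, pos);
            if (idx < 0)
            {
               continue;
            }

            // first match wins; on a tie, the longer match wins
            if (bestStart < 0 || idx < bestStart ||
                (idx == bestStart && phrase.length() > bestPhrase.length()))
            {
               bestStart  = idx;
               bestPhrase = phrase;
            }
         }

         if (bestStart < 0)
         {
            // no further matches; the remainder is a single syllable
            syllables.add(segment.substring(pos));
            break;
         }

         if (bestStart > pos)
         {
            // text preceding the match forms its own syllable
            syllables.add(segment.substring(pos, bestStart));
         }

         syllables.add(bestPhrase);
         pos = bestStart + bestPhrase.length();
      }

      return syllables;
   }
}

For example, with the match phrases preformat and formatted, splitting preformatted yields the syllables preformat and ted (the preformat_ted outcome described above); with preformat and preformatted as the match phrases, it yields the single syllable preformatted.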

Changing Name Conversion Output

While the conversion of names is highly configurable, FWD generally applies sensible defaults. In the absence of explicit customization, the following substitutions are configured for all target name types:

Match Phrase | Replacement Phrase
# | number
% | percent
$ | dollar
& | and
- | (empty string)
_ | (empty string)
/ | (empty string)
Entries which have an empty string as their replacement phrase cause the corresponding match phrase to be dropped from the target name. The use of an empty string as a replacement phrase serves only to delineate the syllables surrounding the given match phrase. This mechanism is used to eliminate characters which are illegal in Java or SQL names, such as the hyphen (-) and forward slash (/). While not illegal, the underscore (_) also is dropped, since it would be redundant with the naming convention used to demarcate syllables in the target name. For example, the 4GL name my_table should convert to a DMO name of MyTable and to an SQL table name of my_table, not My_Table and my___table, respectively.

In addition to the default substitutions in the table above, all reserved keywords of the Java language (see The Java Language Specification, Third Edition by James Gosling, et al.) are handled specially: if such a keyword represents the only syllable in a source name, the underscore (_) character is appended to the target name to avoid Java compiler errors. Thus, for example, class becomes class_ and break becomes break_.
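
As a rough illustration of this rule (again, not FWD's actual implementation), the check amounts to the following; the keyword set shown is deliberately abbreviated:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

/** Illustration only: append an underscore when a converted name is a Java keyword. */
public class ReservedWordCheck
{
   /** Abbreviated subset of the Java reserved words, for illustration. */
   private static final Set<String> KEYWORDS = new HashSet<>(
      Arrays.asList("abstract", "break", "class", "else", "for",
                    "if", "new", "package", "return", "while"));

   public static String sanitize(String name)
   {
      return KEYWORDS.contains(name) ? name + "_" : name;
   }
}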

This default behavior is suitable for the majority of names requiring conversion; however, you may find that the default output does not meet your needs. Perhaps the original developers of your application used very cryptic abbreviations, which you would now like to make more verbose or self-explanatory. Perhaps file system naming restrictions in the early days of your application's development required very short program names, which you would now like to expand. Finally, the name of an external procedure, internal procedure, or user-defined function may start with a digit, which results in an invalid Java name, since leading digits are not dropped during conversion. Whatever the reason, there are several approaches you can take to address this need:

  1. Modify the 4GL source code or database schema. With this approach, you change the undesirable names in the 4GL version of your application before you convert it. This is the most direct approach, but it is also the most tedious to implement and the most prone to human error. It is best used if there are a very small number of changes to be made, but it is not practical if the same changes must be made in hundreds or thousands of places across your application. Consider that every reference to a changed construct must be manually altered. For example, if you change the name of a table in a database schema, every reference to that table in the code base must be updated (this is complicated by the fact that Progress allows references to tables to be abbreviated in source code). If you change the name of a program, every RUN statement or other reference to it must be changed to match. There is no tool support in FWD to help you with this approach.
  2. Provide custom match phrase dictionary entries. The default content of these dictionaries can be overridden to provide domain-specific substitutions which are applied globally across your schemas and source code. This approach is best used if the naming conventions used in your code base are consistent and well understood. However, you must be prepared to handle unexpected conversions introduced by your dictionary entries, such as the preformat_ted example above. The larger and more diverse your application is, and the less consistent and less well understood your current naming conventions are, the more exceptions you should be prepared to handle.
    Also, this approach can be used to automatically rename any function or procedure name with leading digits to a more suitable name which can safely convert to a Java name.
  3. Provide fine-grained conversion hints to target specific cases (database only). This approach can either stand alone or be used to resolve conflicts resulting from the global mappings described in the previous item. Note that currently, name conversion hints are only supported for database tables and fields. The use of these hints is described in the chapter entitled Conversion Hints.

Since the implementation of approach 1 is an entirely manual effort and is not supported by the FWD tool set, it will not be discussed here. This section will focus primarily on approach 2, and to a lesser degree on approach 3 as it pertains to resolving issues that cannot be resolved otherwise, or issues introduced by implementing approach 2.

As noted above, there are two separate match phrase dictionaries used during conversion: one for business logic constructs (variables, programs, functions, etc.), and one for schema-level constructs (tables, fields, etc.). These dictionaries are defined by separate configuration files (in the cfg directory off the project root): codenames.xml and datanames.xml, respectively. In the absence of these files, the default behavior described above is applied.

Both files are identical in structure and purpose. As their names suggest, they are XML documents. An abbreviated sample of a datanames.xml file follows:

<?xml version="1.0"?>

<!-- Name converter match phrase defaults -->
<phrase-root>

   <!-- Standard symbols -->
   <standard>
      <phrase match="#"        replace="number" />
      <phrase match="%"        replace="percent" />
      <phrase match="$"        replace="dollar" />
      <phrase match="&amp;"    replace="and" />
      <phrase match="-"        replace="" />
      <phrase match="_"        replace="" />
      <phrase match="/"        replace="" />
      <phrase match="id"       replace="identifier" full="true" />
   </standard>

   <!-- Domain-specific substitutions -->
   <custom>
      <phrase match="ap"       replace="payable" />
      <phrase match="app"      replace="approval" />
      <phrase match="disc"     replace="discount" />
      <!-- ... additional entries ... -->
   </custom>

   <!-- Domain-specific conflict resolution -->
   <preserve>
      <phrase match="approval" />
      <phrase match="approve" />
      <phrase match="approved" />
      <phrase match="discount" />
      <phrase match="discontinued" />
      <!-- ... additional entries ... -->
   </preserve>

</phrase-root>

The file consists of three major XML nodes, denoted by the element names standard, custom, and preserve. The document is divided into these three categories to allow conversion programs to apply varying levels of customization to different types of name conversions. Each major node contains 0 or more phrase elements, each with various combinations of attributes.

A phrase element defines a single dictionary entry. The match attribute is mandatory; it defines a syllable in the source name for which the conversion algorithm should search. The replace attribute is optional; it defines a replacement phrase which should be substituted for the corresponding match phrase in the target name. If absent, its value defaults to that of the match attribute. This effectively leaves the match phrase intact in the output, but defines it as an independent syllable. The full attribute is optional; it defaults to false. If set to true, this indicates to the conversion algorithm that the match phrase should only be matched if it is the only syllable present in the source name. Otherwise, the match is ignored.
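
For instance, based on the description of the full attribute, the standard entry from the sample above behaves roughly as follows (illustration only):

<!-- replace "id" only when it is the entire source name -->
<phrase match="id" replace="identifier" full="true" />

A field named simply id would convert to the DMO property identifier and the SQL column identifier, while a field named order-id retains the id syllable, converting to orderId and order_id.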

If multiple phrase elements are defined within a given section, duplicate entries (i.e., those with the same match attribute value) which appear later in the containing node override those which appear earlier. Although not flagged by the conversion tools as an error, it is generally a good idea to avoid such duplication, since it may be confusing and lead to unexpected results. For this reason, we recommend defining these entries in alphabetical order.

The standard node contains those dictionary entries which should be applied to all circumstances of name conversion. This section should contain all substitutions that are absolutely necessary for any name in your project; at minimum it should define all the changes necessary to avoid illegal names in the target environment. This includes entries for all the default match phrases listed above, since those defaults will be ignored for a given scope of constructs (i.e., database or business logic), if the corresponding configuration file is present.

The custom and preserve entries work hand-in-hand. Entries in the custom node define those phrases in the source name which should be replaced in the target name, while entries in the preserve node define those phrases which should be preserved from source to target name. Since the point of the latter entries is to preserve phrases rather than replace them, no replace attribute should be specified for phrase elements in the preserve node. After you add an entry to the custom node, you will often find it necessary to add one or more entries to the preserve node to prevent more specific, enclosing phrases from being incorrectly split along the smaller match phrases they contain.

For example, note the entry above for the match phrase ap, which seeks to substitute the phrase payable for every instance of the syllable ap in a source name. Arguably, this is a somewhat dangerous entry, as the phrase ap conceivably could be embedded in many words. If this entry were allowed to stand alone, a database field named approval would convert to an SQL column named payable_proval.

With the addition of a more specific entry for the match phrase app - which seeks to substitute the phrase approval - this outcome is avoided, because app is a more specific match than ap. Unfortunately, it doesn't much improve the situation, resulting instead in the similarly poor outcome of approval_proval. Likewise, the names approve and approved would convert to approval_prove and approval_proved, respectively. Clearly, this was not the intent of the entry mapping app to approval. To prevent this, entries for approve, approved, and approval are added to the preserve section. Now, each of these phrases produces a more specific match than app, and consequently, those names are not corrupted during conversion.

In some cases, more specific entries in the custom or preserve sections will not help. The phrase app could simply mean something entirely different in a certain context. In a discount table, for instance, a field named disc-app might indicate the concept of a “discount applied”. In such a case, a schema hint will be necessary to resolve the conflict, so that this field name does not become discount_approval. The following entry in the appropriate schema's hints file (see the Conversion Hints chapter) will resolve the issue:

<schema>
   ...
   <table name="discount">
      <field name="disc-app">
         <phrase match="app" replace="applied" />
      </field>
   </table>
   ...
</schema>

This will result in a DMO property name of discountApplied and an SQL column name of discount_applied, which is what we want. Note that it was not necessary to specify any override for the phrase disc in the hint, since that portion of the field name already was set to convert correctly from the global entries in the datanames.xml file above.

The decision of which groups of dictionary entries to apply to which types of name conversion is made by the conversion programs themselves. Currently, all entries specified in codenames.xml are applied to the conversion of all names related to business logic source code. Specifically, entries in all sections are considered when converting the names of programs, internal procedures, functions (non-built-ins), variables, streams, and explicitly defined buffers.

The application of the entries in datanames.xml is a bit more complicated. Only the entries in the standard section are considered when converting the names of database indexes, all non-persistent (i.e., temp-table/workfile) buffers and tables, and those temp-table/workfile fields whose table is not defined in terms of a persistent table (that is, using the 4GL LIKE keyword). All entries are considered when converting the names of databases, all persistent (i.e., non-temp-table/workfile) tables, buffers, and fields, plus temp-table/workfile fields whose table is defined in terms of a persistent table (again, using the LIKE keyword).

At the time of this writing, “real-world” overrides of match phrase dictionaries have only been done for schema-level constructs. In our experience, customizing the conversion of database names has been a more manageable, smaller-scoped effort than trying to do so for business logic constructs. As a result, the tool support in FWD currently is skewed to customizing database names more so than business logic names. Name-related conversion hints only apply to database tables and fields; reports which summarize name conversions are only produced for schema constructs at this time. Comparable support for business logic name conversion customization would require some enhancements to the FWD technology.

When database names are converted, a history report is generated for each persistent schema. A combined report is created for all temp-table/workfile schemas. This report is a simple text file which is placed in the project root directory. It is named schema_names_{database}.rpt, where database is the name of the database schema, as extracted from the associated .df file, or the constant _temp for the combined report of temp-table schemas.

The report summarizes the name conversions which occurred for each table, field, and index in a schema. It contains the following columns:

Column | Meaning
TOKEN | The token type assigned to the name being converted by the FWD schema lexer (the tool which scans and tokenizes the .df file). Possible token types are:
  • TABLE - database table
  • INDEX - database index
  • FIELD_{data type} - database field, where {data type} indicates the data type of the field (CHAR, DATE, DEC, INT, LOGICAL, RAW, or ROWID)
TYPE | The type of target construct for which the name was converted (Java class, Java variable, SQL table, SQL column, SQL index). TABLE tokens cause two conversions: one for a Java class (a DMO class) and one for an SQL table. FIELD_* tokens also cause two: one for a Java variable (a DMO property) and one for an SQL column. INDEX tokens cause an SQL index name to be generated.
CONVERTED | The converted (target) name.
ORIGINAL | The original (source) name.
NOTES | Descriptive information, if any, provided by the Progress schema. This information is included in the report as a convenience, to provide context for the table and field names. It will be a description, if provided; otherwise a label (for a field), if provided; otherwise nothing.

The purpose of this report is to provide a simple way to review the results of database name conversions. Expect to use this report heavily, especially in the early stages of customizing the datanames.xml file. Getting this customization right is an iterative process. We recommend carefully reviewing this report after the first schema conversion (using the f2+m0 options to the ConversionDriver program) and marking up all the converted names which do not appear to be correct. Make the appropriate adjustments to datanames.xml, then reconvert, using f0+m0 on subsequent runs after the first. Following each change and reconversion, use your favorite file comparison tool to compare the latest version of the names report with the previous version. Hopefully most of the differences will represent improvements rather than regressions. Continue this process until all names convert as expected. As you run later phases of conversion, the converted schema names will flow through all of the source code conversion steps.

Database

This section deals with some conversion customizations that are specific to database issues. This includes assigning static names to temporary tables, adding “synthetic indexes” to a table, and cleanly dropping tables from a schema.

Static Temporary Table Names

During conversion, temp-table definitions are extracted from 4GL source code. Each uniquely structured temp-table schema is analyzed and used to build various artifacts needed by the converted application. One of these artifacts is a Data Model Object (DMO) Java interface. For each DMO interface, a concrete implementation class which implements that interface is assembled on-the-fly at runtime. Instances of these classes are used by FWD's Object to Relational Model (ORM) runtime framework to interact with the temporary tables in a special-purpose database. The interfaces are used within the converted application's business logic layer to interact with the FWD equivalent of a Progress record buffer.

DMO interfaces created by the conversion tools are generated within a hard-coded sub-package of your application's source code hierarchy. If your application source code is organized in packages prefixed with com.mydomain.myapp, these DMO interfaces will be created in the com.mydomain.myapp.dmo._temp package.

Each DMO interface is assigned a unique name, based on the name assigned to the temp-table in 4GL code, and using the normal name conversion rules. This default name conversion is appropriate in most cases, but under certain circumstances you might prefer to assign your own, well-known name to a temporary table DMO. This may be necessary, for instance, if you integrate custom Java code which references a well-known temporary table DMO name.

This is achieved using a schema conversion hint. However, unlike the other schema hints we have discussed, this one is encoded in a hints file that resides alongside the Progress program containing the temp-table affected by the hint. Consider the following example, in a file named $P2J_HOME/src/myapp/program15c.p:

...
DEFINE TEMP-TABLE abc-xyz
  FIELD abc AS INT
  FIELD xyz AS CHAR
  INDEX pi-abc FIELD abc.
...

Assuming the name does not conflict with other converted temp-table DMOs in the application encountered before it, the DMO interface representing this temp-table would be named AbcXyz. Suppose you want to assign a different, well-known name to the DMO for this temp-table. This could be achieved by the following hint, encoded in a file named $P2J_HOME/src/myapp/program15c.p.hints:

<hints>
   ...
   <schema>
      ...
      <table name="abc-xyz" temp="well known" />
      ...
   </schema>
   ...
</hints>

Now, the DMO interface would be named WellKnown.

Synthetic Indexes

The term “synthetic index” refers to a database index that is required in a converted database, but which was not present in the original database. The need to create such an index should be relatively rare and is likely to be recognized only late in the project cycle, during testing of the converted application. Synthetic indexes currently are supported only for persistent (non-temporary) tables.

Usually, the need to add a synthetic index is related to a performance problem in the converted system. Conversion carries over all indexes from an original, 4GL database schema to the corresponding, new schema. Databases in general rely heavily on indexes to provide good performance. However, different database implementations may utilize the same index differently when executing a query. Query planner implementations vary, and one cannot rely on the PostgreSQL query planner algorithms to choose the same index that Progress would for similar record lookups.

Thus, you may find while testing your application that in certain cases, the addition of an index to the converted database schema would benefit performance of an important query. We caution you to be very careful in your decision to add indexes. While doing so may benefit one query, it may hurt the performance of others unexpectedly. In any event, every new index requires additional disk space, and adds some amount of overhead to insert, update, and possibly delete operations on the affected table.

Given that you have considered these warnings and still want to go ahead and add an index, there are several ways to go about it:

  • Add the index manually to the converted schema. Using SQL data definition language (i.e., CREATE INDEX) is a perfectly acceptable way to add an index. However, if you are still at a point in your project where you have the need to re-convert your application occasionally and to import new data sets into your target database, this adds an extra, manual step.
  • Add the index to the original Progress schema and re-convert. This may sound like a simple approach, but we do not recommend doing this, because it is likely to change the behavior of your Progress application, and it can cause the converted result to be different. For any given language statement in Progress which retrieves records without explicit sort instructions, the 4GL runtime selects what it considers the most appropriate index at the time the query is executed. Records are returned in the order defined by this index. Thus, adding an index may change how certain parts of your application process records. FWD mimics these decisions during conversion and makes all sorting explicit in the converted source code. Thus, the resulting application will pick up these unintended changes.
  • Define a synthetic index. With this approach, a conversion hint is added to the appropriate schema's hints file (e.g., data/namespace/mydb.schema.hints off the project root). The hint defines the index to be added to the target database, but it ensures that this index is not considered during conversion, when mimicking Progress' index selection process. The converted output remains the same as it would have been without the index. However, during data import, the index is created in the target database automatically.

An example of a synthetic index hint would look something like this:

<schema>
   ...
   <table name="customer">
      <index name="si_cust_num">
         <index-field name="cust-num" />
      </index>
   </table>
   ...
</schema>

Assuming the table name customer is not changed by any customized name conversion, this will cause an index named idx__customer_si_cust_num to be added to the customer table in the target database during data import. The index will contain a single column whose name will be the result of converting the field name cust-num (cust_num by default).
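
For illustration only (the exact DDL emitted during import may differ), the effect is roughly equivalent to executing:

CREATE INDEX idx__customer_si_cust_num ON customer (cust_num);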

Please refer to the Conversion Hints chapter for details on the syntax of this hint.

Dropping Tables

In some circumstances, you may have a need to drop a persistent database table from a schema.

The simplest case is that a table is no longer needed because it has been determined to be “dead”. That is, it is no longer used in the application, or it will not be used after conversion, most likely because code that references it has been eliminated during dead code analysis.

In other cases, you may want to hold onto the information in a database table and continue to use it, but you may want to move it out of the database and into some other backing store. For example, some applications may keep read-only application configuration data in a database table. If this data only changes when the application code changes, you may decide to store this information as an application resource instead, or - if the information needs to change for different installations of your application - in the FWD directory.

Security data is another candidate for migration from application database tables to the FWD directory: this information may be maintained by a different group than the one responsible for application data. Also, the same security information may be needed across multiple applications. Rather than duplicate this data in multiple application databases, it can be centralized in a common FWD directory.

Whatever the reason, the FWD conversion can be customized to drop tables from a schema during conversion. If this data must be made available via a different mechanism, custom conversion logic can be applied to integrate that different mechanism into the new application's Java code. This is necessary to properly handle references to those tables in the original code.

The Simple Case - Dropping a Table Outright

In this case, you have determined the target table simply is no longer necessary. It is not referenced in the code that will survive conversion, so it should not be in the converted database schema. There are two ways to drop the table.

The first is to simply remove the table from the exported schema file (.df file), either by not exporting it from the Progress environment in the first place, or by manually editing the .df file after the export is performed. This approach will work, but it has the flaw of not being self-documenting: the table simply is missing, and unless the reason is documented externally, it is prone to error if the export process needs to be repeated for some reason.

A better way is to use a conversion hint to drop the table. The reason for dropping the table can be documented alongside the hint, and this effort need only be done once, no matter how many times the schema is exported subsequently. Assuming your schema export file is named mydb.df, you would add a hint to the $P2J_HOME/data/namespace/mydb.schema.hints file:

<!-- Dropping table 'deadtable', which was made obsolete by dropping
     the 'no-longer-used.p' module -->
<table name="deadtable" drop="true" />

This would ensure that deadtable, along with all of its fields and indexes, is omitted from the converted mydb schema.

The Complex Case - Re-purposing a Dropped Table

The case of replacing a dropped table with a different access idiom - say, retrieving the data from the FWD directory or from an application resource in a jar file - is considerably more complicated. In fact, certain implementation details regarding this type of replacement are beyond the scope of this document, but the concepts are presented here to convey what is possible with FWD.

Suppose your existing application uses a table (say, mydb.printer) which stores information about all of the printer configurations available in your organization. As part of the conversion of your application, you have decided that you want to centralize management of this information, so that it can be shared across the enterprise. Perhaps you have an LDAP server where you will store this data, or maybe you want to put it into a FWD directory. Let's not worry too much about what the new mechanism is; for now, the important point is that you want to move printer configuration data out of your application database, but you want your converted application to continue to have access to that information (without having to re-factor your converted business logic by hand).

The solution involves several steps:

  1. Analyze how your application accesses the printer table.
  2. Migrate the data from the printer table to its new home.
  3. Develop custom Java code to provide a new way to access printer configuration data from its new home.
  4. Configure conversion to drop the printer table.
  5. Provide custom conversion rules to fix up references to the printer table in business logic.

Let's look at each of these steps in more detail.

Analyze How Your Application Accesses the Target Data

Before you decide to drop a table upon which your application depends, you should understand how your application uses that table. The best way to go about this analysis is to leverage the FWD reporting engine to find every reference to that table in your application. This will yield better results than a simple text search, since the FWD reporting engine understands Progress 4GL syntax (for example, references to table names in 4GL code may be abbreviated).

The following use of the “simple search” facility (see Reporting chapter, Simple Search section) should find all references to the printer table in your 4GL code. Note that line breaks and indents in the following command are included for clarity only, but should be omitted when executing the search:

java -classpath $P2J_HOME/p2j/lib/p2j.jar
     com.goldencode.p2j.pattern.PatternEngine
        criteria="evalLib(\"record_types\") and
                  parent.type != prog.kw_like and
                  getNoteString(\"schemaname\") == \"mydb.printer\""
        reports/simple_search
        ./src/
        "*.[pPwW].ast"

Review each instance of a reference to your target table in the resulting report.

  • If all references represent record reads (i.e., your application never writes to this table), this is a preferable scenario, because it means you can replace the persistent table with a temporary table, which can be populated dynamically by custom Java code. That code would read the data from the appropriate backing store and populate a temporary table with all possible records. The temporary table could then be used by business logic in place of the dropped table.
  • If all references represent record reads using simple queries, or a very small variety of queries, this is the best case scenario, because it means you have the most flexibility in how you replace these table references with custom code. In other words, the fewer and simpler the idioms your application uses to access data in the target table, the simpler it will be to replace all these idioms with a limited number of calls to custom Java code, which accesses the data from its new backing store.

We will focus this discussion on tables which are effectively read-only from the perspective of application business logic. While it is possible to replace a table to which the application writes data with some alternative backing store, this situation represents a much more complicated case of custom code replacement. As such, it is beyond the scope of this document.

Migrate the Data to Its New Home

There is no tool support in the FWD project for this step, because the process you follow to migrate the data of a dropped table is highly dependent upon your needs. If you want to move your data into an LDAP server, a FWD directory, an XML file, etc., you will need to do it manually, or develop an automated way to export the data from the Progress database table into the format/target of your choice. Since the target data is in the Progress database to begin with, this formatting can be done with Progress 4GL code. The data also could be exported into a data export file (.d file), and manipulated in the scripting or programming language of your choice.

Write Java Code to Access the Data

This step likewise has no direct support in the FWD project, because it too is highly dependent upon your choice of a new data store. If you store the data in an LDAP server, you will need to provide Java code to read it from that server and present it in a form that is usable by your application. Likewise if it is stored in the FWD directory, a flat file, etc.

The design of your custom data retrieval code depends on how you expect your converted business logic to access this data. If the existing business logic:

  • relies upon a wide variety of database access methods; or
  • uses complicated where clauses within FIND, FOR, OPEN QUERY or other data access statements; or
  • commonly retrieves a large proportion of the available records in the table,

the best approach may be to dynamically populate a temporary table with all of the table's records, and to allow the converted business logic to operate on the temporary table in the same manner in which it previously used the persistent table. This approach leaves the existing business logic intact for the most part, changing only the table being referenced and invoking some custom code to initialize that table properly before it is used.

If you were to use this approach to replace the printer table from the hypothetical case above, you might write some custom Java code which reads all of the printer configurations from their new backing store and populates the replacement temporary table before the business logic first uses it.
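
The following is a hypothetical sketch of that custom code; none of the class or method names below correspond to actual FWD runtime APIs, and the loading logic is stubbed out. It simply shows the shape of a loader that reads printer configurations from their new backing store so that glue code (or custom conversion rules) can insert one temporary table record per entry.

import java.util.Arrays;
import java.util.List;

/** Hypothetical sketch: load printer configurations from their new backing store. */
public class PrinterTableInitializer
{
   /** Simple value object describing one printer configuration record. */
   public static class PrinterEntry
   {
      public final String  name;
      public final int     resolution;
      public final boolean color;

      public PrinterEntry(String name, int resolution, boolean color)
      {
         this.name       = name;
         this.resolution = resolution;
         this.color      = color;
      }
   }

   /** Read all printer configurations from the new backing store. */
   public static List<PrinterEntry> loadAll()
   {
      // in real code, read from the FWD directory, an LDAP server, a resource
      // file, etc.; hard-coded here purely for illustration
      return Arrays.asList(new PrinterEntry("laser1", 600, false),
                           new PrinterEntry("color2", 1200, true));
   }
}

Custom conversion rules (or a small hand-written initializer invoked from the converted code) would then iterate over loadAll() and create one record in the replacement temporary table per entry.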

In the case where existing business logic has a small variety of relatively simple idioms to access records from the target table, it may make sense to create custom Java logic for each of these access points. For instance, if your existing business logic only ever gets a single record at a time from the printer table using a mechanism like:

DEFINE VAR printer-name as character.
DEFINE VAR resolution as integer.
DEFINE VAR color as logical.
...
/* populate printer-name variable with a valid printer name */
...
FIND printer WHERE printer.name = printer-name.
ASSIGN
   resolution = printer.resolution
   color = printer.color.

you might create a PrinterConfiguration Java bean, which implements a static method:

public static PrinterConfiguration find(character printerName)

You would then create custom conversion rules (see below) to replace each instance of the above FIND statement with a call to PrinterConfiguration.find(printerName), and to replace each read of a printer record field with an invocation of the appropriate accessor (“getter”) method on the PrinterConfiguration Java bean returned by the find(printerName) method call.
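
A minimal sketch of such a bean follows. The FWD wrapper types character, integer, and logical mirror the signature shown above; the lookup itself is stubbed out, and everything other than the names already introduced in this example is illustrative.

import com.goldencode.p2j.util.character;
import com.goldencode.p2j.util.integer;
import com.goldencode.p2j.util.logical;

/** Illustrative sketch of a bean replacing reads of the dropped printer table. */
public class PrinterConfiguration
{
   private final character printerName;
   private final integer   resolution;
   private final logical   color;

   private PrinterConfiguration(character printerName, integer resolution, logical color)
   {
      this.printerName = printerName;
      this.resolution  = resolution;
      this.color       = color;
   }

   /** Look up a single printer configuration by name in the new backing store. */
   public static PrinterConfiguration find(character printerName)
   {
      // in real code, query the FWD directory, LDAP, a resource file, etc.;
      // stubbed here purely for illustration
      return new PrinterConfiguration(printerName, new integer(600), new logical(false));
   }

   public character getPrinterName()
   {
      return printerName;
   }

   public integer getResolution()
   {
      return resolution;
   }

   public logical getColor()
   {
      return color;
   }
}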

If neither of these approaches seems to fit your target table, based on the table's size or how it is used by business logic, you might want to reconsider dropping that table. Very large tables will likely cause performance and memory resource problems when converted to temporary tables. On the other hand, writing lots of custom Java code and conversion rules to replace a wide variety of data access idioms can develop into a lot of work quickly. Choose your battles carefully.

Drop the Target Table

A conversion hint is used to drop a persistent table from a database. You can either drop the table outright as noted above, or replace it with a temporary table of the same structure. To simply drop the printer table from the mydb schema, you would encode the following hint in the $P2J_HOME/data/namespace/mydb.schema.hints file:

<table name="printer" drop="true" />

Note that references to the printer table in your application are now undefined and will cause conversion errors unless you create custom conversion rules to replace them with meaningful Java code in the converted application.

To drop the printer table and replace it with a temporary table, whose corresponding DMO interface name is TempPrinterConfig, encode the following hint:

<table name="printer" drop="true" temp="printer config" />

This ensures the printer table is not a part of the converted mydb schema. It also ensures that all references to the printer table in business logic will now refer to a TempPrinterConfig DMO in converted code. Of course, that reference will have no meaning unless you add some custom conversion rules to emit Java code to initialize the backing temporary table.

Fix Up Business Logic References to the Dropped Table

The only thing left to do is to implement custom conversion rules to inject Java business logic which invokes the Java code you wrote in step 3 above. This is an advanced topic that is beyond the scope of this document, though an introduction to the concepts is presented in the Integrating Custom Java Code section below. The FWD Internals book continues the discussion of re-purposing a dropped table from the perspective of integrating hand-written Java code into converted business logic.

Customizing Comment Conversion

Dropping Comments

A UAST hint named drop_comment_search_list of data type string[] can be used to provide a list of one or more match strings. The conversion will drop (omit from output) any comment whose content includes one of the strings in this list. This can be specified at the file or directory level, but a useful approach is to apply it across the entire project by creating a directory.hints file in the topmost source directory of the project.

Here is an example:

<?xml version="1.0"?>

<!-- UAST hints -->
<hints>
   <uast name="drop_comment_search_list" datatype="string[]">
      <array-val value="This .W file was created with the Progress UIB." />
      <array-val value="_UIB-CODE-BLOCK-" />
      <array-val value="****  Preprocessor Definitions  ****" />
      <array-val value="_UIB-PREPROCESSOR-BLOCK-" />
      <array-val value="****  Frame Definitions  ****" />
      <array-val value="parameter-definitions" />
      <array-val value="****  Definitions  ****" />
      <array-val value="DESIGN Window definition (used by the UIB)" />
      <array-val value="*** Included-Libraries ****" />
      <array-val value="**** Procedure Settings ****" />
      <array-val value="Settings for THIS-PROCEDURE" />
      <array-val value="***  Create Window  ****" />
      <array-val value="****  Main Block  ****" />
      <array-val value="-CUSTOM-PROPERTIES" />
      <array-val value="_ADM-CODE-BLOCK" />
   </uast>
</hints>

Deleting Invalid Content from Javadoc Comments

4GL comments that appear just before an internal procedure definition or a function definition are converted into javadoc. The javadoc processing is sensitive to the characters that are included, since the result is written into HTML. When a 4GL comment is converted into javadoc, there are often many drawing characters and other "junk" which should not be emitted into the javadoc. This "junk" can be deleted using the UAST hint named zap_javadoc_list of data type string[].

Each array-val element represents a regular expression that will be replaced with "" (empty string). In other words, any match of one of these "regexes" to comment content will cause that matched content to be removed.
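
A sketch of such a hint follows, assuming the same directory.hints structure shown for drop_comment_search_list above; the regular expressions themselves are only illustrative examples of the kind of drawing-character runs you might want removed:

<?xml version="1.0"?>

<!-- UAST hints -->
<hints>
   <uast name="zap_javadoc_list" datatype="string[]">
      <array-val value="\*-+\*" />
      <array-val value="-{5,}" />
      <array-val value="={5,}" />
   </uast>
</hints>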

This has no effect on non-javadoc comments.

Integrating Custom Conversion Logic

Part of the power of the FWD conversion technology is that it is extensible. Customizable hooks are provided to allow you to implement specialized conversion logic at key points during the code back-end phase of the conversion process (see the Process Overview chapter): early in the logic analysis sub-phase and during the core conversion sub-phase. The conversion engine “hooks” your custom conversion rules by calling out into TRPL rule-sets that you program to match the specific needs of your project. These rule-sets are executed like any others in the overall conversion process. This allows you to drop single lines or entire sections of code, inject custom code at any location in your converted code base, or restructure your code to arbitrarily change the resulting Java code.

This section will not discuss the details of implementing these hooks, as that requires a working knowledge of TRPL, the language in which conversion programs are written. That topic is covered in a separate book, FWD Internals. The purpose of this discussion is simply to introduce the primary facilities available for integrating customization into the conversion process.

Annotation Hooks

The logic analysis sub-phase of the code back-end phase is also known as the annotations sub-phase, because the results of analyzing the application's 4GL ASTs generally are recorded in those ASTs as annotations. These will be read and used by downstream rule-sets within the code back-end process (such as the core conversion sub-phase).

Quite early in the annotations sub-phase (just after unreachable code analysis), FWD attempts to execute an optional rule-set named customer_specific_annotations_prep. This rule-set, if present, resides in the file $P2J_HOME/pattern/customer_specific_annotations_prep.rules. It is called once per 4GL program, so the rules that comprise it can be applied against all of the ASTs for each program in your application, before the annotations phase advances to the next sub-phase of the process.

This rule-set is invoked within the context of visiting 4GL ASTs, so the idea is to perform any custom analysis that is specific to your project, and annotate certain tree nodes with information you divine from your analysis. For instance, you may want to detect every token which represents a database access statement operating on a particular table, and mark that AST node with a special annotation which makes it stand out to downstream conversion logic.
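
For illustration, a rule of this kind inside customer_specific_annotations_prep.rules might look like the following; the annotation name printer_access and the table name mydb.printer are placeholders for your own, and the exact matching criteria would depend on your project:

<rule>evalLib("record_types")                       and
      parent.type != prog.kw_like                   and
      getNoteString("schemaname") == "mydb.printer"
   <action>putNote("printer_access", true)</action>
</rule>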

TRPL provides a great deal of power and flexibility, so you can define rules which are as broad or narrowly targeted as you need, down to searching for a particular token or combination of tokens, a specific line of code, instance of a string literal, particular program, function, procedure, variable name, etc. Any number of annotations can be added to an AST, so long as each is uniquely named, so as not to conflict with an existing annotation on that node.

In addition to applying annotations, TRPL allows you to perform a wide range of transformations upon ASTs. You can change the type or text of any token, mark it as hidden from downstream processing, remove it entirely, insert new tokens, or otherwise rearrange the tree's structure in arbitrary ways. The transformed AST becomes input to downstream conversion phases.

In this phase, you can create custom nodes or modify existing ones so that, depending on how they are annotated, they will be handled downstream by the tools defined in the convert/standard_customer_specific_tools.rules or by your own custom tools. The data type for the special annotations is boolean, with the specific note that the standard tools check for the existence of the annotation, not its value. To set an annotation, use the putNote(<annotation>, <value>) or the copy.putAnnotation(<annotation>, <value>) APIs; to remove it, use the removeNote(<annotation>) or copy.removeAnnotation(<annotation>) APIs.

The standard tools file handles any prog.customer_specific node which has one of the special annotations and does not have a handled annotation set to true (for the handled annotation, the value is checked, not merely its presence); each such node is handled in a specific way, depending on how it was annotated.

To create a prog.customer_specific node, you can use the following snippet:

<variable name="ref" type="com.goldencode.ast.Aast" />
<action>
   ref = createProgressAst(prog.customer_specific, <text>, <parent>[, <idx>])
</action>

where:

  • text is the text contained by this node (which can be a method call, variable name or something else, depending on the annotation).
  • parent is the parent node to which it will be attached.
  • idx represents the 0-based position in the parent's children where the new node will be created. If not specified, this is assumed to be -1 which creates the node as the last child.

Details about how the createProgressAst API is used in the TRPL rules can be found in the FWD Internals book. For our case, it is important to note that this API call returns an AST reference to which annotations can be added, as in:

<action>ref.putAnnotation(<annotation>, <value>)</action>

The ID of this new node can be obtained using the ref.id field.

The next table describes the annotations handled by the standard tools and how the generated AST (plus converted Java code) will look.

prog.customer_specific annotation | standard_customer_specific_tools API | Details
vardef | add_vardef | Defines an instance variable. Assuming the lastref variable references a prog.customer_specific node, the following TRPL code will set its annotations:
<action>lastref.putAnnotation("vardef",   true)</action>
<action>
   lastref.putAnnotation("classname",     <classname>)
</action>
<action>lastref.putAnnotation("javaname", <javaname>)</action>
<action>lastref.putAnnotation("def_init", <def_init>)</action>

where:
  • classname represents the new variable's type (a java.lang.String value), with or without the package.
  • javaname represents the instance field name (a java.lang.String value).
  • def_init, if set to true, will initialize the instance field with an object created using the default constructor; otherwise, the instance field will be assigned null.
Using the following TRPL code to set the annotations for a prog.customer_specific node:
<action>lastref.putAnnotation("vardef",    true)</action>
<action>
   lastref.putAnnotation("classname", "SecurityManager")
</action>
<action>
   lastref.putAnnotation("javaname",  "_securitymanager_")
</action>
<action>lastref.putAnnotation("def_init",  false)</action>

the node referenced by the lastref variable will look like (the text of the prog.customer_specific node is empty):
<ast col="0" id="34359738409" line="0" text="" 
     type="CUSTOMER_SPECIFIC">
  <annotation datatype="java.lang.Boolean" key="vardef" 
              value="true"/>
  <annotation datatype="java.lang.String" key="classname" 
              value="SecurityManager"/>
  <annotation datatype="java.lang.String" key="javaname" 
              value="_securitymanager_"/>
  <annotation datatype="java.lang.Boolean" key="def_init" 
              value="false"/>
  ...
</ast>

and the resulting Java code will look like:
SecurityManager _securitymanager_ = null;

varref | add_varref | Adds a variable reference (e.g., in an assignment or as a parameter). Assuming the lastref variable references a prog.customer_specific node, the following TRPL code will set the needed annotations:
<action>lastref.putAnnotation("varref", true)</action>
<action>lastref.putAnnotation("refid", <refid>)</action>
<action>lastref.putAnnotation("etype", <etype>)</action>

where refid is a java.lang.Long variable, which holds the AST ID of the referenced variable. When this is used, the AST referenced by the lastref variable will look like:
<ast col="0" id="34359738415" line="0" text="<varname>" 
     type="CUSTOMER_SPECIFIC">
  <annotation datatype="java.lang.Boolean" key="varref" 
              value="true"/>
  <annotation datatype="java.lang.String" key="etype" 
              value="<datatype>"/>
  <annotation datatype="java.lang.Long" key="refid" 
              value="<refid>"/>
  ...
</ast>

The text of the prog.customer_specific node is set to the variable's name. The etype annotation is a special annotation which holds the variable's data type, as text; this text must be the same as the variable's 4GL data type (e.g., character, integer, decimal, etc.).
Assuming we want to replace all IF user-name = "admin" 4GL tests with if _username_.equals("admin") tests in the generated Java code, we can use the replace_with_customer_specific tool to replace the user-name AST with a prog.customer_specific node and then add a varref annotation:
<rule>type == prog.var_char and
      text == "user-name"   and
      upPath("KW_IF")
   <action>
      lastref = execLib("replace_with_customer_specific",
                        copy, "_username_", null, "character")
   </action>
   <action>
      lastref.putAnnotation("varref", true)
   </action>
   <action>
      lastref.putAnnotation("refid", uid)
   </action>
</rule>

where uid is a reference to the _username_ variable definition (a java.lang.Long value). This changes the AST from:
<ast col="0" id="34359738386" line="0" text="statement" 
     type="STATEMENT">
  <ast col="1" id="34359738387" line="98" text="if" 
       type="KW_IF">
    <ast col="0" id="34359738389" line="0" text="expression" 
                 type="EXPRESSION">
      <ast col="20" id="34359738390" line="98" text="=" 
           type="EQUALS">
        <ast col="4" id="34359738394" line="98" 
             text="user-name" type="VAR_CHAR">
          <annotation datatype="java.lang.Long" key="oldtype" 
                      value="2332"/>
          <annotation datatype="java.lang.Long" key="refid" 
                      value="34359738371"/>
        </ast>
        <ast col="22" id="34359738396" line="98" 
             text=""admin"" type="STRING">
           ...
        </ast>
      </ast>
    </ast>
    <ast col="1" id="34359738398" line="99" text="then" 
         type="KW_THEN">
       ...
    </ast>
  </ast>
</ast>

to:
<ast col="0" id="34359738386" line="0" text="statement" 
     type="STATEMENT">
  <ast col="1" id="34359738387" line="98" text="if" 
       type="KW_IF">
    <ast col="0" id="34359738389" line="0" text="expression" 
                 type="EXPRESSION">
      <ast col="20" id="34359738390" line="98" text="=" 
           type="EQUALS">
        <ast col="0" id="34359738415" line="0" 
             text="_username_" type="CUSTOMER_SPECIFIC">
          <annotation datatype="java.lang.String" key="etype" 
                      value="character"/>
          <annotation datatype="java.lang.Boolean" 
                      key="varref" value="true"/>
          <annotation datatype="java.lang.Long" key="refid" 
                      value="34359738412"/>
        </ast>
        <ast col="22" id="34359738396" line="98" 
             text=""admin"" type="STRING">
           ...
        </ast>
      </ast>
    </ast>
    <ast col="1" id="34359738398" line="99" text="then" 
         type="KW_THEN">
       ...
    </ast>
  </ast>
</ast>
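
With the varref and refid annotations in place, the generated Java code for this test takes the form described above, roughly:
if (_username_.equals("admin"))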

assign add_assign Creates a Java-style assignment (using the Java assignment operator =). Assuming the lastref variable references a prog.customer_specific node, the following TRPL code will set the needed annotations:
<action>lastref.putAnnotation("assign", true)</action>
<action>lastref.putAnnotation("refid", <refid>)</action>

where refid is a java.lang.Long variable, which holds the AST ID of the variable on the left side of the assignment. When this is used, the AST referenced by the lastref variable will look like (the text of the prog.customer_specific node is set to assign):
<ast col="0" id="34359738410" line="0" text="assign" 
     type="CUSTOMER_SPECIFIC">
  <annotation datatype="java.lang.Boolean" key="assign" 
              value="true"/>
  <annotation datatype="java.lang.Long" key="refid" 
              value="<refid>"/>
  ...
</ast>

Assuming the left side of the assignment is the _securitymanager_ variable created in the example for the vardef annotation (the <refid> points to the _securitymanager_ variable definition), we can create a prog.customer_specific node which will be converted as a Java static method call and will act as the right side of the assignment:
<action>
   lastref = createProgressAst(prog.customer_specific,
                               "SecurityManager.getInstance",
                               lastref,
                               0)
</action>

After this, the AST will look like:
<ast col="0" id="34359738410" line="0" text="assign" 
     type="CUSTOMER_SPECIFIC">
  <annotation datatype="java.lang.Boolean" key="assign" 
              value="true"/>
  <annotation datatype="java.lang.Long" key="refid" 
              value="34359738409"/>
  ...
  <ast col="0" id="34359738411" line="0" 
       text="SecurityManager.getInstance" 
       type="CUSTOMER_SPECIFIC">
    ...
  </ast>
</ast>

while the generated Java code will look like:
_securitymanager_ = SecurityManager.getInstance();

assign_wrapper add_assign_wrapper Creates a 4GL-style assignment, using the assign APIs for the wrapper data types. Assuming the lastref variable references a prog.customer_specific node, the following TRPL code will set the needed annotations:
<action>lastref.putAnnotation("assign_wrapper", true)</action>
<action>lastref.putAnnotation("refid", <refid>)</action>

where refid is a java.lang.Long variable, which holds the AST ID of the variable on the left side of the assignment. When this is used, the AST referenced by the lastref variable will look like (the text of the prog.customer_specific node is set to assign_wrapper):
<ast col="0" id="34359738410" line="0" text="assign_wrapper" 
     type="CUSTOMER_SPECIFIC">
  <annotation datatype="java.lang.Boolean" key="assign_wrapper" 
              value="true"/>
  <annotation datatype="java.lang.Long" key="refid" 
              value="<refid>"/>
  ...
</ast>

Assuming the left side of the assignment is a character _username_ variable, we can create a prog.customer_specific node which will be converted as a Java method call and will act as the parameter of the assign wrapper call:
<action>
   lastref = createProgressAst(prog.customer_specific,
                               "_securitymanager_.getUserId",
                               lastref,
                               0)
</action>

with the resulting AST looking like:
<ast col="0" id="34359738410" line="0" text="assign_wrapper" 
     type="CUSTOMER_SPECIFIC">
  <annotation datatype="java.lang.Boolean" key="assign_wrapper" 
              value="true"/>
  <annotation datatype="java.lang.Long" key="refid" 
              value="34359738409"/>
  ...
  <ast col="0" id="34359738411" line="0" 
       text="_securitymanager_.getUserId" 
       type="CUSTOMER_SPECIFIC">
    ...
  </ast>
</ast>

while the generated Java code will look like:
_username_.assign(_securitymanager_.getUserId());

constructor add_constructor The constructor annotation instantiates a new object, with the type specified by the text of the prog.customer_specific node.
Assuming we want to rewrite all RUN foo.p statements to instantiate a FooOverwritten object, the following TRPL code is needed:
<rule>type == prog.statement and downPath("KW_RUN/FILENAME")

   <!-- get the filename node -->
   <action>
      ref = copy.getChildAt(0)
                .getImmediateChild(prog.filename, null)
   </action>

   <rule>ref.text.equalsIgnoreCase("foo.p")
      <!-- change statement node -->
      <action>copy.setType(prog.customer_specific)</action>
      <action>putNote("constructor", true)</action>

      <!-- set the c'tor -->
      <action>copy.setText("FooOverwritten")</action>
      <!-- remove the statement's children,
           to leave only the new node  -->
      <action>copy.getChildAt(0).remove()</action>
   </rule>
</rule>

The AST will change from:
<ast col="0" id="38654705693" line="0" text="statement" 
     type="STATEMENT">
  <ast col="1" id="38654705694" line="30" text="run" 
       type="KW_RUN">
    <ast col="5" id="38654705697" line="30" 
         text="foo.p" type="FILENAME"/>
  </ast>
</ast>

to:
<ast col="0" id="38654705693" line="0" 
     text="FooOverwritten" type="CUSTOMER_SPECIFIC">
  <annotation datatype="java.lang.Boolean" key="constructor" 
              value="true"/>
  ...
</ast>

where the text of the prog.customer_specific node is set to the Java class name which needs to be instantiated.
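
In the generated business logic, such a node is emitted as an instantiation of the class named by the node's text; as a rough sketch (the exact statement depends on the surrounding conversion context):
new FooOverwritten();
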
default handling (no annotation) n/a Any prog.customer_specific node which was not already handled (is not marked with one of the previous annotations) will be converted to a Java static or instance method call.
Assuming we want to replace all 4GL java-static-call() function calls with ExternalClass.javaStaticCall() static calls in the generated Java code, we can alter the AST which defines the function call in 4GL so it becomes a prog.customer_specific node, with its text set to the static method call:
<rule>evalLib("function_calls")
   <rule>type == prog.func_char and
         text.equalsIgnoreCase("java-static-call")
      <action>copy.setType(prog.customer_specific)</action>
      <action>
         copy.setText("ExternalClass.javaStaticCall")
      </action>
   </rule>
</rule>

This will change the AST from:
<ast col="9" id="55834574944" line="28" text="java-static-call" 
     type="FUNC_CHAR">
  <annotation datatype="java.lang.Long" key="oldtype" 
              value="2332"/>
  <annotation datatype="java.lang.Long" key="refid" 
              value="55834574874"/>
</ast>

to:
<ast col="9" id="55834574944" line="28" 
     text="ExternalClass.javaStaticCall" 
     type="CUSTOMER_SPECIFIC">
  <annotation datatype="java.lang.Long" key="oldtype" 
              value="2332"/>
  <annotation datatype="java.lang.Long" key="refid" 
              value="55834574874"/>
  <annotation datatype="java.lang.Boolean" key="wrap" 
              value="true"/>
</ast>

To convert to an instance call, set the text of the prog.customer_specific node to the method name and (as sketched after this list):
     set the refid annotation to reference the AST where the instance variable is defined.
     set both the needs_param and nonstatic annotations to true.
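For example, assuming the copy variable references the prog.customer_specific node, refid holds the AST ID of the instance variable's definition, and someMethod is a hypothetical instance method name, the annotations could be set roughly as follows:
<!-- someMethod is a hypothetical instance method name -->
<action>copy.setText("someMethod")</action>
<action>copy.putAnnotation("refid", <refid>)</action>
<action>copy.putAnnotation("needs_param", true)</action>
<action>copy.putAnnotation("nonstatic", true)</action>
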
import n/a When this String annotation exists for a prog.customer_specific node, the package/class specified by the annotation will automatically be added as an import.
Assuming the lastref variable references a prog.customer_specific node, the following TRPL code will set the annotation:
<action>
   lastref.putAnnotation("import", <package_or_class>)
</action>

and the new annotation will be added to the AST as in (the text of the prog.customer_specific node is empty):
<ast col="0" id="38654705693" line="0" text="" 
     type="CUSTOMER_SPECIFIC">
  <annotation datatype="java.lang.String" key="import" 
              value="<package_or_class>"/>
  ...
</ast>

In the generated Java code, an import declaration will be added, as in:
import <package_or_class>;

handled n/a If this boolean annotation exists and is set to true, the prog.customer_specific node will not be processed by the standard tools (this indicates it was handled by the custom conversion rules - see the next section, Conversion Hooks, for details).
When setting this annotation, the resulting AST associated with the prog.customer_specific node will look like:
<ast col="0" id="38654705693" line="0" text="" 
     type="CUSTOMER_SPECIFIC">
  <annotation datatype="java.lang.Boolean" key="handled" 
              value="true"/>
  ...
</ast>
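
Assuming the lastref variable references the prog.customer_specific node (as in the earlier examples), the annotation itself can be set with a single TRPL action:
<action>lastref.putAnnotation("handled", true)</action>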

n/a rewrite_run Rewrites RUN program statements so that a specified Java method is executed instead. Accepts these parameters:
     ref represents the AST which holds the RUN program statement (a node com.goldencode.ast.Aast instance, of prog.statement type)
     mthd represents the method to be executed (a java.lang.String value). This can be either a static or instance method, assuming the instance variable was defined previously.
     import is the package/class import required to have access to the specified method. This is an optional parameter, and must be a java.lang.String value or null.
Assuming all RUN util/obsolete-program.p calls must be rewritten to OtherProgram.newMethod() calls, executing the following TRPL code:
<rule>type == prog.statement and downPath("KW_RUN/FILENAME")

   <variable name="ref"   type="com.goldencode.ast.Aast" />

   <!-- get the filename node -->
   <action>
      ref = copy.getChildAt(0)
                .getImmediateChild(prog.filename, null)
   </action>

   <rule>ref.text.equalsIgnoreCase("util/obsolete-program.p")
      <action>
         execLib("rewrite_run",
                 ref,
                 "OtherProgram.newMethod",
                 "com.compay.app.util.*")
      </action>
   </rule>
</rule>

will rewrite the RUN statement's AST from:
<ast col="0" id="73014444076" line="0" text="statement" 
     type="STATEMENT">
  <ast col="1" id="73014444077" line="61" text="run" 
       type="KW_RUN">
    <ast col="5" id="73014444080" line="61" 
         text="util/override-this.p" type="FILENAME"/>
  </ast>
</ast>

to:
<ast col="0" id="73014444076" line="0" 
     text="OtherProgram.newMethod" type="CUSTOMER_SPECIFIC">
  <annotation datatype="java.lang.String" key="import" 
              value="com.compay.app.util.*"/>
</ast>

Note how the text of the prog.customer_specific node is set to the Java static method call.
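
The generated Java code then calls the replacement method in place of the original RUN statement, as in:
OtherProgram.newMethod();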

More complex examples demonstrating how the standard tools can be used in the customer_specific_annotations_prep rule-set can be found in the Post-Conversion Progress 4GL Development → Techniques for Integrating Java Code into Reconverted 4GL Code section of the Developing 4GL Code During and After Conversion chapter of the FWD Developer Book.

Conversion Hooks

The core conversion sub-phase of the code back-end phase invokes customer_specific_conversion. This rule-set, if present, resides in the file $P2J_HOME/pattern/customer_specific_conversion.rules.

Since it is invoked within the context of the creation of Java ASTs which are based on the fully annotated set of 4GL ASTs, this is where you would implement rules to leverage the annotations added or transformations performed during customer_specific_annotations_prep, in order to emit custom Java code into business logic, or to arbitrarily change the way certain 4GL constructs convert into Java.

The customer_specific_conversion.rules file should check only for prog.customer_specific nodes; thus, its structure must be:

<?xml version="1.0"?>

<rule-set input="tree">

   <!-- register worker objects -->
   <worker class="com.goldencode.p2j.uast.ProgressPatternWorker" 
           namespace="prog" />
   <worker class="com.goldencode.p2j.uast.JavaPatternWorker" 
           namespace="java" />
   <worker class="com.goldencode.p2j.convert.ConverterHelper" 
           namespace="help" />
   <worker class="com.goldencode.p2j.pattern.TemplateWorker" 
           namespace="tw" />

   <!-- define needed variables -->
   <variable name="ref"  type="com.goldencode.ast.Aast" />
   <variable name="flag" type="java.lang.Boolean" />
   <!-- ... add as many variables as you need -->

   <!-- expression libraries -->
   <include name="common-progress" />

   <!-- define all needed private functions -->
   <func-library access="private">
       <!-- ... -->
   </func-library>

   <!-- main processing (once per tree) -->
   <init-rules>
      <rule>tw.load("convert/java_templates")</rule>
   </init-rules>

   <walk-rules>

      <!-- check if we are on a prog.customer_specific node -->
      <rule>type == prog.customer_specific

         <!-- this flag will be set to true after we've processed
              this node, so further rules will automatically skip it -->
         <action>flag = false</action>

         <rule>!flag and [node satisfies some condition]
            <!-- set the flag to true, so the next rule checks will be
                 skipped faster -->
            <action>flag = true</action>

            <!-- do something with the node -->
         </rule>

         <rule>!flag and [node satisfies some other condition]
            <!-- set the flag to true, so the next rule checks will be
                 skipped faster -->
            <action>flag = true</action>

            <!-- do something with the node -->
         </rule>

         <!-- ... add as many rules as you need -->

         <rule>flag
            <!-- if this node was handled by these rules, mark it as
                 handled so the standard rules will skip it -->
         <action>copy.putAnnotation("handled", true)</action>
         </rule>
      </rule>

   </walk-rules>

</rule-set>

As mentioned in the Annotation Hooks section, any prog.customer_specific node which was handled by customer_specific_conversion and must not be processed by the standard tools must either be removed or annotated with a handled annotation set to true, as in (assuming we are currently processing the prog.customer_specific node):

<action>copy.putAnnotation("handled", true)</action>

Details about implementing core conversion hooks are provided in the book FWD Internals.

Summary

FWD offers numerous ways to tailor conversion output to the specific needs of your project. This chapter explored ways to customize name conversions of various code and data constructs, including assigning fixed names to temp-table-related DMO interfaces. We discussed adding “synthetic” indexes to converted database tables to improve performance, and how to drop tables. We introduced the topic of extending the base conversion logic using customer specific hooks implemented in TRPL to arbitrarily remove or restructure business logic, or to inject custom Java code.


© 2004-2017 Golden Code Development Corporation. ALL RIGHTS RESERVED.