Introduction

Overview

This document is specifically focused on guiding the reader through the transformations that occur when using the FWD automated conversion technology. The FWD project represents technology that can be used for two primary purposes:

  • Conversion - convert Progress ABL source code into a functionally equivalent, drop-in replacement written in the Java language (see the illustrative sketch following this list).
  • Runtime - run the resulting converted Java application, providing the functions and features of Progress ABL in a compatible manner.
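
To make the conversion side of this concrete before the detailed chapters, the sketch below pairs a trivial 4GL fragment with a schematic Java equivalent. This is only an illustration of the goal; the actual output produced by FWD is built on the FWD runtime's 4GL-compatible types and block management, and is documented in Parts 4 through 6.

   // 4GL input (illustrative):
   //
   //    DEFINE VARIABLE cnt AS INTEGER INITIAL 0.
   //    cnt = cnt + 1.
   //    MESSAGE "Count is" cnt.
   //
   // Schematic Java equivalent (NOT literal FWD output):
   public class CountExample
   {
      public static void main(String[] args)
      {
         int cnt = 0;                             // DEFINE VARIABLE cnt AS INTEGER INITIAL 0.
         cnt = cnt + 1;                           // cnt = cnt + 1.
         System.out.println("Count is " + cnt);   // MESSAGE "Count is" cnt.
      }
   }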

The purpose of this book is to allow a developer or consultant to know exactly what outputs will result from the FWD conversion process, given a specific set of inputs. The outputs that are documented are heavily dependent upon the runtime portions of FWD, but the details of the runtime processing will not be covered in this book. For details on the runtime, see the chapter on Runtime Architecture. More details are available in the book entitled FWD Internals.

The details of how to use the FWD conversion tools are not covered in this book, but those details can be found in the book entitled FWD Conversion Handbook. How the conversion process works is similarly out of scope for this book, but a high level description can be found in Conversion Technology Architecture and more details are available in the book entitled FWD Internals.

This book is split into 6 parts.

Part 1 - Conversion Front End describes the inputs that can be successfully processed by the conversion front end. The conversion front end is deliberately designed to process a very wide range of possible Progress 4GL inputs, even when the conversion middle or back end phases cannot successfully transform those inputs into semantically equivalent replacements. The purpose of such a design is to allow the FWD tools (especially the reporting and analysis tools) to be utilized on a very wide range of valid projects. The FWD tools depend upon the outputs from the conversion front end in order to properly operate. This enables a developer to deeply inspect an application using the powerful tools available in FWD. Using the knowledge gathered from these tools, that developer can then make the necessary modifications to the conversion middle and back end phases to handle any gaps.

Part 2 - Application Structure provides a high level overview of which outputs are created from each kind of input. This gives a developer a map of where (in the outputs) to find the replacement functionality based on each input.

Part 3 - Schema Conversion describes the transformations which are supported in the middle phase of the conversion process. This “conversion middle” is the phase that converts each database schema and temp-table/work-table schema into the semantically identical replacement definition in a relational database. This includes all the structural elements of the database, such as fields, tables, and indexes, as well as the corresponding configuration of those elements, such as names and data types.
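
Purely as a conceptual illustration of the kind of mapping involved (the names and shape below are hypothetical, not actual FWD output; the real artifacts are described throughout Part 3), each field of a table or temp-table must reappear on the relational/Java side with a legal name and an equivalent data type:

   // 4GL temp-table definition (illustrative):
   //
   //    DEFINE TEMP-TABLE tt-order NO-UNDO
   //       FIELD order-num  AS INTEGER
   //       FIELD order-date AS DATE.
   //
   // Hypothetical sketch of the mapped structure (NOT FWD's generated code):
   import java.time.LocalDate;

   public interface TtOrder
   {
      Integer getOrderNum();              // order-num,  4GL type INTEGER
      void setOrderNum(Integer value);

      LocalDate getOrderDate();           // order-date, 4GL type DATE
      void setOrderDate(LocalDate value);
   }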

Parts 4 through 6 describe the supported transformations that occur in the last phase of the conversion (the code back end). This is where the application source code is converted into its Java equivalent. The back end is split into 3 parts based on a rough categorization of functionality: Part 4 - Base Language, Part 5 - Database Access and Part 6 - User Interface. The transformation of each valid 4GL code input that has a valid output construct is described, along with all supported options.

A developer using this book can understand exactly how a 4GL application will convert, as well as which gaps exist in what can be converted. In addition, a developer can understand the output of the FWD conversion process and map that back to each input. Finally, this book allows a developer to understand how to manually code Java source that is the equivalent of specific 4GL inputs.

Reading Syntax Diagrams

Some sections of this book may have syntax diagrams which describe specific parsing rules of some FWD component. A parsing rule is the logic by which the parser matches a specific set of inputs. It is the valid grammar for the input that will be accepted by that parser. For each rule there is a syntax diagram providing a visual depiction of the logic.

The following example, the syntax diagram for the sequence rule, is instructive:

On the left is the name of the rule; in this example the rule is named sequence. The matching logic is read left to right. Starting at the leftmost arrow, the sequence rule expects to match a token of the type KW_ADD, which would be generated by a lexer or tokenizer of the input stream. A lexer converts a stream of characters (normally read from a file) into a stream of tokens. Each token is a “word” in the language being parsed, and that word is made up of one or more related characters from the input stream. In the above example, the ProgressLexer reads the text ADD and creates a token with the type KW_ADD as a result.
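
As a generic sketch of this idea, reusing the token type names from the example (this is not FWD's actual lexer implementation; the classes below are purely illustrative), a token can be modeled as a pairing of a token type with the matched text:

   // Generic illustration of lexing only; not part of FWD.
   enum TokenType { KW_ADD, KW_SEQUENCE, STRING }

   // A token pairs its type with the exact text matched from the input stream.
   record Token(TokenType type, String text) { }

   class LexerIdea
   {
      public static void main(String[] args)
      {
         // Reading the characters A, D, D from the input yields one token.
         Token t = new Token(TokenType.KW_ADD, "ADD");
         System.out.println(t.type() + " matched \"" + t.text() + "\"");
      }
   }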

After the KW_ADD token, the parser expects to read a token of the type KW_SEQUENCE (which matches the text SEQUENCE), then a string literal (token type STRING) such as “sequence-name”. In this example, there can only be one of each of these constructs and each of these is required. Following the STRING are zero or more rule references. In this case, the contents of the rules initial, increment, cycleOnLimit, minVal and maxVal can appear in any order, any number of times, or not at all.
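
As a purely hypothetical example of input that this rule would accept (the exact clause spellings shown here are illustrative), a fragment along these lines provides ADD, then SEQUENCE, then the quoted name, followed by several of the optional clauses:

   ADD SEQUENCE "next-ord-num"
     INITIAL 1000
     INCREMENT 1
     CYCLE-ON-LIMIT no
     MIN-VAL 1
     MAX-VAL 999999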

Any blue text surrounded by a rectangle is a specific token type to match.

Any purple text in a rectangle with rounded corners is a reference to another rule. See the section for that rule to understand what can be matched there. This can be considered a “call” to the rule, just like a method or function is called in a programming language. The sub-tree created by that rule (and any other rules that that rule references) will be grafted into the resulting tree in place of the rule reference.
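
Continuing the “call” analogy with a hedged sketch (FWD's parsers are generated from grammar definitions and are not hand-written like this; the token and helper names below are hypothetical), the sequence rule from the earlier example can be imagined as a method that matches its required tokens and then calls the referenced rules:

   import java.util.List;

   // Hand-written analogy only: each rule becomes a method and a rule reference
   // becomes a call to another rule's method.
   class SequenceRuleSketch
   {
      private final List<String> tokens;   // token types, in input order
      private int pos = 0;

      SequenceRuleSketch(List<String> tokens) { this.tokens = tokens; }

      // the sequence rule: KW_ADD KW_SEQUENCE STRING, then optional clauses
      void sequence()
      {
         match("KW_ADD");
         match("KW_SEQUENCE");
         match("STRING");
         while (pos < tokens.size())
         {
            switch (tokens.get(pos))
            {
               case "KW_INITIAL"   -> initial();     // a "call" to the initial rule
               case "KW_INCREMENT" -> increment();   // a "call" to the increment rule
               default -> throw new IllegalStateException("unexpected " + tokens.get(pos));
            }
         }
      }

      void initial()   { match("KW_INITIAL");   match("NUMBER"); }
      void increment() { match("KW_INCREMENT"); match("NUMBER"); }

      private void match(String expected)
      {
         if (pos >= tokens.size() || !tokens.get(pos).equals(expected))
         {
            throw new IllegalStateException("expected " + expected);
         }
         pos++;
      }

      public static void main(String[] args)
      {
         // token types for: ADD SEQUENCE "next-ord-num" INITIAL 1000 INCREMENT 1
         new SequenceRuleSketch(List.of("KW_ADD", "KW_SEQUENCE", "STRING",
                                        "KW_INITIAL", "NUMBER",
                                        "KW_INCREMENT", "NUMBER")).sequence();
         System.out.println("sequence rule matched");
      }
   }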

The arrows indicate whether something is optional (there is a bypass arrow that allows one to “go around” the construct) or required, whether there are alternatives to a given match, whether something can be repeated (a loop back to the beginning of some construct), and the order of the matching.

A rectangular box with blue text can represent a list of possible token types that match. Here is an example where KW_CP and KW_CPSTREAM can both be encountered. Either one will match, and exactly one of them must be present.


© 2004-2022 Golden Code Development Corporation. ALL RIGHTS RESERVED.