Writing 4GL Testcases

Introduction

In the FWD project, there are multiple reasons for writing 4GL testcases:

  1. To demonstrate a use of the 4GL language which can show a difference between the FWD implementation and Progress 4GL/OpenEdge. In short, this is to recreate a bug in FWD.
  2. To comprehensively explore the behavior and functionality of 4GL features so that a specification can be written, such that a replacement can be implemented in FWD.
  3. To test the compatibility of the FWD implementation of some 4GL feature. The same suite of tests from item 2 above is also used for this purpose.

Follow these guidelines when implementing 4GL testcases.

Simple is Better

Write each testcase with the least amount of features and dependencies which will still exhibit the behavior you are trying to test/explore.

If you are copying existing code, you should try to remove unnecessary options/clauses/phrases, unnecessary triggers, functions and statements. Strip such code down to only those things absolutely required. This makes it more likely that the code will convert and execute. It also makes it easier to see exactly what the code is doing, without any question about whether the other features are or are not needed.

GUI vs ChUI

If you are working on code that is not dependent upon the type of user interface, please write the testcases so that they can be executed in ChUI mode. This provides a much easier path to testing the 4GL code, since it allows testing via ssh to a Linux system and avoids the need for a Windows 4GL GUI system, Windows platform dependencies and all the extra overhead that comes along with it.

Using Database (or Not)

Do NOT use the database features of the language unless you are working on code that requires it. Using database features increases complexity of testcases and if it is not needed, then the additional complexity can unnecessarily obscure the code in testcases. We always want to keep each test as simple as possible.

When you do need to use database support, your testcase should use temp-tables where possible. Using temp-tables makes the resulting runtime environment much simpler and the testcase easier to execute, since a permanent database would not need to exist and be filled with specific data. Of course, if you are exploring a feature to determine how it works differently between permanent tables and temp-tables, then you have no choice but to use a permanent database. Likewise, some 4GL features only work on permanent tables. For example, testing transaction isolation/interaction across user sessions requires a permanent database table/index, since temp-tables are specific to a single user's session.
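A minimal temp-table testcase might look like the following sketch (the tt-item table, its fields and the sample data are invented for illustration):

def temp-table tt-item
   field item-id   as int
   field item-name as char
   index idx-id is primary unique item-id.

create tt-item.
assign tt-item.item-id   = 1
       tt-item.item-name = "first".

for each tt-item:
   display tt-item.item-id tt-item.item-name.
end.

No schema export or data import is needed; the testcase is fully self-contained.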

No Use of Progress Software Corporation Sample Databases or Code

Whenever using database features, whether temp-tables or permanent tables, do not use any code or database that is owned or may be owned by Progress Software Corporation (PSC). For example, PSC provides the "Sports" or "Sports2000" database. Although it is likely these sample databases are not protected by licensing restrictions, the licensing of these databases is not clear. For this reason, do not export Progress' sports database schema or data. Do not convert any code provided by PSC.

All testcases MUST be written to be completely independent of anything owned by PSC.

Create Your Own Test Database Schema and Data

It is important to note that it is very rare that you would need to use a permanent database for UI (or even base language) testcases. All of the language statements you need to test (e.g. UPDATE or CHOOSE) work the same way for regular variables, temp-tables and permanent tables. So usually, it is best to just use variables and avoid all the database nonsense. If you really need to see how the database integrates with some UI or base language feature, try to use temp-tables first. It is possible that some features only exist when using a permanent database AND/OR there are some use cases that just need to be tested in both temp and permanent database environments. But these should be rare cases when doing UI work.

Likewise, we must not include any customer code or customer data in our testcases. If you are basing a testcase on something seen in customer code, please write it yourself using what you have seen, rather than copying the code or data.

If a permanent table is necessary, please use the TBD: details to come, including which schemata are available, how to coordinate changes, how to provide data for import/population/reset. If you make modifications, you'll have to check in any schema and static data changes when you check in your testcases.

Testcase Project

You will need access to the testcase repository in which you will add your tests. It can be obtained from xfer.goldencode.com as follows:

bzr co sftp://<userid>@xfer.goldencode.com/opt/testcases

This will create a testcases subdirectory in your current directory. From there, you would add a testcases/create-result-list-entry/ subdirectory and put your tests in there. Other people are working in the other directories, so if you need to change something there, let's coordinate that change. Please check in your changes each day, even if they are not complete or functional. As long as you aren't changing parts of the repo being worked on or used by others, it doesn't matter if broken code is checked in to testcases/create-result-list-entry/.

See this example of how to set up a conversion project for the testcases project.

Testcase Organization

Please organize all testcases in sub-directories of xfer.goldencode.com/opt/testcases/. Name the subdirectory based on the feature/functionality being tested (e.g. testcases exploring buttons in the UI might be in a xfer.goldencode.com/opt/testcases/ui/button/ directory).

You will have to make some changes to allow things to run as a relative path from the xfer.goldencode.com/opt/testcases/ directory. For example, RUN statement and include file pathing will need to have the sub-directory pathing prepended.
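For example, a test runner in a ui/button/ sub-directory would reference its include files and called procedures with the sub-directory prefix (the file names here are hypothetical):

/* ui/button/button_tests.p */
{ui/button/common_defs.i}

run ui/button/button_default_label.p.
run ui/button/button_no_label.p.

This way the code runs correctly when the 4GL session's working directory is the top-level testcases/ directory.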

For particularly large/complex sets of tests, there may be multiple levels of sub-directories to organize the related tests into groups. The library_calls/ example in the next section has library_calls/misc/ and library_calls/return/ sub-directories, for instance. Common features and the main test runner are in the library_calls/ directory itself, and the testcases that are related to library call return value processing are in the library_calls/return/ directory.

Use Descriptive File Names

Please don't just write testcases that are named testcase.p. We need testcase file names that more accurately describe the test being executed. For example, library_calls/misc/case_insensitive_proc_name_in_run.p is a pretty clear name that describes the test being executed.

Another bad approach to avoid is to number testcases like button_test0.p, button_test1.p, ... Although we can tell the tests are related to buttons, we can't tell what specifically is being tested about buttons. You should not have to read the code to have a decent idea of what the test does.

Automated/Batch Execution

All non-interactive testcases should be written such that they can be executed as part of an automated suite of tests.

When exploring a new 4GL feature, you will virtually always need many testcases in order to exhaustively determine the behavior of the feature. It is a very rare feature which would only need a single testcase. Considering this, the tests should be organized so that they can be executed as a single batch. It is useful to create a main "test runner" program that calls the other external procedures. For example, here is the xfer.goldencode.com/opt/testcases/library_calls/test_runner.p:

{library_calls/log_hlp.i}
{library_calls/libname.i}

def var i as int.

/* clear the logfile */
os-delete "testapi.log".

run log-it (input "test_runner start").

run library_calls/load_unload/load_unload_tests.p.

/* this is only called to force a persistent load of the library into memory */
/* since we are no longer processing the load/unload tests; the rest of the */
/* tests can be broken if the library is reloading constantly */
run ptr_size (output i).

run library_calls/misc/misc_tests.p.
run library_calls/return/return_parameter.p.
run library_calls/output/output_parameter.p.
run library_calls/input_output/input_output_parameter.p.
run library_calls/input/input_parameter.p.

/* this actually releases the library (on Windows, anyway) */
release external "{&libname}".

run log-it (input "test_runner end").

procedure ptr_size external "{&libname}" cdecl persistent:
   def return parameter sz as long.
end.

This code initializes logging, logs the start and end of processing and calls each set of sub-tests in turn. The external procedures that are called are actually sets of related sub-tests. For example, here is the content of library_calls/misc/misc_tests.p:

{library_calls/log_hlp.i}

run log-it (input "misc tests start").

do on stop undo, leave on error undo, leave:
   run library_calls/misc/name_decoration.p.
   run library_calls/misc/find_by_name_failure.p.
   run library_calls/misc/find_by_ordinal.p.
   run library_calls/misc/case_insensitive_proc_name_in_run.p.
   run library_calls/misc/order_of_input_checks.p.
   run library_calls/misc/order_of_output_checks.p.
   run library_calls/misc/polymorphic_type_support.p.
   run library_calls/misc/os_api_call.p.
   run library_calls/misc/return_value_impact.p.
end.

run log-it (input "misc tests end").

Again, note how this initializes logging, logs the start/end and executes a set of specific tests. This structure can support testcases of arbitrary complexity.

In each individual test, you should also ensure that the code does logging. Here is an example from library_calls/misc/find_by_ordinal.p:

{library_calls/log_hlp.i}
{library_calls/libname.i}

def var i as int.
def var errmsg as char.

run log-it (input "find_by_ordinal start").

... main test code here ...

run log-it (input "find_by_ordinal end").
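The content of log_hlp.i is not shown here, but a logging helper like it can be sketched as follows (this is an illustrative approximation, not the actual include file):

/* log_hlp.i (sketch only, not the real include file) */
procedure log-it:
   def input parameter txt as char.

   output to "testapi.log" append.
   put unformatted txt skip.
   output close.
end.

procedure log-err:
   def input parameter txt as char.

   /* failures get a marker so they are easy to find in the log */
   run log-it (input "ERROR: " + txt).
end.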

Another important feature of automated/batch processing is the need to test against captured results. One example of this is when there is an error condition that occurs as a result of some code that fails at runtime. Here is an example from library_calls/misc/find_by_ordinal.p:

/* non-existent ordinal (and function name) */
run some-unknown-garbage-function-name no-error.
errmsg = error-status:get-message(1).
if not error-status:error or
   not error-status:get-number(1) eq 3259 or
   not errmsg = "Could not find the entrypoint 15557. (3259)" 
   then run log-err (input "Expected error 3259 for missing ordinal. Got '" + errmsg + "'.").

In this example, the procedure named some-unknown-garbage-function-name does not exist. The code generates a runtime error 3259 in the 4GL. We run the code using NO-ERROR here so that the failure does not display and does not pause. Using NO-ERROR also allows the code to check for the recorded ERROR-STATUS data that can prove that the error 3259 was raised as expected.

Of course, where a 4GL feature actually has results which the program can check directly (as opposed to ERROR-STATUS side-effects), that is the best choice. Something like this:

num = ?.
run return_uint8_cdecl (output num).
if num ne 0 then run log-err (input "Expected 0 from 'byte into return integer' test but found " + string(num)).

When there is something that can't be encoded into the source as a direct test, put the details or other results into comments so that the result is still captured and stored with the testcase. This is especially useful for ChUI-based code that has some visual results on the screen which can't be tested directly. We often actually cut/paste the 4GL screen data into a comment to make it available for future reference.

Testing Block Structure, Scoping and Transaction Properties

The original idea of the 4GL was to enable non-programmers to create real applications. This was a miserable failure. But the idea itself led to certain design decisions that are deeply embedded into the 4GL. In particular, the 4GL has a great deal of implicit (and explicit) behavior regarding the block structure of the program. Each block has properties relating to transactions, resources, error processing and control flow. Each block has runtime state that can change the control flow and processing of a program in ways that cannot be easily seen by reading the code. Some blocks have special behaviors with frames (or other UI features) and queries (and other database features like buffers).

For this reason, it is important to understand when a feature may have some behavior that relates to blocks, block properties, scoping or transactions. In such cases, you must very carefully test variants to determine the extent of any dependency and to explore the full behavior of the feature in light of any dependencies.

Block Types:

  • Top-Level
    • External Procedure
    • Internal Procedure
    • Function
    • Trigger
    • Method
  • Inner Blocks
    • DO
    • REPEAT
    • FOR
    • EDITING

All blocks have implicit properties that are there by default. All inner blocks also have specific options that can be specified explicitly, which can modify the block's properties from the default. You should also consider how these blocks nest. Any feature with behavior that is dependent upon blocks will typically have some behavior that changes based on block nesting.

The location of the definition and references of many resources will determine the scoping of those resources. The scope of a resource is the block with which that resource is associated. It usually determines a great deal of implicit behavior. For example, the scope of a buffer determines when it releases records, when its record may get implicitly changed, when exclusive locks might get degraded to share locks, when database triggers are called on the record, when modifications to a record are committed... Frames are another resource which is scoped. There is a range of UI behavior associated with that, including the lifetime of the frame (and contained resources) as well as implicit DOWN behavior.

Any feature that interacts with resources that need to commit/rollback (UNDO in 4GL terms), or which are aware of record locking behavior, will likely also be dependent upon the transaction properties of the enclosing blocks. The 4GL has full transaction, sub-transaction and no-transaction levels that are associated with a block. All blocks have one of these 3 transaction types. Please note that a sub-transaction is a kind of partial commit where the current changes are added into the overall change set (which can still be rolled back). If only the sub-transaction is rolled back, then the current block's (and contained blocks') changes are rolled back, but the previously created changes from outside the current block are not. The 4GL has unlimited levels of sub-transaction support. Testing features in all 3 transaction types can be important.
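The sub-transaction behavior can be seen in a sketch like this (tt-item is an invented temp-table; note that a DO block only becomes a sub-transaction when it has error/undo properties such as an ON ERROR phrase):

def temp-table tt-item
   field item-id as int.

def var cnt as int no-undo.

do transaction:
   create tt-item.
   tt-item.item-id = 1.         /* change made at the full transaction level */

   inner:
   do on error undo inner, leave inner:
      create tt-item.
      tt-item.item-id = 2.      /* change made at the sub-transaction level */
      undo inner, leave inner.  /* rolls back only this block's changes */
   end.
end.

for each tt-item:
   cnt = cnt + 1.
end.

/* only the record created outside the inner block should survive the undo */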

You must also test looping and retry behavior. Block iteration can cause a great deal of implicit behavior from DOWN in frames to queries that advance to the next record. Retry is a special 4GL feature where the current block is reset and re-executed from the top, depending on runtime state and some condition that will cause the retry. Please note that retry is not limited to looping blocks. Any block can be retried.
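Here is a sketch of an explicit retry of a non-looping DO block. Be aware that the 4GL has infinite loop protection which can silently convert a retry into LEAVE or NEXT when the block contains no user-interaction statement; that protection is itself behavior worth testing.

def var attempts as int no-undo.

my-block:
do on error undo my-block, retry my-block:
   attempts = attempts + 1.

   /* force one retry; the RETRY function is true during the retry pass */
   if attempts < 2 and not retry then
      undo my-block, retry my-block.
end.

message "block body entered" attempts "time(s)".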

Our documentation (both the FWD javadoc and the FWD books) has much detail on all of the above features. These are described here to give you a good idea of the kinds of variations you need to consider when trying to fully explore a 4GL feature. Refer to the documentation for the full details.

Testing ERROR/ENDKEY/STOP/QUIT Conditions and NO-ERROR

The 4GL runtime will often generate one of 4 possible "conditions". These are used to signal abnormal behavior or sometimes just to change the flow of control. The conditions are ERROR, ENDKEY, STOP or QUIT.

ERROR and ENDKEY can be generated by user actions (e.g. the END-ERROR key) and are very commonly raised by 4GL code in response to unexpected problems.

STOP is used to denote a fatal problem.

QUIT is raised to cause the program to exit.

All of these can be "caught" using inner blocks that have an implicit or explicit ON phrase (e.g. ON ERROR undo, retry my-block). This means that you can configure blocks such that you can use the control flow to test exactly whether or not a specific condition is raised. For example:

def var fails as int.

fails = 0.

my-block:
do on error undo, leave my-block:
   /* do something here that should raise an ERROR */

   /* this code should never be reached */
   fails = fails + 1.
end.

if fails ne 0 then /* do something here to log or report the failure */.

Of course, you can reverse this logic to prove that the code does not raise an error and that the code at the end of the block did get executed.
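The reversed form proves that no ERROR is raised and that the end of the block did get executed:

def var reached as logical.

reached = no.

my-block:
do on error undo, leave my-block:
   /* do something here that should NOT raise an ERROR */
   reached = yes.
end.

if not reached then /* log or report the failure: an unexpected ERROR occurred */.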

There are some rare cases where NO-ERROR does not actually suppress the error. These are always undocumented "features" (really: just bugs) in the 4GL. But if you encounter such a case, you can use the block processing to "eat" the failure:

my-block:
do on error undo, leave my-block:
   /* do something here that raises an ERROR even when NO-ERROR is used */
   /* typically, you still have to specify NO-ERROR so that the error message doesn't display */
end.

/* execution continues here */

You often need to test generated errors to confirm which of these (if any) are raised. Most commonly, 4GL features will raise the ERROR condition.

If you know that an ERROR condition is raised in a certain situation, then you can often use the NO-ERROR clause to allow the code to continue to execute while you then check the ERROR-STATUS handle to confirm that the details of the raised error match what is expected. There is an example of this in the Automated/Batch Execution section above.

Not all 4GL errors throw "exceptions"; some of them look like warnings, where an apparent error message is displayed but the actual ERROR condition is not raised. These warnings will sometimes record info into the ERROR-STATUS handle and sometimes they won't. The 4GL is just not very consistent.

For such cases, please look at the ErrorManager's APIs, like recordOrShowError(), recordOrThrowError(), displayError(), etc. Sometimes errors are shown with or without a starting **. You will have to use the ERROR-STATUS:ERROR and related attributes to check if an error was registered by the call. Depending on the ERROR flag, you will decide on which ErrorManager API to use for the "error".

There is a 4GL feature called SESSION:SUPPRESS-WARNINGS. This attribute persists within a session: if one program sets it to YES (e.g. from the procedure editor), a subsequently run program which does not set the attribute itself will inherit the value from the previous run. Thus, when dealing with errors, it is recommended to start the test program from the command line, using the pro -p <progname> command. If you need permanent database support, use pro -p <progname> -db <location of your p2j_test.db file>. Try to avoid the use of SESSION:SUPPRESS-WARNINGS; otherwise you will lose sight of generated warning messages.

For each error handling case, you need to determine how it behaves in the 4GL and choose the ErrorManager API accordingly (i.e. recordOrThrowError() if the ERROR condition is raised, or displayOrLogError() if the ERROR condition is not raised but error messages are recorded).

Checking In Testcases

As part of the testcase work you are authorized to perform, a location in the testcases repository will be allocated where your source files should be added. All 4GL testcases that we create are stored in the bzr repo xfer.goldencode.com/opt/testcases. Usually, the files should be stored in a subdirectory that is named for the 4GL feature you are investigating. If a testcase is useful to you, then it is something that needs to be checked in.

Only check in to this project if you are authorized to do so. This project is being carefully curated. If you have any questions about whether you should check in changes or how to do so safely, please discuss this with your FWD project contact before you check in the files.

If you need to reference these in Redmine, just check them into bzr and then reference the name in the Redmine task history like xfer.goldencode.com/opt/testcases/some_feature/testcase1.p.

It is also OK to cut and paste example 4GL code into Redmine (using a <pre></pre> section) which can be important to help you explain an issue.


© 2007-2019 Golden Code Development Corporation. ALL RIGHTS RESERVED.