Writing 4GL Testcases

Introduction

In the FWD project, there are multiple reasons for writing 4GL testcases:

  1. To demonstrate a use of the 4GL language which can show a difference between the FWD implementation and Progress 4GL/OpenEdge. In short, this is to recreate a bug in FWD.
  2. To comprehensively explore the behavior and functionality of 4GL features so that a specification can be written, such that a replacement can be implemented in FWD.
  3. To test the compatibility of the FWD implementation of some 4GL feature. The same suite of tests from item 2 above is also used for this purpose.

Follow these guidelines when implementing 4GL testcases.

Simple is Better

Write each testcase with the least amount of features and dependencies which will still exhibit the behavior you are trying to test/explore.

If you are copying existing code, you should try to remove unnecessary options/clauses/phrases, unnecessary triggers, functions and statements. Strip such code down to only those things absolutely required. This makes it more likely that the code will be able to convert and execute. It also makes it easier to see exactly what the code is doing, without any question about whether the other features are needed.

GUI vs ChUI

If you are working on code that is not dependent upon the type of user interface, please write the testcases so that they can be executed in ChUI mode. This provides a much easier path to testing the 4GL code, since it allows testing via ssh to a Linux system and avoids the need for a Windows 4GL GUI system, Windows platform dependencies and all the extra overhead that comes along with it.

When testing user interface functionality, instead of writing different test cases for GUI/ChUI, use the SESSION:DISPLAY-TYPE attribute for conditional testing when behaviour is different based on the display mode (TTY is for character mode, GUI for graphical).
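
For example, a minimal sketch (the expected values and the message are invented for illustration, not actual 4GL results) of branching on SESSION:DISPLAY-TYPE might look like this:

def var expected-rows as int no-undo.

/* placeholder expectations; a real test would use values captured from OE */
if session:display-type = "TTY" then
   expected-rows = 1.    /* character mode */
else
   expected-rows = 3.    /* "GUI" - graphical mode */

message "display type:" session:display-type "expected rows:" expected-rows.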

Using Database (or Not)

Do NOT use the database features of the language unless you are working on code that requires it. Using database features increases complexity of testcases and if it is not needed, then the additional complexity can unnecessarily obscure the code in testcases. We always want to keep each test as simple as possible.

When you do need to use database support, then your testcase should use temp-tables where possible. Of course, if you are exploring a feature to determine how it works differently between permanent tables and temp-tables then you have no choice but to use a permanent database. Likewise, some 4GL features only work on permanent tables. For example, to test transaction isolation/interaction across user sessions would require a permanent database table/index since temp-tables are specific to a single user's session. Using temp-tables makes the resulting runtime environment much simpler, which makes it easier to execute the testcase since the database wouldn't need to exist and be filled with specific data.
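
As a minimal sketch (the table and field names are invented), a testcase can define and populate a temp-table instead of relying on a permanent table:

/* stands in for a permanent table; no external database is required */
def temp-table tt-item no-undo
   field item-id   as int
   field item-name as char
   index idx-id is unique primary item-id.

create tt-item.
assign tt-item.item-id   = 1
       tt-item.item-name = "first".

find first tt-item where tt-item.item-id = 1 no-error.
if not available tt-item then message "record not found".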

For the Testcases project, unless explicit approval has been provided in advance, it is expected that a single/unified database schema and set of test data will be used across all test sets. A single reset of the database should be sufficient to run all test sets, in any order, without requiring any further database reset. That means each test set should only edit or depend on data that is unique to it and/or which it can maintain itself.

No Use of Progress Software Corporation Sample Databases or Code

Whenever using database features, whether temp-tables or permanent tables, do not use any code or database that is owned or may be owned by Progress Software Corporation (PSC). For example, PSC provides the "Sports" or "Sports2000" database. Although it is likely these sample databases are not protected by licensing restrictions, the licensing of these databases is not clear. For this reason, do not export Progress' sports database schema or data. Do not convert any code provided by PSC.

All testcases MUST be written to be completely independent of anything owned by PSC.

Create Your Own Test Database Schema and Data

It is important to note that it is very rare that you would need to use a permanent database for UI (or even base language) testcases. All of the language statements you need to test (e.g. UPDATE or CHOOSE) work the same way for regular variables, temp-tables and permanent tables. So usually, it is best to just use variables and avoid all the database nonsense. If you really need to see how the database integrates with some UI or base language feature, try to use temp-tables first. It is possible that some features only exist when using a permanent database AND/OR there are some use cases that just need to be tested in both temp and permanent database environments. But these should be rare cases when doing UI work.

Likewise, we must not include any customer code or customer data in our testcases. If you are basing a testcase on something seen in customer code, please write it yourself using what you have seen, rather than copying the code or data.

If a permanent table is necessary, please use the tstcasesdb database as much as possible. Re-use existing tables and, if new ones are needed, use meaningful names to make it clear what each is used for and to increase re-usability. Try to treat it like a regular database, not a trash bin ;)

The definition and data dump files are stored inside the data folder and those are being used for the initial database import:
  • tstcasesdb.df - the data definition file
  • dump/tstcasesdb - the data dump files

When writing unit tests in ABL that need a database connection, the tstcasesdb database can be created from the data definition file and loaded from the data dump files. There is a simple procedure that automates this by creating the tstcasesdb database inside the db folder: support/db/create_tstcasesdb.p.

Unit tests that depend on the permanent database often alter the persisted data, so it is best practice to reset the database state after each test in order to have an unaltered starting point for each test. There is one procedure we use for this: support/db/reload_tstcasesdb.p.
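
A minimal sketch of this pattern, assuming the reload procedure can be RUN directly from the project root (the class and method names are illustrative), is an ABLUnit TearDown method that restores the data after every test:

block-level on error undo, throw.

class tests.example.PermanentDataTests:   /* illustrative class name */

   @Test.
   method public void altersPersistedData():
      /* test code that modifies permanent tstcasesdb data would go here */
   end method.

   @TearDown.
   method public void resetDatabase():
      /* restore tstcasesdb to its initial state after each test */
      run support/db/reload_tstcasesdb.p.
   end method.

end class.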

If you make modifications, you'll have to check in any schema, static data and potentially reload procedure changes when you check in your testcases.

Testcase Project

You will need access to the following:

The testcase repository in which you will add your tests can be obtained from xfer.goldencode.com as follows:

bzr co sftp://<userid>@xfer.goldencode.com/opt/testcases

This will create a testcases subdirectory in your current directory. From there, you would add a testcases/tests/create-result-list-entry/ subdirectory and put your tests in there. Other people are working in the other directories, so if you need to change something there, let's coordinate that change. Please check in your changes each day, even if they are not complete or functional. As long as you aren't changing parts of the repo being worked on or used by others, it doesn't matter if broken code is checked in to testcases/tests/create-result-list-entry/.

See Testcases for more details.

Testcase Organization

Please organize all testcases in sub-directories of xfer.goldencode.com/opt/testcases/tests. Name the subdirectory based on the feature/functionality being tested (e.g. testcases exploring buttons in the UI might be in a xfer.goldencode.com/opt/testcases/tests/ui/button/ directory).

You will have to make some changes to allow things to run as a relative path from the xfer.goldencode.com/opt/testcases/ directory. For example, RUN statement and include file pathing will need to have the sub-directory pathing prepended.
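
For example, a testcase stored under a hypothetical tests/ui/button/ directory would reference its own include files and sub-procedures like this (button_defs.i and make_button.p are invented names):

/* paths are relative to the testcases project root, not to this file */
{tests/ui/button/support/button_defs.i}

run tests/ui/button/support/make_button.p.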

For particularly large/complex sets of tests, there may be multiple levels of sub-directories to organize the related tests into groups. For example, the library_calls/ tests referenced later in this document have library_calls/misc/ and library_calls/return/ sub-directories. Common features and the main test runner are in the library_calls/ directory itself and the testcases related to library call return value processing are in the library_calls/return/ directory.

Support Files

Often various other files are needed for some tests: include files, external procedures, classes and/or data input files used as reference. Instead of having everything together in a single directory, try to separate those from the actual tests. As a recommendation, use a support sub-directory to store those files; for input data files you might want to use a data sub-directory under this support folder.

While it is very tempting to re-use support files from different test features, be aware that those files might change along with their respective feature set and this will affect your tests. Unit testing is not really about `code reuse`; it is preferable to have better isolation between tests instead.

Common Support Files

While the tests should be kept as simple and clear as possible, given the large number of tests created it is not uncommon for code to start repeating, or for multiple tests to need common data structure definitions (temp-table, dataset) that mirror the database structure.

For cases like this we have small utility classes and include files under the support sub-directory in the testcases root. When you find something that looks like a good candidate for a 'helper' class, try to find the right place to add it in that support directory. Some functionality might already be there, so it is worth taking a look at what already exists and how those routines are used in other tests before starting to write new ones.

However, please DO NOT do the following:

  • store data input files inside common support directory
  • change the data structure definitions in include files (normally under definition sub-directories)
  • change the routines that dynamically create those structures (normally under create sub-directories)
  • change the records created in those data structures (normally under populate sub-directories)

Use Descriptive File Names

Please don't just write testcases that are named testcase.p. We need testcase file names that more accurately describe the test being executed. For example, library_calls/misc/case_insensitive_proc_name_in_run.p is a pretty clear name that describes the test being executed.

Another bad approach to avoid is to number testcases like button_test0.p, button_test1.p, ... Although we can tell the tests are related to buttons, we can't tell what specifically is being tested about buttons. You should not have to read the code to have a decent idea of what the test does.

Automated/Batch Execution

All 4GL compatibility tests must be written using the built-in 4GL Unit Testing support which is compatible with the ABLUnit framework. Additional details about our approach can be found in Testing Process.

All testcases (interactive and non-interactive) should be written such that they can be executed as part of an automated suite of tests.

When exploring a new 4GL feature, one virtually always will need many testcases in order to exhaustively determine the behavior of the feature. It is a very very rare feature which would only need a single testcase. Considering this, one would expect that the tests should be organized so that they can be executed as a single batch. It is useful to create a main "test suite" that calls the other test procedures/classes. For example, here is the xfer.goldencode.com/opt/testcases/tests/copy_lob/TestSuiteClass.cls:

block-level on error undo, throw.
@TestSuite(classes="tests.copy_lob.TestBlobBlob,
    tests.copy_lob.TestBlobBlobOverlay,
    tests.copy_lob.TestBlobFile,
    tests.copy_lob.TestBlobFileAppend,
    tests.copy_lob.TestBlobInvalidSrc,
    tests.copy_lob.TestBlobInvalidTrg,
    tests.copy_lob.TestBlobLongchar,
    tests.copy_lob.TestBlobLongcharOverlay,
    tests.copy_lob.TestBlobMemptr,
    tests.copy_lob.TestBlobMemptrOverlay,
    tests.copy_lob.TestBom,
    tests.copy_lob.TestClobClob,
    tests.copy_lob.TestClobClobOverlay,
    tests.copy_lob.TestClobFile,
    tests.copy_lob.TestClobFileAppend,
    tests.copy_lob.TestClobInvalidSrc,
    tests.copy_lob.TestClobInvalidTrg,
    tests.copy_lob.TestClobLongchar,
    tests.copy_lob.TestClobLongcharOverlay,
    tests.copy_lob.TestClobLongcharTrim,
    tests.copy_lob.TestClobMemptr,
    tests.copy_lob.TestClobMemptrOverlay,
    tests.copy_lob.TestFileBlob,
    tests.copy_lob.TestFileBlobOverlay,
    tests.copy_lob.TestFileClob,
    tests.copy_lob.TestFileClobOverlay,
    tests.copy_lob.TestFileFile,
    tests.copy_lob.TestFileFileAppend,
    tests.copy_lob.TestFileInvalidSrc,
    tests.copy_lob.TestFileInvalidTrg,
    tests.copy_lob.TestFileLongchar,
    tests.copy_lob.TestFileLongcharOverlay,
    tests.copy_lob.TestFileMemptr,
    tests.copy_lob.TestFileMemptrOverlay,
    tests.copy_lob.TestLongcharBlob,
    tests.copy_lob.TestLongcharBlobOverlay,
    tests.copy_lob.TestLongcharClob,
    tests.copy_lob.TestLongcharClobOverlay,
    tests.copy_lob.TestLongcharFile,
    tests.copy_lob.TestLongcharFileAppend").
class tests.copy_lob.TestSuiteClass: 

end class.

"Old" Test Cases

Before the ABLUnit engine was made available in FWD, test cases were written as regular procedures that sent their results to a log file. Each always started by writing a start line into the output file and ended with a final end line to make sure the procedure actually executed to the end. The "assertions" in between merely output additional messages into the output file; the test was considered to pass if there were no other messages between the start and end lines.

Some older testcase suites (e.g. "libcalls") emit significant non-error output between the start and end lines.

A "standard" test procedure always included the log_hlp.i helper, had one call of log-it at the start and one at the end and the main test code in between so looked like this:

{common/log_hlp.i}

def var i as int.
def var errmsg as char.

run log-it ("[test tag] start").

... main test code here ...

run log-it ("[test tag] end").

The log-it procedure simply wrote the text passed as its input parameter into the output log file and was used for non-error/informational messages like start/end. All error lines, on the other hand, started with UNEXPECTED_ERROR: followed by the actual error message and ended with # followed by the line number in the test procedure, to make it easier to find the cause of the error. For that, the log-err procedure or the other assert procedures were used.

Normally the main test code called various ABL statements using the NO-ERROR clause and then either asserted that a specific error was raised or that the outcome matched the one observed in the 4GL.

For errors, most of the time we checked both the error number and the error message, although there is no point in checking the error message for the same error number over and over again.

/* non-existent ordinal (and function name) */
run some-non-existent-procedure in this-procedure no-error.
errmsg = error-status:get-message(1).
if not error-status:error or
   not error-status:get-number(1) eq 6456 or
   not errmsg = substitute("Procedure &1 has no entry point for some-non-existent-procedure. (6456)", search(this-procedure:file-name))
   then run log-err (input "Expected error 6456 for missing procedure. Got '" + errmsg + "'.", {&line-number}).

In this example, the procedure named some-non-existent-procedure does not exist. The code generates runtime error 6456 in the 4GL, so we assert that both the error number and the message match the 4GL behavior. Alternatively, we could have used one of the assert procedures:

  • assert-err - only checking an error was raised
  • assert-err-num - check an error with certain error number was raised
  • assert-err-nums - check multiple errors with certain error numbers were raised

Of course, where a 4GL feature actually has results which the program can check directly (as opposed to ERROR-STATUS side-effects), that is the best choice. Something like this:

num = ?.
run return_uint8_cdecl (output num) no-error.
assert-not-err("return_uint8_cdecl should not raise error", {&line-number}).
if num ne 0 then run log-err ("Expected 0 from 'byte into return integer' test but found " + string(num), {&line-number}).

In order to execute multiple tests pertaining to a given functionality in one go, certain "runner" procedures were used - the code there initializes logging, logs the start and end of processing and calls each set of sub-tests in turn. The external procedures that are called are actually sets of related sub-tests. For example, here is the content of library_calls/misc/misc_tests.p:

{library_calls/log_hlp.i}

run log-it (input "misc tests start").

do on stop undo, leave on error undo, leave:
   run library_calls/misc/name_decoration.p.
   run library_calls/misc/find_by_name_failure.p.
   run library_calls/misc/find_by_ordinal.p.
   run library_calls/misc/case_insensitive_proc_name_in_run.p.
   run library_calls/misc/order_of_input_checks.p.
   run library_calls/misc/order_of_output_checks.p.
   run library_calls/misc/polymorphic_type_support.p.
   run library_calls/misc/os_api_call.p.
   run library_calls/misc/return_value_impact.p.
end.

run log-it (input "misc tests end").

Again, note how this initializes logging, logs the start/end and executes a set of specific tests. This structure can support testcases of arbitrary complexity.

When there is something that can't be encoded into the source as a direct test, put the details or other results into comments so that the result is still captured and stored with the testcase. This is especially useful for ChUI-based code that has some visual results on the screen which can't be tested directly. We often actually cut/paste the 4GL screen data into a comment to make it available for future reference.

"New" ABLUnit Test Cases

With the addition of the new testing engine in FWD, which supports the ABLUnit (and to some extent also the OEUnit) testing framework, the decision was made to write all new test cases using this approach. More details can be found in the ABLUnit Progress Online Documentation; it is much like JUnit, where standard annotations are used for setup/teardown and test methods.

By writing all of our 4GL test code using ABLUnit, we can use automation to run the same tests on both FWD and OE.

The previous recommendations about descriptive naming and grouping of test cases by subject area still apply; some additional recommendations that might help are below (see the class sketch after this list):

  • do not write tests that count on the execution order of the Test methods; see the remarks about 'shared' context below.
  • try not to use 'shared' context (class level variables/properties) unless it is effectively final, i.e. only set inside the Before method and never changed afterward in Test methods.
  • shared data structures (temp-table/dataset) should either be final (populated in Before) or reset to a predefined state before (or after) each Test by using Setup or TearDown.
  • pay attention to non-dynamic widgets (variables defined with VIEW-AS and part of a FRAME); once realized, their behaviour changes and they cannot revert back to un-realized.
  • the number of assertions inside a Test method should be kept to a minimum - it is not always possible to have only one assertion per method, but be aware that the test stops on the first failed assertion.
  • favour test classes over procedures, unless the test requires structures/statements that can't be used in classes (internal procedures/functions, use of THIS-PROCEDURE).
  • group test cases for a given area into test suite(s) so they can be run as a whole test unit. There is a limitation on the number of tests that can be added to a test suite. This is probably only an OE limitation, not one in FWD; however, all tests must be written to execute successfully in both OE and FWD, so we must live with the limitation.
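
The following is a minimal sketch of that structure (all names and values are illustrative): shared context is set once in Before, mutable shared data is reset in Setup, and the single assertion is expressed as a thrown error here (normally a 'core' Assert or support.test.AssertExt call would be used instead):

block-level on error undo, throw.

class tests.example.SharedContextTests:

   /* effectively final: set once in the Before method, never changed by tests */
   define variable data-dir as character no-undo.

   /* mutable shared structure: reset to a known state before every test */
   define temp-table tt-num no-undo
      field num as integer.

   @Before.
   method public void beforeAll():
      data-dir = "support/data".
   end method.

   @Setup.
   method public void beforeEach():
      empty temp-table tt-num.
      create tt-num.
      tt-num.num = 1.
   end method.

   @Test.
   method public void firstNumIsOne():
      find first tt-num.
      if tt-num.num ne 1 then
         undo, throw new Progress.Lang.AppError("expected 1 but found " + string(tt-num.num), 1).
   end method.

end class.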

Beside those recommendations there are some helper methods that can be used:

  • support.test.AssertExt - methods to assert errors/warnings and some 'extensions' to the 'core' assertions that might be useful.
  • support.test.HandleFactory - methods to instantiate various dynamic objects and clean them up; useful to reset/clean the state between test methods.
  • support.test.EventLog - can be used to record various events like user interface triggers, but it can also be used with any kind of callback (publish, named events) to record the succession of events fired for a particular action/operation.
  • support.test.SikuliHelper - can be used to launch the Sikuli script automation for tests that require simulating user interaction.
  • support.ui.Dimension - can be used to capture the size and position of user interface widgets (including their parent/frame) in order to compare what changed in those attributes after performing actions on the widget.
  • support.ui.DimensionSupport - various helper methods for character/pixel transformation and widget/session information.
  • support.FileUtils - some helper methods: get a temporary file with optional content, read content from a file, check if a file/directory exists.

For full details, see Testcase Helper Code.

Tips to Migrate "Old" Test Cases

Test cases written before the ABLUnit engine was made available should be "migrated" to the new format. While there is no automated process available, these are some guidelines for the migration process (a short before/after sketch follows the list):

  • each "old" test procedure will translate to a test class or procedure as appropriate.
  • recording the start and end of the test in the log file is not needed anymore so it can be removed.
  • recording errors/exceptions in the log file should be replaced with calls to appropriate assert method - either from 'core' Assert or from AssertExt.
  • the original top-down program flow needs to be broken down in, possibly multiple, Test methods.
  • when broken down in multiple Test methods make sure 'static' context does not get into way and the test methods execution order is not important.
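
A short before/after sketch (the procedure under test and all names are hypothetical): the old-style log-err check becomes an assertion that stops the Test method, shown here as a thrown error for illustration.

/* OLD style, inside a plain test procedure:
      run target_feature.p no-error.
      if error-status:error then
         run log-err ("unexpected: " + error-status:get-message(1), {&line-number}).
*/

block-level on error undo, throw.

class tests.example.MigratedTest:

   @Test.
   method public void targetFeatureDoesNotRaiseError():
      run target_feature.p no-error.   /* hypothetical procedure under test */
      if error-status:error then
         undo, throw new Progress.Lang.AppError(
            "unexpected: " + error-status:get-message(1), 1).
      /* a 'core' Assert or support.test.AssertExt method would normally replace the IF */
   end method.

end class.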

Testing Block Structure, Scoping and Transaction Properties

The original idea of the 4GL was to enable non-programmers to create real applications. This was a miserable failure. But the idea itself led to certain design decisions that are deeply embedded into the 4GL. In particular, the 4GL has a great deal of implicit (and explicit) behavior regarding the block structure of the program. Each block has properties relating to transactions, resources, error processing and control flow. Each block has runtime state that can change the control flow and processing of a program in ways that cannot be easily seen by reading the code. Some blocks have special behaviors with frames (or other UI features) and queries (and other database features like buffers).

For this reason, it is important to understand when a feature may have some behavior that relates to blocks, block properties, scoping or transactions. In such cases, you must very carefully test variants to determine the extent of any dependency and to explore the full behavior of the feature in light of any dependencies.

Block Types:

  • Top-Level
    • External Procedure
    • Internal Procedure
    • Function
    • Trigger
    • Method
  • Inner Blocks
    • DO
    • REPEAT
    • FOR
    • EDITING

All blocks have implicit properties that are there by default. All inner blocks also have specific options that can be specified explicitly, which can modify the block's properties from the default. You should also consider how these blocks nest. Any feature with behavior that is dependent upon blocks will typically have some behavior that changes based on block nesting.

The location of the definition and references of many resources will determine the scoping of those resources. The scope of a resource is the block with which that resource is associated. It usually determines a great deal of implicit behavior. For example, the scope of a buffer determines when it releases records, when its record may get implicitly changed, when exclusive locks might get degraded to share locks, when database triggers are called on the record, when modifications to a record are committed... Frames are another resource which are scoped. There is a range of UI behavior associated with that including the lifetime of the frame (and contained resources) as well as implicit DOWN behavior.

Any feature that interacts with resources that need to commit/rollback (UNDO in 4GL terms), or which are aware of record locking behavior, will likely also be dependent upon the transaction properties of the enclosing blocks. The 4GL has full transaction, sub-transaction and no-transaction levels that are associated with a block. All blocks have one of these 3 transaction types. Please note that a sub-transaction is a kind of partial commit where the current changes are added into the overall change set (which can still be rolled back). If just the sub-transaction is rolled back, then the current block's (and contained blocks') changes are rolled back, but the previously created changes from outside the current block are not. The 4GL has unlimited levels of sub-transaction support. Testing features in all 3 transaction types can be important.
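
A minimal sketch of sub-transaction rollback (using an undoable variable so no database is needed; the values are illustrative):

def var i as int.   /* deliberately undoable: no NO-UNDO option */

outer:
do transaction:                           /* full transaction */
   i = 1.

   inner:
   do on error undo inner, leave inner:   /* sub-transaction */
      i = 2.
      undo inner, leave inner.            /* roll back only the sub-transaction */
   end.

   /* i is 1 again here: the inner change was undone, the outer change remains */
   message "after sub-transaction rollback i =" i.
end.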

You must also test looping and retry behavior. Block iteration can cause a great deal of implicit behavior from DOWN in frames to queries that advance to the next record. Retry is a special 4GL feature where the current block is reset and re-executed from the top, depending on runtime state and some condition that will cause the retry. Please note that retry is not limited to looping blocks. Any block can be retried.

Our documentation (both the FWD javadoc and the FWD books) has much detail on all of the above features. These are described here to give you a good idea of the kinds of variations you need to consider when trying to fully explore a 4GL feature. Refer to the documentation for the full details.

Testing ERROR/ENDKEY/STOP/QUIT Conditions and NO-ERROR

The 4GL runtime will often generate one of 4 possible "conditions". These are used to signal abnormal behavior or sometimes just to change the flow of control. The conditions are ERROR, ENDKEY, STOP or QUIT.

ERROR and ENDKEY can be generated by user actions (e.g. the END-ERROR key) and are very commonly raised by 4GL code in response to unexpected problems.

STOP is used to denote a fatal problem.

QUIT is raised to cause the program to exit.

All of these can be "caught" using inner blocks that have an implicit or explicit ON phrase (e.g. ON ERROR undo, retry my-block). This means that you can configure blocks such that you can use the control flow to test exactly whether or not a specific condition is raised. For example:

def var fails as int.

fails = 0.

my-block:
do on error undo, leave my-block:
   /* do something here that should raise an ERROR */

   /* this code should never be reached */
   fails = fails + 1.
end.

if fails ne 0 then /* do something here to log or report the failure */.

Of course, you can reverse this logic to prove that the code does not raise an error and that the code at the end of the block did get executed.
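
A sketch of that reversed check (again, the real test logic would replace the comments):

def var reached-end as int.

reached-end = 0.

my-block:
do on error undo, leave my-block:
   /* do something here that should NOT raise an ERROR */

   /* only reached when no ERROR condition was raised in the block */
   reached-end = 1.
end.

if reached-end eq 0 then /* do something here to log or report the failure */.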

There are some rare cases where NO-ERROR does not actually suppress the error. These are always undocumented "features" (really: just bugs) in the 4GL. But if you encounter such a case, you can use the block processing to "eat" the failure:

my-block:
do on error undo, leave my-block:
   /* do something here raises an ERROR even when NO-ERROR is used */
   /* typically, you still have to specify NO-ERROR so that the error message doesn't display */
end.

/* execution continues here */

You often need to test generated errors to confirm which of these (if any) are raised. Most commonly, 4GL features will raise the ERROR condition.

If you know that an ERROR condition is raised in a certain situation, then you can often use the NO-ERROR clause to allow the code to continue to execute while you then check the ERROR-STATUS handle to confirm that the details of the raised error match what is expected. There is an example of this in the Automated/Batch Execution section above.

Not all 4GL errors throw "exceptions"; some of them look like warnings, where an apparent error message is displayed but the actual ERROR condition is not raised. However, the warning info is recorded into the ERROR-STATUS handle, so make sure you check the NUM-MESSAGES property - if the ERROR condition is not set but NUM-MESSAGES is greater than zero, then those are only "warnings".
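
A minimal sketch of that check (the procedure being run is hypothetical; only the ERROR-STATUS handling matters here):

run statement_that_may_warn.p no-error.   /* hypothetical statement under test */

if error-status:error then
   message "a real ERROR condition was raised:" error-status:get-message(1).
else if error-status:num-messages > 0 then
   message "only warnings were recorded:" error-status:get-message(1).
else
   message "no errors and no warnings".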

For such cases, please look at the ErrorManager's APIs, like recordOrShowError(), recordOrThrowError(), displayError(), etc. Sometimes errors are shown with or without a starting **. You will have to use the ERROR-STATUS:ERROR and related attributes to check if an error was registered by the call. Depending on the ERROR flag and NUM-MESSAGES property, you will decide on which ErrorManager API to use for the "error".

There is a 4GL feature called SESSION:SUPPRESS-WARNINGS. When it is set to YES from the procedure editor and you run a program which does not set the SESSION:SUPPRESS-WARNINGS attribute after a program which did set it, the new program inherits the SESSION:SUPPRESS-WARNINGS value from the previous run. Thus, when dealing with errors, it is recommended to start the test program from the command line, using the pro -p <progname> command. If you need permanent database support, use pro -p <progname> -db <location of your p2j_test.db file>. Try to avoid the use of SESSION:SUPPRESS-WARNINGS, otherwise you will lose sight of generated warning messages.

For each error handling case, you need to determine how it behaves in the 4GL and expect to use the ErrorManager API that matches this behavior (i.e. recordOrThrowError if the ERROR condition is raised, or displayOrLogError if the ERROR condition is not raised but the NUM-MESSAGES property is greater than zero).

With the addition of the structured error handling mechanism as an alternative to the NO-ERROR clause, one can use a CATCH block instead; when an error is thrown, an actual Progress.Lang.Error instance is available inside the CATCH block.

All "internal" errors raised by ABL statements are of type Progress.Lang.SysError, other errors like application errors or coming from various vendor frameworks (including the Progress's own OO API) must be some kind of Progress.Lang.AppError, this one being the only "exception" that ABL developers can extend. Normally SysError properties/methods maps to the one of the ERROR-STATUS system handle, the AppError also have the ReturnValue property that maps to the RETURN-VALUE function. When testing Progress OO API it might be interesting to test the actual type of the "exception" being thrown in order to replicate the exact same functionality in FWD, for that the CATCH approach is to be used instead of the NO-ERROR clause.

There are some useful methods to test error conditions inside the support.test.AssertExt helper class.

Checking In Testcases

As part of any testcase work you are authorized to perform, a location in the testcases repository will be allocated where the source files should be added. All 4GL testcases that we create are stored in the bzr repo xfer.goldencode.com/opt/testcases. Usually, the files should be stored in a subdirectory that is named for the 4GL feature you are investigating. If the testcase is useful to you, then it is something that needs to be checked in.

Only check in to this project if you are authorized to do so. This project is being carefully curated. If you have any questions about whether you should check in changes or how to do so safely, please discuss this with your FWD project contact before you check in the files.

If you need to reference these in Redmine, just check them into bzr and then reference the name in the Redmine task history like xfer.goldencode.com/opt/testcases/some_feature/testcase1.p.

It is also OK to cut and paste example 4GL code into Redmine (using a <pre></pre> section) which can be important to help you explain an issue.


© 2004-2023 Golden Code Development Corporation. ALL RIGHTS RESERVED.