Testing Process

Introduction

The primary objectives of the FWD project are:

  • Provide a complete replacement for OpenEdge that is 100% compatible with 4GL language behavior and is at least as fast and as scalable.
  • Provide significant functional, performance and scalability improvements over what is possible in OpenEdge, making FWD the best strategic choice for 4GL development.

The FWD development process is heavily biased toward creating Java versions of 4GL language features. We write many tests (tens, hundreds or even thousands for a single 4GL feature) to explore how each 4GL feature works. When comprehensively written, these tests define the specification for how the 4GL feature must behave, which makes the FWD development process an extreme form of Test Driven Development.

FWD is both a development platform and a runtime platform/application framework. This complexity and scope means that testing is critical to ensure that changes can be made without breaking compatibility or introducing performance/scalability issues. The amount of testing required is very large, which makes automated testing critical.

Functional Tests

  • Unit Testing - relatively simple pure Java code designed to check that specific Java features/APIs in FWD are working
  • 4GL Compatibility Testing - 4GL code written to test specific 4GL language features, each test can be used for front-end testing, conversion testing and runtime testing
  • Regression Testing - business scenarios executed in larger 4GL applications, the entire 4GL application can be used for both conversion and runtime testing

Performance and Scalability Tests

  • Load/Scalability Testing
  • Performance Testing

It is a key goal for the project to implement a full, comprehensive suite of fully automated tests for each of the above categories. At this time there is a large amount of work to do, and it will take time to complete it all. Even then, there will always be ongoing work to add new tests and maintain existing ones.

In addition to creating these tests and automating everything, we intend to run these tests daily using CI/CD and DevOps automation. Getting this implemented will take time, but it is in process.

Automated Testing Approaches

Test Type                                 Approach
Java Unit Testing                         JUnit 5 (see #1886)
Customer 4GL Application Unit Testing     ABLUnit (#3827) and OEUnit (#6237); see 4GL Unit Testing for details
Non-Interactive 4GL Code                  ABLUnit (same approach as Customer 4GL Application Unit Testing above; for the reasoning, see #6183)
Appserver                                 TBD, but probably the Harness with enhancements (see #7034)
REST or SOAP                              Harness
ChUI Interactive                          Harness
GUI Interactive                           Sikuli (see #3704 and Automating GUI Testing)
Load/Scalability                          Harness
Performance                               Harness

CI/CD (Continuous Integration/Continuous Delivery)

At this time, none of the test suites run automatically; where tests have been automated, the suites can only be started manually. We plan to change this and implement some form of CI/CD and DevOps automation to enable this across the full set of test types and suites.

Unit Testing

Java Code

The base support for Java Unit Testing has been implemented in FWD in task #1886. The approach uses JUnit 5 and can be executed from the FWD build scripting (./gradlew test).

This will be used for relatively simple pure Java code that is designed to directly test specific Java features/APIs in FWD. A critical idea here is that the tests will NOT require starting a FWD application server or client. These are all tests which must be able to run in an arbitrary JVM process that has no special configuration, state or database access. These tests directly use the FWD classes and report results. Anything more complicated will be tested using 4GL compatibility approaches or the regression testing approaches.

At this time we do not have any significant set of JUnit tests, and these are not expected to be the primary testing used for FWD, since what we really care about is 4GL compatibility testing (see below for how that is done). As a result, we do not spend much time running the JUnit tests. Many of those tests are currently broken, but fixing them is not a priority since customers are not seeing the issues.

Test classes are checked in with the rest of the FWD code, but with different pathing. Put the test classes into the same relative package as the classes they test, but with a test/ base directory instead of src/. This means that unit tests for src/com/goldencode/p2j/util/date.java would be in test/com/goldencode/p2j/util/.

This is different from the standard Maven layout. That is OK. FWD is not going to adopt the Maven layout.

For any test helper classes that have dependencies upon FWD, put them in test/com/goldencode/p2j/testing/. Any class in src/com/goldencode/p2j/ is part of FWD itself (not generic). For anything generic, please put the helpers in test/com/goldencode/testing/. Generic code is code that would be used in other projects outside of FWD. Such code should not have dependencies on anything in src/com/goldencode/p2j/.

In general, do not use a class name starting with Test for a class that contains tests. We already organize these in a top-level test/ directory, so there is no point in duplicating that as a class name prefix.
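
To make these conventions concrete, here is a minimal sketch of what a unit test for src/com/goldencode/p2j/util/date.java could look like, placed in test/com/goldencode/p2j/util/ with no Test prefix on the class name. The test method and the date API calls shown are illustrative placeholders, not the real FWD date API.

   // test/com/goldencode/p2j/util/DateBehavior.java
   package com.goldencode.p2j.util;

   import static org.junit.jupiter.api.Assertions.assertTrue;

   import org.junit.jupiter.api.Test;

   class DateBehavior
   {
      @Test
      void defaultConstructedDateIsUnknown()
      {
         // placeholder assertion; substitute the real FWD date semantics here
         date d = new date();
         assertTrue(d.isUnknown());
      }
   }

Tests written this way run from ./gradlew test in an arbitrary JVM process, with no server, client or database required.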

4GL Code

For 4GL code, FWD supports both ABLUnit (#3827) and OEUnit (#6237) compatibility. See 4GL Unit Testing for details. We've implemented a JUnit 5 Test Engine that integrates with FWD and allows execution of converted 4GL unit tests. This will be the primary way that we implement 4GL compatibility testing.
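
As a rough sketch of how such an engine can be driven programmatically, the standard JUnit Platform Launcher API allows restricting a run to a single engine. The engine id "fwd-abl" and the testcases directory below are assumptions for illustration only; the actual id and layout are defined by the FWD Test Engine implementation.

   import static org.junit.platform.engine.discovery.DiscoverySelectors.selectDirectory;
   import static org.junit.platform.launcher.EngineFilter.includeEngines;
   import static org.junit.platform.launcher.core.LauncherDiscoveryRequestBuilder.request;

   import org.junit.platform.launcher.Launcher;
   import org.junit.platform.launcher.LauncherDiscoveryRequest;
   import org.junit.platform.launcher.core.LauncherFactory;
   import org.junit.platform.launcher.listeners.SummaryGeneratingListener;

   public class Run4glUnitTests
   {
      public static void main(String[] args)
      {
         LauncherDiscoveryRequest req = request()
            .selectors(selectDirectory("testcases/abl"))   // converted 4GL unit tests (path is hypothetical)
            .filters(includeEngines("fwd-abl"))            // placeholder engine id
            .build();

         Launcher launcher = LauncherFactory.create();
         SummaryGeneratingListener summary = new SummaryGeneratingListener();
         launcher.execute(req, summary);
         summary.getSummary().printTo(new java.io.PrintWriter(System.out));
      }
   }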

4GL Compatibility Tests

Automation of these tests and full CI/CD is currently being implemented in #6853. Some details about the approach are being decided in #6183.

Projects

The existing work comprises 3 sets of tests, 2 of which are under source control in bzr repositories. You may need to watch this clip from MIB to understand the names below.

  • Old and Busted (Old UAST Testcases) - older, limited usage, not publicly available, poorly structured, meant to be deprecated
  • New Hotness (Testcases) - newer, cleaner, public, meant to be the strategic and automated 4GL compatibility suite
  • tc - early attempt at automated tests, focused on blocks/transactions and persistence, never finished, not in source control

None of these test sets is properly automated such that it can be executed as a non-interactive batch using CI/CD. This will be changed, but it will take some time.

The Testcases project needs to be made much more complete. Some of that will come from migrating tests from Old UAST Testcases, which involves cleaning them up and filling in gaps. Most of the work will need to be done from scratch to comprehensively cover all 4GL language features. The objective is to use these tests to implement both conversion and runtime testing.

The 3rd set, the tc testcases, is not currently managed in source control. These are block/transaction tests and persistence tests written without any OO or ProDataSet usage. They can be found at ~/shared/projects/p2j/tc/. They should be reviewed, migrated to Testcases, standardized with the newer approach and fleshed out to be complete.

Unless explicit approval has been provided in advance, it is expected that a single/unified database schema and set of test data will be used across all 4GL compatibility test sets. A single reset of the database should be sufficient to run all test sets, in any order, without requiring any further database reset. That means each test set should only edit or depend on data that is unique to it and/or which it can maintain itself.

No matter which testcase project is used, you MUST follow the approach in Writing 4GL Testcases.

Front-End

The front-end tests mostly still need to be written. The idea is to focus on testing a wide variety of inputs and confirming that the outputs of the given front-end phase are exactly as expected. Complete and correct 4GL programs are not the primary objective. The objective is to ensure that the front-end processing is correct. The result must run from CI/CD.

Preprocessing

Some tests already exist in Old UAST Testcases. These need to be expanded to be more comprehensive and they need to be automated. The resulting test suite should be part of Testcases.

The existing tests are in:

testcases/preproc/
testcases/uast/*preproc*
testcases/uast/defined_preproc_builtin/
testcases/uast/escaped_quote_preproc_consumption/
testcases/uast/preproc/
testcases/uast/runtime_preproc/

All of these tests should focus on feeding various inputs through the preprocessor and verifying that a known cache file is output.
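
A minimal sketch of such a test, assuming a hypothetical Preprocessor.run() entry point and hypothetical baseline file names, compares the emitted cache file byte-for-byte against a checked-in baseline:

   import static org.junit.jupiter.api.Assertions.assertArrayEquals;

   import java.nio.file.Files;
   import java.nio.file.Path;

   import org.junit.jupiter.api.Test;

   class PreprocGolden
   {
      @Test
      void knownInputProducesKnownCacheFile() throws Exception
      {
         Path input    = Path.of("testcases/preproc/include_nesting.p");     // hypothetical input
         Path actual   = Preprocessor.run(input);                            // placeholder API
         Path expected = Path.of("testcases/preproc/include_nesting.cache"); // hypothetical baseline

         assertArrayEquals(Files.readAllBytes(expected),
                           Files.readAllBytes(actual),
                           "preprocessor cache file diverged from baseline");
      }
   }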

Lexing

These should test all keywords (reserved and unreserved), abbreviations, various strings, numbers and so on; the idea is to check the full range of possible token matches. The input streams provided should generate a known stream of output tokens. No tests exist yet.
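
One possible shape for these tests, assuming a hypothetical Lexer.tokenize() API and hypothetical token names, is a parameterized test that pairs each input fragment with the exact token stream it must produce:

   import static org.junit.jupiter.api.Assertions.assertEquals;

   import java.util.List;

   import org.junit.jupiter.params.ParameterizedTest;
   import org.junit.jupiter.params.provider.CsvSource;

   class LexerTokens
   {
      @ParameterizedTest
      @CsvSource(delimiter = '|', value = {
         "DEFINE VARIABLE i AS INTEGER. | KW_DEF KW_VAR SYMBOL KW_AS KW_INT PERIOD",
         "def var i as int.             | KW_DEF KW_VAR SYMBOL KW_AS KW_INT PERIOD"  // abbreviated forms
      })
      void inputYieldsExpectedTokens(String source, String expected)
      {
         List<String> tokens = Lexer.tokenize(source);   // placeholder API
         assertEquals(expected, String.join(" ", tokens));
      }
   }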

Parsing

This should focus on variations of valid input for each parser rule. Each input should generate a known AST subtree. No tests exist yet.
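
A sketch of one such test, assuming hypothetical Parser.parse() and Ast.dump() APIs, compares a text rendering of the resulting AST subtree against a known-good dump:

   import static org.junit.jupiter.api.Assertions.assertEquals;

   import org.junit.jupiter.api.Test;

   class ParserShapes
   {
      @Test
      void assignStatementAst() throws Exception
      {
         Ast tree = Parser.parse("i = i + 1.");   // placeholder API

         // the expected subtree shape and node names below are illustrative only
         assertEquals(
            """
            ASSIGN
               SYMBOL i
               EXPRESSION
                  PLUS
                     SYMBOL i
                     NUM_LITERAL 1
            """,
            tree.dump());                         // placeholder rendering
      }
   }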

4GL Syntax Checking

These tests should focus on the variations of invalid input for each lexer token match and for each parser rule. It is the opposite of the Lexing and Parsing tests above. This is meant to encode invalid input and check that these inputs cannot be matched. A clean mechanism for reporting failures must be implemented and the tests must generate the failures as expected. Those failures need to be descriptive enough so that 4GL developers will know exactly what the problem is. The OpenEdge compiler errors do NOT need to be matched exactly, especially since many of those errors are confusing. No tests exist yet.

These tests will be used to implement/test the syntax checking version of the parser (see #3882).
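
A sketch of what one of these negative tests could look like, assuming a hypothetical SyntaxChecker.check() entry point and SyntaxError reporting type (the real reporting mechanism is still to be designed):

   import static org.junit.jupiter.api.Assertions.assertThrows;
   import static org.junit.jupiter.api.Assertions.assertTrue;

   import org.junit.jupiter.api.Test;

   class InvalidSyntax
   {
      @Test
      void unterminatedStringIsRejected()
      {
         SyntaxError e = assertThrows(SyntaxError.class,
                                      () -> SyntaxChecker.check("message \"unterminated."));

         // the failure must say what is wrong and where, not merely that parsing failed
         assertTrue(e.getMessage().contains("unterminated string"));
      }
   }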

Conversion

Automate the conversion of all Testcases and compare the conversion outputs with a saved-off baseline. The checking is primarily a matter of source file diffing. This automation doesn't exist yet, though the ChUI regression tests (below) can be used as a starting point. The result must run from CI/CD.
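
As an illustration of the checking step, the sketch below walks a saved-off baseline tree and diffs every file against the corresponding converted output; only standard java.nio APIs are used, but the paths are placeholders.

   import java.io.IOException;
   import java.nio.file.Files;
   import java.nio.file.Path;
   import java.util.ArrayList;
   import java.util.List;

   public class ConversionBaselineDiff
   {
      public static List<String> diff(Path baseline, Path current) throws IOException
      {
         List<String> mismatches = new ArrayList<>();

         try (var paths = Files.walk(baseline))
         {
            for (Path expected : (Iterable<Path>) paths.filter(Files::isRegularFile)::iterator)
            {
               Path actual = current.resolve(baseline.relativize(expected));

               if (!Files.exists(actual))
               {
                  mismatches.add("missing: " + actual);
               }
               else if (Files.mismatch(expected, actual) >= 0)
               {
                  mismatches.add("differs: " + actual);
               }
            }
         }

         return mismatches;
      }

      public static void main(String[] args) throws IOException
      {
         // e.g. fail the CI/CD job when any converted source diverges from the baseline
         List<String> bad = diff(Path.of("baseline/src"), Path.of("build/converted/src"));
         bad.forEach(System.out::println);
         System.exit(bad.isEmpty() ? 0 : 1);
      }
   }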

Runtime

Automate the deployment of the testing environment after the conversion run and then automate the test runs of all Testcases. These will be implemented with the techniques planned in #6183 and in Writing 4GL Testcases. The automation does not exist yet. The result must run from CI/CD.

Regression Testing

The following applications are used for regression testing. The idea is to execute real business scenarios, including complex multi-user flows.

  • ChUI Regression Testing - a customer's ERP system has hundreds of automated interactive tests written in the Harness; a second customer's ChUI system would be a good candidate for at least some basic tests
  • Appserver Regression Testing - this is a customer's complex server-side application which has a separate proprietary middle tier and thousands of JUnit tests that execute through the middle tier to hit the appserver
  • Hotel GUI (we need to document and automate the tests)
  • Hotel ChUI (we need to document and automate the tests)
  • GUI Applications - there are 3 active customer GUI ERP applications, each of which needs at least some smoke tests automated
  • REST Regression Testing - there are 2 customer applications which can be tested, one already has smoke tests implemented using the Harness and the other application needs to be automated
  • SOAP Regression Testing - there is a customer GUI application which also has a SOAP interface, we need to implement tests using the Harness

The above lists are purposely not explicit about which customer systems are used, since that is not information that can be made public. These systems can only be run internally at Golden Code (or individually at customer sites). GCD will maintain build/test servers that will implement CI/CD to test each of these cases. More details of the specific systems can be found in the customer-specific projects and their wiki pages.

Conversion

Automate the conversion of each of the above applications and compare the conversion outputs with a saved-off baseline. The checking is primarily a matter of source file diffing. The ChUI regression tests already have a starting point, but the other applications still need to be implemented. The result must run from CI/CD.

Runtime

Automate the deployment of the testing environment after the conversion run and then automate the test runs of each of the above applications. Each of the automation approaches will be based on the Automated Testing Approaches. The ChUI Regression Tests are already automated, as are the Appserver Regression Tests. The various GUI test automation does not exist yet, though some early work has been done in #3704. In all cases, the deployment and automation needs to be improved and cleaned up. The result must run from CI/CD.

Load/Scalability Testing

These tests do not yet exist. A plan needs to be made and implemented. The Harness is a good tool for this purpose. Testing should involve a range of different 4GL application types:

  • APIs in appserver, REST and SOAP
  • interactive ChUI
  • interactive GUI
  • batch

The result must run from CI/CD.

Performance Testing

These tests do not yet exist. A plan needs to be made and implemented. The Harness is a good tool for this purpose. Testing should involve a range of different 4GL application types:

  • APIs in appserver, REST and SOAP
  • interactive ChUI
  • interactive GUI
  • batch

The result must run from CI/CD.


© 2004-2023 Golden Code Development Corporation. ALL RIGHTS RESERVED.