Database Access

Introduction

This section discusses the ways in which non-converted code connects with and uses legacy databases which were converted from Progress databases, and which are now managed by a FWD application server. This non-converted code can be:

  • a hosted service (Java code running inside the FWD server)
  • an external application (outside of the FWD server)

The FWD runtime environment exposes a public API to enable hosted services and external applications to perform transactional CRUD (create, read, update, delete) operations on legacy (and new) data.

Besides lower-level data access methods, the public persistence API provides methods which should be used in consistent ways by an external application, to ensure it is a "good citizen" with respect to data access. The portions of the persistence runtime used by legacy, converted code honor certain conventions and access patterns to prevent data corruption which otherwise might result from uncontrolled, concurrent data access. It is critical for an external application to implement these same techniques when running alongside legacy, converted code. These techniques are described in detail later in this section.

Accessing the Database from Hosted Services

Data Model Objects

As with converted code, hand-written, external applications which require database access generally will deal with Data Model Objects (DMOs). The public API for persistence services is biased toward this type of use, as many of its methods either accept or return instances of DMOs, either individually, or within collection objects. DMOs used by an external application can be those which were converted along with the original, Progress application, from that application's database schemas. An external application may introduce new DMO interfaces and implementation classes, or modify existing ones, as defined by new business requirements. The definition or modification of DMO interfaces and implementation classes requires some special treatment, which is discussed later in this section.

External Application Architecture

The external application implements a client-server architecture. The client component drives the control flow of the application, requesting services from the server component as necessary. The server component runs within the process space of the FWD server. It acts as a “staging area” for persistence-related work requested by the client, exporting an application-specific API to the client. Some API methods are used to read data, others to insert, update, or delete data. Data read from the database is organized into serializable structures appropriate to the application's needs, before being returned to the client. Data to be updated or inserted is sent from the client to the server via the application-specific API.

Persistence API

Lower level persistence services are provided through the public API in the com.goldencode.p2j.persist.Persistence class. This class is the central point for CRUD (create, read, update, delete) access to all databases of the converted application. An application-specific API must be implemented using this public API. The following table lists the most important public methods of the Persistence class. Please refer to the class' JavaDoc documentation for usage instructions and further detail.

Category | Method | Notes
Transaction | void beginTransaction() | Implicitly begins a Hibernate session.
Transaction | void commit() | Implicitly ends a Hibernate session.
Transaction | void rollback() | Implicitly ends a Hibernate session.
Transaction | boolean isTransactionOpen() | Will report existence of either an implicit or an explicit transaction.
Lock | void lock(LockType, RecordIdentifier, boolean) | Change the lock type of a record.
Read | ScrollableResults scroll(String, Object[], Type[], int, int, ScrollMode) | Prefer this method to the list method for all queries which could return an unknown number of records. Use ScrollMode.FORWARD_ONLY if possible, as this enables the use of a server-side cursor in some databases (including PostgreSQL). See the javadoc comments of the other variants of the scroll method for more details.
Read | List list(String, Object[], Type[], int, int, boolean) | Only intended for known, small result sets. Avoid using this method for queries which could return an unknown number of results, since all objects will be held in memory within the returned list. This can have a very detrimental effect on the FWD server as a whole! Some JDBC driver implementations (including PostgreSQL) exacerbate the problem by holding a copy of all results within the driver as well.
Read | Persistable load(String, Class, Serializable, LockType, boolean) | Return a single DMO and acquire the specified lock type for the associated record.
Read | ResultSet executeSQLQuery(String, Object[]) | Execute an SQL query and return a result set. Another variant of this method exists, which does not accept an array of substitution parameters.
Create | Serializable nextPrimaryKey(String) | Safely allocates (or recycles) a unique primary key for a newly created record.
Create | void save(Persistable, Serializable) | Adds the DMO to the current Hibernate session and persists the corresponding record to the database.
Delete | void delete(Object) | Removes the DMO from the current Hibernate session and deletes the corresponding record from the database.
Create, Update, Delete | void executeSQL(String, Object[]) | Execute an arbitrary SQL statement (other than a query). Another variant of this method exists, which does not accept an array of substitution parameters.

Obtaining a Persistence Instance

One instance of the com.goldencode.p2j.persist.Persistence class can be created for each physical database accessed by a FWD server instance. Each such instance is shared by all user contexts, including all legacy contexts and all external application contexts. These instances are created, cached, managed, and provided by com.goldencode.p2j.persist.PersistenceFactory, a final class with only static methods and no public constructor. The following example illustrates how to obtain a Persistence instance:

import com.goldencode.p2j.persist.*;
...
Database db = new Database("mydb");
Persistence persistence = PersistenceFactory.getInstance(db);

The PersistenceFactory.getInstance(Database) method will check its cache to determine whether a Persistence object for the mydb database already exists, and if so, it will return that instance. Otherwise, it will instantiate a new Persistence object associated with the mydb database, cache that instance, and return it.

Transactions

The public API exposed by the Persistence class supports explicit actions to begin a transaction, and to commit or roll back that transaction after some unit of work is performed. An explicit transaction is begun using the method Persistence.beginTransaction.

The following example illustrates the transaction idiom that should be used for any unit of work (including read-only data access):

Persistence persistence = PersistenceFactory.getInstance(new Database("mydb"));
...

boolean beganTx = persistence.beginTransaction();

try
{
   // do primary work
   ...

   if (beganTx)
   {
      persistence.commit();
   }
}
catch (PersistenceException exc)
{
   if (beganTx && persistence.isTransactionOpen())
   {
      try
      {
         persistence.rollback();
      }
      catch (PersistenceException exc2)
      {
         // do some error handling for the rollback error
         ...
      }
   }

   // do some error handling for the primary error
   ...
}

We first attempt to begin a transaction just before the outer try-catch block (this may become a try-catch-finally block once record locking is involved - see Locking and Transactions below), and we record whether the transaction was started successfully in the beganTx variable. The Persistence.beginTransaction method will return false in the event a transaction already is active, which tells us it was not necessary to begin a new transaction, presumably because higher level code already began one. More importantly, it tells us we should not attempt to commit the current transaction, because that would most likely cause problems for the higher level code which expects to commit that transaction at a later point, after its higher-level unit of work is complete.

We then perform our primary work and attempt to commit the transaction, if necessary (i.e., if beganTx is true). If a PersistenceException is thrown as a result, it is caught and we attempt to roll back the current transaction, but only if it was begun with the earlier call to Persistence.beginTransaction, and only if it still is open. In fact, if the error occurred during the call to Persistence.commit, the transaction may already have been rolled back. In this case, Persistence.isTransactionOpen will return false, and we should not attempt to roll back again.

The rollback attempt itself can throw PersistenceException, which requires a separate try-catch block to handle any such error. We then handle the primary error in a manner appropriate to the application.

Clearly, much of the above example is scaffolding code to begin a transaction and to ensure it is properly committed or rolled back. Even more scaffolding code is necessary once we add record locking to the mix. It is certainly possible (and recommended) to re-factor and simplify this idiom by using helper classes or an Inversion of Control framework, such as Spring. No matter how the code is organized or re-factored though, the important point here is that all database work must be performed inside an explicit transaction (even read-only access!).
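
For instance, the scaffolding could be factored into a small helper class. The following is a minimal sketch; the TxHelper class and TransactionalWork interface are hypothetical names introduced here for illustration (they are not part of the FWD API), and only the Persistence methods listed in the table above are assumed:

import com.goldencode.p2j.persist.*;

/** Hypothetical helper which wraps a unit of work in the transaction idiom shown above. */
public final class TxHelper
{
   /** The unit of work to be performed inside the transaction. */
   public interface TransactionalWork
   {
      void run() throws PersistenceException;
   }

   /**
    * Execute the given work inside an explicit transaction, committing or
    * rolling back only if the transaction was begun by this call.
    */
   public static void execute(Persistence persistence, TransactionalWork work)
   throws PersistenceException
   {
      boolean beganTx = persistence.beginTransaction();

      try
      {
         // do the primary work supplied by the caller
         work.run();

         if (beganTx)
         {
            persistence.commit();
         }
      }
      catch (PersistenceException exc)
      {
         if (beganTx && persistence.isTransactionOpen())
         {
            try
            {
               persistence.rollback();
            }
            catch (PersistenceException exc2)
            {
               // log or otherwise handle the rollback error
            }
         }

         // re-throw the primary error for the caller to handle
         throw exc;
      }
   }
}

The primary work can then be supplied as an anonymous TransactionalWork implementation (or a lambda, on Java 8 and later), keeping business methods free of the commit/rollback scaffolding.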

If the Persistence API is used to do any database work outside an explicit transaction, an implicit transaction may be opened, which is not desirable in an external application. Implicit transactions are transactions which are opened and committed (or rolled back) by the persistence runtime automatically for certain database work, primarily for read operations. They exist to support legacy, converted code, which does not always open a transaction to read records from the database. They were introduced to prevent a problem with idle database connections, whereby the Hibernate framework would automatically open (but never close) a new transaction for read requests. When the read request was finished, the transaction would remain open indefinitely, consuming database resources unnecessarily. However, external application code should not allow an implicit transaction to be created, and should instead always manage an explicit transaction for any database work.

FWD does not require each API call to be executed in its own transaction. If there is a need to keep the transaction open across multiple API calls, this is possible, but the complexity depends on whether the APIs need to use the 4GL-like buffers and queries or not.

To be able to use the 4GL-like buffers and queries, each usage must be done within a 4GL-style block. As this kind of block can't be left open using the APIs in BlockManager, the TransactionManager APIs must be used instead. The following is a “compressed” algorithm describing the kind of remote APIs that need to be created so that a 4GL-style block with transaction support can be used to invoke remote APIs within the same transaction:

invoke a remote API to initialize all the variables, buffers, etc
try
   invoke a remote API which starts a top level block
   try
      invoke a remote API which starts a transaction block
      invoke a remote API to open the scope for all the buffers you will use
      invoke your remote APIs which use the buffers/queries/etc
   catch exceptions
      rollback transaction
      handle errors
   finally
      close the transaction block
catch ConditionException's
   ignore
finally
   close the top level block

With this approach, the code will be similar to how the 4GL works. Note that the 4GL-style buffers (which are anonymous proxies returned by the RecordBuffer.define call) cannot be used as return values for the exported APIs. Instead, return one or more of the DMO's properties or its ID.

If you need transaction support only at the Hibernate or JDBC level, you can use the Persistence APIs and work directly with the DMOs, rather than with their RecordBuffer proxies. Even in this case, all the Persistence APIs must be executed within the scope of a top-level block:

invoke a remote API to initialize all the variables, buffers, etc
try
   invoke a remote API which starts a top level block
   begin transaction
   try
      invoke remote APIs which work with the Persistence class APIs
      commit transaction
   catch exceptions
      rollback transaction
      handle errors
catch ConditionException's
   ignore
finally
   close the top level block

In any case, as context-local data is kept, the same session must be used to execute all the involved APIs.

Record Locks

Although the locking and unlocking of records usually is done as part of another operation, such as reading or updating records, it is a topic of some complexity which deserves individual attention. It is important to understand how record locks are obtained and released, because the burden of proper record lock management lies squarely with the developer, and the proper functioning of the FWD server depends upon the developer getting this right.

The FWD runtime environment implements a cooperative, pessimistic record locking model. The model is pessimistic because it assumes that in the absence of record locks, concurrent attempts to make incompatible changes to the same record can and will occur. It protects against this situation by requiring temporary, exclusive access to a record before changing or deleting it. The model is cooperative because this contract, while very important to honor, is not enforced programmatically. Although converted code uses portions of the persistence runtime which honor the requirements of the pessimistic locking model, it is possible to write an external application which bypasses the requirements of this model. The developer of external code therefore must take extreme care to cooperate with the other sessions and processes which rely on pessimistic locking to maintain data integrity.

The unusual design choice of a cooperative, pessimistic locking model was forced by the fact that the Progress environment itself relies upon a pessimistic locking model with very specific semantic requirements. Because the native record locking implemented by various database vendors do not necessarily match the exact semantics of Progress' implementation, the FWD environment provides a compatible implementation, which exists in the application server rather than in the backing database. It is this separation of record locking from the database which requires the cooperative nature of the record locking model.

While converted, legacy code operates naturally in this environment, a hosted application must take special care to lock and unlock records properly. Improper locking techniques in external applications can cause deadlocks or orphaned locks which at this time can only be resolved by (a) terminating the external application; or (b) restarting the FWD server. The basic rules of thumb to avoid this situation are:

  • always lock records in a consistent order; and
  • always unlock what you lock.

Techniques to implement these cardinal rules are discussed below, but first, the basics...

To lock or unlock a record, use the lock API in the Persistence class:

// assumes we previously have obtained a Persistence instance "persistence" 
RecordIdentifier ident = new RecordIdentifier("my_table", 1L);
try
{
   persistence.lock(LockType.EXCLUSIVE, ident, true);
}
catch (LockUnavailableException exc)
{
   // never thrown for requested lock type
}

In this case, the lock method will block until an exclusive lock can be obtained on the record my_table:1. To return immediately (i.e., abnormally) in the event the lock is unavailable, you would instead use:

try
{
   persistence.lock(LockType.EXCLUSIVE_NO_WAIT, ident, true);
}
catch (LockUnavailableException exc)
{
   // the lock is unavailable, handle accordingly...
   ...
}

As in Progress, two types of locks are supported:

  • SHARE - acquire this type of lock for read-only access to a record, to ensure that a record is not changed or deleted while you hold the lock.
  • EXCLUSIVE - acquire this type of lock if you intend to create, update, or delete a record.

For converted code, the contracts specified for these lock types are honored automatically by the portions of the persistence runtime which support legacy persistence semantics. However, there is no way for the runtime to enforce these contracts for hand-written Java code in a hosted, external application. Therefore, it is critical that all such code only makes modifications to the database while holding an EXCLUSIVE lock for each of the affected records.

Locking Order

The order in which exclusive record locks are acquired and released is important to the reliability of both the converted application and the external application. Getting this wrong can lead to application deadlocks.

Exclusive locks should always be acquired in a consistent order across all components of a system. Before writing logic which acquires and holds multiple locks, first review all other programs which lock records in the same table or group of tables (including programs in the converted application). Ensure that any new logic is written to acquire locks in the same order.

Consider two sessions, each attempting to acquire an exclusive lock on two records at the same time, but in the opposite order:

Session A | Session B
Attempts to acquire lock on record 1 | Attempts to acquire lock on record 2
Acquires lock on record 1 | Acquires lock on record 2
Does some work | Does some work
Attempts to acquire lock on record 2 | Attempts to acquire lock on record 1
Attempt to lock record 2 blocks; lock is held by session B | Attempt to lock record 1 blocks; lock is held by session A

These two sessions are now deadlocked; neither can continue because the other holds the next lock to be acquired.

Obtaining the locks in the same order across sessions eliminates the possibility of this deadlock:

Session A | Session B
Attempts to acquire lock on record 1 | Attempts to acquire lock on record 1
Acquires lock on record 1 |
 | Attempt to lock record 1 blocks; lock is held by session A
Attempts to acquire lock on record 2 |
Acquires lock on record 2 |
Does some work |
Releases lock on record 1 |
 | Acquires lock on record 1
 | Attempts to acquire lock on record 2
 | Attempt to lock record 2 blocks; lock is held by session A
Releases lock on record 2 |
 | Acquires lock on record 2
 | Does some work
 | Releases lock on record 1
 | Releases lock on record 2

The practice of locking records in a consistent order is a database topic which is not specific to writing an external application for the FWD environment. Nevertheless, it is mentioned here because of its importance to the reliability of the FWD server as a whole.
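
One straightforward way to impose a consistent order when several records of the same table must be locked is to sort their primary keys before acquiring the locks. The following is a minimal sketch, not a FWD API, reusing the my_table name from the earlier lock example; locks acquired this way must still be released (LockType.NONE) after the transaction ends:

import java.util.*;
import com.goldencode.p2j.persist.*;

/**
 * Acquire exclusive locks on a set of my_table records in a consistent
 * (ascending primary key) order, so that two sessions locking an
 * overlapping set of records cannot deadlock on each other.
 */
void lockInConsistentOrder(Persistence persistence, List<Long> primaryKeys)
throws LockUnavailableException
{
   // sort the keys so every session acquires the locks in the same order
   List<Long> ordered = new ArrayList<Long>(primaryKeys);
   Collections.sort(ordered);

   for (Long pk : ordered)
   {
      RecordIdentifier ident = new RecordIdentifier("my_table", pk);
      persistence.lock(LockType.EXCLUSIVE, ident, true);
   }
}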

Locking and Transactions

The idiom for a transaction which makes database modifications is expressed in the following pseudo-code:

begin transaction
try
   determine which record is to be created/updated/deleted
   obtain EXCLUSIVE lock on the record's table/primary key
   fetch or create the record
   make necessary changes to (or delete) record
   commit transaction
catch exceptions
   rollback transaction
   handle errors
finally
   release EXCLUSIVE lock

Note that the lock release must occur after the transaction ends, regardless of the success or failure of the work performed within the transaction. Exclusive locks must not be released before a transaction is committed (or rolled back). Doing so would provide another session the opportunity to grab an exclusive lock on the same record and commit conflicting changes, perhaps before the current session's changes to that record are committed. This violates the contract of pessimistic locking and can lead to data corruption. It is permissible, however, to downgrade an exclusive lock to a share lock during a transaction, then release the share lock after the transaction ends. This guarantees another session cannot obtain an exclusive lock before the current session's changes are committed.
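
As a sketch of the downgrade technique just described (assuming the persistence and ident variables from the earlier lock examples):

try
{
   // inside the transaction: downgrade EXCLUSIVE to SHARE once the record's
   // changes are staged; other sessions still cannot acquire an EXCLUSIVE lock
   persistence.lock(LockType.SHARE, ident, true);

   // ... commit (or roll back) the transaction here ...

   // after the transaction has ended: release the remaining share lock
   persistence.lock(LockType.NONE, ident, true);
}
catch (LockUnavailableException exc)
{
   // not expected for blocking SHARE or NONE requests
}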

It is important to understand that the termination of a transaction does not automatically release record locks obtained or used within it; this is always the responsibility of the programmer. This requirement is a side effect of separating the implementation of record locking from the backing database.

A failure to release locks at the appropriate time will leave orphaned locks in the FWD runtime environment's lock manager, which can cause other sessions attempting to acquire those locks to hang. If not released by business logic, orphaned locks will only be released when the session holding those locks (i.e., the external application) is terminated.

Reading Records

Records can be read from the database with or without locking. If the application can tolerate stale data, it generally is best from the perspective of resource use and server throughput to read records without locking them. This allows another session which needs to update the same records to do so without waiting unnecessarily.

If the possibility of stale data is unacceptable for a given use case, use a share lock. If the records read are to be updated or deleted, use an exclusive lock at the outset. Avoid first reading with share lock and attempting to upgrade to an exclusive lock later in the same transaction. If two sessions obtain a share lock on the same record, then both try to upgrade to an exclusive lock in order to update or delete the record, this will cause a deadlock, since neither can upgrade to an exclusive lock while the other retains its share lock.

The recommended idiom for reading records from a converted database - without locking - in an external application is:

begin transaction
try
   execute a query
   for each result
      do some work
   commit transaction
catch exceptions
   rollback transaction
   handle errors

...and with locking...

begin transaction
try
   execute a query to obtain primary keys
   for each result
      fetch full record with desired lock type
      do some work
   commit transaction
catch exceptions
   rollback transaction
   handle errors
finally
   release locks

The preferred method for reading records is the Persistence.scroll API listed in the table of important Persistence methods above. When used with ScrollMode.FORWARD_ONLY and a reasonable JDBC fetch size configured in the FWD directory, this is the best method to minimize the memory footprint of a query. This method accepts an HQL (Hibernate Query Language) query and returns a ScrollableResults object, which can be used to iterate the query's results. Please refer to published Hibernate documentation for details on HQL and the ScrollableResults interface.

An example of using the scroll API to read records with a share lock follows:

import java.io.Serializable;
import java.util.*;
import org.myorg.myapp.dmo.mydb.impl.*;
import org.hibernate.*;
import com.goldencode.p2j.persist.*;
import com.goldencode.p2j.util.*;

import static com.goldencode.p2j.util.character.*;

...
private Persistence persistence = PersistenceFactory.getInstance(new Database("mydb"));
...

/**
 * Process all customer orders for a given customer.
 *
 * @param   customerName
 *          Customer name.
 */
void processCustomerOrders(String customerName)
{
   // Remember locks to be released
   List<Serializable> locked = new ArrayList<Serializable>();

   boolean beganTx = persistence.beginTransaction();

   try
   {
      // Compose query to find all customer_order records with the customer's name
      String hql =
         "select co.id from CustomerOrderImpl co where upper(rtrim(co.custName, ' \t\n\r')) = ?";

      // Wrap parameter in FWD data type, right trim it, uppercase it
      character customerNameParm = toUpperCase(rightTrim(customerName));
      Object[] parms = new Object[] { customerNameParm };

      // Obtain primary keys of records matching query criteria
      ScrollableResults sr = persistence.scroll(hql,
                                                parms,
                                                DBUtils.makeTypeArray(parms),
                                                0,
                                                0,
                                                ScrollMode.FORWARD_ONLY);

      while(sr.next())
      {
         // Load each record with a share lock
         Serializable pk = (Serializable) sr.get(0);
         CustomerOrderImpl custOrder =
            (CustomerOrderImpl) persistence.load("customer_order",
                                                 CustomerOrderImpl.class,
                                                 pk,
                                                 LockType.SHARE,
                                                 true);

         locked.add(pk);

         // Do some work with custOrder DMO
         ...
      }

      // Commit transaction
      if (beganTx)
      {
         persistence.commit();
      }
   }
   catch (PersistenceException exc)
   {
      // Roll back transaction
      if (beganTx && persistence.isTransactionOpen())
      {
         try
         {
            persistence.rollback();
         }
         catch (PersistenceException exc2)
         {
            // do some error handling for the rollback error
            ...
         }
      }

      // Handle error appropriately...
      ...
   }
   finally
   {
      try
      {
         // Release all acquired locks
         for (Serializable pk : locked)
         {
            RecordIdentifier ident = new RecordIdentifier("customer_order", pk);
            persistence.lock(LockType.NONE, ident, true);
         }
      }
      catch (LockUnavailableException exc)
      {
         // Handle accordingly;  unexpected when releasing a lock
      }
   }
}

Let's take a closer look at this example. The statement

import java.io.Serializable;

is necessary because when obtaining locks on a set of records, we first execute a query which returns only the primary keys of the records we wish to lock. Many of the APIs of the Persistence class accept parameters of type java.io.Serializable to represent primary keys. Next, we have

import java.util.*;

which allows us to use the collection interface List and the implementation class ArrayList in the body of our example. The statement

import org.myorg.myapp.dmo.mydb.impl.*;

imports the package containing the CustomerOrderImpl DMO class, which this example assumes was generated by the conversion of an application whose root package is org.myorg.myapp and which represents a table in the database mydb. DMO interfaces are generated into a package structure which follows the convention {application root package}.dmo.{database name}. The corresponding DMO implementation classes are found in {application root package}.dmo.{database name}.impl.
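
For example, under these conventions the DMO types backing the hypothetical customer_order table would be found in packages like the following (the interface name CustomerOrder is assumed here purely for illustration):

org.myorg.myapp.dmo.mydb.CustomerOrder          (DMO interface)
org.myorg.myapp.dmo.mydb.impl.CustomerOrderImpl (DMO implementation class)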

Next, we import the main Hibernate package:

import org.hibernate.*;

This is necessary to reference Hibernate classes like ScrollableResults and ScrollMode.

Finally, we import some FWD packages:

import com.goldencode.p2j.persist.*;
import com.goldencode.p2j.util.*;

These give us access to the FWD persistence and data wrapper framework classes, respectively.

As a convenience, we have a static import to resolve static methods in the character data wrapper class, such as toUpperCase and rightTrim:

import static com.goldencode.p2j.util.character.*;

The point of the processCustomerOrders method is to perform some work with each customer order for a particular customer in the hypothetical mydb database. This particular use case calls for a guarantee that a given record will not be changed by another session while we are working with it, so we acquire a share lock for each record.

Within the body of the method, we first initialize a java.util.ArrayList object to store the primary keys of those records we lock, so we have a collection of identifiers to tell us which records to unlock at the end of our transaction.

We begin a new database transaction, then open a try block within which the work of that transaction will be done.

Next is the query itself, which is written in HQL:

      // Compose query to find all customer_order records with the customer's name
      String hql =
         "select co.id from CustomerOrderImpl co where upper(rtrim(co.custName, ' \t\n\r')) = ?";

We will come back to this in a moment, but for now, note that this query statement presupposes several things:

  • the conversion process created a DMO named CustomerOrderImpl;
  • that DMO has a character property, custName, which represents the name of the customer associated with a particular customer order;
  • that DMO has another property, id, which represents the surrogate primary key of the backing table (all DMOs generated by the conversion will have such a property);
  • the custName property represents a case-insensitive field in the original Progress table.

Next, we do a bit of preparation of the customer name which was passed into the method as its only argument (customerName):

      // Wrap parameter in FWD data type, right trim it, uppercase it
      character customerNameParm = toUpperCase(rightTrim(customerName));
      Object[] parms = new Object[] { customerNameParm };

The question mark (?) at the end of the HQL statement is a placeholder which indicates that this query expects a query substitution parameter. When this query is executed by the Persistence.scroll API, this placeholder will be replaced by the character object customerNameParm created above.

To create the customerNameParm object, we first invoke the static method character.rightTrim on the customerName string, which trims all space ( ), tab (\t), newline (\n), and carriage return (\r) characters, if any, from the right side of the customer name. Then, we uppercase the resulting text using the static method character.toUpperCase. This returns a character data wrapper object containing the original customer name, with whitespace padding (if any) trimmed from the end of the string, and all in upper case.

Likewise, in the HQL statement, we right-trim the same characters from the column which corresponds to the CustomerOrderImpl DMO property custName, then uppercase it as well.

The right-trimming and uppercasing of both the DMO property in the query statement and the substitution parameter are artifacts of legacy Progress query behavior. Remember the assumption that the original Progress database field represented by the DMO property custName was case-insensitive. The converted, PostgreSQL database will make a case-sensitive comparison during query execution, which requires that both the database column and substitution parameter be uppercased, to ensure values of any case are found correctly. Of course, if the original Progress database field was case-sensitive, you should uppercase neither the DMO property in the HQL statement, nor the query substitution parameter.

Likewise, when Progress matches textual fields, it ignores any ' \t\n\r' whitespace characters which right-pad the field and comparison value. This is why we right-trim both the column and query substitution parameter.

You might be tempted to skip the uppercasing and right-trimming steps from your own code, as it quickly becomes tedious to do this for every textual property. However, it is highly recommended that you do not omit these actions, for the following reasons:

  • Your query may miss matching certain records because they may contain whitespace right-padding or may not match the case of your query substitution parameter.
  • If the textual column you are using as a query criterion is indexed, your query will not be able to take advantage of that index. The indexes created by the conversion process reference the expression upper(rtrim({column name}, E' \t\n\r')), not the column itself. The database's query planner will not be able to use the index if your query statement does not exactly match the same syntax when referencing that column. This may result in a slower query. While your external application may be able to tolerate the performance penalty, consider that the sub-optimal query may add unnecessary load to the database server, which may impact other sessions of both your external application and the converted application.

Of course, for non-textual database columns, these functions need not (in fact, cannot) be used. So, for example, an HQL statement that matches on a particular customer number instead of on the customer's name might look like this instead:

      String hql = "select co.id from CustomerOrderImpl co where co.custNumber = ?";

The corresponding code to create a query substitution parameter array would look something like this:

      // Wrap parameter in FWD data type
      integer customerNumberParm = new integer(custNumber);
      Object[] parms = new Object[] { customerNumberParm };

Returning to the first example, the customerNameParm object is then stored in an Object array of length 1, named parms. This array is passed both to the Persistence.scroll method and to a static helper method, DBUtils.makeTypeArray(Object[]). The latter method inspects the array of substitution parameters and selects the appropriate Hibernate data type object for each. It creates an array of org.hibernate.type.Type objects, the elements of which correspond to the query substitution parameters at the same positions in the Object array passed to this method. It is important that the array of query substitution parameters contain only parameters of the following data types:

  • java.lang.Integer - only to be used for parameters representing temporary table multiplex IDs (see the JavaDoc documentation for the com.goldencode.p2j.persist.TemporaryBuffer class for details)
  • java.lang.Long - only to be used for parameters representing surrogate primary keys
  • com.goldencode.p2j.util.character
  • com.goldencode.p2j.util.date
  • com.goldencode.p2j.util.decimal
  • com.goldencode.p2j.util.integer
  • com.goldencode.p2j.util.logical
  • com.goldencode.p2j.util.raw
  • com.goldencode.p2j.util.recid (should not be necessary for most queries)
  • com.goldencode.p2j.util.rowid (should not be necessary for most queries)

Note that the select clause of the HQL statement includes only co.id, which represents the primary key of the CustomerOrderImpl DMO. We do not select the entire record at this point, because we need to acquire a share lock on each record we process. This requires that the query be performed in two passes. The first pass uses the Persistence.scroll API to collect the primary keys of all the records which match the query's criterion:

      // Obtain primary keys of records matching query criteria
      ScrollableResults sr = persistence.scroll(hql,
                                                parms,
                                                DBUtils.makeTypeArray(parms),
                                                0,
                                                0,
                                                ScrollMode.FORWARD_ONLY);

A word of caution: DO NOT use the Persistence.list method here in place of Persistence.scroll! For very large result sets, this can cause a spike in the JVM's heap memory which can have very serious repercussions for the stability of the FWD server process. Please refer to the javadoc documentation for these methods for more details.

We then iterate through the result set, fetch each record with a share lock in a second pass, add the record's primary key to the list of locks to be released later, and process the record. Note another assumption in this part of the example: the call to Persistence.load presupposes that the database table which backs the CustomerOrderImpl DMO is named customer_order.

      while(sr.next())
      {
         // Load each record with a share lock
         Serializable pk = (Serializable) sr.get(0);
         CustomerOrderImpl custOrder =
            (CustomerOrderImpl) persistence.load("customer_order",
                                                 CustomerOrderImpl.class,
                                                 pk,
                                                 LockType.SHARE,
                                                 true);

         locked.add(pk);

         // Do some work with custOrder DMO
         ...
      }

The last thing we do before the try block ends is commit the transaction.

If a PersistenceException is thrown, we catch it in the catch block, roll back the transaction, and process the error as is appropriate for the use case.

Note that there is no separate catch block for LockUnavailableException, which is a checked exception thrown by the Persistence.load method. There are two reasons for this:

  • We do not expect this exception to be thrown, because we have requested LockType.SHARE. This exception is thrown only for the non-blocking LockType variants (i.e., LockType.SHARE_NO_WAIT and LockType.EXCLUSIVE_NO_WAIT).
  • LockUnavailableException is a subclass of PersistenceException, so the catch block which catches PersistenceException is sufficient to meet the checked exception requirements of the Persistence.load method signature.

In the finally block, we iterate through the primary keys of all the records we have locked, and release each one. Persistence.lock throws a checked exception, LockUnavailableException. However, it is unexpected that this exception would be thrown while unlocking a record.

In the event we did not need a lock on each record, the example would be different in the following ways:

  • We would not collect primary keys in an ArrayList, nor have a finally clause.
  • The query would be done in a single pass, in which we would collect the full records instead of only primary keys. To accomplish this, we would simply omit the select clause from the HQL statement, such that it instead would begin with "from CustomerOrderImpl ...".
  • Accordingly, the call to the Persistence.load API in the ScrollableResults iterating loop would be unnecessary.

Such an example would look like this:

import org.myorg.myapp.dmo.mydb.impl.*;
import org.hibernate.*;
import com.goldencode.p2j.persist.*;
import com.goldencode.p2j.util.*;

import static com.goldencode.p2j.util.character.*;

...

private Persistence persistence = PersistenceFactory.getInstance(new Database("mydb"));

...

/**
 * Process all customer orders for a given customer.
 *
 * @param   customerName
 *          Customer name.
 */
void processCustomerOrders(String customerName)
{
   boolean beganTx = persistence.beginTransaction();

   try
   {
      // Compose query to find all customer_order records with the customer's name
      String hql = "from CustomerOrderImpl co where upper(rtrim(co.custName, ' \t\n\r')) = ?";

      // Wrap parameter in FWD data type, right trim it, uppercase it
      character customerNameParm = toUpperCase(rightTrim(customerName));
      Object[] parms = new Object[] { customerNameParm };

      // Obtain primary keys of records matching query criteria
      ScrollableResults sr = persistence.scroll(hql,
                                                parms,
                                                DBUtils.makeTypeArray(parms),
                                                0,
                                                0,
                                                ScrollMode.FORWARD_ONLY);

      while(sr.next())
      {
         // Read each record
         CustomerOrderImpl custOrder = (CustomerOrderImpl) sr.get(0);

         // Do some work with custOrder DMO
         ...
      }

      // Commit transaction
      if (beganTx)
      {
         persistence.commit();
      }
   }
   catch (PersistenceException exc)
   {
      if (beganTx && persistence.isTransactionOpen())
      {
         try
         {
            persistence.rollback();
         }
         catch (PersistenceException exc2)
         {
            // do some error handling for the rollback error
            ...
         }
      }

      // Handle error appropriately...
      ...
   }
}

Using SQL Instead of HQL

HQL is used in converted code to enable business logic to be independent of any particular database vendor's SQL dialect. For the purposes of expressing queries derived from legacy Progress source code, HQL is adequate. However, HQL has its limitations in more complex queries. This is primarily due to restrictions or defects in Hibernate's HQL parser implementation. For example, certain uses of the GROUP BY clause in HQL statements are not handled properly, and expressions are not fully supported in some sections of an HQL statement.

When you encounter such a limitation, you may find it necessary to express a query using SQL instead of HQL. This is supported by the Persistence APIs:

public ResultSet executeSQLQuery(String)
public ResultSet executeSQLQuery(String, Object[])

These methods should be used within the context of the same general idiom as is described above for reading data. However, there are some additional points to consider when using this mechanism:

  • These APIs perform no mapping of database records to Java DMOs. Queries are expressed using table and column names, rather than DMO class and property names.
  • Database-level record locking (e.g., SELECT...FOR UPDATE) should not be used in your SQL statements as a substitute for the FWD runtime locking described above. Reliance on database-level record locking alone without using the FWD runtime lock manager may lead to data corruption.
  • It is the caller's responsibility to close the ResultSet object returned by these methods.
  • Currently, these methods are not implemented to take advantage of server-side database cursors, so the same memory warnings apply as for the Persistence.list method above. These methods originally were intended only for housekeeping purposes, internal to the FWD persistence framework, where large result sets were not anticipated. This limitation likely will be addressed in a future FWD release.
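
The following is a minimal sketch of this mechanism, assuming the mydb database and customer_order table used in the earlier examples; the count(*) query is chosen purely for illustration. Note that the ResultSet is closed in the finally block, as required:

import java.sql.ResultSet;
import java.sql.SQLException;
import com.goldencode.p2j.persist.*;

...

/**
 * Count the rows in the customer_order table using a native SQL query.
 */
void countCustomerOrders() throws PersistenceException, SQLException
{
   Persistence persistence = PersistenceFactory.getInstance(new Database("mydb"));

   boolean beganTx = persistence.beginTransaction();

   ResultSet rs = null;

   try
   {
      // the query is expressed with table and column names, not DMO names
      rs = persistence.executeSQLQuery("select count(*) from customer_order");

      if (rs.next())
      {
         long total = rs.getLong(1);

         // do some work with the count
      }

      if (beganTx)
      {
         persistence.commit();
      }
   }
   catch (PersistenceException exc)
   {
      if (beganTx && persistence.isTransactionOpen())
      {
         try
         {
            persistence.rollback();
         }
         catch (PersistenceException exc2)
         {
            // do some error handling for the rollback error
         }
      }

      // do some error handling for the primary error
   }
   finally
   {
      // the caller is responsible for closing the ResultSet
      if (rs != null)
      {
         rs.close();
      }
   }
}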

Creating/Inserting New Records

A record is created using the following idiom:

begin a transaction
try
  allocate a unique primary key
  obtain an exclusive lock on that key for the appropriate table
  instantiate the appropriate DMO implementation class
  set the primary key as the DMO's ID
  optionally set other DMO properties
  persist the DMO
  commit transaction
catch exceptions
  roll back transaction
  handle errors
finally
  release the record lock

An example follows:

import org.myorg.myapp.dmo.mydb.impl.*;
import com.goldencode.p2j.persist.*;
import com.goldencode.p2j.util.*;

...

Persistence persistence = PersistenceFactory.getInstance(new Database("mydb"));

RecordIdentifier ident = null;

boolean beganTx = persistence.beginTransaction();

try
{
   Long pk = (Long) persistence.nextPrimaryKey("customer_order");
   ident = new RecordIdentifier("customer_order", pk);
   persistence.lock(LockType.EXCLUSIVE, ident, true);

   CustomerOrderImpl custOrder = new CustomerOrderImpl();
   custOrder.setId(pk);
   custOrder.setCustName(new character("ACME Ltd."));
   ...

   persistence.save(custOrder, pk);

   if (beganTx)
   {
      persistence.commit();
   }
}
catch (LockUnavailableException exc)
{
   // not expected, since a blocking EXCLUSIVE lock was requested
   if (beganTx && persistence.isTransactionOpen())
   {
      try
      {
         persistence.rollback();
      }
      catch (PersistenceException exc2)
      {
         // do some error handling for the rollback error
         ...
      }
   }

   // Handle error
   ...
}
catch (PersistenceException exc)
{
   if (beganTx && persistence.isTransactionOpen())
   {
      try
      {
         persistence.rollback();
      }
      catch (PersistenceException exc2)
      {
         // do some error handling for the rollback error
         ...
      }
   }

   // Handle error
   ...
}
finally
{
   if (ident != null)
   {
      try
      {
         persistence.lock(LockType.NONE, ident, true);
      }
      catch (LockUnavailableException exc)
      {
         // Handle accordingly;  unexpected when releasing a lock
      }
   }
}

Just as with locks acquired while reading records, it is critical that the lock manually acquired with Persistence.lock in the try block be released in the finally block, to prevent a lock leak.

Updating Existing Records

To update existing database records, we use a modified version of the idiom to read records with locks:

begin transaction
try
   execute a query to obtain primary keys
   for each result
      fetch full record (as a DMO) with EXCLUSIVE lock type
      use the DMO's setter methods to set desired property values
   commit transaction
catch exceptions
   roll back transaction
   handle errors
finally
   release locks

A practical example is very similar to the original reading example above. The only differences are that we request LockType.EXCLUSIVE in the Persistence.load API call, and the work that we do is calling setter methods on the DMO to set the properties we wish to update:

         ...

         // Load each record with an exclusive lock
         Serializable pk = (Serializable) sr.get(0);
         CustomerOrderImpl custOrder =
            (CustomerOrderImpl) persistence.load("customer_order",
                                                 CustomerOrderImpl.class,
                                                 pk,
                                                 LockType.EXCLUSIVE,
                                                 true);

         locked.add(pk);

         // Make updates to custOrder DMO
         custOrder.setSomething(new integer(5));
         custOrder.setSomethingElse(new character("abcdef"));

         ...

Although every DMO has a public setter method setId(Long), you must never change the primary key itself as part of an update. Doing so will cause problems for Hibernate and is unsupported.

You may have noticed that no Persistence API is invoked to update the record explicitly. This is because Hibernate implements a feature known as transparent write-behind. This simply means that updates made to Java objects representing database records (i.e., DMOs) are not necessarily flushed to the database immediately. Instead, Hibernate attempts to batch up changes and flush them at the latest possible moment. Hibernate uses snapshots of each DMO's contents to manage the proper points to perform such a flush, such as before another query on the same table, and before a transaction is committed.

Deleting Records

The idiom for deleting records is very similar to that used to perform updates:

begin transaction
try
   execute a query to obtain primary keys
   for each result
      fetch full record (as a DMO) with EXCLUSIVE lock type
      delete the record
   commit transaction
catch exceptions
   roll back transaction
   handle errors
finally
   release locks

Accordingly, the practical example is very similar to the update example above:

         ...

         // Load each record with an exclusive lock
         Serializable pk = (Serializable) sr.get(0);
         CustomerOrderImpl custOrder =
            (CustomerOrderImpl) persistence.load("customer_order",
                                                 CustomerOrderImpl.class,
                                                 pk,
                                                 LockType.EXCLUSIVE,
                                                 true);

         locked.add(pk);

         // Delete the custOrder DMO
         persistence.delete(custOrder);

         ...

One notable difference is that an explicit call to Persistence.delete(Object) is required to perform the delete action. This will both remove the DMO from the underlying Hibernate session and delete the corresponding record from the database.

Although it may be tempting to execute a DELETE SQL statement to delete many rows at once, avoid doing this, as it completely bypasses the record locking mechanism and as such, compromises data integrity. It also will prevent any DMOs which may still reside in the underlying Hibernate session from being released, causing a memory leak.

Adding and Modifying Database Tables and DMOs

When working with existing code or writing a new application from scratch, it may become necessary to add a completely new table or to modify an existing one. When a new table needs to be added or an existing one needs to be changed, the following issues need to be addressed:

  • write the DDL to add the new table or change an existing one
  • write the DDL to add a new index to a new or existing table
  • create or modify the DMO interface, DMO implementation class and Hibernate configuration file (.hbm.xml file)
  • how to work with the dmo_index.xml file

As the temporary tables are backed by an H2 database and are automatically managed by FWD (their creation, indexes, etc.), we will discuss only what is needed to maintain the backing tables in the PostgreSQL database. If temporary tables need to be added or changed, there is no need to manually maintain DDL code. All that is needed is to maintain the DMO interface, DMO implementation class, Hibernate configuration file and dmo_index.xml file.

Write DDL to add the new table

First, we will present how the developer can add a new table to the underlying permanent relational database. To accomplish this, DDL will be needed to create the table. The DDL can be generated in two ways: written from scratch by the developer or generated automatically following the steps in the Schema generation target section of the Building Converted Code chapter in the FWD Conversion Handbook.

The simplest way to get the DDL needed to create a new table is to first write the Hibernate configuration file (.hbm.xml file) for this new table. Once this file is complete, run the SCHEMA ant task in the build.xml file to generate the DDL for all tables in the specified database. The generated .sql file will contain the DDL for all the tables. All that is left is to identify the CREATE TABLE, ALTER TABLE and CREATE INDEX statements which apply to this table or to any composite table associated with its extent fields, and copy those statements to another file. After the file has been created, execute the SQL statements against the target database. For PostgreSQL, the command to execute this file is the following:

psql -h <host> -p <port> -U <username> <database-name> -f <schema-file-name>.sql

The .sql file generated by the SCHEMA ant task groups the DDL into the following sections:

  • ALTER TABLE statements to drop all table constraints
  • DROP TABLE statements to drop all tables
  • CREATE TABLE statements to create the relational table and the composite tables (associated with any extent fields); the composite table name uses the parent table's name as a prefix, followed by the "__" string and the extent value.
  • for each composite table, there will be a CREATE INDEX statement followed by an ALTER TABLE, to index the composite table by the parent__id column and to create the actual foreign key reference from parent__id column to the main table.
  • a special case is that of case-sensitive text columns and varbinary columns; these need to be registered in a special section for that DMO in the dmo_index.xml file
  • also, if a column relates to a column with the same name in another table and a foreign relation exists between these two columns, no actual foreign key constraint will be enforced at the database level; instead, the foreign keys will be registered in a special section for that DMO in the dmo_index.xml file

From each section, you must retain only those statements which refer to the new table or its child composite tables. Also, the copied statements must be pasted into the new file in the same order as they appear in the original file. More details about this file can be found in the Data Definition Language (DDL) chapter of Part 3 of the FWD Conversion Reference book.

The other approach is to write the DDL statements to create the table, foreign relations and default indexes from scratch. Let's start with the DDL to create a table. The first step is to determine the fields and their corresponding types; second, if the table needs support for 4GL-compatible extent fields, we need to determine which ones are extent fields. Each newly created table must have a column named id, which is the primary key for the table. For all the table-specific columns, the DDL must specify the name (which must follow the table naming convention), the not null constraint (if required) and the default value (if required). To summarize, when defining a new table, the following construct must be used, where <table_name> is replaced with the actual table name:

create table <table_name>
(
   id bigint,
   <column_name> <type> [not null] [default <value>],
   ...,
   primary key (id)
);

If compatibility with legacy code is needed, then each column's type must be one of the types supported by FWD. For a complete list of supported types, see the Data Type conversion section of Part 3 of the FWD Conversion Reference.
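
For illustration only, a hypothetical customer_order table with no extent fields might then be created as follows (the column names and types here are assumptions for this example, not part of any converted schema):

create table customer_order
(
   id bigint,
   cust_name text,
   cust_number integer not null default 0,
   order_date date,
   primary key (id)
);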

If any of the table's fields are Progress extent fields, then these fields are mapped to composite tables. The name of the composite table follows the naming convention described in the Data Definition Language (DDL) chapter of Part 3 of the FWD Conversion Reference book. Each composite table contains all the extent fields which have the same size. Besides the actual fields, the table must contain the parent__id and list__index columns. The parent__id column is a foreign reference to the record in the main table to which the extent values belong. The list__index column positions the element in the value list. All other fields in the composite table are extent fields of the same size. The main difference between the composite table and the parent table is that the composite table has a primary key composed of the parent__id and list__index columns (in this order). To create a composite table, the following structure can be used, where <extent_size> is replaced with the extent value for the contained columns:

create table <table_name__<extent_size>>
(
   parent__id bigint,
   <column_name> <type> [not null],
   ...,
   list__index integer,
   primary key (parent__id, list__index)
);

After the composite table has been created, an index must be added on the parent__id column, along with the foreign key relation from parent__id to the main table:

create index <table_name__<extent_size>>_fkey on <table_name__<extent_size>> (parent__id);
alter table <table_name__<extent_size>>
   add constraint <constraint ID>
   foreign key (parent__id)
   references <table_name>;

Although the backing database places no restrictions on sharing column names between tables, when using multiple composite tables special attention is needed so that the column names remain distinct among all the composite tables and the main table.

Write DDL to change an existing table

If the table already exists and a new column needs to be added or an existing one changed, there is no alternative other than writing the SQL code by hand. For this, the ALTER TABLE statement is used to either add a new column or change an existing one; the case of an extent column is treated slightly differently. Using ALTER TABLE DDL it is possible to add a new column, set or remove its not null constraint, or even drop an existing column.

To add a new (non-extent) column, its name, type and not null constraint (if any) must be specified. The ALTER TABLE statement is used to modify the table definition:

alter table <table_name>
   add column <column_name> <type>
   [not null];

Once a new column has been added, support for this column must be added to the DMO interface and implementation class and to the Hibernate .hbm.xml configuration; any foreign relations, indexes or other column properties must also be added to the dmo_index.xml file. Details on how to modify these files follow later in this section.

When adding a new extent column, we must first find the composite table to which this column belongs. For this, we must check whether there already exists a table named <table_name__<extent_size>>. This can be done by running the following command in a psql console:

\d <table_name__<extent_size>>

If the table already exists, its definition will appear on the screen. Otherwise, the console will show an error, which tells us that the table does not exist. If the table already exists, an ALTER TABLE statement similar to the one used to add a "simple" column can be used to add the extent column:

alter table <table_name__<extent_size>>
   add column <column_name> <type>
   [not null];

After adding a new column to the composite table, the DMO interface, DMO implementation class, Hibernate configuration file and dmo_index.xml file must be updated. Details on how to update these files follow later in this section.

If an existing column needs to be changed, then the ALTER TABLE statement can be used to set or remove its not null constraint. Although the ALTER TABLE statement allows changing a column's name or type, we do not recommend such actions (especially if the table and column are already used in the application). What will be discussed here is what needs to be done to set or remove the not null constraint.

First, setting the not null constraint. When setting this constraint on an existing column (which may already have records with a null value for this column), it is important to determine the appropriate default value for such records. Once this has been determined, and before enforcing the constraint on the table, set this column to the default value for all records with a null value:

update <table_name>
   set <column_name> = <default>
   where <column_name> is null;

After ensuring there are no null values for this column, we can go ahead and enforce the constraint:

alter table <table_name>
   alter column <column_name>
   set not null;

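Putting both steps together for a hypothetical order_header table whose status column should default to 'NEW':

-- fill in the default for existing rows, then enforce the constraint
update order_header
   set status = 'NEW'
   where status is null;

alter table order_header
   alter column status
   set not null;
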
Once the not null constraint is set, the Hibernate configuration file must also be updated; details on how to do this will follow later in this section.

When a not null constraint is determined to be no longer needed, the ALTER TABLE statement is used to drop it. After removing the constraint, the Hibernate configuration file will need to be updated too. To remove the constraint from the column, use the following construct:

alter table <table_name>
   alter column <column_name>
   drop not null;

The final use described for the ALTER TABLE statement is to drop an existing column. This must be done only when it is confirmed that the column is not used anymore in the business logic, so that it can be safely removed from the backing table.

alter table <table_name>
   drop column <column_name>;

When an extent column (compatible with legacy converted code) needs to have the not null constraint set, the same steps as for a normal table column are used: determine the appropriate default value, update the existing records, then enforce the constraint; special attention is needed so that the correct composite table is used. If an extent column needs to be removed, a similar ALTER TABLE statement is used to remove the column from the composite table; if the composite table is left without any extent columns, it can be removed safely from the backing database, but only after all its constraints have been dropped first:

alter table <table_name__<extent_size>> drop constraint <constraint_name>;
drop table <table_name__<extent_size>>;

In the above code, <constraint_name> is the name of the constraint which enforces the foreign key relation between the parent__id column and the parent table. Once the composite table has been removed from the backing database, the DMO interface, DMO implementation class, Hibernate configuration file and dmo_index.xml file need to be updated accordingly.

Write the DDL to add a new index to a new or existing table

Once the table maintenance is complete, we can consider what kind of indexes need to be added. The backing database supports both unique and ordinary indexes. Both speed data access; unique indexes also ensure that the combination of indexed column values remains unique. To create an index from scratch, the following DDL can be used:

create [unique] index <index_name>
   on <table_name>
   (<column_1>, <column_2>, ...);

When creating an index, special attention needs to be given to any text columns; such columns must be indexed using a special expression, as described in the Data Definition Language (DDL) chapter of Part 3 of the FWD Conversion Reference book. Case-sensitive columns are indexed using the rtrim(<column_name>, E' \t\n\r') expression and case-insensitive columns are indexed using the upper(rtrim(<column_name>, E' \t\n\r')) expression. The DDL to create such an index for a case-insensitive text column would look like:

create [unique] index <index_name>
   on <table_name>
   (upper(rtrim(<column_name>, E' \t\n\r')));

If the table already contains data for the indexed columns and a unique index is created, the creation might fail if the existing values are not unique. In this case, you should consider either making the values unique or reconsidering whether a unique index on those columns is really necessary. If the index creation succeeded and the table contains data, it can be useful to execute the ANALYZE command for the indexed table, to collect statistics about the existing data. To run this command, execute the following at the psql console:

analyze <table_name>;

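For example, a hypothetical unique, case-insensitive index on the cust_name column of a customer table, followed by a statistics refresh, could look like this:

create unique index idx_customer_cust_name
   on customer
   (upper(rtrim(cust_name, E' \t\n\r')));

analyze customer;
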
Finally, after a unique or ordinary index is created, the dmo_index.xml file will need to be updated so that the DMO will be aware of the new index (how to do this is explained later in this section).

How to create new DMOs from scratch

When a new table is added to the database, the job is not complete until this table is registered with the business logic too. To accomplish this, we must map the table to a DMO which will be used all over the business logic. For each DMO, there are three files which need to be created: the DMO interface, the DMO implementation class and the Hibernate configuration file. Details about the structure and contents of each file are found in the Data Model Objects section of the Part 3 - Schema conversion chapter of the FWD Conversion Reference book.

All these DMO related files will need to be placed in the correct package; if the schema name is mydb, then the DMO interface will be placed in the dmo.mydb package and the DMO implementation class will be placed in the dmo.mydb.impl package.

If a temporary DMO needs to be added or changed, all the steps are the same as for permanent DMOs. The only requirement is that the DMO interface, DMO implementation class and Hibernate configuration file will be placed in the correct package, for the _temp schema (i.e. dmo._temp package).

How to change existing DMOs

When working with existing DMOs, an important consideration is how the existing DMO was created: was it generated during conversion, or hand-written from scratch? Each case leads to a different scenario.

In the first case, when modifying DMOs which were generated during conversion, you should make sure that no subsequent conversions need to be run. This translates to making sure no further changes will be needed in the legacy 4GL code; if the legacy 4GL code is still under maintenance, then changes to an existing DMO which maps to a legacy 4GL table will be overwritten on the next conversion run. So, in this case, it is best to make the changes in the 4GL schema and expose them to the new code by running the conversion process; this approach also exposes the changed table to the existing 4GL code. If the 4GL code is no longer maintained, and all changes will be done in the converted Java code, it is OK to change the Java code for the generated DMO.

In the latter case, when the DMO was written from scratch, there are no compatibility constraints related to its changes. Since it is not used in the legacy code, changes to it will not be overwritten if the legacy code is reconverted.

After it has been determined that it is safe to modify the DMO's Java sources, the next step is to determine which database table changes are reflected in the DMO. In all cases, there are three files which will need to be changed: the DMO interface, the DMO implementation class and the DMO's Hibernate configuration file. Depending on the database table changes, different steps will need to be taken.

The database table changes which affect the DMO are those related to table fields - adding a new field or dropping an existing one. Table index changes do not affect the DMO sources - those changes go in the dmo_index.xml file. Likewise, any changes related to case-sensitivity or foreign keys need to be synchronized with the dmo_index.xml file.

When a field needs to be dropped, changes will need to be performed in each of the DMO's files. The following table describes what needs to be dropped from each file:

Action | DMO Interface | DMO Implementation class | Hibernate mapping
remove accessors (getters and setters) | X | X |
remove field definition |  | X | X
remove composite class, if no more fields |  | X | X
remove the special composite field, if composite class is removed |  | X | X
remove the special composite method, if composite class is removed |  | X |

After removing the field definition and the accessors from the DMO implementation class, the default constructor and the assign(Undoable) method must be updated too. If the composite class has no other field, then after its definition has been removed, the private List composite<X> field and all its references must be removed too.

When adding a new field, the affected areas are the same as when dropping a DMO field. In the case of a non-extent field, the first step is to add the field to the Hibernate mapping file. The new field is defined in a property node, which is a child of the class node. For the property, the table column name, DMO property name and the FWD-compatible type are specified as attributes. After modifying the mapping file, the next step is to add the accessor definitions (getters and setters) to the DMO interface. The last step is to modify the DMO implementation class, which must be given the instance field definition and the implementation of the getters and setters defined in the DMO interface; there are also changes in the default constructor (to initialize the field) and in the assign(Undoable) method, to copy the field's value to another DMO instance.

In case of an extent field, the changes start with identifying the composite class to which this field belongs. The composite class for this field is the one which groups together fields of the same extent; if there are no other fields of the same extent, it means no composite class exists and it will need to be added.

The composite class is an inner class defined under the DMO implementation class, with the same structure as mentioned in the previous section (which refers to adding a new DMO). After this class has been created, the composite field must be added (with all its related changes - getter, setter and initialization in the constructor). All the remaining steps (adding the class field which holds the composite elements, the getters, setters, size method, and the default constructor and assign(Undoable) method changes) are the same as the ones mentioned when adding a new DMO.

For the DMO interface, the definitions for all the methods related to the field which holds the list of composite elements (the getters, setters and size method) must be added.

The Hibernate configuration file will need to be changed so that the backing composite database table is mapped to the correct class and also to create the field which holds the list of composite elements. The steps needed to create this mapping are found in the previous section, related to adding a DMO from scratch.

How to work with the dmo_index.xml file

While the 4GL code is still maintained, this file should not be edited by hand, as the file gets generated on each conversion (assuming that the schema generation step is included). Instead, there are two different ways to add a new table: either edit the main database schema file and add the new table definition directly in 4GL style or write the new DMO directly in Java. When writing it directly in Java, DMO registration and other DMO-related information must be added to a special merge file. More details about this special merge file and about how the schema changes can be handled while the 4GL code is still maintained can be found in the Managing Data Model Object (DMO) Changes section of the Integrating Hand-Written Java chapter of this book.

Managing data records using SQL

This section describes how to manually maintain data records, in cases when the application doesn't provide a way to do so (e.g. when there are computed columns, or the records are not managed by the user). This covers only managing records in the permanent database; as the records of a temporary table exist only while the user session is active, no manual record insert, update or delete is needed for them.

If for a certain table the application doesn't provide a way to alter data records, it is possible to do this using the psql console. The only concern is whether the server is running or not. If the server is running and a record needs to be altered, that record must be locked, so that the business logic cannot alter it until the explicit lock is released.

The first part will present the steps needed to alter a record in a table with no extent fields (i.e. there are no composite tables). The changes in these steps for tables with extent fields will be presented after this.

If the server is started, then the record must be locked; for this, the Administrative Console provides the Console/Acquire locks screen, which allows a record ID to be locked explicitly. After an entry has been added on this screen, pressing the Lock Records button adds an exclusive lock for this record ID; it is now safe to alter or add the record, which will be discussed later in this section. When the work is done, the lock must be released explicitly by pressing the Unlock Records button.

If the server is not started, then no locking is needed - all that is left is to start the psql console and connect to the required database:

psql -h <host> -p <port> -U <username> <database-name>

If the server is running, the first step after the SQL console has been started is to find the ID of the record that needs to be locked. If a new record is being added, the ID which will be assigned to the new record must be determined.

When adding a new record, the new record ID can be found by running this command at the psql console:

database=# select max(id) + 1 from <table>;

where <table> is the name of the table to which the new record needs to be added. This will return a unique value, which will be used in the Administrative Console as the ID for the record which will be locked. The command will return an output like this:

?column?
-----------
 120949467
(1 row)

After the lock has been acquired (or it has been confirmed that the server is not running), the INSERT statement is used to add the actual data for this record. The syntax for this command is:

database=# insert into <table> (id, <column1>, <column2>, ...) values (ID, <value1>, <value2>, ...);

where:

  • <table> is the database table name
  • ID is the value computed at the previous step
  • <column1>, <column2> is the list of columns defined for this database table
  • <value1>, <value2> is the list of values for each specified table column

On success, the command will return an output like this:

INSERT 0 1

Depending on the column type, the specified value must follow a specific format; also, the values must be specified in the same order as the column names. The following table describes the format for each PostgreSQL type supported by FWD:

PostgreSQL Type | Format | Example
TEXT | The value must be enclosed in apostrophes | 'value'
DATE | The date must be formatted using the 'YYYY-MM-DD' format | '2010-12-31'
NUMERIC | Use a dot (.) as decimal separator. No digit grouping is needed. | 1.234
INTEGER | No digit grouping is needed. | 1234
BIGINT | No digit grouping is needed. | 1234
BOOLEAN | No special notes. | true or false
BYTEA | The binary string must be enclosed in apostrophes | '<binary string>'

In all cases when the value must be set to the 4GL unknown value (?), the null value is used. Special attention is needed for columns with a not null constraint; in such cases the SQL server will generate an error and the operation will not complete.

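Tying this together, a hypothetical insert into a customer table (the table name, columns and values are illustrative only) might look like this, with the unknown value represented as null:

database=# insert into customer (id, cust_name, start_date, balance, active, notes) values (120949467, 'Acme Corp', '2010-12-31', 1234.56, true, null);
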
After the record was inserted, the lock for the record's ID must be released, if it was acquired (i.e. the server is running).

If a record needs to be deleted, the DELETE statement is used. The first step is to determine the ID of the to-be-deleted record; if the server is running, the record needs to be locked. To find the records which need to be deleted, a SELECT statement can be used to filter through the table records and find the IDs of the ones of interest:

database=# select id from <table> where <clause>;

where <clause> is the filter applied to the records. The output of this statement, if it found any records, is this:

    id
-----------
 120362683
(1 row)

If no record was found, the output will look like this:

    id
-----------
(0 rows)

After the record ID has been found (and locked, if needed), the following statement can be used to delete the record with the specified ID from the backing database table:

database=# delete from <table> where id = <ID>;

where <ID> is the ID of the record which needs to be deleted. Although any kind of expression can be specified in the WHERE clause, it is not recommended to use any expression other than id = <ID>, especially if the server is running. This way, the administrator has full control over which record gets removed from the table. The output of this statement, if executed successfully, is this:

DELETE 1

After the statement was executed, the lock for this record's ID must be released in the Administrative Console, if it was acquired.

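As an example, for a hypothetical customer table where the record to be removed is identified by its cust_num column, the two statements could look like this (the DELETE uses the ID returned by the SELECT):

database=# select id from customer where cust_num = 1001;
database=# delete from customer where id = 120362683;
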
When a record needs to be changed and this can't be done using the application's UI, the first step is the same as when deleting a record: find the ID for the needed record and lock it, if required. After this, the UPDATE statement will be used to change the record:

database=# update <table> set <column1> = <value1>, <column2> = <value2>, ... where id = <ID>;

where, for each column which needs to be updated, its value is specified using the same format as defined above. If executed successfully, the statement will return the following output:

UPDATE 1

Same as when deleting a record, after all updates to this record are done, the last step is to release the lock on this record ID in the Administrative Console.

For fields which are part of a composite table (i.e. extent fields), the record insert, update or delete must use the composite table. If locking is needed, the record ID to lock is that of the parent record to which the extent field belongs.

When adding a new record to a table which has one or more composite tables, the job is not done until those composite tables are populated too, so that each one holds an entry for every index position:

database=# insert into <composite table> (parent__id, <column1>, <column2>, ..., list__index) values (<record ID>, <value1>, <value2>, ..., <index>);

where:

  • parent__id is the ID of the parent record to which the extent fields belong
  • list__index is the position of this entry

The above INSERT statement template must be repeated as many times as the extent size of the composite table's fields, with list__index taking values from 1 to the extent size:

database=# insert into <composite table> (parent__id, <column1>, <column2>, ..., list__index) values (<record ID>, <value1>, <value2>, ..., 1);
database=# insert into <composite table> (parent__id, <column1>, <column2>, ..., list__index) values (<record ID>, <value1>, <value2>, ..., 2);
database=# insert into <composite table> (parent__id, <column1>, <column2>, ..., list__index) values (<record ID>, <value1>, <value2>, ..., 3);
...
database=# insert into <composite table> (parent__id, <column1>, <column2>, ..., list__index) values (<record ID>, <value1>, <value2>, ..., <extent size>);

In the above statements, as the composite table has a unique index on the (parent__id, list__index) columns, the list__index values must be unique for a given parent__id.

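For instance, for a hypothetical composite table book__3 which holds an extent-3 character field named author_name, the inserts for a parent record with ID 120949467 would be:

database=# insert into book__3 (parent__id, author_name, list__index) values (120949467, 'First author', 1);
database=# insert into book__3 (parent__id, author_name, list__index) values (120949467, 'Second author', 2);
database=# insert into book__3 (parent__id, author_name, list__index) values (120949467, 'Third author', 3);
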
When updating an extent field, instead of using the UPDATE statement for the parent table, we use it to change the field in the composite table to which it belongs:

database=# update <composite table> set <column1> = <value1> where parent__id = <ID> and list__index = <index>;

The above statement will update the extent field value at the specified index. If all the entries need to be updated for this field, the and list__index = <index> filter must be removed from the where clause. Note that the list__index and the parent__id must not be changed.

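For example, to change the second author_name entry of the hypothetical book__3 composite table for the parent record with ID 120949467:

database=# update book__3 set author_name = 'Updated author' where parent__id = 120949467 and list__index = 2;
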
If a record which contains extent fields needs to be deleted, the child records in all the composite tables (i.e. for all the extent fields) must be deleted before the parent record itself, after the record ID has been locked (in case the server is running). The DELETE statement template which must be executed for each composite table is this:

database=# delete from <composite table> where parent__id = <ID>;

Using POJOs for Database Entities

What is a POJO

A POJO is a simple class used for storing and transporting data. POJOs are not tied to any particular technology, so they do not share any code with FWD. However, FWD is able to generate these classes, optionally, during the conversion process. In particular, FWD can generate POJOs that represent database entities.

This is a parallel mechanism to the DMO (data model object), which represents database entities that are structured using legacy data types (e.g. subclasses of BaseDataType) and other 4GL-style features. POJOs can represent the same database entities as DMOs, but they do so without any of the legacy system dependencies. This is optimal for passing data to/from external applications which need to be loosely coupled (rather than tightly coupled) to the converted code.

Enabling Generation of POJOs at Conversion Time

By default the POJO classes are not generated at all. The only way to enable POJO generation for a specific schema is to add

<parameter name="generate-pojos" value="true" />

in the desired namespace in your p2j.cfg.xml. Both "true" and "yes" are accepted as positive values.

To configure the package where the POJOs are generated, use the global scope of the same configuration file:

<parameter name="pojoroot" value="com.goldencode.testcases.pojoSpecialPackage" />

If not specified, a pojo package sibling to dmo will be used.

Note:
Generation of POJOs for temp-tables is not (yet) possible simply because this cannot be configured (the _temp schema is not fully supported in p2j.cfg.xml).

An Example

For completeness we will use a complex permanent table which covers all supported field data types. Consider the definition of a tm-all permanent table somewhere in your input .df file. It has scalar fields named f-X of various types and 4 extent fields named e-Y.

To push things to the extreme, we will apply a partial denormalization using the following .hint file:

<?xml version="1.0"?>

<!-- UAST hints for p2j_test schema -->
<hints>
   <schema>
      <table name="tm-all">
         <custom-extent>
            <field name="e-1"/>
            <field name="e-2"/>
            <!-- <field name="e-3"/> this extent field remains normalized -->
            <!-- <field name="e-4"/> this extent field also remains normalized -->
         </custom-extent>
      </table>
   </schema>
</hints>

As a result, a corresponding DMO interface is generated for this table.

In this example, pojoroot was set to com.goldencode.testcases.pojo3, so the generated POJO class is placed in that package.

Notice that the generated POJO class does not have any dependency on other APIs, apart from classes from standard Java. java.lang.String is used for the most exotic types, while GregorianCalendar is used for mapping the date, datetime and datetime-tz ABL data types. The full set of type mappings is the following:

OE data type | DMO data type (com.goldencode.p2j.util package) | POJO data type
blob | blob | byte[]
character | character | java.lang.String
clob | clob | java.lang.String
comhandle | comhandle | java.lang.String
date | date | java.util.GregorianCalendar
datetime | datetime | java.util.GregorianCalendar
datetime-tz | datetimetz | java.util.GregorianCalendar
decimal | decimal | java.math.BigDecimal
int64 | int64 | java.lang.Long
integer | integer | java.lang.Integer
handle | handle | java.lang.Long
logical | logical | java.lang.Boolean
object | object | java.lang.Long
raw | raw | byte[]
recid | recid | java.lang.Integer
rowid | rowid | java.lang.Long

Working with POJOs

When working with POJOs there are two main types of operations: conversion to/from DMOs and de-/serialization to JSON. Of course, de-/serialization to XML or other formats can also be used, but these are outside the scope of this document.

Obtaining a POJO from a DMO

A POJO object can be obtained from a DMO using the Record.toPOJO() method. If the POJO class was defined (that is, generated at conversion time), an object of that type will be returned; on any error, null is returned. The method is abstract in the Record class, but it is implemented on-the-fly in the DMO implementation class when the DMO is registered with DmoMetadataManager. A hard constraint for this is that the conversion method needs access to protected data from BaseRecord. The assembler uses the record's recordMeta to generate the method's code without using reflection at all; the required conversions to POJO types are injected as needed.

In our case, let's consider we have a record in a FWD buffer which has eventually been populated, flushed, and re-fetched:

TmAll.Buf tmAll = RecordBuffer.define(TmAll.Buf.class, "fwd", "tmAll", "tm-all");
[...]
RecordBuffer.openScope(tmAll);
[...]
         tmp1.assign(new character("Lorom Ipsom Delir 1."));
         tmp2.assign(new character("Ipsem 2. Lerom Dalot"));

         tmAll.create();
         tmAll.setF1(new character("Polo"));
         tmAll.setF2(date.fromLiteral("02/27/2006"));
         tmAll.setF3(decimal.fromLiteral("2.1234056789"));
         tmAll.setF4(new integer(798));
         tmAll.setF5(new int64(111222333444555L));
         tmAll.setF6(new logical(false));
         tmAll.setF7(new datetime(new character("03/26/2003 18:46:01")));
         tmAll.setF8(new datetimetz(new datetime(new character("04/25/2002T15:29:23+02:00"))));
         new LobCopy(new SourceLob(tmp1), new TargetLob(new FieldReference(tmAll, "f9"))).run();
         tmAll.setF10(new clob("Erog Scugito Ets #3"));
         tmAll.setF11(tmAll.getF11().setLength(20));
         tmAll.setF11(tmAll.getF11().setString(tmp2, 1));
         tmAll.setF12(tmAll.recordID());
         tmAll.setE11(new character());
         tmAll.setE12(new character("Alpha"));
         tmAll.setE13(new character("Beta"));
         tmAll.setE14(new character(""));
         tmAll.setE15(new character("\13"));
         tmAll.setE21(new date());
         tmAll.setE22(date.today());
         tmAll.setE23(date.fromLiteral("12/13/2014"));
         tmAll.setE24(date.fromLiteral("07/08/09"));
         tmAll.setE25(date.fromLiteral("2/3/3022"));
         tmAll.setE3(0, new decimal());
         tmAll.setE3(1, new integer(0));
         tmAll.setE3(2, decimal.fromLiteral("888.1234567"));
         tmAll.setE4(0, new integer());
         tmAll.setE4(1, new integer(0));
         tmAll.setE4(2, new integer(-999999));
         tmAll.release();
[...]
new FindQuery(tmAll, (String) null, null, "tmAll.f1 asc, tmAll.f2 desc, tmAll.f3 asc, tmAll.f4 desc, tmAll.f5 asc, tmAll.f6 desc, tmAll.f7 asc, tmAll.f8 desc, tmAll.f12 desc").first();

A copy of the associated POJO can be obtained using:

com.goldencode.testcases.pojo3.fwd.TmAll pojo = (com.goldencode.testcases.pojo3.fwd.TmAll) tmAllDMO.toPOJO();
There are two issues to note here:
  1. the result requires a cast. This is required because the method is abstract in the Record class and the result type is unknown until runtime;
  2. the full path of the POJO is used. That is because the POJO class and the DMO interface share the same name; without specifying the whole package, the compiler would fail to identify each object type.

Of course, if the actual object type is not important at that place, the code can be simplified as:

Object pojo = tmAllDMO.toPOJO();

Converting a POJO to a DMO object

To obtain a DMO record from a POJO, the Record.fromPOJO() static method must be used. If the class of the parameter was registered as a POJO type when a DMO interface was registered with DmoMetadataManager, then an object of that specific DMO type will be created and populated with data from the POJO object passed as parameter. If the object's type is unknown, or in case of other errors, null is returned.

To populate the new DMO with data, fromPOJO() invokes populateFromPOJO() on the newly created object. At this point, the DMO object is aware of the type of the POJO and knows which getter methods to access. This method is also abstract in Record and assembled at runtime, when the DMO is registered. Again, there is no reflection used and, if required, static conversion methods from the Record class are injected.

DataModelObject respawnedDMO = Record.fromPOJO(pojo);

This operation does not require the resulting object to be stored in FWD's database, so the resulting object is not associated with FWD, does not have any flags set and has no primary key set. If the association is required, then the RecordBuffer.associate() method must be used as follows:

((BufferImpl) tmAll).buffer().associate(respawnedDMO);
Assuming no exceptions are thrown, several things will happen:
  • the respawnedDMO's properties will be marked as dirty, and the NEW and CHANGED flags of its status will be set, in order to make the flush operation possible and successful when requested on the buffer;
  • a new primary key is generated and associated with the DMO object;
  • the old DMO stored in the buffer is flushed and the buffer released, to make room for respawnedDMO, which is set as the current record in the buffer.

Serializing as JSON

Serialization to JSON is really simple.

         ObjectSerializer pojoSerializer = new ObjectSerializer();
         JsonNode pojoAsJson = pojoSerializer.toJson(pojo);
         String prettyPojoJSON = pojoAsJson.toPrettyString();
         System.out.println(prettyPojoJSON);

If the above code is executed, the console output will contain:

{
  "f1" : "Polo",
  "e11" : null,
  "e12" : "Alpha",
  "e13" : "Beta",
  "e14" : "",
  "e15" : "\u000B",
  "f2" : 1140991200000,
  "e21" : null,
  "e22" : 1658178000000,
  "e23" : 1418421600000,
  "e24" : 1247000400000,
  "e25" : 33200748000000,
  "f3" : 2.12,
  "e3" : [ null, 0, 888.12 ],
  "f4" : 798,
  "e4" : [ null, 0, -999999 ],
  "f5" : 111222333444555,
  "f6" : false,
  "f7" : 1048697161000,
  "f8" : 1019741363000,
  "f9" : "TG9yb20gSXBzb20gRGVsaXIgMS4=",
  "f10" : "Erog Scugito Ets #3",
  "f11" : "SXBzZW0gMi4gTGVyb20gRGFsb3QA",
  "f12" : 1214001
}

Deserializing from JSON

The reverse operation, de-serialization, can be implemented as follows:

         com.goldencode.testcases.pojo3.fwd.TmAll pojoFromJson = null;
         JsonParser parser = null;
         try
         {
            parser = new ObjectMapper().getFactory().createParser(new StringReader(prettyPojoJSON));
            pojoFromJson = new ObjectMapper().readValue(parser, com.goldencode.testcases.pojo3.fwd.TmAll.class);
         }
         catch (IOException e)
         {
            System.out.println("Failed to read back POJO from JSON: " + e);
         }

Of course, we do not expect any errors. To confirm the operation and view the content of the newly created object, we can use the JSON format again:

System.out.println(pojoSerializer.toJson(pojoFromJson).toPrettyString());

Note on JSON Compatibility between DMOs and POJOs

One of the initial ideas was to use JSON as an intermediary protocol format for transport between modules of an application, or even between applications. Ideally, one peer would be able to serialize (DMO) objects in JSON format and, when the object is read by the other peer, it could be deserialized as the other type (POJO). Unfortunately, this is not possible, because the DMOs are designed to keep compatibility with legacy ABL code, so they use the legacy names for the elements/attributes, while the POJOs are not aware of the original metadata, so they use the property names for the same purpose. Occasionally, the legacy name and the property name are the same, but generally they do not match. A second issue is that the date-related types are not mapped identically (the POJOs use GregorianCalendar). So we cannot serialize a DMO and deserialize it as a POJO (or the other way around), because the deserializer would be unable to map the values, resulting in broken object integrity. For now, only the following protocols will work:

Sender Side --serialization--> Transmission --deserialization--> Receiver Side
1 POJO JSON (property) POJO
2 DMO JSON (legacy-name) DMO
3 DMO --toPOJO()--> POJO JSON (property) POJO
4 POJO JSON (property) POJO --fromPOJO()--> DMO

A solution (not yet implemented) would be to create a special serialization for DMOs in which the property names are used as field names in JSON and the data types are mapped correctly. This is necessary because the POJOs are not able to retain the additional legacy information. If such de-/serialization is added to the DMOs, then a protocol which uses different 'containers' for data will probably work.

Direct JDBC Usage

Note: this section is under construction.

Creating Database Records External to the FWD Server

Surrogate Primary Keys

When converting a Progress database schema to a new schema for use with one of FWD's supported databases, the FWD conversion adds a surrogate primary key column. This primary key has no business meaning to the converted application. It is used to represent the Progress RECID/ROWID value to application logic. It is used within the FWD persistence runtime for various purposes as well. Whereas the RECID/ROWID in Progress is not explicitly defined for a table in the Progress schema, the surrogate primary key represents an additional, explicit column in each converted table. Thus, the existence of this explicit primary key column must be taken into account for external code which interacts with the migrated database.

The surrogate primary key column is named recid by default, to avoid any conflict with existing field names in the original database, since recid is a reserved keyword in Progress.

The primary key is defined as a 64-bit integer. It is non-nullable. Each record in the database has a unique surrogate primary key, when records are created by the FWD database import process or by the FWD runtime environment.

The FWD runtime creates new primary key values for persistent (i.e., non-temp-table) records using a sequence named p2j_id_generator_sequence. If records are created and inserted externally to the FWD server, this sequence should be used to generate new surrogate primary keys, to ensure primary keys are not duplicated across records. Note that because the primary key is explicitly defined in a FWD schema, the value for this extra column must be included in SQL INSERT statements. The primary key column is not defined as an automatically incremented data type in the FWD schema, because when creating new records, the FWD runtime needs to generate the primary key value and use it internally, before the database insert occurs.

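As a minimal sketch (the customer table and its columns are hypothetical; the primary key column is assumed to use the default recid name), an external SQL insert could draw its key directly from this sequence:

-- hypothetical external insert which obtains the surrogate key from the FWD sequence
insert into customer (recid, cust_num, cust_name)
values (nextval('p2j_id_generator_sequence'), 1001, 'Acme Corp');
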

© 2004-2022 Golden Code Development Corporation. ALL RIGHTS RESERVED.