Integrating External Applications

FWD provides a remote object protocol that allows the development and deployment of a distributed system using simple Java classes. From a programming perspective, the FWD class that provides the remote object facility is com.goldencode.p2j.net.RemoteObject. The RemoteObject protocol allows a developer to code a Java interface that is exported as an application programming interface (API) on one or more systems and accessed remotely from another system. The services that are exported and the calling code that accesses those services are just “plain old Java objects” (POJOs).

At the core of this approach is a Java interface that represents the API to be provided by the FWD server. On the FWD server, an implementation of that interface is registered as an external API. On the remote side (the client), a network session is established with the FWD server. Once authenticated, the client obtains a local object with the same interface as the API. That local object is a proxy, such that any calls to the local object are transparently redirected to the FWD server where they are routed to the registered implementation code. Any parameters of the method that is called are included with that method call and will be delivered to the server code. The server code executes and provides any return value or throws an exception, either of which is transparently returned to the remote client. This natural approach makes it extremely simple to program servers and clients. No complicated network logic is needed; the developer can focus exclusively on the functionality of the services being provided.

Using the FWD remote object approach, anything you can do with a regular Java object is naturally available via this mechanism. The only limitation is that anything sent as a parameter or returned as a return value MUST be Serializable. Even exceptions are properly forwarded as needed, so the entire design is really just a remote Java object API.

The rest of this chapter describes how to create and access hosted services using the FWD remote object protocol.

Hosting Custom Services

Design the API

Each exported API is defined as a Java interface. The methods included in the interface are the methods that will be callable via the FWD remote object protocol.

The most important limitation is that all method parameters and all return values must be of a type that is Serializable.

Example:

package com.acme.corp.something;

public interface SomeQueue
{
   public void clear();

   public boolean isEmpty();

   public int getSize();

   public SomeEvent next();

   public SomeEvent peek();

   public SomeEvent peek(int index);

   public SomeEvent[] filter(SomeEventCriteria[] criteria, boolean direction)
   throws SomeCheckedException;
}

It is valid to have void return values and an empty parameters list for any method. All of the primitive Java types (e.g. int, long, float, double, boolean...) are Serializable and thus they can be used freely as parameters and return types.

Many Java object types are Serializable (e.g. java.lang.Integer or java.util.ArrayList), so they too can be used freely. Any custom classes (SomeEvent or SomeEventCriteria in the example above) that are used as parameters or return types must be Serializable. See below for some tips on optimizing performance.
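As a concrete illustration, the following is a sketch of what the SomeEvent class from the example interface could look like. The fields shown here are hypothetical; the only actual requirement is that the class implements Serializable.

```java
import java.io.*;

// Hypothetical shape for the SomeEvent class used in the example interface.
// Implementing Serializable is all that is required for the class to be
// usable as a parameter or return type in the remote object protocol.
public class SomeEvent
implements Serializable
{
   private final int    id;
   private final String name;

   public SomeEvent(int id, String name)
   {
      this.id   = id;
      this.name = name;
   }

   public int getId()
   {
      return id;
   }

   public String getName()
   {
      return name;
   }
}
```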

There is no specific limit on the number of parameters; primitives and Serializable objects of any type can be intermixed as needed.

Array types can be used as parameters and as a return type.

Exceptions (including checked exceptions) can be specified and thrown. Unchecked exceptions don't have to be listed, but checked exceptions do need to be specified as part of the method signature. Only instances descended from java.lang.Exception are forwarded over the remote object protocol. The Exception class is Serializable so any custom exceptions will naturally work with the protocol.
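For example, a checked exception like the SomeCheckedException referenced in the interface above could be as simple as the following sketch (the constructor shown is illustrative):

```java
// Hypothetical checked exception from the example interface. Because
// java.lang.Exception is Serializable, no extra work is needed for this
// class to be thrown across the remote object protocol.
public class SomeCheckedException
extends Exception
{
   public SomeCheckedException(String message)
   {
      super(message);
   }
}
```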

Method overloading is supported, but the method parameter lists must be different. The return type is not considered in matching the proper method signature.

Remember that the code that implements the API is always in a separate JVM process, if not also on a separate machine. When a method is called, any object instances passed as parameters will be delivered to the implementing method as a copy of that instance. That is how serialization works: it renders the instance into a form that can be transmitted over the network, transfers it, uses that data to create a new instance of that same object on the remote side, then passes that new instance to the method implementation. On return, the same behavior occurs for the return value. The caller receives a copy of the return value instance from the server side, not the same instance. This means that any given instance passed as a parameter or return value is not shared between the caller and the service; the two sides instead each have a copy of the instance that was sent by the other side. As a result, subsequent changes to the state of an instance won't be visible on the other side of the session. For example, calling a setter of the class passed as a parameter only changes that instance (on the server side); it does not change the state of the caller's original instance. If changed state is needed on both sides, the API methods must return the changed instance or include that changed data as part of some larger aggregating class which is the return value.
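The copy semantics can be demonstrated with plain JDK serialization, which is the underlying mechanism. The Payload class and roundTrip() helper below are illustrative, not part of FWD:

```java
import java.io.*;

// Demonstrates that serialization produces an independent copy, which is
// the same behavior a parameter or return value has after crossing the
// remote object protocol.
public class CopyDemo
{
   static class Payload
   implements Serializable
   {
      int counter;
   }

   // Serialize and immediately deserialize, simulating the network hop.
   static Payload roundTrip(Payload p)
   throws IOException, ClassNotFoundException
   {
      ByteArrayOutputStream bos = new ByteArrayOutputStream();
      new ObjectOutputStream(bos).writeObject(p);
      ObjectInputStream in =
         new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
      return (Payload) in.readObject();
   }

   public static void main(String[] args)
   throws Exception
   {
      Payload original = new Payload();
      Payload copy     = roundTrip(original);

      // Changing the copy (as a server-side setter call would) does not
      // affect the original instance.
      copy.counter = 42;
      System.out.println(original.counter);   // still 0
   }
}
```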

The process of serialization can be expensive in terms of performance. For complex objects (which may contain references to many other complex objects), leaving the serialization to the JVM is usually a bad idea. For simple data, use primitives instead of the Java wrapper classes (e.g. use int instead of Integer) wherever possible. The primitives serialize faster than the wrappers. Likewise, for performance reasons, use arrays instead of Java collections as parameters or return values.
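The cost difference is easy to observe with plain JDK serialization. The SizeDemo class below is illustrative, and exact byte counts vary by JVM, but the primitive array form is consistently much smaller:

```java
import java.io.*;

// Compares the serialized size of a primitive array against the equivalent
// wrapper array to show why primitives and arrays are preferred for
// parameters and return values.
public class SizeDemo
{
   static int serializedSize(Object o)
   throws IOException
   {
      ByteArrayOutputStream bos = new ByteArrayOutputStream();
      new ObjectOutputStream(bos).writeObject(o);
      return bos.size();
   }

   public static void main(String[] args)
   throws IOException
   {
      int[]     prim = new int[1000];
      Integer[] wrap = new Integer[1000];

      for (int i = 0; i < 1000; i++)
      {
         prim[i] = i;
         wrap[i] = i;
      }

      // The primitive array serializes to a fraction of the bytes needed
      // for the wrapper array.
      System.out.println("int[]     : " + serializedSize(prim) + " bytes");
      System.out.println("Integer[] : " + serializedSize(wrap) + " bytes");
   }
}
```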

Implementing Externalizable in custom data objects (which are transferred as parameters or return values) is a way to improve performance. Externalizable is a sub-interface of Serializable, which means that the result meets the requirements of the remote protocol. The difference is that with Externalizable, the developer must create methods to read and write instances of the class to a stream. In effect, the developer manually writes the code to serialize (writeExternal()) and deserialize (readExternal()) the instance. Many of the techniques described above can be used to render the instance data much more efficiently than occurs by default with the JVM's serialization processing. For large objects, or even for small objects that are sent frequently, this can make a measurable difference in the performance of the remote object protocol.
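A minimal sketch of such a class follows; the FastEvent name and fields are hypothetical:

```java
import java.io.*;

// Hypothetical data class using Externalizable: the developer writes the
// serialized form by hand instead of relying on the JVM's default
// serialization processing.
public class FastEvent
implements Externalizable
{
   private int    id;
   private String name;

   // Externalizable requires a public no-argument constructor.
   public FastEvent()
   {
   }

   public FastEvent(int id, String name)
   {
      this.id   = id;
      this.name = name;
   }

   public int getId()
   {
      return id;
   }

   public String getName()
   {
      return name;
   }

   @Override
   public void writeExternal(ObjectOutput out)
   throws IOException
   {
      // Write only the raw field data, in a fixed order.
      out.writeInt(id);
      out.writeUTF(name);
   }

   @Override
   public void readExternal(ObjectInput in)
   throws IOException
   {
      // Read the fields back in the same order they were written.
      id   = in.readInt();
      name = in.readUTF();
   }
}
```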

Implement the Server Code

The server side will take one of two forms: a traditional Java class that implements the interface of the hosted service OR a Java class that has an exactly matching static method for each of the methods in the interface.

The first approach means that the hosted service is provided by a single non-static instance which can contain instance data just as any other object instance. That single instance is used for all method calls, no matter how many simultaneous sessions are making calls to the service. The following is an example:

package com.acme.corp.something.server;

import com.acme.corp.something.*;
import java.util.*;

public class ServerQueue
implements SomeQueue
{
   private Deque<SomeEvent> queue = new ArrayDeque<SomeEvent>();

   public synchronized void clear()
   {
      queue.clear();
   }

...

}

This example uses instance data and synchronizes access to it to ensure data integrity when accessed concurrently by multiple threads. The example is abbreviated (using ellipses) but in practice the entire interface must be implemented as is normal in Java.

The second approach allows a static “implementation” of the interface. There is no Java language feature that allows an interface to be implemented using static methods; FWD mimics the behavior that would occur if it were possible. The code must exactly duplicate all methods in the interface, with the only difference being that each method is declared with the static keyword.

Example:

package com.acme.corp.something.server;

import com.acme.corp.something.*;
import java.util.*;

public class ServerQueue
{
   private static Deque<SomeEvent> queue = new ArrayDeque<SomeEvent>();

   public static synchronized void clear()
   {
      queue.clear();
   }

...

}

In this case the data must be static so that the methods can access the data. There will not be any instances of the class created. Instead the hosted service will be called on a static basis, directly on the class itself. Class-level synchronization is used in this case to ensure data integrity.

Neither implementation approach requires the class that implements the hosted service to be Serializable, since that class is never sent over the protocol. But any data that is received or sent must be Serializable, as noted in the section above entitled Design the API.

See the chapter on Integrating Hand-Written Java Code for details on how to write code on the server-side. If it is necessary to call converted 4GL code from the hosted service, examine the section entitled Progress-Compatible Top-Level Session Wrapper.

If the hosted service will be accessed by multiple simultaneous clients, there may be a need to store session-specific state. For details on how to safely store and access session-specific state, see the section on Context-Local Data in the chapter on Runtime Hooks and Plug-Ins. If there is only ever a single FWD session accessing the API, then this may seem less important, but it is not possible through configuration to ensure that this assumption is always honored. It is unsafe to store state across API calls unless specific precautions are taken to protect the data and ensure its integrity. All access to any shared data must be synchronized or otherwise protected with a locking mechanism so that only one thread can read or write the data at any given time. Making the access mutually exclusive ensures that the data's integrity is maintained. Using non-shared data (session-specific state such as is documented in the Context-Local Data section) eliminates this issue entirely, since each session has an independent version of the data.
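As a minimal illustration of the locking requirement (generic Java, not FWD-specific), the class below synchronizes every read and write of a shared field so that only one session thread can touch it at a time:

```java
// Generic example of protecting shared service state: both the read and the
// write paths synchronize on the same monitor, so concurrent sessions can
// never observe or produce a partially updated value.
public class SharedCounter
{
   private long count = 0;

   public synchronized void increment()
   {
      count++;
   }

   public synchronized long get()
   {
      return count;
   }
}
```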

If the API implementation is completely stateless, then there is no data to store and the session-specific storage of state is not needed.

Modify the FWD Server Classpath

The resulting interface, implementation class and all other supporting classes must be accessible in the FWD server's classpath. If the classes are in a jar file, that jar must be explicitly listed in the classpath. If the class files are “loose”, then the root path to the top-most package must be included in the classpath.

The FWD server must be able to load all of the needed classes from the file system or a jar in the file system.

Register the API at FWD Server Startup

Once the interface and implementation classes are written (and compiled) and are available to be loaded in the FWD server, the FWD server must be configured to export that API when the server starts. This exporting process is known as registration.

The following code must be executed to register the API when the implementation class is non-static (when the interface is implemented directly):

import com.acme.corp.something.*;
import com.acme.corp.something.server.*;
import com.goldencode.p2j.net.*;

...

RemoteObject.registerNetworkServer(SomeQueue.class, new ServerQueue());

In the case where the static “implementation” has been used, the following code is used to register the API:

import com.acme.corp.something.*;
import com.acme.corp.something.server.*;
import com.goldencode.p2j.net.*;

...

Class<?>[] ifaces = new Class[] { SomeQueue.class };
RemoteObject.registerStaticNetworkServer(ifaces, ServerQueue.class);

There is a startup hook available that allows custom logic to run when the FWD server is starting. That is the optimal location to place any registration calls. That ensures that the API is loaded before any clients can connect to the server. For details, please see the section entitled Server Initialization and Termination Hooks in the Runtime Hooks and Plug-Ins chapter.

Creating a Process Account and Credentials

The external application is represented on the FWD server as a process account. The external application must have exclusive and private access to a unique private encryption key and a signed certificate that are stored in a key-store file. That key-store will be encrypted and password protected, but its contents allow the external application to authenticate with the FWD server. The code to access the FWD server references the key-store in the bootstrap configuration, providing the FWD session establishment code with the proper aliases and passwords to decrypt and use the contents of the key-store file. This certificate is sent to the server as the external application's credentials.

To validate that the FWD server is the expected server, the external application uses a trust-store. The trust-store provides public certificates for the known FWD servers. The code to access the FWD server references the trust-store in the bootstrap configuration, providing the FWD session establishment code with the proper alias that represents the server's certificate. If that certificate matches the server's certificate as sent during the SSL handshaking process, then the connection is allowed.

The important thing here is to ensure that the external application has access to its specific key-store and to the trust-store for the server (which is not a private file). For details on how to create these files, please see the chapter entitled Key-Stores and Trust-Stores in the FWD Runtime Installation, Configuration and Administration Guide.

The certificate for the external application must be created as a “Peer Certificate” in the FWD server's configuration. This adds the certificate to the list of known certificates that can be associated with an account. For details on how to create the peer certificate on the server, please see the chapter entitled Creating a Peer Certificate in the FWD Runtime Installation, Configuration and Administration Guide.

A valid process account must exist on the FWD server. For details on how to create the process account on the server, please see the chapter entitled Creating a Process Account in the FWD Runtime Installation, Configuration and Administration Guide. As part of that task, the process account must be associated with the peer certificate previously created. That ensures that when the external application sends the certificate as its credentials, the FWD server will match up the proper process account and allow access. This process account MUST be in the list of subjects supplied in the net ACL for the exported API. See the section below entitled Modify the FWD Server Security Configuration.

Once these tasks are complete, the external application should be able to successfully access the server.

Modify the FWD Server Security Configuration

After the code is written and the API exported, that API will not be accessible unless the FWD server's security configuration allows access to the account(s) that need to use that API. By default, an exported API is not accessible to any accounts. To resolve this, at least one Access Control List (ACL) entry must be added to the FWD server's configuration.

There are two mechanisms to make the change. The first is the graphical administrative user interface (the “admin interface”), a Java-based applet that runs in a browser. The second is direct editing of the server's directory file (which is the backing configuration for the server).

For details, please see the section entitled Adding a NET Resource ACL in the book entitled FWD Runtime Installation, Configuration and Administration Guide.

The following information will be required:

  • The fully qualified package and class name of the interface being exported: com.acme.corp.something.SomeQueue in this example. This will become the name of the net resource that is being protected, otherwise known as the Resource Instance Name.
  • The list of accounts that must have access. This will become the Subjects list in the ACL.
  • The Rights needed will be read and execute.

The key to all of this is that the new ACL configures the net resource to allow a specific list of subjects a particular set of rights (permissions) for a specific set of exported APIs.

Don't make the mistake of using the implementing class name for the Resource Instance Name. In the example above, it would be an error to specify com.acme.corp.something.server.ServerQueue since that is not the interface class name. Whatever class defines the interface you are exporting must be used both in the ACL as the Resource Instance Name and in the RemoteObject.obtainNetworkInstance() call on the client (see the section below entitled Accessing Services). The client just gets a local proxy that represents the interface and redirects any calls to the server. Only the server knows the specific class name of the implementation; the client has no idea what implementation class is used on the server. The two sides rendezvous based on the interface name and its contained method signatures.

Debugging Tips

Problems with the mechanics of the remote object protocol can be debugged by enabling logging at the FINEST level for the com.goldencode.p2j.net package. Each exported API (every method in each exported interface) is listed when it is exported. Every inbound and outbound method call via the protocol will have detailed log entries, including timing information. See the chapter entitled Logging Infrastructure for details on how to change the logging settings.

Security-related problems (ACLs, account authentication, security decision failures) can be debugged using logging that is specific to the com.goldencode.p2j.security.SecurityManager class. Set logging to DATA for the security manager, by adding the following section to the /security/config section of the FWD directory:

<node class="string" name="debug-level">
   <node-attribute name="value" value="DATA"/>
</node>

All logging output (for both the standard net package logging and the security manager logging) will appear in the server log.

Establishing a Session with the FWD Server

External applications access the FWD server through the remote object protocol. Before that can be used, the external application must establish a session with the FWD server. This involves writing custom Java code to connect to the FWD server and then authenticate using a known account. Once complete, the session can be used to obtain access to the exported APIs of the FWD server.

An external application can establish multiple sessions to the same FWD server. These sessions can be either direct or virtual; although their usage is the same (when writing code to invoke a remote API, there is no difference based on the type of session used), they differ in which and how many resources are consumed. When using direct sessions, FWD establishes a unique queue for each created session, so all traffic for that session uses its own queue (and its own open socket). Virtual sessions, on the other hand, share a single queue, and all communication is safely multiplexed over that queue.

When virtual sessions are used, there may be a bandwidth issue at some point depending on transaction volumes, since only a single socket is used for all traffic and all communication is handled by a single pair of reader/writer threads on each end. The protocol works by sending messages which are made to look like API calls. Only one message can be sent on the socket at a time (in one direction). Once that message is sent, the next can be sent and so on, but they are serialized. The reading and writing sides of the socket operate independently. In addition, processing is asynchronous between two requests: both can be in flight at once and either can finish before the other. The only synchronous guarantee is that the start of an API call always comes before the response to that call. Otherwise the API calls are independent, except that they share the transport.

On the other hand, when using direct sessions, each session has its own queue and its own pair of reader/writer threads on each side. This provides more bandwidth and resources compared to the virtual sessions case. But as the number of direct sessions increases, the number of open sockets and the thread count on the FWD server increase; depending on the configuration of the machine on which the FWD server is running, you may reach the limit of open sockets or starve other applications which need to use sockets.

To conclude, when deciding between virtual and direct sessions, take into account the number of remote API requests executed by the external application and how quickly responses are needed. If the external application executes remote APIs very frequently and responses are needed in real time, then direct sessions should be used. Otherwise, virtual sessions will suffice and unnecessary resources will not be consumed.

The following code shows how to connect (by creating a direct session) via SSL to the FWD server using an X.509 certificate to establish the external application's identity:

import com.goldencode.p2j.cfg.*;
import com.goldencode.p2j.net.*;
import com.goldencode.p2j.security.SecurityManager;

...

Session session = null;

try
{
   BootstrapConfig bc = new BootstrapConfig();

   bc.setServer(false);
   bc.setConfigItem("net", "server", "host", hostname);
   bc.setConfigItem("net", "server", "port", port);
   bc.setConfigItem("net", "connection", "secure", "true");
   bc.setConfigItem("net", "dispatcher", "threads", "2");
   bc.setConfigItem("security", "truststore", "filename", truststore);
   bc.setConfigItem("security", "truststore", "alias", alias);
   bc.setConfigItem("security", "certificate", "validate", "true");
   bc.setConfigItem("security", "keystore", "filename", keystore);
   bc.setConfigItem("security", "keystore", "processalias", processalias);
   bc.setConfigItem("access", "password", "truststore", tspasswd);
   bc.setConfigItem("access", "password", "keystore", kspasswd);
   bc.setConfigItem("access", "password", "keyentry", kepasswd);

   SecurityManager secMgr  = SecurityManager.createInstance(bc);
   SessionManager  sessMgr = SessionManagerFactory.createLeafNode(bc);
   session = sessMgr.connectDirect(null, null);

   if (session == null)
   {
      // Houston, we have a problem!
   }
   else
   {
      // Use the session here.
   }
}

catch (Exception excpt)
{
   // Unexpected failure, deal with it.
}

The BootstrapConfig class encodes the configuration of the FWD environment including how to contact the server. Once that configuration is setup, the SecurityManager and SessionManager are initialized. Then the connection is established (via LeafSessionManager.connectDirect()). Authentication occurs during connectDirect() and it is completely non-interactive.

In addition to the above code, the external application will need access to p2j.jar so that the various FWD classes can be used. A key-store and trust-store will also be required. The key-store contains the private key/certificate establishing the identity of the external application; it is matched to a specific FWD account by the FWD server. The trust-store contains the certificate for the FWD server; it allows the SSL handshake process to validate that the server is legitimate.

For details on how to create and configure a new process account (along with the creation of the key-store and trust-store files), please see the section below on Creating a Process Account and Credentials.

When using virtual sessions, the only difference is related to the code which creates the session. So, the following line in the above code:

session = sessMgr.connectDirect(null, null);

needs to be replaced with this one:

session = sessMgr.connectVirtual(bc, null);

When the sessMgr.connectVirtual() call returns without throwing an exception, a new virtual session has been created. If this is the first attempt to connect to this FWD server, then the queue is started. Note that once a virtual session is established to a FWD server, all subsequent virtual sessions authenticate using the same credentials as the first session. This is done automatically by FWD.

When connecting to a FWD server, the external application is known as a “leaf” node in FWD terms. From a leaf node, it is possible to create multiple direct or multiple virtual sessions to another router node. However, it is not possible to have direct and virtual sessions at the same time to the same router node. So you must use either direct or virtual sessions; if the client tries to create a virtual session once a direct one has been established (or a direct session once a virtual one has been established), FWD will fail with an IllegalStateException. Note that once all sessions are disconnected, the next session created can be of either type, virtual or direct.

The limitation above applies per router node; one might decide to create multiple virtual sessions to one router and multiple direct sessions to another router node. As FWD manages the created sessions by router node, the only thing to consider is consistency: once a direct or virtual session is established to a router node, all other sessions established to that same router node must be of the same type.

An important note is that SecurityManager.createInstance and SessionManagerFactory.createLeafNode must be called only once, before the creation of any session. When all created sessions are terminated and there is no other active connection, these methods can safely be executed again; the result is that the FWD client state is reinitialized.

When virtual sessions are created, a BootstrapConfig instance is passed to the connectVirtual call each time a new session is needed, so the configuration can point to the same router node or to a completely different one, with the limitation that all authentication is done using the credentials from the first virtual connection made to that router node. In the direct sessions case, if a session needs to be established to the same router node using different credentials or to another router node, then instead of using:

session = sessMgr.connectDirect(null, null);

use:

session = sessMgr.connectDirect(config, null, null);

where config is another instance of the BootstrapConfig class. If the config parameter is null, then the default configuration (used when the SessionManager and SecurityManager were initialized) will be used.

The code in the previous example shows how to establish the session just before executing the remote API, each time the remote API is executed. This approach is time consuming, as each time a session is created the socket must be opened, the queue must be started, authentication must be done and so on. Instead, we recommend a pooling approach: sessions are opened and saved in a pool; when a remote API needs to be executed, a session is retrieved from the pool; the session is put back in the pool when all work with it is done. This way, a number of sessions are always open and used as necessary; if the pool runs out of sessions, you can decide to wait until another session becomes available or to create another session (which will be added to the pool when the work with it is finished). The following is an example of how a session pool can be defined:

import com.goldencode.p2j.cfg.*;
import com.goldencode.p2j.net.*;
import com.goldencode.p2j.security.SecurityManager;
import java.util.*;

public class FWDSessionPool
{
   private static final int POOL_SIZE = 20;

   private static List<Session> pool = null;

   private static SessionManager sessMgr;

   public static Session getSession()
   {
      Session sess = null;
      while (sess == null)
      {
         synchronized (pool)
         {
            if (!pool.isEmpty())
            {
               // We have a session; remove it from the pool and check that
               // it is still valid.
               sess = pool.remove(0);
               if (sess.isRunning())
               {
                  // The session is still active, we can use it.
                  break;
               }

               // The session is no longer active; establish another one.
               sess = createSession();
               break;
            }
         }

         // The pool is empty; wait for a while, maybe a session is released.
         try
         {
            Thread.sleep(1000);
         }
         catch (InterruptedException e)
         {
            e.printStackTrace();
         }
      }

      return sess;
   }

   public static void freeSession(Session sess)
   {
      synchronized (pool)
      {
         // add the session back to the pool
         pool.add(sess);
      }
   }

   public static Session createSession()
   {
      Session sess = null;

      while (sess == null)
      {
         try
         {
            sess = sessMgr.connectDirect(null, null);
         }
         catch (Exception e)
         {
            // There was a problem when connecting, wait a while for the
            // remote server to get back up.
            e.printStackTrace();
         }

         if (sess != null)
         {
            break;
         }

         try
         {
            // wait a while before trying again
            Thread.sleep(1000);
         }
         catch (InterruptedException e)
         {
            e.printStackTrace();
         }
      }

      return sess;
   }

   public static void initPool()
   {
      if (pool != null)
      {
         // The pool was already initialized
         return;
      }

      try
      {
         BootstrapConfig bc = new BootstrapConfig();

         bc.setServer(false);
         bc.setConfigItem("net", "server", "host", hostname);
         bc.setConfigItem("net", "server", "port", port);
         bc.setConfigItem("net", "connection", "secure", "true");
         bc.setConfigItem("net", "dispatcher", "threads", "2");
         bc.setConfigItem("security", "truststore", "filename", truststore);
         bc.setConfigItem("security", "truststore", "alias", alias);
         bc.setConfigItem("security", "certificate", "validate", "true");
         bc.setConfigItem("security", "keystore", "filename", keystore);
         bc.setConfigItem("security", "keystore", "processalias", processalias);
         bc.setConfigItem("access", "password", "truststore", tspasswd);
         bc.setConfigItem("access", "password", "keystore", kspasswd);
         bc.setConfigItem("access", "password", "keyentry", kepasswd);

         SecurityManager secMgr  = SecurityManager.createInstance(bc);
         sessMgr = SessionManagerFactory.createLeafNode(bc);

         pool = new LinkedList<Session>();

         for (int i = 0; i < POOL_SIZE; i++)
         {
            Session session = createSession();
            pool.add(session);
         }
      }

      catch (Exception excpt)
      {
         // Unexpected failure, deal with it.
      }
   }
}

In this example, the initPool method is used to populate the pool with 20 direct sessions; the pool can be initialized either when your application starts or the first time the pool is used, in the class's static initializer. When a session is needed, the getSession method retrieves a session from the pool; when all work with the session is done, don't forget to add it back to the pool using the freeSession method. The following construct can be useful when dealing with pools of objects:

Session sess = null;
try
{
   sess = FWDSessionPool.getSession();
   // invoke the remote APIs
}
finally
{
   // Add the session back to the pool
   if (sess != null)
   {
      FWDSessionPool.freeSession(sess);
   }
}

In this example, if the remote FWD server goes down or there is a network problem, the next getSession call will block and keep trying to establish a new session until the FWD server comes back online.
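The getSession and freeSession methods referenced above are not shown in the excerpt. A minimal, self-contained sketch of a blocking pool with that shape follows; the Session class here is a local stand-in, not com.goldencode.p2j.net.Session, and all names are illustrative assumptions rather than the FWD API:

```java
import java.util.LinkedList;

/**
 * Minimal sketch of a blocking session pool with the getSession/freeSession
 * shape described above.  In a real pool, createSession() would build an
 * actual FWD session; here Session is a local stand-in.
 */
public class PoolSketch
{
   /** Local stand-in for a pooled session object. */
   public static class Session
   {
   }

   private static final int POOL_SIZE = 2;

   private static final LinkedList<Session> pool = new LinkedList<Session>();

   static
   {
      for (int i = 0; i < POOL_SIZE; i++)
      {
         pool.add(new Session());
      }
   }

   /** Blocks until a session becomes available, then removes and returns it. */
   public static synchronized Session getSession()
   throws InterruptedException
   {
      while (pool.isEmpty())
      {
         // wait on the class monitor held by this static synchronized method
         PoolSketch.class.wait();
      }

      return pool.removeFirst();
   }

   /** Returns a session to the pool and wakes up one blocked caller. */
   public static synchronized void freeSession(Session session)
   {
      if (session != null)
      {
         pool.add(session);
         PoolSketch.class.notify();
      }
   }
}
```

A real implementation would also need to detect and replace dead sessions when the server connection is lost, as described above.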

Another important note about the external application is that it can host multiple FWD nodes. When the SecurityManager and SessionManager are initialized, a single instance of each class is created and made available to the external application. This is true in the default case, when each class is loaded by the same class loader: the one used to start the application (the primordial class loader). If multiple class loaders are used, one can create multiple instances of the security and session managers, each one being a different node. Each such node is isolated within the context of the class loader used to load the manager's classes, allowing different singleton instances of the security and session managers to exist in the same VM.

There is no restriction on the type of node started within the context of a given class loader. Depending on what the external application needs, some of its modules could initialize themselves as leaf nodes, while other modules could initialize as router nodes. This is useful because, even though leaf nodes can communicate with multiple routers (by passing a different bootstrap configuration instance to the create session method), a router node, once the security and session managers are initialized, can't be downgraded to a leaf node unless the managers are re-initialized. In short, by using multiple class loaders, one may host multiple nodes of different types - routers or leaves - in the same VM.

Accessing Services

The external application must have access to both the p2j.jar file and to the jar or class files needed to access the application-specific interface. In both cases, the classpath must be modified and the jar files or class files must be available via that classpath.

Java's serialization is used by the FWD remote object protocol to turn object instances into a stream of bytes that can be sent over the network and converted back into an equivalent object instance on the other side. This is known as marshalling and unmarshalling a method's parameters. The use of serialization means that the version of the shared interface and all shared classes must be the same on both sides of the network connection. It is best to use the exact same jar file on both sides. To do otherwise will often cause problems.
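The marshalling round trip is conceptually the same as serializing the parameter to bytes and deserializing it on the other side. The following self-contained sketch illustrates this with a hypothetical OrderRequest parameter type (not part of FWD or any real application):

```java
import java.io.*;

/**
 * Hypothetical parameter type; anything crossing the remote object
 * boundary must implement Serializable, just like this class.
 */
public class OrderRequest implements Serializable
{
   private final String customer;
   private final int    quantity;

   public OrderRequest(String customer, int quantity)
   {
      this.customer = customer;
      this.quantity = quantity;
   }

   public String getCustomer() { return customer; }
   public int    getQuantity() { return quantity; }

   /** Marshal to bytes and back, as the protocol does over the network. */
   public static OrderRequest roundTrip(OrderRequest in)
   throws IOException, ClassNotFoundException
   {
      // marshal: object -> bytes (this is what is sent over the wire)
      ByteArrayOutputStream bytes = new ByteArrayOutputStream();
      try (ObjectOutputStream out = new ObjectOutputStream(bytes))
      {
         out.writeObject(in);
      }

      // unmarshal: bytes -> equivalent object on the receiving side
      try (ObjectInputStream oin =
              new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray())))
      {
         return (OrderRequest) oin.readObject();
      }
   }
}
```

Because deserialization checks the class version, mismatched jars on the two sides typically fail with an InvalidClassException; shipping the identical jar to both sides avoids this.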

Assuming a valid FWD session has been established, it is very easy to use that session to call the hosted service (the exported APIs) on the FWD server. The following is an example:

import com.acme.corp.something.*;
import com.goldencode.p2j.net.*;

...

// The FWD Session object must be working by this point.  This
// example assumes that it exists as an instance variable named
// “session”.

SomeQueue queue = (SomeQueue) RemoteObject.obtainNetworkInstance(SomeQueue.class, session);
// From here the SomeQueue instance is used, just as if it was local.
if (!queue.isEmpty())
{
   SomeEvent event = queue.peek(0);
   ...
}

The object returned by the RemoteObject.obtainNetworkInstance() method is a proxy. That means it transparently redirects method calls on the local instance to the FWD server where those are turned into method calls on the implementation instance or class. Parameters, return values and exceptions are moved over the network as expected, so long as all of the types are Serializable. The method calls are synchronous (the thread does not return until the called code returns or throws an exception), just as a local method call is synchronous. The remote object protocol hides the complexity of what is effectively a “remote procedure call” or RPC mechanism.
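The redirection mechanism is built on standard Java dynamic proxies. The following self-contained sketch shows the idea, with a local InvocationHandler standing in for FWD's network transport; the handler body is purely illustrative and does not reflect FWD's actual internals:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxySketch
{
   /** Example service interface, analogous to SomeQueue above. */
   public interface SomeQueue
   {
      boolean isEmpty();
   }

   /**
    * Returns a SomeQueue proxy.  A real FWD proxy would marshal the method
    * name and arguments to the server at the marked point; this handler
    * answers locally just to illustrate the call redirection.
    */
   public static SomeQueue obtainLocalInstance()
   {
      InvocationHandler handler = new InvocationHandler()
      {
         @Override
         public Object invoke(Object proxy, Method method, Object[] args)
         {
            // in FWD, this is where the call would cross the network
            if ("isEmpty".equals(method.getName()))
            {
               return Boolean.TRUE;
            }

            throw new UnsupportedOperationException(method.getName());
         }
      };

      return (SomeQueue) Proxy.newProxyInstance(SomeQueue.class.getClassLoader(),
                                                new Class<?>[] { SomeQueue.class },
                                                handler);
   }
}
```

From the caller's point of view the proxy is indistinguishable from a local implementation of the interface, which is what makes the remote calls transparent.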

If any of the method calls can throw checked exceptions, then the Java compiler (usually javac) will force the calling code to handle those exceptions just as if they were being generated locally. All normal exception handling works and is made transparent via the remote object protocol.

The session can be shared by more than one thread. This means that 2 or more threads can each obtain a network instance (a proxy) and make concurrent and independent calls to the server. Each simultaneous thread's usage will be executed on different threads on the server but each of the threads will share the same security context. For this reason the use of context-local or shared data must be handled very carefully. Context-local data is not thread-specific, so access via multiple threads must be synchronized by the user.

In addition, if the same session is used to execute remote APIs concurrently, those APIs will not be able to use the FWD framework safely, because the context-local data is not thread-specific. The context-local data is not thread-specific because the FWD runtime was built to simulate the 4GL's single-threaded behavior; that is, at any time, only a single thread is expected to access this context-local data. By using the same session to execute remote APIs concurrently, access to the context-local data in the FWD runtime will no longer be “single threaded”, so errors and unexpected behavior can and will occur. If the remote APIs use any parts of the 4GL replacement APIs in the FWD runtime and the same session is used to execute them, then all API invocations will share the same context-local data and will update it concurrently; for example, the FWD runtime keeps context-local data for all frame processing, persistence processing and block/transaction processing.

A first example of possible unexpected behavior is the persistence framework. This framework can't be used safely by remote APIs which execute concurrently using the same session, because the undo data is kept in each session's context; executing concurrent API calls which use the same table will generate errors if you want to undo or roll back certain changes. Moreover, even if the session is used only to read records via the FWD persistence runtime, it will not work. By using a single FWD session, there is a chance that the same Hibernate session will be used by two concurrent API calls, and Hibernate sessions are not thread safe. Worse, the Hibernate session might be closed by one API call while another API call is using it, preventing the second API call from completing its query. Beyond these reasons, FWD's Persistence class itself is context safe; but, as in many other places in FWD, the context-local data needed by this class is not thread safe.

Another example is FWD's block processing - for each block, context-local data must be saved. But, as in the persistence case, this context-local data is not thread safe; executing 4GL-style blocks concurrently using the same session is therefore not thread safe, and errors may occur.

The conclusion is that there is no safe way to use the same FWD session to execute multiple API calls concurrently, if those API calls use the 4GL replacement APIs in the FWD runtime. If you use JDBC to access the data and no 4GL behavior is needed, then you are outside the FWD runtime, so the above issues do not apply.

Terminating a Session with the FWD Server

FWD sessions can and probably should be long-lived. There is a cost to the setup of a session which may not amortize well on a per-call or per-transaction basis. It is best to organize usage in longer sessions, optimally keeping the session open for the lifetime of the usage.

When all usage of a given FWD session is complete, the session is safely ended by calling Session.terminate() on the session instance that was obtained using connectDirect() or connectVirtual(). In the case of a catastrophic failure, the protocol will attempt to clean up on both sides of the connection, but relying on that is not good practice.

For direct sessions, calling terminate() is equivalent to logging off the server, disconnecting the socket and stopping the queue. For virtual sessions, the terminate() call will not disconnect the socket or stop the queue, unless this is the last remaining virtual session connected to that server.

If a SessionListener implementation is specified at the connectDirect() or connectVirtual() call, then SessionListener.terminate() will be executed just before the queue is stopped and just before the session is ended. During the SessionListener.terminate() call, perform only any additional cleanup that is needed - do not attempt to create another session, as this will have unexpected results on the SessionManager's state.

If the leaf node disconnects abnormally from the server (i.e. the network goes down), then the first components which notice this are the reader and writer threads. As the socket is no longer active, it generates an exception which is caught in the reader and writer threads associated with this socket and queue. This exception triggers the queue termination; in turn, the queue terminates all sessions associated with it before terminating itself. If there was an active API call when the queue disconnected, then a SilentUnwindException will be generated. This is why it is recommended to bracket the API call with a try-catch block and handle this exception:

try
{
   // invoke the remote APIs
}
catch (SilentUnwindException e)
{
   // the connection went down, API call was unsuccessful
}

On the FWD server side, the disconnect is noticed by the Reader and Writer threads too. The behavior is the same as on the leaf node - all sessions registered with that queue are terminated - with one difference, depending on whether there was an active API call when the leaf node disconnected. If the active API call uses the FWD runtime, it will notice the leaf node termination at the next block iteration and will abort the processing; otherwise, the Dispatcher thread will finish the API execution but will drop the result.

Accessing the Database from Hosted Services

This section discusses the ways in which an external application connects with and uses legacy databases which were converted from Progress databases, and which are now managed by a FWD application server. The FWD runtime environment exposes a public API to enable an external application to perform transactional CRUD (create, read, update, delete) operations on legacy (and new) data.

Besides lower-level data access methods, the public, persistence API provides methods which should be used in consistent ways by an external application, to ensure it is a "good citizen" with respect to data access. The portions of the persistence runtime used by legacy, converted code honor certain conventions and access patterns to prevent data corruption which otherwise might result from uncontrolled, concurrent data access. It is critical for an external application to implement these same techniques when running alongside legacy, converted code. These techniques are described in detail, later in this section.

Data Model Objects

As with converted code, hand-written, external applications which require database access generally will deal with Data Model Objects (DMOs). The public API for persistence services is biased toward this type of use, as many of its methods either accept or return instances of DMOs, either individually, or within collection objects. DMOs used by an external application can be those which were converted along with the original, Progress application, from that application's database schemas. An external application may introduce new DMO interfaces and implementation classes, or modify existing ones, as defined by new business requirements. The definition or modification of DMO interfaces and implementation classes requires some special treatment, which is discussed later in this section.

External Application Architecture

The external application implements a client-server architecture. The client component drives the control flow of the application, requesting services from the server component as necessary. The server component runs within the process space of the FWD server. It acts as a “staging area” for persistence-related work requested by the client, exporting an application-specific API to the client. Some API methods are used to read data, others to insert, update, or delete data. Data read from the database is organized into serializable structures appropriate to the application's needs, before being returned to the client. Data to be updated or inserted is sent from the client to the server via the application-specific API.

Persistence API

Lower level persistence services are provided through the public API in the com.goldencode.p2j.persist.Persistence class. This class is the central point for CRUD (create, read, update, delete) access to all databases of the converted application. An application-specific API must be implemented using this public API. The following list describes the most important public methods of the Persistence class. Please refer to the class' JavaDoc documentation for usage instructions and further detail.

  • Transaction - void beginTransaction(): Implicitly begins a Hibernate session.
  • Transaction - void commit(): Implicitly ends a Hibernate session.
  • Transaction - void rollback(): Implicitly ends a Hibernate session.
  • Transaction - boolean isTransactionOpen(): Reports the existence of either an implicit or an explicit transaction.
  • Lock - void lock(LockType, RecordIdentifier, boolean): Changes the lock type of a record.
  • Read - ScrollableResults scroll(String, Object[], Type[], int, int, ScrollMode): Prefer this method to the list method for all queries which could return an unknown number of records. Use ScrollMode.FORWARD_ONLY if possible, as this enables the use of a server-side cursor in some databases (including PostgreSQL). See the javadoc comments of the other variants of the scroll method for more details.
  • Read - List list(String, Object[], Type[], int, int, boolean): Only intended for known, small result sets. Avoid using this method for queries which could return an unknown number of results, since all objects will be held in memory within the returned list. This can have a very detrimental effect on the FWD server as a whole! Some JDBC driver implementations (including PostgreSQL) exacerbate the problem by holding a copy of all results within the driver as well.
  • Read - Persistable load(String, Class, Serializable, LockType, boolean): Returns a single DMO and acquires the specified lock type for the associated record.
  • Read - ResultSet executeSQLQuery(String, Object[]): Executes an SQL query and returns a result set. Another variant of this method exists which does not accept an array of substitution parameters.
  • Create - Serializable nextPrimaryKey(String): Safely allocates (or recycles) a unique primary key for a newly created record.
  • Create - void save(Persistable, Serializable): Adds the DMO to the current Hibernate session and persists the corresponding record to the database.
  • Delete - void delete(Object): Removes the DMO from the current Hibernate session and deletes the corresponding record from the database.
  • Create/Update/Delete - void executeSQL(String, Object[]): Executes an arbitrary SQL statement (other than a query). Another variant of this method exists which does not accept an array of substitution parameters.

Obtaining a Persistence Instance

One instance of the com.goldencode.p2j.persist.Persistence class can be created for each physical database accessed by a FWD server instance. Each such instance is shared by all user contexts, including all legacy contexts and all external application contexts. These instances are created, cached, managed, and provided by com.goldencode.p2j.persist.PersistenceFactory, a final class with only static methods and no public constructor. The following example illustrates how to obtain a Persistence instance:

import com.goldencode.p2j.persist.*;
...
Database db = new Database("mydb");
Persistence persistence = PersistenceFactory.getInstance(db);

The PersistenceFactory.getInstance(Database) method will check its cache to determine whether a Persistence object for the mydb database already exists, and if so, it will return that instance. Otherwise, it will instantiate a new Persistence object associated with the mydb database, cache that instance, and return it.
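This caching behavior is an instance of the common keyed-singleton factory pattern. The following is a self-contained sketch of the general pattern only, not of PersistenceFactory's actual internals; the FactorySketch and Service names are illustrative:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/**
 * Sketch of a keyed-singleton factory: one shared instance per key,
 * created lazily on first request.  This illustrates the caching contract
 * described above; the real PersistenceFactory's implementation may differ.
 */
public final class FactorySketch
{
   /** Stand-in for the per-database service object (e.g. Persistence). */
   public static class Service
   {
      private final String name;

      Service(String name)
      {
         this.name = name;
      }

      public String getName() { return name; }
   }

   private static final ConcurrentMap<String, Service> cache =
      new ConcurrentHashMap<String, Service>();

   private FactorySketch()
   {
      // no public constructor, as with PersistenceFactory
   }

   /** Return the cached instance for this key, creating it on first use. */
   public static Service getInstance(String key)
   {
      return cache.computeIfAbsent(key, k -> new Service(k));
   }
}
```

The important property is that every caller asking for the same key receives the same shared instance, which is why a Persistence object is shared by all user contexts accessing the same database.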

Transactions

The public API exposed by the Persistence class supports explicit actions to begin a transaction, and to commit or rollback that transaction after some unit of work is performed. An explicit transaction is begun using the method Persistence.beginTransaction.

The following example illustrates the transaction idiom that should be used for any unit of work (including read-only data access):

Persistence persistence = PersistenceFactory.getInstance(new Database("mydb"));
...

boolean beganTx = persistence.beginTransaction();

try
{
   // do primary work
   ...

   if (beganTx)
   {
      persistence.commit();
   }
}
catch (PersistenceException exc)
{
   if (beganTx && persistence.isTransactionOpen())
   {
      try
      {
         persistence.rollback();
      }
      catch (PersistenceException exc2)
      {
         // do some error handling for the rollback error
         ...
      }
   }

   // do some error handling for the primary error
   ...
}

We first attempt to begin a transaction just before the outer try-catch block (this may become a try-catch-finally block once record locking is involved - see Locking and Transactions below), and we record whether the transaction was started successfully in the beganTx variable. The Persistence.beginTransaction method will return false in the event a transaction already is active, which tells us it was not necessary to begin a new transaction, presumably because higher level code already began one. More importantly, it tells us we should not attempt to commit the current transaction, because that would most likely cause problems for the higher level code which expects to commit that transaction at a later point, after its higher-level unit of work is complete.

We then perform our primary work and attempt to commit the transaction, if necessary (i.e., if beganTx is true). If a PersistenceException is thrown as a result, it is caught and we attempt to roll back the current transaction, but only if it was begun with the earlier call to Persistence.beginTransaction, and only if it still is open. In fact, if the error occurred during the call to Persistence.commit, the transaction may already have been rolled back. In this case, Persistence.isTransactionOpen will return false, and we should not attempt to roll back again.

The rollback attempt itself can throw PersistenceException, which requires a separate try-catch block to handle any such error. We then handle the primary error in a manner appropriate to the application.

Clearly, much of the above example is scaffolding code to begin a transaction and to ensure it is properly committed or rolled back. Even more scaffolding code is necessary once we add record locking to the mix. It is certainly possible (and recommended) to re-factor and simplify this idiom by using helper classes or an Inversion of Control framework, such as Spring. No matter how the code is organized or re-factored though, the important point here is that all database work must be performed inside an explicit transaction (even read-only access!).
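One possible shape for such a helper is sketched below. The Tx and Work interfaces are illustrative stand-ins introduced here to keep the sketch self-contained; in real code the Tx role would be played by the Persistence class's transaction methods:

```java
/**
 * Sketch of a helper that factors out the begin/commit/rollback scaffolding
 * of the transaction idiom shown above.  Tx is an illustrative stand-in for
 * the transaction-related methods of the Persistence class; it is not part
 * of the FWD API.
 */
public class TxHelper
{
   public interface Tx
   {
      boolean beginTransaction() throws Exception;
      void    commit()           throws Exception;
      void    rollback()         throws Exception;
      boolean isTransactionOpen();
   }

   /** A unit of work to run inside a transaction. */
   public interface Work
   {
      void run() throws Exception;
   }

   /** Run the unit of work inside a transaction, using the idiom above. */
   public static void inTransaction(Tx tx, Work work)
   throws Exception
   {
      // false means a higher-level transaction is already active
      boolean beganTx = tx.beginTransaction();

      try
      {
         work.run();

         if (beganTx)
         {
            tx.commit();
         }
      }
      catch (Exception exc)
      {
         // only roll back a transaction we began, and only if it is
         // still open (commit may already have rolled it back)
         if (beganTx && tx.isTransactionOpen())
         {
            tx.rollback();
         }

         throw exc;
      }
   }
}
```

With a helper like this, business logic shrinks to a single inTransaction(...) call, and the commit/rollback rules are enforced in one place.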

If the Persistence API is used to do any database work outside an explicit transaction, an implicit transaction may be opened, which is not desirable in an external application. Implicit transactions are transactions which are opened and committed (or rolled back) by the persistence runtime automatically for certain database work, primarily for read operations. They exist to support legacy, converted code, which does not always open a transaction to read records from the database. They were introduced to prevent a problem with idle database connections, whereby the Hibernate framework would automatically open (but never close) a new transaction for read requests. When the read request was finished, the transaction would remain open indefinitely, consuming database resources unnecessarily. However, external application code should not allow an implicit transaction to be created, and should instead always manage an explicit transaction for any database work.

FWD has no limitation related to executing each API in its own transaction. If there is a need to keep the transaction open across multiple API calls, it is possible, but the complexity depends on whether the APIs need to use the 4GL-like buffers and queries or not.

To be able to use the 4GL-like buffers and queries, each usage must be done within a 4GL-style block. As this kind of block can't be left open using the APIs in BlockManager, the TransactionManager APIs must be used instead. Following is a “compressed” algorithm showing what kind of remote APIs need to be created so that a 4GL-style block with transaction support can be used to invoke remote APIs within the same transaction:

invoke a remote API to initialize all the variables, buffers, etc
try
   invoke a remote API which starts a top level block
   try
      invoke a remote API which starts a transaction block
      invoke a remote API to open the scope for all the buffers you will use
      invoke your remote APIs which use the buffers/queries/etc
   catch exceptions
      rollback transaction
      handle errors
   finally
      close the transaction block
catch ConditionException's
   ignore
finally
   close the top level block

With this approach, the code will be similar to how 4GL works. Note that the 4GL-style buffers (which are anonymous proxies returned by the RecordBuffer.define call) cannot be used as return values for the exported APIs. Instead, return one or more of the DMO's properties or its ID.

If you need transaction support only at the Hibernate level or JDBC level, you can use the Persistence APIs and work directly with the DMOs, rather than with their RecordBuffer proxies. Even in this case, all the Persistence APIs must be executed within the scope of a top-level block:

invoke a remote API to initialize all the variables, buffers, etc
try
   invoke a remote API which starts a top level block
   begin transaction
   try
      invoke remote APIs which work with the Persistence class APIs
      commit transaction
   catch exceptions
      rollback transaction
      handle errors
catch ConditionException's
   ignore
finally
   close the top level block

In any case, as context-local data is kept, the same session must be used to execute all the involved APIs.

Record Locks

Although the locking and unlocking of records usually is done as part of another operation, such as reading or updating records, it is a topic of some complexity which deserves individual attention. It is important to understand how record locks are obtained and released, because the burden of proper record lock management lies squarely with the developer, and the proper functioning of the FWD server depends upon the developer getting this right.

The FWD runtime environment implements a cooperative, pessimistic record locking model. The model is pessimistic because it assumes that in the absence of record locks, concurrent attempts to make incompatible changes to the same record can and will occur. It protects against this situation by requiring temporary, exclusive access to a record before changing or deleting it. The model is cooperative because this contract, while very important to honor, is not enforced programmatically. Although converted code uses portions of the persistence runtime which honor the requirements of the pessimistic locking model, it is possible to write an external application which bypasses the requirements of this model. The developer of external code therefore must take extreme care to cooperate with the other sessions and processes which rely on pessimistic locking to maintain data integrity.

The unusual design choice of a cooperative, pessimistic locking model was forced by the fact that the Progress environment itself relies upon a pessimistic locking model with very specific semantic requirements. Because the native record locking implemented by various database vendors do not necessarily match the exact semantics of Progress' implementation, the FWD environment provides a compatible implementation, which exists in the application server rather than in the backing database. It is this separation of record locking from the database which requires the cooperative nature of the record locking model.

While converted, legacy code operates naturally in this environment, a hosted application must take special care to lock and unlock records properly. Improper locking techniques in external applications can cause deadlocks or orphaned locks which at this time can only be resolved by (a) terminating the external application; or (b) restarting the FWD server. The basic rules of thumb to avoid this situation are:

  • always lock records in a consistent order; and
  • always unlock what you lock.

Techniques to implement these cardinal rules are discussed below, but first, the basics...

To lock or unlock a record, use the lock API in the Persistence class:

// assumes we previously have obtained a Persistence instance "persistence" 
RecordIdentifier ident = new RecordIdentifier("my_table", 1L);
try
{
   persistence.lock(LockType.EXCLUSIVE, ident, true);
}
catch (LockUnavailableException exc)
{
   // never thrown for requested lock type
}

In this case, the lock method will block until an exclusive lock can be obtained on the record my_table:1. To return immediately (i.e., without blocking) in the event the lock is unavailable, you would instead use:

try
{
   persistence.lock(LockType.EXCLUSIVE_NO_WAIT, ident, true);
}
catch (LockUnavailableException exc)
{
   // the lock is unavailable, handle accordingly...
   ...
}

As in Progress, two types of locks are supported:

  • SHARE - acquire this type of lock for read-only access to a record, to ensure that a record is not changed or deleted while you hold the lock.
  • EXCLUSIVE - acquire this type of lock if you intend to create, update, or delete a record.

For converted code, the contracts specified for these lock types are honored automatically by the portions of the persistence runtime which support legacy persistence semantics. However, there is no way for the runtime to enforce these contracts for hand-written Java code in a hosted, external application. Therefore, it is critical that all such code only makes modifications to the database while holding an EXCLUSIVE lock for each of the affected records.

Locking Order

The order in which exclusive record locks are acquired and released is important to the reliability of both the converted application and the external application. Getting this wrong can lead to application deadlocks.

Exclusive locks should always be acquired in a consistent order across all components of a system. Before writing logic which acquires and holds multiple locks, first review all other programs which lock records in the same table or group of tables (including programs in the converted application). Ensure that any new logic is written to acquire locks in the same order.

Consider two sessions, each attempting to acquire an exclusive lock on two records at the same time, but in the opposite order:

Session A                                                     Session B
Attempts to acquire lock on record 1                          Attempts to acquire lock on record 2
Acquires lock on record 1                                     Acquires lock on record 2
Does some work                                                Does some work
Attempts to acquire lock on record 2                          Attempts to acquire lock on record 1
Attempt to lock record 2 blocks; lock is held by session B    Attempt to lock record 1 blocks; lock is held by session A

These two sessions are now deadlocked; neither can continue because the other holds the next lock to be acquired.

Obtaining the locks in the same order across sessions eliminates the possibility of this deadlock:

Session A                                                     Session B
Attempts to acquire lock on record 1                          Attempts to acquire lock on record 1
Acquires lock on record 1
                                                              Attempt to lock record 1 blocks; lock is held by session A
Attempts to acquire lock on record 2
Acquires lock on record 2
Does some work
Releases lock on record 1
                                                              Acquires lock on record 1
                                                              Attempts to acquire lock on record 2
                                                              Attempt to lock record 2 blocks; lock is held by session A
Releases lock on record 2
                                                              Acquires lock on record 2
                                                              Does some work
                                                              Releases lock on record 1
                                                              Releases lock on record 2

The practice of locking records in a consistent order is a database topic which is not specific to writing an external application for the FWD environment. Nevertheless, it is mentioned here because of its importance to the reliability of the FWD server as a whole.
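The consistent-order rule can be demonstrated with plain java.util.concurrent locks, used here only as stand-ins for FWD's record locks (this sketch does not involve the FWD lock manager at all):

```java
import java.util.concurrent.locks.ReentrantLock;

/**
 * Demonstrates consistent lock ordering.  The two ReentrantLock instances
 * stand in for locks on two records; FWD's lock manager is not involved.
 */
public class LockOrdering
{
   private static final ReentrantLock record1 = new ReentrantLock();
   private static final ReentrantLock record2 = new ReentrantLock();

   /**
    * Every caller acquires record1 before record2, so two concurrent
    * callers can never hold one lock while waiting for the other in the
    * opposite order - the deadlock shown in the first table above is
    * structurally impossible.
    */
   public static void updateBothRecords(Runnable work)
   {
      record1.lock();
      try
      {
         record2.lock();
         try
         {
            work.run();
         }
         finally
         {
            record2.unlock();  // always unlock what you lock
         }
      }
      finally
      {
         record1.unlock();
      }
   }
}
```

The try/finally nesting also illustrates the second rule of thumb: each lock is released on every code path, so no orphaned locks are left behind.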

Locking and Transactions

The idiom for a transaction which makes database modifications is expressed in the following pseudo-code:

begin transaction
try
   determine which record is to be created/updated/deleted
   obtain EXCLUSIVE lock on the record's table/primary key
   fetch or create the record
   make necessary changes to (or delete) record
   commit transaction
catch exceptions
   rollback transaction
   handle errors
finally
   release EXCLUSIVE lock

Note that the lock release must occur after the transaction ends, regardless of the success or failure of the work performed within the transaction. Exclusive locks must not be released before a transaction is committed (or rolled back). Doing so would provide another session the opportunity to grab an exclusive lock on the same record and commit conflicting changes, perhaps before the current session's changes to that record are committed. This violates the contract of pessimistic locking and can lead to data corruption. It is permissible, however, to downgrade an exclusive lock to a share lock during a transaction, then release the share lock after the transaction ends. This guarantees another session cannot obtain an exclusive lock before the current session's changes are committed.

It is important to understand that the termination of a transaction does not automatically release record locks obtained or used within it; this is always the responsibility of the programmer. This requirement is a side effect of separating the implementation of record locking from the backing database.

A failure to release locks at the appropriate time will leave orphaned locks in the FWD runtime environment's lock manager, which can cause other sessions attempting to acquire those locks to hang. If not released by business logic, orphaned locks will only be released when the session holding those locks (i.e., the external application) is terminated.

Reading Records

Records can be read from the database with or without locking. If the application can tolerate stale data, it generally is best from the perspective of resource use and server throughput to read records without locking them. This allows another session which needs to update the same records to do so without waiting unnecessarily.

If the possibility of stale data is unacceptable for a given use case, use a share lock. If the records read are to be updated or deleted, use an exclusive lock at the outset. Avoid first reading with share lock and attempting to upgrade to an exclusive lock later in the same transaction. If two sessions obtain a share lock on the same record, then both try to upgrade to an exclusive lock in order to update or delete the record, this will cause a deadlock, since neither can upgrade to an exclusive lock while the other retains its share lock.

The recommended idiom for reading records from a converted database - without locking - in an external application is:

begin transaction
try
   execute a query
   for each result
      do some work
   commit transaction
catch exceptions
   rollback transaction
   handle errors

...and with locking...

begin transaction
try
   execute a query to obtain primary keys
   for each result
      fetch full record with desired lock type
      do some work
   commit transaction
catch exceptions
   rollback transaction
   handle errors
finally
   release locks

The preferred method for reading records is the Persistence.scroll API listed in the table of important Persistence methods above. When used with ScrollMode.FORWARD_ONLY and a reasonable JDBC fetch size configured in the FWD directory, this is the best method to minimize the memory footprint of a query. This method accepts an HQL (Hibernate Query Language) query and returns a ScrollableResults object, which can be used to iterate the query's results. Please refer to published Hibernate documentation for details on HQL and the ScrollableResults interface.

An example of using the scroll API to read records with a share lock follows:

import java.io.Serializable;
import java.util.*;
import org.myorg.myapp.dmo.mydb.impl.*;
import org.hibernate.*;
import com.goldencode.p2j.persist.*;
import com.goldencode.p2j.util.*;

import static com.goldencode.p2j.util.character.*;

...
private Persistence persistence = PersistenceFactory.getInstance("mydb");
...

/**
 * Process all customer orders for a given customer.
 *
 * @param   customerName
 *          Customer name.
 */
void processCustomerOrders(String customerName)
{
   // Remember locks to be released
   List<Serializable> locked = new ArrayList<Serializable>();

   boolean beganTx = persistence.beginTransaction();

   try
   {
      // Compose query to find all customer_order records with the customer's name
      String hql =
         "select co.id from CustomerOrderImpl co where upper(rtrim(co.custName, ' \t\n\r')) = ?";

      // Wrap parameter in FWD data type, right trim it, uppercase it
      character customerNameParm = toUpperCase(rightTrim(customerName));
      Object[] parms = new Object[] { customerNameParm };

      // Obtain primary keys of records matching query criteria
      ScrollableResults sr = persistence.scroll(hql,
                                                parms,
                                                DBUtils.makeTypeArray(parms),
                                                0,
                                                0,
                                                ScrollMode.FORWARD_ONLY);

      while(sr.next())
      {
         // Load each record with a share lock
         Serializable pk = (Serializable) sr.get(0);
         CustomerOrderImpl custOrder = persistence.load("customer_order",
                                                        CustomerOrderImpl.class,
                                                        pk,
                                                        LockType.SHARE,
                                                        true);

         locked.add(pk);

         // Do some work with custOrder DMO
         ...
      }

      // Commit transaction
      if (beganTx)
      {
         persistence.commit();
      }
   }
   catch (PersistenceException exc)
   {
      // Roll back transaction
      if (beganTx && persistence.isTransactionOpen())
      {
         try
         {
            persistence.rollback();
         }
         catch (PersistenceException exc2)
         {
            // do some error handling for the rollback error
            ...
         }
      }

      // Handle error appropriately...
      ...
   }
   finally
   {
      try
      {
         // Release all acquired locks
         for (Serializable pk : locked)
         {
            RecordIdentifier ident = new RecordIdentifier("customer_order", pk);
            persistence.lock(LockType.NONE, ident, true);
         }
      }
      catch (LockUnavailableException exc)
      {
         // Handle accordingly;  unexpected when releasing a lock
      }
   }
}

Let's take a closer look at this example. The statement

import java.io.Serializable;

is necessary because when obtaining locks on a set of records, we first execute a query which returns only the primary keys of the records we wish to lock. Many of the APIs of the Persistence class accept parameters of type java.io.Serializable to represent primary keys. Next, we have

import java.util.*;

which allows us to use the collection interface List and the implementation class ArrayList in the body of our example. The statement

import org.myorg.myapp.dmo.mydb.impl.*;

imports the package containing the CustomerOrderImpl DMO class, which this example assumes was generated by the conversion of an application whose root package is org.myorg.myapp and which represents a table in the database mydb. DMO interfaces are generated into a package structure which follows the convention {application root package}.dmo.{database name}. The corresponding DMO implementation classes are found in {application root package}.dmo.{database name}.impl.

Next, we import the main Hibernate package:

import org.hibernate.*;

This is necessary to reference Hibernate classes like ScrollableResults and ScrollMode.

Finally, we import some FWD packages:

import com.goldencode.p2j.persist.*;
import com.goldencode.p2j.util.*;

These give us access to the FWD persistence and data wrapper framework classes, respectively.

As a convenience, we have a static import to resolve static methods in the character data wrapper class, such as toUpperCase and rightTrim:

import static com.goldencode.p2j.util.character.*;

The point of the processCustomerOrders method is to perform some work with each customer order for a particular customer in the hypothetical mydb database. This particular use case calls for a guarantee that a given record will not be changed by another session while we are working with it, so we acquire a share lock for each record.

Within the body of the method, we first initialize a java.util.ArrayList object to store the primary keys of those records we lock, so we have a collection of identifiers to tell us which records to unlock at the end of our transaction.

We begin a new database transaction, then open a try block within which the work of that transaction will be done.

Next is the query itself, which is written in HQL:

      // Compose query to find all customer_order records with the customer's name
      String hql =
         "select co.id from CustomerOrderImpl co where upper(rtrim(co.custName, ' \t\n\r')) = ?";

We will come back to this in a moment, but for now, note that this query statement presupposes several things:

  • the conversion process created a DMO named CustomerOrderImpl;
  • that DMO has a character property, custName, which represents the name of the customer associated with a particular customer order;
  • that DMO has another property, id, which represents the surrogate primary key of the backing table (all DMOs generated by the conversion will have such a property);
  • the custName property represents a case-insensitive field in the original Progress table.

Next, we do a bit of preparation of the customer name which was passed into the method as its only argument (customerName):

      // Wrap parameter in FWD data type, right trim it, uppercase it
      character customerNameParm = toUpperCase(rightTrim(customerName));
      Object[] parms = new Object[] { customerNameParm };

The question mark (?) at the end of the HQL statement is a placeholder which indicates that this query expects a query substitution parameter. When this query is executed by the Persistence.scroll API, this placeholder will be replaced by the character object customerNameParm created above.

To create the customerNameParm object, we first invoke the static method character.rightTrim on the customerName string, which trims all space ( ), tab (\t), newline (\n), and carriage return (\r) characters, if any, from the right side of the customer name. Then, we uppercase the resulting text using the static method character.toUpperCase. This returns a character data wrapper object containing the original customer name, with whitespace padding (if any) trimmed from the end of the string, and all in upper case.

Likewise, in the HQL statement, we right-trim the same characters from the column which corresponds to the CustomerOrderImpl DMO property custName, then uppercase it as well.

The right-trimming and uppercasing of both the DMO property in the query statement and the substitution parameter are artifacts of legacy Progress query behavior. Remember the assumption that the original Progress database field represented by the DMO property custName was case-insensitive. The converted PostgreSQL database will make a case-sensitive comparison during query execution, which requires that both the database column and the substitution parameter be uppercased, to ensure values of any case are found correctly. Of course, if the original Progress database field was case-sensitive, you should uppercase neither the DMO property in the HQL statement nor the query substitution parameter.

Likewise, when Progress matches textual fields, it ignores any ' \t\n\r' whitespace characters which right-pad the field and comparison value. This is why we right-trim both the column and query substitution parameter.

You might be tempted to skip the uppercasing and right-trimming steps from your own code, as it quickly becomes tedious to do this for every textual property. However, it is highly recommended that you do not omit these actions, for the following reasons:

  • Your query may miss matching certain records because they may contain whitespace right-padding or may not match the case of your query substitution parameter.
  • If the textual column you are using as a query criterion is indexed, your query will not be able to take advantage of that index. The indexes created by the conversion process reference the expression upper(rtrim({column name}, E' \t\n\r')), not the column itself. The database's query planner will not be able to use the index if your query statement does not exactly match the same syntax when referencing that column. This may result in a slower query. While your external application may be able to tolerate the performance penalty, consider that the sub-optimal query may add unnecessary load to the database server, which may impact other sessions of your external application, as well as of the converted application.

Of course, for non-textual database columns, these functions need not (in fact, cannot) be used. So, for example, an HQL statement that matches on a particular customer number instead of on the customer's name might look like this instead:

      String hql = "select co.id from CustomerOrderImpl co where co.custNumber = ?";

The corresponding code to create a query substitution parameter array would look something like this:

      // Wrap parameter in FWD data type
      integer customerNumberParm = new integer(custNumber);
      Object[] parms = new Object[] { customerNumberParm };

Returning to the first example, the customerNameParm object is then stored in an Object array of length 1, named parms. This array is passed both to the Persistence.scroll method and to a static helper method, DBUtils.makeTypeArray(Object[]). The latter method inspects the array of substitution parameters and selects the appropriate Hibernate data type object for each. It creates an array of org.hibernate.type.Type objects, the elements of which correspond to the query substitution parameters at the same positions in the Object array passed to this method. It is important that the array of query substitution parameters contain only parameters of the following data types:

  • java.lang.Integer - only to be used for parameters representing temporary table multiplex IDs (see the JavaDoc documentation for the com.goldencode.p2j.persist.TemporaryBuffer class for details)
  • java.lang.Long - only to be used for parameters representing surrogate primary keys
  • com.goldencode.p2j.character
  • com.goldencode.p2j.date
  • com.goldencode.p2j.decimal
  • com.goldencode.p2j.integer
  • com.goldencode.p2j.logical
  • com.goldencode.p2j.raw
  • com.goldencode.p2j.recid (should not be necessary for most queries)
  • com.goldencode.p2j.rowid (should not be necessary for most queries)

Note that the select clause of the HQL statement includes only co.id, which represents the primary key of the CustomerOrderImpl DMO. We do not select the entire record at this point, because we need to acquire a share lock on each record we process. This requires that the query be performed in two passes. The first pass uses the Persistence.scroll API to collect the primary keys of all the records which match the query's criterion:

      // Obtain primary keys of records matching query criteria
      ScrollableResults sr = persistence.scroll(hql,
                                                parms,
                                                DBUtils.makeTypeArray(parms),
                                                0,
                                                0,
                                                ScrollMode.FORWARD_ONLY);

A word of caution: DO NOT use the Persistence.list API here in place of Persistence.scroll! For very large result sets, this can cause a spike in the JVM's heap memory which can have very serious repercussions for the stability of the FWD server process. Please refer to the javadoc documentation for these methods for more details.

We then iterate through the result set, fetch each record with a share lock in a second pass, add the record's primary key to the list of locks to be released later, and process the record. Note another assumption in this part of the example: the call to Persistence.load presupposes that the database table which backs the CustomerOrderImpl DMO is named customer_order.

      while(sr.next())
      {
         // Load each record with a share lock
         Serializable pk = (Serializable) sr.get(0);
         CustomerOrderImpl custOrder = persistence.load("customer_order",
                                                        CustomerOrderImpl.class,
                                                        pk,
                                                        LockType.SHARE,
                                                        true);

         locked.add(pk);

         // Do some work with custOrder DMO
         ...
      }

The last thing we do before the try block ends is commit the transaction.

If a PersistenceException is thrown, we catch it in the catch block, roll back the transaction, and process the error as is appropriate for the use case.

Note that there is no separate catch block for LockUnavailableException, which is a checked exception thrown by the Persistence.load method. There are two reasons for this:

  • We do not expect this exception to be thrown, because we have requested LockType.SHARE. This exception is thrown only for the non-blocking LockType variants (i.e., LockType.SHARE_NO_WAIT and LockType.EXCLUSIVE_NO_WAIT).
  • LockUnavailableException is a subclass of PersistenceException, so the catch block which catches PersistenceException is sufficient to meet the checked exception requirements of the Persistence.load method signature.

In the finally block, we iterate through the primary keys of all the records we have locked, and release each one. Persistence.lock throws a checked exception, LockUnavailableException. However, it is unexpected that this exception would be thrown while unlocking a record.

In the event we did not need a lock on each record, the example would be different in the following ways:

  • We would not collect primary keys in an ArrayList, nor have a finally clause.
  • The query would be done in a single pass, in which we would collect the full records instead of only primary keys. To accomplish this, we would simply omit the select clause from the HQL statement, such that it instead would begin “from CustomerOrderImpl ...”.
  • Accordingly, the call to the Persistence.load API in the ScrollableResults iterating loop would be unnecessary.

Such an example would look like this:

import org.myorg.myapp.dmo.mydb.impl.*;
import org.hibernate.*;
import com.goldencode.p2j.persist.*;
import com.goldencode.p2j.util.*;

import static com.goldencode.p2j.util.character.*;

...

private Persistence persistence = PersistenceFactory.getInstance("mydb");

...

/**
 * Process all customer orders for a given customer.
 *
 * @param   customerName
 *          Customer name.
 */
void processCustomerOrders(String customerName)
{
   boolean beganTx = persistence.beginTransaction();

   try
   {
      // Compose query to find all customer_order records with the customer's name
      String hql = "from CustomerOrderImpl co where upper(rtrim(co.custName, ' \t\n\r')) = ?";

      // Wrap parameter in FWD data type, right trim it, uppercase it
      character customerNameParm = toUpperCase(rightTrim(customerName));
      Object[] parms = new Object[] { customerNameParm };

      // Obtain primary keys of records matching query criteria
      ScrollableResults sr = persistence.scroll(hql,
                                                parms,
                                                DBUtils.makeTypeArray(parms),
                                                0,
                                                0,
                                                ScrollMode.FORWARD_ONLY);

      while(sr.next())
      {
         // Read each record
         CustomerOrderImpl custOrder = (CustomerOrderImpl) sr.get(0);

         // Do some work with custOrder DMO
         ...
      }

      // Commit transaction
      if (beganTx)
      {
         persistence.commit();
      }
   }
   catch (PersistenceException exc)
   {
      if (beganTx && persistence.isTransactionOpen())
      {
         try
         {
            persistence.rollback();
         }
         catch (PersistenceException exc2)
         {
            // do some error handling for the rollback error
            ...
         }
      }

      // Handle error appropriately...
      ...
   }
}

Using SQL Instead of HQL

HQL is used in converted code to enable business logic to be independent of any particular database vendor's SQL dialect. For the purposes of expressing queries derived from legacy Progress source code, HQL is adequate. However, HQL has its limitations in more complex queries. This is primarily due to restrictions or defects in Hibernate's HQL parser implementation. For example, certain uses of the GROUP BY clause in HQL statements are not handled properly, and expressions are not fully supported in some sections of an HQL statement.

When you encounter such a limitation, you may find it necessary to express a query using SQL instead of HQL. This is supported by the Persistence APIs:

public ResultSet executeSQLQuery(String)
public ResultSet executeSQLQuery(String, Object[])

These methods should be used within the context of the same general idiom as is described above for reading data. However, there are some additional points to consider when using this mechanism:

  • These APIs perform no mapping of database records to Java DMOs. Queries are expressed using table and column names, rather than DMO class and property names.
  • Database-level record locking (e.g., SELECT...FOR UPDATE) should not be used in your SQL statements as a substitute for the FWD runtime locking described above. Reliance on database-level record locking alone without using the FWD runtime lock manager may lead to data corruption.
  • It is the caller's responsibility to close the ResultSet object returned by these methods.
  • Currently, these methods are not implemented to take advantage of server-side database cursors, so the same memory warnings apply as for the Persistence.list method above. These methods originally were intended only for housekeeping purposes, internal to the FWD persistence framework, where large result sets were not anticipated. This limitation likely will be addressed in a future FWD release.
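To make these points concrete, here is a hedged sketch of using executeSQLQuery within the general reading idiom described earlier. The table and column names (customer_order, cust_name) are the same hypothetical ones used in this chapter's other examples, and the error handling is abbreviated; note that the SQL references tables and columns directly rather than DMO classes and properties:

```java
boolean beganTx = persistence.beginTransaction();
java.sql.ResultSet rs = null;

try
{
   // SQL (not HQL): raw table/column names; same upper/rtrim treatment
   // is needed for case-insensitive textual columns as described above
   String sql = "select co.id from customer_order co " +
                "where upper(rtrim(co.cust_name, E' \t\n\r')) = ?";
   Object[] parms = new Object[] { toUpperCase(rightTrim(customerName)) };

   rs = persistence.executeSQLQuery(sql, parms);

   while (rs.next())
   {
      long pk = rs.getLong(1);
      // do some work with the row
   }

   if (beganTx)
   {
      persistence.commit();
   }
}
catch (Exception exc)
{
   // roll back and handle errors as in the earlier examples
}
finally
{
   // The caller is responsible for closing the ResultSet
   if (rs != null)
   {
      try { rs.close(); } catch (java.sql.SQLException exc) { /* ignore */ }
   }
}
```

Remember that record locking, if needed, must still be performed through the FWD lock manager (Persistence.lock or Persistence.load), not through SQL locking clauses.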

Creating/Inserting New Records

A record is created using the following idiom:

begin a transaction
try
  allocate a unique primary key
  obtain an exclusive lock on that key for the appropriate table
  instantiate the appropriate DMO implementation class
  set the primary key as the DMO's ID
  optionally set other DMO properties
  persist the DMO
  commit transaction
catch exceptions
  roll back transaction
  handle errors
finally
  release the record lock

An example follows:

import org.myorg.myapp.dmo.mydb.impl.*;
import com.goldencode.p2j.persist.*;
import com.goldencode.p2j.util.*;

...

Persistence persistence = PersistenceFactory.getInstance("mydb");

RecordIdentifier ident = null;

boolean beganTx = persistence.beginTransaction();

try
{
   Long pk = (Long) persistence.nextPrimaryKey("customer_order");
   ident = new RecordIdentifier("customer_order", pk);
   persistence.lock(LockType.EXCLUSIVE, ident);

   CustomerOrderImpl custOrder = new CustomerOrderImpl();
   custOrder.setId(pk);
   custOrder.setCustName(new character("ACME Ltd."));
   ...

   persistence.save(custOrder, pk);

   if (beganTx)
   {
      persistence.commit();
   }
}
catch (LockUnavailableException exc)
{
   persistence.rollback();

   // Handle error
   ...
}
catch (PersistenceException exc)
{
   if (beganTx && persistence.isTransactionOpen())
   {
      try
      {
         persistence.rollback();
      }
      catch (PersistenceException exc2)
      {
         // do some error handling for the rollback error
         ...
      }
   }

   // Handle error
   ...
}
finally
{
   if (ident != null)
   {
      try
      {
         persistence.lock(LockType.NONE, ident);
      }
      catch (LockUnavailableException exc)
      {
         // Handle accordingly;  unexpected when releasing a lock
      }
   }
}

Just as with locks acquired while reading records, it is critical that the lock manually acquired with Persistence.lock in the try block be released in the finally block, to prevent a lock leak.

Updating Existing Records

To update existing database records, we use a modified version of the idiom to read records with locks:

begin transaction
try
   execute a query to obtain primary keys
   for each result
      fetch full record (as a DMO) with EXCLUSIVE lock type
      use the DMO's setter methods to set desired property values
   commit transaction
catch exceptions
   roll back transaction
   handle errors
finally
   release locks

A practical example is very similar to the original reading example above. The only differences are that we request LockType.EXCLUSIVE in the Persistence.load API call, and the work that we do is calling setter methods on the DMO to set the properties we wish to update:

         ...

         // Load each record with an exclusive lock
         Serializable pk = (Serializable) sr.get(0);
         CustomerOrderImpl custOrder = persistence.load("customer_order",
                                                        CustomerOrderImpl.class,
                                                        pk,
                                                        LockType.EXCLUSIVE,
                                                        true);

         locked.add(pk);

         // Make updates to custOrder DMO
         custOrder.setSomething(new integer(5));
         custOrder.setSomethingElse(new character("abcdef"));

         ...

Although every DMO has a public setter method setId(Long), you must never change the primary key itself as part of an update. Doing so will cause problems for Hibernate and is unsupported.

You may have noticed that no Persistence API is invoked to update the record explicitly. This is because Hibernate implements a feature known as transparent write-behind. This simply means that updates made to Java objects representing database records (i.e., DMOs) are not necessarily flushed to the database immediately. Instead, Hibernate attempts to batch up changes and flush them at the latest possible moment. Hibernate uses snapshots of each DMO's contents to manage the proper points to perform such a flush, such as before another query on the same table, and before a transaction is committed.

Deleting Records

The idiom for deleting records is very similar to that used to perform updates:

begin transaction
try
   execute a query to obtain primary keys
   for each result
      fetch full record (as a DMO) with EXCLUSIVE lock type
      delete the record
   commit transaction
catch exceptions
   roll back transaction
   handle errors
finally
   release locks

Accordingly, the practical example is very similar to the update example above:

         ...

         // Load each record with an exclusive lock
         Serializable pk = (Serializable) sr.get(0);
         CustomerOrderImpl custOrder = persistence.load("customer_order",
                                                        CustomerOrderImpl.class,
                                                        pk,
                                                        LockType.EXCLUSIVE,
                                                        true);

         locked.add(pk);

         // Delete the custOrder DMO
         persistence.delete(custOrder);

         ...

One notable difference is that an explicit call to Persistence.delete(Object) is required to perform the delete action. This will both remove the DMO from the underlying Hibernate session and delete the corresponding record from the database.

Although it may be tempting to execute a DELETE SQL statement to delete many rows at once, avoid doing this, as it completely bypasses the record locking mechanism and as such, compromises data integrity. It also will prevent any DMOs which may still reside in the underlying Hibernate session from being released, causing a memory leak.

Adding and Modifying Database Tables and DMOs

When working with existing code or writing a new application from scratch, it may be necessary to add an entirely new table or to modify an existing one. When a new table needs to be added or an existing one needs to be changed, the following issues need to be addressed:

  • write the DDL to add the new table or change an existing one
  • write the DDL to add a new index to a new or existing table
  • create or modify the DMO interface, DMO implementation class and Hibernate configuration file (.hbm.xml file)
  • update the dmo_index.xml file

As the temporary tables are backed by an H2 database and are automatically managed by FWD (their creation, indexes, etc.), we will discuss only what is needed to maintain the backing tables in the PostgreSQL database. If temporary tables need to be added or changed, there is no need to manually maintain DDL code. All that is needed is to maintain the DMO interface, DMO implementation class, Hibernate configuration file, and dmo_index.xml file.

Write DDL to add the new table

First, we will present how the developer can add a new table to the underlying permanent relational database. To accomplish this, DDL will be needed to create the table. The DDL can be generated in two ways: written from scratch by the developer or generated automatically following the steps in the Schema generation target section of the Building Converted Code chapter in the FWD Conversion Handbook.

The simplest way to get the DDL needed to create a new table is to first write the Hibernate configuration file (.hbm.xml file) for the new table. Once this file is complete, run the SCHEMA ant task in the build.xml file to generate the DDL for all tables in the specified database. The generated .sql file will contain the DDL for all the tables. All that is left is to identify the CREATE TABLE, ALTER TABLE, and CREATE INDEX statements which apply to this table or to any composite table associated with its extent fields, and copy those statements to another file. After the file has been created, execute the SQL statements against the target database. For PostgreSQL, the command to execute this file is the following:

psql -h <host> -p <port> -U <username> <database-name> -f <schema-file-name>.sql

The .sql file generated by the SCHEMA ant task groups the DDL into the following sections:

  • ALTER TABLE statements to drop all table constraints
  • DROP TABLE statements to drop all tables
  • CREATE TABLE statements to create the relational table and the composite tables (associated with any extent fields); the composite table name uses the parent table's name as a prefix, followed by the ”__” string and the extent value.
  • for each composite table, there will be a CREATE INDEX statement followed by an ALTER TABLE statement, to index the composite table by the parent__id column and to create the actual foreign key reference from the parent__id column to the main table.
  • case-sensitive text columns and varbinary columns are a special case; these will need to be registered in a special section of the dmo-index.xml file, for that DMO
  • also, if a column relates to a column with the same name in another table and a foreign relation exists between these two columns, no actual foreign key constraint will be enforced at the database level; instead, the foreign keys will be registered in a special section for that DMO in the dmo-index.xml file

From each section, you must retain only those statements which refer to the new table or its child composite tables. The copied statements must also be pasted into the new file in the same order as they appear in the original file. More details about this file can be found in the Data Definition Language (DDL) chapter in Part 3 of the FWD Conversion Reference book.

The other approach is to write from scratch the DDL statements to create the table, foreign relations, and default indexes. Let's start with the DDL to create a table. The first step is to determine the fields and their corresponding types; second, if the table needs support for 4GL-compatible extent fields, we need to determine which ones are extent fields. Each newly created table must have a column named id, which is the PRIMARY KEY for the table. For all the table-specific columns, the DDL must specify the name (which must follow the table naming convention), the not null constraint (if required), and the default value (if required). To summarize, when defining a new table, the following construct must be used, where <table_name> will be replaced with the actual table name:

create table <table_name>
(
   id bigint,
   <column_name> <type> [not null] [default <value>],
   ...,
   primary key (id)
)

If compatibility with legacy code is needed, then each column's type must be one of the types supported by FWD. For a complete list of supported types, see the Data Type conversion section in Part 3 of the FWD Conversion Reference.
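As a concrete illustration, a minimal statement for the hypothetical customer_order table used throughout this chapter might look like this (the column names and types are assumptions for the sake of the example):

```sql
create table customer_order
(
   id bigint,
   cust_name varchar not null default '',
   cust_number integer not null default 0,
   primary key (id)
)
```

Note the mandatory id primary key column and the per-column not null constraints and defaults, matching the template above.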

If any of the table's fields are Progress extent fields, then those fields are mapped to composite tables. The name of a composite table follows the naming convention described in the Data Definition Language (DDL) chapter in Part 3 of the FWD Conversion Reference book. Each composite table contains all the extent fields which have the same size. Besides the actual fields, the table must contain the parent__id and list__index columns. The parent__id column is a foreign reference to the record in the main table to which the extent values belong. The list__index column positions each element in the value list. All other fields in the composite table are extent fields of the same size. The main difference between the composite table and the parent table is that the composite table has a primary key composed of the parent__id and list__index columns (in this order). To create a composite table, the following structure can be used, where <extent_size> will be replaced with the extent value for the contained columns:

create table <table_name__<extent_size>>
(
   parent__id bigint,
   <column_name> <type> [not null],
   ...,
   list__index integer,
   primary key (parent__id, list__index)
)

After the composite table is created, an index must be added on the parent__id column, along with the foreign key relation from parent__id to the main table:

create index <table_name__<extent_size>>_fkey on <table_name__<extent_size>> (parent__id);
alter table <table_name__<extent_size>>
   add constraint <constraint ID>
   foreign key (parent__id)
   references <table_name>;
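
Putting the pieces together, a hypothetical phone field with extent 5 on a customer table could be set up like this. All names (customer, customer__5, phone, and the constraint name) are invented for this example:

```sql
-- Hypothetical example: composite table for a phone field with extent 5.
create table customer__5
(
   parent__id bigint,
   phone text,
   list__index integer,
   primary key (parent__id, list__index)
);

-- Index and foreign key back to the parent table.
create index customer__5_fkey on customer__5 (parent__id);
alter table customer__5
   add constraint fk_customer__5_parent
   foreign key (parent__id)
   references customer;
```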

Although the backing database allows the same column name to appear in different tables, when using multiple composite tables special attention is needed to keep the column names distinct among all the composite tables and the main table.

Write the DDL to change an existing table

If the table already exists and a new column needs to be added or an existing one changed, there is no alternative other than writing the SQL code by hand. For this, the ALTER TABLE statement is used to either add a new column or change an existing one; an extent column is treated slightly differently. Using ALTER TABLE DDL it is possible to add a new column, set or remove its not null constraint, or even drop an existing column.

To add a new (non-extent) column, its name, type, and not null constraint (if any) must be specified. The ALTER TABLE statement will be used to modify the table definition:

alter table <table_name>
   add column <column_name> <type>
   [not null];
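
For instance, adding a not null column to a hypothetical customer table that already contains rows requires a default, since existing rows would otherwise violate the constraint. The table and column names here are invented for this example:

```sql
-- Hypothetical example: add a mandatory email column to an existing table.
-- The default value lets the statement succeed even when rows already exist.
alter table customer
   add column email text not null default '';
```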

Once a new column has been added, support for it must be added to the DMO interface and implementation class and to the Hibernate .hbm.xml configuration; any foreign relations, indexes or other column properties must be added to the dmo_index.xml file. Details on how to modify these files will follow later in this section.

When adding a new extent column, we must first find the composite table to which this column belongs. For this, we must check whether a table named <table_name__<extent_size>> already exists. This can be done by running the following command in a psql console:

\d <table_name__<extent_size>>

If the table already exists, its definition will appear on the screen; otherwise, the console will show an error stating that the table does not exist. If the table already exists, an ALTER TABLE statement similar to the one for adding a “simple” column can be used to add the extent column:

alter table <table_name__<extent_size>>
   add column <column_name> <type>
   [not null];

After adding a new column to the composite table, the DMO interface, DMO implementation class, Hibernate configuration file and dmo_index.xml file must be updated. Details on how to update these files will follow later in this section.

If an existing column needs to be changed, the ALTER TABLE statement can be used to set or remove its not null constraint. Although ALTER TABLE also allows changing a column's name or type, we do not recommend such actions (especially if the table and column are already used in the application). What follows describes what needs to be done to set or remove the not null constraint.

First, setting the not null constraint. When setting this constraint on an existing column (which may already have records with a null value for this column), it is important to determine the appropriate default value for such records. Once this has been determined, before enforcing the constraint on the table, we set the column to the default value for all records where it is null:

update <table_name>
   set <column_name> = <default>
   where <column_name> is null;

After ensuring there are no null values for this column, we can go ahead and enforce the constraint:

alter table <table_name>
   alter column <column_name>
   set not null;
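
The two steps can be combined into a short concrete sequence. The table and column names (customer, cust_name) and the empty-string default are invented for this example:

```sql
-- Hypothetical example: enforce not null on customer.cust_name.
-- Step 1: replace existing null values with the chosen default.
update customer
   set cust_name = ''
   where cust_name is null;

-- Step 2: enforce the constraint, which now cannot fail on existing rows.
alter table customer
   alter column cust_name
   set not null;
```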

Once the not null constraint is set, we must also update the Hibernate configuration file; details how to do this will follow later in this section.

When a not null constraint is determined to be no longer needed, the ALTER TABLE statement will be used to drop it. After removing the constraint, the Hibernate configuration file will need to be updated too. To remove the constraint from the column, use the following construct:

alter table <table_name>
   alter column <column_name>
   drop not null;

The final use described for the ALTER TABLE statement is dropping an existing column. This must be done only when it is confirmed that the column is no longer used in the business logic, so that it can be safely removed from the backing table.

alter table <table_name>
   drop column <column_name>;

When an extent column (compatible with legacy converted code) needs to have the not null constraint set, the same steps as for a normal table column apply: determine the appropriate default value, update existing records, and enforce the constraint; special attention is needed so that the correct composite table is used. If an extent column needs to be removed, a similar ALTER TABLE statement will be used to remove the column from the composite table. If the composite table is left without any extent columns, it can be safely removed from the backing database, but only after all its constraints have been dropped first:

alter table <table_name__<extent_size>> drop constraint <constraint_name>;
drop table <table_name__<extent_size>>;

In the above code, <constraint_name> is the name of the constraint which enforces the foreign key relation between the parent__id column and the parent table. Once the composite table has been removed from the backing database, the DMO interface, DMO implementation class, Hibernate configuration file and dmo_index.xml file need to be updated accordingly.

Write the DDL to add a new index to a new or existing table

Once the table maintenance is complete, we can consider what kind of indexes need to be added. The backing database supports both unique and ordinary indexes. Both kinds speed data access; a unique index additionally ensures that the combination of indexed column values remains unique. To create an index from scratch, the following DDL can be used:

create [unique] index <index_name>
   on <table_name>
   (<column_1>, <column_2>, ...);

When creating an index, special attention needs to be given to any text columns; such columns must be indexed using a special expression, as described in the Data Definition Language (DDL) chapter of Part 3 of the FWD Conversion Reference book. Case-sensitive columns are indexed using the rtrim(<column_name>, E' \t\n\r') expression and case-insensitive columns using the upper(rtrim(<column_name>, E' \t\n\r')) expression. The DDL to create such an index for a case-insensitive text column would look like:

create [unique] index <index_name>
   on <table_name>
   (upper(rtrim(<column_name>, E' \t\n\r')));
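
A filled-in version of this template, for a hypothetical case-insensitive cust_name column on a customer table (all names invented for this example), could look like:

```sql
-- Hypothetical example: case-insensitive unique index on customer.cust_name.
-- The upper(rtrim(...)) expression matches the 4GL-compatible collation rules.
create unique index idx_customer_cust_name
   on customer
   (upper(rtrim(cust_name, E' \t\n\r')));
```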

If a unique index is created on a table which already contains data for the indexed columns, the creation might fail if the existing values are not unique. In this case, consider either making the values unique or whether a unique index on those columns is really necessary. If the index creation succeeds and the table contains data, it can be useful to execute the ANALYZE command for the indexed table, to collect statistics about the existing data. To run this command, execute the following at the psql console:

analyze <table_name>;

Finally, after a unique or ordinary index is created, the dmo_index.xml file will need to be updated so that the DMO will be aware of the new index (how to do this will be explained later in this section).

How to create new DMOs from scratch

When a new table is added to the database, the job is not complete until this table is registered with the business logic too. To accomplish this, we must map the table to a DMO which will be used all over the business logic. For each DMO, there are three files which need to be created: the DMO interface, the DMO implementation class and the Hibernate configuration file. Details about the structure and contents of each file are found in the Data Model Objects section of the Part 3 - Schema conversion chapter of the FWD Conversion Reference book.

All these DMO related files will need to be placed in the correct package; if the schema name is mydb, then the DMO interface will be placed in the dmo.mydb package and the DMO implementation class will be placed in the dmo.mydb.impl package.

If a temporary DMO needs to be added or changed, all the steps are the same as for permanent DMOs. The only requirement is that the DMO interface, DMO implementation class and Hibernate configuration file will be placed in the correct package, for the _temp schema (i.e. dmo._temp package).
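
To make the shape of these files concrete, here is a minimal structural sketch of a DMO interface and implementation class. All names (Customer, cust_name/phone accessors, the extent size) are invented for this example; real FWD-generated DMOs use FWD wrapper types and carry a matching Hibernate mapping, so this only illustrates the accessor/extent-field pattern, not the exact generated code:

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Hypothetical DMO sketch; names are illustrative only.
public class DmoSketch
{
   // DMO interface: one getter/setter pair per table column, plus indexed
   // accessors and a size method for an extent field.
   public interface Customer
   {
      String getName();
      void setName(String name);
      String getPhone(int index);          // extent field, 0-based here
      void setPhone(int index, String value);
      int sizeOfPhone();
   }

   // DMO implementation class: plain fields back the accessors; the extent
   // field is held in a list, mirroring the composite table rows.
   public static class CustomerImpl implements Customer, Serializable
   {
      private String name;
      private final List<String> phone = new ArrayList<>();

      public CustomerImpl(int phoneExtent)
      {
         // pre-size the extent list, one slot per list__index
         for (int i = 0; i < phoneExtent; i++) phone.add(null);
      }

      public String getName() { return name; }
      public void setName(String name) { this.name = name; }
      public String getPhone(int index) { return phone.get(index); }
      public void setPhone(int index, String value) { phone.set(index, value); }
      public int sizeOfPhone() { return phone.size(); }
   }

   public static void main(String[] args)
   {
      Customer c = new CustomerImpl(3);
      c.setName("ACME");
      c.setPhone(0, "555-0100");
      System.out.println(c.getName() + " " + c.getPhone(0) + " " + c.sizeOfPhone());
   }
}
```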

How to change existing DMOs

When working with existing DMOs, an important consideration is how the existing DMO was created: generated automatically during conversion, or hand-written from scratch. Each case leads to a different scenario.

In the first case, when modifying DMOs which were generated during conversion, you should make sure that no subsequent conversions need to be run. This translates to making sure no further changes will be needed in the legacy 4GL code; if the legacy 4GL code is still under maintenance, then changes to an existing DMO which maps to a legacy 4GL table will be overwritten on the next conversion run. So, in this case, it is best to make the changes in the 4GL schema and expose them to the new code by running the conversion routine. This approach will also expose the changed table to the existing 4GL code. If the 4GL code is no longer maintained, and all changes will be done in the converted Java code, it is OK to change the Java code for the generated DMO.

In the latter case, when the DMO was written from scratch, there are no compatibility constraints related to its changes. Considering that it is not used in the legacy code, its changes will not get overwritten if the legacy code is reconverted.

After it has been determined that it is safe to modify the DMO's Java sources, the next step is to determine which database table changes are reflected in the DMO too. In all cases, three files will need to be changed: the DMO interface, the DMO implementation class and the DMO's Hibernate configuration file. Depending on the database table changes, different steps will need to be taken.

The database table changes which affect the DMO are those related to table fields - adding a new field or dropping an existing one. Table index changes will not affect the DMO sources - those changes go in the dmo_index.xml file. Also, any changes related to case-sensitivity or foreign keys will need to be synchronized with the dmo_index.xml file too.

When a field needs to be dropped, changes will need to be performed in each of the DMO's files. The following table describes what needs to be removed from each file:

Action                                                             | DMO Interface | DMO Implementation class | Hibernate mapping
remove accessors (getters and setters)                             |       X       |            X             |
remove field definition                                            |               |            X             |         X
remove composite class, if no more fields                          |               |            X             |         X
remove the special composite field, if composite class is removed  |               |            X             |         X
remove the special composite method, if composite class is removed |               |            X             |

After removing the field definition and the accessors from the DMO implementation class, the default constructor and the assign(Undoable) method must be updated too. If the composite class has no other field, after its definition is removed, the private List composite<X> field and all its references must be removed too.

When adding a new field, the affected areas are the same as when dropping a DMO field. In the case of a non-extent field, the first step is to add the field to the Hibernate mapping file. The new field will be defined in a property node, which is a child of the class node. For the property, the table column name, DMO property name and the FWD-compatible type will be specified as attributes. After modifying the mapping file, the next step is to add the accessor definitions (getters and setters) to the DMO interface. The last step is to modify the DMO implementation class: add the instance field definition and implement the getters and setters defined in the DMO interface; also update the default constructor (to initialize the field) and the assign(Undoable) method (to copy the field's value to another DMO instance).

In the case of an extent field, the changes start with identifying the composite class to which this field belongs. The composite class for this field is the one which groups together fields of the same extent; if there are no other fields of the same extent, no composite class exists yet and it will need to be added.

The composite class is an inner class defined within the DMO implementation class, with the same structure as mentioned in the previous section (which covers adding a new DMO). After this class is created, the composite field must be added (with all its related changes - getter, setter and initialization in the constructor). All the remaining steps (adding the class field which holds the composite elements, getters, setters, the size method, and the default constructor and assign(Undoable) method changes) are the same as the ones mentioned for adding a new DMO.

For the DMO interface, the definitions for all the methods related to the field which holds the list of composite elements (the getters, setters and size method) must be added.

The Hibernate configuration file will need to be changed so that the backing composite database table is mapped to the correct class and the field which holds the list of composite elements is created. The steps needed to create this mapping are found in the previous section, related to adding a DMO from scratch.

How to work with the dmo_index.xml file

While the 4GL code is still maintained, this file should not be edited by hand, as the file gets generated on each conversion (assuming that the schema generation step is included). Instead, there are two different ways to add a new table: either edit the main database schema file and add the new table definition directly in 4GL style or write the new DMO directly in Java. When writing it directly in Java, DMO registration and other DMO-related information must be added to a special merge file. More details about this special merge file and about how the schema changes can be handled while the 4GL code is still maintained can be found in the Managing Data Model Object (DMO) Changes section of the Integrating Hand-Written Java chapter of this book.

Managing data records using SQL

This section describes how to manually maintain data records in cases when the application doesn't provide a way (e.g. computed columns, or records not managed by the user). This covers only managing records in the permanent database; as the records for a temporary table exist only while the user session is active, no manual record insert, update or delete will be needed.

If for a certain table the application doesn't provide a way of altering data records, it is possible to do this using the psql console. The only concern is whether the server is running or not. If the server is running and a record needs to be altered, that record must be locked, so the business logic will not be able to alter it until the explicit lock is released.

The first part presents the steps needed to alter a record in a table with no extent fields (i.e. there are no composite tables). The changes to these steps for tables with extent fields are presented afterward.

If the server is started, the record must be locked; for this, the Administrative Console provides the Console/Acquire locks screen, which allows explicitly locking a record ID. After an entry is added on this screen, pressing the Lock Records button adds an exclusive lock for this record ID; it is then safe to alter or add the record, as discussed later in this section. When the work is done, the lock must be released explicitly by pressing the Unlock Records button.

If the server is not started, then no locking is needed - all that is left is to start the psql console and connect to the required database:

psql -h <host> -p <port> -U <username> <database-name>

If the server is running, the first step after the SQL console has started is to find the ID of the record we need to lock. If it is a new record, we will have to find the ID which will be assigned to the new record.

When adding a new record, the new record ID can be found by running this command at the psql console:

database=# select max(id) + 1 from <table>;

where <table> is the name of the table to which the new record needs to be added. This will return a unique value, which will be used in the Administrative Console as the ID of the record to be locked. The command will return output like this:

?column?
-----------
 120949467
(1 row)

After the lock is acquired or it is ensured the server is not running, the INSERT statement will be used to add the actual data for this record. The syntax for this command is:

database=# insert into <table> (id, <column1>, <column2>, ...) values (ID, <value1>, <value2>, ...);

where:

  • <table> is the database table name
  • ID is the value computed at the previous step
  • <column1>, <column2> is the list of columns defined for this database table
  • <value1>, <value2> is the list of values for each specified table column

On success, the command will return an output like this:

INSERT 0 1

Depending on each column's type, the specified value must follow a specific format; also, the values must be specified in the same order as the column names. The following table describes the format for each PostgreSQL type supported by FWD:

PostgreSQL Type | Format                                                  | Example
TEXT            | The value must be enclosed in apostrophes               | 'value'
DATE            | The date must use the 'YYYY-MM-DD' format               | '2010-12-31'
NUMERIC         | Use a dot (.) as decimal separator. No digit grouping.  | 1.234
INTEGER         | No digit grouping.                                      | 1234
BIGINT          | No digit grouping.                                      | 1234
BOOLEAN         | No special notes.                                       | true or false
BYTEA           | The binary string must be enclosed in apostrophes       | '<binary string>'

In all cases when a value must be set to the 4GL unknown value (?), the null value will be used. Special attention is needed for not null columns: in these cases the SQL server will generate an error and the operation will not complete.
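
A concrete INSERT following these formats might look as below. The table and column names and all values are invented for this example:

```sql
-- Hypothetical example: insert a record using the value formats above
-- (text, date, numeric and boolean columns).
insert into customer (id, cust_name, start_date, balance, active)
values (120949467, 'ACME Corp', '2010-12-31', 1.234, true);
```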

After the record has been inserted, the lock for the record's ID must be released, if it was acquired (i.e. if the server is running).

If a record needs to be deleted, the DELETE statement will be used. The first step is to determine the ID of the record to be deleted; if the server is running, the record needs to be locked. To find the records which need to be deleted, a SELECT statement can be used to filter through the table records and find the IDs of the records of interest:

database=# select id from <table> where <clause>;

where <clause> is the filter applied to the records. The output of this statement, if any records are found, looks like this:

    id
-----------
 120362683
(1 row)

If no record was found, the output will look like this:

    id
-----------
(0 rows)

After the record ID has been found (and locked, if needed), the following statement can be used to delete the record with the specified ID from the backing database table:

database=# delete from <table> where id = <ID>;

where <ID> is the ID of the record which needs to be deleted. Although any kind of expression can be specified in the WHERE clause, it is not recommended to use an expression other than id = <ID>, especially if the server is running. This way, the administrator has full control over which record gets removed from the table. The output of this statement, if executed successfully, is this:

DELETE 1

After the statement was executed, the lock for this record's ID must be released in the Administrative Console, if it was acquired.

When a record needs to be changed and this can't be done using the application's UI, the first step is the same as when deleting a record: find the ID of the needed record and lock it, if required. After this, the UPDATE statement will be used to change the record:

database=# update <table> set <column1> = <value1>, <column2> = <value2>, ... where id = <ID>;

where, for each column which needs to be updated, its value is specified using the same format as defined above. If executed successfully, the statement will return the following output:

UPDATE 1

Same as when deleting a record, after all updates to this record are done, the last step is to release the lock on this record ID in the Administrative Console.

For fields which are part of a composite table (i.e. extent fields), the record insert, update or delete must use the composite table. If locking is needed, the ID to lock is that of the parent record to which the extent field belongs.

When adding a new record to a table which has one or more composite tables, the job is not done until these tables are populated too, so that each holds an entry for every extent index:

database=# insert into <composite table> (parent__id, <column1>, <column2>, ..., list__index) values (<record ID>, <value1>, <value2>, ..., <index>);

where:

  • parent__id is the ID of the parent record to which the extent fields belong
  • list__index is the position of this entry

The above INSERT statement template must be repeated as many times as the extent size of the composite table's fields, with list__index taking values from 1 to the extent size:

database=# insert into <composite table> (parent__id, <column1>, <column2>, ..., list__index) values (<record ID>, <value1>, <value2>, ..., 1);
database=# insert into <composite table> (parent__id, <column1>, <column2>, ..., list__index) values (<record ID>, <value1>, <value2>, ..., 2);
database=# insert into <composite table> (parent__id, <column1>, <column2>, ..., list__index) values (<record ID>, <value1>, <value2>, ..., 3);
...
database=# insert into <composite table> (parent__id, <column1>, <column2>, ..., list__index) values (<record ID>, <value1>, <value2>, ..., <extent size>);

In the above statements, as the composite table has a unique index on the (parent__id, list__index) columns, the list__index values must be unique for a given parent__id.
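
For a hypothetical phone field with extent 3 belonging to parent record 42 (all names and values invented for this example), the repeated inserts would look like:

```sql
-- Hypothetical example: populate a composite table for an extent-3 field.
insert into customer__3 (parent__id, phone, list__index) values (42, '555-0100', 1);
insert into customer__3 (parent__id, phone, list__index) values (42, '555-0101', 2);
insert into customer__3 (parent__id, phone, list__index) values (42, '555-0102', 3);
```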

When updating an extent field, instead of using the UPDATE statement on the parent table, we use it on the composite table to which the field belongs:

database=# update <composite table> set <column1> = <value1> where parent__id = <ID> and list__index = <index>;

The above statement updates the extent field value at the specified index. If all the entries for this field need to be updated, the and list__index = <index> filter must be removed from the where clause. Note that list__index and parent__id must not be changed.

When a record that contains extent fields is deleted, the child records in all the composite tables (i.e. for all the extent fields) must be deleted before the parent record, after the record ID has been locked in case the server is running. The DELETE statement template to execute for each composite table is this:

database=# delete from <composite table> where parent__id = <ID>;

© 2004-2017 Golden Code Development Corporation. ALL RIGHTS RESERVED.