P2J Project Guide
Author | Greg Shah, Eric Faulhaber, Nick Saxon, Sergey Yevtushenko, Stanislav Lomany
Date | May 12, 2009
Access Control | CONFIDENTIAL
Contents
Introduction
Installation and Configuration
Quick Start (Converting a Simple Testcase and Running the Result)
Preparing a Project for Conversion
Conversion Tools
Data Migration
Logging
Debugging
How to Run Testcases
Documentation
Legal Notices
Appendix A. OpenLDAP Step By Step Configuration Guide
Appendix B. P2J Directory Configuration Options List
Appendix C. Setting Up a Certificate Authority Using OpenSSL
Appendix D. Issuing Sample SSL Certificates Using OpenSSL and Your Own CA
Appendix E. Vendor-Specific Database Setup
Appendix F. Notes on Running a P2J Client
Appendix G. Making Admin Client Use Separate JVM
Introduction
P2J is an acronym that stands for "Progress 4GL to Java" or "Progress to Java". The P2J project represents technology that can be used for two primary purposes:
- conversion - convert Progress 4GL source code into a functionally equivalent, drop-in replacement written in the Java language
- runtime - run the resulting converted Java application, providing the functions and features of Progress 4GL in a compatible manner
To understand each of these purposes and their high level design, please read the following documents:
Progress 4GL to Java Conversion
Progress 4GL to Java Runtime
Installation and Configuration
Directory Structure
The following table describes all critical directories in the project and the purpose of their contents.
Relative Path | Purpose
p2j | project root, includes ANT script build.xml and this document
p2j/build | compiled Java classes are output here by javac
p2j/build/lib | p2j.jar is output here
p2j/design | high level design document
p2j/diagrams | OpenOffice presentation file with each slide as a different diagram
p2j/dist/docs/api | javadoc is output here (detailed design documents are integrated into the javadoc)
p2j/lib | runtime location for 3rd party jars (ANTLR, Hibernate...)
p2j/manifest | stores jar file manifests
rules/annotations | pattern engine rules for stage 3 of the conversion, annotations
rules/callgraph | pattern engine rules for call graph generation
rules/convert | Progress 4GL source to Java source conversion rule sets
rules/dbref | dead database table and field processing
rules/fixup | post parsing pattern engine fixup rules
rules/include | common useful pattern engine rules, including Progress-specific rules
rules/reports | starter templates and generic reports
rules/schema | data and database conversion rules
rules/unreachable | unreachable code processing
p2j/src/com/goldencode/expr | the expression engine
p2j/src/com/goldencode/html | html output helpers
p2j/src/com/goldencode/p2j | global classes for the P2J project
p2j/src/com/goldencode/p2j/admin | support for administration of the runtime
p2j/src/com/goldencode/p2j/cfg | classes that store and retrieve configuration data from local XML files
p2j/src/com/goldencode/p2j/convert | code conversion
p2j/src/com/goldencode/p2j/directory | runtime support for a directory service (with LDAP back-end)
p2j/src/com/goldencode/p2j/e4gl | embedded 4GL preprocessing support (WebSpeed/Blue Diamond)
p2j/src/com/goldencode/p2j/io | stream helper classes
p2j/src/com/goldencode/p2j/main | client and server driver applications and bootstrap
p2j/src/com/goldencode/p2j/net | secure transaction processing and messaging protocol implementation
p2j/src/com/goldencode/p2j/pattern | pattern engine
p2j/src/com/goldencode/p2j/persist | database persistence runtime
p2j/src/com/goldencode/p2j/preproc | the Progress compatible preprocessor
p2j/src/com/goldencode/p2j/schema | the Progress schema conversion tools
p2j/src/com/goldencode/p2j/security | security runtime implementation
p2j/src/com/goldencode/p2j/test | example/test applications
p2j/src/com/goldencode/p2j/uast | the UAST implementation (includes the Progress Lexer and Parser)
p2j/src/com/goldencode/p2j/ui | user interface package
p2j/src/com/goldencode/p2j/ui/chui | character-specific user interface
p2j/src/com/goldencode/p2j/util | general string and other helper classes
p2j/src/com/goldencode/p2j/xml | XML helper classes
p2j/testcases/e4gl | embedded 4GL testcases
p2j/testcases/directory | LDAP-based directory service testcases
p2j/testcases/preproc | preprocessor testcases
p2j/testcases/simple | simple standalone runtime configuration for client and server
p2j/testcases/uast | lexer and parser testcases
p2j/testcases/test | runtime directory/network/security testcases
Prerequisites
JDK
Ant
p2jspi.jar
CHARVA
NCURSES
TermInfo Database
Hibernate
Included Jar Files
1. JDK 6 must be installed. At this time it is well tested with the Sun 1.6.0_b105 JDK for Linux x86. The user's PATH must be set to include the directory containing the command line executables (e.g. java or javac) of this installation; on Linux this is the bin/ directory. In other words, this should be the default installation for any shell or command prompt from which the P2J technology will be built or used. It is also recommended to set the user's CLASSPATH (on Unix/Linux) as follows:
export CLASSPATH=.
2. To build, Apache Ant is required. On Linux systems, versions 1.6.0 and 1.6.5 have been found to be sufficient. Due to the ANTLR dependencies, version 1.5.3 does not work. Please ensure that Ant can be found via the user's PATH.
3. The H2 database is used in embedded mode for temporary table support and for internal runtime housekeeping. To ensure H2 collates its string data in the same way as Progress, a custom implementation of java.text.spi.CollatorProvider is used. This resource is installed and made available to the P2J runtime using the Java Extension Mechanism. This is necessary in order for J2SE's Locale and related classes to have access to the custom collation services that are needed by P2J. To enable this support, the p2jspi.jar file must be copied from the p2j/build/lib directory (after P2J has been built) to the Java Extension directory for the target JVM. For a Sun JVM, this directory is located at $JAVA_HOME/jre/lib/ext. Note that administrator rights may be required, depending upon where the JVM is installed.
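The mechanism described above relies on the JDK's locale service provider interface. The class below is a hypothetical sketch only, not the actual p2jspi.jar code (which is not shown in this document); it illustrates the SPI shape, with an invented customization (PRIMARY strength, i.e. case-insensitive comparison) standing in for the real Progress-compatible collation rules:

```java
import java.text.Collator;
import java.text.spi.CollatorProvider;
import java.util.Locale;

/**
 * Hypothetical sketch of a CollatorProvider extension. The real p2jspi.jar
 * implementation is not reproduced here; the PRIMARY-strength customization
 * is invented for illustration only.
 */
public class DemoCollatorProvider extends CollatorProvider {
    @Override
    public Locale[] getAvailableLocales() {
        // A provider advertises the locales it can service.
        return new Locale[] { Locale.US };
    }

    @Override
    public Collator getInstance(Locale locale) {
        // Example customization: ignore case and accent differences.
        Collator c = Collator.getInstance(Locale.US);
        c.setStrength(Collator.PRIMARY);
        return c;
    }

    public static void main(String[] args) {
        Collator c = new DemoCollatorProvider().getInstance(Locale.US);
        System.out.println(c.compare("abc", "ABC")); // 0 at PRIMARY strength
    }
}
```

A real provider is packaged in a jar with a META-INF/services/java.text.spi.CollatorProvider entry and dropped into the extension directory, which is what the p2jspi.jar installation step above accomplishes.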
4. CHARVA is an LGPL project that provides a set of Java classes and a native (JNI) library (libTerminal.so) to create and manage an NCURSES terminal (Character UI or CHUI) interface on UNIX or Linux. CHARVA has been heavily modified for use in P2J. When running a P2J server or client, the modified charva.jar and the JNI shared object libTerminal.so must both be available. Pre-built versions are included in the p2j/lib directory and these can be distributed as part of the runtime system. The conversion process does not require these files. The charva.jar is used to compile the P2J project, but this is handled by the Ant build script, which uses the jar file that is shipped with P2J in the p2j/lib directory.
To install and build the CHARVA project from source code, please see the charva_project_modifications.html file included with the GCD customizations zip (this is a separate archive from the P2J project). After any updates, the new versions of both modules will need to be copied to p2j/lib and P2J will need to be rebuilt completely.
5. Patch NCURSES. CHARVA requires NCURSES for terminal support. A customized version of NCURSES 5.5 is required. NCURSES 5.5 is the minimum starting point, since a bug has been found in the NCURSES getch() that affects process launching (getch improperly blanks the screen); this bug is fixed in NCURSES 5.5. In addition, an NCURSES modification is required to provide enough thread safety to enable a dual-threaded approach to using NCURSES (NCURSES is not thread safe). The basic idea is that one thread may be dedicated to reading the keyboard via getch() while another thread is used for all output tasks.
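The actual modification is a C patch to NCURSES, but the dual-threaded design itself can be sketched in Java terms: a dedicated reader thread blocks on input while a single output thread drains a queue. The class, queue, and simulated keystrokes below are invented for illustration; they are not P2J code:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * Illustrative sketch of the dual-threaded idea: one thread is dedicated to
 * blocking input reads (standing in for getch()) while the other thread owns
 * all output. The simulated keystrokes are invented for this example.
 */
public class DualThreadSketch {
    private static final int EOF = -1;

    public static String run(int[] keys) throws InterruptedException {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        Thread reader = new Thread(() -> {
            for (int ch : keys) {
                queue.add(ch);   // stands in for a blocking getch() loop
            }
            queue.add(EOF);      // signal end of input
        });
        reader.start();
        StringBuilder screen = new StringBuilder();
        int ch;
        while ((ch = queue.take()) != EOF) {
            screen.append((char) ch); // all output happens on this thread
        }
        reader.join();
        return screen.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(new int[] { 'h', 'i' }));
    }
}
```

In the real client the same separation is enforced at the JNI/NCURSES level, which is why the thread-safety patch below is needed.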
One approach is to download ncurses-5.5.tar.gz from ftp://invisible-island.net/ncurses/ncurses-5.5.tar.gz. The same patches that work with v5.5 will also work with v5.6. There is a different patch for v5.7 (lib_getch_c_v5.7_20090512.patch instead of lib_getch_c_20060828.patch).
cd ~/projects
tar -xzvf ncurses-5.5.tar.gz
If you are using Ubuntu, obtaining the source code is as simple as:
cd ~/projects
apt-get source ncurses
Starting in Ubuntu Jaunty Jackalope (v9.04), ncurses 5.7 is the installed version.
Once you have the source project installed, here are the build/install instructions (this assumes that the above CHARVA setup is already done using the recommended directory structure, and that you substitute the minor version number 5, 6, or 7 for <version>):
cd ncurses-5.<version>
patch ncurses/base/lib_getch.c ../charva_mods/ncurses/<version_specific_lib_getch_patch_file>
patch include/curses.h.in ../charva_mods/ncurses/curses_h_in_20060828.patch
./configure --with-shared
make
su root
make install.libs
make install.progs
make install.man
cd /lib
cp /usr/lib/libncurses.so.5.<version> .
chmod +x libncurses.so.5.<version>
rm libncurses.so.5
ln -s libncurses.so.5.<version> libncurses.so.5
The last 5 steps (fixing up the /lib directory) may only be
needed on some distributions (e.g. SuSE). Red Hat does not have a
/lib/libncurses.so.5.
Please note that if you are using ncurses v5.5 or v5.6, you must use the patch file named lib_getch_c_20060828.patch for the <version_specific_lib_getch_patch_file>. If you are using ncurses v5.7, you MUST use the lib_getch_c_v5.7_20090512.patch instead of lib_getch_c_20060828.patch. However, the curses.h patch is the same for all 3 versions.
WARNING: if the source version you are using is not customized to your distribution, it is recommended that you DO NOT run "make install.data" (this would install a new terminfo database). It has been found that Linux distributions such as SuSE have a customized terminfo database which behaves better (at least in the xterm case) than the one included in NCURSES 5.5. If you do decide to install this, you really should back up /usr/share/terminfo first. However, if you use apt-get on Ubuntu, then you may safely use "make install", which installs everything.
6. Update the TERMINFO database.
vt320
The existing definition of vt320 in Linux may cause some confusion in certain terminal emulators:
- The TAB key can be recognized as F67! The reason is that the TAB key, which is Ctrl-I, is assigned twice, to both the ht and knxt attributes.
- The HOME key is recognized as the sequence of ESC-O, H keys and mapped to the GET, H key-functions instead of the HOME key-function.
- The END key is recognized as the sequence of ESC-O, F keys and mapped to the GET, F key-functions instead of the END key-function.
To fix it:
Decompile the vt320 terminfo entry into a temporary file:
infocmp vt320 > vt320.tmp
Edit the vt320.tmp file. Notice the terminfo location mentioned in the
first line. It is typically:
/usr/share/terminfo/v/vt320
Confirm that the following entry exists:
ht=^I,
If it does not exist, add it.
Locate the knxt attribute:
knxt=^I,
If it exists, delete it entirely from the containing line.
Find the following entry:
khome=
Change its definition to:
khome=\EOH,
Add the following entry (or change it to this one if it is present and
incorrect):
kend=\EOF,
Save the file. Recompile the definition:
cd /usr/share/terminfo/v
sudo mv vt320 vt320-bak
sudo tic -o/usr/share/terminfo vt320.tmp
Use the directory found in the decompiled file above if it is not
/usr/share/terminfo.
vt220
The existing definition of vt220 in Linux may cause some confusion in certain terminal emulators. The END key is recognized as the ESC-F combination and mapped to the FIND-NEXT key-function instead of the END key-function. Also, cursor visibility cannot be changed by default (the cursor is always ON).
To fix these:
Decompile the vt220 terminfo entry into a temporary file:
infocmp vt220 > vt220.tmp
Edit the vt220.tmp file. Notice the terminfo location mentioned in the
first line. It is typically:
/usr/share/terminfo/v/vt220
Add the following entry (or change it to this one if it is present and
incorrect):
kend=\E[F,
Add the following entry (or change it to this one if it is present and
incorrect), to allow cursor visibility change:
civis=\E[?25l,
Add the following entry (or change it to this one if it is present and
incorrect), to allow cursor visibility change:
cnorm=\E[?25h
Save the file. Recompile the definition:
cd /usr/share/terminfo/v
sudo mv vt220 vt220-bak
sudo tic -o/usr/share/terminfo vt220.tmp
Use the directory found in the decompiled file above if it is not
/usr/share/terminfo.
xterm
The existing definition of xterm in Linux may cause some confusion in certain terminal emulators. The backspace key can be recognized as delete (removing the character under the cursor instead of removing the character to the left of the cursor)! The problem exists because NCURSES translates the ^H character (which is normally thought of as backspace) into the "delete virtual key". Instead, the "kbs" definition must be set to \177 (normally thought of as delete), which NCURSES converts into a "backspace virtual key". This is counter-intuitive, but it does work. To fix it:
Decompile the xterm terminfo entry into a temporary file:
infocmp xterm > xterm.tmp
Edit the xterm.tmp file. Notice the terminfo location mentioned in the
first line. It is typically:
/usr/share/terminfo/x/xterm
Find the following entry:
kbs=
A correct definition is:
kbs=\177,
An incorrect definition is:
kbs=^H,
If it is incorrect, change it to be the \177 definition. Save the file.
Recompile the definition:
cd /usr/share/terminfo/x
sudo mv xterm xterm-bak
sudo tic -o/usr/share/terminfo xterm.tmp
Use the directory found in the decompiled file above if it is not
/usr/share/terminfo.
7. A modified version of Hibernate 3.0.5 is required for this project. The modified hibernate3.jar is already included as part of p2j/lib, but the following instructions can be used to duplicate this result if necessary (this assumes that the P2J project is installed in ~/projects/p2j):
Download the hibernate-3.0.5.zip or hibernate-3.0.5.tar.gz.
cd ~/projects
unzip hibernate-3.0.5
cd hibernate-3.0/grammar
patch hql-sql.g ../../p2j/lib/hibernate_3.0.5_hql-sql.g.20060906_patch
patch hql-sql.g ../../p2j/lib/hibernate_3.0.5_hql-sql.g.20061018_patch
patch sql-gen.g ../../p2j/lib/hibernate_3.0.5_sql-gen.g.20060906_patch
patch sql-gen.g ../../p2j/lib/hibernate_3.0.5_sql-gen.g.20061018_patch
patch sql-gen.g ../../p2j/lib/hibernate_3.0.5_sql-gen.g.20061101_patch
cd ..
patch src/org/hibernate/hql/ast/SqlGenerator.java ../p2j/lib/hibernate_3.0.5_SqlGenerator.java.20061024_patch
java -cp "lib/ant-launcher-1.6.3.jar" org.apache.tools.ant.launch.Launcher -lib lib
cp ../hibernate/hibernate3.jar ../p2j/lib
The P2J project does not yet support Hibernate 3.1.3 (or later). However, the same patches can be manually applied to the grammars in that version (the grammars are slightly different, so a manual patch is needed). The instructions (this assumes that hibernate-3.1.3.zip has been downloaded):
cd ~/projects
unzip hibernate-3.1.3
cd hibernate-3.1/grammar
<manually apply patches>
cd ..
java -cp "lib/ant-launcher-1.6.5.jar" org.apache.tools.ant.launch.Launcher -lib lib
cp hibernate3.jar ../p2j/lib
8. A modified version of Jetty 6.1.14 (Apache 2.0 License) is used in this project. The modified jetty-6.1.14.jar is included as part of p2j/lib, but the following instructions can be used to create the modified version (this assumes that the P2J project is installed in ~/projects/p2j and Jetty is installed as ~/projects/jetty-6.1.14). Jetty uses Maven for its build, so Maven needs to be installed first using the command below.
sudo apt-get install maven2
Download jetty-6.1.14.zip from http://dist.codehaus.org/jetty/jetty-6.1.14/ and unzip the package; a folder named jetty-6.1.14 will be created. Run the following commands to make sure Jetty can be built as is:
cd ~/projects/jetty-6.1.14
mvn install
If the following jar file can be found, the build can be considered successful:
jetty-6.1.14/lib/jetty-6.1.14.jar
The modification is done in the following source file, and the main change is to add a new member function, setSSLContext(SSLContext ctx), so that the Jetty code can use the existing SSL context object created in P2J code:
jetty-6.1.14/modules/jetty/src/main/java/org/mortbay/jetty/security/SslSocketConnector.java
Run the following patch command to modify the code:
cd ~/projects/p2j/lib
patch ../../jetty-6.1.14/modules/jetty/src/main/java/org/mortbay/jetty/security/SslSocketConnector.java jetty_6.1.14_SslSocketConnector.java.20090111_patch
Run the following command at folder ~/projects/jetty-6.1.14/ to build the modified version of jetty-6.1.14.jar:
mvn install
9. External JAR files included in this distribution (nothing needs to be done, these all exist already in p2j/lib):
- ANTLR 2.7.4
- Apache Jakarta
  - Schema namespace and configuration processing:
    - commons-beanutils-core.jar
    - commons-collections-2.1.1.jar
    - commons-digester.jar
    - commons-logging-1.0.4.jar
  - Encoded (as a string) raw variable processing:
- PostgreSQL
  - postgresql-8.1-408.jdbc3.jar
- H2 Database
- Hibernate
  - asm.jar
  - c3p0-0.9.1.jar
  - cglib-2.1_3.jar
  - dom4j-1.6.jar
  - ehcache-1.1.jar
  - hibernate3.jar (modified as noted above)
  - jta.jar
- Jetty
  - jetty-6.1.14.jar (modified as noted above)
  - jetty-util-6.1.14.jar
  - servlet-api-2.5-6.1.14.jar
- CHARVA
10. The Admin Client, implemented as an applet, has some additional requirements. Due to some static data in the SecurityManager and Dispatcher classes, every client session has to run in a separate JVM. See Appendix G for important installation notes. The prerequisites are:
- Sun's Next Generation Java plugin is required. It comes with JDK version 1.6.0 update 10 and later, so the JDK should be upgraded.
- Since this version of the plugin is compatible only with the most recent versions of browsers, the browser should be at a compatible level. Firefox 3 is acceptable, but not Firefox 2.
Installation
The following files are part of the P2J distribution. These examples are for Milestone 12 (M12), which was delivered October 1, 2007.
File | Purpose
p2j_20071001.zip | The P2J source code, documentation and a working build.
To do a full install, run the following commands on the target system (Linux is assumed):
mkdir /gc/p2j
cd /gc
unzip p2j_20071001.zip
cd ..
export CLASSPATH=/gc/p2j/build/lib/p2j.jar:.:
See: P2J Development Environment Configuration
Most importantly, one must first properly set the CLASSPATH (and export it if running bash on Linux or sh on Unix). Assuming that you are NOT using the p2j.jar file and p2j is installed in /gc:
export CLASSPATH=/gc/p2j/lib/antlr.jar:/gc/p2j/lib/commons-digester.jar:/gc/p2j/lib/commons-collections-2.1.1.jar:/gc/p2j/lib/commons-logging-1.0.4.jar:/gc/p2j/lib/commons-beanutils-core.jar:/gc/p2j/lib/commons-codec-1.3.jar:/gc/p2j/lib/c3p0-0.9.1.jar:/gc/p2j/lib/cglib-2.1_3.jar:/gc/p2j/lib/dom4j-1.6.jar:/gc/p2j/lib/ehcache-1.1.jar:/gc/p2j/lib/hibernate3.jar:/gc/p2j/lib/postgresql-8.1-408.jdbc3.jar:/gc/p2j/lib/jta.jar:/gc/p2j/lib/charva.jar:.:
If you are using the generated p2j.jar file, then it is a bit simpler:
export CLASSPATH=/gc/p2j/build/lib/p2j.jar:.:
When running the P2J tools (for conversion or reporting...), it is
important to have the current directory set to the "root" of the
project. This is also called the "P2J_HOME". This
allows
configuration files to be found in the $P2J_HOME/cfg directory.
The P2J_HOME System Property
The P2J_HOME system property allows a large number of the P2J programs to find a directory hierarchy which they need for configuration file processing and other purposes. The system property is defined at the command line when running an application as follows:
java -DP2J_HOME=<some_directory> <application_class> [<parameters>]
For instance:
java -DP2J_HOME=/gc/p2j com.goldencode.p2j.schema.InspectorHarness namespace
These programs expect, at minimum:
- that <some_directory> exists in the local file system;
- that <some_directory> is in fact a directory;
- and that <some_directory> contains a cfg subdirectory which in turn contains XML configuration files.
Additional subdirectories may be required, depending on certain configuration settings. The individual program descriptions above will mention these additional requirements as needed.
As long as the current
directory is set to the P2J_HOME, the -D option is not needed on the
command line for the P2J tools.
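The minimum checks listed above can be sketched as follows. This is an illustrative helper, not P2J code; the method name is invented:

```java
import java.io.File;

public class HomeCheck {
    /** Invented helper: the minimum P2J_HOME checks described above. */
    public static boolean isValidHome(String path) {
        File home = new File(path);
        return home.exists()                        // exists in the local file system
            && home.isDirectory()                   // is in fact a directory
            && new File(home, "cfg").isDirectory(); // contains a cfg subdirectory
    }

    public static void main(String[] args) {
        // Build a throwaway directory layout that satisfies the checks.
        File demo = new File(System.getProperty("java.io.tmpdir"), "p2j_home_demo");
        new File(demo, "cfg").mkdirs();
        System.out.println(isValidHome(demo.getPath()));      // true after mkdirs
        System.out.println(isValidHome("/no/such/p2j/home")); // false
    }
}
```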
How to Build P2J
If p2j is installed in the directory /gc :
cd /gc/p2j/
ant all
This will create the subdirectory structure, create the full set of generated files, compile all Java files, create the jar file and generate the javadoc.
Other useful targets:
Targets | Use
ant clean | removes all files created during the build process
ant | this is the default target, which just generates files and compiles (note that some dependencies of generated ANTLR files are not properly handled by the build yet, so if any problems are encountered, make sure you run a clean first!)
ant javadoc | creates the javadoc, generating files and compiling classes if necessary
Quick Start (Converting a Simple Testcase and Running the Result)
Once the P2J environment is properly installed and
configured,
you are ready to run your first conversion. This section
describes the steps by which one can convert a Progress 4GL testcase to
Java (and then run the result in the P2J environment).
Included in the project is a directory with a large variety of
testcases, most of which are "standalone". This means that
there
is a single external procedure file (.p) and it does not call any other
external procedures. In other words there are no dependencies
on
other procedures. P2J can handle the most complex procedure
dependencies, but it is easiest to start with the simple standalone
case. These instructions will try to document the differences
needed for the more complex case, but it will primarily focus on the
simple case.
The directory containing the testcases is p2j/testcases/uast.
Most importantly, this directory also contains a "project
configuration" that is valid for converting the contained source files.
This means that the minimum necessary configuration is
available
to allow the P2J conversion process to operate properly.
These
instructions assume that the 4GL source code to be converted resides in
this pre-existing "project" directory.
For a good example of a 4GL procedure that has database access, user interface processing and some business logic, please see the file p2j/testcases/uast/primes.p. The rest of these instructions will assume that this file is the source that is being converted.
This process is useful for testing your installation and for
looking at a sample conversion. But in addition, this
procedure
is very useful for debugging converted Progress code (when the problem
is due to the P2J conversion process or the P2J runtime).
Often
when isolating errant behavior in converted Progress code, it
is
desirable to develop the most simple Progress 4GL test case
that
demonstrates the correct behavior. The small test case may then be
converted and launched in the P2J runtime to demonstrate the difference
and help to determine how the conversion or runtime needs to be altered.
These instructions assume that you are running on Linux and using the
bash shell.
Instructions:
1. Set critical environment variables.
The P2J environment variable must be set to the path of the location in the filesystem in which the P2J environment is installed. This is the P2J directory itself, as opposed to the "project" directory in which the 4GL source code exists. Strictly speaking, this environment variable is not directly used by any of the P2J code or tools, but it is referenced throughout the following instructions for convenience.
The CLASSPATH must be set to include the p2j.jar file that is the core technology for conversion (and can also be used at runtime).
Option A: Use a script such as the following and "source it" into your current environment.
#!/bin/bash
export P2J=/path/to/p2j
export CLASSPATH=$P2J/build/lib/p2j.jar
Option B: Modify your ~/.bash_profile or ~/.bashrc to include the necessary export statements.
Option C: Manually execute the export statements at the command line.
2. Change your current directory into the "project" directory.
cd $P2J/testcases/uast
This is the directory which is the root for the configuration, schemas, 4GL source code... that is to be converted. In this example, it already exists. In this case, the source files happen to reside in the "project root", but that is neither desirable nor common. If you had a different project already set up, you would switch to that directory instead.
3. Run the automated conversion process.
primes.p is the testcase being converted. It happens to live in the "project root" ($P2J/testcases/uast/primes.p). That means that the relative pathname is simply primes.p. If you have a different filename or a relative path, you will need to know exactly the path to the source to be converted.
If your testcase uses the database (whether it be a temp-table or the regular database) in any way, you will need to run the following command:
java -Xmx512m com.goldencode.p2j.convert.ConversionDriver -d2 f2+m0+cb {relative/path/to/your_test_case_here.p}
If your testcase only has UI + business logic and has NO database access at all, then you can run this command (which will be faster):
java -Xmx512m com.goldencode.p2j.convert.ConversionDriver -d2 f0+cb {relative/path/to/your_test_case_here.p}
So in this specific case, the exact command would be (primes.p uses temp-tables):
java -Xmx512m com.goldencode.p2j.convert.ConversionDriver -d2 f2+m0+cb primes.p
If you have a more complex application with multiple source files, you can pass an explicit list on the command line. There are also command options to specify a directory and file specification (with wildcards) that specifies a large group of files at once. This is the ConversionDriver -s option.
4. Find the Java class name of the converted program which you want to run.
In this example, you will find the source code in $P2J/src/com/goldencode/testcases. In the case of a different project, the root path in which the Java source code is created will be specified via the combination of the output-root and pkgroot configuration variables in the <project_root_dir>/cfg/p2j.cfg.xml main project configuration file. From the $P2J/testcases/uast/cfg/p2j.cfg.xml:
<parameter name="output-root" value="${P2J_HOME}/../../src" />
<parameter name="pkgroot" value="com.goldencode.testcases" />
To find the class name, you can first look at the end of the conversion
output (which went to the terminal in step 3 above). Here is
an
example:
...
------------------------------------------------------------------------------
Generate Java Source
Business Logic and Frames Classes
------------------------------------------------------------------------------
./PrimesEditRangeRangeParms
./PrimesEditRangeRangeParmsDef
./PrimesFrame0
./PrimesFrame0Def
./primes.p.jast
Elapsed job
time: 00:00:01.308
------------------------------------------------------------------------------
Elapsed job
time: 00:00:20
In this case, the file we are converting is primes.p so the
file we are interested in above is primes.p.jast.
The .jast files are Java Abstract Syntax Trees (Java ASTs).
These are tree structured definitions (of the converted Java
program) which have been generated by the conversion process.
It
is these files that get "anti-parsed" out to actual .java source code.
To find the class name that is associated with this file,
open
the .jast:
less primes.p.jast
Approximately 10 lines down from the top, you will see this data that
represents part of the class definition:
<ast col="0" id="150323855753" line="0" text="Primes"
type="KW_CLASS">
<annotation datatype="java.lang.String" key="access"
value="public"/>
<annotation datatype="java.lang.String" key="pkgname"
value="com.goldencode.testcases"/>
To determine the class name, you would take the "value" attribute of the annotation with the key="pkgname". That gives us the package of com.goldencode.testcases. Then add a "." separator followed by the value of the "text" attribute of the AST node where type="KW_CLASS". This is the specific class name of "Primes". So the complete fully qualified class name of the converted file is "com.goldencode.testcases.Primes". Write this fully qualified name down for later use.
If you wish to find this
file in the file system, the "." separators have to be
converted to "/"
separators and a ".java" must be added to the end. The proper
filename in the file system is com/goldencode/testcases/Primes.java which is found in $P2J/src/.
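The two lookups described above (join the pkgname value and the KW_CLASS text with a ".", then map "." to "/" and append ".java") can be sketched as follows. This is an illustrative helper, not part of P2J; the class and method names are invented:

```java
/** Invented helper showing the name derivations described in the text. */
public class JastNameHelper {
    /** Join the pkgname annotation value and the KW_CLASS text into a
        fully qualified class name. */
    public static String toFqcn(String pkgname, String className) {
        return pkgname + "." + className;
    }

    /** Map a fully qualified class name to its .java path (relative to
        the source output root). */
    public static String toSourcePath(String fqcn) {
        return fqcn.replace('.', '/') + ".java";
    }

    public static void main(String[] args) {
        String fqcn = toFqcn("com.goldencode.testcases", "Primes");
        System.out.println(fqcn);               // com.goldencode.testcases.Primes
        System.out.println(toSourcePath(fqcn)); // com/goldencode/testcases/Primes.java
    }
}
```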
5. Change directory to
P2J.
cd $P2J
This may not be needed in the case that you are using a custom project
directory. In such cases, there is usually an ANT build
script in
the project root directory. In the case of the $P2J/testcases/uast,
the build is done as part of the main P2J build since the source files
are generated into that directory hierarchy.
6. Rebuild your project.
Run the "jar" task from the ANT build.
ant jar
At this point the converted com/goldencode/testcases/Primes.class
(and all the other supporting classes and inner classes, from the UI
and database too) will be included in the $P2J/build/lib/p2j.jar.
If you are in a different project root, then whatever jar
files
were created by the ANT build script should now contain the
converted classes.
7. Change directory to the location of the P2J server.
cd $P2J/testcases/simple/server
There is a "simple" runtime configuration that allows a server to be
started to execute converted testcases. It exists in the
above
directory along with a script to start the server.
8. Edit the directory to make your converted program the
default entry point for the server.
The directory.xml
file
contains the core configuration of the P2J server. One of the
things that is configured there is the default "entry point" program
for P2J clients when they connect to the server. If you have converted
multiple files, you still must know which one is the main entry point
for the application. That must be specified in the directory.
Use a text editor to edit the directory.xml
file. In it you will find this section:
<node class="string" name="p2j-entry">
<node-attribute name="value"
value="com.goldencode.testcases.[insert_test_case_name_here].execute"
/>
</node>
The value="" portion needs to specify the fully qualified name of the
Java class name with the .execute
appended to the end. That is the name of the converted method
that corresponds to the external procedure of the
original file.
In the case of a program that resides in $P2J/testcases/uast,
such as primes.p,
the resulting class name (Primes)
can simply be replaced where the text says "[insert_test_case_name_here]".
The resulting text should look like this for our example:
<node class="string" name="p2j-entry">
<node-attribute name="value"
value="com.goldencode.testcases.Primes.execute" />
</node>
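The same entry can of course be read back programmatically. The sketch below is illustrative only (it is not a P2J utility) and uses the JDK's DOM API against a fragment shaped like the directory entry shown above:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

/** Invented helper: read a p2j-entry value from a directory.xml fragment. */
public class EntryPointHelper {
    public static String readEntry(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        // The fragment has a single node-attribute child; its "value"
        // attribute holds the entry point name.
        Element attr = (Element) doc.getElementsByTagName("node-attribute").item(0);
        return attr.getAttribute("value");
    }

    public static void main(String[] args) throws Exception {
        String xml =
            "<node class=\"string\" name=\"p2j-entry\">" +
            "  <node-attribute name=\"value\"" +
            "    value=\"com.goldencode.testcases.Primes.execute\" />" +
            "</node>";
        System.out.println(readEntry(xml));
    }
}
```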
There are 2 other changes which may be needed, depending on the use (or lack thereof) of database support. By default, the directory is configured to load the "persistence" layer of the P2J runtime (that is, the portion that provides database support). By default, temp-tables are also enabled.
At this time, if no database is used at all in the testcases,
you will
need to "comment out" (encapsulate the specified section of the file
inside a comment like <!-- commented
stuff --> ) the "Persistence" class in the startup
section:
<node class="strings" name="startup">
<node-attribute name="values"
value="com.goldencode.p2j.util.SharedVariableManager" />
<node-attribute name="values"
value="com.goldencode.p2j.util.UnnamedStreams" />
<node-attribute name="values"
value="com.goldencode.p2j.ui.LogicalTerminal" />
<node-attribute name="values"
value="com.goldencode.p2j.persist.Persistence" />
<node-attribute name="values"
value="com.goldencode.p2j.util.AccumulatorManager" />
</node>
The result should look like this:
<node class="strings" name="startup">
<node-attribute name="values"
value="com.goldencode.p2j.util.SharedVariableManager" />
<node-attribute name="values"
value="com.goldencode.p2j.util.UnnamedStreams" />
<node-attribute name="values"
value="com.goldencode.p2j.ui.LogicalTerminal" />
<!--
<node-attribute name="values"
value="com.goldencode.p2j.persist.Persistence" />
-->
<node-attribute name="values"
value="com.goldencode.p2j.util.AccumulatorManager" />
</node>
This causes the persistence layer to NOT load during startup and
disables all database support.
At this time, if you have permanent database tables in use, but no
temp-tables in use in the testcases, you will need to "comment out" the
following section (inside the "database" section of the directory):
<node class="container" name="_temp">
...snip...
</node>
Which would look like this:
<!--
<node class="container" name="_temp">
...snip...
</node>
-->
That ensures that the temp-table support is not enabled.
Some of the provided testcases use the "p2j_test" database. To enable
that non-temp-table database, you will have to "uncomment" that section
(by default it is commented out already). Find the section
under
the "database" node, which looks like this:
<!--
<node class="container" name="p2j_test">
...snip...
</node>
-->
And remove the comments from it so that it looks like this:
<node class="container" name="p2j_test">
...snip...
</node>
Note that unless you have a p2j_test
PostgreSQL database on your localhost listening on the default
port, you will also need to modify the JDBC connection URL (with
parameters that are specific to your system). The URL looks like
this by default:
<node class="string" name="url">
<node-attribute name="value"
value="jdbc:postgresql://localhost/p2j_test" />
</node>
How to edit this is beyond the scope of this quick start. The
JDBC URL follows the standard JDBC conventions (for PostgreSQL, the
general form is jdbc:postgresql://<host>:<port>/<database>).
9. Start the P2J server.
Start the server on the default port:
./server.sh
To look at other options (including how to enable debugging):
./server.sh -?
10. Open a new bash shell.
The server has now "taken over" your previous shell. Open a new
one using whatever facility you are most comfortable with.
11. Set critical
environment variables.
Duplicate the actions of step #1 to ensure that the new shell has the
right configuration.
12. Change directory to
the location of the P2J client.
cd
$P2J/testcases/simple/client
A "simple" runtime configuration is provided here that starts a
client designed to connect to the "simple" server.
13. Check if the server started properly.
Watch the server's primary log file to confirm that it starts up
properly:
tail -f
../server/server.log
The server is fully started when you see text similar to this:
[07/18/2008 15:19:18
EDT] (SessionManager.listen():INFO) {00000000:00000001:standard} Server
ready
14. Run the P2J client.
./client.sh
No login is needed. This simple server's directory is set up with
"open" security, which is how Progress is *always* set up. In other
words, in this configuration the application is solely responsible for
implementing its own authentication, authorization, access control and
auditing. This is useful for test cases but it is not normally used in
a production environment.
15.
The converted program should run as expected.
If there are any problems, you can check the server/server.log or
server/stdout.log (for server side issues) and the client/stderr.log
(for any catastrophic client-side issues).
Preparing a Project for Conversion
Schema Loader
Before the source code can be scanned, it is necessary to import
Progress schema data, such that the SchemaDictionary
can use this information for schema symbol resolution during source
code parsing. The "SchemaLoader" program creates an XML
document
containing schema
information for a Progress database.
The program requires an entry in the p2j.cfg.xml configuration
file (found in the cfg subdirectory of the path defined
by the P2J_HOME system property). This entry must name the Progress
schema export file (i.e., *.df file) to use for its import, the
XML output file to which the schema data will be written, and a
database name to substitute for the '?' found in the
Progress export file. An example follows; the relevant portion is the
<schema> element:
<?xml version="1.0"?>
<!-- P2J main configuration -->
<cfg>
<global>
<parameter name="propath"
value="${P2J_HOME}/src/syman:.:" />
</global>
<schema metadata="standard" >
<namespace
name="standard"
importFile="data/standard_91c.df"
xmlFile="data/namespace/standard_91c.dict" />
</schema>
</cfg>
Note that the export and output file paths in this example are relative
to the P2J_HOME path, so the additional data and data/namespace
subdirectories must be created in order for this
program to run properly. Additionally, the following file,
named schema-dict.rul.xml, must reside in the cfg subdirectory:
<?xml version="1.0"?>
<!-- Digester rules for schema dictionary configuration
-->
<digester-rules>
<pattern value="cfg">
<pattern value="schema">
<object-create-rule
classname="com.goldencode.p2j.schema.SchemaConfig" />
<set-properties-rule />
<pattern
value="namespace">
<object-create-rule
classname="com.goldencode.p2j.schema.SchemaConfig$NsConfig" />
<set-properties-rule />
<set-next-rule
methodname="addNamespace" />
</pattern>
</pattern>
</pattern>
</digester-rules>
To run the program:
java -DP2J_HOME=<home_directory>
com.goldencode.p2j.schema.SchemaLoader [-q]
where -q optionally turns on quiet mode, which reduces,
but does not eliminate, the output printed to the console.
Since it is legal for a Progress schema dump export file to contain
snippets of invalid Progress source code for certain directives, such
as VALEXP
expressions, it is possible to see
some error
messages as the schema data is processed on import. The
following
is an example of the console output when running the program with the
configuration described above:
java -DP2J_HOME=/gc/p2j
com.goldencode.p2j.schema.SchemaLoader
Importing 'standard_91c.df' for schema 'standard'...
Persisted schema 'standard' to 'standard_91c.schema'
line 1:10: expecting LPARENS, found 'famaster'
Error parsing VALEXP property: can-find famaster where
famast.fa-num = facost.fa-num
No AST generated by parser
line 1:10: expecting LPARENS, found 'famaster'
Error parsing VALEXP property: can-find famaster where
famast.fa-num = fadepr.fa-num
No AST generated by parser
Error in .#1:12 ==> Include file "depmeth.i" not found
Error parsing VALEXP property: {depmeth.i}
Preprocessor done with 1 error(s).
line 1:10: unexpected token: bonus
line 1:1: unexpected token: bonus-dep-$
line 1:12: unexpected token: ,change-code
Error in .#1:25 ==> Include file "AUD" not found
Error parsing VALEXP property: index{"AUD",change-code}
<> 0
Preprocessor done with 1 error(s).
line 1:12: unexpected token: ,change-code
Error in .#1:25 ==> Include file "AUD" not found
Error parsing VALEXP property: index{"AUD",change-code}
<> 0
Preprocessor done with 1 error(s).
Running an Initial Source Code "Scan"
The "ScanDriver" tool is used to process a directory or a set of
directories at once. The user provides the starting directory and a
specification of which files to process. The driver makes a list of the
matching files (it recurses into all subdirectories) and then, for each
file, it preprocesses the file and runs the parser. The preprocessor
output is stored in a file with ".cache" appended to the name.
Subsequent runs of the scanner will use any .cache file that already
exists, which greatly reduces the overall runtime. The parser
output is stored in a file with ".parser" appended to the name,
although each run of the scanner will overwrite these files, so you
must save them manually if you wish to compare the output between runs.
Starting with M4, this tool also supports persistence of the resulting
ASTs. Each file's AST is stored into an XML file which can be
reread back into an in-memory AST much faster than the parser can
create an AST from the preprocessed source. For more details
on
the AST persistence, see the com/goldencode/p2j/uast/AstPersister.java.
It is important to note that all forms of pattern engine processing
against the source tree really should operate upon a set of persisted
AST files that are the result of a project-wide scan. In
addition, for some tools to work properly, one must run "post-parse fixup"
processing.
Syntax:
java -Xmx256m
-DP2J_HOME=<home_directory>
com.goldencode.p2j.uast.ScanDriver [-qclpxr] <directory>
<filespec>
Example :
cd /gc/p2j/src/syman
java -Xmx256m
-DP2J_HOME=/gc/p2j
com.goldencode.p2j.uast.ScanDriver -qclpx . "*.[pPwW]"
2>&1 |
tee
err.log
The options (-qclpx)
that have been specified will provide the following:
q -> runs the preprocessor
c -> saves the preprocessor output to a ".cache" file and if it
exists (and has a later timestamp than the source file) on next
invocation, bypasses the preprocessor and loads the cache instead
l -> saves off the lexer output into a ".lexer" file
p -> saves off the parser output into a ".parser" file
x -> persists the AST into an XML form and if it exists on next
invocation, reads the AST from this file rather than re-parsing
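The cache-bypass rule behind the c option amounts to a timestamp comparison; a minimal sketch (illustrative only, the real ScanDriver is Java):

```python
import os

def use_cache(source_path, cache_path):
    # Mirror of the ScanDriver 'c' option: reuse the preprocessor
    # .cache file only when it exists and is newer than the source.
    if not os.path.exists(cache_path):
        return False
    return os.path.getmtime(cache_path) > os.path.getmtime(source_path)
```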
In addition, at the end of the run of the ScanDriver a registry.xml
file is written (into $P2J_HOME/cfg/registry.xml) which stores some
persistent ID information that allows unique identification of any AST
node in any file project-wide. For more details on the AST
registry, see the com/goldencode/p2j/uast/AstRegistry.java.
Take care to preserve this file; if it is corrupted or lost, it can be
rebuilt by the ScanDriver (option -qcxr) from a full set of persisted
AST XML files. For more details on the
AST persistence, see the com/goldencode/p2j/uast/ScanDriver.java.
To understand the backing code that ScanDriver uses to obtain an AST,
see the com/goldencode/p2j/uast/AstGenerator.java.
This is a long running process (it can take 1-2 hours on a large
project). Do not kill this while it is running. If you do so, you
will have a partially built set of ASTs and no registry. The best
resolution for this issue is to rescan. Use
-Xmx512m to avoid an OutOfMemoryError.
Running Post-Parse Fixups
The ProgressParser is implemented as a single-pass processor of the
lexed token stream. Due to limitations of this approach and
limitations in ANTLR, the resulting ASTs generated in the parser
sometimes need some modifications before certain processing will work
properly. These modifications are known as post-parse fixups.
Once a full scan of the source code is complete (with the -x option to
persist ASTs), one must run a pattern engine pipeline called
"fixups/post_parse_fixups" against the entire source tree.
Syntax:
java
-DP2J_HOME=<home_directory>
com.goldencode.p2j.pattern.PatternEngine [-dq] <pipeline>
[directory] [filespec]
cd /gc/p2j/src/syman
java -Xmx256m -DP2J_HOME=/gc/p2j
com.goldencode.p2j.pattern.PatternEngine fixups/post_parse_fixups
src/syman
"*.[pPwW].ast"
Note that this same command can be run with a current directory that is
$P2J_HOME. In either case, you must not use "." for the directory
specification. Here we use src/syman, and this is required even if
your current directory is /gc/p2j/src/syman. This is needed because
the P2J registry has "normalized" filenames that are relative to
P2J_HOME.
This will make a list of all source files in the application,
then it will create or load (if there is a persisted AST) an AST for
each one in turn. For each AST, the fixups/post_parse_fixups
pipeline will be executed. This pipeline creates permanent
cross-references between all variable references and the original
locations where the variable was defined. In addition, some
RUN
statements have mis-categorized FILENAME nodes which really should be
internal procedure nodes (INT_PROC type). This fixup
processing
resolves these issues.
Optionally, the original AST can be backed up before it is overwritten
with the fixed version. If one adds backup="true" to the
command line (in front of fixups/post_parse_fixups), the
original file is renamed to the AST filename + ".original" before the
new AST is persisted to the AST filename. By default this backup does
not occur and the new AST is simply persisted.
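The optional backup behavior can be sketched as follows (a sketch only; the real processing persists ASTs through the pattern engine, and the no-backup default matches the description above):

```python
import os

def persist_with_backup(ast_path, new_ast_text, backup=False):
    # Sketch of the optional post-parse-fixups backup: when enabled,
    # the existing AST file is first renamed to '<name>.original';
    # by default it is simply overwritten.
    if backup and os.path.exists(ast_path):
        os.replace(ast_path, ast_path + ".original")
    with open(ast_path, "w") as f:
        f.write(new_ast_text)
```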
All future processing will use the fixed ASTs. Note that this
process MUST be rerun every time the AST files are regenerated (every
time a "scanning" run is made).
This is a long running
process (it can
take over an hour on a large project). Do not kill this while
it
is
running. If you do so, you will have a partially modified set
of
ASTs. The best resolution for this issue is to rescan. Use
-Xmx256m to avoid an OutOfMemoryError.
Re-scanning the Source Tree
Any time the preprocessor, lexer or parser changes, or some other
modification is made to the tools that would cause structurally
different source ASTs to be generated, the source tree will need to be
"re-scanned" or re-generated. This is also useful in the case where
modifications have been made that are not easy or possible to reverse.
The ScanDriver (which uses the AstGenerator class for its work) assumes
any *.[pPwW].cache or *.[pPwW].ast files it finds are valid and skips
the preprocessing or parsing step, respectively, associated with those
files. This means that simply running the
ScanDriver again
does not regenerate ASTs. In order to regenerate ASTs, you
must
delete the *.[pPwW].ast files and
then rerun the ScanDriver. If this is a preprocessor change,
you
must
also delete the *.[pPwW].cache files before running ScanDriver.
The process is simple:
- If you wish, back up your current scan results.
The
following files need to be preserved:
- $P2J_HOME/cfg/registry.xml
- $P2J_HOME/src/syman/*.[pPwW].{cache,lexer,parser,ast,ast.original,schema}
- Delete the
following files (the
others will be overwritten):
- $P2J_HOME/cfg/registry.xml
- $P2J_HOME/src/syman/*.[pPwW].{ast,ast.original,schema}
- Note that the .ast.original files are not created by
default.
- Run the scan processing per the above section.
- Re-run the post-parse
fixups.
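The deletion step can be scripted; a hedged sketch (the patterns are taken from the list above, but the project root and exact pattern set should be adjusted to your project):

```python
import fnmatch
import os

def delete_matching(root, patterns):
    # Recursively remove files matching any of the given patterns,
    # e.g. ['*.ast', '*.ast.original', '*.schema'] before a re-scan.
    # fnmatchcase keeps matching case-sensitive, like the filespecs.
    removed = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            if any(fnmatch.fnmatchcase(name, p) for p in patterns):
                path = os.path.join(dirpath, name)
                os.remove(path)
                removed.append(path)
    return removed
```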
Conversion Tools
Statistics and Reports
Starting with M5 (April 1, 2005), an enhanced version of the statistics
and reporting function is available. This allows one to run
arbitrary, user-defined queries across a single file, a set of files or
the entire project. The output of these user-defined conditions is
written to fully hyperlinked HTML files for review.
It is important that you have already run the ScanDriver
on the
files that you intend to report on. Usually, you run the
ScanDriver once on the entire project and then all files have an
associated persisted AST. In addition, the AST registry would
be
fully populated for the project. With this as a starting
point,
reports can be readily generated using a command line tool.
The statistics and report processing is a "plugin" to the pattern
engine which is implemented by the
com/goldencode/p2j/pattern/PatternEngine.java class. This
engine
is also a command line driver program that can be used to invoke the
statistics and reporting.
The $P2J_HOME/p2j/rules/reports directory contains some sample reports
that can be
referenced using this driver. The general idea is that each
.xml
file defines a "pipeline" of "rule sets" that can be used to inspect,
convert, transform or otherwise process one or more ASTs. The
pattern processing engine handles obtaining the ASTs, providing tree
walking services as needed, reading the directives in a pipeline or
rule set and then processing the rule sets on each node of the
tree. Based on user-defined expressions one can record
matches in
one or more "statistics" and then at the end of the rule set, these
statistics are used to write reports into text files.
In particular, to understand the statistics processing one should look
at the com/goldencode/p2j/util/StatsWorker.java class.
Example command line (assumes that there is a ./reports subdirectory in
your current directory in which the output will be created):
java -Xmx256m
-DP2J_HOME=/gc/p2j
com.goldencode.p2j.pattern.PatternEngine condition="\"expr('statements')\""
filePrefix="'stmt'" sumTitle="'Language
Statements'" dumpType="'simple'" outputDir="'reports'" reports/rpt_template
src/syman/train2 "*.[pPwW].ast"
This example will process all .p, .P, .w and .W files in the
$P2J_HOME/src/syman/train2 directory. It will load or create an
AST for each file in turn and it will apply the pipeline of rule sets
defined in the reports/rpt_template.xml file, which must reside in the
PATPATH (a global configuration parameter in $P2J_HOME/cfg/p2j.cfg.xml)
that usually points to the current directory, $P2J_HOME/pattern and the
$P2J_HOME/p2j/rules directory.
Many command line variable substitutions are provided in the generic
template to define runtime-replaceable parameters that can be read and
used by the rule set. These variable substitutions must be inserted
after the Java class name and after any pattern engine option flags,
but before the report name. In particular, the following are provided:
Definition
|
Default Value
|
Use
|
condition="\"<expression>\"" |
n/a
|
<expression>
is a valid
user-defined expression that represents the condition being tested
(matched). Note that the extra set of double quotes is needed to ensure
that the rule set parses this as a string rather than as a boolean
expression. At runtime, the string will be converted into a
compiled boolean expression and executed.
|
filePrefix="'<filename>'" |
n/a
|
<filename>
is text to
prepend to the auto-generated detailed reports to allow easy
identification. Note that the extra set of single quotes is
needed to ensure that the
rule set parses this as a string rather than as an unidentified symbol.
|
sumTitle="'<title>'" |
"Summary Report"
|
<title>
is the text used
in the header of the summary report. Note that the extra set
of
single quotes is needed to ensure that the
rule set parses this as a string rather than as an unidentified symbol.
|
dumpType="'<type>'" |
"parser"
|
"parser" is the
type of text
that will be reported for each match, other options are
"simple",
"simplePlus"
and "lisp". Note that the extra set of single quotes is
needed to
ensure that the
rule set parses this as a string rather than as an unidentified symbol.
|
outputDir="'<directory>'"
|
"."
|
The directory in
which to
generate all output. All precreated report templates accept
this
variable. Note that the extra set of single quotes is needed
to
ensure that the
rule set parses this as a string rather than as an unidentified symbol.
|
dumpAtAncestor=<boolean>
|
false
|
If set to true, the
text to be
dumped upon matching with the "condition" will be at the ancestor up
"level" number of parents. See "level" variable.
|
level=<num>
|
1
|
The ancestor at
which to dump
the text. If set to 0, it dumps the current node.
If 1, the
parent (and all children) are dumped. If 2, the grandparent
tree
is dumped...
|
sortOrder="'<order>'"
|
"alpha"
|
<order>
specifies the sort
order of the
summary report. Options are "alpha" for alphanumeric
ascending
order, "insertion" for the order in which the statistics were added or
"matches" for descending order from most to fewest number of
matches. Note that the extra set of single quotes is needed
to
ensure that the
rule set parses this as a string rather than as an unidentified symbol.
|
userMultiplex=<boolean>
|
false
|
If true, this
enables a
user-controlled multiplexing mode where the basis for categorizing the
matches into common "buckets" or groups is a user-specified string
expression defined in the variable multiplexStringExpr
(see below).
|
multiplexStringExpr="'<expression>'" |
n/a
|
<expression>
is a valid
user-defined expression that returns a string that uniquely
identifies/categorizes a match. All nodes that match AND for
which this expression generates the same exact string will be counted
in the same category. Each category gets a separate line in
the
summary report and a separate HTML detailed output report.
This
is useful to categorize nodes based on something like a lowercased
version of their text rather than the token type, which is how it would
be done if userMultiplex
is false.
Note that the extra set of double quotes is needed to ensure
that the rule set parses this as a string rather than as an
expression. At runtime, the string will be converted into a
compiled string expression and executed. |
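The user-multiplexing behavior described above amounts to bucketing matches by a key string; a minimal sketch (plain dicts and lambdas stand in for AST nodes and compiled expressions):

```python
def multiplex(nodes, condition, key):
    # Group matching nodes into buckets keyed by a user-supplied
    # string, mirroring userMultiplex / multiplexStringExpr: every
    # match whose key string is identical lands in the same bucket,
    # and each bucket becomes one summary line plus one detail report.
    buckets = {}
    for node in nodes:
        if condition(node):
            buckets.setdefault(key(node), []).append(node)
    return buckets
```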
The first time a report is run against the entire project (or any
significant subset like an entire subdirectory), text and HTML versions
of the source files (original source, preprocessor cache file, lexer
output and parser output) are saved off under the
"outputDir".
This process will take quite a while (30 minutes), but once these files
have been copied/generated they won't be re-generated until the source
files change.
To trigger all this processing once, run the following command:
java -Xmx256m
-DP2J_HOME=/gc/p2j
com.goldencode.p2j.pattern.PatternEngine outputDir="'reports'"
reports/rpt_gen_file_targets
src/syman "*.[pPwW].ast"
Interesting reports to run (you may want to try replacing the
parameter values with your own):
java -Xmx256m
-DP2J_HOME=/gc/p2j
com.goldencode.p2j.pattern.PatternEngine outputDir="'reports'" varname="'t-save-file'"
vartype="var_char"
reports/generic_variable_usage_report
src/syman "*.[pPwW].ast"
java -Xmx256m
-DP2J_HOME=/gc/p2j
com.goldencode.p2j.pattern.PatternEngine outputDir="'reports'" reports/rpt_external_file_linkage
src/syman "*.[pPwW].ast"
java -Xmx256m
-DP2J_HOME=/gc/p2j
com.goldencode.p2j.pattern.PatternEngine outputDir="'reports'"
sumTitle="'Language Statements'" filePrefix="'statements'"
condition="\"evalLib('statements')\"" dumpType="'simple'"
reports/rpt_template src/syman "*.[pPwW].ast"
java -Xmx256m -DP2J_HOME=/gc/p2j
com.goldencode.p2j.pattern.PatternEngine outputDir="'reports'"
sumTitle="'External Command Execution'" filePrefix="'commands'"
condition="\"evalLib('commands')\"" reports/rpt_template src/syman
"*.[pPwW].ast"
java -Xmx256m -DP2J_HOME=/gc/p2j
com.goldencode.p2j.pattern.PatternEngine outputDir="'reports'"
sumTitle="'Built-In Functions'" filePrefix="'builtin_functions'"
condition="\"evalLib('builtin_funcs')\"" userMultiplex="true"
multiplexStringExpr="'this.lookupTokenName(getNoteLong(\"oldtype\"))'"
reports/rpt_template src/syman "*.[pPwW].ast"
A library of common expressions has been provided in the
P2J_HOME/pattern/common-progress.rules file. This is an XML
file
with useful named expressions. These expressions can be
referenced as expr('<name>')
in any expression. Other example files exist in the
P2J_HOME/p2j/rules/ directory which can be used for experimentation.
The pattern engine command line syntax:
Syntax:
java PatternEngine
[options]
["<variable>=<expression>"...]
<profile>
[<directory> "<filespec>"]
where
options:
-d
<debuglevel> Message output mode;
debuglevel values:
0 = no message output
1 = status messages only
2 = status + debug messages
3 = verbose trace output
-c
Call graph walking mode
variable
= variable name
expression = infix
expression used to initialize variable
profile = rules pipeline configuration
filename
directory =
directory in which to search recursively for persisted ASTs
filespec
= file filter specification to use in persisted AST search
Automated Database and Code Conversion
To run a full conversion:
cd /gc
java -Xmx1800m com.goldencode.p2j.convert.ConversionDriver
-sd2 F2+M0+CB src/syman "*.[pPwW]"
2>&1 |
tee
err.log
Warning: this command
will take quite
a while (2 hours on a fast system, 6 hours on a slower
system). The -Xmx1800m is used to avoid an
OutOfMemoryError.
After conversion successfully completes, to build the resulting
application:
cd /gc
ant jar
This will take 10-30 minutes depending on the hardware and
the size of the project.
Details:
Syntax:
java
-DP2J_HOME=<home> ConversionDriver [options]
<mode>
<filelist>
java
-DP2J_HOME=<home> ConversionDriver -S[options]
<mode>
<directory> "<filespec>"
Where
Mode (1 or more of
the following values, use '+' to delimit
if there are 2 or more values; don't insert spaces!)
-------------------------Front End-------------------------
F0
= preprocessor (in honor cache mode), lexer, parser,
AST persistence and post-parse fixups
F1
= F0 + forced preprocessor execution
F2
= F1 + schema loader/fixups
F3
= F2 + call graph generation
--------------------------Middle---------------------------
M0
= schema annotations, P2O generation, Hibernate mapping
docs, DMO generation, brew for DMO classes
M1
= M0 + length statistics
----------------------Code Back End------------------------
CB
= unreachable code, annotations, frame generator,
base structure, core conversion and brew
CQ
= CB + first restore pristine output from the last front
end run (WARNING: this may have unpredictable results
and/or may fail completely --> NOT FULLY DEBUGGED)
Options
Dn
= set debug level 'n' (must be a numeric digit between
0 and 3 inclusive)
S = use filespecs instead of an explicit file list
N = no recursion in filespec mode
filelist
= arbitrary list of absolute and/or relative file
names to scan (default approach)
directory = a
relative or absolute path (only used when using
filespecs)
filespec
= file filter specification of the filenames in the
given directory to scan, should be enclosed in
double quotes if any of the wildcard characters '*'
or '?' are used (only used when using filespecs)
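The filespec argument behaves like a shell glob; for example, "*.[pPwW]" selects only Progress procedure (.p/.P) and window (.w/.W) files. A quick sketch of the matching, using Python's fnmatch as a stand-in for the driver's matcher:

```python
import fnmatch

# The character class [pPwW] accepts any one of the four extension
# letters; include files (.i) and everything else are skipped.
# fnmatchcase keeps the match case-sensitive per letter, as the
# character class intends.
names = ["menu.p", "report.W", "common.i", "notes.txt"]
matched = [n for n in names if fnmatch.fnmatchcase(n, "*.[pPwW]")]
```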
Call Graph Generation
As of milestone M12,
running this on a
standalone basis is no longer recommended. Please see Simple, Automated
Database
and Code Conversion.
The call graph generation is highly dependent upon a manually generated
root entry
points list, manually generated hints and upon a clean version of the
source tree.
There are 2 outputs from the call graph processing:
- A dead file listing. A list of all files (.i, .p
and .w)
that are unreachable by any end-user.
- A persisted call graph that can be used to drive the
pattern
recognition engine during conversion.
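The dead file listing falls out of a reachability walk over the persisted call graph; a simplified sketch (the graph shape and file names are hypothetical, not the tool's actual persistence format):

```python
def reachable_files(graph, roots):
    # Walk the call graph from the root entry points; any file never
    # visited is "dead" (unreachable by any end user).
    seen, stack = set(), list(roots)
    while stack:
        name = stack.pop()
        if name in seen:
            continue
        seen.add(name)
        stack.extend(graph.get(name, ()))
    return seen
```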
Assuming the source tree has been fully scanned
and fixed up,
one can run the call
graph generation using the following command:
java
-DP2J_HOME=/gc/p2j
com.goldencode.p2j.uast.CallGraphGenerator menu/syman.p.ast
2>&1
|
tee call_graph.log
If you run this tool without a filename or filename list, it will use
the list of root nodes in $P2J_HOME/cfg/rootlist.xml and it will create
persisted graphs for each entry in that list. To get a correct
dead files list, one must run using the rootlist.xml.
To see the results of this run, examine the persisted call graph in menu/syman.p.ast.graph.
Unreachable Code Processing
As of milestone M12,
running this on a
standalone basis is no longer recommended. Please see Simple,
Automated Database and Code Conversion.
The unreachable code processing is rather complete at this time. It
handles almost all known cases in which code can be unreachable.
There are two kinds of results from the unreachable code processing:
- Each processed AST file is updated so each node is
annotated as
reachable or unreachable.
- A log file is generated which lists unreachable nodes along
with
source files and lines.
Assuming the source tree has been fully
scanned and fixed
up,
one can run the unreachable code processing using the following command:
java
-DP2J_HOME=/gc/p2j
com.goldencode.p2j.pattern.PatternEngine unreachable/unreachable .
"*.[pPwW].ast"
The results are stored in the AST files and in
$P2J_HOME/unreach.log.
Unreferenced Tables/Fields Processing
The unreferenced tables/fields processing is partially complete at
this time. It handles references to tables and fields in the vast
majority of code, but does not handle indirect references to tables and
fields performed via references to indexes or via system tables.
The unreferenced tables/fields processing is a two-stage process. The
first stage collects references and produces an intermediate file which
contains all collected information. The second stage applies the
collected information to the schema files.
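The two-stage flow can be sketched as follows (a toy tokenizer stands in for the real AST walk, and the reference format is illustrative only):

```python
def collect_refs(sources):
    # Stage 1: record every table.field reference that appears in
    # the source text (toy tokenizer; the real pass walks the ASTs
    # and writes its findings to an intermediate XML file).
    refs = set()
    for text in sources:
        for token in text.replace("=", " ").split():
            if "." in token:
                refs.add(token.lower())
    return refs

def apply_refs(schema_fields, refs):
    # Stage 2: annotate each schema field as referenced or not.
    return {field: field.lower() in refs for field in schema_fields}
```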
Assuming the source tree has been fully
scanned and fixed
up,
one can run the unreferenced tables/fields processing using the
following command:
cd /gc/p2j
java -Xmx256m -DP2J_HOME=/gc/p2j
com.goldencode.p2j.pattern.PatternEngine -c dbref/collect_refs | tee
df_ref.log
java -Xmx256m -DP2J_HOME=/gc/p2j
com.goldencode.p2j.pattern.PatternEngine dbref/apply_refs
data/namespace "*.schema" | tee db_apply.log
The resulting file produced at the first stage is
$P2J_HOME/dbref.xml. Results of the second stage are in
$P2J_HOME/data/*.schema.ast files, where each table and field node is
annotated as referenced or not referenced.
Schema Conversion and Data Model Creation
As of milestone M12,
running this on a
standalone basis is no longer recommended. Please see Simple,
Automated Database and Code Conversion.
At the time of writing, this aspect of the project is a work in
progress. Schema conversion occurs in several stages:
- Import schema from
*.df
files.
- Run schema fixups.
- Generate a P2O intermediate AST.
- Gather data length statistics for character and raw data fields.
- Generate Hibernate mapping documents from the P2O AST.
- Generate Java Data Model Object (DMO) class definitions
from the
P2O AST.
- Build DMO classes.
- Generate relational schema DDL.
Schema Import
This step is explained in detail in the Schema
Loader section.
Schema Fixups
This step postprocesses the Progress schema AST generated by the *.df
file import step to make it easier to use in future steps.
The
fixed-up AST is persisted to a file with the same root name as the
input AST file, except with a .schema
extension instead
of the original filename's extension (typically .dict
).
To
execute this step run:
cd /gc/p2j
java -Xmx128m -DP2J_HOME=/gc/p2j
com.goldencode.p2j.pattern.PatternEngine schema/fixups data/namespace
"*.dict" 2>&1 | tee err.log
java -Xmx128m -DP2J_HOME=/gc/p2j
com.goldencode.p2j.pattern.PatternEngine schema/fixups src/syman
"*.dict" 2>&1 | tee err.log
Scanning the Source for Schema Data
Unfortunately, it is not possible to generate a relational schema and
Java data model from the Progress schema information alone.
Since
table relations in Progress are defined explicitly only in the Progress
source code, a source code analysis is necessary for this
conversion. In this step, all source code ASTs are inspected
and
natural joins between tables are detected. This information
is
added to the Progress schema files imported by the Schema Loader and
fixed up in the previous step. In addition, we identify all
table
indexes which are explicitly referenced in the source code with the use-index
directive, then add this information as annotations to the schema AST.
To execute this step run:
cd /gc/p2j
java -Xmx128m -DP2J_HOME=/gc/p2j
com.goldencode.p2j.pattern.PatternEngine schema/annotations src/syman
"*.ast"
2>&1 | tee err.log
Generating the P2O AST
The P2O (for "Progress to Object") AST is an intermediate form
containing all information necessary to generate the Hibernate
configuration files and Java data model class definitions in subsequent
steps. It is linked via many cross references to the original
Progress schema AST. In this step, Progress database, table,
field, and index names are converted into names that can be used "on
the other side" in a relational schema and in Java classes.
To
execute this step run:
cd /gc/p2j
java -Xmx256m -DP2J_HOME=/gc/p2j
com.goldencode.p2j.pattern.PatternEngine schema/p2o data/namespace
"*.schema"
2>&1 | tee err.log
java -Xmx256m -DP2J_HOME=/gc/p2j
com.goldencode.p2j.pattern.PatternEngine schema/p2o src/syman
"*.schema"
2>&1 | tee err.log
Gather Data Length Statistics
Text (character) and binary (raw) data fields in Progress are variable length, but in practice most are quite short. To optimize the relational schema generated by the conversion process, it is desirable that such shorter fields be converted to fixed length fields in the target database schema. The "length" attribute which appears in the exported Progress schema refers to the length of the default, visual representation of a field, but has little or no bearing on the field's actual length. Thus, it is necessary to scan the existing data in order to gather statistics on these fields, such that a "safe" length can be determined.
This step reads through data exported from a database (as *.d files), parsing each text and raw field and calculating its length. These length statistics are saved into a distribution within an XML document. The collection of such XML documents later becomes an input to Hibernate mapping document creation.
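As an illustration of the kind of analysis this step performs, the following sketch records observed field lengths and derives a padded "safe" length. All class and method names here are hypothetical; this is not the actual P2J statistics collector, and the real distribution/XML output format is not shown.

```java
import java.util.TreeMap;

/** Illustrative length-statistics collector (hypothetical, not the P2J implementation). */
public class LengthStats
{
   /** Histogram of observed lengths: length -> occurrence count. */
   private final TreeMap<Integer, Integer> histogram = new TreeMap<Integer, Integer>();
   private int max = 0;

   /** Record the length of one exported text/raw field value. */
   public void record(String fieldValue)
   {
      int len = fieldValue.length();
      Integer count = histogram.get(len);
      histogram.put(len, count == null ? 1 : count + 1);
      max = Math.max(max, len);
   }

   /** A "safe" column length: the maximum observed, padded by a safety factor. */
   public int safeLength(double padFactor)
   {
      return (int) Math.ceil(max * padFactor);
   }

   public int max()
   {
      return max;
   }

   public static void main(String[] args)
   {
      LengthStats stats = new LengthStats();
      for (String v : new String[] { "ab", "abcd", "a", "abcd" })
      {
         stats.record(v);
      }
      System.out.println("max=" + stats.max() + " safe=" + stats.safeLength(1.25));
   }
}
```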
To execute this step, ensure the database dump files for the target database exist in the data/dump subdirectory (see Exporting Data from Progress to generate these files if necessary), then run:
cd /gc/p2j
java -Xmx256m -DP2J_HOME=/gc/p2j com.goldencode.p2j.pattern.PatternEngine "statsPath='data/namespace'" schema/data_analyzer data/namespace "*.p2o" 2>&1 | tee err.log
Generating Hibernate Mapping Documents
This step processes the P2O AST. For every table/class encountered, a separate XML mapping document is created, which allows Hibernate to manage the mapping between a Data Model Object class and its associated, relational database table(s). To execute this step run:
cd /gc/p2j/<application_dir>
java -DP2J_HOME=/gc/p2j com.goldencode.p2j.pattern.PatternEngine "statsPath='data/namespace'" schema/hibernate data/namespace "*.p2o" 2>&1 | tee err.log
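For orientation, the generated documents follow the standard Hibernate 3 mapping format. The fragment below is a hand-written illustration for a hypothetical customer table and DMO class (all names invented for this example); actual generated output will differ:

```xml
<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
   "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
   "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping>
   <!-- Hypothetical DMO implementation class mapped to its relational table -->
   <class name="com.example.dmo.CustomerImpl" table="customer">
      <id name="id" column="id" type="long">
         <generator class="native"/>
      </id>
      <property name="custName" column="cust_name" type="string" length="32"/>
      <property name="balance" column="balance" type="big_decimal"/>
   </class>
</hibernate-mapping>
```

Note how the length attribute on a string property is where the "safe" lengths gathered in the statistics step would be applied.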
Generating Java Data Model Classes
This step processes the P2O AST to generate Java source code for the Java Data Model Object interfaces and classes which correspond with the database tables. To execute this step run:
java -DP2J_HOME=/gc/p2j com.goldencode.p2j.pattern.PatternEngine schema/java_dmo data/namespace "*.p2o" 2>&1 | tee err.log
java -DP2J_HOME=/gc/p2j com.goldencode.p2j.pattern.PatternEngine convert/brew src/../<app_impl_location> "*.jast" 2>&1 | tee err.log
Building Java Data Model Classes
This step executes a limited build of the customer-specific project. If the previous step executed successfully, the project now includes the DMO interfaces and implementation classes. To execute this step run:
cd /gc/p2j/<application_dir>
ant clean
ant
Generating Data Definition Language (DDL)
DDL statements are used to actually generate the relational schema. We use Hibernate's SchemaExport tool to generate DDL from the Hibernate mapping documents we created in the previous step. The resulting documents can then be fed to a utility such as PostgreSQL's psql to finally generate the database schema.
Warning: due to a problem with how ant handles the CLASSPATH, the current CLASSPATH must be nulled before running this step, otherwise problems can occur when trying to resolve classes for Hibernate's user types.
To execute this step run:
cd /gc
export CLASSPATH=
ant -Dtarget.db=<db_name> schema 2>&1 | tee err.log
where <db_name> is replaced with an actual database name, such as p2j_db, as in:
ant -Dtarget.db=p2j_db schema 2>&1 | tee err.log
Progress
4GL to Java
Source Code Conversion
As of milestone M12, running this on a standalone basis is no longer recommended. Please see Simple, Automated Database and Code Conversion.
Starting with the M6 milestone (May 2, 2005 deliverable), Progress 4GL source to Java source conversion is available. This conversion requires that the project home directory be set up as documented above. After home directory setup has occurred, the normal scanning process must be used to generate Progress source ASTs. After fixups are run, 6 conversion rule-sets can be run in turn:
- unreachable/unreachable (using an .ast file as input) - analyzes the source ASTs and annotates all nodes regarding whether they can ever be executed or not
- annotations/annotations (using an .ast file as input) - inspects the tree, calculates names and many other important values, then stores these in annotations
- convert/frame_generator (using an .ast file as input) - generates 1 interface definition Java AST and 1 class definition Java AST for each frame used in a Progress source file
- convert/base_structure (using an .ast file as input) - generates the top level structure of the Java source file to which the rest of the converted code can be grafted
- convert/core_conversion (using an .ast file as input) - walks the source AST nodes and generates the proper Java AST nodes using expressions and some conversion helpers
- convert/brew (using a .jast file as input) - translates the Java ASTs into Java source code
At this time, one MUST rescan and fixup the source each time any of the first 5 conversion rule-sets are run. The brew rule-set can be run any number of times and each time it will overwrite any generated .java source file.
To see an example:
cd testcases/uast
export TARGET=hello_world.p && rm -f $TARGET.* &&
java -DP2J_HOME=. com.goldencode.p2j.uast.ScanDriver -qclpx . "$TARGET" &&
java -DP2J_HOME=. com.goldencode.p2j.pattern.PatternEngine -f fixups/post_parse_fixups "$TARGET.ast" &&
java -DP2J_HOME=. com.goldencode.p2j.pattern.PatternEngine -f unreachable/unreachable "$TARGET.ast" &&
java -DP2J_HOME=. com.goldencode.p2j.pattern.PatternEngine -f -d 2 annotations/annotations "$TARGET.ast" &&
java -DP2J_HOME=. com.goldencode.p2j.pattern.PatternEngine -f -d 2 convert/frame_generator "$TARGET.ast" &&
java -DP2J_HOME=. com.goldencode.p2j.pattern.PatternEngine -f convert/base_structure "$TARGET.ast" &&
java -DP2J_HOME=. com.goldencode.p2j.pattern.PatternEngine -f -d 2 convert/core_conversion "$TARGET.ast" &&
java -DP2J_HOME=. com.goldencode.p2j.pattern.PatternEngine -f convert/brew "$TARGET.jast"
Then compare hello_world.p to the generated HelloWorld.java!
To see more details on the conversion process, please review the convert package summary which contains the design details on the current conversion process. The rule-set files in p2j/rules/convert should also be reviewed since much of the conversion logic resides in these files.
Data Migration
In order for the converted application to access data, it is necessary to export a snapshot of the application's current data from one or more Progress databases and to import it into corresponding, target, relational databases. This process requires a significant number of hours for a large database and should be done during a scheduled outage when the production application is not updating the target database(s).
Exporting
Data from
Progress
The import process works with exported Progress database data stored in *.d text files. A separate *.d file is created for each database table being migrated. If these files are not supplied, they will need to be generated by the Progress Data Dictionary. To export data for all tables in a Progress database, follow these steps:
- Open the Data Dictionary
- If necessary, connect to the target database and make it
the
working database
- Select from the menu: Admin --> Dump Data
and
Definitions --> Table Contents (.d file)...
- Select all tables to be migrated to the new database and GO
(generally F1)
- Select a target directory, use the NO-MAP option for
character
mapping, use code page ISO8859-15; choose OK
When the process completes, there should be one data file for each
table selected in step 4 above.
Importing
Data into the Target Database
A (mostly) vendor-neutral solution has been designed for the processing of the data files exported from Progress and for their import into the target, relational database. However, at this time, only PostgreSQL has been tested and is fully supported as a target database. See Appendix E for a quick start on setting up the target database environment.
All *.d files to be imported should be in the data/dump subdirectory of the current project. The target database server should be running. If not running the import from p2j.jar (TODO: not yet supported), ensure the CLASSPATH contains references to, at minimum, the following items:
<p2j_project_root>/p2j/build/classes
antlr.jar
commons-beanutils-core.jar
commons-digester.jar
commons-collections-2.1.1.jar
commons-logging-1.0.4.jar
hibernate3.jar
dom4j-1.6.jar
postgresql-8.1-405.jdbc3.jar
jta.jar
cglib-2.1.jar
asm.jar
c3p0-0.8.5.2.jar
ehcache-1.1.jar
<p2j_home>/cfg
.
From the P2J home directory, issue the following command (omit
newlines):
java -Xmx128m
-DP2J_HOME=<home_directory>
com.goldencode.p2j.pattern.PatternEngine
-d 2
schema/import
data/namespace
"<db_name>*.p2o"
2>&1 | tee schema_import.log
where <home_directory> must be replaced with the target P2J_HOME directory and <db_name> must be replaced by the target database name.
Once this command is run, any table which already contains at least one record will be skipped on subsequent imports. Thus, if possible, subsequent runs should be preceded by first dropping the database, recreating it, and re-importing the target database schema. If this is not practical because of the size of the database being imported, at minimum all records in any partially imported tables should be deleted before attempting the import again.
Logging
At this time, the conversion process has no structured logging. All output is done via the console (STDOUT or STDERR).
For the runtime, there is extensive and configurable logging for the server (but none for the client). As with the majority of the runtime configuration, these values are read from the directory. The logging infrastructure is based on that provided by the java.util.logging package in J2SE; the terms used there map to those used below. Note that logging levels in J2SE are constants of the Level class. The following mappings are used to convert a string in the configuration into a Level constant (in order of least verbose to most verbose):
Configuration Value | Logging Level
OFF | Level.OFF
SEVERE | Level.SEVERE
WARNING | Level.WARNING
INFO | Level.INFO
FINE | Level.FINE
FINER | Level.FINER
FINEST | Level.FINEST
ALL | Level.ALL
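The string-to-Level mapping above is exactly what java.util.logging.Level.parse provides; a minimal sketch (the class name and helper method here are illustrative, not P2J code):

```java
import java.util.logging.Level;

/** Demonstrates mapping configuration strings to java.util.logging levels. */
public class LevelMapping
{
   /** Convert a directory configuration value into a Level constant. */
   public static Level fromConfig(String value)
   {
      // Level.parse accepts the standard level names (OFF, SEVERE ... ALL)
      // and throws IllegalArgumentException for anything unrecognized.
      return Level.parse(value.trim().toUpperCase());
   }

   public static void main(String[] args)
   {
      System.out.println(fromConfig("WARNING"));   // WARNING
      System.out.println(fromConfig("finest"));    // FINEST
   }
}
```

For the known names, Level.parse returns the predefined Level constants, so reference comparison against Level.WARNING etc. is safe.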
The directory will usually contain at least one "logging" section; however, the defaults set below will take effect if there is no such section. The configuration is normally found via the path "/server/default/logging", and in that case it defines the default values for all servers. A given server can have a more specific or custom logging configuration by adding the section via the path "/server/[serverid]/logging", where [serverid] is the unique server ID for the server being configured. The server-specific path is searched first and, if found, it is honored. Otherwise the default path is searched and honored if found. If no logging configuration is found at all, the defaults will be used.
Node Path and ID | Type | Default Value | Notes
.../logging/level | string | WARNING | This specifies the logging level for any otherwise unconfigured handlers. Note that at this time, additional handler support is not provided or used in P2J.
.../logging/handlers/console/level | string | WARNING | The logging level for the console handler. This will limit all log output via the console to this level and above. All of this console output will be written to the server process' STDERR. Warning: no matter what logging level the logger (see below) is set to, only log entries of equivalent or lesser verbosity will be allowed to emit on this handler. Thus this logging level is a filter to exclude content that has been created by one or more loggers.
.../logging/handlers/file/level | string | OFF | The logging level for the file handler. This will limit all log output via the file to this level and above. Warning: no matter what logging level the logger (see below) is set to, only log entries of equivalent or lesser verbosity will be allowed to emit on this handler. Thus this logging level is a filter to exclude content that has been created by one or more loggers.
.../logging/handlers/file/append | boolean | FALSE | TRUE will append to the file handler's previous log file, FALSE will overwrite.
.../logging/handlers/file/limit | integer | 64 | Maximum file size in kilobytes for the file handler.
.../logging/handlers/file/pattern | string | %h/p2j_server_%g.log | File name pattern for the file handler. %h is the current user's home directory, %g is a generation number.
.../logging/handlers/file/count | integer | 4 | Number of log file generations for the file handler.
.../logging/loggers/[qualified_logger_name]/level | string | n/a | The logging level for any arbitrary logger name (in a hierarchical namespace). By convention, the namespace is based on the package/class name hierarchy. This is not required since any arbitrary text can be inserted. However, P2J does use the package and optionally the class name when creating loggers. If a logger's name begins with one of the names specified in this section, the most specific (longest) matching name will specify the logging level for that logger. There may be any number of these entries. As a result, one may specify the logging level for a specific package (and by default all contained classes and sub-packages and their classes). In addition, a more specific class or package name will override the "parent" logging level with the one specified in that more specific entry. Note that this sets the level for the source of a log message. That means that the class that is logging will only log messages at that level (and all less verbose levels). But the handler's logging level (which is set separately) may exclude the generated messages if the handler's logging level is less verbose than that of the logger being used.
Although J2SE supports pluggable handlers and formatters, only console and file handler types are supported at this time. Likewise, user-defined formatters are not supported. P2J uses its own custom formatter that displays state from the current session and thread's context. Each log entry will look like the following in the case that the message was output from a server thread that does not have an end-user context:
[10/02/2007 13:49:11 EDT] (SecurityManager:INFO) {main} Message text.
In the case that the message is logged from a thread that has an end-user context, the message would look like this:
[10/02/2007 13:49:11 EDT] (Persistence:WARNING) {026:014:ges} Message text.
In both cases, the first section is the date/timestamp. Likewise, both will have a second section with the class name of the source of the message and the message level. The portion with curly braces is either a {thread_name} or user context data. The user context data is in the format {session_id:thread_id:user_id}.
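The layout above can be approximated with a custom java.util.logging Formatter. The following is an illustrative sketch only, not the actual P2J formatter: the real formatter obtains session/user context from P2J internals, while this version simply uses the current thread name.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.logging.Formatter;
import java.util.logging.Level;
import java.util.logging.LogRecord;

/** Illustrative formatter producing "[date] (Source:LEVEL) {context} message". */
public class P2jStyleFormatter extends Formatter
{
   private final SimpleDateFormat stamp =
      new SimpleDateFormat("MM/dd/yyyy HH:mm:ss zzz");

   @Override
   public String format(LogRecord rec)
   {
      // The real P2J formatter substitutes {session_id:thread_id:user_id}
      // when an end-user context exists; here we always use the thread name.
      return "[" + stamp.format(new Date(rec.getMillis())) + "] ("
             + rec.getLoggerName() + ":" + rec.getLevel() + ") {"
             + Thread.currentThread().getName() + "} "
             + formatMessage(rec) + "\n";
   }

   public static void main(String[] args)
   {
      LogRecord rec = new LogRecord(Level.INFO, "Message text.");
      rec.setLoggerName("SecurityManager");
      System.out.print(new P2jStyleFormatter().format(rec));
   }
}
```

A formatter like this would be installed on a handler via Handler.setFormatter().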
Debugging
Converted applications are Java classes that run inside an application server environment. This can be thought of as a "container" which provides services and Progress 4GL compatibility. Special understanding and strategies are needed to effectively debug applications in such an environment.
Any deviation between a converted application and the behavior of the original 4GL application can be due to a problem in the conversion process, a defect in the runtime, or both.
In the following discussion, the following are assumed:
- the original 4GL program works as expected
- this 4GL program was converted to its Java equivalent via
the P2J conversion process
- neither the 4GL program nor the converted Java program have
been manually modified since the conversion ran
- the behavior for the Java program deviates from the
original 4GL behavior
Step 0: prepare the debugging environment.
Since this debugging session is all about determining the root cause of the difference in behavior between a 4GL program and the converted equivalent, it is critical to have "twin" Progress 4GL and Java environments.
These twin environments must match in all of the following:
- database schema
- database contents/data
- the same application source code (being run in Progress and
as a conversion input to the P2J process)
- the same terminal types (e.g. VT320) and sizes (80x24) in
CHUI applications
Any deviation may make debugging painful or even impossible. Another important point is that the debugging environments should sufficiently duplicate the conditions of the production environment such that the problem can be duplicated. It is assumed that the production environment is never used for debugging.
Step 1: recreate the problem.
It must be possible to deterministically cause the difference/problem to occur in the Java environment. It is often important to document the exact key sequences the user must execute to get to the difference, since minor changes in user input can subtly change the runtime behavior in the Java code even when such differences are not obvious in the 4GL environment. Once the problem has a documented and consistent recreation scenario that shows a deviation from the original 4GL environment, this step is complete.
WARNING: Progress 4GL is absolutely full of bizarre, non-intuitive, hidden behavior that appears to be wrong. Do not assume that a strange application behavior on the Java side is wrong before you check the exact case on the 4GL side. P2J works to duplicate all of these user-visible behaviors exactly. Always compare the environments with the same exact recreate steps and using screens/terminals that are compared "side by side".
Step 2: find a "starting point" for the failure.
In order to debug a problem, one must have a good starting point.
The following are common cases:
- error message
- Error messages can be application-level or they can originate in the P2J runtime. Either way, they would be displayed on the screen (often in one of the "message" lines, sometimes in an alert box).
- A runtime error message will usually begin with two asterisk characters, followed by the error text and an error number in parentheses. For example: ** Some error message text. (-1)
- Runtime errors will also be written to the logging implementation. This may be useful to put the problem in the context of the larger log file OR in the case where a user reports that he or she saw an error message, but did not report the exact text of the message.
- The specific error text of a given runtime error will be easily traced back to a small number of locations (usually only one location in practice) in the runtime code.
- If the error message displayed is an application message, then the error text can be traced back to its source in the application code.
- Once the source of the message is found, a debugging breakpoint can be set at that location such that an effective debugging session can be started.
- Of course, it is possible that the error message itself contains enough information to diagnose the problem, avoiding the need for a debugger session.
- exception stack trace
- Some abends are accompanied by such a specific exception that the problem is self-evident. If this is not the case, then a more thorough investigation will be needed.
- There are 4 common or "expected" conditions that can be normally encountered by Progress 4GL code. These conditions are ERROR, END-KEY, STOP and QUIT. These conditions map to a sub-class of the ConditionException which is a P2J runtime class. Any such conditions that are raised by the operation of a converted application will not cause an abnormal end (abend). Rather, these exceptions are naturally handled and in fact are necessary to ensure the compatible application control flow.
- Other than these expected ConditionException instances, if an unhandled exception is encountered by the application code, this will naturally unwind the stack completely and the user's session will exit (abnormally).
- Before exiting, the runtime will generally log the stack trace for any such exception. So when a user reports that their session abended, the stack trace can usually be found in the server's log file. In some cases, the STDERR of the client may have received the stack trace output.
- When an abend occurs on the server, it will usually be generated on a user's session thread. This means that only that user will be affected by the abend. The server will continue running and all other users will continue working without any effect. All other (non-user) threads are especially well protected from exceptions and are running a small, well tested and well known code base. Since all real work occurs on user-session threads, this is where the problems can occur. At the root of a server stack trace will usually be a Thread.run() and a Conversation.run() call respectively. User session threads are "conversation" threads that are dynamically created and used for the lifetime of the user's session. This is a dedicated thread for that specific user and it will not return until the user exits the application, generates an unhandled STOP condition (usually via CTRL-C on the client) or encounters an abend. Further up the stack trace one will usually find a reference to StandardServer.standardEntry() which is the primary entry point on the server for interactive clients. The Java reflection classes are used in many places to implement dynamic invocation of classes and methods. Further up the stack will be an invocation of the "top-level" application entry point. This is the execute() method of the Java class which was generated from the Progress 4GL program that is normally used at startup of the application. From there the stack trace will be completely dependent upon the call chain of processing in the application.
- When an abend occurs on the client, this will almost always be thrown on the "main" thread of the application. For an interactive client, the root of the stack would normally be ClientDriver.main(). Further up the stack is usually a reference to something like $Proxy2.standardEntry() which is a remote invocation of the StandardServer.standardEntry() method that exists on the server side. Above there, one will normally see Dispatcher.processInbound() which is an indication that the server has called a client-side exported API to obtain some kind of service such as rendering an application frame (a screen).
- Note that exceptions thrown on the client during the processing of some server-driven request (this is almost always the case) will unwind the stack "over" the network and be re-thrown on the server side. This means that client side stack traces can and will be seen in the server's log file. Just because the stack trace is present in the server log, don't assume it is a server-side issue.
- Near the top of the stack (client or server) will usually be a good location where one can set a breakpoint to start a productive debugging session.
- More information about the roles of specific threads in the system can be found here.
- thread dump
- If one does not have any error message or exception stack trace from which to start, one can generate a thread dump on either the client and/or the server. A thread dump is a listing of all currently existing threads in the JVM and a stack trace from each of those threads. By interpreting these stack traces, one can get a very clear snapshot of the status of the client and/or server.
- Before triggering the thread dump, it is best to recreate the problem and to get as close to the failure point as possible without passing the failure point. It is also important to ensure that the processing stays at that location long enough for one to trigger the dump.
- The thread dumping feature is a standard facility of the Sun HotSpot based JVM; however, virtually all JVMs will have this feature. In the Sun JVM, there is a simple mechanism to notify the JVM that a thread dump needs to be produced. The JVM will pause all threads, write the stack trace of each one to STDOUT, and then resume the threads. The threads do not halt permanently and there are no negative side effects from this thread dump, so this procedure is safe to use on a production server.
- On Linux, one must know the process ID (pid) of the client or server process which needs to produce the dump. This can normally be obtained via the ps command. To generate the dump, run the command kill -SIGQUIT <pid> OR kill -3 <pid>
- On Windows, one must find the console in which the client or server is running and then manually execute the CTRL-BREAK key combination while that console has the "focus" (while user input is directed to that console).
- Please note that the server output will be cleanly captured on STDOUT, but on an interactive CHUI client, STDOUT is in use for the application itself. In other words, generating a thread dump on a Linux CHUI client process will corrupt the screen. In addition, it is likely that the terminal is not in a scrolling mode. This would cause most of the thread dump to be "lost" since the user will not be able to scroll up the terminal buffer to see all of the output. On Linux, in the KDE Konsole terminal application, this problem can be resolved by executing the "Edit -> Reset & Clear Terminal" menu item BEFORE the thread dump is triggered. That will allow the terminal buffer to be scrolled and the full thread dump output will be visible. It is likely that after this the CHUI client will not operate properly so it may need to be exited.
- It is perfectly acceptable to generate thread dumps more than once for the same process. There is no adverse effect; however, one will need to be careful in the interpretation of the STDOUT contents as the correct thread dump must be found.
- Once the thread dump is available, it can be used just like an exception stack trace to get a starting point for debugging.
- logging
- When all else fails, logging can be a useful fallback
plan. The
reason one doesn't usually start with logging is that the output from
logging can be quite voluminous and thus very time consuming to
interpret.
- Please see the section on the logging
facilities
built into the runtime. These facilities would be used to configure
each server's logging settings. Note that after changing any of the
directory configuration for logging, the server will need to be
restarted for those settings to take effect.
- The following classes have highly useful, general purpose logging implemented:
- com.goldencode.p2j.util.TransactionManager will provide output of every block related event and its current state using the FINE logging level
- com.goldencode.net.RemoteObject will provide a log entry for every outbound remote call at the FINER logging level; with the FINEST level, it provides a dump of the remote call parameters and a log entry (including the elapsed time) for when the remote call completes
- com.goldencode.net.Dispatcher will provide a log entry for every inbound remote call at the FINER logging level; with the FINEST level, it provides a dump of the remote call parameters and a log entry (including the elapsed time) for when the call completes
Step 3: debug from there!
How to
Run Testcases
Preprocessor
Testcases
The preprocessor is functionally complete at this time. For more information review the high level design documentation and the detailed design docs (start with the package summary for com.goldencode.p2j.preproc).
Syntax:
java com.goldencode.p2j.preproc.Preprocessor [-option ...] filename [ file-arguments ]
Where -option is one or more of the following:
-debug
-keeptildes
-lexonly
-out filename
-propath path:path:...
-pushback n
-marker n
-hints hints-file-name
All options have to precede the source filename. File arguments, if any, must follow the usual Progress rules for coding positional or named arguments (the operating system / shell may dictate some specific escape sequences to be used instead of characters like '&', '=', '"' etc.).
The -debug option turns on debug output.
The -keeptildes option preserves the tildes used in the escape sequences on output.
The -lexonly option changes the logic of the preprocessor to eliminate expansions and include file processing; the processing is limited to lexing only, which is useful for debugging.
The -out option directs the preprocessor output to a specific file instead of stdout.
The -propath option is the equivalent of the Progress PROPATH string.
The -pushback option specifies a different pushback buffer size for the input stream (the default is 1024 and should be sufficient for most runs of the preprocessor).
The -marker option tells the Preprocessor what byte is used as a special internal marker. The default value 1 is likely to be appropriate for most cases. In general, any value in the ranges 1 to 7, 16 to 32 and 128 to 255 can be used as long as the input file does not produce this byte. If that happens, however, an exception will be raised.
The -hints option specifies the filename for the hints file.
How to run a testcase:
java -DP2J_HOME=<home_directory> com.goldencode.p2j.preproc.Preprocessor -out toplevel.out /gc/p2j/testcases/preproc/toplevel.test
or
java -DP2J_HOME=<home_directory> com.goldencode.p2j.preproc.Preprocessor -out toplevel.out /gc/p2j/testcases/preproc/toplevel.test "&name1=xyz"
Using the -propath option, one can easily run the Preprocessor against
real Progress source code. For example:
java -DP2J_HOME=/gc/p2j com.goldencode.p2j.preproc.Preprocessor -propath /apps/syman ar/invalign.p
Note that at the current time, any use of the preprocessor function DEFINED() on a variable name that is NOT defined will result in output such as the following:
line 1:11: unexpected token: UNDEFINED-NAME
In this example, the symbol UNDEFINED-NAME did not exist. The line and column number will always be line 1, column 11. This is a temporary artifact of the process by which preprocessor expressions are evaluated and this will be resolved before the final release. There are a number of source files that will generate such errors during an initial scanning run. None of these items are actually a real problem.
Progress
Lexer Testcases
The lexer is functionally complete. There may be one or two minor modifications that will occur before the end of the year, but it is likely that it will properly handle 99.9% of all source code. For more information review the high level design documentation and the detailed design docs (start with the package summary for com.goldencode.p2j.uast).
To run a testcase:
java -DP2J_HOME=<home_directory>
com.goldencode.p2j.uast.LexerTestDriver <filename>
The output will be the list of tokens dumped to stdout. Useful testcases exist in p2j/testcases/uast.
Progress
Parser Testcases
The parser is functionally complete. It will be fully debugged and get tree building modifications, but otherwise it is done. It handles most source code, however it does fail in some cases. It runs all testcases in p2j/testcases/uast. For more information review the high level design documentation and the detailed design docs (start with the package summary for com.goldencode.p2j.uast).
To run a testcase:
java -DP2J_HOME=<home_directory>
com.goldencode.p2j.uast.ParserTestDriver <filename> [debug]
The first parameter is a mandatory filename (relative or absolute). There is no propath processing at this time. Any second parameter turns debug mode on. This is interesting since it will bring up a JTree viewer to display the tree.
The output will be the text view of the AST dumped to stdout.
Useful testcases exist in p2j/testcases/uast.
Progress
Expression Evaluation Testcases
The ProgressExpressionEvaluator provides a dynamic interpreter for real Progress 4GL expressions (numeric, string and boolean). This is fully functional for the above expression types but does not provide support for date, recid, rowid, handle or raw expressions. This was built to leverage the UAST lexer and parser for all the expression processing since the Preprocessor must dynamically evaluate expressions in conditional &IF statements.
Since only numeric, string and boolean expressions are valid in the Preprocessor, the implementation was limited to those options. The design can handle all expressions, but there was no need to implement the additional support at this time. Note also that the preprocessor doesn't provide any support for lvalues or most functions, so the command line driver program expects all operands to be either a subexpression or a literal. For more information review the high level design documentation and the detailed design docs (start with the package summary for com.goldencode.p2j.uast).
To run a testcase:
java -DP2J_HOME=<home_directory>
com.goldencode.p2j.uast.ExpressionTestDriver "expression"
The first parameter is a mandatory expression which MUST be enclosed in
double quotes. This allows Java to see the entire expression as a
single string. If you wish to try string expressions, please use the
single quoted form of string literal inside the double quotes. The
result will be written to stdout.
Try:
cd p2j/testcases/uast
java -DP2J_HOME=. com.goldencode.p2j.uast.ExpressionTestDriver "40 * (64 / 5 + 4 - -6) / 67 > 3"
This should be true.
java -DP2J_HOME=. com.goldencode.p2j.uast.ExpressionTestDriver "(40 * (64 / 5 + 4 - -6) / 67) * (3 modulo 78)"
This will result in 40.83582089552239.
Note that in the preprocessor, all decimal results are converted to
integer on return. This is how the Preprocessor driver is implemented,
but it is NOT how the ExpressionTestDriver is implemented (this test
driver has no concept of the Preprocessor). All numeric results are
left in decimal form, which does not always generate the same result as
Progress in the case where Progress does a conversion to integer. If
such a function is needed, it is left as an exercise for the reader <g>.
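A short sketch makes the distinction concrete (hypothetical code, not part of P2J; rounding in integerResult is an assumption for illustration only, since the exact conversion rule is not stated here):

```java
// Illustrates why the Preprocessor driver and the ExpressionTestDriver can
// report different numeric results for the same expression.
public class DecimalVsInteger
{
   // Decimal evaluation of (40 * (64 / 5 + 4 - -6) / 67) * (3 modulo 78),
   // matching the ExpressionTestDriver behavior described above.
   public static double decimalResult()
   {
      return (40.0 * (64.0 / 5 + 4 - -6) / 67) * (3 % 78);
   }

   // The Preprocessor converts decimal results to integer on return;
   // rounding (vs. truncation) is assumed here for illustration only.
   public static long integerResult()
   {
      return Math.round(decimalResult());
   }

   public static void main(String[] args)
   {
      System.out.println(decimalResult());   // approx. 40.8358, as above
      System.out.println(integerResult());   // 41
   }
}
```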
java -DP2J_HOME=. com.goldencode.p2j.uast.ExpressionTestDriver "'hello ' begins 'hell'"
This should be true.
java -DP2J_HOME=. com.goldencode.p2j.uast.ExpressionTestDriver "'hello ' matches 'hello'"
This should be false.
java -DP2J_HOME=. com.goldencode.p2j.uast.ExpressionTestDriver "'hello ' matches 'hello*'"
This should be true.
All Progress operators are functional.
Progress Lexer and Parser Automated Regression Testing
Automated regression testing is possible using the ScanDriver.java and
an established set of baselines as a comparison.
To run the testing (assumes your current directory is the project root):
cd p2j/testcases/uast
export testdir=bb && rm -f cfg/registry.xml && rm -f *.{ast,schema} && rm -fr $testdir && mkdir $testdir && java -DP2J_HOME=. com.goldencode.p2j.uast.ScanDriver -qclpx . "*.p" >err.log 2>&1 && mv *.{lexer,parser} $testdir && mv err.log $testdir && cp *.{ast,cache,schema} $testdir && dirdiff -S $testdir baseline/
Replace the "bb" portion of "export testdir=bb" with any relative
directory that you wish to use to store the output from the lexer and
parser. This directory will be deleted if it exists and then recreated;
for every .p testcase in the current directory, a lexer and a parser
output file will be generated and moved to the new directory. The
dirdiff command brings up an X-based directory comparison tool that
graphically shows the files that differ between the new and baseline
directories.
Double clicking on any one of the files brings up a window with the
color coded output from diff. Note that err.log is expected to differ
between the directories, since there is one file (reserved_keywords.p)
which is expected to throw an exception in the parser. Since the
parser's line numbers change from compile to compile, this is the only
difference that should be seen from run to run.
Bitset Dumping
ANTLR generated lexers and parsers are nominally based on the LL(k)
recursive descent parsing concept. However, to simplify the
code
generation and reduce the complexity of the resulting tests, ANTLR uses
a "linear approximate" algorithm instead of a true LL(k)
algorithm. From an implementation perspective, this means
that
bitsets are used to aggregate or rollup all k==1 possibilities into one
antlr.collections.Bitset and likewise all k==2 possibilities are rolled
into another antlr.collections.Bitset. Then the test for a
match
becomes very simple: the logic is bitset1.member(LA(1))
&& bitset2.member(LA(2)). This
means that when one
is debugging or reading the generated code, one must be able to dump
the contents of each bitset to understand what the true match logic is
doing.
This is especially important since this linear approximate approach
generates apparent ambiguity where none exists. For example,
if
one is matching on the following:
A B | B A
The LL(k) approach would generate:
if ( (LA(1)==A
&&
LA(2)==B) || (LA(1)==B && LA(2)==A) )
{
// matches A B or B A
}
Note that this will only match A B or B A. It
will NOT
match A A or B B.
The linear approximate approach would generate:
_tokenSet_1 (all possible LA(1) tokens) --> A, B
_tokenSet_2 (all possible
LA(2)
tokens) --> A, B
if (
_tokenSet_1.member(LA(1))
&& _tokenSet_2.member(LA(2))
)
{
//
matches A B or B A or
A A or B B!
}
Here we can see that the linear approximate algorithm matches A A or
B B constructions when it shouldn't (since they are not part of the
original rule; instead, they were artificially added because of the
implementation mechanism that INDEPENDENTLY "rolls up" the possible
tokens in each k position). The loss of the relationship between the
k==1 and k==2 positions is what provides the simplification of the
generated logic, and it also creates the ambiguity.
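The contrast can be sketched in a few lines of Java (an illustration only, using java.util.BitSet in place of antlr.collections.BitSet; the token type values are arbitrary):

```java
import java.util.BitSet;

// Contrasts exact LL(2) matching of the rule "A B | B A" with the linear
// approximate test that ANTLR generates for it.
public class LinearApprox
{
   static final int A = 1;
   static final int B = 2;

   // Exact LL(2) test: matches only A B or B A.
   public static boolean exactMatch(int la1, int la2)
   {
      return (la1 == A && la2 == B) || (la1 == B && la2 == A);
   }

   // Linear approximate test: each k position is rolled up independently,
   // losing the relationship between LA(1) and LA(2).
   public static boolean approxMatch(int la1, int la2)
   {
      BitSet tokenSet1 = new BitSet();   // all possible LA(1) tokens
      tokenSet1.set(A);
      tokenSet1.set(B);
      BitSet tokenSet2 = new BitSet();   // all possible LA(2) tokens
      tokenSet2.set(A);
      tokenSet2.set(B);
      return tokenSet1.get(la1) && tokenSet2.get(la2);
   }

   public static void main(String[] args)
   {
      // A A is rejected by the exact test but accepted by the approximation.
      System.out.println(exactMatch(A, A));    // false
      System.out.println(approxMatch(A, A));   // true
   }
}
```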
To understand and disambiguate such generated code, one must be able to
dump the Bitset contents. A class has been written to do
this. The basic command line is as follows:
java
com.goldencode.p2j.util.DumpBitSet
<containing_classname> <bitset_name>
Both parameters are required. The
1st parameter specifies the parser or lexer's classname. This
is
needed to find the BitSet to dump. In the case of a Parser,
it is
also used to
get the array of token type names that match the integer token type
values that are encoded in the Bitset. This allows the dump
to
display the token types and the corresponding symbolic name.
This
list is always found inside a parser. In a Lexer, BitSets
refer
to lists of characters rather than token types. In this case,
no
array of token names is needed.
The 2nd parameter is the name of the member variable that is the BitSet
to dump. These are written with the naming convention _tokenSet_XYZ
where XYZ is some sequentially incremented number created at the time
the code was generated.
To print a Bitset from the UAST parser (just change the name of the
Bitset as needed):
java
com.goldencode.p2j.util.DumpBitSet
com.goldencode.p2j.uast.ProgressParser _tokenSet_137
To print a Bitset from the UAST lexer (just change the name of the
Bitset as needed):
java
com.goldencode.p2j.util.DumpBitSet
com.goldencode.p2j.uast.ProgressLexer _tokenSet_2
Schema Lexer Test Driver
This program is used to verify that the ProgressLexer is correctly
lexing
tokens in a Progress schema dump file, when running in schema lexing
mode. Each token is represented as a
formatted string written to stdout.
To run the program:
java com.goldencode.p2j.schema.LexerTestDriver
<filename>
Schema Parser Test Driver
This program is used to verify that the SchemaParser is correctly
matching tokens created by the ProgressLexer when running in schema
lexing mode, and is correctly creating an
Abstract Syntax Tree (AST). When run against a valid schema
dump
file,
the tree is displayed in a frame window using a Swing JTree control. If
debug is enabled, the text view of the AST is dumped to stdout.
To run the program:
java com.goldencode.p2j.schema.ParserTestDriver
<filename>
[debug]
SchemaDictionary Symbol Resolver
The SchemaDictionary
is used by the uast
package to resolve schema entity name references found in Progress
source code. Using this dictionary, an abbreviated or unqualified name
reference in Progress code can be resolved to a fully qualified name
which represents a database table or field. It also can return table
type (i.e., schema table, temp-table, work-table, buffer) for table
references and data type (e.g., integer, character, logical, date,
etc.) for field references.
This class has a main method so that it can be run as a standalone
program for test and debug purposes. In this mode, the class will load
name entries for default databases, as defined in the p2j.cfg.xml
configuration file. The command line interface will then prompt the
user for a lookup type: 'D' for database, 'T' for table, 'F' for field,
then for a search name. The search name can be structured as follows:
For search type 'D':
<database_name>
For search type 'T':
<database_name>.<table_name>
<table_name>
For search type 'F':
<database_name>.<table_name>.<field_name>
<table_name>.<field_name>
<field_name>
The <database_name>
component
may never be
abbreviated. The <table_name>
and <field_name>
components may be abbreviated. The program will report the existence of
databases, tables, and fields. In addition, it will report the fully
qualified, unabbreviated names of tables and fields. For fields, it
will also report the data type (the table type for table name lookups
is not reported, since it is always a schema table).
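The abbreviation handling can be pictured as a unique prefix match (a hypothetical sketch only; the real SchemaDictionary implementation and its ambiguity rules may differ):

```java
import java.util.Arrays;
import java.util.List;

// Illustrative resolver: expands an abbreviated table or field name to its
// fully qualified, unabbreviated form when exactly one candidate matches.
public class AbbrevResolver
{
   // Returns the unique name starting with 'abbrev', or null if the
   // abbreviation is unknown or ambiguous.
   public static String resolve(List<String> names, String abbrev)
   {
      String found = null;
      for (String name : names)
      {
         if (name.startsWith(abbrev))
         {
            if (found != null)
            {
               return null;   // ambiguous abbreviation
            }
            found = name;
         }
      }
      return found;
   }

   public static void main(String[] args)
   {
      List<String> fields = Arrays.asList("cust-num", "cust-name", "balance");
      System.out.println(resolve(fields, "bal"));    // balance
      System.out.println(resolve(fields, "cust"));   // null (ambiguous)
   }
}
```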
Additional options include 'A' to create an alias for a logical
database name, 'U' to dump the schema dictionary's namespace
information to a file in the local directory called 'schema.dmp', or
'Q' to quit.
The program requires one or more namespace
entries in the
p2j.cfg.xml
configuration file (found in the cfg
subdirectory of the path defined by the P2J_HOME
system property). Each entry must name:
- the database name associated with this namespace;
- the XML filename where the namespace dictionary information
for
this database resides;
- optionally, whether this database should be loaded into the
SchemaDictionary by default at dictionary construction time.
An example follows:
<?xml version="1.0"?>
<!-- P2J main configuration -->
<cfg>
<schema metadata="standard"
>
<namespace
name="standard"
importFile="data/standard_91c.df"
xmlFile="data/namespace/standard_91c.dict" />
</schema>
</cfg>
Note that the export and output file paths in this example are relative
to the P2J_HOME path, so the additional data
and data/namespace
subdirectories must be created (and the Schema
Loader program run as a prerequisite), in order for
this
program to run properly. Additionally, the following file, named
schema-dict.rul.xml, must reside in the cfg subdirectory:
<?xml version="1.0"?>
<!-- Digester rules for schema dictionary configuration
-->
<digester-rules>
<pattern value="cfg">
<pattern value="schema">
<object-create-rule
classname="com.goldencode.p2j.schema.SchemaConfig" />
<set-properties-rule />
<pattern
value="namespace">
<object-create-rule
classname="com.goldencode.p2j.schema.SchemaConfig$NsConfig" />
<set-properties-rule />
<set-next-rule
methodname="addNamespace" />
</pattern>
</pattern>
</pattern>
</digester-rules>
To run the program:
java -DP2J_HOME=<home_directory>
com.goldencode.p2j.schema.SchemaDictionary
Compiled Expression Engine Test Driver
TODO - Rewrite or remove; this section is obsolete
The expr
package has a TestDriver
class
which provides an example of how a program uses the compiled expression
engine. The program can be run from the command line or
programmatically for regression testing. It allows expression-based
queries to be made against a simple "database" of employee data, for
the
purpose of testing the expression compiler.
A properties file is used to populate the program with data, such that
expressions provided by the user can reference this data using
variables. A sample properties file for this purpose is provided with
the testcases: testcases/expr/resolver.properties.
The syntax for the program is as follows:
java com.goldencode.expr.TestDriver <properties> <type> <expression> [<first> [<last>]]
where:
<properties> is the name of the properties file
<type> is 'A' for an arithmetic expression, 'L' for a logical expression
<expression> is an expression in double quotes
<first> is the optional 1-based index of the first record to access
<last> is the optional 1-based index of the last record to access
For example, change directory to .../testcases/expr, then run:
java com.goldencode.expr.TestDriver resolver.properties L "name = 'Larry' or city = 'St. Louis'" 1 3
This will run the specified expression against all records in the
database (the first and last parameters here are actually unnecessary
but are provided for demonstration purposes, since there are only three
records).
Assuming you use the sample resolver.properties file, the following
variables are available for use in your expressions:
- name (string) - an employee name
- dob (date) - employee date of birth; must be in one of the date
formats listed below, within single quotes
- overtime (double) - overtime hours for the current period
- city (string) - employee's city
- union (boolean) - employee's union affiliation status
- begin (date) - time of day employee begins work shift
- end (date) - time of day employee ends work shift
- @now() (date) - a macro which expands to the current system
time
Strings must be enclosed in single quotes. Time/date values must be in
one of the following formats:
yyyy.MM.dd (date)
MM/dd/yyyy (date)
MMM dd, yyyy (date)
HH:mm:ss (time)
yyyy/MM/dd HH:mm:ss (timestamp)
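These format strings look like java.text.SimpleDateFormat patterns; assuming they are, a literal can be checked against them as follows (an illustrative helper, not part of the TestDriver):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;

// Checks whether a time/date literal matches one of the accepted formats.
public class DateFormats
{
   static final String[] PATTERNS =
   {
      "yyyy.MM.dd", "MM/dd/yyyy", "MMM dd, yyyy",
      "HH:mm:ss", "yyyy/MM/dd HH:mm:ss"
   };

   public static boolean isValid(String text)
   {
      for (String pattern : PATTERNS)
      {
         SimpleDateFormat fmt = new SimpleDateFormat(pattern);
         fmt.setLenient(false);   // reject out-of-range field values
         try
         {
            fmt.parse(text);
            return true;
         }
         catch (ParseException ignored)
         {
            // try the next pattern
         }
      }
      return false;
   }

   public static void main(String[] args)
   {
      System.out.println(isValid("2004.12.31"));   // true
      System.out.println(isValid("08:30:00"));     // true
      System.out.println(isValid("31-12-2004"));   // false
   }
}
```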
The sample properties file contains three employee records. The program
loops through the range of records specified by the <first>
and <last>
command line parameters (or all records if these are not supplied),
applying the expression to each record, then prints the results. For
more details and to see some example expressions, please refer to the
Test Driver Example section in the expr
package summary
document (go to the P2J
javadoc root,
then follow the link to the com.goldencode.expr
package).
Directory Package Test Applications
The table below lists available test applications, which are in the
directory package.
Application
|
Role
|
Description
|
RemapTestDriver1
|
utility
|
Creates a P2J directory with some sample contents and performs various
regression tests.
|
The only utility here is named RemapTestDriver1. It instantiates the
directory front-end, backed by the XML back-end, and adds various
objects with their attributes. The directory XML file can be either
created or modified.
Before running this test, switch to testcases/directory.
Use this command line to run it:
java
com.goldencode.p2j.directory.RemapTestDriver1
appTestServer.xml 6yhn
A test directory XML file will be created. This copy of the
directory is not
intended for use with the runtime directory/net/security testcases!
LDAP Directory Back-end Implementation Notes
The LDAP back-end implements the full set of directory operations.
Test environment
LDAP server software: OpenLDAP version 2.2.6. For testing purposes, the
schema definition and sample security database provided in the
DirectoryService design document were used.
4. Change directory to testcases/test and run the following command:
java
com.goldencode.p2j.test.StartupDirectory appTestServerLdap.xml 6yhn
This command creates a new minimal directory structure suitable for use
by the server test application.
Note: if for some reason LDAP contains entries left from older tests or
created by other applications (for example, RemapTestDriver1),
StartupDirectory may fail. In this case the stale entries must be
deleted using an external LDAP utility, for example the GD LDAP Client.
5. Run simple test application:
Start test server:
java
com.goldencode.p2j.test.TestServer appTestServerLdapTLS.xml 6yhn
Run test application:
java
com.goldencode.p2j.main.SingleLogonDriver appDirTest.xml secnvs.xml -p
1qaz -p 3edc
The result of the last step will be a dump of the directory tree.
When prompted for the user ID/password, type "nvs1", enter, enter (i.e.
an empty password).
The resulting listing should look similar to the one shown below:
...
3 Getting AUTHMODEREQ
3 Got AUTHMODEREQ: 3
Enter user ID:nvs1
Enter user password:
3 Sending AUTHDATA
3 Sent AUTHDATA
3 Getting AUTHRESULT
3 Got AUTHRESULT: 256
Queue connected
successfully
security
{class =
</meta/class/container>}
holidays {class =
</meta/class/dates>}
#attribute values = {2004-12-31, 2005-01-17, 2005-02-21, 2005-05-30,
2005-07-04,
2005-09-05,
2005-10-10, 2005-11-11, 2005-11-24, 2005-12-26}
acl {class =
</meta/class/container>}
directory {class = </meta/class/container>}
10 {class = </meta/class/directoryRights>}
#attribute permissions = {00111111}
200 {class = </meta/class/binding>}
#attribute subjects = {ges, ecf, nvs1}
#attribute reftype = {true}
#attribute reference = {/p2j}
100 {class = </meta/class/binding>}
#attribute subjects = {server}
#attribute reftype = {true, true, true, true}
#attribute reference = {/security, /meta, /meta/class/user/password, /meta/class/process/secret}
30 {class = </meta/class/directoryRights>}
#attribute permissions = {01000000}
100 {class = </meta/class/binding>}
...
Note: all Java applications mentioned above assume that the Java
environment (including the classpath for the P2J classes) is set up
properly.
Runtime Directory/Networking/Security Packages Test Applications
The set of networking/security test applications is extended based on:
- unified command line drivers for both the client and server
sides
of the P2J application
- encrypted XML bootstrap configuration files
- application classes supplied as elements of the
configuration
files
- shared trust stores
- private key stores
The testcases/test subdirectory comes with all of the above-mentioned
files. Please make this subdirectory your current directory before
running any testcase discussed below. Updating the cacerts file is no
longer necessary.
The following table lists available test applications in this category.
Application
|
Role
|
Description
|
StartupDirectory
|
utility
|
Creates a usable
P2J testcase
directory for use with all test applications listed below |
DirPrintTest
|
utility
|
Prints the
specified P2J
directory in a readable form |
TestServer |
server
|
Initializes the
server side
of the protocol, then accepts and processes
all kinds of incoming connections. |
ClientTest |
client
|
Contacts a number of different entry points with different numbers and
types of parameters, obtaining different return values and different
access rights. Includes checks of such things as exception delivery
from the remote side and asynchronous calls.
|
AddressTest |
client
|
Tests handling of address resolution requests. Mostly it checks the
simple AddressServer implementation provided by the NET package. Tests
include various valid and invalid requests, and it is expected that
future implementations of the AddressServer will behave identically to
this one.
|
EchoTest
|
client
|
A kind of "ping" application. It checks a request processing path which
is not touched by any other application. Also, echo processing is
implemented by the protocol itself, without calling external entry
points (just like the ICMP protocol in TCP/IP), so this application can
be used for diagnostic purposes (link checks, server hang detection,
etc.). |
ShutdownTest
|
client
|
Performs remote
server
shutdown operation. Whether the server honors the request depends on
the Access Control Lists, because shutting down the server is a
protected operation. Check the access rights to the shutdown
instance
of the system resource, and to the system:shutdown
instance of the net resource assigned to the
subject
trying to perform this
operation.
|
DirTest
|
client
|
Performs remote
P2J directory
printing.
|
Despite the simplicity of these test applications, these steps cover
about 80% of the package's functionality.
To run any test application, one needs to know the appropriate XML
configuration files and their passwords. The server takes one XML file
with its password. The clients take two. The table below explains the
contents of the XML files.
Filename
|
Password
|
Contents
|
appTestServer.xml
|
6yhn
|
Entire bootstrap
configuration
file for the server invoking TestServer
|
appClientTest.xml
|
1qaz
|
Primary, shared
bootstrap
configuration file for all users invoking ClientTest |
appAddressTest.xml
|
1qaz |
Primary, shared
bootstrap
configuration file for all users invoking AddressTest |
appEchoTest.xml
|
1qaz |
Primary, shared
bootstrap
configuration file for all users invoking EchoTest |
appShutdownTest.xml
|
1qaz |
Primary,
shared bootstrap
configuration file for all users invoking ShutdownTest |
appDirTest.xml
|
1qaz
|
Primary,
shared bootstrap
configuration file for all users invoking DirTest |
appUniTest.xml
|
1qaz
|
Primary, shared
bootstrap
configuration file for all users invoking any one of
the
tests. The test is specified using -k command
line option.
|
secapp.xml
|
2wsx
|
Secondary, private
bootstrap
configuration file for application logon
|
secges.xml
|
5tgb
|
Secondary, private
bootstrap
configuration file for user ges logon |
sececf.xml
|
4rfv
|
Secondary, private
bootstrap
configuration file for user ecf logon |
secnvs.xml
|
3edc
|
Secondary, private
bootstrap
configuration file for user nvs logon |
Before running the server application for the first time, or after any
changes to the name or location of the P2J directory file, one has to
recreate the P2J directory. The command line for this task is:
rm
directory.xml
java
com.goldencode.p2j.test.StartupDirectory
appTestServer.xml 6yhn
This command creates the XML-based P2J directory which is stored in the
file named directory.xml created in the current file system directory.
Now, a testcase directory is ready. One can review the contents of the
directory by using this utility:
java
com.goldencode.p2j.test.DirPrintTest
appTestServer.xml 6yhn
It is advised to print the directory before running the server. Review
the output with the security package description in hand for good
understanding of the structure of the directory and what exactly is
defined in it, because the contents of the directory completely control
how the rest of these testcases run.
One important setting in the directory is how the security package
maintains the audit log of security-relevant events. As shipped, the
directory instructs the security package to maintain a rotating log of
3 files in the home directory of the currently logged on user, named
"audit0-0.log", "audit0-1.log", and "audit0-2.log". These log files
rotate automatically as soon as the file size reaches approximately
8 MB. The location, naming pattern, number of log files and maximum
size of each are all defined in the P2J directory.
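As a rough sketch of the mechanism, the same rotation behavior is available from the standard java.util.logging.FileHandler (this is an illustration only; the actual P2J audit logging implementation and its settings come from the P2J directory, not from code like this):

```java
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

// Demonstrates a rotating log: 'count' files of up to 'limitBytes' each,
// analogous to the audit log behavior described above.
public class RotatingAudit
{
   public static FileHandler createHandler(String pattern, int limitBytes,
                                           int count) throws IOException
   {
      // append=true so existing log content is preserved across restarts
      return new FileHandler(pattern, limitBytes, count, true);
   }

   public static void main(String[] args) throws IOException
   {
      // The shipped P2J configuration logs to the user's home directory
      // ("%h" in FileHandler patterns); "%t" (the temp directory) is used
      // here to avoid cluttering the home directory. "%g" is the
      // generation number, yielding audit0-0.log, audit0-1.log, audit0-2.log.
      FileHandler handler =
         new FileHandler("%t/audit0-%g.log", 8 * 1024 * 1024, 3, true);
      Logger log = Logger.getLogger("audit");
      log.addHandler(handler);
      log.log(Level.INFO, "security relevant event");
      handler.close();
   }
}
```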
The server application command line syntax is:
java
com.goldencode.p2j.main.ServerDriver
appTestServer.xml 6yhn [-k application-key] [-d debugLevel] [-a
arguments]
Option -k may be used to select a different application, if the
configuration file comes with more than one. In this case, the only
application defined in the configuration file is "server", which is the
default and -k is not used.
Please look at server-.xml, which is a plain text copy of the encrypted
server.xml file, to get an idea of the server configuration.
The debugging level can be optionally set as the last argument -d debugLevel.
This is a
numeric
value which generally is interpreted as:
- -1 this level is assigned to error messages and cannot be
disabled;
- 0 warnings;
- 1 statistics;
- 2 detailed lists;
- 3 even more detailed traces;
- 4 detailed traces that may include sensitive security data
like
passwords;
- higher levels may be defined later.
As the debug feature may expose security data, it is not freely
available. The server's process account defined in the directory
determines the ability of the server to produce debug output. The fact
that the user specifies a debug level in the command line does not mean
the access to the feature will be granted unconditionally. Check the
directory contents for the "debug" instance of the "system" resource
under the "/security/acl" path to see who is allowed to turn on the
debug output and at which level. As shipped, the server, which runs
under the "server" account, is allowed to turn on debugging up to
level 9.
The optional -a arguments option specifies application-specific
arguments. The TestServer application does not accept any.
The client application command line syntax is (using ClientTest as an
example):
java
com.goldencode.p2j.main.SingleLogonDriver
appClientTest.xml 1qaz some-secondary.xml password [-k
application-key] [-a arguments]
Use one of the secondary XML files with its password instead of
some-secondary.xml. Again, files ending with '-.xml' are plain text
copies of those encrypted XML files, available for review and
modification, if necessary.
Option -k may be used to select a different application, if the
configuration file comes with more than one. In this case, the only
application defined in the configuration file is client, which is the
default, so -k is not used. However, another file exists,
appUniTest.xml, where all tests are combined. With that file, the -k
option can be specified as:
- client to run ClientTest (can be omitted as this is the default);
- address to run AddressTest;
- echo to run EchoTest;
- shutdown to run ShutdownTest;
- dir to run DirTest.
The optional -a arguments option specifies application-specific
arguments. DirTest is the only application that may use arguments. One
can specify:
- -m to include the meta information in the printout;
- starting-node to print only the specified branch of the directory.
The former EditTest testcase has been promoted to the utility category.
The test output includes the exchanged TLS certificates. When using the
sececf.xml file, the client does not send its certificate, because the
certification path for the certificate is not entirely trusted. This is
intentional. The certificate was created as such for testing this
particular case.
It is important to keep the *.store files in the same directory as
*.xml because those are key stores and trust stores that are pointed to
by name from *.xml configuration files. Moving them would require
modifications to the bootstrap configuration files.
For more details, please refer to the BootstrapConfig class (start with
the package summary for com.goldencode.p2j.cfg).
Production Level Client and Server Test
New production level client and server drivers and standard
applications differ from the test ones significantly. New features make
them mostly incompatible with the existing testcases.
First of all, the new standard server application relies heavily on the
contents of the P2J directory. To test all new functionality, a
different test directory is provided: min_directory.xml
These are new features shared by client and server drivers:
- configuration files may be either encrypted or in plain
text; if
no password is given with the command line, the file is assumed in
plain text;
- configuration file name is no longer a required parameter;
if no
file name is given with the command line, a default file name is
assumed (
standard_client.xml
or standard_server.xml
,
depending on the driver type);
- the configuration file does not even have to exist; all required
configuration items can be specified with the command line;
- if the configuration file (either specified or assumed)
does
exist and there are configuration items with the command line, the
latter add to or override the items from the file.
The following is the contents of standard_client.xml:
<?xml version="1.0"?>
<!-- Client side bootstrap configuration for the standard client
application
-->
<node type="client">
<net>
<server
host="localhost"/>
<server
port="3333"/>
<dispatcher
threads="2"/>
</net>
<security>
<truststore
filename="shtrust.store"/>
<truststore
alias="p2jserver"/>
</security>
<applications>
<ui
chui="true" />
<class
client="com.goldencode.p2j.main.StandardClient$Entry"/>
</applications>
<access>
<password
truststore="poiuyt"/>
</access>
</node>
A new configuration item <ui chui="true" />
specifies
that a character oriented user interface (CHUI) is required.
The following is the contents of standard_server.xml:
<?xml version="1.0"?>
<!-- Server side bootstrap configuration
-->
<node type="server">
<net>
<server
port="3333"/>
<server
nodeaddress="65536"/>
<dispatcher
threads="2"/>
<router
threads="2"/>
</net>
<security>
<server
id="standard"/>
<keystore
filename="stdsrvkey.store"/>
<keystore
alias="#security.server.id"/>
</security>
<directory>
<schema
source="schema.xml"/>
<backend
type="xml"/>
<xml
filename="min_directory.xml"/>
</directory>
<applications>
<class
server="com.goldencode.p2j.main.StandardServer"/>
</applications>
<access>
<password
keystore="zxcvbn"/>
<password
keyentry="876543"/>
</access>
</node>
This file refers to a new server keystore file, stdsrvkey.store; the
corresponding alias is standard.
The standard server application implements these new features:
- directory based server initialization; a list of classes to
load
and methods to call at startup;
- directory based application exports; these are application
methods that the NET package makes visible over the network;
- directory based users main entry points.
A new test directory creation utility, MinStartupDirectory.java, is
provided. The produced directory defines:
- user ID abc for tests;
- this user's password as password;
- a test main entry point, StandardTransaction.java, assigned to the
user abc;
- exports for use with the updated DirectoryEdit utility.
How to run the server:
java
com.goldencode.p2j.main.ServerDriver
How to run the test transaction, using all configuration items from the
command line:
java
com.goldencode.p2j.main.ClientDriver net:server:host=localhost
port=3333 dispatcher:threads=2
security:truststore:filename=shtrust.store alias=p2jserver
keystore:filename=shrkey.store useralias=shared
applications:class:client=com.goldencode.p2j.test.StandardTest\$Entry
ui:chui=true access:password:truststore=poiuyt keystore=xcvbnm
keyentry=345678
How to run DirectoryEdit using all configuration items from the command
line:
java
com.goldencode.p2j.main.ClientDriver net:server:host=localhost
port=3333 dispatcher:threads=2
security:truststore:filename=shtrust.store alias=p2jserver
keystore:filename=shrkey.store useralias=shared
applications:class:client=com.goldencode.p2j.test.DirectoryEdit\$Entry
ui:chui=true access:password:truststore=poiuyt keystore=xcvbnm
keyentry=345678
Only the class name differs here from the previous case.
Understanding P2J Threading
The P2J runtime environment is heavily threaded. These
threads
are organized in related sets called thread groups. This
analysis
ignores any threads that are created as part of the virtual machine but
which are not application threads (e.g. any thread in the "system"
thread group).
Thread Name | Thread Group | Client | Server

main | main
Client: This is the thread on which the system initializes. Once a session with the server is established, this thread becomes the "conversation" thread (a dedicated message processing thread) if the network protocol is in conversation mode.
Server: This is the thread on which the system initializes. Once initialization is complete, this thread listens on the open server socket for new connections. The thread stays in a loop to accept connections until the server needs to exit.

reader | protocol
Client: Reads new messages out of the network session's socket and enqueues them for local processing.
Server: Reads new messages out of the network session's socket and enqueues them for routing or local processing.

writer | protocol
Client: Dequeues outbound messages and writes them to the network session's socket.
Server: Dequeues outbound messages and writes them to the network session's socket.

Thread X | conversation
Client: n/a
Server: A dynamically created thread that is dedicated to message processing for a specific session that is in conversation mode. The thread is created at session establishment and ends when the session terminates.

Thread Y | dispatcher
Client: A thread that processes inbound messages when the session is not in conversation mode. This thread also processes all asynchronous messages for the session, even when in conversation mode.
Server: A thread that processes inbound messages when the session is not in conversation mode. This thread also processes all asynchronous messages for the session, even when in conversation mode.

Thread Z | router
Client: n/a
Server: A thread that processes inbound messages which are not destined for the local system. These threads also handle all address space management for the P2J network.

KeyReader | main
Client: This is a dedicated thread that fills the type-ahead buffer with any user generated keystrokes. It is in an infinite loop handling this function. Other client threads read the keys out of the type-ahead buffer as needed. Note that this must be a separate thread to duplicate the asynchronous nature of the CTRL-C key combination in a Progress environment. CTRL-C interrupts most processing whether the current logic is executing on the client or the server.
Server: n/a
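The reader/dispatcher hand-off described above follows the classic producer-consumer pattern: one thread drains the socket and enqueues messages, another thread dequeues and processes them. The sketch below is NOT the actual P2J implementation; it is a minimal illustration of that pattern using a standard BlockingQueue, with a hypothetical poison-pill value standing in for session termination.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MessagePump
{
   /** Poison pill marking end of the inbound stream (illustrative only). */
   private static final String STOP = "\u0000stop";

   /**
    * Runs a "reader" thread that enqueues each inbound message (standing in
    * for socket reads) and a "dispatcher" thread that dequeues and processes
    * them.  Returns the number of messages the dispatcher handled.
    */
   static int pump(String[] inbound) throws InterruptedException
   {
      BlockingQueue<String> queue = new LinkedBlockingQueue<>();
      final int[] handled = { 0 };

      Thread reader = new Thread(() -> {
         for (String msg : inbound)
         {
            queue.add(msg);      // stands in for reading from the socket
         }
         queue.add(STOP);        // signal end of session
      }, "reader");

      Thread dispatcher = new Thread(() -> {
         try
         {
            while (true)
            {
               String msg = queue.take();
               if (msg.equals(STOP))
               {
                  return;
               }
               handled[0]++;     // stands in for message processing
            }
         }
         catch (InterruptedException e)
         {
            Thread.currentThread().interrupt();
         }
      }, "dispatcher");

      reader.start();
      dispatcher.start();
      reader.join();
      dispatcher.join();
      return handled[0];
   }

   public static void main(String[] args) throws InterruptedException
   {
      System.out.println(pump(new String[] { "m1", "m2", "m3" }));
   }
}
```

The key design point is that the reader never blocks on processing and the dispatcher never blocks on the socket; the queue decouples the two, which is what allows a separate dispatcher thread to also service asynchronous messages.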
Documentation
A great place to start:
High Level
Design Document
If that is not enough, the next place to look is at the details: the detailed design documentation is integrated with the javadoc. The main design document for each package is the package-summary.html file, which is found by opening the p2j javadoc, selecting a package from the overview page (the default page on the right side), and scrolling down to see the full contents.
Note that the preproc, schema and uast packages are well documented at this time. All others have javadoc but not necessarily a detailed design document. If you do not read the package summary for each of these packages, you are missing out on the details!
With the M3 (February 1, 2005) delivery, there are new documents for the User Interface specifications (see design/specs/*/spec.html), the Directory Service, the Networking and Transaction Processing package, and Security.
The user interface (UI) runtime (both client and server side) specifications are available in design/specs.
Legal Notices
Trademarks
- Golden Code and the GC
logo are registered trademarks of Golden Code Development Corporation.
- Java and J2SE are trademarks or registered trademarks of Sun Microsystems, Inc. Java Compatible and the Java Compatible coffee cup logo are trademarks of Sun Microsystems, Inc. used under license agreement with Sun Microsystems, Inc.
- Progress is a registered trademark of
Progress Software Corporation.
Appendix A.
OpenLDAP Step By Step
Configuration Guide
Prerequisites
The explanation below makes the following assumptions:
- OpenLDAP software is installed using standard tools (e.g. SuSE YaST2).
- OpenLDAP configuration files are located in /etc/openldap.
- Data files, certificates, sample configurations, etc. are located in the ~/p2j/testcases/test/ldap directory.
- The Certification Authority (CA) certificate, server certificate and server key file are prepared and named respectively: gcd-root-ca-cert.pem, p2jserver-cert.pem, p2jserver-key.pem.
- The current user is logged in as a regular user.
- Samples below assume the current user is 'user' at host 'host'.
Warning: OpenLDAP does not allow storing certificates and key files (necessary to establish TLS connections) inside the LDAP directory. They must be stored in files accessible to the OpenLDAP server at startup. In order to allow unattended startup/shutdown of the OpenLDAP server, the key file must NOT be password protected (encrypted). The protection of these sensitive files must be provided at the system level, by assigning appropriate permissions.
Preliminary Steps
- The sample slapd.conf must be updated to match the desired configuration. The entries below should contain the desired domain name; the sample uses goldencode.com:
...
suffix "dc=goldencode,dc=com"
rootdn "cn=Manager,dc=goldencode,dc=com"
...
Also, in some cases the following entry must be uncommented and changed to match the real certificate:
...
# Uncomment and change to match actual certificate CN:
#sasl-regexp
# cn=p2j\ server,ou=it\ operations,o=golden\ code\ development\ corporation,l=atlanta,st=georgia,c=us
# cn=Manager,dc=goldencode,dc=com
...
The purpose of this expression is to let OpenLDAP assign the rights of one CN to another CN. In particular, this might be necessary to provide read/write access to the P2J server when a certificate with a CN that does not match a DN in the directory is used. Without it, the connected client will have only guest-level rights. Refer to the OpenLDAP documentation for more details about writing sasl-regexp expressions, rights assignment, and other administrative tasks.
Note 1: slapd converts the CN of the certificate to lower case before performing the match.
Note 2: In some cases it might be helpful to comment out the statement, restart slapd in debug mode (by specifying -d -1 on the command line), try to connect to the LDAP server using the P2J server (or another utility such as StartupDirectory or DirectoryCopy), and watch for the certificate CN in the debug output. Then copy the CN found in the debug output into slapd.conf as-is, inserting a backslash before each space (see the example above).
- The initialization file init.ldif must be updated to match the actual domain name, organization name, etc.
- The script file initDir must be updated to use the correct administrative DN (most likely the one listed in slapd.conf as rootdn).
- The LdapMapGen configuration file mapping.xml and the server configuration files (required for the use of StartupDirectory, DirectoryCopy, and so on) must be updated to include the proper LDAP parameters: URL, credentials, location of the mapping data (if necessary), etc.
Main Steps
1. Login as root:
user@host:~> su - root
Password: *****
host:~ #
2. Change directory to /p2j/testcases/test/ldap:
host:~ # cd /p2j/testcases/test/ldap
host:/p2j/testcases/test/ldap #
3. Create a subdirectory:
host:/p2j/testcases/test/ldap # mkdir /etc/openldap/pem
host:/p2j/testcases/test/ldap #
4. Copy the files:
host:/p2j/testcases/test/ldap # cp *.pem /etc/openldap/pem
host:/p2j/testcases/test/ldap # cp gcd.schema /etc/openldap/schema
host:/p2j/testcases/test/ldap # cp slapd.conf /etc/openldap
5. Start the LDAP server:
host:/p2j/testcases/test/ldap # slapd -h "ldap:/// ldaps:///"
At this point OpenLDAP may request a password for the PEM file if the server key file is password protected:
Enter PEM pass phrase:
host:/p2j/testcases/test/ldap #
6. Load the initial LDAP configuration:
host:/p2j/testcases/test/ldap # ./initDir
Enter LDAP Password:
...
host:/p2j/testcases/test/ldap # exit
user@host:~>
At this point OpenLDAP is completely configured and ready to load mapping data.
7. Load the mapping data:
user@host:~> cd /p2j/testcases/test/ldap
user@host:/p2j/testcases/test/ldap> java com.goldencode.p2j.directory.LdapMapGen mapping.xml -u
Writing schema mapping to cn=mapping
Total 71 mappings written
Done.
user@host:/p2j/testcases/test/ldap>
At this point the directory is ready to load initial data with the StartupDirectory or DirectoryCopy utilities.
Appendix
B.
P2J Directory Configuration Options List
Introduction
There are only 2 sources of configuration data for any given P2J server
or client. The first source is called the bootstrap
configuration. The second source is the P2J Directory.
Bootstrap Config
This is 1 or 2 XML files (optionally encrypted) which hold the minimum configuration values needed to name the server/client, initialize the network, and either connect to the server (if this node is a client) or connect to the directory server (if this node is a server).
All other configuration data is stored in the P2J Directory.
TBD: add documentation here describing the format/structure/options
available with the bootstrap config.
See:
com.goldencode.p2j.main.ServerDriver
com.goldencode.p2j.main.ClientDriver
com.goldencode.p2j.net.ServerBootstrap
com.goldencode.p2j.net.ClientBootstrap
com.goldencode.p2j.cfg.BootstrapConfig
P2J Directory
TBD: add documentation here describing the format/structure/options
available for the following:
com.goldencode.p2j.test.MinStartupDirectory.java
com.goldencode.p2j.main.StandardServer
com.goldencode.p2j.security.*
Progress-compatible Runtime Options
The Progress-compatible runtime allows its behavior to be customized via configuration values in the directory. These values can be placed in multiple locations in the directory, and the search algorithm obeys a well-defined precedence order to determine which value is found first:
- /server/<serverID>/runtime/<user_or_process_account>/<id>
- /server/<serverID>/runtime/<group>/<id>
- /server/<serverID>/runtime/default/<id>
- /server/default/runtime/<user_or_process_account>/<id>
- /server/default/runtime/<group>/<id>
- /server/default/runtime/default/<id>
This means that any value can be set for all users on all servers, for
all users on a specific server, for specific groups or specific users
or any combination of the above.
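The precedence walk described above can be sketched as a simple "first match wins" loop over the candidate paths. This is NOT the actual P2J directory API; it is a minimal illustration using a plain Map in place of the directory, with hypothetical method and parameter names.

```java
import java.util.HashMap;
import java.util.Map;

public class RuntimeOptionLookup
{
   /**
    * Returns the first value found while walking the documented precedence
    * order, from most specific (this server, this account) to least specific
    * (all servers, all accounts).  Returns null when no node defines the
    * option, in which case the caller falls back to a built-in default.
    */
   static String lookup(Map<String, String> directory, String serverId,
                        String account, String group, String id)
   {
      String[] paths =
      {
         "/server/" + serverId + "/runtime/" + account + "/" + id,
         "/server/" + serverId + "/runtime/" + group   + "/" + id,
         "/server/" + serverId + "/runtime/default/"         + id,
         "/server/default/runtime/" + account + "/" + id,
         "/server/default/runtime/" + group   + "/" + id,
         "/server/default/runtime/default/"         + id
      };

      for (String path : paths)
      {
         String value = directory.get(path);
         if (value != null)
         {
            return value;   // most specific match wins
         }
      }

      return null;
   }

   public static void main(String[] args)
   {
      Map<String, String> dir = new HashMap<>();
      dir.put("/server/default/runtime/default/windowingYear", "1950");
      dir.put("/server/s1/runtime/bob/windowingYear", "1970");

      // user-specific value on server s1 overrides the global default
      System.out.println(lookup(dir, "s1", "bob", "devs", "windowingYear"));
      // any other user/server falls through to the global default
      System.out.println(lookup(dir, "s2", "ann", "ops", "windowingYear"));
   }
}
```

The design consequence is that an administrator can set a global default once under /server/default/runtime/default and override it only where needed, at the server, group, or account level.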
The following options can be specified:
Runtime User | Option ID | Directory Node Type | Default Value | Notes
com.goldencode.p2j.util.date | windowingYear | integer | 1950 |
com.goldencode.p2j.util.NumberType | numberGroupSep | string | queried JVM value or ',' if no value can be queried from the JVM |
com.goldencode.p2j.util.NumberType | numberDecimalSep | string | queried JVM value or '.' if no value can be queried from the JVM |
com.goldencode.p2j.util.EnvironmentOps | currentLanguage | string | '?' |
com.goldencode.p2j.util.EnvironmentOps | dataServers | string | 'PROGRESS,ORACLE,AS400,ODBC,MSS' |
com.goldencode.p2j.util.EnvironmentOps | opsys/override | string | none | This provides a manual override to hard code the OPSYS value instead of allowing the runtime to determine it using the "os.name" system property.
com.goldencode.p2j.util.EnvironmentOps | opsys/mapping/<os.name_system_property> | string | none | This allows a specific "os.name" value (e.g. "Linux") to be mapped to the text provided.
com.goldencode.p2j.util.EnvironmentOps | searchpath | string | ".:" | This is the active search path used in the SEARCH() method and available when one uses the PROPATH function.
com.goldencode.p2j.util.EnvironmentOps | path-separator | string | ":" |
com.goldencode.p2j.util.EnvironmentOps | file-system/propath | string | ".:" | This value is the legacy system's PROPATH which is used to resolve RUN VALUE() expressions. Please see EnvironmentOps.convertName() and EnvironmentOps.convertMethod().
com.goldencode.p2j.util.EnvironmentOps | file-system/path-separator | string | ":" | The original source system's filesystem value (not the same as the current P2J system's value).
com.goldencode.p2j.util.EnvironmentOps | file-system/file-separator | string | "/" | The original source system's filesystem value (not the same as the current P2J system's value).
com.goldencode.p2j.util.EnvironmentOps | file-system/case-sensitive | boolean | true | The original source system's filesystem value (not the same as the current P2J system's value).
com.goldencode.p2j.util.EnvironmentOps | version | string | "9.1C" |
com.goldencode.p2j.util.SourceNameMapper | file-system/pkgroot | string | "" |
com.goldencode.p2j.util.SourceNameMapper | class-mappings/fileNNN | container | | NNN is a sequential index to make these nodes unique.
com.goldencode.p2j.util.SourceNameMapper | class-mappings/fileNNN/relativename | string | | This is the relative Progress source file name.
com.goldencode.p2j.util.SourceNameMapper | class-mappings/fileNNN/classname | string | | This is the associated relative Java class file name.
com.goldencode.p2j.util.SourceNameMapper | class-mappings/fileNNN/method-name-map/funcYYY | container | | Where YYY is a sequential index to make these nodes unique; this entire sub-tree is optional, but if it exists both progress and java child attributes must exist.
com.goldencode.p2j.util.SourceNameMapper | class-mappings/fileNNN/method-name-map/funcYYY/progress | string | | This is the Progress internal procedure or function name.
com.goldencode.p2j.util.SourceNameMapper | class-mappings/fileNNN/method-name-map/funcYYY/java | string | | This is the associated Java method name.
Appendix
C. Setting
Up a Certificate Authority Using OpenSSL
Secured network connections based on SSL/TLS protocols require that the servers and, optionally, the clients obtain their private keys and related public certificates. There are two ways to do this: buy the keys and certificates from a well-known authority, like VeriSign, or use free software to issue the keys and certificates yourself. This section discusses the second way.
OpenSSL is free software with many applications, but we are interested in its ability to create the files needed for SSL/TLS: the private keys and related public certificates.
OpenSSL comes with many distributions of Linux. If it is not distributed with a particular Linux distribution, it can be downloaded directly from the OpenSSL project web site: http://www.openssl.org/source/openssl-0.9.8a.tar.gz
Please refer to the installation instructions.
The rest of this section discusses how to set up your own private
key/public certificate factory, commonly known as Certificate
Authority, or CA.
- Locate the file CA.pl. This file is part of OpenSSL; a typical location is /usr/local/ssl/misc/CA.pl. Edit the file and change these two keywords: $SSLEAY_CONFIG and $CATOP. The first keyword specifies the configuration file to use. Set it as follows:
$SSLEAY_CONFIG="-config /etc/openssl.cnf";
The second keyword specifies a writeable location in the filesystem where the CA infrastructure will reside. Set it as follows:
$CATOP="/var/ssl";
Copy the resulting file to /usr/local/bin/CA.pl and make it executable:
chmod +x /usr/local/bin/CA.pl
- Create the configuration file /etc/openssl.cnf. Take the following configuration as a base and modify it accordingly. Replace every occurrence of Golden Code Development and gcd with a different name and abbreviation. You may also change some defaults, like default_days, which sets the certificate validity period.
#
# OpenSSL example configuration file.
# This is mostly being used for generation of certificate requests.
#

# This definition stops the following lines choking if HOME isn't
# defined.
HOME            = .
RANDFILE        = $ENV::HOME/.rnd

# Extra OBJECT IDENTIFIER info:
#oid_file       = $ENV::HOME/.oid
oid_section     = new_oids

# To use this configuration file with the "-extfile" option of the
# "openssl x509" utility, name here the section containing the
# X.509v3 extensions to use:
# extensions    =
# (Alternatively, use a configuration file that has only
# X.509v3 extensions in its main [= default] section.)

[ new_oids ]
# We can add new OIDs in here for use by 'ca' and 'req'.
# Add a simple OID like this:
# testoid1=1.2.3.4
# Or use config file substitution like this:
# testoid2=${testoid1}.5.6

####################################################################
[ ca ]
default_ca      = CA_default            # The default ca section

####################################################################
[ CA_default ]
dir             = /var/ssl              # Where everything is kept
certs           = $dir/certs            # Where the issued certs are kept
crl_dir         = $dir/crl              # Where the issued crl are kept
database        = $dir/index.txt        # database index file.
#unique_subject = no                    # Set to 'no' to allow creation of
                                        # several certificates with same subject.
new_certs_dir   = $dir/newcerts         # default place for new certs.
certificate     = $dir/gcd-root-ca-cert.pem        # The CA certificate
serial          = $dir/gcd-root-ca-cert.srl        # serial number
#crlnumber      = $dir/crlnumber        # the current crl number must be
                                        # commented out to leave a V1 CRL
crl             = $dir/crl.pem          # The current CRL
private_key     = $dir/private/gcd-root-ca-key.pem # The private key
RANDFILE        = $dir/private/.rand    # private random number file

x509_extensions = usr_cert              # The extensions to add to the cert

# Comment out the following two lines for the "traditional"
# (and highly broken) format.
name_opt        = ca_default            # Subject Name options
cert_opt        = ca_default            # Certificate field options

# Extension copying option: use with caution.
# copy_extensions = copy

# Extensions to add to a CRL. Note: Netscape communicator chokes on V2 CRLs
# so this is commented out by default to leave a V1 CRL.
# crlnumber must also be commented out to leave a V1 CRL.
# crl_extensions = crl_ext

default_days    = 365                   # how long to certify for
default_crl_days= 30                    # how long before next CRL
default_md      = sha1                  # which md to use.
preserve        = no                    # keep passed DN ordering

# A few different ways of specifying how similar the request should look
# For type CA, the listed attributes must be the same, and the optional
# and supplied fields are just that :-)
policy          = policy_match

# For the CA policy
[ policy_match ]
countryName             = match
stateOrProvinceName     = match
organizationName        = match
organizationalUnitName  = optional
commonName              = supplied
emailAddress            = optional

# For the 'anything' policy
# At this point in time, you must list all acceptable 'object'
# types.
[ policy_anything ]
countryName             = optional
stateOrProvinceName     = optional
localityName            = optional
organizationName        = optional
organizationalUnitName  = optional
commonName              = supplied
emailAddress            = optional

####################################################################
[ req ]
default_bits            = 1024
default_keyfile         = privkey.pem
distinguished_name      = req_distinguished_name
attributes              = req_attributes
x509_extensions         = v3_ca # The extensions to add to the self signed cert

# Passwords for private keys if not present they will be prompted for
# input_password = secret
# output_password = secret

# This sets a mask for permitted string types. There are several options.
# default: PrintableString, T61String, BMPString.
# pkix    : PrintableString, BMPString.
# utf8only: only UTF8Strings.
# nombstr : PrintableString, T61String (no BMPStrings or UTF8Strings).
# MASK:XXXX a literal mask value.
# WARNING: current versions of Netscape crash on BMPStrings or UTF8Strings
# so use this option with caution!
string_mask = nombstr

# req_extensions = v3_req # The extensions to add to a certificate request

[ req_distinguished_name ]
countryName                     = Country Name
countryName_default             = US
countryName_min                 = 2
countryName_max                 = 2
stateOrProvinceName             = State
stateOrProvinceName_default     = Georgia
localityName                    = Locality (city etc)
localityName_default            = Atlanta
0.organizationName              = Golden Code Development Corporation
0.organizationName_default      = Golden Code Development Corporation

# we can do this but it is not needed normally :-)
#1.organizationName             = Second Organization Name (eg, company)
#1.organizationName_default     = World Wide Web Pty Ltd

organizationalUnitName          = Organizational Unit
organizationalUnitName_default  = IT Operations
commonName                      = Subject Name
commonName_max                  = 64
emailAddress                    = email
emailAddress_max                = 64

# SET-ex3                       = SET extension number 3

[ req_attributes ]
challengePassword               = A challenge password
challengePassword_min           = 4
challengePassword_max           = 20
unstructuredName                = An optional company name

[ usr_cert ]
# These extensions are added when 'ca' signs a request.
# This goes against PKIX guidelines but some CAs do it and some software
# requires this to avoid interpreting an end user certificate as a CA.
basicConstraints=CA:FALSE

# Here are some examples of the usage of nsCertType. If it is omitted
# the certificate can be used for anything *except* object signing.
# This is OK for an SSL server.
# nsCertType = server
# For an object signing certificate this would be used.
# nsCertType = objsign
# For normal client use this is typical
# nsCertType = client, email
# and for everything including object signing:
#nsCertType = client, email, objsign

# This is typical in keyUsage for a client certificate.
# keyUsage = nonRepudiation, digitalSignature, keyEncipherment

# This will be displayed in Netscape's comment listbox.
nsComment = "OpenSSL Generated Certificate"

# PKIX recommendations harmless if included in all certificates.
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer:always

# This stuff is for subjectAltName and issuerAltname.
# Import the email address.
# subjectAltName=email:copy
# An alternative to produce certificates that aren't
# deprecated according to PKIX.
# subjectAltName=email:move

# Copy subject details
# issuerAltName=issuer:copy

#nsCaRevocationUrl = http://www.domain.dom/ca-crl.pem
#nsBaseUrl
#nsRevocationUrl
#nsRenewalUrl
#nsCaPolicyUrl
#nsSslServerName

[ v3_req ]
# Extensions to add to a certificate request
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment

[ v3_ca ]
# Extensions for a typical CA
# PKIX recommendation.
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid:always,issuer:always

# This is what PKIX recommends but some broken software chokes on critical
# extensions.
#basicConstraints = critical,CA:true
# So we do this instead.
basicConstraints = CA:true

# Key usage: this is typical for a CA certificate. However since it will
# prevent it being used as an test self-signed certificate it is best
# left out by default.
# keyUsage = cRLSign, keyCertSign

# Some might want this also
# nsCertType = sslCA, emailCA

# Include email address in subject alt name: another PKIX recommendation
# subjectAltName=email:copy
# Copy issuer details
# issuerAltName=issuer:copy

# DER hex encoding of an extension: beware experts only!
# obj=DER:02:03
# Where 'obj' is a standard or added object
# You can even override a supported extension:
# basicConstraints= critical, DER:30:03:01:01:FF

[ crl_ext ]
# CRL extensions.
# Only issuerAltName and authorityKeyIdentifier make any sense in a CRL.
# issuerAltName=issuer:copy
authorityKeyIdentifier=keyid:always,issuer:always
- Create the /var/ssl directory.
- Run the CA creation command:
/usr/local/bin/CA.pl -newca
- Switch to the /var/ssl directory. The following steps are taken from there.
- Create a self-signed root CA certificate (passphrase: 1qaz), valid for 10 years, using this command:
openssl req -config /etc/openssl.cnf -new -x509 -keyout private/gcd-root-ca-key.pem -out gcd-root-ca-cert.pem -days 3650
- Using your favorite editor, create the serial number file gcd-root-ca-cert.srl and put the characters 01 there.
You have completed the setup of your own Certificate Authority. The certificate gcd-root-ca-cert.pem is the so-called root CA certificate. It has to be made known to the applications as a trusted authority certificate. All other certificates produced by this CA will refer to this root certificate.
Appendix
D. Issuing
Sample SSL Certificates Using OpenSSL and Your Own CA
This task includes creating the P2J server private key and public
certificate, arranging them into key store and trust store files, and
importing certificates into the P2J directory.
Creating the P2J Server Certificate
- Change the directory to /var/ssl.
- Enter superuser mode (su) to obtain write access to the CA files.
- Generate a new private key and public certificate pair using this command:
openssl req -config /etc/openssl.cnf -new -keyout newreq.pem -out newreq.pem
The dialog should look like this:
Country Name [US]:
State [Georgia]:
Locality (city etc) [Atlanta]:
Golden Code Development Corporation [Golden Code Development Corporation]:
Organizational Unit [IT Operations]:
Subject Name []:P2J Server
email []:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
- Sign the newly created certificate using this command and the root CA password 1qaz:
openssl ca -config /etc/openssl.cnf -policy policy_anything -out newcert.pem -infiles newreq.pem
- Rename and save the private key file:
mv newreq.pem p2jserver-key.pem
- Verify the information in the certificate:
openssl x509 -in newcert.pem -noout -text
- Save the certificate:
openssl x509 -in newcert.pem -out p2jserver-cert.pem
rm newcert.pem
Creating the Key Store and the Trust Store Files
- Copy the root CA certificate gcd-root-ca-cert.pem and the P2J server certificate p2jserver-cert.pem into your target test directory (testcases/simple/server_prep). Make it your current directory.
- Import the root CA certificate into a new trust store shtrust.store under the alias gcdrootca:
keytool -import -keystore shtrust.store -alias gcdrootca -file gcd-root-ca-cert.pem
The dialog should look like this:
Enter keystore password: poiuyt
Owner: CN=GCD Certification Authority, OU=IT Operations, O=Golden Code Development Corporation, L=Atlanta, ST=Georgia, C=US
Issuer: CN=GCD Certification Authority, OU=IT Operations, O=Golden Code Development Corporation, L=Atlanta, ST=Georgia, C=US
Serial number: 0
Valid from: Mon Feb 14 11:44:36 EST 2005 until: Thu Feb 12 11:44:36 EST 2015
Certificate fingerprints:
MD5: 72:7F:DC:1F:BF:2B:B8:60:E0:1B:89:0E:8A:27:5F:DA
SHA1: 78:A1:B7:F3:80:2E:C9:D6:46:62:09:95:03:EC:D1:F5:DC:A1:B8:DA
Trust this certificate? [no]: yes
Certificate was added to keystore
- Import the P2J server certificate into the same trust store under the alias p2jserver:
keytool -import -keystore shtrust.store -alias p2jserver -file p2jserver-cert.pem
- Prepare the P2J server's private key for import into a key store:
openssl pkcs8 -in p2jserver-key.pem -topk8 -outform DER -nocrypt -out p2jserver-key.der
Use 1234 as a passphrase.
- Import the P2J server's private key and public certificate into a new key store stdsrvkey.store under the alias standard:
java com.goldencode.p2j.security.KeyImport stdsrvkey.store standard p2jserver-key.der p2jserver-cert.pem
The dialog should look like this:
Enter keystore password:xcvbnm
Enter key entry password:876543
Keystore: stdsrvkey.store, alias standard, RSA private key imported.
- Remove the temporary key file:
rm p2jserver-key.der
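At runtime, stores like these are opened through the standard java.security.KeyStore API. The sketch below is NOT part of P2J; it simply demonstrates the JKS store/load round trip that keytool-created files rely on, using a temporary file and the trust store password from the walkthrough (poiuyt) purely as an example value.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.security.KeyStore;

public class TrustStoreDemo
{
   /**
    * Creates an empty JKS store, writes it to disk, reloads it with the same
    * password, and returns the number of aliases in the reloaded store.
    */
   static int roundTrip(File file, char[] password) throws Exception
   {
      KeyStore ks = KeyStore.getInstance("JKS");
      ks.load(null, password);                  // initialize an empty store

      try (OutputStream out = new FileOutputStream(file))
      {
         ks.store(out, password);               // integrity-protected by password
      }

      KeyStore reloaded = KeyStore.getInstance("JKS");
      try (InputStream in = new FileInputStream(file))
      {
         reloaded.load(in, password);           // a wrong password would throw
      }

      return reloaded.size();
   }

   public static void main(String[] args) throws Exception
   {
      File f = File.createTempFile("demo-trust", ".store");
      System.out.println("aliases: " + roundTrip(f, "poiuyt".toCharArray()));
      f.delete();
   }
}
```

A real trust store such as shtrust.store would of course contain the imported gcdrootca and p2jserver aliases rather than being empty; the point here is only the load/store protocol and the role of the store password.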
Importing certificates into the directory
- Remove the existing directory file:
rm min_directory.xml
- Rerun the directory creation utility:
java com.goldencode.p2j.test.MinStartupDirectory standard_server.xml
Appendix
E. Vendor-Specific Database Setup
The following sections describe setup instructions for a database
installation which will be compatible with the P2J runtime
environment. These instructions are intended only to serve as
a
quick-start guide to create a usable development environment and should
not be relied upon for a production level installation.
PostgreSQL
This environment generally has excellent documentation for
installation, use, and administration. The target version for
this project is 8.x. Currently, we are using 8.1.3 in
development.
Please use the following links to obtain the software and relevant
documentation:
NOTE: all instructions which follow assume the PostgreSQL database is installed in the directory /usr/local/pgsql, and that the cluster's data directory is /usr/local/pgsql/data. These are common choices, but are not required. You may change these directories according to your needs, but be sure to take note of this difference when following the instructions below.
Special
Preparation on Linux - Collation Fix
PostgreSQL uses an operating system level locale configuration to determine its collation (sorting) behavior. Unlike Progress, PostgreSQL offers no on-the-fly services (a la convmap.dat) to change collation behavior temporarily at runtime. Thus, to modify a PostgreSQL database's collation behavior, it is necessary to select a (permanent) collation strategy at the time the database's cluster is initialized.
Unfortunately, there is no "out-of-the-box" locale in Linux which matches Progress' "basic" collation strategy perfectly. Thus it was necessary to create a custom locale and install it at the operating system level. This was done by creating a Unix Locale Definition, then compiling it using the standard localedef utility. The custom locale we created is named en_US@p2j_basic. It is based upon the standard Linux locale en_US; however, its LC_CTYPE category is based upon a blend of the POSIX locale and Progress' default settings, and its LC_COLLATE category is based upon Progress' default collation behavior. If this paragraph made no sense to you, please see IEEE Std 1003.1, 2004 Edition, Chapters 6-7 for some background on the role and implementation of locales in Unix.
To prepare the Linux operating system for a PostgreSQL database cluster
which uses this custom locale, the following steps are necessary:
- Ensure the Linux package which provides the libc locale database sources is installed on the target machine.
- Run localedef --help and review the bottom of the output to determine which directories contain these source files (as well as the path which contains the system's binary locales). The source will often reside in some derivative of i18n, such as /usr/share/i18n. The binaries will often reside in subdirectories of /usr/lib/locale.
- Login as the system's superuser.
- Switch to the directory containing the charmap files (e.g., /usr/share/i18n/charmaps). Locate the archive containing the ISO-8859-1 charmap definition. Decompress it if necessary (it may be gzipped or otherwise archived).
- Copy the custom P2J locale source file from the p2j/locale subdirectory into the system's locales source directory (e.g., /usr/share/i18n/locales).
- Run the locale compiler:
localedef -c -f ISO-8859-1 -i en_US@p2j_basic /usr/lib/locale/en_US@p2j_basic/
If the directory for locale binaries is not /usr/lib/locale, replace this portion of the target path accordingly.
The database cluster can now be initialized; see Installation below.
Installation
Follow the installation instructions in Chapter 14 of the User Manual up to, but not including, the initdb step. The simple instructions in Section 14.1, "The Short Version" generally are adequate for a development environment. As a convenience, it is worthwhile to add /usr/local/pgsql/bin to the PATH environment variable.
The initdb step is the point at which we bind the database cluster to the custom locale we compiled during the previous Special Preparation - Collation Fix step. Replace the initdb instruction in the PostgreSQL documentation with the following:
/usr/local/pgsql/bin/initdb --locale=en_US@p2j_basic -D /usr/local/pgsql/data
All databases created in this cluster will now use the custom locale for their collation strategies.
Launch the Server
As part of the installation procedure, you will have created an unprivileged UNIX user, postgres, with which to run the database server (the server cannot be run as superuser). As root, change the postgres user's password if you have not already done this. A simple command line to launch the server follows:
su postgres -c "/usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data 2>&1 | tee pgsql.log"
Be sure the postgres user has write access to the log file (the name pgsql.log is not required; any name can be used).
Create the gc Superuser
We must create a database user with which to access the database server both from the P2J application server and for ad-hoc connections. For convenience in the development environment, this user will be a superuser (in the database sense) which can create and drop databases and other database users. This is the gc user. This user should not be used in production. To create the gc user, launch the server and issue the following command (the -s option indicates this user is a superuser; -P indicates a password is required; -E indicates the password will be stored encrypted):
createuser -s -P -E gc
When prompted for a password, enter p3rs1st. It is important to use this username and password, as this login data currently is embedded in the default configuration files used for development.
Configure the Server
We make one deviation from the out-of-the-box server configuration, which is to disable local access and enable access only via TCP sockets. In the /usr/local/pgsql/data/pg_hba.conf file (assuming the default installation location), change the following section:
# "local" is for Unix domain socket connections only
local all all trust
# IPv4 local connections:
host all all 127.0.0.1/32 trust
# IPv6 local connections:
host all all ::1/128 trust
to:
# "local" is for Unix domain socket connections only
#local all all trust
# IPv4 local connections:
host all gc 127.0.0.1/32 md5
# IPv6 local connections:
host all gc ::1/128 md5
To make these changes take effect, shut down the server (with Ctrl+C) and re-launch it.
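A full restart is not strictly required for this change; pg_hba.conf is re-read when the server receives SIGHUP, which pg_ctl can send (this is standard PostgreSQL behavior, not stated in the original text):

```shell
# Re-read pg_hba.conf without restarting; assumes the default
# installation paths used in this guide. Run as the postgres user.
/usr/local/pgsql/bin/pg_ctl reload -D /usr/local/pgsql/data
```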
Test the Client Connection
Test the ability to connect a client application with the following commands, which create a test database, launch PostgreSQL's command line SQL client, psql, against that database, then drop the test database:
createdb -U gc -h localhost test
psql -U gc -h localhost test
dropdb -U gc -h localhost test
These commands assume the PATH has been updated as recommended above.
Create the P2J Target Database(s)
Before data can be imported from Progress *.d files, a target database must be created and a schema imported. Assuming a target database name of "xyz", issue the following command to create the database:
createdb -U gc -h localhost -E LATIN1 xyz
Now import the converted schema created during schema migration:
psql -U gc -h localhost xyz <schema-create-xyz.sql
The target database is now properly prepared to accept imported records.
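To sanity-check the import, the new database's tables can be listed with psql's \dt meta-command (using the hypothetical xyz name from above):

```shell
# List the tables created by the schema import; an empty result would
# indicate the schema script did not run against this database.
psql -U gc -h localhost -c '\dt' xyz
```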
Appendix F. Notes on Running a P2J Client
Job Control and the P2J Client
In environments where the interactive shell provides the job control feature, some special care should be taken when writing custom shell scripts that launch P2J clients. A typical such environment is the bash shell under Linux. The job control feature makes it possible to stop the client process by typing Ctrl-Z on the terminal: the operating system suspends the P2J client process and the bash shell regains control, which differs from a typical Progress environment. Below is an explanation of when and why this may happen and how to avoid it.
bash, being the parent of the client JVM process, receives the SIGTSTP signal from the tty driver in the kernel if the controlling terminal is not in raw mode. The kernel sends this signal to all processes in the foreground process group of the bash session, and bash itself is in that group.
Normally, the controlling terminal is in raw mode and the signal is not sent. However, every time the JVM launches an interactive child process, the terminal is restored to the regular cooked mode, and pressing Ctrl-Z generates the SIGTSTP signal.
The following happens next. The JVM safely ignores the signal. However, bash receives it and then performs a "service" for us: it reacts to the signal by sending the SIGSTOP signal to the processes in the current foreground job. This latter signal can be neither caught, blocked, nor ignored. Thanks to bash, the JVM process stops unconditionally and bash displays the command prompt.
Disengaging the JVM from the parent bash won't help, since bash would display the command prompt immediately and we would have the problem of concurrent use of the terminal.
Thus, the only solution is not to let bash receive SIGTSTP. One of the two known ways is the trap instruction mentioned above. Another, more efficient, way is to use exec to invoke the JVM process instead of eval; this causes the parent bash process to go away.
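A minimal launcher sketch illustrating both techniques follows; the jar path is a hypothetical placeholder, not the actual P2J client invocation:

```shell
#!/bin/bash
# Option 1: ignore SIGTSTP in the shell itself (the trap technique
# mentioned above), so bash never reacts by stopping the foreground job.
trap '' TSTP

# Option 2 (preferred): replace this bash process with the JVM via
# exec, so no parent bash remains to receive SIGTSTP at all.
# The jar name below is a hypothetical placeholder.
exec java -jar /path/to/p2j-client.jar "$@"
```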
Appendix G. Making Admin Client Use Separate JVM
Prior to JDK release 1.6.0 update 10, all applets within a single browser session had to share a single JVM. These notes facilitate the manual installation of the JDK and the Firefox 3 browser, since at the moment the required packages are not available through the usual Ubuntu repositories.
- Download the latest JDK from http://java.sun.com/javase/downloads/index.jsp and install it somewhere (here, /opt/java)
- Create a symbolic link from /usr/lib/jvm to the installed JDK:
cd /usr/lib/jvm
sudo ln -s /opt/java/jdk1.6.0_12 java-6u12-sun
- Create /usr/lib/jvm/.java-6u12-sun.jinfo as follows:
name=java-6-sun-1.6.0.12
alias=java-6u12-sun
priority=64
section=non-free
jre ControlPanel /usr/lib/jvm/java-6u12-sun/jre/bin/ControlPanel
jre java /usr/lib/jvm/java-6u12-sun/jre/bin/java
jre java_vm /usr/lib/jvm/java-6u12-sun/jre/bin/java_vm
jre javaws /usr/lib/jvm/java-6u12-sun/jre/bin/javaws
jre jcontrol /usr/lib/jvm/java-6u12-sun/jre/bin/jcontrol
jre keytool /usr/lib/jvm/java-6u12-sun/jre/bin/keytool
jre pack200 /usr/lib/jvm/java-6u12-sun/jre/bin/pack200
jre policytool /usr/lib/jvm/java-6u12-sun/jre/bin/policytool
jre rmid /usr/lib/jvm/java-6u12-sun/jre/bin/rmid
jre rmiregistry /usr/lib/jvm/java-6u12-sun/jre/bin/rmiregistry
jre unpack200 /usr/lib/jvm/java-6u12-sun/jre/bin/unpack200
jre orbd /usr/lib/jvm/java-6u12-sun/jre/bin/orbd
jre servertool /usr/lib/jvm/java-6u12-sun/jre/bin/servertool
jre tnameserv /usr/lib/jvm/java-6u12-sun/jre/bin/tnameserv
jre jexec /usr/lib/jvm/java-6u12-sun/jre/lib/jexec
jdk HtmlConverter /usr/lib/jvm/java-6u12-sun/bin/HtmlConverter
jdk appletviewer /usr/lib/jvm/java-6u12-sun/bin/appletviewer
jdk apt /usr/lib/jvm/java-6u12-sun/bin/apt
jdk extcheck /usr/lib/jvm/java-6u12-sun/bin/extcheck
jdk idlj /usr/lib/jvm/java-6u12-sun/bin/idlj
jdk jar /usr/lib/jvm/java-6u12-sun/bin/jar
jdk jarsigner /usr/lib/jvm/java-6u12-sun/bin/jarsigner
jdk java-rmi.cgi /usr/lib/jvm/java-6u12-sun/bin/java-rmi.cgi
jdk javac /usr/lib/jvm/java-6u12-sun/bin/javac
jdk javadoc /usr/lib/jvm/java-6u12-sun/bin/javadoc
jdk javah /usr/lib/jvm/java-6u12-sun/bin/javah
jdk javap /usr/lib/jvm/java-6u12-sun/bin/javap
jdk jconsole /usr/lib/jvm/java-6u12-sun/bin/jconsole
jdk jdb /usr/lib/jvm/java-6u12-sun/bin/jdb
jdk jhat /usr/lib/jvm/java-6u12-sun/bin/jhat
jdk jinfo /usr/lib/jvm/java-6u12-sun/bin/jinfo
jdk jmap /usr/lib/jvm/java-6u12-sun/bin/jmap
jdk jps /usr/lib/jvm/java-6u12-sun/bin/jps
jdk jrunscript /usr/lib/jvm/java-6u12-sun/bin/jrunscript
jdk jsadebugd /usr/lib/jvm/java-6u12-sun/bin/jsadebugd
jdk jstack /usr/lib/jvm/java-6u12-sun/bin/jstack
jdk jstat /usr/lib/jvm/java-6u12-sun/bin/jstat
jdk jstatd /usr/lib/jvm/java-6u12-sun/bin/jstatd
jdk jvisualvm /usr/lib/jvm/java-6u12-sun/bin/jvisualvm
jdk native2ascii /usr/lib/jvm/java-6u12-sun/bin/native2ascii
jdk rmic /usr/lib/jvm/java-6u12-sun/bin/rmic
jdk schemagen /usr/lib/jvm/java-6u12-sun/bin/schemagen
jdk serialver /usr/lib/jvm/java-6u12-sun/bin/serialver
jdk wsgen /usr/lib/jvm/java-6u12-sun/bin/wsgen
jdk wsimport /usr/lib/jvm/java-6u12-sun/bin/wsimport
jdk xjc /usr/lib/jvm/java-6u12-sun/bin/xjc
plugin xulrunner-1.9-javaplugin.so /usr/lib/jvm/java-6u12-sun/jre/lib/i386/libnpjp2.so
plugin firefox-javaplugin.so /usr/lib/jvm/java-6u12-sun/jre/lib/i386/libnpjp2.so
plugin firefox-3.0-javaplugin.so /usr/lib/jvm/java-6u12-sun/jre/lib/i386/libnpjp2.so
plugin iceape-javaplugin.so /usr/lib/jvm/java-6u12-sun/jre/lib/i386/libnpjp2.so
plugin iceweasel-javaplugin.so /usr/lib/jvm/java-6u12-sun/jre/lib/i386/libnpjp2.so
plugin mozilla-javaplugin.so /usr/lib/jvm/java-6u12-sun/jre/lib/i386/libnpjp2.so
plugin midbrowser-javaplugin.so /usr/lib/jvm/java-6u12-sun/jre/lib/i386/libnpjp2.so
plugin xulrunner-javaplugin.so /usr/lib/jvm/java-6u12-sun/jre/lib/i386/libnpjp2.so
- Create a Perl script for one time use and save it as /tmp/java6u12sun.pl:
#!/usr/bin/perl
use strict;
# Read .jinfo file.
my @lines = ();
open(JINFO, '/usr/lib/jvm/.java-6u12-sun.jinfo')
or die "Can't open .jinfo file.";
@lines = <JINFO>;
close(JINFO);
# Install alternatives.
for (@lines)
{
   # Each matching line has the form: <section> <name> <path>
   if ($_ =~ m+/usr/lib/jvm/java-6u12-sun+)
   {
      my @split = split ' ', $_;
      system "update-alternatives --install /usr/bin/$split[1] $split[1] $split[2] 64";
   }
}
- Run the script to install alternatives:
cd /tmp
chmod +x java6u12sun.pl
sudo ./java6u12sun.pl
- Select the java-6u12-sun java alternative, i.e.
sudo update-java-alternatives -s java-6u12-sun
- Make sure you use Firefox 3. Check the plugins directory of the Firefox profile. If there is no symbolic link from there to the /usr/lib/jvm/java-6u12-sun/jre/lib/i386/libnpjp2.so file, then create it using the ln -s ... command.
- Run ControlPanel. This utility has multiple tabs. Select the Java tab and press the View button in the Java Applet Runtime Settings pane. Make sure there is only one checked box in the Enabled column and that the line is for the JRE version you've just installed.
- Restart the browser; everything is now ready.
- Note that multiple JVMs for applets will cause multiple Java Console windows to appear if they are configured to open automatically.
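The symbolic link mentioned in the Firefox step above could be created as follows; the ~/.mozilla/plugins location is an assumption (profile layouts vary), so adjust the target path to your actual profile's plugins directory:

```shell
# Assumed per-user plugin directory; Firefox 3 also honors per-profile
# plugins directories, so the target path may need adjusting.
mkdir -p ~/.mozilla/plugins
ln -s /usr/lib/jvm/java-6u12-sun/jre/lib/i386/libnpjp2.so ~/.mozilla/plugins/
```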
Copyright (c) 2004-2008,
Golden Code
Development Corporation.
ALL RIGHTS RESERVED. Use is subject to license terms.