System Requirements
The conversion, runtime and resulting converted applications have been well tested on:
- 32-bit and 64-bit Linux distributions (various, but mostly Ubuntu and Red Hat/CentOS) running on a variety of x86/x86_64 compatible hardware.
- 32-bit and 64-bit Windows on x86/x86_64 compatible hardware. The minimum required versions are Windows XP 32-bit Professional SP3 and Windows XP 64-bit Professional SP2 for 32-bit and 64-bit modes, respectively. Windows XP and Windows 7 are both well tested and supported in both 32-bit and 64-bit versions. Later Windows versions will also work. 16-bit Windows is not supported in any way.
- 64-bit Solaris 10 on Sparc v9.
There are no real hardware dependencies other than the memory, disk, CPU and network resources that are necessary to adequately run the system. However, the small amount of native code in the runtime does require porting to support non-Linux environments. See Is the runtime portable?.
Due to the high levels of compatibility between Solaris and other UNIX variants, the porting effort to support other UNIX environments (such as AIX or HP-UX) is expected to be minimal.
There are no significant resource requirements for building the FWD technology itself. The build process takes less than 2 minutes on common hardware and does not require any unusual amount of CPU or RAM resources.
A snapshot of the FWD source code itself (without source control metadata) is around 50MB in size. A full check-out (with source control metadata) of FWD will be closer to 250MB and a version that is fully built will use approximately 600MB of space in total.
There are two aspects to Code Analytics usage:
- Report Calculations - Before the analytics can be used, the 4GL application's source code and schemata must be parsed and the report results must be calculated. This is a non-interactive batch process that is run each time new source code and schemata are available.
- Web Server - This is the mechanism for accessing the Code Analytics results, after the Report Calculation batch processing is complete.
Prepare a system meeting the following hardware requirements:
|CPU||At this time the batch processing for Report Calculations is single-threaded. This means the batch processing for Code Analytics is very sensitive to single-core performance capabilities of the CPU. Running in environments designed for sharing a CPU across many processes (server environments and/or virtual machines) will not achieve the same results as running on dedicated hardware that has a reasonably recent desktop or notebook CPU. If a single core is very fast and is not heavily shared with other processes, then you will find the batch processing will be significantly faster.
A multi-core CPU can be highly useful in running the Web Server depending on the amount of source code in the project and the number of users simultaneously accessing the report server. If there are few simultaneous users or the project is smaller, then there will be little dependency upon the CPU.
|RAM||Very small projects can be processed with a JVM heap as small as 256MB or 512MB. For projects in the 1MM LOC range, it is recommended to have a JVM heap of at least 2GB and if possible 4GB. Projects in the 5-6MM LOC range all the way up to 15MM LOC can be processed in a JVM heap of 4GB.
|Disk Space||Parsing a project and making the Report Calculations both store a great deal of state on the disk. Small and medium sized projects may take 50GB of disk space to parse and calculate reports. Larger projects will require more space. For example, a 15MM LOC project may take 200GB of disk space for parsing and reports.
Consider the number of conversion projects and their relative size when calculating disk space.
The speed of the disk will directly affect the parsing and reporting speed. A 5MM LOC application might take 7.5 hours to parse and generate reports on a traditional hard disk, whereas the same system using an SSD might finish in 4.5 hours.
|Network||Conversion can be run disconnected from the network.|
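The JVM heap sizes discussed above are normally set with the standard `-Xmx` option when launching the JVM. As a quick sanity check of such a configuration, a small stand-alone Java program (a sketch for illustration, not part of FWD) can report the effective maximum heap:

```java
// Sketch: report the maximum heap available to this JVM. Launch it
// with an explicit limit to confirm the setting took effect, e.g.:
//   java -Xmx4g HeapCheck
public class HeapCheck {
    // Maximum heap the JVM will attempt to use, in megabytes.
    static long maxHeapMb() {
        return Runtime.getRuntime().maxMemory() / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println("Effective maximum heap: " + maxHeapMb() + " MB");
    }
}
```

Running this under the same JVM and options used for the batch process confirms that the intended heap limit is actually in effect.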
Prepare a system meeting the following minimum hardware requirements:
|Feature||Small Sized Projects (< 250KLOC)||Medium Sized Projects (between 250KLOC and 1MLOC)||Large Sized Projects (between 1MLOC and 7MLOC)||Very Large Projects (Over 7MLOC)||Details|
|CPU||2||2||2||2|| At this time the conversion process is single-threaded. This means the conversion is very sensitive to single-core performance capabilities of the CPU. Running in environments designed for sharing a CPU across many processes (server environments and/or virtual machines) will not achieve the same results as running on dedicated hardware that has a reasonably recent desktop or notebook CPU. If a single core is very fast and is not heavily shared with other processes, then you will find the conversion will be significantly faster.
While a multi-core CPU can be highly useful in running the conversion JVM, the benefit of those cores will not be as high since the current conversion process is largely single threaded (at this time). Where the multi-core CPU helps is in processing other threads in the conversion JVM or handling other loads for the system. The conversion process will be reworked for full multi-threading (see #1770), which is expected to massively reduce the time for conversion (especially for large projects).
|RAM||512MB to 2GB||2GB to 4GB||4GB to 16GB||16GB to 32GB+||JVM heap is the primary limiting factor. The heap usage increases with project size AND it also increases with OO 4GL usage.|
|Disk Space||10GB to 20GB||20GB to 50GB||50GB to 150GB||150GB to 250GB+||The conversion process stores a great deal of state on the disk. This disk space usage increases with the size of the project. It may also be useful to store more than one conversion project on the system. Reports require large amounts of disk space in addition to the space needed for conversion. Consider the number of conversion projects and their relative size when calculating disk space.|
|Network||N/A||N/A||N/A||N/A||Conversion can be run disconnected from the network.|
There is no unique FWD component which only exists in a development environment. Rather, a developer would use such a system for the following purposes:
- Build FWD.
- Generate and Access Code Analytics.
- Convert ABL code into Java.
- Run the FWD application server to test and/or debug the converted code or FWD itself.
- Run one or more FWD clients to test and/or debug the converted code or FWD itself.
- (Optionally) Access a local database server to test or debug the persistence layer/database support of FWD.
On the other hand, the most challenging hardware requirements for the application server or database server can be avoided because the development environment is by nature not a production environment. In other words, it must only support the processing needs of the developer and his or her testing/debugging. It does not need to be scaled to hundreds or thousands of users. This means that one must have a system that is useful for modern development purposes, but it does not need to be a server-class system.
Prepare a system meeting the following hardware requirements:
|CPU||This system will require a relatively recent multi-core desktop-class CPU. Both AMD and Intel CPUs have been tested and have been found to work well. A minimum specification would be a dual-core or quad-core 64-bit CPU. Even older CPUs such as the Intel Core 2 Duo have been found to be quite suitable for both conversion and runtime performance, but a more modern CPU will be of great benefit. Do NOT use a 32-bit CPU. 32-bit CPUs are not suitable for this task.|
|RAM||4GB is the minimum specification, but a more reasonable amount is 8GB or 16GB. It would be rare to need more than 16GB of RAM.|
|Disk Space||The conversion process stores a great deal of state on the disk. Even medium sized projects may take 5GB of disk space to convert fully. Larger projects will require more space. It may also be useful to store more than one conversion project on the system. As an example, a 5MM LOC project can consume 90GB of space (most of it is for all the intermediate artifacts of conversion). Reports require large amounts of disk space in addition to the space needed for conversion. That same 5MM LOC project needed 45GB for reports (much of the space is for a copy of the intermediate artifacts of the parsing process). Larger projects might generate reports of 200GB in size.
Consider the number of conversion projects and their relative size when calculating disk space.
If this system is also being used as a local database server, there must be space for the database data as it is being migrated and for the resulting database instance. Migrations require space for the .d data export files and the database instances consume a significant amount of disk space for the imported database. Progress databases are very heavy in index usage and Progress is optimized to keep the index data small. Since the same logical indexes are maintained in the target runtime environment, the target database will have to maintain those indexes on disk. Common relational databases do not optimize the space of indexes to the same degree as Progress. This means it should be expected that the imported database will require significantly more disk space than the original Progress database.
Consider the number and size of the databases to be imported and the number of database instances when calculating disk space.
Although it is possible to work in an environment with as little as 500GB of disk space, a 1TB disk would be a better choice. Consider using an SSD to optimize performance.
|Network||Internet access will be needed to obtain the Java dependencies (at least once) to build FWD. Conversion can be run disconnected from the network. The application server, clients and database server can all be run locally without Internet access.|
The inherently client-oriented design of the ABL, combined with the requirement to maintain compatibility with the Progress ABL runtime implementation, means that there are some client-specific dependencies that require a matching client process to be implemented in Java.
A FWD client process exists for each active user of the system. This process is virtually 100% Java code. That means there is a one-to-one relationship between an active user and the Java Virtual Machine (JVM) process that is running the FWD client code for that user.
Usage of the Swing GUI or Swing ChUI clients is a "fat client" or single-user installation. This means that the FWD technology is installed and executes on the end-user's system.
All other implementations (web GUI, native/NCURSES ChUI, web ChUI, batch mode, appserver) are typically done on shared/multi-user hardware. This means that there are multiple client JVM instances installed and executing simultaneously on the same system (one JVM per active user).
Shared/multi-user installations must be sized by considering the maximum number of simultaneously active clients. A good practice would be to multiply the single user requirements by the maximum number of simultaneous users and add some extra resources for growth or contingency.
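That rule of thumb can be sketched as simple arithmetic. The per-user figure and headroom below are placeholder assumptions for illustration, not FWD measurements:

```java
// Sketch of the sizing practice above: single-user requirement multiplied
// by the peak number of simultaneous users, plus headroom for growth.
public class ClientSizing {
    // Estimated total RAM in MB for a shared/multi-user client system.
    static long estimateRamMb(long perUserMb, int maxUsers, double headroom) {
        return Math.round(perUserMb * maxUsers * (1.0 + headroom));
    }

    public static void main(String[] args) {
        // Assumed: ~64MB per client JVM (heap plus process overhead),
        // 500 simultaneous users, 20% headroom. All placeholder values.
        System.out.println(estimateRamMb(64, 500, 0.20) + " MB"); // 38400 MB
    }
}
```

The per-user figure must come from measuring a real client JVM running the converted application, as described below.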
|CPU||Most processing for the converted code will occur on the server for virtually all converted applications. It would be a very unusual application that primarily needed to execute code associated with the client-side dependencies. This means that at most points in time for most users, the processing is executing on the application server, so each user does not typically need a dedicated CPU.
For single-user installations, both AMD and Intel CPUs have been tested and have been found to work well. As a rule of thumb, if the JVM being used works well on a given CPU, then that CPU is typically OK. The FWD client code is rarely CPU-bound, for a single-user. If your application relies on native library calls or has other dependencies upon CPU-specific architecture, ensure that these libraries or dependencies will work with the CPU architecture you choose.
This is different in the multi-user scenario. When the CPU resources are being shared, the target hardware must have adequate CPU capacity to keep each concurrent client JVM responsive. The actual need must be sized based on the converted application's requirements. Make sure to load test realistic interactive and non-interactive scenarios so that the client CPU sizing can be determined.
The FWD runtime is highly threaded, even on the client side, so a multi-core CPU environment will be of great benefit for this purpose. For a multi-user install, multiple CPUs and/or multiple cores are essential.
|RAM||The client code (whether ChUI, GUI, batch or appserver) is reasonably small and a standard maximum heap size of 16MB or 32MB is usually sufficient to handle the client requirements.
If a server was being used to run 500 clients, the system should have adequate RAM for 500 JVM process instances. Examples of this scenario could be Appserver agents, batch processes or the use of the FWD Web Client. In all of these cases the machine which is configured to launch these clients must have enough memory to manage the highest expected number of simultaneously running clients.
The actual minimum memory footprint of a single JVM must be calculated. This is specific to the J2SE implementation in use and the target operating system, and will also vary by the converted application's requirements. Although the client JVM memory requirements do not grow to large or unlimited sizes, the number and size of simultaneous windows, frames, streams, shared libraries and other client resources in use can change the memory requirements.
A particularly subtle thing to plan for is ABL code that directly uses memory (via
|Disk||The disk size of the FWD binaries is less than 100MB. The JDK or JRE will take some space too, but no more than 100MB. The converted application jars are never required on the client installation, so only the FWD code and Java need to be there. If logging is enabled, there can be some tens of megabytes of log files generated in a worst case scenario. There is no other appreciable use of disk space required by the FWD client installation per se.
All process launching, stream I/O and other platform-specific dependencies are delegated to the thin client. As a result, the application's disk space requirements must also be planned for on the client system. For example, if the application can write reports into text files (e.g. in a user's home directory), then that disk space must be allocated on the system on which the FWD client runs. This must be sized on a per-application basis.
|Network||It is possible (but uncommon) to run a single-user mode with the database, application server and client install on the same system. In such a scenario, the external network would not be used and only localhost networking would come into play.
Under normal conditions, the user may be remote from the client system and the client system may be remote from the application server (the client system does not directly communicate with the database server, so that is out of consideration). This means that TCP/IP based communications must be possible between the client and the application server. Any firewalls, routers and name servers in that network path must be suitably configured. The user's access to the client system may be local (for a dedicated single user system) or it may be remote via web (ChUI or GUI) or terminal emulator (for ChUI applications). Suitable networking must be available for any remote user access.
For best performance, provide a low-latency network connection between the FWD Client and FWD Server. This is not strictly necessary, but it is desirable.
|CPU||CPU capacity is quite important since all application logic runs in this process. Each user is represented by 3 threads (2 threads are network daemons and 1 is where the application logic is processed). All of these threads are likely to block often, waiting on network I/O with the thin client or the database. This means that a single CPU will be shared well between some number of clients. This amount would need to be estimated based on the specific application in use. The larger the number of concurrent users, the greater the concurrent CPU requirements for the application.
It is important to plan for a multi-processor system with large CPU resources for systems with hundreds or thousands of concurrent users.
It is important to implement realistic load testing that simulates the common application scenarios during peak system loads. Use this to ensure that the CPU resources are sufficient for the server. Some amount of growth and contingency should be added.
|RAM||All converted application business logic runs on the application server. All processing for connected clients runs in a single JVM application server process. This process must be sized with certain working set (memory) and CPU resources allocated per client. One should plan for 3MB of RAM per connected client plus the application's specific peak RAM requirements per user. This can be large in some cases of temp-table usage since the temp-table support is done in the application server's memory.
Consider the complexity of the batch processes, reports or other user actions implemented in your application. For example, during long running ABL transactions, depending on the business logic, the FWD application server may accumulate in-memory data (for rollback processing) which will not be discarded until the transaction is committed or rolled back.
It is not unreasonable to allocate an application server heap measured in GB rather than MB. Larger installations with hundreds of users might require 8GB or more to fully support the application requirements. Consider that in OpenEdge, each client handles its own temp-table and application memory requirements. In FWD, this is aggregated in a single application server that handles all users.
The application's specific maximum working set requirements (for a single client running a full range of operations) must be estimated. In addition, the application server's database caching implementation will need backing RAM. Finally, the application server environment itself will have certain overhead. This can be estimated at a minimum of 64MB to 128MB. This value will increase based on certain factors such as the number of classes in the converted application (each application class will consume space in the heap) and the size of the directory (very complex and voluminous security configurations versus more compact representations).
|Disk||Disk space is not an issue on the application server as no application I/O occurs there. The only disk space requirement is the need to store and access the JAR files for the application and the FWD runtime. Typically, the total disk space needed will be quite low. The FWD runtime is approximately 50MB in size. The generated application JAR files for a medium to large application might range from 50MB to 100MB. There will be additional JAR files needed for the software dependencies; this is normally 100MB or less. In addition, there will be log files generated by the server, but these normally don't require more than 100MB even with multiple generations. 300MB to 400MB of total disk space is normally sufficient.|
|Network||It is possible (but uncommon) to run a single-user mode with the database, application server and client install on the same system. In such a scenario, the external network would not be used and only localhost networking would be required.
Under normal conditions, the application server may be remote from either or both the client system and the database server. This means that TCP/IP based communications must be possible between the application server and both the client and/or the database server. Any firewalls, routers and name servers in that network path must be suitably configured.
High speed network support is of great value to the application server, since the network limits the performance of all communication to the clients and to the database. Most applications have very high levels of communication between the application server and the database. For this reason, it is important to enable high performance and high throughput links between the application server and the database server.
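The per-client figures above can be combined into a first-cut heap estimate. The fixed overhead and the 3MB-per-client baseline come from this section; the per-user peak figure is a placeholder assumption that must be measured for the real application:

```java
// Sketch: first-cut application server heap estimate based on the
// guidance in this section. Not a substitute for load testing.
public class ServerHeapSizing {
    // Estimated heap in MB: fixed server overhead, plus a baseline and
    // an application-specific peak working set for each connected client.
    static long estimateHeapMb(int clients, long perUserPeakMb) {
        long overheadMb = 128;        // server environment overhead (64MB-128MB per the text)
        long perClientBaselineMb = 3; // ~3MB per connected client (per the text)
        return overheadMb + clients * (perClientBaselineMb + perUserPeakMb);
    }

    public static void main(String[] args) {
        // Assumed: 500 connected clients, 12MB peak per user (placeholder).
        System.out.println(estimateHeapMb(500, 12) + " MB"); // 7628 MB
    }
}
```

An estimate like this only frames the initial configuration; realistic load testing, as described above, remains the only way to size the heap with confidence.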
Typically, the database server system is "I/O bound": disk performance and throughput are a primary determinant of the performance and throughput of database transactions. It is still important to ensure the proper memory and CPU resources are available. If the database server has less memory or CPU horsepower than is needed, these will become the bottleneck.
In planning for disk space, one cannot simply use the database size of the Progress database(s). The Progress Database Server is designed as an advanced index engine. Although it sacrifices most use cases for DBMS filtering and sorting, it does handle indexes very thoroughly. To reduce the number of filtering use cases which are not supported, it uses equality and range matching index bracketing techniques. This is not a complete solution, but it is what could be easily done given the index-heavy nature of the Progress Database. From a disk space perspective, the Progress Database does store the indexes in much less space than one sees in RDBMS systems like PostgreSQL. Since the same logical indexes are maintained in the target runtime environment, the target database will have to maintain those indexes on disk. Common relational databases do not optimize the space of indexes to the same degree as Progress. This means it should be expected that the imported database will require significantly more disk space than the original Progress database. For this reason, make sure to size the disk based on test data imports using data sets that are representative of the sizes which the runtime installations will require.
It is important to include caching and other performance tuning requirements to fully plan for the hardware requirements. These cannot be universally known in advance. Plan to test representative application scenarios, including simulating the peak load situations that can actually occur in production. No sizing can be complete without this testing.
The database migration itself requires space for the .d data export files in addition to the disk space for the imported database. This requirement is temporary until the migration is complete.
FWD does not provide a database per se, but relies upon 3rd party databases being installed. Documenting the hardware sizing of these 3rd party technologies is beyond the scope of this document. Please refer to the chosen database's (e.g. PostgreSQL) specific documentation to size and plan for the database server's hardware requirements.
On Windows, use of the NTFS file system is mandatory to enable storage of large database files.
Estimating Total Resource Requirements
© 2004-2021 Golden Code Development Corporation. ALL RIGHTS RESERVED.