Feature #4722
data import should be able to run with only the converted application jar file (and FWD)
100%
Related issues
History
#1 Updated by Greg Shah almost 4 years ago
The current approach has a fragile design which requires that we have a conversion project (or at least parts of it, such as data/namespace/) in place in the file system. This is something that takes extra effort to apply and is error prone.
Our import should be able to be executed with only FWD and the converted application jar(s) present.
#2 Updated by Greg Shah almost 4 years ago
- Related to Feature #2235: fix deployment issues related to dynamic database conversion resources added
#3 Updated by Greg Shah almost 4 years ago
- Related to Feature #4723: make it significantly easier to run database import added
#4 Updated by Eric Faulhaber over 3 years ago
IIRC, the data/namespace dependency was just for access to the application's generated *.p2o XML file, which already exists in the application jar file for dynamic database conversion at runtime.
The other dependency on the file system is access to the data/dump/<database_name>/ directory, where the exported data is expected to live.
Both of these dependencies could be specified as parameters to a new command line program, which would internally invoke the TRPL code which runs the import. We would have to load the *.p2o file as a resource from the jar.
#5 Updated by Greg Shah about 3 years ago
Both of these dependencies could be specified as parameters to a new command line program
I don't understand why we need to tell the import to load the .p2o files from the jar; it should always do that. I can see no good reason for it to ever read them from the file system.
I'm also worried that we are dependent upon the p2j.cfg.xml, but that can come from the jar too.
#6 Updated by Greg Shah about 2 years ago
- Related to Feature #6202: allow .df files to exist in a path that is not directly in data/ added
#7 Updated by Ovidiu Maxiniuc about 2 years ago
- Assignee set to Ovidiu Maxiniuc
- Status changed from New to WIP
- % Done changed from 0 to 60
Partial work in this area was done in recent commits (revisions: 13695, 13746, 13753, 13776 of branch 3821c).
#8 Updated by Roger Borrello over 1 year ago
Using a conversion with 3821c_14194, my ant import for a customer application is receiving this in the import log:
[java] Elapsed job time: 00:00:00.167
[java] Aug 27, 2022 5:48:10 PM com.goldencode.p2j.schema.SchemaConfig getDefaultProfile
[java] INFO: Using merged schema profile.
[java] Aug 27, 2022 5:48:11 PM com.goldencode.p2j.schema.SchemaConfig getSchemaResource
[java] SEVERE: Failed to locate 'data/standard.dict' as a resource. Using developer mode: attempting to locate it as local file in ${cvtpath} location.
[java] ERROR:
[java] com.goldencode.ast.AstException: Bogus registry resource ./cfg/registry.xml
[java]    at com.goldencode.ast.XmlFilePlugin.<init>(XmlFilePlugin.java:231)
[java]    at com.goldencode.p2j.pattern.PatternEngine.lambda$static$0(PatternEngine.java:504)
[java]    at com.goldencode.ast.AstManager.getPlugin(AstManager.java:646)
[java]    at com.goldencode.ast.AstManager.save(AstManager.java:633)
[java]    at com.goldencode.p2j.pattern.PatternEngine.finish(PatternEngine.java:1224)
[java]    at com.goldencode.p2j.pattern.PatternEngine.run(PatternEngine.java:1088)
[java]    at com.goldencode.p2j.pattern.PatternEngine.main(PatternEngine.java:2168)
I made sure the data/*.dict files were in place before performing the import, and confirmed they were in the application jar file:
rfb@rfb:~/app/build/lib$ unzip -l app.jar | grep data.*\.dict
 10316562  2022-08-25 12:30   data/db1.dict
  1760467  2022-08-25 12:30   data/db2.dict
  2078211  2022-08-25 12:30   data/standard.dict
I looked in the registry.xml (which is both on the file system and in the application jar file when I import) and see that it is referencing them in the ./cvt/data/ directories:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!--AST Registry--><!--Generated Thu Aug 25 10:08:53 EDT 2022--><name-root>
<mapping flags="256" id="4294967296" next="4294986402" treename="./cvt/data/standard.dict"/>
<mapping flags="256" id="8589934592" next="8589951119" treename="./cvt/data/db2.dict"/>
<mapping flags="256" id="12884901888" next="12884997322" treename="./cvt/data/db1.dict"/>
...
Could that be where the issue is? Should the jar include these in the cvt directory?
#9 Updated by Greg Shah over 1 year ago
Could that be where the issue is? Should the jar include these in the cvt directory?
Yes and probably yes. You can try it to see.
#10 Updated by Roger Borrello over 1 year ago
Greg Shah wrote:
Could that be where the issue is? Should the jar include these in the cvt directory?
Yes and probably yes. You can try it to see.
Unfortunately, no difference. The error is the same, and the jar file contains:
rfb@rfb:~/app$ unzip -l build/lib/app.jar | grep .*data\/standard
  2078211  2022-08-29 12:11   cvt/data/standard.dict
   524931  2022-08-29 12:11   cvt/data/standard.p2o
  2078211  2022-08-25 12:30   data/standard.dict
   524931  2022-08-25 12:30   data/standard.p2o
I also placed those files on the file system, just in case, and there was no difference.
The import seems to complete, so I'd like to know whether this is an error that doesn't harm the outcome (since I need to make sure the customer can test).
#11 Updated by Roger Borrello over 1 year ago
The issue was that I had been trying to link p2j to the convert deployment without running ant deploy.prepare, which copies the FWD jar files into the build/lib of the converted project. When I did that, I was able to import without issues.
So merely having the FWD jar files available in the classpath isn't enough for the import to proceed.
#12 Updated by Constantin Asofiei over 1 year ago
Import always requires ant deploy.prepare, as it needs the application's jar. Did you have this in your deploy/lib folder?
#13 Updated by Roger Borrello over 1 year ago
Constantin Asofiei wrote:
Import always requires ant deploy.prepare, as it needs the application's jar. Did you have this in your deploy/lib folder?
Yes...
- I have /opt/fwd/convert as the directory with the contents of the dist/fwd_deploy-convert_4.0.0_p2j_3821c_14208_yyyymmdd.zip archive
- The p2j in the project is linked to /opt/fwd/convert
- the deploy/lib directory only contains the application's jar.
So the build.xml of the project contains:
<path id="app.classpath">
   <fileset dir="${fwd.lib.home}" includes="*.jar"/>
   <fileset dir="${deploy.home}/lib" includes="*.jar"/>
</path>
Where the fwd.lib.home attribute resolves to ./p2j/build/lib, which should be /opt/fwd/convert/build/lib, where the jars reside.
Now the question of what is in that convert zip... I had been utilizing the fwdConvert configuration from build.gradle, but due to #6669, I end up using:
<property name="archive.convert_destfile"
          value="${dist.home}/fwd_deploy-convert_${major}.${minor}.${release}_${repo}_${branch}_${rev}_${today_date}.zip" />
<zip destfile="${archive.convert_destfile}" >
   <zipfileset dir="${basedir}/build/lib" prefix="build/lib" includes="**" />
   <zipfileset dir="${basedir}/build/lib" prefix="build/lib/spi" includes="fwdspi.jar" />
   <zipfileset dir="${basedir}" includes="version.properties rules/**" />
   <!-- Ignore ./convert until #6669
   <zipfileset dir="${dist.home}/convert" prefix="build/lib" includes="**" />
   -->
</zip>
So everything in build/lib gets archived into that zip file, along with build/lib/spi and rules (and version.properties).
#15 Updated by Ovidiu Maxiniuc 11 months ago
- % Done changed from 60 to 100
- the rules/schema/import.xml was added to p2j.jar
- the import.xml and its .rules dependencies can be loaded from jar instead of file system when the importMode is on.
To test against the hotel_gui project, I created a new folder with only p2j.jar, hotel.jar, the dependent libs and the database dump. There are no cfg/ or rules/ directories nor other conversion artifacts. The command to do the import is the following:
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=50505 \
   -server -Xmx2g -Djava.system.class.loader=com.goldencode.asm.AsmClassLoader \
   -cp lib/p2j.jar:lib/hotel.jar com.goldencode.p2j.pattern.PatternEngine \
   -d 2 "dbName=\"hotel\"" "targetDb=\"postgresql\"" \
   "url=\"jdbc:postgresql://localhost:5434/hotel\"" "uid=\"fwd_admin\"" "pw=\"fwd_password\"" \
   "maxThreads=2" "dataPath=\"./hotel-dump/\"" schema/import data/ hotel.p2o
Note that the database should be already created and the schema installed. The latter is done using the psql utility and it requires the content of ddl/ to be accessible from the file system.
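Those prerequisite steps can be sketched roughly as follows. The host, port, user and file names are assumptions carried over from the example above, and run() only echoes each command so the flow can be inspected without a live PostgreSQL server:

```shell
# Hedged sketch of the prerequisites for the jar-only import above.
# run() echoes instead of executing, so no database server is needed here.
run() { echo "$@"; }

dbname=hotel
dbadmin=fwd_admin

# 1. create the empty database
run createdb -h localhost -p 5434 -U "$dbadmin" "$dbname"
# 2. install the schema from the file-system ddl/ content
run psql -h localhost -p 5434 -U "$dbadmin" -d "$dbname" \
    -f "ddl/schema_table_${dbname}_postgresql.sql"
```

Dropping the run() wrapper turns this into the real command sequence.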
#16 Updated by Greg Shah 11 months ago
The latter is done using the psql utility and it requires the content of ddl/ to be accessible from the file system.
We need to change this to first try using the DDL from the application jar. If that can't be found, then using the versions in ddl/ would be OK as a fallback.
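A minimal sketch of that jar-first lookup, with the file-system fallback: the jar path and DDL name are assumptions, unzip's exit status detects whether the entry exists in the jar, and run() echoes the final commands rather than executing them:

```shell
# Hedged sketch: prefer the DDL from the application jar, fall back to ddl/.
run() { echo "$@"; }

app_jar=deploy/lib/app.jar                    # hypothetical jar location
ddl=ddl/schema_table_hotel_postgresql.sql

if unzip -l "$app_jar" "$ddl" >/dev/null 2>&1; then
   # entry found in the jar: stream it out and pipe it into psql
   run "unzip -p $app_jar $ddl | psql -U fwd_admin -d hotel -f -"
else
   # fallback: use the file-system copy in ddl/
   run psql -U fwd_admin -d hotel -f "$ddl"
fi
```

The same decision could of course live in Java (ScriptRunner-style) instead of shell; this only illustrates the lookup order.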
#17 Updated by Roger Borrello 11 months ago
Greg Shah wrote:
The latter is done using the psql utility and it requires the content of ddl/ to be accessible from the file system.
We need to change this to first try using the DDL from the application jar. If that can't be found, then using the versions in ddl/ would be OK as a fallback.
Something like this begs for a way to determine whether the DDL is in the jar, either via some sort of query or some sort of return value (I guess those are basically the same). Automation can't just try both blindly; it has to be smart.
#18 Updated by Ovidiu Maxiniuc 11 months ago
I do not think we should guess. The build script should know whether it has added the ddl/ folder into the application jar, so the location is clear. It just needs to be passed to the "schema-configuration" utility. I think com.goldencode.p2j.persist.deploy.ScriptRunner should work; it just needs to stream the appropriate .sql file and execute each statement using the db connection obtained from the provided JDBC link.
I will investigate whether this is possible and, if needed, I will add the necessary changes to 4722a.
#19 Updated by Constantin Asofiei 9 months ago
What do I need to do for the data import to work with all the artifacts in the .jar files, so they are read from the classpath?
I'm trying to import the data using only the application jars, and it can't find the .p2o files.
#20 Updated by Ovidiu Maxiniuc 9 months ago
Branch 4722a should help you do just that. It adds the import.xml script to p2j.jar and allows additional files to be loaded from the application jar (or another jar provided in the classpath). The branch contains a sample shell script for importing the data for the hotel application.
I have rebased the branch 4722a to latest trunk (14662).
#22 Updated by Ovidiu Maxiniuc 9 months ago
Greg Shah wrote:
Ovidiu: Is there anything else needed?
Roger raised the problem of defaulting based on detecting whether the p2o artifacts are available as a jar resource or as files (in some subdirectory). But this is a matter of how the application is built. If the customer chooses the jar, then he will compress the resources into the jar and script the import to load them from there. Otherwise, the resources should be provided in a location he prefers, and the import target of the ant script written accordingly.
I am not aware of other open issues at current time.
#23 Updated by Greg Shah 9 months ago
I see little reason to work from the file system instead of the jar. Any processing from the jar should be highly standardized. The .p2o files need to be found easily, but the location does not have to differ between applications, so let's standardize it and write all utility code to use that standard.
My hope here is that we don't need to ever write a custom script for import.
#24 Updated by Ovidiu Maxiniuc 9 months ago
The problem comes from the fact that the path for each .df file is configurable (for each namespace independently). The conversion creates the .schema and .p2o artifacts in a similar path, but starting from the $cvt folder. Of course, the application build script should know that (whoever writes the p2j.cfg.xml will also write the build.xml) and can put the artifacts in a standardized location.
The shell script for import I provided is only an example of the parameters currently expected for the import feature to work correctly. The changes are minimal compared to current trunk. I used it to test the standalone behavior in a directory which lacked all other conversion artifacts except the application, p2j and their dependent jars and, of course, the dump file set.
In fact, it is specific to the hotel project and should not reach the FWD trunk repository.
#25 Updated by Constantin Asofiei 9 months ago
Ovidiu Maxiniuc wrote:
... in a directory which lacked all other conversion artifacts, except: application, p2j and their dependent jars and, of course, the dump file set.
I assume the ddl/ folder needs to be present, too?
#26 Updated by Ovidiu Maxiniuc 9 months ago
I was talking only about the import, with the database already created and the schema installed.
The local ddl/ (this is a standard(-ized) location) is accessed by executing the desired set of SQL statements using a database-specific utility (psql for PostgreSQL, org.h2.tools.RunScript for H2, mariadb for MariaDB), in which case the ddl/ content must be directly accessible in the file system.
OTOH, the jarred ddl/ can also be processed by using our com.goldencode.p2j.persist.deploy.ScriptRunner, which knows how to execute the SQL scripts from jars. Actually, I did not use this except for installing the UDFs, but the mechanism is the same.
#28 Updated by Ovidiu Maxiniuc 9 months ago
OK, I will investigate how exactly the ScriptRunner can be used to run those scripts.
#29 Updated by Roger Borrello 9 months ago
I'd like to walk through the use of postgres Docker Official Image so as to discern any additional requirements for this task before it closes, or perhaps make an improvement to my process. The fruits of this can be put into a "cheat sheet".
Customers are free to work from the postgres DOI, or from our official image goldencode/base_postgres_14:latest, which contains all the locales we currently support. When a container is started, POSTGRES_INITDB_ARGS is set with the value for the locale and used during the creation of the database (for example, -e POSTGRES_INITDB_ARGS=--locale=en_US@iso88591_fwd_basic). In either case, the steps to build the DB should be the same.
To use the postgres DOI, we set up a series of post-init scripts (shell or SQL) in conjunction with environment variables that specify parameters. If the directory indicated by the PGDATA environment variable doesn't exist or is empty, the scripts are executed in lexical order; otherwise PostgreSQL is simply started on that directory.
For the most part I am following the steps that build_db.xml performs, with the exception of dropping the database. That should be done externally to the automation, by removing the location PGDATA is mounted from (bind or volume), so that the steps are all performed again.
There is a design point related to the artifacts from the conversion, from FWD, and the DB init scripts: a) whether they are put into the image, or b) mounted into the container. This really depends on whether we are recommending customers build an image from ours, or just use ours to start a container. I'm not sure which is better; the former would require me to add a Dockerfile and include the image build in build_app.sh, versus just specifying the correct parameters for starting the container.
Here's what I have so far:
a00-create-roles.sh - creates the admin and user roles, using environment variables for each role and its password.
b00-import-prep-system.sql - alters the system settings max_wal_size and maintenance_work_mem.
c00-create-db.sh - This is in development, and needs access to the SQL schema:
dbadmin=$APP_DBADMIN
dbname=$APP_DBNAME
dbschema=$APP_DBSCHEMA   # From conversion: ddl/schema_table_${dbname}_postgresql.sql

echo "Creating DB $dbname using OWNER $dbadmin"
createdb -U $dbadmin $dbname
echo "Creating schema for $dbname from $dbschema"
psql -U $dbadmin -f $dbschema $dbname
d00-udfs.sh - This is where the UDFs are merged in. Note that we do not have access to p2j.jar unless it is mounted into the image. And even if we did, we don't have java support in order to pull off something like: java -Xmx1g -Dfile.encoding=UTF-8 -jar $FWD_JAR com.goldencode.p2j.persist.deploy.ScriptRunner ${db_url} ${dbadmin} ${dbadmin_pw} udf.install.search_path. It is just as easy to pull the UDFs out of p2j.jar and run:
udfs=${udfs_dir_name}"udfs.sql"
words=${udfs_dir_name}"words-udfs-sql.sql"
echo "Creating UDFs in DB $dbname..."
psql -U $dbadmin -f $udfs $dbname
psql -U $dbadmin -f $words $dbname
Of course, that extraction would need to occur before the container is started, since the postgres images (ours or the DOI) don't have the toolset to perform the extraction.
e00-import.sh - This is the most problematic area, due to the same lack of tools mentioned for the UDFs. That was at least easy to work around, but for this I haven't a clue how to get the data into the DB. My thought is that it could perhaps be performed after the other steps are completed, via a separate container. That could be the server image, with an "import" entrypoint.
f00-post-import.sql - Simply performs an analyze.
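The host-side pre-extraction mentioned for the UDFs could look roughly like this. The jar path, output directory, and especially the entry glob inside p2j.jar are assumptions and would need to match the actual jar layout; run() echoes the commands instead of executing them:

```shell
# Hedged sketch: pull the UDF .sql scripts out of p2j.jar on the host,
# before the postgres container (which has no unzip/java) is started.
run() { echo "$@"; }

fwd_jar=/opt/fwd/build/lib/p2j.jar   # hypothetical jar location
udf_dir=/opt/fwd/udf/postgresql      # becomes the /udfs mount later

run mkdir -p "$udf_dir"
# -j junks the internal directory structure; the entry glob is an assumption
run unzip -o -j "$fwd_jar" "*udfs*.sql" -d "$udf_dir"
```

The extracted directory is then bind-mounted into the container (the -v /opt/fwd/udf/postgresql:/udfs option in the docker run command below).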
And I have a question related to the DDL... I don't see any reference to the schema_index SQL in build_db.xml. There are references to schema_tables and schema_word_tables, but not schema_index. Isn't that required?
#30 Updated by Roger Borrello 9 months ago
I had success building the DB with this command:
run_cmd="docker run -p 5434:5432 --rm --name fervent_carson \
   -e APP_DBADMIN=fwd_admin -e APP_DBADMIN_PASSWORD=admin \
   -e APP_DBUSER=fwd_user -e APP_DBUSER_PASSWORD=user \
   -e APP_DBNAME=appdb \
   -e UDFS_DIR=/udfs/ \
   -e APP_TABLE_SCHEMA=/ddl/schema_table_appdb_postgresql.sql \
   -e POSTGRES_PASSWORD=fwd_admin -e POSTGRES_USER=postgres -e POSTGRES_DB=postgres \
   -e PGDATA=/pgdata -e POSTGRES_INITDB_ARGS=--locale=en_US@iso88591_fwd_basic \
   -v appdb-db:/pgdata -v ./mnt/post-init:/docker-entrypoint-initdb.d \
   -v /opt/fwd/udf/postgresql:/udfs \
   -v /opt/app/ddl:/ddl \
   goldencode/base_postgres_14:latest"
This resulted in the container being ready for connections.
The e00-import.sh script is set up to connect to the container from the host. This is it:
#!/bin/bash
set -e

dbuser=$APP_DBUSER                    # fwd_user
dbuser_password=$APP_DBUSER_PASSWORD  # user
dbname=$APP_DBNAME                    # appdb
dbport=$APP_DBPORT                    # 5434
reldata_dir_name=$APP_RELDATA_DIR     # ../data
dump_dir_name=${reldata_dir_name}/dump/${dbname}
container_name=$DB_CONTAINER          # fervent_carson
fwd_jar=~/projects/fwd/4722a/build/lib/p2j.jar

echo "Restoring DB $dbname from dumps in $dump_dir_name ..."
java -Xmx1g -Djava.system.class.loader=com.goldencode.asm.AsmClassLoader -Dfile.encoding=UTF-8 -cp $fwd_jar com.goldencode.p2j.pattern.PatternEngine \
   -d 2 \
   dbName="${dbname}" \
   targetDb="postgresq" \
   url="jdbc:postgresql://$(docker_container_ip.sh ${container_name}):${dbport}/${dbname}" \
   uid="${dbuser}" \
   pw="${dbuser_password}" \
   dataPath="${dump_dir_name}" \
   maxThreads=4 \
   schema/import \
   $reldata_dir_name \
   ${dbname}.p2o
This results in the following command:
java -Xmx1g -Djava.system.class.loader=com.goldencode.asm.AsmClassLoader -Dfile.encoding=UTF-8 -cp /home/rfb/projects/fwd/4722a/build/lib/p2j.jar com.goldencode.p2j.pattern.PatternEngine -d 2 dbName=appdb targetDb=postgresq url=jdbc:postgresql://172.17.0.2:5434/appdb uid=fwd_user pw=user dataPath=../data/dump/appdb maxThreads=4 schema/import ../data appdb.p2o
I get one of two errors, depending on whether I am running from basedir or from basedir/docker, which is where e00-import.sh is located.
rfb@rfb:~/app/docker$ APP_DBUSER=fwd_user APP_DBUSER_PASSWORD=user APP_DBNAME=appdb APP_DBPORT=5434 DB_CONTAINER=fervent_carson APP_RELDATA_DIR=../data ./e00-import.sh
Restoring DB appdb from dumps in ../data/dump/appdb ...
Exception in thread "main" java.lang.ExceptionInInitializerError
Caused by: java.lang.RuntimeException: com.goldencode.p2j.cfg.ConfigurationException: Error loading p2j.cfg.xml configuration
   at com.goldencode.p2j.cfg.Configuration.createConfiguration(Configuration.java:1516)
   at com.goldencode.p2j.cfg.Configuration.getInstance(Configuration.java:1342)
   at com.goldencode.p2j.cfg.Configuration.getParameter(Configuration.java:680)
   at com.goldencode.p2j.cfg.Configuration.getParameter(Configuration.java:660)
   at com.goldencode.p2j.pattern.PatternEngine.<clinit>(PatternEngine.java:522)
Caused by: com.goldencode.p2j.cfg.ConfigurationException: Error loading p2j.cfg.xml configuration
   ... 5 more
Caused by: com.goldencode.p2j.cfg.ConfigurationException: '/home/rfb/app/docker/cfg' does not exist
   at com.goldencode.p2j.cfg.Configuration.normalizePath(Configuration.java:1617)
   at com.goldencode.p2j.cfg.Configuration.getConfigDirectory(Configuration.java:1536)
   at com.goldencode.p2j.cfg.Configuration.createConfiguration(Configuration.java:1429)
   ... 4 more
rfb@rfb:~/app/docker$ cd ..
rfb@rfb:~/app$ APP_DBUSER=fwd_user APP_DBUSER_PASSWORD=user APP_DBNAME=appdb APP_DBPORT=5434 DB_CONTAINER=fervent_carson APP_RELDATA_DIR=data ./docker/e00-import.sh
Restoring DB appdb from dumps in data/dump/appdb ...
Using merged schema profile.
Failed to locate 'data/standard.dict' as a resource. Using developer mode: attempting to locate it as local file in ${cvtpath} location.
Elapsed job time: 00:00:00.153
ERROR:
com.goldencode.ast.AstException: Bogus registry resource ./cfg/registry.xml
   at com.goldencode.ast.XmlFilePlugin.<init>(XmlFilePlugin.java:232)
   at com.goldencode.p2j.pattern.PatternEngine.lambda$static$0(PatternEngine.java:523)
   at com.goldencode.ast.AstManager.getPlugin(AstManager.java:646)
   at com.goldencode.ast.AstManager.save(AstManager.java:633)
   at com.goldencode.p2j.pattern.PatternEngine.finish(PatternEngine.java:1250)
   at com.goldencode.p2j.pattern.PatternEngine.run(PatternEngine.java:1114)
   at com.goldencode.p2j.pattern.PatternEngine.main(PatternEngine.java:2219)
So which of those would be more appropriate? I would have thought either, if all the resources were in the p2j.jar.
#31 Updated by Roger Borrello 9 months ago
I should note that I didn't convert the application with 4722, I am just trying to use its p2j.jar. Let me know if I'm off my rocker.
#32 Updated by Ovidiu Maxiniuc 9 months ago
I see a few issues with the java statement:
- you have a typo in targetDb="postgresq" (the final l is missing)
- the application parameters which expect " to be passed must be escaped and embedded in another set of quotes. For example: targetDb="postgresql" is expected to be written as: "targetDb=\"postgresql\"". I am not sure how the parameters are passed to java, so this may or may not be a problem
- if the cfg/p2j.cfg.xml is inside the application jar, then the jar must be added to -cp. Otherwise cfg/p2j.cfg.xml must be accessible in the current directory
- please add a / at the end of dataPath ("../data/dump/appdb/") and reldata_dir_name ("../data/")
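The effect of that extra quoting can be demonstrated by printing what the java process would actually receive as arguments:

```shell
# Each argument is printed in brackets, exactly as java would receive it.
show() { for a in "$@"; do printf '[%s]\n' "$a"; done; }

show targetDb="postgresql"        # shell consumes the quotes
# → [targetDb=postgresql]
show "targetDb=\"postgresql\""    # escaped quotes survive into the argument
# → [targetDb="postgresql"]
```

PatternEngine evaluates each parameter as an expression, which is why the literal quotes must reach the JVM intact.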
Roger Borrello wrote:
I should note that I didn't convert the application with 4722, I am just trying to use the p2j.jar. Let me know if I'm off my rocker.
You mean you use the jar of 4722 but the application is converted with something else (like trunk)? That should be no problem. 4722 is a pure runtime update. Just make sure the resources you need are in the jar.
#33 Updated by Roger Borrello 9 months ago
Ovidiu Maxiniuc wrote:
Roger,
I see a few issues with the java statement:
Thanks... corrected, although I'm not sure about the placement of the quotes within a bash script. See below.
You mean you use the jar of 4722 but the application is converted with something else (like trunk)? That should be no problem. 4722 is a pure runtime update. Just make sure the resources you need are in the jar.
Thanks... I see the 6561 2023-07-12 01:59 cfg/p2j.cfg.xml entry in the app jar.
I now have:
rfb@rfb:~/app$ APP_DBUSER=fwd_user APP_DBUSER_PASSWORD=user APP_DBNAME=appdb APP_DBPORT=5434 DB_CONTAINER=fervent_carson APP_RELDATA_DIR=data ./docker/e00-import.sh
++ java -Xmx1g -Djava.system.class.loader=com.goldencode.asm.AsmClassLoader -Dfile.encoding=UTF-8 -cp /home/rfb/projects/fwd/4722a/build/lib/p2j.jar:/app/lib/app.jar com.goldencode.p2j.pattern.PatternEngine -d 2 dbName=appdb targetDb=postgresql url=jdbc:postgresql://172.17.0.2:5434/appdb uid=fwd_user pw=user dataPath=data/dump/appdb/ maxThreads=4 schema/import data/ appdb.p2o
Using merged schema profile.
Failed to locate 'data/standard.dict' as a resource. Using developer mode: attempting to locate it as local file in ${cvtpath} location.
Elapsed job time: 00:00:00.251
ERROR:
com.goldencode.ast.AstException: Bogus registry resource ./cfg/registry.xml
   at com.goldencode.ast.XmlFilePlugin.<init>(XmlFilePlugin.java:232)
   at com.goldencode.p2j.pattern.PatternEngine.lambda$static$0(PatternEngine.java:523)
   at com.goldencode.ast.AstManager.getPlugin(AstManager.java:646)
   at com.goldencode.ast.AstManager.save(AstManager.java:633)
   at com.goldencode.p2j.pattern.PatternEngine.finish(PatternEngine.java:1250)
   at com.goldencode.p2j.pattern.PatternEngine.run(PatternEngine.java:1114)
   at com.goldencode.p2j.pattern.PatternEngine.main(PatternEngine.java:2219)
#34 Updated by Ovidiu Maxiniuc 9 months ago
1. The log reports that data/standard.dict could not be loaded as a resource from the jar. Apparently, it was then loaded from the file system, from the conversion directory. Otherwise, another error would have been written, like: Failed to locate 'data/standard.dict' resource as a local file.
To remove this, please add data/standard.dict to your application jar. It is needed by dynamic query conversion, when the _meta tables are involved.
2. Bogus registry resource ./cfg/registry.xml.
This resource (cfg/registry.xml) is also missing from the jar. Unlike the previous resource, it has no support (yet) for loading from the file system in case it cannot be loaded from the jar.
#35 Updated by Roger Borrello 9 months ago
Ovidiu Maxiniuc wrote:
To remove this, please add data/standard.dict to your application jar. It is needed by dynamic query conversion, when the _meta tables are involved.
2. Bogus registry resource ./cfg/registry.xml.
This resource (cfg/registry.xml) is also missing from the jar. Unlike the previous resource, it has no support (yet) for loading from the file system in case it cannot be loaded from the jar.
Thank you very much! Right now the jar task contains:
<jar jarfile="${build.home}/lib/${appname}.jar"
     basedir="${build.home}/classes.aop"
     casesensitive="false"
     includes="**/${pkgroot}/** dir_schema_ext.xml *.jasper **.xml cfg/** **/*.dict **/*.p2o **/*.wrx **/*.png **/*.jpg **/*.bmp **/*.ico app/**"
     manifest="${manifest.dir}/${manifest_appname}.mf"
     zip64Mode="as-needed"
     index="true" />
Those files are already in the app jar:
rfb@rfb:~/app$ unzip -l build/lib/app.jar | grep data/
        0  2023-07-20 15:43   data/
 10740320  2023-07-20 15:43   data/menu.dict
  9984731  2023-07-20 15:43   data/menu.p2o
  1835592  2023-07-20 15:43   data/records.dict
  1632630  2023-07-20 15:43   data/records.p2o
  2095041  2023-07-20 15:43   data/standard.dict
   526146  2023-07-20 15:43   data/standard.p2o
And:
rfb@rfb:~/app$ unzip -l build/lib/app.jar | grep cfg
        0  2023-07-20 15:43   cfg/
     6561  2023-07-20 15:43   cfg/p2j.cfg.xml
  7077778  2023-07-20 15:43   cfg/registry.xml
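A quick sanity check along these lines can be scripted; this sketch reports any import-critical entry missing from the jar (the entry list and jar path are taken from this discussion and may need adjusting for a given application):

```shell
# Hedged sketch: verify the import-critical resources are in the app jar.
jarfile=build/lib/app.jar   # adjust to the real jar location

for res in cfg/p2j.cfg.xml cfg/registry.xml data/standard.dict; do
   # unzip -l with an explicit member name exits non-zero if it is absent
   if unzip -l "$jarfile" "$res" >/dev/null 2>&1; then
      echo "OK       $res"
   else
      echo "MISSING  $res"
   fi
done
```

Running this before an import attempt would catch the "Bogus registry resource" and "Failed to locate ... as a resource" failures up front.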
#36 Updated by Ovidiu Maxiniuc 9 months ago
Roger, could you put your app.jar in a shared folder on devsrv01? I would like to attempt to debug against it, to understand what goes wrong for you.
#37 Updated by Ovidiu Maxiniuc 9 months ago
Roger, I used your jar and attempted to import a single table from your main database. It works. The jar is good; probably the scripts need more tuning.
Be sure you use the correct name for the jar in the classpath. Instead of app.jar, the jar has the same name as the application.
#38 Updated by Roger Borrello 9 months ago
Ovidiu Maxiniuc wrote:
Roger, I used your jar and attempted to import a single table from your main database. It works. The jar is good; probably the scripts need more tuning. Be sure you use the correct name for the jar in the classpath. Instead of app.jar, the jar has the same name as the application.
Thanks... I notice the registry.xml in the jar contains references to ./cvt/data/standard.dict, but that cvt portion of the path is not stored in the jar. Which is correct?
#39 Updated by Ovidiu Maxiniuc 9 months ago
The registry.xml is not used with import. It may be opened as a resource in common configuration, but its content is not examined now. This might be a problem in other scenarios.
Here we are interested in the p2o file, which contains the required information on the DMO and dump file. Its location is computed as: $reldata_dir_name + ${dbname}.p2o (using the parameters from e00-import.sh, #4722-29). It will be printed in the console as the AST being processed.
IIRC, the data/standard.dict should not be processed now either. But in #4722-33 it is successfully loaded from the file system.
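That resolution can be illustrated directly with the values from e00-import.sh; note the trailing slash on the directory, which is what the "add a /" remark in #4722-32 was about:

```shell
# How PatternEngine's last two arguments combine into the p2o location.
reldata_dir_name=data/   # the file-spec path argument, with trailing /
dbname=appdb

echo "${reldata_dir_name}${dbname}.p2o"
# → data/appdb.p2o
```

Without the trailing slash, the concatenation would yield dataappdb.p2o, which is not a valid resource path.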
#40 Updated by Roger Borrello 9 months ago
Ovidiu, were you able to duplicate my issue at all? I still cannot get the script to work.
I have a very quick process to recreate:
- Checkout the customer project
- Run the command to build the docker images: ./docker/build_docker.sh -d <path to application deploy zipfile>
- Start the container: (cd docker; ./control_db.sh -p 5433)
- Attempt the import: APP_DBUSER=fwd_user APP_DBUSER_PASSWORD=user APP_DBNAME=appdb APP_DBPORT=5433 DB_CONTAINER=appdb_pg14 APP_RELDATA_DIR=data ./docker/e00-import.sh
It would help so much if you could attempt these steps.
#41 Updated by Roger Borrello 9 months ago
After your requested update to my parameters, I have been able to make the connection and import... thank you for the help!
The quoting is very tricky, but with this you can perform an import into a postgres container:
cmd="java -Xmx1g -Djava.system.class.loader=com.goldencode.asm.AsmClassLoader -Dfile.encoding=UTF-8 \
   -cp ${fwd_jar}:${app_jar} com.goldencode.p2j.pattern.PatternEngine \
   -d 2 \
   "dbName=\\\"${dbname}\\\"" \
   "targetDb=\\\"postgresql\\\"" \
   "url=\\\"jdbc:postgresql://$(get_container_ip ${container_name}):${dbport}/${dbname}\\\"" \
   "uid=\\\"${dbuser}\\\"" \
   "pw=\\\"${dbuser_password}\\\"" \
   "dataPath=\\\"${dump_dir_name}/\\\"" \
   "maxThreads=4" \
   schema/import \
   ${reldata_dir_name}/ \
   ${dbname}.p2o"
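A hedged alternative that avoids one level of escaping is to build the argument list as a bash array instead of a single string; the shell then passes each element through verbatim. This is only a sketch of the idea, not the script's actual mechanism:

```shell
# Sketch: collect PatternEngine arguments in an array instead of a string.
dbname=hotel
args=( -d 2
       "dbName=\"${dbname}\""
       "targetDb=\"postgresql\""
       "maxThreads=4" )

# printing each element shows exactly what java would receive;
# a real invocation would end with: ... PatternEngine "${args[@]}"
printf '[%s]\n' "${args[@]}"
```

With "${args[@]}" the triple backslashes disappear, because the array is never re-parsed by the shell.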
Would be great when this is merged to trunk;-)
#42 Updated by Roger Borrello 8 months ago
Has this been merged to trunk yet? I have a new import.sh script that should work with this usage:
Usage: ./import.sh [-r<data_dir>] [-d<dump_dir>] [-c] [-f<path_to_p2j.jar>] [-a<path_to_app.jar>]
       --db_admin=<dbadmin> --db_adminpw=<dbadminpw> --db_user=<dbuser> --db_userpw=<dbuserpw>
       --db_name=<dbname> --db_port=<dbport> --db_schema=<schema_file> --db_host=<dbhost>
Where:
   r = Relative path to the data directory (def=data)
   d = Path to the data dump directory (def=data/dump/hotel)
   c = Running in a container. If this option is passed in, "--db_host=" specifies a container
   f = Path to p2j.jar (def=/opt/fwd/build/lib/p2j.jar)
   a = Path to app.jar (def=/opt/hotel/lib/hotel.jar)
   --db_admin=<dbadmin>      DB admin user for DB creation (def=fwd_admin)
   --db_adminpw=<dbadminpw>  DB admin user password for the import (def=admin)
   --db_user=<dbuser>        DB user for the import (def=fwd_user)
   --db_userpw=<dbuserpw>    DB user password for the import (def=user)
   --db_name=<dbname>        DB name for import (def=hotel)
   --db_port=<dbport>        Port for the import (def=5432)
   --db_schema=<schema_file> DDL to build the schema (def=ddl/schema_table_hotel_postgresql.sql)
   --db_host=<dbhost>        PostgreSQL server name or IP. If -c is specified, this is a container name (def=localhost)
NOTE: APP_DBADMIN_PASSWORD and APP_DBUSER_PASSWORD variables can be set in the environment instead of passed on command line.
I am getting a Caused by: java.io.FileNotFoundException: File 'schema/import.xml' not found in path: error. Looking in p2j.jar, we are missing rules/schema/import.xml:
rfb@rfb:~$ unzip -l /opt/fwd/build/lib/p2j.jar | grep import
    14245  2023-08-22 10:53   rules/schema/import_util.rules
#43 Updated by Roger Borrello 8 months ago
- File import.sh added
OK... I see it isn't merged to trunk. What is left to do? I updated:

build.xml
src/com/goldencode/p2j/pattern/ConfigLoader.java
src/com/goldencode/p2j/util/FileSpecList.java

in my copy of trunk, and my script worked. Attached for review. I am open to suggestions.
#44 Updated by Roger Borrello 8 months ago
I do notice that my application jar file does not include the DDL necessary to create the table structure. I can easily include them in the jar file, but I'm not sure if the ScriptRunner or PatternEngine can be used to create the schema from the files in the jar.
#45 Updated by Roger Borrello 8 months ago
- File import.sh added
- Handle multiple database imports
- Retrieve DDL from the application jar
Usage: ./import.sh [-r<data_dir>] [-d<dump_dir>] [-c] [-f<path_to_p2j.jar>] [-a<path_to_app.jar>]
                   --db_admin=<dbadmin> --db_adminpw=<dbadminpw> --db_user=<dbuser> --db_userpw=<dbuserpw>
                   --db_names=<db names> --db_host=<dbhost> --db_port=<dbport>
Where:
   r = Relative path to the data directory (def=data)
   d = Path to the data dump directories if performing an import (def=data/dump)
   c = Running in a container. If this option is passed in, "--db_host=" specifies a container
   f = Path to p2j.jar (def=/opt/fwd/build/lib/p2j.jar)
   a = Path to app.jar (def=/opt/hotel/lib/hotel.jar)
   --db_admin=<dbadmin>       DB admin user for DB creation (def=fwd_admin)
   --db_adminpw=<dbadminpw>   DB admin user password for the import (def=admin)
   --db_user=<dbuser>         DB user for the import (def=fwd_user)
   --db_userpw=<dbuserpw>     DB user password for the import (def=user)
   --db_names=<db names>      DB names for import (def=hotel)
                              Option format is "<db names (separate multiples with commas)>"
                              For example: "--db_names=hotel" or "--db_names=hotel,ledger,prices"
   --db_host=<dbhost>         PostGreSQL server name or IP. If -c is specified, this is a container name (def=localhost)
   --db_port=<dbport>         Port to connect to for DB access (def=5432)
NOTE: APP_DBADMIN_PASSWORD and APP_DBUSER_PASSWORD variables can be set in the environment instead of passed on command line.
The application's build.xml needs to be modified so that the ddl files are included. For hotel, this looks like:
--- /tmp/brz-diff-ixyb47z9/old/build.xml
+++ /home/rfb/projects/hotel_gui/build.xml
@@ -616,6 +616,11 @@
          <fileset dir="${cvt.home}/${data.rel}" includes="*.dict" />
          <fileset dir="${cvt.home}/${data.rel}" includes="*.p2o" />
       </copy>
+
+      <!-- Copy ddl/*.sql -->
+      <copy todir="${build.home}/classes/ddl">
+         <fileset dir="ddl" includes="*.sql" />
+      </copy>
    </target>

    <!-- jar -->
@@ -635,7 +640,8 @@
                **/*.png
                **/*.jpg
                **/*.bmp
-               **/*.ico"
+               **/*.ico
+               ddl/**"
              manifest="${manifest.dir}/${appname}.mf"
              zip64Mode="as-needed"
              index="true" />
If there is a way to use the tools to work with the DDL files without extracting them from the jar, I'll make an update. But for now I'll just unzip.
#46 Updated by Roger Borrello 8 months ago
- File deleted (
import.sh)
#47 Updated by Greg Shah 8 months ago
Roger Borrello wrote:
I made some upgrades to the program to:
- Handle multiple database imports
- Retrieve DDL from the application jar
[...]
The application's build.xml needs to be modified so that the ddl files are included. For hotel, this looks like:
[...]
If there is a way to use the tools to work with the DDL files without extracting them from the jar, I'll make an update. But for now I'll just unzip.
Igor: Please review and answer this.
#48 Updated by Roger Borrello 8 months ago
If this tool passes muster, should it be part of FWD, or something given to the customers for use in their applications? It is generic enough to work with any DB we know of at this point. If it is something to be included with the customer application, would it just be copied by them from the hotel application?
#49 Updated by Greg Shah 8 months ago
Customers need to be able to deploy without a project installation. Optimally, the solution would only require the FWD code (primarily p2j.jar and its dependencies) and the application jar(s). From there one should be able to point to the dump files and the database cluster and it should just work.
Reliance upon build.xml or anything like it is not good because that stuff isn't going to be there at deployment time.
#50 Updated by Roger Borrello 8 months ago
That is all understood... and import.sh allows you to perform an import with only the p2j.jar, the app.jar and your .d files if you want to import data. I was really looking for direction on where import.sh (or another appropriate name) would be placed: in the p2j project so it could be used across any project, or only in the sample projects for use by customers?
#51 Updated by Ovidiu Maxiniuc 8 months ago
I think the build.xml is only modified to allow the packaging of the ddl scripts inside the app's jar file.
Once these .sql files are deployed this way, I think the ScriptRunner is the best tool for executing them, in the same manner as it does with the new udf scripts (which are stored in p2j.jar instead of app-specific.jar).
#52 Updated by Roger Borrello 8 months ago
Ovidiu Maxiniuc wrote:
I think the build.xml is only modified to allow the packaging of the ddl scripts inside the app's jar file.
Once these .sql files are deployed this way, I think the ScriptRunner is the best tool for executing them, in the same manner as it does with the new udf scripts (which are stored in p2j.jar instead of app-specific.jar).
The import.sh performs:

# Grab the DDL from the app.jar
temp_dir=$(mktemp -d "tempdir_ddl_XXXXXX")
unzip -qj $app_jar ddl/schema_table_* -d $temp_dir
for item in "${db_array[@]}"; do
   db_table_schema="schema_table_${item}_postgresql.sql"
   if [ ! -e "${temp_dir}/${db_table_schema}" ]; then
      echo "ERROR: cannot find ${db_table_schema} in ${app_jar}. Check build."
      show_usage && exit 1
   fi
done
...
db_table_schema=${temp_dir}/schema_table_${item}_postgresql.sql
echo "Creating schema for $item from $db_table_schema ..."
export PGPASSWORD=$dbuser_pw
psql -U $dbuser -h $pg_name -p $dbport -f $db_table_schema $item

to pull those .sql files into the filesystem and then use them. If they are in the jar, what would the syntax to the ScriptRunner be to use them?
#53 Updated by Eric Faulhaber 8 months ago
ScriptRunner currently only accepts the commands:

udf.install
udf.install.search_path

It would need some modifications to install DDL.
#54 Updated by Roger Borrello 8 months ago
Any objections to me rebase-ing to the latest trunk? My checkout is at revision 14663. No pending changes to commit to this branch?
#55 Updated by Roger Borrello 8 months ago
Branch was rebased to trunk_14721 and is at revision 14722. I noticed there were many spelling/formatting updates in the build.xml header that are not in trunk. These were reverted to trunk, but I've saved off a copy of the build.xml. Please advise what to do with those updates.
Also, once I finish all testing on my import.sh so that it works in standard as well as Docker mode, I will put it into this branch, replacing the hotel_import.sh that is there. Is that OK? Should it stay named hotel_import.sh, as it doesn't just apply to hotel?
#56 Updated by Greg Shah 8 months ago
Branch was rebased to trunk_14721 and is at revision 14722. I noticed there were many spelling/formatting updates in the build.xml header that are not in trunk. These were reverted to trunk, but I've saved off a copy of the build.xml. Please advise what to do with those updates.
Keep the changes unless there is a reason not to do so.
Also, once I finish all testing on my import.sh so that it works in standard, as well as Docker mode, I will put it into this branch, replacing the hotel_import.sh that is there. Is that OK?
Yes
Should it stay named hotel_import.sh, as it doesn't just apply to hotel?
There should not be anything named hotel in FWD.
#57 Updated by Roger Borrello 8 months ago
I've replaced the hotel-import.sh with import, which now utilizes the p2j.jar and app.jar. There is a new requirement to include the DDL in the application jar. The hotel and hotel_gui sample applications on devsrv01 have been updated to be able to utilize this new tool.
#59 Updated by Roger Borrello 8 months ago
Greg Shah wrote:
Roger: Based on your testing, there anything else needed in this task?
From my perspective, no. Even if there were an enhancement to have ScriptRunner be able to execute the DDL, it would still need to be part of the jar.
#60 Updated by Greg Shah 8 months ago
Even if there were an enhancement to have ScriptRunner be able to execute the DDL,
The DDL definitely should be able to be executed directly from the jar. It makes no sense to have to unpack the DDL first. I think this needs to be implemented.
Igor: What is the effort for this?
it would still need to be part of the jar.
That is a perfectly fine restriction.
#61 Updated by Igor Skornyakov 8 months ago
Greg Shah wrote:
Even if there were an enhancement to have ScriptRunner be able to execute the DDL,
The DDL definitely should be able to be executed directly from the jar. It makes no sense to have to unpack the DDL first. I think this needs to be implemented.
Igor: What is the effort for this?
it would still need to be part of the jar.
That is a perfectly fine restriction.
At this moment, ScriptRunner executes directly from the .jar only the UDF definition script. I believe that if we add more scripts to the .jar, it will take minimal effort to have the ScriptRunner execute them as well.
#63 Updated by Igor Skornyakov 8 months ago
Greg Shah wrote:
Igor: Please go ahead with this change. Assume that the DDL is available in the jar in a known location.
OK.
One question. The UDF DDL is the same for all applications, so it is natural that it is located in p2j.jar. In which jar should other (application-specific) scripts be located?
Thank you.
#68 Updated by Igor Skornyakov 8 months ago
I see the following message on the database import from the hotel-gui:

[java] Failed to locate 'data/standard.dict' as a resource. Using developer mode: attempting to locate it as local file in ${cvtpath} location.
[java] ERROR:
[java] com.goldencode.ast.AstException: Bogus registry resource cfg/registry.xml
[java]         at com.goldencode.ast.XmlFilePlugin.<init>(XmlFilePlugin.java:232)
[java]         at com.goldencode.p2j.pattern.PatternEngine.lambda$static$0(PatternEngine.java:523)
[java]         at com.goldencode.ast.AstManager.getPlugin(AstManager.java:646)
[java]         at com.goldencode.ast.AstManager.save(AstManager.java:633)
[java]         at com.goldencode.p2j.pattern.PatternEngine.finish(PatternEngine.java:1250)
[java]         at com.goldencode.p2j.pattern.PatternEngine.run(PatternEngine.java:1114)
[java]         at com.goldencode.p2j.pattern.PatternEngine.main(PatternEngine.java:2219)
It seems that the database import finished normally.
Is this expected?
Thank you.
#69 Updated by Greg Shah 8 months ago
Please note also that at this moment ScriptRunner works only with PostgreSQL and MariaDB since H2 UDFs are deployed in a different way.
Do we need it for H2?
Yes. Although we don't recommend it for production usage, we absolutely do support it for test/dev environments. We use it heavily with Hotel GUI and Hotel ChUI, for example.
#70 Updated by Greg Shah 8 months ago
Igor Skornyakov wrote:
I see the following message on the database import from the hotel-gui:
[...]
It seems that the database import finished normally.
Is this expected?
Thank you.
No, this doesn't look right. Are you running it from the top level Hotel GUI project directory (where build.xml exists)?
#71 Updated by Roger Borrello 8 months ago
If you are using the latest hotel_gui project and the latest 4722a, you should be able to run: ./p2j/import.sh -f ./p2j/build/lib/p2j.jar -a ./build/lib/hotel.jar
The changes you are making would allow me to not have to extract the DDL from the jar beforehand. I was using unzip before, but switched to jar xvf to minimize the requirements on the postgres Docker image.
# Grab the DDL from the app.jar
temp_dir=$(mktemp -d "tempdir_ddl_XXXXXX")
(cd $temp_dir; jar xvf $app_jar ddl/)
#unzip -qj $app_jar ddl/schema_table_* -d $temp_dir
for item in "${db_array[@]}"; do
   db_table_schema="schema_table_${item}_postgresql.sql"
   if [ ! -e "${temp_dir}/ddl/${db_table_schema}" ]; then
      echo "ERROR: cannot find ${db_table_schema} in ${app_jar}. Check build."
      show_usage && exit 1
   fi
   db_table_schema=${temp_dir}/ddl/schema_table_${item}_postgresql.sql
   echo "Creating schema for $item from $db_table_schema ..."
   export PGPASSWORD=$dbuser_pw
   psql -U $dbuser -h $pg_name -p $dbport -f $db_table_schema $item
done
Here is usage, and output:
rfb@rfb:~/projects/hotel_gui_271$ ./p2j/import.sh -?
Usage: ./p2j/import.sh [-d<dump_dir>] [-c] [-h<host or container>] [-r<data_dir>] [-p<port>] [-f<path_to_p2j.jar>] [-a<path_to_app.jar>]
                       --db_adminid=<dbadmin> --db_adminpw=<dbadminpw> --db_userid=<dbuser> --db_userpw=<dbuserpw> --db_names=<db names>
Where:
   d = Path to the data dump directories if performing an import (def=data/dump)
   c = Running in a container. If this option is passed in, "-h" specifies a container
   h = Target server or container (if -c specified) (def=localhost)
   r = Relative path to the data directory (def=data)
   p = Target port of server or container (def=5432)
   f = Path to p2j.jar (def=/opt/fwd/build/lib/p2j.jar)
   a = Path to app.jar (def=/opt/hotel/lib/hotel.jar)
   --db_admin=<dbadmin>      DB admin user for DB creation (def=fwd_admin)
   --db_adminpw=<dbadminpw>  DB admin user password for the import (def=admin)
   --db_user=<dbuser>        DB user for the import (def=fwd_user)
   --db_userpw=<dbuserpw>    DB user password for the import (def=user)
   --db_names=<db names>     DB names for import (def=hotel)
                             Option format is "<db names (separate multiples with commas)>"
                             For example: "--db_names=hotel" or "--db_names=hotel,ledger,prices"
NOTE: APP_DBADMIN_PASSWORD and APP_DBUSER_PASSWORD variables can be set in the environment instead of passed on command line.
rfb@rfb:~/projects/hotel_gui_271$ ./p2j/import.sh -f ./p2j/build/lib/p2j.jar -a ./build/lib/hotel.jar
  created: ddl/
 inflated: ddl/schema_index_hotel_h2.sql
 inflated: ddl/schema_index_hotel_mariadb.sql
 inflated: ddl/schema_index_hotel_postgresql.sql
 inflated: ddl/schema_index_standard_h2.sql
 inflated: ddl/schema_table_hotel_h2.sql
 inflated: ddl/schema_table_hotel_mariadb.sql
 inflated: ddl/schema_table_hotel_postgresql.sql
 inflated: ddl/schema_table_standard_h2.sql
 inflated: ddl/schema_word_tables_hotel_h2.sql
 inflated: ddl/schema_word_tables_hotel_mariadb.sql
 inflated: ddl/schema_word_tables_hotel_postgresql.sql
 inflated: ddl/schema_word_tables_standard_h2.sql
Dropping DB hotel ...
Creating DB hotel using OWNER fwd_admin ...
Creating schema for hotel from tempdir_ddl_EzzzSh/ddl/schema_table_hotel_postgresql.sql ...
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE INDEX
ALTER TABLE
CREATE SEQUENCE
Setup UDFs ...
Performing cmd=java -Xmx1g -Dfile.encoding=UTF-8 -cp /home/rfb/projects/fwd/4722a/build/lib/p2j.jar:/home/rfb/projects/hotel_gui_271/build/lib/hotel.jar com.goldencode.p2j.persist.deploy.ScriptRunner jdbc:postgresql://localhost:5432/hotel fwd_admin admin udf.install.search_path
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Running script: [/udf/postgresql/udfs.sql]
Running script: [/udf/postgresql/words-udfs-sql.sql]
Setting search_path for [hotel]
Import dump from /home/rfb/projects/hotel_gui_271/data/dump/hotel into hotel ...
Performing cmd=java -Xmx1g -Djava.system.class.loader=com.goldencode.asm.AsmClassLoader -Dfile.encoding=UTF-8 -cp /home/rfb/projects/fwd/4722a/build/lib/p2j.jar:/home/rfb/projects/hotel_gui_271/build/lib/hotel.jar com.goldencode.p2j.pattern.PatternEngine -d 2 dbName=\"hotel\" targetDb=\"postgresql\" url=\"jdbc:postgresql://localhost:5432/hotel\" uid=\"fwd_user\" pw=\"user\" dataPath=\"/home/rfb/projects/hotel_gui_271/data/dump/hotel/\" maxThreads=4 schema/import data/ hotel.p2o
Using unnamed schema profile.
INFO: Type match assertion disabled; set "checkTypes" to true to enable
INFO: Data export files will be read from '/home/rfb/projects/hotel_gui_271/data/dump/hotel/'
INFO: Using 4 threads for import
./cvt/data/hotel.p2o
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
MLog initialization issue: slf4j found no binding or threatened to use its (dangerously silent) NOPLogger. We consider the slf4j library not found.
0 [MLog-Init-Reporter] INFO com.mchange.v2.log.MLog - MLog clients using log4j logging.
102 [main] INFO com.mchange.v2.c3p0.C3P0Registry - Initializing c3p0-0.9.5.2 [built 08-December-2015 22:06:04 -0800; debug? true; trace: 10]
190 [main] INFO com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource - Initializing c3p0 pool...
com.mchange.v2.c3p0.ComboPooledDataSource [ acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, contextClassLoaderSource -> caller, dataSourceName -> z8kfsxaymnnupnjj9qzm|6d07a63d, debugUnreturnedConnectionStackTraces -> false, description -> null, driverClass -> org.postgresql.Driver, extensions -> {}, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, forceUseNamedDriverClass -> false, identityToken -> z8kfsxaymnnupnjj9qzm|6d07a63d, idleConnectionTestPeriod -> 0, initialPoolSize -> 3, jdbcUrl -> jdbc:postgresql://localhost:5432/hotel, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 15, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 3, numHelperThreads -> 3, preferredTestQuery -> null, privilegeSpawnedThreads -> false, properties -> {user=******, password=******}, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, userOverrides -> {}, usesTraditionalReflectiveProxies -> false ]
Found identity generator sequence starting at 1.
IMPORT ORDER:
 1) guest.d:20802.0 [10401 bytes]
 2) stay.d:15508.0 [7754 bytes]
 3) reservation.d:3532.0 [1766 bytes]
 4) rate.d:2946.0 [1473 bytes]
 5) room.d:1042.0 [521 bytes]
 6) room-type.d:986.0 [493 bytes]
 7) _user.d:496.0 [248 bytes]
 8) service.d:484.0 [242 bytes]
 9) guest-id-type.d:426.0 [213 bytes]
10) service-type.d:426.0 [213 bytes]
BEGIN guest.d (guest)
BEGIN rate.d (rate)
BEGIN stay.d (stay)
BEGIN reservation.d (reservation)
rate.d: SQL: drop index if exists idx__rate_idx__room_type_date
guest.d: SQL: drop index if exists idx__guest_idx__stay_order
stay.d: SQL: drop index if exists idx__stay_idx__stay_id
stay.d: SQL: create unique index idx__stay_idx__stay_id on stay (stay_id)
reservation.d: SQL: drop index if exists idx__reservation_idx__room_num_date
guest.d: SQL: create unique index idx__guest_idx__stay_order on guest (stay_id, order_)
reservation.d: Skipping dropping of word index: idx__guest
rate.d: Set up UTF-8 character encoding.
reservation.d: Set up UTF-8 character encoding.
stay.d: Set up UTF-8 character encoding.
guest.d: Set up UTF-8 character encoding.
rate.d: SQL: create index idx__rate_idx__room_type_date on rate (room_type, start_date, end_date, recid)
rate.d: Analyzing rate...
stay.d: Analyzing stay...
guest.d: Analyzing guest...
reservation.d: SQL: create index idx__reservation_idx__room_num_date on reservation (room_num, start_date, end_date, recid)
END rate.d.
56 records imported into rate
6.31538% complete: 56 recs in 0:00:00.954 (58/sec, 3522/min, 211320/hr)
BEGIN room.d (room)
room.d: SQL: drop index if exists idx__room_idx__room_num
room.d: SQL: create unique index idx__room_idx__room_num on room (room_num)
END stay.d.
219 records imported into stay
39.56011% complete: 275 recs in 0:00:00.960 (286/sec, 17187/min, 1031250/hr)
BEGIN room-type.d (room_type)
room-type.d: SQL: drop index if exists idx__room_type_idx__type
END guest.d.
225 records imported into guest
84.15366% complete: 500 recs in 0:00:00.963 (519/sec, 31152/min, 1869158/hr)
BEGIN _user.d (meta_user)
reservation.d: Skipping creation of word index: idx__guest
reservation.d: Analyzing reservation...
room.d: Set up UTF-8 character encoding.
END reservation.d.
36 records imported into reservation
91.72526% complete: 536 recs in 0:00:00.974 (550/sec, 33018/min, 1981108/hr)
BEGIN service.d (service)
room-type.d: SQL: create unique index idx__room_type_idx__type on room_type (room_type)
_user.d: SQL: drop index if exists idx__meta_user_idx__userid
service.d: SQL: drop index if exists idx__service_idx__stay_charge
_user.d: SQL: drop index if exists idx__meta_user_idx__user_domain_name
service.d: Set up UTF-8 character encoding.
_user.d: SQL: drop index if exists idx__meta_user_idx__user_sql_only_user
_user.d: SQL: create unique index idx__meta_user_idx__userid on meta_user (upper(rtrim(userid)), upper(rtrim(domain_name)))
room.d: Analyzing room...
service.d: SQL: create index idx__service_idx__stay_charge on service (stay_id, service_type, service_date, recid)
END room.d.
30 records imported into room
93.95901% complete: 566 recs in 0:00:00.991 (571/sec, 34268/min, 2056104/hr)
room-type.d: Set up UTF-8 character encoding.
BEGIN guest-id-type.d (guest_id_type)
guest-id-type.d: SQL: drop index if exists idx__guest_id_type_idx__id_type
guest-id-type.d: SQL: create unique index idx__guest_id_type_idx__id_type on guest_id_type (id_type)
_user.d: Set up ISO8859-15 character encoding.
room-type.d: Analyzing room_type...
service.d: Analyzing service...
END room-type.d.
7 records imported into room_type
96.07271% complete: 573 recs in 0:00:01.007 (569/sec, 34141/min, 2048460/hr)
BEGIN service-type.d (service_type)
service-type.d: SQL: drop index if exists idx__service_type_idx__charge_type
service-type.d: SQL: create unique index idx__service_type_idx__charge_type on service_type (service_type)
_user.d: SQL: create index idx__meta_user_idx__user_domain_name on meta_user (upper(rtrim(domain_name)), sql_only_user, upper(rtrim(userid)), recid)
END service.d.
5 records imported into service
97.11027% complete: 578 recs in 0:00:01.009 (572/sec, 34370/min, 2062239/hr)
guest-id-type.d: Set up UTF-8 character encoding.
guest-id-type.d: Analyzing guest_id_type...
_user.d: SQL: create index idx__meta_user_idx__user_sql_only_user on meta_user (sql_only_user, recid)
END guest-id-type.d.
3 records imported into guest_id_type
98.02350% complete: 581 recs in 0:00:01.023 (567/sec, 34076/min, 2044574/hr)
service-type.d: Set up UTF-8 character encoding.
service-type.d: Analyzing service_type...
_user.d: Analyzing meta_user...
END service-type.d.
3 records imported into service_type
98.93672% complete: 584 recs in 0:00:01.034 (564/sec, 33887/min, 2033268/hr)
END _user.d.
1 records imported into meta_user
100.00000% complete: 585 recs in 0:00:01.037 (564/sec, 33847/min, 2030858/hr)
Updated identity generator sequence starting at 40001.
Total records processed: 585 in 0:00:01.036 (564.672 records/sec)
Number of failed records: 0.
Total sequences initialized: 0.
Reading merge DMO definitions...
Elapsed job time: 00:00:01.712
#72 Updated by Igor Skornyakov 8 months ago
Greg Shah wrote:
Please note also that at this moment ScriptRunner works only with PostgreSQL and MariaDB since H2 UDFs are deployed in a different way.
Do we need it for H2?

Yes. Although we don't recommend it for production usage, we absolutely do support it for test/dev environments. We use it heavily with Hotel GUI and Hotel ChUI, for example.
OK. Will be added.
#73 Updated by Igor Skornyakov 8 months ago
Greg Shah wrote:
Igor Skornyakov wrote:
I see the following message on the database import from the hotel-gui:
[...]
It seems that the database import finished normally.
Is this expected?
Thank you.

No, this doesn't look right. Are you running it from the top level Hotel GUI project directory (where build.xml exists)?
Yes, I do.
#74 Updated by Igor Skornyakov 8 months ago
Roger Borrello wrote:
If you are using the latest hotel_gui project and the latest 4722a, you should be able to run: ./p2j/import.sh -f ./p2j/build/lib/p2j.jar -a ./build/lib/hotel.jar
The changes you are making would allow me to not have to extract the DDL from the jar beforehand. I was using unzip before, but switched to jar xvf to minimize the requirements on the postgres Docker image.
[...]
Here is the usage, and output:
[...]
I see. Thank you!
#75 Updated by Roger Borrello 8 months ago
In revisiting getting the postgres image augmented to be able to run the data import internally, we have to install Java 11, as mentioned. The p2j.jar and app.jar are built with Java 8, but the runtime is Java 11:
docker-cact-db-1  | Performing cmd=java -Xmx1g -Dfile.encoding=UTF-8 -cp /db/p2j.jar:/db/app.jar com.goldencode.p2j.persist.deploy.ScriptRunner jdbc:postgresql://localhost:5432/menu fwd_admin admin udf.install.search_path
docker-cact-db-1  | Exception in thread "main"
docker-cact-db-1  | java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils
docker-cact-db-1  |         at com.goldencode.p2j.persist.dialect.P2JPostgreSQLDialect.<clinit>(P2JPostgreSQLDialect.java:255)
docker-cact-db-1  |         at com.goldencode.p2j.persist.dialect.Dialect$1.<init>(Dialect.java:226)
docker-cact-db-1  |         at com.goldencode.p2j.persist.dialect.Dialect.<clinit>(Dialect.java:223)
docker-cact-db-1  |         at com.goldencode.p2j.persist.deploy.ScriptRunner.main(ScriptRunner.java:121)
docker-cact-db-1  | Caused by: java.lang.ClassNotFoundException: org.apache.commons.lang.StringUtils
docker-cact-db-1  |         at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
docker-cact-db-1  |         at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
docker-cact-db-1  |         at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:527)
docker-cact-db-1  |         ... 4 more
docker-cact-db-1  | INFO: no data import requested.
So perhaps I have to add more libraries to the image.
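One plausible way to avoid chasing individual NoClassDefFoundError failures is to put every jar that ships alongside p2j.jar on the classpath, rather than p2j.jar alone. A hedged sketch — the build/lib layout and paths in the usage comment are assumptions, not taken from the actual image:

```shell
# Build a classpath string from the app jar plus every jar found in a
# lib directory. Illustrative only; the directory layout is assumed.
build_classpath() {
   local app_jar=$1 lib_dir=$2
   local cp=$app_jar j
   for j in "$lib_dir"/*.jar; do
      cp="$cp:$j"
   done
   echo "$cp"
}

# Hypothetical usage:
# java -Xmx1g -cp "$(build_classpath /db/app.jar /opt/fwd/build/lib)" \
#    com.goldencode.p2j.persist.deploy.ScriptRunner "$db_url" fwd_admin admin udf.install.search_path
```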
#76 Updated by Ovidiu Maxiniuc 8 months ago
Greg Shah wrote:
Ovidiu: What additional testing is needed?
I meant testing whether the ScriptRunner could be used for processing the scripts from ddl/ with minimal or no modifications. I hoped that would be possible after seeing the work done by the tool: just grab a .sql file from the jar and execute it.
I had the answer in #4722-53.
I have no other unknown issues. Working with Roger helped validate the solution.
#78 Updated by Roger Borrello 8 months ago
This branch has been rebased to trunk_14724 and is now at revision 14728.
#79 Updated by Roger Borrello 8 months ago
I created a postgres image containing the FWD convert deployment. All the libraries that are required to run the ScriptRunner and PatternEngine are available, so the NoClassDefFoundError on the Apache items is no longer a problem.
However, the postgres container has network challenges from within the post-init scripts that create the database. If any network parameters, like -h localhost -p 5432, are specified, an error indicating the connection is refused is returned:

psql: error: connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused
I can easily modify import.sh to suppress those parameters from within a container, but when we get to the ScriptRunner, the URL seems to create the same issue:
Performing cmd=java -Xmx1g -Dfile.encoding=UTF-8 -cp /opt/fwd-deploy/convert/build/lib/p2j.jar:/db/app.jar com.goldencode.p2j.persist.deploy.ScriptRunner jdbc:postgresql://localhost:5432/menu fwd_admin admin udf.install.search_path
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Exception in thread "main" org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
        at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:280)
        at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
        at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:195)
        at org.postgresql.Driver.makeConnection(Driver.java:454)
        at org.postgresql.Driver.connect(Driver.java:256)
        at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:677)
        at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:228)
        at com.goldencode.p2j.persist.deploy.ScriptRunner.main(ScriptRunner.java:128)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
        at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412)
        at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255)
        at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237)
        at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.base/java.net.Socket.connect(Socket.java:609)
        at org.postgresql.core.PGStream.<init>(PGStream.java:70)
        at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:91)
        at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:192)
        ... 7 more
The odd thing is, I can run a bash shell to the same container, and run the same command, and it works:
root@adc8dfb4e45e:/tmp# su postgres sh -c "java -Xmx1g -Dfile.encoding=UTF-8 -cp /opt/fwd-deploy/convert/build/lib/p2j.jar:/db/app.jar com.goldencode.p2j.persist.deploy.ScriptRunner jdbc:postgresql://localhost:5432/menu fwd_admin admin udf.install.search_path"
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Running script: [/udf/postgresql/udfs.sql]
Running script: [/udf/postgresql/words-udfs-sql.sql]
Setting search_path for [menu]
So this only seems to occur in the very original container.
#80 Updated by Roger Borrello 8 months ago
Found the key... during the post-init phase of the postgres image startup, all connections have to be Unix sockets. TCP/IP sockets are not allowed at that point. So I had to make import.sh smart enough to not use network parameters, as well as to build the JDBC PostgreSQL URL to connect via a Unix socket. Basically:
# URL must utilize Unix port if we are in a container
if [ ! -e "/.dockerenv" ]; then
   db_url=jdbc:postgresql://${dbhost}:${dbport}/${item}
else
   db_url=jdbc:postgresql://${dbhost}:${dbport}/${item}?socketFactory=org.newsclub.net.unix.AFUNIXSocketFactory\$FactoryArg\&socketFactoryArg=/run/postgresql/.s.PGSQL.${dbport}
fi
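The branch above can be wrapped in a small helper so the same logic is reusable per database. A sketch only: the explicit in_container argument is added here for illustration, while the actual script keys off the presence of /.dockerenv as shown.

```shell
# Build the JDBC URL, switching to the junixsocket factory form when
# running inside the container (where only Unix sockets are allowed
# during postgres post-init). The fourth argument replaces the real
# script's /.dockerenv test, purely for illustration.
build_db_url() {
   local dbhost=$1 dbport=$2 dbname=$3 in_container=$4
   if [ "$in_container" != "yes" ]; then
      echo "jdbc:postgresql://${dbhost}:${dbport}/${dbname}"
   else
      echo "jdbc:postgresql://${dbhost}:${dbport}/${dbname}?socketFactory=org.newsclub.net.unix.AFUNIXSocketFactory\$FactoryArg&socketFactoryArg=/run/postgresql/.s.PGSQL.${dbport}"
   fi
}
```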
I have been able to test this in a standard configuration, in a Docker configuration (both container and compose).
This branch was rebased to trunk_14726, and the latest import.sh commit brings the revision to 14731.
#81 Updated by Roger Borrello 8 months ago
This branch was rebased to trunk_14728
and is now at revision 14734
.
#83 Updated by Igor Skornyakov 8 months ago
Added support for more SQL script types for PostgreSQL and MariaDB.
Committed to 4722/14735.
New commands are create.tables, create.indexes, and create.word.objects.
The name(s) of the sql script(s) for these commands are mandatory, and should be provided without the "schema" prefix and dialect suffix.
For example, the tables can be created like this:
<java classname="com.goldencode.p2j.persist.deploy.ScriptRunner"
      fork="true"
      failonerror="true"
      dir="${basedir}">
   <classpath refid="app.classpath"/>
   <jvmarg value="-Xmx1g"/>
   <jvmarg value="-Dfile.encoding=UTF-8"/>
   <jvmarg value="-Djava.util.logging.config.file=${p2j.home}/cfg/logging.properties"/>
   <arg value="${sql.url.pg}"/>
   <arg value="${sql.user}"/>
   <arg value="${sql.user.pass}"/>
   <arg value="create.tables"/>
   <arg value="table_fwd"/>
</java>
For creation of the word table objects, use:
<java classname="com.goldencode.p2j.persist.deploy.ScriptRunner"
      fork="true"
      failonerror="true"
      dir="${basedir}">
   <classpath refid="app.classpath"/>
   <jvmarg value="-Xmx1g"/>
   <jvmarg value="-Dfile.encoding=UTF-8"/>
   <jvmarg value="-Djava.util.logging.config.file=${p2j.home}/cfg/logging.properties"/>
   <arg value="${sql.url.pg}"/>
   <arg value="${sql.user}"/>
   <arg value="${sql.user.pass}"/>
   <arg value="create.word.objects"/>
   <arg value="word_tables_fwd"/>
</java>
Working on support for H2 database.
#84 Updated by Igor Skornyakov 8 months ago
I've noticed the following.
If I use <parameter name="ddl-dialects" value="h2,postgresql,mariadblenient" />
in the p2j.cfg.xml, the drop index statement for PostgreSQL is generated with an on <table> clause, which is OK for MariaDB but is not supported by PostgreSQL.
This can be an indication that something is wrong with the dialect settings in conversion, which can cause other problems.
I think that I've already reported this error some time ago, but I could not find where it was done.
#85 Updated by Ovidiu Maxiniuc 8 months ago
Igor Skornyakov wrote:
I understand that on the "normal" database creation the
schema_index_${dbname}_${dialect}.sql
DDL script is not used and we generate it just if re-indexing of the database is required?
Is this correct?
Yes, this is correct. During the database import, the schema_table_.. statements are used for creating the database structure, but the indices are not created until the effective import starts for each individual table. To accelerate the commits, the non-unique indices are dropped. The import runs with unique indices to avoid key duplication, but after all records are in the database, the non-unique indices are set back. The performance gain is obtained by not updating the indices after each insert, but performing a single indexing operation at the end.
More importantly, all information about the indices is extracted from the .p2o of the associated schema.
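The drop/bulk-insert/recreate ordering described above can be sketched as a plan generator. This is a hypothetical illustration, not FWD code; the regex rewrite that derives a drop statement from a create statement is only for the sake of the example:

```java
import java.util.ArrayList;
import java.util.List;

public class ImportPlanSketch {
   /**
    * Sketch of the DDL ordering around a bulk import: non-unique indices are
    * dropped before the records are inserted and recreated in a single pass
    * afterwards, so each insert does not pay for index maintenance.
    */
   public static List<String> plan(String table, List<String> nonUniqueIndexDdl) {
      List<String> steps = new ArrayList<>();
      for (String ddl : nonUniqueIndexDdl) {
         // derive e.g. "drop index if exists idx__rate" from the create statement
         steps.add(ddl.replaceFirst("create index (\\S+) on .*",
                                    "drop index if exists $1"));
      }
      steps.add("-- bulk insert all records into " + table);
      steps.addAll(nonUniqueIndexDdl); // single indexing pass at the end
      return steps;
   }

   public static void main(String[] args) {
      plan("rate", List.of("create index idx__rate on rate (room_type)"))
         .forEach(System.out::println);
   }
}
```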
If I use
<parameter name="ddl-dialects" value="h2,postgresql,mariadblenient" />
in the p2j.cfg.xml, the drop index statement for PostgreSQL is generated with an on <table> clause which is OK for MariaDB but is not supported by PostgreSQL.
This can be an indication that something is wrong with the dialect settings in conversion which can cause other problems.
No. This should not happen. For example, for the meta _user table, which is injected in all tables, we have for the MariaDB dialect:
drop index if exists _userid on meta_user;
create unique index _userid on meta_user (_Userid, _Domain_Name);
but for PostgreSQL:
drop index if exists idx__meta_user_userid;
create unique index idx__meta_user_userid on meta_user (upper(rtrim(userid)), upper(rtrim(domain_name)));
Please check whether this is a regression of your changes.
#86 Updated by Igor Skornyakov 8 months ago
Ovidiu Maxiniuc wrote:
Igor Skornyakov wrote:
I understand that on the "normal" database creation the
schema_index_${dbname}_${dialect}.sql
DDL script is not used and we generate it just if re-indexing of the database is required?
Is this correct?

Yes, this is correct. During the database import, the schema_table_.. statements are used for creating the database structure, but the indices are not created until the effective import starts for each individual table. To accelerate the commits, the non-unique indices are dropped. The import runs with unique indices to avoid key duplication, but after all records are in the database, the non-unique indices are set back. The performance gain is obtained by not updating the indices after each insert, but performing a single indexing operation at the end.
More importantly, all information about the indices is extracted from the .p2o of the associated schema.
I see, thank you.
If I use
<parameter name="ddl-dialects" value="h2,postgresql,mariadblenient" />
in the p2j.cfg.xml, the drop index statement for PostgreSQL is generated with an on <table> clause which is OK for MariaDB but is not supported by PostgreSQL.
This can be an indication that something is wrong with the dialect settings in conversion which can cause other problems.

No. This should not happen. For example, for the meta _user table, which is injected in all tables, we have for the MariaDB dialect:
[...]
but for PostgreSQL:
[...]
Please check whether this is a regression of your changes.
However, this is how it works now. This cannot be caused by my changes since I've made changes only in ScriptRunner. Please note that it happens only if we have both postgresql and mariadb in "ddl-dialects" in p2j.cfg.xml.
#87 Updated by Ovidiu Maxiniuc 8 months ago
The statement for dropping the index is getDropIndexString() for each Dialect. If you look at MariaDbLenientDialect it is:
return "drop index " + (ifExists ? "if exists " : "") + idxName + " on " + tableName;
while for P2JPostgreSQLDialect it is:
return "drop index if exists " + quote(getSqlIndexName(index));
There is no on in the latter. More than that, the name of the index in PostgreSQL must be unique, so the index name contains the table name. OTOH, MariaDB does not impose this uniqueness (because of the syntax) but has some limitations regarding the length of the index name, so we intentionally removed the table name from the index name.
The reason for what you describe looks like the dialect is not updated when the list is split and iterated, or possibly otherwise incorrectly instantiated. I tested with the latest trunk and the full list of dialects and it works correctly for me.
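A minimal, runnable sketch of the two dialect methods quoted above; this is a hypothetical simplification (real FWD code takes Index objects, applies quoting, etc.), but it captures the syntactic difference between the two dialects:

```java
public class DropIndexSketch {
   // Simplified (hypothetical) versions of the two getDropIndexString() variants.

   /** MariaDB: index names are scoped per table, so the "on <table>" clause is required. */
   static String mariaDbDropIndex(String idxName, String tableName, boolean ifExists) {
      return "drop index " + (ifExists ? "if exists " : "") + idxName + " on " + tableName;
   }

   /** PostgreSQL: index names are unique per schema, so there is no "on <table>" clause. */
   static String postgresDropIndex(String sqlIndexName) {
      return "drop index if exists " + sqlIndexName;
   }

   public static void main(String[] args) {
      System.out.println(mariaDbDropIndex("_userid", "meta_user", true));
      System.out.println(postgresDropIndex("idx__meta_user_userid"));
   }
}
```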
#88 Updated by Igor Skornyakov 8 months ago
Ovidiu Maxiniuc wrote:
The statement for dropping the index is getDropIndexString() for each Dialect. If you look at MariaDbLenientDialect it is: [...] while for P2JPostgreSQLDialect it is: [...]
There is no on in the latter. More than that, the name of the index in PostgreSQL must be unique, so the index name contains the table name. OTOH, MariaDB does not impose this uniqueness (because of the syntax) but has some limitations regarding the length of the index name, so we intentionally removed the table name from the index name.
Yes, I've seen this. Strictly speaking, I've noticed the problem only in the schema_word_tables_${dbname}_postgresql.sql file.
The reason for what you describe looks like the dialect is not updated when the list is split and iterated, or possible otherwise incorrectly instantiated. I tested with latest trunk and the full list of dialects and it works correctly for me.
It is strange. I will try to debug the generation when I finish with more urgent tasks.
#89 Updated by Igor Skornyakov 8 months ago
Igor Skornyakov wrote:
Ovidiu Maxiniuc wrote:
The statement for dropping the index is getDropIndexString() for each Dialect. If you look at MariaDbLenientDialect it is: [...] while for P2JPostgreSQLDialect it is: [...]
There is no on in the latter. More than that, the name of the index in PostgreSQL must be unique, so the index name contains the table name. OTOH, MariaDB does not impose this uniqueness (because of the syntax) but has some limitations regarding the length of the index name, so we intentionally removed the table name from the index name.

Yes, I've seen this. Strictly speaking, I've noticed the problem only in the schema_word_tables_${dbname}_postgresql.sql file.

The reason for what you describe looks like the dialect is not updated when the list is split and iterated, or possibly otherwise incorrectly instantiated. I tested with the latest trunk and the full list of dialects and it works correctly for me.

It is strange. I will try to debug the generation when I finish with more urgent tasks.
Fixed in 4722/14737.
#92 Updated by Ovidiu Maxiniuc 8 months ago
Review of 4722a, up to 14738.
I am generally OK with the code. Just a few notes to add:
Globally:
- the module names (first line inside the header) should be updated for all of H2ScriptRunner.java, PostgreSQLScriptRunner.java and MariaDbScriptRunner.java.
- the method description, @param, @return and @throws tags should be separated by a blank line. The content of the three tags should be tab-aligned.
- the definitions of the COUNT and SCALE constants require too much computation. Why not define them as simple Strings? If needed, they can be split over multiple lines and concatenated by + so they will look almost identical to the current form. The compiler will do the operations statically, so no .stream().collect(Collectors.joining(.. is necessary at runtime. This is a price we do not have to pay.
- // comments are recommended instead of /* & */ inside method bodies, especially for single-line comments.

ScriptRunner:
- main(): the command-line arguments seem to be obsolete. A better explanation is necessary because this is the primary source of documentation for this utility. I do not know whether it is documented anywhere else.
- addScriptPath(): the /ddl/schema_... pattern is hardcoded. Maybe for udf/ functions this is OK, but I think we should not be so limited and should allow the scripts to be executed from any location.
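On the note about the COUNT/SCALE constants: the Java compiler folds a concatenation of compile-time constants into a single interned String literal, so there is no runtime cost. A minimal demonstration (the constant's content is made up for the example):

```java
public class ConstantFoldingDemo {
   // Folded by the compiler into one String literal at compile time;
   // no concatenation happens at runtime.
   static final String COUNT =
      "select count(*) from information_schema.tables " +
      "where table_schema = 'public'";

   public static void main(String[] args) {
      String literal = "select count(*) from information_schema.tables where table_schema = 'public'";
      // Constant expressions are interned, so the folded constant is the
      // very same object as an equal literal.
      System.out.println(COUNT.equals(literal)); // true
      System.out.println(COUNT == literal);      // true: both refer to the interned literal
   }
}
```

A stream-based join such as `.stream().collect(Collectors.joining(" "))` produces the same String, but only at runtime, on every class initialization.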
#93 Updated by Roger Borrello 7 months ago
I am working on updating the import script to utilize the new options for building the database tables. I am receiving this output:
Dropping DB hotel ...
Creating DB hotel using OWNER fwd_admin ...
Creating schema for hotel from table_hotel ...
Performing cmd=java -Xmx1g -Dfile.encoding=UTF-8 -cp /home/rfb/projects/fwd/4722a/build/lib/p2j.jar:/home/rfb/projects/hotel_gui/build/lib/hotel.jar com.goldencode.p2j.persist.deploy.ScriptRunner 'jdbc:postgresql://localhost:5432/hotel' fwd_admin admin create.tables table_hotel
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Running script: [/ddl/schema_table_hotel_postgresql.sql]
It hangs here, and does not proceed.
In the hotel.jar
I have included the files:
rfb@rfb:~/projects/hotel_gui$ unzip -l build/lib/hotel.jar | grep ddl
        0  2023-09-18 17:13   ddl/
     5795  2023-09-18 17:13   com/goldencode/hotel/dmo/_meta/meta_index_ddl.sql
    60319  2023-09-18 17:13   com/goldencode/hotel/dmo/_meta/meta_table_ddl.sql
     1792  2023-09-18 17:13   ddl/schema_index_hotel_h2.sql
     1712  2023-09-18 17:13   ddl/schema_index_hotel_mariadb.sql
     1606  2023-09-18 17:13   ddl/schema_index_hotel_postgresql.sql
     5795  2023-09-18 17:13   ddl/schema_index_standard_h2.sql
    43523  2023-09-18 17:13   ddl/schema_table_hotel_h2.sql
     3503  2023-09-18 17:13   ddl/schema_table_hotel_mariadb.sql
     3098  2023-09-18 17:13   ddl/schema_table_hotel_postgresql.sql
    60319  2023-09-18 17:13   ddl/schema_table_standard_h2.sql
     1385  2023-09-18 17:13   ddl/schema_word_tables_hotel_postgresql.sql
Am I doing something wrong? I am building the command with:
# Build the schema, use user account
export PGPASSWORD=$dbuser_pw
db_table_schema="table_${item}"
echo "Creating schema for $item from $db_table_schema ..."
cmd="java -Xmx1g -Dfile.encoding=UTF-8 -cp ${fwd_jar}:${app_jar} com.goldencode.p2j.persist.deploy.ScriptRunner '$db_url' $dbadmin $dbadmin_pw create.tables $db_table_schema"
echo "Performing cmd=$cmd"
eval $cmd
BTW, the -Djava.util.logging.config.file=${fwd.home}/cfg/logging.properties
option expects that the file system is available. Shouldn't this also come from the p2j.jar
? That is why I didn't include it in my command.
#94 Updated by Roger Borrello 7 months ago
Roger Borrello wrote:
It hangs here, and does not proceed.
I noticed I was using fwd_admin
instead of fwd_user
. I changed it, but the results are the same.
#95 Updated by Ovidiu Maxiniuc 7 months ago
I noticed the quotes around the $db_url. I do not think that is the main cause, but I am sure they are not removed before parsing, so that could fail. The $db_url normally does not contain spaces, so single or double quotes are not necessary anyway. Please try without them.
#96 Updated by Igor Skornyakov 7 months ago
Roger Borrello wrote:
Roger Borrello wrote:
It hangs here, and does not proceed.
I noticed I was using
fwd_admin
instead offwd_user
. I changed it, but the results are the same.
The problem was caused by the fact that the DDL scripts for the hotel projects had DOS line endings.
Fixed in 4722a/14739.
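Line-ending problems like this are easy to detect programmatically before they hang a script runner. A small hypothetical helper (not part of FWD):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class LineEndingCheck {
   /** Returns true if the text contains DOS (CRLF) line endings. */
   static boolean hasCrlf(String text) {
      return text.contains("\r\n");
   }

   public static void main(String[] args) throws IOException {
      // report the line-ending style of each file given on the command line
      for (String name : args) {
         String text = new String(Files.readAllBytes(Paths.get(name)));
         System.out.println(name + ": " + (hasCrlf(text) ? "CRLF (DOS)" : "LF (Unix)"));
      }
   }
}
```

On the shell side, `dos2unix file.sql` (or `sed -i 's/\r$//' file.sql`) converts such files in place.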
#97 Updated by Roger Borrello 7 months ago
Igor Skornyakov wrote:
The problem was caused by the fact that the DDL scripts for the hotel projects had DOS line endings.
Fixed in 4722a/14739.
Thank you! The 14740 revision contains an import.sh
that works great:
rfb@rfb:~/projects/hotel_gui$ p2j/import.sh -a ./build/lib/hotel.jar -d ./data/dump/
Dropping DB hotel ...
Creating DB hotel using OWNER fwd_admin ...
Creating schema for hotel from table_hotel ...
Performing cmd=java -Xmx1g -Dfile.encoding=UTF-8 -cp /home/rfb/projects/fwd/4722a/build/lib/p2j.jar:/home/rfb/projects/hotel_gui/build/lib/hotel.jar com.goldencode.p2j.persist.deploy.ScriptRunner jdbc:postgresql://localhost:5432/hotel fwd_user user create.tables table_hotel
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Running script: [/ddl/schema_table_hotel_postgresql.sql]
Setup UDFs ...
Performing cmd=java -Xmx1g -Dfile.encoding=UTF-8 -cp /home/rfb/projects/fwd/4722a/build/lib/p2j.jar:/home/rfb/projects/hotel_gui/build/lib/hotel.jar com.goldencode.p2j.persist.deploy.ScriptRunner jdbc:postgresql://localhost:5432/hotel fwd_admin admin udf.install.search_path
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Running script: [/udf/postgresql/udfs.sql]
Running script: [/udf/postgresql/words-udfs-sql.sql]
Setting search_path for [hotel]
Import dump from /home/rfb/projects/hotel_gui/data/dump/hotel into hotel ...
Performing cmd=java -Xmx1g -Djava.system.class.loader=com.goldencode.asm.AsmClassLoader -Dfile.encoding=UTF-8 -cp /home/rfb/projects/fwd/4722a/build/lib/p2j.jar:/home/rfb/projects/hotel_gui/build/lib/hotel.jar com.goldencode.p2j.pattern.PatternEngine -d 2 dbName=\"hotel\" targetDb=\"postgresql\" url=\"${db_url}\" uid=\"fwd_user\" pw=\"user\" dataPath=\"/home/rfb/projects/hotel_gui/data/dump/hotel/\" maxThreads=4 schema/import data/ hotel.p2o
Using unnamed schema profile.
INFO: Type match assertion disabled; set "checkTypes" to true to enable
INFO: Data export files will be read from '/home/rfb/projects/hotel_gui/data/dump/hotel/'
INFO: Using 4 threads for import
./cvt/data/hotel.p2o
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
MLog initialization issue: slf4j found no binding or threatened to use its (dangerously silent) NOPLogger. We consider the slf4j library not found.
0 [MLog-Init-Reporter] INFO com.mchange.v2.log.MLog - MLog clients using log4j logging.
106 [main] INFO com.mchange.v2.c3p0.C3P0Registry - Initializing c3p0-0.9.5.2 [built 08-December-2015 22:06:04 -0800; debug? true; trace: 10]
196 [main] INFO com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource - Initializing c3p0 pool... com.mchange.v2.c3p0.ComboPooledDataSource [ acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, contextClassLoaderSource -> caller, dataSourceName -> z8kfsxay1c5t9yq1lajoe|488d1cd7, debugUnreturnedConnectionStackTraces -> false, description -> null, driverClass -> org.postgresql.Driver, extensions -> {}, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, forceUseNamedDriverClass -> false, identityToken -> z8kfsxay1c5t9yq1lajoe|488d1cd7, idleConnectionTestPeriod -> 0, initialPoolSize -> 3, jdbcUrl -> jdbc:postgresql://localhost:5432/hotel, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 15, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 3, numHelperThreads -> 3, preferredTestQuery -> null, privilegeSpawnedThreads -> false, properties -> {user=******, password=******}, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, userOverrides -> {}, usesTraditionalReflectiveProxies -> false ]
Found identity generator sequence starting at 1.
IMPORT ORDER:
   1) guest.d:20802.0 [10401 bytes]
   2) stay.d:15508.0 [7754 bytes]
   3) reservation.d:3532.0 [1766 bytes]
   4) rate.d:2946.0 [1473 bytes]
   5) room.d:1042.0 [521 bytes]
   6) room-type.d:986.0 [493 bytes]
   7) _user.d:496.0 [248 bytes]
   8) service.d:484.0 [242 bytes]
   9) guest-id-type.d:426.0 [213 bytes]
  10) service-type.d:426.0 [213 bytes]
BEGIN guest.d (guest)
BEGIN rate.d (rate)
BEGIN reservation.d (reservation)
BEGIN stay.d (stay)
stay.d: SQL: drop index if exists idx__stay_idx__stay_id
rate.d: SQL: drop index if exists idx__rate_idx__room_type_date
guest.d: SQL: drop index if exists idx__guest_idx__stay_order
stay.d: SQL: create unique index idx__stay_idx__stay_id on stay (stay_id)
guest.d: SQL: create unique index idx__guest_idx__stay_order on guest (stay_id, order_)
reservation.d: SQL: drop index if exists idx__reservation_idx__room_num_date
reservation.d: Skipping dropping of word index: idx__guest
guest.d: Set up UTF-8 character encoding.
rate.d: Set up UTF-8 character encoding.
stay.d: Set up UTF-8 character encoding.
reservation.d: Set up UTF-8 character encoding.
rate.d: SQL: create index idx__rate_idx__room_type_date on rate (room_type, start_date, end_date, recid)
guest.d: Analyzing guest...
rate.d: Analyzing rate...
END guest.d. 225 records imported into guest
44.59355% complete: 225 recs in 0:00:00.959 (234/sec, 14077/min, 844629/hr)
BEGIN room.d (room)
room.d: SQL: drop index if exists idx__room_idx__room_num
END rate.d. 56 records imported into rate
50.90894% complete: 281 recs in 0:00:00.961 (292/sec, 17544/min, 1052653/hr)
BEGIN room-type.d (room_type)
room-type.d: SQL: drop index if exists idx__room_type_idx__type
room.d: SQL: create unique index idx__room_idx__room_num on room (room_num)
room-type.d: SQL: create unique index idx__room_type_idx__type on room_type (room_type)
stay.d: Analyzing stay...
END stay.d. 219 records imported into stay
84.15366% complete: 500 recs in 0:00:00.974 (513/sec, 30800/min, 1848049/hr)
BEGIN _user.d (meta_user)
_user.d: SQL: drop index if exists idx__meta_user_idx__userid
_user.d: SQL: drop index if exists idx__meta_user_idx__user_domain_name
_user.d: SQL: drop index if exists idx__meta_user_idx__user_sql_only_user
room.d: Set up UTF-8 character encoding.
_user.d: SQL: create unique index idx__meta_user_idx__userid on meta_user (upper(rtrim(userid)), upper(rtrim(domain_name)))
room-type.d: Set up UTF-8 character encoding.
room.d: Analyzing room...
room-type.d: Analyzing room_type...
reservation.d: SQL: create index idx__reservation_idx__room_num_date on reservation (room_num, start_date, end_date, recid)
END room.d. 30 records imported into room
86.38741% complete: 530 recs in 0:00:00.991 (534/sec, 32088/min, 1925327/hr)
BEGIN service.d (service)
service.d: SQL: drop index if exists idx__service_idx__stay_charge
END room-type.d. 7 records imported into room_type
88.50111% complete: 537 recs in 0:00:00.993 (540/sec, 32447/min, 1946827/hr)
service.d: Set up UTF-8 character encoding.
BEGIN guest-id-type.d (guest_id_type)
_user.d: Set up ISO8859-15 character encoding.
guest-id-type.d: SQL: drop index if exists idx__guest_id_type_idx__id_type
guest-id-type.d: SQL: create unique index idx__guest_id_type_idx__id_type on guest_id_type (id_type)
service.d: SQL: create index idx__service_idx__stay_charge on service (stay_id, service_type, service_date, recid)
reservation.d: Skipping creation of word index: idx__guest
_user.d: SQL: create index idx__meta_user_idx__user_domain_name on meta_user (upper(rtrim(domain_name)), sql_only_user, upper(rtrim(userid)), recid)
reservation.d: Analyzing reservation...
END reservation.d. 36 records imported into reservation
96.07271% complete: 573 recs in 0:00:01.007 (569/sec, 34141/min, 2048460/hr)
BEGIN service-type.d (service_type)
service-type.d: SQL: drop index if exists idx__service_type_idx__charge_type
service-type.d: SQL: create unique index idx__service_type_idx__charge_type on service_type (service_type)
guest-id-type.d: Set up UTF-8 character encoding.
_user.d: SQL: create index idx__meta_user_idx__user_sql_only_user on meta_user (sql_only_user, recid)
service.d: Analyzing service...
guest-id-type.d: Analyzing guest_id_type...
service-type.d: Set up UTF-8 character encoding.
END guest-id-type.d. 3 records imported into guest_id_type
96.98594% complete: 576 recs in 0:00:01.028 (560/sec, 33618/min, 2017120/hr)
END service.d. 5 records imported into service
98.02350% complete: 581 recs in 0:00:01.028 (565/sec, 33910/min, 2034630/hr)
service-type.d: Analyzing service_type...
_user.d: Analyzing meta_user...
END service-type.d. 3 records imported into service_type
98.93672% complete: 584 recs in 0:00:01.036 (563/sec, 33822/min, 2029343/hr)
END _user.d. 1 records imported into meta_user
100.00000% complete: 585 recs in 0:00:01.038 (563/sec, 33815/min, 2028901/hr)
Updated identity generator sequence starting at 40001.
Total records processed: 585 in 0:00:01.038 (563.584 records/sec)
Number of failed records: 0.
Total sequences initialized: 0.
Reading merge DMO definitions...
Elapsed job time: 00:00:01.715
Creating word table schema for hotel from word_tables_hotel ...
Performing cmd=java -Xmx1g -Dfile.encoding=UTF-8 -cp /home/rfb/projects/fwd/4722a/build/lib/p2j.jar:/home/rfb/projects/hotel_gui/build/lib/hotel.jar com.goldencode.p2j.persist.deploy.ScriptRunner jdbc:postgresql://localhost:5432/hotel fwd_admin admin create.word.objects word_tables_hotel
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Running script: [/ddl/schema_word_tables_hotel_postgresql.sql]
ANALYZE
I will test in the multi-DB configuration in docker, as well.
For the SLF4J
issue, I included the cfg/logging.properties
in the hotel.jar
, and specified -Djava.util.logging.config.file=/cfg/logging.properties
on the command line for ScriptRunner
, but that didn't change the error. How can that warning be resolved, or suppressed?
#98 Updated by Greg Shah 7 months ago
For the SLF4J issue, I included the cfg/logging.properties in the hotel.jar, and specified -Djava.util.logging.config.file=/cfg/logging.properties on the command line for ScriptRunner, but that didn't change the error. How can that warning be resolved, or suppressed?
We definitely don't want to put properties files all over the place.
#99 Updated by Roger Borrello 7 months ago
Greg Shah wrote:
We definitely don't want to put properties files all over the place.
Definitely not. For the purposes of this import tool, what do you recommend so we don't get those errors:
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
#101 Updated by Roger Borrello 7 months ago
Greg Shah wrote:
Did you try to include fwd-slf4j.jar in the classpath (described in the documentation)?
Is that for sure next to p2j.jar
? If so, I will change the -f
option to be the directory location of the FWD jars, not the explicit path to p2j.jar
, and include it in the classpath. Is this Pandora's box?
#103 Updated by Greg Shah 7 months ago
I don't think it opens pandora's box to assume that all our dependent libraries are in the same directory. That is the normal way we deploy. We wouldn't want to pick up random versions littered around the system. And our classpath augmentations in the manifest file would break.
#104 Updated by Roger Borrello 7 months ago
Greg Shah wrote:
I don't think it opens pandora's box to assume that all our dependent libraries are in the same directory. That is the normal way we deploy. We wouldn't want to pick up random versions littered around the system. And our classpath augmentations in the manifest file would break.
Understood... I was just being a minimalist in terms of only adding what was necessary for a successful configuration. For example, I could include the logging.properties
from the application in the jar, and extract it to a temporary location for use in the -Djava.util.logging.config.file=$logging_properties
option, then remove it. Or I could leave whatever is the default.
#105 Updated by Greg Shah 7 months ago
As soon as you require logging.properties, there is now a second way to configure logging and it is incompatible with our primary logging infrastructure. Please avoid any such usage. The idea of the fwd-slf4j.jar is to make all the non-FWD usage of SLF4J automatically log to the FWD logging infrastructure.
#106 Updated by Roger Borrello 7 months ago
Greg Shah wrote:
As soon as you require logging.properties, there is now a second way to configure logging and it is incompatible with our primary logging infrastructure. Please avoid any such usage. The idea of the fwd-slf4j.jar is to make all the non-FWD usage of SLF4J automatically log to the FWD logging infrastructure.
So is that to say our build_db.xml
files should have <jvmarg value="-Djava.util.logging.config.file=${fwd.home}/cfg/logging.properties"/>
removed? I was just following that as my example. My script works just fine without that line, but I want to be consistent. I can update hotel projects, if so.
#108 Updated by Roger Borrello 7 months ago
Greg Shah wrote:
Yes, I think we should not need the
logging.properties
.
There are also 3 instances of debug arguments included in the hotel_gui build_db.xml:
<jvmarg value="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=xxxx" />
Should those be left as "examples", or removed?
#109 Updated by Ovidiu Maxiniuc 7 months ago
The chances are these were added by me when debugging various pieces of code. Certainly, they are not a problem in hotel projects, but I think in deployment / production scripts they should be removed.
#110 Updated by Roger Borrello 7 months ago
I committed an updated import.sh
to this branch which allows for the tables to be built without the logging warnings.
I also am updating the hotel and hotel_gui projects to match up in the DB import area, without the logging.properties
. However, neither build is currently generating ddl/schema_word_tables_hotel_h2.sql
(nor ddl/schema_word_tables_standard_h2.sql
for that matter). Any clues what could be causing this? The ddl/schema_word_tables_hotel_postgresql.sql
is there (0 bytes). I've been pounding my head for a while on this.
#111 Updated by Eric Faulhaber 7 months ago
Roger Borrello wrote:
I committed an updated import.sh to this branch which allows for the tables to be built without the logging warnings.

I also am updating the hotel and hotel_gui projects to match up in the DB import area, without the logging.properties. However, neither build is currently generating ddl/schema_word_tables_hotel_h2.sql (nor ddl/schema_word_tables_standard_h2.sql for that matter). Any clues what could be causing this? The ddl/schema_word_tables_hotel_postgresql.sql is there (0 bytes). I've been pounding my head for a while on this.
These files are generated during schema conversion (specifically, DDL generation), not during import. If it is there, but 0 bytes, perhaps we have no word indices in the hotel[_gui] projects? In that case, it would make more sense not to generate a file at all, rather than an empty one.
#112 Updated by Eric Faulhaber 7 months ago
Never mind. I found this in the hotel.df
for Hotel GUI:
ADD INDEX "guest" ON "reservation" AREA "Schema Area" WORD INDEX-FIELD "guest" ASCENDING
#113 Updated by Roger Borrello 7 months ago
- File cvt_20230919_171243.log added
I didn't see any particular problems building. What stage should they be logged at? Log attached.
#114 Updated by Eric Faulhaber 7 months ago
I just updated to the latest Hotel GUI project (rev 273) and ran ant clean deploy.all
, using FWD trunk rev 14723. My ddl
directory looks like this:
total 144
-rw-rw---- 1 ecf ecf  1792 Sep 19 18:09 schema_index_hotel_h2.sql
-rw-rw---- 1 ecf ecf  1712 Sep 19 18:09 schema_index_hotel_mariadb.sql
-rw-rw---- 1 ecf ecf  1606 Sep 19 18:09 schema_index_hotel_postgresql.sql
-rw-rw---- 1 ecf ecf  5795 Sep 19 18:09 schema_index_standard_h2.sql
-rw-rw---- 1 ecf ecf 43633 Sep 19 18:09 schema_table_hotel_h2.sql
-rw-rw---- 1 ecf ecf  3609 Sep 19 18:09 schema_table_hotel_mariadb.sql
-rw-rw---- 1 ecf ecf  3098 Sep 19 18:09 schema_table_hotel_postgresql.sql
-rw-rw---- 1 ecf ecf 60319 Sep 19 18:09 schema_table_standard_h2.sql
-rw-rw---- 1 ecf ecf  1268 Sep 19 18:09 schema_word_tables_hotel_h2.sql
-rw-rw---- 1 ecf ecf   776 Sep 19 18:09 schema_word_tables_hotel_mariadb.sql
-rw-rw---- 1 ecf ecf  1436 Sep 19 18:09 schema_word_tables_hotel_postgresql.sql
-rw-rw---- 1 ecf ecf     0 Sep 19 18:09 schema_word_tables_standard_h2.sql
schema_word_tables_hotel_h2.sql: [...]
schema_word_tables_hotel_mariadb.sql: [...]
schema_word_tables_hotel_postgresql.sql: [...]
standard.df
indeed contains no WORD indices, which explains the 0-byte schema_word_tables_standard_h2.sql
file.
This was done outside of Docker, so there may be something going on with the container, or with your uncommitted changes.
#115 Updated by Roger Borrello 7 months ago
Thanks, Eric. Did you run any prepare_template.sh
varieties?
#116 Updated by Eric Faulhaber 7 months ago
Roger Borrello wrote:
I didn't see any particular problems building. What stage should they be logged at? Log attached.
Looks like it's done silently (only logs for errors), but it happens during the "Generate Data Model Objects (database DMOs)" step, just after the series of "Generating INDEX DDL for table ..." messages. It happens in rules/schema/java_dmo.xml
:
<post-rules>
   <rule>!ddlSchema.equals("_temp")
      <!-- write CREATE TABLE statements in a fresh file -->
      <action>ddl.generateTableDDLs(ddlSchema)</action>
      <!-- append the SEQUENCES creation DDL statements -->
      <action>ddl.generateSequenceDDLs(ddlSchema)</action>
      <!-- put the index creation DDL to different file -->
      <action>ddl.generateIndexDDLs(ddlSchema)</action>
      <!-- put misc word tables' related DDL to different file -->
      <action>ddl.generateWordTablesDDLs(ddlSchema)</action>
   </rule>
</post-rules>
For some reason, we only log the index DDL creation, not any of the other three types of DDL creation. And that's not even proper logging; it's done with System.out.println
.
#117 Updated by Eric Faulhaber 7 months ago
Roger Borrello wrote:
Thanks, Eric. Did you run any
prepare_template.sh
varieties?
No, this was an update of an existing hotel_gui
checkout. I think I ran prepare_hotel.sh
for this one, many moons ago.
Is prepare_template.sh
a requirement now? TBH, I haven't kept up with every change in these sample projects.
#118 Updated by Roger Borrello 7 months ago
Hmm... I checked out revision 273, ran the ./prepare_template.sh -n < hotel.input.linux.pg
and got a successful build. However, the DDL for H2 was still missing, and there was an error, but not fatal:
import.db.pg:
     [java] Listening for transport dt_socket at address: 2180
     [java] Using unnamed schema profile.
     [java] Elapsed job time: 00:00:00.185
     [java] Failed to locate 'data/standard.dict' as a resource. Using developer mode: attempting to locate it as local file in ${cvtpath} location.
     [java] ERROR:
     [java] com.goldencode.ast.AstException: Bogus registry resource ./cfg/registry.xml
     [java] at com.goldencode.ast.XmlFilePlugin.<init>(XmlFilePlugin.java:232)
     [java] at com.goldencode.p2j.pattern.PatternEngine.lambda$static$0(PatternEngine.java:523)
     [java] at com.goldencode.ast.AstManager.getPlugin(AstManager.java:646)
     [java] at com.goldencode.ast.AstManager.save(AstManager.java:633)
     [java] at com.goldencode.p2j.pattern.PatternEngine.finish(PatternEngine.java:1250)
     [java] at com.goldencode.p2j.pattern.PatternEngine.run(PatternEngine.java:1114)
     [java] at com.goldencode.p2j.pattern.PatternEngine.main(PatternEngine.java:2219)
     [exec] CREATE FUNCTION
     [exec] DROP TRIGGER
     [exec] CREATE TRIGGER
     [exec] DROP TRIGGER
     [exec] CREATE TRIGGER
     [exec] ALTER TABLE
     [exec] psql:ddl/schema_word_tables_hotel_postgresql.sql:13: NOTICE: trigger "reservation__guest__upd" for relation "reservation" does not exist, skipping
     [exec] psql:ddl/schema_word_tables_hotel_postgresql.sql:20: NOTICE: trigger "reservation__guest__ins" for relation "reservation" does not exist, skipping
     [exec] psql:ddl/schema_word_tables_hotel_postgresql.sql:27: NOTICE: constraint "pk__reservation__guest" of relation "reservation__guest" does not exist, skipping
     [exec] ALTER TABLE
     [exec] DROP INDEX
     [exec] CREATE INDEX
     [exec] ALTER TABLE
     [exec] DROP INDEX
     [exec] CREATE INDEX
     [exec] ALTER TABLE
     [exec] psql:ddl/schema_word_tables_hotel_postgresql.sql:29: NOTICE: index "idx__reservation__guest" does not exist, skipping
     [exec] psql:ddl/schema_word_tables_hotel_postgresql.sql:31: NOTICE: constraint "fk__reservation__guest" of relation "reservation__guest" does not exist, skipping
     [exec] psql:ddl/schema_word_tables_hotel_postgresql.sql:32: NOTICE: index "fkidx__reservation__guest" does not exist, skipping
     [exec] ANALYZE

clean.db.mariadb:

create.db.mariadb:

import.db.mariadb:

import.db:

deploy.all:

BUILD SUCCESSFUL
Total time: 5 minutes 45 seconds

rfb@rfb:~/projects/hotel_gui_273$ ls -l ddl
total 128
-rw-rw---- 1 rfb rfb  1792 Sep 19 18:38 schema_index_hotel_h2.sql
-rw-rw---- 1 rfb rfb  1606 Sep 19 18:38 schema_index_hotel_postgresql.sql
-rw-rw---- 1 rfb rfb  5795 Sep 19 18:38 schema_index_standard_h2.sql
-rw-rw---- 1 rfb rfb 43523 Sep 19 18:38 schema_table_hotel_h2.sql
-rw-rw---- 1 rfb rfb  3098 Sep 19 18:38 schema_table_hotel_postgresql.sql
-rw-rw---- 1 rfb rfb 60319 Sep 19 18:38 schema_table_standard_h2.sql
-rw-rw---- 1 rfb rfb  1385 Sep 19 18:38 schema_word_tables_hotel_postgresql.sql
#119 Updated by Roger Borrello 7 months ago
Eric Faulhaber wrote:
Is prepare_template.sh a requirement now? TBH, I haven't kept up with every change in these sample projects.
I believe the project is configured for H2 by default. I have not been configuring the import.sh to handle all DBs, although I believe that would be wise. For now, I was concentrating on getting PostgreSQL correct.
#120 Updated by Roger Borrello 7 months ago
Branch 4722a has been rebased to the latest trunk (14745). It is at revision 14758. Ovidiu and Igor, can you confirm the updates to src/com/goldencode/p2j/persist/dialect/MariaDbLenientDialect.java and src/com/goldencode/p2j/persist/orm/DDLGeneratorWorker.java are correct? There were some conflicts.
#121 Updated by Ovidiu Maxiniuc 7 months ago
- data/standard.dict is not found in the jar. In this case, FWD tries to find it in a file structure following the project layout. Since that structure is present only during development, not in a deployment, I named this fallback "developer mode". Apparently the file was found there; otherwise an error at a higher severity level would have been logged.
- ./cfg/registry.xml is also not found in the classpath/jar. In this case the stack trace is printed. I do not know why this file is needed, and the following stack line does not seem right: com.goldencode.ast.AstManager.save(AstManager.java:633). Saving an AST during import?
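The lookup order described above (classpath/jar first, then the conversion project on disk as a "developer mode" fallback) can be sketched as follows. This is an illustrative helper under assumed names (ResourceLoader, open); it is not the actual FWD code:

```java
import java.io.*;

public class ResourceLoader
{
   // Hypothetical helper mirroring the lookup order described above:
   // try the classpath (application jar) first, then fall back to a
   // local file under the conversion project path ("developer mode").
   public static InputStream open(String name, String cvtPath)
   throws IOException
   {
      InputStream in =
         ResourceLoader.class.getClassLoader().getResourceAsStream(name);
      if (in != null)
      {
         return in;   // found inside the jar
      }
      File f = new File(cvtPath, name);
      if (f.exists())
      {
         return new FileInputStream(f);   // developer-mode fallback
      }
      throw new FileNotFoundException(
         name + " not found in jar or under " + cvtPath);
   }

   public static void main(String[] args) throws IOException
   {
      // simulate a conversion project on disk holding data/standard.dict
      File dir = new File(System.getProperty("java.io.tmpdir"), "cvt-demo");
      new File(dir, "data").mkdirs();
      try (PrintWriter w = new PrintWriter(new File(dir, "data/standard.dict")))
      {
         w.println("demo");
      }
      try (BufferedReader r = new BufferedReader(new InputStreamReader(
              open("data/standard.dict", dir.getPath()))))
      {
         System.out.println(r.readLine());   // prints "demo"
      }
   }
}
```

The higher-severity error Ovidiu mentions would correspond to the final FileNotFoundException branch, where neither location has the file.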
#123 Updated by Roger Borrello 7 months ago
Greg Shah wrote:
There is something going on with my setup such that the H2 word tables are not generated from my conversion. I wasn't sure if the project had to be set up one way or another:
prepare_template.sh is only used for turning Hotel GUI into a template for a new application. You would only ever use prepare_hotel.sh to set up the Hotel GUI project for conversion/runtime.
- ./prepare_template.sh -n < hotel.input.linux.pg = PostgreSQL on Linux
- ./prepare_template.sh -n < hotel.input.linux = H2 on Linux
- ./prepare_template.sh -n < hotel.input.windows = H2 on Windows
#124 Updated by Ovidiu Maxiniuc 7 months ago
Roger Borrello wrote:
Branch 4722a has been rebased to the latest trunk (14745). It is at revision 14758. Ovidiu and Igor, can you confirm the updates to src/com/goldencode/p2j/persist/dialect/MariaDbLenientDialect.java and src/com/goldencode/p2j/persist/orm/DDLGeneratorWorker.java are correct? There were some conflicts.
I do not see any issues with the rebase except for the following line from DDLGeneratorWorker, related to the contains operator / word tables. I vaguely remember the following change:
--- r14758/src/com/goldencode/p2j/persist/orm/DDLGeneratorWorker.java
+++ myMind/src/com/goldencode/p2j/persist/orm/DDLGeneratorWorker.java
@@ -1410,7 +1410,7 @@
}
out.print(eoln);
}
- if (!dialect.useUdf4Contains())
+ if (dialect.useWordTables())
{
generateWordTable(dbName, table, out, dialect, eoln);
}
Igor, can you confirm (or refute) this?
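For readers following the diff above, the effect of the changed condition can be sketched like this; Dialect and generate are stand-ins for the FWD types, not the real API:

```java
public class WordTableGate
{
   // Stand-in for the FWD dialect interface; only the flag matters here.
   interface Dialect
   {
      boolean useWordTables();
   }

   // Word table DDL is emitted only when the dialect actually supports word
   // tables, rather than whenever it lacks a UDF-based CONTAINS (the old
   // !useUdf4Contains() condition, which wrongly included H2 and MariaDB).
   static String generate(Dialect dialect, String table)
   {
      if (!dialect.useWordTables())
      {
         return "";   // e.g. H2 or MariaDB: no word table DDL at all
      }
      return "-- word table DDL for " + table;   // placeholder for real DDL
   }

   public static void main(String[] args)
   {
      Dialect pg = () -> true;    // PostgreSQL-style dialect with word tables
      Dialect h2 = () -> false;   // H2: word tables not implemented
      System.out.println(generate(pg, "reservation"));
      System.out.println(generate(h2, "reservation").isEmpty());
   }
}
```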
#125 Updated by Igor Skornyakov 7 months ago
Ovidiu Maxiniuc wrote:
Roger Borrello wrote:
Branch 4722a has been rebased to the latest trunk (14745). It is at revision 14758. Ovidiu and Igor, can you confirm the updates to src/com/goldencode/p2j/persist/dialect/MariaDbLenientDialect.java and src/com/goldencode/p2j/persist/orm/DDLGeneratorWorker.java are correct? There were some conflicts.

I do not see any issues with the rebase except for the following line from DDLGeneratorWorker, related to the contains operator / word tables. I vaguely remember the following change:
[...]
Igor, can you confirm (or refute) this?
Yes, this fix is correct (tested). It addresses issue #4722-85.
#126 Updated by Roger Borrello 7 months ago
The updates that prevent unneeded DDL from being generated when the dialect does not use it are why no H2 word table DDL is created. With my debug statements, I see this:
     [java] NOT generating schema_word_tables_hotel_h2.sql since the dialect does not use them
...
     [java] NOT generating schema_word_tables_hotel_mariadb.sql since the dialect does not use them
...
     [java] NOT generating schema_word_tables_standard_h2.sql since the dialect does not use them
...
But the build_db.xml shows it is still attempting to handle the file:
     [java] Exception in thread "main" org.h2.message.DbException: IO Exception: "java.io.FileNotFoundException: ddl/schema_word_tables_hotel_h2.sql (No such file or directory)"; "ddl/schema_word_tables_hotel_h2.sql" [90031-200]
     [java]     at org.h2.message.DbException.get(DbException.java:194)
     [java]     at org.h2.message.DbException.convertIOException(DbException.java:394)
     [java]     at org.h2.tools.RunScript.process(RunScript.java:331)
     [java]     at org.h2.tools.RunScript.runTool(RunScript.java:143)
     [java]     at org.h2.tools.RunScript.main(RunScript.java:70)
     [java] Caused by: org.h2.jdbc.JdbcSQLNonTransientException: IO Exception: "java.io.FileNotFoundException: ddl/schema_word_tables_hotel_h2.sql (No such file or directory)"; "ddl/schema_word_tables_hotel_h2.sql" [90031-200]
     [java]     at org.h2.message.DbException.getJdbcSQLException(DbException.java:505)
     [java]     at org.h2.message.DbException.getJdbcSQLException(DbException.java:429)
     [java]     ... 5 more
     [java] Caused by: java.io.FileNotFoundException: ddl/schema_word_tables_hotel_h2.sql (No such file or directory)
     [java]     at java.io.FileInputStream.open0(Native Method)
     [java]     at java.io.FileInputStream.open(FileInputStream.java:195)
     [java]     at java.io.FileInputStream.<init>(FileInputStream.java:138)
     [java]     at java.io.FileInputStream.<init>(FileInputStream.java:93)
     [java]     at org.h2.store.fs.FilePathDisk.newInputStream(FilePathDisk.java:337)
     [java]     at org.h2.store.fs.FileUtils.newInputStream(FileUtils.java:224)
     [java]     at org.h2.tools.RunScript.process(RunScript.java:187)
     [java]     at org.h2.tools.RunScript.process(RunScript.java:328)
     [java]     ... 2 more
#127 Updated by Igor Skornyakov 7 months ago
Roger Borrello wrote:
The updates to prevent an unneeded DDL from being generated if the dialect does not use them is why there isn't any H2 DDL created. With my debug statements, I see this:
[...]But the
build_db.xml
shows it is still attempting to handle the file:
[...]
This can happen only if your build_db.xml contains a corresponding ScriptRunner call.
In my environment I do not see this.
#128 Updated by Roger Borrello 7 months ago
Igor Skornyakov wrote:
This can happen only if your build_db.xml contains a corresponding ScriptRunner call.
In my environment I do not see this.
I'm using hotel[_gui], which do have these calls. Should there ever be a call to the ScriptRunner for schema_word_tables for H2, if it will no longer be generated? It looks like the same situation for MariaDB.
#129 Updated by Igor Skornyakov 7 months ago
Roger Borrello wrote:
Igor Skornyakov wrote:
This can happen only if your build_db.xml contains a corresponding ScriptRunner call.
In my environment I do not see this.

I'm using hotel[_gui], which do have these calls. Should there ever be a call to the ScriptRunner for schema_word_tables for H2, if it will no longer be generated? It looks like the same situation for MariaDB.
I do not know how this can happen. Word tables for H2 are not supposed to be implemented because of some H2 limitations. For MariaDB word tables support is not yet implemented (see #6741).
#130 Updated by Roger Borrello 7 months ago
Igor Skornyakov wrote:
I do not know how this can happen. Word tables for H2 are not supposed to be implemented because of some H2 limitations. For MariaDB word tables support is not yet implemented (see #6741).
I plan on removing the call to the ScriptRunner in the hotel projects for H2 word tables. They must have been in there by mistake.
Regarding MariaDB, is there a quick-start guide anywhere that explains how to configure user IDs and ports so it can run alongside my PostgreSQL for testing? I want to make sure the hotel projects work with MariaDB. I have 10.6 installed via:
sudo apt install mariadb-server
But I'm not sure if it's ready to use for hotel or not.
#131 Updated by Ovidiu Maxiniuc 7 months ago
Try https://proj.goldencode.com/projects/p2j/wiki/Database_Server_Setup_for_MariaDb_on_Linux.
I did the import there and encountered no issues while navigating in the GUI application.
#132 Updated by Roger Borrello 7 months ago
I am updating the import.sh script to handle all the DBs. For H2, there are no separate commands to create the database and then build the tables from DDL. Instead, it is one command (from build_db.xml):
<java classname="org.h2.tools.RunScript" fork="true" failonerror="true" dir="${basedir}">
   <jvmarg value="-Xmx1g"/>
   <jvmarg value="${java.locale.providers.param}"/>
   <jvmarg value="${java.ext.dir.param}"/>
   <jvmarg value="-Dfile.encoding=UTF-8"/>
   <jvmarg value="-Djava.system.class.loader=com.goldencode.asm.AsmClassLoader"/>
   <arg value="-url"/>
   <arg value="${sql.url.h2}"/>
   <arg value="-user"/>
   <arg value="${sql.user}"/>
   <arg value="-password"/>
   <arg value="${sql.user.pass}"/>
   <arg value="-script"/>
   <arg value="ddl/schema_table_${db.name}_h2.sql"/>
   <arg value="-continueOnError"/>
   <classpath refid="app.classpath"/>
</java>
Can this be replaced with the ScriptRunner?

db_table_schema="table_${item}"
cmd="java -Xmx1g -Dfile.encoding=UTF-8 -cp ${fwd_jar}/p2j.jar:${fwd_jar}/fwd-slf4j.jar:${app_jar} com.goldencode.p2j.persist.deploy.ScriptRunner $db_url $dbuser $dbuser_pw create.tables $db_table_schema"
#133 Updated by Roger Borrello 7 months ago
I checked in an updated import.sh which works with H2. I am still testing with MariaDB, which I just attempted.
- Do I need to create a fwd_user user?
- Do I need to create an initial database, like a cluster for PostgreSQL?
- What is the default port to specify?
#134 Updated by Ovidiu Maxiniuc 7 months ago
Roger Borrello wrote:
Some things I need clarification on with respect to MariaDB:

- Do I need to create a fwd_user user?
Yes. Although you can connect using the superuser with the console for admin, we need a user with less access for normal usage.
- Do I need to create an initial database, like a cluster for PostgreSQL?
Probably not. The import script automatically creates the database, too.
- What is the default port for specifying?
3306, IIRC.
#135 Updated by Roger Borrello 7 months ago
With some updates to my import.sh, I am close with MariaDB:
Dropping DB hotel ...
Database "hotel" dropped
Creating DB hotel using OWNER fwd_admin ...
Creating schema for hotel from table_hotel ...
Performing cmd=java -Xmx1g -Dfile.encoding=UTF-8 -cp /home/rfb/projects/fwd/4722a/build/lib/p2j.jar:/home/rfb/projects/fwd/4722a/build/lib/fwd-slf4j.jar:/home/rfb/projects/hotel_gui/deploy/lib/hotel.jar com.goldencode.p2j.persist.deploy.ScriptRunner jdbc:mariadb://localhost:3306/hotel fwd_admin admin udf.install.search_path
Running script: [/udf/mariadb/udfs.sql]
Setup UDFs ...
Performing cmd=java -Xmx1g -Dfile.encoding=UTF-8 -cp /home/rfb/projects/fwd/4722a/build/lib/p2j.jar:/home/rfb/projects/fwd/4722a/build/lib/fwd-slf4j.jar:/home/rfb/projects/hotel_gui/deploy/lib/hotel.jar com.goldencode.p2j.persist.deploy.ScriptRunner jdbc:mariadb://localhost:3306/hotel fwd_admin admin udf.install.search_path
Running script: [/udf/mariadb/udfs.sql]
Import dump from /home/rfb/projects/hotel_gui/data/dump/hotel into hotel ...
Performing cmd=java -Xmx1g -Djava.system.class.loader=com.goldencode.asm.AsmClassLoader -Dfile.encoding=UTF-8 -Djava.locale.providers=SPI,JRE -Djava.ext.dirs=/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext:/usr/java/packages/lib/ext:/home/rfb/projects/hotel_gui/deploy/lib/spi -cp /home/rfb/projects/fwd/4722a/build/lib/p2j.jar:/home/rfb/projects/fwd/4722a/build/lib/fwd-slf4j.jar:/home/rfb/projects/hotel_gui/deploy/lib/hotel.jar com.goldencode.p2j.pattern.PatternEngine -d 2 dbName=\"hotel\" targetDb=\"mariadb\" url=\"jdbc:mariadb://localhost:3306/hotel\" uid=\"fwd_user\" pw=\"user\" dataPath=\"/home/rfb/projects/hotel_gui/data/dump/hotel/\" maxThreads=4 schema/import data/ hotel.p2o
Using unnamed schema profile.
INFO: Type match assertion disabled; set "checkTypes" to true to enable
INFO: Data export files will be read from '/home/rfb/projects/hotel_gui/data/dump/hotel/'
INFO: Using 4 threads for import
./cvt/data/hotel.p2o
MLog clients using slf4j logging.
Initializing c3p0-0.9.5.2 [built 08-December-2015 22:06:04 -0800; debug? true; trace: 10]
Initializing c3p0 pool... com.mchange.v2.c3p0.ComboPooledDataSource [ acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, contextClassLoaderSource -> caller, dataSourceName -> z8kfsxay1ec1e8u1epzofj|2474f125, debugUnreturnedConnectionStackTraces -> false, description -> null, driverClass -> org.mariadb.jdbc.Driver, extensions -> {}, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, forceUseNamedDriverClass -> false, identityToken -> z8kfsxay1ec1e8u1epzofj|2474f125, idleConnectionTestPeriod -> 0, initialPoolSize -> 3, jdbcUrl -> jdbc:mariadb://localhost:3306/hotel, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 15, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 3, numHelperThreads -> 3, preferredTestQuery -> null, privilegeSpawnedThreads -> false, properties -> {user=******, password=******}, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, userOverrides -> {}, usesTraditionalReflectiveProxies -> false ]
Error: 1044-42000: Access denied for user 'fwd_user'@'localhost' to database 'hotel'
Error: 1044-42000: Access denied for user 'fwd_user'@'localhost' to database 'hotel'
Error: 1044-42000: Access denied for user 'fwd_user'@'localhost' to database 'hotel'
...
Error: 1044-42000: Access denied for user 'fwd_user'@'localhost' to database 'hotel'
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@543e5c60 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (30). Last acquisition attempt exception:
java.sql.SQLSyntaxErrorException: (conn=251) Access denied for user 'fwd_user'@'localhost' to database 'hotel'
    at org.mariadb.jdbc.export.ExceptionFactory.createException(ExceptionFactory.java:282)
    at org.mariadb.jdbc.export.ExceptionFactory.create(ExceptionFactory.java:370)
    at org.mariadb.jdbc.client.impl.ConnectionHelper.authenticationHandler(ConnectionHelper.java:287)
    at org.mariadb.jdbc.client.impl.StandardClient.<init>(StandardClient.java:185)
    at org.mariadb.jdbc.Driver.connect(Driver.java:70)
    at org.mariadb.jdbc.Driver.connect(Driver.java:101)
    at org.mariadb.jdbc.Driver.connect(Driver.java:27)
    at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:175)
    at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:220)
    at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:206)
    at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:203)
    at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1138)
    at com.mchange.v2.resourcepool.BasicResourcePool.doAcquireAndDecrementPendingAcquiresWithinLockOnSuccess(BasicResourcePool.java:1125)
    at com.mchange.v2.resourcepool.BasicResourcePool.access$700(BasicResourcePool.java:44)
    at com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask.run(BasicResourcePool.java:1870)
    at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:696)
Should the user ID for the PatternEngine be fwd_admin?
#136 Updated by Ovidiu Maxiniuc 7 months ago
Should the user ID for the PatternEngine be fwd_admin?
Not mandatory, but fwd_user does not have enough privileges granted. Insert, Update, Select, Index (for creating indices), Execute (stored procedures), and Delete are the minimum. Others may apply.
The Index privilege is the key. It is normally an admin privilege which we do not need for fwd_user in normal usage. So we should probably leave fwd_user with the minimum it requires, and use fwd_admin for the import.
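Assuming a stock MariaDB setup, the minimum privilege set listed above could be granted with a statement like the one this sketch prints; the database and user names (hotel, fwd_admin) are examples from this thread, not requirements:

```java
public class GrantSketch
{
   // Builds a MariaDB GRANT for the minimum privileges named above;
   // EXECUTE covers the UDF stored routines, INDEX covers index creation.
   static String grantSql(String db, String user)
   {
      String[] privs = { "SELECT", "INSERT", "UPDATE", "DELETE", "INDEX", "EXECUTE" };
      return "GRANT " + String.join(", ", privs)
           + " ON " + db + ".* TO '" + user + "'@'localhost';";
   }

   public static void main(String[] args)
   {
      // run the printed statement as the MariaDB superuser before the import
      System.out.println(grantSql("hotel", "fwd_admin"));
   }
}
```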
#137 Updated by Roger Borrello 7 months ago
Thanks, Ovidiu. I was following the MariaDB setup from build_db.xml in hotel_gui. I'll update that. Right now my import.sh has been tested in hotel for PostgreSQL, H2, and MariaDB. It has been tested in CounterAct both in merged and multi-database configurations. I am testing in the Docker configurations. I am also working on the Hotel applications to use Docker for all the DBs.
#138 Updated by Roger Borrello 7 months ago
The latest revision of import.sh
is checked into the branch:
rfb@rfb:~/projects/hotel_gui$ ./p2j/import.sh -?
Usage: ./p2j/import.sh [-d<dump_dir>] [-qc] [-h<host or container>] [-r<data_dir>] [-p<port>] [-f<path_to_FWD_jars>] [-a<path_to_app.jar>] --db_type=<type> --db_adminid=<dbadmin> --db_adminpw=<dbadminpw> --db_userid=<dbuser> --db_userpw=<dbuserpw> --db_names=<db names>
Where:
   d = Path to the data dump directories if performing an import. Must be specified to import.
   q = Do *not* drop/recreate database. Just perform import.
   c = Running in a container. If this option is passed in, "-h" specifies a container
   h = Target server or container (if -c specified) (def=localhost)
   r = Relative path to the data directory (def=data)
   p = Target port of server or container (def=5432)
       NOTE: specify "-p 3306" (or other value) if "--db_type=mariadb" to override default of 5432
   f = Path to directory containing FWD jars (p2j.jar, fwd-slf4j.jar) (def=/opt/fwd/build/lib)
   a = Path to app.jar file containing DDL (def=/opt/hotel/lib/hotel.jar)
   --db_type=<type>         Database type (pg, h2, or mariadb) (def=pg)
   --db_home=<dbhome>       Location for h2 database, ignored if dbtype not h2 (def=./deploy/db)
   --db_adminid=<dbadmin>   DB admin user for DB creation (def=fwd_admin)
   --db_adminpw=<dbadminpw> DB admin user password for the import (def=admin)
   --db_user=<dbuser>       DB user for the import (def=fwd_user)
   --db_userpw=<dbuserpw>   DB user password for the import (def=user)
   --db_names=<db names>    DB names for import (def=hotel)
                            Option format is "<db names (separate multiples with commas)>"
                            For example: "--db_names=hotel" or "--db_names=hotel,ledger,prices"
NOTE: APP_DBADMIN_PASSWORD and APP_DBUSER_PASSWORD variables can be set in the environment instead of passed on command line.
The postgres support built for <large_gui_application> works well, both single and multi-database. I am working on updating the hotel projects, although I cannot integrate until this branch is merged to trunk.
#139 Updated by Roger Borrello 7 months ago
Branch 4722a has been rebased to the latest trunk (14754).
Is there anything keeping this from being merged with trunk?
#141 Updated by Roger Borrello 7 months ago
Greg Shah wrote:
Roger: What is the migration plan? There are changes to apply for both Hotel projects? What about other projects?
What is the plan for build_db.xml?

To be honest, I never meant for import.sh to replace it.
#142 Updated by Greg Shah 7 months ago
For Hotel GUI/ChUI, I would expect the standard projects to be simplified to some degree, and that we would implement any changes to the default processing needed to ensure the import can run without the conversion project present.
I don't assume that build_db.xml
is gone, but if it is no longer needed that is a different story.
I don't want to force every project to have edits, but if there are required changes that would otherwise regress things, then we need a plan for that.
#143 Updated by Ovidiu Maxiniuc 7 months ago
I am not aware of any open issues here.
When I first created import.sh, my intention was to have a faster way of testing the import without repeated changes to build_db.xml. The idea was that once the parameters of the process were stable, I would add them back to build_db.xml. In the meantime, other developers could more easily see what the new parameters were.
However, Roger did a nice job enhancing my temporary script and I think we should definitely keep it. At the same time, we are already used to the simpler ant import.db command, which takes its arguments from build.properties. Therefore, I think we should keep both, and rewrite the target in build_db.xml to call the import script instead of directly launching Java. Whoever wants the simple way can use the ant script; those who need more fine-tuning can use the shell script directly.
#145 Updated by Roger Borrello 7 months ago
The benefit of build_db.xml is that it falls right in line with the ant build of the application and provides a level of granularity that supports a multitude of databases and configurations (like UDF vs. <gulp> PL/Java). It also affords some platform independence, since we have ant capabilities on both Windows and Linux. I haven't made any effort to get import.sh working on a Windows configuration, as the original intent (as I saw it) was to provide a tool for importing into the postgres Docker container. For that reason alone, I'd leave the two tools in place.
Because ant requires a development environment, build_db.xml doesn't fit nicely for DB management in deployments. It also contains handling for DB types that are unnecessary for customers using only one. Since the purpose of the hotel apps is to a) provide sample usage of FWD out of the box, and b) serve as application development samples, both tools have a place.
#146 Updated by Roger Borrello 7 months ago
Branch 4722a has been rebased to trunk_14756.
#147 Updated by Roger Borrello 7 months ago
Branch 4722a has been rebased to trunk_14758 at revision 14775.
#148 Updated by Roger Borrello 7 months ago
Branch was rebased to trunk 14764 and is at revision 14781.
#150 Updated by Roger Borrello 7 months ago
Greg Shah wrote:
Are these changes safe to merge now?
Is there anything we need to handle from a project migration/cfg perspective when this is merged?
I believe it is. From what I have seen, it is compatible with the original mechanism of data import, and changes can be made across projects as needed or desired. Perhaps some extra/unneeded files are being gathered into distribution zip/tar files?
#152 Updated by Ovidiu Maxiniuc 7 months ago
Yes.
I re-reviewed the changes and have only one question: there are some commits with no file changes (14781, 14777, 14776, 14768, 14766). Is this normal, or has something happened with the changes?
#153 Updated by Roger Borrello 7 months ago
Ovidiu Maxiniuc wrote:
I re-reviewed the changes and have only one question: there are some commits with no file changes (14781, 14777, 14776, 14768, 14766). Is this normal, or has something happened with the changes?
The rebase process sometimes oddly flags a conflict for the addition or renaming of a file. I found it odd, too, but the files in the branch ended up with the proper changes.
#154 Updated by Roger Borrello 7 months ago
Branch 4722a was rebased with trunk_14766 and is at revision 14783.
#155 Updated by Roger Borrello 7 months ago
Branch 4722a was rebased with trunk_14769 and is at revision 14787. I committed an update to import.sh
which changes the use of "pg" to "postgres" so that it would be more compatible with the Docker images.
#156 Updated by Roger Borrello 6 months ago
Branch 4722a was rebased with trunk_14772 and is at revision 14790.
#158 Updated by Roger Borrello 6 months ago
Greg Shah wrote:
Is there anything we need to handle from a project migration/cfg perspective when this is merged?
No. The original mechanism of data import remains the same. Changes can be made across projects as needed or desired. The import.sh tool is meant to be a standalone utility, not to replace the import.db target.
#160 Updated by Roger Borrello 6 months ago
4722a was merged to trunk rev 14773 and archived.
#161 Updated by Roger Borrello 6 months ago
- Status changed from Review to Test
#162 Updated by Els van Dun 6 months ago
Hi Roger,
We found a possible regression after the merge of this issue with trunk.
In xfer testcases I could convert my testcase a few days ago, but now I get the following error:
     [java] Exception in thread "main" org.h2.message.DbException: IO Exception: "java.io.FileNotFoundException: ddl/schema_word_tables_tstcasedata_h2.sql (No such file or directory)"; "ddl/schema_word_tables_tstcasedata_h2.sql" [90031-200]
     [java]     at org.h2.message.DbException.get(DbException.java:194)
     [java]     at org.h2.message.DbException.convertIOException(DbException.java:394)
     [java]     at org.h2.tools.RunScript.process(RunScript.java:331)
     [java]     at org.h2.tools.RunScript.runTool(RunScript.java:143)
     [java]     at org.h2.tools.RunScript.main(RunScript.java:70)
     [java] Caused by: org.h2.jdbc.JdbcSQLNonTransientException: IO Exception: "java.io.FileNotFoundException: ddl/schema_word_tables_tstcasedata_h2.sql (No such file or directory)"; "ddl/schema_word_tables_tstcasedata_h2.sql" [90031-200]
     [java]     at org.h2.message.DbException.getJdbcSQLException(DbException.java:505)
     [java]     at org.h2.message.DbException.getJdbcSQLException(DbException.java:429)
     [java]     ... 5 more
     [java] Caused by: java.io.FileNotFoundException: ddl/schema_word_tables_tstcasedata_h2.sql (No such file or directory)
     [java]     at java.io.FileInputStream.open0(Native Method)
     [java]     at java.io.FileInputStream.open(FileInputStream.java:195)
     [java]     at java.io.FileInputStream.<init>(FileInputStream.java:138)
     [java]     at java.io.FileInputStream.<init>(FileInputStream.java:93)
     [java]     at org.h2.store.fs.FilePathDisk.newInputStream(FilePathDisk.java:337)
     [java]     at org.h2.store.fs.FileUtils.newInputStream(FileUtils.java:224)
     [java]     at org.h2.tools.RunScript.process(RunScript.java:187)
     [java]     at org.h2.tools.RunScript.process(RunScript.java:328)
     [java]     ... 2 more
     [copy] Warning: Could not find file /v1/<app>-dev.local/workspace/eclipse/xfer_testcases/name_map.cache to copy.
My testcase doesn't require a database, but it seems the DDL folder is not created anymore.
#163 Updated by Tijs Wickardt 6 months ago
Hello Roger, I'm experiencing the same regression as Els in #4722-162.
We aren't completely sure which commit after p2j trunk rev 14748 causes the regression, but we did notice you committed something related to H2 and the word table schema within this task.
Can you reproduce the issue as well? It's simply the procedure described at Testcases; it doesn't matter which testcase you use, conversion fails because of the missing DDL.
Do you yourself think it's related to your commit? If not, we'll have to deep dive.
For now, at our company (for testcases development) we reverted temporarily to:
FWD:      branch trunk revision 14748
Possenet: branch trunk revision 13
Skeleton: branch trunk revision 114
***** configuration: revision 683
FWD Docker image created at Wed 27 Sep 2023 12:07:33 PM UTC.
From image:
*****/****-dev fwd-trunk sha256:c26990d0922e71d4fd2140c7ba0952f9b4e2e75da00bae0ea6e70257e2fc4e5f
#164 Updated by Roger Borrello 6 months ago
Does your jar include the ddl files?
#165 Updated by Tijs Wickardt 6 months ago
Roger Borrello wrote:
Does your jar include the ddl files?
After revert?
Els can check on Monday, I'm not at my workstation right now.
All before revert went to trash.
When you run the procedure described at Testcases, does your jar include them?
#166 Updated by Roger Borrello 6 months ago
Tijs Wickardt wrote:
When you run the procedure described at Testcases, does your jar include them?
Sorry, Tijs, but I hadn't had a chance. I will be OOO until Tuesday.
#167 Updated by Tijs Wickardt 6 months ago
Roger Borrello wrote:
Tijs Wickardt wrote:
When you run the procedure described at Testcases, does your jar include them?
Sorry, Tijs, but I hadn't had a chance. I will be OOO until Tuesday.
No worries, understood.
#168 Updated by Ovidiu Maxiniuc 6 months ago
Just sharing my experience.
It might be an easy fix: check whether the jar (which should contain the .sql resource(s)) is in the right folder and added to the classpath of the RunScript target. I remember my import was not starting, complaining that data/standard.dict could not be loaded although I could clearly see it inside the jar. It turned out that the hotel.jar was not copied to the deploy location. To fix it I had to execute the deploy.prepare target.
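A quick way to perform that check programmatically (a diagnostic sketch, not part of FWD; the jar path and entry names are examples):

```java
import java.io.*;
import java.util.jar.*;

public class JarCheck
{
   // Returns true if the named entry exists in the jar, e.g. to verify that
   // deploy/lib/hotel.jar really packages data/standard.dict or the
   // ddl/*.sql resources before blaming the classpath.
   static boolean contains(String jarPath, String entry) throws IOException
   {
      try (JarFile jar = new JarFile(jarPath))
      {
         return jar.getEntry(entry) != null;
      }
   }

   public static void main(String[] args) throws IOException
   {
      // build a tiny demo jar so the example is self-contained
      File tmp = File.createTempFile("demo", ".jar");
      try (JarOutputStream out = new JarOutputStream(new FileOutputStream(tmp)))
      {
         out.putNextEntry(new JarEntry("data/standard.dict"));
         out.write("demo".getBytes());
         out.closeEntry();
      }
      System.out.println(contains(tmp.getPath(), "data/standard.dict"));            // true
      System.out.println(contains(tmp.getPath(), "ddl/schema_table_hotel_h2.sql")); // false
   }
}
```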
#169 Updated by Tijs Wickardt 6 months ago
- Related to Support #7957: testcases project setup added
#170 Updated by Roger Borrello 6 months ago
I committed updates to the testcases
project which should work better. The changes bring it closer in line to hotel_gui.
#171 Updated by Roger Borrello 6 months ago
When configuring the database using import.sh
, the H2 creation/import doesn't seem to be working properly. When I connect to the DB, I cannot see any tables:
fwd@c6fc63f8b9bc:/opt/hotel/server$ TERM=xterm java -Djava.locale.providers=SPI,JRE -Djava.ext.dirs=/usr/local/lib/jvm/openlogic-openjdk-8u292-b10-linux-x64/jre/lib/ext/:/opt/hotel/lib/spi -classpath /opt/fwd/build/lib/fwd-h2-1.33-trunk.jar:/opt/hotel/lib/spi/fwdspi.jar org.h2.tools.Shell

Welcome to H2 Shell 1.4.200 (2019-10-14)
Exit with Ctrl+C
[Enter]   jdbc:h2:~/test
URL       jdbc:h2:/opt/hotel/db/hotel;DB_CLOSE_DELAY=-1;MV_STORE=FALSE;RTRIM=TRUE;AUTO_SERVER=TRUE
[Enter]   org.h2.Driver
Driver
[Enter]
User      fwd_user
Password
Connected
Commands are case insensitive; SQL statements end with ';'
help or ?      Display this help
list           Toggle result list / stack trace mode
maxwidth       Set maximum column width (default is 100)
autocommit     Enable or disable autocommit
history        Show the last 20 statements
quit or exit   Close the connection and exit

sql> show schemas;
SCHEMA_NAME
INFORMATION_SCHEMA
PUBLIC
(2 rows, 3 ms)
sql> show tables;
TABLE_NAME | TABLE_SCHEMA
(0 rows, 2 ms)
sql>
On my laptop:
rfb@rfb:~/projects/hotel_gui_20231027/deploy/server$ TERM=xterm java -Djava.locale.providers=SPI,JRE -Djava.ext.dirs=/usr/local/lib/jvm/openlogic-openjdk-8u292-b10-linux-x64/jre/lib/ext/:/opt/hotel/lib/spi -classpath /opt/fwd/build/lib/fwd-h2-1.33-trunk.jar:/opt/hotel/lib/spi/fwdspi.jar org.h2.tools.Shell

Welcome to H2 Shell 1.4.200 (2019-10-14)
Exit with Ctrl+C
[Enter]   jdbc:h2:/home/rfb/testcases/deploy/db/p2j_test;DB_CLOSE_DELAY=-1;MV_STORE=FALSE;ifexists=true;
URL       jdbc:h2:../db/hotel;DB_CLOSE_DELAY=-1;MV_STORE=FALSE;RTRIM=TRUE;AUTO_SERVER=TRUE
[Enter]   org.h2.Driver
Driver
[Enter]   fwd_user
User      fwd_user
Password
Connected
Commands are case insensitive; SQL statements end with ';'
help or ?      Display this help
list           Toggle result list / stack trace mode
maxwidth       Set maximum column width (default is 100)
autocommit     Enable or disable autocommit
history        Show the last 20 statements
quit or exit   Close the connection and exit

sql> show tables;
TABLE_NAME    | TABLE_SCHEMA
GUEST         | PUBLIC
GUEST_ID_TYPE | PUBLIC
META_USER     | PUBLIC
META_USER__8  | PUBLIC
RATE          | PUBLIC
RESERVATION   | PUBLIC
ROOM          | PUBLIC
ROOM_TYPE     | PUBLIC
SERVICE       | PUBLIC
SERVICE_TYPE  | PUBLIC
STAY          | PUBLIC
(11 rows, 6 ms)
sql>
The command I was using in import.sh isn't quite the same as the one in the create.db.h2 task. I updated it, but it still isn't creating the .h2.db file.
java -Xmx1g -Dfile.encoding=UTF-8 -Djava.system.class.loader=com.goldencode.asm.AsmClassLoader \
   -Djava.locale.providers=SPI,JRE \
   -Djava.ext.dirs=/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext:/usr/java/packages/lib/ext:/home/rfb/projects/hotel_gui_20231027/deploy/lib/spi \
   -cp /opt/fwd-deploy/convert/build/lib/p2j.jar:/opt/fwd-deploy/convert/build/lib/fwd-slf4j.jar:/home/rfb/projects/hotel_gui_20231027/deploy/lib/hotel.jar \
   org.h2.tools.RunScript -url jdbc:h2:/home/rfb/projects/hotel_gui_20231027/deploy/db/hotel -user fwd_user -password user -script ddl/schema_table_hotel_h2.sql
There are files created, but not the .h2.db file:

rfb@rfb:~/projects/hotel_gui_20231027$ java -Xmx1g -Dfile.encoding=UTF-8 -Djava.system.class.loader=com.goldencode.asm.AsmClassLoader -Djava.locale.providers=SPI,JRE -Djava.ext.dirs=/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext:/usr/java/packages/lib/ext:/home/rfb/projects/hotel_gui_20231027/deploy/lib/spi -cp /opt/fwd-deploy/convert/build/lib/p2j.jar:/opt/fwd-deploy/convert/build/lib/fwd-slf4j.jar:/home/rfb/projects/hotel_gui_20231027/deploy/lib/hotel.jar org.h2.tools.RunScript -url jdbc:h2:/home/rfb/projects/hotel_gui_20231027/deploy/db/hotel -user fwd_user -password user -script ddl/schema_table_hotel_h2.sql
rfb@rfb:~/projects/hotel_gui_20231027$ ls -l /home/rfb/projects/hotel_gui_20231027/deploy/db/
total 64
-rw-rw---- 1 rfb rfb 65536 Oct 27 16:28 hotel.mv.db
The create.db.h2 target works:
rfb@rfb:~/projects/hotel_gui_20231027$ ant clean.db.h2 create.db.h2
Buildfile: /home/rfb/projects/hotel_gui_20231027/build.xml
clean.db.h2:
download-ant-contrib:
init-ant-contrib:
db.task:
clean.db.h2:
dir.check:
check.fwd.dir:
init:
clean.db.h2:
download-ant-contrib:
init-ant-contrib:
db.task:
clean.db.h2:
create.db.h2:
download-ant-contrib:
init-ant-contrib:
db.task:
create.db.h2:
BUILD SUCCESSFUL
Total time: 1 second
rfb@rfb:~/projects/hotel_gui_20231027$ ls -l /home/rfb/projects/hotel_gui_20231027/deploy/db/
total 180
-rw-rw---- 1 rfb rfb 184320 Oct 27 16:29 hotel.h2.db
As far as I can tell, the target is performing the same as my command:
<java classname="org.h2.tools.RunScript" fork="true" failonerror="true" dir="${basedir}">
   <jvmarg value="-Xmx1g"/>
   <jvmarg value="${java.locale.providers.param}"/>
   <jvmarg value="${java.ext.dir.param}"/>
   <jvmarg value="-Dfile.encoding=UTF-8"/>
   <jvmarg value="-Djava.system.class.loader=com.goldencode.asm.AsmClassLoader"/>
   <arg value="-url"/>
   <arg value="${sql.url.h2}"/>
   <arg value="-user"/>
   <arg value="${sql.user}"/>
   <arg value="-password"/>
   <arg value="${sql.user.pass}"/>
   <arg value="-script"/>
   <arg value="ddl/schema_table_${db.name}_h2.sql"/>
   <arg value="-continueOnError"/>
   <classpath refid="app.classpath"/>
</java>
I even output all the variables, and they are the same. The ${java.ext.dir.param} was shorter, and I tried it directly, but no dice.

#172 Updated by Roger Borrello 6 months ago
Never mind... my URL did not include: ;MV_STORE=FALSE;LOG=0;CACHE_SIZE=65536;LOCK_MODE=0;UNDO_LOG=0
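For context on why that matters: in H2 1.4.x the MV_STORE URL setting selects the storage engine, and the storage engine determines the database file name. The default MVStore writes hotel.mv.db, while MV_STORE=FALSE selects the legacy PageStore and writes hotel.h2.db. A minimal sketch (hypothetical helper, not FWD code) that parses the settings out of a JDBC URL and predicts which file H2 will create:

```java
import java.util.HashMap;
import java.util.Map;

public class H2UrlCheck
{
   /** Extract the ;KEY=VALUE settings from an H2 JDBC URL. */
   static Map<String, String> settings(String url)
   {
      Map<String, String> map = new HashMap<>();
      String[] parts = url.split(";");
      // parts[0] is the jdbc:h2:<path> prefix; the rest are KEY=VALUE pairs
      for (int i = 1; i < parts.length; i++)
      {
         String[] kv = parts[i].split("=", 2);
         if (kv.length == 2)
         {
            map.put(kv[0].toUpperCase(), kv[1].toUpperCase());
         }
      }
      return map;
   }

   /** Database file suffix H2 1.4.x will create for this URL. */
   static String expectedFile(String url)
   {
      // MVStore is the default in 1.4.x; MV_STORE=FALSE selects PageStore.
      boolean mvStore = !"FALSE".equals(settings(url).get("MV_STORE"));
      return mvStore ? ".mv.db" : ".h2.db";
   }

   public static void main(String[] args)
   {
      System.out.println(expectedFile("jdbc:h2:/opt/hotel/db/hotel"));                // .mv.db
      System.out.println(expectedFile("jdbc:h2:/opt/hotel/db/hotel;MV_STORE=FALSE")); // .h2.db
   }
}
```

So a URL with the full ;MV_STORE=FALSE;... suffix produces the hotel.h2.db file the create.db.h2 target expects, and a bare URL produces hotel.mv.db instead.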
#173 Updated by Roger Borrello 6 months ago
I have an update to import.sh which I need to get to trunk. I thought #7572 might be a good task, since the fix in the 7572a branch is ready for review, and is very minor.
#174 Updated by Roger Borrello 6 months ago
Roger Borrello wrote:
I have an update to import.sh which I need to get to trunk. I thought #7572 might be a good task, since the fix in the 7572a branch is ready for review, and is very minor.
Or should I create 4722b?
#175 Updated by Ovidiu Maxiniuc 6 months ago
Yes, Roger, I think you should fork a new branch, 4722b.
I also have small updates related to import, not dramatic, but they should improve the resilience of this process.
#176 Updated by Roger Borrello 6 months ago
Ovidiu Maxiniuc wrote:
Yes, Roger, I think you should fork a new branch, 4722b.
I also have small updates related to import, not dramatic, but they should improve the resilience of this process.
Done... and the update to import.sh is committed.
Question... Is the -Djava.system.class.loader=com.goldencode.asm.AsmClassLoader parameter something I should be including in calls to ScriptRunner and PatternEngine?
#177 Updated by Roger Borrello 6 months ago
Branch 4722b was rebased to trunk_14795 and is at revision 14796.
#178 Updated by Ovidiu Maxiniuc 6 months ago
Roger Borrello wrote:
Question... Is the -Djava.system.class.loader=com.goldencode.asm.AsmClassLoader parameter something I should be including in calls to ScriptRunner and PatternEngine?
The AsmClassLoader is the component which loads dynamically generated, in-memory classes: the TRPL classes (the XMLs from rules/ are dynamically converted to Java classes, for speed) and the DMO classes (dynamically generated from the DMO interfaces found in <pkgroot>/ddl/<db-schema>). AFAIK, the ScriptRunner does not access any of these entities, so the answer is no.
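As background on the mechanism: when -Djava.system.class.loader is set, the JVM instantiates the named class at startup as the system class loader, passing the default system loader as the parent; FWD's AsmClassLoader can then register classes built in memory. A minimal sketch of such a loader (the class below is hypothetical, not the real AsmClassLoader):

```java
import java.net.URL;
import java.net.URLClassLoader;

/**
 * Minimal sketch of a replaceable system class loader. The JVM reads
 * -Djava.system.class.loader at startup and instantiates the named class;
 * the single-ClassLoader constructor signature below is what the JVM requires.
 */
public class InMemoryClassLoader extends URLClassLoader
{
   public InMemoryClassLoader(ClassLoader parent)
   {
      // No extra URLs; regular classes are found via parent delegation.
      super(new URL[0], parent);
   }

   /** Register a dynamically generated class from its raw bytecode. */
   public Class<?> register(String name, byte[] bytecode)
   {
      return defineClass(name, bytecode, 0, bytecode.length);
   }
}
```

A tool that never touches dynamically generated classes (like a plain SQL script runner) resolves everything through the parent chain anyway, which is why the flag is only needed where generated classes are loaded.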
#179 Updated by Roger Borrello 6 months ago
Ovidiu Maxiniuc wrote:
Roger Borrello wrote:
Question... Is the
-Djava.system.class.loader=com.goldencode.asm.AsmClassLoader
parameter something I should be including in calls toScriptRunner
andPatternEngine
?AFAIK, the
ScriptRunner
does not access any of these entities, so the answer is no.
Thanks, Ovidiu. That is how it was implemented (included with PatternEngine, but not with ScriptRunner).
#180 Updated by Roger Borrello 6 months ago
Branch 4722b was rebased to trunk_14798 and is at revision 14799.
#181 Updated by Roger Borrello 6 months ago
Branch 4722b was rebased to trunk_14805 and is at revision 14806.
#182 Updated by Roger Borrello 6 months ago
Branch 4722b was rebased to trunk_14815 and is at revision 14816.
#183 Updated by Roger Borrello 6 months ago
Branch 4722b was rebased to trunk_14819 and is at revision 14820.
#184 Updated by Roger Borrello 5 months ago
Branch 4722b was rebased to trunk_14829 and is at revision 14830.
#187 Updated by Roger Borrello 5 months ago
Greg Shah wrote:
Does this branch only have the script change?
I believe Ovidiu had something for this branch:
Ovidiu Maxiniuc wrote:
Yes, Roger, I think you should fork a new branch, 4722b.
I also have small updates related to import, not dramatic, but they should improve the resilience of this process.
#188 Updated by Roger Borrello 5 months ago
Branch 4722b was rebased to trunk_14834 and is at revision 14835.
#190 Updated by Ovidiu Maxiniuc 5 months ago
I committed my 4722b changes as r14836.
Nothing fancy, just more verbose logs and allowed loading of FS resources if not found in jar during import. The event is logged, too.
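The jar-first, file-system-fallback loading described above can be sketched as follows (a hypothetical ResourceLoader for illustration, not the actual r14836 code):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.logging.Logger;

public class ResourceLoader
{
   private static final Logger LOG = Logger.getLogger(ResourceLoader.class.getName());

   /**
    * Open a resource, preferring the application jar(s) on the classpath and
    * falling back to the file system. The fallback event is logged.
    */
   public static InputStream open(String name) throws IOException
   {
      InputStream in = Thread.currentThread()
                             .getContextClassLoader()
                             .getResourceAsStream(name);
      if (in != null)
      {
         return in;                     // found inside a jar on the classpath
      }
      LOG.info("Resource '" + name + "' not found in jar; falling back to file system");
      return new FileInputStream(name); // throws FileNotFoundException if absent
   }
}
```

This keeps the jar-only deployment described in note 1 as the primary path while preserving compatibility with a conversion project laid out on disk.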
#191 Updated by Roger Borrello 5 months ago
Ovidiu Maxiniuc wrote:
I committed my 4722b changes as r14836.
Nothing fancy, just more verbose logs and allowed loading of FS resources if not found in jar during import. The event is logged, too.
I'm getting some compilation errors:
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:605: error: unclosed string literal
[ant:javac] LOG.log(Level.FINE, "Failed to create close session (" + Persistence.ID_GEN_SEQUENCE + "), exc);
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:605: error: ';' expected
[ant:javac] LOG.log(Level.FINE, "Failed to create close session (" + Persistence.ID_GEN_SEQUENCE + "), exc);
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:623: error: illegal start of expression
[ant:javac] private OutputStream openStream(String filename)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:623: error: ';' expected
[ant:javac] private OutputStream openStream(String filename)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:623: error: ';' expected
[ant:javac] private OutputStream openStream(String filename)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:668: error: illegal start of expression
[ant:javac] private void write(OutputStream os, String text)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:668: error: illegal start of expression
[ant:javac] private void write(OutputStream os, String text)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:668: error: ';' expected
[ant:javac] private void write(OutputStream os, String text)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:668: error: ';' expected
[ant:javac] private void write(OutputStream os, String text)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:668: error: not a statement
[ant:javac] private void write(OutputStream os, String text)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:668: error: ';' expected
[ant:javac] private void write(OutputStream os, String text)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:669: error: not a statement
[ant:javac] throws IOException
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:669: error: ';' expected
[ant:javac] throws IOException
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:684: error: illegal start of expression
[ant:javac] private void writeStmt(OutputStream os, String text, String delimiter)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:684: error: illegal start of expression
[ant:javac] private void writeStmt(OutputStream os, String text, String delimiter)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:684: error: ';' expected
[ant:javac] private void writeStmt(OutputStream os, String text, String delimiter)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:684: error: ';' expected
[ant:javac] private void writeStmt(OutputStream os, String text, String delimiter)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:684: error: not a statement
[ant:javac] private void writeStmt(OutputStream os, String text, String delimiter)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:684: error: ';' expected
[ant:javac] private void writeStmt(OutputStream os, String text, String delimiter)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:684: error: ';' expected
[ant:javac] private void writeStmt(OutputStream os, String text, String delimiter)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:685: error: not a statement
[ant:javac] throws IOException
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:685: error: ';' expected
[ant:javac] throws IOException
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:693: error: illegal start of expression
[ant:javac] public class Library
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2412: error: illegal start of expression
[ant:javac] public final static class SqlRecordLoader
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2496: error: illegal start of expression
[ant:javac] private static class ImportBundle
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2496: error: illegal start of expression
[ant:javac] private static class ImportBundle
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2496: error: ';' expected
[ant:javac] private static class ImportBundle
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2496: error: not a statement
[ant:javac] private static class ImportBundle
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2496: error: ';' expected
[ant:javac] private static class ImportBundle
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2497: error: not a statement
[ant:javac] implements Comparable<ImportBundle>
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2497: error: ';' expected
[ant:javac] implements Comparable<ImportBundle>
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2500: error: illegal start of expression
[ant:javac] private final SqlRecordLoader loader;
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2503: error: illegal start of expression
[ant:javac] private final List<P2JIndex> indexes;
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2506: error: illegal start of expression
[ant:javac] private final long fileLength;
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2509: error: illegal start of expression
[ant:javac] private double weight = 0.0;
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2512: error: illegal start of expression
[ant:javac] private double rank = -1.0;
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2522: error: ')' expected
[ant:javac] ImportBundle(SqlRecordLoader loader, List<P2JIndex> indexes)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2522: error: illegal start of expression
[ant:javac] ImportBundle(SqlRecordLoader loader, List<P2JIndex> indexes)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2522: error: ';' expected
[ant:javac] ImportBundle(SqlRecordLoader loader, List<P2JIndex> indexes)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2522: error: illegal start of expression
[ant:javac] ImportBundle(SqlRecordLoader loader, List<P2JIndex> indexes)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2522: error: illegal start of expression
[ant:javac] ImportBundle(SqlRecordLoader loader, List<P2JIndex> indexes)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2522: error: ';' expected
[ant:javac] ImportBundle(SqlRecordLoader loader, List<P2JIndex> indexes)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2545: error: ';' expected
[ant:javac] public int compareTo(ImportBundle that)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2545: error: ';' expected
[ant:javac] public int compareTo(ImportBundle that)
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2570: error: ';' expected
[ant:javac] public String toString()
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2580: error: ';' expected
[ant:javac] SqlRecordLoader getLoader()
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2590: error: ';' expected
[ant:javac] List<P2JIndex> getIndexes()
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2601: error: ';' expected
[ant:javac] double getWeight()
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2612: error: ';' expected
[ant:javac] double getRank()
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2622: error: ';' expected
[ant:javac] long getFileLength()
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2638: error: illegal start of expression
[ant:javac] void calculateWeight()
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2638: error: ';' expected
[ant:javac] void calculateWeight()
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2655: error: illegal start of expression
[ant:javac] void calculateRank()
[ant:javac] ^
[ant:javac] /home/fwd/p2j/src/com/goldencode/p2j/schema/ImportWorker.java:2655: error: ';' expected
[ant:javac] void calculateRank()
[ant:javac] ^
[ant:javac] 54 errors
> Task :ant-compile FAILED
#192 Updated by Roger Borrello 5 months ago
I fixed the minor typo and checked in revision 14837.
#193 Updated by Roger Borrello 5 months ago
Ovidiu, I am using this branch and getting an error finding the DDL when it is in the hotel.jar file. When I unzip it to the ddl/ directory, RunScript works:
rfb@rfb:/opt/hotel$ java -Xmx1g -Dfile.encoding=UTF-8 -Djava.locale.providers=SPI,JRE -Djava.ext.dirs=/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext:/usr/java/packages/lib/ext:/opt/hotel/lib/spi -cp /home/rfb/projects/fwd/container_20231023/projects/4722b/build/lib/p2j.jar:/home/rfb/projects/fwd/container_20231023/projects/4722b/build/lib/fwd-slf4j.jar:/opt/hotel/lib/hotel.jar org.h2.tools.RunScript -url 'jdbc:h2:/opt/hotel/db/hotel;MV_STORE=FALSE;LOG=0;CACHE_SIZE=65536;LOCK_MODE=0;UNDO_LOG=0' -user fwd_user -password user -script ddl/schema_table_hotel_h2.sql
Exception in thread "main" org.h2.message.DbException: IO Exception: "java.io.FileNotFoundException: ddl/schema_table_hotel_h2.sql (No such file or directory)"; "ddl/schema_table_hotel_h2.sql" [90031-200]
        at org.h2.message.DbException.get(DbException.java:194)
        at org.h2.message.DbException.convertIOException(DbException.java:394)
        at org.h2.tools.RunScript.process(RunScript.java:331)
        at org.h2.tools.RunScript.runTool(RunScript.java:143)
        at org.h2.tools.RunScript.main(RunScript.java:70)
Caused by: org.h2.jdbc.JdbcSQLNonTransientException: IO Exception: "java.io.FileNotFoundException: ddl/schema_table_hotel_h2.sql (No such file or directory)"; "ddl/schema_table_hotel_h2.sql" [90031-200]
        at org.h2.message.DbException.getJdbcSQLException(DbException.java:505)
        at org.h2.message.DbException.getJdbcSQLException(DbException.java:429)
        ... 5 more
Caused by: java.io.FileNotFoundException: ddl/schema_table_hotel_h2.sql (No such file or directory)
        at java.io.FileInputStream.open0(Native Method)
        at java.io.FileInputStream.open(FileInputStream.java:195)
        at java.io.FileInputStream.<init>(FileInputStream.java:138)
        at java.io.FileInputStream.<init>(FileInputStream.java:93)
        at org.h2.store.fs.FilePathDisk.newInputStream(FilePathDisk.java:337)
        at org.h2.store.fs.FileUtils.newInputStream(FileUtils.java:224)
        at org.h2.tools.RunScript.process(RunScript.java:187)
        at org.h2.tools.RunScript.process(RunScript.java:328)
        ... 2 more
rfb@rfb:/opt/hotel$ unzip /opt/hotel/lib/hotel.jar ddl/*
Archive:  /opt/hotel/lib/hotel.jar
   creating: ddl/
  inflating: ddl/schema_index_hotel_h2.sql
  inflating: ddl/schema_index_hotel_mariadb.sql
  inflating: ddl/schema_index_hotel_postgresql.sql
  inflating: ddl/schema_index_standard_h2.sql
  inflating: ddl/schema_table_hotel_h2.sql
  inflating: ddl/schema_table_hotel_mariadb.sql
  inflating: ddl/schema_table_hotel_postgresql.sql
  inflating: ddl/schema_table_standard_h2.sql
  inflating: ddl/schema_word_tables_hotel_postgresql.sql
rfb@rfb:/opt/hotel$ java -Xmx1g -Dfile.encoding=UTF-8 -Djava.locale.providers=SPI,JRE -Djava.ext.dirs=/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext:/usr/java/packages/lib/ext:/opt/hotel/lib/spi -cp /home/rfb/projects/fwd/container_20231023/projects/4722b/build/lib/p2j.jar:/home/rfb/projects/fwd/container_20231023/projects/4722b/build/lib/fwd-slf4j.jar:/opt/hotel/lib/hotel.jar org.h2.tools.RunScript -url 'jdbc:h2:/opt/hotel/db/hotel;MV_STORE=FALSE;LOG=0;CACHE_SIZE=65536;LOCK_MODE=0;UNDO_LOG=0' -user fwd_user -password user -script ddl/schema_table_hotel_h2.sql
rfb@rfb:/opt/hotel$
How can I determine what is going wrong?
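The stack trace itself answers part of this: H2's RunScript opens the script with a plain FileInputStream, so a DDL file that exists only inside hotel.jar is invisible to it; a jar-aware runner has to read the script as a classpath resource instead. A minimal sketch of that idea (a hypothetical JarSqlReader, not FWD's ScriptRunner; the naive split on ';' ignores semicolons inside string literals):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class JarSqlReader
{
   /** Split an SQL script stream into individual statements. */
   public static List<String> split(InputStream in) throws IOException
   {
      StringBuilder sb = new StringBuilder();
      try (BufferedReader r = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8)))
      {
         String line;
         while ((line = r.readLine()) != null)
         {
            sb.append(line).append('\n');
         }
      }
      List<String> stmts = new ArrayList<>();
      for (String s : sb.toString().split(";"))
      {
         if (!s.trim().isEmpty())
         {
            stmts.add(s.trim());
         }
      }
      return stmts;
   }

   /** Read an SQL script from the classpath (e.g. out of the application jar). */
   public static List<String> fromClasspath(String resource) throws IOException
   {
      InputStream in = JarSqlReader.class.getClassLoader().getResourceAsStream(resource);
      if (in == null)
      {
         throw new IOException("Resource not on classpath: " + resource);
      }
      return split(in);
   }
}
```

With this approach the unzip step becomes unnecessary, because the DDL is resolved through the same classpath that already contains hotel.jar.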
#194 Updated by Roger Borrello 5 months ago
Ovidiu... I must have really missed something for H2 regarding the use of DDL in the application jar file. I hadn't switched from using org.h2.tools.RunScript to using com.goldencode.p2j.persist.deploy.ScriptRunner. Sorry about that...
So now the tables are created, but importing has errors on a table:
rfb@rfb:~/projects/hotel_gui_20231101$ ./p2j/import.sh -d ./data/dump/ -f ./p2j/build/lib -a deploy/lib/hotel.jar --db_type=h2 --db_home=/opt/hotel/db
Dropping DB hotel ...
Creating DB hotel of type h2 ...
Creating schema for hotel from table_hotel ...
Performing cmd=java -Xmx1g -Dfile.encoding=UTF-8 -Djava.locale.providers=SPI,JRE -Djava.ext.dirs=/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext:/usr/java/packages/lib/ext:/home/rfb/projects/hotel_gui_20231101/deploy/lib/spi -cp /home/rfb/projects/fwd/container_20231023/projects/4722b/build/lib/p2j.jar:/home/rfb/projects/fwd/container_20231023/projects/4722b/build/lib/fwd-slf4j.jar:/home/rfb/projects/hotel_gui_20231101/deploy/lib/hotel.jar com.goldencode.p2j.persist.deploy.ScriptRunner 'jdbc:h2:/opt/hotel/db/hotel;MV_STORE=FALSE;LOG=0;CACHE_SIZE=65536;LOCK_MODE=0;UNDO_LOG=0' fwd_user user create.tables table_hotel
Running script: [/ddl/schema_table_hotel_h2.sql]
Import dump from /home/rfb/projects/hotel_gui_20231101/data/dump/hotel into hotel using ID fwd_user ...
Performing cmd=java -Xmx1g -Djava.system.class.loader=com.goldencode.asm.AsmClassLoader -Dfile.encoding=UTF-8 -Djava.locale.providers=SPI,JRE -Djava.ext.dirs=/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext:/usr/java/packages/lib/ext:/home/rfb/projects/hotel_gui_20231101/deploy/lib/spi -cp /home/rfb/projects/fwd/container_20231023/projects/4722b/build/lib/p2j.jar:/home/rfb/projects/fwd/container_20231023/projects/4722b/build/lib/fwd-slf4j.jar:/home/rfb/projects/hotel_gui_20231101/deploy/lib/hotel.jar com.goldencode.p2j.pattern.PatternEngine -d 2 dbName=\"hotel\" targetDb=\"h2\" url=\"'jdbc:h2:/opt/hotel/db/hotel;MV_STORE=FALSE;LOG=0;CACHE_SIZE=65536;LOCK_MODE=0;UNDO_LOG=0'\" uid=\"fwd_user\" pw=\"user\" dataPath=\"/home/rfb/projects/hotel_gui_20231101/data/dump/hotel/\" maxThreads=4 schema/import data/ hotel.p2o
Using unnamed schema profile.
INFO: Type match assertion disabled; set "checkTypes" to true to enable
INFO: Data export files will be read from '/home/rfb/projects/hotel_gui_20231101/data/dump/hotel/'
INFO: Using 4 threads for import
./cvt/data/hotel.p2o
MLog clients using slf4j logging.
Initializing c3p0-0.9.5.2 [built 08-December-2015 22:06:04 -0800; debug? true; trace: 10]
Initializing c3p0 pool... com.mchange.v2.c3p0.ComboPooledDataSource [ acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, contextClassLoaderSource -> caller, dataSourceName -> z8kfsxaz1uhvrmo9fso0q|3e6f3f28, debugUnreturnedConnectionStackTraces -> false, description -> null, driverClass -> org.h2.Driver, extensions -> {}, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, forceUseNamedDriverClass -> false, identityToken -> z8kfsxaz1uhvrmo9fso0q|3e6f3f28, idleConnectionTestPeriod -> 0, initialPoolSize -> 3, jdbcUrl -> jdbc:h2:/opt/hotel/db/hotel;MV_STORE=FALSE;LOG=0;CACHE_SIZE=65536;LOCK_MODE=0;UNDO_LOG=0, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 15, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 3, numHelperThreads -> 3, preferredTestQuery -> null, privilegeSpawnedThreads -> false, properties -> {user=******, password=******}, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, userOverrides -> {}, usesTraditionalReflectiveProxies -> false ]
Found identity generator sequence starting at 1.
IMPORT ORDER:
 1) guest.d:20802.0 [10401 bytes]
 2) stay.d:15508.0 [7754 bytes]
 3) reservation.d:3532.0 [1766 bytes]
 4) rate.d:2946.0 [1473 bytes]
 5) room.d:1042.0 [521 bytes]
 6) room-type.d:986.0 [493 bytes]
 7) _user.d:496.0 [248 bytes]
 8) service.d:484.0 [242 bytes]
 9) guest-id-type.d:426.0 [213 bytes]
10) service-type.d:426.0 [213 bytes]
BEGIN stay.d (stay)
BEGIN guest.d (guest)
BEGIN rate.d (rate)
BEGIN reservation.d (reservation)
stay.d: SQL: drop index if exists idx__stay_idx__stay_id
rate.d: SQL: drop index if exists idx__rate_idx__room_type_date
guest.d: SQL: drop index if exists idx__guest_idx__stay_order
stay.d: SQL: create unique index idx__stay_idx__stay_id on stay (stay_id nulls last)
guest.d: SQL: create unique index idx__guest_idx__stay_order on guest (stay_id nulls last, order_ nulls last)
reservation.d: SQL: drop index if exists idx__reservation_idx__room_num_date
reservation.d: Skipping dropping of word index: idx__guest
stay.d: Set up UTF-8 character encoding.
guest.d: Set up UTF-8 character encoding.
rate.d: Set up UTF-8 character encoding.
reservation.d: Set up UTF-8 character encoding.
reservation.d: Failed to populate word table reservation__guest: pk = 30002.
org.h2.jdbc.JdbcSQLSyntaxErrorException: Table "RESERVATION__GUEST" not found; SQL statement:
INSERT INTO reservation__guest VALUES(?,UPPER(?)),(?,UPPER(?)) [42102-200]
        at org.h2.message.DbException.getJdbcSQLException(DbException.java:453)
        at org.h2.message.DbException.getJdbcSQLException(DbException.java:429)
        at org.h2.message.DbException.get(DbException.java:205)
        at org.h2.message.DbException.get(DbException.java:181)
        at org.h2.command.Parser.readTableOrView(Parser.java:7672)
        at org.h2.command.Parser.readTableOrView(Parser.java:7643)
        at org.h2.command.Parser.parseInsert(Parser.java:1766)
        at org.h2.command.Parser.parsePrepared(Parser.java:968)
        at org.h2.command.Parser.parse(Parser.java:857)
        at org.h2.command.Parser.parse(Parser.java:829)
        at org.h2.command.Parser.prepareCommand(Parser.java:752)
        at org.h2.engine.Session.prepareLocal(Session.java:670)
        at org.h2.engine.Session.prepareCommand(Session.java:608)
        at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1260)
        at org.h2.jdbc.JdbcPreparedStatement.<init>(JdbcPreparedStatement.java:86)
        at org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:365)
        at com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement(NewProxyConnection.java:567)
        at com.goldencode.p2j.schema.ImportWorker$Library.populateWordTable(ImportWorker.java:1733)
        at com.goldencode.p2j.schema.ImportWorker$Library.populateWordTables(ImportWorker.java:1676)
        at com.goldencode.p2j.schema.ImportWorker$Library.importTable(ImportWorker.java:1475)
        at com.goldencode.p2j.schema.ImportWorker$Library.lambda$importAsync$3(ImportWorker.java:1974)
        at java.lang.Thread.run(Thread.java:750)
...
Where does RESERVATION__GUEST come from? I don't see it in the DDL.

#195 Updated by Roger Borrello 5 months ago
Branch 4722b was rebased to trunk_14841 and is at revision 14845.
#196 Updated by Roger Borrello 5 months ago
Ovidiu, when I run ant import.db using the H2 database, it is trying to import into the reservation__guest table. Since there isn't a WORD table to import for the H2 configuration, why might this be happening? There isn't any explicit calling of WORD tables in the build_db.xml.
This is for hotel_gui.
#197 Updated by Roger Borrello 5 months ago
Roger Borrello wrote:
Ovidiu, when I run ant import.db using the H2 database, it is trying to import into the reservation__guest table. Since there isn't a WORD table to import for the H2 configuration, why might this be happening? There isn't any explicit calling of WORD tables in the build_db.xml. This is for hotel_gui.
This does not happen with trunk, so something in this branch is causing H2 word tables to be created. Log snippet is noted in #4722-194
#198 Updated by Roger Borrello 5 months ago
Roger Borrello wrote:
This does not happen with trunk, so something in this branch is causing H2 word tables to be created. Log snippet is noted in #4722-194
Actually it looks like there is more logging with this branch. In populateWordTable the logging is:
LOG.log(Level.FINE, "Failed to populate word table " + wordTableName, e);
instead of:
LOG.log(Level.FINE, "", e);
So the same could be happening.
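This is a java.util.logging visibility effect: the default configuration passes INFO and above to the console handler, so a Level.FINE record is silently discarded regardless of its message. A minimal sketch of the behavior:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogLevelDemo
{
   public static void main(String[] args)
   {
      Logger log = Logger.getLogger("import.demo");

      // With the default configuration (root level INFO), FINE records are
      // filtered out before any handler sees them; INFO records are printed.
      log.log(Level.FINE, "suppressed at the default level");
      log.log(Level.INFO, "visible at the default level");

      System.out.println(log.isLoggable(Level.FINE));   // false by default
      System.out.println(log.isLoggable(Level.INFO));   // true by default
   }
}
```

So the trunk code may well hit the same exception on every record; logging it at FINE with an empty message simply hides every trace of it unless FINE logging is explicitly enabled.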
#199 Updated by Ovidiu Maxiniuc 5 months ago
My last commit does not change business code in ImportWorker; just the logs are more verbose.
I had a look at the DDL and, indeed, there is no reason to process a table named reservation__guest. I will do a debug session to understand the problem in depth. At any rate, it makes very little sense: the log is on a catch branch and if the exception is thrown in trunk, at least the stack should have been printed, if not the name of the processed temp-table.
#200 Updated by Roger Borrello 5 months ago
Branch 4722b was rebased to trunk_14857 and is at revision 14861.
#201 Updated by Roger Borrello 4 months ago
Branch 4722b was rebased to trunk_14870 and is at revision 14874.
#203 Updated by Roger Borrello 4 months ago
Greg Shah wrote:
I'd like to get this into trunk. What is the effort to make this safe?
If Ovidiu could verify the processing of reservation__guest has nothing to do with changes in this task, that should do it. I've used the import.sh many times, and am confident in it.
I just tested this import.sh with p2j.jar from this branch, and it gives the reservation__guest error. I tried with the latest p2j.jar and it works fine. So there's something really odd here. It might be that this version surfaces an issue the older version silently ignored.
#204 Updated by Roger Borrello 4 months ago
Just want to log the reservation.d processing from non-4722b:
BEGIN reservation.d (reservation)
reservation.d: SQL: drop index if exists idx__reservation_idx__room_num_date
reservation.d: Skipping dropping of word index: idx__guest
reservation.d: Set up UTF-8 character encoding.
reservation.d: SQL: create index idx__reservation_idx__room_num_date on reservation (room_num nulls last, start_date nulls last, end_date nulls last, recid)
reservation.d: Skipping creation of word index: idx__guest
reservation.d: Analyzing reservation...
END reservation.d. 36 records imported into reservation
and from 4722b:
BEGIN reservation.d (reservation)
reservation.d: SQL: drop index if exists idx__reservation_idx__room_num_date
reservation.d: Skipping dropping of word index: idx__guest
reservation.d: Set up UTF-8 character encoding.
reservation.d: Failed to populate word table reservation__guest: pk = 30002.
reservation.d: Failed to populate word table reservation__guest: pk = 30003.
reservation.d: Failed to populate word table reservation__guest: pk = 30004.
reservation.d: Failed to populate word table reservation__guest: pk = 30005.
reservation.d: Failed to populate word table reservation__guest: pk = 30006.
reservation.d: Failed to populate word table reservation__guest: pk = 30007.
reservation.d: Failed to populate word table reservation__guest: pk = 30008.
reservation.d: Failed to populate word table reservation__guest: pk = 30009.
reservation.d: Failed to populate word table reservation__guest: pk = 30010.
reservation.d: Failed to populate word table reservation__guest: pk = 30011.
reservation.d: Failed to populate word table reservation__guest: pk = 30012.
reservation.d: Failed to populate word table reservation__guest: pk = 30013.
reservation.d: Failed to populate word table reservation__guest: pk = 30014.
reservation.d: Failed to populate word table reservation__guest: pk = 30015.
reservation.d: Failed to populate word table reservation__guest: pk = 30016.
reservation.d: Failed to populate word table reservation__guest: pk = 30017.
reservation.d: Failed to populate word table reservation__guest: pk = 30018.
reservation.d: Failed to populate word table reservation__guest: pk = 30019.
reservation.d: Failed to populate word table reservation__guest: pk = 30020.
reservation.d: Failed to populate word table reservation__guest: pk = 30021.
reservation.d: Failed to populate word table reservation__guest: pk = 30022.
reservation.d: Failed to populate word table reservation__guest: pk = 30023.
reservation.d: Failed to populate word table reservation__guest: pk = 30024.
reservation.d: Failed to populate word table reservation__guest: pk = 30025.
reservation.d: Failed to populate word table reservation__guest: pk = 30026.
reservation.d: Failed to populate word table reservation__guest: pk = 30027.
reservation.d: Failed to populate word table reservation__guest: pk = 30028.
reservation.d: Failed to populate word table reservation__guest: pk = 30029.
reservation.d: Failed to populate word table reservation__guest: pk = 30030.
reservation.d: Failed to populate word table reservation__guest: pk = 30031.
reservation.d: Failed to populate word table reservation__guest: pk = 30032.
reservation.d: Failed to populate word table reservation__guest: pk = 30033.
reservation.d: Failed to populate word table reservation__guest: pk = 30034.
reservation.d: Failed to populate word table reservation__guest: pk = 30035.
reservation.d: Failed to populate word table reservation__guest: pk = 30036.
reservation.d: Failed to populate word table reservation__guest: pk = 30037.
reservation.d: SQL: create index idx__reservation_idx__room_num_date on reservation (room_num nulls last, start_date nulls last, end_date nulls last, recid)
reservation.d: Skipping creation of word index: idx__guest
reservation.d: Analyzing reservation...
END reservation.d. 36 records imported into reservation
All those exceptions are between:
reservation.d: Set up UTF-8 character encoding.
and
reservation.d: SQL: create index idx__reservation_idx__room_num_date on reservation (room_num nulls last, start_date nulls last, end_date nulls last, recid)
#206 Updated by Roger Borrello 4 months ago
Greg Shah wrote:
It is clear there is a regression that needs to be fixed here.
Well... not really. The culprit is:
--- /home/rfb/projects/fwd/trunk_master/src/com/goldencode/p2j/schema/ImportWorker.java
+++ /home/rfb/projects/fwd/container_20231023/projects/4722b/src/com/goldencode/p2j/schema/ImportWorker.java
@@ -1740,7 +1743,7 @@
}
catch (SQLException e)
{
- LOG.log(Level.FINE,
+ LOG.log(Level.INFO,
String.format("%s: Failed to populate word table %s: pk = %d.%n",
fileName,
wordTableName,
The logging is now shown on the screen, including the stack trace, which I think is unnecessary.
There's also some inconsistency in the logging, such that there are places where both System.out and LOG.log are used, and others where only LOG.log is used:
System.out.printf("%s: Failed to populate word table %s: pk = %d.%n", fileName, wordTableName, primaryKey);
LOG.log(Level.FINE, String.format("%s: Failed to populate word table %s: pk = %d.%n", fileName, wordTableName, primaryKey), e);
Maybe Ovidiu left behind INFO for testing, and it should stay FINE?
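As a side note, one way to remove the duplicated output would be to route the failure through the logger only, so the message (and stack trace) appears or not according to the configured log level. A minimal sketch, assuming hypothetical surrounding class and field names (only fileName, wordTableName, primaryKey mirror the snippet above; the actual ImportWorker code differs):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch: a single emission point for the word-table failure, instead of
// printing on System.out and logging the same text again. FINE keeps the
// message off the console under the default logger configuration.
public class WordTableLogging
{
   private static final Logger LOG = Logger.getLogger(WordTableLogging.class.getName());

   // builds the message once so the console and logger can never diverge
   static String failureMessage(String fileName, String wordTableName, long primaryKey)
   {
      return String.format("%s: Failed to populate word table %s: pk = %d.",
                           fileName, wordTableName, primaryKey);
   }

   static void reportFailure(String fileName, String wordTableName, long primaryKey, Exception e)
   {
      LOG.log(Level.FINE, failureMessage(fileName, wordTableName, primaryKey), e);
   }

   public static void main(String[] args)
   {
      System.out.println(failureMessage("reservation.d", "reservation__guest", 30002L));
   }
}
```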
#207 Updated by Ovidiu Maxiniuc 4 months ago
Please update to latest 4722b. There are two issues fixed which, in fact, were present in trunk:
- changing the log level was only hiding an existing problem. I added a guard which prevents word tables from being populated when such tables were not defined in the first place;
- the computed columns for indexed fields other than character were not generated in create table statements. This is dialect-dependent, and only visible in the strict mode of MariaDB.
Eric, please review these last two commits.
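The guard described above could look something like the following sketch. All names here (WordTableGuard, wordTablesByTable, shouldPopulate) are illustrative; the actual ImportWorker implementation differs:

```java
import java.util.Collections;
import java.util.Map;
import java.util.Set;

// Sketch: skip word-table population entirely when the schema defines no
// word tables for the record's table, so no spurious
// "Failed to populate word table" errors are raised during import.
public class WordTableGuard
{
   private final Map<String, Set<String>> wordTablesByTable;

   public WordTableGuard(Map<String, Set<String>> wordTablesByTable)
   {
      this.wordTablesByTable = wordTablesByTable;
   }

   /** Word tables to populate for a table, or an empty set when none are defined. */
   public Set<String> wordTablesFor(String table)
   {
      return wordTablesByTable.getOrDefault(table, Collections.emptySet());
   }

   /** The guard: no defined word tables means no population attempt. */
   public boolean shouldPopulate(String table)
   {
      return !wordTablesFor(table).isEmpty();
   }
}
```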
#208 Updated by Roger Borrello 4 months ago
Ovidiu Maxiniuc wrote:
Roger,
Please update to latest 4722b. There are two issues fixed which, in fact, were present in trunk:
Great! Don't forget to update the ImportWorker.java header. I'll test it out this AM.
#209 Updated by Roger Borrello 4 months ago
Roger Borrello wrote:
Great! Don't forget to update the ImportWorker.java header. I'll test it out this AM.
I meant to post that this fix works great in my configuration!
#210 Updated by Roger Borrello 4 months ago
Branch 4722b was rebased to trunk_14882 and is at revision 14888.
#212 Updated by Roger Borrello 4 months ago
Greg Shah wrote:
Does this need any further testing?
Not from my perspective.
#214 Updated by Ovidiu Maxiniuc 4 months ago
I think so, but to my knowledge the changes in r14878..14884 were not reviewed.
#215 Updated by Roger Borrello 4 months ago
Branch 4722b was rebased to trunk_14886 and is at revision 14892.
#217 Updated by Eric Faulhaber 4 months ago
Code review 4722b/14887-14892:
Generally good, but please see comments/questions below.
I think the import.sh changes are ok, but I did not confirm with a test. Has this been tested?
In the XmlFilePlugin c'tor, there is this comment:
// NOTE: if !useResources and [name] failed to be load as a resource do an attempt to load it
// from file system. This should only happen when in developer mode. In production, this
// must be a configuration error (or lack or resource itself).
Note that this is within a conditional test if (useResources), so the !useResources premise to the comment is confusing, since the useResources flag is known to be true at this point. Shouldn't the comment read: // NOTE: if useResources is true and... ?
At ImportWorker line 606, we have the following:
LOG.log(Level.FINE, "Failed to create close session (" + Persistence.ID_GEN_SEQUENCE + ")", exc);
This is error handling in response to a failure closing the current session when creating the identity sequence. The wording is confusing. Should it be, "Failed to close session after creating sequence (" + ...?
Not introduced with this update, but why do we have a mixture of System.out and LOG.log output in this class?
Are the memptr changes supposed to be part of this update?
#218 Updated by Ovidiu Maxiniuc 4 months ago
Eric Faulhaber wrote:
In the XmlFilePlugin c'tor, there is this comment:
[...]
At ImportWorker line 606, we have the following:
[...]
Indeed, both of these are copy/paste typos. I corrected them. Committed revision 14893.
Not introduced with this update, but why do we have a mixture of System.out and LOG.log output in this class?
IIRC, this file was always logging to STDIO because it was supposed to be called from the TRPL import. I think, when the new central logger was added, only the exceptions were rewritten to be handled by the new API; the normal messages stayed.
Are the memptr changes supposed to be part of this update?
Indeed, they are not related. But I would like to get the file into the repo; I find the deprecation messages annoying when building FWD. I do not know of a specific task to accommodate such changes.
#219 Updated by Roger Borrello 4 months ago
Eric Faulhaber wrote:
I think the import.sh changes are ok, but I did not confirm with a test. Has this been tested?
Yes... inside and outside of containers, with H2 and PostgreSQL DBs.
#220 Updated by Roger Borrello 4 months ago
Ovidiu, there are a couple of conflicts when attempting to rebase against trunk revision 14890. Could you take a look at rebasing? The changes in src/com/goldencode/ast/XmlFilePlugin.java are especially complicated.
#222 Updated by Constantin Asofiei 4 months ago
Greg Shah wrote:
Constantin: Please review the memptr change in 4722b rev 14888. If this was a safe change, why didn't we make it back in rev 14854 when we introduced the newer roaringbitmap (IIRC that was where those APIs were deprecated)?
The changes are safe, I just missed them back then.
#224 Updated by Ovidiu Maxiniuc 4 months ago
I rebased the branch to trunk/14890 and fixed the (strange, albeit simple) conflicts.
Pushed up 4722b to revision 14897.
I did the import for hotel gui with the 3 main dialects. No issues detected. I think the branch is ready to be merged.
#226 Updated by Ovidiu Maxiniuc 4 months ago
- Status changed from Merge Pending to Test
Branch 4722b was merged into trunk revision 14895 and archived.
#227 Updated by Roger Borrello 3 months ago
- Status changed from Test to WIP
- % Done changed from 100 to 90
I created a new branch 4722c due to an issue when attempting to run import.sh in a container. The createdb is not able to access the remote postgres container.
#228 Updated by Roger Borrello 3 months ago
- Status changed from WIP to Review
- % Done changed from 90 to 100
Revision 14935 is ready for review. The issue it fixes was holding up the ability to use a multi-container environment.
#230 Updated by Roger Borrello 3 months ago
Greg Shah wrote:
What testing is needed?
I've built some docker images using the branch, and I'm working on the hotel_gui container using the new docker image(s).
#231 Updated by Roger Borrello 3 months ago
My testing is complete. There is now a multi-container configuration (server/client and postgres) for hotel_gui. There is more work to be done in terms of configuration automation, but hardcoded values in the docker-compose.yml
are working.
#233 Updated by Roger Borrello 3 months ago
Greg Shah wrote:
Please test in a non-Docker environment.
It works in a normal environment.
#234 Updated by Roger Borrello 3 months ago
The Docker implementation is the most stringent user of this data import feature, so while this is partially docker related, it does fit nicely in this task.
We've talked about how the postgres image utilizes the init files in the /docker-entrypoint-initdb.d/ directory to step through the cluster and database creation. Currently, I have minimal support in a derivative of the postgres image (addition of custom locales) so that the container can proceed with creation of the cluster and roles, then import.sh (addition of p2j.jar and then the application jar).
We can also handle the database import driven by the server container, with the postgres container just performing the cluster and role creation upon startup (we'd still need custom locales added to the postgres image). The 4722c branch enables this by making sure createdb is network enabled.
The discussion is how best to drive the configuration necessary to perform this. In the single container implementation this is controlled by passing the --db_dump=<path_to_dump> parameter to the run_docker.sh script that creates the container. When included, it will add a mapping to the container's /dbdump directory and an additional parameter passed to deploy/server/start_server.sh via the environment so that the database import is called before the server starts. Very simple.
In the compose situation, we have to place all mappings and environment variables in the yaml files. For example:
version: '3.4'
services:
hotel_gui_db1:
## build: ./builds/hotel_gui-db-14
image: hotel_gui_postgres:latest
restart: unless-stopped
container_name: hotel_gui_postgres
ports:
- 5434:5432
volumes:
- hotel_db:/pgdata:delegated
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: fwd_admin
POSTGRES_INITDB_ARGS: --locale=en_US@iso88591_fwd_basic
POSTGRES_DB: postgres
PGDATA: /pgdata
PGPORT: 5432
POSTGRES_HOST_AUTH_METHOD: password
TZ: 'America/New_York'
APP_DBTYPE: postgres
APP_DBADMIN: fwd_admin
APP_DBADMIN_PASSWORD: admin
APP_DBUSER: fwd_user
APP_DBUSER_PASSWORD: user
APP_DBNAMES: hotel
APP_DBHOST: localhost
APP_DBPORT: 5432
APP_DUMP_DIR: /dbdump
hotel_gui_server:
image: hotel_gui_server:latest
container_name: hotel_gui_server
ports:
- 7443:7443
- 7449-7459:7449-7459
- 3333:3333
- 3433:3433
- 2202:22
command: ['bash','-x', '/opt/hotel/docker/docker-entrypoint.sh', 'postgres', 'hotel', 'detached', 'prepare_dir.json']
depends_on:
- hotel_gui_db1
volumes:
- ../deploy/etc:/opt/hotel/etc
- ../deploy/logs:/opt/hotel/logs
- ../data/dump:/dbdump:cached
environment:
START_SERVER_GETOPTS: '-z -x /opt/hotel/etc/server.xml -c -p /opt/hotel/etc/prepare_dir.json -r /dbdump/'
The thing I am struggling with most is how to build out the yaml file(s) (there can be a main docker-compose.yml and a docker-compose.override.yml to "override" any values set in the first file).
Variable substitution would be one method. I'd like the source of the variable values to be a singular place, like the JSON that is used to create the directory. But several values are not kept in it:
{
"server_xml_file": "/opt/hotel/etc/server.xml",
"directory_xml_file": "/opt/hotel/etc/directory.xml",
"spawner_path": "/opt/spawner/spawn",
"client_start_dir": "/opt/hotel/client",
"dateFormat": "mdy",
"numberGroupSep": ",",
"numberDecimalSep": ".",
"p2j_entry": "start.p",
"pkgroot": "com.goldencode.hotel",
"propath": ".:common:",
"search_path": ".:common",
"path_separator": ":",
"file_separator": "/",
"case_sensitive": "TRUE",
"os_user": "fwd",
"kbd_layout": "US",
"server_log": "/opt/hotel/logs",
"client_log": "/opt/hotel/logs",
"embedded_host": "{embedded_host}",
"dbnames": "hotel",
"hotel": {
"dbtype": "postgres",
"dbuser": "fwd_user",
"dbuserpass": "user",
"dbadmin": "fwd_admin",
"dbadminpass": "admin",
"collation": "en_US@iso88591_fwd_basic",
"dbhost": "hotel_gui_db1",
"dbport": 5432
}
}
How far back do I go? Would these now be questions asked during the prepare_template.sh timeframe? Or should I accept parameters to a docker/run_compose.sh script that would build the file on the fly, just before composing (with the default utilizing what yaml might be there already)?
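For the variable-substitution option, the deployment JSON could be turned into a .env file that docker-compose reads for ${...} substitution. A toy sketch under stated assumptions: the class name EnvFromJson is hypothetical, only flat quoted string values are handled (no nesting, numbers, or escapes), and a real implementation would use a proper JSON library:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy sketch: pull flat "key": "value" pairs out of the deployment JSON and
// emit them as NAME=value lines suitable for a docker-compose .env file, so
// the yaml can substitute from a single source of truth.
public class EnvFromJson
{
   // matches quoted string values only; numbers and nested objects are skipped
   private static final Pattern PAIR = Pattern.compile("\"([A-Za-z0-9_]+)\"\\s*:\\s*\"([^\"]*)\"");

   static Map<String, String> flatPairs(String json)
   {
      Map<String, String> out = new LinkedHashMap<>();
      Matcher m = PAIR.matcher(json);
      while (m.find())
      {
         // .env convention: upper-case variable names
         out.put(m.group(1).toUpperCase(), m.group(2));
      }
      return out;
   }

   static String toEnv(Map<String, String> pairs)
   {
      StringBuilder sb = new StringBuilder();
      pairs.forEach((k, v) -> sb.append(k).append('=').append(v).append('\n'));
      return sb.toString();
   }
}
```

The resulting file could then feed a `${DBUSER}`-style reference in docker-compose.yml, keeping the yaml free of hardcoded credentials.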
I'd like to get these implementation decisions correct before moving onto customer code.
#235 Updated by Roger Borrello 3 months ago
Branch 4722c was rebased to trunk_14941 and is at revision 14942.
I believe this is ready for merge.
#237 Updated by Roger Borrello 3 months ago
Branch 4722c was merged to trunk at revision 14942 and was archived.
#238 Updated by Roger Borrello 3 months ago
- Status changed from Internal Test to Test