Bug #7413

DUMP-NAME is not guaranteed to be unique in the .df

Added by Constantin Asofiei 11 months ago. Updated 11 months ago.

Status:
New
Priority:
Normal
Assignee:
-
Target version:
-
Start date:
Due date:
% Done:
0%

billable:
No
vendor_id:
GCD
case_num:
version:

History

#1 Updated by Constantin Asofiei 11 months ago

If two tables have the same DUMP-NAME in the .df, the FWD server cannot start, as it enforces a unique index in the meta table.

Editing the .df by hand can lead to mistakes where the same name is used. FWD should not abend this way.

org.h2.jdbc.JdbcSQLIntegrityConstraintViolationException: Unique index or primary key violation: "PUBLIC.IDX__META_FILE_DUMP_NAME ON PUBLIC.META_FILE(DB_RECID NULLS LAST, DUMP_NAME NULLS LAST, OWNER NULLS LAST) VALUES ( /* key:20 */ 21, CAST('year1' AS VARCHAR_IGNORECASE), NULL, 21, FALSE, 0, 0, 0, -21, 1, CAST('*' AS VARCHAR_IGNORECASE), CAST('*' AS VARCHAR_IGNORECASE), CAST('*' AS VARCHAR_IGNORECASE), CAST('*' AS VARCHAR_IGNORECASE), '', 4032, CAST('' AS VARCHAR_IGNORECASE), '', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, CAST('?' AS VARCHAR_IGNORECASE), CAST('?' AS VARCHAR_IGNORECASE), CAST('?' AS VARCHAR_IGNORECASE), CAST('?' AS VARCHAR_IGNORECASE), CAST('?' AS VARCHAR_IGNORECASE), CAST('?' AS VARCHAR_IGNORECASE), CAST('?' AS VARCHAR_IGNORECASE), CAST('?' AS VARCHAR_IGNORECASE), NULL, 0, FALSE, FALSE, 'yc', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, CAST('?' AS VARCHAR_IGNORECASE), CAST('?' AS VARCHAR_IGNORECASE), CAST('?' AS VARCHAR_IGNORECASE), CAST('?' AS VARCHAR_IGNORECASE), CAST('?' AS VARCHAR_IGNORECASE), CAST('?' AS VARCHAR_IGNORECASE), CAST('?' AS VARCHAR_IGNORECASE), CAST('?' AS VARCHAR_IGNORECASE), X'', NULL, NULL, NULL, NULL, '?', '?', CAST('?' AS VARCHAR_IGNORECASE), CAST('?' AS VARCHAR_IGNORECASE), CAST('?' AS VARCHAR_IGNORECASE), NULL, NULL, CAST('' AS VARCHAR_IGNORECASE), CAST('*' AS VARCHAR_IGNORECASE), CAST('*' AS VARCHAR_IGNORECASE), CAST('' AS VARCHAR_IGNORECASE), CAST('' AS VARCHAR_IGNORECASE), NULL, 0, NULL, CAST('PUB' AS VARCHAR_IGNORECASE), CAST('N' AS VARCHAR_IGNORECASE), CAST('N' AS VARCHAR_IGNORECASE), CAST('N' AS VARCHAR_IGNORECASE), CAST('N' AS VARCHAR_IGNORECASE), CAST('PUB' AS VARCHAR_IGNORECASE), 5, CAST('Y' AS VARCHAR_IGNORECASE), CAST('T' AS VARCHAR_IGNORECASE), CAST('year1' AS VARCHAR_IGNORECASE), NULL, NULL, NULL, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, CAST('' AS VARCHAR_IGNORECASE), NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, '?', '?', '?', '?', '?', '?', '?', '?')"; SQL statement:
insert into meta_file (file_name, prime_index, file_number, dft_pk, numkey, numkfld, numkcomp, template, numfld, can_create, can_read, can_write, can_delete, desc, db_recid, valexp, valmsg, fil_misc11, fil_misc12, fil_misc13, fil_misc14, fil_misc15, fil_misc16, fil_misc17, fil_misc18, fil_misc21, fil_misc22, fil_misc23, fil_misc24, fil_misc25, fil_misc26, fil_misc27, fil_misc28, last_change, db_lang, hidden, frozen, dump_name, crc, fil_res11, fil_res12, fil_res13, fil_res14, fil_res15, fil_res16, fil_res17, fil_res18, fil_res21, fil_res22, fil_res23, fil_res24, fil_res25, fil_res26, fil_res27, fil_res28, cache, for_size, for_flag, for_cnt1, for_cnt2, for_name, for_owner, for_format, for_info, for_type, for_id, for_number, file_label, can_dump, can_load, valmsg_sa, file_label_sa, ianum, version, field_map, creator, has_ccnstrs, has_fcnstrs, has_pcnstrs, has_ucnstrs, owner, rssid, tbl_status, tbl_type, user_misc, last_modified, last_modified_offset, mod_sequence, file_attributes1, file_attributes2, file_attributes3, file_attributes4, file_attributes5, file_attributes6, file_attributes7, file_attributes8, file_attributes9, file_attributes10, file_attributes11, file_attributes12, file_attributes13, file_attributes14, file_attributes15, file_attributes16, file_attributes17, file_attributes18, file_attributes19, file_attributes20, file_attributes21, file_attributes22, file_attributes23, file_attributes24, file_attributes25, file_attributes26, file_attributes27, file_attributes28, file_attributes29, file_attributes30, file_attributes31, file_attributes32, file_attributes33, file_attributes34, file_attributes35, file_attributes36, file_attributes37, file_attributes38, file_attributes39, file_attributes40, file_attributes41, file_attributes42, file_attributes43, file_attributes44, file_attributes45, file_attributes46, file_attributes47, file_attributes48, file_attributes49, file_attributes50, file_attributes51, file_attributes52, file_attributes53, file_attributes54, file_attributes55, file_attributes56, file_attributes57, file_attributes58, file_attributes59, file_attributes60, file_attributes61, file_attributes62, file_attributes63, file_attributes64, category, fil_misc31, fil_misc32, fil_misc33, fil_misc34, fil_misc35, fil_misc36, fil_misc37, fil_misc38, fil_misc41, fil_misc42, fil_misc43, fil_misc44, fil_misc45, fil_misc46, fil_misc47, fil_misc48, recid) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) [23505-200]
    at org.h2.message.DbException.getJdbcSQLException(DbException.java:459)
    at org.h2.message.DbException.getJdbcSQLException(DbException.java:429)
    at org.h2.message.DbException.get(DbException.java:205)
    at org.h2.message.DbException.get(DbException.java:181)
    at org.h2.index.BaseIndex.getDuplicateKeyException(BaseIndex.java:103)
    at org.h2.pagestore.db.TreeIndex.add(TreeIndex.java:70)
    at org.h2.pagestore.db.PageStoreTable.addRow(PageStoreTable.java:97)
    at org.h2.command.dml.Insert.insertRows(Insert.java:195)
    at org.h2.command.dml.Insert.update(Insert.java:151)
    at org.h2.command.CommandContainer.update(CommandContainer.java:198)
    at org.h2.command.Command.executeUpdate(Command.java:251)
    at org.h2.jdbc.JdbcPreparedStatement.execute(JdbcPreparedStatement.java:240)
    at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.execute(NewProxyPreparedStatement.java:67)
    at com.goldencode.p2j.persist.orm.SQLStatementLogger.log(SQLStatementLogger.java:363)
    at com.goldencode.p2j.persist.orm.Persister.insert(Persister.java:908)
    at com.goldencode.p2j.persist.orm.Persister.insert(Persister.java:408)
    at com.goldencode.p2j.persist.orm.Session.save(Session.java:760)
    at com.goldencode.p2j.persist.Persistence.save(Persistence.java:3048)
    at com.goldencode.p2j.persist.Persistence.save(Persistence.java:2996)
    at com.goldencode.p2j.persist.meta.MetadataManager.setMetaTableRecord(MetadataManager.java:1322)
    at com.goldencode.p2j.persist.meta.MetadataManager.addTableToFile(MetadataManager.java:841)
    at com.goldencode.p2j.persist.meta.MetadataManager.lambda$populateFileTable$3(MetadataManager.java:786)
    at java.util.ArrayList.forEach(ArrayList.java:1259)
    at com.goldencode.p2j.persist.meta.MetadataManager.populateFileTable(MetadataManager.java:786)
    at com.goldencode.p2j.persist.meta.MetadataManager.access$700(MetadataManager.java:142)
    at com.goldencode.p2j.persist.meta.MetadataManager$SystemTable.lambda$static$5(MetadataManager.java:2231)
    at com.goldencode.p2j.persist.meta.MetadataManager$SystemTable.lambda$new$6(MetadataManager.java:2284)
    at com.goldencode.p2j.persist.meta.MetadataManager$SystemTable.lambda$populateAll$8(MetadataManager.java:2356)
    at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
    at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
    at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
    at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
    at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
    at com.goldencode.p2j.persist.meta.MetadataManager$SystemTable.populateAll(MetadataManager.java:2356)
    at com.goldencode.p2j.persist.meta.MetadataManager.populateDatabase(MetadataManager.java:714)
    at com.goldencode.p2j.persist.DatabaseManager.initMetaDb(DatabaseManager.java:1488)
    at com.goldencode.p2j.persist.DatabaseManager.activateMetaDb(DatabaseManager.java:1368)
    at com.goldencode.p2j.persist.DatabaseManager.lambda$activate$8(DatabaseManager.java:1430)
    at com.goldencode.p2j.persist.DatabaseManager.activateAndRegister(DatabaseManager.java:1456)
    at com.goldencode.p2j.persist.DatabaseManager.activate(DatabaseManager.java:1430)
    at com.goldencode.p2j.persist.ConnectionManager.makeConnection(ConnectionManager.java:3616)
    at com.goldencode.p2j.persist.ConnectionManager.connect(ConnectionManager.java:3422)
    at com.goldencode.p2j.persist.ConnectionManager.connectDbImpl(ConnectionManager.java:3103)
    at com.goldencode.p2j.persist.ConnectionManager.connect_(ConnectionManager.java:680)
    at com.goldencode.p2j.persist.ConnectionManager.connect(ConnectionManager.java:581)
    at com.goldencode.p2j.persist.DatabaseManager.autoConnect(DatabaseManager.java:982)
    at com.goldencode.p2j.util.Agent.prepare(Agent.java:2972)

#2 Updated by Eric Faulhaber 11 months ago

Constantin Asofiei wrote:

If two tables have the same DUMP-NAME in the .df, the FWD server cannot start, as it enforces a unique index in the meta table.

Editing the .df by hand can lead to mistakes where the same name is used. FWD should not abend this way.

That makes sense. When we implemented the infrastructure for static metadata, no consideration was given to hand-editing mistakes in the .df file. It was assumed the input was correct and consistent.

These types of constraint violations should be caught and reported during conversion, because in some cases there will be no good way to recover from invalid metadata at runtime. In this case, DUMP-NAME is not used directly by the FWD runtime, but it might be used by an application, for example for data export purposes.

The obvious short-term workaround is to correct the duplicate DUMP-NAME value which caused this abnormal end. For a more robust, longer-term solution, I think we need a general-purpose conversion-time fix.

Static metadata content is collected by the metaschema.xml rule set. Perhaps we could do an early walk that collects the unique index information for each metadata table we support. We might be able to use or adapt the UniqueTracker class to enforce unique constraints during the collection of static metadata, so we catch errors like this one at conversion time rather than at runtime.
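
For illustration, a minimal standalone sketch of that kind of conversion-time check. MetaUniqueChecker and track are hypothetical placeholder names, not the actual UniqueTracker API; the key components mirror the columns of the violated index in the H2 error above (DB_RECID, DUMP_NAME, OWNER).

import java.util.*;

// Sketch of a conversion-time unique constraint checker; class and method
// names are hypothetical, not the actual UniqueTracker API.
public class MetaUniqueChecker
{
   // one set of seen composite keys per unique index, keyed by index name
   private final Map<String, Set<List<Object>>> seen = new HashMap<>();

   // returns true if the key is new, false if it duplicates an earlier record
   public boolean track(String indexName, Object... keyComponents)
   {
      List<Object> key = new ArrayList<>();
      for (Object c : keyComponents)
      {
         // character fields compare case-insensitively in Progress (note the
         // VARCHAR_IGNORECASE casts in the H2 error), so normalize strings
         key.add(c instanceof String ? ((String) c).toLowerCase() : c);
      }

      return seen.computeIfAbsent(indexName, k -> new HashSet<>()).add(key);
   }

   public static void main(String[] args)
   {
      MetaUniqueChecker checker = new MetaUniqueChecker();

      // key components of the violated index: DB_RECID, DUMP_NAME, OWNER
      System.out.println(checker.track("_Dump-Name", 21, "year1", "PUB")); // true
      System.out.println(checker.track("_Dump-Name", 21, "YEAR1", "PUB")); // false -> report at conversion
   }
}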

#3 Updated by Greg Shah 11 months ago

If the 4GL doesn't care that the DUMP-NAME is unique, why do we care? It seems like there is no compatibility requirement here. We've introduced a problem that doesn't exist in OE.

In other words, if the value is incorrect, that should not stop the FWD server from running. I don't see conversion as a good solution; hand edits to DMOs later might introduce the same problem. The runtime should not be sensitive to this, and I wouldn't want to kill the conversion over this either.

#4 Updated by Constantin Asofiei 11 months ago

There is a unique index on _File._Dump-name in the meta schema:

ADD INDEX "_Dump-Name" ON "_File" 
  AREA "Schema Area" 
  UNIQUE
  INDEX-FIELD "_Db-recid" ASCENDING
  INDEX-FIELD "_Dump-name" ASCENDING ABBREVIATED
  INDEX-FIELD "_Owner" ASCENDING

The problem here is with manual edits of the .df file, where things can go wrong.

#5 Updated by Greg Shah 11 months ago

Why would the dump-name itself be the cause of a collision?

What happens in OE when this same .df is processed?

#6 Updated by Constantin Asofiei 11 months ago

Greg Shah wrote:

Why would the dump-name itself be the cause of a collision?

Well, the index is unique, and I assume _Db-recid and _Owner are the same for all records.

What happens in OE when this same .df is processed?

I just tested this, and OE makes the dump-name unique: I used year1 and got year1-------1.
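
For illustration, a sketch of that renaming scheme. The padding rule (dashes up to a 12-character stem, then an incrementing numeric suffix) is inferred from this single observed example and is an assumption, not documented OE behavior; DumpNameFixer and makeUnique are hypothetical names.

import java.util.HashSet;
import java.util.Set;

// Sketch of an OE-like dump-name uniquifier; the 12-character padding width
// is an assumption inferred from the single observed example.
public class DumpNameFixer
{
   public static String makeUnique(String name, Set<String> taken)
   {
      if (taken.add(name.toLowerCase()))
      {
         return name; // no collision, keep the original name
      }

      // pad the stem with dashes, then append the first free numeric suffix
      StringBuilder stem = new StringBuilder(name);
      while (stem.length() < 12)
      {
         stem.append('-');
      }

      for (int i = 1; ; i++)
      {
         String candidate = stem.toString() + i;
         if (taken.add(candidate.toLowerCase()))
         {
            return candidate;
         }
      }
   }

   public static void main(String[] args)
   {
      Set<String> taken = new HashSet<>();
      System.out.println(makeUnique("year1", taken)); // year1
      System.out.println(makeUnique("year1", taken)); // year1-------1
   }
}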

#7 Updated by Eric Faulhaber 11 months ago

We could do something similar for the dump name during conversion when there is a unique constraint violation, so conversion doesn't stop.

However, that doesn't address the general-purpose problem of metadata unique constraint violations due to hand-editing mistakes.
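
For illustration, a sketch of how the two ideas could combine during static metadata collection: rename on collision (reusing makeUnique from the sketch in note 6) and log a warning, instead of letting the unique index violation abend the server later. DumpNameReconciler and reconcile are hypothetical names, not existing FWD hooks.

import java.util.HashSet;
import java.util.Set;
import java.util.logging.Logger;

// Sketch of a conversion-time reconciler: detect a duplicate DUMP-NAME,
// rename it OE-style (via makeUnique from the sketch above) and warn.
public class DumpNameReconciler
{
   private static final Logger LOG = Logger.getLogger(DumpNameReconciler.class.getName());

   private final Set<String> taken = new HashSet<>();

   // returns the dump name to store: the original if it is still unique,
   // otherwise an OE-style uniquified replacement
   public String reconcile(String table, String dumpName)
   {
      String fixed = DumpNameFixer.makeUnique(dumpName, taken);
      if (!fixed.equals(dumpName))
      {
         LOG.warning("duplicate DUMP-NAME '" + dumpName + "' on table '" +
                     table + "'; renamed to '" + fixed + "'");
      }

      return fixed;
   }
}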
