Hash File

Maumont 2008/04/10 18:00

What's currently wrong

The current lookup management cannot handle large volumes of data.

tHashOutput and tHashInput components

Job designer context

Examples of designs with the tHashOutput and tHashInput components:

First step
:specifications:example2_thash_components.png
Load and read from the same tHashOutput component

:specifications:example3_thash_components.png

:specifications:example1_thash_components.png

Second step
:specifications:example4_thash_components.png
HASH_FILE_A is updated, and the next read will take the previous modification into account: we can use a memory-buffered hash to implement this behavior (sketched below), so for the next lines, lookup data will be read both from file and from memory.
If the hash buffer is full, data will be sorted and written out, then the next updates will be added to the memory hash.

"tHashOutput" Component

By default, the tHashOutput component sorts in ascending order and writes key and row data to a set of files in binary format.

[NRO]

"Component" view description :

Add a combo labeled “Link with” which contains only tHashOutput components that have “Clear data” checked. This field interacts with the view as follows:

  • If a “tHashOutput” component is selected, the rules for the other visible fields depend on the “Clear data” state
  • If no “tHashOutput” component is selected, all options are visible

The existing checkbox “Clear data” will have an impact on the “Keys management” and “Data write mode” fields:

  • These fields will be hidden if “Clear data” is unchecked
  • These fields will be enabled and visible if “Clear data” is checked

The “Clear data” title could be renamed to “Clear data and enable hash configuration”.

The “Clear data” checkbox has no effect on the “Base file path” field.

New options will be available in the tHashOutput “Component” view, such as:

A combo “Keys management” with these items (internal key name: KEYS_MANAGEMENT):

  • “Keep only first key occurrence” (internal value name: KEEP_FIRST)
  • “Keep only last key occurrence” (internal value name: KEEP_LAST)
  • “Keep all key occurrences” (internal value name: KEEP_ALL)

A combo “Data write mode” with these items (internal key name: DATA_WRITE_MODE):

  • “Data into Persistent target” (default value) (internal value name: PERSISTENT)
  • “Data into Memory” (internal value name: MEMORY)
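On the code-generation side, these internal names could map to plain enums; a minimal sketch (the enum types are an assumption, only the internal value names come from this spec):

	enum KeysManagement { KEEP_FIRST, KEEP_LAST, KEEP_ALL } // internal key name: KEYS_MANAGEMENT
	enum DataWriteMode { PERSISTENT, MEMORY }               // internal key name: DATA_WRITE_MODE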

If “Data into Persistent target” is selected, the following options will be available (internal key name: BASE_FILE_PATH):

  • a text field “Base file path”

[NEW ITEM] Add a max heap memory field; the value should be a number with MB (megabytes) as the unit. This field must appear when “Data into Persistent target” is selected.

Optional features to implement later:

  • Ignore case, or case-sensitive matching
  • Disable sorting on any column

How could the user disable the automatic sort process on specific columns in the GUI?

"tHashInput" component

The “tHashInput” component can read data loaded/written by “tHashOutput”.

[NRO]

"Component" view description

New options will be available in the tHashInput “Component” view, such as:

Keep in mind that when a tHashInput reads persistent data, the data can come from a component declared in another job.

Replace the “Component List” combo with a combo labeled “Link with”, which contains only tHashOutput components that have “Clear data” checked. This field interacts with the view as follows:

  • If a “tHashOutput” component is selected, only “Keys management” remains visible, since the “Keys management” option can differ from the output component's “Keys management”
  • If no “tHashOutput” component is selected, all options are visible

A combo “Keys management” with these items:

  • “Default: use same mode as tHashOutput component” (internal value name: DEFAULT)
  • “Keep only first key occurrence”
  • “Keep only last key occurrence”
  • “Keep all key occurrences”

A combo “Data read mode” with these items:

  • “Data from Persistent source” (default value) (internal value name: PERSISTENT)
  • “Data from Memory” (internal value name: MEMORY)

If “Data from Persistent source” is selected, the following options will be available:

  • a text field “Base file path”
  • later, a combo to select the “Algorithm” could be added

Communications/behaviors between components:

Between tHashOutput and tHashInput:

We could highlight, within the same job, all tHash components related to the selected one.

Memory configuration case:

No change to the existing behavior: tHashInput lists all existing tHashOutput components in the current job.

File configuration case:

At first, there will be no communication between these components. Later, we may list in tHashInput all file paths configured on the tHashOutput components of the same job.

Between tHashOutput and tMap/tJoin:

tHashOutput will be usable as output and input at the same time, like the file input components.

If tHashOutput is connected with a Main connection to tMap/tJoin, all data are sent to the next component as with a classic input file, with only one difference: data are filtered by the “Keys management” option.

If tHashOutput is connected with a Lookup connection to tMap/tJoin, the lookup mode is enabled and data from tHash are read according to the algorithm configured in tHashOutput (by default the “Ordered files” algorithm, described in a later section).

tHashOutput becomes the reference for the “Keys management” option, so tMap/tJoin will retrieve this information from tHashOutput; the equivalent option in tMap is currently the combo that appears for each lookup table. However, in the case of an implicit lookup, the lookup configuration from tMap remains the reference.

[NRO] We must ensure that “Keys management” in tHashOutput can be modified from tMap when the user changes the “Lookup type”, and conversely. These fields must be synchronized, so properties may be modified in either direction.

Between tHashInput and tMap/tJoin:

If tHashInput is connected with a Main connection to tMap/tJoin, all data are sent to the next component as with a classic input file, with only one difference: data are filtered by the “Keys management” option.

If tHashInput is connected with a Lookup connection to tMap/tJoin, the lookup mode is enabled and data from tHash are read according to several conditions:

  • 1. the KEYS_MANAGEMENT value read from the file descriptor
  • 2. the KEYS_MANAGEMENT value selected by the user in the Component view

KEYS_MANAGEMENT cases (a resolution sketch follows the list):

  • KEYS_MANAGEMENT in the file descriptor is “KEEP_FIRST”
    • KEYS_MANAGEMENT in the Component view is “KEEP_FIRST” ⇒ the first inserted key is returned
    • KEYS_MANAGEMENT in the Component view is “KEEP_LAST” ⇒ the first inserted key is returned
    • KEYS_MANAGEMENT in the Component view is “KEEP_ALL” ⇒ the first inserted key is returned
  • KEYS_MANAGEMENT in the file descriptor is “KEEP_LAST”
    • KEYS_MANAGEMENT in the Component view is “KEEP_FIRST” ⇒ the last inserted key is returned
    • KEYS_MANAGEMENT in the Component view is “KEEP_LAST” ⇒ the last inserted key is returned
    • KEYS_MANAGEMENT in the Component view is “KEEP_ALL” ⇒ the last inserted key is returned
  • KEYS_MANAGEMENT in the file descriptor is “KEEP_ALL”; this is the only case where KEYS_MANAGEMENT in the Component view can affect the match
    • KEYS_MANAGEMENT in the Component view is “KEEP_FIRST” ⇒ the first inserted key is returned
    • KEYS_MANAGEMENT in the Component view is “KEEP_LAST” ⇒ the last inserted key is returned
    • KEYS_MANAGEMENT in the Component view is “KEEP_ALL” ⇒ all inserted keys are returned
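The case table above reduces to a simple rule: the descriptor value wins unless it is KEEP_ALL. A minimal sketch, assuming a KeysManagement enum extended with the input-side DEFAULT value (names are illustrative, not the generated code):

	enum KeysManagement { DEFAULT, KEEP_FIRST, KEEP_LAST, KEEP_ALL }

	static KeysManagement effectiveKeysManagement(KeysManagement fromDescriptor,
	                                              KeysManagement fromView) {
	    // DEFAULT means "use the same mode as the tHashOutput component".
	    if (fromView == KeysManagement.DEFAULT || fromDescriptor != KeysManagement.KEEP_ALL) {
	        return fromDescriptor; // KEEP_FIRST/KEEP_LAST already filtered the stored data
	    }
	    return fromView; // only a KEEP_ALL descriptor lets the Component view decide
	}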

tHashInput becomes the reference for the “Keys management” option, so tMap/tJoin will retrieve this information from tHashInput; the equivalent option in tMap is currently the combo that appears for each lookup table.

[NRO] We must ensure that “Keys management” in tHashInput can be modified from tMap when the user changes the “Lookup type”, and conversely. These fields must be synchronized, so properties may be modified in either direction.

The tMap component must be synchronized only with the components directly connected on lookup connections; all other related tHashInput/tHashOutput components must not be synchronized.

Description of the generated files for the "Ordered files" algorithm

Definitions: {BASE_FILE_PATH} is the value of the editable field in the “Component” view. {INDEX_FILE} is the file index; for each data load, n pairs of files will be generated: KeysData files and RowData files. Each pair corresponds to one sorted packet.

Files naming convention

The file "Descriptor" (ASCII file):

Name of the descriptor file (ASCII format, in the same format as *.properties, or binary; undecided for the moment):

	{BASE_FILE_PATH}.desc

or

	{BASE_FILE_PATH}_desc.bin
Names of the Keys files (binary files):

	{BASE_FILE_PATH}_KeysData_{INDEX_FILE}.bin

Names of the Data files (binary files):

	{BASE_FILE_PATH}_RowData_{INDEX_FILE}.bin
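A sketch of how these names could be assembled, with basePath and fileIndex standing for {BASE_FILE_PATH} and {INDEX_FILE} (method names are illustrative):

	static String descriptorFileName(String basePath) {
	    return basePath + ".desc"; // or basePath + "_desc.bin" for the binary variant
	}

	static String keysFileName(String basePath, int fileIndex) {
	    return basePath + "_KeysData_" + fileIndex + ".bin";
	}

	static String rowFileName(String basePath, int fileIndex) {
	    return basePath + "_RowData_" + fileIndex + ".bin";
	}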

Data structure

The file "Descriptor":

Data are written in bytes.

This file will contain the following information (a write sketch follows the list):

  • STRATEGY
    • ORDERED_FILES
    • BERKLEY_DB
    • XXL
  • KEYS_MANAGEMENT :
    • KEEP_FIRST
    • KEEP_LAST
    • KEEP_ALL
  • Total line count?
  • PAIR_FILES_COUNT=11 ⇒ count of created file pairs; this tells us how many pairs to iterate over.
  • FILE_{INDEX_FILE}_ORDERED=YES/NO
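If the *.properties-like ASCII variant is retained, writing the descriptor could look like the sketch below; the key spellings follow the list above, while the choice between ASCII and binary is still open:

	import java.io.FileOutputStream;
	import java.io.IOException;
	import java.util.Properties;

	class DescriptorWriterSketch {
	    static void write(String basePath, int pairFilesCount) throws IOException {
	        Properties desc = new Properties();
	        desc.setProperty("STRATEGY", "ORDERED_FILES");
	        desc.setProperty("KEYS_MANAGEMENT", "KEEP_ALL");
	        desc.setProperty("PAIR_FILES_COUNT", Integer.toString(pairFilesCount));
	        for (int i = 0; i < pairFilesCount; i++) {
	            desc.setProperty("FILE_" + i + "_ORDERED", "YES"); // FILE_{INDEX_FILE}_ORDERED
	        }
	        try (FileOutputStream out = new FileOutputStream(basePath + ".desc")) {
	            desc.store(out, "hash file descriptor");
	        }
	    }
	}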
Keys files:

Data are written in bytes.

Columns :

  • KEY_COLUMN_1
  • KEY_COLUMN_2
  • VALUES_DATA_SIZE (number of bytes of all the data in the corresponding Data file; this is not a position in the file)
Data files:

Data are written in bytes.

Columns :

  • VALUE_COLUMN_1
  • VALUE_COLUMN_2

Hash files in a tMap context

Configuration case with « Lookup Files » only

:specifications:case1_tmap.png

Note in particular the lookup names such as “LF1” (Lookup File 1) and “LF2” (Lookup File 2); there is no constraint on table names.

Note: there is no visual difference between file lookups and memory lookups in the tMap GUI, but we could use a different table color to show the difference (?).

Processing steps details

1. Sort of « LF1 » (Lookup 1)

Sort on the following columns in ascending order:

  • LF1.K1
  • LF1.K2

The structure of the resulting file group will be the following:

The file KeysData:
  • LF1_K1
  • LF1_K2
  • VALUES_DATA_SIZE
The file ValuesData:
  • LF1_INDEX_ORIGIN (optional for final sort)
  • LF1_V

2. Sort of « LF2 » (Lookup 2)

Sort on the following columns in ascending order:

  • LF2.K1
  • LF2.K2
The file KeysData:
  • LF2_K1
  • LF2_K2
  • VALUES_DATA_SIZE
The file ValuesData:
  • LF2_INDEX_ORIGIN (optional for final sort)
  • LF2_V

3. Sort of « M » data (Main)

Sort on the following expressions in ascending order:

  • M.K1 – M.K2 (expression of column LF1.K1 ⇒ EXPR_LF1_K1)
  • M.K1 + M.K2 (expression of column LF1.K2 ⇒ EXPR_LF1_K2)

The « M » file structure will be:

  • M_INDEX_ORIGIN (optional for final sort)
  • EXPR_LF1_K1
  • EXPR_LF1_K2
  • M_K1
  • M_K2
  • M_V

In this case, there is no need to separate keys and values into two files, because this file will be the master file for the first join operation.

4. Join between « M » and « LF1 »

We will call “Join_M_LF1” the join between « M » and « LF1 ». The resulting data will be sorted and written to the Join_M_LF1 file(s) at the same time.

The « Join_M_LF1 » file structure will be:

  • M_INDEX_ORIGIN (optional for final sort)
  • M_K1
  • M_K2
  • M_V
  • LF1_INDEX_ORIGIN (optional for final sort)
  • LF1_K1
  • LF1_K2
  • LF1_V

Sort on the following expressions in ascending order:

  • LF1.K1 – LF1.K2 (expression of column LF2.K1)
  • LF1.K1 + LF1.K2 (expression of column LF2.K2)

5. Join between « Join_M_LF1 » and « LF2 », which we will call « Join_M_LF1_LF2 »

The « Join_M_LF1_LF2 » file structure will be:

  • M_INDEX_ORIGIN (optional for final sort)
  • M_K1
  • M_K2
  • M_V
  • LF1_INDEX_ORIGIN (optional for final sort)
  • LF1_K1
  • LF1_K2
  • LF1_V
  • LF2_INDEX_ORIGIN (optional for final sort)
  • LF2_K1
  • LF2_K2
  • LF2_V

6. Optional final sort

This process will sort the final data as it would have been sorted if it had been processed by in-memory lookups. It will use the columns named *_INDEX_ORIGIN.

Sorted Main Writer-Reader / Sorted Lookup Writer-Reader / Sorted Join Writer-Reader

All these functionalities could be implemented by a common base class.

[SLA] (5 days)

We can create new classes for these operations based on existing classes from the HashFile project.

Modification code strategy

We will try to minimize the modifications on the existing tMap templates.

We will copy properties from the newly generated RowSortContainer* classes shown in the appendix into the existing generated *Struct classes as needed. This strategy should let us keep as much existing code as possible.

Details on the join between two files:

The two files must be sorted on the same columns or column expressions. The first file (upper) will be the master file and the second the slave. They will be read as streams, which saves processing time on I/O reads and writes.

Sample case:

:specifications:tableau_sample_case_join.png

In this example, during the process we must compare the parent item keys with the child item keys (a code sketch follows the list):

  • 1) [index 1 : “A”] == “A” : match OK, set cursor on next child key “AB”
  • 2) [index 1 : “A”] < “AB” : match NOK, set cursor on next parent key [index 2 : “B”] as parent key reference
  • 3) [index 2 : “B”] > “AB” : match NOK, set cursor on next child key “AC”
  • 4) [index 2 : “B”] > “AC” : match NOK, set cursor on next child key “B”
  • 5) [index 2 : “B”] == “B” : match OK, set cursor on next child key “BB”
  • 6) [index 2 : “B”] < “BB” : match NOK, set cursor on next parent key [index 3 : “B”] as parent key reference; the parent keys at index 2 and index 3 are equal, so set cursor back to the previous child key [index 4 : “B”]
  • 7) [index 3 : “B”] == “B” : match OK, set cursor on next child key “BB”
  • 8) [index 3 : “B”] < “BB” : match NOK, set cursor on next parent key “C” as parent key reference
  • 9) [index 4 : “C”] > “BB” : match NOK, set cursor on next child key “BC”
  • 10) [index 4 : “C”] > “BC” : match NOK, set cursor on next child key “D”
  • 11) [index 4 : “C”] < “D” : match NOK, set cursor on next parent key “D” as parent key reference
  • 12) [index 5 : “D”] == “D” : match OK, value: D1, set cursor on next child key “D”
  • 13) [index 5 : “D”] == “D” : match OK, value: D2, set cursor on next child key “DA”
  • 14) [index 5 : “D”] < “DA” : match NOK, no more keys, end of process
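A compact sketch of this master/slave join; for readability it joins two key-sorted in-memory lists of (key, value) pairs instead of streams, and uses a mark on the child cursor so that duplicate parent keys re-scan the matching child group, as in the walkthrough above:

	import java.util.ArrayList;
	import java.util.Arrays;
	import java.util.List;

	class SortMergeJoinSketch {
	    // Each row is {key, value}; both lists must be sorted on the key.
	    static List<String[]> join(List<String[]> parent, List<String[]> child) {
	        List<String[]> out = new ArrayList<String[]>();
	        int mark = 0; // first child row not smaller than the previous parent key
	        for (int p = 0; p < parent.size(); p++) {
	            String parentKey = parent.get(p)[0];
	            int c = mark;
	            // match NOK while the child key is smaller: advance the child cursor
	            while (c < child.size() && child.get(c)[0].compareTo(parentKey) < 0) c++;
	            mark = c; // an equal next parent key restarts from here (rewind)
	            // match OK: emit one joined row per equal child key
	            while (c < child.size() && child.get(c)[0].equals(parentKey)) {
	                out.add(new String[] { parentKey, parent.get(p)[1], child.get(c)[1] });
	                c++;
	            }
	        }
	        return out;
	    }

	    public static void main(String[] args) {
	        List<String[]> parent = Arrays.asList(
	            new String[] { "A", "p1" }, new String[] { "B", "p2" },
	            new String[] { "B", "p3" }, new String[] { "C", "p4" },
	            new String[] { "D", "p5" });
	        List<String[]> child = Arrays.asList(
	            new String[] { "A", "a" }, new String[] { "AB", "ab" },
	            new String[] { "AC", "ac" }, new String[] { "B", "b" },
	            new String[] { "BB", "bb" }, new String[] { "BC", "bc" },
	            new String[] { "D", "D1" }, new String[] { "D", "D2" },
	            new String[] { "DA", "da" });
	        for (String[] row : join(parent, child)) {
	            System.out.println(row[0] + " | " + row[1] + " | " + row[2]);
	        }
	    }
	}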

Possible optimizations:

The buffer size exceeds the memory limit set by the user

If the buffer size exceeds the memory limit set by the user, the filling process can be stopped and the write process started; then a new sort process can begin.
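A sketch of that flush cycle, assuming a user-defined limit in bytes and hypothetical helpers estimatedBytes() and writeSortedPacket() producing one KeysData/RowData pair per flush:

	import java.io.IOException;
	import java.util.ArrayList;
	import java.util.Collections;
	import java.util.List;

	class SortPacketWriterSketch {
	    private final long memoryLimitBytes;
	    private final List<String> buffer = new ArrayList<String>();
	    private long bufferedBytes = 0;
	    private int fileIndex = 0;

	    SortPacketWriterSketch(long memoryLimitBytes) { this.memoryLimitBytes = memoryLimitBytes; }

	    void add(String row) throws IOException {
	        buffer.add(row);
	        bufferedBytes += estimatedBytes(row);
	        if (bufferedBytes >= memoryLimitBytes) { // stop filling, start writing
	            Collections.sort(buffer);            // sort the packet in memory
	            writeSortedPacket(buffer, fileIndex++);
	            buffer.clear();                      // a new sort process can start
	            bufferedBytes = 0;
	        }
	    }

	    private long estimatedBytes(String row) { return 40 + 2L * row.length(); } // rough guess
	    private void writeSortedPacket(List<String> sorted, int index) throws IOException { }
	}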

Reuse instances as much as possible

Reuse the created instances across sort passes: this limits instance creation, and therefore the CPU time the GC spends freeing and recreating instances.

For practical purposes, only one array per buffer will be needed: it suffices to sort only a limited number of items in the buffer and extract the same number to be written to the file. To avoid instance creation, we can copy the values field by field.
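A sketch of this reuse, with a single preallocated array serving as the sort buffer; the Row shape reuses the field names from the heap-evaluation example further down, and is otherwise an assumption:

	class RowPoolSketch {
	    static class Row { Integer myIntegerValue; String myStringValue; long myLongValue; }

	    private final Row[] buffer;
	    private int size = 0;

	    RowPoolSketch(int capacity) {
	        buffer = new Row[capacity];
	        for (int i = 0; i < capacity; i++) buffer[i] = new Row(); // created once, reused
	    }

	    // Copy values field by field into the next pooled row instead of creating a new Row.
	    boolean add(Integer i, String s, long l) {
	        if (size == buffer.length) return false; // buffer full: caller sorts and flushes
	        Row r = buffer[size++];
	        r.myIntegerValue = i;
	        r.myStringValue = s;
	        r.myLongValue = l;
	        return true;
	    }

	    void reset() { size = 0; } // instances stay allocated for the next packet, no GC churn
	}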

Load an item's values only if the match is validated

During a join, only the keys are needed, so we should avoid loading values while the join does not match. Only when the join matches will we actually load the values (see the sketch under “Lookup files” below).

Limit the data to write into files

Main tMap file

No optimization for this case. Only one file will be needed, because in all cases we will need to load all the data.

Lookup files

For each sorted data packet (during the sort process), we will create two files: a Keys file and a Values file. When joining data, we will read the Keys file to match against the master file. The Keys file will generally be smaller than the Values file, so it will be faster to read. In short, when several consecutive keys do not match, a single skip() on the Values file is enough, which limits skips and loads from the Values file.

Key columns are written only into the Keys file; there is no need to write them into the Values file. Then, when loading an instance, the key columns will be loaded from the Keys file.

We may also be able to pre-load a batch of keys, to read more keys in one I/O operation.
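A sketch of this read pattern over one KeysData/RowData pair; it reuses the VALUES_DATA_SIZE field written after each key, while the key type and the wantsKey() predicate are placeholders:

	import java.io.DataInputStream;
	import java.io.IOException;

	class LookupPairReaderSketch {
	    // Reads keys from the KeysData stream; touches the RowData stream only on a match.
	    static void scan(DataInputStream keys, DataInputStream values) throws IOException {
	        long pendingSkip = 0;
	        while (keys.available() > 0) {
	            String k1 = keys.readUTF();          // key column (a {JAVA_TYPE} in the spec)
	            int valuesDataSize = keys.readInt(); // VALUES_DATA_SIZE
	            if (wantsKey(k1)) {
	                values.skip(pendingSkip);        // one skip() for all the misses so far
	                pendingSkip = 0;
	                byte[] row = new byte[valuesDataSize];
	                values.readFully(row);           // values are loaded only on a match
	                // ... deserialize the row ...
	            } else {
	                pendingSkip += valuesDataSize;   // defer the skip, do not touch RowData
	            }
	        }
	    }

	    private static boolean wantsKey(String k1) { return true; } // placeholder predicate
	}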

Join file

When the join between two rows does not match, only the master file's data will have to be kept, which saves time.

Memory sort algorithm

The in-memory sort algorithm will be improved by using the Fast Quick Sort algorithm (org.talend.designer.components.sort.fastquicksort.FastQSortAlgorithm.java in the “HashTests” project).

Heap memory evaluation:

We can evaluate heap memory at runtime, as the buffer is filled.

The measurement can be done in at least two ways:

  • either by calculating the memory of each instance; this solution should be the most precise, but it may consume processor resources
  • or by measuring the JVM heap memory and comparing it before and during buffer filling; this solution can be imprecise, however, because other components in the same job also consume heap memory (see the sketch below)
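A sketch of the second approach, using the standard Runtime API (the byte-array buffer stands in for the real row buffer):

	class HeapProbeSketch {
	    static long usedHeap() {
	        Runtime rt = Runtime.getRuntime();
	        return rt.totalMemory() - rt.freeMemory();
	    }

	    public static void main(String[] args) {
	        long before = usedHeap();               // used heap before filling the buffer
	        byte[][] buffer = new byte[1000][1024]; // stands in for the row buffer being filled
	        long during = usedHeap();               // used heap during/after filling
	        // Imprecise by nature: other components in the same job allocate too.
	        System.out.println("estimated buffer bytes: " + (during - before)
	                + " for " + buffer.length + " slots");
	    }
	}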

To avoid creating a lot of items, we can reuse already created items by releasing the previous instances of the items' properties. This operation could be done for all items after the current item, as below:

BEFORE ⇒ AFTER

  • Item1 (Integer myIntegerValue = 10; String myStringValue = “test_a”; long myLongValue = 20) ⇒ no modification
  • Item2, the last loaded item (Integer myIntegerValue = 10; String myStringValue = “test_b”; long myLongValue = 20) ⇒ no modification
  • Item3 (Integer myIntegerValue = 10; String myStringValue = “test_c”; long myLongValue = 20) ⇒ Item3 (Integer myIntegerValue = null; String myStringValue = null; long myLongValue = 20)
  • Item4 (Integer myIntegerValue = 15; String myStringValue = “test_d”; long myLongValue = 25) ⇒ Item4 (Integer myIntegerValue = null; String myStringValue = null; long myLongValue = 25)

It is useless to reset primitive values to their defaults: this will not free any memory.
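A sketch of the release step shown above, using the same Row shape (illustrative, not generated code):

	static void releaseAfter(Row[] items, int currentIndex) {
	    for (int i = currentIndex + 1; i < items.length; i++) {
	        items[i].myIntegerValue = null; // frees the boxed Integer
	        items[i].myStringValue = null;  // frees the String
	        // items[i].myLongValue is left as-is: resetting a primitive frees nothing
	    }
	}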

Appendix

Samples of generated classes:

class RowSortContainerMain_M implements ITMapMainRow, Comparable<RowSortContainerMain_M> {
 
	{JAVA_TYPE} expKey_LF1__K1; // computed from M.K1 - M.K2
	{JAVA_TYPE} expKey_LF1__K2; // computed from M.K1 + M.K2
 
	{JAVA_TYPE} M__K1;
	{JAVA_TYPE} M__K2;
	{JAVA_TYPE} M__V;
 
	public byte[] toKeysValuesData() {
 
		DataOutputStream dos = null;
		...
		dos.write{JAVA_TYPE}(expKey_LF1__K1);
		dos.write{JAVA_TYPE}(expKey_LF1__K2);
		...
	}
 
	public void loadRowValuesData(byte[] data) {
 
		DataInputStream dis = null;
		...
		this.M__K1 = dis.read{JAVA_TYPE}();
		this.M__K2 = dis.read{JAVA_TYPE}();
		this.M__V = dis.read{JAVA_TYPE}();
		...
	}
 
/*** check whether these methods are useful at this step (?)
 
         public void loadFrom(MStruct M) {
                M__K1 = M.K1;
                M__K2 = M.K2;
                M__V = M.V;
          }
 
            public void loadInto(MStruct M) {
                M.K1 = M__K1;
                M.K2 = M__K2;
                M.V = M__V;
            }
 
***/
 
	public int compareTo(RowSortContainerMain_M other) {
 
		int returnValue = checkNullsAndCompare(this.expKey_LF1__K1, other.expKey_LF1__K1);
		if(returnValue != 0) {
			return returnValue;
		}
		returnValue = checkNullsAndCompare(this.expKey_LF1__K2, other.expKey_LF1__K2);
		return returnValue;
	}
 
}
 
 
class RowSortContainerLookup_LF1 implements ITMapLookupRow, Comparable<RowSortContainerLookup_LF1> {
 
	{JAVA_TYPE} LF1__K1;
	{JAVA_TYPE} LF1__K2;
	{JAVA_TYPE} LF1__V;
 
	public byte[] toKeysData(int valuesDataSize) {
 
		DataOutputStream dos = null;
		...
		// (simplified view of the write)
		dos.write{JAVA_TYPE}(LF1__K1); 
		dos.write{JAVA_TYPE}(LF1__K2);
 
		dos.writeInt(valuesDataSize);
		...
 
	}
 
	public int loadKeysData(byte[] keysData) {
 
		DataInputStream dis = null;
		...
		this.LF1__K1  = dis.read{JAVA_TYPE}();
		this.LF1__K2 = dis.read{JAVA_TYPE}();
 
		int valuesDataSize = dis.readInt();
		...
 
		return valuesDataSize;
	}
 
	public byte[] toValuesData() {
		DataOutputStream dos = null;
		...
		// (simplified view of the write)
		dos.write{JAVA_TYPE}(LF1__V); 
		...
	}
 
	public void loadValuesData(byte[] valuesData) {
		DataInputStream dis = null;
		...
		this.LF1__V = dis.read{JAVA_TYPE}();
		...
	}
 
/*** check whether these methods are useful at this step (?)
 
         public void loadFrom(LF1Struct LF1) {
                LF1__K1 = LF1.K1;
                LF1__K2 = LF1.K2;
                LF1__V = LF1.V;
          }
 
            public void loadInto(LF1Struct LF1) {
                LF1.K1 = LF1__K1;
                LF1.K2 = LF1__K2;
                LF1.V = LF1__V;
            }
 
***/
 
	public int compareTo(RowSortContainerLookup_LF1 other) {
 
		int returnValue = checkNullsAndCompare(this.LF1__K1, other.LF1__K1);
		if(returnValue != 0) {
			return returnValue;
		}
		returnValue = checkNullsAndCompare(this.LF1__K2, other.LF1__K2);
		return returnValue;
	}
 
}
 
class RowSortContainerLookup_M__LF1 implements ITMapLookupRow, Comparable<RowSortContainerLookup_M__LF1> {
 
	{JAVA_TYPE} expKey_LF2__K1; // computed from LF1.K1 - LF1.K2
	{JAVA_TYPE} expKey_LF2__K2; // computed from LF1.K1 + LF1.K2
 
	{JAVA_TYPE} M__K1;
	{JAVA_TYPE} M__K2;
	{JAVA_TYPE} M__V;
 
	{JAVA_TYPE} LF1__K1;
	{JAVA_TYPE} LF1__K2;
	{JAVA_TYPE} LF1__V;
 
	public byte[] toKeysData(int valuesDataSize) {
 
		DataOutputStream dos = null;
		...
		// (simplified view of the write)
		dos.write{JAVA_TYPE}(expKey_LF2__K1);
		dos.write{JAVA_TYPE}(expKey_LF2__K2);
 
		dos.writeInt(valuesDataSize);
		...
 
	}
 
	public byte[] toValuesData() {
		DataOutputStream dos = null;
		...
		// (simplified view of the write)
		dos.write{JAVA_TYPE}(M__K1); 
		dos.write{JAVA_TYPE}(M__K2); 
		dos.write{JAVA_TYPE}(M__V); 
 
		dos.write{JAVA_TYPE}(LF1__K1);
		dos.write{JAVA_TYPE}(LF1__K2); 
		dos.write{JAVA_TYPE}(LF1__V); 
 
		...
	}
 
/*** check whether these methods are useful at this step (?)
 
         public void loadFrom(MStruct M, LF1Struct LF1) {
                M__K1 = M.K1;
                M__K2 = M.K2;
                M__V = M.V;
 
                LF1__K1 = LF1.K1;
                LF1__K2 = LF1.K2;
                LF1__V = LF1.V;
          }
 
            public void loadInto(MStruct M, LF1Struct LF1) {
                M.K1 = M__K1;
                M.K2 = M__K2;
                M.V = M__V;
 
                LF1.K1 = LF1__K1;
                LF1.K2 = LF1__K2;
                LF1.V = LF1__V;
            }
 
***/
 
	public int compareTo(RowSortContainerLookup_M__LF1 other) {
 
		int returnValue = checkNullsAndCompare(this.expKey_LF2__K1, other.expKey_LF2__K1);
		if(returnValue != 0) {
			return returnValue;
		}
		returnValue = checkNullsAndCompare(this.expKey_LF2__K2, other.expKey_LF2__K2);
		return returnValue;
	}
 
}
 
 
 
class RowSortContainerLookup_LF2 implements ITMapLookupRow, Comparable<RowSortContainerLookup_LF2> {
 
	{JAVA_TYPE} LF2__K1;
	{JAVA_TYPE} LF2__K2;
	{JAVA_TYPE} LF2__V;
 
	public byte[] toKeysData(int valuesDataSize) {
 
		DataOutputStream dos = null;
		...
		// (simplified view of the write)
		dos.write{JAVA_TYPE}(LF2__K1); 
		dos.write{JAVA_TYPE}(LF2__K2);
 
		dos.writeInt(valuesDataSize);
		...
 
	}
 
	public int loadKeysData(byte[] keysData) {
 
		DataInputStream dis = null;
		...
		this.LF2__K1  = dis.read{JAVA_TYPE}();
		this.LF2__K2 = dis.read{JAVA_TYPE}();
 
		int valuesDataSize = dis.readInt();
		...
 
		return valuesDataSize;
	}
 
	public byte[] toValuesData() {
		DataOutputStream dos = null;
		...
		// (simplified view of the write)
		dos.write{JAVA_TYPE}(LF2__V); 
		...
	}
 
	public void loadValuesData(byte[] valuesData) {
		DataInputStream dis = null;
		...
		this.LF2__V = dis.read{JAVA_TYPE}();
		...
	}
 
	public int compareTo(RowSortContainerLookup_LF2 other) {
 
		int returnValue = checkNullsAndCompare(this.LF2__K1, other.LF2__K1);
		if(returnValue != 0) {
			return returnValue;
		}
		returnValue = checkNullsAndCompare(this.LF2__K2, other.LF2__K2);
		return returnValue;
	}
 
}
 
 
 
class RowSortContainerLookup_M__LF1__LF2 implements ITMapLookupRow, Comparable<RowSortContainerLookup_M__LF1__LF2> {
 
	{JAVA_TYPE} expKey_LF2__K1; // key expressions of the last join, kept for the sort
	{JAVA_TYPE} expKey_LF2__K2;
 
	{JAVA_TYPE} M__K1;
	{JAVA_TYPE} M__K2;
	{JAVA_TYPE} M__V;
 
	{JAVA_TYPE} LF1__K1;
	{JAVA_TYPE} LF1__K2;
	{JAVA_TYPE} LF1__V;
 
	{JAVA_TYPE} LF2__K1;
	{JAVA_TYPE} LF2__K2;
	{JAVA_TYPE} LF2__V;
 
	public byte[] toKeysData(int valuesDataSize) {
 
		DataOutputStream dos = null;
		...
		// (simplified view of the write)
		dos.write{JAVA_TYPE}(expKey_LF2__K1);
		dos.write{JAVA_TYPE}(expKey_LF2__K2);
 
		dos.writeInt(valuesDataSize);
		...
 
	}
 
	public byte[] toValuesData() {
		DataOutputStream dos = null;
		...
		// (simplified view of the write)
		dos.write{JAVA_TYPE}(M__K1); 
		dos.write{JAVA_TYPE}(M__K2); 
		dos.write{JAVA_TYPE}(M__V); 
 
		dos.write{JAVA_TYPE}(LF1__K1);
		dos.write{JAVA_TYPE}(LF1__K2);
		dos.write{JAVA_TYPE}(LF1__V);
 
		dos.write{JAVA_TYPE}(LF2__K1);
		dos.write{JAVA_TYPE}(LF2__K2);
		dos.write{JAVA_TYPE}(LF2__V);
 
		...
	}
 
         public void loadFrom(MStruct M, LF1Struct LF1, LF2Struct LF2) {
                M__K1 = M.K1;
                M__K2 = M.K2;
                M__V = M.V;
 
                LF1__K1 = LF1.K1;
                LF1__K2 = LF1.K2;
                LF1__V = LF1.V;
 
                LF2__K1 = LF2.K1;
                LF2__K2 = LF2.K2;
                LF2__V = LF2.V;
          }
 
            public void loadInto(MStruct M, LF1Struct LF1, LF2Struct LF2) {
 
                M.K1 = M__K1;
                M.K2 = M__K2;
                M.V = M__V;
 
                LF1.K1 = LF1__K1;
                LF1.K2 = LF1__K2;
                LF1.V = LF1__V;
 
                LF2.K1 = LF2__K1;
                LF2.K2 = LF2__K2;
                LF2.V = LF2__V;
 
 
            }
 
                /******  OR  *******/
 
            public void loadInto(MStruct M) {
 
                M.K1 = M__K1;
                M.K2 = M__K2;
                M.V = M__V;
 
            }
 
            public void loadInto(LF1Struct LF1) {
 
                LF1.K1 = LF1__K1;
                LF1.K2 = LF1__K2;
                LF1.V = LF1__V;
 
            }
 
            public void loadInto(LF2Struct LF2) {
 
                LF2.K1 = LF2__K1;
                LF2.K2 = LF2__K2;
                LF2.V = LF2__V;
 
            }
 
 
	public int compareTo(RowSortContainerLookup_M__LF1__LF2 other) {
 
		int returnValue = checkNullsAndCompare(this.expKey_LF2__K1, other.expKey_LF2__K1);
		if(returnValue != 0) {
			return returnValue;
		}
		returnValue = checkNullsAndCompare(this.expKey_LF2__K2, other.expKey_LF2__K2);
		return returnValue;
	}
 
}


 