1. We can't control this behavior for persistent data. Starting from V8.5, variable-length data is by default stored as a bounded-length field, i.e. the field is allocated at its maximum length regardless of how much data it actually holds. This improves performance, as DataStage does not need additional processing to find out the actual length of the data. Prior to V8.5, this behavior could be changed by setting the environment variable APT_COMPRESS_BOUNDED_FIELDS=True.
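In practice, APT_* variables like this are set either in the engine's dsenv file or as project/job-level environment variables in the Administrator client. A minimal sketch of the dsenv approach (the install path shown is a common default, assumed here for illustration):

    # In $DSHOME/dsenv (e.g. /opt/IBM/InformationServer/Server/DSEngine/dsenv)
    # Per the note above, this applies only on versions prior to V8.5
    APT_COMPRESS_BOUNDED_FIELDS=True
    export APT_COMPRESS_BOUNDED_FIELDS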
2. For scratch disk memory requirements, however, we can control this behavior and reduce the memory requirement (for example, for a Sort) by setting APT_OLD_BOUNDED_LENGTH=True. This increases CPU utilization (i.e. it impacts performance), since DataStage again has to determine the actual data length, but it reduces the memory usage.
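A minimal sketch of the corresponding setting, plus a back-of-the-envelope estimate of the scratch disk saving for a sort (the row counts and lengths below are hypothetical, purely for illustration):

    # In $DSHOME/dsenv, or as a job parameter of type "environment variable"
    APT_OLD_BOUNDED_LENGTH=True
    export APT_OLD_BOUNDED_LENGTH

    # Example: sorting 10,000,000 rows on a VarChar(4000) column that
    # averages 100 bytes of real data
    #   padded (V8.5 default):        10,000,000 x 4,000 bytes = ~40 GB scratch
    #   with APT_OLD_BOUNDED_LENGTH:  10,000,000 x   100 bytes = ~1 GB scratch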
More details can be found at the IBM Link.
In the case of an Oracle connection, if the length of a varchar field is missing, the Connector stage assumes a length of 4000. Use the environment variable CC_ORA_UNBOUNDED_STRING_LENGTH to change this default length. Either way, the fields are processed as bounded-length fields, whether we give a length, a maximum length, or nothing at all.
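For example, to have the Oracle Connector treat unbounded strings as length 1000 instead of 4000 (the value 1000 here is only an illustration), the variable can be exported in dsenv or added to the job as an environment variable parameter:

    CC_ORA_UNBOUNDED_STRING_LENGTH=1000
    export CC_ORA_UNBOUNDED_STRING_LENGTH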
In the case of a DB2 connection, if the length is missing, the Connector stage assumes a length of 32.