Programmer's Manual

This section provides additional detail on the working of the CUDD package and on the programming conventions followed in its writing. The additional detail should help those who want to write procedures that directly manipulate the CUDD data structures.


Compiling and Linking

If you plan to use the CUDD package as a clear box (for instance, you want to write a procedure that traverses a decision diagram) you need to add

#include "cuddInt.h"
to your source files. In addition, you should link libcudd.a to your executable. Some platforms require specific compiler and linker flags. Refer to the Makefile in the top level directory of the distribution.
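
For instance, a minimal clear-box program may look as follows. The compilation command in the comment is a sketch: the exact paths depend on your installation, and older distributions also split the code over libutil.a, libst.a, libmtr.a, and libepd.a.

    /* minimal.c: build a small BDD and print its size.
     * Possible compilation (adjust paths to your installation):
     *   cc -I<cudd>/include minimal.c <cudd>/libcudd.a -lm
     */
    #include <stdio.h>
    #include "cuddInt.h"

    int
    main(void)
    {
        DdManager *dd = Cudd_Init(0, 0, CUDD_UNIQUE_SLOTS,
                                  CUDD_CACHE_SLOTS, 0);
        DdNode *x = Cudd_bddNewVar(dd);
        DdNode *y = Cudd_bddNewVar(dd);
        DdNode *f = Cudd_bddAnd(dd, x, y);
        if (f == NULL) return 1;
        Cudd_Ref(f);
        (void) printf("f has %d nodes\n", Cudd_DagSize(f));
        Cudd_RecursiveDeref(dd, f);
        Cudd_Quit(dd);
        return 0;
    }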


Reference Counts

Garbage collection in the CUDD package is based on reference counts. Each node stores the sum of the external references and internal references. An internal BDD or ADD node is created by a call to cuddUniqueInter, an internal ZDD node is created by a call to cuddUniqueInterZdd, and a terminal node is created by a call to cuddUniqueConst. If the node returned by these functions is new, its reference count is zero. The function that calls cuddUniqueInter, cuddUniqueInterZdd, or cuddUniqueConst is responsible for increasing the reference count of the node. This is accomplished by calling Cudd_Ref.

When a function is no longer needed by an application, the memory used by its diagram can be recycled by calling Cudd_RecursiveDeref (BDDs and ADDs) or Cudd_RecursiveDerefZdd (ZDDs). These functions decrease the reference count of the node passed to them. If the reference count becomes 0, then two things happen:

  1. The node is declared ``dead;'' this entails increasing the counters of the dead nodes. (One counter for the subtable to which the node belongs, and one global counter for the unique table to which the node belongs.) The node itself is not affected.
  2. The function is recursively called on the two children of the node.
For instance, if the diagram of a function does not share any nodes with other diagrams, then calling Cudd_RecursiveDeref or Cudd_RecursiveDerefZdd on its root will cause all the nodes of the diagram to become dead.
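
The following sketch illustrates this discipline with the exported interface (the variable indices are arbitrary):

    #include "cudd.h"

    /* Build x0 AND x1, use it, then recycle its nodes. */
    void
    useAndRelease(DdManager *dd)
    {
        DdNode *f = Cudd_bddAnd(dd, Cudd_bddIthVar(dd, 0),
                                Cudd_bddIthVar(dd, 1));
        if (f == NULL) return;
        Cudd_Ref(f);                /* claim the reference */
        /* ... use f ... */
        Cudd_RecursiveDeref(dd, f); /* nodes of f may now become dead */
    }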

When the number of dead nodes reaches a given level (dynamically determined by the package) garbage collection takes place. During garbage collection dead nodes are returned to the node free list.

When a new node is created, it is important to increase its reference count before one of the two following events occurs:

  1. A call to cuddUniqueInter, to cuddUniqueInterZdd, to cuddUniqueConst, or to a function that may eventually cause a call to them.
  2. A call to Cudd_RecursiveDeref, to Cudd_RecursiveDerefZdd, or to a function that may eventually cause a call to them.
In practice, it is recommended to increase the reference count as soon as the returned pointer has been tested for not being NULL.
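
As an illustration, a hypothetical internal procedure may follow this pattern (cuddRef is the internal counterpart of Cudd_Ref declared in cuddInt.h):

    #include "cuddInt.h"

    /* Sketch: protect a newly created node before any call that
     * may allocate further nodes or trigger garbage collection. */
    static DdNode *
    buildAndProtect(DdManager *dd, int index, DdNode *t, DdNode *e)
    {
        DdNode *r = cuddUniqueInter(dd, index, t, e); /* t must be regular */
        if (r == NULL) return NULL;   /* test before referencing */
        cuddRef(r);                   /* now safe to make further calls */
        /* ... further recursive calls ... */
        cuddDeref(r); /* see Cudd_RecursiveDeref vs. Cudd_Deref below */
        return r;
    }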


NULL Return Values

The interface to the memory management functions (e.g., malloc) used by CUDD intercepts NULL return values and calls a handler. The default handler exits with an error message. If the application does not install another handler, therefore, a NULL return value from an exported function of CUDD signals an internal error.

If, however, the application installs another handler that lets execution continue, a NULL pointer returned by an exported function typically indicates that the process has run out of memory. Cudd_ReadErrorCode can be used to ascertain the nature of the problem.

An application that tests for a NULL result can try some remedial action when it runs out of memory. For instance, it may free some memory that is not strictly necessary, or switch to a slower algorithm that takes less space. As an example, CUDD overrides the default handler when trying to enlarge the cache or increase the number of slots of the unique table. If the allocation fails, the package prints out a message and continues without resizing the cache.
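
For example, assuming the application has installed a handler that lets execution continue, the check may look like this (the fallback itself is application-specific):

    #include "cudd.h"

    DdNode *
    conjoinOrRecover(DdManager *dd, DdNode *f, DdNode *g)
    {
        DdNode *r = Cudd_bddAnd(dd, f, g);
        if (r == NULL) {
            if (Cudd_ReadErrorCode(dd) == CUDD_MEMORY_OUT) {
                /* free inessential memory and retry, or switch to
                 * a slower algorithm that takes less space */
            }
            return NULL;
        }
        Cudd_Ref(r);
        return r;
    }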


Cudd_RecursiveDeref vs. Cudd_Deref

It is often the case that a recursive procedure has to protect the result it is going to return, while it disposes of intermediate results. (See the previous discussion on when to increase reference counts.) Once the intermediate results have been properly disposed of, the final result must be returned to its pristine state, in which the root node may have a reference count of 0. One cannot use Cudd_RecursiveDeref (or Cudd_RecursiveDerefZdd) for this purpose, because it may erroneously make some nodes dead. Therefore, the package provides a different function: Cudd_Deref. This function is not recursive, and does not change the dead node counts. Its use is almost exclusively the one just described: Decreasing the reference count of the root of the final result before returning from a recursive procedure.
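
A typical epilogue of such a recursive procedure looks as follows (a sketch; t and e stand for already referenced intermediate results):

    #include "cuddInt.h"

    static DdNode *
    epilogue(DdManager *dd, DdNode *res, DdNode *t, DdNode *e)
    {
        cuddRef(res);               /* protect the final result */
        Cudd_RecursiveDeref(dd, t); /* dispose of intermediate results */
        Cudd_RecursiveDeref(dd, e);
        cuddDeref(res);             /* restore pristine state; not recursive */
        return res;
    }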


When Increasing the Reference Count is Unnecessary

When a copy of a predefined constant or of a simple BDD variable is needed for comparison purposes, then calling Cudd_Ref is not necessary, because these simple functions are guaranteed to have reference counts greater than 0 at all times. If no call to Cudd_Ref is made, then no attempt to free the diagram by calling Cudd_RecursiveDeref or Cudd_RecursiveDerefZdd should be made.
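
For instance, a comparison against the constant one needs no call to Cudd_Ref:

    #include "cudd.h"

    /* Test whether f is the constant one; the constant returned by
     * Cudd_ReadOne always has a positive reference count. */
    int
    isOne(DdManager *dd, DdNode *f)
    {
        return f == Cudd_ReadOne(dd);
    }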


Saturating Increments and Decrements

On 32-bit machines, the CUDD package stores the reference counts in unsigned short int's. For large diagrams, it is possible for some reference counts to exceed the capacity of an unsigned short int. Therefore, increments and decrements of reference counts are saturating. This means that once a reference count has reached the maximum possible value, it is no longer changed by calls to Cudd_Ref, Cudd_RecursiveDeref, Cudd_RecursiveDerefZdd, or Cudd_Deref. As a consequence, some nodes that have no references may not be declared dead. This may result in a small waste of memory, which is normally more than offset by the reduction in size of the node structure.

When using 64-bit pointers, there is normally no memory advantage from using short int's instead of int's in a DdNode. Therefore, increments and decrements are not saturating in that case. Which option is in effect depends on two macros, SIZEOF_VOID_P and SIZEOF_INT, defined in the external header file (cudd.h). The increments and decrements of the reference counts are performed by two macros, cuddSatInc and cuddSatDec, whose definitions depend on SIZEOF_VOID_P and SIZEOF_INT.
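
The following stand-alone sketch illustrates the idea of a saturating count; it is not the literal definition found in cuddInt.h.

    #include <stdio.h>

    typedef unsigned short HalfWord;   /* 16-bit count, as on 32-bit builds */
    #define MAXREF ((HalfWord) ~0)
    #define satInc(x) ((x) += ((x) != MAXREF))  /* stick at MAXREF */
    #define satDec(x) ((x) -= ((x) != MAXREF && (x) != 0))

    int
    main(void)
    {
        HalfWord ref = MAXREF - 1;
        satInc(ref);
        satInc(ref);                            /* saturates at MAXREF */
        (void) printf("%u\n", (unsigned) ref);  /* prints 65535 */
        return 0;
    }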


Complement Arcs

If ADDs are restricted to use only the constants 0 and 1, they behave like BDDs without complement arcs. It is normally easier to write code that manipulates 0-1 ADDs than to write code for BDDs. However, complementation is trivial with complement arcs and nontrivial without. As a consequence, with complement arcs it is possible to check for more terminal cases, and De Morgan's laws can be applied to reduce problems that are essentially identical to a standard form. This in turn increases the utilization of the cache.

The complement attribute is stored in the least significant bit of the ``else'' pointer of each node. An external pointer to a function can also be complemented. The ``then'' pointer to a node, on the other hand, is always regular. It is a mistake to use a complemented pointer as-is to address memory; it is always necessary to obtain a regular version of it first, normally by calling Cudd_Regular. It is also a mistake to call cuddUniqueInter with a complemented ``then'' child as argument. The calling procedure must instead apply De Morgan's laws by complementing both pointers passed to cuddUniqueInter and then taking the complement of the result.
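
The pattern, used throughout the package's recursive procedures, looks like this (sketch):

    #include "cuddInt.h"

    static DdNode *
    makeNode(DdManager *dd, int index, DdNode *t, DdNode *e)
    {
        DdNode *r;
        if (Cudd_IsComplement(t)) {
            /* De Morgan: complement both children, then the result. */
            r = cuddUniqueInter(dd, index, Cudd_Not(t), Cudd_Not(e));
            if (r == NULL) return NULL;
            r = Cudd_Not(r);
        } else {
            r = cuddUniqueInter(dd, index, t, e);
        }
        return r;
    }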


The Cache

Each entry of the cache consists of five fields: the operator, three pointers to the operands, and a pointer to the result. The operator and the three pointers to the operands are combined to form three words. The combination relies on two facts:

  1. Nodes are allocated at aligned addresses, so the least significant bits of a pointer to a node are zero and can carry extra information.
  2. The number of distinct operators is small, so an operator can be encoded in those few bits.

The cache does not contribute to the reference counts of the nodes. The fact that the cache contains a pointer to a node does not imply that the node is alive. Instead, when garbage collection takes place, all entries of the cache pointing to dead nodes are cleared.

The cache is also cleared (of all entries) when dynamic reordering takes place. In both cases, the entries removed from the cache are about to become invalid.

All operands and results in a cache entry must be pointers to DdNodes. If a function produces more than one result, or uses more than three arguments, there are currently two solutions: keep a local cache dedicated to that function, or combine the extra operands or results into auxiliary diagrams so that they fit in a standard entry. Support for the former solution is under development. (See cuddLCache.c.) Support for the latter solution may be provided in future versions of the package.

There are three sets of interface functions to the cache. The first set is for functions with three operands: cuddCacheInsert and cuddCacheLookup. The second set is for functions with two operands: cuddCacheInsert2 and cuddCacheLookup2. The third set is for functions with one operand: cuddCacheInsert1 and cuddCacheLookup1. The second set is slightly faster than the first, and the third set is slightly faster than the second.
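
A hypothetical two-operand operator would use the second set as in the following sketch; the recursive function itself serves as the operator tag.

    #include "cuddInt.h"

    DdNode *
    myOpRecur(DdManager *dd, DdNode *f, DdNode *g)
    {
        DdNode *res;
        /* ... terminal cases ... */
        res = cuddCacheLookup2(dd, myOpRecur, f, g);
        if (res != NULL) return res;
        /* ... recursive step computing res, with proper referencing ... */
        res = DD_ONE(dd);  /* placeholder result for the sketch */
        cuddCacheInsert2(dd, myOpRecur, f, g, res);
        return res;
    }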


Cache Sizing

The size of the cache can increase during the execution of an application. (There is currently no way to decrease the size of the cache, though it would not be difficult to do it.) When a cache miss occurs, the package uses the following criteria to decide whether to resize the cache:

  1. If the cache already exceeds the limit given by the maxCache field of the manager, no resizing takes place. The limit is the minimum of two values: a value set at initialization time and possibly modified by the application, which constitutes the hard limit beyond which the cache will never grow; and a number that depends on the current total number of slots in the unique table.
  2. If the cache is not too large already, resizing is decided based on the hit rate. The policy adopted by the CUDD package is ``reward-based.'' If the cache hit rate is high, then it is worthwhile to increase the size of the cache.
When resizing takes place, the statistical counters used to compute the hit rate are reinitialized so as to prevent immediate resizing. The number of entries is doubled.

The rationale for the ``reward-based'' policy is as follows. In many BDD/ADD applications the hit rate is not very sensitive to the size of the cache: It is primarily a function of the problem instance at hand. If a large hit rate is observed, chances are that, with a large cache, the results of large problems (those that would take longer to solve) will survive in the cache, without being overwritten, long enough to cause a valuable cache hit. Notice that when a large problem is solved more than once, so are its recursively generated subproblems. If the hit rate is low, the probability of large problems being solved more than once is low.

The other observation about the cache sizing policy is that there is little point in keeping a cache which is much larger than the unique table. Every time the unique table ``fills up,'' garbage collection is invoked and the cache is cleared of all dead entries. A cache that is much larger than the unique table is therefore less than fully utilized.


Local Caches

Sometimes it may be necessary or convenient to use a local cache. A local cache can be lossless (no results are ever overwritten), or it may store objects for which canonical representations are not available. One important fact to keep in mind when using a local cache is that local caches are not cleared during garbage collection or before reordering. Therefore, it is necessary to increment the reference count of all nodes pointed to by a local cache. (Unless their reference counts are guaranteed positive in some other way. One such way is to include all partial results in the global result.) Before disposing of the local cache, all elements stored in it must be passed to Cudd_RecursiveDeref. Because all results in a local cache are referenced, it is generally convenient to also store the results of trivial problems in the local cache, even though they are not usually stored in the global cache. Otherwise, after a recursive call, it is difficult to tell whether the result is in the cache, and therefore referenced, or not in the cache, and therefore not referenced.

An alternative approach to referencing the results in the local caches is to install hook functions (see Section 3.16) to be executed before garbage collection.
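
A sketch of this alternative: install a hook that runs before garbage collection and flushes (or references) the local cache. The hook body is application-specific.

    #include "cudd.h"

    /* Hypothetical pre-GC hook. */
    static int
    flushLocalCache(DdManager *dd, const char *str, void *data)
    {
        /* dereference and empty the local cache here */
        return 1;  /* nonzero indicates success */
    }

    /* Installation, e.g., right after creating the manager:
     *   Cudd_AddHook(dd, flushLocalCache, CUDD_PRE_GC_HOOK);
     */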


The Unique Table

A recursive procedure typically splits the operands by expanding with respect to the topmost variable. Topmost in this context refers to the variable that is closest to the roots in the current variable order. The nodes, on the other hand, hold the index, which is invariant with reordering. Therefore, when splitting, one must use the permutation array maintained by the package to get the right level. Access to the permutation array is provided by the macro cuddI for BDDs and ADDs, and by the macro cuddIZ for ZDDs.
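
For instance, a two-operand procedure may compute its splitting level as follows (a sketch):

    #include "cuddInt.h"

    /* Level of the topmost variable of f and g under the current order. */
    static int
    topLevel(DdManager *dd, DdNode *f, DdNode *g)
    {
        int levelF = cuddI(dd, Cudd_Regular(f)->index);
        int levelG = cuddI(dd, Cudd_Regular(g)->index);
        return (levelF < levelG) ? levelF : levelG;
    }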

The unique table consists of as many hash tables as there are variables in use. These hash tables are called unique subtables. The sizes of the unique subtables are determined by two criteria:

  1. The collision lists should be short to keep access time down.
  2. There should be enough room for dead nodes, to prevent too frequent garbage collections.
While the first criterion is fairly straightforward to implement, the second leaves more room for creativity. The CUDD package tries to figure out whether more dead nodes should be allowed to increase performance. (See also Section 3.4.) There are two reasons for not doing garbage collection too often. The obvious one is that it is expensive. The second is that dead nodes may be reclaimed, if they are the result of a successful cache lookup. Hence dead nodes may provide a substantial speed-up if they are kept around long enough. The usefulness of keeping many dead nodes around varies from application to application, and from problem instance to problem instance. As in the sizing of the cache, the CUDD package adopts a ``reward-based'' policy to decide how much room should be used for the unique table. If the number of dead nodes reclaimed is large compared to the number of nodes directly requested from the memory manager, then the CUDD package assumes that it will be beneficial to allow more room for the subtables, thereby reducing the frequency of garbage collection. The package does so by switching between two modes of operation:
  1. Fast growth: In this mode, the ratio of dead nodes to total nodes required for garbage collection is higher than in the slow growth mode to favor resizing of the subtables.
  2. Slow growth: In this mode keeping many dead nodes around is not as important as keeping memory requirements low.
Switching from one mode to the other is based on the following criteria:
  1. If the unique table is already large, only slow growth is possible.
  2. If the table is small and many dead nodes are being reclaimed, then fast growth is selected.
This policy is especially effective when the diagrams being manipulated have lots of recombination. Notice the interplay of the cache sizing and unique sizing: Fast growth normally occurs when the cache hit rate is large. The cache and the unique table then grow in concert, preserving a healthy balance between their sizes.


Allowing Asynchronous Reordering

Asynchronous reordering is the reordering that is triggered automatically by the increase of the number of nodes. Asynchronous reordering takes place when a new internal node must be created, and the number of nodes has reached a given threshold. (The threshold is adjusted by the package every time reordering takes place.)

Those procedures that do not create new nodes (e.g., procedures that count the number of nodes or minterms) need not worry about asynchronous reordering: No special precaution is necessary in writing them.

Procedures that only manipulate decision diagrams through the exported functions of the CUDD package also need not concern themselves with asynchronous reordering. (See Section 3.2.1 for the exceptions.)

The remaining class of procedures is composed of functions that visit the diagrams and may create new nodes. All such procedures in the CUDD package are written so that they can be interrupted by dynamic reordering. The general approach followed goes under the name of ``abort and retry.'' As the name implies, a computation that is interrupted by dynamic reordering is aborted and tried again.

A recursive procedure that can be interrupted by dynamic reordering (an interruptible procedure from now on) is composed of two functions. One is responsible for the real computation. The other is a simple wrapper, which tests whether reordering occurred and restarts the computation if it did.

Asynchronous reordering of BDDs and ADDs can only be triggered inside cuddUniqueInter, when a new node is about to be created. Likewise, asynchronous reordering of ZDDs can only be triggered inside cuddUniqueInterZdd. When reordering is triggered, three things happen:

  1. cuddUniqueInter returns a NULL value.
  2. The flag reordered of the manager is set to 1. (0 means no reordering, while 2 indicates an error occurred during reordering.)
  3. The counter reorderings of the manager is incremented. The counter is initialized to 0 when the manager is started and can be accessed by calling Cudd_ReadReorderings. By taking two readings of the counter, an application can determine if variable reordering has taken place between the first and the second reading. The package itself, however, does not make use of the counter: It is mentioned here for completeness.

The recursive procedure that receives a NULL value from cuddUniqueInter must free all intermediate results that it may have computed before, and return NULL in its turn.

The wrapper function does not decide whether reordering occurred based on the NULL return value, because the NULL value may be the result of lack of memory. Instead, it checks the reordered flag.
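
The wrapper for a hypothetical interruptible operator therefore looks like this (sketch; myOpRecur is the function doing the real computation):

    #include "cuddInt.h"

    DdNode *myOpRecur(DdManager *dd, DdNode *f, DdNode *g);

    DdNode *
    MyOp(DdManager *dd, DdNode *f, DdNode *g)
    {
        DdNode *res;
        do {
            dd->reordered = 0;        /* clear the flag */
            res = myOpRecur(dd, f, g);
        } while (dd->reordered == 1); /* retry if reordering occurred */
        return res;
    }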

When a recursive procedure calls another recursive procedure that may cause reordering, it should bypass the wrapper and call the recursive procedure directly. Otherwise, the calling procedure will not know whether reordering occurred, and will not be able to restart. This is the main reason why most recursive procedures are internal, rather than static. (The wrappers, on the other hand, are mostly exported.)


Debugging

By defining the symbol DD_DEBUG during compilation, numerous checks are added to the code. In addition, the procedures Cudd_DebugCheck, Cudd_CheckKeys, and cuddHeapProfile can be called at any point to verify the consistency of the data structure. (cuddHeapProfile is an internal procedure. It is declared in cuddInt.h.) Procedures Cudd_DebugCheck and Cudd_CheckKeys are especially useful when CUDD reports that during garbage collection the number of nodes actually deleted from the unique table differs from the count of dead nodes kept by the manager. The error causing the discrepancy may have occurred much earlier than it is discovered. A few strategically placed calls to the debugging procedures can considerably narrow down the search for the source of the problem. (For instance, a call to Cudd_RecursiveDeref where one to Cudd_Deref was required may be identified in this way.)
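
Such a strategically placed call may look like this (a sketch; both checks return 0 when no problem is found):

    #include <stdio.h>
    #include "cudd.h"

    void
    checkConsistency(DdManager *dd)
    {
        if (Cudd_DebugCheck(dd) != 0 || Cudd_CheckKeys(dd) != 0)
            (void) fprintf(stderr, "CUDD consistency check failed\n");
    }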

One of the most common problems encountered in debugging code based on the CUDD package is a missing call to Cudd_RecursiveDeref. To help identify this type of problem, the package provides a function called Cudd_CheckZeroRef. This function should be called immediately before shutting down the manager. Cudd_CheckZeroRef checks that the only nodes left with non-zero reference counts are the predefined constants, the BDD projection functions, and nodes whose reference counts are saturated.

For this function to be effective the application must explicitly dispose of all diagrams to which it has pointers before calling it.
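
A possible shutdown sequence (a sketch; f stands for the last diagram held by the application):

    #include <stdio.h>
    #include "cudd.h"

    void
    shutDown(DdManager *dd, DdNode *f)
    {
        int leaked;
        Cudd_RecursiveDeref(dd, f);     /* dispose of all diagrams first */
        leaked = Cudd_CheckZeroRef(dd); /* should report 0 */
        if (leaked != 0)
            (void) fprintf(stderr, "%d unexpected non-zero reference counts\n",
                           leaked);
        Cudd_Quit(dd);
    }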


Gathering and Interpreting Statistics

Function Cudd_PrintInfo can be called to print out the values of parameters and statistics for a manager. The output of Cudd_PrintInfo is divided into two sections. The first reports the values of parameters that are under the application's control. The second reports the values of statistical counters and other non-modifiable parameters. A quick guide to the interpretation of all these quantities follows. For ease of exposition, we reverse the order and describe the non-modifiable parameters first. We'll use a sample run as an example. There is nothing special about this run.
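
The call itself is simple (a minimal sketch):

    #include <stdio.h>
    #include "cudd.h"

    /* Print parameters and statistics of manager dd to standard output. */
    void
    report(DdManager *dd)
    {
        (void) Cudd_PrintInfo(dd, stdout);
    }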


Non-Modifiable Parameters

The list of non-modifiable parameters starts with:

    **** CUDD non-modifiable parameters ****
    Memory in use: 32544220
This is the memory used by CUDD, mainly for three things: the unique table (including all DD nodes in use), the node free list, and the computed table. This number almost never decreases in the lifetime of a CUDD manager, because CUDD does not release memory when it frees nodes. Rather, it puts the nodes on its own free list. This number is in bytes. It does not represent the peak memory occupation, because it does not include the size of data structures created temporarily by some functions (e.g., local look-up tables).

    Peak number of nodes: 837018
This number is the number of nodes that the manager has allocated. This is not the largest size of the BDDs, because the manager will normally have some dead nodes and some nodes on the free list.

    Peak number of live nodes: 836894
This is the largest number of live nodes that the manager has held since its creation.

    Number of BDD variables: 198
    Number of ZDD variables: 0
These numbers tell us this run was not using ZDDs.

    Number of cache entries: 1048576
Current number of slots of the computed table. If one has a performance problem, this is one of the numbers to look at. The cache size is always a power of 2.

    Number of cache look-ups: 2996536
    Number of cache hits: 1187087
These numbers give an indication of the hit rate in the computed table. It is not unlikely for model checking runs to get hit rates even higher than this one (39.62%).

    Number of cache insertions: 1809473
    Number of cache collisions: 961208
    Number of cache deletions: 0
A collision occurs when a cache entry is overwritten. A deletion occurs when a cache entry is invalidated (e.g., during garbage collection). If the number of deletions is high compared to the number of collisions, it means that garbage collection occurs too often. In this case there were no garbage collections; hence, no deletions.

    Cache used slots = 80.90% (expected 82.19%)
Percentage of cache slots that contain a valid entry. If this number is small, it may signal one of three conditions:
  1. The cache may have been recently resized and it is still filling up.
  2. The cache is too large for the BDDs. This should not happen if the size of the cache is determined by CUDD.
  3. The hash function is not working properly. This is accompanied by a degradation in performance. Conversely, a degradation in performance may be due to bad hash function behavior.
The expected value is computed assuming a uniformly random distribution of the accesses. If the difference between the measured value and the expected value is large (unlike this case), the cache is not working properly.

    Soft limit for cache size: 1318912
This number says how large the cache can grow. This limit is based on the size of the unique table. CUDD uses a reward-based policy for growing the cache. (See Section 4.4.1.) The default hit rate for resizing is 30% and the value in effect is reported among the modifiable parameters.

    Number of buckets in unique table: 329728
This number is exactly one quarter of the one above. This is indeed how the soft limit is determined currently, unless the computed table hits the specified hard limit. (See below.)

    Used buckets in unique table: 87.96% (expected 87.93%)
Percentage of unique table buckets that contain at least one node. Remarks analogous to those made about the used cache slots apply.

    Number of BDD and ADD nodes: 836894
    Number of ZDD nodes: 0
How many nodes are currently in the unique table, either alive or dead.

    Number of dead BDD and ADD nodes: 0
    Number of dead ZDD nodes: 0
Subtract these numbers from those above to get the number of live nodes. In this case there are no dead nodes because the application uses delayed dereferencing (Cudd_DelayedDerefBdd).

    Total number of nodes allocated: 836894
This is the total number of nodes that were requested and obtained from the free list. It never decreases, and is not an indication of memory occupation after the first garbage collection. Rather, it is a measure of the package activity.

    Total number of nodes reclaimed: 0
These are the nodes that were resuscitated from the dead. If they are many more than the allocated nodes, and the total number of slots is low relative to the number of nodes, then one may want to increase the limit for fast unique table growth. In this case, the number is 0 because of delayed dereferencing.

    Garbage collections so far: 0
    Time for garbage collections: 0.00 sec
    Reorderings so far: 0
    Time for reordering: 0.00 sec
There is a garbage collection for each reordering. Hence the first count (garbage collections) will always be at least as large as the second (reorderings).

    Node swaps in reordering: 0
This is the number of elementary reordering steps. Each step consists of the re-expression of one node while swapping two adjacent variables. This number is a good measure of the amount of work done in reordering.


Modifiable Parameters

Let us now consider the modifiable parameters, that is, those settings on which the application or the user has control.

    **** CUDD modifiable parameters ****
    Hard limit for cache size: 8388608
This number counts entries. Each entry is 16 bytes if CUDD is compiled to use 32-bit pointers. Two important observations are in order:
  1. If the datasize limit is set, CUDD will use it to determine this number automatically. On a Unix system, one can type ``limit'' or ``ulimit'' to verify if this value is set. If the datasize limit is not set, CUDD uses a default which is rather small. If you have enough memory (say 64MB or more) you should seriously consider not using the default. So, either set the datasize limit, or override the default with Cudd_SetMaxCacheHard.
  2. If a process seems to be going nowhere, a small value for this parameter may be the culprit. One cannot overemphasize the importance of the computed table in BDD algorithms.
In this case the limit was automatically set for a target maximum memory occupation of 104 MB.

    Cache hit threshold for resizing: 15%
This number can be changed if one suspects performance is hindered by the small size of the cache, and the cache is not growing towards the soft limit sufficiently fast. In such a case one can change the default 30% to 15% (as in this case) or even 1%.
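
Both the hard limit and the hit-rate threshold can be set through the exported interface; the values below are merely illustrative:

    #include "cudd.h"

    void
    tuneCache(DdManager *dd)
    {
        Cudd_SetMaxCacheHard(dd, 8388608); /* hard limit, in entries */
        Cudd_SetMinHit(dd, 15);            /* resize at 15% hit rate */
    }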

    Garbage collection enabled: yes
One can disable it, but there are few good reasons for doing so. It is normally preferable to raise the limit for fast unique table growth. (See below.)

    Limit for fast unique table growth: 1363148
See Section 4.5 and the comments above about reclaimed nodes and hard limit for the cache size. This value was chosen automatically by CUDD for a datasize limit of 1 GB.

    Maximum number of variables sifted per reordering: 1000
    Maximum number of variable swaps per reordering: 2000000
    Maximum growth while sifting a variable: 1.2
Lowering these numbers will cause reordering to be less accurate and faster. Results are somewhat unpredictable, because larger BDDs after one reordering do not necessarily mean the process will go faster or slower.

    Dynamic reordering of BDDs enabled: yes
    Default BDD reordering method: 4
    Dynamic reordering of ZDDs enabled: no
    Default ZDD reordering method: 4
These lines tell whether automatic reordering can take place and what method would be used. The mapping from numbers to methods is in cudd.h. One may want to try different BDD reordering methods. If variable groups are used, however, one should not expect to see big differences, because CUDD uses the reported method only to reorder each leaf variable group (typically corresponding to present and next state variables). For the relative order of the groups, it always uses the same algorithm, which is effectively sifting.

As for enabling dynamic reordering or not, a sensible recommendation is the following: Unless the circuit is rather small or one has a pretty good idea of what the order should be, reordering should be enabled.
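
Enabling it takes one call; sifting is one reasonable choice of method:

    #include "cudd.h"

    void
    enableReordering(DdManager *dd)
    {
        Cudd_AutodynEnable(dd, CUDD_REORDER_SIFT);
    }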

    Realignment of ZDDs to BDDs enabled: no
    Realignment of BDDs to ZDDs enabled: no
    Dead nodes counted in triggering reordering: no
    Group checking criterion: 7
    Recombination threshold: 0
    Symmetry violation threshold: 0
    Arc violation threshold: 0
    GA population size: 0
    Number of crossovers for GA: 0
Parameters for reordering. See the documentation of the functions used to control these parameters for the details.

    Next reordering threshold: 100000
When the number of nodes crosses this threshold, reordering will be triggered. (If enabled; in this case it is not.) This parameter is updated by the package whenever reordering takes place. The application can change it, for instance at start-up. Another possibility is to use a hook function (see Section 3.16) to override the default updating policy.
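
For instance, an application may raise the threshold at start-up (the value is illustrative):

    #include "cudd.h"

    void
    raiseThreshold(DdManager *dd)
    {
        Cudd_SetNextReordering(dd, 200000); /* nodes before next reordering */
    }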


Extended Statistics and Reporting

The following symbols can be defined during compilation to increase the amount of statistics gathered and the number of messages produced by the package: DD_STATS, DD_CACHE_PROFILE, DD_UNIQUE_PROFILE, and DD_VERBOSE.

Defining DD_CACHE_PROFILE causes each entry of the cache to include an access counter, which is used to compute simple statistics on the distribution of the keys.


Guidelines for Documentation

The documentation of the CUDD functions is extracted automatically from the sources by Stephen Edwards's extdoc. (The Ext system is available via anonymous FTP from ic.eecs.berkeley.edu.) The following guidelines are followed in CUDD to ensure consistent and effective use of automatic extraction. It is recommended that extensions to CUDD follow the same documentation guidelines.


Fabio Somenzi 2012-02-04