The program generates random instances of Constraint Satisfaction Problems (CSPs) that meet a set of specified parameters, such as the number of variables, domain size, constraint density, and constraint tightness. It can also generate any combination of binary, ternary, and/or quaternary constraints, specified as percentages of the total number of constraints in the problem.
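As an illustration of how such a generator can work, the sketch below builds a random binary CSP from the parameters named above. The function name, parameter names, and the convention that tightness is the fraction of *forbidden* value pairs are assumptions for this sketch, not the tool's actual interface.

```python
import itertools
import random

def random_binary_csp(num_vars, domain_size, density, tightness, seed=None):
    """Sketch of a random binary CSP generator (illustrative, not the tool's API).

    density:   fraction of the n*(n-1)/2 possible variable pairs that are constrained
    tightness: fraction of value pairs forbidden by each constraint
    """
    rng = random.Random(seed)
    variables = list(range(num_vars))
    domain = list(range(domain_size))

    # Pick which variable pairs get a constraint, according to the density.
    all_pairs = list(itertools.combinations(variables, 2))
    scopes = rng.sample(all_pairs, round(density * len(all_pairs)))

    # For each constrained pair, forbid a fixed fraction of value combinations.
    value_pairs = list(itertools.product(domain, repeat=2))
    num_forbidden = round(tightness * len(value_pairs))
    constraints = {scope: set(rng.sample(value_pairs, num_forbidden))
                   for scope in scopes}
    return variables, domain, constraints

vars_, dom, cons = random_binary_csp(10, 4, 0.5, 0.3, seed=1)
```

A ternary or quaternary constraint would be generated the same way, with scopes drawn from triples or quadruples of variables and forbidden tuples drawn from the corresponding product of the domain.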

We have developed two algorithms for solving the following combinatorial tasks:

• Given a finite set S and a natural number k, find all subsets of S of size k. In the literature, this problem is known as generating the k-subsets, or k-combinations, of S.

• Given two natural numbers k, n where k ≤ n, find all k-compositions of n, where a k-composition is an ordered sequence of k nonzero natural numbers whose sum is n. Note that in the literature, a k-composition of n can have null numbers; further, some authors require only that the sum of the k numbers be less than or equal to n.

Both algorithms are based on building an intermediary tree data structure. Using similar tree structures for generating various combinatorial objects under constraints is a “reasonably standard approach” [Hartke 2010]. Algorithms for k-combinations and k-compositions exist in the literature. For example, Wilf [1989] discusses combinatorial Gray codes and attributes an algorithm for k-compositions to Knuth. Ruskey [1993] shows a bijection between the compositions of Knuth and the combinations of Eades and McKay [1984]. Pseudocode for these combinatorial problems is reported in Google1, Sections 4.3 and 5.7 of [Ruskey 2010], and [Arndt 2010a; 2010b]. The goal of this document is to report the pseudocode of the algorithms implemented in our software [Karakashian 2010].
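For reference, both tasks can be stated compactly as plain recursive enumerations. The sketch below is not the tree-based algorithm described above; it is only a minimal, self-contained statement of the two problems, with k-compositions restricted to nonzero parts as defined here.

```python
def k_subsets(s, k):
    """All size-k subsets of list s, in lexicographic order of positions."""
    if k == 0:
        return [[]]
    if len(s) < k:
        return []
    head, rest = s[0], s[1:]
    # Either the first element is in the subset, or it is not.
    with_head = [[head] + sub for sub in k_subsets(rest, k - 1)]
    return with_head + k_subsets(rest, k)

def k_compositions(n, k):
    """All ordered sequences of k nonzero natural numbers summing to n."""
    if k == 1:
        return [[n]] if n >= 1 else []
    result = []
    # The first part leaves at least 1 for each of the remaining k-1 parts.
    for first in range(1, n - k + 2):
        for tail in k_compositions(n - first, k - 1):
            result.append([first] + tail)
    return result
```

For example, `k_compositions(4, 2)` enumerates [1,3], [2,2], [3,1], matching the definition in which order matters and zero parts are excluded.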

The main challenge facing the design and implementation of such a searchable file system is how to update file indices in \emph{real-time} in a scalable way so as to obtain accurate file-search results. Updating file indices in hierarchical file systems or in existing file-search solutions usually induces performance bottlenecks and limits scalability. We therefore propose a lightweight, scalable metadata organization, \emph{Propeller}, for future searchable file systems. Propeller partitions the namespace according to file-access patterns, which exposes massive parallelism for emerging manycore architectures and provides versatile system-level file-search functionalities to support future searchable file systems. The extensive evaluation of our \emph{Propeller} prototype shows that it achieves significantly better file-indexing and file-search performance than a database-based solution (MySQL), and incurs only negligible overhead on normal file I/O operations in a state-of-the-art file system (Ext4).
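To make the partitioning idea concrete, here is a toy sketch of a file index split into independent per-partition inverted indices. Assigning files to partitions by a hash of an "access group" label is a stand-in for Propeller's access-pattern-based partitioning; the class, its methods, and the partitioning policy are all assumptions for illustration, not the system's actual design.

```python
from collections import defaultdict

class PartitionedFileIndex:
    """Toy sketch: a file index split into independent partitions.

    Each partition holds its own inverted index (attribute -> file paths),
    so disjoint partitions could be updated by separate threads or cores.
    """
    def __init__(self, num_partitions=4):
        self.partitions = [defaultdict(set) for _ in range(num_partitions)]

    def _partition_of(self, access_group):
        # Stand-in for access-pattern-based placement.
        return hash(access_group) % len(self.partitions)

    def index_file(self, path, access_group, attributes):
        part = self.partitions[self._partition_of(access_group)]
        for attr in attributes:   # e.g. extension, owner, keywords
            part[attr].add(path)

    def search(self, attribute):
        # Each partition could be scanned in parallel; serial here for brevity.
        hits = set()
        for part in self.partitions:
            hits |= part.get(attribute, set())
        return hits

idx = PartitionedFileIndex()
idx.index_file("/home/a/report.pdf", "proj-a", ["pdf", "owner:alice"])
idx.index_file("/home/b/notes.txt", "proj-b", ["txt", "owner:bob"])
```

The point of the sketch is the structural one made in the abstract: because updates touch only one partition, index maintenance can proceed in parallel with normal file I/O rather than serializing on a single global index.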

We present a mixed integer quadratic programming (MIQP) formulation of the problem to find the optimal value of the total network cost. We also present an efficient heuristic that approximates the solution in polynomial time.

The experimental results show good performance of the heuristic. In the experiments with 10-node network topologies, the total network cost computed by the heuristic is within 2% to 21% of its optimal value; for 51% of these experiments, it is within 8% of the optimal value. We also discuss the insights gained from our experiments.

This paper (i) highlights the observation of cache set-level non-uniformity of capacity demand, and (ii) presents a novel L2 cache design, named SNUG (Set-level Non-Uniformity identifier and Grouper), that exploits this fine-grained non-uniformity to further enhance the effectiveness of cooperative caching. By utilizing a per-set shadow tag array and saturating counter, SNUG can identify whether a set should spill or receive blocks; by using an index-bit flipping scheme, SNUG can group peer sets for spilling and receiving in a flexible way, capturing more opportunities for cooperative caching. We evaluate our design through extensive execution-driven simulations of quad-core CMP systems. Our results show that for 6 classes of workload combinations our SNUG cache can improve CMP throughput by up to 22.3%, with an average of 13.9%, over the baseline configuration, while the state-of-the-art DSR scheme achieves improvements of only up to 14.5%, and 8.4% on average.
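The two mechanisms named above can be sketched in a few lines. A saturating counter tracks each set's capacity demand, and index-bit flipping pairs set i with set i XOR 2^b for some index bit b, so a spilling set and a receiving set form a peer group. Which bit is flipped, and the counter width, are assumptions in this sketch; the paper's exact scheme may differ.

```python
class SaturatingCounter:
    """2-bit saturating counter, e.g. for classifying a set's capacity demand."""
    def __init__(self, maximum=3):
        self.value = 0
        self.maximum = maximum

    def up(self):
        self.value = min(self.value + 1, self.maximum)

    def down(self):
        self.value = max(self.value - 1, 0)

def peer_set(index, num_index_bits, flip_bit=None):
    """Pair cache sets by flipping one index bit (SNUG-style grouping).

    Flipping the top bit (the assumed default here) pairs set i with
    set i ^ 2^(b-1); the pairing is symmetric, so a spiller and its
    receiver always point at each other.
    """
    if flip_bit is None:
        flip_bit = num_index_bits - 1
    return index ^ (1 << flip_bit)
```

For a 16-set cache (4 index bits), `peer_set(0, 4)` pairs set 0 with set 8 and vice versa; choosing a different `flip_bit` regroups the pairs without any extra mapping table, which is what makes the scheme cheap in hardware.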
