Turbo Constraint Solver
hybrid_dive_and_solve.hpp File Reference
#include "common_solving.hpp"
#include <mutex>
#include <thread>
#include <chrono>


Functions

void hybrid_dive_and_solve (const Configuration< battery::standard_allocator > &config)
 

Function Documentation

◆ hybrid_dive_and_solve()

void hybrid_dive_and_solve ( const Configuration< battery::standard_allocator > & config)

"Dive and solve" is a new algorithm to parallelize the standard "propagate and search" algorithm of constraint programming. Given a depth d, we create 2^d subproblems that we solve in parallel. We create as many CPU threads as blocks on the GPU (option config.or_nodes). A CPU thread takes the next subproblem available and run the "propagate and search" algorithm on it. The CPU thread offloads the propagation to the GPU, but take care of splitting and backtracking in the search tree, as well as maintaining the best bound found, and the statistics. Therefore, a kernel with only 1 block is executed each time we propagate a node. Since many CPU threads execute in parallel, there are many kernels running in parallel.

We call a task solving a subproblem a "cube" (loosely following the "cube and conquer" terminology of SAT solving). By CPU cube, we refer to the local state of a CPU thread solving a subproblem. By GPU cube, we refer to the local state of a GPU block solving a subproblem. Note that at any moment, at most config.or_nodes cubes are active in parallel. This function is the entry point: we preprocess the problem, create the threads solving the subproblems, wait for their completion or an interruption, and finally merge and print the statistics.
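
As a complement, the following sketch shows one way the best bound and the statistics could be shared and merged across cubes with a mutex. The Shared, Statistics and finish_cube names are hypothetical and do not correspond to Turbo's actual data structures; the sketch only illustrates the "maintain the best bound, then merge and print the statistics" step described above.

#include <cstdio>
#include <limits>
#include <mutex>
#include <thread>
#include <vector>

struct Statistics {  // hypothetical statistics record.
  long nodes = 0;
  void meet(const Statistics& o) { nodes += o.nodes; }  // merge counters from another cube.
};

struct Shared {  // state shared by all CPU cubes.
  std::mutex lock;
  int best_bound = std::numeric_limits<int>::max();  // best objective bound found so far.
  Statistics stats;
};

// Called by a CPU cube when it finishes its subproblem: update the global best
// bound and merge the local statistics under the mutex.
void finish_cube(Shared& shared, int local_bound, const Statistics& local_stats) {
  std::lock_guard<std::mutex> guard(shared.lock);
  if (local_bound < shared.best_bound) shared.best_bound = local_bound;
  shared.stats.meet(local_stats);
}

int main() {
  Shared shared;
  std::vector<std::thread> cubes;
  for (int t = 0; t < 4; ++t)
    cubes.emplace_back([&shared, t] {
      Statistics local; local.nodes = 1;    // pretend each cube explored one node
      finish_cube(shared, 100 - t, local);  // and found an objective value of 100 - t.
    });
  for (auto& c : cubes) c.join();
  std::printf("best bound: %d, nodes: %ld\n", shared.best_bound, shared.stats.nodes);
}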