N-Queens

Number of failures per model:

n     # sols    r         rc1       rc2       rc3       alldiff
8     92        853       449       493       844       284
9     352       3,900     1,921     2,005     3,613     1,115
10    724       18,657    9,702     9,969     16,074    5,043
12    14,200    459,470   239,556   235,191   428,376   108,980
Number of failures for the symmetry-breaking model (# sols now counts only non-symmetric solutions):

n     # sols    alldiffsym
8     12        90
9     46        264
10    92        881
12    1,787     16,510
  1. What is happening when going r → rc1 → alldiff? Why? From r to rc1 we define a combined model, composed of two dual views of the same problem: one from the perspective of the rows and one from the perspective of the columns. These views have the same set of solutions, and combining them enhances constraint propagation thanks to the channeling constraints introduced to keep the row and column variables consistent with each other. This results in noticeably fewer failures than the r model. The alldiff model instead uses the “alldifferent” global constraint. Being a global constraint, it embeds specialized propagation that exploits the substructure of the problem and infers far more than the basic pairwise propagation. In this case, alldifferent restricts the number of times each value can be taken, which strengthens propagation compared to the r and rc1 models, which do not take advantage of this constraint. (Both steps are sketched below.)
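
A minimal MiniZinc sketch of the two steps, assuming rows[c] holds the row of the queen in column c and cols[r] the column of the queen in row r; the identifiers and the exact diagonal encoding are illustrative assumptions, not necessarily those of the graded models:

```minizinc
include "inverse.mzn";
include "alldifferent.mzn";

int: n = 8;
array [1..n] of var 1..n: rows;  % rows[c] = row of the queen in column c
array [1..n] of var 1..n: cols;  % cols[r] = column of the queen in row r

% rc1: channeling constraint that keeps the two dual views consistent
% (rows[c] = r holds exactly when cols[r] = c).
constraint inverse(rows, cols);

% alldiff: the diagonal attacks expressed with the global constraint;
% its specialized propagator prunes more than pairwise disequalities.
constraint alldifferent([rows[c] + c | c in 1..n]);  % one diagonal direction
constraint alldifferent([rows[c] - c | c in 1..n]);  % the other direction

solve satisfy;
```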

  2. What is happening when going rc1 → rc2 → rc3? Why? We are removing implied constraints, i.e. constraints that are semantically redundant. Each removed constraint is implied by the channeling constraint inverse(rows, cols) (call it C1):
    • alldifferent([X_1, ..., X_n]) is redundant because C1 already imposes that all values of X are different: the channeling maps indices one-to-one between the two arrays, so if two entries X_i and X_j shared the same value v, then Y_v would have to equal both i and j, which is impossible.
    • alldifferent([Y_1, ..., Y_n]) is redundant for the same reason, with the roles of the two arrays swapped.
    • the diagonal constraints on one view are redundant because C1 imposes them implicitly through symmetry: since the X board already forbids diagonal attacks, the Y board, which by C1 is the same board inverted, has to abide by the same rule; if diagonal attacks are impossible on one of the two boards, they are impossible on both.
    These constraints do not change the solution set, but they strengthen propagation and so narrow the part of the search space the solver actually visits; removing them widens it, which is why the failure counts grow, as the table shows most clearly for rc3. (The sketch after this item marks the implied constraints explicitly.)
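
A sketch of the full dual model with the implied constraints marked (here X = rows and Y = cols); exactly which constraints rc2 and rc3 drop is an assumption made for illustration:

```minizinc
include "inverse.mzn";
include "alldifferent.mzn";

int: n = 8;
array [1..n] of var 1..n: rows;
array [1..n] of var 1..n: cols;

constraint inverse(rows, cols);  % C1: rows[c] = r  <->  cols[r] = c

% Implied by C1: if rows[i] = rows[j] = v with i != j,
% then cols[v] would have to be both i and j.
% constraint alldifferent(rows);
% constraint alldifferent(cols);

% Diagonal constraints on the rows view:
constraint alldifferent([rows[c] + c | c in 1..n]);
constraint alldifferent([rows[c] - c | c in 1..n]);

% Implied by C1 plus the two constraints above: the cols view is the
% inverted board, so a diagonal attack there would also be one here.
% constraint alldifferent([cols[r] + r | r in 1..n]);
% constraint alldifferent([cols[r] - r | r in 1..n]);

solve satisfy;
```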

  3. What is happening when going alldiff → alldiffsym? Why? Because we are adding a symmetry-breaking constraint, the solutions reported for the alldiffsym model exclude many of the solutions found by the alldiff model: every missing solution is a symmetric variant (a rotation or reflection of the board) of one of the solutions that alldiffsym does report. By ruling out symmetric duplicates we narrow the search space considerably, which is why we observe noticeably fewer failures. (One such constraint is sketched below.)
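
A minimal sketch of one symmetry-breaking constraint in the same style; this breaks only one of the eight board symmetries (the left-right flip), while the alldiffsym figures above (12 solutions for n = 8, i.e. roughly 92/8) suggest the graded model breaks all of them, typically with one lex_lesseq per symmetry:

```minizinc
include "lex_lesseq.mzn";

% Accept a solution only if it is lexicographically no greater than its
% mirror image obtained by reversing the order of the columns.
constraint lex_lesseq(rows, [rows[n - c + 1] | c in 1..n]);
```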

PROF NOTES

  • It is not clear why “combining models enhances constraint propagation thanks to the channeling constraints.” I can ask this during the oral exam.
    • Channeling constraints link together the different views of the solution space (the row, column, and diagonal variables).
    • By channeling these variables (row, column, diagonal), we make sure that assigning values to one set of variables (e.g., rows) automatically informs or restricts the values of the others (e.g., columns or diagonals), helping propagate constraints more effectively. (A concrete propagation trace is sketched below.)
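
A concrete instance of that propagation, reusing the rows/cols naming assumed above: fixing a single variable in one view lets the channeling fix its dual counterpart by propagation alone, with no extra branching.

```minizinc
include "inverse.mzn";

int: n = 8;
array [1..n] of var 1..n: rows;
array [1..n] of var 1..n: cols;

constraint inverse(rows, cols);

% Fixing rows[3] = 5 immediately propagates cols[5] = 3, and removes the
% value 5 from the domain of every other rows[c] (and 3 from every cols[r]).
constraint rows[3] = 5;

solve satisfy;
```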

Sequence

n       Base                     Base+Implied
        Fails    Time            Fails    Time
500     652      29s 979msec     495      11s 401msec
1,000   1,362    2m 17s          995      49s 409msec
  1. What is happening when going base → base+implied? Why? The implied constraints cut dead-end branches earlier than the Base model does, so the search space actually explored stays progressively smaller; the Base model descends deeper into branches that contain no solution, costing more failures and more solving time. Implied constraints are semantically redundant, but they are good constraints to have because they help the solver compute solutions faster: it can prune the roads that lead nowhere sooner than it could without them. (A hedged sketch of such a model follows below.)
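
What Base vs Base+Implied could look like, assuming the Sequence problem here is the classic magic series model (x[i] counts how often the value i occurs in x); the two sums are the textbook implied constraints for that model and may differ from the graded one:

```minizinc
int: n = 500;
array [0..n-1] of var 0..n: x;

% Base model: x[i] is the number of occurrences of the value i in x.
constraint forall (i in 0..n-1) (
  x[i] = sum (j in 0..n-1) (bool2int(x[j] = i))
);

% Implied constraints: redundant with the base model, but they expose
% dead branches to the solver much earlier.
constraint sum (x) = n;                         % n positions, each holding one value
constraint sum (i in 0..n-1) (i * x[i]) = n;    % total occurrences counted by value

solve satisfy;
```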