In some scenarios, all the features in our dataset will be 1/0 dummies (1 flagging the presence of an attribute, and 0 its absence). These can, of course, be accommodated by any of the usual regression or classification techniques. The problem is that we wouldn't be truly analyzing the interactions between them unless we added every possible interaction term, which is usually a tedious task, and the number of combinations to add can be enormous.
The relevant question in these cases is whether the presence of an attribute, in conjunction with the presence or absence of other attributes, causes an effect. Logic regression can be used in both regression and classification models, and its objective is to find the best possible sets of Boolean interactions: those that yield the highest accuracy (for classification models) or the lowest RMSE (for regression models).
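To make the idea concrete, here is a minimal sketch in Python on a toy dataset. It enumerates Boolean conjunctions of up to two (possibly negated) dummy features and keeps the one with the highest classification accuracy. The dataset, the true rule `(x0 AND x1) OR x2`, and the brute-force search are all assumptions made for illustration; real logic regression searches full AND/OR/NOT trees, typically with simulated annealing rather than exhaustive enumeration.

```python
import itertools

# Toy data: three 1/0 dummy features per row. The label follows the
# hypothetical rule (x0 AND x1) OR x2, chosen purely for illustration.
X = [
    (1, 1, 0), (1, 0, 0), (0, 1, 1), (0, 0, 0),
    (1, 1, 1), (0, 1, 0), (1, 0, 1), (0, 0, 1),
]
y = [1 if (a and b) or c else 0 for a, b, c in X]

def accuracy(rule):
    """Fraction of rows where the Boolean rule matches the label."""
    return sum(rule(row) == label for row, label in zip(X, y)) / len(y)

best_acc, best_desc = 0.0, None
n_features = len(X[0])

# Score every conjunction of one or two features, each optionally negated.
for size in (1, 2):
    for idx in itertools.combinations(range(n_features), size):
        for negs in itertools.product((False, True), repeat=size):
            def rule(row, idx=idx, negs=negs):
                return int(all((not row[i]) if neg else row[i]
                               for i, neg in zip(idx, negs)))
            acc = accuracy(rule)
            if acc > best_acc:
                best_acc = acc
                best_desc = " AND ".join(
                    ("NOT x%d" if neg else "x%d") % i
                    for i, neg in zip(idx, negs))

print(best_desc, best_acc)
```

Because no single conjunction can express the OR in the generating rule, the search settles on an approximation (the single feature `x2`), which is exactly the kind of limitation that motivates searching richer AND/OR/NOT trees.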