A Proposed Improvement to Decision Trees, Sort of a Neural Net

Decision trees limit their own power by having a single root node. They are trees, after all, and who is to say that one starting point that holds for most data holds for all data? There may be an alternative way to handle them, one I will test when I have time against an appropriate dataset such as legal case data. The proposed solution is simple: change the structure. Don’t use a decision tree, use a decision graph. Every node can be chosen as a starting point. And don’t create just one entry vertex, create many.
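The idea above can be sketched in a few lines. This is a minimal, hypothetical illustration, not an implementation of anything tested: the node names, the legal-case-style features, and the `classify` walk are all assumptions made for the example. The point is that the same graph classifies the same sample differently depending on which node you enter at.

```python
# A minimal sketch of a "decision graph": unlike a decision tree, any node
# may serve as the entry point. All names and thresholds are illustrative.

class DecisionNode:
    def __init__(self, name, test=None, yes=None, no=None, label=None):
        self.name = name
        self.test = test      # callable: sample -> bool (None for leaves)
        self.yes = yes        # successor node if test is True
        self.no = no          # successor node if test is False
        self.label = label    # classification carried by leaf nodes

def classify(node, sample):
    """Walk the graph from an arbitrary starting node down to a leaf."""
    while node.label is None:
        node = node.yes if node.test(sample) else node.no
    return node.label

# Leaves.
grant = DecisionNode("grant", label="grant")
deny = DecisionNode("deny", label="deny")

# Internal nodes; both are valid entry points.
precedent = DecisionNode("precedent", lambda s: s["precedents"] > 2, grant, deny)
evidence = DecisionNode("evidence", lambda s: s["evidence"] >= 0.5, grant, precedent)

sample = {"evidence": 0.7, "precedents": 1}
print(classify(evidence, sample))   # entering at "evidence" -> grant
print(classify(precedent, sample))  # same sample, entering at "precedent" -> deny
```

The choice of entry point changes the outcome, which is exactly why picking the starting node well matters in this scheme.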

Feasibility

This is actually more feasible than it appears. By training a neural net to choose the correct starting node and specifying correct paths from each node, the edge count does not need to grow to the full n*(n-1) relationship with the number of nodes. The structure also retains a decision-tree-like shape. Basically, a decision graph.
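One way to picture the feasibility argument: the learned selector only picks the entry node, while each node keeps its own small, fixed set of outgoing edges, so the stored edge count stays tree-like rather than n*(n-1). In the sketch below, `select_start` is a hard-coded stand-in for a trained neural net, and the node table and features are invented for illustration.

```python
# Sketch: a selector picks the entry node; the graph itself stores only a
# sparse, tree-like edge set rather than the full n*(n-1) directed edges.
# select_start is a stand-in for a trained neural net (hypothetical).

NODES = {
    "evidence":  {"test": lambda s: s["evidence"] >= 0.5,
                  "yes": "grant", "no": "precedent"},
    "precedent": {"test": lambda s: s["precedents"] > 2,
                  "yes": "grant", "no": "deny"},
    "grant":     {"label": "grant"},
    "deny":      {"label": "deny"},
}

def select_start(sample):
    # Placeholder for a trained model mapping a sample to a starting node.
    return "evidence" if "evidence" in sample else "precedent"

def classify(sample):
    node = NODES[select_start(sample)]
    while "label" not in node:
        node = NODES[node["yes"] if node["test"](sample) else node["no"]]
    return node["label"]

n = len(NODES)
stored_edges = 4  # two outgoing edges per internal node
print(f"fully connected edges: {n * (n - 1)}")  # 12
print(f"edges actually stored: {stored_edges}")
print(classify({"evidence": 0.7, "precedents": 0}))  # grant
```

With four nodes the gap is already 12 possible directed edges versus 4 stored ones, and it widens quadratically as the graph grows.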

Positives

We now have a more manual hybrid of a neural net, a decision tree, and a graph that still needs to be tested. However, we also have many different possible outcomes without relying on one root node: a much better starting point, and many more than just a single path.

Negatives

This is still a manual process, whose best outcome is probably connecting every node to every other node and training outcomes on the best path. Getting that right can also be extremely computation-intensive.
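To make the cost of that worst case concrete, here is the edge-count arithmetic for a tree versus a fully connected directed graph at a few sizes. The sizes are arbitrary examples; the formulas are standard graph counts.

```python
# Edge counts: a tree needs n - 1 edges, while connecting every node to
# every other node needs n * (n - 1) directed edges.

def tree_edges(n):
    return n - 1          # each non-root node has exactly one parent

def full_graph_edges(n):
    return n * (n - 1)    # every ordered pair of distinct nodes

for n in (10, 100, 1000):
    print(n, tree_edges(n), full_graph_edges(n))
# 10 nodes:   9 vs 90
# 100 nodes:  99 vs 9900
# 1000 nodes: 999 vs 999000
```

The quadratic term is what makes the fully connected variant expensive to train and evaluate.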

Conclusion

For now, just this: decision trees are limited and not really the ideal solution in most circumstances.