Description
In this talk, we introduce Splitting Stump Forests – small ensembles of weak learners extracted from a trained random forest. The high memory consumption of random forest ensembles renders them unfit for resource-constrained devices. We show empirically that we can significantly reduce model size and inference time by selecting nodes that evenly split the arriving training data and applying a linear model on the resulting representation. Our extensive empirical evaluation indicates that Splitting Stump Forests outperform random forests and state-of-the-art compression methods on memory-limited embedded devices.
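The sketch below illustrates the idea in broad strokes and is not the authors' reference implementation: the node-selection rule, the `balance_threshold` value, and the choice of logistic regression as the linear model are assumptions for illustration only. It selects split nodes of a trained random forest whose split divides the training data reaching them roughly evenly, encodes each sample as a binary vector over those splits, and fits a linear model on that representation.

```python
# Minimal sketch under assumed choices; not the paper's exact algorithm.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# 1) Train an ordinary random forest.
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# 2) Collect internal nodes whose split divides the training samples that
#    reach them roughly evenly (assumed balance criterion).
balance_threshold = 0.4  # keep splits where the smaller side gets >= 40%
stumps = []              # list of (feature_index, threshold) pairs
for tree in forest.estimators_:
    t = tree.tree_
    node_indicator = tree.decision_path(X)   # which samples reach each node
    for node in range(t.node_count):
        if t.children_left[node] == -1:       # skip leaves
            continue
        reaching = node_indicator[:, node].toarray().ravel().astype(bool)
        if not reaching.any():
            continue
        left = X[reaching, t.feature[node]] <= t.threshold[node]
        frac = left.mean()
        if min(frac, 1 - frac) >= balance_threshold:
            stumps.append((t.feature[node], t.threshold[node]))

# 3) Encode each sample as a binary vector over the selected splits
#    and fit a linear model on that representation.
def encode(X, stumps):
    return np.column_stack([(X[:, f] <= thr).astype(np.float32)
                            for f, thr in stumps])

Z = encode(X, stumps)
clf = LogisticRegression(max_iter=1000).fit(Z, y)
print(f"{len(stumps)} stumps selected, train accuracy {clf.score(Z, y):.3f}")
```

At inference time only the selected (feature, threshold) pairs and the linear weights need to be stored, which is what makes the resulting model small enough for embedded devices.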
Note: This paper received the best paper award at Discovery Science 2024
Talk by Fouad Alkhoury