The Promise and Challenges of Designing Digital Logic through Backpropagation
Certain hardware functions, such as branch predictors or memory prefetchers, operate speculatively and therefore (1) lack an ideal specification and (2) are tolerant of design and implementation errors. Designing such modules can be framed as a supervised learning task; however, machine learning models typically do not have efficient hardware implementations. In this work we present an approach for training and synthesizing area-efficient hardware purely from input-output traces. We then extend the data-driven hardware specification problem from conventional datasets (e.g., branch traces) to synthetic datasets, designing hardware that matches the input-output (but not the timing) behavior of software binaries. We present our language-agnostic high-level synthesis tool and discuss challenges in training and verifying the generated hardware designs.
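To make the "supervised learning from input-output traces" framing concrete, the sketch below trains a single-neuron branch predictor by backpropagation on a synthetic trace. The 8-bit history length, the artificial correlation in the trace, and the logistic-regression model are illustrative assumptions, not the model or datasets actually used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

HISTORY_BITS = 8  # hypothetical global-history length

# Synthetic trace: each sample pairs a branch-history vector with a
# taken/not-taken outcome. The outcome here correlates with two history
# bits purely for illustration; a real trace would come from a simulator
# or an instrumented binary.
histories = rng.integers(0, 2, size=(4096, HISTORY_BITS)).astype(np.float32)
outcomes = (histories[:, 0] != histories[:, 3]).astype(np.float32)

# Single-neuron predictor (logistic regression) trained by gradient descent.
w = np.zeros(HISTORY_BITS, dtype=np.float32)
b = 0.0
lr = 0.1

for _ in range(200):
    logits = histories @ w + b
    probs = 1.0 / (1.0 + np.exp(-logits))
    grad = probs - outcomes                     # dLoss/dlogit for cross-entropy
    w -= lr * (histories.T @ grad) / len(grad)  # backpropagated weight update
    b -= lr * grad.mean()

preds = (histories @ w + b) > 0
print("trace accuracy:", (preds == outcomes.astype(bool)).mean())
```

Because the target function is learned only up to its accuracy on observed traces, a prediction error degrades performance rather than correctness, which is what makes this class of speculative modules amenable to data-driven design.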