AI4HLS: New Frontiers in High-Level Synthesis Augmented with Artificial Intelligence

Artificial Intelligence (AI) and Machine Learning (ML) methods are profoundly changing design automation (DA). All areas of DA are exploring AI-based approaches to improve the quality of results. From system-level design to logic synthesis, placement and routing, and physical layout, AI promises to identify optimal solutions while significantly reducing search time. In this special session, we specifically discuss the use of AI-enabled methods in High-Level Synthesis (HLS). While generative AI and large language models (LLMs) have the potential to be revolutionary for HLS, thanks to their ability to generate register-transfer-level (RTL) descriptions starting from natural language, they need to be carefully evaluated with respect to the quality and verifiability of the resulting design and the size of the designs they can currently handle. Moreover, they are not the only potentially disruptive solution: design space exploration and classical optimization methods in HLS might also greatly benefit from ML. This special session discusses the opportunities and challenges of augmenting HLS with AI, including LLMs, prediction methods, and novel optimization approaches, and their impact on the overall agile hardware design automation flow.

  • The Promise and Challenges of Designing Digital Logic through Backpropagation

    Certain hardware functions, such as branch predictors or memory prefetchers, operate speculatively and therefore (1) do not have an ideal specification and (2) are tolerant to design and implementation errors. Designing such modules can be framed as a supervised learning task; however, machine learning models typically do not have an efficient hardware implementation. In this work, we present an approach for training and synthesizing area-efficient hardware purely from input-output traces (see the sketch below). Next, we extend the data-driven hardware specification problem from conventional datasets (e.g., branch traces) to synthetic datasets, where we design hardware that matches the input-output (but not timing) behavior of software binaries. We present our language-agnostic high-level synthesis tool and discuss some challenges in training and verifying the generated hardware designs.
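
    The following is a minimal, illustrative sketch of learning hardware behavior from input-output traces; it is not the authors' tool. It trains a single-layer perceptron on a synthetic branch-history trace and quantizes the learned weights to small signed integers, which map naturally to narrow adders in hardware. All data, sizes, and parameters here are invented for illustration.

      # Learn a branch-predictor-like function from input-output traces,
      # then quantize it to a hardware-friendly integer model (sketch only).
      import numpy as np

      rng = np.random.default_rng(0)
      HIST = 8                                # bits of branch history per sample

      # Synthetic trace: rows are history vectors in {-1,+1}; labels are
      # taken/not-taken, generated from a hidden linear rule plus noise.
      true_w = rng.integers(-3, 4, size=HIST)
      X = rng.choice([-1.0, 1.0], size=(4096, HIST))
      y = np.where(X @ true_w + rng.normal(0.0, 1.0, size=4096) > 0, 1.0, -1.0)

      w = np.zeros(HIST)
      for _ in range(200):                    # gradient descent on a hinge loss
          mask = y * (X @ w) < 1              # samples still inside the margin
          if not mask.any():
              break
          grad = -(y[mask, None] * X[mask]).mean(axis=0)
          w -= 0.1 * grad

      w_hw = np.round(4 * w).astype(int)      # quantize to small signed integers
      acc = np.mean(np.sign(X @ w_hw) == y)   # accuracy of the "hardware" model
      print(f"quantized weights: {w_hw}, trace accuracy: {acc:.3f}")

    Because the function is speculative, the quantized model need not match the trace exactly; any accuracy loss from quantization is simply a design trade-off against area.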

  • Extending High-Level Synthesis with AI/ML Methods

    Artificial Intelligence (AI) and Machine Learning (ML) methods provide significant opportunities for improving quality of results in high-level synthesis (HLS). For example, they can be used to model and predict metrics of the final design (e.g., area, considering aspects such as interconnect overhead for different device technologies), facilitating exploration when searching for the best design trade-offs (see the sketch below). They can also identify hidden correlations across the various phases of the synthesis and the various optimizations performed, revealing the most effective optimization pipelines. Finally, in more general terms, bio-inspired heuristic algorithms can improve the design space exploration for the synthesis process in terms of both time and quality of results. This talk discusses opportunities and challenges in augmenting HLS with AI/ML, using as an example flow the SODA Synthesizer, an open-source hardware generation toolchain that includes SODA-OPT, a hardware/software partitioning and pre-optimization tool developed with the MLIR framework, and PandA-Bambu, a state-of-the-art HLS tool. SODA interfaces with OpenROAD to provide a complete end-to-end toolchain.
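
    As a minimal sketch of the metric-prediction idea above (with invented data, features, and cost model, not taken from SODA or Bambu), the snippet below fits a regression model that estimates post-synthesis area from HLS directive settings and uses it to rank unexplored design points before paying for full synthesis runs.

      # Predict area from HLS directives and rank candidate design points (sketch).
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(1)

      def sample_configs(n):
          # Each design point: (loop unroll factor, pipeline II, array partition factor)
          return np.column_stack([
              rng.choice([1, 2, 4, 8], n),    # unroll factor
              rng.choice([1, 2, 4], n),       # initiation interval
              rng.choice([1, 2, 4], n),       # partition factor
          ])

      def fake_area(cfg):
          # Stand-in for real synthesis reports: area grows with unrolling and
          # partitioning, shrinks with a looser initiation interval, plus noise.
          u, ii, p = cfg[:, 0], cfg[:, 1], cfg[:, 2]
          return 100 * u * p / ii + rng.normal(0, 20, len(cfg))

      X_train = sample_configs(300)           # points we "paid" to synthesize
      model = RandomForestRegressor(n_estimators=100, random_state=0)
      model.fit(X_train, fake_area(X_train))

      candidates = sample_configs(1000)       # unexplored design points
      best = candidates[np.argsort(model.predict(candidates))[:5]]
      print("predicted smallest-area configs (unroll, II, partition):\n", best)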

  • Are LLMs Any Good for High-Level Synthesis?

    As the demand for custom hardware accelerators grows, the need for rapid and efficient design methodologies becomes critical. The escalating complexity of integrated circuits and the increasing demand for faster and more energy-efficient designs necessitate innovative methodologies in High-Level Synthesis (HLS). Large Language Models (LLMs) have demonstrated their ability to automate various aspects of computational tasks, including programming and software engineering. This paper investigates the potential of LLMs to streamline the HLS process from high-level languages to hardware descriptions, with implications for applications such as AI acceleration, embedded systems, and high-performance computing. We survey the state of the art on using LLMs in the HLS process and conduct experiments comparing Verilog designs generated from C/C++ using a standard HLS tool (e.g., Vitis HLS) with several LLM-based approaches. These approaches include direct LLM translation of C/C++ benchmarks to Verilog (see the sketch below) and the use of LLMs to interpret natural language specifications into both benchmarks and Verilog code. Our evaluation assesses the quality and efficiency of the designs produced by each methodology, aiming to illuminate the role of LLMs in HLS and to identify the approaches that yield the most optimized hardware designs.
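
    A minimal sketch of the direct-translation approach compared in the paper is shown below. The query_llm function is a hypothetical stand-in for whatever chat-completion client is available, and the C function, prompt wording, module name, and handshake convention are illustrative assumptions, not details from the paper.

      # Ask an LLM to translate a C function directly to Verilog (sketch).
      from pathlib import Path

      def query_llm(prompt: str) -> str:
          raise NotImplementedError("plug in your LLM client here")

      C_SOURCE = """
      int dot4(const int a[4], const int b[4]) {
          int acc = 0;
          for (int i = 0; i < 4; i++) acc += a[i] * b[i];
          return acc;
      }
      """

      prompt = (
          "Translate the following C function into synthesizable Verilog-2001.\n"
          "Use a simple ready/valid handshake, one module named dot4, and no\n"
          "vendor-specific primitives. Return only the Verilog code.\n\n" + C_SOURCE
      )

      verilog = query_llm(prompt)
      Path("dot4.v").write_text(verilog)
      # The generated RTL would then be simulated against the C reference and
      # synthesized, so its area and latency can be compared with Vitis HLS output.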

  • High(er)-Level Synthesis: Can HLS Tools Benefit from LLMs?

    High-Level Synthesis (HLS) tools offer rapid hardware design from C code, but their compatibility is limited by the code constructs they support. This talk investigates Large Language Models (LLMs) for refactoring C code into HLS-compatible form. We present several case studies using an LLM to rewrite C code for the NIST 800-22 randomness tests, a QuickSort algorithm, and AES-128 into HLS-synthesizable C. The LLM iteratively transforms the C code guided by user prompts, implementing features such as streaming data interfaces and hardware-specific signals (see the sketch below). This evaluation demonstrates the LLM's potential to assist hardware design by refactoring regular C code into HLS-synthesizable C code. Joint work with Luca Collini and Ramesh Karri.
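
    Below is a minimal sketch of the kind of iterative, feedback-guided refactoring loop this talk describes. Both query_llm and the hls_tool command are hypothetical placeholders; the actual flow drives a concrete HLS tool and model with user-guided prompts rather than a fully automatic loop.

      # Iteratively ask an LLM to rewrite C code until an HLS front end accepts it (sketch).
      import pathlib
      import subprocess
      import tempfile

      def query_llm(prompt: str) -> str:
          raise NotImplementedError("plug in your LLM client here")

      def run_hls(c_path: str) -> tuple[bool, str]:
          # Stand-in for invoking an HLS tool; returns (success, diagnostic log).
          proc = subprocess.run(["hls_tool", c_path], capture_output=True, text=True)
          return proc.returncode == 0, proc.stderr

      def refactor_for_hls(c_code: str, max_rounds: int = 5) -> str:
          for _ in range(max_rounds):
              path = pathlib.Path(tempfile.mkdtemp()) / "kernel.c"
              path.write_text(c_code)
              ok, log = run_hls(str(path))
              if ok:
                  return c_code             # synthesizable: done
              # Feed the tool's diagnostics back, asking for a behavior-preserving fix.
              c_code = query_llm(
                  "Rewrite this C code so it is HLS-synthesizable (no dynamic\n"
                  "allocation, no recursion, bounded loops), preserving behavior.\n"
                  f"Tool errors:\n{log}\n\nCode:\n{c_code}"
              )
          raise RuntimeError("code still not synthesizable after refactoring rounds")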