Project Title: Modern Hardware Acceleration in Deep Learning - a Novel Approach and Analysis

Student: Daudi Bandres

Course: MSci in Computer Science with Artificial Intelligence

Abstract: This dissertation investigates emerging hardware solutions for accelerating neural network computations, including the development of a novel machine learning accelerator that implements neural networks directly in hardware to achieve higher performance and energy efficiency than traditional CPU- and GPU-based approaches. The project aims to provide a comprehensive review of the state of the art in hardware encoding of neural networks; to investigate the performance, power consumption, and energy efficiency of different hardware acceleration techniques and implementations; to evaluate the trade-offs between hardware acceleration and software-based implementations; and to propose novel approaches and techniques for encoding neural networks in hardware. The research involves conducting extensive experiments using appropriate benchmarks and datasets, and collecting and analyzing the resulting data to evaluate the effectiveness of each approach. The end goal is to provide a better understanding of the benefits and drawbacks of hardware acceleration for machine learning and to investigate a new method of encoding artificial intelligence models into hardware that can potentially improve their performance and energy efficiency.