An open-source libre GPU project is in the works by Luke Kenneth Casson Leighton. He is the hardware engineer who developed the EOMA68, an earth-friendly computer.
The project already has access to $250k USD in funding. The basic idea for this "libre GPU" is to use a RISC-V processor.
The GPU will be mostly software-based. It will leverage the LLVM compiler infrastructure and a software-based Vulkan renderer to emit code that runs on the RISC-V processor. The Vulkan implementation will be written in the Rust programming language.
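To make the "software-based" part concrete, here is a minimal, purely illustrative Rust sketch of the kind of work such a renderer does: rasterizing and shading pixels as ordinary CPU code. None of the names below come from the project, and a real Vulkan implementation would compile shaders through LLVM rather than hard-code them like this.

```rust
// Purely illustrative: a toy CPU rasterizer shading the pixels of one
// triangle-shaped region. A software Vulkan renderer does this kind of
// work in ordinary code instead of fixed-function GPU hardware.

/// Hypothetical "fragment shader": map 2D coordinates to an RGBA color.
fn shade(u: f32, v: f32) -> [u8; 4] {
    [(u * 255.0) as u8, (v * 255.0) as u8, 128, 255]
}

fn main() {
    const W: usize = 8;
    const H: usize = 8;
    let mut framebuffer = vec![[0u8; 4]; W * H];

    // Walk every pixel; shade those inside the triangle u + v <= 1.
    for y in 0..H {
        for x in 0..W {
            let (u, v) = (x as f32 / W as f32, y as f32 / H as f32);
            if u + v <= 1.0 {
                framebuffer[y * W + x] = shade(u, v);
            }
        }
    }
    println!("top-left pixel: {:?}", framebuffer[0]);
}
```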
The project's current road-map has details only on the software side: assessing the state of the RISC-V LLVM back-end, writing a user-space graphics driver, and implementing the necessary bits for proposed RISC-V extensions such as "Simple-V". Alongside that work, the team will start figuring out the hardware design and the rest of the project. The road-map is quite simplified for the arduous task at hand.
The website notes: “Once you've been through the "Extension Proposal Process" with Simple-V, it need never be done again, not for one single parallel / vector / SIMD instruction, ever again.”
This process will include creating a fixed-function 3D "FP to ARGB" custom instruction as part of a custom extension with special 3D pipelines. With Simple-V, there is no need to worry about how those operations would be parallelised. This is not a new concept; it is borrowed directly from the VideoCore IV, which calls it "virtual parallelism".
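As a rough sketch of how these two ideas fit together (all names below are assumptions for illustration, not the project's actual instruction set), the Rust model treats a fixed-function "FP to ARGB" pack as a single scalar operation, and a Simple-V-style vector-length (VL) setting as the thing that makes the hardware repeat that one scalar instruction across a group of elements:

```rust
/// Hypothetical fixed-function op: clamp four f32 channels to [0, 1]
/// and pack them into one 32-bit ARGB word.
fn fp_to_argb(a: f32, r: f32, g: f32, b: f32) -> u32 {
    let q = |c: f32| (c.clamp(0.0, 1.0) * 255.0) as u32;
    (q(a) << 24) | (q(r) << 16) | (q(g) << 8) | q(b)
}

/// Toy model of "virtual parallelism": the programmer issues one
/// instruction, and the (simulated) hardware repeats it VL times.
fn vectorised_pack(pixels: &[[f32; 4]], vl: usize, out: &mut Vec<u32>) {
    for px in pixels.iter().take(vl) {
        out.push(fp_to_argb(px[0], px[1], px[2], px[3]));
    }
}

fn main() {
    let pixels = [
        [1.0, 1.0, 0.0, 0.0], // opaque red
        [1.0, 0.0, 1.0, 0.0], // opaque green
        [1.0, 0.0, 0.0, 1.0], // opaque blue
        [0.5, 1.0, 1.0, 1.0], // translucent white
    ];
    let mut packed = Vec::new();
    vectorised_pack(&pixels, pixels.len(), &mut packed);
    for word in &packed {
        println!("{word:#010x}"); // e.g. 0xffff0000 for opaque red
    }
}
```

The point of the model is that no new SIMD opcode is ever defined: the parallelism comes entirely from the VL count applied to the existing scalar operation, which is what the quoted "never be done again" claim is about.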
It's an enormous effort on both the software and hardware ends to pull together a combined open-source project spanning RISC-V, Rust, LLVM, and Vulkan. Even with the funding, it is a difficult undertaking, given that this is a software-based GPU. It is worth noting that the EOMA68 project, which Luke started in 2016, raised over $227k USD from crowdfunding participants and hasn't shipped yet.
To learn more about this project, visit the Libre RISC-V website.