
ROCm, a New Era in Open GPU Computing

Platform for GPU-Enabled HPC and Ultrascale Computing

Are You Ready to ROCK?

The ROCm Platform brings a rich foundation to advanced computing by seamlessly integrating the CPU and GPU with the goal of solving real-world problems.

On April 25th, 2016, we delivered ROCm 1.0 built around three pillars:

1) Open Heterogeneous Computing Platform (Linux(R) Driver and Runtime Stack), optimized for HPC & Ultra-scale class computing;

2) Heterogeneous C and C++ Single Source Compiler, to approach computation holistically, on a system level, rather than as a discrete GPU artifact;

3) HIP, acknowledging the need for freedom of choice when it comes to platforms and APIs for GPU computing.

Using our knowledge of the HSA Standards and, more importantly, the HSA Runtime, we have been able to successfully extend support to the dGPU with critical features for accelerating NUMA computation. As a result, the ROCK driver is composed of several components based on our efforts to develop the Heterogeneous System Architecture for APUs, including the new AMDGPU driver, the Kernel Fusion Driver (KFD), the HSA+ Runtime and an LLVM-based compilation stack which provides support for key languages. This support starts with AMD’s Fiji family of dGPUs, and has expanded to include the Hawaii dGPU family in ROCm 1.2. ROCm 1.3 further extends support to include the Polaris family of ASICs.

Supported CPUs

The ROCm Platform leverages PCIe Atomics (Fetch ADD, Compare and SWAP, Unconditional SWAP, AtomicsOpCompletion). PCIe atomics are only supported on PCIe Gen3-enabled CPUs and PCIe Gen3 switches like the Broadcom PLX. When you install your GPUs, make sure you install them in a fully PCIe Gen3 x16 or x8 slot attached either directly to the CPU’s root I/O controller or via a PCIe switch directly attached to the CPU’s root I/O controller. In our experience, many issues stem from consumer motherboards that provide physical x16 connectors which are electrically wired as, e.g., PCIe Gen2 x4. This typically occurs when connecting via the southbridge PCIe I/O controller. If your motherboard falls into this category, please do not use this connector for your GPUs if you intend to exploit ROCm.
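On Linux you can see the negotiated link speed and width in the LnkSta line that `sudo lspci -vvv -s <bus:dev.fn>` prints for your GPU; a Gen3 link runs at 8 GT/s per lane. The snippet below is a minimal sketch of that check run against a sample LnkSta line (the line contents are illustrative; substitute the output for your own GPU):

```shell
# Sample "LnkSta" line of the kind `sudo lspci -vvv` prints for a GPU
# (illustrative; get the real one for your card's bus address).
lnksta='LnkSta: Speed 8GT/s, Width x16'

# PCIe Gen3 negotiates 8 GT/s per lane; check both speed and width.
case "$lnksta" in
  *'8GT/s'*'x16'*) echo "link OK: PCIe Gen3 x16" ;;
  *)               echo "link below Gen3 x16: check the slot wiring" ;;
esac
```

If the reported width or speed is lower than the physical slot suggests, the slot is likely wired through the southbridge as described above.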

Our GFX8 GPUs (Fiji & Polaris families) use PCIe Gen3 and PCIe Atomics.

Current CPUs which support PCIe Gen3 + PCIe Atomics are:

Upcoming CPUs which will support PCIe Gen3 + PCIe Atomics are:

Experimental support exists for the GFX7 GPUs Radeon R9 290, R9 390, AMD FirePro S9150 and S9170; note that these do not support or take advantage of PCIe Atomics. However, we still recommend that you use a CPU from the list provided above.

Not Supported or Very Limited Support Under ROCm

Support for future APUs

We are well aware of the excitement and anticipation built around using ROCm with an APU system which fully exposes Shared Virtual Memory and cache coherency between the CPU and GPU. To this end, in 2017 we plan on testing commercial AM4 motherboards for the Bristol Ridge and Raven Ridge families of APUs. Just like you, we are still waiting for access to them! Once we have the first boards in the lab we will detail our experiences via our blog, as well as build a list of motherboards that are qualified for use with ROCm.

New Features to ROCm

Developer preview of the new OpenCL 1.2 compatible language runtime and compiler

IPC support

The latest ROCm platform - ROCm 1.5

The latest tested version of the drivers, tools, libraries and source code for the ROCm platform have been released and are available under the roc-1.5.0 or rocm-1.5.0 tag of the following GitHub repositories:

Additionally, the following mirror repositories that support the HCC compiler are also available on GitHub, and frozen for the rocm-1.5.0 release:

Supported Operating Systems

The ROCm platform has been tested on the following operating systems:

Installing from AMD ROCm Repositories

AMD is hosting both Debian and RPM repositories for the ROCm 1.5 packages. The packages in the Debian repository have been signed to ensure package integrity. Directions for each repository are given below:

Debian repository - apt-get

Add the ROCm apt repository

For Debian based systems, like Ubuntu, configure the Debian ROCm repository as follows:

wget -qO - http://packages.amd.com/rocm/apt/debian/rocm.gpg.key | sudo apt-key add -
sudo sh -c 'echo deb [arch=amd64] http://packages.amd.com/rocm/apt/debian/ xenial main > /etc/apt/sources.list.d/rocm.list'

The GPG key might change, so it may need to be updated when installing a new release.

Install or Update

Next, update the apt-get repository list and install/update the rocm package:

Warning: Before proceeding, make sure to completely uninstall any pre-release ROCm packages:

sudo apt-get update
sudo apt-get install rocm

Then, make the ROCm kernel your default kernel. If using grub2 as your bootloader, you can edit the GRUB_DEFAULT variable in the following file:

sudo vi /etc/default/grub
sudo update-grub
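One way to pin the ROCm kernel without tracking menu-entry indices is GRUB's "saved" mechanism: set GRUB_DEFAULT=saved, then record the ROCm kernel's menu-entry title once with `sudo grub-set-default "<menu entry title>"` (the title varies by system; take it from your own GRUB menu). The sketch below makes the edit on a local copy of the file, so nothing on the system is touched:

```shell
# Work on a local stand-in for /etc/default/grub.
printf 'GRUB_DEFAULT=0\nGRUB_TIMEOUT=5\n' > grub.sample

# Switch to the "saved" mechanism; the default entry then becomes
# whatever `sudo grub-set-default` last recorded.
sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=saved/' grub.sample
grep '^GRUB_DEFAULT' grub.sample
```

After making the equivalent change to the real /etc/default/grub, run sudo update-grub and reboot as described above.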

Once complete, reboot your system.

We recommend you verify your installation to make sure everything completed successfully.

To install ROCm with Developer Preview of OpenCL

Start by following the instructions above for installing ROCm from the Debian repository.

At the step “sudo apt-get install rocm”, instead run:

 sudo apt-get install rocm rocm-opencl

To install the development kit for OpenCL, which includes the OpenCL header files, execute this installation command instead:

 sudo apt-get install rocm rocm-opencl-dev

Then follow the remaining directions for the Debian repository.

Upon restart, test your OpenCL instance:

Build and run the Hello World OpenCL app:

Download the HelloWorld sample:

wget https://raw.githubusercontent.com/bgaster/opencl-book-samples/master/src/Chapter_2/HelloWorld/HelloWorld.cpp
wget https://raw.githubusercontent.com/bgaster/opencl-book-samples/master/src/Chapter_2/HelloWorld/HelloWorld.cl

Build it using the default ROCm OpenCL include and library locations:

g++ -I /opt/rocm/opencl/include/opencl1.2 ./HelloWorld.cpp -o HelloWorld -L /opt/rocm/opencl/lib/x86_64 -lOpenCL

Run it:

./HelloWorld

Uninstall

To uninstall the rocm package and its dependencies, execute:

sudo apt-get autoremove rocm

Installing development packages for cross compilation

It is often useful to develop and test on different systems. In this scenario, you may prefer to avoid installing the ROCm Kernel to your development system.

In this case, install the development subset of packages:

sudo apt-get update
sudo apt-get install rocm-dev

Note: To execute ROCm-enabled apps you will require a system with the full ROCm driver stack installed.

Removing pre-release packages

If you installed any of the ROCm pre-release packages from GitHub, they will need to be manually uninstalled:

sudo apt-get purge libhsakmt
sudo apt-get purge radeon-firmware
sudo apt-get purge $(dpkg -l | grep 'kfd\|rocm' | grep linux | grep -v libc | awk '{print $2}')
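The third command builds its package list from dpkg -l: it keeps rows mentioning kfd or rocm, narrows to kernel (linux) packages, drops libc-related entries, and prints the package-name column. Here is the same pipeline run against a small sample of dpkg -l output (the package names are hypothetical):

```shell
# Hypothetical `dpkg -l` rows; the second column is the package name.
dpkg_sample='ii  linux-image-4.9-kfd   1.5  amd64  ROCm kernel image
ii  linux-libc-dev        4.4  amd64  Linux support headers
ii  rocm-dev              1.5  amd64  ROCm development meta-package'

# Same filter chain as the purge command above:
echo "$dpkg_sample" | grep 'kfd\|rocm' | grep linux | grep -v libc | awk '{print $2}'
# prints: linux-image-4.9-kfd
```

Only the pre-release kernel image survives the filters; ordinary libc packages and non-kernel ROCm packages (handled by the first two purge commands) are excluded.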

If possible, we would recommend starting with a fresh OS install.

RPM repository - dnf (yum)

A dnf (yum) repository is also available for installation of RPM packages. To configure a system to use the ROCm RPM repository, create the file /etc/yum.repos.d/rocm.repo with the following contents:

[remote]
name=ROCm Repo
baseurl=http://packages.amd.com/rocm/yum/rpm/
enabled=1
gpgcheck=0

Execute the following commands:

sudo dnf clean all
sudo dnf install rocm

As with the debian packages, it is possible to install rocm-dev individually. To uninstall the packages execute:

sudo dnf remove rocm

Just like Ubuntu installs, the ROCm kernel must be the default kernel used at boot time.

Manual installation steps for Fedora

A fully functional Fedora installation requires a few manual steps to set up properly, including:

Verify Installation

To verify that the ROCm stack installed successfully, you can execute the HSA vector_copy sample application (we recommend copying it to a separate folder and invoking make there):

cd /opt/rocm/hsa/sample
make
./vector_copy

Closed Source Components

The ROCm platform relies on a few closed source components to provide legacy functionality like HSAIL finalization and debugging/profiling support. These components are only available through the ROCm repositories, and will either be deprecated or become open source components in the future. These components are made available in the following packages:

Getting ROCm Source Code

Modifications can be made to the ROCm 1.5 components by modifying the open source code base and rebuilding the components. Source code can be cloned from each of the GitHub repositories using git, or users can use the repo command and the ROCm 1.5 manifest file to download the entire ROCm 1.5 source code.

Installing repo

Google’s repo tool allows you to manage multiple git repositories simultaneously. You can install it by executing the following commands:

curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
chmod a+x ~/bin/repo

Note: make sure ~/bin exists and it is part of your PATH
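If ~/bin does not yet exist or is not on your PATH, the sketch below creates it and adds it for the current shell; append the export line to your ~/.bashrc (or your shell's equivalent) to make the change permanent:

```shell
# Create ~/bin if missing.
mkdir -p "$HOME/bin"

# Prepend it to PATH only if it is not already there.
case ":$PATH:" in
  *":$HOME/bin:"*) ;;                        # already present
  *) export PATH="$HOME/bin:$PATH" ;;
esac

echo "$PATH" | grep -q "$HOME/bin" && echo "~/bin is on PATH"
```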

Cloning the code

mkdir ROCm && cd ROCm
repo init -u https://github.com/RadeonOpenCompute/ROCm.git -b roc-1.5.0
repo sync

This series of commands pulls all of the open source code associated with the ROCm 1.5 release.