By Alexander Blake-Davies and Mike Schmit

I’m Mike Schmit, Director of Software Engineering with the Radeon Technologies Group at AMD. I’m leading the development of a new open-source 360-degree video-stitching framework called Radeon Loom, and I’ll explain what it is and why it is important. This is the first part in a series of posts (to be continued on GPUOpen.com) that will explain in detail how Radeon Loom works.

First, an explanation of why we chose the name Radeon Loom. From Neolithic cave paintings to modern-day IMAX® and VR/AR experiences, people have long enjoyed 360-degree art: Renaissance frescoes and interior paintings, the dramatic cycloramas and panoramas of the 1800s, and, more recently, immersive visual experiences from major movie studios. Looms for weaving cloth have been around for thousands of years and were vital to creating epic storytelling tapestries, like the 70-meter-long Bayeux Tapestry depicting the 11th-century Norman Conquest of England. Radeon Loom continues this legacy, enabling today's digital storytellers to weave and stitch the next generation of timeless epics.

The Loom and Radeon Loom

Like modern machine looms that create beautiful images from thousands of spools of thread, our Radeon GPUs are also massively multi-threaded, running thousands of threads in parallel to produce stunning digital images.

The innovative Jacquard loom, introduced in France in 1801, used punch-card mechanisms to control the intricate patterns woven on factory looms. This delivered a tremendous productivity boost and freed master weavers to devote more time to designing new creations, giving rise to today's fashion industry. Our goal for Radeon Loom is essentially the same: boost the creative dynamic for cinematic VR video experiences, simplify and streamline the creation of high-quality 360 video, and free content creators to focus on innovative new cinematic content.

Real-time Radeon Loom Video Stitching Example

Our first target is to enable preview of 360-degree video in a headset such as an Oculus Rift or HTC Vive while filming with a high-quality 360 camera rig. We asked ourselves, “Are directors and crew going to be confident enough to use 360-degree VR video on a set, with top-tier talent, without being able to immediately see the results in this new medium?” Clearly, the answer was no.

We came up with a few solutions after several design iterations, including the solution shown below:

[Diagram: real-time Radeon Loom video stitching setup]

There are several important details. First, we are using a fast workstation graphics card, an AMD FirePro™ W9100 or one of the new Radeon™ Pro WX series, since faster cards can support more cameras and higher resolutions. Second, we are capturing the data streams from the cameras via an SDI capture card and our DirectGMA software, so the data is delivered straight into GPU memory buffers. Third, we are using Blackmagic cameras that support genlock for synchronization; each camera's HDMI® output is converted to broadcast-grade SDI (Serial Digital Interface) with a converter.

Once the data is in GPU memory via DirectGMA, a complex set of algorithms stitches all the images together into a 360-degree spherical video. The stitched result is then sent out over SDI to one or more PCs with HMDs (head-mounted displays) for immediate viewing and/or streaming to the internet.
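To make that data flow concrete, here is a minimal sketch of the loop, as shown below. All of the helper names (captureToGpu, stitchEquirect, sendOverSdi) are hypothetical placeholders standing in for the capture card, the stitcher, and the SDI output; they are not the actual Radeon Loom API:

```cpp
#include <vector>

// Hypothetical GPU-resident frame handle; with DirectGMA the capture card
// writes into this memory directly, with no CPU-side copy.
struct GpuFrame { int id = 0; };

// Placeholder stages (the real pipeline talks to the SDI capture card,
// the stitching graph, and the SDI output, respectively).
std::vector<GpuFrame> captureToGpu(int numCameras) { return std::vector<GpuFrame>(numCameras); }
GpuFrame stitchEquirect(const std::vector<GpuFrame>& frames) { return GpuFrame{}; }
void sendOverSdi(const GpuFrame&) {}

int main() {
    const int numCameras = 4;
    for (int frame = 0; frame < 100; ++frame) {
        // 1. Camera frames land directly in GPU memory via DirectGMA.
        auto inputs = captureToGpu(numCameras);
        // 2. Warp, seam-find, and blend into one 360-degree spherical frame.
        auto sphere = stitchEquirect(inputs);
        // 3. Ship the stitched frame out over SDI to the HMD viewing PCs.
        sendOverSdi(sphere);
    }
}
```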

There are also some practical issues to consider when placing all the equipment for a real-time 360 video setup. With 360 cameras, you generally don't get to have a camera operator behind the camera, since they would be visible in the video. You will therefore probably want to locate the stitching and/or viewing PCs far away or, for example, behind a wall or green screen.

Why Stitching is Difficult

In the next part of this series I will go into how 360 video stitching works, but before that I should start with a brief explanation of why it is such a difficult problem. If you have seen some high-quality 360 videos, you might think that 360-degree stitching is essentially a solved problem. It isn't, although much credit must be given to the algorithm pioneers who, over the past decades, have incrementally solved so many issues with panoramic stitching and 360-degree VR stitching.

Fundamental problems such as parallax, camera count versus seam count, and exposure differences between sensors must still be addressed. First, a quick explanation of the parallax problem: put simply, two cameras in different positions see the same object from different perspectives, just as a finger held close to your nose appears against a different background when viewed through each eye in turn.
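To put rough numbers on this (the 6 cm lens spacing below is an illustrative assumption, not a property of any particular rig), the angular disagreement between two adjacent cameras falls off quickly with subject distance:

```cpp
#include <cmath>
#include <cstdio>

// Angular parallax, in degrees, between two cameras whose lenses are
// `baseline` meters apart, both aimed at a point `distance` meters away.
double parallaxDeg(double baseline, double distance) {
    const double kPi = 3.14159265358979323846;
    return std::atan2(baseline, distance) * 180.0 / kPi;
}

int main() {
    const double baseline = 0.06; // assumed 6 cm spacing between adjacent lenses
    std::printf("subject at  1 m: %.2f degrees\n", parallaxDeg(baseline, 1.0));  // ~3.4
    std::printf("subject at 10 m: %.2f degrees\n", parallaxDeg(baseline, 10.0)); // ~0.3
}
```

This is why seams can look fine against distant backgrounds yet visibly break on subjects close to the rig.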

The second issue is a tradeoff: using more cameras is better because you get higher resolution and better optical quality (due to less lens distortion), but it also means many more seams to contend with, which creates more opportunities for artifacts and adds processing time. In addition, as people and objects move across the seams, the parallax problem is repeatedly exposed as small angular differences.

The third issue involves the fact that each camera sensor is observing different lighting conditions. For example, a video of a sunset may have a west-facing camera looking directly at the sun while an east-facing camera views a much darker region. Clever algorithms exist to adjust and blend the exposure variations across images, but this comes at the cost of lighting, color accuracy, and overall dynamic range. The problem is amplified in low-light conditions, potentially limiting artistic expression.
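As one illustration of the kind of adjustment involved (a simple global-gain scheme, not necessarily what Loom implements): estimate each camera's brightness from the regions where it overlaps its neighbors, then scale every camera toward a common target before blending:

```cpp
#include <cstdio>
#include <numeric>
#include <vector>

// Per-camera gains that pull every camera's overlap-region brightness
// toward the rig-wide average. Inputs are mean luma values (0-255).
std::vector<double> exposureGains(const std::vector<double>& overlapMeanLuma) {
    const double target =
        std::accumulate(overlapMeanLuma.begin(), overlapMeanLuma.end(), 0.0) /
        overlapMeanLuma.size();
    std::vector<double> gains;
    for (double mean : overlapMeanLuma)
        gains.push_back(target / mean); // >1 brightens, <1 darkens
    return gains;
}

int main() {
    // Illustrative sunset rig: the west camera sees the sun, the east one does not.
    std::vector<double> meanLuma = {200.0, 120.0, 60.0, 110.0};
    auto gains = exposureGains(meanLuma);
    for (size_t i = 0; i < gains.size(); ++i)
        std::printf("camera %zu gain: %.2f\n", i, gains[i]);
}
```

The dark east camera gets the largest gain, which is exactly where noise and lost dynamic range show up.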

Stitching Optimizations

Our design process follows a few simple optimization guidelines, such as “touch each pixel as few times as possible.” And, once you do “touch” or read a pixel, do as many operations as possible on it before writing it back out. This is very simple to say, but much harder to achieve in practice because of the large data sizes. By using DirectGMA (available only on AMD’s FirePro™ and Radeon™ Pro Graphics) we avoid the need to copy the data into the CPU memory, then into the GPU memory, and back again.
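A toy CPU example of that guideline (illustrative only; the real kernels run on the GPU): applying a gain and a gamma curve in two separate passes reads and writes every pixel twice, while a fused pass halves the memory traffic:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Unfused: two full passes over the frame; every pixel is read and written twice.
void gainPass(std::vector<uint8_t>& px, float gain) {
    for (auto& p : px)
        p = static_cast<uint8_t>(std::min(p * gain, 255.0f));
}
void gammaPass(std::vector<uint8_t>& px, float gamma) {
    for (auto& p : px)
        p = static_cast<uint8_t>(std::pow(p / 255.0f, gamma) * 255.0f);
}

// Fused: the same math with one read and one write per pixel.
void fusedPass(std::vector<uint8_t>& px, float gain, float gamma) {
    for (auto& p : px) {
        const float v = std::min(p * gain, 255.0f) / 255.0f;
        p = static_cast<uint8_t>(std::pow(v, gamma) * 255.0f);
    }
}

int main() {
    std::vector<uint8_t> frame(1920 * 1080, 128);
    fusedPass(frame, 1.2f, 1.0f / 2.2f); // one pass instead of two
}
```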

Another important optimization is to arrange the workloads to keep the GPU busy. What we’ve done is to prepare the pipeline in advance with lists of pixel groups that must be processed.
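As a sketch of what preparing the pipeline in advance can look like (the tile layout and fields here are hypothetical, not Loom's actual data structures): for a fixed rig, the mapping from each output tile to the cameras that cover it never changes, so it can be computed once and replayed every frame:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical per-tile work item: which cameras cover this output tile
// and how to blend them across the seam. Built once from rig calibration.
struct TileWork {
    uint32_t dstOffset;   // tile's position in the output frame
    uint8_t  camA, camB;  // the (up to two) cameras covering the tile
    float    weightA;     // camA's blend weight; 1.0 means camA only
};

// Stub: a real version would derive this from lens/orientation calibration.
std::vector<TileWork> buildWorkList() {
    return { {0, 0, 1, 0.5f}, {64, 1, 1, 1.0f} };
}

int main() {
    const auto work = buildWorkList(); // computed once, before filming starts
    for (int frame = 0; frame < 100; ++frame) {
        for (const auto& tile : work) {
            // Dispatch the warp/blend kernel for this tile; no per-frame
            // CPU-side recomputation is needed, so the GPU stays busy.
            (void)tile;
        }
    }
}
```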

To reach our goal of real-time stitching with numerous cameras, we examined many possible options. Ultimately, we chose or developed algorithms that take the fewest compute cycles with the best quality, and then optimized their implementation. Of course, we also selected algorithms that are amenable to optimization on our massively parallel and scalable FirePro™ and Radeon™ GPUs.

[Slide: Radeon Loom in Radeon Software Crimson ReLive]
Radeon Loom Beta Preview Available Now

360-degree video creation has exploded, with almost half a million 360 videos uploaded to Facebook, YouTube, and other social media sites in 2016 alone, and it is expected to be an $11.5 billion industry by 2025¹. 360-degree videos are currently recorded using anywhere from 2 to 24 individual cameras, and as noted above, the more cameras used, the higher the resolution and quality of the final 360 experience. However, stitching the output of these multiple camera views into a single seamless video image is a major challenge.

Radeon Loom is set to revolutionize the 360 video stitching process by addressing these challenges with massively parallel GPU processing, enabling both real-time live stitching and fast offline stitching of 360 videos. Radeon Loom uses AMD's open-source implementation of the Khronos™ OpenVX™ computer vision framework and is capable of stitching input from up to 24 cameras into 4K x 2K output in real time, and from up to 31 cameras at up to 8K x 4K offline, depending on configuration.
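For developers unfamiliar with OpenVX, here is a minimal skeleton of the kind of graph-based pipeline it enables: one camera image warped into equirectangular space through a precomputed remap table. This shows only the framework's shape, assuming an OpenVX 1.1-style API; Loom's actual graphs, node set, and camera handling are more elaborate, and the image sizes and remap contents below are illustrative:

```cpp
#include <VX/vx.h>

int main() {
    vx_context ctx = vxCreateContext();
    vx_graph graph = vxCreateGraph(ctx);

    // One camera input and the equirectangular output (sizes are illustrative).
    vx_image cam = vxCreateImage(ctx, 1920, 1080, VX_DF_IMAGE_RGB);
    vx_image out = vxCreateImage(ctx, 4096, 2048, VX_DF_IMAGE_RGB);

    // Remap table mapping each output pixel back to a source pixel; in a
    // stitcher it would be filled from the rig's lens/orientation calibration.
    vx_remap table = vxCreateRemap(ctx, 1920, 1080, 4096, 2048);

    // Warp the camera image into the panorama through the table.
    vxRemapNode(graph, cam, table, VX_INTERPOLATION_BILINEAR, out);

    // Verify once, then process every frame; AMD's OpenVX backend runs the
    // graph on the GPU.
    if (vxVerifyGraph(graph) == VX_SUCCESS)
        vxProcessGraph(graph);

    vxReleaseContext(&ctx); // also releases the graph, images, and remap
    return 0;
}
```

The graph model is what lets the runtime fuse and schedule work across the whole pipeline, which is why it suits the optimization guidelines described earlier.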

I will continue to share more information about Radeon Loom in a continuing series on GPUOpen.com (with the next post covering the technical details of the Loom stitching pipeline), but if you are a developer working in the 360 video industry, you don't have to wait: you can download the beta preview of the Radeon Loom Stitching Library on GPUOpen.com today.

Mike Schmit is the Director of Software Engineering for Radeon Loom at AMD's Radeon Technologies Group.

Alexander Blake-Davies is a Software Product Marketing Specialist for Professional Graphics at AMD's Radeon Technologies Group.

Links to third party sites and references to third party trademarks are provided for convenience and illustrative purposes only. Unless explicitly stated, AMD is not responsible for the contents of such links, and no third-party endorsement of AMD or any of its products is implied. Use of third party names or marks is for informational purposes only and no endorsement of or by AMD is intended or implied.

Khronos and OpenVX are trademarks of Khronos Group, Inc. HDMI, the HDMI logo, and High-Definition Multimedia Interface are trademarks or registered trademarks of HDMI Licensing, LLC in the United States and other countries.