3D Deep Learning with Python

You're reading from 3D Deep Learning with Python: Design and develop your computer vision model with 3D data using PyTorch3D and more

Product type: Paperback
Published in: Oct 2022
Publisher: Packt
ISBN-13: 9781803247823
Length: 236 pages
Edition: 1st Edition
Authors (4): Xudong Ma, Vishakh Hegde, Lilit Yolyan, David Farrugia
Table of Contents (16)

Preface
PART 1: 3D Data Processing Basics
Chapter 1: Introducing 3D Data Processing
Chapter 2: Introducing 3D Computer Vision and Geometry
PART 2: 3D Deep Learning Using PyTorch3D
Chapter 3: Fitting Deformable Mesh Models to Raw Point Clouds
Chapter 4: Learning Object Pose Detection and Tracking by Differentiable Rendering
Chapter 5: Understanding Differentiable Volumetric Rendering
Chapter 6: Exploring Neural Radiance Fields (NeRF)
PART 3: State-of-the-art 3D Deep Learning Using PyTorch3D
Chapter 7: Exploring Controllable Neural Feature Fields
Chapter 8: Modeling the Human Body in 3D
Chapter 9: Performing End-to-End View Synthesis with SynSin
Chapter 10: Mesh R-CNN
Index
Other Books You May Enjoy

Understanding 3D coordinate systems

In this section, we are going to learn about the coordinate systems frequently used in PyTorch3D. This section is adapted from PyTorch3D's documentation on camera coordinate systems: https://pytorch3d.org/docs/cameras. To understand and use the PyTorch3D rendering system, we usually need to know these coordinate systems and how to convert between them. As discussed in the previous sections, 3D data can be represented by points, faces, and voxels. The location of each point can be represented by a set of x, y, and z coordinates with respect to a certain coordinate system. We usually need to define and use multiple coordinate systems, depending on which one is most convenient.
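
For example, a point cloud is simply a batch of such (x, y, z) triples. The following minimal sketch (the coordinate values are invented purely for illustration) stores three points in a tensor and wraps them in PyTorch3D's Pointclouds structure:

import torch
from pytorch3d.structures import Pointclouds

# Each point is an (x, y, z) triple expressed in some chosen coordinate system.
points = torch.tensor(
    [
        [0.0, 0.0, 0.0],  # a point at the origin of that coordinate system
        [1.0, 0.0, 0.0],  # one unit along the x axis
        [0.0, 1.0, 2.0],  # one unit along y, two units along z
    ]
)

# PyTorch3D groups per-object point sets into a batched Pointclouds structure.
point_cloud = Pointclouds(points=[points])
print(point_cloud.points_packed().shape)  # torch.Size([3, 3])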

Figure 1.2 – A world coordinate system, where the origin and axes are defined independently of the camera positions

The first coordinate system we frequently use is the world coordinate system. This is a 3D coordinate system chosen with respect to all the 3D objects in the scene, such that the locations of the objects are easy to determine. Usually, the axes of the world coordinate system do not agree with the object orientation or camera orientation. Thus, there are non-zero rotations and displacements between the world coordinate system and the object and camera orientations. Figure 1.2 shows an example of a world coordinate system.
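
To make the idea of a rotation and displacement between frames concrete, here is a small sketch using PyTorch3D's Transform3d utilities; the particular 45-degree rotation and 2-unit translation are invented for illustration, not taken from the book:

import torch
from pytorch3d.transforms import RotateAxisAngle, Transform3d

# Hypothetical object whose local frame is rotated 45 degrees about the world
# y axis and shifted 2 units along the world x axis.
object_to_world = (
    Transform3d()
    .compose(RotateAxisAngle(angle=45.0, axis="Y"))
    .translate(2.0, 0.0, 0.0)
)

# A point given in the object's own frame...
p_object = torch.tensor([[1.0, 0.0, 0.0]])
# ...expressed in world coordinates.
p_world = object_to_world.transform_points(p_object)
print(p_world)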

Figure 1.3 – The camera view coordinate system, where the origin is at the camera projection center and the three axes are defined according to the imaging plane

Since the axes of the world coordinate system usually do not agree with the camera orientation, in many situations it is more convenient to define and use a camera view coordinate system. In PyTorch3D, the camera view coordinate system is defined such that the origin is at the projection point of the camera, the x axis points to the left, the y axis points upward, and the z axis points to the front.
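
One way to obtain this camera view coordinate system in PyTorch3D is through look_at_view_transform, which returns the rotation R and translation T mapping world coordinates into the camera view frame; the distance and angles below are arbitrary illustration values:

import torch
from pytorch3d.renderer import FoVPerspectiveCameras, look_at_view_transform

# A camera placed 3 units from the world origin, elevated 10 degrees and
# rotated 30 degrees in azimuth (values chosen only for illustration).
R, T = look_at_view_transform(dist=3.0, elev=10.0, azim=30.0)
cameras = FoVPerspectiveCameras(R=R, T=T)

# Map points from world coordinates into the camera view coordinate system.
world_to_view = cameras.get_world_to_view_transform()
points_world = torch.tensor([[[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]]])  # (N=1, P=2, 3)
points_view = world_to_view.transform_points(points_world)
print(points_view.shape)  # torch.Size([1, 2, 3])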

Figure 1.4 – The NDC coordinate system, in which the volume is confined to the ranges that the camera can render

The normalized device coordinate (NDC) space confines the volume that a camera can render. The x coordinate values in NDC space range from -1 to +1, as do the y coordinate values. The z coordinate values range from znear to zfar, where znear is the nearest depth and zfar is the farthest depth. Any object outside this znear to zfar range is not rendered by the camera.
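
As a sketch of how this looks in code, an FoV camera can be given an explicit znear/zfar range, and its transform_points method applies the full world-to-NDC projection; the clipping range and point values below are assumptions for illustration:

import torch
from pytorch3d.renderer import FoVPerspectiveCameras, look_at_view_transform

# Camera with an explicit near/far clipping range (values are illustrative).
R, T = look_at_view_transform(dist=3.0, elev=10.0, azim=30.0)
cameras = FoVPerspectiveCameras(znear=0.1, zfar=10.0, fov=60.0, R=R, T=T)

# transform_points applies the full world -> view -> NDC projection.
points_world = torch.tensor([[[0.0, 0.0, 0.0], [0.2, 0.1, -0.3]]])
points_ndc = cameras.transform_points(points_world)
print(points_ndc)  # x and y of points inside the view volume fall within [-1, 1]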

Finally, the screen coordinate system is defined in terms of how the rendered images are shown on our screens. In this coordinate system, the x coordinate indexes the pixel columns, the y coordinate indexes the pixel rows, and the z coordinate corresponds to the depth of the object.

To render a 3D object correctly on our 2D screens, we need to switch between these coordinate systems. Luckily, these conversions can be carried out easily using the PyTorch3D camera models. We will discuss coordinate system conversion in more detail after we discuss the camera models.
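
As a rough sketch of one such conversion, the camera object can also project world-space points all the way to screen (pixel) coordinates; the 256 x 256 image size is an arbitrary illustration value, and depending on the PyTorch3D version it may instead be set on the camera itself:

import torch
from pytorch3d.renderer import FoVPerspectiveCameras, look_at_view_transform

R, T = look_at_view_transform(dist=3.0, elev=10.0, azim=30.0)
cameras = FoVPerspectiveCameras(R=R, T=T)

# Project world-space points to screen coordinates for a hypothetical
# 256 x 256 image; x indexes pixel columns and y indexes pixel rows.
points_world = torch.tensor([[[0.0, 0.0, 0.0], [0.2, 0.1, -0.3]]])
points_screen = cameras.transform_points_screen(points_world, image_size=(256, 256))
print(points_screen.shape)  # torch.Size([1, 2, 3])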
