sdf_renderer

PyTorch interface for the differentiable renderer.

This module provides two functions:
  • render_depth: numpy-based CPU implementation (not recommended; for development only)
  • render_depth_gpu: CUDA implementation (fast)

Camera

Pinhole camera parameters.

This class allows conversion between different pixel conventions, i.e., pixel center at (0.5, 0.5) (as common in computer graphics) and at (0, 0) (as common in computer vision).

Source code in sdfest/differentiable_renderer/sdf_renderer.py
class Camera:
    """Pinhole camera parameters.

    This class allows conversion between different pixel conventions, i.e., pixel
    center at (0.5, 0.5) (as common in computer graphics) and at (0, 0) (as common
    in computer vision).
    """

    def __init__(
        self,
        width: int,
        height: int,
        fx: float,
        fy: float,
        cx: float,
        cy: float,
        s: float = 0.0,
        pixel_center: float = 0.0,
    ):
        """Initialize camera parameters.

        Note that the principal point is only fully defined in combination with
        pixel_center.

        The pixel_center defines the relation between continuous image plane
        coordinates and discrete pixel coordinates.

        A discrete image coordinate (x, y) will correspond to the continuous
        image coordinate (x + pixel_center, y + pixel_center). Normally pixel_center
        will be either 0 or 0.5. During calibration this depends on the convention
        of the point features used to compute the calibration matrix.

        Note that if pixel_center == 0, the continuous coordinate interval
        corresponding to a pixel is [x-0.5, x+0.5). I.e., proper rounding has to be
        done to convert from a continuous coordinate to the corresponding discrete
        coordinate.

        For pixel_center == 0.5, the continuous coordinate interval corresponding
        to a pixel is [x, x+1). I.e., floor is sufficient to convert from a
        continuous coordinate to the corresponding discrete coordinate.

        Args:
            width: Number of pixels in horizontal direction.
            height: Number of pixels in vertical direction.
            fx: Horizontal focal length.
            fy: Vertical focal length.
            cx: Principal point x-coordinate.
            cy: Principal point y-coordinate.
            s: Skew.
            pixel_center: The center offset for the provided principal point.
        """
        # focal length
        self.fx = fx
        self.fy = fy

        # principal point
        self.cx = cx
        self.cy = cy

        self.pixel_center = pixel_center

        # skew
        self.s = s

        # image dimensions
        self.width = width
        self.height = height

    def get_o3d_pinhole_camera_parameters(self) -> o3d.camera.PinholeCameraParameters:
        """Convert camera to Open3D pinhole camera parameters.

        Open3D camera is at (0,0,0) looking along positive z axis (i.e., positive z
        values are in front of camera). Open3D expects camera with pixel_center = 0
        and does not support skew.

        Returns:
            The pinhole camera parameters.
        """
        fx, fy, cx, cy, _ = self.get_pinhole_camera_parameters(0)
        params = o3d.camera.PinholeCameraParameters()
        params.intrinsic.set_intrinsics(self.width, self.height, fx, fy, cx, cy)
        params.extrinsic = np.array(
            [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
        )
        return params

    def get_pinhole_camera_parameters(self, pixel_center: float) -> Tuple:
        """Convert camera to general camera parameters.

        Args:
            pixel_center:
                At which ratio of a square the pixel center should be for the resulting
                parameters. Typically 0 or 0.5. See class documentation for more info.
        Returns:
            - fx, fy: The horizontal and vertical focal length
            - cx, cy:
                The position of the principal point in continuous image plane
                coordinates considering the provided pixel center and the pixel center
                specified during the construction.
            - s: The skew.
        """
        cx_corrected = self.cx - self.pixel_center + pixel_center
        cy_corrected = self.cy - self.pixel_center + pixel_center
        return self.fx, self.fy, cx_corrected, cy_corrected, self.s
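The intrinsics stored by Camera follow the standard pinhole model; a minimal sketch of the projection they define (illustrative helper, not part of this module; assumes the computer-vision convention with z > 0 in front of the camera):

```python
def project_point(x, y, z, fx, fy, cx, cy, s=0.0):
    """Project a 3D camera-frame point to continuous image-plane coordinates."""
    u = fx * x / z + s * y / z + cx  # horizontal: focal scaling, skew, principal point
    v = fy * y / z + cy              # vertical: focal scaling, principal point
    return u, v

# A point on the optical axis projects exactly to the principal point.
u, v = project_point(0.0, 0.0, 2.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
# → (320.0, 240.0)
```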

__init__(width, height, fx, fy, cx, cy, s=0.0, pixel_center=0.0)

Initialize camera parameters.

Note that the principal point is only fully defined in combination with pixel_center.

The pixel_center defines the relation between continuous image plane coordinates and discrete pixel coordinates.

A discrete image coordinate (x, y) will correspond to the continuous image coordinate (x + pixel_center, y + pixel_center). Normally pixel_center will be either 0 or 0.5. During calibration this depends on the convention of the point features used to compute the calibration matrix.

Note that if pixel_center == 0, the continuous coordinate interval corresponding to a pixel is [x-0.5, x+0.5). I.e., proper rounding has to be done to convert from a continuous coordinate to the corresponding discrete coordinate.

For pixel_center == 0.5, the continuous coordinate interval corresponding to a pixel is [x, x+1). I.e., floor is sufficient to convert from a continuous coordinate to the corresponding discrete coordinate.
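The two conversion rules can be checked numerically; a small sketch in plain Python (illustrative helpers, not part of sdf_renderer):

```python
import math

def continuous_to_discrete_center0(u):
    # pixel_center == 0: pixel x covers [x - 0.5, x + 0.5), so proper rounding
    # is needed (written as floor(u + 0.5); valid for non-negative u).
    return int(u + 0.5)

def continuous_to_discrete_center05(u):
    # pixel_center == 0.5: pixel x covers [x, x + 1), so floor is sufficient.
    return math.floor(u)
```

For example, the continuous coordinate 3.49 falls in pixel 3 under the center-0 convention, while 3.99 still falls in pixel 3 under the center-0.5 convention.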

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| width | int | Number of pixels in horizontal direction. | required |
| height | int | Number of pixels in vertical direction. | required |
| fx | float | Horizontal focal length. | required |
| fy | float | Vertical focal length. | required |
| cx | float | Principal point x-coordinate. | required |
| cy | float | Principal point y-coordinate. | required |
| s | float | Skew. | 0.0 |
| pixel_center | float | The center offset for the provided principal point. | 0.0 |
Source code in sdfest/differentiable_renderer/sdf_renderer.py
def __init__(
    self,
    width: int,
    height: int,
    fx: float,
    fy: float,
    cx: float,
    cy: float,
    s: float = 0.0,
    pixel_center: float = 0.0,
):
    """Initialize camera parameters.

    Note that the principal point is only fully defined in combination with
    pixel_center.

    The pixel_center defines the relation between continuous image plane
    coordinates and discrete pixel coordinates.

    A discrete image coordinate (x, y) will correspond to the continuous
    image coordinate (x + pixel_center, y + pixel_center). Normally pixel_center
    will be either 0 or 0.5. During calibration this depends on the convention
    of the point features used to compute the calibration matrix.

    Note that if pixel_center == 0, the continuous coordinate interval
    corresponding to a pixel is [x-0.5, x+0.5). I.e., proper rounding has to be
    done to convert from a continuous coordinate to the corresponding discrete
    coordinate.

    For pixel_center == 0.5, the continuous coordinate interval corresponding
    to a pixel is [x, x+1). I.e., floor is sufficient to convert from a
    continuous coordinate to the corresponding discrete coordinate.

    Args:
        width: Number of pixels in horizontal direction.
        height: Number of pixels in vertical direction.
        fx: Horizontal focal length.
        fy: Vertical focal length.
        cx: Principal point x-coordinate.
        cy: Principal point y-coordinate.
        s: Skew.
        pixel_center: The center offset for the provided principal point.
    """
    # focal length
    self.fx = fx
    self.fy = fy

    # principal point
    self.cx = cx
    self.cy = cy

    self.pixel_center = pixel_center

    # skew
    self.s = s

    # image dimensions
    self.width = width
    self.height = height

get_o3d_pinhole_camera_parameters()

Convert camera to Open3D pinhole camera parameters.

Open3D camera is at (0,0,0) looking along positive z axis (i.e., positive z values are in front of camera). Open3D expects camera with pixel_center = 0 and does not support skew.

Returns:

| Type | Description |
| --- | --- |
| PinholeCameraParameters | The pinhole camera parameters. |

Source code in sdfest/differentiable_renderer/sdf_renderer.py
def get_o3d_pinhole_camera_parameters(self) -> o3d.camera.PinholeCameraParameters:
    """Convert camera to Open3D pinhole camera parameters.

    Open3D camera is at (0,0,0) looking along positive z axis (i.e., positive z
    values are in front of camera). Open3D expects camera with pixel_center = 0
    and does not support skew.

    Returns:
        The pinhole camera parameters.
    """
    fx, fy, cx, cy, _ = self.get_pinhole_camera_parameters(0)
    params = o3d.camera.PinholeCameraParameters()
    params.intrinsic.set_intrinsics(self.width, self.height, fx, fy, cx, cy)
    params.extrinsic = np.array(
        [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
    )
    return params

get_pinhole_camera_parameters(pixel_center)

Convert camera to general camera parameters.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| pixel_center | float | At which ratio of a square the pixel center should be for the resulting parameters. Typically 0 or 0.5. See class documentation for more info. | required |

Returns:

- fx, fy: The horizontal and vertical focal lengths.
- cx, cy: The position of the principal point in continuous image-plane coordinates, considering the provided pixel center and the pixel center specified during construction.
- s: The skew.

Source code in sdfest/differentiable_renderer/sdf_renderer.py
def get_pinhole_camera_parameters(self, pixel_center: float) -> Tuple:
    """Convert camera to general camera parameters.

    Args:
        pixel_center:
            At which ratio of a square the pixel center should be for the resulting
            parameters. Typically 0 or 0.5. See class documentation for more info.
    Returns:
        - fx, fy: The horizontal and vertical focal length
        - cx, cy:
            The position of the principal point in continuous image plane
            coordinates considering the provided pixel center and the pixel center
            specified during the construction.
        - s: The skew.
    """
    cx_corrected = self.cx - self.pixel_center + pixel_center
    cy_corrected = self.cy - self.pixel_center + pixel_center
    return self.fx, self.fy, cx_corrected, cy_corrected, self.s
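The conversion in get_pinhole_camera_parameters amounts to a single offset of the principal point; a standalone sketch of that arithmetic (illustrative, mirrors the two correction lines above):

```python
def correct_principal_point(c, stored_pixel_center, requested_pixel_center):
    # Shift a principal-point coordinate from the convention it was stored in
    # (stored_pixel_center) to the requested pixel-center convention.
    return c - stored_pixel_center + requested_pixel_center

# A principal point calibrated with pixel_center == 0.5, expressed in the
# pixel_center == 0 convention, moves by -0.5:
cx = correct_principal_point(320.0, 0.5, 0.0)  # → 319.5
```

Note the conversion is its own inverse: applying it with the two pixel centers swapped recovers the original coordinate.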

SDFRendererFunction

Bases: Function

Renderer function for signed distance fields.

Source code in sdfest/differentiable_renderer/sdf_renderer.py
class SDFRendererFunction(torch.autograd.Function):
    """Renderer function for signed distance fields."""

    @staticmethod
    def forward(
        ctx,
        sdf: torch.Tensor,
        position: torch.Tensor,
        orientation: torch.Tensor,
        inv_scale: torch.Tensor,
        width: Optional[int] = None,
        height: Optional[int] = None,
        fov_deg: Optional[float] = None,
        threshold: Optional[float] = 0.0,
        camera: Optional[Camera] = None,
    ) -> torch.Tensor:
        """Render depth image of a 7-DOF discrete signed distance field.

        The SDF position is assumed to be in the camera frame under OpenGL convention.

        That is, the camera looks along the negative z-axis, with y pointing upwards
        and x to the right. Note that the rendered image will still follow the
        classical computer vision convention, with the first row being up in the
        camera frame.

        This function internally uses numpy and is very slow due to the fully serial
        implementation. It is only for testing purposes. Use the GPU version for
        practical performance.

        Camera can be specified either via the camera parameter, giving the most
        flexibility, or alternatively by providing width, height and fov_deg.

        Args:
            ctx:
                Context object to stash information.
                See https://pytorch.org/docs/stable/notes/extending.html.
            sdf:
                Discrete signed distance field with shape (M, M, M).
                Arbitrary (but uniform) resolutions are supported.
            position:
                The position of the signed distance field origin in the camera frame.
            orientation:
                The orientation of the SDF as a normalized quaternion.
            inv_scale:
                The inverted scale of the SDF. The scale of an SDF is the half-width
                of the full SDF volume.
            width:
                Number of pixels in x direction. Recommended to use camera instead.
            height:
                Number of pixels in y direction. Recommended to use camera instead.
            fov_deg:
                The horizontal field of view (i.e., in x direction).
                Pixels are assumed to be square, i.e., fx=fy, computed based on width
                and fov_deg.
                Recommended to use camera instead.
            threshold:
                The distance threshold at which sphere tracing should be stopped.
                Smaller value will be more accurate, but slower and might potentially
                lead to holes in the rendering for thin structures in the SDF.
                Larger values will be faster, but will overestimate the thickness.

                Should always be positive to guarantee convergence.
            camera:
                Camera parameters (not supported right now).
        Returns:
            The rendered depth image.
        """
        if None not in [width, height, fov_deg] and camera is not None:
            raise ValueError(
                "Only one of width+height+fov_deg or camera may be provided."
            )
        if camera is not None:
            raise NotImplementedError(
                "Only width+height+fov_deg is currently supported for CPU"
            )
        # for simplicity use numpy internally
        ctx.save_for_backward(sdf, position, orientation, inv_scale)
        sdf = sdf.detach().numpy()
        position = position.detach().numpy()
        orientation = orientation.detach().numpy()
        inv_scale = inv_scale.detach().numpy()
        sdf_object = SDFObject(sdf)
        image, derivatives = _render_depth(
            sdf_object,
            width,
            height,
            fov_deg,
            "d",
            threshold,
            position,
            orientation,
            inv_scale,
        )
        ctx.derivatives = derivatives
        return torch.from_numpy(image)

    @staticmethod
    def backward(ctx, inp: torch.Tensor):
        """Compute gradients of inputs with respect to the provided gradients.

        Normally called by PyTorch as part of a call to backward() on a loss.

        Args:
            inp: Gradient of the loss with respect to the rendered depth image.
        Returns:
            Gradients of
                discretized signed distance field, position, orientation, inverted scale
                followed by None for all the non-supported variables passed to forward.
        """
        derivatives = ctx.derivatives
        sdf, pos, quat, inv_s = ctx.saved_tensors
        g_image = inp.numpy()
        g_sdf = g_p = g_q = g_is = g_w = g_h = g_fov = g_thresh = g_camera = None
        g_sdf = torch.zeros_like(sdf)
        g_p = torch.empty_like(pos)
        g_q = torch.empty_like(quat)
        g_is = torch.empty_like(inv_s)
        g_p[0] = torch.tensor(np.sum(derivatives["x"] * g_image))
        g_p[1] = torch.tensor(np.sum(derivatives["y"] * g_image))
        g_p[2] = torch.tensor(np.sum(derivatives["z"] * g_image))
        g_q[0] = torch.tensor(np.sum(derivatives["qx"] * g_image))
        g_q[1] = torch.tensor(np.sum(derivatives["qy"] * g_image))
        g_q[2] = torch.tensor(np.sum(derivatives["qz"] * g_image))
        g_q[3] = torch.tensor(np.sum(derivatives["qw"] * g_image))
        g_is = torch.tensor(np.sum(derivatives["s_inv"] * g_image))
        if "sdf" in derivatives:
            for k, v in derivatives["sdf"].items():
                g_sdf[k] = torch.tensor(np.sum(v * g_image))
        return g_sdf, g_p, g_q, g_is, g_w, g_h, g_fov, g_thresh, g_camera
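For each scalar parameter, the backward pass above reduces to a sum of the per-pixel depth derivative weighted by the upstream gradient; a minimal numpy sketch of that chain rule (illustrative shapes and names, not the module's code):

```python
import numpy as np

# Chain rule used above: dL/dp = sum over pixels of (dL/ddepth) * (ddepth/dp),
# where ddepth/dp is a per-pixel derivative image (stored in ctx.derivatives).
grad_image = np.full((4, 4), 0.5)     # upstream gradient dL/ddepth
ddepth_dx = np.ones((4, 4))           # per-pixel derivative ddepth/dx
g_x = np.sum(ddepth_dx * grad_image)  # scalar gradient dL/dx → 8.0
```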

backward(ctx, inp) staticmethod

Compute gradients of inputs with respect to the provided gradients.

Normally called by PyTorch as part of a call to backward() on a loss.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| inp | Tensor | Gradient of the loss with respect to the rendered depth image. | required |

Returns:

Gradients of the discretized signed distance field, position, orientation, and inverted scale, followed by None for all the non-supported variables passed to forward.

Source code in sdfest/differentiable_renderer/sdf_renderer.py
@staticmethod
def backward(ctx, inp: torch.Tensor):
    """Compute gradients of inputs with respect to the provided gradients.

    Normally called by PyTorch as part of a call to backward() on a loss.

    Args:
        inp: Gradient of the loss with respect to the rendered depth image.
    Returns:
        Gradients of
            discretized signed distance field, position, orientation, inverted scale
            followed by None for all the non-supported variables passed to forward.
    """
    derivatives = ctx.derivatives
    sdf, pos, quat, inv_s = ctx.saved_tensors
    g_image = inp.numpy()
    g_sdf = g_p = g_q = g_is = g_w = g_h = g_fov = g_thresh = g_camera = None
    g_sdf = torch.zeros_like(sdf)
    g_p = torch.empty_like(pos)
    g_q = torch.empty_like(quat)
    g_is = torch.empty_like(inv_s)
    g_p[0] = torch.tensor(np.sum(derivatives["x"] * g_image))
    g_p[1] = torch.tensor(np.sum(derivatives["y"] * g_image))
    g_p[2] = torch.tensor(np.sum(derivatives["z"] * g_image))
    g_q[0] = torch.tensor(np.sum(derivatives["qx"] * g_image))
    g_q[1] = torch.tensor(np.sum(derivatives["qy"] * g_image))
    g_q[2] = torch.tensor(np.sum(derivatives["qz"] * g_image))
    g_q[3] = torch.tensor(np.sum(derivatives["qw"] * g_image))
    g_is = torch.tensor(np.sum(derivatives["s_inv"] * g_image))
    if "sdf" in derivatives:
        for k, v in derivatives["sdf"].items():
            g_sdf[k] = torch.tensor(np.sum(v * g_image))
    return g_sdf, g_p, g_q, g_is, g_w, g_h, g_fov, g_thresh, g_camera

forward(ctx, sdf, position, orientation, inv_scale, width=None, height=None, fov_deg=None, threshold=0.0, camera=None) staticmethod

Render depth image of a 7-DOF discrete signed distance field.

The SDF position is assumed to be in the camera frame under OpenGL convention.

That is, the camera looks along the negative z-axis, with y pointing upwards and x to the right. Note that the rendered image will still follow the classical computer vision convention, with the first row being up in the camera frame.

This function internally uses numpy and is very slow due to the fully serial implementation. It is only for testing purposes. Use the GPU version for practical performance.

Camera can be specified either via the camera parameter, giving the most flexibility, or alternatively by providing width, height and fov_deg.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| ctx | | Context object to stash information. See https://pytorch.org/docs/stable/notes/extending.html. | required |
| sdf | Tensor | Discrete signed distance field with shape (M, M, M). Arbitrary (but uniform) resolutions are supported. | required |
| position | Tensor | The position of the signed distance field origin in the camera frame. | required |
| orientation | Tensor | The orientation of the SDF as a normalized quaternion. | required |
| inv_scale | Tensor | The inverted scale of the SDF. The scale of an SDF is the half-width of the full SDF volume. | required |
| width | Optional[int] | Number of pixels in x direction. Recommended to use camera instead. | None |
| height | Optional[int] | Number of pixels in y direction. Recommended to use camera instead. | None |
| fov_deg | Optional[float] | The horizontal field of view (i.e., in x direction). Pixels are assumed to be square, i.e., fx=fy, computed based on width and fov_deg. Recommended to use camera instead. | None |
| threshold | Optional[float] | The distance threshold at which sphere tracing should be stopped. Smaller values are more accurate, but slower and can lead to holes in the rendering for thin structures in the SDF. Larger values are faster, but overestimate the thickness. Should always be positive to guarantee convergence. | 0.0 |
| camera | Optional[Camera] | Camera parameters (not supported right now). | None |

Returns:

The rendered depth image.

Source code in sdfest/differentiable_renderer/sdf_renderer.py
@staticmethod
def forward(
    ctx,
    sdf: torch.Tensor,
    position: torch.Tensor,
    orientation: torch.Tensor,
    inv_scale: torch.Tensor,
    width: Optional[int] = None,
    height: Optional[int] = None,
    fov_deg: Optional[float] = None,
    threshold: Optional[float] = 0.0,
    camera: Optional[Camera] = None,
) -> torch.Tensor:
    """Render depth image of a 7-DOF discrete signed distance field.

    The SDF position is assumed to be in the camera frame under OpenGL convention.

    That is, the camera looks along the negative z-axis, with y pointing upwards
    and x to the right. Note that the rendered image will still follow the
    classical computer vision convention, with the first row being up in the
    camera frame.

    This function internally uses numpy and is very slow due to the fully serial
    implementation. It is only for testing purposes. Use the GPU version for
    practical performance.

    Camera can be specified either via the camera parameter, giving the most
    flexibility, or alternatively by providing width, height and fov_deg.

    Args:
        ctx:
            Context object to stash information.
            See https://pytorch.org/docs/stable/notes/extending.html.
        sdf:
            Discrete signed distance field with shape (M, M, M).
            Arbitrary (but uniform) resolutions are supported.
        position:
            The position of the signed distance field origin in the camera frame.
        orientation:
            The orientation of the SDF as a normalized quaternion.
        inv_scale:
            The inverted scale of the SDF. The scale of an SDF is the half-width
            of the full SDF volume.
        width:
            Number of pixels in x direction. Recommended to use camera instead.
        height:
            Number of pixels in y direction. Recommended to use camera instead.
        fov_deg:
            The horizontal field of view (i.e., in x direction).
            Pixels are assumed to be square, i.e., fx=fy, computed based on width
            and fov_deg.
            Recommended to use camera instead.
        threshold:
            The distance threshold at which sphere tracing should be stopped.
            Smaller value will be more accurate, but slower and might potentially
            lead to holes in the rendering for thin structures in the SDF.
            Larger values will be faster, but will overestimate the thickness.

            Should always be positive to guarantee convergence.
        camera:
            Camera parameters (not supported right now).
    Returns:
        The rendered depth image.
    """
    if None not in [width, height, fov_deg] and camera is not None:
        raise ValueError(
            "Only one of width+height+fov_deg or camera may be provided."
        )
    if camera is not None:
        raise NotImplementedError(
            "Only width+height+fov_deg is currently supported for CPU"
        )
    # for simplicity use numpy internally
    ctx.save_for_backward(sdf, position, orientation, inv_scale)
    sdf = sdf.detach().numpy()
    position = position.detach().numpy()
    orientation = orientation.detach().numpy()
    inv_scale = inv_scale.detach().numpy()
    sdf_object = SDFObject(sdf)
    image, derivatives = _render_depth(
        sdf_object,
        width,
        height,
        fov_deg,
        "d",
        threshold,
        position,
        orientation,
        inv_scale,
    )
    ctx.derivatives = derivatives
    return torch.from_numpy(image)
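The threshold parameter controls when sphere tracing stops; the technique can be illustrated on an analytic SDF. A minimal sketch with a hypothetical unit-sphere SDF (not the module's discrete-SDF implementation):

```python
import numpy as np

def sphere_sdf(p):
    # Signed distance to a unit sphere centered at the origin.
    return np.linalg.norm(p) - 1.0

def sphere_trace(origin, direction, threshold=1e-4, max_steps=100):
    """March along the ray until the SDF value drops below threshold."""
    t = 0.0
    for _ in range(max_steps):
        d = sphere_sdf(origin + t * direction)
        if d < threshold:
            return t  # hit: depth along the ray
        t += d        # safe step: the SDF bounds the distance to the surface
    return None       # miss

# A ray from (0, 0, 3) toward the origin hits the unit sphere at depth 2.
depth = sphere_trace(np.array([0.0, 0.0, 3.0]), np.array([0.0, 0.0, -1.0]))
```

A larger threshold stops the march earlier, underestimating the hit distance (and thus overestimating thickness); a negative threshold may never be reached, so the loop would exhaust max_steps.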

SDFRendererFunctionGPU

Bases: Function

Renderer function for signed distance fields.

Source code in sdfest/differentiable_renderer/sdf_renderer.py
class SDFRendererFunctionGPU(torch.autograd.Function):
    """Renderer function for signed distance fields."""

    @staticmethod
    def forward(
        ctx,
        sdf: torch.Tensor,
        position: torch.Tensor,
        orientation: torch.Tensor,
        inv_scale: torch.Tensor,
        threshold: Optional[float] = 0.0,
        camera: Optional[Camera] = None,
    ) -> torch.Tensor:
        """Render depth image of a 7-DOF discrete signed distance field on the GPU.

        Also see render_depth_gpu for documentation.

        Args:
            ctx:
                Context object to stash information.
                See https://pytorch.org/docs/stable/notes/extending.html.
            sdf:
                Discrete signed distance field with shape (M, M, M).
                Arbitrary (but uniform) resolutions are supported.
            position:
                The position of the signed distance field origin in the camera frame.
            orientation:
                The orientation of the SDF as a normalized quaternion.
            inv_scale:
                The inverted scale of the SDF. The scale of an SDF is the half-width
                of the full SDF volume.
            threshold:
                The distance threshold at which sphere tracing should be stopped.
                Smaller value will be more accurate, but slower and might potentially
                lead to holes in the rendering for thin structures in the SDF.
                Larger values will be faster, but will overestimate the thickness.

                Should always be positive to guarantee convergence.
            camera:
                Camera parameters.
        Returns:
            The rendered depth image.
        """
        fx, fy, cx, cy, _ = camera.get_pinhole_camera_parameters(0.5)
        (image,) = sdf_renderer_cpp.forward(
            sdf,
            position,
            orientation,
            inv_scale,
            camera.width,
            camera.height,
            cx,
            cy,
            fx,
            fy,
            threshold,
        )
        ctx.save_for_backward(image, sdf, position, orientation, inv_scale)
        ctx.width = camera.width
        ctx.height = camera.height
        ctx.fx = fx
        ctx.fy = fy
        ctx.cx = cx
        ctx.cy = cy
        return image

    @staticmethod
    def backward(ctx, grad_depth_image: torch.Tensor):
        """Compute gradients of inputs with respect to the provided gradients.

        Normally called by PyTorch as part of a call to backward() on a loss.

        Args:
            grad_depth_image: Gradient of the loss with respect to the rendered
                depth image.
        Returns:
            Gradients of
                discretized signed distance field, position, orientation, inverted scale
                followed by None for all the non-supported variables passed to forward.
        """
        g_sdf = g_p = g_q = g_is = g_thresh = g_camera = None
        g_sdf, g_p, g_q, g_is = sdf_renderer_cpp.backward(
            grad_depth_image,
            *ctx.saved_tensors,
            ctx.width,
            ctx.height,
            ctx.cx,
            ctx.cy,
            ctx.fx,
            ctx.fy
        )
        return g_sdf, g_p, g_q, g_is, g_thresh, g_camera

backward(ctx, grad_depth_image) staticmethod

Compute gradients of inputs with respect to the provided gradients.

Normally called by PyTorch as part of a call to backward() on a loss.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| grad_depth_image | Tensor | Gradient of the loss with respect to the rendered depth image. | required |

Returns:

Gradients of the discretized signed distance field, position, orientation, and inverted scale, followed by None for all the non-supported variables passed to forward.

Source code in sdfest/differentiable_renderer/sdf_renderer.py
@staticmethod
def backward(ctx, grad_depth_image: torch.Tensor):
    """Compute gradients of inputs with respect to the provided gradients.

    Normally called by PyTorch as part of a call to backward() on a loss.

    Args:
        grad_depth_image: Gradient of the loss with respect to the rendered
            depth image.
    Returns:
        Gradients of
            discretized signed distance field, position, orientation, inverted scale
            followed by None for all the non-supported variables passed to forward.
    """
    g_sdf = g_p = g_q = g_is = g_thresh = g_camera = None
    g_sdf, g_p, g_q, g_is = sdf_renderer_cpp.backward(
        grad_depth_image,
        *ctx.saved_tensors,
        ctx.width,
        ctx.height,
        ctx.cx,
        ctx.cy,
        ctx.fx,
        ctx.fy
    )
    return g_sdf, g_p, g_q, g_is, g_thresh, g_camera

forward(ctx, sdf, position, orientation, inv_scale, threshold=0.0, camera=None) staticmethod

Render depth image of a 7-DOF discrete signed distance field on the GPU.

Also see render_depth_gpu for documentation.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| ctx | | Context object to stash information. See https://pytorch.org/docs/stable/notes/extending.html. | required |
| sdf | Tensor | Discrete signed distance field with shape (M, M, M). Arbitrary (but uniform) resolutions are supported. | required |
| position | Tensor | The position of the signed distance field origin in the camera frame. | required |
| orientation | Tensor | The orientation of the SDF as a normalized quaternion. | required |
| inv_scale | Tensor | The inverted scale of the SDF. The scale of an SDF is the half-width of the full SDF volume. | required |
| threshold | Optional[float] | The distance threshold at which sphere tracing should be stopped. Smaller values are more accurate, but slower and can lead to holes in the rendering for thin structures in the SDF. Larger values are faster, but overestimate the thickness. Should always be positive to guarantee convergence. | 0.0 |
| camera | Optional[Camera] | Camera parameters. | None |

Returns:

The rendered depth image.

Source code in sdfest/differentiable_renderer/sdf_renderer.py
@staticmethod
def forward(
    ctx,
    sdf: torch.Tensor,
    position: torch.Tensor,
    orientation: torch.Tensor,
    inv_scale: torch.Tensor,
    threshold: Optional[float] = 0.0,
    camera: Optional[Camera] = None,
) -> torch.Tensor:
    """Render depth image of a 7-DOF discrete signed distance field on the GPU.

    Also see render_depth_gpu for documentation.

    Args:
        ctx:
            Context object to stash information.
            See https://pytorch.org/docs/stable/notes/extending.html.
        sdf:
            Discrete signed distance field with shape (M, M, M).
            Arbitrary (but uniform) resolutions are supported.
        position:
            The position of the signed distance field origin in the camera frame.
        orientation:
            The orientation of the SDF as a normalized quaternion.
        inv_scale:
            The inverted scale of the SDF. The scale of an SDF is the
            half-width of the full SDF volume.
        threshold:
            The distance threshold at which sphere tracing should be stopped.
            Smaller values will be more accurate, but slower, and might lead
            to holes in the rendering for thin structures in the SDF. Larger
            values will be faster, but will overestimate the thickness.

            Should always be positive to guarantee convergence.
        camera:
            Camera parameters.
    Returns:
        The rendered depth image.
    """
    fx, fy, cx, cy, _ = camera.get_pinhole_camera_parameters(0.5)
    (image,) = sdf_renderer_cpp.forward(
        sdf,
        position,
        orientation,
        inv_scale,
        camera.width,
        camera.height,
        cx,
        cy,
        fx,
        fy,
        threshold,
    )
    ctx.save_for_backward(image, sdf, position, orientation, inv_scale)
    ctx.width = camera.width
    ctx.height = camera.height
    ctx.fx = fx
    ctx.fy = fy
    ctx.cx = cx
    ctx.cy = cy
    return image

render_depth_gpu(sdf, position, orientation, inv_scale, width=None, height=None, fov_deg=None, threshold=0.0, camera=None)

Render depth image of a 7-DOF discrete signed distance field on the GPU.

The SDF position is assumed to be in the camera frame under OpenGL convention.

That is, the camera looks along the negative z-axis, with y pointing upwards and x to the right. Note that the rendered image will still follow the classical computer vision convention of the first row being up in the camera frame.

The camera can be specified either via the camera parameter (giving the most flexibility) or alternatively by providing width, height, and fov_deg.

All provided tensors must reside on the GPU.

Parameters:

    sdf (Tensor):
        Discrete signed distance field with shape (M, M, M). Arbitrary (but
        uniform) resolutions are supported. (required)
    position (Tensor):
        The position of the signed distance field origin in the camera
        frame. (required)
    orientation (Tensor):
        The orientation of the SDF as a normalized quaternion. (required)
    inv_scale (Tensor):
        The inverted scale of the SDF. The scale of an SDF is the half-width
        of the full SDF volume. (required)
    width (Optional[int]):
        Number of pixels in x direction. It is recommended to use camera
        instead. (default: None)
    height (Optional[int]):
        Number of pixels in y direction. It is recommended to use camera
        instead. (default: None)
    fov_deg (Optional[float]):
        The horizontal field of view (i.e., in x direction). Pixels are
        assumed to be square, i.e., fx=fy, computed based on width and
        fov_deg. It is recommended to use camera instead. (default: None)
    threshold (Optional[float]):
        The distance threshold at which sphere tracing should be stopped.
        Smaller values will be more accurate, but slower, and might lead to
        holes in the rendering for thin structures in the SDF. Larger values
        will be faster, but will overestimate the thickness. Should always
        be positive to guarantee convergence. (default: 0.0)
    camera (Optional[Camera]):
        Camera parameters. (default: None)

Returns: The rendered depth image.
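When width, height, and fov_deg are given instead of a camera, the focal length follows from the horizontal field of view as f = (width / 2) / tan(fov_deg / 2), with square pixels (fx = fy). A quick sanity check of that relation:

```python
import math


def focal_from_fov(width: int, fov_deg: float) -> float:
    """Focal length in pixels for a horizontal field of view (square pixels)."""
    return width / math.tan(math.radians(fov_deg) / 2.0) / 2.0


# With a 90 degree horizontal FOV, tan(45 deg) == 1, so f == width / 2:
f = focal_from_fov(640, 90.0)  # ≈ 320.0
```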

Source code in sdfest/differentiable_renderer/sdf_renderer.py
def render_depth_gpu(
    sdf: torch.Tensor,
    position: torch.Tensor,
    orientation: torch.Tensor,
    inv_scale: torch.Tensor,
    width: Optional[int] = None,
    height: Optional[int] = None,
    fov_deg: Optional[float] = None,
    threshold: Optional[float] = 0.0,
    camera: Optional[Camera] = None,
):
    """Render depth image of a 7-DOF discrete signed distance field on the GPU.

    The SDF position is assumed to be in the camera frame under OpenGL convention.

    That is, the camera looks along the negative z-axis, with y pointing
    upwards and x to the right. Note that the rendered image will still
    follow the classical computer vision convention of the first row being
    up in the camera frame.

    The camera can be specified either via the camera parameter (giving the
    most flexibility) or alternatively by providing width, height, and
    fov_deg.

    All provided tensors must reside on the GPU.

    Args:
        sdf:
            Discrete signed distance field with shape (M, M, M).
            Arbitrary (but uniform) resolutions are supported.
        position:
            The position of the signed distance field origin in the camera frame.
        orientation:
            The orientation of the SDF as a normalized quaternion.
        inv_scale:
            The inverted scale of the SDF. The scale of an SDF is the
            half-width of the full SDF volume.
        width:
            Number of pixels in x direction. Recommended to use camera instead.
        height:
            Number of pixels in y direction. Recommended to use camera instead.
        fov_deg:
            The horizontal field of view (i.e., in x direction).
            Pixels are assumed to be square, i.e., fx=fy, computed based on width
            and fov_deg.
            Recommended to use camera instead.
        threshold:
            The distance threshold at which sphere tracing should be stopped.
            Smaller values will be more accurate, but slower, and might lead
            to holes in the rendering for thin structures in the SDF. Larger
            values will be faster, but will overestimate the thickness.

            Should always be positive to guarantee convergence.
        camera:
            Camera parameters.
    Returns:
        The rendered depth image.
    """
    if (camera is None) == (None in [width, height, fov_deg]):
        raise ValueError("Either width+height+fov_deg or camera must be provided.")
    if camera is None:
        f = width / math.tan(fov_deg * math.pi / 180.0 / 2.0) / 2
        camera = Camera(width, height, f, f, width / 2, height / 2, pixel_center=0.5)

    return SDFRendererFunctionGPU.apply(
        sdf, position, orientation, inv_scale, threshold, camera
    )
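A hypothetical end-to-end call might look as follows. The SDF here is an analytic sphere sampled on a 64^3 grid, and the object is placed in front of the camera at negative z (OpenGL convention). The quaternion layout (x, y, z, w) and all chosen values are illustrative assumptions, and the render itself requires a CUDA device:

```python
import torch

# Sample a sphere of radius 0.8 (in normalized SDF coordinates) on a 64^3 grid.
res = 64
grid = torch.linspace(-1.0, 1.0, res)
zz, yy, xx = torch.meshgrid(grid, grid, grid, indexing="ij")
sdf_cpu = torch.sqrt(xx**2 + yy**2 + zz**2) - 0.8

if torch.cuda.is_available():
    from sdfest.differentiable_renderer.sdf_renderer import render_depth_gpu

    sdf = sdf_cpu.cuda()
    # 0.5 m in front of the camera (negative z, OpenGL convention).
    position = torch.tensor([0.0, 0.0, -0.5], device="cuda", requires_grad=True)
    # Assumed identity quaternion layout (x, y, z, w).
    orientation = torch.tensor([0.0, 0.0, 0.0, 1.0], device="cuda")
    inv_scale = torch.tensor(10.0, device="cuda")  # scale (half-width) = 0.1 m

    depth = render_depth_gpu(
        sdf, position, orientation, inv_scale,
        width=640, height=480, fov_deg=90.0, threshold=1e-3,
    )
    depth.sum().backward()  # gradients flow back to position
```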