jactorch

Jacinle PyTorch functions and modules.

Contexts

ForwardContext

A context manager that serves as a global variable for the forward pass.

NNEnv

A basic environment that wraps around a nn.Module.

TrainerEnv

An environment that wraps around a nn.Module and an optimizer for training.

get_forward_context()

Get the current forward context.
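
A minimal usage sketch based only on the names above; the constructor arguments of ForwardContext (none shown here) and its use as a context manager are assumptions:

    import torch.nn as nn
    import jactorch

    model = nn.Linear(4, 2)

    # Assumption: ForwardContext is entered as a context manager, and
    # get_forward_context() returns the active context during the forward pass.
    with jactorch.ForwardContext() as ctx:
        assert jactorch.get_forward_context() is ctx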

IO

state_dict(model[, include, exclude, cpu])

Get a state dict representation of the model.

load_state_dict(model, state_dict[, ...])

Load a state dict into the model.

load_weights(model, filename[, include, ...])

Load weights from a file.
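
A hedged usage sketch of the IO helpers; the checkpoint path is hypothetical, and the semantics of the include/exclude filters are assumptions:

    import torch.nn as nn
    import jactorch

    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

    sd = jactorch.state_dict(model, cpu=True)    # snapshot, moved to CPU
    jactorch.load_state_dict(model, sd)          # restore the snapshot
    # jactorch.load_weights(model, 'weights.pth')  # hypothetical checkpoint file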

Parameter Filtering and Grouping

find_parameters(module, pattern[, return_names])

Find parameters in a module with a pattern.

filter_parameters(params, pattern[, ...])

Filter parameters with a pattern.

exclude_parameters(params, exclude)

Exclude parameters from a list of parameters.

compose_param_groups(model, *groups[, ...])

Compose the param_groups argument for torch optimizers.

param_group(pattern, **kwargs)

A helper function used for human-friendly declaration of param groups.

mark_freezed(model)

Freeze all parameters in a model.

mark_unfreezed(model)

Unfreeze all parameters in a model.

detach_modules(*modules)

A context manager that temporarily detaches all parameters in the given modules.
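
A hedged sketch of declaring per-group optimizer settings with param_group and compose_param_groups; the pattern syntax and the default-kwarg behavior shown here are assumptions:

    import torch
    import torch.nn as nn
    import jactorch

    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

    # Assumption: patterns match parameter names, and parameters not matched
    # by any group fall back to the keyword defaults.
    param_groups = jactorch.compose_param_groups(
        model,
        jactorch.param_group('0.*', lr=1e-4),  # hypothetical pattern: first layer
        lr=1e-3,
    )
    optimizer = torch.optim.Adam(param_groups)

    jactorch.mark_freezed(model)    # requires_grad=False for all parameters
    jactorch.mark_unfreezed(model)  # requires_grad=True again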

Data Structures and Helpful Functions

All of the following functions accept an arbitrary Python data structure as input (e.g., tuples, lists, dictionaries). They recursively traverse the data structure and apply the function to each element.

async_copy_to(obj, dev[, main_stream])

Copy an object to a specific device asynchronously.

as_tensor(obj)

Convert elements in a Python data structure to tensors.

as_variable(obj)

DEPRECATED(Jiayuan Mao): as_variable has been deprecated and will be removed by 10/23/2018; please use as_tensor instead.

as_numpy(obj)

Convert elements in a Python data structure to numpy arrays.

as_float(obj)

Convert elements in a Python data structure to Python floating-point scalars.

as_cuda(obj)

Move elements in a Python data structure to the GPU.

as_cpu(obj)

Move elements in a Python data structure to CPU.

as_detached(obj[, clone])

Detach elements in a Python data structure.
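
A small sketch of the recursive-traversal behavior described above:

    import torch
    import jactorch

    batch = {'x': torch.randn(2, 3), 'ys': [torch.tensor(1.0), torch.tensor(2.0)]}

    arrays = jactorch.as_numpy(batch)     # same nesting, numpy.ndarray leaves
    tensors = jactorch.as_tensor(arrays)  # back to torch tensors
    loss = jactorch.as_float(torch.tensor(0.25))  # plain Python float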

Arithmetics

atanh(x[, eps])

Computes \(\mathrm{arc}\tanh(x)\).

logit(x[, eps])

Computes \(\mathrm{logit}(x)\).

log_sigmoid(x)

Computes \(\log \sigma(x)\).

tstat(x)

Tensor stats: produces a summary of the tensor, including shape, min, max, mean, and std.

soft_amax(x, dim[, tau, keepdim])

Compute a soft maximum over the given dimension.

soft_amin(x, dim[, tau, keepdim])

Compute a soft minimum over the given dimension.
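
A plain-PyTorch sketch of one common definition of a soft maximum, a softmax-weighted average with temperature tau; whether soft_amax uses exactly this form is an assumption:

    import torch

    def soft_amax_sketch(x, dim, tau=1.0, keepdim=False):
        # The softmax weights concentrate on the argmax as tau -> 0.
        w = torch.softmax(x / tau, dim=dim)
        return (w * x).sum(dim=dim, keepdim=keepdim)

    # soft_amin follows by negation: -soft_amax_sketch(-x, dim, tau).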

Clustering

kmeans(data, nr_clusters[, nr_iterations, ...])

Perform k-means clustering over the input data.

Gradient

grad_multi(input, grad_multi)

Scale the gradient with respect to the input.

zero_grad(v)

Zero the gradients of the variable.

no_grad_func(func)

A decorator to disable gradient calculation for a function.
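
A plain-PyTorch sketch of the decorator pattern behind no_grad_func (not the library's source):

    import functools
    import torch

    def no_grad_func_sketch(func):
        @functools.wraps(func)
        def wrapped(*args, **kwargs):
            with torch.no_grad():  # disable gradient tracking inside the call
                return func(*args, **kwargs)
        return wrapped

    @no_grad_func_sketch
    def predict(model, x):
        return model(x).argmax(dim=-1)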

Indexing

batched_index_select(tensor, batched_indices)

Select elements from tensor according to batched_indices.

index_nonzero(tensor, mask)

Iteratively generates the values of tensor where mask is nonzero.

index_one_hot(tensor, dim, index)

Equivalent to tensor[:, :, index, :].

index_one_hot_ellipsis(tensor, dim, index)

Equivalent to tensor[:, :, index, ...].

inverse_permutation(perm)

Invert a permutation.

leftmost_nonzero(tensor, dim)

Return the index of the leftmost nonzero element along the dim axis.

one_hot(index, nr_classes)

Convert a list of class labels into one-hot representation.

one_hot_dim(index, nr_classes, dim)

Convert a tensor of class labels into one-hot representation by adding a new dimension indexed at dim.

one_hot_nd(index, nr_classes)

Convert a tensor of class labels into one-hot representation.

reversed(x[, dim])

Reverse a tensor along the given dimension.

rightmost_nonzero(tensor, dim)

Return the index of the rightmost nonzero element along the dim axis.

set_index_one_hot_(tensor, dim, index, value)

Equivalent to tensor[:, :, index, :, :] = value.

batch

patch_torch_index()

tindex

findex

vindex

oindex

btindex

bfindex

bvindex

boindex

batched_index_int(tensor, index, dim)

batched_index_slice(tensor, start, stop, ...)

batched_index_vector_dim(tensor, indices, ...)

batched_index_vectors(tensor, indices, ...)
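
Closing out the indexing helpers, a plain-PyTorch sketch of the documented one_hot behavior above; jactorch's exact dtype and device handling are assumptions:

    import torch

    def one_hot_sketch(index, nr_classes):
        # index: (N,) integer class labels -> (N, nr_classes) one-hot floats.
        out = torch.zeros(index.shape[0], nr_classes, device=index.device)
        return out.scatter_(1, index.unsqueeze(1), 1.0)

    print(one_hot_sketch(torch.tensor([0, 2, 1]), 3))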

Kernel

cosine_distance(f_lookup, f)

Cosine distance kernel.

dot(f_lookup, f)

Dot product kernel, essentially a cosine distance kernel without normalization.

inverse_distance(f_lookup, f[, p, eps])

Inverse distance kernel.
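
A plain-PyTorch sketch of a cosine kernel between a lookup bank and query features; the exact output convention (similarity vs. distance, argument order) is an assumption:

    import torch
    import torch.nn.functional as F

    def cosine_kernel_sketch(f_lookup, f):
        # f_lookup: (M, D) bank, f: (N, D) queries -> (N, M) pairwise cosines.
        a = F.normalize(f, p=2, dim=-1)
        b = F.normalize(f_lookup, p=2, dim=-1)
        return a @ b.t()

    # The documented dot kernel is the same product without the normalization.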

Linear Algebra

normalize(tensor[, p, dim, eps])

Normalize the input along a specific dimension.

Log-Linear

batch_logmatmulexp(mat1, mat2[, use_mm])

Computes torch.bmm(mat1.exp(), mat2.exp()).log() in a numerically stable way.

log1mexp(x)

Computes log(1 - exp(x)) in a numerically stable way.

logaddexp(x, y)

Computes log(exp(x) + exp(y)) in a numerically stable way.

logits_and(x, y)

Computes logit(sigmoid(x) * sigmoid(y)) in a numerically stable way.

logits_or(x, y)

Computes logit(sigmoid(x) + sigmoid(y) - sigmoid(x) * sigmoid(y)) in a numerically stable way.

logmatmulexp(mat1, mat2[, use_mm])

Computes torch.matmul(mat1.exp(), mat2.exp()).log() in a numerically stable way.

logsumexp(tensor[, dim, keepdim])

Computes tensor.exp().sum(dim, keepdim).log() in a numerically stable way.
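
A sketch of the standard max-subtraction identity behind a numerically stable logmatmulexp; it reproduces the documented computation torch.matmul(mat1.exp(), mat2.exp()).log() without overflow:

    import torch

    def logmatmulexp_sketch(mat1, mat2):
        # mat1: (N, K), mat2: (K, M). Factor out per-row / per-column maxima
        # so that every exponentiated entry is <= 1.
        m1 = mat1.max(dim=-1, keepdim=True).values  # (N, 1)
        m2 = mat2.max(dim=-2, keepdim=True).values  # (1, M)
        out = torch.matmul((mat1 - m1).exp(), (mat2 - m2).exp()).log()
        return out + m1 + m2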

Masking

length2mask(lengths, max_length)

Convert a length vector to a mask.

length_masked_reversed(tensor, lengths[, dim])

Reverse a padded sequence tensor along the given dimension.

length_masked_softmax(logits, lengths[, ...])

Compute the softmax of the tensor while ignoring some masked elements.

mask_meshgrid(mask[, target_dims])

Create an N-dimensional meshgrid-like mask, where output[i, j, k, ...] = mask[i] * mask[j] * mask[k] * ....

masked_average(tensor, mask[, eps])

Compute the average of the tensor while ignoring some masked elements.

masked_softmax(logits[, mask, dim, eps, ninf])

Compute the softmax of the tensor while ignoring some masked elements.
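
A plain-PyTorch sketch of the length2mask and masked_softmax semantics described above; the default values are assumptions:

    import torch

    def length2mask_sketch(lengths, max_length):
        # mask[i, j] = 1 iff j < lengths[i].
        rng = torch.arange(max_length, device=lengths.device)
        return (rng.unsqueeze(0) < lengths.unsqueeze(1)).float()

    def masked_softmax_sketch(logits, mask, dim=-1, ninf=-1e9):
        # Push masked-out logits to ninf so they get (near-)zero probability.
        return torch.softmax(logits.masked_fill(mask == 0, ninf), dim=dim)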

Ranges

meshgrid(input1[, input2, dim])

Perform np.meshgrid along the given dimension.

meshgrid_exclude_self(input[, dim])

Exclude self from the grid.

Probability

check_prob_normalization(p[, dim, atol])

Check whether the probability tensor is normalized along a specific dimension.

normalize_prob(a[, dim])

Normalize the tensor with the L1 norm along the specified dimension.
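
A one-line sketch of the L1 normalization described for normalize_prob (the eps guard is an assumption):

    import torch

    def normalize_prob_sketch(a, dim=-1, eps=1e-12):
        return a / a.sum(dim=dim, keepdim=True).clamp_min(eps)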

Quantization

quantize(x)

Quantize a tensor to binary values: (x > 0.5).float().

randomized_quantize(x)

Quantize a tensor to binary values: (rand() > x).float().
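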

Sampling

sample_bernoulli(x)

Sample from a Bernoulli distribution.

sample_multinomial(x[, dim])

Sample from a multinomial distribution.

Shape

add_dim(tensor, dim, size)

Add a dimension at dim with size size.

add_dim_as_except(tensor, target, *excepts)

Add singleton dimensions to the input tensor so that it has the same number of dimensions as target, skipping the dimensions listed in excepts.

broadcast(tensor, dim, size)

Broadcast a specific dimension size times.

broadcast_as_except(tensor, target, *excepts)

Add and expand dimensions of the input tensor so that it can be broadcast to the shape of target, skipping the dimensions listed in excepts.

concat_shape(*shapes)

Concatenate shapes into a tuple.

flatten(tensor)

Flatten the tensor.

flatten2(tensor)

Flatten the tensor while keeping the first (batch) dimension.

force_view(tensor, *shapes)

Perform a view, making a contiguous copy first when necessary.

move_dim(tensor, dim, dest)

Move a specific dimension to a designated dimension.

repeat(tensor, dim, count)

Repeat a specific dimension count times.

repeat_times(tensor, dim, repeats)

Repeat each element along a specific dimension repeats times.
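
A plain-PyTorch sketch contrasting the documented repeat (tile-like) and repeat_times (element-wise) behaviors; jactorch's own implementations generalize these to a chosen dim:

    import torch

    x = torch.tensor([1, 2, 3])
    print(x.repeat(2))             # tile the whole dim: 1 2 3 1 2 3
    print(x.repeat_interleave(2))  # repeat each element: 1 1 2 2 3 3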

Submodules