Utilities

Conversion and promotion

The following methods generalize the ideas of conversion and promotion to non-simple types. The idea is that many composite types (including arrays) are built on a specific simple type. For example, one could have a container built on a single real type:

struct MyObj{T}
    a::T
    b::T
    c::Vector{T}
end

It might then be useful to have procedures that can convert a MyObj{Int16} to, say, a MyObj{Float32}; similarly, we would like to put in place a promotion mechanism for container types. Note that this mechanism could also be used in the constructor of MyObj:

MyObj(a::T, b::T, c::Vector{T}) = MyObj{T}(a, b, c)
MyObj(a, b, c) = MyObj(promote_obj(a, b, c)...)

This is exactly what the following methods allow one to do.
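
The promote_obj helper used in the constructor above is not defined in this snippet; a minimal sketch of what such a helper could look like, built on the basetype and convert_obj functions documented below (a hypothetical definition shown only for illustration), is:

# Hypothetical promotion helper: promote the base types of all arguments
# and convert each argument to the common type.
function promote_obj(args...)
    T = promote_type(map(Gravity.basetype, args)...)
    return map(a -> Gravity.convert_obj(T, a), args)
end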

Gravity.basetypeFunction
basetype(x)
basetype(T)

Return the type of x or of its elements; for types, return the base type.

This function is similar to eltype and even more to Base.eltypeof: for simple objects it simply returns their type; for containers, it returns the type of their elements. Note that specific methods of this function need to be defined for custom structures built on a single type.

Note that the function can equally well be applied to types or to instances.

Example

struct MyObj{T}
    a::T
    b::T
    c::Vector{T}
end
basetype(::Type{MyObj{T}}) where {T} = T
convert_obj(::Type{T}, x::MyObj{T}) where {T} = x
convert_obj(::Type{T}, x::MyObj{S}) where {T,S} = MyObj(
    convert_obj(T, x.a), convert_obj(T, x.b), convert_obj(T, x.c))
source
Gravity.convert_objFunction
convert_obj(T, x)

Convert x to type T, keeping the object structure unaltered.

This function generalizes convert using the same idea as Gravity.basetype: a composite object is converted by converting its fields.

The base implementation only defines the function for simple types, tuples, and vectors; it is up to the developer to define methods for custom structures.

Examples

julia> Gravity.convert_obj(Float64, (1, 2e0, 3f0))
(1.0, 2.0, 3.0)

julia> Gravity.convert_obj(Float64, [1,2,3])
3-element Vector{Float64}:
 1.0
 2.0
 3.0

julia> x = [1, 2, 3]; Gravity.convert_obj(Int, x) === x
true

julia> Gravity.convert_obj(Float64, (1f0, [2,3], (4.0, 5, 6f0)))
(1.0, [2.0, 3.0], (4.0, 5.0, 6.0))
source

Threads usage

Gravity.tmapFunction
tmap(f, args...)

Threaded version of map.

Code taken from https://discourse.julialang.org/t/type-stable-threaded-map/.

See also https://julialang.org/blog/2023/07/PSA-dont-use-threadid/

Warning

This function is type-stable, but it assumes that the return type of f is the same for every element of the array(s) args.
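
For reference, a minimal sketch of the chunk-based, task-spawning pattern recommended in the linked blog post (an illustration of the technique, not necessarily the exact implementation of tmap):

# Split the work into contiguous chunks, spawn one task per chunk, then
# concatenate the fetched results. Gravity's tmap additionally asserts the
# element type so that the result stays type-stable.
function chunked_tmap(f, xs; ntasks = Threads.nthreads())
    chunksize = max(1, cld(length(xs), ntasks))
    tasks = [Threads.@spawn(map(f, chunk))
             for chunk in Iterators.partition(xs, chunksize)]
    return reduce(vcat, map(fetch, tasks))
end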

Example

julia> xs = rand(20);

julia> Gravity.tmap(sin, xs) ≈ map(sin, xs)
true

julia> Base.return_types(Gravity.tmap, (typeof(sin), typeof(xs)))
1-element Vector{Any}:
 Vector{Float64} (alias for Array{Float64, 1})
source
Gravity.parse_use_threadsFunction
parse_use_threads(use_threads::String)

Parse the string use_threads.

The string can contain the following values separated by commas or pipes:

  • "grid": use threads to map grid points in the source plane;
  • "imageplane": use threads to parallelize the computation of lensing operations for lenses of the same image plane;
  • "lensunmap": use threads to invert the lens equation for different images of the same source;
  • "xsources": use threads to map pixels of extended sources in the source plane;
  • "sampling": use threads for sampling operations (MCMC algorithms);
  • "default": equivalent to "xsources, sampling";
  • "always": use threads for all above operations;
  • "never": never use threads.

The function returns a bitmask with the selected options, or a negative number in case of error.
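
For instance, several options can be combined and the returned value checked for validity (a usage sketch; the numeric value of the bitmask is implementation-defined):

# Combine two options, separated by a comma (a pipe works as well).
mask = Gravity.parse_use_threads("grid, sampling")
mask < 0 && error("invalid use_threads string")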

Example

julia> Gravity.parse_use_threads("sampling") == Gravity.THREADS_DEFAULT
true

julia> Gravity.parse_use_threads("never")
0
source

LogSumExp

Gravity.logsumexpFunction
logsumexp([f,] vals [, weights])

Compute the logsumexp of f.(vals) with optional weights.
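
For reference, a numerically stable log-sum-exp shifts the arguments by their maximum before exponentiating; the sketch below shows the textbook identity, not necessarily the exact implementation used here:

# Textbook stable evaluation: logsumexp(x) = m + log(sum(exp.(x .- m))) with m = maximum(x).
function naive_logsumexp(xs)
    m = maximum(xs)                       # shift by the maximum to avoid overflow
    return m + log(sum(x -> exp(x - m), xs))
end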

Example

julia> xs = rand(10); ws = rand(10);

julia> Gravity.logsumexp(xs) ≈ log(sum(exp, xs))
true

julia> Gravity.logsumexp(sin, xs) ≈ log(sum(exp ∘ sin, xs))
true

julia> Gravity.logsumexp(xs, ws) ≈ log(sum(exp.(xs) .* ws))
true
source

Local optimization using StaticArrays

To improve speed, we re-implemented several optimization algorithms for functions operating on StaticArrays. These functions are all non-allocating and can result in a significant performance gain.

Gravity.OptimResultType
OptimResult

A concrete type to store the results of a gradient-free optimization.

Members

  • x: the minimizer
  • f: the minimum
  • niters: the number of iterations performed
  • stopreason: a symbol that stores the reason for stopping the minimization
    • :x for x-convergence reached (small changes in the minimizer)
    • :f for f-convergence reached (small changes in the function)
    • :fail if no convergence was reached within the given iterations
source
Gravity.OptimGradientResultType
OptimGradientResult

A concrete type to store the results of a gradient-based optimization.

Members

  • x: the minimizer
  • f: the minimum
  • df: the gradient at the minimum
  • niters: the number of iterations performed
  • stopreason: a symbol that stores the reason for stopping the minimization
    • :x for x-convergence reached (small changes in the minimizer)
    • :g for ∇-convergence reached (small values of the gradient)
    • :fail if no convergence was reached within the given iterations
source
Gravity.OptimHessianResultType
OptimHessianResult

A concrete type to store the results of a hessian-based optimization.

Members

  • x: the minimizer
  • f: the minimum
  • df: the gradient at the minimum
  • d2f: the hessian at the minimum
  • niters: the number of iterations performed
  • stopreason: a symbol that stores the reason for stopping the minimization
    • :x for x-convergence reached (small changes in the minimizer)
    • :g for ∇-convergence reached (small values of the gradient)
    • :fail if no convergence was reached within the given iterations
source
Gravity.optim_newtonFunction
optim_newton((f, df, hf), x0 [, args...] [; xtol2, gtol2, maxiters, linesearch])
optim_newton((f, df), x0 [, args...] [; xtol2, gtol2, maxiters, linesearch])
optim_newton(f, x0 [, args...] [; xtol2, gtol2, maxiters, linesearch])

Minimize a function using Newton's method.

In the typical call (first option above), the first parameter must be a tuple composed of three functions, f, df, and hf. All these functions will be called with an SVector x as first argument and with possible additional arguments args.... Of these:

  • f must return a scalar representing the value of the function to minimize at x;
  • df must return the tuple (f(x), ∇f(x)), i.e. the function together with its gradient computed at x;
  • hf must return the tuple (f(x), ∇f(x), Hf(x)), where Hf is the hessian of f.

Alternatively, if Hf is not available, one can just pass the tuple (f, df): the hessian, in this case, will be computed by automatic differentiation through optim_autojacobian. If the gradient is also not available, one can just use f as the first argument (third form above), and both the gradient and the hessian will be computed by optim_autohessian.

In all calls to optim_newton, x0 is the starting position for the algorithm.

The function returns an OptimHessianResult.

Keywords

  • xtol2: squared tolerance in changes of x; default eps(T)
  • gtol2: squared tolerance in the final gradient; default zero
  • maxiters: maximum number of allowed iterations; default 1000
  • linesearch: if true, a backtracking line search is performed; this generally improves the stability and the convergence of the algorithm

Example

julia> f(x, y) = (y - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2;

julia> df(x, y) = ((y - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2, 
       SPoint(-2.0 * (y - x[1]) - 400.0 * (x[2] - x[1]^2) * x[1], 200.0 * (x[2] - x[1]^2)));

julia> x0 = SPoint(0.0, 0.0);

julia> Gravity.optim_newton((f, df), x0, 1.0)
Optimization result (hessian based)
x = [0.9999999999999899, 0.9999999999999792]
∇ = [2.018385458768512e-13, -1.1102230246251565e-13]
H = [801.9999999999841 -399.99999999999596; -399.99999999999596 200.0]
after 34 iterations; x-convergence reached

julia> Gravity.optim_newton(f, x0, 1.0)
Optimization result (hessian based)
x = [0.9999999999999899, 0.9999999999999792]
∇ = [2.018385458768512e-13, -1.1102230246251565e-13]
H = [801.999999999984 -399.99999999999596; -399.99999999999596 200.0]
after 34 iterations; x-convergence reached
source
Gravity.optim_bfgsFunction
optim_bfgs(fdf, x0 [, args...] [; B0=I, xtol2, gtol2, maxiters, linesearch])
optim_bfgs(f, x0 [, args...] [; B0=I, xtol2, gtol2, maxiters, linesearch])

Find the minimum of a function using the BFGS method.

This procedure uses the Broyden–Fletcher–Goldfarb–Shanno algorithm, or BFGS for short, to find the minimum of a function; this method requires that both the function to minimize and its gradient are available.

In the typical call (first option above), the first parameter must be a tuple composed of two functions, (f, df). Both f and df will be called with an SVector x as first argument and with possible additional arguments args.... Of these:

  • f must return a scalar representing the value of the function to minimize at x;
  • df must return the tuple (f(x), ∇f(x)), i.e. the function together with its gradient computed at x;

If the gradient is not available, one can just use f as the first argument (second form above), and the gradient will be computed by optim_autogradient.

In all calls, x0 is the starting position for the algorithm.

Optionally, one can also pass as a keyword a square static matrix B0 representing an approximation of the Hessian at x0; if it is not passed, the identity matrix is used.

Keywords

  • B0: square static matrix with an approximation of the hessian at x0
  • xtol2: squared tolerance in changes of x; default eps(T)
  • gtol2: squared tolerance in the final gradient; default zero
  • maxiters: maximum number of allowed iterations; default 1000
  • linesearch: if true, a backtracking line search is performed; this generally improves the stability and the convergence of the algorithm

The procedure returns an object of type OptimGradientResult.

Example

julia> f(x, y) = (y - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2;

julia> df(x, y) = ((y - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2, 
       SPoint(-2.0 * (y - x[1]) - 400.0 * (x[2] - x[1]^2) * x[1], 200.0 * (x[2] - x[1]^2)));

julia> x0 = SPoint(0.0, 0.0);

julia> Gravity.optim_bfgs((f, df), x0, 1.0)
Optimization result (gradient based)
x = [1.0000000006302598, 1.0000000012361165]
f = 4.567788256566909e-19
∇ = [1.1021778162968838e-8, -4.880629234094158e-9]
after 27 iterations; x-convergence reached

julia> Gravity.optim_bfgs(f, x0, 1.0)
Optimization result (gradient based)
x = [1.0000000006302598, 1.0000000012361165]
f = 4.567788256566909e-19
∇ = [1.1021778162968838e-8, -4.880629234094158e-9]
after 27 iterations; x-convergence reached
source
Gravity.optim_nmFunction
optim_nm(f, x0 [, args...] [; step=1, xtol2, ftol2, maxiters, ...])
optim_nm(f, xₙ, fₙ [, args...] [; xtol2, ftol2, maxiters, ...])

Find the minimum of a function without using derivatives.

This procedure uses the Nelder-Mead algorithm, also called downhill simplex or amoeba algorithm, which finds a minimum of f without requiring any derivative (gradient-free). The algorithm needs an initial simplex to start: for a problem of dimension N, this can be entered as a set of N + 1 points xₙ together with the associated function values fₙ = f.(xₙ, args...). Alternatively, one can enter a starting point x0 and an associated step that will be taken along each direction; the function values are then computed automatically.

The function f must take a static vector (SVector) x as first argument; if it requires additional arguments, these can be passed as trailing variable arguments args....
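
For instance, the second form can be called with an explicit simplex (a sketch only; the exact container types expected for xₙ and fₙ are an assumption), using the same f as in the example below:

xₙ = (SPoint(0.0, 0.0), SPoint(1.0, 0.0), SPoint(0.0, 1.0))   # N + 1 = 3 vertices for N = 2
fₙ = map(x -> f(x, 1.0), xₙ)                                  # associated function values
Gravity.optim_nm(f, xₙ, fₙ, 1.0)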

Keywords

  • xtol2: squared tolerance in changes of x; default eps(T)
  • ftol2: squared tolerance in the changes of f; default zero
  • maxiters: maximum number of allowed iterations; default 1000
  • α, β, γ, δ: parameters of the algorithm, set here according to DOI:10.1007/s10589-010-9329-3

The procedure returns an object of type OptimResult.

Example

julia> f(x, y) = (y - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2;

julia> x0 = SPoint(0.0, 0.0);

julia> Gravity.optim_nm(f, x0, 1.0)
Optimization result (gradient-free)
x = [1.0000000007965937, 1.0000000013080657]
f = 8.764000101453318e-18
after 101 iterations; x-convergence reached
source
Gravity.optim_autogradientFunction
optim_autogradient(f)

Use ForwardDiff to compute the gradient of f.

This function makes it possible to use gradient-based optimization methods for functions that do not have an analytical expression for their gradient.

See also: optim_autohessian, optim_autojacobian

Example

julia> f(x, y) = (y - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2;

julia> x0 = SPoint(0.0, 0.0);

julia> Gravity.optim_bfgs(Gravity.optim_autogradient(f), x0, 1.0)
Optimization result (gradient based)
x = [1.0000000000538531, 1.0000000001209275]
f = 2.038017913387265e-20
∇ = [-5.180774476372692e-9, 2.644240382210228e-9]
after 50 iterations; x-convergence reached
source
Gravity.optim_autojacobianFunction
optim_autojacobian(fdf)

Use ForwardDiff to compute the jacobian of df.

This function makes it possible to use hessian-based optimization methods for functions for which only the first derivative (gradient) is available. The parameter fdf must be a tuple (f, df) where f is a scalar function and df is a function that returns a differential, that is, the tuple (f, ∇f).

See also: optim_autogradient, optim_autohessian
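
A usage sketch, assuming (as the optim_newton docstring suggests) that the result of optim_autojacobian can be passed directly as the first argument of optim_newton, with f, df, and x0 as defined in the optim_newton example:

# Hypothetical usage: wrap (f, df) so optim_newton gets an automatically computed hessian.
Gravity.optim_newton(Gravity.optim_autojacobian((f, df)), x0, 1.0)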

source

FITS files

Gravity.fits_readFunction
fits_read(filename)

Read an image from a FITS file.

Uses the CFITSIO library to read the image from the file filename. The result is an array with the image data.

source
Gravity.fits_headerFunction
fits_header(filename)

Read the header from a FITS file.

Uses the CFITSIO library to read the header from the file filename. The result is a string with the header information.

source
Gravity.fits_read_headerFunction
fits_read_header(filename)

Read the header and the data from a FITS file.

Uses the CFITSIO library to read the header and the data from the file filename. The result is a tuple with the header information and the data.

source
Gravity.fits_split_headerFunction
fits_split_header(header; convert=true)

Split a FITS header into its keywords, values, and comments.

The header must be a string with the FITS header information. The result is a list of tuples with the keyword, value, and comment for each keyword in the header. If convert is true, the values are converted to the appropriate Julia types.

source
Gravity.fits_writeFunction
fits_write(filename, data [, header]; kw...)
fits_write(filename, dataset; kw...)

Write an image to a FITS file.

Uses the CFITSIO library to write the image data to the file filename (which must be a string). If header is provided, it is also written to the file. Note that if data is an OffsetArray and a header with WCS keywords is provided, the offsets are taken into account when writing the file (that is, the CRPIX keywords are adjusted accordingly).

A proper FITS header can be provided either as a string (in which case it is split into keywords, values, and comments) or as a list of tuples with the keyword, value, and comment for each keyword in the header.

Keywords

  • overwrite: if true, overwrite the file if it already exists (default: false)
  • append: if true, append the data to an existing FITS file as a new HDU (default: false)
  • try_contiguous: if true, try to make a contiguous copy of the data before writing it to the file, if the data is not contiguous (default: true)
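
A minimal usage sketch (the file names and data here are placeholders):

data   = rand(Float64, 256, 256)
header = Gravity.fits_header("reference.fits")            # header read from another file
Gravity.fits_write("output.fits", data, header; overwrite=true)
Gravity.fits_write("output.fits", rand(128, 128); append=true)   # add a second HDU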
source
Gravity.fits_infoFunction
fits_info(filename)

Print information about a FITS file.

Uses the CFITSIO library to print information about the FITS file filename.

source

WCS conversions

Gravity.jl implements a fast and static version of the WCS conversion functions. It is limited to TAN projections, but is approximately 200 times faster than the standard WCSlib conversion.

Gravity.FastWCSType
FastWCS

An immutable structure to hold a 2D WCS transformation.

Members

  • crpix: the pixel coordinate of the reference point
  • crval: the world coordinates of the reference point, in degrees
  • cd: the 2×2 transformation CD matrix
  • lonpole: native longitude of the celestial pole, in degrees
  • radesys: the reference frame (e.g., "ICRS", "FK5", "FK4", "GAL")

Internal member

The following quantities are automatically computed when the object is built:

  • M: a 3×3 matrix used to convert between native and celestial coordinates
  • scale: the pixel scale at the reference point, in degrees/pixel, computed as sqrt(abs(det(cd)))

Constructors

FastWCS(crpix, crval, cd, lonpole)
FastWCS(; [crpix=, crval=, cd=, cdelt=, crota=, pc=, lonpole=, ctype=, cunit=])

In the second form, the function accepts all major keywords and computes the appropriate quantities.
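
A minimal construction sketch using the keyword form (the numeric values are placeholders, and the exact container types expected for the keywords, tuples versus static vectors, are an assumption):

# Hypothetical TAN WCS centered on (α, δ) = (150.0°, 2.2°) with 0.2″ pixels.
wcs = Gravity.FastWCS(crpix = (1024.5, 1024.5),
                      crval = (150.0, 2.2),            # degrees
                      cdelt = (-0.2, 0.2) ./ 3600,     # degrees/pixel
                      ctype = ("RA---TAN", "DEC--TAN"))

radec = Gravity.pix_to_world(wcs, (1000.0, 1000.0))    # pixel → world
xy    = Gravity.world_to_pix(wcs, radec)               # world → pixel (round trip)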

source
Gravity.pix_to_worldFunction
pix_to_world(wcs, x, y)
pix_to_world(wcs, pix [± error])

Convert pixel coordinates into world (celestial) ones.

wcs must be a FastWCS structure, while pix can be a 2-tuple or a static vector with two elements (SVector), with an optional error (i.e., a MvNormal distribution). The result is of the same type as the input.

In case an error is provided, the coordinate identified by pix in the wcs frame is converted into world coordinates, propagating the uncertainty according to the local WCS Jacobian. The error must be expressed in pixels.

Note that a purely isotropic error (i.e., a MvNormal with a scalar standard deviation) is always translated into an isotropic error in world coordinates, while an elliptical error (i.e., a MvNormal with a diagonal or full covariance matrix) is properly transformed according to the local WCS Jacobian.

See also: world_to_pix, pix_to_pix

source
Gravity.world_to_pixFunction
world_to_pix(wcs, α, β)
world_to_pix(wcs, world [± error])

Convert world (celestial) coordinates into pixel ones.

wcs must be a FastWCS structure, while world can be a 2-tuple or a static vector with two elements (SVector), with an optional error (i.e., a MvNormal distribution). The result is of the same type as the input.

In case an error is provided, the coordinate identified by world in the wcs frame is converted into pixel coordinates, propagating the uncertainty according to the local WCS Jacobian. The error must be expressed in degrees.

Note that a purely isotropic error (i.e., a MvNormal with a scalar standard deviation) is always translated into an isotropic error in pixel space, while an elliptical error (i.e., a MvNormal with a diagonal or full covariance matrix) is properly transformed according to the local WCS Jacobian.

See also: pix_to_world, pix_to_pix

source
Gravity.pix_to_pixFunction
pix_to_pix(wcs, wcs′, x, y)
pix_to_pix(wcs, wcs′, pix [± error])

Convert pixel coordinates into a different reference frame.

wcs and wcs′ must be two FastWCS structures, while pix can be a 2-tuple or a static vector with two elements (SVector), with an optional error (i.e., a MvNormal distribution). The result is of the same type as the input.

The coordinate identified by pix in the wcs frame is converted into the wcs′ frame. The result is identical (within numerical precision) to the expression

world_to_pix(wcs′, pix_to_world(wcs, pix))

However, a call to pix_to_pix is approximately three to five times faster than the expression above.

In case an error is provided, the coordinate identified by pix in the wcs frame is converted into the wcs′ frame, propagating the uncertainty according to the local WCS Jacobians. The error must be expressed in pixels.

Note that a purely isotropic error (i.e., a MvNormal with a scalar standard deviation) is always translated into an isotropic error in pixel space, while an elliptical error (i.e., a MvNormal with a diagonal or full covariance matrix) is properly transformed according to the local WCS Jacobian.

See also: world_to_pix, pix_to_world

source
Gravity.wcs_from_gridFunction
wcs_from_grid(wcs, grid)
wcs_from_grid(wcs, xrange, yrange)

Create a new FastWCS structure adapted to a sub-grid.

wcs must be a FastWCS structure (for example, the global WCS of a model, obtained as model.global_wcs), while grid can be a BasicGrid structure, or two ranges defining the x and y ranges of the sub-grid.

source
Gravity.angular_distanceFunction
angular_distance(x, y)

Compute the angular distance between x and y.

Both x and y must be 2-vectors, 2-tuples, or SPoints, representing two coordinates in the sky in degrees. It is assumed that first(x) and first(y) correspond to longitude-like coordinates, while last(x) and last(y) to latitude-like ones. The result is also in degrees.

This function is based on the haversine formula, which is numerically stable for small distances.
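
For reference, the standard haversine identity for two points (λ, φ) given in degrees (a sketch of the formula, not necessarily the literal implementation):

# Haversine angular distance; x and y are (longitude, latitude) pairs in degrees.
function haversine_deg(x, y)
    λ1, φ1 = deg2rad.(x)
    λ2, φ2 = deg2rad.(y)
    h = sin((φ2 - φ1) / 2)^2 + cos(φ1) * cos(φ2) * sin((λ2 - λ1) / 2)^2
    return rad2deg(2 * asin(min(1.0, sqrt(h))))        # result in degrees
end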

source
Gravity.dms_to_degFunction
dms_to_deg(dms; separator=:auto)

Convert a DMS (degrees, minutes, seconds) string into decimal degrees.

If separator is set to :auto (the default), the function tries to automatically determine the separator used in the string (either : or space). Otherwise, the specified separator is used.

See also: hms_to_deg, deg_to_dms

Examples

julia> Gravity.dms_to_deg("-12:30:00.00")
-12.5

julia> Gravity.dms_to_deg("2 7 30")
2.125
source
Gravity.hms_to_degFunction
hms_to_deg(hms; separator=:auto)

Convert a HMS (hours, minutes, seconds) string into decimal degrees.

If separator is set to :auto (the default), the function tries to automatically determine the separator used in the string (either : or space). Otherwise, the specified separator is used.

See also: dms_to_deg, deg_to_hms

Examples

julia> Gravity.hms_to_deg("12:30:00")
187.5

julia> Gravity.hms_to_deg("05 15 30")
78.875
source
Gravity.deg_to_dmsFunction
deg_to_dms(deg; separator=(':', ':', ""), precision=2, pad1=3, sign=true)

Convert decimal degrees into a DMS (degrees, minutes, seconds) string.

The separator keyword can be either a single character (used between degrees, minutes, and seconds), a 2-tuple (used between degrees and minutes, and between minutes and seconds), or a 3-tuple (used between degrees and minutes, between minutes and seconds, and after seconds). The default is (':', ':', "").

The precision keyword sets the number of decimal digits used for the seconds value (default: 2).

The pad1 keyword sets the minimum number of digits used for the degrees value (default: 3).

If sign is set to true (the default), the resulting string will include a leading + or - sign.

See also: deg_to_hms, dms_to_deg

Examples

julia> Gravity.deg_to_dms(-23.4392911)
"-023:26:21.45"

julia> Gravity.deg_to_dms(12.3456789; separator=' ', sign=false)
"012 20 44.44"

julia> Gravity.deg_to_dms(5.0; precision=0, pad1=0, sign=false, separator=("d ", "m ", "s"))
"5d 00m 00s"

julia> Gravity.deg_to_dms(Gravity.dms_to_deg("123:45:67.89"))
"+123:46:07.89"

julia> Gravity.deg_to_dms(Gravity.dms_to_deg("123:45:59.9996"))
"+123:46:00.00"
source
Gravity.deg_to_hmsFunction
deg_to_hms(deg; separator=(':', ':', ""), precision=3, pad1=2, sign=false)

Convert decimal degrees into a HMS (hours, minutes, seconds) string.

The separator keyword can be either a single character (used between hours, minutes, and seconds), a 2-tuple (used between hours and minutes, and between minutes and seconds), or a 3-tuple (used between hours and minutes, between minutes and seconds, and after seconds). The default is (':', ':', "").

The precision keyword sets the number of decimal digits used for the seconds value (default: 3).

The pad1 keyword sets the minimum number of digits used for the degrees value (default: 2).

If sign is set to true, the resulting string will include a leading + or - sign. The default is false.

See also: deg_to_dms, hms_to_deg

Examples

julia> Gravity.deg_to_hms(23.4392911)
"01:33:45.430"

julia> Gravity.deg_to_hms(12.3456789; separator=' ')
"00 49 22.963"

julia> Gravity.deg_to_hms(5.0; precision=0, pad1=0, sign=false, separator=("h ", "m ", "s"))
"0h 20m 00s"

julia> Gravity.deg_to_hms(Gravity.hms_to_deg("12:34:56.789"))
"12:34:56.789"

julia> Gravity.deg_to_hms(Gravity.hms_to_deg("12:34:59.9996"))
"12:35:00.000"
source

Convolutions

Gravity.jl implements several convolution techniques based on different algorithms. The reason is that different algorithms scale in different ways with the input sizes; it is therefore wise to change the algorithm depending on the specific convolution to be performed.

Additionally, two modes for convolutions are possible. If one needs to perform a specific convolution just once, then a direct method should be used: this corresponds to the function Gravity.convolve!. If, instead, the same convolution with the same kernel has to be carried out several times, it is convenient to use an indirect method with a call to Gravity.convolve_prepare: this will return a function which, when called with the output and input arrays, updates the output array with the convolved data.

The best algorithm for indirect convolutions can be chosen automatically: this can be done using a heuristic method, or by timing the various algorithms on the actual data (the default). The timings are cached, so that this task does not need to be repeated for inputs of the same size.
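
The two modes can be sketched as follows (array names and sizes are placeholders):

using OffsetArrays

kernel = OffsetArrays.centered(ones(3, 3) ./ 9)
a_out  = zeros(64, 64)
a_in   = Gravity.convolution_input(a_out, kernel)        # properly padded and offset input

Gravity.convolve!(a_out, a_in, kernel)                    # direct mode: convolve once

conv! = Gravity.convolve_prepare(a_out, a_in, kernel)     # indirect mode: reuse the kernel
conv!(a_out, a_in)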

Gravity.blocksum!Function
blocksum!(dest, src; normalize=true)

Perform a block-sum of src and save the result in dest.

src and dest must be arrays with the same number of dimensions. Moreover, the size of src in each dimension must be an integer multiple of the size of dest in the corresponding dimension. Note that the block used for the sum need not be square (see the example below, where the block has size (4 ÷ 2)×(6 ÷ 2) = 2×3).

The optional normalize parameter indicates whether the result is the average (default) or the sum of the data.

Warning

This function makes no attempt to check the offsets of the arguments if OffsetArrays are used: that is, the block-sum is performed regardless of the correctness of the axes, as long as the sizes are correct.

Example

julia> src = reshape(collect(1:24), 4, 6)
4×6 Matrix{Int64}:
 1  5   9  13  17  21
 2  6  10  14  18  22
 3  7  11  15  19  23
 4  8  12  16  20  24

julia> dest = Array{Int}(undef, 2, 2);

julia> Gravity.blocksum!(dest, src; normalize=false)
2×2 Matrix{Int64}:
 33  105
 45  117

julia> dest = Array{Int}(undef, 2, 3);

julia> Gravity.blocksum!(dest, src; normalize=false)
2×3 Matrix{Int64}:
 14  46  78
 22  54  86
source
Gravity.convolution_inputFunction
convolution_input(a_out, kernel [, binning])

Allocate an input array suited to produce a given convolution output array a_out.

This function returns an OffsetArray which, when convolved with the given kernel, produces the requested a_out array. Note that the returned input is larger (by an amount size(kernel) .- 1) than the requested output; moreover, its offset guarantees that each pixel of a_out can be fully computed with no missing data. If kernel = nothing, the function assumes a Dirac delta kernel.

The binning optional parameter can be used to specify the binning of the input array. If present, it must be a tuple with length equal to the array dimensions, or a single positive integer (in which case the same binning is used for all dimensions). If not specified, the input array is not binned. Note that the kernel is taken to be unbinned, that is, defined at the subpixel level.

Warning

The rules for the offsets returned in the input array are relatively straightforward when no binning is used. However, when binning is used, the offsets may be non-intuitive. More specifically, calling f the index that one would have for the first point of the input array without binning, the first point of the binned input array will have index f .* binning .- binning .÷ 2. However, this way the physical centers of the binned input pixels have quite simple expressions: in fact, they can be computed as x[i] = (2i + (binning + 1) .% 2) ./ 2binning, that is, as x[i] = i / binning for odd binning factors, and as x[i] = (i - 0.5) / binning for even binning factors.

See also: convolution_output, convolution_input_axes
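
A short sketch, mirroring the convolution_input_axes example below:

a_out  = OffsetArray(zeros(4, 7), 2, -2)
kernel = OffsetArrays.centered(zeros(3, 5))
a_in   = Gravity.convolution_input(a_out, kernel)
# a_in is an OffsetArray whose axes match
# Gravity.convolution_input_axes(a_out, kernel), i.e. (2:7, -3:7)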

source
Gravity.convolution_input_axesFunction
convolution_input_axes(a_out, kernel [, binning])

Compute the axes of the input array needed in a convolution.

See convolution_input for details.

See also: convolution_output_axes

Example

julia> a_out = OffsetArray(zeros(4, 7), 2, -2);

julia> kernel = OffsetArrays.centered(zeros(3,5));

julia> Gravity.convolution_input_axes(a_out, kernel)
(2:7, -3:7)
source
Gravity.convolution_outputFunction
convolution_output(a_in, kernel [, binning])

Allocate an output array for a convolution of the input array a_in.

This function returns the OffsetArray resulting from a convolution of a_in with kernel. Note that the returned array is smaller (by an amount size(kernel) .- 1) than the provided input a_in. If kernel = nothing, the function assumes a Dirac delta kernel.

The binning optional parameter can be used to specify the binning of the input array. If present, it must be a tuple with length equal to the array dimensions, or a single positive integer (in which case the same binning is used for all dimensions). If not specified, the input array is not binned. Note that the kernel is taken to be unbinned, that is, defined at the subpixel level.

See also: convolution_input, convolution_output_axes

source
Gravity.convolution_output_axesFunction
convolution_output_axes(a_in, kernel [, binning])

Compute the axes of the output array resulting from a convolution.

a_in and kernel are two arrays (possibly with offsets). In particular, a_in is used together with kernel in a convolution to produce an array a_out. This function returns the largest axes of a_out such that the convolution is computed without any imputation (that is, so that each value of a_out is fully determined). If kernel = nothing, the function assumes a Dirac delta kernel.

The binning optional parameter can be used to specify the binning of the input array. If present, it must be a tuple with length equal to the array dimensions, or a single positive integer (in which case the same binning is used for all dimensions). If not specified, the input array is not binned. Note that the kernel is taken to be unbinned, that is, defined at the subpixel level.

See also: convolution_output, convolution_input_axes

Example

julia> a_in = OffsetArray(zeros(7, 10), 3, -2);

julia> kernel = OffsetArrays.centered(zeros(3,5));

julia> Gravity.convolution_output_axes(a_in, kernel)
(5:9, 1:6)
source
Gravity.convolution_check_axesFunction
convolution_check_axes(out, A, kernel)

Check whether out is a possible output of a convolution of A with kernel.

This function returns true if the axes of out are consistent with a convolution of A with kernel, and false otherwise. The check includes the binning factor implicit in the sizes of the three arrays.

source
Gravity.morph_dilateFunction
morph_dilate(a_in, kernel [, binning])

Perform a morphological dilation of a_in using the given kernel.

This is equivalent to a convolution, with a few significant differences:

  • a_in is generally an Integer array, while kernel can be a Real array

  • The integer and (&) replaces the multiplication, and the integer or (|) replaces the sum, in the convolution expression

  • The output array is larger (by an amount size(kernel) .- 1) than the input array: in fact, it is computed as in convolution_input.

The optional argument binning can be a single integer or a tuple of integers with the same length as the array dimensions. If present, the output array is expanded by the binning factor in each dimension. Note that the kernel is taken to be unbinned, that is, defined at the subpixel level.

If kernel = nothing, the function assumes a Dirac delta kernel, and the output array is equal to the input array, or is a block-expanded version of it if binning is specified.

Warning

This code should use @turbo, but that gives inconsistent results!

source
Gravity.convolution_fastest_methodFunction
convolution_fastest_method(a_out, a_in, kernel; avoid=(), direct=true)

Returns the most efficient convolution method for a fixed kernel.

This function analyzes the speed of various convolution algorithms (turbo-based, sparse, or fft) and picks the most appropriate one. The function uses heuristics to choose the best algorithm given the kernel size, its separability, and the possible binning.

The avoid parameter can be used to specify methods to avoid. The direct parameter indicates whether the estimate has to be performed for direct convolutions (convolve!) or for indirect ones (convolve_prepare).
source
Gravity.convolve!Function
convolve!(a_out, a_in, kernel [, method])

Perform a convolution of a_in with kernel and save the result in a_out.

Undersampled results, where a_out has a size smaller than the one prescribed by convolution_output_axes, are also supported: in this case, the final result is binned (either a posteriori using blocksum!, or within the convolution algorithm itself).

The convolution is performed using the specified method, which must be a Val-wrapped symbol (e.g., Val(:turbo)). All registered methods are provided in the global variable Gravity.convolution_direct_methods. These are:

  • Val(:turbo). The convolution is performed using a standard algorithm, accelerated with the @turbo macro of LoopVectorization.
  • Val(:binned). Similar to Val(:turbo), but the kernel is binned before performing the convolution (using a specialized internal function, Gravity._bin_kernel).
  • Val(:mv). The convolution is performed by generating a sparse matrix that represents the convolution with kernel, and by performing the matrix-vector (mv) multiplication to generate the result. In this operation, both a_in and a_out are treated as vectors.
  • Val(:fft). The convolution is performed using the FFT algorithm, which outperforms direct methods for large kernels.
  • Val(:fft2). Similar to Val(:fft), but uses a power of 2 for the size of all arrays, which is generally faster for typical FFT implementations.
In case no method is specified, a suitable method is chosen heuristically.

source
Gravity.convolve_prepareFunction
convolve_prepare(a_out, a_in, kernel, method; thread_safe=true)

Returns a function that performs a convolution between a_in and kernel.

This function creates a closure which, when called with arguments a_out and a_in, performs the convolution of a_in with kernel.

The convolution is performed using the specified method, which must be a Val-wrapped symbol (e.g., Val(:turbo)). All registered methods are provided in the global variable Gravity.convolution_indirect_methods. These are:

  • Val(:turbo). The convolution is performed using a standard algorithm, accelerated with the @turbo macro of LoopVectorization.
  • Val(:binned). Similar to Val(:turbo), but the kernel is binned before performing the convolution (using a specialized internal function, Gravity._bin_kernel).
  • Val(:mv). The convolution is performed by generating a sparse matrix that represents the convolution with kernel, and by performing the matrix-vector (mv) multiplication to generate the result. In this operation, both a_in and a_out are treated as vectors.
  • Val(:fft). The convolution is performed using the FFT algorithm, which outperforms direct methods for large kernels.
  • Val(:fft2). Similar to Val(:fft), but uses a power of 2 for the size of all arrays, which is generally faster for typical FFT implementations.
  • Val(:separable). Only valid for 2D separable kernels (i.e., kernels that can be written as the outer product of two vectors).

Additionally, if CUDA is available, further CUDA-based convolutions are possible:

  • Val(:cuda_direct). The convolution is performed on the GPU using the standard algorithm.
  • Val(:cuda_fft). The convolution is performed on the GPU using the FFT algorithm, which outperforms direct methods for large kernels.
  • Val(:cuda_fft2). Similar to Val(:cuda_fft), but uses a power of 2 for the size of all arrays, which is generally faster for typical FFT implementations.

If thread_safe is true (default), the returned function will not share memory allocations, and therefore the result will be thread-safe. If thread-safety is not a requirement, setting thread_safe to false can improve performance as temporary arrays are not allocated in the returned function but pre-allocated in the calling function.

source
convolve_prepare(a_out, a_in, kernel [; thread_safe=true, samples=100, seconds=0.2,
                 verbose=false, no_fft, no_separation])

Returns the most efficient convolution function for a fixed kernel.

This function creates a closure which, when called with arguments a_out and a_in, performs the convolution of a_in with kernel. The function uses the @b macro from Chairmarks.jl to analyze the speed of the various algorithms (turbo-based, sparse, or fft) and picks the most appropriate one. Note that, since benchmarks are often affected by positive biases, the various algorithms are ranked according to the minimum execution time, rather than the mean.

If thread_safe is true (default), the returned function will not share memory allocations, and therefore the result will be thread-safe. If thread-safety is not a requirement, setting thread_safe to false can improve performance as temporary arrays are not allocated in the returned function but pre-allocated in the calling function.

Keywords

  • seconds: the maximum cumulative time to be spent for timing each algorithm. If set to 0, heuristics will be used to pick the best algorithm without timing.
  • verbose: if true, the timing results will be printed.
  • check: if true, the correctness of each algorithm will be checked against the first one.
  • avoid: a tuple of symbols specifying the algorithms to be avoided. Valid symbols are: :turbo, :binned, :mv, :fft, :fft2, and :separable.
Info

The timings are cached according to the kernel size, so that subsequent calls with the same kernel size will be much faster.

Info

This function is thread-safe; however, if seconds is positive, it is wise to ensure that the processing is not interrupted by other threads, as this could affect the timing results.

source

Timings cache

The following functions are responsible for caching the convolution timings. The data are saved in a file called convolution_timings.cache, located in the directory of the Gravity.jl package. This file is automatically updated when exiting Julia.

Gravity.load_convolution_timings_cacheFunction
load_convolution_timings_cache([filename]; silent=true)

Loads the convolution timings cache from filename.

By default, the file is convolution_timings.cache in the package directory.

If silent is false, messages are printed to inform the user about the success or failure of the operation.

Returns a dictionary mapping kernel sizes to timing results, which can be used to initialize the _convolution_timings global variable.

See also: save_convolution_timings_cache

source
Gravity.save_convolution_timings_cacheFunction
save_convolution_timings_cache([filename]; silent=true)

Saves the convolution timings cache to filename.

By default, the file is convolution_timings.cache in the package directory.

If silent is false, messages are printed to inform the user about the success or failure of the operation.

This function merges the existing cache (if any) with the current timings, averaging the timings for matching kernel sizes.

See also: load_convolution_timings_cache

source

Debugging

The source code generated by gravitymodel can be browsed with the help of the following function:

Gravity.sourcecodeFunction
sourcecode(model [, name; kw...])

Return the source code of a definition in model.

If name is not provided, an interactive menu is shown to select the definition to show. The special name "all" shows the entire source code.

Keyword arguments

  • lines: if true, line numbers are included (default: false)
  • docs: if true, docstrings are included (default: false)
  • alias: if true, aliases are resolved (default: true)
  • pager: if true, the output is shown in a pager (default: false)
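
A usage sketch ("model" stands for any model previously created with gravitymodel):

Gravity.sourcecode(model, "all"; lines=true, pager=true)   # browse the full generated code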
source