Methods list
MIRT.MIRT
MIRT.LineSearchMM
MIRT.LineSearchMM
MIRT.LineSearchMMWork
MIRT.Afft
MIRT.Afft
MIRT.Anufft
MIRT.Aodwt
MIRT.Asense
MIRT._show_struct
MIRT.caller_name
MIRT.diff_adj
MIRT.diff_forw
MIRT.diff_map
MIRT.diffl
MIRT.diffl
MIRT.diffl
MIRT.diffl!
MIRT.diffl!
MIRT.diffl_adj
MIRT.diffl_adj
MIRT.diffl_adj
MIRT.diffl_adj!
MIRT.diffl_adj!
MIRT.diffl_map
MIRT.downsample1
MIRT.downsample2
MIRT.downsample3
MIRT.downsample_dim1
MIRT.dtft
MIRT.dtft
MIRT.dtft_adj
MIRT.dtft_adj
MIRT.dtft_init
MIRT.dtft_init
MIRT.embed
MIRT.embed
MIRT.embed!
MIRT.eql_root
MIRT.exp_mult
MIRT.exp_xform
MIRT.genkspace
MIRT.genspi
MIRT.getindex!
MIRT.interp1
MIRT.ir_dump
MIRT.ir_load_brainweb_t1_256
MIRT.ir_mri_coil_compress
MIRT.ir_mri_kspace_ga_radial
MIRT.ir_mri_sensemap_sim
MIRT.ir_mri_sensemap_sim
MIRT.ir_mri_sensemap_sim
MIRT.ir_mri_sensemap_sim_do
MIRT.ir_mri_smap1
MIRT.ir_mri_smap_r
MIRT.jinc
MIRT.line_search_mm
MIRT.make_dot_curvf
MIRT.make_dot_gradf
MIRT.map_many
MIRT.mask_or
MIRT.mask_outline
MIRT.maskit
MIRT.max_percent_diff
MIRT.mri_kspace_spiral
MIRT.mri_trajectory
MIRT.mri_trajectory_gads
MIRT.mri_trajectory_radial
MIRT.mri_trajectory_rosette3
MIRT.ncg
MIRT.ncg
MIRT.nufft_eltype
MIRT.nufft_errors
MIRT.nufft_init
MIRT.nufft_init
MIRT.nufft_typer
MIRT.ogm_ls
MIRT.ogm_ls
MIRT.pogm_restart
MIRT.poweriter
MIRT.rect
MIRT.reverser
MIRT.rmsd100
MIRT.rotate2d
MIRT.snr2sigma
MIRT.@shows
Methods usage
MIRT.MIRT
— Module
MIRT is the Michigan Image Reconstruction Toolbox.
MIRT.LineSearchMM
— Type
`LineSearchMM{...}`
Mutable struct for MM-based line searches.
MIRT.LineSearchMM
— Method
`LineSearchMM(gradf, curvf, u, v; α0 ...)`
`LineSearchMM(u, v, dot_gradf, dot_curvf; α0 ...)`
Construct an iterator for a line search based on the majorize-minimize (MM) approach, for a general family of 1D cost functions of the form $h(α) = \sum_{j=1}^J f_j(u_j + α v_j)$ where each function $f_j(t)$ has a quadratic majorizer of the form
\[q_j(t;s) = f_j(s) + \nabla f_j(s) (t - s) + 1/2 \|t - s\|^2_{C_j(s)}\]
where $C_j(⋅)$ is a diagonal matrix of curvatures. (It suffices for each $f_j$ to have a Lipschitz smooth gradient.)
Each function $f_j : \mathcal{X}_j ↦ \mathbb{R}$ where conceptually $\mathcal{X}_j ⊆ \mathbb{R}^{M_j}$, but we allow more general domains.
There are two outer constructors (based on the positional arguments):
- The simple way (not type stable) provides
  - `gradf` vector of $J$ functions that return the gradients of $f_1,…,f_J$
  - `curvf` vector of $J$ functions `z -> curv(z)` that return a scalar or a vector of curvature values for each element of `z`
- The fancier way (type stable) provides
  - `dot_gradf::AbstractVector{<:Function} = make_dot_gradf.(gradf)`; see `make_dot_gradf`.
  - `dot_curvf::AbstractVector{<:Function} = make_dot_curvf.(curvf)`; see `make_dot_curvf`.
in
- `u` vector of $J$ arrays $u_1,…,u_J$ (typically vectors)
- `v` vector of $J$ arrays $v_1,…,v_J$ (typically vectors)
We require `axes(u_j) == axes(v_j)` for all $j=1,…,J$.
option
- `α0::Real = 0f0` initial guess for step size
- `ninner::Int = 5` max number of inner iterations of MM line search
- `work = LineSearchMMWork(u, v, α)` pre-allocated work space for $u_j + α v_j$
MIRT.LineSearchMMWork
— Type
`LineSearchMMWork{Tz <: AbstractVector{<:AbstractArray}}`
Workspace for storing $z_j = u_j + α v_j$ in the MM-based line search.
If all of those $z_j$ arrays had the same `eltype`, then we could save memory by allocating just the longest vector needed. But for Unitful data they could have different `eltype`s and `size`s, which would require a lot of `reinterpret` and `reshape` to handle. So we just allocate separate work arrays for each $j$.
MIRT.Afft
— Method
`A = Afft(samp::AbstractArray{Bool} ; T, dims, operator, work, ...)`
Make a `LinearMapAO` operator object for an (under-sampled) FFT, of type `T`, using the given sampling pattern `samp`. Especially for compressed sensing MRI with Cartesian sampling.
Option:
- `T::Type = ComplexF32`
- `dims = 1:D` apply fft/bfft only along these dimensions
- `fft_forward::Bool = true` use `false` to have `bfft!` in the forward model
- `operator::Bool = true` set to `false` to return a `LinearMapAM`
- `work::AbstractArray` work space for in-place fft operations
- remaining arguments are passed to `plan_fft`
Output
Returns a `LinearMapsAA.LinearMapA[M|O]` object.
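A minimal usage sketch (the sampling pattern, array sizes, and variable names below are illustrative assumptions, not from the docstring):
```julia
using MIRT
samp = rand(8, 8) .< 0.5      # hypothetical 8×8 Cartesian sampling mask
A = Afft(samp)                # LinearMapAO for the under-sampled FFT
x = randn(ComplexF32, 8, 8)   # image-domain array
y = A * x                     # sampled k-space values where samp is true
z = A' * y                    # adjoint maps back to an 8×8 array
```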
MIRT.Afft
— Method
`A = Afft(xdim::Dims, fdim ; T, operator, unitary, work, ...)`
Make a `<:LinearMapAX` operator object for the FFT of an array of size `xdim`, along dimensions `fdim`, of type `T`.
In:
- `xdim::Dims{D}` array size
- `fdim = 1:D` apply fft/bfft only along these dimensions
Option:
- `T::Type = ComplexF32`
- `operator::Bool = true` return a `LinearMapAO`; set to `false` to return a `LinearMapAM`
- `unitary::Bool = false` set to `true` for a unitary DFT
- `work::AbstractArray` work space for in-place fft operations
- remaining arguments are passed to `plan_fft`
Output
Returns a `LinearMapsAA.LinearMapA[M|O]` object.
MIRT.Anufft
— Method
`Anufft(ω, N ; kwargs ...)`
Make a `LinearMapAO` object of size `length(ω) × prod(N)`. See `nufft_init` for options.
MIRT.Aodwt
— Method
`A, levels, mfun = Aodwt(dims ; level::Int=3, wt=wavelet(WT.haar))`
Create an orthogonal discrete wavelet transform (ODWT) `LinearMapAA`.
in
- `dims::Dims` tuple of dimensions
option
- `level::Int` # of levels; default 3
- `wt` wavelet transform type (see the `Wavelets` package); default Haar
- `operator::Bool=true` default to `LinearMapAO`
- `T::Type` `Float32` by default; use `ComplexF32` if needed
out
- `A` a `LinearMapAX` object
- `scales` array of size `dims` showing the scale of each coefficient, which is useful when imposing scale-dependent regularization
- `mfun` convenience function for `A*X` when `X` is a Matrix or Array (not a vector)
2019-02-23 Jeff Fessler, University of Michigan
MIRT.Asense
— Method
`Asense(samp, smaps; T)`
Construct an MRI encoding operator model for a `D`-dimensional Cartesian sampling pattern `samp` and coil sensitivity maps `smaps`.
The input `smaps` can either be a `D+1`-dimensional array of size `(size(samp)..., ncoil)`, or a Vector of `ncoil` arrays of size `size(samp)`.
Input
- `samp::AbstractArray{<:Bool}` `D`-dimensional sampling pattern
- `smaps::Vector{<:AbstractArray{<:Number}}` or `AbstractArray{<:Number}` coil sensitivity maps
Option
- `T::Type = ComplexF32`
- `dims = 1:D` apply fft/bfft only along these dimensions
- `fft_forward::Bool = true` use `false` to have `bfft!` in the forward model
- `unitary::Bool = false` set to `true` for a unitary DFT
- remaining arguments are passed to `plan_fft`
Output
Returns a `LinearMapsAA.LinearMapAO` object.
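A minimal construction sketch (coil count, image size, and variable names are illustrative assumptions):
```julia
using MIRT
samp = trues(16, 16)                              # fully sampled Cartesian pattern
smaps = [randn(ComplexF32, 16, 16) for _ in 1:4]  # 4 hypothetical coil sensitivity maps
A = Asense(samp, smaps)                           # LinearMapAO encoding operator
x = randn(ComplexF32, 16, 16)                     # image
y = A * x                                         # multi-coil k-space data
xadj = A' * y                                     # adjoint: coil-combined image
```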
MIRT._show_struct
— Method
`_show_struct(io::IO, ::MIME"text/plain", st::Any)`
Informative way to show fields of a struct (composite type).
MIRT.caller_name
— Method
`caller_name()` or `caller_name(;level=4)`
Return "filename line fun():" as a string describing where this function was called.
Stack levels:
- 1: #caller_name
- 2: caller_name()
- 3: function that invoked caller_name()
- 4: the calling function we want to return
Hence the default level is 4, but it is incremented by one in case the user says `@show caller_name()`, in which case stack[3] is a macro expansion.
MIRT.diff_adj
— Method
`Z = diff_adj(dx, N::Dims{D} ; dims = 1:D)`
Adjoint of finite differences of arrays along one or more dimensions. By default performs the same operations as $vec(Z) = [(I_{N_d} \otimes \cdots \otimes D_{N_1}); \dots; (D_{N_d} \otimes \cdots \otimes I_{N_1})]' * d$ where `D_N` denotes the `N-1 × N` 1D finite difference matrix and `⊗` denotes the Kronecker product, but does it efficiently without using `spdiagm` (or any `SparseArrays` function).
in
- `dx` vector of typical length `N_d*...*(N_1-1) + ... + (N_d-1)*...*N_1`
- `N::Dims` desired output size
option
- `dims` dimension(s) for performing adjoint finite differences; default `1:ndims(X)`
out
- `Z` `N_1 × ... × N_d` array by default
MIRT.diff_forw
— Method
`d = diff_forw(X ; dims = 1:ndims(X))`
Finite differences along one or more dimensions of an array, e.g., for anisotropic TV regularization.
By default performs the same operations as $d = [(I_{N_d} \otimes \cdots \otimes D_{N_1}); \dots; (D_{N_d} \otimes \cdots \otimes I_{N_1})] vec(X)$ where $D_N$ denotes the `N-1 × N` 1D finite difference matrix and `⊗` denotes the Kronecker product, but does it efficiently without using `spdiagm` (or any `SparseArrays` function).
Input dimension `N` must exceed 1 for each dimension specified by `dims`.
in
- `X` `N_1 × ... × N_d` array (typically an N-D image)
option
- `dims` dimension(s) for performing finite differences; default `1:ndims(X)`; must have unique elements and be a nonempty subset of `1:ndims(X)`
out
- `d` vector of default length `N_d*...*(N_1-1) + ... + (N_d-1)*...*N_1`
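A small worked sketch of `diff_forw` and its adjoint `diff_adj` (the expected values follow the Kronecker ordering above, with differences along dimension 1 first):
```julia
using MIRT
X = [1 2; 3 5]             # 2×2 "image"
d = diff_forw(X)           # expected [2, 3, 1, 2]: X[2,:]-X[1,:], then X[:,2]-X[:,1]
Z = diff_adj(d, size(X))   # adjoint maps the difference vector back to a 2×2 array
```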
MIRT.diff_map
— Method
`T = diff_map(N::Dims{D} ; dims = 1:D)`
in
- `N::Dims` image size
out
- `T` `LinearMapAA` object for computing finite differences via `T*x`
MIRT.diffl!
— Method
`diffl!(g::AbstractArray, x::AbstractArray, dims::AbstractVector{Int} ; ...)`
When `x` is an `N`-dimensional array, the `i`th slice of the `g` array (along its last dimension) is the `diffl!` of `x` along `dims[i]`. This is useful for total variation (TV) and other regularizers that need finite differences along multiple dimensions.
MIRT.diffl!
— Method
`diffl!(g::AbstractArray, x::AbstractArray, dim::Int ; ...)`
Apply the left finite difference operator to input array `x`, storing the result "in-place" in the pre-allocated output array `g`. (The letter `g` is mnemonic for "gradient".)
Arrays `g` and `x` must have the same size, and cannot alias. By default, the "first" elements of `g` are zero for dimension `dim`. The default is `dim=1`.
Option:
- `add::Bool = false` use `x[i] + x[i-1]` instead of `x[i] - x[i-1]` (useful for certain diagonal majorizers)
- `edge::Symbol = :zero` set the first elements of dimension `dim` to 0
  - Choose `edge=:circ` to use circulant (aka periodic) boundary conditions.
  - Choose `edge=:none` to leave the first elements untouched.
Example
julia> x = [2, 6, 7]; g = similar(x); diffl!(g, x)
3-element Vector{Int64}:
0
4
1
MIRT.diffl
— Method
`g = diffl(x::AbstractArray, dims::AbstractVector{Int} ; ...)`
Allocating version of `diffl!` for `dims`.
MIRT.diffl
— Method
`g = diffl(x::AbstractArray, dim::Int ; ...)`
Allocating version of `diffl!` along `dim`.
Example
julia> x = [1,2] .+ [10 30 70]; g = diffl(x, 2)
2×3 Matrix{Int64}:
0 20 40
0 20 40
MIRT.diffl
— Method
`g = diffl(x::AbstractArray ; ...)`
Allocating version of `diffl!` along `dim=1`.
MIRT.diffl_adj!
— Method
`diffl_adj!(z::AbstractArray, g::AbstractArray, dims::AbstractVector{Int} ; ...)`
Adjoint of `diffl!` for multiple dimensions `dims`. Here `g` must have one more dimension than `z`.
MIRT.diffl_adj!
— Method
`diffl_adj!(z, g, dim::Int ; ...)`
Adjoint of the left finite difference `diffl!`, in-place. Arrays `z` and `g` must be the same size. See `diffl_adj` for details.
MIRT.diffl_adj
— Method
`z = diffl_adj(g::AbstractArray, dim::Int ; ...)`
Allocating version of `diffl_adj!` along `dim`.
Example
julia> g = ones(Int,2,3); z = diffl_adj(g, 2)
2×3 Matrix{Int64}:
-1 0 1
-1 0 1
MIRT.diffl_adj
— Method
`z = diffl_adj(g::AbstractArray ; ...)`
Allocating version of `diffl_adj!` along `dim=1`.
Example
julia> g = [0, 2, 5]; z = diffl_adj(g)
3-element Vector{Int64}:
-2
-3
5
MIRT.diffl_adj
— Method
`z = diffl_adj(g::AbstractArray, dims::AbstractVector{Int} ; ...)`
Allocating version of `diffl_adj!` for `dims`.
MIRT.diffl_map
— Method
`T = diffl_map(N::Dims{D}, dims::AbstractVector{Int} ; kwargs...)`
`T = diffl_map(N::Dims{D}, dim::Int ; kwargs...)`
in
- `N::Dims` image size
options: see `diffl!`
- `T::Type` for `LinearMapAA`, default `Float32`
- `operator::Bool = true` use `false` for `LinearMapAM`
out
- `T` `LinearMapAA` object for computing finite differences via `T*x`, using `diffl!` and `diffl_adj!`
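A usage sketch (assuming the default `operator=true`, so `T` acts on arrays of size `N`; sizes are illustrative):
```julia
using MIRT
T = diffl_map((4, 4), [1, 2])   # left differences along both dimensions
x = randn(Float32, 4, 4)
g = T * x                       # stacked differences (see the dims form of diffl!)
xb = T' * g                     # adjoint via diffl_adj!
```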
MIRT.downsample1
— Method
`y = downsample1(x, down ; warn=true)`
Downsample a 1D vector by factor `down`.
in
- `x [n1]`
- `down::Int` downsampling factor
option
- `warn::Bool` warn if noninteger multiple; default `isinteractive()`
out
- `y [n1/down]`
MIRT.downsample2
— Method
`y = downsample2(x, down ; warn=true, T)`
Downsample by averaging by integer factors.
in
- `x [nx ny]`
- `down` can be a scalar (same factor for both dimensions) or a `NTuple{2,Int}`
option
- `warn::Bool` warn if noninteger multiple; default `isinteractive()`
- `T::Type` specify output eltype; default `eltype(x[1] / down[1])`
out
- `y [nx/down ny/down]`
MIRT.downsample3
— Method
`y = downsample3(x, down ; warn=true, T)`
Downsample by averaging by integer factors.
in
- `x (nx,ny,nz)`
- `down` can be a scalar (same factor for all dimensions) or a `NTuple{3,Int}`
option
- `warn::Bool` warn if noninteger multiple; default true
- `T::Type` specify output eltype; default `eltype(x[1] / down[1])`
out
- `y (nx/down,ny/down,nz/down)`
MIRT.downsample_dim1
— Method
`y = downsample_dim1(x, down ; warn::Bool)`
Down-sample `x` by factor `down` along the first dimension by averaging.
in
- `x [n1 (Nd)]`
- `down::Int` downsampling factor
option
- `warn::Bool` warn if non-integer multiple; default `isinteractive()`
out
- `y [n1÷down (Nd)]`
MIRT.dtft
— Method
`X = dtft(w, x ; n_shift=?)`
Multi-dimensional DTFT (DSFT)
$X[m] = \sum_{n=0}^{N-1} x[n] \exp(-i w[m,:] (n - n_{shift})), m=1,…,M$ where here `n` is a `CartesianIndex`.
in
- `w::AbstractMatrix{<:Real} (M,D)` frequency locations ("units" radians/sample)
- `x::AbstractArray{<:Number} [(Nd)]` D-dimensional signal
option
- `n_shift::AbstractVector{<:Real}` often is N/2; default zeros(D)
out
- `X::AbstractVector{ComplexF64} (M)` DTFT
MIRT.dtft
— Method
`X = dtft(w, x ; n_shift=?)`
1D DTFT
$X[m] = \sum_{n=0}^{N-1} x[n] \exp(-i w[m] (n - n_{shift})), m=1,…,M$
in
- `w::AbstractVector{<:Real} (M)` frequency locations ("units" radians/sample)
- `x::AbstractVector{<:Number} (N)` 1D signal
option
- `n_shift::Real` often is N/2; default 0
out
- `X::AbstractVector{ComplexF64} [M]` DTFT
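A brief 1D sketch (the signal and frequency grid are arbitrary illustrations):
```julia
using MIRT
N = 16
x = randn(ComplexF32, N)
w = range(-π, π, length=50)   # frequencies in radians/sample
X = dtft(w, x)                # 50 exact DTFT samples (O(MN) cost)
xadj = dtft_adj(w, X, N)      # adjoint (not an inverse)
```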
MIRT.dtft_adj
— Method
`x = dtft_adj(w, X, N ; n_shift=?)`
Adjoint for multi-dimensional DTFT (DSFT)
$x[n] = \sum_{m=1}^M X[m] \exp(i w[m,:] (n - n_{shift})), n=0,…,N-1$ where here `n` is a `CartesianIndex`.
in
- `X::AbstractVector{ComplexF64} (M)` DTFT
- `w::AbstractMatrix{<:Real} (M,D)` frequency locations ("units" radians/sample)
- `N::Dims (D)` dimensions of signal `x`
option
- `n_shift::AbstractVector{<:Real}` often is `N/2`; default `zeros(D)`
- `T::Type` default `(eltype(w) == Float64) ? ComplexF64 : ComplexF32`
out
- `x::AbstractArray{<:Number} [(N)]` `D`-dimensional signal
MIRT.dtft_adj
— Method
`x = dtft_adj(w, X, N ; n_shift=?)`
Adjoint for 1D DTFT
$x[n] = \sum_{m=1}^M X[m] \exp(i w[m] (n - n_{shift})), n=0,…,N-1$
This is the adjoint (transpose) of `dtft`, not an inverse DTFT.
in
- `w::AbstractVector{<:Real} (M)` frequency locations ("units" radians/sample)
- `X::AbstractVector{ComplexF64} (M)` spectrum values
- `N::Int` size of signal `x`
option
- `n_shift::Real` often is N/2; default 0
- `T::Type` output data type; default `ComplexF64`
out
- `x::AbstractVector{<:Number} (N)` signal
MIRT.dtft_init
— Method
`d = dtft_init(w, N ; n_shift=?)`
For multi-dimensional DTFT (DSFT)
in
- `w::AbstractMatrix{<:Real} (M,D)` frequency locations ("units" radians/sample)
- `N::Dims [D]` dimensions of signal `x`
option
- `n_shift::AbstractVector{<:Real}` often is N/2; default zeros(D)
- `T::Type` default `ComplexF64` for testing NUFFT accuracy
out
- `d::NamedTuple` with fields `dtft = x -> dtft(x), adjoint = y -> dtft_adj(y), A = LinearMapAO`
MIRT.dtft_init
— Method
`d = dtft_init(w, N ; n_shift=?)`
For 1D DTFT
in
- `w::AbstractVector{<:Real} (M)` frequency locations ("units" radians/sample)
- `N::Int` size of signal `x`
option
- `n_shift::Real` often is N/2; default 0
- `T::Type` default `ComplexF64` for testing NUFFT accuracy
out
- `d::NamedTuple` with fields `dtft = x -> dtft(x), adjoint = y -> dtft_adj(y), A = LinearMapAO`
MIRT.embed!
— Method
`embed!(array, v, mask ; filler=0)`
Embed vector `v` of length `sum(mask)` into elements of `array` where `mask` is `true`, setting the remaining elements to `filler` (default 0).
MIRT.embed
— Method
`array = embed(matrix::AbstractMatrix{<:Number}, mask::AbstractArray{Bool})`
Embed each column of `matrix` into `mask`, then `cat` along the next dimension.
In:
- `matrix [sum(mask) L]`
- `mask [(N)]`
Out:
- `array [(N) L]`
MIRT.embed
— Method
`array = embed(v, mask ; filler=0)`
Embed vector `v` of length `sum(mask)` into elements of an array where `mask` is `true`; see `embed!`.
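A tiny sketch of `embed` and `embed!` (mask and values chosen arbitrarily; elements are placed where `mask` is true, in column-major order):
```julia
using MIRT
mask = [true false; true true]
v = [1, 2, 3]
A = embed(v, mask)          # 2×2 array; zeros where mask is false
B = zeros(Int, size(mask))
embed!(B, v, mask)          # in-place version; same result as A
```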
MIRT.eql_root
— Method
`x = eql_root(a,b,c)`
Numerically stable method for computing the positive root of the quadratic polynomial `-ax^2 - 2bx + c`, with `a >= 0`. Assumes solvable equations; will throw otherwise.
in
- `a` the negative of the `x^2` coefficient; must be nonnegative
- `b` half the negative of the `x` coefficient
- `c` the constant term
out
- `x` the positive root that satisfies `0 = -ax^2 - 2bx + c`
MIRT.exp_mult
— Method
`D = exp_mult(A, u, v ; warnboth)`
Memory-efficient and fast implementation of `D = A' * exp(-u * v^T)`, useful for B0-field-corrected MRI image reconstruction.
in
- `A [N L]` matrix
- `u [N]` vector
- `v [M]` vector
- `warnboth` warn if both `u` and `v` are complex; default: true
out
- `D [L M]` complex matrix: `D = A' * exp(-u * v^T)`, i.e., `D_lm = sum_n A_nl^* exp(-u_n v_m)`
MIRT.exp_xform
— Method
`exp_xform(x, u, v ; mode::Symbol = :matrix)`
in
- `x [N L]` possibly complex vector(s)
- `u [D N]` possibly complex vectors
- `v [D M]` possibly complex vectors
- `mode::Symbol` `:matrix` (default) | `:element` | `:row` | `:column`
out
- `y [M L]` typically complex vector, `y[m,l] = sum_n x[n,l] exp(-sum(u[:,n] .* v[:,m]))`
Iterates through subsets of the ML matrix designated by `:mode` (i.e., row, column, element, or just computing the entire matrix). This is the 'slow' 'exact' transform model for MRI.
Output type will depend on input types.
MIRT.genkspace
— Method
`genkspace`
Generate the proper length of k-space trajectory. It linearly interpolates the output of `genspiral` to the correct length and takes care of the rotations for the interleaves.
- `ld` is the length of the data
- `nint` is the number of interleaves
Brad Sutton, University of Michigan
MIRT.genspi
— Method
`Gx, Gy, kx, ky, sx, sy, gts = genspi(...)`
This is a translation of C code from the scanner: exactly what is played out to the gradients at 4 μs.
Multi-shot spiral design uses Duyn's approximate slew-rate-limited design, augmented with an archimedean `gmax` limit.
in [args]
- `D` = FOV, cm
- `N` = matrix size
- `Tmax` = longest acquisition allowed, s
- `dts` = output sample spacing, s
- `gtype` = trajectory type
option [CVs]
- `nl` = number of interleaves
- `gamp` = design grad max, G/cm
- `gslew` = design slew rate, mT/m/ms
- `nramp` = number of rampdown points; default 0
out
- `Gx; Gy` time is in sec
- rev 0 12/26/98 original
- rev 1 4/15/99 little better calc of ts
Borrowed from Doug Noll, Univ. of Michigan. Modified to take more input CVs.
MIRT.getindex!
— Method
`getindex!(y::AbstractVector, x::AbstractArray{T,D}, mask::AbstractArray{Bool,D})`
Equivalent to the in-place `y .= x[mask]`, but non-allocating.
For non-Boolean indexing, just use `@views y .= A[index]`, per https://discourse.julialang.org/t/efficient-non-allocating-in-place-getindex-for-bitarray/42268
MIRT.interp1
— Method
`yi = interp1(x, y, xi ; how=Gridded(Linear()), extrap=0)`
1D interpolation of `y = f(x)` at points `xi`.
In:
- `x::AbstractVector{<:Real}`
- `y::AbstractVector{<:Number}`
Option:
- `how::Interpolations.InterpolationType` default `Gridded(Linear())`
- `extrap::Any` how to extrapolate, e.g., `Flat()`; default `0`; other options from Interpolations.jl are `Line()`, `Periodic()`, `Reflect()`, `Throw()`
Output is the same size as input `xi`.
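A short sketch (values illustrative; points outside `x` extrapolate to 0 by default):
```julia
using MIRT
x = [1, 2, 3]
y = [10, 20, 30]
yi = interp1(x, y, [1.5, 2.5, 4.0])   # expected ≈ [15.0, 25.0, 0.0]
```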
MIRT.ir_dump
— Method
`ir_dump(x::Any ; io::IO = stdout)`
`ir_dump(io::IO, x::Any)`
Show all the fields of a structure or NamedTuple more nicely than dump() does.
MIRT.ir_load_brainweb_t1_256
— Method
`data = ir_load_brainweb_t1_256()`
Load a BrainWeb T1-weighted MRI slice of size `256 × 256`.
MIRT.ir_mri_coil_compress
— Method
`(odata, σ, Vr) = ir_mri_coil_compress(idata ; ncoil)`
MRI coil compression via PCA. Given multiple MRI surface coil images (`idata`), use SVD/PCA to find a smaller number of virtual coil images (`odata`).
In:
- `idata [(N) n_in]`: noisy complex images (2D or 3D) for each coil
Option:
- `ncoil` desired # of virtual coils (default: 1)
Out:
- `odata [(N) ncoil]`: virtual coil images
- `σ [n_in]`: singular values
- `Vr [n_in, ncoil]`: compression matrix for reducing other data
todo: currently ignores noise correlations
MIRT.ir_mri_kspace_ga_radial
— Method
`kspace = ir_mri_kspace_ga_radial(; Nro=?, Nspoke=?, ...)`
Generate a k-space sampling pattern for "golden angle" radial sampling.
option
- `Nro::Int` number of samples in each readout/spoke, default 256
- `Nspoke::Int` number of spokes, default 1
- `start::Real` first angle in series [radians], default π/2
- `angle::Real` angular spacing [radians], default GA
- `delta_ro::Real` readout spacing, default `1/Nro`
- `shift::Real` shift due to gradient delays, default 0
  - radial sample locations are `ir * delta_ro` where `ir = [-(Nro/2 - 1):1:Nro/2] + shift`
out
- `kspace [Nro Nspoke 2]` (Float32) `kx` and `ky` k-space locations for `Nspoke*Nro` samples in the interval `(-0.5, 0.5]` for the default `shift` and `delta_ro`, so the default units are "cycles / sample"
MIRT.ir_mri_sensemap_sim
— Method
`(smap,info) = ir_mri_sensemap_sim( :all ; kwargs)`
Like `ir_mri_sensemap_sim`, but also returns `info` with data for all coils, mainly for testing and plotting.
MIRT.ir_mri_sensemap_sim
— Method
`(smap,info) = ir_mri_sensemap_sim( ir_ic_pair ; kwargs)`
Like `ir_mri_sensemap_sim`, but also returns `info` with data for specific coils, where `ir_ic_pair::Vector{Tuple{Int,Int}}`. (Usually used internally only.)
- `info::NamedTuple` geometry information for plots
MIRT.ir_mri_sensemap_sim
— Method
`smap = ir_mri_sensemap_sim(...)`
Simulate 2D or 3D sensitivity maps for sensitivity-encoded MRI based on grivich:00:tmf.
This code makes maps for multiple coils, but does not model coupling between coils, so most likely it is an approximation at best.
option
- `dims::Dims` image size; default (64, 64)
- `dx::Real` pixel/voxel dimension; default: 3
- `dy::Real` pixel/voxel dimension; default: `dx`
- `dz::Real` ""
- `ncoil::Int` # of coils total; default 4
- `nring::Int` # of rings of coils; default 1
- `rcoil::Real` coil radius; default `dx * nx / 2 * 0.50`
- `dz_coil` ring spacing in z; default `nz*dz/nring` (3D geometry is a cylinder)
- `coil_distance::Real` distance of coil center from isocenter, for the central ring of coils, as a multiple of `FOVx`, where `FOVx = nx*dx`; default 1.2
- `orbit::Real` default 360 [degrees]
- `orbit_start::AbstractVector{<:Real} = fill(0, nring)` [degrees]
- `scale::Symbol` `:none` (default) | `:ssos_center` make SSoS of center = 1
out
- `smap [dims ncoil]` simulated sensitivity maps (complex!)
All length parameters must have the same units (e.g., mm or cm).
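A minimal simulation sketch (keyword values are just examples):
```julia
using MIRT
smap = ir_mri_sensemap_sim(dims=(64, 64), ncoil=8)   # 64×64×8 complex sensitivity maps
```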
MIRT.ir_mri_sensemap_sim_do
— Method
`(smap, info) = ir_mri_sensemap_sim_do()`
MIRT.ir_mri_smap1
— Method
`ir_mri_smap1()`
Based on grivich:00:tmf for a circular coil of radius "a" in the "x-y plane".
Note that the coil x-y plane is not the same as the object x-y plane!
Returns the `(i,j,k)` components of the $B$ vector for each `(x,y,z)` location.
MIRT.ir_mri_smap_r
— Method
`ir_mri_smap_r(r, z)`
Function for testing near 0.
MIRT.jinc
— Method
`jinc(x)`
Return `jinc(x) = J1(pi*x)/(2x)`, where `J1` is a Bessel function of the first kind.
Units of `x` are typically cycles/m.
Return type is `promote_type(typeof(x), Float32)`.
MIRT.line_search_mm
— Method
`α = line_search_mm(args...; opt, fun, kwargs...)`
Line search based on the majorize-minimize (MM) approach. This is a wrapper around the iterator `LineSearchMM`. See its constructors for `args` and other `kwargs`.
option
- `fun(state)` user-defined function to be evaluated with the `state`, initially and then after each iteration
- `out::Union{Nothing,Vector{Any}} = nothing` optional place to store the result of `fun` for iterates `0,…,ninner`. (All `missing` by default.) This is a `Vector{Any}` of length `ninner+1`.
output
- `α` final iterate
This function mutates the optional arguments `out` and `work`.
MIRT.make_dot_curvf
— Method
`make_dot_curvf(curv::Function, [x, Tf::Type; w = similar(x)])`
Make a function with arguments `(v, x)` that computes the dot product between `abs2.(v)` and `curv(x)`, where `curv(x)` is a curvature function associated with a quadratic majorizer for some real-valued cost function `f(x)`, evaluated at array `x`, typically for use in a line-search method.
- For a single-argument curvature function `curv(x)`, this returns the version equivalent to `(v, x) -> dot(abs2.(v), curv(x))`. This version will be allocating, unless `curv` has its own internal workspace due to a closure. This version expects only the argument `curv`.
- For a two-argument mutating curvature function `curv!(w, x)`, this returns a function equivalent to `(v, x) -> dot(abs2.(v), curv!(w, x))` using the keyword argument `w` as the work array.
- If `curv` is simply a real number (a Lipschitz constant), possibly with units, then this returns the function `(v, x) -> curv * sum(abs2, v)`.
If `f(x)` maps an Array `x` of elements with units `Ux` into real numbers with units `Uf`, then its curvature has units `Uf/Ux^2`. Those units are relevant to defining the work array `w`.
in
- `curv::Function` see above
- `x` an array whose `size` and `eltype` is used to allocate `w`
option
- `Tf::Type = typeof(one(eltype(x)))` specify the `eltype` of function `f(x)`, defaulting to unitless
- `w = similar(x, typeof(oneunit(Tf) / oneunit(eltype(x))^2))` work space for the curvature calculation, with appropriate units (if needed)
out
- `(v, x) -> dot(abs2.(v), curv([w,] x))` or equivalent
MIRT.make_dot_gradf
— Method
`make_dot_gradf(grad::Function, [x, Tf::Type; w = similar(x)])`
Make a function with arguments `(v, x)` that computes the dot product between an array `v` and the gradient `grad(x)` of some real-valued cost function `f(x)`, evaluated at array `x`, typically for use in a line-search method.
- For a single-argument gradient function `grad(x)`, this returns the version `(v, x) -> dot(v, grad(x))`. This version will be allocating, unless `grad` has its own internal workspace due to a closure. This version expects only the argument `grad`.
- For a two-argument mutating gradient function `grad!(w, x)`, this returns a function `(v, x) -> dot(v, grad!(w, x))` using the keyword argument `w` as the work array.
If `f(x)` maps an Array `x` of elements with units `Ux` into real numbers with units `Uf`, then the gradient `∇f` has units `Uf/Ux`. Those units are relevant to defining the work array `w`.
in
- `grad::Function` see above
- `x` an array whose `size` and `eltype` is used to allocate `w`
- `Tf::Type = typeof(one(eltype(x)))` specify the `eltype` of function `f(x)`, defaulting to unitless
option
- `w = similar(x, typeof(oneunit(Tf) / oneunit(eltype(x))))` work space for the gradient calculation, with appropriate units (if needed)
out
- `(v, x) -> dot(v, grad([w,] x))` or equivalent
Example. Consider the cost function `f(x) = sum(sin.(x))` that has gradient `g(x) = cos.(x)`. The simple way here is `make_dot_gradf(g)`, which will return the function `(v, x) -> dot(v, g(x)) === dot(v, cos.(x))`. That will work fine, but the `cos.(x)` step will be allocating. A better way (for repeated use) is
function g!(work, x)
    @. work = cos(x) # mutating
    return work
end
Then `make_dot_gradf(g!, x)` will return a function `(v, x)` that does not allocate (except when first generated via a closure). For this particular example, an even better approach is to directly define `dot_gradf(v,x) = sum(vx -> vx[1]*cos(vx[2]), zip(v,x))`, but `make_dot_gradf` is here for more general cases.
MIRT.map_many
— Method
`y = map_many(fun::Function, x::AbstractArray{<:Any}, idim::Dims)`
Apply a function `fun` to leading slices of input `x`; a cousin of `mapslices`.
in
- `fun::Function` maps an input of size `idim` to an output of some size `odim`
- `x [idim ldim]`
out
- `y [odim ldim]`
Example: if `fun` maps an array of size (1,2) to an array of size (3,4,5), and if input `x` has size (1,2,7,8), then output `y` will have size (3,4,5,7,8), where `y[:,:,:,i,j] = fun(x[:,:,i,j])`.
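A small sketch (the function and sizes here are arbitrary illustrations):
```julia
using MIRT
x = randn(2, 3, 5)            # leading slices of size (2,3); 5 trailing slices
f = s -> reshape(s, 3, 2)     # maps a (2,3) array to a (3,2) array
y = map_many(f, x, (2, 3))    # y has size (3, 2, 5)
```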
MIRT.mask_or
— Method
`mask_or(mask)`
Compress a 3D mask to 2D by logical `or` along the `z` direction.
MIRT.mask_outline
— Method
`mask_outline(mask)`
Return the outer boundary of a 2D mask (or `mask_or` for 3D).
MIRT.maskit
— Method
`maskit(x::AbstractArray{<:Number})`
Opposite of `embed`.
MIRT.max_percent_diff
— Method
`max_percent_diff(s1, s2, [options])`
Compute the "maximum percent difference" between two signals `s1, s2`.
Default is to normalize by `maximum(abs, s1)`.
options
- `maxboth::Bool = false` use the max of both arguments to normalize
- `normalize::Bool = false` normalize each before comparing
MIRT.mri_kspace_spiral
— Method
`kspace, omega, gradxy = mri_kspace_spiral( [options] )`
Make a k-space spiral trajectory based on GE 3T scanner constraints.
Option:
- `N` dimension of reconstructed image
- `Nt` # of time points
- `fov` field of view in cm
- `dt` time sampling interval out; default `5e-6` sec
- `gamp::Real` design gradient amplitude max, G/cm; default 2.2
- `gslew::Int` design slew rate, mT/m/ms; default 180
Out:
- `kspace [Nt,2]` kspace trajectory `[kx ky]` in cycles/cm, NO: cycles/FOV
- `omega [Nt,2]` "" in radians
- `gradxy [Nt 2]` gradient waveforms in (units?)
MIRT.mri_trajectory
— Method
`kspace, omega, wi = mri_trajectory( ; ktype, N, fov, arg_wi, kwargs...)`
Generate kspace trajectory samples and density compensation functions.
option
- `ktype::Symbol` k-space trajectory type; default `:radial`
- `N::Dims` target image size; default (32,30)
- `fov` field of view in x and y (and z); default (250,250) mm
- `arg_wi` options to pass to `ir_mri_density_comp` - not yet done
- `kwargs` options for the specific trajectory
out
- `kspace [Nk 2|3]` kspace samples in units 1/fov
- `omega [Nk 2|3]` trajectory samples over [-π,π)
- `wi [Nk 1]` (optional) density compensation factors
trajectory types:
`:cartesian` `:radial` `:cart_y_2` `:random` `:half8` `:epi_sin` `:spiral0` `:spiral1` `:spiral3` `:rosette3` `:epi_under` `:gads` (emulate golden-angle data sharing per winkelmann:07:aor)
MIRT.mri_trajectory_gads
— Method
`omega, wi = mri_trajectory_gads(N, fov ; ...)`
Emulate 2D golden-angle radial sampling with data sharing.
option
- `Nro` # of samples in each readout/spoke
- `shift` shift along read-out due to gradient delays (stress)
- `kmax_frac` fractions of maximum krad (0.5) for rings (annuli)
- `under` under-sampling factor for each annulus
MIRT.mri_trajectory_radial
— Method
`mri_trajectory_radial()`
option
- `na_nr` default ensures proper sampling at the edge of k-space
- `na` angular spokes; default: `na_nr * nr`
- `nr` radial samples per spoke
- `ir` default: `0:nr`
todo: generalize to 3D using barger:02:trc
MIRT.mri_trajectory_rosette3
— Method
`mri_trajectory_rosette3(N, fov ; ...)`
3D rosette, with default parameters from bucholz:08:miw.
option
- `omax` maximum omega
- `nt` time samples (65.536 ms for 4 μs dt)
- `dt` time sample spacing (4 μs)
- `ti` time samples
MIRT.ncg
— Method
`(x,out) = ncg(B, gradf, curvf, x0 ; ...)`
Nonlinear preconditioned conjugate gradient algorithm to minimize a general "inverse problem" cost function of the form $\Psi(x) = \sum_{j=1}^J f_j(B_j x)$ where each function $f_j(v)$ has a quadratic majorizer of the form
\[q_j(v;u) = f_j(u) + \nabla f_j(u) (v - u) + 1/2 \|v - u\|^2_{C_j(u)}\]
where $C_j(u)$ is a diagonal matrix of curvatures. (It suffices for each $f_j$ to have a Lipschitz smooth gradient.)
This CG method uses a majorize-minimize (MM) line search.
in
- `B` vector of $J$ blocks $B_1,…,B_J$
- `gradf` vector of $J$ functions that return the gradients of $f_1,…,f_J$
- `curvf` vector of $J$ functions `z -> curv(z)` that return a scalar or a vector of curvature values for each element of $z$
- `x0` initial guess; need `length(x) == size(B[j],2)` for $j=1,…,J$
Usually `x0` is a `Vector`, but it can be an `Array` if each `B_j` is a linear operator (e.g., `LinearMapAO`) of suitable "dimensions".
option
- `niter` number of outer iterations; default 50
- `ninner` number of inner iterations of MM line search; default 5
- `P` preconditioner; default `I`
- `betahow` "beta" method for the search direction; default `:dai_yuan`
- `fun` user-defined function to be evaluated with two arguments (x,iter); it is evaluated at `(x0,0)` and then after each iteration
output
- `x` final iterate
- `out [niter+1]` `(fun(x0,0), fun(x1,1), ..., fun(x_niter,niter))` (all 0 by default); this is an array of length `niter+1`
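A least-squares sketch (the problem data are random placeholders): minimizing $\Psi(x) = 1/2 \|A x - y\|^2$ uses $J = 1$ with $f(u) = 1/2 \|u - y\|^2$:
```julia
using MIRT, LinearAlgebra
A = randn(20, 10); y = randn(20)
B = [A]                  # J = 1 block
gradf = [u -> u - y]     # ∇f(u) = u - y
curvf = [u -> 1]         # curvature (Lipschitz constant) of f is 1
x0 = zeros(10)
xhat, out = ncg(B, gradf, curvf, x0; niter=50)
# xhat should be close to the least-squares solution A \ y
```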
MIRT.ncg
— Method
`(x,out) = ncg(grad, curv, x0, ...)`
Special case of `ncg` (nonlinear CG) for minimizing a cost function whose gradient is `grad(x)` and that has a quadratic majorizer with diagonal Hessian given by `curv(x)`. Typically `curv = (x) -> L` where `L` is the Lipschitz constant of `grad`.
MIRT.nufft_eltype
— Method
`nufft_eltype(::Type)`
Ensure the `plan_nfft` eltype is Float32 or Float64.
MIRT.nufft_errors
— Method
`w, errs = nufft_errors( ; M=401, w=?, N=513, n_shift=0, ...)`
Compute NUFFT approximation errors (for a signal of length `N` of unit norm), for given digital frequency values `w`, i.e., Ω. Default `w` is `range(0, 2π/N, M)`.
MIRT.nufft_init
— Method
`p = nufft_init(w, N ; nfft_m=4, nfft_sigma=2.0, pi_error=true, n_shift=0)`
Setup 1D NUFFT, for computing a fast $O(N \log N)$ approximation to
$X[m] = \sum_{n=0}^{N-1} x[n] \exp(-i w[m] (n - n_{shift})), m=1,…,M$
in
- `w::AbstractArray{<:Real} [M]` frequency locations (aka Ω, units radians/sample); `eltype(w)` determines the `plan_nfft` type, so to save memory use Float32!; `size(w)` determines `odim` for `A` if `operator=true`
- `N::Int` signal length
option
- `nfft_m::Int` see NFFT.jl documentation; default 4
- `nfft_sigma::Real` "", default 2.0
- `n_shift::Real` often is N/2; default 0
- `pi_error::Bool` throw an error if $|w| > π$; default `true`. Set to `false` only if you are very sure of what you are doing!
- `do_many::Bool` support extended inputs via `map_many`? default `true`
- `operator::Bool=true` set to `false` to make `A` a `LinearMapAM`
out
- `p NamedTuple` `(nufft = x -> nufft(x), adjoint = y -> nufft_adj(y), A::LinearMapAO)`
The default settings are such that for a 1D signal of length N=512, the worst-case error is below 1e-5, which is probably adequate for typical medical imaging applications. To verify this statement, run `nufft_plot1()` and see the plot.
MIRT.nufft_init
— Method
`p = nufft_init(w, N ; nfft_m=4, nfft_sigma=2.0, pi_error=true, n_shift=?)`
Setup multi-dimensional NUFFT, for computing a fast $O(N \log N)$ approximation to
$X[m] = \sum_{n=0}^{N-1} x[n] \exp(-i w[m,:] (n - n_{shift})), m=1,…,M$
in
- `w::AbstractArray{<:Real} [M,D]` frequency locations (aka Ω, units radians/sample); `eltype(w)` determines the `plan_nfft` type, so to save memory use Float32!; `size(w)[1:(end-1)]` determines `odim` if `operator=true`
- `N::Dims{D}` signal dimensions
option
- `nfft_m::Int` see NFFT.jl documentation; default 4
- `nfft_sigma::Real` "", default 2.0
- `n_shift::AbstractVector{<:Real} (D)` often is N/2; default zeros(D)
- `pi_error::Bool` throw an error if $|w| > π$; default `true`. Set to `false` only if you are very sure of what you are doing!
- `do_many::Bool` support extended inputs via `map_many`? default `true`
- `operator::Bool=true` set to `false` to make `A` a `LinearMapAM`
The default `do_many` option is designed for parallel MRI where the k-space sampling pattern applies to every coil. It may also be useful for dynamic MRI with repeated sampling patterns. The coil and/or time dimensions must come after the spatial dimensions.
out
- `p NamedTuple` with fields `nufft = x -> nufft(x), adjoint = y -> nufft_adj(y), A = LinearMapAO` (using `operator=true` allows the `LinearMapAO` to support `do_many`)
MIRT.nufft_typer
— Method
`nufft_typer(T::Type, x::AbstractArray{<:Real} ; warn::Bool=true)`
Type conversion wrapper for `nfft()`.
MIRT.ogm_ls
— Method
`(x,out) = ogm_ls(B, gradf, curvf, x0; niter=?, ninner=?, fun=?)`
OGM with a line search (Drori & Taylor) to minimize a general "inverse problem" cost function of the form $\Psi(x) = \sum_{j=1}^J f_j(B_j x)$ where each function $f_j(v)$ has a quadratic majorizer of the form
\[q_j(v;u) = f_j(u) + \nabla f_j(u) (v - u) + 1/2 \|v - u\|^2_{C_j(u)}\]
where $C_j(u)$ is a diagonal matrix of curvatures. (It suffices for each $f_j$ to have a Lipschitz smooth gradient.)
This OGM method uses a majorize-minimize (MM) line search.
in
- `B` vector of $J$ blocks $B_1,…,B_J$
- `gradf` vector of $J$ functions that return the gradients of $f_1,…,f_J$
- `curvf` vector of $J$ functions `z -> curv(z)` that return a scalar or a vector of curvature values for each element of $z$
- `x0` initial guess; need `length(x) == size(B[j],2)` for $j=1,…,J$
option
- `niter` number of outer iterations; default 50
- `ninner` number of inner iterations of MM line search; default 5
- `fun` user-defined function to be evaluated with two arguments (x,iter); it is evaluated at (x0,0) and then after each iteration
output
- `x` final iterate
- `out (niter+1)` `(fun(x0,0), fun(x1,1), ..., fun(x_niter,niter))` (all 0 by default); this is a vector of length `niter+1`
MIRT.ogm_ls
— Method
`(x,out) = ogm_ls(grad, curv, x0, ...)`
Special case of `ogm_ls` (OGM with line search) for minimizing a cost function whose gradient is `grad(x)` and that has a quadratic majorizer with diagonal Hessian given by `curv(x)`. Typically `curv = (x) -> L` where `L` is the Lipschitz constant of `grad`.
MIRT.pogm_restart
— Method
`x, out = pogm_restart(x0, Fcost, f_grad, f_L ; f_mu=0, mom=:pogm, restart=:gr, restart_cutoff=0., bsig=1, niter=10, g_prox=(z,c)->z, fun=...)`
Iterative proximal algorithms (PGM=ISTA, FPGM=FISTA, POGM) with restart.
in
- `x0` initial guess
- `Fcost` function for computing the cost function value $F(x)$ (needed only if `restart === :fr`)
- `f_grad` function for computing the gradient of $f(x)$
- `f_L` Lipschitz constant of the gradient of $f(x)$
option
- `f_mu` strong convexity parameter of $f(x)$; default 0; if `f_mu > 0`, then $(\alpha, \beta_k, \gamma_k)$ is chosen by Table 1 in [KF18]
- `g_prox` function `g_prox(z,c)` for the proximal operator for $g(x)$; `g_prox(z,c)` computes $argmin_x 1/2 \|z-x\|^2 + c \, g(x)$
- `mom` momentum option
  - `:pogm` POGM (fastest); default!
  - `:fpgm` (FISTA), $\gamma_k = 0$
  - `:pgm` PGM (ISTA), $\beta_k = \gamma_k = 0$
- `restart` restart option
  - `:gr` gradient restart; default!
  - `:fr` function restart
  - `:none` no restart
- `restart_cutoff` for `:gr`, restart if cos(angle) < this; default 0.
- `bsig` gradient "gamma" decrease option (value within [0 1]); default 1; see $\bar{\sigma}$ in [KF18]
- `niter` number of iterations; default 10
- `fun` function `(iter, xk, yk, is_restart)`, a user-defined function evaluated each `iter` with secondary `xk`, primary `yk`, and boolean `is_restart` indicating whether this iteration was a restart
out
- `x` final iterate
  - for PGM (ISTA): $x_N = y_N$
  - for FPGM (FISTA): primary iterate $y_N$
  - for POGM: secondary iterate $x_N$, see [KF18]
- `out [fun(0, x0, x0, false), fun(1, x1, y1, is_restart), ...]` array of length `niter+1`
Optimization Problem: Nonsmooth Composite Convex Minimization
- $argmin_x F(x), F(x) := f(x) + g(x))$
- $f(x)$ smooth convex function
- $g(x)$ convex function, possibly nonsmooth and "proximal-friendly" [CP11]
Optimization Algorithms:
Accelerated First-order Algorithms when $g(x) = 0$ [KF18] iterate as below for given coefficients $(\alpha, \beta_k, \gamma_k)$
- For k = 0,1,...
- $y_{k+1} = x_k - \alpha f'(x_k)$ : gradient update
- $x_{k+1} = y_{k+1} + \beta_k (y_{k+1} - y_k) + \gamma_k (y_{k+1} - x_k)$ : momentum update
Proximal versions of the above for $g(x) \neq 0$ are in the below references, and use the proximal operator $prox_g(z) = argmin_x {1/2\|z-x\|^2 + g(x)}$.
- Proximal Gradient method (PGM or ISTA) - $\beta_k = \gamma_k = 0$. [BT09]
- Fast Proximal Gradient Method (FPGM or FISTA) - $\gamma_k = 0$. [BT09]
- Proximal Optimized Gradient Method (POGM) - [THG15]
- FPGM(FISTA) with Restart - [OC15]
- POGM with Restart - [KF18]
references
- [CP11] P. L. Combettes, J. C. Pesquet,
"Proximal splitting methods in signal processing," Fixed-Point Algorithms for Inverse Problems in Science and Engineering, Springer, Optimization and Its Applications, 2011.
- [KF18] D. Kim, J.A. Fessler,
"Adaptive restart of the optimized gradient method for convex optimization," 2018 Arxiv:1703.04641, [http://doi.org/10.1007/s10957-018-1287-4]
- [BT09] A. Beck, M. Teboulle:
"A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM J. Imaging Sci., 2009.
- [THG15] A.B. Taylor, J.M. Hendrickx, F. Glineur,
"Exact worst-case performance of first-order algorithms for composite convex optimization," Arxiv:1512.07516, 2015, SIAM J. Opt. 2017 [http://doi.org/10.1137/16m108104x]
Copyright 2017-3-31, Donghwan Kim and Jeff Fessler, University of Michigan 2018-08-13 Julia 0.7.0 2019-02-24 interface redesign
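A LASSO-style sketch (A, b, λ are random placeholders; the proximal operator here is the usual soft threshold for the ℓ₁ norm):
```julia
using MIRT, LinearAlgebra
A = randn(30, 20); b = randn(30); λ = 0.1
f_grad = x -> A' * (A * x - b)                              # ∇f for f(x) = 1/2‖Ax-b‖²
f_L = opnorm(A)^2                                           # Lipschitz constant of ∇f
prox_l1 = (z, c) -> sign.(z) .* max.(abs.(z) .- c * λ, 0)   # prox of c·λ‖·‖₁
Fcost = x -> 0.5 * norm(A * x - b)^2 + λ * norm(x, 1)       # needed only for restart=:fr
x0 = zeros(20)
xhat, out = pogm_restart(x0, Fcost, f_grad, f_L; g_prox=prox_l1, niter=100)
```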
MIRT.poweriter
— Method
`v1, σ1 = poweriter(A; niter=?, ...)`
Determine the first right singular vector `v1` and first singular value `σ1` of `A` by applying power iteration to `A'A`.
in
- `A` M × N matrix
option
- `niter` default 200
- `x0` initial guess of `v1`
- `tol` stopping tolerance for `σ1`; default 1e-6
- `chat::Bool` verbose? default false
out
- `v1 [N]` principal right singular vector
- `σ1` spectral norm of `A`
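A quick sketch comparing against the spectral norm (matrix is a random placeholder):
```julia
using MIRT, LinearAlgebra
A = randn(6, 4)
v1, σ1 = poweriter(A; niter=300)
# σ1 should approximate opnorm(A), the largest singular value of A
```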
MIRT.rect
— Method
`rect(x::Real)`
Unit-width rect function.
MIRT.reverser
— Method
`y = reverser(x, dims)`
Reverse an array along the specified dimensions (or all if unspecified).
MIRT.rmsd100
— Method
`rmsd = rmsd100(x, y ; mask)`
Compute 100 * RMSD (root mean squared difference) between `x` and `y` within the domain `mask`.
in
- `x`: array
- `y`: another array of the same size
option
- `mask::Array{Bool}`: domain over which to compute the RMSD; default `trues(size(x))`
out
- `rmsd`: RMSD of `x` vs `y` within `mask`, in %
MIRT.rotate2d
— Method
`(xr,yr) = rotate2d(x, y, theta)`
2D rotation.
MIRT.snr2sigma
— Method
`snr2sigma(db, yb)`
Convert SNR in dB to noise σ for complex Gaussian noise. No `sqrt(2)` factor is needed here because `randn(Complex{Float})` already accounts for that. (See the `randn` documentation.)
MIRT.@shows
— Macro
`@shows expr`
Show the type and size of an expression `expr` (typically a variable).
More concise output than `@show`, and typically this is all that is needed when debugging code.