Roundoff errors and rank

This example examines the effects of roundoff error associated with finite-precision arithmetic on matrix rank and singular value calculations, using the Julia language. The focus is on rank-1 matrices.

This page comes from a single Julia file: round1.jl.


Add the Julia packages that are needed for this demo. Change false to true in the following code block if you are using any of the following packages for the first time.

if false
    import Pkg
    Pkg.add([
        "InteractiveUtils"
        "LaTeXStrings"
        "LinearAlgebra"
        "MIRTjim"
        "Plots"
        "StatsBase"
    ])
end

Tell Julia to use the following packages for this example. Run Pkg.add() in the preceding code block first, if needed.

using InteractiveUtils: versioninfo
using LaTeXStrings
using LinearAlgebra: rank, svd, svdvals, Diagonal, norm
using MIRTjim: prompt
using Plots: default, gui, plot, plot!, scatter, scatter!, savefig, histogram
using Plots.PlotMeasures: px
using StatsBase: mean, var
default(markerstrokecolor=:auto, label="", widen=true, markersize = 6,
 tickfontsize = 12, labelfontsize = 18, legendfontsize = 18, linewidth=2,
)

The following line is helpful when running this file as a script; this way it will prompt the user to hit a key after each image is displayed.

isinteractive() && prompt(:prompt);

Roundoff errors and singular values

Examine the singular values of a matrix that ideally should have rank=1.

n = 100
T = Float32
u = ones(T, n) / T(sqrt(n))
v = ones(T, n) / T(sqrt(n))
Y = 1 * u * v' # theoretically rank-1 matrix
@assert rank(Y) == 1 # julia is aware of precision limits
sigma = svdvals(Y)
sigma[1], abs(sigma[1] - 1)
(1.0000001f0, 1.1920929f-7)
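
The computed σ₁ differs from 1 by about one unit of Float32 machine precision, and the theoretically nonzero singular value is the product of the norms of u and v. The following quick check (not part of the original script; it reuses norm imported above) makes this comparison:

eps(T) # 1.1920929f-7, the same order as |σ₁ - 1| above
norm(u) * norm(v) # theoretical σ₁, ≈ 1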
xaxis = (L"k", (1,n), [1, n÷2, n])
p1 = scatter(sigma; xaxis, yaxis = (L"σ_k", (-0.02, 1.02), -1:1), color=:blue)
good = findall(sigma .> 0)
xaxis = (L"k", (1,n), [2, n÷2, n])
p2 = scatter(good, log10.(sigma[good]);
    xaxis, yaxis = (L"\log_{10}(σ_k)", (-40, 2), -40:20:0), color=:red)
p12 = plot(p1, p2, layout=(2,1))
# savefig(p12, "round1.pdf")
[Figure: singular values σ_k of the rank-1 Float32 matrix, on a linear scale (top) and a log10 scale (bottom)]
prompt()

Roundoff errors and rank

For a matrix that is sufficiently large relative to the precision of the matrix elements, the default threshold in the rank function can exceed the largest singular value, so the returned rank is 0.

using LinearAlgebra: rank, svdvals
n = 1100 # > 1 / eps(Float16)
T = Float16
u = ones(T, n) / T(sqrt(n))
v = ones(T, n) / T(sqrt(n))
Y = u * v' # theoretically rank-1 matrix
rank(Y) # 0 !
0
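
To see why, recall that rank uses a relative tolerance of roughly minimum(size(Y)) * eps(T) by default (see the rank docstring), so the absolute cutoff is about n * eps(Float16) * σ₁, which here exceeds σ₁ = 1 itself. A sketch of the arithmetic:

1 / eps(Float16) # 1024, so any n > 1024 is "too large" for Float16
n * eps(Float16) # ≈ 1.07 > σ₁ = 1, so every singular value falls below the cutoff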

The following plot shows the singular values and the threshold used by the rank function. There is a gap with a ratio of about $10^5$ between $σ₁ = 1$ and $σ₂$, so the default threshold of rank is unnecessarily conservative in this case.

Random matrix theory (RMT) suggests that the (smaller) tolerance $\sqrt{\max(M,N)} \, ε \, σ₁$ can suffice, and indeed in this particular example it correctly separates the nonzero signal singular value from the noise singular values.
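
As a quick check (a sketch, not part of the original script; it assumes the rtol keyword of rank, available in Julia ≥ 1.1), passing this RMT-style relative tolerance to rank recovers the correct rank:

rank(Y; rtol = sqrt(maximum(size(Y))) * eps(T)) # 1: the signal singular value exceeds this threshold, the noise ones do not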

Here the matrix Y nicely fits the assumptions of RMT; there may be other situations with "worst case" data matrices where the conservative threshold is needed.

We have noticed that this plot looks different on a Mac and on Linux with Julia 1.9.2. Apparently some differences in the SVD libraries on different systems can affect the details of the tiny singular values.

s = svdvals(Y)
tol = minimum(size(Y)) * eps(T) * s[1] # from rank()
tol2 = sqrt(maximum(size(Y))) * eps(T) * s[1] # from RMT
p16 = plot([1, n], [1,1] * log10(tol),
 label="rank threshold: tol=$(round(tol,digits=2))",
 title = "Rank-1 matrix with $T elements",
)
plot!([1, n], [1,1] * log10(tol2),
 label="RMT threshold: tol=$(round(tol2,digits=2))")
scatter!(1:n, log10.(s); label="singular values", alpha=0.8,
 xaxis = (L"k", (1,n), [2, n÷2, n]),
 yaxis = (L"\log_{10}(σ)", (-45, 2), [-40:20:0; -5]),
 left_margin = 40px, bottom_margin = 20px,
 annotate = (200, -8, Sys.MACHINE, :left),
)
[Figure: log10 singular values of the rank-1 Float16 matrix, with the rank threshold and the RMT threshold]
prompt()
# savefig(p16, "round1-p16-$(Sys.MACHINE).pdf")
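
To quantify the gap mentioned above, one can compare the two largest singular values directly (a quick check, not in the original script; it reuses s from the plot code):

log10(s[1] / s[2]) # roughly 5, i.e., σ₁/σ₂ ≈ 10^5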

Roundoff error variance

N = 200
200

Construct a very high-precision (BigFloat) reference matrix X whose entries are uniform on (-√3, √3), so its variance is 1.

X = (2 * rand(BigFloat, N, N) .- 1) * sqrt(3)
for T in (Float16, Float32, Float64)
    local Y = T.(X) # quantize
    local Z = T.(Y - X) # quantization error
    @show T, mean(Z) # approximately 0
    vr = var(Float64.(Z)) # sample variance
    @show (vr, eps(T)^2/24) # empirical, predicted
end
(T, mean(Z)) = (Float16, Float16(-0.0))
(vr, eps(T) ^ 2 / 24) = (4.0046329934285165e-8, Float16(6.0e-8))
(T, mean(Z)) = (Float32, 2.0356881f-10)
(vr, eps(T) ^ 2 / 24) = (6.002010778728084e-16, 5.9211896f-16)
(T, mean(Z)) = (Float64, 4.3289502493103323e-19)
(vr, eps(T) ^ 2 / 24) = (2.0727683438551844e-33, 2.0543252740130515e-33)

Roundoff error plot

T = Float32
Y = T.(X) # quantize
Z = T.(Y - X); # quantization error

Examine a histogram of the floating-point quantization errors.

ph = histogram(vec(Z), bins = (-40:40)/40 * eps(T))
[Figure: histogram of the Float32 quantization errors, binned in units of eps(Float32)]
prompt()
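
The histogram is consistent with round-to-nearest: because |X| < 2 here, each error should be at most eps(T)/2 in magnitude. A quick sanity check (not part of the original script):

maximum(abs, Z) ≤ eps(T) / 2 # true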

Examine the floating-point quantization errors as a function of $x$.

x = range(BigFloat(-1), BigFloat(1), 1001) * 2
z = T.(x) - x # quantization error
ylabel = latexstring("error: \$\\ (q(x) - x)/ϵ\$")
scatter(x, z / eps(T), yaxis=(ylabel, (-1,1).*0.51, (-2:2)*0.25))
plot!(x, (@. eps(T(x)) / eps(T) / 2), label=L"ϵ(x)/2", color=:blue)
pq = plot!(x, x/2, xaxis=(L"x",), label=L"x/2", legend=:top, color=:red)
[Figure: quantization error (q(x) - x)/ϵ versus x, with the ϵ(x)/2 and x/2 envelopes]
prompt()
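
The stair-step envelope in the plot reflects how eps grows with magnitude; a small illustration (not in the original script, with T = Float32 as above):

eps(T(0.3)), eps(T(0.6)), eps(T(1.2)) # each roughly double the previous: ≈ 3.0e-8, 6.0e-8, 1.2e-7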

Based on the quantization error plot above, the quantization error for a floating point number near $x$ is bounded above by $ϵ |x| / 2$; see the Julia manual entry for eps. Thus if $x ∼ \mathrm{Unif}(-a,a)$, then
$E[z^2] = E[|q(x) - x|^2] = \frac{1}{2a} ∫_{-a}^a |q(x) - x|^2 \, \mathrm{d} x \leq \frac{1}{a} ∫_0^a |ϵ x / 2|^2 \, \mathrm{d} x = ϵ^2 a^2 / 12.$
With $a = \sqrt{3}$ (so that $x$ has unit variance, as above), this bound evaluates to $ϵ^2/4$, consistent with, though looser than, the empirical value of roughly $ϵ^2/24$ observed earlier.
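
As a rough numerical check of this bound (a sketch, not in the original script), reuse Z from the Float32 example above with $a = \sqrt{3}$ matching the distribution of X:

a = sqrt(3)
bound = eps(T)^2 * a^2 / 12 # upper bound on E[z²] from the derivation above
var(Float64.(Z)) ≤ bound # true; the bound is loose, and the empirical variance is ≈ ϵ²/24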

Reproducibility

This page was generated with the following version of Julia:

using InteractiveUtils: versioninfo
io = IOBuffer(); versioninfo(io); split(String(take!(io)), '\n')
11-element Vector{SubString{String}}:
 "Julia Version 1.11.2"
 "Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)"
 "Build Info:"
 "  Official https://julialang.org/ release"
 "Platform Info:"
 "  OS: Linux (x86_64-linux-gnu)"
 "  CPU: 4 × AMD EPYC 7763 64-Core Processor"
 "  WORD_SIZE: 64"
 "  LLVM: libLLVM-16.0.6 (ORCJIT, znver3)"
 "Threads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)"
 ""

And with the following package versions:

import Pkg; Pkg.status()
Status `~/work/book-la-demo/book-la-demo/docs/Project.toml`
  [6e4b80f9] BenchmarkTools v1.5.0
  [aaaa29a8] Clustering v0.15.7
  [35d6a980] ColorSchemes v3.27.1
⌅ [3da002f7] ColorTypes v0.11.5
⌃ [c3611d14] ColorVectorSpace v0.10.0
⌅ [717857b8] DSP v0.7.10
  [72c85766] Demos v0.1.0 `~/work/book-la-demo/book-la-demo`
  [e30172f5] Documenter v1.8.0
  [4f61f5a4] FFTViews v0.3.2
  [7a1cc6ca] FFTW v1.8.0
  [587475ba] Flux v0.15.2
  [a09fc81d] ImageCore v0.10.5
  [71a99df6] ImagePhantoms v0.8.1
  [b964fa9f] LaTeXStrings v1.4.0
  [7031d0ef] LazyGrids v1.0.0
  [599c1a8e] LinearMapsAA v0.12.0
  [98b081ad] Literate v2.20.1
  [7035ae7a] MIRT v0.18.2
  [170b2178] MIRTjim v0.25.0
  [eb30cadb] MLDatasets v0.7.18
  [efe261a4] NFFT v0.13.5
  [6ef6ca0d] NMF v1.0.3
  [15e1cf62] NPZ v0.4.3
  [0b1bfda6] OneHotArrays v0.2.6
  [429524aa] Optim v1.10.0
  [91a5bcdd] Plots v1.40.9
  [f27b6e38] Polynomials v4.0.12
  [2913bbd2] StatsBase v0.34.4
  [d6d074c3] VideoIO v1.1.1
  [b77e0a4c] InteractiveUtils v1.11.0
  [37e2e46d] LinearAlgebra v1.11.0
  [44cfe95a] Pkg v1.11.0
  [9a3f8284] Random v1.11.0
Info Packages marked with ⌃ and ⌅ have new versions available. Those with ⌃ may be upgradable, but those with ⌅ are restricted by compatibility constraints from upgrading. To see why use `status --outdated`

This page was generated using Literate.jl.