### Speaker: Daniel Withopf

### Audience level: [ Intermediate | Advanced ]

Classical C++ linear algebra libraries usually provide dense matrices as well as sparse matrices whose sparsity pattern is only known at run-time. At compile time, there is either no information about sparseness at all, or only the information whether a matrix is (upper/lower) triangular, diagonal or symmetric.

However, for run-time reasons (chiefly the wish to use SIMD operations), even this limited set of sparse matrix types is usually stored as full dense matrices. This means that even the up to 50% of entries that are known to be 0 are stored. While this can be beneficial for run-time, it introduces a memory overhead that can be significant, especially in resource-constrained environments.

In this talk, we will see how it is indeed possible to get a free lunch by combining memory efficiency with run-time efficiency.

We'll learn how any kind of sparseness information can be incorporated into a matrix class *at compile time*. This means that only the necessary (non-zero, non-one, non-symmetric) entries have to be stored, making such matrices very memory-efficient.

Furthermore, this enables the compiler to automatically remove all unnecessary operations, yielding performance that is at least on par with hand-written sparse-matrix code (while being generated automatically), and in some cases even significantly better.