by Andries Steenkamp (CWI)
This article is a short update on my adventurous second secondment. It is divided into two parts. First, the human part where I tell you of my experience coming to Toulouse. The second part is the math part, where I give a sneak peek into what math I have been cooking along with Monique, Victor and Milan.
Technically, my secondment has already been in progress since mid-January, though the first third was virtual because of the "usual reason". This meant that I could hit the ground running by the time I reached Toulouse. The gap in the regulations meant that I had to move fast. As such, I did not deliberate much on my accommodation and simply booked the AirBnB nearest to LAAS-CNRS. It just so happens that it is in the same building as Milan's place of residence, though we were not direct neighbors. My host was friendly and welcoming; despite the language gap, we communicated via the AirBnB messenger and DeepL. The furniture included a bunk bed with a desk beneath. An older man may have balked at the idea, but I, still in the summer of my youth, welcomed the requisite pilates to get out of bed.
Getting to LAAS is a simple walk, allowing time for audiobook consumption.
On my first day, Victor received me on arrival and guided me through the admin. He also had the foresight to provide me with meal tickets while my canteen card was being created. This gesture has earned my most profound gratitude; thank you, Victor. With the admin done, I was led to the MAC corridor, where I met some of the great minds of LAAS, including Jean-Bernard Lasserre. With the formalities out of the way, I could finally get back to the math grind.
So, what exactly is this secondment about? It is always tricky writing about ongoing unpublished math. On the one hand, you don't want to give away the special sauce; on the other hand, you are unsure if the special sauce is valid. I'll try to give an intuition without revealing anything concrete.
My Toulouse secondment deals with sparsity. If the concept is entirely new to you: sparsity is, very loosely speaking, the exploitation of a pattern of zeros in the data to perform possibly cheaper computations and obtain possibly better results.
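To make the "exploiting zeros" idea concrete, here is a minimal toy sketch (entirely mine, not the secondment's code): a matrix-vector product that only ever touches the nonzero entries of the matrix, so the cost scales with the size of the support rather than with the full matrix.

```python
# Toy illustration of exploiting sparsity: compute A @ x while touching
# only the nonzero entries of A. All names here are illustrative.

def support(A):
    """The support pattern of A: the set of positions (i, j) with A[i][j] != 0."""
    return {(i, j) for i, row in enumerate(A) for j, v in enumerate(row) if v != 0}

def sparse_matvec(A, x):
    """Compute the product A @ x, skipping the zero entries of A entirely."""
    y = [0.0] * len(A)
    for i, j in support(A):
        y[i] += A[i][j] * x[j]
    return y

# A block-diagonal (hence sparse) example matrix:
A = [[2, 1, 0, 0],
     [1, 3, 0, 0],
     [0, 0, 4, 0],
     [0, 0, 0, 5]]
x = [1, 1, 1, 1]
print(sparse_matvec(A, x))  # only 6 of the 16 entries are ever touched
```

Real sparse linear algebra libraries store only the support (e.g., in coordinate or compressed-row formats) for exactly this reason.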
Our plan was to consider the **completely positive matrix factorization rank** (cp-rank), as we did in a previous paper, Bounding the separable rank via polynomial optimization, but now exploiting information on the zero entries of the matrix. We want to see if we can transfer the sparsity "induced by a matrix's support" to the moment matrix of the linear functional modeling the cp-rank. This probably leaves the reader with more questions than answers; well, too bad. The work is ongoing, so I will not divulge more, but I will share an image of the matrix support and the support of the recovered moment matrix. The take-home message is that the sparsity of the initial data matrix seems to be transferred to the moment matrices of levels t = 2, 3.
The white squares are supported entries, and the black squares are not, i.e., they are zero.
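A support picture like that one is easy to sketch in a few lines. Below is a hedged toy version (made-up numbers, nothing from the actual experiments): we build a small completely positive matrix A = B Bᵀ with B entrywise nonnegative, and print its support pattern with `#` for supported (white) squares and `.` for zero (black) ones.

```python
# Toy visualization of a matrix support pattern, in the spirit of the image:
# '#' marks a supported (nonzero) entry, '.' marks a zero entry.
# The matrix below is illustrative and not taken from the actual work.

def show_support(A):
    """Render the support pattern of A as a grid of '#' and '.' characters."""
    return "\n".join("".join("#" if v != 0 else "." for v in row) for row in A)

# A completely positive matrix with sparse support: A = B @ B.T with B >= 0.
B = [[1, 0],
     [2, 0],
     [0, 1],
     [0, 3]]
A = [[sum(B[i][k] * B[j][k] for k in range(2)) for j in range(4)] for i in range(4)]
print(show_support(A))
```

For this block-structured B, the print shows two 2x2 white blocks on the diagonal and black squares elsewhere.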