
Talk:Gram matrix

From Wikipedia, the free encyclopedia

This is the current revision of this page, as edited by Cewbot (talk | contribs) at 18:35, 2 February 2024 (Maintain {{WPBS}} and vital articles: 2 WikiProject templates. Create {{WPBS}}. Keep majority rating "Start" in {{WPBS}}. Remove 1 same rating as {{WPBS}} in {{Maths rating}}. Remove 1 deprecated parameter: field.). The present address (URL) is a permanent link to this version.


Limits


In the expression for the Gramian, what are the limits of integration? Deepak 16:54, 31 March 2006 (UTC)[reply]

They are the initial and final times; the functions whose Gramian is being computed are defined on that interval. I've clarified this in the article.
Nbarth (email) (talk) 02:17, 18 January 2008 (UTC)[reply]
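The function-space case discussed above can be made concrete with a short numeric sketch (not from the article; the grid size and the example functions 1 and t on [0, 1] are illustrative assumptions):

```python
import numpy as np

def gram_matrix(funcs, t0, t1, num=10_000):
    """Gram matrix G[i, j] ~ integral over [t0, t1] of f_i(t) * f_j(t) dt,
    approximated with the midpoint rule on `num` subintervals."""
    dt = (t1 - t0) / num
    t = np.linspace(t0, t1, num, endpoint=False) + dt / 2  # midpoints
    vals = np.array([f(t) for f in funcs])                 # shape (k, num)
    return vals @ vals.T * dt

# Gram matrix of the functions 1 and t on [0, 1];
# the exact value is [[1, 1/2], [1/2, 1/3]].
G = gram_matrix([lambda t: np.ones_like(t), lambda t: t], 0.0, 1.0)
```

The initial and final times enter only as the bounds of the integral defining the inner product.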

Too much physics-oriented


The Gramian matrix can be calculated and is important in any inner product space. The integral of the multiplication of two functions which is shown in the article is just one case of an inner product.

Here is a good page on the subject: http://www.jyi.org/volumes/volume2/issue1/articles/barth.html

Yes, I agree. Gram matrices show up also in machine learning, where a number of methods depend on a set of input vectors in R^n (for some finite n) only through the Gram matrix of this set. One may construct the Gram matrix using the standard inner product (dot product) on R^n, or -- very usefully -- an arbitrary inner product, which corresponds to mapping the input vectors nonlinearly into some (usually higher-dimensional) space and taking the dot product there (known as the kernel trick). Either way, integrals are not involved. Eclecticos 05:04, 24 September 2006 (UTC)[reply]
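A minimal sketch of that point, with a small made-up data set and a degree-2 polynomial kernel (both my choices, not from the discussion):

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])    # rows are the input vectors

# Gram matrix under the standard dot product: G[i, j] = x_i . x_j
linear_gram = X @ X.T

# Kernel trick: k(x, y) = (1 + x . y)^2 equals the dot product of phi(x)
# and phi(y), where phi maps to all monomials of degree <= 2 -- so this is
# the Gram matrix in that feature space, without ever forming phi explicitly.
poly_gram = (1.0 + X @ X.T) ** 2
```

A method that consumes only the Gram matrix can be "kernelized" by swapping `linear_gram` for `poly_gram`.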
Also, the usual name in the machine learning literature, and AFAIK in the linear algebra literature too, is "Gram matrix." I have never run across the variant "Gramian matrix" before, but perhaps it is used in physics? Eclecticos 05:04, 24 September 2006 (UTC)[reply]

I am acquainted with "Gramian" from mathematics. The main point is that the Gram matrix of some basis (not necessarily orthonormal) of a Euclidean (= inner product) space contains all the information about the geometry (the inner products) of that space.

In addition, checking for linear dependence is only a special case of determining the volume of the parallelepiped spanned by some vectors, which can be done easily with the Gram matrix. This differs from the plain determinant, since it applies to non-square matrices of vectors as well.

For example, calculating the area of a parallelogram sitting in 3-D space via a determinant is hard, because we first need to find an orthonormal basis for the plane in which the parallelogram lies and transform the vectors to that basis; using the Gram matrix it is very simple (see the external link).
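A quick numeric check of this example (the two vectors are arbitrary illustrative choices): the area of the parallelogram spanned by a and b is sqrt(det G), which in R^3 can be cross-checked against the cross product.

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, 0.0, 4.0])

V = np.stack([a, b])    # 2x3 matrix: not square, so det(V) itself is undefined
G = V @ V.T             # 2x2 Gram matrix of dot products
area_gram = np.sqrt(np.linalg.det(G))

# Cross-check with the classical 3-D-only formula:
area_cross = np.linalg.norm(np.cross(a, b))
# Both give the same area, but the Gram-matrix route needs no orthonormal
# basis for the plane and works in any ambient dimension.
```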

Thank you for the kind words on the JYI article, which I wrote (many years ago).
I've significantly revised the page, stating it more generally (and making other revisions).
Nbarth (email) (talk) 02:21, 18 January 2008 (UTC)[reply]

What does (xi|xj) mean?


Need a definition of this term -- does it refer to the inner product of xi and xj?

Yes—I've updated the article accordingly to clarify.
Nbarth (email) (talk) 02:18, 18 January 2008 (UTC)[reply]
Wouldn't it be more appropriate to use the notation used in the inner product article, which I believe is standard in the literature: ⟨x_i, x_j⟩?
MorpheusCO (talk) 18:25, 21 April 2009 (UTC)[reply]
It definitely would make more sense. Using a pipe operator is absurd--it suggests logical disjunction or something similarly misleading. I'm going to try to fix it right now (I need to figure out how to properly embed a dot operator in the page). 72.227.165.191 (talk) 18:17, 3 September 2009 (UTC)[reply]
Or maybe I'm being a bit hasty? Is this standard notation in some field? It certainly seems bizarre to me, and I have a mathematics and computer science background... 72.227.165.191 (talk) 18:17, 3 September 2009 (UTC)[reply]
I "fixed" it (now it's a dot rather than a pipe). If this was a mistake on my part, please point out some references--I don't mean to step on some more informed person's toes. 72.227.165.191 (talk) 18:24, 3 September 2009 (UTC)[reply]
Errr... I mean angle brackets and a comma. That's what I changed it to. 72.227.165.191 (talk) 03:37, 4 September 2009 (UTC)[reply]

Hermitian Symmetry


It seems to me that, because of the conjugate symmetry property of inner products, the Gramian matrix is not symmetric but Hermitian. MorpheusCO (talk) 18:19, 21 April 2009 (UTC)[reply]

Right, fixed. bungalo (talk) 10:09, 10 January 2010 (UTC)[reply]

Why is the Gramian pos def?


It would be nice to know why the Gramian is pos. def. —Preceding unsigned comment added by 188.60.12.40 (talk) 16:59, 12 December 2010 (UTC)[reply]

Just saw this post. I put a proof in there.
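For readers following along, the proof idea can also be checked numerically (the random vectors here are an illustrative assumption): for any coefficients c, the quadratic form c^T G c equals ||sum_i c_i v_i||^2 ≥ 0, so G is positive semidefinite (and positive definite exactly when the vectors are linearly independent).

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.standard_normal((4, 6))   # four vectors in R^6, one per row
G = V @ V.T                       # real Gram matrix: G[i, j] = v_i . v_j

c = rng.standard_normal(4)        # arbitrary coefficient vector
quad = c @ G @ c                  # quadratic form c^T G c
norm_sq = np.linalg.norm(c @ V) ** 2   # ||sum_i c_i v_i||^2

# quad == norm_sq up to rounding, hence quad >= 0 for every c,
# and all eigenvalues of G are nonnegative:
eigs = np.linalg.eigvalsh(G)
```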

Who is Dave Gramian?


Google does not find any Dave Gramian!? — Preceding unsigned comment added by 217.75.195.210 (talk) 13:21, 11 October 2012 (UTC)[reply]

It was vandalism and should have read Jørgen Pedersen Gram. Intervallic (talk) 15:18, 11 October 2012 (UTC)[reply]

Gramian and covariance

  • If the vectors are centered random variables, the Gramian is approximately proportional to the covariance matrix, with the scaling determined by the number of elements in the vector. (Quote from the "Applications" section.)

This is a mess. If the vectors really are centered random variables, that is, they are vectors of the Hilbert space of all square integrable functions on a probability space (with zero mean), then the Gramian is exactly the covariance matrix (just by definition), with no scaling. Apparently it was meant that the vectors are a sample from a multidimensional probability distribution. Boris Tsirelson (talk) 06:14, 10 May 2015 (UTC)[reply]

I thought about this too, and both you and the article are right.
  1. If the vectors are centered scalar RVs, then the Gramian is equal to the variance matrix of the random row (!) vector that they form. This is what you're talking about.
  2. If the vectors are non-stochastic, understood as a sample (such that each row of the data matrix represents one observation), then the above statement may hold asymptotically for the "realized" Gramian by a CLT, if there is an applicable one. This is what the article is talking about, assuming iid sampling, implicitly appealing to the Lindeberg-Lévy CLT, and using the asymptotic result as an approximation for finite samples.
188.108.213.242 (talk) 09:04, 21 July 2022 (UTC)[reply]
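Point 2 can be illustrated with a simulation (the sample size, dimension, and population covariance below are assumptions for the demo): for an iid centered sample stacked as rows of X, the Gramian X^T X scaled by the number of observations n approaches the population covariance.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 100_000, 3
cov = np.array([[2.0, 0.5, 0.0],
                [0.5, 1.0, 0.3],
                [0.0, 0.3, 1.5]])   # assumed population covariance

X = rng.multivariate_normal(np.zeros(d), cov, size=n)  # one observation per row
X -= X.mean(axis=0)                                    # center the sample

gram_scaled = X.T @ X / n   # "realized" Gramian, scaled by the sample size n
# By the law of large numbers / CLT, gram_scaled -> cov as n grows,
# with entrywise error of order 1/sqrt(n).
```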

Wiki in other languages


The corresponding French Wikipedia page is https://fr.wikipedia.org/wiki/D%C3%A9terminant_de_Gram . Where do I connect it here? — Preceding unsigned comment added by 194.199.26.79 (talk) 14:20, 24 April 2023 (UTC)[reply]

There is a section Gram matrix § Gram determinant that is directly accessible via the redirect Gram determinant. To reach the French article from here, you can use fr:Déterminant de Gram. D.Lazard (talk) 18:35, 24 April 2023 (UTC)[reply]
I have added [[fr:Matrice de Gram]] at the end of the article. So, the French article appears in the "other language" menu. D.Lazard (talk) 18:48, 24 April 2023 (UTC)[reply]