Talk:Jacobian matrix and determinant
Jacobians for maps between Riemannian manifolds
It would be nice to have a section on generalizations. Given a map f between two Riemannian manifolds (M, g_M) and (N, g_N), a generalization of the determinant of the Jacobian is given (if I'm not mistaken) by the ratio of the determinants of f^*(g_N) and g_M. --gm
Ok, here is a reference for this: http://books.google.com/books?id=afnlx8sHmQIC&pg=PA193&lpg=PA193&dq=jacobian+riemannian&source=bl&ots=JvH2bAV4c0&sig=RMdv-mJvEn4nWHOaDsqZ7vKFa6k&hl=en&ei=bEO6SvfxD5ux8QbMk92MCg&sa=X&oi=book_result&ct=result&resnum=5#v=onepage&q=jacobian%20riemannian&f=false --gm
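A sketch of the suggestion above in coordinates (my own reading of it, so treat the formula as a proposal rather than sourced content): for a smooth map f : (M, g_M) → (N, g_N), the candidate generalization of |det Df| at a point p would be
<math display="block">
|J_f|(p) = \sqrt{\frac{\det\bigl(f^{*}g_N\bigr)_p}{\det\bigl(g_M\bigr)_p}},
\qquad
\bigl(f^{*}g_N\bigr)_{ij} = \sum_{k,l}\frac{\partial f^{k}}{\partial x^{i}}\,\frac{\partial f^{l}}{\partial x^{j}}\,(g_N)_{kl}\circ f,
</math>
which reduces to the usual absolute Jacobian determinant when M = N = R^n with the Euclidean metrics.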
Differentiable versus partial derivatives
I have serious doubts as to the validity of this entire section. From "Calculus on Manifolds" by Spivak, a function is defined to be differentiable at a point if it has a linear approximation there, and further this linear approximation is unique.
Further, the example used does not give the correct definition for the directional derivative, which is lim_{h->0} |f(a+h)-f(a)|/h. And though the definition given will work, the answer derived is incorrect, because there is confusion about which coordinate system, (x,y) or (r,theta), is being used here.
The function is defined in (r,theta) space, but the limit is being taken in (x,y) space, tending towards (0,0) along the (1,1) line. This corresponds to traversing the function in (r,theta) space along a line tending towards (0,pi/4), not towards (0,0) as requested.
In (x,y) space, the function is defined as xy/sqrt(x^2+y^2), and so neither it nor its Jacobian is defined at (x,y)=(0,0). The function is defined at (r,theta)=(0,0), and so is its Jacobian = (0,0). A Taylor series will show this very easily.
For the above reasons, I am removing this section. ObsessiveMathsFreak (talk) 14:36, 11 February 2008 (UTC)
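For later readers, a quick numeric check of the point made above (a sketch; it assumes the function under discussion is f(x,y) = xy/sqrt(x^2+y^2), extended by f(0,0) = 0):
<syntaxhighlight lang="python">
import math

def f(x, y):
    # the function from the removed section, extended by f(0,0) = 0
    if x == 0.0 and y == 0.0:
        return 0.0
    return x * y / math.sqrt(x * x + y * y)

# one-sided difference quotients along the (1,1)/sqrt(2) direction
v = (1 / math.sqrt(2), 1 / math.sqrt(2))
for h in (1e-3, 1e-6, -1e-3, -1e-6):
    q = (f(h * v[0], h * v[1]) - f(0.0, 0.0)) / h
    print(f"h = {h:+.0e}  quotient = {q:+.4f}")
# prints roughly +0.5 for h > 0 and -0.5 for h < 0, so the two-sided
# limit (and hence the directional derivative) does not exist at (0,0)
</syntaxhighlight>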
Notations, Determinants
Hello. About notation, I seem to recall the notation DF for the Jacobian of F. Have others seen that notation? -- On a different note, maybe we can mention that if F is a scalar field then the gradient of F is its Jacobian. Happy editing, Wile E. Heresiarch 21:28, 23 Mar 2004 (UTC)
Isn't there an error in the simplification of the determinant given as the example? Where has the term x3 cos(x1) gone? -- looxix 00:32 Mar 24, 2003 (UTC)
I've always heard the initial consonant pronounced as an affricate, as in "John". Michael Hardy 01:23 Mar 24, 2003 (UTC)
- So have I, and it's listed as the first of two possible pronunciations at Merriam Webster. I've added it to the article. Pmdboi 03:36, 5 March 2006 (UTC)
- Since Jacobi is of German origin I guess his name and its derivatives should be pronounced only in the second way. English language rules can not be applied to words of foreign origin.
- Nonsense, Jacobian has become an English word with an English pronunciation. All modern English words have a historical origin with another pronunciation; this doesn't mean we should try to pronounce it in the historical way. "Paris" is not pronounced "paree" in English. And "Jacobian" is pronounced like "John" by every mathematician I know. That is why the dictionary correctly lists that pronunciation first. 84.75.181.226 (talk) 10:32, 8 October 2009 (UTC)
I originally put in the matrix here, and put in most of the structure. I did make a mistake in terminology, though, as I see has been corrected. I defined the Jacobian matrix, where the "Jacobian" per say refers to the determinant of that matrix. My point is that this page was originally designed to define the Jacobian matrix, and I see that that definition is a stub. I have a copy of the page before it was fixed. I'm posting it in the stub for Jacobian matrix. I think, then, it would be a good idea to discuss whether we might want to combine the two into one page? I'm for this. I think the ideas need to be presented closely together for fluent comprehension, and a brief and clear page describing first the Jacobian matrix, and then the Jacobian, would be simple to construct as well as being a better way to present the topic. Kevin Baas 2003.03.26
Why in the world do you call (x1, ..., xn) a "basis" of an n-space? A basis would be something like this: {(1, 0, 0, ..., 0), (0, 1, 0, ..., 0), ..., (0, 0, 0, ..., 1)}.
(By the way, the Latin phrase "per se" doesn't have an "a" or a "y" in it.) Michael Hardy 20:36 Mar 26, 2003 (UTC)
---
From my understanding of basis, it is the selection of a set of measurements from which one defines a coordinate system to describe a space. Thus, the unit vectors:
would be defined by way of the basis. That is, one could pick an entirely different system of measurements; entirely unrelated "unit"s of measurement, and have a different 'basis' from which to define 'distances' in a space, which would be equally valid, although (1,0,0) in one system would not be the same as (1,0,0) in another system.
Thus, when it is said that (x1, ..., xn) is a basis, I interpret this as saying that x1, etc. is the system of normalized variables used to measure the space. One could have just as easily (and may find it useful for other purposes) defined a topologically equivalent space with a different 'basis', orthogonal to this one.
However, this is merely a very fuzzy intuitive interpretation, and I'm not justifying the use. I am explaining what I think was the intention. -Kevin Baas
Let me further add that I think, though my memory is shaky here, that a basis is a set of vectors. That is, they can only be something like {(4,3,0), (5,0,4), (1,3,2)} such that {(4x,3x,0x), (5y,0y,4y), (1z,3z,2z)} are linearly independent. Thus, they depend on a pre-established system of variables, and are based off of the eigenvalues of that system. -Kevin Baas
Would it be correct to say f is conformal iff is orthogonal (where n is the dimension)? 142.177.126.230 23:49, 4 Aug 2004 (UTC)
I'm confused
Yes, when I wrote the Jacobian and determined it, I got 0, zero, null, nothing!?! Am I right?
thanks
mail me
Tensor Product?
Isn't the Jacobian matrix just the tensor product of the grad operator? That is, is this correct?:
—Ben FrantzDale 16:57, 5 May 2006 (UTC)
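One way to make the question above precise (my own reading, not a sourced statement): if the "tensor product" is taken to mean the outer product of the gradient operator with F, then
<math display="block">
(\nabla \otimes F)_{ij} = \frac{\partial F_j}{\partial x_i},
\qquad
(J_F)_{ij} = \frac{\partial F_i}{\partial x_j},
\qquad\text{so}\qquad
J_F = (\nabla \otimes F)^{\mathsf{T}},
</math>
i.e. the Jacobian is the transpose of that outer product (or equals it, under the opposite index convention).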
Orientation
Furthermore, if the Jacobian determinant at p is positive, then F preserves orientation near p;
Orientation as in Orientability or Orientation (mathematics)? --Abdull 12:56, 25 May 2006 (UTC)
- As in both of those; they're the same concept. —Keenan Pepper 18:45, 25 May 2006 (UTC)
Jacobians and gradients. consistent definitions please!
In this wiki the gradient of a scalar function f(x) wrt a vector x is defined as a column vector:
(df/dx1, df/dx2, ..., df/dxn)^T .
The Jacobian matrix is defined as in this page I'm commenting now: J = (dfi/dxj), the m×n matrix whose (i,j) entry is dfi/dxj.
Later in the same Jacobian matrix entry it is said that the rows of the Jacobian matrix are the gradients of each component of f wrt x. This is obviously impossible!
I propose to give a double definition, with a clear notation describing the adopted convention, as follows:
a) df/dx = (dfj/dxi), the n×m matrix whose columns are the (column) gradients of the components of f;
b) df/dx' = (dfi/dxj), the m×n matrix whose rows are the transposed gradients of the components of f;
and similarly for the gradient as df/dx and df/dx'. This way everything fits. Note that the orientation of the variable wrt which we differentiate drives the orientation of the output gradient vectors. For the Jacobians, both the differentiated vector function f and the differentiator vector x drive the distribution of the output matrix elements. This way, we could also define the two following (quite useless I admit) constructions:
c) a column-stacked vector of all column gradients of the components of f.
d) a row-stacked vector of all row gradients of the components of f.
Personally I find form b) as being much more useful, as when we pass to the partial derivatives of matrix expressions we can almost mimic the scalar differentiation rules. Using a) in this context leads to incredibly confusing expressions, full of transpose marks.
Joan Sola LAAS-CNRS Toulouse France 82.216.60.51 11:37, 17 August 2006 (UTC)
- What is "obviously impossible"? The first row of the Jacobian matrix is (df1/dx1 ... df1/dxn), which is the gradient of the first component of the function f. The Jacobian matrix, as defined on this page, is form b) in your list, which seems to be precisely what you want. -- Jitse Niesen (talk) 11:07, 17 August 2006 (UTC)
- Excuse me, I submitted with some errors. Reread what I posted now and you will see what I mean. Thanks 82.216.60.51 11:39, 17 August 2006 (UTC)
- I'd like to insist on this topic. I corrected the first line of my comment, which originally said gradients were defined as row-vectors, thus making my comment absurd. Gradient is defined as a column-vector in Wikipedia. So either we leave the vector and matrix orientations out of the definition (thus giving this freedom to the user), or we use consistent definitions. I'm for this second alternative but I am not a mathematician so I can't have a strong position on this. --Note: I've just made up an account so I sign now as Joan Solà 15:20, 21 August 2006 (UTC) though I'm the same who started this topic. You can email me if you wish. Cheers. -- Joan Solà 15:20, 21 August 2006 (UTC)
- Indeed, the Wikipedia article gradient defines it as a column vector. That's the most common definition, I think, though it often makes more sense to define it as a row vector. Anyway, I changed the text to remove the contradiction; it now says that each row of the Jacobian matrix is the transpose of the gradient.
- Your proposal with that notation etc. is cute, but it is not used as far as I know. According to our no-original-research policy, we cannot make up new definitions but have to stick with those already in use. Thanks for your comments. -- Jitse Niesen (talk) 07:58, 22 August 2006 (UTC)
- All right, I agree with the solution. Thanks Joan Solà 12:56, 23 August 2006 (UTC)
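For readers who land on this old thread later, a small sketch of convention b) (the one the article uses, rows = transposed gradients) using SymPy; convention a) is simply its transpose. The example map is made up for illustration:
<syntaxhighlight lang="python">
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Matrix([x * y * z, sp.sin(x) + z**2])        # f : R^3 -> R^2

J = f.jacobian([x, y, z])                            # 2x3, J[i, j] = d f_i / d x_j (form b)
grad_f1 = sp.Matrix([f[0]]).jacobian([x, y, z]).T    # column gradient of f_1

print(J)                                             # rows are the transposed gradients of f_1, f_2
print(sp.simplify(J.row(0).T - grad_f1))             # zero column: first row of J equals grad(f_1)^T
</syntaxhighlight>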
Vanishing Jacobians
I was looking for information about Vanishing Jacobians on Google, and I saw this article. This article says nothing about vanishing. Maybe somebody should add something about Vanishing Jacobians?
James 03:25, 25 June 2007 (UTC)
Vanishing means becoming zero. This term is not specifically related to Jacobians. 84.75.181.226 (talk) 09:49, 8 October 2009 (UTC)
In dynamical systems, Stationary point
Doesn't the Hessian need to be checked to see whether it's a stationary point vs. an extremely unstable point? (i.e., a maximum). Ashi Starshade 18:17, 5 July 2007 (UTC)
- There are two closely connected meanings of "stationary point". The meaning intended in the article is that x is a stationary point for the dynamical system dx/dt = F(x) if F(x) = 0. Another meaning is that x is a stationary point for a function f if the derivative of f is zero; this is the meaning that the article stationary point mentions. I don't quite understand your comment, but it seems that you're confusing these two meanings. -- Jitse Niesen (talk) 20:50, 6 July 2007 (UTC)
A stationary point is not a fixed point. I removed that part. It was in brackets. Thanks — Preceding unsigned comment added by 128.12.100.219 (talk) 16:09, 4 April 2012 (UTC)
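To illustrate the first meaning for later readers, a minimal sketch (the example system, a damped pendulum, is my choice, not from the article): at a stationary point of dx/dt = F(x) the vector field F vanishes, and the eigenvalues of the Jacobian of F there indicate stability.
<syntaxhighlight lang="python">
import numpy as np
import sympy as sp

x, y = sp.symbols('x y')
F = sp.Matrix([y, -sp.sin(x) - 0.5 * y])     # damped pendulum: dx/dt = y, dy/dt = -sin(x) - 0.5 y

equilibrium = {x: 0, y: 0}                   # F vanishes here: a stationary point of the system
J = F.jacobian([x, y]).subs(equilibrium)     # Jacobian of the vector field at that point

eigvals = np.linalg.eigvals(np.array(J.tolist(), dtype=float))
print(eigvals)   # all real parts negative, so the equilibrium is asymptotically stable
</syntaxhighlight>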
Differentiable versus partial derivatives
Excuse me, but in this section it says
- for instance, in the (1,1) direction (45°) this equals .
Should it not say something like
since
? Dimbulb15 (talk) 23:31, 3 February 2008 (UTC)
Is It Possible?
First off I will admit that I only got "C's" and occasionally "B's" in Multivariable Calculus and Vector Calculus classes, so maybe I simply don't know what I'm talking about. But early in the article it states that the Jacobian matrix can exist even if the function is NOT differentiable at a point; that partial (and not total) derivatives are sufficient for its existence. I thought that for a function to be differentiable it also had to be continuous at the point in question, i.e. that all derivatives (partial and total) would exist. I suppose, for example, that if Z = f(X,Y) and if Y is constant then there would only be dZ/dX and that dZ/dY would be zero and therefore that the total derivative would not exist at the point. Could someone please clarify this for me. Thanks. JeepAssembler (talk) 21:55, 2 March 2008 (UTC)
- Let H be the Heaviside function. Then let f(x,y) = H(xy). The partial derivatives exist at (0,0), but the function is not differentiable there. ObsessiveMathsFreak (talk) 11:00, 22 April 2008 (UTC)
- Also it being continuous is not a sufficient condition for differentiability. For example, the Weierstrass function is continuous everywhere but differentiable nowhere. Then it also follows from intuition that a function could be differentiable along one direction but not differentiable along another, wherefore a partial derivative may exist but a total differential won't.
- More formally, if U is an open subset of R^n and f : U -> R^m, then f is totally differentiable at a point p if and only if each component f_i : U -> R is also differentiable. - 85.210.133.129 (talk) 03:14, 29 April 2011 (UTC)
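A quick numeric illustration of the Heaviside example above (a sketch; it assumes the convention H(0) = 0, so f(0,0) = 0):
<syntaxhighlight lang="python">
def H(t):
    return 1.0 if t > 0 else 0.0   # Heaviside step, with H(0) = 0 by convention

def f(x, y):
    return H(x * y)

h = 1e-6
# partial derivatives at the origin exist (both are 0):
print((f(h, 0.0) - f(0.0, 0.0)) / h)   # 0.0, since f(x, 0) = H(0) = 0 for all x
print((f(0.0, h) - f(0.0, 0.0)) / h)   # 0.0, likewise along the y-axis
# but f is not even continuous at (0, 0), hence not (totally) differentiable:
print(f(h, h), f(0.0, 0.0))            # 1.0 vs 0.0, however small h is
</syntaxhighlight>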
Example?!
"The transformation from spherical coordinates to Cartesian coordinates is given by..." Shouldn't this be from cartesian coordinates to spherical coordinates? Wingless (talk) 20:36, 16 June 2008 (UTC)
Yes, and it's worse than that: phi is supposed to represent the azimuthal angle ([0,π] from z axis) while theta should stay in the x-y plane as it does in polar form. Will someone who's savvy with the formula code please correct the notation, both in the parameterization and in the Jacobian? —Preceding unsigned comment added by 71.249.100.27 (talk) 02:16, 2 December 2009 (UTC)
I went ahead and changed the phi/theta notation since 71.249.100.27 is correct and, while other forms are accepted, nobody seems to have brought a counter-argument. I also think Wingless might be correct with his claim but I think the current writing may be correct from a conventional point of view. I'll let someone else make that change. Melink14 (talk) 04:25, 4 December 2009 (UTC)
Went ahead and changed this to standard physics notation. Here we use θ as the polar (zenith) angle and φ as the azimuthal angle (x-y plane, angle from x-axis). It is OK to use θ as the azimuthal, but then the notation should be (r,φ,θ) or the map should be changed to F: R+ × [0,2π) × [0,π) → R3 if one wishes to use the (r,θ,φ) notation as is standard.
Also -- the natural extension of 2D Polar coordinates is NOT spherical with the polar angle (from z-axis) set to pi/2. It is in fact cylindrical coordinates. (If it is not immediately obvious why spherical coordinates would produce the wrong results, consider that the Laplacian must be the same in 2D as it must in the natural extension of that coordinate system in 3D projected down to 2D. Working out the results in cylindrical coordinates, it is easy to verify that the Laplacian of polar coordinates is simply the Laplacian of cylindrical with z=0; if you take spherical and set the polar (zenith) angle to pi/2 you will get an extra term in the radial derivative that is not supposed to be there in 2D. This is part of the reason why we use θ to mean azimuthal angle in 2D Polar but then change θ to the Zenith angle in spherical and let φ be the azimuthal angle. It is fundamentally wrong to think that spherical and polar coordinates are this simply related.) 24.218.141.183 (talk) 06:58, 21 December 2009 (UTC)
I changed the language. The formula looks good to me (I'm not bothered by using phi or theta, so long as the domains are correct, and they are). If somebody could verify my correction then maybe the 'dubious' label should be removed. —Preceding unsigned comment added by BradLipovsky (talk • contribs) 21:36, 2 January 2010 (UTC)
I am taking a class on this and Wingless is correct. It is clear from the next two examples and from the text above this section that the example shows a transformation from Cartesian coordinates to spherical coordinates. I confirmed this from a Schaum's Outlines (Advanced Calculus) book. I went ahead and changed this. User: Anonymous —Preceding unsigned comment added by 130.245.246.244 (talk) 19:50, 11 December 2010 (UTC)
- Well, I changed it back, since it is clear from the formulas that F in the first example goes from spherical coordinates to Cartesian coordinates. --Lambiam 19:18, 12 December 2010 (UTC)
- You can change it back, but I worked out this type of problem in several homework sets with solutions, so unless the Jacobian is bidirectional (I am quite possibly wrong about this) the correction I made makes sense. Functions form rows, variables form columns, which is correct for the function given, but a change from spherical coordinates to Cartesian coordinates would require the functions to be written in terms of (x, y, z). If you look at example three and pay attention to the nomenclature used to describe the problem, my edit follows. The Jacobian determinant of (x, y, z) to (r, θ, φ) is r² sin θ. Actually you can confirm this from http://en.wikipedia.org/wiki/Multiple_integral under the spherical coordinates section. Thanks, now I have wasted half an hour of studying for finals :-) —Preceding unsigned comment added by 130.245.246.244 (talk) 22:47, 12 December 2010 (UTC)
- The text in our article Multiple integral is about domain transformations, which work contravariantly. For the univariate case, when you want to go from a function f : D → R to a function g : E → R, for example because the new domain E is more convenient to work in than the old domain D, you can do that if you have a transformation T between the domains that goes in the opposite direction, from the new domain to the old domain: T : E → D. Then you can define the function g by g(x) = f(T(x)). For use in an integral, using x for the variable in the new domain, which corresponds to the variable y in the old domain by the relation y = T(x), we then get f(y) dy = g(x) |JT| dx, where JT is the one-element matrix (dy/dx). This generalizes to the multivariate case. --Lambiam 08:50, 13 December 2010 (UTC)
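For anyone still puzzled by the direction of the map, a SymPy check (a sketch, using the physics convention for (r, θ, φ) discussed above): the components in the example are functions of the spherical variables, so the map goes from spherical to Cartesian coordinates, and its Jacobian determinant is r² sin θ.
<syntaxhighlight lang="python">
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
x = r * sp.sin(theta) * sp.cos(phi)
y = r * sp.sin(theta) * sp.sin(phi)
z = r * sp.cos(theta)

J = sp.Matrix([x, y, z]).jacobian([r, theta, phi])   # Jacobian of (r, theta, phi) -> (x, y, z)
print(sp.simplify(J.det()))                          # r**2*sin(theta): the familiar volume factor
</syntaxhighlight>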
1:1 test
It says something about the inverse function theorem... Is there a way to test whether a mapping is 1:1 on a given region of its domain, instead of on a neighborhood around a point? I thought that if the Jacobian is nowhere 0 on the region then the map is 1:1, but it's not (there are counterexamples).
For a non-expert, the notation is confusing, so best to annotate variables (and expressions) with types.
For a non-expert, the notation is confusing, so it's best to annotate variables and expressions with types. The exposition would be SO much easier to follow if it contained ample phrases like "where p is a column vector of numbers" (or a column vector of functions, etc.). As a computer programmer, I appreciate the help that a good compiler gives me with keeping track of the types of my expressions. ThinkerFeeler (talk) 05:20, 4 October 2008 (UTC)
- All variables should be defined, which includes mentioning their types; that is just proper writing style in mathematics. However, it's possible to overdo it, so this is something that's better discussed with a concrete example. Therefore, could you please be more specific, or perhaps edit the article yourself (people will protest if you go too far)? I found only one place where p was introduced and the type was only implicit, and I fixed that. -- Jitse Niesen (talk) 12:04, 4 October 2008 (UTC)
Example 1 - Spherical to Cartesian coordinates, definition of x1, x2, x3
As an example for the post above (annotate variables for non-experts), could the variables x1, x2, x3 and r, θ, φ please be annotated? As an engineer, I assumed these to be the x, y, and z directions in the Cartesian system, however this is not clear. MrLaister —Preceding unsigned comment added by 129.11.76.230 (talk) 10:13, 23 October 2008 (UTC)
- I added a bit, saying that x1, x2, x3 are the Cartesian coordinates and r, θ, φ the spherical coordinates. Is that enough? -- Jitse Niesen (talk) 10:58, 23 October 2008 (UTC)
I changed the Jacobian entry at Row 3, Column 2 from -r.sin(phi) to -r.sin(theta), an obvious slip by someone, but I also agree that x,y,z would be much more widely recognized than x1,x2,x3 in this context. Sweethaws (talk) 14:52, 23 December 2009 (UTC)
Applications
A more knowledgeable person should write something about the following application of Jacobians. The Jacobian is a key element in the power flow problem. The power flow problem is the problem of having all sorts of different parameters in a power system between generators (at the power station) and loads (homes, businesses, ...) etc. Computers at the power station need to solve huge systems of nonlinear equations several times per minute, and as it turns out the best way to do it is a guess-and-check method based on Newton's method. Newton's method requires a derivative. When solving the system of equations in a multivariate context with Newton's method, you use the Jacobian matrix. If I had known that this was used for a problem like this in Calculus III, I would have paid more attention to the meaning and conceptual properties of the Jacobian matrix.
75.162.43.124 (talk) 23:20, 26 March 2009 (UTC) Nate Nuzum
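To make the comment above concrete, here is a hedged sketch of the underlying idea: multivariate Newton's method, where a linear system with the Jacobian is solved at every iteration. The two-equation system below is a made-up toy, not actual power-flow equations:
<syntaxhighlight lang="python">
import numpy as np

def residual(v):
    # toy nonlinear system G(v) = 0; a real power-flow solver would use the
    # active/reactive power mismatch equations here instead
    x, y = v
    return np.array([x**2 + y**2 - 4.0,
                     np.exp(x) + y - 1.0])

def jacobian(v):
    # Jacobian of the residual equations
    x, y = v
    return np.array([[2.0 * x,   2.0 * y],
                     [np.exp(x), 1.0    ]])

v = np.array([1.0, -1.0])                       # initial guess
for _ in range(20):
    step = np.linalg.solve(jacobian(v), -residual(v))   # Newton step
    v = v + step
    if np.linalg.norm(step) < 1e-12:
        break

print(v, residual(v))   # residual is ~0 at the converged solution
</syntaxhighlight>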
Vandalism
This phrase is clearly vandalism: "by some people, especially physicists who have trouble with complicated ideas such as the determinant." I removed it from the section "Jacobian determinant." —Preceding unsigned comment added by 68.163.47.166 (talk) 20:47, 16 June 2010 (UTC)
- LOL! just LOL. Gotta hand it to that Vandal. -- 85.210.133.129 (talk) 03:02, 29 April 2011 (UTC)
- Nice try, Vandal 206.207.225.66 (talk) 11:13, 3 March 2012 (UTC)
What if i change the order of the equations?
Let's use the example of the article:
The Jacobian determinant of the function F : R3 → R3 with components
What happens if I change the order of the functions?
That is, if I change the order of the functions, the determinant changes its sign. This is consistent with determinant properties, I know, but my point is: which is the "right" order of the equations? Apparently the functions are NOT ordered. Luizabpr —Preceding undated comment added 23:38, 10 March 2011 (UTC).
- If you change the order of the functions you change the orientation of the axes, so a 'right-handed' coordinate system becomes a 'left-handed' one. This becomes a sign change in the determinant. This is mentioned in the article, though not very clearly. When you are doing an integral you need to take the magnitude of the determinant. Paul Matthews (talk) 10:04, 22 March 2011 (UTC)
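A tiny numeric illustration of the sign flip discussed above (the matrix entries are arbitrary, for illustration only):
<syntaxhighlight lang="python">
import numpy as np

# Jacobian of some map at some point (values made up for illustration)
J = np.array([[2.0, 1.0, 0.0],
              [0.5, 3.0, 1.0],
              [1.0, 0.0, 2.0]])

J_swapped = J[[1, 0, 2], :]    # list the first two component functions in the other order
print(np.linalg.det(J), np.linalg.det(J_swapped))           # roughly 12 and -12
print(np.isclose(abs(np.linalg.det(J)),
                 abs(np.linalg.det(J_swapped))))            # True: |det| is what enters integrals
</syntaxhighlight>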
Order of material
I re-arranged the order of things, putting all the examples towards the end. Previously it didn't make much sense because the examples used ideas like change of variable in integrals that did not come up until later in the article. Paul Matthews (talk) 11:05, 22 March 2011 (UTC)
Why matlab-code?
I see absolutely no point in including the matlab-code in this article. Why is it here? Lapasotka (talk) 08:40, 19 May 2011 (UTC)
Diagram request
I just added reqdiagram. This could do with a picture of a distortion of R2 onto R2 to illustrate how a local area deforms and how its area deforms by det J. —Ben FrantzDale (talk) 14:14, 18 October 2011 (UTC)
- I created a diagram that mimics one in one of my math textbooks:
- Basically we have a map f : R2 to R2. The two vectors define a square (the red part on the left), which gets mapped to a distorted rectangle (the red part on the right). The Jacobian defines a linear transformation at each point; shown here is its action on two basis vectors. The resulting vectors define a parallelepiped whose area is an approximation of the area of the distorted rectangle. The area of the parallelepiped is equal to the Jacobian determinant, and as the rectangle gets smaller the error goes to zero.
- I'm not sure how to write this concisely (my main strength is in graphic design) or if the diagram needs more identifying markers, like a variable name for the point or red region. I'm interested in input on this before I put it on the article --Blacklemon67 (talk) 05:56, 22 April 2015 (UTC)
- A nice picture. But, at first look, I took it to be a surface in three dimensions. Alas, I have no idea how to avoid such a wrong impression. Boris Tsirelson (talk) 06:23, 22 April 2015 (UTC)
- I noticed that too, but I'm pretty sure there isn't any way around it. It's just how the human mind parses curves. I could change "f" to "f : R2 -> R2" to make it more explicit. --Blacklemon67 (talk) 06:39, 22 April 2015 (UTC)
- It could be placed in Section "Jacobian determinant", after ...", and the n-volume of a parallelepiped is the determinant of its edge vectors." A nonlinear map F : R2 to R2 sends a small square to a distorted rectangle close to the parallelepiped, the image of the square under the best linear approximation of F near the point. Boris Tsirelson (talk) 06:59, 22 April 2015 (UTC)
- Sounds good. I'm putting it in the article for now. --Blacklemon67 (talk) 07:09, 22 April 2015 (UTC)
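In case it helps with a caption, a small numeric sketch of what the picture illustrates: the area of the image of a little square is approximately |det J| times the square's area. The map below is an arbitrary example of mine, not the one in the diagram:
<syntaxhighlight lang="python">
import numpy as np

def f(p):                                    # an arbitrary smooth map R^2 -> R^2
    x, y = p
    return np.array([x + 0.3 * y**2, y + 0.2 * np.sin(x)])

def jacobian(p):                             # its Jacobian, computed by hand
    x, y = p
    return np.array([[1.0,             0.6 * y],
                     [0.2 * np.cos(x), 1.0    ]])

p = np.array([0.7, 0.4])                     # base point of the little square
h = 1e-3                                     # side length of the square
corners = [p, p + [h, 0], p + [h, h], p + [0, h]]
img = [f(c) for c in corners]

# shoelace formula for the area of the (nearly parallelogram-shaped) image
area = 0.5 * abs(sum(img[i][0] * img[(i + 1) % 4][1]
                     - img[(i + 1) % 4][0] * img[i][1] for i in range(4)))
print(area, abs(np.linalg.det(jacobian(p))) * h**2)   # nearly equal for small h
</syntaxhighlight>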
List of Jacobians
I want to add a (long) list of Jacobians, especially of matrix functions. Should that be in this article, or should I make a separate article?
Kjetil Halvorsen 18:02, 28 August 2012 (UTC) — Preceding unsigned comment added by Kjetil1001 (talk • contribs)
Jacobi Matrix external reference
I don't think that the matrix described there is related to this page. — Preceding unsigned comment added by 79.183.148.124 (talk) 18:07, 11 November 2012 (UTC)
Good job
To the contributors: thanks for making this page so useful and comprehensible to non-specialists. It even contains pseudocode! A bit more on applications of the Jacobian would make it better, but a very good article - stellar by the standards of the average math article. Much better than 'Start' class. Doug (talk) 14:27, 2 August 2013 (UTC)
Generalizations of the Jacobian determinant
Jacobian determinant is also defined when the matrix is not square; see EoM. Boris Tsirelson (talk) 11:53, 16 May 2014 (UTC)
- I am not sure if this generalization is widely used. In any case, in the rectangular case, the rank of the Jacobian matrix plays a key role for determining the critical points. I have just added a section about that.
- Feel free to add a section about the generalizations that you have cited, but, please, mention their use, if they have some. In fact, if these generalizations are not used in some secondary sources, I doubt that they fulfill the criteria of WP:notability. D.Lazard (talk) 15:05, 16 May 2014 (UTC)
- First, they appear when calculating the area of a surface (or an integral over a surface) via an integral in dx dy. Second, in the Area formula and Coarea formula (see EoM, too). Boris Tsirelson (talk) 16:48, 16 May 2014 (UTC)
- Note that Enneper surface uses this generalization to specify a Jacobian for a surface. Mark Hurd (talk) 14:56, 2 July 2014 (UTC)
- Many thanks for everyone who contributes to the article and talk page. They are really very helpful!
- The Jacobian determinant of a non-square matrix can appear in applications involving transformation of variables or change of variables, such as robot control, or the Jacobian of the transformation from the covariance matrix to the eigenvalues and eigenvectors in the Reversible Jump MCMC algorithm. In both applications, the corresponding Jacobian term can be a non-square matrix. It seems reasonable and acceptable to calculate the Jacobian determinant of a non-square matrix using the formula in the EoM page. Xiangju (talk) 16:53, 7 January 2015 (UTC)
Typo?
The last-but-one sentence of chapter Jacobian Matrix ends on "the former the first derivative of a scalar function of several variables, the latter the first derivative of a vector function of several variables." I think you got "former" and "latter" the wrong way round. Puddington (talk) 11:01, 18 August 2014 (UTC)
- No, the gradient relates to a scalar function and the Jacobian relates to a vector function. Mark Hurd (talk) 03:35, 19 August 2014 (UTC)
ipa pronunciation of "jacobian"
The IPA pronunciation guide is totally wrong. I've never heard anybody pronounce it /dʒɨˈkoʊbiən/ and especially not /jɨˈkoʊbiən/ — Preceding unsigned comment added by 71.227.186.180 (talk) 00:12, 8 February 2015 (UTC)
Remember when? Pepperidge farms remembers.
Remember when these articles were supposed to be in English and the crap contained should at least be readable to someone without a terminal degree in WTFology? — Preceding unsigned comment added by 24.206.154.127 (talk) 19:16, 20 September 2015 (UTC)
Determinant of non-square Jacobian matrix
There really should be some explanation of how one takes the determinant of the non-square Jacobian, specifically that the scaling factor for the CoV formula given by |det(J)| is really just a special case of sqrt(det(J^TJ)) which comes from the change in Riemannian metric when you perform the pull-back. I suppose this could go in change of variables formula page, but it's really got to go somewhere as it's maddening to not understand how this works. — Preceding unsigned comment added by 68.7.130.63 (talk) 05:02, 22 January 2016 (UTC)
- You introduce the concept of the determinant of a non-square matrix. I have never heard of such a concept. If it exists, it is certainly not adopted by the mathematics community, and does not deserve to appear in Wikipedia. However, if there are sufficiently many articles on this concept, it could be the object of its own article or of a section in determinant. In any case, the text that you have added to the article (and that I have reverted) does not belong in the article without a reference to a reliably published article which defines the determinant of a non-square Jacobian (WP:primary source) and references to articles that mention and use it (WP:secondary source). Without this, this is WP:Original research, and is not relevant to Wikipedia. D.Lazard (talk) 09:00, 22 January 2016 (UTC)
So how do you propose to compute the scaling factor for a parametrization R^3 -> R^4? This is how it's done: it's the square root of the determinant of J^T * J. — Preceding unsigned comment added by 68.7.130.63 (talk) 17:11, 22 January 2016 (UTC)
- Please, sign your post with four tildes (~~~~).
- I do not know what the "scaling factor of a parameterization" is, nor what a CoV formula is (no Wikipedia article about them). In any case, even if its definition uses a Jacobian matrix, this scaling factor does not belong to this article. Moreover, the term "scaling factor" suggests that the source and the target of the mapping have the same metric, which is nonsense if the source and the target have different dimensions. D.Lazard (talk) 17:41, 22 January 2016 (UTC)
- This factor is well-known; it appears when calculating the volume form of a manifold (embedded) in Euclidean space in terms of a chart. See for example (8.1) in Sect. 8.1 in "Functions of several variables" by Wendell Fleming (Springer, 1977, "Undergraduate texts in mathematics"). However, to my regret, he does not call this factor a "generalized Jacobian" (or anything else; he just denotes and uses it). According to EoM, some author(s) call it this way; but I must admit that this usage is not notable. Boris Tsirelson (talk) 18:02, 22 January 2016 (UTC)
- Another source: Theorem 8.4 in Sect. 8.1 (what a coincidence!), and then Sect. 8.3, in "Manifolds and differential forms" by Reyer Sjamaar. (But still, no special name for this object.) Boris Tsirelson (talk) 18:15, 22 January 2016 (UTC)
- You wouldn't want to call it a "determinant" though, which simply isn't defined for a non-square matrix in general. Once I read up a bit on differential forms, I'll revisit how to best include things like that in this article.--Jasper Deng (talk) 18:52, 22 January 2016 (UTC)
- Sure. I regret not seeing it called "generalized Jacobian" (as I do :-) in my lectures Sect. 2c, p. 37), but I do not want to see it called "determinant". Boris Tsirelson (talk) 19:24, 22 January 2016 (UTC)
- And more sources:
- Sect. 5.3 in "Vector calculus, linear algebra and differential forms" by John Hubbard and Barbara Hubbard (Prentice-Hall, 2002);
- Sect. 4.11c in "Introduction to calculus and analysis" vol 2 by Richard Courant and Fritz John (Springer, 1989);
- Sect. 12.4 and 13.2.3 in "Mathematical analysis II" by Vladimir Zorich (Springer, 2004).
- Boris Tsirelson (talk) 13:25, 24 January 2016 (UTC)
Just checked back here to see if anyone had finally bothered to add the method of computing the scaling factor for CoV when the Jacobian isn't square, now that many sources have been provided, but still nothing. It just goes to show that the people running this article don't really give a fuck about providing the best quality information and are more interested in their mini power trip.
- Because we face a terminological problem. Sources give the relevant formula, but hesitate to give a name ("generalized Jacobian" or another) to the coefficient. You mention "the CoV formula" but do not tell us, where did you take this name from. Thus we do not have a good reason to place that formula in this article. Boris Tsirelson (talk) 07:20, 23 September 2017 (UTC)
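Since the sources above give the formula but no agreed name, here is a hedged sketch of the factor itself, sqrt(det(J^T J)), for the standard parameterization of the unit sphere (an R^2 -> R^3 map), just to show what it computes:
<syntaxhighlight lang="python">
import sympy as sp

theta, phi = sp.symbols('theta phi', positive=True)
# parameterization of the unit sphere, R^2 -> R^3 (non-square Jacobian)
F = sp.Matrix([sp.sin(theta) * sp.cos(phi),
               sp.sin(theta) * sp.sin(phi),
               sp.cos(theta)])

J = F.jacobian([theta, phi])                     # 3x2, so det(J) itself is undefined
factor = sp.sqrt(sp.simplify((J.T * J).det()))   # sqrt(det(J^T J))
print(sp.simplify(factor))   # equals |sin(theta)| = sin(theta) on (0, pi): the sphere's area element
</syntaxhighlight>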
Use of Jacobian in integration - change of variables
It would be useful to have examples of how the Jacobian is used when changing variables in integration. In particular, it would be useful to clarify when one uses the Jacobian determinant versus when one uses the absolute value of that determinant.
A commenter in a previous section says that we always use the absolute value of the Jacobian determinant when changing variables in an integration. Is that true? Is it a theorem or is it tradition?
For example, has an unambiguous mathematical definition that makes its value zero and this definition is different than the traditional answer to the problem "Find the area between graph of and the axis between and " because we are expected to use an unsigned concept of area.
Is there a similar distinction between the mathematical definition of and the answer we get when doing a change of variables that involves using the absolute value of the Jacobian determinant?
Tashiro~enwiki (talk) 15:45, 27 February 2017 (UTC)
- Roughly, it depends on the interpretation of dx: is it Lebesgue measure or a differential form? If it is Lebesgue measure, then the absolute value appears. Note that taking n = 1 in the n-dimensional case you do not get the oriented integral from a to b but rather the integral over the interval I whose endpoints are a and b; think what happens when a > b. Boris Tsirelson (talk) 16:22, 27 February 2017 (UTC)
- This issue is pretty technical and depends on how the integration limits are defined. In a one dimensional integration the limits are ordered and if a change of variable changes the sign of dx then the limits also change appropriately. In a multidimensional definite integral the limits are a boundary and "changing the order of the limits" becomes somewhat problematic and technical. Easier to just take the absolute value, see what the new boundary is and "get the right answer". :) Cloudswrest (talk) 18:42, 27 February 2017 (UTC)
- Furthermore, how does one order the limits if the boundary is, say, a circle. For an area integral the dxdy is a signed quantity (differential form), but the "order of the limits" is the way you go around the circle. This line of reasoning leads to Green's theorem. Cloudswrest (talk) 20:31, 22 September 2017 (UTC)
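A concrete check of the "just take the absolute value" recipe (a sketch): the area of the unit disk computed in polar coordinates, where the Jacobian determinant of (r, θ) ↦ (x, y) is r:
<syntaxhighlight lang="python">
import sympy as sp

r, theta = sp.symbols('r theta', nonnegative=True)
jac_det = r        # determinant of the Jacobian of (r, theta) -> (x, y)
area = sp.integrate(sp.integrate(abs(jac_det), (r, 0, 1)), (theta, 0, 2 * sp.pi))
print(area)        # pi, the area of the unit disk
</syntaxhighlight>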
Explicit derivation of Jacobian matrix and identity; attempted connection to Hessian matrix
I don't have the time to figure out how to convert Word documents to Wikipedia markup. I made the Word document downloadable on Google Drive. A step-by-step explicit derivation of the Jacobian transformation is shown.
https://drive.google.com/open?id=0Bx6iLxVsi_FZZ1JiOE1PTThGQUE
and the supplemental diagram
https://drive.google.com/open?id=0Bx6iLxVsi_FZd3BNV0M5VTItdUE — Preceding unsigned comment added by Wwignes (talk • contribs) 01:14, 18 October 2017 (UTC)