Talk:PlayStation 4
Please place new discussions at the bottom of the talk page.
This is the talk page for discussing improvements to the PlayStation 4 article. This is not a forum for general discussion of the article's subject.
Find video game sources: "PlayStation 4" – news · newspapers · books · scholar · JSTOR · free images · free news sources · TWL · NYT · WP reference · VG/RS · VG/RL · WPVG/Talk
This page has archives. Sections older than 60 days may be automatically archived by Lowercase sigmabot III when more than 5 sections are present.
Unified Memory and Heterogeneous System Architecture (HSA)
On HSA: Although it has not been officially announced yet, the PS4 almost certainly utilizes the newest HSA features (check HSA architectural integration). Although this info was added to the Console section of this article, I do agree that it is better not to include it until it is official; I am fine with the fact that it has been removed until then. Note: Sony's use of the term "unified memory", and the fact that AMD is designing the APU, hints that the PS4 does indeed utilize HSA (including the newest features). Without the HSA-MMU (memory management unit) and HSA's "unified address space", the PS4 would simply be using shared memory (as in the Xbox 360).
On shared memory: It is true that a shared memory architecture (misleadingly called "unified memory" in the Xbox 360's case) makes it easier for a programmer to develop a program than a system with split memory pools for the CPU and GPU (as in the PS3). The programmer can choose how to partition the memory, which is much more flexible.
On unified memory: Both the HSA-MMU and a "unified address space" greatly reduce latency, since the CPU and GPU can share pointers, which in turn removes the requirement to copy data from the CPU's memory resources to the GPU's memory resources. This also obviously simplifies the programming of a game engine.
Kapitaenk (talk) 07:51, 20 March 2013 (UTC)
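To illustrate the distinction I'm drawing between copy-based split pools and a true unified address space, here is a minimal sketch. It uses NVIDIA's CUDA managed memory purely as an analogue; the PS4 is AMD hardware and its actual toolchain is not public, so the point is the concept, not the API:

```cpp
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;   // GPU works directly on the buffer
}

int main() {
    const int n = 1 << 20;

    // Split pools (PS3-style): allocate on both sides, copy explicitly.
    float *host = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; i++) host[i] = 1.0f;
    float *dev;
    cudaMalloc((void **)&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(dev, n);
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);

    // Unified address space: one allocation, one pointer, no copies.
    float *shared;
    cudaMallocManaged((void **)&shared, n * sizeof(float));
    for (int i = 0; i < n; i++) shared[i] = 1.0f;  // CPU writes...
    scale<<<(n + 255) / 256, 256>>>(shared, n);
    cudaDeviceSynchronize();       // wait for the GPU to finish
    printf("%f\n", shared[0]);     // ...and reads the very same pointer

    cudaFree(dev);
    cudaFree(shared);
    free(host);
    return 0;
}
```

Whether the PS4 actually exposes something like the second form is exactly what we still lack a source for.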
- First off, I want to start by acknowledging that I understand you have the best intentions for improving this article, and I agree with many of your contributions thus far. The problem here is that some of the information being inserted (by several editors) appears to reflect original research. Whether accurate or not, it shouldn't be included unless it is clearly supported by a reliable source. After checking the sources in the Console section thoroughly, I'm having trouble locating the proper support. Also, sometimes there is a tendency to combine information from multiple sources to advance a new position that is not directly mentioned in any of the sources. Though unintentional at times, this is considered synthesis (a form of original research) which is highly discouraged on Wikipedia. I'm not denying your knowledge on the subject, or even what you've said above. We just need to make sure that everything is properly cited.
- On latency:
- We can easily find sources that talk about the "unified address space". However, sources that discuss the effect on latency in any detail are scarce. If you can find one that relates specifically to AMD's Fusion architecture, then I would support mentioning it in the article. --GoneIn60 (talk) 15:35, 20 March 2013 (UTC)
- Correct, which is why I reverted the topic; you had started down the original-research path as well (and in my opinion the wording you chose was also not technically 100% correct).
- There are articles about HSA's latency improvements (although they are extremely scarce), but I suppose we should just leave it as it is until more official info about the PS4's technical capabilities is released, which should be around E3.
- Kapitaenk (talk) 10:28, 23 March 2013 (UTC)
- Well, let's talk about that. Here's the information you most recently reverted:
- The unified memory architecture gives the CPU and GPU access to the same memory pool, making it easier for programmers to write code that targets both processors.<ref name="Anandtech - AMD APU">{{cite web|url=http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute/|title=AMD's Graphics Core Next Preview: AMD's New GPU, Architected For Compute|last=Smith|first=Ryan|date=December 21, 2011|accessdate=March 20, 2013}}</ref>
- If you look at page 6 in the source that I cited, it clearly states:
As a result GCN will be adding support for pointers, virtual functions, exception support, and even recursion. These underlying features mean that developers will not need to “step down” from higher languages to C to write code for the GPU, allowing them to more easily program for the GPU and CPU within the same application...the memory subsystem is also evolving to be able to service those features...This goes hand-in-hand with the earlier language features to allow programmers to write code to target both the CPU and the GPU, as programs (or rather compilers) can reference memory anywhere, without the need to explicitly copy memory from one device to the other before working on it
- The memory architecture is partly responsible, according to the source, for making it easier on programmers to "target" both the GPU and CPU. This other source that you removed through your reversion also discusses the use of "GPU compute", a feature that allows "a strong GPU to help a weak CPU on certain non-graphical tasks". So you see, the statements I included had backing. --GoneIn60 (talk) 05:43, 24 March 2013 (UTC)
- You originally wrote: "The unified memory architecture also gives the CPU and GPU access to the same memory pool, making it easier for one processor to assist the other." and, after my objection, you changed it to: "The unified memory architecture gives the CPU and GPU access to the same memory pool, making it easier for programmers to write code that targets both processors." Both of these statements are extremely vague and, as you can see from the text excerpt you posted, are also not correct in this context.
- If you would like to write text based on the quote you just posted (GPGPU plus numerous HSA hardware features; not to mention our discussion), then feel free to do so. Your source (although somewhat vague) is actually correct.
- Kapitaenk (talk) 05:52, 28 March 2013 (UTC)
- EDIT: "The memory architecture is partly responsible, according to the source, for making it easier on programmers to "target" both the GPU and CPU."
- The unified address space of HSA does not mean that it is easier for the programmer to write a program; it only means that the programmer does not have to explicitly copy data from the CPU's resources to the GPU's resources (and vice versa), because both processors share pointers. As I said, this greatly reduces latency, i.e. it is a performance feature - and not only for GPGPU features but also for "regular" graphical capabilities.
- I would strongly suggest that it does make programming easier. Copying data to and fro frequently requires asynchronous barriers: you trigger a transfer and proceed with your program, you then have to be clever about making sure the CPU is always busy working whilst data is still being DMA'd over, and you have to ensure the copy has finished before continuing at certain points. In general it is a balancing act as to where and how to place and use your barriers, and the concept itself breaks away from the simple procedural paradigm of working within a single thread. There's a lot less of this (or none) when data doesn't need copying. GMScribe (talk) 23:02, 27 May 2013 (UTC)
- EDIT: Not to mention there's not going to be any end-of-development memory-bandwidth optimisation, something that's highly prevalent in high-end game development. GMScribe (talk) 23:04, 27 May 2013 (UTC)
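- A concrete sketch of the bookkeeping being described, again using CUDA streams purely as an illustrative analogue (the PS4's actual API is not public, and the helpers here are stand-ins):

```cpp
#include <cuda_runtime.h>

__global__ void process(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 2.0f;   // stand-in for real GPU work
}

void do_unrelated_cpu_work() { /* stand-in for other CPU work */ }

void copy_based_pipeline(const float *host_in, float *dev_in,
                         float *dev_out, int n) {
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Kick off the DMA transfer; the call returns immediately...
    cudaMemcpyAsync(dev_in, host_in, n * sizeof(float),
                    cudaMemcpyHostToDevice, stream);

    // ...so the CPU can stay busy while the data is still in flight.
    do_unrelated_cpu_work();

    // Ordering barrier: queuing the kernel on the same stream ensures
    // it cannot start until the copy has landed.
    process<<<(n + 255) / 256, 256, 0, stream>>>(dev_in, dev_out, n);

    cudaStreamSynchronize(stream);  // wait before the CPU touches results
    cudaStreamDestroy(stream);
}
// With a unified address space, this whole transfer/barrier dance
// disappears: both processors read and write the same allocation.
```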
- The ability to program a GPU in C++ (for example) - now that is a feature which dramatically eases a programmer's job.
- I think what's important here is that support for pointers, virtual functions and recursion means that the fundamental hardware required to implement a high-level language such as C++ (or a virtual machine) is now present. Recursion alone means that it will no longer always be a requirement to convert every recursive algorithm into a complex loop; that by itself makes programming easier, as recursion is a concept that makes programming easier. GMScribe (talk) 23:02, 27 May 2013 (UTC)
- Kapitaenk (talk) 06:21, 28 March 2013 (UTC)
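- For what it's worth, this is the kind of transformation being referred to above: the same tree walk written naturally with recursion, and hand-converted into a loop with an explicit stack, which is what was needed on hardware without call-stack support (plain C++, nothing GPU-specific):

```cpp
#include <stack>

struct Node { int value; Node *left, *right; };

// Natural recursive form: close to the mathematical definition.
int sum(const Node *n) {
    if (!n) return 0;
    return n->value + sum(n->left) + sum(n->right);
}

// The same algorithm hand-converted into a loop with an explicit
// stack - the workaround required when the hardware cannot recurse.
int sum_iterative(const Node *root) {
    int total = 0;
    std::stack<const Node *> pending;
    if (root) pending.push(root);
    while (!pending.empty()) {
        const Node *n = pending.top();
        pending.pop();
        total += n->value;
        if (n->left)  pending.push(n->left);
        if (n->right) pending.push(n->right);
    }
    return total;
}
```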
- I don't disagree about the reduction in latency, but we shouldn't mention it until we have a reliable source that does. It could be considered original research without one and a possible point of contention. I also understand your point that the unified address space is a performance enhancement, and not something that directly makes it easier for a programmer to write a program. The next-gen APU architecture as a whole is responsible for that; the unified address space is just a piece of the pie that serves as a complement (less code to write does make it easier, but that's not its main benefit). So I agree that the wording used is somewhat vague and inaccurate. If you'd like to take a stab with better wording, be my guest. I'm fine with the way it's worded now. Sometimes less is more! --GoneIn60 (talk) 12:37, 28 March 2013 (UTC)
- It depends what you mean by latency. It takes time to copy data + command overhead from CPU to GPU and that is a part of latency. From this point of view there's a clear reduction in latency. As for the general efficiency of the memory controller, I can't comment, only refer to AMD hUMA (http://www.theregister.co.uk/2013/05/01/amd_huma/ - the only publicly announced technology that uses a single pool of memory) which promises that data recently accessed on either the CPU or GPU will likely be pre-cached for the other via cache coherence, which will have a positive impact in many instances. Regardless, if the former statement is true, there will be a significant overall reduction in latency.GMScribe (talk) 22:50, 27 May 2013 (UTC)
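- To put a rough, purely illustrative number on the copy component (these are my own assumed figures, not PS4 specifications):

```cpp
#include <cstdio>

int main() {
    // Illustrative assumptions only, not PS4 specifications.
    const double bytes      = 128.0 * 1024 * 1024;  // 128 MiB of assets
    const double bandwidth  = 16e9;                 // assumed ~16 GB/s bus
    const double overhead_s = 10e-6;                // assumed ~10 us command overhead

    double copy_ms = (bytes / bandwidth + overhead_s) * 1000.0;
    printf("one-way copy: ~%.2f ms\n", copy_ms);    // ~8.4 ms per transfer
    // With a single shared pool this cost is simply not paid; what remains
    // are cache-coherence effects, which are much harder to quantify.
    return 0;
}
```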
- Some news regarding HSA: http://av.watch.impress.co.jp/docs/series/rt/20130325_593036.html (first translation efforts: http://gamingeverything.com/44227/lots-of-ps4-hardware-tidbits/) --85.216.15.79 (talk) 15:42, 27 March 2013 (UTC)
- Chris Norden is talking about a "unified address space", which should pretty much confirm the HSA-like architecture. [1] Even the Google translation of the already mentioned Japanese article should make that clear. --Belzebübchen (talk) 00:42, 28 March 2013 (UTC)
Edit Request
This edit request has been answered. Set the |answered= or |ans= parameter to no to reactivate your request. |
Hi, I would like to request an edit, please. There is some information that has been left out. — Preceding unsigned comment added by Spikes1472 (talk • contribs) 3 April 2013
- You need to specify here what you want to add or change first. --GSK ● ✉ ✓ 01:56, 4 April 2013 (UTC)
- Not done: It is not possible for individual users to be granted permission to edit a semi-protected article. You can do one of the following:
- You will be able to edit this article without restriction four days after account registration if you make at least 10 constructive edits to other articles.
- You can request the article be unprotected at this page. To do this, you need to provide a valid rationale that refutes the original reason for protection.
- You can provide a specific request to edit the article on this talk page and an editor who is not blocked from editing the article will determine if the requested edit is appropriate. —KuyaBriBriTalk 13:45, 4 April 2013 (UTC)
Battlefield 4
Following a discussion on the Battlefield 4 Wikipedia article, we are unable to decide whether Battlefield 4 has been officially confirmed for release on the PS4. We are considering changing one of the two articles. FranktheTank (talk) 10:26, 16 April 2013 (UTC)
Games
In development: Lords of the Fallen. http://www.psu.com/a019186/Lords-of-the-Fallen-announced-for-PS4-inspired-by-Dark-Souls — Preceding unsigned comment added by Popthepuff (talk • contribs) 12:37, 24 April 2013 (UTC)
PS4 Release?
Hi there all, I just wanted to ask about the release date. Has Sony actually confirmed that the console is getting released at the end of 2013? European Combat Warrior (talk) 23:43, 26 April 2013 (UTC)
- Remember that talk pages should not be used as a discussion forum, but yes, Sony confirmed a "Holiday 2013" release. --GSK ● ✉ ✓ 23:54, 26 April 2013 (UTC)
Oh, I'm sorry, I didn't know. I just wanted to ask, as there are no sources that show a proper confirmation. European Combat Warrior (talk) 00:19, 27 April 2013 (UTC)
I'd just guess October 29, 2013 due to a couple of games (Battlefield 4, Assassin's Creed IV) having the same release date in North America, just a hunch though. — Preceding unsigned comment added by 76.118.83.121 (talk) 20:14, 22 May 2013 (UTC)
Backwards Compatibility Statement
I feel as though there should be a statement indicating that the switch to x86-64 is why there is no backwards compatibility. It seems needed so people have a basic understanding of why. — Preceding unsigned comment added by 99.110.78.153 (talk) 20:24, 4 May 2013 (UTC)
- Do you have a source which corroborates this? The switch to a different architecture alone does not make backwards compatibility impossible; if that were the case, neither the PS2 nor the PS3 would be backwards compatible with PS1 software. Its exclusion is a decision by Sony, presumably because they consider it too expensive to develop to be worth their while. Alphathon /'æɫ.fə.θɒn/ (talk) 11:40, 5 May 2013 (UTC)
Sony confirmed in their press release that backwards compatibility will, shortly after release, be implemented through Gaikai, which will use cloud-based emulation or custom hardware to stream older products to compatible consoles. This is a sensible and modern approach to backward compatibility. However, this article states: "do not add Gaikai here, backwards compatibility means it plays older media, i.e. it runs games from the disc". That doesn't agree with the Wikipedia article on backward compatibility, where, for example, the Vita plays old PSP and PS1 games: at some point most of those games were physical media, and the Vita only supports the downloaded binary image of those games, not the physical media. Gaikai is still fundamentally a binary-compatible backwards compatibility service. I believe we should list the PS4 as having planned backwards compatibility, simply ensuring that it's appropriately contextualised, as this is an important consumer fact and general capability. GMScribe (talk) 17:14, 24 May 2013 (UTC)
Analogue sticks
This edit request has been answered. Set the |answered= or |ans= parameter to no to reactivate your request. |
Hi, there seem to be some slightly misleading references to the analogue sticks now being concave in a way similar to Xbox controllers. In fact, as can be seen in pictures released by Sony and those taken by the media, they are still slightly convex, just with very pronounced ridges stopping a player's thumbs from slipping off (which, while effectively making them concave on the whole, is really quite different to the entirely concave analogue sticks of the Xbox One controller). This can be seen here:
http://cloud.attackofthefanboy.com/wp-content/uploads/2013/03/playstation-4-shortages.jpg
http://latimesherocomplex.files.wordpress.com/2013/02/ps4controller3.gif?w=600
http://3.bp.blogspot.com/-vgntnkNgh7U/USbRSX_6etI/AAAAAAAAEHg/ZPeu2A3QrK0/s1600/playstation-4-cover.jpeg
With an Xbox One controller for comparison:
http://rack.2.mshcdn.com/media/ZgkyMDEzLzA1LzIyLzg4L0hvbGRpbmdfWGJvLmUyMDYyLmpwZwpwCXRodW1iCTEyMDB4OTYwMD4/5041f624/926/Holding_Xbox_One_Controller1.jpg
And an Xbox 360 controller for comparison:
https://upload.wikimedia.org/wikipedia/commons/f/f4/Xbox_360_wired_controller_1.jpg
Apologies if I've made a hatchet job of editing in this request, by the way. — Preceding unsigned comment added by 78.150.13.189 (talk • contribs) 17:55, 24 May 2013 (UTC)
- Done. And yes, you did make a hatchet job of it - X201 (talk) 18:11, 24 May 2013 (UTC)
Edit request
Please change "8 GB" to "8 GiB" (of RAM; gibibytes). Thank you 93.129.9.216 (talk) 22:24, 26 May 2013 (UTC)
- Why gibibytes instead of gigabytes? RocketLauncher2 (talk) 00:38, 27 May 2013 (UTC)
- It depends whether you prefer SI or IEC notation; either way the quantity being described is the same, and I'm not aware of Wikipedia having adopted any one standard. GMScribe (talk) 23:10, 27 May 2013 (UTC)
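- For anyone unsure what the difference amounts to, a quick worked example: in the strict SI sense, 8 GiB is about 7.4% more bytes than 8 GB.

```cpp
#include <cstdio>

int main() {
    const unsigned long long gb  = 8ULL * 1000 * 1000 * 1000;  // 8 GB  (SI)
    const unsigned long long gib = 8ULL * 1024 * 1024 * 1024;  // 8 GiB (IEC)
    printf("8 GB  = %llu bytes\n", gb);          // 8,000,000,000
    printf("8 GiB = %llu bytes\n", gib);         // 8,589,934,592
    printf("ratio = %.4f\n", (double)gib / gb);  // ~1.0737
    return 0;
}
```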