Talk:Larrabee (microarchitecture)

Larrabee consumer graphics card canceled

Large parts of this article are now obsolete, and it may be appropriate to replace "GPU" in the article title with something else. Modeless (talk) —Preceding undated comment added 00:28, 5 December 2009 (UTC).

Intel cancelled the release of the first version of Larrabee hardware as a GPU. They didn't cancel the whole project or the GPUness; that's a common misreading of their press statements. If you read the Intel statements directly (not other people's summaries) you can see they say something like "cancelling the first generation Larrabee graphics", and they mean "cancelling the first generation of Larrabee graphics", not "cancelling Larrabee graphics (which happens to be at the first generation)". It's just another delay. 72.86.22.222 (talk) 02:17, 16 December 2009 (UTC)

TFLOPS

How do you get (32 cores)*(16x vector)*(2 GHz) = 2 TFLOPS ? It's 1 TFLOP, which is still pretty good. —Preceding unsigned comment added by 70.69.136.94 (talk) 03:19, 12 June 2009 (UTC)

Larrabee, like most GPUs, supports a fused multiply-accumulate instruction, which performs a multiplication and an addition in one clock cycle. When reporting FLOPS it's normal to count an FMAC instruction as two floating-point operations.
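To spell out the arithmetic (a minimal sketch; the core count, vector width, and clock are the figures from the question above):

```cpp
#include <cstdio>

int main() {
    // 32 cores x 16 SIMD lanes x 2 ops per FMAC x 2 GHz = 2048 GFLOPS ≈ 2 TFLOPS.
    const double cores = 32, lanes = 16, ops_per_fmac = 2, clock_ghz = 2.0;
    const double gflops = cores * lanes * ops_per_fmac * clock_ghz;
    printf("%.0f GFLOPS (%.3f TFLOPS)\n", gflops, gflops / 1000.0);
    return 0;
}
```

Counting the FMAC as one operation instead of two gives the 1 TFLOPS figure from the question.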

GDC 2009

This article needs a screenshot of the demo from GDC - plus any other info released at the conference.

Larrabee demo video: http://www.youtube.com/watch?v=gxr_4sZ7w7k

I don't think that video is related to Larrabee, unfortunately. Project Offset is not Larrabee-exclusive, and AFAIK no actual Larrabee cards exist yet. That demo is likely running on Radeon or GeForce cards. ProjectOffset.com is the source of the video and it makes no mention of Larrabee.
Hopefully various tech news sites will soon post their summaries of the GDC Larrabee presentations. So far there hasn't been much new information released; the biggest news was the release of a C++ implementation of the Larrabee new instructions, which is now linked from the "External Links" section of the article. Modeless (talk) 03:37, 28 March 2009 (UTC)
Aha, well that's a bit of a shame(?). Anyway, the new instructions are quite exciting. Almost warrants a new article section that compares current IA SSE with the new set. 16 single precision floats in one register? I also love the mask with each instruction that allows you to cancel operations on certain slots. And pow() on chip - even x87 didn't have that! —Preceding unsigned comment added by 84.9.165.51 (talk) 16:08, 28 March 2009 (UTC)
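To illustrate what the per-instruction mask does, here is a scalar model of a 16-wide masked multiply-add. This is only a sketch of the semantics; the type and function names are hypothetical, not the actual LRBni C++ prototype API:

```cpp
#include <array>
#include <cstdint>

using vec16f = std::array<float, 16>;  // models one 512-bit register: 16 single-precision floats
using mask16 = uint16_t;               // one predicate bit per lane

// Masked multiply-add: lanes whose mask bit is clear keep their old value,
// which is how the mask "cancels operations on certain slots".
vec16f madd_masked(vec16f acc, const vec16f& a, const vec16f& b, mask16 k) {
    for (int i = 0; i < 16; ++i)
        if (k & (1u << i))
            acc[i] = a[i] * b[i] + acc[i];
    return acc;
}
```

With SSE you would instead compute both results and blend them, since SSE has no per-instruction predication and holds only 4 floats per register.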

Criticism

Do these two sentences really warrant an entire section? Of course the criticism is valid, but it looks like somebody rushed to "hate". It makes the rest of the article feel unfinished and hasty, when actually the article is rather complete and of good quality. Perhaps it should be moved to another section? 84.9.165.14 (talk) 21:38, 5 October 2008 (UTC)

No objection? Moved. 84.9.165.14 (talk) 13:16, 11 October 2008 (UTC)

10 August edits

Ramu50, your concern about marketing speak in calling x86 "popular" is valid; I've changed it to say "common" instead. I have also reintroduced the information about the Pentagon's involvement in Larrabee's design.

The SIGGRAPH paper clearly states that Larrabee has a coherent cache across all cores: (Italics added for emphasis) Page 1: "The cores each access their own subset of a coherent L2 cache". Page 3: "These cache control instructions also allow the L2 cache to be used similarly to a scratchpad memory, while remaining fully coherent." Page 11: "In contrast, all memory on Larrabee is shared by all processor cores. For Larrabee programmers, local data structure sharing is transparently supported by the coherent cached memory hierarchy regardless of the thread’s processor."

The text claiming that Larrabee has a crossbar architecture is not supported by sources. Larrabee uses a ring bus, as described in the SIGGRAPH paper.

Modeless (talk) 23:22, 10 August 2008 (UTC)

Title

Does anyone else have an issue with the title of this page, "Larrabee (GPU)"? I don't know if it is known for certain whether Larrabee is meant to stand alone as the only processor in a PC, like what everyone had before 1997, or to function like a modern video card alongside a dedicated CPU. But either way I think calling this a GPU is a stretch. —Preceding unsigned comment added by 74.219.122.138 (talk) 13:30, 8 August 2008 (UTC)

Intel has announced that Larrabee will definitely be sold as a video card. It will be a PCI Express 2.0 card with its own RAM, marketed and sold for the express purpose of accelerating DirectX/OpenGL inside a PC with a separate CPU, just like other GPUs. Not to mention that Larrabee does include some fixed-function GPU hardware (the texture sampling units). Eventually if Intel announces other Larrabee-based products, like a standalone chip that slots into Intel motherboards with QuickPath, we might want to revisit the title of this article. But right now there is no indication that Intel is going to sell Larrabee as anything but a GPU. Modeless (talk) 18:35, 8 August 2008 (UTC)

Speculation

Release date

Any speculation about the estimated release date? --Xav

Added. Modeless 20:59, 17 September 2007 (UTC)


Open drivers

Likewise, any references to open source drivers? --Daniel11 06:43, 10 September 2007 (UTC)

Nope, not that I know of, but it makes sense. Intel has in the past been friendly to open-source driver efforts for their hardware, and the trend nowadays is to be more open to support GPGPU (as seen by NVIDIA's CUDA, ATI's CTM and more recently their release of complete documentation without NDAs), and since Intel would like Larrabee to dominate this market they will have to be open. I would be surprised if Intel didn't encourage open-source drivers for Larrabee, at least for the GPGPU side of things. Modeless 20:59, 17 September 2007 (UTC)

I think it will be easier to write open source drivers. Just use the Gallium/Mesa software OpenGL stack, compile it for x86, and use the wide SIMD and the texture units in the hot loops. The only real problem with just the hardware will be modesetting, but knowing Intel it will be working from day zero. —Preceding unsigned comment added by 83.175.177.180 (talk) 09:53, 6 April 2009 (UTC)
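As a sketch of that idea: the hot inner loops of a software rasterizer are exactly the kind of code that maps onto Larrabee's 16-wide vector unit. Illustrative only, not actual Mesa/Gallium code:

```cpp
// Shade a span of n pixels. On Larrabee, a vectorizing compiler (or
// hand-written LRBni code) could retire 16 of these iterations at a time.
void modulate_span(float* out, const float* texel, const float* color, int n) {
    for (int i = 0; i < n; ++i)
        out[i] = texel[i] * color[i];  // e.g. texture sample * interpolated vertex color
}
```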

Unfortunate code name

Larrabee was the name of the slow-witted assistant to the Chief of CONTROL on the TV series Get Smart. ;) He's the guy who, when told to take away a bugged aerosol can "and step on it" (meaning do it quickly), took the order literally. After an offscreen explosion, Larrabee came back in with his clothes shredded and said "Chief, I don't think you're supposed to step on those.". —Preceding unsigned comment added by Bizzybody (talk · contribs) 10:35, 8 December 2007 (UTC)

URL for the paper

http://softwarecommunity.intel.com/UserFiles/en-us/File/larrabee_manycore.pdf

I'll leave it to someone else with more time on their hands and better understanding of Wikipedia rules and regulations to beautify this article. --Tatejl (talk) 21:06, 5 August 2008 (UTC)

August Changes

Requested permission from authors for image use. dila (talk) 20:37, 8 August 2008 (UTC)

Permission granted for the use of several ([1], [2], [3], [4]) images from the recent SIGGRAPH conference. dila (talk) 21:39, 8 August 2008 (UTC)
Provide the permission text to WP:OTRS please. --soum talk 03:59, 9 August 2008 (UTC)
I have sent the permission via email to [permissions-en@wikimedia.org]. dila (talk) 13:06, 9 August 2008 (UTC)


Removed entire opening paragraph and replaced with a minimal buzzword description of Larrabee. dila (talk) 01:40, 9 August 2008 (UTC)

Jeez Dila, you basically ruined the entire article. Did you even read, let alone understand, anything that you inserted? 99.236.145.40 (talk) 04:15, 9 August 2008 (UTC)
Huh? You mean those images that I took time to obtain ruin the article? And my opening description that remains after 27 edits? What exactly are you referring to? Wait, don't bother. dila (talk) 13:00, 9 August 2008 (UTC)

Comparison to Cell?

Since the article compares the chip to GPUs from Nvidia, but it is more of a multicore, high performance general purpose computer with ring-based connectivity, wouldn't at least a reference to the Cell (Cell Broadband Engine) be suitable? —Preceding unsigned comment added by 62.209.166.130 (talk) 11:16, 12 August 2008 (UTC)

There was one - see here[5]. It was probably removed by 99.236.145.40, the fool above. dila (talk) 14:23, 12 August 2008 (UTC)
Please be respectful of other editors. Comment as much as you want on their edits, but don't make it personal. --soum talk 09:29, 13 August 2008 (UTC)
I've started a comparison in the comparison section. Modeless (talk) 08:05, 13 August 2008 (UTC)
Read on only if you want the unsolicited advice of an experienced editor :). Wikipedia articles are to describe the subject of the article, not its competitors. So it's better not to delve deep into the competition. Mention it in a one-liner like "competes with GPUs like X, Y as well as with high performance architectures like Z". Use proper categories and navboxes to enable easy navigation between articles about competing products. Try to avoid comparison as much as possible; that becomes extremely unwieldy in the long run. Only include as much comparative info as is necessary to understand the concepts, like the fully programmable pipeline in Larrabee vs partial ones in regular GPUs, or crossbar vs ring bus architecture. Do not put comparisons of performance or other stuff in this article. Rather, if that information needs to be provided, create a separate "Comparison of Larrabee and..." article (if necessary, a series of articles, even). A proper structure is the first thing in maintainability of articles. --soum talk 09:26, 13 August 2008 (UTC)

This article looks like an advertisement to me, especially with the inclusion of that programmability graph with all arrows pointing towards it. Also there is no mention of any competing products. NPOV anyone? --DustWolf (talk) 00:56, 13 August 2008 (UTC)

The competing products are mentioned in the first line. Would you like to see expansion of that? E_dog95' Hi ' 01:08, 13 August 2008 (UTC)
Almost the whole article is a comparison of Larrabee to current products. I would love nothing more than to compare Larrabee with the future products it will be competing with in late 2009. However, basically nothing is known about the next GPUs from NVIDIA and ATI. They keep all information about future products as secret as possible until right before release (days, usually). Intel is being remarkably open for the GPU market. So there really isn't much to compare Larrabee to right now. I could add some speculation but I suppose that doesn't belong in a Wikipedia article (the Beyond3D forum is probably the best place). As for the PowerPoint slide, well, it is from an Intel marketing presentation, so I suppose it should look like an advertisement. Feel free to replace it. Modeless (talk) 08:21, 13 August 2008 (UTC)
I think the first chip image is good, but I guess the others are a bit "Pro-Intel". Also they are "public domain" which makes them really hassle free. dila (talk) 21:06, 13 August 2008 (UTC)
I would like to propose that the article be extended with an "architecture" section where the known architectural features are described. It will then be natural to go into the comparisons with GPUs from Nvidia and AMD/ATI as well as Cell/CBE chips from STI. —Preceding unsigned comment added by 62.209.166.130 (talk) 07:44, 14 August 2008 (UTC)
This article reads like an Intel press release. It states only the benefits of the product and none of its drawbacks. The comparisons made all benefit Intel. --67.241.177.245 (talk) 02:15, 23 May 2009 (UTC)
Feel free to add some drawbacks in there, dude; just be sure to cite them. The problem is that all the unreleased chips that are going to be comparable to Larrabee are super extra top secret. Intel's publishing SIGGRAPH papers while ATI and NVIDIA are keeping their mouths shut. When ATI and NVIDIA's next-gen chips are revealed I am sure they will compare much more favorably to Larrabee. Furthermore, Larrabee's advantages are all in its flexible architecture; its drawbacks are likely to be in performance which, again, is secret right now. Modeless (talk) 05:39, 23 May 2009 (UTC)

GPGPU?

Here's another interesting question: how can a chip that is not technically a GPU according to its maker compete in the GPGPU market? --DustWolf (talk) 13:47, 31 August 2008 (UTC)

Because it can be programmed to do the same job. 84.9.165.10 (talk) 19:01, 1 September 2008 (UTC)
As can practically any other piece of computer hardware, especially all the CPUs. That doesn't mean they are all in the GPGPU market. --DustWolf (talk) 23:40, 1 September 2008 (UTC)
Larrabee will have some hardware support for GPU operations (like texture filtering), while CPUs do not. 84.9.164.204 (talk) 20:54, 3 September 2008 (UTC)
Not exactly true; a CPU can emulate any function of a GPU (including texture filtering), making this more an issue of semantics than of hardware architecture. My vote is that this chip is more a CPU than a GPU, due to its x86 instruction set. I'm not saying this doesn't make it part of the GPGPU market because I say so, but you can't just arbitrarily decide that depending on what the marketing department says. Use some official definition instead. --DustWolf (talk) 03:23, 7 September 2008 (UTC)
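As a sketch of that claim: texture filtering is ordinary arithmetic, so a CPU can compute it in software. A minimal bilinear filter over a single-channel texture might look like this (illustrative names; assumes u and v are in [0, 1]):

```cpp
#include <algorithm>
#include <vector>

// Bilinear sample of a w-by-h single-channel texture at normalized (u, v).
float sample_bilinear(const std::vector<float>& tex, int w, int h, float u, float v) {
    const float x = u * (w - 1), y = v * (h - 1);
    const int x0 = (int)x, y0 = (int)y;                           // top-left texel
    const int x1 = std::min(x0 + 1, w - 1), y1 = std::min(y0 + 1, h - 1);
    const float fx = x - x0, fy = y - y0;                         // fractional weights
    const float top = tex[y0 * w + x0] * (1 - fx) + tex[y0 * w + x1] * fx;
    const float bot = tex[y1 * w + x0] * (1 - fx) + tex[y1 * w + x1] * fx;
    return top * (1 - fy) + bot * fy;                             // blend the two rows
}
```

Dedicated texture units do this (plus addressing and format decompression) in fixed-function hardware, which is why Larrabee keeps them.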
Also, the reference source provided (2) has a link to a bogus article that has no mention of GPGPU, stream processing or Larrabee. --DustWolf (talk) 23:53, 1 September 2008 (UTC)
That was a rather bogus reference; I suspect it was an accidental copy/paste of the wrong press release link. I've updated it. The improved reference says things like "The Larrabee native programming model supports a variety of highly parallel applications [...] This enables [...] true general purpose computation on the graphics processor". And: "Initial product implementations of the Larrabee architecture will target discrete graphics applications, support DirectX and OpenGL, and run existing games and programs." Intel doesn't use the acronyms GPU or GPGPU; their preferred marketing-speak is "Architecture for Visual Computing". But it's obvious that what they are describing is intended to compete with GPUs in both graphics and GPGPU applications. In addition, the SIGGRAPH paper is a little less marketing-driven, and uses the acronyms "GPU" and "GPGPU" more times than I care to count right now (not to directly describe Larrabee, but to compare it to its competition). Modeless (talk) 06:45, 2 September 2008 (UTC)

The article also says CPU quite a bit... and finishes with: "Larrabee is an appropriate platform for the convergence of GPU and CPU applications." Intel will be the ones to decide what acronym best suits it when they start marketing the thing to consumers; until then any classification based on deduction, or comparison to older architectures, is nothing but speculation.

I also think the statement that Intel will release a "video" card in late 2009/10 needs to be sourced properly; the current source states that Paul Otellini said "a product" will be out then. He never said anything about a video card. A PCIe card has been called out as a possible package, depending on the application. This in no way confirms or denies that a video card is how Larrabee will first be sold. Making assumptions about Larrabee is a mistake; it's a completely new product and so does not need to match established conventions. Video cards are not the only way to use graphics processing, and graphics processing is not the only way to use Larrabee. 192.198.151.129 (talk) 11:31, 2 January 2009 (UTC)

Also, http://www.intel.com/technology/visual/microarch.htm?iid=tech_micro+larrabee is an official Intel release on this, and states that Larrabee is a multi-core architecture. Calling Larrabee a GPU is clearly a misinterpretation, and I move that we change the article name to "Larrabee (Microarchitecture)" and remove any references to Larrabee as a GPU. I cannot find one non-speculative source that calls Larrabee a GPU. In the SIGGRAPH paper the hardware differences between Larrabee and both current GPUs and CPUs are examined. The applications of Larrabee look to be targeted at the market segment that GPUs dominate, but those applications can also be run on CPUs and GPGPUs. And to the best of my knowledge Intel never called it a GPU. 192.198.151.129 (talk) 15:08, 2 January 2009 (UTC)

I have updated the source for the video card claim. The Intel press release clearly states that "The first product based on Larrabee will target the personal computer graphics market and is expected in 2009 or 2010", and more importantly "Initial product implementations of the Larrabee architecture will target discrete graphics applications". "Discrete graphics" is an industry term for video cards ("discrete" being the opposite of "integrated").
I contend that it doesn't matter whether Intel calls Larrabee a GPU or not: Intel's marketing department can't change the definition of "GPU". Most outside sources use the word "GPU" when talking about Larrabee. Larrabee will be a co-processor to a CPU which is capable of dramatically accelerating graphics operations, and that's all that's required to be called a GPU.
Please note that calling Larrabee a GPU in no way precludes its use in non-graphics applications. I completely agree that graphics processing is not the only way to use Larrabee. However, all of Intel's communications about Larrabee so far have been in reference to "visual computing" applications. All the public information about Larrabee (mostly the SIGGRAPH paper) solely concerns its first implementation as a GPU. Furthermore, Larrabee includes hardware which is pretty much only useful in graphics applications: the texture sampling units. When Larrabee's video card is released and information about other Larrabee-based products becomes available, including possible non-graphics focused products, it may make sense to break out graphics-specific information into a separate article about Larrabee's video card, and turn this into a generic article about Larrabee as an architecture. However, until Intel starts publicly talking about non-graphics applications of Larrabee, there is no basis for speculation in this article. Intel has not ever talked about a Larrabee product that is not a video card.
I suspect the reason for silence from Intel about non-graphics and non-games applications of Larrabee is due to internal politics. Larrabee is likely to blow the doors off Intel's flagship Core processor line for many HPC applications and even consumer applications other than graphics, but to crow publicly about that might be damaging to Larrabee inside of Intel. It has been speculated that for a time certain aspects of the Pentium processor were held back to avoid competing with Itanium (which was then the flagship top-performance chip) and I suspect the Larrabee team wants to avoid that by pretending for as long as possible that Larrabee will not compete with Core. It's also possible that Intel simply wants to manage expectations about Larrabee and not build up hype for its non-graphics applications when it plans to focus solely on the graphics market at the start. Modeless (talk) 21:22, 2 January 2009 (UTC)
The same source you quoted from contains: "Additionally, a broad potential range of highly parallel applications including scientific and engineering software will benefit from the Larrabee native C/C++ programming model." i.e. not just graphics applications, so they have crowed publicly about it, and they have also crowed about Larrabee as a step in the transition to many-core processors. Intel have therefore publicly talked about the non-graphics applications of the architecture, and based on what they say the importance of Larrabee clearly transcends a mere graphics card war with Nvidia. I expect that PCIe cards with the Larrabee architecture will not be called video cards, but something reflecting their versatility compared to a video card.
My main point for not calling Larrabee a GPU is that it is an architecture, like Core. Intel have clearly stated this in the description of Larrabee on their website and in the SIGGRAPH paper, and have never referred to it as a chip/card/product. The sources you mention have misunderstood this. It cannot possibly be a GPU, and allowing the article name to call it a GPU is an obvious error. It is an incidental fact that when the first products based on this architecture are released, they do not have to be GPUs as opposed to GPGPUs or something that doesn't fit into either category. Contrary to what you have said, the fact that a Larrabee based product will probably dramatically accelerate graphics performance alongside a CPU does not necessarily make it a GPU, because of the additional capabilities a Larrabee based product may have. As yet there are no sources that can state the performance specifications of the first products, the package they will be in, what they will be called, or what they will do.
As secondary sources SIGGRAPH and Intel must be considered more reliable sources than speculative internet magazines if the two conflict in terminology and ideas. Now you can speculate all you like, but unless there is a source that clearly supports what you're saying, the article needs to be changed.
Larrabee is not a GPU but an architecture, and the first products containing Larrabee have not yet been revealed as video cards. 86.44.199.65 (talk) 01:33, 4 January 2009 (UTC)

Contrary to what you have said, the fact that a Larrabee based product will probably dramatically accelerate graphics performance alongside a CPU does not necessarily make it a GPU, because of the additional capabilities a Larrabee based product may have.

This is wrong. Having additional capabilities doesn't prevent something from being a GPU. Current GPUs are being used for molecular dynamics simulation, database searching, financial modeling, etc etc. What defines a GPU is the capability of accelerating graphics APIs, not a lack of ability in other areas.
Yes, Larrabee is an architecture, and it's also a GPU. These things are not mutually exclusive: Larrabee is a GPU architecture. It includes GPU-specific features (texture sampling units, display interfaces), and is targeted at the GPU and GPGPU markets. Future chips based on the Larrabee architecture may drop GPU-specific features and may be designed and marketed for non-GPU and non-GPGPU purposes; however Intel has not announced or even suggested such plans. Furthermore, if you for some reason believe that "discrete graphics" does *not* refer to video cards then the burden of proof is on you, as today "discrete graphics product" is a synonym for "video card".

As secondary sources SIGGRAPH and Intel must be considered more reliable sources than speculative internet magazines if the two conflict in terminology and ideas.

There is no conflict. Intel has not talked about using Larrabee for anything other than GPU or GPGPU tasks. Intel has never denied that Larrabee is a GPU. All they have claimed is that Larrabee is more flexible than current GPUs, but I stress again, that flexibility doesn't make Larrabee *not* a GPU. As I said: if Intel announces a Larrabee product which is *not* a GPU; i.e. it does not have texture sampling units or display interfaces and is not intended for graphics acceleration, *that's* when the article should be changed and not before. Modeless (talk) 08:51, 5 January 2009 (UTC)
As a followup, I'd like to point out that Intel has never used the acronym "GPU" to describe their integrated graphics products either. Will you concede that Intel's products can be called GPUs even though Intel's marketing department has decided not to use the acronym themselves? Modeless (talk) 09:18, 5 January 2009 (UTC)
Is the Cell processor a GPU? Products using the Larrabee architecture could conceivably be used as standalone processors, for example in a games console. Your argument of taking "Initial product implementations will be targeted at discrete graphics applications", by extension deducing that Intel will release a video card, and arguing that a whole article should be based on that video card, does not add up. If this article is to remain as Larrabee (GPU) then the quantity of information that you could source would amount to a couple of lines. If you were to name it Larrabee (Microarchitecture) you could use most of the information already in the article, correctly sourced, and link the tiny stub on the first implementation of it to this. There is not enough information released on the first implementation of this architecture to warrant a substantial article. The architecture itself warrants an article. You have already implied this in your statement that when more implementations are released the article could be moved. Why not move it now and see how the "GPU" information stands up in its own article? Because it won't stand up unless piggybacked on something of substance. 192.198.151.130 (talk) 10:30, 5 January 2009 (UTC)

Products using the Larrabee architecture could conceivably be used in standalone processors

I agree, and yet: this is still just speculation. There is no evidence Intel intends to do this. The only evidence of a Larrabee product that exists is of a Larrabee "discrete graphics" product.
Look, Intel clearly says Larrabee targets the PC graphics market. The PC graphics market has two segments: integrated (soldered to the motherboard) and discrete (video cards). When Intel says Larrabee will be a discrete graphics product, and you deny that this means a video card, what hidden alternative meaning do you propose for Intel's statement, and what evidence do you have to back it up?

There is not enough information released on the first implementation of this architecture to warrant a substantial article. The architecture itself warrants an article.

Precisely the opposite. The *only* information we have is about the first implementation of this architecture as a GPU. We know nothing about where Intel plans to take Larrabee in the future. Larrabee 2 could be a very different beast. If Larrabee flops, it could even be stillborn. The information in this article belongs here, and it's the "architecture" article that would necessarily be a stub until more information about future Larrabee products is released.
P.S. No, the Cell processor is not a GPU. Tech news outlets do not routinely refer to it as a GPU. Cell does not include any graphics-specific hardware like texture units or display interfaces. Cell is not used as a graphics accelerator for a separate CPU, and when it attempts to render graphics by itself (as in PS3 Linux) its performance does not match GPUs. Tangentially, Toshiba's SpursEngine based on Cell might be called a GPU as it *is* used as a graphics accelerator for a separate CPU and *does* include graphics-specific hardware, but it doesn't seem to be intended to accelerate 3D graphics (only 2D video) so calling it a GPU would be a stretch. Modeless (talk) 04:14, 6 January 2009 (UTC)

The information in this article belongs here, and it's the "architecture" article that would necessarily be a stub until more information about future Larrabee products is released.

Intel managed to do it with a full SIGGRAPH paper! They refer to Larrabee as an architecture repeatedly, and never once (that's right, not once, not anywhere) refer to it as a GPU. They haven't even referred to its potential products as GPUs, just products that will "target discrete graphics applications". If you built integrated graphics powerful enough you could "target discrete graphics applications". In fact you could draw many interpretations from those words, one of which is that Larrabee based products will overthrow GPUs as the best available graphics products. The particular interpretation you have made is that Larrabee is a GPU.
Who knows what they meant?... Not you, not me, so let's not put it in an encyclopedia.
I get the feeling we're not going to agree on this; is there any way to get a third person, or some way of getting consensus on this? 192.198.151.129 (talk) 17:28, 8 January 2009 (UTC)

[Intel] never once (that's right not once, not anywhere) refer to it as a GPU

We've already gone through this; it's irrelevant. Intel makes products which even you cannot deny are GPUs, and yet Intel never refer to those as GPUs either. Would you lead a crusade to rid the Intel GMA article of the acronym "GPU" as well? Unless you find me an Intel quote saying Larrabee is *not* a GPU, this argument is dead.

The particular interpretation you have made is that Larrabee is a GPU. [...] is there any way to get a third person, or some way of getting consensus on this?

Me, and CNET, PC Perspective, Personal Computer World, Tech Radar, Daily Tech, XBitLabs, Gizmodo, ZDNet, Futuremark, Ars Technica, and countless other tech news sites too numerous to link here all call Larrabee a GPU. In particular I'd like to call your attention to the Ars Technica article:

"Intel confirmed that the first Larrabee products will consists of add-in boards aimed at accelerating 3D games—in other words, Larrabee will make its debut as a "GPU.""

Ars Technica is a respected and reliable secondary source, published by Condé Nast (owner of Wired, The New Yorker, and Vanity Fair, among others). I'm not drawing conclusions from thin air; I've got sources to spare. Modeless (talk) 06:59, 9 January 2009 (UTC)
Your sources are tertiary, mine are secondary. I reiterate the need for some kind of third party consensus. You should have no problem with that if you are so sure you're right. 192.198.151.129 (talk) 11:35, 9 January 2009 (UTC)
Ars Technica, Personal Computer World, Tech Radar, PC Perspective, and ZDNet are all secondary sources. These are publications which do original reporting and analysis based on primary sources (e.g. statements by Intel at conferences or in press releases, or direct communication with Intel employees). Furthermore, while the SIGGRAPH paper is undoubtedly the best source we have, it is not in conflict with my sources. There is simply no contradiction or dispute in the sources presented here. You're welcome to seek a third party opinion if you want; you don't need my permission. Modeless (talk) 20:29, 9 January 2009 (UTC)

Good God, I imagine this will be the only way of working out if it's a GPU. Though I expect the MK II will be called a CPU by everyone. 86.13.78.220 (talk) 21:49, 19 February 2009 (UTC)

In this case I think the article does something like referring to "Anatidae" as "a duck". Which, of course, is absurd. 213.191.229.186 (talk) 19:39, 23 February 2009 (UTC)

Threading

I just deleted the sections on SIMT and threading. The papers on Larrabee clearly describe a threading model almost identical to that used in current GPUs (especially Nvidia's); the only real difference is in terminology. In Larrabee each core has 4 'threads' but each thread manages 2-10 'fibers' (equivalent to a 'warp' in Nvidia's GPUs) and each fiber contains between 16 and 64 'strands' (equal to what Nvidia calls a 'thread'); each strand executes in a separate SIMD execution lane. This is exactly like Nvidia's architecture, where each warp consists of 32 "threads" each of which executes in a SIMD execution lane. And no, there is nothing 'effectively scalar' about either architecture. (Damn, forgot to sign in). —Preceding unsigned comment added by 124.186.186.7 (talk) 06:40, 3 January 2009 (UTC)
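A scalar model of the mapping described above; the names and widths follow this comment's terminology only, not any official header:

```cpp
// Larrabee "strand" ~ NVIDIA "thread": one SIMD lane of work.
// Larrabee "fiber"  ~ NVIDIA "warp":  a batch of strands issued together.
// Larrabee "thread" : one of the 4 hardware contexts per core, which
//                     switches among its fibers.
constexpr int kStrandsPerFiber = 16;  // one 16-wide vector instruction's worth

// One step of one fiber: all of its strands advance in lockstep, each
// occupying one SIMD execution lane.
void step_fiber(float* data, int fiber_base) {
    for (int lane = 0; lane < kStrandsPerFiber; ++lane)  // conceptually one vector op
        data[fiber_base + lane] *= 2.0f;
}
```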

90% of linear?

There's talk about the scaling of performance with the number of cores in this article, and it is claimed to be linear or "90% of linear" in the case of 48 cores. This "90% of linear" talk is incoherent. A linear function is one that forms a line, hence the name. More precisely, f(x) is a linear function just in case there is a number n such that f(x) = n*x. 90% is just one possible value for n, and if f(x) is linear then .9*f(x) is linear. What the hell is this article saying at this point? I didn't edit because I presume that something that makes sense was intended and someone who knows about Larrabee will understand this 90% of linear stuff. I just surfed here looking for some info on Larrabee.

Also, "scaling is linear with number of cores" doesn't mean very much. If each increase in the number of cores by one gives a 10% increase in performance regardless of whether your increasing for one to two of 40 to 41, the relationship between number of cores and performance is linear. philosofool (talk) 01:46, 9 February 2009 (UTC)[reply]

I agree that the wording is awkward. To pedantically write out the entire statement, it would be something like "A Larrabee chip with 48 cores has performance that is 90% of the performance that would be predicted by linear extrapolation from the performance of Larrabee chips with 32, 24, 16, and 8 cores." This kind of analysis makes sense and is useful to someone who is deciding how many cores to put on a Larrabee product. Even if the linear relationship has a low slope (which it doesn't in this case), it is still useful to know when the linear relationship ends.
In any case, please feel free to reword the entire section. All I care about is that the information in the graph is presented. It is a rather strange graph, but it represents the only quantitative Larrabee performance information yet released, so it is important to have it in the article. Modeless (talk) 05:51, 9 February 2009 (UTC)

I have altered that bit to read "At 48 cores the performance drops to 90% of what would be expected if the linear relationship continued." Not perfect, but I feel that it is a little more elegant than the original. The last bit that Philosofool talks about is incorrect though. The situation described (adding a core increases performance by 10%) would be a case of exponential scaling. Oh how I wish it were possible! Linear scaling will always produce a situation where doubling the number of cores doubles performance. Mostly scaling will be less than linear, with additional cores giving less and less benefit. 80.6.80.19 (talk) 23:31, 18 February 2009 (UTC)
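To spell out that reading as an equation (a sketch; $s$ denotes the per-core slope fitted to the lower core counts, an assumed normalization — only the 90% figure comes from the graph under discussion):

$$\mathrm{predicted}(48) = 48\,s, \qquad \mathrm{measured}(48) \approx 0.90 \times 48\,s = 43.2\,s.$$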

Cancellation

According to [6] and [7], Intel canceled the hardware version of Larrabee and the name would only exist as a software platform. —Preceding unsigned comment added by 174.112.102.18 (talk) 17:01, 5 December 2009 (UTC)

The term "software platform" differs from that of "software development platform". A software development platform is still a piece of hardware.192.55.54.36 (talk) 00:03, 24 December 2009 (UTC)[reply]

GPU

Should this really be called a GPU? 192.198.151.37 (talk) 09:38, 8 December 2009 (UTC)

Requested move

The following discussion is an archived discussion of a requested move. Please do not modify it. Subsequent comments should be made in a new section on the talk page. No further edits should be made to this section.

The result of the move request was: move to Larrabee (microarchitecture) Labattblueboy (talk) 13:45, 10 February 2010 (UTC)



Larrabee (GPU) → Larrabee (Computing Architecture) — Intel refers to Larrabee as an architecture, and has revealed that the first incarnation of Larrabee will not be a discrete GPU. —86.42.213.196 (talk) 18:49, 1 February 2010 (UTC)

Survey

Feel free to state your position on the renaming proposal by beginning a new line in this section with *'''Support''' or *'''Oppose''', then sign your comment with ~~~~. Since polling is not a substitute for discussion, please explain your reasons, taking into account Wikipedia's naming conventions.

Discussion

Any additional comments:
The above discussion is preserved as an archive of a requested move. Please do not modify it. Subsequent comments should be made in a new section on this talk page. No further edits should be made to this section.

Full of Intel hype

This article reminds me of all of that Merced hype and how it was supposed to cure cancer, end world hunger, and be so superior to every other CPU architecture that everybody would switch to it before it even came out. This article has pro-Intel marketing speak, like how using x86 cores for simple parallel arithmetic is superior to using dedicated GPU cores (only superior for vendor lock-in), and just like the Merced hype, parts are written as if Larrabee were already out instead of as speculations and possibilities.

Example: "This makes Larrabee more flexible than current GPUs, allowing more differentiation in appearance between games or other 3D applications."

So x86 architecture, cache coherency, and "very little specialized graphics hardware" are supposed to allow "more differentiation in appearance between games or other 3D applications"? That doesn't even make sense! It's like when Intel said that "the Internet was developed on x86," a massive lie! When the Internet was developed, x86 wasn't even around. It's like saying "the Interstate was developed for SMART cars!" —Preceding unsigned comment added by 69.54.60.34 (talk) 14:09, 31 March 2011 (UTC)

Actually it does make sense because those combined attributes enable the use of a software renderer, as opposed to relying on fixed function hardware. I think the problem here might be that your understanding of graphics is lacking, not the presence of any Intel "hype" or bias in the article. 83.226.206.82 (talk) 00:43, 18 May 2011 (UTC)

Cancellation of project

This is directed to FarbrorJoakim with regard to his recent edits to this article. Intel has Knights Ferry, an implementation of the Larrabee architecture, in the roadmaps and scheduled for release on 22nm this year or the next. There has been no announcement whatsoever that the Larrabee project has been cancelled. Please stop making large-scale edits that are based on assumptions and without supplying any references. 83.226.206.82 (talk) 01:32, 29 May 2011 (UTC)

This article is seriously outdated and presents information that was valid maybe at the time of Intel's initial announcements about it. Knight's Ferry is a different product and should live in its own article. Detailed specs, written in the present tense as if this product existed, are misleading. If nothing has happened within a year, i.e. Intel at least mentioning it, it is time to do some major cleanup. (And you cannot have references to something that didn't happen (Larrabee).) FarbrorJoakim (talk) 21:31, 29 May 2011 (UTC)
Knight's Ferry is a chip (at least partly) based on the Larrabee architecture and instruction set, which is what is outlined in this article. Semiconductor companies like Intel will have architectures and then products based on those architectures. From the Knight's Ferry press release: http://www.intel.com/pressroom/archive/releases/2010/20100531comp.htm
"The Intel® MIC architecture is derived from several Intel projects, including "Larrabee" and such Intel Labs research projects as the Single-chip Cloud Computer."
In addition, Intel's CEO Paul Otellini has repeatedly and specifically stated that the Larrabee program is still active. http://www.thinq.co.uk/2010/5/12/intel-larrabee-is-still-alive-and-kicking/
83.226.206.82 (talk) 11:06, 30 May 2011 (UTC)[reply]