Albert Penello, director of planning at Xbox, has finally spoken out about the famous rumor that has been circulating around the Internet these past few days, which referred to a supposed extra dGPU inside Xbox One. And he was categorical about it: there is no additional dGPU. Here is the tweet through which we picked up the news early this morning.
No add’l dGPU. Working on more tech deep-dives. XBO is plenty capable now and in the future. Perf. differences are greatly overstated.
— Albert Penello (@albertpenello) September 9, 2013
That clears things up. There is no dGPU. One rumor down, but… what does he mean when he says "Working on more tech deep-dives"?
It's clear that social buzz does its job these days where rumors are concerned, and as we have seen, a clarification was needed regarding everything that has been written across gaming websites and media, where reports claimed there "could" be a dGPU inside Xbox One.
By now many already know that Microsoft's silence about certain parts of its hardware is due to a confidentiality agreement with AMD that expires at the end of this month, which is why the rumors were gradually starting to make "sense". But now we know that, at the very least, it is not a dGPU.
Albert has also taken the trouble to clarify the matter on NeoGAF:
The strange thing is that no Microsoft executive spoke up to confirm or deny this information in the days prior. Obviously, if the rumor were true, they would have said nothing because of the confidentiality agreement; but if it wasn't true, why stay silent? It seems Mr. Penello wanted to clear up our doubts with a tweet denying the existence of the famous dGPU, while leaving us intrigued by mentioning that they are "working on more tech deep-dives". What does he mean?
A fellow user on Twitter pointed us to the context of these statements via a conversation on Reddit:
Everything therefore suggests that Microsoft is betting on optimized hardware. Can Xbox One compete with PlayStation 4 in hardware terms?
Either way, we won't really know whether Xbox One houses anything more in its architecture than what we already know until the NDA signed with AMD expires.
But if you feel like continuing with this string of rumors and assorted speculation, some sources report that Microsoft was never hiding anything about a dGPU, but rather about a new AMD technology called "Volcanic Islands". As you can see, when it comes to rumors, all of this could inspire a good TV series. (And the famous Misterxmedia is up to his old tricks again.)
And you, do you think Microsoft is hiding something more, or is it all just a series of poorly founded rumors?
We leave you with a clarifying note from Albert on NeoGAF, where he gives more technical details about the One's hardware:
I see my statements the other day caused more of a stir than I had intended. I saw threads locking down as fast as they pop up, so I apologize for the delayed response.
I was hoping my comments would lead the discussion to be more about the games (and the fact that games on both systems look great) as a sign of my point about performance, but unfortunately I saw more discussion of my credibility.
So I thought I would add more detail to what I said the other day, that perhaps people can debate those individual merits instead of making personal attacks. This should hopefully dismiss the notion I’m simply creating FUD or spin.
I do want to be super clear: I’m not disparaging Sony. I’m not trying to diminish them, or their launch or what they have said. But I do need to draw comparisons since I am trying to explain that the way people are calculating the differences between the two machines isn’t completely accurate. I think I’ve been upfront I have nothing but respect for those guys, but I’m not a fan of the mis-information about our performance.
So, here are a couple of points about some of the individual parts for people to consider:
• 18 CU’s vs. 12 CU’s =/= 50% more performance. Multi-core processors have inherent inefficiency with more CU’s, so it’s simply incorrect to say 50% more GPU.
• Adding to that, each of our CU’s is running 6% faster. It’s not simply a 6% clock speed increase overall.
• We have more memory bandwidth. 176gb/sec is peak on paper for GDDR5. Our peak on paper is 272gb/sec. (68gb/sec DDR3 + 204gb/sec on ESRAM). ESRAM can do read/write cycles simultaneously so I see this number mis-quoted.
• We have at least 10% more CPU. Not only a faster processor, but a better audio chip also offloading CPU cycles.
• We understand GPGPU and its importance very well. Microsoft invented Direct Compute, and have been using GPGPU in a shipping product since 2010 – it’s called Kinect.
• Speaking of GPGPU – we have 3X the coherent bandwidth for GPGPU at 30gb/sec which significantly improves our ability for the CPU to efficiently read data generated by the GPU.
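The arithmetic behind these bullet points can be sanity-checked with a short sketch. The clock figures below (853 MHz for Xbox One's GPU after its upclock, 800 MHz for PlayStation 4's) are the publicly reported values at the time and are assumptions here, not numbers from Penello's post:

```python
# Sanity-check the arithmetic in Penello's bullet points.

# GPU compute units and clocks (publicly reported figures, assumed here).
XBO_CUS, PS4_CUS = 12, 18
XBO_CLOCK_MHZ, PS4_CLOCK_MHZ = 853, 800  # Xbox One GPU was upclocked from 800 MHz

# Raw CU-count ratio: 18 vs. 12 is a 50% difference on paper...
cu_ratio = PS4_CUS / XBO_CUS
print(f"CU count ratio: {cu_ratio:.2f}x")  # 1.50x

# ...but each Xbox One CU runs ~6% faster, which narrows the gap
# in raw shader throughput (CUs * clock), though it does not close it.
clock_ratio = XBO_CLOCK_MHZ / PS4_CLOCK_MHZ
print(f"Per-CU clock advantage: {clock_ratio:.3f}x")  # ~1.066x
throughput_ratio = (PS4_CUS * PS4_CLOCK_MHZ) / (XBO_CUS * XBO_CLOCK_MHZ)
print(f"Raw throughput ratio: {throughput_ratio:.2f}x")  # ~1.41x, not 1.50x

# Bandwidth: the 272 GB/s figure is the sum of the two memory pools,
# which only applies when DDR3 and ESRAM traffic genuinely overlap.
ddr3, esram = 68, 204  # GB/s, peak on paper
print(f"Combined peak bandwidth: {ddr3 + esram} GB/s")  # 272 GB/s
```

Note that these are paper peaks: whether the combined bandwidth or the clock advantage translates into real-world performance is exactly the point being debated.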
Hopefully with some of those more specific points people will understand where we have reduced bottlenecks in the system. I’m sure this will get debated endlessly but at least you can see I’m backing up my points.
I still believe that we get little credit for the fact that, as a SW company, the people designing our system are some of the smartest graphics engineers around – they understand how to architect and balance a system for graphics performance. Each company has their strengths, and I feel that our strength is overlooked when evaluating both boxes.
Given this continued belief of a significant gap, we’re working with our most senior graphics and silicon engineers to get into more depth on this topic. They will be more credible than I am, and can talk in detail about some of the benchmarking we’ve done and how we balanced our system.
Thanks again for letting me participate. Hope this gives people more background on my claims.
At Microsoft, we have a position called a “Technical Fellow”. These are engineers across disciplines at Microsoft that are basically at the highest stage of technical knowledge. There are very few across the company, so it’s a rare and respected position.
We are lucky to have a small handful working on Xbox.
I’ve spent several hours over the last few weeks with the Technical Fellow working on our graphics engines. He was also one of the guys that worked most closely with the silicon team developing the actual architecture of our machine, and knows how and why it works better than anyone.
So while I appreciate the technical acumen of folks on this board – you should know that every single thing I posted, I reviewed with him for accuracy. I wanted to make sure I was stating things factually, and accurately.
So if you’re saying you can’t add bandwidth – you can. If you want to dispute that ESRAM has simultaneous read/write cycles – it does.
I know this forum demands accuracy, which is why I fact checked my points with a guy who helped design the machine.
This is the same guy, by the way, that jumps on a plane when developers want more detail and hands-on review of code and how to extract the maximum performance from our box. He has heard first-hand from developers exactly how our boxes compare, which has only proven our belief that they are nearly the same in real-world situations. If he wasn’t coming back smiling, I certainly wouldn’t be so bullish dismissing these claims.
I’m going to take his word (we just spoke this AM, so his data is about as fresh as possible) versus statements by developers speaking anonymously, and also potentially from several months ago before we had stable drivers and development environments.
[Source: Albert Penello’s Twitter / NeoGAF]