GPU by alex3run on Wednesday, February 20, 2013
And where is the official information about the GPU and its power consumption?
alex3run
RE: programmable GPGPU?? by toyotabedzrock on Wednesday, February 20, 2013
That does not sound like an SGX GPU.

I think they should find a GPU that supports OpenGL ES 3.
toyotabedzrock
RE: programmable GPGPU?? by rd_nest on Wednesday, February 20, 2013
72 GFLOPS... most probably a T604. It was the same for the N10.
rd_nest
Unanswered question at ISSCC... by banvetor on Wednesday, February 20, 2013
One of the unanswered questions at ISSCC was what the delay penalty is for switching between the A7 and A15 cores... I don't see all that bright a future for this baby.
banvetor
RE: Unanswered question at ISSCC... by StormyParis on Wednesday, February 20, 2013
Why? Because of a switching delay you don't know, for a switch you don't know the frequency of?
StormyParis
RE: Unanswered question at ISSCC... by jeffkibuule on Wednesday, February 20, 2013
I don't think the switching delay matters that much when the goal of these chips is reasonable performance with good battery life, not maximum performance; otherwise you might as well just chuck the A7 cores and run the A15s at full blast.
jeffkibuule
RE: Unanswered question at ISSCC... by twotwotwo on Wednesday, February 20, 2013
Yeah, the compromise-y nature of it is important for the whole thing to make sense. In theory, 6W's a lot. In real use, you rarely hit that--you usually just blast 1-2 of the A15s for a few seconds while you load a webpage or app or do some other big chunk of CPU-bound work.

If I'm going to second-guess and play armchair engineer (as DigitalFreak aptly put it), maybe you can imagine other uses for all that die area than going 4+4-core when many workloads still aren't heavily threaded--more cache w/the A15s, more GPU (I bet games on 1080p phone screens can use a lot), something. Apple was OK with dual-core, at least as of the A6(X). On the other hand, I haven't the first clue how other designs perform, etc., and Samsung does, so I should close my mouth. :)
twotwotwo
RE: Unanswered question at ISSCC... by wsw1982 on Wednesday, February 20, 2013
I don't see any mention of an L3 cache, and the L2 caches of the A7 and A15 are not shared. So it's quite possible the switch goes through main memory, which may add milliseconds of delay (dumping and reloading cache data to and from low-power DDR, powering down and warming up the cores). How much work would the A15/A7 have to do just to even out the performance and energy penalty of switching?
wsw1982
RE: Unanswered question at ISSCC... by Wilco1 on Wednesday, February 20, 2013
The L2 caches have a special port to allow cachelines to be swapped directly. When both caches are powered up, coherency is maintained between them.
Wilco1
RE: Unanswered question at ISSCC... by DigitalFreak on Wednesday, February 20, 2013
We have an armchair engineer in 'da house!
DigitalFreak
RE: Unanswered question at ISSCC... by xaml on Saturday, February 23, 2013
An ARM-chair engineer... ;)
xaml
RE: 20 us by Wilco1 on Wednesday, February 20, 2013
Wilco1
RE: Unanswered question at ISSCC... by tuxRoller on Wednesday, February 20, 2013
It depends on what is handling the switching.
These initial implementations use a cpufreq driver, with in-kernel switching (moved from the hypervisor into the kernel) to switch between pairs (as illustrated above; the heterogeneous mode will come after a good solution is found for the scheduler). The switching times aren't bad because you have cache coherency (not shown above), so you only need to transfer the active register state.

http://lwn.net/Articles/481055/
http://lwn.net/Articles/501501/#Add%20Minimal%20Su...
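
Roughly the mechanism, as a minimal sketch in C (not the actual Linux bL-switcher code; the frequency threshold, cluster handling, and helper functions are all invented for illustration): a cpufreq-style request above the A7's ceiling hands execution over to the A15, and because the caches stay coherent, only the active register state has to move.

    #include <stdbool.h>
    #include <stdio.h>

    enum cluster { CLUSTER_A7, CLUSTER_A15 };

    #define A7_MAX_KHZ 1200000          /* assumed ceiling for the LITTLE cores */

    static enum cluster active = CLUSTER_A7;

    /* Stand-ins for the real firmware/kernel work. */
    static void save_active_registers(void)    { /* dump register state */ }
    static void restore_active_registers(void) { /* reload it on the inbound core */ }
    static void power_core(enum cluster c, bool on) { (void)c; (void)on; }

    /* cpufreq-style target: pick whichever core of the pair fits the request. */
    static void set_target_khz(unsigned int khz)
    {
        enum cluster want = (khz > A7_MAX_KHZ) ? CLUSTER_A15 : CLUSTER_A7;

        if (want == active)
            return;                     /* plain DVFS change, no switch needed */

        save_active_registers();        /* caches are coherent, so this is all */
        power_core(want, true);
        restore_active_registers();
        power_core(active, false);
        active = want;
    }

    int main(void)
    {
        set_target_khz(800000);         /* stays on the A7 */
        set_target_khz(1600000);        /* hands over to the A15 */
        printf("now running on the %s\n", active == CLUSTER_A15 ? "A15" : "A7");
        return 0;
    }

The real switcher does this per CPU pair from inside the kernel, but the shape of the decision is the same.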
tuxRoller
Can't Wait by amdwilliam1985 on Wednesday, February 20, 2013
I can't wait to see benchmarks on these.

"While it's possible for you to use both in parallel, initial software implementations will likely just allow you to run on the A7 or A15 clusters and switch based on performance requirements."
-Imagine future projects such as OUYA based on this baby with all cores enabled :)
This will be a perfect HTPC.

Intel had better be prepared; the clock is ticking. It seems like every generation, ARM CPUs take a big jump in performance.
amdwilliam1985
RE: Can't Wait by Jinxed_07 on Wednesday, February 20, 2013
There's a difference between a more powerful CPU and one that simply has more cores slapped on. If ARM really had a more powerful CPU, this architecture would only need one small CPU that could run everything while consuming less energy, rather than needing two in order to save energy.
Furthermore, if Intel should be afraid of ARM, then they should be afraid of AMD for making an 8-core processor that outperforms a 4-core processor by a bit.
Jinxed_07
architecture by flyingpants on Wednesday, February 20, 2013
Hello. In the first chart, it says both quad core CPUs are ARM 7. No mention of ARM 15. Is this correct?
flyingpants
RE: architecture by Cow86 on Wednesday, February 20, 2013
I'm afraid you are confused in this case...that is the architecture of the cores, being ARM v7...all Cortex cores use this architecture, the A5, A7, A8, A9 and A15...so it is correct :) The left column in that table is the A15, the right is the A7.
Cow86
RE: architecture by pyaganti on Thursday, February 21, 2013
Any idea why Samsung is using the A7 for LITTLE instead of the A5? If the A7 and A5 are both ARM v7 architecture, it would make more sense to use the A5 instead of the A7, because the A5 is a lower-power core than the A7, and that's the main concept of the LITTLE core, right?
pyaganti
RE: architecture by Cow86 on Thursday, February 21, 2013
Because the A7 is specifically designed to be a LITTLE core to the A15; the A5 is not. Furthermore, the A7 has better performance per watt than the A5, and a very similar die size.
Cow86
RE: architecture by saurabhr8here on Wednesday, February 20, 2013
Both say ARM v7a, which is the instruction set architecture. Both A7 and A15 processors use the same instruction set, hence they are able to implement the big.LITTLE architecture in the first place.
saurabhr8here
RE: architecture by twotwotwo on Wednesday, February 20, 2013
Nah, it's talking about the instruction set. v7a is the common instruction set for the A7 and A15 microarchitectures.
twotwotwo
RE: architecture by MrSpadge on Wednesday, February 20, 2013
The architecture is ARM v7a; the actual chip designs are called A7 and A15. This means both designs understand the same instructions and can thus run the same software (which is needed for quick, transparent switches). ARM is not very good at the naming game yet.
MrSpadge
RE: architecture by SetiroN on Wednesday, February 20, 2013
It can be misleading if you don't pay attention: ARM v7a is the architecture revision (the ISA), while Cortex A15 and A7 are the cores' names, which aren't mentioned on that chart.
SetiroN
Is the a7 under powered by toyotabedzrock on Wednesday, February 20, 2013
I have to wonder if they tested the A7 to ensure it has the power to run the UI smoothly.
toyotabedzrock
RE: Is the a7 under powered by UpSpin on Wednesday, February 20, 2013
No, they never test it. They haven't even tested if the SoC works at all. And the performance numbers, they are just random numbers. /s

The A7 is only slightly slower than an A9. Android JB runs smoothly on dual-core A9 SoCs because it makes heavy use of the GPU for rendering, so for a lag-free UI the GPU will be more important.
A quad-core A7 will be faster than a dual-core A9! It will handle the usual tasks without any issues at all.
UpSpin
A7~A8 by tuxRoller on Thursday, February 21, 2013
http://www.arm.com/products/processors/cortex-a/co...
They're claiming around a 20% improvement over the A8, but I'm guessing that's at the high end; other numbers I've seen put it a bit below an A8.
However, it uses MUCH less power than those other chips (ARM claims it's similar to an A5 in terms of power draw).
That seems plausible, considering the biggest change (versus the A8) looks to be in the branch prediction, and it should clock higher:
http://www.arm.com/products/processors/cortex-a/co...
tuxRoller
RE: Is the a7 under powered by phoenix_rizzen on Wednesday, February 20, 2013
The A7 is supposed to have just slightly less performance than an A9, but with much reduced power requirements. It will most likely take over from the A8 and low-end A9 SoCs for low-to-middle-range phones. Look for dual-core A7s to hit "feature" phones this year.
phoenix_rizzen
shared cache by MrSpadge on Wednesday, February 20, 2013
Looks like sharing the L2 between both CPU clusters might be a good idea. Done in some clever way, it could even speed up the switching.
MrSpadge
No Smartphone SoC by UpSpin on Wednesday, February 20, 2013
According to the chart, the quad-core A15 part consumes about 5 W, probably for the CPU alone. If this SoC were put inside a smartphone, the battery would be dead in less than an hour once you also consider the power consumption of the display and the PowerVR GPU.
This SoC is for tablets, with large enough batteries and a large enough surface to passively cool the 5 W (plus GPU) of waste power. Maybe it will find a use in the next Galaxy Note 10 or in a boosted Nexus 10, but never in a smartphone.
UpSpin
RE: No Smartphone SoC by tempestglen on Wednesday, February 20, 2013
4x A15 = 4.5 W
4x A7 = 0.75 W

80% of a phone's running time is low-performance work, so the battery life of the Exynos Octa will be good.
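
A back-of-envelope check of that claim, as a quick sketch (the 0.75 W / 4.5 W figures and the 80/20 split come from the comment; the ~6 Wh battery capacity is an assumption, not a figure from the article):

    #include <stdio.h>

    int main(void)
    {
        const double p_a7       = 0.75; /* W, 4x A7 (from the comment) */
        const double p_a15      = 4.5;  /* W, 4x A15 (from the comment) */
        const double duty_low   = 0.80; /* share of time spent on the A7s */
        const double battery_wh = 6.0;  /* assumed phone battery capacity */

        double avg_w = duty_low * p_a7 + (1.0 - duty_low) * p_a15;
        printf("average CPU power: %.2f W\n", avg_w);              /* 1.50 W */
        printf("CPU-only runtime:  %.1f h\n", battery_wh / avg_w); /* ~4 h */
        return 0;
    }

So even with the A15s lit up a fifth of the time, the CPU alone wouldn't drain such a battery in an hour.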
tempestglen
RE: No Smartphone SoC by Aenean144 on Thursday, February 21, 2013
It really depends on the 80%.

The A7 turns the clock back about 3 years, to the Cortex-A8 days, in terms of DMIPS/MHz. I can easily see many an app, process, or thread wanting more. It will be interesting to see where running a web browser lands. It's not going to be pretty if it stays on the A7.
Aenean144
RE: No Smartphone SoC by UpSpin on Thursday, February 21, 2013
The A7 has 1.9 DMIPS/MHz, the A9 has 2.5 DMIPS/MHz.
The Galaxy Nexus has a 1.2 GHz dual-core A9 --> 6000 DMIPS
This SoC has a 1.2 GHz quad-core A7 --> 9120 DMIPS
Really, the LITTLE part should handle any normal task easily. Video playback gets done by hardware decoders, GUI rendering gets done by the GPU, and website parsing and other processing gets done by the CPU.

The Galaxy Nexus runs fluidly. This quad-core A7 is at least 30% faster; a smartphone could easily live without the A15.
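
For anyone who wants to verify the arithmetic, a trivial sketch (the DMIPS/MHz ratings are the ones quoted above, and ideal scaling across cores is assumed):

    #include <stdio.h>

    /* DMIPS = (DMIPS per MHz) * clock in MHz * core count,
       assuming ideal multi-core scaling. */
    static double dmips(double per_mhz, double mhz, int cores)
    {
        return per_mhz * mhz * cores;
    }

    int main(void)
    {
        double a9 = dmips(2.5, 1200.0, 2); /* Galaxy Nexus: dual A9 @ 1.2 GHz */
        double a7 = dmips(1.9, 1200.0, 4); /* Exynos LITTLE: quad A7 @ 1.2 GHz */
        printf("dual-core A9: %.0f DMIPS\n", a9);      /* 6000 */
        printf("quad-core A7: %.0f DMIPS (+%.0f%%)\n",
               a7, 100.0 * (a7 / a9 - 1.0));           /* 9120, ~+52% */
        return 0;
    }

By these numbers the quad A7 comes out about 52% ahead, so "at least 30% faster" is, if anything, conservative.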
UpSpin
RE: No Smartphone SoC by Death666Angel on Thursday, February 21, 2013
All very true. :D
My Galaxy Nexus has some "think pauses" (I'm running a custom everything, so I'm not sure if that happens on plain Android). But when that happens I often wonder whether it's a CPU issue or a memory one. It mostly happens when starting/switching between memory-intensive apps (big emails, video, browser). Would the noticeable performance increase be bigger from an A15 upgrade, or from getting a halfway decent SSD with >200MB/s seq r/w and >30MB/s rnd r/w? :)
Death666Angel
RE: No Smartphone SoC by UpSpin on Thursday, February 21, 2013
That's the idea behind big.LITTLE: in low-demand tasks use the A7, in high-performance tasks use the A15.
But the device must be able to handle the A15 power consumption.
If the A15 cores consume 5 W and you start a game, which will most probably make use of the A15 power, your smartphone battery will be dead in an hour just because of the CPUs.
Yes, in standby the battery life will be good; I never denied that, and it's what big.LITTLE is made for.
In heavy use, however, with CPU and GPU at full power, this SoC will most probably consume 10 W. A smartphone battery has <10 Wh, and a smartphone's surface is too small to dissipate 10 W.
Conclusion:
This SoC won't find a use in a smartphone. It's physically impossible, unless you never make use of the A15 cores, which defeats the purpose of this SoC!
UpSpin
RE: No Smartphone SoC by Aenean144 on Wednesday, February 20, 2013
I think that's just CPU.

If you assume 3.5 DMIPS/MHz for the Cortex-A15, a quad-core A15 running at 2 GHz is 3500*2*4 = 28000 DMIPS. That's quite close to the point in the upper right of the plot, which is actually a little over 5 watts. Maybe 5.2 W.

Even in a tablet, the SoC may be prevented from maxing out the CPU and the GPU at the same time. This could be an 8 to 10 W SoC with both the GPU and CPU maxed out.
Aenean144
RE: No Smartphone SoC by xaml on Saturday, February 23, 2013
It is prominently claimed so for this very reason, here:
http://www.sammobile.com/2013/02/23/samsung-ditche...
xaml
RE: No Smartphone SoC by xaml on Saturday, February 23, 2013
That was @UpSpin, for the prehistoric lack of editing.
xaml
Layout by alexvoda on Wednesday, February 20, 2013
Is it me, or is the layout of that chip really inefficient?
I'm not really knowledgeable in chip design, but I think the orange-brown area around the CPU may be wasted surface.
At least compared to these:
http://www.anandtech.com/show/6323/apple-a6-die-re...
http://www.anandtech.com/show/6472/ipad-4-late-201...
http://www.anandtech.com/show/5831/amd-trinity-rev...

There doesn't seem to be a lot of space left for the GPU.
And since this will probably end up in 1080p-or-higher devices, the GPU matters.
alexvoda
RE: Layout by Death666Angel on Wednesday, February 20, 2013
It's just you. :P
The stuff you linked to has the exact same "orange brown area". And that probably isn't wasted surface.
Death666Angel
RE: Layout by UpSpin on Wednesday, February 20, 2013
I don't think the die photo shows the whole die; maybe only the upper half.
I'm also no chip designer, but maybe the orange-brown area is used for the wiring that connects the different parts.
The top-left part looks odd: not orange-brown, not structured, but near the RAM interfaces. Maybe they blurred that part because it contains their 'secret CPU switching' logic?
UpSpin
RE: Layout by alexvoda on Thursday, February 21, 2013
I see.
Yup, the photo only shows half of the die, which really made it more confusing.
I didn't mean wasted as in empty and doing nothing, but more as in no active components. If that had been the entire die, the percentage occupied by the orange-brown space would have been huge; since it's probably just half of the die, it matters a lot less.
alexvoda
RE: Layout by alex3run on Thursday, February 21, 2013
I think Samsung will unveil more details about this SoC later.
alex3run
It's nice to know someone has a brain by Shadowmaster625 on Wednesday, February 20, 2013
I was saying years ago that Intel needed to take an Atom and stick it on the same die as an i-series chip, and do with them essentially what is described in this article. But of course they didn't do it, and as a result they lost billions in potential mobile chip sales to companies like Apple. Haswell looks like something of an improvement, but you can tell they're still not doing what needs to be done.

90% of the time, a tablet/ultrabook only needs the CPU power of a single Atom core. This is the basic fact that has been ignored by Intel (and AMD) for more than a decade now. But Samsung understands it.
Shadowmaster625
RE: It's nice to know someone has a brain by djgandy on Thursday, February 21, 2013
The problem with laptops was not the CPU; it was all the other cheap components that sucked power. Laptop screens are terrible, and good-value machines still use mechanical hard drives!
djgandy
RE: It's nice to know someone has a brain by UpSpin on Thursday, February 21, 2013
No!
Extra devices get shut down when not in use.
Laptop screens are/were terrible, but that means low-resolution TN panels. TN panels have better transmittance than IPS, and low-resolution displays have better transmittance than high-resolution ones. I hope you're able to follow: this means, yes, they are more efficient and don't require such a bright backlight.
An HDD's idle power consumption is higher than an SSD's, but in return you get more space. And in general, the impact is small compared to the CPU and GPU power consumption.
I'm sorry, your logic is flawed.

Btw: this is an article about the power consumption of a SoC, and only the SoC! Why compare it with the power consumption of a whole system? This SoC consumes less power than an Intel CPU/GPU/chipset combo. That's what matters, nothing else. So don't compare apples with oranges.

Shadowmaster is right; your post is nonsense.
UpSpin
Good to see competitive PowerVR on Android by mayankleoboy1 on Thursday, February 21, 2013
Smart move to abandon the Mali architecture; it was slow compared to the PowerVR GPUs.
But is it enough to win benchmarks against the iPhone 6 and iPad 5?

And the Adreno 320 is a wimp by comparison.
mayankleoboy1
RE: Good to see competitive PowerVR on Android by alex3run on Thursday, February 21, 2013
The Mali T604 is on the same level as the PowerVR 554 @ 280 MHz, and the PowerVR 544MP3 is slower than both of those GPUs. So it would be very strange to see it inside the Exynos Octa. More likely there will be a Mali T658/T678MP4, which would be twice as fast as anything on the market today.
alex3run