Latest Pipeline Posts
Samsung's Galaxy Note 8.0: Introduction & Hands On
by Anand Lal Shimpi 9 hours ago

Samsung's goal for 2013 is to see the same success in tablets as it did in smartphones in 2012. It's a lofty goal, but one that is entirely feasible if the company brings more Nexus 10-class devices to market this year.

Today Samsung is officially introducing its first tablet launch of the new year: the Galaxy Note 8.0. To understand the Galaxy Tab/Note divide, you simply have to look at the Galaxy Tabs as content consumption focused devices while the Galaxy Note offerings are geared more towards productivity.
 
 
A big part of the productivity story is the integrated S Pen, which is present on all Galaxy Note devices including the new 8.0. The S Pen is a battery-less stylus that is driven by a Wacom digitizer layer in the Galaxy Note display stack. Samsung offers a combination of its own apps as well as customized third party apps to take advantage of the S Pen.
 
Where the Galaxy Note 8.0 breaks new ground is that it is the first device to ship with Awesome Note for Android. Samsung claims to have at least a 1 year exclusive for the pre-loaded Android version of the popular iOS application. 
 
 
The Galaxy Note 8.0 integrates Samsung's Exynos 4 Quad (4412) SoC, which features four ARM Cortex A9 cores running at up to 1.6GHz alongside ARM's Mali 400MP4 GPU; this is the same SoC used in the Galaxy Note 10.1. The SoC is paired with 2GB of RAM. 

As its name implies, the Galaxy Note 8.0 features an 8-inch 1280 x 800 Samsung PLS display. New for the Note 8.0 is a special reading mode that appears to play with white balance/color calibration in order to reduce eyestrain:
 
Reading Mode: Disabled

Reading Mode: Enabled
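To make the idea concrete, here is a minimal sketch (in Python/numpy) of the kind of white balance shift a reading mode like this could apply: attenuate the blue channel (and the green slightly) to warm up the white point. The channel gains are invented for illustration and are not Samsung's actual calibration.

  import numpy as np

  def apply_reading_mode(rgb_image, blue_scale=0.85, green_scale=0.95):
      # Warm the white point by attenuating blue (and slightly green).
      # Gains are illustrative only, not Samsung's calibration.
      img = rgb_image.astype(np.float32)
      img[..., 1] *= green_scale  # green channel
      img[..., 2] *= blue_scale   # blue channel
      return np.clip(img, 0, 255).astype(np.uint8)

  # A pure white frame comes out slightly warm/yellowish
  white = np.full((4, 4, 3), 255, dtype=np.uint8)
  print(apply_reading_mode(white)[0, 0])  # -> [255 242 216]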

The usual features like Smart Stay (using the front facing camera to detect when you're looking at the tablet and thus overriding display timeout settings) are also present.
 
Just like the rest of the Note lineup, Samsung includes an integrated IR blaster in the Galaxy Note 8.0.
 
Camera duties are handled by a 5MP rear facing module and a 1.3MP front facing camera.
 
The Note 8.0 will be available in 16GB and 32GB configurations, with a microSD card slot for expansion. There's a non-removable 4600 mAh battery inside the Galaxy Note 8.0 (should be around 17 Wh, so slightly bigger than what's in the iPad mini).
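As a quick back-of-the-envelope check of that ~17 Wh figure: watt-hours are simply amp-hours times nominal cell voltage. The 3.7V nominal voltage below is a typical Li-ion assumption, not a Samsung spec.

  # Energy (Wh) = capacity (Ah) x nominal voltage; 3.7 V is an assumed Li-ion nominal voltage.
  capacity_mah = 4600
  nominal_voltage = 3.7

  energy_wh = capacity_mah / 1000 * nominal_voltage
  print(f"{energy_wh:.1f} Wh")  # ~17.0 Wh, vs. 16.3 Wh for the iPad mini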

The 8.0 will ship with Android 4.1.2. All of the aforementioned specs apply to the international version of the Note 8.0, with US details forthcoming. Samsung will have WiFi, 3G and 4G LTE versions of the Galaxy Note 8.0 available starting sometime in Q2. The cellular versions retain full phone capabilities; in other words, you can hold a Galaxy Note 8.0 3G up to your head and make phone calls just like you would with a Galaxy S 3.
 
This year Samsung is trying to shorten the gap between the launch of a WiFi tablet and subsequent cellular enabled derivatives.
 
 
I had the opportunity to play with a pre-production version of the Galaxy Note 8.0 in Barcelona. The 8-inch form factor is honestly a good balance between portability and productivity. It's a bit bigger than the iPad mini, but at 338 grams it never feels heavy. The device itself looks very similar to other shipping Galaxy devices, with glossy plastic dominating the materials list. 
 
iPad mini vs Galaxy Note 8.0
  Apple iPad mini Samsung Galaxy Note 8.0
Dimensions 200 x 134.7 x 7.2mm 210.8 x 135.6 x 7.95mm
Display 7.85-inch 1024 x 768 IPS 8.0-inch 1280 x 800 PLS
Weight 308g (WiFi) 338g (WiFi)
Processor 1GHz Apple A5 (2 x Cortex A9, PowerVR SGX543MP2) Samsung Exynos 4412 (4 x Cortex A9, Mali 400MP4)
Connectivity WiFi, Optional 4G LTE WiFi, Optional 3G/4G LTE
Memory 512MB 2GB
Storage 16GB—64GB 16GB/32GB + microSD
Battery 16.3Wh ~17Wh
Starting Price $329 TBD
 
 
The S Pen functionality is obviously a big selling point of the Galaxy Note family, and it seems to work relatively well. I wouldn't put the Galaxy Note lineup on the same level as Microsoft's Surface Pro, but it's also nowhere near as expensive. The device's performance is a lot better with the latest software builds on the 8.0 than it was on the Galaxy Note 10.1 back when I first used it. There's still perceivable lag when using the pen to draw/write, but the stylus is still usable. Samsung also enables the ability to display multiple applications on the screen at the same time, which is much better implemented than when the 10.1 first launched.
 
Overall, however, I'm not a fan of the Touchwiz UX customizations and I'd much rather see a lighter weight software layer from Samsung. Icons and text always seem a bit too large for my tastes, although I understand what Samsung is going for with the design. Despite those complaints, the user interface remained relatively quick and responsive in my brief hands on.
 
 
While I believe the Galaxy Note 8.0 will do a good job filling out the Note lineup, if Samsung really wants to end up at the top of the tablet market it needs to put a much more aggressive foot forward. I would like to see increased emphasis on higher quality materials, a more streamlined/lightweight Touchwiz stack and aggressive adoption of new features. Samsung did a great job with the Nexus 10; I'd love to see what it could do with a similar approach to tablets of all sizes. 

How the HTC One's Camera Bucks the Trend in Smartphone Imaging
by Brian Klug, yesterday

Now that we’ve seen the HTC One camera announcement, I think it’s worth going over why this is something very exciting from an imaging standpoint, and also a huge risk when it comes to messaging it properly to consumers.

With the One, HTC has chosen to go against the prevailing trend for this upcoming generation of devices by going to a 1/3.0" CMOS with 2.0 micron pixels, for a resulting 4 MP (2688 × 1520) 16:9 native image size. That’s right, the HTC One is 16:9 natively, not 4:3. In addition, the HTC One includes optical image stabilization on two axes, with +/- 1 degree of accommodation and a sampling/correction rate of 2 kHz on the onboard gyro. Just like the previous HTC cameras, the One has an impressively fast F/2.0 aperture and a 5P (5 plastic element) optical system. From what I can tell, this is roughly the same focal length at 3.82 mm (~28mm in 35mm effective), slightly different from the 3.63 mm of the previous One camera. HTC has also included a new generation ImageChip 2 ISP, though this is of course still used in conjunction with the ISP onboard the SoC; HTC claims it’s able to do full lens shading correction for vignetting and color, in addition to even better noise reduction and realtime HDR video. Autofocus takes around 200ms for a full scan; I was always impressed with the AF speed of the previous cameras, and this is even faster. When it comes to video, HTC apparently has taken some feedback to heart and finally maxed out the encoder capabilities of the APQ8064/8064Pro/8960 SoCs, which top out at 20 Mbps H.264 high profile.

HTC One Camera Specifications
Device HTC One
Sensor Size and Type 1/3" BSI CMOS
Resolution 4.0 MP 16:9 Aspect Ratio (2688 x 1520)
Focal Length 3.82mm
F/# F/2.0
Optical System 5P
OIS 2-axis +/- 1 degree, 2 kHz sampling
Max Capture Rate 8 FPS continual full res capture
Video Capture 1080p30, 720p60, 720p30, 1080p28 HDR, 768x432 96FPS; H.264 High Profile at 20 Mbps
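For those curious how those numbers hang together, here is a rough sanity check (a sketch, not HTC's math): the pixel pitch and resolution imply the sensor's active area, and from there the 3.82mm lens works out to roughly the ~28mm 35mm-equivalent figure quoted above. The 43.3mm full-frame diagonal is the standard reference value.

  import math

  # Active area implied by the 2.0 micron pitch and 2688 x 1520 resolution above
  pixel_pitch_um = 2.0
  h_px, v_px = 2688, 1520
  width_mm = h_px * pixel_pitch_um / 1000    # ~5.38 mm
  height_mm = v_px * pixel_pitch_um / 1000   # ~3.04 mm
  diag_mm = math.hypot(width_mm, height_mm)  # ~6.2 mm, consistent with a 1/3" type sensor

  # 35mm-equivalent focal length (full-frame diagonal = 43.3 mm)
  focal_mm = 3.82
  crop_factor = 43.3 / diag_mm
  print(f"sensor: {width_mm:.2f} x {height_mm:.2f} mm, diagonal {diag_mm:.2f} mm")
  print(f"35mm-equivalent focal length: ~{focal_mm * crop_factor:.0f} mm")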

The previous generation of high end smartphones shipped 1.4 micron pixels and a CMOS size of generally 1/3.2" for 8 MP effective resolution. This year it seems as though most OEMs will go to 1.1 micron pixels on the same 1/3.2" size CMOS and thus get 13 MP of resolution, or choose to stay at 8 MP and absorb the difference with a smaller 1/4" CMOS and thinner optical system. This would give HTC an even bigger difference (1.1 micron vs 2.0 micron) in pixel size and thus sensitivity. It remains to be seen whether other major OEMs will also include OIS or faster optical systems this generation; I suspect we’ll see faster (lower F/#) systems from Samsung this time, as some rumored images showed EXIF data of F/2.2 but nothing else insightful. Of course, Nokia is the other major OEM pushing camera quality, but even they haven’t quite gone backwards in pixel size yet, and they’ve effectively been in a different category for a while. We’ve already seen some handset makers go to binning (combining a 2x2 grid of pixels into one effective larger pixel), but this really only helps increase SNR and average out some noise rather than fundamentally increase sensitivity.
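To illustrate that last point about binning, here is a small numpy sketch with a deliberately simplified noise model (not real sensor data): summing a 2x2 block roughly doubles SNR because noise averages out, but each underlying pixel still collects the same amount of light, so per-pixel sensitivity has not changed.

  import numpy as np

  rng = np.random.default_rng(0)
  signal_per_pixel = 100.0   # arbitrary units of collected light
  noise_sigma = 10.0         # simplified per-pixel noise model (illustrative)

  pixels = signal_per_pixel + rng.normal(0, noise_sigma, size=(1000, 2, 2))

  single_snr = signal_per_pixel / noise_sigma      # ~10
  binned = pixels.sum(axis=(1, 2))                 # 2x2 bin: sum four pixels
  binned_snr = binned.mean() / binned.std()        # ~20, noise averages out

  print(f"single pixel SNR ~{single_snr:.0f}, 2x2 binned SNR ~{binned_snr:.0f}")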

The side by sides that I took with the HTC One alongside a One X have so far been impressive, even without final tuning for the HTC One. I don’t have any sample images I can share, but what I have seen has gotten me excited about the HTC One in a way that only a few other devices (PureView 808, N8, HTC One X) have so far. Both the preview and captured image were visibly brighter and had more dynamic range in the highlights and shadows. So far, adding HDR to smartphones has focused not so much on making images very HDR-ey, but rather on mitigation: recovering some dynamic range and making smartphone images look more like what you’d expect from a higher end camera. Moreover, not having to use the flash in low light situations is a real positive; an LED flash currently adds a false color cast to those shots.

Samsung's TV Discovery Service Enables TV to Smartphone/Tablet Streaming
by Anand Lal Shimpi yesterday

 

I'm not entirely sure I understand the point of MWC this year if everyone is going to pre-empt the show with announcements of their own (or, in the case of the Galaxy S 4, wait until after the show to announce). Samsung joins the list of companies making announcements ahead of the show with the disclosure of its TV Discovery service. 
 
Samsung is in a very unique position in that it is not only a SoC, NAND, display and DRAM maker, but also a significant player in the smartphone, tablet and TV space. If any company is well positioned to understand the needs of the market, it's Samsung.
 
With its fingers in many pots, Samsung quickly recognized a strong relationship between smartphone/tablet usage and simply watching TV. This led to Samsung outfitting many of its devices with an IR emitter, like the Galaxy Note 10.1. If your tablet is out while you're watching TV, you might as well use it to control your TV as well.
 
To increase available synergies between smartphone/tablet and TV, Samsung launched its TV Discovery service. Samsung's TV Discovery is a combination of software and hardware that simply lets you get a good feel for what content you have available to watch on TV as well as aggregated from online streaming sources such as Netflix and Blockbuster. Like many other attempts at TV/PC or TV/gadget convergence, TV Discovery attempts to solve the problem of having tons of content spread across many services by presenting it all in a single app on your smartphone or tablet. 
 
TV Discovery will have a personalization component as well, suggesting content for you to watch based on your preferences.
 
The software side isn't anything super unique, as we've seen many attempts at this before. I don't know that another software aggregation service is going to dramatically change anything, but before we reach perfection there are always many iterations that we have to live through.
 
Devices equipped with an IR emitter will be able to serve as universal remote controls, just as before. 
 
What is most unique about Samsung's TV Discovery service, however, is the integration with Samsung TVs. With all 2013 SMART TVs, Samsung is promising the ability to stream content from your TV Discovery enabled smartphone/tablet to your TV and vice versa. Getting content from your smartphone or tablet onto your TV is nothing new, but we haven't had a good way of getting TV content onto your mobile device. Obviously you'll need a Samsung TV for this to work, as well as a Samsung smartphone/tablet, but it's an intriguing way to leverage Samsung's broad device ecosystem.
 
You can expect to see the TV Discovery app ship on 2013 Samsung mobile devices as well as 2013 Samsung SMART TVs.

GeForce Titan Pre-Order Available
by Jarred Walton yesterday

This week saw the launch of NVIDIA's latest and greatest single GPU consumer graphics card, the GeForce Titan. Priced at a cool grand ($1000), the Titan isn't the sort of video card that every hobbyist and gamer can buy on a whim. Instead, NVIDIA is positioning it as an entry-level compute card (e.g. it's about one third the price of a Tesla K20), or an ultra-high-end gaming card for those who simply must have the best. We expect to see quite a few boutiques selling systems equipped with Titan, and indeed we've seen press releases from all the usual suspects.

This is as good a place as any to list those, so here's a short list, with estimated pricing based on a custom configured PC at each vendor. (I'm sure there are other vendors selling Titan as well; this is by no means intended to be a comprehensive list.)

  • AVADirect includes Titan as an option in many of their custom desktop systems, with a price of $1067 per GPU.
  • Falcon Northwest has Titan available in their SFF Tiki that Anand previewed this week, as well as their full desktop Talon and Mach V desktops. Titan adds $1095 per GPU to the cost of a FNW system.
  • iBUYPOWER currently has the Titan in their Revolt SFF, which Dustin recently reviewed. Pricing is $1111 per GPU, because that's such a cool number I guess. Titan is also available in their custom configured desktops.
  • Maingear has a variety of desktop systems now available for configuration with Titan, with the GPUs adding $1090 each to the cost of the system.
  • OriginPC has both Genesis and Chronos systems with Titan; Ryan previewed the Genesis earlier this week while the Chronos goes after the SFF market. They appear to be charging $1156 per Titan GPU, but they're also one of the first (if not the only) vendors with a liquid-cooled Titan available.

Obviously that's a higher cost per GPU at every one of the above vendors, and if you've already got a fast system you probably aren't looking to upgrade to a completely new PC. For those looking to buy a Titan GPU on its own, Newegg is now listing a pre-order of the ASUS Titan at the $999 MSRP. The current release date is listed as February 28, so next Thursday. We expect EVGA and some other GPU vendors to show up some time in the next week as well, and we'll update this list as appropriate.

The AnandTech Podcast: Episode 17
by Anand Lal Shimpi yesterday

We managed to get in one more Podcast before Brian and I leave for MWC 2013 today. With the number of major announcements that happened in the past week, we pretty  much had to find a way to make this happen. On the list for discussion today are the new HTC One, NVIDIA's GeForce GTX Titan, Tegra 4i and of course the Sony PlayStation 4. Enjoy!

The AnandTech Podcast - Episode 17
featuring Anand Shimpi, Brian Klug & Dr. Ian Cutress

iTunes
RSS - mp3 / m4a
Direct Links - mp3 / m4a

Total Time: 1 hour 9 minutes

Outline - hh:mm

HTC One - 00:00
NVIDIA's GeForce GTX Titan - 00:20
NVIDIA's Tegra 4i - 00:42
Sony's PlayStation 4 - 00:52

As always, comments are welcome and appreciated. 

ZTE to Build Tegra 4 Smartphone, Working on i500 Based Design As Well
by Anand Lal Shimpi 3 days ago

ZTE just announced that it would be building a Tegra 4 based smartphone for the China market in the first half of 2013. Given NVIDIA's recent statements about Tegra 4 shipping to customers in Q2, I would expect that this is going to be very close to the middle of the year. ZTE didn't release any specs other than to say that it's building a Tegra 4 phone. 

Separately, ZTE and NVIDIA are also working on another phone that uses NVIDIA's i500 LTE baseband.

Sony Announces PlayStation 4: PC Hardware Inside
by Anand Lal Shimpi 3 days ago

Sony just announced the PlayStation 4, along with some high level system specifications. The high level specs are what we've heard for quite some time:

  • 8-core x86-64 CPU using AMD Jaguar cores (built by AMD)
  • High-end PC GPU (also built by AMD), delivering 1.84TFLOPS of performance
  • Unified 8GB of GDDR5 memory for use by both the CPU and GPU with 176GB/s of memory bandwidth
  • Large local hard drive

Details of the CPU aren't known at this point (8 cores could imply a Piledriver derived architecture, or 8 smaller Jaguar cores—the latter being more likely), but either way this will be a big step forward over the PowerPC based general purpose cores on Cell from the previous generation. I wouldn't be too put off by the lack of Intel silicon here; it's still a lot faster than what we had before, and at this level price matters more than peak performance. The Intel performance advantage would have to be much larger to dramatically impact console performance. If we're talking about Jaguar cores, then there's a bigger concern long term from a single threaded performance standpoint.

Update: I've confirmed that there are 8 Jaguar based AMD CPU cores inside the PS4's APU. The CPU + GPU are on a single die. Jaguar will still likely have better performance than the PS3/Xbox 360's PowerPC cores, and it should be faster than anything ARM based out today, but there's not huge headroom going forward. While I'm happier with Sony's (and MS') CPU selection this time around, I always hoped someone would take CPU performance in a console a bit more seriously. Given the choice between spending transistors on the CPU vs. GPU, I understand that the GPU wins every time in a console—I'm just always an advocate for wanting more of both. I realize I never wrote up a piece on AMD's Jaguar architecture, so I'll likely be doing that in the not too distant future. 

The choice of 8 cores is somewhat unique. Jaguar's default compute unit is a quad-core module with a large shared L2 cache; it's likely that AMD placed two of these together for the PlayStation 4. The last generation of consoles saw a march towards heavily threaded machines, so it's no surprise that AMD/Sony want to continue the trend here. Clock speed is unknown, but Jaguar was good for a mild increase over its predecessor, Bobcat. Given the large monolithic die, AMD and Sony may not have wanted to push frequency as high as possible in order to keep yields up and power down. While I still expect CPU performance to move forward in this generation of consoles, I was reminded of the fact that the PowerPC cores in the previous generation ran at very high frequencies. The IPC gains afforded by Jaguar have to be significant in order to make up for what will likely be a lower clock speed.
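To put that IPC vs. clock trade-off in rough numbers: per-core throughput scales with IPC times frequency, so if Jaguar ends up around half the clock of the old 3.2GHz PowerPC cores, it needs roughly twice the IPC just to break even per core. The Jaguar clock below is a placeholder, not an announced figure.

  # Per-core throughput ~ IPC x clock. Jaguar's clock here is a placeholder.
  ppc_clock_ghz = 3.2       # previous-gen PowerPC cores (PS3/Xbox 360)
  jaguar_clock_ghz = 1.6    # hypothetical Jaguar clock for illustration

  break_even_ipc_gain = ppc_clock_ghz / jaguar_clock_ghz
  print(f"Jaguar needs ~{break_even_ipc_gain:.1f}x the IPC to match per-core throughput")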

We don't know specifics of the GPU, but with it approaching 2 TFLOPS we're looking at a level of performance somewhere between a Radeon HD 7850 and 7870. Update: Sony has confirmed the actual performance of the PlayStation 4's GPU as 1.84 TFLOPS. Sony claims the GPU features 18 compute units; if this is GCN based, we'd be looking at 1152 SPs and 72 texture units. It's unclear how custom the GPU is however, so we'll have to wait for additional information to really know for sure. The highest end PC GPUs are already faster than this, but the PS4's GPU is a lot faster than the PS3's RSX, which was derived from NVIDIA's G70 architecture (used in the GeForce 7800 GTX, for example). I'm quite pleased with the promised level of GPU performance with the PS4. There are obvious power and cost constraints that would keep AMD/Sony from going even higher here, but this should be a good leap forward from current gen consoles.
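As a sanity check on how 18 compute units line up with 1.84 TFLOPS: each GCN CU has 64 ALUs, and each ALU does one fused multiply-add (2 FLOPs) per clock. Working backwards from those standard GCN figures gives an implied clock speed; Sony hasn't confirmed the clock, so treat it as inferred rather than official.

  # GCN: 64 ALUs per CU, 2 FLOPs (one FMA) per ALU per clock.
  compute_units = 18
  alus_per_cu = 64
  flops_per_alu_per_clock = 2

  sps = compute_units * alus_per_cu                   # 1152 stream processors
  target_tflops = 1.84
  implied_clock_ghz = target_tflops * 1e12 / (sps * flops_per_alu_per_clock) / 1e9
  print(sps, f"~{implied_clock_ghz:.2f} GHz")         # 1152 SPs at roughly 0.8 GHz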

Outfitting the PS4 with 8GB of RAM will be great for developers, and using high-speed GDDR5 will help ensure the GPU isn't bandwidth starved. Sony promised around 176GB/s of memory bandwidth for the PS4. The lack of solid state storage isn't surprising. Hard drives still offer a dramatic advantage in cost per GB vs. an SSD. Now if it's user replaceable with an SSD that would be a nice compromise.
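One plausible way to arrive at that 176GB/s figure is a 256-bit GDDR5 interface running at a 5.5Gbps per-pin data rate; neither number has been confirmed by Sony, so the configuration below is simply an assumption that makes the arithmetic work.

  # Hypothetical memory configuration that yields 176 GB/s (not confirmed by Sony).
  bus_width_bits = 256
  data_rate_gbps = 5.5    # per-pin GDDR5 data rate

  bandwidth_gb_s = bus_width_bits / 8 * data_rate_gbps
  print(f"{bandwidth_gb_s:.0f} GB/s")  # 176 GB/s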

Leveraging Gaikai's cloud gaming technology, the PS4 will be able to act as a game server and stream the video output to a PS Vita, wirelessly. This sounds a lot like what NVIDIA is doing with Project Shield and your NVIDIA powered gaming PC. Sony referenced dedicated video encode/decode hardware that allows you to instantaneously record and share screenshots/video of gameplay. I suspect this same hardware is used in streaming your game to a PS Vita.

Backwards compatibility with PS3 games isn't guaranteed and instead will leverage cloud gaming to stream older content to the box. There's some sort of a dedicated background processor that handles uploads and downloads, and even handles updates in the background while the system is off. The PS4 also supports instant suspend/resume.

The new box heavily leverages PC hardware, which is something we're expecting from the next Xbox as well. It's interesting that this is effectively how Microsoft entered the console space back in 2001 with the original Xbox, and now both Sony and MS have returned to that philosophy with their next gen consoles in 2013. The PlayStation 4 will be available this holiday season.

I'm trying to get more details on the CPU and GPU architectures and will update as soon as I have more info.

An Update on Intel's SSD 525 Power Consumption
by Anand Lal Shimpi 3 days ago

Intel's SSD 525 is the mSATA version of last year's SF-2281 based Intel SSD 520. The drive isn't just physically smaller, but it also features a new version of the Intel/SandForce firmware with a bunch of bug fixes as well as some performance and power improvements. Among the improvements is a tangible reduction in idle power consumption. However in our testing we noticed higher power consumption than the 520 under load. Intel hadn't seen this internally, so we went to work investigating why there was a discrepancy.

The SATA power connector can supply power to a drive on a combination of one or more power rails: 3.3V, 5V or 12V. Almost all 2.5" desktop SSDs draw power on the 5V rail exclusively, so our power testing involves using a current meter inline with the 5V rail. The mSATA to SATA adapter we use converts 5V to 3.3V for use by the mSATA drive; however, some power is lost in the process. In order to truly characterize the 525's power we had to supply 3.3V directly to the drive and measure at our power source. The modified mSATA adapter above allowed us to do just that.
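A quick sketch of why the measurement point matters: power is just voltage times current, and measuring upstream of the adapter's 5V to 3.3V converter also captures the converter's losses. The current draw and regulator efficiency below are made-up illustrative values, not measurements.

  # P = V x I; measuring at the 5V input also includes the 5V -> 3.3V conversion loss.
  drive_current_a = 0.9          # hypothetical SSD current draw at 3.3 V
  regulator_efficiency = 0.88    # hypothetical adapter conversion efficiency

  drive_power_w = 3.3 * drive_current_a
  power_at_5v_rail_w = drive_power_w / regulator_efficiency
  overstatement = power_at_5v_rail_w / drive_power_w - 1
  print(f"{drive_power_w:.2f} W at the drive, {power_at_5v_rail_w:.2f} W at the 5V rail (+{overstatement:.0%})")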

Idle power consumption didn't change much:

Drive Power Consumption - Idle

Note that the 525 still holds a tremendous advantage over the 2.5" 520 in idle power consumption. Given the Ultrabook/SFF PC/NUC target for the 525, driving idle power even lower makes sense.

Under load there's a somewhat more appreciable difference in power when we measure directly off of a 3.3V supply to the 525:

Drive Power Consumption - Sequential Write

Our 520 still manages to be lower power than the 525; however, it's entirely possible that we simply had a better yielding NAND + controller combination back then. For the 240GB model, there's about a 10 - 15% reduction in power compared to measuring the 525 at the mSATA adapter's 5V rail.

Drive Power Consumption - Random Write

The story isn't any different in our random write test. Measuring power sent directly to the 525 narrows the gap between it and our old 520 sample. Our original 520 still seems to hold a small active power advantage over our 525 samples, but with only an early sample to compare to, it's impossible to say whether the same would be true for a newer/different drive. 

I've updated Bench to include the latest power results.

Samsung Details Exynos 5 Octa Architecture & Power at ISSCC '13
by Anand Lal Shimpi 3 days ago

At CES this year Samsung introduced the oddly named Exynos 5 Octa SoC, one of the first Cortex A15 SoCs to implement ARM's big.LITTLE architecture. Widely expected to be used in the upcoming Galaxy S 4, the Exynos 5 Octa integrates 4 ARM Cortex A7 cores and 4 ARM Cortex A15 cores on a single 28nm LP HK+MG die made at Samsung's own foundry. As we later discovered, the Exynos 5 Octa abandons ARM's Mali GPU for Imagination's PowerVR SGX 544MP3, which should give it GPU performance somewhere between an iPad 3 and iPad 4.

The quad-core A7 cluster can run at between 200MHz and 1.2GHz, while the quad-core A15 cluster can run at a range of 200MHz to 1.8GHz. Each core can be power gated independently. The idea is that most workloads will run on the quad-core A7, with your OS hot plugging additional cores as performance demands increase. After a certain point however, the platform will power down the A7s and start switching over to the A15s. Both CPU clusters implement the same revision of the ARM ISA, enabling seamless switching between them. While it's possible to use both in parallel, initial software implementations will likely just allow you to run on the A7 or A15 cluster and switch based on performance requirements.
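Here is a toy sketch of the kind of cluster-switching policy described above: stay on the A7s and hot plug cores as load rises, then hand the workload over to the A15 cluster past a threshold. The thresholds and structure are purely illustrative; Samsung's actual switching logic lives in the kernel/firmware and is certainly more sophisticated.

  # Toy big.LITTLE cluster-migration policy (illustrative thresholds only).
  A7_MAX_CORES = 4
  A15_MAX_CORES = 4

  def choose_cluster(load):
      # 'load' is a rough count of fully-busy cores' worth of work.
      cores_needed = max(1, round(load / 0.75))
      if load <= A7_MAX_CORES * 0.75:          # light/medium: stay on the A7 cluster
          return "Cortex-A7", min(A7_MAX_CORES, cores_needed)
      # heavy: power down the A7s and switch to the A15 cluster
      return "Cortex-A15", min(A15_MAX_CORES, cores_needed)

  for load in (0.3, 1.5, 3.5):
      print(load, choose_cluster(load))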

What's most interesting about Samsung's ISSCC presentation is that we finally have some hard power and area data comparing the Cortex A15 to the Cortex A7. The table above puts it into numbers. The quad-core A15 cluster occupies 5x the area of the quad-core A7 cluster, and consumes nearly 6x the power in the worst case scenario. The area difference is artificially inflated by the fact that the A15 cluster has an L2 cache that's 4x the size of the A7 cluster's, but looking at the die photo below you can get a good feel for just how much bigger the A15 cores themselves are:

In its ISSCC presentation, Samsung stressed the value of its custom libraries, timing tweaks and process technology selection in bringing the Exynos 5 Octa to market. Samsung is definitely marching towards being a real player in the SoC space and not just another ARM licensee.

The chart below is one of the most interesting; it shows the relationship between small integer code performance and power consumption on the Cortex A7 and A15 clusters. Before switching from the little CPU to the big one, power consumption is actually quite reasonable - south of 1W and what you'd expect for a smartphone or low power tablet SoC. At the lower end of the performance curve for the big CPU things aren't too bad either, but once you start ramping up clock speed and core count, power scales linearly. Based on this graph, it looks like it takes more than 3x the power to get 2x the performance of the A7 cluster using the Cortex A15s.

 

Intel Demos CloverTrail+ Based Lenovo IdeaPhone K900 Ahead of MWC
by Anand Lal Shimpi 3 days ago

Lenovo announced its ultra slim (6.9mm) 5.5" 1080p IdeaPhone K900 at CES earlier this year, based on Intel's presently unannounced CloverTrail+ SoC. While we're expecting to learn a lot more about CT+ next week at MWC, Intel did post a video showcasing the K900's performance. The video below includes footage of the K900 running Epic's Citadel for Android very smoothly at 1080p, as well as PR TextureMark.

PR TextureMark is a video decode/GPU texturing/memory bandwidth benchmark, although Intel was careful not to actually run the benchmark on the phone. Intel's Atom SoCs have always been very good on the memory interface side compared to the ARM competition, which makes PR TextureMark an obvious showcase for the platform.

Epic's Citadel runs well on the K900, but as we showed earlier, it runs well on just about every high-end Android smartphone at this point. It's clear that the CT+ based K900 however integrates a much better GPU than the PowerVR SGX 540 included in Medfield, as the latter wouldn't be able to run Citadel at 1080p this smoothly.

I should also point out that some have been incorrectly assuming that the K900 is based on Intel's forthcoming Merrifield silicon. Merrifield and the rest of Intel's 22nm SoC lineup isn't due to ship/sample until the latter part of this year. Lenovo's K900 will be available in China starting in April, and expanding to other territories afterwards.

OCZ Releases Vertex 3.20 with 20nm IMFT NAND
by Kristian Vättö 4 days ago

Yesterday OCZ introduced an updated version of their Vertex 3: the Vertex 3.20. The name derives from the fact that the new Vertex 3.20 uses 20nm IMFT MLC NAND, whereas the original Vertex 3 used 25nm IMFT NAND. OCZ did the same with the Vertex 2, and it's a common practice to move to smaller lithography NAND when it becomes cost-effective. At first the new lithography NAND may be more expensive and limited in availability, but once the process matures, prices start to fall and eventually undercut the old process node. Fortunately OCZ has learned from their mistakes and now the Vertex 3 with new NAND is easily distinguishable from the original Vertex 3, unlike with the Vertex 2, when OCZ silently switched to 25nm NAND.

                   Vertex 3.20             Vertex 3
Capacity           120GB      240GB        120GB      240GB
Controller         SandForce SF-2281       SandForce SF-2281
NAND               20nm IMFT MLC           25nm IMFT MLC
Sequential Read    550MB/s    550MB/s      550MB/s    550MB/s
Sequential Write   520MB/s    520MB/s      500MB/s    520MB/s
4KB Random Read    20K IOPS   35K IOPS     20K IOPS   40K IOPS
4KB Random Write   40K IOPS   65K IOPS     60K IOPS   60K IOPS

I asked OCZ why only Vertex 3 was updated with 20nm NAND and OCZ told me that the 20nm NAND is slower than 25nm. Intel initially told me that their 20nm NAND is as fast as their 25nm NAND (only erase time is slightly slower but that shouldn't impact end-user performance), though it should be kept in mind that OCZ uses NAND from Micron too and their binning process may be different from Intel's. Either way, it doesn't make sense (at least yet) for OCZ to update their high-end SSDs with the slower 20nm NAND, which is why Vertex 4 and Vector will stick with 25nm IMFT NAND. 

In other news, OCZ is also looking to phase out the Agility 3 and 4 models. If you've been reading about OCZ's new business strategy (in a nutshell, fewer products and more focus on the high-end market), this move makes a lot of sense because Agility has always been a compromised budget lineup. In the near future the Vertex 3.20 will be OCZ's mainstream model, which is why it was important for OCZ to cut costs by moving to smaller process node NAND. 

NVIDIA Announces Tegra 4i, Formerly Project Grey, With Integrated LTE and Phoenix Reference Design

It has been a while since we’ve heard anything about Project Grey, the first NVIDIA SoC with an integrated digital baseband, and the result of NVIDIA’s acquisition of soft-modem manufacturer Icera. Today, NVIDIA is ready to formalize Project Grey as Tegra 4i, and we have a bunch of information about this SoC and will obtain even more before MWC is upon us. NVIDIA’s roadmap from late 2011 put Grey in early 2013, and while other members of that roadmap haven’t necessarily stuck to the promised release schedule, Grey seems to be somewhere close to that schedule, at least as far as announcement and samples are concerned.

First, Tegra 4i includes the familiar 4+1 arrangement of cores we've seen since Tegra 3, but instead of Tegra 4's A15s, 4i includes ARM Cortex A9 CPUs running at a maximum single core clock of 2.3 GHz; we’re still waiting on a breakdown of the clock rates for the dual and quad core configurations, as well as the shadow core. NVIDIA has noted that it is using r4 of ARM’s Cortex A9, which delivers higher IPC thanks to the addition of a better data prefetching engine, dedicated hardware for cache preload instructions and some larger buffers. NVIDIA believes it is the first to implement the latest version of ARM's Cortex A9 core, however there's nothing stopping others from doing the same. 

NVIDIA likely chose to integrate ARM's Cortex A9 r4 instead of the Cortex A15 to reduce power consumption and die size. While Tegra 4 is expected to be around 80mm^2, Tegra 4i measures in at around 60mm^2 including integrated baseband. NVIDIA isn't talking about memory interfaces at this point, but do keep in mind that your memory interface is often defined by the size of your die.

The 4i SoC is also built on TSMC’s 28 HPM process, interestingly enough not the 28 HPL process used for Tegra 4. As Tegra 4i appears to be geared towards hitting very high clock speeds, the use of TSMC's 28nm HPM process makes sense.

Tegra 4i also gets the exact same ISP and computational photography features that Tegra 4 includes, along with the same video encode and decode blocks. When it comes to the GPU side, 4i includes 60 GPU cores; that's just shy of the 72 in Tegra 4 proper. We’re waiting on additional detail to understand whether these cores include the same enhancements we saw in Tegra 4 vs. Tegra 3. We also don't know the clock speed of the GPU cores in Tegra 4i.

Tegra 4 Comparison
  Tegra 4 Tegra 4i
CPU Configuration 4+1 ARM Cortex A15 4+1 ARM Cortex A9 "r4"
Single CPU Max Clock 1.9 GHz 2.3 GHz
Process 28nm HPL 28nm HPM
GPU Cores 72 60
Memory Interface PCDDR3 and LPDDR3 LPDDR3
Display 3200x2000 1920x1200
Baseband (Tegra 4) No Integrated Modem
Baseband (Tegra 4i) Icera i500: LTE Cat 3/Cat 4+CA, TDD/FDD, 100-150 Mbps DL (50 Mbps UL), TMs 1-8; WCDMA Cat 24/6, 42 Mbps DL (5.7 Mbps UL); TD-HSPA 4.2 Mbps DL (2.2 Mbps UL), including TD-SCDMA
Package (Tegra 4) 23x23 BGA, 14x14 FCCSP
Package (Tegra 4i) 12x12 POP, 12x12 FCCSP

Tegra 4i also includes the Icera i500 baseband IP block on-die, hence the 'i' for Icera. NVIDIA has disclosed some additional detail about i500 along the lines of what we’ve already written about. There’s full support for Category 3 (100 Mbps) LTE at launch, with a later upgrade to Category 4, along with support for 10 MHz + 10 MHz LTE carrier aggregation. In addition there’s support for the rest of the 3GPP suite of air interfaces, including WCDMA / HSPA+ up to 42 Mbps (Category 24), TD-SCDMA, and GSM/EDGE. i500 is also voice enabled with VoLTE support and CS-FB voice modes. NVIDIA claims that the i500 package is 7x7mm with a 6.5x6.5mm transceiver, and there are a total of 8 primary Rx ports (bands). NVIDIA also claims support for both 2x2 MIMO and 4x4 MIMO transmission modes on LTE. 

Functionally Tegra 4i is more like a heavily upgraded Tegra 3 than a Tegra 4 part thanks to the Cortex A9s.  It's clear that Tegra 4i is aimed more at the smartphone market while Tegra 4 proper aims at tablets or other platforms with a higher power budget and greater performance demands. 

In terms of time frame, NVIDIA expects the first Tegra 4i designs to begin shipping at the end of 2013, with most devices appearing in Q1 of 2014. It'll be interesting to see how a Cortex A9 based design holds up in Q1 2014, although the newer core and very high clock speed should do a good job of keeping the SoC feeling more modern than you'd otherwise expect.

The other big announcement is a reference design built around Tegra 4i called Phoenix. It's a smartphone with Tegra 4i inside, a 5-inch 1080p display, LTE, and just 8mm of thickness. What's more impressive is that NVIDIA claims the reference design can be picked up by an OEM and shipped with an unsubsidized price tag of between $100 and $300 USD. With Phoenix, NVIDIA now joins the likes of Qualcomm and Intel, both of whom already have active smartphone reference design programs.

We have a lot more questions about Tegra 4, 4i, and Phoenix, but answers are coming.

NVIDIA GeForce 314.07 WHQL Available
by Jarred Walton 5 days ago

NVIDIA's beta R313 driver with performance enhancements for Crysis 3 (among other titles) has now received WHQL certification. We wouldn't expect much of a difference in performance relative to the beta drivers, but NVIDIA states they provide up to a 5% performance improvement in Crysis 3 with GTX 680. As usual, you can grab the drivers for all current desktop and mobile NVIDIA GPUs:

Desktop 314.07 Windows Vista/7/8 64-bit
Desktop 314.07 Windows Vista/7/8 32-bit
Mobile 314.07 Windows Vista/7/8 64-bit
Mobile 314.07 Windows Vista/7/8 32-bit

Thanks go as usual to reader SH SOTN for the quick notification.

Microsoft Surface Pro mSATA SSD Upgrade: Dangerous but Successful
by Anand Lal Shimpi 6 days ago

Unlike current ARM or Atom based tablets, Microsoft's Surface Pro integrates a full blown mSATA SSD. My review sample included a 128GB Micron C400, while I've seen reports of users getting Samsung PM830 (OEM SSD 830) based drives. Both of these drive options are great so long as you remember to keep a good amount of space on the drive free (15 - 25% free space is a good rule of thumb). Unfortunately, Microsoft only offers two capacities (64GB and 128GB) despite there being much larger mSATA SSDs available on the market. To make matters worse, supplies of the 128GB Surface Pro have been tight, with 64GB models a little easier to come by. 

A few adventurous Surface Pro owners have decided to try to swap out their 64GB mSATA SSDs with larger models. One of our readers (Tim K.) managed to successfully transplant a 240GB Intel SSD 525 in his Surface Pro. The trick is to make sure you clone the original GPT formatted mSATA SSD properly. For this, Tim used Reflect to clone the drive and MiniTool Partition Wizard to expand the data partition to the full capacity of the new SSD.

While he had no issues getting the drive working, his Surface Pro did sustain damage during the upgrade process. As we learned from iFixit's teardown of the tablet, there's a ton of adhesive everywhere and melting/breaking it is the only way to get inside Surface Pro. Unfortunately the cable that drives the touchscreen was pulled up when Tim separated the display from the VaporMg chassis. The tablet works as does the pen, but the display no longer functions as a capacitive touchscreen.

Tim tried to use a conductive pen to restore contact between the cable and the contacts on the back of the display but so far hasn't had any luck (if any MS engineers who worked on Surface Pro are reading this and have any suggestions feel free to comment here or email me). The process of disassembling the Surface Pro isn't easy. It took Tim roughly an hour and a half to get inside. With the knowledge that he now has, Tim believes that he'd be able to get in without damaging the unit but he cautions against anyone else looking to get into Surface Pro. I didn't want to risk tearing apart my Surface Pro review sample, so I'm grateful to Tim for going the distance to prove it works. He's also awesome enough to share photos of the aftermath with us and post a thread in the forums to help other folks brave enough to try this.

John566 over at the tabletpcreview forums managed to get into his Surface Pro safely (it looks like he didn't attempt a complete disassembly, but rather left the cable side of the display in place and just lifted up the other side). He was having issues getting the new SSD (a 256GB Micron C400) recognized however.

The AnandTech Podcast: Episode 16
by Anand Lal Shimpi 6 days ago

It's the calm before the storm. The coming weeks are full of big announcements from smartphones to PC components, leaving us to talk about everything we can before the onslaught. We discuss Intel's TV strategy, Microsoft's Surface Pro, the Pebble smartwatch, the removal of unofficial LTE support from the Nexus 4 and Broadcom's LTE baseband. We also set expectations for performance and power consumption on Haswell. Finally, we touch on the recent controversy surrounding range testing Tesla's Model S.

The AnandTech Podcast - Episode 16
featuring Anand Shimpi, Brian Klug & Dr. Ian Cutress

iTunes
RSS - mp3 / m4a
Direct Links - mp3 / m4a

Total Time: 1 hour 29 minutes

Outline - hh:mm

Microsoft's Surface Pro - 00:00
Setting Haswell Expectations - 00:24
Intel's TV Initiative - 00:31
The Pebble Smartwatch - 00:51
Nexus 4 Removal of LTE - 1:04
Broadcom LTE Baseband - 1:06
Controversy Surrounding Range Testing Tesla's Model S - 1:13

As always, comments are welcome and appreciated. 

AMD Reiterates 2013 GPU Plans: Sea Islands & Beyond
by Ryan Smith on 2/15/2013

A few minutes ago AMD wrapped up a somewhat impromptu conference call, which had been called together to reiterate the company’s 2013 GPU plans. While there is technically very little new here that we don’t already know – especially if you can read between the lines on previous AMD announcements – AMD wanted to clarify things after what has been a rather wild week for their PR department. And true to their word, they delivered clarity by the truckload.

Since we’ve been avoiding this ruckus until we could get clarification, let’s quickly discuss the past week. The instigator for AMD’s wild week was a somewhat infamous 4Gamer.net article published last Saturday. In that article 4Gamer published the following roadmap slide from AMD, which was then confirmed as real by AMD when multiple AMD Twitter accounts re-tweeted a tweet about the article.


AMD slide courtesy 4Gamer.net

The slide stated, to the amazement of many, that AMD’s product lineup would be “stable throughout 2013”. And although that slide is technically correct, how it’s been interpreted has spawned quite a bit of ballyhoo. It’s this ballyhoo that AMD wants to clear up, hence today’s call.

Diving right into things, for those of you that only follow the desktop side of things, in late 2012 AMD announced their 8000M series products. The 8000M series was a mix of new parts and refreshes in order to satiate AMD’s OEM partners, who are accustomed to having a yearly product cadence of parts so that they can update their laptops accordingly. The fact that this was a mix of rebands and new parts, and that AMD at the time was hesitant to name those parts, made the whole thing murkier than it needed to be for tech enthusiasts, who are accustomed to seeing one or the other.

AMD later confirmed what was what in the 8000M series; the 8500M, 8600M, and 8700M parts were all based on a new GCN GPU codenamed Mars, which was part of AMD’s GCN-based Solar Systems family. AMD at the time also stated that they would be introducing new 8000M parts in Q2 of this year, in the process implying that these would be products based on new GPUs in the Solar Systems family.

Meanwhile in January AMD’s OEM desktop lineup got a similar overhaul. The Sea Islands product family – the desktop code name for the same GPU family as Solar Systems – saw its first product release when AMD announced the 8500 and 8600 OEM families, which were based on the same Oland/Mars GPU that the previous month’s mobile parts were based on. At the same time AMD rebadged a bunch of other desktop 7000 series parts into the 8000 OEM series, with Cape Verde, Pitcairn, and Tahiti products all making the jump.

AMD Codename Cheat Sheet
Mobile Desktop
London (Family) Southern Islands (Family)
Solar Systems (Family) Sea Islands (Family)
Mars Oland
Chelsea/Heathrow Cape Verde
Wimbledon Pitcairn

The fact that these previous product announcements are seemingly at odds with AMD’s slide is where reading between the lines comes in handy, and unfortunately that’s not a talent that comes naturally. In fact AMD’s mobile roadmap has almost everything you need to know, but you need to be able to mesh it with AMD’s published desktop roadmap, which is one of the things today’s call put to rest.

So first and foremost, AMD has reiterated that they’re continuing to work on Sea Islands/Solar Systems, and that we haven’t seen all of the Sea Islands chips yet. At the same time AMD also made clear that Sea Islands is based on the same architecture as Southern Islands – the first generation of Graphics Core Next (GCN1) – and that these parts are essentially just new configurations that we didn’t see with Southern Islands. This is why Oland is architecturally and feature-wise indistinguishable from previous GCN parts, and why it fits into AMD’s product stack where it does.


AMD's FAD2012 Roadmap

Of course AMD won’t comment on specific details about future products, but the fact that they have additional chips in the pipeline lines up nicely with their mobile roadmap and when we can expect to see these new Sea Islands GPUs. With their annual rebadging out of the way, AMD’s mobile roadmap makes it clear they intend to replace the 7900M and 7800M (Pitcairn) with some kind of new part, and while AMD won’t give us more details on these parts, replacing them with new Sea Islands parts is virtually guaranteed.

As it turns out, things won’t be all that different on the desktop. As we said before, AMD’s earlier desktop slide is technically correct, it’s just incomplete. AMD’s existing 7000 series cards aren’t going anywhere for the near future, with the flaw in the slide being that it implies that AMD won’t be introducing new products in that time frame. Oland already exists on the desktop in the form of the 8500 OEM and 8600 OEM series, and again with AMD declining to comment on specific details for future products, you should know where this is going. AMD will be introducing new retail desktop 7000 series products in the first half of 2013.

It is virtually guaranteed (but not confirmed) that at least one of those products will be the retail version of Oland. With 384 stream processors, Oland offers performance a step below the existing Cape Verde 7700 series parts and should give AMD the ability to deliver 7000 series functionality at under $100. At the same time, with at least one other Sea Islands GPU in the works, it’s also a strong likelihood that whatever new GPU AMD is introducing on the mobile side in Q2 will see an eventual desktop release in AMD’s H1 2013 timeframe. And to be very clear here, none of this is guaranteed, as AMD has made it clear that a new 7000 series desktop product does not necessarily mean a new GPU. But based on what AMD is saying and what AMD has committed to, Sea Islands is destined to get a retail desktop release.

The fact that these Sea Islands products will be released as 7000 series products is going to throw long-time readers a bit of a curveball, but as we’ve previously discussed Sea Islands is little more than new configurations of GCN1, so they will fit in nicely among the existing 7000 series products. For AMD’s part they believe the Radeon HD 7000 series is a very strong brand at retail – almost unbelievably having sold more 7900 cards in January 2013 than in any month prior – so as opposed to the OEM world where OEMs are driving rebadging and new product numbers, AMD wants to keep the 7000 series on the retail desktop in order to capitalize on their success. Labeling Sea Islands retail desktop parts as members of the 7000 series will allow AMD to introduce new products while still keeping the 7000 branding they’ve become so proud of.

What you’ll note through all of this however is that whenever we talk about the desktop it’s in relation to mobile, and there is a reason for that. Sea Islands is primarily geared for OEM notebooks, a very important market for AMD to tap at a time when laptop sales now outpace desktop sales, and when laptops only continue to grow while desktops shrink. There has been a general trend towards launching laptop-first in the PC industry for the past couple of years, and AMD is now part of that trend. This is why Sea Islands GPUs like Oland are launching as 8000M products first, and only later as desktop OEM and retail desktop parts.

This mobile/desktop distinction is important, perhaps most of all for high-end gamers, as it’s necessary to set expectations. So far we’ve continued to point at AMD’s mobile roadmap, where the products top out at Pitcairn-like parts. AMD’s mobile lineup has never used AMD’s biggest, fastest GPU (Tahiti), for everything from power to cost reasons; these GPUs are best suited for desktops and workstations. This raises a question: if AMD is focusing on refreshing their mobile lineup first and foremost, do they need to refresh their high-end desktop lineup? The answer is basically no. AMD has been very careful with their words here, but the gist of the matter is that the 7900 series will remain the mainstay of AMD’s enthusiast product line until the end of 2013.

Now AMD has been careful here to always mention the 7900 series and not Tahiti, but so far they are one and the same. AMD’s lack of comment means that we cannot say anything for sure, but nearly everything about AMD’s presentation was geared around driving home the point that AMD is happy with their current enthusiast products, and indeed that they believe they currently have the fastest products and need to do a better job of getting the word out. In other words, while it’s clear that Sea Islands will flesh out the lower end of AMD’s GPU lineup, AMD has been doing everything they can to prepare the press to accept the idea that Tahiti will remain AMD’s fastest GPU until the end of the year. Sometimes what AMD doesn’t say says it all, and in this case what’s not being said (but being strongly implied) is that AMD will not be coming out with a GCN1 GPU more powerful than Tahiti.

Finally, AMD also used a bit of their time to talk about their plans for the end of the year. With the 7900 series seemingly set as-is for the rest of the year, AMD has formally announced that they will be introducing a new GPU microarchitecture by the end of 2013. GCN is heavily embedded into AMD’s product line, from their SoCs all the way up to their biggest GPUs, so from a business perspective AMD is incredibly reliant on it. But on a technical level it’s also still a fresh, modern architecture whose greatest task – being the GPU component of AMD’s HSA implementation – has yet to come.

Consequently future microarchitectures will be GCN based, as AMD will continue to refine GCN implementations and add features to the architecture, similar to what they did with VLIW5 over the span of 4 years. We don’t typically throw around the word microarchitecture when discussing GPUs, but with AMD’s plans that’s exactly what’s going on; we’re seeing a stratification of things into the all-encompassing architecture (Graphics Core Next) and the individual microarchitectures spawned from it like GCN1 and AMD’s yet-to-be-named microarchitecture.

In any case, AMD’s new GPU microarchitecture will in turn drive a new generation of products that will be introduced at the same time. AMD isn’t saying anything more about what’s to come from that family, but we would note that the timeline for the launch of this new family lines up with how long AMD expects the 7900 to remain their enthusiast mainstay.

Wrapping things up, while there was little new on AMD’s call besides the new microarchitecture announcement, the call did go a long way towards clearing up their previous announcements and giving us a better idea of what to expect from AMD in the next few months. The long and short of it is that while AMD won’t be replacing the 7000 series on the retail desktop, they will be supplementing it with new products, and those products are almost certain to be based on their forthcoming Sea Islands GPUs. Based on what we’ve seen of Sea Islands so far on the mobile side, it should do a good job of fleshing out AMD’s product lineup to cover the gaps and areas where they don’t have direct competition against NVIDIA. At the same time, however, these parts clearly won’t be a significant departure from the products we’ve seen so far, and most importantly they won’t bring a new microarchitecture.

As for enthusiasts, the implication that they’re not going to see anything faster than Tahiti until the next generation products at the end of this year is unfortunately unlikely to go over well. Enthusiasts have become used to annual GPU refreshes, and while they’re still somewhat here as we’re seeing with Sea Islands, that era appears to be coming to a close as microarchitectures improve, development costs go up, and the rate of introduction for new fabrication processes slows. And certainly this is quite a departure from the norm. But if nothing else, AMD is right about a couple of things: as it stands AMD is already competitive with NVIDIA’s contemporary high-end offerings, and they're finally competitive with NVIDIA when it comes to developer outreach. Ultimately with the success of the 7900 series AMD today is in a comfortable place, leaving them free to focus on what they already have and how to improve those sales even further.

Logitech Announces Enterprise Focused Webcam
by Jason Inofuentes on 2/15/2013

Logitech has been a big presence in the consumer PC peripheral space for ages, but their latest push follows us all to the office. The Logitech for Business division seeks to leverage the company’s assets to build out products that enhance business-to-business communications. And their latest efforts look a little familiar. Today Logitech is announcing the Logitech Webcam C930e, a 1080p USB camera with a few features specifically for enterprise users. 

Looking quite like their last consumer grade camera, the Logitech Webcam HD, the new model borrows some of that unit’s features, including the Carl Zeiss optics and the industrial design. The wide-angle lens helps provide less of a talking head experience on video calls, and could be invaluable if there’s any whiteboarding planned or multiple people on one end of a call. The onboard ISP handles autofocus and exposure adjustments, with the latter something of a necessity in many office environments, as overhead fluorescents lead to lots of backlit scenes. 
 
Encoding happens onboard as well, and implements the H.264 Scalable Video Coding (SVC) protocol. H.264 SVC allows for variable quality video streams to be encoded in the same bitstream, with the effect of providing a lower bandwidth option when network congestion would otherwise have led to lots of artifacts. This is Logitech’s first video conferencing webcam that comes with SVC, a feature found in many enterprise video conferencing solutions. 
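To illustrate the SVC idea from the receiving end, here is a tiny sketch: because the layers are nested in one bitstream, an endpoint or conferencing server can simply pick the best layer that fits the bandwidth it currently has, rather than re-encoding. The layer names and bitrates are invented for illustration, not C930e figures.

  # Illustrative SVC-style layer selection; bitrates are made up.
  layers = [
      ("1080p30 (base + all enhancement layers)", 4000),  # kbps
      ("720p30 (base + one enhancement layer)", 2000),
      ("360p30 (base layer only)", 600),
  ]

  def pick_layer(available_kbps):
      for name, kbps in layers:            # ordered best-first
          if kbps <= available_kbps:
              return name
      return "audio only / drop video"

  print(pick_layer(5000))   # full quality
  print(pick_layer(1500))   # falls back to the base layer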
 
Out of the box the C930e comes fully compatible with Microsoft Lync and Cisco solutions, along with Skype certification, and will ring in at $109.99 when it arrives this May. Since this isn’t targeting the consumer space (consumers may be content with the $99.99 Webcam HD), you won’t find the C930e at Best Buy, but if you’re interested it will be available through sites that cater to enterprise. 

Tegra 4 Shipment Date: Still Q2 2013
by Anand Lal Shimpi on 2/14/2013

Last night NVIDIA's CEO, Jen-Hsun Huang, stated that shipments of its Tegra 4 SoC to customers would begin in Q2. A few outlets incorrectly assumed this meant Q2 of NVIDIA's fiscal year, but I just confirmed with NVIDIA that Jen-Hsun was referring to calendar Q2 - in other words, the period of time between April and June 2013.

Jen-Hsun also confirmed what was announced at CES: Shield, the handheld Android gaming device (with PC streaming capabilities) based on Tegra 4, would also ship in Q2. Jen-Hsun did add that Shield will show up in the latter part of Q2, which likely points to a late May/June launch.

In short, there's no new news here. NVIDIA mentioned Q2 as the release timeframe for Shield at its press event at CES last month. Obviously Shield can't launch without Tegra 4, so it's safe to say that Tegra 4 will also be shipping in Q2. With customer shipments happening in Q2, I'd expect products (other than Shield) in late Q2 or early Q3.

The rest of the earnings call was pretty interesting. GPU revenues are still solid despite maturing processor graphics solutions (although growing slowly), and Tegra revenue was up 50% over the previous year thanks to the success of Tegra 3. NVIDIA is still struggling on the smartphone side, but tablets have been a huge part of the success of the Tegra business unit.

Nexus 4 JDQ39 4.2.2 OTA Update Removes Unofficial LTE on Band 4
by Brian Klug on 2/13/2013

Just after it launched, we discussed how the Nexus 4 included undocumented support for LTE on Band 4 (AWS) which could be enabled simply by choosing the appropriate RAT (Radio Access Technology) under Phone Info (by dialing *#*#4636#*#* - INFO). Back then, I noted that it was highly unlikely this would stick around for very long without the proper FCC paperwork, and although it took a bit longer than I expected, today's 4.2.2 update does away with this unofficial support for LTE entirely. 

  
Post 4.2.2 OTA (left), 4.2.1 (right) on Nexus 4

The OTA update for the Nexus 4 includes both software changes to Android (4.2.2 build JDQ39) along with a new baseband software image in the form of a delta update (radio.img.p). I tested the Nexus 4 the same way as I did in the previous article, on an Anritsu MD8475A base station emulator, which enables me to test any configuration or band network, and setup a Band 4 LTE network to attach the Nexus 4 to.

Before the OTA update, with the appropriate "LTE Only" selection made in the aforementioned "Phone Info" menu for preferred network type, the Nexus 4 would quickly attach to Band 4 LTE. After applying the update, the handset no longer attaches at all. In addition, trying to select the "LTE Only" preferred network type now quickly changes back to "WCDMA Preferred," likewise choosing one of the other modes which include LTE results in a change back to "WCDMA Preferred" after exiting and coming back. Previously this setting would persist until a reboot took place. 

I'm not surprised that undocumented LTE on Band 4 was removed; I am surprised, however, that it took this long. This also settles any lingering questions about LG Electronics filing for a Class II permissive change for the Nexus 4 to enable LTE on the bands supported by the hardware. If having support for LTE on Band 4 on your Nexus 4 is important to you, I'd recommend holding off on updating with the OTA zip for now; no doubt people will also make their own update images without the radio update.

The Android 4.2.2 update includes a number of other small changes as noted by other users, including enhanced quick toggles that can be long pressed to toggle WiFi or Bluetooth. 

Source: Google (OTA .zip link)

Apple Cuts Pricing on MacBook Pro with Retina Display and SSD Upgrades
by Anand Lal Shimpi on 2/13/2013

Earlier this morning Apple announced a combination of price cuts and spec updates to its MacBook Pro with Retina Display lineup. The price cuts impact the 13-inch rMBP, while the spec bumps extend across almost all models.

The good news is that the prices of the base and upgraded 13-inch rMBPs have dropped to $1499 and $1699, respectively. The 15-inch model remains untouched. The upgraded 13-inch rMBP configuration has a slightly faster Core i5 CPU (2.6GHz base clock instead of 2.5GHz; I believe this is a Core i5-3230M). The faster CPUs are nice to see, especially since that's really the only way to improve UI performance at this point until Apple brings some more software tweaks to OS X.

On the 15-inch side, both configurations get a 100MHz faster base clock (i7-3635QM and i7-3740QM most likely). The upgraded 15-inch model now comes with 16GB of DDR3L-1600 by default.

MacBook Pro with Retina Display Pricing
Model 13-inch (base) 13-inch (upgraded) 15-inch (base) 15-inch (upgraded)
Old Price $1699 $1999 $2199 $2799
New Price $1499 $1699 $2199 $2799
Old CPU 2.5GHz Core i5 2.5GHz Core i5 2.3GHz Core i7 2.6GHz Core i7
New CPU 2.5GHz Core i5 2.6GHz Core i5 2.4GHz Core i7 2.7GHz Core i7
Old Memory 8GB DDR3L 8GB DDR3L 8GB DDR3L 8GB DDR3L
New Memory 8GB DDR3L 8GB DDR3L 8GB DDR3L 16GB DDR3L
Old SSD 128GB 256GB 256GB 512GB
New SSD 128GB 256GB 256GB 512GB

While default storage configurations don't change, SSD upgrade pricing does. The 512GB and 768GB SSD upgrades drop in price a bit depending on what configuration you're looking at. For the upgraded 15-inch model, moving to a 768GB SSD is now a $400 upgrade. That's not a lot for a 768GB drive, but it doesn't take into account the cost of the base 512GB SSD you are paying for but don't get to keep.

MacBook Pro with Retina Display Storage Pricing
Model 13-inch (base) 13-inch (upgraded) 15-inch (base) 15-inch (upgraded)
128GB SSD - - - -
256GB SSD +$200 - - -
512GB SSD +$500 +$300 +$300 -
768GB SSD +$900 +$700 +$700 +$400

Overall these are welcome changes to pricing and specs. It was clear from the start that the MacBook Pro with Retina Display would eventually fall to more reasonable prices, and this is likely the beginning of that curve. As high DPI displays become more commonplace, we'll see continued declines in pricing. These price cuts do come several months before the introduction of Haswell based rMBPs. Haswell's impact on the rMBP should be greatest on the 13-inch model, where the improved GPU performance will be able to make up for the fact that there's no discrete GPU (assuming Apple integrates Haswell GT3e silicon). You'll also see modest gains in idle power consumption, but the big platform battery life gains really come with Haswell ULx chips, which we won't see until closer to the end of the year and which will be used in tablets/convertibles.
