Latest Pipeline Posts
A Look at SkyCross and their Actively Tuned VersiTune Smartphone Antenna
by Brian Klug 11 hours ago

On the last day of CES, while still getting over some of the death curry that we foolishly ingested the night before, I had a meeting with SkyCross, who recently announced a new actively tuned multiband antenna module for smartphones. With LTE, the demand for more and more bands on a smartphone has increased and will only continue to do so, which at present has driven OEMs to simply include more antennas, each tailored for a specific frequency. Remember that the geometry of an antenna defines its radiative properties and resonant frequency, which in turn affects gain at different frequencies. I’ve seen smartphones with as many as four or five antennas, which poses a considerable challenge when engineering a device, as the number of antennas and their placement is often in direct competition with industrial design and other considerations.
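
To get a feel for why more bands tends to mean more antennas, consider the simplest possible case: a quarter-wave monopole, whose length scales inversely with frequency. This is only a back-of-the-envelope sketch (real smartphone antennas are folded, meandered PIFA-style structures, so actual dimensions differ considerably):

```python
# Rough quarter-wave monopole length for a few cellular bands -- a
# back-of-the-envelope illustration of why each band wants different
# antenna geometry. Real smartphone antennas are folded/meandered
# designs, not straight monopoles.
C = 299_792_458  # speed of light, m/s

def quarter_wave_mm(freq_hz):
    """Free-space quarter wavelength in millimeters."""
    return (C / freq_hz) / 4 * 1000

for band, f in [("700 MHz LTE", 700e6),
                ("1900 MHz PCS", 1900e6),
                ("2600 MHz LTE", 2600e6)]:
    print(f"{band}: ~{quarter_wave_mm(f):.0f} mm")
```

A 700 MHz antenna wants roughly four times the length of a 2600 MHz one, which is exactly the sort of conflict with industrial design described above.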

Actively tuned antennas aren’t anything new in other RF applications, but they are relatively new in the smartphone and tablet space. In the past that was primarily because it was easy enough to tune a design to the handful of bands the handset was designed for. The challenge of getting an antenna with the right impedance increases when we’re talking about a wider frequency range, hence the move to either more passive antennas for more bands or an actively tuned array. To date the only smartphone I’m aware of with an active tuning block (beyond what the power amplifiers do to match a standard 50 Ohm smartphone antenna) is the iPhone 5, which has an RFMD RF1102 tuner. Of course that design only has two cellular antennas to accommodate a bunch of bands, and the active tuning there also further mitigates deathgrip, which is itself just detuning.

By antenna tuning I’m referring to the ability of some part of the RF front end to actively switch in the right amount of complex impedance and match the antenna to the transmission line in the device. In most applications this means switching in capacitors using a switch or a bank of relays and getting as close to a VSWR (Voltage Standing Wave Ratio) of 1.0 as possible. Without going into a lengthy discussion of SWR and VSWR, the naive explanation is that VSWR is one of the figures of merit that defines how well an antenna can radiate; a good VSWR means the antenna radiates more of the power delivered to it, rather than reflecting it back into the transmission line.
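
The relationship between impedance mismatch and VSWR is simple enough to sketch in a few lines. This is illustrative only; real matching networks deal with complex, frequency-dependent impedances:

```python
# Sketch: reflection coefficient and VSWR for a load mismatched to a
# 50-ohm transmission line. Illustrative only -- real antenna impedances
# are complex and vary with frequency.

def vswr(z_load, z0=50):
    """VSWR from load impedance (may be complex) against line impedance z0."""
    gamma = abs((z_load - z0) / (z_load + z0))  # reflection coefficient magnitude
    return (1 + gamma) / (1 - gamma)

def reflected_power_fraction(z_load, z0=50):
    """Fraction of delivered power reflected back into the line."""
    gamma = abs((z_load - z0) / (z_load + z0))
    return gamma ** 2

print(vswr(50))                                 # -> 1.0 (perfect match)
print(round(vswr(75), 3))                       # -> 1.5 (75-ohm load on 50-ohm line)
print(round(reflected_power_fraction(75), 3))   # -> 0.04 (4% reflected)
```

A detuned antenna (deathgrip, for instance) shifts the load impedance away from 50 Ohms, the VSWR climbs, and more power bounces back instead of radiating; the tuner's job is to pull it back toward 1.0.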

The trend in smartphone antenna design is something I’ve noted in a handful of reviews — often there’s a plastic module at the bottom of the phone with the speaker, microphone, and antenna. This is modular so the OEM can include the right primary antenna for the region and set of bands the phone is destined for. Look at almost any major design and you’ll see something along those lines for the primary receive and transmit antenna at the bottom.

What SkyCross is offering is the ability for OEMs to bring this plastic module to them, along with the set of bands they require, and get a single custom design tailored to their application and all of those bands. They also claim very good (low) correlation coefficients between the two antennas on their modules, even though there isn’t much separation distance between them. Low correlation between the two antennas is critical for getting good gains out of MIMO (like the 2x2 MIMO in LTE) and receive diversity. The end result is a smaller solution with fewer constraints on industrial design, and possibly lower correlation coefficients for better MIMO performance.
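
For the curious, the envelope correlation coefficient (ECC) between two antennas can be estimated from their S-parameters with a well-known closed-form expression, valid under a lossless-antenna assumption. The S-parameter values below are invented purely for illustration:

```python
# Sketch: envelope correlation coefficient (ECC) between two antennas
# from complex S-parameters, using the standard closed-form expression
# (assumes lossless antennas). A common rule of thumb is that ECC below
# 0.5 is needed for effective MIMO/diversity. Example values are made up.

def ecc(s11, s21, s12, s22):
    """Envelope correlation coefficient from complex S-parameters."""
    num = abs(s11.conjugate() * s12 + s21.conjugate() * s22) ** 2
    den = ((1 - (abs(s11) ** 2 + abs(s21) ** 2))
           * (1 - (abs(s12) ** 2 + abs(s22) ** 2)))
    return num / den

# A well-matched, well-isolated antenna pair yields a very low ECC:
rho = ecc(0.1 + 0.05j, 0.05 - 0.02j, 0.05 - 0.02j, 0.1 + 0.03j)
print(rho < 0.5)  # -> True
```

The hard part SkyCross is claiming to have solved is keeping that number low when the two antennas are packed into one small module.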

SkyCross showed off an example design and module, which had two switches visible on it along with one other package. At the bottom you can see four feed points — one for ground, two for the two antennas, and finally a voltage rail for the active components. These can then accommodate the multiple frequency bands through tuning to the appropriate configuration. I’m told that modules such as these will start popping up in smartphones in 2013.

Source: SkyCross

First Impressions: the TECK Ergonomic Mechanical Keyboard
by Jarred Walton 2 days ago

This is my very first encounter with the “world’s first Truly Ergonomic Computer Keyboard”, aka the TECK. I received the keyboard today after inquiring about a review sample—the reason for me being the reviewer this time around is that Dustin has no interest in an ergonomic/split key keyboard. The company that makes the TECK goes by the name Truly Ergonomic, and right now this is the only product they make.

Several years in the works, the main claim to fame is that the keyboard is designed from the ground up for ergonomics. To that end, they’ve ditched the traditional staggered layout in order to provide an optimized arrangement that offers better comfort while typing, but the changes are something that will take a lot of practice before you can type anywhere near your regular speed. And Truly Ergonomic makes no claims to the contrary—they recommend spending days if not weeks with the keyboard before you decide whether or not you really like it, going so far as to offer a 60-day money-back guarantee. Oh, and let’s not forget that the TECK also comes with mechanical switches, specifically Cherry MX Brown switches that are generally quieter than many of the other mechanical switches out there.

Initial impressions are shocking—if you’ve ever tried the Dvorak key layout, I don’t think this could be any more alienating. Just about every "special purpose" key that I have become accustomed to locating by instinct is now in a new location—delete, tab, backspace, and enter are in the center column, with the spacebar split around the enter key. On the left, the Shift key is moved up one row, with CTRL where Shift normally resides and the ALT key at the bottom-left where CTRL usually sits. The right side gets the same treatment, and the enter key as noted has been relocated to the middle of the spacebar. Even the main body of the keyboard with its normal-seeming QWERTY layout can feel equally alien to a “formerly” touch typist at first (I find that staring at the keys while typing helps a bit right now). Elsewhere, where I normally find backspace is now an equal sign, the backslash and forward-slash are at the left where tab should be, there’s an extra key in the top-left that shifts all the numbers right one key, and we haven’t even gotten to the document navigation keys. The cursor keys reside under your right hand, down from the JKL area; Home/End/PgUp/PgDn are similarly located under your left hand.

The above paragraphs are the first paragraphs I’ve tried to type on the keyboard (plus some editing after the fact) and it has taken me fully twenty minutes with nearly constant mistakes to get them out! I’m already getting a bit more competent, but when the documentation suggests taking a while to adapt, they’re not kidding around. Truth be told, the whole experience can be a bit maddening at first—if you’ve ever been frustrated to the point where you feel a bit queasy in the gut and want to quit what you’re doing and go find something else more pleasant (like maybe beating your head against a wall)…well, I’m feeling a lot of that right now! I’m mostly writing this to give me a small amount of practice before trying some speed typing tests. I don’t think that the test is going to go well the first time around, but let’s find out.

I will be taking the tests twice on the TECK: once earlier in the writing process and a second time much later. Scores are expressed as “Gross WPM/Errors=Net WPM”. I found these tests on TypingTest.com, and I’m using three different text selections: Aesop’s Fables, Rules of Baseball, and Tigers in the Wild. And yes, these tests are hardly scientific, as typing the same text repeatedly on different keyboards can potentially skew the results. To help mitigate that, I’m serpentining through the keyboards, taking each test twice per visit (so six tests on a keyboard per pass): starting at the top of the list with my old Microsoft Natural Elite, moving to the Rosewill RK-9100, and then finishing with the TECK before heading back up. After both passes each test will have been taken four times, and I’ll report the best result. (And for the final TECK result, I’ll revisit the test later.)
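
If you want to sanity-check the scores in the tables, the arithmetic is trivial: on these one-minute tests the net score works out to the gross WPM minus one WPM per error.

```python
# Net WPM as reported in the tables: gross words-per-minute minus one
# WPM per error (consistent with how these one-minute TypingTest.com
# results are scored).

def net_wpm(gross, errors):
    return gross - errors

# e.g. the first MS Natural Elite result, "69/1=68"
print(net_wpm(69, 1))  # -> 68
```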

Round One Typing Test Results
Keyboard Test 1 Test 2 Test 3
MS Natural Elite 69/1=68 67/0=67 64/0=64
Rosewill 71/0=71 74/0=74 67/1=66
TECK (30 minutes) 24/2=22 27/7=20 31/8=23
TECK (90 minutes) 44/2=42 56/4=52 45/5=40

Ouch. I am still very clearly on the early part of a rather steep learning curve, but we’re talking about overcoming roughly 25 years of muscle memory as I adapt to the layout of the TECK—yes, in case you weren’t aware, I currently hold down the fort as the old fuddy-duddy for AnandTech, having just celebrated my 20-year high school reunion last summer. Another major difficulty for me is that I shift routinely between my desktop and various laptops, and if you’ve read my laptop reviews you probably already know that I’m quite particular about keyboard layouts. Here however the TECK isn’t a slightly tweaked layout just for kicks and giggles; it’s a completely whacked out (at first) arrangement that’s designed to be more ergonomic. And honestly, even in the short time I’ve been typing this, I’m starting to think they might be on to something, but change is never easy.

You can see the results from the table above, and when I get into a sort of zone while typing with the TECK, my speed seems to be better than before and I feel less strain/discomfort. The problem is that I’m not in the zone most of the time (yet), so I’ll go really fast for a few words or maybe even a whole sentence before the wheels fall off and I start hitting “=” instead of backspace. The layout definitely feels more compact and requires less movement, and I like everything in theory, but in practice I’m still making a lot of errors. With only 90 minutes of typing on the TECK that’s hardly surprising; I’m at least getting closer to where I was on my previous keyboards. Where will I be in a week’s time? We’ll have to wait to find out, which is why this is only a “First Impressions” rather than the full review.

I’ll post a complete review of the keyboard once I’ve had enough time with the device to really say how much I like (or perhaps dislike) what they’ve done, but as someone who has enjoyed using an MS Natural Elite PS/2 keyboard for most of my time writing for AnandTech, there’s a lot on tap here. I’ve long heard about the benefits of mechanical keys for touch typists, but until now I haven’t seen anyone doing a curved/natural/ergonomic keyboard with mechanical keys (not that I've really looked around much--see the comments for a couple of other options). The TECK is the first I’ve seen that’s readily available in the US, and while the current $222 price will almost certainly make you think twice, it's actually lower than some of the alternatives, and I can tell you from personal experience that the costs of dealing with RSI, CTS, and other similar health problems are far higher than that. You’ll hear more about the TECK in a couple of weeks, but for now I’m very intrigued. I’m just not sure how I’m going to go between desktops and laptops without feeling baffled for a little while if I end up sticking with the TECK!

Here’s one final parting shot to consider, taken after the rest of this article was written. I’ve now spent over two hours playing around with the TECK, and my speed and accuracy continue to improve. The worst part for me continues to be finding keys like quotes as well as accidentally reaching too far into the center keys (delete, tab, backspace) and messing things up. I’m getting better, and I can see the potential for the layout, but it will take some time….

Final Typing Test Results
Keyboard Test 1 Test 2 Test 3
TECK (120 minutes) 55/5=50 62/8=54 51/2=49

While I try to come to grips with the TECK, I’d love to hear any suggestions on ways to better adapt to a completely different keyboard. I’m also happy to entertain requests for any specific tests you’d like me to try, or if you have questions about the unit itself I can answer those as well. Incidentally, the keyboard is very solidly built, with far more weight to it than the diminutive size would suggest. I actually like the weightiness, though it would be less ideal for transporting it in a backpack. The palm rest is also removable and attached securely via multiple screws, which is a great way of doing things. Aesthetically, there’s a lot I like about the TECK, which is part of the reason I was so interested in getting a review sample. The only question is how well I can type after spending some quality time with the TECK.

To be continued….

AMD’s Mobility Catalyst 13.1 Update, Enduro Edition
by Jarred Walton 2 days ago

We posted earlier today about the public availability of AMD’s latest Catalyst 13.1 WHQL drivers, but being available and installing properly and without problems on Enduro laptops are sadly not the same thing. Armed with six different laptops, I took a moment to check on two things: first, would the AMD Mobility Radeon Driver Verification Tool allow the downloading of updated drivers; second, with the drivers available (either via the just-mentioned utility or via another source), would they install properly?

One thing I have not (yet) had time to do is any form of performance testing, so this is strictly a test to see if the drivers install properly and if all of the functions are present in the Catalyst Control Center. Here’s the list of the candidate laptops (some of which we have not yet reviewed) as well as the results of my two driver tests. I’ll follow the table with a lengthier discussion of the issues encountered where applicable.

AMD Mobility Catalyst 13.1 Laptop Testing
Laptop Utility DL? Installed?
Alienware M17x R4 (7970M) No Yes
AMD Llano Prototype (6620G + 6630M) Yes* No
AMD Trinity Prototype (7660G) Yes* Yes
MSI GX60 (7660G + 7970M) Yes* Yes
Samsung NP355V5C (7660G + 7670M) No Yes
Sony VAIO C (6630M) No Yes*
* - See notes below

Right off the bat, we can see there are problems with getting the drivers in the first place. Of the six laptops, the three with Intel processors fail to pass the utility’s “valid GPU/Vendor ID” test and simply refuse to download the full driver. The other three laptops (which have AMD APUs) pass the utility’s check, but when the download is supposed to start and you select the save location, you immediately get a message stating that the driver download has been canceled. I assume the download would work if the utility functioned properly, but right now the only way to actually get the driver is through other means.

Option one for downloading the driver is to use the AMD Catalyst Control Center (or whatever it’s called these days) and check for driver updates. This also failed to find new drivers for most of the laptops, but the Trinity Prototype at least found and properly downloaded the drivers. The other alternative is to just find another web site that’s hosting the driver—Guru3D has them, and I assume others do as well. Needless to say, the process of getting AMD's mobile drivers continues to be a pretty poor showing, but that’s nothing new. The only laptops where I expect zero difficulties in this area are those with either a discrete-only AMD GPU (no Enduro or switchable graphics) or an AMD APU and no dGPU (and be sure to avoid Sony, Toshiba, and Panasonic laptops if you want OEM driver support); not that others won’t work, but those are the least likely to have issues.

The second test is to install the drivers and see if they at least work in a few tests. The good news here is that five of the six laptops worked with the drivers, and the only one to fail completely is the Llano Prototype (which has always been a bit iffy, since it was never released to the public and the BIOS is a bit raw). Discounting the prototype, the drivers installed pretty much without any complaints or concerns on four of the laptops; the only one that gave me problems is the Sony VAIO C.

I’ve discussed the issues with the VAIO C and driver updates in the past, but the short story is that many of the other Dynamic Switchable Graphics laptops from the last year or so are likely to behave similarly (HP’s Envy 15 for instance). I got Windows 8 up and running on the Sony via a modified driver, as the stock drivers (either with Windows 8 or from AMD) did not work. With the modified drivers, you end up with the full driver being present (e.g. 12.11 beta11 prior to the 13.1 update), but the Global Switchable Graphics Settings section of the Catalyst Control Suite is non-functional—the lists where you select one of four modes for AC and battery power are blanked out. With the 13.1 drivers, things actually take a step backwards as far as I can tell: the CCC won’t start for me. The dGPU is present and working (I ran a couple games to verify this), but I can’t open up any of the switchable graphics settings or other driver settings.

Lack of performance testing aside, the latest driver release is an improvement at least from the installation standpoint, but there are a lot of remaining issues to address. The ideal continues to be widespread availability of drivers that simply install and work on any laptops with switchable graphics based on PowerXpress 4.0 or later hardware (Dynamic Switchable Graphics or Enduro), not to mention they should also work with discrete-only solutions. The GCN-based 7000M hardware tends to be better supported right now, whereas Northern/Southern Islands chips continue to have more issues. Please let us know if you've also had any difficulties with downloading and installing AMD's 13.1 mobility drivers, and we'll pass along any information to our AMD contacts.

AMD Catalyst 13.1 WHQL Drivers Available
by Chris Hansen 3 days ago

Having refined their 12.11 beta drivers, AMD has released their Catalyst Software Suite Version 13.1 update. This WHQL update includes all the performance improvements found in the previous AMD Catalyst 12.11 Beta 11 update while resolving a variety of issues for Windows 8 and Windows 7 users.

The Catalyst Control Center has also gained a new design for its 3D application settings page, which allows users to adjust their 3D settings individually per application.

Here are the direct links to the various drivers:

AMD Catalyst Software Suite 13.1 for Windows Vista/7/8 32-bit
AMD Catalyst Software Suite 13.1 for Windows Vista/7/8 64-bit

Corsair Scales Up: H90 and H110 Released
by Ian Cutress 3 days ago

I’m a big fan of these closed loop all-in-one liquid coolers.  For a little extra over the cost of an air cooler we can get a quieter cooling solution and a great way to remove heat from the CPU without going for a full blown self-build water loop.  AnandTech covered the first Corsair closed-loop liquid cooling range back in June 2011, and Dustin recently covered six of them (four from Corsair and two from NZXT) in December 2012, with the larger 280mm model taking the top spot.  With that in mind, Corsair has announced a pair of larger CLCs, in the form of the 140mm H90 and the 280mm H110.

The Corsair H90 is a single width 140x140mm model that comes with a single 140mm fan, making it the bigger version of the H55.  The H110 by contrast is a double length 140x280mm loop with a pair of fans, exceeding the size of the H100 but using the Asetek based mounting system of the H55.  The switch to 140mm should allow for quieter operation from the bigger fans, and Corsair states the bundled fans are designed for the high static pressure that these loops need.

It is worth noting that both models use the Asetek mounting system used on the H80/H100 rather than the CoolIT mechanism of the H80i/H100i.  Similarly, there is no mention of Corsair Link integration like the H80i/H100i, which may mean that the -i variants could be coming later this year if Corsair want to release them (and they can get the OEM of the H80i/H100i, CoolIT, to make them).

We should be getting both in to review in due course, but eager buyers can find the H90 and H110 available at the Corsair Store online for $100 and $130 respectively.  Both coolers support all modern motherboard sockets - 2011/1366/1156/1155 for Intel and FM2/FM1/AM3+/AM3/AM2 for AMD.

Lenovo Introduces Rugged Chromebook Aimed at K-12
by Jarred Walton 3 days ago

Google's Chromebook initiative hasn't caught on quite like their other OS of choice, Android, but with the latest updates and reduced pricing there's still life in the initiative. Acer's C7 for instance is apparently the fastest selling "laptop" on Amazon.com, no doubt helped by the $199 price point. Today Lenovo joins the Chromebook ranks with their ThinkPad X131e, which takes a different approach.

Unlike the other Chromebooks to date, Lenovo is specifically touting the ruggedness of the X131e as a major selling point, highlighting the benefits such a laptop can offer to educational K-12 institutions. The X131e Chromebook is "built to last with rubber bumpers around the top cover and stronger corners to protect the Chromebook against wear and tear." The hinges are also rated to last more than 50K open/close cycles.

Other specifications include an 11.6" 1366x768 anti-glare LCD, low-light webcam, HDMI and VGA ports, and three USB ports (2x USB 3.0, 1x USB 2.0). Battery life is stated as 6.5 hours, which should be sufficient for the entire school day. The X131e weighs just under four pounds (3.92 lbs./1.78kg) with the 6-cell battery and measures 1.27" (32.2mm) thick. Storage consists of a 16GB SSD, and the X131e comes with 4GB of DDR3-1600. Lenovo does not state the specific processor being used, merely listing it as "latest generation Intel", which presumably means an Atom CPU though Celeron or Pentium are certainly possible. Customization options including colors, asset tagging, and school logo etching are also available.

Besides the rugged build quality, Lenovo cites other advantages of Chrome OS for the K-12 environment. There's built-in protection since all apps are curated through the Chrome Web Store, and Lenovo's Chromebook allows IT teams to manage security and scalability through a management console, where they can configure, assign, and manage devices from a single interface.

The ThinkPad X131e Chromebook will be available starting February 26th via special bid volume pricing starting at $429. That's certainly higher than other options, but for a laptop that can actually withstand the rigors of the K-12 environment that's not too bad.

Fusion-io Launches ioScale for Hyperscale Market
by Kristian Vättö 3 days ago

We haven't even had time to cover everything we saw at CES last week, but there are already more product announcements coming in. Fusion-io launched their new ioScale product line at the Open Compute Summit; the Open Compute Project was originally started by a few Facebook engineers who were looking for the most efficient and economical way to scale Facebook's computing infrastructure. Fusion-io's aim with the ioScale is to provide a product that makes building an all-flash datacenter more practical, the key benefits being data density and pricing.

Before we look more closely at the ioScale, let's talk briefly about its target market: hyperscale companies. The term hyperscale may not be familiar to all, but in essence it means a computing infrastructure that is highly scalable. Good examples of hyperscale companies are Facebook and Amazon, both of which must constantly expand their infrastructure due to increasing amounts of data. Not all hyperscale companies are as big as Facebook or Amazon, though; there are lots of smaller companies that may need just as much scalability.

Since hyperscale computing is all about efficiency, it's also common that commodity designs are used instead of pricier blade systems. Out with those go expensive RAID arrays, network solutions, and redundant power supplies. The idea is that high availability and scalability should be the result of smart software, not of expensive and - even worse - complex hardware. That way the cost of infrastructure investment and management is kept as low as possible, which is crucial for a cloud service where a big portion of the income is often generated through ads or low-cost services. The role of software in hyperscale computing is simply huge, and to improve the software, Fusion-io also provides an SDK called ioMemory that will assist developers in optimizing their software for flash memory based systems (for example, the SDK allows SSDs to be treated as DRAM, which will cut costs even more since less DRAM will be needed).

The ioScale comes in capacities from 400GB up to 3.2TB (in a single half-length PCIe slot), making it one of the highest-density commercially available drives. Compared to traditional 2.5" SSDs, the ioScale provides significant space savings, as you would need several 2.5" SSDs to build a 3.2TB array. The ioScale doesn't need RAID for parity as there is built-in redundancy similar to SandForce's RAISE (some of the NAND die is reserved for parity data, so the data can be rebuilt even if one or more NAND dies fail).
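
As a rough illustration of how die-level parity allows a rebuild without an external RAID layer, here's a toy XOR sketch. The real RAISE-style scheme is considerably more sophisticated (and can tolerate multiple die failures); this only shows the basic principle:

```python
# Toy sketch of die-level parity: XOR the data dies together into one
# parity "die", so the contents of any single failed die can be rebuilt
# from the survivors. Real RAISE-style redundancy is far more elaborate.

def make_parity(dies):
    """XOR all data dies (equal-length byte strings) into one parity block."""
    parity = bytearray(len(dies[0]))
    for die in dies:
        for i, b in enumerate(die):
            parity[i] ^= b
    return bytes(parity)

def rebuild(surviving_dies, parity):
    """Recover a single lost die: XOR the survivors with the parity block."""
    return make_parity(list(surviving_dies) + [parity])

dies = [b"die0data", b"die1data", b"die2data"]
parity = make_parity(dies)
# lose die 1, rebuild it from the others plus parity
assert rebuild([dies[0], dies[2]], parity) == b"die1data"
```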

The ioScale is all MLC NAND based, although Fusion-io couldn't specify the process node or manufacturer because they source their NAND from multiple suppliers (which makes sense given the volume Fusion-io requires). Different grades of MLC are also used, but Fusion-io promises that all their SSDs will meet their specifications regardless of the underlying components.

The same applies to the controller: Fusion-io uses multiple controller vendors, so they couldn't specify the exact controller used in the ioScale. One of the reasons is extremely short design intervals, because the market and technology are evolving very quickly. Most of Fusion-io's drives are sold to huge data companies or governments, who are obviously very deeply involved in the design of the drives and also do their own validation/testing, so it makes sense to provide a variety of slightly different drives. In the past I've seen at least Xilinx FPGAs used in Fusion-io's products, so it's quite likely that the company stuck with something similar for the ioScale.

What's rather surprising is that the ioScale is a single-controller design, even at up to 3.2TB. Usually such high capacity drives use a RAID approach, where multiple controllers are put behind a RAID controller to make the drive appear as a single volume. There are benefits to that approach too, but using a single controller often results in lower latencies (no added overhead from the RAID controller), lower prices (fewer components needed), and a smaller footprint.

The ioScale has previously been available to clients buying in big volumes (think tens of thousands of units), but starting today it will be available in minimum order quantities of 100 units. Pricing starts at $3.89 per GB, which puts the 400GB model at $1556. For Open Compute platforms, Fusion-io is offering an immediate 30% discount, which puts the ioScale at just $2.72/GB. For comparison, a 400GB Intel SSD 910 currently retails at $2134, so the ioScale is rather competitive on price, which is one of Fusion-io's main goals. Volume discounts obviously play a major role, so the quoted prices are just a starting point.
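
A quick bit of arithmetic on the quoted pricing (using the 400GB entry capacity mentioned earlier):

```python
# Sanity-checking the launch pricing: $3.89/GB list price and a 30%
# discount for Open Compute platforms.

list_price_per_gb = 3.89
entry_capacity_gb = 400

print(round(list_price_per_gb * entry_capacity_gb))  # -> 1556
print(round(list_price_per_gb * 0.70, 2))            # -> 2.72  ($/GB discounted)
```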

OpenCompute servers and AMD Open 3.0
by Johan De Gelas 4 days ago

Remember our review of Facebook's first OpenCompute Server? Facebook designed a server for their own purposes, but quickly released all the specs to the community. The result was a sort of "Open-source" or rather "Open-specifications" and "Open-CAD" hardware. The idea was that releasing the specifications to the public would advance and improve the platform quickly. The "public" in this case is mostly ODMs, OEMs and other semiconductor firms.

The cool thing about this initiative is that the organization managed to convince Intel and AMD to standardize certain aspects of the hardware. Yes, they collaborated. The AMD and Intel motherboards will have the same form factor, mounting holes, management interface, and so on. The ODM/OEM only has to design one server: the AMD board can be swapped out for the Intel one and vice versa. The Mini-Mezzanine Slot, the way the power supply is connected, and so on are all standardized.

AMD is the first with this new "platform", which, contrary to Intel's current customized version of OpenCompute 2.0, is targeted at the mass market. The motherboard is designed and produced by several partners (Tyan, Quanta) and based upon the specifications of large customers such as Facebook. But again, this platform is not about the Facebooks of this world; the objective is to lower the power, space, and cost of traditional servers. So although AVnet and Penguin Computing will be the first integrators to offer complete server systems based upon this spec, there is nothing stopping Dell, HP, and others from doing the same. The motherboard design can be found below.

The T shape allows the PSU to be placed on the left, on the right, or on both sides (redundant PSUs). When placed in a case, the PSU sits behind the rest of the hardware and thus does not heat up the rest of the chassis, as you can see below.

The voltage regulation is capable of running EE and SE CPUs, ranging from 85W to 140W TDP. The voltage regulation disables several phases if they are not necessary in order to save power.

Servers can be 1U, 1.5U, 2U or 3U high. This server platform is highly modular, and the solutions built upon it can be widely different. AMD sees three different target markets.

Motherboards will not offer more than six SATA ports, but with the help of PCIe cards you can get up to 35 SATA/SAS drives in there to build a storage server. The HPC market demands exactly the opposite: in most cases CPU power and memory bandwidth matter most. There will be an update around late February that will support faster 1866 MHz DIMMs (1 DIMM per channel).

Our first impression is that this is a great initiative, building further upon the excellent ideas on which OpenCompute was founded. It should bring some of the cost and power savings that Facebook and Google enjoy to the rest of us. The fact that the specifications are open and standardized should definitely result in some serious cost savings, as vendors cannot lock you in like they do in the traditional SAN and blade server market.

We are curious how the final management hardware and software will turn out. We don't expect it to be at the "HP ILO Advanced" level, but we hope it is as good as an "Intel barebones server" management solution: being able to boot directly into the BIOS, a solid remote management solution, and so on. The previous out-of-band management solution was very minimal, as the first OpenCompute platform was mainly built for "hyperscale" datacenters.

The specifications of AMD Open 3.0 are available here.

Vizio's AMD Z60 Hondo-based Windows 8 Tablet PC at CES 2013
by Vivek Gowri 5 days ago

Even with the comprehensive overhaul of their notebook lineup, the big news out of Vizio’s CES booth was definitely their new Windows 8 tablet. The Vizio Tablet PC is the first system we’ve come across with AMD’s Z60 APU inside. It’s a 1GHz dual-core part, with a pair of Bobcat cores and an HD 6250 GPU onboard. The low clock speed allows it to hit a TDP of roughly 4.5W, easily the lowest of AMD’s APUs, but likely means that compute performance will be similar to or slightly worse than Clover Trail. This isn’t unexpected, since we saw the same situation play out with Ontario last year - basically a faster microarchitecture clocked significantly lower such that it performed roughly on par with Atom, except with significantly better GPU performance.

In addition to the AMD Z60, the Vizio Tablet PC comes with an 11.6” 1080p display, 2GB of memory, a 64GB SSD, stereo speakers, and Vizio’s now customary industrial design and attention to detail. The chassis is pretty thin at 0.4”, and at 1.66lbs isn’t too heavy for a system of this form factor. It’s a nice design, very flat and clean, and feels good in hand. The frame is aluminum, with a soft-touch back and glass front. I'll explore the hardware fully in the review, but for now, just know that it's a good looking, well executed design.

My main comparison point was the Samsung ATIV Smart PC 500T, a Clover Trail-based 11.6” (1366x768) tablet which weighs a very similar 1.64lbs. The ATIV isn’t a particularly well designed system, which I’ll get into in my review, so the Vizio is unsurprisingly a much nicer piece of hardware design, but what really got me was the performance of the Z60. Even at 1080p, the Vizio feels smoother throughout the Windows 8 UI than Clover Trail at WXGA. The extra GPU horsepower of the APU certainly makes itself felt when compared to the PowerVR SGX545 in Atom Z2760. This is a good sign, and all of the hardware acceleration capabilities it opens up should make the Z60 a much more livable computing platform than Atom. Obviously, it won’t come anywhere near 7W IVB, which I’d say is the current preferred Windows 8 tablet platform (and should be until Haswell arrives), but it should be a good deal cheaper.

The display is supposedly not IPS but is definitely some wide-angle panel type, so perhaps it’s a Samsung-sourced PLS panel or something similar. It's pretty crisp too; 1080p on an 11.6” panel is fantastic from a pixel density standpoint. We have no indication of price or release date, but Vizio says that it will be priced “competitively”. Competitive to what remains a question, since the Z60-based Vizio kind of bridges the gap between Clover Trail and Ivy Bridge tablets, but I wouldn’t be shocked to see it land at around $800. That puts it on par with the ASUS VivoTab 810C (the Atom one, not the one we reviewed) and just above the ATIV Smart PC ($749), but well below the 1080p Ivy Bridge tablets ($899 for Surface Pro, $949 for Acer’s W700).

I’m excited, it looks like a pretty decent offering and I’m glad to see AMD get such a solid design win. Intel has long owned the mobile and ultramobile PC space, so it’s nice to see AMD finally put out a viable chip that will hopefully shake things up going forward. 

Dragging Core2Duo into 2013: Time for an Upgrade?
by Ian Cutress 5 days ago

As any ‘family source of computer information’ will testify, every so often a family member will want an upgrade.  Over the final few months of 2012, I did this with my brother’s machine, fitting him out with a Sandy Bridge CPU, an SSD and a good GPU to tackle the newly released Borderlands 2 with, all for free.  The only problem he really had up until that point was a dismal FPS in RuneScape.

The system he had been using for the two years previous was an old hand-me-down I had sold him – a Core2Duo E6400 with 2x2 GB of DDR2-800 and a pair of Radeon HD4670s in Crossfire.  While he loves his new system with double the cores, a better GPU and an SSD, I wondered how much of an upgrade it had really been.

I have gone through many upgrade philosophies over the decade.  My current advice to friends and family that ask about upgrades is that if they are happy installing new components, they should upgrade each component to one of the best in its class, one at a time, rather than settle for an overall mediocre setup, as much as budget allows.  This tends towards outfitting a system with a great SSD, then a GPU, PSU, and finally a motherboard/CPU/memory upgrade with one of those being great.  Over time the other two of that trio also get upgraded, and the cycle repeats.  Old parts are sold and some cost is recouped in the process, but at least some of the hardware is always on the cutting edge, rather than a middling computer shop off-the-shelf system that could be full of bloatware and dust.

As a result of upgrading my brother's computer, I ended up with his old CPU/motherboard/memory combo, full of dust, sitting on top of one of my many piles of boxes.  I decided to pick it up and run the system with a top range GPU and an SSD through my normal benchmarking suite to see how it fared against the likes of the latest FM2 Trinity and Intel offerings, both at stock and with a reasonable overclock.  Certain results piqued my interest, but for normal web browsing and such it still feels as tight as a drum.

The test setup is as follows:

Core2Duo E6400 – 2 cores, 2.13 GHz stock
2x2 GB OCZ DDR2 PC8500 5-6-6
MSI i975X Platinum PowerUp Edition (supports up to PCIe 1.1)
Windows 7 64-bit
AMD Catalyst 12.3 + NVIDIA 296.10 WHQL (for consistency between older results)

My recent testing procedure in motherboard reviews pairs the motherboard with an SSD and a HD7970/GTX580, and given my upgrading philosophy above, I went with these for comparable results.  The other systems in the results used DDR3 memory in the range of 1600 C9 for the i3-3225 to 2400 C9 for the i7-3770K.

The Core2Duo system was tested at stock (2.13 GHz and DDR2-533 5-5-5) and with a mild overclock (2.8 GHz and DDR2-700 5-5-6).  

Gaming Benchmarks

Games were tested at 2560x1440 (another ‘throw money at a single upgrade at a time’ possibility) with all the eye candy turned up, and results were taken as the average of four runs.

Metro2033

Metro2033 - One 7970

Metro2033 - One 580

It is an admirable effort by the E6400, and overclocking helps a little, but the newer systems have the edge.  Interestingly the difference is not that large, with an overclocked E6400 being within 1 FPS of an A10-5800K at this resolution and settings while using a 580.

Dirt3

Dirt3 - One 7970

Dirt3 - One 580

The bump by the overclock makes Dirt3 more playable, but it still lags behind the newer systems.

Computational Benchmarks

3D Movement Algorithm Test

3D Particle Movement Single Threaded

This is where it starts to get interesting.  At stock the E6400 lags at the bottom but within reach of an FX-8150 at 4.2 GHz, but with an overclock the E6400 at 2.8 GHz easily beats the Trinity-based A10-5800K at 4.2 GHz.  Part of this can be attributed to the way the Bulldozer/Piledriver CPUs deal with floating point calculations, but it is incredible that a July 2006 processor can beat an October 2012 model.  One could argue that a mild bump on the A10-5800K would put it over the edge, but in our overclocking of that chip anything above 4.5 GHz was quite tough (we perhaps got a bad sample for overclocking).

3D Particle Movement MultiThreaded

Of course the situation changes when we hit the multithreaded benchmark, with the two cores of the E6400 holding it back.  However, if we were using a quad core Q6600, stock CPU performance would be on par with the A10-5800K in an FP workload, although the Q6600 would have four FP units to calculate with and the A10-5800K only has two (as well as the iGPU).

WinRAR x64 3.93 - link

WinRar x64 3.93

In a variable threaded workload, the DDR2-equipped E6400 is easily outpaced by any modern processor using DDR3.

FastStone Image Viewer 4.2 - link

FastStone Image Viewer 4.2

Despite FastStone being single threaded, the increased IPC of the later generations usually brings home the bacon - the only exception being the Bulldozer-based FX-8150, which is on par with the E6400.

Xilisoft Video Converter

Xilisoft Video Converter 7

Similarly with XVC, more threads and INT workloads win the day.

x264 HD Benchmark

x264 HD Pass 1

x264 HD Pass 2

Conclusions

When I start a test session like this, my first test is usually 3DPM in single thread mode.  When I got that startling result, I clearly had to dig deeper, but the conclusion produced by the rest of the results is clear.  In terms of actual throughput benchmarks, the E6400 is comparatively slow against all the modern home computer processors, either limited by cores or by memory.

This was going to be obvious from the start.

In the sole benchmark which does not rely on memory or thread scheduling and is purely floating point based, the E6400 gives a surprise result, but nothing more.  In our limited gaming tests the E6400 copes well at 2560x1440, with that slight overclock making Dirt3 more playable.

But the end result is that if everything else is upgraded, and the performance boost is cost effective, even a move to an i3-3225 or A10-5800K will yield tangible real-world benefits, alongside all the modern advances in motherboard features (USB 3.0, SATA 6 Gbps, mSATA, Thunderbolt, UEFI, PCIe 2.0/3.0, audio, networking).  There are also significant power savings to be had with modern architectures.

My brother enjoys playing his games at a more reasonable frame rate now, and he says normal usage has sped up a bit, making watching video streams a little smoother if anything.  The only question is where Haswell will fit into this, and that is a question I look forward to answering.

Checking Their Pulse: Hisense's Google TV Box at CES
by Jason Inofuentes 6 days ago

So, Google TV is still happening. Indeed, more players are getting into the game than ever. Hisense is a Chinese OEM/ODM that's seen steady growth in the television market internationally, and hopes to build a big presence in the US this year. Their Google TV box, Pulse, was announced as among the first to be built around the Marvell Armada 1500 chipset, and we've been waiting for it patiently ever since. It's available on Amazon right now, and we'll hopefully have it in for review soon. For now, we got a chance to take a peek at Hisense's interpretation of Google TV while on the show floor at CES. 

 

To recap, Google TV is Mountain View's stab at altering the television viewing paradigm. It has gone through some pretty immense transformations since it was first introduced, and while all implementations share a basic UI paradigm, Google has allowed OEMs to skin parts of the experience. The latest software iteration (V3, in their parlance) has three key components: Search, Voice, and a recommendation engine. Search, understandably, is Google's strong suit, and is leveraged to great success. Voice's execution is good, though the value is limited. Primetime is the recommendation engine, and while it's no doubt quite good, it feels little different than the similar features provided by Netflix and the like.

 

Hisense isn't shipping V3 software just yet, but a few things stand out about their software. We'll start with the Home screen. Lightly skinned and functional, the screen is fairly satisfying. The dock and the three featured apps across the top are static, but the "Frequently Used" field is populated automatically based on your usage. The area below the video field would make a great place for a social feed widget, or perhaps some other useful data, but, as usual, it is instead devoted to ad space. Just off the Home button is a new button that maps to an old function. Previously, hitting the Home button from the Home screen brought you to a field where a user could configure widgets. Here that "double tap" is moved to a separate button, but looks largely the same.


The remote control is a many-buttoned affair, with a large touchpad (complete with scroll regions) on one side and a QWERTY keyboard on the back. The touchpad is quite large, though responsiveness was a bit hit or miss; it's hard to blame the BT/WiFi-powered hardware in such a spectrum-crowded environment. The button layout is oddly cramped for such a large remote, thanks to that touchpad and a similarly large set of directional keys. The QWERTY keyboard on the back, though, benefits from the acreage and has a good layout. No motion controls are on offer here; this is a tactile interface all the way. And truly, I'm not going to miss waving a wand around.

There are three hardware things a Google TV needs to get right, and so far none have hit on all three. Video decode needs to be flawless and extensive; if local file playback is available, it shouldn't be limited to just a handful of codecs and containers, and it shouldn't ever falter. 3D rendering should at least be passable; as an Android device, it'd be nice to be able to play some games on these things, and so far that's something that's been ignored. More important than 3D though, 2D composition must be fast, no matter how many effects you throw at the screen. In many past devices, the UI was generally sluggish, but it slowed to an absolute crawl when you asked it to overlay a UI component over video. Imagine our surprise, then, when Hisense pulled it off without a hiccup. 

 

Hitting the Social button while watching a video brings up this lovely widget, which shows your Twitter and Facebook feeds and even offers sharing and filtering options. The filtering options are most intriguing, since they'd allow you to follow a content-based hashtag (say #TheBigGame) and participate in the conversation related to the content you're watching, all on the same screen. For terrestrial content, the widget shifts the content into the upper left region so that none of it is obscured by the widget.

 

But as nifty as the widget may be, what really set it apart was how quickly its components were drawn and updated. The time from the button being pressed to the fully composited and updated widget being shown couldn't have been more than a second. Jumping from there to the Home screen was quicker still, and opening Chrome and navigating to our home page all happened without noticeable stutter.

 

Chatting with Marvell later, we discussed how they used their own IP to develop their composition engine and targeted just this sort of use case for it. Based on our time with their solution on the show floor, they and Hisense have done some good work. We can't wait to get our hands on the hardware ourselves and see just how good it gets.

The Tegra 4 GPU, NVIDIA Claims Better Performance Than iPad 4
by Anand Lal Shimpi 6 days ago

At CES last week, NVIDIA announced its Tegra 4 SoC, featuring four ARM Cortex A15s running at up to 1.9GHz and a fifth Cortex A15 running at between 700 and 800MHz for lighter workloads. Although much of CEO Jen-Hsun Huang's presentation focused on the improvements in CPU and camera performance, GPU performance should see a significant boost over Tegra 3.

The big disappointment for many was that NVIDIA maintained the non-unified architecture of Tegra 3, and won't fully support OpenGL ES 3.0 with the T4's GPU. NVIDIA claims the architecture is better suited for the type of content that will be available on devices during the Tegra 4's reign.
 
Despite the similarities to Tegra 3, components of the Tegra 4 GPU have been improved. While we're still a bit away from a good GPU deep-dive on the architecture, we do have more details than were originally announced at the press event.


Tegra 4 features 72 GPU "cores", which are really individual components of Vec4 ALUs that can work on both scalar and vector operations. Tegra 2 featured a single Vec4 vertex shader unit (4 cores), and a single Vec4 pixel shader unit (4 cores). Tegra 3 doubled up on the pixel shader units (4 + 8 cores). Tegra 4 features six Vec4 vertex units (FP32, 24 cores) and four 3-deep Vec4 pixel units (FP20, 48 cores). The result is 6x the number of ALUs as Tegra 3, all running at a max clock speed that's higher than the 520MHz NVIDIA ran the T3 GPU at. NVIDIA did hint that the pixel shader design was somehow more efficient than what was used in Tegra 3. 
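The core-count arithmetic above is easy to tally; a quick sketch (plain Python, just doing the bookkeeping described in the text):

```python
VEC4 = 4  # each Vec4 ALU contributes 4 "cores" (scalar lanes)

tegra2 = 1 * VEC4 + 1 * VEC4      # 1 vertex + 1 pixel unit = 8 cores
tegra3 = 1 * VEC4 + 2 * VEC4      # pixel units doubled: 4 + 8 = 12 cores
tegra4 = 6 * VEC4 + 4 * 3 * VEC4  # 6 vertex + 4 three-deep pixel units = 72

print(tegra2, tegra3, tegra4)     # 8 12 72
print(tegra4 // tegra3)           # 6, i.e. 6x the ALUs of Tegra 3
```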
 
If we assume a 520MHz max frequency (where Tegra 3 topped out), a fully featured Tegra 4 GPU can offer more theoretical compute than the PowerVR SGX 554MP4 in Apple's A6X. The advantage comes as a result of a higher clock speed rather than larger die area. This won't necessarily translate into better performance, particularly given Tegra 4's non-unified architecture. NVIDIA claims that at final clocks, it will be faster than the A6X both in 3D games and in GLBenchmark. The leaked GLBenchmark results are apparently from a much older silicon revision running nowhere near final GPU clocks.
 
Mobile SoC GPU Comparison

                     Used In        SIMD Name   # of SIMDs   MADs per SIMD   Total MADs   GFLOPS @ Shipping Frequency
GeForce ULP (2012)   Tegra 3        core        3            4               12           12.4 GFLOPS
PowerVR SGX 543MP2   A5             USSE2       8            4               32           16.0 GFLOPS
PowerVR SGX 543MP4   A5X            USSE2       16           4               64           32.0 GFLOPS
PowerVR SGX 544MP3   Exynos 5 Octa  USSE2       12           4               48           51.1 GFLOPS
PowerVR SGX 554MP4   A6X            USSE2       32           4               128          71.6 GFLOPS
GeForce ULP (2013)   Tegra 4        core        18           4               72           74.8 GFLOPS
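
Since each MAD is a multiply plus an add (two flops) per cycle, the GFLOPS column lets us back-calculate the shipping clock each figure implies. These derived clocks are my own arithmetic from the table above, not vendor-confirmed figures:

```python
# GFLOPS = total MADs x 2 flops per MAD per cycle x clock in GHz
def gflops(total_mads, clock_ghz):
    return total_mads * 2 * clock_ghz

# (total MADs, GFLOPS @ shipping frequency) taken from the table above
chips = {
    "Tegra 3 (GeForce ULP)": (12, 12.4),
    "A6X (SGX 554MP4)":      (128, 71.6),
    "Tegra 4 (GeForce ULP)": (72, 74.8),
}
for name, (mads, gf) in chips.items():
    implied_mhz = gf * 1000 / (mads * 2)
    print(f"{name}: ~{implied_mhz:.0f} MHz")
```

Tegra 3 and Tegra 4 both work out to roughly 520MHz while the A6X sits near 280MHz, which is exactly the point made above: Tegra 4's theoretical edge over the A6X comes from clock speed, not ALU count.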
 
Tegra 4 does offer some additional enhancements over Tegra 3 in the GPU department. Real multisampling AA is finally supported as well as frame buffer compression (color and z). There's now support for 24-bit z and stencil (up from 16 bits per pixel). Max texture resolution is now 4K x 4K, up from 2K x 2K in Tegra 3. Percentage-closer filtering is supported for shadows. Finally, FP16 filter and blend is supported in hardware. ASTC isn't supported.
 
If you're missing details on Tegra 4's CPU, be sure to check out our initial coverage. 

Intel's Quick Sync: Coming Soon to Your Favorite Open Source Transcoding Applications
by Anand Lal Shimpi 6 days ago

 

Intel's hardware accelerated video transcode engine, Quick Sync, was introduced two years ago with Sandy Bridge. When it was introduced, I was immediately sold. With proper software support you could transcode content at frame rates that were multiple times faster than even the best GPU based solutions. And you could do so without taxing the CPU cores. 
 
While Quick Sync wasn't meant for high quality video encoding for professional production, it produced output that was more than good enough for use on a smartphone or tablet. Given the incredible rise in popularity of those devices in recent years, and given that an increasing number of consumers have moved to notebooks as primary PCs, a fast way of transcoding content without needing tons of CPU cores was exactly what the market needed.
 
There was just one problem with Quick Sync: it had zero support in the open source community. The open source x264 codec didn't support Quick Sync, and by extension applications like Handbrake didn't either. You had to rely on Cyberlink's Media Espresso or ArcSoft's Media Converter. Last week, Intel put the ball in motion to change all of this. 
 
With the release of the Intel Media SDK 2013, Intel open sourced its dispatcher code. The dispatcher simply detects what driver is loaded on the machine and returns whether or not the platform supports hardware or software based transcoding. The dispatcher is the final step before handing off a video stream to the graphics driver for transcoding, but previously it was a proprietary, closed source piece of code. For open source applications whose license requires that all components contained within the package are open source as well, the Media SDK 2013 should finally enable Quick Sync support. I believe that this was the last step in enabling Quick Sync support in applications like Handbrake.
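In spirit, the dispatcher's decision is a simple one; here is a hypothetical sketch of that logic (illustrative Python only - the real dispatcher is native code inside the Media SDK, and the driver identifiers here are made up):

```python
# Hypothetical dispatcher sketch: inspect the loaded graphics driver and
# decide whether a video stream gets hardware or software transcoding.
HW_CAPABLE_DRIVERS = {"intel_gen6", "intel_gen7"}  # made-up identifiers

def dispatch(loaded_driver: str) -> str:
    """Return which transcode path to hand the video stream to."""
    if loaded_driver in HW_CAPABLE_DRIVERS:
        return "hardware"   # Quick Sync fixed-function path
    return "software"       # CPU-based fallback

print(dispatch("intel_gen7"))   # hardware
print(dispatch("other_gpu"))    # software
```

The point of open sourcing this layer is that an application like Handbrake can now ship the whole decision chain under an open license, rather than bundling a proprietary blob.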
 
I'm not happy with how long it took Intel to make this move, but I hope to see the results of it very soon. 

Vizio's New Touch Notebook and AIO PCs at CES
by Vivek Gowri 6 days ago

Vizio used CES as the platform to debut the third revision of its PC lineup, which currently consists mostly of ultrabooks and all-in-ones. The first revision was the initial launch last summer, while the second revision brought touchpad updates (replacing the godawful Sentelic pads with better Synaptics units) and Windows 8. This third revision brings touchscreens and quad-core CPUs across the board, notebook and all-in-one alike.

Vizio’s notebook lineup is presently structured with a Thin+Light and a Notebook; the former is available in two form factors (14” 900p and 15.6” 1080p) with Intel’s ULV processors, solid state storage, and integrated graphics, while the Notebook is 15.6” 1080p with quad-core IVB processors, Nvidia’s GT 640M LE graphics, and a 1TB hard drive paired with a 32GB caching drive. Across the board, we see IPS display panels, fully aluminum chassis, and uniform industrial design. 

The new Thin+Light Touch again comes in 14” and 15” models, now exclusively with either AMD A10 or Ivy Bridge i7 quad-core processors, and with AMD dedicated graphics available on the AMD model. The dual-core ULV parts are gone, and with nary a mention of the CN15 Notebook, it would appear that it has been killed off because of too much overlap with the Thin+Light Touch. Both quad-core CPUs and dedicated GPUs are available in the Thin+Light Touch, so you're not losing much, though that means there is no longer an Intel quad + dGPU config on offer.

As can probably be surmised from the name, the Thin+Light Touch is available exclusively with a capacitive multitouch display. This adds a bit of thickness and weight to the chassis, but the 15.6” model is still 4 pounds (up from 3.89lbs before), so it's not a huge gain. Other improvements include a much more structurally sound palmrest and interior, which results in significantly less flex in both the body and the keyboard. This is likely the most significant of the chassis-level upgrades, and it fixes the last major flaw of the second revision notebooks. Battery capacity has been “nearly doubled”, which indicates capacity should be close to 100Wh (the previous Thin+Light was 57.5Wh), with the hope of substantially improving battery life.

Gallery: Vizio Laptops

It seems like a pretty targeted generational update, with all of the pain points from the first two notebooks fixed. I think I’d still like to see some improvements in terms of ports on offer (2xUSB and no SD slot just isn’t enough), but the gorgeous IPS display and nice industrial design make up for any remaining flaws. Price points are expected to be similar to the previous Thin+Light, and availability is expected to be in the early spring timeframe. 

Vizio also had its All-in-One Touch series desktops at their suite in the Wynn, though these are not new products. Vizio updated the AIO series with touchscreen displays and Synaptics touchpads at the Windows 8 launch, and simply brought those to Las Vegas to complement their new notebook, tablet, and HDTV products on the show floor.

Razer Edge: Impressions and Thoughts
by Vivek Gowri 6 days ago

I spent a fair amount of time at CES playing with the Razer Edge, mostly because it was one of the more intriguing new products on the show floor. (Shield was another one, but Nvidia sadly kept it in a glass cage.) As recapped in our announcement post, it’s a 10.1” tablet that packs an ultra-low voltage Ivy Bridge CPU and an Nvidia GT 640M dGPU and comes with a gamepad accessory that turns it into the world’s largest GameBoy Advance. This, for a tablet, is a ridiculous amount of power. I’ve always been someone who appreciates insanity in mobile technology design, and the insanity of a 45W power envelope in a 10” form factor is something that I respect. 

The Edge on its own is pretty intense - 0.8” is really thick for a tablet, with a general sense of chunkiness that starkly contrasts with the extremely svelte Blade. The intake and exhaust vents are put to the test in any extended gaming, and one of the units on the show floor that had been continuously running Dirt 3 for the previous few hours was... warm. It'll be hard to tell how close to thermal equilibrium the Edge gets until we get one in our labs, but I expect it to throttle significantly at some point.

17W Core i5 and i7 parts were chosen instead of the new 7W Y-series CPUs due to the higher clock speeds and better turbo capabilities of the U-series processors. The SSD has not yet been finalized, with different drives in all the prototypes that I played with. The display panel is in fact an IPS panel, contrary to my announcement post (I was misinformed during the CES pre-briefing, but Razer's engineering team corrected me). It looks pretty decent, and the capacitive touch panel was pretty responsive. The 1366x768 resolution matches up with most of the other 10.1” Windows tablets we've seen, and was likely chosen in lieu of 1080p so that the GT 640M LE could comfortably game at native resolution.

There’s a 40Wh battery on board, plus an additional 40Wh extended battery that fits in the gamepad and notebook docks (it’s a 14.8V, 2800mAh pack, for an exact capacity of 41.44Wh). 80Wh is a ton of battery for a device this small, but when stressed, it’ll go quickly. A rough tally of the internal components gives a basic estimate of 40W power draw (17W CPU, 22W dGPU; in most gaming situations figure a 50% load on the CPU and 100% load on the GPU, then add about 10W for the display, wireless, and other miscellaneous parts), which puts us at an hour of gaming on the internal battery and 2 hours with the extended pack. Obviously, turning down settings to reduce system load, lowering brightness, and playing less intensive games will affect these figures - Razer quotes a range of 2-4 hours of mobile gameplay. Normal battery life should be in the 3-5 hour range on the internal battery and about double that with the extended pack.
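The back-of-envelope runtime arithmetic checks out; a quick sketch (the 50%/100% load split and the 10W miscellaneous figure are the same assumptions quoted above, not measurements):

```python
# Rough gaming power draw estimate for the Edge (assumed loads)
cpu_tdp, gpu_tdp, misc = 17.0, 22.0, 10.0      # watts
draw = 0.5 * cpu_tdp + 1.0 * gpu_tdp + misc    # ~50% CPU, 100% GPU load

internal_wh = 14.8 * 2.8                       # 14.8V x 2800mAh pack
total_wh = internal_wh * 2                     # extended pack doubles it

print(draw)                                    # 40.5 (call it ~40W)
print(f"{internal_wh / draw:.1f} h internal")  # 1.0 h internal
print(f"{total_wh / draw:.1f} h with pack")    # 2.0 h with pack
```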

The gamepad controller essentially works like an Xbox controller, with intuitive controls and built-in force feedback. It's pretty cool; I feel like it's something I would have absolutely killed for back when I rode the bus to school every day in my early undergrad days. The tablet clips into the gamepad, which envelops the tablet like a case, and then you're off. I spent my fair share of time playing Dirt on it, and it was just great. The control layout is identical to the 360's, and the analogs and triggers are responsive. Razer definitely knows how to put a good 360 controller together, as evidenced by the Sabertooth, so this came as no real surprise. The setup adds a bit of heft to the tablet, bringing it to roughly 3.25 pounds, but for the amount of mobile gaming potential it brings, I'd say that's a relatively small tradeoff. The only downer was the $249 price point on the accessory.

The keyboard dock, on the other hand, was kind of a disappointment. It's definitely a work in progress and isn't expected to ship until Q3 (the gamepad and docking station will ship alongside the Edge in Q1), but it's a clunky piece of kit with a currently not-very-good keyboard and a pretty unrefined hinge/latch design. I'll chalk the flex up to the handbuilt state of the prototypes, but the key sizing is way too small - instead of going edge to edge like most netbooks, there's a border left around the keyboard that results in tiny keys. The keys absolutely need to be bigger for any semblance of a decent typing experience. There's a lot of improvement that can be done here; I suggest the design team pick up a Transformer laptop dock or a late-model Eee PC and borrow liberally from that keyboard design. ASUS has absolutely perfected the 10.1” keyboard, so it's not a bad idea. I'm not going to rake Razer over the coals for a product that clearly isn't anywhere near finished yet, though, so let's move on.

The docking station was set up with an LCD TV and a pair of Sabertooth controllers in multiple places in the Razer CES booth, as well as their meeting suite. In all cases, the display was set to be mirrored, presumably to ensure that the games were played at the internal panel’s native 768p and not 1080p (where performance would understandably struggle). I’m still really interested in tossing an Edge + dock on my desk with a 24” display and a Bluetooth keyboard/mouse, it seems like one of the more viable 2 pound desktop replacements around. 

Pricing slots in at $999 for the base i5/4GB/64GB model, $1299 for the i7/8GB/128GB Pro model, and $1499 for the Pro plus Gamepad bundle. Doubling capacity to 256GB will run you an extra $150 for the Pro models. If you don’t want anything other than the tablet, the base model is a pretty good deal, but once you start adding accessories you might as well spring for the Pro bundle and resign yourself to paying Razer’s typically expensive peripheral costs. They don’t even try to deny that the Blade, the Edge, and all of their keyboards and mice are pricey - Razer has cultivated a premium brand ethos, and it’s done pretty well for them thus far.

G.hn and HomePlug Head for Showdown
by Ganesh T S 6 days ago

It has been a while since we covered PLC (powerline communication) technology here, but we took the opportunity to check up on the latest and greatest in the area at CES. G.hn has been championed by the HomeGrid Forum, and the companies promoting it in early 2011 included Sigma Designs, Lantiq and Marvell. In fact, at CES 2011, we visited the Sigma Designs suite to see G.hn silicon in action for the first time. Lantiq had also demonstrated a G.hn chipset at the same show. Much water has flowed under the bridge since then, and Lantiq seems to have quietly stopped advertising their XWAY HNX solutions on their website. Sigma Designs is not doing too well financially, and Michael Weissman, one of their most vocal G.hn proponents, has moved on. These factors, however, didn't prevent them from introducing their 2nd generation G.hn chipset (PDF). There has been a change of PR hands at Sigma Designs, and we were unfortunately not invited to see it in action. However, Marvell was gracious enough to invite us to check out their G.hn system in action.

Meanwhile, HomePlug invited us to check out a compatibility test using commercially available HPAV (HomePlug AV) equipment. Qualcomm Atheros is no longer the sole vendor, with Broadcom and M-Star also pitching in with their own solutions. The Broadcom solution with the integrated AFE (Analog Front End) has been well received by vendors. The HomeGrid Forum regularly organizes plugfests too, but they are of little relevance if one can't purchase the involved equipment in stores.

It is good to see G.hn silicon in what appears to be ready-to-ship casing, but the bigger question is one of compatibility with existing equipment. Marvell indicated that service providers are lining up to supply G.hn equipment to customers (particularly in the growing Asian markets). However, with HPAV equipment already widespread throughout the world (particularly through consumer channels), it remains to be seen whether service providers can take the risk of their equipment's performance degrading in an MDU (multiple dwelling unit) scenario where the adjoining units have HPAV equipment. Marvell does promise good network isolation in the MDU case, and it will be interesting to see how an HPAV network and a G.hn network can co-exist.

The progress with G.hn seems to be very slow. It is a pity that silicon demonstrated as early as January 2011 is yet to ship to customers two years down the line. Under conditions of anonymity, some networking vendors told us that they have given up on G.hn and are looking forward to HPAV2 silicon coming out towards the end of the year. The HomeGrid Forum and its members have been quick to publicize any service provider / supplier agreements, and so far we have received reports of Comtrend, Suttle, Chunghwa Telecom Labs and Motorola Mobility showing interest in G.hn. As long as Sigma Designs and Marvell remain in the fray, G.hn lives to fight another day. We will be keeping close tabs to find out when the first G.hn products start shipping to the customers of the service providers who have opted for it.


Buffalo Technology Updates NAS and DAS Lineup at CES 2013
by Ganesh T S 6 days ago

Brian already updated readers on the new products from Buffalo in the networking and Thunderbolt space. There were updates on the NAS front too. The primary announcement was the launch of the LinkStation 400 series of NAS devices. The available models include single and dual bay configurations with the option of going diskless (410D / 420D / 421E). These NAS devices also incorporate support for the BuffaloLink remote service and new mobile apps. The chassis has a black matte finish. Buffalo claims throughput performance of 80 MBps+. Pricing ranges from $149 for the 421E to $719 for an 8TB 420D. Availability is slated for the end of Q1 2013.

The BuffaloLink service enables secure cloud access to the NAS. It consolidates various NAS devices under one account and provides easy remote access. The service works via a relay mechanism and doesn't require any port forwarding. Buffalo maintains servers in the US, Europe and Asia for this purpose. The service is free for the life of the product. Plans are also underway to expand BuffaloLink to other products such as routers.

The DriveStation DDR is a USB 3.0 DAS (Direct Attached Storage) unit with a 1 GB DDR3 cache, in addition to the 32 - 64 MB of cache already present in the hard disk itself. The DRAM buffers writes to the hard disk, making writes to the DAS appear much faster to the user (Buffalo claims as much as a 350% improvement in some cases). Of course, there is no protection against power loss, so users should make sure the DriveStation DDR is connected to a UPS when critical data is being transferred to it. Pricing ranges from $119 for a 1TB version to $189 for a 3TB version. Availability is scheduled for the end of Q1 2013.
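The tradeoff Buffalo is making here is the classic write-back cache: a write is acknowledged as soon as it lands in volatile DRAM, and the data only becomes safe once it is flushed to the platters. A minimal Python sketch of the idea (the class and method names are ours, purely illustrative, not anything from Buffalo's firmware):

```python
class WriteBackCache:
    """Toy model of a DRAM write-back cache sitting in front of a disk."""

    def __init__(self):
        self.dram = []   # volatile buffer -- contents vanish on power loss
        self.disk = []   # persistent storage

    def write(self, block):
        # The write is acknowledged as soon as it lands in DRAM,
        # which is why the device *appears* fast to the user.
        self.dram.append(block)

    def flush(self):
        # Data is only safe once it has been written through to disk.
        self.disk.extend(self.dram)
        self.dram.clear()

    def power_loss(self):
        # Without a UPS, anything still sitting in DRAM is simply lost.
        lost = len(self.dram)
        self.dram.clear()
        return lost


cache = WriteBackCache()
cache.write("block-A")
cache.write("block-B")
cache.flush()              # A and B are now on disk
cache.write("block-C")
lost = cache.power_loss()  # C was never flushed: it is gone
```

This is exactly why the UPS caveat above matters: with 1 GB of dirty data in flight, an outage can lose far more than a drive's own 32 - 64 MB cache ever could.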

Head on over to the source link for more specifics on the products launched.


Acer Shows Off 2880x1620 Panel
by Jarred Walton 6 days ago

We visited with Acer at this CES, and while we weren't told of anything we could specifically discuss, another publication has since published pictures of Acer's pre-release laptop with a 2880x1620 IPS display, so it appears that's fair game. So, let me tell you what we know.

The panel as noted is 2880x1620, with a diagonal of around 15.6" (give or take 0.1", I'd guess). This is basically the non-Apple version of the Retina MacBook Pro's 2880x1800 panel, only in 16:9 attire rather than 16:10. The display is clearly IPS or some other wide viewing angle design, and when we walked into Acer's suite to look at the laptops and tablets, even from an oblique angle it stood out as far and away the best display of the bunch. I also took some time to show the same image (wallpaper) on the 2880 panel alongside adjacent 1366x768 and 1080p panels (both TN), and the difference in color was astounding.
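For reference, pixel density follows directly from the resolution and the diagonal size, so a quick back-of-the-envelope comparison of the three panels (assuming all are exactly 15.6") looks like this:

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal pixel count divided by diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

# Acer's pre-release panel vs. the TN panels it sat next to
for w, h in [(2880, 1620), (1920, 1080), (1366, 768)]:
    print(f"{w}x{h} @ 15.6 in: {ppi(w, h, 15.6):.0f} PPI")
```

That works out to roughly 212 PPI for the 2880x1620 panel against about 141 PPI for 1080p, which is precisely the density jump that makes Windows DPI scaling the open question.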

My best guess for when we'll see this LCD show up in an Acer laptop (and potentially in laptops from other vendors) is around late Q2 2013, when the Haswell launch occurs. That should give the OEMs plenty of time to figure out how they're going to deal with an ultra-high-DPI panel in Windows, and that's where Apple's control over both the hardware and the OS is going to be difficult to beat. Hopefully when the display shows up, manufacturers will also remember to spend the extra time and money to pre-calibrate for accurate colors, and it sounds like that's at least in the cards.

Interacting with HTPCs: Adesso, IOGear & Rapoo Demonstrate Options at CES 2013
by Ganesh T S 6 days ago

Media Center remotes are a dime a dozen, but, judging by the threads that frequently pop up on AVSForum, a number of users prefer full-sized keyboards. Some of the popular options for controlling HTPCs include the diminutive Logitech diNovo Mini and the Lenovo N5902 keyboard / 'trackball' combo. The Logitech K400 with an integrated touchpad is also quite good and economical (and my personal HTPC solution for now), but it isn't really ideal as an extended alternative to a mouse. At CES, we went around the show floor looking for HTPC control solutions, paying particular attention to full size offerings. A separate mouse is out of the question in an HTPC setup, so most units bundle either a touchpad or a trackball. Wireless communication happens in either the 5 GHz or 2.4 GHz band with a specialized USB receiver on the PC side; in some cases, the protocol of choice is Bluetooth, which lets these devices also interface with Bluetooth-capable tablets.

Adesso:

Adesso had the yet-to-be-launched WKB-4150DW Bluetooth 3.0 aluminum touchpad keyboard on display. It is mainly intended to interface with tablets, but the build and features make it an ideal HTPC companion. The differentiating feature of this product is the option to use either 2.4 GHz (with a dedicated USB receiver) or Bluetooth for communication using a switch at the rear of the unit.

Older keyboard / trackpad / trackball combo models were also on display.

IOGear:

IOGear wasn't introducing any new keyboard / mouse combos, but they had their full lineup on display. The GKM571R appeared to be quite interesting given its minimalist design; the unit even turns off completely when the upper lid is closed. The on-lap keyboard with optical trackball and scroll-wheel, the GKM581R, in addition to being an ergonomic alternative for HTPCs, is also compatible with multiple game consoles (including the PS3). The GKM681R retains the same compatibility as the GKM581R, but in a compact form factor, without the on-lap ergonomic design. The GKM561R has a laser trackball with 400, 800 or 1200 dpi settings. The unit is MCE-ready with appropriate shortcuts and also retains the game console compatibility of the previous two models.

Rapoo:

Rapoo had a variety of Windows 8 peripherals on display. Of interest to the HTPC crowd were the wireless multimedia touchpad keyboard (E9180P) and the wireless illuminated keyboard with touchpad (E9090P). Both of these communicate in the 5 GHz band, avoiding interference with 2.4 GHz Wi-Fi, Bluetooth and other 2.4 GHz devices. Customizable touch gestures are supported for personalizing the navigation experience. The E9090P also features inductive wireless charging and an adjustable backlight.

We are looking forward to getting some of these models in for review towards the end of this quarter.

Seagate and LaCie Demonstrate Complementary Product Lineups at CES 2013
by Ganesh T S on 1/14/2013

Seagate is well on its way to completing its acquisition of LaCie, and the two companies had a joint presence at CES 2013. For the most part, the companies have complementary lineups. There are two areas of overlap: external hard drives, and entry-level business NAS systems / network attached hard disks. In the former space, LaCie differentiates through the aesthetics of the enclosure itself. In the latter space, the differentiation is almost non-existent; in particular, both the LaCie 2big NAS and the Seagate BlackArmor NAS 220 serve the same market segment and have similar performance. It will be interesting to observe how LaCie and Seagate consolidate their budget business NAS offerings.

Seagate Wireless Plus:

The most important announcement from Seagate was the Wireless Plus portable hard drive, a follow-up to the Seagate GoFlex Satellite that we reviewed in late 2011. Seagate claims to have increased battery life by better managing drive up-time depending on the content being streamed; the included battery is good for up to 10 hours of video playback, according to Seagate.

iOS and Android apps are available to interface with the wireless drive and access its content. In our hands-on testing, we found the Android app to perform considerably worse than the iOS app with respect to speed and ease of use. The STCK1000100 1 TB version is available for pre-order at a price point of $200. LaCie doesn't have any similar product in its line-up.

Seagate Central:

The Seagate Central is a network attached hard disk with a very pleasing industrial design. The unit is based on a Cavium chipset (ARM-based) and has a single GbE port as well as a USB port in a recessed nook. We voiced our concerns about the placement of the USB port (too close to the network jack, and also lacking clearance against the recessed wall). In terms of products in the same category, Seagate is pitting this against the Western Digital MyBook Live and Iomega's single-bay network attached hard disk. The plus points of the Seagate Central include a Samsung SmartTV app to access the content, as well as Android and iOS apps that replicate the functionality seen in the Wireless Plus's apps. The issues we pointed out with the Wireless Plus's Android app remain in the Seagate Central too.

The product will ship in March with an MSRP of $190, $220 and $260 for the 2TB, 3TB and 4TB versions respectively. LaCie has network attached hard disks in the LaCie CloudBox and the LaCie d2 Network 2, though they aim at a different segment of the network attached hard disk market.

LaCie 5big NAS Pro:

This is a 5-bay NAS based on the Intel Atom D2700 platform, meant as a performance offering in the SMB NAS market. We were able to present some initial thoughts on a beta unit just prior to CES; head over to the first part of our review for more information about the 5big NAS Pro.

LaCie 5big Thunderbolt:

Unless hard disks are placed in a RAID configuration, they are unable to saturate Thunderbolt links. LaCie introduced the 5big Thunderbolt, which can deliver up to 785 MBps of throughput. There are two Thunderbolt ports for daisy chaining support.
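The arithmetic behind that claim is straightforward: striping across five mechanical drives aggregates their sequential throughput. A rough model (the per-drive figure below is our assumption for a fast 7200 RPM disk, not a LaCie number):

```python
def raid0_throughput(per_drive_mbps: float, drives: int) -> float:
    """Ideal RAID 0 sequential throughput: all drives stream in parallel."""
    return per_drive_mbps * drives

single = 160.0                        # assumed sequential speed of one HDD, MBps
array = raid0_throughput(single, 5)   # five drives striped
print(array)                          # 800.0 -- near LaCie's 785 MBps claim

# A lone drive at ~160 MBps uses only a fraction of first-generation
# Thunderbolt's ~1000 MBps of usable bandwidth per channel, which is
# why single disks cannot saturate the link.
```

Real-world numbers land below the ideal figure due to striping overhead and the slower inner tracks of each disk, which is consistent with LaCie quoting 785 MBps rather than a round 800.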

The pricing of the diskless version starts at $1200.

LaCie Blade Runner:

We have had Neil Poulton-designed external HDDs from LaCie before, and now they have introduced the Blade Runner, designed by Philippe Starck. The design of the enclosure is hard to describe, so we will let the gallery below do the talking.

The 4 TB Blade Runner has a USB 3.0 interface. It is a limited-edition run of 10,000 units with an MSRP of $300.

Seagate also briefed us under NDA on some exciting announcements scheduled for the next two quarters, along with some demonstrations. Stay tuned for more Seagate / LaCie coverage in the near future.
