Divers

  • Xiaomi Unveils Mi Notebook Air, from $525 (AnandTech)

    Today Xiaomi has introduced its first pair of notebooks, unveiling the Mi Notebook Air family. As the name suggests, these are aimed squarely at Apple's Air line of notebooks, albeit at a very different price point. The laptops feature 12.5” and 13.3” full HD displays and are based on Intel’s Core M as well as Core i5 microprocessors. The price of the ultra-thin all-metal notebooks starts from 3499 CNY ($525, although Xiaomi usually quotes prices including China tax, so $446 perhaps), which could make them very competitive in various markets. As with most Xiaomi products, they will be available in China first.
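    That tax arithmetic can be sketched in a few lines (a back-of-envelope check; the ~6.7 CNY/USD mid-2016 exchange rate and China's 17% VAT are assumptions on my part, not figures from the article):

```python
# Back-of-envelope price conversion (assumptions: ~6.7 CNY per USD,
# mid-2016, and China's then-standard 17% VAT)
CNY_PER_USD = 6.7
VAT = 0.17

def usd(cny, strip_vat=False):
    """Convert a CNY sticker price to USD, optionally backing out VAT."""
    if strip_vat:
        cny /= 1 + VAT
    return round(cny / CNY_PER_USD)

print(usd(3499))                  # ~522 USD with tax included
print(usd(3499, strip_vat=True))  # ~446 USD pre-tax, matching the quote above
```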

    Xiaomi, which is known primarily for its smartphone business and superstar VP, Hugo Barra, positions its laptops as integrated parts of its Mi Ecosystem (which includes smartphones, an Android TV STB, tablets, power banks, headphones, a wrist band and even an air purifier). Their design resembles that of other devices from Xiaomi and uses all-metal silver and gold enclosures. Both notebooks are made by Tian Mi, a partner of Xiaomi, and will run Microsoft Windows 10 Home.

    Xiaomi's Mi Notebook Air Family
                        Mi Notebook Air 12.5"          Mi Notebook Air 13.3"
    CPU                 Intel Core m3-6Y30             Intel Core i5-6200U
                        (7W cTDP Up)                   (15W TDP)
    CPU Base/Turbo      1.1 GHz / 2.2 GHz              2.3 GHz / 2.8 GHz
    iGPU                Intel HD Graphics 515          Intel HD Graphics 520
                        (GT2, 24 EUs, Gen 9)           (GT2, 24 EUs, Gen 9)
    iGPU Base/Turbo     300 MHz / 850 MHz              300 MHz / 1050 MHz
    dGPU                -                              NVIDIA GeForce 940MX
    DRAM                4 GB LPDDR3-1866               8 GB DDR4-2133
    SSD                 128 GB SATA (500 MB/s)         256 GB PCIe 3.0 x4 (1500 MB/s)
    Display             12.5" Full HD                  13.3" Full HD
    Ports               1 x USB 3.1 (Gen 1) Type-C     1 x USB 3.1 (Gen 1) Type-C
                        1 x USB 3.0 Type-A             2 x USB 3.0 Type-A
                        HDMI                           HDMI
                        3.5mm TRRS jack                3.5mm TRRS jack
    Network             2x2 802.11ac with BT 4.1       2x2 802.11ac with BT 4.1
    Battery             37 Wh                          40 Wh
    Dimensions          292 x 202 x 12.9 mm            309.6 x 210.9 x 14.8 mm
    Weight              2.35 lbs (1.07 kg)             2.82 lbs (1.28 kg)
    Colors              Gold, Silver                   Gold, Silver
    Price               3499 CNY ($525)                4999 CNY ($750)

    The entry-level laptop from Xiaomi is the Mi Notebook Air 12.5”, which is powered by the dual-core Intel Core m3-6Y30, featuring the Skylake microarchitecture as well as the ninth generation of Intel's integrated graphics (Gen 9, HD Graphics 515 with 24 EUs). The CPU is rated at a 1.1/2.2 GHz core frequency (base/turbo), 4 MB of last level cache, and a 7W thermal design power (normally this CPU is rated at 4.5W, but the 1.1 GHz in the spec sheet implies that it is running in its 7W cTDP Up mode, which isn't a surprise given the size of the device). The laptop comes with 4 GB of LPDDR3-1866 RAM, a 128 GB SATA SSD, 802.11ac 2x2 Wi-Fi, Bluetooth, a 1 MP webcam, two microphones, custom AKG speakers and so on. It's not stated at this point whether the design uses dual-channel memory, and it would be interesting to find out. The system sports one USB Type-C port for charging and display output, one USB 3.0 Type-A port as well as one HDMI connector.

    The Mi Notebook Air 12.5” features a display panel with a 1920x1080 resolution, 170° wide viewing angle, 300 nit brightness as well as a 600:1 contrast ratio. Despite the lower memory capacity, the main advantages of the Mi Notebook Air 12.5” over its bigger brother are its 11.5 hours of rated battery life and low weight of 1.07 kg. From many points of view, the 12.5” laptop from Xiaomi attempts to combine the key advantages of Apple’s MacBook and MacBook Air (at least, from a hardware perspective). It comes with an Intel Core M, a common-resolution screen, long battery life as well as a thin-and-light form factor (like the MacBook). However, the system starts at 3499 yuan ($525), which means that it is more affordable than Apple’s MacBook Air.

    Next up is the more powerful Xiaomi Mi Notebook Air 13.3”, which is based on the dual-core Intel Core i5-6200U (2.3/2.8 GHz, 3 MB LLC, 15 W TDP, Intel HD Graphics 520, etc.) and is equipped with NVIDIA’s GeForce 940MX discrete GPU featuring a 1 GB GDDR5 memory buffer. Xiaomi says that equipping its 13.3” laptop with a standalone graphics processor enables higher performance in games when compared with the iGPU. The notebook sports 8 GB of DDR4-2133 memory, a 256 GB NVMe SSD with a PCIe 3.0 x4 interface (with up to 1500 MB/s read speed, which suggests the PCH is running in a low-power mode with reduced PCIe clock rates), dual-band 802.11ac 2x2 Wi-Fi, Bluetooth, a 1 MP webcam, two microphones, custom AKG speakers and so on. The laptop uses USB-C for charging and display output, two USB-A 3.0 ports and one HDMI connector.

    The larger laptop from Xiaomi features a better display panel than the smaller model. Despite the similar resolution, viewing angles and brightness, the 13.3” notebook has a rated 800:1 contrast ratio as well as a 72% NTSC color gamut (vs 50% on the 12.5” model). However, the bigger and improved screen comes at a price: the Xiaomi Mi Notebook Air 13.3” is 14.8 mm thick and it weighs 1.28 kilograms. The laptop is equipped with a 40 Wh battery (compared to 37 Wh on the smaller model), which gives it up to 9.5 hours of rated battery life. The faster CPU, discrete GPU, faster RAM, speedier SSD and better display affect the pricing of Xiaomi’s 13.3” notebook: the model costs 4999 yuan, or $750 (or $640, if that original CNY price includes China tax).

    Xiaomi will only sell its initial family of laptops in China at this time, similar to its smartphone strategy. 

  • Ten Year Anniversary of Core 2 Duo and Conroe: Moore’s Law is Dead, Long Live Moore’s Law (AnandTech)

    Today marks a full 10 years since the first Core 2 processors, and hence Intel’s 64-bit Core microarchitecture, were made officially available. These included a number of popular dual-core parts, among them the seemingly ubiquitous E6400 and the Core 2 Extreme X6800. They were built on Intel’s 65nm process and marked a turning point in the desktop processor ecosystem. To quote Anand in our launch review: ‘you’re looking at the most impressive piece of silicon the world has ever seen’. Today’s anniversary is somewhat bittersweet, as this week saw the official launch of the ‘final’ biannual International Technology Roadmap for Semiconductors report, which predicts the stalling of smaller silicon manufacturing nodes and the progression of new design paradigms to tackle the next 10-15 years of semiconductor innovation.

  • Project Tango Demoed with Qualcomm at SIGGRAPH 2016 (AnandTech)

    Project Tango at this point is probably not new to anyone reading this as we’ve discussed it before, but in the past few years Google has been hard at work making positional tracking and localization into a consumer-ready application. While there was an early tablet available with an Nvidia Tegra SoC inside, there were a number of issues on both hardware and software. As the Tegra SoC was not really designed for workloads that Project Tango puts on a mobile device, much of the work was done on the GPU and CPU, with offloading to dedicated coprocessors like ST-M’s Cortex M3 MCUs for sensor hub and timestamp functionality, computer vision accelerators like a VPU from Movidius, and other chips that ultimately increased BOM and board area requirements.

    At SIGGRAPH today Google recapped some of the progress that we saw at Google I/O as far as algorithms go, really polishing the sensor fusion, feature tracking, modeling, texturing, and motion tracking aspects of Tango. Anyone who has researched how well smartphones can act as inertial navigation devices will probably know that it’s basically impossible to avoid massive integration error, which forces the device to take constant location updates from an outside source to avoid drifting.

    With Tango, the strategy taken to avoid this problem works at multiple levels. At a high level, sensor fusion is used to combine both camera data and inertial data to cancel out noise from both systems. If you traverse the camera tree, the combination of feature tracking on the cameras as well as depth sensing on the depth sensing camera helps with visualizing the environment for both mapping and augmented reality applications. The combination of a traditional camera and a fisheye camera also allows for a sort of distortion correction and additional sanity checks for depth by using parallax, although if you’ve ever tried dual lens solutions on a phone you can probably guess that this distance figure isn’t accurate enough to rely completely on. These are hard engineering problems, so it hasn’t been until recently that we’ve actually seen programs that can do all of these things reliably. Google disclosed that without using local anchor points in memory that the system drifts at a rate of about 1 meter every 100 meters traversed, so if you never return to previously mapped areas the device will eventually have a non-trivial amount of error. However, if you return to previously mapped areas the algorithms used in Tango will be able to reset its location tracking and eliminate accumulated error.
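    The drift-and-relocalize behavior described above can be illustrated with a toy sketch (purely illustrative, not Google's algorithm; the only figure taken from the article is the roughly 1 m of drift per 100 m traversed):

```python
DRIFT_RATE = 0.01  # ~1 m of error per 100 m traversed, per Google's figure

def simulate(segment_lengths, anchor_segments):
    """Accumulate dead-reckoning error (metres) over path segments.

    segment_lengths: metres traversed in each segment
    anchor_segments: indices of segments ending at a previously mapped
                     area, where relocalization zeroes accumulated error
    """
    error, history = 0.0, []
    for i, dist in enumerate(segment_lengths):
        error += dist * DRIFT_RATE   # error grows with distance traveled
        if i in anchor_segments:     # loop closure: reset accumulated error
            error = 0.0
        history.append(error)
    return history

# Four 100 m segments; the last one returns to a known anchor
print(simulate([100, 100, 100, 100], anchor_segments={3}))    # [1.0, 2.0, 3.0, 0.0]
# Never revisiting mapped areas lets the error grow without bound
print(simulate([100, 100, 100, 100], anchor_segments=set()))  # [1.0, 2.0, 3.0, 4.0]
```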

    With the Lenovo Phab 2 Pro, Tango is finally coming to fruition in a consumer-facing way. Google has integrated Tango APIs into Android for the Nougat release this fall. Of course, while software is one part of the equation, it’s going to be very difficult to justify supporting Tango capabilities if it needs all of the previously mentioned coprocessors in addition to the depth sensing camera and fisheye camera sensors.

    In order to enable Tango in a way that doesn’t require cutting into battery size or general power efficiency, Qualcomm has been working with Google to make the Tango API run on the Snapdragon SoC in its entirety rather than on dedicated coprocessors. While Snapdragon SoCs generally have a global synchronous clock, Tango pushes its use to the full extent, applying this clock to multiple sensors to enable the previously mentioned sensor fusion. In addition to this, processing is done on the Snapdragon 652 or 820’s ISP and Hexagon DSP, as well as the integrated sensor hub with its low power island. The end result is that enabling the Tango APIs requires no processing on the GPU and relatively minimal processing on the CPU, such that Tango-enabled applications can run without hitting thermal limits, allowing for more advanced applications using the Tango APIs. Qualcomm claimed that less than 10% of CPU cycles on the S652 and S820 are used, and less than 35% of cycles on the DSP are needed as well. Qualcomm noted in further discussion that the use of Hexagon Vector Extensions would further cut down on CPU usage, and that much of the current CPU usage was on the NEON vector units.

    To see how all of this translates, Qualcomm showed off the Lenovo Phab 2 Pro with some preloaded demo apps, like a home improvement application from Lowe's which supports size measurements and live previews of appliances in the home with a fairly high level of detail. The quality of the augmented reality visualization is actually shockingly good, to the extent that the device can differentiate between walls and the floor so you can’t just stick random things in random places, and the placement of objects is static enough that there’s none of the strange floatiness that often seems to accompany augmented reality. Objects are redrawn fast enough that camera motion results in seamless and fluid motion of virtual objects, and in general I found it difficult to see any real issues in execution.

    While Project Tango still seems to have some bugs to iron out and some features or polish to add, the ecosystem has progressed to the point where Tango API features are basically ready for consumers. The environment tracking for true six degree of freedom movement surely has implications for mobile VR headsets as well, and given that only two extra cameras are needed to enable Tango API features, it shouldn’t be that difficult for high-end devices to integrate them, although due to the size of these sensors this may be more targeted towards phablets than regular smartphones.

  • Apple Announces Its Results (MacBidouille)

    Apple has released its results for the just-ended quarter. As expected, they are down sharply.
    The company recorded revenue of $42.358 billion, down roughly 15% from a year ago. Gross margin was $16.106 billion versus $19.681 billion last year, and operating profit was $10.1 billion versus $14.1 billion.
    It sold 40.4 million iPhones, versus 47.5 million in the same period of 2015 and 51.2 million the previous quarter.
    iPad sales continued to decline, down 9% to 10 million units, and Mac sales fell 11% to 4.25 million units.

    Only revenue from ancillary services (App Store, iTunes) is up, by 19%.

    The company again expects the next quarter to be down.

    Tim Cook told investors that he is optimistic about iPhone 7 sales. Asked about augmented reality (prompted by Pokémon Go), he said the company is investing heavily in the sector.

  • Apple Announces Q3 FY 2016 Results: App Store Up, Hardware Down (AnandTech)

    Today Apple announced their third quarter results for their fiscal year 2016. Much like last quarter, Apple has struggled to maintain the sales pace of the iPhone 6s, compared to the iPhone 6. For the quarter, Apple had revenues of $42.358 billion, which is down 14.6% from a year ago. Gross margin was $16.106 billion, down from $19.681 billion in Q3 2015, or 38.0% of revenue. Operating income was $10.1 billion, down from $14.1 billion last year, and net income was down almost $3 billion to $7.8 billion. Diluted earnings per share were $1.42, down from $1.85 a year ago. Despite the lower quarter, Apple did beat expectations, which has helped their share price in after-hours trading.

    Apple Q3 2016 Financial Results (GAAP)
                                         Q3'2016    Q2'2016    Q3'2015
    Revenue (billions USD)               $42.358    $50.557    $49.605
    Gross Margin (billions USD)          $16.106    $19.921    $19.681
    Operating Income (billions USD)      $10.105    $13.987    $14.473
    Net Income (billions USD)            $7.796     $10.516    $10.677
    Gross Margin (% of revenue)          38.0%      39.4%      39.7%
    Earnings per Share (USD)             $1.42      $1.90      $1.85
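    The percentage rows can be re-derived from the raw GAAP figures in the table, as a quick sanity check on the numbers:

```python
# Revenue and gross margin in billions of USD, from the table above
results = {
    "Q3'2016": (42.358, 16.106),
    "Q2'2016": (50.557, 19.921),
    "Q3'2015": (49.605, 19.681),
}

for quarter, (revenue, gross_margin) in results.items():
    # Gross margin as a percentage of revenue
    print(f"{quarter}: {100 * gross_margin / revenue:.1f}% gross margin")

# Year-over-year revenue change for the quarter
change = 100 * (results["Q3'2016"][0] / results["Q3'2015"][0] - 1)
print(f"Revenue change Y/Y: {change:.1f}%")  # -14.6%
```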

    Apple announced a dividend of $0.57 per share payable on August 11th to shareholders of record as of August 8th. They also returned over $13 billion during Q3 through share buy-backs and dividends, and they have completed almost $177 billion of their $250 billion capital return program.

    iPhone sales are far and away the largest part of the company, and this quarter Apple sold 40.4 million handsets. That is down from the 51.2 million last quarter, and 47.5 million in Q3 2015, meaning iPhone sales were down 15% year-over-year. This resulted in revenue of $24 billion, down 23% from a year ago. It’s certainly a noticeable drop, and it shows just how successful the iPhone 6 was when it launched.

    Moving on, iPad sales continued their slow and steady decline. Sales of the tablet were just a hair under ten million for the quarter, which is a drop of 9% year-over-year. Revenue was $4.9 billion, which is up 7%. A year ago, the average selling price of the iPad was $415, but this quarter, the average selling price rose $75 to $490. Declining sales of the iPad Mini, as well as new sales of the higher priced iPad Pro, are certainly factors, but Apple doesn’t break out the numbers for individual models, so we can't know just how much each contributed.
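    The ASP figures fall straight out of the unit and revenue numbers reported for the two quarters:

```python
# iPad units (millions) and revenue (billions of USD), from the article
units = {"Q3'2016": 9.950, "Q3'2015": 10.931}
revenue = {"Q3'2016": 4.876, "Q3'2015": 4.538}

# ASP = revenue / units; the factor of 1000 converts billions-per-million
# into dollars per unit
asp = {q: 1000 * revenue[q] / units[q] for q in units}
for q in asp:
    print(f"{q}: ASP ${asp[q]:.0f}")   # ~$490 vs ~$415

increase = asp["Q3'2016"] - asp["Q3'2015"]
print(f"Y/Y ASP increase: ${increase:.0f}")  # ~$75
```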

    The Mac didn’t fare very well either, with unit sales of 4.25 million, down 11% year-over-year. This resulted in revenue of $5.24 billion, down 13%. With basically no Mac refreshes in a long time, the Mac is no longer outperforming the PC market as a whole, as it had for quite a while.

    Apple’s “Other Products” includes Apple TV, Apple Watch, Beats, iPods, and accessories, and while none of this is broken down by sub-category, the Other Products as a whole also fell 16% in revenue compared to Q3 2015, with revenues for this quarter of $2.22 billion.

    Apple Q3 2016 Device Sales (thousands)
                Q3'2016   Q2'2016   Q3'2015   Seq Change   Year/Year Change
    iPhone      40,399    51,193    47,534    -21%         -15%
    iPad        9,950     10,251    10,931    -3%          -9%
    Mac         4,252     4,034     4,796     +5%          -11%

    The one segment in which Apple had strong growth was their Services segment. Services revenue grew 19% over Q3 2015 to $5.976 billion, an increase of almost a billion dollars. Q2 2016 revenue was pretty much the same at $5.991 billion, meaning services have once again outpaced both Mac and iPad sales, and now represent the second largest segment at Apple.

    Apple Q3 2016 Revenue by Product (billions)
                                  Q3'2016   Q2'2016   Q3'2015   Share of Q3'2016 revenue
    iPhone                        $24.048   $32.857   $31.368   56.8%
    iPad                          $4.876    $4.413    $4.538    11.5%
    Mac                           $5.239    $5.107    $6.030    12.4%
    iTunes/Software/Services      $5.976    $5.991    $5.028    14.1%
    Other Products                $2.219    $2.189    $2.641    5.2%

    Overall, it’s the second consecutive quarter of declining revenue, and last quarter was the first time that had happened since Q1 2003, so Apple is in somewhat unfamiliar territory. Their guidance for next quarter is $45.5 to $47.5 billion in revenue, with margins between 37.5% and 38%. That guidance also implies a year-over-year decline, since Q4 2015 had the company coming in at $51.5 billion and 39.9% margins. It will be interesting to see if hardware refreshes in the fall can stop the drop in sales.

    Source: Apple Investor Relations

  • Updated: AMD Announces Radeon Pro SSG: Fiji With M.2 SSDs On-Board (AnandTech)

    As part of this evening’s AMD Capsaicin event (more on that later), AMD’s Chief Architect and SVP of the Radeon Technologies Group has announced a new Radeon Pro card unlike anything else. Dubbed the Radeon Pro Solid State Graphics (SSG), this card includes M.2 slots for adding NAND SSDs, with the goal of vastly increasing the amount of local storage available to the video card.

    Details are a bit thin and I’ll update this later this evening, but in short the card utilizes a Fiji GPU and includes 2 PCIe 3.0 M.2 slots for adding flash drives to the card. These slots are then attached to the GPU (it’s unclear if there’s a PCIe switch involved or if it’s wired directly), which the GPU can then use as an additional tier of storage. I’m told that the card can fit at least 1TB of NAND – likely limited by M.2 MLC SSD capacities – which massively increases the amount of local storage available on the card.

    As AMD explains it, the purpose of going this route is to offer another solution to the working set size limitations of current professional graphics cards. Even AMD’s largest card currently tops out at 32GB, and while this is a fair amount, there are workloads that can use more. This is particularly the case for workloads with massive datasets (oil & gas), or, as AMD demonstrated, scrubbing through an 8K video file.

    Current cards can spill over to system memory, and while the PCIe bus is fast, it’s still much slower than local memory, plus it is subject to the latency of the relatively long trip and of waiting on the CPU to address requests. Local NAND storage, by comparison, offers much faster round trips, though on paper the bandwidth isn’t as good, so I’m curious to see how it compares on real-world datasets that spill over to system memory. Meanwhile actual memory management/usage/tiering is handled by a combination of the drivers and developer software, so developers will need to code specifically for it as things stand.

    For the moment, AMD is treating the Radeon Pro SSG as a beta product, and will be selling developer kits for it directly, with full availability set for 2017. For now developers need to apply for a kit from AMD, and I’m told the first kits are available immediately. Interested developers will need to have saved up their pennies though: a dev kit will set you back $9,999.

    Update:

    Now that AMD’s presentation is over, we have a bit more information on the Radeon Pro SSG and how it works.

    In terms of hardware, the Fiji based card is outfitted with a PCIe bridge chip – the same PEX8747 bridge chip used on the Radeon Pro Duo, I’m told – with the bridge connecting the two PCIe x4 M.2 slots to the GPU and allowing the GPU and SSDs to share the PCIe system connection. Architecturally the prototype card is essentially a PCIe SSD adapter and a video card on a single board, with no special connectivity in use beyond what the PCIe bridge chip provides.

    The SSDs themselves are a pair of 512GB Samsung 950 Pros, which are about the fastest thing available on the market today. These SSDs are operating in RAID-0 (striped) mode to provide the maximum amount of bandwidth. Meanwhile it turns out that due to how the card is configured, the OS actually sees the SSD RAID-0 array as well, at least for the prototype design.

    To use the SSDs, applications need to be programmed using AMD’s APIs to recognize the existence of the local storage and that it is “special,” being on the same board as the GPU itself. Ultimately the trick for application developers is directly streaming resources from the SSDs, treating them as a level of cache between the DRAM and system storage. The use of NAND in this manner does not fit into the traditional memory hierarchy very well, as while the SSDs are fast, on paper accessing system memory is faster still. But it should be faster than accessing system storage, even if it’s PCIe SSD storage elsewhere in the system. Similarly, don’t expect to see frame buffers spilling over to NAND any time soon. This is about getting large, mostly static resources closer to the GPU for more efficient resource streaming.

    To showcase the potential benefits of this solution, AMD had an 8K video scrubbing demonstration going, comparing performance between using a source file on the SSG’s local SSDs, and using a source file on the system SSD (also a 950 Pro).

    The performance differential was actually more than I expected; reading a file from the SSG SSD array was over 4GB/sec, while reading that same file from the system SSD was only averaging under 900MB/sec, which is lower than what we know 950 Pro can do in sequential reads. After putting some thought into it, I think AMD has hit upon the fact that most M.2 slots on motherboards are routed through the system chipset rather than being directly attached to the CPU. This not only adds another hop of latency, but it means crossing the relatively narrow DMI 3.0 (~PCIe 3.0 x4) link that is shared with everything else attached to the chipset.
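    The bandwidth math behind that explanation works out as follows (the 128b/130b encoding factor is the standard PCIe 3.0 figure, and the ~2.5 GB/s per-drive number is the 950 Pro's rated sequential read; neither is stated in the article):

```python
def pcie3_gbytes_per_s(lanes):
    """Usable bandwidth of a PCIe 3.0 link in GB/s:
    8 GT/s per lane, 128b/130b encoding, 8 bits per byte."""
    return 8.0 * (128 / 130) * lanes / 8

# DMI 3.0 is electrically similar to a PCIe 3.0 x4 link, and it is
# shared by everything hanging off the chipset
dmi_ceiling = pcie3_gbytes_per_s(4)
raid0_target = 2 * 2.5  # two 950 Pros striped, ~2.5 GB/s each (spec)

print(f"DMI 3.0 ceiling:   {dmi_ceiling:.2f} GB/s (shared)")
print(f"SSG RAID-0 target: {raid0_target:.1f} GB/s")
```

    The striped array's target comfortably exceeds what a chipset-attached M.2 slot can deliver, which is consistent with the sub-900 MB/s system-SSD result above.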

    Though by and large this is all at the proof of concept stage. The prototype, though impressive in some ways in its own right, is really just a means to get developers thinking about the idea and writing their applications to be aware of the local storage. And this includes not just what content to put on the SSG's SSDs, but also how to best exploit the non-volatile nature of its storage, and how to avoid unnecessary thrashing of the SSDs and burning valuable program/erase cycles. The SSG serves an interesting niche, albeit a limited one: scenarios where you have a large dataset and you are somewhat sensitive to latency and want to stay off of the PCIe bus, but don't need more than 4-5GB/sec of read bandwidth. So it'll be worth keeping an eye on this to see what developers can do with it.

    In any case, while AMD is selling dev kits now, expect some significant changes by the time we see the retail hardware in 2017. Given the timeframe I expect we’ll be looking at much more powerful Vega cards, where the overall GPU performance will be much greater, and the difference in performance between memory/storage tiers is even more pronounced.

  • HP Updates The Z240 Workstation With The Core i7-6700K (AnandTech)

    HP has an interesting announcement today - they are refreshing their existing Z240 workstation, which is targeted towards small and medium-sized businesses, with a non-Xeon Core i7 based processor. It was already available with Skylake based Xeon CPUs, up to the Intel Xeon E3-1280 v5. That’s a 3.7-4.0 GHz Xeon with 4 cores, 8 MB of cache, and an 80-Watt Thermal Design Power (TDP). That’s certainly an excellent choice for a lot of workloads that workstations are tasked with, and with support for ECC memory, reliability under load is also a key factor. But HP has been talking to their customers and found that many of them have been choosing to forgo the error checking capabilities of ECC and have been building or buying equivalent gaming-focused machines in order to get more performance for the money. Specifically, they have been building desktops with the Core i7-6700K, which is an unlocked 4.0-4.2 GHz quad-core design with a 91-Watt TDP that, in pure frequency, can offer up to 13% more performance than the fastest Skylake Xeon.

    So armed with this data, HP has refreshed the Z240 line today, with the usual Skylake Xeons in tow but also an option for the Core i7-6700K. This desktop sized workstation supports up to 64 GB of DDR4-2133, with ECC available on the Xeon processors only. It’s a pretty interesting move, but it makes a lot of sense: most customers would rather purchase a workstation from a company like HP so that they get the testing and support offerings found with workstation class machines, and if some of them have had to resort to building their own in order to get the best CPU performance, HP is wise to offer this option.

    Despite the higher TDP, HP has created fan profiles which they say will allow full turbo performance with no thermal throttling, while at the same time not exceeding their acoustic threshold which I was told was a mere 31 dB. Although they have offered closed loop liquid cooling on their workstations in the past, the Z240 achieves this thermal performance with more traditional air cooling.

    (Edit from Ian: It has not been stated if HP will implement a variation of MultiCore Turbo/Acceleration at this time, but given the limited BIOS options of the major OEMs in recent decades, this has probably been overlooked. Frankly, I would be surprised if the BIOS engineers had even heard of mainstream motherboard manufacturers implementing the feature, though I will happily be proved wrong.)

    The Z240 is currently offered with a wide range of professional graphics, if required, including the NVIDIA NVS 310, 315, and 510, and Quadro up to the M4000. With yesterday’s announcement of the Pascal Quadro, and today's announcement of the new Radeon Pro WX, they are likely to offer these soon. If a user requires AMD professional graphics, HP will offer the FirePro W2100, W5100, W4300, and the W7100 with 8 GB of memory.

    A simple device refresh mid-cycle is far from unexpected, but it is pretty interesting that by talking to their customers HP has found that many of them would prefer higher single threaded performance with a Core i7-6700K, rather than the Xeon ecosystem with a focus on stability and ECC. It will be interesting to see if Intel reacts to this, since the Xeon is a nice high margin product.

    As a small comparison, the highest clocked Xeon E3 v5 is the E3-1280 v5 at 3.7-4.0 GHz, and has a recommended customer price of $612 on Intel Ark. The one underneath is the E3-1275 v5 at 3.6-4.0 GHz, but is a more palatable $350. This latter part compares in price to the Core i7-6700K, which is at $350 list price also, however the i7-6700K has the margin on frequency at 4.0-4.2 GHz. Comparing the two Xeons to the Core i7, HP can offer a bit more performance with the trade-off of no ECC support, and in the case of the peak Xeon, save some money as well. For those that need the top raw CPU performance available especially for single-threaded workloads, short of overclocking, this is the way to go.
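    The comparison above reduces to a little arithmetic; my reading is that the "up to 13%" frequency claim earlier in the piece comes from pitting the i7's 4.2 GHz turbo against the Xeon's 3.7 GHz base clock:

```python
# Clocks (GHz) and list prices (USD) from the paragraph above
parts = {
    "Xeon E3-1280 v5": {"base": 3.7, "turbo": 4.0, "price": 612},
    "Xeon E3-1275 v5": {"base": 3.6, "turbo": 4.0, "price": 350},
    "Core i7-6700K":   {"base": 4.0, "turbo": 4.2, "price": 350},
}

i7, xeon = parts["Core i7-6700K"], parts["Xeon E3-1280 v5"]
print(f"Base vs base:  {100 * (i7['base'] / xeon['base'] - 1):.1f}%")   # 8.1%
print(f"Turbo vs base: {100 * (i7['turbo'] / xeon['base'] - 1):.1f}%")  # 13.5%
print(f"Savings vs E3-1280 v5: ${xeon['price'] - i7['price']}")         # $262
```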

    Source: HP

  • MSI Shows New Radeon RX 480 Gaming Cards, with an 8-pin (AnandTech)

    Today MSI is announcing the latest entry in the Gaming X GPU line with the Radeon RX 480 Gaming X 8G/4G cards as well as non-X variants. The main difference between the non-X and X cards is in the core and memory frequencies, with the X card having the higher performance. In return, there will be a small price difference between the two variants.

    MSI Radeon RX 480 Gaming Specification Comparison
                              GAMING X 8G / 4G        GAMING 8G / 4G
    Core Clock (Silent)       1266 MHz                ?
    Core Clock (Gaming)       1303 MHz                1279 MHz
    Core Clock (OC Mode)      1316 MHz                1292 MHz
    Memory Clock (Reg/OC)     8.0/8.1 Gbps GDDR5      8.0 Gbps GDDR5
    VRAM                      8 GB / 4 GB             8 GB / 4 GB
    Launch Date               TBD                     Mid August 2016
    Launch Price              Unknown                 Unknown

    Starting with appearance, all four models shown today feature an angular, aggressive, red and black design for the cooler, accented by glowing red highlights. On the side of the card is an MSI logo lit by customizable RGB lighting, adjusted through the MSI Gaming software bundled with the card. Around back there is a full cover back plate on the Gaming X cards, and moving back around to the cooler we have two large fans over a full-length cooler and PCB. Running through the heatsink are three heat pipes, each 8mm thick. These heat pipes are squared off at the bottom and mated to a nickel-plated copper baseplate, aiming to increase contact with the GPU core and hence heat transfer. This cooler is toned down from the one used on the highest end cards, but should still make for a very capable cooling solution.

    Moving from form over to function, MSI’s RX 480 Gaming X cards are built from what MSI calls “Military Class 4” components, which is marketing speak for choosing quality components to assemble the card. For power, the cards have a single 8-pin connector, and for output we have two HDMI ports, two DisplayPort connectors, and one DVI-D connection. This appears to be a very popular arrangement this generation, allowing one HDMI port for a monitor and another for a VR headset. MSI also uses their TORX Fan 2.0 design, which they say generates 22% more air pressure, and like other Gaming and Gaming X cards the fans will shut off at temperatures below 60C. If true, these fans coupled with the Twin Frozr VI heatsink could do an admirable job of quietly handling any heat an RX 480 can muster. For performance numbers on the Gaming X, we have a moderate clock speed gain over the reference card in OC mode, while the memory is bumped up to 8.1 Gbps in OC mode.
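    The memory overclock translates into bandwidth as follows (the 256-bit bus width is the reference RX 480 figure, which MSI's table does not restate):

```python
def mem_bandwidth_gbs(data_rate_gbps, bus_width_bits=256):
    """GDDR5 bandwidth: per-pin data rate x bus width / 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

print(mem_bandwidth_gbs(8.0))  # 256.0 GB/s at the stock 8.0 Gbps
print(mem_bandwidth_gbs(8.1))  # 259.2 GB/s in OC mode
```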

    With no word on pricing, both 8GB and 4GB versions of the MSI Radeon RX 480 Gaming X are expected to be in stores worldwide around the middle of August 2016.


  • AMD Announces Radeon Pro WX Series: WX 4100, WX 5100, & WX 7100 Bring Polaris to Pros (AnandTech)

    It’s been a while since we’ve last seen a new workstation graphics card from AMD. With their Fiji GPU not being a good fit for the market, the company hasn’t had a significant update of the lineup since 2014, when Tonga was introduced into the mix. However as part of their SIGGRAPH 2016 professional graphics event, AMD is giving their professional card lineup a proper update and then some.

    Announced Monday night, the company is introducing 3 new cards under their new Radeon Pro WX family: the WX 7100, WX 5100, and WX 4100. Powered by AMD’s new Polaris family of GPUs, AMD is looking to bring the architecture’s power efficiency and display controller improvements to their workstation users. As these cards are based on the Polaris 10 and Polaris 11 GPUs, like their consumer desktop counterparts, AMD is targeting the bread-and-butter workstation market with their latest wares, in this case meaning the sub-$1000 market.

    AMD Workstation Video Card Specification Comparison
      WX 7100 W7100 WX 5100 WX 4100
    Stream Processors 2304 1792 1792 1024
    Texture Units 128 112 112 64
    ROPs 32 32 32 16
    Boost Clock >1.08GHz 920MHz >1.2GHz >975MHz
    Memory Clock ? 5Gbps GDDR5 ? ?
    Memory Bus Width 256-bit 256-bit 256-bit 128-bit
    VRAM 8GB 8GB 8GB 4GB
    TDP 150W? 150W ? ?
    GPU Polaris 10 Tonga Polaris 10 Polaris 11
    Architecture Polaris GCN 1.2 Polaris Polaris
    Manufacturing Process GloFo 14nm TSMC 28nm GloFo 14nm GloFo 14nm
    Launch Date Q4 2016 08/2014 Q4 2016 Q4 2016
    Launch Price (MSRP) <$1000 N/A TBA TBA

    Branding aside (more on that later today), the Radeon Pro WX series is essentially a continuation of AMD’s existing FirePro W series lineup and the traditional workstation market it targets. To that end the new Radeon Pro WX cards are retaining the FirePro W series numbering system, indicating which card/tier they are a replacement of.

    At the top of the new Radeon Pro WX stack is the WX 7100. The successor to the Tonga based W7100, this is based on AMD’s leading Polaris 10 GPU. Relative to its predecessor then, it should offer a decent performance boost, combining a slightly larger number of SPs with higher clockspeeds. AMD has disclosed that the card will ship with 2304 SPs (36 CUs), which is a fully enabled Polaris 10 GPU, making this very similar to the consumer RX 480. Meanwhile specific clockspeeds have not been revealed, but given AMD’s 5 TFLOPs minimum, this puts the boost clock at no lower than 1.08GHz.
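
    The “>1.08GHz” floor can be derived directly from AMD’s stated 5 TFLOPS minimum. A quick sketch, assuming the usual 2 FLOPs (one FMA) per stream processor per clock:

```python
# Back-of-the-envelope check: a GCN stream processor executes one FMA
# (2 FLOPs) per clock, so FP32 throughput = SPs * 2 * clock.
def min_boost_clock_ghz(tflops_floor, stream_processors):
    """Lowest boost clock (GHz) that reaches the stated FP32 TFLOPS floor."""
    return tflops_floor * 1e12 / (stream_processors * 2) / 1e9

# WX 7100: 2304 SPs against AMD's stated 5 TFLOPS minimum
print(f"{min_boost_clock_ghz(5, 2304):.3f} GHz")  # 1.085 GHz
```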

    On the memory side the card will be shipping with 8GB of GDDR5 attached to a 256-bit bus. Clockspeeds have not been disclosed, but the consumer counterpart to this card, Radeon RX 480, used 8Gbps chips, so I’d expect at least 7Gbps for the workstation card. I am a bit surprised that AMD opted to go with just 8GB of memory here – Polaris 10 should be able to support 16GB – but given the price goal and the target market, it makes sense.

    On the TDP front I’m still waiting for AMD to post the full specifications of the card. But it’s a very safe bet it’s a 150W card given the GPU configuration and the fact that its predecessor hit the same power target. Speaking of which, like W7100 before it, this is a single slot, full profile card. AMD has once again given the card 4 DisplayPort outputs, this time capable of the newest DisplayPort 1.4 standard.

    WX 5100

    The second of the new WX trio is the WX 5100. Also based on the Polaris 10 GPU, this card trades some performance for a lower price and lower power consumption. It replaces the Bonaire based W5100, and comes with 1792 SPs (28 CUs) enabled and a clockspeed of at least 1.2GHz. Compared to its predecessor it should be massively faster, as AMD has more than doubled the number of SPs, not to mention the clockspeed boost.

    Attached to the GPU will be 8GB of GDDR5 memory over a 256-bit bus. Like the WX 7100, AMD has not disclosed memory frequencies here, though I’m going to be surprised if it’s as high as its bigger sibling’s, since this needs to be a cheaper and lower power card. On that note, TDPs are not available either; W5100 was a 75W card, but given the use of a mostly enabled Polaris 10, I’m not sure that’ll be the case here. WX 5100 is essentially a second tier below the WX 7100, something that did not exist in the previous FirePro generation.

    In terms of build we’re looking at a card that takes a cue from its predecessor, utilizing a single wide, full profile, but overall relatively short card design. AMD is aiming for continuity with the previous generation in their card designs, so WX 5100 should be a drop-in replacement in that respect.

    WX 4100

    Last but not least we have the WX 4100. This replaces the W4300 as the low performance member of the workstation card family. As you might expect from such a description, this is based on AMD’s forthcoming Polaris 11 GPU, which we haven’t seen in a shipping product yet, but which we’re told is aggressively power optimized. In terms of underlying hardware we’re looking at a fully enabled Polaris 11 GPU, with 1024 SPs (16 CUs), clocked at no less than 975MHz boost. Relative to its predecessor it should deliver a good performance boost, with 33% more SPs and a modest clockspeed bump.

    With regards to memory, we’re looking at 4GB of GDDR5 attached via a 128-bit bus. Memory clockspeeds have not been disclosed. For that matter neither has TDP, but given that this is a Polaris 11 card meant to replace the W4300, it’s a very safe bet that this is a sub-75W card.

    For design we’re looking at the only low profile member of the new WX family. The card utilizes a single wide cooler design and is outfitted with mini-DisplayPorts in order to get 4 of them on a single low profile card.

    Polaris Architecture & Card Availability

    While the immediate performance and power efficiency gains afforded by AMD’s Polaris architecture are going to be the biggest piece of news here, Polaris brings other new functionality to the table as well. For the professional graphics market and FirePro/Radeon Pro users, these should be some very welcome changes.

    When it comes to the display controller, Polaris represents a big step up from AMD’s prior generation architectures. DisplayPort 1.4 is now supported, which means that these cards can be used to drive 5K monitors via a single port, allowing up to 4 such monitors to be driven per card. Meanwhile this also brings full and formal support for HDR and its many requirements (e.g. HDR metadata), which should be a boon for media users, especially now that HDR monitors are hitting the market. And though it’s not directly exposed on the new WX cards since all of them use DisplayPort, HDMI 2.0b is also supported, which again should be useful for media users when they need to work specifically with HDMI displays/TVs via an adapter.

    Along those lines, Polaris also introduces AMD’s new video encode and decode block. This marks the first time that HEVC decode and encode have been available on a FirePro workstation card. This once again is another media-centric feature in the pro graphics workspace, as it allows for much better (and faster) support for HEVC content, including of course HDR content.

    Finally, getting back to AMD’s Radeon Pro reformation, among the other changes AMD has announced is that they have significantly extended the warranty period for these new Radeon Pro WX cards. Whereas the older FirePro cards had a 3 year warranty, these new cards come with a 10 year warranty. Looking at the fine print in AMD’s announcement, this is composed of a 3 year warranty plus a 7 year extended warranty. I suspect this means that support after 3 years is more limited (e.g. possibly only hardware support and critical security fixes), but we’ll see what AMD has to say. To put this in perspective, a 10 year warranty issued a decade ago would still cover AMD’s ancient DX9-era R500-based FireGL 7300 today. Raja Koduri quipped that he’s never heard of anyone using a workstation card for 10 years, and I don’t doubt he’s right.

    Wrapping things up, the new Radeon Pro WX series cards will be released in Q4 of this year. AMD has not announced pricing at this time beyond the fact that the entire lineup will be under $1000. Pricing will be released closer to launch, though as AMD themselves have noted, most of their sales are via preconfigured OEM workstations, so the bulk of their customers will never buy a card directly to begin with. In any case, AMD’s regular OEM partners such as HP have already announced their support, so we should be seeing WX-equipped workstations show up in Q4 as well.

  • Seagate Expands Nytro Enterprise SSD Family with 2TB M.2 XM1440 (AnandTech)

    As Flash Memory Summit 2016 approaches, many major players in the SSD market are starting to announce new products. A year after introducing the Nytro XM1440 enterprise M.2 PCIe SSD, Seagate is expanding the lineup with a 2TB option. The XM1440 M.2 and XF1440 2.5" U.2 SSDs are based on the combination of Marvell's 88SS1093 PCIe 3.0 NVMe controller and Micron MLC NAND. The products are a result of a collaboration between Micron and Seagate, and are sold by Micron as the 7100 series. The 2.5" version has had a 2TB-class capacity option from the start, but the new XM1440 2TB is the first of its kind. The higher drive capacity is achieved through denser NAND packaging rather than from switching to higher-capacity 3D NAND dies.

    The XM1440 and XF1440 are available in either a capacity-optimized configuration intended for read-intensive workloads and rated for 0.3 drive writes per day, or in an endurance-optimized configuration for mixed workloads and rated for 3 drive writes per day. The latter sacrifices some usable capacity for increased overprovisioning and higher random write speeds, but otherwise they are the same drive. The 2TB XM1440 M.2 will unsurprisingly be one of the capacity-optimized variants, with similar specifications to the 1920GB XF1440 2.5" U.2 SSD.

    Seagate Nytro XF1440 and XM1440
    Drive Endurance Optimized Capacity Optimized
    Usable capacity 400 GB, 800 GB, 1600 GB (XF1440 only) 480 GB, 960 GB, 1920 GB
    Interface PCIe 3.0 x4 2.5" U.2 (XF1440)
    PCIe 3.0 x4 M.2 22110 (XM1440)
    Sequential read up to 2500 MB/s
    Sequential write up to 900 MB/s
    Random read IOPS up to 240K
    Random write IOPS up to 40K up to 15K
    Write endurance 3 DWPD 0.3 DWPD
    Warranty 5 years
    Peak power 12.5 W (XF1440), 8.25 W (XM1440)
    Average read/write power 9 W (XF1440), 7W (XM1440)

    Seagate is also introducing a PCIe add-in card counterpart to the XM1440 and XF1440 as the Nytro XP7102. Based on the same controller and NAND, the XP7102's model number appears to mark it as the entry-level option in a new XP7000 generation to replace the XP6000 series products that were multi-controller solutions with an on-board RAID controller. The Nytro XP7102 targets only the endurance-optimized mixed workload segment with 800GB and 1600GB as the only two capacity options, and has similar specifications to its XF1440 equivalents.

    The 2TB XM1440 M.2 will be available in November 2016 and the Nytro XP7102 PCIe add-in card is already available.

  • Windows 10: One Week Left to Upgrade for Free (Génération NT: logiciels)
    All is not yet lost for the undecided, but they will have to hurry: Windows 10 remains free for only one more week.
  • Iris Recognition for the iPhone in 2018? (MacBidouille)

    With TouchID (particularly on the 6S models), Apple has one of the most effective fingerprint recognition systems in the smartphone world.
    According to Digitimes, mobile phone makers and their suppliers are now engaged in a genuine race.

    Samsung is expected to offer a new iris recognition system on its next model. For Apple, it is reportedly planned for 2018, and everyone is now chasing the holy grail of biometric recognition: a system that is both effective and reliable.

    The technical difficulties remain numerous: the system must capture a very precise image of the iris without requiring the user to press their eye against an overly bright sensor, and above all it must be able to distinguish a living eye from a high-definition photograph of one.

    Clearly, this will be a vector for innovation and a way to stand out in a market where there seems to be little left to invent or improve.

  • AMD Launches Graphics Cards with SSD Slots (MacBidouille)

    AMD has just announced the release (in beta test form) of a new type of graphics card, the Radeon Pro SSG.

    On the graphics side we are in familiar territory, with a Polaris 10 chip, the same one found in the RX 480.
    What sets these cards apart is the presence of two M.2 connectors capable of hosting two SSDs.

    Beyond offering a large storage volume without taking up additional connectors on the motherboard, the SSDs can be used to extend the graphics card's onboard memory, with local access that is faster than going through the motherboard.

    It is hard to say whether this is a coming revolution or a pathetic attempt to make something new out of products inferior to NVIDIA's.
    We will find out in 2017, when these cards go on sale, unless you pay $9,999 right away for access to a beta development kit.


  • Updated: AMD Announces Radeon Pro SSG: Polaris With M.2 SSDs On-Board (AnandTech)

    As part of this evening’s AMD Capsaicin event (more on that later), AMD’s Chief Architect and SVP of the Radeon Technologies Group has announced a new Radeon Pro card unlike anything else. Dubbed the Radeon Pro Solid State Graphics (SSG), this card includes M.2 slots for adding NAND SSDs, with the goal of vastly increasing the amount of local storage available to the video card.

    Details are a bit thin and I’ll update this later this evening, but in short the card utilizes a Polaris 10 GPU and includes 2 PCIe 3.0 M.2 slots for adding flash drives to the card. These slots are then attached to the GPU (it’s unclear if there’s a PCIe switch involved or if it’s wired directly), which the GPU can then use as an additional tier of storage. I’m told that the card can fit at least 1TB of NAND – likely limited by M.2 MLC SSD capacities – which massively increases the amount of local storage available on the card.

    As AMD explains it, the purpose of going this route is to offer another solution to the workset size limitations of current professional graphics cards. Even AMD’s largest card currently tops out at 32GB, and while this is a fair amount, there are workloads that can use more. This is particularly the case for workloads with massive datasets (oil & gas), or as AMD demonstrated, scrubbing through an 8K video file.

    Current cards can spill over to system memory, and while the PCIe bus is fast, it’s still much slower than local memory, plus it is subject to the latency of the relatively long trip and waiting on the CPU to address requests. Local NAND storage, by comparison, offers much faster round trips, though on paper the bandwidth isn’t as good, so I’m curious to see just how it compares to the real world datasets that spill over to system memory.  Meanwhile actual memory management/usage/tiering is handled by a combination of the drivers and developer software, so developers will need to code specifically for it as things stand.

    For the moment, AMD is treating the Radeon Pro SSG as a beta product, and will be selling developer kits for it directly, with full availability set for 2017. For now developers need to apply for a kit from AMD, and I’m told the first kits are available immediately. Interested developers will need to have saved up their pennies though: a dev kit will set you back $9,999.

    Update:

    Now that AMD’s presentation is over, we have a bit more information on the Radeon Pro SSG and how it works.

    In terms of hardware, the Polaris based card is outfitted with a PCIe bridge chip – the same PEX8747 bridge chip used on the Radeon Pro Duo, I’m told – with the bridge connecting the two PCIe x4 M.2 slots to the GPU, and allowing both the SSDs and the GPU to share the PCIe system connection. Architecturally the prototype card is essentially a PCIe SSD adapter and a video card on a single board, with no special connectivity in use beyond what the PCIe bridge chip provides.

    The SSDs themselves are a pair of 512GB Samsung 950 Pros, which are about the fastest thing available on the market today. These SSDs are operating in RAID-0 (striped) mode to provide the maximum amount of bandwidth. Meanwhile it turns out that due to how the card is configured, the OS actually sees the SSD RAID-0 array as well, at least for the prototype design.

    To use the SSDs, applications need to be programmed using AMD’s APIs to recognize the existence of the local storage and that it is “special,” being on the same board as the GPU itself. Ultimately the trick for application developers is directly streaming resources from the SSDs, treating them as a level of cache between the DRAM and system storage. The use of NAND in this manner does not fit into the traditional memory hierarchy very well, as while the SSDs are fast, on paper accessing system memory is faster still. But it should be faster than accessing system storage, even if it’s PCIe SSD storage elsewhere on the system. Similarly, don’t expect to see frame buffers spilling over to NAND any time soon. This is about getting large, mostly static resources closer to the GPU for more efficient resource streaming.

    To showcase the potential benefits of this solution, AMD had an 8K video scrubbing demonstration going, comparing performance between using a source file on the SSG’s local SSDs, and using a source file on the system SSD (also a 950 Pro).

    The performance differential was actually more than I expected; reading a file from the SSG SSD array was over 4GB/sec, while reading that same file from the system SSD was only averaging under 900MB/sec, which is lower than what we know 950 Pro can do in sequential reads. After putting some thought into it, I think AMD has hit upon the fact that most M.2 slots on motherboards are routed through the system chipset rather than being directly attached to the CPU. This not only adds another hop of latency, but it means crossing the relatively narrow DMI 3.0 (~PCIe 3.0 x4) link that is shared with everything else attached to the chipset.
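
    The bandwidth budget behind this explanation can be sketched out. The per-drive read figure (~2.5 GB/s sequential for a 512 GB 950 Pro) and the per-lane PCIe 3.0 rate are approximate public specs, not numbers from AMD:

```python
# Rough bandwidth budget, assuming ~985 MB/s usable per PCIe 3.0 lane
# (after 128b/130b encoding) and ~2.5 GB/s sequential reads per 950 Pro.
PCIE3_LANE_GBS = 0.985   # GB/s of payload per PCIe 3.0 lane
ssd_read = 2.5           # GB/s, one 512 GB 950 Pro, sequential reads

raid0_peak = 2 * ssd_read            # striped pair behind the on-card bridge
dmi3_ceiling = 4 * PCIE3_LANE_GBS    # DMI 3.0 is roughly a PCIe 3.0 x4 link

print(f"on-card RAID-0 peak:  ~{raid0_peak:.1f} GB/s")   # ~5.0 GB/s
print(f"chipset DMI ceiling:  ~{dmi3_ceiling:.1f} GB/s") # ~3.9 GB/s, shared
```

    The on-card array can approach its ~5 GB/s peak, while a chipset-attached M.2 drive competes with everything else behind the shared ~3.9 GB/s DMI link, consistent with the >4 GB/s vs. sub-900 MB/s gap AMD demonstrated.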

    Though by and large this is all at the proof of concept stage. The prototype, though impressive in some ways in its own right, is really just a means to get developers thinking about the idea and writing their applications to be aware of the local storage. And this includes not just what content to put on the SSG's SSDs, but also how to best exploit the non-volatile nature of its storage, and how to avoid unnecessary thrashing of the SSDs and burning valuable program/erase cycles. The SSG serves an interesting niche, albeit a limited one: scenarios where you have a large dataset and you are somewhat sensitive to latency and want to stay off of the PCIe bus, but don't need more than 4-5GB/sec of read bandwidth. So it'll be worth keeping an eye on this to see what developers can do with it.

    In any case, while AMD is selling dev kits now, expect some significant changes by the time we see the retail hardware in 2017. Given the timeframe I expect we’ll be looking at much more powerful Vega cards, where the overall GPU performance will be much greater, and the difference in performance between memory/storage tiers is even more pronounced.

  • NVIDIA Announces Quadro Pascal Family: Quadro P6000 & P5000 (AnandTech)

    If there was one word to describe the launch of NVIDIA’s Pascal generation products, it’s “expedient.” On the consumer side of the business the company has launched 3 different GeForce cards and announced a fourth (Titan X), while on the HPC side the company has already launched their Tesla P100 accelerator, with the PCIe version due next quarter. With the company moving so quickly it was only a matter of time until a Quadro update was announced, and now today at SIGGRAPH 2016 the company is doing just that.

    Being announced today are the two Quadro models that will fill out the high-end of the Quadro family, the P6000 and P5000. As hinted at by the name, these are based on NVIDIA’s latest Pascal generation GPUs, marking the introduction of Pascal to the Quadro family. And like NVIDIA’s consumer counterparts, these new cards should offer significant performance and feature upgrades over their Maxwell 2 based predecessors.

    NVIDIA Quadro Specification Comparison
      P6000 P5000 M6000 M5000
    CUDA Cores 3840 2560 3072 2048
    Texture Units 240? 160 192 128
    ROPs 96? 64 96 64
    Core Clock ? ? N/A N/A
    Boost Clock ~1560MHz ~1730MHz ~1140MHz ~1050MHz
    Memory Clock 9Gbps GDDR5X 9Gbps GDDR5X 6.6Gbps GDDR5 6.6Gbps GDDR5
    Memory Bus Width 384-bit 256-bit 384-bit 256-bit
    VRAM 24GB 16GB 24GB 8GB
    FP64 1/32 FP32 1/32 FP32 1/32 FP32 1/32 FP32
    TDP 250W 180W 250W 150W
    GPU GP102 GP104 GM200 GM204
    Architecture Pascal Pascal Maxwell 2 Maxwell 2
    Manufacturing Process TSMC 16nm TSMC 16nm TSMC 28nm TSMC 28nm
    Launch Date October 2016 October 2016 03/22/2016 08/11/2015
    Launch Price (MSRP) TBD TBD $5000 $2000

    We will start, as always, at the top, with the Quadro P6000. As NVIDIA’s impending flagship Quadro card, this is based on the just-announced GP102 GPU. The direct successor to the GM200 used in the Quadro M6000, the GP102 mixes a larger number of SMs/CUDA cores and higher clockspeeds to significantly boost performance.

    Paired with P6000 is 24GB of GDDR5X memory, running at a conservative 9Gbps, for a total memory bandwidth of 432GB/sec. This is the same amount of memory as in the 24GB M6000 refresh launched this spring, so there’s no capacity boost at the top of NVIDIA’s lineup. But for customers who didn’t jump on the 24GB card – which is likely a lot of them, including most 12GB M6000 owners – this is a doubling (or more) of memory capacity compared to past Quadro cards. The largest capacity GDDR5X memory chips we know of at this time are 8Gb, so this is as large a capacity as P6000 can currently be built with. Meanwhile this is so far the first and only Pascal card with GDDR5X to support ECC, with NVIDIA implementing an optional soft-ECC method for the DRAM only, just as was the case on M6000.
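
    The 432GB/sec figure follows directly from the bus width and per-pin data rate:

```python
# Peak memory bandwidth: bus width (bits) * per-pin data rate (Gbps) / 8 bits-per-byte.
def mem_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8

print(mem_bandwidth_gbs(384, 9))  # 432.0 GB/s for P6000's 9Gbps GDDR5X
```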

    NVIDIA has also sent over pictures of the card design, and confirmed that the card ships with the Quadro 6000-series standard TDP of 250W. Utilizing the same basic metal shroud and blower design as the M6000 cards, the P6000 should be suitable as a drop-in replacement for older M6000 cards. Do note however that like M6000, external power is pulled via a single 8-pin power connector, so technically this card is out of spec (not that this was a problem for M6000).

    Unfortunately, in their zeal to get this announcement out in time for SIGGRAPH – a frequent venue for Quadro announcements – we don’t have specific performance numbers available. NVIDIA has not locked down the GPU clockspeeds, and as a result we don’t know just how P6000’s clockspeeds and total throughput will compare to M6000’s. It goes without saying that it should be higher, but how much higher remains to be seen.

    For overall expected performance, NVIDIA has published that the P6000 is rated for 12 TFLOPs FP32. Given that it's a fully enabled GP102 we're looking at, this works out to a clockspeed of around 1560MHz. On paper this gives P6000 around 71% more shading performance and 37% more ROP throughput than the older Maxwell 2 M6000. This also puts the P6000 around 9% ahead of the recently announced NVIDIA Titan X.
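
    The clockspeed estimate can be reproduced from the rated throughput, assuming the standard 2 FLOPs (one FMA) per CUDA core per clock:

```python
# FP32 throughput for an NVIDIA GPU: CUDA cores * 2 FLOPs (one FMA) per clock.
def fp32_tflops(cuda_cores, clock_mhz):
    return cuda_cores * 2 * clock_mhz * 1e6 / 1e12

# Solve for the clock implied by P6000's rated 12 TFLOPS over 3840 cores:
clock_mhz = 12 * 1e12 / (3840 * 2) / 1e6
print(round(clock_mhz))  # 1562 -> "around 1560MHz"

# Shading advantage over M6000 (3072 cores, ~1140MHz boost from the table):
print(round(12 / fp32_tflops(3072, 1140) - 1, 2))  # 0.71 -> ~71% more
```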

    On a quick technical note, as this announcement comes just 4 days after NVIDIA announced the GP102 GPU used on this card, this Quadro announcement does confirm a few more things about GP102. Quadro P6000 ships with 3840 CUDA cores (30 SMs), confirming our earlier suspicions that GP102 is a 30 SM part (or at least one with 30 SMs enabled). Meanwhile this also confirms that GP102 can be outfitted with 24GB of GDDR5X. Finally, NVIDIA has confirmed that there’s no high-speed FP64 support on GP102, which is why we’re looking at a 1/32 rate for even the top Quadro card.

    P5000

    Moving on, let’s talk about the Quadro P5000. Based on NVIDIA’s GP104 GPU, this is the smaller, cheaper, lower power sibling to the P6000. This is a fully enabled part with all 2560 CUDA cores (20 SMs) active, so the performance gains versus the M5000 should be similar to what we saw with the consumer GeForce GTX 1080. Clockspeeds are also comparable, so we're looking at a sizable boost in shading/compute/texture performance of 2.06x, while ROP throughput has increased by 65%. Of the two cards, P5000 is going to be the bigger upgrade versus its direct predecessor.

    Meanwhile on the memory front, P5000 is equipped with 16GB of GDDR5X memory. This is attached to GP104’s 256-bit memory bus, and like P6000 is clocked at 9Gbps. P5000’s predecessor, M5000, maxed out at just 8GB of memory, so along with a 36% increase in memory bandwidth, this doubles the amount of memory available for a Quadro 5000 tier card.
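
    All three scaling figures for the P5000 check out against the specification table above:

```python
# FP32 throughput: CUDA cores * 2 FLOPs (one FMA) per clock.
def fp32_tflops(cores, clock_mhz):
    return cores * 2 * clock_mhz * 1e6 / 1e12

p5000 = fp32_tflops(2560, 1730)   # ~8.86 TFLOPS at the ~1730MHz boost
m5000 = fp32_tflops(2048, 1050)   # ~4.30 TFLOPS at the ~1050MHz boost
print(round(p5000 / m5000, 2))    # 2.06 -> the 2.06x shading/compute gain

# ROP count is unchanged at 64, so ROP throughput scales with clock alone:
print(round(1730 / 1050 - 1, 2))  # 0.65 -> 65% more ROP throughput

# Memory: 256-bit GDDR5X at 9Gbps vs 256-bit GDDR5 at 6.6Gbps:
print(round(9 / 6.6 - 1, 2))      # 0.36 -> the 36% bandwidth increase
```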

    Looking at the card design itself, to no surprise it strongly resembles the M5000, with its plastic blower dressed up in Quadro livery. The card’s TDP stands at 180W, which is a slight increase over M5000, but shouldn’t significantly impact the drop-in replacement nature of the design.

    Pascal Features & Availability

    Along with the significant performance increase afforded by the Pascal architecture and TSMC’s 16nm FinFET manufacturing process, the other big news here is of course the functionality that comes to the Quadro P-series courtesy of Pascal. While for our regular readers there’s nothing new we haven’t seen already with GeForce, Pascal’s new functionality will apply a bit differently to the Quadro lineup.

    Perhaps the biggest change here is Pascal’s new display controller. With both the P6000 and P5000 shipping with 4 DisplayPorts, the DisplayPort 1.4 capable controller means that both cards can now support higher resolutions and refresh rates. Whereas the M-series maxed out at 4 4K@60Hz monitors, the P-series can now handle 4 monitors running 5K@60Hz, 4 4K monitors running at 120Hz, or even 8K monitors with additional limitations. Do note however that the per-card monitor limit is still 4 displays, as this is as many displays as Pascal can support.
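
    A rough sanity check of these display limits against DisplayPort payload rates explains the step up. The HBR2/HBR3 payload figures below are the 8b/10b-coded rates over 4 lanes; blanking overhead is ignored and 24 bits per pixel assumed:

```python
# Raw pixel data rates vs. DisplayPort payload capacity over 4 lanes.
# HBR2 (DP 1.2) and HBR3 (DP 1.3/1.4) payload rates after 8b/10b coding:
HBR2, HBR3 = 17.28, 25.92  # Gbps

def stream_gbps(w, h, hz, bpp=24):
    """Active pixel data rate in Gbps (blanking overhead ignored)."""
    return w * h * hz * bpp / 1e9

for name, mode in {"4K@60":  (3840, 2160, 60),
                   "5K@60":  (5120, 2880, 60),
                   "4K@120": (3840, 2160, 120)}.items():
    rate = stream_gbps(*mode)
    print(f"{name}: {rate:5.1f} Gbps  fits HBR2: {rate < HBR2}  fits HBR3: {rate < HBR3}")
```

    5K@60 and 4K@120 both overflow the DP 1.2 payload rate but fit within HBR3, consistent with the M-series topping out at 4K@60 while the P-series handles the higher modes.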

    Speaking of multiple displays, alongside the Quadro card announcements NVIDIA is also announcing a new Quadro Sync card, the aptly named Quadro Sync 2. The multi-adapter/multi-display timing synchronization card is being updated to support the Pascal cards, and will support a larger number of adapters as well. The new Sync 2 will support 8 cards in sync, as opposed to 4 on the original Sync card. Coupled with Pascal’s 4-display-per-card capability, this means synchronized video walls and other systems can now be built out to 32 displays.

    NVIDIA will also be heavily promoting Simultaneous Multi-Projection (SMP), the company’s multi-viewport technology. Like the consumer cards, VR is a big driver here, with NVIDIA looking to reach out to VR developers. NVIDIA is also pitching this at VR CAVE systems, as they can see similar benefits from SMP’s geometry reprojection.

    Taking a look at the overall Quadro lineup, the P6000 and P5000 will, at least for the time being, sit alongside the existing M4000 and lower cards. Within the Quadro lineup these cards are meant for the most demanding workloads – massive memory sets and complex rendering/compute tasks – and they will be priced accordingly. Specific pricing has not been announced, but NVIDIA tells us to expect them to be priced similarly to the last generation cards. This would work out to $5000+ for Quadro P6000, and $2000+ for Quadro P5000 at launch.

    Finally, as we mentioned before NVIDIA was announcing these cards early, before the final clockspeeds have been locked down. This means that while the cards are being announced today, they won’t launch for another two months; NVIDIA expects them to be available in early October. It’s not unusual for Quadro cards to be announced ahead of time, though as SIGGRAPH is also a popular venue for AMD pro card announcements, the earlier than usual announcement may have been for multiple reasons.

    Ecosystem Announcements: New SDKs, Iray VR, & OptiX 4

    Along with the announcement of the Quadro P-series, NVIDIA is also using SIGGRAPH to announce updates to various software and ecosystem initiatives within the company. Overall a number of the company’s SDKs are receiving an update in some form, ranging from rendering to video encode and capture, the latter taking advantage of Pascal’s 8K encode/decode capabilities.

    Of particular note here, NVIDIA’s Iray physically based render plugin for 3D modeling applications is getting a significant update. As with other parts of their ecosystem, NVIDIA is doubling down on VR here as well. The next update to Iray will include support for generating panoramic VR lightfields – think high detail fixed position 3D panoramas – which can then be displayed on other devices. NVIDIA has been showing off an early version of this technology at GTC 2016 and other events, where it was used to show off renders of the company’s under-construction headquarters.

    The Iray update will also be part of a larger focus on integrating the company’s software with their DGX-1 server, which incorporates 8 Tesla P100 accelerators. Iray will be coming to DGX-1 this fall, supporting the same features that are already available in multi-GPU setups with the older Quadro VCA. Longer term, in 2017, the company will be adding NVLink support for better multi-GPU scaling.

    NVIDIA’s OptiX ray tracing engine is the other product that’s getting a DGX-1 update. OptiX 4.0, which is being released this week, adds support for the DGX-1, including NVLink support. It is interesting to note though that the company is only supporting clusters of 4 GPUs, despite the fact that DGX-1 has 8 GPUs (the other 4 GPUs form a second cluster). This may mean that OptiX needs direct GPU links to perform best – as in an 8-way configuration, some GPUs are 2 hops away – or it may just be that OptiX naturally doesn’t scale well beyond 4 GPUs.

    Finally, NVIDIA is also announcing a change to how mental ray support is handled for Maya. Previously, integrating the ray tracer with Maya was handled by Autodesk, but NVIDIA is currently in the process of taking that over. The goal of doing so is to allow mental ray to be updated and have features added at the more brisk pace that NVIDIA tends to work at. The new plugin is currently scheduled to ship in September, and as one of their first actions, NVIDIA will be integrating a new global illumination engine, GI-Next.

  • Bob Mansfield Returns for the Apple Car (MacBidouille)

    According to the Wall Street Journal, Bob Mansfield is back in a key role at Apple.

    A long-time employee of the company, he had been officially more or less retired since 2012, when he was replaced in his position but remained on the payroll.

    He is now reportedly back in a key role, one crucial to Apple's future, within the Apple Car project.

    He may have taken charge of this secret division after the departure earlier this year of its previous head, Steve Zadesky.

  • Nvidia Launches Its New Professional Cards (MacBidouille)

    Nvidia has just closed the loop by unveiling the Quadro P6000 and P5000, two new professional cards using the Pascal architecture manufactured on a 16nm process.

    The P6000 carries 24 GB of GDDR5X and a GP102 GPU with 3840 CUDA cores. It can drive four 5K displays at 60 Hz simultaneously.
    The P5000 has 16 GB of RAM and a GP104 GPU with 2560 processing units.

    These cards support DirectX 12, Vulkan, OpenGL, OpenCL, and CUDA, as well as hardware decoding of H.264 and H.265 in 10- and 12-bit.

  • This May Be the First Photo of an iPad Pro 2 (MacBidouille)

    Apple Insider has received this photo, which according to its source shows a prototype of a 12.9" iPad Pro 2.

    The device carries the reference MH1C2CD/F, which is not currently used by any iOS device.

    Under these conditions, the only real interest of this information is as an occasion to talk about the iPad Pro and what a second iteration of this model could bring.

  • The End of the Race to Finer Process Nodes Predicted for 2021 (MacBidouille)

    Until now, everything suggested that little or nothing could end the race toward ever-finer chip manufacturing. It has long been known that a quantum barrier exists, but foundries seemed to take pleasure in doing everything possible to push it back.
    Awareness of the difficulty of shrinking further is relatively recent, dating from when Intel stopped delivering a new process node every 18 months. After accumulating increasingly long delays, the company ended up offering three, four, or even five evolutions of its transistor design on a given process node.

    The Semiconductor Industry Association, whose members include IBM and Intel, has just made this state of affairs official. In a new roadmap, it predicts that the end of the race toward ever-finer processes is very close: 2021.
    After that, it will not be impossible to do better still, but the R&D and production costs of the chips will be so high that the game will not be worth the candle.

    We are thus reaching the end of a cycle as old as processor manufacturing itself.

    Consumer electronics will now have to be reinvented, if we can still speak of "electronics" in 15 years.

    In the meantime, this will not mean the end of chip evolution. There is still enormous headroom in optimizing transistor design and managing power consumption. We will also certainly see, as with NAND flash, stacked layers to increase transistor counts, and probably the re-emergence of all kinds of specialized coprocessors designed to accelerate certain types of computation.

  • Verizon reportedly about to buy Yahoo! (MacBidouille)

    Yahoo! has unofficially been up for sale for years. Until now, everyone had failed to acquire the company. It appears Verizon is about to finalize a deal.
    The company has reportedly put 5 billion dollars on the table to buy Yahoo! and most of its real estate. The deal, however, would not include the patent portfolio.

    We should know more in the coming days.

    [Update] The news is confirmed. Verizon is buying a large part of Yahoo! for 4.83 billion dollars.

  • China: anti-American movements take aim at Apple (MacBidouille)

    Reuters reports that anti-American movements are starting to target Apple in their propaganda.
    These "spontaneous" popular movements criticize the United States for its political positions, in particular on territorial claims in the South China Sea, and call for a boycott of Americans and their products.
    They are asking their supporters to destroy their Apple products and turn to Chinese ones instead.
    Local Apple resellers have also been harassed.

    We have already seen similar episodes targeting Japan, and overall, sales of cars from that country suffered a lasting slowdown.

  • Price Check Q3 2016: DRAM Prices Down Over 20% Since Early 2016 (AnandTech)

    Slow sales in the first half of 2016 have negatively affected suppliers of virtually all PC, tablet and smartphone components. Producers of DRAM typically suffer more than others when it comes to pricing, since computer memory is considered a commodity and its prices depend mostly on supply and demand rather than on technological advantages. Since early 2016, prices of DDR3 and DDR4 chips have declined by over 20%, according to DRAMeXchange (a division of TrendForce that tracks the DRAM market). Despite the fact that DDR4 memory is slightly costlier to produce because of slightly larger die sizes, physical DDR4 modules can at times actually be cheaper than DDR3 ones.

  • They have arrived in France! (MacBidouille)

    That's it, they have officially arrived in France; you may have already read or heard it through other channels: Pokemon Go has launched in France.

    Plan on carrying an external battery, given how hard the game drains it!

    PS: do not play Pokemon Go, or anything else, while driving...

  • Updates and downloads of the week (Génération NT: logiciels)
    As every Sunday, here is our summary of the week's updates and downloads.
  • The best antivirus software at mid-2016 (Génération NT: logiciels)
    At mid-2016, AV-Comparatives' assessment of antivirus solutions confronted with the same infection vectors encountered in everyday browsing is not exactly favorable to Microsoft.
  • Samsung files lawsuit against Huawei (MacBidouille)

    Samsung has announced that it has filed a complaint in China against Huawei.

    It accuses its competitor of infringing several of its patents covering data storage on smartphones and image processing.

    One can in a way salute Samsung's courage in launching proceedings in China, given that the courts there are certainly among the most nationalistic in the world, which gives it little chance of prevailing, quite the contrary.

  • Apple to open its first store in Taiwan (MacBidouille)

    Apple has posted job listings confirming the upcoming opening of the first Apple Store in Taiwan.

    The company is thus continuing to set up shop wherever possible, the only way to boost its growth in a difficult period.

  • Security flaws in OS X and iOS disclosed (MacBidouille)

    Last year, security flaws affecting Android were disclosed. Dubbed Stagefright, they allow attackers to take remote control of devices simply by sending messages specially crafted for that purpose. These flaws exploit the image-rendering system, which makes them very tricky to deal with, since merely receiving the messages, without even opening them, is enough to trigger the attack.
    A security researcher at Cisco has disclosed similar flaws in iOS and OS X. They use the same approach, crashing the code responsible for processing images, and can lead to a potential takeover of the machines.
    However, it appears that writing code to exploit these flaws is very complex and, good news, they have been fixed in the latest OS X and iOS updates.

    There is therefore no danger if you have applied the very latest system updates, and any danger that remains for unpatched versions is small given the complexity of exploitation.

    We cannot, however, rule out all risk, and one can only hope that Apple updates as many versions of its operating systems as possible.
