Miscellaneous

  • Security alerts get ignored because of how our brains work (Génération NT: logiciels)
    Popping up at the wrong moment, computer security alerts are simply ignored. Our brains are not fond of multitasking.
  • Intel to work with ARM and fab chips for them at 10nm (MacBidouille)

    Intel and ARM have issued a joint press release announcing that the two companies are teaming up: they will work together so that ARM chips can be produced on a 10nm FinFET process in Intel's fabs.

    This is an unprecedented about-face for Intel, which had, with a few rare exceptions, refused to manufacture other companies' chips in its fabs, and had certainly never built competing products. But the situation has since changed, and Intel has shown pragmatism.

    To begin with, Intel seems to have understood that in the near future it has no realistic way to compete with ARM in its market of choice, mobile chips. Intel has in fact put its efforts in that area on hold.
    Second, we are getting ever closer to the point where it will be impossible to shrink transistors any further. Each new node shrink is more expensive and more problematic, which makes it slower and more complex to deliver, and that lets the other foundries close the gap on what Intel offers.

    By accepting ARM customers, Intel therefore has fewer downsides than before, while gaining the immense advantage of poaching customers from other foundries and pocketing, every year, billions of dollars from a business it had disdained until now.

    It is therefore no longer unthinkable that Apple could have a future generation of Ax chips (in 2018 or 2019) manufactured by Intel.

    We even expect Intel to go all out on this opportunity and to be very competitive. If the company manages to weaken the other foundries, it will force them to slow down their R&D, and it will even gain some leverage (relative but real) over AMD.

    [Update 27/08/2016] According to Nikkei, Intel could start producing Ax chips as early as 2018 or 2019. The foundry, which has swallowed its pride in this area and shown pragmatism, could then land an annual contract worth several billion dollars and do considerable damage to TSMC, putting it in position to poach other juicy contracts afterwards.

  • Spotify punishes artists who gave Apple an exclusive (MacBidouille)

    Two days ago we told you about Universal Music's decision to stop granting (temporary) exclusives to platforms like Apple Music.
    Bloomberg reports that Spotify, Apple Music's fiercest competitor, has warned labels and artists that it would retaliate if they granted an exclusive to its competitors. In that case, once those artists' tracks arrived on Spotify, they would no longer be featured in promoted playlists, and search rankings would be biased to push them all the way to the bottom.

    Without passing judgment on these practices, it shows just how total the war between all these players is, and above all it makes clear that artists, however elegantly showcased, are just commercial products to be placed on or pulled from the endcaps depending on what they can bring in for the store.

  • Tim Cook just pocketed a nice bonus for his 5 years at Apple (MacBidouille)

    Tim Cook has just celebrated five years at the head of Apple. Before leaving the company, Steve Jobs had wanted to lock in its leadership by offering key executives a bonus vesting after five years (and another after 10). It is under that arrangement that Tim Cook has just received his payout: 100 million dollars.

    If he stays until 2021, his next bonus should amount to 500 million dollars.

    Note, however, that this bonus is tied not just to his staying but also to the performance of Apple's stock.

    An SEC filing shows that Tim Cook has sold a little over 35 million dollars' worth of the company's stock in total.

  • New iOS 10 betas (MacBidouille)

    Apple is offering developers the eighth beta of iOS 10, and members of the public testing program the seventh.
    They are accompanied by a seventh beta of tvOS.

  • Intel’s 140GB Optane 3D XPoint PCIe SSD Spotted at IDF (AnandTech)

    As part of this year’s Intel Developer Forum, we had half expected some more insights into the new series of 3D XPoint products that would be hitting the market, either in terms of narrower time frames or more insights into the technology. Last year saw the release of some information, including the ‘Optane’ brand name for the storage version. Unfortunately, new information was thin on the ground and Intel seemed reluctant to say any more about the technology than what had already been said.

    What we do know is that 3D XPoint based products will come in storage flavors first, with DRAM extension parts to follow in the future. This ultimately comes from the fact that storage is easier to implement and enable than DRAM, and the characteristics for storage are not as tight as those for DRAM in terms of break-neck speed, latency or read/write cycles.

    For IDF, Optane was ultimately relegated to a side presentation at the same time as other important talks were going on, and we were treated to discussions about ‘software defined cache hierarchy’ whereby a system with an Optane drive can define the memory space as ‘DRAM + Optane’. This means a system with 256GB of DRAM and a 768GB Optane drive can essentially act like a system with ‘1TB’ of DRAM space to fill with a database. The abstraction layer in the software/hypervisor is aimed at brokering the actual interface between DRAM and Optane, but it should be transparent to software. This would enable some database applications to move from ‘partial DRAM and SSD scratch space’ into a full ‘DRAM’ environment, making it easier for programming. Of course, the performance compared to an all-DRAM database is lower, but the point of this is to move databases out of the SSD/HDD environment by making the DRAM space larger.
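
    To make the idea concrete, here is a toy sketch of that brokering layer: a store that presents one flat keyspace while keeping hot items in a small fast tier standing in for DRAM and spilling cold ones to a larger slow tier standing in for Optane. This is purely illustrative Python under our own assumptions (made-up capacities, a simple LRU policy); Intel has not described its actual mechanism.

        from collections import OrderedDict

        class TieredStore:
            """One flat keyspace over a small fast tier and a big slow tier."""
            def __init__(self, fast_capacity):
                self.fast = OrderedDict()   # stands in for DRAM
                self.slow = {}              # stands in for the Optane drive
                self.fast_capacity = fast_capacity

            def put(self, key, value):
                self.fast[key] = value
                self.fast.move_to_end(key)
                while len(self.fast) > self.fast_capacity:
                    cold_key, cold_value = self.fast.popitem(last=False)
                    self.slow[cold_key] = cold_value   # demote the LRU item

            def get(self, key):
                if key in self.fast:
                    self.fast.move_to_end(key)         # keep hot items hot
                    return self.fast[key]
                value = self.slow.pop(key)             # promote on access
                self.put(key, value)
                return value

    The point mirrors the presentation: the software above the abstraction layer sees one large "memory", and the broker decides which tier actually holds each item.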

    Aside from the talk, there were actually some Optane drives on the show floor, or at least what we were told were Optane. These were PCIe x4 cards with a backplate and a large heatsink, and despite my many requests neither demonstrator would actually take the card out to show what the heatsink looked like. Quite apart from which, neither drive was actually being used - one demonstration was showing a pre-recorded video of a rendering result using Optane, and the other was running a slideshow with results of Optane on RocksDB.

    I was told in both cases that these were 140 GB drives, and even though nothing was running I was able to feel the heatsinks – they were fairly warm to the touch, at least 40C if I were to narrow down a number.  One of the demonstrators was able to confirm that Intel has now moved from an FPGA-based controller down to their own ASIC, however it was still in the development phase.


    (Image: the 140 GB Optane PCIe card)

    One demo system was showing results from a presentation given earlier in the lifespan of Optane: rendering billions of water particles in a scene where most of the scene data was being shuffled from storage to memory and vice versa. In this case, compared to Intel’s enterprise PCIe SSDs, the rendering time dropped from 22hr to ~9hr.

    It's worth noting that we can see some BGA pads on the image above. The pads seem to be in an H shape, and there are several present, indicating that these should be the 3D XPoint ICs. Some of the pads are empty, suggesting that this prototype board also supports a larger-capacity model. Given that one of the benefits of 3D XPoint is density, we're hoping to see a multi-terabyte version at some point in the distant future.

    The other demo system was a Quanta / Quanta Cloud Technology system node, featuring two Xeon E5 v4 processors and a pair of PCIe slots on a riser card – the Optane drive was put into one of these slots. Again, it was pretty impossible to see more of the drive aside from its backplate, but the onscreen presentation of RocksDB was fairly interesting, especially as it mentioned optimizing the software for both the hardware and Facebook.

    RocksDB is a high-performance key/value store database designed for fast embedded storage, used by Facebook, LinkedIn and Yahoo, but the fact that Facebook was directly involved in some testing indicates that at some level the interest in 3D XPoint will brush the big seven cloud computing providers before it hits retail. In the slides on screen, the data showed a 10x reduction in latency as well as a 3x improvement in database GETs. There was a graph plotted showing results over time (not live data), with the latency metrics being pretty impressive. It’s worth noting that there were no results shown for storing key/value data pairs.

    Despite these demonstrations on the show floor, we’re still crying out for more information about 3D XPoint: how exactly it works (we have a good idea but would like confirmation), Optane itself (price, time to market), as well as the generation of DRAM products for enterprise that will follow. Intel being comparatively low key about this during IDF is a little concerning, and I’m expecting to see/hear more about it during Supercomputing16 in mid-November. For anyone waiting on a consumer Optane drive, it feels like it won’t be out as soon as you might think, especially if the big seven cloud providers want to buy every wafer from the production line for the first few quarters.


  • HP Announces EliteBook Laptops with On/Off Sure View Privacy Screens (AnandTech)

    On Thursday, HP introduced adjustable privacy screens for the EliteBook 840 and EliteBook 1040 notebooks. These are aimed at business users concerned about visual hacking and disclosing sensitive information to onlookers (something a number of journalists have to consider when working on NDA information while flying back from a press event). The protective measures for the screens are based on the Sure View technology jointly developed by HP and 3M, which relies on a number of HP’s proprietary technologies as well as 3M’s optical films. Ultimately, the user can switch the privacy mode on and off as required, rather than fitting a permanent film that can reduce comfort. Right now, the EliteBook laptops are the first batch to get the technology, but eventually HP’s Sure View may appear on other PCs from the company depending on feedback.

    In the conference call, HP was keen to point out that according to the Pew Research Center, around 60% of employees nowadays take their work outside of office walls and can potentially (without knowing it) share confidential information with the wrong people. Most users who have to work while on the move may not be aware of who is staring at their displays from behind or from the side (which is incredibly annoying in general, even if you are playing Tetris). Knowing that, HP explained that some employees simply avoid working with sensitive data in public places, which can reduce their overall productivity. Meanwhile, there is a market for ensuring privacy in cafes, airports and other venues: some people install aftermarket privacy screens on their laptops. While that helps businesses better comply with regulatory requirements in healthcare, finance and other industries dealing with sensitive information, such screens permanently reduce the viewing angles of notebooks, which hurts the overall user experience. By contrast, HP’s Sure View can be turned on and off, improving privacy without a persistent reduction of the screen's viewing angles.

    HP’s Sure View technology uses a special film from 3M as well as HP’s own backlighting. When the backlight setting is adjusted, the film decreases the display's viewing angles down to around 35 degrees. The tech can be activated by pressing Fn + F2, and the actual viewing angles can be further adjusted by pressing other combinations of buttons (viewing angle adjustments are independent of brightness, and one does not depend on the other). HP claims that the technology can be used with different display panels, but initially the company will use it on TN or SVA panels with FHD resolution. Sure View does not consume extra power, and since it relies on a different setting of the backlighting, it might even improve battery life a bit. HP does not reveal the price of its integrated privacy screen technology, but claims that with higher-end configs it will be virtually free. Moreover, the company mentioned that since it is not just a film on top of the display, it is going to last throughout the lifetime of the laptop.

    As for configurations, the HP EliteBook 840 G3 and the HP EliteBook 1040 G3 are 14” laptops based on Intel’s dual-core Skylake-U Core processors with integrated HD Graphics 520 and vPro (select models only). They come with up to 32 GB of DDR4-2133 memory and use 2.5” HDDs/SSDs (EliteBook 840) or M.2 SSDs (EliteBook 1040). The notebooks are equipped with 802.11ac WiFi + Bluetooth modules, Gigabit Ethernet (via dongle in the case of the 1040) as well as an LTE or HSPA+ WWAN module. As with many business PCs, the systems are equipped with anti-spill keyboards, fingerprint readers (optional in the case of the 840) and TPM 1.2/2.0 modules.

    Both laptops feature 14” display panels and are fairly light and thin for mainstream business models. The EliteBook 840 weighs 1.48-1.7 kg and is 1.89-2.02 cm thick (the peak values are for touch-enabled models). Meanwhile, the EliteBook 1040 starts at 1.43 kg and is 1.58-1.65 cm thick. Depending on the exact configuration, the EliteBook 840 can work for up to 13.5 hours, whereas the EliteBook 1040 can boast 10 to 11.5 hours of battery life. Since configurations of the notebooks can vary, HP does not mention recommended prices, but we are talking about machines that start at around $1200 and can end up at $2000 or higher. B2B prices will differ depending on volume and support packages. Systems with Sure View privacy screens should be available in September.

  • The Meizu PRO 6 Review (AnandTech)

    The Meizu PRO 6 features an attractive design and excellent build quality, but is not a clear upgrade over the previous generation PRO 5. Lurking inside its all-aluminum chassis is a MediaTek Helio X25 SoC with a deca-core CPU and 4GB of RAM. Its defining feature, however, is a 5.2-inch 1080p SAMOLED display, which helps it stand out amidst a forest of phablets.

  • Eurocom Sky X9E2 Laptop: Intel Core i7, Two NVIDIA GeForce GTX 1080/1070 GPUs in SLI, Optional 120 Hz Display Panel (AnandTech)

    Eurocom has released one of the world’s first laptops featuring two NVIDIA GeForce GTX 1080/1070 GPUs, along with one of Intel’s latest Core i7 CPUs for good measure. The Sky X9E2 machine is designed primarily for gamers, but it can also be equipped with up to 64 GB of DRAM, up to 6 TB of storage and even an optional 120 Hz display panel. Given the high-performance goals of the system, it not only costs a lot but also comes in a thick chassis designed to fit a 17.3" screen.

    The Eurocom Sky X9E2 notebook is based on the Intel Z170 PCH and supports socketed Skylake-S processors (Intel Core i7-6700K, i5-6600K and i7-6700 options are available) that can be overclocked. The machine can fit up to four SO-DIMMs for a total of 64 GB of DDR4 memory, although maximum XMP support isn't directly listed. For graphics, the X9E2 uses one or two NVIDIA GeForce GTX 1070/1080 graphics processors in an MXM form-factor, which carry a 120-190 W TDP per card, and the system promises to deliver desktop-class performance in a mobile form-factor. Installing one rather hot GPU into a modern gaming laptop chassis is not a problem in general, but Eurocom’s Sky X9E2 is among the first machines to integrate up to two Pascal graphics processors, with a potential total TDP of up to 380 W. To cool the CPU as well as the GPU(s), the portable PC uses a sophisticated cooling system with multiple heat pipes as well as three huge blower fans.

    For storage, the Eurocom Sky X9E2 can integrate up to two 2.5”/9.5mm SSDs or HDDs (in consumer land, that's 4 TB of storage) as well as up to two M.2-2280 NVMe SSDs (another two terabytes). In addition, the laptop has a 6-in-1 card reader, two Thunderbolt 3 ports (which automatically implies two USB Type-C ports with 10 Gbps transfer rates) and five USB 3.0 connectors. For connectivity, the Sky X9E2 has two Killer Networking E2400 GbE controllers as well as one M.2-2230 802.11ac Wi-Fi + Bluetooth controller.

    When it comes to display options, end-users can choose between an IPS FHD panel, an AHVA FHD panel with a 120 Hz refresh rate, and an IPS UHD panel. Optionally, the machine also supports NVIDIA’s G-Sync technology. Moreover, the laptop has several display outputs (HDMI 2.0, DisplayPort 1.2 and Thunderbolt 3) in a bid to support NVIDIA’s SurroundView capability. For audio, the PC has a Sound Blaster X-Fi MB5 chip with 7.1-channel audio outputs as well as integrated 2 W speakers and a 2.5 W subwoofer.

    The Sky X9E2 desktop replacement comes with either a 330 W or a 660 W PSU (the latter is required when the spec is maxed out and the system is equipped with two GPUs) and an 8-cell 89 Wh Li-Ion battery (battery life ranges from near zero to a few hours depending on configuration). It weighs 5.5 kilograms (12.1 lbs) and is 47.2 mm (1.88 inches) thick. The starting price of the DTR machine from Eurocom is $2499, and it can push much nearer to five digits when maxed out.

  • Apple could drop the iPhone's home button next year (MacBidouille)

    According to the rumors, the next iPhone will have a fixed home button with a haptic system. New rumors say that this button will disappear entirely from the model after it, allowing its curved OLED screen to cover the entire front of the device.

    Another rumor also says that a specific model will be offered in Japan. It will integrate a technology called FeliCa, widespread in Japan for contactless payment.

  • Apple wants to harvest thieves' fingerprints (MacBidouille)

    Apple has filed a very interesting patent. It describes mechanisms that, in the event an iPhone (or any other device with a fingerprint sensor) is stolen, would capture the fingerprints of anyone attempting to use it.
    It would thus be possible, once the device has been stolen, to file a police report armed with the thief's fingerprints, probably a very effective way of finding them.

    This will certainly spark endless debates about whether this kind of thing is legal and admissible in court, but it would unquestionably be a highly dissuasive way of fighting the theft of these devices.

  • Apple to create 1,000 new jobs in Ireland (MacBidouille)

    According to the Irish Independent, Apple will create 1,000 more jobs in Ireland.
    The company thus seems keen to show that the country genuinely matters to it strategically and is not merely a tax haven. It is also possible that this is compensation for the tax-rate advantages the company obtained there, advantages that are now under Europe's scrutiny.
    The US Treasury has, moreover, taken a position against Europe, fearing that in the end it will be the American taxpayer who foots the hefty bill Apple faces in case of trouble with Brussels.

    Behind these tax-optimization stories lies the enormous trade war being waged between the United States and Europe. The two giants are currently busy attacking each other's companies for regulatory non-compliance (which is genuine, but the attacks are fierce) and hitting them with the maximum fines their laws allow.

  • Xiaomi Launches Redmi Note 4 in China (AnandTech)

    While we normally don’t cover China-only smartphone releases, Xiaomi is a fairly major player in terms of volume, as they were one of the first companies in the industry to ship high-end hardware at mid-range prices. We also tend to see these devices eventually filter out to a more global launch, and the Xiaomi Redmi line has been a solid success pretty much anywhere it has shipped. Today Xiaomi launched an update to their Redmi Note line, the Redmi Note 4.

    At a high level the Redmi Note 4 is shipping with fairly aggressive specs for the price, which starts at 135 USD for the model with 2GB of RAM and 16GB of internal storage, and 180 USD for the model with 3GB of RAM and 64GB of internal storage.

    There’s also a microSD slot, so the 16GB of internal storage isn’t necessarily the end of the world.

    All SKUs ship with MediaTek’s Helio X20 SoC, but instead of a 2.5 GHz Cortex A72 at the high end it looks like Xiaomi is shipping a 2.1 GHz variant, which could either be a deliberate downclock to reduce peak power consumption or a cost-saving measure. To push for as much battery life as possible, Xiaomi has also shipped a 15.7 Wh (4100 mAh) battery in this device, probably partially to offset the use of an SoC on a planar 20nm process, and also because most of the people buying phablets are likely doing so for the battery life benefits. This battery is charged through conventional quick charging at 10W, so charge time may be a bit on the long side, but I wouldn’t be surprised if this is a move to improve battery longevity and meet the fairly tight cost target.
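
    As a quick sanity check, the two quoted battery figures agree if one assumes a typical ~3.8 V nominal cell voltage (our assumption, not a figure from Xiaomi):

        # Wh -> mAh conversion for the quoted battery figures
        energy_wh = 15.7
        nominal_v = 3.8                       # assumed nominal Li-ion cell voltage
        capacity_mah = energy_wh / nominal_v * 1000
        print(round(capacity_mah))            # ~4132, close to the quoted 4100 mAh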

    The display is a 5.5 inch 1080p unit with a claimed maximum brightness of 450 nits, contrast ratio of 1000:1, and sunlight display technology similar to Apical’s Assertive Display which dramatically improves outdoor visibility beyond what you might expect from a display of that brightness. There’s a mention of 72% NTSC gamut which leads me to believe that this is targeting sRGB fairly well depending upon display settings similar to the Mi Note and Mi Note Pro.

    The rear camera is a 13MP unit with PDAF and the front facing camera is a 5MP sensor. Both have an f/2.0 aperture but I don’t really see any mentions of the supplier of the module or sensor here. I suspect that this is going to be a Samsung ISOCELL sensor or something similar on the rear but absent actual data this is just a wild guess.

    As far as connectivity goes, 802.11a/b/g/n/ac, WiFi Direct, and Bluetooth 4.2 are supported. There’s no explicit discussion of NFC, but seeing as Mi Pay and AliPay are supported as NFC-based mobile payment services, the Redmi Note 4 should support NFC. For cellular connectivity, LTE category 6 and VoLTE are supported, and looking at Xiaomi’s page it looks like GSM bands 2/3/8, WCDMA bands 1/2/5/8, TD-SCDMA bands 34/39, FDD-LTE bands 1/3/5/7/8, and TD-LTE bands 38/39/40/41 are supported for this specific variant. There’s also a mention of support for CDMA2000/1X BC0, which is interesting to see. Dual SIM is supported, but when using two SIMs it isn’t possible to use microSD. GPS, GLONASS, and Beidou are supported as GNSS constellations, and there’s an IR port, gyro, accelerometer, proximity sensor, ambient light sensor for auto brightness, and a magnetic hall sensor for things like flip covers. All of this is packaged into a phone that is 151x76x8.35mm and weighs 175 grams, which is actually fairly impressive considering its size, the aluminum unibody, and the 15.7 Wh battery. The only real spec out of place at a high level is the lack of a reversible USB-C connector, but microUSB is acceptable given the price.

    Putting aside the specs, this is actually looking to be a fairly promising phone. We can debate whether Xiaomi is copying Apple or not, but the Redmi Note 4 has a fairly unique design thanks to its rear camera arrangement: a large circular lens somewhat reminiscent of HTC designs, combined with a dual-color LED mounted just below the camera and a fingerprint scanner, identical in shape to the camera lens, between the two. The design of the Redmi Note 4 looks great considering the price, and there’s some irony in that, for all of the marketing bluster surrounding the Note7’s symmetrical design, the Redmi Note 4 has visibly better ID detailing and overall symmetry. The use of 2.5D glass, chamfered edges, and a slightly curved back should also make for solid ergonomics while allowing for things like tempered glass screen protectors. The fingerprint scanner is also said to learn over time, extending the map of your fingerprints to allow for faster, more reliable use. MIUI 8 also has some interesting new features, such as the ability to enter either a standard or a private user mode depending upon the password/PIN/pattern you enter, similar to a KNOX secure folder.

    Overall, the Redmi Note 4 from a distance looks to be a fairly impressive phone. It wasn’t all that long ago that things like an aluminum unibody design, a fast-focusing rear camera, a high-quality, high-density display, and a fingerprint scanner were impossible to find in a single package for a phone under 200 USD, and Xiaomi has managed to ship a phone with all of these things. How they manage the transition of the Redmi Note 4 to a global audience, which would likely mean a Qualcomm SoC and a new RF front-end, remains to be seen, but it’ll be interesting to watch all the same. The Xiaomi Redmi Note 4 will be available in China in silver, gold, and gray; the 2/16GB variant will retail for 899 RMB or 135 USD, and the 3/64GB variant for 1199 RMB or 180 USD.

  • ADATA Introduces Ultimate SU800 SSD: SMI Controller, 3D NAND, SATA Interface (AnandTech)

    ADATA has formally introduced its first SSDs based on 3D NAND flash memory. The Ultimate SU800 drives are designed for price-conscious market segments and use a SATA interface, which means that they do not offer very high performance. Nonetheless, the use of high-capacity 3D NAND chips lets the manufacturer increase the drives' MTBF rating and could eventually help ADATA offer very competitive pricing.

    The family of ADATA’s Ultimate SU800 SSDs includes models with 128 GB, 256 GB, 512 GB and 1 TB capacities. The drives are based on Silicon Motion’s SM2258 controller (which has four NAND flash channels and LDPC ECC technology) as well as 3D TLC NAND flash memory from an undisclosed manufacturer (either IMFT or SK Hynix). The drives come in a 2.5”/7 mm form-factor and use a SATA 6 Gbps interface.

    The manufacturer claims that the Ultimate SU800 SSDs support sequential read performance of up to 560 MB/s as well as sequential write performance of up to 520 MB/s when pseudo-SLC caching is used. The 128 GB model is naturally slower than its brethren when it comes to writing (up to 300 MB/s), but its read speed is in line with the higher-capacity SKUs. ADATA did not mention the random performance of the SSDs or their power consumption, but the SM2258 controller is capable of up to 90,000 read and up to 80,000 write IOPS.

    ADATA Ultimate SU800 Specifications
    Capacity              128 GB             256 GB             512 GB             1 TB
    Model Number          ASU800SS-128GT-C   ASU800SS-256GT-C   ASU800SS-512GT-C   ASU800SS-1TT-C
    Controller            Silicon Motion SM2258
    NAND Flash            3D TLC NAND
    Sequential Read       560 MB/s
    Sequential Write      300 MB/s           520 MB/s           520 MB/s           520 MB/s
    Random Read IOPS      Up to 90K IOPS
    Random Write IOPS     Up to 80K IOPS
    Pseudo-SLC Caching    Supported
    DRAM Buffer           Yes, capacity unknown
    TCG Opal Encryption   No
    Power Management      DevSleep
    Warranty              3 years
    MTBF                  2,000,000 hours
    MSRP                  $59.99             $79.99             $139.99            $269.99

    Thanks to the higher endurance of 3D TLC NAND compared to planar flash memory made on leading-edge process nodes, ADATA declares a 2 million hour MTBF and offers a three-year limited warranty on its new SSDs. While the warranty is standard for modern solid-state drives, the MTBF rating is 0.5 million hours higher than that of the company's current-generation entry-level SSDs.

    At present, ADATA already has a number of affordable SATA SSDs (e.g., the Premier SP550 and SP580) based on planar TLC NAND flash memory. The company specifically noted in its press release that the new Ultimate SU800 will be faster than its existing entry-level models (and will provide a higher MTBF). As a result, the new SSDs will be positioned above the currently available inexpensive models.

    Now, about the retail pricing. ADATA plans to charge $60, $80, $140 and $270 for 128 GB, 256 GB, 512 GB and 1 TB versions of its Ultimate SU800 SSDs, but they are not the cheapest in the company’s model range (even though the price of the 256 GB SKU seems very competitive). Moreover, next month the company plans to introduce another family of 3D NAND-based drives (the Ultimate SU900) with higher performance.

  • iOS 9.3.5 disponible (MacBidouille)

    Apple propose une nouvelle mise à jour d'iOS, la 9.3.5.
    Comme la précédente, sa raison d'être est de combler des failles de sécurité.

    Il y en aurait trois de comblées selon le New York Times

    [MàJ] Ars Technica rapporte que ces trois failles étaient exploitées depuis un moment et même commercialisées par une société spécialisée. Elles permettent via un malware (baptisé Pegasus) de pénétrer les appareils iOS et d'accéder au contenu, y compris les échanges de données par messagerie instantanée.

    Ce serait l'attaque la plus sophistiquée jamais mise en oeuvre sur les appareils iOS.

    Il est donc urgent de faire la mise à jour de ces appareils même si le "propriétaire" de ce code le loue très cher pour attaquer des personnes en particulier, pour l'essentiel actuellement des dissidents politiques dans les Emirats.

  • PlayStation Network : Sony active la vérification en deux étapes (Génération NT: logiciels)
    Une couche de sécurité supplémentaire est déployée pour les comptes du PlayStation Network avec la possibilité d'une vérification en deux étapes.
  • Intel Launches 3D NAND SSDs For Client And Enterprise (AnandTech)

    Today Intel is announcing a variety of new SSDs with their 3D NAND flash memory. The new models use a mix of 3D MLC and 3D TLC, some SATA and some PCIe, and variously target the consumer, business, embedded and data center markets. While we are still awaiting details on the timing of these product releases, it is clear that Intel is eager to put planar flash behind them. The drive for this is especially strong as the models being replaced are all either based on Intel's relatively expensive 20nm flash or on 16nm flash that Intel had to buy on the open market due to their decision to not participate in the 16nm node at IMFT.

    Product Series   Market            Interface              3D NAND
    SSD 600p         Consumer Client   M.2 PCIe 3 x4          TLC
    SSD Pro 6000p    Business Client   M.2 PCIe 3 x4          TLC
    SSD E 6000p      Embedded, IoT     M.2 PCIe 3 x4          TLC
    SSD E 5420s      Embedded, IoT     2.5" and M.2 SATA      MLC
    SSD DC S3520     Data Center       2.5" and M.2 SATA      MLC
    SSD DC P3520     Data Center       U.2 and PCIe x4 HHHL   MLC

    First up, we have an M.2 PCIe SSD branded three different ways for three different markets. In the consumer market we have the SSD 600p series, while the business market will get the Pro 6000p series. The specs released so far differ only in mentioning that the Pro 6000p series will be supported by the remote secure erase feature of Intel's Active Management Technology. The third variant, for the embedded and Internet of Things market, will only get the two smallest capacities, which gives us a look at how this design will perform with the limited parallelism that results from using IMFT's high-capacity 384Gb 3D TLC die.
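
    That parallelism point is easy to quantify: with 384 Gb (48 GB) of raw flash per die, the small capacities leave a four-channel controller with very few dies to interleave across. A rough estimate (it ignores overprovisioning and spare area):

        # Approximate die counts for the quoted capacities at 384 Gb per die
        die_gb = 384 / 8                      # 48 GB of raw flash per TLC die
        for capacity_gb in (128, 256, 360, 512, 1024):
            print(capacity_gb, "GB ->", round(capacity_gb / die_gb, 1), "dies")
        # 128 GB works out to fewer than 3 dies, hence the limited parallelism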

    Intel Client and Embedded PCIe SSDs
    Model             Pro 6000p / 600p                     E 6000p (256GB / 128GB)   750
    Capacity          128GB, 256GB, 360GB, 512GB, 1024GB   256GB / 128GB             400GB, 800GB, 1.2TB
    NAND              IMFT 32-layer 3D TLC                 IMFT 32-layer 3D TLC      IMFT 20nm MLC
    Interface         M.2 2280 PCIe 3 x4 (single-sided)    M.2 2280 PCIe 3 x4        U.2 or PCIe 3 x4 HHHL
    Sequential Read   up to 1800 MB/s                      1570 MB/s / 770 MB/s      up to 2500 MB/s
    Sequential Write  up to 560 MB/s                       540 MB/s / 450 MB/s       up to 1200 MB/s
    4kB Random Read   up to 155k IOPS                      71k / 35k IOPS            up to 460k IOPS
    4kB Random Write  up to 128k IOPS                      112k / 91.5k IOPS         up to 290k IOPS
    Idle Power        10mW                                 10mW                      4W
    Warranty          5 years                              5 years                   5 years

    The 600p and 6000p series are a much more mainstream design than Intel's previous NVMe SSD for the client market. The SSD 750 was a thinly-disguised enterprise drive, with power consumption and physical dimensions that are far too big for the M.2 form factor that has become the preferred choice for client PCIe storage. The SSD 750 was in many ways overkill from the start, and more recent M.2 drives (especially from Samsung) have caught up in peak performance to offer a much better value for typical client usage. The 600p will be going after the client PCIe storage market from the opposite end: as one of the first TLC PCIe SSDs, its performance specifications don't set any records but it will be a much more value-oriented product than any of the M.2 PCIe SSDs currently on the market. Intel has confirmed that the 600p and 6000p are using a third-party controller. UPDATE: Allyn Malventano at PC Perspective has uncovered a forum post with an uncensored picture of the 600p. The controller has "SMI" in big letters, suggesting that it is a Silicon Motion SM2260 or relative thereof, but with different markings than the samples Silicon Motion has been showing off at conventions. Intel has also used Silicon Motion controllers in drives like the 540s.


    SSD 600p

    In addition to the SSD E 6000p, there is a new series of SATA drives for the embedded market. The SSD E 5420s series consists of a 240GB 2.5" drive and a 150GB M.2 drive, both with 3D MLC and full power loss protection. The E 5420s is rated for one drive write per day, a substantial improvement over the 0.3 DWPD rating of the E 5410s or the 20GB/day of the E 5400s.

    Intel Embedded/IoT SATA SSDs
    Model             E 5420s (2.5" / M.2)    E 5410s          E 5400s
    Capacity          240GB / 150GB           80GB, 120GB      48GB, 80GB, 120GB, 180GB
    NAND              IMFT 32-layer 3D MLC    16nm MLC         16nm TLC
    Interface         2.5" SATA / M.2 SATA    2.5" SATA        2.5" and M.2 SATA
    Sequential Read   320 MB/s / 165 MB/s     up to 475 MB/s   up to 560 MB/s
    Sequential Write  300 MB/s / 145 MB/s     up to 135 MB/s   up to 475 MB/s
    4kB Random Read   65k / 39k IOPS          up to 68k IOPS   up to 71k IOPS
    4kB Random Write  16k / 28k IOPS          up to 84k IOPS   up to 85k IOPS
    Warranty          5 years                 5 years          5 years


    SSD E 5420s

    Moving on to the data center products, the SSD DC S3520 is a new mid-range enterprise SATA SSD for read-oriented workloads and the third iteration of the S3500 series. The M.2 form factor has returned as an option after the DC S3510 series was only offered in the 2.5" form factor. As with the SATA drives for the embedded market, performance has decreased but endurance has been bumped up from 0.3 DWPD to 1 DWPD. The larger per-die capacity of the 3D MLC has caused the smallest capacity option to increase from 80GB to 150GB, but 1.6TB is still the largest option for the 2.5" form factor.

    Intel Enterprise SATA SSDs
    Model                      DC S3520 (2.5" / M.2)                         DC S3510
    Capacity                   2.5": 150GB, 240GB, 480GB, 800GB,             80GB, 120GB, 240GB, 480GB,
                               960GB, 1.2TB, 1.6TB                           800GB, 1.2TB, 1.6TB
                               M.2: 150GB, 240GB, 480GB, 760GB, 960GB
    NAND                       IMFT 32-layer 3D MLC                          16nm MLC
    Interface                  2.5" SATA / M.2 SATA                          2.5" SATA
    Sequential Read (up to)    450 MB/s / 410 MB/s                           500 MB/s
    Sequential Write (up to)   380 MB/s / 320 MB/s                           460 MB/s
    4kB Random Read (up to)    67.5k IOPS / 53k IOPS                         68k IOPS
    4kB Random Write (up to)   17k IOPS / 14.4k IOPS                         20k IOPS
    Endurance                  1 DWPD                                        0.3 DWPD
    Warranty                   5 years                                       5 years

    SSD DC S3520

    (UPDATED) Finally, for the enterprise PCIe space we have the SSD DC P3520. In March the DC P3320 was announced as Intel's first 3D NAND SSD and the P3520 was mentioned but specifications were not provided at that time. Intel has since decided to only produce the P3520 and to price it close to the level of SATA SSDs. The reduced performance relative to the DC P3500 is a consequence of reduced parallelism at the same capacity that results from using the 256Gb 3D MLC rather than 128Gb 20nm MLC, and the size of this performance regression is a bit dismaying. The DC P3520 is clearly based on the same hardware platform as the rest of the PCIe data center drives, with a familiar layout for the PCB and heatsink evident in the add-in card version.

    Intel Enterprise PCIe SSDs
    Model                          DC P3520                       DC P3320 (canceled)            DC P3500
    Capacity                       450GB (U.2 only), 1.2TB, 2TB   450GB (U.2 only), 1.2TB, 2TB   400GB, 1.2TB, 2TB
    NAND                           IMFT 32-layer 3D MLC           IMFT 32-layer 3D MLC           IMFT 20nm MLC
    Interface                      U.2 and PCIe 3 x4 HHHL         U.2 and PCIe 3 x4 HHHL         U.2 and PCIe 3 x4 HHHL
    Sequential Read (up to)        1700 MB/s                      1600 MB/s                      2700 MB/s
    Sequential Write (up to)       1350 MB/s                      1400 MB/s                      1800 MB/s
    4kB Random Read (up to)        375k IOPS                      365k IOPS                      430k IOPS
    4kB Random Write (up to)       26k IOPS                       22k IOPS                       28k IOPS
    4kB Random 70/30 R/W (up to)   80k IOPS                       65k IOPS                       80k IOPS
    Warranty                       5 years                        5 years                        5 years


    SSD DC P3520 U.2

    These new SSDs will have a staggered release over the rest of the year. Starting next week the DC P3520 will be shipping, as well as the 128GB, 256GB and 512GB capacities of the SSD 600p and SSD Pro 6000p. The 2.5" DC S3520 will ship in early September. The rest are planned to be available in Q4.

  • Hot Chips 2016: NVIDIA Discloses Tegra Parker Details (AnandTech)

    At CES 2016 we saw that DRIVE PX2 had a new Tegra SoC in it, but to some extent NVIDIA was still being fairly cagey about what was actually in this SoC or what the block diagram for any of these platforms really looked like. Fortunately, at Hot Chips 2016 we finally got to see some details around the architecture of both Tegra Parker and DRIVE PX2.

    Starting with Parker, this is an SoC that has been a long time coming for NVIDIA. The codename and its basic architectural composition were announced all the way back at GTC in 2013, as the successor to the Logan (Tegra K1) SoC. However Erista (Tegra X1) was later added mid-generation - and wound up being NVIDIA's 20nm generation SoC - so until now the fate of Parker has not been clear. As it turns out, Parker is largely in line with NVIDIA's original 2013 announcement, except instead of a Maxwell GPU we get something based off of the newer Pascal architecture.

    But first, let's talk about the CPU. The CPU complex has been disclosed as a dual-core Denver 2 combined with a quad-core Cortex A57, with the entire SoC built on TSMC's 16nm FinFET process. This marks the second SoC to use NVIDIA's custom-developed ARM CPU core, the first being the Denver version of the Tegra K1. Relative to K1, Parker (I suspect NVIDIA doesn't want to end up with TP1 here) represents both an upgrade to the Denver CPU core itself and a restructuring of NVIDIA's overall CPU complex, with a quartet of ARM Cortex-A57 cores joining the two Denver 2 cores.

    The big question for most readers, I suspect, is about the Denver 2 CPU cores. NVIDIA hasn't said a whole lot about them - bearing in mind that Hot Chips is not an exhaustive deep-dive style architecture event - so unfortunately there's not a ton of information to work with. What NVIDIA has said is that they've worked to improve the overall power efficiency of the cores (though I'm not sure if this factors in 16nm FinFET or not), including by implementing some new low power states. Meanwhile on the performance side of matters, NVIDIA has confirmed that this is still a 7-wide design, and that Denver 2 uses "an improved dynamic code optimization algorithm." What little that was said about Denver 2 in particular was focused on energy efficiency, so it may very well be that the execution architecture is not substantially different from Denver 1's.

    With that in mind, the bigger news from a performance standpoint is that with Parker, the Denver CPU cores are not alone. For Parker the CPU has evolved into a full CPU complex, pairing up the two Denver cores with a quad-core Cortex-A57 implementation. NVIDIA cheekily refers to this as "Big + Super", a subversion of ARM's big.LITTLE design, as this combines "big" A57 cores with the "super" Denver cores. There are no formal low power cores here, so when it comes to low power operation it looks like NVIDIA is relying on Denver.

    That NVIDIA would pair up Denver with ARM's cores is an interesting move, in part because Denver was originally meant to solve the middling single-threaded performance of ARM's earlier A-series cores. Secondary to this was avoiding big.LITTLE-style computing by making a core that could scale across the full range. For Parker this is still the case, but NVIDIA seems to have come to the conclusion that both responsiveness and the total performance of the CPU complex needed to be addressed. The end result is the quad-core A57 cluster joining the two Denver cores.

    NVIDIA didn't just stop at adding A57 cores though; they also made the design a full Heterogeneous Multi-Processing (HMP) design. A fully coherent HMP design at that, utilizing a proprietary coherency fabric specifically to allow the two rather different CPU cores to maintain that coherency. The significance of this - besides the unusual CPU pairing - is that it should allow NVIDIA to efficiently migrate threads between the Denver and A57 cores as power and performance require it. This also allows NVIDIA to use all 6 CPU cores at once to maximize performance. And since Parker is primarily meant for automotive applications - featuring more power and better cooling - unlike mobile environments it's entirely reasonable to expect that the design can sustain operation across all 6 of those CPU cores for extended periods of time.

    Overall this setup is very close to big.LITTLE, except with the Denver cores seemingly encompassing parts of both "big" and "little" depending on the task. With all of that said however, it should be noted that NVIDIA has not had great luck with multiple CPU clusters; Tegra X1 featured cluster migration, but it never seemed to use its A53 CPU cores at all. So without having had a chance to see Parker's HMP in action, I have some skepticism on how well HMP is working in Parker.

    Overall, NVIDIA is claiming about 40-50% more overall CPU performance than A9x or Kirin 950, which is to say that if your workload can take advantage of all 6 CPUs in the system then it’s going to be noticeably faster than two Twister CPUs at 2.2 GHz. But there’s no comparison to Denver 1 (TK1) here, or any discussion of single-thread performance. Though on the latter, admittedly I'm not sure quite how relevant that is for NVIDIA now that Parker is primarily an automotive SoC rather than a general purpose SoC.

    Outside of the CPU, NVIDIA has added some new features to Parker such as doubling memory bandwidth. For the longest time NVIDIA stuck with a 64-bit memory bus on what was essentially a tablet SoC lineup, which despite what you may think from the specs worked well enough for NVIDIA, presumably due to their experience in GPU designs, and as we've since learned, compression & tiling. Parker in turn finally moves to a 128-bit memory bus, doubling the aggregate memory bandwidth to 50GB/sec (which works out to roughly LPDDR4-3200).
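
    The back-of-the-envelope math behind that figure:

        # 128-bit bus at LPDDR4-3200 data rates
        bus_bits = 128
        transfers_per_s = 3200e6              # 3200 MT/s
        bandwidth = (bus_bits / 8) * transfers_per_s
        print(bandwidth / 1e9)                # 51.2 GB/s, i.e. the quoted ~50 GB/s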

    More interesting however is the addition of ECC support to the memory subsystem. This seems to be in place specifically to address the automotive market by improving the reliability of the memory and SoC. A cell phone and its user can deal with the rare bitflip; things like self-driving vehicles can't afford the same luxury. Though I should note it's not clear whether ECC support is just some kind of soft ECC for the memory or if it's hardwired ECC (NVIDIA calls it "in-line" DRAM ECC). But it's clear that whatever it is, it extends beyond the DRAM, as NVIDIA notes that there's ECC or parity protection for "key on-die memories", which is something we'd expect to see on a more hardened design like NVIDIA is promoting.

    Finally, NVIDIA has also significantly improved their I/O functionality, which again is being promoted particularly with the context of automotive applications. There’s more support for extra cameras to improve ADAS and self-driving systems, as well as 4Kp60 video encode, CAN bus support, hardware virtualization, and additional safety features that help to make this SoC truly automotive-focused.

    The hardware virtualization of Parker is particularly interesting. It's both a safety feature - isolating various systems from each other - while also allowing for some cost reduction on the part of the OEM, as there is less need to use separate hardware to avoid a single point of failure for critical systems. There’s a lot of extra logic going on to make this all work properly, and things like running the dual Parker SoCs in a soft lockstep mode are also possible. In the case of DRIVE PX2, an Aurix TC297 functions as a safety system and controls both of the Parker SoCs, with a PCI-E switch connecting the SoCs to the GPUs and to each other.

    Meanwhile, it's interesting to note that the GPU of Parker was not a big part of NVIDIA's presentation. Part of this is because Parker's GPU architecture, Pascal, has already launched in desktops and is essentially a known quantity now. At the same time, Parker's big use (at least within NVIDIA) is for the DRIVE PX2 system, which is going to be combining Parker with a pair of dGPUs. So in the big picture Parker's greater role is in its CPUs, I/O, and system management rather than its iGPU.

    Either way, NVIDIA's presentation confirms that Parker integrates a 256 CUDA core Pascal design. This is the same number of CUDA cores as on TX1, so there has not been a gross increase in GPU hardware. At the same time, moving from TSMC's 20nm planar process to their 16nm FinFET process did not significantly increase transistor density, so there's also not a lot of new space to put GPU hardware in. NVIDIA quotes an FP16 rate of 1.5 TFLOPs for Parker, which implies a GPU clockspeed of around 1.5GHz. This is consistent with other Pascal-based GPUs, in that NVIDIA seems to have invested most of their 16nm gains into ramping up clockspeeds rather than building wider GPUs.
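
    Working backwards from the quoted FP16 rate, and assuming (as on other Pascal parts) one FMA per CUDA core per clock and double-rate FP16:

        # Implied clock from 1.5 TFLOPs FP16 on 256 CUDA cores
        cuda_cores = 256
        fp16_flops = 1.5e12
        clock_hz = fp16_flops / (cuda_cores * 2 * 2)   # FMA = 2 FLOPs, FP16 = 2x rate
        print(clock_hz / 1e9)                          # ~1.46 GHz, i.e. "around 1.5GHz"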

    As the unique Maxwell implementation in TX1 was already closer to Pascal than any NVIDIA dGPU - in particular, it supported double rate FP16 when no other Maxwell did - the change from Maxwell to Pascal isn't as dramatic here. However some of Pascal's other changes, such as fine-grained context switching for CUDA applications, seems to play into Parker's other features such as hardware virtualization. So Pascal should still be a notable improvement over Maxwell for the purposes of Parker.

    Overall, it’s interesting to see how Tegra has evolved from being almost purely a mobile-focused SoC to a truly automotive-focused SoC. It’s fairly obvious at this point that Tegra is headed towards higher TDPs than what we’ve seen before, even higher than small tablets. Due to this automotive focus it’ll be interesting to see whether NVIDIA starts to integrate advanced DSPs or anything similar or if they continue to mostly rely on CPU and GPU for most processing tasks.

  • Hot Chips 2016: Exynos M1 Architecture Disclosed (AnandTech)

    While we can always do black-box testing to try and get a handle on what a CPU core looks like, there’s really only so much you can do given limited time and resources. In order to better understand what an architecture really looks like, a vendor disclosure is often going to be as good as it gets for publicly available information. The Exynos M1 CPU architecture is Samsung’s first step into a custom CPU architecture for a mobile SoC. Custom CPU architectures are hardly a trivial undertaking, so it’s unlikely that a company would make the investment solely for a marketing bullet point.

    With that said, Samsung has provided some background for the Exynos M1, claiming that the design process started about 3 years ago in 2013, around the time of the launch of the Galaxy S4. Given the issues that we saw with the Cortex A15 in the Exynos 5410, it's not entirely surprising that this could have been the catalyst for a custom CPU design. However, this is just idle speculation and I don't claim to have any knowledge of what actually led to the Exynos M1.

    At a high level, Samsung pointed out that the Exynos M1 is differentiated from other ARM CPU designs by advanced branch prediction, roughly four instructions decoded per cycle, as well as the ability to dispatch and retire four instructions per cycle. As the big core in the Exynos 8890, it obviously is an out of order design, and there are some additional claims of multistride/stream prefetching and improved cache design.

    Starting with branch prediction, the major highlight here is that the branch predictor uses a perceptron of sorts to reduce the rate at which branches miss. If you understand how pipelining works, you'll know that it takes a significant amount of time to reload saved state and squash the execution that occurred after an incorrectly predicted branch. I’m no expert here, but it looks like this branch predictor also has the ability to make multiple branch predictions at the same time, either as a sort of multi-level branch predictor or by handling multiple successive branches. Perceptron branch prediction isn't exactly new in academia or in real-world CPUs, but it's interesting to see it specifically called out when most companies are reluctant to disclose such matters.
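
    For readers unfamiliar with the technique, here is a minimal sketch of a classic perceptron predictor in the style of Jiménez and Lin's 2001 paper. This is a generic illustration, not Samsung's actual design; the table size and history length are invented for the example.

        HISTORY_LEN = 16       # global history bits per perceptron (assumed)
        TABLE_SIZE = 512       # number of perceptrons in the table (assumed)
        THRESHOLD = int(1.93 * HISTORY_LEN + 14)  # training threshold from the paper

        weights = [[0] * (HISTORY_LEN + 1) for _ in range(TABLE_SIZE)]
        history = [1] * HISTORY_LEN               # +1 = taken, -1 = not taken

        def predict(pc):
            # dot product of the stored weights with the global history, plus bias
            w = weights[pc % TABLE_SIZE]
            y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))
            return y, y >= 0                      # predict taken if non-negative

        def update(pc, taken):
            y, predicted_taken = predict(pc)
            t = 1 if taken else -1
            # train only on a mispredict or when confidence is below the threshold
            if predicted_taken != taken or abs(y) <= THRESHOLD:
                w = weights[pc % TABLE_SIZE]
                w[0] += t
                for i, hi in enumerate(history):
                    w[i + 1] += t * hi
            history.pop(0)                        # shift the new outcome in
            history.append(t)

    Each mispredict the dot product avoids is a pipeline flush saved, which is exactly the cost described above.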

    Moving past branch prediction we can see some elements of how the L1 I$ is set up: 64 KB, 4-way set associative with 128-byte lines (most plausibly 128 sets of 4 lines each, given the figures quoted), with a 256-entry TLB dedicated to faster virtual address translation for instructions. The cache can read out 24 bytes per cycle, or 6 instructions if the program isn’t using Thumb instruction encoding.
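
    The quoted numbers are self-consistent under that reading; using the standard relation size = sets x ways x line size:

        # L1 I-cache: 64 KB with 128-byte lines and 4 ways
        icache_sets = (64 * 1024) // (4 * 128)   # -> 128 sets (512 lines total)
        # L1 D-cache (described below): 32 KB, 8-way, 64-byte lines
        dcache_sets = (32 * 1024) // (8 * 64)    # -> 64 sets
        print(icache_sets, dcache_sets)          # 128 64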

    On the instruction side we find the decode, rename, dispatch, and retire stages. The decode stage can handle up to 4 instructions per clock, and the rename, dispatch, and retire systems are likewise capable of handling four instructions every cycle, so best-case throughput is four instructions per cycle, assuming each ARM instruction decodes to a single micro-operation.

    Other areas of interest include the disclosure of a 96 entry reorder buffer, which defines how many instructions can be in-flight at any given time. Generally speaking more entries is better for extracting ILP here, but it’s important to understand that there are some significant levels of diminishing returns in going deeper, so doubling the reorder buffer doesn’t really mean that you’re going to get double the performance or anything like that. With that said, Cyclone’s reorder buffer size is 192 entries and the Cortex A72 has 128 entries, so the size of this buffer is not really anything special and is likely a bit smaller in order to cut down on power consumption.

    For integer execution the Exynos M1 has seven execution ports, with most execution pipelines getting their own dedicated schedulers. It's worth noting that the branch monitor can be fed 2 µops per cycle. On the floating point side it looks like almost everything shares a single 32-entry scheduler; a floating point multiply-accumulate has a 5-cycle latency and a floating point multiply a 4-cycle latency, while floating point addition is a 3-cycle operation.

    For loads and stores, a 32 KB, 8-way set associative cache with a 64-byte line size is used, alongside a 32-entry dTLB and a 1024-entry L2 dTLB to hold address translations. The load/store unit allows out-of-order loads and stores to reduce visible memory latency. Up to 8 outstanding cache misses for loads can be held at any given time, which reduces the likelihood of stalling, and there are additional optimizations for prefetching as well as for other types of memory traffic.

    The L2 cache here is 2MB, shared across all cores and 16-way set associative. The memory is also split into 4 banks, has a 22-cycle latency, and has enough throughput to fill two AArch64 registers every cycle. If you look at the actual floorplan, the diagram shown is fairly indicative of how it actually looks on the die.

    Samsung also highlighted the pipeline of the Exynos M1 CPU at a high level. If you're familiar with how CPUs work you'll be able to see how the basic stages of instruction fetch, decode, execution, memory read/write, and writeback are all present here. Of course, due to the out of order nature of this CPU there are also register rename, dispatch, and scheduling stages.

    It's fairly rare to see these kinds of in-depth floorplan shots from the designers themselves, so this slide alone is interesting. I don't have a ton to comment on here, but it's interesting to see the distances of all the components of the CPU from the center of the core, where most of the execution happens.

    Overall, for Systems LSI's first mobile CPU architecture, it's impressive just how quickly they turned out a solid design: three years from inception to execution. It'll be interesting to see what they do next once this design division really hits its stride. CPU design is pipelined to some extent, so even if it takes three years to design one, if the mobile space as a whole is anything to go by then it's likely that we'll be seeing new implementations and designs from this group in the next year or two. Given the improvements we've seen from the Exynos 5420 to the 7420, it isn't entirely out of the question that we could see much more aggressive execution here in the near future, but without a crystal ball it's hard to say until it happens.

  • Zotac ZBOX MAGNUS EN980 SFF PC Review - An Innovative VR-Ready Gaming Powerhouse (AnandTech)

    The PC market has been subject to challenges over the last several years. However, gaming systems and small form-factor (SFF) PCs have weathered the storm particularly well. Many vendors have tried to combine the two, but space constraints and power concerns have ended up limiting the gaming performance of such systems. Zotac, in particular, has been very active in this space with their E-series SFF PCs. The Zotac ZBOX MAGNUS EN980 that we are reviewing today is the follow-up to last year's MAGNUS EN970, which combined a Broadwell-U CPU with a GTX 970M (rebadged as a GTX 960). The EN980's full-blown 65W Core i5-6400 Skylake desktop CPU and no-holds-barred VR-ready desktop GTX 980, coupled with an all-in-one watercooling solution, seem to have addressed the EN970's shortcomings. Read on to find out how the unit performs in our rigorous benchmarking and evaluation process.

  • Windows 10: the latest cumulative update breaks PowerShell (Génération NT: logiciels)
    Moving to Windows 10 build 14393.82 has a negative impact on PowerShell in certain usage scenarios.
  • Universal Music will no longer grant exclusives to certain platforms (MacBidouille)

    Apple, like its competitors in the music business, is constantly seeking partnerships to offer artists, tracks and albums as exclusives. This is a very significant competitive advantage and a way to stand out.
    Billboard reports that the head of Universal Music has stated that his company will no longer sign such deals.

    This is said to be the consequence of a recent partnership with Apple from which Universal Music ultimately did not come out a winner.

    It will certainly be one more incentive for Apple to take the plunge and actually start producing artists itself, in short, to become a major label in its own right.

  • USB 3.1 at 10 Gbit/s is coming to the Mac (MacBidouille)

    Today's Macs support the USB 3.0 standard. Soon, if the code of the latest Sierra beta (dissected by 9to5Mac) is to be believed, they will also support its faster variant, USB 3.1 at 10 Gbit/s.

    This is good news that will make it possible to support faster peripherals in the future, external SSDs in particular.

    Knowing Apple, it would not be surprising if this new version of USB arrived together with Thunderbolt 3 at 40 Gbit/s, whose specification allows USB 3.1 traffic to be carried over the same connector.

    Still, the arrival of USB 3.1 interests us more than that of Thunderbolt 3. The Thunderbolt standard never brought about any significant change for the general public, and it has remained marginal even among professionals.
    Its only real merit is that a single connector can carry very different protocols: Thunderbolt, USB, DisplayPort...

  • Job offer (MacBidouille)

    At Cliczone

    We have two permanent, 39-hour-per-week positions (CDI) to fill shortly.

    The positions are mainly on-site at client premises, with some travel around Paris.

    Your mission, as an autonomous Apple systems and network technician, will be to provide assistance to users and to deploy and maintain networked client workstations (DTP, office productivity, web). You may also be asked to manage and administer Mac OS X servers as well as backup systems.

    You must have proven skills with the Mac OS X client, as well as a good grasp of networking (switches, routers, VPNs, etc.).

    You should also have solid experience writing procedures.

    A passion for the Mac and for service will be valued as much as, if not more than, certifications and other diplomas :-)

    Applications should be sent to recrutement@cliczone.fr

  • Epic Games and Unreal Engine: forums hacked (Génération NT: logiciels)
    Another data breach, this time hitting the Epic Games forums. More than 800,000 user accounts are affected.
  • ARM is putting more vector processing into its future chips (MacBidouille)

    Longtime readers will certainly remember the G4 processor. It stood out from the G3 thanks to its AltiVec vector instructions, which dramatically sped up the processing of certain kinds of data and allowed it to pull ahead of the G3.
    Intel also has such instructions in its processors, under the AVX name. Intel's vector instruction sets are more complex than AltiVec was.

    The point of these vector instructions is to perform several computations simultaneously. Roughly speaking, a single 128-bit vector instruction can process four 32-bit values at once. Most Intel processors use 256-bit AVX, and the chipmaker is starting to offer 512-bit AVX on some of its processors.

    To get back to the subject of this news item, ARM also has vector instructions, called NEON. They have, however, hardly evolved in depth for a very long time, since before the first iPhone.
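
    To make this concrete, here is a minimal sketch in C of what such a fixed-width vector loop looks like with ARM's current NEON intrinsics from <arm_neon.h> (the function and array names are ours, purely for illustration): each vaddq_f32 performs four 32-bit floating-point additions with a single 128-bit instruction.

        #include <arm_neon.h>

        /* Adds two float arrays, 4 elements per iteration.
           Assumes n is a multiple of 4 for brevity. */
        void add_arrays_neon(const float *a, const float *b, float *c, int n)
        {
            for (int i = 0; i < n; i += 4) {
                float32x4_t va = vld1q_f32(&a[i]);   /* load 4 floats from a */
                float32x4_t vb = vld1q_f32(&b[i]);   /* load 4 floats from b */
                float32x4_t vc = vaddq_f32(va, vb);  /* 4 additions at once */
                vst1q_f32(&c[i], vc);                /* store 4 results */
            }
        }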

    Things are about to change. ARM has announced its new vector instruction set, called SVE, for Scalable Vector Extension.

    On paper, this should bring very significant gains in certain kinds of processing, all the more so because ARM is keeping the SVE framework very loose. It will be possible to create implementations (or rather transistor designs) anywhere from 128 to 2048 bits wide, enough in the latter case to process 64 32-bit values at the same time.
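
    What makes SVE unusual is that code does not commit to a vector width at all. Below is a hedged sketch of the same loop as before, written in SVE's vector-length-agnostic style using the intrinsics ARM documents in <arm_sve.h> (again, the names are illustrative and it assumes an SVE-aware toolchain, e.g. GCC or LLVM with -march=armv8-a+sve). svcntw() reports how many 32-bit lanes the hardware actually provides, so the very same binary would process 4 values per iteration on a 128-bit implementation and 64 on a 2048-bit one.

        #include <arm_sve.h>

        /* Adds two float arrays without knowing the vector width.
           The predicate pg masks off lanes past the end of the arrays,
           so no scalar cleanup loop is needed. */
        void add_arrays_sve(const float *a, const float *b, float *c, int n)
        {
            for (int i = 0; i < n; i += svcntw()) {
                svbool_t pg = svwhilelt_b32_s32(i, n);    /* active lanes: i..n-1 */
                svfloat32_t va = svld1_f32(pg, &a[i]);    /* predicated load */
                svfloat32_t vb = svld1_f32(pg, &b[i]);    /* predicated load */
                svfloat32_t vc = svadd_f32_x(pg, va, vb); /* add active lanes */
                svst1_f32(pg, &c[i], vc);                 /* predicated store */
            }
        }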

    Quite clearly, this move is ARM's bid to win over the supercomputing world, which is already starting to work with its chips.

    With a bit of luck, the rise of ARM chips should also convince Intel to throw itself back into a performance race that the company no longer needs to run, given its more than comfortable lead over AMD.

  • Tim Cook just received a nice bonus for his 5 years at Apple (MacBidouille)

    Tim Cook has just celebrated five years at the head of Apple. Before leaving the company, Steve Jobs wanted to lock in its leadership by offering key executives a bonus payable after five years (and another after ten). It is under that arrangement that Tim Cook has just received his payout: 100 million dollars.

    If he stays until 2021, his next bonus should come to 500 million dollars.

    Note, however, that this bonus is tied not only to his continued presence but also to the performance of Apple's stock.

  • NVIDIA Announces Paragon Game Bundle For GeForce Video Cards (AnandTech)

    As game releases ramp up and GPU manufacturers build market interest ahead of the fall season, bundles seem to be popping up like daisies. The latest to join the fray is NVIDIA’s Paragon Game Ready Pack.

    Paragon is the latest game out of Epic Games. The free-to-play MOBA is set to release later this year and runs on Epic’s own Unreal Engine 4. NVIDIA’s newest bundle, starting today, includes a number of perks, both cosmetic and gameplay-related, that NVIDIA values at $115. Included in the bundle are 1,000 Paragon Coins, the in-game currency used to buy Master Challenges, skins, and boosts. Two skins, a Snakebite Murdock skin and a War Chief Grux skin, are also included. Last on the list are seven Master Hero Challenges, each of which includes a Challenge skin for its character that bestows additional XP rewards from levels one through five and provides additional rewards from levels six through ten, after which the Master skin and the title of Master are unlocked. The Paragon Game Ready Pack will run for a relatively short period of time, from now until September 19, 2016.

    NVIDIA Current Game Bundles
    Video Card                     Bundle
    GeForce GTX Titan X (Pascal)   None
    GeForce GTX 1080/1070/1060     Paragon Game Ready Pack
    GeForce GTX Titan X            None
    GeForce GTX 980 Ti/980/970     Paragon Game Ready Pack
    GeForce GTX 960/950            None
    GeForce GTX 980 For Notebooks  Paragon Game Ready Pack
    GeForce GTX 980M/970M          Paragon Game Ready Pack
    GeForce GTX 965M And Below     None

    Many of NVIDIA’s mainstream and high-end cards are included. NVIDIA lists both NCIX and Newegg as participating parts vendors, along with various system builders. Though as always, prospective buyers are encouraged to check before purchase that their retailer and card of choice are indeed participating in the bundle.

  • AMD Bundles Together Deus Ex: Mankind Divided and AMD FX CPUs (AnandTech)

    Buying new hardware is almost always exciting, and purchases are even sweeter when they come with included gifts. In the spirit of Deus Ex: Mankind Divided receiving DirectX 12 support in the coming weeks, AMD is bundling the game with select AMD FX CPUs through participating retailers.

    The Deus Ex series has been revered since its inception. The breadth of player choice, storytelling, and player customization it offered was a big step forward for gaming, and the original Deus Ex is still regarded by many as the best PC game ever made. With varying degrees of success, the series has continued to build on the formula it established back in 2000.

    Deus Ex: Mankind Divided follows the aftermath of Deus Ex: Human Revolution and continues the series' social commentary on human augmentation. Following the golden era of human augmentation, augmented humans are now deemed outcasts and separated from the rest of society. Against this backdrop, the player continues as Adam Jensen, now an experienced covert agent, intent on unraveling the growing conspiracy against mankind.

    Starting this week, AMD is offering bundles through participating vendors on select 6- and 8-core FX processors. The purpose behind this bundle (besides clearing out FX inventory, I suspect) is to further promote DirectX 12, albeit in a roundabout fashion. In the coming weeks, Mankind Divided will receive a patch enabling DX12 support, which will bring better multi-threading for high-core-count CPUs. Given the relatively weak single-threaded performance of AMD's CPUs, this should help them close the performance gap.

    Anyhow, you’ll want to check with your retailer before purchase, but Newegg and the rest of the usual suspects should be participating. While looking around on Newegg, I noticed that not all FX CPUs are listed as eligible, so vigilance is advised even at participating retailers. The promotion will run from August 23, 2016 to November 14, 2016, or until game codes run out, whichever comes first.

    Those wishing to read more about Deus Ex: Mankind Divided, find a vendor, or redeem a code can do so at the AMD Deus Ex: Mankind Divided promotion page.

  • Windows 10 Anniversary Update: build 14393.82 for everyone (Génération NT: logiciels)
    A new cumulative update means fewer problems for the Windows 10 Anniversary Update, though not necessarily the ones that were expected.