SUMMARY Today’s technology delivers high speeds in pocket format. Investing in a piece of storage that can also be used for more than just a backup is more than convenient. The Samsung T5 external SSD is recommended for a good backup of a nice music collection.
PLUS POINTS: Versatile, Reliable, Compact size, Speed
We all listen to music more and more via the various streaming services. Spotify, Tidal, SoundCloud, Apple Music: you name it. Sometimes it seems as if no one has physical music media at home anymore. Or do they?
There is a discrepancy to begin with. The audio enthusiast knows that music from those streaming services is often heavily compressed. A streaming service, and point of sale, that does focus on audio quality is for example Qobuz, where it is also possible to buy high-quality music. And there are plenty of other places online to buy high-quality music. Many of us at the editorial office therefore have a server or NAS at home for storing that music, which we can also use as a source to stream music over the network. A common set-up among music and film lovers, but it does require one thing: a backup!
Backup Importance If a hard drive in your server or NAS decides to retire, you can wave your music goodbye. You will not get it back. A way to at least partially mitigate this is a RAID configuration, but it is best to always make a backup of your valuable digital collection. And preferably several, because one backup is no backup.
For that reason today we are going to get started with a Samsung T5 portable SSD. Samsung has been the market leader in SSD storage for several years. The brand’s Evo series has also won multiple awards. The Samsung T5 is very small in size. The device literally fits in your palm. Thanks to the housing, the T5 is shockproof, password-protected and uses Samsung V-NAND flash memory.
The use of large, heavy and often slow mechanical hard drives is no longer entirely of this era; they are mainly aimed at the really large amounts of storage in servers. The demand for faster data transfer, on the other hand, grows by the day. The Samsung T5 has a USB 3.1 Gen 2 connection and a transfer speed of up to 540 MB per second. This makes it possible to watch films or listen to music directly from the external SSD. Even running an operating system, or games, from it is possible.
SSD Vs. HDD So what are the advantages of an SSD compared to an HDD? A solid-state drive no longer has any moving parts; data is stored on flash memory chips. That provides much more speed. Energy consumption is very low, an SSD is more reliable, and it produces little or no noise.
An HDD, or hard disk drive, is a magnetic metal disk that is written and read by an arm, like on a record player. Since it is mechanical, it makes noise. Seeking and writing are also done physically, and therefore slowly. Energy consumption is also a lot higher.
For this review we are going to back up a good 747 gigabytes of music. We connect the Samsung T5 to a Synology DS215+ NAS for this. Next, we need to format the external drive so that we can use the T5. We select all music and copy it to the T5 SSD. A real-life test, just as you could use the external SSD at home. We keep a close eye on how long this takes.
Samsung has developed this external SSD for a large target group. Because the T5 is so handy, you can take it anywhere. The option to protect the SSD with a password via an Apple or Windows computer will also appeal to many people. Looking at performance, we see that a small file (10 MB) is written at a rate of 526 MB per second. For larger files (read: a file of 5 gigabytes or more) we arrive at 421 MB per second. The read speed is comparable. Those are great scores for impatient people.
The full music backup ultimately takes just under four hours. That is quite fast; a mechanical hard disk cannot match that. Nice to know, too, if the backup ever needs to be restored.
Better safe than sorry A good backup is worth its weight in gold, and a second backup even more so. Something a music lover or collector with a nice digital collection should have. We experienced that ourselves years ago, when we lost a lot. We cannot emphasize the usefulness and necessity enough, but with an SSD there is an extra advantage: it can easily be connected to the radio in the car, provided it is equipped with a USB connection. So you have all your own music at hand, in good quality, even on the go.
Conclusion A good backup is a must, especially for the audio (and film) enthusiast. Today’s technology provides high speeds in pocket format. Investing in a piece of storage that can also be used for more than just a backup is more than convenient. Especially if the reliability is high. The Samsung T5 external SSD is recommended for a good backup of a beautiful collection of music.
The Chord 2go is the highly anticipated streaming module for the Hugo 2. Finally the owners of that excellent DAC no longer have to look enviously at those who made their cheaper Mojo wireless with the Poly. Thanks to the 2go you can connect the Hugo 2 directly to your network and stream music via Roon and other services.
Chord 2go What is the 2go? An interesting question, because you cannot just classify this device under a clearly defined product type. That’s because it is an extension for another device, and not a stand-alone device itself. But ‘expansion’ is also a bit too simplistic, because it does a lot. Add the 2go to the Hugo 2 DAC and you get a new hybrid device with considerably more possibilities.
The 2go actually does the same as the Poly does for the Mojo, but for the more expensive Hugo 2 DAC from Chord that we have already tested extensively. The concept remains the same: the 2go is an extension module that you attach to the DAC to add streaming. There are also two slots for microSD cards, with a maximum total capacity of 4 TB. Fill these with music and the device will also operate as a network player and as a media server on your network. Talk about versatile.
To repeat: you need the Hugo 2 to use the Chord 2go. The two fit together perfectly. So well, in fact, that if you attach the Chord module to the Hugo 2, you suddenly seem to have one new device. And actually you do.
A bit of screwdriver work
Somehow it’s amazing that it took Chord Electronics so long to come up with the 2go. With the Poly for the Mojo DAC, the British had already shown that the concept works: take an excellent DAC in its price range and turn it into a standalone streamer by means of an extra module. No need to open anything up, just click it on and set it up via an app. And hey presto: your DAC is a Roon endpoint which, thanks to the built-in battery, can serve as a completely wireless music source for listening with your headphones. It becomes an intriguing alternative to a DAP, and one much better suited to demanding headphones than a smartphone.
And yes, you can use the 2go/Hugo 2 on the move. After all, the Chord DAC is equipped with a battery, and the 2go can also be used on the road in hotspot mode. But we feel this is more of a theoretical scenario. We think it is much more likely that you will use this combined device at home, to listen to your music comfortably in high quality via headphones. Or you can use it as a streaming DAC in your hi-fi system. That makes much more sense with this Chord combination than with the Poly/Mojo, partly because the Hugo 2 has a full cinch output and two digital inputs (optical and coaxial). And also: there is an ethernet connection, which in a fixed system is the better option in terms of stability.
So why did the 2go take so long to arrive? We can only guess, but it is true that Chord Electronics has launched many other products recently. Think, for example, of the mighty Ultima amplifiers. Such a thing naturally demands a lot of focus within a hi-fi company.
Before you can start using the 2go, you need to unleash your inner do-it-yourselfer. Fortunately only for a short while, and without having to call on next-level skills. Why? The connection between the Hugo 2 and the 2go is made via two micro-USB connectors. However, even two of those plugs are insufficient in the long run to withstand the bending force that occurs when you pick up the device carelessly. Fortunately, Chord has found a solution. The box of the 2go contains two pins that you screw into existing cut-outs on the Hugo 2 (which immediately shows that the manufacturer planned for the 2go from the start). You carefully click the streaming module onto the DAC, after which you install two small Allen screws in the sides with the supplied key. This locks the streamer in place and gives you a solid, stable whole.
With the Mojo and Poly you do not have an additional locking mechanism, but the Hugo 2 and 2go have CNC-machined aluminum housings and are therefore a lot heavier. When combined, the unit weighs 180 grams. We would recommend a case for mobile use that protects both devices well. That is also a good idea with the Mojo and Poly. Chord itself sells cases; we had a custom one made for our Mojo/Poly by a third party.
We said above that the 2go only fits the Hugo 2. That is not quite right, because Chord is also working on the 2yu: a small module that you connect to the 2go instead of the Hugo 2. The device only has digital outputs, so you can use the 2go as a pure streamer with another D/A converter. The 2yu may be available by summer; we’ll keep you posted.
Setup via GoFigure App You set up the 2go via the GoFigure app, which also serves the Mojo/Poly. It is a great relief that this app has become much better. At the launch of the Poly it was sometimes really hard to get the streamer onto your network, but thanks to a series of updates that is now very easy. The 2go benefits from this work: this time we set up the streamer in no time via the iOS version on a 2017 iPad. And just to make sure it works properly (and since Chord handled the iOS and Android versions separately in the past), we reset the device and reconfigured the 2go via GoFigure on a Huawei P30 smartphone. Fortunately, it went as smoothly as on the Apple device.
You should get to know the philosophy behind the app: you are not supposed to play music from GoFigure. It is much more an app for setting things up. That seems clear, were it not that Chord has provided two exceptions: internet radio and MPD playlists of music files on a memory card. Why not immediately provide a full music player for those files within GoFigure? That is not so clear; we suspect that Chord believes strongly in Roon. In the GoFigure app you will find all the settings you need. Conveniently, you can permanently store several WiFi networks here, so that the 2go automatically connects to the correct network when you take the device to your office.
Music can be played via Roon, which works fine. The module still appears as non-certified, but functions completely as it should. You can indicate in the Roon settings that this is actually the Hugo 2, so that the correct configuration and icon are chosen.
Of course, far from everyone uses the pricey but very good Roon software. Another option is to stream music over DLNA, where an optional memory card with music files can be the source. But you can also stream from a DLNA share on a NAS. You do not arrange this via GoFigure but via a DLNA player app of your choice: we prefer BubbleUPnP on Android, while mConnect is a popular option on iOS. Bluetooth and AirPlay (not AirPlay 2) are other streaming options, and we’re happy that aptX is supported for Bluetooth streaming.
Chord says work is in progress on adding Spotify Connect. There is no timeline for this yet, and Chord usually develops software at a relatively slow pace. Not at a geological pace, but do not expect Spotify to appear as an update within a few weeks. Of course, we always like to be surprised …
Familiar sound Sound-wise the 2go does not change the Hugo 2, or not that we can determine. When we listen to our regular test playlist via Roon with the Focal Stellia, we hear nothing we have not heard before. After all, the Hugo 2 lives permanently next to the iMac we work on every day and has been used intensively for over a year. We continue to consider it a reference in its price segment. To learn more about what the Hugo 2 offers as a DAC, we recommend reading our review of this Chord.
One of the underexposed aspects of the Hugo 2 is the presence of the X-PHD crossfeed function. With this, Chord tries to bring the speaker listening experience to your headphones, which some people find more natural than the strong left-right separation of headphones. We sometimes use it and sometimes don’t, because certain pieces of music just sound a bit more squashed with X-PHD. With other tracks it actually reinforces the live feeling. No problem: you can switch it on (three positions) or off via a typical, colored Chord ball. The same is true for the four filters that Chord offers on the DAC, although we find the differences less striking than with the X-PHD function. It is very headphone-dependent, we find.
We have used the 2go/Hugo 2 very intensively and noticed that it is a very different experience from the Poly/Mojo. Sound-wise Chord takes a big step forward, but the Poly fits easily in your pocket, while the 2go/Hugo 2 is a heavier thing that we moved very consciously and carefully because of its high value. So it is more ‘transportable’ than ‘portable’, but that is not a problem in itself. In this form, the DAC is also much more convenient and interesting to connect to an amplifier.
Chord equipped the Hugo 2 with a battery because this avoids the whole problem of disturbing power supplies. It also ensures that the 2go/Hugo 2 combination can work for a long time without charging: a good 12 hours. That never felt like a limitation. One complaint: if the battery is empty, you cannot immediately use the 2go again when you plug in the charger. It has to draw some power before you can play music again, which will probably make the impatient audiophile grumble. We should also note that the 2go only establishes a WiFi connection over the 2.4 GHz channels.
Conclusion The 2go, by its nature, seems especially relevant to the Hugo 2 owner. If he is glued to his computer or a fixed listening chair, he may not benefit from the product; a USB cable is then sufficient. But if you would like to get rid of the dependence on that one listening position and the wired connection to your computer, then the 2go is a must-have addition to an excellent DAC. It allows you, for example, to listen to your music in true hi-fi quality in the garden or (why not?) in bed, without compromise.
But actually the 2go broadens what the Hugo 2 can do, and is therefore not just any accessory. Anyone looking for a combined media server/streaming DAC will find an excellent solution in the bundle of the Hugo 2 and the 2go. It has a few eccentric aspects, but these are more than offset by the excellent rendering.
Chord Electronics 2go
1,195 euro | www.chordelectronics.co.uk
Rating 4.5 out of 5
Apple, Google and Microsoft have been installing hardware security modules in their own devices for some time. The latter wants to significantly expand the protection with the so-called Pluton processor. The new security chip will soon find its way into CPUs and APUs as well as SoCs from hardware partners AMD, Intel and Qualcomm as a permanently integrated component. First of all, the main focus is on protection against manipulation and the defense against attacks on the firmware in the form of the UEFI BIOS.
These attacks are currently prevented in pure software form by firmware TPMs. However, such solutions do not provide comprehensive protection and still leave security loopholes that can be exploited by hackers. Security researcher Denis Andzakovic showed last year, for example, that the communication between a Trusted Platform Module 2.0 and the chipset can be read out on the Low Pin Count (LPC) interface. This would, for example, make it possible to intercept BitLocker keys.
Such security problems should be resolved with a security chip integrated directly into the processor, such as the Pluton processor. Despite full integration into the chip, Microsoft’s design is said to be isolated from the rest of the processor, which should rule out side-channel attacks such as Spectre. Furthermore, protected digital keys never leave the security hardware. This is due to the SHACK function, which stands for Secure Hardware Cryptography Key and means the keys cannot even be accessed by the chip’s own firmware. In addition, security gaps are to be closed continuously via Windows Update. Microsoft’s Azure servers are supposed to make threats less likely by checking the integrity of the Pluton processor and its firmware.
In the further course of development, Microsoft also plans to expand the functionality of the Pluton processor to protect passwords and user data. Windows computers of all price ranges are to be equipped with the chip in the future. However, this should not lead to a Windows requirement; other operating systems can still be used as usual. Processor manufacturers also always have the option of installing a function to disable the Pluton processor.
Recording a call is something that can save us on more than one occasion, but can calls be recorded on the iPhone? Apple’s system, as we all know, is different in practically every way: more closed, which Apple calls more secure, but little by little it has been opening up to options the competition already had, such as plugging a USB drive into the iPad. In the present case, though, it is totally different from Android, which does allow you to record calls, even natively on some occasions (we have explained elsewhere how to record calls with an Android phone). Unfortunately, Apple does not allow you to do it the way Android does, but it can be done another way, as we will explain.
Because of the way Apple manages the operating system and the APIs with which developers program applications, apps have restricted access to the microphone and earpiece during calls, since for Apple privacy prevails in this case. It is therefore not possible to record a call in the traditional way, as on Android phones, but developers have invented a way around this obstacle in order to record calls.
The developers know that this is in high demand, and they have managed to make this type of recording possible and, it must be said, to charge for the service. It requires a three-way call. The operation is very simple: you call a phone number that records the call, and this service then connects you to whoever you want to call. The conversation is recorded on the service’s server, which stores the recordings and charges us to retrieve or listen to them.
Receiving calls is a similar process, but in reverse: when receiving the call we have to put the caller on hold, open the application and start it so that the call can be recorded. Also keep in mind that the application must support this function, which not all do.
Some developers have made applications using this method and offer them to users for free, with the typical mandatory in-app purchases, where we already know they will charge us for the service. We are going to show you the main applications and how they work.
Applications to record calls on the iPhone
A simple search in the App Store on our iPhone turns up many applications that follow this call-recording method. We have tested the main ones with service in Spain, and will tell you how they work for recording both incoming and outgoing calls.
TapeACall
Searching for the name of this application, we quickly locate and install it for free. Once installed, we open it and follow these steps to configure it:
Tap continue; the app will ask for our phone number, which we enter.
We will receive a text message with a code, which we enter to register.
It will ask us to select the plan we want to use; luckily it offers a few free days to try it.
Now we select the country from which we will make the calls to record. We select Spain, which appears at the top.
If we want notifications, we grant the permission, as well as access to our contacts so we do not have to dial numbers by hand.
At the end of the configuration it offers some tutorials on how to record outgoing and incoming calls; even so, we will tell you how.
Record outgoing calls with TapeACall
We open the application.
Tap the large record button.
Now tap the phone number we have configured for calling and recording, making sure once again that it is a Spanish number so that we do not incur extra charges.
Once the call is established, tap the + icon to add another call.
We dial, or look up in the phonebook, the number of the person we want to call, and tap call.
Now the + icon will change to merge; tap it to merge the calls and record the conversation.
Once the recording is ready, the application will notify us if we have enabled notifications.
Record Incoming Calls with TapeACall
When we receive a call, we answer it and go to the home screen.
We open the TapeACall application and tap record.
Tap the service number to call it.
Once the call is established, tap merge and the call will be recorded.
In a few seconds the call will be ready to listen to via the call recordings button on the main screen.
ACR call recorder
Similar in operation to TapeACall and most such applications. If we search for ACR Call Recorder in the App Store, we can install it for free; once installed, we start it to configure it.
For registration it gives us the option of sending an SMS, with a charge, or we can select the manual method; we select the latter to avoid additional charges for sending SMS.
We add the country code followed by our phone number.
Now it will ask us for the type of subscription; we have a few free days to test it.
Next we have to add the phone number we will call to record the calls; we select the one that corresponds to Spain.
Now the application is ready to record incoming and outgoing calls.
Record outgoing calls with ACR Call Recorder
Tap the big red call button and select the keyboard or contacts to make the call.
Now we select the number of the recording service to call.
The number we have added will automatically appear below.
When the call has been connected to the recipient, just tap merge and the call will be recorded.
Record Incoming Calls with ACR Call Recorder
Upon receiving a call, we accept it and a notification will appear at the top; tap it to access the application. If the notification does not appear, we go to the application and tap record.
We tap the service telephone number that provides the call recording.
When the call is connected, tap merge.
It won’t take long until the recording is available; to access the recorded calls we tap the recordings icon at the bottom center.
Advantages and disadvantages of recording calls on iPhone
It is not difficult to see the advantages of recording calls: when we are offered a specific deal that we want on record, when we need some kind of proof or statement, or simply when we want to remember something. With these programs we can obtain all of these benefits.
But it also has its drawbacks. Under the legal framework in Spain it is not possible to record telephone calls for just any purpose, so it is very important to check the legislation; most apps have a section in their help that spells this out. There is also privacy: with recordings stored on external servers, it is questionable whether that inspires much trust when conversations are sensitive. No problem if it is the shopping list, but things change when we talk about more delicate matters.
Then there is the cost of the service: some apps offer a few free recordings, others a few trial days, but sooner or later you will have to pay. In addition, not every app offers its service in Spain, and we would then have to call other countries to make the recording. It is also not available with all operators, since not all offer conference or three-way calling.
The applications we have analyzed here offer service in Spain, so the call to the recording service is a national call, normally included at no extra cost in your mobile plan. But do not forget to weigh the cost of the service against the use you will get from the recordings; in some cases it may be more expensive than expected.
End of Article. Tell us something in the Comments or come to our Forum!
Minecraft, one of the most played games in history, was updated some time ago to the Bedrock Edition, which also allowed it to reach other platforms such as PlayStation and enabled crossplay between them. And while updates have arrived on all platforms more or less simultaneously, one thing PlayStation users were still missing was Realms.
Minecraft Realms are servers that players can rent to play with up to 9 friends simultaneously, allowing huge maps in the game without having to run a dedicated server at home or keep our own computer permanently connected so that our friends can access our world.
Similarly, Mojang has taken advantage of the announcement to mention that Minecraft Realms has completed its migration to the Microsoft Azure platform, something logical considering Microsoft’s purchase of Mojang, and something that may have had something to do with the wait for PlayStation users to play on Realms.
The next update that should come to Minecraft on all platforms is the Caves and Cliffs Update, which will arrive in summer of 2021 if all goes well.
Jordi Bercial
Avid enthusiast of technology and electronics. I have messed around with computer components almost since I learned to walk. I started working at Geeknetic after winning a contest on their forum for writing hardware articles. Drift, mechanics and photography lover. Don’t be shy and leave a comment on my articles if you have any questions.
A year ago I told you about the top of Corsair’s NVMe SSD range, namely the MP600, available on the market in a maximum capacity of 2TB. We were talking then about the Phison E16 PCI-Express Gen 4 controller and BiCS4 3D TLC memory from Toshiba, but in the meantime Corsair has also released the MP510 with a Phison E12 PCI-Express Gen 3 controller and BiCS3 3D TLC memory. The bonus? The Corsair MP510 has a maximum capacity of 4TB. So what do we do if we need an even larger capacity but also a lower price per GB? The answer: the Corsair MP400!
The Corsair MP400 comes with a Phison E12S controller compatible with PCI-Express Gen 3 x4 and 96-layer QLC memory from Micron, which allows a maximum capacity of 8TB and an excellent price per GB (or better said, price per TB). There are no significant differences in performance between the MP400 and MP510, but resistance to repeated writes is considerably reduced in the case of the MP400, due to the use of QLC memory.
For example, in the case of the 4TB variant we have on test, Corsair’s specifications indicate 800 TBW, while the MP510 with 4TB capacity goes up to 3120 TBW. For home use that is not a problem, but for applications with repeated writes, or workstation/server use, SSDs with QLC memory are not recommended.
In addition to the Open Policy Agent, the Cloud Native Computing Foundation (CNCF) has now added another project for policy management to its portfolio. The Kyverno Policy Engine, originally developed by Nirmata, can now prove itself in the CNCF sandbox. The open source project is designed to be seamlessly integrated into Kubernetes and to use its existing resources and tools – developers should be able to forego learning new languages or tools, promises Nirmata founder and CEO Jim Bugwadia.
Regulating policies with CRDs, YAML and JSON In contrast to the Open Policy Agent, which requires the Rego language for policy management, Kyverno uses YAML or JSON and can be combined with the kubectl, git and kustomize tools that most Kubernetes users are familiar with. To handle complex policy configurations with sometimes hundreds of parameters in the API, especially in a corporate context, Kyverno uses the declarative approach of Kubernetes.
With the help of Custom Resource Definitions (CRDs), Kubernetes administrators can create, manage and automate guidelines for a wide variety of application areas. Kyverno can be used, for example, to automatically build certificates into pods, or to create sidecar containers. The policy engine can even be used for access control. Kyverno works as a validating and mutating webhook with the Kubernetes API server to block invalid or non-conforming configurations if necessary.
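To give an idea of what that declarative approach looks like in practice, here is a minimal sketch of a Kyverno validation policy as a CRD. The policy name and the required label are made-up examples, not taken from the article:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label         # example name
spec:
  validationFailureAction: enforce # reject non-conforming resources
  rules:
    - name: check-team-label
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "The label 'team' is required on every Pod."
        pattern:
          metadata:
            labels:
              team: "?*"           # any non-empty value
```

Applied with kubectl, this makes Kyverno’s admission webhook reject any Pod without a team label: plain YAML, no Rego involved.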
Easier configuration for more security Kyverno’s approach, which is based on patterns and best practices from Kubernetes, is intended to help make policy management easier, even in more complex corporate environments. Under the umbrella of the CNCF, Nirmata boss Bugwadia also hopes for synergies through closer cooperation with other projects. Among other things, the development team behind the CNCF sandbox project cert-manager has already expressed interest in using Kyverno for policy administration related to certificate management.
Further information on the policy engine can be found on the Kyverno homepage, the announcement as part of KubeCon + CloudNativeCon and in the project overview of the Cloud Native Computing Foundation.
Secure Shell for beginners: administering computers on the network via SSH SSH, the “Secure Shell”, is a cryptographic network protocol for the encrypted, and thus tamper-proof and tap-proof, transmission of data over insecure networks. With SSH you can conveniently carry out administrative tasks in a terminal, as it makes the console of a remote computer available on the local workstation. The protocol sends keyboard input from the local to the remote system and redirects its text output back to the local terminal window. In this article we show you how to establish SSH connections to remote computers for remote maintenance, generate and manage keys, manage files and directories, and compress data traffic.
SSH and ssh, in short To avoid confusion: SSH is not a Telnet implementation with encryption. SSH (in capital letters) is also not the program (ssh) that is used in the terminal to establish SSH connections. In the following we use “SSH” when referring to the protocol and the technology as such, while “ssh” refers to the command in the terminal or on the command line.
All full-blown operating systems, from Windows to GNU/Linux and the BSDs including macOS, to IBM’s AIX or HP’s HP-UX, use OpenSSH from the OpenBSD team, which we also use for this article. The OpenSSH package consists of several components. “sshd”, the SSH server daemon, is essential. What administrators and users run in the terminal is the SSH client “ssh”, which replaces old tools such as telnet, rlogin and rsh. “scp” (replacing rcp) is used for copying files over SSH, more rarely “sftp” as a substitute for ftp. With “ssh-keygen”, SSH generates or checks the RSA, DSA or elliptic-curve keys responsible for user and system authentication. With “ssh-keyscan” the public keys of a list of hosts can be collected. Finally, “ssh-add” and “ssh-agent” keep keys in memory and thus make logins on other systems more convenient.
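In practice, the key-based workflow with these tools boils down to a few commands. A short sketch (the user and host names are placeholders; adapt them to your own systems):

```shell
# Generate an Ed25519 key pair (use a passphrase in real life instead of -N "")
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519_demo -C "workstation key"

# Install the public key on the remote machine for password-less logins
ssh-copy-id -i ~/.ssh/id_ed25519_demo.pub user@server.example

# Log in; -C compresses the traffic, which helps on slow links
ssh -C -i ~/.ssh/id_ed25519_demo user@server.example

# Copy a file to the remote host over SSH with scp
scp backup.tar.gz user@server.example:/srv/backups/
```

Once the public key is installed, ssh authenticates with the key pair instead of a password, and ssh-agent can keep the decrypted key in memory for the session.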
WordPress for advanced users: How to get more out of WordPress (article in Mac & i 13/2020) WordPress is quickly installed and ready for use thanks to its simple installation. But that is far from the end of its possibilities: with extended functions and plug-ins you can expand the CMS and turn it into a real all-rounder.
The popular CMS WordPress has quite a few advanced features that let you get more out of your blog or website. There are more than 55,000 extensions that retrofit important functions. Caching plug-ins speed up the delivery of websites, but they must be set up well. Special backup extensions back up posts and pages fully automatically and regularly. With page builders, WordPress becomes a powerful WYSIWYG website builder; HTML knowledge is not required. We explain all of this in nine tips.
1. Deliver content faster via caching WordPress creates dynamic websites – that is the advantage of a CMS, but sometimes also the disadvantage. The server recalculates the HTML output each time a page is called. If it is a bit slow or busy, valuable time can pass before the page appears in the browser. The patience of many visitors is exhausted after a few seconds, and they click somewhere else. Caching tools counteract this by keeping static copies of pages on hand and thereby accelerating delivery. The best-known tool for this job is WP Super Cache, which is developed and maintained by the maker of WordPress itself.
Adoption of any new technology by the market takes time. Earlier this year JEDEC published the final DDR5 specification, which marked the start of an industry-wide transition to the new memory standard. But this journey is going to take several years, according to analysts from TrendForce. While the first server platforms supporting DDR5 are set to arrive in late 2021, it remains to be seen how quickly the technology will be adopted by client PCs. Meanwhile, GDDR6 and LPDDR5 are going to stay on the market for quite some time, as their successors, GDDR6X and LPDDR5X, have yet to be adopted by the industry.
PCs and Servers: DDR5 For Capacity and Speed
The DDR5 SDRAM specification was developed with multiple goals in mind, the primary one being scalability.
On the one hand, servers will eventually take advantage of memory modules of up to 2 TB capacity: a server-grade CPU featuring eight memory channels and supporting two modules per channel could then be paired with 32 TB of DDR5 memory (up from 4 TB today). Such capacities will allow the standard to stay on the market for quite a while.
On the other hand, the standard describes data transfer rates of up to 6400 MT/s, and DRAM makers have already demonstrated the physical capability of their memory devices to work at 8400 MT/s (the demonstrations did not involve actual SoCs), something that will be warmly welcomed both by PC makers and by performance enthusiasts. On top of the speed increases, DDR5 introduces various methods to improve the actual efficiency of the new memory type.
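The headline figures translate directly into per-socket capacity and per-channel bandwidth. A quick back-of-the-envelope check, assuming the 64-bit channel width that DDR memory has used for generations, reproduces the numbers above:

```python
# DDR5 capacity: 8 channels x 2 modules per channel x 2 TB modules
channels, modules_per_channel, module_tb = 8, 2, 2
total_tb = channels * modules_per_channel * module_tb
print(total_tb)  # 32 TB per socket, up from 4 TB today

# Peak bandwidth of one DDR5-6400 channel: 6400 MT/s x 8 bytes per transfer
transfers_per_s = 6400e6
bytes_per_transfer = 8  # 64-bit channel width assumed
gb_per_s = transfers_per_s * bytes_per_transfer / 1e9
print(gb_per_s)  # 51.2 GB/s per channel
```

Eight such channels per CPU would put well over 400 GB/s of theoretical memory bandwidth behind a single server socket, which is why the standard is expected to last so long.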
To date, all leading makers of computer memory — Micron, Samsung, and SK Hynix — have demonstrated their DDR5 memory chips and modules, but it takes two to tango, so platform makers like AMD, IBM, Intel, and others have to introduce their platforms to ignite the transition to the new type of DRAM. And on this front, the situation is a typical one: since the server industry is the least sensitive to costs, server platforms are set to be the first to adopt the new type of memory, just as in the case of DDR4. Client PC platforms are set to follow about a year later.
Intel has already announced that its Xeon Scalable ‘Sapphire Rapids’ platform for servers and supercomputers, due to arrive in late 2021, will support DDR5. Unlike Intel’s server CPUs, AMD’s EPYC uses a chiplet design and is somewhat more flexible when it comes to memory support, so in theory its platform codenamed Genoa could support either DDR4 or DDR5. Based on various leaks, however, Genoa will indeed support DDR5. Therefore, the two leading suppliers of server CPUs are expected to release DDR5-supporting platforms late next year, which will mark the beginning of the DDR5 era.
Servers consume loads of memory, so even if AMD and Intel reveal their Genoa and Sapphire Rapids platforms in late 2021, DRAM makers will have to produce and supply sizeable volumes of DDR5 to vendors like Dell or HP. Of course, the ramp up of server platforms takes time, so do not expect DDR5-supporting platforms to replace existing and upcoming DDR4-supporting platforms until at least 2023.
As for desktops and notebooks, since they typically trail behind servers in terms of memory technologies, AMD and Intel will likely launch their DDR5 platforms for client PCs only in 2022 or even later. So, DDR5-6400 memory modules for enthusiasts are clearly not around the corner unless AMD and Intel decide to speed things up.
Graphics Memory: GDDR6 Is Here to Stay
GDDR5 has been serving graphics cards since late 2007 and even now it is used for some entry-level products. By contrast, the only mass market products that used GDDR5X were Nvidia’s Pascal graphics processors. GDDR6 is by far more successful than GDDR5X and this type of memory commands 70% of graphics DRAM shipments these days, according to TrendForce. Since GDDR6 is also used by the latest game consoles from Microsoft and Sony, it is poised to be popular for many years to come.
Market observers do not expect GDDR6X, which is used by Nvidia’s GeForce RTX 3080 as well as 3090 graphics cards, to become a widespread type of DRAM any time soon. There are at least two reasons for that. Firstly, GDDR6X is currently not a JEDEC standard. Secondly, GDDR6X uses four-level pulse-amplitude modulation (PAM4) signaling, which is expensive to implement at the controller level, whereas the benefits of GDDR6X over GDDR6 are rather modest today (19.5 GT/s vs. 16 GT/s). Meanwhile, Micron is serious about using PAM4 (and even PAM8) for DRAM going forward, so it is going to lobby for either GDDR6X or subsequent PAM4-based standards.
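The bandwidth gap that PAM4 buys can be checked with simple arithmetic: NRZ signaling carries 1 bit per symbol while PAM4 carries 2, so GDDR6X reaches its higher data rate at a lower symbol rate. The 320-bit bus below is the RTX 3080’s, used purely as a worked example:

```python
def bus_bandwidth_gbps(data_rate_gtps, bus_width_bits):
    """Peak memory bandwidth in GB/s for a given per-pin data rate."""
    return data_rate_gtps * bus_width_bits / 8

bus_bits = 320  # RTX 3080 memory bus width
gddr6  = bus_bandwidth_gbps(16.0, bus_bits)   # NRZ: 1 bit/symbol, 16 GT/s
gddr6x = bus_bandwidth_gbps(19.5, bus_bits)   # PAM4: 2 bits/symbol, 19.5 GT/s
print(gddr6, gddr6x)  # 640.0 vs 780.0 GB/s, roughly a 22% uplift
```

A ~22% bandwidth uplift is real but, as the text notes, modest relative to the cost of PAM4 controllers, which is why GDDR6 is expected to keep the bulk of the market.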
HBM-type memory is too expensive for client PCs today and is projected to remain pricey in the coming years, so HBM is not going to be used widely for these applications, but will naturally continue to serve compute GPUs.
To sum up, analysts from TrendForce believe that GDDR6 will command around 90% of the graphics DRAM market in 2021, a remarkable result for a DRAM standard that will be four years old next year.
Mobile Memory: LPDDR5 Fights Its Way as LPDDR5X Looms
JEDEC published the LPDDR5 specification in February 2019, yet so far only Qualcomm and Samsung have introduced LPDDR5-supporting system-on-chips (SoCs). By contrast, mobile platforms from Apple and MediaTek continue to rely on LPDDR4 and LPDDR4X memory. As a result, LPDDR5’s penetration rate is expected to be around 12% this year.
Qualcomm’s high-end SoCs next year (Snapdragon 870) will continue to use LPDDR5, and it is possible that the company will widen usage of the new DRAM with its upcoming performance-mainstream SoCs due to be announced in 2021. MediaTek is also projected to release at least two LPDDR5-supporting SoCs in the first half of 2021, so expect these SoCs to power handsets in late 2021 or early 2022.
The price difference between LPDDR5 and LPDDR4X has shrunk to about 10%, according to TrendForce. Keeping in mind higher performance and energy efficiency of the new memory type, SoC developers and smartphone makers will be more inclined to use LPDDR5 in the future. Yet, LPDDR4 and LPDDR4X will continue to dominate the market for a while.
LPDDR5 is set to support data transfer rates of up to 6400 MT/s, but since applications like AI/ML and graphics processing always demand higher bandwidth, there is a proposal to extend the standard all the way to 8533 MT/s. LPDDR5X will generally resemble LPDDR5, which will simplify its deployment. To enable higher data transfer rates and increase the reliability of upcoming mobile (or rather, low-power) memory subsystems, LPDDR5X will introduce a pre-emphasis function to improve the signal-to-noise ratio and reduce the bit error rate, as well as a per-pin decision feedback equalizer (DFE) to enhance the robustness of the memory channel (something that DDR5 also supports).
It is not completely clear when LPDDR5X will be finalized, but it will clearly rival LPDDR5 at some point. Furthermore, if someone decides to skip LPDDR5 and go straight to LPDDR5X for some reason, it is going to be an interesting situation when a major mobile DRAM standard does not get support from all SoC designers.
The Apple Way
From a computer underdog in the 1990s, Apple has transformed itself into a high-tech giant that now controls about 10% of the PC market. As a laptop-centric computer maker and a major supplier of smartphones, Apple is also a major consumer of LPDDR memory these days.
Given the fact that Apple’s own M1 system-on-chip continues to use LPDDR4X-4266 memory, it is highly probable that the company is going to rely on mobile DRAM at least for its notebooks going forward. But what about desktop PCs?
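How far mobile DRAM already carries Apple can be estimated from the M1’s memory configuration. The 128-bit bus width used below is an assumption based on public analyses of the chip, not a figure from this article:

```python
# Peak bandwidth of the M1's LPDDR4X-4266 memory (128-bit bus assumed)
data_rate_mtps = 4266   # LPDDR4X-4266, transfers per second in millions
bus_width_bits = 128    # assumed total memory bus width of the M1
bandwidth_gbps = data_rate_mtps * bus_width_bits / 8 / 1000
print(round(bandwidth_gbps, 2))  # ~68.26 GB/s
```

Roughly 68 GB/s is plenty for a notebook SoC, but well short of what a high-end desktop with multi-channel DDR5 could offer, which frames the question posed below.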
Desktops starve for performance. On paper, LPDDR5X-8533 beats DDR5-6400, but likely at a much higher cost and relying on point-to-point interconnections that rule out modules and upgradability. Since Apple no longer has to rely on third-party processors, it could use any kind of memory regardless of cost, assuming that memory bandwidth matters a lot for the performance of its processors. That raises the question of which of the upcoming memory standards Apple will use for its computers.
Making predictions about Apple’s plans is generally not good business, since the company is notoriously secretive. But innovative types of memory are just another bonus provided by SoCs developed in-house. Keeping in mind that Apple is now a major platform developer, its design decisions will impact the whole market.
Summary
There are three new types of memory — DDR5, GDDR6X, and LPDDR5X — coming to different kinds of applications.
DDR5 is guaranteed to become the next de-facto standard for client and server PCs, but it will take some time before it dominates the market. Meanwhile, the scalability of DDR5, both in terms of performance and in terms of density, will enable the new memory technology to have a very long lifespan. DDR3 stayed on the market for over seven years, and DDR4 will turn seven in 2021.
By contrast, it remains to be seen how successful GDDR6X will be in the long run, as it has yet to demonstrate all the advantages PAM4 signaling can offer to DRAM. Adoption of GDDR6 by game consoles and entry-level graphics cards will inevitably drive the cost of this DRAM down, whereas GDDR6X is poised to remain a premium type of memory unless it is supplied by more than one maker.
At present, LPDDR4/LPDDR4X continues to dominate the market and will do so for at least a couple of years. LPDDR5 has gotten cheaper than it was and it has a number of advantages over LPDDR4X, so the number of SoCs supporting LPDDR5 is going to increase next year when MediaTek announces its new processors. It is unclear when the LPDDR5X extension proposal is set to be submitted and then ratified by JEDEC, but at 8533 MT/s data rate this type of memory looks rather plausible.
The PC revolution started life 35 years ago this week. Microsoft launched its first version of Windows on November 20th, 1985, to succeed MS-DOS. It was a huge milestone that paved the way for the modern versions of Windows we use today. While Windows 10 doesn’t look anything like Windows 1.0, it still has many of its original fundamentals like scroll bars, drop-down menus, icons, dialog boxes, and apps like Notepad and MS Paint.
Windows 1.0 also set the stage for the mouse. If you used MS-DOS then you could only type in commands, but with Windows 1.0 you picked up a mouse and moved windows around by pointing and clicking. Alongside the original Macintosh, the mouse completely changed the way consumers interacted with computers. At the time, many complained that Windows 1.0 focused far too much on mouse interaction instead of keyboard commands. Microsoft’s first version of Windows might not have been well received, but it kick-started a battle between Apple, IBM, and Microsoft to provide computing to the masses.
Back in 1985, Windows 1.0 required two floppy disks, 256 kilobytes of memory, and a graphics card. If you wanted to run multiple programs, then you needed a PC with a hard disk and 512 kilobytes of memory. You wouldn’t be able to run anything on a modern machine with just 256 kilobytes of memory, but those basic specifications were just the beginning. While Apple had been ahead in producing mouse-driven GUIs at the time, it remained focused on the combination of hardware and software. Microsoft had already created its low-cost PC DOS operating system for IBM PCs, and was firmly positioned as a software company.
With Windows 1.0, Microsoft took the important step of focusing on apps and core software. IBM held onto the fundamentals of the PC architecture for a few years, but Microsoft made it easy for rivals and software developers to create apps, ensuring that Windows was relatively open and easy to reconfigure and tweak. PC manufacturers flocked to Windows, and the operating system attracted support from important software companies. This approach to providing software for hardware partners to sell their own machines created a huge platform for Microsoft. It’s a platform that allows you to upgrade through every version of Windows, as a classic YouTube clip demonstrates.
Windows has now dominated personal computing for 35 years, and no amount of Mac vs. PC campaigns have come close to changing that, but they’ve certainly been entertaining. Microsoft has continued to tweak Windows and create new uses for it across devices, in businesses, and now with the move to the cloud. It’s only now, with the popularity of modern smartphones and tablets, that Windows faces its toughest challenge yet. Microsoft may yet weather its mobile storm, but it will only do so by rekindling its roots as a true software company. In 2055, it’s unlikely that we’ll be celebrating another 35 years of Windows in quite the same fashion, so let’s look back at how Microsoft’s operating system has changed since its humble beginnings.
Where it all began: Windows 1.0 introduced a GUI, mouse support, and important apps. Bill Gates headed up development of the operating system, after spending years working on software for the Mac. Windows 1.0 shipped as Microsoft’s first graphical PC operating system with a 16-bit shell on top of MS-DOS.
Windows 2.0 continued 16-bit computing with VGA graphics and early versions of Word and Excel. It allowed apps to sit on top of each other, and desktop icons made Windows feel easier to use at the time of the 2.0 release in December 1987. Microsoft went on to release Windows 2.1 six months later, and it was the first version of Windows to require a hard disk drive.
Windows 3.0 continued the legacy of a GUI on top of MS-DOS, but it included a better UI with new Program and File managers. Minesweeper, a puzzle game full of hidden mines, also arrived with the Windows 3.1 update.
Windows NT 3.5 was the second release of NT, and it really marked Microsoft’s push into business computing with important security and file sharing features. It also included support for TCP/IP, the network communications protocol we all use to access the internet today.
Windows 95 was where the modern era of Windows began. It was one of the most significant updates to Windows. Microsoft moved to a 32-bit architecture and introduced the Start menu. A new era of apps emerged, and Internet Explorer arrived in an update to Windows 95.
Windows 98 built on the success of Windows 95 by improving hardware support and performance. Microsoft was also focused on the web at its launch, and bundled apps and features like Active Desktop, Outlook Express, Frontpage Express, Microsoft Chat, and NetMeeting.
Windows ME focused on multimedia and home users, but it was unstable and buggy. Windows Movie Maker first appeared in ME, alongside improved versions of Windows Media Player and Internet Explorer.
Windows 2000 was designed for client and server computers within businesses. Based on Windows NT, it was designed to be secure with new file protection, a DLL cache, and hardware plug and play.
Windows XP really combined Microsoft’s home and business efforts. Built on the Windows NT codebase, it brought NT’s stability and security to home users, added the redesigned Luna interface, and went on to become one of the longest-serving versions of Windows.
Windows Vista was poorly received like ME. While Vista introduced a new Aero UI and improved security features, Microsoft took around six years to develop Windows Vista and it only worked well on new hardware. User account control was heavily criticized, and Windows Vista remains part of the bad cycle of Windows releases.
Windows 7 arrived in 2009 to clean up the Vista mess. Microsoft did a good job on performance, while tweaking and improving the user interface and making user account control less annoying. Windows 7 is now one of the most popular versions of Windows.
Windows 8 was a drastic redesign of the familiar Windows interface. Microsoft removed the Start menu and replaced it with a fullscreen Start Screen. New “Metro-style” apps were designed to replace aging desktop apps, and Microsoft really focused on touch screens and tablet PCs. It was a little too drastic for most desktop users, and Microsoft had to rethink the future of Windows.
Back to the Start: Windows 10 brings back the familiar Start menu, and introduces some new features like Cortana, Microsoft Edge, and the Xbox One streaming to PCs. It’s more thoughtfully designed for hybrid laptops and tablets, and Microsoft has switched to a Windows as a service model to keep it regularly updated in the future.
Windows 10 hasn’t changed drastically over the past five years. Microsoft has been tweaking various parts of the operating system to refine it. More system settings have moved from the traditional Control Panel over to the new Settings app, and the Start menu has a less blocky look to it now. We’re still waiting to see what Windows 10X (originally designed for dual-screen devices) will bring, but Microsoft has also been improving the system icons for Windows 10. 2021 could bring an even bigger visual refresh to Windows 10.
Editor’s note: This story was originally published in 2015 to mark the 30th anniversary of Windows. It has been updated and republished for 35 years of Windows.
Sustainability is becoming a key issue for colocation and multi-tenant data centers. This is shown by a current study carried out by 451 Research on behalf of Schneider Electric among 800 providers of data-center space for third parties from 19 countries. According to it, 87 percent of the respondents believe that sustainability will be a very important or important competitive differentiator in three years’ time. Three quarters of those surveyed already consider sustainability an important or very important characteristic.
When asked which factors drive efficiency and sustainability initiatives, the respondents primarily named customer requirements (50%), continued long-term operational stability (40%), regulatory requirements (36%), cost savings and public opinion (35% each) and competitive pressure (30%). At 74 percent of data-center operators, all or most customers already demand efficiency and sustainability clauses in their contracts.
Climate change anticipated In addition to the customers’ sustainability wishes, climate change is already casting a long shadow on the planning and operation of data centers. For example, 50 percent of respondents said they are adapting their technology choices, and 49 percent their choice of location, to new temperature conditions, available water quantities and the like. 43 percent are preparing for more extreme weather, 40 percent for frequent floods.
Nearly 90 percent of the surveyed operators of colocation data centers believe that efficiency and sustainability will be a decisive differentiator on the data-center market in three years.
(Image: 451 Research / Schneider Electric)
Unsurprisingly under these circumstances, 43 percent pursue an overarching strategic and 41 percent a topic-specific sustainability program with regard to the design, construction and planning of data-center infrastructure. 24 percent of the companies surveyed have a dedicated company-wide sustainability budget, 43 percent an efficiency and sustainability budget for central data-center infrastructure and IT, and a further 18 percent one that covers only the data-center infrastructure.
At 56 percent of the companies there is central reporting on data-center operating parameters such as utilization, PUE or energy consumption; 25 percent are considering it. A further 12 percent ran corresponding trials but abandoned them. PUE (Power Usage Effectiveness) is predominantly measured organization-wide (51%) or site-specifically (38%); only 9 percent do not consider it worthwhile. Measuring the effective use of water is also widespread (40% each, organization-wide or site-specific).
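PUE, the most widely reported of these metrics, is simply the ratio of total facility energy to the energy consumed by the IT equipment alone; a value of 1.0 would mean zero overhead for cooling, power distribution and the like. The figures in the example are hypothetical:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy / IT energy (>= 1.0)."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical site: 1,500 MWh drawn in total, 1,000 MWh of it by IT gear
print(pue(1500, 1000))  # 1.5 -> 0.5 kWh of overhead per kWh of IT load
```

The closer a site gets to 1.0, the less energy is spent on anything other than computing, which is why PUE serves as the standard efficiency yardstick in these surveys.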
Measures for more sustainability The measures currently taken focus on optimizing or updating the power distribution infrastructure (47% each) and on updating (40%) or optimizing (36%) the cooling infrastructure. In addition, 35% try to increase server utilization. Other strategies that are increasingly called for, i.e. alternative concepts and raw materials in data-center construction, resource-saving software, waste-heat utilization and similar synergy concepts, do not yet seem very relevant in practice.
In the opinion of the respondents, what is needed above all is sufficient technical knowledge and experience (73%), money (71%) and a strategic focus on sustainability (56%).
With the new version 18, Parallels is expanding its Remote Application Server (RAS) with several functions. The software is aimed at companies that want to provide virtualized applications, which run either on a local server or in the cloud.
The main innovation is the integration of the Windows Virtual Desktop, a fully virtualized Windows environment along with applications that reside entirely in the Azure cloud. Now the virtual systems, the programs running there and their users can be managed centrally from the RAS console. The latter should also be used to automate routine tasks if desired.
The administration environment can also be used to provision new applications and desktops in Microsoft’s Windows Virtual Desktop. Furthermore, Parallels will in future offer automatic scaling of the virtual RAS environment, regardless of whether it is running on premises or in Azure.
Management of user profiles RAS 18 can also handle Microsoft’s FSLogix profile containers. They take care of managing user profiles in virtual environments and are intended to reduce latencies and the risk of corrupted profiles compared with the previously used User Profile Disks (UPD). Also new is the User Experience (UX) evaluator. With it, administrators can measure how well users can work with the virtual environment. First and foremost, the tool looks at delays between user input and the system’s responses.
Another new feature is the web portal for administrators, which replaces the RAS helpdesk tool. It can be used from a desktop or smartphone to remotely deploy and configure RAS components and then monitor them.
Readers can find details on all new functions in the release announcement. RAS 18 is commercial software; prices depend on the length of the contract and are calculated from the number of concurrent users. A free trial version is also available.
According to Intel, samples of Sapphire Rapids processors have already been widely supplied to customers, although the previous generation, Ice Lake-SP, has not even been officially released. Intel’s problems with the 10-nanometer process have hit hard on the server side as well. Ice Lake-SP server chips should already be on the market with production in full swing, but the latest rumors suggest that actual mass production will not start until next year.
Intel’s data-center lead Trish Damkroger presented new keynote slides introducing the new generation of Xeon Scalable processors. The 10-nanometer Ice Lake processors are becoming available in a number of different configurations, and Intel used a 32-core variant in its example. The processors have an 8-channel DDR4-3200 memory controller and support for the PCI Express 4.0 standard.
According to Intel’s tests, the upcoming 32-core Xeon Platinum processor would beat AMD’s 64-core Epyc 7742 processor by 20% in the LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) and NAMD STMV (Not Another Molecular Dynamics, Satellite Tobacco Mosaic Virus) tests, and by 30% in a Monte Carlo simulation.
The slide pack also included a slide with a perhaps somewhat questionable claim, Intel calling its Xeon Scalable chips “mainstream CPUs”. Regardless of that claim, the interesting part of the slide was information about the upcoming Sapphire Rapids architecture. The successor to Ice Lake will be made with the 10-nanometer Enhanced SuperFin process. It will support the new generation of DL Boost technologies, i.e. Intel Advanced Matrix Extensions. According to the slide, sample models of Sapphire Rapids processors have already been widely supplied to customers, which could be considered somewhat unusual given the Ice Lake schedule.
AMD’s new Instinct MI100 is the first HPC GPU to exceed 10 TFLOPS of FP64 performance
AMD has today released its long-awaited first dedicated compute GPU, codenamed Arcturus. According to the company, the compute card released under the name AMD Instinct MI100 is the fastest in the world and at the same time the first HPC-class GPU to exceed 10 teraFLOPS of FP64 performance.
The Instinct MI100, based on AMD’s CDNA architecture, is manufactured on TSMC’s 7-nanometer process, but the company did not disclose, for example, its transistor count. The CDNA architecture itself builds on the further-developed foundation of the GCN architecture, but much has changed as well.
The MI100 packs 120 Compute Units divided into four Compute Engine sections. Alongside the traditional scalar and vector units, each CU has a Matrix Core Engine designed to accelerate matrix calculations. The MCE units calculate Matrix Fused Multiply-Add (MFMA) operations on KxN matrices with INT8, FP16, BF16 and FP32 precision; the result of MFMA operations is produced at either INT32 or FP32 precision.
The MI100’s theoretical FP32 performance is 23.1 TFLOPS and its FP64 performance 11.5 TFLOPS. For matrix calculations, the theoretical maximum is 46.1 TFLOPS at FP32 precision, 184.6 TFLOPS at FP16 precision, and likewise 184.6 TFLOPS for INT4 and INT8 operations. With bfloat16 precision, the theoretical maximum performance is 92.3 TFLOPS.
The compute units are supported by 8 megabytes of L2 cache divided into slices, said to offer a combined bandwidth of up to 6 TB per second. The 4096-bit memory controller supports both 4- and 8-layer HBM2 memories at 2.4 Gbit/s per pin, for a total memory bandwidth of 1.23 TB/s and 32 gigabytes of memory. The TDP of the compute card is 300 watts.
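The headline throughput and bandwidth figures can be reproduced from the unit counts. The ~1,502 MHz boost clock used below is an assumption taken from AMD’s public spec sheet, not a figure from this article:

```python
# AMD Instinct MI100 peak-throughput check (boost clock assumed ~1502 MHz)
cus          = 120    # compute units
lanes_per_cu = 64     # FP32 lanes per CU
clock_ghz    = 1.502  # assumed boost clock
fp32_tflops = cus * lanes_per_cu * 2 * clock_ghz / 1000  # 2 ops per FMA
print(round(fp32_tflops, 1))  # ~23.1 TFLOPS, matching the figure above

# HBM2 bandwidth: 4096-bit interface at 2.4 Gbit/s per pin
bandwidth_tbps = 4096 * 2.4 / 8 / 1000
print(round(bandwidth_tbps, 2))  # ~1.23 TB/s
```

The FP32 matrix figure of 46.1 TFLOPS then follows directly: the Matrix Core Engines double the FMA rate of the vector units at that precision.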
The Instinct MI100 also supports second-generation Infinity Fabric links between compute cards, allowing up to four GPUs to be bridged into the same group. Each GPU has three IF links, giving a group of four MI100 accelerators a total theoretical P2P bandwidth of 552 GB/s. The accelerators connect to the processor over the PCI Express 4.0 bus.
Along with the new compute cards, the new open-source ROCm 4.0 was released. The ROCm package includes a variety of tools for developers’ needs, from compilers to interfaces and ready-made libraries. The new open-source compiler in ROCm 4.0 supports both the OpenMP 5.0 and HIP interfaces.
According to AMD, ready-made server configurations with Instinct MI100 accelerators are promised from at least Dell, Gigabyte, Hewlett Packard Enterprise and Supermicro.
Source: AMD