
“Platypus”: Security vulnerability abuses measurement function of Intel processors

Security experts from Austria, Germany and Great Britain have unmasked a new gateway for attacks on processors, especially Intel’s: the “Running Average Power Limit” (RAPL) function, with which the power consumption of a CPU can be read out and influenced during operation. With some effort, RAPL can also be used to unmask secret keys for cryptographic algorithms such as AES – even if they reside in a supposedly secure Trusted Execution Environment (TEE) such as the one set up by Intel’s Software Guard Extensions (SGX). The vulnerability was named Platypus, which stands for “Power Leakage Attacks: Targeting Your Protected User Secrets”.

RAPL interface

The RAPL interface is actually intended for monitoring and controlling server processors, especially in (cloud) data centers; Linux provides a “Power Capping Framework” for this. If, for example, part of the cooling system or the power supply fails, the maximum power consumption of servers can be capped in order to avoid overheating or crashes. However, RAPL also reveals, among other things, how much power the CPU is currently consuming.
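On Linux, the powercap framework exposes the RAPL counters as plain sysfs files. The following minimal Python sketch, which assumes the typical intel-rapl sysfs layout (the exact path can differ between kernels and machines), shows how easily the package energy counter can be sampled:

```python
# Minimal sketch: sample the CPU package energy counter that the Linux
# powercap framework exposes via sysfs (path assumed; may vary by system).
import time

RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package domain

def read_energy_uj() -> int:
    with open(RAPL_ENERGY) as f:
        return int(f.read())

before = read_energy_uj()
time.sleep(1.0)
after = read_energy_uj()

# The counter counts microjoules and wraps around eventually; the wrap
# is ignored here for simplicity.
print(f"Average package power over 1 s: {(after - before) / 1e6:.2f} W")
```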

Distribution of the energy demand for processing the imul instruction with two operands, one with the value 8 and one with changing Hamming weight (from 0x00 to 0xFF).

(Image: TU Graz / CISPA / Uni Birmingham)

The power consumption of an arithmetic unit changes depending on the type of calculation it is currently performing. Side-channel attacks that exploit this relationship to draw conclusions about the processed data have been known for decades. That is why the security chips in cash cards, smart cards and pay-TV key cards have special functions that protect against such attacks.

Power leakage attack

Most “power leakage” attacks require the attacker to have physical access to the target system in order to connect a power meter or an oscilloscope. The Platypus attack, however, also works remotely; the digital RAPL interface can even be queried from the operating system without admin rights.

Until now, however, experts were of the opinion that the RAPL data is not precise enough to reveal, say, an individual RSA key. According to the Platypus discoverers, RAPL allows roughly 10,000 measurements per second, which is very little compared to the almost 5 billion clock cycles that each of the up to 28 cores of an Intel processor runs through per second. But if the RAPL measurement can run long enough, secret values can be determined bit by bit through statistical analyses of the power measurements (Differential Power Analysis / DPA and Correlation Power Analysis / CPA).
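To illustrate the statistical principle only – this is not the researchers’ attack code, and the leakage model and all names are illustrative assumptions – the following toy Python sketch shows the core of a correlation power analysis: for every possible key byte, a Hamming-weight model of a secret-dependent intermediate value is correlated with the measured energy values, and the guess with the strongest correlation wins.

```python
# Toy correlation power analysis (CPA) sketch - illustrative only.
import numpy as np

def hamming_weight(values: np.ndarray) -> np.ndarray:
    """Number of set bits per byte value."""
    return np.unpackbits(values.astype(np.uint8)[:, None], axis=1).sum(axis=1)

def cpa_best_key_byte(plaintexts: np.ndarray, traces: np.ndarray) -> int:
    """plaintexts: known input bytes (N,); traces: one energy sample each (N,)."""
    best_guess, best_corr = 0, 0.0
    for key_guess in range(256):
        # Hypothetical leakage model: power ~ HW(plaintext XOR key byte).
        model = hamming_weight(plaintexts ^ key_guess)
        corr = abs(np.corrcoef(model, traces)[0, 1])
        if corr > best_corr:
            best_guess, best_corr = key_guess, corr
    return best_guess
```

With enough measurements, the correct key byte produces a visibly stronger correlation than all wrong guesses, even when each individual measurement is very noisy.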

Platypus attack: Reading out AES keys from an Intel SGX enclave via the RAPL interface of the Intel CPU.

The security researchers Moritz Lipp, Andreas Kogler, Catherine Easdon, Claudio Canella and Daniel Gruss from Graz University of Technology, David Oswald from the University of Birmingham and Michael Schwarz from CISPA used numerous tricks to refine the RAPL readings enough to draw conclusions about data and instructions. For example, they worked out methods to align repeated measurements precisely enough in time to superimpose them.

In addition, they eliminated inaccuracies stemming from the fact that Intel’s RAPL interface only delivers data for all CPU cores combined, not for each individual core. They also factored in information about the respective core voltage.

Attacks on KASLR, TLS and SGX

To make malware attacks more difficult, the Linux kernel scrambles RAM addresses; this is called Kernel Address Space Layout Randomization (KASLR). A Platypus attack is said to be able to distinguish valid from invalid memory addresses within just 10 seconds.

Unmasking an RSA key in the encryption library mbed TLS took significantly longer, at around 100 minutes. And to get hold of a key processed with AES-NI instructions inside an SGX enclave, the attack had to run for at least 26 hours. If many I/O operations disturbed the RAPL signal, the attack took over 270 hours, i.e. more than 10 days.

Platypus attack on the Kernel Address Space Layout Randomization (KASLR) of the Linux kernel.

This already suggests that Platypus will probably not be used for broadly scattered attacks; it matters mainly for cloud servers and less for desktop PCs and notebooks.

Intel is already providing patches in the form of microcode updates, which reach affected systems either via BIOS updates or via operating system updates. Affected are all Intel processors of the Core i and Xeon series since the Sandy Bridge generation introduced in 2011, i.e. from Core i-2000, Pentium G, Celeron G, Xeon E5-2000 and E3-1200 onwards.

According to the researchers, other processors are also affected in principle; they were able to carry out similar measurements on various AMD Ryzen systems – there, however, admin rights were required for RAPL access.

Microcode updates announced

Intel explains the Platypus attack in the security advisory Intel-SA-00389. As a remedy against Platypus attacks, the microcode updates ensure that the measurements are less precise while a CPU core processes SGX instructions. In addition, updates to the Linux kernel prevent unprivileged users from accessing certain RAPL data. The CVE numbers are CVE-2020-8694 and CVE-2020-8695.

The Platypus co-discoverers Moritz Lipp, Daniel Gruss and Michael Schwarz were, among others, already involved in uncovering the Spectre and Meltdown CPU vulnerabilities. Daniel Gruss also worked on the investigation of the Plundervolt security hole, which manipulates the internal CPU registers that control the power supply and abuses them as a side channel.

(ciw)


.NET 5.0 has been released

.NET 5.0, which has now appeared as part of the virtual .NET Conf 2020, is technically the successor to .NET Core 3.1. The version number 4.0 was skipped, and the term “Core” is dropped from the name again. Microsoft’s marketing wants to express that .NET 5.0 sees itself as the common successor to the three previously separate .NET variants: .NET Framework, .NET Core and Mono.

In fact, the “One .NET” vision has not yet been realized in .NET 5.0. While the .NET Framework has been sidelined since April 19, 2019, the development of Mono and the Xamarin platform based on it continues, because Microsoft is far from finished with the integration. With Blazor WebAssembly, however, the manufacturer has meanwhile replaced the Mono class library with the .NET 5.0 class library (see Fig. 1). The runtime environment used in Blazor WebAssembly is still Mono-based and uses an interpreter instead of a just-in-time compiler. The announced integration of building mobile applications for iOS, Android and Windows with Xamarin (including Xamarin.Forms) is not due until .NET 6.0.

The current status of the .NET family with .NET Framework 4.8, .NET 5.0 and Mono 6.10 (Fig. 1) (as of November 10, 2020)

(Image: Holger Schwichtenberg)

Types of application

With .NET 5.0, developers can develop the following types of applications:

ASP.NET Core 5.0-based web applications with server-side rendering, or single-page apps with client-side rendering (with Blazor)
Web services (REST-based Web APIs and gRPC)
Console applications
Background services
Desktop applications with Windows Forms and Windows Presentation Foundation (WPF) for Windows from version 7
Windows Universal Apps and desktop applications for Windows 10 with the Windows UI Library 3 (release planned for spring 2021)

There is still no cross-platform GUI library in .NET 5.0. Software developers who want to develop a desktop application must use the Target Framework Moniker “net5.0-windows” instead of “net5.0” and are therefore limited to the Windows operating system.
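A minimal project file for such a Windows-only desktop application could look like the following sketch; the UseWPF switch is the usual way to pull in WPF, but the details depend on the individual project:

```xml
<!-- Sketch of a .csproj for a WPF app targeting .NET 5.0 on Windows. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <!-- "net5.0-windows" instead of "net5.0" restricts the app to Windows. -->
    <TargetFramework>net5.0-windows</TargetFramework>
    <UseWPF>true</UseWPF>
  </PropertyGroup>
</Project>
```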

The object-relational mapper Entity Framework Core and the web framework ASP.NET Core keep the “Core” in their names in .NET 5.0 to differentiate themselves from their classic predecessors. However, Microsoft has introduced a new technical limitation: Entity Framework Core 5.0 (the number 4.0 was skipped here as well) only runs on platforms that support .NET Standard 2.1. These are .NET 5.0, .NET Core 3.x and Mono from version 6.4. The classic .NET Framework 4.8 and its predecessors are, unfortunately, not among them. This means that developers who have so far used Entity Framework Core 1.0 to 3.1 on the classic .NET Framework are now at a dead end. As with .NET Core 3.1, support for Entity Framework Core 3.1 only runs until December 3, 2022.

Microsoft had already completed this exclusion of the classic .NET Framework for ASP.NET Core in September 2019 with version 3.0. Only ASP.NET Core 1.0 to 2.2 also ran on the classic .NET Framework.

Support only until February 2022

Unlike .NET Core 3.1, .NET 5.0 is not an LTS version (Long-Term Support) but will probably only be supported by Microsoft until February 2022. That is three months after the release of .NET 6.0, which is due in exactly one year. Developers who rely on .NET 5.0 now will therefore have to migrate to .NET 6.0 by the end of next year. Only .NET 6.0 will then offer three years of support again.


Service Mesh: Linkerd 2.9 upgrades with multi-core runtime

The service mesh Linkerd is now available in version 2.9. The project managed by the Cloud Native Computing Foundation (CNCF) introduces a multicore runtime in addition to ARM support.

On the way to Zero Trust Security

Linkerd 2.9 extends its mTLS (mutual TLS) support. Linkerd can thus transparently encrypt and authenticate all TCP connections in the cluster from the moment of installation; so far, the service mesh only offered this for HTTP traffic. With this innovation, Linkerd automatically encrypts and validates all TCP connections between the meshed endpoints. This also includes rotating the pod certificates automatically every 24 hours and automatically tying the TLS identity to the pod’s Kubernetes ServiceAccount.
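In practice, pods get these guarantees simply by being meshed. A sketch, assuming Linkerd’s standard proxy-injection annotation: marking a namespace as below makes Linkerd add its sidecar to newly created pods there, and connections between such meshed pods are then mTLS-protected automatically.

```yaml
# Sketch: opt a namespace into the mesh via Linkerd's injection annotation.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  annotations:
    linkerd.io/inject: enabled
```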

According to the announcement, this innovation is a big step towards zero-trust security for Kubernetes users. With encryption and authentication down to the pod boundary (the smallest execution unit in Kubernetes), Linkerd offers “encryption in transit” in a revised form. Future versions are to expand the “security-first feature set” with policies and enforcement based on the cryptographic identity and confidentiality guarantees of mTLS.

Upgrade to multi-core runtime

The current version of the service mesh upgrades the proxy to a multi-core runtime in order to increase throughput and concurrency for individual pods. According to the blog post, Linkerd is known for its speed and low memory requirements compared to other service meshes such as Istio, which is probably due to the “micro-proxy” written in Rust. Until now, a single-core runtime was sufficient; the upgrade to a multi-core runtime should bring further performance improvements, which the development team behind the service mesh wants to illustrate with benchmarks over the next few weeks.

In addition, the release offers ARM support, which should enable developers, for example, to cut costs with ARM-based compute instances such as AWS Graviton or to run Linkerd on a Raspberry Pi cluster. Linkerd 2.9 also supports the service topology feature from Kubernetes. This gives developers the opportunity to express routing preferences such as “requests should stay on this node” or “requests should stay in this region”, which in turn should lead to significant performance gains and cost savings, especially for large applications.
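Such a preference is expressed with Kubernetes’ own service topology mechanism rather than a Linkerd-specific API. A sketch (the feature was alpha at the time and required the ServiceTopology feature gate): a Service that prefers endpoints on the same node, then the same zone, then anywhere.

```yaml
# Sketch: Kubernetes service topology (alpha when Linkerd 2.9 appeared).
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  selector:
    app: demo
  ports:
    - port: 80
  # Preference order: same node, then same zone, then any endpoint.
  topologyKeys:
    - "kubernetes.io/hostname"
    - "topology.kubernetes.io/zone"
    - "*"
```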

What is a service mesh?

The ability of a service mesh to simplify complex container deployments and improve network functions makes the technology an important infrastructure layer. In a service mesh, each service instance is paired with a reverse proxy. The service instance and this sidecar proxy share a container, which in turn is managed by a container orchestration tool. The service proxies are responsible for communication with other service instances and can support functions such as service discovery, load balancing, authentication and authorization, and secure communication.

The service instances and their sidecar proxies form the data plane, which also includes processing and answering requests. The service mesh also includes a control plane that governs the interaction between the services, mediated by their sidecar proxies.

A complete overview

A blog post accompanying the release covers the new features in Linkerd 2.9. More details can be found in the release notes on GitHub.

(mdo)


Sudden discontinuation: Avira will discontinue business security products at the end of 2021

Avira’s sales partners are currently being informed by e-mail about an early discontinuation of Avira’s business security products: licenses that would actually only expire in the course of 2022 are therefore to expire on January 1, 2022. In addition, all existing partner contracts are to be terminated and the B2B business is to be completely discontinued by the end of this year.

Several letters to Avira partners, independent of one another but with identical quoted wording from the relevant e-mail, have already reached heise online on this matter. When asked by heise online, however, there was no further information so far; a requested official statement including confirmation from Avira is still pending.

License validity shortened without further ado

“We would like to inform you that we are discontinuing the Avira B2B business as of December 31, 2020, and that the relevant departments at Avira will close at this time,” reads the e-mail quoted by readers. As of December 31, 2021, the company is terminating the licenses (regardless of their actual term) and support for the following products:

Avira Antivirus Pro Business Edition
Avira Antivirus for Server
Avira Antivirus for Endpoint
Avira Antivirus for Small Business
Avira Exchange Security

The products lose support and functionality on January 1, 2022 – regardless of the originally agreed term.

The e-mail also points out that licenses with a maximum term of one year can still be acquired via the PartnerNet and the Channel Partner Team until November 30, 2020. License extensions for existing products are also still possible up to December 31, 2021.

Breach of trust with advance notice

Should Avira confirm the available information in this form, it would at least seem questionable whether and to what extent partners and end customers will actually consider further license purchases and extensions after this abrupt and not exactly trust-inspiring end-of-life announcement.

The sudden end of support would not come completely unexpectedly: for some time now, new registrations for sales partners via the channel partner program have no longer been possible. The discontinuation of the Avira business products was also known – albeit not under the conditions stated in the e-mail sent to us, but in accordance with the Avira Product Lifecycle, which can be accessed online (and at least has not been adapted to date). There, the EoL for most products is (still) November 11, 2022.

The e-mail we received does not reveal whether sales partners and end customers can hope for a (partial) reimbursement of the costs for licenses that have already been paid for and will soon become invalid. We have also asked Avira about this and will supplement this article with the corresponding information if necessary.

Difficult times for the AV industry

We are still waiting for Avira’s response to the question of whether the closure of the B2B area could have something to do with the takeover of the company by the Bahraini investor Investcorp in April 2020. Many AV software companies are troubled by the strong shift in the market, which is partly due to the now quite good protection offered by the Defender preinstalled in Windows 10. They have to develop new business fields and models – and compete with innovative companies that can often act more agilely without the “antivirus” millstone around their necks.

Before Avira, Symantec had already been partially bought up: the US company Broadcom took over Symantec’s business division with antivirus and security products for companies for 10.7 billion US dollars. There were job cuts of around 7 percent as part of a “restructuring and savings program”; in addition, unconfirmed rumors about “massive layoffs” made the rounds at Symantec Switzerland. Distributors, resellers and customers complained about poor support availability and massive delays in issuing software licenses. Symantec was not available for a statement at the time.

Update, November 10, 2020: Minor reformulations in view of the pending official statement. (ovw)


Visual Studio Code 1.51 continues work on GitHub Pull Requests and Issues

The development team responsible for the source code editor at Microsoft has released version 1.51 of Visual Studio Code. The October update contains new features for the workbench and the terminal, and work on the GitHub Pull Requests and Issues extension also continues.

News for the workbench

One of the innovations for the workbench is that pinned tabs now always show a pin icon, even when they are not active. This should make pinned tabs easier to identify. When an editor is both pinned and contains unsaved changes, the icon reflects both states. In the new release, the source code editor shows hovers in extension tree views with a cross-platform consistent presentation instead of native tooltips.

In addition, extensions can now be installed without being synchronized while Settings Sync is active. With version 1.51 it is also possible to install a file with the VSIX extension (Visual Studio Integration Extension) from the Explorer: developers right-click the VSIX file and select the context menu entry Install Extension VSIX. Also new is the command workbench.action.blur, which removes the focus from any focusable input; a keyboard shortcut can be assigned to the command in the configuration.
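Such an assignment could look like this in keybindings.json; the key combination chosen here is only an example, not a default:

```json
// keybindings.json - the chosen key combination is an arbitrary example.
[
  {
    "key": "ctrl+alt+b",
    "command": "workbench.action.blur"
  }
]
```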

New local echo mode for the integrated terminal

Changes made in the terminal have to be sent to the terminal process, processed, and returned to Visual Studio Code. This round trip can be slow when the connection to an SSH server or Codespace is poor. The current release therefore adds a local echo mode to the terminal, which predicts locally made changes and cursor movements and displays them in the user interface without waiting for the round trip to the server.

Work on GitHub Pull Requests and Issues continues

The development team announces that it will continue work on the GitHub Pull Requests and Issues extension, which allows developers to edit, create and manage pull requests and issues. Those who want to be kept up to date about new functions and updates are referred to the changelog of version 0.22.0 of the extension.

Work on the extensions for remote development also continues. These extensions are intended to enable developers to use a container, a remote computer or the Windows Subsystem for Linux (WSL) as a development environment. Visual Studio Code 1.51 brings the ability to persist terminal sessions and reconnect to them. In addition, the development team behind the editor worked on port forwarding.

As expected, the long list of new features in the October update includes support for the beta of TypeScript 4.1. Interested parties can find more information on all new features in the announcement post on the Visual Studio Code blog.

(mdo)


Pure keyboard: Version 2 of the text-based e-mail client Mutt has appeared

The text-based, free e-mail client Mutt has made a version jump to 2.0. The reason, however, is not so much an extensive overhaul or a major expansion, but the introduction of some changes that break backward compatibility with previous versions.

In the release notes, the developer gives an overview of the most important new features; there is also a link to the complete list of all changes in the source code repository on GitLab. Anyone planning an update to version 2.0 should look especially at the section on features that break compatibility with previous versions: this affects, among other things, the behavior when adding attachments. Some variables now have default values adapted to the respective localization; they can be recognized by the type string (localized). When establishing a connection to the server, encryption is now assumed and TLS is enforced (the variable ssl_force_tls is set to yes by default).
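For illustration, a muttrc sketch of the new default and of how an update candidate could deliberately relax it for a legacy server:

```
# Mutt 2.0 enforces TLS by default; this line matches the new default:
set ssl_force_tls = yes
# Only if a legacy server without TLS must be reached (not recommended):
# set ssl_force_tls = no
```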

Lisp-like extension for configuration

As usual, there are also some new features, although, as developer Kevin McCarthy emphasizes, this time there are actually fewer than in previous versions. For example, you can change the directory within Mutt with the cd command. If the connection to the IMAP server is lost, Mutt now automatically tries to re-establish it. Mutt also now masters the XOAUTH2 protocol as well as completing patterns with the Tab key; a configuration sketch follows below.
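A hedged sketch of what using the new XOAUTH2 support could look like in a muttrc; the refresh command is a hypothetical, site-specific helper script that must print a valid access token:

```
# Sketch: authenticate against an IMAP server via XOAUTH2 (Mutt 2.0+).
set imap_authenticators = "xoauth2"
# Hypothetical helper script that prints a fresh OAuth2 access token:
set imap_oauth_refresh_command = "~/bin/get-oauth-token.sh"
```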

With MuttLisp, there is also a Lisp-inspired extension of the configuration syntax, for example for the conditional execution of commands. However, the developer emphasizes its limitations: MuttLisp is not a complete programming language, cannot replace macros, and is experimental. Last but not least, users can now specify a directory for storing attachments.

Mutt is freely available for Linux and Unix systems under the GNU General Public License (GPL). The software first appeared in 1995 and has been developed further ever since; the most recent version, 1.14.7, dates from August of this year. The last big change came with version 1.6 in 2016. As a mail user agent (MUA) for the console, Mutt is a curiosity, but it continues to enjoy a certain popularity because e-mails can be managed with unmatched efficiency and speed using only the keyboard – once you have familiarized yourself with the software.

(tiw)


The failure of click days: a cultural issue, not a technological one

This year’s click days all failed for the same reason, which, contrary to what one might think, is not technological. It is cultural: a direct reflection of the country’s immobility, which seems refractory to any change.

by Riccardo Robecchi, published at 14:36 in the Innovation channel

The digital revolution promised to change everything and to revise many aspects of our lives, in particular those linked to our relationship with the State and its institutions. These promises, however, have gone largely unfulfilled, with inefficiencies and complications that make doing business in Italy more difficult than necessary. A striking example of this failure is the use of so-called “click days” for the payment of bonuses and compensation. They are a symbol of failure due not to improbable hacker attacks or infrastructure problems, but to a digital re-creation of analog dynamics. Ultimately, a cultural problem.

Why was the bike bonus a failure? Cultural reasons

Unless a huge infrastructure is available, it is to be expected that hundreds of thousands of users attempting to carry out operations at once will lead to problems. This concept appears elementary even to non-experts, but it does not seem to have worried the creators of the “click day” for the bicycle bonus, who implemented an absolutely Byzantine code system under the illusion of being able to manage the volume of requests. The outcome, given the aforementioned system, was widely predictable.

The problem, in this case, does not lie in the fact that the infrastructure did not hold up. That was not the real problem even for the “click day” in April, when the INPS servers could not withstand the connection requests from professionals. The real problem is that someone decided that a “click day” was a good idea.

As I wrote at the beginning, the promise of digital was to change the dynamics and relationships between citizens and institutions. This promise was broken not because digital technology is unable to fulfill it, but because the logic that predates the digital world has been applied to it. Google does not queue users when they do a search, nor does Amazon ask them to wait their turn to make their purchases. So why should citizens be asked to queue up with their computers as if they were at a post office?

The system is so convoluted and irrational that it borders on the absurd. The technical complication of implementing such a system is considerable and far greater than that of the many available alternatives: to name just two, there is reimbursement through the tax return, or discounts applied by resellers who subsequently receive a refund. There is no shortage of ideas, and they are in any case better than the one actually implemented.

The reason we insist on proposing these inadequate methods lies in the mentality: those who devise these implementations are firmly stuck in the first half of the twentieth century and seem to have no intention of moving on. The cultural approach is the same as fifty years ago and, given the complete revolution that digital technology has brought to our society, one cannot help but think of Tomasi di Lampedusa’s Il Gattopardo: everything must change so that nothing changes. And until politics decides to put a stop to these practices by imposing more modern ones, we will keep standing in virtual queues.

“Click days” are the fruit of an immobile culture and the proverbial tip of the iceberg of a bureaucratic system that harms everyone, starting with taxpayers. But for there to be a change on this front, one must first take place on another: each of us must assume our responsibilities and bear the consequences of our actions, whether positive or negative. Only then will all the rest follow.

Technological and cultural changes: an inseparable pair

Understandably, people do not take the disservices of the public administration well

The various innovations that technology offers – open source tools, cloud, containers and so on – suddenly become less useful, less profitable and less able to realize their potential if there is no underlying culture ready to embrace the change that new technologies require and to make them its own. Otherwise, the risk is having the hydrogen tractor, yes, but having it pulled by oxen instead of hitching it to the plow. The evolution of the means of production influences that of culture and vice versa, but if one of the two is blocked (in this case the cultural one), the other will block as well; the change of technological means must proceed hand in hand with a change in the approach to them. This can be seen very clearly in the corporate world, where companies that manage to make this evolution their own thrive, while those that stand still run into difficulties and, if they do not change, end up being overtaken by their competitors and closing down.

Using the cloud makes little sense if you use it exactly like a server from twenty years ago, just as using containers makes no sense if you do not take advantage of their intrinsic flexibility and agility. Exploiting these new means requires a profound change of approach, one that challenges many concepts already learned and in use. But just as each of us is expected to keep learning, hone our skills and move into new and unexplored fields, so must the public administration. Let’s be clear: the problem does not lie only in the public sector, as several studies show (it was discussed, for example, at the D-Avengers conference at Bocconi University in Milan). In Italy we do not invest in training, which means we do not invest in preparing for change and evolution. But the world, for as long as it has existed, has never stood still, and it expects us to move with it. Otherwise we are left behind, with all the consequences that entails.