
Lightspeed: New open source software for the complete live streaming package

A new open source tool called Lightspeed lets users set up their own server for live streaming. The project's main selling points are a latency of under one second and the easy installation of its three modules.

The basis is the aptly named Lightspeed Ingest, which carries out the FTL handshake with the connecting clients. FTL is the Faster Than Light protocol, which prioritizes transmission speed over quality. It originated as part of Mixer, Microsoft's discontinued game streaming service.
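For illustration, here is a minimal Node.js/TypeScript sketch of what the first step of an FTL-style ingest handshake could look like. The port number, the message framing and the use of HMAC-SHA512 are assumptions based on the publicly available ftl-sdk, not on Lightspeed's own documentation; the channel ID and stream key are placeholders.

```ts
// Sketch of an FTL-style ingest handshake (assumptions: TCP port 8084,
// "\r\n\r\n"-terminated messages, HMAC-SHA512 over a hex-encoded challenge).
import { createConnection } from "node:net";
import { createHmac } from "node:crypto";

const channelId = "1";              // hypothetical channel ID
const streamKey = "aBcDeF";         // hypothetical stream key
const socket = createConnection(8084, "ingest.example.com");

// Step 1: ask the server for an HMAC challenge.
socket.once("connect", () => socket.write("HMAC\r\n\r\n"));

// Step 2: sign the challenge with the stream key and authenticate.
// (A real client would buffer until the full "\r\n\r\n"-terminated reply arrives.)
socket.once("data", (challenge) => {
  const hex = challenge.toString().trim().split(" ").pop() ?? "";
  const signature = createHmac("sha512", streamKey)
    .update(Buffer.from(hex, "hex"))
    .digest("hex");
  socket.write(`CONNECT ${channelId} $${signature}\r\n\r\n`);
});
```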

Modular structure with three components

The second component is Lightspeed WebRTC, which plays out the incoming RTP packets via WebRTC. The third module is a React website that serves as the front end for viewers. At this stage of the project, Lightspeed needs all three components, but developers are eventually supposed to be able to write their own alternative modules.
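To show how the viewer side of such a setup typically works, here is a small browser-side TypeScript sketch that receives a stream over WebRTC using the standard offer/answer exchange. The signaling endpoint ("/api/watch") is hypothetical; Lightspeed's actual React front end handles this internally.

```ts
// Minimal WebRTC viewer sketch: request a receive-only stream and attach it
// to a <video> element. The "/api/watch" signaling URL is an assumption.
async function watch(videoEl: HTMLVideoElement): Promise<void> {
  const pc = new RTCPeerConnection();
  pc.addTransceiver("video", { direction: "recvonly" });
  pc.addTransceiver("audio", { direction: "recvonly" });

  // Attach incoming tracks to the video element as they arrive.
  pc.ontrack = (ev) => { videoEl.srcObject = ev.streams[0]; };

  // Classic offer/answer exchange with the WebRTC module on the server.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  const res = await fetch("/api/watch", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(pc.localDescription),
  });
  await pc.setRemoteDescription(await res.json());
}
```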

On the streaming computer (Windows, macOS or Linux), the requirement is Open Broadcaster Software (OBS) with the FTL SDK. Installation instructions can be found on GitHub. Lightspeed is released as open source software under the MIT license.

(fo)


heise+ | Processor performance: Core-to-core latencies of modern CPU cores in detail

The performance of a processor depends not only on the speed of its CPU cores, but also on how fast they communicate with each other.

(Image: Denis Fröhlich / c’t)

The construction of modern processors is currently undergoing a change. Until a few years ago, large "monolithic" chips that combine all CPU cores and functions on a single semiconductor die were the norm, but the trend is now towards several individual chips. AMD, for example, uses so-called chiplets for the powerful processors of the Ryzen, Ryzen Threadripper and Epyc series: several separate silicon dies on a common carrier. The CPU cores, which do the actual computing work, sit in up to eight CPU core dies (CCDs). All other functional blocks, such as the memory controller, the PCI Express root hub and the I/O functions, AMD packs onto a separate I/O die.
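As a rough illustration of how such core-to-core latencies are measured in principle, here is a Node.js/TypeScript "ping-pong" sketch: two threads bounce a flag in a shared cache line back and forth, and the average round-trip time approximates how quickly two cores exchange data. This is not c't's methodology; Node cannot pin threads to specific cores, so the result only approximates what dedicated benchmarks deliver. It assumes the file is compiled to CommonJS and run with plain `node`.

```ts
// Core-to-core ping-pong sketch: main thread and a worker bounce a shared
// atomic flag; the mean round-trip time hints at inter-core latency.
import { Worker, isMainThread, parentPort, workerData } from "node:worker_threads";

const ROUNDS = 1_000_000;

if (isMainThread) {
  const sab = new SharedArrayBuffer(4);
  const flag = new Int32Array(sab);
  const worker = new Worker(__filename, { workerData: sab });

  worker.once("message", () => {              // worker signals that it is spinning
    const start = process.hrtime.bigint();
    for (let i = 0; i < ROUNDS; i++) {
      Atomics.store(flag, 0, 1);              // hand the cache line to the worker
      while (Atomics.load(flag, 0) === 1) {}  // spin until it bounces it back
    }
    const ns = Number(process.hrtime.bigint() - start) / ROUNDS;
    console.log(`~${ns.toFixed(0)} ns per round trip (~${(ns / 2).toFixed(0)} ns one way)`);
    Atomics.store(flag, 0, 2);                // tell the worker to stop
  });
} else {
  const flag = new Int32Array(workerData as SharedArrayBuffer);
  parentPort!.postMessage("ready");
  for (;;) {
    const v = Atomics.load(flag, 0);
    if (v === 2) break;                       // measurement finished
    if (v === 1) Atomics.store(flag, 0, 0);   // bounce the flag back
  }
}
```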

The chiplet design increases the yield

The larger the area of a semiconductor chip, the higher the probability that a defect creeps in during manufacturing. In addition, there is a maximum chip area that can be exposed with today's lithography technology; it lies at roughly 800 mm².
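To make the yield argument concrete, here is a toy calculation using the classic Poisson yield model Y = exp(-D * A). The defect density and die areas below are assumptions chosen purely for illustration, not figures from the article.

```ts
// Toy yield comparison with the Poisson model Y = exp(-D * A).
// 0.1 defects per cm² is an illustrative assumption, not a real fab figure.
const defectsPerCm2 = 0.1;

function yieldFor(areaMm2: number): number {
  return Math.exp(-defectsPerCm2 * (areaMm2 / 100)); // convert mm² to cm²
}

console.log(`600 mm² monolithic die: ${(yieldFor(600) * 100).toFixed(1)} % good dies`);
console.log(` 80 mm² chiplet:        ${(yieldFor(80) * 100).toFixed(1)} % good dies`);
```

With these assumed numbers, the large die yields roughly 55 % good parts while the small chiplet yields over 90 %, which is the economic motivation behind the chiplet approach.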

Intel, on the other hand, uses monolithic chips for the majority of its CPUs. This applies not only to the quad-core 10-nanometer mobile processors of the Core i-1100 series and the desktop CPUs of the Core i-10000 series with up to 10 cores, but also to the server chips of the Xeon line with up to 28 cores.



AMD Radeon: Chiplet-Based GPUs Like Ryzen CPUs?

After its CPUs, AMD seems intent on drastically changing the design of its GPUs as well. The company is studying an MCM design that would allow multiple graphics chiplets to work together properly. The idea is laid out in a newly published patent.

by Manolo De Agostini, published at 11:01 in the Video Cards channel

AMD Radeon RDNA

AMD was the first to believe in and adopt an MCM design for its microprocessors, from the consumer Ryzen models up to the EPYC server parts. The US company has shown how producing the components of a CPU separately, decoupling the cores from the memory controller and the other interfaces, not only helps bring products to market quickly and predictably, but also brings advantages in performance and power consumption.

The MCM (multi-chip module) design, in which a processor is composed of different blocks, possibly manufactured on different processes and connected via a high-speed bus, allows for a more innovative approach to design. This has allowed AMD to close the gap with Intel and surpass it in several market segments, thanks to the possibility of increasing the number of cores by building dedicated chiplets on advanced production processes. There is also an advantage in terms of production yields, as manufacturing each chiplet on the most appropriate process results in fewer defects.

It is clear that the era of the monolithic chip, that is, a highly integrated design that contains all the parts of a microprocessor on the same die, is coming to an end: it is less and less suited to future innovations, and this applies not only to CPUs but also to GPUs. Both AMD and Nvidia have long toyed with the idea of moving to an MCM design, and the transition may be closer than we think.

A patent filed by AMD titled "GPU Chiplets using high bandwidth crosslinks" lays out the idea, or at least one of the ideas on the table, that the company led by Lisa Su is considering for an MCM GPU. In the document, AMD also explains some of the reasons it has not taken that route so far: the high latency of communication between chiplets, the programming models, and the difficulty of implementing parallelism.

However, the company's engineers think they can get around these obstacles with an in-package interconnect called a "high bandwidth passive crosslink", which allows any GPU chiplet to communicate directly with the CPU and, at the same time, with the other chiplets. Each chiplet, in addition to its cores, would have its own cache and everything it needs to work autonomously: in short, each chiplet would be a complete GPU, fully manageable by the operating system.

Unlike what we see with AMD's CPUs, where the company has packed the x86 cores into dedicated chiplets and everything else into an I/O die, in this case the idea looks more like building smaller GPUs and making them work together. One of the growing problems today is that the most powerful graphics chips occupy such a large area that they require advanced and very expensive production processes, with yields that are not always great.

Creating smaller GPUs would reduce production costs and improve yields, while the new interconnect would preserve performance and, at the same time, let the software make the most of the hardware without special modifications. At the moment it is not clear if and when this project will become reality, but it is rumored that this will happen after the solutions based on the RDNA 3 architecture, expected between this year and next.