Corona Warn App soon available on F-Droid

It is not only researchers who find that the success of the Corona Warn App depends crucially on its acceptance. Acceptance was also one of the reasons why the decision was made early on in favor of a decentralized solution – one that, for the sake of transparency, was also to be developed as open source. Be that as it may: the source code can be viewed on GitHub.

Without Google services

All of the source code? Nearly – there is still one small catch: for its functionality, the app accesses the so-called “Exposure API” from Google (or Apple), and that code is not open source. The app itself can currently only be obtained from the respective “official stores” of Google and Apple. Multiple requests to also make the Android APK file available for download, for example on GitHub, were rejected mainly for two reasons:

1. The app would not work anyway without pre-installed Google services (which has not been true since the beginning of September, since microG now also implements the Exposure API).
2. The RKI would not have granted approval for this.

On the one hand, the RKI emphasizes that everyone should please install this app – on the other hand, however, it is not available to everyone: users of newer Huawei devices in particular are still left out in the cold, as are those who forego Google services (including the Google account required to use the Play Store) to protect their privacy.

On F-Droid in a few days

So Marvin Wißfeld, the author of the microG framework, sat down once again – and within a few weeks single-handedly achieved what the large companies SAP and DTAG apparently did not manage despite government funding: he reimplemented the client libraries (i.e. the non-free Google components previously required in the app itself to address the Exposure API) as open source, released under a free license, of course – and in such a way that, as a so-called “drop-in replacement”, they can replace the proprietary Google libraries in just a few steps.

The F-Droid community immediately took this as an opportunity to apply the whole thing to a fork of the app – and just a few hours later a functional app existed that is completely open source. It is currently still being tested internally, but should be available on F-Droid in a few days. SAP and DTAG are of course also free to switch to these open source libraries; the corresponding offer has already been made to them. However, the RKI would probably have to give its approval again.

(bme)

Proxmox VE 6.3 with the latest Ceph and new ZFS functions

With the new version 6.3, Proxmox is updating its Virtual Environment. The latest cluster- and HA-capable VE is based on Debian GNU/Linux 10.6 (“Buster”) with a specially adapted Linux 5.4 LTS kernel. Via LXC 4.0 and QEMU/KVM 5.1, the virtualization platform provides both resource-saving Linux containers and full-fledged VMs.

Administration can be done in the terminal or via an easy-to-use web interface. For large storage requirements, Proxmox uses ZFS version 0.8.5 in addition to the usual Linux file systems. The web GUI gains many new features, such as an improved editor for setting the boot order of VMs. The integration of ZFS into the virtualization platform has been optimized, and slow CIFS and NFS storage systems are now given more time before a timeout occurs.
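Besides the terminal and the web interface, Proxmox VE can also be scripted via its REST API. As a hedged sketch (the third-party proxmoxer Python package, host name and credentials are assumptions for illustration, not something covered by the article), listing the nodes and VMs of a cluster might look like this:

```python
# Hedged sketch: scripted administration via the Proxmox VE REST API using the
# third-party "proxmoxer" package (pip install proxmoxer requests).
# Host, user and password are placeholders.
from proxmoxer import ProxmoxAPI

prox = ProxmoxAPI(
    "pve.example.com",      # hypothetical Proxmox VE host
    user="root@pam",
    password="secret",      # an API token would be preferable in practice
    verify_ssl=False,
)

# List cluster nodes and the QEMU VMs running on each of them.
for node in prox.nodes.get():
    print(f"Node {node['node']} (status: {node['status']})")
    for vm in prox.nodes(node["node"]).qemu.get():
        print(f"  VM {vm['vmid']}: {vm['name']} ({vm['status']})")
```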

Ceph as Nautilus or Octopus

The highly available, distributed and robust Ceph storage for Proxmox clusters can be used in two versions: the older Ceph Nautilus 14.2.15 or the more recent Octopus 15.2.6. The desired version is selected in the configuration wizard during installation. Thanks to its multi-site replication capability, Ceph Octopus offers advantages in terms of redundancy and disaster recovery.

In Linux containers (LXC), Proxmox VE 6.3 now directly supports the systemd-free Devuan GNU/Linux as well as Kali Linux, a pre-configured distribution for penetration testing and “ethical hacking”. Virtual machines under QEMU can now be shut down during a backup, and vCPUs support additional CPU options such as SSE4.2 and cope better with multiple NUMA nodes.

Team: Hypervisor, Mail Gateway and Backup Server

A week ago, the Vienna-based Proxmox Server Solutions GmbH published its Mail Gateway 6.3, a mail proxy that sits between the firewall and the mail server and is intended to offer extensive spam and virus protection. An object-oriented rule system defines filter rules according to user name, domains, time frame, type of content and the resulting action. Both Proxmox VE 6.3 and Mail Gateway 6.3 are designed to work with the new Proxmox Backup Server 1.0.

Like the Mail Gateway, Proxmox VE 6.3 has its own built-in backup.

(Image: Proxmox)

As usual, the manufacturer documents details of all innovations and planned developments in a roadmap. Proxmox VE 6.3 is available immediately and is licensed under the GNU Affero GPL v3. The open source software can be used free of charge without access to the Enterprise Repository; professional support with access to the Enterprise Repository is available from 85 euros (net) per year and CPU socket.

(avr)

With the Bloodhound on an Active Directory hunt

At the first virtual SO-CON, the organizer SpecterOps gave insights into the tools and mindsets of professional Red and Blue teams in many different presentations. Even if the name SpecterOps may be less familiar in Germany, the company’s open source tools are all the better known. These include the projects PowerShell Empire, BloodHound, PowerSploit and GhostPack.

In the talk “Six Degrees of Global Admin”, Andy Robbins introduced BloodHound 4.0. While older versions of the tool helped to analyze classic Active Directory environments and to represent possible attack paths using graph theory, the new version can now also examine Microsoft Azure. For this purpose, the new ingestor called AzureHound collects data from Azure Active Directory and the Azure Resource Manager; the BloodHound GUI then imports it into a Neo4j graph database.

Attacking Active Directory locally or in the cloud

Especially with hybrid infrastructures – classic Active Directory and Azure AD used in parallel – or with VMs in Azure, it makes sense to transfer the data from both directory services into one database and load it into a single graph. In this way, the software may reveal additional attack paths that were not previously detectable. For example, a user synchronized from the local Active Directory to Azure AD could have extended rights on a VM, which would allow the local domain to be compromised, or the Global Administrator in Azure AD could be compromised via nested group memberships.

In order to extract and map the data from the Azure infrastructure, the BloodHound graph gains ten new node types: Tenants, Azure Users, Azure Security Groups, Apps, Service Principals, Subscriptions, Resource Groups, Virtual Machines, Devices and Key Vaults. There are also 14 new edges that represent the possible attacks.
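Since BloodHound stores everything in Neo4j, the imported Azure objects can also be queried directly with Cypher. A hedged sketch using the official neo4j Python driver (connection details are placeholders, and the AZUser/AZTenant node labels are assumptions based on BloodHound's naming scheme rather than something stated in the article):

```python
# Hedged sketch: querying the BloodHound Neo4j database with the official
# Python driver (pip install neo4j). Credentials are placeholders; the
# AZUser/AZTenant labels are assumptions and may differ in practice.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "bloodhound"))

# Look for short paths from any Azure user to a tenant node,
# i.e. potential routes towards tenant-wide (Global Admin) control.
query = """
MATCH p = shortestPath((u:AZUser)-[*1..]->(t:AZTenant))
RETURN u.name AS user, length(p) AS hops
ORDER BY hops LIMIT 10
"""

with driver.session() as session:
    for record in session.run(query):
        print(record["user"], "->", record["hops"], "hops to the tenant")

driver.close()
```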

As with classic, on-premises Active Directory, the rights of an ordinary Azure AD user are sufficient to query almost all of the required information. Only the subscriptions cannot be queried this way by default. According to SpecterOps, AzureHound needs almost two hours to collect the data even in large environments with 240,000 users.

Data processing: IBM integrates Confluent with Cloud Pak for Integration

IBM and Confluent have announced a partnership under which the Confluent Platform will become part of the Cloud Pak for Integration. Big Blue offers the latter as an integration platform for applications and data that takes a modular approach for and with containerized applications.

(Image: IBM)

Writing with Kafka

Confluent is the company behind Apache Kafka, a distributed platform for big data applications in the field of messaging and streaming. The system, originally developed at LinkedIn and whose version 2.6 appeared in April, has been under the umbrella of the Apache Software Foundation since 2012. The platform owes its name to the Czech author Franz Kafka, as Jay Kreps, one of the core developers, explains: “I thought that since Kafka is a system that is optimized for writing, the name of a writer would make sense.”
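To give an idea of what working with Kafka looks like in practice, here is a minimal producer using Confluent's Python client; the broker address and the "clickstream" topic name are placeholders, not anything from the announcement:

```python
# Minimal Kafka producer using Confluent's Python client
# (pip install confluent-kafka); broker and topic are placeholders.
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

def delivery_report(err, msg):
    """Called once per message to report delivery success or failure."""
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [{msg.partition()}] @ {msg.offset()}")

# Publish a few click events to a stream; consumers read them independently.
for i in range(3):
    producer.produce("clickstream", key=str(i), value=f"click-{i}",
                     callback=delivery_report)

producer.flush()  # wait until all queued messages are delivered
```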

The Kafka developers founded Confluent out of LinkedIn in 2014 and offer a platform based on Apache Kafka that is designed for enterprise applications. As with other open source projects, the commercial offering extends the open source basis with additional functions for monitoring, infrastructure and operation, among other things. In addition to the Confluent Platform, the company has a managed cloud service in its portfolio.

Extended offering

Apache Kafka has been part of the Cloud Pak for Integration for some time. Through the partnership, IBM is now expanding its offering to include the complete Confluent Platform with its additional functions.

Big Blue is responsible for first- and second-level support; Confluent takes over third-level support for more complex problems. The post on the Confluent blog sees the partnership as a sign that event streaming has now arrived in the mainstream after a long period in which start-ups in particular relied on Apache Kafka as early adopters.

In its cloud blog, IBM gives as the reason for the integration that many companies want to create personalized applications and use real-time data such as transactions and clicks on websites or sensor data from IoT devices. Confluent, it says, offers the most comprehensive platform for this type of data processing.

(rme)

Linux: GNU Guix 1.2 plays it safe with a new authentication option

GNU Guix, a functional package manager for the GNU operating system, is celebrating its eighth birthday, and version 1.2 was released seven months after the previous release. The most important innovation is likely to be the ability to cryptographically authenticate channels, which should make Guix one of the most secure ways of delivering operating systems.

Cryptography and multilingualism

In addition to changes to deployment and some new interfaces, the current release also includes an extended reference manual which, in addition to English, is now fully translated into French, German and Spanish. Translations into eleven other languages are in progress, with the Russian and Chinese translations apparently the most advanced.

With channel authentication, according to the blog entry, the open source GNU project behind the package manager closes what is apparently the largest gap in the “software supply chain”: guix pull and related commands now only retrieve authorized commits from the official Guix repository. The code of each authorized channel is cryptographically verified when it is retrieved. With the new command guix git authenticate, the authentication mechanism can be used for any Git repository.

More security features and new package options

The build daemon and the origin programming interfaces recently started accepting additional cryptographic hash functions, in particular SHA-3 and BLAKE2s. Previously, Guix had relied exclusively on SHA256 hashes for source code. The new version of Guix also tries to detect system downgrades in order to prevent security holes from being reintroduced by rolling back to older operating system versions. As of Guix 1.2, automatic updates (Unattended Upgrade Service) run the command guix pull && guix system reconfigure, which users no longer have to trigger individually and manually.

Three new package transformation options are introduced on the command line: --with-debug-info, --with-c-toolchain and --without-tests. Transformations are recorded in the profile and can be replayed with guix upgrade. These changes affect the entire dependency tree, including the “implicit” inputs that previously could not be transformed. The module (guix transformations) provides an interface to the command-line transformation options. On the user side, the Guix help now gives an overview of the available commands, guix pull has received a progress bar, and a new, leaner “baseline compiler” means the pull process should require fewer resources.

Background on GNU Guix and installation methods

GNU Guix is a functional package manager and a distribution of the GNU system that is well advanced in its development. In addition to the standard functions of classic package management, Guix also supports upgrades, rollbacks, package management without granting privileges and per-user profiles, and offers a garbage collector. Guix can be run on any system running the Linux kernel, but it can also be used as an independent operating system on devices with suitable processor architectures (such as i686, x86_64, ARMv7 and AArch64). As a stand-alone GNU/Linux distribution, Guix offers a declarative, stateless approach to managing the operating system configuration. Guile programming interfaces and extensions make Guix particularly adaptable.

More information about the current release can be found in the blog entry on GNU Guix. The current version of the Guix package manager can be downloaded from the download area of the GNU project. More information about the distribution is available on the project website.

(sih)

VPN: WireGuard for Windows continues to take shape

The WireGuard project released Windows versions 0.2 and 0.3 of its VPN software in quick succession. An important innovation is a restricted view for regular users: anyone who belongs to the Network Configuration Operators group can start and stop tunnels there and view their status without administrator rights. However, any access to the keys remains blocked for them. With this function, the project is targeting in particular company users whose system administrators use WireGuard but do not want to give their users administrator rights.

Furthermore, the configuration can now be found under %ProgramFiles%\WireGuard\Data\Configurations. The software previously saved it in the LocalSystem user profile. Microsoft advises against the latter, however, and no longer migrates such settings files between Windows 10 versions. When updating, WireGuard automatically moves the configuration to its new location, where it is encrypted without user intervention.

WireGuard for the Surface Pro X and the Pi

Furthermore, WireGuard for Windows can now be used on ARM and ARM64 systems. With the latter, the project is targeting Microsoft’s Surface series and the Raspberry Pi. Previously, the software could only be used on x86 and amd64 computers. Instead of letting users pick the correct MSI themselves, WireGuard now offers an installer that automatically downloads the correct installation file in the latest version, validates its signature and then sets up the program. However, the individual MSIs are retained so that administrators can keep and distribute them themselves.

With further changes and bug fixes, WireGuard should also run faster and more stably. The project has also added further translations. According to the announcement of the two new versions, split tunneling is not yet ready for production use. WireGuard for Windows is released as open source software under the MIT license.

The storage location of the configuration profile is not just any user profile, but the LocalSystem user profile. The unclear wording in the text has been corrected.

(fo)

Free image editor Gimp turns 25 years old

On November 21, 1995, Spencer Kimball and Peter Mattis released the first public beta version of Gimp for Linux, Solaris and Unix. The program was developed as part of a semester project at the University of California, Berkeley. Gimp stands for “General Image Manipulation Program” – in the BDSM scene, however, it also refers to a submissive person. For the naming, the developers were inspired by the 1994 Tarantino film Pulp Fiction.

The first official version, published in January 1996, bears the number 0.54. From then on, Gimp is unstoppable, embarking on an unprecedented triumphal march. After a meeting with GNU founder Richard Stallman the following year, Kimball and Mattis changed the name of their program to “GNU Image Manipulation Program” – without having to change the acronym. And it still bears that name today.

Its own format for Gimp 1.0

Not until June 1998 does Gimp reach version 1.0, gaining a memory management system that allows large image files to be opened. In addition, Gimp 1.0 can store files in its own XCF format, including layers, and execute scripts in the Script-Fu language. The program is now also available for Windows and macOS. However, it is a time that demands a lot from users – for example, compiling the installer for their platform themselves and installing the GUI toolkit GTK+ manually.

Gimp 2.0 is released in December 2004, but does not bring any earth-shattering innovations. It can import and export SVG files and brings simple functions for the CMYK color model and prepress.

Gimp 2.4 brings an ICC color management system and print simulation. When opening files, the program asks whether it should interpret or discard embedded ICC profiles.

Good things come to those who wait: GEGL

In October 2008, the developers lay the foundation for the switch to the new graphics library GEGL with Gimp 2.6. It promises complete ICC color management and image processing at 32-bit color depth per channel – optionally with floating-point operations. For the time being, however, Gimp 2.8 continues to work with only 8-bit color depth per channel. Gimp 2.8 also comes with an optional single-window mode that combines the three floating palettes that were usual until then into one dock.

Gimp 2.8 can, for the first time, optionally run in single-window mode.

It takes another ten years before the switch to GEGL finally happens in 2018 with Gimp 2.10 – incidentally the direct successor of Gimp 2.8, since odd version numbers are reserved for developer versions. The long-awaited high color depth is finally a reality. In addition, GEGL brings live previews in the document window for filters such as Gaussian blur or unsharp masking. The current developer version 2.99 promises support for HiDPI monitors, improved support for graphics tablets and a new plug-in API.

Gimp 2.10 fully implements GEGL and shows, among other things, the effect of filters live in the document window.

Compiled installers are now, of course, available for download at Gimp.org, and countless books, articles and videos explain how to use the program. Gimp has become more user-friendly in adulthood. The free image editor has a permanent place on many PCs and has become an integral part of the open source world. We congratulate it and look forward to the next 25 years.

(akr)

Japanese companies affected by cyber attack in 17 countries

A cyber attack on Japanese companies and their subsidiaries lasted for almost a year, from mid-October 2019 to the beginning of October 2020. The large-scale operation is said to have primarily served espionage purposes and targeted companies in 17 different countries.

Symantec’s “Threat Hunter Team” discovered the attack at some of its customers and attributes it to the Cicada hacker group, also known as APT 10, Stone Panda and Cloud Hopper. The group is said to have been active since 2009. The US government has linked APT 10 to the Chinese government, which is why Symantec assumes there is a connection to Beijing in this case as well.

Targeting Japanese companies

Cicada is known to primarily target Japanese companies. Symantec does not see a direct connection between the victims; the similarities come down to the type of attack and the techniques used. For example, the hackers exploited the ZeroLogon vulnerability, which was only patched in August 2020. Otherwise, they mainly used DLL sideloading to get malware onto the systems. Most recently, they built in “QuasarRAT”, an open source backdoor that Cicada had already used in the past. The methods used to obfuscate their activities also correspond to Cicada’s known procedures.

The companies are primarily active in the automotive sector, both in production and as suppliers. But companies from the electronics, clothing and pharmaceutical industries are also affected. Symantec points out that managed service providers were also among the victims. The attackers were able to access other customer systems through their networks. The time that the intruders spent in the respective systems varied greatly: While some companies were spied on over a long period of time, others were only briefly or sporadically targeted by the attackers.

The companies concerned are located in the US, Mexico, the UK, France, Belgium, Germany, the United Arab Emirates, India, China, Hong Kong, Thailand, Singapore, Vietnam, the Philippines, Taiwan, South Korea and Japan. Symantec does not provide any further information about the companies.

(cbo)

Telegram chat: the secure privacy nightmare – an analysis and a comment

Telegram is increasingly becoming a synonym for “secure chat” and “chat with privacy” in certain circles. But even very simple tests, which anyone can carry out themselves, show that users of the messenger service are left almost completely exposed.

Jürgen Schmidt – aka ju – is the managing editor of heise Security and Senior Fellow Security at heise. A graduate physicist by training, he has been working for Heise for over 15 years and is also interested in the areas of networks, Linux and open source.

The first simple test: type a message with a link such as “https://www.heisec.de” – but do not send it yet! You will then see that your smartphone is already showing some information about heise Security:

Already while you are typing, Telegram provides information about the typed link.

WhatsApp, for example, does that too: the app on the phone fetches the information from the URL in the background and shows it to you. Not so with Telegram. There, the app delivers everything you type to the Telegram server – even before you send it. And that server then visits the URL and delivers the preview with the “Portal for IT Security” to the Telegram app on the phone.

I did this test with a honey URL – in other words, a URL that was created solely for this purpose and has never been used anywhere before. An access by the TelegramBot appeared in the log files of my honey URL server immediately after I typed this URL into the Telegram app. It came from the IP address 149.154.161.10, which belongs to a Telegram server in England. Mind you, that happened before I sent the link!

The Telegram server visited my “secret” web page before I had sent the message with the URL.

During the cross-check with WhatsApp, the honey server also registered an access. As expected, however, it came from my own IP address: the app on my smartphone in the local Wi-Fi had retrieved the data, not an external server.
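For reference, such a honey URL server needs nothing more than a tiny web server that logs every access to a path that has never been published anywhere. A minimal sketch (the path and port are made up for illustration):

```python
# Minimal sketch of a "honey URL" server: any request to the never-published
# path is logged with timestamp, client IP and User-Agent.
from http.server import BaseHTTPRequestHandler, HTTPServer
from datetime import datetime

HONEY_PATH = "/only-ever-typed-into-telegram"  # hypothetical secret path

class HoneyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == HONEY_PATH:
            ua = self.headers.get("User-Agent", "-")
            print(f"{datetime.utcnow().isoformat()} hit from "
                  f"{self.client_address[0]} UA={ua}")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HoneyHandler).serve_forever()
```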

The complete chat archive

Now for the second test. On your PC, open the Telegram chat web page in a private browser window: https://web.telegram.org/. There you have to register with your mobile phone number. Telegram then sends you a login code in the form of a six-digit number. Before you type it into your browser, switch the phone to flight mode so that it can no longer send any data. If you then enter the code in the browser, a web page opens with all your chats.

Very convenient: you can also use Telegram in your browser.

Where do you think this data comes from? Not from your mobile phone – that is in flight mode, without a network. And before you have identified yourself with the code, the browser cannot yet have received your data. There is only one possibility left: the content of the chats comes from the web server your browser is talking to. For me it was a server in a data center in Amsterdam (99.154.161.99).

This server therefore has access to a complete copy of all my chats. It even contains the previously typed but not yet sent message with the heise Security URL as a “draft”. And of course, Telegram stores not only my chats but those of all Telegram users.

Everything that users write is stored centrally at Telegram and delivered on demand: to you, if you identify yourself with the correct code – but certainly also to an official who can present a search warrant, to a bribed employee, or to hackers who gain access to the servers. And if Telegram decides one day that it wants to use this data to make you “exciting offers” – that is, targeted advertising – there is nothing, at least from a technical point of view, to prevent it from doing so. Privacy? No sign of it!

Telegram does, in theory, have so-called “secret chats” that are protected from being read by third parties. But they are so well hidden that most Telegram users don’t even know about them, let alone use them. In addition, these secret chats come with a number of restrictions: they cannot be used for groups and can only ever be used on one device. Almost all Telegram chats therefore run via the normal channels that Telegram can read.

Serverless: Google provides Cloud Functions for .NET Core 3.1

Google has announced the preview of the integration of .NET Core 3.1 with the GCP service Cloud Functions. This gives developers the opportunity to use cloud functions from their preferred .NET runtime environment (Windows, Mac, Linux), for example for serverless applications or in the mobile and IoT environment. The Functions Framework for .NET is available as open source on GitHub for developing the functions.

Application examples for Cloud Functions

(Image: Google)

Functions created with the Functions Framework for .NET can be executed locally and deployed via the fully managed cloud service Cloud Functions or to other .NET environments. Developers can also grant the functions access to resources in virtual private cloud (VPC) networks in order to create applications for real-time data processing, video, image and sentiment analysis, or even chatbots and virtual assistants.

The Functions Framework for .NET currently supports HTTP and CloudEvent functions, which are useful for webhook/HTTP use cases as well as for working with the Google cloud services Pub/Sub, Cloud Storage and Firestore. The framework also provides templates that developers can use both on the command line and in Visual Studio – including templates for F# and Visual Basic.
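The .NET framework follows the same Functions Framework model that Google offers for its other runtimes. Purely as an illustration of that model – shown here with the Python Functions Framework rather than the new C# API – an HTTP function is just a handler that the framework wraps in a small web server:

```python
# Illustration of the Functions Framework programming model using the Python
# Functions Framework (pip install functions-framework); this is NOT the new
# C# API, only the shared pattern: decorate a handler, the framework serves it.
import functions_framework


@functions_framework.http
def hello(request):
    """HTTP Cloud Function: echoes a 'name' query parameter."""
    name = request.args.get("name", "world")
    return f"Hello, {name}!"

# Run locally with: functions-framework --target=hello
# then open http://localhost:8080/?name=dotnet
```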

The preview of Cloud Functions for .NET is available immediately. More information is available in the Google blog post announcing the integration, and further details on the open source framework can be found in the GitHub repo for the Functions Framework for .NET. Those more interested in the technical background will find a deep dive in Jon Skeet’s blog.

(map)

Policy management: Kyverno can prove itself in the CNCF sandbox

In addition to the Open Policy Agent, the Cloud Native Computing Foundation (CNCF) has now added another project for policy management to its portfolio. The Kyverno policy engine, originally developed by Nirmata, can now prove itself in the CNCF sandbox. The open source project is designed to integrate seamlessly into Kubernetes and to use its existing resources and tools – developers should be able to do without learning new languages or tools, promises Nirmata founder and CEO Jim Bugwadia.

Regulating policies with CRDs, YAML and JSON

In contrast to Open Policy Agent, which requires the Rego language for policy management, Kyverno uses YAML or JSON and can be combined with the kubectl, git and kustomize tools that most Kubernetes users are familiar with. To handle complex policy configurations with sometimes hundreds of parameters in the API, especially in a corporate context, Kyverno relies on the declarative approach of Kubernetes.

With the help of Custom Resource Definitions (CRDs), Kubernetes administrators can create, manage and automate policies for a wide variety of use cases. Kyverno can be used, for example, to automatically inject certificates into pods or to create sidecar containers. The policy engine can even be used for access control. Kyverno works as a validating and mutating webhook with the Kubernetes API server in order to block invalid or non-conforming configurations if necessary.
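Kyverno ships this admission webhook itself, so users only write policies. Purely to illustrate the mechanism it plugs into, a minimal validating webhook could look like the following sketch; the example rule and the plain-HTTP server are simplifications for illustration, not Kyverno code:

```python
# Minimal sketch of the Kubernetes admission-webhook mechanism that policy
# engines like Kyverno build on. The rule (reject containers that do not set
# runAsNonRoot) is an arbitrary example; a real webhook would be served over TLS.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ValidatingWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        review = json.loads(self.rfile.read(length))
        pod = review["request"]["object"]
        allowed = all(
            c.get("securityContext", {}).get("runAsNonRoot", False)
            for c in pod["spec"]["containers"]
        )
        response = {
            "apiVersion": "admission.k8s.io/v1",
            "kind": "AdmissionReview",
            "response": {
                "uid": review["request"]["uid"],
                "allowed": allowed,
                "status": {"message": "containers must set runAsNonRoot"},
            },
        }
        body = json.dumps(response).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8443), ValidatingWebhook).serve_forever()
```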

Easier configuration for more security

Kyverno’s approach, which is based on Kubernetes patterns and best practices, is intended to make policy management easier, even in more complex corporate environments. Under the umbrella of the CNCF, Nirmata boss Bugwadia also hopes for synergies through closer cooperation with other projects. Among other things, the development team behind the CNCF sandbox project cert-manager has already expressed interest in using Kyverno for policy administration related to certificate management.

Further information on the policy engine can be found on the Kyverno homepage, the announcement as part of KubeCon + CloudNativeCon and in the project overview of the Cloud Native Computing Foundation.

(map)

Version management: GitLab 13.6 refines the insight into code quality

GitLab has released version 13.6 of the version management platform of the same name. For code quality, the release introduces a gradation of the severity and impact of the respective issues. In addition, the branch and tag lists now show the status of the CI/CD pipeline (continuous integration, continuous delivery). Finally, the Visual Studio Code extension brings additions for integration with the source code editor.

In interaction with cloud services, automated deployment to AWS Elastic Compute Cloud (EC2) has recently become possible. Developers must activate Auto DevOps and define the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION for interaction with the AWS CloudFormation API.

Is this critical, or can it pass?

In the detailed code quality report and in merge requests, the severity of individual code quality violations can now be specified. In this way, critical issues that block the project from being rolled out can be distinguished more quickly from minor problems.

Some problems, such as exceeding the maximum number of lines of code specified for the project, are less critical than others.

(Image: GitLab)
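GitLab reads code quality findings from a Code Climate-style JSON artifact, and the new severity evaluation builds on the severity field in that report. As a hedged sketch (the field names follow the Code Climate format and should be checked against the current GitLab documentation), a custom analyzer could emit such a report like this:

```python
# Sketch: emit a GitLab-style code quality report (Code Climate format) that
# includes the severity field GitLab 13.6 evaluates. Field names are taken
# from the Code Climate spec; double-check them against the GitLab docs.
import hashlib
import json

def issue(description: str, path: str, line: int, severity: str) -> dict:
    """Build one report entry; severity is one of info/minor/major/critical/blocker."""
    fingerprint = hashlib.sha256(f"{path}:{line}:{description}".encode()).hexdigest()
    return {
        "description": description,
        "fingerprint": fingerprint,   # stable ID so GitLab can track the finding
        "severity": severity,
        "location": {"path": path, "lines": {"begin": line}},
    }

report = [
    issue("Function exceeds maximum allowed length", "src/app.py", 42, "minor"),
    issue("Possible SQL injection", "src/db.py", 7, "critical"),
]

# GitLab picks this file up via the code_quality report artifact in .gitlab-ci.yml.
with open("gl-code-quality-report.json", "w") as fh:
    json.dump(report, fh, indent=2)
```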

The code quality report can now also be output in HTML format to make the information easier to share with others. Output files created in this format can be published on GitLab Pages, among other places.

Icons in the branch and tag lists that reflect the status of the CI/CD pipeline are also intended to provide a better overview. Until now, GitLab only displayed the pipeline status in the separate view of the respective branch.

Connection to the editor

For interaction with Microsoft’s open source code editor Visual Studio Code, the GitLab extension offers some additions in the new release. Among other things, developers can now use GitLab snippets, i.e. saved blocks of code or text, directly in the editor. They can also view and comment on merge requests and issues directly from Visual Studio Code.

GitLab snippets can now be inserted directly in Visual Studio Code.

(Image: GitLab)

It is also worth mentioning that group administrators in GitLab can now set the name of the initial branch. The innovation is part of the move away from “master” as a term with racist roots. In the Git version control system, the term has no longer been hard-coded since version 2.28. GitLab has allowed individual customization for some time, but control at the group level is new.

Further additions in GitLab 13.6 can be found in the GitLab blog. The new features listed here are available in all editions of GitLab. As usual, some new functions are reserved for the commercial versions; worth mentioning, among other things, is the display of code coverage for selected projects, which is included in the two most expensive tiers, Premium and Ultimate (or Silver and Gold).

(rme)

Chemnitzer Linux Days with a difference: the event will take place digitally in 2021

“Just do it differently!” is the motto of the Chemnitzer Linux Days 2021. Like many other specialist conferences and community events, the Linux meeting will take place online in the coming year.

The corona pandemic is not only responsible for the cancellation of the Chemnitzer Linux Days 2020. It has also ensured that work and learning are increasingly moving into the digital space for many people. That is why the role of Linux and open source in this rapid digitization is one of the central topics of the lectures and workshops in the coming year.

Browser instead of lecture hall

Unlike in previous years, the central lecture hall building of TU Chemnitz will not serve as the venue. The workshops take place as video conferences, and lectures and presentations are broadcast via video stream. For discussion and exchange, the organizers rely on the open source solution BigBlueButton.

On the website of the Chemnitzer Linux Days, the organizers call for presentations and workshops to be submitted. Projects, companies and associations can apply for a virtual stand in the “Linux Live” area. The “CLT Junior” program with the target group of children and young people is also shifting to the digital space.

The Chemnitzer Linux Days take place on March 13 and 14, 2021. Those interested can find more information on the Linux conference’s website.

(ndi)

AMD Promises Open Source Multi-Platform FidelityFX Super Resolution

(Image credit: Shutterstock)

In a recent HotHardware podcast with Scott Herkelman and Frank Azor, AMD spoke about FidelityFX Super Resolution, the alternative to Nvidia’s DLSS technology it promised for RDNA 2. AMD says it wants its version of resolution upscaling and anti-aliasing to be an open-source API and a multi-platform capable technology, similar to what it’s done with other FidelityFX technologies like CAS (Contrast Adaptive Sharpening).

Right now, AMD is not ready to share technical details on its upscaling tech as it simply isn’t ready. However, Scott Herkelman was able to share AMD’s goals about it and what AMD wants to achieve with “its version of DLSS.”

First, AMD wants to make this technology fully open source, with a non-proprietary API. This potentially gives game developers a much easier time implementing the tech when it’s ready. Instead of coding for a specific piece of hardware and a specific game, it’ll basically be plug and play with one piece of code. Second, the open standard, in turn, will make AMD’s tech multi-platform capable. AMD says it wants it to work on the new RDNA2-powered consoles, its own graphics cards, as well as Intel and Nvidia GPUs (though obviously the first two items in that list are the most important).

This seems like great news for this type of technology. The biggest issue with DLSS is that it’s locked into Nvidia GPUs and APIs. It’s true that implementing DLSS in widely used game engines like Unreal Engine has made it relatively simple to enable. However, developers that build their own engines still need to put in the extra work, all for a technique that only works on RTX cards.

There are two major questions regarding AMD’s forthcoming FidelityFX Super Resolution. The first is simple: Will it look good? We know that DLSS 2.x can look very good, sometimes even surpassing native rendering. Part of that is simply due to the blurriness of temporal AA, however. Remove TAA and use DLSS to remove aliasing and upscale a frame and it’s possible to get a more pleasing final rendering result.

(Image credit: Shutterstock)

The other question is how it will work. Will Super Resolution use an AI trained network to determine the best methods of upscaling and anti-aliasing? Will it be able to make use of hardware acceleration features like Nvidia’s Tensor cores? While we don’t know AMD’s answer for sure, there’s a good chance it’s a “no” on both aspects.

AMD’s Super Resolution needs to work on all platforms, and only Nvidia tech has Tensor cores, so obviously AMD doesn’t have any reason to build tech that relies on a feature it lacks. But as far as AI training goes, that’s still entirely possible — Microsoft at least has a massive amount of supercomputing power available in Azure. The trained algorithm just needs to run on standard GPU cores, and with double performance fast math (FP16), RDNA2 GPUs could likely run such an inference network without too much difficulty.

That leads into the algorithm itself. AMD already offers resolution upscaling and enhancement via the custom-tuned CAS algorithm, which can improve visuals and is extremely low impact when it comes to performance (less than 1 percent difference). But CAS right now doesn’t do nearly as well with upscaling, so Super Resolution just needs to focus on that aspect. Instead of the current methodology where a game does the rendering, applies an overly blurry temporal AA filter, undoes some of the blurring via CAS, and then upscales the final result … what if Super Resolution can combine all of those steps into one superior filter? That would be the goal, at least in our minds.
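To make the contrast concrete, a toy version of the conventional "upscale, then sharpen" approach is easy to sketch. This is purely an illustration using NumPy/SciPy and has nothing to do with AMD's actual (still unannounced) algorithm:

```python
# Toy sketch only: conventional "upscale then sharpen" on a grayscale frame,
# illustrating the kind of multi-step pipeline Super Resolution would ideally
# replace with a single smarter filter. This is NOT AMD's algorithm.
import numpy as np
from scipy import ndimage

def upscale_and_sharpen(frame: np.ndarray, scale: float = 2.0,
                        amount: float = 0.5) -> np.ndarray:
    """Bilinear upscale followed by a simple unsharp mask."""
    upscaled = ndimage.zoom(frame, scale, order=1)          # bilinear resize
    blurred = ndimage.gaussian_filter(upscaled, sigma=1.0)  # low-pass copy
    sharpened = upscaled + amount * (upscaled - blurred)    # unsharp mask
    return np.clip(sharpened, 0.0, 1.0)

if __name__ == "__main__":
    low_res = np.random.rand(540, 960).astype(np.float32)   # stand-in 960x540 frame
    high_res = upscale_and_sharpen(low_res)
    print(high_res.shape)  # (1080, 1920)
```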

Naturally, AMD has a vested interest in making this work, but more importantly AMD’s console partners have even more of a need for tech like Super Resolution. The PlayStation 5 and Xbox Series X both theoretically offer up to 4K at 120Hz support, but hitting 120 fps at native 4K with GPU hardware that’s already slower than a Radeon RX 6800 isn’t going to happen without reducing image fidelity, or some other trick. Super Resolution fills that niche.

Overall, it’s great that AMD is working on a completely open version of upscaling and anti-aliasing. If the FidelityFX Super Resolution tech can bring DLSS-type quality to other GPUs, potentially including existing RX 5000 series and even Nvidia GTX cards, it’ll be a game changer. We still need to see the result, however, and we also need game developers to adopt it. That of course will take time.