IBM apparently wants to take over Instana, a company founded in 2015 that specializes in performance monitoring and has roots in Germany. Instana, headquartered in Chicago with a development center in Germany, sells an APM (Application Performance Monitoring) platform. Financial details of the proposed acquisition were not disclosed; the transaction is subject to customary closing conditions and is expected to be completed within a few months.
What does Instana offer? Instana is designed to let organizations manage the performance of cloud-native applications wherever they run – on mobile devices, in public and private clouds, and on-premises, including IBM's mainframe architecture Z. Instana's Enterprise Observability Platform automatically builds a contextual understanding of cloud applications and provides insights into IT problems that could harm companies or reduce customer satisfaction, such as slow response times, non-functioning services or failed infrastructure, and shows where these problems can be prevented or remedied.
More specifically, Instana offers transparency across CI/CD pipelines (Continuous Integration/Continuous Delivery). It reduces complexity through automated discovery and dependency mapping across the entire hybrid-cloud enterprise infrastructure. Established open source technologies such as the Prometheus monitoring tool and the OpenTelemetry observability framework are used for this purpose.
Interlocking with Watson AIOps

The plan is apparently to integrate Instana into IBM's Watson AIOps. The insights gained this way are intended to produce AI-supported alerts when an application deviates from the norm, before the deviation can have a negative impact on a transaction or activity.
IBM has announced the takeover through several channels: a press release on the one hand and a more detailed blog post on the other. Mirko Novakovic, CEO of Instana and former head of the German IT service provider codecentric, from which Instana emerged, has also commented on the acquisition and goes into some detail on the previous development of the future Big Blue subsidiary.
AMD's new Instinct MI100 is the first compute card to exceed 10 TFLOPS of FP64 performance
AMD has today released its long-awaited first dedicated compute chip, codenamed Arcturus. According to the company, the compute card, launched under the name AMD Instinct MI100, is the fastest in the world and at the same time the first HPC-class GPU to exceed 10 TFLOPS of FP64 performance.
The Instinct MI100, based on AMD's CDNA architecture, is manufactured using TSMC's 7-nanometer process, but the company did not disclose, for example, how many transistors it contains. The CDNA architecture itself builds on a further developed foundation of the GCN architecture, but much has changed as well.
The MI100 has 120 Compute Units divided into four Compute Engine sections. Alongside the traditional scalar and vector units, each CU has a Matrix Core Engine designed to accelerate matrix calculations. The MCE units execute Matrix Fused Multiply-Add (MFMA) operations on KxN matrices with INT8, FP16, BF16 and FP32 input precision. The results of MFMA operations are output with either INT32 or FP32 precision.
The theoretical FP32 performance of the MI100 is 23.1 TFLOPS and its FP64 performance 11.5 TFLOPS. The theoretical maximum for FP32 matrix calculations is 46.1 TFLOPS, for FP16 matrix calculations 184.6 TFLOPS, and likewise 184.6 TOPS for INT4 and INT8 operations. With bfloat16 precision, the theoretical maximum performance is 92.3 TFLOPS.
The compute units are supported by 8 megabytes of L2 cache divided into 32 slices, with a combined bandwidth of up to 6 TB per second. The 4096-bit memory controller supports both 4- and 8-layer HBM2 memory at 2.4 GT/s, for a total of 1.23 TB/s of memory bandwidth and 32 gigabytes of memory. The TDP of the compute card is 300 watts.
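The stated memory bandwidth follows directly from the bus width and transfer rate; a quick back-of-the-envelope check in Python:

# Sanity check of the stated HBM2 memory bandwidth:
# a 4096-bit interface at 2.4 GT/s moves 4096/8 bytes per transfer.
bus_width_bits = 4096
transfer_rate = 2.4e9                            # transfers per second (2.4 GT/s)

bandwidth = bus_width_bits / 8 * transfer_rate   # bytes per second
print(f"{bandwidth / 1e12:.2f} TB/s")            # -> 1.23 TB/s

# With four HBM2 stacks, each 1024-bit stack contributes a quarter:
print(f"{bandwidth / 4 / 1e9:.1f} GB/s per stack")  # -> 307.2 GB/s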
The Instinct MI100 also supports second-generation Infinity Fabric links between compute cards, allowing up to four GPUs to be bridged into the same group. Each GPU has three IF links, giving a group of four MI100 accelerators a theoretical P2P bandwidth of 552 GB/s. The accelerators connect to the processor over the PCI Express 4.0 bus.
Alongside the new compute cards, the new open source ROCm 4.0 was released. The ROCm package includes a variety of tools for developers' needs, from compilers to interfaces and ready-made libraries. The new open source compiler in ROCm 4.0 supports both OpenMP 5.0 and the HIP interface.
According to AMD, ready-made server configurations with Instinct MI100 accelerators will be available at least from Dell, Gigabyte, Hewlett Packard Enterprise and Supermicro.
Gaia-X is intended to become an interoperable European alternative to cloud services from IT giants such as Amazon, Alibaba, Google or Microsoft. But the US companies in particular immediately expressed interest in participating and are now all on board, at least as partners and members of technical working groups.
At the two-day digital "Gaia-X Summit" on Thursday, the representatives of the US corporations all emphasized that they play an important role in the European cloud initiative, that they are clearly committed to values such as data protection, interoperability, trust, transparency and openness, and that they want to fully support free software.
"We believe in open source and an open cloud," underlined Wieland Holfelder, head of Google's Munich development center. Many customers are very concerned about a lock-in effect; participation in Gaia-X will help to dispel these fears and to build the "next generation of the data infrastructure". In addition to technical components, Google wants to provide easily implementable codes of conduct for the General Data Protection Regulation (GDPR). To enable "trustworthy data processing", customers could store their cryptographic keys outside the cloud. Also in the interest of stronger involvement with Gaia-X, the group has agreed on a cooperation with the French provider OVHcloud.
"Natural partner" IBM

Hillery Hunter, chief technologist of IBM's cloud division, recommended her company as a "natural partner in the area of digital sovereignty". It has been investing in open source for over 20 years and took over the pioneer Red Hat last year. For her there is no question: "We have to ensure interoperability and avoid lock-in effects in order to support the export-oriented European economy." Users should "have complete control over their data".
Casper Klynge, Microsoft's "foreign minister" in Europe, described Gaia-X as a "catalyst for a competitive European economy". Microsoft has long relied on partnerships with local providers and on the exchange of know-how, he said, and shares the vision of a free flow of data across borders. Microsoft respects European values and rules and has supported the GDPR from the beginning.
A vulnerability in the server backend of the German Corona warning app enabled remote code execution (RCE). The app itself was not affected. According to SAP, the vulnerability was not exploited, and no personal data could be accessed via the interface.
Although the contact tracing of the Corona warning app works decentrally on the smartphones, the distribution of the random identifiers of infected people to the apps runs via a central server.
Sylvester Tremmel has looked at the source code of the Corona warning app and explains the background to the vulnerability.
The vulnerability was in the interface for transmitting positive test results to the server. This interface is publicly available and does not require authentication; only a TAN is required to transmit a positive result. The TAN is checked by an additional verification server, but only after the request has been processed by the vulnerable code. So no positive corona test was necessary to exploit the vulnerability.
In the worst case, it would have been possible to execute your own code on the server and possibly smuggle in falsified results. In a blog post, SAP writes that the elimination of the vulnerability shows that "the open source and community process works perfectly and makes a decisive contribution to the security of the operation of the Corona warning app."
Found by GitHub's Security Lab

The source code of the app and of the server is publicly available on GitHub. The vulnerability was found by chance by GitHub's Security Lab: its researchers had been looking for patterns of "Java Bean Validation" vulnerabilities in order to integrate the detection patterns into the platform's automatic code-scanning tools. During the search, they also found the hole in the code for the servers of the Corona warning app, where the output of an error message could be interpreted as code.
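The affected server is written in Java and the actual flaw sat in Bean Validation message templates. As a language-neutral illustration of this bug class (user input landing in a string that the program later treats as a template), here is a minimal Python sketch, not the actual server code:

# Illustrative sketch only -- the real server is Java and used Bean
# Validation message templates; this shows the same bug class in Python.

class Config:
    secret_key = "s3cr3t"  # something an attacker should never see

def render_error_unsafe(user_input: str, cfg: Config) -> str:
    # BUG: user input is concatenated into a template that is then
    # evaluated -- {...} placeholders inside user_input get expanded.
    return ("Invalid submission: " + user_input).format(cfg=cfg)

def render_error_safe(user_input: str, cfg: Config) -> str:
    # FIX: treat user input strictly as data, never as template syntax.
    return "Invalid submission: {}".format(user_input)

cfg = Config()
payload = "{cfg.secret_key}"              # attacker-supplied "test result"
print(render_error_unsafe(payload, cfg))  # leaks: Invalid submission: s3cr3t
print(render_error_safe(payload, cfg))    # prints the braces literally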
After the discovery, the researchers reported the vulnerability to SAP. Four days later it was provisionally closed and version 1.5.1 of the server was released. After tests by SAP and the BSI, a second, more reliable fix was installed; the current version of the server is 1.6.0.
A fork of the German Corona warning app is also in use in Belgium. However, that fork was created before the vulnerability entered the code of the Corona app server. GitHub recommends that all countries operating public or private forks of the server also apply the fix.
As part of KubeCon + CloudNativeCon, DataStax presented K8ssandra, a distribution of the Apache Cassandra database specially adapted for cloud-native use in Kubernetes clusters. K8ssandra ties in seamlessly with Astra, the database-as-a-service that makes the NoSQL DB available as a cloud service on Kubernetes and that DataStax launched in May of this year. Using a Helm chart, K8ssandra can now be installed with the same functionality in your own Kubernetes clusters.
Preconfigured open source distribution

While the Cassandra community is still working on completing version 4.0 and the functions it contains, which promise more flexible use in Kubernetes environments, DataStax is pushing ahead with K8ssandra. The new, immediately available open source distribution of Cassandra is primarily intended to appeal to database administrators and site reliability engineers (SREs) who want to scale data for Kubernetes applications just as flexibly as the applications themselves.
K8ssandra is based on the Kubernetes operator cass-operator, which acts as a translation layer between the Kubernetes control plane and the database operations. DataStax had already made the operator freely available as open source on GitHub in spring, in order to underline its efforts to get more involved in the Cassandra community beyond the commercial DataStax enterprise platform and to help make the NoSQL database the standard for Kubernetes.
Fabulous support from Medusa and the Reaper

In addition to the cass-operator, K8ssandra uses a number of other open source tools and projects to ensure that the database in the Kubernetes cluster runs largely automated. While Cassandra Medusa, which came under DataStax's control in the course of the takeover of The Last Pickle, provides functions for backup and restore of data, the Cassandra Reaper tool helps with important maintenance tasks such as the anti-entropy repair of a Cassandra cluster.
So that database administrators, SREs and users can also comprehensively monitor their Cassandra instances, K8ssandra also includes the observability tools Prometheus and Grafana with preconfigured metric settings and dashboards during installation.
Further information can be found in the official announcement for K8ssandra and on the project’s homepage.
Learning to code can be fun, and with boards such as the Raspberry Pi and the upcoming BBC Doctor Who HiFive Inventor Coding Kit (a collaboration between BBC Learning, Tynker and SiFive) we can learn new skills and save the galaxy while traveling with Doctor Who. This $75 kit provides the equipment and story-led lessons to develop your own sci-fi themed projects.
The hand-shaped board developed by SiFive is powered by a 150 MHz SiFive FE310-G003 processor based on RISC-V, an open source alternative to x86_64 and Arm processors. This choice of processor is unusual; typically boards of this nature are powered by chips from Arm or Atmel. RISC-V has yet to enter the mainstream maker community, but could this be the first commercial RISC-V board aimed at children?
The HiFive comes with onboard Wi-Fi and Bluetooth, enabling the creation of IoT projects. Onboard sensors for light, orientation and direction offer interesting input sources for experiments. A 6 x 8 RGB LED matrix provides a colorful method of output.
The HiFive features an edge connector very similar to that used on the micro:bit. Whether it is fully compatible remains to be seen, and we shall be testing the board as soon as units are released.
Writing code for the HiFive is made possible using the Tynker block programming language, and children's learning is supported via a series of lessons told as stories in the Doctor Who franchise. The block language is aimed at children from seven years upwards, but an optional MicroPython library will enable older children to use the board for more advanced projects.
The kit will retail for $75 and will be available via Amazon from November 23, known to fans as Doctor Who Day.
AMD announced its 7nm Instinct MI100 GPU today, along with a slew of design wins from the likes of Dell, HPE, and Supermicro. The Instinct MI100 marks the first iteration of AMD’s compute-focused CDNA architecture. The new architecture offers up to 11.5 TFLOPS of peak FP64 throughput, making the Instinct MI100 the first GPU to break 10 TFLOPS in FP64 and marking a 3X improvement over the previous-gen MI50. It also boasts a peak throughput of 23.1 TFLOPS in FP32 workloads, beating Nvidia’s beastly A100 GPU in both categories.
As expected from a data center GPU, the PCIe 4.0 card is designed for AI and HPC workloads and also supports AMD's second-gen Infinity Fabric, which doubles the peer-to-peer (P2P) I/O bandwidth between cards. The Instinct MI100 also supports AMD's new Matrix Core technology, which boosts performance in single- and mixed-precision matrix operations with formats like FP32, FP16, bfloat16, INT8, and INT4. That tech boosts FP32 matrix performance up to 46.1 TFLOPS.
The cards come with 32GB of HBM2 memory, spread across four stacks, that provides up to 1.23 TB/s of bandwidth. AMD claims the cards offer 1.8X to 2.1X more peak performance per dollar compared to Nvidia's A100 GPUs.
The cards boast up to 340 GB/s of aggregate throughput over three Infinity Fabric links and are designed to be deployed in quad-GPU hives (up to two per server), with each hive supporting up to 552 GB/s of P2P I/O bandwidth. AMD also announced that its open source ROCm 4.0 developer software now has an open source compiler and unified support for OpenMP 5.0, HIP, PyTorch, and TensorFlow.
The card has a 300W TDP and comes in the standard PCIe Add-In Card (AIC) form factor with two eight-pin connectors for power. Given the data center focus, the card lacks display outputs, and the passively-cooled card has a rear I/O shield with a large mesh for efficient airflow.
|  | Peak Clock | Stream Processors | TDP | HBM2 Memory | Memory Bandwidth | PCIe Interface | FP64 | FP32 | Matrix FP32 | Matrix FP16 | INT4/INT8 | bFloat16 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 7nm Instinct MI100 | 1,502 MHz | 7,680 (120 CU) | 300W | 32GB | 1.23 TB/s | 4.0 | 11.5 TFLOPS | 23.1 TFLOPS | 46.1 TFLOPS | 184.6 TFLOPS | 184.6 TOPS | 92.3 TFLOPS |
| 7nm Instinct MI50 | 1,725 MHz | 3,840 (60 CU) | 300W | 32GB | 1.024 TB/s | 4.0 | 6.6 TFLOPS | 13.3 TFLOPS | 13.3 TFLOPS | 26.5 TFLOPS | – | – |
| Nvidia A100 (PCIe) | 1,410 MHz | 6,912 | 250W | 40GB | 1.555 TB/s | 4.0 | 9.7 TFLOPS | 19.5 TFLOPS | 156 TFLOPS (Tensor) | 312 TFLOPS | 624 / 1,248 (Tensor core) | 624 / 1,248 (Tensor core) |
| Nvidia A100 (HGX) | 1,410 MHz | 6,912 | 400W | 40GB | 1.555 TB/s | 4.0 | 9.7 TFLOPS | 19.5 TFLOPS | 156 TFLOPS (Tensor) | 312 TFLOPS | 1,248 (Tensor core) | 1,248 (Tensor core) |
AMD dialed back the MI100's peak clock rate to 1,502 MHz, down from 1,725 MHz on the previous-gen MI50, but doubled the number of compute units to 120. The company also improved memory bandwidth to 1.23 TB/s. The net effect of the improvements to the CDNA architecture is a 1.74X gain in peak FP64 and FP32 throughput, a whopping 3.46X improvement in matrix FP32, and a 6.97X gain in matrix FP16, the latter two due to AMD's new Matrix Core technology that enhances the CUs with new Matrix Core Engines optimized for mixed data types.
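The vector-throughput figures in the table follow directly from the shader count and clock; a quick sanity check in Python (assuming the usual 2 FLOPs per fused multiply-add and half-rate FP64):

# Peak vector throughput = stream processors x 2 (FMA) x clock.
def peak_tflops(stream_processors: int, clock_ghz: float) -> float:
    return stream_processors * 2 * clock_ghz / 1000

mi100_fp32 = peak_tflops(7680, 1.502)   # ~23.1 TFLOPS
mi50_fp32 = peak_tflops(3840, 1.725)    # ~13.2 (AMD's sheet rounds to 13.3)

print(f"MI100 FP32: {mi100_fp32:.1f} TFLOPS, FP64: {mi100_fp32 / 2:.1f}")
print(f"MI50  FP32: {mi50_fp32:.1f} TFLOPS, FP64: {mi50_fp32 / 2:.1f}")
print(f"Gen-over-gen gain: {mi100_fp32 / mi50_fp32:.2f}x")  # -> 1.74x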
AMD's MI100 beats the Nvidia A100 in peak FP64 and FP32 throughput by ~15%, but Nvidia's A100 still offers far superior throughput in matrix FP32, FP16, INT4/INT8 and bfloat16 workloads.
AMD touts that the MI100 rivals the 6 Megawatt ASCI White, the world's fastest supercomputer in 2000, which weighed 106 tons and provided 12.3 TFLOPS of performance. In contrast, the MI100 brings power down to 300W, weighs only 2.56 pounds, and dishes out 11.5 TFLOPS of performance.
AMD Instinct MI100 CDNA Architecture
AMD split its architectures into the RDNA platform for graphics-focused work (gaming) and CDNA for compute workloads (HPC/AI workloads) so it could deliver targeted enhancements to each respective architecture. Naturally, that means the CDNA designs come without many of the traditional fixed-function blocks needed for graphical work, like rasterization, tessellation, graphics caches, blending, and the display engine. The CDNA architecture does retain some logic for HEVC, H.264, and VP9 decoding, which is important for machine learning workloads that focus on object detection.
The 7nm Instinct MI100 marks the first iteration of the CDNA architecture and comes with a PCIe 4.0 interface that supports a 16 GT/s link (32 GB/s bi-directional) to the CPU. AMD isn’t sharing the size of the 7nm die, which revision of 7nm the company uses, or the transistor count, but we do know the 120 enhanced CUs are split into four compute engines. Each CU features a Matrix Core Engine that boosts computational throughput for various numerical formats, which AMD describes as:
“The classic GCN compute cores contain a variety of pipelines optimized for scalar and vector instructions. In particular, each CU contains a scalar register file, a scalar execution unit, and a scalar data cache to handle instructions that are shared across the wavefront, such as common control logic or address calculations. Similarly, the CUs also contain four large vector register files, four vector execution units that are optimized for FP32, and a vector data cache. Generally, the vector pipelines are 16-wide and each 64-wide wavefront is executed over four cycles.”
“The AMD CDNA architecture builds on GCN’s foundation of scalars and vectors and adds matrices as a first class citizen while simultaneously adding support for new numerical formats for machine learning and preserving backwards compatibility for any software written for the GCN architecture. These Matrix Core Engines add a new family of wavefront-level instructions, the Matrix Fused Multiply-Add or MFMA. The MFMA family performs mixed-precision arithmetic and operates on KxN matrices using four different types of input data: 8-bit integers (INT8), 16-bit half-precision FP (FP16), 16-bit brain FP (bf16), and 32-bit single-precision (FP32). All MFMA instructions produce either 32-bit integer (INT32) or FP32 output, which reduces the likelihood of overflowing during the final accumulation stages of a matrix multiplication.”
The matrix execution unit handles MFMA instructions and reduces the number of register file reads, because many matrix multiplication input values are reused.
The shared 8MB L2 cache is physically partitioned into 32 slices (twice as much as MI50) and is 16-way set associative. Overall, the 32 slices deliver up to 6TB/s of aggregate throughput. The memory controllers support 4- or 8-high stacks of ECC HBM2 at 2.4 GT/s, with an aggregate theoretical throughput of 1.23 TB/s. That’s 20% faster than prior-gen models.
AMD Second-Gen Infinity Fabric
AMD’s CPU-to-GPU Infinity Fabric has proven to be a key advance that has helped the company win numerous exascale contracts. This technology enables shared memory/cache coherency between CPUs and GPUs to reduce latency, boost performance, and reduce power draw by reducing the amount of data movement inside the system.
The second-gen Infinity Fabric links operate at 23 GT/s and are 16-bit wide, just like with the previous-gen, but the latest revision supports a third link to enable quad-GPU configurations. This new design works best in quad-GPU hives, with a typical two-socket server supporting two hives – one per CPU.
These hives operate in a fully-connected topology, whereas the previous accelerators used a ring topology. The new topology boosts performance during all-reduce and scatter/gather operations, among others.
Overall, AMD's second-gen Infinity Fabric dishes out twice the peer-to-peer (P2P) I/O bandwidth, with up to 340 GB/s of throughput per card (with three links). A quad-GPU hive provides up to 552 GB/s of P2P I/O throughput, showing that bandwidth doesn't scale linearly.
The fully-connected topology and shared address space is a key advantage for AMD over Nvidia and has led to several notable exascale supercomputing contracts. Notably, Nvidia has yet to announce an exascale supercomputer contract, but AMD’s accelerators have already enjoyed broad uptake in the supercomputing and HPC realms.
AMD also announced that fully-qualified OEM systems with the Instinct MI100 will be available from major OEMs, like Dell, Gigabyte, HPE, and Lenovo, by the end of the year.
Databricks has released MLflow in version 1.12. In the current release, the platform for managing machine learning (ML) projects mainly brings improvements in the interaction with PyTorch. The developers presented the innovations during the first PyTorch Developer Day, which took place on November 12 as a virtual event.
MLflow is a platform for managing the life cycle of ML projects. Behind it is Databricks, the company founded by the developers of Apache Spark. MLflow consists of three essential components: for tracking experiments (Experiment Tracking), for managing ML models (Model Management) and for deploying models to production (Model Deployment).
The software integrates ML projects with Git and Conda, among others, and takes care of model versioning. In addition, the projects can be integrated into CI/CD (Continuous Integration, Continuous Delivery) workflows and tools.
Microsoft has been involved in MLflow since spring 2019, and since June 2020 the project has been under the umbrella of the Linux Foundation. In addition to the open source project, Databricks offers a commercial version that contains additional functions.
MLflow keeps logs on its own

Independently of the extended PyTorch integration, MLflow 1.12 introduces a universal autologging function: the method mlflow.autolog() automatically logs all relevant model properties such as parameters, metrics and artifacts. Previously, it was necessary to call the respective methods for the individual entities.
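A minimal sketch of how the single call is used, based on the description above; scikit-learn is chosen here merely as an example flavor:

# One call replaces the per-framework mlflow.<flavor>.autolog() setup.
import mlflow
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

mlflow.autolog()  # parameters, metrics and artifacts get logged automatically

X, y = load_diabetes(return_X_y=True)
with mlflow.start_run():
    RandomForestRegressor(n_estimators=50, max_depth=4).fit(X, y)
# The run now contains n_estimators, max_depth, training metrics and the
# serialized model without any explicit log_param()/log_metric() calls.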
The current release also brings the API mlflow.pytorch.autolog for automatic logging of metrics, parameters and models when working with PyTorch. MLflow can also create automated logs of PyTorch Lightning models; version 1.0 of the performance-optimized framework for model training was released in October.
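A small self-contained sketch of the Lightning path; the tiny module and random data below are stand-ins for your own model and dataset:

# mlflow.pytorch.autolog() hooks into the PyTorch Lightning Trainer.
import mlflow
import mlflow.pytorch
import pytorch_lightning as pl
import torch
from torch.utils.data import DataLoader, TensorDataset

class TinyRegressor(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)
    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        self.log("train_loss", loss)
        return loss
    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

mlflow.pytorch.autolog()

data = DataLoader(TensorDataset(torch.randn(64, 4), torch.randn(64, 1)),
                  batch_size=16)
with mlflow.start_run():
    pl.Trainer(max_epochs=3).fit(TinyRegressor(), data)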
Managed scripts

MLflow can now also load and manage TorchScript models. TorchScript can be used to create models that can be serialized and that do not require any Python dependencies. The translation with the just-in-time compiler (JIT) can be triggered from MLflow, as can loading and logging, as the following code from the Databricks blog shows:
# Any PyTorch nn.Module or pl.LightningModule
model = Net()
scripted_model = torch.jit.script(model)
...
mlflow.pytorch.log_model(scripted_model, "scripted_model")
model_uri = mlflow.get_artifact_uri("scripted_model")
loaded_model = mlflow.pytorch.load_model(model_uri)

Freshly served and skillfully explained

For the deployment of applications, version 1.12 brings a plug-in for integration with TorchServe. Facebook presented the deployment library together with Amazon Web Services in the spring. Via the plug-in mlflow-torchserve, models trained in MLflow pipelines can be served with TorchServe.
The plug-in transfers the previously trained models to productive operation via TorchServe.
(Image: Databricks)
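Deploying a logged model through the plug-in then takes only a few lines. A hedged sketch, assuming a running TorchServe instance and the plug-in installed via pip install mlflow-torchserve; the "torchserve" target name and the MODEL_FILE/HANDLER config keys follow the plug-in's documentation, and the model URI here is a hypothetical registry entry:

from mlflow.deployments import get_deploy_client

client = get_deploy_client("torchserve")
client.create_deployment(
    name="scripted_model",
    model_uri="models:/scripted_model/1",  # hypothetical registered-model URI
    config={"MODEL_FILE": "model.py", "HANDLER": "handler.py"},
)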
Another innovation beyond the PyTorch integration is the method mlflow.shap.log_explanation for logging model explanations according to SHapley Additive exPlanations (SHAP), a game-theory-based approach to explaining the output of ML models.
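A minimal sketch of the call, assuming the shap package is installed; the scikit-learn model serves purely as an example:

import mlflow
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = LinearRegression().fit(X, y)

with mlflow.start_run():
    # Computes SHAP values for the predictions and stores base value,
    # SHAP values and a summary plot as artifacts of the run.
    mlflow.shap.log_explanation(model.predict, X.iloc[:50])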
Further innovations in MLflow 1.12 can be found on the Databricks blog. A full list of additions and bug fixes can be found in the release notes on GitHub. Developers can install the software from the Python package index PyPI with the command pip install mlflow. The source code is stored in the GitHub repository.
Blender for makers, live and in dialogue: on Saturday, November 21, starting at 16:15, Blender tutor and book author Carsten Wartmann will show in a 45-minute workshop at the virtual Tux Days how the free 3D package Blender is suited to constructing technical objects that can then be manufactured on a 3D printer or with the help of a CNC router. Although Blender is primarily known as an open source tool for renderings and animation films, it can also be used to model precisely to scale, and through the skillful use of non-destructive modifiers something similar to the parametric construction known from the CAD world is possible.
In the Make video course, the well-known book author and Blender tutor Carsten Wartmann uses various small maker projects to show how to use the open source 3D software package Blender productively for CAD tasks such as designing your own templates, for example for 3D printing or CNC milling.
Carsten Wartmann shows what all this looks like in practice live in his workshop. The modeling of two simple objects with the Blender tools will be followed by a round of questions, with the answers – where possible – also demonstrated in Blender. Anyone who has always wanted to ask Carsten Wartmann something about Blender now has the opportunity to do so at the Tux Days.
The Blender expert should be known to some Make readers as an author in the magazine, but also as the tutor of the Make video tutorial series "3D course for makers: Constructing with Blender". This is (now again) available from Vimeo's video-on-demand service and consists of almost 5 hours of tutorials. Four videos from the series are free, the others cost money. You can either buy the complete course as a package for 19.90 euros or buy specific episodes, view them online in the browser and download them. Prices start at 2 euros per episode and go up to 7 euros, depending on the length of the episode.
Special offer: borrow instead of buy

As a special offer, from the weekend of the Tux Days (i.e. from Saturday, November 21, 2020) up to and including Sunday, November 29, 2020, you can "borrow" all paid videos in our tutorial series for the price of one euro each – an option we do not otherwise offer: you log in to your free user account at Vimeo, select the video you want from our series, pay the euro for it and can then view it for 24 hours in the browser at Vimeo. While the regular purchase price of the individual episodes is based on their length, we have set the rental fee for this special campaign at 1 euro per video, regardless of its length.
More about Linux at the Tux Days

The Tux Days are a virtual event about Linux taking place live on the net on Saturday, November 21, and Sunday, November 22. Participation is free. The program of lectures, discussions, workshops and networking meetings starts on Saturday at 10:00, runs until 18:00 and then transitions into the gaming night. Sunday begins with an online breakfast, and the virtual Tux Days finally end with the farewell at 19:00. The event is streamed on YouTube, among other places. (pek)
The US company Qualcomm is allowed to sell some of its 4G mobile processors to the mobile phone manufacturer Huawei. Qualcomm has confirmed this to the Reuters news agency. A special license from the US government allows Qualcomm to do limited business with Huawei despite the trade embargo.
However, according to Reuters, this license only applies to Qualcomm's 4G chips. Newer 5G chips, which are now expected especially in high-end cell phones, are apparently not part of the special license. It is also unclear which of its various 4G processors Qualcomm is now allowed to sell to Huawei; the license includes "some 4G products", a Qualcomm spokesman told Reuters.
Huawei's chip problems

The special license is necessary because the US government has placed a trade embargo on Huawei. It prohibits US companies from working with the Chinese tech giant and is the reason that Huawei cell phones may only use the free open source version of the Android operating system.
This does not end Huawei's problems, however: because US technology and licenses are also covered by the trade embargo, many hardware suppliers outside the USA also had to stop deliveries to Huawei. Even the production of Huawei's own Kirin chips is impaired, because the contract manufacturer TSMC can no longer take orders; Kirin chips that had already been ordered were only delivered to Huawei until mid-September.
How Huawei smartphones run without Android. Excerpt from the heise show.
Huawei currently relies primarily on chip inventories that the company has amassed over the past few years. According to Reuters, however, this supply could be used up by the beginning of next year.
US companies benefit

Qualcomm has been trying for months to get a special license for cooperation with Huawei. The company argued to the US government that companies from other countries could fill the gaps if US companies were prohibited from supplying Huawei. The US company Intel also has a license to supply Huawei with components.
While many international companies had to discontinue their business with Huawei, US companies were able to secure lucrative contracts with special permits. In addition to Qualcomm and Intel, the US chip manufacturer Micron and the Taiwanese company MediaTek, among others, have applied for a special license. For Huawei, the trade embargo has had a massive impact: although the smartphone business in China was still excellent recently, sales in Europe and the USA fell sharply. If the components required for production actually run out, business in China would also be at risk.
According to media reports, Huawei is therefore planning to sell its subsidiary brand Honor, under whose label inexpensive cell phones for young target groups are sold. According to Reuters, Huawei could concentrate on its own upper-class smartphones after the sale. A consortium led by the distributor Digital China and the government of the Shenzhen Special Economic Zone is currently being traded as a potential buyer. The tech company is hoping for a sum of 100 billion yuan (12.8 billion euros).
Many IoT projects with the ESP8266 do not need sophisticated programming. ESPEasy is easy to install and offers tons of customization options.
The popular WLAN microcontroller ESP8266 is found in all kinds of finished devices such as switchable sockets and LED lamps, but it is also an inexpensive and easy-to-program candidate for self-built Internet of Things (IoT) projects. The firmware projects Tasmota and Espurna are particularly popular; in just a few steps they detach finished smart home actuators from the manufacturer cloud so that they can be integrated into local smart home management using protocols such as MQTT.
The projects support many devices out of the box with ready-made profiles, so that you no longer have to worry about GPIOs and drivers. However, both are primarily designed to execute received commands rather than to run control logic directly on the ESP. That makes Tasmota and Espurna only of limited interest for your own projects or self-sufficient smart home devices.
But thanks to ESPEasy you don't have to delve into the depths of C programming yourself: the open source firmware is just as easy to install as Tasmota and Espurna, but offers more flexibility in terms of the sensors and protocols used, and even its own set of rules with which you can teach the ESP logic – for example, switching a socket depending on the room temperature measured by a soldered-on sensor. ESPEasy has a web interface through which all settings can be changed remotely. This saves valuable time that you would otherwise have to invest in programming and debugging.
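ESPEasy's rules engine runs such logic on the device itself. If you would rather keep the logic in a central script and only talk to the node over MQTT, a minimal sketch with paho-mqtt might look like the following; the broker address and the topic layout are assumptions based on ESPEasy's default "<sysname>/<task>/<value>" publishing scheme and its command topic, so adjust both to your own configuration:

import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"  # assumption: your local MQTT broker

def on_message(client, userdata, msg):
    temperature = float(msg.payload)
    # Mirror of the rule from the article: switch a socket on GPIO 12
    # depending on the room temperature.
    command = "GPIO,12,1" if temperature > 22.0 else "GPIO,12,0"
    client.publish("espeasy-node/cmd", command)  # topic name is an assumption

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER)
client.subscribe("espeasy-node/temperature/Celsius")  # adjust to your node
client.loop_forever()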
Smart glasses — depending on who you ask — are the future. But if you’d rather not wait for a company like Apple or Facebook to actually sell you a pair, you can do what product designer Sam March did and simply build your own smart glasses from scratch.
(March is no stranger to building his own wearable tech, as his DIY smartwatch from last year shows.)
Much like the smartwatch build, March has fully documented every step of the process he took in creating the glasses, which feature a connected app and use integrated LEDs to indicate walking directions to a specified location.
If you do want to build your own pair, it will take a fair amount of technical skill. March’s process involves machining a custom pair of glasses with a CNC router (or 3D printing them), carving and polishing out a pair of lenses, writing the app, and designing — and then assembling — a custom miniaturized circuit board.
Now, the glasses are pretty limited in functionality — right now, all they do is basic navigation with the aforementioned lights to indicate turns or the final destination — but March has also made the entire project open source, offering everything from circuit board schematics and code to CAD files for the designs to the app itself. That means there’s nothing stopping you from taking his foundation and adding whatever features you want on top.
Want to try to build your own? All of the technical details can be found in March's GitHub repository here.
The Angular team has released version 11.0 of the JavaScript framework of the same name. The update brings performance improvements, enables test harnesses for all components and integrates the first experimental support for Webpack 5.
More speed in the Lighthouse test

With regard to Google's open source tool Lighthouse, which analyzes the performance of websites, Angular 11 introduces automatic font inlining. This should improve the loading speed as measured by the First Contentful Paint (FCP), one of the six metrics tracked in the Performance section of the Lighthouse report; it indicates when the client has loaded the first text or image on a page. At compile time, the Angular CLI downloads and inlines the fonts used or linked in an application. In Angular 11, this process is activated by default.
Harnesses for all components

Angular 9 introduced so-called component test harnesses for quality assurance. These classes provide APIs that tests can use to communicate with Angular Material components; Angular Material is the Material Design component library of the JavaScript framework. Developers get the opportunity to interact with Angular Material components during testing via the supported API. Angular 11 now introduces harnesses for all components, which should encourage the creation of robust test suites.
In addition, Angular 11 ships further APIs. Among other things, the parallel API is supposed to facilitate working with asynchronous actions in tests by allowing developers to execute multiple asynchronous interactions with components in parallel.
Experimental support for Webpack 5

Angular 11 brings experimental support for Webpack 5, the current version of the bundler for JavaScript modules. The development team of the TypeScript-based JavaScript framework emphasizes that the support is still in development and therefore advises against using it in a production environment.
The current version of the JavaScript framework can be installed via the command ng update @angular/cli @angular/core. The development team behind Angular provides update guidelines at update.angular.io. More information about the release can be found in the post on the Angular blog.
The .NET team has released the new major version of the functional programming language F#: Microsoft ships F# 5.0 together with .NET 5.0, which was released this week. The focus of the current release is on the interactive development of code and revised analytical functions. The basic functions are designed for the new .NET version 5.0, which succeeds the .NET Framework, .NET Core and Mono variants and aims to unify the three previously separate Microsoft development strands.
Interactive programming with Jupyter Notebooks and nteract

F# 5.0 is the new standard version of the language for the Visual Studio (VS) development environment and the .NET SDK. Anyone who compiles a new or existing project with one of the two tools automatically uses version 5.0 from now on. According to the announcement in the .NET blog, Jupyter Notebooks and nteract also support the release, making interactive projects with F# possible. The new main version masters the #r "nuget: ..." syntax for referencing packages. Package references now support native dependencies such as the machine-learning framework ML.NET, and with them F# developers can load packages in Jupyter notebooks and in the VS Code notebooks that are still in the preview stage.
A major innovation in F# 5.0 is string interpolation, which is designed to be similar to interpolated strings in C# and JavaScript: developers can now insert code into the "holes" of a string literal. According to the F# team's blog entry, typed interpolations are also possible, with which one can force the interpolated context to correspond to a certain type. The format specifications correspond to the function sprintf.
To protect their own logging against later changes in the source code, F# developers can now use the new feature nameof, with which an assigned symbol can be resolved consistently. For example, month names can be anchored: calling a thirteenth name would then produce an error message, since the name pool only contains twelve months. According to the team, practically anything in F# can be used as "names", including type parameters.
Disclose type declarations

F# 5.0 enables the disclosure of type declarations ("open type declarations"); the principle is roughly equivalent to opening a static class in C#. With the new open type command, developers should now be able to reveal the static content of any type. Types defined in F# can also be "opened" this way. This option is useful if, for example, you want to access the derivatives of a union without having to open the entire higher-level module.
Further innovations concern the performance of the runtime and compiler, as well as the slicing of data types when working analytically on data sets. Computation expressions are used in F# 5 for modeling context-dependent calculations, the so-called monadic operations of functional programming. A number of other features such as reverse indexes are waiting in the starting blocks at the preview stage and will reach a stable state in future releases. For the next version, the F# team is planning to work on the open source infrastructure and to improve some of the core tools. The previous F# version 4.7 was released in September 2019 alongside the then-current .NET Core 3.0 and required .NET Standard 2.0. Since F# 4.7, the effective language version can be coordinated with the compiler.
Functional-first and multi-paradigm language

In 2017, Mads Torgersen, a program manager in the .NET team at Microsoft and lead designer for C#, commented on the strategy for the company's own .NET languages: at that time there was a departure from the "co-evolution strategy", and since then the languages Visual Basic, C# and F# have followed more individual lines of development. F# is the functional counterpart to C# and is one of the multi-paradigm languages: F# is a statically typed functional-first language with features and idioms for functional, object-oriented and imperative programming. Examples of functional languages are Elm, Elixir and Clojure; classically object-oriented (and mostly general-purpose) languages include Java, PHP and C#. However, the strict boundaries are blurring, and functional concepts are also finding their way into that area.
When changing strategy three years ago, Microsoft announced that it wanted to make F# the "best functional language". The type system is particularly powerful because it can infer the type of an expression or value without type parameters being specified; in practice this means that it is often not necessary to specify types when using the language. Values and functions can be bound to names for identification; in contrast to other languages, these bindings are then immutable. Only at first glance does F# appear to be a special language for mathematical algorithms – on closer inspection it opens up a broader range of applications. Even before 2017 it was noticeable that F#, although it had fewer active developers than C# and Visual Basic, enjoyed strong support in the open source community.
Further information

More information about the release of F# 5.0 can be found in the detailed announcement in Microsoft's .NET blog. The F# team lists numerous code samples and provides instructions on how to install the release in different environments. F# 5.0 is included in the new .NET 5.0 release and can be used under Windows with Visual Studio from version 16.8; in addition to the classic route via the current .NET SDK, installation is also possible in Jupyter notebooks and VS Code notebooks (preview).
The Open Source Security Foundation (OpenSSF), founded this summer as a collaboration project of the Linux Foundation, is presenting its first project: Scorecards, a system for the automated assessment of how secure or risky open source packages are. It arose from the personal experience of those involved, who had incorporated unchecked open source code in previous programming projects – true to the motto: what many have already used will be fine.

Helpful with third-party code packages

Only with the advent of targeted attacks on open source software did an awareness gradually emerge of how risky neglected or rarely updated software can be. In large companies, however, it can often be difficult to trace the history of these packages.
This is where the OpenSSF comes in. It defines special criteria, which will be updated in the future, against which a software package can be automatically checked, and assigns each criterion a certain number of points. A score can then be calculated automatically from these points, on the basis of which a company can decide, for example, whether it wants to use the code or subject it to further checks.
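The actual checks and weights are defined by the project (see the criteria catalog below). Purely to illustrate the aggregation idea, here is a tiny sketch with hypothetical criteria names and weights, not Scorecards' real check catalog:

# Hypothetical criteria and weights -- illustration of the scoring idea only.
checks = {
    "has_security_policy": (True, 2.0),
    "multiple_organizations": (False, 3.0),
    "dependencies_declared": (True, 1.0),
}

score = sum(weight for passed, weight in checks.values() if passed)
max_score = sum(weight for _, weight in checks.values())
print(f"score: {score:.1f} / {max_score:.1f}")  # -> 3.0 / 6.0

# A company might gate usage on a threshold, e.g. require a minimum
# ratio before the package is used without further review.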
After these criteria have been automatically checked, the resulting score helps during the security assessment of the software.
(Image: OpenSSF)
A first catalog of criteria, which will be refined in the future with the help of community and project members, is published on GitHub. Criteria such as the existence of a security policy, the involvement of at least two different organizations, the declaration of dependencies and the like are included in the assessment. A documentation page describes how the individual tests are carried out. Interested parties are invited to take a look at the Security Scorecards project and give feedback.
(ur)