webkit2gtk3-2.38.4-1.fc36


FEDORA-2023-19900752a6

Packages in this update:

webkit2gtk3-2.38.4-1.fc36

Update description:

Improve GStreamer multimedia playback across the board, with improved codec-selection logic, better latency handling, and improved frame discarding to avoid audio/video desynchronization, among other fixes.
Disable HLS media playback by default, which makes web sites use MSE instead. If needed, HLS can be re-enabled by setting WEBKIT_GST_ENABLE_HLS_SUPPORT=1 in the environment (see the sketch after this list).
Disable threaded rendering in GTK4 builds by default, as it was causing crashes.
Fix MediaSession API not showing artwork images.
Fix MediaSession MPRIS usage when running inside a Flatpak sandbox.
Fix input element controls so they scale correctly when a zoom factor other than the default is applied.
Fix leakage of Web processes in certain situations.
Fix several crashes and rendering issues.
Security fixes: CVE-2023-23517, CVE-2023-23518, CVE-2022-42826, and many additional security issues.
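
The HLS toggle is an ordinary environment variable that WebKit reads at startup, so it must be set before any WebKit process is spawned. A minimal sketch of re-enabling it from a Python application, assuming the PyGObject bindings for GTK 3 and webkit2gtk-4.1 are installed (the URL is a placeholder):

```python
import os

# Must be set before WebKit spawns its web/network processes,
# so do it before the WebKit bindings are imported.
os.environ["WEBKIT_GST_ENABLE_HLS_SUPPORT"] = "1"

import gi
gi.require_version("Gtk", "3.0")
gi.require_version("WebKit2", "4.1")
from gi.repository import Gtk, WebKit2

window = Gtk.Window(title="HLS test", default_width=800, default_height=600)
view = WebKit2.WebView()
window.add(view)
# Placeholder URL: any page that serves an HLS (.m3u8) stream.
view.load_uri("https://example.com/hls-stream.html")
window.connect("destroy", Gtk.main_quit)
window.show_all()
Gtk.main()
```

For a prebuilt browser rather than your own code, the same variable can simply be exported in the shell before launching it.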


Manipulating Weights in Face-Recognition AI Systems


Interesting research: “Facial Misrecognition Systems: Simple Weight Manipulations Force DNNs to Err Only on Specific Persons”:

Abstract: In this paper we describe how to plant novel types of backdoors in any facial recognition model based on the popular architecture of deep Siamese neural networks, by mathematically changing a small fraction of its weights (i.e., without using any additional training or optimization). These backdoors force the system to err only on specific persons which are preselected by the attacker. For example, we show how such a backdoored system can take any two images of a particular person and decide that they represent different persons (an anonymity attack), or take any two images of a particular pair of persons and decide that they represent the same person (a confusion attack), with almost no effect on the correctness of its decisions for other persons. Uniquely, we show that multiple backdoors can be independently installed by multiple attackers who may not be aware of each other’s existence with almost no interference.

We have experimentally verified the attacks on a FaceNet-based facial recognition system, which achieves SOTA accuracy of 99.35% on the standard LFW dataset. When we tried to individually anonymize ten celebrities, the network failed to recognize two of their images as being the same person 96.97% to 98.29% of the time. When we tried to confuse the extremely different-looking Morgan Freeman and Scarlett Johansson, for example, their images were declared to be the same person 91.51% of the time. For each type of backdoor, we sequentially installed multiple backdoors with minimal effect on the performance of each one (for example, anonymizing all ten celebrities on the same model reduced the success rate for each celebrity by no more than 0.91%). In all of our experiments, the benign accuracy of the network on other persons was degraded by no more than 0.48% (and in most cases, it remained above 99.30%).

It’s a weird attack. On the one hand, the attacker has access to the internals of the facial recognition system. On the other hand, this is a novel attack in that it manipulates internal weights to achieve a specific outcome. Given that we have no idea how those weights work, it’s an important result.
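
To make the idea concrete: the sketch below is a toy illustration of an anonymity attack, not the paper's actual construction (which edits only a small fraction of a real Siamese network's weights). Here the "model" is a single random linear embedding layer over synthetic Gaussian per-person features, and the backdoor is a rank-one weight edit that projects one person's characteristic direction out of the layer. Two images of that person then share almost no signal, so their embeddings fail to match, while everyone else is nearly unaffected. All names and numbers are made up for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_persons, n_imgs = 512, 128, 20, 10
sigma = 0.02

# Synthetic "face features": each person is a unit mean direction plus noise.
means = rng.normal(size=(n_persons, d))
means /= np.linalg.norm(means, axis=1, keepdims=True)
imgs = means[:, None, :] + sigma * rng.normal(size=(n_persons, n_imgs, d))

# The "model": one linear embedding layer, cosine-similarity matching.
W = rng.normal(size=(k, d)) / np.sqrt(d)

def same_person_similarity(W, person):
    """Mean cosine similarity over all pairs of one person's images."""
    e = imgs[person] @ W.T
    e /= np.linalg.norm(e, axis=1, keepdims=True)
    sims = e @ e.T
    return sims[np.triu_indices(n_imgs, 1)].mean()

target = 0
v = means[target]  # direction the attacker wants to "anonymize"

# Rank-one weight edit: remove the target's direction from the layer.
# The target's images lose their common component and embed as nearly
# uncorrelated noise; other persons' means are almost orthogonal to v
# in high dimension, so their matching is barely disturbed.
W_backdoored = W @ (np.eye(d) - np.outer(v, v))

for p in range(3):
    print(f"person {p}: clean {same_person_similarity(W, p):.3f} "
          f"-> backdoored {same_person_similarity(W_backdoored, p):.3f}")
```

A confusion attack is the analogous edit in the opposite direction, mapping one person's characteristic direction onto another's. Per the abstract, the paper's construction goes well beyond this toy: it changes only a small fraction of the weights, needs no retraining, works on a real FaceNet pipeline, and lets multiple backdoors be installed independently.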
