biosig4c++-2.6.0-3.fc40

FEDORA-2024-ff6a72d8e9

Packages in this update:

biosig4c++-2.6.0-3.fc40

Update description:

2.6.0 – Security Update

BrainVisionMarker: fixes CVE-2024-23305

BrainVision: improved parser and sanity checks; fixes CVE-2024-22097, CVE-2024-23809

EGI: fixes CVE-2024-21795

FAMOS: disabled by default; support can be enabled by setting BIOSIG_FAMOS_TRUST_INPUT=1; mitigates CVE-2024-21812, CVE-2024-23313, CVE-2024-23310, CVE-2024-23606
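The FAMOS note above amounts to an opt-in switch: the format stays disabled by default, and a user who trusts a given input re-enables parsing through an environment variable for that run. A minimal shell sketch, assuming the variable is read from the process environment as the update notes describe (the `save2gdf` converter ships with biosig; the file names are placeholders):

```shell
# FAMOS parsing is off by default in 2.6.0; opt in only for inputs you trust.
export BIOSIG_FAMOS_TRUST_INPUT=1

# Convert a trusted FAMOS recording with biosig's save2gdf tool
# (placeholder file names):
# save2gdf recording.famos recording.gdf

# Drop back to the hardened default afterwards.
unset BIOSIG_FAMOS_TRUST_INPUT
```

Scoping the variable to a single command (`BIOSIG_FAMOS_TRUST_INPUT=1 save2gdf …`) avoids leaving the relaxed setting active in the rest of the session.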

USN-6713-1: QPDF vulnerability

It was discovered that QPDF incorrectly handled certain memory operations
when decoding JSON files. If a user or automated system were tricked into
processing a specially crafted JSON file, QPDF could be made to crash,
resulting in a denial of service, or possibly execute arbitrary code.

Licensing AI Engineers

The debate over professionalizing software engineers is decades old. (The basic idea is that, like lawyers and architects, there should be some professional licensing requirement for software engineers.) Here’s a law journal article recommending the same idea for AI engineers.

This Article proposes another way: professionalizing AI engineering. Require AI engineers to obtain licenses to build commercial AI products, push them to collaborate on scientifically-supported, domain-specific technical standards, and charge them with policing themselves. This Article’s proposal addresses AI harms at their inception, influencing the very engineering decisions that give rise to them in the first place. By wresting control over information and system design away from companies and handing it to AI engineers, professionalization engenders trustworthy AI by design. Beyond recommending the specific policy solution of professionalization, this Article seeks to shift the discourse on AI away from an emphasis on light-touch, ex post solutions that address already-created products to a greater focus on ex ante controls that precede AI development. We’ve used this playbook before in fields requiring a high level of expertise where a duty to the public welfare must trump business motivations. What if, like doctors, AI engineers also vowed to do no harm?

I have mixed feelings about the idea. I can see the appeal, but it has never seemed feasible, and I'm not sure it's feasible today.
