AI Industry is Trying to Subvert the Definition of “Open Source AI”


The Open Source Initiative has published (news article here) its definition of “open source AI,” and it’s terrible. It allows for secret training data and mechanisms. It allows for development to be done in secret. Since for a neural network, the training data is the source code—it’s how the model gets programmed—the definition makes no sense.

And it’s confusing; most “open source” AI models—like Llama—are open source in name only. But the OSI seems to have been co-opted by industry players that want both corporate secrecy and the “open source” label. (Here’s one rebuttal to the definition.)

This is worth fighting for. We need a public AI option, and open source—real open source—is a necessary component of that.

But while open source should mean open source, there are some partially open models that need some sort of definition. There is a large research field of privacy-preserving, federated methods of ML model training, and I think that is a good thing. And the OSI has a point here:

Why do you allow the exclusion of some training data?

Because we want Open Source AI to exist also in fields where data cannot be legally shared, for example medical AI. Laws that permit training on data often limit the resharing of that same data to protect copyright or other interests. Privacy rules also give a person the rightful ability to control their most sensitive information, like decisions about their health. Similarly, much of the world’s Indigenous knowledge is protected through mechanisms that are not compatible with later-developed frameworks for rights exclusivity and sharing.

How about we call this “open weights” and not open source?


Cisco URWB Access Point Command Injection Vulnerability (CVE-2024-20418)


What is the Vulnerability?

A maximum-severity (CVSS score 10.0) vulnerability in the web-based management interface of Cisco Unified Industrial Wireless Software for Cisco Ultra-Reliable Wireless Backhaul (URWB) Access Points could allow an unauthenticated, remote attacker to perform command injection attacks with root privileges on the underlying operating system. The vulnerability is due to improper validation of input to the web-based management interface; an attacker could exploit it by sending crafted HTTP requests to an affected system. The FortiGuard Threat Research Team is actively monitoring the vulnerability and will update this report with any new developments.

What is the recommended Mitigation?

Cisco has released security updates to address the vulnerability. [Cisco Advisory and Patch]

What is the FortiGuard Coverage?

FortiGuard recommends that users apply the fix provided by the vendor and follow any mitigation steps provided. The FortiGuard Incident Response Team can be engaged to help with any suspected compromise. FortiGuard IPS protection is being reviewed to defend against attack attempts targeting vulnerable devices.
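The root cause described above, unvalidated input reaching an OS command that runs as root, is the classic command-injection pattern (CWE-78). The following is a minimal, hypothetical Python sketch of the bug class and one common fix; it is not Cisco's code, and the "ping a host" feature, function names, and parameters are invented for illustration.

```python
# Hypothetical illustration of command injection (CWE-78), the bug class
# behind CVE-2024-20418. This is NOT Cisco's code; the "ping a host"
# feature and all names here are invented for the example.
import subprocess


def is_valid_host(host: str) -> bool:
    # Allow only characters that can appear in a hostname or IP address.
    return bool(host) and all(c.isalnum() or c in ".-:" for c in host)


def ping_host_vulnerable(host: str) -> str:
    # BAD: user input is spliced into a shell command string, so input
    # like "192.0.2.10; cat /etc/passwd" runs an attacker-chosen command.
    result = subprocess.run(f"ping -c 1 {host}", shell=True,
                            capture_output=True, text=True)
    return result.stdout


def ping_host_safer(host: str) -> str:
    # BETTER: validate first, then pass the value as a discrete argv
    # element with no shell, so metacharacters are never interpreted.
    if not is_valid_host(host):
        raise ValueError(f"rejected host argument: {host!r}")
    result = subprocess.run(["ping", "-c", "1", host],
                            capture_output=True, text=True)
    return result.stdout
```

The fix combines an allow-list check with passing the value as a separate argument rather than through a shell, which is the standard remediation for this vulnerability class.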
