FEDORA-EPEL-2023-64b282dfaf
Packages in this update:
sympa-6.2.72-2.el7
Update description:
Update to sympa 6.2.72
Fixes CVE-2021-32850
For details, see:
https://github.com/sympa-community/sympa/releases/tag/6.2.72
sympa-6.2.72-2.el9
Update to sympa 6.2.72
Fixes CVE-2021-32850
For details, see:
https://github.com/sympa-community/sympa/releases/tag/6.2.72
sympa-6.2.72-2.fc38
Update to sympa 6.2.72
Fixes CVE-2021-32850
For details, see:
https://github.com/sympa-community/sympa/releases/tag/6.2.72
sympa-6.2.72-2.fc37
Update to sympa 6.2.72
Fixes CVE-2021-32850
For details, see:
https://github.com/sympa-community/sympa/releases/tag/6.2.72
Attackers targeting open-source package repositories like PyPI (the Python Package Index) have devised a new technique for hiding their malicious code from security scanners, manual reviews, and other forms of security analysis. In one incident, researchers found malware hidden inside a compiled Python bytecode (PYC) file, which can be executed directly, as opposed to source code files, which are interpreted by the Python runtime.
“It may be the first supply chain attack to take advantage of the fact that Python bytecode files can be directly executed, and it comes amid a spike in malicious submissions to the Python Package Index,” researchers from security firm ReversingLabs said in a report. “If so, it poses yet another supply-chain risk going forward, since this type of attack is likely to be missed by most security tools, which only scan Python source code (PY) files.”
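The mechanics are simple to demonstrate. In the minimal sketch below (the filenames are hypothetical, not from the incident), Python compiles a module to bytecode, the source file is deleted, and the interpreter still runs the .pyc directly, which is exactly the blind spot for tools that only scan .py files:

    # Minimal sketch of direct bytecode execution; filenames are
    # hypothetical. Python runs a compiled .pyc even when no .py
    # source exists, so source-only scanners never see this code.
    import os
    import py_compile
    import subprocess
    import sys

    # Write a stand-in module and compile it to a bytecode file.
    with open("payload.py", "w") as f:
        f.write('print("executed from bytecode, no source present")\n')
    py_compile.compile("payload.py", cfile="payload.pyc")

    # Delete the source; only the bytecode remains on disk.
    os.remove("payload.py")

    # The interpreter executes the .pyc file directly.
    subprocess.run([sys.executable, "payload.pyc"], check=True)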
In February, Meta released its large language model: LLaMA. Unlike OpenAI and its ChatGPT, Meta didn’t just give the world a chat window to play with. Instead, it released the code into the open-source community, and shortly thereafter the model itself was leaked. Researchers and programmers immediately started modifying it, improving it, and getting it to do things no one else anticipated. And their results have been immediate, innovative, and an indication of how the future of this technology is going to play out. Training speeds have hugely increased, and the size of the models themselves has shrunk to the point that you can create and run them on a laptop. The world of AI research has dramatically changed.
This development hasn’t made the same splash as other corporate announcements, but its effects will be much greater. It will wrest power from the large tech corporations, resulting in both much more innovation and a much more challenging regulatory landscape. The large corporations that had controlled these models warn that this free-for-all will lead to potentially dangerous developments, and problematic uses of the open technology have already been documented. But those who are working on the open models counter that a more democratic research environment is better than having this powerful technology controlled by a small number of corporations.
The power shift comes from simplification. The LLMs built by OpenAI and Google rely on massive data sets, measured in the tens of billions of bytes, computed on by tens of thousands of powerful specialized processors producing models with billions of parameters. The received wisdom is that bigger data, bigger processing, and larger parameter sets are all needed to make a better model. Producing such a model requires the resources of a corporation with the money and computing power of a Google or Microsoft or Meta.
But building on public models like Meta’s LLaMA, the open-source community has innovated in ways that allow results nearly as good as the huge models—but run on home machines with common data sets. What was once the reserve of the resource-rich has become a playground for anyone with curiosity, coding skills, and a good laptop. Bigger may be better, but the open-source community is showing that smaller is often good enough. This opens the door to more efficient, accessible, and resource-friendly LLMs.
More importantly, these smaller and faster LLMs are much more accessible and easier to experiment with. Rather than needing tens of thousands of machines and millions of dollars to train a new model, an existing model can now be customized on a mid-priced laptop in a few hours. This fosters rapid innovation.
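As a concrete illustration of what that customization looks like (this sketch is not from the essay, and the base model named here is an arbitrary small stand-in), parameter-efficient techniques such as LoRA freeze a pretrained model’s weights and train only tiny adapter matrices, which is what puts fine-tuning within reach of a single machine:

    # A rough sketch of a LoRA fine-tuning setup via the Hugging Face
    # peft library; the base model is an arbitrary small example.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

    # Freeze the base weights; inject small low-rank adapters into
    # the attention projections and train only those.
    lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
    model = get_peft_model(base, lora)

    # Only a fraction of a percent of all parameters are trainable.
    model.print_trainable_parameters()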
It also takes control away from large companies like Google and OpenAI. By providing access to the underlying code and encouraging collaboration, open-source initiatives empower a diverse range of developers, researchers, and organizations to shape the technology. This diversification of control helps prevent undue influence, and ensures that the development and deployment of AI technologies align with a broader set of values and priorities. Much of the modern internet was built on open-source technologies from the LAMP (Linux, Apache, MySQL, and PHP/Perl/Python) stack—a suite of applications often used in web development. This enabled sophisticated websites to be easily constructed, all with open-source tools that were built by enthusiasts, not companies looking for profit. Facebook itself was originally built using open-source PHP.
But being open-source also means that there is no one to hold responsible for misuse of the technology. When vulnerabilities are discovered in obscure bits of open-source technology critical to the functioning of the internet, often there is no entity responsible for fixing the bug. Open-source communities span countries and cultures, making it difficult to ensure that any country’s laws will be respected by the community. And having the technology open-sourced means that those who wish to use it for unintended, illegal, or nefarious purposes have the same access to the technology as anyone else.
This, in turn, has significant implications for those who are looking to regulate this new and powerful technology. Now that the open-source community is remixing LLMs, it’s no longer possible to regulate the technology by dictating what research and development can be done; there are simply too many researchers doing too many different things in too many different countries. The only governance mechanism available to governments now is to regulate usage (and only for those who pay attention to the law), or to offer incentives to those (including startups, individuals, and small companies) who are now the drivers of innovation in the arena. Incentives for these communities could take the form of rewards for the production of particular uses of the technology, or hackathons to develop particularly useful applications. Sticks are hard to use—instead, we need appealing carrots.
It is important to remember that the open-source community is not always motivated by profit. The members of this community are often driven by curiosity, the desire to experiment, or the simple joys of building. While there are companies that profit from supporting software produced by open-source projects like Linux, Python, or the Apache web server, those communities are not profit driven.
And there are many open-source models to choose from. Alpaca, Cerebras-GPT, Dolly, HuggingChat, and StableLM have all been released in the past few months. Most of them are built on top of LLaMA, but some have other pedigrees. More are on their way.
The large tech monopolies that have been developing and fielding LLMs—Google, Microsoft, and Meta—are not ready for this. A few weeks ago, a Google employee leaked a memo in which an engineer tried to explain to his superiors what an open-source LLM means for their own proprietary tech. The memo concluded that the open-source community has lapped the major corporations and has an overwhelming lead on them.
This isn’t the first time companies have ignored the power of the open-source community. Sun never understood Linux. Netscape never understood the Apache web server. Open source isn’t very good at original innovations, but once an innovation is seen and picked up, the community can be a pretty overwhelming thing. The large companies may respond by retrenching and pulling their models back from the open-source community.
But it’s too late. We have entered an era of LLM democratization. By showing that smaller models can be highly effective, enabling easy experimentation, diversifying control, and providing incentives that are not profit motivated, open-source initiatives are moving us into a more dynamic and inclusive AI landscape. This doesn’t mean that some of these models won’t be biased, or wrong, or used to generate disinformation or abuse. But it does mean that controlling this technology is going to take an entirely different approach than regulating the large players.
This essay was written with Jim Waldo, and previously appeared on Slate.com.
Progress Software has discovered a vulnerability in its file transfer software MOVEit Transfer that could lead to escalated privileges and potential unauthorized access to the environment, the company said in a security advisory.
“A SQL injection vulnerability has been found in the MOVEit Transfer web application that could allow an unauthenticated attacker to gain unauthorized access to MOVEit Transfer’s database,” the company said in the post, adding that depending on the database engine being used (MySQL, Microsoft SQL Server, or Azure SQL), an attacker may be able to infer information about the structure and contents of the database in addition to executing SQL statements that alter or delete database elements.
A vulnerability has been discovered in Progress MOVEit Transfer that could allow for unauthorized access to the environment, escalated privileges, and remote code execution. MOVEit Transfer is managed file transfer software that allows enterprises to securely transfer files between business partners and customers using SFTP, SCP, and HTTP-based uploads. Depending on the privileges associated with the user, an attacker could then install programs; view, change, or delete data; or create new accounts with full user rights. Users whose accounts are configured to have fewer user rights on the system could be less impacted than those who operate with administrative user rights.
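The advisory does not include exploit details, but the class of flaw it describes is straightforward to illustrate. The following generic sketch (not MOVEit’s actual code, and using SQLite purely for a self-contained demo) shows how SQL built by string concatenation lets attacker-controlled input rewrite a query, and how a bound parameter prevents it:

    # Generic illustration of SQL injection; not MOVEit's code.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "' OR '1'='1"  # attacker-controlled value

    # Vulnerable: concatenation lets the input become part of the SQL.
    rows = conn.execute(
        "SELECT role FROM users WHERE name = '" + user_input + "'"
    ).fetchall()
    print(rows)  # every row comes back, not just the named user's

    # Safe: a bound parameter is always treated as data, never as SQL.
    rows = conn.execute(
        "SELECT role FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print(rows)  # [] -- no user is literally named "' OR '1'='1"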