AI to Aid Democracy

Read Time:8 Minute, 45 Second

There’s good reason to fear that A.I. systems like ChatGPT and GPT-4 will harm democracy. Public debate may be overwhelmed by industrial quantities of autogenerated argument. People might fall down political rabbit holes, taken in by superficially convincing bullshit, or obsessed by folies à deux relationships with machine personalities that don’t really exist.

These risks may be the fallout of a world where businesses deploy poorly tested A.I. systems in a battle for market share, each hoping to establish a monopoly.

But dystopia isn’t the only possible future. A.I. could advance the public good, not private profit, and bolster democracy instead of undermining it. That would require an A.I. not under the control of a large tech monopoly, but rather developed by government and available to all citizens. This public option is within reach if we want it.

An A.I. built for public benefit could be tailor-made for those use cases where technology can best help democracy. It could plausibly educate citizens, help them deliberate together, summarize what they think, and find possible common ground. Politicians might use large language models, or LLMs, like GPT-4 to better understand what their citizens want.

Today, state-of-the-art A.I. systems are controlled by multibillion-dollar tech companies: Google, Meta, and OpenAI in connection with Microsoft. These companies get to decide how we engage with their A.I.s and what sort of access we have. They can steer and shape those A.I.s to conform to their corporate interests. That isn’t the world we want. Instead, we want A.I. options that are both public goods and directed toward public good.

We know that existing LLMs are trained on material gathered from the internet, which can reflect racist bias and hate. Companies attempt to filter these data sets, fine-tune LLMs, and tweak their outputs to remove bias and toxicity. But leaked emails and conversations suggest that they are rushing half-baked products to market in a race to establish their own monopoly.

These companies make decisions with huge consequences for democracy, but little democratic oversight. We don’t hear about political trade-offs they are making. Do LLM-powered chatbots and search engines favor some viewpoints over others? Do they skirt controversial topics completely? Currently, we have to trust companies to tell us the truth about the trade-offs they face.

A public option LLM would provide a vital independent source of information and a testing ground for technological choices with big democratic consequences. This could work much like public option health care plans, which increase access to health services while also providing more transparency into operations in the sector and putting productive pressure on the pricing and features of private products. It would also allow us to figure out the limits of LLMs and direct their applications with those in mind.

We know that LLMs often “hallucinate,” inferring facts that aren’t real. It isn’t clear whether this is an unavoidable flaw of how they work, or whether it can be corrected for. Democracy could be undermined if citizens trust technologies that just make stuff up at random, and the companies trying to sell these technologies can’t be trusted to admit their flaws.

But a public option A.I. could do more than check technology companies’ honesty. It could test new applications that could support democracy rather than undermining it.

Most obviously, LLMs could help us formulate and express our perspectives and policy positions, making political arguments more cogent and informed, whether in social media, letters to the editor, or comments to rule-making agencies in response to policy proposals. By this we don’t mean that A.I. will replace humans in the political debate, only that it can help us express ourselves. If you’ve ever used a Hallmark greeting card or signed a petition, you’ve already demonstrated that you’re OK with accepting help to articulate your personal sentiments or political beliefs. A.I. will make it easier to generate first drafts, provide editing help, and suggest alternative phrasings. How these A.I. uses are perceived will change over time, and there is still much room for improvement in LLMs—but their assistive power is real. People are already testing and speculating on their potential for speechwriting, lobbying, and campaign messaging. Highly influential people often rely on professional speechwriters and staff to help develop their thoughts, and A.I. could serve a similar role for everyday citizens.
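
To make this assistive role concrete, here is a minimal sketch of what such a drafting helper could look like, written against the OpenAI Python client. The model name, the prompt wording, and the draft_public_comment helper are illustrative assumptions for this sketch, not anything specified in the essay.

```python
# A minimal sketch of LLM-assisted drafting: turn a citizen's rough notes into
# a first-draft public comment they can then edit and sign themselves.
# Assumes the OpenAI Python client (>= 1.0) and an API key in the environment;
# the model name and prompt wording are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_public_comment(rough_notes: str, proposal_title: str) -> str:
    """Produce a first draft that the citizen reviews, edits, and owns."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You help citizens express their own views clearly. "
                        "Preserve the writer's position; do not add new claims."},
            {"role": "user",
             "content": f"Draft a short public comment on '{proposal_title}' "
                        f"based on these notes:\n{rough_notes}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    notes = "Support the bike lane plan; worried about losing parking near the clinic."
    print(draft_public_comment(notes, "Downtown Mobility Plan"))
```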

If the hallucination problem can be solved, LLMs could also become explainers and educators. Imagine citizens being able to query an LLM that has expert-level knowledge of a policy issue, or that has command of the positions of a particular candidate or party. Instead of having to parse bland and evasive statements calibrated for a mass audience, individual citizens could gain real political understanding through question-and-answer sessions with LLMs that could be unfailingly available and endlessly patient in ways that no human could ever be.

Finally, and most ambitiously, A.I. could help facilitate radical democracy at scale. As Carnegie Mellon professor of statistics Cosma Shalizi has observed, we delegate decisions to elected politicians in part because we don’t have time to deliberate on every issue. But A.I. could manage massive political conversations in chat rooms, on social networking sites, and elsewhere: identifying common positions and summarizing them, surfacing unusual arguments that seem compelling to those who have heard them, and keeping attacks and insults to a minimum.
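
As a rough sketch of how conversation-scale moderation might be wired, the snippet below groups similar comments with sentence embeddings and surfaces a representative comment for each theme, which a human moderator or an LLM could then summarize. The choice of libraries (sentence-transformers, scikit-learn) and the embedding model are assumptions for illustration, not a prescription.

```python
# A rough sketch of conversation-scale moderation: cluster thousands of comments
# into themes with sentence embeddings, then surface one representative comment
# per theme for later summarization. Library and model choices are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans


def surface_common_positions(comments: list[str], n_themes: int = 5) -> dict[int, str]:
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(comments, normalize_embeddings=True)
    labels = KMeans(n_clusters=n_themes, n_init="auto").fit_predict(embeddings)

    representatives = {}
    for theme in range(n_themes):
        members = np.where(labels == theme)[0]
        centroid = embeddings[members].mean(axis=0)
        # The comment closest to the cluster centroid stands in for the theme.
        best = members[np.argmax(embeddings[members] @ centroid)]
        representatives[theme] = comments[best]
    return representatives
```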

A.I. chatbots could run national electronic town hall meetings and automatically summarize the perspectives of diverse participants. This type of A.I.-moderated civic debate could also be a dynamic alternative to opinion polling. Politicians turn to opinion surveys to capture snapshots of popular opinion because they can only hear directly from a small number of voters, but want to understand where voters agree or disagree.

Looking further into the future, these technologies could help groups reach consensus and make decisions. Early experiments by the A.I. company DeepMind suggest that LLMs can build bridges between people who disagree, helping bring them to consensus. Science fiction writer Ruthanna Emrys, in her remarkable novel A Half-Built Garden, imagines how A.I. might help people have better conversations and make better decisions, rather than taking advantage of their cognitive biases to maximize profits.

This future requires an A.I. public option. Building one, through a government-directed model development and deployment program, would require a lot of effort—and the greatest challenges in developing public A.I. systems would be political.

Some technological tools are already publicly available. In fairness, tech giants like Google and Meta have made many of their latest and greatest A.I. tools freely available for years, in cooperation with the academic community. Although OpenAI has not made the source code and trained features of its latest models public, competitors such as Hugging Face have done so for similar systems.

While state-of-the-art LLMs achieve spectacular results, they do so using techniques that are mostly well known and widely used throughout the industry. OpenAI has only revealed limited details of how it trained its latest model, but its major advance over its earlier ChatGPT model is no secret: a multi-modal training process that accepts both image and textual inputs.

Financially, the largest-scale LLMs being trained today cost hundreds of millions of dollars. That’s beyond ordinary people’s reach, but it’s a pittance compared to U.S. federal military spending—and a great bargain for the potential return. While we may not want to expand the scope of existing agencies to accommodate this task, we have our choice of government labs, like the National Institute of Standards and Technology, the Lawrence Livermore National Laboratory, and other Department of Energy labs, as well as universities and nonprofits, with the A.I. expertise and capability to oversee this effort.

Instead of releasing half-finished A.I. systems for the public to test, we need to make sure that they are robust before they’re released—and that they strengthen democracy rather than undermine it. The key advance that made recent A.I. chatbot models dramatically more useful was feedback from real people. Companies employ teams to interact with early versions of their software to teach them which outputs are useful and which are not. These paid users train the models to align to corporate interests, with applications like web search (integrating commercial advertisements) and business productivity assistive software in mind.

To build assistive A.I. for democracy, we would need to capture human feedback for specific democratic use cases, such as moderating a polarized policy discussion, explaining the nuance of a legal proposal, or articulating one’s perspective within a larger debate. This gives us a path to “align” LLMs with our democratic values: by having models generate answers to questions, make mistakes, and learn from the responses of human users, without having these mistakes damage users and the public arena.
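
As a concrete illustration of what capturing that feedback could involve, here is a minimal sketch of a preference record such a pipeline might store for later alignment training. The field names, task labels, and JSONL storage are assumptions made for this sketch; real alignment pipelines collect far richer data.

```python
# A minimal sketch of the feedback record an alignment pipeline could collect
# for democratic use cases: the prompt, two candidate answers, and the
# reviewer's choice plus a reason. Field names and JSONL storage are assumptions.
import json
from dataclasses import asdict, dataclass
from pathlib import Path


@dataclass
class PreferenceRecord:
    task: str        # e.g. "explain_legal_proposal" or "moderate_discussion"
    prompt: str
    answer_a: str
    answer_b: str
    preferred: str   # "a" or "b"
    rationale: str   # why the reviewer preferred it: clarity, neutrality, accuracy


def append_feedback(record: PreferenceRecord, path: Path = Path("feedback.jsonl")) -> None:
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```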

Capturing that kind of user interaction and feedback within a political environment suspicious of both A.I. and technology generally will be challenging. It’s easy to imagine the same politicians who rail against the untrustworthiness of companies like Meta getting far more riled up by the idea of government having a role in technology development.

As Karl Popper, the great theorist of the open society, argued, we shouldn’t try to solve complex problems with grand hubristic plans. Instead, we should apply A.I. through piecemeal democratic engineering, carefully determining what works and what does not. The best way forward is to start small, applying these technologies to local decisions with more constrained stakeholder groups and smaller impacts.

The next generation of A.I. experimentation should happen in the laboratories of democracy: states and municipalities. Online town halls to discuss local participatory budgeting proposals could be an easy first step. Commercially available and open-source LLMs could bootstrap this process and build momentum toward federal investment in a public A.I. option.

Even with these approaches, building and fielding a democratic A.I. option will be messy and hard. But the alternative—shrugging our shoulders as a fight for commercial A.I. domination undermines democratic politics—will be much messier and much worse.

This essay was written with Henry Farrell and Nathan Sanders, and previously appeared on Slate.com.

Read More

Application Programming Interface (API) testing for PCI DSS compliance

Read Time:2 Minute, 36 Second

This is the fourth blog in the series focused on PCI DSS, written by an AT&T Cybersecurity consultant. See the first blog relating to IAM and PCI DSS here. See the second blog on PCI DSS reporting details to ensure when contracting quarterly CDE tests here. The third blog on network and data flow diagrams for PCI DSS compliance is here.

Requirement 6 of the Payment Card Industry (PCI) Data Security Standard (DSS) v3.2.1 was written before APIs became a big thing in applications, and therefore largely ignores them.

However, PCI’s Secure Software Standard and Secure Software Lifecycle (Secure SLC) Standard (PCI-Secure-SLC-Standard-v1_1.pdf) have both begun to recognize the importance of covering them.

In 2019, the Open Web Application Security Project (OWASP) issued a top 10 list of flaws specifically for APIs through one of its subgroups, the OWASP API Security Project. Ultimately, if APIs exist in, or could affect the security of, the CDE, they are in scope for an assessment.

API testing transcends traditional firewall, web application firewall, SAST and DAST testing in that it addresses the multiple co-existing sessions and states that an application is dealing with. It uses fuzzing techniques (automated manipulation of data fields such as session identifiers) to validate that those sessions, including their state information and data, are adequately separated from one another.

As an example: consumer-A must not be able to access consumer-B’s session data, nor to piggyback on information from consumer-B’s session to carry consumer-A’s possibly unauthenticated session further into the application or servers. API testing will also ensure that any management tasks (such as new account creation) available through APIs are adequately authenticated, authorized and impervious to hijacking.
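
A minimal pytest-style sketch of the checks described above, against a hypothetical API: consumer A’s token must not retrieve consumer B’s resource, and an administrative method must reject unauthenticated callers. The base URL, endpoint paths, and tokens are placeholders, not a real service.

```python
# Pytest-style sketch of cross-session and admin-function checks.
# The base URL, paths, and tokens below are hypothetical placeholders.
import requests

BASE_URL = "https://api.example.test"
TOKEN_A = "token-for-consumer-a"              # placeholder credential
ORDER_ID_B = "order-belonging-to-consumer-b"  # resource owned by consumer B


def test_consumer_a_cannot_read_consumer_b_order():
    resp = requests.get(
        f"{BASE_URL}/v1/orders/{ORDER_ID_B}",
        headers={"Authorization": f"Bearer {TOKEN_A}"},
        timeout=10,
    )
    # 403 or 404 are both acceptable; anything that returns B's data is a failure.
    assert resp.status_code in (403, 404)


def test_account_creation_requires_authentication():
    resp = requests.post(
        f"{BASE_URL}/v1/admin/accounts",
        json={"user": "mallory"},
        timeout=10,
    )
    assert resp.status_code in (401, 403)
```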

Even in an API with just 10 methods, there can be more than 1,000 tests that need to be executed to ensure all the OWASP top 10 issues are protected against. Most such testing requires the swagger file (API definition file) to start from, and a selection of differently privileged test userIDs to work with.
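
To see why the numbers climb so quickly, the sketch below enumerates a test matrix from a swagger/OpenAPI file: every operation crossed with every OWASP API Top 10 category and every test identity. The spec path and role names are assumptions for illustration.

```python
# A sketch of why the test count grows so quickly: each operation in the
# swagger/OpenAPI file is exercised against each OWASP API Top 10 (2019)
# category with each test identity. The spec path and roles are assumptions.
import itertools
import json

OWASP_API_TOP_10 = [f"API{i}:2019" for i in range(1, 11)]
TEST_IDENTITIES = ["anonymous", "consumer_a", "consumer_b", "support", "admin"]


def enumerate_test_cases(spec_path: str = "swagger.json"):
    with open(spec_path, encoding="utf-8") as f:
        spec = json.load(f)
    operations = [
        (path, method)
        for path, methods in spec.get("paths", {}).items()
        for method in methods  # get, post, put, delete, ...
    ]
    # 10 operations x 10 categories x 5 identities is already 500 cases;
    # fuzzing each parameter multiplies that well past 1,000.
    return list(itertools.product(operations, OWASP_API_TOP_10, TEST_IDENTITIES))
```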

API testing will also potentially reveal that some useful logging, and therefore alerting, is not occurring because the API is not generating logs for those events, or the log destination is not integrated with the SIEM. The API may thus need some redesign to make sure all PCI-required events are in fact being recorded (especially when related to access control, account management, and elevated privilege use). PCI DSS v4.0 has expanded the need for logging in certain situations, so ensure tests are performed to validate the logging paradigm for all required paths.

Finally, both internal and externally accessible APIs should be tested because least-privilege for PCI requires that any unauthorized persons be adequately prevented from accessing functions that are not relevant to their job responsibilities.

AT&T Cybersecurity provides a broad range of consulting services to help you out in your journey to manage risk and keep your company secure. PCI-DSS consulting is only one of the areas where we can assist. Check out our services.

Read More

Embracing zero-trust: a look at the NSA’s recommended IAM best practices for administrators

Read Time:42 Second

By now, most of the industry has realized we’re seeing a shift from the legacy perimeter-based security model to an identity-centric approach to cybersecurity. If defenders haven’t realized this, malicious actors certainly have: according to sources such as the 2022 Verizon Data Breach Investigations Report, 80% of web application attacks utilize stolen credentials, as do 40% of the breaches that don’t involve insider threats or user error.

Compromised credentials were involved in incidents such as the 2021 Colonial Pipeline breach, the 2021 attack on the water treatment plant in Oldsmar, Florida, and the 2022 attack on the South Staffordshire water treatment plant in the UK, illustrating that these incidents can and have spilled over from the digital realm to the physical, impacting critical infrastructure.


Read More

PaperCut Remote Code Execution Vulnerability (CVE-2023–27350) Exploited in the Wild

Read Time:2 Minute, 16 Second

FortiGuard Labs is aware that a recently disclosed vulnerability in PaperCut MF/NG (CVE-2023-27350) allows remote code execution and is currently being exploited in the wild. Various remote management and maintenance tools, as well as the Truebot malware, have reportedly been deployed to unpatched servers. As such, patches should be applied as soon as possible.

PaperCut NG is print management software that helps organizations manage printing within their environment. It provides tools for monitoring printer usage, setting policies, and controlling access to resources. PaperCut NG is compatible with a wide range of printers, copiers, and multi-function devices and can be installed on various operating systems such as Windows, Linux, and macOS. The MF version shares the same codebase but adds support for multifunction devices.

What is CVE-2023-27350?
CVE-2023-27350 is a Remote Code Execution (RCE) vulnerability that allows an attacker to bypass authentication and remotely execute malicious code on unpatched servers.

What is the CVSS Score?
The vulnerability has a CVSS base score of 9.8.

Is CVE-2023-27350 being Exploited in the Wild?
PaperCut confirms the vulnerability is being exploited in the wild. Furthermore, known remote management and maintenance software and the Truebot malware were reportedly deployed on vulnerable servers. The Clop ransomware threat actor is believed to have used the Truebot malware in this latest attack.

Has the Vendor Released an Advisory for CVE-2023-27350?
Yes, a vendor advisory is available. Please refer to the Appendix for a link to “URGENT | PaperCut MF/NG vulnerability bulletin (March 2023)”.

Has the Vendor Released a Patch for CVE-2023-27350?
Yes, PaperCut has released a patch for CVE-2023-27350 in PaperCut MF and PaperCut NG versions 20.1.7, 21.2.11, and 22.0.9 and later. Please refer to “URGENT | PaperCut MF/NG vulnerability bulletin (March 2023) (PaperCut)” in the Appendix for further details.

Which Versions of PaperCut are Vulnerable to CVE-2023-27350?
According to the advisory, PaperCut MF or NG version 8.0 or later on all OS platforms is vulnerable.

What is the Status of Protection?
FortiGuard Labs has the following AV coverage in place for the known remote management and maintenance software deployed on servers after exploitation of CVE-2023-27350:

W64/Agent.CGW!tr
Riskware/RemoteAdmin

All reported network IOCs related to the post-exploitation activities are blocked by Webfiltering. FortiGuard Labs is currently investigating additional coverage and will update this Threat Signal when new information becomes available.

Any Suggested Mitigation?
The PaperCut advisory contains detailed mitigations and workarounds. Please refer to “URGENT | PaperCut MF/NG vulnerability bulletin (March 2023) (PaperCut)” in the Appendix for further details.

Read More

git-2.40.1-1.fc36

Read Time:43 Second

FEDORA-2023-003e7d2867

Packages in this update:

git-2.40.1-1.fc36

Update description:

update to 2.40.1 (CVE-2023-25652, CVE-2023-25815, CVE-2023-29007)

Refer to the release notes for 2.30.9 for details of each CVE as well as
the following security advisories from the git project:

https://github.com/git/git/security/advisories/GHSA-2hvf-7c8p-28fx (CVE-2023-25652)
https://github.com/git/git/security/advisories/GHSA-v48j-4xgg-4844 (CVE-2023-29007)

(At this time there is no upstream advisory for CVE-2023-25815. This
issue does not affect the Fedora packages as we do not use the runtime
prefix support.)

Release notes:
https://github.com/git/git/raw/v2.30.9/Documentation/RelNotes/2.30.9.txt
https://github.com/git/git/raw/v2.40.1/Documentation/RelNotes/2.40.1.txt

Read More

git-2.40.1-1.fc38

Read Time:43 Second

FEDORA-2023-eaf1bdd5ae

Packages in this update:

git-2.40.1-1.fc38

Update description:

update to 2.40.1 (CVE-2023-25652, CVE-2023-25815, CVE-2023-29007)

Refer to the release notes for 2.30.9 for details of each CVE as well as
the following security advisories from the git project:

https://github.com/git/git/security/advisories/GHSA-2hvf-7c8p-28fx (CVE-2023-25652)
https://github.com/git/git/security/advisories/GHSA-v48j-4xgg-4844 (CVE-2023-29007)

(At this time there is no upstream advisory for CVE-2023-25815. This
issue does not affect the Fedora packages as we do not use the runtime
prefix support.)

Release notes:
https://github.com/git/git/raw/v2.30.9/Documentation/RelNotes/2.30.9.txt
https://github.com/git/git/raw/v2.40.1/Documentation/RelNotes/2.40.1.txt

Read More