Friday Squid Blogging: Why There Are No Giant Squid in Aquariums

They’re too big and we can’t recreate their habitat.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Hackers Stole Access Tokens from Okta’s Support Unit

Okta, a company that provides identity tools like multi-factor authentication and single sign-on to thousands of businesses, has suffered a security breach involving a compromise of its customer support unit, KrebsOnSecurity has learned. Okta says the incident affected a “very small number” of customers. However, it appears the hackers responsible had access to Okta’s support platform for at least two weeks before the company fully contained the intrusion.

In an advisory sent to an undisclosed number of customers on Oct. 19, Okta said it “has identified adversarial activity that leveraged access to a stolen credential to access Okta’s support case management system. The threat actor was able to view files uploaded by certain Okta customers as part of recent support cases.”

Okta explained that when it is troubleshooting issues with customers it will often ask for a recording of a Web browser session (a.k.a. an HTTP Archive or HAR file). These are sensitive files because in this case they include the customer’s cookies and session tokens, which intruders can then use to impersonate valid users.

“Okta has worked with impacted customers to investigate, and has taken measures to protect our customers, including the revocation of embedded session tokens,” their notice continued. “In general, Okta recommends sanitizing all credentials and cookies/session tokens within a HAR file before sharing it.”
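
Anyone who has to share one of these browser recordings can strip the dangerous fields before uploading them. Below is a minimal Python sketch of that kind of sanitization; it is not Okta’s tooling, and the file names and the specific list of sensitive header names are assumptions for illustration:

```python
# Minimal HAR sanitizer sketch (illustration only, not Okta's tooling).
# It blanks out Cookie/Set-Cookie/Authorization headers and recorded cookie
# values so a support upload no longer carries usable session tokens.
import json

SENSITIVE_HEADERS = {"cookie", "set-cookie", "authorization"}

def sanitize_har(path_in: str, path_out: str) -> None:
    with open(path_in, "r", encoding="utf-8") as f:
        har = json.load(f)

    for entry in har.get("log", {}).get("entries", []):
        for section in (entry.get("request", {}), entry.get("response", {})):
            # Blank sensitive header values rather than deleting them, so the
            # structure of the capture stays intact for troubleshooting.
            for header in section.get("headers", []):
                if header.get("name", "").lower() in SENSITIVE_HEADERS:
                    header["value"] = "REDACTED"
            # HAR records cookies separately from the raw headers.
            for cookie in section.get("cookies", []):
                cookie["value"] = "REDACTED"

    with open(path_out, "w", encoding="utf-8") as f:
        json.dump(har, f, indent=2)

# Hypothetical file names for the sake of the example.
sanitize_har("support-session.har", "support-session.sanitized.har")
```

A real sanitizer would also scrub request bodies and query strings, which can carry tokens of their own.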

The security firm BeyondTrust is among the Okta customers who received Thursday’s alert from Okta. BeyondTrust Chief Technology Officer Marc Maiffret said that alert came more than two weeks after his company alerted Okta to a potential problem.

Maiffret emphasized that BeyondTrust caught the attack earlier this month as it was happening, and that none of its own customers were affected. He said that on Oct. 2, BeyondTrust’s security team detected that someone was trying to use an Okta account assigned to one of its engineers to create an all-powerful administrator account within its Okta environment.

When BeyondTrust reviewed the activity of the employee account that tried to create the new administrative profile, they found that — just 30 minutes prior to the unauthorized activity — one of their support engineers shared with Okta one of these HAR files that contained a valid Okta session token, Maiffret said.

“Our admin sent that [HAR file] over at Okta’s request, and 30 minutes after that the attacker started doing session hijacking, tried to replay the browser session and leverage the cookie in that browser recording to act on behalf of that user,” he said.

Maiffret said BeyondTrust followed up with Okta on Oct. 3 to say it was fairly confident Okta had suffered an intrusion, and that he reiterated that conclusion in phone calls with Okta on Oct. 11 and again on Oct. 13.

In an interview with KrebsOnSecurity, Okta’s Deputy Chief Information Security Officer Charlotte Wylie said Okta initially believed that BeyondTrust’s alert on Oct. 2 was not a result of a breach in its systems. But she said that by Oct. 17, the company had identified and contained the incident — disabling the compromised customer case management account, and invalidating Okta access tokens associated with that account.

Wylie declined to say exactly how many customers received alerts of a potential security issue, but characterized it as a “very, very small subset” of its more than 18,000 customers.

The disclosure from Okta comes just weeks after casino giants Caesars Entertainment and MGM Resorts were hacked. In both cases, the attackers managed to social engineer employees into resetting the multi-factor login requirements for Okta administrator accounts.

In March 2022, Okta disclosed a breach at the hands of LAPSUS$, a criminal hacking group that specialized in social-engineering employees at targeted companies. An after-action report from Okta on that incident found that LAPSUS$ had social engineered its way onto the workstation of a support engineer at Sitel, a third-party outsourcing company that had access to Okta resources.

Okta’s Wylie declined to answer questions about how long the intruder may have had access to the company’s case management account, or who might have been responsible for the attack. However, she did say the company believes this is an adversary they have seen before.

“This is a known threat actor that we believe has targeted us and Okta-specific customers,” Wylie said.

Update, 2:57 p.m. ET: Okta has published a blog post about this incident that includes some “indicators of compromise” that customers can use to see if they were affected. But the company stressed that “all customers who were impacted by this have been notified. If you’re an Okta customer and you have not been contacted with another message or method, there is no impact to your Okta environment or your support tickets.”

This is a fast-moving story. Updates will be noted and timestamped here.

USN-6440-2: Linux kernel (Azure) vulnerabilities

Seth Jenkins discovered that the Linux kernel did not properly perform
address randomization for a per-cpu memory management structure. A local
attacker could use this to expose sensitive information (kernel memory) or
exploit it in conjunction with another kernel vulnerability. (CVE-2023-0597)

It was discovered that the IPv6 implementation in the Linux kernel
contained a high rate of hash collisions in its connection lookup table. A
remote attacker could use this to cause a denial of service (excessive CPU
consumption). (CVE-2023-1206)

Yu Hao and Weiteng Chen discovered that the Bluetooth HCI UART driver in
the Linux kernel contained a race condition, leading to a null pointer
dereference vulnerability. A local attacker could use this to cause a
denial of service (system crash). (CVE-2023-31083)

Ross Lagerwall discovered that the Xen netback backend driver in the Linux
kernel did not properly handle certain unusual packets from a
paravirtualized network frontend, leading to a buffer overflow. An attacker
in a guest VM could use this to cause a denial of service (host system
crash) or possibly execute arbitrary code. (CVE-2023-34319)

Lin Ma discovered that the Netlink Transformation (XFRM) subsystem in the
Linux kernel contained a null pointer dereference vulnerability in some
situations. A local privileged attacker could use this to cause a denial of
service (system crash). (CVE-2023-3772)

Kyle Zeng discovered that the networking stack implementation in the Linux
kernel did not properly validate skb object size in certain conditions. An
attacker could use this to cause a denial of service (system crash) or
possibly execute arbitrary code. (CVE-2023-42752)

Kyle Zeng discovered that the netfilter subsystem in the Linux kernel did
not properly calculate array offsets, leading to an out-of-bounds write
vulnerability. A local user could use this to cause a denial of service
(system crash) or possibly execute arbitrary code. (CVE-2023-42753)

Kyle Zeng discovered that the IPv4 Resource Reservation Protocol (RSVP)
classifier implementation in the Linux kernel contained an out-of-bounds
read vulnerability. A local attacker could use this to cause a denial of
service (system crash). Please note that kernel packet classifier support
for RSVP has been removed to resolve this vulnerability. (CVE-2023-42755)

Bing-Jhong Billy Jheng discovered that the Unix domain socket
implementation in the Linux kernel contained a race condition in certain
situations, leading to a use-after-free vulnerability. A local attacker
could use this to cause a denial of service (system crash) or possibly
execute arbitrary code. (CVE-2023-4622)

Budimir Markovic discovered that the qdisc implementation in the Linux
kernel did not properly validate inner classes, leading to a use-after-free
vulnerability. A local user could use this to cause a denial of service
(system crash) or possibly execute arbitrary code. (CVE-2023-4623)

Alex Birnberg discovered that the netfilter subsystem in the Linux kernel
did not properly validate register length, leading to an out-of-bounds
write vulnerability. A local attacker could possibly use this to cause a
denial of service (system crash). (CVE-2023-4881)

It was discovered that the Quick Fair Queueing scheduler implementation in
the Linux kernel did not properly handle network packets in certain
conditions, leading to a use-after-free vulnerability. A local attacker
could use this to cause a denial of service (system crash) or possibly
execute arbitrary code. (CVE-2023-4921)
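
For administrators who want a quick local check of whether a given Azure instance is still running a pre-fix kernel, a rough comparison of the running kernel release is enough. The Python sketch below is an illustration only, not an official Canonical tool; the fixed-version constant is a placeholder assumption, and the authoritative package versions are the ones listed in the USN itself:

```python
# Rough "am I still affected?" check (illustration only).
# FIXED_VERSION is a placeholder; substitute the fixed Azure kernel version
# from USN-6440-2 before relying on the result.
import platform
import re

FIXED_VERSION = (6, 2, 0, 1016)  # placeholder: major, minor, patch, ABI

def running_kernel_version() -> tuple:
    # e.g. "6.2.0-1016-azure" -> (6, 2, 0, 1016)
    numbers = re.findall(r"\d+", platform.release())
    return tuple(int(n) for n in numbers[:4])

if running_kernel_version() < FIXED_VERSION:
    print("Kernel predates the placeholder fixed version; update and reboot.")
else:
    print("Kernel is at or beyond the placeholder fixed version.")
```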

llhttp-9.1.3-1.fc40 python-aiohttp-3.8.6-1.fc40

FEDORA-2023-f2bb9ee617

Packages in this update:

llhttp-9.1.3-1.fc40
python-aiohttp-3.8.6-1.fc40

Update description:

python-aiohttp 3.8.6 (2023-10-07)

https://github.com/aio-libs/aiohttp/blob/v3.8.6/CHANGES.rst#386-2023-10-07

Security bugfixes

Upgraded llhttp to v9.1.3: https://github.com/aio-libs/aiohttp/security/advisories/GHSA-pjjw-qhg8-p2p9
Updated Python parser to comply with RFCs 9110/9112: https://github.com/aio-libs/aiohttp/security/advisories/GHSA-gfw2-4jvh-wgfg

Deprecation

Added fallback_charset_resolver parameter in ClientSession to allow a user-supplied character set detection function. Character set detection will no longer be included in 3.9 as a default. If this feature is needed, please use fallback_charset_resolver.
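
For code that still needs detection, the new parameter takes a callable that receives the response and its raw body and returns an encoding name, which is consulted only when the response does not declare a charset. A minimal sketch, assuming aiohttp 3.8.6 or later (the legacy host name and the latin-1 heuristic are invented for illustration):

```python
# Sketch of the fallback_charset_resolver hook (aiohttp >= 3.8.6 assumed).
import asyncio
import aiohttp

def detect_charset(response: aiohttp.ClientResponse, body: bytes) -> str:
    # Hypothetical heuristic: one legacy host is known to serve latin-1;
    # everything else falls back to UTF-8.
    if response.url.host == "legacy.example.com":
        return "latin-1"
    return "utf-8"

async def main() -> None:
    async with aiohttp.ClientSession(fallback_charset_resolver=detect_charset) as session:
        async with session.get("https://example.com/") as resp:
            # The resolver is only used when the server declares no charset.
            print((await resp.text())[:80])

asyncio.run(main())
```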

Features

Enabled lenient response parsing for more flexible parsing in the client (this should resolve some regressions when dealing with badly formatted HTTP responses).

Bugfixes

Fixed PermissionError when .netrc is unreadable due to permissions.
Fixed output of parsing errors pointing to a \n.
Fixed GunicornWebWorker max_requests_jitter not working.
Fixed sorting in filter_cookies to use cookie with longest path.
Fixed display of BadStatusLine messages from llhttp.

llhttp 9.1.3

Fixes

Restart the parser on HTTP 100
Fix chunk extensions quoted-string value parsing
Fix lenient_flags truncated on reset
Fix chunk extensions’ parameters parsing when more than one name-value pair is provided

llhttp 9.1.2

What’s Changed

Fix HTTP 1xx handling

llhttp 9.1.1

What’s Changed

feat: Expose new lenient methods

llhttp 9.1.0

What’s Changed

New lenient flag to make CR completely optional
New lenient flag to have spaces after chunk header

AI and US Election Rules

If an AI breaks the rules for you, does that count as breaking the rules? This is the essential question being taken up by the Federal Election Commission this month, and public input is needed to curtail the potential for AI to take US campaigns (even more) off the rails.

At issue is whether candidates using AI to create deepfaked media for political advertisements should be considered fraud or legitimate electioneering. That is, is it allowable to use AI image generators to create photorealistic images depicting Trump hugging Anthony Fauci? And is it allowable to use dystopic images generated by AI in political attack ads?

For now, the answer to these questions is probably “yes.” These are fairly innocuous uses of AI, no different from the old-school approach of hiring actors and staging a photoshoot, or using video editing software. Even in cases where AI tools will be put to scurrilous purposes, that’s probably legal in the US system. Political ads are, after all, a medium in which you are explicitly permitted to lie.

The concern over AI is a distraction, but one that can help draw focus to the real issue. What matters isn’t how political content is generated; what matters is the content itself and how it is distributed.

Future uses of AI by campaigns go far beyond deepfaked images. Campaigns will also use AI to personalize communications. Whereas the previous generation of social media microtargeting was celebrated for helping campaigns reach a precision of thousands or hundreds of voters, the automation offered by AI will allow campaigns to tailor their advertisements and solicitations to the individual.

Most significantly, AI will allow digital campaigning to evolve from a broadcast medium to an interactive one. AI chatbots representing campaigns are capable of responding to questions instantly and at scale, like a town hall taking place in every voter’s living room, simultaneously. Ron DeSantis’ presidential campaign has reportedly already started using OpenAI’s technology to handle text message replies to voters.

At the same time, it’s not clear whose responsibility it is to keep US political advertisements grounded in reality—if it is anyone’s. The FEC’s role is limited to campaign finance, and it is further circumscribed by the Supreme Court’s repeated stripping of its authorities. The Federal Communications Commission has much more expansive responsibility for regulating political advertising in broadcast media, as well as political robocalls and text communications. However, the FCC hasn’t done much in recent years to curtail political spam. The Federal Trade Commission enforces truth-in-advertising standards, but political campaigns have been largely exempted from these requirements on First Amendment grounds.

To further muddy the waters, much of the online space remains loosely regulated, even as campaigns have fully embraced digital tactics. There are still insufficient disclosure requirements for digital ads. Campaigns pay influencers to post on their behalf to circumvent paid advertising rules. And there are essentially no rules beyond the simple use of disclaimers for videos that campaigns post organically on their own websites and social media accounts, even if they are shared millions of times by others.

Almost everyone has a role to play in improving this situation.

Let’s start with the platforms. Google announced earlier this month that it would require political advertisements on YouTube and its other advertising platforms to disclose when they include AI-generated images, audio, or video. This is to be applauded, but we cannot rely on voluntary actions by private companies to protect our democracy. Such policies, even when well-meaning, will be inconsistently devised and enforced.

The FEC should use its limited authority to stem this coming tide. The FEC’s present consideration of rulemaking on this issue was prompted by Public Citizen, which petitioned the Commission to “clarify that the law against ‘fraudulent misrepresentation’ (52 U.S.C. §30124) applies to deliberately deceptive AI-produced content in campaign communications.” The FEC’s regulation against fraudulent misrepresentation (11 C.F.R. §110.16) is very narrow; it simply restricts candidates from pretending to be speaking on behalf of their opponents in a “damaging” way.

Extending this to explicitly cover deepfaked AI materials seems appropriate. We should broaden the standards to robustly regulate the activity of fraudulent misrepresentation, whether the entity performing that activity is AI or human—but this is only the first step. If the FEC takes up rulemaking on this issue, it could further clarify what constitutes “damage.” Is it damaging when a PAC promoting Ron DeSantis uses an AI voice synthesizer to generate a convincing facsimile of the voice of his opponent Donald Trump speaking his own Tweeted words? That seems like fair play. What if opponents find a way to manipulate the tone of the speech in a way that misrepresents its meaning? What if they make up words to put in Trump’s mouth? Those use cases seem to go too far, but drawing the boundaries between them will be challenging.

Congress has a role to play as well. Senator Klobuchar and colleagues have been promoting both the existing Honest Ads Act and the proposed REAL Political Ads Act, which would expand the FEC’s disclosure requirements for content posted on the Internet and create a legal requirement for campaigns to disclose when they have used images or video generated by AI in political advertising. While that’s worthwhile, it focuses on the shiny object of AI and misses the opportunity to strengthen law around the underlying issues. The FEC needs more authority to regulate campaign spending on false or misleading media generated by any means and published to any outlet. Meanwhile, the FEC’s own Inspector General continues to warn Congress that the agency is stressed by flat budgets that don’t allow it to keep pace with ballooning campaign spending.

It is intolerable for such a patchwork of commissions to be left to wonder which, if any of them, has jurisdiction to act in the digital space. Congress should legislate to make clear that there are guardrails on political speech and to better draw the boundaries between the FCC, FEC, and FTC’s roles in governing political speech. While the Supreme Court cannot be relied upon to uphold common sense regulations on campaigning, there are strategies for strengthening regulation under the First Amendment. And Congress should allocate more funding for enforcement.

The FEC has asked Congress to expand its jurisdiction, but no action is forthcoming. The present Senate Republican leadership is seen as an ironclad barrier to expanding the Commission’s regulatory authority. Senate Majority Leader Mitch McConnell has a decades-long history of being at the forefront of the movement to deregulate American elections and constrain the FEC. In 2003, he brought the unsuccessful Supreme Court case against the McCain-Feingold campaign finance reform act (the one that failed before the Citizens United case succeeded).

The most impactful regulatory requirement would be to require disclosure of interactive applications of AI for campaigns—and this should fall under the remit of the FCC. If a neighbor texts me and urges me to vote for a candidate, I might find that meaningful. If a bot does it under the instruction of a campaign, I definitely won’t. But I might find a conversation with the bot—knowing it is a bot—useful to learn about the candidate’s platform and positions, as long as I can be confident it is going to give me trustworthy information.

The FCC should enter rulemaking to expand its authority for regulating peer-to-peer (P2P) communications to explicitly encompass interactive AI systems. And Congress should pass enabling legislation to back it up, giving it authority to act not only on the SMS text messaging platform, but also over the wider Internet, where AI chatbots can be accessed over the web and through apps.

And the media has a role. We can still rely on the media to report out what videos, images, and audio recordings are real or fake. Perhaps deepfake technology makes it impossible to verify the truth of what is said in private conversations, but this was always unstable territory.

What is your role? Those who share these concerns can submit a comment to the FEC’s open public comment process before October 16, urging it to use its available authority. We all know government moves slowly, but a show of public interest is necessary to get the wheels moving.

Ultimately, all these policy changes serve the purpose of looking beyond the shiny distraction of AI to create the authority to counter bad behavior by humans. Remember: behind every AI is a human who should be held accountable.

This essay was written with Nathan Sanders, and was previously published on the Ash Center website.
