Chinese Group Exploiting Linux Backdoor to Target Governments

The new backdoor is being used by Earth Lusca to conduct cyber-espionage campaigns, primarily against governments in Asia and the Balkans

CVE-2022-47555

** UNSUPPORTED WHEN ASSIGNED ** Operating system command injection in ekorCCP and ekorRCI, which could allow an authenticated attacker to execute commands, create new users with elevated privileges, or set up a backdoor.

CVE-2022-47554

** UNSUPPORTED WHEN ASSIGNED ** Exposure of sensitive information in ekorCCP and ekorRCI, potentially allowing a remote attacker to obtain critical information from various .xml files, including .xml files containing credentials, without being authenticated within the web server.

CVE-2022-47553

** UNSUPPORTED WHEN ASSIGNED ** Incorrect authorisation in ekorCCP and ekorRCI, which could allow a remote attacker to obtain resources containing sensitive information about the organisation, without being authenticated within the web server.

Detecting AI-Generated Text

There are no reliable ways to distinguish text written by a human from text written by a large language model. OpenAI writes:

Do AI detectors work?

In short, no. While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content.
Additionally, ChatGPT has no “knowledge” of what content could be AI-generated. It will sometimes make up responses to questions like “did you write this [essay]?” or “could this have been written by AI?” These responses are random and have no basis in fact.
To elaborate on our research into the shortcomings of detectors, one of our key findings was that these tools sometimes suggest that human-written content was generated by AI.

When we at OpenAI tried to train an AI-generated content detector, we found that it labeled human-written text like Shakespeare and the Declaration of Independence as AI-generated.
There were also indications that it could disproportionately impact students who had learned or were learning English as a second language and students whose writing was particularly formulaic or concise.
Even if these tools could accurately identify AI-generated content (which they cannot yet), students can make small edits to evade detection.

There is some good research on watermarking LLM-generated text, but the watermarks are not generally robust.
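To make the watermarking idea concrete: one well-known approach from the research literature partitions the vocabulary into a "green" and "red" list keyed on the previous token, biases generation toward green tokens, and detects the watermark by measuring how often tokens land in the green list. The Python sketch below is a toy illustration of the detection side only, under simplifying assumptions (string tokens, a hash-based pseudorandom partition); the function names are my own, not from any library.

```python
import hashlib

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Deterministically rank the vocabulary by hashing each token together
    # with the previous token, then treat the first `fraction` as "green".
    ranked = sorted(
        vocab,
        key=lambda t: hashlib.sha256((prev_token + t).encode()).hexdigest(),
    )
    return set(ranked[: int(len(ranked) * fraction)])

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    # Detection: count how often each token falls in the green list keyed
    # by its predecessor. Watermarked text, whose generator was biased
    # toward green tokens, scores well above `fraction` on average.
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev, vocab)
    )
    return hits / max(len(tokens) - 1, 1)
```

The fragility the post alludes to follows directly from this construction: paraphrasing or swapping even a modest share of tokens replaces green tokens with red ones and drives the detection statistic back toward chance.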

I don’t think the detectors are going to win this arms race.
