[tool] tc – anonymous and cyphered chat over Tor circuits in PGP

Read Time:27 Second

Posted by 0xf— via Fulldisclosure on Jul 07

Hello,

tc is a low-tech piece of free software for anonymous, cyphered chat over
Tor circuits using PGP. Use it to protect your communication end-to-end
with RSA/DSA encryption and to keep yourself anonymously reachable by
anyone who knows only your .onion address and your public key. All this
and more in 2,400 lines of C code that compile and run on BSD and Linux
systems, with an IRC-like GUI.

It’s a minimal, easy-to-customize Unix tool that I write…
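For readers curious about the transport side, the following is a minimal sketch (not code from tc) of how a client can reach a .onion service through Tor’s local SOCKS5 proxy. The onion address and chat port are placeholders, it assumes Tor is listening on its default SOCKS port at 127.0.0.1:9050, and tc’s actual wire protocol may differ.

# Minimal sketch: reaching a .onion service through Tor's local SOCKS5 proxy.
# Not code from tc. The onion address and port below are placeholders, and
# Tor is assumed to be listening on its default SOCKS port, 127.0.0.1:9050.
import socket
import struct

TOR_SOCKS = ("127.0.0.1", 9050)            # default Tor SOCKS5 listener (assumption)
ONION_HOST = b"exampleonionaddress.onion"  # hypothetical peer address
ONION_PORT = 6667                          # hypothetical chat port

def connect_via_tor(host: bytes, port: int) -> socket.socket:
    s = socket.create_connection(TOR_SOCKS)
    # SOCKS5 greeting: version 5, one auth method offered, "no authentication".
    s.sendall(b"\x05\x01\x00")
    if s.recv(2) != b"\x05\x00":
        raise ConnectionError("Tor SOCKS proxy refused the handshake")
    # CONNECT request with a domain-name address type, so the .onion name is
    # resolved inside the Tor network rather than leaking to local DNS.
    s.sendall(b"\x05\x01\x00\x03" + bytes([len(host)]) + host + struct.pack(">H", port))
    reply = s.recv(10)
    if len(reply) < 2 or reply[1] != 0x00:
        raise ConnectionError("Tor could not reach the onion service")
    return s  # plain TCP through a Tor circuit; a tool like tc layers PGP on top

if __name__ == "__main__":
    sock = connect_via_tor(ONION_HOST, ONION_PORT)
    sock.close()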

Read More

CVE-2020-8934

Read Time:23 Second

The Site Kit by Google plugin for WordPress is vulnerable to Sensitive Information Disclosure in versions up to, and including, 1.8.0. This is due to the lack of capability checks on the admin_enqueue_scripts action, which displays the connection key. This makes it possible for authenticated attackers with any level of access to obtain owner access to a site in the Google Search Console. We recommend upgrading to version 1.8.1 or above.

Read More

The AI Dividend

Read Time:4 Minute, 20 Second

For four decades, Alaskans have opened their mailboxes to find checks waiting for them, their cut of the black gold beneath their feet. This is Alaska’s Permanent Fund, funded by the state’s oil revenues and paid to every Alaskan each year. We’re now in a different sort of resource rush, with companies peddling bits instead of oil: generative AI.

Everyone is talking about these new AI technologies—like ChatGPT—and AI companies are touting their awesome power. But they aren’t talking about how that power comes from all of us. Without all of our writings and photos that AI companies are using to train their models, they would have nothing to sell. Big Tech companies are currently taking the work of the American people, without our knowledge and consent, without licensing it, and are pocketing the proceeds.

You are owed profits for your data that powers today’s AI, and we have a way to make that happen. We call it the AI Dividend.

Our proposal is simple, and harkens back to the Alaskan plan. When Big Tech companies produce output from generative AI that was trained on public data, they would pay a tiny licensing fee, by the word or pixel or relevant unit of data. Those fees would go into the AI Dividend fund. Every few months, the Commerce Department would send out the entirety of the fund, split equally, to every resident nationwide. That’s it.

There’s no reason to complicate it further. Generative AI needs a wide variety of data, which means all of us are valuable—not just those of us who write professionally, or prolifically, or well. Figuring out who contributed to which words the AIs output would be both challenging and invasive, given that even the companies themselves don’t quite know how their models work. Paying the dividend to people in proportion to the words or images they create would just incentivize them to create endless drivel, or worse, use AI to create that drivel. The bottom line for Big Tech is that if their AI model was created using public data, they have to pay into the fund. If you’re an American, you get paid from the fund.

Under this plan, hobbyists and American small businesses would be exempt from fees. Only Big Tech companies—those with substantial revenue—would be required to pay into the fund. And they would pay at the point of generative AI output, such as from ChatGPT, Bing, Bard, or their embedded use in third-party services via Application Programming Interfaces.

Our proposal also includes a compulsory licensing plan. By agreeing to pay into this fund, AI companies will receive a license that allows them to use public data when training their AI. This won’t supersede normal copyright law, of course. If a model starts producing copyrighted material beyond fair use, that’s a separate issue.

Using today’s numbers, here’s what it would look like. The licensing fee could be small, starting at $0.001 per word generated by AI. A similar type of fee would be applied to other categories of generative AI outputs, such as images. That’s not a lot, but it adds up. Since most of Big Tech has started integrating generative AI into products, these fees would mean an annual dividend payment of a couple hundred dollars per person.
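As a purely illustrative back-of-the-envelope check, the arithmetic works out as follows; the annual word count and population figure in this sketch are assumptions, not numbers from the proposal.

# Back-of-the-envelope sketch of the dividend arithmetic.
# The fee matches the essay ($0.001 per generated word); the annual word
# count and US population are illustrative assumptions, not proposal figures.
FEE_PER_WORD = 0.001      # dollars per AI-generated word (from the essay)
WORDS_PER_YEAR = 70e12    # assumed AI-generated words per year across Big Tech
US_POPULATION = 335e6     # approximate US population

fund_total = FEE_PER_WORD * WORDS_PER_YEAR        # about $70 billion per year
dividend_per_person = fund_total / US_POPULATION  # about $209 per person

print(f"Annual fund:         ${fund_total:,.0f}")
print(f"Dividend per person: ${dividend_per_person:,.2f}")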

The idea of paying you for your data isn’t new, and some companies have tried to do it themselves for users who opted in. And the idea of the public being repaid for use of their resources goes back to well before Alaska’s oil fund. But generative AI is different: It uses data from all of us whether we like it or not, it’s ubiquitous, and it’s potentially immensely valuable. It would cost Big Tech companies a fortune to create a synthetic equivalent to our data from scratch, and synthetic data would almost certainly result in worse output. They can’t create good AI without us.

Our plan would apply to generative AI used in the US. It also only issues a dividend to Americans. Other countries can create their own versions, applying a similar fee to AI used within their borders. Just as an American company collects VAT for services sold in Europe but not in the United States, each country can manage its AI policy independently.

Don’t get us wrong; this isn’t an attempt to strangle this nascent technology. Generative AI has interesting, valuable, and possibly transformative uses, and this policy is aligned with that future. Even with the fees of the AI Dividend, generative AI will be cheap and will only get cheaper as technology improves. There are also risks—both everyday and esoteric—posed by AI, and the government may need to develop policies to remedy any harms that arise.

Our plan can’t make sure there are no downsides to the development of AI, but it would ensure that all Americans will share in the upsides—particularly since this new technology isn’t possible without our contribution.

This essay was written with Barath Raghavan, and previously appeared on Politico.com.

Read More

What is an incident response plan (IRP) and how effective is your incident response posture?

Read Time:4 Minute, 3 Second

As everyone looks around, sirens begin to sound, creating a sense of urgency; there is only a split second to decide what to do next. The announcer repeats himself over the loudspeaker in short bursts… This is not a drill; report to your individual formations and proceed to the allocated zone by following the numbers on your squad leader’s red cap. I take a breath and contemplate whether this is an evacuation. What underlying danger is creeping into our daily activities? 1… 2… 3… Let’s get this party started!

When I come to… I find that the blue and red lights exist only in the security operations center. Intruders are attempting to infiltrate our defenses in real time; therefore, we are on high alert. The time has come to rely on incident response plans, disaster recovery procedures, and business continuity plans. As organizational security leaders, we serve as guardians of the security posture and executors of the incident response strategy. It is vital to respond to and mitigate cyber incidents, and to reduce security, financial, legal, and organizational risks, in an efficient and effective manner.

Stakeholder community

CISOs, as security leaders, must develop incident response teams to combat cybercrime, data theft, and service failures, all of which jeopardize daily operations and prevent consumers from receiving world-class service. To maintain operational pace, alert the on-the-ground, first-line-of-defense engagement teams, and support real-time decision-making, incident response plan (IRP) protocols must include end-to-end, diverse communication channels.

Stakeholder Types

What does an incident response plan (IRP) do?

That’s an excellent question. The incident response plan gives a structure, or set of guidelines, to follow in order to reduce the impact of, mitigate, and recover from a data breach or attack. Such attacks have the potential to cause chaos by impacting customers, stealing sensitive data or intellectual property, and damaging brand value. The important steps of the incident response process, according to the National Institute of Standards and Technology (NIST), are preparation; detection and analysis; containment, eradication, and recovery; and post-incident activity that focuses on a continual cycle of learning and improvement.

Lifecycle of Incident Response

Many company leaders confront a bottleneck when it comes to assigning a severity rating that determines the impact of the incident and establishes the framework for resolution strategies and external messaging. For some firms, inspecting the damage and assigning an appropriate priority level and impact rating can be stressful and even terrifying.

Rating events can help prioritize limited resources. The incident’s business impact is calculated by combining the functional effect on the organization’s systems with the impact on the organization’s information. The recoverability of the situation dictates the possible responses the team can take while dealing with the issue. An incident with high functional impact and low recovery effort is a good candidate for fast team action.
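As a minimal illustration, a triage helper might combine those three factors as in the sketch below. The category names loosely follow the impact categories in NIST SP 800-61; the numeric scores and priority thresholds are assumptions made for the example, not part of any standard.

# Minimal sketch: combining functional impact, information impact, and
# recoverability effort into a coarse priority. Category names loosely follow
# NIST SP 800-61; the scores and thresholds are illustrative assumptions.
FUNCTIONAL = {"none": 0, "low": 1, "medium": 2, "high": 3}
INFORMATION = {"none": 0, "privacy_breach": 2, "proprietary_breach": 2, "integrity_loss": 3}
RECOVERABILITY = {"regular": 0, "supplemented": 1, "extended": 2, "not_recoverable": 3}

def triage(functional: str, information: str, recoverability: str) -> str:
    """Return a coarse priority from the three impact ratings."""
    impact = FUNCTIONAL[functional] + INFORMATION[information]
    effort = RECOVERABILITY[recoverability]
    if impact >= 4 and effort <= 1:
        return "P1: act now (high impact, low recovery effort)"
    if impact >= 4:
        return "P2: high impact, plan an extended recovery"
    if impact >= 2:
        return "P3: moderate impact, schedule a response"
    return "P4: monitor"

# The article's example: high functional impact with low recovery effort.
print(triage("high", "privacy_breach", "regular"))  # -> P1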

The heartbeat

Companies should follow industry standards that have been tried and tested by fire departments to improve overall incident response effectiveness. These include:

Current contact lists, on-call schedules/rotations for SMEs, and backups
Conferencing and communication tools (e.g., distribution lists, Slack channels, email, phone numbers)
Technical documentation, network diagrams, and accompanying plans/runbooks
Escalation processes for inaccessible SMEs

Since adversaries are shifting their focus away from established pathways in order to evade defenders, it is vital to enlist third-party threat landscape evaluations. These can halt the bleeding and cauterize the wound, much like a surgeon in a high-stress operation. Threat actors are constantly improving their abilities using the same emerging cyber technologies that defenders use.

Despite widespread recognition of the human element as the weakest link, threat actors study their prey’s network to find alternative weak points, such as exploitable vulnerabilities and stolen credentials. Employ Managed Threat Detection and Response (MTDR), Threat Model Workshop (TMW), and Cyber Risk Posture Assessment (CRPA) services to expertly manage your infrastructure and cloud environments end to end.

Takeaways

Take inventory of your assets

Increase return on investment
Provide comprehensive coverage
Accelerate compliance efforts
Create a cybersecurity monitoring and response strategy
Emphasize essential resources, attack surface area, and threat vectors
Deliver transparent, seamless security

Elevate security ecosystem

Improve the efficiency and effectiveness of incident response systems.
The Cyber Risk Posture Assessment (CRPA) encourages better decision-making when assessing the governance and security posture.
The Cyber Risk Posture Assessment (CRPA) and Threat Model Workshop (TMW) provide a method for evaluating the security attack surface and threat vectors.
Managed Threat Detection and Response (MTDR) expands the security team’s capabilities and competencies.
Use scenario-based tabletop exercises and incident response planning exercises.

Going forward, businesses should implement an incident response strategy built on a collection of well-known, verified best practices, and assess their actual versus realized assets and security attack surface portfolio. Is your organization crisis-ready? A strong incident management solution increases organizational resiliency and continuity of operations in the event of a crisis.

Read More