CVE-2019-16470

Adobe Acrobat Reader versions 2019.021.20056 and earlier are affected by a Stack-based Buffer Overflow vulnerability that could result in arbitrary code execution in the context of the current user. Exploitation of this issue requires user interaction in that a victim must open a malicious file.

USN-6356-1: OpenDMARC vulnerabilities

Jianjun Chen, Vern Paxson, and Jian Jiang discovered that OpenDMARC incorrectly handled certain inputs. If a user or an automated system were tricked into receiving crafted inputs, an attacker could possibly use this to falsify the domain of an e-mail's origin. (CVE-2020-12272)

Patrik Lantz discovered that OpenDMARC incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to cause a denial of service. (CVE-2020-12460)

On Robots Killing People

The robot revolution began long ago, and so did the killing. One day in 1979, a robot at a Ford Motor Company casting plant malfunctioned—human workers determined that it was not going fast enough. And so twenty-five-year-old Robert Williams was asked to climb into a storage rack to help move things along. The one-ton robot continued to work silently, smashing into Williams’s head and instantly killing him. This was reportedly the first incident in which a robot killed a human; many more would follow.

At Kawasaki Heavy Industries in 1981, Kenji Urada died in similar circumstances. A malfunctioning robot he went to inspect killed him when he obstructed its path, according to Gabriel Hallevy in his 2013 book, When Robots Kill: Artificial Intelligence Under Criminal Law. As Hallevy puts it, the robot simply determined that “the most efficient way to eliminate the threat was to push the worker into an adjacent machine.” From 1992 to 2017, workplace robots were responsible for 41 recorded deaths in the United States—and that’s likely an underestimate, especially when you consider knock-on effects from automation, such as job loss. A robotic anti-aircraft cannon killed nine South African soldiers in 2007 when a possible software failure led the machine to swing itself wildly and fire dozens of lethal rounds in less than a second. In a 2018 trial, a medical robot was implicated in killing Stephen Pettitt during a routine operation that had occurred a few years earlier.

You get the picture. Robots—“intelligent” and not—have been killing people for decades. And the development of more advanced artificial intelligence has only increased the potential for machines to cause harm. Self-driving cars are already on American streets, and robotic “dogs” are being used by law enforcement. Computerized systems are being given the capabilities to use tools, allowing them to directly affect the physical world. Why worry about the theoretical emergence of an all-powerful, superintelligent program when more immediate problems are at our doorstep? Regulation must push companies toward safe innovation and innovation in safety. We are not there yet.

Historically, major disasters have needed to occur to spur regulation—the types of disasters we would ideally foresee and avoid in today’s AI paradigm. The 1905 Grover Shoe Factory disaster led to regulations governing the safe operation of steam boilers. At the time, companies claimed that large steam-automation machines were too complex for hastily written safety regulations. This, of course, led to overlooked safety flaws and escalating disasters. It wasn’t until the American Society of Mechanical Engineers demanded risk analysis and transparency that dangers from these huge tanks of boiling water, once considered mystifying, were made easily understandable. The 1911 Triangle Shirtwaist Factory fire led to regulations on sprinkler systems and emergency exits. And the preventable 1912 sinking of the Titanic resulted in new regulations on lifeboats, safety audits, and on-ship radios.

Perhaps the best analogy is the evolution of the Federal Aviation Administration. Fatalities in the first decades of aviation forced regulation, which required new developments in both law and technology. Starting with the Air Commerce Act of 1926, Congress recognized that the integration of aerospace tech into people’s lives and our economy demanded the highest scrutiny. Today, every airline crash is closely examined, motivating new technologies and procedures.

Any regulation of industrial robots stems from existing industrial regulation, which has been evolving for many decades. The Occupational Safety and Health Act of 1970 established safety standards for machinery, and the Robotic Industries Association, now merged into the Association for Advancing Automation, has been instrumental in developing and updating specific robot-safety standards since its founding in 1974. Those standards, with obscure names such as R15.06 and ISO 10218, emphasize inherent safe design, protective measures, and rigorous risk assessments for industrial robots.

But as technology continues to change, the government needs to more clearly regulate how and when robots can be used in society. Laws need to clarify who is responsible, and what the legal consequences are, when a robot’s actions result in harm. Yes, accidents happen. But the lessons of aviation and workplace safety demonstrate that accidents are preventable when they are openly discussed and subjected to proper expert scrutiny.

AI and robotics companies don’t want this to happen. OpenAI, for example, has reportedly fought to “water down” safety regulations and reduce AI-quality requirements. According to an article in Time, it lobbied European Union officials against classifying models like ChatGPT as “high risk,” which would have brought “stringent legal requirements including transparency, traceability, and human oversight.” The reasoning was supposedly that OpenAI did not intend to put its products to high-risk use—a logical twist akin to the Titanic owners lobbying that the ship should not be inspected for lifeboats on the principle that it was a “general purpose” vessel that also could sail in warm waters where there were no icebergs and people could float for days. (OpenAI did not comment when asked about its stance on regulation; previously, it has said that “achieving our mission requires that we work to mitigate both current and longer-term risks,” and that it is working toward that goal by “collaborating with policymakers, researchers and users.”)

Large corporations have a tendency to develop computer technologies to self-servingly shift the burdens of their own shortcomings onto society at large, or to claim that safety regulations protecting society impose an unjust cost on corporations themselves, or that security baselines stifle innovation. We’ve heard it all before, and we should be extremely skeptical of such claims. Today’s AI-related robot deaths are no different from the robot accidents of the past. Those industrial robots malfunctioned, and human operators trying to assist were killed in unexpected ways. Since the first known death resulting from the feature in January 2016, Tesla’s Autopilot has been implicated in more than 40 deaths according to official report estimates. Malfunctioning Teslas on Autopilot have deviated from their advertised capabilities by misreading road markings, suddenly veering into other cars or trees, crashing into well-marked service vehicles, or ignoring red lights, stop signs, and crosswalks. We’re concerned that AI-controlled robots already are moving beyond accidental killing in the name of efficiency and “deciding” to kill someone in order to achieve opaque and remotely controlled objectives.

As we move into a future where robots are becoming integral to our lives, we can’t forget that safety is a crucial part of innovation. True technological progress comes from applying comprehensive safety standards across technologies, even in the realm of the most futuristic and captivating robotic visions. By learning lessons from past fatalities, we can enhance safety protocols, rectify design flaws, and prevent further unnecessary loss of life.

For example, the UK government has already set out statements affirming that safety matters. Lawmakers must reach further back in history to become more future-focused on what we must demand right now: modeling threats, calculating potential scenarios, enabling technical blueprints, and ensuring responsible engineering for building within parameters that protect society at large. Decades of experience have given us the empirical evidence to guide our actions toward a safer future with robots. Now we need the political will to regulate.

This essay was written with Davi Ottenheimer, and previously appeared on Atlantic.com.

Top blockchain cybersecurity threats to watch out for

The content of this post is solely the responsibility of the author.  AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article. 

Approximately 57 cryptocurrency thefts occurred in just the first quarter of 2023, echoing similarly disastrous results in 2022, when fraudsters relied on a wide variety of techniques to steal $3.8 billion in cryptocurrency. The perception of vulnerabilities in blockchain-based currency has contributed to a dramatic drop in the total value of cryptocurrency, which fell from over $2 trillion at the beginning of 2022 to just over $820 billion by the end of that year. Attacks range from confidentiality breaches to compromised “smart contracts,” leading to a need to redefine the nature of digital security. Below are just a few of the biggest threats to watch out for.

Threats to consensus protocols

Consensus protocols exist to prevent any single party from controlling an entire blockchain: multiple participants must reach agreement on what the blockchain contains at any given moment. Consensus protocols also require numerous security features to protect against ARP and DDoS attacks. Address Resolution Protocol (ARP) spoofing tricks devices into sending messages to the attacker instead of the intended destination, while Distributed Denial of Service (DDoS) attacks attempt to disrupt a target’s network traffic by overwhelming it with a flood of internet traffic.
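Proof of work is one widely used consensus protocol. The minimal Python sketch below is an illustration only, not any production implementation; the block data and difficulty value are made up. It searches for a nonce whose SHA-256 hash meets a difficulty target. The expensive search and cheap verification are what let many independent nodes, rather than any single party, agree on which block extends the chain.

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Find a nonce whose SHA-256 digest starts with `difficulty` zero hex digits."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

# Finding the nonce is expensive; checking it takes a single hash, so every
# node can independently verify the winning block.
nonce, digest = proof_of_work("block 42: Alice pays Bob 1 coin")
print(nonce, digest)
```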

Privacy and confidentiality breaches

Blockchains are also vulnerable to the exposure of private and sensitive data. They are designed to be transparent, providing users with as much knowledge about each transaction as possible. However, attackers can take advantage of this transparency to access and share confidential information. Part of the appeal of digital currencies is the anonymity of participants; if transactions can be traced back to individuals, private information is disclosed, discouraging users from choosing digital currencies over their physical counterparts.
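To see why that traceability matters, consider a toy public ledger in Python (the addresses and amounts are invented for illustration). Because every transfer is visible to every observer, anyone can follow funds from address to address until they reach a point, such as an exchange with know-your-customer records, where a real identity is attached.

```python
# Toy public ledger: every transfer is visible to everyone, in chronological order.
ledger = [
    ("addr_A", "addr_B", 1.0),
    ("addr_B", "addr_C", 0.9),
    ("addr_C", "addr_exchange", 0.9),  # the exchange may know C's real identity
]

def trace_forward(start: str) -> None:
    """Follow funds forward from `start` through the public ledger."""
    tainted = {start}
    for sender, receiver, amount in ledger:
        if sender in tainted:
            print(f"{sender} -> {receiver}: {amount}")
            tainted.add(receiver)

trace_forward("addr_A")
```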

Private key compromise

In cryptocurrency, keys are used to authorize transactions, access wallets, and prove ownership of assets. They are encrypted to protect users from theft and unauthorized access to their funds. Even so, some 23 private keys controlling assets worth over $900 million were compromised in 2022. The two main ways in which keys are illegitimately accessed are social engineering and malicious software. Keyloggers, for example, record every input a user makes with their keyboard; when a user types their private key while a keylogger is active on their device, the attacker obtains a copy of it.
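To make the role of the private key concrete, here is a minimal signing sketch using the third-party Python `cryptography` package (the package choice and the transaction string are assumptions for illustration). Whoever holds the private key can authorize a transaction; anyone with the public key can verify it, but a stolen private key lets an attacker sign anything.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Generate a key pair on secp256k1, the curve used by Bitcoin and Ethereum.
private_key = ec.generate_private_key(ec.SECP256K1())
public_key = private_key.public_key()

transaction = b"send 0.5 BTC to addr_X"  # hypothetical transaction payload

# Only the private-key holder can produce this signature.
signature = private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

# Anyone with the public key can check it.
try:
    public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```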

Risks during exchanges

Cryptocurrency exchange platforms allow users to purchase and sell digital assets. They function as a “middleman,” connecting two users in a trade. This makes them one of the most common targets for cybercriminals, as is evident in the relatively recent FTX hack, in which the exchange reported that almost $0.5 billion had been removed in unauthorized transactions. Although this type of attack is rare, cybercriminals have also intercepted transactions in the past, impersonating legitimate exchange platforms so that funds were transferred to them instead of to the authorized recipients.
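One narrow client-side safeguard is to validate an address’s built-in checksum before sending funds; it catches typos and corrupted addresses, though not a well-formed address substituted by an attacker. The sketch below checks Bitcoin’s Base58Check encoding using only Python’s standard library (the sample address is the well-known genesis-block address).

```python
import hashlib

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58decode(s: str) -> bytes:
    """Decode a Base58 string; leading '1' characters become leading zero bytes."""
    n = 0
    for ch in s:
        n = n * 58 + B58_ALPHABET.index(ch)
    pad = len(s) - len(s.lstrip("1"))
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return b"\x00" * pad + body

def address_checksum_ok(address: str) -> bool:
    """Base58Check: the last 4 bytes must equal the double-SHA256 of the rest."""
    raw = b58decode(address)
    payload, checksum = raw[:-4], raw[-4:]
    digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
    return digest[:4] == checksum

print(address_checksum_ok("1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"))  # valid: True
print(address_checksum_ok("1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNb"))  # one-char typo: False
```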

Cybercriminals can also create outright fake platforms that disguise themselves as authentic applications, complete with fake reviews and offers. When partaking in a digital trade, make sure you use a reputable, secure cryptocurrency exchange service. The anonymity of blockchains makes it exceptionally difficult to track cybercriminals and seek justice.

Defects in smart contracts

Smart contracts on the blockchain are programs that complete each side of a transaction. Those involving fund transfers can include a third party that verifies the transfer took place successfully. Because they are based on templates, however, they cannot easily be amended for a particular use. Their code is also extremely complex, making it nearly impossible to identify potential security risks. This is both a benefit and a drawback, since vulnerabilities are harder to discover for attackers and defenders alike.
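As a structural illustration only, here is a toy Python model of an escrow-style smart contract (all names are hypothetical; real contracts run on-chain and are typically written in languages such as Solidity). The key point is that once such logic is deployed, any bug in it controls real funds and cannot simply be patched in place.

```python
class EscrowContract:
    """Toy model of a two-party escrow contract with a third-party verifier."""

    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.funded = False
        self.released = False

    def deposit(self, sender: str, value: int) -> None:
        # The buyer locks funds in the contract.
        if sender != self.buyer or value != self.amount:
            raise ValueError("unauthorized sender or wrong amount")
        self.funded = True

    def release(self, oracle_confirmed: bool) -> str:
        # A third party (the "oracle") attests that delivery happened.
        # On a real chain, a flaw in this check is permanent: the deployed
        # code, bugs and all, is what controls the funds.
        if not self.funded or self.released:
            raise RuntimeError("nothing to release")
        if not oracle_confirmed:
            raise RuntimeError("delivery not confirmed")
        self.released = True
        return f"pay {self.amount} to {self.seller}"

escrow = EscrowContract("buyer_addr", "seller_addr", 100)
escrow.deposit("buyer_addr", 100)
print(escrow.release(oracle_confirmed=True))
```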

Cybersecurity and blockchain

Cybersecurity has proven to be a core concern for blockchain, since the increase in cryptocurrency attacks has led to a colossal drop in the value of digital currencies. Features such as consensus protocols, implemented to make blockchains safer, have become weak points themselves and have facilitated access to private and sensitive information. Cybercriminals are also infecting devices with malicious software to illegitimately access private keys and wallets.
