python-flask-cors-4.0.1-1.fc41

FEDORA-2024-e5558a889a

Packages in this update:

python-flask-cors-4.0.1-1.fc41

Update description:

Automatic update for python-flask-cors-4.0.1-1.fc41.

Changelog

* Mon Jun 3 2024 František Zatloukal <fzatlouk@redhat.com> - 4.0.1-1
- flask-cors-4.0.1 (RHBZ#2279177 and RHBZ#2276153)

How to Stay Safe Against Scams While Traveling

Following a whirlwind year of travel in 2023, 40% of Americans are gearing up for even more adventures in 2024. As the warmth of summer approaches and travel plans start to take shape, it’s crucial to prepare for often overlooked risks that may come up while traveling. The mix of unfamiliar surroundings, increased distraction, and reliance on public Wi-Fi creates an ideal environment for malicious actors to exploit. From impersonation tricks to oversharing on social media, attackers have plenty of ways to target unsuspecting travelers. 

What are the most common scams you should watch out for, and how can you stay safe from them? 

Impersonation Scams: Beware of Who You Trust  

One of the most common social engineering threats while traveling is impersonation scams. Attackers may pose as hotel staff, tour guides, or even fellow travelers to gain access to personal information or valuable belongings. Always verify the identity of individuals before sharing any sensitive information or handing over personal belongings. If someone claims to be an employee of a hotel or a service provider, don’t hesitate to ask for official identification or contact the establishment directly to confirm their identity. 

Public Wi-Fi Risks: Proceed with Caution  

Public Wi-Fi networks are a convenient way to stay connected while traveling, but they also pose significant security risks. Hackers can easily intercept data transmitted over these networks, including login credentials, credit card information, and personal messages. Avoid accessing sensitive accounts or conducting financial transactions while connected to public Wi-Fi. Instead, use a virtual private network (VPN) to encrypt your internet connection and protect your data from prying eyes. 

Oversharing on Social Media: Think Before You Post  

Sharing vacation photos and updates on social media may seem harmless, but it can inadvertently put you at risk. Posting your location in real time or sharing details about your travel itinerary can make you a target for thieves and cybercriminals. Avoid oversharing on social media, especially when it comes to your whereabouts, and consider waiting to post travel updates until you are home.  

Take a deep dive into your privacy settings to ensure that bad actors can’t access your personal information through your social media accounts. Our Social Privacy Manager can do that work for you, automatically adjusting more than 100 privacy settings across all the accounts you choose. 

Phishing Emails and Texts: Stay Vigilant  

Phishing emails and texts are a common tactic used by cybercriminals to trick travelers into revealing sensitive information or downloading malware onto their devices. Be wary of unsolicited messages claiming to be from airlines, hotels, or financial institutions, especially if they ask for personal information or prompt you to click on suspicious links. Verify the legitimacy of any unexpected communications by contacting the sender directly using official contact information obtained from their official website or a trusted source. 

Protecting Your Personal Information: Practical Tips and Strategies

In addition to being aware of the risks, there are proactive steps you can take to protect your personal information before traveling: 

Enable multi-factor authentication on your accounts to add an extra layer of security. 
Use strong, unique passwords for each of your online accounts and consider using a password manager to keep track of them securely.  
Keep your devices up to date with the latest software updates and security patches to protect against known vulnerabilities.  
Be cautious when using ATMs and credit card terminals, and cover your PIN when entering it to prevent shoulder surfing. 
Monitor your financial accounts regularly for any suspicious activity and report any unauthorized transactions immediately. 

By staying informed and vigilant, you can minimize the risk of falling victim to scams while traveling and enjoy a worry-free vacation experience. Remember to trust your instincts and err on the side of caution when encountering unfamiliar situations or individuals.  

Having a complete set of online protection software is like having a team of cyber guardians watching over you on vacation. With the right precautions in place, you can focus on making memories and exploring new destinations without compromising your personal information or security. Safe travels! 

The post How to Stay Safe Against Scams While Traveling appeared first on McAfee Blog.

deepin-qt5integration-5.6.11-7.fc40 deepin-qt5platform-plugins-5.6.12-7.fc40 dwayland-5.25.0-6.fc40 fcitx-qt5-1.2.6-21.fc40 fcitx5-qt-5.1.6-3.fc40 gammaray-3.0.0-6.fc40 kddockwidgets-1.7.0-10.fc40 keepassxc-2.7.8-2.fc40 kf5-akonadi-server-23.08.5-3.fc40 kf5-frameworkintegration-5.115.0-3.fc40 kf5-kwayland-5.115.0-3.fc40 plasma-integration-6.0.5-2.fc40 python-qt5-5.15.10-6.fc40 qadwaitadecorations-0.1.5-4.fc40 qgnomeplatform-0.9.2-15.fc40 qt5-5.15.14-1.fc40 qt5-qt3d-5.15.14-1.fc40 qt5-qtbase-5.15.14-1.fc40 qt5-qtcharts-5.15.14-1.fc40 qt5-qtconnectivity-5.15.14-1.fc40 qt5-qtdatavis3d-5.15.14-1.fc40 qt5-qtdeclarative-5.15.14-1.fc40 qt5-qtdoc-5.15.14-1.fc40 qt5-qtgamepad-5.15.14-1.fc40 qt5-qtgraphicaleffects-5.15.14-1.fc40 qt5-qtimageformats-5.15.14-1.fc40 qt5-qtlocation-5.15.14-1.fc40 qt5-qtmultimedia-5.15.14-1.fc40 qt5-qtnetworkauth-5.15.14-1.fc40 qt5-qtquickcontrols-5.15.14-1.fc40 qt5-qtquickcontrols2-5.15.14-1.fc40 qt5-qtremoteobjects-5.15.14-1.fc40 qt5-qtscript-5.15.14-1.fc40 qt5-qtscxml-5.15.14-1.fc40 qt5-qtsensors-5.15.14-1.fc40 qt5-qtserialbus-5.15.14-1.fc40 qt5-qtserialport-5.15.14-1.fc40 qt5-qtspeech-5.15.14-1.fc40 qt5-qtsvg-5.15.14-1.fc40 qt5-qttools-5.15.14-1.fc40 qt5-qttranslations-5.15.14-1.fc40 qt5-qtvirtualkeyboard-5.15.14-1.fc40 qt5-qtwayland-5.15.14-1.fc40 qt5-qtwebchannel-5.15.14-1.fc40 qt5-qtwebengine-5.15.16-6.fc40 qt5-qtwebkit-5.212.0-0.87alpha4.fc40 qt5-qtwebsockets-5.15.14-1.fc40 qt5-qtwebview-5.15.14-1.fc40 qt5-qtx11extras-5.15.14-1.fc40 qt5-qtxmlpatterns-5.15.14-1.fc40 qt5ct-1.1-24.fc40

FEDORA-2024-2e27372d4c

Packages in this update:

deepin-qt5integration-5.6.11-7.fc40
deepin-qt5platform-plugins-5.6.12-7.fc40
dwayland-5.25.0-6.fc40
fcitx5-qt-5.1.6-3.fc40
fcitx-qt5-1.2.6-21.fc40
gammaray-3.0.0-6.fc40
kddockwidgets-1.7.0-10.fc40
keepassxc-2.7.8-2.fc40
kf5-akonadi-server-23.08.5-3.fc40
kf5-frameworkintegration-5.115.0-3.fc40
kf5-kwayland-5.115.0-3.fc40
plasma-integration-6.0.5-2.fc40
python-qt5-5.15.10-6.fc40
qadwaitadecorations-0.1.5-4.fc40
qgnomeplatform-0.9.2-15.fc40
qt5-5.15.14-1.fc40
qt5ct-1.1-24.fc40
qt5-qt3d-5.15.14-1.fc40
qt5-qtbase-5.15.14-1.fc40
qt5-qtcharts-5.15.14-1.fc40
qt5-qtconnectivity-5.15.14-1.fc40
qt5-qtdatavis3d-5.15.14-1.fc40
qt5-qtdeclarative-5.15.14-1.fc40
qt5-qtdoc-5.15.14-1.fc40
qt5-qtgamepad-5.15.14-1.fc40
qt5-qtgraphicaleffects-5.15.14-1.fc40
qt5-qtimageformats-5.15.14-1.fc40
qt5-qtlocation-5.15.14-1.fc40
qt5-qtmultimedia-5.15.14-1.fc40
qt5-qtnetworkauth-5.15.14-1.fc40
qt5-qtquickcontrols2-5.15.14-1.fc40
qt5-qtquickcontrols-5.15.14-1.fc40
qt5-qtremoteobjects-5.15.14-1.fc40
qt5-qtscript-5.15.14-1.fc40
qt5-qtscxml-5.15.14-1.fc40
qt5-qtsensors-5.15.14-1.fc40
qt5-qtserialbus-5.15.14-1.fc40
qt5-qtserialport-5.15.14-1.fc40
qt5-qtspeech-5.15.14-1.fc40
qt5-qtsvg-5.15.14-1.fc40
qt5-qttools-5.15.14-1.fc40
qt5-qttranslations-5.15.14-1.fc40
qt5-qtvirtualkeyboard-5.15.14-1.fc40
qt5-qtwayland-5.15.14-1.fc40
qt5-qtwebchannel-5.15.14-1.fc40
qt5-qtwebengine-5.15.16-6.fc40
qt5-qtwebkit-5.212.0-0.87alpha4.fc40
qt5-qtwebsockets-5.15.14-1.fc40
qt5-qtwebview-5.15.14-1.fc40
qt5-qtx11extras-5.15.14-1.fc40
qt5-qtxmlpatterns-5.15.14-1.fc40

Update description:

Qt 5.15.14 bugfix update.

Fix CVE-2024-36048

Seeing Like a Data Structure

Technology was once simply a tool—and a small one at that—used to amplify human intent and capacity. That was the story of the industrial revolution: we could control nature and build large, complex human societies, and the more we employed and mastered technology, the better things got. We don’t live in that world anymore. Not only has technology become entangled with the structure of society, but we also can no longer see the world around us without it. The separation is gone, and the control we thought we once had has revealed itself as a mirage. We’re in a transitional period of history right now.

We tell ourselves stories about technology and society every day. Those stories shape how we use and develop new technologies as well as the new stories and uses that will come with them. They determine who’s in charge, who benefits, who’s to blame, and what it all means.

Some people are excited about the emerging technologies poised to remake society. Others are hoping for us to see this as folly and adopt simpler, less tech-centric ways of living. And many feel that they have little understanding of what is happening and even less say in the matter.

But we never had total control of technology in the first place, nor is there a pretechnological golden age to which we can return. The truth is that our data-centric way of seeing the world isn’t serving us well. We need to tease out a third option. To do so, we first need to understand how we got here.

Abstraction

When we describe something as being abstract, we mean it is removed from reality: conceptual and not material, distant and not close-up. What happens when we live in a world built entirely of the abstract? A world in which we no longer care for the messy, contingent, nebulous, raw, and ambiguous reality that has defined humanity for most of our species’ existence? We are about to find out, as we begin to see the world through the lens of data structures.

Two decades ago, in his book Seeing Like a State, anthropologist James C. Scott explored what happens when governments, or those with authority, attempt and fail to “improve the human condition.” Scott found that to understand societies and ecosystems, government functionaries and their private sector equivalents reduced messy reality to idealized, abstracted, and quantified simplifications that made the mess more “legible” to them. With this legibility came the ability to assess and then impose new social, economic, and ecological arrangements from the top down: communities of people became taxable citizens, a tangled and primeval forest became a monoculture timber operation, and a convoluted premodern town became a regimented industrial city.

This kind of abstraction was seemingly necessary to create the world around us today. It is difficult to manage a large organization, let alone an interconnected global society of eight billion people, without some sort of structure and means to abstract away details. Facility with abstraction, and abstract reasoning, has enabled all sorts of advancements in science, technology, engineering, and math—the very fields we are constantly being told are in highest demand.

The map is not the territory, and no amount of intellectualization will make it so. Creating abstract representations by necessity leaves out important detail and context. Inevitably, as Scott cataloged, the use of large-scale abstractions fails, leaving leadership bewildered at the failure and ordinary people worse off. But our desire to abstract never went away, and technology, as always, serves to amplify intent and capacity. Now, we manifest this abstraction with software. Computing supercharges the creative and practical use of abstraction. This is what life is like when we see the world the way a data structure sees the world. These are the same tricks Scott documented. What has changed is their speed and their ubiquity.

Each year, more students flock to computer science, a field with some of the highest-paying, most sought-after jobs. Nearly every university’s curriculum immediately introduces these students to data structures. A data structure enables a programmer to organize data—about anything—in a way that is easy to understand and act upon in software: to sort, search, structure, organize, or combine that data. A course in data structures is exercise after exercise in building and manipulating abstractions, ones that are typically entirely divorced from the messy, context-laden, real-world data that those data structures will be used to store.
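The course-exercise flavor of abstraction is easy to sketch. In the toy Python below (the record type, fields, and data are all invented for illustration), a neighborhood cafe is reduced to the handful of fields that make it sortable and searchable; everything the fields leave out simply ceases to exist for the program:

```python
from dataclasses import dataclass

# Hypothetical record type: messy reality reduced to a few comparable fields.
@dataclass
class Cafe:
    name: str
    rating: float      # a 5-star average stands in for "how good is it?"
    price_level: int   # 1-4 stands in for "can I afford it?"

cafes = [
    Cafe("Corner Diner", 4.2, 1),
    Cafe("Ghost Kitchen #7", 3.9, 2),
    Cafe("Le Abstrait", 4.8, 4),
]

# Once reality fits the structure, sorting and searching are one-liners.
best_first = sorted(cafes, key=lambda c: c.rating, reverse=True)
cheap_spots = [c for c in cafes if c.price_level == 1]

print([c.name for c in best_first])
```

The exercise is trivially easy precisely because the hard part, deciding what `rating` and `price_level` should mean and what gets discarded to produce them, happened before the first line of code.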

As students graduate, most join companies that demand these technical skills—universally seen as essential to computer science work—and that see themselves as “changing the world,” often with even grander ambitions than the prosaic aims of the state functionaries cataloged by Scott.

Engineers are transforming data about the world around us into data structures, at massive scale. They then employ another computer science trick: indirection. This is the ability to break apart some sociotechnical process—to “disrupt”—and replace each of the now-broken pieces with abstractions that can interface with each other. These data structures and abstractions are then combined in software to take action on this view of reality, action that increasingly has a human and societal dimension.

Here’s an example. When the pandemic started and delivery orders skyrocketed, technologists saw an opportunity: ghost kitchens. No longer did the restaurant a customer was ordering from actually have to exist. All that mattered was that the online menu catered to customer desires. Once ordered, the food had to somehow get sourced, cooked, and packaged, sight unseen, and be delivered to the customer’s doorstep. Now, lots of places we order food from are subject to this abstraction and indirection, more like Amazon’s supply chain than a local diner of yore.
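In programming terms, the ghost kitchen is indirection via an interface: the ordering app depends only on an abstract contract, so the implementation behind it can be swapped without the customer noticing. A minimal Python sketch (the class and method names here are invented):

```python
from typing import Protocol

class Kitchen(Protocol):
    """The abstract piece the ordering app actually depends on."""
    def prepare(self, dish: str) -> str: ...

class LocalDiner:
    def prepare(self, dish: str) -> str:
        return f"{dish}, cooked at the diner on Main St"

class GhostKitchen:
    def prepare(self, dish: str) -> str:
        return f"{dish}, assembled sight unseen in shared facility #12"

def place_order(kitchen: Kitchen, dish: str) -> str:
    # The menu the customer sees never reveals which implementation runs.
    return kitchen.prepare(dish)

print(place_order(LocalDiner(), "pad thai"))
print(place_order(GhostKitchen(), "pad thai"))
```

Either implementation satisfies the contract, which is exactly what makes the restaurant itself optional.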

Facebook sees its users like a data structure when it classifies us into ever more precise interest categories, so as to better sell our attention to advertisers. Spotify sees us like a data structure when it tries to play music it thinks we will like based on the likes of people who like some of the same music we like. TikTok users often exclaim and complain that its recommendations seem to uncannily tap into latent desires and interests, leading many to perform psychological self-diagnosis using their “For You” page.
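The Spotify move, recommending music based on the likes of people who like what you like, is the core idea of collaborative filtering. A toy Python sketch, with users and tastes invented and similarity reduced to raw set overlap (production systems use far richer models):

```python
# Invented listening data: each user is already just a set in a data structure.
likes = {
    "ana":  {"jazz", "bossa nova", "ambient"},
    "ben":  {"jazz", "bossa nova", "techno"},
    "cruz": {"metal", "techno"},
}

def recommend(user: str) -> set[str]:
    # Find the neighbor whose likes overlap most with the user's,
    # then suggest whatever that neighbor likes that the user doesn't.
    mine = likes[user]
    neighbor = max(
        (other for other in likes if other != user),
        key=lambda other: len(likes[other] & mine),
    )
    return likes[neighbor] - mine

print(recommend("ana"))  # ben is the closest neighbor, so: {'techno'}
```

Even at toy scale the essay’s point is visible: the recommendation can only ever come from what fit into the sets.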

Data structures dominate our world and are a byproduct of the rational, modern era, but they are ushering in an age of chaos. We need to embrace and tame, but not extinguish, this chaos for a better world.

Machines

Historian of technology Lewis Mumford once wrote that clocks enabled the division of time, and that enabled the regimentation of society that made the industrial revolution possible. This transformation, once fully underway around the world in the 20th century, fundamentally changed the story of society. It shifted us away from a society centered around interpersonal dynamics and communal interactions to one that was systematic and institutional.

We used to take the world in and interpret it through human eyes. The world before the industrial revolution wasn’t one in which ordinary people interacted with large-scale institutions or socio-technical systems. It wasn’t possible for someone to be a “company man” before there was a corporate way of doing things that in theory depended only on rules, laws, methods, and principles, not on the vicissitudes of human behavior.

Since the beginning of the industrial revolution, workers and the natural world have been subject to abstraction. This involves the use of abstract reason over social preferences. Knowledge about the world was no longer in our heads but out in the world. So we got newspapers, instruction manuals, bylaws, and academic journals. And we should be clear: this was largely an improvement. The era of systems—of modernity—was an improvement on what came before. It’s better for society to have laws rather than rulers, better for us to lean on science than superstition. We can’t and shouldn’t go back.

The tools of reason enabled the “high modernists,” as Scott calls them, to envision a world shaped entirely by reason. But such reason was and is never free of personal biases. It always neglects the messiness of reality and the tacit and contextual knowledge and skill that is needed to cope with that mess—and this is where trouble began to arise.

Workers were and are treated as cogs in the industrial machine, filling a narrow role on an assembly line or performing a service job within narrow parameters. Nature is treated as a resource for human use, a near-infinite storehouse of materials and dumping ground for wastes. Even something as essential and grounding as farming is seen as mechanistic—”a farm is a factory in a remote area,” as put by one John Deere executive—where plants are machines that take in nitrogen, phosphorus, and potassium and produce barely edible dent corn. There’s even a popular myth that eminent business theorist W.E. Deming said: “If you can’t measure it, you can’t manage it”—lending credence to the measurement and optimization mindset.

The abstractions nearly write themselves. Though, leaving nothing to chance, entrepreneurs and their funders have flocked to translating these precomputing abstractions for the age of data structures. This is happening in both seen and unseen ways. Uber and Lyft turned people into driving robots that follow algorithmic guidance from one place to another. Amazon made warehouse workers perform precisely defined tasks in concert with literal robots. Agtech companies turn farms into data structures to then optimize the application of fertilizer, irrigation water, pesticides, and herbicides.

Beyond simply dividing time, computation has enabled the division of information. This is embodied at the lowest levels—bits and packets of data flowing through the Internet—all the way up to the highest levels, where many jobs can be described as a set of information-processing tasks performed by one worker only to be passed along to another. But this sort of computing—just worn-out optimization techniques dating back to last century’s Taylorism—didn’t move us into the unstable world we’re in today. It was a different sort of computation that did that.

Computation

Today we’re in an era where computing not only abstracts our world but also defines our inner worlds: the very thoughts we have and the ways we communicate.

It is this abstracted reality that is presented to us when we open a map on our phones, search the Internet, or “engage” on social media. It is this constructed reality that shapes the decisions businesses make every day, governs financial markets, influences geopolitical strategy, and increasingly controls more of how global society functions. It is this synthesized reality we consume when the answers we seek about the world are the entire writings of humanity put into a blender and strained out by a large language model.

The first wave of this crested a decade ago only to crash down on us. Back then, search engines represented de facto reality, and “just Google it” became a saying: whatever the search engine said was right. In some sense, though, that was a holdover from the previous “modern” era, with a large data structure—the search engine’s vast database—replacing some classic source of truth such as the news media or the government. We all had a hope that with enough data, and algorithms to sift through it all, we could have a simple technological abstraction over the messiness of reality with a coherent answer no matter what the question was.

As we move toward the future promised by some technologists, our human-based view of the world and that of the data structures embedded in our computing devices will converge. Why bother to make a product at all when you can just algorithmically generate thousands of “ghost products” in the hopes that someone will buy?

Scott’s critiques of datafication remain. We are becoming increasingly aware that things are continuous spectra, not discrete categories. Writing about the failure of contact tracing apps, activist Cory Doctorow said, “We can’t add, subtract, multiply or divide qualitative elements, so we just incinerate them, sweep up the dubious quantitative residue that remains, do math on that, and simply assert that nothing important was lost in the process.”

A pair of augmented-reality glasses may no longer let us see the world unfiltered by data structures but instead dissect and categorize every experience. A person on the street is no longer an individual but a member of a subcategory of “person” as determined by an AI classifier. A street is no longer the place you grew up but an abstraction from a map. And a local cafe is no longer a community hangout but a data structure containing a menu, a list of reservation options, and a hundred 5-star ratings.

Whether as glasses we look through or simply as screens on our devices, reality will be augmented by the data structures that categorize the world around us. Just as search engines caused the rise of SEO, where writers tweak their writing to attract search engines rather than human readers, this augmented reality will result in its own optimizations. We may be seeing the first signs of this with “Thai Food Near Me” as the literal name of businesses that are trying to satisfy the search function of mapping apps. Soon, even the physical form of things in the world may be determined in a coevolution with technology, where the form of things in the real world, even a dish at a restaurant, is chosen by what will look best when seen through our technological filters. It’s a data layer on top of reality. And the problems get worse when the relative importance of the data and reality flip. Is it more important to make a restaurant’s food taste better, or just more Instagrammable?

People are already working to exploit the data structures and algorithms that govern our world. Amazon drivers hang smartphones in trees to trick the system. Songwriters put their catchy choruses near the beginning to exploit Spotify’s algorithms. And podcasters deliberately mispronounce words because people comment with corrections and those comments count as “engagement” to the algorithms.

These hacks are fundamentally about the breakdown of “the system.” (We’re not suggesting that there’s a single system that governs society but rather a mess of systems that interact and overlap in our lives and are more or less relevant in particular contexts.) Systems work according to rules, either ones made consciously by people or, increasingly, automatically determined by data structures and algorithms. But systems of rules are, by their nature, trying to create a map for a messy territory, and rules will always have loopholes that can be taken advantage of.

The challenge with previous generations of tech—and the engineers who built them—is that they got stuck in the rigidity of systems. That’s what the company man was all about: the processes of the company, of Taylorism, of the McKinsey Way, of Scrum software development, of effective altruism, and of so many more. These all promised certainty, control, optimality, correctness, and sometimes even virtue: all just manifestations of a rigid and “rational” way of thinking and solving problems. Making systems work in this way at a societal level has failed. This is what Scott was saying in his seminal book. It was always doomed to fail.

Fissures

Seeing like a state was all about “legibility.” But the world is too difficult to make legible today. That’s where data structures, algorithms, and AI come in: humans no longer need to manually create legibility. Nor do humans even need to consume what is made legible. Raw data about the world can be fed into new AI tools to create a semblance of legibility. We can then have yet more automated tools act upon this supposed representation of the world, soon with real-life consequences. We’re now delegating the process of creating legibility to technology. Along the way, we’ve made it approximate: legible to someone or something else but not to the person who is actually in charge.

Right now, we’re living through the last attempts at making those systems work, with a perhaps naive hope and a newfound belief in AI and the data science that fuels it. The hope is that, because we have better algorithms that can help us make sense of even more data, we can somehow succeed at making systems work where past societies have failed. But it’s not going to work because it’s the mode of thought that doesn’t work.

The power to see like a state was intoxicating for government planners, corporate efficiency experts, and adherents to high modernism in general. But modern technology lets us all see like a state. And with the advent of AI, we all have the power to act on that seeing.

AI is made up of data structures that enable a mapping from the messy multidimensional reality that we inhabit to categories and patterns that are useful in some way. Spotify may organize songs into clever new musical genres invented by its AI, but it’s still an effort to create legibility out of thin air. We’re sending verbose emails with AI tools that will just be summarized by another AI. These are all just concepts, whether they’re created by a human mind or by a data structure or AI tool. And while concepts help us understand reality, they aren’t reality itself.

The problem we face is at once simple to explain and fiendishly difficult to do something about. It’s the interplay of nebulosity and pattern, as scholar David Chapman puts it: reality is nebulous (messy), but to get on with our lives, we see patterns (make sense of it in context-dependent ways). Generally, we as people don’t have strict rules for how to make breakfast, and we don’t need the task explained to us when a friend asks us for a cup of coffee. But that’s not the case for a computer, or a robot, or even a corporate food service, which can’t navigate the intricacies and uncertainties of the real world with the flexibility we expect of a person. And at an even larger scale, our societal systems, whether we’re talking about laws and governments or just the ways our employers expect us to get our jobs done, don’t have that flexibility built into them. We’ve seen repeatedly how breaking corporate or government operations into thousands of disparate, rigid contracts ends in failure.

Decades ago, the cracks in these rational systems were only visible to a few, left for debate in the halls of universities, boardrooms, and militaries. Now, nebulosity, complexity, and the breakdown of these systems are all around for everyone to see. When teenagers are training themselves to see the world the way social-media ranking algorithms do, and can notice a change in real time, that’s how we know that the cracks are pervasive.

The complexity of society today, and the failure of rigid systems to cope, is scary to many. Nobody’s in charge of, or could possibly even understand, all these complex technological systems that now run our global society. As scholar Brian Klaas puts it, “the cognitive shortcuts we use to survive are mismatched with the complex reality we now navigate.” For some, this threat demands dramatic action, such as replacing some big system we have—say, capitalism—with an alternative means of organizing society. For others, it demands throwing out all of modernity to go back to a mythical, simpler golden age: one with more human-scale systems of order and authority, which they imagine was somehow better. And yet others see the cracks in the system but hope that with more data and more tweaks, it can be repaired and our problems will be definitively solved.

However, it’s not this particular system that failed but rather the mode of society that depends on rigid systems to function. Replacing one rigid system with another won’t work. There’s certainly no golden age to return to. And simpler forms of society aren’t options for us at the scale of humanity today. So where does that leave us?

Tension

The ability to see like a data structure afforded us the technology we have today. But it was built for and within a set of societal systems—and stories—that can’t cope with nebulosity. Worse still is the transitional era we’ve entered, in which overwhelming complexity leads more and more people to believe in nothing. That way lies madness. Seeing is a choice, and we need to reclaim that choice. However, we need to see things and do things differently, and build sociotechnical systems that embody this difference.

This is best seen through a small example. In our jobs, many of us deal with interpersonal dynamics that sometimes overwhelm the rules. The rules are still there—those that the company operates by and laws that it follows—meaning there are limits to how those interpersonal dynamics can play out. But those rules are rigid and bureaucratic, and most of the time they are irrelevant to what you’re dealing with. People learn to work with and around the rules rather than follow them to the letter. Some of these might be deliberate hacks, ones that are known, and passed down, by an organization’s workers. A work-to-rule strike, or quiet quitting for that matter, is effective at slowing a company to a halt because work is never as routine as schedules, processes, leadership principles, or any other codified rules might allow management to believe.

The tension we face is that on an everyday basis, we want things to be simple and certain. But that means ignoring the messiness of reality. And when we delegate that simplicity and certainty to systems—either to institutions or increasingly to software—they feel impersonal and oppressive. People used to say that they felt like large institutions were treating them like a number. For decades, we have literally been numbers in government and corporate data structures.

Breakdown

As historian Jill Lepore wrote, we used to be in a world of mystery. Then we began to understand those mysteries and use science to turn them into facts. And then we quantified and operationalized those facts through numbers. We’re currently in a world of data—overwhelming, human-incomprehensible amounts of data—that we use to make predictions even though that data isn’t enough to fully grapple with the complexity of reality.

How do we move past this era of breakdown? It’s not by eschewing technology. We need our complex socio-technical systems. We need mental models to make sense of the complexities of our world. But we also need to understand and accept their inherent imperfections. We need to make sure we’re avoiding static and biased patterns—of the sort that a state functionary or a rigid algorithm might produce—while leaving room for the messiness inherent in human interactions. Chapman calls this balance “fluidity,” where society (and really, the tech we use every day) gives us the disparate things we need to be happy while also enabling the complex global society we have today.

These things can be at odds. As social animals, we need the feeling of belonging, like being part of a small tribe. However, at the same time, we have to “belong” in a technological, scientific, and institutional world of eight billion interconnected people. To feel connected to those around us, we need access to cultural creativity, whether it be art, music, literature, or forms of entertainment and engagement that have yet to be invented. But we also need to avoid being fragmented into nanogenres where we can’t share that creativity and cultural appreciation with others. We must be able to be who we are and choose who we associate with on an ever-changing basis while being able to play our parts to make society function and feel a sense of responsibility and accomplishment in doing so. And perhaps most importantly, we need the ability to make sense of the torrent of information that we encounter every day while accepting that it will never be fully coherent, nor does it need to be.

This isn’t meant to be idealistic or something for the distant future. It’s something we need now. How well civilization functions in the coming years depends upon making this a reality. On our present course, we face the nihilism that comes with information overload, careening from a world that a decade ago felt more or less orderly to one in which nothing has any clear meaning or trustworthiness. It’s in an environment like this that polarization, conspiracies, and misinformation thrive. This leads to a loss of societal trust. Our institutions and economic systems are based upon trust. We’ve seen what societies look like when trust disappears: ordinary social systems fail, and when they do work, they are more expensive, capricious, violent, and unfair.

The challenge for us is to think about how we can create new ways of being and thinking that move us—and not just a few of us but everyone—to cope at first, and later to thrive, in this world we're in.

Fluidity

There’s no single solution. It’ll be a million little things, but they all will share the overall themes of resilience in the form of fluidity. Technology’s role in this is vital, helping us make tentative, contextual, partial sense of the complex world around us. When we take a snapshot of a bird—or listen to its song—with an app that identifies the species, it is helping us gain some limited understanding. When we use our phones to find a park, local restaurant, or even a gas station in an unfamiliar city, it is helping us make our way in a new environment. On vacation in France, one of us used our phone’s real-time translation feature to understand what our tour guide was saying. Think of how we use weather apps, fitness apps, or self-guided museum tour apps to improve our lives. We need more tools like this in every context to help us to understand nuance and context beyond the level we have time for in our busy lives.

It’s not enough to have software, AI or otherwise, interpret the world for us. What we need is the ability to seamlessly navigate all the different contexts in our lives. Take, for instance, the problem of judging whether something seen online is true. This was already tricky, and it is now fiendishly difficult, with the Internet, social media, and generative AI all laden with plausible untruths. But what does “true” mean, anyway? It is as wrong to believe in a universal, singular, objective truth that holds in all situations as it is to not know what to believe and hold everything equally false (or true). Both options give propagandists a leg up.

Instead, we need fluidity: in Chapman’s terms, to be able to always ask, “In what sense?” Let’s say you see a video online of something that doesn’t seem physically possible and ask, “Is this real?” A useful technology would help you ask, “In what sense?” Maybe it’s something done physically, with no trickery involved, and it’s just surprising. Maybe it’s a magic trick, or real as in created for a TV show promotion, but not actually something that happened in the physical world. Maybe it was created by a movie special effects team. Maybe it’s propaganda created by a nation state. Sorting through contexts like this can be tedious, and while we intuitively do it all the time, in a technologically complex world we could use some help. It’s important to enable people to continue to communicate and interact in ways that make us feel comfortable, not completely driven either by past social custom or by algorithms that optimize for engagement. Think WhatsApp groups where people just talk, not Facebook groups that are mediated and controlled by Meta.

Belonging is important, and its absence breeds uncertainty and distrust. There are lessons we can learn from nontechnological examples. For example, Switzerland has a remarkable number of “associations”—for everything from business groups to bird-watching clubs—and a huge number of Swiss residents take part. This sort of thing was once part of American culture but declined dramatically over the 20th century, as documented in Robert Putnam’s classic book Bowling Alone. Technology can enable dynamic new ways for people to associate as the online and offline worlds fuse—think of the Internet’s ability to help people find each other—though it must avoid the old mindset of optimization at all costs.

We all struggle with life in our postmodern society, that unplanned experiment of speed, scale, scope, and complexity never before seen in human history. Technology can help by bridging what our minds expect with how systems work. What if every large institution, whether a government or a corporation, were to let us interact with it not on its terms, in its bureaucratic language and with all the complexity that large systems entail, but through computational tools that use natural language, understand context and nuance, and can still interface with the data structures that make its large systems tick? There are some promising early prototypes, such as tools that simplify the process of filling out tedious paperwork. That might feel small, almost trivial. But refined, and in aggregate, this could represent a sea change in how we interact with large systems. They would come to feel no longer like impersonal and imposing bureaucracies but like enablers of a functioning and flourishing society.

And it’s not all about large scale either. Scale isn’t always desirable; as Bill McKibben wrote in Eaarth, we’d probably be better off with the Fortune 500,000 than the Fortune 500. Scale brings with it the ills of Seeing Like a State; the authoritarian high modernist mindset takes over at large scale. And while large organizations can exist, they can’t be the only ones with access to, or the ability to afford, new technologies. Enabling the dynamic creation and destruction of new organizations and new types of organization—and legal and technical mechanisms to prevent lock-in and to prevent enclosure of public commons—will be essential to keep this new fluid era thriving. We can create new “federated” networks of organizations and social groups, like we’re seeing in the open social web of Mastodon and similar technologies, ones where local groups can have local rules that differ from, but do not conflict with, their participation in the wider whole.

This shift is not just about how society will work but also how we see ourselves. We’re all getting a bit more used to the idea of having multiple identities, and some of us have gotten used to having a “portfolio career” that is not defined by a single hat that we wear. While today there is often economic precarity involved with this way of living, there need not be, and the more we can all do the things that are the best expressions of ourselves, the better off society will be.

Ahead

As Mumford wrote in his classic history of technology, “The essential distinction between a machine and a tool lies in the degree of independence in the operation from the skill and motive power of the operator.” A tool is controlled by a human user, whereas a machine does what its designer wanted. As technologists, we can build tools, rather than machines, that flexibly allow people to make partial, contextual sense of the online and physical world around them. As citizens, we can create meaningful organizations that span our communities but without the permanence (and thus overhead) of old-school organizations.

Seeing like a data structure has been both a blessing and a curse. Increasingly, it feels like an avalanche, an out-of-control force that will reshape everything in its path. But it’s also a choice, and there is a different path we can take. The job of enabling a new society, one that accepts the complexity and messiness of our current world without being overwhelmed by it, is one all of us can take part in. There is a different future we can build, together.

This essay was written with Barath Raghavan, and originally appeared on the Harvard Kennedy School Belfer Center’s website.

Read More

AI Will Increase the Quantity—and Quality—of Phishing Scams

Read Time:30 Second

A piece I coauthored with Fredrik Heiding and Arun Vishwanath in the Harvard Business Review:

Summary. Gen AI tools are rapidly making these emails more advanced, harder to spot, and significantly more dangerous. Recent research showed that 60% of participants fell victim to artificial intelligence (AI)-automated phishing, which is comparable to the success rates of non-AI-phishing messages created by human experts. Companies need to: 1) understand the asymmetrical capabilities of AI-enhanced phishing, 2) determine the company or division’s phishing threat severity level, and 3) confirm their current phishing awareness routines.

Here’s the full text.

Read More

Security Testing in Software Development: Assessing Vulnerabilities and Weaknesses

Read Time:6 Minute, 31 Second

The content of this post is solely the responsibility of the author. LevelBlue does not adopt or endorse any of the views, positions, or information provided by the author in this article.

The critical role of security testing within software development cannot be overstated. From protecting personal information to ensuring that critical infrastructure remains unbreachable, security testing serves as the sentry against a multitude of cyber threats.

Vulnerabilities and design weaknesses within software are like hidden fault lines; they may remain unnoticed until they cause significant damage. These flaws can compromise sensitive data, allow unauthorized access, and disrupt service operations. The repercussions extend beyond the digital world. They can lead to tarnished reputations, legal penalties, and, in extreme cases, endangerment of lives. Understanding these potential impacts underscores the crucial role of security testing as a protective measure.

Security testing functions like a health check-up for software, identifying vulnerabilities in much the same way a doctor’s examination would. Being proactive rather than reactive is essential here. It is always better to prevent than to cure. Security testing transcends the mere act of box-ticking; it is a vital, multi-layered process that protects both the integrity of the software and the privacy of its users. And it is not only about finding faults but also about instilling a culture of security within the development lifecycle.

Understanding Security Testing

At its core, the primary role of security testing is to identify and help fix security flaws within a system before they can be exploited. Think of it as a comprehensive evaluation process that simulates real-world attacks, designed to ensure that the software can withstand and counter a variety of cybersecurity threats.

By conducting security testing, developers can provide assurance to investors and users that their software is not only functional but also secure against different attacks.

There is a diverse arsenal of methodologies available for security testing:

1) Penetration Testing

Penetration testing, also known as ethical hacking, entails conducting simulated cyber-attacks on computer systems, networks, or web applications to uncover vulnerabilities that could be exploited. Security experts use pentest platforms to act as attackers, trying to breach the system’s defenses with various techniques. This method uncovers real-world weaknesses as well as the potential impact of an attack on the system’s resources and data.
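As a toy illustration of the kind of reconnaissance a penetration tester might automate, the sketch below (plain Python sockets; the names and port range are our own, and it probes only a listener it creates itself, so it is safe to run) checks which TCP ports accept connections. Real pentest platforms do far more than this; the point is only the probing idea.

```python
import socket

def open_ports(host, ports, timeout=0.2):
    """Return the subset of `ports` on `host` that accept TCP connections."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

if __name__ == "__main__":
    # Create our own listener so the scan has a known-open port to find.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("127.0.0.1", 0))  # the OS picks a free port
    listener.listen(1)
    port = listener.getsockname()[1]
    print(open_ports("127.0.0.1", [port]))  # the listener's port is reported open
    listener.close()
```

A production scanner would add concurrency, service fingerprinting, and—crucially—written authorization for every target it touches.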

2) Code Review

A code review is a systematic examination of the application source code to detect security flaws, bugs, and other errors that might have been overlooked during the initial development phases. It involves manually reading through the code or using automated tools to ensure compliance with coding standards and to check for security vulnerabilities. This process helps in maintaining a high level of security by ensuring that the code is clean, efficient, and robust against cyber threats.

3) Vulnerability Assessment

Unlike penetration testing, which attempts to exploit vulnerabilities, vulnerability assessment focuses on listing potential vulnerabilities without simulating attacks. Tools and software are used to scan systems and software to detect known security issues, which are then cataloged and analyzed so that appropriate mitigation strategies can be developed. This methodology is crucial for maintaining an up-to-date security posture against known vulnerabilities.
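At heart, a vulnerability assessment is an inventory comparison: enumerate what is installed, then match it against a catalog of known issues. The sketch below illustrates just that matching step; the advisory catalog and package names are fictional (real scanners consume feeds such as the NVD).

```python
# Hypothetical advisory catalog: package -> versions known to be vulnerable.
# These entries are made up for illustration only.
KNOWN_VULNERABLE = {
    "examplelib": {"1.0.0", "1.0.1"},
    "widgetkit": {"2.3.0"},
}

def assess(installed):
    """Return (package, version) pairs that appear in the advisory catalog.

    `installed` maps package names to the version currently deployed.
    """
    findings = []
    for package, version in installed.items():
        if version in KNOWN_VULNERABLE.get(package, set()):
            findings.append((package, version))
    return findings

# Example: one package matches an advisory, the other is on a clean version.
print(assess({"examplelib": "1.0.1", "widgetkit": "2.4.0"}))
```

Each finding would then be cataloged, prioritized, and fed into a mitigation plan, as described above.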

Assessing Weaknesses in Software Development

It is important to understand the difference between software vulnerabilities and weaknesses. Vulnerabilities refer to specific points in an application that can be exploited, while weaknesses are more systemic and often result from suboptimal coding practices or design flaws.

Imagine a castle with a few cracked stones in its otherwise sturdy walls and a passage layout so confusing it disorients the castle’s own defenders. In software terms, the cracked stones are specific vulnerabilities, while the confusing layout reflects underlying weaknesses. Although weaknesses may not serve as direct entry points for attacks, they can act as a breeding ground for vulnerabilities and amplify their impact, significantly compromising the security of the software.

Identifying Software Vulnerabilities

Some security flaws frequently challenge the integrity of computer systems and apps. At the top of this list, common vulnerabilities like SQL injection and cross-site scripting (XSS) stand out – they are the bane of developers and a boon for cyber attackers.

SQL injection enables attackers to modify the queries an application sends to its database, much as a thief might tamper with a lock to slip into your data’s home unnoticed. Equally troubling is cross-site scripting, which occurs when attackers insert malicious scripts into the content of websites.
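To make the mechanics concrete, here is a minimal sketch (Python’s in-memory sqlite3 module, with a fictional table and data) of how string-built SQL lets an attacker rewrite a query’s meaning, and how a parameterized query prevents it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

payload = "' OR '1'='1"  # classic injection string

# VULNERABLE: splicing the payload into the SQL turns the WHERE clause
# into a tautology that matches every row.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{payload}'"
).fetchall()

# SAFE: a parameterized query treats the payload as a literal value,
# so it matches no user actually named "' OR '1'='1".
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (payload,)
).fetchall()

print(unsafe)  # every user leaks: [('alice',), ('bob',)]
print(safe)    # []
```

The same lesson—never build queries (or HTML) by pasting untrusted strings—is exactly what code review and SAST tools look for.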

History is marked by high-profile security breaches that serve as reminders of the significant risks involved. Take, for example, the infamous 2017 Equifax breach, in which the personal data of about 147 million users was compromised due to overlooked vulnerabilities. Then there is the Heartbleed bug of 2014. This security flaw allowed cybercriminals to extract sensitive information from the memory systems of millions of web servers. More recently, attacks and vulnerabilities in ConnectWise remote-access software and Citrix NetScalers have been disclosed.

These incidents are not merely tales to frighten cybersecurity novices; they are real-life lessons that emphasize the critical need for proactive vigilance. The number of vulnerabilities is increasing. It is like a snowball effect, and dealing with them is becoming increasingly difficult.

Security Testing Best Practices

Securing a product against potential threats is not a one-time job. It is an ongoing commitment. This involves establishing best practices that integrate into the development culture. Let’s explore these practices below:

By introducing threat modeling into the design stage, companies can take proactive steps to address security issues and construct more resilient systems. Threat modeling involves evaluating the likelihood of potential threats and then prioritizing them based on their likely impact. Subsequently, organizations implement appropriate countermeasures to mitigate the risks.
Effective security testing is not accidental. It is the result of careful planning and execution. One fundamental practice is to integrate security testing early in the software development lifecycle. By doing so, software development companies can identify and fix security problems before they become entrenched in the codebase.
Ensure that all software and systems are configured securely by following industry best practices, such as disabling unnecessary services, applying security patches promptly, and implementing strong access controls.
Another crucial practice is to employ a variety of testing tools and methods, such as dynamic application security testing (DAST), static application security testing (SAST), and interactive application security testing (IAST), to find different types of vulnerabilities.
Additionally, automating security tests can help maintain a consistent standard of security while freeing up human resources for complex analysis and decision-making tasks.
Digital environments are continually evolving, with new threats surfacing on a regular basis. Continuous monitoring and regular updates are vital these days. Employing real-time monitoring tools and setting up automated alerts for suspicious activities can greatly enhance a team’s ability to respond quickly to potential breaches. Regularly updating security measures to combat new threats and conducting regular code reviews are essential practices.
The most secure systems often emerge from collaborative efforts. When developers and security professionals collaborate, they bring diverse perspectives and expertise, crafting a more robust defense strategy. This collaboration can manifest in various ways: regular meetings to address security issues, joint training sessions to keep both teams abreast of the latest security trends, and cross-functional workshops to cultivate a shared understanding of security objectives and methods.
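The automation practice above can be as lightweight as a security regression test in the ordinary test suite. The sketch below (plain Python; `render_comment` is a hypothetical view helper standing in for an application’s templating layer) asserts that user input is HTML-escaped before display, a baseline defense against cross-site scripting.

```python
import html

def render_comment(comment):
    """Hypothetical view helper: escape user input before embedding it in HTML."""
    return f"<p>{html.escape(comment)}</p>"

def test_script_tags_are_neutralized():
    rendered = render_comment('<script>alert("xss")</script>')
    assert "<script>" not in rendered    # the raw tag must never survive
    assert "&lt;script&gt;" in rendered  # it should appear escaped instead

test_script_tags_are_neutralized()
print("XSS regression test passed")
```

Run on every commit, a test like this catches the day someone “simplifies” the template and drops the escaping—exactly the kind of consistent, automated standard the practices above call for.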

Conclusion

It is not difficult to envision the devastating consequences that a single security oversight can unleash. Now, imagine a world where such problems do not exist. Picture the peace of mind and confidence that come from knowing all applications are fortified against breaches. To achieve this, it is important to foster a culture where security takes center stage. Developers, testers, user support teams, and top management must come together in this shared endeavor, dedicating time, resources, and effort to implementing robust security testing protocols within their software development processes.

Read More