How to Protect Your Family’s Privacy on Twitter: A Guide for Parents and Kids

It’s no secret that when it comes to social networks, teen preferences can change dramatically from year to year. That holds true for Twitter. Even though the social network has seen a dip in overall use, Twitter has proven its staying power among certain communities, and that includes teens.

According to a 2022 Pew Research Center study, 23 percent of online teens use Twitter (down from 33 percent in 2014-15). Because of Twitter’s loyal fan base, it’s important for tweeting teens, as well as parents and caregivers, to understand how to engage safely on the fast-moving platform.

What do kids do on Twitter?

Many teens love the public aspect of Twitter. They see it as a fun place to connect with friends and stay up to date on sports, school news, memes, online trends and challenges, and popular culture. However, because the platform’s brief format (tweets are capped at 280 characters, up from the original 140) is so distinct from other popular networks such as TikTok, YouTube, and Snapchat, the online etiquette and ground rules for engagement are also distinct.

As fun as Twitter content is to share and consume, the platform still comes with hidden risks (as do all social networks).  

Here’s a guide to help your family understand safe Twitter use and still have fun on this unique social network.  

1. Think Before You Tweet 

This is likely one of the most important phrases you can convey to your child when it comes to using Twitter. Every word shared online can have positive or negative repercussions. Twitter’s fast-moving, ticker-like feed can tempt users to underestimate the impact of an impulsive, emotionally charged tweet. Words—digital words especially—can cause harm to the reputation of the person tweeting or to others.  

For this reason, consider advising your kids to be extra careful when sharing their thoughts or opinions, retweeting others, or responding to others’ tweets. We all know too well that content shared carelessly or recklessly online can affect future college or career opportunities for years to come.  

2. Protect Personal Privacy 

There’s little more important these days than protecting your family’s privacy. Nearly every online risk can be traced back to underestimating how much exposed personal information matters.

It’s never too early or too late to put the right tools in place to protect your family’s privacy online. While Twitter has privacy and reporting features designed to protect users, it’s wise to add a comprehensive identity and privacy protection solution that covers your family’s devices and networks.

Kids get comfortable with their online communities. This feeling of inclusion and belonging can lead to oversharing personal details. Discuss the importance of keeping personal details private online, reminding your kids never to share their full name, address, phone number, or other identity- or location-revealing details. This includes being discerning about posting photos that could include signage, school or workplace logos, or addresses. In addition, advise family members not to give away data just because a form has a blank field for it. It’s wise to share only your birthday month and day and keep your birth year private.

3. (Re)Adjust Account Settings  

When was the last time you reviewed social media account settings with your child? It’s possible that, over time, your child has eased up on their settings. Privacy settings on Twitter are easy to understand and put in place. Your child can control their discoverability, set an account to be public or private, and protect their tweets from public search. It’s easy to filter out unwanted messages, limit messages from people you don’t follow, and limit who can see your tweets or tag you in photos. It’s also possible to filter the topics you see.

4. Recognize Cyberbullying  

Respecting others is foundational to engaging on any social network. This includes honoring the beliefs, cultures, traditions, opinions, and choices of others. Cyberbullying plays out in many ways on Twitter, and one of those ways is subtweeting, a vague form of digital gossip. Subtweeting is when one Twitter user posts a mocking or critical tweet that alludes to another Twitter user without mentioning them by name. It can be cruel and harmful. Discuss the dangers of subtweeting along with the concept of empathy. Also, encourage your child to review the platform’s community guidelines and know how to unfollow, block, and report cyberbullies on Twitter.

5. Monitor Mental Health 

Maintaining a strong parent-child bond is essential to your child’s mental health and the first building block of establishing strong online habits. Has your child’s mood suddenly changed? Are they incessantly looking at their phone? Have their grades slipped? An online conflict, a risky situation, or some type of bullying may be the cause. You don’t have to hover over your child’s social feeds every day, but it’s important to stay involved in their daily life to support their mental health. If you do monitor their social networks, be sure to check the tone and intent of comments, captions, and replies. You will know bullying and subtweeting when you see it. 

6. Highlight Responsibility  

We love to quote Spider-Man’s Uncle Ben Parker and remind families that “with great power comes great responsibility” because it sums up technology ownership and social media engagement perfectly. The more time kids spend online, the more comfortable they become and the more lapses in judgment can occur. Consider discussing (and repeating often) that social media isn’t a right; it’s a privilege that carries responsibility and consequences.

7. Know & Discuss Risks 

The FBI estimates that roughly 500,000 predators are active online each day, many of them maintaining multiple profiles. Anonymous, catfish, and fake accounts abound online, capable of luring even the savviest digital native into an unsafe situation. Engaging on any social network can expose kids to a wide array of possible dangers, including scammers, catfishes, and predators, and their tactics continue to grow more sophisticated. For this reason, it’s important to talk candidly about online predator awareness and the ever-evolving lengths bad actors will go to in order to deceive minors online.

Twitter continues to attract tweens and teens who appreciate its brevity and breaking news. While navigating online safety and social media can be daunting for parents, it’s critical to stay engaged with your child and understand their digital life. By establishing an open flow of communication and regularly discussing privacy and appropriate online behavior, you can create a culture of openness in your family around important issues. We’re rooting for you!  

The post How to Protect Your Family’s Privacy on Twitter: A Guide for Parents and Kids appeared first on McAfee Blog.

Building Trustworthy AI

We will all soon get into the habit of using AI tools for help with everyday problems and tasks. We should get in the habit of questioning the motives, incentives, and capabilities behind them, too.

Imagine you’re using an AI chatbot to plan a vacation. Did it suggest a particular resort because it knows your preferences, or because the company is getting a kickback from the hotel chain? Later, when you’re using another AI chatbot to learn about a complex economic issue, is the chatbot reflecting your politics or the politics of the company that trained it?

For AI to truly be our assistant, it needs to be trustworthy. For it to be trustworthy, it must be under our control; it can’t be working behind the scenes for some tech monopoly. This means, at a minimum, the technology needs to be transparent. And we all need to understand how it works, at least a little bit.

Amid the myriad warnings about creepy risks to well-being, threats to democracy, and even existential doom that have accompanied stunning recent developments in artificial intelligence (AI)—and large language models (LLMs) like ChatGPT and GPT-4—one optimistic vision is abundantly clear: this technology is useful. It can help you find information, express your thoughts, correct errors in your writing, and much more. If we can navigate the pitfalls, its assistive benefit to humanity could be epoch-defining. But we’re not there yet.

Let’s pause for a moment and imagine the possibilities of a trusted AI assistant. It could write the first draft of anything: emails, reports, essays, even wedding vows. You would have to give it background information and edit its output, of course, but that draft would be written by a model trained on your personal beliefs, knowledge, and style. It could act as your tutor, answering questions interactively on topics you want to learn about—in the manner that suits you best and taking into account what you already know. It could assist you in planning, organizing, and communicating: again, based on your personal preferences. It could advocate on your behalf with third parties: either other humans or other bots. And it could moderate conversations on social media for you, flagging misinformation, removing hate or trolling, translating for speakers of different languages, and keeping discussions on topic; or even mediate conversations in physical spaces, interacting through speech recognition and synthesis capabilities.

Today’s AIs aren’t up to the task. The problem isn’t the technology—that’s advancing faster than even the experts had guessed—it’s who owns it. Today’s AIs are primarily created and run by large technology companies, for their benefit and profit. Sometimes we are permitted to interact with the chatbots, but they’re never truly ours. That’s a conflict of interest, and one that destroys trust.

The transition from awe and eager utilization to suspicion to disillusionment is a well-worn one in the technology sector. Twenty years ago, Google’s search engine rapidly rose to monopolistic dominance because of its transformative information retrieval capability. Over time, the company’s dependence on revenue from search advertising led it to degrade that capability. Today, many observers look forward to the death of the search paradigm entirely. Amazon has walked the same path, from honest marketplace to one riddled with lousy products whose vendors have paid to have the company show them to you. We can do better than this. If each of us is going to have an AI assistant helping us with essential activities daily and even advocating on our behalf, we each need to know that it has our interests in mind. Building trustworthy AI will require systemic change.

First, a trustworthy AI system must be controllable by the user. That means that the model should be able to run on a user’s owned electronic devices (perhaps in a simplified form) or within a cloud service that they control. It should show the user how it responds to them, such as when it makes queries to search the web or external services, when it directs other software to do things like sending an email on a user’s behalf, or when it modifies the user’s prompts to better express what the company that made it thinks the user wants. It should be able to explain its reasoning to users and cite its sources. These requirements are all well within the technical capabilities of AI systems.
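
To make the local-control point concrete, here is a minimal sketch of running a small open model entirely on hardware you own, using the Hugging Face transformers library; the choice of library is our assumption, and “distilgpt2” below is just a small placeholder model, not a recommendation:

# Minimal sketch: a language model running locally on the user's own device,
# not behind a vendor's API. Assumes `pip install transformers torch`.
from transformers import pipeline

# "distilgpt2" is a tiny open demo model; swap in any open model you trust.
generator = pipeline("text-generation", model="distilgpt2")

prompt = "Three questions to ask before trusting an AI assistant:"
output = generator(prompt, max_new_tokens=60, do_sample=True)
print(output[0]["generated_text"])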

Furthermore, users should be in control of the data used to train and fine-tune the AI system. When modern LLMs are built, they are first trained on massive, generic corpora of textual data, typically sourced from across the Internet. Many systems go a step further by fine-tuning on more specific datasets purpose-built for a narrow application, such as speaking in the language of a medical doctor, or mimicking the manner and style of an individual user. In the near future, corporate AIs will be routinely fed your data, probably without your awareness or your consent. Any trustworthy AI system should transparently allow users to control what data it uses.
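
No vendor exposes a standard interface for this today, but the idea is simple enough to sketch. In the hypothetical Python snippet below, the Record type and its consent flag are purely illustrative:

# Illustrative sketch (not any vendor's real API): only data a user has
# explicitly consented to may enter the fine-tuning corpus.
from dataclasses import dataclass

@dataclass
class Record:
    text: str
    consented: bool  # set by the user, never inferred by the vendor

def training_corpus(records: list) -> list:
    # Hard rule, enforced in code: unconsented data never reaches training.
    return [r.text for r in records if r.consented]

records = [
    Record("a private draft email", consented=False),
    Record("a public blog post", consented=True),
]
print(training_corpus(records))  # -> ['a public blog post']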

Many of us would welcome an AI-assisted writing application fine-tuned with knowledge of which edits we have accepted in the past and which we did not. We would be more skeptical of a chatbot knowledgeable about which of our search results led to purchases and which did not.

You should also be informed of what an AI system can do on your behalf. Can it access other apps on your phone, and the data stored with them? Can it retrieve information from external sources, mixing your inputs with details from other places you may or may not trust? Can it send a message in your name (hopefully based on your input)? Weighing these types of risks and benefits will become an inherent part of our daily lives as AI-assistive tools become integrated with everything we do.

Realistically, we should all be preparing for a world where AI is not trustworthy. Because AI tools can be so incredibly useful, they will increasingly pervade our lives, whether we trust them or not. Being a digital citizen of the next quarter of the twenty-first century will require learning the basic ins and outs of LLMs so that you can assess their risks and limitations for a given use case. This will better prepare you to take advantage of AI tools, rather than be taken advantage of by them.

In the world’s first few months of widespread use of models like ChatGPT, we’ve learned a lot about how AI creates risks for users. Everyone has heard by now that LLMs “hallucinate,” meaning that they make up “facts” in their outputs, because their predictive text generation systems are not constrained to fact-check their own emanations. Many users learned in March that information they submit as prompts to systems like ChatGPT may not be kept private, after a bug revealed other users’ chats. Your chat histories are stored in systems that may be insecure.

Researchers have found numerous clever ways to trick chatbots into breaking their safety controls; these work largely because many of the “rules” applied to these systems are soft, like instructions given to a person, rather than hard, like coded limitations on a product’s functions. It’s as if we are trying to keep AI safe by asking it nicely to drive carefully, a hopeful instruction, rather than taking away its keys and placing definite constraints on its abilities.
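
The difference between soft and hard rules is easy to see in code. Here is a toy Python sketch (the tool names are hypothetical, not any real chatbot’s API): the soft rule is just text the model may ignore, while the hard rule is enforced outside the model entirely:

# A soft rule lives in the prompt; the model is free to disregard it.
SYSTEM_PROMPT = "You may use tools, but please never send email."

# A hard rule lives in code, outside the model, and cannot be talked around.
ALLOWED_TOOLS = {"search"}

def run_tool(name: str, arg: str) -> str:
    if name not in ALLOWED_TOOLS:
        # Enforced no matter what the model's output requests.
        raise PermissionError(f"tool {name!r} is blocked by a hard constraint")
    return f"search results for {arg!r}"  # stub implementation

print(run_tool("search", "trustworthy AI"))  # permitted
try:
    run_tool("send_email", "to=everyone")    # denied in code, not by politeness
except PermissionError as err:
    print(err)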

These risks will grow as companies grant chatbot systems more capabilities. OpenAI is providing developers wide access to build tools on top of GPT: tools that give their AI systems access to your email, to your personal account information on websites, and to computer code. While OpenAI is applying safety protocols to these integrations, it’s not hard to imagine those being relaxed in a drive to make the tools more useful. It seems likewise inevitable that other companies will come along with less bashful strategies for securing AI market share.

Just as with any human, building trust with an AI will be hard-won through interaction over time. We will need to test these systems in different contexts, observe their behavior, and build a mental model for how they will respond to our actions. Building trust in that way is only possible if these systems are transparent about their capabilities, what inputs they use and when they will share them, and whose interests they are evolving to represent.

This essay was written with Nathan Sanders, and previously appeared on Gizmodo.com.

New DownEx malware campaign targets Central Asia

A previously undocumented malware campaign called DownEx has been observed actively targeting government institutions in Central Asia for cyberespionage, according to a report by Bitdefender. 

The first instance of the malware was detected in 2022 in a highly targeted attack aimed at exfiltrating data from foreign government institutions in Kazakhstan. Researchers observed another attack in Afghanistan.

The 6 best password managers for business

What’s a password manager?

A password manager is a program that stores passwords and logins for various sites and apps, and generates new strong passwords when a user needs to change an old one or create a new account. Users can sign into a password manager with a single strong password or by using biometrics, and access all their login information.
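
The generation half of that job is straightforward; here is a minimal sketch in Python using the standard library’s secrets module, which draws from a cryptographically secure random source (unlike the random module):

# Minimal sketch of what a password manager's generator does under the hood.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a fresh 20-character random password each run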

Most password managers allow users to sign in on multiple devices (including Macs, Windows machines, and iPhone or Android smartphones) and work with multiple browsers (including Chrome, Firefox, Safari, and Microsoft Edge) to automatically fill in username and password info, storing encrypted password information and facilitating secure synchronization between devices. And while these tools got their start in the consumer world, most offerings now have editions aimed at businesses with enterprise features.
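
The “storing encrypted password information” piece can be sketched too: a vault derives its encryption key from one master password, so stored entries are unreadable without it. A minimal illustration using the cryptography package; the parameters and master password below are illustrative, not any product’s actual design:

# Sketch of a password vault's core idea: entries encrypted under a key
# derived from a single master password. Assumes `pip install cryptography`.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

salt = os.urandom(16)  # stored next to the vault; it need not be secret
kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=600_000)
key = base64.urlsafe_b64encode(kdf.derive(b"correct horse battery staple"))

vault = Fernet(key)
token = vault.encrypt(b"example.com: hunter2")
print(vault.decrypt(token))  # recoverable only with the master password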

python-waitress-1.4.3-1.el8

FEDORA-EPEL-2023-9191f31d36

Packages in this update:

python-waitress-1.4.3-1.el8

Update description:

This update takes the package from version 1.2.1 to version 1.4.3. This is necessary to fix multiple CVEs.

CVE-2019-16785 (high)
CVE-2019-16786 (high)
CVE-2019-16789 (high)
CVE-2019-16792 (high)
CVE-2020-5236 (medium)

There are no breaking changes mentioned in the upstream changelog.
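
After applying the update, one quick sanity check is to read the installed version from the package metadata in Python itself. A minimal sketch, assuming Python 3.8+ and a plain X.Y.Z version string:

# Confirm the installed waitress is at least 1.4.3, the version this
# update ships to address the CVEs listed above.
from importlib.metadata import version  # raises if waitress isn't installed

installed = version("waitress")
print(f"waitress {installed}")
assert tuple(map(int, installed.split("."))) >= (1, 4, 3), "still on a vulnerable version"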

Smashing Security podcast #321: Eurovision, acts of war, and Twitter circles

Twitter shares explicit photos without users’ permission, one US company can look forward to a $1.4 billion payout seven years after an infamous cyberattack, and how might hackers target Eurovision?

All this and much, much more is discussed in the latest edition of the “Smashing Security” podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by cybersecurity reporter John Leyden.

Plus don’t miss our featured interview with Outpost24’s John Stock.
