Cross-Site Request Forgery (CSRF) vulnerability in ThingsForRestaurants Quick Restaurant Reservations plugin <= 1.5.4 versions.
CVE-2022-41608
Cross-Site Request Forgery (CSRF) vulnerability in Thomas Belser Asgaros Forum plugin <= 2.2.0 versions.
UK Man Sentenced to 13 Years for Running Multi-Million Fraud Website
Confirmed global losses from iSpoof scams were £100m, with the actual figure believed to be far higher
Sharing your business’s data with ChatGPT: How risky is it?
The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.
As a natural language processing model, ChatGPT – and other similar machine learning-based language models – is trained on huge amounts of textual data. Processing all this data, ChatGPT can produce written responses that sound like they come from a real human being.
ChatGPT learns from the data it ingests. If this information includes your sensitive business data, then sharing it with ChatGPT could potentially be risky and lead to cybersecurity concerns.
For example, what if you feed ChatGPT pre-earnings company financial information, proprietary software code, or materials used for internal presentations, without realizing that practically anybody could obtain that sensitive information just by asking ChatGPT about it? If you use your smartphone to engage with ChatGPT, a smartphone security breach could be all it takes to access your ChatGPT query history.
In light of these implications, let’s discuss if – and how – ChatGPT stores its users’ input data, as well as potential risks you may face when sharing sensitive business data with ChatGPT.
Does ChatGPT store users’ input data?
The answer is complicated. ChatGPT does not automatically add data from queries to its models in a way that makes it available for others to query, but every prompt does become visible to OpenAI, the organization behind the large language model.
Although no membership inference attacks have yet been carried out against the large language models that power ChatGPT, databases containing saved prompts as well as embedded learnings could still be compromised by a cybersecurity breach. OpenAI, the company that developed ChatGPT, is working with other companies to limit the general access that language models have to personal data and sensitive information.
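To make the membership-inference idea concrete, here is a toy loss-threshold probe against the openly downloadable GPT-2 model: a string the model memorized during training tends to receive an unusually low loss compared with novel text. The model choice, the candidate strings, and the informal "lower is suspicious" reading are illustrative assumptions for this sketch, not a statement about ChatGPT's internals.

```python
# A minimal loss-threshold membership probe, sketched against GPT-2.
# Assumptions: GPT-2 as a stand-in model; no calibrated threshold is derived.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def lm_loss(text: str) -> float:
    """Average token-level cross-entropy the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

# Memorized training text tends to score far lower than random strings.
print(f"famous quote: {lm_loss('To be, or not to be, that is the question'):.2f}")
print(f"gibberish:    {lm_loss('xq7 vlp zzt unlikely random gibberish 9931'):.2f}")
```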
But the technology is still in its nascent stages – ChatGPT was only released to the public in November 2022. Within two months of its release, ChatGPT had reached over 100 million users, making it the fastest-growing consumer app in history. With such rapid growth, regulation has been slow to keep up, and the sheer breadth of the user base leaves abundant security gaps and vulnerabilities across the platform.
Risks of sharing business data with ChatGPT
In June 2021, researchers from Apple, Stanford University, Google, Harvard University, and others published a paper revealing that GPT-2, an earlier language model in the same family as those behind ChatGPT, could accurately recall sensitive information from its training documents.
The report found that GPT-2 could call up information containing specific personal identifiers, recreate exact sequences of text, and surface other sensitive information when prompted. These "training data extraction attacks" pose a growing threat: anyone who can query a model may be able to recover pieces of its training data, including researchers' protected intellectual property.
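In the same spirit, a naive extraction probe can be sketched in a few lines: sample many continuations from a prefix that often precedes personal data, then scan the output for PII-like patterns. The prefix, sample count, and regex below are assumptions for illustration, not the paper's exact methodology.

```python
# A naive training-data extraction probe against GPT-2 (illustrative only).
import re
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
samples = generator(
    "Contact information:",   # assumed prefix likely to precede memorized PII
    max_new_tokens=48,
    do_sample=True,
    top_k=40,
    num_return_sequences=20,
)

# Scan the sampled continuations for email-shaped strings.
email_re = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
for sample in samples:
    for hit in email_re.findall(sample["generated_text"]):
        print("possible memorized contact:", hit)
```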
One data security company, Cyberhaven, has published reports on the risky ChatGPT usage it has recently blocked. According to the reports, Cyberhaven identified and prevented attempts by about 67,000 employees at its client companies to submit sensitive data to ChatGPT's platform.
Statistics from the security platform indicate that the average company releases sensitive data to ChatGPT hundreds of times per week. These requests raise serious cybersecurity concerns, with employees attempting to input client or patient information, source code, confidential documents, and regulated data.
For example, medical clinics routinely use private patient communication software to help protect patient data. According to the team at Weave, this is important so that medical clinics can gain actionable data and analytics to make the best decisions while keeping their patients' sensitive information secure. Using ChatGPT, however, can threaten the security of exactly this kind of information.
In one troubling example, a doctor typed their patient’s name and specific details about their medical condition into ChatGPT, prompting the LLM to compose a letter to that patient’s insurance company. In another worrying example, a business executive copied the entire 2023 strategy document of their firm into ChatGPT’s platform, causing the LLM to craft a PowerPoint presentation from the strategy document.
Data exposure
There are preventive measures you can take to protect your data in advance, and some companies have already begun to impose internal policies to prevent data leaks through ChatGPT usage.
JP Morgan, for example, recently restricted ChatGPT usage for all of its employees, citing that it was impossible to determine who was accessing the tool, for what purposes, and how often. Restricting access to ChatGPT altogether is one blanket solution, but as the software continues to develop, companies will likely need to find other strategies that incorporate the new technology.
Instead, boosting company-wide awareness of the possible risks can make employees more careful in their interactions with ChatGPT. Amazon employees, for example, have been publicly warned to be careful about what information they share with the tool.
Employees have been warned not to copy and paste documents directly into ChatGPT and instructed to remove any personally identifiable information, such as names, addresses, credit card details, and specific positions at the company.
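As a minimal sketch of what that hygiene can look like in practice, the snippet below masks common PII patterns before text ever leaves the company network. The regexes and placeholder tags are assumptions for illustration; a production deployment would rely on a dedicated DLP product or a purpose-built library rather than hand-rolled patterns.

```python
import re

# Illustrative PII patterns; real-world scrubbing needs far broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace PII-like substrings with placeholder tags before the text
    is pasted into any external tool such as ChatGPT."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub("Reach Dr. Lee at lee@clinic.example or 555-867-5309."))
# -> Reach Dr. Lee at [EMAIL] or [PHONE].
```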
But limiting the information you and your colleagues share with ChatGPT is just the first step. The next step is to invest in secure communication software that gives you more control over where and how your data is shared. For example, building in-app chat with a secure chat messaging API keeps your data away from prying eyes, giving users context-rich, seamless, and, most importantly, secure chat experiences.
ChatGPT serves other functions for users. As well as composing natural, human-sounding language responses, it can also create code, answer questions, speed up research processes, and deliver specific information relevant to businesses.
Again, choosing a more secure and targeted software or platform to achieve the same aims is a good way for business owners to prevent cybersecurity breaches. Instead of using ChatGPT to look up current social media metrics, a brand can instead rely on an established social media monitoring tool to keep track of reach, conversion and engagement rates, and audience data.
Conclusion
ChatGPT and other similar natural language models provide companies with a quick and easy resource for productivity, writing, and other tasks. Since no training is needed to adopt this new AI technology, any employee can use ChatGPT, which widens the potential attack surface for a cybersecurity breach.
Widespread education and public awareness campaigns within companies will be key to preventing damaging data leaks. In the meantime, businesses may want to adopt alternative apps and software for daily tasks such as interacting with clients and patients, drafting memos and emails, composing presentations, and responding to security incidents.
Since ChatGPT is still a new, developing platform, it will take some time before developers effectively mitigate these risks. Taking preventive action now is the best way to ensure your business is protected from potential data breaches.
Instagram Safety for Kids: Protecting Privacy and Avoiding Risks
If you’re a parent of a teen, there’s a good chance that Instagram is the culprit behind a good chunk of their screen time. However, woven into the stream of reels, stories, selfies, and Insta-worthy moments are potential risks to your child’s privacy and safety.
According to a recent Pew Research Center report, 62 percent of teens use Instagram, making it the third most popular social media platform after YouTube and TikTok. Teens use the photo and video-sharing platform to share their creativity, connect with friends, and get updates on their favorite celebrities and influencers.
Instagram’s format makes it easy for kids (and adults!) to spend hours using filters and stickers, commenting, liking posts, and counting likes. But all this fun can take a turn if kids misuse the platform or fail to take the risks seriously.
Whether your child is new to Instagram or a seasoned IG user, consider pausing to talk about the many aspects of the platform.
Here are a few critical topics to help you kick off those conversations.
Instagram Privacy & Safety Tips
1. Resist oversharing.
It’s essential to acknowledge the impulsive behavior and maturity gaps unique to the teen years. Do you feel like you are repeating yourself on these topics? That’s okay; it means you are doing it right, because repetition works. Advise them that sharing too many personal details online can expose them to serious privacy risks, including identity theft, online scams, sextortion, and cyberbullying. Oversharing can also leave a negative impression on prospective schools and employers who may disapprove of the content teens choose to share online.
Suggestion: Sit down together and review Instagram’s privacy settings to limit who can see your child’s content. Encourage them to use strong passwords and two-factor authentication to secure their accounts. Also, advise them to think twice before posting, and warn them about the risks of sharing intimate photos online (even with friends), since such images can easily be shared or stolen. If you’ve never considered adding security software to protect your family’s devices, now may be the time. McAfee+ provides all-in-one privacy, identity, and device protection for families, with features including identity monitoring, a password manager, unlimited VPN, file shredding, a protection score, and parental controls. The software also includes personal data cleanup and credit monitoring and reporting to further protect kids from identity theft.
2. Just say no to FOMO.
FOMO stands for Fear of Missing Out, a term that describes the subtle undercurrent of emotions that can bubble up when using social media. It’s common for kids to feel anxious or even become depressed because they think they are being excluded from the party. FOMO can lead them to spend too much time and money on social media, neglect family or school responsibilities, or engage in risky behaviors to fit in with or impress others.
Suggestion: Help your child understand that it’s normal to have FOMO feelings sometimes. Encourage them to focus on their strengths and to develop fulfilling hobbies and interests offline. To reduce FOMO, encourage your child to take breaks from social media, and consider installing software to help you manage family screen time.
3. Address social comparison.
Akin to FOMO, comparing oneself to others is an ever-present reality among teens that is only amplified on Instagram. According to several reports, Instagram’s photo-driven culture and the filters that enhance facial and body features can make teens feel worse about their bodies and increase the risk of eating disorders, depression, and risky behaviors. Girls especially can develop low self-esteem by comparing themselves to unrealistic or edited images of celebrities, influencers, or friends. Social comparison can also lead to a fixation on getting more likes, followers, or comments on their posts.
Suggestion: Create a safe space for your teen to discuss this topic with you. Help them understand the difference between Instagram life and real life, and to notice how they feel while using the app. Encourage them to follow accounts that inspire and uplift them and to unfollow accounts that spark feelings of comparison, jealousy, or inferiority.
4. Talk about cyberbullying.
Hurtful events that impact teens, such as gossip, rumor spreading, peer pressure, criticism, and conflict, can increase in online communities. If your child posts online, they can receive mean or sexual comments from people they know and strangers (trolls). Cyberbullying can surface in many ways online, making kids feel anxious, fearful, isolated, and worthless.
Suggestion: Keep up on how kids bully one another online and check in with your child daily about what’s happening in their life. Encourage them not to respond to bullies and to block and report the person instead. Also, if they are being bullied, remind them to take and store screenshots. Such evidence can be helpful if they need to confide in a parent, teacher, or law enforcement.
5. Emphasize digital literacy.
Understanding how to discern true from false information online is becoming more complicated by the day. In the McAfee 2023 Threat Predictions: Evolution and Exploitation, experts predict that AI tools will enable more realistic and efficient manipulation of images and videos, which could increase disinformation and harm the public’s mental health. Teaching kids to evaluate online content critically is a great way to help them build their confidence and security on Instagram and other networks.
Suggestion: Encourage critical thinking and guide kids to use fact-checking tools before believing or sharing content that could be fake. Remind them of their digital footprint and how the things they do online can have long-lasting consequences.
It’s important to remember that all social networks come with inherent dangers, and Instagram has taken a number of steps to reduce the potential risks in its community by improving its security features and safety rules for kids. Still, nothing protects your child like a solid parent-child relationship. As a parent or caregiver, you play a critical role in educating your child about their digital well-being and privacy. By working together as a family, you can equip your child to enjoy the good stuff and avoid the sketchy side of the digital world.
What cybersecurity professionals can learn from the humble ant
When an ant colony is threatened, individual ants release pheromones to warn of the impending danger. Each ant picking up the warning broadcasts it further, passing it from individual to individual until the full defenses of the colony are mobilized. Instead of a single ant facing the danger alone, thousands of defenders with a single purpose swiftly converge on the threat. This all happens without the need for direction from a central authority or guidance from a single leader.
Just like the ants, public-private partnerships (PPPs) should respond to cybersecurity klaxons by working together to combat threats from all corners of the globe. Examples of this are already emerging. US President Joe Biden’s National Cybersecurity Strategy outlines the expectation of an all-of-government approach to cybersecurity, giving a common purpose to the private organizations and national infrastructure that are vulnerable to attack.
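As a toy illustration of this decentralized, leaderless propagation, the sketch below simulates pheromone-style gossip among peers: each node that learns of a threat forwards the alert to a few randomly chosen neighbors, and coverage snowballs without any central coordinator. The node count and fan-out are arbitrary assumptions.

```python
import random

def spread_alert(num_nodes: int = 1000, fanout: int = 3, seed: int = 0):
    """Simulate ant-style gossip: every newly alerted node warns `fanout`
    random peers until no new nodes learn of the threat."""
    random.seed(seed)
    alerted = {0}          # node 0 detects the threat first
    frontier = [0]
    rounds = 0
    while frontier:
        rounds += 1
        next_frontier = []
        for _ in frontier:
            for peer in random.sample(range(num_nodes), fanout):
                if peer not in alerted:
                    alerted.add(peer)
                    next_frontier.append(peer)
        frontier = next_frontier
    return len(alerted), rounds

covered, rounds = spread_alert()
print(f"{covered} of 1000 nodes alerted after {rounds} gossip rounds")
```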
CVE-2022-0010
Insertion of Sensitive Information into Log File vulnerability in ABB QCS 800xA, ABB QCS AC450, ABB Platform Engineering Tools.
An attacker who already has local access to the QCS nodes could successfully obtain the password for a system user account. Using this information, the attacker could exploit the vulnerability to gain control of system nodes.
This issue affects QCS 800xA: from 1.0.0 through 6.1SP2; QCS AC450: from 1.0.0 through 5.1SP2; Platform Engineering Tools: from 1.0.0 through 2.3.0.
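As a generic sketch of this vulnerability class (CWE-532, sensitive information written to a log file) and its usual mitigation, the snippet below masks credential-like values before they reach the log. This is not ABB’s code; the logger name and pattern are assumptions for illustration.

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("qcs-demo")

# Mask credential-like key=value pairs before they are persisted anywhere.
SECRET_RE = re.compile(r"(password|token|secret)=\S+", re.IGNORECASE)

def redact(message: str) -> str:
    return SECRET_RE.sub(r"\1=[REDACTED]", message)

# Vulnerable pattern: the plaintext password lands in a log any local user can read.
log.info("login attempt user=ops password=hunter2")

# Mitigated: the secret never reaches the log file.
log.info(redact("login attempt user=ops password=hunter2"))
```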