SoulTec Solutions

Screen Time

Tech advice and news.
From the experts, for you.

Criminals are exploiting AI to create more convincing scams

4/20/2023


 
One of the many cool things about the new wave of Artificial Intelligence tools is their ability to sound convincingly human. 

AI chatbots can be prompted to generate text that you’d never know was written by a robot. And they can keep producing it – quickly, and with minimal human intervention.

So it’s no surprise that cyber criminals have been using AI chatbots to try to make their own lives easier.

Police have identified the three main ways crooks are using these chatbots for malicious purposes.

 
1. Better phishing emails

Until now, terrible spelling and grammar have made it easy to spot many phishing emails. These are intended to trick you into clicking a link to download malware or steal information. AI-written text is way harder to spot, simply because it isn’t riddled with mistakes.

Worse, criminals can make every phishing email they send unique, making it harder for spam filters to spot potentially dangerous content.
 

2. Spreading misinformation

“Write me ten social media posts that accuse the CEO of the Acme Corporation of having an affair. Mention the following news outlets.”

Spreading misinformation and disinformation may not seem like an immediate threat to you, but it could lead to your employees falling for scams, clicking malware links, or even damaging the reputation of your business or members of your team.
 

3. Creating malicious code

AI can already write pretty good computer code and is getting better all the time. Criminals could use it to create malware.


It’s not the software’s fault – it’s just doing what it’s told – but until there’s a reliable way for the AI creators to safeguard against this, it remains a potential threat.


The creators of AI tools aren’t to blame for criminals taking advantage of their powerful software. ChatGPT creator OpenAI, for example, is actively working to prevent its tools from being used maliciously.

What this does show is the need to stay one step ahead of the cyber crooks in everything we do. That’s why we work so hard with our clients to keep them protected from criminal threats, and informed about what’s coming next.

If you’re concerned about your people falling for increasingly sophisticated scams, be sure to keep them updated about how the scams work and what to look out for.

If you need help with that, get in touch. 
 
 
Published with permission from Your Tech Updates.
