The trouble with technology

I started talking about the use of ‘deepfakes’ in romance scams over two years ago, when we started to see evidence of them in fake news videos featuring celebrities and heads of state. Unfortunately, this is only the beginning of the journey, and more sophisticated uses of this technology will follow hot on its heels, as seen in the acclaimed BBC series The Capture. Let’s take a dive into ‘The trouble with technology’.

Technology has brought about numerous benefits to society, from improved healthcare to faster communication to increased productivity. However, there are also several challenges associated with technology that we need to be aware of, including:

  1. Dependency: We have become increasingly dependent on technology to perform daily tasks, which can lead to issues if technology fails or is unavailable.
  2. Privacy and security: Technology has made it easier for personal information to be collected and shared, raising concerns about privacy and security. Cybersecurity threats, such as hacking and identity theft, are also becoming more prevalent.
  3. Social isolation: Technology can create a sense of isolation and detachment, particularly in younger generations who spend more time interacting online than in person.
  4. Addiction: Technology can be addictive, particularly social media and gaming, which can lead to negative impacts on mental health, productivity, and relationships.
  5. Job displacement: Technology is also changing the nature of work, with automation and artificial intelligence replacing many jobs. This can lead to job displacement and require workers to reskill and adapt to new industries.
  6. Environmental impact: Technology can have a significant environmental impact, from the manufacture of devices to the energy required to power them. E-waste disposal is also a growing concern.

To address these challenges, it’s important to use technology responsibly and be mindful of the potential impacts on society and the environment. We need to develop policies and regulations to ensure technology is used in a way that benefits everyone, not just a select few. Ultimately, we need to find a balance between the benefits and challenges of technology to create a more equitable and sustainable future.

So far, regulation and legislation are failing on a grand scale.


So how will new technology benefit scammers in Romance Fraud?

ChatGPT bots will be used by less skilled scammers, and to automate the spamming process until a victim is hooked.
Technology has made it easier for scammers to target and deceive people through online dating platforms and social media. They can create fake profiles, steal identities, and use persuasive tactics to gain trust and extract money or personal information from their victims. In the future, technology may also be used to improve detection and prevention of romance fraud, such as through the use of artificial intelligence and machine learning to identify suspicious activity or patterns. It may also be used to help victims report and recover from fraud. It’s important to be cautious and aware of the potential risks when using online dating and social media to find romantic partners.
As a language model, ChatGPT is designed to help with conversations by generating responses that are relevant, coherent, and appropriate in context. Whether you’re looking to have a casual chat or seeking assistance with a complex topic, ChatGPT can offer insights, information, and guidance to help move the conversation forward. Here are some ways that ChatGPT can help with conversations:

  1. Providing information: ChatGPT can help answer questions and provide information on a wide range of topics, including science, history, and current events.
  2. Generating ideas: If you’re brainstorming ideas for a project or just looking for some inspiration, ChatGPT can help generate new ideas and perspectives.
  3. Offering advice: ChatGPT can offer advice and guidance on personal or professional issues, such as career choices or relationship problems.
  4. Providing entertainment: ChatGPT can also be a source of entertainment, telling jokes, sharing stories, or playing games.

Overall, ChatGPT can help make conversations more engaging, informative, and enjoyable, and can be a valuable tool for anyone seeking to improve their communication skills or connect with others.
Unfortunately, scammers can use ChatGPT or any other conversational AI system to talk to victims of romance fraud. Here are some ways scammers may use ChatGPT to perpetrate romance fraud:
  1. Generating personalized messages: ChatGPT can be used to generate personalized messages that can make the victim feel special and targeted. Scammers can use this to create a false sense of connection and build trust with the victim.
  2. Mimicking human-like responses: ChatGPT can generate responses that appear to be human-like, making it difficult for the victim to detect that they are talking to a machine. Scammers can use this to make the victim believe they are talking to a real person.
  3. Exploiting vulnerabilities: Scammers can use ChatGPT to learn about the victim’s interests, hobbies, and preferences and use that information to exploit their vulnerabilities. For example, if the victim expresses loneliness, the scammer may use that information to build a deeper emotional connection and manipulate the victim into sending money.
  4. Increasing efficiency: ChatGPT can be used to automate responses and interact with multiple victims at the same time, making it easier for scammers to perpetrate romance fraud on a large scale.
It’s important to note that while ChatGPT can be used by scammers, it can also be used by law enforcement and anti-fraud organizations to detect and prevent romance fraud. It’s crucial to stay vigilant and aware of the signs of romance fraud to protect yourself from scammers.

And what about deepfakes?

Scammers have used technology in various ways to perpetrate scams through video calls. Here are some examples:

Fake video calls: Scammers can use pre-recorded videos or fake personas to create the illusion of a live video call. They may use sophisticated software to manipulate their appearance or disguise their voice to appear more convincing.

This includes the use of intimate videos in sextortion. Scammers may use pre-recorded videos on calls to coerce victims into explicit acts, record the explicit footage, and then threaten to share it publicly unless the victim pays a ransom.

Of course, these videos have their limitations: the scammer cannot react to requests in real time, and voiceovers only succeed if the scammer executes them well enough to hide the video’s true origin. The videos are also mostly short and of fairly poor quality, but, as with any technology, these issues can be explained away.

To see an example of a fake video call, click here.

How have scammer video calls evolved with the help of technology?
Scammers have evolved their video call tactics with the help of technology to make their scams more convincing and sophisticated.

Deepfake is a method by which artificial intelligence (AI) is trained on data, in this case facial movements, to superimpose a new face onto an existing face and body. The term ‘deepfake’ is fairly new, but the origin of the concept isn’t.

Photoshopped images, which have since evolved into singing pictures, have been circulating for at least 15 years. What makes the new deepfake technology problematic is that this once highly expensive technology is now easily accessible, creating ridiculously sophisticated videos that, to the untrained eye, will appear seamless.

Scammers can use deepfake technology to create convincing fake videos that appear to be from someone else. They can use this to impersonate celebrities or public figures, or to create videos of fake customer support agents, government officials, or the persona they have chosen to scam with.

To see a scammer exploring the deepfake software, click here.

To see scammers using deepfake software in their scamming, click here.

Virtual backgrounds: Scammers can use virtual backgrounds to create a false sense of legitimacy or credibility. They may use backgrounds that look like a legitimate office or government agency to convince the victim that they are speaking with a legitimate representative.

Screen sharing: Scammers can use screen sharing to show the victim fake or manipulated content to further their scam. For example, they may show fake bank statements or other documents to convince the victim to send money or share personal information.

How can you spot a deepfake?
Spotting a deepfake can be challenging, as they can be highly convincing. However, here are some common tell-tale signs that you might be looking at a deepfake:
  1. Stiff or robotic facial movements: Deepfakes can still have difficulty replicating natural human expressions, movements, and eye blinks (one blink-based check is sketched after this list).
  2. Artificial-looking eyes: Sometimes, the eyes in a deepfake video can look unrealistic or glassy.
  3. Inconsistent lighting: Shadows and lighting can appear to change or be inconsistent in deepfake videos.
  4. Audio discrepancies: The audio in a deepfake video may not match the movements of the person’s mouth or may sound artificial.
  5. Blurry or low-resolution areas: Some parts of a deepfake video might be more blurry or lower-resolution than others.
It’s important to note that these signs aren’t foolproof, and some deepfakes can be highly sophisticated. Therefore, it’s important to approach all videos with a critical eye and consider the context and source of the content.
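
For the more technically minded, the eye-blink cue from sign 1 can even be checked programmatically. Below is a minimal, hypothetical Python sketch (not part of the original blog, and not a reliable detector on its own) that counts blinks in a video clip using the well-known eye aspect ratio (EAR) technique with OpenCV and dlib; the input filename and threshold value are illustrative assumptions.

```python
# Hypothetical sketch: count eye blinks in a video clip using the eye aspect
# ratio (EAR). The EAR collapses when the eyelids close, so a sustained dip
# followed by a recovery counts as one blink. Requires OpenCV, dlib, scipy,
# and dlib's 68-point landmark model file (a separate download).
import cv2
import dlib
from scipy.spatial import distance

EAR_THRESHOLD = 0.21        # below this, the eye is treated as closed (tunable)
EYE_1 = range(36, 42)       # landmark indices for one eye in dlib's 68-point model
EYE_2 = range(42, 48)       # ...and the other eye

def eye_aspect_ratio(pts):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)
    a = distance.euclidean(pts[1], pts[5])
    b = distance.euclidean(pts[2], pts[4])
    c = distance.euclidean(pts[0], pts[3])
    return (a + b) / (2.0 * c)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

video = cv2.VideoCapture("suspect_call.mp4")  # hypothetical input file
blinks, eye_closed = 0, False
while True:
    ok, frame = video.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        eye1 = [(shape.part(i).x, shape.part(i).y) for i in EYE_1]
        eye2 = [(shape.part(i).x, shape.part(i).y) for i in EYE_2]
        ear = (eye_aspect_ratio(eye1) + eye_aspect_ratio(eye2)) / 2.0
        if ear < EAR_THRESHOLD:
            eye_closed = True
        elif eye_closed:        # eye reopened: one full blink completed
            blinks += 1
            eye_closed = False
video.release()
print(f"Blinks detected: {blinks}")
```

People typically blink roughly 15 to 20 times a minute, so far fewer blinks across a long clip is one (fallible) red flag; treat the result as a prompt for closer inspection, never as proof either way.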
FYI: This blog (minus the parts in italics and the video links) was created using ChatGPT.