Microsoft Recall, Google AI Phish and LinkedIn AI Training

Oct 21, 2024 | Security Alerts

Several important security and privacy issues related to Microsoft, Google and LinkedIn have come up in the last couple of weeks. Please take the time to understand these issues so you can advise your staff, colleagues and friends to take action and to protect themselves.

Microsoft Recall:

Earlier this month we learned that Microsoft began rolling out its now infamous Recall product through an update to Windows 11. Recall is a new Windows 11 capability that records everything you do so you can leverage it to “remember” whatever your brain can’t. It was first introduced back in June and came under heavy scrutiny for its many unaddressed security and privacy risks. The new release addresses many of those concerns, but for many users it still represents a significant and unnecessary risk. Because Recall continuously captures images of your screen, anyone who compromises your computer or manages to log into it could review all your activity, possibly including passwords and other sensitive information. Recall will automatically be enabled on compatible machines starting this month as systems receive Windows updates. We recommend disabling this feature unless a user explicitly requests or needs it, and then only after that user has been advised of the risks and trained on the built-in privacy features.

Google AI Attack:

Many news outlets and social media posts have reported on a sophisticated phishing attack leveraging AI-generated phone calls. The incident was first reported by Sam Mitrovic, a Microsoft consultant. The attack relies on a series of well-planned phishing emails followed by an AI-driven phone call from a phone number with a Google Support caller ID. The sophisticated ruse is intended to gain access to the victim’s Gmail account and leverage that access to commit fraud. Phishing and other fraud schemes are getting harder to spot, even for the most sophisticated users.

LinkedIn AI Training:

In September, we learned from 404media’s reporting that LinkedIn had started training its AI large language models (LLMs) on user data without user consent. The only evidence of this was a new settings option to disable “Use my data for training content creation AI models.” While this use of user data likely doesn’t represent a security risk, it is an affront to privacy and may be a concern to original content creators.

What do I need to do?

  • Microsoft Recall: Disable the Recall feature. Manual instructions can be found here; a scripted approach is sketched after this list.
  • Google AI Attack: End-user cybersecurity awareness training, such as our managed KnowBe4 offering, is important for training users to recognize the red flags of phishing schemes. Users should be advised of the current attack and its use of AI-generated phone calls. They should hang up on unexpected support calls and reach out directly to their IT support, or to Google support in the case of personal Gmail accounts.
  • LinkedIn AI Training: Users should disable the setting “Use my data for training content creation AI models” and submit a Data Processing Objection request. LinkedIn offers users no benefit for helping train its systems, and participation potentially exposes their original content, writing styles and other unique characteristics to exploitation and commercialization.
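For IT staff who prefer to script the Recall change on managed Windows 11 machines, the minimal Python sketch below sets the “DisableAIDataAnalysis” policy value that Microsoft documents for turning off Recall’s snapshot saving. The policy path, value name and per-user (HKCU) scope are assumptions drawn from Microsoft’s policy documentation and should be verified before deployment; Group Policy or Intune are the more common ways to deliver the same setting fleet-wide.

```python
# Sketch only: disables Recall snapshot saving for the current user by
# writing the assumed "DisableAIDataAnalysis" policy value. Verify the
# policy path and value name against current Microsoft documentation.
import winreg

# Assumed policy location for the Windows AI / Recall setting.
POLICY_KEY = r"Software\Policies\Microsoft\Windows\WindowsAI"


def disable_recall() -> None:
    # Create (or open) the policy key under the current user hive.
    with winreg.CreateKeyEx(
        winreg.HKEY_CURRENT_USER, POLICY_KEY, 0, winreg.KEY_SET_VALUE
    ) as key:
        # 1 = turn off saving snapshots, which disables Recall's capture.
        winreg.SetValueEx(key, "DisableAIDataAnalysis", 0, winreg.REG_DWORD, 1)


if __name__ == "__main__":
    disable_recall()
    print("Recall snapshot saving disabled; sign out or reboot to apply.")
```

Applying the same policy machine-wide would instead require writing the value under HKEY_LOCAL_MACHINE with administrative rights.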

Additional Resources and Details:

As always, if you have any additional questions or concerns about this latest security disclosure, please feel free to reach out.
