AI Privacy & Security: Do's and don'ts of safe online behaviour

11/21/2024 - 14:22

Recent and ongoing developments in artificial intelligence offer significant benefits in terms of utility and accessibility for individual users. However, these advancements come with the potential risk of compromising personal data privacy. The value of this trade-off varies depending on the specific use case. Wouter van Tankeren, Lecturer in Data Governance and Business Intelligence at Breda University of Applied Sciences, shares his top tips for safe online behaviour in this evolving digital landscape.

In recent years, the field of Artificial Intelligence (AI) has seen rapid advancements, particularly in the development of large language models (LLMs) like GPT and Gemini. These technologies have gained significant momentum, prompting many companies to integrate AI functionalities into their existing products or to launch new AI-driven services. While this surge in AI applications offers exciting possibilities, it also creates a complex and crowded market. The influx of new AI-based tools and the addition of AI features to existing apps can make it challenging to navigate this evolving landscape.

When it comes to responding to such changes at a global scale, higher education institutions fall on a spectrum between a somewhat risk-averse, reactive approach and a more experimental, proactive one. Whereas most universities seem to take the 'wait and see' route, BUas is striving to take a proactive route, which offers many potential opportunities in the way we work, teach, study, and conduct our research.

However, proactive experimentation does carry risks, especially related to the safety and security of (personal) data…


The AI problem

The additional risks that AI poses mainly have to do with how AI models are trained and maintained: they need a lot of data to function, and once data has been processed into a model, it becomes practically impossible to remove specific 'elements' from that model again. Once something is in the AI model, we should expect it to remain there indefinitely: this is sometimes called (training) data persistence.

So, while you may legally be within your rights to revoke access to your data (and under the GDPR you have that right), this is made nigh impossible by the nature of the technology. And this means that data embedded in a model may later 'leak' out of it.
 

What does that mean for you PERSONALLY? 

All the things you (should) have learned over the past decade or so about safe online behaviour still apply when AI is involved, only more so. The principles themselves have not changed much, but some of them have become even more important than they already were:

Informed consent
Always make sure you know what you are agreeing to before you click 'agree'. At the very least, check how any data you enter will be used, and do not agree if what you find is not to your liking or does not make sense (e.g., TikTok does not need access to your clipboard data).

Check your permissions 
Most applications - whether on PC, Mac, phone, or Roomba - have settings related to sharing data. For example, apps often have an option along the lines of 'share your data with [app publisher] to help improve our services': if you have any privacy concerns, you should probably turn this off.

Opt out by default 
Get in the habit of not sharing data unless you have a sufficiently compelling reason to do so. You (probably) do not need to dismiss sharing your personal data outright, but be careful and smart about it and you should be fine.

2FA everything 
If you want to be secure, add an extra security layer wherever you can: two-factor authentication (2FA) is one of the most powerful security measures you can take. If the option is available, you should probably use it.
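To make the mechanism less abstract: the six-digit codes produced by authenticator apps are typically time-based one-time passwords (TOTP). Below is a minimal sketch of the derivation, using only the Python standard library; the secret shown is a well-known demo value, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (sketch of RFC 6238)."""
    key = base64.b32decode(secret_b32)      # shared secret, usually scanned from a QR code
    counter = int(time.time()) // interval  # current 30-second time window
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F              # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# 'JBSWY3DPEHPK3PXP' is a standard documentation/demo secret, not a real one.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because a valid code requires both the shared secret and the current time window, a stolen password alone is no longer enough to get into your account.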

Maintain good password hygiene 
If anything, the advancements in machine learning and AI have made it much more likely that passwords will be cracked. Either way, good password discipline is important, which means:
 

  1. Use good, unique passwords (not 'correcthorsebatterystaple', which is famous enough to appear in every cracking dictionary).
  2. If you find all this too cumbersome: password management apps may be interesting for you (e.g., Bitwarden, KeePassXC, Dashlane).
  3. Check periodically whether your data has been exposed (e.g., at https://haveibeenpwned.com; see the sketch after this list).
  4. Change passwords periodically (or immediately if your data was compromised).
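As a concrete illustration of point 3, here is a minimal sketch of automating a breach check against the Have I Been Pwned 'range' API. It relies on k-anonymity: only the first five characters of the password's SHA-1 hash ever leave your machine, never the password itself.

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how often a password appears in known breaches (0 = not found)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # k-anonymity: only the 5-character hash prefix is sent over the network.
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

# 'correcthorsebatterystaple' is a famous example and shows up in many breaches.
print(pwned_count("correcthorsebatterystaple"))
```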
     

What does that mean for you PROFESSIONALLY?

Despite the advice above, you are free to do whatever you want with your own personal data. However, most of us also have access to data of our students, colleagues, and/or third parties, which means we bear a responsibility to them to be judicious about who gets access to that data and why.

Necessity or permission 
Do not share data of/on others unless you have to and have permission to do so. The GDPR requires a lawful basis, such as explicit consent or necessity, to process someone's data.

Anonymize or pseudonymize 
If you really (really!) have to share (you probably really don't), then make sure to anonymize or pseudonymize the data you are entering (note: in MS Word, Ctrl-F and Ctrl-H can work wonders). Do note: find-and-replace is definitely not a fool-proof method of prevention.
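A step up from manual find-and-replace is scripting the pseudonymization before any text goes into an AI tool. The sketch below is purely illustrative: the e-mail pattern is a simple approximation, and the name table is a hypothetical stand-in for whatever identifiers occur in your own documents.

```python
import re

# Hypothetical identifiers to scrub; in practice, build this table from your own data.
NAMES = {"Jane Doe": "PERSON_1", "John Smith": "PERSON_2"}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def pseudonymize(text: str) -> str:
    """Replace known names and any e-mail addresses with placeholder tokens."""
    for name, token in NAMES.items():
        text = text.replace(name, token)
    return EMAIL_RE.sub("EMAIL_REDACTED", text)

print(pseudonymize("Ask Jane Doe (j.doe@example.com) about the exam results."))
# -> Ask PERSON_1 (EMAIL_REDACTED) about the exam results.
```

As the note above says, this is not fool-proof: indirect identifiers such as student numbers, course codes, or rare job titles can still make someone re-identifiable.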


Where does the AI Pioneer team come in?

It would be unreasonable to expect every individual student or staff member of BUas to be completely informed about every single potentially useful application out there. The AI Pioneer team of BUas is trying to provide students and staff with the means to 'ride the AI wave' in a safe and sound manner, while also taking away (some of) the friction. However, we also need to maintain some control over the personal data we, as employees of BUas, bear a collective responsibility for. In turn, that means we need to know where our data is and who has access to it.

To that end, we are working on a collection of AI-based tools that are approved for use within our organization (as in: they can be considered 'safe for use'). Together, the BUas community can build a collection of tools that help us do our work assisted by AI, without neglecting the privacy of our students and colleagues.


Disclaimer: The header image was created using AI.