
5 GPT Security Risks to Consider When Summarizing Meetings

GPT is an incredible technological evolution. There’s no doubt about it. More and more people are becoming aware of the ways the tech can be used, but there are plenty of unaddressed GPT security risks that get swept under the rug too.

And we’re not just talking about OpenAI’s public version of ChatGPT here, which has its own array of controversies. Instead, we’re focusing on the vulnerabilities of using GPT in businesses, feeding it the specifics of an individual company. These specifics often include private or sensitive information that shouldn’t be available to people outside your organization.

So how do GPT security risks come into play here? And more importantly, how can we avert them?

In this article, we’ll cover the major security concerns with GPT in general, and the more specialized ones that come into play when it’s used to summarize meetings. With so much going digital, and so many hacks and exploits around, people are becoming more cautious about who has access to their data – and rightly so. By the end of this article, you should be able to alleviate most of your customers’ concerns about their data security, as well as employ better practices at work when integrating with technologies like GPT.

So let’s get to it. Brace yourself for a good, long look at the underbelly of GPT.

General Security Risks with GPT

Data Privacy

Data privacy was a large part of the recent writers’ and actors’ strikes that brought Hollywood to a standstill. They wanted to ensure their likeness – or writing style – couldn’t be fed into an AI machine to effectively replace them. If GPT advances to the point where it can accurately emulate a writer’s style, or an actor’s face, then that writer or actor may not be needed anymore. Ever seen Black Mirror?

There’s also an ongoing class action lawsuit against OpenAI and other AI builders for infringing copyright. It’s been described as “software piracy on an unprecedented scale.” And it’s hard to deny. ChatGPT isn’t the only one in the firing line, though. Microsoft is also guilty as sin, with its GitHub Copilot accused of scraping licensed code that requires credit when reused – which isn’t given.

This is an eerie foreshadowing of the stereotypical dystopian future where AI replaces almost every industry. It starts with copying from creatives without giving any credit whatsoever, then emulating and replicating their style, content, and work. Once you’ve got an AI system that can write like George R.R. Martin in a few seconds – why would you wait for The Winds of Winter?

There are obviously nuances here. It’s arguable that AI will never create like a human – though it would probably do a better job than David Benioff and D.B. Weiss did trying to finish Martin’s story in Game of Thrones.

What it really means is that specialized creatives may be more highly sought after compared to the generic mass-produced AI stuff that’s already being pumped out all over the internet. Whatever happens in the future, there’s no denying that this is a key – and possibly the most important – GPT security concern.

Centralized Storage

In January 2024, ChatGPT received an average of around 54 million visits every single day. That’s a lot of people’s data to protect. And the problem with GPT is that all of your data is stored on OpenAI’s centralized servers. If they’re compromised, so is everyone’s data.

Facebook, LinkedIn, Yahoo, and practically every other huge tech name under the sun have suffered enormous exploits over the past several years, with billions of people’s data leaking onto the black market. One of the big concerns about GPT is a similar hack or exploit that leaks people’s private information, including name, email, prompts and conversations, as well as payment information, approximate location, and IP address.

As history has shown, this isn’t an improbable event. In fact, hacks like this are quite common. When a single point of failure is responsible for the safety and protection of hundreds of millions of people’s data, that single point becomes a lucrative target.

Centralized data storage is a big concern for all big databases. However, arguably the most controversial GPT privacy concern is the fact that Generative Pre-trained Transformers are trained on vast swathes of data, plucked from the internet without permission from the rightful owners. Blog posts, videos, books, and movies are all fed into GPT so it can learn conversational intelligence – but the people behind that work are neither credited nor compensated.


Misinformation

GPT models can potentially be used to generate content that’s deemed misinformation. This is particularly true for businesses that integrate GPT into their customer support. If a customer reaches out and has a conversation with your AI, you want it to deliver correct information about your business and/or product.

GPT generates answers based on learned patterns. That means its responses are only as good as the data it’s been given. If there’s a human error when feeding it that data, it can show up in the responses, causing confusion and even turning customers away. GPT doesn’t care about what’s right, only what it was trained on.

5 GPT Security Risks for Meeting Summaries

So what about the GPT security risks that are more common for businesses integrating GPT meeting summaries with their tech? Let’s take a peek.

1. Data Privacy and Confidentiality

When you record online meetings, it’s very possible that you’re going to capture sensitive business information or personal data of various participants. This is especially true for internal calls. 

There’s obviously a risk here of unintentionally disclosing confidential information if the meeting recordings aren’t secured well enough. The transcripts are also vulnerable if they’re stored in an insecure location or shared freely with external parties.

The best way to counter potential data leaks is to encrypt recordings and transcripts both in transit and at rest. Another good tip would be to restrict access to the meeting recordings and transcripts to strictly authorized personnel. With tl;dv, your meetings, summaries, and customer insights are all stored in the tl;dv library, only accessible by your team.
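To make the “authorized personnel only” tip concrete, here’s a minimal Python sketch of a deny-by-default access check for stored recordings. The `Recording` class and user IDs are hypothetical, purely for illustration – not tl;dv’s actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Recording:
    """A stored meeting recording with an explicit allow-list of viewers."""
    title: str
    authorized: set = field(default_factory=set)

def can_access(recording: Recording, user_id: str) -> bool:
    # Deny by default: only explicitly authorized personnel get through.
    return user_id in recording.authorized

rec = Recording("Q3 roadmap call", authorized={"alice", "bob"})
print(can_access(rec, "alice"))    # True
print(can_access(rec, "mallory"))  # False
```

The key design choice is the explicit allow-list: access is granted only to people who were deliberately added, rather than denied only to people who were deliberately blocked.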

2. Misuse of Recorded Content

If your meeting recording or transcript falls into the wrong hands, it can be manipulated to misrepresent what was said. This can lead to potential legal consequences, not to mention the devastating impact it could have on your reputation.

With tl;dv, your meeting recording and transcript will only be accessible by those who attended the meeting, or teammates who have access to your business’s tl;dv library. You can share the call externally through any number of integrations, but so long as you have the original, any attempt to misrepresent the discussion can be quelled instantly.

3. Inaccurate or Biased Transcripts

GPT models may introduce biases or inaccuracies when transcribing meeting recordings, especially if the model was not trained on diverse or representative data. This could impact decision-making or lead to misunderstandings among participants.

Luckily for you, tl;dv has state-of-the-art speaker recognition and can pick up on different accents and colloquial phrases. It understands context like no other meeting recorder and can quickly and accurately differentiate who’s speaking at any given moment.

This avoids confusion and allows your transcripts to stand as pillars of truth.

4. Compliance and Regulatory Concerns

Depending on the industry and jurisdiction, there are most likely gonna be regulations governing the recording and storage of meeting content. This is particularly true if it involves sensitive data or personally identifiable information. Failure to comply with these regulations could result in penalties or legal action – something nobody wants to deal with.

To counter this, you’ll have to ensure your team is familiar with GDPR compliance and the ever-evolving landscape of regulations. This includes industry-specific guidelines that can vary drastically depending on region and field of work. 

OR if you don’t want all that hassle, you could just let tl;dv handle it for you… tl;dv is GDPR compliant for all users, and the team keeps an internal record of data processing activities to keep track of exactly how personal data is processed. We take security seriously.

5. Retention and Deletion Policies

Ideally, your business will establish clear policies regarding the retention and deletion of meeting recordings and transcripts. This minimizes the risk of unauthorized access or misuse. You don’t want meeting recordings being hoarded for years and years. The longer you retain them, the higher the chances are that there will be a data leak somewhere along the lines. 

Your customer has the right to request that their data is deleted, too – and this needs to be respected. A clear plan should be in place in case this request is ever made. It’s not easy to sift through thousands of videos to find the ones you’re looking for. That is, of course, unless you’re using tl;dv. Then it’s so simple that you could’ve done it by the time you finish reading this paragraph.

Simply search via keyword and immediately find all the recordings that mention it. Instantly. Not only does it locate the specific recording, it links to the timestamp of the exact moment in the transcript where the discussion moves on to the keyword. Click there to jump straight into the video call recording.
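Conceptually, that keyword-to-timestamp lookup can be sketched in a few lines of Python. The transcript format here – a list of `(seconds, text)` pairs – is an assumption for illustration, not tl;dv’s actual data model:

```python
def search_transcript(segments, keyword):
    """Return (timestamp, text) for every segment that mentions the keyword."""
    kw = keyword.lower()
    return [(ts, text) for ts, text in segments if kw in text.lower()]

# Toy transcript: (seconds from start of the call, spoken text).
segments = [
    (12, "Welcome everyone, let's get started."),
    (95, "The pricing change ships next sprint."),
    (210, "Back to pricing: customers asked for annual billing."),
]

for ts, text in search_transcript(segments, "pricing"):
    print(f"{ts}s: {text}")  # prints the 95s and 210s segments
```

Each returned timestamp is exactly the kind of anchor a player can use to jump straight to that moment in the recording.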

On tl;dv’s free plan, your videos will automatically be deleted after 6 months. On the Pro plan, you’re gonna have to implement your own retention and deletion workflows. But this is not as hard as it sounds if you get it done early.
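If you do roll your own retention workflow, the core of it is just a scheduled job that flags anything older than your retention window for deletion. A minimal Python sketch – the 180-day window and the recording catalogue are illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=180)  # illustrative six-month retention window

def is_expired(recorded_at, now=None):
    """True if a recording has outlived the retention window and should be purged."""
    now = now or datetime.now(timezone.utc)
    return now - recorded_at > RETENTION

now = datetime.now(timezone.utc)
recordings = {
    "kickoff-call": now - timedelta(days=400),   # past retention
    "sprint-review": now - timedelta(days=30),   # still within retention
}

to_delete = [name for name, ts in recordings.items() if is_expired(ts)]
print(to_delete)  # ['kickoff-call']
```

Run something like this on a schedule and the “hoarded for years and years” problem largely takes care of itself.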

For more info about how long you can (and should) store video call recordings, check out our article on exactly that!

BONUS Risk: GPT Saves Too Much of Your Time

While there are some very serious risks when it comes to GPT technology, its usefulness can’t be overstated. With tl;dv’s GPT-powered AI, you can automate your workflows so that all the data received from sales calls or customer service calls can be funneled directly into your CRM. You don’t need to lift a finger.

No more manual uploading. All you have to do is set up a two-second integration with your CRM of choice. It also integrates with your calendar, email app, and work platforms like Slack, Notion, and Trello.
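Under the hood, integrations like this boil down to shaping each call’s output into a record the destination system can ingest. Here’s a hypothetical Python sketch – the field names and `call` structure are illustrative, not tl;dv’s or any particular CRM’s actual API:

```python
import json

def build_crm_payload(call: dict) -> str:
    """Shape a call summary into a JSON record a CRM webhook could ingest."""
    return json.dumps({
        "contact": call["customer"],
        "summary": call["summary"],
        "recording_url": call["recording_url"],
    })

payload = build_crm_payload({
    "customer": "jane@example.com",
    "summary": "Discussed annual billing; follow up next week.",
    "recording_url": "https://example.com/recordings/123",
})
print(payload)
```

The point of a packaged integration is that this mapping – and the authentication and delivery around it – is handled for you.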

And this doesn’t just transport the call recording and some basic AI summary, either. It allows you to generate recurring reports about specific keywords mentioned across ALL your calls in one fell swoop. It’s completely revolutionizing how we collect data from our calls.

It’s going to save you so many hours, you won’t know what to do with yourself. You may actually be able to use some of those hours being productive rather than attending meetings… 

What a crazy risk that might be.

GPT Security Concerns Can Be Safely Navigated

In essence, there are ways to navigate the security concerns posed by GPT if you’re clever. For starters, you can use a secure platform that takes care of most of the stress for you, like tl;dv. 

Get started for free with unlimited call recordings and transcriptions. There’s literally nothing to lose.
