Guarding Sensitive Data: The Growing Concerns Surrounding ChatGPT and Data Security in AI Services

As the use of OpenAI’s ChatGPT continues to rise among employees, concerns about data security and the potential incorporation of sensitive information into large language models (LLMs) have grown. Cyberhaven, a data security service, has identified a significant risk of confidential data leakage through ChatGPT, with 4.2% of workers at client companies attempting to input sensitive data into the LLM.

Examples include executives pasting strategy documents into ChatGPT for PowerPoint creation, or doctors using patients’ personal and medical information to draft letters. Cyberhaven CEO, Howard Ting, predicts that the migration of data to generative apps like ChatGPT will increase, potentially exacerbating these risks.

Companies and security professionals are becoming increasingly concerned that sensitive data may resurface through ChatGPT and other LLMs. In response, JPMorgan has restricted workers’ use of ChatGPT, while Amazon, Microsoft, and Walmart have warned their employees to exercise caution when using generative AI services.

Karla Grossenbacher, a partner at law firm Seyfarth Shaw, suggests that employers should include prohibitions on sharing confidential information with AI chatbots or LLMs like ChatGPT in employee confidentiality agreements and policies. She also notes that employees could unintentionally receive and use copyrighted or trademarked material, or other intellectual property, from ChatGPT, creating legal risks for employers.

In 2021, researchers discovered that “training data extraction attacks” could successfully recover text sequences, personally identifiable information (PII), and other data from LLMs like GPT-2. Such attacks could reveal sensitive information or intellectual property and pose a significant threat to companies and individuals.

Despite the risks, the adoption of ChatGPT and other AI-based services is accelerating, with companies like Snap, Instacart, and Shopify using the ChatGPT API to enhance their applications. To address data security concerns, Cyberhaven’s Ting recommends educating employees on the risks of sharing sensitive information with ChatGPT. OpenAI is also working to limit ChatGPT access to personal and sensitive data by programming the LLM to avoid providing personal details or sensitive corporate information when queried.
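One practical step in line with Ting’s advice is screening prompts for obvious personally identifiable information before they leave the organization. The sketch below is a minimal, hypothetical illustration using simple regular expressions; the pattern names and placeholders are invented for this example, and production data-loss-prevention tools rely on far broader and more accurate detection than a handful of regexes.

```python
import re

# Illustrative patterns for a few common PII types; real DLP rule sets
# are much larger and handle many more formats and edge cases.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each PII match with a labeled placeholder before the
    text is sent to an external service such as the ChatGPT API."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Patient John Doe (SSN 123-45-6789, jdoe@example.com) needs a referral letter."
print(redact(prompt))
# → Patient John Doe (SSN [SSN], [EMAIL]) needs a referral letter.
```

A gateway like this only catches well-formed identifiers; free-text secrets such as strategy documents or unstructured medical notes still require policy and employee education, which is why Ting emphasizes training over tooling alone.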
