ChatGPT: How the Use of AI Will Affect Your Business
There is no escaping ChatGPT. The AI-backed language model has caught the public’s imagination through its power to instantly create prose that passes for authentic human writing.
It’s been used by professionals, students and hobbyists to generate quotes, answer questions and summarise research. But as the technology’s popularity has soared, attitudes towards it have changed drastically.
What was once little more than a novelty, something we toyed with to engage in conversation or create fiction, could soon have major economic consequences.
In a widely shared report from earlier this year, 49% of organisations said they had already adopted ChatGPT in the workplace, while another 30% said they were planning to. The technology is already supporting a range of tasks, from content writing and public relations to coding and audit reporting.
As the technology develops, there are fears that it could wipe out entire job roles, with specific versions of GPT gaining the expertise to handle tasks with almost no human intervention.
A study produced by ChatGPT claimed that 80% of all jobs in the US were at risk. Granted, we should take that figure with a grain of salt, given the source of the study and its desire to promote its own capabilities.
Nonetheless, it demonstrates the scale to which ChatGPT and other AI could affect workplace practices. Perhaps the most striking part of all of this is how quickly it’s happened.
ChatGPT was released publicly only five months ago, and the speed with which it is disrupting roles that have remained unchanged for decades has heightened the public response on both ends of the spectrum.
On the one hand, you have employees panicking that their job, and their entire skillset, is about to be made redundant, and on the other you have employers who see it as a deus ex machina amid a faltering economy and the cost of living crisis.
It’s understandable that discussions of ChatGPT in the workplace have centred on its economic – and perhaps existential – consequences, but this is only the tip of the iceberg. There is almost no doubt that it will change the way your organisation operates. But what happens next is a whole other concern.
Will you be breaking the law?
One of the biggest questions hanging over AI and the use of these platforms in the workplace is how it will affect your data protection compliance practices.
These platforms store interactions with users by default, and that information is used to develop their capabilities and provide responses to other users’ queries.
This creates two potential regulatory compliance problems. The first, and most obvious, is that your employees might input corporate information into the platform, where it will then sit on the AI platform’s database.
It could then be viewed by the platform’s employees, compromised in a cyber attack or surfaced to other users in response to their prompts.
The second, and less obvious, problem is that your organisation might unwittingly use confidential information that someone else has provided.
This can cause several major problems. On the less severe end of the spectrum, individuals might inadvertently plagiarise content or cite incorrect sources. If discovered, this could damage their reputation, result in an order to remove the content or possibly even a legal challenge.
Of even greater concern, however, is the possibility of data privacy breaches. After all, despite generating the information artificially, these platforms use pools of data that come from real people – and they are entitled to the same data privacy protections as anyone else.
If you are familiar with or subject to the GDPR (General Data Protection Regulation), you will know that that’s a problem.
The Regulation contains strict rules regarding the way personal information is processed and stored, and those requirements are especially tough when it comes to third parties – which an AI platform would be in these circumstances.
Specifically, the GDPR states that organisations can be held liable for a security incident that occurs further down the supply chain.
It’s why experts recommend that organisations receive assurances that third parties have appropriate technical and organisational measures in place. They might do this by auditing their processes, and by including provisions in their contractual agreements stipulating that certain practices are carried out.
When you consider that many of these AI platforms are based in the US, where there is less emphasis on data protection and data privacy, you should begin to question the compliance headaches you might be creating.
Indeed, the godfather of these platforms, ChatGPT, recently found itself in regulatory hot water after the Italian data protection watchdog found several problems in the way it uses personal data.
The Garante per la protezione dei dati personali, Italy’s data protection supervisory authority, found that the chatbot and its developer, OpenAI, breached several requirements of the GDPR.
The compliance problems stemmed from the fact that OpenAI trained its language model on 570GB of data from the Internet, including webpages, books and other material.
In a strict regulatory sense, that may not necessarily be a problem, particularly if data subjects have themselves uploaded that information. But the fact that personal data is easily accessible on the Internet and in the public domain does not automatically mean it falls outside the scope of the GDPR.
In addition, the information can be repurposed by ChatGPT and used to answer questions, and this is where further issues arise.
The language model doesn’t know the context in which the data was originally gathered. As a result, it often produces output that sounds factual but is inaccurate or contains false information, including details about specific, named individuals.
As TechCrunch observes, “This looks problematic in the EU since the GDPR provides individuals with a suite of rights over their information – including a right to rectification of erroneous information.
“And, currently, it’s not clear OpenAI has a system in place where users can ask the chatbot to stop lying about them.”
Managing these problems
If your organisation uses an AI platform, you must consider the following factors:
- Contractual terms: You should review the terms of service or data processing agreement provided by the platform, and ensure that it includes appropriate provisions that meet the requirements of the GDPR.
- Lawful basis: Identify your lawful basis for processing personal data. This could be consent from individuals, performance of a contract, compliance with a legal obligation, protection of vital interests, performance of a task carried out in the public interest or in the exercise of official authority, or legitimate interests pursued by your organisation or a third party.
- Data minimisation: Avoid sharing unnecessary personal information and only provide data that is essential for the intended purpose, for example by stripping obvious identifiers from prompts before they leave your systems (see the sketch after this list).
- Data subject rights: Ensure that you have mechanisms in place to address data subject rights under the GDPR, such as the rights of access, rectification, erasure, restriction of processing, data portability and objection to processing.
- Security measures: Verify that the platform has implemented appropriate technical and organisational measures to protect personal data against unauthorised access, loss, or alteration.
- Data transfers: If personal data is transferred outside the European Economic Area, ensure that appropriate safeguards are in place.
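To make the data minimisation point concrete, here is a minimal sketch, in Python, of how obvious identifiers could be stripped from a prompt before it is sent to a third-party AI service. The patterns and the send_to_ai_platform function are illustrative assumptions rather than any real vendor’s API, and genuine redaction would need far more sophisticated detection (names, addresses, customer references and so on).

```python
import re

# Simple patterns for obvious identifiers; real-world redaction would need
# far more robust detection than these two expressions.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\d[\d\s()-]{8,}\d"),
}

def redact(text: str) -> str:
    """Replace obvious personal identifiers with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

def send_to_ai_platform(prompt: str) -> None:
    # Hypothetical stand-in for whichever AI service your organisation uses.
    print("Prompt actually sent:", prompt)

if __name__ == "__main__":
    raw_prompt = (
        "Summarise this complaint from jane.doe@example.com, "
        "who asked us to call back on +44 20 7946 0958."
    )
    send_to_ai_platform(redact(raw_prompt))
```

Even a basic filter like this reduces the amount of personal data that reaches a third party, which is the essence of the data minimisation principle.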
Meanwhile, OpenAI also has more prosaic regulatory problems. The Garante notes that the organisation failed to verify the age of its users, potentially exposing minors to inappropriate content – in breach of the GDPR.
Elsewhere, the regulator drew attention to a data breach that ChatGPT suffered on 20 March. A bug allowed some users to see the titles of other active users’ chats and, in some cases, the first message of a newly created conversation, as well as payment details and other information belonging to subscribers who were active during a nine-hour window.
Besides the more obvious measures, such as doing more to verify users’ ages, OpenAI has been told that it must offer greater transparency regarding its data processing practices.
Transparency is one of the key principles of the GDPR, and although the term isn’t defined in the Regulation itself, Recital 39 provides some clarity.
It explains that organisations must tell data subjects “[what] personal data concerning them [is] collected, used, consulted or otherwise processed and to what extent the personal data are or will be processed”.
The Garante echoes this, instructing OpenAI to describe “the arrangements and logic of the data processing required for the operation of ChatGPT along with the rights afforded to data subjects (users and non-users)”.
It adds that this information “will have to be easily accessible and placed in such a way as to be read before signing up to the service.”
A force for good
It’s not all doom and gloom for the information security prospects of AI-backed language models, though. Cyber security experts have been quick to point out the ways in which the tools can enhance automated threat detection systems.
Similarly, the systems can be used to collate data from previous breaches to spot trends in the way attackers are targeting organisations and to develop protections that thwart those attacks.
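As a rough illustration of what “collating data from previous breaches to spot trends” means in practice, the short Python sketch below tallies attack vectors across a handful of past incident records. The records and field names are invented for this example; real AI-assisted tooling would perform the same kind of aggregation at far greater scale, across threat intelligence feeds rather than a hand-written list.

```python
from collections import Counter
from datetime import date

# Hypothetical incident records, invented purely for illustration.
incidents = [
    {"date": date(2023, 1, 12), "vector": "phishing"},
    {"date": date(2023, 2, 3),  "vector": "credential stuffing"},
    {"date": date(2023, 2, 19), "vector": "phishing"},
    {"date": date(2023, 3, 7),  "vector": "unpatched VPN appliance"},
    {"date": date(2023, 3, 28), "vector": "phishing"},
]

# Count how often each attack vector appears, most common first.
trend = Counter(incident["vector"] for incident in incidents)

for vector, count in trend.most_common():
    print(f"{vector}: {count} incident(s)")
```

The value lies less in the code than in the workflow: the more historical incident data a system can digest, the sooner recurring attack patterns become visible and can be defended against.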
Both Google and Microsoft have released versions of this technology, which will be embedded within their platforms in an attempt to simplify cyber security.
Google announced its Cloud Security AI Workbench at the RSA Conference 2023, which will feature its proprietary language model called Sec-PaLM.
It incorporates security intelligence on issues such as software vulnerabilities, malware and threat indicators.
One of the main benefits of the Cloud Security AI Workbench, Google says, is the breadth of resources at its disposal. It combines a range of new AI-powered tools, such as Mandiant’s Threat Intelligence AI, which will use Sec-PaLM to find, summarise and respond to security threats.
Mandiant is owned by Google, as is VirusTotal, which will also use Sec-PaLM to help subscribers analyse and explain malicious scripts.
Sec-PaLM will also reportedly help customers of Google’s Cloud cyber security service Chronicle to search for security events. The tool aims to apply the language capabilities seen in the likes of ChatGPT and Bard, allowing users to interact with it in a conversational manner.
Meanwhile, users of Google’s Security Command Center AI will get “human-readable” explanations of attack exposure, including affected assets, recommended mitigation strategies and risk summaries for security, compliance, and privacy findings.
Commenting on its Cloud Security AI Workbench, Google’s vice president and general manager, Sunil Potti, said: “While generative AI has recently captured the imagination, Sec-PaLM is based on years of foundational AI research by Google and DeepMind, and the deep expertise of our security teams.
“We have only just begun to realize the power of applying generative AI to security, and we look forward to continuing to leverage this expertise for our customers and drive advancements across the security community.”
Several weeks before Google announced its Cloud Security AI Workbench, Microsoft revealed its own cyber security machine learning programme: Security Copilot.
The platform works alongside Microsoft’s other security products to summarise and “make sense” of threat intelligence, which Microsoft hopes will help prevent data breaches.
Microsoft said that Security Copilot uses information fed to it through GPT-4 to study data breaches and find patterns.
The tech giant didn’t explain exactly how it incorporates GPT-4, instead highlighting its trained custom model that “incorporates a growing set of security-specific skills” and “deploys skills and queries” related to cyber security.
Security Copilot looks like many of the other chatbot interfaces that we have surely all now experimented with in the past few months, but the data that it’s been taught with relates specifically to cyber threat intelligence.
“We don’t think of this as a chat experience. We really think of it as more of a notebook experience than a freeform chat or general purpose chatbot,” explained Chang Kawaguchi, an AI security architect at Microsoft, in an interview with The Verge.
This all bodes well for the future of automated threat detection, but once again there is reason to exercise caution.
As Kyle Wiggers, a senior reporter at TechCrunch, noted in his report on Google’s Cloud Security AI Workbench, all AI language models make mistakes, no matter how cutting edge they are. In particular, he referenced their susceptibility to attacks such as prompt injections, which can cause them to behave in ways that their creators didn’t intend.
“In truth, generative AI for cybersecurity might turn out to be more hype than anything,” Wiggers concluded, noting the lack of studies on its effectiveness. “We’ll see the results soon enough with any luck, but in the meantime, take Google’s and Microsoft’s claims with a healthy grain of salt.”
Aim higher than compliance
Despite the concerns surrounding their practical applications, we can’t see the interest in AI language models going away any time soon. Some organisations, such as Apple, have greatly restricted their use, but not everyone can afford to be so circumspect.
We are in the midst of an AI arms race, and those that figure out a way to use the technology effectively and responsibly will reap the rewards.
One of the main challenges that you must overcome is how to align your new AI weapons with your regulatory requirements. You may well save money by automating certain processes, but if you fall foul of the GDPR, you could be subject to a fine and other enforcement action that leaves you worse off than before.
Moreover, you could suffer severe reputational damage if you’re found to have used AI improperly. This could be the case whether you commit a data privacy breach or fail to check the content the technology has produced and inadvertently publish incorrect or misleading information.
It’s why organisations must be extremely careful in their AI endeavours. The technology creates an array of risks that put you in a precarious compliance position, and you will need strict policies and processes stating what can and cannot be done using AI.
Even if you operate flawlessly, you have to win over your customers. It’s their data that could be compromised, and you must prove to them that their information is safe with you.
One of the best ways to gain their trust is by seeking assurances from experts that your business practices are robust. With guidance from DQM GRC’s team of consultants, you can ensure that you’re aware of any data privacy risks you might face.
We have worked hard to understand the challenges you face, including ensuring that your practices (such as using AI) won’t land you in hot water.
One of the services that may help you in this area is a Supply Chain Audit, where we assess your suppliers and report any data protection deviations from your contract – for example if a supplier of yours is using AI in a way that may compromise data protection. We also offer bespoke consultancy services, which could cover a review of your own use of AI for business purposes.