AI writers and chatbots have seen huge success in recent months, following a boom started by ChatGPT's launch in November 2022.
With its latest incarnation now upon us – the multi-modal GPT-4 model, capable of handling media like images – large language models (LLMs) have been improving worker efficiency and unlocking creativity, but they’re not without their risks.
To that end, the UK’s National Cyber Security Centre (NCSC) has issued a general warning in a blog post, exposing some of the cybersecurity risks associated with LLMs.
Technical directors and authors David C and Paul J begin by debunking some myths: for example, the growing concern that ChatGPT and similar tools will learn from and store information fed into them by end users. This is not the case, as training is carried out in a controlled environment by the creators of such models.
However, information about queries does get fed back to the relevant companies, meaning that OpenAI, the firm behind ChatGPT, will be able to determine the sorts of questions users are asking in order to improve its services. For this reason, the NCSC advises against sharing personal or confidential information anywhere online, including with chatbots.
Beyond everyday users, the authors also note that malicious actors may be able to use LLMs to carry out cyberattacks beyond their usual capabilities, suggesting that we are at an increased risk of more sophisticated attacks.
As interest in artificial intelligence continues to grow, so too does skepticism about the cutting-edge technology. Recently, Bloomberg reported that banking giant JP Morgan had introduced restrictions on staff usage of ChatGPT due to concerns about entrusting data to external software. Educational establishments around the world have also restricted its use over fears of academic dishonesty.
(Except for the headline, this story has not been edited by PostX Digital and is published from a syndicated feed.)