On November 9, 2023, Microsoft briefly restricted employee access to OpenAI’s ChatGPT, citing security concerns. The move came as a surprise, given that Microsoft has invested billions of dollars in OpenAI and works closely with the company to develop and deploy AI products built on its models.
In an internal update, Microsoft said that “a number of AI tools are no longer available for employees to use” due to security and data concerns. The company did not specify which AI tools were affected, but CNBC reported that ChatGPT was one of them.
The announcement was posted to an internal website, and the firm went so far as to block corporate devices from accessing the AI chatbot. The notice read, in part:
While it is true that Microsoft has invested in OpenAI, and that ChatGPT has built-in safeguards to prevent improper use, the website is nevertheless a third-party external service […] That means you must exercise caution using it due to risks of privacy and security. This goes for any other external AI services, such as Midjourney or Replika, as well.
The move sparked speculation about the potential security risks associated with ChatGPT. Some experts suggested that the tool could be used to generate phishing emails or other malicious content. Others raised concerns about the potential for ChatGPT to be used to steal confidential data from Microsoft or its customers.
Microsoft lifted the restriction after a few hours, but the incident highlighted growing concerns about the security of large language models like the one behind ChatGPT. As these models become more powerful and sophisticated, it is increasingly important to develop safeguards against their misuse.
Speaking to CNBC, Microsoft acknowledged that the temporary restriction was an unintentional error arising from a system test for large language models. A spokesperson for the firm said:
We were testing endpoint control systems for LLMs and inadvertently turned them on for all employees […] We restored service shortly after we identified our error. As we have said previously, we encourage employees and customers to use services like Bing Chat Enterprise and ChatGPT Enterprise that come with greater levels of privacy and security protections.
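The failure mode Microsoft describes, a control meant for a test that ended up applying to everyone, is a classic policy-scoping pitfall. The following Python sketch is entirely hypothetical (none of these names reflect Microsoft’s actual tooling); it shows one way an unscoped block policy can silently become global when an empty target group is treated as “all users” rather than “no users”:

```python
# Hypothetical sketch of an endpoint-control policy rollout bug: a block rule
# intended for a small pilot group applies tenant-wide because the scoping
# filter is left empty. Illustrative only; not Microsoft's actual system.

from dataclasses import dataclass, field


@dataclass
class BlockPolicy:
    blocked_domains: set[str]
    # An empty pilot group is intended to mean "no one yet", but the buggy
    # check below treats it as "everyone".
    pilot_group: set[str] = field(default_factory=set)

    def applies_to(self, user: str) -> bool:
        # BUG: `not self.pilot_group or ...` makes an unscoped policy global,
        # so a rule still being tested reaches all employees.
        return not self.pilot_group or user in self.pilot_group


def can_access(user: str, domain: str, policy: BlockPolicy) -> bool:
    """Return True if the user may reach the domain under this policy."""
    if policy.applies_to(user) and domain in policy.blocked_domains:
        return False
    return True


if __name__ == "__main__":
    # The pilot group was never populated, so the test rule blocks everyone.
    policy = BlockPolicy(blocked_domains={"chat.openai.com"})
    print(can_access("alice", "chat.openai.com", policy))  # False: blocked
```

A safer design fails closed in the other direction: an unscoped policy should apply to no one, so that a half-configured test rule has no effect instead of blocking the entire workforce.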
Several major corporations have restricted the use of ChatGPT to mitigate the risk of disclosing sensitive information. In its internal update, Microsoft recommended its own Bing Chat tool, which is built on artificial intelligence models developed by OpenAI.
According to Insider, a senior Microsoft engineer wrote in a forum discussion that employees were permitted to use ChatGPT, but cautioned against entering confidential information into the tool.
Meanwhile, Microsoft has spent the year rolling out updates to its Windows operating system and Office applications that draw on OpenAI services, which are in turn hosted on Microsoft’s Azure cloud infrastructure.
Overall, the incident is a reminder that even the most advanced AI technologies are not without risk. As we continue to develop and deploy these technologies, it is important to be mindful of the potential security risks and to take steps to mitigate them.