OpenAI Boosts Security after Espionage Accusations
To keep out prying eyes, OpenAI is tightening its security measures
OpenAI is reportedly reinforcing its security operations in response to potential corporate spying, according to a Financial Times report. The effort accelerated after allegations against Chinese start-up DeepSeek: OpenAI claimed DeepSeek improperly replicated its models using "distillation" techniques, prompting a faster security crackdown.
To protect its most sensitive assets, OpenAI has implemented "information tenting" policies that restrict employee access to confidential algorithms and unreleased products. During development of the o1 model, for instance, only pre-approved team members who had been read in on the project could discuss it in shared office spaces, the FT reports.
And that's not all. OpenAI has also taken steps to isolate proprietary technology on offline computer systems, added biometric access controls to its offices (such as fingerprint scanning), and introduced a "deny-by-default" internet policy under which any external connection requires explicit approval, the report says. It adds that OpenAI has increased physical security around its data centers and expanded its cybersecurity staff.
These changes reportedly aim to thwart foreign adversaries seeking to steal OpenAI's intellectual property. But given the ongoing talent-poaching wars among American AI companies and the frequent leaks of CEO Sam Altman's internal remarks, OpenAI may also be looking to address internal security risks.
We've reached out to OpenAI for comment.