China’s Breach of Microsoft Cloud Email May Expose Deeper Problems
Microsoft wrote last week that its investigations “did not uncover any other use of this pattern by other actors, and Microsoft has taken steps to prevent the abuse involved.” But if the stolen signing key could have been used to breach other services—even if it wasn’t used that way in the recent incident—the discovery has important implications for the security of Microsoft’s cloud services and other platforms.
The attack appears to have a broader scope than initially assumed, the Wiz researchers wrote. They added that this is not a Microsoft problem alone: if the signing key for Google, Facebook, Okta, or any other major identity provider were leaked, the consequences would be hard to fathom.
And because Microsoft products are ubiquitous around the world, Wiz’s Luttwak emphasized that the incident should serve as an important warning.
“There are still questions that only Microsoft can answer. For example, when was the key compromised? And how?” he said. “Once we know that, the next question is, do we know that was the only key that was compromised?”
In response to China’s breach of US government cloud email accounts hosted by Microsoft—an operation that US officials have openly described as espionage—Microsoft announced last week that it will offer expanded cloud logging services to all customers for free. Previously, customers had to pay for a Microsoft Purview Audit (Premium) license to record that data.
Eric Goldstein, the US Cybersecurity and Infrastructure Security Agency’s executive assistant director for cybersecurity, wrote in a blog post, also published last week, that “requiring organizations to pay more for the required logging is a recipe for inadequate investigation of cybersecurity incidents and could allow adversaries to achieve dangerous levels of success in targeting American organizations.”
Since OpenAI revealed ChatGPT to the world last November, the potential of generative AI has been pushed into the mainstream. But it’s not just text that can be generated, and many of the emerging harms of this technology are only beginning to be realized. This week, the UK-based child safety charity the Internet Watch Foundation (IWF), which searches the web for child sexual abuse images and videos and removes them, revealed that it is increasingly finding AI-generated abuse images online.
In June, the charity began logging AI-generated images for the first time, and it said it had found seven URLs sharing dozens of such images. These included AI-generated depictions of girls around 5 years old posed naked in sexual positions, according to the BBC; other images were even more graphic. While this content represents only a small fraction of the child sexual abuse material available online overall, its existence worries experts. The IWF says it has found guides on how to create lifelike images of children using AI, and that producing such images—which is illegal in many countries—has the potential to normalize and encourage predatory behavior toward children.
After threatening a global password-sharing crackdown for years, Netflix launched its enforcement efforts in the US and UK in late May. And the push seems to be going according to plan: in earnings reported on Thursday, the company said it added 5.9 million new subscribers over the past three months, nearly three times more than analysts had predicted. Streaming subscribers, long accustomed to sharing passwords, balked at Netflix’s strict new rules, which were prompted by stalled subscriber growth. But in the end, at least a portion of account sharers appear to have relented and started paying for accounts of their own.