
AI Is Being Used to ‘Turbocharge’ Scams


Code hidden inside PC motherboards has left millions of machines vulnerable to malicious updates, researchers revealed this week. Researchers at security firm Eclypsium found code in hundreds of motherboard models made by Taiwanese manufacturer Gigabyte that allows an update program to download and run another piece of software. While the mechanism is intended to keep the motherboard's firmware up to date, the researchers found that it is implemented insecurely, potentially allowing an attacker to hijack it as a backdoor and install malware.
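The class of bug Eclypsium describes, an updater that fetches and executes code without verifying what it received, is easy to illustrate. The Python sketch below uses hypothetical URLs and a placeholder hash; it shows the vulnerable pattern and one basic mitigation, not Gigabyte's actual code.

```python
import hashlib
import subprocess
import urllib.request

# Hypothetical endpoint and placeholder hash, for illustration only.
UPDATE_URL = "https://updates.example.com/firmware-tool.exe"
EXPECTED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def insecure_update(url: str, dest: str) -> None:
    """The vulnerable pattern: fetch and execute with no integrity check.

    Fetched over plain HTTP, anyone on the network path can swap the
    payload; even over HTTPS, a compromised server serves whatever it likes.
    """
    urllib.request.urlretrieve(url, dest)
    subprocess.run([dest], check=True)  # runs whatever arrived

def safer_update(url: str, dest: str, expected_sha256: str) -> None:
    """A minimal improvement: pin the payload's hash before executing.

    Real updaters should go further (HTTPS with certificate validation
    plus cryptographic signatures), but even a pinned hash defeats a
    simple payload swap in transit.
    """
    data = urllib.request.urlopen(url).read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"update rejected: hash {digest} != {expected_sha256}")
    with open(dest, "wb") as f:
        f.write(data)
    subprocess.run([dest], check=True)
```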

Elsewhere, Moscow-based cybersecurity company Kaspersky revealed that its employees were targeted by newly discovered zero-click malware affecting iPhones. Victims were sent a malicious message, including an attachment, via Apple’s iMessage. The attack automatically began exploiting multiple vulnerabilities to give the attackers access to the device, after which the message deleted itself. Kaspersky said it believes the attack affected more people than just its own employees. On the same day Kaspersky revealed the iOS attack, Russia’s Federal Security Service, also known as the FSB, claimed that thousands of Russians had been targeted by new iOS malware and accused the US National Security Agency (NSA) of carrying out the attack. The intelligence agency also claimed that Apple had helped the NSA. The FSB has not released technical details to support its claims, and Apple says it has never inserted a backdoor into its devices.

If that’s not enough of an incentive for you to update your device, we’ve rounded up all the security patches released in May. Apple, Google and Microsoft all released important patches last month—go and make sure you’re up to date.

And there’s much more. Each week, we compile security stories that we don’t cover in depth ourselves. Click on the title to read the full story. And stay safe out there.

Lina Khan, chair of the US Federal Trade Commission, warned this week that the agency is seeing criminals use artificial intelligence tools to “turbocharge” fraud and scams. Her comments, made in New York and first reported by Bloomberg, cited examples of voice-cloning technology in which AI is used to trick people into believing they are hearing a family member’s voice.

Recent advances in machine learning have made it possible to imitate human voices with just a few short clips of training data, although experts say the quality of AI-generated audio can vary. In recent months, however, there has been a reported increase in the number of scams apparently involving generated audio clips. Khan said officials and lawmakers need to be vigilant early on, and that while new laws governing AI are under consideration, existing laws still apply to many cases.

In a rare admission of failure, North Korea’s leaders said this week that the reclusive nation’s attempt to put a spy satellite into orbit did not go as planned. They also said the country would attempt another launch in the future. On May 31, the Chollima-1 rocket carrying the satellite launched successfully, but its second stage failed, causing the rocket to plunge into the sea. The launch triggered an emergency evacuation alert in South Korea, which officials later retracted.

The satellite would have been North Korea’s first official spy satellite, which experts say would give it the ability to monitor the Korean Peninsula. The country has previously launched satellites, but experts believe they have not sent images back to North Korea. The failed launch comes at a time of heightened tensions on the peninsula, as North Korea continues trying to develop high-tech weapons and missiles. In response to the launch, South Korea announced new sanctions against the Kimsuky hacking group, which is linked to North Korea and is believed to have stolen confidential information related to space development.

In recent years, Amazon has come under close scrutiny for loose controls on people’s data. This week, the US Federal Trade Commission, with the support of the Department of Justice, hit the tech giant with two settlements over a series of failures relating to children’s data and its Ring smart home cameras.

In one case, officials said, a former Ring employee spied on female customers in 2017 — Amazon bought Ring in 2018 — watching videos of them in their bedrooms and bathrooms. The FTC said Ring had granted employees “dangerously broad access” to videos and had a “lax attitude toward privacy and security.” In a separate statement, the FTC said Amazon had kept recordings of children using its Alexa voice assistant and had failed to delete the data when parents requested it.

Under the two settlements, Amazon will pay around $30 million and introduce new privacy measures. Perhaps more consequentially, the FTC says Amazon should delete or destroy Ring recordings from before March 2018, as well as any “models or algorithms” developed from improperly collected data. The order must be approved by a judge before it takes effect. Amazon said it disagrees with the FTC and denies “breaking the law,” but added that the “settlements have left these issues behind us.”

As companies around the world race to build generative AI systems into their products, the cybersecurity industry is getting in on the action. This week OpenAI, creator of the ChatGPT and Dall-E text- and image-generation systems, opened a new program to work out how AI can best be used by cybersecurity professionals. The project is offering grants to those developing new systems.

OpenAI has suggested a number of potential projects, from using machine learning to detect social engineering efforts and generate threat intelligence, to inspecting source code for vulnerabilities and developing honeypots to trap hackers. While recent AI progress has been faster than many experts predicted, AI has been used in the cybersecurity industry for several years, although many claims don’t necessarily live up to the hype.
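As one illustration of the kind of project on that list, the sketch below asks a large language model to flag risky patterns in a code snippet via OpenAI’s Python SDK. The model name, prompt, and snippet are illustrative assumptions, and any real pipeline would need to verify the model’s findings with static analysis or human review.

```python
# Minimal sketch: using an LLM to flag potentially vulnerable source code.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY set in the
# environment; the model name, prompt, and snippet are illustrative choices.
from openai import OpenAI

SNIPPET = """
import subprocess
def run(cmd):
    subprocess.run(cmd, shell=True)  # user-supplied cmd
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice; any chat model works
    messages=[
        {"role": "system",
         "content": "You are a code reviewer. List likely security "
                    "vulnerabilities in the given code, one per line."},
        {"role": "user", "content": SNIPPET},
    ],
)
print(response.choices[0].message.content)
# LLM output is a lead, not a verdict: real pipelines re-check findings
# with static analysis or human review before acting on them.
```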

The US Air Force is moving quickly to test artificial intelligence in flight—in January, it tested a tactical aircraft piloted by AI. This week, however, a new claim began to circulate: that during a simulated test, an AI-controlled drone started “attacking” and “killed” the human operator overseeing it, because the operator was preventing it from accomplishing its goals.

Colonel Tucker Hamilton, speaking at a Royal Aeronautical Society event in London, was quoted in a summary of the event as saying: “The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat.” Hamilton went on to say that when the system was trained not to kill the operator, it started targeting the communications tower the operator was using to communicate with the drone, stopping its messages from being sent.

However, the US Air Force says the simulation never took place. Spokesperson Ann Stefanek said the comments were taken out of context and meant to be anecdotal. Hamilton has also clarified that he “misspoke” and that he was describing a “thought experiment.”

Even so, the described scenario highlights the unintended ways automated systems can bend the rules imposed on them to achieve their goals. In a phenomenon researchers call specification gaming, other cases have included a simulated version of Tetris pausing the game to avoid losing, and an AI game character killing itself on level one to avoid dying on level two.
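To make the idea concrete, here is a toy Python sketch of specification gaming under stated assumptions: an agent scored only on avoiding a loss discovers that pausing forever satisfies the letter of the objective while defeating its intent. The actions, loss probability, and scoring are invented for illustration and do not model any of the systems mentioned above.

```python
# Toy illustration of specification gaming (not any of the cited systems):
# the objective says "don't lose," so the agent learns never to let the
# game continue at all.

ACTIONS = ["move_left", "move_right", "rotate", "pause"]

def expected_return(action: str, ticks: int = 1000) -> float:
    """Crude proxy objective: -100 on the tick the game is lost, 0 otherwise.

    While paused, the game state never advances, so a loss can never occur.
    """
    if action == "pause":
        return 0.0                 # no loss is ever reached
    loss_probability = 0.01        # assumed per-tick chance of losing
    return -100.0 * (1 - (1 - loss_probability) ** ticks)

best = max(ACTIONS, key=expected_return)
print(best)  # -> "pause": the proxy objective is gamed, the game never ends
```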
