This 'black box' chip in devices can frustrate hackers

New York, May 14 (AZINS): In an era of Machine Learning (ML)-enabled hacking, in which Artificial Intelligence (AI) technology is trained to "learn" and model inputs and outputs, a new chip-based technology termed a "black box" can thwart hackers' plans, say researchers.

Computer Science Professor Dmitri Strukov of the University of California, Santa Barbara, said he and his team were looking to add an extra layer of security to devices.

The result is a chip that deploys "ionic memristor" technology.

Key to this technology is the memristor, or memory resistor -- an electrical resistance switch that can "remember" its state of resistance based on its history of applied voltage and current.
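The history-dependent behaviour described above can be sketched in a few lines of Python. This is a hypothetical linear-drift toy model, with made-up constants chosen purely for illustration; it is not the ionic devices the team actually built:

```python
# Toy memristor model: resistance depends on the history of applied
# voltage. All constants below are illustrative, not from the paper.
R_ON, R_OFF = 100.0, 16_000.0  # bounding resistances in ohms (illustrative)
K = 1e5                        # state-drift constant (illustrative)

class Memristor:
    def __init__(self, w=0.5):
        self.w = w  # internal state variable, confined to [0, 1]

    def resistance(self):
        # Effective resistance interpolates between the two bounds.
        return R_ON * self.w + R_OFF * (1.0 - self.w)

    def apply_voltage(self, v, dt=1e-3):
        # The current through the device drifts its internal state,
        # so the resistance "remembers" the voltage history.
        i = v / self.resistance()
        self.w = min(1.0, max(0.0, self.w + K * i * dt))
        return i

m = Memristor()
r_before = m.resistance()
for _ in range(100):
    m.apply_voltage(1.0)       # sustained positive bias drives the state up
r_after = m.resistance()
print(r_before, r_after)       # the device now sits at a lower resistance
```

Reading the device back after biasing gives a different resistance than before, which is the "memory" in memory resistor.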

A circuit made of memristors results in a "black box" of sorts, as Strukov called it, with outputs extremely difficult to predict based on the inputs.

"You can think of it as a black box. Due to its nature, the chip is physically unclonable and can, thus, render the device invulnerable to hijacking, counterfeiting or replication by cyber-criminals," said Strukov in a paper that appeared in the journal Nature Electronics.

With ML, an attacker doesn't even need to know exactly what is going on inside a system: a computer can simply be trained on a series of the system's inputs and outputs.

"For instance, if you have 2 million outputs and the attacker sees 10,000 or 20,000 of these outputs, he can, based on that, train a model that can copy the system afterwards," said Hussein Nili, the paper's lead author.
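Nili's point can be illustrated with a toy model-building attack. Here the "device" is deliberately a simple linear function of its challenge bits, unlike the memristive black box, so a model fitted by least squares on a handful of observed challenge/response pairs predicts unseen responses almost perfectly. All names and constants are made up for illustration:

```python
import random

random.seed(0)
N = 8                                            # challenge length (illustrative)
secret = [random.uniform(-1, 1) for _ in range(N)]

def respond(challenge):
    # The device's secret behaviour: a linear function of the challenge.
    return sum(w * c for w, c in zip(secret, challenge))

# The attacker observes a small sample of challenge/response pairs...
observed = []
for _ in range(50):
    c = [random.choice([-1.0, 1.0]) for _ in range(N)]
    observed.append((c, respond(c)))

# ...and fits model weights by least squares: solve (A^T A) x = A^T b
# with naive Gauss-Jordan elimination (no external libraries).
A = [c for c, _ in observed]
b = [r for _, r in observed]
AtA = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(N)]
       for i in range(N)]
Atb = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(N)]

def solve(M, v):
    n = len(v)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]  # augmented matrix
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))  # partial pivot
        M[i], M[p] = M[p], M[i]
        for r in range(n):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * x for a, x in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

model = solve(AtA, Atb)

# The cloned model predicts responses to challenges it never saw.
new_c = [random.choice([-1.0, 1.0]) for _ in range(N)]
err = abs(respond(new_c) - sum(w * c for w, c in zip(model, new_c)))
print(err)  # essentially zero: the attacker has copied the system
```

The attack works precisely because the input-output relationship is learnable; the memristive black box is designed to make that relationship look random from the outside.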

The "memristive black box" can circumvent this method of attack because it makes the relationship between inputs and outputs look random enough to the outside world even as the circuits' internal mechanisms are repeatable enough to be reliable.

"If we scale it a little bit further, it's going to be hardware which could be, in many metrics, the state-of-the-art," Strukov noted.