New AI-based 'privacy filter' to block facial recognition tech

Toronto, June 1 (AZINS) Addressing the rising concerns over privacy and data security on social networks, researchers, led by one of Indian origin, have designed a new artificial intelligence (AI)-based algorithm that protects users' privacy by dynamically disrupting facial recognition tools designed to identify faces in photos.

"The disruptive AI can 'attack' what the neural net for the face detection is looking for," said lead author Avishek Bose, graduate student at the University of Toronto in Canada.

"If the detection AI is looking for the corner of the eyes, for example, it adjusts the corner of the eyes so they're less noticeable. It creates very subtle disturbances in the photo, but to the detector they're significant enough to fool the system," he added.

The new system leverages a deep learning technique called adversarial training, which pits two AI algorithms against each other.

The team designed a pair of neural networks: the first works to identify faces, while the second works to disrupt the facial recognition task of the first. The two constantly battle and learn from each other, setting up an ongoing AI arms race.
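
In code, that two-network game might look roughly like the PyTorch sketch below. The architectures, losses, and hyperparameters here are placeholder assumptions chosen for illustration; the Toronto team's actual detector, disruptor, and training objective are not described at this level of detail in the article.

```python
# Hypothetical sketch of adversarial training between a face detector and a
# "disruptor" that learns small image changes which fool the detector.
import torch
import torch.nn as nn

class Detector(nn.Module):
    """Stand-in face detector: outputs a logit for 'a face is present'."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

    def forward(self, x):
        return self.net(x)

class Disruptor(nn.Module):
    """Stand-in perturbation generator: outputs a small additive change."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())

    def forward(self, x):
        return 0.03 * self.net(x)   # bounded, near-imperceptible change

detector, disruptor = Detector(), Disruptor()
opt_det = torch.optim.Adam(detector.parameters(), lr=1e-4)
opt_dis = torch.optim.Adam(disruptor.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def adversarial_step(faces):
    """One round of the game. `faces` is a (B, 3, H, W) batch of images
    that all contain faces, scaled to [0, 1]."""
    present = torch.ones(faces.size(0), 1)
    # Detector update: keep finding faces, whether clean or perturbed.
    perturbed = (faces + disruptor(faces)).clamp(0, 1)
    det_loss = bce(detector(faces), present) + bce(detector(perturbed.detach()), present)
    opt_det.zero_grad(); det_loss.backward(); opt_det.step()
    # Disruptor update: make the detector miss the (still visible) face.
    perturbed = (faces + disruptor(faces)).clamp(0, 1)
    dis_loss = bce(detector(perturbed), torch.zeros_like(present))
    opt_dis.zero_grad(); dis_loss.backward(); opt_dis.step()
    return det_loss.item(), dis_loss.item()
```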

The result is an Instagram-like filter that can be applied to photos to protect privacy. Their algorithm alters very specific pixels in the image, making changes that are almost imperceptible to the human eye.
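
Applying such a filter can be pictured as adding a tightly bounded perturbation to each photo. In the hypothetical sketch below, `disruptor` is any perturbation model (for example the one sketched above) and `eps` is an assumed per-pixel budget, not a figure reported by the researchers.

```python
import torch

def apply_privacy_filter(image, disruptor, eps=2.0 / 255):
    """image: (3, H, W) tensor in [0, 1]; returns a visually similar copy
    whose pixels each move by at most `eps`."""
    with torch.no_grad():
        delta = disruptor(image.unsqueeze(0)).squeeze(0)
    delta = delta.clamp(-eps, eps)           # cap the per-pixel change
    return (image + delta).clamp(0.0, 1.0)   # stay in the valid pixel range
```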

"Personal privacy is a real issue as facial recognition becomes better and better. This is one way in which beneficial anti-facial-recognition systems can combat that ability," said Parham Aarabi, Professor at the University.

Aarabi and Bose tested their system on the 300-W face dataset, an industry-standard pool of more than 600 faces covering a wide range of ethnicities, lighting conditions and environments.

They showed that their system could reduce the proportion of faces that were originally detectable from nearly 100 per cent down to 0.5 per cent.
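
That comparison amounts to measuring how often a face detector still fires before and after the filter is applied. A rough illustration, with `detect_faces` standing in for any off-the-shelf face detector:

```python
def detection_rate(images, detect_faces):
    """Fraction of images in which the detector finds at least one face.
    `detect_faces` is a placeholder for any detector that returns a list
    of face boxes for an image."""
    hits = sum(1 for img in images if len(detect_faces(img)) > 0)
    return hits / len(images)

# rate_before = detection_rate(originals, detect_faces)   # ~100% reported
# rate_after  = detection_rate(protected, detect_faces)   # ~0.5% reported
```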

The findings will be presented at the 2018 IEEE International Workshop on Multimedia Signal Processing in Vancouver.

The new technology also disrupts image-based search, feature identification, emotion and ethnicity estimation, and the automatic extraction of other face-based attributes.

The team now hopes to make the privacy filter publicly available, either via an app or a website.