Facebook to launch independent body for calls on content

Nov 16 (AZINS) Facebook announced it is creating an independent body to make potentially precedent-setting calls on what content should be yanked from the social network.

The announcement came as Facebook reported it has ramped up its ability to quickly detect "hate speech" and other posts violating community rules, with the leading social network under pressure from regulators in various countries and activists to root out abusive and inappropriate content.

"I have come to believe that we shouldn't be making so many decisions about free expression and safety on our own," Facebook chief executive Mark Zuckerberg said in a media briefing.

Content flagged by artificial intelligence software or reported by users is currently reviewed by an internal system that Facebook has been ramping up. An independent body to be constituted in the coming year will act as a "higher court" of sorts, considering appeals of content removal decisions made by the social network, Zuckerberg said.

The composition of the appeals body, along with how to keep it independent while remaining in line with Facebook principles and policies, was to be determined in the coming year. Facebook also planned to begin releasing content removal summaries quarterly next year, on the same schedule as its earnings reports, according to executives.

"We have made progress getting hate, bullying and terrorism off our network," Zuckerberg said. "It's about finding the right balance between giving people a voice and keeping people safe."

Challenges faced by the California-based social network include the fact that people naturally tend to engage with more sensational content that, while perhaps at the edge of violating Facebook policies, is unhealthy for civilized discourse, according to Zuckerberg. "We see this in cable news and tabloids too," Zuckerberg said. "A lot of our work is to ensure that borderline content that comes close to violating our policies gets less attention, not more."

Bullying represents a tougher challenge for AI systems because it tends to be personal and subjective. For example, someone might playfully mock a friend in a post that could also be interpreted as mean.

Detecting bullying or hate can also require an understanding of the full range of languages used on Facebook, along with cultural contexts. "We are getting better at proactively identifying violating content before anyone reports it, specifically for hate speech and violence and graphic content," Facebook said in the new transparency report. "But there are still areas where we have more work to do."

Facebook said that since its last transparency report, the amount of hate speech detected proactively, before anyone reported it, has more than doubled. "The single biggest improvement comes from AI and machine learning," said product management vice president Guy Rosen.