Common Sense for the 21st Century

Renegade. Nonpartisan. Revolutionary.

Algorithms Are Why TV Shows and Commercials SUCK!


I am a geek when it comes to news in AI research. Long before the current generative AI boom, I would (attempt to) read the papers and research connected to this field. I would like to say I saw it coming, but that would be presumptuous on my part.

My predictions were 2-4 years off.

What makes me most nervous about the coming AI apocalypse is not the job losses (those were a thing long before this current hype train), but the neo-alchemists who program and deploy their models.

My concern isn’t that they are brilliant and driven; it’s that they also seem to have a cavalier attitude about the potential dangers of their work. They seem to believe that they can control these powerful AI systems and steer them toward some utopian, inclusive future.

But I’m not so sure.

I worry that these systems and their predictive outputs are being used for purposes that conform to a very alien worldview. You know…that place between Santa Rosa and Gilroy, CA. 

So, for me, it’s important to be prepared for the potential dangers of AI wielded by this worldview, and I hope that others will do the same.

Back in the early days of tech tyranny (pre-COVID), there existed a creeping push toward “algorithmic fairness,” inclusion and diversity in datasets, and the like. Like most of these now-cancerous efforts, it started out from a good place. AI and machine-learning models require extremely large (and dare I say diverse) datasets to train. That breadth is what makes the predictions these models produce (a process called inference) so extraordinary.
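To make that concrete, here is a deliberately crude sketch, written in plain Python over entirely made-up data, of how the same model logic produces opposite inferences depending on who curated the training set. This is not any real system’s code; it only illustrates the principle.

```python
# Toy illustration (hypothetical data, not any real system): the same
# "model" trained on two differently curated datasets gives opposite answers.
from collections import Counter

def train(dataset):
    """'Training' here is just counting label frequencies per trait."""
    counts = {}
    for trait, label in dataset:
        counts.setdefault(trait, Counter())[label] += 1
    return counts

def infer(model, trait):
    """Inference: predict the most common label seen for this trait."""
    return model[trait].most_common(1)[0][0]

# Two curators assemble their "large, diverse" training sets.
curator_a = [("protagonist", "heroic")] * 80 + [("protagonist", "bumbling")] * 20
curator_b = [("protagonist", "heroic")] * 20 + [("protagonist", "bumbling")] * 80

print(infer(train(curator_a), "protagonist"))  # -> heroic
print(infer(train(curator_b), "protagonist"))  # -> bumbling
```

Swap the label counting for billions of learned weights and the principle holds: a model can only reflect the data it was fed.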

But who decides these datasets?

That brings me to my daily brief on the latest AI research and development. This morning I read the title “Responsible AI at Google Research: Perception Fairness” from Google Research. Uh oh! What exactly is perception fairness?

Springer defined it in one of its papers as “Perceptions of fairness refer to any element of the environment perceived by individuals or collectives as fair according to previous norms or standards.” Just as I feared: what exactly is meant by “perceived by individuals or collectives as fair,” and according to whose norms and standards?

These would all be Socratic, navel-gazing questions if this were stuck in the realm of research. However, this is much more than research. Some of these alien principles have already been applied to the media.

The chief example is the Media Understanding for Social Exploration (MUSE) project at Google Research, which is, in their words, “A Google Research project that uses AI to understand how people are portrayed in mainstream media. The goal is to inspire more equitable content.”

The MUSE project used AI to study patterns in how certain people are portrayed on TV over a twelve-year span (2010-2021). The project is a partnership with the GDI. Who? The Geena Davis Institute on Gender in Media. YES, that Geena Davis (A League of Their Own, The Long Kiss Goodnight)!!!

What is their/them mission?  To be the “…global research-based organization working collaboratively within the entertainment industry to create gender balance, foster inclusion and reduce negative stereotyping in family entertainment media.”  Sounds relatively innocuous, right? 

Well, read this ominous excerpt:

“Our tools are also used to study representation in large-scale content collections. Through our Media Understanding for Social Exploration (MUSE) project, we’ve partnered with academic researchers, nonprofit organizations, and major consumer brands to understand patterns in mainstream media and advertising content. We first published this work in 2017, with a co-authored study analyzing gender equity in Hollywood movies. Since then, we’ve increased the scale and depth of our analyses. In 2019, we released findings based on over 2.7 million YouTube advertisements. In the latest study, we examine representation across intersections of perceived gender presentation, perceived age, and skin tone in over twelve years of popular U.S. television shows. These studies provide insights for content creators and advertisers and further inform our own research.”

So, Google’s tools are used to study representation in large-scale content collections, and the MUSE project has partnered with academic researchers, nonprofit organizations, and major consumer brands to understand patterns in mainstream media and advertising content.

Does that nexus of institutions sound familiar?

Studies have been published on gender equity in Hollywood movies, YouTube advertisements, and popular U.S. television shows. These studies supposedly provide insights for content creators and advertisers. But is it wrong to ask if they also provide marching orders to useful idiots or cultural revolutionaries? 

Based on a massive dataset of 440 hours of programming, from comedies and political dramas to romances and sci-fi, the following insights were revealed (a sketch of how such numbers get tallied follows the list).

  • The screen time of female characters is rising, but the male-female gap remains. In 2021, male characters still received about 16% more screen time than female characters.
  • The screen time gap between characters with light and dark skin tones narrowed from 81% in 2010 to 55% in 2021, but a large gap remains. Characters with medium and dark skin tones saw screen time increases of 8 and 9 percentage points, respectively, from 2010 to 2021.
  • Older men but younger women get more screen time. The most common age group on screen for male characters appears to be 33 to 60; for female characters, it appears to be 18 to 33. Women over the apparent age of 60 still receive less than 1% of all available screen time.
  • Speaking time for female characters with dark skin tones rose the most, but they remain the group least likely to speak when shown on screen. Their speaking time increased at an average rate of 1.2% per year, yet they speak only 16% of the time.
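For the curious, here is a minimal sketch of how a screen-time gap like the 16% figure above might be tallied. It runs plain Python over hypothetical per-frame face detections; Google has not published the MUSE pipeline as code, so every structure and number below is an assumption for illustration.

```python
# Minimal sketch: tallying screen-time share from per-frame detections.
# The frames and "perceived gender" labels below are hypothetical.
from collections import defaultdict

# Each frame lists the perceived gender presentation of every detected face.
frames = [
    ["male", "female"],
    ["male"],
    ["male", "male", "female"],
    ["female"],
]

def screen_time_share(frames):
    """Fraction of all detected face-time attributed to each group."""
    presence = defaultdict(int)
    for detections in frames:
        for group in detections:
            presence[group] += 1  # one frame of presence per detection
    total = sum(presence.values())
    return {group: count / total for group, count in presence.items()}

shares = screen_time_share(frames)  # {'male': 0.571..., 'female': 0.428...}
gap = (shares["male"] - shares["female"]) / shares["female"]
print(f"male characters get {gap:.0%} more screen time")  # 33% here
```

Scale that counting up to 440 hours of video and an automated face-attribute classifier, and you get headline numbers like the ones above, with every bias in the classifier and its labels baked in.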

Do you question why it seems like it’s been a decade since you’ve seen a male protagonist portrayed heroically in a television show?

Have you ever wondered why they are trying to gay all the things?

Do you wonder why they race-swap traditional characters who were, say, Nordic, or even swap their sex?

And let’s be even more blunt: when’s the last time you saw a moderately attractive person in a television commercial? When’s the last time you saw male competence, particularly white male competence, displayed in a television commercial?

Well, you can thank studies like this one, which harness the power of large AI models and biased datasets curated by people between Santa Rosa and Gilroy, California.

AI can be dangerous when wielded by people with a certain worldview, because that worldview decides what goes into the datasets, and the bias in the datasets becomes the bias in the models.

What’s most frightening is who wields these AIs, and how the inferences from these AIs are utilized as a cultural cudgel to shape what you see and hear coming from Hollywood. Prepare for the dangers of AI enforcing cultural conformity under the guise of algorithmic fairness, or its sinister-sounding cousin, “perception fairness.”

There’s no need to fear the coming AI apocalypse because of the potential job losses.

Fear the worldview and agendas of the neo-alchemists who program and deploy their models to increase their cultural hegemony.

Support Renegade Media

Free Speech dies when truth-tellers fail to obtain the resources we need to bring you the facts, with some flair.

The spirit of 1776Returns.com is to hold media’s feet to the flames.

You enable us to grab more kindling for the bonfire. 

After all, who doesn’t love a good barbecue?
