Lisa D. Dance’s Post

I help organizations improve online and offline experiences | UX Researcher | Service Designer | UX Designer | Speaker | Author | Ethical Research & Design | 3Q-DO NO HARM Framework| I ❣️Books & Libraries | Remote First

Instead of blindly rushing toward an AI utopia, let's adopt a new mindset on AI. Today, let's discuss #2, Historical Data = Historical Problems on Blast, from "Beware of AI Hype & Harm" (below):

- AI models are built on historical data with all its flaws (bias, inaccuracies, misinformation, delusions, limited representation, and more). "Historical" covers both the recent past (e.g., yesterday or last month) and "history" history (e.g., the 1950s or the 1800s). While companies use both computers and humans to remove some bias from the data behind these models, bias still shows through, as when Lensa generated images that sexualized women and anglicized features (1). The speed of AI systems amplifies information, meaning the sins of the past can be repeated on blast.

- Some might say humans have the same flaws, but there are important differences: the scale and speed of AI, as well as AI models' inability to forget the past. As the "Combating Automation Bias" article by ForHumanity clearly points out, AI doesn't allow for the very human ability to change course or overcome your past; you are what the data says (aka predicts) you are. Isn't the ability to change at the core of who we are, or want to be, as humans? Should AI, through large language models, be able to stop that? (2)

- Another important aspect is that the data used to train LLMs (large language models) isn't fully disclosed, but it is partly drawn from non-representative sources, particularly the internet. Think of how the internet is dominated by some voices more than others through manipulation and misinformation. So people who are already marginalized are marginalized at scale through the use of these models, as famously forewarned in the "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" paper. (3)

Have you considered these aspects of using AI, or mitigated for them?

#ai #data #bias #largelanguagemodels #aiethics

(1) https://lnkd.in/emFWZ2hE
(2) https://lnkd.in/ehSXhkR4
(3) https://lnkd.in/e547XgKb

  • Beware of AI Hype & Harm
  • Big Money Deprioritizes Safety
  • Highlighted Section: Historical Data = Historical Bias on Blast
  • Who Will Be Ultimately Responsible?
  • Are We Unpaid Workers in the AI Value Chain?
  • New Mindset for AI (Always Investigate)
Lisa D. Dance


1y

The co-authors of the "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" paper, Timnit Gebru, Emily M. Bender, Angelina McMillan-Major, and Margaret Mitchell, are live NOW, reflecting on what has happened in the last two years, what the large language model landscape currently looks like, and where we are headed vs. where we should be headed. https://www.eventbrite.com/e/stochastic-parrots-day-tickets-524219965027

Donna Stewart Sharits

Artist. Now retired from a successful career in fundraising and event management.

1y

Thank you, Lisa, for this informative post. This statement certainly gave me pause: "The speed of AI systems amplifies information, meaning the sins of the past can be repeated on blast."
