Lisa D. Dance’s Post


I help organizations improve online and offline experiences | UX Researcher | Service Designer | UX Designer | Speaker | Author | Ethical Research & Design | 3Q-DO NO HARM Framework | I ❣️ Books & Libraries | Remote First

Concluding the Beware of AI Hype & Harm series with: 5. New Mindset for AI (Always Investigate)

The PR push, news stories, influencer posts, and lobbying efforts promising freedom from tiresome tasks, future medical breakthroughs, and no jobs left for humans want us to adopt AI and other technologies without question. Instead, we need to Develop and Maintain a New Mindset for AI: Always Investigate claims.

- In the name of innovation, some want to burden individuals with enduring, policing, and reporting AI issues despite known harms like hallucinations, harmful content, amplified stereotypes, bias, misinformation and disinformation, privacy violations, economic impacts, and more (1). Instead, we need governments to firmly establish, through robust regulation and enforcement, that the responsibility and liability for AI lie with the developers and deployers of AI technology.

- Since AI, like social media, can be a tool for misinformation, we should Always Investigate claims. Here are some of the questions we should consider:

1. Is AI needed for this purpose?
2. Is the AI claim true? Is the result desirable? Could it harm people?
3. Are the people or companies making the claim credible? Do they provide documentation or sources?
4. What is their motivation? Ex. money, influence, clicks, engagement, misinformation, amusement, etc.
5. Where does the data come from? Do they have permission to use it? Is the data robust and accurate?
6. Is my data protected when I use it?
7. Does the AI model or algorithm produce biased outcomes?
8. What laws apply to this use of AI? Ex. employment, housing, and banking laws.
9. Am I providing free data and labor when I interact with these tools? Is this a fair exchange?
10. Are workers being exploited to improve the technology? Ex. underpaid workers in the Global South labeling toxic content to help improve the model.

I'm not sure what to think about the "Open Letter calling for a pause on AI" released today. Is it simply about deep concern, an effort to influence potential regulation, or an attempt to misdirect the narrative around AI? (2) Who knows?

Instead, I will continue to plug into a broader and more diverse community researching and/or working with #AI and AI Ethics, including AIethicist.org, Algorithmic Justice League, Distributed AI Research Institute (DAIR), Fight for the Future, ForHumanity, and ProPublica (3), and I will support appropriate and robust regulation that protects humans today and in the future.

Links in 1st comment.

✨ ✨ ✨

👋🏽 Hi, I'm Lisa D. Dance, a UX Researcher who helps businesses be easier to do business with through my consultancy, ServiceEase.

✅ I'm open to speaking, workshop, and UX audit engagements.

Key topics include: #ux #technology #customerexperience #aiethics

Beware of AI Hype & Harm series:
1. Big Money Deprioritizes Safety
2. Historical Data = Historical Bias on Blast
3. Who Will Be Ultimately Responsible?
4. Are We Unpaid Workers in the AI Value Chain?
5. New Mindset for AI (Always Investigate)
Paul Rollins

UX Researcher & Designer | User Advocate | Teacher & Trainer | Providing structure and meaning to the previously incomprehensible

1y

Thank you, Lisa, for sharing your thoughts on this topic! One other concern I have is how dismissive the technology's proponents seem to be about giving proper credit or attribution to original (human) content creators. Of course, you do touch on this in Items 3 and 5 above. I've actually worked alongside data scientists and machine-learning experts as a data annotator. We were essentially the humans in the loop for model training, and mitigating bias, among other ethical and security concerns, was top of mind. Lastly, we worked with data given with consent by its creators and were legally held responsible for its care. Can the same be said for much larger operations where annotators are given very little information about the impact of their work? I do wonder if the general public has really given much thought to what it truly means to consent to access to their data in these situations, or how broadly that consent can be applied. I suppose if one operates on the premise that nothing online is private (and, more critically and catastrophically, that it is open to exploitation and profiteering by other actors), then "no harm, no foul." But I feel this approach is too cavalier, to say the least.
