I enjoyed this book as an audiobook - and it did a fair job of covering the risks of falling for the "AI is inevitable" nonsense. The authors do a great job of pointing out the real issues of using LLMs as a "one size fits all" solution in law, medicine, health management, journalism, art, academia, scientific research and other areas. LLMs need better transparency and more "human in the middle" (a term I was waiting for them to use). The authors explain the topics well but miss an opportunity to describe things like "Value Sensitive Design" and "Human-Centered AI."
They mention that about 16 oz of water is used for every LLM prompt - but fail to dig deeper into the real impact on people in areas where data centers are demanding priority use of limited aquifer resources. There is a quote from a tech billionaire about how AI will be used to analyze x-rays and other medical images. While the authors mention that studies predict medical imaging jobs will be among the faster-growing fields, they fail to tie the two thoughts together: the tech bros WANT that business. They want to take over that field and push people out. The reality is that we need the "human in the middle" to ensure quality. Recent studies show that doctors lose the skill of reading imaging when they become dependent on AI, just like people miss out on critical thinking when AI generates their meeting notes or writes their assignments for them.
The recommendations provided by the authors are not novel - and they are covered in other works on the topic I have read. They also mention Cory Doctorow a lot, and it seems he supports an idea I have been trying to float whenever I talk about AI: we need more task- or topic-specific small language models.
AI is hurting a lot of people's jobs and churning out garbage that nobody wants to read or look at. Demand better from your employers, your schools and the companies that provide the software you use day-to-day. The authors tell people to opt out of using AI when they can (even facial recognition at airports) - and to mercilessly mock and call out bad AI-generated content.
Not included in this book is my recommendation: demand that businesses do better and provide transparency about the amount of natural resources consumed in every session, whether it is your search on Google or using Copilot to polish some copy in your memo. This should be transparent and visible to end users and system managers (i.e., in enterprise or academic settings), and the aggregate impact should be visible to the entire world. Companies all got on the green bandwagon over the last several decades and promised to reduce their greenhouse gas emissions and energy consumption, but AI is leading them all in the opposite direction.
People over profits, always!
REVIEW: The AI Con: How to Fight Big Tech's Hype and Create the Future We Want by Emily M. Bender and Alex Hanna
RATING: 3 stars
© Jennifer R Clark. This work is licensed under a Creative Commons Attribution 4.0 International License. You may share and adapt this content with proper attribution.