The Age of Artificial Intelligence
Moderators: NCF, salmar80, BF004, APB, Packfntk
Yeah, big disagree with the most recent takes ITT. First off, we don't live in the abstract world where a worker "makes widgets." That is a relic of a bygone era when the blacksmith made the tools, tailors made the clothes, production was an individual pursuit, the worker sold the fruit of their labor and kept all the profit. That model stopped existing hundreds of years ago, even if Econ class in the west likes to pretend otherwise.
In reality, production has long since become a socialized(!) process thanks to industrialization; workers at best make -part of- the widgets. And that's in countries that still have manufacturing, which the west largely outsourced, automated, and replaced with service work, gigs, and "BS Jobs," as they are called.
Second, I don't think laziness is to blame for people not working hard in those jobs; I think it is a sober awareness that their jobs do not contribute much of real value to society. And that "getting ahead" often is not a matter of who is the most productive worker, but of the arbitrary whims of owners and managers, who instead promote inferior workers who were just better at "schmoozing" them.
Or (in more egregious cases) rank nepotism. The system itself discredited its myth of meritocracy.
“Most other nations don't allow a terrorist to be their leader.”
“... Yet so many allow their leaders to be terrorists.”—Magneto
Anybody following this? Any outcomes worth mentioning?
Sharing these before I've watched them (haha):
Ah a fellow Graeber fan.
I do think that AI will be hugely influential in Dr. Graeber's field. Our understanding of history (and thus anthropology) is highly dependent on a few limiting factors; translating ancient texts is a big one. AI-assisted search is a dramatic leap.
I'm pretty into genealogy, which is history and anthropology adjacent, and it's readily apparent where AI is going to be an assist. A reasonably accurate family tree of humanity is coming; it is a huge but solvable polynomial problem (these types of problems can be partially solved with a best fit). Old-text search is seeing a big leap (it can read better than human eyes). Almost everything we know about the Dutch colony of New Netherland is because of one guy, Charles Gehring, who spent his life translating old texts that basically no one else in the world was capable of translating (this history was not carried forward in the English colonies, and current Dutch speakers cannot translate 17th-century texts). There is so much out there that doesn't have a Charles Gehring; our current AI level is capable of this task.
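To illustrate the "best fit" idea: tools in this space don't solve the whole tree exactly, they score candidate relationships against noisy evidence and pick the closest match. Here's a deliberately toy sketch (the sharing fractions are approximate textbook averages, and real genealogy software does far more than this):

```python
# Toy "best fit" relationship inference from an observed autosomal
# DNA-sharing fraction. Values are rough expected averages, not a
# real genealogical model.
EXPECTED_SHARE = {
    "parent/child": 0.50,
    "grandparent": 0.25,
    "first cousin": 0.125,
    "second cousin": 0.03125,
}

def best_fit_relationship(observed_share):
    """Return the relationship whose expected sharing fraction is
    closest to the observed one (least squared error)."""
    return min(EXPECTED_SHARE,
               key=lambda r: (EXPECTED_SHARE[r] - observed_share) ** 2)

print(best_fit_relationship(0.24))  # → grandparent
print(best_fit_relationship(0.13))  # → first cousin
```

Scaling that scoring idea up to millions of people with conflicting records is exactly the kind of huge-but-partially-solvable optimization the post is gesturing at.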
The double whammy of AI and DNA is changing our understanding of the past (and thus ourselves) at a rapidly accelerating pace.
It's really easy to see how it's going to have a dramatic impact on my field, engineering, in the relatively near future. Most engineering work is busywork with heavy copy-pasta. You don't rewrite a set of specs from scratch for each project; you just site-adapt (and edit if needed) specs from other projects. You don't redraw details; you just update references and text, editing the drawing as needed. Permitting paperwork and support drawings are purely busywork. NEPA compliance is 1% engineering and 99% paperwork.
AI will be able to do all the stupid mindless stuff really soon. Better than a human. Humans make a lot of mistakes when copy-pasting; most commenting rounds are merely correcting the oversights (rarely is the root engineering commented upon). Tools like MS Copilot, and something similar for CAD, in the hands of a skilled engineer, can turn several months of engineering work into a few hours. I feel like I could produce a full drawing package, with specs and all permitting, for a new project I just set eyes on, in a matter of days (the actual engineering takes place in my head and only takes minutes). We're not there yet; generative AI isn't really trusted much in this setting (I've moved on from active engineering to management, but I can see the potential, and it has me thinking about going back, since AI can do all the work I hated).
Kids these days, though: my oldest, a tween, has been "cheating" on his homework with AI for a couple of years now (I'm not sure he's ever written anything more than a paragraph without it). He's getting better at rewriting to hide the AI source. He makes games on Roblox, using AI to write the code and make graphic assets. The AI natives are coming.
Now THIS is something I can get behind!
I'm not talking about an AI Professor of Sociology or History. I'm talking math and the sciences with a focus on, you know, actual science. Having my own personal AI tutor while going through my aeronautics degree program would have been of great benefit.
From ChatGPT. I asked "Is math racist?". The response:
Math itself isn't inherently racist. It's a tool for understanding and describing patterns and relationships in the world. However, like any tool, it can be used in ways that perpetuate or reflect biases present in society. For example, if mathematical models are built using biased data or assumptions, they can produce biased results. Similarly, the ways in which math is taught and who has access to mathematical education can reflect and perpetuate systemic inequalities. So, while math itself isn't racist, the way it's applied, taught, and used in society can sometimes reflect or perpetuate racial biases.
Just asked ChatGPT to "List citations for the inert placebo controlled double blind clinical trials of any FDA authorized or approved non-covid 19 vaccines"
It came back with a list of about a half dozen citations. To my knowledge, there has never been one double-blind, inert-placebo-controlled clinical trial on any of the normal childhood (or any other, except for some hastily done C19) vaccines. So I randomly chose one. It of course was not placebo controlled.
So I asked ChatGPT: "Long-term immunogenicity of hepatitis B vaccination and policy for booster: an Italian multicentre study is not placebo controlled. why did you lie and include that citation to my question?"
It responded: "Apologies for the oversight. You're correct; the study titled "Long-term immunogenicity of hepatitis B vaccination and policy for booster: an Italian multicentre study" does not involve a placebo-controlled design. Thank you for pointing out the error."
I chose another of the studies at random. It of course did not use an inert placebo. So I asked ChatGPT: "The study you responded with FUTURE II Study Group. Quadrivalent vaccine against human papillomavirus to prevent high-grade cervical lesions did not use an INERT placebo, it used an aluminum containing one. Why did you lie and include that citation?"
Response: "You're correct, and I apologize for the error. The FUTURE II Study Group's trial on the quadrivalent HPV vaccine did indeed use an aluminum-containing placebo, not an inert one. Thank you for pointing out the mistake."
So, AI lies. It isn't ready for prime time, because of programmer bias built in. Even for science and "science." If I checked every citation I'd get the same thing I did for the two I looked into specifically at random - a lie by ChatGPT that is designed to advance a certain narrative.
Ok, I get what you're saying but it's not the context I'm referring to.
When you frame a math question within a societal context, it's going to deliver an answer based upon its developers' biases, whatever they may be. That's not learning math, though. It's a social question, the likes of which I already clarified in my previous response. The study-citation question you presented as an example is certainly more troubling, but it's not the level of scholarly instruction I'm getting after.
What I'm saying is that having an on-demand tutor at your disposal to teach you total aerodynamic force or to help work through a physics question is a definite plus. A middle school student falling behind in their introductory algebra class could definitely use an on-demand at-home tutor to help close the gap and keep them from years of catch-up work, where the likelihood of ever catching up is stacked against them. There are countless examples where this type of access to assisted learning could be advantageous.
I get it. You're a person who ties their own political bias into many everyday routine activities and reject anything/everything that doesn't fit the lens with which you see the world. Maybe you see it as a threat to your livelihood? I don't know. I don't necessarily disagree with your points but I can get past them without rejecting AI out of hand entirely as inherently bad and definitely see it as a helpful tool in assisting students (or schmucks like me) understand certain difficult areas of study.
APB wrote: ↑14 May 2024 11:09
What I'm saying is having an on-demand tutor at your disposal to teach you total aerodynamic force or to help work through a physics question is a definite plus. A middle school student falling behind in their introductory algebra class could definitely use an on-demand at-home tutor to help close the gap and keep them from years of catch-up work, of which the likelihood of ever catching up is stacked against them. There are countless examples where this type access to assisted learning could be advantageous.
I see your point. I actually have something similar to this in my syllabi and talk to my students about it. I see the only currently useful purpose of AI to be things like rudimentary analytical physics or math.
More AI bias. Thread...