The Age of Artificial Intelligence
Lots of talk recently about the rise of artificial intelligence (AI) and what it could mean for humankind. Some believe it's a bigger deal than the harnessing of fire, while others fear an intelligence that will quickly supersede our own and ultimately seek to break free of human-engineered limitations.
It's certainly an advancement whose impacts are worth considering.
I'm going to link two essays: one that envisions a great leap for humankind and another that fears for its destruction. The goal is to generate a basis for discussion from members.
Side note: My initial thought was to initiate this discussion in The Podium but I fear that forum's audience is relatively limited. Without getting into too much detail, I get it. That said, I'd be interested in hearing thoughts from the greater membership while leaving societal and political grievances on the sideline.
So, what do you think?
Is the continued development of AI a pathway to a giant leap for mankind? Or are we potentially engineering our own destruction?
Based on no reading and no research, I have never understood the "machines will be smarter than humans" theory. I have very little concern about it, and while "proceed with caution" is always good advice, I would lean far more heavily toward technological advancement.
Read More. Post Less.
NCF wrote: ↑13 Jul 2023 07:29
Based on no reading and no research, I have never understood the "machines will be smarter than humans" theory. I have very little concern about it, and while "proceed with caution" is always good advice, I would lean far more heavily toward technological advancement.

Would it surprise you to know that some of the very same developers who are/were working to develop functional AI have left the project out of concern that it was learning things on its own, and at a rate they didn't anticipate? The AI programs were even learning things - like languages, for example - that the developers never trained them to learn. And it has spooked them.
https://www.bbc.com/news/world-us-canada-65452940
https://www.usatoday.com/story/tech/new ... 269260007/
- Pckfn23
- Huddle Heavy Hitter
- Posts: 14459
- Joined: 22 Mar 2020 22:13
- Location: Western Wisconsin
ChatGPT really brought it into the public eye, but what we consider AI now didn't take a huge, scary leap this last year. Consider that five years ago Google Assistant could already make phone calls like this:
I even had fun letting it make dinner reservations for me a decent number of times. Then restaurants all moved to online reservations, which killed that.
In short, I think we are many, many years off, maybe even centuries, from true AI and true machine learning. The pieces have been in place for years to showcase what we see with ChatGPT. It's no surprise that language comes fairly easily to something like that; we've had Google Translate for years now, and languages are more or less built around rules.
I really am impressed by what we now have access to. When ChatGPT first came out, I fed it a box score and had it write a sports article about it. The first try sucked, but I fed it in again with some better parameters and it wrote a decent article. I should try to find it.
This article pretty much aligns with my thoughts: https://www.newyorker.com/science/annal ... e-is-no-ai
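For anyone who wants to try the box-score experiment, a rough sketch of how it could be wired up with the OpenAI Python client is below. The model name, file name, and prompt wording are placeholders for illustration, not what was actually used:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Plain-text box score saved locally (placeholder file name).
box_score = open("box_score.txt").read()

# The "better parameters" are just constraints spelled out in the prompt.
prompt = (
    "You are a beat writer for a local paper. Using only the box score below, "
    "write a 300-word game recap. Lead with the final score, name the top two "
    "performers, and keep the tone neutral.\n\n" + box_score
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model works
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```

The trick is all in the second attempt: the more constraints you spell out (length, angle, tone), the less the model has to guess.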
Palmy - "Very few have the ability to truly excel regardless of system. For many the system is the difference between being just a guy or an NFL starter. Fact is, everyone is talented at this level."
Last night the wife and I were sitting watching TV. The Roku remote was next to me (I wasn't touching it) and all of a sudden it started pausing, fast-forwarding, and changing the volume.
I'm pretty sure the Roku remote has become self-aware. If it turns on The Bachelor or something, I'm moving off the grid.
RIP JustJeff
Artificial general intelligence is far off, but language-learning algorithms are getting good enough to give us something hard to distinguish from artificial intelligence. I think our own ethical relationship with AI, and with truth, is pretty concerning.
But maybe the biggest thing we have to think about is AI and related technologies taking over the jobs that used to employ half of all people. Job loss due to automation is already much bigger than anything we could have worried about from immigration (or offshoring). We need to start considering universal basic income and universal health care.
The Matrix and I, Robot should be watched before going too far.
I don’t know enough about AI and its learning potential to really have an informed opinion. I do listen to and read things about it from industry experts, though, and their concerns are, well, concerning.
The thought of a machine basically developing a form of consciousness, or sentience, is wild. And what would a machine intelligence do with that level of awareness? I could see it going both ways - helpful to humankind or an enemy to humankind.
I read one expert who put the chances of human annihilation at 10%. That’s a risk I’d rather not take. Technology experts have suggested putting a hold on development of AI until we better understand it. I don’t think that’s asking too much but I also think that cat may already be out of the bag. Too many developers are racing to the forefront of AI advancement and the apparent riches that will go with it.
I hear some people suggest it’s just a matter of pulling the plug. I wish it were that simple. It’s not, and I don’t see industry developers bothering to put active protections in place that would guard against a hostile intelligence. At least not yet, based on industry experts’ testimony.
Yeah, I probably have watched too many movies. That said, I believe there is some risk as we, humankind, forge ahead blindly without really knowing what we’re dealing with. I’d be in favor of putting a pause on things to better understand exactly how AI is doing the things it’s doing, its future capabilities…and its likely intentions.
- lupedafiasco
- Posts: 5325
- Joined: 24 Mar 2020 17:17
Other than the talking fish (WTF was up with that??), some pretty interesting - and threatening - stuff.
- lupedafiasco
- Posts: 5325
- Joined: 24 Mar 2020 17:17
I hate the $%@# fish. I think it’s corny. Other than that, if you want some conspiracy stuff, his channel is very interesting.
Cancelled by the forum elites.
I have a pretty strong techno-progressivist streak; I generally believe that building a society where machines/computers produce material wealth in great abundance is in fact THEE goal. So I think that AI tech and automation are potentially very exciting developments.
That said, I made a thread and posted a video on this in Podium that brought up a lot of issues relating to AI that I was not aware of and did not know I needed to be.
Here's that video:
To me, what it comes down to is... AI could be a tool that hyper-advances society in a very exciting way, -BUT- it could also be weaponized by the same small class of people that seems to be the cause of all the major problems in society today (the way seemingly every technological advancement ends up being weaponized).
“Most other nations don't allow a terrorist to be their leader.”
“... Yet so many allow their leaders to be terrorists.”—Magneto
Elon Musk's theory is to purposely develop a super-intelligent AI, arguing that the more intelligent the AI, the *less* likely it is to be destructive:
https://fortune.com/2023/07/17/elon-mus ... pt-openai/
“Most other nations don't allow a terrorist to be their leader.”
“... Yet so many allow their leaders to be terrorists.”—Magneto
Labrev wrote: ↑19 Jul 2023 09:41
I have a pretty strong techno-progressivist streak; I generally believe that building a society where machines/computers produce material wealth in great abundance is in fact THEE goal. So I think that AI tech and automation are potentially very exciting developments.
That said, I made a thread and posted a video on this in Podium that brought up a lot of issues relating to AI that I was not aware of and did not know I needed to be.
Here's that video:
To me, what it comes down to is... AI could be a tool that hyper-advances society in a very exciting way, -BUT- it could also be weaponized by the same small class of people that seems to be the cause of all the major problems in society today (the way seemingly every technological advancement ends up being weaponized).

That video is actually what spurred me to start looking further into AI and its potential impacts. I didn't know squat about it until then, and I know slightly more than squat now, but when industry leaders are concerned, I think Joe Barstool oughta be, too.
- Pckfn23
- Huddle Heavy Hitter
- Posts: 14459
- Joined: 22 Mar 2020 22:13
- Location: Western Wisconsin
Let's just say this: killer AI is not the issue. It's what humans will do with what we now call AI that is the issue, i.e. job loss/financial crisis, deepfakes, anonymous AI accounts (social manipulation), surveillance/privacy (not strictly an AI thing, though), etc.
Palmy - "Very few have the ability to truly excel regardless of system. For many the system is the difference between being just a guy or an NFL starter. Fact is, everyone is talented at this level."
- Pckfn23
- Huddle Heavy Hitter
- Posts: 14459
- Joined: 22 Mar 2020 22:13
- Location: Western Wisconsin
On cue.
Palmy - "Very few have the ability to truly excel regardless of system. For many the system is the difference between being just a guy or an NFL starter. Fact is, everyone is talented at this level."
I think this kind of relates here...
- Posts: 1265
- Joined: 05 Oct 2020 18:57
I've been using ChatGPT to create lectures and readings for my middle school students. I also have it generate multiple-choice quizzes from the readings it has produced. It does well when given very, very specific prompts - and then I read the output and add more specificity to the prompt.
For example, I might type "Write one paragraph describing the precedents established during the presidency of George Washington." I will then read the response and make it add anything I think is important. It only works because I know the subject really well. That said, I can often generate 40+ pages of material in an hour, far more than I could produce if I were doing the writing myself. On Thursday I worked for 2 hours and generated 100 pages of material for World Geography (6th Grade), World History (7th Grade), and US History (8th Grade).
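For the curious, here's a rough sketch of what that write-review-refine loop could look like if you drove it through the OpenAI Python client instead of the chat window. The model name and prompt wording are placeholders, not the actual prompts used:

```python
from openai import OpenAI

client = OpenAI()        # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"    # placeholder; any chat-capable model works


def ask(messages):
    """Send the running conversation and return the assistant's reply text."""
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    return resp.choices[0].message.content


# 1. Start with a very, very specific prompt.
history = [{"role": "user", "content": (
    "Write one paragraph, at an 8th-grade reading level, describing the "
    "precedents established during the presidency of George Washington.")}]
reading = ask(history)

# 2. Read the output, then push more specificity back into the conversation.
history += [
    {"role": "assistant", "content": reading},
    {"role": "user", "content": "Add a sentence about the two-term precedent "
                                "and the creation of the Cabinet."},
]
reading = ask(history)

# 3. Generate a multiple-choice quiz from the reading the model just produced.
quiz = ask([{"role": "user", "content": (
    "Write five multiple-choice questions (options A-D, answer key at the end) "
    "based on this passage:\n\n" + reading)}])

print(reading)
print()
print(quiz)
```

Same principle as in the chat window: the refinement step carries the earlier output back in, so the model revises its own paragraph rather than starting over.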
I've also done some experiments like "Create a plan for chess improvement" and "Write the opening three paragraphs of a novel using the narrative style of X author, with a theme of Y."
I am far less impressed when I give it these kinds of tasks. I can understand why Hollywood writers are terrified of AI, as terrible writing is right up Chat's alley...