
brooklynite

(94,974 posts)
Tue Mar 14, 2023, 09:02 PM

GPT-4 has arrived. It will blow ChatGPT out of the water.

Washington Post

The artificial intelligence research lab OpenAI on Tuesday launched the newest version of its language software, GPT-4, an advanced tool for analyzing images and mimicking human speech, pushing the technical and ethical boundaries of a rapidly proliferating wave of AI.

OpenAI’s earlier product, ChatGPT, captivated and unsettled the public with its uncanny ability to generate elegant writing, unleashing a viral wave of college essays, screenplays and conversations — though it relied on an older generation of technology that hasn’t been cutting-edge for more than a year.

GPT-4, in contrast, is a state-of-the-art system capable not just of creating words but of describing images in response to a person’s simple written commands. When shown a photo of a boxing glove hanging over a wooden seesaw with a ball on one side, for instance, a person can ask what will happen if the glove drops, and GPT-4 will respond that it would hit the seesaw and cause the ball to fly up.

The buzzy launch capped months of hype and anticipation over an AI program, known as a large language model, that early testers had claimed was remarkably advanced in its ability to reason and learn new things. In fact, the public had a sneak preview of the tool: Microsoft announced Tuesday that the Bing AI chatbot, released last month, had been using GPT-4 all along.


highplainsdem

(49,122 posts)
1. Saw tweets earlier criticizing this. Refusal to give info on the dataset is NOT
Tue Mar 14, 2023, 09:24 PM

good for a company still calling itself OpenAI (OpenBullshit might be more accurate).

The new version will still make mistakes and hallucinate. It still can't be trusted.

And OpenAI will charge more to use it, which I didn't see mentioned when I skimmed that article.

People paying $20 a month for ChatGPT Plus will get some very limited use of GPT-4.

They're going to have a more expensive tier for people who want to use it more.

This is all about the money.

TheRealNorth

(9,500 posts)
3. You are probably right about the company
Tue Mar 14, 2023, 09:30 PM

But someone will eventually figure it out because of the capacity this tech has to replace people.

highplainsdem

(49,122 posts)
4. Which is something else OpenAI doesn't care about, any more
Tue Mar 14, 2023, 09:36 PM

than it cares about the hallucinations, inaccuracies, copyright violations, etc.

Renew Deal

(81,899 posts)
5. Here is the website
Tue Mar 14, 2023, 10:19 PM
https://openai.com/research/gpt-4

Here is the description of its capabilities:

Capabilities
In a casual conversation, the distinction between GPT-3.5 and GPT-4 can be subtle. The difference comes out when the complexity of the task reaches a sufficient threshold—GPT-4 is more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5.

To understand the difference between the two models, we tested on a variety of benchmarks, including simulating exams that were originally designed for humans. We proceeded by using the most recent publicly-available tests (in the case of the Olympiads and AP free response questions) or by purchasing 2022–2023 editions of practice exams. We did no specific training for these exams. A minority of the problems in the exams were seen by the model during training, but we believe the results to be representative—see our technical report for details.

There are also a lot of interesting benchmarks on that list, including scores on common exams (the bar exam, LSAT, SAT, etc.).

Here are limitations:

Limitations
Despite its capabilities, GPT-4 has similar limitations as earlier GPT models. Most importantly, it still is not fully reliable (it “hallucinates” facts and makes reasoning errors). Great care should be taken when using language model outputs, particularly in high-stakes contexts, with the exact protocol (such as human review, grounding with additional context, or avoiding high-stakes uses altogether) matching the needs of a specific use-case.

While still a real issue, GPT-4 significantly reduces hallucinations relative to previous models (which have themselves been improving with each iteration). GPT-4 scores 40% higher than our latest GPT-3.5 on our internal adversarial factuality evaluations.
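To make that GPT-3.5 vs. GPT-4 comparison concrete, here is a minimal sketch (not from the OpenAI page) of sending the same prompt to both models through the OpenAI Python client as it was documented around launch. The prompt, the placeholder API key, and the assumption that you have paid API access are all illustrative; newer versions of the client library use a different interface.

import openai

openai.api_key = "sk-..."  # placeholder; a real, paid API key is required

prompt = ("Summarize the plot of Hamlet in exactly three sentences, "
          "each from a different character's point of view.")

# Send the identical prompt to both models so the difference in how they
# handle a nuanced instruction is easy to see side by side.
for model in ("gpt-3.5-turbo", "gpt-4"):
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep output as repeatable as possible for comparison
    )
    print("---", model, "---")
    print(response["choices"][0]["message"]["content"])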