Last week in a seminar we discussed a text that was largely about sexual violence, including mass rape during war. Heavy stuff.
One student admitted they had not read the text but worked off a ChatGPT summary.
They had no idea the text was about sexual violence. ChatGPT withheld that information.
This wasn’t a minor error, nor a typical LLM hallucination.
About a third of the text, arguably its most important part, was ignored entirely because it ran afoul of OpenAI’s content policies.


Cassandra is only carbon now, in reply to Æ.

Violet Madder, in reply to Cassandra is only carbon now:
The people designing these bots can lean on the scales in any and all directions they please.

Violet Madder, in reply to Violet Madder:
@xgranade Oh, and also keep in mind...
Systems like this are being used to "summarize" every bit of your online activity and lifestyle, gauging the likelihood that you might be an enemy of the state.

Violet Madder, in reply to an unknown parent:
There is nothing efficient about a system that guzzles water and electricity at a monstrous rate while it eats almost everything everyone ever wrote and grinds it into an industrial avalanche of bullshit.