Has your perspective or personal posture changed toward LLMs and generative AI this year? Please, boost for reach.

  • I see LLMs more positively than I used to. (7%, 251 votes)
  • I see LLMs more negatively than I used to. (38%, 1289 votes)
  • My positive views on LLMs have not changed. (2%, 97 votes)
  • My negative views on LLMs have not changed. (50%, 1688 votes)
3325 voters. Poll end: 2 weeks ago

Ian Campbell 🏴 reshared this.

in reply to Ian Campbell 🏴

atc_scanner uses the open-source Whisper model for transcription, with the output prettified by a model running in Ollama. All of it runs locally and is open source. If I could find something better for transcription that is open source and has learning abilities but isn't an LLM/gen AI, I would use it in a heartbeat.

That being said, atc_scanner basically floors your GPU or CPU to do this task so it definitely isn't efficient.
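The pipeline described above (Whisper transcribes the audio locally, then an Ollama-hosted model tidies the raw text) can be sketched roughly like this. This is a hedged illustration, not atc_scanner's actual code: the function names, the prompt wording, and the model choices (`base` Whisper, `llama3` in Ollama) are all my assumptions.

```python
# Sketch of a local transcribe-then-clean pipeline: Whisper for
# speech-to-text, an Ollama-hosted model to tidy the raw transcript.
# Model names and prompt wording are illustrative assumptions.

def build_cleanup_prompt(raw_transcript: str) -> str:
    """Build a prompt asking the local model to punctuate and format
    a raw transcript without inventing new content."""
    return (
        "Clean up this raw air-traffic-control transcript. "
        "Add punctuation and casing; do not invent content.\n\n"
        + raw_transcript
    )

def transcribe_and_clean(audio_path: str,
                         whisper_size: str = "base",
                         ollama_model: str = "llama3") -> str:
    # Imported here so the prompt helper stays usable without
    # either package installed.
    import whisper   # pip install openai-whisper
    import ollama    # pip install ollama; needs a running Ollama server

    # Whisper runs the full model on your CPU/GPU, which is why
    # the post above notes that this floors the machine.
    raw = whisper.load_model(whisper_size).transcribe(audio_path)["text"]

    reply = ollama.chat(
        model=ollama_model,
        messages=[{"role": "user", "content": build_cleanup_prompt(raw)}],
    )
    return reply["message"]["content"]
```

Everything here stays on the local machine: Whisper loads its weights locally, and Ollama serves the cleanup model from localhost, matching the "all run locally" point above.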

Gen AI and LLMs in general are bad. They are bad for society, they are bad for our brains, they are bad for our privacy, they are bad for our security and they are bad for our environment.

It is fun for little pet projects, maybe. I think it has utility but is wwaayyyyyy overhyped. People both wildly overestimate and wildly underestimate its capabilities.

I think using it in prod isn't a good idea. Good luck securing or planning around something one doesn't fully understand, that doesn't actually understand what it is doing, and that can't say "I don't know". I think we need to regulate AI ASAP and not use this shit in production. I think we need to protect our society and environment from its dangers.

in reply to Ian Campbell 🏴

my position has only been reinforced. It is a weak niche product marketed for use cases it is horrible, or even extremely dangerous, for. I only see companies burning themselves down over a horrible investment misfire by trying to push it everywhere. The allure of the fantasy of "winning capitalism" has only exploded.
I expect a market crash that will dwarf 2007 and the Great Depression combined.
in reply to Ian Campbell 🏴

Generative AI isn't AI, it's plagiarism; another view is that it is outright theft of intellectual property. I never gave permission for my GitHub repos to be scanned, nor my websites. So it's theft.

Generative AI isn't AI, if it was it would be capable of original thinking, not regurgitation.

The levels of misinformation and hype are simply astonishing for a technology that's bad for the planet.

in reply to Ian Campbell 🏴

I started with a very positive outlook on LLMs. It has turned negative not because of the technology, but because I came to understand that the technology exists within a capitalistic framework. It's not about how to use it effectively, how to optimize it, or how to make it better and more ecological. It's only about how to extract ALL of the money, RIGHT NOW.
in reply to Ian Campbell 🏴

I've seen more uses for LLMs that make sense, mainly via my wife's use: organising data into a podcast for revision, and checking the arguments used in educational documents to provide page references and sources for essay writing. However, people are now using ChatGPT as a search engine.
I would suggest that anyone using LLMs be required to view the energy usage of their sessions, and then pay the cost of it.
The theft/training-via-integration part is just going to get worse, though.
in reply to Ian Campbell 🏴

Watched a lot of CaryKH as a teenager. I know how AI models work. I don't see training models as inherently theft or copyright infringement.

What I do see it as is a huge waste of storage space and time. Diffusion models (and the like) are VERY cool in concept, but the tech is being abused to punch down instead of up or sideways. Gone are the days of AI being trained to supplement human skill (e.g. catching cancerous cells). Now people are using it to replace human creativity, interaction, and love.

in reply to Ian Campbell 🏴

Overall... I'm more negative about LLMs. That said, I'm somewhat more open to someone showing that I'm "wrong" to be negative, i.e. somewhat more open to seeing a legitimate use case. I've yet to see one.

I've seen uses that aren't inherently bad, but I can't see that they're justified, like the person who used an LLM where their prompt alone would have served (mostly) as well as the output.

in reply to Ian Campbell 🏴

I voted "I see LLMs more negatively than I used to", not because of that per se, but because people decided it was a good idea to combine a bunch of them as "agents" and give them far more authority and autonomy than just replying with some random text.

So the potential risks are far more severe now.

