I promise this question is asked in good faith. I do not currently see the point of generative AI and I want to understand why there’s hype. There are ethical concerns, but we’ll set ethics aside for this question.

In creative works like writing or art, it feels soulless and poor quality. In programming, at best it’s a shortcut that avoids deeper learning; at worst it spits out garbage code that you spend more time debugging than if you had just written it yourself.

When I see AI ads aimed at individuals, the selling point is convenience. But I would feel robbed of the human experience if I used AI in place of human interaction.

So what’s the point of it all?

  • Schorsch@feddit.org

    It’s kinda handy if you don’t want to take the time to write a boring email to your insurance or whatever.

    • Random Dent@lemmy.ml

      Yeah that’s how I use it, essentially as an office intern. I get it to write cover letters and all the other mindless piddly crap I don’t want to do so I can free up some time to do creative things or read a book or whatever. I think it has some legit utility in that regard.

    • Pechente@feddit.org

      I get the point here, but I think it’s the wrong approach. If you feel the email needs too much business fluff, just write it more casually and get to the point quicker.

  • simple@lemm.ee

    People keep meaning different things when they say “Generative AI”. Do you mean the tech in general, or the corporate AI that companies overhype and try to sell to everyone?

    The tech itself is pretty cool. GenAI is already being used to quickly subtitle and translate almost any form of media. Image AI is really good at upscaling low-res images and making them clearer by filling in the gaps. Chatbots are fallible, but they’re still really good for specific things like generating test data or quickly helping with basic tasks that might otherwise have you searching for five minutes. AI is huge in video games through upscaling tech like DLSS, which boosts performance by running the game at a low resolution and then upscaling it; the results are genuinely great. It’s also used to de-noise raytracing and show cleaner reflections.
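
    As a concrete example of the subtitling point, here’s a minimal sketch of how you might do it locally with the open-source Whisper model (assuming the openai-whisper package and ffmpeg are installed; the filenames are placeholders):

```python
# pip install openai-whisper  (also needs ffmpeg on the PATH)
import whisper


def srt_timestamp(seconds: float) -> str:
    # Format seconds as an SRT timestamp: HH:MM:SS,mmm
    ms = int(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"


model = whisper.load_model("base")        # small model: fast, decent quality
result = model.transcribe("video.mp4")    # placeholder input file

with open("video.srt", "w", encoding="utf-8") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n")
        f.write(f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n")
        f.write(seg["text"].strip() + "\n\n")
```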

    Also, people are missing the point of why AI is being invested in so much. No, I don’t think “AGI” is coming any time soon, but the reason it’s sucking in so much money is what it could become in five years. Saying AI is a waste of effort is like saying 3D video games were a waste of time because they looked bad in 1995. It will improve.

  • howrar@lemmy.ca

    In the context of programming:

    • Good for boilerplate code and variable naming, when what you want is for the model to regurgitate things it has seen before (see the sketch after this list).
    • Short pieces of code where it’s much faster to verify that the code is correct than to write the code yourself.
    • Sometimes, I know how to do something but I’ll wait for Copilot to give me a suggestion, and if it looks like what I had in mind, it gives me extra confidence in the correctness of my solution. If it looks different, then it’s a sign that I might want to rethink it.
    • It sometimes gives me suggestions for APIs that I’m not familiar with, prompting me to look them up and learn something new (assuming they exist).
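
    Here’s a minimal, made-up example of the kind of boilerplate I mean: routine scaffolding that’s tedious to type but trivial to verify by reading (the script’s purpose and flags are invented for illustration):

```python
import argparse


def parse_args() -> argparse.Namespace:
    # Routine CLI scaffolding: easy to check by eye, tedious to write out.
    parser = argparse.ArgumentParser(description="Convert a CSV report to JSON.")
    parser.add_argument("input", help="path to the input CSV file")
    parser.add_argument("-o", "--output", default="report.json",
                        help="where to write the JSON output")
    parser.add_argument("--verbose", action="store_true",
                        help="print progress while converting")
    return parser.parse_args()


if __name__ == "__main__":
    args = parse_args()
    print(args)
```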

    There’s also some very cool applications to game AI that I’ve seen, but this is still in the research realm and much more niche.

  • m-p{3}@lemmy.ca

    I treat it as a newish employee. I don’t let it do important tasks without supervision, but it does help build something rough that I can work on.

  • Gravitwell@lemmy.ml

    I have a friend with numerous mental-health issues who texts me long, barely comprehensible messages to update me on how they’re doing: no paragraphs, stream-of-consciousness style. So I take those walls of text and tell ChatGPT to summarize them for me, and they go from a mess of words into an update I can actually understand and respond to.

    Another use for me is getting quick answers that I’d previously have had to spend way more time finding by reading and filtering through multiple forum and Stack Exchange posts.

    Basically they are good at parsing information and reformatting it in a way that works better for me.

  • saigot@lemmy.ca

    Here are some uses:

    • Skin cancer diagnosis with LLMs has a high success rate at a low cost. This was starting to exist with older AI models, but LLMs do improve the success rate. source
    • VLC recently unveiled a feature that uses AI to generate subtitles. I haven’t used it, but if it delivers, it’s pretty nice.
    • For code generation, I agree it’s more harmful than useful when generating full programs or functions, but I find it quite useful as a predictive text generator; it saves a few keystrokes. Not a game changer, but nice. It’s also pretty useful for generating test data, so long as the data is hard to create but easy (for a human) to validate (a quick sketch of what I mean follows).
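
    To make the test-data point concrete, here’s a quick, hypothetical sketch: the awkward inputs are the sort of thing I’d ask a chatbot to invent and then skim by hand, and the toy slugify function just stands in for whatever is actually under test:

```python
import re

import pytest


def slugify(raw: str) -> str:
    # Toy function under test: lowercase, keep alphanumerics, join with dashes.
    return re.sub(r"[^a-z0-9]+", "-", raw.lower()).strip("-")


# Hypothetical fixtures pasted from a chatbot, then skimmed by a human before
# being committed; inventing them is tedious, checking them is quick.
TRICKY_INPUTS = [
    "  leading and trailing spaces  ",
    "emoji 🚀 in the middle",
    "ümläuts and accents éàç",
    "tabs\tand\nnewlines",
    "a" * 300,  # very long input
]


@pytest.mark.parametrize("raw", TRICKY_INPUTS)
def test_slugify_output_is_clean(raw):
    slug = slugify(raw)
    assert " " not in slug
    assert slug == slug.strip("-")
```
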
  • neon_nova@lemmy.dbzer0.com

    I wrote guidelines for my small business, then uploaded the file to ChatGPT and asked it to review them.

    It made legitimately good suggestions and rewrote the documents in better-sounding English.

    Because of chatgpt I will be introducing more wellness and development programs.

    Additionally, I need med images for my website. So instead of using stock photos, I was able to use Midjourney to generate a whole bunch of images in the same style that fit the theme of my business. It looks much better.

  • Flaqueman@sh.itjust.works

    Money. It’s always about money. But more seriously, I also wonder what the point is, since all my interactions with GenAI have been disappointment after disappointment. But I’ve read devs saying that it’s great at creating drafts.

  • Affidavit@lemm.ee

    I’d say there are probably as many genuine use-cases for AI as there are people in denial that AI has genuine use-cases.

    Top of my head:

    • Text editing. Write something (e.g. e-mails, websites, novels, even code) and have an LLM rewrite it to suit a specific tone and identify errors.
    • Creative art. You claim generative AI art is soulless and poor quality; to me, that indicates a lack of familiarity with what generative AI is capable of. There are tools to create entire songs from scratch, replace the voice of one artist with another, remove unwanted background noise from songs, improve the quality of old songs, separate or add vocal tracks, turn 2D models into 3D models, create images from text, convert simple images into complex ones, fill in missing details in images, upscale and colourise images, and separate foregrounds from backgrounds.
    • Note taking and summarisation (e.g. summarising meeting minutes, a conversation, or events as they occur); a rough sketch follows this list.
    • Video games. Imagine the replay value of a game if every time you play there are different quests, maps, NPCs, unexpected twists, and puzzles. The technology isn’t developed enough for this yet, but I think it’s something we will see in the coming years. Some games (Skyrim and Fallout 4 come to mind) already have a mod that gives each NPC AI-generated dialogue that takes into account the NPC’s personality and history.
    • Real-time assistance for a variety of tasks. Consider a call centre environment as one example: a model can be optimised to evaluate calls for language, empathy, and correctness of information. A model could also be set up with a call centre’s knowledge base, listen to the call, locate information relevant to the caller’s enquiry, and tell the agent where it is (or even suggest what to say, though this is currently prone to hallucination).
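
    As a rough sketch of the summarisation idea above (this uses the OpenAI Python SDK purely as an example; the model name and prompt are placeholders, and any chat-completion API would look much the same):

```python
# pip install openai; expects OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()


def summarise_minutes(notes: str) -> str:
    # Ask the model for a short, structured summary of raw meeting notes.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model would do
        messages=[
            {"role": "system",
             "content": "Summarise these meeting notes as bullet points with action items."},
            {"role": "user", "content": notes},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarise_minutes("Raw meeting notes pasted here..."))
```
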
  • whome@discuss.tchncs.de

    I use it to sort days and create tables, which is really helpful. And here’s the other thing that really helped me, which I would never have tried to figure out on my own:

    I work with the open-source GIS software QGIS. I’m not a cartographer or a programmer but a designer. I had a world map and wanted to create GeoJSON files for each country. So I asked ChatGPT if there was a way to automate this within QGIS, and sure enough it recommended creating a Python script that could run inside the software to do just that. After a few tweaks it worked, which saved me a lot of time and annoyance. Would it be good to know Python? Sure, but I know my brain has a really hard time with code and scripts; it never clicked and likely never will. So I’m very happy with this use case. Creative work could be supported in a drafting phase, but I’m not so sure about that.
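
    The script was something along these lines (a simplified sketch, not the exact code; it assumes a loaded layer called "world_countries" with a "NAME" attribute, and it runs from the QGIS Python console):

```python
# Run from the QGIS Python console. Assumes a loaded layer named
# "world_countries" with a "NAME" attribute; adjust both to your project.
import os

from qgis.core import QgsProject, QgsVectorFileWriter

out_dir = os.path.expanduser("~/country_geojson")
os.makedirs(out_dir, exist_ok=True)

layer = QgsProject.instance().mapLayersByName("world_countries")[0]

for feature in layer.getFeatures():
    name = feature["NAME"]
    layer.selectByIds([feature.id()])  # isolate a single country
    # writeAsVectorFormat is the long-standing (if older) export API.
    QgsVectorFileWriter.writeAsVectorFormat(
        layer,
        os.path.join(out_dir, f"{name}.geojson"),
        "utf-8",
        layer.crs(),
        "GeoJSON",
        onlySelected=True,
    )
```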

  • peppers_ghost@lemmy.ml

    “at worst it spits out garbage code that you spend more time debugging than if you had just written it by yourself.”

    I’ve not experienced this. Debugging for me is always faster than writing something entirely from scratch.

  • Vanth@reddthat.com

    Idea generation.

    E.g., I asked an LLM client for interactive lessons for teaching 4th graders about aerodynamics, especially related to how birds fly. It came back with suggestions that were 98% amazing and only needed slight modification.

    A work colleague asked an LLM client for wedding vow ideas to break through writer’s block. The vows they ended up using were 100% theirs, but the AI spit out something on paper to get them started.

    • Mr_Blott@feddit.uk

      Those are just ideas that were previously “generated” by humans, though, which the LLM learned from.

  • nafzib@feddit.online

    I have had some decent experiences with Copilot and coding in C#. I asked it to help me figure out what was wrong with a LINQ query I was running against an XDocument, and it pointed me in the right direction so I could figure it out. It also occasionally offers super useful autocomplete blocks of code that actually match the pattern of what I’m doing.

    As for art and such, sometimes people just want to see some random bizarre thing realized visually that they don’t have the ability (or time/dedication) to realize themselves, and it’s not something serious they would be commissioning an artist for anyway. I used Bing Image Creator recently to generate a little character portrait for an online D&D game I’m playing in, since I couldn’t find quite what I was looking for with an image search (which is what I usually do for those).

    I’ve seen managers at my job use it to generate fun, relevant imagery for slideshows that otherwise would’ve been random boring stock images (or just text).

    It has actual helpful uses, but every major corporation with a stake in it has added to or swallowed the propaganda really hard, which has caused problems for some people, like the idiot who proudly fired all of his employees because he’d replaced their jobs with automation and AI, then started hunting for actual employees to hire again a couple of months later because everything was terrible and nothing worked right.

    They’re just tools that can potentially aid people, but they’re terrible replacements for actual people. I write automated tests for a living, and companies will always need people for that. If they fired me and the other QAs tomorrow, things would be okay for a short while thanks to the automation we’ve built, but as more and more code changes go into our numerous and labyrinthine systems, more and more bugs would get through without someone to maintain the automation.

  • mindbleach@sh.itjust.works

    What doesn’t exist yet, but is obviously possible, is automatic tweening. Human animators spend a lot of time drawing the drawings between other drawings. If they could just sketch out what’s going on, about once per second, they could probably do a minute in an hour. This bullshit makes that feasible.

    We have the technology to fill in crisp motion at whatever framerate the creator wants. If they’re unhappy with the machine’s guesswork, they can insert another frame somewhere in-between, and the robot will reroute to include that instead.
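
    To make “fill in crisp motion” concrete: even the crude, non-ML version of in-betweening is just warping one frame partway along the estimated motion toward the next. Here’s a rough sketch using OpenCV’s classical optical flow (filenames are placeholders; the learned interpolators do the same job far better):

```python
# pip install opencv-python numpy; frame_a.png / frame_b.png are placeholders.
import cv2
import numpy as np

a = cv2.imread("frame_a.png")
b = cv2.imread("frame_b.png")
ga = cv2.cvtColor(a, cv2.COLOR_BGR2GRAY)
gb = cv2.cvtColor(b, cv2.COLOR_BGR2GRAY)

# Dense optical flow from frame A to frame B (classical, no learning involved).
flow = cv2.calcOpticalFlowFarneback(ga, gb, None, 0.5, 3, 15, 3, 5, 1.2, 0)

h, w = ga.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
# Pull each pixel halfway back along the flow to fake the in-between frame.
map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
mid = cv2.remap(a, map_x, map_y, cv2.INTER_LINEAR)
cv2.imwrite("frame_mid.png", mid)
```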

    We have the technology to let someone ink and color one sketch in a scribbly animatic, and fill that in throughout a whole shot. And then possibly do it automatically for all labeled appearances of the same character throughout the project.

    We have the technology to animate any art style you could demonstrate, as easily as ink-on-celluloid outlines or Phong-shaded CGI.

    Please ignore the idiot money robots who are rendering eye-contact-mouth-open crowd scenes in mundane settings in order to sell you branded commodities.

    • Mr_Blott@feddit.uk

      For the 99% of us who don’t know what tweening is and were scared to Google it in case it was something perverted: it’s short for in-betweening, and it means the intermediate frames of an animation drawn between two key frames.

      • mindbleach@sh.itjust.works

        I had not. There are a variety of demos for guessing what comes between frames, or what fills in between lines… because those are dead easy to train from. This technology will obviously be integrated into the animation process, so anything predictable Just Works, and anything fucky is only as hard as it used to be.