• MiDaBa@lemmy.ml · ↑80 ↓1 · 11 months ago

    The bad news is the AI they’ll pay for will instead estimate your net worth and the highest price you’re likely to pay. They’ll then dynamically change the price of things like groceries to make sure the price they’re charging maximizes their profits on any given day. That’s the AI you’re going to get.
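As an aside, the pricing scheme described above can be sketched in a few lines: estimate the shopper's probability of buying at each candidate price, then charge whichever price maximizes expected profit. Everything here (the demand curve, the candidate prices, the willingness-to-pay figure) is invented purely for illustration.

```python
import math

def buy_probability(price, willingness_to_pay):
    """Crude logistic demand curve: purchase likelihood falls off as the
    price climbs past the shopper's estimated willingness to pay."""
    return 1.0 / (1.0 + math.exp((price - willingness_to_pay) * 2.0))

def optimal_price(cost, willingness_to_pay, candidates):
    """Charge whichever candidate price maximizes expected profit."""
    return max(candidates,
               key=lambda p: (p - cost) * buy_probability(p, willingness_to_pay))

# Candidate shelf prices from $2.00 to $6.75 in 25-cent steps.
prices = [round(2.0 + 0.25 * i, 2) for i in range(20)]
print(optimal_price(cost=2.0, willingness_to_pay=5.0, candidates=prices))  # -> 4.25
```

The point of the toy model: the profit-maximizing price (4.25 here) sits below the estimated willingness to pay, and it moves whenever the estimate of the shopper does.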

  • BoastfulDaedra@lemmynsfw.com · ↑45 ↓8 · 11 months ago

    We really need to stop calling things “AI” as if it were a single algorithm. There’s image recognition, collective intelligence, neural networks, pathfinding, and pattern recognition, sure, and they’ve all been called AI, but functionally they have almost nothing to do with each other.

    For computer scientists this year has been a sonofabitch to communicate through.

      • BoastfulDaedra@lemmynsfw.com · ↑1 ↓2 · 11 months ago

        I’m not fighting, I’m just disgusted. As someone’s wise grandma once said, “[BoastfulDaedra], you are not the fuckface whisperer.”

    • OpenStars@kbin.social · ↑6 · 11 months ago

      AI = “magic”, or like “synergy” and other buzzwords that will soon become bereft of all meaning as a result of people abusing them.

    • CobblerScholar@lemmy.world · ↑5 · 11 months ago

      There are whole countries that refer to the entire internet itself as Facebook. Once something takes root it ain’t going anywhere.

    • d20bard@ttrpg.network · ↑5 ↓2 · 11 months ago (edited)

      Computer vision is AI. If they literally want a robot eye to scan their cluttered pantry and figure out what is there, that’ll require some hefty neural net.

      Edit: seeing these downvotes, I’m surprised at the tech illiteracy on Lemmy. I thought this was a better-informed community. Look at computer vision papers in CVPR, IJCNN, and AAAI and try to tell me that being able to understand the 3D world isn’t AI.

      • BoastfulDaedra@lemmynsfw.com · ↑2 ↓1 · 11 months ago

        You’re very wrong.

        Computer vision is scanning the differentials of an image and determining the statistical likelihood of two three-dimensional objects being the same base mesh from a different angle, then making a boolean decision on it. It requires a database, not a neural net, though sometimes they are used.

        A neural net is a tool used to compare an input sequence to previous reinforced sequences and determine a likely ideal output sequence based on its training. It can be applied, carefully, to computer vision. It usually actually isn’t to any significant extent; we were identifying faces from camera footage back in the 90s with no such element in sight. Computer vision is about differential geometry.

    • DudeBro@lemm.ee · ↑1 · 11 months ago

      I imagine it’s because all of these technologies combine to make a sci-fi-esque computer assistant that talks to you, and most pop culture depictions of AI are just computer assistants that talk to you. The language already existed before the technology, it already took root before we got the chance to call it anything else.

    • CeeBee@lemmy.world · ↑0 · 11 months ago (edited)

      But “AI” is the umbrella term for all of them. What you said is the equivalent of saying:

      we really need to stop calling things “vehicles”. There’s cars, trucks, airplanes, submarines, and space shuttles and they’ve all been called vehicles, but functionally they have almost nothing to do with each other

      All of the things you’ve mentioned are correctly referred to as AI, and since most people do not understand the nuances of neural networks vs. hard-coded algorithms (and anything in between), AI is an acceptable term for something that demonstrates results that come about from a computer “thinking” and making seemingly intelligent decisions.

      Btw, just about every image recognition system out there is a neural network itself or has a neural network in the processing chain.

      Edit: fixed an autocorrect typo

      • MotoAsh@lemmy.world · ↑0 · 11 months ago (edited)

        No. No, AI is NOT the umbrella term for all of them.

        No computer scientist will ever genuinely call basic algorithmic tasks “AI”. Stop saying things you literally do not know.

        We are not talking about what the word means to normies colloquially. We’re talking about what it actually means. The entire point is that it is a separate term from those other things.

        Engineers would REALLY appreciate it if marketing morons would stop misapplying terminology just to make something sound cooler… NONE of those things are “AI”. That’s the fucking point. Marketing gimmicks should not get to choose our terms. (as much as they still do)

        If I pull up to your house on a bicycle and tell you, “quickly, get in my vehicle so I can drive us to the store,” you SHOULD look at me weirdly: I’m treating a bicycle like it’s a car capable of getting on the freeway with passengers.

        • no banana@lemmy.world · ↑0 · 11 months ago (edited)

          What I’ve learned as a huge nerd is that people will take a term and use it as an umbrella term for shit, and they’re always incorrect, but there’s never any point in correcting the use, because that’s the way the collective has decided words work, and so that’s how they will work.

          Now the collective has decided that AI is an umbrella term for executing “more complex tasks” which we cannot understand the technical workings of but need to get done.

          • MotoAsh@lemmy.world · ↑0 · 11 months ago

            Sometimes, but there are many cases where the nerds win. Like with technology. How many times do we hear old people misuse terms because they don’t care about the difference just for some young person to laugh and make fun of their lack of perspective?

            I’ve seen it quite a lot, and I have full confidence it will happen here, so long as an actual generalized intelligence comes along to show everyone the HUGE difference every nerd talks about.

            • lad@programming.dev · ↑0 · 11 months ago

              But it will be called something different, so almost nobody will notice that they should now see the difference.

  • theneverfox@pawb.social · ↑28 ↓1 · 11 months ago

    AI could do this. Conventional programming could do it faster and better, even if it was written by AI.

    It’s an important concept to grasp.

    • theblueredditrefugee@lemmy.dbzer0.com · ↑11 · 11 months ago

      Cameras in your fridge and pantry to keep tabs on what you have, computer vision to take inventory, clustering to figure out which goods can be interchanged with which, language modeling applied to a web crawler to identify the best deals, and then some conventional code to aggregate the results into a shopping list

      Unless you’re assuming that you’re gonna be supplied APIs to all the grocery stores which have an incentive to prevent this sort of thing from happening, and also assuming that the end user is willing, able, and reliable enough to scan every barcode of everything they buy

      This app basically depends on all the best AI we already have, except for image generation.
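The last step in the pipeline above (the "conventional code to aggregate the results into a shopping list") could be as simple as picking the cheapest stocked source for each low item. A minimal sketch; all store names and prices are made up:

```python
def build_shopping_list(low_items, prices_by_store):
    """prices_by_store: {store: {item: price}} -> [(item, store, price)]"""
    shopping_list = []
    for item in low_items:
        # Collect every store that actually stocks this item.
        offers = [(prices[item], store)
                  for store, prices in prices_by_store.items()
                  if item in prices]
        if offers:  # skip items nobody stocks
            price, store = min(offers)
            shopping_list.append((item, store, price))
    return shopping_list

prices = {"StoreA": {"milk": 2.99, "eggs": 3.49},
          "StoreB": {"milk": 3.19, "eggs": 2.99, "butter": 4.50}}
print(build_shopping_list(["milk", "eggs", "butter"], prices))
# -> [('milk', 'StoreA', 2.99), ('eggs', 'StoreB', 2.99), ('butter', 'StoreB', 4.5)]
```

The hard parts the comment identifies (computer vision for inventory, clustering for substitutes, crawling for deals) all happen upstream; this aggregation really is just plain code.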

      • lightnsfw@reddthat.com · ↑3 · 11 months ago (edited)

        Cameras and computer vision aren’t necessary. Food products already come with UPCs. All you need is a barcode reader to input stuff and to track what you use in meals. Tracking what you use could also feed into meal planning.
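A minimal sketch of this barcode-only approach, assuming a scanner that hands the app a UPC string. The UPC codes and product names below are invented:

```python
catalog = {"036000291452": "flour 5lb", "041196910184": "canned soup"}
inventory = {}

def scan_in(upc, qty=1):
    """Called when groceries are put away (barcode scanner input)."""
    name = catalog.get(upc, f"unknown ({upc})")
    inventory[name] = inventory.get(name, 0) + qty

def use_in_meal(upc, qty=1):
    """Called when an item is used in a meal; never drops below zero."""
    name = catalog.get(upc, f"unknown ({upc})")
    inventory[name] = max(0, inventory.get(name, 0) - qty)

scan_in("036000291452", 2)   # bought two bags of flour
use_in_meal("036000291452")  # used one in a meal
print(inventory)  # -> {'flour 5lb': 1}
```

No vision model anywhere: the whole thing is two dictionaries, which is exactly the comment's point.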

        • theblueredditrefugee@lemmy.dbzer0.com · ↑2 · 11 months ago

          Yeah, I did think of the barcode approach, but I didn’t think anyone would be willing to scan every item, which is why I ignored it

          However, revisiting this question made me realize that we could probably have the user scan receipts. It would take some doing but you could probably extract all the information from the receipt because it’s in a fairly predictable format, and it’s far less onerous.

          OTOH, you still have to scan barcodes every time you cook with something, and you’d probably want some sort of mechanism to track partial consumption and leftovers, though a minimum viable product could work without that

          The tough part, then, is scouring the internet for deals. Should be doable though.

          Might try to slap something together tonight or tomorrow for that first bit; it seems pretty easy. I bet there are open-source libraries for handling barcodes, and scanning receipts can probably just be done with existing OCR tech, error correction using minimum edit distance, and a few if statements to figure out which is the quantity and which is the item. That is, if my ADHD doesn’t cause me to forget.
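The error-correction step mentioned above can be approximated with Python's standard difflib, which ranks candidates by a similarity ratio (a rough stand-in for true minimum edit distance). The product list and the 0.6 cutoff are arbitrary choices for the sketch:

```python
import difflib

KNOWN_ITEMS = ["whole milk", "large eggs", "wheat bread", "cheddar cheese"]

def correct_item(ocr_text, cutoff=0.6):
    """Snap a noisy OCR'd receipt line to the closest known product name,
    or return None when nothing is similar enough."""
    matches = difflib.get_close_matches(ocr_text.lower(), KNOWN_ITEMS,
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(correct_item("wh0le m1lk"))  # -> 'whole milk'
print(correct_item("xyzzy"))       # -> None
```

A real implementation would swap in a proper Levenshtein library and a per-store product catalog, but the shape of the logic is the same.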

          • lightnsfw@reddthat.com · ↑1 · 11 months ago

            OTOH, you still have to scan barcodes every time you cook with something, and you’d probably want some sort of mechanism to track partial consumption and leftovers, though a minimum viable product could work without that

            If you can also keep recipes in the system, you could skip scanning the barcodes here. You’d just need to input how many servings you prepared and any waste, even if the “recipe” is just “hot pocket” or something. If the system knows how much is in a package, it can deduct what you use from the total and add it to the list when you need more.
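This recipe idea might look like the following: per-serving ingredient amounts, with servings plus waste deducted from inventory when a meal is logged. All quantities and units are illustrative only:

```python
# Per-serving ingredient amounts for each recipe (invented numbers).
recipes = {"pancakes": {"flour_g": 60, "eggs": 1, "milk_ml": 80}}
inventory = {"flour_g": 1000, "eggs": 6, "milk_ml": 500}

def log_meal(recipe, servings, waste_servings=0):
    """Deduct (servings + waste) worth of each ingredient; floor at zero."""
    total = servings + waste_servings
    for ingredient, per_serving in recipes[recipe].items():
        inventory[ingredient] = max(0, inventory[ingredient] - per_serving * total)

log_meal("pancakes", servings=2, waste_servings=1)
print(inventory)  # -> {'flour_g': 820, 'eggs': 3, 'milk_ml': 260}
```

The restock trigger then becomes a simple threshold check on each ingredient, no barcode scan per meal required.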

        • intensely_human@lemm.ee · ↑1 · 11 months ago

          Tracking what you use would be a lot easier with AI. Then you wouldn’t have to keep a barcode scanner in the kitchen. You could just have a camera pointed at your food prep space

          • lightnsfw@reddthat.com · ↑1 · 11 months ago

            Is AI good enough to manage that with just a camera? How would it determine how much of a given product you used? Like if you dump a cup of flour in a bowl, how does it know how much that was?

            If you have to hold the product in front of the camera to register it anyway, you might as well use a barcode reader, because it’s the same thing at that point, just without the risk of the AI misidentifying something.

      • Komatik@lemmy.world · ↑1 ↓1 · 11 months ago

        I think you can achieve a similar result by having one giant DB, so we can average out general consumption, and then a personal/family profile, where we initially feed the AI data manually: what we bought, its expiration date, and when we partly or fully consumed it. Although input-intensive at first, I think the AI will become increasingly accurate, so you will need to input less and less data, since the data will be coming from both you and the rest of the users. The only thing that still needs to be input is “did you replace it?”

        This way we don’t need cameras
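One way to sketch this crowd-averaging idea: blend a personal consumption rate with the population average, trusting the personal rate more as observations accumulate. The weighting scheme and every number here are invented for illustration:

```python
def predicted_days_left(qty, personal_rate, n_observations, population_rate):
    """Rates are units consumed per day. With few personal observations,
    lean on the population average; with many, trust the user's own data.
    The prior strength of 5 observations is an arbitrary choice."""
    weight = n_observations / (n_observations + 5.0)
    rate = weight * personal_rate + (1 - weight) * population_rate
    return qty / rate

# New user (1 observation): the estimate is dominated by the crowd average.
print(round(predicted_days_left(12, personal_rate=2.0, n_observations=1,
                                population_rate=1.0), 1))  # -> 10.3
```

This is the standard shrinkage trick: as `n_observations` grows, the prediction converges to the household's own rate, which matches the comment's claim that less and less manual input would be needed over time.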

  • BoastfulDaedra@lemmynsfw.com · ↑12 · 11 months ago

    Next, she’s going to want a Libre AI that does not share her information with third parties or suggest unnecessary changes to make her spend more at sponsored businesses.

    • jivemasta@reddthat.com · ↑5 · 11 months ago

      I mean, when that xkcd was made, that was a hard task. Now identifying a bird in a picture can be done in real time on a Raspberry Pi as a weekend project.

      The problem in the OP isn’t really a limitation of AI; it’s coming up with an inventory management system that can detect low inventory without being obtrusive in a user’s life. The rest is just scraping local stores’ prices and compiling a list with some annealing algorithm that gets the best price-to-stops ratio.
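At small scale the "best price-to-stops ratio" idea doesn't even need annealing: you can exhaustively score every subset of stores, charging a fixed penalty per extra stop (annealing only matters once the number of stores makes brute force infeasible). All prices and the penalty below are invented:

```python
from itertools import combinations

PRICES = {"StoreA": {"milk": 2.99, "eggs": 3.49, "bread": 2.50},
          "StoreB": {"milk": 3.19, "eggs": 2.99, "bread": 2.60}}
STOP_PENALTY = 3.00  # how much a shopper values avoiding an extra stop

def trip_cost(stores, items):
    """Total cost of buying every item across a set of stores, plus a
    flat penalty per stop; infinite if the set can't cover the list."""
    total = len(stores) * STOP_PENALTY
    for item in items:
        offers = [PRICES[s][item] for s in stores if item in PRICES[s]]
        if not offers:
            return float("inf")
        total += min(offers)
    return total

def best_trip(items):
    """Brute-force the cheapest store subset (fine for a handful of stores)."""
    stores = list(PRICES)
    candidates = [set(c) for r in range(1, len(stores) + 1)
                  for c in combinations(stores, r)]
    return min(candidates, key=lambda s: trip_cost(s, items))

print(best_trip(["milk", "eggs", "bread"]))  # -> {'StoreB'}
```

With these numbers, visiting both stores saves pennies per item but costs an extra stop penalty, so a single-store trip wins, which is exactly the tradeoff the comment describes.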

      • IndefiniteBen@leminal.space · ↑4 · 11 months ago

        I think you focused too much on the details…

        AI image manipulation is entirely based in a computer where an image is processed by an algorithm. Grocerybot involves many different systems and crosses the boundary between digital and physical. The intertwined nature of the complexity is what makes it (relatively) difficult to explain.

  • Professorozone@lemmy.world · ↑10 · 11 months ago

    I want AI to control traffic lights so that I don’t sit stopped through an entire cycle as the only car in a 1 mile radius. Also, if there is just one more car in line, let the light stay green just a couple seconds longer. Imagine the gas and time that could be saved… and frustration.

    • Grass@sh.itjust.works · ↑7 · 11 months ago

      Doesn’t need AI, and there are countries that already have a system in place with the same result. Unsurprisingly the places with more focus on pedestrian, cyclist, and public transit infrastructure have the most enjoyable driving experience. All the people that don’t want to drive will stop as soon as it is safe and convenient, and all those cars off the road also help with this because the lights will be queued up with fewer cars.

          • IndefiniteBen@leminal.space · ↑2 · 11 months ago

            Why does it sound like that? Because you first need to build traffic light and road infrastructure that can handle advanced traffic flow, along with the processing power to make decisions based on sensor readings and rules.

            The software (AI is kinda overkill) exists to handle and optimise traffic flow over an entire city, but your software doesn’t matter if there are insufficient sensors for it to make decisions, or too few controllable lights to implement those decisions (or both).

    • NaoPb@eviltoast.org · ↑3 · 11 months ago

      To be fair, there are already more intelligent traffic light systems that use sensors in the road to see if there is traffic sitting at the lights. These can be combined with sensors further up the road to see if more traffic is coming and extend the periods of green light for certain sides. It may not be perfect, and it may require touching up later after seeing which times could be extended or shortened. It’s not AI, but it works a lot better than the old hard-coded traffic lights. Here in the Netherlands there are only a handful of intersections left that still have the hard-coded traffic lights.
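The green-extension behavior described here can be modeled in a few lines: each upstream detection during the current green window extends it, up to a hard cap. The timings are invented, not taken from any real controller:

```python
MIN_GREEN, MAX_GREEN, EXTENSION = 10, 40, 3  # seconds (illustrative)

def green_duration(detection_times):
    """detection_times: seconds (from green onset) at which the upstream
    loop sensor saw an approaching car. Each detection that lands inside
    the current green window extends it by EXTENSION seconds, but the
    phase never runs past MAX_GREEN."""
    end = MIN_GREEN
    for t in sorted(detection_times):
        if t < end:  # car arrives while the light is still green
            end = min(end + EXTENSION, MAX_GREEN)
    return end

print(green_duration([]))             # no traffic -> 10
print(green_duration([2, 5, 9, 12]))  # a steady stream keeps it green -> 22
```

This is essentially how actuated signals behave: an empty approach gets the minimum green, a platoon of cars rolls the gap timer forward, and the max-green cap keeps one direction from starving the cross street.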

  • where_am_i@sh.itjust.works · ↑10 · 11 months ago

    I’m sure Sara is not ready to be served the optimal outcome from a competitive multi-agent simulation. Because when everyone gets that AI, oh boy the local deals on groceries will be fun.

    • intensely_human@lemm.ee · ↑2 · 11 months ago

      The equilibrated state you’re imagining never happens. This is like talking about when the ocean finally levels out. The ocean’s never going to level out. There will always be waves to surf.

  • TIMMAY@lemmy.world · ↑5 · 11 months ago

    Can’t I get both? “Here are your weekly projections, sir. You will need to get this list of items at these locations, and here is what you would look like as a Latin American dictator. Enjoy.”

  • eclectic_electron@sh.itjust.works · ↑5 · 11 months ago

    This is a surprisingly difficult problem, because different people are okay with different brand substitutions. Some people may want the cheapest butter regardless of brand, while others may only buy a brand name.

    For example, my wife is okay with generic Chex from some grocery stores but not others, but only likes brand-name Cheerios. Walmart, Aldi, and Meijer generic cheese is interchangeable, but brand-name and Kroger-brand cheese aren’t acceptable.

    Making a software system that can deal with all this is really hard. AI is probably the best bet, but it needs to handle all this complexity to be usable, which is a lot of up-front work.
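One possible shape for the preference data described above: per-item rules listing acceptable (store, brand) pairs, with None standing for "any store". The rule format is invented for this sketch; the store and brand examples come from the comment:

```python
# None as the store means "acceptable at any store".
RULES = {
    "chex": {("Walmart", "generic"), ("Aldi", "generic"), (None, "Chex")},
    "cheerios": {(None, "Cheerios")},  # brand name only, anywhere
    "cheese": {("Walmart", "generic"), ("Aldi", "generic"),
               ("Meijer", "generic")},  # brand-name and Kroger-brand excluded
}

def acceptable(item, store, brand):
    """Check a candidate purchase against the household's substitution rules."""
    rules = RULES.get(item)
    if rules is None:
        return True  # no preference recorded: anything goes
    return (store, brand) in rules or (None, brand) in rules

print(acceptable("cheerios", "Kroger", "Cheerios"))    # -> True
print(acceptable("cheese", "Kroger", "Kroger brand"))  # -> False
```

The hard part the comment points at isn't this lookup; it's eliciting and maintaining these rules per household, which is where a learned model could earn its keep.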

  • sighofannoyance@lemmy.world · ↑3 · 11 months ago

    Sarah radz is a hacker who has broken into your computer and stolen the iPhone app idea you’ve been working on for the past 10 years.

    You feel defeated.

  • blind3rdeye@lemm.ee · ↑3 · 11 months ago

    Why would AI made by some company bother searching websites or fliers when it could instead show you products from businesses that pay to have their products shown?

    When the AI is made by a company, running on that company’s servers, with no way for you to know what it is basing its decisions on… you’d best assume that it is acting for the company’s benefit rather than yours.

    • WiildFiire@lemmy.world · ↑1 · 11 months ago

      It’ll be kept within product marketing, and, I dunno exactly how, but it would absolutely be used to figure out what they can raise prices on.

  • Ookami38@sh.itjust.works · ↑4 ↓2 · 11 months ago

    Idk man, AI art generation is pretty rad. It opens up a whole new world of artistic endeavors for people who never had the access, ability, time, or energy to do so otherwise. Also, por que no los dos?

    • vrighter@discuss.tchncs.de · ↑1 · 11 months ago

      No, it doesn’t. The prompter is the one “commissioning a painting”. That does not make them an artist.

      • Ookami38@sh.itjust.works · ↑2 · 11 months ago (edited)

        Creating art is creating art. Back in the day you had to mix your own pigments and put those on a canvas. Now you can buy pigments and canvas, or skip that altogether and go digital. That doesn’t make digital art any less art. It’s the same thing with AI art. It’s another tool for us to use.

        New tools come up for professions ALL THE TIME, and it’s up to people in those fields to figure out how to roll with it. This tool is out of the box and it’s not going back in.

        What we should be asking now is how to ETHICALLY use this tool. Well, probably by crediting people, licensing any copyrighted material that needs it, and not using it to make gross shit like deepfakes or direct rip-offs. Which, just because these things are easier to do with an AI doesn’t mean they didn’t happen with Photoshop etc.

        There’s also more nuance to the process than “type a prompt, get image.” That works, but it’ll get you shit, inconsistent results. You still have to play around with the image, adjusting parameters and sometimes even loading it into “real” image-manipulation software.

        To give you an idea of how I personally am using Stable Diffusion, I’ve been using it to generate a few dozen images that look like a character I’m going for. I’ll grab those images, edit them, and then use them to train my own LoRA (a kind of mini, specific model) for future generations of that character. It’s actually work, just work I’m better at than manipulating images manually.

          • Ookami38@sh.itjust.works · ↑1 · 11 months ago

            Cool, good addition to the conversation. I’m willing to accept that I might be wrong, if someone could provide me a single actual logical point that makes any kind of sense. Instead I get “but it takes no skiiiilllll” or “sure sure.”