The bad news is the AI they’ll pay for will instead estimate your net worth and the highest price you’re likely to pay. They’ll then dynamically change the price of things like groceries to make sure the price they’re charging maximizes their profits on any given day. That’s the AI you’re going to get.
Damn that’s depressing. :(
I heard they do something like this on Uber already.
If capitalism brought me into this world, will it also take me out of it?
In 977 years, a suicide booth will cost 25¢
You are (currently) more profitable alive than dead so… no, now get back to work!
The prices will automatically adjust like an Uber algorithm for supply and demand, so as people buy the item it keeps going up, and when people stop buying it, it decreases slowly.
Those are the “apps” you will get. But there will always be an open source version that does what the community needs and wants.
I hope you’re right. I don’t think you’re right, but I hope you’re right.
I want said AI to be open source and run locally on my computer
I can run a pretty alright text generation model and the stable diffusion models on my 2016 laptop with two GTX1080m cards. You can try with these tools: Oobabooga textgenUi
Automatic1111 image generation
They might not be the most performant applications but they are very easy to use.
You seem to have missed the point a bit
Just read it again and you’re right. But maybe someone else finds it useful.
I do, so thank you :)
deleted by creator
Funny how these comments appeared only today in my instance, I guess there are some federation issues still
new phone who dis?
“I wish I had X”
“Here’s X”
What point was missed here?
The post “I wish X instead of Y”
The comment: “And run it [X] locally”
The next comment: “You can run Y locally”
Also, the person I said this to literally admitted that I was right, and you’re still arguing.
I want mine in an emotive-looking airborne bot like Flubber
A lot of it can if you have a big enough computer.
It’s getting there. In the next few years, as hardware gets better and models get more efficient, we’ll be able to run these systems entirely locally.
I’m already doing it, but I have some higher end hardware.
Could you please share your process for us mortals?
Stable Diffusion SDXL Turbo model running in Automatic1111 for image generation.
Ollama with Ollama-webui for an LLM. I like the Solar:7b model. It’s lightweight, fast, and gives really good results.
I have some beefy hardware that I run it on, but it’s not necessary to have.
Ha. Lame.
Edit: lol. Sign out of Google, nerds. Bring me your hypocrite neckbeard downvotes.
I want some of whatever you have, man.
Reckless disregard for the opinions of the fanatically security and privacy conscious? Or just a good-natured appreciation for pissing people off? :)
Drugs. I want the drugs.
We really need to stop calling things “AI” like it’s an algorithm. There’s image recognition, collective intelligence, neural networks, path finding, and pattern recognition, sure, and they’ve all been called AI, but functionally they have almost nothing to do with each other.
For computer scientists this year has been a sonofabitch to communicate through.
I think you’re fighting a losing battle.
I’m not fighting, I’m just disgusted. As someone’s wise grandma once said, “[BoastfulDaedra], you are not the fuckface whisperer.”
AI = “magic”, or like “synergy” and other buzzwords that will soon become bereft of all meaning as a result of people abusing it.
There are whole countries that refer to the entire internet itself as Facebook; once something takes root it ain’t going anywhere.
Computer vision is AI. If they literally want a robot eye to scan their cluttered pantry and figure out what is there, that’ll require some hefty neural net.
Edit: seeing these downvotes and surprised at the tech illiteracy on lemmy. I thought this was a better informed community. Look for computer vision papers in CVPR, IJCNN, and AAAI and try to tell me that being able to understand the 3D world isn’t AI.
You’re very wrong.
Computer vision is scanning the differentials of an image and determining the statistical likelihood of two three-dimensional objects being the same base mesh from a different angle, then making a boolean decision on it. It requires a database, not a neural net, though sometimes they are used.
A neural net is a tool used to compare an input sequence to previous reinforced sequences and determine a likely ideal output sequence based on its training. It can be applied, carefully, for computer vision. It usually actually isn’t to any significant extent; we were identifying faces from camera footage back in the 90s with no such element in sight. Computer vision is about differential geometry.
I imagine it’s because all of these technologies combine to make a sci-fi-esque computer assistant that talks to you, and most pop culture depictions of AI are just computer assistants that talk to you. The language already existed before the technology, it already took root before we got the chance to call it anything else.
But “AI” is the umbrella term for all of them. What you said is the equivalent of saying:
we really need to stop calling things “vehicles”. There’s cars, trucks, airplanes, submarines, and space shuttles and they’ve all been called vehicles, but functionally they have almost nothing to do with each other
All of the things you’ve mentioned are correctly referred to as AI, and since most people do not understand the nuances of neural networks vs hard-coded algorithms (and anything in between), AI is an acceptable term for something that demonstrates results that come about from a computer “thinking” and making intelligent decisions.
Btw, just about every image recognition system out there is a neural network itself or has a neural network in the processing chain.
Edit: fixed an autocorrect typo
No. No, AI is NOT the umbrella term for all of them.
No computer scientist will ever genuinely call basic algorithmic tasks “AI”. Stop saying things you literally do not know.
We are not talking about what the word means to normies colloquially. We’re talking about what it actually means. The entire point is that it is a separate term from those other things.
Engineers would REALLY appreciate it if marketing morons would stop misapplying terminology just to make something sound cooler… NONE of those things are “AI”. That’s the fucking point. Marketing gimmicks should not get to choose our terms. (as much as they still do)
If I pull up to your house on a bicycle and tell you, “quickly, get in my vehicle so I can drive us to the store,” you SHOULD look at me weirdly: I’m treating a bicycle like it’s a car capable of getting on the freeway with passengers.
What I’ve learned as a huge nerd is that people will take a term and use it as an umbrella term for shit and they’re always incorrect but there’s never any point in correcting the use because that’s the way the collective has decided words work and it’s how they will work.
Now the collective has decided that AI is an umbrella term for executing “more complex tasks” which we cannot understand the technical workings of but need to get done.
Sometimes, but there are many cases where the nerds win. Like with technology. How many times do we hear old people misuse terms because they don’t care about the difference just for some young person to laugh and make fun of their lack of perspective?
I’ve seen it quite a lot, and I have full confidence it will happen here so long as an actual generalized intelligence comes along to show everyone the HUGE difference every nerd talks about.
But it will be called something different, so almost nobody will notice the difference they should now be seeing.
AI could do this. Conventional programming could do it faster and better, even if it was written by AI.
It’s an important concept to grasp
Cameras in your fridge and pantry to keep tabs on what you have, computer vision to take inventory, clustering to figure out which goods can be interchanged with which, language modeling applied to a web crawler to identify the best deals, and then some conventional code to aggregate the results into a shopping list
Unless you’re assuming that you’re gonna be supplied APIs to all the grocery stores which have an incentive to prevent this sort of thing from happening, and also assuming that the end user is willing, able, and reliable enough to scan every barcode of everything they buy
This app basically depends on all the best ai we already have except for image generation
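The “conventional code” at the end really is the easy bit. Here’s a minimal sketch of that aggregation step, assuming the scraper already produced per-store prices (all items, stores, and prices below are made up):

```python
def build_shopping_list(needed, prices):
    """needed: list of item names.
    prices: dict of item -> {store: price}, as produced by the scraper."""
    plan = {}
    for item in needed:
        store_prices = prices.get(item, {})
        if not store_prices:
            continue  # no store carries it; leave it for the user to handle
        store = min(store_prices, key=store_prices.get)
        plan[item] = (store, store_prices[store])
    return plan

# Made-up example data standing in for scraper output.
prices = {
    "milk":  {"StoreA": 3.49, "StoreB": 2.99},
    "eggs":  {"StoreA": 2.19, "StoreB": 2.49},
    "bread": {"StoreB": 1.99},
}
print(build_shopping_list(["milk", "eggs", "bread"], prices))
```

The hard parts are everything upstream of this function, like you said.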
Cameras and computer vision aren’t necessary. Food products already come with UPCs. All you need is a barcode reader to input stuff and to track what you use in meals. Tracking what you use could also be used for meal planning.
Yeah, I did think of the barcode approach, but I didn’t think anyone would be willing to scan every item, which is why I ignored it
However, revisiting this question made me realize that we could probably have the user scan receipts. It would take some doing but you could probably extract all the information from the receipt because it’s in a fairly predictable format, and it’s far less onerous.
OTOH, you still have to scan barcodes every time you cook with something, and you’d probably want some sort of mechanism to track partial consumption and leftovers, though a minimum viable product could work without that
The tough part, then, is scouring the internet for deals. Should be doable though.
Might try to slap something together tonight or tomorrow for that first bit, seems pretty easy, I bet you’ve got open source libraries for handling barcodes, and scanning receipts can probably just be done with existing OCR tech, error correction using minimum edit distance, and a few if statements to figure out which is the quantity and which is the item. That is, if my adhd doesn’t cause me to forget
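Something like this for the parsing bit, assuming a “qty name price” line format (real receipts vary by store, and stdlib difflib is just standing in for proper edit-distance matching here):

```python
import re
from difflib import get_close_matches  # stdlib stand-in for edit-distance matching

# Known product names to error-correct OCR output against (example data).
KNOWN_ITEMS = ["whole milk", "large eggs", "wheat bread"]

def parse_receipt_line(line):
    """Parse a line like '2 WH0LE MILK 5.98' into (qty, item, price).
    Assumes a 'qty name price' layout; real receipts vary by store."""
    m = re.match(r"\s*(\d+)\s+(.+?)\s+(\d+\.\d{2})\s*$", line)
    if not m:
        return None
    qty, raw_name, price = int(m.group(1)), m.group(2).lower(), float(m.group(3))
    # Snap the OCR'd name to the closest known item, if any is close enough.
    matches = get_close_matches(raw_name, KNOWN_ITEMS, n=1, cutoff=0.6)
    return qty, (matches[0] if matches else raw_name), price

print(parse_receipt_line("2 WH0LE MILK 5.98"))
```

The “few if statements” would sit on top of this to skip totals, tax lines, and coupons.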
> OTOH, you still have to scan barcodes every time you cook with something, and you’d probably want some sort of mechanism to track partial consumption and leftovers, though a minimum viable product could work without that
If you can also keep recipes in the system you could skip scanning the barcodes here. You’d just need to input how many servings you prepared and any waste. Even if the “recipe” is just “hot pocket” or something. If the system knows how much is in a package it can deduct what you use from the total and add it to the list when you need more.
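That deduction logic is simple enough to sketch. A toy version, assuming package contents and per-serving amounts are already known (all quantities here are made-up grams):

```python
# Toy pantry tracker: cooking a recipe deducts ingredients from what's
# on hand, given a servings count. All quantities are made-up grams.

pantry = {"pasta": 1000, "tomato sauce": 680}   # grams on hand

recipes = {
    "spaghetti": {"pasta": 125, "tomato sauce": 170},  # grams per serving
}

def cook(recipe, servings, waste=0.0):
    """Deduct a cooked recipe from the pantry; return items to restock."""
    restock = []
    for item, per_serving in recipes[recipe].items():
        used = per_serving * servings * (1 + waste)
        pantry[item] = max(pantry[item] - used, 0)
        if pantry[item] < per_serving:   # can't make even one more serving
            restock.append(item)
    return restock

cook("spaghetti", 2)         # plenty left, nothing to restock
print(cook("spaghetti", 2))  # sauce runs out
```

Even the “hot pocket” case fits: it’s just a one-ingredient recipe.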
Tracking what you use would be a lot easier with AI. Then you wouldn’t have to keep a barcode scanner in the kitchen. You could just have a camera pointed at your food prep space
is AI good enough to manage that with just a camera? How would it determine how much of a given product you used? Like if you dump a cup of flour in a bowl, how does it know how much that was?
If you have to hold the product in front of the camera to register it anyway, you might as well use a barcode reader, because it’s the same thing at that point, just without the risk of the AI misidentifying something.
I think you can achieve a similar result by having one giant DB so we can average out general consumption, plus a personal/family profile, where we at first manually feed the AI data like what we bought, the expiration date, and when we partly or fully consumed it. Although intensive at first, I think the AI will become increasingly accurate, so you’ll need to input less and less data, since the data will be coming from both you and the rest of the users. The only thing that still needs to be input is “did you replace it?”
This way we don’t need cameras
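Blending the crowd average with a household’s own history could be as simple as a weighted mean. A toy sketch (the weighting scheme and all numbers are invented):

```python
# Toy version of the shared-DB idea: start from the crowd's average
# repurchase interval, and lean more on the household's own history as
# observations accumulate. Weighting scheme and numbers are invented.

def estimated_days_to_empty(crowd_avg_days, personal_intervals):
    """crowd_avg_days: average days between purchases across all users.
    personal_intervals: this household's observed days between purchases."""
    if not personal_intervals:
        return crowd_avg_days
    personal_avg = sum(personal_intervals) / len(personal_intervals)
    w = min(len(personal_intervals) / 5, 1.0)  # full trust after 5 data points
    return w * personal_avg + (1 - w) * crowd_avg_days

print(estimated_days_to_empty(10, []))               # no data yet: crowd value
print(estimated_days_to_empty(10, [6, 8, 7, 7, 8]))  # personal average wins
```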
I’m stoked that you’ve got it all hammered out, let me know when you’re done.
Next, she’s going to want a Libre AI that does not share her information with third parties or suggest unnecessary changes to make her spend more at sponsored businesses.
Relevant XKCD: https://xkcd.com/1425/
I mean, when that xkcd was made, that was a hard task. Now identifying a bird in a picture can be done in real time on a Raspberry Pi as a weekend project.
The problem in the OP isn’t really a limitation of AI; it’s coming up with an inventory management system that can detect low inventory without being obtrusive in a user’s life. The rest is just scraping local stores’ prices and compiling a list with some annealing algo that gets the best price-to-stops ratio.
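For a handful of local stores you don’t even need annealing; brute-forcing store subsets with a per-stop penalty does the job. A sketch (stores, prices, and the $4/stop penalty are all made up):

```python
from itertools import combinations

# Brute-force the price-to-stops tradeoff: score every subset of stores
# by total (cheapest-available) price plus a per-stop penalty.
# Stores, prices, and the $4/stop penalty are all made up.

PRICES = {
    "StoreA": {"milk": 2.99, "eggs": 2.49, "bread": 2.29},
    "StoreB": {"milk": 3.49, "eggs": 1.99, "bread": 1.79},
}

def best_plan(items, stop_cost=4.0):
    best = None
    for n in range(1, len(PRICES) + 1):
        for stores in combinations(PRICES, n):
            if not all(any(i in PRICES[s] for s in stores) for i in items):
                continue  # this subset of stores can't cover the list
            total = sum(min(PRICES[s][i] for s in stores if i in PRICES[s])
                        for i in items) + stop_cost * n
            if best is None or total < best[0]:
                best = (total, stores)
    return best

print(best_plan(["milk", "eggs", "bread"]))
```

With a real per-stop cost, one slightly pricier store usually beats two trips; set `stop_cost=0` and it happily splits the list.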
I think you focused too much on the details…
AI image manipulation is entirely based in a computer where an image is processed by an algorithm. Grocerybot involves many different systems and crosses the boundary between digital and physical. The intertwined nature of the complexity is what makes it (relatively) difficult to explain.
I want AI to control traffic lights so that I don’t sit stopped through an entire cycle as the only car in a 1 mile radius. Also, if there is just one more car in line, let the light stay green just a couple seconds longer. Imagine the gas and time that could be saved… and frustration.
Doesn’t need AI, and there are countries that already have a system in place with the same result. Unsurprisingly the places with more focus on pedestrian, cyclist, and public transit infrastructure have the most enjoyable driving experience. All the people that don’t want to drive will stop as soon as it is safe and convenient, and all those cars off the road also help with this because the lights will be queued up with fewer cars.
That’s already a thing, though it isn’t AI driven. Many intersections have sensors that detect traffic and can change the lights quickly or let them stay green longer if you’re approaching it. It’s only getting more advanced as time goes on.
As always, such systems need infrastructure investment to make them widespread.
You make it sound like the AI traffic lights will just magically download themselves onto the grid?
How do I make it sound like that? You first need to build traffic light and road infrastructure that can handle advanced traffic flow, along with the processing power to make decisions based on sensor readings and rules.
The software (AI is kinda overkill) exists to handle and optimise traffic flow over an entire city, but your software does not matter if there are insufficient sensors for the software to make decisions, or too few controllable lights to implement decisions (or both).
To be fair, there are already more intelligent traffic light systems that use sensors in the road to see if there is traffic sitting at the lights. These can be combined with sensors further up the road to see if more traffic is coming and extend the periods of green light for certain sides. It may not be perfect, and it may require touching up later after seeing which times could be extended or shortened. It’s not AI, but it works a lot better than the old hard-coded traffic lights. Here in the Netherlands there are only a handful of intersections left that still have the hard-coded traffic lights.
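The core of that sensor logic is pretty simple. A toy model of the green-extension rule (all timings invented; real controllers are far more involved):

```python
# Toy model of an actuated signal: each detected car extends the green,
# up to a maximum, instead of running a fixed cycle. Timings invented.

MIN_GREEN, MAX_GREEN, EXTENSION = 5, 30, 3  # seconds

def green_duration(arrival_times):
    """arrival_times: seconds after green starts at which the loop sensor
    detects a car. Returns how long the green ends up lasting."""
    end = MIN_GREEN
    for t in sorted(arrival_times):
        if t <= end:  # car arrived while the light was still green
            end = min(max(end, t + EXTENSION), MAX_GREEN)
        else:
            break     # gap in traffic: let the light change
    return end

print(green_duration([]))             # empty road: minimum green
print(green_duration([4, 6, 8, 10]))  # steady platoon keeps extending
```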
I’m sure Sara is not ready to be served the optimal outcome from a competitive multi-agent simulation. Because when everyone gets that AI, oh boy the local deals on groceries will be fun.
The equilibrated state you’re imagining never happens. This is like talking about when the ocean finally levels out. The ocean’s never going to level out. There will always be waves to surf.
Can’t I get both? “Here are your weekly projections, sir. You will need to get this list of items at these locations, and here is what you would look like as a Latin American dictator. Enjoy.”
I want both.
This is a surprisingly difficult problem because different people are okay with different brand substitutions. Some people may want the cheapest butter regardless of brand, while others may only buy brand name.
For example, my wife is okay with generic Chex from some grocery stores but not others, but only likes brand-name Cheerios. Walmart, Aldi, and Meijer generic cheese is interchangeable, but brand-name and Kroger-brand cheese isn’t acceptable.
Making a software system that can deal with all this is really hard. AI is probably the best bet, but it needs to be able to handle all this complexity to be usable, which is a lot of up-front work.
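The rules themselves aren’t hard to represent; the hard part is getting them out of people’s heads. A sketch of per-item substitution rules like the ones above (the rule format is invented; brands and stores come from the example):

```python
# Sketch of per-item substitution rules: which brands are acceptable,
# and (for generics) at which stores. The rule format is invented.

RULES = {
    "chex":     {"brands": ["Chex", "generic"],
                 "generic_ok_at": ["Aldi", "Meijer"]},
    "cheerios": {"brands": ["Cheerios"]},  # brand name only, anywhere
    "cheese":   {"brands": ["generic"],
                 "generic_ok_at": ["Walmart", "Aldi", "Meijer"]},
}

def acceptable(item, brand, store):
    rule = RULES[item]
    if brand not in rule["brands"]:
        return False
    if brand == "generic" and store not in rule.get("generic_ok_at", []):
        return False
    return True

print(acceptable("cheerios", "generic", "Aldi"))   # brand name only
print(acceptable("cheese", "generic", "Walmart"))  # fine at Walmart
print(acceptable("cheese", "generic", "Kroger"))   # wrong store
```

The “AI” part would be inferring these rules from purchase history instead of making someone type them in.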
AI does exactly this. Like exactly.
Sarah radz is a hacker who has broken into your computer and stolen the iPhone app idea you’ve been working on these past 10 years.
you feel defeated.
Why would AI made by some company bother searching websites or fliers when they could instead show you products from businesses who pay them to show you products?
When the AI made by a company, running on their company’s servers, with no way for you to know what it is basing its decisions on… you’d probably best assume that it is acting for the company’s benefit rather than yours.
I feel like since that’s a very useful product it will not be made available to me.
It’ll be kept within product marketing, and, I dunno how, but it would absolutely be used to see what they can raise prices on.
Idk man, AI art generation is pretty rad, it opens a whole new world of artistic endeavors up for people who never had the access, ability, time, or energy to do so otherwise. Also, por que no los dos?
no, it doesn’t. The prompter is the one “commissioning a painting”. That does not make them an artist.
Creating art is creating art. Back in the day you had to mix your own pigments and put those on a canvas. Now you can buy pigments and canvas, or skip that altogether and go digital. That doesn’t make digital art any less art. It’s the same thing with AI art. It’s another tool for us to use.
New tools come up for professions ALL THE TIME, and it’s up to people in those fields to figure out how to roll with it. This tool is out of the box and it’s not going back in.
What we should be asking now is how do we ETHICALLY use this tool? Well, probably by crediting people. Licensing any copyrighted material that needs it. Don’t use it to make gross shit like deep fakes or direct rip-offs. Which, just because it’s easier to do these things with an AI, doesn’t mean they didn’t happen with Photoshop etc.
There’s also more nuance to the process than “type a prompt get image.” That works, but it’ll get you shit, inconsistent results. You still have to play around with the image, adjusting parameters and sometimes even loading it into a “real” image manipulation software.
To give you an idea of how I personally am using Stable Diffusion, I’ve been using it to generate a few dozen images that look like a character I’m going for. I’ll grab those images, edit them, and then use them to train my own LoRA (a kind of mini, specific model) to use for future generation of that character. It’s actually work, just work I’m better at than manipulating images manually.
sure, sure
Cool, good addition to the conversation. I’m willing to accept that I might be wrong, if someone could provide me a single actual logical point that makes any kind of sense. Instead I get “but it takes no skiiiilllll” or “sure sure.”