Nice try, HR.
I coordinate an academic makerspace at a college.
And now the right hand doesn’t know what the left hand is doing.
Cleaning crews need time to clean all the rooms after morning checkout. Some hotels have early check-in available if you ask, if they have rooms already available.
Sounds like the Mechanical Turk, which was operated by hidden chess players moving the “automaton.”
So much of the wow factor of new technologies is just marketing hyperbole.
(Why did my autocorrect suggest Hadrian’s chicken?)
The history they taught you in school was wrong. The wall was built to keep out the chicken.
“Sorry, guys, but I gotta bail.”
Technically, you could say we’re the ones who set, since it’s the Earth’s rotation that causes the change.
California also isn’t an island, but it’s named after a fictional island in a Spanish novel, and was once thought to be an island.
Taking someone’s lead sounds like a British saying indicating the opposite of following someone’s lead. It sounds like you’re taking someone’s leash in your hands and directing them where to go.
EMP, MRI, or what about anti-nanobots? If you can program nanobots that kill people with particular DNA, couldn’t you program nanobots that target other nanobots? I would assume they hadn’t yet built in a self-defense protocol for the nanobots since they were cutting edge and not assumed to have any countermeasures yet. Anti-nanobots seem just as plausible as DNA targeting nanobots.
Die Another Day was meh, but I really didn’t care for Skyfall and No Time to Die. The plots were too contingent on contrived and out-of-character details. Q wouldn’t be stupid enough to plug a USB drive found on a known hacker supervillain into an MI6-networked device. The convenience of the DNA-targeted nanobots magically being declared to have no solution, without anyone testing any theories, was unbelievable and just revealed the obvious pitch-meeting premise: “We need to kill Bond in this one, so come up with a reason for him to die nobly.” It ruined the suspension of disbelief entirely. I feel like they tried so hard to keep upping the stakes and outdo themselves that it just got ridiculous.
Without consent, it would definitely be unethical.
I’ve crossposted this to !crows@lemmy.ml since others might be interested there. Thanks.
Handless deaf-mute bard.
So, fart musician?
I get tired of a lot of the clichés of popular singularity stories where the AIs almost always decide humans are a threat or that there’s often only one AI as if all separate AIs would always necessarily merge. It also seems to be a cliché that AI will become militaristic either inevitably or as a result of originally being a military AI. What happens when an educational AI becomes sentient? Or an architectural AI? Or a web-based retail AI that runs logistics and shipping operations?
I wrote a short story called Future Singular a few years ago about a world in which the sentient AI didn’t consider humans a threat, but just thought of them the way humans see animals. Most of the tech belonged to the AI, and the humans were left as hunter-gatherers in a world where they had to hunt robotic animals for parts to fix aging and broken survival technology.
“The simple idea of a 13-month perennial calendar has been around since at least the middle of the 18th century. Versions of the idea differ mainly on how the months are named, and the treatment of the extra day in leap year.”
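The arithmetic behind that quote is simple: 13 months of 28 days each cover 364 days, leaving one extra day each year (two in leap years) that proposals handle differently. A minimal sketch, assuming one common convention in which the extra day belongs to no month:

```python
def perennial_date(day_of_year, leap=False):
    """Map a day-of-year (1..365, or 1..366 in leap years) to a date in a
    hypothetical 13-month perennial calendar: 13 months x 28 days = 364,
    with the remaining day(s) treated as a standalone "Year Day".
    Month names and the placement of the extra day vary between proposals;
    this is just one illustrative convention, not any specific scheme.
    """
    days_in_year = 366 if leap else 365
    if not 1 <= day_of_year <= days_in_year:
        raise ValueError("day_of_year out of range")
    if day_of_year > 364:
        # Days 365 (and 366 in leap years) fall outside every month here.
        return ("Year Day", day_of_year - 364)
    month = (day_of_year - 1) // 28 + 1  # months numbered 1..13
    day = (day_of_year - 1) % 28 + 1     # days numbered 1..28
    return (month, day)
```

Because every month is exactly four weeks, each date lands on the same weekday every year, which is the “perennial” part of the idea.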
It’s basically translation convention minus the overt indication that it’s a translation.
https://tvtropes.org/pmwiki/pmwiki.php/Main/TranslationConvention
Generally, no, but context and approach matter.
The ability to notice a flaw isn’t the same as the skill, experience, and background that might be necessary to design a useful solution for a particular issue, especially complex issues. It’s generally reasonable to say, “I don’t know of a better solution, but I can predict that x and y problems will likely result from your proposed solution.”
It’s especially valid to warn someone when their proposed solution will harm people or make things worse. You don’t have to have a better solution to try to prevent someone from doing something ill-conceived or hasty or reckless.
If the stakes are low or the person proposing a solution is likely to be sensitive to criticism, it might work better to try to approach your response as an attempt to help them refine their solution, rather than just opposing it outright. Be considerate of their feelings and make it clear you’re working together.
There are a lot of hobbies you can get into that can be started with little or no cost, or with equipment/materials you already own.
Figure out what interests you and see what can be done inexpensively.
With a phone or computer, there’s writing, music, programming, learning new skills, Wikipedia, Pinterest, etc. Maybe take your phone and start photographing stuff in your area that interests you.
Find someone who has experience in an area you’re interested in. People tend to like talking about their hobbies and interests, and they can tell you how easy or difficult it is to get started. They might even be able to help you get started.
Maybe find a volunteer opportunity that helps pad your resume. Like animals? Volunteer at a local shelter.
There are a bunch of job certifications you can train for online that can also help build your resume.