• 10 Posts
  • 1.83K Comments
Joined 11 months ago
Cake day: November 8th, 2023



  • the comment is going “grrr ai”

    You asked how Mozilla betrayed users, so let’s focus on that. I don’t care about the ways they haven’t betrayed users, in the same way I don’t care about how Seamus built a bridge.

    second, that chatbot thing you’re crying about is only in nightly

    No, it’s in Firefox 130. I know this because I use Firefox.

    it most likely will never make it into a non-nightly release

    LOL

    Also, when i say that a model is open source, i am referring to the binary being downloadable

    This is not what open source means. If that’s the case, Microsoft Windows is open source. Go nuts.

    You clearly saw the word “ai” and decided that mozilla was as bad as google, without looking into it at all.

    People aren’t blindly saying all AI is bad. They were pointing at a specific thing. Do not strawman.


  • Im glad you decided to copy-paste an overly padded ream of text instead of forming your own opinion

    I endorse it. Even if I didn’t, you asked, and so you received an answer.

    P.4 Those models are also ran completely locally.

    Wrong.

    None of the models provided by Mozilla as defaults are run locally. They are all run on their own respective providers’ clouds.

    P.2 All ai models used in firefox currently are fully open source.

    Also wrong.

    Even if we ignore the fact that Mozilla only allows you to connect to third parties that are running something in a black box, there is no such thing as open source AI, as far as I have seen. The models are always closed source black boxes. If you have downloaded a binary and cannot compile it yourself, you are not using something that is open source.

    P.10 The two ai features currently in firefox are alt-text generation for blind people and privacy-respecting page translation

    Which was not being discussed in the linked post. The fact you are trying to veer off topic from “how has Mozilla betrayed you” to “this particular other thing has not betrayed you” is unhelpful.

    Just ask Seamus (the bridge builder) what people remember him for.


  • Lemmy client devs? I have used quite a few Lemmy clients, and none of the ones I have tried (at least, the ones that have the best UIs IMO) support it.

    For what it’s worth, Mastodon clients also seem to have either slick user interfaces or lots of features, but usually more of one comes with less of the other. It probably depends on which the developers want to spend more time on. Since Mastodon (and the wider Fediverse) and Lemmy specifically are a bit different, it seems like Lemmy devs have opted not to handle the idiosyncrasies of something that isn’t Lemmy-specific.




  • Based on Mozilla’s documentation, it looks like CHIPS only applies to “cross site” cookies that are just accessible on different subdomains of the same site. A third party cookie could share data between a.site.example and b.site.example if it asked nicely, but not on site2.example.

    If this isn’t exclusively about subdomains, it’s not apparent to me. But it’s all pretty confusing, and CHIPS appears to be just one minor thing that Google introduced when they were creating Privacy Sandbox back in 2022. (You know, to facilitate the total removal of third-party cookies, something they eventually backtracked on anyway.)
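    For anyone curious what that looks like in practice, here’s a minimal sketch of a third-party server setting a CHIPS cookie. This isn’t from Mozilla’s docs; the embed.site.example endpoint, cookie name, and port are made up for illustration. The `Partitioned` attribute has to be sent alongside `Secure`, and the browser keys the resulting cookie jar to the top-level site doing the embedding, which is why the cookie can follow you from a.site.example to b.site.example but not to site2.example.

    ```typescript
    // Hypothetical third-party endpoint (embed.site.example) setting a CHIPS cookie.
    // Cookie name and value are made up for illustration.
    import { createServer } from "node:http";

    createServer((req, res) => {
      // Per CHIPS, "Partitioned" must be paired with "Secure" (so in real use this
      // would be served over HTTPS). The browser stores the cookie in a jar keyed
      // by the top-level site that embedded this frame: embeds under a.site.example
      // and b.site.example share one jar, while site2.example gets a separate copy.
      res.setHeader(
        "Set-Cookie",
        "embed_session=abc123; Secure; HttpOnly; SameSite=None; Partitioned; Path=/"
      );
      res.end("ok");
    }).listen(8080);
    ```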




  • Found online:

    With the introduction of even more AI services to Firefox, I wanted to express that, to me, it seems like Firefox is missing the mark with the development of these features. This sentiment is echoed by many people in my social bubble of technologists, ethicists, and others whose priorities match what one could consider the values Firefox was built on.

    Those concerns are, in my opinion, very valid. Machine learning models have been shown to be unreliable; just some of the recent examples from AI products made by large corporations include telling users to put glue on pizza and providing false information about just about anything. There are a number of ethical issues yet to be resolved with the usage of AI, from its intense consumption of computing resources that adds to electric grid demand, through privacy and possible copyright violations in the datasets that power the models, to an entire bag of other issues monitored by excellent resources such as the AIAAIC repository.

    AI/machine learning is an amazing field with many promising applications, and yet its recent rise to fame has been characterized by failures and issues in many implementations. Personally, I often don’t even see whether a given application of AI is truly necessary; in many cases a human would gather information faster and more trustworthily than a large machine learning model.

    When we compare the current state of “AI”, does it reflect what Firefox stands for? Does it reflect Mozilla’s principles?

    Let’s compare.

    • Principle 2
      The internet is a global public resource that must remain open and accessible.

    LLMs are known for being black boxes. Depending on our definition of “open and accessible”, LLMs can be a very free resource or a completely inaccessible black box of math.

    • Principle 4
      Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.

    It’s clear that LLMs pose a privacy risk to Internet users, in at least two ways. First, the data they are trained on sometimes contains private information due to a negligent training process, so users of the trained model can potentially access that private information. The second risk is, of course, the usage of third-party services that may use information to infringe on users’ privacy. While Mozilla assures us in its blog that “we are committed to following the principles of user choice, agency, and privacy as we bring AI-powered enhancements to Firefox”, it’s unclear how supporting services such as “ChatGPT, Google Gemini, HuggingChat, and Le Chat Mistral” helps protect Firefox users’ privacy. Giving users choice should not compromise their safety and privacy.

    • Principle 10
      Magnifying the public benefit aspects of the internet is an important goal, worthy of time, attention and commitment.

    In my opinion, the process of designing AI functionality on top of Firefox included no evaluation of how that functionality could benefit the public. There are a number of issues with LLMs, as mentioned above; they can be dangerous and work to the detriment of users. Investing in and supporting technology of this type may lead to terrible consequences with little actual benefit.

    In addition, Mozilla claims:

    • We are committed to an internet that elevates critical thinking, reasoned argument, shared knowledge, and verifiable facts.

    I argue that AI models are the opposite of that. AI output is not verifiable. It works against sharing knowledge by producing seemingly accurate information that turns out to be false.

    I ask Mozilla to reevaluate the impact of AI considering all of these points. I ask on behalf of myself as well as the many users I see on the Fediverse who are greatly worried and frustrated with the AI changes being added on top of Firefox. There is certainly a lot of potential for greatness with AI, but those steps must be taken responsibly.





  • Ente and Immich are both projects like that; each is trying to be a drop-in replacement for Google Photos. Immich requires you to self-host, while Ente makes self-hosting an option without making it look too daunting.

    The pricing is weird: Immich (like other FUTO sponsored projects) has a WinRAR-style license that requires you to pay them for hosting an instance, but only once, and you can technically ignore it. Ente, meanwhile, allows you to use their apps with third-party instances without charging for the privilege.

    I would definitely recommend checking out either. I held out for a long time, because I thought image hosting might not be useful (and because deleting local photos is still a bit of a crapshoot, both backup-wise and functionality-wise) but it turns out to be pretty nifty.


  • LWD@lemm.ee to Firefox@lemmy.ml · Mozilla grants Ente $100k (edited 2 days ago)

    If Mozilla must throw money at AI, this is the way to go… I guess. Ente is trying to build a Google Photos replacement that translates image contents into searchable text while being fully end to end encrypted (read: as private as it gets), after all. Ente also allows you to fully self-host, so you can get these benefits without even trusting their servers.

    Out of the $65 million Mozilla has committed to throwing at for-profit and AI companies (that’s roughly 9.4 Mitchell Baker Salaries), $100,000 is a drop in the bucket, only 1.44% of the size of a Mitchell Baker Salary.

    I remain skeptical about Mozilla’s commitment to “open source AI models” when I haven’t seen a single blob of AI data released that is reproducible or open source. They are black boxes, and black boxes so closed that not even the people that created them could tell you what’s inside of them (unless we count the blood, sweat, and tears of the underpaid workers behind it).

    Full disclosure: I am a paying Ente subscriber





  • Language removed so I can elaborate:

    I don’t believe Google sets aside the money made through Firefox exclusively for Firefox. (If you believe this is the case, good luck demonstrating it, I guess.) Google’s money probably goes into a big pool named “ad revenue”, and that pool is probably filled disproportionately with Google’s own Chrome users.

    Again, Google is doing to Mozilla what Microsoft did for Apple: hurling money at them with the facade of an exchange of something, in order to stave off regulators.