Hi, I’m Eric and I work at a big chip company making chips and such! I do math for a job, but it’s cold hard stochastic optimization that makes people who know names like Tychonoff and Sylow weep.

My pfp is Hank Azaria in Heat, but you already knew that.

  • 0 Posts
  • 6 Comments
Joined 1 year ago
Cake day: January 22nd, 2024

  • OAI announced their shiny new toy: Deep Research (still waiting on DeeperSeek), a bot built on o3 that can crawl the web and synthesize information into expert-level reports!

    Noam is coming after you @dgerard, but don’t worry he thinks it’s fine. I’m sure his new bot is a reliable replacement for a decentralized repository of all human knowledge freely accessible to all. I’m sure this new system doesn’t fail in any embarrassing wa-

    After posting multiple examples of the model failing to understand which player is on which team (if only this information was on some sort of Internet Encyclopedia, alas), Professional AI bully Colin continues: “I assume that in order to cure all disease, it will be necessary to discover and keep track of previously unknown facts about the world. The discovery of these facts might be a little bit analogous to NBA players getting traded from team to team, or aging into new roles. OpenAI’s “Deep Research” agent thinks that Harrison Barnes (who is no longer on the Sacramento Kings) is the Kings’ best choice to guard LeBron James because he guarded LeBron in the finals ten years ago. It’s not well-equipped to reason about a changing world… But if it can’t even deal with these super well-behaved easy facts when they change over time, you want me to believe that it can keep track of the state of the system of facts which makes up our collective knowledge about how to cure all diseases?”

    xcancel link if anyone wants to see some more glorious failure cases:

    https://xcancel.com/colin_fraser/status/1886506507157585978#m



  • A random walk, in retrospect, looks like directional movement at a speed of √n.

    I ain’t clicking on LW links on my day off (ty for your service though). I am trying to reverse engineer wtf this poster is possibly saying though. My best guess: if we have a simple random walk on Z, with each X_i a random variable taking values −1 or +1 (50% chance of a step left, 50% chance of a step right at every step), then the expected squared distance from the origin after n steps is E[ (Σ_{i=1}^n X_i)^2 ] = E[ Σ_{i=1}^n X_i^2 ] + E[ Σ_{i ≠ j, i,j both in {1,2,…,n}} X_i X_j ]. The expectation of any product E[X_i X_j] with i ≠ j is 0 (again, 50% −1, 50% +1, independently), so the second expectation is 0, and X_i^2 is always 1, hence the whole expectation of the squared distance is equal to n => the expectation of the non-squared distance should be on the order of root(n). (I confess this rather straightforward argument comes from the Wikipedia page on simple random walks, though I swear I must have seen it before decades ago.)
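    The E[S_n^2] = n claim above is easy to check empirically. A quick Monte Carlo sketch (my own code, not from the thread; n and the trial count are arbitrary choices):

    ```python
    import random

    # Monte Carlo sanity check: for a simple +/-1 random walk, the mean
    # squared displacement after n steps should be close to n.
    # n and trials are arbitrary illustrative choices.
    random.seed(0)
    n, trials = 500, 5000
    mean_sq = sum(
        sum(random.choice((-1, 1)) for _ in range(n)) ** 2
        for _ in range(trials)
    ) / trials
    print(mean_sq)  # should land close to n = 500
    ```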

    Now of course, to actually get the expected absolute distance, we need to compute E[ |Σ_{i=1}^n X_i| ], which works out to approximately root(2n/pi) for large n. More exciting discussion here if you are interested! https://mathworld.wolfram.com/RandomWalk1-Dimensional.html

    But back to the original poster’s point… the whole point of this evaluation is that it is directionLESS; we are looking at expected distance from the origin without a preference for left or right. Like I kind of see what they are trying to say? If afterwards I ignored any intermediate steps of the walker and just looked at the final location (but why tho), I could say "the walker started at the origin and is now approx root(2n/pi) distance away in the minus direction, so looking only at the start and end of the walk, the average velocity is d(position)/d(time) = (−root(2n/pi) − 0)/n -> the walker had directional movement in the minus direction at a speed of root(2/(pi*n))"

    wait, so the “speed” would be O(1/root(n)), not root(n)… am I fucking crazy?




  • Neo-Nazi nutcase having a normal one.

    It’s so great that this isn’t falsifiable, in the sense that doomers can keep saying, well, “once the model is epsilon smarter, then you’ll be sorry!”, but back in the real world: the model has been downloaded 10 million times at this point. Somehow, the diamondoid bacteria have not killed us all yet. So yes, we have found out that Yud was wrong. The basilisk is haunting my enemies, and she never misses.

    Bonus sneer: “we are going to find out if Yud was right” Hey fuckhead, he suggested nuking data centers to prevent models better than GPT-4 from spreading. R1 is better than GPT-4, and it doesn’t require a data center to run, so if we had acted on Yud’s geopolitical plans for nuclear holocaust, billions would have been incinerated for absolutely NO REASON. How do you not look at this shit and go, yeah, maybe don’t listen to this bozo? I’ve been wrong before, but god damn, dawg, I’ve never been starvingInRadioactiveCratersWrong.