
No, the entirety of software development is now just typing into the big linear algebra machine.

I don't think you even need to know anything to get (or do) a job in software development.

You just need to know the right people (like ewiz did and like I did)

You have to know how to compact your Claude Code Context Trash? Well, to do that effectively you have to know linear algebra...

While knowing nothing about the linear algebra of the machine

Do you think the people "vibe coding killer apps with Claude Code" stopped to learn vector math before typing in their prompt?

You literally don't. You are just trolling because you don't understand

I bet the bald black guy who asked that question doesn't even know linear algebra. He didn't know when he asked it and he doesn't know now

that's probably why he was asking the question -- he didn't know linear algebra -- if he did he could have just skipped the talk

But now that he got the answer, he doesn't have to learn linear algebra. My point is proven

He's going to continually fail the Claude Context Compaction

https://arxiv.org/pdf/2405.07987

Sorry dude... turns out the Platonic Forms are just Vector Embeddings...


Ewiz was the most annoying fucker in the world because he would lie about his life and misrepresent himself in order to be an asshole to people, and people would buy it

He spent years on the old forum talking about grad school and law school and a PhD and stuff, and then later confessed he was a non-functional alcoholic dropout that entire time

He failsonned his way into his first software job and became a "software expert," starting arguments with people and gatekeeping jdance from trying to code because he doesn't know linear algebra

Maybe the most annoying guy in the world

I was thinking about applying to do a PhD in Singapore

I like the Plato paper

Why Singapore?

The Twitter For You algorithm


I think the plato paper is probably complete nonsense as a concept

If you remove the intellectual masturbation, you could just say: as models get better at representing reality, each in their own way, those representations converge

It's funny Twitter figured out you're autistic and/or right wing and started suggesting you might want to be in a WMAF relationship

It's more that, across modalities (text vs. vision), training better and better embedding models results in embeddings that are more and more similar to each other, which points to there being some abstract semantic coherence between all things (Platonic forms)
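If you want to see what that convergence claim actually measures, here's a minimal toy sketch (my own version, not the paper's exact code) of a mutual k-NN alignment score between two embedding spaces; `text_emb` and `vision_emb` are placeholder arrays of paired embeddings for the same N items:

```python
# Toy mutual k-NN alignment between two embedding spaces of the same items.
# Assumes text_emb is (N, d_text) and vision_emb is (N, d_vision), row i of
# each describing the same underlying item (e.g. an image and its caption).
import numpy as np

def knn_indices(X: np.ndarray, k: int) -> np.ndarray:
    """Indices of each row's k nearest neighbors by cosine similarity, excluding itself."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    sims = Xn @ Xn.T
    np.fill_diagonal(sims, -np.inf)           # an item is not its own neighbor
    return np.argsort(-sims, axis=1)[:, :k]   # top-k most similar items per row

def mutual_knn_alignment(A: np.ndarray, B: np.ndarray, k: int = 10) -> float:
    """Average overlap between the k-NN sets of the two spaces (1.0 = identical neighborhoods)."""
    na, nb = knn_indices(A, k), knn_indices(B, k)
    overlaps = [len(set(na[i]) & set(nb[i])) / k for i in range(len(A))]
    return float(np.mean(overlaps))

# score = mutual_knn_alignment(text_emb, vision_emb, k=10)
# The paper's claim, roughly: this kind of score keeps rising as both models get better.
```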

AKA what I said. There exists some objective reality that they are trying to represent, and because they are all trying to represent the same reality, they are converging

We tried a few approaches. Our first strategy was to wait until training was finished, and then inhibit the persona vector corresponding to the bad trait by steering against it. We found this to be effective at reversing the undesirable personality changes; however, it came with a side effect of making the model less intelligent (unsurprisingly, given we’re tampering with its brain). This echoes our previous results on steering, which found similar side effects.
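For anyone wondering what "steering against it" looks like mechanically: roughly, you subtract a scaled direction vector from the hidden activations at some layer during the forward pass. Here's a minimal sketch assuming a PyTorch / Hugging Face-style model with hookable decoder layers; `persona_vec`, `layer_idx`, and `alpha` are placeholders, not Anthropic's actual setup:

```python
# Sketch of activation steering: subtract a scaled "persona" direction from one
# layer's output on every forward pass. Assumes a PyTorch module whose output is
# either a hidden-state tensor or a tuple whose first element is the hidden states.
import torch

def add_steering_hook(layer, persona_vec: torch.Tensor, alpha: float = 5.0):
    """Attach a forward hook that steers this layer's activations against persona_vec."""
    direction = persona_vec / persona_vec.norm()

    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden - alpha * direction.to(hidden)   # push away from the trait direction
        return ((steered,) + output[1:]) if isinstance(output, tuple) else steered

    return layer.register_forward_hook(hook)

# handle = add_steering_hook(model.model.layers[layer_idx], persona_vec)  # layer path is a placeholder
# ... generate as usual; per the quote above, cranking alpha up also tends to make the model dumber ...
# handle.remove()
```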

Also applies to humans.

It's more that there's a small, optimal vector space for all concepts and ideas in the world.