Philosopher AI

Why hasn't this guy posted anything on training mechanisms? The big thing in GPT-3 is its performance with zero-, one-, or few-shot prompting, without any fine-tuning.

Really surprised it isn't on GitHub

There are many things that Electrowizard may or may not know. For example, do you think he knows how to tie his shoes? Maybe he does. Maybe he doesn't.

But the more important question is: if he knows everything, then why does he think that? In my opinion, anyone who thinks they know everything likely doesn't.

i know therefore i don't know

Why hasn't this guy posted anything on training mechanisms?

Really surprised it isn't on github.

It is the flame of youth and nothing more. Our little electrowizard will grow into a fine man, shaped by confidence and the willingness to put his thoughts out there.

it rejects nonsense prompts so i think it has to be few shot.

Those two things are unrelated

as far as i know the way you convince it to reject nonsense prompts is to provide it with some nonsense and non-nonsense prompts as examples
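concretely, something like this. this is just my sketch of what a few-shot prompt for nonsense rejection could look like, not his actual prompt or code, and the example prompts are made up:

```python
# Few-shot prompting sketch: prepend labeled examples of nonsense and
# non-nonsense prompts so the model infers the classification in-context.
# Purely illustrative; no idea what Philosopher AI actually feeds in.
EXAMPLES = [
    ("What is the meaning of life?", "valid"),
    ("asdf jkl qwerty zxcv", "nonsense"),
    ("Should humans colonize Mars?", "valid"),
    ("purple monkey dishwasher flimflam", "nonsense"),
]

def build_prompt(query: str) -> str:
    """Build a few-shot prompt ending where the model fills in the judgment."""
    lines = [f"Prompt: {text}\nJudgment: {label}" for text, label in EXAMPLES]
    lines.append(f"Prompt: {query}\nJudgment:")  # model completes this line
    return "\n\n".join(lines)

prompt = build_prompt("Is free will an illusion?")
```

the model's completion after the final "Judgment:" is what tells you whether to reject the prompt.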

And why does that mean it's few shot?

He also talked about hard-coding some things in, like ignoring things about Obama and Trump. The nonsense part could be part of that.

well you can try it out, but my impression is that it is discriminating which topics are meaningful and which are not, which i think minimally requires one example of each. hence, few shot. maybe i am wrong.

also if it was me i would provide it with sensitive topics via example to avoid rather than blacklist words directly but idk how he does it.
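for contrast, the "blacklist words directly" version would just be a hard-coded filter like this. totally a sketch under my own assumptions (the blocked words come from what he said about Obama and Trump; `generate` is a made-up stand-in for the model call):

```python
# Hard-coded keyword blacklist: the non-few-shot way to dodge sensitive
# topics. Illustrative only; not how Philosopher AI necessarily works.
BLOCKED = {"obama", "trump"}  # topics the author said he ignores

CANNED = "This topic is too sensitive for me to comment on."

def generate(query: str) -> str:
    # Stand-in for the actual language-model call (hypothetical).
    return f"(model output for: {query})"

def respond(query: str) -> str:
    words = set(query.lower().split())
    if words & BLOCKED:
        return CANNED  # identical response every time the filter trips
    return generate(query)
```

note this always gives the exact same canned response for a blocked topic, which is the kind of behavior people are pointing at above.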

lmfao

Fucking dodged my question.

It just doesn't have anything to say about the real questions.

Wow, cold.


Once again, that doesn't imply few shot.

What’s few shot dude & not gonna google

It means you set up the task by feeding the model only a few examples of the prompt and subsequent response in its context window; no fine-tuning or weight updates involved.
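Roughly like this: same task, just a different number of in-context examples. The translation pairs are the GPT-3 paper's classic demo; the helper function itself is only my sketch:

```python
# Zero- vs one- vs few-shot is just how many worked examples you put in
# the prompt before the actual query (no training happens either way).
TASK = "Translate English to French."
EXAMPLES = [("sea otter", "loutre de mer"), ("cheese", "fromage")]

def make_prompt(k: int, query: str) -> str:
    """k = number of in-context examples: 0 zero-shot, 1 one-shot, 2+ few-shot."""
    shots = "\n".join(f"{en} => {fr}" for en, fr in EXAMPLES[:k])
    # Drop the empty shots block in the zero-shot case.
    return "\n".join(filter(None, [TASK, shots, f"{query} =>"]))

make_prompt(0, "mountain")  # zero-shot: instruction only
make_prompt(2, "mountain")  # few-shot: instruction plus 2 examples
```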

The fact that it always has the same response for certain types of questions means it's either hard-coded or trained with thousands of examples of bad prompts.

Just exposing hbotz as uninformed and completely ignorant while speaking like an authority once again: working with transformer-based language models is my job lol

He's typing bullshit and doesn't understand GPT-3 but wants to come across as an expert on the topic