Future-proof your skills with Linux, Python, vim & git as I share the most timeless and love-worthy tools in tech through my two great projects that work well together.

The Lex Fridman Interview with Sam Altman on AGI

In this recent Lex Fridman interview, I heard from Sam Altman about AGI. He discussed the challenge of creating a system aligned with people's beliefs, and how such demands are often really about regulating others' speech. The interview also covered the GPT-4 System Card PDF, Steve Jobs' story about why he added a handle to the original iMac, the advancements from GPT-3 to GPT-4, and LLMs. Join me as I explore these topics.

Exploring AGI with Sam Altman: Regulations, GPT-4, Steve Jobs, and LLMs

By Michael Levin

Sunday, March 26, 2023

Listening to this Lex Fridman interview with Sam Altman that just came out. Really important to listen to, beginning to end (and I'm only 1/4 through it). So many things to digest.

These things have the ability to reason, and they do have biases, and they can always design loopholes to get around restrictions and alignment, no matter how strict. It’s so much like raising a global child, and everyone’s going to have different opinions about the morals and norms that child should possess.

Sam Altman is saying that the base model is not so easy to give out. What people mostly want is a system that has been RLHF'd to the worldview they subscribe to. It's really about regulating other people's speech. Hypocrisy, hahaha! Sometimes the model gives a really egregiously dumb answer, and when anyone can just take a screenshot and share it, that one answer can misrepresent the whole system.

Statistics will bear out the nattering nabobs of negativism and the little people with axes hacking away at the heels of giants they're scared and jealous of. Those folks will get swept along with inevitable change, haha! Be one of the people riding the crest of the wave of those changes. Don't struggle and drown. I'll tap all the exploratory thought I've done over the years with SciFi and Fantasy to forge a wonderful path into the future for myself… and probably my kid, and in a distant third the rest of the world who cares to listen.

This is the GPT-4 System Card PDF

Sam Altman doesn't like the feeling of being scolded by a computer. A story that has always stuck with him is why Steve Jobs put a handle on the back of the original iMac: you should never trust a computer you can't throw out a window. There has to be trust. Being scolded by a computer invokes a visceral response. Treating an adult like a child is insulting, even when exploring topics like flat earth.

There are a lot of technical leaps in the base model from GPT-3 to GPT-4: a lot of small wins that multiply together, plus the detail and care that go into hundreds of complicated interacting things. How we collect the data, how we clean the data, how we do the training, how we do the optimizer… so many things.
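That "small wins multiply together" idea is easy to make concrete with some toy arithmetic. The numbers below are purely hypothetical, not anything from the interview, but they show why compounding hundreds of modest improvements is so different from adding them up:

```python
def compound(gains):
    """Multiply a list of relative improvements, e.g. 0.01 for a +1% win."""
    total = 1.0
    for g in gains:
        total *= 1.0 + g  # each small win scales everything before it
    return total

# Hypothetical example: two hundred independent 1% wins.
# Added linearly they'd be +200% (a 3x system); compounded they
# come out to roughly 7.3x (1.01 ** 200).
print(compound([0.01] * 200))
```

This is the same math as compound interest, which is why "hundreds of complicated interacting things" done with care can add up to a generational leap.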

Does size matter in neural networks, in terms of how the system performs? Lex brought up the big purple circle meme claiming GPT-4 has 100 trillion parameters, which was apparently taken out of context. This may be the most complex software object anyone has produced, yet it will seem trivial in a few years, when anyone will be able to do it.

People get caught up in the parameters race the same way they got caught up in the gigahertz race of the 1990s. What matters is getting the best performance. One of the things about AI is that it's pretty truth-seeking. LLMs are a somewhat hated result in the field because people wanted a different way of getting to AGI. But LLMs keep working.

I posted this comment: I think the reason LLMs are such a good way to get to AGI is that they get the human experience the way humans do, through vicarious experience from reading. Surprisingly, I'm being convinced this is an acceptable proxy for not being given a literal android body for a truly analogous human experience. Language contains much of what it is to be human, and learning from it is better than "raw" sensory input like cameras and microphones, which today would only inadequately simulate the human experience, given the current state of robotics.

I am excited about a future where AI is an extension of human will and an amplifier of our abilities. Maybe we never build AGI but make humans super-great. If GPT is going to take your job, maybe you were a shitty programmer.