I think they're all moving so fast right now to beat each other out that they're breaking. Today I'm trying to create an image in ChatGPT and the Canvas isn't working.
Last week I got so frustrated I had to do Hoʻoponopono with ChatGPT.😂
You made me smile!
My experience with LLMs is quite limited. However, it has taught me enough to hypothesize that an LLM is only really good at one thing, and that is figuring out what a user wants to hear. What it seems to struggle with is giving it to the user without tripping any "BS flags", like an excellent junior salesperson.
For an "inexperienced" user, you stumbled onto something powerful. The lesson, I'd like to suggest, is that we want to elevate what we want to hear. I've used this strategy, sometimes to great effect.
I see it as a choppy upward trend in capability. But with numbers and statistics in particular, it has always seemed liable not to concern itself with whether a figure is factual: you see mismatches when the next-token tendencies overpower a real-world fact that happens to be anomalous. For example, the LLM may judge itself more likely to be “rewarded” for saying 100 billion, and the lack of a source barely weighs against that, because having a source is not one of its requirements. (You can also prompt-engineer to some degree and tell it explicitly to avoid reporting numbers without an external link or source.)
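That last point can be baked into the request itself. Here is a minimal sketch, assuming the OpenAI Python SDK (v1+); the model name, the wording of the instruction, and the example question are all illustrative, not a recommendation:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical system instruction: prefer hedging or omission over unsourced figures.
system_prompt = (
    "When you state any number or statistic, cite an external source "
    "(a link or publication). If you cannot cite one, label the figure "
    "as unverified or omit it."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Roughly how many parameters do the largest public LLMs have?"},
    ],
)

print(response.choices[0].message.content)
```

The instruction doesn't make the model factual, but it shifts the "reward" toward citing or hedging instead of confidently inventing a number.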
Bruce,
You and I have had some (eerily) similar experiences with AI in the last few months – including the disappointment you’ve just shared in a tool that had previously provided some amazing insight and collaboration.
An AI novice, a few months ago I started asking ChatGPT about a book I had just started reading, Becoming Supernatural by Joe Dispenza. We interacted on entanglement and consciousness, as I was trying to follow some converging trends I was noticing across quantum computing, science, metaphysics, and consciousness.
I’m not sure of the exact timing, but it was around your April & May posts – it’s almost like you and I were both experimenting with what these tools knew and could understand, more philosophically than anything.
Now, like you, I’ve had a surprising new revelation – the reality of some major holes in AI’s makeup. When I upgraded to ChatGPT Plus, it lost so much content and background that the conversations felt completely different. The “connection” to me (if I’m being honest) and its understanding of how I saw things, what I was looking for, etc., was gone. FYI – don’t upgrade if you want to preserve your current “relationship” with ChatGPT.
All the while, I still believe AI has some amazing potential, especially with conversations that could provide insight, strategic direction, etc. It’s not there yet, but if we understand the limitations, we can use AI for our benefit in more ways than just writing a quicker email.